Chapter 8 Evaluation

This document summarizes steps in the evaluation process for an English education curriculum. It discusses: 1) Defining evaluation and determining what questions it will answer, such as course quality or learner satisfaction. 2) Key early steps include identifying stakeholders, deciding what information is needed, and gaining support. 3) Formative evaluations aim to improve the course, while summative evaluations make judgments about the course. Evaluations can have different types of focus, such as cognitive learning or resources. Gaining support from those involved is important for effective evaluation. A variety of tools can be used to gather required information.

CURRICULUM AND MATERIALS

DEVELOPMENT OF ENGLISH EDUCATION

CHAPTER 8
EVALUATION

By Group 1
Apsella 17060010
Birgitha 17060053
Fina 17060073

Lecturer:
Dewi Yana, S.Pd., M.Pd.

ENGLISH EDUCATION STUDY PROGRAM
FACULTY OF TEACHER TRAINING AND EDUCATION
RIAU KEPULAUAN UNIVERSITY
BATAM 2019
1. What is an Evaluation?
Evaluation is the process of determining the merit, worth and value of things. Evaluation
requires looking both at the results of the course and at the planning and running of the course.
In reality, most evaluations are more narrowly focused and may answer questions like
the following:
 Is the teaching on the course of a suitably high standard?
 Is the course preparing the learners properly for their use of English at the end of the course (e.g. to pass the TOEFL test, to study in an English-medium university, to work as a tour guide)?
 Are the learners satisfied with the course?
 Is the course cost effective?
Carrying out an evaluation is like carrying out research, and it is thus critically important
that the evaluator is clear about what question is being asked; that is, why the course is being
evaluated.

2. Steps in an Evaluation
All of the early steps in evaluation aim at deciding why the evaluation is being done and if it is
possible to do it.
1) Find out who the evaluation is for and what kind of information they need.
2) Find out what the results of the evaluation will be used for – to improve the course, or to
decide whether to keep or get rid of it.
3) Decide if the evaluation is necessary or if the needed information is already
available.
4) Find out how much time and money are available to do the evaluation.
5) Decide what kinds of information will be gathered.
 Amount of learning
 Quality of learning
 Quality of teaching
 Quality of curriculum design
 Quality of course administration
 Quality of support services – library, language lab, etc.
 Teacher satisfaction
 Learner satisfaction
 Sponsor satisfaction
 Later success of graduates of the course
 Financial profitability of the course.
6) Try to gain the support of the people involved in the evaluation.
7) Decide how to gather the information and who will be involved in the gathering of
information.
8) Decide how to present the findings.
9) Decide if a follow-up evaluation is planned to check the implementation of the
findings.
3. Purpose and Audience of the Evaluation

An evaluation can involve a considerable investment of time: interviews with each person
involved may take an hour or more, and in some cases five or more hours. An evaluation of a university department
involved bringing in some outside evaluators as part of the evaluation team and paying their
travel and accommodation expenses plus a fee for their services. Because of this investment
of time and money, it is important that an evaluation is well focused and well motivated.
Most of the really important work in an evaluation is done before the data gathering begins.
As in experimental research, you cannot fix by statistics what has been spoilt in design. The
first critical step is to find out who the evaluation is for and what kind of information they
value. There are several reasons why this step is very important.

Firstly, it helps determine the degree of confidentiality of the evaluation. Will the report of the evaluation be available to all involved, or will it only go to the person or group commissioning the evaluation?

Secondly, it helps determine what kind of information should be gathered and what kind of information should not be gathered. The person or group commissioning the evaluation may place great importance on learner satisfaction or on economic issues, or they may consider these irrelevant. In the initial stages of an evaluation, the evaluator needs to talk at length with the person commissioning the evaluation to make clear the goals and type of data to be gathered in the evaluation. An effective way to make this clear is to prepare a brief “mock” report based on false data, with the purpose of showing the person commissioning the evaluation what the report may look like. People interested in commissioning an evaluation of a language course could include the learners, the teachers, the Director of the language centre or the owners of the language centre. Each of these interested parties will have a different view of what a “good” course is and will value different kinds of evidence.

Thirdly, knowing who the evaluation is for is useful in determining whether the data to be gathered will be provided willingly or reluctantly.

4. The Type and Focus of the Evaluation


A distinction is made between formative evaluation and summative evaluation (see
Table 8.1). The basis of the distinction lies in the purpose of evaluation. A formative evaluation
has the purpose of forming or shaping the course to improve it. A summative evaluation has
the purpose of making a summary or judgement on the quality or adequacy of the course so
that it can be compared with other courses, compared with previous summative evaluations,
or judged as being up to a certain criterion or not. These different purposes may affect the
type of data gathered, the way the results are presented, and when the data are gathered, but
essentially most data can be used for either of the two purposes. The formative/summative
distinction is important when informing the people who are the focus of an evaluation about
the purpose of the evaluation, in helping the evaluator decide what kind of information will be
most useful to gather, and in using the information gathered. Table 8.1 compares formative
and summative evaluation, deliberately contrasting the differences to make the distinction clear.
Most evaluations are short term. Some are conducted over a few days. Others may be long
term. Long-term evaluation is most economically done if it is planned as a part of curriculum
design and we will look at this later in this chapter. Some important features of a course cannot
be validly evaluated in a short-term evaluation. These include quality of teaching and learner
achievement.
The last set of distinctions to look at here is whether the evaluation will include cognitive,
affective and resource factors. Cognitive factors involve learning and teaching and the gaining
of knowledge, and the application of that knowledge after the course has ended.
It should be clear from this brief survey that a full-scale evaluation could be an enormous
undertaking. It is therefore important to decide what the evaluation will focus on. Primarily,
this decision should be based not on practical factors but on the kind of information that is
needed to achieve the goal of the evaluation. It is better to have a small amount of relevant
data than a large amount of data that do not address the main concerns of the evaluation.

5. Gaining Support for the Evaluation


An evaluation can be a threatening activity. Finding weaknesses carries with it the idea that
someone or something is to blame for the weaknesses, and this is clearly a threatening
situation. If an evaluation is to proceed effectively, it is important that honest data are available.
So, it is necessary for those involved in the evaluation, particularly those who are sources of
information, to feel that the evaluation is worthwhile and not personally threatening to their
“face” and their job security. This will require meeting with those involved and involving them
in the planning and carrying out of the evaluation.
For this reason, some evaluations involve a respected outsider who makes gaining the
agreement and cooperation of the staff a prerequisite to doing the evaluation.
That is, if the evaluator is unable to gain the cooperation of staff through meeting with them
and explaining the purpose and likely procedure of the evaluation, then the evaluator decides
not to proceed with the evaluation.
Clearly there is potentially a very wide range of stakeholders, all with different kinds of
connections to the programme. Actively involving a wide range of stakeholders can result in
a better informed evaluation as well as a protective sharing of responsibility (working with
others means you don’t have to take all the blame yourself !).
Not all evaluations are potentially threatening; some spring from the desire of staff to
improve their programme. In these cases it may still be necessary to convince other staff of the
value of the evaluation and that there will be a worthwhile return for the time and effort spent
on it.
A properly conducted evaluation can be an empowering and motivating activity. The
assumptions behind an evaluation usually are that:
a) this course is worth improving,
b) the people running and teaching the course are capable of improving it,
c) the people involved in the course have the freedom and flexibility to make changes to
the course, and
d) the improvements will make it a better course for all concerned.
Seen in this way, an evaluation is an activity that deserves support.

6. Gathering the Information


The tools of needs analysis and the tools of evaluation are somewhat similar to each
other, as will be apparent in Tables 8.3 and 8.4. The purposes for which the tools are used
differ, and in an evaluation they are used to gather a much wider range of data. Table 8.2 looks at a
range of focuses for evaluation, suggesting several possible data-gathering tools to choose
from for each focus. These focuses include evaluating teaching and learning, which can involve looking at the
performance of teachers and learners, observing lessons and examining achievement.
Evaluation can also look at the environment of the course, which may involve looking at
administrative procedures, availability and quality of resources, and how outsiders view the
course. Table 8.3 looks at a range of such focuses and possible tools.
Let us now look at some of these data-gathering tools in more detail.

a. Interviews
Interviews are usually conducted on a one-to-one basis, but it is sometimes useful to interview
a committee or to use a staff meeting as a way of gathering data. Interviews can be structured
(the interviewer has a procedure and a set of questions to follow and generally keeps to these)
or unstructured (the course of the interview depends on the wishes of the interviewer and
interviewee and is largely unpredictable). It is valuable for the interviewer to take notes,
particularly where a large number of people will be interviewed. It may then be necessary to
work out some quantification system in order to be able to summarise and combine interview
data on important issues, for example, how many people consider that the course assessment
procedure needs changing.
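A quantification system of the kind mentioned can be as simple as tallying coded interview notes. The sketch below uses invented issue codes and data purely for illustration:

```python
from collections import Counter

# Hypothetical coded interview notes: each interviewee's notes are reduced
# to a list of issue codes raised during that interview.
interview_codes = [
    ["assessment", "materials"],
    ["assessment"],
    ["timetable", "assessment"],
    ["materials"],
]

# Count how many interviewees mentioned each issue (at most once per person,
# hence the set() around each interviewee's codes).
issue_counts = Counter(code for codes in interview_codes for code in set(codes))
print(issue_counts["assessment"])  # 3 of the 4 interviewees raised assessment
```

Such tallies make it possible to report statements like "three of four interviewees consider that the assessment procedure needs changing", which would be hard to extract reliably from raw notes.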

b. Self-report scales
Questionnaires are of many different types and so it is useful to distinguish those that
involve open-ended questions from those that are largely asking respondents to rate an aspect
of the course on a predetermined scale. These can be called “self-report scales”. Here is an
example.
The teaching on the course was:

1 = very poor   2 = poor   3 = adequate   4 = very good   5 = excellent


Self-report scales are very efficient where (1) there is a need to survey a large number
of people, (2) there is a large number of pieces of information to gather, (3) there are very clear
focuses for the evaluation, and (4) there is a need to summarise the data to get a general picture,
to compare with previous evaluations or other courses, or to provide a simple summative
evaluation to see if further data need to be gathered. There are several dangers of self-report
scales:
1. They tend to result in average results if the responses are simply added and averaged. This is
usually avoided by also showing how many people responded with 5 (excellent), how many
responded with 4 (very good) and so on.
2. Self-report scales involve pre-determined questions and types of answers. In reporting the results
of the evaluation, this might be expressed as “60 per cent of the people considered that the
teaching on the course was very good”. This is partly a misrepresentation, as the term “very
good” and the focus “teaching on this course” were provided in the self-report scale.

3. Self-report scales are often used for student evaluation of teaching and they are
administered in class, allowing the learners a rather short period of time to answer. They
are often thus influenced by what has immediately preceded them. This can be partly
avoided by encouraging learners to reflect on the whole course and by allowing them to
discuss in pairs or small groups before responding individually.
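The first danger, that simple averaging hides the spread of responses, can be illustrated with a short calculation (the ratings below are invented for illustration):

```python
from collections import Counter
from statistics import mean

# Hypothetical ratings on the 1-5 scale above: half the respondents rated
# the teaching "very poor" and half rated it "excellent".
ratings = [1, 1, 1, 5, 5, 5]

print(mean(ratings))     # the mean works out to 3, i.e. "adequate"
print(Counter(ratings))  # yet no respondent actually chose 3
```

Reporting the frequency of each response alongside the mean, as the text suggests, makes such polarised results visible.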
Block (1998) provides a very insightful analysis of students’ comments on their responses
to a questionnaire showing that there may be a wide degree of variety in their interpretations of
the questionnaire items as well as in the reasons for assigning a particular rating. Block suggests
that questionnaires should be trialled in an interview form with a few learners to make sure the
questionnaire deals with what the learners consider most important in their particular learning
culture.
c. Observation and checklists
The checklists for the various kinds of analysis and observation are like tests or
dependent measures in an experiment and need to be reliable, valid and practical. Table 8.4 is a
simple checklist for observing the quality of teaching. Each item can be responded to with a
Yes/No or scaled response, and a space could be left for comments on each item.
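A checklist of this kind can be represented very simply. In the sketch below the items are invented stand-ins for those in Table 8.4, which is not reproduced here, and each item takes a Yes/No response:

```python
# Hypothetical observation checklist: the items are illustrative stand-ins
# for Table 8.4, which is not reproduced in this summary.
checklist = [
    "The learners' attention is gained before presenting",
    "The goal of the lesson is made clear",
    "Feedback is given on learner performance",
]

def score(responses: dict[str, bool]) -> float:
    """Proportion of checklist items observed (Yes responses)."""
    return sum(responses[item] for item in checklist) / len(checklist)

observed = {
    "The learners' attention is gained before presenting": True,
    "The goal of the lesson is made clear": True,
    "Feedback is given on learner performance": False,
}
print(round(score(observed), 2))  # 0.67
```

Note that reducing a checklist to a single proportion assumes the items are of equal weight, which is one version of the "summing the parts equals the whole" assumption criticised later in this section.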
A checklist is likely to be reliable if the items on it can be clearly understood by each person
using it, if the people using it are trained to use it, and if it contains several items. The teaching
evaluation checklist in Table 8.4 contains eight items. Too many would make it too complicated
to use. Too few would make a poor item or a poorly used item have too great an effect on the
whole list.
A checklist is likely to be valid if it is based on a well-thought-out, well-researched system of
knowledge that is directly relevant to what is being evaluated. The teaching evaluation checklist
in Table 8.4 is based on the principles of presentation described in Chapter 4. Other evaluation
checklists can be based on the parts of the curriculum design process (see Chapter 11 for
designing a course book evaluation form), or on a well-researched and thought-out model of the
aspect that is being evaluated.
A checklist is likely to be practical if it is not too long, if it is easy to use, and if it is easy to
interpret its results. It is well worthwhile doing a small pilot study with a checklist, using it on
one or two occasions, discussing it with colleagues who are prepared to be constructively
critical, and trying to apply its findings. A small amount of time spent on such simple pilot
testing avoids a large amount of future difficulty.
The disadvantages of checklists are that (1) they may “blind” the observer from seeing other
important features that are not on the list, (2) they tend to become out of date as theory changes
(consider the course book evaluation form designed by Tucker (1968)), and (3) many checklists
are based on the assumption that summing the parts is equal to the whole.
The advantages of checklists are that (1) they ensure that there is a systematic coverage of what
is important, (2) they allow comparison between different courses, lessons, teachers etc., and
(3) they can act as a basis for the improvement of a course through formative evaluation.
7. Formative Evaluation as a Part of a Course
In more traditional courses than those based on a negotiated syllabus, formative
evaluation can still be planned as a part of curriculum design. This can be done in the following
ways:
1) The course can involve negotiation with the learners (see Clarke (1991) for an excellent
discussion of this). This may include negotiation of classroom activities, some of the goals
of the course, and some assessment procedures. This negotiation is a kind of evaluation
with immediate effects on the course.
2) The course can include periodic and systematic observation of classes by teacher peers.
3) The staff hold regular meetings to discuss the progress of the course.
4) Teachers are required to periodically fill in self-evaluation forms that they discuss with a
colleague.
5) Learners periodically fill in course evaluation forms.
6) Some class time is set aside for learner discussion of the course and providing feedback
for teachers.
7) Occasionally an outside evaluator is invited to evaluate aspects of the course.

8. The Results of an Evaluation


An issue in evaluation is whether a comparison model should be used. Should
evaluations be norm-referenced or criterion-referenced? If they are norm-referenced what is the
comparison – previous courses, other existing courses, other courses that could replace the
existing course? A report of an evaluation needs to indicate the quality of the course and it must
be made clear what the standard for the measure of quality is.
Most evaluations involve a written report, or in some cases two written reports – one for the
person or group commissioning the evaluation, and one for wider circulation. The written report
will usually be accompanied by an oral report. This oral report has two purposes, (1) to make
sure the written report is clearly understood, and (2) to say things that could not be put tactfully
in writing.
Evaluation is an essential part of good curriculum design. It ensures that weaknesses in
curriculum design are found and repaired. It allows for the adjustment of a course to a changing
environment and changing needs. If evaluation is well planned, it can help teachers develop
professionally and come to feel that the course is truly their own.
We have now covered all eight parts of the curriculum design model. In the next chapter we
look at the various ways in which the whole process of curriculum design might be carried out.

Summary of the Steps

1 Discover the purpose and type of the evaluation.


2 Assess the time and money needed.
3 Decide what kinds of information to gather.
4 Gain the support of the people involved.
5 Gather the information.
6 Present the findings.
7 Apply what has been learned from the evaluation.
8 Do a follow-up evaluation.
