MODULE 2 Lesson 1-4

This document discusses the qualities of a good measuring instrument. It identifies validity, reliability, and usability as the key qualities. Validity refers to how well a test measures what it is intended to measure and includes content, construct, concurrent, and predictive validity. Reliability means a test gives consistent results. Usability refers to factors like the length of a test and how well students are prepared. A good measuring instrument accurately assesses the intended skills or knowledge in a reliable and usable way.

Lesson 1

Qualities of a Good Measuring Instrument

I. Intended Learning Outcome

Understand and elaborate on the qualities of a good measuring instrument

II. Introduction

When teachers use instructional objectives, it becomes very clear what should be on
the test. Objectives indicate behaviors and skills that students should be able to do
after preparing for class, listening to the lecture, and completing homework and
assignments. The desired behaviors from the objectives translate directly into the
items for the test.

III. Content / Concept

Prior to the construction of a paper and pencil test to be used in the measurement of
cognitive learning, teachers have to answer the question, “How good must a
measuring instrument be?” Here is what we have to look into:

• What should be Tested – Identification of the information, skills, and behaviors
to be tested is the first important decision that a teacher has to make. Knowledge
of what shall be tested will enable a teacher to develop an appropriate test for
the purpose. The basic rule to remember, however, is that testing emphasis
should parallel teaching emphasis.
• How to gather Information about what to test – A teacher has to decide whether
he or she should give a paper and pencil test or simply gather information
through observation. Should he decide to use a paper and pencil test, then he
has to determine what test items to construct. On the other hand, if he decides
to use observation of the students’ performance of the targeted skill, then he
has to develop an appropriate device to use in recording his observations.
Decisions on how to gather information about what to test depend on the
objective or the nature of the behavior to be tested.
• How long the test should be – The answer to the aforementioned question
depends on the following factors: the age and attention span of the students,
and the types of questions to be used.
• How best to prepare students for testing – To prepare students for testing,
Airasian (1994) recommends the following measures:
1. Providing learners with good instruction
2. Reviewing students before testing
3. Familiarizing students with question formats
4. Scheduling the test; and
5. Providing students information about the test.

Qualities of a Good Measuring Instrument


The quality of the measuring instruments used in the evaluation of a pupil’s/student’s
performance is very important, for the decisions made by teachers are based on the
information obtained through these instruments. Thus, whether a test is standardized or
teacher-made, it should possess the qualities of a good measuring instrument. These
qualities are validity, reliability, and usability.

Validity

Validity may be defined as the degree to which a test measures what it purports to
measure, and how adequately. It is considered to be a very important quality when
preparing or selecting an instrument for use. The validity of a measuring instrument such
as a test must always be considered in relation to the purpose for which it is intended and
should always be specific in relation to some definite situation. The test must be valid,
relevant, appropriate, and adequate.

Validity may be classified under four types, namely: content validity, construct validity,
concurrent validity, and predictive validity.

Content Validity

Content validity refers to the content and format of the measuring instrument. It
refers to the appropriateness and adequacy of the content of the course, or of
its objectives. Content validity essentially involves the systematic examination of the
test content to determine whether it covers a representative sample of the behavior
domain to be measured. The domain under consideration should be fully described
in advance, and not after the test has been prepared. These domains of behavior
include the cognitive, affective, and psychomotor.

Content validity is commonly used in evaluating achievement tests. A well-constructed
achievement test should cover not only its subject matter but also the
objectives of instruction. For example, if a teacher has a complete list of his course
objectives stated in the behavior domain, and, with the table of specifications as
the basis of listing, he chooses at random the objectives which he wishes to
include in the test, then the test is assumed to have content validity. This is
premised on the fact that the test is constructed in accordance with correct test
construction practices.

Construct Validity

The construct validity of a test refers to the extent to which the test measures a
theoretical trait (Calmorin, 1994). Kerlinger (1973) emphasizes that “construct
validation is preoccupied with theory, theoretical constructs, and scientific
empirical inquiry involving the testing of hypothesized relations.”

According to Sevilla (1992), construct validity involves the discovery of a positive
correlation between and among the variables/constructs that define the concept.
Usually, there are three steps involved in construct validation:
1. The variable being measured is clearly defined;
2. Hypotheses, based on a theory underlying the variable, are formed
about how individuals who possess a “little” or a “lot” of the variable
will behave in a particular situation; and
3. The hypotheses are tested both logically and empirically.
Factor analysis has been considered the most powerful method of construct
validation. It is a statistical method of reducing a large number of measures to a
fewer number called “factors”. This is done by correlating each of the measures and
inspecting which ones cluster together. In other words, it is discovering the
correlates of a construct. Factor analysis tells us, in effect, which measures measure
the same thing and to what extent they measure what they purport to measure.

Concurrent Validity

Concurrent validity is a criterion-related validity. It refers to the extent of correlation
between the test and a criterion set up as an acceptable measure. A criterion is a
second test or other device by which something can be measured. This criterion is
always available at the time of testing as it serves to assess present status of the
individuals rather than prediction of future outcomes. For example, a teacher who
wants to validate an achievement test in Mathematics which he has constructed
will have to administer the test to his Mathematics students and then compare the
result of the test to another Mathematics test (the criterion) which has already been
proven valid. If the correlation between these two tests is high, then the
achievement test in Mathematics shows evidence of concurrent validity.

Predictive Validity

Predictive validity is also a criterion-related validity. As in concurrent validity, the
validity of judgment in predictive validity is inferred from another test (the criterion).
However, predictive validity differs from concurrent validity in time dimension.

Predictive validity is characterized by a prediction in relation to an outside criterion
and by checking a measuring instrument (or test), in the future, against some
outcome or measure. If the purpose of the test is to determine success in a given
performance in the future, using this test as the basis of prediction, and, if the
students who did very well in the test are found in the future to be the ones who
are successful, then the test has predictive validity. In this case, the criterion
measure against which the test result is validated and obtained is available after a
long period of time. For example, the teacher may want to find out how well a
student may perform (academically) in college on the basis of how well he has
performed on tests he took in his last year in high school.

Reliability

Reliability refers to the degree of consistency and accuracy of a measuring instrument. A
reliable instrument is one that gives consistent results. An instrument is considered to
yield consistent results when it elicits similar results on two or more testing occasions
under similar conditions. For example, a teacher who tested the mathematics
achievement of his pupils/students at two or more different times under similar
circumstances should obtain pretty close to the same results each time. This consistency
of results would give the teacher confidence that the results accurately represented the
achievement of the pupils/students involved. An instrument can also be considered stable
if, based on the results of the first testing, future test performances of the pupils/students
can be predicted.

In psychological or educational measurement, there is no way of knowing the “true score”
obtained by a pupil/student. No single score represents the “true performance” of a
pupil/student since there is always a certain amount of error involved. Hence, a single
score is composed of an obtained score component and an error component. The error
component may be due to a variety of factors, such as differences in motivation, energy,
anxiety, a different testing situation, and so on. The pupils/students themselves may not
always be in the same physical, mental, or emotional condition when taking tests at
different times. It may also be that the testing conditions cannot be controlled to ensure
100% uniformity. The error component attributed to the different factors above is termed
“error of measurement”, and since errors of measurement are always present to some
degree, variability in test scores (in answers or ratings) is always expected when a test
is administered more than once to the same group of pupils/students. Hence, the
reliability of a test is affected by the conditions of the pupils/students, the testing conditions,
and the test itself.
An instrument can be quite reliable, but may not be valid. For example, if the results of a
math achievement test administered more than once to a group of first year high school
students under similar circumstances are found to be consistent, that is, those who scored
high in the first test also scored high in the second test, those who scored low in the first
test also scored low in the second, and so on, then the test is reliable; but if the same
results or scores are used to predict the performance of these students in their science
subject, then the instrument is not valid. Any inferences about performance in science
based on a math achievement test would have no validity.

The degree of precision or accuracy of an instrument can be determined by the amount
of variation or variability it produces in comparison to the total amount of variability
among the variables measured. The higher the variability due to errors of
measurement, the lesser is the degree of reliability of the instrument. The degree of
precision or accuracy of an instrument affects the variability of test scores: less variability
means a higher degree of accuracy, while high variability means a lower degree of accuracy
and a less reliable instrument. Thus, test constructors always aim to produce
instruments that reduce the error component of test scores.

Establishing the reliability of a test is mainly done through statistical estimates.
Reliability estimates provide an idea of how much variation to expect. Such estimates
are called reliability coefficients. A reliability coefficient expresses the relationship
between the test scores of the same individuals on the same instrument at two different times,
or even between two parts of the same instrument. Unlike other uses of the correlation
coefficient, the reliability coefficient ranges from 0.000 to 1.000.

Methods in Estimating Reliability of a Good Measuring Instrument

The four best known ways to obtain a reliability coefficient are:

1. The Test-Retest Method
2. The Parallel-Forms Method
3. The Split-Half Method
4. The Internal-Consistency Method

The Test-Retest Method

The test-retest method involves administering the same test twice to the same
group after a sufficient lapse of time. The reliability coefficient is computed to determine
the relationship between the two sets of scores obtained. Reliability coefficients are affected by
the length of time that elapses between the administrations of the two tests. The longer
the time interval, the lower the reliability coefficient is likely to be, since there is a greater
likelihood that such factors as unlearning and forgetting, among others, may occur and may
result in a low correlation of the test. On the other hand, if the time is short, the examinees
may recall previous responses, which may tend to make the correlation of the test high. In
checking for evidence of test-retest reliability, an appropriate time interval should be
selected. Generally, two weeks or so has been considered most appropriate in educational
measurement, since this time interval may eliminate the “memory effect” as well as the
“maturity effect” on the part of the examinees.

The Spearman rank correlation or Spearman rho may be used to correlate the scores for
this method.

The formula is:

ρ = 1 − (6ΣD²) / (N(N² − 1))

Where:

ρ (rho) = correlation coefficient
ΣD² = the sum of the squares of the rank differences
N = the total number of cases

Example

To test the reliability of an achievement test in Mathematics, 10 senior high school
students were used as a pilot sample and were given the test twice. The following
table shows the students’ scores and ranks in the two administrations and the
computation of Spearman rho (ρ).

Student    Scores        Ranks         Differences
           S1     S2     R1     R2     D      D²
1          91     92     2      1.5    0.5    0.25
2          82     85     7.5    6.5    1.0    1.00
3          87     87     4.5    4      0.5    0.25
4          75     74     10     10     0      0
5          92     92     1      1.5    0.5    0.25
6          89     87     3      4      1.0    1.00
7          85     85     6      6.5    0.5    0.25
8          82     83     7.5    8      0.5    0.25
9          79     78     9      9      0      0
10         87     87     4.5    4      0.5    0.25
TOTAL                                         3.50
Computation:

ρ = 1 − (6ΣD²) / (N(N² − 1))
  = 1 − 6(3.50) / (10(10² − 1))
  = 1 − 21 / 990
  = 0.98 or 98% (a very high correlation)

The rho (ρ) obtained is 0.98 or 98%, a very high correlation value, therefore, the
achievement test in Mathematics is reliable.
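The computation above can be sketched in plain Python. This is an illustrative sketch, not part of the module: the function and variable names are assumptions, and the ranking helper assigns rank 1 to the highest score and averages the ranks of tied scores, exactly as in the worked table.

```python
# Illustrative sketch of the test-retest Spearman rho computation,
# using only plain Python (no external libraries).

def ranks(scores):
    """Rank each score (1 = highest), averaging the ranks of ties."""
    order = sorted(scores, reverse=True)
    # A tied group occupying positions p..p+c-1 gets the average rank
    # (2p + c - 1) / 2 + ... simplified to (2*first_index + count + 1) / 2.
    return [(2 * order.index(s) + order.count(s) + 1) / 2 for s in scores]

def spearman_rho(test1, test2):
    """rho = 1 - 6*sum(D^2) / (N*(N^2 - 1))."""
    n = len(test1)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(test1), ranks(test2)))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

s1 = [91, 82, 87, 75, 92, 89, 85, 82, 79, 87]  # first administration
s2 = [92, 85, 87, 74, 92, 87, 85, 83, 78, 87]  # second administration
print(round(spearman_rho(s1, s2), 2))  # 0.98
```

Running the sketch on the ten pilot scores reproduces ΣD² = 3.50 and ρ ≈ 0.98 from the table.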

The Parallel-Forms Method

The parallel-forms method involves the administration of two different but
equivalent or parallel forms of a test to the same group during the same time period. The
two forms of the test must be constructed in such a way that the content, the type of item,
the weight of each item, the instructions for testing, and other factors affecting the tests
are similar but not identical. In addition, these two forms should have approximately
the same average and variability of scores.

The correlation between the scores of these two forms of test representing the
reliability coefficient of the test is then calculated. A high coefficient indicates that the test
is reliable and that the two forms are measuring the same thing.

The Split-Half Method

The split-half method involves administering the test to a group of examinees only
once, but dividing the test items into two halves using the “odd-even” scheme, that is,
divide the test into odd and even items. The two halves of the test must be similar but not
identical in content, number of items, difficulty, averages or means and variability.

In this method, each examinee is given two scores, one on the even and the other
on the odd items in one test. The correlation between the scores obtained on the two
halves represents the reliability coefficient of a half test. To obtain the reliability coefficient
of the whole test, the Spearman – Brown Formula is used.
Formula:

rwt = 2rht / (1 + rht)

Where:

rwt = reliability coefficient of the whole test
rht = reliability coefficient of a half test

Example:

If the correlation coefficient of a half test obtained is 0.85, determine the reliability
coefficient of the whole test.

rwt = 2rht / (1 + rht)
    = 2(0.85) / (1 + 0.85)
    = 0.92 or 92%

The value of rwt is 0.92 or 92%, a very high relationship indicating that the whole
test is reliable.
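The Spearman-Brown step from a half-test to a whole-test coefficient is a one-line computation; a minimal Python sketch (the function name is an assumption for illustration):

```python
# Minimal sketch of the Spearman-Brown formula for stepping up a
# half-test reliability coefficient to the whole-test estimate.

def spearman_brown(r_half):
    """r_whole = 2 * r_half / (1 + r_half)."""
    return 2 * r_half / (1 + r_half)

print(round(spearman_brown(0.85), 2))  # 0.92, as in the example above
```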

The Internal-Consistency Method

In the internal-consistency method, the Kuder-Richardson formulas (Formula #20
and Formula #21) are the most frequently used to compute the reliability coefficient of
test scores by analysis of variance.

Formula #20:  rtt = [n / (n − 1)] × [(V − ΣPiqi) / V]

Where:
rtt = reliability coefficient
n = total number of test items
V = variance of the test scores
ΣPiqi = sum of the products of the proportions of examinees who passed and failed item i

Formula #21:  rtt = [n / (n − 1)] × [(nV − x̄(n − x̄)) / (nV)]

Where:
rtt = reliability coefficient
n = total number of test items
V = variance of the test scores
x̄ = mean of the test scores

Example:

A 50-item test was administered to a group of students. The test scores were
found to have a mean (x̄) = 45 and a variance (V) = 25. Estimate the reliability
coefficient of the test.

Using Kuder-Richardson Formula #21, the reliability coefficient is estimated to
be:

rtt = [n / (n − 1)] × [(nV − x̄(n − x̄)) / (nV)]
    = [50 / 49] × [(50(25) − 45(50 − 45)) / (50(25))]
    = 1.02 × [(1250 − 225) / 1250]
    = 0.836 or 0.84

Thus, the reliability coefficient estimate for scores on this test is 0.84 or 84%.
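The KR-21 estimate needs only the item count, the mean, and the variance of the scores, so it is easy to sketch in Python; the function name here is illustrative, not from the module.

```python
# Minimal sketch of Kuder-Richardson Formula #21.

def kr21(n_items, mean, variance):
    """rtt = [n/(n-1)] * [(nV - mean*(n - mean)) / (nV)]."""
    nv = n_items * variance
    return (n_items / (n_items - 1)) * ((nv - mean * (n_items - mean)) / nv)

print(round(kr21(50, 45, 25), 2))  # 0.84, as in the worked example
```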

In evaluating reliability coefficients, there are two steps that can be used.

1. Compare the obtained reliability coefficients with the extremes that are possible. A
coefficient of 0.00 indicates no relationship, therefore, no reliability at all, while a
coefficient of 1.00 indicates the maximum possible coefficient that can be obtained.
2. Compare the obtained coefficients with the coefficients usually obtained for tests of
the same type. For example, many classroom tests have been reported to have
reliability coefficients of 0.70 or higher; the higher the coefficient, the better.

Improvement of Reliability of Tests

The reliability of a test (or any measuring instrument) generally can be improved by
increasing the length of the test provided the following conditions are met.

1. The test items to be added must have about the same level of difficulty as the
original ones, and
2. The test items to be added must have the same content, or must be measures of the
same factors or skills as the original ones.

Usability

Usability means the degree to which the measuring instrument can be satisfactorily
used by teachers, supervisors, and school administrators without undue expenditure of
time, money, and effort. In other words, usability means practicability. There are five
factors that determine usability:

1. EASE OF ADMINISTRATION. It refers to the facility of administering the measuring
instrument (test). To do this, directions should be simple, precise, complete, and
clearly stated such that the students can readily understand what the instructions want
them to do.
2. EASE OF SCORING. It refers to the ease of checking the test papers. Scoring
becomes an easy task when answer keys are adequately prepared and scoring
directions are clearly defined.
3. EASE OF INTERPRETATION AND APPLICATION. It refers to the ease by which test
scores are correctly interpreted. The use of graphs such as the normal curve and of
the tables of norms such as age norms and grade norms (for the elementary level)
helps in the facility of correct interpretation of test scores.
4. LOW COST. The materials used in preparing the instrument (test) should not cost
much.
5. PROPER MECHANICAL MAKE-UP. It refers to the physical features of the materials
used such as the clarity of the printed words and illustrations, the appropriateness of
the size, etc.

Another related factor is the testing time.

Lesson 2
Measures of Central Tendency: Mean, Median and Mode

I. Intended Learning Outcome

Describe statistical data in classroom assessment and measurement

Define the characteristics of the measures of central tendency

Apply the mean, median and mode in analysis of test scores

II. Introduction
The measure of central tendency of a given set of observations is the score value
around which the whole set of observations or scores tends to cluster. It is
represented by a single number which summarizes and describes the whole set.

The most commonly used measures of central tendency are the mean, the median,
and the mode.

III. Content / Concept

The Mean

The arithmetic mean may be defined as an arithmetic average. It is the sum of the
individual scores divided by the number of scores. It is a computed average and its
magnitude is influenced by every score value in the set. It is the location measure
most frequently used, but it can be misleading when the distribution contains
extremely large or small values.

The symbol for the sample mean is X̄ (read as “bar X”), and for the population mean
it is the Greek letter mu (µ).

Mean of a sample:

X̄ = ΣXi / n

Where:

Xi = variable / score
Σ = summation / total
n = number of scores
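In Python, the sample mean X̄ = ΣXi / n is available in the standard library; the scores below are illustrative, not from the module.

```python
# Sample mean via the standard library.
import statistics

scores = [91, 82, 87, 75, 92]     # illustrative scores (n = 5)
print(statistics.mean(scores))    # 85.4, i.e. 427 / 5
```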

The Median
The median is the value on the score scale that separates the top half of the
distribution from the bottom half. It is the midpoint of the distribution.

For distributions having an even number of arrayed scores, the median is the
average of the two middlemost values, and for distributions having an odd
number of arrayed scores, the median is the middlemost value. The median is the
most appropriate locator of center since it has resistance to extreme values. It is a
positional average; hence, its value depends on its position relative to the number
of scores in the array (or the number of scores in the distribution). The median is
sometimes denoted by Me or Mdn. We will refer to it here as Mdn.

Ex 1: Find the median of the following group of scores:

25, 23, 25, 24, 28, 27, 30, 28, 26

To find the median, first array the scores in either ascending or descending order
of magnitude.

23, 24, 25, 25, 26, 27, 28, 28, 30 (ascending order)

Then find the median. Since there is an odd number of scores (9), the median is
the middlemost score. It is 26.

Ex 2: Compute the median of the following group of scores:

26, 25, 27, 23, 24, 28, 28, 29

To find the median, array the scores in either ascending or descending order of
magnitude.

23, 24, 25, 26, 27, 28, 28, 29 (ascending order)

Then find the median. Since there is an even number of scores (8), the median is
the average of the two middlemost scores.

Mdn = (26 + 27) / 2 = 26.5
2
When the number of scores in an arrayed arrangement is even and one of the two
middlemost scores occurs two or more times, the median is equal to the average
of the identical scores and the score(s) immediately preceding them.

Ex 3: Compute the median of the following arrayed scores:

23, 24, 25, 26, 27, 27, 28, 29


The median is:

Mdn = (27 + 27 + 26 + 25) / 4 = 26.25
When the number of scores in an arrayed arrangement is odd, and the middlemost
score occurs two or more times, the median is the average of the middlemost score
and the other identical score(s) and its/their counterpart(s) which either precede
or follow the middlemost score.

Ex 4: Compute the median of the following arrayed scores.

23, 24, 25, 26, 26, 27, 28

The median is:

Mdn = (26 + 26 + 25) / 3 = 25.67
Remember:

It must be noted that the median is a point, and not necessarily a score, on the
scale of measurement. It may fall on a score, as when n is odd, or it may fall
between values; hence, the median may or may not be a variate. The median is
not a variate if the two middlemost scores are not equal, but if the two middlemost
scores are equal, the median is a variate. (A variate means the actual value of a
score.)
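For comparison, Python's standard library applies only the conventional rule: the middlemost score when n is odd, and the average of the two middlemost scores when n is even. It does not implement the special tie-handling described above, so its result can differ for arrays with tied middlemost scores.

```python
# Conventional median via the standard library (sorts internally).
import statistics

print(statistics.median([25, 23, 25, 24, 28, 27, 30, 28, 26]))  # 26 (odd n, Ex 1)
print(statistics.median([23, 24, 25, 26, 27, 28, 28, 29]))      # 26.5 (even n, Ex 2)
```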

The Mode

The mode is the measure of central tendency that is easiest to find. It is the score
or the point on the scale of measurement that has a frequency larger than that of
any other score in the distribution. It is the score that occurs most frequently; it
corresponds to the highest point in the frequency polygon and can be found by
mere inspection.

To find the mode, arrange the scores in either ascending or descending order
of magnitude. The score that occurs most frequently is the mode.

Ex: Find the mode of the following scores:

93, 90, 96, 97, 96, 89, 88, 85, 96, 86

To find the mode, rearrange the scores from the highest to the lowest. The mode
is the score that occurs most frequently.
97, 96, 96, 96, 93, 90, 89, 88, 86, 85

The mode is 96.

The Crude Mode

To determine the crude mode from a score frequency distribution, first arrange the
scores in either ascending or descending order of magnitude, writing the score
only once even for score(s) that occur several times.

Then, tally the scores and write the frequency.

Ex: Find the mode of the following distribution of scores:

89, 95, 98, 92, 89, 95, 86, 83, 80, 80, 92, 92, 89, 83, 89, 89

First, arrange the scores in descending order of magnitude, then tally, and write
the frequency of each score.

Score   Tally    f
98      I        1
95      II       2
92      III      3
89      IIII I   5
86      I        1
83      II       2
80      II       2

The mode is 89. It is unimodal.

However, a score frequency distribution may have more than one mode. It is
bimodal when two different scores have the same highest frequency, and
multimodal when more than two different scores have the same highest frequency.

It is also possible that a distribution of scores may not have any mode at all. The
mode is a rough measure of central location.
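These cases can be checked with the standard library: statistics.multimode returns every score sharing the highest frequency, so a one-element result means the distribution is unimodal, two elements bimodal, and so on. The data below are the two examples worked above.

```python
# Mode(s) via the standard library; multimode covers the unimodal,
# bimodal, and multimodal cases.
import statistics

print(statistics.multimode([93, 90, 96, 97, 96, 89, 88, 85, 96, 86]))  # [96]
print(statistics.multimode([89, 95, 98, 92, 89, 95, 86, 83,
                            80, 80, 92, 92, 89, 83, 89, 89]))          # [89]
```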

Comparing the Mean, Median and Mode

1. The mean is the most frequently used measure of location since it reflects every
value and has the characteristics of simplicity, uniqueness, and stability from sample
to sample in a distribution. However, when the distribution contains very large or
very small values, it can be misleading. The median, on the other hand, is the
most appropriate locator of the central measure since it is the midpoint of the
distribution and is not influenced by extreme values, large or small, but by the
number of scores in a given set.
2. Main characteristics:
a. Mean
i. The mean is the arithmetic average of the measurements.
ii. It lies between the largest and smallest measurements of a set of
test scores.
iii. It is influenced by extreme scores.
iv. There is only one mean in a set of test scores.
b. Median
i. The median is the central value; 50% of the test scores lie above it
and 50% fall below it.
ii. It lies between the largest and smallest measurements of a set of
test scores
iii. It is not influenced by extreme scores.
iv. There is only one median for a set of test scores.
c. Mode
i. The mode is the most frequent score in an array.
ii. It is not influenced by extreme values.
iii. There can be more than one mode for a set of scores. If there are
two modes, the set of scores is bimodal; for three or more, it is
multimodal.

3. In a symmetrical distribution (normal curve) where there is only one mode, the
mean, the median and the mode have equal values and coincide at the highest
point of the polygon and they all lie at the axis of symmetry.

4. In an asymmetrical distribution, the position of these measures varies. In a
negatively skewed (skewed to the left) distribution, the median lies to the left of the
mode and the mean to the left of the median, while in a positively skewed (skewed
to the right) distribution, the median lies to the right of the mode and the mean to
the right of the median.
5. The mean is the most important and widely used measure of average. The
median, on the other hand, can be determined even for qualitative data, as long
as they can be ordered, while the mode is most preferable for getting the
most typical average, since it is the score that occurs most frequently in a series.
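Point 1 above can be illustrated with a small sketch: replacing one score with an extreme value pulls the mean toward it but leaves the median resistant. The scores are illustrative.

```python
# Effect of an extreme score on the mean versus the median.
import statistics

scores = [80, 82, 84, 86, 88]
print(statistics.mean(scores), statistics.median(scores))  # 84 84

scores_with_outlier = [80, 82, 84, 86, 20]  # one extreme low score
print(statistics.mean(scores_with_outlier),
      statistics.median(scores_with_outlier))              # 70.4 82
```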

Lesson 3
Measurement of Learning in the Cognitive Domain

I. Intended Learning Outcome

Define the classification of measurement in the Cognitive Domain (Bloom, et al.)

Construct assessment and measurement tools in the Cognitive Domain

II. Introduction

Learning and achievement in the cognitive domain are usually measured in
school through the use of paper and pencil tests (Olivia, 1988). Teachers have
to measure the students’ achievement in all the levels of the cognitive domain.
Thus, they need to be cognizant of the procedures in the development of the
different types of paper and pencil tests. This lesson is focused on acquainting
prospective teachers with methods and techniques of measuring learning in the
cognitive domain.

Lorin Anderson, a former student of Bloom, and David Krathwohl revisited the
cognitive domain in the mid-nineties and made some changes, with perhaps
the three most prominent ones being (Anderson, Krathwohl, Airasian,
Cruikshank, Mayer, Pintrich, Raths, Wittrock, 2000):
• changing the names in the six categories from noun to verb forms
• rearranging them as shown in the chart below
• creating processes and levels of knowledge matrix
III. Content / Concept

The committee identified three domains of educational activities or learning
(Bloom, et al., 1956):

Cognitive: mental skills (knowledge)
Affective: growth in feelings or emotional areas (attitude or self)
Psychomotor: manual or physical skills (skills)

Behavior Measured in the Cognitive Domain

There are three domains of behavior measured and assessed in schools. The
most commonly assessed, however, is the cognitive domain. The cognitive
domain deals with the recall and recognition of knowledge and the development of
intellectual abilities and skills (Bloom, et al., 1956). It is further subdivided into
six hierarchical levels, namely: Remembering, Understanding, Applying,
Analyzing, Evaluating, and Creating.

Preparing for Measurement of Cognitive Learning

Prior to the construction of a paper and pencil test to be used in the measurement
of cognitive learning, teachers have to answer the following questions:

• What Should Be Tested.
Identification of the information, skills, and behaviors to be tested is the first
important decision that a teacher has to make. Knowledge of what shall be
tested will enable a teacher to develop an appropriate test for the purpose. The
basic rule to remember, however, is that testing emphasis should parallel
teaching emphasis.
• How to Gather Information about What to Test.
A teacher has to decide whether he should give a paper and pencil test or
simply gather information through observation. Should he decide to use a paper
and pencil test, then he has to determine what test items to construct. On the
other hand, if he decides to use observation of the students’ performance of the
targeted skill, then he has to develop an appropriate device to use in recording his
observations. Decisions on how to gather information about what to test depend
on the objective or the nature of the behavior to be tested.
• How Long the Test Should Be.
The answer to the aforementioned question depends on the following factors:
the age and attention span of the students, and the types of questions to be used.
• How Best to Prepare Students for Testing.
1) Provide learners with good instruction
2) Review students before testing
3) Familiarize students with question formats
4) Schedule the test
5) Provide students information about the test

Assessing Cognitive Learning

Teachers use two types of tests in assessing student learning in the cognitive
domain: the objective test and the essay test (Reyes, 2000).

An objective test is a kind of test wherein there is only one answer to each item.

An essay test is one wherein the test taker has the freedom to respond to a
question based on how he feels it should be answered.

A. Types of Objective Tests


There are generally two types of objective tests: supply type, and selection type
(Carey, 1995). In the supply type, the student constructs his own answer to each
question. In the selection type, by contrast, the student chooses the right answer
to each item.

a. Supply Types of Objective Test

• Completion Drawing Type – an incomplete drawing is presented which


the student has to complete.
Example:
Instruction: In the following web, draw arrow lines indicating which
organisms are producers and which are consumers.

• Completion Statement Type – an incomplete sentence is presented,


which the student has to complete by filling in the blank.
Example:
The capital city of the Philippines is _________________________.

• Correction Type – a sentence with an underlined word or phrase is


presented, which the student has to replace to make it right.
Example:
Instruction: Change the underlined word/phrase to make each of the
following statements correct. Write your answer on the space before
each number.

_____________1.The theory of evolution was popularized by Gregor


Mendel.

_____________2. Hydrography is the study of oceans and ocean


currents.

• Identification Type – a brief description is presented and the student


has to identify what it is.
Example:
Instruction: To what does each of the following refer? Write your
answer on the blank before each number.

_____________1.A flat representation of all curved surfaces of the


earth.

_____________2. The transmission of parents’ characteristics and traits to their offspring.
• Simple Recall Type – a direct question is presented for the student to
answer using a word or phrase.
Example:

i. What is the product of two negative numbers?


ii. Who is the national hero of the Philippines?

• Short Explanation Type – requires the student to give a brief
answer. It tends to limit both the content and form of the students’ response.
Example:

1. Explain in a complete sentence why the Philippines was not really discovered by Magellan.
2. Write two similarities between a peninsula and an
archipelago.

b. Selection Types of Objective Test

• Arrangement Type – terms or objects are to be arranged by the


students in a specified order.
Example 1:

Instruction: Arrange the following events chronologically by writing the


letters A, B, C, D, E on the spaces provided.

___ Glorious Revolution ___ Russian Revolution


___ American Revolution ___ French Revolution
___ Puritan Revolution

Example 2:

Instruction: Arrange the following planets according to their nearness


to the sun, by using numbers 1,2,3,4,5.

___ Pluto ___ Jupiter ___ Saturn


___ Venus ___ Mars

• Matching Type – a list of numbered items is matched with a list of lettered choices.
Example:
Instruction: Match the country in Column 1 with its capital city in
Column 2. Write the letters only.
Column 1 Column 2
____1. Philippines a. Washington D.C.
____2. Japan b. Jeddah
____3. United States c. Jerusalem
____4. Great Britain d. Manila
____5. Israel e. London
f. Tokyo
g. New York

• Multiple Choice Type – contains a question, problem, or unfinished


sentence followed by several responses.
Example:
The study of values is:
a.) Axiology c.) Epistemology
b.) Logic d.) Metaphysics
• Alternate Response Type – A test wherein there are only two possible
answers to the question. The true-false format is a form of alternative
response type. Variations on the true-false format include yes-no, agree-
disagree, and right-wrong.
Example:
Instruction: Write TRUE, if the statement is true; FALSE, if it is false.

_______1. Lapulapu was the first Asian to repulse European


colonizers in Asia.
_______2. Magellan’s expedition into the Philippines led to the first
circumnavigation of the globe.
_______3. The early Filipinos were uncivilized before the Spanish
conquest of the archipelago.
_______4. The Arabs introduced Islam in Southern Philippines.

• Key List Test – a test wherein the student has to examine paired
concepts based on a specified set of criteria (Oliva, 1998).
Example:
Instruction: Examine the paired items in column 1 and column 2. On
the blank before each number, write:
A = if the item in column 1 is an example of an item in column
2;
B = if the item in column 2 is a synonym of the item in column 1;
C = if item in column 2 is an opposite of the item in column 1;
and
D = if items in columns 1 and 2 are not related in any way.

Column 1 Column 2
____1. Capitalism economic system
____2. Labor intensive capital intensive
____3. Planned economy command economy
____4. Opportunity cost demand and supply
____5. Free goods economic goods

• Interpretive Exercise – a form of multiple choice type of test that can


assess higher cognitive behaviors. According to Airasian (1994) and
Mitchell (1992), interpretive exercises provide students with some
information or data followed by a series of questions on that information.
In responding to the questions in an interpretive exercise, the students
have to analyze, interpret, or apply the material provided, like map,
excerpt of a story, passage of a poem, data matrix, table or cartoon.
Example:
Instruction: Examine the data on the child labor in Europe during the
period immediately after the Industrial Revolution in the continent.
Answer the questions given below by encircling the letter of your
choice.

Child Labor in the Years Right After the Industrial Revolution


in Europe

Year Number of Child Laborers


1750 1800
1760 3000
1770 5000
1780 3400
1790 1200
1800 600
1820 150

i. Child labor was most heavily employed in ______________.


a. 1750 C. 1770
b. 1760 D. 1780
ii. As industrialization became rapid, what year indicated a sudden
increase in the number of child laborers?
a. 1760 C. 1780
b. 1770 D. 1790
iii. Labor unions and government policies were responsible in
addressing the problems of child labor. In what year was this
evident?
a. 1780 C. 1800
b. 1790 D. 1820

B. Essay Test

This type of test presents a problem or question and the student is to


compose a response in paragraph form, using his own words and ideas.
There are two forms of the essay test: brief or restricted, and extended.
▪ Brief or Restricted Essay Test – This form of the essay test
requires a limited amount of writing or requires that a given
problem be solved.
Example:
Why did early Filipino revolts fail? Cite and explain 2 reasons.
▪ Extended Essay Test – This form of essay test requires a
student to present his answer in several paragraphs or pages of
writing. It gives the students more freedom to express ideas and
opinions and to use creative skills to transform remembered material into
original ideas.
Example:
Explain your position on the issue of charter change in the
Philippines.
The essay test is appropriate to use when learning outcomes cannot be
adequately measured by objective test items. Nevertheless, all levels of
cognitive behavior can be measured with the use of the essay test as shown
below.

• Remembering
Explain how Siddhartha Gautama became the Buddha.
• Understanding
What does it mean when a person has crossed the Rubicon?
• Applying
Cite three instances showing the application of the Law of Supply and
Demand.
• Analyzing
Analyze the annual budget of your college as to categories of funds,
sources of funds, major expenditures, and needs of your college.
• Evaluating
Are you in favor of the political platform of the Liberal Party? Justify your
answer.
• Creating
Propose solutions that can address the landfill problems in the
Philippines.

Tip for test construction:


REVISED Bloom’s Taxonomy Action Verbs

I. Remembering – Exhibit memory of previously learned material by recalling facts, terms, basic concepts, and answers.
Verbs: Choose, Define, Find, How, Label, List, Match, Name, Omit, Recall, Relate, Select, Show, Spell, Tell, What, When, Where, Which, Who, Why

II. Understanding – Demonstrate understanding of facts and ideas by organizing, comparing, translating, interpreting, giving descriptions, and stating main ideas.
Verbs: Classify, Compare, Contrast, Demonstrate, Explain, Extend, Illustrate, Infer, Interpret, Outline, Relate, Rephrase, Show, Summarize, Translate

III. Applying – Solve problems in new situations by applying acquired knowledge, facts, techniques, and rules in a different way.
Verbs: Apply, Build, Choose, Construct, Develop, Experiment with, Identify, Interview, Make use of, Model, Organize, Plan, Select, Solve, Utilize

IV. Analyzing – Examine and break information into parts by identifying motives or causes. Make inferences and find evidence to support generalizations.
Verbs: Analyze, Assume, Categorize, Classify, Compare, Conclusion, Contrast, Discover, Dissect, Distinguish, Divide, Examine, Function, Inference, Inspect, List, Motive, Relationships, Simplify, Survey, Take part in, Test for, Theme

V. Evaluating – Present and defend opinions by making judgments about information, validity of ideas, or quality of work based on a set of criteria.
Verbs: Agree, Appraise, Assess, Award, Choose, Compare, Conclude, Criteria, Criticize, Decide, Deduct, Defend, Determine, Disprove, Estimate, Evaluate, Explain, Importance, Influence, Interpret, Judge, Justify, Mark, Measure, Opinion, Perceive, Prioritize, Prove, Rate, Recommend, Rule on, Select, Support, Value

VI. Creating – Compile information together in a different way by combining elements in a new pattern or proposing alternative solutions.
Verbs: Adapt, Build, Change, Choose, Combine, Compile, Compose, Construct, Create, Delete, Design, Develop, Discuss, Elaborate, Estimate, Formulate, Happen, Imagine, Improve, Invent, Make up, Maximize, Minimize, Modify, Original, Originate, Plan, Predict, Propose, Solution, Solve, Suppose, Test, Theory

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing, Abridged Edition. Boston, MA: Allyn and Bacon.

Lesson 4
Evaluation of Learning in the Psychomotor and Affective Domains

I. Intended Learning Outcomes


Identify the processes and skills involving the mind and the body and those
involved in emotional development

Define significant motor performance in the psychomotor domain

Explain the degree of internalization

Apply the levels of psychomotor and affective learning in test construction


II. Introduction

As pointed out in the previous lesson, there are three domains of learning
objectives that teachers have to assess. While it is true that achievement in the
cognitive domain is what teachers measure most frequently, the students’ growth in
the non-cognitive domains of learning should also be given equal emphasis. This
lesson expounds different ways by which learning in the psychomotor and
affective domains can be assessed and evaluated.

III. Content / Concept

Levels of Learning in the Psychomotor Domain

The psychomotor domain is focused on the processes and skills involving the
mind and the body (Eby & Kujawa, 1994). It is the domain of learning which
classifies objectives dealing with physical movement and coordination (Arends,
1994; Simpson, 1966). Thus, objectives in the psychomotor domain require
significant motor performance. Playing a musical instrument, singing a song,
drawing, dancing, putting a puzzle together, reading a poem and presenting a
speech are examples of skills developed in the aforementioned domain of
learning.

There are five levels of psychomotor learning:

• Imitation
This is the ability to carry out the basic rudiments of a skill when given
directions and under supervision. At this level, the total act is not
performed skillfully. Timing and coordination of the act are not yet refined.

• Manipulation
This is the ability to perform a skill independently. The entire skill can be
performed in sequence. Conscious effort is no longer needed to perform
the skill, but complete accuracy has not been achieved yet.

• Precision
This is the ability to perform an act accurately, efficiently, and
harmoniously. Complete coordination of the skill has been acquired. The
skill has been internalized to such an extent that it can be performed
unconsciously.
• Articulation
This is the ability to coordinate and adapt a series of actions to achieve
harmony and internal consistency.

• Naturalization
Mastering a high level performance until it becomes second-nature or
natural, without needing to think much about it.

Taxonomy of Psychomotor Domain (Dave, 1975)

• Imitation
Examples: Copying a work of art; performing a skill while observing a demonstrator.
Key Words: Copy, Follow, Mimic, Repeat, Replicate, Reproduce, Trace

• Manipulation
Examples: Being able to perform a skill on one’s own after taking lessons or reading about it; following instructions to build a model.
Key Words: Act, Build, Execute, Perform

• Precision
Examples: Working and reworking something so it will be “just right”; performing a skill or task without assistance; demonstrating a task to a beginner.
Key Words: Calibrate, Demonstrate, Master, Perfectionism

• Articulation
Examples: Combining a series of skills to produce a video that involves music, drama, color, sound, etc.; combining a series of skills or activities to meet a novel requirement.
Key Words: Adapt, Construct, Combine, Create, Customize, Modify, Formulate

• Naturalization
Examples: Maneuvering a car into a tight parallel parking spot; operating a computer quickly and accurately; displaying competence while playing the piano; Michael Jordan playing basketball.
Key Words: Create, Design, Develop, Invent, Manage, Do naturally
Measuring the Acquisition of Motor and Oral Skills

There are two approaches that teachers can use in measuring the acquisition of
motor and oral skills in the classroom:

• Observation of Student Performance

This is an assessment approach in which the learner does the desired skill in
the presence of the teacher. For instance, in a Physical Education class, the
teacher can directly observe how male students dribble and shoot the
basketball. In this approach the teacher observes the performance of a
student, gives feedback, and keeps a record of his performance, if
appropriate.

Observation of student performance can either be holistic or atomistic.


o Holistic Observation – employed when the teacher gives a score or
feedback based on pre-established prototypes of how an outstanding,
average, or deficient performance looks. Prior to the observation, the
teacher describes the different levels of performance.

For example, a teacher who required his students to make an oral report
on a research they undertook, describes the factors which go into an ideal
presentation. What the teacher may consider in grading the report,
include the following: knowledge of the topic, organization of the
presentation of the report, enunciation, voice projection, and
enthusiasm. The ideal presentation has to be described and the teacher
has to comment on each of these factors. A student whose presentation
closely matches the ideal described by the teacher would receive a
perfect mark.

o Atomistic or Analytic Observation – this type of observation requires that
a task analysis be conducted in order to identify the major subtasks
involved in the student performance. For example, in dribbling the ball,
the teacher has to identify the movements necessary to perform the task.
Then, he has to develop a checklist which enumerates the movements
necessary to the performance of the task. These positions are
demonstrated by the teacher. As students perform the dribbling of the
ball, the teacher assigns checkmarks for each of the various subtasks.
After the student has performed the specified action, all checkmarks are
considered and an assessment of the performance is made.
• Evaluation of Student Products

This is another approach that teachers can use in the assessment of the
students’ mastery of skills. For example, projects in the different learning areas
may be utilized in assessing students’ progress. Student products include
drawings, models, construction paper products, etc.

The same principles involved in holistic and atomistic observations apply to


the evaluation of projects. The teacher has to identify prototypes representing
different levels of performance for a project or do a task analysis and assign
scores by subtasks. In either case, the student has to be informed of the criteria
and procedures to be used in the assessment of their work.
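The atomistic approach described above — checking off observed subtasks and summing the checkmarks into an overall assessment — can be sketched in a few lines of code. This is a minimal illustration only: the subtask names for dribbling and the sample observation are hypothetical, not taken from the module.

```python
# Atomistic (analytic) scoring sketch: each subtask earns one checkmark.
# The subtask list below is an illustrative assumption for a dribbling task.
SUBTASKS = [
    "uses fingertips, not palm",
    "keeps ball below waist height",
    "keeps head up while dribbling",
    "shields ball with free arm",
]

def atomistic_score(checkmarks):
    """Count how many listed subtasks the student performed."""
    return sum(1 for task in SUBTASKS if checkmarks.get(task, False))

# Hypothetical observation record for one student.
observed = {
    "uses fingertips, not palm": True,
    "keeps ball below waist height": True,
    "keeps head up while dribbling": False,
    "shields ball with free arm": True,
}
print(atomistic_score(observed), "of", len(SUBTASKS))  # 3 of 4
```

A holistic observation, by contrast, would assign a single score against described prototype performances rather than tallying subtasks.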

Assessing Performance Through Student Portfolios

Portfolio Assessment is a new form of assessing students’ performance. A portfolio


is a collection of students’ work. It is used to gather a series of the students’
performances or products that show their accomplishment and/or improvement over time.
It consists of carefully selected samples of the students’ work indicating their growth
and development in some curricular goals. The following can be included in a
student’s portfolio, to name a few:
• Representative piece of writing
• Solved math problems
• Projects and puzzles completed
• Artistic creations
• Videotapes of performance
• Tape recordings
Portfolios can be used for the following purposes:
• Providing examples of student performance to parents
• Showing student improvement over time
• Providing a record of students’ typical performance to pass on to the next
year’s teacher
• Identifying areas of the curriculum that need improvement
• Encouraging students to think about what constitutes good performance in a
learning area; and
• Grading students

There are four steps to consider in making use of this type of performance
assessment:

1. Establishing a clear purpose


Purpose is very important in carrying out a portfolio assessment. Thus, there
is a need to determine beforehand the objective of the assessment and the
guidelines for the student products that will be included in the portfolio prior to
compilation.

2. Setting performance criteria

While teachers need to collaborate with their colleagues in setting a common


criterion, it is crucial that they involve their students in setting standards of
performance. This will enable the latter to claim ownership over their
performance.

3. Creating an appropriate setting

Portfolio assessment also needs to consider the setting in which the students’
performance will be gathered. Shall it be a written portfolio? Shall it be a
portfolio of oral or physical performances, science experiments, artistic
productions and the like? Setting has to be looked into since arrangements
have to be made on how desired performance can be properly collected.

4. Forming scoring criteria or predetermined rating

Scoring methods and judging the students’ performance are required in


portfolio assessment. Scoring students’ portfolios, however, is time-consuming
as a series of documents and performances have to be scrutinized and
summarized. Rating scales, anecdotal records, and checklists can be used in
scoring the students’ portfolios. The content of a student’s portfolio, however,
can be reported in the form of a narrative.

Tools for Measuring Acquisition Skills

As pointed out previously, the observation of student performance and evaluation of


student products are ways by which teachers can measure the students’ acquisition
of motor and oral skills. To overcome problems relating to validity and reliability,
teachers can use rating scales, checklists or other written guides to help them come
up with unbiased or objective observations of student performance.

A rating scale is a series of categories that are arranged in order of quality.
It can be helpful in judging skills, products, and procedures. According to Reyes
(2000), there are three steps to follow in constructing a rating scale:

• Identify the qualities of the product to be assessed. Create a scale for each
quality or performance aspect.
• Arrange the scales either from positive to negative or vice-versa.
• Write directions for accomplishing the rating scale.

Following is an example of a rating scale for judging a student teacher’s presentation


of a lesson.
Rating Scale for Lesson Presentation

Student Teacher____________________________________Date_____________

Subject __________________________________

Rate the student teacher on each of the skill areas specified below. Use the
following code: 5=outstanding, 4=very satisfactory, 3=satisfactory, 2=fair, 1=needs
improvement.
Encircle the number corresponding to your rating.

5 4 3 2 1 Audience Contact

5 4 3 2 1 Enthusiasm

5 4 3 2 1 Speech quality and delivery

5 4 3 2 1 Involvement of the audience

5 4 3 2 1 Use of non-verbal communication

5 4 3 2 1 Use of questions

5 4 3 2 1 Directions and refocusing

5 4 3 2 1 Use of reinforcement

5 4 3 2 1 Use of teaching aids and instructional materials

A checklist differs from a rating scale as it indicates the presence or absence of


specified characteristics. It is basically a list of criteria upon which a student’s
performance or end product is to be judged. The checklist is used by simply checking
off the criteria items that have been met.

Responses on a checklist vary. A response can be a simple check mark indicating that an


action took place. For instance, a checklist for observing student participation in the
conduct of a group experiment may appear like this:

____1. Displays interest in the experiment.

____2. Helps in setting up the experiment.


____3. Participates in the actual conduct of the experiment.

____4. Makes worthwhile suggestions.

The rater would simply check the items that occurred during the conduct of group
experiment.

Another type of checklist requires a yes or no response. The yes is checked when
the action is done satisfactorily; the no is checked when the action is done
unsatisfactorily. Below is an example of this type of checklist.

Performance Checklist for a Speech Class

Name___________________________________________Date_______________

Check YES or NO as to whether the specified criterion is met.

Did the student YES NO

1. Use correct grammar? ____ ____


2. Make clear presentation? ____ ____
3. Stimulate interest? ____ ____
4. Use clear diction? ____ ____
5. Demonstrate poise? ____ ____
6. Manifest enthusiasm? ____ ____
7. Use appropriate voice projection? ____ ____
__________________________________________________________________

Levels of Learning in the Affective Domain

Objectives in the affective domain are concerned with emotional development. Thus,
affective domain deals with attitudes, feelings, and emotions. Learning intent in this
domain of learning is organized according to the degree of internalization. Krathwohl
and his colleagues (1964) identified five levels of learning in the affective domain.

Level of Expertise Description of Level

Receiving Demonstrates a willingness to participate in the


activity
Responding Shows interest in the objects, phenomena, or activity
by seeking it out or pursuing it for pleasure
Valuing Internalizes an appreciation for (values) the
objectives, phenomena, or activity
Organization Begins to compare different values, and resolves
conflicts between them to form an internally consistent
system of values
Characterization by a Value or Adopts a long-term value system that is "pervasive,
Value Complex consistent, and predictable"

Evaluating Affective Learning

Learning in the affective domain is difficult and sometimes impossible to assess.


Attitudes, values and feelings can be intentionally concealed. This is because
learners have the right not to show their personal feelings and beliefs, if they choose
to do so. Although the achievement of objectives in the affective domain is important
in the educational system, such objectives cannot be measured or observed as readily
as those in the cognitive and psychomotor domains.

Teachers attempt evaluating affective outcomes when they encourage students to


express their feelings, attitudes, and values about topics discussed in class. They
can observe students and may find evidence of some affective learning.

Although it is difficult to assess learning in the affective domain, there are some tools
that teachers can use in assessing learning in this area. Some of these tools are the
following: attitude scale, questionnaire, simple projective techniques, and self-
expression techniques.

Attitude Scale is a form of rating scale containing statements designed to gauge


students’ feelings on an attitude or behavior. An example of an attitude scale is shown
below.

An Attitude Scale for Determining Interest in Mathematics

Name____________________________________________Date______________

Each of the statements below expresses a feeling towards mathematics.


Rate each statement on the extent to which you agree. Use the following
response code: SA=Strongly Agree; A=Agree; U=Uncertain; D=Disagree;
SD=Strongly Disagree.

____1. I enjoy doing my assignments in Mathematics.


____2. The book we are using in the subject is interesting.
____3. The lessons and activities in the subject challenge me to give my
best.
____4. I do not find the exercises during our lesson boring.
____5. Mathematical problems encourage me to think critically.
____6. I feel at ease during recitation and board work.
____7. My grade in the subject is commensurate to the effort I exert.
____8. My teacher makes the lesson easy to understand.
____9. I would like to spend more time in this subject.
____10. I like the way our teacher presents the steps in solving mathematical
problems.

Response to the items is based on the response code provided in the attitude scale.
A value ranging from 1 to 5 is assigned to the options provided. The value of 5 is
usually assigned to the option “strongly agree” and 1 to the option “strongly
disagree”. When a statement is negative, however, the assigned values are usually
reversed. The composite score is determined by adding the scale values and
dividing it by the number of statements or items.
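The scoring rule just described — assign 5 down to 1 to the options, reverse the values for negative statements, then average across items — can be sketched as follows. The sample responses and the choice of which item is negatively worded are illustrative assumptions, not part of the attitude scale above.

```python
# Composite score for a 5-point attitude scale:
# SA=5, A=4, U=3, D=2, SD=1, with values reversed for negative statements.
SCALE = {"SA": 5, "A": 4, "U": 3, "D": 2, "SD": 1}

def composite_score(responses, negative_items=()):
    """Average the scale values across items, reverse-coding negative items."""
    total = 0
    for item, answer in responses.items():
        value = SCALE[answer]
        if item in negative_items:
            value = 6 - value  # 5<->1, 4<->2, 3 stays 3
        total += value
    return total / len(responses)

# Hypothetical responses to a five-item scale.
responses = {1: "SA", 2: "A", 3: "U", 4: "A", 5: "D"}
# Suppose item 5 were negatively worded:
print(composite_score(responses, negative_items={5}))  # (5+4+3+4+4)/5 = 4.0
```

The same arithmetic applies whether the instrument is labeled an attitude scale or a Likert-style questionnaire, since both use the same response values.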

A questionnaire can also be used in evaluating attitudes, feelings, and opinions. It


requires students to examine themselves and react to a series of statements about
their attitudes, feelings and opinions. The response style for a questionnaire can
take any of the following forms:
• Checklist type
The checklist type of response provides the students a list of
adjectives for describing or evaluating something and requires them to
check those that apply. For example, a checklist questionnaire on
students’ attitudes in a science class may include the following:
o This class is: ____boring ____exciting ____interesting
____unpleasant ____highly informative
o I find Science: ____fun ____interesting ____very tiring
____difficult ____easy
The scoring of this type of test is simple. Subtract the number of
negative statements checked from the number of positive statements
checked.

• Semantic differential
This is another type of response on a questionnaire. It is usually a five-
point scale showing polar or opposite adjectives. It is designed so that
attitudes, feelings, and opinions can be measured by degrees from
very favorable to very unfavorable. Given below is an example of a
questionnaire employing the aforementioned response type.

Working with my group members is:


Interesting ____:____:____:____:____ Boring
Challenging ____:____:____:____:____ Difficult
Fulfilling ____:____:____:____:____ Frustrating
The composite score on the total questionnaire is determined by
averaging the scale values given to the items included in the
questionnaire.

• Likert scale
This is one of the most frequently used styles of response in attitude
measurement. It is oftentimes a five-point scale that ranges between the options
“strongly agree” and “strongly disagree”. An example of this is:

A Likert Scale for Assessing Students’ Attitude Towards


Leadership Qualities of Student Leaders

Name_______________________________________Date________

Read each statement carefully. Decide whether you agree or disagree


with each of them. Use the following response code: 5=Strongly
Agree; 4=Agree; 3=Undecided; 2=Disagree; 1=Strongly Disagree
Write your response on the blank before each item.

Student Leaders:

____1. Have to work for the benefit of the students.


____2. Should set example of good behavior to the members of the
organization
____3. Need to help the school in implementing campus rules and
regulations.
____4. Have to project a good image of the school in the community.
____5. Must speak constructively of the school’s teachers and
administrators.

Scoring of the Likert scale is similar to the scoring of an attitude scale


earlier presented.
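The checklist-type scoring rule given earlier in this list — subtract the number of negative adjectives checked from the number of positive ones — can also be sketched briefly. The adjective groupings and the sample student response below are illustrative assumptions.

```python
# Checklist-type questionnaire scoring sketch:
# score = positive adjectives checked - negative adjectives checked.
# Which adjectives count as positive or negative is assumed here.
POSITIVE = {"exciting", "interesting", "highly informative", "fun", "easy"}
NEGATIVE = {"boring", "unpleasant", "very tiring", "difficult"}

def checklist_score(checked):
    """Net score: positives checked minus negatives checked."""
    return len(checked & POSITIVE) - len(checked & NEGATIVE)

# Hypothetical set of adjectives one student checked.
student = {"interesting", "fun", "difficult"}
print(checklist_score(student))  # 2 positive - 1 negative = 1
```

A semantic differential questionnaire would instead be scored like the attitude scale: average the 1-to-5 position values marked between each adjective pair.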
Simple projective techniques are usually used when a teacher wants to probe
deeper into the students’ feelings and attitudes. There are three types of simple
projective techniques that can be used in the classroom:
• Word association
The student is given a word and asked to mention what comes to his
mind upon hearing it.
Ex: What comes to your mind when you hear the word corruption?
• Unfinished sentences
The students are presented partial sentences and are asked to
complete them with words that best express their feeling, for instance:
Given the chance to choose, I _________________________________
I am happy when ___________________________________________
My greatest failure in life was _________________________________
• Unfinished story
A story with no ending is deliberately presented to the students, which
they have to finish or complete. Through this technique, the teacher
will be able to sense students’ worries, problems and concerns.
Another way by which affective learning can be assessed is through the use of self-
expression techniques. Through these techniques, students are provided the
opportunity to express their emotions and views about issues, themselves, and
others. Self-expression techniques may take any of the following forms: log book of
daily routines or activities, diaries, autobiographies, essays, and other written
compositions or themes.
