Assessment of Learning (AL) Reviewer

Assessment of Learning

Concept Description
Assessment
 Process of gathering quantitative and qualitative data for the purpose of making decisions

Measurement
 Process of quantifying the attributes of an object

Evaluation
 Process of making value judgments on the information collected from measurement

Testing
 Most common form of assessment
 The use of a test or battery of tests to collect information on student learning
 Not all assessments use tests or testing

Test
 Can either be selected-response or constructed-response

Grading
 Process of assigning value to the performance or achievement of a learner based on specified criteria or standards
 A form of evaluation

Types of Assessment
 Formative Assessment
 Summative Assessment
 Diagnostic Assessment
 Placement Assessment
 Traditional Assessment
 Authentic Assessment

Different Principles in Assessing Learning
 Clear purpose
 Not an end in itself – should serve as a means to enhance student learning
 Ongoing, continuous, and a formative process
 Learner-centered
 Process- and product-oriented
 Comprehensive and holistic – conducted in multiple periods
 Requires the use of appropriate measures – age- and context-appropriate; valid and reliable
 As authentic as possible

Purpose of Classroom Assessment
 Assessment of learning – summative
 Assessment for learning – formative
 Assessment as learning – formative in nature; metacognition; self-regulation
 Evaluative – for the purpose of making a judgment or grading
 Facilitative – improves instruction and learning strategies
 Motivational – encourages students to be engaged in learning

Learning Targets – Educational Objectives
 Bloom's Taxonomy (original level – revised level)
o Cognitive Domain
 Knowledge – Remember
 Comprehension – Understand
 Application – Apply
 Analysis – Analyze
 Synthesis – Create
 Evaluation – Evaluate
o Affective Domain
 Receiving
 Responding
 Valuing
 Organization
 Characterization
o Psychomotor Domain
 Reflex movements – stretch, extension, postural adjustment
 Basic fundamental movements – walking, running, twisting
 Perceptual abilities – coordinated movements – jump rope, catching
 Physical abilities – require endurance and strength
 Skilled movements – sports, dance, recreation
 Non-discursive communication – body postures, gestures, facial expressions

Types of Learning Targets
 Knowledge targets
 Reasoning targets
 Skills targets
 Product targets – concrete, tangible product
 Affect targets – attitudes, beliefs, interests, values

Different Classifications of Assessment
 Purpose
o Educational – used in the school setting
o Psychological – measures cognitive and non-cognitive characteristics
 Form
o Paper-and-pencil
o Performance-based
 Function
o Teacher-made
o Standardized
o Teacher-made tests can be standardized as long as they are valid, reliable, and have a standard procedure for administering, scoring, and interpreting
 Kind of learning
o Achievement – what learners have learned after instruction
o Aptitude – characteristics that influence a person's behavior and aid goal attainment; e.g., the ability to comprehend instructions, manage one's time, and make good inferences
 Ability
o Speed – e.g., typing tests
o Power – items with an increasing level of difficulty
 Interpretation of learning
o Norm-referenced – interprets a score using the distribution of scores of a sample group; how far it is from the mean and SD of the sample
o Criterion-referenced – interprets a score against a given set of standards; e.g., very low, low, average, high, very high
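As a sketch of norm-referenced interpretation, a raw score can be expressed as a z-score, i.e., its distance from the sample group's mean in standard-deviation units. The sample scores below are hypothetical:

```python
from statistics import mean, pstdev

# Hypothetical scores of the norm (sample) group
sample_scores = [62, 70, 75, 78, 80, 84, 88, 90, 93, 95]

def z_score(raw_score, norm_group):
    """Distance of a raw score from the norm group's mean, in SD units."""
    m = mean(norm_group)
    sd = pstdev(norm_group)  # population SD of the norm group
    return (raw_score - m) / sd

# A score of 93 lies a little over one SD above the sample mean
print(round(z_score(93, sample_scores), 2))  # 1.15
```

A positive z-score places the examinee above the sample mean; a negative one, below it.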

Planning a Written Test
 TOS (Table of Specifications) – test blueprint; ensures ILOs, assessments, and instruction are aligned
 Steps:
o State the objectives
o Determine the coverage of the test
o Calculate the weight of each topic
o Determine the number of items for the whole test
o Determine the number of items per topic
o Write test questions for each topic based on the ILOs
 60-30-10 rule:
 60% remembering and understanding
 30% applying and analyzing
 10% evaluating and creating
o Place the item number under each classification
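The allocation steps above can be sketched in code. The topic names, instructional hours, and total item count below are hypothetical, and item counts are rounded to whole numbers:

```python
# Hypothetical coverage: topic -> instructional hours spent
hours = {"Fractions": 6, "Decimals": 3, "Percent": 3}
total_items = 60  # desired length of the whole test

total_hours = sum(hours.values())

# Weight each topic by its share of instruction time
items_per_topic = {
    topic: round(total_items * h / total_hours) for topic, h in hours.items()
}
print(items_per_topic)  # {'Fractions': 30, 'Decimals': 15, 'Percent': 15}

# Apply the 60-30-10 rule across the whole test
split = {"remembering/understanding": 0.60,
         "applying/analyzing": 0.30,
         "evaluating/creating": 0.10}
items_per_level = {level: round(total_items * p) for level, p in split.items()}
print(items_per_level)  # 36, 18, and 6 items respectively
```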

Reliability
 Consistency of the responses to a measure under the following conditions:
o When retested on the same person
o When retested on the same measure
o Similarity of responses across items that measure the same characteristic
o Responses to a test that measures the same characteristic at a different time
 Factors that affect the reliability of a measure:
o Number of items in a test
o Individual differences of participants
o External environment
 Test-Retest
o Administer a test at one time to a group of examinees and administer it again to the same group at another time
o Time interval of not more than 6 months
 Parallel Forms
o There are two versions of a test
o Items need to measure exactly the same skill
o Administer one form at one time and the other form at another time to the same group

 Split-Half
o Administer a test to a group of examinees
o Items are split into halves – correlate the scores on the two halves of the same test – the scores should be close or consistent
 Test of internal consistency
o See whether the responses per item are consistent with each other
 Inter-rater reliability
o Consistency of multiple raters when using rating scales or rubrics to judge performance
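As an illustration of the split-half method, one common approach is to correlate odd-item and even-item half scores, then apply the Spearman-Brown correction to estimate full-test reliability. The 0/1 response data below are made up:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(item_scores):
    """item_scores: one row of 0/1 item scores per examinee."""
    odd = [sum(row[0::2]) for row in item_scores]   # odd-item half scores
    even = [sum(row[1::2]) for row in item_scores]  # even-item half scores
    r_half = pearson(odd, even)
    # Spearman-Brown correction: reliability of the full-length test
    return 2 * r_half / (1 + r_half)

# Hypothetical 0/1 responses of 5 examinees to a 6-item test
data = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 1],
]
print(round(split_half_reliability(data), 2))  # 0.97
```

A corrected coefficient close to 1 indicates the two halves rank examinees consistently.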

Test Validity
 A test/measure is valid when it measures what it is supposed to measure
 Types:
o Content validity
o Face validity
o Predictive validity
o Construct validity
 The components of the test should contain strongly correlated items
 Ex. a science test composed of 4 domains – 10 items each
o Concurrent validity
 When two or more measures that assess the same characteristic are present for each examinee
 Ex. math grade and math achievement test
o Convergent validity
 Components of a test have a positive correlation
 Ex. a student's competence in number sense improves their capacity to learn patterns and algebra
o Divergent validity
 Components of a test have a negative correlation
 Ex. administer a reading comprehension test to group A (taught metacognitive awareness) and the same test to group B (not taught metacognitive awareness)

Frequency Distribution
 Negatively skewed – more students got higher scores
 Positively skewed – more students got lower scores

Kurtosis
 Peakedness or flatness of the distribution
o Platykurtic – broad or flat
o Mesokurtic – normal, intermediate
o Leptokurtic – narrow, steep

Measures of Central Tendency
 Mean, median, mode

Levels of Measurement
 Nominal – categorical
 Ordinal – ranking
 Interval – has the properties of both nominal and ordinal; no true zero point
 Ratio – carries the properties of nominal, ordinal, and interval; has a true zero point, where zero indicates the total absence of the trait; Ex. 0 km

Measures of Dispersion
 Range
 Variance
 Standard Deviation
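A minimal sketch computing the three measures for a hypothetical set of class scores:

```python
from statistics import pvariance, pstdev

scores = [70, 75, 80, 85, 90]  # hypothetical class scores

score_range = max(scores) - min(scores)  # Range: highest minus lowest score
variance = pvariance(scores)             # average squared deviation from the mean
std_dev = pstdev(scores)                 # square root of the variance

print(score_range, variance, std_dev)  # 20 50 7.07...
```

A larger standard deviation means the scores are more spread out around the mean.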

Measures of Position
 Quartile
o The three values that divide a set of scores into four equal parts
o 25% of the data falls at or below the first quartile
o 50% of the data falls at or below the 2nd quartile (median)
o 75% of the data falls at or below the 3rd quartile
 Decile
o Divides the distribution into 10 equal parts
o A student whose mark is below the first decile is said to belong in D1
 Percentile
o Divides the distribution into 100 equal parts
o If you scored 95 in a 100-item test and your percentile rank is 99th, this means that 99% of those who took the test performed lower than you
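The percentile-rank interpretation above can be sketched as the share of examinees who scored below a given score. The score list is hypothetical:

```python
def percentile_rank(score, all_scores):
    """Percentage of examinees who scored lower than `score`."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

# Hypothetical scores of 10 examinees on a 100-item test
scores = [40, 52, 55, 60, 63, 70, 74, 81, 88, 95]
print(percentile_rank(88, scores))  # 80.0 -> 80% of examinees scored lower
```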

Different Methods in Scoring Tests
 For multiple-choice
o Number Right Scoring (NR)
 Assigns positive values to correct answers and zero to incorrect answers
o Negative Marking (NM)
 Assigns positive values to correct answers while penalizing learners for incorrect responses (right minus wrong)
 A fraction of the number of wrong answers is subtracted from the number of correct answers
o Partial Credit Scoring Methods
 Determine a learner's degree or level of knowledge with respect to each response option given
 Learners can discern that some responses are clearly incorrect
 Types:
 Liberal multiple-choice test – learners select more than one answer if they feel uncertain which option is correct
 Eliminating testing – learners cross out all alternatives they consider to be incorrect
 Confidence weighting – learners indicate how confident they are that their answer is the correct option
o Multiple Answers Scoring Method
 Allows learners to give multiple answers for each item
 Each item has one or more correct answers
o Retrospective Correcting for Guessing
 Considers omitted or unanswered items as incorrect, forcing learners to give an answer for every item even if they do not know the correct answer
o Standard-Setting Scoring Method
 Uses standards when scoring multiple-choice items
 Based on norm-referenced assessment
 For constructed-response tests (essays and performance tests)
o Holistic Scoring
 Gives a single, overall assessment score
 Ex. Exemplary, Satisfactory, Emerging
o Analytic Scoring
 Assesses each aspect of a performance task and assigns a score for each criterion
 Prone to the halo effect – scores on one scale may influence the ratings of others
o Primary Trait Scoring
 Focuses on only one aspect or criterion of a task
 The scoring system defines a primary trait in the task that will be scored
o Multiple Trait Scoring
 Performance is scored on more than one aspect
 Similar to analytic scoring, but focuses on the specific features of performance required to fulfill the given task
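As a sketch of two of the multiple-choice methods above: number-right scoring simply counts correct answers, while negative marking subtracts a fraction of the wrong answers — a common choice is 1/(k−1) for k options. The test size and counts below are hypothetical:

```python
def number_right(correct, wrong):
    """Number Right (NR): one point per correct answer, zero for wrong."""
    return correct

def negative_marking(correct, wrong, options=4):
    """Negative Marking (NM): right minus a fraction of wrong.
    The 1/(k-1) fraction is the classic correction-for-guessing choice,
    where k is the number of options per item."""
    return correct - wrong / (options - 1)

# Hypothetical 40-item, 4-option test: 30 right, 6 wrong, 4 omitted
print(number_right(30, 6))      # 30
print(negative_marking(30, 6))  # 30 - 6/3 = 28.0
```

Under NM, a purely random guesser's expected gain from guessing is zero, which is the rationale for the 1/(k−1) penalty.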
