
ORGANIZATION AND ANALYSIS

OF ASSESSMENT DATA FROM


ALTERNATIVE METHODS

A WRITTEN REPORT

CREATED BY:

Mecalla Tuballas

IN SUBMISSION FOR:

Profed 10

Assessment in Learning 2 with Focus on Trainers Methodology 1 and 2

Dr. Ferlyn Libao


ABSTRACT OF THE STUDY
This study examines the Organization and Analysis of Assessment Data from
Alternative Methods, aiming to explore their efficacy and implications for
educational evaluation. Successfully completing this culminating performance
task requires an understanding of the different purposes, functions, and ways
of constructing alternative forms of assessment, which include
performance-based assessment, affective assessment, and portfolio assessment.
Drawing on case studies and best practices, the study provides insights into
the challenges and opportunities associated with alternative methods,
highlighting their potential to promote deeper learning, student engagement,
and authentic skill development.

TABLE OF CONTENTS

INTRODUCTION

BACKGROUND

BODY

How do we quantify results from rubrics?

How do we quantify results from scales and checklists?

How do we quantify results from portfolios?

How do we summarize results?

Guidelines in Giving Qualitative Feedback

REFERENCES

INTRODUCTION

Background

Organization and Analysis of Assessment Data from Alternative Methods refers to the
systematic examination and interpretation of assessment data gathered through
non-traditional evaluation techniques in educational contexts. This study investigates
how such data are structured, managed, and analyzed to glean insights into student
learning, performance, and progress.

BODY

HOW DO WE QUANTIFY RESULTS FROM RUBRICS?

In the creation of rubrics, there are scales that represent the degree of performance,
which can range from a high to a low degree of proficiency.

The points depend on the quality of the behavior shown in the learner's performance.
The reliability of the assigned points can be established when the scores given by two
or more observers to the same behavior are consistent. Such a procedure entails the use
of multiple raters or judges to rate the performance.

The consistency of the ratings can be obtained using a coefficient of concordance.
Kendall's W coefficient of concordance is used to test the agreement among raters.
Suppose a performance task was demonstrated by five students and scored by three
raters, using a rubric with a scale of 1 to 4, where 4 is the highest and 1 is the lowest.

The ratings given by the three raters are first summated for each demonstration. The
mean of these sums of ratings is obtained (X̄ = 8.4). The mean is subtracted from each
sum of ratings to get a difference (D). Each difference is squared (D²), and the sum of
the squared differences is computed (ΣD² = 33.2). The mean and the summation of squared
differences are then substituted into the Kendall's W formula, in which m is the number
of raters.

The resulting Kendall's W coefficient of 0.38 is an estimate of the agreement of the
three raters across the five demonstrations. There is only moderate concordance among
the three raters, because the coefficient is far from 1.00.
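
The formula itself does not appear in this text, but the standard form is
W = 12ΣD² / (m²(n³ − n)), where m is the number of raters and n is the number of
performances rated. As an illustration, the Python sketch below works the example
through with a hypothetical set of ratings chosen only so that the summary values
match the ones above (mean sum of ratings = 8.4, ΣD² = 33.2); with these values the
standard formula gives W ≈ 0.37, in line with the 0.38 reported.

# Kendall's W from raw ratings: a minimal sketch.
# The ratings are hypothetical, chosen only so that the summary values
# match the worked example (mean sum = 8.4, sum of squared D = 33.2).
# Note: the classical definition of W uses ranks; this follows the
# textbook's shortcut of applying the formula to the ratings directly.

# rows = students (n = 5 demonstrations), columns = raters (m = 3), scale 1-4
ratings = [
    [4, 4, 4],  # student 1 -> sum of ratings 12
    [4, 3, 3],  # student 2 -> 10
    [3, 3, 3],  # student 3 -> 9
    [2, 2, 2],  # student 4 -> 6
    [1, 2, 2],  # student 5 -> 5
]

m = len(ratings[0])  # number of raters
n = len(ratings)     # number of performances

sums = [sum(row) for row in ratings]       # sum of ratings per student
mean = sum(sums) / n                       # 8.4
ssd = sum((s - mean) ** 2 for s in sums)   # sum of squared differences

w = 12 * ssd / (m ** 2 * (n ** 3 - n))     # Kendall's coefficient of concordance

print(f"mean sum of ratings = {mean:.1f}")  # 8.4
print(f"sum of squared D    = {ssd:.1f}")   # 33.2
print(f"Kendall's W         = {w:.2f}")     # 0.37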

How do we quantify results from scales and checklists?

Scales could be a measure of noncognitive dimensions of students' behavior. When the
items in the scale are answered by students, the response format quantifies the behavior
measured by the scale. The types of response format vary depending on the nature of the
behavior measured.

Likert Scale. The Likert scale is used to measure students' favorability or
unfavorability toward a certain object. The favorability depends on the degree of
agreement or disagreement with a standpoint.

● To quantify the scale, a numerical score can be assigned to each of the responses.
For example, 4 points can be assigned to strongly agree, 3 points to agree, 2
points to disagree, and 1 point to strongly disagree. To get the total score for the
overall scale, the points for each item are summated. The total score is a
representation of the overall trait being measured. Usually, high scores in a Likert
scale represent a favorable attitude, and low scores represent an unfavorable one.
Norms are created to set specific cutoff points for the degrees of favorability
and unfavorability.
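
As a concrete illustration, the sketch below scores one student's responses to a
hypothetical four-item Likert scale by mapping each response to points and summating;
the items, responses, and cutoff are assumptions for illustration. The verbal frequency
scale described next is scored the same way, only with frequency labels and a
five-point mapping.

# Scoring a Likert scale: a minimal sketch with hypothetical responses.

# 4-point response format: strongly agree = 4 ... strongly disagree = 1
POINTS = {"strongly agree": 4, "agree": 3, "disagree": 2, "strongly disagree": 1}

# One student's responses to a hypothetical four-item attitude scale
responses = ["agree", "strongly agree", "agree", "disagree"]

# The total score represents the overall trait being measured
total = sum(POINTS[r] for r in responses)
print(f"total score = {total} out of {4 * len(responses)}")  # 12 out of 16

# In practice, norms set the cutoff; the scale midpoint is used here
# only as a stand-in for such a norm.
midpoint = (4 + 1) / 2 * len(responses)  # 10.0 for a four-item scale
print("favorable attitude" if total > midpoint else "unfavorable attitude")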

Verbal Frequency Scale. This is used to measure how often a habit is done.

● A verbal frequency scale is scored by assigning numerical values to every response.
When "always" is answered, it can be given 5 points, with 4 points for often, 3 points
for sometimes, 2 points for rarely, and 1 point for never. The total score for the
habit can be estimated by summating the scores of all the items. A high score means a
high frequency of the habit, while a low score means a lower frequency. Each item here
is a measure of the habit.

Linear Numeric Scale. This is used when a wide range of ratings along a continuum is
provided to participants. Only the extreme points of the scale are given a descriptor.

Semantic Differential Scale. This scale is used to describe the object or behavior by
making use of two opposite adjectives.

Graphic Scale. This scale uses illustrations to represent the degree of presence or
absence of the characteristics measured. This is usually used for respondents, such as
young children, who have limited vocabulary.

How do we quantify results from portfolios?

Assessment data generated from portfolios can be either qualitative or quantitative.
When assessing portfolios using a quantitative approach, scales and rubrics can be
used; the scales and other measures need to specify the criteria required in assessing
the portfolio. A qualitative assessment requires criteria and narrative feedback
provided to the learner.
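
As a sketch of the quantitative route, the snippet below scores a portfolio against a
set of criteria using the same 1-to-4 rubric scale discussed earlier; the criteria,
scores, and feedback are hypothetical.

# Quantitative portfolio scoring: a minimal sketch with hypothetical criteria.

# Each criterion is rated on a 1-4 rubric scale.
criteria = {
    "completeness of entries": 4,
    "organization": 3,
    "quality of reflections": 3,
    "growth over time": 2,
}

total = sum(criteria.values())
maximum = 4 * len(criteria)
print(f"portfolio score: {total}/{maximum} ({100 * total / maximum:.0f}%)")

# Qualitative assessment pairs the criteria with narrative feedback.
feedback = {"growth over time": "Include earlier drafts so progress is visible."}
for criterion, note in feedback.items():
    print(f"{criterion}: {note}")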

How do we summarize results?

When results of assessment are summarized, the teacher needs to think about two
things:

1. The kind of scores that will be presented - The teacher may need the raw score, the
percentage, or the transmuted grade. The average and summation of scores may also be
required, depending on the grading system.

2. The tabular or graphical presentation of the scores - Scores can be presented in a
tabular or a graphical manner; a small sketch of the tabular form follows.
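
To make this concrete, the sketch below turns raw scores into percentages and
transmuted grades and prints them as a simple table. The student names, scores, and
linear transmutation (zero maps to 75, a perfect score to 100) are assumptions for
illustration; actual grading systems define their own transmutation tables.

# Summarizing results as a table: a minimal sketch with hypothetical data.

ITEMS = 20  # highest possible score on the task

scores = {"Student A": 18, "Student B": 15, "Student C": 11}

def transmute(raw: int, items: int) -> float:
    """Assumed linear transmutation: 0 -> 75, perfect score -> 100."""
    return 75 + 25 * raw / items

# Tabular presentation: raw score, percentage, transmuted grade
print(f"{'Student':<12}{'Raw':>5}{'Pct':>7}{'Grade':>8}")
for name, raw in scores.items():
    pct = 100 * raw / ITEMS
    print(f"{name:<12}{raw:>5}{pct:>6.0f}%{transmute(raw, ITEMS):>8.1f}")

average = sum(scores.values()) / len(scores)
print(f"average raw score = {average:.1f}")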

Guidelines in Giving Qualitative Feedback

1. The contents of the feedback should be based on, and stay within the confines of, the criteria.

2. The feedback should inform students of what to do to improve their performance or
behavior. The recommendation can be:

● a suggested procedure
● how to correct the errors
● the kind of thinking required to get the answer
● where to locate the answer

3. The feedback should be given immediately so that the error can be corrected.

4. The learner needs to be provided with an opportunity to redo and resubmit the task.

5. Detail the feedback if the learner needs more information.

6. The feedback can be short if the learner knows what to do.

7. Feedback can come in the form of verbal cues and gestures so that the learner is not
disrupted while performing.

REFERENCES

Book: Assessment in Learning 2, pp. 119-127.
