8602 Assignment Answer

Course Name: Educational Assessment and Evaluation
Course Code: 8602
Semester: Spring 2021, B.Ed (1.5 Years)
Name: Farwa Munir
Roll No: CE-614625

Assignment No. 1

QUESTION NO.1

What are formative and summative assessment? Distinguish between them with the help of relevant examples.

Answer:
Assessment is a purposeful activity aimed at facilitating students' learning and improving the quality of instruction. Based upon the functions it performs, assessment is generally divided into three types: assessment for learning, assessment of learning, and assessment as learning.

Formative Assessment (Assessment for Learning):


It is used to monitor students' learning and to provide ongoing feedback that can be used by instructors or teachers to improve their teaching and by students to improve their learning.

Assessment for learning is a continuous, ongoing assessment that allows teachers to monitor students on a day-to-day basis and modify their teaching based on what the students need to be successful. This assessment provides students with the timely, specific feedback they need to enhance their learning. The essence of formative assessment is that the information it yields is used on one hand to make immediate decisions, and on the other hand to provide timely feedback to students so that they can learn better. If the primary purpose of assessment is to support high-quality learning, then formative assessment ought to be understood as the most important assessment practice.

Assessment for learning has many unique characteristics. For example, this type of assessment is taken as "practice": learners should not be graded on skills and concepts that have just been introduced; they should be given opportunities to practice. Formative assessment helps teachers determine the next steps during the learning process as instruction approaches the summative assessment of student learning. A good analogy for this is the road test that is required to receive a driver's license. Before the final driving test, or summative assessment, a learner practices by being assessed again and again so that deficiencies in the skill can be pointed out. Another distinctive characteristic of formative assessment is student involvement. If students are not involved in the assessment process, formative assessment is not practiced or implemented to its full effectiveness. One of the key components of engaging students in the assessment of their own learning is providing them with descriptive feedback as they learn. In fact, research shows descriptive feedback to be the most significant instructional strategy for moving students forward in their learning. Descriptive feedback gives students an understanding of what they are doing well, along with input on how to reach the next step in the learning process.

Summative Assessment (Assessment of Learning):


It is used to evaluate students' learning at the end of an instructional unit by comparing it against some standard or benchmark. You can tell from their definitions that these two evaluation strategies are not meant to evaluate in the same way, so let's take a look at the biggest differences between them.

Summative assessment, or assessment of learning, is used to evaluate students' achievement at some point in time, generally at the end of a course. The purpose of this assessment is to help the teacher, students, and parents know how well the student has completed the learning task. In other words, summative evaluation is used to assign a grade to a student which indicates his or her level of achievement in the course or program. Assessment of learning is basically designed to provide useful information about the performance of the learners rather than immediate and direct feedback to teachers and learners; therefore it usually has little effect on learning. Even so, high-quality summative information can help and guide teachers in organizing their courses and deciding their teaching strategies, and educational programs can be modified on the basis of the information it generates. Many experts believe that all forms of assessment have some formative element; the difference only lies in the nature of the assessment and the purpose for which it is being conducted.

Differences between Formative and Summative Assessments:

The differences between formative and summative assessments can be summarized point by point as follows:

1. Purpose: Formative assessment shows how students are acquiring knowledge and whether there are any hurdles in their learning process, and it helps the teacher decide what to do next. Summative assessment helps to assess what knowledge has been learned by the student to date.

2. Audience: Formative assessment is intended to help teachers and students refine their learning capabilities. Summative assessment is designed to provide information to those not directly involved in classroom learning and teaching (school administration, parents, school board), in addition to educators and students.

3. Timing: Formative assessment is a continuous, ongoing process. Summative assessment is a periodic process designed on an interval basis.

4. Feedback: Formative assessment usually uses detailed, specific, and descriptive feedback in a formal or informal report. Summative assessment usually uses grades, numbers, scores, or marks as part of a formal report.

5. Reference point: Formative assessment usually focuses on improvement, compared with the student's own previous performance. Summative assessment usually compares the student's learning either with other students' learning (norm-referenced) or with the standard for a grade level (criterion-referenced).
Examples of formative assessments:

Formative assessments can be classroom polls, exit tickets, early feedback, and so on. But you can make them more fun too. Take a look at these three examples:

✓ In response to a question or topic inquiry, students write down 3 different summaries: 10-15 words long, 30-50 words long, and 76-100 words long.

✓ The 3-2-1 countdown exercise. Give your students cards to write on, or let them respond orally. Students have to respond to three separate statements: 3 things you didn't know before, 2 things that surprised you about this topic, and 1 thing you want to start doing with what you've learned.

✓ One-minute papers are usually done at the end of a lesson. Students answer a brief question in writing. The question typically centers on the main point of the course, the most surprising concept, the most confusing area of the topic, or what question from the topic might appear on the next test.

Examples of summative assessments:


Most of you have been using summative assessments for your whole teaching careers, and that's normal: education is a slow learner, and giving students grades is the easier thing to do. Examples of summative assessments are midterm exams, end-of-unit or end-of-chapter tests, final projects or papers, district benchmarks, and scores used for accountability for schools and students. I hope you now know the differences and know which assessment strategy you are going to use in your teaching.

Reference:
 www.fairtest.org
 www.stemresources.com
 8602 code book “Educational Assessment and Evaluation”

QUESTION NO.2

How is a table of specifications prepared? What are the different ways of developing a table of specifications?

Answer:

Definition:
“A table of specification is a chart that provides a graphic representation of the content of a course or curriculum elements and the educational outcomes/objectives.”

Preparation of Table of Specification:
It has been discussed earlier that educational objectives play a significant role in the development of classroom tests. The reason is that the preparation of a classroom test is closely related to the curriculum and educational objectives. We have also explained that a test should measure what was taught. A way of ensuring that there is similarity between classroom instruction and test content is the development and application of a table of specification, which is also called a test blueprint. As the name implies, it specifies the content of a test. It is a two-way framework which ensures congruence between classroom instruction and test content. This is one of the most popular procedures used by test developers for defining the content domain. One dimension of the table reflects the content to be covered, and the other dimension describes the kinds of student cognitive behaviour to be assessed. Tables of specifications are designed based on:

 course learning outcomes/objectives
 topics covered in class
 amount of time spent on those topics
 methods of instruction
 assessment plan

The benefits of a table of specifications:

The advantages of developing a table of specification are:

 clarify learning outcomes
 ensure content coverage
 match methods of instruction
 help in the assessment plan and blueprint
 evaluate the program

Things that should be taken into account when building a table of specifications are:
 course learning outcomes/objectives
 topics covered in class
 amount of time spent on those topics
 methods of instruction
 assessment plan

Constructing the table of specifications:

 the content of the curriculum
 guided by learning outcomes/objectives, Bloom's taxonomy levels, and their weightage
 methods of instruction
 the assessment plan is added

Example Table:

Topics    Knowledge   Comprehension   Application   Analysis   Total
Topic 1       5             2              2            3        12
Topic 2       3             3              4            2        12
Topic 3       2             2              3            2         9
Topic 4       3             3              1            1         8
Topic 5       1             2              1            1         5
Topic 6       2             2              0            0         4
Total        16            14             11            9        50
The top of each column of the table represents a level of the cognitive domain; the extreme left column represents the categories of the content (topics), or assessment domains. The numerals in the cells of the two-way table show the numbers of items to be included in the test. You can readily see how the fifty items in this table have been allocated to the content topics and the levels of cognitive behaviour. The teacher may add some more dimensions. This table of specification represents four levels of the cognitive domain. It is not necessary for the teacher to develop a test that completely coincides with the content of the taught domain; the teacher is required to adequately sample the content of the assessment domain. The important consideration here is that teachers must make a careful effort in conceptualizing the assessment domain, and appropriate representativeness must be ensured. Unfortunately, many teachers develop tests without figuring out what domains of knowledge, skills, or attitudes should be promoted and, consequently, formally assessed. A classroom test should measure what was taught. In simple words, a test must emphasize what was emphasized in class.
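To make the allocation procedure concrete, here is a minimal sketch in Python, under the assumption that item counts are distributed in proportion to the weight (for example, instructional time) given to each topic and each cognitive level. The weights below are hypothetical, chosen to roughly match the example table above; the largest-remainder rounding step keeps the grand total at fifty items.

```python
# A minimal sketch (hypothetical weights): allocate a fixed number of test
# items across topics and cognitive levels in proportion to the emphasis
# each received during instruction.

TOTAL_ITEMS = 50

# Weights derived from the example table above (topic total / 50, level total / 50).
topic_weights = {"Topic 1": 0.24, "Topic 2": 0.24, "Topic 3": 0.18,
                 "Topic 4": 0.16, "Topic 5": 0.10, "Topic 6": 0.08}
level_weights = {"Knowledge": 0.32, "Comprehension": 0.28,
                 "Application": 0.22, "Analysis": 0.18}

# Raw (fractional) allocation for every cell of the two-way table.
raw = {(topic, level): TOTAL_ITEMS * tw * lw
       for topic, tw in topic_weights.items()
       for level, lw in level_weights.items()}

# Round down, then hand the remaining items to the cells with the largest
# fractional parts so the grand total still equals TOTAL_ITEMS.
cells = {cell: int(v) for cell, v in raw.items()}
leftover = TOTAL_ITEMS - sum(cells.values())
for cell in sorted(raw, key=lambda c: raw[c] - int(raw[c]), reverse=True)[:leftover]:
    cells[cell] += 1

for topic in topic_weights:
    row = [cells[(topic, level)] for level in level_weights]
    print(f"{topic}: {row}  total: {sum(row)}")
```

The exact cell counts will not always match a hand-built table, since rounding can shift a cell by one item, but the row and column totals come out close to the intended weightage.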

CONCLUSION: A table of specification helps teachers to review the curriculum content on one hand, and on the other hand it helps them avoid overlooking important concepts or including unimportant and irrelevant ones. On similar patterns, a teacher can develop a table of specification for the affective and psychomotor domains.

Reference:

 http://profdrayubent.com/
 8602 code book “Educational Assessment and Evaluation’’
 www.google.com
 www.udel.edu

QUESTION NO.3

Define criterion- and norm-referenced testing. Make a comparison between them.

Answer:

a) Definition of Norm-Referenced Tests:
Norm-referenced tests are made to compare test takers to each other. On an NRT driving test, test-takers would be compared as to who knew the most or least about driving rules, or who drove better or worse. Scores would be reported as a percentile rank, with half scoring above and half below the mid-point. This type of test determines a student's placement on a normal distribution curve. Students compete against each other on this type of assessment; this is what is being referred to with the phrase "grading on a curve".

b) Definition of Criterion-Referenced Tests:


Criterion-referenced tests are intended to measure how well a person has learned a specific body of knowledge and skills. "Criterion-referenced test" is a term used daily in classes: these tests assess specific skills and concepts covered in class. Typically, they are designed with 100 total points possible, and students earn points for items completed correctly. The students' scores are typically expressed as a percentage. Criterion-referenced tests are the most common type of test teachers use in daily classroom work.

COMPARISON BETWEEN NORM-REFERENCED AND CRITERION-REFERENCED TESTS:


Norm-referenced tests compare an examinee's performance to that of other examinees. Standardized examinations such as the SAT are norm-referenced tests. The goal is to rank the set of examinees so that decisions about their opportunity for success can be made. Criterion-referenced tests differ in that each examinee's performance is compared to a pre-defined set of criteria or a standard. The goal of these tests is to determine whether or not the candidate has demonstrated mastery of a certain skill or set of skills. The results are usually "pass" or "fail" and are used in making decisions about job entry, certification, or licensure. A national board medical exam is an example of a criterion-referenced test: either the examinee has the skills to practice the profession, in which case he or she is licensed, or does not.

Criterion-Referenced Tests versus Norm-Referenced Tests:

1. Purpose: Criterion-referenced tests determine whether each student has achieved specific skills or concepts based on standards. Norm-referenced tests rank each student with respect to the achievement of others, in order to discriminate between high and low achievers.

2. Content: Criterion-referenced tests measure specific skills which make up a designated curriculum; these skills are identified by teachers and curriculum experts. Norm-referenced tests measure broad skill areas sampled from a variety of textbooks, syllabi, and the judgments of curriculum experts.

3. Comparison: On a criterion-referenced test, each individual is compared with a preset standard for acceptable achievement; the performance of other examinees is irrelevant. On a norm-referenced test, each individual is compared with other examinees and assigned a score, usually expressed as a percentile.

4. Reporting: On a criterion-referenced test, the student's score is usually expressed as a percentage, and student achievement is reported for individual skills. On a norm-referenced test, student achievement is reported for broad skill areas, although some norm-referenced tests do report student achievement for individual skills.

Norm-referenced refers to standardized tests that are designed to compare and rank test takers in relation to one another. Norm-referenced tests report whether test takers performed better or worse than a hypothetical average student, which is determined by comparing scores against the performance results of a statistically selected group of test takers, typically of the same age or grade level, who have already taken the exam. Calculating norm-referenced scores is called the "norming process", and the comparison group is known as the "norming group". Norming groups typically comprise only a small subset of previous test takers, not all or even most previous test takers. Test developers use a variety of statistical methods to select norming groups, interpret raw scores, and determine performance levels.

Norm-referenced scores are generally reported as a percentage or percentile ranking. For example, a student who scores in the seventieth percentile performed as well as or better than 70% of other test takers of the same age or grade level, while 30% of students performed better (as determined by norming-group scores). Norm-referenced tests often use a multiple-choice format, though some include open-ended, short-answer questions. They are usually based on some form of national standards, not locally determined standards or curricula. IQ tests are among the most well-known norm-referenced tests, as are developmental-screening tests, which are used to identify learning disabilities in young children or determine eligibility for special-education services. A few major norm-referenced tests include the California Achievement Test, the Iowa Test of Basic Skills, the Stanford Achievement Test, and the TerraNova.

The following are a few representative examples of how norm-referenced tests and scores may be used:

✓ To determine a young child's readiness for preschool or kindergarten. These tests may be designed to measure oral-language ability, visual-motor skills, and cognitive and social development.

✓ To evaluate basic reading, writing, and math skills. Test results may be used for a wide variety of purposes, such as
measuring academic progress, making course assignments, determining readiness for grade promotion, or identifying
the need for additional academic support.

✓ To identify specific learning disabilities, such as autism, dyslexia, or nonverbal learning disability, or to determine
eligibility for special-education services.

✓ One norm-referenced measure that many families are familiar with is the baby weight growth chart in the pediatrician's office, which shows which percentile a child's weight falls in. A child in the 50th percentile has an average weight; a child in the 75th percentile weighs more than 75% of the babies in the norm group and the same as or less than the heaviest 25% of babies in the norm group; and a child in the 25th percentile weighs more than 25% of the babies in the norm group and the same as or less than 75% of them. It's important to note that these norm-referenced measures do not say whether a baby's birth weight is "healthy" or "unhealthy", only how it compares with the norm group.

✓ For example, a baby who weighed 2,600 grams at birth would be in the 7th percentile, weighing the same as or less than 93% of the babies in the norm group. However, despite the very low percentile, 2,600 grams is classified as a normal or healthy weight for babies born in the United States; a birth weight of 2,500 grams is the cut-off, or criterion, for a child to be considered low-weight or at risk. (For the curious, 2,600 grams is about 5 pounds and 12 ounces.) Thus, knowing a baby's percentile rank for weight can tell you how they compare with their peers, but not whether the baby's weight is "healthy" or "unhealthy".

✓ Norm-referenced assessments work similarly. An individual student's percentile rank describes their performance in comparison to the performance of students in the norm group, but does not indicate whether or not they met or exceeded a specific standard or criterion.

✓ Consider a case where a student's score doesn't change but their percentile rank does, depending on how well the students in the norm group performed. When the individual is a top-performing student, they have a high percentile rank; when they are a lower-performing student, they have a low percentile rank. What we can't tell is whether or not the student should be categorized as proficient or below proficient.
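As a small illustration of the idea behind the birth-weight example above, the sketch below computes a percentile rank as the percentage of a norm group scoring at or below a given score. The norm-group numbers are hypothetical; real norming uses much larger, statistically selected samples.

```python
# A minimal sketch (hypothetical data): a percentile rank reports the
# percentage of the norm group scoring at or below the given score.
def percentile_rank(score: float, norm_group: list[float]) -> float:
    at_or_below = sum(1 for s in norm_group if s <= score)
    return 100.0 * at_or_below / len(norm_group)

# Hypothetical norm group of birth weights in grams.
norm_group = [2400, 2600, 2900, 3100, 3200, 3300, 3400, 3500, 3700, 4000]
print(percentile_rank(3100, norm_group))  # 40.0 -> the 40th percentile
```

Note that the rank says nothing about whether 3,100 grams is a "healthy" weight; it only locates the score within the norm group.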

Reference:

 8602 code book “Educational Assessment and Evaluation’’


 www.google.com
 www.udel.edu

QUESTION NO.4

What are the types of selection-type test items? What are the advantages of multiple-choice questions?

Answer:
There are four types of test items in the selection category of tests which are in common use today:

 Multiple-Choice
 Matching
 True-False
 Completion Items

MULTIPLE-CHOICE:
Multiple-choice test items consist of a stem or a question and three or more alternative answers (options), with the correct answer sometimes called the keyed response and the incorrect answers called distracters. The direct-question form is generally better than the incomplete-stem form because it is simpler and more natural.
Gronlund (1995) writes that the multiple-choice question is probably the most popular, as well as the most widely applicable and effective, type of objective test. The student selects a single response from a list of options. It can be used effectively for any level of course outcome. It consists of two parts: the stem, which states the problem, and a list of three to five alternatives, one of which is the correct (key) answer while the others are distracters (incorrect options that draw the less knowledgeable pupil away from the correct response). Multiple-choice questions consist of three obligatory parts and one optional part, the latter especially valuable in self-assessment:
1. The question ("body of the question")
2. The correct answer ("the key of the question")
3. Several incorrect alternatives (the so-called "distracters")
4. A feedback comment on the student's answer (optional)

The stem may be stated as a direct question or as an incomplete statement. For example:
Direct question:
Which is the capital city of Pakistan?----------------------------------------------(Stem)
A. Paris. --------------------------------------- (Distracter)
B. Lisbon. -------------------------------------- (Distracter)
C. Islamabad. ---------------------------------- (Key)
D. Rome. --------------------------------------- (Distracter)
Multiple-choice questions are composed of one question with multiple possible answers (options), including the correct answer and several incorrect answers (distracters). Typically, students select the correct answer by circling the associated number or letter, or filling in the associated circle on a machine-readable response sheet. Students can generally respond to these types of questions quite quickly, so they are often used to test students' knowledge of a broad range of content. Creating these questions can be time-consuming, because it is often difficult to generate several plausible distracters; however, they can be marked very quickly.

Incomplete statement:
The capital city of Pakistan is
A. Paris.
B. Lisbon.
C. Islamabad.
D. Rome.
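To tie together the parts named above (stem, options, key, distracters, optional feedback), here is a minimal sketch of a multiple-choice item as a data structure; the class and field names are hypothetical, not from the 8602 book.

```python
# A minimal sketch (hypothetical names): the obligatory and optional parts of
# a multiple-choice item, with simple objective scoring.
from dataclasses import dataclass

@dataclass
class MultipleChoiceItem:
    stem: str                 # the question or incomplete statement
    options: dict[str, str]   # letter -> alternative (key answer plus distracters)
    key: str                  # letter of the keyed (correct) response
    feedback: str = ""        # optional comment, especially valuable in self-assessment

    def score(self, response: str) -> int:
        # Objective scoring: one mark if the keyed response was selected.
        return 1 if response.strip().upper() == self.key else 0

item = MultipleChoiceItem(
    stem="Which is the capital city of Pakistan?",
    options={"A": "Paris", "B": "Lisbon", "C": "Islamabad", "D": "Rome"},
    key="C",
    feedback="Islamabad is the capital; the other options are European capitals.",
)
print(item.score("c"))  # 1
```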
Advantages:
Multiple-choice test items are not a panacea. They have advantages and disadvantages just as any other type of test item, and teachers need to be aware of these characteristics in order to use multiple-choice items effectively.
Versatility

Multiple-choice test items are appropriate for use in many different subject-matter areas, and can be
used to measure a great variety of educational objectives. They are adaptable to various levels of
learning outcomes, from simple recall of knowledge to more complex levels, such as the student’s
ability to:
 Analyze phenomena
 Apply principles to new situations
 Comprehend concepts and principles
 Discriminate between fact and opinion
 Interpret cause-and-effect relationships
 Interpret charts and graphs
 Judge the relevance of information
 Make inferences from given data
 Solve problems
The difficulty of multiple-choice items can be controlled by changing the alternatives, since the more homogeneous the
alternatives, the finer the distinction the students must make in order to identify the correct answer. Multiple-choice
items are amenable to item analysis, which enables the teacher to improve the item by replacing distracters that are
not functioning properly. In addition, the distracters chosen by the student may be used to diagnose misconceptions of
the student or weaknesses in the teacher’s instruction.

Validity
In general, it takes much longer to respond to an essay test question than to a multiple-choice test item, since composing and recording an essay answer is such a slow process.
A student is therefore able to answer many multiple-choice items in the time it would take to answer a
single essay question. This feature enables the teacher using multiple-choice items to test a broader
sample of course contents in a given amount of testing time. Consequently, the test scores will likely be
more representative of the students’ overall achievement in the course.

Reliability
Well-written multiple-choice test items compare favourably with other test item types on the issue of
reliability. They are less susceptible to guessing than are true-false test items, and therefore capable of
producing more reliable scores. Their scoring is more clear-cut than short answer test item scoring
because there are no misspelled or partial answers to deal with. Since multiple-choice items are
objectively scored, they are not affected by scorer inconsistencies as are essay questions, and they are
essentially immune to the influence of bluffing and writing ability factors, both of which can lower the
reliability of essay test scores.
Efficiency
Multiple-choice items are amenable to rapid scoring, which is often done by scoring machines. This
expedites the reporting of test results to the student so that any follow-up clarification of instruction
may be done before the course has proceeded much further. Essay questions, on the other hand, must
be graded manually, one at a time. Overall, multiple-choice tests are:
 Very effective
 Versatile at all levels
 Minimum of writing for student
 Guessing reduced
 Can cover broad range of content

Conclusion:
Multiple-choice items are a common way to measure student understanding and recall. Wisely constructed and utilized, multiple-choice questions make for stronger and more accurate assessments. At the end of this activity, you will be able to construct multiple-choice test items and identify when to use them in your assessments. Let's begin by thinking about the advantages and disadvantages of using multiple-choice questions.

Reference :
 8602 code book “Educational Assessment and Evaluation’’
 www.google.com
 www.uscharacterschools.org

QUESTION NO.5

Which factors affect the reliability of a test?

Answer:

Reliability:
Reliability is a measure of the consistency of a metric or a method. Every metric or method we use, including things like
methods for uncovering usability problems in an interface and expert judgment, must be assessed for reliability. In
fact, before you can establish validity, you need to establish reliability.

What does the term reliability mean? Reliability means trustworthiness. A test score is called reliable when we have reasons for believing the score to be stable and objective. For example, if the same test is given to two classes and is marked by different teachers, and it still produces similar results, it may be considered reliable. Stability and trustworthiness depend upon the degree to which the score is free of chance error. We must first build a conceptual bridge between the question asked by the individual (i.e., are my scores reliable?) and how reliability is measured scientifically. This bridge is not as simple as it may first appear. When a person thinks of reliability, many things may come to mind: my friend is very reliable, my car is very reliable, my internet bill-paying process is very reliable, my client's performance is very reliable, and so on. The characteristics being addressed are concepts such as consistency, dependability, predictability, and variability. Note that implicit in such reliability statements is the idea that behaviour, machine performance, data processes, and work performance may sometimes not be reliable. The question is: how much do the scores of tests vary over different observations?

Definitions of Reliability:
According to Merriam Webster Dictionary:

“Reliability is the extent to which an experiment, test, or measuring procedure yields the same results
on repeated trials.”

According to Hopkins & Antes (2000):


“Reliability is the consistency of observations yielded over repeated recordings either for one subject or
a set of subjects.”

Joppe (2000) defines reliability as:


“…The extent to which results are consistent over time and an accurate representation of the total
population under study is referred to as reliability and if the results of a study can be reproduced under
a similar methodology, then the research instrument is considered to be reliable.” (p. 1)
The more general definition of reliability is: the degree to which a score is stable and consistent when measured at different times (test-retest reliability), in different ways (parallel-forms and
alternate-forms), or with different items within the same scale (internal consistency).
FACTORS AFFECTING RELIABILITY:
Reliability of a test is an important characteristic, as we use test results for future decisions about students' educational advancement, for job selection, and much more. The methods to assure the reliability of tests have been discussed, and many examples have been provided to build an in-depth understanding of the concepts. Here we shall focus upon the different factors that may affect the reliability of a test. The degree of the effect of each factor varies from situation to situation: controlling a factor may improve reliability, while failing to do so may lower the consistency of the scores produced. Some of the factors that directly or indirectly affect test reliability are given below.

Test Length:
As a rule, adding more homogeneous questions to a test will increase the test's reliability. The more
observations there are of a specific trait, the more accurate the measure is likely to be. Adding more
questions to a psychological test is similar to adding finer distinctions on a measuring tape.
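The classical result behind this rule, not named in the text but standard in classical test theory, is the Spearman-Brown prophecy formula, which predicts the reliability of a test lengthened k-fold with comparable, homogeneous items:

```python
# A minimal sketch: the Spearman-Brown prophecy formula predicts the
# reliability of a test lengthened by a factor k with homogeneous items.
def spearman_brown(reliability: float, k: float) -> float:
    return k * reliability / (1 + (k - 1) * reliability)

# Doubling a test whose current reliability is .60:
print(round(spearman_brown(0.60, 2), 2))  # 0.75
```

Doubling the length of a test with reliability .60 raises the predicted reliability to .75, which is the "finer distinctions on a measuring tape" effect described above.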
Method Used to Estimate Reliability:
The reliability coefficient is an estimate that can change depending on the method used to calculate it.
The method chosen to estimate the reliability should fit the way in which the test will be used.

Heterogeneity of Scores
Heterogeneity refers to the differences among the scores obtained from a class. You may say that there are some students who got high scores and some who got low scores, and the difference could be due to any reason: income level, the intelligence of the students, parents' qualifications, etc. Whatever the reason for the variability, the greater the variability (range) of test scores, the higher the reliability. Increasing the heterogeneity of the examinee sample increases variability (individual differences), and thus reliability increases.

Difficulty
A test that is too difficult or too easy reduces the reliability (e.g., fewer test-takers get the answers correct, or vice versa). A moderate level of difficulty increases test reliability.

Errors that Can Increase or Decrease Individual Scores:

There might be errors committed by test developers that also affect the reliability of teacher-made tests. These errors affect students' scores, making them deviate from the students' true ability, and therefore affect the reliability. A careful consideration of the following factors may help to measure the true ability of the students.
 The test itself: the overall look of the test may affect students' scores. Normally a test is written in a well-readable font size and style, and the language of the test should be simple and understandable.
 The test administration: after the development of the test, the test developer may have to prepare a manual for test administration. The time, environment, invigilation, and anxiety also affect students' performance while attempting the test; therefore, uniform administration of the test leads to increased reliability.
 The test scoring: marking of the test is another factor behind variation in students' scores. Normally there are many raters to rate the students' responses/answers on a test. Objective-type test items, and a marking rubric for essay-type/supply-type test items, help to obtain consistent scores.
Ensuring the Reliability of a Test:
The most straightforward ways to improve a test's reliability are:
First, calculate the item-test correlations and rewrite or reject any items whose correlations are too low. Any item that does not correlate with the total test at least at (point-biserial) r = .25 should be reconsidered.
Second, look at the items that did correlate well and write more like them. The longer the test, the higher the reliability will be.
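As an illustration of the first step, the sketch below computes a point-biserial correlation between one dichotomously scored item and the total test score; the item and total scores are hypothetical.

```python
# A minimal sketch (hypothetical data): the point-biserial correlation between
# a 0/1-scored item and total test scores; low values flag items to rewrite.
from statistics import mean, pstdev

def point_biserial(item: list[int], totals: list[float]) -> float:
    # r_pb = (M_pass - M_all) / SD_all * sqrt(p / q), where p is the proportion
    # of examinees answering the item correctly and q = 1 - p.
    pass_totals = [t for i, t in zip(item, totals) if i == 1]
    p = len(pass_totals) / len(item)
    q = 1 - p
    return (mean(pass_totals) - mean(totals)) / pstdev(totals) * (p / q) ** 0.5

item_scores  = [1, 0, 1, 1, 0, 1, 0, 1]
total_scores = [38, 21, 35, 30, 24, 33, 26, 37]
print(round(point_biserial(item_scores, total_scores), 2))  # ~0.90, well above .25
```

Here the item discriminates well, so it would be kept; an item with r below .25 would be rewritten or rejected.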

Reference :
 8602 code book “Educational Assessment and Evaluation’’
 www.google.com
 www.udel.edu
 www.uscharacterschools.org

