ALLAMA IQBAL OPEN UNIVERSITY
COURSE CODE: 8602
SPRING 2024

Define assessment and write a detailed note on the principles of classroom assessment.
Definition of Assessment
Assessment is the systematic process of gathering, analyzing, and
interpreting information about students' learning, skills, abilities, and
understanding to make informed decisions about instruction, curriculum,
and student progress. It involves using various tools and methods to
evaluate what students know, understand, and can do.

Principles of Classroom Assessment


Effective classroom assessment is guided by principles that ensure
fairness, validity, reliability, and relevance. These principles help
educators design and implement assessments that genuinely reflect
students' learning. Below is a detailed explanation of the key principles:

1. Alignment with Learning Objectives


• Assessments must be directly aligned with the learning goals and
objectives of the curriculum.
• Example: If the objective is to develop critical thinking, the
assessment should include tasks that require analysis and
reasoning, not rote memorization.
• Significance: Ensures that assessments measure what they are
intended to evaluate.

2. Validity


• Validity refers to the extent to which an assessment measures what it claims to measure.
• Example: A math test should assess mathematical concepts, not
language proficiency.
• Significance: Ensures the assessment outcomes are accurate and
meaningful for decision-making.

3. Reliability
• Reliability refers to the consistency of assessment results across
time, different evaluators, or varied conditions.
• Example: If a student takes the same test on different days, the
results should be similar if their knowledge hasn't changed.
• Significance: Builds trust in the assessment process and ensures
fairness.

4. Fairness
• Assessments must be free from bias and provide all students an
equal opportunity to demonstrate their abilities.
• Example: Avoid using culturally specific language or contexts that
may disadvantage certain groups of students.
• Significance: Promotes equity in the classroom.

5. Formative and Summative Balance


• Assessments should include both formative (ongoing assessments to guide instruction) and summative (end-of-unit or term evaluations) approaches.
• Example: Using quizzes, observations, and projects (formative)
alongside final exams or cumulative portfolios (summative).
• Significance: Provides a comprehensive view of student progress
and areas for improvement.

6. Feedback-Oriented
• Assessments should provide meaningful, timely, and constructive
feedback to students.
• Example: After a test, giving specific comments on strengths and
areas that need improvement.
• Significance: Encourages learning and growth by guiding students
in the right direction.

7. Variety of Methods
• Effective assessment employs diverse methods, such as written
tests, oral presentations, projects, and peer evaluations.
• Example: Combining multiple-choice questions, essays, and group
activities in a unit assessment.
• Significance: Captures a holistic picture of students’ abilities and
caters to different learning styles.

8. Student Involvement


• Students should be actively involved in the assessment process, such as through self-assessment or peer assessment.
• Example: Allowing students to evaluate their own projects using a
rubric.
• Significance: Promotes self-reflection, accountability, and
ownership of learning.

9. Transparency
• The criteria, methods, and purposes of assessments should be clear
to both teachers and students.
• Example: Sharing rubrics, guidelines, and expectations before an
assignment.
• Significance: Reduces confusion and ensures that students
understand how they are being evaluated.

10. Ongoing and Continuous


• Assessment should not be a one-time event but a continuous
process that tracks learning over time.
• Example: Monitoring student progress through regular quizzes,
observations, and discussions.
• Significance: Identifies learning trends and provides opportunities
for timely intervention.

11. Practicality and Feasibility


• Assessments should be realistic in terms of time, resources, and classroom context.
• Example: Designing an assessment that can be graded efficiently
and doesn't overburden the teacher.
• Significance: Ensures smooth implementation and sustainability.

12. Contextual Relevance


• Assessments should reflect real-world applications and meaningful
contexts.
• Example: Including problem-solving tasks that simulate real-life
scenarios in science or economics.
• Significance: Enhances the relevance and applicability of learning
for students.

Write a detailed note on the purposes of testing in education.


Purposes of Testing in Education
Testing plays a crucial role in the educational process, serving a variety
of purposes that contribute to the academic development of students, the
improvement of teaching methodologies, and the assessment of
educational systems. Here is an in-depth exploration of the purposes of
testing in education:

1. Measuring Student Learning


Testing is primarily used to measure students' knowledge, skills, and
understanding in a particular subject or area. It helps educators evaluate how well students have mastered the learning objectives set for a course
or grade level.
• Formative Assessment: Used during the learning process to
identify gaps and adjust instruction accordingly.
• Summative Assessment: Conducted at the end of a course or unit
to evaluate overall achievement.

2. Diagnosing Strengths and Weaknesses


Tests can identify individual students’ strengths and weaknesses,
enabling teachers to provide targeted support. Diagnostic testing helps
to:
• Highlight specific areas where a student is excelling or struggling.
• Develop personalized learning plans.
• Group students for remedial or advanced instruction.

3. Guiding Instruction
Assessment data from tests inform teaching practices by highlighting the
effectiveness of instructional strategies. Teachers can use test results to:
• Modify lesson plans.
• Allocate more time to challenging topics.
• Adapt teaching methods to cater to diverse learning styles.

4. Providing Feedback
Testing provides valuable feedback to students, parents, and educators:


• Students: Gain insight into their progress, which can motivate them to improve and set goals.
• Parents: Receive updates on their child’s performance and areas
needing attention.
• Teachers and Administrators: Understand the efficacy of their
teaching methods and curricular design.

5. Encouraging Accountability
Tests hold students, teachers, and educational institutions accountable
for achieving desired outcomes.
• Students are motivated to prepare and engage with the material.
• Teachers are driven to ensure their instruction aligns with
standards.
• Schools and districts are evaluated on their ability to meet
benchmarks.

6. Certifying Competence
In many cases, tests are used to certify that a student has achieved a
certain level of competence or skill. For example:
• High-stakes tests like graduation exams validate a student’s
readiness to advance or enter the workforce.
• Professional certifications or licensure exams confirm specialized
knowledge or skills.


7. Promoting Learning and Retention


Testing is not only a tool for measurement but also a method for
reinforcing learning. The testing effect refers to the phenomenon where
retrieving information during a test enhances long-term retention.
Frequent low-stakes testing can:
• Encourage regular review and practice.
• Reduce anxiety for high-stakes exams by familiarizing students
with test formats.

8. Evaluating Curriculum and Programs


Testing helps assess the effectiveness of educational programs and
curricula by:
• Highlighting strengths and weaknesses in curriculum design.
• Ensuring alignment with learning standards and objectives.
• Informing decisions about curriculum revision or replacement.

9. Supporting Research in Education


Tests are integral to educational research. Data gathered from
assessments is used to:
• Analyze trends in student performance over time.
• Explore factors affecting learning outcomes.
• Develop new teaching strategies and interventions.

10. Facilitating Selection and Placement


Tests are often used for selection and placement purposes, such as:
• Identifying students for advanced courses, gifted programs, or
remedial education.
• Placing students in appropriate grade levels or subject groups
based on their proficiency.

11. Benchmarking and Comparisons


Testing allows for benchmarking at multiple levels:
• Student-Level: Comparing individual performance against peers.
• Class/School-Level: Comparing performance across different
classes, schools, or districts.
• National/Global-Level: Participating in standardized assessments
like PISA to compare national educational outcomes globally.

12. Supporting Equity in Education


When designed and implemented fairly, testing can help ensure all
students have equal opportunities to demonstrate their knowledge and
skills. Standardized tests, for example, provide a uniform metric for
evaluation, reducing bias in assessment.

Challenges in Testing
While testing serves these essential purposes, it must be carefully
designed and implemented to avoid:
• Overemphasis on rote learning.


• High levels of anxiety and stress among students.


• Misuse of results for punitive measures rather than improvement.

Discuss different types of questions in questionnaire design with appropriate examples.
In questionnaire design, different types of questions are used depending
on the research objectives, the nature of the data being collected, and the
audience being targeted. Each type of question serves a specific purpose
and influences how respondents provide their answers. Below is an
explanation of the primary types of questions, along with examples for
each:

1. Closed-Ended Questions
Closed-ended questions provide respondents with a limited set of
predefined response options, making it easier to quantify and analyze
data.
Types of Closed-Ended Questions:
• Multiple Choice Questions
Respondents select one or more answers from a list of options.
Example:
What is your preferred mode of transport?
o ☐ Car

o ☐ Bike


o ☐ Bus

o ☐ Train
• Yes/No Questions
These require a simple binary response.
Example:
Have you traveled internationally in the last year?
o ☐ Yes

o ☐ No
• Likert Scale
Measures attitudes or opinions on a scale (e.g., agreement,
satisfaction).
Example:
How satisfied are you with our service?
o ☐ Very dissatisfied

o ☐ Dissatisfied

o ☐ Neutral

o ☐ Satisfied

o ☐ Very satisfied
• Rating Scale
Respondents rate a specific aspect on a numeric or descriptive
scale.
Example:
Rate the quality of our customer support (1 = Poor, 5 = Excellent).
o 1☐


o 2☐

o 3☐

o 4☐

o 5☐

2. Open-Ended Questions
Open-ended questions allow respondents to answer freely in their own
words. These questions provide rich, qualitative data but are harder to
analyze.
Example:
What improvements would you suggest for our service?
Response: ______________________________

3. Demographic Questions
These questions collect background information about the respondents.
They are often closed-ended but can include open-ended options when
appropriate.
Example:
What is your age group?
• ☐ Under 18

• ☐ 18–24

• ☐ 25–34

• ☐ 35–44


• ☐ 45 and above
What is your occupation?
Response: ______________________________

4. Contingency Questions
These are follow-up questions dependent on a previous response. They
help in filtering irrelevant questions.
Example:
Did you purchase any of our products in the last month?
• ☐ Yes (If yes, please answer the next question.)

• ☐ No (Skip to Question 5)
Which product did you purchase?
Response: ______________________________

5. Rank-Order Questions
These ask respondents to rank options in a specific order based on
preference or importance.
Example:
Rank the following features of our product in order of importance (1 =
Most important, 4 = Least important).
• ☐ Price

• ☐ Durability

• ☐ Design


• ☐ Functionality

6. Matrix Questions
Matrix questions combine multiple Likert-scale questions in a grid
format, making it efficient for respondents to answer.
Example:
How would you rate the following aspects of our service?
Aspect Poor Fair Good Very Good Excellent

Staff behavior ☐ ☐ ☐ ☐ ☐

Response time ☐ ☐ ☐ ☐ ☐

Ease of access ☐ ☐ ☐ ☐ ☐

7. Dichotomous Questions
These are a simpler form of closed-ended questions with only two
possible answers, often used for binary decision-making.
Example:
Are you currently employed?
• ☐ Yes

• ☐ No

8. Semantic Differential Scale Questions


These measure attitudes across a range, with opposing adjectives at either end of the scale.
Example:
Please rate our website design.
| Unappealing ☐ ☐ ☐ ☐ ☐ Appealing |
| Difficult to navigate ☐ ☐ ☐ ☐ ☐ Easy to navigate |

9. Checklist Questions
These allow respondents to select multiple options from a list.
Example:
Which of the following apps do you use daily?
• ☐ Instagram

• ☐ Twitter

• ☐ Facebook

• ☐ LinkedIn

10. Picture/Visual-Based Questions


These use images or graphics to gather responses, often helpful in
engaging respondents or in surveys for children or diverse audiences.
Example:
Select the smiley that best represents your experience.


Discuss the differences between restrictive response and extended response test items and their uses, with appropriate examples.
Restrictive response and extended response test items are two types of
constructed-response assessments. They differ in terms of scope, depth,
and the type of responses they elicit from students.

1. Restrictive Response Test Items


These test items require students to provide concise and focused
answers. They are designed to assess specific knowledge or skills, often
within a limited scope.
Characteristics:
• Responses are brief and to the point.
• Focus on specific knowledge, concepts, or processes.
• Typically used to evaluate factual knowledge, basic
comprehension, or application of a single concept.
Uses:
• Assess factual knowledge and recall.
• Evaluate the ability to apply specific procedures or techniques.
• Test comprehension of a narrowly defined topic.
Examples:
• Example 1: Define photosynthesis in one sentence.
(Answer: Photosynthesis is the process by which green plants use
sunlight to synthesize food from carbon dioxide and water.)


• Example 2: Solve the equation: 2x + 5 = 15. (Answer: x = 5.)

2. Extended Response Test Items


These test items require students to provide more elaborate, detailed, and
open-ended responses. They assess higher-order thinking skills such as
analysis, synthesis, and evaluation.
Characteristics:
• Responses are comprehensive and detailed.
• Allow for creativity, critical thinking, and integration of multiple
ideas.
• Evaluate depth of understanding and ability to construct
arguments.
Uses:
• Assess the ability to analyze, evaluate, and synthesize information.
• Evaluate writing skills and ability to organize and express ideas.
• Test understanding of complex or interconnected concepts.
Examples:
• Example 1: Explain the process of photosynthesis and discuss its
significance for the ecosystem.
(Answer: Students would write a detailed explanation of the
photosynthesis process, including the role of chlorophyll, the light
and dark reactions, and its significance in the food chain and
oxygen production.)


• Example 2: Analyze the causes and consequences of World War II. Provide examples to support your argument.
(Answer: This would require students to write an essay covering
political, economic, and social causes, major events, and global
consequences, supported by evidence.)

Comparison Table
Aspect               Restrictive Response                        Extended Response
Length of Response   Brief and focused                           Detailed and comprehensive
Scope                Narrow, specific                            Broad, allowing for exploration
Skills Assessed      Recall, application, basic comprehension    Analysis, synthesis, evaluation
Examples             Definitions, problem-solving                Essays, in-depth explanations
Evaluation Criteria  Accuracy of facts, correctness of answer    Depth, organization, creativity, evidence

Choosing Between Them


• Use restrictive response items when the goal is to assess
foundational knowledge or specific skills quickly and efficiently.
• Use extended response items when evaluating students' ability to
think critically, write effectively, or integrate complex ideas.


By carefully selecting the type of item, educators can align assessments with learning objectives and provide a more accurate measure of students' understanding and abilities.

What is the reliability of a test? Explain the different types of reliability in detail.
Reliability of a Test
Reliability of a test refers to the consistency, stability, and repeatability
of the test results. In other words, a reliable test consistently measures
what it is intended to measure, producing similar results under consistent
conditions. If a test is reliable, repeated administrations of the test, or
repeated measures from the same individual, will yield similar
outcomes, assuming that the trait being measured has not changed.
Reliability is a critical concept in psychological testing, education, and
research because it ensures that the results of a test can be trusted and
are not due to random errors. There are different types of reliability, each
assessing different aspects of consistency in test performance. Below are
the most commonly recognized types of reliability:

1. Test-Retest Reliability
Test-retest reliability measures the consistency of a test's results when
it is administered to the same group of people at two different points in
time. This type of reliability is particularly useful for measuring traits or
abilities that are relatively stable over time, such as intelligence or
personality traits.


• How it's measured: The test is given to the same group of people twice, with a time interval between the two administrations. The correlation between the two sets of scores is computed (usually using the Pearson correlation coefficient); a higher correlation indicates higher test-retest reliability (see the sketch after this list).
• Ideal for: Situations where the characteristic being measured does
not change quickly (e.g., intelligence, personality).
• Limitations: Test-retest reliability can be affected by memory or
learning effects, particularly if the test items are the same or
similar each time.
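As a minimal illustration of this calculation (the scores and variable names below are invented for demonstration), the correlation can be computed as follows. The same computation applies to parallel-forms reliability, discussed later, where the two arrays would instead hold scores from two equivalent forms of the test.

```python
import numpy as np

# Invented scores for the same 8 students on two administrations of one test.
time1 = np.array([78, 85, 62, 90, 71, 88, 65, 80])
time2 = np.array([75, 88, 60, 92, 70, 85, 68, 82])

# Pearson correlation between the two administrations;
# a value close to 1 indicates stable scores over time.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```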

2. Internal Consistency Reliability


Internal consistency reliability refers to the degree to which all items
in a test measure the same concept or construct. In other words, it
assesses how consistently the items in a test or scale produce similar
results. A test with high internal consistency ensures that the various
items on the test are measuring the same underlying attribute.
• How it's measured: It is usually assessed using statistical methods like Cronbach's alpha or split-half reliability. Cronbach's alpha is the most commonly used index, where a higher value (close to 1) indicates greater reliability (a computational sketch follows this list).
o Cronbach’s alpha ranges from 0 to 1, with values above
0.70 typically considered acceptable, though values closer to
1.0 are ideal.
o Split-half reliability involves dividing the test into two
halves (e.g., odd-numbered and even-numbered items), computing the scores for each half, and then correlating these scores.
• Ideal for: Multi-item tests, such as surveys or questionnaires,
where items are designed to measure different aspects of a single
construct.
• Limitations: Internal consistency is primarily concerned with how
items correlate with each other, so it doesn’t necessarily indicate
whether the test measures the construct in a valid or
comprehensive way.
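The following sketch shows how Cronbach's alpha can be computed from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the function name and data matrix are invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    # scores: one row per respondent, one column per item.
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented data: 6 respondents answering 4 Likert items (scored 1-5).
data = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(data):.2f}")  # above 0.70 is typically acceptable
```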

3. Inter-Rater Reliability
Inter-rater reliability (also known as inter-observer reliability) refers to
the degree to which different raters or observers agree on their
assessments when using the same test or tool. This is particularly
important in tests or evaluations that involve subjective judgment or
scoring, such as in clinical assessments, interviews, or performance
evaluations.
• How it's measured: It is assessed by comparing the ratings or scores given by multiple raters for the same individuals or subjects (see the sketch after this list). Common measures include:
o Cohen’s Kappa: A statistical coefficient that adjusts for the
possibility of the agreement occurring by chance.
o Intra-class correlation (ICC): A measure of reliability used
when there are more than two raters.
• Ideal for: Situations where more than one person is responsible for
scoring or evaluating the same subject, such as in clinical settings,
peer reviews, or educational assessments.


• Limitations: Variability in raters' interpretations and the lack of clear criteria for judgments can reduce inter-rater reliability.
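A minimal sketch of Cohen's Kappa for two raters, using kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected by chance; the teacher ratings below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: proportion of cases where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the raters' proportion products.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented example: two teachers grading the same 10 essays.
teacher1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
teacher2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]
print(f"Cohen's kappa: {cohens_kappa(teacher1, teacher2):.2f}")
```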

4. Parallel-Forms Reliability
Parallel-forms reliability involves comparing two different versions of
a test that are designed to measure the same construct. The goal is to
determine whether both forms of the test produce consistent results. This
type of reliability is useful when you want to avoid practice effects in a
test-retest situation.
• How it's measured: The test is administered to the same group of
people using two different but equivalent forms of the test. The
scores from both versions are correlated, and a high correlation
indicates good parallel-forms reliability.
• Ideal for: Testing the same construct with different items, often
used in large-scale assessments or standardized testing where
alternate forms of the test may be needed (e.g., SAT, GRE).
• Limitations: Developing parallel forms of a test that are
equivalent in difficulty and content can be challenging and time-
consuming.

5. Split-Half Reliability
Split-half reliability is a method of testing internal consistency by
splitting a test into two halves and checking how well the two halves
correlate with each other. The idea is that if the two halves of a test are
measuring the same thing, their scores should be highly correlated.


• How it's measured: The test is divided into two halves (for example, odd-numbered and even-numbered items). The scores for the two halves are correlated, and the result is used to estimate the overall reliability of the test (see the sketch after this list). This method is often used as an alternative to Cronbach's alpha.
• Ideal for: Short tests or those with many items where full retesting
may not be feasible.
• Limitations: The way in which the test is split can affect the
results, and it may not always reflect the reliability of the full test.
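A short sketch of the split-half procedure on invented data. One point beyond the text above: because each half is only half the length of the full test, the raw half-test correlation is commonly adjusted upward with the Spearman-Brown formula, r_full = 2 * r_half / (1 + r_half), as done here.

```python
import numpy as np

def split_half_reliability(scores: np.ndarray) -> float:
    odd = scores[:, 0::2].sum(axis=1)    # odd-numbered items (1st, 3rd, ...)
    even = scores[:, 1::2].sum(axis=1)   # even-numbered items (2nd, 4th, ...)
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)     # Spearman-Brown correction

# Invented data: 6 students answering 8 items scored right (1) or wrong (0).
data = np.array([
    [1, 1, 1, 0, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
    [0, 1, 0, 1, 0, 0, 1, 0],
])
print(f"Split-half reliability: {split_half_reliability(data):.2f}")
```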

Summary of Types of Reliability:


• Test-Retest Reliability: consistency of scores over time. Best for stable characteristics (e.g., intelligence, personality). Common measures: Pearson correlation, intraclass correlation.
• Internal Consistency: consistency of items measuring the same construct. Best for multi-item tests and surveys. Common measures: Cronbach's alpha, split-half reliability.
• Inter-Rater Reliability: agreement between different raters or observers. Best for subjective assessments and observations. Common measures: Cohen's Kappa, intra-class correlation.
• Parallel-Forms Reliability: consistency between two equivalent forms of a test. Best for large-scale assessments and alternate forms of a test. Common measure: Pearson correlation.
• Split-Half Reliability: correlation between two halves of a test. Best for short tests or when re-administration is not feasible. Common measure: Pearson correlation.
