
Model Assessment Framework



2024
For Grades 9 – 12

Core Assessment Standards for Conceptual Learning across Regions


Table of Contents

Preface
Message from the Federal Minister, Education and Professional Training
Message from the Secretary, Ministry of Federal Education & Professional Training
Message from the Executive Director of IBCC
Executive Summary
Highlights of Each Chapter
Chapter 1. Introduction
1.1. Introduction
1.2. Background
1.3. Rationale
1.4. Significance
1.5. Objectives
Chapter 2. Standards for Assessment
2.1. The Concept of Assessment
2.2. Washback Effect of Assessment on Teaching and Learning
2.3. Importance and Purpose of Assessment
2.4. Types of Assessments
2.4.1 Formative Assessment
2.4.2 Formative Assessment Based on Bloom's Taxonomy for High-Stakes Exams (Grades IX to XII)
2.4.3 Integration of Technology in Formative Assessment
2.4.4 International Best Practices
2.4.5 General Principles for Formative Assessment
2.4.6 Summative Assessment (Board Examinations)
2.5 Minimum Quality Standards for Board Examinations
2.5.1 Structure of Quality Assessment
2.5.2 Formulating the Structure of Assessment
2.5.3 Pre-Examination Standards
2.5.4 During Examination Standards
2.5.5 Post-Examination Standards
2.5.6 Standards for Digitization of Examinations
2.6 Capacity Building and Training
Chapter 3. Requirements/Alignment with Standards of Curriculum
3.1 National Curriculum
3.2 Standards, Benchmarks, and Learning Outcomes
Chapter 4. Process of Assessment Development
4.1. Standards of Assessment
4.2. Importance of Item Writing and Review
4.3. Key Principles for Item Writing
4.4. Item Review Process
4.5. Examples of Bad Items and Good Items
Chapter 5. Conduct of Examination
5.1. Professional Training
5.2. Conducive Environment
5.3. Transparency
5.4. Appropriate Logistic Support
5.5. Appropriate Measures (SOPs for Secrecy)
5.6. Controlling Cheating and Examination Malpractices
Chapter 6. Coding, Marking, and Compilation of Results
6.1. Coding of Answer Scripts
6.2. Marking of Answer Scripts
6.3. Compilation of Results
6.4. On-Screen Marking
Chapter 7. Post-Exam Analysis
7.1 Purpose of Post-Exam Analysis
7.1.1 Improving Learning Outcomes
7.1.2 Informing Instruction and Feedback to Teachers
7.1.3 Enhancing Assessment Quality
7.1.4 Stakeholder Communication
7.2 Data Collection and Management
7.2.1 Student Performance Data
7.2.2 Importance of Accurate Data Collection
7.2.3 Item Analysis
7.3 Item Analysis Theories
7.4 Exam Results Dashboard
7.4.1 Key Features and Importance
7.4.2 Contextual Data and Demographic Information
7.4.3 Integrating Data for Comprehensive Analysis
7.5 Data Analysis Techniques
7.5.1 Descriptive Statistics
7.5.2 Item Analysis
7.5.3 Inferential Statistics
7.5.4 Subgroup Analysis
7.6 Reporting and Interpretation of Results
7.6.1 Overview of Student Performance
7.6.2 Item-wise Analysis Report
7.6.3 Subgroup Performance Report
7.7 Feedback Mechanism
7.7.1 Student Reports
7.7.2 Strengths and Areas for Improvement
7.7.3 Teacher Feedback and Aggregate Performance Data
7.7.4 Instructional Recommendations
7.7.5 School and Province/Region Reports
7.7.6 Policy Recommendations for Administrators and Policymakers
7.8 Utilizing Analysis for Continuous Improvement
7.8.1 Curriculum Adjustment
7.8.2 Professional Development
7.8.3 Refining Future Assessments Based on Item Analysis and Feedback
7.8.4 Informing Educational Policies and Resource Allocation Based on Performance Data
7.9 Communicating Results to Stakeholders
7.9.1 Students and Parents
7.9.2 Teachers: Comprehensive Data to Guide Instructional Strategies
7.9.3 Educational Authorities for Policy Briefs
7.9.4 Educational Authorities for Strategic Planning
7.10 Conclusion
Chapter 8. Policy Decisions on IBCC Forum
Chapter 9. Guidelines and SOPs
9.1 Guidelines and SOPs for Item Writers
9.1.1 Guidelines for Item Writing
9.1.2 SOPs for Item Writers
9.2 Guidelines and SOPs for Reviewers
9.2.1 Guidelines for Reviewers
9.2.2 SOPs for Reviewers
9.3 Guidelines and SOPs for Moderators
9.3.1 Guidelines for Moderators
9.3.2 SOPs for Moderators
9.4 Guidelines and SOPs for Test Assembly
9.4.1 Guidelines for Test Assembly
9.4.2 SOPs for Test Assembly
9.5 Guidelines and SOPs for Test Administration
9.5.1 Guidelines for Test Administration
9.5.2 SOPs for Test Administration
9.6 Guidelines and SOPs for Marking/Coding
9.6.1 Guidelines for Marking/Coding
9.6.2 SOPs for Marking/Coding
Resources
Lead Contributors

List of Figures
Figure 1. Difference between Formative and Summative Assessment
Figure 2. General Principles for Formative Assessment
Figure 3. Assessment Blueprint
Figure 4. Pyramid of Cognitive Domain
Figure 5. National Curriculum Framework
Figure 6. Item Review Process
Figure 7. Marking of Answer Scripts
Figure 8. Compilation of Results
Figure 9. Purpose of Post-Exam Analysis
Figure 10. Sample Data Dashboard
Figure 11. Guidelines and Standard Operating Procedures (SOPs) for Assessment Staff
Figure 12. Process and Guidelines for Item Moderators
Figure 13. Process and SOPs for Test Assembly

List of Tables
Table 1. Assessment Blueprint Example
Table 2. Curriculum-Based Question Matrix
Table 3. SLO-Based Question Matrix Example
Table 4. Validity Types, Description and Standard Values
Table 5. Matrix of Minimum Standards
Table 6. Pre-Examination Phase Matrix
Table 7. During Examination Phase Matrix
Table 8. Post-Examination Phase Matrix
Table 9. Criteria Summary for Capacity Building
Table 10. Bad Item Examples and Their Improved Versions
Table 11. Comparison between CTT and IRT
Table 12. Example of Grading Points for Grade 11
Table 13. Example of Grading Points for Grade 12
Table 14. New Grading System for SSC and HSSC


Preface
The Model Assessment Framework (MAF) represents a major step in the country's educational
progress, demonstrating joint dedication to improving the quality of education and providing
equal learning opportunities for all students. As we embark on a new chapter in educational
development, this document is a detailed guide to comprehending and executing a strong
student assessment system that adheres to international standards while considering our distinct
cultural and academic settings.
The Inter Boards Coordination Commission (IBCC) serves as a forum for the chairpersons and
chief executives of member examination boards and allied organizations. Its mandate includes
exchanging information, devising policies for unified implementation, transforming the
examination system through modern techniques, and improving capacity building among
teachers and examination staff. To address the identified gaps and improve assessment
practices, the IBCC formulated a Model Assessment Framework of Pakistan for secondary and
higher secondary levels in collaboration with allied departments from Federal and Provincial
Governments.
The Model Assessment Framework document has been developed by reviewing key
frameworks and standards such as TIMSS, the Curriculum Framework, the Cambridge
Assessment Framework Standards, and the Global Proficiency Framework. The aim is to
define minimum standards for quality assessment and effective testing. By adhering to these
proposed standards, all educational boards in Pakistan can enhance the quality of their
assessments and, consequently, the overall quality of education.
The Model Assessment Framework (MAF) of Pakistan has been designed to offer a clear and
organized method for assessing students at secondary and higher secondary education levels.
This framework results from thorough research, consultation, and cooperation among experts
from different facets of the education sector. It represents a collective vision for a fair,
transparent, and accountable education system that encourages constant improvement and
excellence.
The Model Assessment Framework is structured to cover the secondary and higher secondary
levels of education. It is designed to be implemented in phases, and its key components include
assessment standards, assessment tools and techniques, implementation guidelines, reporting,
and feedback mechanisms. Successfully implementing the Model Assessment Framework
requires collaboration from all stakeholders in the education sector. It calls for a shift in how
we approach assessments, moving from a focus on testing to a broader perspective of
continuous and comprehensive evaluation.
This document is not an end but a living guide that will evolve over time in response to
emerging educational challenges and opportunities. Hopefully, the Model Assessment
Framework will catalyze positive change, inspiring a culture of learning and excellence that
will propel Pakistan towards a brighter and more prosperous future.
We would like to express our sincere gratitude to all those who have contributed to developing
this framework. Their dedication, expertise, and insights have been invaluable in shaping this
visionary document. Together, we can build an education system that truly meets the needs of
students and prepares them for the challenges of the 21st century.


Message from the Federal Minister, Education and Professional Training

I am honored to present the Model Assessment Framework (MAF)—a significant step forward in our mission to provide quality, standardized, and equitable education to every student across Pakistan. The development of this framework reflects the tireless commitment of dedicated educators, experts, and policymakers who have worked together to create an assessment system that aligns with our Curriculum and meets the diverse needs of our nation.

The Model Assessment Framework is more than a tool for evaluation; it is a strategic initiative aimed at raising educational standards, fostering student growth, and ensuring that our students are prepared for the challenges of the 21st century. By focusing on Student Learning Outcomes (SLOs) and cognitive skill levels, the MAF introduces a level of precision, consistency, and fairness that is unprecedented in our assessment practices. This framework will support our examination boards and educators in implementing high-quality assessments that accurately reflect each student's understanding and application of knowledge.

I extend my deepest appreciation to the Inter Boards Coordination Commission (IBCC), provincial boards, related experts, and technical committee members, whose expertise and vision were instrumental in developing this comprehensive framework. I am confident that, with the dedication of our educational community, the MAF will foster a more cohesive, transparent, and effective assessment system across Pakistan.

I urge all educators, policymakers, and stakeholders to embrace this framework with the same commitment and vision with which it was created. Together, let us work toward a brighter educational future where every child in Pakistan has access to the quality education they deserve.

Dr. Khalid Maqbool Siddiqui
Federal Minister


Message from the Secretary, Ministry of Federal Education & Professional Training

It is with great pride and a deep sense of responsibility that I present the Model Assessment Framework, a critical step forward in transforming our educational assessment system in Pakistan. This framework represents our collective aspiration to create a more inclusive, equitable, and modernized approach to assessing the knowledge and abilities of our students across the country.

In today's rapidly evolving world, it is imperative that our education system goes beyond merely evaluating students' memorization skills. The Model Assessment Framework is designed to foster critical thinking, problem-solving, and the application of knowledge—skills essential for success in the 21st century. This framework aligns with the curriculum and aims to bring uniformity and fairness across all educational boards, ensuring that every student, regardless of background or region, has an equal opportunity to thrive. The MAF marks a pivotal moment in our efforts to modernize education in Pakistan by embracing technology, particularly through the digitalization of the examination system. This enhances the efficiency and security of assessments and improves the integrity of our examination processes.

I extend my heartfelt appreciation to the Inter Boards Coordination Commission (IBCC) and the dedicated teams involved in developing this pioneering document. Their tireless efforts have produced a framework that sets a new benchmark for quality in educational assessments, ensuring that our system remains both nationally relevant and internationally competitive.

As we move towards the implementation phase, I urge all stakeholders—educators, policymakers, and administrators alike—to actively engage with the MAF and support its rollout across all provinces. By working together, we can ensure that this framework not only elevates the standard of our assessments but also drives meaningful change in how we teach and prepare our students for the future.

Mohyuddin Ahmad Wani
Special Secretary of the Ministry of Federal Education and Professional Training (MoFEPT)


Message from the Executive Director of IBCC

With great pride and optimism, I present the Model Assessment Framework (MAF), a transformative policy document that will shape the future of assessments across Pakistan's educational landscape. The development of this framework represents a critical milestone in our ongoing efforts to enhance the quality, fairness, and consistency of educational assessments for grades 9 to 12 across all 29 examination boards in Pakistan. This document embodies our collective vision of creating an education system that evaluates knowledge and fosters critical thinking, problem-solving, and creativity—essential for 21st-century learners.

The MAF reflects our commitment to aligning assessments with the curriculum, ensuring that every student, regardless of region or background, has an equal opportunity to succeed. By establishing a standardized and reliable system of assessments, we are laying the foundation for equitable educational practices that will benefit students, educators, and institutions alike.

As we embark on implementing this framework, I believe that digitalizing the examination system and creating secure, centrally managed item banks will be instrumental in streamlining the examination process, improving efficiency, and ensuring the integrity of our assessments. The MAF's emphasis on higher-order thinking and the holistic evaluation of students' abilities will help shape the next generation of learners who are equipped to meet the challenges of an ever-evolving world.

I sincerely appreciate the dedicated team of experts and contributors who have worked tirelessly to develop this comprehensive document. Their insights, research, and passion have produced a framework poised to revolutionize how we assess and nurture talent across Pakistan.

As the Executive Director of the IBCC, I am confident that the Model Assessment Framework will catalyze positive change in our education system. I hope this framework will guide our educators and policymakers and inspire a new era of learning and development in Pakistan. Together, we can build an educational future that is both inclusive and innovative, providing every student with the tools they need to thrive.

I look forward to witnessing the positive impact of the MAF as we work together to uphold the highest standards of education and assessment in our country.

Dr. Ghulam Ali Mallah
Executive Director, IBCC


Executive Summary
The Model Assessment Framework (MAF) is a comprehensive policy document establishing
a standardized, equitable, and quality-driven assessment system for grades 9 to 12 in Pakistan.
Developed in alignment with the National Curriculum Framework (NCF), the MAF provides
a structured approach to assessment, ensuring that it is valid, reliable, and fair across the
country. The framework outlines key assessment principles and processes designed to enhance
learners' educational outcomes and inform teaching practices.
The primary purpose of the MAF is twofold:

• To create a unified policy framework that supports a consistent examination system across the country, ensuring that assessments are aligned with the National Curriculum.
• To provide valuable feedback to all stakeholders—including teachers, students, educational managers, teacher training institutions, and curriculum authorities—enhancing the overall quality of education.

The framework is divided into three core phases:

• Pre-Examination Standards: These standards ensure the development of assessment tools aligned with the Student Learning Outcomes (SLOs) outlined in the National Curriculum. Item writing, validation, test blueprinting, and security protocols are key elements of this phase, designed to ensure fairness, content coverage, and cognitive balance. Bias-free content, pilot testing, and detailed test blueprints guarantee that the assessment process begins with high-quality test items.
• During Examination Standards: The framework emphasizes maintaining the integrity of the examination process by outlining clear protocols for invigilation, time management, and student accommodations. It focuses on creating an environment that ensures fairness and equal opportunities for all learners. In addition, emergency handling and security measures are established to protect the examination's integrity.
• Post-Examination Standards: This phase includes fair and accurate marking guidelines, result dissemination, and stakeholder feedback. Moderation processes ensure consistency in grading, while feedback mechanisms offer insights to students and educators on performance and areas for improvement. The standards also include a focus on post-examination analysis, ensuring continuous improvement of the assessment system.
The MAF also addresses the washback effect of assessments, emphasizing how well-designed assessments can positively influence teaching and learning. The framework seeks to encourage teaching practices that promote deep learning rather than superficial test preparation by ensuring that assessments align with the curriculum and educational goals. In addition, the document highlights the importance of developing comprehensive Guidelines and Standard Operating Procedures (SOPs) for item writers, reviewers, moderators, and others involved in the assessment process, ensuring consistency and quality at every stage of the examination lifecycle.

Overall, the Model Assessment Framework is a foundation for building a robust and uniform assessment system that measures student achievement and promotes educational quality and equity across Pakistan. The framework aims to ensure that assessments provide valuable insights for learners and educators, fostering continuous improvement and accountability in the education system.


Highlights of Each Chapter

Chapter 1: Introduction
This chapter introduces the Model Assessment Framework (MAF) for Pakistan, explaining its
purpose to standardize and improve the quality of educational assessments for grades 9 to 12.
It emphasizes the importance of aligning assessments with the national curriculum to ensure
fairness, inclusivity, and consistency. The chapter also highlights the role of assessments in
evaluating student competencies beyond rote learning, focusing on critical thinking and
problem-solving.
Chapter 2: Standards for Assessment
This chapter defines what constitutes an effective assessment and discusses its various types,
including formative and summative assessments. It introduces the washback effect, describing
how assessments influence teaching and learning practices. The chapter stresses the need for
assessments to be aligned with learning outcomes, cover a range of cognitive levels, and
incorporate both formative and summative methods for comprehensive evaluation.
Chapter 3: Alignment with National Curriculum Standards
This chapter outlines how the MAF aligns with the National Curriculum. It highlights the
importance of setting standards, benchmarks, and learning outcomes that prepare students for
the challenges of the 21st century. The chapter stresses the need for assessments to be
competency-based, promoting problem-solving skills rather than relying solely on content-
based testing.
Chapter 4: Process of Assessment Development
This chapter focuses on the item writing and reviewing process, stressing the importance of
creating valid, reliable, and unbiased assessment items. It also explains the moderation process,
including pilot testing and statistical analysis, to ensure the quality and fairness of each
assessment item. Examples of poorly constructed and improved items are provided to illustrate
best practices.
Chapter 5: Conduct of Examination
This chapter discusses the procedures for administering exams, including training personnel,
ensuring a conducive environment, maintaining transparency, and providing appropriate
logistical support. It includes detailed Standard Operating Procedures (SOPs) for managing the
security of exam materials and handling unforeseen situations during the exam process.
Chapter 6: Coding, Marking, and Compilation of Results
This chapter explains the procedures for coding answer scripts, marking them according to
standardized rubrics, and compiling results. It highlights the use of On-Screen Marking (OSM)
technology to improve accuracy and efficiency in grading. The chapter also discusses measures
to ensure uniformity in marking through multi-layered checks and moderation.

Chapter 7: Post-Exam Analysis
This chapter emphasizes the importance of analyzing exam results to improve learning
outcomes and enhance the quality of future assessments. It explains data collection and
management techniques, including item analysis and using descriptive statistics to inform
educational policies and teaching strategies. The chapter also discusses how stakeholder
feedback can be used to refine the assessment process.
Chapter 8: Policy Decisions on IBCC Forum
This chapter outlines key policy decisions made by the forum of the Inter Boards Coordination
Commission (IBCC) to standardize and improve examination practices across Pakistan. Major
reforms include raising the passing marks from 33% to 40%, limiting the use of grace marks
to second attempts, and ensuring the alignment of student learning outcomes (SLOs) with the
National Curriculum. It emphasizes creating item banks, ensuring transparent paper-setting
practices, and separating theory and practical examinations. Additionally, the chapter discusses
the importance of building teacher capacity, establishing research and development units in
boards, and implementing security measures for exam materials.
Chapter 9: Guidelines and SOPs
This chapter provides comprehensive guidelines and SOPs for various personnel involved in
the assessment process, including item writers, reviewers, moderators, and those responsible
for test assembly, administration, and marking. It aims to ensure consistency, fairness, and
quality across all stages of the assessment lifecycle.


Chapter 1. Introduction

1.1. Introduction
The Model Assessment Framework (MAF) is a comprehensive guideline designed to
standardize and elevate the quality of education across secondary and higher secondary levels.
The MAF aims to ensure that the academic assessments are aligned with the national
curriculum, promoting consistency, fairness, and a high standard of education throughout the
country.
The framework focuses on curriculum alignment, ensuring that assessments accurately reflect the learning objectives outlined in the national curriculum. It stresses evaluating students' competencies by measuring not rote memorization alone but also understanding, application of knowledge, critical thinking, and problem-solving skills. It promotes fairness and inclusivity by creating assessments that are equitable and accessible to all students, regardless of their socio-economic backgrounds. Teacher development is another prime objective, pursued through guidelines and resources that help educators prepare students for assessments effectively. Finally, the framework builds in a robust mechanism for continuous improvement, incorporating feedback loops to refine assessment practices over time.

1.2. Background
The Model Assessment Framework (MAF) for Secondary and Higher Secondary Levels is a
strategic initiative aimed at standardizing and improving the quality of education across the
country. Pakistan's education system has historically faced challenges such as disparities in
quality, regional imbalances, and varying standards across different educational boards. The
assessment systems are often criticized for focusing on rote learning rather than critical
thinking and problem-solving skills.
There was a growing recognition of the need for a comprehensive assessment framework that
could provide a uniform evaluation standard. The aim was to ensure that assessments are
aligned with international best practices and measure a broad range of student competencies.
The formulation of the MAF aligns with the National Education Policy and other strategic
documents that emphasize the need for educational reforms. The framework addresses the
goals set out in the Sustainable Development Goals (SDGs), particularly Goal 4, which focuses
on quality education.

1.3. Rationale
The formulation of a Model Assessment Framework for Secondary and Higher Secondary
Levels in Pakistan is driven by several key factors.
o There have been issues relating to the standardization and equity in the prevailing
assessment system.
o The prevailing system does not provide a structured approach to evaluating the
effectiveness of educational programs and students' performance, resulting in unmet
educational objectives.
o The assessments are not aligned with the national curriculum, which, in turn, does not help
promote coherence in teaching and learning processes.

o The current system does not provide feedback to students, parents, and educators on academic progress and areas needing improvement, and thus fails to foster a culture of continuous learning and growth.

1.4. Significance
The Model Assessment Framework ensures that assessment standards are consistent across all regions and schools, reducing disparities and promoting equity. With a standardized assessment, the quality of education can be uniformly measured and improved across the country. The MAF also provides benchmarks for student performance, allowing strengths and weaknesses in the education system to be identified, and its implementation will yield accurate assessment data that supports informed decisions for improving educational practices and policies.
The MAF ensures that assessments are aligned with the national curriculum, fostering a more
coherent educational experience. Also, it focuses on assessing skills and knowledge relevant
to the curriculum, and future educational and career needs. The MAF provides valuable
feedback to students, teachers, and parents, guiding improvements in teaching methodologies
and student learning strategies, and also highlights areas where teachers may need additional
training or support, promoting continuous professional development. The MAF will also help
identify gaps in educational attainment between different regions, socio-economic groups, and
genders, leading to targeted interventions.
The MAF also aligns the national assessment system with international standards, facilitating
comparisons and improvements based on global best practices, which, in turn, will help
students' credentials be recognized internationally, aiding those who pursue education or
careers abroad. The MAF focuses on assessing critical thinking, problem-solving, and other
21st-century skills essential for success in higher education and the workforce. It helps ensure
that students are adequately prepared for the demands of higher education and the job market.
The Model Assessment Framework for Secondary and Higher Secondary Levels in Pakistan
aims to create a robust, equitable, and transparent education system. It is designed to enhance
educational quality, promote fairness, and ensure students are well-prepared for their future
academic and professional endeavors.

1.5. Objectives
The Model Assessment Framework (MAF) for Secondary and Higher Secondary levels in
Pakistan serves several crucial objectives aimed at enhancing the quality of education and
improving educational outcomes across the country. These include:
o Establishing uniform guidelines and standards to ensure consistency in assessment
practices across educational boards of different regions of the country to help maintain
fairness and transparency in evaluating student performance.
o Setting clear assessment criteria and benchmarks to uphold and monitor the quality of
education provided at secondary and higher secondary levels, ensuring that assessments
accurately reflect the learning objectives and curriculum standards.
o Aligning assessment practices with broader educational goals and objectives ensures that
assessments not only measure academic achievement but also support the development of
critical thinking, problem-solving skills, and other competencies essential for students'
overall growth.

o Designing assessments not only to evaluate but also to facilitate learning, providing
valuable feedback to students, teachers, and policymakers, helping them identify areas of
improvement and tailor instructional strategies accordingly.
o Supporting evidence-based decision-making in education policy by collecting and analyzing national assessment data, including identifying trends, disparities, and areas needing intervention or improvement.
o Ensuring that assessments are fair and inclusive, accommodating the diverse needs and backgrounds of students across Pakistan, helping reduce disparities in educational outcomes and promoting equal opportunities for all learners.
o Providing for the training of teachers and educators in effective assessment practices, ensuring that educators are equipped with the necessary skills to implement valid, reliable assessments aligned with educational objectives.


Chapter 2. Standards for Assessment

2.1. The Concept of Assessment


Assessment is the process of continuously defining, selecting, designing, collecting, analyzing,
interpreting, and using information to increase students’ learning and development. It
systematically collects, reviews, and uses information about educational programs to improve
student learning. Assessment focuses on what students know, what they can do, and what
values they have when they receive their education.
Assessment is a crucial element in the learning process. While it serves the essential purpose
of marking and grading, it also provides valuable insights to both learners and educators about
the progression of teaching and learning activities. Various terms such as assessment,
evaluation, testing, and examination are often used to measure Student Learning Outcomes
(SLOs). Although these terms have distinct academic meanings, they broadly refer to gathering
valid, reliable, and accurate information about learners’ knowledge, understanding, skills, and
ability to apply what they have learned in real-life contexts.

2.2. Washback Effect of Assessment on Teaching and Learning


The Washback Effect (also known as backwash) refers to the influence of assessments or tests
on teaching and learning. It describes how the content and assessment format impact how
teachers teach and students learn. For example, if a test emphasizes rote memorization, teachers
may focus their teaching on memorization skills, and students will primarily focus on
memorizing facts rather than developing a deeper understanding of the subject matter.
Assessment also has a washback effect, influencing both teaching methods and learning approaches. An effective assessment measures the intended outcomes precisely, and its results align with the expected learning goals. Washback can be positive or negative:
● Positive Washback: When the assessment aligns well with the educational goals and
promotes deeper learning. It encourages teaching methods and learning practices that
align with the intended curriculum. For instance, if a test emphasizes critical thinking,
teachers will likely focus on teaching students how to think critically, positively
influencing learning.
● Negative Washback: When the assessment promotes undesirable practices such as
"teaching to the test." This often leads to a narrow focus on test-taking skills or
superficial learning rather than understanding the material comprehensively. For
example, if a test only rewards recall of facts, teachers may neglect important skills like
problem-solving or analysis.
In the context of education, the washback effect is significant because assessments should
ideally support and reinforce the broader educational goals, promoting effective teaching and
meaningful learning experiences.


2.3. Importance and Purpose of Assessment


Assessment is an integral component of the teaching process. It is the third stage in an instructional cycle, undertaken after planning and delivering instruction. Assessment has accordingly become one of the most closely examined educational issues of the twentieth and twenty-first centuries, and this global interest has generated many international debates.
However, despite the discrepancies in beliefs about the benefits and disadvantages of individual
forms of assessment, most assessment experts share the view that using many instruments to
measure students’ learning quality will provide more reliable results and more valid
interpretations because no single assessment procedure or instrument can be expected to
provide perfect, error-free information. Therefore, regional, provincial, national, and international
large-scale assessments, alternative assessments, and formative and summative assessments
have become increasingly popular in recent years.
Assessment can be formative or summative. Assessment that combines formative and summative aspects and occurs between them has been referred to as interim/diagnostic assessment. Different assessments serve different purposes. Diagnostic assessment is a form of pre-assessment through which teachers can evaluate students' strengths, weaknesses, knowledge, and skills before instruction. Pre-assessments allow teachers to adjust the curriculum to meet students' present and future needs.

2.4. Types of Assessments

Figure 1. Difference between Formative and Summative Assessment

2.4.1 Formative Assessment


Formative assessment refers to various methods teachers use to conduct in-process
evaluations of student comprehension, learning needs, and academic progress during a
lesson, unit, or course. The goal of formative assessment is to monitor student learning to
provide ongoing feedback that teachers can use to improve their teaching and students to
improve their learning. It also provides feedback to students to help them know where they
are, where they need to be, and what they need to do to reach mastery. More specifically,
formative assessments:

• Help students identify their strengths and weaknesses and target areas that need to be addressed; and
• Help teachers recognize where students are struggling and address these problems.
2.4.2 Formative Assessment Based on Bloom's Taxonomy for High-Stakes Exams (Grades IX to XII)
A wide range of educational research sheds light on the significance of assessment in the
teaching-learning process. One of the most vital benefits of assessment is that it helps evaluate
the children's performance. This can be helpful both for the students and the teachers. Effective
assessment of students’ performance requires a structured framework that comprehensively
evaluates cognitive, psychomotor, and affective domains.
Bloom's Taxonomy provides a well-established classification of educational objectives that can be used to design fair, reliable, and valid assessments. This framework includes a detailed breakdown of Bloom's Taxonomy and offers implementation guidelines tailored to the educational context of Pakistan. The following subsections outline assessment approaches and tools for each domain.

Cognitive Domain Assessment


o Written Exams: Assessing remembering, understanding, and applying knowledge.
o Tools: Multiple-choice questions, short answer questions, essays.
o Project Work: Assessing analyzing, evaluating, and creating skills.
o Tools: Research papers, design projects, collaborative projects.
o Oral Exams: Assessing understanding and articulation of ideas.
o Tools: Presentations, viva voce, debates.

Psychomotor Domain Assessment


o Practical Tests: Assessing physical skills in subjects like Physical Education and Science.
o Tools: Laboratory experiments, sports activities, technical skill tests.
o Skill Demonstrations: Evaluating proficiency in vocational subjects.
o Tools: Demonstrations, workshops, practical exams.
o Continuous Observation: Recording students' performance over time.
o Tools: Observation checklists, progress reports, performance logs.
Affective Domain Assessment
o Reflective Journals: Assess students' self-awareness and emotional responses to learning
experiences. Students maintain journals where they reflect on their feelings, attitudes, and
values related to their studies.
o Tools: Journals, guided reflection prompts, online blogging platforms.
o Peer Feedback: Develop students' ability to give and receive constructive feedback,
fostering a supportive learning environment.
o Tools: Structured feedback forms, peer review sessions, collaborative projects.
o Role-Playing and Simulations: Assess empathy, ethical reasoning, and interpersonal
skills through simulated scenarios.
o Tools: Scenario scripts, role-play guidelines, assessment rubrics.
o Service-Learning Projects: Foster a sense of community service and social responsibility.

o Tools: Project proposals, reflection essays, community partner feedback.
o Cultural Immersion Projects: Encourage appreciation of cultural diversity and
linguistic heritage.
o Tools: Research assignments, presentations, cultural artifacts.
o Literature Circles: Enhance engagement with literature and foster a sense of community.
o Tools: Reading guides, discussion questions, and group evaluation forms.
o Reflective Essays: Encourage deep reflection on personal growth and attitudes.
o Tools: Essay prompts, reflection guidelines, assessment rubrics.
o Teacher Observations: Monitor students’ behaviors, attitudes, and interactions in
various settings.
o Tools: Observation checklists, anecdotal records, behavior rubrics.
o Project-based Assessments: Assess behavioral changes, attitudes, and interests in
learners.
o Tools: Theoretical project-based assessments, experimental project-based assessments.
o Quizzes: Focus primarily on evaluating the conceptual understanding of students.
o Tools: Scored quizzes, personality quizzes.
o Portfolio: Provide valuable insight into the learners’ work ethics and set of values.
o Tools: Research papers, creative writing pieces, artwork, graphic designs, and 3D
renderings.

Examples of Affective Domain Assessment


o In a science lab, students work in groups and then provide feedback on each other's
teamwork, collaboration, and attitudes toward the experiments.
o In a language class, students role-play a scenario where they negotiate a conflict, with
assessment focusing on their ability to understand and articulate different viewpoints.
o Students organize a community clean-up event and write reflections on how the experience
influenced their views on environmental responsibility and community involvement.
o In a language class, students research and present on a cultural festival, discussing its
significance and their personal reflections on cultural diversity.
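
For item writers applying this taxonomy at scale, a simple verb-matching heuristic can give a quick first-pass tag for draft questions before human review. The Python sketch below is purely illustrative: the verb lists are common examples for Bloom's cognitive levels rather than an official IBCC specification, and guess_bloom_level is a hypothetical helper, not part of any board's toolchain.

# Illustrative sketch: tag a draft question with a Bloom's cognitive level by
# matching common verbs. Verb lists are typical examples, not an official
# specification; naive substring matching can misfire, so a trained reviewer
# should always confirm the classification.
BLOOM_LEVELS = {
    "remember":   ["define", "list", "recall", "name"],
    "understand": ["explain", "summarize", "classify", "compare"],
    "apply":      ["solve", "demonstrate", "calculate", "apply"],
    "analyze":    ["differentiate", "organize", "examine", "contrast"],
    "evaluate":   ["justify", "critique", "judge", "defend"],
    "create":     ["design", "construct", "compose", "formulate"],
}

def guess_bloom_level(question: str) -> str:
    """Return the first Bloom level whose sample verb appears in the question."""
    lowered = question.lower()
    for level, verbs in BLOOM_LEVELS.items():
        if any(verb in lowered for verb in verbs):
            return level
    return "unclassified"

print(guess_bloom_level("Explain why metals conduct electricity."))    # understand
print(guess_bloom_level("Design an experiment to test plant growth.")) # create

Such a heuristic only narrows the reviewer's work; final classification of each item against the cognitive domain should rest with trained item writers and moderators.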

2.4.3 Integration of Technology in Formative Assessment


Integrating technology for formative assessment can greatly enhance engagement and provide
valuable insights into student learning.
Here are some easy tips, along with specific technology tools for formative assessment:

Use Online Quizzes and Polls


o Tip: Create quick quizzes or polls to assess understanding during or after lessons.
o Example Tool: Google Forms - Easily create surveys or quizzes with multiple question
types. Responses are collected automatically and can be analyzed instantly.

Interactive Whiteboards or Apps


o Tip: Use digital whiteboards or interactive apps for real-time feedback and collaborative problem-solving.
o Example Tool: Jamboard - Collaborative online whiteboard where students can brainstorm, draw, and add sticky notes. It supports real-time interaction and group work.

Gamify Learning with Quiz-Based Platforms
o Tip: Gamify assessments to engage students and make learning more enjoyable.
o Example Tool: Kahoot! - Create quizzes and games that students can participate in using
their devices. It provides immediate feedback and encourages friendly competition.

Digital Exit Tickets or Reflections


o Tip: Use digital tools for quick exit tickets or reflections to gauge understanding and gather
feedback.
o Example Tool: Padlet - Set up a digital bulletin board where students can post their
responses to prompts or questions. It allows for easy sharing and viewing of student
reflections.

Peer Assessment and Feedback Tools


o Tip: Facilitate peer assessment and feedback using digital platforms.
o Example Tool: Flipgrid - Students can record short videos to share their thoughts, presentations, or responses. Peers can then provide feedback through video responses or comments.

Digital Rubrics and Checklists

o Tip: Use digital rubrics or checklists for assessing student work and providing clear criteria.
o Example Tool: RubiStar - Create customizable rubrics for various assessment criteria. It simplifies the assessment process and ensures consistency in grading.

Audio and Video Responses


o Tip: Allow students to respond to questions or prompts using audio or video recordings for
more expressive feedback.
o Example Tool: Vocaroo - Students can record audio responses to assignments or
questions. It's simple to use and provides a personal touch to assessments.

Digital Portfolios for Ongoing Assessment


o Tip: Use digital portfolios to track and showcase student progress over time.
o Example Tool: Seesaw - Students can upload photos, videos, and documents to
demonstrate their learning journey. Teachers can provide feedback and comments directly.

The above-mentioned tools and tips can help integrate technology into formative assessment
strategies, making assessments more engaging, timely, and informative for teachers and
students.
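
To make the "instant analysis" mentioned above concrete, the short Python sketch below shows one way a teacher might process quiz responses exported from any such platform: compare each answer with the key and compute the per-item facility (percentage correct) so weak concepts stand out. The answer key, responses, and the 50% review threshold are all invented for illustration and are not tied to a specific tool.

# Minimal sketch, assuming responses exported as one dict per student.
# All data is made up; no particular quiz platform is assumed.
from collections import defaultdict

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}

responses = [
    {"Q1": "B", "Q2": "D", "Q3": "C"},  # student 1
    {"Q1": "B", "Q2": "A", "Q3": "A"},  # student 2
    {"Q1": "C", "Q2": "D", "Q3": "C"},  # student 3
]

# Count correct answers per item.
correct = defaultdict(int)
for response in responses:
    for item, answer in response.items():
        if answer_key.get(item) == answer:
            correct[item] += 1

# Facility (proportion correct) per item; flag items most students missed.
for item in answer_key:
    facility = correct[item] / len(responses)
    flag = "  <- re-teach this concept" if facility < 0.5 else ""
    print(f"{item}: {facility:.0%} correct{flag}")

This classroom-level facility check is only a first look; formal item analysis, including difficulty and discrimination indices, is treated in Chapter 7.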

2.4.4 International Best Practices


o Holistic Approach: Combining formative (ongoing) and summative (final) assessments.
o Diverse Assessment Methods: Utilizing written exams, practical tests, projects,
presentations, and e-portfolios.
o Continuous Feedback: Providing regular feedback to guide students’ improvement.
o Student-Centered Learning: Fostering active participation and self-assessment.
o Technology Integration: Leveraging digital assessment tools, including online quizzes,
simulations, and digital portfolios. Digital platforms and tools are utilized for
administering assessments, tracking progress, and providing instant feedback.

o Contextual Relevance: Adapting global best practices to fit the cultural and educational
context.
o Government Policies: Ensuring assessments align with national education policies and
standards.
o Teacher Training: Providing continuous professional development on modern
assessment techniques.
o Community Involvement: Engaging parents and communities in the educational
process.
o Resource Allocation: Ensuring adequate resources and infrastructure for effective
assessment implementation.

2.4.5 General Principles for Formative Assessment

Clear Criteria

Develop rubrics or guidelines that outline specific criteria related to attitudes, values, and
emotional responses.

Feedback Mechanisms

Provide constructive feedback that encourages students to reflect on and refine their
affective engagement.

Multiple Modalities

Use a variety of assessment methods (e.g., journals, discussions, projects) to capture
different aspects of the affective domain.

Contextual Relevance

Ensure assessments are culturally and contextually relevant to the student's language and
science learning experiences.

Figure 2. General Principles for Formative Assessment

2.4.6 Summative Assessment (Board Examinations)


Summative examination is an important indicator of student achievement and progress. It
provides an efficient estimate of a student's overall performance in a subject area relative to
Student Learning Outcomes (SLOs), enabling reliable and valid interpretations of student
achievement and progress. Summative assessments are used to evaluate student learning, skill
acquisition, and academic achievement at the end of a defined instructional period - typically
at the secondary and higher secondary levels, at the end of each year from Grade 9 to Grade 12.
Summative assessment measures the benchmarked performance of students; its goal is to
evaluate student learning by comparing it against defined standards or benchmarks.


2.5 Minimum Quality Standards for Board Examinations


The quality of the assessment system in Pakistan has long been a subject of concern, reflecting
a significant disparity between high and low-quality assessments. The variation is evident
when comparing high-stakes exams, such as those conducted for grades 9 and 10 across
different boards, with low-stakes international assessments such as the Trends in International
Mathematics and Science Study (TIMSS). Pakistani students have underperformed in key
subjects such as languages, mathematics, and general sciences in these international
assessments, highlighting a gap in both formative and summative evaluation practices. High-
stakes exams, crucial for academic progression, are often inconsistent in quality due to the
varied standards set by different boards. In response to this need for reform, this document
has been developed by reviewing key standards and frameworks, including TIMSS,
Curriculum Frameworks, Cambridge Assessment Framework Standards, and Global
Proficiency Frameworks. It aims to establish minimum standards for quality assessment and
good testing practices. Adherence to these proposed standards by all educational boards can
significantly improve the quality of assessments and, consequently, the overall education
system in Pakistan.
A good test is characterized by several key qualities that ensure its effectiveness in assessing
student learning and understanding. Firstly, it must exhibit reliability, meaning it consistently
produces similar results when administered under comparable conditions. Secondly, validity
is essential, ensuring that the test accurately measures what it intends to measure, aligned
closely with the curriculum and educational objectives. Objectivity is another critical aspect,
ensuring that the scoring and evaluation process is fair and impartial, free from bias. Clarity
and comprehensiveness in test questions are vital to minimize ambiguity and ensure all
aspects of the curriculum are adequately covered. Finally, a good test should be practical
regarding administration time and resources, providing an appropriate balance between
assessing the depth of knowledge and feasibility within educational contexts.

2.5.1 Structure of Quality Assessment


The assessment structure encompasses various elements, including the format of questions, the
weighting of different assessment components, and the criteria for grading. A well-structured
assessment should be comprehensive, covering a range of cognitive skills from basic recall to
higher-order thinking. It should also be aligned with the curriculum and learning objectives,
ensuring that it measures what it is intended to measure. The structure should include multiple
assessment forms, such as written exams, oral presentations, practical demonstrations, and
project-based evaluations.

2.5.2 Formulating the Structure of Assessment

The structure of assessments must be meticulously designed to ensure alignment with the
curriculum and learning objectives, capturing the core competencies and knowledge areas
that students are expected to master. A Table of Specifications (ToS) serves as a critical tool,

providing a blueprint to balance content coverage and cognitive levels, ensuring fairness,
transparency, and comprehensive evaluation.

Alignment with Curriculum and Learning Objectives


o Assessments must reflect the key competencies and knowledge areas outlined in the
curriculum.
o Use a Table of Specifications (ToS) to map out the content areas and cognitive skills the
assessment will cover.
Balanced Representation
o Ensure a balanced distribution of questions across different topics and cognitive levels.
o The ToS should specify the percentage of questions dedicated to each content area and
cognitive level (a minimal validation sketch follows this list).
Diverse Question Types
o Include a mix of question types such as multiple-choice, short-answer, essay, and practical
tasks.
o Ensure that questions are designed to assess various cognitive skills, including knowledge
recall, application, analysis, and synthesis.
Validity and Reliability
o Conduct regular reviews and pilot testing of assessment items to ensure they accurately
measure the intended learning outcomes.
o Use statistical methods to analyze the reliability of assessments, such as calculating
Cronbach's alpha for internal consistency.
Fairness and Accessibility
o Design assessments to be free from bias and accessible to all students, including those with
disabilities.
o Provide accommodations, such as extended time or alternative formats, as necessary.
Continuous Improvement
o Implement a feedback loop involving teachers, students, and education experts to
refine assessment practices continuously.
o Regularly update assessment items to reflect changes in the curriculum and
emerging educational standards.
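As an illustration of the balance check referenced above, the following minimal Python sketch represents a ToS as a mapping from content areas to cognitive-level percentages and validates that the cells sum to 100. The topic names, cognitive levels, and percentages are illustrative assumptions, not values prescribed by this framework.

```python
# A minimal sketch (assumed structure, illustrative numbers): a ToS expressed
# as {content area: {cognitive level: percentage of total marks}}, with a
# check that the cells sum to 100 and a report of the cognitive-level totals.

TOS = {
    "Algebra":    {"knowledge": 10, "comprehension": 10, "application": 10, "analysis": 5},
    "Geometry":   {"knowledge": 10, "comprehension": 10, "application": 10, "analysis": 5},
    "Statistics": {"knowledge": 5,  "comprehension": 10, "application": 10, "analysis": 5},
}

def validate_tos(tos):
    """Raise if the blueprint percentages do not sum to 100; return level totals."""
    total = sum(sum(levels.values()) for levels in tos.values())
    if total != 100:
        raise ValueError(f"ToS percentages sum to {total}, expected 100")
    by_level = {}
    for levels in tos.values():
        for level, pct in levels.items():
            by_level[level] = by_level.get(level, 0) + pct
    return by_level

print(validate_tos(TOS))
# {'knowledge': 25, 'comprehension': 30, 'application': 30, 'analysis': 15}
```

A check of this kind can be run before a paper is assembled, so imbalances in topic or cognitive-level coverage surface at the blueprint stage rather than after printing.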

Table 1. Assessment Blueprint Example

| Curriculum/SLO-Based                       | Question Type    | Number of Questions | Percentage (%) | Weightage (%) |
|--------------------------------------------|------------------|---------------------|----------------|---------------|
| Formative Assessment (Skills and Attitude) | MCQs, CRQs, ERQs |                     |                |               |
| Summative Assessment (Knowledge Base)      | MCQs, CRQs, ERQs |                     |                |               |
| Skill-Based (Board Tests)                  | ATP              |                     |                |               |

Figure 3. Assessment Blueprint

2.5.3 Pre-Examination Standards


Pre-examination standards set the foundation for a robust and reliable assessment process,
ensuring that all preparatory steps align with the curriculum and assessment framework. These
standards emphasize the importance of thorough planning, adherence to Student Learning
Outcomes (SLOs), and the creation of well-mapped assessments that reflect the intended
learning objectives.
2.5.3.1 Standard 1: Curriculum, SLO-Based Assessment, and Mapping
To ensure that the assessments effectively measure student learning, aligning them with the
national curriculum and specific Student Learning Outcomes (SLOs) is crucial. This alignment
guarantees the assessments are relevant and focused on the expected learning objectives.
Developing detailed mapping documents that link assessment items to curriculum objectives
is essential. These documents will serve as a guide to ensure that all aspects of the curriculum
are covered and that the assessments are designed to measure the intended learning outcomes
accurately.

Criteria:
o Ensure all assessments align with the national curriculum and specific Student
Learning Outcomes (SLOs).
o Develop mapping documents that clearly link assessment items to curriculum objectives.

Table 2. Curriculum-Based Question Matrix

| Topic                 | Curriculum Objectives    | SLOs                       | Assessment Type |
|-----------------------|--------------------------|----------------------------|-----------------|
| Algebra               | Apply algebraic concepts | Solve equations            | Multiple-choice |
| Reading Comprehension | Analyze texts            | Interpret text meaning     | Short-answer    |
| Science Experiments   | Conduct experiments      | Explain scientific methods | Practical exam  |
| Historical Events     | Understand history       | Analyze historical impact  | Essay           |

2.5.3.2 Standard 2: Coverage of Bloom's Taxonomy (Knowledge, Attitude, Skills)


A comprehensive assessment should cover all Bloom's Taxonomy levels, including knowledge,
comprehension, application, analysis, synthesis, and evaluation. By including a mix of
questions that assess these different cognitive levels, the assessment can provide a more
complete picture of student understanding and abilities.
This ensures that students not only memorize information but also apply, analyze, and evaluate
it - higher-order cognitive skills necessary for deeper learning.
Criteria:
o Ensure that assessments cover all levels of Bloom's Taxonomy.
o Include a mix of questions that assess knowledge, comprehension, application, analysis,
synthesis, and evaluation.

2.5.3.3 Standard 3: Content Coverage - Table of Specification


The Table of Specifications (ToS) is a critical tool in the assessment design process. It outlines
the distribution of content areas and cognitive levels, ensuring that the assessment is balanced
and covers all the necessary topics within the curriculum. The ToS serves as a blueprint for
constructing the assessment, providing a clear and structured approach to content coverage.
This systematic approach helps maintain the assessment's validity by ensuring that all key areas
are adequately represented.
Criteria:
o Develop a Table of Specifications (ToS) that outlines the distribution of content areas and
cognitive levels.
o Ensure balanced coverage of all topics within the curriculum.


Figure 4. Pyramid of Cognitive Domain

2.5.3.4 Standard 4: Question Paper Pattern (MCQs, SRQs, ERQs, ATP)


Defining the structure of question papers is essential for consistency and fairness in
assessments. This includes specifying the types of questions, such as Multiple-Choice
Questions (MCQs), Short Response Questions (SRQs), Extended Response Questions
(ERQs), and Analytical Thinking Problems (ATP). A well-balanced mix of question types is
necessary to assess different skills and knowledge levels effectively. This variety caters to
different learning styles and provides a more comprehensive assessment of student abilities.
Criteria:
o Define the structure of question papers, including the types of questions (Multiple Choice
Questions (MCQs), Short Response Questions (SRQs), Extended Response Questions
(ERQs), and Analytical Thinking Problems (ATP)).
o Ensure a mix of question types to assess different skills and knowledge levels.

Table 3. SLO-Based Question Matrix Example

| SLO   | Question Type                 | Number of Questions | Percentage of Total Assessment (%) |
|-------|-------------------------------|---------------------|------------------------------------|
| SLO 1 | Multiple-choice, Short-answer |                     |                                    |
| SLO 2 | MCQs, CRQs, ERQs, ATP         |                     |                                    |
| SLO 3 | Essay                         |                     |                                    |
| SLO 4 | Practical Task                |                     |                                    |

2.5.3.5 Standard 5: Personnel Involved in the Assessment Process


The effectiveness of the assessment process is heavily reliant on the competence and capacity
of the personnel involved at every stage—pre-examination, during examination, and post-

examination. This standard ensures that all individuals engaged in the assessment process are
adequately trained, their roles clearly defined, and their capacities continuously developed to
maintain the integrity and quality of assessments. The roles include educators, examiners,
invigilators, and administrative staff, whose expertise and skills are critical in ensuring smooth,
fair, and accurate assessments.
2.5.4 During Examination Standards
During examination standards focus on ensuring the fair, transparent, and efficient conduct of
exams. These standards prioritize the creation of a conducive environment for students, strict
adherence to guidelines by supervisory staff, and the use of digital tools to enhance monitoring
and security. Proper implementation of these standards ensures the integrity of the examination
process and fosters trust among all stakeholders.
2.5.4.1 Standard 6: Quality of Test (Validity, Reliability)
Ensuring the quality of the test is paramount for producing reliable and valid results.
Conducting validity and reliability checks helps confirm that the assessment measures what it
intends to measure and yields consistent results over time. This involves pilot testing the
assessment items and conducting item analysis to refine and improve them. Such rigorous
quality assurance processes help identify and eliminate any potential biases or errors, thereby
enhancing the overall credibility of the assessment.
Criteria:
Conduct validity and reliability checks to ensure the assessment measures what it is intended
to measure and produces consistent results.
Criteria for the Standard
Conduct Validity and Reliability Checks
o Validity: Ensure the assessment accurately measures the intended content and construct.
   o Content Validity: Ensure the test covers all relevant content areas.
   o Construct Validity: Ensure the test measures the intended construct, often assessed
     through correlations with other established measures of the same construct.
   o Criterion-Related Validity: Ensure the test predicts outcomes it is supposed to predict,
     assessed through correlation with a criterion measure.
o Reliability: Ensure the assessment yields consistent results over time and across
   different administrations.
   o Cronbach's Alpha: A measure of internal consistency, with a value of 0.7 or higher
     generally indicating acceptable reliability.
   o Test-Retest Reliability: Assess stability over time by administering the same test to
     the same group at different points in time. A correlation coefficient (e.g., Pearson's r)
     of 0.7 or higher indicates good reliability.
   o Inter-Rater Reliability: Ensure consistency across different raters, measured by
     Cohen's kappa or intraclass correlation coefficients (ICC), with values above 0.75
     indicating good agreement.
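A minimal computational sketch of the three reliability indices listed above, assuming item scores and ratings are already available as arrays; the 0.7 and 0.75 thresholds are those stated in this standard.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Internal consistency for an (examinees x items) score matrix; >= 0.7 acceptable."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = x.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def test_retest(first, second):
    """Pearson's r between two administrations of the same test; >= 0.7 is good."""
    return np.corrcoef(first, second)[0, 1]

def cohens_kappa(rater_a, rater_b, categories):
    """Chance-corrected agreement between two raters; > 0.75 indicates good agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Illustrative data: four examinees, four dichotomously scored items.
scores = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 1],
                   [0, 0, 0, 1],
                   [1, 1, 1, 1]])
print(round(cronbach_alpha(scores), 2))  # 0.67 - just below the 0.7 threshold
```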

Implement Pilot Testing and Item Analysis

Conduct pilot testing to identify problematic items before the full-scale administration.
Use item analysis techniques such as the following (a minimal sketch is given after this list):
o Item Difficulty Index: The proportion of students answering an item correctly; values
between 0.3 and 0.7 are generally considered acceptable.
o Item Discrimination Index: Measure how well an item differentiates between high
and low performers, with values above 0.2 considered acceptable.
o Distractor Analysis: Evaluate the effectiveness of incorrect options in multiple-choice
items.
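A minimal sketch of the two item statistics named above, assuming dichotomously scored items (1 = correct, 0 = incorrect); the upper/lower grouping fraction used for discrimination (27% is a common convention) is an assumption, not a requirement of this framework.

```python
# item_scores[i] is 1 if examinee i answered the item correctly, 0 otherwise;
# total_scores[i] is examinee i's total score on the whole test.

def difficulty_index(item_scores):
    """Proportion correct; 0.3 - 0.7 is generally considered acceptable."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """Proportion correct in the top group minus the bottom group; > 0.2 acceptable."""
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    n = max(1, int(len(order) * fraction))
    low, high = order[:n], order[-n:]
    return (sum(item_scores[i] for i in high) - sum(item_scores[i] for i in low)) / n

# Illustrative data: eight examinees.
item = [1, 1, 0, 1, 0, 1, 0, 1]
totals = [40, 35, 12, 38, 15, 30, 10, 42]
print(difficulty_index(item))                    # 0.625
print(discrimination_index(item, totals, 0.25))  # top 2 vs bottom 2 examinees
```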

Align with International Standards and Benchmarks


Adhere to international standards such as those provided by the International Test Commission
(ITC) and the Standards for Educational and Psychological Testing by the American
Educational Research Association (AERA), American Psychological Association (APA), and
National Council on Measurement in Education (NCME). Ensure minimum values for validity
and reliability indices are met:
o Validity Coefficients: A coefficient of 0.3 or higher is often used as a benchmark for
acceptable validity.
o Reliability Indices: A Cronbach’s alpha of 0.7 or higher, test-retest reliability coefficients
of 0.7 or higher, and inter-rater reliability values above 0.75.
Review and update assessment practices regularly to align with evolving international
benchmarks and standards.

Table 4. Validity Types, Description and Standard Values

| Check                      | Description                                                                            | Standard Value                                                             |
|----------------------------|----------------------------------------------------------------------------------------|----------------------------------------------------------------------------|
| Validity Checks            | Ensure the assessment accurately measures the intended content and construct.           | Validity coefficients ≥ 0.3                                                |
| Content Validity           | Ensure the test covers all relevant content areas.                                      | Expert review, alignment with curriculum                                   |
| Construct Validity         | Ensure the test measures the intended construct.                                        | Correlations with established measures                                     |
| Criterion-Related Validity | Ensure the test predicts the outcomes it is supposed to predict.                        | Correlation with a criterion measure                                       |
| Reliability Checks         | Ensure the assessment yields consistent results over time.                              | Cronbach's alpha ≥ 0.7, test-retest ≥ 0.7, inter-rater reliability ≥ 0.75  |
| Cronbach's Alpha           | Measure of internal consistency.                                                        | α ≥ 0.7                                                                    |
| Test-Retest Reliability    | Assess stability over time.                                                             | Pearson's r ≥ 0.7                                                          |
| Inter-Rater Reliability    | Ensure consistency across different raters.                                             | Cohen's kappa or ICC ≥ 0.75                                                |
| Pilot Testing              | Conduct preliminary testing to identify and address issues with test items.             | Pilot test sample size, representative sampling                            |
| Item Analysis              | Use statistical techniques to refine and improve test items.                            | Difficulty Index: 0.3 - 0.7; Discrimination Index > 0.2                    |
| International Standards    | Align with ITC, AERA, APA, and NCME standards.                                          | Regular review and alignment with international benchmarks                 |
| Minimum Values             | Ensure a Cronbach's alpha of 0.7+ for reliability and a validity coefficient of 0.3+.   | α ≥ 0.7, validity coefficient ≥ 0.3, test-retest ≥ 0.7, inter-rater ≥ 0.75 |

2.5.4.2 Standard 7: Marking Criteria / Rubrics


Developing detailed marking rubrics is essential for maintaining consistency and objectivity
in scoring, especially for subjective items like essays. These rubrics provide clear guidelines
for markers, ensuring that each response is evaluated based on the same criteria. Training
markers on using these rubrics is crucial to ensure they apply the criteria consistently. This
approach not only enhances the reliability of the scores but also provides transparency and
fairness in the evaluation process.
Criteria:
o Develop detailed marking rubrics for subjective items (e.g., essays) to ensure consistent
scoring.
o Train markers on the use of rubrics to maintain scoring reliability.

2.5.5 Post-Examination Standards


Post-examination standards emphasize the critical processes of coding, marking, result
compilation, and analysis to ensure accuracy, fairness, and transparency. These standards
include rigorous quality checks, adherence to marking schemes, and advanced statistical
analysis to evaluate assessment reliability.
2.5.5.1 Standard 8: Grading GPA
Establishing a standardized grading system, such as the Grade Point Average (GPA), is
necessary for ensuring consistency across different schools and boards. This standardization
helps provide a uniform measure of student performance, making comparing results across
different contexts easier. Clear criteria for grade boundaries and descriptors are essential to
ensure that grades are awarded fairly and accurately, reflecting the true performance of the
students.
Criteria:
o Establish a standardized grading system, such as GPA, to ensure consistency across
different schools and boards.
o Define clear criteria for grade boundaries and descriptors.
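A minimal sketch of a standardized grade-boundary mapping of the kind this standard calls for. The boundaries and grade points shown are hypothetical placeholders; the actual values and descriptors would be defined by the boards.

```python
# Hypothetical grade boundaries and grade points, for illustration only.
GRADE_BOUNDARIES = [  # (minimum percentage, letter grade, grade points)
    (90, "A+", 4.0),
    (80, "A", 3.7),
    (70, "B", 3.0),
    (60, "C", 2.0),
    (50, "D", 1.0),
    (0, "F", 0.0),
]

def grade(percentage):
    """Map a percentage score to a (letter, grade points) pair."""
    for minimum, letter, points in GRADE_BOUNDARIES:
        if percentage >= minimum:
            return letter, points

print(grade(84))  # ('A', 3.7)
```

Publishing the boundary table in this explicit form is what makes the mapping auditable and uniform across schools and boards.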

2.5.5.2 Standard 9: Results Analysis (Internal and External)
Conducting thorough internal and external analysis of assessment results is crucial for
identifying trends, strengths, and areas for improvement. This involves using statistical
methods to analyze the data and generate meaningful reports. These reports provide valuable
insights that can inform instructional practices and policy decisions. By analyzing the results
comprehensively, educators and policymakers can make data-driven decisions to enhance the
quality of education.
Criteria:
o Conduct thorough internal and external analysis of assessment results to identify trends,
strengths, and areas for improvement.
o Use statistical methods to analyze data and generate meaningful reports.

2.5.5.3 Standard 10: Feedback by the Stakeholders


Collecting feedback from students, educators, and other stakeholders on the assessment
process and outcomes is vital for continuous improvement. This feedback helps identify any
issues or gaps in the assessment process and provides insights into how to improve it.
Establishing a systematic process for collecting and analyzing feedback ensures that the
perspectives of all stakeholders are considered, leading to a more effective and inclusive
assessment system.
Criteria:
o Collect feedback from students, educators, and other stakeholders on the assessment
process and outcomes.
o Use feedback to make continuous improvements to the assessment system.

Table 5. Matrix of Minimum Standards

| No. | Standards          | Criteria                     | Minimum Requirement                                             |
|-----|--------------------|------------------------------|-----------------------------------------------------------------|
| 01  | Alignment          | Curriculum and SLO alignment | 100% of questions aligned with curriculum and SLOs              |
| 02  | Coverage           | Content representation       | All major topics covered                                        |
| 03  | Cognitive Balance  | Variety of cognitive levels  | 20% knowledge, 30% comprehension, 30% application, 20% analysis |
| 04  | Question Diversity | Multiple question formats    | Include at least three different question types                 |
| 05  | Validity           | Accurate measurement         | Regular review and validation                                   |
| 06  | Reliability        | Consistent results           | Pilot testing and refinement                                    |
| 07  | Fairness           | Equal opportunity            | Bias-free and accessible questions                              |
| 08  | Practicality       | Feasible within constraints  | Manageable within available resources                           |

2.5.6 Standards for Digitization of Examinations
The standards for digitization of examinations focus on integrating technology-driven solutions
to enhance the efficiency, security, and fairness of the assessment process. These standards
include the use of digital question paper delivery systems, automated proctoring tools, and
electronic marking platforms to ensure accuracy and minimize human errors. By adopting
innovative practices such as Question Item Banks (QIBs) and real-time monitoring systems,
digitization ensures a transparent and streamlined examination process that meets modern
educational demands.
2.5.6.1 Standard 11: Pre-Examination
Digitizing the pre-examination processes involves developing and implementing digital tools
for curriculum mapping, SLO alignment, and creating the Table of Specifications (ToS).
Online platforms can be used for item bank development and question paper creation, ensuring
a streamlined and efficient process. These digital tools enhance accuracy and consistency,
making the pre-examination phase more effective and manageable.

2.5.6.2 Standard 12: During Examination


During the examination, digitization can significantly enhance the efficiency and security of
the process. Implementing digital administration of exams, where feasible, ensures secure and
standardized conditions. Computerized systems for real-time monitoring and invigilation help
maintain the integrity of the examination process, preventing cheating and ensuring a fair
testing environment.

2.5.6.3 Standard 13: Post-Examination


Post-examination processes can also benefit greatly from digitization. Using digital systems
for scoring, results compilation, and analysis ensures accuracy and efficiency. Providing
digital platforms for results dissemination and stakeholder feedback collection facilitates
timely and effective communication of results. These digital systems help maintain data
security and confidentiality while making the entire process more transparent and efficient.
Table 6. Pre-Examination Phase Matrix

| Plan Period | Objective                    | Actions                                                                                   | Outcome                                             |
|-------------|------------------------------|-------------------------------------------------------------------------------------------|-----------------------------------------------------|
| Short-Term  | Curriculum and SLO Mapping   | Develop digital tools for aligning assessments with curriculum and SLOs; train educators   | Initial digital mapping tools in use                |
| Short-Term  | Item Bank Development        | Create initial digital item bank for key subjects; conduct workshops                       | Basic item bank available; educators trained        |
| Short-Term  | Table of Specification (ToS) | Develop and implement digital ToS template; train examination boards                       | Digital ToS in use; boards trained                  |
| Mid-Term    | Curriculum and SLO Mapping   | Expand digital mapping tools; regular updates based on feedback                            | Comprehensive mapping tools; regular updates        |
| Mid-Term    | Item Bank Development        | Expand item bank to more subjects; periodic reviews and updates                            | Expanded and updated item bank                      |
| Mid-Term    | Table of Specification (ToS) | Enhance the ToS system with automated checks; ongoing training                             | Advanced ToS features; continuous educator training |
| Long-Term   | Curriculum and SLO Mapping   | Integrate advanced analytics; universal tool usage                                         | Refined alignment process; universal adoption       |
| Long-Term   | Item Bank Development        | Maintain comprehensive item bank; implement AI-driven analysis                             | Extensive item bank; AI-enhanced items              |
| Long-Term   | Table of Specification (ToS) | Fully integrate ToS into curriculum planning; universal use                                | ToS integrated into planning; universal adoption    |

Table 7. During Examination Phase Matrix

| Plan Period | Objective              | Actions                                                                                 | Outcome                                               |
|-------------|------------------------|------------------------------------------------------------------------------------------|-------------------------------------------------------|
| Short-Term  | Digital Administration | Pilot digital exams in select schools; implement real-time monitoring and invigilation    | Pilot digital exams conducted; monitoring in place    |
| Short-Term  | Security Protocols     | Establish digital security protocols; train staff                                         | Enhanced security measures; trained staff             |
| Mid-Term    | Digital Administration | Expand digital exams to more schools; improve systems based on feedback                   | Widespread digital exam adoption; system improvements |
| Mid-Term    | Security Protocols     | Implement advanced security measures; regular reviews                                     | Advanced security in all centers; regular updates     |
| Long-Term   | Digital Administration | Implement fully digital exams nationwide; use AI and advanced tools for monitoring        | Nationwide digital exams; AI-enhanced monitoring      |
| Long-Term   | Security Protocols     | Continuous updates to security measures; universal implementation                         | Cutting-edge security measures universally applied    |

Table 8. Post-Examination Phase Matrix

| Plan Period | Objective                    | Actions                                                                                  | Outcome                                              |
|-------------|------------------------------|-------------------------------------------------------------------------------------------|------------------------------------------------------|
| Short-Term  | Digital Scoring and Analysis | Implement digital scoring for MCQs; train markers on digital rubrics                      | Digital scoring for MCQs; trained markers            |
| Short-Term  | Results Dissemination        | Develop an online results platform; create a feedback system                              | Online platform available; feedback system in place  |
| Mid-Term    | Digital Scoring and Analysis | Full digital scoring implementation; regular training sessions                            | Comprehensive digital scoring; continuous training   |
| Mid-Term    | Results Dissemination        | Enhance the results platform with detailed reports; collect and analyze feedback          | Improved platform; regular stakeholder feedback      |
| Long-Term   | Digital Scoring and Analysis | Use AI for scoring and analysis; continuous training                                      | AI-driven scoring; regular marker training           |
| Long-Term   | Results Dissemination        | Develop a comprehensive digital platform integrating all aspects; regular updates         | Advanced platform updated with the latest technology |


2.6 Capacity Building and Training

The successful implementation of the Model Assessment Framework (MAF) relies heavily
on the capacity and expertise of assessment staff, board officers, and teachers involved in the
examination and assessment process. Effective capacity building and training are essential to
ensure that all stakeholders are equipped with the knowledge and skills needed to carry out
standardized and reliable assessments aligned with the framework.

The Inter Boards Coordination Commission (IBCC) will be pivotal in facilitating training
programs designed to empower examination board officers and teachers. These programs will
focus on various aspects of the assessment process, including test item development,
evaluation techniques, and adherence to the guidelines and standards outlined in the
MAF. Special attention will be given to enhancing teachers' ability to implement SLO-based
teaching and assessments, ensuring alignment between classroom practices and examination
requirements. Additionally, the IBCC will spearhead efforts to digitalize the examination
process, enabling more efficient and secure management of assessments. As part of this
initiative, the IBCC will assist boards in developing and implementing indigenous Question
Item Banks (QIBs). These QIBs will standardize the examination process by providing a
centralized repository of validated items, ensuring fair and balanced assessments across all
regions. Training will be conducted for each phase of the examination process.

Pre-Examination Training:
• Conduct workshops and training sessions focused on curriculum alignment, question
setting, and ethical standards. Ensure all personnel undergo at least 10 hours of training
annually on assessment design and integrity.
• Regularly assess the capacity and readiness of question setters and planners. Use
feedback from past assessments to improve training programs and address gaps in
knowledge.

During Examination Training:


• Provide training on invigilation techniques, technology use, and examination logistics
management. Mandate that all invigilators and support staff complete a certification
program before each examination cycle.
• Monitor invigilators and support staff during examinations to ensure protocol
compliance. Conduct spot checks and use feedback mechanisms to maintain high
standards of supervision.

Post-Examination Training
• Offer training on marking schemes, data handling, and result compilation. Require
markers and graders to participate in standardization meetings and training on unbiased
grading techniques.
• Evaluate the performance of markers and graders through double marking and
consistency checks. Implement peer reviews and moderation sessions to ensure
accuracy and fairness in grading.

Table 9. Criteria Summary for Capacity Building

| Stage              | Personnel                                                            | Training and Capacity Building                        | Monitoring and Evaluation                            |
|--------------------|----------------------------------------------------------------------|-------------------------------------------------------|------------------------------------------------------|
| Pre-Examination    | Curriculum developers, question setters, exam planners, admin staff   | Annual 10-hour training, workshops                    | Regular assessments, feedback from past assessments  |
| During Examination | Examiners, invigilators, support staff, IT personnel                  | Certification program, invigilation techniques        | Spot checks, feedback mechanisms                     |
| Post-Examination   | Markers, graders, data analysts, result compilers                     | Standardization meetings, unbiased grading techniques | Peer reviews, double marking, consistency checks     |

In addition, the MAF provides guiding principles for teacher-education departments to update
their teacher-education curricula. This will enable future teachers to develop the competencies
required to meet emerging educational challenges and ensure the effective implementation of
the framework. By integrating MAF-aligned strategies into teacher education, institutions can
prepare teachers to adopt innovative teaching and assessment methods that align with 21st-
century educational demands. Through structured training, capacity-building initiatives, and
curriculum updates in teacher education, the IBCC aims to ensure that all assessment personnel
are well-prepared to implement the MAF effectively, contributing to a more equitable and
transparent education system in Pakistan.


Chapter 3. Requirements/Alignment with Standards of Curriculum

3.1 National Curriculum


The Curriculum recognizes the importance of students’ experiences, voices, and active
learning involvement. Learning experiences at school should pave the way for constructing
knowledge and fostering creativity and should become a source of joy, not stress. The
examination system seeks a shift from content-based testing to problem-solving and
competency-based assessment.
These are what the National Curriculum targets. In essence, we aspire to:
“Enable all students to develop their capacities as successful learners, confident
individuals, responsible citizens, and effective societal contributors.”

Figure 5. National Curriculum Framework

3.2 Standards, Benchmarks, and Learning Outcomes


In the 21st century, students will remain the most important natural resource to ensure
humankind's continual improvement and ultimate progress. All involved in education must
prepare students to meet the challenges of a constantly changing global society. It is time to
raise expectations for student learning.
Preparing students for success in the new millennium and beyond calls for increasing rigor
and relevance in the curriculum. In adult roles, individuals are expected to work with others


in a team setting, have an acquired knowledge base, be able to extend and refine knowledge,
be able to construct new knowledge and applications and have a habit of self-assessing their
assimilation of each dimension in their everyday decision-making process. The curriculum is
built upon Standards, Benchmarks, and Learning Outcomes to benefit student growth and
progress.

STANDARDS are what students should know and be able to do. Standards are broad
descriptions of the knowledge and skills students should acquire in a subject area. The
knowledge includes important and enduring ideas, concepts, issues, and information. The
skills that characterize a subject area include thinking, working, communication, reasoning,
and investigating. Standards may emphasize interdisciplinary themes and concepts in the core
academic subjects.
Standards are based on:
o Higher Order Thinking:
It involves students manipulating information and ideas by synthesizing, generalizing,
explaining, or arriving at conclusions that produce new meaning and understanding.
o Deep Knowledge:
It addresses the central ideas of a topic or discipline with enough thoroughness to explore
connections and relationships and to produce a relatively complex understanding.
o Substantive Conversation:
Students engage in extended conversational exchanges with the teacher and/or peers about
the subject matter in a way that builds an improved and shared understanding of ideas or
topics.
o Connections to the World Beyond the Classroom:
Students make connections between substantive knowledge and either public problems or
personal experiences.
BENCHMARKS indicate what students should know and be able to do at various
developmental levels.
LEARNING OUTCOMES indicate what students should know and be able to do for each
topic in any subject area at the appropriate developmental level. The Learning Outcomes sum
up the total expectations from the student.


Chapter 4. Process of Assessment Development

4.1. Standards of Assessment

Assessment development is a critical component of the Model Assessment Framework
(MAF), designed to ensure that examinations are aligned with Student Learning Outcomes
(SLOs), cognitive levels, and the overarching objectives of the Curriculum. This chapter
outlines the systematic process for creating standardized, equitable, and reflective assessments
of students' learning and competencies. The development process emphasizes quality and
consistency at every stage, from item creation to final exam assembly, ensuring that
assessments accurately measure students' knowledge, skills, and abilities. Standards of
Assessment establish the foundational principles that guide the assessment development
process. These standards ensure assessments' validity, reliability, and fairness while
maintaining alignment with curriculum goals.

4.2. Importance of Item Writing and Review


Item writing and reviewing are essential to developing reliable and valid assessments. Effective
item writing ensures that test questions are aligned with the curriculum's Student Learning
Outcomes (SLOs) and address the appropriate cognitive levels, such as knowledge,
understanding, and application. The review process plays a crucial role in maintaining
assessment items' validity, reliability, and fairness by identifying and rectifying any flaws,
biases, or ambiguities. Poorly written items can lead to misinterpretation, biased results, or
failure to measure intended learning outcomes. Therefore, each item must undergo thorough
review and moderation to ensure it aligns with curriculum standards and meets quality
benchmarks.
4.3. Key Principles for Item Writing

o Alignment with Learning Outcomes: Items should directly assess specific Student
Learning Outcomes (SLOs). Each item must measure the intended knowledge or skill.
o Clarity and Simplicity: Language used in items should be clear, concise, and appropriate
for the target grade level. Avoid complex or ambiguous wording that may confuse students.
o One Focus per Item: Each item should focus on one skill or concept to ensure accurate
measurement.
o Avoid Bias and Sensitivity: Items should be free from cultural, gender, or socio-economic
biases that could disadvantage any group of students.

4.4. Item Review Process

o Content Validity: Ensure each item covers relevant content aligned with the curriculum.
o Cognitive Level: Check that items measure the appropriate cognitive levels based on
Bloom’s Taxonomy (e.g., knowledge, application, analysis).
o Fairness and Accessibility: Items should be reviewed for fairness and accessibility,
including ensuring they are understandable for students with disabilities.

Moderation Rules

Moderation ensures consistency and fairness across different assessments. The moderation
process involves:


o Expert Review: Subject matter experts should review items for alignment with curriculum
and assessment goals.
o Pilot Testing: Pilot items with a sample group of students to identify issues related to item
difficulty, discrimination, and clarity.
o Statistical Analysis: Identify problematic items using item analysis techniques (e.g.,
Difficulty Index, Discrimination Index).
o Final Approval: Only items that meet all content, cognitive, and fairness standards should
be included in the assessment.

Figure 6. Item Review Process

4.5. Examples of Bad Items and Good Items


Designing effective test items is a critical component of the assessment process. Poorly
constructed items can lead to confusion, misinterpretation, or failure to measure the intended
learning outcomes. In contrast, well-designed items are clear, aligned with curriculum
objectives, and accurately assess the desired cognitive levels. Below are examples illustrating
the difference between bad items and good items, highlighting the importance of clarity,
alignment with SLOs, and fairness:

o Bad Items: These questions either ask for rote memorization, lack context, or are too
simplistic, failing to engage students at higher cognitive levels.
o Good Items: These questions encourage deeper thinking, ask students to apply
knowledge to real-world situations, or analyze concepts in a context aligned with
Bloom’s Taxonomy.


The following table contrasts bad items with improved items across common subjects for
grades 9 to 12, showing how each improved item addresses the limitations of the bad item
while aligning better with the SLOs and fostering higher-order thinking.
Table 10. Bad Item Examples and Their Improved Items

| Sr. No. | Bad Item | Rationale | Improved Item |
|---------|----------|-----------|---------------|
| 1  | What is a cell? | Basic recall, asking only for a definition. | Explain the function of cells in the human body and how they contribute to maintaining overall health. |
| 2  | State Newton's First Law. | Only asks for the law without requiring an application. | Describe how Newton's First Law applies to a car coming to a sudden stop, and explain the role of seat belts in preventing injuries. |
| 3  | Define an acid. | Simple recall with no deeper thought or application. | Compare the properties of strong and weak acids and provide examples of how they are used in everyday life. |
| 4  | Solve for x: 2x + 5 = 15. | Limited to procedural knowledge. | Solve the equation 2x + 5 = 15 and explain how this type of equation can be used to calculate expenses in budgeting scenarios. |
| 5  | When was the Lahore Resolution passed? | Focuses only on a date, encouraging memorization. | Analyze the significance of the Lahore Resolution of 1940 and its role in the formation of Pakistan. |
| 6  | Write about pollution. | Too vague and general, no focus or direction. | Discuss the effects of industrial pollution on Pakistan's environment and suggest measures to reduce its impact in urban areas. |
| 7  | What is Hajj? | Asks only for a basic definition. | Explain the spiritual significance of Hajj in Islam and how it fosters unity among Muslims globally. |
| 8  | Define a computer network. | Limited to a simple definition. | Compare LAN and WAN in terms of their uses in modern organizations, and explain how each supports business operations in Pakistan. |
| 9  | What is the Pythagorean theorem? | Requires only a memorized response. | Prove the Pythagorean theorem using a real-world example, such as calculating the height of a building using measurements from the ground. |
| 10 | Name the rivers of Pakistan. | Simple recall without context or analysis. | Evaluate the importance of Pakistan's river system for its agricultural economy, and discuss how climate change is affecting water resources. |


Chapter 5. Conduct of Examination


The conduct of Examinations is the primary assignment undertaken by a Board. The key
elements observed for Examinations' smooth and transparent conduct are the following.
5.1. Professional Training
Personnel Training has always been taken as the foremost step in initiating the process of
conducting Examinations. A training session will be arranged in this regard before the
commencement of the Examination. A booklet containing instructions for the supervisory staff
has also been prepared and shared with all the staff engaged in the conduct of Exams. The key
elements of the training include;
o The observance of the manner and schedule for Question Papers collection from Banks.
o The observance of timing in opening the Question papers and packing of Answer Scripts
at the end of the paper.
o The instructions to complete all the relevant forms prescribed to record information
regarding day-to-day activities.
o The manner of recording attendance of the students.
o The measures to be taken to eliminate the possibility of malpractice/use of unfair means in
the Exams.
o To handle any untoward situation happening during Exams and to report and record each
incident immediately.
o The use of IT/apps for coordinating and sharing information with the Board.

5.2. Conducive Environment


A conducive environment is indispensable for the conduct of Examinations and utmost efforts
are made to provide the students with a favorable environment to take Exams.
o Physical Measures: It is ensured that the Exam may be conducted in a suitable hall where
all the basic facilities, including Exam chairs with Suitable space, lighting, and other
connected facilities, are made available.
o Objective Measure: The students are provided with a peaceful environment to ensure they
perform with calm and attentive minds. The staff is directed to discourage shouting in
the Examination Hall and to ensure no external interference during the papers.

5.3. Transparency
It has always remained imperative to maintain transparency in the Examinations as
transparency is instrumental in conducting a fair Exam.
All possible measures are taken to achieve this goal, including the following.
o A team of the highest level of integrity undertakes the composing and printing of Question
Papers.
o The Sealed Question Papers are handed over to the Banks for safe custody. On each day
of the Exam, the relevant Question Paper Packets are handed over to the Superintendent of
the Examination center.


o The supervisory staff is appointed through a random draw to rule out the possibility of arbitrariness.
o Independent monitoring of the proceedings during the Exam should be carried out through
Inspections.
o Online surveillance of the Exam Centers should be introduced, and cameras should be
installed in every Exam center to provide a real-time view of the activities.

5.4. Appropriate Logistic Support


Logistics support is provided to ensure the secure and efficient delivery of Question Papers
and other Exam-related stationery. The Board constitutes a dedicated team for the safe and
timely provision of services and logistic support to the Exam centers. The following are the key
components of the logistics support policy;
o Strategic planning: At the commencement of each Exam, a strategic planning session is
conducted by the Board, wherein a comprehensive work plan is discussed and approved
for the delivery of Question Papers, Answer Books, and other connected material to all the
Exam centers within the jurisdiction of the Board.
o Dedicated secure transport: Dedicated transport is arranged to secure confidential
Exam material at the Exam Centers. It is also made sure that trustworthy Drivers and
officials are deputed for the purpose, who are well aware of the sensitivity of their tasks,
and who ensure the security of the Material and its timely delivery to their respective
destinations.
o Collection and return: The Answer Scripts from the Exam centers are collected daily
by registered parcel through Pakistan Post, whereas special collection centers are
established in remote and hard-to-access districts. These collection centers collect the Answer
Scripts at the end of each paper within the stipulated time. These scripts are then
transported to the Board through the dedicated vehicles.

5.5. Appropriate Measures (SOPs for Secrecy)


In order to keep and maintain the Secrecy of the Question papers, the staff with the highest
level of integrity is engaged during the following stages;
o Composing of question paper: A dedicated and competent team is appointed to
compose the Question Papers. For this purpose, the team is provided a safe and secure
place, inaccessible to all others except designated officers.
o Printing of question papers: Printing Question Papers is also carried out at a safe and
secure place by the expert team appointed for the purpose. The Chairman and Controller
of Exams appoint this team. The question papers are printed and packed in sealed
envelopes. These sealed envelopes are placed center-wise in the lockers for safe custody.
o Center-wise filling of question paper bags: In the next stage, the envelopes of
Question papers are packed in polythene bags and sealed before delivery to the respective
centers/banks.
o Delivery of question papers to banks: The bags of Question Papers from each center
are delivered to the authorized and relevant Bank Branches through the dedicated team
engaged. The authorized center superintendents collect Question Papers from the Bank
according to the given Date Sheet daily.


5.6. Controlling Cheating and Examination Malpractices

Cheating and examination malpractices undermine the integrity of the assessment process and
compromise the quality of education. Addressing these issues requires innovative strategies,
robust policies, and the integration of digital technology to ensure fairness and transparency.
This section provides guidelines and recommendations to control cheating and malpractices
during the conduct of examinations.

Digital tools play a transformative role in reducing cheating and ensuring secure examination
processes. The following technologies can be integrated into the examination system:

➢ Use biometric verification systems to confirm candidate identity and prevent


impersonation.
➢ Implement secure online delivery systems for question papers to minimize handling
and leaks.
➢ Install signal jammers and electronic detectors to block unauthorized devices, such as
mobile phones and smartwatches.
➢ Conduct random inspections of examination centers by independent teams.
➢ Enforce a zero-tolerance policy for cheating, with strict penalties, including result
cancellations and debarment.
➢ Launch awareness campaigns to educate students on the consequences of cheating
and promote academic integrity.
➢ Use statistical tools to analyze answer scripts and detect patterns of copying (a
minimal screening sketch is given at the end of this list).
➢ Flag examination centers with unusually high performance trends for further
investigation.
➢ Set up CCTV cameras for continuous monitoring of all examination centers.
➢ Ensure secure transport of examination materials using tamper-proof packaging.
➢ Provide specialized training for invigilators and examination staff to detect and handle
cheating effectively.
➢ Develop and distribute SOPs for managing incidents of malpractice and recording
evidence.
➢ Utilize Question Item Banks (QIB) to generate unique question papers for individual
students.
➢ Use automated proctoring systems for remote and online examinations.
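
One simple way to flag centers with unusual performance trends, as suggested above, is a z-score screen on center-level mean scores. The sketch below is illustrative: the threshold is an assumption, and flagged centers are candidates for inspection, not proof of malpractice.

```python
# Screens center-level mean scores for statistical outliers relative to all
# centers. The z-score threshold is an illustrative assumption.
from statistics import mean, stdev

def flag_centers(center_means, threshold=3.0):
    """Return centers whose mean score is more than `threshold` SDs from the overall mean."""
    values = list(center_means.values())
    overall, spread = mean(values), stdev(values)
    if spread == 0:
        return []
    return [center for center, m in center_means.items()
            if abs(m - overall) / spread > threshold]

# Illustrative data: one center far above the others.
means = {"C-01": 54, "C-02": 56, "C-03": 55, "C-04": 57,
         "C-05": 53, "C-06": 55, "C-07": 56, "C-08": 92}
print(flag_centers(means, threshold=2.0))  # ['C-08']
```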


Chapter 6. Coding, Marking, and Compilation of Results


The coding, marking, and compilation of results are integral to ensuring assessments' accuracy,
transparency, and reliability. This section provides a detailed overview of the standardized
practices for evaluating student responses, assigning marks, and compiling results. These
processes are designed to minimize errors, maintain consistency, and ensure fairness in scoring.

Figure 7. Marking of Answer Scripts

6.1. Coding of Answer Scripts


Soon after the Answer Scripts are received from the Examination Centres, their coding begins.
In the first phase, the Answer Scripts are coded with fictitious numbers against the original
Roll Numbers. These fictitious numbers are generated through computer software. After that,
the dispatchers, dedicated experts for this purpose, detach the Roll Number part of the Answer
Scripts and put the fictitious number in its place. The detached parts containing the original
Roll Numbers and the fictitious numbers are gathered and placed in bags subject-wise. The
dispatchers prepare bundles of 200 to 250 Answer Scripts each, and each bundle is allotted an
alphanumeric code for identification. The bundles of Answer Scripts are then shifted for marking.
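
A minimal sketch of the coding and decoding steps described above: fictitious numbers are generated by software, the roll-number-to-fictitious mapping is kept in safe custody, and marks are re-attached to roll numbers after marking. The identifier formats are illustrative assumptions.

```python
import random

def code_scripts(roll_numbers, seed=None):
    """Assign a unique six-digit fictitious number to each roll number."""
    rng = random.Random(seed)
    fictitious = rng.sample(range(100000, 1000000), len(roll_numbers))
    return dict(zip(roll_numbers, fictitious))  # mapping kept in safe custody

def decode_results(mapping, marks_by_fictitious):
    """After marking, re-attach marks to the original roll numbers."""
    reverse = {f: r for r, f in mapping.items()}
    return {reverse[f]: marks for f, marks in marks_by_fictitious.items()}

mapping = code_scripts(["R-1001", "R-1002", "R-1003"], seed=7)
marks = {fictitious: 60 for fictitious in mapping.values()}  # stand-in awards
print(decode_results(mapping, marks))  # marks keyed by original roll numbers
```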
6.2. Marking of Answer Scripts
The marking of Answer Scripts is one of the most important assignments undertaken by the
Boards by appointing efficient and well-reputed examiners in their relevant fields.
o Mechanism: The bundles of Answer Scripts duly coded are handed over to the Head
Examiners in each group. The Head Examiners must mark the scripts along with their sub-
examiners and checkers. The checkers prepare the Award lists for each group which are
also reviewed and endorsed by the head examiners before submitting these to the Board
officials.


o Marking Key: Marking keys for each subject are prepared by the concerned subject
experts. The marking key is shared with all the Head Examiners. It is ensured that marking
is strictly carried out in accordance with the marking key.
o Subject Coordinators: Subject coordinators are appointed in each major subject to
examine the prescribed ratio of Answer Scripts to ensure uniformity in marking by the
groups and to check whether marking is being undertaken according to the marking key or
otherwise. In case of any diversion from the given marking key, necessary corrective
measures are immediately taken.
o Super Checking: The super checkers are appointed for each subject as a part of the policy
of multi-layered checking of the Answer Scripts to ensure that the possibility of mistakes
in totaling marks is ruled out. The super checkers also point out the unmarked questions,
and if any such case is found, the same is returned to the concerned Head Examiner for
correction.
6.3. Compilation of Results
The compilation of results is a critical step in ensuring the accuracy and reliability of
examination outcomes. This process involves decoding and entering award lists, scrutinizing
computerized entries against original records, and consolidating data into final results.
o Decoding and Entry of Award Lists: The award lists obtained after marking are handed
over to the IT staff, who enter the awards into the computer database. The fictitious numbers
are decoded, and marks are allotted against the original Roll Numbers.
o Scrutiny of Award Lists: The computerized award lists are scrutinized with the original
award lists. This process further endorses the accuracy of the marks posted on scripts with
those entered in the database.
o Compilation: The results are compiled through software in consolidated form. The
decisions of Unfair Means (UFM) cases and absentee cases are also included in the result.
o Use of Information Technology: Information Technology has greatly helped improve the
compilation of results and led to a more accurate and brisk compilation of results. One such
measure is the introduction of On-Screen Marking (OSM), which enhances marking
accuracy and saves time. E-marking allows markers to mark a scanned script or online
response on a computer screen rather than on paper.

Figure 8. Compilation of Results


6.4. On-Screen Marking


On-Screen Marking (OSM) is a system that enables examiners to mark, assess, and evaluate
scanned answer scripts on screen. Answer scripts are uploaded to the e-marking server and
then become available in the OSM software for marking. On-screen marking also allows
markers to mark scripts from anywhere. Some of the benefits of OSM are given below:
o Logistic management of answer sheets
o Digital storage of answer sheets
o Student identity protection with the help of a barcode
o Parallel and easy moderation of the marking process
o Auto calculation of total marks
o Accelerated e-marking speed for quicker results
o Online availability of results/marks sheet, etc.
o Improved accuracy and reliability in the grading process.
o Standardized Marking through rubric-based marking.
o Continuous monitoring of marking quality and progress in real-time
o Valuable insights from comprehensive data analysis for informed decision-making.


Chapter 7. Post-Exam Analysis

Post-exam analysis is a vital component of the assessment cycle, leveraging advanced
techniques such as psychometric analysis, data analysis, and statistical modeling to ensure
the reliability, validity, and fairness of examinations. This chapter delves into how post-exam
evaluations utilize methodologies like Classical Test Theory (CTT) and Item Response Theory
(IRT) to assess the quality of test items and their alignment with Student Learning Outcomes
(SLOs). By integrating these rigorous analytical approaches, post-exam analysis enhances the
accuracy and effectiveness of assessments, providing actionable insights for continuous
improvement.
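
To make the contrast between CTT and IRT concrete, the sketch below computes a CTT item statistic (proportion correct) and the item characteristic function of a two-parameter logistic (2PL) IRT model; the parameter values are examples only, not estimates from any real administration.

```python
import math

def ctt_difficulty(item_scores):
    """CTT item difficulty: proportion of examinees answering correctly."""
    return sum(item_scores) / len(item_scores)

def irt_2pl(theta, a, b):
    """2PL item characteristic function:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))
    where a is discrimination and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(ctt_difficulty([1, 0, 1, 1, 0]))   # 0.6
print(irt_2pl(theta=0.5, a=1.2, b=0.5))  # 0.5 when ability theta equals b
```

Whereas the CTT statistic depends on the particular group tested, the 2PL parameters describe the item itself, which is why IRT is favored for building comparable item banks.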
7.1 Purpose of Post-Exam Analysis

Post-exam analysis aims to evaluate the effectiveness of assessments in measuring cognitive
skills and achieving curriculum objectives. It incorporates quantitative research methods and
advanced tools to examine item performance, test reliability, and fairness. Analyzing key
metrics supports test development, refines assessment practices, and ensures that examinations
provide meaningful feedback for stakeholders, including policymakers, educators, and
students. Through such detailed evaluations, post-exam analysis is pivotal in shaping data-
driven decisions and fostering educational excellence.

7.1.1 Improving Learning Outcomes


Post-exam analysis enables a detailed examination of individual student performance to
highlight areas of mastery and areas needing improvement. For instance, targeted interventions
can be planned if a student consistently performs well in algebra but struggles with geometry.
It also involves analyzing trends across a group of students to identify common strengths and
weaknesses, helping to recognize patterns that might be addressed through curriculum changes
or additional support. The analysis can also be used to create customized learning plans that
cater to the specific needs of each student, ensuring that they receive the appropriate resources
and attention to improve their learning outcomes.
7.1.2 Informing Instruction and Feedback to Teachers
Post-exam analysis provides teachers with comprehensive reports on student performance, including breakdowns
by question type, content area, and individual student responses. This data helps teachers
understand which concepts were well understood and which were not. Based on the analysis,
teachers can adjust their instructional strategies. For example, if students perform poorly on
questions about a specific topic, teachers might allocate more time to that topic or use different
teaching methods to enhance understanding. Identifying areas where teachers may need further
training or resources. For instance, if a significant number of students struggle with problem-
solving questions, it might indicate a need for professional development focused on teaching
critical thinking skills.


Figure 9. Purpose of Post-Exam Analysis

7.1.3 Enhancing Assessment Quality


Enhancing assessment quality involves conducting a thorough item analysis to evaluate the
performance of each test question. This includes checking for questions that were too easy or
too difficult, or that did not effectively discriminate between high and low performers. The
analysis is then used to revise or eliminate problematic questions in future assessments. For
example, questions that many students got wrong due to ambiguity or poor phrasing can be
reworded or replaced, ensuring that the assessment accurately measures student knowledge
and skills. Post-exam analysis helps maintain the validity and reliability of the test by
identifying and addressing issues that may have compromised these qualities.
7.1.4 Stakeholder Communication
Delivering clear and concise performance reports that help students and their parents
understand the results. This includes highlighting areas of strength and providing actionable
suggestions for improvement. It includes sharing comprehensive data with teachers, school
administrators, and policymakers. This enables informed decision-making regarding
curriculum adjustments, resource allocation, and instructional strategies. It offers insights that
can influence educational policy and strategic planning. For example, if the analysis reveals
widespread difficulties in a particular subject area, it might prompt policy changes or new
educational initiatives to address the issue. It ensures transparency in the assessment process
by openly sharing results and analysis with all stakeholders. This fosters trust and
accountability within the educational community.

7.2 Data Collection and Management


Effective data collection and management is essential for ensuring the accuracy and reliability
of post-examination analysis. This process involves gathering comprehensive data, such as
student scores, item responses, and performance metrics, and organizing it in a structured
format for analysis.
7.2.1 Student Performance Data
o Raw Scores: Gathering the total number of points each student earned on the assessment.
This includes both overall scores and scores for individual sections or content areas.
o Detailed Item Responses: Recording how each student responded to every question on
the test. This data provides insight into specific areas of strength and weakness for each
student.


o Performance Trends: Analyzing patterns in student responses to identify trends, such as
common errors or frequently missed questions. This helps in understanding specific areas
where students struggle.
7.2.2 Importance of Accurate Data Collection
o Data Integrity: Ensuring that the data collected is accurate and reliable. This involves
double-checking scores and responses to prevent errors that could impact the analysis.
o Consistency: Using standardized methods for data collection to ensure consistency across
different students and assessments. This helps in making valid comparisons and drawing
accurate conclusions.
7.2.3 Item Analysis
Item Analysis is a critical process in the assessment cycle that evaluates the quality and
performance of test items. It provides insights into the effectiveness of questions in measuring
Student Learning Outcomes (SLOs) and ensures that each item contributes meaningfully to the
overall test. This process is essential for improving assessments' reliability, validity, and
fairness, particularly in large-scale examinations where accuracy and standardization are
paramount.
7.2.3.1 Features of Item Analysis

1. Difficulty Index: Measures how challenging an item is for test-takers, helping to
balance easy, moderate, and difficult items.
2. Discrimination Index: Evaluates an item's ability to distinguish between high-
performing and low-performing students.
3. Distractor Analysis: Assesses the effectiveness of incorrect options (distractors) in
multiple-choice questions to ensure they are plausible and not misleading.
4. Reliability Contribution: Examines how each item contributes to the overall
reliability of the test.
5. Alignment with SLOs: Ensures that items are aligned with curriculum standards and
cognitive levels.

7.2.3.2 Importance of Item Analysis in Large-Scale Examinations

• Improves Test Quality: Identifies poorly performing items for revision or removal,
ensuring that assessments are fair and valid.
• Supports Equity: Reduces bias and ensures all items are accessible to students from
diverse backgrounds.
• Informs Test Development: Provides data-driven insights for creating balanced and
effective assessments in future test cycles.
• Enhances Decision-Making: Supplies examination boards with reliable metrics to
refine assessments and improve educational outcomes.

7.2.3.3 Applications of Item Analysis

• Item Banking: Ensures that only high-quality items are added to the Question Item
Bank (QIB) for future use.
• Standardized Testing: Supports the creation of exams that are consistent, reliable,
and aligned with curriculum standards.
• Policy Decisions: Informs examination boards and policymakers about the
effectiveness of assessment practices.


• Feedback to Educators: Highlights areas where students struggle, providing


actionable insights for curriculum improvements.

7.2.3.4 Item Difficulty


o Difficulty Index: Calculating the proportion of students who answered each item
correctly. Items with a high difficulty index are considered easy, while those with a low
difficulty index are considered difficult.
o Balanced Assessment: Ensuring a balanced distribution of item difficulties to
appropriately challenge students at different levels of ability.

7.2.3.5 Discrimination Indices


o Discrimination Index: Measuring how well each item differentiates between high-
performing and low-performing students. Items with high discrimination indices
effectively distinguish between students who have mastered the content and those who
have not.
o Improving Assessment Quality: Using discrimination indices to identify items that need
revision or removal. Items that do not discriminate well may be confusing or poorly
constructed.
7.2.3.6 Response Patterns and Distractor Analysis

Distractor Analysis is a crucial component of item analysis that evaluates the effectiveness of
incorrect answer choices (distractors) in multiple-choice questions. Its primary purpose is to
determine whether distractors are functioning as intended—plausible enough to challenge test-
takers who lack the correct knowledge while being unlikely to confuse those who do.
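To make distractor analysis concrete, the short sketch below tallies how often each option of a single MCQ was chosen. The responses and the answer key are hypothetical, included purely for illustration rather than drawn from any board's actual data.

```python
from collections import Counter

# Hypothetical responses to one MCQ whose correct answer (key) is "B".
responses = ["B", "A", "B", "C", "B", "D", "B", "A", "B", "C",
             "B", "B", "A", "B", "D", "B", "C", "B", "B", "A"]
KEY = "B"

counts = Counter(responses)
total = len(responses)
for option in "ABCD":
    share = counts.get(option, 0) / total
    role = "key" if option == KEY else "distractor"
    print(f"Option {option} ({role}): chosen by {share:.0%} of students")

# A distractor chosen by almost no one is probably too obviously wrong
# and is a candidate for revision.
```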

7.3 Item Analysis Theories


Item Analysis relies on two key theories: Classical Test Theory (CTT) and Item Response
Theory (IRT), which provide frameworks to evaluate the quality and effectiveness of
assessment items. CTT focuses on the overall test and uses straightforward metrics such as the
difficulty index (proportion of students answering correctly) and the discrimination index (how
well an item differentiates high-performing from low-performing students). For example, in a
mathematics exam, a question answered correctly by 80% of students is considered easier than
one answered correctly by 30%. Conversely, IRT is more sophisticated and evaluates
individual items by modeling their interaction with students' ability levels. It uses parameters
like difficulty, discrimination, and guessing to determine how well an item functions across
varying ability levels. For instance, IRT can identify that a science question requiring critical
thinking might challenge higher-ability students more effectively while ensuring it remains
accessible to those with moderate skills. These theories offer robust methods to refine
assessments, ensuring they are fair, reliable, and aligned with curriculum goals.

Item analysis is indispensable to modern assessment practices, particularly in large-scale
examinations. By employing techniques like CTT and IRT, examination boards can ensure that
assessments are fair, effective, valid, reliable, and unbiased. Using item analysis not only
enhances the quality of examinations but also provides valuable feedback for improving
teaching practices and educational outcomes across diverse populations.
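As a minimal illustration of the IRT approach, the sketch below computes the probability of a correct response under a standard three-parameter logistic (3PL) model. The parameter values are assumptions chosen for demonstration, not calibrated values from any real examination.

```python
import math

def irt_3pl(theta, a, b, c):
    """P(correct) for a student of ability theta under the 3PL IRT model.

    a: discrimination, b: difficulty, c: guessing (lower asymptote).
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: moderately discriminating (a=1.2), average difficulty
# (b=0.0), with a 20% guessing floor (c=0.2).
for theta in (-2, -1, 0, 1, 2):
    p = irt_3pl(theta, a=1.2, b=0.0, c=0.2)
    print(f"ability {theta:+d}: P(correct) = {p:.2f}")
```

The printed probabilities rise with ability, tracing the item characteristic curve that IRT uses to judge how an item behaves across ability levels.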


Table 11. Comparison between CTT and IRT

Focus
  CTT: Focuses on overall test performance and total scores.
  IRT: Focuses on individual item performance and the relationship between item characteristics and ability levels.

Item Difficulty
  CTT: Based on the proportion of test-takers answering correctly.
  IRT: Modeled as a parameter based on student ability and item characteristics.

Reliability
  CTT: Test reliability is estimated using total test scores.
  IRT: Reliability is calculated using item-level data and test-taker abilities.

Discrimination
  CTT: Evaluates how well an item differentiates between high- and low-scoring students.
  IRT: Uses discrimination as a parameter in modeling item effectiveness.

Modeling
  CTT: Simpler and easier to apply but less precise.
  IRT: More sophisticated, providing deeper insights but requiring advanced software and expertise.

Scalability
  CTT: Suitable for smaller tests or homogeneous groups.
  IRT: Ideal for large-scale assessments with diverse populations.

7.4 Exam Results Dashboard


The Exam Result Dashboard is a cutting-edge tool designed to comprehensively visualize
large-scale student results, transforming raw data into meaningful insights. Such a dashboard
allows educational boards, policymakers, and administrators to analyze exam performance at
various levels, including individual students, schools, districts, and regions. With an
interactive and user-friendly interface, the dashboard supports informed decision-making to
improve educational outcomes and policy formulation.

7.4.1 Key Features and Importance

• Data Visualization: The dashboard presents student performance data through charts,
graphs, and heatmaps, making complex data easy to understand and interpret.
• Comparative Analysis: It enables performance comparison across different schools,
districts, and demographic categories, identifying trends and areas for improvement.
• Performance Insights: The tool highlights performance gaps, strengths, and
weaknesses, offering actionable insights to tailor interventions for students and schools.
• Transparency and Accountability: The dashboard promotes transparency and fosters
trust among stakeholders by centralizing and standardizing result data.
• Real-Time Access: Administrators can quickly access real-time data to respond to
emerging challenges and opportunities.

The Exam Result Dashboard is an important step in embracing the digital transformation within
Pakistan's education system. It empowers educational leaders with the ability to evaluate
outcomes effectively, ensuring that resources and efforts are directed toward enhancing student
achievement and fostering equitable educational opportunities nationwide.
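As a sketch of the kind of aggregation such a dashboard performs behind the scenes, the snippet below computes district-by-subject pass rates from raw records. The records are invented for illustration; the 40% pass threshold follows the new grading scheme described later in this document.

```python
import pandas as pd

PASS_MARK = 40  # percentage threshold per the new grading scheme

# Hypothetical raw result records (invented for illustration).
records = pd.DataFrame({
    "district": ["Lahore", "Lahore", "Multan", "Multan", "Quetta", "Quetta"],
    "subject":  ["Math", "English", "Math", "English", "Math", "English"],
    "score":    [72, 55, 38, 61, 44, 35],
})

records["passed"] = records["score"] >= PASS_MARK

# District-by-subject pass rates: the kind of view a dashboard heatmap shows.
pass_rates = records.pivot_table(index="district", columns="subject",
                                 values="passed", aggfunc="mean")
print((100 * pass_rates).round(1))
```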


Figure 10. Sample Data Dashboard

7.4.2 Contextual Data and Demographic Information


Incorporating contextual data and demographic information is vital for understanding the
broader factors influencing student performance. This includes details such as students'
socioeconomic background, geographic location, and access to educational resources.
Analyzing this data alongside assessment results provides valuable insights into disparities,
enabling policymakers and educators to design targeted interventions for equitable learning
opportunities.
i. Student Background: Collect data on student demographics such as age, gender,
socioeconomic status, and geographic location. This helps in understanding students'
diverse backgrounds and how these factors may influence performance.
ii. Equity Analysis: Using demographic data to analyze performance across different
groups and ensure the assessment is fair and equitable for all students.
iii. Instructional Context: Gathering information about the instructional environment,
such as class size, teacher qualifications, and available resources. This helps
understand the context in which students learn and how it may affect their
performance.
iv. External Factors: Considering external factors that might impact student
performance, such as health issues, family circumstances, or access to study
materials. Including these variables in the analysis provides a more comprehensive
understanding of student performance.
7.4.3 Integrating Data for Comprehensive Analysis
Integrating various data sources, such as student performance metrics, contextual information,
and demographic details, enables a comprehensive analysis of examination outcomes. This
holistic approach provides a deeper understanding of trends, correlations, and disparities,
allowing stakeholders to identify areas for improvement.


o Holistic View: Combining student performance, item analysis, and contextual data to
create a comprehensive picture of student achievement. This holistic approach allows for
more nuanced insights and targeted interventions.
o Informed Decision-Making: Using the integrated data to inform decisions about
curriculum adjustments, teaching strategies, and resource allocation. Educators can make
more effective and equitable decisions to support student learning by considering all
relevant factors.

7.5 Data Analysis Techniques


Effective data analysis techniques are essential for extracting meaningful insights from
examination data. These techniques include descriptive statistics to summarize performance
trends, inferential statistics like t-tests and ANOVA to compare groups, and advanced methods
such as regression analysis, item analysis including CTT and IRT, and predictive modeling to
uncover patterns and forecast outcomes.
7.5.1 Descriptive Statistics
i. Mean: The average score of all students on the assessment.
Purpose: Provides a general overview of the overall performance of the student cohort.
The mean helps in understanding the central tendency of the scores and can indicate
whether most students found the test to be within their grasp.
Calculation: Sum of all scores divided by the number of students. For example, if the
total score of all students combined is 2500 and there are 50 students, the mean score
would be 50.
ii. Median: The middle score when all student scores are arranged in ascending order.
Purpose: Offers insight into the distribution of scores, particularly useful in
understanding the central tendency when there are outliers. The median is less affected
by extremely high or low scores than the mean.
Calculation: If there is an odd number of scores, the median is the middle one. If there
is an even number of scores, the median is the average of the two middle scores.
iii. Mode: The score that appears most frequently among student responses.
Purpose: Indicates the most common score among students, providing insight into the
most typical performance level. The mode can be particularly useful in identifying the
most common level of understanding or common errors.
Calculation: Determined by identifying the score that occurs most frequently in the
dataset.
iv. Standard Deviation: A measure of the amount of variation or dispersion of scores from
the mean.
Purpose: Helps in understanding the spread of scores around the mean, indicating how
much individual student scores differ from the average. A high standard deviation
indicates a wide range of scores, while a low standard deviation indicates that scores
are closely clustered around the mean.
Calculation: The square root of the average of the squared deviations from the mean.
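The four measures above can be computed directly with Python's standard library; the scores below are invented for illustration.

```python
import statistics

# Hypothetical exam scores for a small cohort (illustrative data only).
scores = [45, 52, 58, 58, 61, 64, 67, 70, 74, 81]

print("Mean:              ", statistics.mean(scores))        # central tendency
print("Median:            ", statistics.median(scores))      # robust to outliers
print("Mode:              ", statistics.mode(scores))        # most frequent score
print("Standard deviation:", round(statistics.stdev(scores), 2))  # spread around the mean
```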

7.5.2 Item Analysis


i. Difficulty Index: The proportion of students who answered each item correctly.
Purpose: Helps in determining how easy or difficult each test item is. A high-difficulty
index indicates an easy question, while a low-difficulty index indicates a difficult

Page | 54
Model Assessment Framework

question.
Calculation: Number of students who answered the item correctly divided by the total
number of students. For example, if 30 out of 50 students answered a question
correctly, the difficulty index would be 0.6.
ii. Discrimination Index: A measure of how well an item differentiates between high-
performing and low-performing students.
Purpose: Identifies the effectiveness of each item in distinguishing between students
who understand the material and those who do not. A high discrimination index means
the item is good at distinguishing between different levels of student performance.
Calculation: Typically calculated by comparing the performance of the top 27% of
students to the bottom 27%. For example, if an item is correctly answered by 80% of
the top students and 30% of the bottom students, the discrimination index would be
0.5.
iii. Distractor Analysis: Examination of the incorrect response choices (distractors) to
understand patterns in student mistakes.
Purpose: Identifies whether distractors are functioning effectively by attracting
students who do not know the correct answer. This analysis helps in improving the
quality of multiple-choice questions by ensuring that distractors are plausible and
discriminating.
Process: Reviewing the frequency with which each distractor is chosen. For example,
if a distractor is rarely chosen, it might be too obviously incorrect and need revision.
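The difficulty and discrimination indices described above can be computed with a few lines of code. The sketch below uses the upper-lower 27% method for discrimination, as described earlier; all data are hypothetical.

```python
def difficulty_index(correct_flags):
    """Proportion of students who answered the item correctly (1 = correct)."""
    return sum(correct_flags) / len(correct_flags)

def discrimination_index(correct_flags, total_scores, fraction=0.27):
    """Upper-lower discrimination: p(correct, top 27%) - p(correct, bottom 27%)."""
    n = len(total_scores)
    k = max(1, round(n * fraction))
    ranked = sorted(range(n), key=lambda i: total_scores[i])  # low to high
    bottom, top = ranked[:k], ranked[-k:]
    p_top = sum(correct_flags[i] for i in top) / k
    p_bottom = sum(correct_flags[i] for i in bottom) / k
    return p_top - p_bottom

# Hypothetical data: one item, ten students (0/1 per item, plus total scores).
item_flags   = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
total_scores = [78, 45, 66, 83, 39, 71, 52, 90, 62, 48]
print(f"Difficulty index:     {difficulty_index(item_flags):.2f}")  # 0.60
print(f"Discrimination index: {discrimination_index(item_flags, total_scores):.2f}")
```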

7.5.3 Inferential Statistics

i. Independent t-Test: A statistical test comparing the means of two independent groups.
Purpose: To determine if there is a significant difference in performance between two
groups (e.g., male vs. female students).
Application: Comparing average marks of students from rural and urban schools in a
subject.
Example: Analyzing if urban students scored significantly higher than rural students
in mathematics.

ii. ANOVA (Analysis of Variance): A statistical method for comparing means across
three or more groups.
Purpose: To identify if differences exist among multiple categories, such as regions or
school types.
Application: Comparing average scores of students from different provinces in
science.
Example: Evaluating if students from Punjab, Sindh, and KPK perform differently in
biology.

iii. Correlation: A measure of the relationship between two variables.


Purpose: To assess how one variable changes in relation to another.
Application: Analyzing the relationship between study hours and exam scores.

Example: Determining if higher attendance correlates with better academic performance.

iv. Linear Regression: A statistical technique to predict the value of a dependent variable
based on one independent variable.
Purpose: To quantify the impact of a single factor on student performance.
Application: Predicting student scores based on their attendance percentage.
Example: Estimating the likely score in English based on the number of practice tests
attempted.

v. Logistic Regression: A method to predict binary outcomes (e.g., pass/fail) based on independent variables.
Purpose: To identify factors that influence categorical outcomes in assessments.
Application: Determining the likelihood of passing based on socioeconomic factors.
Example: Analyzing how parental education affects a student's chance of passing a
science exam.

vi. Time Series Analysis: A technique for analyzing trends over time.
Purpose: To examine performance patterns and predict future outcomes.
Application: Tracking student performance in national exams over the past five years.
Example: Analyzing yearly trends in mathematics scores to predict next year’s
performance.

vii. Predictive Modeling: The use of statistical techniques to forecast outcomes based on
historical data.
Purpose: To predict future performance or identify at-risk students.
Application: Creating models to predict dropout rates or high-performing students.
Example: Forecasting which students are likely to excel in STEM subjects based on
their grade trends.
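As a brief sketch of how the first three techniques in this list might be run in Python using scipy, consider the snippet below. All samples are invented for illustration; real analyses would use the board's actual result data.

```python
from scipy import stats

# Hypothetical samples (illustrative only).
urban = [72, 68, 75, 80, 66, 71, 74]
rural = [61, 64, 58, 70, 63, 66, 60]

# Independent t-test: do the two group means differ significantly?
t_stat, p_val = stats.ttest_ind(urban, rural)
print(f"t-test:  t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way ANOVA across three hypothetical provincial samples.
punjab = [70, 74, 69, 72]
sindh  = [65, 68, 66, 71]
kpk    = [63, 67, 70, 64]
f_stat, p_anova = stats.f_oneway(punjab, sindh, kpk)
print(f"ANOVA:   F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pearson correlation between study hours and exam scores.
hours  = [2, 4, 5, 6, 8, 10]
scores = [52, 58, 61, 66, 72, 80]
r, p_corr = stats.pearsonr(hours, scores)
print(f"Pearson: r = {r:.2f}, p = {p_corr:.4f}")
```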

7.5.4 Subgroup Analysis


Subgroup analysis examines performance differences across specific groups, such as gender,
region, or school type, to identify disparities and ensure equitable assessment practices.
i. Analyzing Performance Across Different Demographic Groups: Examination of how
various subgroups of students perform on the assessment, such as by gender, region,
socioeconomic status, etc.
ii. Purpose: Ensures that the assessment is fair and equitable for all students by identifying
any disparities in performance among different groups. This analysis helps in
understanding whether all students have equal opportunities to perform well and in
addressing any identified inequities.
iii. Group Identification: Defining the subgroups to be analyzed, such as male vs. female
students, urban vs. rural students, etc.
iv. Comparative Analysis: Comparing the mean, median, mode, and standard deviation of
scores across these subgroups.
v. Equity Assessment: Identifying any significant differences in performance that might
indicate bias or unequal access to educational resources. For example, if urban students
significantly outperform rural students, it may prompt an investigation into resource
allocation and instructional quality.
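In practice, subgroup comparisons of this kind often reduce to a single groupby; the sketch below uses pandas on a small invented dataset to illustrate the comparative analysis described above.

```python
import pandas as pd

# Hypothetical result records (columns and values invented for illustration).
df = pd.DataFrame({
    "gender":   ["M", "F", "M", "F", "M", "F", "M", "F"],
    "location": ["urban", "urban", "rural", "rural", "urban", "rural", "rural", "urban"],
    "score":    [72, 75, 61, 66, 80, 63, 58, 78],
})

# Mean, median, spread, and count per subgroup; large gaps between groups
# flag possible equity issues worth investigating.
summary = df.groupby(["gender", "location"])["score"].agg(["mean", "median", "std", "count"])
print(summary)
```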


7.6 Reporting and Interpretation of Results

The reporting and interpretation of results is an important phase in the assessment process,
transforming raw examination data into meaningful insights for stakeholders. Effective
reporting not only highlights overall achievement levels but also identifies trends, gaps, and
areas for improvement. By combining statistical analysis with detailed interpretation, this
phase ensures that assessment results serve as a valuable resource for enhancing educational
practices and outcomes.

7.6.1 Overview of Student Performance


o Average Scores: Presenting the mean score of the student cohort to provide a general sense
of how well students performed on the assessment.
o Score Distribution: Visual representation (e.g., histogram) showing the range and
frequency of scores. This helps in understanding how scores are spread across the student
population, indicating the overall difficulty of the test and the level of student achievement.
o Performance Bands: Categorizing scores into performance bands (e.g., below basic,
basic, proficient, advanced) to provide a clearer picture of student proficiency levels.
o Detailed Analysis by Content Domain: Comparing performance across different content
domains to identify areas where students excel or struggle. This can guide targeted
interventions and curriculum adjustments.
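As a small illustration of performance banding, the function below maps a percentage score to a band. The cut-off values are assumptions for demonstration only, since the framework itself does not fix them.

```python
def performance_band(score, max_score=100):
    """Map a raw score to an illustrative band (cut-offs are assumed, not official)."""
    pct = 100 * score / max_score
    if pct >= 80:
        return "advanced"
    if pct >= 60:
        return "proficient"
    if pct >= 40:
        return "basic"
    return "below basic"

for s in (35, 55, 72, 91):
    print(s, "->", performance_band(s))
```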
7.6.2 Item-wise Analysis Report
7.6.2.1 Item Difficulty
o Easy Items: Items with a high proportion of correct responses indicate that most students
found them easy.
o Moderate Items: Items with a moderate proportion of correct responses indicate an
appropriate level of challenge.
o Difficult Items: Items with a low proportion of correct responses indicate that many
students found them challenging.
o Implications for Instruction: Using item difficulty data to adjust teaching strategies and
focus on areas where students struggle the most.
7.6.2.2 Item Discrimination
o High Discriminating Items: Items that effectively differentiate between high-performing
and low-performing students. These items are valuable for assessing true
understanding and skill levels.
o Low Discriminating Items: Items that do not effectively differentiate between students
of different ability levels. These items may need revision or replacement.
o Improving Test Quality: Ensuring that the assessment includes a balance of items with
high discrimination indices to maintain the validity and reliability of the test.
7.6.2.3 Distractor Analysis
o Functioning Distractors: Distractors that are chosen by students who do not know the
correct answer, indicating that they are plausible and effective.
o Non-functioning Distractors: Distractors that are rarely chosen, suggesting they are too
obviously incorrect and may need revision.
o Identifying Misconceptions: Analyzing which distractors are most frequently chosen to
identify common misconceptions or errors among students. This information can inform
instructional strategies to address these misconceptions.


7.6.3 Subgroup Performance Report


7.6.3.1 Gender Analysis
o Score Averages: Comparing the average scores of male and female students to identify
any significant differences in performance.
o Distribution Analysis: Examining the score distribution within each gender group to
understand variations and trends.
o Implications for Equity: Identifying and addressing gender-based performance
disparities may indicate the need for targeted interventions or support.
7.6.3.2 Regional Analysis
o Regional Averages: Comparing average scores across different regions or schools (Govt.
and Private) to identify geographical disparities in performance.
o Contextual Factors: Considering factors such as access to resources, teacher quality, and
socio-economic conditions that may impact regional performance.
o Policy Implications: Using regional analysis to inform policy decisions and resource
allocation, ensuring all regions have the support needed to improve student outcomes.
7.7 Feedback Mechanism
7.7.1 Student Reports
o Score Breakdown: Providing students with a comprehensive breakdown of their
performance on the assessment, including overall scores, scores by content domain,
and individual item responses.
o Visual Aids: Using charts and graphs to visually represent performance data, making
it easier for students to understand their results.
o Comparative Data: Showing how a student's performance compares to the class
average or predefined performance benchmarks to give context to their results.
7.7.2 Strengths and Areas for Improvement
o Key Strengths: Highlighting specific content areas or skills where the student
performed exceptionally well. Encouraging students by acknowledging their strengths
and efforts can boost motivation and confidence.

o Areas for Improvement and Actionable Suggestions: Pinpointing specific topics or
question types where the student struggled. Providing clear and practical
recommendations for how the student can improve in these areas, such as additional
practice, tutoring, or specific study strategies.
7.7.3 Teacher Feedback and Aggregate Performance Data
o Overall Class Results: Summarizing the class's overall performance, including average
scores, score distributions, and performance by content domain.
o Trends and Patterns: Identifying trends and patterns in the class's performance, such as
common strengths or areas where many students struggled.
o Comparative Analysis: Comparing the class's performance to previous assessments or to
other classes to identify changes or differences in achievement.
7.7.4 Instructional Recommendations
o Addressing Learning Gaps: Offering specific instructional strategies to address
identified learning gaps, such as focusing more on difficult topics, using different teaching
methods, or incorporating additional resources.


o Remediation Plans: Suggesting structured remediation plans for students who need extra
support, including after-school programs, peer tutoring, or individualized instruction.
o Professional Development: Recommending professional development opportunities for
teachers to enhance their skills in areas where students are struggling. For example, if
students consistently perform poorly in geometry, teachers might benefit from training in
effective geometry instruction techniques.

7.7.5 School and Province/Region Reports


o School-Level Summaries:
A summary of overall school performance (Govt. and Private), including average
scores, score distributions, and trends over time. Breaking down performance by
content domains to identify strengths and weaknesses across different subject areas and
comparing the school's performance to region or national benchmarks to provide
context and identify areas for improvement.
o Province/Region-Level Summaries:
o Comparative Data: Comparing performance across different schools within the
province/region to identify trends, disparities, and best practices.
o Resource Allocation: Using performance data to inform decisions about resource
allocation, ensuring that schools with greater needs receive appropriate support.

7.7.6 Policy Recommendations for Administrators and Policymakers


o Data-Driven Decisions: Providing detailed analysis and insights to school administrators
and policymakers to inform educational strategies and policy decisions.
o Identifying Needs: Highlighting areas where additional resources or interventions are needed,
such as professional development, curriculum adjustments, or student support services.
o Strategic Planning: Using the data to develop long-term strategic plans to improve
educational outcomes, address equity issues, and ensure that all students have access to
high-quality education.
7.8 Utilizing Analysis for Continuous Improvement
The comprehensive analysis of assessment data plays a pivotal role in driving continuous
improvement across the education system. This analysis empowers examination boards,
policymakers, decision-makers, teacher education departments, and teachers to identify gaps
in student learning, curriculum delivery, and assessment practices by providing actionable
insights. Boards can refine examination processes, policymakers can design evidence-based
reforms, and teacher education departments can update training programs to align with
emerging educational needs. This analysis offers valuable feedback for teachers to enhance
instructional strategies and align teaching practices with desired learning outcomes. Ultimately,
utilizing analysis fosters a culture of accountability, adaptability, and excellence, ensuring that
the education system evolves to meet the challenges of a dynamic world.
7.8.1 Curriculum Adjustment
o Identifying Weak Areas: Analyzing assessment data to pinpoint specific topics or skills
where students consistently underperform. For example, if many students struggle with
fractions, this area may need additional focus.
o Updating Curriculum Materials: Revising textbooks, workbooks, and other instructional
materials to better address identified weak areas. This might include adding more practice
problems, examples, or explanatory content.


o Aligning with Standards: Ensuring curriculum content remains aligned with national
standards and learning objectives. If assessment data reveals gaps between what is taught
and what is tested, curriculum adjustments can bridge these gaps.
o Integrating New Approaches: Incorporating new teaching methods or pedagogical
approaches that have been shown to improve understanding in specific areas. For example,
using more hands-on activities or real-world applications to teach abstract concepts.
7.8.2 Professional Development
o Addressing Identified Needs: Using assessment analysis to identify areas where teachers
might need additional support or training. For example, if students struggle with problem-
solving questions, teachers may benefit from training in teaching critical thinking skills.
o Providing Practical Strategies: Offering professional development that includes
practical, classroom-ready strategies that teachers can immediately implement. This could
involve workshops, seminars, or online courses.
o Collaborative Learning: Encouraging teachers to share best practices and successful
strategies with one another through professional learning communities or peer mentoring
programs.
o Continuous Improvement: Promoting a culture of continuous professional growth by
regularly updating training programs based on the latest educational research and feedback
from teachers.
7.8.3 Refining Future Assessments Based on Item Analysis and Feedback
o Improving Question Quality: Using item analysis to identify and revise or remove
problematic questions. For example, questions that are too difficult or do not effectively
discriminate between high and low performers can be reworked or replaced.
o Ensuring Fairness: Reviewing assessment items to ensure they are fair and unbiased,
providing all students with an equal opportunity to demonstrate their knowledge and skills.
o Balancing Difficulty Levels: Adjusting the mix of easy, medium, and difficult questions
to create a balanced assessment that accurately measures a range of student abilities.
o Feedback Integration: Incorporating feedback from teachers and students to improve the
clarity and relevance of assessment items. This might involve simplifying complex
language or providing clearer instructions.

7.8.4 Informing Educational Policies and Resource Allocation Based on Performance Data
o Data-Driven Decision Making: Using detailed performance data to inform policy
decisions at the school, district, and national levels. For example, data might highlight the
need for policy changes in areas such as testing practices, resource allocation, or
curriculum standards.
o Resource Allocation: Allocating resources more effectively based on identified needs. For
instance, if certain schools or regions consistently underperform, they may receive
additional funding, support staff, or instructional materials.
o Equity and Access: Ensuring that all students have equitable access to high-quality
education by addressing disparities identified through performance data. This might
include targeted support for disadvantaged groups or regions.
o Long-Term Planning: Using performance data to develop long-term strategic plans aimed
at improving overall educational outcomes. This can involve setting specific goals,
timelines, and metrics for success.


7.9 Communicating Results to Stakeholders

Effective communication of assessment results is essential for ensuring that all stakeholders—
students, parents, teachers, administrators, policymakers, and examination boards—can
understand and act on the insights derived from the data. This process involves presenting
results in a clear, concise, and accessible format, using tools like reports, dashboards, and
graphical representations to cater to diverse audiences. Tailored communication ensures that
each stakeholder receives relevant information, whether it is detailed analysis for policymakers,
performance summaries for teachers, or individual feedback for students. By bridging the gap
between data and decision-making, clear communication fosters transparency, builds trust, and
empowers stakeholders to take informed actions to improve educational outcomes.

7.9.1 Students and Parents


o Performance Metrics: Presenting key performance data in a clear, concise format that is
easily understood by both students and parents. This includes overall scores, scores by
content domain, and comparisons to class averages or performance benchmarks.
o Visual Aids: Using charts, graphs, and other visual aids to make complex data more
accessible. For example, bar charts showing performance in different content areas or pie
charts depicting the proportion of correct and incorrect answers.
o Plain Language: Avoiding technical jargon and using straightforward language to explain
the results and their implications.
o Specific Recommendations for Improvement: Providing specific, actionable
recommendations for how students can improve their performance. For example,
suggesting additional practice problems for areas of weakness or recommending tutoring
sessions. Offering information about available resources, such as study guides, online
tutorials, or extra classes, can help students address their learning gaps. Encouraging
students to set specific, achievable goals based on their performance data, for example,
aiming to improve their score in a particular content area by the next assessment.
7.9.2 Teachers: Comprehensive Data to Guide Instructional Strategies
o Performance Breakdown: Providing teachers with detailed data on student
performance, including item-level analysis, domain-wise performance, and individual
student scores.
o Identifying Trends: Helping teachers identify trends and patterns in student
performance, such as common misconceptions or frequently missed questions. This
can inform instructional planning and focus areas.
o Instructional Adjustments: Using the data to make informed decisions about
instructional adjustments, such as reallocating time to difficult topics or incorporating
different teaching methods.
o Professional Learning Communities: Organizing regular meetings where teachers
can discuss assessment results, share insights, and collaborate on strategies to address
common challenges.
o Sharing Successful Strategies: Encouraging teachers to share successful instructional
strategies and interventions that have proven effective in their classrooms.
o Continuous Improvement: Fostering a culture of continuous improvement by
regularly reviewing assessment data, discussing its implications, and adjusting
instructional practices accordingly.
7.9.3 Educational Authorities for Policy Briefs
o Key Insights: Providing summarized findings from assessment data, highlighting key
insights that can inform policy decisions. This might include trends in student
performance, areas of concern, and potential causes of disparities.
o Evidence-Based Recommendations: Offering evidence-based recommendations for
policy changes or initiatives aimed at improving educational outcomes, for
example, suggesting the introduction of new instructional programs or changes to
existing curriculum standards.
o Clear Communication: Ensuring that policy briefs are clear, concise, and accessible
to non-specialists, including policymakers, school administrators, and other
stakeholders.
7.9.4 Educational Authorities for Strategic Planning
o Long-Term Goals: Using assessment data to set long-term educational goals and
priorities. For example, aiming to improve math proficiency rates by a certain
percentage over the next five years.
o Resource Allocation: Informing decisions about resource allocation based on
identified needs. For example, directing additional funding or support to schools
or regions with lower performance.
o Monitoring Progress: Establishing metrics and benchmarks to monitor progress
towards strategic goals. Regularly reviewing assessment data to track progress and
make adjustments as needed.
o Stakeholder Engagement: Engaging with a broad range of stakeholders, including
teachers, parents, and community members, to gather input and build support for
strategic initiatives.
7.10 Conclusion

Post-examination analysis is an indispensable part of the assessment process, providing a
foundation for informed decision-making and continuous improvement in education. By
leveraging advanced statistical techniques and psychometric tools, it ensures that assessments
are valid, reliable, and aligned with curriculum objectives. This phase not only identifies trends
and gaps in student performance but also highlights the effectiveness of teaching methods and
assessment practices. The insights gained from post-examination analysis enable examination
boards, policymakers, educators, and teacher training institutions to refine their strategies,
promote equity, and enhance the overall quality of education. Ultimately, post-examination
analysis serves as a bridge between assessment outcomes and actionable improvements,
fostering a culture of accountability, transparency, and excellence in the education system.


Chapter 8. Policy Decisions on IBCC Forum


The following key policy decisions have been made during various meetings of the IBCC Forum.

1. The passing threshold is being raised from 33% to 40% with the intent of producing
quality results. Therefore, the award of grace marks may be stopped upon
implementation of the new grading scheme.
2. Up to a maximum of seven grace marks may be awarded in one subject only for those
students appearing in second attempts for improvement of grades. There will be no
indication on the result card/certificate about the award of grace marks.
3. The respective Education Departments may refine student learning outcomes
(SLOs) and revise examination syllabi based on the National Curriculum 2006 of SSC
and HSSC. All concerned teachers' training institutions in respective Provinces may
provide training to teachers according to the SLOs. Provincial Governments may be
requested to impart necessary training to the teachers for the purpose.
4. Textbook Boards and Curriculum Bureaus of the respective Provinces may ensure that
minor changes are made to textbook content every year to discourage the production
of helping books/guides. Textbook Boards are to print textbooks annually so that the
preparation of cheating material in guides and pocket notes is discouraged. Paper
setters will be required to prepare their own questions and will be barred from setting
papers from textbook exercises, guides, or the papers of the previous three to four
years. Provincial Governments and Textbook Boards may be requested to make yearly
textbook changes.
5. All BISEs may set uniform examination standards across the board for both SSC and
HSSC examinations based on the scheme of assessment, type of questions/items,
cognitive levels, and weighting for objective/subjective questions.
6. A gradual move is highly recommended toward higher cognitive level questions,
starting with Knowledge 30%, Understanding 50%, and Application 20%. However,
the range may vary from subject to subject and Board to Board.
7. BISEs are to ensure the selection of competent professionals and build their capacity
as item writers, reviewers, paper setters, markers, and other exam-related professionals.
Detailed SOP and guidelines are provided in this policy document. IBCC may offer
capacity-building training to officers/teachers involved in assessment.
8. Teacher training institutions in the respective Provinces may organize intensive training
courses to shift their teaching-learning from single textbooks to curriculum/SLOs
based classroom practices in collaboration with the Provincial Education Departments,
Education Institutions, and other key stakeholders.
9. Establish Item Banks at the IBCC level where reviewed and piloted items are regularly
and securely stored as part of the examination development process. The Boards at the
provincial level may develop one Item Bank as a Central Repository in one of the
Boards having appropriate facilities such as Lahore for Punjab, Peshawar for KP,
Karachi for Sindh, Quetta for Balochistan, and Federal Board for AJK/GB. The Central
Item Bank will be established at the IBCC level to facilitate other Boards by regular
in/outflow of items/questions once all the provinces have adopted a uniform Scheme
of Studies. In setting every paper, 20% of the items shall be taken from the Central
Item Bank of IBCC to ensure National uniformity and standardization. BISEs, BTEs,
and the IBCC Secretariat will jointly work to develop the Central Item Bank and the

Page | 63
Model Assessment Framework
percentage of items to be taken from the Central Item Bank by the member Boards.
10. The practical examination shall be separated from the theory. Students must pass
theory and practical separately. A student who fails the practical but passes the theory
will appear only in the practical in the next examination, and vice versa.
11. The practical examination shall be conducted in two phases.
a. The written component of the practical examination will be based on MCQs,
and the answer scripts will be evaluated through OMR at the concerned BISEs.
The written practical examination for all subjects will be conducted on the
same day and shall be reflected in the date sheet of the commencing examination.
b. The concerned teacher of the respective subject shall act as the examiner for
his/her class.
12. It was recommended that, irrespective of the allocation of marks for the practical
examinations, the practical exam should be separated from the theory, so that a student
who fails the practical may retake the practical only, and vice versa. The passed portion
of the exam, whether theory or practical, should remain to the student's credit, and the
student may retake only the failed portion. It was also recommended that practical
notebooks marked and certified by the examiners be deposited with the Boards.
13. All BISEs need to share the best practices in Pre-Conduct, During Conduct, and Post-
Conduct SOPs for conducting fair and transparent examinations at the IBCC Forum.
14. All BISEs may select examination centers carefully based on defined criteria. All
BISEs may select supervisory staff with good reputations and integrity and train them
on their critical role and responsibilities. Therefore, they may make the
examination/assessment/marking duty of teachers/principals obligatory, and no excuse
would be accepted in refusing examination duty.
15. Supply examination material on time, ensure the safety and security of material through
check and balance, and ensure the security of answer scripts in packing at the centers
and dispatch to Board offices. Maintain security and law and order situation at the
centers in collaboration with local administration and education institutions. Develop
a secure and safe storage system for answer scripts at the Board offices and ensure the
safety and security of answer scripts during the marking process.
16. Research & Development Units must be established at each Board, with capacity
building and provision of necessary facilities, to evaluate results, categorize institutions
based on results (subject-wise), and convey the same to the institutions.
17. Prepare and send “Performance Reports” to affiliated institutions highlighting the key
achievements as compared to overall averages with areas needing improvement,
preferably subject-wise.
18. Research key issues. Each Board is expected to conduct at least one research study
annually based on the analysis of the annual results of SSC and HSSC. The IBCC
Secretariat may urge member Boards to carry out regular research and analysis of
results. IBCC will remain in touch with the member Boards on this issue.
19. Recheck the marks of the top twenty position holders by subject/group and overall,
and maintain zero tolerance for errors in preparing examination results.

20. Organize a Heads of Institutions conference as an annual feature to discuss ways and
means of improving school/college performance based on lessons learned, and
organize review and follow-up meetings involving officers of the Education
Department, Heads, and other key stakeholders. Boards (BISEs and BTEs) will
organize the annual conference at their respective levels. However, IBCC shall monitor
implementation by the Boards.

Implementation Plan for New Grading Scheme of Pakistani Examination Boards at SSC and HSSC Examination

According to the new grading scheme, students will be graded based on GPA and CGPA,
which are also proposed to be included in certificates along with letter grades.
What is GPA?
Grade point average (GPA) is a raw score average based on letter grades. Each letter grade
is assigned a numerical value on a 0-4 or 0-5 point scale, depending on the institution. There
is no universal way to calculate GPA, since methods vary by country and by institution. In
general, calculating a GPA requires identifying the grading scale, translating each letter grade
to its corresponding numerical value within the scale, and then averaging those values.
How to Calculate GPA and CGPA
1. Record the point value for each grade. Using the five-point scale, write down the correct
point value next to each grade. So, if you have an A+ in a class, record a 4.5.
2. Add up all the values of your grades. For example, if you received an A+ in Biology, a
B+ in English, and a C in Chemistry, you would add up the totals as follows:
4.5 + 3.3 + 2.5 = 10.3.
3. Divide this total by the number of courses you are taking. With a total of 10.3 on a
5-point scale across three courses, the GPA is 10.3 / 3 ≈ 3.4. You have a GPA of 3.4.

Table 12. Example of Grading Points for Grade 11

S.No. Course Marks out of 100 Grade Grading Point


1 Mathematics 92 A+ 4.5
2 Chemistry 81 B++ 3.7
3 Physics 73 B 3.0
4 English 68 C 2.5
5 Urdu 65 C 2.5
6 Pakistan Studies 75 B+ 3.3
7 Islamiat 45 E 1.0

GPA = Sum of Grading Points / Number of Courses

GPA = (4.5 + 3.7 + 3.0 + 2.5 + 2.5 + 3.3 + 1.0) / 7 = 20.5 / 7 ≈ 2.93

Table 13. Example of Grading Points for Grade 12

S.No. Course Marks out of 100 Grade Grading Point


1 Mathematics 100 A++ 5.0
2 Chemistry 97 A++ 5.0
3 Physics 99 A++ 5.0
4 English 96 A++ 5.0
5 Urdu 95 A++ 5.0
6 Pakistan Studies 96 A++ 5.0
7 Islamiat 92 A+ 4.5

GPA = (5.0 + 5.0 + 5.0 + 5.0 + 5.0 + 5.0 + 4.5) / 7 = 34.5 / 7 ≈ 4.93
CGPA = (GPA Grade-11 + GPA Grade-12) / 2
CGPA = (2.93 + 4.93) / 2 = 3.93 out of 5
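The worked example above can be reproduced with a short script; the grading points are taken directly from Tables 12 and 13.

```python
def gpa(grading_points):
    """GPA = sum of grading points / number of courses."""
    return sum(grading_points) / len(grading_points)

grade11 = [4.5, 3.7, 3.0, 2.5, 2.5, 3.3, 1.0]  # Grade 11 grading points (Table 12)
grade12 = [5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 4.5]  # Grade 12 grading points (Table 13)

gpa11, gpa12 = gpa(grade11), gpa(grade12)
cgpa = (gpa11 + gpa12) / 2                     # CGPA = average of the yearly GPAs

print(f"Grade 11 GPA: {gpa11:.2f}")            # 2.93
print(f"Grade 12 GPA: {gpa12:.2f}")            # 4.93
print(f"CGPA: {cgpa:.2f} out of 5")            # 3.93
```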

Table 14. New Grading System for SSC and HSSC

Letter Grade | Percentage Range | Description | GPA (5.0 scale, 1-credit subject)
A++ | 95% to 100% | Exceptional | 5.0
A+ | 90% to below 95% | Outstanding | 4.7
A | 85% to below 90% | Excellent | 4.3
B++ | 80% to below 85% | Very Good | 4.0
B+ | 75% to below 80% | Good | 3.7
B | 70% to below 75% | Fairly Good | 3.3
C | 60% to below 70% | Above Average | 3.0
D | 50% to below 60% | Average | 2.0
E | 40% to below 50% | Below Average | 1.0
U | below 40% | Unsatisfactory | 0
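A minimal sketch of the Table 14 mapping from percentage to letter grade and grade point, assuming each band includes its lower bound and excludes its upper bound.

```python
# Bands from Table 14: (lower bound %, letter grade, grade point).
GRADE_BANDS = [
    (95, "A++", 5.0), (90, "A+", 4.7), (85, "A", 4.3), (80, "B++", 4.0),
    (75, "B+", 3.7), (70, "B", 3.3), (60, "C", 3.0), (50, "D", 2.0),
    (40, "E", 1.0),
]

def letter_grade(percentage):
    """Return (grade, grade point) for a percentage score per Table 14."""
    for cutoff, grade, points in GRADE_BANDS:
        if percentage >= cutoff:
            return grade, points
    return "U", 0.0  # below 40%: Unsatisfactory

print(letter_grade(92))  # ('A+', 4.7)
print(letter_grade(38))  # ('U', 0.0)
```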

The HEC, PMC, and PEC have already been taken on board regarding the new grading policy
and the new pattern of transcripts and certificates, so that they can devise admission criteria
accordingly.


Chapter 9. Guidelines and SOPs


This section outlines the Guidelines and Standard Operating Procedures (SOPs) essential for
ensuring a standardized, reliable, and valid assessment process. The procedures are designed
to guide all individuals involved in developing, administering, and evaluating assessments,
ensuring consistency and fairness across the examination system. These guidelines ensure that
every stage of the assessment process, from item writing to grading, adheres to national
standards, promoting equity and excellence in the education system.

1. Guidelines and SOPs for Item Writers

2. Guidelines and SOPs for Reviewers

3. Guidelines and SOPs for Moderators and Other Assessment Personnel

4. Guidelines and SOPs for Test Assembly

5. Guidelines and SOPs for Test Administration

6. Guidelines and SOPs for Marking/Coding


Figure 11. Guidelines and Standard Operating Procedures (SOPs) for Assessment Staff

9.1 Guidelines and SOPs for Item Writers


Item writers are responsible for developing high-quality test questions that accurately measure
the knowledge, skills, and abilities outlined in the curriculum. Their work ensures that
assessments are valid, reliable, and aligned with educational goals.
9.1.1 Guidelines for Item Writing
The following are the guidelines and SOPs for effective item writing, focusing on developing
clear, valid, and curriculum-aligned test items that accurately assess student learning and
cognitive abilities.
i. Alignment with Curriculum and SLOs
o Ensure that every item is directly aligned with specific Student Learning
Outcomes (SLOs) outlined in the National Curriculum.
o Clearly indicate which SLO each item is addressing, ensuring comprehensive
coverage of curriculum content.

ii. Cognitive Level


o Use Bloom’s Taxonomy to ensure a balanced distribution of items across
different cognitive levels (e.g., knowledge, comprehension, application, analysis).
o Include higher-order thinking questions that encourage critical thinking and
problem-solving.

iii. Clarity and Simplicity


o Write items using simple, clear language suitable for the student’s grade level.
o Avoid complex sentences, double negatives, and unnecessary jargon.

iv. Contextual Relevance:
o Where applicable, items should relate to real-life scenarios or culturally relevant
contexts to enhance understanding and engagement.
o Avoid culturally biased items that may disadvantage any group of students.

v. Fairness and Inclusivity:


o Ensure that items do not unfairly advantage or disadvantage any group based on
gender, socio-economic status, or regional differences.
o Make accommodations for students with special needs by using simple and
straightforward language.

vi. Use of Graphics and Diagrams:


o Include diagrams, charts, or other visuals only when necessary, and ensure they are
clear and unambiguous.
o Ensure that the visuals contribute to the understanding of the item rather than distract
or confuse students.

vii. Constructing Multiple Choice Questions (MCQs):


o Ensure that the stem (the question part of the MCQ) is clear and presents a complete
problem.
o Distractors (incorrect options) should be plausible and not misleading or overly
difficult.
o Avoid using “all of the above” or “none of the above” unless absolutely necessary.
o Ensure that the correct answer is defensible based on curriculum knowledge.

viii. Constructed Response Items (Essays/Short Answers)


o Provide clear and specific instructions outlining what is expected in the response.
o Avoid overly broad questions; instead, ask for a specific aspect or analysis.
o Define how long or detailed the answer should be, particularly in essay-type
questions.

9.1.2 SOPs for Item Writers


The following are the SOPs for item writers, outlining the standards and procedural steps to
ensure the development of high-quality, unbiased, and curriculum-aligned test items.
Step 1: Understand the SLOs: Review the curriculum and the specific SLOs for the subject
and grade level before starting to write items.
Step 2: Initial Item Writing: Create a draft of each item, ensuring it aligns with an SLO and
fits within the cognitive level required. Make sure each item is self-contained and does not
rely on external knowledge beyond the curriculum.
Step 3: Peer Review: Share the draft with at least one other item writer for feedback. Look
for clarity, alignment with SLOs, and balance in cognitive levels.
Step 4: Revise and Finalize Items: Revise the items based on feedback, ensuring that all
content is accurate, fair, and aligned with educational goals.
Step 5: Pilot Testing: Where applicable, conduct pilot testing of items with a small group of
students to ensure they are understandable and perform as expected. Use pilot data to analyze
the difficulty and discrimination indices of each item.

Step 6: Submit for Review: Once the items are finalized, submit them to the reviewers for
further evaluation, accompanied by the item’s intended SLO and cognitive level.

9.2 Guidelines and SOPs for Reviewers


Reviewers ensure that the test items created by the item writers are valid, reliable, and aligned
with the curriculum. They provide quality assurance by checking for clarity, accuracy, and
alignment with assessment standards.
9.2.1 Guidelines for Reviewers
The following are the guidelines for reviewers, designed to assist in evaluating and refining
test items to ensure clarity, alignment with curriculum standards, and the elimination of biases
or errors.
i. Alignment with Curriculum:
o Ensure that every item corresponds directly to a specific SLO from the National
Curriculum.
o Verify that the cognitive level (e.g., recall, analysis, application) is appropriate for
the item.
ii. Clarity and Wording:
o Check that each item is clearly worded and free of ambiguous terms or complex
structures.
o Ensure that language is suitable for the grade level and target audience.
iii. Bias and Inclusivity:
o Assess each item for potential bias, ensuring that no item disadvantages any group
of students based on gender, culture, or socio-economic background.
o Check for fairness and inclusivity, ensuring that special needs considerations are
made where necessary.
iv. Cognitive Level:
o Ensure that a balanced number of questions is provided across different cognitive
levels (e.g., recall, understanding, application, analysis).
o Encourage the inclusion of higher-order thinking questions to challenge students
appropriately.
v. Accuracy and Relevance:
o Verify the factual accuracy of each item and ensure it is relevant to the curriculum.
o Ensure diagrams, charts, or graphs used in questions are accurate and contribute
meaningfully to the question.
vi. Constructing Multiple Choice Items:
o Check that distractors in MCQs are plausible and not misleading.
o Ensure the correct answer is defensible and based on curriculum knowledge.
o Avoid absolute terms such as “always” or “never,” and avoid double negatives.
vii. Constructed Response Items:
o Ensure that constructed response questions (short answers or essays) are clear in
their expectations.
o Check that the questions are specific and aligned with the intended learning
outcome.

9.2.2 SOPs for Reviewers
The following are the SOPs and steps for reviewers, providing structured protocols to critically
evaluate and validate test items, ensuring they meet the required standards of quality, fairness,
and alignment with learning objectives.
Step 1: Review Alignment with SLOs: Check if each item aligns with a specific SLO.
Ensure that there is balanced coverage across all necessary content areas.
Step 2: Evaluate Cognitive Levels: Verify that a range of cognitive levels is represented,
ensuring a balance of recall, understanding, application, and analysis items.
Step 3: Test for Clarity and Simplicity: Review each item for clarity. Reword items that are
overly complex or confusing for students. Ensure the item instructions are clear and concise.
Step 4: Check for Bias and Inclusivity: Evaluate each item for potential cultural or gender
bias. Make necessary adjustments to ensure fairness across different student groups.
Step 5: Suggest Improvements: Provide constructive feedback to the item writer on how to
improve clarity, relevance, and cognitive level if necessary.
Step 6: Approve or Reject Items: Mark items as “approved,” “conditionally approved with
revisions,” or “rejected with reasons,” ensuring that only high-quality items are included in
the final assessment.
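Steps 2 and 6 lend themselves to a simple tally over the reviewed pool: count items by cognitive level and track review statuses so that only approved items move forward. The following is a minimal sketch, assuming each review record carries `cognitive_level` and `status` fields; the field names, sample data, and the 50% recall threshold are illustrative assumptions.

```python
from collections import Counter

# Hypothetical review records; levels and statuses follow the SOPs above
items = [
    {"id": "Q01", "cognitive_level": "recall",        "status": "approved"},
    {"id": "Q02", "cognitive_level": "understanding", "status": "approved"},
    {"id": "Q03", "cognitive_level": "application",   "status": "conditionally approved"},
    {"id": "Q04", "cognitive_level": "analysis",      "status": "rejected"},
    {"id": "Q05", "cognitive_level": "recall",        "status": "approved"},
]

# Rejected items are excluded from the usable pool
usable = [i for i in items if i["status"] != "rejected"]
levels = Counter(i["cognitive_level"] for i in usable)

print("Status tally:", Counter(i["status"] for i in items))
for level, count in levels.items():
    share = count / len(usable)
    print(f"{level:>13}: {count} items ({share:.0%})")

# Flag imbalance, e.g., if recall items exceed half of the usable pool
if levels["recall"] / len(usable) > 0.5:
    print("Warning: pool is skewed toward recall; add higher-order items.")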
9.3 Guidelines and SOPs for Moderators
Moderators ensure that the overall test is fair, valid, and aligned with the curriculum.
Their role is to review the full set of items, ensuring balance in difficulty, cognitive
levels, and content coverage.
9.3.1 Guidelines for Moderators
The following are the guidelines for moderators, aimed at ensuring the quality and consistency
of assessments by overseeing the review and validation processes for test items and
examination papers.
i. Balanced Test Coverage:
o Ensure that the test adequately covers all key topics in the curriculum, with a
balanced representation of various subject areas.
o Ensure that the test has an appropriate mix of easy, moderate, and difficult items.

ii. Cognitive Levels:


o Verify that items cover a range of cognitive levels from recall to higher-order
thinking (application, analysis, synthesis, evaluation).
o Avoid having too many items focused solely on lower cognitive levels
(recall/recognition).

iii. Fairness and Inclusivity:


o Check for any items or phrasing that may disadvantage certain student groups
(e.g., due to regional or cultural differences).
o Make sure the test accommodates diverse learning needs and is accessible to
students with disabilities.

iv. Clarity and Structure:
o Ensure that the test follows a logical structure and that instructions for each section
are clear and easy to understand.
o Check the flow of the test, ensuring there are no abrupt shifts in difficulty or content
between sections.

v. Time Management:
o Ensure that the estimated time for completing the test is reasonable based on the
difficulty and number of items.
o Make sure that time limits for sections are clearly indicated.

vi. Security and Confidentiality:


o Ensure that items are kept confidential during the moderation process and that all
procedures related to the test’s security are followed.

9.3.2 SOPs for Moderators


The following are the SOPs for moderators, outlining standardized procedures to maintain
fairness, reliability, and alignment of assessments with curriculum standards during the
moderation process.
Step 1: Review the Full Test for Balance: Examine the entire test to ensure a balanced
representation of content, difficulty levels, and cognitive processes.
Step 2: Check for Clarity and Coherence: Review each section to ensure clarity and
consistency. Make suggestions for improving readability and clarity where necessary.
Step 3: Ensure Fairness: Identify and address any biased language or scenarios in the
items. Ensure inclusivity and fairness for all students.
Step 4: Approve the Final Test: Sign off on the test once it meets all standards for
balance, clarity, inclusivity, and coverage of key curriculum content areas.

Step 5: Prepare Moderation Report: Provide a detailed report on any changes made or
issues found during moderation, outlining how the test aligns with curriculum goals.

Figure 12. Process and Guidelines for Item Moderators

9.4 Guidelines and SOPs for Test Assembly
Test assembly involves putting together the final version of the assessment, ensuring that
items are logically ordered, visually clear, and balanced across topics and difficulty levels.
9.4.1 Guidelines for Test Assembly
The following are the guidelines for test assembly, focusing on creating balanced, fair, and
curriculum-aligned assessments by carefully selecting and organizing test items based on
cognitive levels, content coverage, and SLOs.
i. Logical Flow:
o Assemble items in a logical sequence, starting from easier items and progressing to
more challenging ones to prevent student fatigue.
o Group similar types of items (e.g., multiple-choice, short-answer) together.
ii. Clarity of Instructions:
o Ensure that instructions for each section are clear and unambiguous.
o Highlight any special instructions (e.g., for essays or problem-solving tasks) to avoid
confusion during the test.
iii. Visual Presentation:
o Ensure that the test is visually clear, with appropriate spacing between items and
enough space for students to write their answers.
o Check that diagrams, charts, and visuals are clearly printed and labeled.
iv. Consistency:
o Ensure that the formatting of all items (font, size, numbering) is consistent throughout
the test.
o Use clear and uniform numbering systems and make sure page numbers are clearly
indicated.
v. Time Allotment:
o Make sure that time allotments are indicated at the start of each section.
o Balance the time students are expected to spend on each section with the difficulty
and length of items.

9.4.2 SOPs for Test Assembly


The following are the SOPs for test assembly, providing detailed protocols to ensure systematic
and standardized construction of tests.
Step 1: Organize Items by Section: Organize the test into sections based on item types
(MCQs, short answers, essays). Ensure a logical order and smooth transitions between
sections (an ordering sketch follows Step 5).
Step 2: Review Visual Layout: Ensure that the visual layout is clean and professional. Check
for proper spacing, clear labels for diagrams, and readable fonts.
Step 3: Final Check for Consistency: Double-check for consistency in formatting (headings,
font size, numbering), clarity of instructions, and item alignment.
Step 4: Review Time Estimates: Ensure that appropriate time limits are provided for each
section based on item complexity and length.
Step 5: Submit for Approval: Once the test is assembled, submit it to the assessment
committee or moderators for final approval before printing or distribution.
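
Guideline i and Step 1 above reduce to a single two-key sort: group items by section type, then order each section from easier to harder. The following is a minimal sketch, assuming each item carries a type label and a pilot difficulty index (higher p means easier); the section order shown is an assumed paper layout, not a mandated one.

```python
# Hypothetical assembled pool: (item_id, item_type, pilot difficulty p)
pool = [
    ("Q7", "essay", 0.45),
    ("Q1", "mcq",   0.82),
    ("Q4", "short", 0.60),
    ("Q2", "mcq",   0.55),
    ("Q5", "short", 0.74),
]

SECTION_ORDER = {"mcq": 0, "short": 1, "essay": 2}  # assumed paper layout

# Group similar item types together; within a section, easier items
# (higher proportion-correct) come first, per guideline i.
assembled = sorted(pool, key=lambda it: (SECTION_ORDER[it[1]], -it[2]))

for item_id, item_type, p in assembled:
    print(f"{item_type:>5}  {item_id}  (p = {p:.2f})")
```

In practice the sort output would still be reviewed by the moderators, since content coverage and transitions between topics cannot be captured by difficulty values alone.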



Figure 13. Process and SOPs for Test Assembly

9.5 Guidelines and SOPs for Test Administration


Test administration involves the logistical handling of the test, including its distribution to
students, monitoring during the test, and ensuring security and fairness in the testing
environment.

9.5.1 Guidelines for Test Administration


The following are the guidelines for test administration, aimed at ensuring a fair, transparent,
and conducive environment for conducting examinations while maintaining strict adherence to
rules and procedures.
i. Distribution of Materials:
o Ensure that all test materials are distributed to students in an organized and timely
manner.
o Check that every student has the required materials (e.g., question papers, answer
sheets, calculators, etc.).

ii. Monitoring and Invigilation:


o Ensure that invigilators are trained to actively monitor students during the test,
minimizing distractions and opportunities for cheating.
o Make sure that invigilators are positioned throughout the testing room to maintain
the visibility of all students.

iii. Fairness and Accessibility:


o Ensure that students with special needs have the necessary accommodations (extra
time, special seating, and assistive devices).
o Prevent any bias in the seating arrangement or invigilation process.

iv. Security Measures:


o Implement strict security protocols to prevent cheating, including checking
student IDs and securing the testing area.
o Ensure that no unauthorized materials (e.g., mobile phones, notes) are brought
into the examination room.

v. Time Management:
o Start and end the test exactly on time. Provide warnings before the end of the test
period to help students manage their time.

vi. Handling Emergencies:
o Have a contingency plan in place to deal with emergencies (e.g., power outages,
medical issues).
o Train invigilators to handle disruptions while maintaining the integrity of the test
environment.
9.5.2 SOPs for Test Administration
The following are the SOPs for test administration, outlining standardized steps and
protocols for the smooth and secure execution of examinations, including handling unforeseen
situations and maintaining integrity.
Step 1: Distribute Test Materials: Ensure that students receive the correct test materials,
including question papers and answer sheets. Provide any required tools (e.g., calculators,
geometry sets).
Step 2: Monitor Students: Monitor students consistently throughout the test, ensuring that
no unauthorized collaboration or cheating occurs.
Step 3: Provide Time Warnings: Announce time warnings (e.g., 10 minutes remaining) to
help students manage their time.
Step 4: Handle Issues: If a student requires assistance (e.g., for clarification or technical
difficulties), handle it promptly without disrupting other students.
Step 5: Collect and Secure Test Materials: At the end of the test, collect all materials
promptly, ensuring no items are left behind. Securely store test papers for grading.
9.6 Guidelines and SOPs for Marking/Coding
The following are the guidelines and SOPs for marking/coding, focusing on ensuring
consistency, accuracy, and impartiality in the evaluation process, while maintaining alignment
with predefined marking schemes.
9.6.1 Guidelines for Marking/Coding
The following are the key guidelines for marking and coding.
i. Use of Marking Schemes:
o Ensure that all markers are trained on and follow the official marking schemes or
rubrics.
o Marking schemes should provide clear guidance for each item, including
acceptable variations in responses.

ii. Fairness and Consistency:


o Apply the same standards to all students’ responses to ensure fairness.
o If more than one marker is involved, use moderation or double-marking to ensure
consistency in subjective responses (e.g., essays).

iii. Confidentiality:
o Maintain confidentiality of student responses and grades during the marking
process.

iv. Timeliness:
o Ensure that marking is completed within the specified timeframe without
sacrificing quality or consistency.

v. Appeals Process:
o Establish a clear process for handling student appeals or requests for re-marking.

9.6.2 SOPs for Marking/Coding


The following are the SOPs for marking/coding, providing detailed instructions for securely
handling, coding, and evaluating answer scripts to ensure reliability and transparency in the
results compilation process.
Step 1: Understand the Marking Scheme: Review the marking scheme in detail before
beginning the marking process to ensure a clear understanding of what is expected.
Step 2: Apply the Rubric Consistently: Use the rubric consistently for each student’s
response, ensuring fairness and uniformity.
Step 3: Record Marks Accurately: Record each mark accurately in the grading system,
checking for errors or inconsistencies before final submission.
Step 4: Double-Check Subjective Items: For essays or short-answer questions, consider
double-marking to ensure consistency and accuracy (an agreement-check sketch follows Step 5).
Step 5: Handle Appeals: Be prepared to handle student appeals or re-marking requests in a
professional and transparent manner, following the institution’s guidelines.
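
Step 4's double-marking can be monitored with a simple agreement check before results are compiled. The following is a minimal sketch, assuming two markers independently score the same scripts on a 0-5 rubric; the ±1-band tolerance and the third-marker escalation rule are illustrative assumptions, not prescribed policy.

```python
# Hypothetical rubric scores (0-5) from two independent markers
marker_a = [4, 3, 5, 2, 4, 1, 3, 5]
marker_b = [4, 2, 5, 3, 4, 1, 2, 5]

TOLERANCE = 1  # scripts differing by more than one band go to a third marker

exact = sum(a == b for a, b in zip(marker_a, marker_b)) / len(marker_a)
within = sum(abs(a - b) <= TOLERANCE for a, b in zip(marker_a, marker_b)) / len(marker_a)
flagged = [i for i, (a, b) in enumerate(zip(marker_a, marker_b))
           if abs(a - b) > TOLERANCE]

print(f"Exact agreement:  {exact:.0%}")
print(f"Within ±{TOLERANCE} band:   {within:.0%}")
print(f"Scripts for third marking: {flagged}")
```

Low agreement on a particular marker or item would prompt re-training on the marking scheme or refinement of the rubric before marking continues.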


Resources
As part of the Model Assessment Framework’s commitment to enhancing assessment quality
and standardization, the Federal Board of Intermediate and Secondary Education (FBISE) has
developed a comprehensive subject-specific assessment framework for each core subject at the
SSC and HSSC levels, aligned with the 2022-23 Curriculum. These frameworks provide a
robust structure for educators and examiners to implement consistent and meaningful
assessments across Pakistan.

Each subject-specific assessment framework includes the following:
o Guidance on Formative and Summative Assessments
o Distribution of SLOs across Formative and Summative Assessments
o Cognitive Level Classification of Each SLO
o Comprehensive List of Student Learning Outcomes (SLOs)
o Detailed Table of Specification (TOS)
o Patterns for MCQs, Extended Response Questions (ERQs), and Short Response Questions (SRQs)
o Model Examination Paper

To access the detailed assessment frameworks for each subject, including SLOs and model
exam papers, please visit the IBCC’s website at https://ibcc.edu.pk/MAF/

This resource is a valuable tool for educators, examiners, and stakeholders, supporting the
successful implementation of the Model Assessment Framework by providing structured,
accessible subject-specific materials.


Lead Contributors
in Developing the Model Assessment Framework Policy Document
Dr. Ghulam Ali Mallah
Executive Director, IBCC, Islamabad

Dr. Ghulam Ali Mallah is the Executive Director of the Inter Boards Coordination Commission
(IBCC) and a distinguished scholar with a Post-doctorate in Educational Technology &
Academic Leadership from the University of Glasgow. With over 26 years of experience in the
education sector, Dr. Mallah has made significant contributions, particularly in digitalizing
examination and certification processes in Pakistan. He has led a range of groundbreaking
reforms in the national assessment system, focusing on digitalization, policy formulation, and
improving efficiency. Under his leadership, innovative practices such as e-office systems and
ISO certification have been successfully implemented at IBCC, transforming the assessment
and certification landscape. His vision for incorporating technology into education has
revolutionized the way assessments are conducted, ensuring transparency, inclusivity, and
adherence to international standards. Dr. Mallah's forward-thinking approach continues to
shape national policies, making lasting contributions to the quality and integrity of Pakistan's
educational system.

Dr. Sajid Ali Yousuf Zai
Consultant, IBCC, Islamabad

Dr. Sajid Ali Yousuf Zai holds a PhD from the University of Arkansas, USA, and is an
accomplished expert in educational research, data analytics, psychometric analysis, and
large-scale assessments. With over 13 years of experience, he has contributed to various
national policy documents for FBISE, IBCC, and HEC, including developing the Model
Assessment Framework (MAF). His work digitalizes the assessment process, aligns
assessments with Student Learning Outcomes (SLOs), and advances reliable, inclusive, and
standardized assessments. His expertise in Artificial Intelligence and EdTech tools enabled
him to develop comprehensive and interactive Exam Result Dashboards for examination
boards, enhancing data visualization, analysis, and reporting for informed decision-making.
Dr. Sajid has also conducted numerous professional trainings for teachers to enhance their
pedagogical skills and research capabilities and to prepare them for the challenges of
21st-century education. He is the author of two published books: "Quantitative Data Analysis:
Simply Explained using SPSS" and "21st Century Skills Development".

Dr. Muhammad Shafique
Chairman, BISE Abbottabad

Dr. Muhammad Shafique is the Chairman of the Board of Intermediate and Secondary
Education (BISE), Abbottabad, and a seasoned educational assessment and evaluation expert.
He has led numerous initiatives, including developing Student Learning Outcome (SLO)
based examination systems and implementing rubric-based on-screen marking. Dr. Shafique's
efforts have significantly contributed to the modernization of the assessment framework
across Khyber Pakhtunkhwa, ensuring that exams align with national standards and Pakistan's
evolving educational needs. His work in teacher training and capacity building continues to
impact the country's education system.
Mirza Ali
Director, Research & Academics, FBISE

Mirza Ali is the Director of Research and Academics at FBISE, with over 20 years of
experience in academic leadership and assessment strategies. He has contributed significantly
to secondary education, particularly in developing assessment frameworks and aligning
examinations with modern educational standards. As a key contributor to the IBCC
Assessment Framework, he has played a vital role in its conceptualization and refinement,
ensuring alignment with international best practices for student evaluation. Throughout his
career, he has remained dedicated to enhancing the quality of education through
student-centric learning and the alignment of curriculum and assessment practices. As
Director, he has spearheaded several initiatives to improve assessment processes within the
federal education system. His work emphasizes holistic student assessment and the
integration of cognitive levels (knowledge, understanding, application, analysis, and
synthesis) to foster critical thinking and lifelong learning, preparing students for both
academic excellence and real-world challenges.
Ms. Riffat Jabeen
Director Academics, Federal Directorate of Education, Islamabad

Ms. Riffat Jabeen is the Director of Academics at the Federal Directorate of Education (FDE),
where she manages academic programs for over 432 educational institutions in Islamabad
territory, serving around 226,000 students. With expertise in Applied Psychology and Science
Education, she has been at the forefront of curriculum development, centralized examinations,
and professional development programs. Her leadership of innovative projects such as the
Tele-School initiative during the COVID-19 pandemic significantly expanded education
access for millions of students, and she has pioneered several initiatives to improve Pakistan's
literacy, STEM education, and blended learning. During the development of the Model
Assessment Framework, her input and valuable suggestions were instrumental in shaping the
final policy document.

Dr. Nasir Mahmood
Director, Punjab Examination Commission

Dr. Nasir Mahmood is a renowned figure in educational research and assessment in Pakistan,
currently serving as Director of Assessment at the Punjab Examination Commission. He holds
a Ph.D. in Education from the University of Education. Dr. Mahmood's expertise lies in
psychometrics, educational assessment frameworks, and data analysis, using advanced
methodologies like Classical Test Theory and Item Response Theory. His contributions
include the development of item banks, formative and summative assessments, and
large-scale evaluations that have enhanced the quality and fairness of education in Punjab.
Internationally trained at London Metropolitan University, he has collaborated with leading
organizations such as the World Bank and the National Foundation for Educational Research
(UK). His career is defined by leadership in educational reforms, professional training for
educators, and a commitment to improving student learning outcomes across Pakistan.
Dr. Muhammad Zaigham Qadeer
Director, Policy Research, Pakistan Institute of Education (PIE)

Dr. Muhammad Zaigham Qadeer is the Director of Policy Research at the Pakistan Institute of
Education (PIE), where he leads key education policy initiatives. He has been actively
involved in developing the Draft Education Plan and the National Education Policy
Framework 2024, supporting provincial governments in policy formation. Dr. Zaigham has
contributed to PIE's Three-Year Operational Plan and Policy Briefs on Girls' Education and
Public Financing, and has led policy dialogues that impact the education sector. With a PhD in
Curriculum Evaluation, he has extensive academic and administrative experience,
collaborating with international organizations such as UNESCO, JICA, and the World Bank
to drive educational reforms in Pakistan.
Dr. Shafqat Ali Janjua
Joint Education Advisor, Ministry of Federal Education and Professional Training

Dr. Shafqat Ali Janjua is the Joint Education Advisor in the Ministry of Federal Education and
Professional Training and the Lead of the National Curriculum Council. He has played a key
role in developing numerous national education policies and frameworks, including the
National Curriculum Framework 2024, the Adult Literacy Curriculum 2024, and the Inclusive
Scheme of Studies 2024. His contributions to curriculum development, teacher professional
development, and curriculum-based assessment are well recognized. He has authored
language textbooks and contributed to significant initiatives such as the SOLO-Based
Assessment Manual for FBISE and the National Professional Standards for Teachers in
Pakistan. He has formulated and implemented key indicators for teacher performance
appraisal in educational institutions under the Federal Directorate of Education, Islamabad.
Dr. Janjua has held various leadership roles in education and remains dedicated to improving
teacher performance and educational standards in Pakistan.

Dr. Muhammad Idrees
Director, National Assessment Coordination, Pakistan Institute of Education (PIE)

Dr. Muhammad Idrees has more than twenty years of experience in teaching, academics, and
administration in various positions. He earned his doctorate in the discipline of Education,
during which he gained special expertise in curriculum-based assessment from the University
of Leicester. He has teaching experience at the school, college, and university levels, and
extensive experience in curriculum and textbook development, review, and approval in the
Federal Ministry of Education in various positions. He has served as Director of Academics
and Curriculum in the Higher Education Commission, where he developed curricula in
various disciplines based on Outcome-Based Education (OBE). He also has experience
heading a renowned and prestigious institute of higher education in Punjab. Currently, he
heads the National Assessment Wing of the Pakistan Institute of Education, Ministry of
Federal Education and Professional Training, Islamabad.
Syed Amar Hassan Gilani
Director Coordination and Legal Affairs, IBCC, Islamabad

Syed Amar Hassan Gilani is the Director of Coordination and Legal Affairs at IBCC,
Islamabad. In developing the Model Assessment Framework (MAF), Mr. Gilani's
contributions were pivotal. He led the coordination efforts with key stakeholders, sharing
drafts, collecting feedback, and compiling suggestions to shape a truly representative
framework. Through his management skills, he ensured that all voices, from provincial boards
to subject experts and policy advisors, were heard, making the MAF a collaborative national
initiative. His dedication to involving diverse stakeholders has been instrumental in creating a
framework that reflects Pakistan's educational diversity and regional needs.
