ASSIGNMENT – I
SUBMITTED BY:
Assessment in education means checking how much students have learned. Teachers use it to see
if students understand the lessons and to decide how to teach better. It can be done in different
ways, like tests, quizzes, class activities, or even asking questions during lessons. Some
assessments help students improve while they are learning (formative), and others check what
they have learned at the end (summative). The main goal is to help students learn better and to
make sure learning goals are being met.
Formative assessment happens during the learning process. It helps teachers understand how
students are doing so they can give help when needed.
Summative assessment happens at the end of a topic or course. It is used to check what students
have learned overall.
Both types of assessment are important: formative assessment helps improve learning while it is happening, and summative assessment checks the results of learning at the end. Scholars such as Dylan Wiliam have emphasized that the two serve complementary purposes.
● Formative Assessment
Formative assessment is a way for teachers to check how students are learning while they are
still being taught. It helps teachers see what students understand and what they need help with.
This type of assessment is not for grades but to give feedback and improve learning. Examples
include asking questions in class, giving short quizzes, checking homework, or having group
discussions. The main goal is to help students learn better before the final test or project.
The main purpose of formative assessment is to help students learn better while they are still
learning. It gives both the teacher and the student information about how well the student
understands the lesson. Here are some key purposes:
1. Give Feedback:
Formative assessment helps teachers give students helpful comments. This shows
students what they are doing well and what they need to improve.
2. Guide Teaching:
It helps teachers decide what to teach next. If many students are confused, the teacher
can review or explain the topic in a different way.
3. Encourage Participation:
It makes learning more active. Students can ask questions, share ideas, and take part in
their own learning.
● COMMON TECHNIQUES
1. Short Quizzes – Quick, low-stakes checks of what students have understood so far.
2. Class Discussions – Talking about a topic in class to hear students' thoughts and ideas.
3. Peer Reviews – Students read each other’s work and give helpful feedback.
4. Exit Tickets – Before leaving class, students write one thing they learned or a question
they still have.
5. Think-Pair-Share – Students think alone, talk with a partner, then share with the class.
6. Thumbs Up/Down – A quick way to see who understands (thumbs up) or needs help
(thumbs down).
● BENEFITS
Formative assessments have many benefits in the classroom. They help improve student
engagement because students are more active and involved in lessons when they know they will
share their ideas or answer questions. These assessments also support personalized learning, as
teachers can see who needs extra help and adjust their teaching to fit each student’s needs.
Students get quick feedback, so they know what they’re doing well and what they need to work
on. Since formative assessments are low-pressure and not final exams, they reduce stress and
help students focus on learning.
In their influential review "Inside the Black Box", Black and Wiliam found that formative
assessment significantly improves learning, especially for low-achieving students. They argued
that giving timely feedback and involving students in the learning process helps close the gap
between current and desired performance.
🔹 Heritage (2010)
Margaret Heritage stressed that formative assessment is most powerful when it’s part of daily
classroom practice. She also highlighted the role of student self-assessment and peer feedback
in building responsibility for learning.
● Summative Assessment
Summative assessment is a way to check what students have learned at the end of a topic, unit,
or course. It is usually a big test, project, or assignment that shows how much a student knows
after learning is finished. Examples include final exams, end-of-term projects, or standardized
tests. The main goal is to give a final grade or score to show how well the student understood the
material.
Summative assessment is used to check how much students have learned at the end of a lesson,
unit, or course. Its main purpose is to give a final judgment about student learning. This helps
with grading, shows how well a student met the learning goals, and is often used for
accountability. It also helps with standardization, meaning it checks if students across different
classes or schools are learning the same things.
Common forms of summative assessment include:
● End-of-unit projects – Big assignments that show what students learned in a whole unit.
● Portfolios – A collection of a student’s best work over time, judged at the end.
Relying too much on summative assessment like final exams or standardized tests can lead to
what educators call “teaching to the test.” This means teachers may focus only on test content,
rather than helping students truly understand the material or develop critical thinking skills.
While summative assessments are useful for measuring performance, they do not always support ongoing learning. Students may also feel high levels of stress when grades are the only focus.
On the other hand, using only formative assessment, such as quizzes, discussions, and peer
feedback, may create a more supportive and engaging learning environment, but it can lack clear,
measurable outcomes. Teachers might find it harder to track long-term progress or compare
learning across groups. Without summative measures, schools also miss the data needed for
accountability and planning at a system level.
That’s why a balance of both is essential. Formative assessment helps students learn by
providing feedback during the learning process, allowing them to improve. Summative
assessment measures how much they have learned at the end, giving structure and goals to work
toward.
Research on feedback shows that effective feedback (a core part of formative assessment) has one of the highest
impacts on student achievement. Meanwhile, summative assessments remain important for
ensuring all students meet required learning standards.
“When the cook tastes the soup, that’s formative; when the customer tastes the soup, that’s summative.” – Robert Stake
Aspect | Formative Assessment | Summative Assessment
Purpose | To support learning during the process | To measure learning after instruction is complete
Examples | Quizzes, class discussions, peer feedback, exit tickets | Final exams, standardized tests
Feedback | Immediate and used to guide learning | Given after completion, usually with a grade
Stress Level for Students | Usually low-stakes and informal | Often high-stakes and formal
Use by Teachers | To adjust teaching and give support | To evaluate student achievement and report
Accountability Role | Helps students take responsibility for learning | Used for grades, reporting, and system accountability
Practical strategies such as using feedback on drafts, blending formal and informal assessments,
and encouraging self-assessment help make this balance achievable in the classroom. These
approaches make learning more engaging, targeted, and effective.
When teachers thoughtfully combine formative and summative assessments, they create a
supportive and accountable learning environment. Looking ahead, such balanced assessment
practices will not only boost academic achievement but also empower students to take ownership
of their learning, laying the foundation for long-term success in school and beyond.
REFERENCES:
Question: 02
Clear learning objectives are the foundation of effective assessment.
Analyze how Bloom's Taxonomy can guide the alignment of objectives
with the assessment method.
Introduction:
In the field of education, clear learning objectives serve as a foundation for effective teaching
and learning. Learning objectives are precise statements that articulate what learners are
expected to know, understand, or be able to do by the end of a lesson or course. They guide
instructional planning, inform learners of expected outcomes, and form the basis for measuring
academic progress. Without clearly defined objectives, teaching lacks direction, and assessment
becomes misaligned with the intended learning goals. Assessment, on the other hand, is the
systematic process of gathering evidence to evaluate learners’ knowledge, skills, attitudes, or
performance. Its primary purpose is not only to measure achievement but also to inform
instruction, provide feedback, and enhance student learning. However, for assessment to be truly
effective, it must align directly with the learning objectives it aims to evaluate. This alignment
ensures that students are being tested on what they were actually taught and intended to learn.
One widely accepted framework that aids in establishing this alignment is Bloom’s Taxonomy.
Bloom’s Taxonomy
● Created by Benjamin Bloom (1956) and revised by Anderson & Krathwohl (2001), it describes six levels of cognitive skill, from simplest to most complex:
1. Remember
2. Understand
3. Apply
4. Analyze
5. Evaluate
6. Create
When writing objectives, teachers often check them against criteria such as the following:
● R – Relevant: The objective should match the subject and course goals.
Most importantly, learning objectives must match the assessments. If the goal is to help
students understand a concept, the assessment should test understanding—not just
memory. When learning objectives and assessments are aligned, it becomes easier to
measure true learning and make sure teaching is effective.
“If you don’t know where you’re going, you’ll end up somewhere else.”
– Yogi Berra
Later, in 2001, two researchers named Anderson and Krathwohl revised the taxonomy to make
it more modern and easier to use. The new version focuses more on how students use knowledge,
not just remember it.
The revised Bloom’s Taxonomy has six levels of thinking (as listed at the start of this answer),
starting from simple to more complex. These levels help teachers create learning objectives that
match different types of thinking.
1. Remember
The "Remember" level is the first and most basic stage in Bloom’s Taxonomy. At this
level, students are asked to recall or recognize facts, terms, basic concepts, or simple
answers. It does not require deep understanding—just the ability to remember something
that was taught. This level is important because remembering is the foundation for all
other learning. Before a student can understand or apply something, they must first be
able to remember what it is.
2. Understand
The "Understand" level is about showing that you can explain something in your own
words. It means you do not just remember facts; you know what they mean. For example,
if you are asked to “Explain how photosynthesis works,” you should describe how plants
use sunlight, water, and carbon dioxide to make food. This shows that you understand the
idea, not just memorize it.
3. Apply
The "Apply" level means using what you’ve learned in real-life situations. It’s about
taking your knowledge and putting it to work. For example, if you know a math formula,
like how to calculate area, you might use it to figure out how much paint you need for a
wall. This shows that you can take classroom learning and use it in everyday tasks.
➤ Example: “Use a math formula to solve a problem.”
4. Analyze
The "Analyze" level is about breaking something into parts to understand it better. It
means looking closely at how things are related or different. For instance, if you compare
mammals and reptiles, you might notice differences in their body coverings or how they
reproduce. This helps you think deeper about how things work.
➤ Example: “Compare the features of mammals and reptiles.”
5. Evaluate
The "Evaluate" level is about making judgments using criteria or evidence. It means deciding how good, useful, or correct something is and explaining why. For example, you might weigh two possible solutions to a problem, decide which one works better, and give reasons for your choice.
➤ Example: “Judge which solution to a problem is most effective and explain why.”
6. Create
The "Create" level is about using what you know to make something new. It means
putting ideas together in original ways. For example, if you learned about some story
characters, you could write your own story using them in a new plot. This shows
imagination and a strong understanding of what you’ve learned.
➤ Example: “Write a story using the characters you learned about.”
This order moves from lower-order thinking (like remembering and understanding) to higher-
order thinking (like evaluating and creating). Bloom’s Taxonomy helps teachers make sure
students do not just memorize; they also think deeply, solve problems, and work creatively. It is a
step-by-step way to help students become better thinkers and learners.
Bloom’s Taxonomy provides action verbs associated with each cognitive level (e.g., identify,
explain, solve, critique, design). These verbs help educators write specific and measurable
objectives, which are easier to assess accurately. For example, “Describe the process of
photosynthesis” (Understand) versus “Create a model showing the process of photosynthesis”
(Create).
This alignment ensures assessments truly measure the intended skill, not just surface-level
knowledge.
Conclusion
In essence, Bloom’s Taxonomy serves as a valuable framework for aligning learning objectives
with assessment methods in a deliberate and meaningful way. By clearly defining the level of
cognitive skill intended, whether it is remembering, understanding, applying, or creating,
educators can ensure that assessments truly measure what students are expected to learn. This
alignment not only strengthens the validity of assessments but also enhances instructional
planning and supports deeper student engagement. When learning objectives are clear and
assessments are purposefully designed to match them, teaching becomes more focused, learning
becomes more effective, and students are better equipped to achieve academic success.
REFERENCES:
● University of Limerick. (n.d.). Quick Tips for Teaching Online: Constructive alignment:
using Bloom’s Digital Taxonomy in curriculum design to align teaching, learning and
assessment. Retrieved from https://www.ul.ie/node/12760
● Verywell Mind. (2023). How Bloom's Taxonomy Can Help You Learn More Effectively.
Retrieved from https://www.verywellmind.com/blooms-taxonomy-and-learning-7548280
● https://www.niallmcnulty.com/2023/12/using-blooms-taxonomy-for-formative-assessment-enhancing-classroom-strategies
Question: 03
How can diagnostic assessments identify learning gaps, and what
follow-up strategies should teachers adopt? Illustrate with examples.
Definition (Diagnostic Assessment)
Diagnostic assessment is an assessment carried out before instruction begins in order to find out what students already know, what skills they have, and what misconceptions they may hold.
Importance (Diagnostic Assessment)
The primary importance of diagnostic assessment lies in its ability to identify learning gaps, that is, the differences between what students currently understand and what they are expected to know. By
uncovering these gaps at the outset, teachers can tailor their instruction to address specific needs,
differentiate learning experiences, and offer timely interventions. This not only supports
struggling students but also prevents advanced learners from being held back by redundant
instruction. Ultimately, diagnostic assessment ensures that teaching is responsive, targeted, and
inclusive, leading to more effective learning outcomes for all students.
Diagnostic assessment is a crucial tool in education, characterized by several key features that
distinguish it from other forms of assessment. It is low-stakes, meaning it does not contribute to a student’s final grade, which helps reduce anxiety and encourages honest responses. It is also pre-instructional, administered before a new unit or topic begins, with the primary goal of assessing students’ existing knowledge, skills, and potential misconceptions. Furthermore, diagnostic assessment is skill-focused, aiming to pinpoint specific areas where learners need support or
enrichment.
Common diagnostic tools include:
● Short quizzes:
Used to assess students' prior knowledge and basic understanding of key concepts.
● Interviews:
Allow teachers to probe individual student thinking, uncover misconceptions, and clarify
understanding through dialogue.
● Concept maps:
Help visualize how students connect and organize information, revealing the depth of
conceptual understanding.
● Writing samples:
Assess both subject-specific knowledge and language skills, especially useful in literacy
and content-based writing tasks.
● Entry slips:
Brief written responses completed at the beginning of a lesson to provide quick insights
into students’ existing knowledge or questions about a topic.
Diagnostic assessment plays a foundational role in the assessment cycle by informing both
formative and summative practices. It allows teachers to plan instruction strategically and
differentiate learning from the outset. As Heritage (2010) aptly stated, “Diagnostic assessments
are designed not to grade but to guide”, highlighting their purpose in shaping effective,
student-centered instruction.
One of the most common learning gaps uncovered is in prior knowledge. If students have
missed or misunderstood foundational concepts in previous lessons or grades, they may struggle
to grasp more advanced material. For example, a student unable to recall multiplication facts will
likely have difficulty learning division.
Conceptual understanding gaps occur when students possess fragmented or incorrect ideas
about key concepts. For instance, in science, a student might believe that heavier objects fall
faster than lighter ones, which contradicts fundamental physics principles.
Gaps in procedural skills refer to difficulties in executing specific methods or processes, such as
solving equations, using grammar rules, or applying formulas. These types of gaps are often
identified when students know the theory but cannot apply it accurately.
Language proficiency can also affect performance, particularly in subjects like reading
comprehension, history, or science. Students who struggle with academic language may
understand a concept but be unable to express it effectively, leading to misjudged learning levels.
Teachers use various tools during diagnostic assessment to uncover these gaps, such as the short quizzes, interviews, concept maps, writing samples, and entry slips described above.
Example:
A math teacher gives a pre-unit diagnostic test on fractions and discovers that 40% of students
cannot distinguish between numerators and denominators. This specific conceptual gap signals
that without revisiting this foundational idea, students will struggle with more complex
operations like addition and multiplication of fractions.
Research backs this up. Tomlinson (2014) says that when teachers use diagnostic info to
differentiate instruction, it can dramatically close achievement gaps, especially for students who
face learning or language challenges. In short, smart follow-up strategies turn challenges into
opportunities for success.
Conclusion
Diagnostic assessments play a vital role in education by revealing students’ learning gaps before
instruction begins. Their true power lies not just in identifying these gaps but in guiding teachers
to respond effectively. Without timely and appropriate follow-up strategies, such as targeted instruction, formative checks, individualized support, and curriculum adjustments, diagnostic data
alone cannot improve learning outcomes. When teachers use this information thoughtfully, they
can tailor their teaching to meet each student’s unique needs. This responsive approach promotes
greater equity by ensuring that no learner is left behind due to unnoticed difficulties. It also
boosts student engagement, as lessons become more relevant and accessible. Ultimately,
combining diagnostic assessment with intentional, personalized teaching leads to higher
achievement and more meaningful learning experiences for all students. In this way, diagnostic
assessment becomes a powerful tool for creating inclusive classrooms where every student has
the opportunity to succeed.
REFERENCES:
● https://file.scirp.org/Html/5-1760968_67213.htm
● https://www.renaissance.com/2021/05/20/blog-the-importance-of-using-diagnostic-assessment-4-tips-for-identifying-learner-needs/
● https://thirdspacelearning.com/blog/maths-intervention-formative-assessment-diagnostic-tests/
Question: 04
Analyze the strengths and weaknesses of selection-type (e.g., MCQs)
and supply-type (e.g., essays) test items. When should each be
prioritized?
Introduction:
Selection-type questions (like multiple-choice) and supply-type questions (like essays) are two
common ways to test students. In selection-type questions, students choose the answer from
given options. In supply-type questions, students write their own answers. This paper looks at the
good and bad sides of each type, and explains when each one should be used. It will focus on
things like how fair and accurate the questions are, how well they measure thinking skills, and
how easy they are to use in real classrooms.
Selection-Type:
Selection-type items are praised for their scoring reliability and efficiency. They are
especially effective in large-scale assessments due to automated scoring and
objective marking (Haladyna & Downing, 2004). Furthermore, they can sample a
wide breadth of content in a limited time.
Supply-Type:
Supply-type items are valued for their ability to measure constructed responses and
higher-order thinking skills. They encourage deeper cognitive processing by
requiring test-takers to generate answers rather than choose from given options.
Although scoring may be more time-consuming, these items provide rich insights
into understanding and problem-solving abilities, making them essential for
assessing complex learning outcomes (Haladyna & Downing, 2004).
❖ Comparison Table
Aspect | Selection-Type Items | Supply-Type Items
Definition | Students choose from given answer options | Students construct their own answers
Cognitive Demand | Often low to moderate (recall, recognition) | High (analysis, synthesis, evaluation – upper Bloom’s taxonomy levels)
Practical Implementation | Very practical for large groups or standardized testing | More practical for small groups or in-depth assessments
★ Strengths of Selection-Type Items
● According to Haladyna and Downing (2004), MCQs are good for big exams because
machines or answer keys can score them quickly and fairly, with fewer mistakes.
★ Limitations of Selection-Type Items
● Studies by Roediger and Marsh (2005) show that MCQs can lead students to memorize
facts without deep thinking.
● Students might also guess the right answer, which makes it hard to know what they
really understand.
★ Strengths of Supply-Type Items
● Supply-type items such as essays are better at showing real understanding, especially when teachers want to test critical or creative thinking.
★ When to Prioritize Selection-Type Items (MCQs)
● MCQs are good for checking if students remember facts or understand basic ideas.
● MCQs are also great for large exams, like standardized tests (for example, SATs or
national exams), because they are fast and easy to grade.
★ When to Prioritize Supply-Type Items (Essays)
● Use essays when you want students to show deep understanding, explain their thoughts,
or give strong arguments.
● Essays are good for subjects like history, literature, or philosophy, where students need to
think deeply and write in detail.
● They are often used in final projects or capstone courses, where students show
everything they’ve learned.
In short:
● MCQs = quick, broad testing (good for facts and large groups).
● Essays = deep, thoughtful answers (good for complex thinking and writing skills).
Conclusion
In summary, both selection-type and supply-type test items have unique strengths and limitations
that make them suitable for different assessment goals. Selection-type items, such as multiple-
choice questions, offer efficiency, objective scoring, and broad content coverage, making them
ideal for large-scale or standardized testing. However, they may fall short in evaluating deep
understanding or complex reasoning. On the other hand, supply-type items, like essays, are
effective in assessing higher-order thinking, creativity, and the ability to construct arguments,
though they are more time-consuming to grade and may introduce subjectivity.
Choosing between these formats should depend on the intended learning outcomes, available
resources, and the cognitive level being assessed. For knowledge recall and wide content
sampling, selection-type items should be prioritized. For critical thinking, analysis, and written
communication, supply-type items are more appropriate. In many cases, a balanced combination
of both types can provide a more complete picture of student learning.
REFERENCES:
● https://ceidev.ust.hk/teaching-resources/assessment-learning/assessment-methods
● Precision Data Products – The Pros and Cons of Different Types of Test Questions
https://precisiondataproducts.com/2019/08/20/the-pros-and-cons-of-different-types-of-test-questions/
Question: 05
Compare internal consistency and inter-rater reliability, providing
examples of when each is crucial in classroom assessments.
1. Internal Consistency
Internal consistency refers to how well all the items in a test work together to measure the same skill or concept. When a test has high internal consistency, every question contributes to assessing the same learning goal.
Example: A teacher designs a 10-question quiz to assess students’ knowledge of fractions. If all
the questions relate clearly to fractions, the test has high internal consistency. But if a few
questions test unrelated math skills, students' scores may not truly reflect their understanding of
fractions, reducing the test’s reliability.
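Although the discussion above stays non-statistical, internal consistency is usually quantified with a reliability coefficient. A widely used one (not mentioned in this assignment, included here only as an illustrative sketch) is Cronbach's alpha, which compares the spread of scores on the individual items with the spread of the total scores:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)
\]

Here k is the number of items on the quiz, \(\sigma_i^{2}\) is the variance of scores on item i, and \(\sigma_X^{2}\) is the variance of students' total scores. Values close to 1 mean the items hang together well; a value of roughly 0.7 or above is often treated as acceptable for a classroom test, though the exact cut-off is a matter of convention.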
2. Inter-Rater Reliability
Inter-rater reliability refers to the degree of agreement or consistency between different people
(raters) who are scoring or judging the same student work. This is especially important in
subjective assessments like essays, art projects, presentations, or performance tasks where
personal judgment plays a role.
When inter-rater reliability is high, it means different teachers or evaluators give similar scores
for the same work. This helps ensure fair and unbiased grading, even when more than one person
is involved in the evaluation.
Example: Two teachers are grading student essays using the same rubric. If one teacher gives a
student an 85 and the other gives a 60 for the same essay, the scoring is inconsistent—showing
low inter-rater reliability. To improve this, the teachers might have a scoring meeting to discuss
the rubric and practice grading together to reach more similar results.
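Agreement between raters can also be expressed as a number. One common index (again an illustration rather than something this assignment requires) is Cohen's kappa, which corrects the raw percentage of agreement for the agreement two raters would reach by chance:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

Here \(p_o\) is the proportion of pieces of work on which the two raters agree, and \(p_e\) is the agreement expected by chance given each rater's own marking tendencies. As a purely hypothetical worked example: if two teachers agree on 16 of 20 essays (\(p_o = 0.80\)) and chance agreement works out to \(p_e = 0.54\), then \(\kappa = (0.80 - 0.54)/(1 - 0.54) \approx 0.57\), which is usually read as moderate agreement. Values near 1 indicate very consistent scoring, while values near 0 mean the raters agree no more often than chance.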
Internal consistency matters most when teachers are creating objective tests, such as:
● Multiple-choice questions
● True/false statements
Example: A science teacher gives a quiz on the water cycle. If half of the questions are actually
about weather patterns instead, the test lacks internal consistency, and students’ scores might not
truly reflect what they know about the water cycle.
Inter-rater reliability matters most when teachers are scoring subjective assessments, such as:
● Essay writing
● Lab reports
● Oral presentations
● Role-plays or debates
In these cases, different teachers or evaluators must interpret student responses, and their
personal views can influence how they score the work. Inter-rater reliability ensures that scoring
is fair and consistent, no matter who is marking the work. This is especially important when
grades are shared across different classes, schools, or evaluators.
Example: In a group presentation, one teacher might focus more on speaking skills while
another focuses on content. If they don’t use the same scoring criteria or interpret the rubric
differently, students could receive very different scores for the same work. High inter-rater
reliability ensures students are judged equally.
Comparison Summary
In conclusion, internal consistency and inter-rater reliability are both essential for creating
fair, accurate, and meaningful classroom assessments, but they apply in different situations.
Internal consistency focuses on whether all the items in a test are working together to measure
the same skill or concept. It is most important in objective tests, such as multiple-choice or
true/false quizzes, where all questions should be aligned to the same learning goal.
On the other hand, inter-rater reliability is about making sure that different teachers or scorers
grade students’ work in the same way. This type of reliability matters most in subjective
assessments, like essays, presentations, or creative projects, where personal judgment is involved
in scoring. While internal consistency improves the quality and focus of test items, inter-rater
reliability ensures fairness in grading across different evaluators. Both types of reliability help
teachers make more accurate decisions about student learning and progress. Understanding when
and how to apply each one is a key part of good assessment practice in the classroom.
REFERENCES: