
Guidelines For Pre and Post Test Design

Pre- and post-tests are used to measure knowledge gained from a training. A pre-test establishes participants' baseline knowledge before training. The same or comparable post-test after training allows comparing scores to see if knowledge increased. Pre- and post-tests should have well-written, clear questions focused on course objectives. Tests should be validated by reviewing with participants and staff to ensure questions are understood as intended. Analyzing individual and question-level results helps evaluate the training's effectiveness.


Pre- and post-tests are used to measure knowledge gained from participating in a training

course. The pre-test is a set of questions given to participants before the training begins in
order to determine their knowledge level of the course content. After the completion of the
course, participants are given a post-test to answer the same set of questions, or a set of
questions of comparable difficulty. Comparing participants’ post-test scores to their pre-test
scores enables you to see whether the training was successful in increasing participant
knowledge of the training content.

When deciding whether or not to take the time to do both a pre- and a post-test, consider
first what you most want to learn about your training. If you want to understand exactly
what knowledge can be credited to the training itself, using a pre- and post-test
methodology is important. If, instead, you only need to know whether participants can
demonstrate content knowledge or skills by the end of the training, a pre-test is not
necessary.

Remember, one of the limitations of any test of knowledge administered immediately after
training is that it will not tell you what people will remember one week or one year after the
training, nor whether they will apply what they learned in their work.

Developing a Pre- and Post-Test

Tests are instruments or tools used to measure change. If the instrument itself is faulty, it cannot accurately measure changes in knowledge. A valid and reliable pre- and post-test must be made up of well-written and clear questions. The following are some tips for creating good questions:

Create questions that focus on the primary course objectives. Try to develop at least one
question for each course objective. This will ensure that you are asking participants to
demonstrate their knowledge of what course developers determined are the most
important concepts to learn across the entire course. You can go one step further by asking
yourself, “What are the 10 most important things in this course that a physician — or other
health professional — needs to know about HIV care?” Then create your questions from this
list of 10 concepts, facts, or skills.

For a workshop or training on a highly technical subject, such as antiretrovirals (ARVs) or opportunistic infections (OIs), a content expert generally needs to develop the test in order to ensure that incorrect options are plausible, that the right content is covered, etc.

Do not create questions that demand the memorization of extraneous (i.e., picky) detail. Participants should not be tested on whether they remember a particular word or phrase, or whether they remember if prevalence rates were 13% or 15%, but rather on whether they have learned important concepts and related facts.
Only include questions for which clear answers were provided during the course. Do not test participants on concepts or knowledge that were not sufficiently covered in the course. If there are important concepts you think should be covered in the course but were not, integrate this information into your own evaluation of the training and recommend that it be included the next time the course is taught.

Develop a test that will take between 10 and 25 minutes to complete. The amount of time
spent on pre- and post-tests should vary depending on the length of the overall training
course and the type of questions asked. It is reasonable that a test covering a week-long
training would be longer than a test covering a two-day training.

Tips for Creating Questions

There are a variety of question types, such as true/false and multiple-choice, that can be used in your test. You can ask respondents to demonstrate more specific, detail-oriented learning with multiple-choice questions than with true/false questions.

Creating True/False Questions

Construct questions that are simply worded, to the point, and unambiguous. Simple
sentences are straightforward and have fewer words than more complex, multi-phrase
sentences. Vocabulary that can be interpreted in different ways makes it much more difficult
for respondents to answer.

Example: “There are many ways a person can become infected with HIV” uses a word
(many) that can be interpreted in perhaps ten different ways. A better question would be
one that focused on a single mode of transmission: “An individual can become infected with
HIV through a needle stick.”

Stay away from conjunctions such as “and,” “but,” “except,” and “or.” These words imply a
second idea or concept and can be confusing when respondents are answering True/False
questions.

Example: The true/false question stem, “HIV can be transmitted during intercourse but only
if the individuals are not using a condom,” is problematic. Although the question appears to
be true, HIV can be transmitted even if individuals are using a condom during intercourse.
The “but” creates too much potential for ambiguity and leaves room for confusion for the respondent.
Creating Multiple-Choice Questions

Develop responses that are substantively distinct from one another.

Answers in a multiple-choice question that are too similar do not provide a respondent with a clear choice. Such questions can end up testing respondents' ability to distinguish slight differences in spelling or definition instead of asking them to make important choices among crucial concepts in HIV care.

Example: Which of the following is the name for the ARV drug abbreviated as ABC?

Abacab

Abacavan

Abacavir

Abracadabra

Although it might be important for participants in an ARV course to know that Abacavir is
the name for ABC, these responses are more about how well they can distinguish slight
variations in spelling.

A better selection of responses would be:

Abacavir

Amprenavir

Acyclovir

Amphotericin B

Develop “incorrect” responses that are potentially plausible but clearly wrong.

Even your most knowledgeable learners should not find the correct answer extremely
obvious; respondents should be presented with a selection of answers that they must
consider carefully. In the example above, the correct response in the first list was too obvious for most English speakers; for those who speak English as a second or third language, it may amount to little more than a spelling or vocabulary test.

Make the multiple-choice question text longer than the text of the answers.

The majority of information should be in the question, not the answers. Participants should
not be overwhelmed with words when attempting to answer the question correctly.

Review your questions and answers for usability.

Cover up the answers and look at the question. Someone knowledgeable about the course content should be able to answer the question without looking at the answers. If possible, ask a second colleague knowledgeable about the course content to take the test and see how they answer the questions. If they have problems answering, then chances are your questions are not specific enough.
Validating Pre- and Post-Tests

All pre- and post-tests must be validated before they are considered a reliable data collection tool. If participants get a question wrong, it should be because of a lack of knowledge, not because the question was poorly written and interpreted differently than intended, because it had more than one correct answer, or because it addressed content that was not taught in the course. When a participant gets a question correct, it should be a result of knowledge in that subject area, not because the incorrect answers were so implausible that it was easy to guess the correct answer.

As the first step in the validation process, ask four local staff to take the test. Ask them to mark any questions that were unclear to them while they were taking the test. Have staff discuss their answers to the questions with you, ensuring that their understanding of the test questions matches what was intended. Although staff may not be representative of the participants who will take the test, this is a good first step for clarifying questions and responses before you give the test to a group of training participants.

The most important step of the validation process takes place with the participants
themselves. After administering the post-test to training participants, review the answers as
a group. Ask participants to explain their answers to the questions to better understand how
they were interpreting the questions. It should be clear from the discussion which questions
were confusing to participants and which ones were clearly written. For questions answered incorrectly, the discussion should help determine whether the question was confusing but participants actually understood the concept being tested, or whether participants did not acquire the intended knowledge for some reason. Rewrite the test based on this feedback and administer it to another set of participants to make sure the adjustments clarified any confusing questions.

Analyzing Pre- and Post-Test Results

The final step is to analyze the results of the pre- and post-tests both by participant and by
question. Looking at the data in both of these ways will help you learn about both the type
of participant that learned the most from the training (e.g., those with high or low pre-
existing knowledge) and the areas of the training that were most effective for the whole
group. Using Excel, SPSS, or another statistical software package to analyze the results is not
required, but it will greatly facilitate the analysis process. Create a spreadsheet where each
row is a single participant (identified by ID number) and where each question has two
columns — one column indicating if the participant correctly answered the question on the
pre-test and one indicating if the participant correctly answered the question on the post-
test.
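
As a rough illustration of that layout, the sketch below uses Python with pandas; the file name, the participant ID column, the ten-question count, and the column naming scheme (q1_pre, q1_post, and so on, with 1 for a correct answer and 0 for an incorrect one) are assumptions made for this example, not part of the original guidance.

import pandas as pd

# One row per participant; for each question q1..q10, two columns
# ("q1_pre", "q1_post", ...) hold 1 if answered correctly, else 0.
df = pd.read_csv("pre_post_results.csv")        # assumed file name
question_ids = [f"q{i}" for i in range(1, 11)]  # assumed 10 questions

# Total score per participant on each test, plus the gain.
df["pre_score"] = df[[f"{q}_pre" for q in question_ids]].sum(axis=1)
df["post_score"] = df[[f"{q}_post" for q in question_ids]].sum(axis=1)
df["gain"] = df["post_score"] - df["pre_score"]

print(df[["participant_id", "pre_score", "post_score", "gain"]])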

Look at changes in correct responses by individual. Did each individual's score increase? Are there any discernible patterns indicating which participants' scores increased the most? For example, in a multidisciplinary training, did nurses' scores increase more than doctors'? Did the overall range of scores change for the group between pre- and post-testing?
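
If the spreadsheet also records each participant's role (an assumed extra column, added here only for illustration), a quick grouped summary can surface such patterns:

# "role" is an assumed column (e.g., nurse, doctor) recorded per participant.
print(df.groupby("role")[["pre_score", "post_score", "gain"]].mean())

# Did the overall range of scores change between pre- and post-testing?
print("Pre-test range:", df["pre_score"].min(), "to", df["pre_score"].max())
print("Post-test range:", df["post_score"].min(), "to", df["post_score"].max())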

Think about the knowledge level of the audience your training is targeting.

If the training is aimed too high, those who scored high on the pre-test will show the most increase on the post-test, while those who scored low may show very little increase.

Alternatively, if lower scores climb and higher scores are stable, the training may be aimed
too low.

Depending on the purpose of the evaluation information, you may want to see if any
knowledge increase was statistically significant.
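
For matched pre- and post-test scores, one common approach to such a check is a paired test. The sketch below, which assumes the pre_score and post_score columns built earlier, uses a paired t-test, with a Wilcoxon signed-rank test as a non-parametric alternative; it is only an illustration, not a prescribed analysis.

from scipy import stats

# Paired t-test: are post-test scores significantly higher than pre-test scores?
t_stat, p_value = stats.ttest_rel(df["post_score"], df["pre_score"])
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Wilcoxon signed-rank test as a non-parametric alternative.
w_stat, w_p = stats.wilcoxon(df["post_score"], df["pre_score"])
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.4f}")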

Next, look at changes in knowledge by question to uncover which parts of the training were
most effective, that is, resulted in the most increase in knowledge.

Remember that a lack of change in knowledge could indicate either a poorly designed test
question or a weakness in the curriculum.

If pre-test scores are high, then there will be little room for knowledge gain as measured on
the post-test.

If there are questions that multiple participants are missing in both the pre- and post-test,
consider adjustments to the curriculum to strengthen weak or unclear content areas.
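
A simple per-question summary along these lines might look like the sketch below (again assuming the column naming and data frame from the earlier example). Questions nearly everyone got right on the pre-test have little room to show gain, while questions missed by many on both tests point to weak or unclear content areas, or to a flawed question.

# Percentage of participants answering each question correctly, pre vs. post.
for q in question_ids:
    pre_pct = df[f"{q}_pre"].mean() * 100
    post_pct = df[f"{q}_post"].mean() * 100
    note = ""
    if pre_pct > 90:
        note = "  (little room for gain)"
    elif post_pct < 50:
        note = "  (possible weak or unclear content area, or a flawed question)"
    print(f"{q}: pre {pre_pct:.0f}% -> post {post_pct:.0f}%{note}")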

Review all the data to make sure you haven’t missed any clues and document any additional
interesting findings.

Use the results to make any necessary adjustments to the training.
