Adv in Health Sci Educ (2010) 15:395–401

DOI 10.1007/s10459-009-9207-x

The testing effect on skills learning might last 6 months

C. B. Kromann • C. Bohnstedt • M. L. Jensen • C. Ringsted

Received: 28 May 2009 / Accepted: 1 October 2009 / Published online: 17 October 2009
© Springer Science+Business Media B.V. 2009

Abstract In a recent study we found that testing as a final activity in a skills course
increases the learning outcome compared to spending an equal amount of time practicing.
Whether this testing effect, measured as skills performance, can be demonstrated on a
long-term basis is not known. The research question was: does testing as a final activity in a
cardio-pulmonary resuscitation (CPR) skills course increase learning outcome when
assessed after half a year, compared to spending an equal amount of time practicing? The
study was an assessor-blinded randomised controlled trial. A convenience sample of 7th
semester medical students attending a mandatory CPR course was randomised to inter-
vention course or control course. Participants were taught in small groups. The intervention
course included 3.5 h skills training plus 30 min of skills testing. The practice-only control
course lasted 4 h. Both groups were invited to a retention assessment of CPR skills half a
year later. Participants included 89/180 (50%) of those invited to participate in the study.
Mean performance score was 75.9 (SD 11.0) in the intervention group (N = 48) and 70.3
(SD 17.1) in the control group (N = 41), effect size 0.4. The difference between groups was not
statistically significant, P = 0.06. This study suggests that testing as a final activity in a
CPR skills course might have an effect on long-term learning outcome compared to
spending an equal amount of time practicing the skills. Although this difference was not
statistically significant, the identified effect size of 0.4 can have important clinical and
educational implications.

Keywords Testing effect · Long term retention · Skills learning · Resuscitation ·
Assessment · Simulation training

C. B. Kromann · C. Bohnstedt · M. L. Jensen · C. Ringsted


Centre for Clinical Education, Copenhagen University and Capital Region of Denmark, Rigshospitalet,
Copenhagen, Denmark

C. B. Kromann (✉)
CEKU, Centre for Clinical Education, Rigshospitalet afsnit 5404, Teilumbygningen, Blegdamsvej 9,
2100 København Ø, Denmark
e-mail: [email protected]


Introduction

Testing has been demonstrated to have a powerful effect on memory of studied material
and can increase retention of learning more than additional study (Roediger and Karpicke
This testing effect applies even when results are not revealed to those being tested and
when test results have no consequences for the learner (Roediger and Karpicke 2006).
Hence the testing effect is rather intrinsic in nature and differs from formative assessment
and from the common notion that testing drives learning.
We have recently found that the testing effect known from knowledge learning (Roediger
and Karpicke 2006) also pertains to cardio-pulmonary resuscitation (CPR) skills when
learning outcome is assessed 2 weeks after a simulation-based CPR skills course (Kromann
et al. 2009). CPR skills in clinical practice have been demonstrated to deteriorate rather fast
following a course, and hence regular retraining and recertification after three to six months is
recommended (Berden et al. 1993; Woollard et al. 2006; Jensen et al. 2008). Thus, for
liability reasons alone, there is reason to investigate interventions that can prolong the
retention of CPR skills, and testing might be one of them. In knowledge learning the testing
effect has been demonstrated to last at least 3 months (Roediger and Karpicke 2006), but
beyond 3 months we have found no studies on either knowledge or skills learning.
Hence the aim of this study was to investigate whether a testing effect could be dem-
onstrated after a retention period of half a year. The specific research question was: Does
testing by the end of a CPR skills course for medical students increase retention of skills
after half a year as measured by assessment of performance in a simulated cardiac arrest
scenario, compared to spending an equal amount of time practicing?

Methods

The study was a randomised, controlled, assessor-blinded, retention-test-only intervention
study of the testing effect on the long-term learning outcome from a 4-h simulation-based
CPR skills course.

Study participants

An entire cohort of 7th semester medical students scheduled for a mandatory resuscitation
skills course was randomised to either the intervention course or the control course. After
participating in the course all students were invited to participate half a year later in
assessment of their CPR performance in a simulated cardiac arrest scenario. Participants
were offered 200 DKK (27 euro) for participating in the assessment and were informed that
they should not prepare for the follow-up assessment.

Settings

Both the intervention course and the control course were highly standardised in-hospital
CPR courses where students in groups of six were taught to provide basic life support and
apply safe defibrillation. Following a short theoretical introduction the major part of the
course consisted of practical training of resuscitation skills in simulated cardiac arrest
scenarios. The simulation mannequin was programmed with six basic cardiac arrest sce-
narios that were used with different case stories repeatedly through both courses.


The intervention course consisted of 3.5 h of instructor-led simulation training. After
this the students were tested one by one for 5 min each, while the other students observed.
The tester was not the same as the instructor.
The initial 3.5 h of the control course were identical to the initial 3.5 h of the
intervention course. However, instead of testing, the final half hour was used to run an
additional three to four scenarios in which the entire group was activated. The participants
in the intervention group had hands-on practice only twice during the testing session: 5 min
in which they were tested individually, and 5 min in which they assisted at another partici-
pant's test. The rest of the time they observed the testing of the other four participants in
the group. In the control group all participants were active during the scenarios for the entire
30 min.
No individual feedback was given after the tests, but the plenary feedback differed
from group to group as different issues surfaced during the scenarios. After the six 5-min
tests, the intervention group was given plenary feedback on three to four important issues
related to the standard procedures of the CPR skills taught. Similarly, the control group was
given a short plenary feedback highlighting three to four important issues related to the
standard CPR procedure taught.

Randomisation

Participants were given a random ID number, randomised into 30 groups of six students,
and then allocated to either the intervention or the control group. Five minutes before the
course, the instructor and the group were told whether the group was to be tested. All
randomisation sequences and tables were generated using www.random.org.
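
To make the allocation procedure concrete, the sketch below illustrates a comparable randomisation into 30 groups of six with subsequent allocation to the two arms. It is our own illustration only: the study used sequences from www.random.org, whereas this example relies on Python's random module, and all variable names are assumptions rather than part of the original protocol.

```python
# Illustrative sketch only: the original sequences came from www.random.org;
# here Python's random module stands in for that service.
import random

N_STUDENTS, GROUP_SIZE = 180, 6

# Give each student a random ID number by shuffling the ID pool
ids = list(range(1, N_STUDENTS + 1))
random.shuffle(ids)

# Split the shuffled IDs into 30 groups of six students
groups = [ids[i:i + GROUP_SIZE] for i in range(0, N_STUDENTS, GROUP_SIZE)]

# Allocate half of the groups to the intervention (tested) arm and half to control
arms = ["intervention"] * (len(groups) // 2) + ["control"] * (len(groups) // 2)
random.shuffle(arms)

for group, arm in zip(groups, arms):
    print(arm, sorted(group))
```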

Assessment of learning outcome

The students who accepted the invitation to participate in assessment of CPR performance
half a year after the course were contacted by e-mail 1 month before the date of assess-
ments to confirm their participation. Prior to the assessment the students were asked
whether they had prepared for the assessment and were excluded from the study if they
confirmed that they had.
Each participant was assessed individually in a simulated cardiac arrest scenario.
Before the assessment the assessor ran through a set of standardised instructions
regarding utensils, drugs and degree of assistance available during the scenario. Then the
assessor introduced the simulated case by stating: ‘‘You’re about to establish an IV
access when your patient, a 75 year old man, becomes unresponsive. You are now
required to manage this patient.’’ Each participant’s CPR performance was assessed
using a standard 25-item checklist (Kromann et al. 2009). Our checklist was converted
from a validated ERC Advanced Life Support Cardiac Arrest Scenario checklist
(Ringsted et al. 2007), and the inferences based on our checklist were expert-validated by an ERC
Advanced Life Support instructor. The assessments were conducted by one of two
assessors who were trained to use the checklist the same way. Prior to the assessments
they coordinated their use of the checklist by assessing several participants together and
discussing the scores until consensus was achieved. Both assessors were selected from a
group of resuscitation instructors, but neither of them had been teaching the participants
of this study.


Sample size and statistical analysis

A minimum sample size calculation was based on the results of our recent study of the
testing effect on learning outcome from the standardised CPR course (Kromann et al.
2009). We expected approximately 50% (90/180) of the entire cohort to accept the invi-
tation to participate in the study and complete the final assessment. With 46 participants in
each group we would be able to detect an effect size of 0.58 with a power of 80% and a
significance level of 0.05 (Carney et al. 2004).
Total score from the assessment of CPR performance was converted to a percentage of the
maximum possible score. This variable was compared between the control and intervention
groups using an independent samples t-test.
Effect size (ES) was calculated using Cohen’s d, with ES 0.2, small; 0.5, medium; and
0.8, large (Hojat and Xu 2004).
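
As an aside, the detectable effect size quoted above can be reproduced with a standard power calculation. The following sketch is our own illustration, not code from the study; it uses the statsmodels library, which is not mentioned in the paper, to solve for the effect size detectable with 46 participants per group at 80% power and a two-sided significance level of 0.05.

```python
# Sketch of the sample-size reasoning described above (our own illustration):
# which effect size is detectable with 46 participants per group,
# 80% power and a two-sided significance level of 0.05?
from statsmodels.stats.power import TTestIndPower

detectable_es = TTestIndPower().solve_power(
    effect_size=None,        # solve for the detectable effect size
    nobs1=46,                # participants in the first group
    ratio=1.0,               # equal group sizes
    alpha=0.05,              # two-sided significance level
    power=0.80,              # desired statistical power
    alternative="two-sided",
)
print(f"Detectable effect size: {detectable_es:.2f}")  # close to the 0.58 quoted above
```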

Ethical approval

The research protocol was submitted to the local research ethics committee, which waived
the need for full ethical approval. All participants gave written informed consent.

Results

In total 131/180 (73%) accepted the invitation to participate in the follow-up assessment
half a year after the course and 112 confirmed their intent to participate by e-mail 1 month
before the assessment. On the day of assessment 89/180 (50%) showed up, consisting of
48/90 (53%) from the intervention group and 41/90 (46%) from the control group. Figure 1
depicts the flow of participants.

Fig. 1 The flow of participants
None of the participants reported having prepared for the assessment. The CPR per-
formance score 6 months after the course was mean 75.9 (SD 11.0) in the intervention
group and 70.3 (SD 17.1) in the control group. The ES of this difference was 0.40. The
difference was not statistically significant, P = 0.06, independent samples t-test.
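
For readers who want to check the arithmetic, the reported effect size and P value can be approximately reconstructed from these summary statistics alone. The sketch below is our own illustration, assuming a pooled-standard-deviation Cohen's d and the independent samples t-test named in the Methods; it is not code from the study.

```python
# Reconstruction of the reported statistics from the published summary data
# (our own illustration, not part of the original study).
from math import sqrt
from scipy import stats

m1, s1, n1 = 75.9, 11.0, 48  # intervention group: mean, SD, n
m2, s2, n2 = 70.3, 17.1, 41  # control group: mean, SD, n

# Cohen's d with a pooled standard deviation
pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
cohens_d = (m1 - m2) / pooled_sd

# Independent samples t-test computed directly from the summary statistics
t_stat, p_value = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=True)

print(f"Cohen's d ≈ {cohens_d:.2f}")                     # about 0.40, as reported
print(f"t = {t_stat:.2f}, two-sided P ≈ {p_value:.3f}")  # close to the reported P = 0.06
```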

Discussion

In this study we found a small to medium effect size (ES = 0.4) in favour of the inter-
vention group, which suggests that testing as a final activity in a CPR skills course might
have an effect on long term learning outcome compared to spending an equal amount of
time practicing the skills. Although the difference was not statistically significant, the
identified effect size of 0.4 is comparable to effect sizes found in studies on long term
learning outcome in knowledge learning (Bangert-Drowns and Kulik 1991; Roediger and
Karpicke 2006).
When resuscitating in the clinical setting, time to first DC-conversion is measured in
seconds (Woollard et al. 2006) and small adjustments like hand placement and frequency
of chest compressions have great importance in resuscitation (Wik 2003). Hence even a
small improvement in competence might change the outcome of cardiac arrest situations.
The hands-on time was longer in the control course than in the intervention course, and
one could argue that this makes the learning effect demonstrated in the intervention group
even more compelling. However, the literature describes a learning effect from observation
of skills (Magill 2006). Thus, assumptions about the potency of the testing effect should be
made with caution, as this study was not designed to distinguish between the learning from
testing and the learning from observing others being tested.
This study has several limitations. We ultimately had a slightly smaller study population
than expected, 48 in the intervention group and 41 in the control group, and hence we
cannot rule out a Type II error as a reason for the results not being statistically significant.
The dropout rate was larger in the control group than in the intervention group.
Unfortunately we do not have any data available to compare the academic performance of
the dropouts with that of the participating students. However, those volunteering for
education trials and testing of competence are often the most competent students (Callahan
et al. 2007). Thus the higher dropout rate in the control group could have resulted in an
artificially high mean learning outcome in that group, obscuring the testing effect by
evening out the difference between the means.
We consider it unlikely that any of the participants had prepared for the follow-up
assessment, as they did not have access to simulation mannequins and the expensive
training facilities. Furthermore the minor financial compensation did not depend on
assessment results. Finally in this experimental study the students were not informed of
their performance at the tests during the course, and the final assessments did not have any
consequence for their academic progress or grades.


This study was performed on a group of medical students, and caution should be exercised
when generalising to junior doctors or mixed groups of other health professionals.
In the light of our findings regarding the testing effect on skills learning, we recommend
testing as an essential and cost-effective activity to increase retention from simulation-based
skills courses.
Previous results indicate that the testing effect demonstrated for knowledge learning
also applies to skills learning. This study suggests that the testing effect might last up to
6 months after the course. Our study was a single-test intervention, and it is possible that
multiple tests over the follow-up period might contribute to maintaining CPR performance
(Roediger and Karpicke 2006). Also, tests in combination with e-learning simulation
programs might prove feasible for prolonging retention after CPR skills courses (Jensen
et al. 2009).
Further studies on this topic should investigate the combination of the intrinsic testing
effect and the extrinsic learning effects of formative testing, where the participant is
informed about performance and guided towards improvement.
One possible theory behind the testing effect is that it is the training of retrieval processes
that accounts for the effect, with more elaborate retrieval resulting in better long-term
learning (Roediger and Karpicke 2006). This explanation applies well to the testing effect
demonstrated on knowledge learning, but it also fits skills learning, where practicing the
skills engages retrieval processes. Thus, further studies on the possible mechanisms behind
the testing effect on skills learning are needed in order to identify theories that can explain
the phenomenon.

Conclusion

This study suggests that testing as a final activity in a CPR skills course might have an
effect on long term learning outcome compared to spending an equal amount of time
practicing the skills. Although this difference was not statistically significant, the identified
effect size of 0.4 can have significant clinical and educational implications.

Acknowledgments This study was financially supported by the Centre for Clinical Education and in part
funded by Trygfonden.

References

Bangert-Drowns, R. L., & Kulik, J. (1991). Effects of frequent classroom testing. The Journal of Educa-
tional Research, 85(2), 89–99.
Berden, H. J., Willems, F. F., Hendrick, J. M., & Pijls, N. H. (1993). How frequently should basic car-
diopulmonary resuscitation training be repeated to maintain adequate skills. British Medical Journal,
306(6892), 1576–1577.
Callahan, C., Hojat, M., & Gonnella, J. (2007). Volunteer bias in medical education research: an empirical
study of over three decades of longitudinal data. Medical Education, 41(8), 746–753.
Carney, P. A., Nierenberg, D. W., Pipas, C. F., Brooks, W. B., Stukel, T. A., & Keller, A. M. (2004).
Educational epidemiology applying population-based design and analytic approaches to study medical
education. Journal of the American Medical Association, 292(9), 1044–1050.
Hojat, M., & Xu, G. (2004). A visitor’s guide to effect sizes—statistical significance versus practical
(clinical) importance of research findings. Advances in Health Sciences Education, 9(3), 241–249.
Jensen, M., Hesselfeldt, R., Rasmussen, M., Mogensen, S., Frost, T., Jensen, M., et al. (2008). Newly
graduated doctors’ competence in managing cardiopulmonary arrests assessed using a standardized
advanced life support (ALS) assessment. Resuscitation, 77(1), 63–68.


Jensen, M. L., Mondrup, F., Lippert, F., & Ringsted, C. (2009). Using e-learning for maintenance of ALS
competence. Resuscitation, 80(8), 903–908.
Kromann, C., Jensen, M. L., & Ringsted, C. (2009). The testing effect in skills learning. Medical Education,
43(1), 21–27.
Magill, R. A. (2006). Demonstration and verbal instruction. Motor learning and control: Concepts and
applications (pp. 306–330). NY: McGraw-Hill.
Ringsted, C., Lippert, F., & Hesselfeldt, R. (2007). Assessment of advanced life support competence when
combining different test methods. Resuscitation, 75(1), 153–160.
Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications
for educational practice. Perspectives on Psychological Science, 1(3), 181–276.
Wik, L. (2003). Rediscovering the importance of chest compressions to improve the outcome from cardiac
arrest. Resuscitation, 58(3), 267–269.
Woollard, M., Whitfield, R., Newcombe, R., Colquhoun, M., Vetter, N., & Chamberlain, D. (2006). Optimal
refresher training intervals for AED and CPR skills: A randomised controlled trial. Resuscitation,
71(2), 237–247.
