Evidence-Based
Public Health
THIRD EDITION
Ross C. Brownson, PhD
Bernard Becker Professor of Public Health and Director, Prevention
Research Center in St. Louis, Brown School and School of Medicine,
Washington University in St. Louis
Kathleen N. Gillespie, PhD
Associate Professor of Health Management and Policy, College for
Public Health and Social Justice, Saint Louis University
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.
This material is not intended to be, and should not be considered, a substitute for medical or
other professional advice. Treatment for the conditions described in this material is highly
dependent on the individual circumstances. And, while this material is designed to offer accurate
information with respect to the subject matter covered and to be current as of the time it was
written, research and knowledge about medical and health issues is constantly evolving and
dose schedules for medications are being revised continually, with new side effects recognized
and accounted for regularly. Readers must therefore always check the product information and
clinical procedures with the most up-to-date published product information and data sheets
provided by the manufacturers and the most recent codes of conduct and safety regulation.
The publisher and the authors make no representations or warranties to readers, express or
implied, as to the accuracy or completeness of this material. Without limiting the foregoing, the
publisher and the authors make no representations or warranties as to the accuracy or efficacy
of the drug dosages mentioned in the material. The authors and the publisher do not accept,
and expressly disclaim, any responsibility for any liability, loss or risk that may be claimed or
incurred as a consequence of the use and/or application of any of the contents of this material.
1 3 5 7 9 8 6 4 2
Printed by WebCom, Inc., Canada
We dedicate this book to our close colleague and friend, Terry
Leet. He was one of the original contributors to our training
program in Evidence-Based Public Health and an author on
previous editions of this book. Terry was an outstanding
scholar and teacher, and we miss him every day.
CONTENTS
Foreword ix
Preface xiii
Acknowledgments xvii
Glossary 319
Index 333
FOREWORD
number of researchers are looking at how to model the effects and relative
cost-effectiveness to a particular population, and how to determine the likely
impacts over time.
This volume should be sweet music to all of these groups. Anyone needing
to be convinced of the benefit of systematic development and synthesis of
evidence for various public health purposes will quickly be won over. A step-
by-step approach to compiling and assessing evidence of what works and what
does not is well explicated. In a logical sequence, the reader is guided in how
to use the results of his or her search for evidence in developing program or
policy options, including the weighing of benefits versus barriers, and then in
developing an action plan. To complete the cycle of science, the book describes
how to evaluate whatever action is taken. Using this volume does not require
extensive formal training in the key disciplines of epidemiology, biostatistics,
or behavioral science, but those with strong disciplinary skills will also find
much to learn from and put to practical use here.
If every public health practitioner absorbed and applied the key lessons in
this volume, public health would enjoy a higher health and financial return on
the taxpayer’s investment. Armed with strong evidence of what works, public
health practitioners could compete more successfully for limited public
dollars, because such evidence is easy to support and difficult to refute. The
same standard of difficult-to-refute evidence is much less common in competing
requests for scarce public resources.
Jonathan E. Fielding, MD, MPH, MBA
Distinguished Professor of Health Policy and Management,
Fielding School of Public Health, and Distinguished Professor of Pediatrics,
Geffen School of Medicine, University of California, Los Angeles
PREFACE
To enhance evidence-based decision making, this book addresses all four pos-
sibilities and attempts to provide practical guidance on how to choose, adapt,
carry out, and evaluate evidence-based programs and policies in public health
settings. It also begins to address a fifth, overarching need for a highly trained
public health workforce.
Progress will require us to answer questions such as the following:
The original need for this book was recognized during the authors’ experiences
in public health and health care organizations, legislatures, and classrooms,
and in discussions with colleagues about the major issues and
challenges in finding and using evidence in public health practice. This
edition retains our “real-world” orientation, in which we recognize that
evidence-based decision making is a complex, iterative, and nuanced process.
It is not simply a matter of using only science-tested, evidence-based
interventions. In
some cases, the intervention evidence base is developing in light of an epi-
demic (e.g., control of Zika virus)—hence the need to base decisions on the
best available evidence, not the best possible evidence. It also requires
practitioners to remember that public health decisions are shaped by a range
of evidence (e.g., experience, political will, resources, values), not by
science alone.
Our book deals not only with finding and using existing scientific evidence
but also with implementation and evaluation of interventions that generate
new evidence on effectiveness. Because all these topics are broad and require
multidisciplinary skills and perspectives, each chapter covers the basic issues
and provides multiple examples to illustrate important concepts. In addition,
each chapter provides linkages to diverse literature and selected websites for
readers wanting more detailed information. Readers should note that web-
sites are volatile, and when a link changes, a search engine may be useful in
locating the new web address.
Much of our book’s material originated from several courses that we have
taught over the past 15 years. One that we offer with the Missouri Department
of Health and Senior Services, “Evidence-Based Decision-Making in Public
Health,” is designed for midlevel managers in state health agencies and lead-
ers of city and county health agencies. We developed a national version of
this course with the National Association of Chronic Disease Directors and
the Centers for Disease Control and Prevention (CDC). The same course has
been adapted for use in many other US states. To conduct international train-
ings, primarily for practitioners in Central and Eastern Europe, we have col-
laborated with the CDC, the World Health Organization/Pan American Health
Organization, and the CINDI (Countrywide Integrated Noncommunicable
Diseases Intervention) Programme. This extensive engagement with prac-
titioners has taught us many fundamental principles, gaps in the evidence-
based decision-making process, reasons for these gaps, and solutions.
The format for this third edition is very similar to the approach taken in the
course and the second edition. Chapter 1 provides the rationale for evidence-
based approaches to decision making in public health. In a new chapter
(chapter 2), we describe approaches for building capacity in evidence-based
decision making. Chapter 3 presents concepts of causality that help in deter-
mining when scientific evidence is sufficient for public health action. Chapter
4 describes economic evaluation and some related analytic tools that help
determine whether an effective intervention is worth doing based on its ben-
efits and costs. The next seven chapters lay out a sequential framework for the
following:
Although an evidence-based process is far from linear, these seven steps are
described in some detail to illustrate their importance in making scientifically
based decisions about public health programs and policies. We conclude with
a chapter on future opportunities for enhancing evidence-based public health.
This book has been written for public health professionals without exten-
sive formal training in the public health sciences (i.e., behavioral science, bio-
statistics, environmental and occupational health, epidemiology, and health
management and policy) and for students in public health and preventive
medicine. It can be used in graduate training or for the many emerging under-
graduate public health programs. We hope the book will be useful for state
and local health agencies, nonprofit organizations, academic institutions,
health care organizations, and national public health agencies. Although the
book is intended primarily for a North American audience, this third edition
draws more heavily on examples from many parts of the world, and we believe
that although contextual conditions will vary, the key principles and skills
outlined are applicable in both developed and developing countries. Earlier
CHAPTER 1
The Need for Evidence-Based
Public Health
Public health workers … deserve to get somewhere by design, not just by perseverance.
McKinlay and Marceau
Public health research and practice are credited with many notable achieve-
ments, including much of the 30-year gain in life expectancy in the United
States over the twentieth century.1 A large part of this increase can be attrib-
uted to provision of safe water and food, sewage treatment and disposal,
tobacco use prevention and cessation, injury prevention, control of infectious
diseases through immunization and other means, and other population-based
interventions.
Despite these successes, many additional challenges and opportunities to
improve the public’s health remain. To achieve state and national objectives
for improved public health, more widespread adoption of evidence-based
strategies has been recommended.2–6 Increased focus on evidence-based pub-
lic health (EBPH) has numerous direct and indirect benefits, including access
to more and higher quality information on what works, a higher likelihood of
successful programs and policies being implemented, greater workforce pro-
ductivity, and more efficient use of public and private resources.4, 7
Ideally, public health practitioners should always incorporate scientific
evidence in selecting and implementing programs, developing policies, and
evaluating progress. Society pays a high opportunity cost when interventions
that yield the highest health return on an investment are not implemented
(i.e., in light of limited resources, the benefit given up by implementing less
effective interventions).8 In practice, decisions are often based on perceived
short-term opportunities, lacking systematic planning and review of the best
BACKGROUND
Formal discourse on the nature and scope of EBPH originated about two
decades ago. Several authors have attempted to define EBPH. In 1997, Jenicek
defined EBPH as the “… conscientious, explicit, and judicious use of cur-
rent best evidence in making decisions about the care of communities and
populations in the domain of health protection, disease prevention, health
maintenance and improvement (health promotion).”32 In 1999, scholars
and practitioners in Australia5 and the United States33 elaborated further on
the concept of EBPH. Glasziou and colleagues posed a series of questions to
enhance uptake of EBPH (e.g., “Does this intervention help alleviate this prob-
lem?”) and identified 14 sources of high-quality evidence.5 Brownson and col-
leagues described a multistage process by which practitioners are able to take a
more evidence-based approach to decision making.4,33 Kohatsu and colleagues
broadened earlier definitions of EBPH to include the perspectives of commu-
nity members, fostering a more population-centered approach.28 Rychetnik
and colleagues summarized many key concepts in a glossary for EBPH.34 There
appears to be a consensus that scientific evidence, together with values,
resources, and context, should enter into decision making (Figure
1.1).2,4,34,35 A concise definition emerged from Kohatsu: “Evidence-based public
[Figure 1.1. Domains influencing evidence-based decision making: the best available evidence; population characteristics, needs, values, and preferences; and resources, including practitioner expertise.]
Defining Evidence
At the most basic level, evidence involves “the available body of facts or
information indicating whether a belief or proposition is true or valid.”39 The
idea of evidence often derives from legal settings in Western societies. In
law, evidence comes in the form of stories, witness accounts, police testi-
mony, expert opinions, and forensic science.40 Our notions of evidence are
defined in large part by our professional training and experience. For a public
health professional, evidence is some form of data—including epidemiologic
(quantitative) data, results of program or policy evaluations, and qualitative
data—that is used in making judgments or decisions (Figure 1.2).41 Public
Box 1.1
DEVELOPING A PRACTICAL UNDERSTANDING OF AN
EVIDENCE TYPOLOGY IN AUSTRALIA
[Figure: A typology of evidence plotting quality of evidence against impact, with interventions classified as emerging, promising, leading, or best.]
only whether an intervention works but also how interventions work in
real-world settings.69
Category and examples:
Individual: education level; basic human needs(a); personal health history
Interpersonal: family health history; support from peers; social capital
Organizational: staff composition; staff expertise; physical infrastructure; organizational culture
Sociocultural: social norms; values; cultural traditions; health equity; history
Political and economic: political will; political ideology; lobbying and special interests; costs and benefits
(a) Basic human needs include food, shelter, warmth, safety.63
Triangulating Evidence
under domain 10 involves using EBPH from such sources as the Guide to
Community Preventive Services, having access to research expertise, and
communicating the facts and implications of research to appropriate audi-
ences. Third, the prerequisites for accreditation—a community health
assessment, a community health improvement plan, and an agency stra-
tegic plan—are key elements of EBPH, as will be described later in this
chapter.
A critical aspect of the early implementation of PHAB is the development
of an evaluation and research agenda, based on a logic model for accredi-
tation, which can serve as a guide for strengthening the evidence base for
accreditation. In many ways the accreditation process is parallel to the devel-
opment of EBPH: the actual use of standards and measures presents oppor-
tunities to strengthen the evidence base for accreditation, and, as EBPH
evolves, new findings will help inform the refinement of standards and mea-
sures over time.
There are four overlapping user groups for EBPH as defined by Fielding.86
The first includes public health practitioners with executive and managerial
responsibilities who want to know the scope and quality of evidence for
alternative strategies (e.g., programs, policies). In practice, however, pub-
lic health practitioners frequently have a relatively narrow set of options.
Funds from federal, state, or local sources are most often earmarked for a
specific purpose (e.g., surveillance and treatment of sexually transmitted
diseases, inspection of retail food establishments). Still, the public health
practitioner has the opportunity, even the obligation, to carefully review
the evidence for alternative ways to achieve the desired health goals. The
next user group is policy makers at local, regional, state, national, and
international levels. They are faced with macro-level decisions on how to
allocate the public resources for which they are stewards. This group has
the additional responsibility of making policies on controversial public
issues. The third group is composed of stakeholders who will be affected
by any intervention. This includes the public, especially those who vote, as
well as interest groups formed to support or oppose specific policies, such
as the legality of abortion, whether the community water supply should
be fluoridated, or whether adults must be issued handgun licenses if they
pass background checks. The final user group is composed of researchers
on population health issues, such as those who evaluate the impact of a
specific policy or programs. They both develop and use evidence to answer
research questions.
As one evaluates evidence, it is useful to understand where to turn for the best
available scientific evidence. A starting point is the scientific literature and
guidelines developed by expert panels. In addition, preliminary findings from
researchers and practitioners are often presented at regional, national, and
international professional meetings.
A tried and true public health adage is, “what gets measured, gets done.”97
This has typically been applied to long-term endpoints (e.g., mortality
rates), yet data for many public health endpoints and populations are not
readily available. Data are increasingly being developed for local-level
issues (e.g., the Selected Metropolitan/Micropolitan Area Risk Trends of the
Behavioral Risk Factor Surveillance System [SMART BRFSS]), and a few early
efforts are underway to develop public health policy surveillance systems.
Community Engagement Occurs
Too often in public health, programs and policies are implemented without
much attention to systematic evaluation. In addition, even when programs
are ineffective, they are sometimes continued because of historical or political
Public health surveillance is a critical tool for those using EBPH (as will be
described in much more detail in chapter 7). It involves the ongoing systematic
Economic Evaluation
Participatory Approaches
SUMMARY
KEY CHAPTER POINTS
Selected Websites
American Public Health Association (APHA) <http://www.apha.org>. The APHA is the
oldest and most diverse organization of public health professionals in the world,
representing more than 50,000 members. The Association and its members have
been influencing policies and setting priorities in public health since 1872. The
APHA site provides links to many other useful websites.
Canadian Task Force on Preventive Health Care <http://canadiantaskforce.ca/>.
This website is designed to serve as a practical guide to health care providers,
planners, and consumers for determining the inclusion or exclusion, content,
and frequency of a wide variety of preventive health interventions, using the
evidence-based recommendations of the Canadian Task Force on Preventive
Health Care.
Cancer Control P.L.A.N.E.T. <https://ccplanet.cancer.gov/>. Cancer Control
P.L.A.N.E.T. acts as a portal to provide access to data and resources for designing,
implementing, and evaluating evidence-based cancer control programs. The site
provides five steps (with links) for developing a comprehensive cancer control
plan or program.
Center for Prevention—Altarum Institute (CFP)
<http://altarum.org/research-centers/center-for-prevention>. Working to
emphasize disease prevention and health promotion in national policy and prac-
tice, the CFP is one of the research centers of the Altarum Institute. The site
includes action guides that translate several of the Community Guide recommen-
dations into easy-to-follow implementation guidelines on priority health topics
such as sexual health, tobacco control, aspirin, and chlamydia.
Centers for Disease Control and Prevention (CDC) Community Health Resources
<http://www.cdc.gov/nccdphp/dch/online-resource/index.htm>. This searchable
site provides access to the CDC’s best resources for planning, implementing, and
evaluating community health interventions and programs to address chronic
disease and health disparities issues. The site links to hundreds of useful plan-
ning guides, evaluation frameworks, communication materials, behavioral and
risk factor data, fact sheets, scientific articles, key reports, and state and local
program contacts.
Guide to Community Preventive Services (the Community Guide) <http://www.thecommunityguide.org/index.html>. The Guide provides guidance in choosing
evidence-based programs and policies to improve health and prevent disease
at the community level. The Task Force on Community Preventive Services, an
independent, nonfederal, volunteer body of public health and prevention experts
appointed by the director of the Centers for Disease Control and Prevention,
has systematically reviewed more than 200 interventions to produce the recom-
mendations and findings available at this site. The topics covered in the Guide
currently include adolescent health, alcohol (excessive consumption), asthma,
birth defects, cancer, cardiovascular disease, diabetes, emergency preparedness,
health communication, health equity, HIV/AIDS, sexually transmitted infec-
tions and pregnancy, mental health, motor vehicle injury, nutrition, obesity, oral
organizations, and gray literature to find evidence. It is useful for finding inter-
vention evidence for topic areas that have not undergone extensive systematic
review. For each included topic area, there are implementation examples and
resources that communities can use to move forward with their chosen strategies.
World Health Organization (WHO) Health Impact Assessments <http://www.who.int/
hia/en/>. The WHO provides health impact assessment (HIA) guides and exam-
ples from several countries. Many links are provided to assist in understanding
and conducting HIAs.
REFERENCES
1. National Center for Health Statistics. Health, United States, 2000 With Adolescent
Health Chartbook. Hyattsville, MD: Centers for Disease Control and Prevention,
National Center for Health Statistics; 2000.
2. Muir Gray JA. Evidence-Based Healthcare: How to Make Decisions About Health
Services and Public Health. 3rd ed. New York and Edinburgh: Churchill Livingstone
Elsevier; 2009.
3. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fun-
damental concept for public health practice. Annu Rev Public Health. Apr 21
2009;30:175–201.
4. Brownson RC, Baker EA, Leet TL, Gillespie KN, True WR. Evidence-Based Public
Health. 2nd ed. New York, NY: Oxford University Press; 2011.
5. Glasziou P, Longbottom H. Evidence-based public health practice. Aust N Z J
Public Health. 1999;23(4):436–440.
6. McMichael C, Waters E, Volmink J. Evidence-based public health: what does it
offer developing countries? J Public Health (Oxf). Jun 2005;27(2):215–221.
7. Kohatsu ND, Melton RJ. A health department perspective on the Guide to
Community Preventive Services. Am J Prev Med. Jan 2000;18(1 Suppl):3–4.
8. Fielding JE. Where is the evidence? Annu Rev Public Health. 2001;22:v–vi.
9. Committee on Public Health Strategies to Improve Health. For the Public’s
Health: Investing in a Healthier Future. Washington, DC: Institute of Medicine of
The National Academies; 2012.
10. Institute of Medicine. Committee for the Study of the Future of Public Health. The
Future of Public Health. Washington, DC: National Academy Press; 1988.
11. Dobbins M, Cockerill R, Barnsley J, Ciliska D. Factors of the innovation, orga-
nization, environment, and individual that predict the influence five system-
atic reviews had on public health decisions. Int J Technol Assess Health Care. Fall
2001;17(4):467–478.
12. Dodson EA, Baker EA, Brownson RC. Use of evidence-based interventions in state
health departments: a qualitative assessment of barriers and solutions. J Public
Health Manag Pract. Nov-Dec 2010;16(6):E9–E15.
13. Jacob RR, Baker EA, Allen P, et al. Training needs and supports for evidence-based
decision making among the public health workforce in the United States. BMC
Health Serv Res. Nov 14 2014;14(1):564.
14. Frieden TR. Six components necessary for effective public health program imple-
mentation. Am J Public Health. Jan 2013;104(1):17–22.
15. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of bar-
riers to and facilitators of the use of evidence by policymakers. BMC Health Serv
Res. 2014;14:2.
35. Satterfield JM, Spring B, Brownson RC, et al. Toward a transdisciplinary model of
evidence-based practice. Milbank Q. Jun 2009;87(2):368–390.
36. Armstrong R, Pettman TL, Waters E. Shifting sands—from descriptions to solu-
tions. Public Health. Jun 2014;128(6):525–532.
37. Yost J, Dobbins M, Traynor R, DeCorby K, Workentine S, Greco L. Tools to sup-
port evidence-informed public health decision making. BMC Public Health. Jul 18
2014;14:728.
38. Viehbeck SM, Petticrew M, Cummins S. Old myths, new myths: challenging
myths in public health. Am J Public Health. Apr 2015;105(4):665–669.
39. Stevenson A, Lindberg C, eds. The New Oxford American Dictionary. 3rd ed.
New York, NY: Oxford University Press; 2010.
40. McQueen DV. Strengthening the evidence base for health promotion. Health
Promot Int. Sep 2001;16(3):261–268.
41. Chambers D, Kerner J. Closing the gap between discovery and delivery.
Dissemination and Implementation Research Workshop: Harnessing Science to
Maximize Health. Rockville, MD; 2007.
42. Rimer BK, Glanz DK, Rasband G. Searching for evidence about health education
and health behavior interventions. Health Educ Behav. 2001;28(2):231–248.
43. Kerner JF. Integrating research, practice, and policy: what we see depends on
where we stand. J Public Health Manag Pract. Mar-Apr 2008;14(2):193–198.
44. Mulrow CD, Lohr KN. Proof and policy from medical research evidence. J Health
Polit Policy Law. Apr 2001;26(2):249–266.
45. Sturm R. Evidence-based health policy versus evidence-based medicine. Psychiatr
Serv. Dec 2002;53(12):1499.
46. Brownson RC, Royer C, Ewing R, McBride TD. Researchers and policymakers: trav-
elers in parallel universes. Am J Prev Med. Feb 2006;30(2):164–172.
47. Li V, Carter SM, Rychetnik L. Evidence valued and used by health promotion prac-
titioners. Health Educ Res. Apr 2015;30(2):193–205.
48. Milat AJ, Bauman AE, Redman S, Curac N. Public health research outputs
from efficacy to dissemination: a bibliometric analysis. BMC Public Health.
2011;11:934.
49. Melbourne School of Population and Global Health. Public Health Insight. http://
mspgh.unimelb.edu.au/centres-institutes/centre-for-health-equity/research-
group/public-health-insight. Accessed November 24, 2016.
50. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicabil-
ity of research: issues in external validation and translation methodology. Eval
Health Prof. Mar 2006;29(1):126–153.
51. Green LW, Ottoson JM, Garcia C, Hiatt RA. Diffusion theory, and knowledge dis-
semination, utilization, and integration in public health. Annu Rev Public Health.
Jan 15 2009;30:151–174.
52. Spencer LM, Schooley MW, Anderson LA, et al. Seeking best practices: a concep-
tual framework for planning and improving evidence-based practices. Prev Chronic
Dis. 2013;10:E207.
53. University of Wisconsin Population Health Institute. Using What Works for
Health. http://www.countyhealthrankings.org/roadmaps/what-works-for-
health/using-what-works-health. Accessed July 28, 2016.
54. Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into
practice: dramatic change is needed. Am J Prev Med. Jun 2011;40(6):637–644.
55. Nutbeam D. How does evidence influence public health policy? Tackling health
inequalities in England. Health Promot J Aust. 2003;14:154–158.
96. National Board of Public Health Examiners. Certified in Public Health. http://
www.nbphe.org/examinfo.cfm. Accessed November 24, 2014.
97. Thacker SB. Public health surveillance and the prevention of injuries in
sports: what gets measured gets done. J Athl Train. Apr-Jun 2007;42(2):171–172.
98. Fielding J, Teutsch S, Breslow L. A framework for public health in the United
States. Public Health Reviews. 2010;32:174–189.
99. Institute of Medicine. Who Will Keep the Public Healthy? Educating Public
Health Professionals for the 21st Century. Washington, DC: National Academies
Press; 2003.
100. Homer JB, Hirsch GB. System dynamics modeling for public health: background
and opportunities. Am J Public Health. Mar 2006;96(3):452–458.
101. Stokols D. Translating social ecological theory into guidelines for community
health promotion. Am J Health Promot. 1996;10(4):282–298.
102. Glanz K, Bishop DB. The role of behavioral science theory in development and
implementation of public health interventions. Annu Rev Public Health. Apr 21
2010;31:399–418.
103. Cargo M, Mercer SL. The value and challenges of participatory research:
Strengthening its practice. Annu Rev Public Health. Apr 21 2008;29:325–350.
104. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based
research: assessing partnership approaches to improve public health. Annu Rev
Public Health. 1998;19:173–202.
105. Leung MW, Yen IH, Minkler M. Community based participatory research: a
promising approach for increasing epidemiology’s relevance in the 21st century.
Int J Epidemiol. Jun 2004;33(3):499–506.
106. Centers for Disease Control and Prevention. A Practitioner’s Guide for Advancing
Health Equity: Community Strategies for Preventing Chronic Disease. Atlanta,
GA: CDC; 2013.
107. Slater MD, Kelly KJ, Thackeray R. Segmentation on a shoestring: health audience
segmentation in limited-budget and local social marketing interventions. Health
Promot Pract. Apr 2006;7(2):170–173.
108. Brownson R. Research Translation and Public Health Services & Systems
Research. Keeneland Conference: Public Health Services & Systems Research.
Lexington, KY; 2013.
109. Thacker SB, Berkelman RL. Public health surveillance in the United States.
Epidemiol Rev. 1988;10:164–190.
110. Thacker SB, Stroup DF. Public health surveillance. In: Brownson RC, Petitti
DB, eds. Applied Epidemiology: Theory to Practice. 2nd ed. New York, NY: Oxford
University Press; 2006:30–67.
111. Oxman AD, Guyatt GH. The science of reviewing research. Ann N Y Acad Sci. Dec
31 1993;703:125–133; discussion 133–124.
112. Waters E, Doyle J. Evidence-based public health practice: improving the quality
and quantity of the evidence. J Public Health Med. Sep 2002;24(3):227–229.
113. Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-Effectiveness in Health and
Medicine. New York, NY: Oxford University Press; 1996.
114. Carande-Kulis VG, Maciosek MV, Briss PA, et al. Methods for systematic reviews
of economic evaluations for the Guide to Community Preventive Services.
Task Force on Community Preventive Services. Am J Prev Med. Jan 2000;18(1
Suppl):75–91.
115. Harris P, Harris-Roxas B, Harris E, Kemp L. Health Impact Assessment: A Practical
Guide. Sydney: Australia: Centre for Health Equity Training, Research and
Evaluation (CHETRE). Part of the UNSW Research Centre for Primary Health
Care and Equity, UNSW; August 2007.
116. Cole BL, Wilhelm M, Long PV, Fielding JE, Kominski G, Morgenstern H.
Prospects for health impact assessment in the United States: new and improved
environmental impact assessment or something different? J Health Polit Policy
Law. Dec 2004;29(6):1153–1186.
117. Kemm J. Health impact assessment: a tool for healthy public policy. Health
Promot Int. Mar 2001;16(1):79–85.
118. Mindell J, Sheridan L, Joffe M, Samson-Barry H, Atkinson S. Health impact
assessment as an agent of policy change: improving the health impacts of the
mayor of London’s draft transport strategy. J Epidemiol Community Health. Mar
2004;58(3):169–174.
119. De Leeuw E, Peters D. Nine questions to guide development and implementation
of Health in All Policies. Health Promot Int. Dec 2014;30(4):987–997.
120. Green LW, George MA, Daniel M, et al. Review and Recommendations for the
Development of Participatory Research in Health Promotion in Canada. Vancouver,
British Columbia: The Royal Society of Canada; 1995.
121. Green LW, Mercer SL. Can public health researchers and agencies reconcile the
push from funding bodies and the pull from communities? Am J Public Health.
Dec 2001;91(12):1926–1929.
122. Hallfors D, Cho H, Livert D, Kadushin C. Fighting back against sub-
stance abuse: are community coalitions winning? Am J Prev Med. Nov
2002;23(4):237–245.
CHAPTER 2
Building Capacity for Evidence-Based Public Health
Evidence without capacity is an empty shell.
Mohan Singh
Capacity building for EBPH is essential at all levels of public health, from
national standards to agency-level practices.
Although the formal concepts of EBPH are relatively new, the underlying
skills are not.9 For example, reviewing the scientific literature for evidence
and evaluating a program intervention are skills often taught in graduate pro-
grams in public health or other academic disciplines and are building blocks
of public health practice. To support building many of these skills, compe-
tencies for more effective public health practice are becoming clearer.36-38 For
example, to carry out the EBPH process, the skills needed to make evidence-
based decisions related to programs and policies require a specific set of com-
petencies (Table 2.2).9,39,40 Many of the competencies on this list illustrate the
value of developing partnerships and engaging diverse disciplines in the EBPH
process.41
To address these and similar competencies, programs have been developed
to train university students (at both undergraduate and graduate levels),42-45
public health professionals,46-49 and staff of community-based organizations.50,51
Training programs have been developed across multiple continents
and countries.45,52-55 Some programs show evidence of effectiveness.47,51 The
most common format uses didactic sessions, computer labs, and scenario-
based exercises, taught by a faculty team with expertise in EBPH. The reach
of these training programs can be increased by employing a train-the-trainer
approach.49 Other formats have been used, including Internet-based self-
study,50,56 CD-ROMs,57 distance and distributed learning networks, and tar-
geted technical assistance. Training programs may have greater impact when
delivered by “change agents” who are perceived as experts, yet share com-
mon characteristics and goals with trainees.58 For example, in data from four
states, a train-the-trainer approach was effective in building skills and has
some advantages (e.g., contouring to local needs, local ownership).49 A com-
mitment from leadership and staff to lifelong learning is also an essential
ingredient for success in training59 and is itself an example of evidence-based
decision making.4
Implementation of training to address EBPH competencies should employ
principles of adult learning. These issues were articulated by Bryan and col-
leagues,60 who highlighted the need to (1) know why the audience is learning;
(2) tap into an underlying motivation to learn by the need to solve problems;
(3) respect and build on previous experience; (4) design learning approaches
that match the background and diversity of recipients; and (5) actively involve
the audience in the learning process.
In this section, a multistage, sequential framework to promote greater
use of evidence in day-to-day decision making is briefly described (Figure
2.2).9,52,61 Each part of the framework is described in detail in later chapters.
It is important to note that this process is seldom a strictly prescriptive or
linear one, but instead includes numerous feedback loops and processes that
are common in many program planning models. This multistage framework
Table 2.2. COMPETENCIES IN EVIDENCE-BASED PUBLIC HEALTH

12. Evaluation in "plain English" (EV, I): Recognize the importance of translating the impacts of programs or policies in language that can be understood by communities, practice sectors, and policy makers.
13. Leadership and change (L, I): Recognize the importance of effective leadership from public health professionals when making decisions in the midst of ever-changing environments.
14. Translating evidence-based interventions (EBP, I): Recognize the importance of translating evidence-based interventions to unique "real world" settings.
15. Quantifying the issue (T/T, I): Understand the importance of descriptive epidemiology (concepts of person, place, time) in quantifying the public health issue.
16. Developing an action plan for program or policy (EBP, I): Understand the importance of developing a plan of action that describes how the goals and objectives will be achieved, what resources are required, and how responsibility for achieving objectives will be assigned.
17. Prioritizing health issues (EBP, I): Understand how to choose and implement appropriate criteria and processes for prioritizing program and policy options.
18. Qualitative evaluation (EV, I): Recognize the value of qualitative evaluation approaches, including the steps involved in conducting qualitative evaluations.
19. Collaborative partnerships (P/C, I): Understand the importance of collaborative partnerships between researchers and practitioners when designing, implementing, and evaluating evidence-based programs and policies.
20. Nontraditional partnerships (P/C, I): Understand the importance of traditional partnerships as well as those that have been considered nontraditional, such as those with planners, departments of transportation, and others.
Figure 2.2. The evidence-based public health framework. The cycle's steps include: assess the community; quantify the issue; develop a concise issue statement; apply economic evaluation concepts; conduct action planning; implement the program or policy; and evaluate, leading to dissemination, retooling of the program or policy, or discontinuation.
Box 2.2
APPLYING AN EVIDENCE-BASED PLANNING FRAMEWORK IN COLORADO
Develop a Concise Issue Statement
The practitioner should next develop a concise statement of the issue or prob-
lem being considered, answering the question, “How important is the issue?”
To build support for any issue (with an organization, policy makers, or a
funding agency), the issue must be clearly articulated. This problem defini-
tion stage has some similarities to the beginning steps in a strategic planning
process, which often involve describing the mission, internal strengths and
weaknesses, external opportunities and threats, and the vision for the future.
It is often helpful to describe gaps between the current status of a program or
organization and the desired goals. The key components of an issue statement
include the health condition or risk factor being considered, the populations
affected, the size and scope of the problem, prevention opportunities, and
potential stakeholders.
After the issue to be considered has been clearly defined, the practitioner needs
to become knowledgeable about previous or ongoing efforts to address the
issue in order to determine the cause of the problem and what should be done
about it. This step includes a systematic approach to identify, retrieve, and eval-
uate relevant reports on scientific studies, panels, and conferences related to
the topic of interest. The most common method for initiating this investigation
is a formal literature review. There are many databases available to facilitate
such a review. Common among them for public health purposes are PubMed,
PsycINFO, and Google Scholar (see chapter 8). Some databases can be accessed
by the public through institutions (such as the National Library of Medicine
[http://www.nlm.nih.gov], universities, and public libraries); others can be
subscribed to by an organization or selectively found on the Internet. There
also are many organizations that maintain Internet sites that can be useful for
identifying relevant information, including many state health departments,
the Centers for Disease Control and Prevention (e.g., https://chronicdata.cdc.
gov/), and the National Institutes of Health. It is important to remember that
not all intervention (Type 2) studies will be found in the published literature.
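A literature search of the kind described above can also be scripted against the National Library of Medicine's E-utilities interface to PubMed. A minimal sketch follows; the query string is a hypothetical example, and actually fetching the URL (with any HTTP client) is left out:

```python
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_search_url(term: str, retmax: int = 20) -> str:
    """Construct an E-utilities esearch URL for a PubMed query.

    Fetching the returned URL yields a JSON response whose body lists
    the PubMed IDs (PMIDs) of matching citations.
    """
    params = {
        "db": "pubmed",      # search the PubMed database
        "term": term,        # the query, e.g. keywords and MeSH terms
        "retmax": retmax,    # maximum number of PMIDs to return
        "retmode": "json",   # request a JSON response
    }
    return f"{EUTILS_BASE}?{urlencode(params)}"

# Hypothetical query for intervention studies on a topic of interest.
url = build_pubmed_search_url('"evidence-based practice" AND "public health"')
```

A search strategy developed this way can be rerun reproducibly as new studies accumulate, which suits the incremental nature of the evidence base.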
Based largely on the first three steps, a variety of health program or policy
options are examined, answering the question, “What are we going to do about
the issue?” The list of options can be developed from a variety of sources. The
initial review of the scientific literature may highlight various intervention
options. More often, expert panels provide program or policy recommenda-
tions on a variety of issues. Summaries of available evidence-based program
and policy options are often available in systematic reviews and practice
guidelines. There are several assumptions or contexts underlying any devel-
opment of options. These fall into five main categories: political/regulatory,
economic, social values, demographic, and technological.65
In particular, it is important to assess and monitor the political process
when developing health policy options. To do so, stakeholder input may
be useful. The stakeholder for a policy might be the health policy maker.
Supportive policy makers can frequently provide advice regarding timing of
policy initiatives, methods for framing the issues, strategies for identifying
sponsors, and ways to develop support among the general public. In contrast,
the stakeholder for a coalition-based community intervention might be a
community member. In this case, additional planning data may be garnered
from community members through key informant interviews, focus groups,
or coalition member surveys.66
This aspect of the process again deals largely with strategic planning issues
that answer the question, “How are we going to address the issue, when,
and what are our expected results?” After an option has been selected, a set
of goals and objectives is developed. A goal is a long-term desired change
in the status of a priority health need, and an objective is a short-term,
measurable, specific activity that leads toward achievement of a goal. The
plan of action describes how the goals and objectives will be achieved, what
resources are required, and how responsibility for achieving objectives will
be assigned.
SUMMARY

KEY CHAPTER POINTS

Suggested Readings
Leeman J, Calancie L, Hartman MA, et al. What strategies are used to build practitio-
ners’ capacity to implement community-based interventions and are they effec-
tive? A systematic review. Implement Sci. May 29 2015;10:80.
Meyer AM, Davis M, Mays GP. Defining organizational capacity for public health ser-
vices and systems research. J Public Health Manag Pract. Nov 2012;18(6):535–544.
Pettman TL, Armstrong R, Jones K, Waters E, Doyle J. Cochrane update: building
capacity in evidence-informed decision-making to improve public health. J Public
Health (Oxf). Dec 2013;35(4):624–627.
Yarber L, Brownson CA, Jacob RR, Baker EA, Jones E, Baumann C, Deshpande AD,
Gillespie KN, Scharff DP, Brownson RC. Evaluating a train-the-trainer approach
for improving capacity for evidence-based decision making in public health. BMC
Health Serv Res. 2015;15:547.
Yost J, Ciliska D, Dobbins M. Evaluating the impact of an intensive education work-
shop on evidence-informed decision making knowledge, skills, and behaviours: a
mixed methods study. BMC Med Educ. 2014;14:13.
Selected Websites
Centers for Disease Control and Prevention (CDC) Community Health Resources
<http://www.cdc.gov/nccdphp/dch/online-resource/index.htm>. This search-
able site provides access to CDC’s best resources for planning, implementing, and
evaluating community health interventions and programs to address chronic
disease and health disparities issues. The site links to hundreds of useful plan-
ning guides, evaluation frameworks, communication materials, behavioral and
risk factor data, fact sheets, scientific articles, key reports, and state and local
program contacts.
Evidence-Based Behavioral Practice (EBBP) <http://www.ebbp.org/>. The EBBP.org
project creates training resources to bridge the gap between behavioral health
research and practice. An interactive website offers modules covering topics such
as the EBBP process, systematic reviews, searching for evidence, critical appraisal,
and randomized controlled trials. This site is ideal for practitioners, researchers,
and educators.
Evidence-Based Public Health (Association of State and Territorial Health Officials
[ASTHO]) <http://www.astho.org/programs/evidence-based-public-health/>. For
more than 10 years, ASTHO has collaborated with the Centers for Disease
Control and Prevention to promote evidence-based public health, particularly
through adoption of the Task Force recommendations in the Guide to Community
Preventive Services. The site includes state success stories and a set of tools for
using the Community Guide and other related resources.
Evidence-Based Public Health (Washington University in St. Louis)
<http://www.evidencebasedpublichealth.org/>. The purpose of this site is to provide public health professionals and decision makers with resources and tools to
make evidence-based public health practice accessible and realistic. The portal
includes a primer on EBPH, resources, and a training program for practitioners.
PH Partners: From Evidence-Based Medicine to Evidence-Based Public Health <https://
phpartners.org/tutorial/04-ebph/2-keyConcepts/4.2.1.html>. The PH Partners
portal is designed to support the public health workforce on issues related to
information access and management. The site seeks to allow users to find reli-
able and authoritative consumer-oriented materials to support health education;
retrieve statistical information and access data sets relevant to public health; and
retrieve and evaluate information in support of evidence-based practice.
Public Health Services and Systems Research and the Public Health Practice-Based
Research Networks: Administrative Evidence-Based Practices Assessment Tool.
http://tools.publichealthsystems.org/tools/tool?view=about&id=134. This tool
helps managers and practitioners at local and state public health departments
assess the extent to which their departments utilize administrative evidence-based practices (A-EBPs), leading to improved efficiency and public health
outcomes and building competency for accreditation. This tool provides an
assessment of the extent to which a health department currently supports the
adoption of A-EBPs across five key domains: workforce development, leadership,
organizational climate and culture, relationships and partnerships, and financial
processes. The tool also allows comparison to a national, stratified sample of local
health departments.
REFERENCES
13. Van Lerberghe W, Conceicao C, Van Damme W, Ferrinho P. When staff is under-
paid: dealing with the individual coping strategies of health personnel. Bull World
Health Organ. 2002;80(7):581–584.
14. Baker EA, Brownson RC, Dreisinger M, McIntosh LD, Karamehic-Muratovic A.
Examining the role of training in evidence-based public health: a qualitative
study. Health Promot Pract. Jul 2009;10(3):342–348.
15. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of bar-
riers to and facilitators of the use of evidence by policymakers. BMC Health Serv
Res. 2014;14:2.
16. McMichael C, Waters E, Volmink J. Evidence-based public health: what does it
offer developing countries? J Public Health (Oxf). Jun 2005;27(2):215–221.
17. Puchalski Ritchie LM, Khan S, Moore JE, et al. Low- and middle-income countries
face many common barriers to implementation of maternal health evidence prod-
ucts. J Clin Epidemiol. Aug 2016;76:229–237.
18. Singh KK. Evidence-based public health: barriers and facilitators to the transfer of
knowledge into practice. Indian J Public Health. Apr-Jun 2015;59(2):131–135.
19. Beaglehole R, Dal Poz MR. Public health workforce: challenges and policy issues.
Hum Resour Health. Jul 17 2003;1(1):4.
20. Simoes EJ, Land G, Metzger R, Mokdad A. Prioritization MICA: a Web-based
application to prioritize public health resources. J Public Health Manag Pract. Mar-
Apr 2006;12(2):161–169.
21. Simoes EJ, Mariotti S, Rossi A, et al. The Italian health surveillance (SiVeAS) pri-
oritization approach to reduce chronic disease risk factors. Int J Public Health. Aug
2012;57(4):719–733.
22. Wright K, Rowitz L, Merkle A, et al. Competency development in public health
leadership. Am J Public Health. Aug 2000;90(8):1202–1207.
23. Task Force on Community Preventive Services. Guide to Community Preventive
Services. www.thecommunityguide.org. Accessed June 5, 2016.
24. Institute of Medicine. Committee on Public Health. Healthy Communities: New
Partnerships for the Future of Public Health. Washington, DC: National Academy
Press; 1996.
25. Public Health Accreditation Board. Public Health Accreditation Board Standards
and Measures, version 1.5. 2013. http://www.phaboard.org/wp-content/
uploads/SM-Version-1.5-Board-adopted-FINAL-01-24-2014.docx.pdf. Accessed
November 20, 2016.
26. The Cochrane Collaboration. The Cochrane Public Health Group. http://
ph.cochrane.org/. Accessed July 28, 2016.
27. Mays GP, Scutchfield FD. Advancing the science of delivery: public health services
and systems research. J Public Health Manag Pract. Nov 2012;18(6):481–484.
28. Scutchfield FD, Patrick K. Public health systems research: the new kid on the
block. Am J Prev Med. Feb 2007;32(2):173–174.
29. Beitsch LM, Leep C, Shah G, Brooks RG, Pestronk RM. Quality improvement in
local health departments: results of the NACCHO 2008 survey. J Public Health
Manag Pract. Jan-Feb 2010;16(1):49–54.
30. Drabczyk A, Epstein P, Marshall M. A quality improvement initiative to enhance
public health workforce capabilities. J Public Health Manag Pract. Jan-Feb
2012;18(1):95–99.
31. Erwin PC. The performance of local health departments: a review of the literature.
J Public Health Manag Pract. Mar-Apr 2008;14(2):E9–E18.
32. Brownson RC, Reis RS, Allen P, et al. Understanding administrative evidence-
based practices: findings from a survey of local health department leaders. Am J
Prev Med. Jan 2013;46(1):49–57.
33. Erwin PC, Harris JK, Smith C, Leep CJ, Duggan K, Brownson RC. Evidence-based
public health practice among program managers in local public health depart-
ments. J Public Health Manag Pract. Sep-Oct 2014;20(5):472–480.
34. Public Health Services and Systems Research and the Public Health Practice-Based
Research Networks. Administrative Evidence-Based Practices Assessment Tool.
http://tools.publichealthsystems.org/tools/tool?view=about&id=134. Accessed
November 20, 2016.
35. Jacob R, Allen P, Ahrendt L, Brownson R. Learning about and using
research evidence among public health practitioners. Am J Prev Med.
2017;52(3S3):S304–S308.
36. Birkhead GS, Davies J, Miner K, Lemmings J, Koo D. Developing competencies for
applied epidemiology: from process to product. Public Health Rep. 2008;123(Suppl
1):67–118.
37. Birkhead GS, Koo D. Professional competencies for applied epidemiologists: a
roadmap to a more effective epidemiologic workforce. J Public Health Manag Pract.
Nov-Dec 2006;12(6):501–504.
38. Gebbie K, Merrill J, Hwang I, Gupta M, Btoush R, Wagner M. Identifying individ-
ual competency in emerging areas of practice: an applied approach. Qual Health
Res. Sep 2002;12(7):990–999.
39. Brownson RC, Ballew P, Kittur ND, et al. Developing competencies for training prac-
titioners in evidence-based cancer control. J Cancer Educ. 2009;24(3):186–193.
40. Luck J, Yoon J, Bernell S, et al. The Oregon Public Health Policy Institute: Building
Competencies for Public Health Practice. Am J Public Health. Aug
2015;105(8):1537–1543.
41. Haire-Joshu D, McBride T, eds. Transdisciplinary Public Health: Research, Education,
and Practice. San Francisco, CA: Jossey-Bass Publishers; 2013.
42. Carter BJ. Evidence-based decision-making: practical issues in the appraisal of
evidence to inform policy and practice. Aust Health Rev. Nov 2010;34(4):435–440.
43. Fitzpatrick VE, Mayer C, Sherman BR. Undergraduate public health capstone
course: teaching evidence-based public health. Front Public Health. 2016;4:70.
44. O’Neall MA, Brownson RC. Teaching evidence-based public health to public health
practitioners. Ann Epidemiol. Aug 2005;15(7):540–544.
45. Wahabi HA, Siddiqui AR, Mohamed AG, Al-Hazmi AM, Zakaria N, Al-Ansary
LA. Evidence-based decision making in public health: capacity building for
public health students at King Saud University in Riyadh. Biomed Res Int.
2015;2015:576953.
46. Jansen MW, Hoeijmakers M. A masterclass to teach public health professionals to
conduct practice-based research to promote evidence-based practice: a case study
from The Netherlands. J Public Health Manag Pract. Jan-Feb 2012;19(1):83–92.
47. Gibbert WS, Keating SM, Jacobs JA, et al. Training the workforce in evidence-
based public health: an evaluation of impact among US and international practi-
tioners. Prev Chronic Dis. 2013;10:E148.
48. Yost J, Ciliska D, Dobbins M. Evaluating the impact of an intensive education
workshop on evidence-informed decision making knowledge, skills, and behav-
iours: a mixed methods study. BMC Med Educ. 2014;14:13.
49. Yarber L, Brownson CA, Jacob RR, et al. Evaluating a train-the-trainer approach
for improving capacity for evidence-based decision making in public health. BMC
Health Serv Res. 2015;15(1):547.
50. Maxwell ML, Adily A, Ward JE. Promoting evidence-based practice in population
health at the local level: a case study in workforce capacity development. Aust
Health Rev. Aug 2007;31(3):422–429.
51. Maylahn C, Bohn C, Hammer M, Waltz E. Strengthening epidemiologic compe-
tencies among local health professionals in New York: teaching evidence-based
public health. Public Health Rep. 2008;123(Suppl 1):35–43.
52. Brownson RC, Diem G, Grabauskas V, et al. Training practitioners in
evidence-based chronic disease prevention for global health. Promot Educ.
2007;14(3):159–163.
53. Oliver KB, Dalrymple P, Lehmann HP, McClellan DA, Robinson KA, Twose C.
Bringing evidence to practice: a team approach to teaching skills required for an
informationist role in evidence-based clinical and public health practice. J Med
Libr Assoc. Jan 2008;96(1):50–57.
54. Diem G, Brownson RC, Grabauskas V, Shatchkute A, Stachenko S. Prevention
and control of noncommunicable diseases through evidence-based public
health: implementing the NCD 2020 action plan. Glob Health Promot. Sep
2016;23(3):5–13.
55. Pettman TL, Armstrong R, Jones K, Waters E, Doyle J. Cochrane update: building
capacity in evidence-informed decision-making to improve public health. J Public
Health (Oxf). Dec 2013;35(4):624–627.
56. Linkov F, LaPorte R, Lovalekar M, Dodani S. Web quality control for lec-
tures: Supercourse and Amazon.com. Croat Med J. Dec 2005;46(6):875–878.
57. Brownson RC, Ballew P, Brown KL, et al. The effect of disseminating evidence-
based interventions that promote physical activity to health departments. Am J
Public Health. Oct 2007;97(10):1900–1907.
58. Proctor EK. Leverage points for the implementation of evidence-based practice.
Brief Treatment and Crisis Intervention. Sep 2004;4(3):227–242.
59. Chambers LW. The new public health: do local public health agencies need a
booster (or organizational “fix”) to combat the diseases of disarray? Can J Public
Health. Sep-Oct 1992;83(5):326–328.
60. Bryan RL, Kreuter MW, Brownson RC. Integrating adult learning prin-
ciples into training for public health practice. Health Promot Pract. Oct
2009;10(4):557–563.
61. Brownson RC, Gurney JG, Land G. Evidence-based decision making in public
health. J Public Health Manag Pract. 1999;5:86–97.
62. Kaplan GE, Juhl AL, Gujral IB, Hoaglin-Wagner AL, Gabella BA, McDermott KM.
Tools for identifying and prioritizing evidence-based obesity prevention strate-
gies, Colorado. Prev Chronic Dis. 2013;10:E106.
63. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fun-
damental concept for public health practice. Annu Rev Public Health. Apr 21
2009;30:175–201.
64. Hess JJ, Eidson M, Tlumak JE, Raab KK, Luber G. An evidence-based pub-
lic health approach to climate change adaptation. Environ Health Perspect. Nov
2014;122(11):1177–1186.
65. Ginter PM, Duncan WJ, Capper SA. Keeping strategic thinking in strategic plan-
ning: macro-environmental analysis in a state health department of public health.
Public Health. 1992;106:253–269.
66. Florin P, Stevenson J. Identifying training and technical assistance needs in
community coalitions: a developmental approach. Health Education Research.
1993;8:417–432.
CHAPTER 3
Assessing Scientific Evidence for Public Health Action
BACKGROUND
In this era when public and media interest in health issues is intense, the
reasons for not taking action based on an individual research study, even if
it was carefully designed, successfully conducted, and properly analyzed and
interpreted, need to be emphasized. Public health research is incremental,
with a body of scientific evidence building up over years or decades. Therefore,
although individual studies may contribute substantially to public health deci-
sion making, a single study rarely constitutes a strong basis for action. The
example in Box 3.1 regarding the contamination of drinking water in Flint,
Michigan is unusual because action was warranted based on a small but con-
vincing body of scientific evidence.7,8
When considering the science, strong evidence from epidemiologic (and
other) studies may suggest that prevention and control measures should be
taken. Conversely, evidence may be equivocal, so that taking action would be
premature. Often the strength of evidence is suggestive, but not conclusive;
yet one has to make a decision about the desirability of taking action. Here,
other questions come to mind:
Box 3.1
CONTAMINATION OF DRINKING WATER IN FLINT,
MICHIGAN
In 2014, the city of Flint, Michigan temporarily changed its drinking water
supply from Lake Huron to the Flint River in an effort to save money. After
this change, residents expressed concerns about water color, taste, and
odor, along with a range of health complaints (e.g., skin rashes).8 Bacteria
were detected in excess of Safe Drinking Water standards. The switch in
water source increased the likelihood for corrosion and leaching of lead
into drinking water, in part due to the aging distribution system in Flint.
Because lead in drinking water is neurotoxic and affects numerous developmental processes (e.g., intelligence, behavior), an investigative team analyzed blood lead levels in children younger than 5 years before and after
Flint introduced a more corrosive water supply. The incidence of elevated
blood lead levels increased from 2.4% to 4.9% (p < 0.05) after the source
change, and neighborhoods with the highest water lead levels experienced a
6.6% increase.8 The most socioeconomically disadvantaged neighborhoods
showed the highest blood lead level increases. Based on a single epidemi-
ologic study, investigators uncovered one of the most vivid examples of
health inequalities in the United States. In the Flint example, citizens,
mostly black, already had a disparity in lead exposure that was widened
by the change in water source and lack of government action.7 The Flint
situation is a vivid example of the public health need to maintain a mod-
ern water infrastructure along with the need to address health inequalities
framed by a history of racial discrimination, “white flight,” declining tax
revenues, and a city government’s inability to provide basic services.8
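The before-and-after comparison in Box 3.1 (elevated blood lead incidence rising from 2.4% to 4.9%, p < 0.05) is the kind of result that can be screened with a two-proportion z-test. A minimal sketch follows; the sample sizes are hypothetical, not the study's actual counts:

```python
from math import sqrt, erfc

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test.

    x1/n1: events and sample size before the exposure change;
    x2/n2: events and sample size after.
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal survival function.
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

# Hypothetical counts chosen to match the 2.4% and 4.9% rates in Box 3.1.
z, p = two_proportion_z(x1=24, n1=1000, x2=49, n2=1000)
```

With these illustrative counts the increase is significant at the 5% level, consistent in direction with the p < 0.05 reported for the Flint analysis.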
If the answer to the first three questions is “yes,” then the decision to take
action is relatively straightforward. In practice, unfortunately, decisions are
seldom so simple.
for immediate action, political pressure for action, and community support
for responding to the striking new research findings with new or modified
programs. The importance of community action in motivating public health
efforts was shown in the Long Island Breast Cancer Study Project (LIBCSP).
Community advocates in Long Island raised concerns about the high inci-
dence of breast cancer and possible linkages with environmental chemicals
and radiation. More than 10 research projects have been conducted by the
New York State Health Department, along with scientists from universities
and the National Institutes of Health. In each Long Island–area county, breast
cancer incidence increased over a 10-year period, while mortality from breast
cancer decreased.9 At the conclusion of the study, the LIBCSP could not iden-
tify a set of specific environmental agents that could be responsible for the
high rates of breast cancer incidence. The exceptions may be breast cancer risk
associated with exposure to polycyclic aromatic hydrocarbons and living in proximity to organochlorine-containing hazardous waste sites.10 The LIBCSP is an
important example of participatory research in which patient advocates play
important roles in shaping the research (participatory approaches are dis-
cussed in more detail in chapters 5 and 10).
address publication bias include making strenuous efforts to find all published
and unpublished work when conducting systematic reviews16 and the estab-
lishment of reporting guidelines that specifically address publication bias
(also see chapter 8).17
The earliest guidelines for assessing causality for infectious diseases were
developed in the 1800s by Jacob Henle and Robert Koch. The Henle-Koch
Postulates state that (1) the agent must be shown to be present in every case
of the disease by isolation in pure culture; (2) the agent must not be found
in cases of other disease; (3) once isolated, the agent must be capable of
reproducing the disease in experimental animals; and (4) the agent must be
recovered from the experimental disease produced.11,20 These postulates have
proved less useful in evaluating causality for more contemporary health con-
ditions because most noninfectious diseases have long periods of induction
and multifactorial causation.
Subsequently, the US Surgeon General,21 Hill,22 Susser,23 and Rothman24
have all provided insights into causal criteria, particularly in regard to causa-
tion of chronic diseases such as heart disease, cancer, and arthritis. Although
criteria have sometimes been cited as checklists for assessing causality, they
were intended as factors to consider when examining an association: they
have value, but only as general guidelines. Several criteria relate to particular
cases of refuting biases or drawing on nonepidemiologic evidence. These crite-
ria have been discussed in detail elsewhere.19,25 In the end, belief in causality
1. Consistency
Definition: The association is observed in studies in different settings and
populations, using various methods.
Rule of evidence: The likelihood of a causal association increases as the pro-
portion of studies with similar (positive) results increases.
2. Strength
Definition: This is defined by the size of the relative risk estimate. In some
situations, meta-analytic techniques are used to provide an overall, sum-
mary risk estimate.
Rule of evidence: The likelihood of a causal association increases as the sum-
mary relative risk estimate increases. Larger effect estimates are generally
less likely to be explained by unmeasured bias or confounding.
3. Temporality
Definition: This is perhaps the most important criterion for causality—
some consider it an absolute condition. Temporality refers to the tem-
poral relationship between the occurrence of the risk factor and the
occurrence of the disease or health condition.
Rule of evidence: The exposure (risk factor) must precede the disease.
4. Dose-response relationship
Definition: The observed relationship between the dose of the exposure
and the magnitude of the relative risk estimate.
Rule of evidence: An increasing level of exposure (in intensity or time)
increases the risk when hypothesized to do so.
5. Biological plausibility
Definition: The available knowledge on the biological mechanism of action
for the studied risk factor and disease outcome.
Rule of evidence: There is not a standard rule of thumb except that the
more likely the agent is biologically capable of influencing the disease,
the more probable that a causal relationship exists.
6. Experimental evidence
Definition: The presence of findings from a prevention trial in which the
factor of interest is removed from randomly assigned individuals.
Table 3.1. DEGREE TO WHICH CAUSAL CRITERIA ARE MET FOR TWO
CONTEMPORARY PUBLIC HEALTH ISSUES
a Predominantly childhood leukemia and brain cancer.
[Figure: a program or policy (what you do) produces outcomes (what you observe); alternative causes may also explain the observed outcomes.]
Most research in public health to date has tended to emphasize internal valid-
ity (e.g., well-controlled efficacy trials), while giving limited attention to exter-
nal validity (i.e., the degree to which findings from a study or set of studies
can be generalizable to and relevant for populations, settings, and times other
than those in which the original studies were conducted).5 Green succinctly
summarized a key challenge related to external validity in 2001:32
Where did the field get the idea that evidence of an intervention’s efficacy from
carefully controlled trials could be generalized as THE best practice for widely
varied populations and settings? (p. 167)
There are many factors that influence decision making in public health
(Table 3.4).19,41-43 Some of these factors are under the control of the public
health practitioner, whereas others are nearly impossible to modify. A group
of experts may systematically assemble and present a persuasive body of
Box 3.2
THE EVOLUTION OF BREAST CANCER SCREENING GUIDELINES
Breast cancer screening guidance for women aged 40 to 49 years has been
the subject of considerable debate and controversy. Breast cancer is the
most common cancer type among US women, accounting for 246,660 new
cases and 40,450 deaths in 2016.45 It is suggested that appropriate use of
screening mammography may lower death rates due to breast cancer by as
much as 30%. Official expert guidance from the US government was first
issued in 1977 when the National Cancer Institute (NCI) recommended
annual mammography screening for women aged 50 years and older
but discouraged screening for younger women.46 In 1980, the American
Cancer Society dissented from this guidance and recommended a baseline
mammogram for women at age 35 years and annual or biannual mammo-
grams for women in their 40s.65 The NCI and other professional organiza-
tions differed on recommendations for women in their 40s throughout
the late 1980s and 1990s. To resolve disagreement, the director of the
National Institutes of Health called for a Consensus Development
Conference in January 1997. Based on evidence from randomized con-
trolled trials, the consensus group concluded that the available data did
not support a blanket mammography recommendation for women in
their 40s. The panel issued a draft statement that largely left the decision
regarding screening up to the woman.47 This guidance led to widespread
media attention and controversy. Within 1 week, the US Senate passed a
98-to-0 vote resolution calling on the NCI to express unequivocal support
for screening women in their 40s, and within 60 days, the NCI had issued
a new recommendation.
The controversy regarding breast cancer screening resurfaced in 2009.
The US Preventive Services Task Force (USPSTF) was first convened by the
Public Health Service in 1984. Since its inception, it has been recognized
as an authoritative source for determining the effectiveness of clinical
( 64 ) Evidence-Based Public Health
As noted earlier, many factors enter into decisions about public health inter-
ventions, including certainty of causality, validity, relevance, economics, and
political climate (Table 3.4). Measures of burden may also contribute substan-
tially to science-based decision making. The burden of infectious diseases,
such as measles, has been primarily assessed through incidence, measured
in case numbers or rates. For chronic or noninfectious diseases like cancer,
burden can be measured in terms of morbidity, mortality, and disability. The
choice of measure should depend on the characteristics of the condition being
examined. For example, mortality rates are useful in reporting data on a fatal
condition such as lung cancer. For a common, yet generally nonfatal condition
such as arthritis, a measure of disability would be more useful (e.g., limita-
tions in activities of daily living). When available, measures of the popula-
tion burden of health conditions are extremely useful (e.g., quality-adjusted
life-years).
When assessing the scientific basis for a public health program or policy,
quantitative considerations of preventable disease can help us make a rational
Population attributable risk = Pe (relative risk − 1) / [1 + Pe (relative risk − 1)]

and the prevented fraction:

Prevented fraction = Pe (1 − relative risk)

where Pe is the prevalence of the exposure in the population.
a Based on body mass index >30 kg/m².
b Moderate to heavy alcohol use may increase risk, whereas light use may reduce risk.
From Liu et al.53
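The attributable-risk and prevented-fraction arithmetic above can be computed directly. A minimal sketch; the smoking figures below are illustrative assumptions, not estimates from the text:

```python
def population_attributable_risk(pe, rr):
    """Pe(RR - 1) / (1 + Pe(RR - 1)): fraction of cases in the population
    attributable to the exposure (Pe = exposure prevalence, RR = relative risk)."""
    return pe * (rr - 1) / (1 + pe * (rr - 1))

def prevented_fraction(pe, rr):
    """Pe(1 - RR): fraction of potential cases prevented when the
    exposure is protective (RR < 1)."""
    return pe * (1 - rr)

# Illustrative: if 25% of a population smokes and smoking carries a
# relative risk of 20 for lung cancer, roughly 83% of lung cancer cases
# are attributable to smoking.
par = population_attributable_risk(0.25, 20)
```

The same two functions reproduce the "rational targeting" logic the chapter describes: interventions aimed at exposures with high prevalence and high relative risk avert the largest share of cases.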
data on only 31 (4.4%), suggesting the need to expand the evidence base on
prevention.56
Assessing Time Trends
Numerous other factors may be considered when weighing the need for pub-
lic health action. One important factor to consider involves temporal trends.
Public health surveillance systems can provide information on changes over
time in a risk factor or disease of interest. Through use of these data, one
may determine whether the condition of interest is increasing, decreasing,
or remaining constant. One may also examine the incidence or prevalence
of a condition in relation to other conditions of interest. For example, if
a public health practitioner were working with a statewide coalition to
control cancer, it would be useful to plot both the incidence and mortal-
ity rates for various cancer sites (Figure 3.3).57 The researcher might reach
[Line graph: rate per 100,000 (0–150) by year, 1975–2013.]
Figure 3.3: Trends in incidence and mortality for lung and breast cancer in women, United
States, 1975–2013.
Source: Howlader et al., 2016.57
must be done on a particular health topic). They are often less helpful for
Type 2 evidence (i.e., this specific intervention should be conducted within a
local area).
Public health leaders began to formulate concrete public health objectives
as a basis for action during the post–World War II era. This was a clear shift
from earlier efforts because emphasis was placed on quantifiable objectives
and explicit time limits.58 A few key examples illustrate the use of public data
in setting and measuring progress toward health objectives. A paper by the
Institute of Medicine59 sparked a US movement to set objectives for public
health.58 These initial actions by the Institute of Medicine led to the 1979
Surgeon General’s Report on Health Promotion and Disease Prevention, which
set five national goals—one each for the principal life stages of infancy, child-
hood, adolescence and young adulthood, adulthood, and older adulthood.60
Over approximately the same time period, the World Health Organization
published “Health Targets for Europe” in 1984 and adopted a Health for All
policy with 38 targets.61
More recently, the US Public Health Service established four overarching
health goals for the year 2020: (1) eliminate preventable disease, disability,
injury, and premature death; (2) achieve health equity, eliminate disparities,
and improve the health of all groups; (3) create social and physical environ-
ments that promote good health for all; and (4) promote healthy development
and healthy behaviors across every stage of life.62 As discussed in the final
chapter in this book, addressing social and physical determinants of health
raises important questions about the types of evidence that are appropriate and
how we track progress.
SUMMARY
The issues covered in this chapter highlight one of the continuing challenges
for public health practitioners and policy makers—determining when sci-
entific evidence is sufficient for public health action. In nearly all instances,
scientific studies cannot demonstrate causality with absolute certainty. The
demarcation between action and inaction is seldom distinct and requires care-
ful consideration of scientific evidence as well as assessment of values, pref-
erences, costs, and benefits of various options. The difficulty in determining
scientific certainty was eloquently summarized by A. B. Hill22:
Because policy cannot wait for perfect information, one must consider actions
wherein the benefit outweighs the risk. This was summarized by Szklo as,
“How much do we stand to gain if we are right?” and “How much do we stand
to lose if we are wrong?”63
In many instances, waiting for absolute scientific certainty would mean
delaying crucial public health action. For example, the first cases of acquired
immunodeficiency syndrome (AIDS) were described in 1981, yet the causative
agent (a retrovirus) was not identified until 1983.64 Studies in epidemiology
and prevention research, therefore, began well before gaining a full under-
standing of the molecular biology of AIDS transmission.
KEY CHAPTER POINTS
Selected Websites
Disease Control Priorities Project (DCPP) <http://www.dcp2.org>. The DCPP is an
ongoing effort to assess disease control priorities and produce evidence-based
REFERENCES
1. Brownson RC, Baker EA, Leet TL, Gillespie KN, True WR. Evidence-Based Public
Health. 2nd ed. New York, NY: Oxford University Press; 2011.
2. Fielding JE, Briss PA. Promoting evidence-based public health policy: can
we have better evidence and more action? Health Aff (Millwood). Jul-Aug
2006;25(4):969–978.
3. Sallis JF, Owen N, Fotheringham MJ. Behavioral epidemiology: a systematic
framework to classify phases of research on health promotion and disease pre-
vention. Ann Behav Med. 2000;22(4):294–298.
4. Brunner JW, Sankare IC, Kahn KL. Interdisciplinary priorities for dissemination,
implementation, and improvement science: frameworks, mechanics, and mea-
sures. Clin Transl Sci. Dec 2015;8(6):820–823.
5. Green LW, Ottoson JM, Garcia C, Hiatt RA. Diffusion theory, and knowledge dis-
semination, utilization, and integration in public health. Annu Rev Public Health.
Jan 2009;30:151–174.
6. Brownson R, Colditz G, Proctor E, eds. Dissemination and Implementation Research
in Health: Translating Science to Practice. New York, NY: Oxford University
Press; 2012.
7. Gostin LO. Politics and public health: the Flint drinking water crisis. Hastings Cent
Rep. Jul 2016;46(4):5–6.
8. Hanna-Attisha M, LaChance J, Sadler RC, Champney Schnepp A. Elevated
blood lead levels in children associated with the Flint drinking water crisis: a
spatial analysis of risk and public health response. Am J Public Health. Feb
2016;106(2):283–290.
9. US Department of Health and Human Services. Report to the U.S. Congress: The
Long Island Breast Cancer Study Project. Bethesda, MD: National Institutes of
Health; 2004.
10. Winn DM. Science and society: the Long Island Breast Cancer Study Project. Nat
Rev Cancer. Dec 2005;5(12):986–994.
11. Porta M, ed. A Dictionary of Epidemiology. 6th ed. New York, NY: Oxford University
Press; 2014.
12. Olson CM, Rennie D, Cook D, et al. Publication bias in editorial decision making.
JAMA. Jun 5 2002;287(21):2825–2828.
13. Dwan K, Gamble C, Williamson PR, Kirkham JJ. Systematic review of the empiri-
cal evidence of study publication bias and outcome reporting bias: an updated
review. PLoS One. 2013;8(7):e66844.
14. Guyatt G, Rennie D, Meade M, Cook D, eds. Users’ Guides to the Medical Literature.
A Manual for Evidence-Based Clinical Practice. 3rd ed. Chicago, IL: American Medical
Association Press; 2015.
15. Petitti DB. Meta-analysis, Decision Analysis, and Cost-Effectiveness Analysis: Methods
for Quantitative Synthesis in Medicine. 2nd ed. New York, NY: Oxford University
Press; 2000.
16. Delgado-Rodriguez M. Systematic reviews of meta-analyses: applications and
limitations. J Epidemiol Community Health. Feb 2006;60(2):90–92.
17. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting sys-
tematic reviews and meta-analyses of studies that evaluate healthcare interven-
tions: explanation and elaboration. BMJ. 2009;339:b2700.
41. Anderson LM, Brownson RC, Fullilove MT, et al. Evidence-based public health policy
and practice: promises and limits. Am J Prev Med. Jun 2005;28(5 Suppl):226–230.
42. Bero LA, Jadad AR. How consumers and policy makers can use systematic reviews
for decision making. In: Mulrow C, Cook D, eds. Systematic Reviews. Synthesis of
Best Evidence for Health Care Decisions. Philadelphia, PA: American College of
Physicians; 1998:45–54.
43. Mays GP, Scutchfield FD. Improving population health by learning from systems
and services. Am J Public Health. Apr 2015;105(Suppl 2):S145–S147.
44. Ernster VL. Mammography screening for women aged 40 through 49: a guide-
lines saga and a clarion call for informed decision making. Am J Public Health.
1997;87(7):1103–1106.
45. American Cancer Society. Cancer Facts and Figures 2016. Atlanta, GA: American
Cancer Society; 2016.
46. Breslow L, Agran L, Breslow DM, Morganstern M, Ellwein L. Final Report of NCI
Ad Hoc Working Groups on Mammography in Screening for Breast Cancer. J Natl
Cancer Inst. 1977;59(2):469–541.
47. National Institutes of Health Consensus Development Panel. National Institutes
of Health Consensus Development Conference Statement: Breast Cancer
Screening for Women Ages 40-49, January 21-23, 1997. J Natl Cancer Inst.
1997;89:1015–1026.
48. Screening for breast cancer: U.S. Preventive Services Task Force recommendation
statement. Ann Intern Med. Nov 17 2009;151(10):716–726, W-236.
49. U.S. Preventive Services Task Force. Final Recommendation Statement: Breast
Cancer: Screening. http://www.uspreventiveservicestaskforce.org/Page/
Document/RecommendationStatementFinal/breast-cancer-screening1. Accessed
September 2, 2016.
50. Kolata G. Mammogram debate took group by surprise. The New York Times.
November 20, 2009.
51. Oliver TR. The politics of public health policy. Annu Rev Public Health.
2006;27:195–233.
52. Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public
health policy. Am J Public Health. Sep 2009;99(9):1576–1583.
53. Liu L, Nelson J, Newschaffer C. Cardiovascular disease. In: Remington PL,
Brownson RC, Wegner M, eds. Chronic Disease Epidemiology and Control. 3rd ed.
Washington, DC: American Public Health Association; 2016.
54. Gargiullo PM, Rothenberg RB, Wilson HG. Confidence intervals, hypothesis tests,
and sample sizes for the prevented fraction in cross-sectional studies. Stat Med.
1995;14(1):51–72.
55. Straatman H, Verbeek AL, Peeters PH. Etiologic and prevented fraction in case-
control studies of screening. J Clin Epidemiol. 1988;41(8):807–811.
56. Thacker SB, Ikeda RM, Gieseker KE, et al. The evidence base for public health
informing policy at the Centers for Disease Control and Prevention. Am J Prev
Med. Oct 2005;29(3):227–233.
57. Howlader N, Noone A, Krapcho M, et al. SEER Cancer Statistics Review, 1975-2013.
Bethesda, MD: National Cancer Institute; 2016.
58. Breslow L. The future of public health: prospects in the United States for the
1990s. Annu Rev Public Health. 1990;11:1–28.
59. Nightingale EO, Cureton M, Kamar V, Trudeau MB. Perspectives on Health
Promotion and Disease Prevention in the United States. [staff paper]. Washington,
DC: Institute of Medicine, National Academy of Sciences; 1978.
60. U.S. Department of Health, Education, and Welfare. Healthy People. The Surgeon
General’s Report on Health Promotion and Disease Prevention. Washington,
DC: U.S. Department of Health, Education, and Welfare; 1979. Publication
no. 79-55071.
61. Irvine L, Elliott L, Wallace H, Crombie IK. A review of major influences on current
public health policy in developed countries in the second half of the 20th century.
J R Soc Promot Health. Mar 2006;126(2):73–78.
62. Fielding J, Kumanyika S. Recommendations for the concepts and form of Healthy
People 2020. Am J Prev Med. Sep 2009;37(3):255–257.
63. Szklo M. Translating epi data into public policy is subject of Hopkins sympo-
sium. Focus is on lessons learned from experience. The Epidemiology Monitor.
1998(August/September).
64. Wainberg MA, Jeang KT. 25 years of HIV-1 research: progress and perspectives.
BMC Med. 2008;6:31.
65. American Cancer Society. Report on the cancer-related health checkup. CA: Cancer
J Clin. 1980;30:193–196.
CHAPTER 4
Understanding and Applying Economic
Evaluation and Other Analytic Tools
There are in fact two things, science and opinion; the former begets knowledge, the
latter ignorance.
Hippocrates
BACKGROUND
ICER = Incremental costs / Incremental benefits
The particular items included in the numerator and denominator will depend
on the intervention and the type of economic evaluation.
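The ratio can be computed directly once both increments are measured against a comparison program. A minimal sketch, using the workplace smoking cessation figures cited later in this chapter ($8,940 in total program costs and 15 quitters, compared with doing nothing):

```python
def icer(cost_new, benefit_new, cost_old=0.0, benefit_old=0.0):
    """Incremental cost-effectiveness ratio: incremental costs divided by
    incremental benefits, relative to a comparison program (default:
    'doing nothing' at zero cost and zero benefit)."""
    return (cost_new - cost_old) / (benefit_new - benefit_old)

# Workplace smoking cessation program: $8,940 total cost, 15 quitters.
cost_per_quitter = icer(8940, 15)  # 596.0 dollars per additional quitter
```

Note that when the comparison program is itself costly and effective, both the numerator and denominator shrink, which is why the choice of comparator matters so much to the result.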
Economic Evaluation and Other Analytic Tools ( 79 )
[Figure: an economic evaluation compares the incremental costs of a program (direct, indirect, and averted treatment costs) with its incremental benefits (years of life saved, QALYs, or dollars), relative to a comparison program B, which may be new or old and could be "doing nothing."]
In this section, the first three steps are considered. The remaining steps are
considered separately.
The first step is to identify the intervention and the group. Unless the
economic evaluation is to be conducted alongside a new intervention, the
intervention should have already been demonstrated to be effective. There
is nothing to be gained from an economic evaluation of an ineffective inter-
vention. The intervention and the group it applies to should be specified as
completely as possible, including identifying the expected benefits of the
program.
The second element is the selection of the perspective of the economic
evaluation. Any intervention can be considered from several points of view,
often characterized as moving from narrow to broad. The narrowest perspec-
tive is that of the agency or organization directly involved in delivering the
proposed intervention. A next level might be the perspective of insurers, or
payers, especially in health, where consumers and payers are often two sepa-
rate groups. The broadest perspective is that of society as a whole. The Panel
Measure Costs
The fourth step is the identification and measurement of all incremental costs
of a program, option, or intervention. Incremental costs are the additional
costs related to the program. The scope of the costs is determined by the per-
spective of the analysis. If such costs are concentrated among a small group
of people, this step will be relatively easy. As costs are more dispersed, it may
become more difficult to identify all potential costs. Measurement of the iden-
tified costs may similarly be complicated by issues of units of measurement
(e.g., monetary wages vs. donated labor time) and timing (e.g., costs incurred
over a 5-year interval).
Table 4.1. TYPES OF ECONOMIC EVALUATIONS AND MEASUREMENT
OF BENEFITS

Cost-effectiveness analysis (CEA)
Benefit: Single common benefit (or outcome). Are the (natural) outcomes worth the cost?
CEA compares the costs and benefits of different programs using the same outcome measure. Outcome measures are naturally occurring health outcomes, such as cases found, years of life saved, or injuries prevented. CEA is easy to understand in the health field and avoids converting health outcomes to dollars. It is limited in its ability to compare interventions because those compared must have the same outcome. The result of a CEA (its ICER) is the cost per unit of health outcome (e.g., cost per year of life saved). Lower ratios are preferred.
Measurement of benefits: Natural units, e.g., life years gained, lower A1c levels, improved physical activity.
Example:9
Intervention: smoking cessation program in the workplace
Costs measured: all costs of the cessation program for all 100 participants—$8,940
Effect measured: number of people who quit smoking (quitters)—15 quitters
The incremental cost-effectiveness ratio (ICER) = $596 per additional quitter

Cost-utility analysis (CUA)
Benefit: One or more benefits (outcomes) standardized into a single value. Are standardized outcomes worth the cost?
CUA compares the costs and benefits of a program, with benefits measured in quality-adjusted life-years (QALYs). CUA allows for comparison of many projects with health-related outcomes and is useful when both morbidity and mortality are affected or the programs have a wide range of outcomes but all have an effect on healthy years of life. Translating health outcomes, particularly morbidity, to years of healthy life is controversial. The result of a CUA (its ICER) is the cost per QALY ($/QALY). Lower values are preferred.
Measurement of benefits: QALYs.
Example:10
Intervention: diabetes self-management programs in primary care settings
Program costs: $866 per participant per year; total lifetime costs of program $11,760
Several effects: 87.5% benefited; A1c −0.5%; total cholesterol −10%
Using QALYs to add up all effects results in a lifetime gain of 0.2972 QALYs
ICER: $39,563/QALY saved
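The QALY arithmetic underlying a CUA is utility-weighted time. A minimal sketch; the utility weights and durations below are illustrative assumptions, not values from the diabetes study cited above:

```python
def total_qalys(periods):
    """Sum of (years lived x utility weight), where utility is scaled so
    that 1.0 = full health and 0.0 = death."""
    return sum(years * utility for years, utility in periods)

# Hypothetical: 10 years in good health (utility 0.9) followed by
# 5 years with complications (utility 0.7) -- about 12.5 QALYs.
qalys = total_qalys([(10, 0.9), (5, 0.7)])
```

The controversy the table notes lies almost entirely in choosing the utility weights; the summation itself is uncontroversial.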
After all costs are identified and counted, they will be summed to form the
numerator of the ICER. Table 4.2 shows the types of costs and their usual
measures. The labels and definitions for the types of costs vary across disci-
plines and across textbooks. The important objective of the cost portion of
the analysis is the identification and determination of all costs, regardless of
their labels.
The first category of costs is direct, or program, costs. One caution in stat-
ing these costs is that the true economic cost of providing the program should
be identified. This is the resource cost of the program, also referred to as the
opportunity cost. If this program is undertaken, what other program will we
forego? What opportunity must be passed up in order to fund this program?
In health, there is often a distinction between charges and costs. For example, a
screening test for diabetes may be billed at $200; however, the cost of provid-
ing the test may be $150. From a societal standpoint, the $150 figure should
be used. But from the replication standpoint, the charge of $200 is relevant
because this is what it would cost to replicate the program.
Direct costs include labor costs, often measured by the number of full-time
equivalent employees (FTEs) and their wages and fringe benefits. If volun-
teers will be used, the value of their time should be imputed using either their
own wage rates or the average wage rate for similarly skilled work within the
community. Other direct costs are supplies and overhead. (Table 4.3 provides
a detailed worksheet for determining direct costs.)
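The direct-cost tally described above, including the imputed value of volunteer time, can be sketched as a small calculation; all of the wage and cost figures here are hypothetical:

```python
def direct_costs(fte_count, annual_wage, fringe_rate,
                 volunteer_hours, imputed_wage, supplies, overhead):
    """Direct program costs: paid labor (wages plus fringe benefits),
    volunteer time imputed at a market wage rate, supplies, and overhead."""
    labor = fte_count * annual_wage * (1 + fringe_rate)
    volunteers = volunteer_hours * imputed_wage
    return labor + volunteers + supplies + overhead

# Hypothetical program: 2 FTEs at $50,000 with 30% fringe benefits,
# 500 volunteer hours imputed at $15/hour, $8,000 in supplies,
# $12,000 in overhead -- about $157,500 in total direct costs.
total = direct_costs(2, 50_000, 0.30, 500, 15, 8_000, 12_000)
```

Valuing volunteer labor at a market wage, rather than at zero, matters for replication: an agency without volunteers faces that cost in cash.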
Indirect costs are the other main component of costs. By indirect, we
mean that they are not directly paid by the sponsoring agency or organiza-
tion or directly received by the program participants. In other words, these
are costs that “spill over,” and they are often referred to as spillover costs.
These can be subdivided into five categories. Three of these (time and travel
costs, the cost of treating side effects, and the cost of treatment during gained
life expectancy) are positive costs and are added to the numerator. The other
two (averted treatment costs and averted productivity losses) are negative
costs (i.e., benefits) that are subtracted from the numerator. They are included
in the numerator because they directly affect the public health budget. This
is especially true in a nation with a global health budget but is also recom-
mended for the United States.
The first category of indirect costs is time and travel costs to the partic-
ipants. From a societal standpoint, these costs should be attributed to the
program. Often, to obtain these costs, a survey of program participants must
be conducted. In addition, if other family members or friends are involved
as caregivers to the program participants, their time and travel costs should
be included. An example of this category of indirect costs would occur with
a diabetes case management program that featured extra provider visits,
group meetings, and recommended exercise. The time spent in all of these
aspects of the program should be valued and counted, as well as any related
Table 4.2. TYPES OF COSTS INCLUDED IN
ECONOMIC EVALUATIONS

Time and travel costs: Time costs to participants, including lost wages. Travel costs to participants, including transportation and child care. Caregiver costs, including both time and travel. Any costs of the program incurred by other budgetary groups. The value of volunteer labor, measured using the cost to replace it.
Cost of treating side effects: Cost of treatment, using actual cost or charge data or imputed using local, regional, or national averages.
Cost of treatment during gained life expectancy: National data on average cost of treatment per year, multiplied by extended life expectancy.
Averted treatment costs: Future health care treatment costs that will be saved as a result of a program or policy. Measured as the weighted sum of the cost of treatment, including alternative options and complications. Weights reflect the proportion of people projected to have each alternative treatment or complication. Data can be from administrative databases, such as claims data, or imputed using local, regional, or national average costs or charges.
Averted productivity losses: The present value of future wages earned because of disease or injury prevention. Includes costs to employers of replacing absent workers (recruitment, training, etc.). Wages and fringe benefits of participants; for persons not in the labor force, average wages of similarly aged persons or local, regional, or national average wages.
Table 4.3. CONTINUED

Travel
Examples: staff meeting travel, lodging, and per diem; steering group travel and lodging; mileage associated with program implementation
Other Nonpersonnel Service Costs
Examples: conference call services; long-distance services; website service; transcription costs for focus group tapes
Indirect/overhead costs
Total costs
ratio? Proponents of their inclusion argue that these costs are part of the
health budget and will affect its future size. Thus, these costs are included
in many studies conducted in countries with global health budgets, such as
the United Kingdom. Those opposed point out that these persons will also
be paying taxes, thus helping to support their additional consumption of
health care. Why single out one aspect of their future spending? Most US-
based studies do not include these costs. The Panel on Cost-Effectiveness in
Health and Medicine did not make a recommendation with respect to this
issue.6
The fourth group of indirect costs is averted treatment costs. These are
future treatment costs that will be saved as a result of the intervention. For
example, screening for diabetes might identify cases earlier and thus limit or
prevent some complications and early mortality. These are complications that
will not need to be treated (if prevented) or that will not need to be treated as
expensively (if delayed). The onset of diabetes and the incidence of complica-
tions with and without the program must be estimated and then multiplied by
the costs of treatment to obtain the averted treatment costs. Information on
the natural course of a disease and the costs of its treatment are often avail-
able from the published literature or publicly available data sources. Cost of
illness studies, described below, provide estimates of disease burden.
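The weighted-sum calculation described above can be sketched directly; the proportions and treatment costs below are illustrative assumptions, not estimates from the literature:

```python
def averted_treatment_costs(cases_prevented, outcome_mix):
    """Averted costs = cases prevented x weighted sum of treatment costs,
    where weights are the proportions of people projected to have each
    complication or treatment pathway."""
    expected_cost_per_case = sum(p * cost for p, cost in outcome_mix)
    return cases_prevented * expected_cost_per_case

# Hypothetical diabetes screening program: each prevented case avoids a
# mix of (proportion, average treatment cost) outcomes.
mix = [(0.60, 4_000),    # routine management only
       (0.30, 15_000),   # moderate complications
       (0.10, 60_000)]   # severe complications
savings = averted_treatment_costs(50, mix)
```

Because these savings enter the ICER as negative costs, overstating either the proportions or the treatment costs directly flatters the ratio, which is why published natural-history data are preferred to local guesses.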
The fifth category is averted productivity losses. These represent the sav-
ings to society from avoiding reduced productivity and lost work time. This
will be appropriate for workplace injury prevention programs, but many inter-
ventions lead to reduced absenteeism and increased productivity. For exam-
ple, an asthma management program may lead to fewer sick days. In addition,
as noted previously, the donated labor of caregivers or volunteers needs to be
valued because it is a real cost, even if unbilled and unpaid. If others wish to
replicate the intervention and do not have volunteers, they have an estimate
of the labor expense.
Ideally, productivity losses are measured directly using the wages and
fringe benefits of participants. Often this information is not available—either
it was not collected, or it does not exist because the participants are not in
the labor force. In this case, the average wages and fringe benefits of similar
persons, or of the average person, can be used to estimate this negative cost.
In the United States, average wages by profession can be found at the Bureau
of Labor Statistics website, and similar sites exist for many other countries.
Sources such as these are useful for valuing volunteer labor, as well.
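"Present value of future wages" means discounting each future year's earnings back to today. A minimal sketch; the wage stream and the 3% discount rate are assumptions for illustration:

```python
def present_value(annual_amounts, discount_rate):
    """Discount a stream of future annual amounts to the present:
    PV = sum over t of amount_t / (1 + r)^t, for t = 1, 2, ..."""
    return sum(amount / (1 + discount_rate) ** t
               for t, amount in enumerate(annual_amounts, start=1))

# Hypothetical: $40,000 in wages preserved in each of the next 3 years,
# discounted at 3% per year.
pv = present_value([40_000, 40_000, 40_000], 0.03)
```

The choice of discount rate is a substantive assumption, not a technicality: higher rates shrink the value of benefits that arrive far in the future, which systematically disadvantages prevention programs.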
There are several tools and instruments available for estimating produc-
tivity costs, including absenteeism and the value of unpaid time.11,12 These
surveys ask about occupation, other activities (differentiating caregiving
tasks from other activities), and time usually spent at paid work or engaged
in unpaid productive activities. They then provide a methodology to map the
answers to an estimate of productivity costs.
Averted productivity losses are used in CBA and CEA but not in CUA.
Benefits in a CUA are measured in terms of health utility, which in turn
depends on a person’s ability to work and earn an income. Thus, the negative
costs of averted productivity losses are incorporated in the benefit measure in
CUA. However, even in this method, it is often useful to calculate the averted
productivity losses so that they can be highlighted for some stakeholders.
Measure Outcomes
The fifth step is the identification and measurement of all outcomes, or ben-
efits. Again, the incremental benefits are of interest: what additional benefits
will this program provide, compared with some specified alternative? This
step is often more complicated than the identification and measurement of
costs. In public health, benefits can include improved health status (cases
prevented) and improved mortality outcomes (deaths averted). Clearly, these
benefits will be difficult to measure and will be partially subjective.
Another complicating factor for public health is the selection of the rele-
vant time period. The aim of a program or intervention is the improvement of
health, so the output to be measured is improved health status. This is a final
outcome that may take many years to achieve. Often, a program can only track
participants for a brief period of time, and any evaluation will, of necessity,
measure intermediate outcomes, such as the number of persons exercising.
In such cases, the literature can often be used to extrapolate the effect of the
intermediate outcome on health. For example, suppose that one were evalu-
ating a program designed to increase physical activity levels. Other studies
have demonstrated that increased physical activity reduces the risk for cardiac
events. These studies can be used to estimate the anticipated final outcomes
of the intervention.
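The extrapolation described here can be sketched as follows. The baseline risk and relative risk are hypothetical stand-ins for values that would be taken from the literature.

```python
# Sketch of extrapolating an intermediate outcome (persons newly active) to
# a final outcome (cardiac events averted). The baseline risk and relative
# risk are hypothetical stand-ins for values taken from the literature.

def events_averted(n_newly_active, baseline_risk, relative_risk):
    """Expected events averted = persons x absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * (1 - relative_risk)
    return n_newly_active * absolute_risk_reduction

# 2,000 participants become active; assume a 10% baseline event risk and a
# relative risk of 0.75 for active versus inactive persons:
averted = events_averted(2000, 0.10, 0.75)
print(averted)  # 50.0
```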
The benefits of the program or intervention are the improvement in health
and are thus conceptually identical, regardless of the type of economic evalu-
ation. However, the unit of measurement and the specific elements included
differ by type of evaluation. In a CMA, when the benefits of the intervention
and its alternative are demonstrated to be identical, no further measurement
of benefits is needed. CBA measures the benefits in money. Thus, improve-
ments to health must be converted to currency amounts. If years of life are
saved, then these years must be valued in monetary units. There are several
suggested methods to make this conversion. All of them are subject to heated
debate.6
In response to dissatisfaction with the measurement of health benefits in
monetary units, particularly the wide range of values found using different
methods, some analysts argued for measuring benefits in a naturally occur-
ring health unit, such as years of life saved. This led to the development of
CEA, which uses a single health measure (years of life saved, cases averted) as
( 90 ) Evidence-Based Public Health
the measure of benefits. This has the advantage of not requiring reductions
of different outcomes to a single scale, but a single health measure cannot
capture all the benefits of most interventions. Most programs yield morbidity
and mortality improvements. By being forced to select one health measure,
only morbidity or mortality can be used to determine the cost-effectiveness
of the project. This underestimates the cost-effectiveness of projects because
the total costs are divided by only a portion of the benefits. In addition, only
programs with outcomes measured in the same unit (e.g., lives saved) can be
compared.
Partly in response to the shortcomings of CEA, some analysts argued for
the development of a health utility measure of benefits. Such a measure com-
bines morbidity and mortality effects into a single metric and is based on
the utility, or satisfaction, that health status gives to a person. Individuals’
self-reports of their valuation of health form the basis of the health utility
measure.
Several measures that meet these criteria have been developed. They
include the quality-adjusted life-year (QALY), the disability-adjusted life-year
(DALY), and the healthy year equivalent. The most widely used of these is the
QALY, defined as the amount of time in perfect health that would be valued
the same as a year with a disease or disability. For example, consider a year
with end-stage renal disease, requiring dialysis. Conceptually, the QALY for
this condition is the fraction of a year in perfect health that one would value
the same as a full year with the condition. Thus, QALYs range from 0 to 1,
with 0 defined as dead and 1 as a year in perfect health. The QALY assigned to
this condition will vary across persons, with some considering the condition
worse than others. If many individuals are surveyed, however, the average
QALY assigned to this condition can be obtained.
There are several ways to elicit QALY weights from individuals. These
include the visual rating scale, time trade-off method, and standard gamble.
There is debate about the theoretically appropriate method and the consis-
tency of results obtained from the different methods.13 With the visual rating
scale, survey participants are presented with a list of health conditions. Beside
each description of a condition, there is a visual scale, or line, that ranges from
0 to 1. Participants are asked to indicate on the lines their QALY valuation of
each health condition by making a mark. A participant might mark “0.6,” for
example, for the year with end-stage renal disease.
To measure the benefits in CUA, the analyst must identify all the mor-
bidity and mortality effects of the intervention. These are then weighted by
the appropriate QALY value. In practice, there are three ways to assign QALY
weights to different conditions. The first is to directly elicit QALY weights
from participants, as described earlier. The second is to use a multi-attribute
utility function, such as the EuroQol 5 Dimension (EQ-5D) or the Health
Utilities Index (HUI).14,15 These are brief survey instruments that ask one to
rate various attributes of health. For example, the EQ-5D rates five aspects
of health (mobility, self-care, usual activities, pain/discomfort, and anxiety/
depression) from 1 to 3. The responses are then scored to give a QALY value.
The weights used for the scoring were obtained from surveys of the general
population. The third way to obtain QALY values is by searching the litera-
ture or using the Internet. QALY values for many diseases and conditions can
be found. Some studies report QALY weights for only one or a few diseases
or conditions (e.g., end-stage renal disease), whereas others include tables of
QALY values for numerous health states.16-19
For example, suppose that an intervention among 1,000 persons yields
50 years of life saved. However, these years saved will be lived with some dis-
ability. Review of the literature indicates that this disability has a QALY weight
of 0.7. The benefits of the 50 years of life saved would be valued at 50 × 0.7, or
35 QALYs. Similarly, suppose that the intervention also prevents morbidity
among 500 of the participants for one year. If the QALY weight of the averted
condition is 0.9, then (1 − 0.9), or 0.1 QALYs, is saved for each of the 500
persons, yielding a benefit of 50 QALYs. The total benefits for this program
would be 35 + 50, or 85 QALYs. This summary measure thus combines both
the morbidity and the mortality effects of the intervention. An illustration of
the use of QALYs in measuring the impact of screening for diabetes is shown
in Box 4.1.20-24
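The worked example above can be restated as a short calculation, with the weights and counts taken directly from the text.

```python
# The chapter's worked example, restated as code: 50 life-years saved with a
# QALY weight of 0.7, plus a year of morbidity averted (weight 0.9) for 500
# persons. All values come directly from the text.

def qaly_benefits(life_years_saved, weight_saved_years,
                  persons_morbidity_averted, weight_averted_condition):
    mortality_qalys = life_years_saved * weight_saved_years
    # Each person-year free of the condition gains (1 - weight) QALYs.
    morbidity_qalys = persons_morbidity_averted * (1 - weight_averted_condition)
    return mortality_qalys + morbidity_qalys

total = qaly_benefits(50, 0.7, 500, 0.9)
print(round(total, 6))  # 85.0
```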
Box 4.1
COSTS OF SCREENING FOR TYPE 2 DIABETES
The seventh step is the comparison of costs and benefits, made by forming the incremental cost-effectiveness ratio (ICER), with costs in the numerator and benefits in the denominator. Table 4.4 shows the ICER for CBA, CEA, and CUA. These formulas reflect
analyses conducted in the United States from a societal perspective. Note that
the costs of treatment during gained life expectancy have not been included.
This is true for analyses conducted in the United States. These costs may be
included in studies conducted in other countries, such as the United Kingdom
and Canada.
The numerator of the ICER is the same for CBA and CEA. Averted produc-
tivity losses are not included in cost-utility analysis because they enter into
the determination of the QALY weights for the condition of interest.
In CBA, all the costs and benefits are measured in dollars, so the ratio
becomes a single number reflecting the ratio of costs to benefits. For exam-
ple, a ratio of 1.6 means that it will cost $1.60 for each $1.00 saved. Ratios
below 1 indicate cost-saving interventions. Because both the numerator
and the denominator are in currency units, the difference between benefits
and costs, or net benefits, is often reported instead of a ratio. Net benefits
greater than zero indicate a cost-saving intervention. In a CEA, benefits are
measured in a naturally occurring health unit, so the ratio will be expressed
in terms of that unit. For example, a project might cost $25,000 per life
saved. The product of a CUA is stated in terms of QALYs—it costs $x for each
QALY gained.
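The three comparison measures described in this step can be sketched as follows; the dollar and outcome figures are hypothetical.

```python
# Sketch of the three comparison measures described above. Costs and
# benefits are incremental (program minus alternative); all figures are
# hypothetical.

def cost_benefit_ratio(incremental_cost, incremental_benefit_dollars):
    """CBA as a ratio: values below 1 indicate a cost-saving intervention."""
    return incremental_cost / incremental_benefit_dollars

def net_benefits(incremental_cost, incremental_benefit_dollars):
    """CBA as a difference: values above 0 indicate a cost-saving intervention."""
    return incremental_benefit_dollars - incremental_cost

def icer(incremental_cost, incremental_effect):
    """CEA/CUA: cost per unit of effect (life saved, QALY gained)."""
    return incremental_cost / incremental_effect

print(cost_benefit_ratio(160_000, 100_000))  # 1.6 -> $1.60 per $1.00 saved
print(net_benefits(160_000, 100_000))        # -60000 -> not cost saving
print(icer(250_000, 10))                     # 25000.0 dollars per life saved
```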
The final step is the interpretation of the results. If one finds, for example,
that a program costs $27,000 per life saved, is the program worthwhile? There
are numerous ways to approach this question, involving ethics, practical con-
siderations, political realities, and economics. One could argue that, clearly,
a life is worth $27,000, and the program is worthwhile. If, however, there is
another program that costs $15,000 per life saved and the budget allows only
one to be funded, an argument can be made that the latter program is more
worthwhile than the former. There are two principal ways to interpret and use
the ICER. The first compares the cost-utility ratio internally to other compet-
ing programs; the other uses external references, comparing the ratio to an
Reviews of the economic evaluation literature have found that studies that
are labeled economic evaluations are often only cost studies, only descriptive,
or use the methods inappropriately.37-39 However, there have been guidelines
and checklists developed to assist those conducting and reviewing economic
evaluations.40 More recent reviews find evidence of increased consistency in
economic evaluations.41
Addressing Methodological Issues
There are areas of debate about the appropriate ways to conduct economic
evaluations. Analysts can use established methods inappropriately or employ
methods still being debated and developed. Four particular areas of concern
are as follows: choosing the type of economic evaluation, estimating costs,
standardization of reporting, and measuring benefits using QALYs.
Although CUA is the preferred method in the United Kingdom and else-
where, there has been controversy over its use in the United States. This
methodology was initially recommended by the Panel on Cost-Effectiveness
in Health and Medicine.6 However, CBA is preferred by many federal agencies,
including the US Environmental Protection Agency (EPA). The broader term
of cost-effectiveness is used in many US guidelines and publications to refer to
both CEA and CUA. Currently, there is no clear preference between these two
types of analysis in US federal policy.
It is difficult to measure or estimate costs accurately in many public health
settings.38 Sometimes costs are estimated from national or regional data sets,
and their local applicability may be questionable. In addition, some programs
have high fixed costs, such as equipment or personnel, making it difficult
to achieve cost-effectiveness. In studies of a new intervention there may be
research costs that would not be included in a replication of the intervention.
Including these in the economic evaluation is warranted, but the researchers
should note that replication would have lower costs and thus a lower ICER.
Averted productivity losses, an indirect cost, are often difficult to measure.
There is debate about the appropriate method—human capital or friction
costs—to use to measure these costs, and the estimates obtained from the
two methods are quite different.11 Valuing unpaid caregiver time, such as a
parent caring for a sick child at home, is difficult. But the unpaid time can be
critical in determining the cost-effectiveness of the intervention.42
There have been frequent calls for standardization of methods and report-
ing of economic evaluations. However, there is not always consensus. For
example, whether the reference case should discount future health benefits
is a matter of debate. Another area of concern is the conduct and reporting
of sensitivity analysis. Some have suggested the reporting of ICERs based on
both average and median values.43 Others have focused on the choice of sen-
sitivity analysis methods and the appropriate reporting of sensitivity analysis
results to accurately reflect the degree of uncertainty in the ICER.44
The most frequently used outcome measure in CUA, the QALY, has been
criticized for a number of reasons. First, there are issues related to the preci-
sion and consistency of measurement. Any indicator is imperfect and includes
some level of error. When ranking interventions, the QALY score used for a
particular condition helps determine the cost-utility ratio. Different QALY
values may change an intervention’s relative cost-effectiveness. There are sev-
eral QALY score instruments, such as the EQ-5D, and a growing set of cata-
logs of QALY weights available. Unfortunately, these do not always report the
same values for the same conditions and interventions. Further, the existing
instruments and catalogs are sometimes not sensitive enough to detect small
changes in QALYs.45, 46
A related issue is whether to use QALYs only for direct participants in an
intervention or for other persons, particularly caregivers,47 as well. In addi-
tion, for interventions aimed at a family or community level, it may be dif-
ficult to assess the QALYs of all participants.
There are many critiques of QALYs related to ethical issues, includ-
ing concerns that they may favor the young over the old,48,49 men over
women,50 the able-bodied over the disabled,51,52 and the rich over the
poor.53 By design, QALYs reflect societal preferences and are inherently
Cost of illness studies, also called burden of illness studies, measure the direct
and indirect costs of an illness or condition. When combined with data on
the number of people with a particular condition, they estimate the economic
costs to society of that condition. Thus they can be used to give an estimate of
the economic benefits of prevention or more efficient treatment.
The direct expenses of a condition are the health system resources expended
on that condition for treatment. This includes all expenditures, whether
incurred by the health system directly or paid by a combination of insurance
reimbursements and out-of-pocket expenditures by the consumer. In most
countries direct costs can be obtained from diagnostic codes and national sur-
vey data or national health accounts data.
Indirect costs are productivity losses due to the condition. These include
days missed from work, school, or usual activities and their associated costs.
If employed, individuals lose income or potential income, and their employ-
ers incur costs to replace them while they are absent. As noted elsewhere in
the chapter, productivity losses can be estimated with two basic methods: the
human capital and the friction cost approaches. Survey-based tools are avail-
able to estimate these losses.
Cost of illness studies often rely on national surveys that include diag-
nostic, medical expenditure, and productivity information. Persons with
the condition are identified using the diagnostic information. Their medi-
cal expenditures and their productivity losses are calculated and summed.
The medical expenditures and productivity losses of the remaining survey
respondents are also calculated and summed. The difference between the
two totals is the estimate of the cost of illness for this condition. Another
approach uses multiple data sources, rather than a single survey, and com-
bines the information from these sources into a model that is used to esti-
mate total cost. For example, data from 27 surveys and databases were linked
and used in a spreadsheet-based model to estimate the costs of asthma in
the United Kingdom and its member nations at more than £1.1 billion annu-
ally.58 Similarly, a cost of diabetes model developed by the American Diabetes
Association estimates the annual US cost of diabetes at $245 billion, with
$176 billion in direct medical expenditures and the remainder due to produc-
tivity losses.59
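The survey-based calculation described above can be sketched as follows. The records are hypothetical, and per-person averages are used here so that unequal group sizes do not distort the comparison.

```python
# Sketch of the survey-based cost-of-illness calculation described above:
# sum medical expenditures and productivity losses for respondents with and
# without the condition, then take the per-person difference. Records are
# hypothetical.

def cost_of_illness(records, condition):
    """records: list of dicts with 'conditions', 'medical', 'productivity'."""
    with_cond = [r for r in records if condition in r["conditions"]]
    without = [r for r in records if condition not in r["conditions"]]

    def total(group):
        return sum(r["medical"] + r["productivity"] for r in group)

    # Average per person, so unequal group sizes do not distort the result.
    avg_with = total(with_cond) / len(with_cond)
    avg_without = total(without) / len(without)
    return avg_with - avg_without

records = [
    {"conditions": {"asthma"}, "medical": 4200, "productivity": 1500},
    {"conditions": {"asthma"}, "medical": 3800, "productivity": 900},
    {"conditions": set(), "medical": 1200, "productivity": 300},
    {"conditions": set(), "medical": 800, "productivity": 100},
]
print(cost_of_illness(records, "asthma"))  # per-person excess cost: 4000.0
```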
Decision Analysis
Figure 4.3: Sample decision tree for Oseltamivir treatment of influenza among persons at
high risk for complications.
Source: Based on data from Postma et al.61
The first two steps help one to draw the decision tree. Step 3, “gathering infor-
mation,” can be done by using new data or by surveying the literature. For a
standard decision tree, the probability of reaching each branch and the num-
ber of persons who will enter the tree are the two essential pieces of informa-
tion. For an economic evaluation, the tree must also include the costs and
benefits of each branch.
The decision tree is analyzed by starting a number of persons at the base
of the tree. The number of persons could be derived from population data
or a hypothetical cohort. Based on the probabilities found at each branch-
ing point, a certain number of persons go to different branches. The process
stops when all of the persons have reached one of the far right-hand branches,
which represent the final outcomes. For example, suppose that 10,000 per-
sons in the Netherlands are at high risk for complications from influenza.
If Oseltamivir is prescribed to all of these persons, 3 will die from influenza
(10,000 × 0.0003). If, alternatively, these persons do not receive Oseltamivir,
5 of them will die from influenza (10,000 × 0.0005). The numbers of people
at the final outcomes of interest are then compared and a conclusion reached.
Using Figure 4.3 and comparing the number of influenza-related deaths by
treatment with Oseltamivir, one could conclude that Oseltamivir reduces the
number of deaths by 40%.61
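The tree's arithmetic can be restated as a short calculation, using the cohort size and branch probabilities from the example.

```python
# The decision-tree arithmetic above, restated: a hypothetical cohort of
# 10,000 persons is run down each branch, with the death probabilities
# given in the text (based on Postma et al.).

cohort = 10_000  # persons at high risk for influenza complications

p_death_treated = 0.0003    # probability of influenza death with Oseltamivir
p_death_untreated = 0.0005  # probability of influenza death without

deaths_treated = round(cohort * p_death_treated)      # 3
deaths_untreated = round(cohort * p_death_untreated)  # 5

relative_reduction = 1 - deaths_treated / deaths_untreated
print(f"deaths: {deaths_treated} vs {deaths_untreated}; "
      f"reduction = {relative_reduction:.0%}")  # 3 vs 5; reduction = 40%
```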
The fifth step is to conduct a sensitivity analysis. Decision analysis in med-
icine arose in part to reflect and analyze the uncertainty of treatment out-
comes. The probability assigned to each branch is the average likelihood of
that particular outcome. In practice, the actual probability may turn out to
be higher or lower. Sensitivity analysis varies the probability estimates and
reanalyzes the tree. The less the outcomes vary as the probabilities are altered,
the more robust is the result. There are several ways to conduct a sensitivity
analysis, and this technique was discussed further in the context of economic
evaluation earlier in this chapter.
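A minimal one-way sensitivity analysis on the influenza example might vary the untreated death probability and recompute the result; the range chosen here is illustrative, with the middle value reproducing the base case (40%).

```python
# A minimal one-way sensitivity analysis on the influenza tree: vary the
# untreated death probability across an illustrative range and recompute
# the relative reduction. The middle value reproduces the base case.

p_death_treated = 0.0003

results = {p: 1 - p_death_treated / p for p in (0.0004, 0.0005, 0.0006)}
for p_untreated, reduction in results.items():
    print(f"p_untreated={p_untreated}: reduction={reduction:.0%}")
```

The less these recomputed results vary, the more robust the base-case conclusion.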
Decision analysis is especially useful for a clinical or policy decision under
the following conditions:
Meta-Analysis
Pooled Analysis
Pooled analysis refers to the analysis of data from multiple studies at the level
of the individual participant. Meta-analysis uses aggregate data from multiple
studies. The goals of a pooled analysis are the same as those of a meta-analysis, that is,
obtaining a quantitative estimate of effect. This type of analysis is less com-
mon than meta-analysis and has received less formal treatment in the litera-
ture. Nonetheless, it has proved informative in characterizing dose-response
relationships for certain environmental risks that may be etiologically related
to a variety of chronic diseases. For example, pooled analyses have been pub-
lished on radiation risks for nuclear workers65; the relationship between alco-
hol, smoking, and head and neck cancer66; and whether vitamin D can prevent
fractures.67
Recent efforts by journals and granting agencies encouraging and requir-
ing the posting of study data have made pooled studies more feasible.68,69
Methodological and software advances have also spurred an increase in these
types of studies. Pooled studies using shared data can be particularly useful
for studying emerging infections, such as the Zika virus.70 Although pooled
analyses can simply pool the individual data and estimate effect size, they usu-
ally either weight the data or include variables indicating study characteristics
and use a fixed or random effects modeling strategy. The increased availabil-
ity of individual data due to journal reporting requirements and electronic
medical records means that pooled analysis will be used more in public health
analysis.71
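The text notes that pooled analyses usually weight the data; one standard scheme, inverse-variance weighting with a fixed-effect combination, can be sketched as follows. The study estimates and standard errors are hypothetical.

```python
# Sketch of one common weighting strategy mentioned above: an
# inverse-variance, fixed-effect combination of study estimates. The
# estimates and standard errors are hypothetical.

def fixed_effect_pool(estimates, std_errors):
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies of the same effect (e.g., a log relative risk):
est, se = fixed_effect_pool([0.20, 0.35, 0.25], [0.10, 0.20, 0.10])
print(round(est, 3), round(se, 3))  # 0.239 0.067
```

More precise studies (smaller standard errors) receive proportionally more weight in the combined estimate.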
Risk Assessment
Another assessment tool is the health impact assessment (HIA), which measures the impact of a nonhealth
sector intervention on the health of a community.75-77 For example, zoning
changes to require sidewalks can increase physical activity, thus improving
the health of the community. The number of existing HIAs has been growing
rapidly throughout the world, and there have been calls for more use of this
methodology.78,79 For example, numerous HIAs have investigated the impact
of transportation policies designed to encourage more active transportation,
such as cycling.80 In the United States this method can be viewed as an exten-
sion of the environmental impact statement, an assessment of the intended
and unintended consequences of new development on the environment
required for some projects.
Dannenberg and colleagues78 reviewed 27 HIAs completed in the United
States from 1999 to 2007. Topics studied ranged from policies about living
wages and after-school programs to projects about power plants and public
transit. Within this group of 27 HIAs, an excellent illustration is the assess-
ment of a Los Angeles living wage ordinance.81 Researchers used estimates
of the effects of health insurance and income on mortality to project and
compare potential mortality reductions attributable to wage increases and
changes in health insurance status among workers covered by the Los Angeles
City living wage ordinance.81 Estimates demonstrated that the health insur-
ance provisions of the ordinance would have a much larger health benefit than
the wage increases, thus providing valuable information for policy makers
who may consider adopting living wage ordinances in other jurisdictions or
modifying existing ordinances.
There are five steps to an HIA: screening, scoping, appraisal, reporting,
and monitoring.77 The screening step is used to determine whether the
The economic evaluation literature has grown exponentially over the years.
A recent review found 2,844 economic evaluations in health published over a
28-month period.4 A search of PubMed identifies 3,771 economic
evaluations of public health topics in the past 5 years, compared with 1,761
public health economic evaluations in the prior 5 years. This increase in
publication can be a boon to public health practitioners because it is more likely
that interventions being considered for adoption have already been assessed
for cost-effectiveness. The increase in publication has also been accompanied
by the development of more guidelines for economic evaluations and several
specialized databases focusing on economic evaluation that follow standard-
ized abstracting guidelines.
There are challenges in using economic evaluations in policy. Economic
evaluations, though used extensively in other countries, particularly those
with national health plans, have a checkered history within the United
States.37,84,85 A review of economic evaluations in public health areas in the
United States, such as tobacco control, injury prevention, and immunizations,
found inconsistency in the conduct of economic evaluations both across and
within topical areas. Further, the results of the economic evaluations were
influential in decision making in some public health topic areas but not oth-
ers. Clearly, there is room for improvement.86
Another issue is adapting national or state standards for local needs.
Economic evaluations usually take a societal perspective, defined as at least
at the state, province, or regional level but more often at the national level.
To apply the results of these studies, the practitioner has to consider whether
national costs should be adjusted to local costs and whether there are specific
state or local characteristics that would influence implementation of results
from national data. For example, suppose that a policy maker has found
an economic evaluation that supports the use of mass media campaigns to
increase physical activity levels. If the city or county in which the policy maker
works bans billboard advertising, then the economic evaluation results would
have to be adjusted for this restriction.
Finally, there is the matter of training and accessibility. For many in public
health, the key question may be, “How does a practitioner learn about or make
appropriate use of these tools?” To make better use of economic evaluations
and related methods, enhanced training is needed both during graduate edu-
cation and through continuing education of public health professionals work-
ing in community settings.
Despite its limitations, economic evaluation can be a useful tool for man-
agers and policy makers. When doing an economic evaluation, one must
specify the intervention and its intended audience; identify the perspective,
or scope, of the investigation; list and identify all the costs; and list and iden-
tify all the benefits. Then, after discounting to account for differential timing,
the costs and benefits are brought together in an ICER. Finally, the stabil-
ity of the ICER is assessed by varying the assumptions of the analysis in a
sensitivity analysis. All of these steps may not provide a definitive answer.
Economic evaluation is ultimately a decision aid, not a decision rule. But the
clarification provided by the analysis and the insight into the trade-offs that
must be made between costs and health are critical aids to managers, plan-
ners, and decision makers.87
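The discounting step mentioned above can be sketched as a present-value calculation; the 3% rate and the cost and benefit streams are illustrative, not from the chapter.

```python
# Sketch of the discounting step noted above: future costs and benefits are
# converted to present value before the ratio is formed. The 3% rate and
# the cost and QALY streams are illustrative, not from the chapter.

def present_value(stream, rate=0.03):
    """Discount a year-indexed stream (year 0 = today) at the given rate."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(stream))

program_costs = [50_000, 20_000, 20_000]  # spending in years 0, 1, 2
qalys_gained = [0, 1.5, 1.5]              # health gains arrive in later years

pv_costs = present_value(program_costs)
pv_qalys = present_value(qalys_gained)
cost_per_qaly = pv_costs / pv_qalys
print(round(pv_costs), round(pv_qalys, 2), round(cost_per_qaly))
```

Because health gains typically arrive later than costs, discounting raises the cost per QALY relative to an undiscounted calculation.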
In general, prevention programs are a good value for the money invested.88
A few interventions are cost-saving, such as the routine childhood immuniza-
tion schedule in the United States.89 Most prevention programs will not be
cost-saving, but they will provide a good return on investment. Of course,
not all prevention programs are cost-effective, but there are many prevention
programs that provide increased health outcomes at a lower cost than medi-
cal interventions. In the United States, hundreds of thousands of lives could
be saved if smokers were advised to quit, those at risk for heart disease took
a low-dose aspirin, people received flu shots, and people were screened for
colorectal, breast, and cervical cancer.90 Most of these lives would be saved at
a lower cost per life saved than the comparable medical intervention required
to treat the associated diseases.
SUMMARY
This chapter has presented economic evaluation, a useful tool for developing
and practicing evidence-based public health. Economic evaluation quantifies
the costs and benefits of an intervention and provides an assessment of its
effectiveness (i.e., “Are the costs reasonable to obtain the likely benefits?”).
Cost of illness studies, decision analysis, meta-analysis, pooled analysis, risk
assessment, and health impact assessment are all tools to help organize and
assess complex topics.
All of these techniques are relatively sophisticated and are generally car-
ried out by persons with specialized training (e.g., an economist would con-
duct a CUA). The aim of this chapter has been to explain these techniques
to public health practitioners so that they can be educated consumers of
these methods.
KEY CHAPTER POINTS
• Economic evaluations and related techniques can provide reliable tools for
decision making among public health professionals and policy makers.
• These techniques are relatively sophisticated, but their underlying logic
and structure can be understood.
• Economic evaluation is the comparison of costs and benefits to determine
the most efficient allocation of scarce resources.
• Several challenges (inconsistent quality, methodologic issues, difficulties
in implementation) should be kept in mind when considering the use of
economic evaluations.
• Cost of illness studies document the direct and indirect burden of disease
on society.
• Decision analysis provides a visual tool, the tree diagram, to display com-
plex interventions with multiple outcomes and different probabilities of
occurrence. The tree can be calculated to give a score for each of the main
outcomes.
• Meta-analysis and pooled analysis are both methods to synthesize the
results of several quantitative studies to give a summary measure of
effectiveness.
• Risk assessment is a tool to assess complicated pathways of exposure and
risk, such as in environmental exposures.
Selected Websites
Association of Public Health Observatories, The HIA Gateway <http://www.apho.
org.uk/default.aspx?QN=P_HIA>. This UK-based website provides resources for
health impact assessments, including sample causal diagrams and a searchable
catalog of HIAs.
Cochrane Collaboration <http://www.cochrane.org>. The Cochrane Collaboration is an
international organization that aims to help people make well-informed decisions
about health care by preparing, maintaining, and promoting the accessibility of
systematic reviews of the effects of health care interventions. The Collaboration
conducts its own systematic reviews, abstracts the systematic reviews of others,
and provides links to complementary databases.
Chronic Disease Cost Calculator Version 2, Centers for Disease Control and Prevention
<http://www.cdc.gov/chronicdisease/calculator/index.html>. This tool provides
state-level estimates of the cost of several chronic diseases in the United States.
Cost is measured as medical expenditures and absenteeism costs. Diseases
covered are arthritis, asthma, cancer, cardiovascular diseases, depression, and
diabetes.
Cost-Effectiveness Analysis Registry, Center for the Evaluation of Value and Risk in
Health, Institute for Clinical Research and Health Policy Studies, Tufts Medical
Center <http://healtheconomics.tuftsmedicalcenter.org/cear4/Home.aspx>.
Originally based on the articles by Tengs et al.,19,91 this website includes a detailed
database of cost-effectiveness analyses, cost-effectiveness ratios, and QALY
weights.
Evaluation, National Association of Chronic Disease Directors <http://www.chronicdisease.org/?page=Evaluation>. The evaluation page of this US website includes
resources on return-on-investment analysis. The accompanying guide presents
the different forms of economic evaluation under the umbrella of return on
investment.
Guide to Clinical Preventive Services, Third Edition <http://www.ahrq.gov/clinic/uspstfix.htm>. The US Preventive Services Task Force developed and updates this
guide, intended for primary care clinicians, other allied health professionals, and
students. It provides recommendations for clinical preventive interventions—
screening tests, counseling interventions, immunizations, and chemoprophylac-
tic regimens—for more than 80 target conditions. Systematic reviews form the
basis for the recommendations. The Guide is provided through the website of the
Agency for Healthcare Research and Quality.
Guide to Community Preventive Services <http://www.thecommunityguide.org>. Under
the auspices of the US Public Health Service, the Task Force on Community
Preventive Services developed the Guide to Community Preventive Services.
The Guide uses systematic reviews to summarize what is known about the
effectiveness of population-based interventions for prevention and control in
18 topical areas. Interventions that are rated effective are then evaluated for
cost-effectiveness.
Health Impact Assessment, Centers for Disease Control Healthy Places <http://www.
cdc.gov/healthyplaces/hia.htm>. This website provides definitions, examples,
and links to other catalogs and archives of HIAs.
Health Impact Assessment, National Association of County & City Health Officials
<http://www.naccho.org/programs/community-health/healthy-community-design/health-impact-assessment/>. Includes resources for local health departments to assist them in the use of HIA.
Health Impact Project, The Pew Charitable Trusts <http://www.pewtrusts.org/en/
projects/health-impact-project/health-impact-assessment>. Includes an over-
view, a description of the HIA process, links to toolkits and other resources, and
multiple case studies.
World Health Organization Health Impact Assessment <http://www.who.int/hia/en/>.
The World Health Organization provides resources, examples, toolkits, and a
catalog of worldwide HIAs.
REFERENCES
1. Marshall DA, Hux M. Design and analysis issues for economic analysis alongside
clinical trials. Med Care. Jul 2009;47(7 Suppl 1):S14–S20.
2. Ramsey S, Willke R, Briggs A, et al. Good research practices for cost-effectiveness
analysis alongside clinical trials: the ISPOR RCT-CEA Task Force report. Value
Health. Sep-Oct 2005;8(5):521–533.
( 110 ) Evidence-Based Public Health
23. Hoerger TJ, Harris R, Hicks KA, Donahue K, Sorensen S, Engelgau M. Screening
for type 2 diabetes mellitus: a cost-effectiveness analysis. Ann Intern Med. May 4
2004;140(9):689–699.
24. Siu AL. Screening for Abnormal Blood Glucose and Type 2 Diabetes mellitus: U.S.
Preventive Services Task Force recommendation statement. Ann Intern Med. Dec
1 2015;163(11):861–868.
25. Schad M, John J. Towards a social discount rate for the economic evaluation of
health technologies in Germany: an exploratory analysis. Eur J Health Econ. Apr
2010;13(2):127–144.
26. Baio G, Dawid AP. Probabilistic sensitivity analysis in health economics. Stat
Methods Med Res. Dec 2011;24(6):615–634.
27. Neumann PJ, Cohen JT, Weinstein MC. Updating cost-effectiveness: the curious resilience of the $50,000-per-QALY threshold. N Engl J Med. Aug 28 2014;371(9):796–797.
28. Eddy D. Breast Cancer Screening for Medicare Beneficiaries. Washington, DC: Office
of Technology Assessment; 1987.
29. Braithwaite RS, Meltzer DO, King JT, Jr., Leslie D, Roberts MS. What does the
value of modern medicine say about the $50,000 per quality-adjusted life-year
decision rule? Med Care. Apr 2008;46(4):349–356.
30. Garber AM, Phelps CE. Economic foundations of cost-effectiveness analysis. J
Health Econ. Feb 1997;16(1):1–31.
31. Murray CJ, Evans DB, Acharya A, Baltussen RM. Development of WHO guidelines
on generalized cost-effectiveness analysis. Health Econ. Apr 2000;9(3):235–251.
32. World Health Organization. Macroeconomics and Health: Investing in Health for
Economic Development. Geneva: World Health Organization; 2001.
33. McCabe C, Claxton K, Culyer AJ. The NICE cost-effectiveness threshold: what it is
and what that means. Pharmacoeconomics. 2008;26(9):733–744.
34. Gillick MR. Medicare coverage for technological innovations: time for new crite-
ria? N Engl J Med. May 20 2004;350(21):2199–2203.
35. Owens DK. Interpretation of cost-effectiveness analyses. J Gen Intern Med. Oct
1998;13(10):716–717.
36. Weinstein MC. How much are Americans willing to pay for a quality-adjusted life
year? Med Care. Apr 2008;46(4):343–345.
37. Neumann P. Using Cost-Effectiveness Analysis to Improve Health Care. New York,
NY: Oxford University Press; 2005.
38. Weatherly H, Drummond M, Claxton K, et al. Methods for assessing the cost-
effectiveness of public health interventions: key challenges and recommenda-
tions. Health Policy. Dec 2009;93(2-3):85–92.
39. Zarnke KB, Levine MA, O’Brien BJ. Cost-benefit analyses in the health-care literature: don’t judge a study by its label. J Clin Epidemiol. Jul 1997;50(7):813–822.
40. Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of eco-
nomic submissions to the BMJ. The BMJ Economic Evaluation Working Party.
BMJ. Aug 3 1996;313(7052):275–283.
41. Thiboonboon K, Santatiwongchai B, Chantarastapornchit V, Rattanavipapong W,
Teerawattananon Y. A systematic review of economic evaluation methodologies
between resource-limited and resource-rich countries: a case of rotavirus vac-
cines. Appl Health Econ Health Policy. Dec 2016;14(6):659–672.
42. Goodrich K, Kaambwa B, Al-Janabi H. The inclusion of informal care in applied
economic evaluation: a review. Value Health. Sep-Oct 2012;15(6):975–981.
life-style related risk factors alcohol, BMI, and smoking: a quantitative health
impact assessment. BMC Public Health. 2016;16:734.
83. Lhachimi SK, Nusselder WJ, Smit HA, et al. DYNAMO-HIA: a dynamic modeling
tool for generic health impact assessments. PLoS One. 2012;7(5):e33317.
84. Azimi NA, Welch HG. The effectiveness of cost-effectiveness analysis in contain-
ing costs. J Gen Intern Med. Oct 1998;13(10):664–669.
85. McDaid D, Needle J. What use has been made of economic evaluation in public
health? A systematic review of the literature. In: Dawson S, Morris S, eds. Future
Public Health: Burdens, Challenges and Approaches. Basingstoke, UK: Palgrave
Macmillan; 2009.
86. Grosse SD, Teutsch SM, Haddix AC. Lessons from cost-effectiveness research for
United States public health policy. Annu Rev Public Health. 2007;28:365–391.
87. Rabarison KM, Bish CL, Massoudi MS, Giles WH. Economic evaluation enhances
public health decision making. Front Public Health. 2015;3:164.
88. Woolf SH. A closer look at the economic argument for disease prevention. JAMA.
Feb 4 2009;301(5):536–538.
89. Zhou F, Santoli J, Messonnier ML, et al. Economic evaluation of the 7-vaccine
routine childhood immunization schedule in the United States, 2001. Arch Pediatr
Adolesc Med. Dec 2005;159(12):1136–1144.
90. National Commission on Prevention Priorities. Preventive Care: A National
Profile on Use, Disparities, and Health Benefits. Washington, DC: Partnership for
Prevention; 2007.
91. Tengs TO, Adams ME, Pliskin JS, et al. Five-hundred life-saving interventions and
their cost-effectiveness. Risk Analysis. 1995;15(3):369–390.
C H A P T E R 5
w
Conducting a Community Assessment
The uncreative mind can spot wrong answers. It takes a creative mind to spot wrong
questions.
A. Jay
are affected by these decisions. In reality, some of these partners may join at a
later stage in the evidence-based process, bringing new perspectives or ques-
tions that warrant additional assessments.
This chapter is divided into several sections. The first provides a back-
ground on community assessments. The next section describes why a
community assessment is critical. The third section discusses a range of
partnership models that might be useful in conducting community assess-
ments. The next sections outline who, what, and how to conduct assess-
ments. The final section describes how to disseminate the community
assessment findings.
BACKGROUND
Box 5.1
REDUCING DISPARITIES IN DIABETES AMONG AFRICAN-
AMERICAN AND LATINO RESIDENTS OF DETROIT: THE
ESSENTIAL ROLE OF COMMUNITY PLANNING
FOCUS GROUPS
What to assess depends very much on the knowledge to be gained and from
whom it will be collected. In terms of the “who” question, it is important to
Characteristic: Description

1. Holistic and comprehensive: Allows the coalition to address issues that it deems as priorities; well illustrated in the Ottawa Charter for Health Promotion.
2. Flexible and responsive: Coalitions address emerging issues and modify their strategies to fit new community needs.
3. Build a sense of community: Members frequently report that they value and receive professional and personal support for their participation in the social network of the coalition.
4. Build and enhance resident engagement in community life: Provides a structure for renewed civic engagement; the coalition becomes a forum where multiple sectors can engage with each other.
5. Provide a vehicle for community empowerment: As community coalitions solve local problems, they develop social capital, allowing residents to have an impact on multiple issues.
6. Allow diversity to be valued and celebrated: As communities become increasingly diverse, coalitions provide a vehicle for bringing together diverse groups to solve common problems.
7. Incubators for innovative solutions to large problems: Problem solving occurs not only at local levels but also at regional and national levels; local leaders can become national leaders.
population health and well-being, and in doing so include the assets in the
community—not just the problems (Figure 5.1).16, 18–20
Ecological frameworks (also discussed in chapter 10) suggest that individ-
ual, social, and contextual factors influence health.21 Several variations of an
ecological framework have been proposed.22–25 Based on work conducted by
McLeroy and colleagues22 and Dahlgren and Whitehead,26 it is useful to con-
sider assessment of factors at five levels:
[Figure 5.1 (graphic not shown): nested levels of influence on health, based on McLeroy and colleagues and on Dahlgren and Whitehead. At the center are innate individual traits (age, sex, race, and biological factors over the life span; the biology of disease), surrounded in turn by individual behavior; social, family, and community networks; and broad conditions and policies at the national, state, and local levels. These broad conditions may include: psychosocial factors; employment status and occupational factors (income, education, occupation); the natural and built environments; public health services; and health care services.]
COLLECTING DATA
There are a number of different ways to collect data on each of the indicators
listed previously. Too often, community assessment data are collected based
on the skills of the individuals collecting the data. If someone knows how to
collect survey data, those are the data collected. As noted earlier, for any com-
munity assessment process to be effective, it is essential to determine the
questions that need answering and from whom data will be collected. Methods
should be used that are best suited to answer the questions—obtaining assis-
tance as needed. Some information may be found using existing data, whereas
other types of information require new data collection. Data are often clas-
sified as either quantitative or qualitative. Quantitative data are expressed
in numbers or statistics—they answer the “what” question. Qualitative data
are expressed in words or pictures and help to explain quantitative data by
answering the “why” question. There are different types and different methods
of collecting each. More often than not, it is useful to collect multiple types of
Table 5.2.

Level: Individual (characteristics of the individual such as knowledge, attitudes, skills, and a person’s developmental history)
Indicators:
• Leading causes of death
• Leading causes of hospitalization
• Behavioral risk and protective factors
• Community member skills and talents

Level: Interpersonal (formal and informal social networks and social support systems, including family and friends)
Indicators:
• Social connectedness
• Group affiliation (clubs, associations)
• Faith communities, churches, and religious organizations
• Cultural and community pride

Level: Organizational (social institutions, organizational characteristics, and rules or regulations for operation)
Indicators:
• Number of newspaper, local radio or TV, and media
• Number of public art projects or access to art exhibits and museums
• Presence of food pantries
• Number and variety of businesses
• Number of faith-based organizations
• Number of civic organizations
• Supportive services resource list
• Public transportation systems
• Number of social services (e.g., food assistance, child care providers, senior centers, housing and shelter assistance)
• Chamber of Commerce list of businesses
• Number and variety of medical care services: clinics, programs
• Number of law enforcement services
• Number of nonprofit organizations and types of services performed (e.g., the United Way, Planned Parenthood)
• Number of vocational and higher education institutions and fields of study available to students: community college and university
• Library

Level: Community and social (relationships between organizations, economic forces, the physical environment, and cultural variables that shape behavior)
Indicators:
• Public school system enrollment numbers
• Graduation and drop-out rates
• Test scores
• Community history
• Community values
• Opportunities for structured and unstructured involvement in local decision making
• Recreational opportunities: green spaces, parks, waterways, gyms, and biking and walking trails
• Crosswalks, curb cuts, traffic calming devices
• Housing cost, availability
data because each has certain advantages and disadvantages. Bringing differ-
ent types of data together is often called triangulation.31
Quantitative Data
National, State, and Local Data From Surveillance Systems
These data are collected specifically for a particular community and may
include information on demographics, social indicators, knowledge, behavior,
attitudes, morbidity, and so forth. These data may be collected through phone,
Community Audits
Qualitative Data
Interviews
longer to collect the data. The skills of the interviewer to establish rap-
port with individuals will also have a greater impact in collecting qualitative
compared with quantitative data.
Print media also provide a source of qualitative data. For example, newspapers
or newsletters may provide insight into the most salient issues within a com-
munity. In addition, more recent technological advances allow for review of
blogs or LISTSERVs as forms of important qualitative data (e.g., the types of
support that a breast cancer LISTSERV provides or concerns about medical
care within a community). Some have used written diaries as a way to track
and log community events or individual actions.
Observation
Photovoice
Photovoice is a type of qualitative data that uses still or video images to docu-
ment conditions in a community. These images may be taken by community
members, community- based organization representatives, or profession-
als. After images are taken they can be used to generate dialogue about the
images.37 This type of data can be very useful in capturing the salient images
around certain community topics from the community perspective. As they
say, a picture is worth a thousand words. However, it may be difficult to know
what the circumstances surrounding the picture are, when it was taken, or
why it was taken. What an image means is in the “eye of the beholder.”
ANALYSIS OF DATA
After data have been collected, they need to be analyzed and summarized.
Analysis of both quantitative and qualitative data requires substantial training far
beyond the scope of this book. Chapter 7 will provide an overview of some of
the most important analysis considerations when working with quantitative
data. Often, in a community assessment the analysis of most interest involves
patterns by person, place, and time. Below is an overview of some of the
considerations in analyzing qualitative data.
The analysis of qualitative data, whether it is analysis of print media, field
notes, photovoice, listening sessions, or interviews, is an iterative process of
sorting and synthesizing to develop a set of common concepts or themes that
occur in the data in order to discern patterns. The process of analysis often
begins during data collection. Similarly, as one collects and analyzes the data
there may be interpretations or explanations for patterns seen or linkages
among different elements of the data that begin to appear. It is useful to track
these as they occur.
There are many different strategies for conducting qualitative data analy-
sis. As with quantitative data, before any analysis it is important to ensure
that the data are properly prepared. For example, when analyzing interviews
it is important that transcripts (verbatim notes often typed from an audio
recording) are accurate and complete. The next step in analysis of qualita-
tive data is the development of a set of codes or categories within which
to sort different segments of the data. These codes may be predetermined
by the questions driving the inquiry or may be developed in the process of
reviewing the data. When the codes are established, the data are reviewed
and sorted into the codes or categories, with new codes or categories devel-
oped for data that do not fit into established coding schemes. The data
within each code are reviewed to ensure that the assignment is accurate and
that any subcategories are illuminated. These codes or categories are then
reviewed to determine general themes or findings. There are some methods
that allow comparison across various groups (e.g., development of matrices
that compare findings among men and women or health practitioners and
community members). For large data sets there are software packages that
can automate parts of this process of data analysis and allow for these types
of comparisons (e.g., NVivo, ATLAS.ti). Those interested in further infor-
mation on qualitative analysis should see additional sources.31,40 Whenever
possible, before finalizing data analysis it is helpful to conduct “member
checking.” Member checking is a process of going back to the individuals
from whom the data were collected and verifying that the themes and con-
cepts derived resonate with participants.13
what is surprising and what is expected, what the data represent, and what
seems still to be missing. To move toward action, the partnership needs
confidence that the data in hand, although never all the data that could be
gathered, are sufficient to act on. From there, a full understanding of the data
is important in prioritizing the issues to work on and developing action plans.
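The coding workflow described above (predetermined codes, sorting of transcript segments, new codes for unmatched data, and comparison matrices across groups) can be sketched in a few lines. This is only a hypothetical bookkeeping sketch, not a substitute for interpretive analysis or for packages such as NVivo or ATLAS.ti; all codes, keywords, and transcript segments below are invented for illustration.

```python
# Sketch of sorting qualitative transcript segments into codes and
# building a code-by-group comparison matrix. Hypothetical data only.
from collections import defaultdict

# Predetermined codes, each with keywords that flag a matching segment.
CODES = {
    "access_barriers": ["transportation", "cost", "insurance", "distance"],
    "social_support": ["family", "church", "neighbor", "friends"],
    "trust_in_care": ["doctor", "trust", "listened"],
}

# Transcript segments, each tagged with the speaker's group.
segments = [
    {"group": "community member",
     "text": "The bus line was cut, so transportation to the clinic is hard."},
    {"group": "community member",
     "text": "My church group checks in on me every week."},
    {"group": "health practitioner",
     "text": "Patients say the cost of insurance keeps them away."},
    {"group": "health practitioner",
     "text": "People open up once they feel the doctor listened."},
]

def code_segment(text):
    """Return the set of codes whose keywords appear in a segment."""
    lower = text.lower()
    return {code for code, keywords in CODES.items()
            if any(kw in lower for kw in keywords)}

# Comparison matrix: code counts per group (e.g., practitioners vs. members).
matrix = defaultdict(lambda: defaultdict(int))
uncoded = []  # candidates for new codes, developed while reviewing the data
for seg in segments:
    codes = code_segment(seg["text"])
    if not codes:
        uncoded.append(seg)
    for code in codes:
        matrix[code][seg["group"]] += 1

for code, by_group in sorted(matrix.items()):
    print(code, dict(by_group))
```

In practice the keyword matching would be replaced by a human coder's judgment; the value of software is in the sorting, counting, and cross-group comparison, which is what this sketch automates.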
SUMMARY
KEY CHAPTER POINTS
Selected Websites
Centers for Disease Control and Prevention (CDC) Social Determinants of Health
Maps <http://www.cdc.gov/dhdsp/maps/social_determinants_maps.htm>. The
social determinants of health maps available at the CDC website can be used in
conjunction with other data to identify interventions that might positively affect
the health of your community of interest.
Centers for Disease Control and Prevention Community Health Improvement
Navigator <http://www.cdc.gov/chinav/tools/assess.html>. The Community
Health Improvement Navigator provides a series of tools for creating successful
community health improvement plans and interventions, including community
assessment. The website includes links to lists of indicators and to resources for
identifying community assets.
Community Commons <http://www.communitycommons.org>. Community Commons
provides data, maps, and stories about key community issues related to community
health assessment, including economics, education, environment, equity,
food, and health.
County Health Rankings & Roadmaps: Building a Culture of Health, County by County
<http://www.countyhealthrankings.org>. Sponsored by the Robert Wood Johnson
Foundation, this website provides data and maps on key health factors and
health outcomes, as well as policies and programs that communities might want
to consider adopting.
University of California, Los Angeles Center for Health Policy Research, Health DATA
Program <http://healthpolicy.ucla.edu/programs/health-data/Pages/overview.
aspx>. The Health DATA (Data. Advocacy. Training. Assistance.) Program exists to
make data understandable to a wide range of health advocates through train-
ings, workshops, and technical assistance. The site includes instructional videos,
Health DATA publications, and links to free online resources in areas such as
community-based participatory research, community assessment, data collec-
tion (e.g., asset mapping, focus groups, surveys, key informant interviews), and
data analysis and presentation.
REFERENCES
BACKGROUND
Developing a concise and useful issue statement can be informed by the pro-
cesses of community assessment and strategic planning. In a community
assessment, issues emerge and are defined in the process of determining the
health needs or desires of a population. In strategic planning, the identifi-
cation of key strategic issues helps define the priorities and direction for a
group or organization. In addition, issue definition is closely linked with the
objective-setting steps involved in developing an action plan for a program
(chapter 10) and also forms part of the foundation of an effective evaluation
strategy (chapter 11).
• What was the basis for the initial statement of the issue? This may include
the social, political, or health circumstances at the time the issue was origi-
nated, and how it was framed. This provides the context for the issue.
• Who was the originator of the concern? The issue may have developed
internally within a community or organization or may be set as an issue by
a policy maker or funder.
• Should or could the issue be stated in the epidemiologic context of per-
son (How many people are affected and who are they?), place (What is the
geographic distribution of the issue?), and time (How long has this issue
been a problem? What are anticipated changes over time?)?8
• What is and what should be occurring?
• Who is affected and how much?
• What could happen if the problem is NOT addressed?
• Is there a consensus among stakeholders that the problem is properly
stated?
This section will begin to address these and other questions that one may
encounter when developing an initial issue statement. A sound issue
statement may draw on multiple disciplines, including biostatistics, epi-
demiology, health communication, health economics, health education,
management, medicine, planning, and policy analysis. An issue statement
should be stated as a quantifiable question (or series of questions) leading
to an analysis of root causes or likely intervention approaches. It should
also be unbiased in its anticipated course of action. Figure 6.2 describes the
progression of an issue statement along with some of the questions that
are crucial to answer. One question along the way is, “Do we need more
information?” The answer to that question is nearly always “yes,” so the
challenge becomes where to find the most essential information efficiently.
It is also essential to remember that the initial issue statement is often the
“tip of the iceberg” and that getting to the actual causes of and solutions
to the problem takes considerable time and effort. Causal frameworks (also
known as analytic frameworks; see chapter 9) are often useful in mapping
out an issue.
Figure 6.2: A sequential framework for understanding the key steps in developing an issue
statement. Sample questions to consider at each step:

What do the data show?
• Are there time trends?
• Are there high-risk populations?
• Can the data be oriented by person, place, time?
• Is public health action warranted?

What might explain the data?
• Why is the problem not being addressed?
• Are there effective (and cost-effective) interventions?
• What happens if we do nothing?
• Do we need more information?

Which options are under active consideration?
• How does one gather information from stakeholders?
• What resources are needed for various options?
• What resources are available for various options?
• What outcomes do we seek to achieve?
Issue Components
[Figure (graphic not shown): trend in rates per 100,000 population, 1970–2020, for Austria, Finland, Italy, Lithuania, and the Russian Federation.]

[Figure (graphic not shown): male and female smoking rates (%) in China, Japan, Thailand, and Vietnam.]
with the background statement. For example, focus group data may be avail-
able that demonstrate a particular attitude or belief toward a public health
issue. The concepts presented earlier in this chapter related to community
assessment are often useful in assembling background data. In all cases, it
is important to specify the source of the data so that the presentation of the
problem is credible.
In considering the questions about the program or policy, the search for
effective intervention options (our Type 2 evidence) begins. You may want
to undertake a strategic planning process to generate a set of potentially
effective program options that could address the issue. The term program is
defined broadly to encompass any organized public health action, including
direct service interventions, community mobilization efforts, policy develop-
ment and implementation, outbreak investigations, health communication
campaigns, health promotion programs, and applied research initiatives.8
The programmatic issue being considered may be best presented as a series of
questions that a public health team will attempt to answer. It may be stated in
the context of an intervention program, a health policy, cost-effectiveness, or
managerial challenge. For an intervention, you might ask, “Are there effective
intervention programs in the literature to address risk factor X among popu-
lation Y?” A policy question would consider, “Can you document the positive
effects of a health policy that was enacted and enforced in State X?” In the
area of cost-effectiveness, it might be, “What is the cost of intervention Z
per year of life saved?”11 And a managerial question would ask, “What are the
resources needed to allow us to effectively initiate a program to address issue
X?” The questions that ascertain the “how” of program or policy implementation
begin to address Type 3 evidence, as described in chapter 1.
As the issue statement develops, it is often useful to consider potential solu-
tions. However, several caveats are warranted at this early phase. First, solu-
tions generated at this phase may or may not be evidence based because all the
information may not be in hand. Also, the program ultimately implemented
is likely to differ from the potential solutions discussed at this stage. Finally,
solutions noted in one population or region may or may not be generalizable
to other populations (see discussion of external validity in chapter 3). There
is a natural tendency to jump too quickly to solutions before the background
and programmatic focus of a particular issue are well defined. In Table 6.3,
potential solutions are presented that are largely developed from the efforts
of the Guide to Community Preventive Services, an evidence-based systematic
review described in chapter 8.12
When framing potential solutions of an issue statement, it is useful to
consider whether a “high-risk” or population strategy is warranted. The
high-risk strategy focuses on individuals who are most at risk for a particu-
lar disease or risk factor.13,14 Focusing an early detection program on lower
income individuals who have the least access to screening, for example, is a
exclusive. The year 2020 health goals for the United States, for example, call
for elimination of health disparities (a high-risk approach) and also target
overall improvements in social and physical environments to promote health
for all (a population approach).15 Data and available resources can help in
determining whether a population approach, a high-risk strategy, or both are
warranted.
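The trade-off between a high-risk strategy and a population strategy can be made concrete with a toy calculation. Every number below is hypothetical and chosen only to illustrate the comparison; real planning would use surveillance data for the community in question.

```python
# Toy comparison of high-risk vs. population prevention strategies.
# All inputs are hypothetical illustration values.
population = 100_000
high_risk_share = 0.05             # 5% of the population is "high risk"
risk_high, risk_rest = 0.20, 0.05  # baseline risk of the outcome in each group

n_high = population * high_risk_share
n_rest = population - n_high
baseline_cases = n_high * risk_high + n_rest * risk_rest

# High-risk strategy: a 30% risk reduction, but only in the high-risk group.
averted_high_risk = n_high * risk_high * 0.30

# Population strategy: a smaller 10% risk reduction, applied to everyone.
averted_population = baseline_cases * 0.10

print(f"baseline cases: {baseline_cases:.0f}")
print(f"averted, high-risk strategy:  {averted_high_risk:.0f}")
print(f"averted, population strategy: {averted_population:.0f}")
```

Under these invented numbers the population strategy averts more cases even though its per-person effect is smaller, because most cases arise in the much larger lower-risk group; data like these help determine which approach, or which combination, is warranted.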
Although it may seem premature to consider potential outcomes before
an intervention approach is decided on, an initial scan of outcomes is often
valuable at this stage. It is especially important to consider the answer to
the questions, “What outcome do we want to achieve in addressing this
issue? What would a good or acceptable outcome look like?” This process
allows you to consider potential short- and longer-term outcomes. It also
helps shape the choice of possible solutions and determines the level of
resources that will be required to address the issue. For many US public
health issues (e.g., numerous environmental health exposures), data do not
readily exist for community assessment and evaluation at a state or local
level. Long-term outcomes (e.g., mortality rates) that are often available are
not useful for planning and implementing programs with a time horizon of
a few years. A significant challenge to be discussed in later chapters is the
need to identify valid and reliable intermediate outcomes for public health
programs.
Importance of Stakeholder Input
As the issue definition stage continues, it is often critical to obtain the input of
“stakeholders.” Stakeholders, or key players, are individuals or agencies with
a vested interest in the issue at hand.3 When addressing a particular health
policy, policy makers are especially important stakeholders. Stakeholders can
also be individuals who would potentially receive, use, and benefit from the
program or policy being considered. In particular, three groups of stakehold-
ers are relevant8:
Table 6.4 shows how the considerations and motivations of various stakehold-
ers can vary.16 These differences are important to take into account while gar-
nering stakeholder input.
Stakeholder: Consideration

Public health advocates:
• The health of the American public has improved substantially, as demonstrated by declining death rates and longer life expectancy.
• Major public health programs have been successful in reducing key risk factors such as cigarette smoking, control of hypertension, and dietary changes.
• There are millions of Americans who lack health care coverage.
• Environmental monitoring and control have helped decrease morbidity and mortality.
• Prevention is the cornerstone of effective health policy.

Consumers:
• Personal and out-of-pocket health care costs are too high.
• Quality medical care is often not provided.
• There are substantial risks to the public from “involuntary” environmental hazards such as radiation, chemicals, food additives, and occupational exposures.
An example of the need for stakeholder input can be seen in Box 6.1. In this
case, there are likely to be individuals and advocacy groups with strong feelings
regarding how best to reduce infant mortality. Some of the approaches, such
as increasing funding for family planning, may be controversial. As described
in other parts of this book, there are several different mechanisms for gaining
stakeholder input, including the following:
• Interviews of leaders of various voluntary and nonprofit agencies that have
an interest in this issue
Developing an Initial Statement of the Issue ( 145 )
Box 6.1
REDUCING INFANT MORTALITY IN TEXAS
For the newly hired director of the Maternal and Child Health Bureau at
the Texas Department of Health and Human Services, the issue of dispar-
ities in infant mortality rates is of high interest. You have been charged
with developing a plan for reducing the rate of infant mortality. The plan
must be developed within 12 months and implemented within 2 years. The
data show that the infant mortality rate in Texas plateaued from 2000 to
2005, but then declined 12% from 2005 to 2015. Significant differences
among infant mortality rates of different races continue. The rate among
non-Hispanic blacks is currently 10.7 per 1,000 live births, and the rate
among non-Hispanic whites is currently 5.1, a relative difference of 110%.
Program staff, policy makers, and advisory groups (stakeholders) have
proposed numerous intervention options, including (1) increased fund-
ing for family planning services; (2) a mass media campaign to encourage
women to seek early prenatal care; and (3) global policies that are aimed
at increasing health care access for pregnant women. Program personnel
face a significant challenge in trying to obtain adequate stakeholder input
within the time frame set out by the governor. You have to decide on the
methods for obtaining adequate and representative feedback from stake-
holders in a short time frame. Some of the issues you need to consider
include the following:
• The role of the government and the role of the private sector in reduc-
ing infant mortality
• The positions of various religious groups on family planning
• The key barriers facing women of various ethnic backgrounds when
obtaining adequate prenatal care
• The views of key policy makers in Texas who will decide the amount of
public resources available for your program
SUMMARY
resources (see chapters 5 and 11). It should also be remembered that public
health is a team sport and that review and refinement of an initial issue state-
ment with one’s team are essential.
KEY CHAPTER POINTS
Selected Websites
Behavioral Risk Factor Surveillance System (BRFSS) <http://www.cdc.gov/brfss/>.
The BRFSS is the world’s largest, ongoing telephone health survey system, track-
ing health conditions and risk behaviors in the United States yearly since 1984.
Currently, data are collected in all 50 states, the District of Columbia, and three
US territories. The Centers for Disease Control and Prevention have developed a
standard core questionnaire so that data can be compared across various strata.
The Selected Metropolitan/Micropolitan Area Risk Trends (SMART) project pro-
vides localized data for selected areas. BRFSS data are used to identify emerging
health problems, establish and track health objectives, and develop and evaluate
public health policies and programs.
Centers for Disease Control and Prevention (CDC). Gateway to Communication and
Social Marketing Practice <http://www.cdc.gov/healthcommunication/cdcynergy/problemdescription.html>. CDC's Gateway to Communication and Social
REFERENCES
4. Bryson JM. Strategic Planning for Public and Nonprofit Organizations. A Guide to
Strengthening and Sustaining Organizational Achievement. 4th ed. San Francisco,
CA: John Wiley & Sons, Inc.; 2011.
5. Ginter PM, Duncan WJ, Swayne LM. Strategic Management of Health Care
Organizations. 7th ed. West Sussex, UK: John Wiley & Sons Ltd.; 2013.
6. Timmreck TC. Planning, Program Development, and Evaluation. A Handbook for
Health Promotion, Aging and Health Services. 2nd ed. Boston, MA: Jones and
Bartlett Publishers; 2003.
7. Centers for Disease Control and Prevention. Gateway to communication and
social marketing practice. http://www.cdc.gov/healthcommunication/cdcynergy/
problemdescription.html. Accessed June 5, 2016.
8. Centers for Disease Control and Prevention. Framework for program evaluation
in public health. http://www.cdc.gov/eval/framework/. Accessed June 5, 2016.
9. World Health Organization Regional Office for Europe. European Health for All
database http://data.euro.who.int/hfadb/. Accessed June 5, 2016.
10. World Health Organization. Tobacco Free Initiative. http://www.who.int/
tobacco/publications/en/. Accessed June 5, 2016.
11. Tengs TO, Adams ME, Pliskin JS, et al. Five-hundred life-saving interventions and
their cost-effectiveness. Risk Anal. Jun 1995;15(3):369–390.
12. Task Force on Community Preventive Services. Guide to Community Preventive
Services. www.thecommunityguide.org. Accessed June 5, 2016.
13. Rose G. Sick individuals and sick populations. Int J Epidemiol. 1985;14(1):32–38.
14. Rose G. The Strategy of Preventive Medicine. Oxford, UK: Oxford University
Press; 1992.
15. Koh HK, Piotrowski JJ, Kumanyika S, Fielding JE. Healthy People: a 2020 vision
for the social determinants approach. Health Educ Behav. Dec 2011;38(6):551–557.
16. Kuller LH. Epidemiology and health policy. Am J Epidemiol. 1988;127(1):2–16.
CHAPTER 7
Quantifying the Issue
Everything that can be counted does not necessarily count; everything that counts cannot
necessarily be counted.
Albert Einstein
( 150 ) Evidence-Based Public Health
to evaluate the effectiveness of new public health programs that are designed
to reduce the prevalence of risk factors and the disease burden in target
populations.
1. To discover the agent, host, and environmental factors that affect health,
in order to provide a scientific basis for the prevention of disease and injury
and the promotion of health
2. To determine the relative importance of causes of illness, disability, and
death, in order to establish priorities for research and action
3. To identify those sections of the population that have the greatest risk
from specific causes of ill health, in order to direct the indicated action
appropriately
4. To evaluate the effectiveness of health programs and services in improving
the health of the population
The first two functions provide etiologic (or Type 1) evidence to support causal
associations between modifiable and nonmodifiable risk factors and specific
diseases, as well as the relative importance of these risk factors when estab-
lishing priorities for public health interventions. The third function focuses
on the frequency of disease in a defined population and the subgroups within
the population to be targeted with public health programs. The last function
provides experimental (or Type 2) evidence that supports the relative effec-
tiveness of specific public health interventions to address a particular disease.
The terms descriptive epidemiology and analytic epidemiology are commonly
used when presenting the principles of epidemiology. Descriptive epidemiol-
ogy encompasses methods for measuring the frequency of disease in defined
populations. These methods can be used to compare the frequency of disease
within and between populations in order to identify subgroups with the high-
est frequency of disease and to observe any changes that have occurred over
time. Analytic epidemiology focuses on identifying essential factors that influ-
ence the prevention, occurrence, control, and outcome of disease. Methods
used in analytic epidemiology are necessary for identifying new risk factors
for specific diseases and for evaluating the effectiveness of new public health
programs designed to reduce the disease risk for target populations.
Quantifying the Issue (151)
in the state at the midpoint of the year (26,060,796) times the duration of the
study period (1 year). Disease rates calculated in this fashion measure the new
occurrence, or incidence, of disease in the population at risk.
This incidence rate should be contrasted with the prevalence rate, which
captures the number of existing cases of disease among surviving members
of the population. Prevalence provides essential information when planning
health services for the total number of persons who are living with the disease
in the community, whereas incidence reflects the true rate of disease occur-
rence in the same population. Incidence rates can lead us to hypothesize about
factors that are causing disease. Planning for public health services requires
a good grasp of the prevalence of the condition in the population, to properly
plan for needed personnel, supplies, and even services.
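As a minimal sketch of these two measures (the midyear population is the figure cited above; the case counts are hypothetical):

```python
# Incidence vs. prevalence: a sketch with hypothetical case counts.
# The midyear population (26,060,796) is the Texas figure cited above;
# the case counts are invented for illustration.

midyear_population = 26_060_796   # persons at risk at the midpoint of the year
new_cases = 1_500                 # hypothetical new diagnoses during the year
existing_cases = 12_000           # hypothetical persons living with the disease

# Incidence rate: new cases per 100,000 person-years (1-year study period)
incidence_rate = new_cases / (midyear_population * 1) * 100_000

# Prevalence: existing cases per 100,000 population
prevalence = existing_cases / midyear_population * 100_000

print(f"Incidence: {incidence_rate:.1f} per 100,000 person-years")
print(f"Prevalence: {prevalence:.1f} per 100,000 population")
```

Incidence drives etiologic hypotheses; prevalence drives planning for personnel, supplies, and services, as described above.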
Although incidence rates are ideal for measuring the occurrence of disease
in a population for a specified period, they are often not available. In this case,
it may be prudent to use cause-specific mortality rates based on the number
of deaths from the disease of interest that occurs in the population during the
same study period. Mortality rates are often used in lieu of incidence rates,
but are only reasonable surrogate measures when the disease is highly fatal.
Of course, mortality rates are more appropriate if the goal is to reduce mortal-
ity among populations in which screening programs can identify early stages
of diseases (e.g., breast cancer or HIV infection) or in which public health pro-
grams can reduce the mortality risk for other conditions (e.g., sudden infant
death syndrome or alcohol-related motor vehicle collisions).
Globally, there are numerous useful tools for estimating burden based on
mortality, life expectancy, disability-adjusted life-years, and other endpoints.
Sources include the Global Burden of Disease study that quantifies health loss
from hundreds of diseases, injuries, and risk factors so that health systems
can be improved and health equity achieved.3–5 The European Health for All
database provides a selection of core health statistics covering basic demo-
graphics, health status, health determinants and risk factors, and health care
resources, utilization, and expenditure in the 53 countries in the World Health
Organization (WHO) European Region.6
Disease rates can be estimated if all cases of disease can be enumerated for
the population at risk during a specified period and the size of the popula-
tion at risk (or amount of person-time) can be determined. In many coun-
tries, disease rates are routinely computed using birth and death certificate
data because existing surveillance systems provide complete enumeration of
these events. Although disease rates are commonly computed using national
and state data, estimating similar rates for smaller geographically or demo-
graphically defined populations may be problematic. The main concern is the
reliability of disease rates when there are too few cases of disease occurring in
the population. As an example, the US National Center for Health Statistics
will not publish or release rates based on fewer than 20 observations. The rea-
son behind this practice can be illustrated by examining the relative standard
error based on various sample sizes, with rates based on fewer than 20 cases
or deaths being very unreliable (Figure 7.1). The relative standard error is the
standard error as a percentage of the measure itself.
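The instability of rates based on few events can be seen with a short calculation. Assuming the case count follows a Poisson distribution (a standard assumption, not stated in the text), the relative standard error of a rate is roughly 100 divided by the square root of the number of cases:

```python
import math

# Relative standard error (RSE) of a rate whose numerator is a count.
# Under a Poisson assumption, the standard error of a count n is
# sqrt(n), so the RSE of the corresponding rate is 100 / sqrt(n) percent.

def relative_standard_error(cases: int) -> float:
    """RSE (%) of a rate based on `cases` events."""
    return 100.0 / math.sqrt(cases)

for n in (10, 20, 50, 100):
    print(f"{n:4d} cases -> RSE = {relative_standard_error(n):.1f}%")
```

At 20 cases the RSE is about 22%, which illustrates why rates based on fewer events are considered too unreliable to publish.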
Several approaches may prove useful to achieve greater representation of
so-called low-frequency populations such as recent immigrants or minority
populations.10 These strategies may be related to sampling (e.g., expand the
surveillance period by using multiple years to increase the number of cases of
disease and person-time units for the target population). Analytic strategies
may also be useful, such as aggregating data in a smaller geographical area
over several years. Alternate field methods may also be useful (e.g., door-to-
door surveys that might increase response rates). Sometimes, “synthetic” esti-
mates are useful. These estimates can be generated by using rates from larger
geographic regions to estimate the number of cases of disease for smaller geo-
graphic or demographically-specific populations. For example, the number of
inactive, older persons with diabetes within a particular health status group
(homebound, frail, functionally impaired, comorbid conditions, healthy) can
be estimated by multiplying the national proportions for the five health status
groups stratified into four census regions by the state-specific prevalence of
diabetes among adults 50 years or older.11 These synthetic estimates may then
Figure 7.1: Relative standard error (percent) by number of cases or deaths (10 to 100).
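The synthetic-estimate logic described above can be sketched as follows; all proportions, prevalences, and population counts are invented for illustration:

```python
# Synthetic estimate sketch: apply national proportions for health status
# groups to a state-specific count of older adults with diabetes.
# All numbers below are hypothetical.

state_adults_50_plus = 800_000   # hypothetical state population aged 50+
diabetes_prevalence = 0.18       # hypothetical state prevalence among 50+

# Hypothetical national proportions across the five health status groups
national_proportions = {
    "homebound": 0.05,
    "frail": 0.10,
    "functionally impaired": 0.20,
    "comorbid conditions": 0.35,
    "healthy": 0.30,
}

# Estimated persons with diabetes in the state, then split by group
state_cases = state_adults_50_plus * diabetes_prevalence
synthetic = {group: round(state_cases * p)
             for group, p in national_proportions.items()}
print(synthetic)
```

The same approach extends to stratifying by census region, as in the diabetes example above, by applying region-specific proportions instead of a single national set.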
Rates are routinely computed for specific diseases using data from pub-
lic health surveillance systems. These rates, if computed for the total
Box 7.1
SUICIDE RATES BY PERSON, PLACE, AND TIME
In 2013, suicide was the 10th leading cause of death in the United States.
There were more than 2.5 times as many deaths due to suicide as homi-
cide (41,149 vs. 16,121 deaths).36 Overall, the crude suicide rate was 13.0
deaths per 100,000 population. Suicide rates by person, place, and time
revealed the following trends:
• Suicide rates were highest for people who were 45–54 years old (19.7/100,000), followed by those 85 years and older (18.6/100,000).
• Age-adjusted suicide rates were nearly four times as high for males (20.3/100,000) as for females (5.5/100,000), although females are more likely to attempt suicide.
• Age-adjusted suicide rates for whites (14.2/100,000) and Native Americans (11.7/100,000) were more than twice as high as for other racial or ethnic groups.
• Age-adjusted suicide rates for non-Hispanic whites (15.9/100,000)
were almost three times the rates for Hispanics (5.7/100,000) and
non-Hispanic blacks (5.6/100,000).
• Age-adjusted suicide rates were highest in Montana (23.7/100,000)
and lowest in the District of Columbia (5.7/100,000).
• Age-adjusted suicide rates increased from 10.8 deaths per 100,000 in 2003 to 12.6 deaths per 100,000 in 2013.
• More than half of all suicides in 2013 were committed with a firearm.18
Figure 7.2: Age-adjusted breast cancer mortality rates by county for Missouri women, 1999–2014. County rates are mapped in four groups (13.9–21.4, >21.5–23.8, >23.9–26.5, and >26.6–34.7 per 100,000), with some values suppressed or flagged as unreliable.
Source: CDC WONDER, Compressed Mortality 1999–2014.
Figure 7.3: Age-adjusted ischemic heart disease mortality rates (per 100,000, all ages) for selected European countries (Austria, Finland, France, Greece, Netherlands, Sweden), 2000–2012.
Source: European Health for All database.
can also be used to stratify disease rates, but are not usually collected in public
health surveillance systems.
Figure 7.4: Age-adjusted breast cancer incidence and mortality rates (per 100,000) by year and race for US women, 1976–2012.
Source: SEER Cancer Statistics Review 1975–2013.
deaths per 100,000 men for those born between 1896 and 1905. The mor-
tality rate for the same age group continues to increase in subsequent birth
cohorts, with the highest rate of approximately 430 deaths per 100,000
for the cohort born between 1916 and 1925. The most logical explanation
for this pattern is differences in cumulative lifetime exposure to cigarette
smoke seen in the birth cohorts that are represented in this population dur-
ing 2000. In other words, members of the population born after 1905 were
more likely to smoke cigarettes and to smoke for longer periods than those
born before 1905. Hence, the increasing age-specific lung cancer mortality
rates reflect the increasing prevalence of cigarette smokers in the population for subsequent birth cohorts. The cohort effect is especially clear for the generations shown because of the marked historical change in smoking patterns. Now that awareness of the dangers of smoking has grown, the prevalence of smoking is declining, but these changes will not be manifest in the present age cohorts for some time.
Figure 7.5: Mortality rates (per 100,000) due to trachea, bronchus, and lung cancer by birth cohort for US men, by age group (35–44 through 85+). Each line represents age-specific rates for birth cohorts denoted by labels in boxes.
Adjusting Rates
a. Age-adjusted lung cancer mortality rate for Florida residents = 474.5 deaths/1,000,000 persons = 47.5 deaths/100,000 persons.
b. Age-adjusted lung cancer mortality rate for Alaska residents = 499.8 deaths/1,000,000 persons = 50.0 deaths/100,000 persons.
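The direct age-adjustment calculation behind figures like those in the footnotes above can be sketched as follows; the age groups, rates, and standard-population weights are hypothetical, not the actual Florida or Alaska data:

```python
# Direct age adjustment: a sketch with hypothetical age-specific rates
# and standard-population weights.

# Hypothetical lung cancer mortality rates per 100,000 by age group
age_specific_rates = {"<45": 2.0, "45-64": 60.0, "65+": 300.0}

# Hypothetical standard-population weights (must sum to 1.0)
standard_weights = {"<45": 0.60, "45-64": 0.25, "65+": 0.15}

# Age-adjusted rate = sum of (age-specific rate x standard weight),
# i.e., the rate the population would have if it had the standard
# population's age distribution
adjusted = sum(age_specific_rates[g] * standard_weights[g]
               for g in age_specific_rates)
print(f"Age-adjusted rate: {adjusted:.1f} per 100,000")
```

Because both populations are weighted to the same standard, adjusted rates such as the Florida and Alaska figures above can be compared without confounding by age structure.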
A tried and true public health adage is, “what gets measured, gets done.”21 This
measurement often begins with public health surveillance—the ongoing sys-
tematic collection, analysis, interpretation, and dissemination of health data
for the purpose of preventing and controlling disease, injury, and other health
problems.22 Surveillance systems are maintained at federal, state, and local
levels and can be used to estimate the frequency of diseases and other health
conditions for defined populations. At least five major purposes for surveil-
lance systems can be described: (1) assessing and monitoring health status
and health risks; (2) following disease-specific events and trends; (3) plan-
ning, implementing, monitoring, and evaluating health programs and poli-
cies; (4) conducting financial management and monitoring information; and
(5) conducting public health research.23 The surveillance systems that currently
exist can provide information on births, deaths, infectious diseases, cancers,
birth defects, and health behaviors. Each system usually contains sufficient
information to estimate prevalence or incidence rates and to describe the fre-
quency of diseases or health condition by person, place, and time. Although
data from surveillance systems can be used to obtain baseline and follow-up
measurements for target populations, there may be limitations when using
the data to evaluate intervention effectiveness for narrowly defined popula-
tions. In this case, it may be necessary to estimate the frequency of disease or
other health condition for the target population by using special surveys or
one of the study designs described later in this chapter. This section focuses
primarily on US data sources. There are similar data sources for many coun-
tries and regions; some of these resources are noted in the list at the end of
the chapter.
Vital Statistics
Vital statistics are based on data from birth and death certificates and are
used to monitor disease patterns within and across defined populations.
Birth certificates include information about maternal, paternal, and newborn
demographics, lifestyle exposures during pregnancy, medical history, obstet-
ric procedures, and labor and delivery complications for all live births. Fetal
death certificates include the same data, in addition to the cause of death,
for all fetal deaths that exceed a minimum gestational age or birth weight.
The data collected on birth and fetal death certificates are similar for many
states and territories since the designs of the certificates were modified,
based on standard federal recommendations issued in 1989. The reliability
of the data has also improved since changing from a write-in to a check-box
format, although some variables are more reliable than others. Birth-related
Reportable Diseases
In addition to vital statistics, all states and territories mandate the report-
ing of some diseases. Although the type of reportable diseases may differ by
state or territory, they usually include specific childhood, foodborne, sexu-
ally transmitted, and other infectious diseases. These diseases are reported
by physicians and other health care providers to local public health authori-
ties and are monitored for early signs of epidemics in the community. The
data are maintained by local and state health departments and are submitted
weekly to the CDC for national surveillance and reporting. Disease frequen-
cies are stratified by age, gender, race or ethnicity, and place of residence and
are reported routinely in the MMWR. However, reporting is influenced by
disease severity, availability of public health measures, public concern, ease
of reporting, and physician appreciation of public health practice in the
community.23, 24
Registries
Surveys
There are several federally sponsored surveys, including the National Health
Interview Survey (NHIS), National Health and Nutrition Examination Survey
(NHANES), and BRFSS, that have been designed to monitor the nation’s
health. These surveys are designed to measure numerous health indexes,
including acute and chronic diseases, injuries, disabilities, and other health-
related outcomes. Some surveys are ongoing annual surveillance systems,
whereas others are conducted periodically. These surveys usually provide
prevalence estimates for specific diseases among adults and children in the
United States. Although the surveys can also provide prevalence estimates for
regions and individual states, they cannot currently be used to produce esti-
mates for smaller geographically defined populations.
Several of the large US surveillance datasets such as the BRFSS and CDC
WONDER allow users to access national as well as state-level data. State health
agencies are increasingly making their health data available in user-friendly
data query systems that allow the estimation of baseline and follow-up rates
for needs assessment and for evaluating the effectiveness of new public health
interventions. Examples of the international, national, and state-based query
systems described in this chapter are provided at the end of the chapter under
“Selected Websites.”
[Figure: Decision tree for selecting a study design. The choice is guided by the evaluation questions, the type of evidence needed (quantitative, qualitative, or mixed), and the context. If the exposure can be assigned and assignment is random, the design is experimental; if the exposure can be assigned but not randomly, the design is quasi-experimental; if the exposure cannot be assigned, the design is observational.]
Experimental study designs provide the most convincing evidence that new
public health programs are effective. If study participants are randomized
into groups (or arms), the study design is commonly called a randomized con-
trolled trial. When two groups are created, the study participants allocated
randomly to one group are given the new intervention (or treatment), and
those allocated to the other group serve as controls. The study participants
in both groups are followed prospectively, and disease (or health-related out-
come) rates are computed for each group at the end of the observation period.
Because both groups are identical in all aspects, except for the intervention, a
lower disease rate in the intervention group implies that the intervention is
effective.
The same study design can also be used to randomize groups instead of
individuals to evaluate the effectiveness of health behavior interventions
for communities. Referred to as a group-randomized trial, groups of study
participants (e.g., schools within a school system or communities within a
state) are randomized to receive the intervention or to serve as controls for
the study. Initially, the groups may be paired, based on similar characteristics.
Then, each group within each pair is allocated randomly to the intervention
or control group. This helps to balance the distribution of characteristics of
the study participants for both study groups and to reduce potential study
bias. The intervention is applied to all individuals in the intervention group
and is withheld or delayed for the control group. Measurements are taken at
baseline and at the end of the observation period to determine whether there
are significant differences between the disease rates for the intervention and
control groups. The group-randomized design has been used to evaluate the
effectiveness of public health interventions designed to increase immuniza-
tion coverage, reduce tobacco use, and increase physical activity.26
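The pairing-then-randomizing step described above can be sketched as follows; the community names and fixed seed are illustrative:

```python
import random

# Group-randomized trial sketch: communities are paired on similar
# characteristics, then one member of each pair is randomly assigned
# to the intervention and the other serves as the control.
# Community names are invented for illustration.

pairs = [("Town A", "Town B"), ("Town C", "Town D"), ("Town E", "Town F")]

rng = random.Random(42)  # fixed seed so the allocation is reproducible
intervention, control = [], []
for first, second in pairs:
    if rng.random() < 0.5:
        intervention.append(first)
        control.append(second)
    else:
        intervention.append(second)
        control.append(first)

print("Intervention:", intervention)
print("Control:     ", control)
```

Randomizing within pairs, rather than across all communities at once, is what balances the distribution of community characteristics between the two arms.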
Experimental study designs are considered the gold standard because random-
ization of study participants reduces the potential for study bias. However, it is
not always feasible to use this study design when evaluating new public health
programs. This is particularly challenging for policy evaluation, in which it
is often impossible to randomize the exposure.27 Often, quasi-experimental
study designs are used to evaluate the effectiveness of new programs. Quasi-
experimental studies are identical in design to experimental studies, except
that the study participants are not allocated randomly to the intervention or
control group. Study participants in each group are followed for a predeter-
mined period, and outcomes (e.g., disease rates, behavioral risk factors) are
In the 1960s, Finland had the world’s highest coronary heart disease mor-
tality rates with the highest rates in the eastern province of North Karelia.
In 1971, representatives of the province appealed to national authorities
for help to reduce the burden of cardiovascular disease in the area. In
1972, the North Karelia Project was launched with the idea to carry out
and evaluate a comprehensive prevention intervention aimed at chang-
ing the area’s social, physical, and policy environment to reduce the main
behavioral risk factors for cardiovascular disease. The community-based
approach of the intervention was a novel approach at the time. After the
initial evaluation period, the interventions were extended nationally to
promote cardiovascular disease prevention throughout Finland. By 2006,
cardiovascular mortality in Finland decreased by 80% among working-
age adults and by 85% in North Karelia (Figure 7.7).30–32 Life expectancy
has increased by 10 years, and other improvements in health and well-
being have also been observed. “North Karelia demonstrated the dramatic
impact of low-resource, community-based interventions that target gen-
eral lifestyles.”31 This project and ultimately the national impact of this
comprehensive prevention intervention have led to a greater understand-
ing of the importance of considering a variety of social determinants
across different public and private sectors to affect health outcomes and
to the Finnish Health in All Policies initiative.
Figure 7.7: Cardiovascular disease mortality rates (per 100,000) in North Karelia (-85%) and all of Finland (-80%), 1969–2005.
for specific diseases. Generally, observational study designs are used to pro-
vide Type 1 evidence, for which the exposure has already occurred and disease
patterns can be studied for those with and without the exposure of interest.
A good historical example is the association between cigarette use and lung
cancer. Because people choose whether or not to smoke cigarettes (one would
not assign this exposure), we can evaluate the hypothesis that cigarette smok-
ers are at increased risk for developing lung cancer by following smokers and
nonsmokers over time to assess their lung cancer rates.
Cohort and case-control studies are two observational study designs that
can be used to evaluate the strength of the association between prior expo-
sure and risk for disease in the study population. Cohort studies compare the
disease rates of exposed and unexposed study participants who are free of
disease at baseline and followed over time to estimate the disease rates in
both groups. Cohort studies are often conducted when the exposure of inter-
est can be identified and followed to determine whether the disease rate is sig-
nificantly higher (or lower) than the rates for unexposed individuals from the
same population. Studies that have focused on the effects of diet or exercise
on specific diseases or health-related outcome33 are good examples of cohort
studies.
Case-control studies compare the frequency of prior exposures for study
participants who have been diagnosed recently with the disease (cases) with
those who have not developed the disease (controls). Case-control studies are
the preferred study design when the disease is rare, and they are efficient when
studying diseases with long latency. As is true for all study designs, select-
ing appropriate controls and obtaining reliable exposure estimates are crucial
when evaluating any hypothesis that a prior exposure increases (or decreases)
the risk for a specific disease. A recent study of lung cancer cases in Italy, which examined differences in occupational history, provides an example of an unusually large case-control study.34 Public health professionals working in typical settings may find much more modest case-control designs useful for exploring possible exposures related to the health issues they encounter.
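The measures of association underlying the two designs can be contrasted with a worked 2x2 table; the counts are invented for illustration:

```python
# Measures of association from a 2x2 table, with invented counts.
#                 Disease   No disease
# Exposed            a=40        b=960
# Unexposed          c=10        d=990

a, b, c, d = 40, 960, 10, 990

# Cohort design: disease risks can be computed directly for the
# exposed and unexposed groups, giving a risk ratio
risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
risk_ratio = risk_exposed / risk_unexposed

# Case-control design: risks cannot be computed directly (cases are
# oversampled by design), so the odds ratio (ad/bc) is used to
# estimate the strength of association
odds_ratio = (a * d) / (b * c)

print(f"Risk ratio: {risk_ratio:.1f}")    # 4.0
print(f"Odds ratio: {odds_ratio:.3f}")    # 4.125
```

When the disease is rare, as in this table, the odds ratio closely approximates the risk ratio, which is one reason case-control studies are preferred for rare diseases.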
Cross-sectional studies, a third type of observational study design, can be
completed relatively quickly and inexpensively to look at associations between
exposure and disease. Because information regarding potential exposures
and existing diseases for the study participants is measured simultaneously
when the study is conducted, cross-sectional studies are unable to ascertain
whether the exposure preceded the development of the disease among the
study participants. Hence, cross-sectional studies are used primarily to gen-
erate hypotheses. Nevertheless, cross-sectional studies are used for public
health planning and evaluation. For example, if a public health administrator
wants to know how many women of reproductive age smoked cigarettes while
pregnant, knowledge about the prevalence of maternal smoking in the com-
munity is important. Knowing the maternal smoking rates for subgroups of
this population will help target interventions, if needed, for each subgroup.
Cross-sectional studies are also used to help set research priorities based on
consideration of the disease burden. For example, a cross-sectional study conducted in two county hospitals in rural China established that a rapid screening test for detecting 14 high-risk types of human papillomavirus was effective.35
SUMMARY
As they develop, implement, and evaluate new public health intervention pro-
grams, public health professionals need a core set of epidemiologic skills to
quantify the frequency of a variety of health outcomes in target populations.
KEY CHAPTER POINTS
Lee LM, Teutsch SM, Thacker SB, St. Louis ME, eds. Principles and Practice of Public
Health Surveillance. 3rd ed. New York, NY: Oxford University Press, 2010.
Selected Websites
American Community Survey https://www.census.gov/programs-surveys/acs/about.
html. The American Community Survey is an ongoing annual survey that is con-
ducted by the US Census Bureau and includes questions on a variety of demo-
graphic, housing, economic, and social factors. Data are provided at the level of
census tracts and in some cases block groups. Though this survey does not con-
tain health status or behavior data, it provides a rich source of information about
the population and can be an integral part of a needs assessment process.
Centers for Disease Control and Prevention Behavioral Risk Factor Surveillance System
(BRFSS) http://www.cdc.gov/nccdphp/brfss. The BRFSS, the world's largest telephone survey, is an ongoing, reliable, and valid data collection program, conducted in all states, the District of Columbia, and three US territories, that tracks health risks in the United States. Information from the survey is used to improve the
health of the American people. The CDC has developed a standard core question-
naire so that data can be compared across various strata.
CDC WONDER http://wonder.cdc.gov. CDC WONDER is an easy-to-use query system
that provides a single point of access to a wide variety of CDC reports, guidelines,
and public health data. It can be valuable in public health research, decision mak-
ing, priority setting, program evaluation, and resource allocation.
Community Commons http://www.communitycommons.org/maps-data/. Community
Commons is a platform for data, tools, and stories to improve communities and
inspire change. Topic areas include equity, economy, education, environment,
food, and health. The creative and dynamic site allows users to create and share
data visualizations and provides a variety of resources that can be used in pro-
gram development and public health decision making.
County Health Rankings http://www.countyhealthrankings.org/. The County Health
Rankings are being developed by the University of Wisconsin Population Health
Institute through a grant from the Robert Wood Johnson Foundation. This web-
site seeks to increase awareness of the many factors—clinical care access and
quality, health-promoting behaviors, social and economic factors, and the physi-
cal environment—that contribute to the health of communities; foster engage-
ment among public and private decision makers to improve community health;
and develop incentives to encourage coordination across sectors for community
health improvement.
European Health for All database (HFA-DB) http://www.euro.who.int/en/data-and-
evidence/databases/european-health-for-all-database-hfa-db. The HFA-DB pro-
vides statistics for demographic characteristics, health status, risk factors, health
care resources and utilization, and health expenditures for the 53 countries in the
World Health Organization European Region.
Global Burden of Disease (GBD) data http://www.healthdata.org/gbd/data. The GBD
houses all global, regional, and country-level estimates for mortality, disability,
disease burden, life expectancy, and risk factors, which can be downloaded from
the Global Health Data Exchange, a catalog of the world’s health and demo-
graphic data. The tool allows users to explore the input sources to GBD based on
various criteria and to export the results. The GBD also includes many useful data
visualization tools.
National Center for Health Statistics http://www.cdc.gov/nchs/. The National Center
for Health Statistics is the principal vital and health statistics agency for the US
REFERENCES
25. Jemal A, Siegel R, Ward E, Murray T, Xu J, Thun MJ. Cancer statistics, 2007. CA
Cancer J Clin. Jan-Feb 2007;57(1):43–66.
26. Zaza S, Briss PA, Harris KW, eds. The Guide to Community Preventive
Services: What Works to Promote Health? New York, NY: Oxford University
Press; 2005.
27. Brownson RC, Diez Roux AV, Swartz K. Commentary: Generating rigorous evi-
dence for public health: the need for new thinking to improve research and prac-
tice. Annu Rev Public Health. 2014;35:1–7.
28. Reichardt C, Mark M. Quasi-experimentation. In: Wholey J, Hatry H, Newcomer K,
eds. Handbook of Practical Program Evaluation. 2nd ed. San Francisco, CA: Jossey-
Bass Publishers; 2004:126–149.
29. Shadish W, Cook T, Campbell D. Experimental and Quasi-Experimental Designs for
Generalized Causal Inference. Boston, MA: Houghton Mifflin; 2002.
30. Puska P. Health in all policies. Eur J Public Health. Aug 2007;17(4):328.
31. Puska P. The North Karelia Project: 30 years successfully preventing chronic dis-
eases. Diabetes Voice. 2008;53:26–29.
32. Puska P, Stahl T. Health in all policies—the Finnish initiative: background,
principles, and current issues. Annu Rev Public Health. 2010;31:315–328, 3 p
following 328.
33. Hu FB, Manson JE, Stampfer MJ, et al. Diet, lifestyle, and the risk of type 2 diabe-
tes mellitus in women. N Engl J Med. Sep 13 2001;345(11):790–797.
34. Consonni D, De Matteis S, Lubin JH, et al. Lung cancer and occupation
in a population-based case-control study. Am J Epidemiol. Feb 1
2010;171(3):323–333.
35. Qiao YL, Sellors JW, Eder PS, et al. A new HPV-DNA test for cervical-cancer
screening in developing regions: a cross-sectional study of clinical accuracy in
rural China. Lancet Oncol. Oct 2008;9(10):929–936.
36. Heron M, Hoyert D, Murphy S, Xu J, Kochanek K, Tejada-Vera B. Deaths: Final
data for 2006. National vital statistics reports. Hyattsville, MD: National Center for
Health Statistics; 2009.
CHAPTER 8
Searching the Scientific Literature
and Using Systematic Reviews
Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost
in information?
T.S. Eliot
BACKGROUND
1. Original research articles: These are the papers written by the authors who
conducted the original research studies. These articles provide details on
the methods used, results, and implications of results. A thorough and
comprehensive summary of a body of literature will consist of careful read-
ing of original research articles, particularly when a topic area is changing
rapidly or there are too few original articles to conduct a review.
[Figure: hierarchy of publication types, from practice guidelines at the top, through systematic reviews, meta-analyses, and narrative reviews, to individual studies at the base.]
Review articles and guidelines often present a useful shortcut for many busy
practitioners who do not have the time to master the literature on multiple
public health topics.
In addition to the type of publication, timeliness of scientific information
is an important consideration. To find the best-quality evidence for medical
decision making, Sackett and colleagues recommended that practitioners
burn their (traditional) textbooks.8–10 Although this approach may seem radi-
cal, it brings to light the limitations of textbooks for providing information
on the cause, diagnosis, prognosis, or treatment of a disorder. To stay up to
date in clinical practice, a textbook may need to be revised on a yearly basis.
Although journal articles provide more timely findings than textbooks,
research and publication remain a deliberative process that often takes
years: germinating an idea, obtaining funding, carrying out the study,
analyzing data, writing up results, submitting to a journal, and waiting
out the peer-review process and the journal's publication lag.
there is an increased call to shift the focus in our decision making from single
studies to the larger body of scientific evidence.5,15 By focusing on the body
of scientific evidence, practitioners can have greater confidence in the mag-
nitude and consistency of results after careful consideration of the influence
of methodological and publication biases. General methods used in a system-
atic review, as well as several types of reviews and their practical applications,
are described here; more detailed descriptions of these methods are available
elsewhere.16–18 Several checklists, tools, and recommendations can be useful
in assessing the methodological quality of a systematic review,19–23 including
AMSTAR23 and PRISMA24 protocols.
The goal of this section is not to teach readers how to conduct a systematic
review but to provide a basic understanding of the six common steps in con-
ducting a systematic review. Each is briefly summarized, and some selected
differences in approaches are discussed.
Identify the Problem
Search the Literature
There are numerous electronic databases available, and one or more of these
should be systematically searched. Several of these are excellent sources
of hard or electronic copies of published literature as well. For a variety
of reasons, however, limiting searching to electronic databases can have
drawbacks:
The third step is to develop inclusion and exclusion criteria for those studies
to be reviewed. This step often leads to revision and further specification of
the problem statement. Common issues include the study design, the level
of analysis, the type of analysis, and the sources and time frame for study
retrieval. The inclusion and exclusion criteria should be selected so as to yield
those studies most relevant to the purpose of the systematic review. If the
purpose of the systematic review is to assess the effectiveness of interven-
tions to increase physical activity rates among schoolchildren, for example,
then interventions aimed at representative populations (e.g., those including
adults) would be excluded. Ideally, as the inclusion and exclusion criteria are
applied, at least a portion of the data retrieval should be repeated by a second
person, and results should be compared. If discrepancies are found, the inclu-
sion and exclusion criteria are probably not sufficiently specific or clear. They
should be reviewed and revised as needed.
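When a second person repeats part of the data retrieval, the agreement between the two reviewers can be quantified before deciding whether the criteria need revision. The sketch below uses percent agreement plus Cohen's kappa; kappa is not prescribed in the text but is a common choice for chance-corrected agreement, and the decisions shown are invented for illustration.

```python
def cohens_kappa(reviewer_a, reviewer_b):
    """Percent agreement and Cohen's kappa for two reviewers'
    include (1) / exclude (0) decisions on the same set of articles."""
    assert len(reviewer_a) == len(reviewer_b)
    n = len(reviewer_a)
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    # Chance agreement: probability both say "include" plus both say "exclude"
    p_a = sum(reviewer_a) / n
    p_b = sum(reviewer_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return observed, kappa

# Hypothetical decisions for ten candidate articles
a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
b = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
agreement, kappa = cohens_kappa(a, b)
```

Low kappa despite high raw agreement is a signal that the inclusion and exclusion criteria are not yet specific or clear enough.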
Study Design
The first issue to consider is the type of study. Should only randomized con-
trolled trials be included? Some would answer “yes” because randomized
controlled trials are said to provide the most reliable data and to be specially
suited for supporting causal inference. Others would argue that random-
ized controlled trials also have their limitations, such as contamination or
questionable external validity, and that including a broader range of designs
could increase the aggregate internal and external validity of the entire body
Level of Analysis
The inclusion and exclusion criteria for level of analysis should match the pur-
pose of the systematic review. The most salient feature for public health is
whether studies are at the individual or the community level. A potentially
confusing problem, especially if one is interested in assessing community-
based interventions, is what to do with “mixed” studies—those that include
interventions aimed at both the community and the individual. A good strat-
egy in that case is to include all related studies in the data searching and then
use the data abstraction form (described later) to determine whether the
study should remain in the data set.
Type of Analysis
Evaluations of interventions can use several methods. Some, like the use
of focus groups, are more qualitative; others, such as regression modeling,
are more quantitative. Often, the specification of the question will make
some types of analysis relevant and others off-topic. Some questions can be
addressed in varied ways, and when this is true, broad inclusiveness might
give more complete answers. However, the more disparate the methodolo-
gies included, the more difficult it is to combine and consolidate the results.
A qualitative approach to the review tends to be more inclusive, collecting
information from all types of analysis. Meta-analysis, because it consolidates
results using a statistical methodology, requires quantitative analysis.
The final items to be specified are where a search for studies will be conducted
and the time period to be covered. The natural history of the intervention
should help determine the time frame. A major change in the delivery of an
intervention, for example, makes it difficult to compare results from studies
before and after the new delivery method. In this case, one might limit the
time to the “after” period. An additional factor influencing time frame is the
likely applicability of the results. Sometimes, substantial changes in context
have occurred over time. For example, results from the 1980s may be of ques-
tionable relevance to the current situation. In that case, one might limit the
review to more recent data. A pragmatic factor influencing the selection of a
time frame is the availability of electronic databases.
After the inclusion and exclusion criteria have been specified, the next step
is to find the studies that fit the framework, and then to extract a common
set of information from them. In general, a data abstraction form should be
used. This form should direct the systematic extraction of key information
about the elements of the study so that they can be consolidated and assessed.
Typical elements include the number of participants, the type of study, a pre-
cise description of the intervention, and the results of the study. If the data
abstraction form is well designed, the data consolidation and assessment can
proceed using only the forms. The exact format and content of the abstraction
form depend on the intervention and the type of analysis being used in the
systematic review. An excellent and comprehensive example of an abstraction
form is provided by the Task Force on Community Preventive Services.28
Consolidate the Evidence
After the evidence has been consolidated, the final step is to assess it and reach
a conclusion. For example, suppose that the intervention being reviewed is
the launching of mass media campaigns to increase physical activity rates
among adults. Further, assume that a meta-analysis of this topic reveals that
a majority of studies find that community-based interventions improve physi-
cal activity rates. However, the effect size is small. What should the review
conclude?
The review should consider both the strength and weight of the evidence
and the substantive importance of the effect. This assessment can be done by
the reviewer using his or her own internal criteria, or by using explicit crite-
ria that were set before the review was conducted. An example of the latter
approach is the method employed by the US Preventive Services Task Force
(USPSTF).29 The USPSTF looks at the quality and weight of the evidence (rated
good, fair, or poor), and the net benefit, or effect size, of the preventive service
(rated substantial, moderate, small, or zero/negative). Their overall rating and
recommendation reflect a combination of these two factors. For example, if a
systematic review of a preventive service finds “fair” evidence of a “substan-
tial” effect, the Task Force gives it a recommendation of “B,” or a recommenda-
tion that clinicians routinely provide the service to eligible patients.
If no formal process for combining the weight of the evidence and the
substantive importance of the findings has been specified beforehand, and
the systematic review yields mixed findings, then it is useful to seek help
with assessing the evidence and drawing a conclusion. The analyst might ask
experts in the field to review the evidence and reach a conclusion or make a
recommendation.
After completing the systematic review, the final step is to write up a report
and disseminate the findings. The report should include a description of all
of the previous steps.23,24 In fact, protocols currently exist for writing up and
evaluating systematic reviews. Ideally, the systematic review should be dis-
seminated to the potential users of the recommendations. The method of
dissemination should be targeted to the desired audience. Increasingly, this
means putting reports on the Internet so that they are freely accessible or
presenting the findings to a community planning board. However, it is also
important to submit reviews for publication in peer-reviewed journals. This
provides one final quality check. Various methods for disseminating the
results of systematic reviews are described later in this chapter.
Meta-Analysis
Over the past three decades, meta-analysis has been increasingly used to syn-
thesize the findings of multiple research studies. Meta-analysis, a type of sys-
tematic review, was originally developed in the social sciences in the 1970s
when hundreds of studies existed on the same topics.6 Meta-analysis uses a
quantitative approach to summarize evidence, in which results from separate
studies are pooled to obtain a weighted average summary result.6 Its use has
appeal because of its potential to pool a group of smaller studies, enhancing
statistical power. Meta-analysis studies can increase the statistical and scien-
tific credibility of a scientific finding because they summarize effects across
sites and methodologies. They also may allow researchers to test subgroup
effects (e.g., by gender or racial or ethnic group) that are sometimes difficult to
assess in a single, smaller study. Finally, reviews that summarize various inter-
vention trials are an extremely efficient method for obtaining the “bottom
line” about what works and what does not.4 Suppose there were several stud-
ies examining the effects of exercise on cholesterol levels, with each report-
ing the average change in cholesterol levels, the standard deviation of that
change, and the number of study participants. These average changes could
be weighted by sample size and pooled to obtain an average of the average
changes in cholesterol levels. If this grand mean showed a significant decline
in cholesterol levels among exercisers, then the meta-analyst would conclude
that the evidence supported exercise as a way to lower cholesterol levels.
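The pooling arithmetic in this cholesterol example can be sketched in a few lines. The study numbers below are invented for illustration, and note that real meta-analyses more often weight studies by inverse variance than by raw sample size.

```python
def pooled_mean(changes, sample_sizes):
    """Sample-size-weighted average of per-study mean changes,
    as in the cholesterol example: a simple fixed-effect-style pooling."""
    total_n = sum(sample_sizes)
    return sum(c * n for c, n in zip(changes, sample_sizes)) / total_n

# Hypothetical studies: mean cholesterol change (mg/dL) and participants
mean_changes = [-12.0, -8.0, -15.0]
ns = [50, 200, 100]
grand_mean = pooled_mean(mean_changes, ns)
```

Here the grand mean is negative, so under this toy data the meta-analyst would read the pooled evidence as supporting exercise as a way to lower cholesterol levels.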
Similar to the method described previously for conducting a systematic
review, Petitti notes four essential steps in conducting a meta-analysis: (1) iden-
tifying relevant studies; (2) deciding on inclusion and exclusion criteria for
studies under consideration; (3) abstracting the data; and (4) conducting the
statistical analysis, including exploration of heterogeneity.6
Meta-analysis includes several different statistical methods for aggregating
the results from multiple studies. The method chosen depends on the type of
analysis used in the original studies, which, in turn, is related to the type of
data analyzed. For example, continuous data, such as cholesterol levels, can be
analyzed by comparing the means of different groups. Continuous data could
also be analyzed with multiple linear regression. Discrete (dichotomous) data
are often analyzed with relative risks or odds ratios, although a range of other
options also exists.
An important issue for meta-analysis is the similarity of studies to be com-
bined. This similarity, or homogeneity, is assessed using various statistical
tests. If studies are too dissimilar (high heterogeneity), then combining their
results is problematic. One approach is to combine only homogeneous subsets
of studies. Although statistically appealing, this to some extent defeats the
purpose of the systematic review because a single summary assessment of
the evidence is not reported. An alternative approach is to use meta-analytic
methods that allow the addition of control variables that measure the differ-
ences among studies. For example, studies may differ by type of study design.
If so, then a new variable could be created to code different study design types,
such as observational and randomized controlled trials.
The statistical issue of the similarity of studies is related to the inclusion
and exclusion criteria. These criteria are selected to identify a group of studies
for review that are similar in a substantive way. If the meta-analysis finds that
the studies are not statistically homogeneous, then the source of heterogene-
ity should be investigated. Measures of inconsistency describe the variation
across studies that is due to heterogeneity rather than chance.18 This kind of
measure can describe heterogeneity across methodological and clinical sub-
groups as well. A careful search for the sources of heterogeneity and a consid-
eration of their substantive importance can improve the overall systematic
review.
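One widely used measure of inconsistency is I^2, derived from Cochran's Q statistic. The sketch below, with invented effect estimates, shows the standard inverse-variance computation; it is an illustration of the general approach, not a method the text mandates.

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 inconsistency statistic for a set of
    study effect estimates with their variances (inverse-variance weights)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: share of across-study variation due to heterogeneity, not chance
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

# Hypothetical log odds ratios and variances from four studies
q, i2 = heterogeneity([0.2, 0.5, 0.1, 0.9], [0.04, 0.09, 0.05, 0.16])
```

Identical study results yield Q = 0 and I^2 = 0; larger I^2 values warn that a single summary estimate may be masking real differences among studies.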
Meta-analysis has generated a fair amount of controversy, particularly
when it is used to combine results of observational studies. However, the
quality of meta-analyses has improved, perhaps owing to the dissemination
and adoption of guidelines for their conduct.24 Journal articles based on
Box 8.1
THE COMMUNITY GUIDE IN ACTION – NEBRASKA’S
BLUEPRINT FOR SUCCESS IN REDUCING TOBACCO USE
In 2009–2010 in Nebraska, tobacco use claimed 2,200 lives and cost the
state $537 million in health care. It was projected that 36,000 Nebraskans
younger than 18 years would die prematurely from smoking. Charlotte
Burke, manager of the Lincoln-Lancaster County Health Department’s
Division of Health Promotion and Outreach, was alarmed by these statistics
and made reducing tobacco use and preventing exposure to secondhand
smoke in the city of Lincoln and the surrounding county a priority.
The Lincoln-Lancaster County Health Department partnered
with the Tobacco Free Lincoln Coalition, the local Board of Health, and
local health organizations and experts to identify resources to decrease
tobacco use. Using recommendations from the Community Guide they built
a plan that started with local education efforts and ultimately led to state-
wide policy changes. By working with local partners and organizations
and educating the public and policymakers, they could make changes that
ultimately led to a higher state tobacco tax, a statewide indoor smoking
ban, and lower county smoking rates among adults and youth.
Details for this effort and other community initiatives across the United
States that used evidence-based recommendations from the Community
Guide to make communities healthier and safer can be found at http://
www.thecommunityguide.org/CG-in-Action/.
Table 8.1 RECOMMENDED ONLINE RESOURCES FOR COMPILED EVIDENCE REVIEWS FOR PUBLIC HEALTH INTERVENTIONS

PH Partners (Partners in Information Access for the Public Health Workforce), http://phpartners.org/index.html
Source: Collaboration of US government agencies, public health organizations, and health sciences libraries.
What: Homepage provides a dashboard for public health professionals, including news, links to data, jobs, and upcoming conferences and trainings. Also includes a link to Healthy People 2020 Structured Evidence Queries—a topical list of HP2020 goals that link directly to preformed PubMed searches on the literature evidence for each.
User notes: Easy way to access/search high-quality, peer-reviewed scientific literature to identify research evidence for selected Healthy People 2020 objectives. Queries are "live" and update as new research is added. Some HP2020 topic areas are still in "beta" status (including heart disease), which means the query has not yet been reviewed by subject experts. The query itself is quite complex and would be hard to replicate outside of the PH Partners evidence queries.

RTIPS (Research-Tested Intervention Programs), http://rtips.cancer.gov/rtips/index.do
Source: National Cancer Institute.
What: Searchable database of 159 cancer control interventions and program materials, designed to provide program planners and public health practitioners easy and immediate access to research-tested materials.
User notes: Can search by topic, setting, or target population. Topics cover early prevention, such as healthy eating/physical activity, and other areas that overlap with other chronic disease program areas. Does not cover all topic areas in public health (e.g., environmental health, communicable disease).

TRIP (Turning Research Into Practice), https://www.tripdatabase.com
Source: Incorporated enterprise: Jon Brassey and Dr. Chris Price.
What: Clinical search engine (includes population-based interventions) designed to allow users to quickly and easily find and use high-quality research evidence to support their practice and/or care.
User notes: Can also search images, videos, patient information leaflets, educational courses, and news. Research from PubMed updates every 2 weeks, and other content updates once per month.

Cochrane Collaboration, http://www.cochrane.org/ and http://ph.cochrane.org/
Source: A global independent network of researchers, professionals, patients, carers, and people interested in health, based in London.
What: Cochrane contributors—37,000 from more than 130 countries—work together to produce credible, accessible health information that is free from commercial sponsorship and other conflicts of interest. Cochrane exists so that health care decisions get better. Contributors gather and summarize the best health evidence from research to help you make informed choices about treatment.
User notes: Although the Cochrane Collaboration is primarily clinical in focus, public health is one of 34 subject areas in which there are systematic reviews (http://www.cochranelibrary.com/topic/Public%20health/).

Health Evidence, http://www.healthevidence.org
Source: McMaster University, Ontario, Canada.
What: Contains nearly 4,500 quality-rated systematic reviews evaluating the effectiveness of public health interventions.
User notes: Houses systematic reviews evaluating the effectiveness of public health interventions, particularly in the areas of prevention, health protection, and health promotion. Must sign up to search, but it's free.

What Works for Health, http://www.countyhealthrankings.org/roadmaps/what-works-for-health
Source: University of Wisconsin Population Health Institute.
What: Searches systematic reviews, individual peer-reviewed studies, private organizations, and gray literature to find evidence. Useful for topic areas that have not undergone extensive systematic review.
User notes: Easy way to access/search high-quality literature for topic areas not yet included in the Community Guide or for which there is growing but not yet complete literature. For each included topic area, there are implementation examples and resources that communities can use to move forward with their chosen strategies.
For many topic areas in public health, systematic reviews are available
and the recommendations on what works are fairly clear. Other topics,
however, are less studied or involve, for example, advances in technology
that have not yet been evaluated. Practitioners may therefore need to
conduct a primary search of the scientific literature to identify original
research studies or evaluations in their topic area or population of
interest. Though not as rigorous as the process of conducting a systematic review, a systematic approach
to literature searching can increase the chances of finding pertinent informa-
tion. Figure 8.2 describes a process for searching the literature and organizing
the findings of a search. The following sections provide a step-by-step break-
down of this process.11
Figure 8.2: Flow chart for organizing a search of the scientific literature. (The later stages
[especially steps 5 and 6] of the process are based largely on the Matrix Method, developed
by Garrard.11 )
We focus mainly on the use of PubMed because it is the largest and most
widely available bibliographic database, with coverage of more than 25 million
articles from MEDLINE and life sciences journals. We also focus our search for
evidence on peer-reviewed programs and studies and on data that have been
reviewed by other researchers and practitioners.
Based on the issue statement described in chapter 6, the purpose of the search
should be well outlined. Keep in mind that searching is an iterative process,
and a key is the ability to ask one or more answerable questions. Though the
goal of a search is to identify all relevant material and nothing else, in practice,
this is difficult to achieve.6 The overarching questions include, “Which evidence
is relevant to my questions?” and “What conclusions can be drawn regarding
effective intervention approaches based on the literature assembled?”37
Identify Key Words
Key words are terms that describe the characteristics of the subject being
reviewed. A useful search strategy is dependent on the sensitivity and
precision of the key words used. “Sensitivity” is the ability to identify all
relevant material, and “precision” is the amount of relevant material among
the information retrieved by the search.38 Thus, sensitivity addresses the
question, “Will relevant articles be missed?” whereas precision addresses
the question, “Will irrelevant articles be included?” Most bibliographic
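These two definitions are simply ratios over the retrieved and truly relevant article sets, which the following sketch makes concrete (the article IDs are made up for illustration):

```python
def search_metrics(retrieved, relevant):
    """Sensitivity (recall) and precision of a literature search,
    given the set of retrieved article IDs and the set of truly relevant ones."""
    hits = retrieved & relevant                 # relevant articles the search found
    sensitivity = len(hits) / len(relevant)     # "Will relevant articles be missed?"
    precision = len(hits) / len(retrieved)      # "Will irrelevant articles be included?"
    return sensitivity, precision

retrieved = {"a1", "a2", "a3", "a4", "a5"}  # hypothetical IDs returned by a search
relevant = {"a1", "a2", "a6"}               # articles that actually address the question
sens, prec = search_metrics(retrieved, relevant)
```

Broadening key words tends to raise sensitivity at the cost of precision, and vice versa; in practice the two must be traded off.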
databases require the use of standardized key words. These key words are
often found in the list of Medical Subject Heading (MeSH) terms. There
are a number of tutorials on the PubMed site about using the database,
including information about identifying and selecting MeSH terms. There
are two small screens on the right of the search page of PubMed that are
helpful. One, named “Titles with your search terms,” will permit the user to
consult other published articles similar to what is being searched in order
to check the search terms used. Looking at these titles may suggest addi-
tional search terms to include. There is also a screen “Search details,” which
includes MeSH terms. This screen may be helpful when entering open text
on a search and noting that an indicated MeSH term may be a better choice.
For a literature search in MEDLINE, these sources of key words are useful
(Figure 8.2):
1. Identify two scientific papers that cover the topic of interest—one more
recent and one less recent. These papers can be pulled up on PubMed. In
the MEDLINE abstract, a list of MeSH terms will be provided. These can, in
turn, be used in subsequent searches.
2. Key words can be found within the alphabetical list of MeSH terms,
available online at <http://www.nlm.nih.gov/mesh/meshhome.html>.
Alternatively, the MeSH list can be searched through PubMed by selecting
it as the database to search from the dropdown box to the left of the main
search box.
3. MEDLINE and Google Scholar do not require users to use standardized key
words. Therefore, you can select your own key words—these are searched
for in article titles and abstracts. Generally, using nonstandardized key
words provides a less precise literature search than does using standard-
ized terms. However, the MEDLINE interface between standardized and
nonstandardized key words allows complete searching without a detailed
knowledge of MeSH terms.
Conduct the Search
After the databases and initial key words are identified, it is time to run the
search. After the initial search is run, the number of publications returned
will likely be large and include many irrelevant articles. Several features of
PubMed can assist searchers in limiting the scope of the search to the most
relevant articles.
• PubMed will allow you to link to other “related articles” by simply clicking
an icon on the right side of each citation.
• If a particularly useful article is found, the author’s name can be searched
for other similar studies. The same author will often have multiple publica-
tions on the same subject. To avoid irrelevant retrievals, you should use the
author’s last name and first and middle initials in the search.
• In nearly every case, it is necessary to refine the search approach. As arti-
cles are identified, the key word and search strategy will be refined and
improved through a “snowballing” technique that allows users to gain
familiarity with the literature and gather more useful articles.11 Articles
that may be useful can be saved during each session by clicking “send to”
within PubMed.
• Searches may be refined using Boolean operators, words that relate search
terms to each other, thus increasing the reach of the search. Help screens
of different databases will provide more information, but the most com-
mon Boolean operators are (used in CAPS): AND, NOT, OR, NEAR, and
“ ”. The word AND searches for the terms before and after the AND, yield-
ing only articles that include both terms. An example would be: asthma
AND adolescents, which would find all articles about asthma in adoles-
cents. An example of the operator NOT would be: accidents NOT auto-
mobiles, which would find articles about nonautomobile accidents. (A
caution: automobiles is a MeSH term, so this search will exclude cars but
not trucks or other vehicles. Using the more general MeSH term Motor
vehicles would exclude more articles.) The operator OR permits coupling
two search terms that may tap a similar domain. For example, adolescents
OR teenagers will find articles that used either term. The operator NEAR
will define two search terms that must appear within 10 words of each
other to select an article. For example, elevated NEAR lead will find arti-
cles discussing elevated blood lead levels. Use of quotation marks “…” will
define a search term that must appear as listed. For example, the search
term “school clinic” must appear as that phrase to be identified, rather
than identifying articles that contain the words “school” and “clinic” sepa-
rately. Boolean operators are highly useful for specifying a search and for
making it more efficient.
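To make query construction repeatable across search sessions, the Boolean composition described above can be scripted. The helper below (`build_query` is a hypothetical name, not part of any database's interface) assembles a query string in this style; exact operator syntax varies by database, so consult the relevant help screens.

```python
def build_query(all_of=(), any_of=(), none_of=(), phrases=()):
    """Assemble a Boolean search string: AND-joined required terms,
    an OR group of alternatives, quoted exact phrases, and NOT exclusions.
    Purely illustrative; check your database's help pages for exact syntax."""
    parts = []
    if all_of:
        parts.append(" AND ".join(all_of))
    if any_of:
        parts.append("(" + " OR ".join(any_of) + ")")
    if phrases:
        parts.extend(f'"{p}"' for p in phrases)
    query = " AND ".join(parts)
    for term in none_of:
        query += f" NOT {term}"
    return query

q = build_query(all_of=["asthma"], any_of=["adolescents", "teenagers"],
                phrases=["school clinic"], none_of=["automobiles"])
```

Keeping the final query string alongside the search results also documents the strategy, which helps when the "snowballing" refinement described earlier changes the terms over time.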
Once a set of articles has been located, it is time to organize the documents.11
This will set the stage for abstracting the pertinent information. Generally, it
is helpful to organize the documents by the type of study (original research,
review article, review article with quantitative synthesis, guideline). It is
often useful to enter documents into a reference management database such
When a group of articles has been assembled, the next step is to create an
evidence matrix—a spreadsheet with rows and columns that allows users to
abstract the key information from each article.11 Creating a matrix provides
a structure for putting the information in order. In developing a matrix,
the choice of column topics is a key consideration. It is often useful to
consider both methodological characteristics and content-specific results
as column headings. A sample review matrix is shown in Table 8.3 (using
physical activity studies for illustration39–43). In this example, studies were
also organized within rows by an ecological framework, described in detail
in chapter 5.
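A minimal evidence matrix can also be built programmatically. The sketch below uses the column headings described above and a single abbreviated row based on the Ory et al. study; it writes the matrix as CSV, which can then be opened in any spreadsheet program:

```python
import csv
import io

# Column headings mirror the evidence matrix described in the text; the
# single row is an abbreviated entry for one study (Ory et al., 2016).
columns = ["Lead author", "Year", "Study design", "Study population",
           "Sample size", "Intervention characteristics", "Results",
           "Conclusions", "Other comments"]

rows = [{
    "Lead author": "Ory et al.",
    "Year": 2016,
    "Study design": "Cross-sectional",
    "Study population": "Community-dwelling older adults (>=60 y), Texas",
    "Sample size": 272,
    "Intervention characteristics": "N/A (not an intervention)",
    "Results": "Lack of social support associated with not meeting PA recommendations",
    "Conclusions": "Physicians should discuss barriers to walking",
    "Other comments": "",
}]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=columns)
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```

Additional studies become additional dictionaries in `rows`, one per article abstracted, preserving the one-row-per-study structure of the matrix method.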
After a body of studies has been abstracted into a matrix, the literature may
be summarized for various purposes. For example, you may need to provide
background information for a new budget item that is being presented to the
administrator of an agency. Knowing the best intervention science and deter-
mining the best way to transfer that knowledge to key policy makers should
increase the chances of convincing these policy makers of the need for a par-
ticular program or policy.44 You may also need to summarize the literature in
Table 8.3. EXAMPLE EVIDENCE MATRIX FOR LITERATURE ON PHYSICAL ACTIVITY PROMOTION AT VARIOUS LEVELS OF AN ECOLOGICAL FRAMEWORK

(Columns: lead author, article title, and country; year; methodologic characteristics [study design, study population, sample size, intervention characteristics]; and content-specific findings [results, conclusions, other comments].)

Individual Level

Ory et al.39 (2016). Social and environmental predictors of walking among older adults. United States.
Study design: cross-sectional. Study population: community-dwelling older adults (>=60 y) from a healthcare system in Texas. Sample size: 272. Intervention characteristics: N/A (not an intervention). Results: factors associated with not meeting PA recommendations included being 60-69 y, poor mental health in the past month, and lack of social support for walking. Conclusions: physicians must communicate the importance of PA to older patients and discuss strategies to overcome barriers to walking.

Interpersonal Level

Dowda et al.40 (2007). Family support for physical activity in girls from 8th to 12th grade in South Carolina. United States.
Study design: longitudinal. Study population: adolescent girls in 8th grade in South Carolina in 1998. Sample size: 421. Intervention characteristics: perceived family support for PA, perceived behavioral control, self-efficacy. Results: family support was independently associated with age-related changes in PA, with more rapid declines in PA among those with low family support for PA. Conclusions: support of PA from family members may reduce the decline in PA in adolescent girls, independent of self-efficacy and perceived behavioral control. Other comments: PA was assessed using a 3-day activity recall and then converted to METs per day at each measurement time point (8th, 9th, and 12th grades).

Organizational Level

Gilson et al.41 (2007). Walking towards health in a university community. United Kingdom.
Study design: randomized controlled trial (feasibility study). Study population: employees at Leeds Metropolitan University, UK. Sample size: 64. Intervention characteristics: one control group, which received no intervention, and two intervention groups: (a) walking routes, in which participants followed prescribed walks around campus with a goal of at least 15 minutes of brisk walking during the work day; and (b) walking within tasks, which encouraged the accumulation of steps in and around the office during usual activities. Results: differences in step counts between groups. Conclusions: significant mean differences between groups, with an average decrease in steps in the control group and increases in steps in both walking groups.

Community Level

Kamada et al.42 (2013). A community-wide campaign to promote physical activity in middle-aged and elderly people: a cluster randomized controlled trial. Japan.
Study design: cluster randomized controlled trial. Study population: 12 communities within Unnan city in Shimane, Japan; 9 intervention (3 levels of PA) and 3 comparison. Sample size: 4414. Intervention characteristics: community-wide campaign (CWC) to promote PA as a public health project at the cluster level. Results: short-term changes in knowledge and awareness, but no changes in PA at 1 year. Conclusions: the CWC did not change PA beliefs, intention, or behavior, though there were short-term changes in knowledge and awareness.

(continued)
order to build the case for a grant application that seeks external support for
a particular program.
Often a public health practitioner wants to understand not only the outcomes
of a program or policy but also the process of developing and carrying out an
intervention (see chapter 10). Many process issues are difficult to glean from
the scientific literature because the methods sections in published articles
may not be comprehensive enough to show all aspects of the intervention.
A program may evolve over time, and what is in the published literature may
differ from what is currently being done. In addition, many good program and
policy evaluations go unpublished.
In these cases, key informant interviews may be useful. Key informants
are experts on a certain topic and may include a university researcher who
has years of experience in a particular intervention area or a local program
manager who has the field experience to know what works when it comes to
designing and implementing effective interventions. There are several steps in
carrying out a “key informant” process:
1. Identify the key informants who might be useful for gathering informa-
tion. They can be found in the literature, through professional networks,
and increasingly, on the Internet (see <http://www.profnet.com>, a site
that puts journalists and interested persons in touch with scientific experts
who are willing to share their expertise).
2. Determine the types of information needed. It is often helpful to write out
a short list of open-ended questions that are of particular interest. This can
help in framing a conversation and making the most efficient use of time.
Before a conversation with an expert, it is useful to email the questions in
advance so that he or she has time to think through replies.
3. Collect the data. This often can be accomplished through a 15- to
30-minute phone conversation if the questions of interest are well framed
ahead of time.
4. Summarize the data collected. Conversations can be recorded and tran-
scribed using formative research techniques. More often, good notes are
taken during the conversation and distilled into a series of bullet points
from each key informant interview.
5. Conduct follow-up, as needed. As with literature searching, key infor-
mant interviews often result in a snowballing effect in which one expert
refers you to others who can provide additional perspectives.
Professional Meetings
SUMMARY
KEY CHAPTER POINTS
Selected Websites
Agency for Healthcare Research and Quality (AHRQ) <http://www.ahrq.gov/>. The
AHRQ mission is to improve the quality, safety, efficiency, and effectiveness of
health care for all Americans. Information from AHRQ research helps people
make more informed decisions and improve the quality of health care services.
Annual Review of Public Health <http://publhealth.annualreviews.org/>. The mission
of Annual Reviews is to provide systematic, periodic examinations of scholarly
advances in a number of scientific fields through critical authoritative reviews.
The comprehensive critical review not only summarizes a topic but also roots out
errors of fact or concept and provokes discussion that will lead to new research
activity. The critical review is an essential part of the scientific method.
Evidence-based behavioral practice (EBBP) <http://www.ebbp.org/>. The EBBP.org
project creates training resources to bridge the gap between behavioral health
research and practice. An interactive website offers modules covering topics such
as the EBBP process, systematic reviews, searching for evidence, critical appraisal,
and randomized controlled trials. This site is ideal for practitioners, researchers,
and educators.
REFERENCES
1. Jones R, Kinmonth A-L. Critical Reading for Primary Care. Oxford, UK: Oxford
University Press; 1995.
2. Greenhalgh T. How to read a paper. Getting your bearings (deciding what the
paper is about). British Medical Journal. 1997;315:243–246.
3. Makela M, Witt K. How to read a paper: critical appraisal of studies for application
in healthcare. Singapore Med J. Mar 2005;46(3):108–114; quiz 115.
4. Uman LS. Systematic Reviews and Meta- Analyses. Journal of the Canadian
Academy of Child and Adolescent Psychiatry. 2011;20(1):57–59.
5. Garg AX, Hackam D, Tonelli M. Systematic review and meta-analysis: when one
study is just not enough. Clinical Journal of the American Society of Nephrology.
January 1, 2008 2008;3(1):253–260.
6. Petitti DB. Meta-analysis, Decision Analysis, and Cost-Effectiveness Analysis: Methods
for Quantitative Synthesis in Medicine. 2nd ed. New York, NY: Oxford University
Press; 2000.
7. Porta M, ed. A Dictionary of Epidemiology. 6th ed. New York, NY: Oxford University
Press; 2014.
8. Sackett DL, Rosenberg WMC. The need for evidence-based medicine. Journal of
the Royal Society of Medicine. 1995;88:620–624.
9. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS.
Evidence based medicine: what it is and what it isn’t. British Medical Journal.
1996;312:71–72.
10. Straus SE, Richardson WS, Glasziou P, Haynes R. Evidence-Based Medicine. How to
Practice and Teach EBM. 4th ed. Edinburgh, UK: Churchill Livingston; 2011.
11. Garrard J. Health Sciences Literature Review Made Easy. The Matrix Method. 2nd ed.
Sudbury, MA: Jones and Bartlett Publishers; 2006.
12. US National Library of Medicine. Key MEDLINE Indicators. http://www.nlm.nih.gov/bsd/bsd_key.html. Accessed October 2, 2016.
13. Science Intelligence and InfoPros. How many science journals? https://scienceintelligence.wordpress.com/2012/01/23/how-many-science-journals/. Accessed October 2, 2016.
( 206 ) Evidence-Based Public Health
14. Laakso M, Björk B-C. Anatomy of open access publishing: a study of longitudinal
development and internal structure. BMC Medicine. 2012;10(1):1–9.
15. Murad MH, Montori VM. Synthesizing evidence: shifting the focus from individ-
ual studies to the body of evidence. JAMA. Jun 5 2013;309(21):2217–2218.
16. Bambra C. Real world reviews: a beginner’s guide to undertaking system-
atic reviews of public health policy interventions. Journal of Epidemiology and
Community Health. January 1, 2011 2011;65(1):14–19.
17. Guyatt G, Rennie D, Meade M, Cook D, eds. Users’ Guides to the Medical Literature.
A Manual for Evidence-Based Clinical Practice. 3rd ed. Chicago, IL: American Medical
Association Press; 2015.
18. Higgins J, Green S. Cochrane Handbook for Systematic Review of
Interventions: Cochrane Book Series. Chichester, England: John Wiley & Sons
Ltd; 2008.
19. Briss PA, Zaza S, Pappaioanou M, et al. Developing an evidence-based Guide
to Community Preventive Services—methods. The Task Force on Community
Preventive Services. Am J Prev Med. 2000;18(1 Suppl):35–43.
20. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting sys-
tematic reviews and meta-analyses of studies that evaluate healthcare interven-
tions: explanation and elaboration. BMJ. 2009;339:b2700.
21. Moher D, Simera I, Schulz KF, Hoey J, Altman DG. Helping editors, peer review-
ers and authors improve the clarity, completeness and transparency of reporting
health research. BMC Med. 2008;6:13.
22. Page MJ, Shamseer L, Altman DG, et al. Epidemiology and reporting character-
istics of systematic reviews of biomedical research: a cross-sectional study. PLoS
Med. 2016;13(5):e1002028.
23. Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: a measure-
ment tool to assess the methodological quality of systematic reviews. BMC
Medical Research Methodology. 2007;7(1):10–16.
24. Moher D, Shamseer L, Clarke M, et al. Preferred reporting items for systematic
review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic
Reviews. 2015;4(1):1–9.
25. Blackhall K. Finding studies for inclusion in systematic reviews of interventions
for injury prevention the importance of grey and unpublished literature. Inj Prev.
Oct 2007;13(5):359.
26. Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses
of randomized trials of health care interventions. Cochrane Database Syst Rev.
2007;(2):Mr000010.
27. Mahood Q, Van Eerd D, Irvin E. Searching for grey literature for systematic
reviews: challenges and benefits. Res Synth Methods. Sep 2014;5(3):221–234.
28. Task Force on Community Preventive Services. Guide to Community Preventive
Services. www.thecommunityguide.org. Accessed June 5, 2016.
29. Harris RP, Helfand M, Woolf SH, et al. Current methods of the U.S. Preventive
Services Task Force. A review of the process. Am J Prev Med. 2001;20(3
Suppl):21–35.
30. Truman BI, Smith-Akin CK, Hinman AR, et al. Developing the Guide to Community
Preventive Services—overview and rationale. American Journal of Preventive
Medicine. 2000;18(1S):18–26.
31. Carande-Kulis VG, Maciosek MV, Briss PA, et al. Methods for systematic reviews
of economic evaluations for the Guide to Community Preventive Services. Task Force
on Community Preventive Services. Am J Prev Med. Jan 2000;18(1 Suppl):75–91.
32. Li R, Qu S, Zhang P, et al. Economic evaluation of combined diet and physical activ-
ity promotion programs to prevent type 2 diabetes among persons at increased
risk: a systematic review for the Community Preventive Services Task Force. Ann
Intern Med. Sep 15 2015;163(6):452–460.
33. Patel M, Pabst L, Chattopadhyay S, et al. Economic review of immunization infor-
mation systems to increase vaccination rates: a community guide systematic
review. J Public Health Manag Pract. May-Jun 2015;21(3):1–10.
34. Ran T, Chattopadhyay SK. Economic evaluation of community water fluoridation: a
Community Guide systematic review. Am J Prev Med. Jun 2016;50(6):790–796.
35. Ran T, Chattopadhyay SK, Hahn RA. Economic evaluation of school- based
health centers: a Community Guide systematic review. Am J Prev Med. Jul
2016;51(1):129–138.
36. Ioannidis JP. The mass production of redundant, misleading, and conflicted sys-
tematic reviews and meta-analyses. Milbank Q. Sep 2016;94(3):485–514.
37. Bartholomew L, Parcel G, Kok G, Gottlieb N, Fernandez M. Planning Health
Promotion Programs: An Intervention Mapping Approach. 3rd ed. San Francisco,
CA: Jossey-Bass Publishers; 2011.
38. Lefebvre C, Glanville J, Wieland LS, Coles B, Weightman AL. Methodological
developments in searching for studies for systematic reviews: past, present and
future? Systematic Reviews. 2013;2(1):1–9.
39. Ory MG, Towne SD, Won J, Forjuoh SN, Lee C. Social and environmental predic-
tors of walking among older adults. BMC Geriatrics. 2016;16(1):155–167.
40. Dowda M, Dishman RK, Pfeiffer KA, Pate RR. Family support for physi-
cal activity in girls from 8th to 12th grade in South Carolina. Prev Med. Feb
2007;44(2):153–159.
41. Gilson N, McKenna J, Cooke C, Brown W. Walking towards health in a university
community: a feasibility study. Prev Med. Feb 2007;44(2):167–169.
42. Kamada M, Kitayuguchi J, Inoue S, et al. A community-wide campaign to pro-
mote physical activity in middle-aged and elderly people: a cluster randomized
controlled trial. Int J Behav Nutr Phys Act. 2013;10:44.
43. Slater SJ, Nicholson L, Chriqui J, Turner L, Chaloupka F. The impact of state laws
and district policies on physical education and recess practices in a nationally rep-
resentative sample of US public elementary schools. Arch Pediatr Adolesc Med. Apr
2012;166(4):311–316.
44. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of bar-
riers to and facilitators of the use of evidence by policymakers. BMC Health Serv
Res. 2014;14:2.
45. Boeker M, Vach W, Motschall E. Google Scholar as replacement for systematic
literature searches: good relative recall and precision are not enough. BMC Med
Res Methodol. 2013;13:131.
46. Shultz M. Comparing test searches in PubMed and Google Scholar. Journal of the
Medical Library Association: JMLA. 2007;95(4):442–445.
47. Rimer BK, Glanz DK, Rasband G. Searching for evidence about health education
and health behavior interventions. Health Educ Behav. 2001;28(2):231–248.
C H A P T E R 9
w
Developing and Prioritizing
Intervention Options
For every complex problem, there is a solution that is simple, neat, and wrong.
H. L. Mencken
BACKGROUND
Resources are always limited in public health. This stems from the fund-
ing priorities in which public health receives only a small percentage of the
total health spending in the United States, making funding for public health
programs a “zero-sum game” in many settings.4,5 That is, the total available
resources for public health programs and services are not likely to increase
substantially from year to year. Only rarely are there exceptions to this sce-
nario, such as the investments several US states have made in tobacco control,
resulting in substantial public health benefits.6 Therefore, careful, evidence-
based examination of program options is necessary to ensure that the most
effective approaches to improving the public’s health are taken. The key is to
follow a process that is systematic, objective, and time-efficient, combining
science with the realities of the environment.7
At a macrolevel, part of the goal in setting priorities carefully is to shift
from resource-based decision making to a population-based process. To varying
degrees, this shift occurred in the United States over the past century. In the
resource-based planning cycle, the spiral of increased resources and increased
demand for resources helped to drive the cost of health care services continu-
ally higher, even as the health status of some population groups declined.8 In
contrast, the population-based planning cycle gives greater attention to popu-
lation needs and outcomes, including quality of life, and has been described
as the starting point in decision making.8 On a global scale, the Sustainable
Development Goals9 offer insights into the need to set a broad range of priori-
ties and the need to involve many sectors (e.g., economics, education) outside
Developing and Prioritizing Intervention Options ( 211 )
Box 9.1
HEALTH IN THE SUSTAINABLE DEVELOPMENT GOALS
Focus
1. End poverty in all its forms everywhere
2. End hunger, achieve food security and improved nutrition, and pro-
mote sustainable agriculture
3. Ensure healthy lives and promote well-being for all at all ages
4. Ensure inclusive and equitable quality education and promote life-
long learning opportunities for all
5. Achieve gender equality and empower all women and girls
6. Ensure availability and sustainable management of water and sanita-
tion for all
7. Ensure access to affordable, reliable, sustainable, and modern energy
for all
8. Promote sustained, inclusive, and sustainable economic growth, full
and productive employment, and decent work for all
9. Build resilient infrastructure, promote inclusive and sustainable
industrialization, and foster innovation
10. Reduce inequality within and among countries
11. Make cities and human settlements inclusive, safe, resilient, and
sustainable
12. Ensure sustainable consumption and production patterns
13. Take urgent action to combat climate change and its impacts (in line
with the United Nations Framework Convention on Climate Change)
14. Conserve and sustainably use the oceans, seas, and marine resources
for sustainable development
15. Protect, restore, and promote sustainable use of terrestrial ecosys-
tems, sustainably manage forests, combat desertification, and halt
and reverse land degradation and halt biodiversity loss
16. Promote peaceful and inclusive societies for sustainable develop-
ment, provide access to justice for all and build effective, account-
able, and inclusive institutions at all levels
17. Strengthen the means of implementation and revitalize the global
partnership for sustainable development
There are many different ways of prioritizing program and policy issues in pub-
lic health practice. Although it is unlikely that “one size fits all,” several tools
and resources have proved useful for practitioners in a variety of settings. In
addition to using various analytic methods, priority setting will occur at dif-
ferent geographic and political levels. An entire country may establish broad
health priorities. In the Netherlands, a comprehensive approach was applied
to health services delivery that included an investment in health technology
assessment, use of guidelines, and development of criteria to determine pri-
ority on waiting lists. Underlying this approach was the belief that excluding
certain health care services was necessary to ensure access of all citizens to
essential health care.11 In Croatia, a participatory, “bottom up” approach com-
bined quantitative and qualitative approaches to allow each county to set its
priorities based on local population health needs.12 The Croatian case also
illustrates how a country can avoid a centralized, one-size-fits-all approach
that may be ineffective.
In other instances, an individual state or province may conduct a priority-
setting process. Based on the recommendations of a group of consum-
ers and health care professionals, Oregon was one of the first US states to
rank public health services covered under its Medicaid program, using cost-
effectiveness analysis and various qualitative measures, to extend coverage
for high-priority services to a greater number of the state’s poor residents.13,14
The Oregon Health Evidence Review Commission (created by HB 2100 in
2011) leads efforts to develop a prioritized list of health services based on
methods that place a significant emphasis on preventive services and chronic
disease management. The process determines a benefit package designed to
keep a population healthy rather than focusing on health care services that treat
illness. Prioritization of health services relates to a set of variables that are
entered into a formula; variables include impact on healthy life-years, impact
on suffering, effects on population, vulnerability of population affected, effec-
tiveness, need for service, and net cost.15
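The excerpt does not reproduce Oregon's actual formula, so the sketch below only illustrates the general idea of combining such variables into a single priority score; the weights, scores, and service names are invented for illustration:

```python
# Hypothetical weighted-scoring sketch for ranking health services.
# Variable names follow the text; the weights and scores are invented
# and do not reflect the actual Oregon formula.
WEIGHTS = {
    "healthy_life_years": 3.0,
    "suffering": 2.0,
    "population_effects": 2.0,
    "vulnerability": 1.0,
    "effectiveness": 3.0,
    "need": 1.0,
    "net_cost": -1.0,  # higher net cost lowers priority
}

def priority_score(service):
    # Weighted sum across all variables in the formula
    return sum(WEIGHTS[k] * service[k] for k in WEIGHTS)

services = {
    "tobacco cessation": {"healthy_life_years": 9, "suffering": 7,
                          "population_effects": 8, "vulnerability": 5,
                          "effectiveness": 8, "need": 9, "net_cost": 2},
    "elective procedure X": {"healthy_life_years": 3, "suffering": 4,
                             "population_effects": 2, "vulnerability": 3,
                             "effectiveness": 5, "need": 4, "net_cost": 8},
}

ranked = sorted(services, key=lambda s: priority_score(services[s]), reverse=True)
print(ranked)  # the preventive service outranks the costly elective one
```

The design point is simply that preventive services with large population benefit and low net cost rise to the top of such a list, which matches the stated emphasis on prevention and chronic disease management.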
Experience in New Zealand and Australia shows that stakeholder input can
be valuable in priority setting and developing community action plans (Box
9.2).16,17 The team developed the ANGELO (Analysis Grid for Elements Linked
to Obesity) model that has been used to prioritize a set of core issues related to
obesity prevention. The ANGELO framework is generalizable to other regions
across the globe.18,19 Many of the same approaches that have been applied at
a macrolevel can be used to prioritize programs or policies within a public
health or voluntary health agency, within a health care organization, or at a
city or county level.
Box 9.2
PRIORITIZING ENVIRONMENTAL INTERVENTIONS
TO PREVENT OBESITY
There have been few systematic attempts to develop and apply objective cri-
teria for prioritizing clinical preventive services. As noted in chapter 8,
prioritization of clinical interventions tends to benefit from the development
of guidelines for primary care providers. Some of the earliest efforts included
the Canadian Task Force on the Periodic Health Examination20 and the US
Preventive Services Task Force.21
An approach to prioritizing clinical preventive services was first proposed
by Coffield and colleagues.3,22,23 This method was developed in conjunction
with the publication of the third edition of the Guide to Clinical Preventive
Services. With analytic methods, clinical interventions were ranked accord-
ing to two dimensions: burden of disease prevented by each service and
average cost-effectiveness. Burden was described by the clinically prevent-
able burden (CPB): the amount of disease that would be prevented by a
particular service in usual practice if the service were delivered to 100%
for example, a total score of 8 is more valuable but not necessarily twice
as valuable as a total score of 4.25 With this method, the three interven-
tions with the highest priority rankings were discussion of daily aspirin use
with men 40 years and older and women 50 years and older, vaccination of
children to prevent a variety of infectious diseases, and smoking cessation
advice for adults.
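This two-dimensional scoring can be sketched as follows; the service names and the 1-to-5 scores below are invented for illustration (the method's actual scores appear in the cited publications):

```python
# Sketch of the Coffield-style ranking: each service gets a score on
# clinically preventable burden (CPB) and on average cost-effectiveness
# (CE), and the total drives the ranking. Scores here are invented.
services = {
    "aspirin discussion": {"cpb": 5, "ce": 5},
    "childhood vaccination": {"cpb": 5, "ce": 5},
    "smoking cessation advice": {"cpb": 5, "ce": 4},
    "hypothetical service X": {"cpb": 2, "ce": 2},
}

def total_score(s):
    # Sum of the two dimension scores
    return s["cpb"] + s["ce"]

for name, s in sorted(services.items(),
                      key=lambda kv: total_score(kv[1]), reverse=True):
    print(f"{name}: {total_score(s)}")
```

As the text notes, these totals are ordinal rather than ratio-scaled: a total of 8 outranks a total of 4 but is not "twice as valuable."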
There are both qualitative and quantitative approaches to setting public health
priorities for communities. Although many and diverse definitions of “com-
munity” have been offered, we define it as a group of individuals who share
attributes of place, social interaction, and social and political responsibility.26
In practice, many data systems are organized geographically, and therefore
communities are often defined by place. A sound priority-setting process can
help generate widespread support for public health issues when it is well docu-
mented and endorsed by communities.
The prioritization approach, based on comparison of a population
health problem with the “ideal” or “achievable” population health status,
is sometimes used to advance the policy decision-making process by sin-
gling out an objective, limited set of health problems. It usually involves
identifying desirable or achievable levels for an epidemiologic measure such
as mortality, incidence, or prevalence. One such approach used the low-
est achieved mortality rate, calculated from mortality rates that actually
have been achieved by some population or population segment at some
time and place, and risk-eliminated mortality rates, estimated by mortal-
ity levels that would have been achieved with elimination of known-risk
factors.27 A variation of this approach can be used to identify disparities
related to race and ethnicity, gender, or other groupings of populations.
Similar approaches have been applied in states in the United States,28–30 in
Japan,31 and in Spain.32 Another approach used a comparison of observed
and expected deaths to estimate the number of potentially preventable
deaths per year in each state in the United States for the five leading causes
of death.33
Multiple groups of researchers and practitioners have proposed standard-
ized, quantitative criteria for prioritizing public health issues at the commu-
nity level.5,34–41 Each of these methods differs, but they have some combination
of three common elements (similar to those for clinical priorities noted pre-
viously). First, each relies on some measure of burden, whether measured
in mortality, morbidity, or years of potential life lost. Some methods also
attempt to quantify preventability (i.e., the potential effects of intervention).
And finally, resource issues are often addressed in the decision-making pro-
cess, in terms of both costs of intervention and the capacity of an organization
to carry out a particular program or policy. Two analytic methods frequently
used to supplement the prioritization process are economic appraisal and
comparison with “ideal” or “achievable” population health status.27 Several
approaches to categorizing and prioritizing interventions that use the three
common elements are discussed briefly here, along with one example each of
the approaches based on economic data and on achievable population
health status.
In most areas of public health, important and creative decisions are enhanced
by group decision-making processes. Often in a group process, a consensus
is reached on some topics. There are advantages and disadvantages to group
decision-making processes (Table 9.2), but the former generally outweigh the
latter.42 Probably the biggest advantage is that more and better information
include better acceptance of the final decision, enhanced communication,
and more accurate decisions. The biggest disadvantage of group decision
making is that the process takes longer. However, the management literature
shows that, in general, the more “person-hours” that go into a decision, the
more likely it will be that the correct one emerges, and the more likely that
the decision will be implemented.43,44 Other potential disadvantages include
the potential for indecisiveness, compromise decisions, and domination by
one individual. In addition, an outcome known as “groupthink” may result,
Table 9.2. ADVANTAGES AND DISADVANTAGES OF GROUP DECISION MAKING

Advantages:
- More information and knowledge are available
- More alternatives are likely to be generated
- Better acceptance of the final decision is likely, often among those who will carry out the decision
- Enhanced communication of the decision may result
- More accurate and creative decisions often emerge

Disadvantages:
- The process takes longer and may be costlier
- Compromise decisions resulting from indecisiveness may emerge
- One person may dominate the group
- “Groupthink” may occur
in which the group’s desire for consensus and cohesiveness overwhelms its
desire to reach the best possible decision.43,45 Ways to offset groupthink
include rotating new members into a decision-making group, ensuring that
leaders speak less and listen more (the 80/20 rule), and encouraging
principled dissent.46
The following sections briefly outline several popular brainstorming
techniques that are useful in developing and managing an effective group
process for prioritization. Other techniques for gathering information
from groups and individuals (e.g., focus groups, key informant interviews)
are described in chapters 5 and 11. The methods that follow are both
Box 9.3
A MIXED-M ETHOD APPROACH FOR PRIORITIZING ISSUES
AT THE COMMUNITY LEVEL
The most effective approaches for improving the health of the pub-
lic are likely to involve a merger of scientific evidence with community
preferences. In three communities in Massachusetts, Ramanadhan and
Viswanath worked with community-based organizations (CBOs) to imple-
ment an evidence-based decision-making process.47 The team sought to
better understand the drivers of priority setting and the ways in which
data influence the prioritization process. The overall goal of their efforts
was to build capacity among CBOs to adopt and sustain evidence-based
practices. Their approach involved qualitative methods (focus groups with
31 staff members) and quantitative data collection (a survey of 214 staff
members). In the focus groups, participants were asked to describe their
use of local or state data for priority setting. In the quantitative survey,
respondents were queried about the resources they use in setting priori-
ties (multiple options, including community needs assessment, academic
journals, provider observations, and many others). Their team found that
the top drivers of priority setting included findings from needs assess-
ment, existing data, organizational mission, partnerships, and funding.
They also found that drivers sometimes compete (e.g., funding streams
pushing one way and community needs another). Several key barriers for
using data in priority setting were identified. These included out-of-date
information, lack of local data, and challenges in accessing data. Overall,
the project showed the value of mixed-methods approaches for establish-
ing a data-driven approach to priority setting among CBOs. By building
capacity among CBOs and taking a systematic approach to priority set-
ting, it is likely that programs can be developed with greater impact and
ability to address health equity.
Using a process that relies on existing data, the diamond model of prioritiza-
tion considers two quantitative dimensions: the magnitude of rates and the
trends in rates. It classifies each of these two dimensions into three groups
resulting in a grid of 9 cells (3 × 3).48 In Figure 9.1, the diamond model was
applied for 30 causes of death in Taiwan. The diamond model permits coun-
try, state, and local comparisons of various endpoints, based on morbidity
and mortality rates. This initial prioritization is based solely on quantitative
data and does not include qualitative factors. One of its major advantages is
that it is based on existing data sets and is therefore relatively easy to carry
out. A disadvantage is that it is often based on mortality rates, which are
not highly explanatory for some causes of morbidity (e.g., arthritis, mental
health).
[Figure 9.1: The diamond model applied to 30 causes of death in Taiwan. Rows classify the magnitude of rates (large, moderate, small) and columns classify the trend in rates (increase, no change), yielding priority ranks from 1st (e.g., pneumonia, lung cancer) through 5th (e.g., cervical cancer, drowning).]
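A minimal sketch of the diamond model's classification logic, assuming rates are grouped into thirds by rank and that priority ranks run along the grid's diagonals (both rules are assumptions based on the 3 × 3 description above, not a published algorithm):

```python
def tertile(value, all_values):
    """Assign 0 (top third) through 2 (bottom third) by rank among all causes."""
    rank = sorted(all_values, reverse=True).index(value)
    return min(2, 3 * rank // len(all_values))

def diamond_rank(magnitude_level, trend_level):
    """Combine a magnitude tertile (0=large, 1=moderate, 2=small) and a
    trend group (0=increasing, 1=no change, 2=decreasing) into one of
    five priority ranks along the grid's diagonals (1 = highest)."""
    return magnitude_level + trend_level + 1

rates = [9, 8, 7, 3, 2, 1]           # illustrative mortality rates
print(diamond_rank(tertile(9, rates), 0))  # large magnitude, increasing -> 1
print(diamond_rank(tertile(1, rates), 2))  # small magnitude, decreasing -> 5
```

Because the rule uses only existing surveillance data, it reproduces the model's main advantage: rankings can be recomputed mechanically as rates are updated.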
The Hanlon Method
BPR = [(A + B) × C] / 3 × D,
where A is the size of the problem, B the seriousness of the problem, C the
effectiveness of the intervention, and D the propriety, economics, accept-
ability, resources, and legality (known as PEARL). Values for each part of the
formula are converted from rates and ratings to scores. Finer details of these
quantitative rating systems are available in the original publications.34, 37, 41
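A direct translation of the formula into code may make the mechanics clearer. In this sketch the 0–10 component scores and the treatment of PEARL as an all-or-nothing screen are illustrative assumptions; the original publications give the full rating scales:

```python
def basic_priority_rating(size, seriousness, effectiveness, pearl):
    """Hanlon's Basic Priority Rating: BPR = [(A + B) * C] / 3 * D.

    A (size) and B (seriousness) rate the problem, C rates intervention
    effectiveness, and D is 1 only when every PEARL screen (propriety,
    economics, acceptability, resources, legality) is satisfied.
    """
    d = 1 if all(pearl.values()) else 0
    return (size + seriousness) * effectiveness / 3 * d

pearl = {"propriety": True, "economics": True, "acceptability": True,
         "resources": True, "legality": True}
print(basic_priority_rating(8, 6, 9, pearl))  # (8 + 6) * 9 / 3 * 1 = 42.0
```

Note that a single failed PEARL screen zeroes the rating regardless of how large or serious the problem is, which is the intended behavior of the D term.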
As an illustration, the Missouri Department of Health and Senior
Services applied a prioritization method using surveillance-derived data
(the Priority MICA).37 The Priority MICA extends the work of Vilnius and
Dandoy by adding to the earlier criteria (magnitude, severity, urgency, pre-
ventability) a new criterion of community support, two additional mea-
sures of severity (disability, number of hospital days of care), two more
measures of urgency (incidence and prevalence trends), a criterion of racial
disparity, another measure of magnitude (prevalence of risk factors, mea-
sured from two sources), and a new measure of preventability. The ranking
of a final score, from highest to lowest priority, identified the counties with
significantly higher morbidity and mortality than the state. This informa-
tion can be displayed in maps to identify each of the priority diseases and
conditions and to prioritize by geographical area (county). For each condi-
tion, map colors reflected the three possible classifications of mortality and
morbidity in each county in relation to the state: significantly higher than
state, higher than state, same as or less than state. These data show how the
outcome selected (e.g., disability, racial disparity in deaths) can have a large
impact on the relative importance of different diseases or risk conditions
(Table 9.3).50
The Strategy Grid
preventability. Within this framework, options in the upper left and lower
right cells are relatively easy to prioritize. Those in the lower left and upper
right are more difficult to assess. A highly important issue but one about
which little is known from a preventive standpoint should be the focus of
innovation in program development. A strong focus on evaluation should be
maintained in this category so that new programs can be assessed for effec-
tiveness. A program or policy in the upper right corner might be initiated for
political, social, or cultural reasons. The Strategy Grid method can be varied
by changing the labels for the X and Y axes; for example, need and feasibility
might be substituted for importance and changeability.52
( 222 ) Evidence-Based Public Health
The four cells of the grid (importance across the columns, changeability down the rows) can be summarized as follows:
More important, more changeable: highest priority for program focus. Example: a program to improve vaccination coverage in children, adolescents, and adults.
Less important, more changeable: low priority except to demonstrate change for political or other purposes. Example: a program to prevent work-related pneumoconiosis.
More important, less changeable: priority for an innovative program, with evaluation essential. Example: a program to prevent mental impairment and disability.
Less important, less changeable: no intervention program. Example: a program to prevent Hodgkin’s disease.
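The grid’s logic reduces to a four-way lookup; a minimal sketch (the cell labels paraphrase the grid, and the function form is illustrative):

```python
def strategy_grid(important, changeable):
    """Map an issue's importance and changeability to the grid's four cells."""
    if important and changeable:
        return "highest priority for program focus"
    if important and not changeable:
        return "priority for innovative program; evaluation essential"
    if changeable:
        return "low priority except for political or other purposes"
    return "no intervention program"

print(strategy_grid(important=True, changeable=False))
# priority for innovative program; evaluation essential
```

Swapping the axis labels (e.g., need and feasibility for importance and changeability) changes only the inputs, not the structure of the lookup.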
The Delphi Method
The Delphi method was developed by the Rand Corporation in the 1950s. It is
named after the oracle of Delphi from Ancient Greece, who could offer advice
on the right course of action in many situations.53 It is a judgment tool for
prediction and forecasting, in which a panel of anonymous experts is given
intensive questionnaires and feedback in order to obtain consensus on
a particular topic.54,55 Although the method has been modified and used in
various ways over the years, it remains a useful way to solicit and refine expert
opinion. The Delphi method is most appropriate for broad, long-range issues
such as strategic planning and environmental assessments. It is not feasible
for routine decisions. It can be especially useful for a geographically dispersed
group of experts. There are three types of Delphi: classical, policy, and deci-
sion.56 The decision Delphi is most relevant here because it provides a forum
for decisions. Panel members are not anonymous (although responses are),
and the goal is a defined and supported outcome. Another important charac-
teristic of the Delphi method is that it is iterative and responses are refined
over rounds of sampling.
The first step in a Delphi process involves the selection of an expert panel.
This panel should generally include a range of experts across the public
health field, including practitioners, researchers, and funders. A panel of 30
or fewer members is often used.57 The Delphi method may involve a series
of questionnaires (by mail or email) that begin more generally and, through
iteration, become more specific over several weeks or months. Open-ended
questions may be used in early drafts, with multiple-choice responses in later
versions. A flow chart for a typical Delphi process is shown in Figure 9.2.58
[Figure 9.2: A typical Delphi process: select the expert panel and formulate the first-round questionnaire; distribute and collect responses; analyze statistically; formulate the second-round questionnaire; distribute, collect, and analyze again; formulate the third-round questionnaire; distribute and collect responses; then produce a final estimation based on statistical analysis and circulate the results.]
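The round-by-round narrowing in a Delphi process can be caricatured in code. This sketch assumes panelists revise numeric estimates partway toward the reported group median and that consensus is declared when the spread falls within a tolerance; both rules are simplifying assumptions, not part of the formal method:

```python
import statistics

def delphi_rounds(initial_estimates, pull=0.5, tolerance=1.0, max_rounds=3):
    """Simulate Delphi rounds: after each round, panelists revise their
    estimate partway toward the group median fed back to them; stop
    when the spread (max - min) falls within tolerance."""
    estimates = list(initial_estimates)
    for _ in range(max_rounds):
        median = statistics.median(estimates)
        if max(estimates) - min(estimates) <= tolerance:
            break
        estimates = [e + pull * (median - e) for e in estimates]
    return statistics.median(estimates), estimates

final, panel = delphi_rounds([10, 14, 20, 30])
print(final)   # the panel's converging consensus estimate
```

Even this toy version shows the method's signature behavior: iteration with statistical feedback shrinks the spread of opinion round by round.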
Another useful method is the nominal group technique (NGT).59 Unlike the
Delphi method, in which panel members do not see one another, the NGT
involves in-person interaction in the same room. However, the 6 to 10 members
represent a group in name only and may not always interact as a group
in a typical work setting. The NGT can be useful in generating creative and
innovative alternatives and is more feasible than the Delphi method for rou-
tine decisions. A key to a successful NGT is an experienced and competent
facilitator, who assembles the group and outlines the problem to them. It is
also important to outline the specific rules that the NGT will follow.57 Often
data and information, such as data from a community assessment, will have
been provided to the group in advance of the meeting. Group members are
asked to write down as many alternatives as they can think of. They then
take turns stating these ideas, which are recorded on a flipchart or black-
board. Discussion is limited to simple clarification. After all alternatives
have been listed, each is discussed in more detail. When discussion is com-
pleted, sometimes after a series of meetings, the various alternatives are
generally voted on and rank-ordered. The primary advantage of NGT is that
it can identify a large number of alternatives while minimizing the impact
of group or individual opinions on the responses of individuals. The main
disadvantage is that the team leader or administrator may not support the
highest ranked alternative, dampening group enthusiasm if his or her work
is rejected.
Multivoting Technique
number of items on the list. The group continues voting and narrowing
down the list until the desired number of priorities are determined. Often
this is a range of three to five items. The group can then discuss the pros
and cons of remaining items, either in small groups or among the group as
a whole.
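The repeated narrowing in multivoting can be sketched as successive tallies; the rule of keeping the top-voted half each round (never fewer than the target number) is an illustrative assumption:

```python
from collections import Counter

def multivote(items, ballots_by_round, target=5):
    """Narrow a list by repeated voting: each round, drop the lower-voted
    items (keeping at least `target`), until no more than `target` remain."""
    remaining = list(items)
    for ballots in ballots_by_round:
        if len(remaining) <= target:
            break
        tally = Counter(choice for ballot in ballots for choice in ballot
                        if choice in remaining)
        keep = max(target, len(remaining) // 2)
        remaining = [item for item, _ in tally.most_common(keep)]
    return remaining

items = ["A", "B", "C", "D", "E", "F", "G", "H"]
rounds = [[["A", "B", "C", "D"], ["A", "B", "C", "E"], ["A", "B", "F", "G"]]]
print(multivote(items, rounds))
```

A real session would, of course, gather fresh ballots between rounds rather than take them all up front; the structure of tally-then-cut is the point here.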
Regardless of the method used, the first major stage in setting community
priorities is to decide on the criteria. The framework might include one of
those described previously or may be a composite of various approaches.
After criteria are determined, the next steps include forming a working
team or advisory group, assembling the necessary data to conduct the
prioritization process, establishing a process for stakeholder input and
review, and determining a process for revisiting priorities at regular inter-
vals. Vilnius and Dandoy recommend that a six- to eight-member group be
assembled to guide the BPR process.41 This group should include members
within and outside the agency. A generic priority-setting worksheet is pro-
vided in Table 9.5.40 This worksheet provides some guidance on the types
of information that would typically need to be collected and summarized
before a work group begins its activity.
In setting priorities within public health, it is important to consider several
issues related to leadership and measurement. No determination of public
health priorities should be reduced solely to numbers; values, social justice, and
the political climate all play roles. Changes in public health leadership present
a unique challenge. The median tenure for a US state public health officer is
only 1.8 years,60 whereas for city and county health officers the median ten-
ure is longer (about 6 years).61 Because each new leader is likely to bring new
ideas, this turnover in leadership often results in a lack of long-term focus on
public health priorities that require years or decades to accomplish. Each ana-
lytic method for prioritization has particular strengths and weaknesses. Some
methods rely heavily on quantitative data, but valid and usable data can be dif-
ficult to come by, especially for smaller geographic areas such as cities or neigh-
borhoods. It can also be difficult to identify the proper metrics for comparison
of various health conditions. For example, using mortality alone would ignore
the disabling burden of arthritis when it is compared with other chronic dis-
eases. Utility-based measures (e.g., QALYs) are advantageous because they are
comparable across diseases and risk factors. Rankings, especially close ranks,
should be assessed with caution. One useful approach is to divide a distribu-
tion of health issues into quartiles or quintiles and compare the extremes of
a distribution. In addition, some key stakeholders may find that quantitative
methods of prioritization fail to present a full picture, suggesting the need to
use methods that combine quantitative and qualitative approaches.
Table 9.5. GENERIC WORKSHEET FOR PRIORITY SETTING
Criteria in the worksheet include measures such as the mortality rate and community concern, with space for other locally chosen criteria, each assigned a weight.a
a A weight ensures that certain characteristics have a greater influence than others have in the final priority ranking. A sample formula might be: 2(Prevalence Score) + Community Concern Score + 3(Medical Cost Score) = Priority Score. In this example, the weight for prevalence is 2 and the weight for medical cost is 3. Users might enter data or assign scores (such as 1–5) for each criterion and use the formula to calculate a total score for the health event.
Source: Healthy People 2010 Toolkit.40
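The sample weighting formula from the worksheet footnote can be sketched directly; the criterion names and 1–5 scores below are illustrative:

```python
def priority_score(scores, weights):
    """Weighted sum of criterion scores; unweighted criteria default to 1.
    E.g., 2*(Prevalence) + Community Concern + 3*(Medical Cost)."""
    return sum(weights.get(criterion, 1) * value
               for criterion, value in scores.items())

scores = {"prevalence": 4, "community_concern": 3, "medical_cost": 5}
weights = {"prevalence": 2, "medical_cost": 3}
print(priority_score(scores, weights))  # 2*4 + 3 + 3*5 = 26
```

Recomputing the scores under alternative weights is a quick way to test how sensitive a ranking is to the criteria a work group chooses.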
Creativity and its role in effective decision making are not fully understood.
Creativity is the process of developing original, imaginative, and innovative
options. To understand the role of creativity in decision making, it is helpful
to know about its nature and process and the techniques for nurturing it.
Researchers have sought to understand the characteristics of creative peo-
ple. Above a threshold in the intelligence quotient, there does not appear to
be a strong correlation between creativity and intelligence.64 There also seem
to be few differences in creativity between men and women.65 Several other
characteristics have been consistently associated with creativity. The typical
period in the life cycle of greatest creativity appears to be between the ages of
30 and 40 years. It also seems that more creative people are less susceptible to
social influences than those who are less creative.
The creative process has been described in four stages: preparation, incuba-
tion, insight, and verification.66 The preparation phase is highly dependent on
the education and training of the individual embarking on the creative pro-
cess. Incubation usually involves a period of relaxation after a period of prepa-
ration. The human mind gathers and sorts data, and then needs time for ideas
With creativity comes uncertainty. Whenever you have uncertainty people feel
uncomfortable and insecure. If [a creative decision] is not successful, the nega-
tive things that can happen to you are ten times greater than the positive things.
(pp. 723–724)
Analytic frameworks (also called logic models or causal frameworks) have ben-
efited numerous areas of public health practice, particularly in developing
and implementing clinical and community-based guidelines.68–70 An analytic
framework is a diagram that depicts the interrelationships between program
resources, intervention activities, outputs, shorter term intervention out-
comes, and longer term public health outcomes. The major purpose of an ana-
lytic framework is to map out the linkages on which to base conclusions about
intervention effectiveness. An underlying assumption is that various linkages
represent “causal pathways,” some of which are mutable and can be inter-
vened on. Numerous types of analytic frameworks are described in Battista
and Fletcher.71 Logic models and their role in action planning are discussed
in chapter 10.
People designing public health interventions often have in mind an ana-
lytic framework that leads from program inputs to health outputs if the pro-
gram works as intended. It is important for planning and evaluation purposes
that what Lipsey has termed this “small theory” of the intervention be made
explicit early, often in the form of a diagram.72 In attempting to map inputs,
mediators, and outputs, it is important to determine whether mediators, or
constructs, lie “upstream” or “downstream” from a particular intervention. As an
analytic framework develops, the diagram also identifies key outcomes to be
considered when a data collection plan is formulated. These are
then translated into public health indicators (i.e., measures of the extent to
[Figure 9.3: A generic analytic framework: a primary preventive intervention modifies risk factors, including ill health behaviors, among asymptomatic individuals, so that the target condition is prevented.]
which targets in health programs are being reached). Besides helping to iden-
tify key information to be collected, an analytic framework can also be viewed
as a set of hypotheses about program action, including the time sequence in
which program-related changes should occur; these can later guide data analy-
sis. If the program is subsequently successful in influencing outcomes at the
end of this causal chain, having measures of the intermediate steps available
aids interpretation by clarifying how those effects came about. Conversely, if
little change in ultimate outcomes is observed, having measures of intermedi-
ate steps can help to diagnose where the causal chain was broken.
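Because an analytic framework is a diagram of causal pathways, it can be held as a small directed graph, which makes “upstream” and “downstream” queries concrete. The nodes below loosely follow the generic framework of Figure 9.3 and are illustrative:

```python
framework = {  # each edge: construct -> constructs it influences
    "preventive intervention": ["risk factors"],
    "risk factors": ["target condition"],
    "target condition": [],
}

def downstream(graph, node):
    """All constructs reachable from `node` along causal pathways."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        nxt = stack.pop()
        if nxt not in seen:
            seen.add(nxt)
            stack.extend(graph.get(nxt, []))
    return seen

print(sorted(downstream(framework, "preventive intervention")))
# ['risk factors', 'target condition']
```

Walking the graph this way mirrors the evaluator's task described above: if the final outcome does not change, checking which intermediate nodes did change shows where the causal chain was broken.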
Analytic frameworks can be relatively simple or complicated, with every
possible relationship between risk factors, interventions, and health out-
comes. A generic analytic framework is shown in Figure 9.3.71 A more compre-
hensive approach may describe potential relationships between sociopolitical
context, social position, the health care system, and long-term health out-
comes, as described in Figure 9.4.73 By developing this and related diagrams,
[Figure 9.4: A comprehensive framework linking the socioeconomic and political context, social position, the health care system, and long-term health outcomes.]
researchers, practitioners, and policy makers are able to (1) describe the
inputs needed for a particular intervention; (2) indicate intervention options
for changing relevant outcomes; (3) indicate categories of relevant interven-
tions; (4) describe the outputs and outcomes that the interventions attempt
to influence; and (5) indicate the types of intervention activities that were
included in a program and those that were not.74, 75
a Includes the physical, legal, social, and cultural environments.
Source: Adapted from Yen and Syme.79
SUMMARY
The public health practitioner has many tools at her or his fingertips for
identifying and prioritizing program and policy options. This chapter has
summarized several approaches that have proved useful for public health
practitioners. As one proceeds through this process, several key points should
be kept in mind.
KEY CHAPTER POINTS
• The public health practitioner has many tools at her or his fingertips for
identifying and prioritizing program and policy options.
• In public health decision making, there is often not one “correct” answer.
• Although decisions are made in the context of uncertainty and risk, classi-
cal decision theory suggests that when managers have complete informa-
tion, they behave rationally.
Selected Websites
Centers for Disease Control and Prevention (CDC) Program Performance and Evaluation
Office (PPEO) <http://www.cdc.gov/eval/resources/index.htm>. The CDC PPEO
has developed a comprehensive list of evaluation documents, tools, and links to
other websites. These materials include documents that describe principles and
standards, organizations and foundations that support evaluation, a list of jour-
nals and online publications, and access to step-by-step manuals.
County Health Rankings and Roadmaps Action Center
<http://www.countyhealthrankings.org/roadmaps/action-center/choose-effective-policies-programs>.
The County Health Rankings and Roadmaps Action
Center provides guidance on selecting evidence-informed policies and programs
that target priority health issues. The Action Center also provides additional
learning and resources relevant to selecting a program or policy for a community.
Disease Control Priorities Project (DCPP) <http://www.dcp2.org>. The DCPP is an
ongoing effort to assess disease control priorities and produce evidence-based
analysis and resource materials to inform health policymaking in developing
countries. DCPP has produced eight volumes providing technical resources that
can assist developing countries in improving their health systems and, ultimately,
the health of their people.
Guide to Community Preventive Services (the Community Guide) <http://www.thecommunityguide.org/index.html>. The Community Guide provides guidance in choosing evidence-based programs and policies to improve health and prevent disease
at the community level. The Task Force on Community Preventive Services—an
REFERENCES
8. Green LW. Health education’s contribution to public health in the twentieth cen-
tury: a glimpse through health promotion’s rear-view mirror. Annual Review of
Public Health. 1999;20:67–88.
9. World Health Organization. The Sustainable Development Goals 2015–2030.
http://una-gp.org/the-sustainable-development-goals-2015-2030/. Accessed
October 8, 2016.
10. Simon HA. Administrative Behavior: A Study of Decision-Making Processes in
Administrative Organizations. 4th ed. New York, NY: Free Press; 1997.
11. Gheaus A. Solidarity, justice and unconditional access to healthcare. J Med Ethics.
Jun 28 2016.
12. Sogoric S, Dzakula A, Rukavina TV, et al. Evaluation of Croatian model of polycen-
tric health planning and decision making. Health Policy. Mar 2009;89(3):271–278.
13. Eddy DM. Oregon’s methods. Did cost-effectiveness analysis fail? JAMA.
1991;266(3):417–420.
14. Klevit HD, Bates AC, Castanares T, Kirk PE, Sipes-Metzler PR, Wopat R.
Prioritization of health care services: a progress report by the Oregon health ser-
vices commission. Archives of Internal Medicine. 1991;151:912–916.
15. Oregon Health Authority. Health Evidence Review Commission: Prioritization
Methodology. http://www.oregon.gov/oha/herc/Pages/Prioritization-
Methodology.aspx. Accessed July 15, 2016.
16. Swinburn B, Egger G, Raza F. Dissecting obesogenic environments: the develop-
ment and application of a framework for identifying and prioritizing environ-
mental interventions for obesity. Preventive Medicine. 1999;29(6 Pt 1):563–570.
17. Simmons A, Mavoa HM, Bell AC, et al. Creating community action plans for obe-
sity prevention using the ANGELO (Analysis Grid for Elements Linked to Obesity)
Framework. Health Promot Int. Dec 2009;24(4):311–324.
18. Braun KL, Nigg CR, Fialkowski MK, et al. Using the ANGELO model to develop
the children’s healthy living program multilevel intervention to promote obesity
preventing behaviors for young children in the U.S.-affiliated Pacific Region. Child
Obes. Dec 2014;10(6):474–481.
19. Cauchi D, Rutter H, Knai C. An obesogenic island in the Mediterranean: map-
ping potential drivers of obesity in Malta. Public Health Nutr. Dec
2015;18(17):3211–3223.
20. Canadian Task Force on the Periodic Health Examination. The periodic health
examination. Canadian Task Force on the Periodic Health Examination. Can Med
Assoc J. Nov 3 1979;121(9):1193–1254.
21. US Preventive Services Task Force. Guide to Clinical Preventive Services: An
Assessment of the Effectiveness of 169 Interventions. Baltimore: Williams &
Wilkins; 1989.
22. Coffield AB, Maciosek MV, McGinnis JM, et al. Priorities among recommended
clinical preventive services (1). Am J Prev Med. 2001;21(1):1–9.
23. Maciosek MV, Coffield AB, Edwards NM, Flottemesch TJ, Goodman MJ, Solberg
LI. Priorities among effective clinical preventive services: results of a systematic
review and analysis. Am J Prev Med. Jul 2006;31(1):52–61.
24. Partnership for Prevention. Rankings of Preventive Services for the US Population.
https://www.prevent.org/National-Commission-on-Prevention-Priorities/
Rankings-of-Preventive-Services-for-the-US-Population.aspx. Accessed July
15, 2016.
25. Maciosek MV, Coffield AB, McGinnis JM, et al. Methods for priority setting
among clinical preventive services. Am J Prev Med. 2001;21(1):10–19.
26. Patrick DL, Wickizer TM. Community and health. In: Amick BCI, Levine S, Tarlov
AR, Chapman Walsh D, eds. Society and Health. New York, NY: Oxford University
Press; 1995:46–92.
27. Hahn RA, Teutsch SM, Rothenberg RB, Marks JS. Excess deaths from nine chronic
diseases in the United States, 1986. JAMA. 1990;264(20):2654–2659.
28. Carvette ME, Hayes EB, Schwartz RH, Bogdan GF, Bartlett NW, Graham LB.
Chronic disease mortality in Maine: assessing the targets for prevention. J Public
Health Manag Pract. 1996;2(3):25–31.
29. Hoffarth S, Brownson RC, Gibson BB, Sharp DJ, Schramm W, Kivlaham C.
Preventable mortality in Missouri: excess deaths from nine chronic diseases,
1979-1991. Mo Med. 1993;90(6):279–282.
30. Kindig D, Peppard P, Booske B. How healthy could a state be? Public Health Rep.
Mar-Apr 2010;125(2):160–167.
31. Fukuda Y, Nakamura K, Takano T. Increased excess deaths in urban areas: quanti-
fication of geographical variation in mortality in Japan, 1973-1998. Health Policy.
May 2004;68(2):233–244.
32. Regidor E, Inigo J, Sendra JM, Gutierrez-Fisac JL. [Evolution of mortality
from principal chronic diseases in Spain 1975-1988]. Med Clin (Barc). Dec 5
1992;99(19):725–728.
33. Garcia M, Bastian B, Rossen L, et al. Potentially Preventable Deaths Among
the Five Leading Causes of Death—United States, 2010 and 2014. MMWR.
2016;65(45):1245–1255.
34. Hanlon J, Pickett G. Public Health Administration and Practice. Santa Clara,
CA: Times Mirror/Mosby College Publishing; 1982.
35. Meltzer M, Teutsch SM. Setting priorities for health needs and managing
resources. In: Stroup DF, Teutsch SM, eds. Statistics in Public Health. Quantitative
Approaches to Public Health Problems. New York, NY: Oxford University Press;
1998:123–149.
36. Murray CJ, Frenk J. Health metrics and evaluation: strengthening the science.
Lancet. Apr 5 2008;371(9619):1191–1199.
37. Simoes EJ, Land G, Metzger R, Mokdad A. Prioritization MICA: a Web-based
application to prioritize public health resources. J Public Health Manag Pract. Mar-
Apr 2006;12(2):161–169.
38. Simons-Morton BG, Greene WH, Gottlieb NH. Introduction to Health Education
and Health Promotion. 2nd ed. Prospect Heights, IL: Waveland Press; 1995.
39. Sogoric S, Rukavina TV, Brborovic O, Vlahugic A, Zganec N, Oreskovic S. Counties
selecting public health priorities—a “bottom-up” approach (Croatian experience).
Coll Antropol. Jun 2005;29(1):111–119.
40. US Department of Health and Human Services. Healthy People 2010 Toolkit.
Washington, DC: US Department of Health and Human Services; 2001.
41. Vilnius D, Dandoy S. A priority rating system for public health programs. Public
Health Reports. 1990;105(5):463–470.
42. Griffin RW. Management. 7th ed. Boston, MA: Houghton Mifflin Company; 2001.
43. Von Bergen CW, Kirk R. Groupthink. When too many heads spoil the decision.
Management Review. 1978;67(3):44–49.
44. Golembiewski R. Handbook of Organizational Consultation. 2nd ed. New York,
NY: Marcel Dekker, Inc.; 2000.
45. Janis IL. Groupthink. Boston, MA: Houghton Mifflin; 1982.
46. Sunstein C, Hastie R. Wiser: Getting Beyond Groupthink to Make Groups Smarter.
Boston, MA: Harvard Business Review Press; 2015.
67. Ford C, Gioia D. Factors influencing creativity in the domain of managerial deci-
sion making. J Manag. 2000;26(4):705–732.
68. Woolf SH, DiGuiseppi CG, Atkins D, Kamerow DB. Developing evidence-based
clinical practice guidelines: lessons learned by the US Preventive Services Task
Force. Annual Review of Public Health. 1996;17:511–538.
69. McLaughlin JA, Jordan GB. Logic models: A tool for telling your program’s performance
story. Eval Program Planning. 1999;22:65–72.
70. Task Force on Community Preventive Services. Guide to Community Preventive
Services. www.thecommunityguide.org. Accessed June 5, 2016.
71. Battista RN, Fletcher SW. Making recommendations on preventive prac-
tices: methodological issues. American Journal of Preventive Medicine. 1988;4
Suppl:53–67.
72. Lipsey M. Theory as method: small theories of treatments. New Directions for
Evaluation. 2007;114:30–62.
73. WHO. Closing the gap in a generation: health equity through action on the social deter-
minants of health. Final Report of the Commission on Social Determinants of Health.
Geneva: WHO; 2008.
74. Briss PA, Brownson RC, Fielding JE, Zaza S. Developing and using the Guide to
Community Preventive Services: lessons learned about evidence-based public
health. Annu Rev Public Health. Jan 2004;25:281–302.
75. Briss PA, Zaza S, Pappaioanou M, et al. Developing an evidence-based Guide
to Community Preventive Services: Methods. The Task Force on Community
Preventive Services. Am J Prev Med. 2000;18(1 Suppl):35–43.
76. McKinlay JB. Paradigmatic obstacles to improving the health of populations—
implications for health policy. Salud Publica Mex. Jul-Aug 1998;40(4):369–379.
77. McKinlay JB, Marceau LD. Upstream healthy public policy: lessons from the battle
of tobacco. Int J Health Serv. 2000;30(1):49–69.
78. Berkman LF. Social epidemiology: social determinants of health in the United
States: are we losing ground? Annu Rev Public Health. Apr 29 2009;30:27–41.
79. Yen IH, Syme SL. The social environment and health: a discussion of the epide-
miologic literature. Annual Review of Public Health. 1999;20:287–308.
80. McKinlay JB. The promotion of health through planned sociopoliti-
cal change: Challenges for research and policy. Social Science and Medicine.
1993;36(2):109–117.
81. McKinlay JB, Marceau LD. To boldly go. Am J Public Health. 2000;90(1):25–33.
CHAPTER 10
Developing an Action Plan and
Implementing Interventions
Even if you’re on the right track, you’ll get run over if you just sit there.
Will Rogers
[Figure: The three stages of program development: 1. Planning, 2. Implementation, 3. Evaluation.]
BACKGROUND
Effective action planning takes into account essentially all of the issues and
approaches covered elsewhere in this book. For example, let’s assume one is
broadly concerned with the public health needs of a community. Early on, a
partnership or coalition would have been established through which multi-
ple stakeholders and community members are engaged in defining the main
issues of concern and developing, implementing, and evaluating the inter-
vention. The process would begin with a community assessment. This would
start by examining epidemiologic data and prioritizing which health issues to
address. After the quantitative data describing the main health issues have
been established, additional community assessments (quantitative and quali-
tative) can be conducted to determine the specific needs and assets of the pop-
ulation of interest and the context (social, political, economic) within which
the health problem exists. Through this process one would have identified the
specific population and contextual issues using a wide range of local data sets.4
Factors would be examined across the ecological framework (as described in
chapter 5). In addition to a full community assessment, systematic reviews
of the literature and cost-effectiveness studies would be reviewed to assist in
determining possible intervention approaches, including a review of program-
matic, organizational, policy, and environmental change interventions.
After a small set of possible interventions is identified, one would then
prioritize which intervention is best to implement (see chapter 9). Previous
work has identified a number of issues to consider when prioritizing which
intervention to conduct in a particular community or assessing readiness of a
community to engage in a particular intervention (Box 10.1).5–9 As described
in chapter 5, information on issues to consider during prioritization can be
collected as part of a complete community assessment. Consideration of these
factors is needed to determine the levels of the ecological framework (indi-
vidual behavior, organizational, environmental, or policy level change) that
are most appropriate for intervention, the intervention strategy that is best
for a specific community, and the content and processes that should be used
for implementing the intervention.
Box 10.1
QUESTIONS TO CONSIDER WHEN ASSESSING READINESS
FOR INTERVENTION OPTIONS
Box 10.2
AN ECOLOGICAL APPROACH TO REDUCE
DIABETES DISPARITIES
Theory
Theories and models explain behavior and suggest ways to achieve behav-
ior change. As noted by Bandura, in advanced disciplines like mathemat-
ics, theories integrate laws; whereas in newer fields such as public health
or behavioral science, theories describe or specify the determinants influ-
encing the phenomena of interest.36 As a result, in terms of action plan-
ning, theory can point to important intervention strategies. For example,
individual level theories (e.g., health belief model, transtheoretical model
of change, theory of planned behavior11,33,36) suggest that perceptions are
important in maintaining behavior and that it is therefore important to
include some strategies to alter perceptions; whereas if skills are considered
important to change behavior (i.e., social learning theory36), then some
strategy to alter skills must be included in the intervention. If laws and rules
influence health and behavior, policies need to be enacted and enforced to
support health. Policy theories and frameworks suggest that determinants
of policy change include increasing knowledge of the problem and build-
ing support for change. Useful strategies such as developing policy briefs
and engaging leadership can help to create the needed policy changes.37,38
Organizational theories and frameworks point to policies, regulations, and
structures that influence behaviors and health or characteristics within
organizations that facilitate implementation of health-related interven-
tions, including leadership support, funding, and collaborative relation-
ships and networks.9, 39
Numerous frameworks for planning have been proposed over the past few
decades. Among the earliest approaches was a simple program evaluation and
review technique (PERT) chart. As described by Breckon and colleagues,40 this
was a graphically displayed timeline for the tasks necessary in the develop-
ment and implementation of a public health program. Subsequent approaches
have divided program development into various phases, including needs
assessment, goal setting, problem definition, plan design, implementation,
and evaluation. There are numerous other planning frameworks that have
proved useful for various intervention settings and approaches. Among them
are the following:
Each of these frameworks has been used to plan and implement successful
programs. The PRECEDE-PROCEED model alone has generated thousands of
documented health promotion applications in a variety of settings and across
multiple health problems. Others, such as the SHIP process, are explicitly
linked to accreditation standards and measures.43 Rather than providing a
review of each of these planning frameworks, key planning principles have
been abstracted that appear to be crucial to the success of interventions in
community settings and are common to each framework. Those principles
include the following:
Box 10.3
STEPS IN DESIGNING A SUCCESSFUL PUBLIC
HEALTH INTERVENTION
Adaptation
and the social and cultural dynamics of the community. This can improve the
ability to integrate the knowledge gained with action to improve the health
and well-being of community members.48 Driven by values of social or envi-
ronmental justice,49 CBPR creates the structures needed for all partners to
engage in improving community health. These structures are co-created by
all partners and provide the opportunity for all partners to learn from each
other (co-learning).50, 51
One of the challenges that a program often encounters when adapting an
intervention is the tension between fidelity, or keeping the key ingredients
of an intervention that made it successful, and adaptation to fit the com-
munity of interest.52 Adapting interventions from one location to another
requires considerations regarding the determinants of the health issue, the
population, culture and context, and political and health care systems.47,53–56
Lee and colleagues have developed a useful approach for planned adaptation
that includes four steps: (1) examining the evidence-based theory of change,
(2) identifying population differences, (3) adapting the program content, and
(4) adapting evaluation strategies.57
There are several issues to consider in adapting an intervention that has
been effective in one setting and with one population into another set-
ting and population. Among these are attributes of applicability (whether
the intervention process can be implemented in the local setting) such as
the political environment, public acceptance of the intervention, cultural
norms regarding the issue and the intervention proposed, history of the
relationship between the community and the organization implementing
the intervention including the history of trust, engagement of the commu-
nity in the intervention development and implementation, and resources
available for the program.47 Other factors relate to transferability (whether the
intervention effectiveness is similar in the local setting and the original
study), baseline risk factors, population characteristics, and the capacity
to implement the intervention.56,58 There are some aspects of an interven-
tion that are relatively benign to change (e.g., the name of the intervention
or the graphics used), whereas changing other aspects of the intervention
may be somewhat concerning (e.g., the sector of the community where the
intervention is implemented) or advised against (e.g., eliminating training
modules).59
Work conducted with the National Community Committee of the
Prevention Research Centers identified 10 considerations to take into account
when adapting evidence-based physical activity programs with and within
racial and ethnic minority communities (Box 10.4).60 Although these were
developed for physical activity, some may also be important to consider
in developing programs for other interventions aimed at reducing health
disparities.60
Box 10.4
THINGS TO CONSIDER WHEN ADAPTING EVIDENCE-
BASED PROGRAMS WITH AND WITHIN RACIAL/ETHNIC
COMMUNITIES (EXAMPLE IS FOR PHYSICAL ACTIVITY
PROMOTION)
1. Attend to culture
• Require ongoing cultural competency training for all staff (e.g.,
those who have experienced racial or ethnic discrimination and
those who have benefited from racial or ethnic discrimination)
• Recognize a history of mistrust or mistreatment by social and
medical professionals and services
• Recognize diversity within and among groups regarding
cultural norms
• Recognize the complexity and differences within and across racial
and ethnic minority populations
• Recognize differences in the meaning of physical activity across
communities
• Consider the location of programs and resources in communities
• Recognize the variety of responses that may occur with weight loss
related to physical activity (e.g., concern regarding sickness, gain-
ing weight to be attractive)
• Recognize that it is considered acceptable, and sometimes prefer-
able, in some racial and ethnic minority communities for individu-
als to be heavier
2. Build on previous studies and work in the community
• Review recommendations from the Guide to Community Preventive
Services (Community Guide), learning from what others have done,
including essential elements for a specific intervention but adapting
for local racial and ethnic minority communities, based on conver-
sations and previous experiences within the intended community
3. Tailor the intervention (e.g., media messages, programs, policies,
environmental changes) to the population and community you
intend to serve in terms of the following:
• Reading level
• Education level and the quality of the education
• Available resources and infrastructures (e.g., parks, recreation cen-
ters, trails)
• Individual and family characteristics (e.g., age and age-related
norms, work, complex family structures, health conditions)
• Availability of jobs and the unemployment rate
• Incarceration and crime rate
( 252 ) Evidence-Based Public Health
Developing Action Plans
policy and allows for midcourse corrections through process evaluation (see
chapter 11). An intervention objective should include a clear identification of
the health issue or risk factor being addressed, the at-risk population being
addressed, the current status of the health issue or risk factor in the at-risk
population, and the desired outcome of the intervention. A clearly defined
objective can guide both the development of intervention content and the
selection of appropriate communication channels. It also facilitates the devel-
opment of quantitative evaluation measures that can be used to monitor the
success of the intervention and to identify opportunities for improvement.
Importantly, a clearly defined objective will improve the coordination of activ-
ities among the various partners participating in the intervention. Many have
suggested that objectives need to be SMART—specific, measurable, achiev-
able, realistic, and time bound.61 More generally, several aspects of sound
objective-setting have been described1:
Table 10.2 presents examples of sound objectives from national and state gov-
ernmental sectors. These are drawn from the strategic plans and other plan-
ning materials of the programs noted. Some have found it helpful to start with
simpler objectives to ensure that they link to both goals and activities,
and then work from there to make the objectives “SMART.”
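To make the SMART idea concrete, the elements the chapter lists for a sound objective (health issue, at-risk population, current status, desired outcome, time frame) can be sketched as a simple checklist. This is an illustrative sketch, not from the text; the field names and the example objective are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Objective:
    """One intervention objective, decomposed into the elements the
    chapter names: health issue, at-risk population, baseline, target,
    and deadline. All names here are illustrative."""
    health_issue: str           # e.g., "adult smoking prevalence"
    population: str             # e.g., "low-income adults in County X"
    baseline: Optional[float]   # current status (makes it measurable)
    target: Optional[float]     # desired outcome (measurable, achievable)
    deadline: Optional[str]     # time bound, e.g., "2025-12-31"

def smart_gaps(obj: Objective) -> list[str]:
    """Return the SMART elements a draft objective is still missing."""
    gaps = []
    if not obj.health_issue or not obj.population:
        gaps.append("specific (issue and at-risk population)")
    if obj.baseline is None or obj.target is None:
        gaps.append("measurable (baseline and target)")
    if obj.deadline is None:
        gaps.append("time bound (deadline)")
    return gaps

# A "simple" starting objective, per the text, can then be tightened:
draft = Objective("adult smoking prevalence", "low-income adults", 22.0, None, None)
print(smart_gaps(draft))  # the draft still lacks a target and a deadline
```

This mirrors the advice to start with simple objectives and work toward "SMART" ones: the checklist reports which elements remain to be specified.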
A detailed action plan that includes the development of a work plan and
a specific timeline for completion will enhance the chances of a successful
ACTION PLANNING ( 255 )
Source: https://www.healthypeople.gov/2020/topics-objectives/topic/heart-disease-and-stroke/objectives#4555
http://www.thecommunityguide.org/index.html.
A sample timeline is shown in Table 10.3. Although there are many ways to
organize a timeline, this example groups activities into four main catego-
ries: (1) administration; (2) intervention development and implementation;
(3) data collection and evaluation; and (4) analysis and dissemination. For
internal purposes it is useful to add another component to this timeline—that
of the personnel intended to carry out each task. Doing this in conjunction
with the timeline will allow for assessment of workload and personnel needs
at various times throughout the proposed project. Another important compo-
nent of program delivery is the assessment of program implementation: “How
well was the program delivered?” These issues are covered in more detail in
chapter 11 within the context of process evaluation.
Table 10.3.a EXAMPLE TIMELINE FOR IMPLEMENTATION OF A PUBLIC
HEALTH INTERVENTION
Activity | Months 1–12
Administration
a Only year 1 is displayed as an example.
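The chapter's suggestion of pairing each timeline task with the personnel intended to carry it out, so that workload can be assessed across the year, can be sketched as follows. The four categories follow the text; the tasks, names, and months are invented for illustration:

```python
from collections import defaultdict

# Each task: (category, assigned person, active months in year 1).
# Categories follow the chapter; tasks and staff roles are hypothetical.
tasks = [
    ("administration", "coordinator", {1, 2, 3}),
    ("intervention development and implementation", "health educator", {2, 3, 4, 5, 6}),
    ("data collection and evaluation", "evaluator", {4, 5, 6, 7}),
    ("analysis and dissemination", "evaluator", {8, 9, 10, 11, 12}),
]

# Tally how many tasks each person carries per month, to spot
# periods of overload and plan personnel needs.
workload = defaultdict(lambda: defaultdict(int))
for _category, person, months in tasks:
    for m in sorted(months):
        workload[person][m] += 1

for person, by_month in workload.items():
    busiest = max(by_month, key=by_month.get)
    print(f"{person}: busiest in month {busiest} ({by_month[busiest]} task(s))")
```

Adding the person column to the timeline, as the text recommends, is what makes this month-by-month workload check possible.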
Assessing Resource Needs
1. Available funds: How much direct funding is available? What are the sources?
Are there limitations on how and when funds can be spent? Are funds
internal or external to a program or agency? Are there “in-kind” funds?
2. Personnel: How many and what types of personnel are needed? What type
of training will be needed for program staff? What personnel do collaborat-
ing organizations bring to the project?
3. Equipment and Materials: What types of equipment and supplies are needed
for the program? Are there certain pieces of equipment that can be obtained
“in-kind” from participating partners?
4. Facilities: For some types of interventions, is significant infrastructure
needed (such as clinics, hospitals, or mobile vans)?
5. Travel: Is there travel directly related to carrying out the project? Are there
related travel expenses for other meetings or presentations in professional
settings?
Table 10.4 CONTINUED
Travel. Examples: staff meeting travel, lodging, and per diem; steering group travel and lodging; mileage associated with program implementation
Other nonpersonnel service costs. Examples: conference call services; long-distance services; website service; transcription costs for focus group tapes
Indirect/overhead costs
Total costs
the utilization of local community members, or what some have called lay
health advisors, community health workers, or promotoras.62–65 Community
members may have expertise in many areas that make their engagement
important, and they can be trained to implement the specific intervention
and can be given ongoing support as needed.66 The training of all staff, com-
munity health workers, and others should be included as a necessary first step
in the work plan, and the persons responsible for training should be listed in
the work plan.
When addressing training needs, several key questions come to mind:
To the extent possible, a pilot test should be conducted in the same manner as
that intended for the full program. In some cases, a pilot study may use quali-
tative methods, such as focus groups or individual interviews, which are not
part of the main project. However, pilot tests can also provide an opportunity
to examine the utility and appropriateness of quantitative instruments. Pilot
test participants should be similar to those who will be in the actual project.
Generally, pilot test participants should not be enrolled in the main project;
therefore it is sometimes useful to recruit pilot participants from a separate
geographic region.67 Complete notes should be taken during the pilot test so
that the project team can debrief with all needed information.
SUMMARY
KEY CHAPTER POINTS
Selected Websites
Centers for Disease Control and Prevention—Assessment & Planning Models,
Frameworks & Tools <https://www.cdc.gov/stltpublichealth/cha/assessment.
html>. This site provides information on key elements of, as well as differ-
ences between, assessment and planning frameworks. It also provides tools and
resources for commonly used planning models and frameworks.
Community Tool Box <http://ctb.ku.edu/en/>. The Community Tool Box is a global
resource for free information on essential skills for building healthy communi-
ties. It offers more than 7,000 pages of practical guidance on topics such as lead-
ership, strategic planning, community assessment, advocacy, grant writing, and
evaluation. Sections include descriptions of the task, step-by-step guidelines,
examples, and training materials.
Developing and Sustaining Community-Based Participatory Research
Partnerships: A Skill-Building Curriculum <http://www.cbprcurriculum.
info/>. This evidence-based curriculum is intended as a tool for community-
institutional partnerships that are using or planning to use a community-based
participatory research (CBPR) approach to improve health. It is intended for
use by staff of community-based organizations, staff of public health agencies,
and faculty, researchers, and students at all skill levels. Units provide a step-
by-step approach, from the development of the CBPR partnership through the
dissemination of results and planning for sustainability. The material and infor-
mation presented in this curriculum are based on the work of the Community-
Institutional Partnerships for Prevention Research Group that emerged from
the Examining Community-Institutional Partnerships for Prevention Research
Project.
Health Education Resource Exchange (HERE) in Washington <http://here.doh.wa.gov/>.
This clearinghouse of public health education and health promotion projects,
materials, and resources in the State of Washington is designed to help com-
munity health professionals share their experience with colleagues. The website
includes sections on community projects, educational materials, health educa-
tion tools, and best practices.
Knowledge for Health (K4Health) <https://www.k4health.org>. Funded by USAID and
implemented by The Johns Hopkins Bloomberg School of Public Health, the mis-
sion of the K4Health project is to increase the use and dissemination of evidence-
based, accurate, and up-to-date information to improve health service delivery
and health outcomes worldwide. The site offers eLearning opportunities, results
of needs assessment activities, and toolkits for family planning and reproductive
health, HIV/AIDS, and other health topics.
Management Sciences for Health <http://erc.msh.org/>. Since 1971, Management
Sciences for Health (MSH), a nonprofit organization, has worked in more than
140 countries and with hundreds of organizations. MSH resources communicate
effective management practices to health professionals around the world. This
site, the Manager’s Electronic Resource Center, covers topics such as conduct-
ing local rapid assessments, working with community members, and developing
leaders. The site links to case studies and toolkits from around the world.
National Cancer Institute, Health Behavior Constructs <http://cancercontrol.cancer.gov/brp/research/constructs/index.html>. This site provides definitions of
major theoretical constructs employed in health behavior research, and informa-
tion about the best measures of these constructs. The National Cancer Institute
has also published a concise summary of health behavior theories in Theory at a
Glance, Second Edition. http://www.sbccimplementationkits.org/demandrmnch/wp-content/uploads/2014/02/Theory-at-a-Glance-A-Guide-For-Health-Promotion-Practice.pdf.
REFERENCES
14. Umberson D, Crosnoe R, Reczek C. Social relationships and health behavior across
the life course. Annu Rev Sociol. Aug 1 2011;36:139–157.
15. Israel BA. Social networks and health status: linking theory, research, and prac-
tice. Patient Couns Health Educ. 1982;4(2):65–79.
16. Edwards M, Wood F, Davies M, Edwards A. “Distributed health literacy”: longi-
tudinal qualitative analysis of the roles of health literacy mediators and social
networks of people living with a long-term health condition. Health Expect. Oct
2013;18(5):1180–1193.
17. Eysenbach G, Powell J, Englesakis M, Rizo C, Stern A. Health related virtual com-
munities and electronic support groups: systematic review of the effects of online
peer to peer interactions. BMJ. May 15 2004;328(7449):1166.
18. Israel BA. Social networks and social support: implications for natural helper and
community level interventions. Health Educ Q. 1985;12(1):65–80.
19. Eng E, Young R. Lay health advisors as community change agents. Fam Community
Health. 1992;15(1):24–40.
20. Aarons GA, Ehrhart MG, Farahnak LR, Sklar M. Aligning leadership across sys-
tems and organizations to develop a strategic climate for evidence-based practice
implementation. Annu Rev Public Health. 2014;35:255–274.
21. Allen P, Brownson R, Duggan K, Stamatakis K, Erwin P. The makings of an
evidence-based local health department: identifying administrative and man-
agement practices. Frontiers in Public Health Services & Systems Research.
2012;1(2).
22. Kelly CM, Baker EA, Williams D, Nanney MS, Haire-Joshu D. Organizational
capacity’s effects on the delivery and outcomes of health education programs. J
Public Health Manag Pract. Mar-Apr 2004;10(2):164–170.
23. Ward M, Mowat D. Creating an organizational culture for evidence-informed deci-
sion making. Healthc Manage Forum. Autumn 2012;25(3):146–150.
24. World Health Organization. Closing the Gap in a Generation: Health Equity Through
Action on the Social Determinants of Health. Final Report of the Commission on Social
Determinants of Health. Geneva: WHO; 2008.
25. Morris JN, Donkin AJ, Wonderling D, Wilkinson P, Dowler EA. A minimum income
for healthy living. J Epidemiol Community Health. Dec 2000;54(12):885–889.
26. Sallis J, Owen N. Ecological models of health behavior. In: Glanz K, Rimer B,
Vishwanath K, eds. Health Behavior: Theory, Research, and Practice. 2nd ed. San
Francisco, CA: Jossey-Bass Publishers; 2015:43–64.
27. McLeroy KR, Bibeau D, Steckler A, Glanz K. An ecological perspective on health
promotion programs. Health Educ Q. 1988;15:351–377.
28. Centers for Disease Control and Prevention. National Diabetes Statistics Report,
2014. http://www.cdc.gov/diabetes/pdfs/data/2014-report-estimates-of-
diabetes-and-its-burden-in-the-united-states.pdf. Accessed October 20, 2016.
29. Collinsworth A, Vulimiri M, Snead C, Walton J. Community health workers in
primary care practice: redesigning health care delivery systems to extend and
improve diabetes care in underserved populations. Health Promot Pract. Nov
2014;15(2 Suppl):51S–61S.
30. Lewis MA, Bann CM, Karns SA, et al. Cross-site evaluation of the Alliance to
Reduce Disparities in Diabetes: clinical and patient-reported outcomes. Health
Promot Pract. Nov 2014;15(2 Suppl):92S–102S.
31. Clauser SB, Taplin SH, Foster MK, Fagan P, Kaluzny AD. Multilevel intervention
research: lessons learned and pathways forward. J Natl Cancer Inst Monogr. May
2012;2012(44):127–133.
32. Cleary PD, Gross CP, Zaslavsky AM, Taplin SH. Multilevel interventions: study
design and analysis issues. J Natl Cancer Inst Monogr. May 2012;2012(44):49–55.
33. Glanz K, Bishop DB. The role of behavioral science theory in development and
implementation of public health interventions. Annu Rev Public Health. Apr 21
2010;31:399–418.
34. Knowlton L, Phillips C. The Logic Model Guidebook: Better Strategies for Great
Results. 2nd ed. Thousand Oaks, CA: Sage Publications; 2012.
35. Institute of Medicine. Speaking of Health: Assessing health communications strate-
gies for diverse populations. Washington, DC: National Academies Press; 2002.
36. Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory.
Englewood Cliffs, NJ: Prentice Hall; 1986.
37. Dodson EA, Eyler AA, Chalifour S, Wintrode CG. A review of obesity-themed pol-
icy briefs. Am J Prev Med. Sep 2012;43(3 Suppl 2):S143–S148.
38. Uneke CJ, Ezeoha AE, Uro-Chukwu H, et al. Enhancing the capacity of policy-
makers to develop evidence-informed policy brief on infectious diseases of pov-
erty in Nigeria. Int J Health Policy Manag. Sep 2015;4(9):599–610.
39. Brownson RC, Allen P, Duggan K, Stamatakis KA, Erwin PC. Fostering more-
effective public health by identifying administrative evidence-based practices: a
review of the literature. Am J Prev Med. Sep 2012;43(3):309–319.
40. Breckon DJ, Harvey JR, Lancaster RB. Community Health Education: Settings, Roles,
and Skills for the 21st Century. 4th ed. Rockville, MD: Aspen Publishers; 1998.
41. Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological
Approach. 4th ed. New York, NY: McGraw Hill; 2005.
42. Bartholomew L, Parcel G, Kok G, Gottlieb N, Fernandez M. Planning Health
Promotion Programs: An Intervention Mapping Approach. 3rd ed. San Francisco,
CA: Jossey-Bass Publishers; 2011.
43. Association of State and Territorial Health Officials. Developing a state health
improvement plan: Guidance and resources. http://www.astho.org/WorkArea/
DownloadAsset.aspx?id=6597. Accessed September 10, 2016.
44. (Entire issue devoted to descriptions of the Planned Approach to Community
Health [PATCH]). Journal of Health Education. 1992;23(3):131–192.
45. Davis JR, Schwartz R, Wheeler F, Lancaster R. Intervention methods for chronic
disease control. In: Brownson RC, Remington PL, Davis JR, eds. Chronic Disease
Epidemiology and Control. 2nd ed. Washington, DC: American Public Health
Association; 1998:77–116.
46. Stirman SW, Miller CJ, Toder K, Calloway A. Development of a framework and
coding system for modifications and adaptations of evidence-based interven-
tions. Implement Sci. 2013;8:65.
47. Wallerstein N, Duran B. Community-based participatory research contributions
to intervention research: the intersection of science and practice to improve
health equity. Am J Public Health. Apr 1 2010;100(Suppl 1):S40–S46.
48. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based
research: assessing partnership approaches to improve public health. Annual
Review of Public Health. 1998;19:173–202.
49. Cargo M, Mercer SL. The value and challenges of participatory research: strength-
ening its practice. Annu Rev Public Health. Apr 21 2008;29:325–350.
50. Israel B, Eng E, Schultz A, Parker E, eds. Methods in community-based participatory
research for health. 2nd ed. San Francisco, CA: Jossey-Bass Publishers; 2013.
51. Viswanathan M, Ammerman A, Eng E, et al. Community-based participatory
research: assessing the evidence. Rockville, MD: Agency for Healthcare Research and Quality.
67. McDermott RJ, Sarvela PD. Health Education Evaluation and Measurement.
A Practitioner’s Perspective. 2nd ed. New York, NY: WCB/McGraw-Hill; 1999.
68. Steckler A, Orville K, Eng E, Dawson L. Summary of a formative evaluation of
PATCH. Journal of Health Education. 1992;23(3):174–178.
69. Simons-Morton BG, Greene WH, Gottlieb NH. Introduction to Health Education
and Health Promotion. 2nd ed. Prospect Heights, IL: Waveland Press; 1995.
CHAPTER 11
Evaluating the Program or Policy
One of the great mistakes is to judge policies and programs by their intentions rather
than their results.
Milton Friedman
BACKGROUND
What Is Evaluation?
Evaluation is the process of analyzing programs and policies and the con-
text within which they occur to determine whether changes need to be made
in implementation and to assess the intended and unintended consequences
of programs and policies; this includes, but is not limited to, determining
whether they are meeting their goals and objectives. Evaluation is “a process
that attempts to determine as systematically and objectively as possible the
relevance, effectiveness, and impact of activities in light of their objectives.”1
There is variation in the methods used to evaluate programs and perhaps even
more variation in the language used to describe each of the various evaluation
Why Evaluate?
There are many reasons for public health practitioners to evaluate programs
and policies. First, practitioners in the public sector must be accountable to
national leaders, state policy makers, local governing officials, and citizens for
the use of resources.14 Similarly, those working in the private and nonprofit
sectors must be accountable to their constituencies, including those providing
the funds for programs and policy initiatives. Evaluation also forms the basis
for making choices when resources are limited (as they always are), in part by
helping to determine the costs and benefits of the various options (for more
about this, see chapter 4). Finally, evaluation is also a source of information for
making midcourse corrections, improving programs and policies, and serving
as the basis for deciding on future programs and policies. It is closely related
to the program planning issues and steps described in chapter 10 (Table 11.1).
In the early stages of planning and evaluation, it is useful to consider a set
of factors (so-called utility standards) that help to frame the reasons for and
uses of an evaluation (Table 11.2).15–17 In part, these standards frame a set of
questions, such as the following:
experiences. (It is critical to assure staff that program evaluation is not evalua-
tion of personnel.18) In terms of affected audiences, inclusion in the evaluation
process can increase their investment in the program and ensure that their
interests and desires are considered when changes are made in programs and
policies. Administrators and program funders need to be included to ensure
that evaluation activities are conducted with an understanding of where the
program or policy fits within the broader organizational or agency mission
and to answer questions most urgent to these groups.18 Regardless of who is
included, it is essential that the relationships among these stakeholders be
based on mutual trust, respect, and open communication.
Before the evaluation begins, all key stakeholders need to agree on the
program goals and objectives, along with the purpose of the evaluation. Each
stakeholder may harbor a different opinion about the program goals and
objectives and the purpose of the evaluation, and these differences should
be discussed and resolved before the evaluation plan is developed and imple-
mented. There are several group process techniques that can be helpful in
this regard. For example, the nominal group technique and the multivoting
method (chapter 9) both offer opportunities for individual voices to be heard
while, at the same time, providing a process for prioritization.
After the purpose of the evaluation has been agreed on, the next step is to
turn stakeholder questions into an evaluation design. The specific roles and
responsibilities of each group of stakeholders in creating the questions that
guide the evaluation and in developing the methods to collect data may vary.
In some evaluation designs, the stakeholders may be notified as decisions are
made or have minimal input into evaluation decisions.12 There are also other
evaluation approaches (participatory, collaborative, or empowerment evalua-
tion), in which stakeholders are seen as coequal partners in all evaluation deci-
sions, including which questions are to be answered, which data are collected,
how data are collected and analyzed, and how results are interpreted.19 Some
of these designs emphasize stakeholder participation as a means of ensuring
that the evaluation is responsive to stakeholder needs, whereas other designs
involve stakeholders to increase the control and ownership.10,12 The role of the
stakeholders will depend in part on the desires of the stakeholders and the
paradigm guiding the evaluation. In all cases, everyone involved should have a
clear understanding of their role in the evaluation process.
Before data collection, all stakeholders should also agree on the extent to
which the data collected will be kept confidential, not only in terms of pro-
tecting the confidentiality of participants in data collection (a nonnegotiable
condition for protecting evaluation participants), but also in terms of how
information will be shared within the group of stakeholders (all at once or
some notified before others). The group should also reach consensus on how
and when information will be communicated outside the immediate group of
stakeholders, what will be shared, and by whom.12
TYPES OF EVALUATION
There are several types of evaluation, including those related to program for-
mation, context, process, impact, and outcome. Each type has a different pur-
pose and is thus appropriate at different stages in the development of the
program or policy. Initial evaluation efforts should focus on population needs
and the implementation of program activities, commonly called formative
or process evaluation. Impact evaluations and outcome evaluations are only
appropriate after the program has been functioning for a sufficient amount
of time to see potential changes. The exact timing will depend on the nature
of the program and the changes expected or anticipated. Further, each type of
evaluation involves different evaluation designs and data collection methods.
Choices of which evaluation types to employ are based in part on the interests
of the various stakeholders and the resources available.
Formative Evaluation
• What are the attitudes among school officials toward the proposed healthy
eating program?
• What are current barriers for policies for healthy eating?
• Are there certain schools that have healthier food environments than
others?
• What are the attitudes among schoolchildren toward healthier food
choices?
• What, if anything, has been tried in the past, and what were the results?
After these data are collected and analyzed by the relevant stakeholders, an
action plan should be developed. (Chapter 10 describes this process in detail.)
The action plan is essential to evaluation. A key component of the action
plan is the development of a logic model (an analytic framework) (described
in chapter 9). A logic model lists specific activities that are designed (based
on evidence) to lead to the accomplishment of objectives, which, in turn, will
enhance the likelihood of accomplishing program goals. A logic model lays out
what outputs or activities will occur (an educational session on breast cancer
screening at a church) and what they will lead to (increased knowledge among
participants regarding risk factors for breast cancer and specific methods of
breast cancer screening), which will in turn have an impact (increased breast
cancer screening rates), with the intention that this will therefore produce a
long-term outcome (decreased morbidity due to breast cancer). As discussed
in chapter 10, intervention activities and objectives should be based on the
best evidence available.
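The chain the text describes, from activities through short-term impacts to long-term outcomes, can be sketched as an ordered set of links. The breast cancer screening stages below paraphrase the chapter's example; the structure itself is a deliberate simplification of a full logic model:

```python
# A minimal logic model: each stage names what it is expected to lead to.
# Stage labels and wording paraphrase the chapter's breast cancer example.
logic_model = [
    ("activity", "educational session on breast cancer screening at a church"),
    ("output", "increased knowledge of risk factors and screening methods"),
    ("impact", "increased breast cancer screening rates"),
    ("outcome", "decreased morbidity due to breast cancer"),
]

def render(model):
    """Format the causal chain, proximal to distal."""
    return " -> ".join(f"[{stage}] {desc}" for stage, desc in model)

print(render(logic_model))
```

Laying the chain out explicitly is what lets an evaluation check proximal links (e.g., screening rates) long before distal outcomes (e.g., morbidity) can be observed, which is the point made in the paragraphs that follow.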
Several authors have conceptualized this process somewhat differently6,12,21;
however, the overall intent is that the program or policy should be laid out in
such a way that it specifies the activities and the program objectives that are
expected to affect clearly delineated proximal and distal outcomes. Although
any logic model is obviously limited in its ability to predict the often impor-
tant unintended consequences of programs and policies, many argue that,
even with this limitation, a logic model is mandatory to evaluate a program
effectively. Rossi and colleagues have stated that evaluation in the absence of
a logic model results in a “black box” effect in that the evaluation may provide
information with regard to the effects but not the processes that produced the
effects.12 Moreover, because so many of the distal outcomes in public health
are not evident until long after a program is implemented (e.g., decreases in
morbidity due to lung cancer as a result of a tobacco control program), it is
essential to ascertain whether more proximal outcomes (e.g., decreases in cur-
rent smoking rates) are being achieved.
Process Evaluation
Process evaluation data are important for documenting changes that have been,
and need to be, made to the program or policy so that it can be implemented
more effectively. Information for process evaluation can be collected through
quantitative and qualitative methods, including observations, field notes,
interviews, questionnaires, program records, environmental audits, and local
newspapers and publications.
Impact Evaluation
Impact evaluation assesses the extent to which program objectives are being
met. Some also refer to this as an assessment of intermediate or proximal
outcomes, to acknowledge both the importance of short-term effects and that
impact evaluation can assess intended as well as unintended consequences.10
Impact evaluation is probably the most commonly reported type of evaluation
in the public health literature.
Impact evaluation requires that all program objectives be clearly speci-
fied. A challenge in conducting an impact evaluation is the presence of many
program objectives and their variable importance among stakeholders. There
are also instances when a national program is implemented at many sites.
The national program is likely to require each site to track the attainment
of certain objectives and goals. Each site, however, may also have different
specific program objectives and activities that they enact to accomplish local
and national objectives and achieve the desired changes in outcomes. They
may, therefore, be interested in tracking these local program activities and
objectives in addition to the national requirements for reporting on program
outcomes. Because no evaluation can assess all program components, stakeholders
should come to an agreement before collecting data as to which objectives will
be measured at what times.
It may be appropriate to alternate the types of data collected over months
or years of a program to meet multiple programmatic and stakeholder
needs. For example, suppose one was evaluating the changes in physical
activity in a community over a 5-year period. In the initial phases of a pro-
gram, it may be important to collect baseline data to understand the effects
of the environment on physical activity. At each time point, it may be
important to collect data on a set of core items (e.g., rates of physical activ-
ity) but alternate the data collected for some domains of questions (time
2: data on the role of social support; time 3: data on attitudes toward poli-
cies). Moreover, impact evaluation should not occur until participants have
completed the program as planned or until policies have been established
and implemented for some time. For example, if a program is planned
to include five educational sessions, it is not useful to assess impact on
objectives after the participants have attended only two sessions. It is also
important to include assessments after the program has been completed to
determine whether the changes made as a result of the program have been
sustained over time.
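The alternating measurement schedule described above, a fixed core item set at every wave plus one rotating question domain per wave, can be sketched as a simple mapping. The module names follow the physical activity example in the text; the structure itself is illustrative, not a standard instrument.

```python
# Illustrative measurement schedule: core items collected at every time point,
# plus one rotating question domain per wave (domains from the text's example).

CORE_ITEMS = ["rates of physical activity"]

ROTATING_MODULES = {
    1: "environmental effects on physical activity (baseline)",
    2: "role of social support",
    3: "attitudes toward policies",
}

def items_for_wave(wave):
    """Return everything collected at a given time point."""
    return CORE_ITEMS + [ROTATING_MODULES[wave]]

for wave in sorted(ROTATING_MODULES):
    print(f"Time {wave}: {items_for_wave(wave)}")
```

Writing the schedule down this way forces the stakeholder agreement the text calls for: which objectives are measured at which times is decided before any data are collected.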
Program objectives assessed by impact evaluation may include changes in
knowledge, attitudes, or behavior. For example, changes in knowledge about
risk factors associated with breast cancer or the benefits of early detection
might be tracked through the use of a questionnaire administered before and
after an educational campaign or program. Similarly, changes in attitude might
be ascertained by assessing a participant’s intention to obtain a mammogram
both before and after an intervention through the use of a questionnaire. In
the case of policy interventions (e.g., a policy enacted to make mammogra-
phy a covered benefit for all women), objectives assessed by impact evaluation
might track the rate of mammography screening before and after enactment
of the policy.
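A pre/post comparison of questionnaire scores, as in the knowledge-change example above, reduces to computing the average within-participant change. This is a minimal sketch; the scores are invented for illustration, and a real impact evaluation would add a significance test and a comparison group where the design allows.

```python
# Hypothetical pre/post comparison for an impact evaluation: knowledge scores
# from a questionnaire administered before and after an educational campaign.
# The scores below are invented for illustration.

pre_scores  = [4, 5, 3, 6, 4, 5, 2, 4]   # knowledge score before the program
post_scores = [7, 6, 5, 8, 6, 7, 4, 6]   # same participants, after the program

def mean_change(pre, post):
    """Average within-participant change (post minus pre)."""
    assert len(pre) == len(post), "paired data required"
    diffs = [b - a for a, b in zip(pre, post)]
    return sum(diffs) / len(diffs)

print(f"Mean change in knowledge score: {mean_change(pre_scores, post_scores):+.2f}")
# Mean change in knowledge score: +2.00
```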
Assessments of the reliability and validity of BRFSS data show risk factor
prevalence rates comparable to those of other national surveys that rely on
self-reports.24 Among the survey
questions used in the BRFSS, measures determined to be of high reliability
and high validity were current smoker, blood pressure screening, height,
weight, and several demographic characteristics.25 Measures of both moderate
reliability and validity included when last mammography was received, clini-
cal breast exam, sedentary lifestyle, intense leisure-time physical activity, and
fruit and vegetable consumption.
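Test-retest reliability of a binary survey item such as current smoking status is commonly summarized with Cohen's kappa, chance-corrected agreement between two administrations. The sketch below computes kappa in pure Python; the paired responses are invented for illustration, and this is not the BRFSS studies' actual analysis code.

```python
# Sketch of test-retest reliability for a binary survey item (e.g., current
# smoker: 1 = yes, 0 = no) using Cohen's kappa. Responses are invented.

def cohens_kappa(first, second):
    """Chance-corrected agreement between two administrations of an item."""
    n = len(first)
    observed = sum(a == b for a, b in zip(first, second)) / n
    # Expected agreement if the two administrations were independent
    p1_yes = first.count(1) / n
    p2_yes = second.count(1) / n
    expected = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (observed - expected) / (1 - expected)

time1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # responses at first administration
time2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]  # same respondents, retested later

print(f"kappa = {cohens_kappa(time1, time2):.2f}")
# kappa = 0.80
```

Kappa near 1 indicates high reliability; values this high are what qualifies measures such as current smoking as "high reliability" in the BRFSS literature cited above.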
Even if the instruments under consideration have been shown to be valid
and reliable in one population (e.g., residents of an urban area), it may be
important to assess the reliability and validity of measures in the particular
population being served by the program (e.g., a rural population). For exam-
ple, it may be necessary to translate the items from English into other lan-
guages in a way that ensures that participants understand the meaning of the
questions. This may require more than a simple word-for-word translation.
(Some words or phrases may be culturally defined and may not have a direct
translation.) In addition, the multicultural nature of public health necessitates
that the methods used to collect data and the analysis and reporting of
the data reflect the needs, customs, and preferences of diverse populations. It
is important to determine that the measures are appropriate for the popula-
tion that is to be surveyed in terms of content (meeting program objectives),
format (including readability and validity), and method of administering
the questionnaire (e.g., self-administered versus telephone).23,24 Changes in
technologies may affect the reliability, validity, and feasibility of various data
collection methods. For example, data are often collected by telephone, an
effective method during the time when land lines were the norm. The greater
use of cell phones, answering machines, voice mail, and caller ID has contrib-
uted to declines in response rates and has increased costs of conducting tele-
phone surveys.26
Issues of validity and reliability are somewhat different, but no less impor-
tant, in the collection of qualitative data. The concept outlined by Lincoln
and colleagues11 and Shenton27 is “trustworthiness” of qualitative data.
Trustworthiness involves establishing credibility (confidence in the findings),
transferability (applicable in other contexts), dependability (repeatable), and
confirmability (shaped by the respondent and not the interviewer).
Outcome Evaluation
Box 11.1
EVALUATING A COMMUNITY-LEVEL INITIATIVE TO REDUCE OBESITY
There are many issues to consider in deciding the appropriate methods to use
for a particular evaluation, including the type of data to collect (e.g., qualita-
tive vs. quantitative data). Qualitative data may include individual and group
interviews; diaries of daily or weekly activities; records, newspapers, and
other forms of mass media; and photographs, photovoice, and other visual
and creative arts (e.g., music, poems). Quantitative data include surveys or
questionnaires, surveillance data, and other records. Either form of data may
be collected as primary data (designed for purposes of the evaluation at hand)
or secondary data (existing data collected for a purpose other than the evalu-
ation at hand, but still capable of answering the current evaluation questions
to some extent).
These different types of data are often associated with different para-
digmatic approaches (i.e., differences regarding what is known and how
knowledge is generated) (Table 11.4). Quantitative data are generally col-
lected using a positivist paradigm, or what is often called the “dominant”
paradigm. As discussed earlier in this chapter, a paradigm offers guidance
because it provides a set of understandings about the nature of reality and
the relationship between the knower and what can be known. Within a
positivist paradigm, what is known is constant, separate from the method
of generating knowledge, the person conducting the inquiry, and the con-
text within which the inquiry is conducted. On the other end of the spec-
trum, qualitative data are often collected within alternative paradigms that
include critical theory and constructionism. Although these alternative
paradigms vary, they generally suggest that knowledge is dependent on the
context and the interaction between the researcher and the participant in
a study. It is important to note, however, that quantitative and qualita-
tive data may be collected and analyzed using any paradigm as the guid-
ing framework for the design of the study. For example, community-based
evaluations are often conducted within an alternative paradigm but may
utilize either qualitative or quantitative data, or may include both types
(i.e., mixed methods).
Data Triangulation
Box 11.2
MIXED-METHODS EVALUATION OF AN HIV PREVENTION PROGRAM
2. Do those above the program manager, at higher levels of the organization,
agree with the program manager's description of the intervention?
3. To what extent does the program or policy have agreed-on measures and
data sources?
4. Does the description of the intervention correspond to what is actually
found in the field?
5. Are planned activities and resources likely to achieve objectives?
6. Does the intervention have well-defined uses for information on progress
toward its measurable objectives?
7. What portion of the program or policy is ready for evaluation of progress
toward agreed-on objectives?
8. What evaluation and management options should organizational leaders
consider?
For public health practitioners, exploratory evaluation has many benefits and
can lead to more effective and efficient evaluations.46 For those seeking to
learn more about exploratory evaluation, several sources are useful.5,46,47
Although there are many similarities in using evaluation to assess the imple-
mentation and effectiveness of programs and health policy, there are some
significant differences that should be noted. Just as with program planning,
there are several stages in a policy cycle, including agenda setting, formula-
tion, decision making, implementation, and maintenance or termination.65 In
considering evaluation within the context of the policy cycle, the first
consideration is the use of data in the agenda-setting (policy formation) stage
and the policy design (formulation) stage. This is similar to a community
assessment but is likely to differ in its consideration of whether the issue
warrants a public or government intervention. If there is evidence that policy
change is warranted, the question becomes whether current policies adequately
address the concern or whether there is a need to modify existing legislation,
create new policy, or enforce existing policy. Issues of cost-effectiveness
and public opinion are as likely to have a significant impact on the answers to
these questions as are other data collected.
The next phase of the policy cycle is policy implementation. Process data
are useful at this stage, with a focus on the extent to which the policy is being
implemented according to expectations of the various stakeholders. The last
stage in the policy cycle is policy maintenance or termination. In this stage,
longer term data are appropriate, with a focus on the extent to which the pol-
icy has achieved its objectives and goals.
Policy evaluations are critical to understanding the impact of policies on
community- and individual-level behavior changes. They should include
"upstream" (e.g., presence of zoning policies supportive of physical activity),
"midstream" (e.g., enrollment in walking clubs), and "downstream" (e.g., the
rate of physical activity) factors.66–68 By far, the quantitative measures most
routinely available from long-standing data systems are for downstream
outcomes.
Benchmarks include programmatic as well as structural, social, and insti-
tutional objectives and goals. For example, 5 years after implementation of a
state law requiring insurance coverage of cancer screenings, several questions
might be addressed:
There are several challenges in evaluating health policies. One is that the
acceptable timing of the evaluation is likely to be determined more by legisla-
tive sessions than programmatic needs.69 Because of the wide variety of objec-
tives and goals, it is important to acknowledge from the outset that evaluation
results provide but one piece of data that is used in decision making regarding
maintaining or terminating a health policy. This is in part because the evalu-
ation of public health policy must be considered part of the political process.
The results of evaluations of public policy inevitably influence the distribution
of power and resources. Therefore, although it is essential to conduct rigorous
Resource Considerations
After the data are collected and analyzed, it is important to provide the vari-
ous stakeholders with a full reporting of the information collected and the rec-
ommendations for program or policy improvements. A formal report should
include background information on the evaluation, such as the purpose of the
evaluation (including the focus on process, impact, or outcome questions), the
various stakeholders involved, a description of the program, including pro-
gram goals and objectives, a description of the methodology used, and the
evaluation results and recommendations.4,5,18 Some important questions to
consider when reporting evaluation data are shown in Table 11.5.70–72 Perhaps
Question: Who are the different audiences (potential consumers) that should be informed?
Considerations: Key stakeholders (people and agencies); participants in the program; public health practitioners; policy makers; public health researchers; the general public.

Question: What is your message?
Considerations: Focus on what you want people to remember; keep it concise and actionable; think of a core message and one or two related messages; make it understandable.

Question: How will you inform the community about the results of your intervention (the medium)?
Considerations: Town meetings; meetings of local organizations (civic groups); newspaper articles, feature stories; online articles; journal articles; social media.

Question: Who will assume responsibility for presenting the results?
Considerations: Public health practitioners; public health researchers; community members.

Question: What are the implications for program improvement?
Considerations: Need for new or different personnel or training of existing staff; need for new resources; refinement of intervention options; changes in time lines or action steps.

Adapted from The Planned Approach to Community Health (PATCH)70,72 and Grob.71
most important, the report should tell readers something they do not already
know, in a concise format.71 The "Mom Test" is also important for any evaluation
report: months or years of evaluation effort need to be boiled down to a few
key sentences that are easily understood by a broad audience and that are
specific, are inspiring, and will elicit a response.71 For example: "Our
reading improvement program that started last year in our schools is working.
Reading levels are up significantly in every classroom where it was tried."
The development and dissemination of evaluation findings have changed
dramatically over the past 20 years. A few decades ago, the typical evalua-
tion report would be a hard-copy volume that might also be reduced to an
executive summary. Newer approaches take advantage of electronic infor-
mation technology by using websites, videos, electronic newsletters, and
Figure 11.1: Infographic showing the effects of poverty on mortality in St. Louis, Missouri.
Source: For the Sake of All, 2014.75
the data analysis and interpretation. One useful method is to conduct some
sort of member validation of the findings before presenting a final report.
This is particularly important if the participants have not had other involve-
ment in data analysis and interpretation. Member validation is a process by
which the preliminary results and interpretations are presented back to those
who provided the evaluation data. These participants are asked to comment
on the results and interpretations, and this feedback is used to modify the
initial interpretations.
Utilization of the evaluation report is also influenced by its timeliness
and the match between stakeholder needs and the method of reporting the
evaluation results.6,18,71 Often, evaluation results are reported back to the
funders and program administrators and published in academic journals,
but not provided to community-based organizations or community mem-
bers themselves. The ideal method of reporting the findings to each of these
groups is likely to differ. For some stakeholders, formal written reports are
helpful, whereas for others, an oral presentation of results or information
placed in newsletters or on websites might be more appropriate. It is,
therefore, essential that the evaluator consider the needs of all the
stakeholders and provide the evaluation results back to the various interest
groups in appropriate ways. This includes, but is not limited to, ensuring that the
report enables the various stakeholders to utilize the data for future program
or policy initiatives.
SUMMARY
KEY CHAPTER POINTS
Selected Websites
American Evaluation Association <http://www.eval.org/p/cm/ld/fid=51>. The
American Evaluation Association is an international professional association
of evaluators devoted to the application and exploration of program evaluation,
personnel evaluation, technology, and many other forms of evaluation.
Centers for Disease Control and Prevention (CDC) Program Performance and Evaluation
Office (PPEO) <http://www.cdc.gov/eval/resources/index.htm>. The CDC PPEO
has developed a comprehensive list of evaluation documents, tools, and links to
other websites. These materials include documents that describe principles and
standards, organizations and foundations that support evaluation, a list of jour-
nals and online publications, and access to step-by-step manuals.
Community Health Status Indicators (CHSI) Project <http://wwwn.cdc.gov/communityhealth>. The Community Health Status Indicators (CHSI) Project includes
3,141 county health status profiles representing each county in the United States
excluding territories. Each CHSI report includes data on access and utilization
of health care services, birth and death measures, Healthy People 2020 targets
and US birth and death rates, vulnerable populations, risk factors for premature
deaths, communicable diseases, and environmental health. The goal of CHSI is to
give local public health agencies another tool for improving their community’s
health by identifying resources and setting priorities.
Community Tool Box <http://ctb.ku.edu/en/>. The Community Tool Box is a global
resource for free information on essential skills for building healthy communi-
ties. It offers more than 7,000 pages of practical guidance on topics such as lead-
ership, strategic planning, community assessment, advocacy, grant writing, and
evaluation. Sections include descriptions of the task, step-by-step guidelines,
examples, and training materials.
RE-AIM.org <http://www.re-aim.org/>. With an overall goal of enhancing the qual-
ity, speed, and public health impact of efforts to translate research into practice,
this site provides an explanation of and resources (e.g., planning tools, measures,
self-assessment quizzes, FAQs, comprehensive bibliography) for those wanting
to apply the RE-AIM framework.
Research Methods Knowledge Base <http://www.socialresearchmethods.net/kb/>.
The Research Methods Knowledge Base is a comprehensive Web-based textbook
that covers the entire research process, including formulating research questions;
sampling; measurement (surveys, scaling, qualitative, unobtrusive); research
design (experimental and quasi-experimental); data analysis; and writing the
research paper. It uses an informal, conversational style to engage both the new-
comer and the more experienced student of research.
United Nations (UN) Development Programme's Evaluation Office <http://erc.undp.org/index.html>. The UN Development Programme is the UN's global
development network, an organization advocating for change and connecting
countries to knowledge, experience, and resources to help people build a better
life. This site on evaluation includes training tools and a link to their Handbook
on Planning, Monitoring and Evaluating for Development Results, available in
English, Spanish and French. The Evaluation Resource Center allows users to
search for evaluations by agency, type of evaluation, region, country, year, and
focus area.
W. K. Kellogg Foundation Evaluation Handbook
<http://www.wkkf.org/resource-directory/resource/2010/w-k-kellogg-foundation-evaluation-handbook>. The W. K. Kellogg Foundation Evaluation Handbook provides
a framework for thinking about evaluation as a relevant and useful program
tool. It includes a guide to logic model development, a template for strategic commu-
nications, and an overall framework designed for project directors who have evalu-
ation responsibilities.
REFERENCES
23. Mokdad AH. The Behavioral Risk Factors Surveillance System: past, present, and
future. Annu Rev Public Health. Apr 29 2009;30:43–54.
24. Pierannunzi C, Hu SS, Balluz L. A systematic review of publications assessing reli-
ability and validity of the Behavioral Risk Factor Surveillance System (BRFSS),
2004-2011. BMC Med Res Methodol. 2013;13:49.
25. Nelson DE, Holtzman D, Bolen J, Stanwyck CA, Mack KA. Reliability and validity
of measures from the Behavioral Risk Factor Surveillance System (BRFSS). Soz
Praventivmed. 2001;46(Suppl 1):S3–S42.
26. Kempf AM, Remington PL. New challenges for telephone survey research in the
twenty-first century. Annu Rev Public Health. 2007;28:113–126.
27. Shenton A. Strategies for ensuring trustworthiness in qualitative research projects. Education for Information. 2004;22:63–75.
28. Koepsell TD, Wagner EH, Cheadle AC, et al. Selected methodological issues in
evaluating community-based health promotion and disease prevention programs.
Annual Review of Public Health. 1992;13:31–57.
29. Murray DM, Varnell SP, Blitstein JL. Design and analysis of group-randomized
trials: a review of recent methodological developments. Am J Public Health. Mar
2004;94(3):423–432.
30. Thompson B, Coronado G, Snipes SA, Puschel K. Methodologic advances and
ongoing challenges in designing community-based health promotion programs.
Annu Rev Public Health. 2003;24:315–340.
31. Murray DM. Design and Analysis of Group-Randomized Trials. New York, NY: Oxford University Press; 1998.
32. Murray DM, Pals SL, Blitstein JL, Alfano CM, Lehman J. Design and analysis of
group-randomized trials in cancer: a review of current practices. J Natl Cancer
Inst. Apr 2 2008;100(7):483–491.
33. Yin RK. Case Study Research: Design and Methods. 5th ed. Thousand Oaks, CA: Sage
Publications; 2014.
34. Noble H, Smith J. Issues of validity and reliability in qualitative research. Evid
Based Nurs. Apr 2015;18(2):34–35.
35. Economos CD, Curtatone JA. Shaping up Somerville: a community initiative in
Massachusetts. Prev Med. Jan 2009;50(Suppl 1):S97–S98.
36. Folta SC, Kuder JF, Goldberg JP, et al. Changes in diet and physical activity resulting
from the Shape Up Somerville community intervention. BMC Pediatr. 2013;13:157.
37. Coffield E, Nihiser AJ, Sherry B, Economos CD. Shape Up Somerville: change in
parent body mass indexes during a child-targeted, community-based environ-
mental change intervention. Am J Public Health. Feb 2015;105(2):e83–e89.
38. Casey D, Murphy K. Issues in using methodological triangulation in research.
Nurse Res. 2009;16(4):40–55.
39. Steckler A, McLeroy KR, Goodman RM, Bird ST, McCormick L. Toward inte-
grating qualitative and quantitative methods: an introduction. Health Education
Quarterly. 1992;19(1):1–8.
40. Brownson RC, Baker EA, Leet TL, Gillespie KN, True WR. Evidence-Based Public
Health. 2nd ed. New York, NY: Oxford University Press; 2011.
41. Rychetnik L, Hawe P, Waters E, Barratt A, Frommer M. A glossary for evidence
based public health. J Epidemiol Community Health. Jul 2004;58(7):538–545.
42. Tarquinio C, Kivits J, Minary L, Coste J, Alla F. Evaluating complex interven-
tions: perspectives and issues for health behaviour change interventions. Psychol
Health. Jan 2015;30(1):35–51.
43. August EM, Hayek S, Casillas D, Wortley P, Collins CB, Jr. Evaluation of the dis-
semination, implementation, and sustainability of the “Partnership for Health”
intervention. J Public Health Manag Pract. Oct 19 2015.
44. Guyatt G, Rennie D, Meade M, Cook D, eds. Users’ Guides to the Medical Literature.
A Manual for Evidence-Based Clinical Practice. 3rd ed. Chicago, IL: American Medical
Association Press; 2015.
45. Carter N, Bryant-Lukosius D, DiCenso A, Blythe J, Neville AJ. The use of triangu-
lation in qualitative research. Oncol Nurs Forum. Sep 2014;41(5):545–547.
46. Leviton LC, Khan LK, Rog D, Dawkins N, Cotton D. Evaluability assessment to
improve public health policies, programs, and practices. Annu Rev Public Health.
Apr 21 2010;31:213–233.
47. Strosberg MA, Wholey JS. Evaluability assessment: from theory to practice
in the Department of Health and Human Services. Public Adm Rev. Jan-Feb
1983;43(1):66–71.
48. Dwyer JJ, Hansen B, Barrera M, et al. Maximizing children’s physical activity: an
evaluability assessment to plan a community-based, multi-strategy approach
in an ethno-racially and socio-economically diverse city. Health Promot Int. Sep
2003;18(3):199–208.
49. Durham J, Gillieatt S, Ellies P. An evaluability assessment of a nutrition
promotion project for newly arrived refugees. Health Promot J Austr. Apr
2007;18(1):43–49.
50. Basile KC, Lang KS, Bartenfeld TA, Clinton-Sherrod M. Report from the CDC: Evaluability assessment of the rape prevention and education program: summary of findings and recommendations. J Womens Health (Larchmt). Apr 2005;14(3):201–207.
51. Trevisan M. Evaluability assessment from 1986 to 2006. Am J Evaluation.
2007;28:209–303.
52. Rabin B, Brownson R. Developing the terminology for dissemination and imple-
mentation research. In: Brownson R, Colditz G, Proctor E, eds. Dissemination
and Implementation Research in Health: Translating Science to Practice. New York,
NY: Oxford University Press; 2012:23–51.
53. Brownson RC, Jones E. Bridging the gap: translating research into policy and
practice. Prev Med. Oct 2009;49(4):313–315.
54. Brownson R, Colditz G, Proctor E, eds. Dissemination and Implementation Research
in Health: Translating Science to Practice. New York, NY: Oxford University
Press; 2012.
55. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and prac-
tice: models for dissemination and implementation research. Am J Prev Med. Sep
2012;43(3):337–350.
56. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. Sep 1999;89(9):1322–1327.
57. Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program
Sustainability Assessment Tool: a new instrument for public health programs.
Prev Chronic Dis. 2014;11:130184.
58. Dzewaltowski DA, Estabrooks PA, Klesges LM, Bull S, Glasgow RE. Behavior
change intervention research in community settings: how generalizable are the
results? Health Promot Int. Jun 2004;19(2):235–245.
59. Jilcott S, Ammerman A, Sommers J, Glasgow RE. Applying the RE-AIM frame-
work to assess the public health impact of policy change. Ann Behav Med. Sep-Oct
2007;34(2):105–114.
60. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-
implementation hybrid designs: combining elements of clinical effectiveness
and implementation research to enhance public health impact. Med Care. Mar
2012;50(3):217–226.
Table 12.1. CONTINUED

Social media and informatics: the use of social media for health behavior change.
EBPH: Social marketing for reducing tobacco use and secondhand smoke exposure; promoting physical activity.8
A-EBP: Access to and free flow of information.4
Research question: How do health information and communication technologies influence the effectiveness, efficiency, and outcomes of public health strategies delivered at local, state, and national levels (e.g., electronic health records, mobile health technologies, social media, electronic surveillance systems, geographic information systems, network analysis, predictive modeling)?

Demographic transitions: screening and counseling for chronic diseases; healthful diet and physical activity for cardiovascular disease prevention.
EBPH: Interventions utilizing community health workers.8
Research question: How do supply-side and demand-side factors affect the racial, ethnic, socioeconomic, and cultural diversity of persons eating a healthy diet?

Globalized travel: sexual transmission of new or emerging diseases.
EBPH: Interventions to reduce sexual risk behaviors or increase protective behaviors.8
Research question: How do the legal powers and duties of governmental public health agencies influence the effectiveness, efficiency, and outcomes of public health strategies delivered at local and state levels?

Adapted from Erwin and Brownson.6
is, the core concepts of external validity, as described in chapter 3. The issues
in external validity often relate to context for an intervention—for example,
“What factors need to be taken into account when an internally valid program
or policy is implemented in a different setting or with a different population
subgroup?" "How does one balance the concepts of fidelity and reinvention?"
If the adaptation process changes the original intervention to such an
Opportunities for Advancing Evidence-Based Public Health ( 303 )
extent that the original efficacy data may no longer apply, then the program
may be viewed as a new intervention under very different contextual condi-
tions. Green has recommended that the implementation of evidence-based
approaches requires careful consideration of the “best processes” needed
when generalizing evidence to alternate populations, places, and times (e.g.,
what makes evidence useful).16
are evidence based.27,28 Even among programs that are evidence based, 37%
of programs within state health departments are discontinued when they
should continue.23
Nearly every public health issue has a global footprint because diseases do
not know borders and shared solutions are needed. This can readily be seen
if one lines up goals of the World Health Organization with national health
plans. Although it is important to acknowledge that public health chal-
lenges in less developed countries are compounded by poverty and hun-
ger, diminished public infrastructure, and the epidemiologic transition to
behaviors that pose risks more typically found in higher income countries,
EBPH decision making still has applicability. There are, however, few data
available on the reach of EBPH across developed and less developed regions
of the world. Early findings from a four-country study (Australia, Brazil,
China, and the United States) show wide variations in knowledge of EBPH
approaches, how that knowledge is developed, and how EBPH-related deci-
sions are made.42
As this work develops, there are many areas that are likely to lead to
advances in EBPH. These could include (1) adapting methods of public health
surveillance from one country to another43; (2) understanding how to adapt an
effective intervention in one geographic region to the context of another geo-
graphic region44,45; (3) implementing innovative methods for building capac-
ity in EBPH46; and (4) identifying effective methods for delivery of health care
services in one country that could be applied to another.
the physical environment, which often are not tracked in public health sur-
veillance systems.
Public health surveillance, that is, the ongoing systematic collection, analysis,
and interpretation of outcome-specific health data, is a cornerstone of pub-
lic health.49 In the United States we now have excellent epidemiologic data
for estimating which population groups and which regions of the country are
affected by a specific condition and how patterns are changing over time with
respect to both acute and chronic conditions. To supplement these data, we
need better information on a broad array of environmental and policy factors
that determine these patterns. When implemented properly, policy surveil-
lance systems can be an enormous asset for policy development and evalu-
ation. These data allow us to compare progress among states, determine the
types of bills that are being introduced and passed (e.g., school nutrition stan-
dards, safe routes to school programs), and begin to track progress over time.
In the United States, the core public health workforce is employed in gov-
ernmental settings, including 59 state and territorial public health agen-
cies, nearly 3,000 local health departments, and many federal agencies (e.g.,
the Centers for Disease Control and Prevention, Environmental Protection
Agency). In developing countries, a significant proportion of the public health
workforce is supported by nongovernmental organizations (e.g., the World
Health Organization, the United Nations Children’s Fund, the World Bank).63
A large percentage of this workforce has no formal education in public health.
Therefore, more practitioner-focused training is needed on the rationale for
EBPH, how to select interventions, how to adapt them to particular circum-
stances, and how to monitor their implementation.64 As outlined in chapter 1,
we would supplement this recommendation by inclusion of EBPH-related
competencies.13 Some training programs show evidence of effectiveness.27,28,65
The most common format uses didactic sessions, computer labs, and scenario-
based exercises, taught by a faculty team with expertise in EBPH. The reach of
these training programs can be increased by emphasizing a train-the-trainer
approach.66 Other formats have been used, including Internet-based self-
study,67,68 CD-ROMs,69 distance and distributed learning networks, and tar-
geted technical assistance. Training programs may have greater impact when
delivered by “change agents” who are perceived as experts yet share common
characteristics and goals with trainees.70 A commitment from leadership and
staff to lifelong learning is also an essential ingredient for success.71,72 Because
many of the health issues needing urgent attention in local communities
will require the involvement of other organizations (e.g., nonprofit groups,
hospitals, employers), their participation in EBPH-related training efforts is
essential.
Another way to enhance capacity and leadership in EBPH and public health
more broadly for the public health workforce is through academic-practice
partnerships such as the Academic Health Department (AHD). A recent study
of Council on Education for Public Health–accredited schools and programs
of public health found that of 156 institutions surveyed, 117 completed
the survey and 64 (55%) indicated that they had an AHD partnership.73 The
partnerships varied regarding their structure (formal vs. informal; written
Memorandum of Understanding vs. not) and types of engagement, and the
strongest benefits of such partnerships were clearly for the students involved
by improving competencies of students, enhancing career opportunities of
public health graduates, and improving public health graduates’ preparation
SUMMARY
Prevention was the major contributor to the health gains of the past century,
yet it is vastly undervalued.78 Public health history teaches us that a long
“latency period” often exists between the scientific understanding of a viable
disease prevention method and its widespread application on a population
basis.79 For example, it has been estimated that it takes 17 years for research to
reach practice.80–82 Many of the approaches to reduce this research-to-practice
gap are outlined in this book—these remedies will allow us to expand the
evidence base for public health, apply the evidence already in hand, address
health equity, and therefore more fully achieve the promise of public health.
KEY CHAPTER POINTS
• The process of evidence-based public health should take into account broad
macro-level forces of change that affect the physical, economic, policy, and
sociocultural environments.
• New intervention evidence is constantly emerging and there is a need to
collect more data on external validity.
• The reach and relevance of evidence is needed to better address health
equity, gather more policy-relevant evidence, and learn from global efforts.
• There is a need to continue to set priorities, measure progress, and expand
policy-related surveillance.
• Continued efforts are needed to break down silos in public health and
enhance contributions of disciplines that cross sectors.
• Emphasis is needed on leadership development to enhance EBPH that can
be aided via practice-academic linkages and accreditation.
Selected Websites
Global Health Council <http://www.globalhealth.org/>. The Global Health Council
is the world’s largest membership alliance dedicated to saving lives by improv-
ing health throughout the world. Its diverse membership comprises health care
REFERENCES
16. Green LW. From research to “best practices” in other settings and populations.
Am J Health Behav. 2001;25(3):165–178.
17. Brennan L, Castro S, Brownson RC, Claus J, Orleans CT. Accelerating evidence
reviews and broadening evidence standards to identify effective, promising, and
emerging policy and environmental strategies for prevention of childhood obe-
sity. Annu Rev Public Health. Apr 21 2011;32:199–223.
18. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fun-
damental concept for public health practice. Annu Rev Public Health. Apr 21
2009;30:175–201.
19. University of Wisconsin Population Health Institute. Using What Works for
Health. http://www.countyhealthrankings.org/roadmaps/what-works-for-
health/using-what-works-health. Accessed July 28, 2016.
20. Nutbeam D. How does evidence influence public health policy? Tackling health
inequalities in England. Health Promot J Aust. 2003;14:154–158.
21. Ogilvie D, Egan M, Hamilton V, Petticrew M. Systematic reviews of health effects
of social interventions: 2. Best available evidence: how low should you go? J
Epidemiol Community Health. Oct 2005;59(10):886–892.
22. Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into
practice: dramatic change is needed. Am J Prev Med. Jun 2011;40(6):637–644.
23. Brownson RC, Allen P, Jacob RR, et al. Understanding mis-implementation in
public health practice. Am J Prev Med. May 2015;48(5):543–551.
24. Gnjidic D, Elshaug AG. De-adoption and its 43 related terms: harmonizing low-
value care terminology. BMC Med. 2015;13:273.
25. Gunderman RB, Seidenwurm DJ. De-adoption and un-diffusion. J Am Coll Radiol.
Nov 2015;12(11):1162–1163.
26. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted,
unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1.
27. Dreisinger M, Leet TL, Baker EA, Gillespie KN, Haas B, Brownson RC.
Improving the public health workforce: evaluation of a training course to
enhance evidence-based decision making. J Public Health Manag Pract. Mar-Apr
2008;14(2):138–143.
28. Gibbert WS, Keating SM, Jacobs JA, et al. Training the workforce in evidence-
based public health: an evaluation of impact among US and international practi-
tioners. Prev Chronic Dis. 2013;10:E148.
29. Subramanian SV, Belli P, Kawachi I. The macroeconomic determinants of health.
Annu Rev Public Health. 2002;23:287–302.
30. Ezzati M, Friedman AB, Kulkarni SC, Murray CJ. The reversal of fortunes: trends
in county mortality and cross-county mortality disparities in the United States.
PLoS Med. Apr 22 2008;5(4):e66.
31. Kulkarni SC, Levin-Rector A, Ezzati M, Murray CJ. Falling behind: life expectancy
in US counties from 2000 to 2007 in an international context. Popul Health Metr.
Jun 15 2011;9(1):16.
32. Shaya FT, Gu A, Saunders E. Addressing cardiovascular disparities through com-
munity interventions. Ethn Dis. Winter 2006;16(1):138–144.
33. Freudenberg N, Franzosa E, Chisholm J, Libman K. New approaches for mov-
ing upstream: how state and local health departments can transform practice to
reduce health inequalities. Health Educ Behav. Apr 2015;42(1 Suppl):46S–56S.
34. Masi CM, Blackman DJ, Peek ME. Interventions to enhance breast cancer screen-
ing, diagnosis, and treatment among racial and ethnic minority women. Med Care
Res Rev. Oct 2007;64(5 Suppl):195S–242S.
35. Peek ME, Cargill A, Huang ES. Diabetes health disparities: a systematic review of
health care interventions. Med Care Res Rev. Oct 2007;64(5 Suppl):101S–156S.
36. Petticrew M, Roberts H. Systematic reviews: do they “work” in informing decision-
making around health inequalities? Health Econ Policy Law. Apr 2008;3(Pt
2):197–211.
37. Shah SN, Russo ET, Earl TR, Kuo T. Measuring and monitoring progress toward
health equity: local challenges for public health. Prev Chronic Dis. 2014;11:E159.
38. Jones E, Kreuter M, Pritchett S, Matulionis RM, Hann N. State health pol-
icy makers: what’s the message and who’s listening? Health Promot Pract. Jul
2006;7(3):280–286.
39. Stamatakis K, McBride T, Brownson R. Communicating prevention messages to
policy makers: the role of stories in promoting physical activity. J Phys Act Health.
2010;7(Suppl 1):S00-S107.
40. Otten JJ, Cheng K, Drewnowski A. Infographics and public policy: using
data visualization to convey complex information. Health Aff (Millwood). Nov
2015;34(11):1901–1907.
41. Spiegelhalter D, Pearson M, Short I. Visualizing uncertainty about the future.
Science. Sep 9 2011;333(6048):1393–1400.
42. deRuyter A, Ying X, Budd E, et al. Implementing evidence-based practices to pre-
vent chronic disease: knowledge, knowledge acquisition, and decision-making
across four countries. 9th Annual Conference on the Science of Dissemination and
Implementation. Washington, DC: NIH; 2016.
43. Schmid T, Zabina H, McQueen D, Glasunov I, Potemkina R. The first telephone-
based health survey in Moscow: building a model for behavioral risk factor sur-
veillance in Russia. Soz Praventivmed. 2005;50(1):60–62.
44. Cuijpers P, de Graaf I, Bohlmeijer E. Adapting and disseminating effective public
health interventions in another country: towards a systematic approach. Eur J
Public Health. Apr 2005;15(2):166–169.
45. Cambon L, Minary L, Ridde V, Alla F. Transferability of interventions in health
education: a review. BMC Public Health. Jul 02 2012;12:497.
46. Diem G, Brownson RC, Grabauskas V, Shatchkute A, Stachenko S. Prevention
and control of noncommunicable diseases through evidence-based public
health: implementing the NCD 2020 action plan. Glob Health Promot. Sep
2016;23(3):5–13.
47. World Health Organization. The Sustainable Development Goals 2015–2030.
http://una-gp.org/the-sustainable-development-goals-2015-2030/. Accessed
October 8, 2016.
48. Fielding J, Kumanyika S. Recommendations for the concepts and form of Healthy
People 2020. Am J Prev Med. Sep 2009;37(3):255–257.
49. Thacker SB, Berkelman RL. Public health surveillance in the United States.
Epidemiol Rev. 1988;10:164–190.
50. Green LW, George MA, Daniel M, et al. Review and Recommendations for the
Development of Participatory Research in Health Promotion in Canada. Vancouver,
British Columbia: The Royal Society of Canada; 1995.
51. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based
research: assessing partnership approaches to improve public health. Annu
Rev Public Health. 1998;19:173–202.
52. Cargo M, Mercer SL. The value and challenges of participatory
research: Strengthening its practice. Annu Rev Public Health. Apr 21 2008;
29:325–350.
53. Minkler M, Salvatore A. Participatory approaches for study design and analysis in
dissemination and implementation research. In: Brownson R, Colditz G, Proctor
E, eds. Dissemination and Implementation Research in Health: Translating Science to
Practice. New York, NY: Oxford University Press; 2012:192–212.
54. Haire-Joshu D, McBride T, eds. Transdisciplinary Public Health: Research, Education,
and Practice. San Francisco, CA: Jossey-Bass Publishers; 2013.
55. Harper GW, Neubauer LC, Bangi AK, Francisco VT. Transdisciplinary research
and evaluation for community health initiatives. Health Promot Pract. Oct
2008;9(4):328–337.
56. Stokols D. Toward a science of transdisciplinary action research. Am J Community
Psychol. Sep 2006;38(1–2):63–77.
57. Hall KL, Vogel AL, Stipelman B, Stokols D, Morgan G, Gehlert S. A four-phase
model of transdisciplinary team-based research: goals, team processes, and strat-
egies. Transl Behav Med. Dec 1 2013;2(4):415–430.
58. Byrne S, Wake M, Blumberg D, Dibley M. Identifying priority areas for longitu-
dinal research in childhood obesity: Delphi technique survey. Int J Pediatr Obes.
2008;3(2):120–122.
59. Russell-Mayhew S, Scott C, Stewart M. The Canadian Obesity Network and inter-
professional practice: members’ views. J Interprof Care. Mar 2008;22(2):149–165.
60. Brownson RC, Reis RS, Allen P, et al. Understanding administrative evidence-
based practices: findings from a survey of local health department leaders. Am J
Prev Med. Jan 2013;46(1):49–57.
61. Bekemeier B, Grembowski D, Yang Y, Herting JR. Leadership matters: local health
department clinician leaders and their relationship to decreasing health dispari-
ties. J Public Health Manag Pract. Mar 2012;18(2):E1–E10.
62. Jacob R, Allen P, Ahrendt L, Brownson R. Learning about and using research evi-
dence among public health practitioners. Am J Prev Med. 2017;52(3S3):S304–S308.
63. International Medical Volunteers Association. The Major International Health
Organizations. http://www.imva.org/pages/orgfrm.htm. Accessed November
23, 2016.
64. Centers for Disease Control and Prevention. Modernizing the Workforce for the
Public's Health: Shifting the Balance. Public Health Workforce Summit Report.
Atlanta, GA: CDC; 2013.
65. Maylahn C, Bohn C, Hammer M, Waltz E. Strengthening epidemiologic compe-
tencies among local health professionals in New York: teaching evidence-based
public health. Public Health Rep. 2008;123(Suppl 1):35–43.
66. Yarber L, Brownson CA, Jacob RR, et al. Evaluating a train-the-trainer approach
for improving capacity for evidence-based decision making in public health. BMC
Health Serv Res. 2015;15(1):547.
67. Linkov F, LaPorte R, Lovalekar M, Dodani S. Web quality control for lec-
tures: Supercourse and Amazon.com. Croat Med J. Dec 2005;46(6):875–878.
68. Maxwell ML, Adily A, Ward JE. Promoting evidence-based practice in population
health at the local level: a case study in workforce capacity development. Aust
Health Rev. Aug 2007;31(3):422–429.
69. Brownson RC, Ballew P, Brown KL, et al. The effect of disseminating evidence-
based interventions that promote physical activity to health departments. Am J
Public Health. Oct 2007;97(10):1900–1907.
70. Proctor EK. Leverage points for the implementation of evidence-based practice.
Brief Treatment and Crisis Intervention. Sep 2004;4(3):227–242.
71. Chambers LW. The new public health: do local public health agencies need a
booster (or organizational “fix”) to combat the diseases of disarray? Can J Public
Health. Sep-Oct 1992;83(5):326–328.
72. St Leger L. Schools, health literacy and public health: possibilities and challenges.
Health Promot Int. Jun 2001;16(2):197–205.
73. Erwin PC, Harris J, Wong R, Plepys CM, Brownson RC. The Academic Health
Department: academic-practice partnerships among accredited U.S. schools and
programs of public health, 2015. Public Health Rep. Jul-Aug 2016;131(4):630–636.
74. Brownson RC, Diez Roux AV, Swartz K. Commentary: generating rigorous evi-
dence for public health: the need for new thinking to improve research and prac-
tice. Annu Rev Public Health. 2014;35:1–7.
75. Bender K, Halverson PK. Quality improvement and accreditation: what might it
look like? J Public Health Manag Pract. Jan-Feb 2010;16(1):79–82.
76. Public Health Accreditation Board. Public Health Accreditation Board Standards
and Measures, version 1.5. 2013. http://www.phaboard.org/wp-content/
uploads/SM-Version-1.5-Board-adopted-FINAL-01-24-2014.docx.pdf. Accessed
November 20, 2016.
77. Kronstadt J, Meit M, Siegfried A, Nicolaus T, Bender K, Corso L. Evaluating the
Impact of National Public Health Department Accreditation—United States,
2016. MMWR Morb Mortal Wkly Rep. Aug 12 2016;65(31):803–806.
78. McGinnis JM. Does proof matter? why strong evidence sometimes yields weak
action. Am J Health Promot. May-Jun 2001;15(5):391–396.
79. Brownson RC, Bright FS. Chronic disease control in public health practice: look-
ing back and moving forward. Public Health Rep. May-Jun 2004;119(3):230–238.
80. Balas EA. From appropriate care to evidence-based medicine. Pediatr Ann. Sep
1998;27(9):581–584.
81. Green LW, Ottoson JM, Garcia C, Hiatt RA. Diffusion theory, and knowledge dis-
semination, utilization, and integration in public health. Annu Rev Public Health.
Jan 15 2009;30:151–174.
82. Westfall JM, Mold J, Fagnan L. Practice-based research—“blue highways” on the
NIH roadmap. JAMA. Jan 24 2007;297(4):403–406.
GLOSSARY
Action planning: Planning for a specific program or policy with specific,
time-dependent outcomes.
Adaptation: The degree to which an evidence-based intervention is changed
or modified by a user during adoption and implementation to suit the
needs of the setting or to improve the fit to local conditions.
Adjusted rates: Rate in which the crude (unadjusted) rate has been
standardized to some external reference population (e.g., an age-adjusted
rate of lung cancer). An adjusted rate is often useful when comparing
rates over time or for populations (e.g., by age, gender, race) in different
geographic areas.
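The arithmetic behind an adjusted rate can be illustrated with a small sketch of direct standardization: each stratum-specific rate is weighted by the reference population's share of that stratum. The age groups, rates, and population counts below are invented for illustration only.

```python
# Illustrative sketch of direct age adjustment. Stratum-specific rates
# are weighted by a standard (reference) population's age distribution
# so that populations with different age structures can be compared.
# All numbers below are hypothetical.

# Age-specific lung cancer rates per 100,000 in a study population
study_rates = {"<45": 5.0, "45-64": 60.0, "65+": 250.0}

# Standard reference population counts for the same age groups
standard_pop = {"<45": 600_000, "45-64": 300_000, "65+": 100_000}

total = sum(standard_pop.values())
weights = {age: n / total for age, n in standard_pop.items()}

# Age-adjusted rate: weighted average of the stratum-specific rates
adjusted_rate = sum(study_rates[age] * weights[age] for age in study_rates)
print(round(adjusted_rate, 1))  # per 100,000
```

Because the same reference weights can be applied to any population's stratum-specific rates, the resulting adjusted rates are directly comparable across time and geography.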
Advocacy: Set of skills that can be used to create a shift in public opinion and
mobilize the necessary resources and forces to support an issue. Advocacy
blends science and politics in a social-justice value orientation with the
goal of making the system work better, particularly for individuals and
populations with the least resources.
Analytic epidemiology: Study designed to examine associations, commonly
putative or hypothesized causal relationships. An analytic study is usually
concerned with identifying or measuring the effects of risk factors or is
concerned with the health effects of specific exposures.
Analytic framework: (causal framework, logic model) Diagram that depicts
the interrelationships among population characteristics, intervention
components, shorter-term intervention outcomes, and longer-term public
health outcomes. Its purpose is to map out the linkages on which to base
conclusions about intervention effectiveness. Similar frameworks are
also used in program planning to assist in designing, implementing, and
evaluating effective interventions.
Basic priority rating (BPR): A method of prioritizing health issues based on
the size of the problem, the seriousness of the problem, the effectiveness
of intervention, and its propriety, economics, acceptability, resources, and
legality (known as PEARL).
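One common formulation of the BPR (the Hanlon method) scores the components numerically and screens the result with PEARL; the sketch below assumes that formulation, BPR = (A + 2B) × C / 3, and all scores shown are hypothetical.

```python
# Hedged sketch of one common BPR formulation (the Hanlon method):
# BPR = (A + 2B) * C / 3, where A = size of the problem (0-10),
# B = seriousness of the problem (0-10, double-weighted), and
# C = effectiveness of intervention (0-10). PEARL (propriety,
# economics, acceptability, resources, legality) acts as a yes/no
# screen: failing any component zeroes the score. Scoring scales
# vary across texts, so treat this as an illustration, not the
# definitive method.

def basic_priority_rating(size, seriousness, effectiveness, pearl_ok=True):
    """Return a BPR score from 0 to 100; 0 if the PEARL screen fails."""
    if not pearl_ok:
        return 0.0
    return (size + 2 * seriousness) * effectiveness / 3

# Hypothetical example: a moderately sized, serious problem with an
# effective intervention that passes the PEARL screen
print(basic_priority_rating(size=4, seriousness=8, effectiveness=9))
```

Ranking candidate health issues by their BPR scores gives a transparent, reproducible basis for the priority-setting discussions described in this book.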
REFERENCES
Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological
Approach. 4th ed. New York, NY: McGraw Hill; 2005.
Haddix AC, Teutsch SM, Corso PS. Prevention Effectiveness. A Guide to Decision Analysis
and Economic Evaluation. 2nd ed. New York: Oxford University Press; 2002.
Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving con-
cept. Am J Prev Med. Dec 2004;27(5):417–421.
Last JM. A Dictionary of Public Health. New York: Oxford University Press; 2007.
Porta M, editor. A Dictionary of Epidemiology. 6th ed. New York: Oxford University
Press; 2014.
Novick LF, Morrow CB, Mays GP, eds. Public Health Administration. Principles for
Population-Based Management. Second Edition. Sudbury, MA: Jones and
Bartlett Publishers; 2008.
Petticrew M, Cummins S, Ferrell C, et al. Natural experiments: an underused tool for
public health? Public Health. Sep 2005;119(9):751–757.
Rabin B, Brownson R. Developing the terminology for dissemination and implemen-
tation research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination
and Implementation Research in Health: Translating Science to Practice.
New York: Oxford University Press; 2012. p. 23–51
Straus SE, Richardson WS, Glasziou P, Haynes R. Evidence-Based Medicine. How to
Practice and Teach EBM. 4th ed. Edinburgh, UK: Churchill Livingston; 2011.
Witkin BR, Altschuld JW. Conducting and Planning Needs Assessments. A Practical
Guide. Thousand Oaks, CA: Sage Publications, 1995.
INDEX