Program Evaluation: The Accountability Bridge Model for Counselors
Randall L. Astramovich and J. Kelly Coker
Journal of Counseling & Development, Spring 2007, Volume 85. Assessment & Diagnosis. © 2007 by the American Counseling Association. All rights reserved.
The accountability and reform movements in education and the human services professions have pressured counselors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning and conducting counseling program evaluations. The authors present the Accountability Bridge Counseling Program Evaluation Model and discuss its use in evaluating counseling services and programs.
Randall L. Astramovich, Department of Counselor Education, University of Nevada, Las Vegas; J. Kelly Coker, Harbin and Associates Psychotherapy, Fayetteville, North Carolina. J. Kelly Coker is now at the Department of Counselor Education, Capella University. Correspondence concerning this article should be addressed to Randall L. Astramovich, Department of Counselor Education, University of Nevada, Las Vegas, 4505 Maryland Parkway, Box 453066, Las Vegas, NV 89154-3066 (e-mail: [email protected]).
Program evaluation in counseling has been a consistent topic of discourse in the profession over the past 20 years (Gysbers, Hughey, Starr, & Lapan, 1992; Hadley & Mitchell, 1995; Loesch, 2001; Wheeler & Loesch, 1981). Considered an applied research discipline, program evaluation refers to a systematic process of collecting and analyzing information about the efficiency, the effectiveness, and the impact of programs and services (Boulmetis & Dutwin, 2000). The field of program evaluation has grown rapidly since the 1950s as public and private sector organizations have sought quality, efficiency, and equity in the delivery of services (Stufflebeam, 2000b). Today, professional program evaluators are recognized as highly skilled specialists with advanced training in statistics, research methodology, and evaluation procedures (Hosie, 1994). Although program evaluation has developed as a distinct academic and professional discipline, human services professionals have frequently adopted program evaluation principles in order to conduct micro-evaluations of local services. From this perspective, program evaluation can be considered a type of action research geared toward monitoring and improving a particular program or service. Because micro-evaluations are conducted on a smaller scale, they may be planned and implemented by practitioners. Therefore, for the purposes of this article, we consider counseling program evaluation to be the ongoing use of evaluation principles by counselors to assess and improve the effectiveness and impact of their programs and services.
Challenges to Counseling Program Evaluation
Counseling program evaluation has not always been
conceptual-
ized from the perspective of practicing counselors. For
instance,
Benkofski and Heppner (1999) presented guidelines for
counsel-
ing program evaluation that emphasized the use of
independent
evaluators rather than counseling practitioners.
Furthermore,
program evaluation literature has often emphasized
evaluation
models and principles that were developed for use in large-
scale
organizational evaluations by professional program
evaluators
(e.g., Kellaghan & Madaus, 2000; Kettner, Moroney, &
Martin,
1999). Such models and practices are not easily
implemented by
counseling practitioners and may have contributed to the
hesi-
tance of counselors to use program evaluation methods.
Loesch
(2001) argued that the lack of counselor-specific
evaluation
models has substantially contributed to the dichotomy
between
research and practice in counseling. Therefore, new
paradigms
of counseling program evaluation are needed to
increase the
frequency of practitioner-implemented evaluations.
Much of the literature related to counseling program evaluation has cited the lack of both counselors' ability to systematically evaluate counseling services and their interest in doing so (e.g., Fairchild, 1993; Whiston, 1996). Many reasons have been suggested for counselors' failure to conduct evaluations. An important reason is that conducting an evaluation requires some degree of expertise in research methods, particularly in formulating research questions, collecting relevant data, and selecting appropriate analyses. Yet counselors typically receive little training to prepare them for demonstrating outcomes (Whiston, 1996) and evaluating their services (Hosie, 1994). Consequently, counselor education programs have been criticized for failing to provide appropriate evaluation and research training to new counselors (Borders, 2002; Heppner, Kivlighan, & Wampold, 1999; Sexton, 1999; Sexton, Whiston, Bleuer, & Walz, 1997). Counselors may, therefore, refrain from program evaluation because of a lack of confidence in their ability to effectively collect and analyze data and apply findings to their professional practice (Isaacs, 2003). However, for those counselors with the requisite skills to conduct evaluations, their hesitance may be related to the fear of finding that their services are ineffective (Lusky & Hayes, 2001; Wheeler & Loesch, 1981).
Despite calls for counselors and counseling programs to embrace research and evaluation as an integral part of the provision of counseling services (e.g., Borders & Drury, 1992; Fairchild, 1994; Whiston, 1996), there is virtually no information that documents counselors' interest in and use of counseling program evaluation. Although counselors may place minimal value on research and evaluation activities (Loesch, 2001), strong sociopolitical forces, including the emphasis on managed care in mental health and the school reform movement in public education, often require today's counselors to use evaluation methods to demonstrate the effectiveness and impact of their counseling services.
Program Evaluation and Accountability
Distinguishing between program evaluation and accountability is essential because many professionals use the terms interchangeably and, occasionally, treat one as a category of the other. For instance, Isaacs (2003) viewed program evaluation as a type of accountability that focuses primarily on program effectiveness and improvement. However, from our perspective, counseling program evaluation precedes accountability. As defined by Loesch (2001), counseling program evaluations help practitioners "maximize the efficiency and effectiveness of service delivery through careful and systematic examination of program components, methodologies, and outcomes" (p. 513). Counseling program evaluations, thus, have inherent value in helping practitioners plan, implement, and refine counseling practice regardless of the need to demonstrate accountability. However, when called on to provide evidence of program effectiveness and impact, counselors can effectively draw on information gathered from their own program evaluations.
We, thus, conceptualize counseling accountability as providing specific information to stakeholders and other supervising authorities about the effectiveness and efficiency of counseling services (Studer & Sommers, 2000). In our view, demonstrating accountability forms a bridge between counseling practice and the broader context of the service impact on stakeholders. However, accountability should not be the sole motivation for counseling program evaluation. As emphasized by Loesch (2001), counseling program evaluations should be undertaken to improve counseling services rather than merely to provide a justification for existing programming.
The Need for New Models of Counseling Program Evaluation
We believe that a significant contributor to counselors' disinterest in evaluation involves the lack of practical program evaluation models available to them for this purpose. Furthermore, confusion about the differences between program evaluation and accountability appears to deter counselors from engaging in ongoing program evaluations (Loesch, 2001). Therefore, the development of new, counselor-specific models that clearly conceptualize program evaluation and accountability may provide the necessary impetus to establish program evaluation as a standard of practice in counseling.
Recent examples of counselor-focused evaluation approaches include Lusky and Hayes's (2001) consultation model of counseling program evaluation and Lapan's (2001) framework for planning and evaluating school counseling programs. Gysbers and Henderson (2000) also discussed the role of evaluation in school counseling programs and offered practical strategies and tools that counselors could implement. These approaches have helped maintain a focus on the importance of counseling program evaluation.
The purpose of this article is to build on the emerging counselor-focused literature on program evaluation by providing counselors with a practical model for developing and implementing evaluation-based counseling services. As Whiston (1996) emphasized, counseling practice and research form a continuum rather than being mutually exclusive activities. Although some counselors may identify more strongly with research and others more strongly with practice, both perspectives provide valuable feedback about the impact of counseling on clients served. Indeed, evaluation and feedback are integral parts of the counseling process, and most counselors will identify with the idea of refining their practice on the basis of feedback from numerous sources.
This article is geared both to practitioners who may have had little prior training in or experience with counseling program evaluations and to counselor educators interested in training students in counseling program evaluation methods. We begin by discussing accountability in counseling and the uses of counseling program evaluation. Next, we present the Accountability Bridge Counseling Program Evaluation Model and discuss the steps involved in its implementation. Finally, we discuss implications and make recommendations for training counselors in evaluation skills.
Accountability in Counseling
Accountability has become a catchword in today's sociopolitical climate. Since the 1960s, local, state, and federal government spending has been more closely scrutinized and the effectiveness of social programs and initiatives more carefully questioned (Houser, 1998; Kirst, 2000). As professionals in the social services field, counselors have not been shielded from the demands to demonstrate successful and cost-effective outcomes, nor have counseling programs. Despite increasing pressure to document effectiveness, some counselors maintain that counseling programs are generally immeasurable (Loesch, 2001). However, given the rising demands for accountability in education and social programs, such an attitude is undoubtedly naïve. In fact, funding of educational programs and social services often hinges on the ability to demonstrate successful outcomes to stakeholders. Because counselors often rely on third-party and government funding, the future of the counseling profession may indeed rest on the ability of practitioners to answer the calls for documentation of effectiveness (Houser, 1998).
School Counseling Accountability
Today’s school counselors face increased demands to
demon-
strate program effectiveness (Adelman, 2002; Borders, 2002;
Herr, 2002; House & Hayes, 2002; Lusky & Hayes, 2001).
Primarily rooted in the school reform movement,
demonstrat-
ing accountability is becoming a standard practice
among
school counselors (Dahir & Stone, 2003; Fairchild &
Seeley,
1995; Hughes & James, 2001; Myrick, 2003; Otwell &
Mullis,
1997; Vacc & Rhyne-Winkler, 1993). Standards-based
educa-
tion reforms, including the No Child Left Behind
(NCLB)
Act of 2001, have fueled pressures on local school systems
to demonstrate effective educational practices (Albrecht
&
Joles, 2003; Finn, 2002; Gandal & Vranek, 2001). The
NCLB
Act of 2001 emphasizes student testing and teacher
effective-
ness; however, school counselors have also recognized that
in
the current educational environment, actively evaluating the
effectiveness of their school counseling programs is crucial.
Although the pressures for accountability have seemingly increased in recent years, Lapan (2001) noted that school counselors have developed results-based systems and used student outcome data for many years. Furthermore, school counselors have historically been connected with school reform, and their roles have often been shaped by educational legislation (Herr, 2002).
Although accountability demands are numerous, school counselors may fail to evaluate their programs because of time constraints, the elusiveness of measuring school counseling outcomes, lack of training in research and evaluation methods, and the fear that evaluation results may discredit school counseling programs (Schmidt, 1995). Because of these factors, school counselors attempting to provide accountability may have relied on simple tallies of the services and programs offered to students. However, as discussed by Fairchild and Seeley (1995), merely documenting the frequency of school counseling services no longer meets the criteria for demonstrating program effectiveness. Although data about service provision may be important, school counselors must engage in ongoing evaluations of their counseling programs in order to assess the outcomes and the impact of their services.
Trevisan (2000) emphasized that school counseling program evaluation may help the school counseling profession by providing accountability data to stakeholders, generating feedback about program effectiveness and program needs, and clarifying the roles and functions of school counselors. As the profession of school counseling evolves, increasing emphasis on leadership and advocacy (Erford, House, & Martin, 2003; House & Sears, 2002) and on comprehensive school counseling programs (American School Counselor Association [ASCA], 2003; Sink & MacDonald, 1998; Trevisan, 2002b) will coincide with ongoing research and program evaluation efforts (Paisley & Borders, 1995; Whiston, 2002; Whiston & Sexton, 1998). ASCA's (2003) revised national standards for school counseling reflect the importance of school counseling accountability and provide direction for practicing school counselors in the evaluation of their comprehensive school counseling programs (Isaacs, 2003). Considering the accountability and outcomes-focused initiatives in today's education environment, school counselors need skills and tools for systematically evaluating the impact of the services they provide (Trevisan, 2001).
Mental Health Counseling Accountability
Like professional school counselors, today's mental health counselors have experienced significant pressures to demonstrate the effectiveness and the efficiency of their counseling services. To secure managed care contracts and receive third-party reimbursements, mental health counselors are increasingly required to keep detailed records about specific interventions and outcomes of counseling sessions (Granello & Hill, 2003; Krousel-Wood, 2000; Sexton, 1996). Despite the financial implications of avoiding such accountability measures, many mental health counselors have fought for autonomy from third-party payers in the provision of counseling services. Mental health counselors often indicate that their ability to provide quality mental health care to clients is hampered by managed care's demands to demonstrate technical proficiency and cost-effective service delivery (Scheid, 2003). Furthermore, mental health counselors often express concerns about their therapeutic decision-making capacity being curtailed by managed care (Granello & Hill, 2003).
Managed care’s mandate for accountability in the field of
mental health counseling may have resulted, in part,
from
counselors’ failure to initiate their own outcomes
assessments
(Loesch, 2001). However, the emergence of empirically sup-
ported treatments (ESTs) has helped counselors respond to
the call for accountability from managed care (Herbert,
2003).
Specifically, ESTs draw on evidence-based practices
from
empirical counseling research to provide counselors with
intervention guidelines and treatment manuals for
specific
client problems. Yet, mental health counselors may resist
the
use of such approaches, insisting that counseling procedures
and outcomes cannot be formally measured and that
attempt-
ing such evaluations merely reduces time spent
providing
counseling services (Sanderson, 2003). Today’s managed
care companies, however, may require counselors to
base
their practice on specific ESTs in order to receive payment
for services. Further complicating the issue is the fact that,
Journal of Counseling & Development ■ Spring 2007 ■
Volume 85 165
The Accountability Bridge Model for Counselors
as previously noted with other areas of counseling,
mental
health counselors often receive no training in evaluating the
outcomes and impact of their services (Granello & Hill,
2003;
Sexton et al., 1997). Ultimately, resistance from mental
health
counselors to document counseling outcomes may be due to
insufficient counselor training in evaluation methods.
Despite the tumultuous history of the pressures brought to bear on mental health practitioners by managed care for accountability, there is a major impetus for shifting toward examining program effectiveness and outcomes in mental health counseling: the benefit of forging a professional identity. Kelly (1996) underscored the need for mental health counselors to be accepted as legitimate mental health providers who are on the same professional level as social workers, psychologists, and psychiatrists. The ability to document outcomes and identify effective treatments is, therefore, critical in furthering the professional identity of mental health counselors within the mental health professions.
Accountability in Other Counseling Specialties
Although most literature on counseling accountability emphasizes school and mental health settings, calls for accountability have also been directed to other counseling specialties. Bishop and Trembley (1987) discussed the accountability pressures faced in college counseling centers. Similar to school counselors and mental health counselors, college counselors and those in authority in college counseling centers have resisted accountability demands placed on them by authorities in higher education. Bishop and Trembley also noted that some counselors have maintained that counseling centers are designed for practice rather than research.
Ultimately, all counseling practitioners, regardless of their specialty area, are faced with the need to demonstrate program effectiveness. Although counselors may be hesitant or unwilling to evaluate the effectiveness of their services because they see little relevance to their individual practice, the future of the counseling profession may well be shaped by the way practitioners respond to accountability demands.
Program Evaluation in Counseling
In recent years, the terms program evaluation and accountability have often been used synonymously in discussions of counseling research and outcomes. However, accountability efforts in counseling generally result from external pressures to demonstrate efficiency and effectiveness. On the other hand, counselor-initiated program evaluations can be used to better inform practice and improve counseling services. We believe that a key shift in the profession would be to have counselors continually evaluate their programs and outcomes not because of external pressures, but from a desire to enhance client services and to advocate for clients and the counseling profession. New perspectives on the role of evaluation of counseling practices may ultimately help program evaluation become a standard of practice in counseling.
Program evaluation models have proliferated in the fields of economics, political science, sociology, psychology, and education (Hosie, 1994) and have been used for improving quality (Ernst & Hiebert, 2002), assessing goal achievement, informing decision making, determining consumer impact, and examining cost-effectiveness (Madaus & Kellaghan, 2000). Many program evaluation models were developed for use in large-scale organizational evaluations and are, thus, impractical for use by counselors. Furthermore, large-scale program evaluation models are generally based on the assumption that a staff of independent evaluation experts or an assessment team will plan and implement the evaluation. Within the counseling professions, however, financial constraints generally make such independent evaluations of programs unfeasible. Consequently, counselors usually rely on limited resources and their own research skills to carry out an evaluation of program effectiveness. Fortunately, many of the principles and practices of large-scale evaluation models can be adapted for use by counselors.
Given the wide range of program evaluation definitions and approaches, models from human services professions and education appear most relevant for the needs of counselors because these models generally emphasize ongoing evaluation for program improvement (e.g., Stufflebeam, 2000a). Counseling program evaluation may be defined as the ongoing use of evaluation principles by counselors to assess and improve the effectiveness and impact of counseling programs and services. Ongoing counseling program evaluations can provide crucial feedback about the direction and the growth of counseling services and can also meet the accountability required by stakeholders (Boulmetis & Dutwin, 2000; Loesch, 2001; Stufflebeam, 2000b).
Reasons for Evaluating Counseling Programs
Program evaluations may be initiated for various reasons; however, evaluations are intended to generate practical information rather than to be mere academic exercises (Royse, Thyer, Padgett, & Logan, 2001). Counseling program evaluations should, therefore, provide concrete information about the effectiveness, the efficiency, and the impact of services (Boulmetis & Dutwin, 2000). Specifically, counseling program evaluations can yield information that will demonstrate the degree to which clients are being helped. Evaluations may also provide feedback about client satisfaction and can help to distinguish between effective and ineffective approaches for the populations being served (Isaacs, 2003). On a broader scope, program evaluations can help to determine if services are having an influence on larger social problems (Royse et al., 2001). On the contextual level, evaluations can provide information about the use of staff and program resources in the provision of services (Stufflebeam, 2000a).
Accountability to stakeholders has often been a consideration in formulating approaches to counseling program evaluation. For example, Lapan (2001) indicated that program evaluations help counselors to identify effective services that are valued by stakeholders. Thus, by using stakeholder feedback in program planning and then providing valued services, counselors are better prepared to demonstrate the accountability of their programs and practice. Internal accountability may be requested by administrators of local programs to determine if program staff and resources are being used effectively. On the other hand, external accountability may be requested by policy makers and stakeholders with an interest in the effectiveness of provided services (Priest, 2001).
Counseling program evaluations are generally implemented to provide information about local needs; however, in some instances information from local evaluations may have significant implications for the entire counseling profession. As discussed by Whiston (1996), the professional identity of counselors can be enhanced through action research that demonstrates the effectiveness of services. By conceptualizing program evaluations as a type of action research, counselors have the potential to consider this effort a contribution to the growing research base in counseling.
Questions That Evaluations May Answer
Counseling program evaluations, like all forms of evaluations, are undertaken to answer questions about the effectiveness of programs and services in meeting specific goals (Berk & Rossi, 1999). Questions about the overall effectiveness and impact of services may be answered, as well as more discrete, problem-specific concerns. Furthermore, questions posed in evaluations help guide the collection and analysis of outcome information and the subsequent reporting of outcomes to stakeholders.
Numerous questions may be explored with evaluations. Powell, Steele, and Douglah (1996) indicated that evaluation questions generally fall into four broad categories: outcomes and impacts, program need, program context, and program operations. The following are some examples of the types of questions that counseling program evaluations may answer:
• Are clients being helped?
• What methods, interventions, and programs are most helpful for clients?
• How satisfied are clients with services received?
• What are the long-term effects of counseling programs and services?
• What impact do the services and programs have on the larger social system?
• What are the most effective uses of program staff?
• How well are program objectives being met?
Program evaluations are generally guided by specific questions related to program objectives. Guiding questions help counselors to plan services and gather data specific to the problems under investigation. Depending on program and stakeholder needs, counseling evaluations may be designed to answer many questions simultaneously or they may be focused on specific objectives and outcomes. As part of an ongoing process, the initial cycle of a counseling program evaluation may yield information that can help to define or refine further problems and questions for exploration in the next evaluation cycle.
Ultimately, counseling program evaluations may serve many purposes and may provide answers to a variety of questions. However, if counselors are to implement evaluations, a practical framework for conceptualizing the evaluation process seems essential. Counselors, thus, need a conceptual foundation for guiding the evaluation of their programs and services.
The Accountability Bridge Counseling Program Evaluation Model for Counselors
The Accountability Bridge Counseling Program Evaluation Model (see Figure 1) provides a framework to be used by individual counselors and within counseling programs and counseling agencies to plan and deliver counseling services and to assess their effectiveness and impact. Drawing on concepts from the business evaluation model proposed by Ernst and Hiebert (2002) and the Context, Input, Process, Product (CIPP) Model developed by Stufflebeam (2000a), the Accountability Bridge Counseling Program Evaluation Model organizes counseling evaluation into two recurring cycles that represent a continual refinement of services based on outcomes, stakeholder feedback, and the needs of the populations served. The counseling program evaluation cycle focuses on the provision and outcomes of counseling services, whereas the counseling context evaluation cycle examines the impact of counseling services on stakeholders and uses their feedback, along with the results yielded by needs assessments, to establish and refine the goals of counseling programs. The two cycles are connected by an "accountability bridge," whereby results from counseling practices are communicated to stakeholders within the context of the larger service system. Providing accountability to stakeholders is, therefore, an integral part of the model. Although it is beyond the scope of this article to discuss each component in depth, a basic review of the framework and principles of the model will help counselors begin to conceptualize the process of planning and implementing counseling program evaluations.
Counseling Program Evaluation Cycle
The counseling program evaluation cycle involves the planning and implementation of counseling practice and culminates with assessing the outcomes of individual and group counseling, guidance services, and counseling programs. Four stages are involved in the counseling program evaluation cycle.
1. Program planning. Although we enter the discussion of the model at the program planning stage, information obtained from the counseling context evaluation cycle is critical in the planning process. Thus, on the basis of input obtained from needs assessments and the subsequent formation of service objectives, counseling programs and services are planned and developed to address the needs of the populations served. Program planning involves identifying specific counseling methods and activities that are appropriate for certain populations as well as determining the availability of needed resources, including staff, facilities, and special materials (Royse et al., 2001).
Lapan (2001) stressed that effective school counseling programs meet objectives by planning results-based interventions that can be measured. Therefore, a key component of the program planning process involves the simultaneous planning of methods for measuring outcomes (Boulmetis & Dutwin, 2000). For instance, during the program planning phase, a community counseling agency that is planning a new substance abuse aftercare program should determine the means of assessing client progress through the program. Furthermore, developing multiple outcome measures can help increase the validity of findings. Gysbers and Henderson (2000) discussed several means for assessing school counseling outcomes, including pretest–posttest instruments, performance indicators, and checklists. Studer and Sommers (2000) indicated that multiple measures, such as assessment instruments, observable data, available school-based data, and client/parent/teacher interviews, could be used in school counseling program evaluation. In mental health and college counseling specialties, similar measures of client and program progress can be used, including standardized assessment tools such as depression and anxiety inventories. Other means of collecting outcome data include surveys, individual and group interviews, observation methods, and document review (Powell et al., 1996). Furthermore, data can be collected over a 1- to 3-year period to determine program effectiveness over longer periods of time (Studer & Sommers, 2000).
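To make such planning concrete, it can help to settle on a data layout before services begin. The following is a minimal sketch in Python, assuming outcomes are logged to a flat file; the file name, column names, measure, and records are hypothetical illustrations, not part of the model itself.

```python
# Minimal sketch: one flat record per administration of an outcome measure,
# so multiple measures and multiple years can share a single layout.
import csv

FIELDS = ["client_id", "measure", "administration", "score", "date"]

# Hypothetical records for one client on a depression inventory.
records = [
    {"client_id": "C01", "measure": "depression_inventory",
     "administration": "pre", "score": 28, "date": "2006-09-01"},
    {"client_id": "C01", "measure": "depression_inventory",
     "administration": "post", "score": 17, "date": "2006-12-15"},
]

with open("outcomes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
```

Because each row names its measure and date, the same file can accumulate pretest–posttest instruments, checklists, and survey results across the 1- to 3-year window described above.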
A final consideration in the program planning stage involves determining when clients will complete selected measures and assessments. Individuals who will be responsible for gathering and processing the information should be identified as well. For example, in a community agency setting, counselors may take responsibility for collecting data about their own client caseload, whereas a counselor supervisor may collect data from community sources.
2. Program implementation. After programs and services have been planned and outcome measures have been selected, programs and services are initiated. Sometimes referred to as "formative evaluation," the program implementation phase actualizes the delivery of services shaped by input from the counseling context evaluation cycle. During program implementation, counselors may identify differences between the planned programs and the realities of providing the services. Therefore, at this point, decisions may be made to change programs before they are fully operational or to make refinements in programs and services as the need arises.
3. Program monitoring and refinement. Once programs and services have been initiated and are fully operational, counselors may need to make adjustments to their practice based on preliminary results and feedback from clients and other interested parties. Programs and services may, therefore, need to be refined and altered to successfully meet the needs of the clientele served. Monitoring program success helps to ensure the quality of counseling services and maximizes the likelihood of finding positive results during outcomes assessments.
[Figure 1. Accountability Bridge Counseling Program Evaluation Model: the counseling program evaluation cycle (including program monitoring and refinement) and the counseling context evaluation cycle (including feedback from stakeholders), joined by the accountability bridge.]
4. Outcomes assessment. As programs and services are completed, outcomes assessments help to determine if objectives have been met. Therefore, during the outcomes assessment phase, final data are collected, and all program data are analyzed to determine the outcomes of interventions and programs. Counseling outcome data should be analyzed and interpreted as soon as possible after being collected (Gysbers & Henderson, 2000). Data analysis approaches differ for quantitative and qualitative data, and counselors with limited research backgrounds may need to seek assistance from peers and supervisors with knowledge of analyzing a variety of data sets. Available data analysis computer software can also expedite the analysis and interpretation of data. Such software programs also allow for easy creation of charts and graphs that can play a key role in the dissemination of evaluation results.
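To illustrate the kind of quantitative analysis such software performs, the following is a minimal sketch, assuming paired pretest and posttest scores and the widely used scipy library; the measure and all scores are hypothetical.

```python
# Minimal sketch: paired analysis of hypothetical pre/post outcome scores.
from statistics import mean

from scipy.stats import ttest_rel  # paired-samples t test

# Pre- and post-program scores for the same ten clients on an anxiety
# inventory (lower scores indicate improvement). All values are invented.
pre = [24, 30, 27, 22, 35, 29, 31, 26, 28, 33]
post = [18, 25, 26, 15, 30, 22, 27, 20, 21, 28]

t_stat, p_value = ttest_rel(pre, post)
print(f"Mean improvement: {mean(pre) - mean(post):.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same numbers can then be graphed (e.g., a bar chart of mean pre and post scores) for inclusion in reports to stakeholders.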
The Accountability Bridge
We conceptualize the process of communicating outcome data and program results to stakeholders as the "accountability bridge" between counseling programs and the context of counseling services. Outcome data and evaluation findings are the means for providing information about program effectiveness to stakeholders. When counselors are asked to demonstrate program effectiveness and efficiency, they can present information from the counseling program evaluation cycle to interested parties. However, beyond being merely an ameliorative process, communicating results to stakeholders can also be conceptualized as a marketing tool whereby counselors help maintain support and increase the demand for their services (Ernst & Hiebert, 2002). Therefore, rather than waiting for external requests for accountability, counselors should consider the task of communicating program results to stakeholders as being a standard part of the counseling program evaluation process.
In the program evaluation literature, stakeholders are often referred to as "interested parties" (Berk & Rossi, 1999), meaning all individuals and organizations involved in or affected by a program (Boulmetis & Dutwin, 2000). As discussed by Loesch (2001), the most obvious stakeholders in counseling programs are those clients receiving services. In addition, stakeholders of counseling programs may include funding sources, other professional counselors, community members, administrators, staff, and organizations or programs that refer clients. Information provided to stakeholders must be tailored to address the concerns of the specific group. For instance, when communicating results, counselors may want to consider if their audience will be more impressed with numbers and statistics or if case studies and personal narratives will be more effective (Powell et al., 1996).
Evaluation reports and summaries can be used to disseminate information about program outcomes to stakeholders. Counseling program evaluation reports may be structured to include (a) an introduction defining the purposes and goals of programs and of the evaluation, (b) a description of programs and services, (c) a discussion of the evaluation design and data analysis procedures, (d) a presentation of the evaluation results, and (e) a discussion of the findings and recommendations of the evaluation (Gysbers & Henderson, 2000; Royse et al., 2001). In addition to written reports, formal presentations of program results may also be an effective means for fulfilling the requirement of accountability to stakeholders.
Counseling Context Evaluation Cycle
The counseling context evaluation cycle focuses on the impact that the counseling practice has on stakeholders in the context of the larger organizational system. Using feedback from stakeholders, counselors and individuals responsible for counseling programs may engage in strategic planning and conduct needs assessments to develop and refine program objectives. The counseling context evaluation cycle consists of four stages.
1. Feedback from stakeholders. Once outcome data have been reported to stakeholders, counselors should actively solicit their feedback. Indeed, stakeholder feedback should be considered a vital element in the eventual design and delivery of counseling services. Viability of counseling services is maintained through a continual cycle of stakeholder feedback regarding the development of program goals and the design and evaluation of counseling services (Ernst & Hiebert, 2002).
2. Strategic planning. After feedback from stakeholders has been solicited, counselors and individuals in their organizational systems may engage in strategic planning designed to examine the operations of the organization. In particular, strategic planning may include an examination and possible revision of the purpose and mission of programs and services. Furthermore, during strategic planning, decisions about the allocation of staff and monetary resources may be considered.
3. Needs assessment. Coinciding with strategic planning, needs assessments can help provide counselors with crucial information that shapes the provision of counseling programs and services. In particular, identifying the needs of stakeholders is a key part of developing programs that will have a positive impact. Needs assessments should, therefore, gather information from multiple stakeholders and should be planned with a clear indication of what information is needed (Royse et al., 2001; Stufflebeam, McCormick, Brinkerhoff, & Nelson, 1985). A key part of needs assessment is the development of the method or instrument for collecting information. Written surveys and checklists can be used as well as focus-group meetings, interviews, and various forms of qualitative inquiry. Effective needs assessments will help clarify and prioritize needs among stakeholders and the populations served.
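For rating-scale surveys, prioritization can be as simple as ranking candidate needs by their average rating. The following is a minimal sketch, assuming a hypothetical written survey in which stakeholders rated each candidate need from 1 (low priority) to 5 (high priority); the needs and ratings are invented for illustration.

```python
# Minimal sketch: rank hypothetical needs-assessment items by mean rating.
from statistics import mean

# Each key is a candidate need; each list holds one rating per stakeholder.
ratings = {
    "study skills group": [5, 4, 5, 3, 4],
    "parent outreach night": [3, 2, 4, 3, 3],
    "peer mediation program": [4, 5, 4, 4, 5],
}

# Highest-rated needs first, so program planning can address them first.
for need, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]),
                           reverse=True):
    print(f"{mean(scores):.1f}  {need}")
```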
4. Service objectives. Developing precise program goals and objectives is crucial for the eventual provision and evaluation of counseling programs and services. Goals and objectives should be developed based on prior outcomes of counseling services, stakeholder feedback, and information gathered from needs assessments. Programs without clearly identified goals and objectives cannot be evaluated for impact and effectiveness (Berk & Rossi, 1999). Royse et al. (2001) discussed two main types of program objectives: process objectives and outcome objectives. Process objectives may be thought of as milestones or competencies needed for achieving long-term goals. In counseling, process objectives may be considered a series of benchmarks that indicate progress toward program growth and improvement. Process objectives are achieved through a series of developmental steps, whereas outcome objectives refer to specific competencies or outcomes to be achieved in a given time period.
Once program objectives have been established, the entire evaluation cycle is repeated, with information from the counseling context evaluation cycle feeding back into the program planning stage of the counseling program evaluation cycle. Ultimately, counseling program evaluation should be considered an ongoing process rather than a single incident.
Implications for Counselors and Counselor Education
Meeting the Challenges of Counseling Program Evaluations
Although counseling program evaluation may enhance client services and promote the professional identity of counselors, barriers to implementing program evaluation cannot be overlooked. First of all, program evaluation practices have often been considered too time-consuming and complex (Loesch, 2001; Wheeler & Loesch, 1981). Thus, counselors who have not previously initiated evaluations of their programs and services may be hesitant to embark on a seemingly difficult task. However, by conceptualizing program evaluation as a collaborative process, counselors may be more interested and motivated to participate in evaluations. By teaming with other professionals, counselors may help to ensure that evaluations are implemented effectively and that results are disseminated in an effective manner. Furthermore, collaboration helps counselors new to program evaluation to obtain support and mentoring during the evaluation process (Trevisan, 2002a).
Another major obstacle to any outcome or evaluation study of counseling is the complex and dynamic nature of the counseling process itself. As discussed by Whiston (1996), the seemingly immeasurable nature of counseling often makes straightforward evaluations of its effectiveness difficult. The complexity of counseling processes may be addressed by developing program and service objectives that are more readily measurable. For example, client improvement is a concept that seems vague and difficult to measure. However, by being more specific and operationalizing definitions of client improvement, counselors can more easily measure client change. For instance, comparing pre- and posttreatment scores on a standardized measure of depression can provide counselors with one measure of the effectiveness of counseling interventions.
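A brief sketch of what such an operationalized comparison might look like follows; the scores, the 5-point improvement cutoff, and the effect-size convention (mean change divided by the standard deviation of the change scores) are illustrative assumptions rather than anything prescribed by the model.

```python
# Minimal sketch: summarizing hypothetical pre/post depression scores.
from statistics import mean, stdev

pre = [31, 27, 35, 22, 29, 33, 26, 30]
post = [22, 24, 28, 15, 21, 27, 18, 23]

changes = [b - a for a, b in zip(pre, post)]   # negative change = improvement
effect_size = mean(changes) / stdev(changes)   # one common paired-data convention
improved = sum(1 for c in changes if c <= -5)  # hypothetical 5-point cutoff

print(f"Mean change: {mean(changes):.1f} points")
print(f"Standardized effect size: {effect_size:.2f}")
print(f"Clients improved by 5+ points: {improved} of {len(changes)}")
```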
Considerations for Training and Research in Program Evaluation Methods
Despite increased focus on accountability and calls for evaluation-based counseling practice, counselors frequently lack the training to evaluate the effectiveness and impact of their services. Counselor training has rarely emphasized research and evaluation skills as a method for guiding practice (Heppner et al., 1999; Sexton et al., 1997). As a result, counselors may see little utility in acquiring and using research and evaluation skills. Counselor educators who are responsible for counselor education programs must, therefore, reconsider the importance placed on acquiring research and evaluation skills in the training of new counselors. The 2001 standards of the Council for Accreditation of Counseling and Related Educational Programs have addressed the need for today's counselors to develop skills in research and evaluation. Yet, as pointed out by Trevisan (2000), the mere inclusion of evaluation skills in training standards has not spurred counselors' use of evaluation activities.
Whiston and Coker (2000) called for reconstructing the clinical training of counselors based on findings in counseling research. Integrating evaluation and research practices into clinical training may likewise enhance the clinical preparation of new counselors by giving them supervised experiences in which they use evaluation methods. Trevisan (2000, 2002a) advocated for a sequential approach to teaching program evaluation skills in counselor education programs. Accordingly, counselors might first receive didactic training in evaluation and research methods. Next, counselors could be given clinical experiences that would allow them to implement research and evaluation skills under supervision. Finally, trained counselors would be able to conceptualize and implement evaluations of counseling programs on their own, consulting with other professionals as necessary.
In addition to revising the evaluation and research training in counselor education, providing postgraduate training and workshop opportunities to practicing counselors must be considered. Counseling conferences should, therefore, actively solicit programs and presentations geared toward helping counselors develop skills in research and evaluation. Furthermore, counselors should purposefully seek opportunities for the development of their research and evaluation skills.
Although counseling program evaluation has been discussed for many years, few studies have appeared in the literature that examine the use of program evaluation by practicing counselors. We, therefore, issue a call to the profession to systematically investigate the use of evaluation practices in counseling. Such findings could have a substantial impact on the continued development of the counseling profession by providing further understanding of counseling program evaluation models and practices.
Conclusion
Twenty-first-century counselors can no longer question the merit of and need for evaluating their counseling programs and services. Instead, today's counselors must actively learn about and use evaluation methods as a means of enhancing their counseling practices, providing accountability to stakeholders, and enhancing the professional identity of all counselors. As Wheeler and Loesch (1981) predicted nearly 25 years ago, program evaluation continues to be a force in the development of the counseling professions. They likewise suggested that counseling professionals are gradually beginning to recognize that if counseling program evaluations are to be used, they must be initiated and implemented by counselors themselves. Given the persistence of the topic and the ongoing calls for outcomes research and accountability of counseling practices, program evaluation can no longer be ignored by counseling professionals. Indeed, program evaluation may be considered a newly evolving standard of practice in counseling.
References
Adelman, H. S. (2002). School counselors and school reform: New directions. Professional School Counseling, 5, 235–248.
Albrecht, S. F., & Joles, C. (2003). Accountability and access to opportunity: Mutually exclusive tenets under a high-stakes testing mandate. Preventing School Failure, 48, 86–91.
American School Counselor Association. (2003). The American School Counselor Association National Model: A framework for school counseling programs. Alexandria, VA: Author.
Benkofski, M., & Heppner, C. C. (1999). Program evaluation. In P. P. Heppner, D. M. Kivlighan, & B. E. Wampold, Research design in counseling (pp. 488–513). Belmont, CA: Wadsworth.
Berk, R. A., & Rossi, P. H. (1999). Thinking about program evaluation (2nd ed.). Thousand Oaks, CA: Sage.
Bishop, J. B., & Trembley, E. L. (1987). Counseling centers and accountability: Immoveable objects, irresistible forces. Journal of Counseling and Development, 65, 491–494.
Borders, L. D. (2002). School counseling in the 21st century: Personal and professional reflections. Professional School Counseling, 5, 180–185.
Borders, L. D., & Drury, S. M. (1992). Comprehensive school counseling programs: A review for policymakers and practitioners. Journal of Counseling & Development, 70, 487–498.
Boulmetis, J., & Dutwin, P. (2000). The ABCs of evaluation: Timeless techniques for program and project managers. San Francisco: Jossey-Bass.
Council for Accreditation of Counseling and Related Educational Programs. (2001). CACREP accreditation manual. Alexandria, VA: Author.
Dahir, C. A., & Stone, C. B. (2003). Accountability: A M.E.A.S.U.R.E. of the impact school counselors have on student achievement. Professional School Counseling, 6, 214–221.
Erford, B. T., House, R., & Martin, P. (2003). Transforming the school counseling profession. In B. T. Erford (Ed.), Transforming the school counseling profession (pp. 1–20). Upper Saddle River, NJ: Prentice Hall.
Ernst, K., & Hiebert, B. (2002). Toward the development of a program evaluation business model: Promoting the longevity of counselling in schools. Canadian Journal of Counselling, 36, 73–84.
Fairchild, T. N. (1993). Accountability practices of school counselors: 1990 national survey. The School Counselor, 40, 363–374.
Fairchild, T. N. (1994). Evaluation of counseling services: Accountability in a rural elementary school. Elementary School Guidance and Counseling, 29, 28–37.
Fairchild, T. N., & Seeley, T. J. (1995). Accountability strategies for school counselors: A baker's dozen. The School Counselor, 42, 377–392.
Finn, C. E. (2002). Making school reform work. The Public Interest, 148, 85–95.
Gandal, M., & Vranek, J. (2001, September). Standards: Here today, here tomorrow. Educational Leadership, 6–13.
Granello, D. H., & Hill, L. (2003). Assessing outcomes in practice settings: A primer and example from an eating disorders program. Journal of Mental Health Counseling, 25, 218–232.
Gysbers, N. C., & Henderson, P. (2000). Developing and managing your school guidance program (3rd ed.). Alexandria, VA: American Counseling Association.
Gysbers, N. C., Hughey, K., Starr, M., & Lapan, R. T. (1992). Improving school guidance programs: A framework for program, personnel, and results evaluation. Journal of Counseling & Development, 70, 565–570.
Hadley, R. G., & Mitchell, L. K. (1995). Counseling research and program evaluation. Pacific Grove, CA: Brooks/Cole.
Heppner, P. P., Kivlighan, D. M., & Wampold, B. E. (1999). Research design in counseling (2nd ed.). Belmont, CA: Wadsworth.
Herbert, J. D. (2003). The science and practice of empirically supported treatments. Behavior Modification, 27, 412–430.
Herr, E. L. (2002). School reform and perspectives on the role of school counselors: A century of proposals for change. Professional School Counseling, 5, 220–234.
Hosie, T. (1994). Program evaluation: A potential area of expertise for counselors. Counselor Education and Supervision, 33, 349–355.
House, R. M., & Hayes, R. L. (2002). School counselors: Becoming key players in school reform. Professional School Counseling, 5, 249–256.
House, R. M., & Sears, S. J. (2002). Preparing school counselors to be leaders and advocates: A critical need in the new millennium. Theory Into Practice, 41, 154–162.
Houser, R. (1998). Counseling and educational research: Evaluation and application. Thousand Oaks, CA: Sage.
Hughes, D. K., & James, S. H. (2001). Using accountability data to protect a school counseling program: One counselor's experience. Professional School Counseling, 4, 306–309.
Isaacs, M. L. (2003). Data-driven decision making: The engine of accountability. Professional School Counseling, 6, 288–295.
Kellaghan, T., & Madaus, G. F. (2000). Outcome evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 97–112). Boston: Kluwer Academic.
Kelly, K. R. (1996). Looking to the future: Professional identity, accountability, and change. Journal of Mental Health Counseling, 18, 195–199.
Kettner, P. M., Moroney, R. M., & Martin, L. L. (1999). Designing and managing programs: An effectiveness-based approach (2nd ed.). Thousand Oaks, CA: Sage.
Kirst, M. W. (2000). Accountability: Implications for state and local policy makers. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 319–339). Boston: Kluwer Academic.
Krousel-Wood, M. A. (2000). Outcomes assessment and performance improvement: Measurements and methodologies that matter in mental health care. In P. Rodenhauser (Ed.), Mental health care administration: A guide for practitioners (pp. 233–253). Ann Arbor: University of Michigan Press.
Lapan, R. T. (2001). Results-based comprehensive guidance and counseling programs: A framework for planning and evaluation. Professional School Counseling, 4, 289–299.
Loesch, L. C. (2001). Counseling program evaluation: Inside and outside the box. In D. C. Locke, J. E. Myers, & E. L. Herr (Eds.), The handbook of counseling (pp. 513–525). Thousand Oaks, CA: Sage.
Lusky, M. B., & Hayes, R. L. (2001). Collaborative consultation and program evaluation. Journal of Counseling & Development, 79, 26–38.
Madaus, G. F., & Kellaghan, T. (2000). Models, metaphors, and definitions in evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 19–31). Boston: Kluwer Academic.
Myrick, R. D. (2003). Accountability: Counselors count. Professional School Counseling, 6, 174–179.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Otwell, P. S., & Mullis, F. (1997). Academic achievement and counselor accountability. Elementary School Guidance and Counseling, 31, 343–348.
Paisley, P. O., & Borders, L. D. (1995). School counseling: An evolving specialty. Journal of Counseling & Development, 74, 150–153.
Powell, E. T., Steele, S., & Douglah, M. (1996). Planning a program evaluation. Madison: Division of Cooperative Extension of the University of Wisconsin-Extension.
Priest, S. (2001). A program evaluation primer. Journal of Experiential Education, 24, 34–40.
Royse, D., Thyer, B. A., Padgett, D. K., & Logan, T. K. (2001). Program evaluation: An introduction (3rd ed.). Belmont, CA: Brooks/Cole.
Sanderson, W. C. (2003). Why empirically supported treatments are important. Behavior Modification, 27, 290–299.
Scheid, T. L. (2003). Managed care and the rationalization of mental health services. Journal of Health and Social Behavior, 44, 142–161.
Schmidt, J. J. (1995). Assessing school counseling programs through external reviews. The School Counselor, 43, 114–123.
Sexton, T. L. (1996). The relevance of counseling outcome research: Current trends and practical implications. Journal of Counseling & Development, 74, 590–600.
Sexton, T. L. (1999). Evidence-based counseling: Implications for counseling practice, preparation, and professionalism. Greensboro, NC: ERIC Clearinghouse on Counseling & Student Services. (ERIC Document Reproduction Service No. ED 435 948)
Sexton, T. L., Whiston, S. C., Bleuer, J. C., & Walz, G. R. (1997). Integrating outcome research into counseling practice and training. Alexandria, VA: American Counseling Association.
Sink, C. A., & MacDonald, G. (1998). The status of comprehensive guidance and counseling in the United States. Professional School Counseling, 2, 88–94.
Studer, J. R., & Sommers, J. A. (2000). The professional school counselor and accountability. National Association of Secondary School Principals Bulletin, 84, 93–99.
Stufflebeam, D. L. (2000a). The CIPP model for evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 279–317). Boston: Kluwer Academic.
Stufflebeam, D. L. (2000b). Foundational models for 21st century program evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 33–96). Boston: Kluwer Academic.
Stufflebeam, D. L., McCormick, C. H., Brinkerhoff, R. O., & Nelson, C. O. (1985). Conducting educational needs assessment. Boston: Kluwer Academic.
Trevisan, M. S. (2000). The status of program evaluation expectations in state school counselor certification requirements. American Journal of Evaluation, 21, 81–94.
Trevisan, M. S. (2001). Implementing comprehensive guidance program evaluation support: Lessons learned. Professional School Counseling, 4, 225–228.
Trevisan, M. S. (2002a). Enhancing practical evaluation training through long-term evaluation projects. American Journal of Evaluation, 23, 81–92.
Trevisan, M. S. (2002b). Evaluation capacity in K-12 school counseling programs. American Journal of Evaluation, 23, 291–305.
Vacc, N. A., & Rhyne-Winkler, M. C. (1993). Evaluation and accountability of counseling services: Possible implications for a midsize school district. The School Counselor, 40, 260–266.
Wheeler, P. T., & Loesch, L. (1981). Program evaluation and counseling: Yesterday, today and tomorrow. The Personnel and Guidance Journal, 51, 573–578.
Whiston, S. C. (1996). Accountability through action research: Research methods for practitioners. Journal of Counseling & Development, 74, 616–623.
Whiston, S. C. (2002). Response to the past, present, and future of school counseling: Raising some issues. Professional School Counseling, 5, 148–156.
Whiston, S. C., & Coker, J. K. (2000). Reconstructing clinical training: Implications from research. Counselor Education and Supervision, 39, 228–253.
Whiston, S. C., & Sexton, T. (1998). A review of school counseling outcome research: Implications for practice. Journal of Counseling & Development, 76, 412–426.
Running head: VETERANS AND MILITARY FAMILIES

Veterans and Military Families
Annotated Bibliography
Student Name
COMM 2367: Persuasive Communication
Instructor Name
January 15, 2017
Veterans and Military Families
Annotated Bibliography

Begin this assignment with a 5-7 sentence introduction that provides an overview of your sources and demonstrates an understanding of the connections between them. The last sentence of your introduction is your thesis statement and should be in bold.

Link, P. E., & Palinkas, L. A. (2013). Long-term trajectories and service needs for military families. Clinical Child & Family Psychology Review, 16(4), 376-393. doi:10.1007/s10567-013-0145-z

Link and Palinkas’ research investigates the impact that military deployment and trauma have on family member relationships. Besides mental illness, families are also prone to other difficulties, such as struggles with relationships, which often result in divorce or domestic abuse. This article will help me explore the various ways in which both military members and their family members are affected by deployment, mental illness, and other stressful situations. The study also suggests ways in which families can seek help to deal with their difficulties. Besides government agencies, such as Veterans Affairs, various nonprofit organizations exist with the goal of providing care to the military and their families. Organizations including the National Science Foundation and NASA have funded the research of Dr. Lawrence Palinkas, a professor at the University of Southern California. The expertise of Dr. Patrick Link, a practicing psychiatrist, adds credibility to this article as well.
Philipps, D. (2014, December 31). Mission ends in Afghanistan, but sacrifices are not over for U.S. soldiers. The New York Times. Retrieved from http://www.nytimes.com/2015/01/01/us/mission-ends-but-sacrifices-are-not-over-for-us-soldiers.html

Because the war in Afghanistan is considered to be over, many Americans have turned their backs on service members and their families. Even though things may be coming to a close, people often forget that military personnel continue to serve overseas and risk their lives. This article provides a viewpoint from military families who often feel like the public has forgotten about those soldiers. It is important to provide support for military families who continue to have loved ones overseas, as well as those who have returned. This story will help provide context from current military families, while focusing on some of the difficulties that they endure. The article is credible because of its publication in an internationally respected periodical. In addition, Dave Philipps won the Pulitzer Prize for National Reporting, and he chose to document a family in the middle of the controversy.
U.S. Department of Veterans Affairs. (2014). PTSD: National Center for PTSD. Retrieved from http://www.ptsd.va.gov/about/index.asp

The United States Department of Veterans Affairs is at the forefront of PTSD research and care. For 25 years, the department has been treating PTSD and educating others about the illness through the National Center for PTSD. Its mission is to provide the best care to its patients, continue to explore the science of the illness, and educate physicians and other care providers. The website provides information about PTSD, insight into the center’s more than 100 research projects, and information on where to inquire about and receive care. The department is the primary source for PTSD information because of its world-renowned research, facilities, faculty, and experience. This mental illness is often associated with post-deployment military service members and their families; thus it will help strengthen my argument about their needs.

Write a 5-7 sentence conclusion that summarizes your main points and provides closure for your reader.
Research problem (topic): children & youth

Here are some related things I wrote before:

Writing Goal #1 - Learn to practice writing reviews with peers and give them good feedback.
Writing Goal #2 - Write something creative and relevant to world events every day.

Nonprofit Organization: Animal Welfare Foundation of Canada
Article: “Animal Cruelty or the Price of Dinner”
Summary: Farm animals like chickens do not elicit the same emotional responses as pets do, but they live in far worse conditions. These conditions also affect the people who eat them. Consumers should use their buying power to push for more humane conditions for farm animals, which is feasible with new technology.
Citation: https://www.nytimes.com/2016/04/17/opinion/sunday/animal-cruelty-or-the-price-of-dinner.html
COMM 2367: Persuasive Communication
ANNOTATED BIBLIOGRAPHY PAPER FAQ
General Questions
What if I have trouble finding sources about my topic?
I know it’s often a last resort for students, but librarians are specifically educated and trained to locate credible information quickly. They really do want to help you! If you go to library.osu.edu, you can find contact information on the right side of the page: phone, email, or instant chat. If you have been unsuccessfully searching for information for an hour, stop and contact someone at the library. Don’t waste valuable time being frustrated; ask for help.
How many sources should focus on my region?
You are not required to find any sources that specifically focus on your region right now. The primary goal for this assignment is to gather information and gain a general understanding of your topic. You can add regional sources to your next assignment.
How current should my sources be?
Because we can easily access current information, your sources should be no more than five years old. Sources that contain statistics should be as current as possible. If you find a source that is not current but believe it contains important and relevant information, email me.
Should my sources be listed in alphabetical order?
Yes, just as you would alphabetize on a reference page.
What is a DOI?
“The Digital Object Identifier (DOI®) System is for identifying content objects in the digital environment. DOI® names are assigned to any entity for use on digital networks. They are used to provide current information, including where they (or information about them) can be found on the Internet. Information about a digital object may change over time, including where to find it, but its DOI name will not change” (The DOI system, 2011).
Should I include the URL or the DOI name in my reference citation?
If you found the source in an academic database, you aren’t required to use the DOI or the URL. If, however, you located the article another way, use the DOI name. It will always be accurate and is much shorter than an entire URL.

I don’t see a DOI name. Should I use the URL?
If you found the source in an academic database, you aren’t required to use the DOI or the URL. If, however, you located the article another way, use the URL.
The title page on the Purdue OWL website is different from the title page we reviewed in class. Should I follow the example on the OWL or the one in my notes?
APA format does not provide definitive formatting information for class papers. Instead, its guidelines are focused on papers being submitted for degrees or journal publication. You may see a few differences in the header and on the title page. Please use the information from class to format your paper.
My assignment is longer than three pages. Do I need to cut some information?
No, but it should be no longer than four pages (not including the title page). The page requirement is a general guideline, not a strict requirement.
I have completed my assignment early. Can I email it to you for some feedback?
I don’t provide feedback via email for assignments completed ahead of time, but I am more than happy to meet with you during office hours before the due date. If you have a conflict with my office hours, we can try to schedule another time to meet.
Do you have any tips to avoid deductions for grammar/mechanics and APA formatting?
• Proofread carefully—begin with the last sentence or paragraph and read it aloud. This interrupts your flow of thought, and you are less likely to miss mistakes because you will focus more on the words and language use.
• Follow APA style for numbers in sentences. Refer to this link when including numbers in text: http://owl.english.purdue.edu/owl/resource/593/01/
• Choose words precisely. Avoid slang, clichés, and informal language: https://owl.english.purdue.edu/owl/owlprint/608/
• Check punctuation.
• Remember to change the margin default setting to 1” on every side.
What kind of information should I be looking for?
Keep your end goal in mind--you want to persuade the audience that your topic is a problem in your region. Look for sources that provide evidence of the problem. Some general suggestions for types of information from each source are as follows:
• A peer-reviewed journal article will most likely be about your topic/issue. The journal article will not provide regional information.
• A credible newspaper or periodical will provide current coverage of your topic/issue. You will likely find good regional information from this source. Below are suggestions based on your region:
  o Columbus: Columbus Dispatch
  o Ohio: Columbus Dispatch, Cleveland Plain Dealer, Cincinnati Enquirer
  o Midwest: any of the above and Chicago Tribune, Chicago Sun-Times, Indianapolis Star
  o United States: any of the above and The Wall Street Journal, The New York Times, Los Angeles Times, The Washington Post, TIME, Newsweek
• A government, university, or research institution website will be geared toward finding current research and statistics about the topic/issue.
I’m the Moderator and my group’s problem is broad. How do I narrow my search for sources?
I would encourage you to search for articles that cover the subtopics your group members have chosen. You may decide to locate one article for three different subtopics; you may decide to focus only on one or two subtopics for now. Also, as you search for subtopics, you may find some great articles that address your problem in general or that focus on your region. These are all acceptable. If you are still uncertain, see me during office hours.
COMM 2367: Persuasive Communication
ANNOTATED BIBLIOGRAPHY PAPER
30 points
Expected Learning Outcomes
GE Course
• Students retrieve and use written information analytically and effectively.
Second Writing Course
• Students access and use information critically and analytically.
Overview
This assignment requires you to begin researching your problem. In this paper, you will develop your ability to access, evaluate, and use credible information by identifying and summarizing highly credible sources.

Guidelines
In approximately 2-3 typed pages (not including the title page) using APA format, you should write an introduction, a thesis statement, a summary of 3 credible sources, and a conclusion. After conducting thorough research on your topic, you will demonstrate how the sources inform you about your problem and how they are connected. The sources included should reflect considerable research effort on your part; however, this should not be the end of your research effort. You are expected to continue to research your topic for several more weeks as you develop and refine your message strategy.
1. Write an introduction: https://cstw.osu.edu/writing-center/handouts/introductions
2. Write a thesis statement: https://cstw.osu.edu/writing-center/handouts/thesis-statements
3. List three sources in correct APA format. You must use the following source types:
   a. 1 peer-reviewed academic journal article
   b. 1 credible/reputable newspaper or periodical
   c. 1 government, university, or research institution website
4. Under each source, write a one-paragraph summary in which you answer the following questions:
   a. What are the main points of the source? What is its purpose? (Summarize the information.)
   b. How is the source relevant to your topic? How will the source be useful to you in developing a persuasive argument? How does it connect to the other sources? (Be specific about the type of information provided by the source.)
   c. Why is the source highly credible? What are the author’s/organization’s qualifications? (Provide personal and/or professional information.)
5. Write a conclusion: https://cstw.osu.edu/writing-center/handouts/conclusions
ANNOTATED BIBLIOGRAPHY PAPER GRADING RUBRIC
(Scores on each line run from Excellent to Poor.)

Introduction, Thesis, Conclusion
• Introduction communicates what the reader can expect; sets the tone and style of the argument: 3 / 2 / 1.5 / 1 / 0
• Thesis contains a claim about the problem that can be explained and justified: 3 / 2 / 1.5 / 1 / 0
• Conclusion effectively summarizes and provides closure: 3 / 2 / 1.5 / 1 / 0

Source Information
• Demonstrates strong summary skills by effectively and concisely identifying main points and purpose of source*: 6 / 4.5 / 3 / 1.5 / 0
• Clearly explains source’s relevance to topic and connection to other sources; thoughtfully considers usefulness of source for persuasive argument*: 6 / 4.5 / 3 / 1.5 / 0
• Effectively establishes credibility of source; provides specific personal or professional information*: 6 / 4.5 / 3 / 1.5 / 0
• Correctly uses APA format for reference citations: 3 / 2 / 1.5 / 1 / 0

* Sources that do not conform to the criteria in the COMM 2367 Source Requirements document will receive a zero (0) for these items.

Adjustments (your grade may be lowered up to 10% for each of the following; scores run from Excellent, -0%, to Poor, -10%)
• Free of errors in grammar and mechanics: -0 / -1 / -1.5 / -2 / -3
• Correct APA format (title page, header, margins, font, reverse indent for references): -0 / -1 / -1.5 / -2 / -3

TOTAL _____/30
Journal of Counseling & Development  ■  Spring 2007  ■  Volume.docx

  • 1. Journal of Counseling & Development ■ Spring 2007 ■ Volume 85162 Assessment & Diagnosis © 2007 by the American Counseling Association. All rights reserved. Program evaluation in counseling has been a consistent topic of discourse in the profession over the past 20 years (Gysbers, Hughey, Starr, & Lapan, 1992; Hadley & Mitchell, 1995; Loesch, 2001; Wheeler & Loesch, 1981). Considered an applied research discipline, program evaluation refers to a systematic process of collecting and analyzing information about the efficiency, the ef- fectiveness, and the impact of programs and services (Boulmetis & Dutwin, 2000). The field of program evaluation has grown rapidly since the 1950s as public and private sector organizations have sought quality, efficiency, and equity in the delivery of services (Stufflebeam, 2000b). Today, professional program evaluators are recognized as highly skilled specialists with advanced training in
  • 2. statistics, research methodology, and evaluation procedures (Hosie, 1994). Although program evaluation has developed as a distinct academic and professional discipline, human services professionals have frequently adopted program evaluation principles in order to conduct micro-evaluations of local services. From this perspective, program evaluation can be considered as a type of action research geared toward monitoring and improving a particular program or service. Because micro-evaluations are conducted on a smaller scale, they may be planned and implemented by practitioners. Therefore, for the purposes of this article, we consider counseling program evaluation to be the ongoing use of evaluation principles by counselors to assess and improve the effectiveness and impact of their programs and services. Challenges to Counseling Program Evaluation Counseling program evaluation has not always been conceptual- ized from the perspective of practicing counselors. For instance, Benkofski and Heppner (1999) presented guidelines for counsel- ing program evaluation that emphasized the use of independent
  • 3. evaluators rather than counseling practitioners. Furthermore, program evaluation literature has often emphasized evaluation models and principles that were developed for use in large- scale organizational evaluations by professional program evaluators (e.g., Kellaghan & Madaus, 2000; Kettner, Moroney, & Martin, 1999). Such models and practices are not easily implemented by counseling practitioners and may have contributed to the hesi- tance of counselors to use program evaluation methods. Loesch (2001) argued that the lack of counselor-specific evaluation models has substantially contributed to the dichotomy between research and practice in counseling. Therefore, new paradigms of counseling program evaluation are needed to increase the frequency of practitioner-implemented evaluations. Much of the literature related to counseling program evaluation has cited the lack of both counselors’ ability to systematically evaluate counseling services and of their interest in doing so (e.g., Fairchild, 1993; Whiston, 1996). Many reasons have been suggested for counselors’ failure to conduct evaluations. An important reason is that conducting an evaluation requires some degree of expertise in research
  • 4. methods, particularly in formulating research questions, col- lecting relevant data, and selecting appropriate analyses. Yet counselors typically receive little training to prepare them for demonstrating outcomes (Whiston, 1996) and evaluating their services (Hosie, 1994). Consequently, counselor education programs have been criticized for failing to provide appropri- ate evaluation and research training to new counselors (Bor- ders, 2002; Heppner, Kivlighan, & Wampold, 1999; Sexton, 1999; Sexton, Whiston, Bleuer, & Walz, 1997). Counselors may, therefore, refrain from program evaluation because of Randall L. Astramovich, Department of Counselor Education, University of Nevada, Las Vegas; J. Kelly Coker, Harbin and As- sociates Psychotherapy, Fayetteville, North Carolina. J. Kelly Coker is now at the Department of Counselor Education, Capella University. Correspondence concerning this article should be addressed to Randall L. Astramovich, Department of Counselor Education, University of Nevada, Las Vegas, 4505 Maryland Parkway, Box 453066, Las Vegas, NV 89154-3066 (e-mail: Randy. [email protected]). Program Evaluation: The Accountability Bridge Model for Counselors Randall L. Astramovich and J. Kelly Coker The accountability and reform movements in education and the human services professions have pressured coun-
  • 5. selors to demonstrate outcomes of counseling programs and services. Evaluation models developed for large-scale evaluations are generally impractical for counselors to implement. Counselors require practical models to guide them in planning and conducting counseling program evaluations. The authors present the Accountability Bridge Counseling Program Evaluation Model and discuss its use in evaluating counseling services and programs Journal of Counseling & Development ■ Spring 2007 ■ Volume 85 163 The Accountability Bridge Model for Counselors a lack of confidence in their ability to effectively collect and analyze data and apply findings to their professional practice (Isaacs, 2003). However, for those counselors with the req- uisite skills to conduct evaluations, their hesitance may be related to the fear of finding that their services are ineffective (Lusky & Hayes, 2001; Wheeler & Loesch, 1981). Despite calls for counselors and counseling programs to em- brace research and evaluation as an integral part of the provision of counseling services (e.g., Borders & Drury, 1992; Fairchild, 1994; Whiston, 1996), there is virtually no information that documents counselors’ interest in and use of counseling program
  • 6. evaluation. Although counselors may place minimal value on research and evaluation activities (Loesch, 2001), strong sociopolitical forces, including the emphasis on managed care in mental health and the school reform movement in public education, often require today’s counselors to use evaluation methods to demonstrate the effectiveness and impact of their counseling services. Program Evaluation and Accountability Distinguishing between program evaluation and accountability is essential because many professionals use the terms inter- changeably and, occasionally, as categories of each other. For instance, Isaacs (2003) viewed program evaluation as a type of accountability that focuses primarily on program effectiveness and improvement. However, from our perspective, counseling program evaluation precedes accountability. As defined by Loesch (2001), counseling program evaluations help practi- tioners “maximize the efficiency and effectiveness of service delivery through careful and systematic examination of program components, methodologies, and outcomes” (p. 513). Counsel-
  • 7. ing program evaluations, thus, have inherent value in helping practitioners plan, implement, and refine counseling practice regardless of the need to demonstrate accountability. However, when called on to provide evidence of program effectiveness and impact, counselors can effectively draw on information gathered from their own program evaluations. We, thus, conceptualize counseling accountability as provid- ing specific information to stakeholders and other supervising authorities about the effectiveness and efficiency of counseling services (Studer & Sommers, 2000). In our view, demonstrat- ing accountability forms a bridge between counseling practice and the broader context of the service impact on stakeholders. However, accountability should not be the sole motivation for counseling program evaluation. As emphasized by Loesch (2001), counseling program evaluations should be undertaken to improve counseling services rather than merely to provide a justification for existing programming. The Need for New Models of Counseling Program Evaluation We believe that a significant contributor to counselors’ dis- interest in evaluation involves the lack of practical program
  • 8. evaluation models available to them for this purpose. Fur- thermore, confusion about the differences between program evaluation and accountability appear to deter counselors from engaging in ongoing program evaluations (Loesch, 2001). Therefore, the development of new, counselor-specific models that clearly conceptualize program evaluation and account- ability may provide the necessary impetus to establish program evaluation as a standard of practice in counseling. Recent examples of counselor-focused evaluation ap- proaches include Lusky and Hayes’s (2001) consultation model of counseling program evaluation and Lapan’s (2001) framework for planning and evaluating school counseling programs. Gysbers and Henderson (2000) also discussed the role of evaluation in school counseling programs and offered practical strategies and tools that counselors could imple- ment. These approaches have helped maintain a focus on the importance of counseling program evaluation. The purpose of this article was to build on the emerg- ing counselor-focused literature on program evaluation by providing counselors with a practical model for developing and implementing evaluation-based counseling services. As Whiston (1996) emphasized, counseling practice and
  • 9. research form a continuum rather than being mutually exclusive activi- ties. Although some counselors may identify more strongly with research and others more strongly with practice, both perspectives provide valuable feedback about the impact of counseling on clients served. Indeed, evaluation and feedback are integral parts of the counseling process, and most coun- selors will identify with the idea of refining their practice by using feedback from numerous sources as a basis. This article is geared both to practitioners who may have had little prior training in or experience with counseling program evaluations and to counselor educators interested in training students in counseling program evaluation methods. We begin by discussing accountability in counseling and the uses of counseling program evaluation. Next, we present the Accountability Bridge Counseling Program Evaluation Model and discuss the steps involved in its implementation. Finally, we discuss implications and make recommendations for training counselors in evaluation skills. Accountability in Counseling Accountability has become a catchword in today’s sociopoliti- cal climate. Since the 1960s, local, state, and federal govern- ment spending has been more closely scrutinized and the effectiveness of social programs and initiatives more
  • 10. carefully questioned (Houser, 1998; Kirst, 2000). As professionals in the social services field, counselors have not been shielded from the demands to demonstrate successful and cost- effective outcomes, nor have counseling programs. Despite increas- ing pressure to document effectiveness, some counselors maintain that counseling programs are generally immeasur- able (Loesch, 2001). However, given the rising demands for Journal of Counseling & Development ■ Spring 2007 ■ Volume 85164 Astramovich & Coker accountability in education and social programs, such an attitude is undoubtedly naïve. In fact, funding of educational programs and social services often hinges on the ability to demonstrate successful outcomes to stakeholders. Because counselors often rely on third-party and government funding, the future of the counseling profession may indeed rest on the ability of practitioners to answer the calls for documentation of effectiveness (Houser, 1998). School Counseling Accountability
  • 11. Today’s school counselors face increased demands to demon- strate program effectiveness (Adelman, 2002; Borders, 2002; Herr, 2002; House & Hayes, 2002; Lusky & Hayes, 2001). Primarily rooted in the school reform movement, demonstrat- ing accountability is becoming a standard practice among school counselors (Dahir & Stone, 2003; Fairchild & Seeley, 1995; Hughes & James, 2001; Myrick, 2003; Otwell & Mullis, 1997; Vacc & Rhyne-Winkler, 1993). Standards-based educa- tion reforms, including the No Child Left Behind (NCLB) Act of 2001, have fueled pressures on local school systems to demonstrate effective educational practices (Albrecht & Joles, 2003; Finn, 2002; Gandal & Vranek, 2001). The NCLB Act of 2001 emphasizes student testing and teacher effective- ness; however, school counselors have also recognized that in the current educational environment, actively evaluating the effectiveness of their school counseling programs is crucial. Although the pressures for accountability have seemingly increased in recent years, Lapan (2001) noted that school counselors have developed results-based systems and used student outcome data for many years. Furthermore, school counselors have historically been connected with school re-
  • 12. form, and their roles have often been shaped by educational legislation (Herr, 2002). Although accountability demands are numerous, school counselors may fail to evaluate their programs because of time constraints, elusiveness of measuring school counseling out- comes, lack of training in research and evaluation methods, and the fear that evaluation results may discredit school counseling programs (Schmidt, 1995). Because of these factors, when school counselors attempted to provide accountability, they may have relied on simple tallies of services and programs offered to students. However, as discussed by Fairchild and Seeley (1995), merely documenting the frequency of school counseling services no longer meets the criteria for demonstrating program effective- ness. Although data about service provision may be important, school counselors must engage in ongoing evaluations of their counseling programs in order to assess the outcomes and the impact of their services. Trevisan (2000) emphasized that school counseling pro- gram evaluation may help the school counseling profession by providing accountability data to stakeholders, generating feedback about program effectiveness and program needs,
  • 13. and clarifying the roles and functions of school counselors. As the profession of school counseling evolves, increasing emphasis on leadership and advocacy (Erford, House, & Martin, 2003; House & Sears, 2002) and on comprehensive school coun- seling programs (American School Counselor Association [ASCA], 2003; Sink & MacDonald, 1998; Trevisan, 2002b) will coincide with ongoing research and program evaluation efforts (Paisley & Borders, 1995; Whiston, 2002; Whiston & Sexton, 1998). ASCA’s (2003) revised national standards for school counseling reflect the importance of school coun- seling accountability and provide direction for practicing school counselors in the evaluation of their comprehensive school counseling programs (Isaacs, 2003). Considering the accountability and outcomes-focused initiatives in today’s education environment, school counselors need skills and tools for systematically evaluating the impact of the services they provide (Trevisan, 2001). Mental Health Counseling Accountability Like professional school counselors, today’s mental health counselors have experienced significant pressures to dem- onstrate the effectiveness and the efficiency of their counsel- ing services. To secure managed care contracts and receive
  • 14. third-party reimbursements, mental health counselors are increasingly required to keep detailed records about specific interventions and outcomes of counseling sessions (Granello & Hill, 2003; Krousel-Wood, 2000; Sexton, 1996). Despite the financial implications of avoiding such accountability measures, many mental health counselors have fought for autonomy from third-party payers in the provision of coun- seling services. Mental health counselors often indicate that their ability to provide quality mental health care to clients is hampered by managed care’s demands to demonstrate tech- nical proficiency and cost-effective service delivery (Scheid, 2003). Furthermore, mental health counselors often express concerns about their therapeutic decision-making capacity being curtailed by managed care (Granello & Hill, 2003). Managed care’s mandate for accountability in the field of mental health counseling may have resulted, in part, from counselors’ failure to initiate their own outcomes assessments (Loesch, 2001). However, the emergence of empirically sup- ported treatments (ESTs) has helped counselors respond to the call for accountability from managed care (Herbert, 2003). Specifically, ESTs draw on evidence-based practices from empirical counseling research to provide counselors with intervention guidelines and treatment manuals for specific client problems. Yet, mental health counselors may resist the use of such approaches, insisting that counseling procedures and outcomes cannot be formally measured and that
  • 15. attempt- ing such evaluations merely reduces time spent providing counseling services (Sanderson, 2003). Today’s managed care companies, however, may require counselors to base their practice on specific ESTs in order to receive payment for services. Further complicating the issue is the fact that, Journal of Counseling & Development ■ Spring 2007 ■ Volume 85 165 The Accountability Bridge Model for Counselors as previously noted with other areas of counseling, mental health counselors often receive no training in evaluating the outcomes and impact of their services (Granello & Hill, 2003; Sexton et al., 1997). Ultimately, resistance from mental health counselors to document counseling outcomes may be due to insufficient counselor training in evaluation methods. Despite the tumultuous history of the pressures brought to bear on mental health practitioners by managed care for accountability, there is a major impetus for shifting toward examining program effectiveness and outcomes in mental health counseling—the benefit of forging a professional identity. Kelly (1996) underscored the need for mental health counselors to be accepted as legitimate mental health
  • 16. provid- ers who are on the same professional level as social workers, psychologists, and psychiatrists. The ability to document outcomes and identify effective treatments is, therefore, criti- cal in furthering the professional identity of mental health counselors within the mental health professions. Accountability in Other Counseling Specialties Although most literature on counseling accountability empha- sizes school and mental health settings, calls for accountability have also been directed to other counseling specialties. Bishop and Trembley (1987) discussed the accountability pressures faced in college counseling centers. Similar to school coun- selors and mental health counselors, college counselors and those in authority in college counseling centers have resisted accountability demands placed on them by authorities in higher education. Bishop and Trembley also noted that some counselors have maintained that counseling centers are de- signed for practice rather than research. Ultimately, all counseling practitioners, despite their spe- cialty area, are faced with the need to demonstrate program effectiveness. Although counselors may be hesitant or unwill- ing to evaluate the effectiveness of their services because they
  • 17. see little relevance to their individual practice, the future of the counseling profession may well be shaped by the way practitioners respond to accountability demands. Program Evaluation in Counseling In recent years, the terms program evaluation and ac- countability have often been used synonymously in dis- cussions of counseling research and outcomes. However, accountability efforts in counseling generally result from external pressures to demonstrate eff iciency and effec- tiveness. On the other hand, counselor-initiated program evaluations can be used to better inform practice and improve counseling services. We believe that a key shift in the profession would be to have counselors continu- ally evaluate their programs and outcomes not because of external pressures, but from a desire to enhance client services and to advocate for clients and the counseling profession. New perspectives on the role of evaluation of counseling practices may ultimately help program evalu- ation become a standard of practice in counseling. Program evaluation models have proliferated in the fields of economics, political science, sociology, psychology, and education (Hosie, 1994) and have been used for improving quality (Ernst & Hiebert, 2002), assessing goal achieve- ment, decision making, determining consumer impact, and examining cost-effectiveness (Madaus & Kellaghan, 2000). Many program evaluation models were developed for use in large-scale organizational evaluations and are, thus, impracti- cal for use by counselors. Furthermore, large-scale program
  • 18. evaluation models are generally based on the assumption that a staff of independent evaluation experts or an assessment team will plan and implement the evaluation. Within the counsel- ing professions, however, financial constraints generally make such independent evaluations of programs unfeasible. Consequently, counselors usually rely on limited resources and their own research skills to carry out an evaluation of program effectiveness. Fortunately, many of the principles and practices of large-scale evaluation models can be adapted for use by counselors. Given the wide range of program evaluation definitions and approaches, models from human services professions and edu- cation appear most relevant for the needs of counselors because these models generally emphasize ongoing evaluation for pro- gram improvement (e.g., Stufflebeam, 2000a). Counseling pro- gram evaluation may be defined as the ongoing use of evaluation principles by counselors to assess and improve the effectiveness and impact of counseling programs and services. Ongoing coun- seling program evaluations can provide crucial feedback about the direction and the growth of counseling services and can also
  • 19. meet the accountability required by stakeholders (Boulmetis & Dutwin, 2000; Loesch, 2001; Stufflebeam, 2000b). Reasons for Evaluating Counseling Programs Program evaluations may be initiated for various reasons; however, evaluations are intended to generate practical in- formation rather than to be mere academic exercises (Royse, Thyer, Padgett, & Logan, 2001). Counseling program evalu- ations should, therefore, provide concrete information about the effectiveness, the efficiency, and the impact of services (Boulmetis & Dutwin, 2000). Specifically, counseling pro- gram evaluations can yield information that will demonstrate the degree to which clients are being helped. Evaluations may also provide feedback about client satisfaction and can help to distinguish between effective and ineffective approaches for the populations being served (Isaacs, 2003). On a broader scope, program evaluations can help to determine if services are having an influence on larger social problems (Royse et al., 2001). On the contextual level, evaluations can provide information about the use of staff and program resources in the provision of services (Stufflebeam, 2000a). Journal of Counseling & Development ■ Spring 2007 ■ Volume 85166
  • 20. Astramovich & Coker Accountability to stakeholders has often been a consideration in formulating approaches to counseling program evaluation. For example, Lapan (2001) indicated that program evaluations help counselors to identify effective services that are valued by stake- holders. Thus, by using stakeholder feedback in program planning and then providing valued services, counselors are better prepared to demonstrate the accountability of their programs and practice. Internal accountability may be requested by administrators of local programs to determine if program staff and resources are being used effectively. On the other hand, external accountability may be requested by policy makers and stakeholders with an interest in the effectiveness of provided services (Priest, 2001). Counseling program evaluations are generally implemented to provide information about local needs; however, in some instances information from local evaluations may have significant implica- tions for the entire counseling profession. As discussed by Whiston (1996), the professional identity of counselors can be
  • 21. enhanced through action research that demonstrates the effectiveness of ser- vices. By conceptualizing program evaluations as a type of action research, counselors have the potential to consider this effort as a contribution to the growing research-base in counseling. Questions That Evaluations May Answer Counseling program evaluations, like all forms of evalua- tions, are undertaken to answer questions about the effective- ness of programs and services in meeting specific goals (Berk & Rossi, 1999). Questions about the overall effectiveness and impact of services may be answered, as well as more discrete, problem-specific concerns. Furthermore, questions posed in evaluations help guide the collection and analysis of outcome information and the subsequent reporting of outcomes to stakeholders. Numerous questions may be explored with evaluations. Powell, Steele, and Douglah (1996) indicated that evalu- ation questions generally fall into four broad categories: outcomes and impacts, program need, program context, and program operations. The following are some examples of the types of questions that counseling program evaluations may answer:
  • 22. • Are clients being helped? • What methods, interventions, and programs are most helpful for clients? • How satisfied are clients with services received? • What are the long-term effects of counseling programs and services? • What impact do the services and programs have on the larger social system? • What are the most effective uses of program staff? • How well are program objectives being met? Program evaluations are generally guided by specific questions related to program objectives. Guiding questions help counselors to plan services and gather data specific to the problems under investigation. Depending on program and stakeholder needs, counseling evaluations may be designed to answer many questions simultaneously or they may be focused on specific objectives and outcomes. As part of an ongoing process, the initial cycle of a counseling program evaluation may yield information that can help to define or refine further problems and questions for exploration in the next evaluation cycle. Ultimately, counseling program evaluations may serve many purposes and may provide answers to a variety of questions. However, if counselors are to implement evaluations, a practical
  • 23. framework for conceptualizing the evaluation process seems essential. Counselors, thus, need a conceptual foundation for guiding the evaluation of their programs and services. The Accountability Bridge Counseling Program Evaluation Model for Counselors The Accountability Bridge Counseling Program Evaluation Model (see Figure 1) provides a framework to be used by individual counselors and within counseling programs and counseling agencies to plan and deliver counseling services and to assess their effectiveness and impact. Drawing on concepts from the business evaluation model proposed by Ernst and Hiebert (2002) and the Context, Input, Process, Product Model (CIPP) developed by Stufflebeam (2000a), the Accountability Bridge Counseling Program Evaluation Model organizes counseling evaluation into two reoccur- ring cycles that represent a continual refinement of services based on outcomes, stakeholder feedback, and the needs of the populations served. The counseling program evaluation cycle focuses on the provision and outcomes of counseling services, whereas the counseling context evaluation cycle ex- amines the impact of counseling services on stakeholders and uses their feedback, along with the results yielded by needs assessments, to establish and refine the goals of counseling programs. The two cycles are connected by an “accountability” bridge, whereby results from counseling practices are com-
  • 24. municated to stakeholders within the context of the larger service system. Providing accountability to stakeholders is, therefore, an integral part of the model. Although it is beyond the scope of this article to discuss each component in depth, a basic review of the framework and principles of the model will help counselors begin to conceptualize the process of planning and implementing counseling program evaluations. Counseling Program Evaluation Cycle The counseling program evaluation cycle involves the planning and implementation of counseling practice and culminates with assessing the outcomes of individual and group counseling, guidance services, and counseling programs. Four stages are involved in the counseling program evaluation cycle. Journal of Counseling & Development ■ Spring 2007 ■ Volume 85 167 The Accountability Bridge Model for Counselors 1. Program planning. Although we enter the discussion of the model at the program planning stage, information obtained from the counseling context evaluation cycle is critical in the
  • 25. planning process. Thus, on the basis of input obtained from needs assessments and the subsequent formation of service objectives, counseling programs and services are planned and developed to address the needs of the populations served. Program planning involves identifying specific counsel- ing methods and activities that are appropriate for certain populations as well as determining the availability of needed resources, including staff, facilities, and special materials (Royse et al., 2001). Lapan (2001) stressed that effective school counseling programs meet objectives by planning results-based inter- ventions that can be measured. Therefore, a key component of the program planning process involves the simultaneous planning of methods for measuring outcomes (Boulmetis & Dutwin, 2000). For instance, during the program planning phase, a community counseling agency that is planning a new substance abuse aftercare program should determine the means of assessing client progress through the program. Furthermore, developing multiple outcome measures can help increase the validity of findings. Gysbers and Hender- son (2000) discussed several means for assessing school counseling outcomes, including pretest–posttest instruments, performance indicators, and checklists. Studer and Sommers (2000) indicated that multiple measures, such as assessment instruments, observable data, available school-based data, and client/parent/teacher interviews, could be used in school
  • 26. counseling program evaluation. In mental health and college counseling specialties, similar measures of client and program progress can be used, including standardized assessment tools such as depression and anxiety inventories. Other means of collecting outcome data include surveys, individual and group interviews, observation methods, and document review (Powell et al., 1996). Furthermore, data can be collected over a 1- to 3-year period to determine program effectiveness over longer periods of time (Studer & Sommers, 2000). A f inal consideration in the program planning stage involves determining when clients will complete selected measures and assessments . Individuals who will be respon- sible for gathering and processing the information should be identified as well. For example, in a community agency setting, counselors may take responsibility for collecting data about their own client caseload, whereas a counselor supervisor may collect data from community sources. 2. Program implementation. After programs and services have been planned and outcome measures have been selected, programs and services are initiated. Sometimes referred to as “formative evaluation,” the program implementation phase
actualizes the delivery of services shaped by input from the counseling context evaluation cycle. During program implementation, counselors may identify differences between the planned programs and the realities of providing the services. Therefore, at this point, decisions may be made to change programs before they are fully operational or to make refinements in programs and services as the need arises.

3. Program monitoring and refinement. Once programs and services have been initiated and are fully operational, counselors may need to make adjustments to their practice based on preliminary results and feedback from clients and other interested parties. Programs and services may, therefore, need to be refined and altered to successfully meet the needs of the clientele served. Monitoring program success helps to ensure the quality of counseling services and maximizes the likelihood of finding positive results during outcomes assessments.

4. Outcomes assessment. As programs and services are completed, outcomes assessments help to determine if objectives have been met.

[Figure 1. Accountability Bridge Counseling Program Evaluation Model]
Therefore, during the outcomes assessment phase, final data are collected, and all program data are analyzed to determine the outcomes of interventions and programs. Counseling outcome data should be analyzed and interpreted as soon as possible after being collected (Gysbers & Henderson, 2000). Data analysis approaches differ for quantitative and qualitative data, and counselors with limited research background may need to seek assistance from peers and supervisors with knowledge of analyzing a variety of data sets. Available data analysis computer software can also expedite the analysis and interpretation of data. Such software programs also allow for easy creation of charts and graphs that can play a key role in the dissemination of evaluation results.
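To make this software-assisted step concrete, here is a minimal sketch (ours, not the article's) of how outcome data might be summarized and charted for an evaluation report. It assumes a hypothetical set of pre- and post-program scores on a symptom inventory; every name and number is invented, and the plotting library (matplotlib) is simply one common choice.

```python
# Illustrative only: hypothetical pre/post scores for one counseling group.
from statistics import mean, stdev

import matplotlib.pyplot as plt  # one common, freely available charting library

# Each index is one client; scores are invented for the example.
pre = [42, 38, 51, 45, 36, 48, 40, 44]
post = [30, 31, 39, 33, 29, 35, 28, 37]

changes = [after - before for before, after in zip(pre, post)]
print(f"Mean pre score:  {mean(pre):.1f} (SD {stdev(pre):.1f})")
print(f"Mean post score: {mean(post):.1f} (SD {stdev(post):.1f})")
print(f"Mean change:     {mean(changes):+.1f}")

# A simple bar chart of group means, suitable for a stakeholder report.
plt.bar(["Pre", "Post"], [mean(pre), mean(post)], color=["gray", "steelblue"])
plt.ylabel("Mean inventory score")
plt.title("Hypothetical pre/post outcome summary")
plt.savefig("outcome_summary.png", dpi=150)
```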
The Accountability Bridge

We conceptualize the process of communicating outcome data and program results to stakeholders as the “accountability bridge” between counseling programs and the context of counseling services. Outcome data and evaluation findings are the means for providing information about program effectiveness to stakeholders. When counselors are asked to demonstrate program effectiveness and efficiency, they can present information from the counseling program evaluation cycle to interested parties. However, beyond being merely an ameliorative process, communicating results to stakeholders can also be conceptualized as a marketing tool whereby counselors help maintain support and increase the demand for their services (Ernst & Hiebert, 2002). Therefore, rather than waiting for external requests for accountability, counselors should consider the task of communicating program results to stakeholders as being a standard part of the counseling program evaluation process.

In the program evaluation literature, stakeholders are often
referred to as “interested parties” (Berk & Rossi, 1999), meaning all individuals and organizations involved in or affected by a program (Boulmetis & Dutwin, 2000). As discussed by Loesch (2001), the most obvious stakeholders in counseling programs are the clients receiving services. In addition, stakeholders of counseling programs may include funding sources, other professional counselors, community members, administrators, staff, and organizations or programs that refer clients. Information provided to stakeholders must be tailored to address the concerns of the specific group. For instance, when communicating results, counselors may want to consider whether their audience will be more impressed with numbers and statistics or whether case studies and personal narratives will be more effective (Powell et al., 1996).

Evaluation reports and summaries can be used to disseminate information about program outcomes to stakeholders. Counseling program evaluation reports may be structured to include (a) an introduction defining the purposes and goals of programs and of the evaluation, (b) a description of programs and services, (c) a discussion of the evaluation design and data analysis procedures, (d) a presentation of the evaluation results, and (e) a discussion of the findings and
recommendations of the evaluation (Gysbers & Henderson, 2000; Royse et al., 2001). In addition to written reports, formal presentations of program results may also be an effective means of fulfilling the requirement of accountability to stakeholders.

Counseling Context Evaluation Cycle

The counseling context evaluation cycle focuses on the impact that the counseling practice has on stakeholders in the context of the larger organizational system. Using feedback from stakeholders, counselors and individuals responsible for counseling programs may engage in strategic planning and conduct needs assessments to develop and refine program objectives. The counseling context evaluation cycle consists of four stages.

1. Feedback from stakeholders. Once outcome data have been reported to stakeholders, counselors should actively solicit their feedback. Indeed, stakeholder feedback should be considered a vital element in the eventual design and delivery of counseling services. Viability of counseling services is maintained through a continual cycle of stakeholder feedback regarding the development of program goals and the design and evaluation of counseling services (Ernst & Hiebert, 2002).
2. Strategic planning. After feedback from stakeholders has been solicited, counselors and individuals in their organizational systems may engage in strategic planning designed to examine the operations of the organization. In particular, strategic planning may include an examination and possible revision of the purpose and mission of programs and services. Furthermore, during strategic planning, decisions about the allocation of staff and monetary resources may be considered.

3. Needs assessment. Coinciding with strategic planning, needs assessments can help provide counselors with crucial information that shapes the provision of counseling programs and services. In particular, identifying the needs of stakeholders is a key part of developing programs that will have a positive impact. Needs assessments should, therefore, gather information from multiple stakeholders and should be planned with a clear indication of what information is needed (Royse et al., 2001; Stufflebeam, McCormick, Brinkerhoff, & Nelson, 1985). A key part of needs assessment is the development of the method or instrument for collecting information. Written surveys and checklists can be used, as can focus-group meetings, interviews, and various forms of qualitative inquiry. Effective needs assessments will help clarify and prioritize needs among stakeholders and the populations served (a brief illustrative sketch of tallying such survey data appears at the end of this cycle description).
4. Service objectives. Developing precise program goals and objectives is crucial for the eventual provision and evaluation of counseling programs and services. Goals and objectives should be developed based on prior outcomes of counseling services, stakeholder feedback, and information gathered from needs assessments. Programs without clearly identified goals and objectives cannot be evaluated for impact and effectiveness (Berk & Rossi, 1999). Royse et al. (2001) discussed two main types of program objectives: process objectives and outcome objectives. Process objectives may be thought of as milestones or competencies needed for achieving long-term goals. In counseling, process objectives may be considered a series of benchmarks that indicate progress toward program growth and improvement. Process objectives are achieved through a series of developmental steps, whereas outcome objectives refer to specific competencies or outcomes to be achieved in a given time period.
Once program objectives have been established, the entire evaluation cycle is repeated, with information from the counseling context evaluation cycle feeding back into the program planning stage of the counseling program evaluation cycle. Ultimately, counseling program evaluation should be considered an ongoing process rather than a one-time event.
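Before turning to implications, here is the brief sketch promised in the needs assessment stage: a hypothetical tally of needs-assessment survey data in which stakeholders rate candidate service needs on a 1-5 importance scale and the needs are ranked by mean rating. The needs, ratings, and scale are invented for illustration; this is not a procedure prescribed by the article.

```python
# Illustrative only: hypothetical needs-assessment survey responses.
from statistics import mean

# Each dictionary is one respondent's importance ratings (1 = low, 5 = high).
responses = [
    {"crisis support": 5, "career guidance": 3, "parent workshops": 4},
    {"crisis support": 4, "career guidance": 2, "parent workshops": 5},
    {"crisis support": 5, "career guidance": 4, "parent workshops": 3},
]

# Average each need's rating across respondents, then rank highest first.
needs = responses[0].keys()
ranked = sorted(
    ((need, mean(r[need] for r in responses)) for need in needs),
    key=lambda pair: pair[1],
    reverse=True,
)
for need, avg in ranked:
    print(f"{need}: mean importance {avg:.2f}")
```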
Implications for Counselors and Counselor Education

Meeting the Challenges of Counseling Program Evaluations

Although counseling program evaluation may enhance client services and promote the professional identity of counselors, barriers to implementing program evaluation cannot be overlooked. First of all, program evaluation practices have often been considered too time-consuming and complex (Loesch, 2001; Wheeler & Loesch, 1981). Thus, counselors who have not previously initiated evaluations of their programs and services may be hesitant to embark on a seemingly difficult task. However, by conceptualizing program evaluation as a collaborative process, counselors may be more interested and motivated to participate in evaluations. By teaming with other professionals, counselors may help to ensure that evaluations are implemented effectively and that results are disseminated in an effective manner. Furthermore, collaboration helps counselors new to program evaluation obtain support and mentoring during the evaluation process (Trevisan, 2002a).

Another major obstacle to any outcome or evaluation study of counseling is the complex and dynamic nature of the counseling process itself. As discussed by Whiston (1996), the seemingly immeasurable nature of counseling often makes straightforward evaluations of its effectiveness difficult. The complexity of counseling processes may be addressed by developing program and service objectives that are more readily measurable. For example, client improvement is a concept that seems vague and difficult to measure. However, by being more specific and operationalizing definitions of client improvement, counselors can more easily measure client change. For instance, exploring client improvement by comparing pre- and posttreatment scores on standardized measures of depression can provide counselors with one measure of the effectiveness of counseling interventions.
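As a concrete illustration of this pre/post idea (ours, not the authors'), the sketch below assumes an invented operational definition of improvement: a drop of at least 10 points on a standardized depression inventory between pre- and posttreatment. The cutoff, client identifiers, and scores are purely illustrative, not a clinical standard.

```python
# Illustrative only: hypothetical pre/post depression inventory scores.
pre_scores = {"client_a": 38, "client_b": 25, "client_c": 41, "client_d": 30}
post_scores = {"client_a": 22, "client_b": 21, "client_c": 27, "client_d": 29}

IMPROVEMENT_CUTOFF = 10  # invented threshold for this example, not a clinical standard

# A client counts as "improved" if their score dropped by at least the cutoff.
improved = [
    client
    for client in pre_scores
    if pre_scores[client] - post_scores[client] >= IMPROVEMENT_CUTOFF
]
print(f"Clients meeting the improvement criterion: {improved}")
print(f"Improvement rate: {len(improved) / len(pre_scores):.0%}")
```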
Considerations for Training and Research in Program Evaluation Methods

Despite increased focus on accountability and calls for evaluation-based counseling practice, counselors frequently lack the training to effectively evaluate the effectiveness and impact of their services. Counselor training has rarely emphasized research and evaluation skills as a method for guiding practice (Heppner et al., 1999; Sexton et al., 1997). As a result, counselors may see little utility in acquiring and using research and evaluation skills. Counselor educators who are responsible for counselor education programs must, therefore, reconsider the importance placed on acquiring research and evaluation skills in the training of new counselors. The 2001 standards of the Council for Accreditation of Counseling and Related Educational Programs have addressed the need for today’s counselors to develop skills in research and evaluation. Yet, as pointed out by Trevisan (2000), the mere inclusion of evaluation skills in training standards has not spurred counselors’ use of evaluation activities.

Whiston and Coker (2000) called for reconstructing the clinical training of counselors based on findings in counseling research. Integrating evaluation and research practices into clinical training may likewise enhance the clinical preparation
of new counselors by giving them supervised experiences in which they use evaluation methods. Trevisan (2000, 2002a) advocated for a sequential approach to teaching program evaluation skills in counselor education programs. Accordingly, counselors might first receive didactic training in evaluation and research methods. Next, counselors could be given clinical experiences that would allow them to implement research and evaluation skills under supervision. Finally, trained counselors would be able to conceptualize and implement evaluations of counseling programs on their own, consulting with other professionals as necessary.

In addition to revising the evaluation and research training in counselor education, providing postgraduate training and workshop opportunities to practicing counselors must be considered. Counseling conferences should, therefore, actively solicit programs and presentations geared toward helping counselors develop skills in research and evaluation. Furthermore, counselors should purposefully seek opportunities to develop their research and evaluation skills.

Although counseling program evaluation has been discussed for many years, few studies have appeared in the literature that examine the use of program evaluation by practicing counselors. We, therefore, issue a call to the profession to systematically investigate the use of evaluation practices
in counseling. Such findings could have a substantial impact on the continued development of the counseling profession by providing further understanding of counseling program evaluation models and practices.

Conclusion

Twenty-first-century counselors can no longer question the merit of and need for evaluating their counseling programs and services. Instead, today’s counselors must actively learn about and use evaluation methods as a means of enhancing their counseling practices, providing accountability to stakeholders, and enhancing the professional identity of all counselors. As Wheeler and Loesch (1981) predicted nearly 25 years ago, program evaluation continues to be a force in the development of the counseling professions. They likewise suggested that counseling professionals are gradually beginning to recognize that if counseling program evaluations are to be used, they must be initiated and implemented by counselors themselves. Given the persistence of the topic and the ongoing calls for outcomes research and accountability of counseling practices, program evaluation can no longer be ignored by counseling professionals. Indeed, program evaluation may be considered a newly evolving standard of practice in counseling.
References

Adelman, H. S. (2002). School counselors and school reform: New directions. Professional School Counseling, 5, 235–248.
Albrecht, S. F., & Joles, C. (2003). Accountability and access to opportunity: Mutually exclusive tenets under a high-stakes testing mandate. Preventing School Failure, 48, 86–91.
American School Counselor Association. (2003). The American School Counselor Association National Model: A framework for school counseling programs. Alexandria, VA: Author.
Benkofski, M., & Heppner, C. C. (1999). Program evaluation. In P. P. Heppner, D. M. Kivlighan, & B. E. Wampold, Research design in counseling (pp. 488–513). Belmont, CA: Wadsworth.
Berk, R. A., & Rossi, P. H. (1999). Thinking about program evaluation (2nd ed.). Thousand Oaks, CA: Sage.
Bishop, J. B., & Trembley, E. L. (1987). Counseling centers and accountability: Immoveable objects, irresistible forces. Journal of Counseling and Development, 65, 491–494.
Borders, L. D. (2002). School counseling in the 21st century: Personal and professional reflections. Professional School Counseling, 5, 180–185.
Borders, L. D., & Drury, S. M. (1992). Comprehensive school counseling programs: A review for policymakers and practitioners. Journal of Counseling & Development, 70, 487–498.
Boulmetis, J., & Dutwin, P. (2000). The ABCs of evaluation: Timeless techniques for program and project managers. San Francisco: Jossey-Bass.
Council for Accreditation of Counseling and Related Educational Programs. (2001). CACREP accreditation manual. Alexandria, VA: Author.
Dahir, C. A., & Stone, C. B. (2003). Accountability: A M.E.A.S.U.R.E. of the impact school counselors have on student achievement. Professional School Counseling, 6, 214–221.
Erford, B. T., House, R., & Martin, P. (2003). Transforming the school counseling profession. In B. T. Erford (Ed.), Transforming the school counseling profession (pp. 1–20). Upper Saddle River, NJ: Prentice Hall.
Ernst, K., & Hiebert, B. (2002). Toward the development of a program evaluation business model: Promoting the longevity of counselling in schools. Canadian Journal of Counselling, 36, 73–84.
Fairchild, T. N. (1993). Accountability practices of school counselors: 1990 national survey. The School Counselor, 40, 363–374.
Fairchild, T. N. (1994). Evaluation of counseling services: Accountability in a rural elementary school. Elementary School Guidance and Counseling, 29, 28–37.
Fairchild, T. N., & Seeley, T. J. (1995). Accountability strategies for school counselors: A baker’s dozen. The School Counselor, 42, 377–392.
Finn, C. E. (2002). Making school reform work. The Public Interest, 148, 85–95.
Gandal, M., & Vranek, J. (2001, September). Standards: Here today, here tomorrow. Educational Leadership, 6–13.
Granello, D. H., & Hill, L. (2003). Assessing outcomes in practice settings: A primer and example from an eating disorders program. Journal of Mental Health Counseling, 25, 218–232.
Gysbers, N. C., & Henderson, P. (2000). Developing and managing your school guidance program (3rd ed.). Alexandria, VA: American Counseling Association.
Gysbers, N. C., Hughey, K., Starr, M., & Lapan, R. T. (1992). Improving school guidance programs: A framework for program, personnel, and results evaluation. Journal of Counseling & Development, 70, 565–570.
Hadley, R. G., & Mitchell, L. K. (1995). Counseling research and program evaluation. Pacific Grove, CA: Brooks/Cole.
Heppner, P. P., Kivlighan, D. M., & Wampold, B. E. (1999). Research design in counseling (2nd ed.). Belmont, CA: Wadsworth.
Herbert, J. D. (2003). The science and practice of empirically supported treatments. Behavior Modification, 27, 412–430.
Herr, E. L. (2002). School reform and perspectives on the role of school counselors: A century of proposals for change. Professional School Counseling, 5, 220–234.
Hosie, T. (1994). Program evaluation: A potential area of expertise for counselors. Counselor Education and Supervision, 33, 349–355.
House, R. M., & Hayes, R. L. (2002). School counselors: Becoming key players in school reform. Professional School Counseling, 5, 249–256.
House, R. M., & Sears, S. J. (2002). Preparing school counselors to be leaders and advocates: A critical need in the new millennium. Theory Into Practice, 41, 154–162.
Houser, R. (1998). Counseling and educational research: Evaluation and application. Thousand Oaks, CA: Sage.
Hughes, D. K., & James, S. H. (2001). Using accountability data to protect a school counseling program: One counselor’s experience. Professional School Counseling, 4, 306–309.
Isaacs, M. L. (2003). Data-driven decision making: The engine of accountability. Professional School Counseling, 6, 288–295.
Kellaghan, T., & Madaus, G. F. (2000). Outcome evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 97–112). Boston: Kluwer Academic.
Kelly, K. R. (1996). Looking to the future: Professional identity, accountability, and change. Journal of Mental Health Counseling, 18, 195–199.
Kettner, P. M., Moroney, R. M., & Martin, L. L. (1999). Designing and managing programs: An effectiveness-based approach (2nd ed.). Thousand Oaks, CA: Sage.
Kirst, M. W. (2000). Accountability: Implications for state and local policy makers. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 319–339). Boston: Kluwer Academic.
Krousel-Wood, M. A. (2000). Outcomes assessment and performance improvement: Measurements and methodologies that matter in mental health care. In P. Rodenhauser (Ed.), Mental health care administration: A guide for practitioners (pp. 233–253). Ann Arbor: University of Michigan Press.
Lapan, R. T. (2001). Results-based comprehensive guidance and counseling programs: A framework for planning and evaluation. Professional School Counseling, 4, 289–299.
Loesch, L. C. (2001). Counseling program evaluation: Inside and outside the box. In D. C. Locke, J. E. Myers, & E. L. Herr (Eds.), The handbook of counseling (pp. 513–525). Thousand Oaks, CA: Sage.
Lusky, M. B., & Hayes, R. L. (2001). Collaborative consultation and program evaluation. Journal of Counseling & Development, 79, 26–38.
Madaus, G. F., & Kellaghan, T. (2000). Models, metaphors, and definitions in evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 19–31). Boston: Kluwer Academic.
Myrick, R. D. (2003). Accountability: Counselors count. Professional School Counseling, 6, 174–179.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Otwell, P. S., & Mullis, F. (1997). Academic achievement and counselor accountability. Elementary School Guidance and Counseling, 31, 343–348.
Paisley, P. O., & Borders, L. D. (1995). School counseling: An evolving specialty. Journal of Counseling & Development, 74, 150–153.
Powell, E. T., Steele, S., & Douglah, M. (1996). Planning a program evaluation. Madison: Division of Cooperative Extension of the University of Wisconsin-Extension.
Priest, S. (2001). A program evaluation primer. Journal of Experiential Education, 24, 34–40.
Royse, D., Thyer, B. A., Padgett, D. K., & Logan, T. K. (2001). Program evaluation: An introduction (3rd ed.). Belmont, CA: Brooks/Cole.
Sanderson, W. C. (2003). Why empirically supported treatments are important. Behavior Modification, 27, 290–299.
Scheid, T. L. (2003). Managed care and the rationalization of mental health services. Journal of Health and Social Behavior, 44, 142–161.
Schmidt, J. J. (1995). Assessing school counseling programs through external reviews. The School Counselor, 43, 114–123.
Sexton, T. L. (1996). The relevance of counseling outcome research: Current trends and practical implications. Journal of Counseling & Development, 74, 590–600.
Sexton, T. L. (1999). Evidence-based counseling: Implications for counseling practice, preparation, and professionalism. Greensboro, NC: ERIC Clearinghouse on Counseling & Student Services. (ERIC Document Reproduction Service No. ED 435 948)
Sexton, T. L., Whiston, S. C., Bleuer, J. C., & Walz, G. R. (1997). Integrating outcome research into counseling practice and training. Alexandria, VA: American Counseling Association.
Sink, C. A., & MacDonald, G. (1998). The status of comprehensive guidance and counseling in the United States. Professional School Counseling, 2, 88–94.
Studer, J. R., & Sommers, J. A. (2000). The professional school counselor and accountability. National Association of Secondary School Principals Bulletin, 84, 93–99.
Stufflebeam, D. L. (2000a). The CIPP model for evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 279–317). Boston: Kluwer Academic.
Stufflebeam, D. L. (2000b). Foundational models for 21st century program evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 33–96). Boston: Kluwer Academic.
Stufflebeam, D. L., McCormick, C. H., Brinkerhoff, R. O., & Nelson, C. O. (1985). Conducting educational needs assessment. Boston: Kluwer Academic.
Trevisan, M. S. (2000). The status of program evaluation expectations in state school counselor certification requirements. American Journal of Evaluation, 21, 81–94.
Trevisan, M. S. (2001). Implementing comprehensive guidance program evaluation support: Lessons learned. Professional School Counseling, 4, 225–228.
Trevisan, M. S. (2002a). Enhancing practical evaluation training through long-term evaluation projects. American Journal of Evaluation, 23, 81–92.
Trevisan, M. S. (2002b). Evaluation capacity in K-12 school counseling programs. American Journal of Evaluation, 23, 291–305.
Vacc, N. A., & Rhyne-Winkler, M. C. (1993). Evaluation and accountability of counseling services: Possible implications for a midsize school district. The School Counselor, 40, 260–266.
Wheeler, P. T., & Loesch, L. (1981). Program evaluation and counseling: Yesterday, today and tomorrow. The Personnel and Guidance Journal, 51, 573–578.
Whiston, S. C. (1996). Accountability through action research: Research methods for practitioners. Journal of Counseling & Development, 74, 616–623.
Whiston, S. C. (2002). Response to the past, present, and future of school counseling: Raising some issues. Professional School Counseling, 5, 148–156.
Whiston, S. C., & Coker, J. K. (2000). Reconstructing clinical training: Implications from research. Counselor Education and Supervision, 39, 228–253.
Whiston, S. C., & Sexton, T. (1998). A review of school counseling outcome research: Implications for practice. Journal of Counseling & Development, 76, 412–426.

Running head: VETERANS AND MILITARY FAMILIES

Veterans and Military Families Annotated Bibliography
Student Name
COMM 2367: Persuasive Communication
Instructor Name
January 15, 2017
Veterans and Military Families Annotated Bibliography

Begin this assignment with a 5-7 sentence introduction that provides an overview of your sources and demonstrates an understanding of the connections between them. The last sentence of your introduction is your thesis statement and should be in bold.

Link, P. E., & Palinkas, L. A. (2013). Long-term trajectories and service needs for military families. Clinical Child & Family Psychology Review, 16(4), 376-393. doi:10.1007/s10567-013-0145-z

Link and Palinkas’ research investigates the impact that military deployment and trauma have on family member relationships. Besides mental illness, families are also prone to other difficulties, such as struggles with relationships, which often result in divorce or domestic abuse. This article will help me explore the various ways in which both military members and their
family members are affected by deployment, mental illness, and other stressful situations. The study also suggests ways in which families can seek help to deal with their difficulties. Besides government agencies, such as Veterans Affairs, various nonprofit organizations exist with the goal of providing care to the military and their families. Organizations including the National Science Foundation and NASA have funded the research of Dr. Lawrence Palinkas, a professor at the University of Southern California. The expertise of Dr. Patrick Link, a practicing psychiatrist, adds credibility to this article as well.

Philipps, D. (2014, December 31). Mission ends in Afghanistan, but sacrifices are not over for U.S. soldiers. The New York Times. Retrieved from
http://www.nytimes.com/2015/01/01/us/mission-ends-but-sacrifices-are-not-over-for-us-soldiers.html

Because the war in Afghanistan is considered to be over, many Americans have turned their backs on service members and their families. Even though things may be coming to a close, people often forget that military personnel continue to serve overseas and risk their lives. This article provides a viewpoint from military families who often feel that the public has forgotten about those soldiers. It is important to provide support for military families who continue to have loved ones overseas, as well as those who have returned. This story will help provide context from current military families while focusing on some of the difficulties that they endure. The article is credible because of its publication in an internationally respected periodical. In addition, Dave Philipps won the Pulitzer Prize for National Reporting and chose to document a family who is in the middle of the controversy.

U.S. Department of Veterans Affairs. (2014). PTSD: National Center for PTSD. Retrieved from http://www.ptsd.va.gov/about/index.asp
The United States Department of Veterans Affairs is at the forefront of PTSD research and care. For 25 years, the department has been treating PTSD and educating others about the illness through the National Center for PTSD. Their mission is to provide the best care to their patients, continue to explore the science of the illness, and educate physicians and other care providers. The website provides information about PTSD, insight into their more than 100 research projects, and information on where to inquire about and receive care. They are the primary source for PTSD information because of their world-renowned research, facilities, faculty, and
  • 55. members and their families; thus it will help strengthen my argument about their needs. Write a 5-7 sentence conclusion that summarizes your main points and provides closure for your reader. research problem( topic): children& youth here are some related things i wrote before Writing Goal #1 - Learn to practice writing reviews with peers, and, give good feedback to them. Writing Goal #2 - Write something creative and relevant to world events everyday. Nonprofit Organization: Animal Welfare Foundation of Canada Article “Animal Cruelty or the Price of Dinner” Summary: Farm animals like chickens do not elicit the same
emotional responses as pets but live in far worse conditions. These conditions also affect the people who eat them. Consumers should use their buying power to push for more humane conditions for farm animals, which is feasible with new technology.
Citation: https://www.nytimes.com/2016/04/17/opinion/sunday/animal-cruelty-or-the-price-of-dinner.html

COMM 2367: Persuasive Communication
ANNOTATED BIBLIOGRAPHY PAPER FAQ

General Questions

What if I have trouble finding sources about my topic?
I know it’s often a last resort for students, but librarians are specifically educated and trained to locate credible information quickly. They really do want to help you! If you go to
library.osu.edu, you can find contact information on the right side of the page: phone, email, or instant chat. If you have been unsuccessfully searching for information for an hour, stop and contact someone at the library. Don’t waste valuable time being frustrated; ask for help.

How many sources should focus on my region?
You are not required to find any sources that specifically focus on your region right now. The primary goal for this assignment is to gather information and gain a general understanding of your topic. You can add regional sources to your next assignment.

How current should my sources be?
Because we can easily access current information, your sources should be no more than five years old. Sources that contain statistics should be as current as possible. If you find a source that is not current but believe it contains important and relevant information, email me.

Should my sources be listed in alphabetical order?
Yes, just as you would alphabetize on a reference page.

What is a DOI?
“The Digital Object Identifier (DOI®) System is for identifying content objects in the digital environment. DOI® names are assigned to any entity for use on digital networks. They are used to provide current information, including where they (or information about them) can be found on the Internet. Information about a digital object may change over time, including where to find it, but its DOI name will not change” (The DOI system, 2011).

Should I include the URL or the DOI name in my reference citation?
If you found the source in an academic database, you aren’t required to use the DOI or the URL. If, however, you located the article another way, use the DOI name. It will always be accurate and is much shorter than an entire URL.

I don’t see a DOI name. Should I use the URL?
If you found the source in an academic database, you aren’t
required to use the DOI or the URL. If, however, you located the article another way, use the URL.

The title page on the Purdue OWL website is different from the title page we reviewed in class. Should I follow the example on the OWL or the one in my notes?
APA format does not provide definitive formatting information for class papers. Instead, it is focused on papers being submitted for degrees or journal publication. You may see a few
  • 60. I have completed my assignment early. Can I email it to you for some feedback? I don’t provide feedback via email for assignments completed ahead of time, but I am more than happy to meet with you during office hours before the due date. If you have a conflict with my office hours, we can try to schedule another time to meet. Do you have any tips to avoid deductions for grammar/mechanics and APA formatting? —begin with the last sentence or paragraph and read it aloud. This interrupts your flow of thought and you are less likely to miss mistakes because you will focus more on the words and language use. sentences. Refer to this link when including numbers in text: http://owl.english.purdue.edu/owl/resource/593/01/
• Choose your words precisely. Avoid slang, clichés, and informal language: https://owl.english.purdue.edu/owl/owlprint/608/
• Check your punctuation. Remember to change the margin default setting to 1” on every side.

What kind of information should I be looking for?
Keep your end goal in mind: you want to persuade the audience that your topic is a problem in your region. Look for sources that provide evidence of the problem. Some general suggestions for types of information from each source are as follows:
• The peer-reviewed journal article will most likely be about your topic/issue. The journal article will not provide regional information.
• The newspaper or periodical will likely be about your topic/issue. You will likely find good regional information from this source. Below are suggestions based on your region:
  o Columbus: Columbus Dispatch
  o Ohio: Columbus Dispatch, Cleveland Plain Dealer, Cincinnati Enquirer
  o Midwest: any of the above and Chicago Tribune, Chicago Sun-Times, Indianapolis Star
  o United States: any of the above and The Wall Street Journal, The New York Times, Los Angeles Times, The Washington Post, TIME, Newsweek
• The government, university, or research institution website is geared toward finding current research and statistics about the topic/issue.
  • 63. focus only on one or two subtopics for now. Also, as you search for subtopics, you may find some great articles that address your problem in general or that focus on your region. These are all acceptable. If you are still uncertain, see me during office hours. Annotated Bibliography Paper 1 COMM 2367: Persuasive Communication ANNOTATED BIBLIOGRAPHY PAPER 30 points Expected Learning Outcomes GE Course • Students retrieve and use written information analytically and effectively. Second Writing Course • Students access and use information critically and analytically.
  • 64. Overview This assignment requires you to begin researching your problem. In this paper, you will develop your ability to access, evaluate, and use credible information by identifying and summarizing highly credible sources. Guidelines In approximately 2-3 typed pages (not including the title page) using APA format, you should write an introduction, a thesis statement, a summary of 3 credible sources, and a conclusion. After conducting thorough research on your topic, you will demonstrate how the sources inform you about your problem and how they are connected. The sources included should reflect considerable research effort on your part; however, this should not be the end of your research effort. You are expected to continue to research your topic for several more weeks as you develop and refine your message strategy. 1. Write an introduction: https://cstw.osu.edu/writing-
  • 65. center/handouts/introductions 2. Write a thesis statement: https://cstw.osu.edu/writing- center/handouts/thesis-statements 3. List three sources in correct APA format. You must use the following source types: a. 1 peer-reviewed academic journal article b. 1 credible/reputable newspaper or periodical c. 1 government, university, or research institution website 4. Under each source, write a one paragraph summary in which you answer the following questions: a. What are the main points of the source? What is its purpose? (Summarize the information.) b. How is the source relevant to your topic? How will the source be useful to you in developing a persuasive argument? How does it connect to the other sources? (Be specific about the type of information provided by the source.) c. Why is the source highly credible? What are the author’s/organization’s qualifications? (Provide personal and/or professional information.) 5. Write a conclusion: https://cstw.osu.edu/writing-
  • 66. center/handouts/conclusions https://cstw.osu.edu/writing-center/handouts/introductions https://cstw.osu.edu/writing-center/handouts/thesis-statements https://cstw.osu.edu/writing-center/handouts/conclusions Annotated Bibliography Paper 2 ANNOTATED BIBLIOGRAPHY PAPER GRADING RUBRIC Excellent Poor Introduction, Thesis, Conclusion Introduction communicates what the reader can expect; sets the tone and style of the argument 3 2 1.5 1 0 Thesis contains a claim about the problem that can be explained and justified 3 2 1.5 1 0 Conclusion effectively summarizes and provides
  • 67. closure 3 2 1.5 1 0 Source Information Demonstrates strong summary skills by effectively and concisely identifying main points and purpose of source* 6 4.5 3 1.5 0 Clearly explains source’s relevance to topic and connection to other sources; thoughtfully considers usefulness of source for persuasive argument* 6 4.5 3 1.5 0 Effectively establishes credibility of source; provides specific personal or professional information* 6 4.5 3 1.5 0
  • 68. Correctly uses APA format for reference citations 3 2 1.5 1 0 * Sources that do not conform to the criteria in the COMM 2367 Source Requirements document will receive a zero (0) for these items. Adjustments (your grade may be lowered up to 10% for each of the following) Excellent Poor -0% -10% Free of errors in grammar and mechanics -0 -1 -1.5 -2 -3 Correct APA Format (title page, header, margins, font, reverse indent for references) -0 -1 -1.5 -2 -3 TOTAL _____/30