Program Evaluation: A Brief Introduction
Note that the concept of program evaluation can include a wide variety of methods
to evaluate many aspects of programs in nonprofit or for-profit organizations. There
are numerous books and other materials that provide in-depth analysis of
evaluations, their designs, methods, combination of methods and techniques of
analysis. However, personnel do not have to be experts in these topics to carry out a
useful program evaluation. The "20-80" rule applies here, that 20% of effort
generates 80% of the needed results. It's better to do what might turn out to be an
average effort at evaluation than to do no evaluation at all. (Besides, if you resort to
bringing in an evaluation consultant, you should be a smart consumer. Far too many
program evaluations generate information that is either impractical or irrelevant -- if
the information is understood at all.) This document orients personnel to the nature
of program evaluation and how it can be carried out in a realistic and practical
fashion.
Note that much of the information in this section was gleaned from various works of
Michael Quinn Patton.
Some Myths About Program Evaluation
1. Many people believe evaluation is a useless activity that generates lots of boring
data with useless conclusions. This was a problem with evaluations in the past when
program evaluation methods were chosen largely on the basis of achieving complete
scientific accuracy, reliability and validity. This approach often generated extensive
data from which very carefully chosen conclusions were drawn. Generalizations and
recommendations were avoided. As a result, evaluation reports tended to reiterate
the obvious and left program administrators disappointed and skeptical about the
value of evaluation in general. More recently (especially as a result of Michael
Patton's development of utilization-focused evaluation), evaluation has focused on
utility, relevance and practicality at least as much as scientific validity.
2. Many people believe that evaluation is about proving the success or failure of a
program. This myth assumes that success is implementing the perfect program and
never having to hear from employees, customers or clients again -- the program will
now run itself perfectly. This doesn't happen in real life. Success is remaining open to
continuing feedback and adjusting the program accordingly. Evaluation gives you this
continuing feedback.
3. Many believe that evaluation is a unique and complex process that occurs
at a certain time in a certain way, and almost always includes the use of outside
experts. Many people believe they must completely understand terms such as
validity and reliability. They don't have to. They do have to consider what information
they need in order to make current decisions about program issues or needs. And
they have to be willing to commit to understanding what is really going on. Note that
many people regularly undertake some form of program evaluation -- they just don't do it in a formal fashion, so they don't get the most out of their efforts, or they draw conclusions that are inaccurate (some evaluators would disagree that this is program evaluation at all if it is not done methodically). Consequently, they miss precious opportunities to make more of a difference for their customers and clients, or to get a bigger bang for their buck.
First, we'll consider "what is a program?" Typically, organizations work from their
mission to identify several overall goals which must be reached to accomplish their
mission. In nonprofits, each of these goals often becomes a program. Nonprofit
programs are organized methods to provide certain related services to constituents,
e.g., clients, customers, patients, etc. Programs must be evaluated to decide whether they are indeed useful to constituents. In a for-profit organization, a program is often a one-time effort to produce a new product or line of products.
Before You Begin:
This may seem too obvious to discuss, but before an organization embarks on
evaluating a program, it should have well-established means to conduct itself as an
organization, e.g., (in the case of a nonprofit) the board should be in good working
order, the organization should be staffed and organized to conduct activities to work
toward the mission of the organization, and there should be no current crisis that is
clearly more important to address than evaluating programs.
To effectively conduct program evaluation, you should first have programs. That is, you need a clear understanding of what your customers or clients actually need. (You may have used a needs assessment to determine those needs -- itself a form of evaluation, but usually the first step in a good marketing plan.) Next, you need some effective methods to meet each of those needs. These methods are usually in the form of programs.
It often helps to think of your programs in terms of inputs, process, outputs and
outcomes. Inputs are the various resources needed to run the program, e.g., money,
facilities, customers, clients, program staff, etc. The process is how the program is
carried out, e.g., customers are served, clients are counseled, children are cared for,
art is created, association members are supported, etc. The outputs are the units of
service, e.g., number of customers served, number of clients counseled, children
cared for, artistic pieces produced, or members in the association. Outcomes are the
impacts on the customers or on clients receiving services, e.g., increased mental
health, safe and secure development, richer artistic appreciation and perspectives in
life, increased effectiveness among members, etc.
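If it helps to see this model concretely, here is a minimal sketch in Python of how a program's inputs, process, outputs and outcomes might be recorded. The class and field names are illustrative assumptions, not a standard schema, and the example values echo the counseling example above.

    from dataclasses import dataclass, field

    @dataclass
    class ProgramLogicModel:
        inputs: list[str] = field(default_factory=list)         # resources needed to run the program
        process: list[str] = field(default_factory=list)        # how the program is carried out
        outputs: dict[str, int] = field(default_factory=dict)   # units of service delivered
        outcomes: list[str] = field(default_factory=list)       # impacts on the people served

    counseling = ProgramLogicModel(
        inputs=["funding", "facilities", "program staff"],
        process=["clients are counseled"],
        outputs={"clients counseled": 120},
        outcomes=["increased mental health"],
    )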
Your program evaluation plans depend on what information you need to collect in
order to make major decisions. Usually, management is faced with having to make
major decisions due to decreased funding, ongoing complaints, unmet needs among
customers and clients, the need to improve service delivery, etc. For example, do you want to know more about what is actually going on in your programs, whether your programs are meeting their goals, the impact of your programs on customers, etc.?
You may want other information or a combination of these. Ultimately, it's up to you.
But the more focused you are about what you want to examine by the evaluation, the
more efficient you can be in your evaluation, the shorter the time it will take you and
ultimately the less it will cost you (whether in your own time, the time of your
employees and/or the time of a consultant).
There are trade-offs, too, in the breadth and depth of the information you get. The more
breadth you want, usually the less depth you get (unless you have a great deal of
resources to carry out the evaluation). On the other hand, if you want to examine a
certain aspect of a program in great detail, you will likely not get as much
information about other aspects of the program.
Those starting out in program evaluation, or those with very limited resources, can use various methods to get a good mix of breadth and depth of information. They
can both understand more about certain areas of their programs and not go bankrupt
doing so.
Key Considerations:
Goals-Based Evaluation
Often programs are established to meet one or more specific goals. These goals are
often described in the original program plans.
Goals-based evaluations assess the extent to which programs are meeting predetermined goals or objectives. Questions to ask yourself when designing an evaluation to see if you reached your goals include:
1. How were the program goals (and objectives, if applicable) established? Was the
process effective?
2. What is the status of the program's progress toward achieving the goals? (A minimal tracking sketch follows this list.)
3. Will the goals be achieved according to the timelines specified in the program
implementation or operations plan? If not, then why?
4. Do personnel have adequate resources (money, equipment, facilities, training,
etc.) to achieve the goals?
5. How should priorities be changed to put more focus on achieving the goals?
(Depending on the context, this question might be viewed as a program
management decision, more than an evaluation question.)
6. How should timelines be changed (be careful about making these changes - know
why efforts are behind schedule before timelines are changed)?
7. How should goals be changed (be careful about making these changes - know why
efforts are not achieving the goals before changing the goals)? Should any goals be
added or removed? Why?
8. How should goals be established in the future?
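As a concrete illustration of questions 2 and 3 above, here is a simple tracking sketch in Python. The goal descriptions, dates, and completion figures are invented for the example:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Goal:
        description: str
        target_date: date      # deadline from the implementation or operations plan
        percent_complete: int  # current status, 0-100

    def report(goals: list[Goal], today: date) -> None:
        # Flag goals that are incomplete and past their planned timeline
        for g in goals:
            behind = g.percent_complete < 100 and today > g.target_date
            status = "BEHIND SCHEDULE" if behind else "on track"
            print(f"{g.description}: {g.percent_complete}% complete ({status})")

    report(
        [Goal("Serve 500 clients", date(2024, 6, 30), 80),
         Goal("Open second site", date(2024, 3, 31), 40)],
        today=date(2024, 5, 1),
    )

Remember the caution in question 6: a goal flagged as behind schedule is a prompt to find out why, not an automatic reason to move the timeline.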
Process-Based Evaluations
Process-based evaluations are aimed at fully understanding how a program actually works -- how it produces the results that it does, including its strengths and weaknesses in delivering services.
Outcomes-Based Evaluation
Outcomes-based evaluations examine the benefits to clients -- whether the program actually produces its intended impacts on the people it serves.
Also see:
Appreciative Inquiry
Survey Design
Note that if your evaluation will collect and report personal information about the customers or clients participating in it, you should first gain their consent to do so. They should understand what you're asking of them in the evaluation and how any information associated with them will be reported. You should clearly convey the terms of confidentiality regarding access to
evaluation results. They should have the right to participate or not. Have participants
review and sign an informed consent form. See the sample informed-consent form.
The overall goal in selecting evaluation method(s) is to get the most useful
information to key decision makers in the most cost-effective and realistic fashion.
Consider the following questions:
1. What information is needed to make current decisions about a product or
program?
2. Of this information, how much can be collected and analyzed in a low-cost and
practical manner, e.g., using questionnaires, surveys and checklists?
3. How accurate will the information be (consider the disadvantages of the methods you plan to use)?
4. Will the methods get all of the needed information?
5. What additional methods should and could be used if additional information is
needed?
6. Will the information appear as credible to decision makers, e.g., to funders or top
management?
7. Will the nature of the audience conform to the methods, e.g., will they fill out
questionnaires carefully, engage in interviews or focus groups, let you examine their
documentation, etc.?
8. Who can administer the methods now or is training required?
9. How can the information be analyzed?
Note that, ideally, the evaluator uses a combination of methods, for example, a
questionnaire to quickly collect a great deal of information from a lot of people, and
then interviews to get more in-depth information from certain respondents to the
questionnaires. Perhaps case studies could then be used for more in-depth analysis
of unique and notable cases, e.g., those who benefited or not from the program,
those who quit the program, etc.
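As a hypothetical illustration of that sequencing, the sketch below (Python; the rating scale, cutoffs, and respondent IDs are invented) uses questionnaire scores to pick the extreme cases -- those who benefited most and least -- as candidates for follow-up interviews or case studies:

    # Questionnaire results: respondent ID -> overall benefit rating (1-5)
    ratings = {"r01": 5, "r02": 2, "r03": 4, "r04": 1, "r05": 3, "r06": 5}

    # Follow up with the extremes: those who benefited most and least
    high = [r for r, score in ratings.items() if score >= 5]
    low = [r for r, score in ratings.items() if score <= 2]

    print("Interview for success stories:", high)    # ['r01', 'r06']
    print("Interview to understand problems:", low)  # ['r02', 'r04']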
There are four levels of evaluation information that can be gathered from clients,
including getting their:
1. reactions and feelings (feelings are often poor indicators that your service made a lasting impact)
2. learning (enhanced attitudes, perceptions or knowledge)
3. changes in skills (applied the learning to enhance behaviors)
4. effectiveness (improved performance because of enhanced behaviors)
Usually, the farther your evaluation information gets down the list, the more useful your evaluation is. Unfortunately, it is quite difficult to reliably get information about
effectiveness. Still, information about learning and skills is quite useful.
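If it helps to keep the hierarchy straight, the four levels can be treated as an ordered scale. Here is a minimal sketch in Python (the names are shorthand for the levels above, not standard terminology):

    from enum import IntEnum

    class EvaluationLevel(IntEnum):
        # Higher numbers are deeper, more useful, and harder to measure
        REACTIONS = 1      # feelings about the service
        LEARNING = 2       # enhanced attitudes, perceptions or knowledge
        SKILLS = 3         # learning applied to enhance behaviors
        EFFECTIVENESS = 4  # improved performance from enhanced behaviors

    # A typical satisfaction survey captures only the shallowest level
    print(EvaluationLevel.REACTIONS < EvaluationLevel.EFFECTIVENESS)  # True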
Interpreting Information:
1. Attempt to put the information in perspective, e.g., compare results to what you expected or promised to management or program staff; to any common standards for your services; to the original program goals (especially if you're conducting a goals-based evaluation); to indications of accomplishing outcomes (especially if you're conducting an outcomes evaluation); or to descriptions of the program's experiences, strengths, weaknesses, etc. (especially if you're conducting a process evaluation). (A small worked example follows this list.)
2. Consider recommendations to help program staff improve the program,
conclusions about program operations or meeting goals, etc.
3. Record conclusions and recommendations in a report document, and include the interpretations that justify your conclusions or recommendations.
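For point 1, here is a small worked example of putting a result in perspective (the numbers are invented): rather than reporting a raw count alone, compare it to what was promised.

    promised = 300  # clients the program planned to counsel
    actual = 255    # clients actually counseled, per the evaluation data

    pct_of_target = 100 * actual / promised
    print(f"Counseled {actual} of {promised} promised clients "
          f"({pct_of_target:.0f}% of target)")
    # -> Counseled 255 of 300 promised clients (85% of target)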
Still, an organization with limited resources can do the 20% of effort needed to generate 80% of what it needs to know to make a decision about a program. If it can afford any outside help at all, that help
should be for identifying the appropriate evaluation methods and how the data can
be collected. The organization might find a less expensive resource to apply the
methods, e.g., conduct interviews, send out and analyze results of questionnaires,
etc.
If no outside help can be obtained, the organization can still learn a great deal by
applying the methods and analyzing results themselves. However, there is a strong
chance that data about the strengths and weaknesses of a program will not be
interpreted fairly if the data are analyzed by the people responsible for ensuring the
program is a good one. Program managers will be "policing" themselves. This caution
is not to fault program managers, but to recognize the strong biases inherent in
trying to objectively look at and publicly (at least within the organization) report
about their programs. Therefore, if at all possible, have someone other than the program managers look at and interpret the evaluation results.
Pitfalls to Avoid
1. Don't balk at evaluation because it seems far too "scientific." It's not. Usually the
first 20% of effort will generate the first 80% of the plan, and this is far better than
nothing.
2. There is no "perfect" evaluation design. Don't worry about the plan being perfect.
It's far more important to do something, than to wait until every last detail has been
tested.
3. Work hard to include some interviews in your evaluation methods. Questionnaires
don't capture "the story," and the story is usually the most powerful depiction of the
benefits of your services.
4. Don't interview just the successes. You'll learn a great deal about the program by
understanding its failures, dropouts, etc.
5. Don't throw away evaluation results once a report has been generated. Results
don't take up much room, and they can provide precious information later when
trying to understand changes in the program.