The Impact of
Performance Audits:
Defining, Measuring and
Reporting Impact
CANADIAN AUDIT
& ACCOUNTABILITY
FOUNDATION
The Impact of Performance Audits - Defining, Measuring, and Reporting Impact April 2019
Visit us at www.caaf-fcar.ca for more information about our products and services.
All rights reserved. No part of this publication, or its companion products, may be reproduced by any means,
electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the
publisher.
Published by:
Canadian Audit and Accountability Foundation
1505 Laperriere Avenue, Suite 100
Ottawa, Ontario CANADA
K1Z 7T1
Tel: 613-241-6713
ISBN: 978-1-7752844-2-0
Table of Contents
Executive Summary
Introduction
Purpose and Structure of This Discussion Paper
Research Methodology
Part 1 — What Kind of Impact?
1.1 Value and Impact
1.2 Impact in Theory and Practice
1.3 The Views of Canadian Performance Auditors on Impact
Executive Summary
Purpose of This Paper
This discussion paper synthesizes the information available on the impact of performance audits. It is meant
to stimulate discussion and sharing about current approaches to measuring and reporting the impact of
performance audit activities in Canada’s public sector.
How We Did It
The research that supports this paper draws on various sources of information, including academic papers,
guidance documents, annual reports, websites, and news reports, as well as interviews with 34 senior
representatives of municipal, provincial, and federal audit institutions in Canada, the United Kingdom, and
the United States.
What We Found
We analyzed the recent annual reports of 22 Canadian audit institutions (federal, provincial, and municipal)
and found that they generally use a small number of indicators to measure the impact of their performance
audits. The most common indicators are statistics related to audit recommendation implementation rates and
results of satisfaction surveys conducted with auditees and elected officials.
This paper demonstrates that there are opportunities for audit institutions to improve their reporting on the
impact of their performance audits by increasing the number and type of indicators they use, broadening
their coverage to include financial impact and qualitative information. There are also opportunities to make
more information publicly available and to improve the presentation of the information already provided.
Beyond the measurement and reporting of impact, efforts can also be made to increase the impact of
performance audits, and many Canadian audit institutions have taken recent initiatives in this direction. These
efforts have targeted all phases of the audit process, from audit selection to follow-up activities.
Introduction
Legislative performance auditing (or value-for-money auditing) has been conducted for over 40 years in
Canada. In 1977, the Auditor General Act added performance auditing to the mandate of the Office of the
Auditor General of Canada. Over the following years, a similar mandate was granted to provincial auditors
general and to the auditors general of several large municipalities, including Montréal and Toronto.
Today, all the provincial legislative audit offices publish performance audit reports at least once a year, as do
a number of municipal audit offices and the British Columbia Auditor General for Local Government.
Together, these audit institutions and the Office of the Auditor General of Canada have an annual budget of
over $200 million, of which more than $75 million is dedicated to performance audit activities.
The trend of expanding public sector performance audit activities that began four decades ago is continuing.
For example, a law adopted in Quebec in April 2018 expanded the mandate of the Quebec Municipal
Commission to include responsibilities for conducting performance audits in municipalities with populations
between 10,000 and 100,000.2 In addition, as part of the ongoing process of creating a new fiscal
relationship between First Nations and the Government of Canada, the federal government and the Assembly
of First Nations have recently commissioned studies to explore the possibility of establishing a First Nations
Auditor General office with a mandate that would include conducting performance audits.
From the above, it can be concluded that there is continuing interest in performance audit activities in
Canada's public sector and that significant sums of taxpayers’ money will continue to be spent on these
activities for the foreseeable future.
As with any sizeable investment of public funds, it is fair for governments, legislators, auditors, and citizens in
general to ask what value is obtained from this investment and whether performance audits generate positive
impacts.
While it is possible to quickly identify performance audit reports that have had lasting impacts on
governments and public administration in Canada, assessing the impact of each performance audit or of
performance auditing in general is much more difficult.
This raises three main questions: What kind of impact do performance audits have? How can this impact be measured and reported? And how can it be increased? These are the three main questions that will be explored in this discussion paper.
2 These municipalities are required to have a performance audit conducted every two years and can mandate an external auditor to
perform this audit or adopt a regulation permanently designating the Quebec Municipal Commission as the external auditor.
3 The first report is available on the CAAF website: Establishing a First Nations Auditor General (2018).
This discussion paper is divided in three parts, each of which focuses on one of the three questions listed
above.
Part 1, What Kind of Impact?, defines the term “impact” and explores key questions about this concept,
drawing on the academic literature for answers. In addition, it presents the views of senior Canadian auditors
on the impact of performance audits, based on the interviews we conducted for this research project.
Part 2, How to Measure and Report Impact?, discusses the measurement of performance audit impact and
the subsequent reporting of this information by audit institutions. It presents an overview of the common
performance indicators used by audit offices in Canada and abroad and discusses the strengths and
weaknesses of various indicators. It also highlights good practices that audit institutions can adopt to improve
how they measure and report their performance audit impact.
Part 3, How to Increase Impact?, includes a brief overview of good practices that Canadian audit institutions
have adopted or are currently implementing to increase the positive impact of their performance audits.
Finally, a list of relevant articles, books, and reports consulted during the preparation of this discussion paper
is included in the References section at the end of the paper.
Research Methodology
This discussion paper draws on various sources of information, including academic papers, guidance
documents, annual reports, websites, and news reports, as well as interviews with senior performance
auditors.
The research conducted to support this paper was largely based on a review of Canadian and international
academic literature on performance auditing published over the last 25 years, as well as a review of the latest
annual reports (from 2017 and 2018) published by audit institutions in Canada and other countries.
Section highlights
• Audit institutions have inherent value because they foster good governance, accountability, transparency, and trust in public administrations.
• In addition to their role as accountability officers, many audit institutions see themselves as agents of change and seek to "have an impact."
• In an auditing context, "impact" means a change in the public service or society resulting from a performance audit.
All legislative audit offices have an important accountability role, which they fulfill by providing assurance and
objective information on government finances and operations to elected officials. Audit offices accomplish
this in large part by conducting performance audits: independent, objective, and reliable examinations of
whether government programs, activities, or organizations are performing in accordance with the principles
of economy, efficiency, and effectiveness.
Audit institutions are critical features of good governance in the public sector, fostering transparent and
accountable government and trust in public administration. As such, they possess inherent institutional value
that is widely accepted.
In addition to their traditional role as accountability officers, many Canadian and foreign audit offices today
see themselves as agents of change and seek to “make a difference” and to “have an impact.” This intention
is reflected in the mission statements, value statements, and strategic plans of various audit offices and is
supported by the International Organization of Supreme Audit Institutions' (INTOSAI) vision of public auditing
making a difference in the life of citizens.4

In Canada, for example, the value statement of the Office of the Auditor General (OAG) of Canada says, "We
focus on significant issues to make a positive and measurable impact."5 Similarly, the OAG of Alberta states in
its Performance Audit Program of Work 2018-2021 that its vision is "Making a difference in the lives of
Albertans" and the OAG of Prince Edward Island's 2018 Report to the Legislative Assembly states that "We
select our audits on the basis of significance and risk with the goal of making a positive difference for
Islanders." The audit offices of both New Brunswick6 and Nova Scotia7 list "impact" as part of their values,
which they also define in terms of selecting significant audit topics with the goal of "making a positive
difference" for their citizens. Finally, the values of the Office of the Auditor General of Québec include
"Acting for maximum impact."

In Canada, a review of the mission statements, values, and strategic plans of the provincial and federal
legislative audit offices reveals that a majority of these offices see themselves as agents of change.

4 INTOSAI, ISSAI 12, Value and Benefits of SAIs – Making a Difference in the Life of Citizens
5 Who We Are, OAG Canada website, consulted on 13 February 2019
6 OAG New Brunswick, Strategic Plan 2014-2020, p. 5
7 Our Values, OAG Nova Scotia website, consulted on 13 February 2019
Reading these mission statements, values, and strategic plans, a clear sense emerges that “making a
difference” and “having an impact” are synonymous expressions and that both have a positive connotation.
In a strict linguistic sense, impact is the effect or influence of one person, thing, or action on another. In an
auditing context, therefore, impact means the effect or influence of an audit or an audit office on the
auditees, the government, or society in general. And, for audit offices that see themselves as agents of
change, the "effect or influence” can be interpreted as a change resulting from an audit. A working
definition of “impact” for the purpose of this discussion paper, which focuses on performance audits, is thus
proposed: Impact means a change in the public service or society resulting from a performance audit. This
change can be qualitative or quantitative.
Section highlights
• Measuring the wider impact that audit institutions have through their performance audits is inherently difficult because it is challenging to separate their contributions to specific outcomes from the contributions of other stakeholders.
• For this reason, audit institutions tend to measure their impact at the level of individual audits rather than at the level of the practice as a whole. This means that much focus is placed on monitoring the implementation of audit recommendations.
• Audits can potentially have negative effects, but few studies have examined this question to determine whether this happens in practice.
• Multiple internal and external factors can influence the impact of performance audits.
The question of the impact of performance audits has received attention from many auditors over the years,
but also from a number of academic researchers interested in public administration, governance, and
accountability. As a result, many reports and articles have been written on the subject. Using this literature as
a source, we explore in this section several key questions about audit impact, from both a theoretical and
practical perspective. (A complete list of the studies we consulted is in the References section at the end of
this paper.)
8 For the purpose of this paper, and to ensure clarity, we will use the term "impact." Audit institutions in Canada and elsewhere also
often use the terms "values" and "benefits."
As explained by John Mayne (2001), this situation created a challenge for public sector organizations because
they now had to deal with what he called the “attribution problem.” This problem arose because the focus
on performance measurement shifted from outputs, a part of the result chain (Figure 1) that departments
and agencies have control over, to outcomes, which can be influenced by other external stakeholders and are
therefore not under the sole influence of departments and agencies. The likelihood that many stakeholders
could influence an outcome meant that public sector organizations had to find ways to separate their
contributions to an outcome from other contributions. In other words, they had to determine the extent to
which achieving an outcome could be attributed to their own actions.
[Figure 1: The results chain, showing resources, activities, outputs, and outcomes, with other external influences providing context. Source: Modified from HM Treasury (UK), Choosing the Right FABRIC — A Framework for Performance Information, 2013]
In the interest of coherence and consistency in practice, many audit institutions in governments that have
adopted a “managing for results” philosophy have also expanded their reporting practices. They have moved
from reporting solely on their outputs, such as how many audits they produced in a year, to reporting on
both outputs and outcomes: the impact of their audits. In doing so, audit institutions were faced with the
same attribution problem as other public sector organizations: how could they isolate their contributions
(audits) to the achievement of an outcome from other such contributions? For example, it is generally
recognized that audits help improve governance and accountability, but many other factors—including
reforms, new legislation, and continuous adoption of best practices—also contribute to this improvement.
Fortunately, there is a way to report on outcomes without having to confront the attribution problem head
on. This solution consists of shifting the focus of reporting from “ultimate outcomes” to “intermediate
outcomes” or “immediate outcomes” (see the definitions in Table 1). For audit offices, shifting the focus
from ultimate outcomes to immediate or intermediate outcomes makes reporting on their impact much more
practical. This is because it means moving away from trying to determine their long-term contribution to
improved accountability or better health for citizens, for example, to determining their impact on specific
practices or programs in selected public sector organizations on a shorter time horizon.
Table 1 – Definitions and examples of outcome levels

Intermediate outcome: An outcome that is expected to logically occur once one or more immediate outcomes
have been achieved. In terms of time frame, these are medium-term outcomes. These outcomes are often at
the change-of-behaviour level among a target population. Examples: Parliament holds the government to
account; audited organizations take steps to implement audit recommendations.

Ultimate outcome: The highest-level outcome that can be reasonably attributed to a policy, program, or
initiative in a causal manner, and results from one or more intermediate outcomes having been achieved.
These outcomes usually represent the raison d'être of a policy, program, or initiative. They are long-term
outcomes. These outcomes represent a change of state in a target population. Example: The audit office
contributes to better-managed government programs and better accountability to Parliament.

Source: adapted from the Government of Canada's Results-Based Management Lexicon (2015) and from the Office of the Auditor
General of Canada's 2003 Performance Report
By focusing on intermediate outcomes, audit institutions can assess the impact of a particular audit on one or
a few selected organizations. As will be discussed in Part 2 of this paper, this explains why the most common
indicators of impact used by audit institutions are related to the implementation rates of audit
recommendations. Similarly, the focus of most academic studies on the impact of performance audits has
been at the level of individual audits and intermediate outcomes.
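To make the levels concrete, here is a minimal sketch (in Python, with hypothetical indicator names and values that are ours, not any office's actual reporting) of how performance indicators might be tagged by outcome level so that reporting can concentrate on the levels where the office's contribution is directly traceable:

```python
from dataclasses import dataclass
from enum import Enum

class OutcomeLevel(Enum):
    OUTPUT = "output"              # e.g., number of audit reports published
    IMMEDIATE = "immediate"        # e.g., recommendations accepted by auditees
    INTERMEDIATE = "intermediate"  # e.g., recommendations implemented
    ULTIMATE = "ultimate"          # e.g., improved accountability (attribution unclear)

@dataclass
class Indicator:
    name: str
    level: OutcomeLevel
    value: float

# Hypothetical indicators for one reporting year.
indicators = [
    Indicator("Audit reports published", OutcomeLevel.OUTPUT, 12),
    Indicator("Recommendations accepted (%)", OutcomeLevel.IMMEDIATE, 98),
    Indicator("Recommendations implemented (%)", OutcomeLevel.INTERMEDIATE, 64),
    Indicator("Improved accountability", OutcomeLevel.ULTIMATE, float("nan")),
]

# Report only the levels where the office's contribution is directly traceable,
# sidestepping the attribution problem that affects ultimate outcomes.
for ind in indicators:
    if ind.level != OutcomeLevel.ULTIMATE:
        print(f"{ind.level.value:>12}: {ind.name} = {ind.value}")
```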
In addition, it is possible to aggregate some of the data on the impact of individual audits and to present the
results annually. Doing so can provide a broader (but not necessarily complete) view of impact. One example
of success at this level is that some audit offices have been able to track the annual financial impact (savings
or additional revenues) of their performance audits on the public purse. (See Reporting Financial Impacts for
more information.)
In summary, since the 1990s, many audit institutions in Canada and abroad have demonstrated that it is
possible to go beyond reporting on outputs alone and to provide some information on outcomes as well,
especially intermediate outcomes. However, reporting on the impact of performance audits is still an evolving
practice and challenges remain, as will be discussed in more detail in Part 2.

Some audit offices have been able to track the financial impact of their performance audits.
Of course, auditors are aware that the audit process requires audited organizations to spend time and
resources providing all the information requested by audit teams. Auditors are also aware that implementing
their recommendations will have a cost for the auditees. However, through discussions with the auditees,
auditors strive to formulate recommendations whose benefits (financial and non-financial) would outweigh
the costs.
But, beyond this obvious effect, can there be negative impacts? The academic literature on performance
auditing has discussed this question many times (e.g., Leeuw, 2011) and has proposed several theoretical
negative effects:
• Ossification: excessive attention to formal adherence to rules and standards, which ultimately inhibits innovation within an organization and causes it to ossify.
• Tunnel vision: an excessive emphasis on activities whose results can be easily quantified and measured, at the expense of activities whose results are not so easily quantified.
• Short-termism: an excessive short-term focus on achieving good results against performance indicators valued by auditors at the expense of keeping sight of long-term objectives.
• Sub-optimization: excessive efforts and resources spent on improving a system at the expense of other systems within an organization.
• Window dressing: creating systems, procedures, and so on, merely to keep auditors satisfied, while behind this formal facade things continue as before.
This literature on negative effects has essentially focused on how audits affect the audited organizations, and, to
a limited extent, the public servants directly involved in the audits. Whether performance audits can have
negative impacts for public service users or on citizens in general does not appear to have been studied in detail.
While few academic studies on the negative effects of audits on auditees are available, the existing ones tend
to demonstrate that the theory on negative effects is rarely supported by the experience of auditees. For
example, the research of Desmedt et al. (2017) in Belgium has found that the surveyed auditees “almost
unanimously confirmed that performance audits did not have negative consequences” in their organization.
Similarly, the research of Morin (2004, 2008, 2014) in Canada asserts that negative effects are rare and
marginal.
Still, negative impacts cannot be discounted outright, and some rare cases have been noted in the literature.
For example, Katrien Weets (2011), in her research on three performance audits in the Dutch city of
Rotterdam, found that the auditees surveyed for one of the audits had reported the appearance of
ossification and tunnel vision. More studies on the negative impacts of performance audits are needed to
clarify the relative frequency and importance of this phenomenon.
Audit institutions have a high degree of control over factors that relate directly to the audit process itself,
such as the selection of audit topics, the expertise and competence of auditors, the auditor—auditee
relationship, the quality of audit reports, and the relevance of the audit recommendations. While these
factors are generally acknowledged as important by academics, several studies have particularly highlighted
the importance of the relationship between auditors and auditees. According to Jane Etverk (2002), the
quality of this relationship is crucial in determining whether the audited body will accept an audit’s
recommendations. Danielle Morin (2004) found that audit teams that demonstrated openness and fluid
communications with auditees likely strengthened the impact of their audits. Conversely, if auditors behaved
like inquisitors, there was greater risk that their efforts would yield no result. Katrien Weets (2011) similarly
found that a lack of empathy for the auditees could be detrimental to an audit team’s efforts to drive change
through its audit work.
By contrast, there are two internal factors that have received less attention in the academic literature but that
likely play an important role in increasing the impact of performance audits. The first is the presence of
follow-up mechanisms and tracking systems to determine whether audit recommendations are implemented
on a timely basis. The second factor is the various efforts made by audit offices to disseminate their
conclusions and recommendations to a wide audience, using various strategies, including writing in plain
language, using social media platforms, and presenting audit findings at conferences and events organized
by interested stakeholders. It is logical to think that better communicating audit findings to a larger audience
and following up with auditees to track progress made in implementing audit recommendations can
potentially increase the impact of performance audits. More research on these two factors could be beneficial
for the performance audit community.
The presence of follow-up mechanisms, to determine whether audit recommendations are implemented on a
timely basis, can increase the impact of performance audits.

There are also external factors over which audit institutions have little influence. In part, this is because
legislative audit offices in the Westminster tradition are not granted enforcement powers to ensure that their
recommendations are implemented. Instead, they must rely upon elected officials to hold audited
organizations to account. Similarly, audit institutions do not control the media or the political agenda on any
given day. There is therefore always a certain element of chance involved in publishing an audit report on a
pre-selected date; the context might favour a high impact, or it might not.
In the academic literature, the external factors that have received the most attention are the influence of
parliamentarians and media coverage. With regard to parliamentarians, many studies (e.g., Morin, 2008)
have recognized that the attention paid by elected officials to performance audit reports can play a crucial
role by putting pressure on audited organizations to implement audit recommendations. However, this
observation does not appear to be universal; rather, the influence of parliamentarians varies from one
jurisdiction to the next. In Belgium, for example, Desmedt et al. (2017) found that the impact of
parliamentarians was marginal. This influence can also vary within a single jurisdiction. In Canada, Danielle
Morin has found that the influence of parliamentarians could be an important factor (Morin, 2004), but that
it cannot always be counted upon (Morin, 2014). In her 2014 study, which covered five provincial legislative
audit offices and the Office of the Auditor General of Canada, only 36 out of 87 respondents (41 percent)
stated that their audit had been studied by a parliamentary committee.
The media, through their influence on audited organizations or on parliamentarians, also have the capacity,
in theory, to push auditees to implement performance audit recommendations. However, a number of
studies have cast doubt on the role of the media as a strong factor driving audit impact. For example,
Danielle Morin (2004, 2014) has found that the influence of the media was felt only weakly by auditees in
Canada. Similarly, Desmedt et al. (2017) found that this influence was marginal in Belgium. In Norway, Kristin
Reichborn-Kjennerud (2014b) has also found that the media were interested in only a few audit reports and
that audits did not play an important role in media debates. However, a 2008 study by Danielle Morin found
that when an audit received media coverage, auditees tended to take corrective measures more rapidly. Also,
a 2015 study by Raudla et al. in Estonia noted that recommendations were more likely to be implemented
when the media created a political debate over an audited issue, thereby prompting parliamentarians to play
a more active oversight role. The influence of the media therefore appears to vary significantly from audit to
audit and may be felt directly by the audited organizations or indirectly through the intervention of
parliamentarians.
Section highlights
Between September and November 2018, we interviewed 31 senior Canadian performance auditors, active
or recently retired, from 16 audit institutions in 13 cities, from Halifax to Victoria. (The list of interviews is in
the Acknowledgements section at the end of this discussion paper.) Among other questions, we asked these
auditors what “impact” meant to them, what value they saw in performance audits, and what change they
had seen happen as a result of their audit work.
The notion of impact was perceived in various ways by interviewees and, in some cases, different perceptions
existed within a single audit office. A majority of interviewees saw making a difference as an important
objective of their work. For some auditors, this is what motivates them to get up in the morning and go to
work; some are explicitly driven by the belief that their work can lead to important policy changes. Others
define making a difference as helping government do a better job, getting buy-in from management, and
influencing actions toward positive change. Some auditors also believe that they can make a difference by
educating the public, shedding light on unknown situations, busting myths, and fostering conversations and
public debates about important questions.
The auditors we interviewed generally recognized their dual role of providing assurance and influencing
change. While a majority thought that their role of influencer was more important to them as individuals, a
few auditors said they were more driven by their roles as providers of assurance. These latter auditors saw
their role as pointing out where there are problems in the public sector and then letting parliamentarians and
the public play their own role in holding the government to account.
Notwithstanding their views on their role, all the auditors we interviewed agreed that audits often bring
attention to issues that were either unknown or ignored for too long and create an opportunity for debate to
take place and change to happen. Many auditors described a performance audit as a catalyst that can make
change happen faster. Reflecting back on their careers, some interviewees agreed that change might have
happened eventually without any audits, but they were convinced that audits definitely accelerated the pace
of change and, in cases where auditees were not aware of the problems raised by auditors, were crucial in
making change happen.
The interviewees all believed that performance audits can have impacts and they could all point to audits
their offices had produced that had played a key role in improving various government policies, programs,
and services. They knew that their office can positively influence how government works but were also aware
that it is often difficult to measure and document audit impact. Some interviewees also acknowledged that
sometimes an audit will have no impact at all—because auditees have other priorities, or because of a lack of
political will, lack of resources, bad timing, or other reasons.
The senior auditors we talked to generally agreed that some change often happens during an audit, before
the report is completed. In some cases, when an audit is announced well before it starts, change takes place
before the audit work begins, as the auditees strive to “clean up their house” before the auditors arrive.
When there is sufficient documentation, Canadian audit offices note recent improvements in their audit
reports or, alternatively, let the auditees mention them in their official response to an audit.
The interviewees recognized that media reports about performance audits could increase an audit's impact
but noted that media coverage was not a prerequisite for impact to happen and that media coverage did not
guarantee a larger impact. (Some audits have received much media attention, but little progress was made in
addressing the identified issues nonetheless.) Several auditors said that change within departments and
agencies could take place without media coverage and that, for this reason, attracting media attention was
not a high priority.
A few interviewees also noted that change can be very slow to happen (it may only materialize many years
after an audit) and that, in some cases, important but complex change takes place over time without most
people, or the media, paying much attention to it.
Finally, interviewees noted that Public Accounts Committees (in federal and provincial jurisdictions) can
increase the impact of performance audits when they review performance audit reports in a non-partisan
manner and hold the government to account for implementing audit recommendations. Similarly, at the
municipal level, audit committees can help increase the impact of performance audits by having the
municipal auditor general speak to each audit report and by subsequently following up with municipal
management to determine what steps are being taken to address the known issues and the audit
recommendations.
Section highlights
• Audit institutions can use a range of performance indicators to measure the impact of performance audits, including statistics about recommendations and estimates of savings or additional revenues resulting from the audits.
• The common performance indicators used to measure quantitative impact vary in terms of their usefulness or intrinsic value.
• Audit offices can also report on the impact of performance audits by providing qualitative information such as case studies and examples of the concrete impact of audits on audited programs.
Considering that legislative audit offices are funded by taxpayers to fulfill their mandate, it is only fair to
expect them to report on their performance just like any other public sector organization. This expectation is
often formalized in legislation or in administrative policies and is also supported by INTOSAI's standards,
which state that audit institutions have a responsibility to demonstrate their ongoing relevance to citizens,
Parliament, and other stakeholders.9
Reporting on performance can imply reporting on outputs, on outcomes, or both. However, for an audit
office to report on the impact of its performance audits requires more emphasis on outcomes than on
outputs. This is because the number of reports an audit office produces in a year (its output) may influence its
impact, but it does not in itself provide information on what that impact is (the outcome).
As explained in Can Impact Be Measured?, audit institutions tend to report on intermediate outcomes much
more than on ultimate outcomes. This happens because reporting on ultimate outcomes is often not practical
considering the difficulties involved in separating the contributions made by audit offices to ultimate
outcomes from the contributions made by other stakeholders. In consequence, reporting on impact by audit
institutions tends to focus on the impact of individual audits and to strongly emphasize recommendation
implementation rates. What information exists on the impact of audit offices can usually be found in their
annual report on their operations or in a report focused on the follow-up of previous recommendations. This
information can be quantitative or qualitative.
This section provides an overview of the performance indicators most often used by audit offices to report on
the impact of their performance audits. It also discusses the use of qualitative reporting techniques for the
same purpose.
9 INTOSAI, ISSAI 12, Value and Benefits of SAIs – Making a Difference in the Life of Citizens
The quantitative performance information in these reports generally falls into one of the following categories:
• Statistics about audit recommendations. These include the percentage of recommendations accepted by the auditees and the percentage of recommendations implemented.
• Results of satisfaction surveys. These are the results of post-audit surveys of auditees and of surveys of elected officials.
• Savings estimates. These are estimates of the savings or additional revenues that result from implementing audit recommendations.
• Media coverage statistics. These include the number of times audit reports are covered in newspaper articles or on news programs or the number of interviews given by the Auditor General for a given report.
• Website visit statistics. These are statistics on website visits and audit report downloads. Audit offices gather these statistics to document how many times performance audit reports are downloaded or consulted on their website over a given period.
These performance indicators are not used by all audit institutions and some are more popular than others.
For example, a majority of audit offices use one or more indicators about audit recommendations. Many audit
offices also use surveys of auditees and/or elected officials. By contrast, very few offices report on their
financial impact. (The National Audit Office of the United Kingdom and the United States Government
Accountability Office are examples of national audit offices that have adopted this practice.)
These indicators also differ in their usefulness or value. While they all provide performance information that
goes beyond outputs, in some cases the link to impacts is much clearer than for others. The following is a
description of the qualities and limitations of the usual quantitative indicators listed above.
Of these two indicators, the one about implemented recommendations is the most informative because it is
about actions that have actually been taken, whereas the indicator about accepted recommendations only
relates to the intention of taking actions.
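As a rough illustration of the difference between the two indicators (the records and field names below are hypothetical, not drawn from any office's tracking system), here is a short sketch that computes both rates from a set of recommendation records:

```python
# Hypothetical recommendation records; a real office would draw these
# from its recommendation-tracking database.
recommendations = [
    {"id": "2016-03-01", "accepted": True, "implemented": True},
    {"id": "2016-03-02", "accepted": True, "implemented": False},
    {"id": "2017-01-01", "accepted": True, "implemented": True},
    {"id": "2017-05-04", "accepted": True, "implemented": False},
]

total = len(recommendations)
accepted_rate = 100 * sum(r["accepted"] for r in recommendations) / total
implemented_rate = 100 * sum(r["implemented"] for r in recommendations) / total

# Acceptance measures stated intent; implementation measures action taken.
print(f"Accepted:    {accepted_rate:.0f}%")     # 100%
print(f"Implemented: {implemented_rate:.0f}%")  # 50%
```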
As many performance auditors and academic researchers have noted, the proportion of recommendations
that are accepted is often very high—100 percent is not rare—because political pressure forces auditees to
accept all recommendations to preserve appearances and reassure the public that all will be well. However, in
practice, things often end differently, and implementation rates are often well below 100 percent. This
situation was decried by the Auditor General of Canada in June 2018 on a national radio program: "We
always get the department agreeing to our recommendations but then somehow we come back five years
later, 10 years later and we find the same problems. It almost is like the departments are trying to make our
recommendations and our reports go away by saying they agree with our recommendations."10
10 Quoted by Elise von Scheel, CBC News, 1 June 2018, "Auditor General to public service: stop ignoring my reports"
Because it is about actions taken as a result of audit recommendations, the second indicator clearly
establishes a link between the auditors and the impact of their audits. As such, this indicator meets the basic
requirement for a good indicator of impact. However, a review of the literature on the reporting of audit
impacts shows that this indicator—the most used by legislative audit institutions—nonetheless suffers from a
number of limitations in the eyes of academics:
• By focusing on post-audit actions, the indicator captures only changes that took place after the publication of the audit report and so ignores all the changes that may have taken place during the audit process.
• Only instrumental changes (resulting from modification to processes, policies, programs, and so on) are captured; conceptual changes (learning and modified attitudes, behaviours, and outlook) are ignored.
• Readers do not know whether the implemented recommendations were significant ones or whether those that were not implemented were the most significant ones. (Also, readers often do not know why some recommendations were not implemented.)
• Similarly, because the data is often aggregated and presented as a single number, readers are not left with a clear picture of what actual change took place, only with the knowledge that change did take place.
• Finally, implementing recommendations does not necessarily result in an improvement if recommendations do not address the root causes of observed problems.
Another factor to consider is whether the data used to report on this indicator is self-reported, reviewed, or
audited. The less auditors have verified the data provided by auditees on their performance in implementing
audit recommendations, the less reliable the indicator will be because it is well known that auditees tend to
overestimate their progress.
For all the above reasons, academic researchers generally consider that reporting on the percentage of
implemented recommendations is a limited means of reporting on the impact of performance audits; in their
opinion, this indicator provides an incomplete and imperfect picture (Lonsdale, 1999; Morin, 2001, 2014;
Weets, 2008; Van Loocke and Put, 2011; Desmedt et al., 2017).
Savings Estimates
Estimates of the one-time and recurrent savings (or additional revenues) that result from implementing audit
recommendations are an effective indicator of the financial impact of performance audits because it is often
possible to directly link cause (audit recommendations implemented by auditees) and effect (savings or
additional revenues).11 However, for the reported numbers to be valid, some other conditions must be met:
• A consistent methodology for calculating savings must be established and adhered to.
• The calculations must deduct the cost of implementing the recommendations.
• The estimates should be conservative. They should include only financial benefits that have been realized (e.g., one-time savings) and that have a high likelihood of being maintained over several years (e.g., recurrent savings). Financial benefits that are only potential should not be included in the calculations.
• The estimates should be validated by the audited organizations. Ideally, they should also be externally validated.
• The budget of the audited organization should be reduced by the identified amount within a reasonable time period (i.e., three years or less).

11 Estimates of financial impact may capture only a subset of all the savings that may result from audits. For example, it may not be
possible to measure the savings that may result from the better mitigation of risks in the future (i.e., avoided adverse events). In this
sense, estimates of financial impact can be said to be conservative.
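As a minimal sketch of how these conditions translate into arithmetic (all figures are hypothetical), the following estimate counts only realized or likely-recurrent benefits and deducts the cost of implementing the recommendations:

```python
# Hypothetical audit-level figures, in dollars; a real office would use
# amounts validated with the audited organization (ideally also externally).
one_time_savings = 400_000          # realized and validated
annual_recurrent_savings = 250_000  # high likelihood of being maintained
recurrent_years = 5                 # horizon applied to recurrent savings
potential_savings = 900_000         # only potential: excluded by convention
implementation_cost = 150_000       # auditee's cost to implement recommendations

# Conservative estimate: realized plus likely recurrent benefits, net of the
# cost of implementing the recommendations; potential_savings is left out.
gross_savings = one_time_savings + annual_recurrent_savings * recurrent_years
net_financial_impact = gross_savings - implementation_cost

print(f"Net financial impact: ${net_financial_impact:,}")  # $1,500,000
```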
This indicator has several advantages. It is easy to understand and to communicate to elected officials and to
the public. It provides a single unit of measure that can compare the financial impact of different audits. And,
when added up and presented annually, savings can be compared with an audit office’s operational expense
to demonstrate the value of audit work as a whole. For example, in 2017, the National Audit Office of the
United Kingdom declared that every pound sterling invested in its operations resulted in £11 of savings for
the government.12 For the period 2014 to 2018, the Office of the Auditor General for the City of Toronto
reported a return of $11.70 for every dollar invested in its operations.13 These numbers enable these audit
offices to demonstrate their value in concrete terms and to support eventual requests for additional funds
with reasonable assurance that additional value would be obtained.
However, this financial indicator also has limitations. Because of its nature, it only informs on one aspect of
impact—the financial impact. Performance audits can also lead to positive changes in the public sector that
are not measured in terms of money; changes can often be measured in terms of increased efficiency, better
service, safer communities, and so on. Therefore, using a financial indicator cannot provide a full picture of
the impact of performance audits.

Estimates of financial impacts are easy to understand and to communicate to elected officials and to the
public, but they only tell one part of the story.

Using this financial indicator can also lead to a bias in audit selection if auditors are not careful. This can
happen if auditors preferentially select audits that they are confident will generate savings or additional
revenues for the government, neglecting other audits where non-financial benefits could also be significant.
Following a risk-based audit selection process is the best way to ensure that audits are selected in an
objective and consistent manner, without favouring financial impact over other types of impact. (For more
information on measuring financial impacts, see Reporting Financial Impacts.)
12 National Audit Office of the United Kingdom, Annual Report and Accounts 2017-18, p. 36
13 Auditor General's Office, City of Toronto, 2018 Annual Report, p. i
14 There are a few other examples, including the Government Accountability Office of the United States of America.
Surveys of elected officials can indicate whether an audit office has achieved an immediate outcome (holding government to account). As such, statistics from surveys of elected
officials can be relevant indicators of audit impact, but only if they provide information on the usefulness of
performance audits in helping elected officials to exercise their accountability function. However, because
surveys capture subjective opinions and perceptions, and because it is rare to obtain responses from everyone
in a selected population, they may not always accurately portray objective reality.
Instead of relying on numbers and statistics, narrative techniques present information on impact in a short,
clear text that highlights the main changes or actions that resulted from one or more performance audits.
One approach, adopted by the National Audit Office (NAO) of the United Kingdom, is to include a series of
short case studies in an annual performance report. An example of this approach is in the text box below.
“Our work has directly influenced the ongoing development of the new regulatory regime, as well as
contributing to the wider debate that culminated, in February 2018, with the government launching a
review of post-18 education funding. Our report also had influence in Parliament: it informed the
Education Select Committee’s inquiry into value for money in higher education, and the Treasury Select
Committee drew directly on our study findings in its report on student loans, concluding that market
mechanisms, in and of themselves, are not sufficient to drive meaningful improvements in quality.”
Source: Annual Report and Accounts 2017-18, National Audit Office of the United Kingdom, p. 34
Short summaries of performance audits and of the change that they fostered can complement quantitative
information because they present real cases that concretely illustrate how audits can make a difference in
citizens' lives—something that aggregated numbers in tables do not convey nearly as well.
Of course, not all audits have easily identifiable impacts, and some are therefore better suited to be
illustrative examples. Audit offices can demonstrate their value without having to present an impact summary
for all their audits, but rather by highlighting examples that demonstrate that audits can lead to changes in
policy, improved service quality, increased safety, better governance and oversight, or other non-quantifiable
impacts.

Short summaries of qualitative changes fostered by performance audits can show how they make a
difference in citizens' lives.
In presenting case studies or narratives on the impact of their audits, offices need to deal with questions of
attribution and plausibility. Of course, changes that result from implementing audit recommendations cannot
solely be attributed to audit offices; the actions are taken by the auditees and supported by the government.
Public Accounts Committees can also play a role in making change happen. Audit institutions must therefore
be careful how they phrase their narratives. The goal should be to demonstrate their influence and value, not
to take credit away from others.
It may not be possible, however, to demonstrate the exact extent or importance of the influence of an audit
in bringing about a specific change. Data may not always be complete, or it may not be easy to clearly
separate one influencing factor from another. In such cases, audit institutions may need to decide whether
they can base their reporting on the concept of plausibility, as opposed to assurance. If a plausible argument
cannot be made to clearly link an audit and observed changes, then it is not possible to claim any impact.
Section highlights
• The performance indicator most commonly used by Canadian audit institutions is the percentage of audit recommendations implemented.
• There are limited instances of reporting on the financial impact of performance audits. Conflicting priorities, resource considerations, and availability and quality of information limit audit institutions' capacity to report on financial impact.
• Over time, net progress has been made by Canadian audit institutions in reporting on the impact of their performance audits, but efforts in this direction have been uneven.
To determine how municipal, provincial, and federal audit institutions in Canada report on the impact of their
performance audits, we reviewed the latest annual report available (from 2017 and 2018) for each of 22
audit institutions.
In this section, these audit institutions are treated as two separate groups:
• members of the Canadian Council of Legislative Auditors (CCOLA), which include the 10 provincial offices and the Office of the Auditor General of Canada; and
• the offices with a local focus, which include the 10 municipal offices and the Auditor General for Local Government in British Columbia.
For each audit office, the information on performance was found in:
• an annual report on operations (stand-alone or part of a larger annual report that also includes audit reports); and/or
• a report on the status of audit recommendations (stand-alone or part of a larger report that also includes new audit reports).
Performance Indicators
In both groups, the most commonly used performance indicator is the percentage of audit recommendations
implemented by the auditees. In the CCOLA group, 9 out of 11 offices use this indicator, with reported
results that vary between 40 and 97 percent. (The methodologies differ in some respects, so the numbers are
not directly comparable.) One more office reports on the percentage of recommendations accepted by the
auditees and provides a breakdown of outstanding recommendations per ministry.
Among the audit offices with a local focus, the most commonly used performance indicator is also the
percentage of audit recommendations implemented by the auditees. In this group, 8 out of 11 offices use this
indicator and another one was planning to start using it in its next report.15 The reported rates of
recommendation implementation vary between 61 and 91 percent (the methodologies differ in some
respects). Four offices report on the percentage of accepted recommendations (in some cases, this is in
addition to reporting on the percentage of implemented recommendations).
In the CCOLA group, the second most-used indicator is the result of surveys of elected officials. About half (5
of 11) of the offices in this group use this indicator. Four audit offices also report the results of post-audit
surveys of auditees. In addition, three offices provide information on the number or percentage of their
performance audit reports that were reviewed by the Public Accounts Committee (or another legislative
committee) over the previous fiscal year.
Unlike offices in the CCOLA group, audit offices with a local focus do not report to a legislative assembly or a
Parliament. Their reporting arrangements vary; some report to a city council, some to an audit committee,
and others to a city manager. Of these 11 offices, one reports the results of post-audit surveys of auditees.
Because the practice of formally surveying audit committee members or councillors about their satisfaction
with the work of auditors is not established at the municipal level, no information is reported on this. (Face-
to-face discussion is a more common method of obtaining feedback for audit offices with a local focus.)

15 Of the two remaining offices, one does provide the information necessary to calculate a percentage but does not report the
percentage explicitly. The other office provides information on the number of recommendations outstanding, but no information on the
number of recommendations already implemented for a given period; a percentage therefore cannot be calculated.
Among the CCOLA group, no office reports on the financial impact of its performance audits. Among the
audit offices with a local focus, only the Auditor General for the City of Toronto reports this information. This
municipal audit institution has been required by Toronto’s Municipal Code to report this information since
2004. For the period 2014 to 2018, the Auditor General reported that the implementation of audit
recommendations by auditees had resulted in one-time and projected five-year savings of $303.7 million. For
every dollar invested in this audit institution over this period, a return of $11.70 was generated.
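The ratio itself is simple arithmetic: validated savings divided by the office's operating cost over the same period. As an illustrative check using the two published figures (the cost below is inferred from them, not quoted from the annual report):

```python
# Toronto's published figures for 2014 to 2018 (cited above).
reported_savings = 303_700_000  # one-time and projected five-year savings, $
return_per_dollar = 11.70       # savings generated per dollar invested

# The office's operating cost over the period is implied by the two numbers;
# it is inferred here for illustration only.
implied_cost = reported_savings / return_per_dollar
print(f"Implied 2014-2018 operating cost: ${implied_cost:,.0f}")  # ~$26 million
```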
Of the 22 audit institutions included in our research scope, only one reported information on the number of
visits to its website and on the number of audit report downloads.
Finally, qualitative reporting using narrative techniques or case studies is not a well-developed practice among
any of the audit institutions in our research scope.
Overall, readers interested in the impact of audit institutions can find some information on whether
recommendations are being implemented and whether the main stakeholders (elected officials and auditees)
see value in the work of performance auditors. However, readers can rarely find more detailed information,
whether on the financial impact of audits, the impact of specific audits, or the rates of recommendation
implementation in specific departments and agencies. Essentially, readers are told that audit institutions have
impacts, but cannot form a precise understanding of what this impact is in concrete terms.

Some Canadian audit institutions provide information on the views of elected officials and of auditees on the
value and impact of performance audits.
As we reviewed the academic literature, interviewed auditors in many Canadian audit institutions, and
searched through older annual reports, it became apparent during our research that, while some change
took place in reporting practices over the years, this change had been neither revolutionary nor consistent in
direction over time. Often, a practice was adopted in an office for some time, then eventually faded away.
Sometimes it was revived later on in another context or another office. For example, one office used to report
qualitative information (narratives) on the impact of specific audits, but not anymore. Another office used to
report the financial impacts of some audits, but not anymore. Another provided information on rates of
recommendation implementation but has stopped doing so. And several audit institutions used to survey
elected officials but have since stopped because of the low response rates they obtained in these surveys. In
some cases, like in this last one, a practice was ended because of the limited quantity and/or quality of
information available for reporting. In other cases, changing priorities and reallocation of resources justified
the decision.
Net progress has been made in reporting on impact over the last two decades or so, but efforts in this
direction appear to have been uneven. Looking at this situation, it seems reasonable to wonder why this has
been so and why innovations in this field have been rather few and far between.
A number of possible answers to these questions came up during the interviews we conducted as part of the
research for this paper. To start with, there has been a general lack of demand from key stakeholders for
more information on the impact of performance audits. Academic researchers may be hungry for this
information, but elected officials and the public have not been pressing auditors for it. The only notable
exception has been the City of Toronto's audit committee, which in 2004 requested the Auditor General to
"report actual and potential dollar savings, at risk dollars, and the impact of non-financial benefits to the City
of Toronto resulting from the Office's work." The Toronto Municipal Code (2006) requires the Auditor
General to "report annually to City Council on the activities of the Office and savings achieved." This
requirement explains in large part why this audit office is the only one in Canada to regularly report on the
financial impacts of its performance audits.
Despite the lack of demand, the leaders and staff of audit institutions we interviewed were generally
interested in improving their reporting on impacts.
The other reasons that may explain the erratic evolution of reporting on audit impacts have to do with
priorities, resources, the availability and quality of information, and organizational culture.
Interviewees often mentioned that increasing efforts to monitor the impact of performance audits involved a
trade-off between allocating resources to new audits and allocating resources to monitoring and following
up on previous audits. Public audit institutions have limited resources and ultimately this decision depends on
the priorities of each auditor general. Resources also affect the capacity of audit institutions to conduct
research on and develop new means of monitoring and reporting audit impacts; smaller audit offices usually
do not have a dedicated research or methodology team that can undertake such work.
The limited availability or quality of information on audit impact is another constraint frequently cited by
interviewees. For example, most of them expressed reluctance to report on the financial impact of their audits
because they felt that in many cases this information would be very difficult to obtain or to validate.
Finally, organizational culture may also play a role in slowing the evolution of practices for reporting on audit
impact. Interviewees made many comments about impact attribution and the legitimacy of claiming credit
for actions that are ultimately taken by departments and agencies, not by audit institutions.
Section highlights
= There are many strategies to improve the reporting of audit impact. In fact, it is when these
strategies are used together that they are the most effective.
= Increasing transparency is the cornerstone of good performance reporting. Effective use of
technology and balanced reporting can magnify the transparency of an audit office.
= Improving information presentation is a recognized approach to ensure that impacts are well
communicated. Many good practices could be replicated.
= Measuring financial impact is one way to demonstrate vividly the impact of performance
audits. Although fraught with challenges, it has been done successfully by a few offices.
= Using narratives to report qualitative information is another effective way to broaden and
improve reporting on audit impact.
Many Canadian audit institutions recognize that they could make improvements to how they measure and
report the impact of their performance audits. Indeed, several offices were actively working on this challenge
when this paper was being prepared.
The observations in section 2.2 showed that Canadian audit institutions rely on few performance indicators,
rarely report financial impact information, and provide little qualitative information on audit impacts. This
suggests that these audit offices have opportunities to broaden their reporting by using different types of
indicators and sources of information.
The general advice on performance reporting provided in a 2004 paper written by John Mayne, a former
Principal at the Office of the Auditor General of Canada, is very relevant here:
“It is likely that no single piece of evidence gathered will on its own be enough to build a credible case concerning a result achieved or a contribution made by a program. ... Although no one piece of evidence may be very convincing, a larger set of different and complementary evidence can become quite convincing. It is the totality of the evidence gathered — some of it strong, some perhaps rather weak — that builds a credible case.”
The remainder of this section presents some good practices and concrete steps that audit institutions can
consider adopting to improve how they measure and report the impact of their performance audits.
Increasing Transparency
In recent years, the operationalization of the open government concept has been facilitated by many
information technology innovations. As a result, citizens now have access to more information on
government operations and can use this information to hold their governments accountable.
Audit institutions can be part of this trend by making more information publicly available on their websites.
For example, most audit offices have a database of audit recommendations and information on
implementation status that they use to facilitate their follow-up processes. However, these databases are not
usually publicly available. But, with today’s technology, some or all the information they contain could be
shared online as a source of information for citizens to follow the progress of specific departments or check
the implementation status of a specific audit’s recommendations. The Office of the City Auditor for the City
of San José, California, did exactly that early in 2018. With the help of interns and the Microsoft Power BI software, this small audit office developed an Access database, which it posted online as an Interactive Dashboard of Open Audit Recommendations that enables users to rapidly find information on the status of any open audit recommendation. (The information is provided by departments and reviewed by auditors.) Drawing on this example and their available resources, larger audit offices could no doubt imitate and build on this model.
Other transparency improvements can be made without new technologies. Sometimes, increasing
transparency is simply about making information easier to find in annual reports. For example, some audit
institutions provide information on open or implemented recommendations by department or by audit. There
may be opportunities to aggregate this information to present an overall measure of the rate of
implementation for a given year (in addition to what is already being reported).
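To illustrate the kind of aggregation described above, here is a minimal sketch in Python. The record layout, field names, and figures are all hypothetical illustrations, not drawn from any office's actual data:

```python
# Hypothetical sketch: roll per-department recommendation records up
# into an overall implementation rate for one reporting year.
from collections import Counter

# Each record: (department, reporting_year, status)
records = [
    ("Health", 2018, "implemented"),
    ("Health", 2018, "in progress"),
    ("Transport", 2018, "implemented"),
    ("Transport", 2018, "not implemented"),
    ("Education", 2017, "implemented"),
]

def implementation_rate(records, year):
    """Share of recommendations marked implemented in a given year."""
    statuses = Counter(status for dept, y, status in records if y == year)
    total = sum(statuses.values())
    return statuses["implemented"] / total if total else None

print(f"2018 overall implementation rate: {implementation_rate(records, 2018):.0%}")
# -> 2018 overall implementation rate: 50%
```

The same aggregated figure could then be reported alongside the existing per-department or per-audit breakdowns.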
Another option for increasing transparency would be to publish on the Web the action plans that auditees
provide to audit institutions and Public Accounts Committees to explain how they intend to implement the
audit recommendations aimed at their organizations. Having this information would help interested parties to
track progress and to hold the government accountable for its commitments.
Finally, performance reporting allows audit institutions to describe their performance objectively, including
areas for improvement. Should unintended or negative impacts result from a performance audit, this too can
be reported and adequately explained.
Improving Information Presentation

[Figure 2: Implementation rates of audit recommendations by reporting year, 2009-2018. Source: Follow-up of 2014 and 2015 Recommendations, Office of the Auditor General of Nova Scotia, 2018, p. 9]
Providing more detailed information by increasing the granularity of data can also improve reporting on
impact. As already mentioned, some audit institutions go beyond providing information on recommendation
implementation at the government-wide level by breaking down this information at the departmental level.
Another example is from the office of the Provincial Auditor of Saskatchewan, which reports annually on two
categories of recommendations:
= Type 1 recommendations are less complex, easier to carry out, and can be implemented in one year;
these are tracked over a five-year period.
= Type 2 recommendations are more difficult to carry out and may take up to five years to implement;
these are tracked over a 10-year period.
The implementation rates for the two types of recommendations are presented in a single chart, shown in
Figure 3.
[Figure 3: Percentage of recommendations acted on, Type 1 and Type 2, 2009-2018. Source: Annual Report on Operations for the Year Ended March 31, 2018, Provincial Auditor of Saskatchewan, 2018, p. 16]
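As a rough illustration of how such two-track reporting could be computed, the following sketch counts only the recommendations still inside their tracking window for a given reporting year. The record format and figures are hypothetical; the office's actual methodology is described in its annual report:

```python
# Hypothetical sketch of two-track recommendation follow-up:
# Type 1 recommendations are tracked for 5 years, Type 2 for 10.
TRACKING_YEARS = {1: 5, 2: 10}

# Each record: (rec_type, year_made, acted_on)
recommendations = [
    (1, 2014, True), (1, 2016, False), (1, 2011, True),   # Type 1
    (2, 2012, True), (2, 2010, False), (2, 2015, False),  # Type 2
]

def rate_acted_on(recs, rec_type, reporting_year):
    """Percentage of type-N recommendations acted on, counting only
    those still inside their tracking window for the reporting year."""
    window = TRACKING_YEARS[rec_type]
    tracked = [acted for t, made, acted in recs
               if t == rec_type and reporting_year - made <= window]
    return 100 * sum(tracked) / len(tracked) if tracked else None

for t in (1, 2):
    print(f"Type {t}: {rate_acted_on(recommendations, t, 2018):.0f}% acted on")
```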
[Figure 4: Performance measures, including the percentage of recommendations accepted by auditees. Source: Annual Report, British Columbia Auditor General for Local Government, 2018, p. 9]
Charts can also report effectively on several indicators simultaneously. For example, in the chart in Figure 4,
British Columbia’s Auditor General for Local Government reports on various indicators based on the results of
surveys of auditees and of local governments, as well as statistics on recommendation implementation.
Finally, other options, such as web-based interactive charts and infographics, are also available to audit
institutions to make their reporting on the impact of their performance audits more attractive and engaging
to readers.
Measuring and Reporting Financial Impact

The Office of the Auditor General for the City of Toronto has been reporting financial impacts for 14 years and the National Audit Office of the United Kingdom (NAO) has done so for over 25 years. With the exception of the City of Toronto, the Canadian community of performance auditors has not adopted this approach so far. A large majority of the auditors we interviewed for this paper stated various reasons for not measuring and reporting financial impacts. These reasons generally fell into three categories:
= Risk: Many interviewees felt that adopting this practice would expose their office to a potential loss
of credibility. In their view, the assumptions that would need to be made in calculating financial
impact could easily be challenged or could lead to misinterpretation. Some interviewees were also
uncomfortable about the perceived need to make predictions or assumptions about the future that
could turn out to be wrong.
= Resources: Some interviewees thought that adopting this practice would require too much time and
resources from their audit teams. A trade-off would be involved, in which fewer audits would be
done in order to free resources to measure and report financial impacts.
= Limited scope: Many interviewees thought that a financial indicator would be too restrictive because
it would provide information on only one aspect of audit impact. They felt that this indicator would
take the focus away from other non-financial aspects, such as safety and service quality.
As part of the research for this paper, we also interviewed representatives from the NAO and the Office of
the Auditor General for the City of Toronto to find out how they managed the process of measuring and
reporting the financial impact of their performance audits. In addition, we reviewed the annual reports of
these two audit institutions. This research has provided clues as to how these two offices have dealt with the
reasons cited by auditors for not measuring and reporting on financial impact.
First of all, it is true that not all audit topics lend themselves to results that can be measured in financial
terms. For this reason, neither the NAO nor the Auditor General of Toronto reports financial impacts for every
audit. They both take great care to select audit topics based on risks (as good practices suggest), not on
whether they would be likely to generate financial impact.
It is also true that financial impact is only one aspect of audit impacts and that reporting only on this aspect
would be limited. This is why these two offices use additional indicators and use narratives to present
qualitative impact information in their annual reports. Both the NAO and the Auditor General of Toronto
report on the percentage of recommendations that auditees have implemented. In addition, the NAO's
annual report presents a number of case studies and the Auditor General of Toronto’s annual report provides
short narratives about the non-quantifiable benefits of several of its recent performance audits.
The concern about the risk to an audit office’s credibility is also valid. To manage this risk, the NAO and the
Auditor General of Toronto have rigorous processes and clear guidance to ensure that their measures of
financial impacts can resist challenge. Table 3 summarizes the principles that support the NAO's process to
measure and report financial impacts.
Table 3 - Key Principles That Must Be Met Before Financial Impacts Can Be
Claimed by the National Audit Office (NAO) of the United Kingdom
Causation: There must be a causal link between work conducted by the NAO and the benefit.
Realization: Impacts must have been realized within, or before, the calendar year in which they are
reported.
Valuation: Claims are supported by reliable evidence, data, or both, and implementation costs are acknowledged. There must be a robust methodology to value the impact.
Attribution: The proportion of impact claimed should reflect the NAO's degree of contribution to the
benefit realized.
Validation: All impact claims need to be validated by the audited body concerned (at sufficiently senior
level) and approved internally.
Source: Annual Report and Accounts 2017-18, National Audit Office of the United Kingdom, 2018, p. 116
At the NAO, all the financial impacts are identified, discussed, and agreed to with the audited organizations.
The results are then reviewed by an internal audit team before being validated by an independent external
auditor. Only financial impacts that have been realized can be claimed and sufficient evidence must be
provided by the auditees to support the assertions. This rigorous process allows the NAO to feel confident
about reporting the financial impacts of its performance audits.
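To make these principles concrete, the sketch below encodes the five principles from Table 3 as a simple gate that a claimed financial impact must pass before any amount is counted. The data structure and function names are hypothetical illustrations, not the NAO's actual system:

```python
# Hypothetical sketch: the five NAO principles encoded as a simple
# pre-reporting gate for a claimed financial impact.
from dataclasses import dataclass

@dataclass
class ImpactClaim:
    causal_link_to_audit: bool        # Causation
    realized_by_reporting_year: bool  # Realization
    valuation_method_robust: bool     # Valuation (evidence, costs acknowledged)
    attribution_share: float          # Attribution: office's share of benefit, 0-1
    validated_by_auditee: bool        # Validation (senior level at audited body)
    approved_internally: bool         # Validation (internal approval)
    gross_benefit: float              # Realized benefit, in dollars

def claimable_amount(claim: ImpactClaim) -> float:
    """Return the amount that may be claimed, or 0 if any principle fails."""
    if not (claim.causal_link_to_audit
            and claim.realized_by_reporting_year
            and claim.valuation_method_robust
            and claim.validated_by_auditee
            and claim.approved_internally):
        return 0.0
    # Attribution: claim only the proportion reflecting the office's contribution.
    return claim.gross_benefit * claim.attribution_share

claim = ImpactClaim(True, True, True, 0.5, True, True, gross_benefit=2_000_000)
print(f"Claimable impact: ${claimable_amount(claim):,.0f}")  # -> $1,000,000
```

Note that the attribution principle is modelled here as a simple proportion; in practice, judging an audit office's degree of contribution to a realized benefit is the most debatable part of the exercise.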
The Auditor General of Toronto also has a structured process to manage the measurement of financial
impact. This process is integrated in the annual recommendations follow-up process, follows regular
performance audit principles, and requires similar skills. Only realized financial impacts can qualify and all
estimates must be discussed and agreed to with the auditees. Realized savings must also be reflected in
organizational budgets. For impacts that recur over many years, only results for up to five years may be included in the calculations.
Because audit topics vary every year and because the potential for financial impact varies from one audit to
another, the total financial impact of performance audits can fluctuate considerably from year to year. To smooth
out this fluctuation, the Auditor General of Toronto presents this information in its annual report based on a
rolling five-year period.
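A worked sketch of this smoothing, using invented figures, shows how a rolling five-year total and average dampen the effect of a single exceptional year:

```python
# Hypothetical sketch: smoothing year-to-year fluctuation in realized
# financial impacts with a rolling five-year total and average.
annual_impacts = {  # realized impacts by reporting year, in dollars (invented)
    2014: 12_000_000,
    2015: 0,            # no audits amenable to financial measurement that year
    2016: 35_000_000,
    2017: 4_000_000,
    2018: 9_000_000,
}

window = sorted(annual_impacts)[-5:]  # the five most recent reporting years
total = sum(annual_impacts[y] for y in window)
print(f"{window[0]}-{window[-1]} total financial impact: ${total:,.0f}")
print(f"Rolling five-year average: ${total / len(window):,.0f} per year")
```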
This inherent fluctuation may pose a particular difficulty for offices that do only a few performance audits
each year. In such cases, it may happen that none of the risk-based audits for a given year are amenable to
the measurement of financial impacts, producing a less-than-desirable reportable result of $0 for that year.
This situation will be less of a concern for audit institutions that produce more audits each year.
Audit offices that can measure and report on their financial impact benefit from this practice because it helps
them to demonstrate their value using a metric that is easily understood by elected officials and the public.
This demonstration is made easier still by using the ratio of financial impacts to the annual operating costs of the audit office, which clearly communicates that each dollar invested in auditing results in x dollars of savings or additional government revenue.
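For example (with invented figures), an office with annual operating costs of $5 million that can validate $40 million in realized financial impacts over the same period could report a ratio of $40 million / $5 million = 8, that is, $8 of savings or additional revenue for every dollar invested in the audit office.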
Of course, doing all this has a cost. Measuring and reporting the financial impacts of performance audits
requires time and resources. However, at the NAO and the Auditor General of Toronto, these requirements
do not appear to be unreasonable (although they might be so for smaller audit institutions). The Auditor
General of Toronto has a team of auditors dedicated to the follow-up of audit recommendations; their work
includes the measurement of financial impacts. This represents between two and three full-time equivalents
(FTEs). Preparing the annual report also requires about a quarter of an FTE. At the NAO, which is a much
bigger office, each audit division has its own champion to support audit teams in measuring financial
impacts. While some trade-off may be required to make this work possible, the NAO sees this trade-off as
worth the benefits. (The Auditor General of Toronto has no choice because it is required by the Municipal
Code to report on its financial impact.)
Using Narratives to Report Qualitative Information

Audit institutions often struggle with this type of reporting because they either do not collect the necessary information or do not feel confident about using it in their annual report. Some useful data could be collected for this purpose when auditors conduct their annual follow-up on the state of implementation of the auditees’ action plans. Another practice that may help audit offices to gather this information is to require their audit teams to set specific value-added expectations during the planning phase of each audit and to report internally on the achievement of these expectations within a year after the audit report is released.
Post-audit surveys of auditees and of Public Accounts Committee members also have some recognized challenges. Survey fatigue is a complication for surveys of Public Accounts Committee members. Response rates can be quite low among this group, which diminishes the value of the survey results. Other methods, such as interviews or focus groups, may be more effective in such situations.
Surveys of auditees can be affected by some respondents’ lack of objectivity. This may particularly happen if
an audit report includes negative findings and the auditees feel resentment toward the auditors. Survey
responses may be skewed as a result.
Another problem is that post-audit surveys of auditees are often conducted relatively soon after audits are
completed. Because of this, they cannot provide information on the impact in an audited organization two or
three years later, when recommendations have had more time to be implemented. One solution would be to
conduct a further round of surveys later in the follow-up process to obtain more in-depth information on
impacts. While this practice does not appear to exist currently in Canadian audit institutions, it has been used
by some academic researchers in the past. Of course, one limiting factor is the loss of corporate memory that
results from staff turnover in audited organizations over time.
Finally, designing good surveys requires certain specialized skills and some experience, which may not exist in
equal measure in all audit institutions. However, considering that all audit offices are looking for similar
performance information, they could all benefit from collaborating to develop well-designed surveys that
could be used by all offices to collect good information on the impact of their performance audits.
There are a number of options for offices that want to improve their reporting on impacts. Based on our
research, in Table 4 we propose 10 good practices to help audit institutions better demonstrate the value
and impact of performance audits.
While it may not be possible for all offices to adopt all the practices in Table 4, they can consider which ones
would enable them to have a balanced approach. Such an approach should tend toward a mix of
quantitative and qualitative as well as financial and non-financial information, with both aggregate statistics
and illustrative examples. Seeking advice from other offices and sharing knowledge among the Canadian
audit community will help each office select and adopt the right practices.
Table 4 - Ten Good Practices That Audit Institutions Can Adopt to Better
Demonstrate Their Value and Impact
1. State the immediate, intermediate, and ultimate outcomes that are expected to result from the
office’s performance audit practice as a whole.
2. Set value-added objectives for each performance audit during the planning phase.
3. When possible, ensure that pre-report impacts are captured either in the audit report or in the
audited organization's response to the audit recommendations.
4. Report annually on the percentage of implemented audit recommendations, using a consistent
approach over time. Also provide a breakdown of this information at the departmental level.
5. Report on recommendation implementation trends over the years and explain any variance
observed.
6. Increase transparency by making a searchable database of recommendations and their
implementation status available online. Specify whether the information in the database has
been reviewed or audited by the audit office.
7. Use case studies and narratives based on qualitative information to report notable audit
impacts.
8. Where feasible and relevant, report the financial impact of performance audits.
9. Report on the extent to which auditees and the members of Public Accounts Committees see value in performance audits by disclosing the results of post-audit surveys.
10. Conduct surveys of audit impact several years after the completion of selected audits and
report the findings of these surveys. Where feasible, link the findings back to the office’s
expected outcomes for performance audits.
3.1 The Actions Canadian Audit Institutions Are Taking to Increase Their Impact

In Part 1 of this paper, we listed many factors that can influence the impact of a performance audit and made a distinction between the factors that audit institutions can control and those that they cannot (see Table 2). The factors that audit offices can control include, among others, the selection of audit topics, relationships with the auditees, efforts to disseminate the audit findings, and follow-up mechanisms.
During the interviews we conducted for this paper, we asked senior performance auditors how their office
was trying to increase the impact of its performance audits and on which of the above factors these efforts
focused. The auditors told us about their own office’s priorities and initiatives. These were related, for the
most part, to four of the above factors: audit topic selection, relationships with the auditees, efforts to
disseminate the audit findings, and follow-up mechanisms.
In this part of the paper, we provide an overview of the ideas and opinions we heard from interviewees about
how audit offices can increase the impact of their performance audits. The ideas and practices are presented
following the phases of the audit process, so the practices related to the audit planning phase are presented
first and those related to the follow-up phase are discussed last.
Some of the ideas presented in the following sections are fairly common and generally accepted, while others
are not unanimously supported. We hope to stimulate discussions about which ones can be the most
effective at increasing the impact of performance audits.
Selecting Audit Topics

Several interviewees also mentioned that their audit institution had recently improved its audit selection
process to make it more formal, rigorous, and risk-based. The hope is that a more robust audit selection
process will allow performance audit managers to make better decisions and select more impactful audits.
(Various options for improving the audit selection process can be found in our discussion paper Audit
Selection and Multi-Year Planning.)
Building Good Relationships with Auditees

Good relationships are important because they are the basis for building trust and understanding, and
ultimately, a cooperative attitude that enables auditors and auditees to agree on a set of practical audit
recommendations. Without this cooperative attitude, auditors are less likely to obtain the buy-in of auditees
and auditees are less likely to implement audit recommendations. And, without implementation, an audit will
have a much less significant impact.
From what we heard during the interviews, many Canadian audit offices have made conscious efforts in
recent years to improve relationships with their auditees. The many actions mentioned by auditors were
simple enough in themselves, but they do require a continued commitment from all the auditors involved in
discussions with auditees. These actions included:
= holding more frequent meetings with deputy ministers (or senior executives or city managers) to
respond to their questions and concerns;
= being more transparent with auditees about audit plans and criteria;
= sharing and discussing significant audit observations with management early on (the “no surprise”
approach), as well as sharing news releases with auditees prior to their publication;
= taking the time to explain audit findings in detail to management and discussing potential audit
recommendations to ensure that they would be practical and well-targeted;
= being respectful of auditees’ time and aware of their workload, as well as making efforts to minimize
the impact of the audit process on their operations; and
= listening carefully to what auditees have to say and demonstrating to them that the audit team has
developed a good understanding of their business.
One interviewee stated that building good relationships goes beyond all of these actions; it is in fact “a frame of mind that must be cultivated and sustained over time” to reap results. Another interviewee stressed that good relationships are not something you start to work on at the beginning of an audit. By then it is already too late. It is much better to cultivate good relationships as early as possible, as soon as one becomes responsible for a specific audit portfolio.
Finally, a senior auditor we interviewed attributed in good measure the increasing rate of recommendation
implementation that his office has observed in recent years to the efforts this office made to improve
relationships with auditees over the same period. For this auditor, there was no doubt that good relationships
can contribute to producing more impactful audits.
Disseminating Audit Findings

The senior auditors we interviewed were aware of social media’s potential for reaching out to a broader audience, particularly younger generations. However, many were as yet unsure how this potential could best be exploited. Many audit offices have taken the first step of announcing the publication of their audit reports on Twitter or Facebook, but some are not quite certain whether they can or want to do more than this. Also, many audit offices do not have a communications specialist on staff and do not feel they currently have the capacity to manage several social media platforms concurrently. But whatever the means used and the resources available, the goal of using social media remains the same: to increase the potential for impact by disseminating audit messages to a larger audience.
Some audit institutions use YouTube and now produce a short video highlighting the findings of each of their performance audits. Because YouTube is very popular among younger people, it may be an
effective means of reaching a growing audience of young citizens who prefer to learn by watching short
videos.
Audit conclusions can also be shared through traditional means, by making presentations at conferences or on university campuses. While this is not common among Canadian audit institutions, several auditors believe that their office would gain by making more of these presentations because, to quote an interviewee, “education is important, too—we can go beyond the ‘who’s to blame’ approach.”
Reaching a wider audience and increasing engagement with existing audiences can also be achieved by
producing audit reports that are more attractive and easier to read and understand. Many audit offices across
Canada have made efforts in this direction in recent years through various means, including:
= adopting a plain language style and providing training to their auditors on report writing (including
CAAF's Effective Report Writing Training);
= using infographics and data visualization techniques to convey key information at a glance;
= using more colours in audit reports (instead of black text on a white background); and
= writing shorter reports and providing a concise summary at the front end.
Several auditors we interviewed also emphasized that reporting more frequently can effectively increase the
impact of performance audits. By “reporting more frequently,” they meant publishing audit reports as soon
as they are completed, as opposed to publishing all audit reports in one single annual report (or many audits
together in semi-annual reports). This is currently common in municipal audit institutions, but rare in the
provincial offices (the exceptions are the auditors general of British Columbia and Manitoba). The auditors
who favoured this reporting strategy had a number of arguments to support their claim:
= Reporting when ready means that reports are timely and current. When reporting annually, some
audits may have been completed months before the reporting date and may be less current as a
result.
= Reporting audits separately allows auditors general, the media, and those elected officials charged
with reviewing audit reports to devote their attention to each report one at a time. When many
reports are published all at once, attention tends to be captured by a few reports at the expense of
the others.
= Reporting audits separately also allows an audit office to use social media platforms to highlight the
main messages of each audit, which may not be possible when many audits are published all at
once.
= Reporting more frequently provides more visibility for audit institutions and more opportunities to
remind elected officials and the public of their role and the importance of their work.
However, not everyone we talked with agreed with all these arguments. For example, we heard that the
“reporting as ready” approach made planning and coordination more difficult within an office. We also
heard that offering only one report at a time to elected officials took away from these officials the option of
deciding on which topic to focus their attention. And, of course, depending on the wording of their legal
mandate, some audit institutions may not have the flexibility of reporting as often as they might want to.
Following Up on Recommendations

To increase the implementation of their recommendations, many Canadian audit institutions have made changes in recent
years to their recommendation follow-up practices. Many auditors we interviewed highlighted the improved
practices their office had adopted and explained that these changes reflected a belief that putting more
emphasis on recommendation follow-up leads to more impactful audits.
Most of the changes we heard about have to do with the scope, timing, and frequency of follow-up
activities. Several audit offices now follow up on recommendations earlier and more often than in the past.
For example, some offices now follow up on audit recommendations every year after they are first presented
to the auditees. In some cases, the practice ceases after three or four years, but in others the follow-up
continues until all recommendations are implemented or until they become obsolete. According to auditors
we interviewed, this “keeps the pressure on” and reduces the likelihood of “auditees losing their momentum
over time or forgetting about some recommendations because they have not heard from the auditors for two
years or more.”
Another change we heard about from several auditors is the decision of their audit office to put more effort into verifying the information reported by the auditees about the status of the recommendations they are responsible for implementing. There appears to be a trend toward providing a higher level of assurance for the follow-up of performance audit recommendations.

Of course, doing more frequent follow-up work at a higher level of assurance requires more time and resources. In practice, this has meant that several audit institutions have made a conscious choice to allocate more of their resources to follow-up audits at the expense of doing fewer new audits. This results in fewer audit recommendations overall, but these recommendations are subjected to more rigorous and more frequent follow-up, which increases the likelihood that they will be fully implemented and lead to positive change.
According to our interviewees, this trade-off between follow-up work and new audits has been worth it. For
example, senior managers at the Office of the Auditor General of Québec have noted an improvement in the
rate of recommendation implementation since they have started to do more frequent and more rigorous
follow-ups. They and other auditors we talked with have also noted that making fewer recommendations
every year has forced them to pay more attention when they draft their recommendations. They feel that this
new constraint provides them with an additional incentive to make only recommendations that will drive
significant change; in essence, they now make fewer but better recommendations. To help auditors achieve
this aim, Table 5 lists questions to consider when drafting performance audit recommendations.
Table 5 - Questions to Consider When Drafting Performance Audit Recommendations

1. Is the recommendation addressed to the right organization (that is, the one that can actually implement it and make change happen)?
2. Is the recommendation aimed at the root cause of the issue or at its symptoms? (See our discussion paper on Root Cause Analysis for guidance on this topic.)
3. Is the recommendation consistent with the audit observations and with recommendations made in previous audit reports, where applicable?
4. Is the recommendation focused on an area of significant risk?
5. Is the recommendation succinct but detailed enough to stand alone?
6. Is the recommendation worded in such a way that it is not too prescriptive? (That is, the auditees retain the flexibility to decide the best means of implementing the recommendation.)
7. What is the cost and feasibility of implementing the proposed action? Are there alternative
courses of remedial action that would be easier to implement or be more affordable?
8. Can the recommendation be implemented within a reasonable time?
9. What would be the impact on results, both positive and negative, if the recommendation were
adopted?
10. Could successful implementation of the recommendation be reasonably determined in a follow-
up audit?
Beyond doing more frequent and more rigorous follow-ups, and issuing better recommendations, a few audit
institutions have adopted a more systematic approach to action plans for implementing audit
recommendations. Clear rules have been defined about when auditees have to submit their action plans and
what these plans should include. The offices of the auditors general of Québec, City of Québec, and City of
Laval have also started assessing the quality of these action plans against pre-established criteria and they
follow up with auditees to ensure that their action plans meet these criteria.
Following a recommendation by the Auditor General of the City of Montréal, the City allocated funding in its
2019 budget to create an analyst position in the office of the Comptroller General. This analyst will be
responsible (among other tasks) for following up on the City’s implementation of Auditor General
recommendations and for ensuring that corrective measures are taken as set out in action plans. (The City of
Laval has a similar arrangement.) Managers who fail to meet their commitments will be asked by the City
Manager to provide explanations and justifications. This type of internal accountability structure strongly
complements the follow-up activities of audit institutions.
Finally, in some provinces, the Public Accounts Committee is actively involved in requesting the action plans
from the auditees and in following up on their progress in implementing these plans. This is done in
collaboration with the provincial Auditor General. For more information on how auditors general and Public
Accounts Committees can best collaborate and increase their impact, consult our discussion paper on this
topic: Building and Sustaining Effective Auditor General — Public Accounts Committee Relationships.