Using Electronic Audit Workpaper Systems in Audit Practice: Task Analysis, Learning, and Resistance
Jean C. Bedard
(Contact Author)
Department of Accountancy
Bentley College
175 Forest Street
Waltham, MA 02452
Michael L. Ettredge
Division of Accounting and Information Systems
School of Business
University of Kansas
Lawrence, KS 66045-2003
Karla M. Johnstone
Department of Accounting and Information Systems
School of Business
University of Wisconsin - Madison
975 University Avenue
Madison, WI 53706-1323
March 2006
Acknowledgments: We thank the participating firm for enabling us to work with them on this
project. We appreciate the comments of Vicky Arnold and Steve Sutton.
USING ELECTRONIC AUDIT WORKPAPER SYSTEMS IN AUDIT PRACTICE:
TASK ANALYSIS, LEARNING, AND RESISTANCE
SUMMARY: While many audit firms have adopted electronic systems for workpaper
preparation and review in hopes of improving both efficiency and effectiveness, prior research
shows that the expected gains may be difficult to achieve. In order to investigate possible sources
of difficulty in full use of these systems in audit practice, this paper identifies individual task
components involved in workpaper preparation and review. We assess the relative difficulty of
performing the individual component tasks, and examine the “learning curve” by relating
difficulty ratings to performance frequency. We also assess which component tasks are more
difficult in an electronic versus a paper environment, and measure auditor resistance to the
electronic system after one to two years of use. Using survey data from auditors at an
international audit firm that recently adopted an electronic workpaper system, we find that tasks
involving “navigation” around the electronic system (e.g., agreeing lead sheets with workpapers)
are the most difficult for auditors to accomplish. Audit managers and partners express greater
difficulty with the electronic system, and report using fewer of the capabilities of the system,
relative to staff and seniors. Finally, we present reported incidence of “working around” the
system, including behaviors such as creating review notes and storing workpapers outside the
system. The difficulties that we document have implications for compliance with
professional standards such as Auditing Standard No. 3, on audit documentation. Our results are
useful to audit practice in targeting training efforts, and to research in providing topics for study
of decision improvement.
Data Availability: Inquiries regarding potential uses of the data may be directed to the contact
author.
USING ELECTRONIC AUDIT WORKPAPER SYSTEMS IN AUDIT PRACTICE:
TASK ANALYSIS, LEARNING, AND RESISTANCE
INTRODUCTION
This study investigates specific sources of difficulty faced by auditors in preparing and
reviewing workpapers using electronic systems. In addition to documenting the component tasks
in which greater difficulty occurs, we assess whether difficulty is reduced by greater frequency
of practice, and whether the negative association of difficulty with frequency differs among
preparers (audit staff and seniors) and reviewers (managers and partners). We further assess
reports of “working around” the system after a period of experience with it, which may indicate continuing resistance to full system use.
Our analysis is motivated by two trends in the audit industry. First, while some audit
firms have shifted from paper to fully electronic environments for audit work systems (Yang
1993; Rothman 1997; McCollum and Salierno 2003), others have adopted partially electronic
systems (e.g., simply creating PDF files for storage) or are still considering making this transition.
Further, based on our discussions with practicing auditors, there is important variation in the
nature and use of electronic audit workpaper systems even at the largest firms, which implies
resistance by some auditors to fully incorporating the systems into their everyday audit practice.
For example, while some firms require all personnel to use the system, others require only staff
and senior auditors to use the system. Some firms allow individual partners to “opt out” of using
the system on specific engagements, while others do not. Some firms require that all tasks are
completed using the system, while others require only certain tasks to be completed using the
system. Some firms require use of the system for electronic storage of documentation, while
others allow storage of documentation in both electronic and paper formats. Further, there are
often pockets of clients for which electronic systems are not yet appropriately tailored to
industry-specific needs, e.g., governmental and non-profit engagements, so in these industries the
traditional paper-based system is still in use. These variations in practice imply that the move to fully electronic audit environments is far from complete.
The second trend that motivates our analysis is that regulators are currently very
focused on audit documentation (e.g., PCAOB 2004). As our anecdotal evidence suggests, there is considerable
variation (even among the largest audit firms) in how such documentation is developed and
maintained. While some sources of variation are known because they result from design features,
in other cases variation in documentation may occur based on varying system usage. This
variation may gain importance due to the PCAOB inspection process and the requirements of Auditing Standard No. 3 (PCAOB 2004).
While these trends motivate research on electronic audit workpaper systems, there are
only a few studies that investigate issues associated with use of such systems. What these studies
reveal is that there are potential effectiveness and efficiency difficulties associated with adoption
and continuing use of electronic workpaper systems (Bedard et al. 2003; Brazel et al. 2004; Bible
et al. 2005; Rosman et al. 2006). These findings are not surprising, given that prior research in
the contexts of education and information systems shows that information processing in
technology requires devotion of some portion of short-term memory that would otherwise be
used for processing of task information (e.g., Thuring et al. 1995). Further, hypertext
environments impose navigation demands that can impair task performance (e.g., Mills et al. 2002). Due to the increasing use of electronic practice tools in
auditing and evidence that auditors might not perform better in electronic environments, further
research is clearly warranted. Specifically, research is needed on the sources of the difficulties
that auditors experience with electronic workpaper systems in practice, and on the extent to which those difficulties diminish with continued use.
To study this issue, we worked with personnel at an international auditing firm that had
recently introduced an electronic audit workpaper system in its U.S. practice. This firm was
interested in supporting research that could help assess specific areas in which further training
and support could be most usefully directed. 1 Working jointly with system developers, we
developed a taxonomy of the individual component tasks involved in preparing and reviewing
audit workpapers electronically using the firm’s system. Using this taxonomy, we surveyed firm
personnel with one to two years’ experience using the system (having a mean of 28 electronic
engagements overall), regarding their perceptions of the difficulty of the component tasks and
the frequency with which they accomplish those tasks. Our analyses relate the difficulty of each
type of task to performance frequency, to assess the extent to which specific steps are subject to a
learning curve effect. We compare which component tasks are relatively more difficult in
electronic environments, and assess the extent of specific behaviors that indicate auditors are
working around the new system. Contact personnel at 12 U.S. offices of the firm distributed the
survey, and we obtained usable responses from 119 audit personnel at all ranks.
The results provide information about the processes of constructing and reviewing audit
workpapers using electronic audit workpaper systems. Our taxonomy contains 38 component
tasks performed by workpaper preparers (staff and seniors), and 28 component tasks performed
by workpaper reviewers (managers and partners), classified into the categories of system
security, data input, organization of the file and verification of data, and review. Overall, our
results reveal specific tasks that auditors consider relatively difficult in the electronic system
(e.g., ensuring that workpapers are updated for adjusting journal entries and tracing amounts
from financial statements to lead sheets, among others), whose common characteristic involves
“navigating” around the electronic system. Further, reviewers find the electronic system more difficult to use than do preparers.
Regarding the overall learning curve associated with the new system, results show that it
takes about five engagements on average before personnel are comfortable using the electronic
system, although reviewers report a greater mean and range on this measure. Auditors at all
levels report a significant increase in using the full capabilities of the system over time.
However, there is evidence of variance in full use of the system, based on responses to questions
about behaviors suggestive of “working around” the system (e.g., creating review notes on paper
outside of the electronic system). Both preparers and reviewers report reduced incidence of
working around the system as they gained familiarity with it, but we still find some reports of
these behaviors even after system familiarity is achieved. In sum, our results show improvement
in system use with practice. However, we identify specific pockets of difficulty that persist even after auditors gain substantial familiarity with the system.
Our results are useful to audit practitioners as they develop new electronic audit
workpaper systems, as they revise existing systems, and as they consider how to focus training
on areas of greatest difficulty to their professionals. Further, leaders of audit firms may find this
evidence useful in assessing the risks (e.g., regulatory risk) associated with the challenges of electronic audit workpapers. Our results are
also useful for researchers in designing studies about specific features of the audit workpaper
preparation and review process, and are useful for educators in preparing students for audit
practice. The remainder of this paper is organized as follows. The next section provides a
discussion of the issues concerning the use of electronic audit workpaper systems and presents our research questions. We then describe our methods, report our results, and conclude.
ISSUES AND RESEARCH QUESTIONS
Entities of all types are incorporating electronic technologies with the objective of
improving effectiveness and efficiency of business processes (e.g., Bell et al. 2002). However,
there are indications that the goal of the “paperless” office is often not completely achieved, and
that employees often attempt to circumvent electronic systems by reverting to paper processing.
Research in other contexts shows that users bypass newly implemented work systems by
reverting to the former system for certain tasks (Chau 1996), by duplicating tasks in both old and
new systems (Sellen and Harper 2002), and/or by not using the new system correctly (Markus
1983; Hartwick and Barki 1994). In addition, even when individuals have a strong motivation to
appropriately use an electronic system, their success may be limited because of the complexity of
the task and associated disorientation within the electronic system (e.g., Nielsen 1990), or
because their task knowledge is not well-developed enough to most successfully leverage system capabilities.
In the case of a fully integrated workplace system, such as those developed by some of
the large audit firms, there are a number of potential consequences to reverting to paper for
performance of difficult steps in electronic file construction or review. The audit workpaper is a
legal document containing evidence supporting the audit opinion. The completed workpaper
compiles evidence, which, for audits of large companies, is accumulated over a period of time by
many individual professionals acting in a hierarchy. Each firm has a defined, complex set of
procedures that must be performed in a certain order, aggregated, and reviewed for
completeness. Bypassing the system by working off-line can affect efficiency, effectiveness, or
both. Inefficiency could result if tasks are duplicated, while ineffectiveness could result if key
workpapers were lost or the file was not constructed correctly so that it could not be easily
reviewed. Further, if preparers of the engagement file print out workpapers or lead sheets during
the engagement, they will not be using the system linkage and cross-referencing capabilities.
Thus, subsequent reviewers of the file will be unable to perform an efficient review. Creating
review notes on paper at any point in the team hierarchy will also result in subsequent reviewers
being unable to access them from remote locations. Thus, there are potentially important
consequences to working around the system by resisting electronic functionalities and reverting
to paper processing. In addition to consequences for the engagement itself, in the U.S. the
integrity of the workpaper is crucial as a foundation for later inspection by the Public Company Accounting Oversight Board (PCAOB).
Despite the key role of electronic workpaper systems, there is little research on the audit
effectiveness and efficiency implications of these systems. While few in number, these studies
provide preliminary signals of concern. For instance, Bible et al. (2005) find audit effectiveness
problems when auditors work in an electronic rather than a paper system. Extending those findings, Rosman et al. (2006) show that auditors’ difficulties in
electronic environments are associated with system complexity, and that the most successful
auditors in the electronic environment adapt to it by limiting the extent of their navigation around
the system and instead focus on understanding and remembering the information gained from the
system. Further, Brazel et al. (2004) show that, compared to those anticipating face-to-face
review, auditors anticipating electronic review are less concerned about audit effectiveness, more
likely to be influenced by prior workpapers, and feel less accountable for their work. Further,
Glover et al. (2000) find that many internal auditors report using internally developed software in
performing their professional roles, but their satisfaction with these tools varies widely.
While these studies motivate further research on effectiveness and efficiency in electronic
versus paper environments, the issue arises as to whether such effects would be limited to new
system applications, or whether they would persist following training and/or practice. On the
issue of training effects, Bedard et al. (2003) find that face-to-face training prior to
implementation improves auditors’ perceptions of system quality and intentions toward using a
new electronic workpaper system. However, they also find that auditors’ perceptions of their
own ability to perform audit tasks using the system does not necessarily improve with training.
Thus, the little evidence on improvements due to training in this context is mixed. Regarding the
effect of practice, we are unaware of any relevant research in the auditing context; thus, this question remains open. To investigate auditors’ use of an electronic audit
workpaper system, it is important to understand the basic nature of the tasks that auditors
complete. However, we are unaware of published academic or practitioner articles describing the
exact nature of the tasks that auditors with different work roles accomplish using electronic audit
workpaper systems. Gaining this understanding is important from a practical standpoint for
guiding implementation and training, but it is also important because performing good research requires a thorough understanding of the task (e.g., Abdolmohammadi
and Usoff 2001; Trotman 2005). Our first set of research questions seeks to provide information
in this regard, defining the component tasks and assessing their relative difficulty:
(1) What component audit tasks are involved in an electronic audit workpaper system for
auditors in different workpaper roles?
(2) What is the relative difficulty of those component tasks?
(3) Which component tasks are more difficult in an electronic system compared to a
traditional paper audit workpaper system?
The second issue in using electronic audit workpaper systems concerns the transition to
those systems. As previously noted, research in other business contexts finds that the full benefit
of paperless office systems is often not achieved because employees often work around systems.
In the current context, there is evidence of effectiveness problems associated with electronic
systems (Bible et al. 2005). However, it may be that repeated performance of workpaper tasks
within the context of an auditor’s normal practice would resolve this issue over time. To our
knowledge, the extent to which difficulties in performing electronic audit workpaper tasks are
reduced with practice has not been investigated. We approach this issue by assessing the
frequency of performance of component tasks using the electronic workpaper system, and then
considering the correlation between difficulty and frequency of task performance. In addition to
this analysis, which is aimed at providing insight into the individual aspects of the workflow, we
also assess the learning curve associated with performing workpaper tasks on the system by
evaluating auditors’ comfort using the system. Stated formally, our second set of research
questions is:
(4) What is the relative frequency with which auditors perform the component tasks?
(5) What are the factors affecting auditors’ learning to use the new system?
The third broad issue involved in using electronic audit workpaper systems concerns
potential auditor resistance. Prior research on technology adoption consistently demonstrates that
people of all types and across a multitude of technology-based systems are prone to resisting
these systems, despite the best efforts of system designers and administrators (e.g., Sellen and
Harper 2002). Therefore, understanding how auditors might try to work around systems, and the
frequency with which they do so, is important. Further, we also address the learning curve in this
context by assessing whether these resistance behaviors decline with system experience. Stated formally, our third set of research questions is:
(6) What is the frequency of possible resistance behaviors that might be expected as auditors
adopt an electronic audit workpaper system?
(7) Does the frequency of those resistance behaviors decline with experience using the
system?
METHODS
The electronic audit workpaper system of the participating audit firm encompasses all
phases of the audit process. Auditors begin the process of engagement file construction by
gaining access to the system, which is password protected and has file-sharing features that
enable remote users to simultaneously access and change the file. Once access is gained to the
system, workpaper preparers work with a master file containing generalized procedures that
enables the auditors to conduct an effective audit that appropriately controls risks. Workpaper
preparers can tailor the file to address specific client risks, including setting the strategy to be
used on the engagement and altering the nature, timing, and/or extent of planned audit
procedures. Workpaper reviewers can electronically access the file, making changes and
creating review notes as the engagement team performs the engagement. The file contains standard workpaper templates for auditing routine
account balances, and auditors use these to update information from prior years. Auditors insert
electronic memos into the file in order to document discussions with the client or create short
notes about the results of a test or procedure. Copies of related files can be embedded within the
master file or an electronic link can be made between files. The system has various
functionalities that assist auditors, including electronic tickmarks, and the generation of a
workpaper reference list that documents all tasks accomplished and reviewed, and all tasks for
which work still needs to be accomplished. The system uses a cascading windows-type feature
that enables auditors to view and copy portions of various files on the computer screen at the
same time, and there is an electronic scratch pad for making quick mathematical calculations.
The audit firm’s decision aids are linked into the system, including the audit sampling tool.
The system automatically records which system user accomplished each audit task, and
the time that each task was accomplished. Auditors save electronic copies of the file at least
twice daily, and the system saves changes to the file and stores them at a remote site daily. The
completed engagement file is electronically archived at a secure, remote location. Finally, the
system contains a roll-forward feature that makes it possible to create a new engagement file for the following year based on the completed file.
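To make the record-keeping and roll-forward mechanics concrete, the sketch below models them in Python. It is a minimal illustration under our own naming assumptions (TaskRecord, EngagementFile, sign_off, and roll_forward are hypothetical, not the firm’s system): each sign-off is stamped with the user and time, open items can be listed as in the workpaper reference list, and a completed file can seed the next year’s file.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TaskRecord:
    """One audit task: who prepared and reviewed it, and when."""
    description: str
    prepared_by: str | None = None
    prepared_at: datetime | None = None
    reviewed_by: str | None = None
    reviewed_at: datetime | None = None

@dataclass
class EngagementFile:
    """Engagement file with an automatic audit trail and roll-forward."""
    client: str
    year: int
    tasks: list[TaskRecord] = field(default_factory=list)

    def sign_off(self, task: TaskRecord, user: str, as_reviewer: bool = False) -> None:
        # The system stamps each sign-off with the user and time automatically.
        now = datetime.now()
        if as_reviewer:
            task.reviewed_by, task.reviewed_at = user, now
        else:
            task.prepared_by, task.prepared_at = user, now

    def open_items(self) -> list[TaskRecord]:
        # Analogous to the workpaper reference list: work not yet reviewed.
        return [t for t in self.tasks if t.reviewed_by is None]

    def roll_forward(self, new_year: int) -> "EngagementFile":
        # Create next year's engagement file from this one, clearing sign-offs.
        return EngagementFile(
            client=self.client,
            year=new_year,
            tasks=[TaskRecord(description=t.description) for t in self.tasks],
        )
```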
With the assistance of system developers and other personnel at the participating
international audit firm, we developed a survey instrument to assess auditors’ perceptions of the
relative difficulty of the component tasks of preparing and reviewing electronic workpapers, and
the relative frequency of use. The instrument also captures self-reports of behaviors inconsistent
with the goal of electronic processing and storage of audit information (i.e., resisting the system
by “working around” it). The instrument was distributed by contact personnel at 12 offices of the
firm. Valid responses were obtained from 119 professionals in those offices, a response rate of
about 70 percent. Of the respondents, 24 are audit staff, 45 are seniors, 27 are managers, and 23
are partners. 2 Respondents had experience using the electronic system for one or two years. The
mean number of engagements using the system is 23 for preparers, and 36 for reviewers.
Variable Measurement and Testing
We designed the survey instrument to assess the relative frequency and difficulty of using
all component tasks of an audit using an electronic workpaper system. To address RQ1
(identifying task components), we worked with system developers to provide precise steps
involved in constructing and reviewing workpapers on this system. These tasks are shown in
Table 1. While the existing literature provides examples of various audit tasks (e.g.,
Abdolmohammadi 1999, Rich et al. 1997), these general taxonomies do not specifically relate to
the component steps used within this particular system. Thus, we relied on developers of this
system, who best understood the steps involved in task completion. For purposes of analysis, we
categorized the component tasks according to major phases of the preparation or review process.
To address RQ2 and RQ4, participants assessed relative difficulty and frequency,
respectively, for each component task relevant to their workpaper role. For some component
tasks, there is an equivalent audit task in a paper-based system (e.g., creating review notes), and
for those tasks participants also made frequency and difficulty assessments relative to the paper
environment. 3 We compare the task difficulty ratings between electronic and paper environments
to address RQ3.
To address learning effects (RQ5), we: (1) examine the correlation between task
difficulty and task frequency, (2) measure the number of engagements auditors complete before
feeling comfortable using the electronic system, and (3) evaluate self-reports on the extent to
which auditors use the full capabilities of the electronic system (both on the first few
engagements and after gaining familiarity with the system). To evaluate resistance to the new
system, we worked with system developers to identify behaviors consistent with “working
around” the system (RQ6). We asked auditors to indicate the extent that they engaged in those
behaviors on the first few engagements using the new system and after gaining familiarity with
the system, in order to measure whether such behaviors decline with system experience (RQ7).
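As one illustration of the difficulty-frequency analysis, the sketch below correlates per-task mean difficulty with per-task mean frequency. The input values are the preparers’ input-task means from Table 1, Panel A; the choice of Pearson and Spearman statistics is our assumption, since the paper does not state which correlation it computes.

```python
import numpy as np
from scipy import stats

# Per-task mean ratings for preparers' input tasks (Table 1, Panel A):
# difficulty on a 1 (very easy) to 5 (very difficult) scale,
# frequency on a 1 (very rarely) to 5 (very often) scale.
difficulty = np.array([3.0, 2.6, 2.2, 2.0, 2.0, 1.7, 1.6, 1.4, 1.4, 1.3, 1.1])
frequency = np.array([2.5, 2.4, 1.6, 2.8, 4.7, 4.5, 4.7, 4.6, 3.9, 4.9, 4.9])

r, p = stats.pearsonr(difficulty, frequency)
rho, p_rho = stats.spearmanr(difficulty, frequency)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
```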
RESULTS
Table 1 reports results relating to our first four research questions. 4 Regarding RQ1, the
major tasks that auditors complete in electronic audit workpaper systems include security, data
input, organization/verification, and review (see Table 1 for specific tasks within each of these
categories). RQ2 concerns the difficulty of various component workpaper tasks. Panel A shows
variation in difficulty ratings among these major task categories, revealing that the relatively
more difficult tasks for preparers in the electronic system involve the data input and
organization/verification categories. 5 For instance, almost 40 percent of preparers indicate difficulty in annotating and completing scanned workpapers electronically (i.e., a difficulty rating of 4 or 5, “Difficult” or “Very
Difficult”). Other component tasks with high percentages of difficulty ratings within the data
input category include creating scanned documents (28.1 percent), importing client information
from external databases (18.8 percent), and refreshing workpapers after AJE’s are booked (16.7
percent). The most difficult component tasks in organization/verification include ensuring that
workpapers are updated for adjusting journal entries (22.7 percent) and agreeing lead sheets to
workpapers (14.7 percent). All of these tasks are crucial to constructing and maintaining accurate audit workpapers.
For reviewers, Panel B of Table 1 shows that organization/verification and review tasks
are rated as more difficult overall than security tasks. The most difficult organization/verification
tasks for reviewers include ensuring that workpapers are updated for AJE’s (46.9 percent) and
finding workpapers/memos (32.7 percent). Some of the review tasks causing the most difficulty
include determining that workpapers or memos have been prepared for all significant balances
appearing on the lead sheets (48 percent), tracing amounts from the financial statements to the
lead sheets and workpapers (44 percent), determining which workpapers are key (36.7 percent),
and ensuring that the workpapers agree to lead sheets (32.7 percent).
The evidence in Table 1 yields several insights. First, while many tasks are relatively
easy, the proportion of auditors indicating difficulty with some component tasks is fairly high,
even after fairly extensive electronic experience. This is despite a very well-designed system,
thorough training, and continued assistance to engagement personnel by the development team.
Second, there is considerable variance in difficulty ratings of component tasks within the input
and organization/verification categories for preparers, and within the organization/verification and review categories for reviewers. This suggests that it is not the
activity that is being performed, but some aspect of performing it electronically, that is causing
the problem. Third, reviewers’ responses indicate greater difficulty on most dimensions than
preparers’ responses. Thus, although reviewers perform their tasks on more engagements than
preparers, they continue to have difficulty with some aspects of the task, perhaps leading to inefficiency. 7
Table 2 reports results of RQ3, which concerns identifying tasks that are more difficult in
an electronic system compared to a traditional paper audit workpaper system. For preparers,
some of the tasks whose difficulty increases the most in the shift to an electronic workpaper
environment include: agreeing lead sheets to workpapers, agreeing workpapers to lead sheets and
supporting documents, and tracing amounts from the financial statements to the lead sheets and
workpapers. For reviewers, some tasks whose difficulty increases the most in the shift to an
electronic workpaper environment include tracing an amount from the financial statements to the
lead sheets and workpapers, determining that workpapers/memos have been prepared for all
significant balances appearing on the lead sheets, and ensuring that workpapers agree to lead
sheets. Taken together, these results imply that the electronic environment seems to present
important difficulties to both preparers and reviewers in navigating around the electronic file. It
is also interesting to note that reviewers’ mean difference in difficulty between paper and
electronic environments is much higher than that of preparers, providing further evidence that reviewers find the electronic environment particularly challenging.
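The matched-pairs t statistics in Table 2 compare each auditor’s difficulty rating for the same task under paper and electronic processing. A minimal sketch of that computation, with made-up ratings rather than the study’s data:

```python
import numpy as np
from scipy import stats

# Hypothetical ratings by ten auditors for one task in each environment,
# on the 1 (very easy) to 5 (very difficult) scale.
paper = np.array([2, 1, 2, 3, 2, 1, 2, 2, 3, 2])
electronic = np.array([3, 3, 3, 4, 3, 2, 4, 3, 4, 3])

diff = paper - electronic  # negative values mean the task is harder electronically
t, p = stats.ttest_rel(paper, electronic)
print(f"mean difference = {diff.mean():.3f}, s.d. = {diff.std(ddof=1):.3f}")
print(f"matched-pairs t = {t:.3f}, p = {p:.4f}")
```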
Our second set of findings addresses transition and learning issues. RQ4 concerns the
relative frequency with which component tasks are performed. Table 1 Panel A shows that more
frequent tasks for preparers include signing onto the electronic workpaper system, inserting
Excel and Word workpapers, and creating memos. Panel B shows that more frequent tasks for
reviewers include signing off workpapers and memos as reviewed, reviewing Excel and Word
Table 3 reports results addressing RQ5, which concerns the factors that affect auditors’
learning to use the new electronic audit workpaper system. The first way we investigate this
issue is by considering the correlation between the task performance frequency and perceived
task difficulty. We find that for workpaper preparers, the overall correlation between difficulty
and frequency is significantly negative, as is the correlation between difficulty and frequency on
the ten most difficult tasks, which implies that performing a task more frequently reduces task
difficulty for these auditors. In contrast, the correlations for reviewers are not significant,
implying that “learning by doing” is not effective in reducing task difficulty for managers and
partners. This implies that managers and partners require more intensive training and assistance
in order to become comfortable using the system. When we analyze the association of difficulty
and frequency by task category, we find that for preparers, organization/verification is the only
task category for which there is not a significant negative correlation; the other categories show
the significant negative correlations suggestive of a learning curve effect. For reviewers, the only
task category in which difficulty declines with frequent performance is file security. These findings reinforce organization/verification tasks as a key area of persistent difficulty. We also measure the number of
engagements auditors completed before feeling comfortable using the electronic system. Based
on the results reported above, we expected that reviewers might require more engagements to
move up the learning curve. The results in Table 3 Panel B are consistent with this expectation.
Preparers require an average of 4.63 engagements (with a range of 1-12) to become comfortable
with the new system, while reviewers require 6.30 engagements (with a range of 1-40). Finally,
we investigate the extent to which auditors report using the full capabilities of the electronic
system on their first few engagements and after they became familiar with the system (Table 3
Panel C). Means for both preparers and reviewers indicate that auditors generally do not use the
full capabilities of the electronic system, even after they are familiar with it (e.g., means of 3.70
and 3.38 for preparers and reviewers, respectively, where a three indicates that they sometimes
use the system’s full capabilities and a four indicates that they frequently use its full
capabilities). However, both workpaper preparers and reviewers report significant improvement
in using the system’s full capabilities once they get used to the system, indicating that practice enhances full use of the system. 8
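The between-ranks comparisons above (e.g., preparers’ mean of 4.63 engagements versus reviewers’ 6.30 before feeling comfortable) rest on t-tests between the two groups. A sketch assuming an independent-samples t-test on hypothetical engagement counts:

```python
import numpy as np
from scipy import stats

# Hypothetical numbers of engagements completed before feeling comfortable;
# the study reports means of 4.63 (preparers) and 6.30 (reviewers).
preparers = np.array([3, 5, 4, 6, 2, 5, 7, 4, 5, 6])
reviewers = np.array([5, 8, 6, 4, 9, 7, 5, 6, 8, 5])

t, p = stats.ttest_ind(preparers, reviewers)
print(f"t = {t:.3f}, p = {p:.4f}")
```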
Resistance Issues
Our third set of results concerns resistance issues. RQ6 concerns the frequency of
behaviors indicative of working around the system, which can have potentially negative consequences for efficiency and/or effectiveness. Results presented in Table
4 show that on average, preparers report moderate levels of working around the system, with
means representing incidence of these behaviors between “infrequent” and “sometimes”. The
most common behavior in working around the system on the first few engagements is storing or
maintaining workpapers on paper instead of in electronic form. The most common behavior after
getting used to the system is printing out lead sheets and cross-referencing them to the financial
statements and audit workpapers rather than doing so electronically. However, in each case,
practice with the system significantly reduced the incidence of working around the system.
Regarding reviewers’ work-around behaviors, the results show that reviewers also report
moderate incidence of working around the system on the first few engagements. With the
exception of terminating the engagement, the means represent an incidence of these behaviors
between “infrequent” and “sometimes”. On the first few engagements, the mean response to the
question regarding printing out workpapers so that they can be reviewed is 2.98, and the mean
for creating review notes on paper is 2.52. As with preparers, practice significantly reduced the incidence of these work-around behaviors.
CONCLUSIONS
This paper describes component tasks involved in preparing and reviewing audit
workpapers using a fully integrated electronic audit workpaper system, investigating the relative
difficulty and frequency of performing these components, examining the learning curve for
electronic workpaper systems, and reporting ways in which auditors try to work around the
system while learning. Our findings provide considerable information about the processes of constructing and reviewing audit workpapers electronically.
Our first set of results relates to task difficulty. We identify 38 tasks performed by
workpaper preparers (seniors and staff), and 28 tasks performed by workpaper reviewers
(managers and partners) within electronic workpaper systems. We classify these into the
categories of system security, data input (setting up the engagement file), organization of the file
and verification of data, and review. Our analysis shows that reviewers consider working with
the electronic system to be more difficult than do preparers. The results also reveal that the tasks
that seem most difficult for both preparers and reviewers involve navigating around the
electronic system. For example, almost half of reviewers report difficulty completing tasks such
as tracing amounts from the financial statements to lead sheets/workpapers, determining that
workpapers/memos have been prepared for all significant account balances, and ensuring that
workpapers are updated for adjusting journal entries. Further, when auditors compared the
difficulty of tasks in electronic and paper environments, the results also reveal that “navigation”
tasks are particularly difficult. Our finding of persistent navigation difficulties, even after one to
two years of using a system carefully designed to assist file construction and review, reinforces
the results of Bible et al. (2005) on navigation problems in electronic audit workpaper systems.
Therefore, system design, implementation, and training need to be especially targeted toward navigation within the electronic system.
Our second set of results relates to transition and learning effects. When we correlate task
difficulty and frequency ratings, we find that completing tasks more frequently within the
electronic system is helpful in reducing task difficulty for workpaper preparers, but not for
reviewers. Since preparers spend more time actually using the workpapers in their job role (Rich
et al. 1997), it may be that while frequency improves difficulty perceptions for these auditors,
this relationship takes longer to develop for reviewers because they simply spend less time on
task. Regarding the overall learning curve associated with the new system, results show that it
takes preparers between four and five engagements before feeling comfortable using the new
system, whereas it takes reviewers between six and seven engagements. Auditors also report a
significant increase in using the full capabilities of the system once they become familiar with it.
Once familiarity is gained, preparers indicate greater use of the system’s full capabilities than
reviewers. These findings suggest that training using highly realistic cases is important, and that
oversight or peer review may be appropriate to ensure quality on engagements when teams are early in the transition to the new system.
Our third set of findings concerns resistance issues. While the use of self-reports to
capture these behaviors might under-represent their incidence, we find that personnel at all levels
report some behaviors indicating “working around” the system (e.g., creating review notes on
paper outside of the electronic system or storing documentation in a paper binder instead of on
the electronic system). Both preparers and reviewers report reduced incidence of working around
the system as they gained familiarity with it, but we still find reports of such behavior by some
auditors even after they have gained significant familiarity with the system. This variation in
how audit evidence is documented is important because it may result in difficulties in later
retrieving evidence for internal quality control and for PCAOB inspection teams. The possibility
of subsequent documentation problems has greater import under Auditing Standard No. 3 than
under previous auditing standards. Audit firms currently using electronic audit workpaper
systems, and those transitioning to such systems, should be aware of these findings and should
make efforts to ensure that auditor resistance does not result in failure to comply with
professional standards.
Our findings also relate to prior research on information processing in electronic work systems. Bible et al. (2005) summarize this literature by noting that prior research has not
demonstrated that electronic environments facilitate information processing (e.g., Dillon 1996).
To the contrary, findings of studies within and outside of auditing are consistent in showing
performance problems associated with the cognitive load involved in navigating around
electronic environments, causing disorientation (e.g., McDonald and Stevenson 1996). However,
Rosman et al. (2006) find that specific decision processes can overcome this difficulty. Further,
studies outside of auditing such as Mills et al. (2002) show that greater domain knowledge is
associated with better ability to navigate through a hypertext environment. Complementing these
studies, we show that difficulties with performing some audit tasks on a new system decline with
practice, but do not disappear completely. From an audit effectiveness perspective, future
research could investigate individual auditor characteristics that influence task difficulty, and
how task difficulty perceptions subsequently affect individual auditor decision-making (e.g.,
during the workpaper review process). In addition, research could investigate the extent to which
avoiding electronic workpaper systems by “working around” them affects auditor decision-
making and the required documentation of audit evidence, and whether these behaviors persist in
mature systems. From an efficiency perspective, studies could investigate the cost-benefit
tradeoffs associated with the shift to electronic audit workpaper systems, and how the learning
curve on the new systems affects audit efficiency. Studies comparing training methods might
also be directed toward auditing students, as effective preparation before entering the workplace
will ease the transition for students and reduce cost to firms once they are employed.
In addition to the above research implications, our findings also contribute to audit
practice and education. For audit practice, the results provide insight on implementation of
electronic audit workpaper systems, including information about the tasks completed within the
system and auditors’ perceptions of the difficulty of those tasks and auditors’ reports on how
frequently they use those features of the system. This information should be useful to other audit
firms as they design and update their own electronic audit workpaper systems. Further, the
results provide evidence on the transition and learning issues associated with the adoption of an
electronic audit workpaper system, and provide evidence on the existence and nature of auditor
behaviors associated with resisting the new system. Understanding these features should assist
system developers and audit firm personnel as they consider potential implementation costs and
training needs associated with new electronic audit workpaper systems. We are unaware of any
other study that provides evidence on these issues that is derived from the real audit practice
environment. In addition, educators will find these results useful to share with students in their auditing courses.
REFERENCES
Abdolmohammadi, M., and C. Usoff. 2001. A longitudinal study of applicable decision aids for
detailed tasks in a financial audit. International Journal of Intelligent Systems in
Accounting, Finance, and Management 20 (3): 139-154.
Bedard, J., M. Ettredge, C. Jackson, and K. Johnstone. 2003. The effect of training on auditors’
acceptance of an electronic work system. International Journal of Accounting
Information Systems 4 (4): 227-250.
Bell, T., J. Bedard, K. Johnstone and E. Smith. 2002. KRiskSM: A computerized decision aid for
client acceptance and continuance risk assessments. Auditing: A Journal of Practice &
Theory (September): 97-113.
Bible, L., L. Graham, and A. Rosman. 2005. Comparing auditors' performance in paper and
electronic work environments. Journal of Accounting, Auditing and Finance (March): 27-42.
Brazel, J., A. Agoglia, and R. Hatfield. 2004. Electronic vs. face-to-face review: The effects of
alternative forms of review on auditors’ performance. Working paper.
Dillon, A. 1996. Myths, misconceptions, and an alternative perspective on information usage and
the electronic medium. In Hypertext and Cognition. Edited by J. Rouet, J. J. Levonen, A.
Dillon, and R. J. Spiro. Mahwah, N.J.: Lawrence Erlbaum Associates.
Glover, S., D. Prawitt, and M. Romney. 2000. The software scene. The Internal Auditor 57 (4):
49-57.
Hartwick, J., and H. Barki. 1994. Explaining the role of user participation in information system
use. Management Science 40 (4): 440-465.
Markus, M. L. 1983. Power, politics, and MIS implementation. Communications of the ACM 26
(6): 430-444.
McCollum, T., and D. Salierno. 2003. Choosing the right tools. The Internal Auditor (August):
32-43.
McDonald, S., and R. J. Stevenson. 1996. Disorientation in hypertext: The effects of three text
structures on navigation performance. Applied Ergonomics 27:61–68.
Mills, R.J., D. Paper, K.A. Lawless, and J.M. Kulikowich. 2002. Hypertext navigation – An
intrinsic component of the corporate intranet. Journal of Computer Information Systems
(Spring): 44-50.
Nielsen, J. 1990. The art of navigating through hypertext. Communications of the ACM 33: 296-
309.
Public Company Accounting Oversight Board (PCAOB). 2004. Auditing Standard No. 3: Audit
Documentation.
Rich, J.S., I. Solomon, and K.T. Trotman. 1997. The audit review process: A characterization
from the persuasion perspective. Accounting, Organizations and Society 22 (5): 481-506.
Rosman, A., L. Bible, S. Biggs, and L. Graham. 2006. Successful audit workpaper review strategies in
electronic environments. Journal of Accounting, Auditing and Finance: Forthcoming.
Rothman, S. 1997. Expert: Paperless audit saves money. The Credit Union Accountant,
(November 24): 1.
Sellen, A. J., and R. H. R. Harper. 2002. The Myth of the Paperless Office. Cambridge, MA: The
MIT Press.
Thuring, M., J. Hannemann, and J. M. Haake. 1995. Hypermedia and cognition: Designing for
comprehension. Communications of the ACM 38: 57–66.
Trotman, K. 2005. Discussion of M. Nelson and H-T. Tan, “Behavioral review judgment and
decision-making research in auditing: A task, person and interaction perspective”.
Presented at the 25th Anniversary Conference of Auditing: A Journal of Practice &
Theory.
Yang, D.C. 1993. The use of microcomputer software in the audit environment: The implications
for accounting education. Journal of Applied Business Research (Summer): 29-31.
TABLE 1
Relative Difficulty and Frequency of Using Audit Tasks in an
Electronic Workpaper System
Panel A. Preparers
Task | Difficulty Mean (s.d.) | Percent rating as difficult | Frequency Mean (s.d.)
Security Tasks
Manage the security of the file 1.9 (1.0) 4.1 3.3 (1.5)
Check in files 1.8 (1.0) 9.1 4.3 (1.1)
Backup files to external media (to a CD) 1.8 (1.0) 7.6 3.6 (1.4)
Distribute checked out files 1.8 (0.9) 6.1 4.2 (1.2)
Create checked out files 1.7 (0.9) 4.5 4.3 (1.1)
Ensure the appropriate use of signatures by team member who did the work 1.7 (0.8) 4.6 4.2 (1.0)
Backup files to the network 1.5 (0.7) 1.5 3.7 (1.3)
Sign into Electronic Audit System file 1.0 (0.2) 0 4.9 (0.2)
Average of Security Tasks: 1.6 (0.5) 4.1 (0.7)
Input Tasks
Annotate and complete scanned workpapers electronically 3.0 (1.5) 39.7 2.5 (1.5)
Create scanned documents 2.6 (1.5) 28.1 2.4 (1.5)
Import client and engagement information from separate databases 2.2 (1.2) 18.8 1.6 (1.2)
Insert scanned documents 2.0 (1.3) 13.6 2.8 (1.5)
Refresh workpapers inserted in Electronic Audit System file after AJE’s booked 2.0 (1.3) 16.7 4.7 (0.6)
Use tickmarks 1.7 (1.0) 7.3 4.5 (0.9)
Annotate and complete Excel and Word workpapers electronically 1.6 (0.9) 2.9 4.7 (0.6)
Insert financial statement workpapers 1.4 (0.7) 3.0 4.6 (0.7)
Create Excel and Word workpapers 1.4 (0.8) 1.5 3.9 (1.6)
Insert Excel and Word workpapers 1.3 (0.6) 1.5 4.9 (0.3)
Create memos 1.1 (0.4) 1.5 4.9 (0.4)
Average of Input Tasks: 1.9 (0.6) 3.7 (0.5)
Notes to Table: Task difficulty is measured on a scale of 1 (very easy) to 5 (very difficult). Task
frequency is measured on a scale of 1 (very rarely) to 5 (very often). In addition to means and standard
deviations, the table reports the percent of auditors indicating the task is relatively difficult (i.e., a
difficulty rating of 4 or 5).
TABLE 1 (continued)
Relative Difficulty and Frequency of Using Audit Tasks in an
Electronic Workpaper System
Panel B. Reviewers
Task | Difficulty Mean (s.d.) | Percent rating as difficult | Frequency Mean (s.d.)
Security Tasks
Manage the security of the file 2.3 (1.0) 10.2 2.9 (1.3)
Ensure the appropriate use of signatures by the engagement team member who did the work 1.9 (1.0) 6.3 4.0 (1.2)
Sign into Electronic Audit System file 1.2 (0.5) 2.0 4.7 (0.6)
Average of Security Tasks: 1.8 (0.6) 3.9 (0.7)
Organization/Verification Tasks
Ensure that workpapers are updated for AJE’s 3.3 (1.2) 46.9 3.8 (1.4)
Find workpapers/memos 3.0 (1.1) 32.7 4.5 (0.7)
Sign off workpapers/memos as reviewed 1.6 (1.0) 10.0 4.8 (0.5)
Determine which workpapers/memos have not been reviewed 1.6 (0.8) 4.1 4.6 (0.6)
Determine which review notes have not been closed 1.5 (0.8) 2.0 4.3 (1.1)
Average of Organization/Verification Tasks: 2.2 (0.7) 4.4 (0.6)
Review Tasks
Trace an amount from a workpaper to a supporting document 2.5 (1.0) 14.0 3.9 (1.3)
Indicate that you as reviewer have agreed an amount to a lead sheet or supporting document 2.4 (1.3) 18.4 3.3 (1.6)
Insert the reviewer’s tickmarks into a workpaper 2.2 (1.3) 18.4 3.1 (1.7)
At the planning stage, determine what tailoring changes were made to the audit procedures 2.1 (1.0) 12.0 4.1 (1.1)
Send a file that you have worked on to another engagement team member 1.9 (1.1) 10.2 4.3 (1.2)
Review Word workpapers on screen 1.7 (0.8) 2.0 4.8 (0.5)
Ensure that all review notes have been properly cleared 1.7 (0.9) 4.1 4.4 (1.0)
Create review notes 1.6 (0.9) 8.2 4.4 (1.3)
Indicate that an engagement team member’s response to a review note is not adequate 1.4 (0.8) 4.2 3.3 (1.4)
Delete review notes from the engagement file at the end of the engagement 1.3 (0.6) 0 4.1 (1.3)
Review memos on screen 1.3 (0.6) 0 4.9 (0.4)
Close review notes 1.3 (0.7) 2.1 4.4 (1.2)
Average of Review Tasks: 2.2 (0.6) 3.9 (0.7)
TABLE 2
Description of Tasks That Are More Difficult in an Electronic Environment
Task Description | Mean Difference in Difficulty Between Paper and Electronic Environments | Std. Dev. | Matched-pairs t
Ensure that workpapers are updated for AJE’s -1.131 1.348 -5.436***
TABLE 3
Factors Affecting Auditors’ Learning on New Electronic Audit Workpaper Systems
Panel A. Correlations Between Task Difficulty and Task Frequency
PREPARERS REVIEWERS
Overall -0.376*** -0.065
Top Ten Most Difficult Tasks -0.286*** 0.028
Panel B. Number of Engagements Completed Before Feeling Comfortable Using the System
Range | Mean (s.d.) | t-test between ranks
Panel C. The Learning Curve: Self-Reports on Using the Full Capabilities of the Electronic
System
Mean (first few engagements) | Mean (after gaining familiarity) | Mean Difference | t-test
Notes to the table: The following symbols indicate significant effects: * = < 0.10; ** = < 0.05; *** = <
0.01. Panel C data represent auditors’ responses to the question: “How often do you believe you were
using the full capabilities of the system (on the first few engagements, and after you got used to using the
system)?” The response scale is: 1 = Never, 2 = Infrequently, 3 = Sometimes, 4 = Frequently, 5 = Always.
TABLE 4
Frequency of Work-Around Behaviors and Experience-Related Differences
Notes to Table: The response scale for questions in this table is: 1 = Never, 2 = Infrequently, 3 =
Sometimes, 4 = Frequently, 5 = Always.
ENDNOTES
1
The participating audit firm wishes to remain anonymous.
2
Personnel of the firm providing data informed us that audit staff and seniors are primarily
responsible for preparing the audit workpapers within the new system, so we refer to these
individuals collectively as “preparers”. Audit managers and partners are responsible for
reviewing completed audit workpapers within the new system, so we refer to these individuals
collectively as “reviewers”.
3
In the current study, this comparison is facilitated because the firm did not implement any
change in the underlying audit process and the objectives of that process during the period in
which respondents used the system.
4
Because offices had used the electronic system for either one or two years, we examined whether difficulty or frequency ratings differed between these groups
of offices. Results of independent samples t-tests on difficulty and frequency ratings for
preparers and reviewers show that the ratings do not differ between groups.
6
We tested whether the ordering of task difficulty ratings is sensitive to scaling each
participant’s difficulty score by their individual average difficulty assessment across all tasks in
the electronic system. The results of this test are consistent with our reported results.
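A sketch of the scaling check described in this note, with simulated ratings standing in for the survey data (the 69 x 38 shape mirrors the preparer sample and task count, but the values are random):

```python
import numpy as np

# ratings[i, j]: participant i's difficulty rating for task j (simulated).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(69, 38)).astype(float)

# Scale each participant's ratings by that participant's mean across all tasks,
# then check whether the ordering of mean task difficulty is unchanged.
scaled = ratings / ratings.mean(axis=1, keepdims=True)
raw_order = np.argsort(ratings.mean(axis=0))[::-1]
scaled_order = np.argsort(scaled.mean(axis=0))[::-1]
print("Task ordering unchanged:", np.array_equal(raw_order, scaled_order))
```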
7
While we document the greater difficulty experienced by reviewers, we are not able to pinpoint
the precise source of the difference, as preparers and reviewers differ on several dimensions.
8
We acknowledge that this result could be due to a demand effect in the survey instrument. For
example, auditors may have felt that the “right answer” was to report greater system use with
practice.
9
Our conclusions are limited to reflect information about the electronic workpaper tasks within
one audit firm’s new system. While we believe our taxonomy of component tasks to be
generalizable, findings on relative difficulty and frequency of component tasks may vary in
different systems. Therefore, additional evidence from other audit firms and across various types
of electronic audit workpaper systems would be useful in further describing contemporary audit
practice.