BP Reader 1
BP Introduction
Processes are generally identified in terms of beginning and end points, interfaces, and
organization units involved, particularly the customer unit. High-impact processes should
have process owners. Examples of processes include: developing a new product; ordering
goods from a supplier; creating a marketing plan; processing and paying an insurance claim;
etc.
Processes may be defined based on three dimensions (Davenport & Short 1990):
• Entities: Processes take place between organizational entities. They could be interorganizational, interfunctional, or interpersonal.
• Objects: Processes result in the manipulation of objects. These objects could be physical or informational.
• Activities: Processes could involve two types of activities: managerial (e.g. develop a budget) and operational (e.g. fill a customer order).
Business processes occur more naturally than hierarchies, yet they are much more difficult to
describe than organizational structures. Organizations have departments, managers and
personnel that can easily be described in some type of hierarchical organization chart.
However, a company's business processes are usually invisible, are neither named nor
described, and they always cross departmental boundaries. Business processes occur
more naturally than hierarchies in a company because the processes happen when people
realize they need to work together to achieve the result promised to the customer.
Processes cut straight through traditional organizations.
The Work System Framework (Alter, 2003)
The work system framework was developed to help business professionals recognize and
understand IT-reliant systems such as business processes in organizations. This framework
emphasizes business rather than IT concerns, and provides an outline for describing the
business process being studied, identifying problems and opportunities, describing possible
changes, and tracing how those changes might affect other parts of the work system.
The first four elements are the basic components of a work system. These include
processes and activities, participants, information, and technologies. The remaining five
elements also make an important contribution to the understanding of a work system:
• The products and services that the work system produces
• The customers for those products and services
• The environment that surrounds the work system
• The common infrastructure shared with other work systems
• The strategies used by the work system and the organization.
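To make these nine elements concrete, here is a minimal Python sketch of the framework as a data structure (our illustration, not Alter's notation; all field names and the example are invented):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkSystem:
    """The nine elements of Alter's work system framework.

    The first four are the basic components; the remaining five
    situate the work system in its broader context.
    """
    processes_and_activities: List[str]
    participants: List[str]
    information: List[str]
    technologies: List[str]
    products_and_services: List[str] = field(default_factory=list)
    customers: List[str] = field(default_factory=list)
    environment: List[str] = field(default_factory=list)
    infrastructure: List[str] = field(default_factory=list)
    strategies: List[str] = field(default_factory=list)

# Example: a pizza-delivery work system in outline.
delivery = WorkSystem(
    processes_and_activities=["take order", "bake", "deliver"],
    participants=["order taker", "cook", "driver"],
    information=["order details", "delivery address"],
    technologies=["POS system", "routing app"],
    customers=["household customers"],
)
```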
to accomplish tasks within a process. There are common business processes within
industries at the higher level. For example, most firms have a need to convert orders to
cash. Therefore, the top of the process knowledge pyramid (Figure 1) would contain general
knowledge on “what” the order-to-cash process should accomplish. While definitions and
practices are common at this higher “what” level, differences exist in “how” each organization
accomplishes and manages tasks within a process. For example, invoicing within the order-
to-cash process may be done differently in a retail organization as compared to a
pharmaceutical company. Broad BPM knowledge consists of those elements that are
consistent across firms (i.e. the “what” of BPM).
Armed with this definition of a business process, we define BPM as a body of methods,
techniques and tools to discover, analyze, redesign, execute and monitor business
processes. We can say that BPM inherits from the continuous improvement philosophy of
TQM, embraces the principles and techniques of operations management, Lean and Six
Sigma, and combines them with the capabilities offered by modern information technology,
in order to optimally align business processes with the performance objectives of an
organization.
BPM has a plethora of facets, as its origins lie in Business Process Reengineering, Process
Innovation, Process Modelling, and Workflow Management, to name a few. Organisations
increasingly recognize the need for a stronger process orientation and require appropriate,
comprehensive frameworks to help scope and evaluate their BPM initiatives.
The definitions of Business Process Management range from IT-focused views to BPM as a
holistic management practice. The IT-focused definition characterises BPM from the
perspective of business process automation (Harmon, 2003). The analysis of BPM
definitions reveals that the focus is often on analysing and improving processes (Zairi, 1997;
Elzinga et al., 1995). DeToro and McCabe (1997) see Business Process Management as a
new way of managing an organisation, which is different to functional, hierarchical
management. This view is supported by Pritchard and Armistead (1999) whose research
resulted in BPM being seen “as a ‘holistic’ approach to the way in which organisations are
managed”. Armistead and Machin (1997) state that BPM is “concerned with how to manage
processes on an ongoing basis, and not just with the one-off radical changes
associated with BPR”. According to Zairi (1997), BPM relies on good systems and
structural change and, even more importantly, on cultural change (Spanyi, 2003). A
comprehensive BPM approach requires alignment with corporate objectives, adequate
governance and employees' customer focus, and involves, besides a cross-functional
viewpoint, strategy, operations, techniques and people.
Wave 4: Customer Obsession Is Now a Top Driver for BPM (Richardson, 2016)
Forrester defines BPM as a discipline for continually improving and transforming cross-
functional business processes. The discipline of BPM is changing, driven by increased focus
on how businesses win, serve, and retain customers. Four disruptive trends are
underpinning this transformation:
1. Customer experience is upgrading BPM to front-office status. Until recently, BPM
has primarily lived in the back office of the enterprise, delivering reliable improvements in
efficiency and productivity for organizations willing to invest considerable amounts of
time, money, and talent. Now that companies have shifted away from cost-cutting and
toward driving top-line revenue growth, demand for BPM in the front office is on the rise.
In 2015, Forrester found that more than 30% of BPM initiatives had customer experience
(CX) improvement as a primary driver.
2. Mobile is fueling the next cycle of process innovation. Marketing and CX teams
regularly build and deploy “throwaway” mobile apps to test new digital ideas. Many of
these apps end up duplicating core business processes rather than integrating with
them. Business process pros are beginning to play a critical role in mobile-enabling
core business processes to support the speed, flexibility, and context that today’s
mobile customers and mobile workforce expect.
3. Cognitive software is rebalancing automation and the human touch. As companies
follow their digital business strategies — whether that involves organic growth,
acquisitions, partnerships, or outsourcing — they quickly find that siloed, duplicated, and
inconsistent business processes impede digital innovation. These firms are turning to
cognitive intelligence to help employees deliver outstanding CX. This approach requires
marrying cognitive intelligence and BPM, which in the end delivers recommendations
and advice in the context of the current process or task.
4. Advanced process mining is providing new insights. Business process pros can
become distracted by the sheer volume of benchmarks available for tracking operational
activities. Despite — or perhaps because of — their volume, most benchmarks fail to
align with business goals or financial (income statement) performance; few measure
customer satisfaction accurately. Performance management today is limited to
monitoring the efficiency of isolated enterprise functions and fixing tactical process gaps.
Process mining techniques can provide a broader and more systematic set of
benchmarks and extract knowledge from event logs in order to discover, monitor,
and improve customer-centric business processes (see the sketch below).
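To illustrate the kind of analysis process mining performs (a minimal sketch of our own, not Forrester's method; the event log is invented), the directly-follows counts below are the raw material from which discovery techniques build a process model:

```python
# Minimal process-mining sketch: derive directly-follows counts from an event
# log of (case_id, activity) pairs, the starting point for process discovery.
from collections import Counter, defaultdict

log = [
    ("case1", "receive order"), ("case1", "check credit"), ("case1", "ship"),
    ("case2", "receive order"), ("case2", "ship"),
]

traces = defaultdict(list)
for case, activity in log:          # group events into per-case traces
    traces[case].append(activity)

follows = Counter()
for trace in traces.values():       # count a -> b for adjacent activities
    follows.update(zip(trace, trace[1:]))

print(follows)  # e.g. ('receive order', 'ship') occurs once: a direct shortcut
```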
Wave 4: Reframing BPM for better customer experiences (BPM Online, 2017)
In 2017, effective digital transformation will require businesses to digitize their key business
processes, taking into account the support of digital customer touchpoints. For instance, an
5|Page
organization’s back-office should be upgraded to better support customer self-service or it
should be supplemented with new payment options. That’s why, instead of focusing on BPM
excellence, businesses need to make the shift to analyze and optimize business processes
for digital operational excellence centered around digital customer touchpoints.
With speed as one of the top priorities for digital businesses, the ability to quickly prototype
and deploy new processes is an absolute must. That’s why organizations need BPM tools
that are sophisticated, yet easy to use, and should consider low or no-code solutions that
focus on rapid customer-centric innovations.
Wave 5: Exploit and Explore in Bi-Modal BPM (Benner & Tushman, 2015)
More than a decade after our article challenged the promise of a widely popular “best
practice,” the false promise of universal best practices persists. Faced with uncertainty,
managers search for solutions to their challenges often by looking to “experts,” such as
consultants, or to other successful organizations for promising approaches. Although
organization theorists know that there are unlikely to be universal best practices, such
practices continue to be touted, even in academic research. Back then it was programs like
business process reengineering, the Malcolm Baldrige Award criteria, Six Sigma, TQM, and
ISO 9000 that were rapidly diffusing, following pressure by academics, consultants,
government agencies, and large purchasing organizations. The popularity of those specific
practices may have waned over time, but they have been replaced by talk of new best
practices, including ERP systems, Lean Six Sigma, the balanced scorecard, and, even more
recent, techniques for big data and data mining, design thinking, rapid prototyping, the “Lean
Startup,” and many more promoted as universally relevant. The continued rise of these new
programs, promoted as universal panaceas for organizational challenges, again suggests
the importance of a careful understanding of how popular practices influence organizations.
Over a decade ago we examined the paradoxical tensions managers face as they try to
simultaneously host an exploitative, efficiency-oriented process management approach while
still maintaining exploration and innovation in opposition to the requirements of process
management. We found that as organizations embraced the increased exploitation inherent
in process management activities, they shifted toward more exploitative innovations—and
away from more exploratory innovations. The inventor and major proponent of Six Sigma,
Motorola, has not succeeded long term, at least not in the way we typically think of “success”
(i.e., survival or performance) in management research. Separately, at GE (a major
proponent of Six Sigma), stories similarly emerged of dramatic efforts by CEO Jeff Immelt to
spur “breakthrough innovation,” after decades of adherence to process management under
the leadership of Jack Welch. Over the past decade there has been an explosion of research
on when, if, and how organizations attend to the challenge of Abernathy’s productivity
dilemma or to March’s challenge of balancing exploration and exploitation. This research
suggests that the ability to both explore and exploit is positively associated with organization
outcomes. Beyond the innovation challenges of exploration and exploitation, organizations
are now challenged to be local and global (e.g., Marquis & Battilana, 2009), doing well and
doing good (e.g., Battilana & Lee, 2014; Margolis & Walsh, 2003), social and commercial
(e.g., Battilana & Dorado, 2010), artistic or scientific and profitable (e.g., Glynn, 2000), high
commitment and high performance (e.g., Beer & Eisenstadt, 2009), and profitable and
sustainable (e.g., Eccles, Ioannou, & Serafeim, 2014; Henderson, Gulati, & Tushman, 2015;
Jay, 2013).
Innovation has traditionally taken place within hierarchical, control-oriented firms and/or with
selected partners. Information processing, storage, and communication costs have been
constraints on innovation, spurring the internalization of innovation activities within firms.
Two secular trends drive the increasing importance of open innovation. The first is the
increasing prevalence and importance of “digitization” (Greenstein, 2010). The second trend
is modularity associated with task decomposition (Baldwin & Clark, 2000). In contexts where
6|Page
computational costs are low and widely available and distributed communication is
inexpensive, open or peer innovation communities displace organization-based innovation
(Benkler, 2006; O’Mahony & Lakhani, 2011). In these contexts, communities of peers
spontaneously emerge to freely share information on innovation production as well as
problem solving. Such radically decentralized, cooperative, self-organizing modes of
problem solving and production are in sharp contrast to organizationally centered innovation.
Open innovation is most clearly seen in open source software development, which depends
on many individuals contributing their time, for free, to a common project. Community-based
innovation is not limited to software development. Peer modes of innovation, where actors
freely share and co-create innovation, have been documented in a range of product
domains. For example, at LEGO the Mindstorms robot kit became successful only after the
firm opened its boundaries to permit a committed set of users to independently develop and
select a range of Mindstorms products (see Hatch & Schultz, 2010). The availability of
inexpensive computation power and ease of communication permit a fundamentally different
form of innovation—a mode of innovation rooted in free choice, sharing, and openness
absent formal boundaries and formal hierarchy. In these open contexts, variation, selection,
and retention are all done beyond the firm’s boundaries. Thus, these nonmarket, peer
innovation methods complement and, under some conditions, displace firm-centered
innovation (e.g., Wikipedia’s substitution for Encarta and Encyclopedia Britannica). For
incumbent firms, community-based innovation modes stand in sharp contrast to their
historically based hierarchical, control-oriented innovation modes. Exploration now
increasingly resides outside the boundaries of the traditional firm.
Digital Business Ambition: Transform or Optimize? (LeHong & Waller, 2022)
Digital business optimization is the part of the digital journey that improves the enterprise's
current business model or public mission. Examples include using digital technologies to
improve customer experience and productivity while maintaining the same business model.
A major part of an enterprise’s digital journey will often consist of improving/optimizing its
current revenue streams, citizen/mission mandate, operations and customer experiences —
without changing its main value propositions and core business models. This part of the
digital journey is about creating a more digitally capable version of itself. It is not about
disruption or launching new digital business models.
Some enterprises, including those in industry segments ripe for disruption, have more
aggressive digital ambitions. They feel that leveraging digital is not just about improving the
current version of themselves. It’s about seeking net new opportunities and possibly
disrupting the status quo. They want to go beyond digital business optimization to create net
new revenue streams and citizen value from digital products/services and business models.
We call this part of the journey digital business transformation. Digital business
transformation is the part of the digital journey that pushes the enterprise beyond the current
business model or public mission, as a whole or in specific business units. Examples include
new digital products/services, platform and subscription-based business models.
References
Alter, S. (2003). The IS core – IX: Sorting out issues about the core, scope, and identity of the IS
field. Communications of the AIS, 12, 607-628.
Antonucci, Y. L., & Goeke, R. J. (2011). Identification of appropriate responsibilities and positions for
business process management success: Seeking a valid and reliable framework. Business
Process Management Journal, 17(1), 127-146.
Benner, M. J., & Tushman, M. L. (2015). Reflections on the 2013 Decade Award—“Exploitation,
exploration, and process management: The productivity dilemma revisited” ten years
later. Academy of Management Review, 40(4), 497-514.
BPM Online (2017). How to fuel your digital transformation in 2017 with intelligent BPM tools.
https://www.bpmonline.com/digital-transformation-2017
Business Process Problems (n.d.). http://www.cis.saic.com/computerbt/BPR/bpr1proc.htm
Davenport, T.H. & Short, J.E. (1990). "The New Industrial Engineering: Information Technology and
Business Process Redesign," Sloan Management Review, pp. 11-27.
Dumas, M., La Rosa, M., Mendling, J., & Reijers, H. A. (2013). Fundamentals of business process
management. Berlin: Springer-Verlag.
LeHong, H & Waller, G. (2022). Digital Business Ambition: Transform or Optimize? G00739742.
Gartner.
Malhotra, Y. (1998). "Business Process Redesign: An Overview," IEEE Engineering Management
Review, 26(3), Fall 1998. Retrieved from http://www.brint.com/papers/bpr.htm
Richardson (2016). The BPM Playbook Is Your Guide to Customer-Centric Process Change. Forrester.
https://www.forrester.com/report/The+BPM+Playbook+Is+Your+Guide+To+CustomerCentric+Process+Change
Rosemann, M., & de Bruin, T. (2005). Towards a Business Process Management Maturity Model.
European Conference on Information Systems. Retrieved 03 August 2006 from
http://is2.lse.ac.uk/asp/aspecis/20050045.pdf
BP Modelling
Essentially, all models are wrong, but some are useful. George E.P. Box (1919–2013)
BPMN (OMG, 2011)
The Object Management Group (OMG) has developed a standard Business Process
Model and Notation (BPMN). The primary goal of BPMN is to provide a notation that is
readily understandable by all business users, from the business analysts that create the
initial drafts of the processes, to the technical developers responsible for implementing the
technology that will perform those processes, and finally, to the business people who will
manage and monitor those processes. Thus, BPMN creates a standardized bridge for the
gap between the business process design and process implementation. Another goal, but no
less important, is to ensure that XML languages designed for the execution of business
processes, such as WSBPEL (Web Services Business Process Execution Language), can
be visualized with a business-oriented notation. This specification represents the
amalgamation of best practices within the business modeling community to define the
notation and semantics of Collaboration diagrams, Process diagrams, and Choreography
diagrams. The intent of BPMN is to standardize a business process model and notation in
the face of many different modeling notations and viewpoints. In doing so, BPMN will provide
a simple means of communicating process information to other business users, process
implementers, customers, and suppliers. The membership of the OMG has brought forth
expertise and experience with many existing notations and has sought to consolidate the
best ideas from these divergent notations into a single standard notation. Examples of other
notations or methodologies that were reviewed are UML Activity Diagram, UML EDOC
Business Processes, IDEF, ebXML BPSS, Activity-Decision Flow (ADF) Diagram,
RosettaNet, LOVeM, and Event-driven Process Chains (EPCs).
for the remaining group of labels (e.g. "Incident agenda"). The respondents also perceived
labels in verb-object style as significantly more useful than other labels.
G7 Decompose the model if it has more than 50 elements. This guideline relates to G1, which
is motivated by a positive correlation between size and errors. For models with more than 50
elements, the error probability tends to be higher than 50% [9]. Therefore, large models
should be split up into smaller models.
Anti-patterns for process modeling problems (de Brito Dias et al., 2019)
Various notations have been proposed for process modeling [7] (e.g., Event-Driven Process
Chain, UML Activity Diagram, Business Process Model and Notation (BPMN), etc.) and a
variety of process modeling tools [4] (e.g., Bonita, Camunda) is available. Recent years have
seen a convergence on BPMN [9], and its version 2.0 has been adopted as an ISO
standard. Quality assurance has been less pervasive. Some tools provide functionality for
detecting subclasses of problems and many problems go undetected depending on the tool
[4]. Rozman et al. [10] identified problems that are repeated constantly across a couple of
thousand processes modeled by novice modelers and defined a set of anti-patterns of
business process modeling. Even professionally created process models such as the SAP
Reference Model [6] contain errors. Anti-patterns are particularly useful not only for spotting
such errors, but also for providing feedback to the modelers regarding which scenarios
should be avoided in the future [10].
In this paper, we investigate the support of anti-patterns by BPMN-based process modeling
and execution tools, in order to understand to which extent these tools automatically detect
anti-patterns and help the user to correct them. We found that most common anti-patterns
are detected by one or the other tool. Our analysis illustrates gaps in detection support. After
conducting a literature search for works dealing with common problems encountered in the
process modeling task, we selected the following ten anti-patterns (a detection sketch for two
of them follows the list):
(1) Activities in one pool are not connected;
(2) The process does not contain an end event;
(3) Sequence flow crosses sub-process boundary;
(4) Sequence flow crosses pool boundary;
(5) Gateway receives, evaluates and sends a message;
(6) Intermediate events are placed on the edge of the pool;
(7) Hanging intermediate events or activities in the process model;
(8) Each swimlane in the pool contains a start event;
(9) Exception flow is not connected to the exception;
(10) Message flow misuse across swimlanes.
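As an illustration of automated detection (a minimal sketch of our own, not taken from the paper; the model representation is a hypothetical node-and-flow encoding, not a real BPMN library API), the following checks cover anti-patterns (1) and (2):

```python
# Minimal sketch: detecting two anti-patterns on a toy BPMN-like model.
# A process is assumed to be a list of nodes (id, type) plus sequence flows.
from typing import List, Tuple

Node = Tuple[str, str]   # (node_id, node_type), e.g. ("t1", "task")
Flow = Tuple[str, str]   # (source_id, target_id)

def missing_end_event(nodes: List[Node]) -> bool:
    """Anti-pattern (2): the process does not contain an end event."""
    return not any(ntype == "end_event" for _, ntype in nodes)

def disconnected_activities(nodes: List[Node], flows: List[Flow]) -> List[str]:
    """Anti-pattern (1): activities with no incoming or outgoing sequence flow."""
    sources = {s for s, _ in flows}
    targets = {t for _, t in flows}
    return [nid for nid, ntype in nodes
            if ntype == "task" and nid not in sources and nid not in targets]

# Toy process: start -> t1; t2 is hanging, and there is no end event.
nodes = [("s", "start_event"), ("t1", "task"), ("t2", "task")]
flows = [("s", "t1")]

print(missing_end_event(nodes))               # True: no end event
print(disconnected_activities(nodes, flows))  # ['t2']
```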
BPMN collaboration diagram guidelines used in assessment
a. Correct minimal symbol set and correct shapes are used throughout, connectors have arrows showing direction, and the model is sufficiently readable. Ensure you use the correct shapes and arrowheads denoting flow when drawing on paper.
b. All sequential tasks (without further inputs) performed by one role are grouped into one task [Mendling G1: Use as few elements in the model as possible]. (Counter-example figure omitted: the last two tasks should be aggregated.)
c. All tasks are named using verb-object format [Mendling G6: Use verb-object activity labels]. (Counter-example figure omitted: a task that does not start with a verb.)
d. Pools are used for companies/organisations; use lanes for each human role in the main pool. Note this means no software-system or departmental lanes. The customer of an organisation (with one role) is normally modelled as part of the organisation (a lane in the organisation's pool).
e. Each pool has one start and one end event [Mendling G3: Use one start and one end event]. (Counter-example figure omitted: a pool containing only intermediate events, with no start and end event.)
f. Connected sequence flow within a pool: intermediate events and tasks need at least one incoming and only one outgoing sequence flow. (Counter-example figure omitted: X should be connected to Y.)
g. There are no crossing sequence flows or sequence flows between pools. (Counter-example figure omitted: the lines cross.)
References
List, B., & Korherr, B. (2006). An Evaluation of Conceptual Business Process Modelling Languages.
In Proceedings of the ACM Symposium on Applied Computing (SAC '06), Dijon, France. ACM.
Mendling, J., Reijers, H. A. & van der Aalst, W.M.P (2010). Seven process modelling guidelines
(7PMG). Information and Software Technology, Vol. 52(2): 127-136
OMG (2011). Business Process Model and Notation (BPMN), Version 2.0. Retrieved from
http://www.omg.org/spec/BPMN/2.0
Recker, J. (2010) Opportunities and Constraints: The Current Struggle with BPMN, Business Process
Management Journal (16:1) 181-201
de Brito Dias, C.L., Stein Dani, V., Mendling, J., & Thom, L.H. (2019) Anti-patterns for process
modeling problems: an analysis of BPMN 2.0-based tools behavior. BPM. LNBIP, 362, 745–757.
https://doi.org/10.1007/978-3-030-37453-2_59
BP Redesign
A BPM Methodology (Harmon, 2014)
The methodology assumes that one of the major roles that the process manager will undertake is the maintenance and
improvement of the process on an ongoing basis. The methodology also assumes that most
implementation phases of most projects will involve other groups, such as IT or HR, in the
development of components, such as training courses and software applications, which will
be needed for the new process design. A major redesign effort takes time and consumes the
efforts of many executives and managers. Thus, it is justified only when it is determined that
minor changes won’t produce the desired result. A major redesign is usually undertaken only
if the organization makes a major shift in its strategic orientation, or if a major new
technology is to be incorporated that will impact a number of different subprocesses and
activities within a major business process.
Process Scoping (Long, 2010)
Guides provide information on the how, why, and when of a process. For example, guides are
the events that define the boundaries of the process. The "triggering" or "starting event"
designates the beginning of the process, and the "ending" or "closing" event designates the
ending of the process. In addition, guides are the policies, procedures, knowledge,
experiences, and even measures that drive the decisions that are made in a process.
Guides also include 'rules'. In generic definitions of SIPOC, it is suggested that rules should
be included along with other "things" as inputs to the process. The problem is that the rules
don't get changed by the process but rather are used as the decision logic of a process. In
order to conduct an efficient and effective analysis of a process, it is necessary to separate
the changes in state from the criteria that create those state changes. The majority of real
problems in a process are not the result of the changes that occur but rather the criteria that
initiate and drive those changes. At least 80-85% of the time, it is the guides/rules that need
to be changed in order to solve the real problems of efficiency, effectiveness, and agility in
the process. Changing the cause of the problem (rules) and not the symptoms (input/output)
provides an organization with opportunities to reduce process cycle times dramatically, for
example from 15 days to 45 minutes, or from 12 days to 30 seconds. Dramatic changes like this are
rarely possible without changes to the 'rules' that drive the process. In order to identify
opportunities to change the rules they must be clearly separated from the "flow" of the
process. Therefore, it's best to consider guides as a separate component when documenting
the process. This will also create a more maintainable process model for the future.
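To illustrate the separation of guides from flow (our own minimal sketch; the rule names and limits are invented), the decision logic below lives in a rule table that the flow consults, so a rule change never requires touching the steps themselves:

```python
# Minimal sketch: guides (decision rules) kept separate from the process flow.
# The rule table can be changed without modifying the steps themselves.

# Guide: decision criteria, maintained independently of the flow.
APPROVAL_RULES = {
    "auto_approve_limit": 1000,     # orders below this are approved automatically
    "manager_approve_limit": 10000,
}

def approval_route(order_amount: float) -> str:
    """Consult the guide to pick the path; the flow never hardcodes limits."""
    if order_amount < APPROVAL_RULES["auto_approve_limit"]:
        return "auto-approve"
    if order_amount < APPROVAL_RULES["manager_approve_limit"]:
        return "manager review"
    return "committee review"

# Changing a rule (a guide) changes behavior without redesigning the flow:
print(approval_route(1500))                     # manager review
APPROVAL_RULES["auto_approve_limit"] = 2000
print(approval_route(1500))                     # now auto-approve
```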
BP Scoping (Harmon, 2019)
The process-in-scope is placed in the middle box. Inputs and outputs are then examined.
The sources of the inputs and those who receive the outputs are also identified. Then, in
addition, one looks at Guides – information that controls the execution of the process,
including business rules and management policies – and we look at what Enables the
process, including employees, data from IT applications and the physical layout of the work
environment. As we define the flows into and out of the process-in-scope, we look for
problems and we also begin to define how we will measure the effectiveness of the process
and where the problems seem to reside.
All process problems are divided into one of six broad types: (1) Output Problems, (2)
Input Problems, (3) Guide or Constraint Problems [Inadequate Controls], (4) Enabler or
Resource Problems, (5) Activity or Flow Problems, or (6) Process Management Problems.
Figure 9.1 shows a process scope diagram with the five subprocesses we initially identified
as those contained within the deliver pizzas process. We have connected the five processes
into a flow diagram. Flow problems occur because some of these subprocesses are poorly
designed or because the flow is not the best possible sequence. In addition, each of the
processes has a manager or supervisor who is responsible for the work that goes on within
that subprocess. Process management problems occur because one or more of the
managers assigned to plan, organize, monitor, and control the subprocesses is not doing his
or her job as well as possible.
In essence, every process or activity should have someone who is responsible for ensuring
that the process or activity is accomplished. This process manager may be a team leader, a
supervisor, or a manager who is responsible for several other activities, including this one. It
is the manager who is responsible for ensuring that the process has the resources it needs,
that employees know and perform their jobs, and that employees get feedback when they
succeed or when they fail to perform correctly. It is just as likely that a process is broken
because the manager is not doing his or her job as it is that the process is broken because
of the flow of activities or the work of the employees.
Business process redesign best practices (Limam Mansar & Reijers, 2007)
Although the focus on the business process is now a widely accepted industrial attitude,
business process redesign (BPR) in practice is still more art than science. Design
methodology is primarily the field of consulting firms who have developed proprietary BPR
methods (Kettinger et al., 1997). These mainly emphasize project management and
organizational issues of process design, but often fail to address the “technical challenge” of
developing a process design that is a radical improvement of the current one (Reijers, 2003).
Valiris and Glykas [9] also recognize as limitations of existing BPR methodologies that
"there is a lack of a systematic approach that can lead a process redesigner through a series
of steps for the achievement of process redesign". As Sharp and McDermott [10]
commented more recently: "How to get from the as-is to the to-be [in a BPR project] isn't
explained, so we conclude that during the break, the famous ATAMO procedure is
invoked—And Then, A Miracle occurs".
The trade-off that underlies a redesign measure is very important in the redesign of a
business process.
Below we classify the best practices in a way that respects the Alter framework. In our
framework, the business process element has two views: the participants in the business
process, considering the organization structure (elements: roles, users, groups,
departments, etc.), and the organization population (individuals: agents which can have
tasks assigned for execution, and the relationships between them). The most popular best
practices are also listed:
4.1. Customer
4.1.1. Control relocation: ‘move controls towards the customer’
Different checks and reconciliation operations that are part of a business process may be
moved towards the customer. Klein [35] gives the example of Pacific Bell that moved its
billing controls towards its customers eliminating in this way the bulk of its billing errors. It
also improved customer’s satisfaction. A disadvantage of moving a control towards a
customer is higher probability of fraud, resulting in less yield. This best practice is named by
Klein [35].
4.1.2. Contact reduction: ‘reduce the number of contacts with customers and third parties’
The exchange of information with a customer or third party is always time-consuming.
Especially when information exchanges take place by regular mail, substantial wait times
may be involved. Also, each contact introduces the possibility of an error. Hammer
and Champy [6] describe a case where the multitude of bills, invoices and receipts creates a
heavy reconciliation burden. Reducing the number of contacts may therefore decrease
throughput time and boost quality. Note that it is not always necessary to skip certain
information exchanges, but that it is possible to combine them with limited extra cost. A
disadvantage of a smaller number of contacts might be the loss of essential information,
which is a quality issue. Combining contacts may result in the delivery or receipt of too much
data, which involves cost.
4.1.3. Integration: ‘consider the integration with a business process of the customer or a supplier’
This best practice can be seen as exploiting the supply-chain concept known in production
[37]. The actual application of this best practice may take on different forms. For example,
when two parties have to agree upon a product they jointly produce, it may be more efficient
to perform several intermediate reviews than performing one large review after both parties
have completed their part. In general, integrated business processes should render a more
efficient execution, both from a time and cost perspective. The drawback of integration is
that mutual dependence grows and, therefore, flexibility may decrease.
4.2. Business process operation
4.2.1. Order types: ‘determine whether tasks are related to the same type of order and, if necessary,
distinguish new business processes’
Especially Berg and Pottjewijd [38] convincingly warn about parts of business processes that
are not specific to the business process they are part of. Ignoring this phenomenon may
result in a less effective management of this 'subflow' and a lower efficiency. Applying this
best practice may yield faster processing times and less cost. Also, distinguishing common
subflows of many different flows may yield efficiency gains. Yet, it may also result in more
coordination problems between the business process (quality) and less possibilities for
rearranging the business process as a whole (flexibility).
4.2.2. Task elimination: ‘eliminate unnecessary tasks from a business process’
A common way of regarding a task as unnecessary is when it adds no value from a
customer’s point of view. Typically, control tasks in a business process do not do this; they
are incorporated in the model to fix problems created (or not elevated) in earlier steps.
Control tasks are often identified by iterations. Task redundancy can also be considered as
a specific case of task elimination (Fig. 6). In order to identify redundant tasks, Castano et al.
[40] have developed entity-based similarity coefficients. These help to automatically check
the degree of similarities between tasks (or activities). The aims of this best practice are to
increase the speed of processing and to reduce the cost of handling an order. An important
drawback may be that the quality of the service deteriorates. This best practice is
widespread in literature, for example, see Peppard and Rowland [32] Berg and Pottjewijd
[38] and Van der Aalst and Van Hee [41]. Buzacott [36] illustrates the quantitative effects of
eliminating iterations with a simple model.
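As a simplified stand-in for such a coefficient (not Castano et al.'s actual entity-based measure; a plain word-overlap score used only for illustration):

```python
# Simplified stand-in for a task-similarity check: Jaccard overlap of the
# words in two task labels. High overlap flags a candidate redundant pair.

def label_similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# 0.75 here: worth reviewing whether one of the two tasks can be eliminated.
print(label_similarity("check customer credit", "check credit of customer"))
```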
4.2.3. Order-based work: ‘consider removing batch-processing and periodic activities from a business
process’
Some notable examples of disturbances in handling a single order are: (a) its piling up in a
batch and (b) periodic activities, e.g. because processing depends on a computer system
that is only available at specific times. Getting rid of these constraints may significantly
speed up the handling of individual orders. On the other hand, efficiencies of scale can be
reached by batch processing. Also, making information systems permanently available may
be costly. This best practice results from our own reengineering experience.
4.2.4. Triage: ‘consider the division of a general task into two or more alternative tasks’ or ‘consider
the integration of two or more alternative tasks into one general task’
When applying this best practice in its first and most popular form, it is possible to design
tasks that are better aligned with the capabilities of resources and the characteristics of the
orders being processed (Fig. 7). Both interpretations improve upon the quality of the
business process. Distinguishing alternative tasks also facilitates a better utilization of
resources, with obvious cost and time advantages. On the other hand, too much
specialization can make processes become less flexible, less efficient, and cause
monotonous work with repercussions for quality. An alternative form of the triage best
practice is to divide a task into similar instead of alternative tasks for different subcategories
of the orders being processed. For example, a special cash desk may be set up for
customers with an expected low processing time. Note that this best practice is in some
sense similar to the order types best practice we mentioned in this section. The main
interpretation of the triage concept can be seen as a translation of the order type best
practice on a task level.
4.2.5. Task composition: ‘combine small tasks into composite tasks and divide large tasks into
workable smaller tasks’
Combining tasks should result in the reduction of setup times, i.e., the time that is spent by a
resource to become familiar with the specifics of an order. By executing a large task which
used to consist of several smaller ones, some positive effect may also be expected on the
quality of the delivered work. On the other hand, making tasks too large may result in (a)
smaller run-time flexibility and (b) lower quality as tasks become unworkable. Both effects
are exactly countered by dividing tasks into smaller ones. Obviously, smaller tasks may also
result in longer setup times (Fig. 8). This best practice is related to the triage best practice in
the sense that they both are concerned with the division and combination of tasks. It is
probably the most cited best practice, mentioned by Hammer and Champy [6], Rupp and
Russell [39], Peppard and Rowland [32], Berg and Pottjewijd [38], Seidmann and
Sundararajan [25], Reijers and Goverde [44], Van der Aalst [45] and Van der Aalst and Van
Hee [41]. Some of these authors only consider one part of this best practice, e.g. combining
smaller tasks into one. Buzacott [36], Seidmann and Sundararajan [25] and Van der Aalst
[45] provide quantitative support for the optimality of this best practice for simple models.
4.3. Business process behaviour
4.3.1. Resequencing: ‘move tasks to more appropriate places’
In existing business processes, actual task orderings do not always reflect the necessary
dependencies between tasks (Fig. 11). Sometimes it is better to postpone a task if it is not
required for immediately following tasks, so that perhaps its execution may prove to be
superfluous. This saves cost. Also, a task may be moved into the proximity of a similar task,
in this way diminishing setup times. The resequencing best practice is mentioned as such by
Klein [35]. It is also known as ’process order optimization’.
4.3.2. Knock-out: ‘order knock-outs in an increasing order of effort and in a decreasing order of
termination probability’
A typical part of a business process is the checking of various conditions that must be
satisfied to deliver a positive end result. Any condition that is not met may lead to a
termination of that part of the business process: the knock-out (Fig. 12). If there is freedom in
choosing the order in which the various conditions are checked, the condition that has the
most favorable ratio of expected knock-out probability versus the expected effort to check
the condition should be pursued. Next, the second best condition, etc. This way of ordering
checks yields on average the least costly business process execution. There is no obvious
drawback to this best practice, although it may not always be possible to freely order these
kinds of checks. Also, implementing this best practice may result in a (part of a) business
process that takes a longer throughput time than a full parallel checking of all conditions. The
knock-out best practice is a specific form of the resequencing best practice. Van der Aalst
[45] mentions this best practice and also gives quantitative support for its optimality.
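A minimal sketch of the ordering rule (ours, assuming independent checks with known efforts and knock-out probabilities; the numbers are invented): sorting by the ratio of knock-out probability to effort minimizes the expected checking cost:

```python
# Minimal sketch of the knock-out best practice: order independent checks
# by descending ratio of knock-out probability to checking effort.
checks = [
    # (name, effort in minutes, probability the check terminates the case)
    ("verify identity",  5.0, 0.05),
    ("credit check",    20.0, 0.30),
    ("fraud screening",  2.0, 0.10),
]

# Most favorable ratio first: cheap checks that often knock out go early.
ordered = sorted(checks, key=lambda c: c[2] / c[1], reverse=True)

def expected_effort(seq):
    """Expected total effort: each check runs only if all earlier ones passed."""
    total, p_reach = 0.0, 1.0
    for _, effort, p_knockout in seq:
        total += p_reach * effort
        p_reach *= (1.0 - p_knockout)
    return total

print([name for name, _, _ in ordered])    # fraud screening first
print(round(expected_effort(checks), 2))   # 25.33 in the original order
print(round(expected_effort(ordered), 2))  # 23.15 in the improved order
```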
4.3.3. Parallelism: ‘consider whether tasks may be executed in parallel’
The obvious effect of putting tasks in parallel is that the throughput time may be considerably
reduced (Fig. 13). The applicability of this best practice in business process redesign is
large. In practical experiences we have had with analyzing existing business processes, tasks
were mostly ordered sequentially without the existence of hard logical restrictions prescribing
such an order. A drawback of introducing more parallelism in a business process that
incorporates possibilities of knock-outs is that the cost of business process execution may
increase. Also, the management of business processes with concurrent behavior can
become more complex, which may introduce errors (quality) or restrict run-time adaptations
(flexibility).
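A small illustration (ours; durations invented): for independent tasks, sequential throughput time is the sum of the task durations, while parallel throughput time is bounded by the longest task:

```python
# Illustrative throughput-time effect of parallelism (durations in days).
durations = {"credit check": 2.0, "inventory check": 1.0, "legal review": 3.0}

sequential = sum(durations.values())  # tasks one after another
parallel = max(durations.values())    # independent tasks side by side

print(sequential)  # 6.0 days
print(parallel)    # 3.0 days, bounded by the longest task
```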
4.3.4. Exception: ‘design business processes for typical orders and isolate exceptional orders from
normal flow’
Exceptions may seriously disturb normal operations. An exception will require workers to
get acquainted with the specifics of the exception, although they may not be able to handle
it. Setup times are then wasted. Isolating exceptions, for example by a triage, will make the
handling of normal orders more efficient. Isolating exceptions may possibly increase the
overall performance as specific expertise can be built up by workers working on the
exceptions. The price paid is that the business process will become more complex, possibly
decreasing its flexibility. Also, if no special knowledge is developed to handle the exceptions
(which is costly) no major improvements are likely to occur. This best practice is mentioned
by Poyssick and Hannaford [46] and Hammer and Champy [6].
4.4. Organization
4.4.1. Structure
4.4.1.1. Order assignment: ‘let workers perform as many steps as possible for single orders’
By using order assignment in its most extreme form, for each task execution the resource is
selected, from the ones capable of performing it, that has worked on the order before, if any.
The obvious advantage of this best practice is that this person will get acquainted with the
case and will need less setup time. An additional benefit may be that the quality of service is
increased. On the negative side, the flexibility of resource allocation is seriously reduced.
The execution of an order may experience substantial queue time when the person to whom
it is assigned is not available.
4.4.1.2. Flexible assignment: ‘assign resources in such a way that maximal flexibility is preserved for
the near future’.
For example, if a task can be executed by either of two available resources, assign it to the
most specialized resource. In this way, the possibilities to have the free, more general
resource execute another task are maximal. The advantage of this best practice is that the
overall queue time is reduced: it is less probable that the execution of an order has to await
the availability of a specific resource. Another advantage is that the workers with the highest
specialization can be expected to take on most of the work, which may result in a higher
quality. The disadvantages of this best practice can be diverse. For example, work load may
become unbalanced resulting in less job satisfaction. Also, possibilities for specialists to
evolve into generalists are reduced.
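A minimal sketch of this policy (ours; the capability sets and tie-breaking rule are assumptions): among the free resources able to perform a task, pick the one with the fewest other capabilities, keeping generalists free for whatever arrives next:

```python
# Minimal sketch: flexible assignment. Among free resources that can perform
# a task, choose the most specialized one (fewest capabilities), so that
# generalists stay free for the next, possibly different, task.

capabilities = {
    "alice": {"intake", "credit check", "contract"},  # generalist
    "bob":   {"credit check"},                        # specialist
}

def assign(task: str, free: list) -> str:
    candidates = [r for r in free if task in capabilities[r]]
    if not candidates:
        raise ValueError(f"no free resource can perform {task!r}")
    # Most specialized first: smallest capability set.
    return min(candidates, key=lambda r: len(capabilities[r]))

print(assign("credit check", ["alice", "bob"]))  # 'bob', keeping alice free
```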
4.4.1.3. Centralization: ‘treat geographically dispersed resources as if they are centralized’.
This best practice is explicitly aimed at exploiting the benefits of a Workflow Management
System or WfMS for short [23]. After all, when a WfMS takes care of assigning work to
resources it has become less relevant where these resources are located geographically. In
this sense, this best practice is a special form of the integral technology best practice (see
Section 4.6). The specific advantage of this measure is that resources can be committed
more flexibly, which gives a better utilization and possibly a better throughput time. The
disadvantages are similar to that of the integral technology best practice.
4.4.1.4. Split responsibilities: ‘avoid assignment of task responsibilities to people from different
functional units’
The idea behind this best practice is that tasks for which different departments share
responsibility are more likely to be a source of neglect and conflict. Reducing the overlap in
responsibilities should lead to a better quality of task execution. Also, a higher
responsiveness to available work may be developed so that customers are served quicker.
On the other hand, reducing the effective number of resources that is available for a work
item may have a negative effect on its throughput time, as more queuing may occur.
4.4.1.5. Customer teams: ‘consider assigning teams out of different departmental workers that will
take care of the complete handling of specific sorts of orders’
This best practice is a variation of the order assignment best practice. Depending on its
exact desired form, the customer team best practice may be implemented by the order
assignment best practice. Also, a customer team may involve more workers with the same
qualifications, in this way relaxing the strict requirements of the order assignment best
practice. Advantages and disadvantages are similar to those of the order assignment best
practice. In addition, working as a team may improve the attractiveness of the work and
mutual understanding, which are both quality aspects.
4.4.1.6. Numerical involvement: ‘minimize the number of departments, groups and persons involved in
a business process’
Applying this best practice should lead to fewer coordination problems. Less time spent on
coordination makes more time available for the processing of orders. Reducing the number
of departments may lead to less split responsibilities, with similar pros and cons as the split
responsibilities best practice. In addition, smaller numbers of specialized units may prohibit
the build-up of expertise (a quality issue) and routine (a cost issue). This best practice is
described by Hammer and Champy [6], Rupp and Russell [39] and Berg and Pottjewijd [38].
4.4.1.7. Case manager: ‘appoint one person as responsible for the handling of each type of order, the
case manager’.
The case manager is responsible for a specific order or customer, but he or she is not
necessarily the (only) resource that will work on it. The difference with the order assignment
practice is that the emphasis is on management of the process and not on its execution. The
most important aim of the best practice is to improve upon the external quality of a business
process. The business process will become more transparent from the viewpoint of a
customer as the case manager provides a single point of contact. This positively affects
customer satisfaction. It may also have a positive effect on the internal quality of the
business process, as someone is accountable for correcting mistakes. Obviously, the
assignment of a case manager has financial consequences as capacity must be devoted to
this job.
4.4.2. Population
4.4.2.1. Extra resources: ‘if capacity is not sufficient, consider increasing the number of resources’
This straightforward best practice speaks for itself. The obvious effect of extra resources is
that there is more capacity for handling orders, in this way reducing queue time. It may also
help to implement a more flexible assignment policy. Of course, hiring or buying extra
resources has its cost. Note the contrast of this best practice with the numerical involvement
best practice.
4.4.2.2. Specialist-generalist: ‘consider to make resources more specialized or more generalist’.
Resources may be turned from specialists into generalists or the other way round. A
specialist resource can be trained for other qualifications; a generalist may be assigned to
the same type of work for a longer period of time, so that his or her other qualifications become
obsolete. When the redesign of a new business process is considered, application of this
best practice comes down to considering the specialist–generalist ratio of new hires. A
specialist builds up routine more quickly and may have a more profound knowledge than a
generalist. As a result he or she works quicker and delivers higher quality. On the other
hand, the availability of generalists adds more flexibility to the business process and can
lead to a better utilization of resources. Depending on the degree of specialization or
generalization, either type of resource may be more costly. Note that this best practice
differs from the triage concept in the sense that the focus is not on the division of tasks.
4.4.2.3. Empower: ‘give workers most of the decision making authority and reduce middle
management’.
In traditional business processes, substantial time may be spent on authorizing work that
has been done by others. When workers are empowered to take decisions independently, it
may result in smoother operations with lower throughput times. The reduction of middle
management from the business process also reduces the labor cost spent on the processing
of orders. A drawback may be that the quality of the decisions is lower and that obvious
errors are no longer found. If bad decisions or errors result in rework, the cost of handling an
order may actually increase compared to the original situation.
4.5. Information
4.5.1. Control addition: ‘check the completeness and correctness of incoming materials and check the
output before it is sent to customers’
This best practice promotes the addition of controls to a business process. It may lead to a
higher quality of the business process execution and, as a result, to less required rework
(Fig. 24). Obviously, an additional control will require time and will absorb resources. Note
the contrast of the intent of this best practice with that of the task elimination best practice,
which is a business process operation best practice (see Section 4.2).
4.5.2. Buffering: ‘instead of requesting information from an external source, buffer it by subscribing to
updates’
Obtaining information from other parties is a major time-consuming part of many business
processes. By having information directly available when it is required, throughput times may
be substantially reduced. This best practice can be compared to the caching principle
microprocessors apply. Of course, the subscription fee for information updates may be
rather costly. This is especially so when we consider information sources that contain far
more information than is ever used. Substantial cost may also be involved with storing all the
information. Note that this best practice is a weak form of the integration best practice (see
Section 4.1). Instead of direct access to the original source of information—which the
integration with a third party may come down to—a copy is maintained. This best practice
follows from our own reengineering experience.
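A minimal sketch of the buffering idea (ours; all names invented): a local copy is kept current by a subscription feed, so order handling never waits on the external party:

```python
# Minimal sketch of the buffering best practice: keep a local copy of external
# information, refreshed by update notifications instead of per-order requests.

class CreditRatingBuffer:
    def __init__(self):
        self._cache = {}  # customer_id -> rating, maintained locally

    def on_update(self, customer_id: str, rating: str):
        """Called by the external source's subscription feed."""
        self._cache[customer_id] = rating

    def rating(self, customer_id: str) -> str:
        # No round-trip to the external party at order-handling time.
        return self._cache.get(customer_id, "unknown")

buffer = CreditRatingBuffer()
buffer.on_update("c42", "AA")   # pushed by the information provider
print(buffer.rating("c42"))     # 'AA', served from the buffer
```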
4.6. Technology
4.6.1. Task automation: ‘consider automating tasks’
A particular positive result of automating tasks may be that tasks can be executed faster,
with less cost, and with a better result. An obvious disadvantage is that the development of a
system that performs a task may be very costly. Generally speaking, a system performing a
task is also less flexible in handling variations than a human resource. Instead of fully
automating a task, an automated support of the resource executing the task may also be
considered. A significant application of the task automation best practice is the business
process perspective of e-commerce: As cited by Gunasekaran et al. [48] and defined by
Kalakota and Whinston [49] e-commerce can be seen as the application of technology
towards the automation of business transactions and workflows.
4.6.2. Integral technology: ‘try to elevate physical constraints in a business process by applying new
technology’
In general, new technology can offer all kinds of positive effects. For example, the
application of a WfMS may result in less time being spent on logistical tasks. A Document
Management System will open up the information available on orders to all participants,
which may result in a better quality of service. New technology can also change the
traditional way of doing business by giving participants completely new possibilities. The
purchase, development, implementation, training and maintenance efforts related to
technology are obviously costly. In addition, new technology may arouse fear among workers or
may result in other subjective effects; this may decrease the quality of the business process.
4.7. External environment
4.7.1. Trusted party: ‘instead of determining information oneself, use results of a trusted party’
Some decisions or assessments that are made within a business process are not specific to
the business process they are part of. Other parties may have determined the same
information in another context, which, if it were known, could replace the decision or
assessment. An example is the creditworthiness of a customer that bank A wants to
establish. If a customer can present a recent creditworthiness certificate of bank B, then
bank A will accept it. Obviously, the trusted party best practice reduces cost and may even
cut back throughput time. On the other hand, the quality of the business process becomes
dependent upon the quality of some other party’s work. Some coordination effort with trusted
parties is also likely to be required, which diminishes flexibility. This best practice is different
from the buffering best practice (see Section 4.5), because the business process owner is
not the one obtaining the information. This best practice results from our own reengineering
experience.
4.7.2. Outsourcing: ‘consider outsourcing a business process in whole or parts of it’
Another party may be more efficient in performing the same work, so it might as well perform
it for one’s own business process. The obvious aim of outsourcing work is that it will
generate less cost. A drawback may be that quality decreases. Outsourcing also requires
more coordination efforts and will make the business process more complex. Note that this
best practice differs from the trusted party best practice. When outsourcing, a task is
executed at run time by another party. The trusted party best practice allows for the use of a
result in the (recent) past.
4.7.3. Interfacing: ‘consider a standardized interface with customers and partners’
The idea behind this best practice is that a standardized interface will diminish the probability
of mistakes, incomplete applications, unintelligible communications, etc. (Fig. 28). A
standardized interface may result in less errors (quality), faster processing (time) and less
rework (cost). The interfacing best practice can be seen as a specific interpretation of the
integration best practice, although it is not specifically aimed at customers.
References
Harmon, P. (2014). A BPM Methodology—What Is It and Why It Is Important.
https://www.bptrends.com/bpt/wp-content/uploads/09-02-2014-COL-Harmon-on-BPM-BPTA-
Methodology-Harmon.pdf
Harmon, P. (2019). Business process change: a business process management guide for managers
and process professionals. Morgan Kaufmann.
Limam Mansar, S., & Reijers, H. A. (2007). Best practices in business process redesign: use and
impact. Business Process Management Journal,13(2), 193-213.
Long, K. (2010). SIPOC for Service — Is It Enough? Business Rules Journal, 11(9).
http://www.BRCommunity.com/a2010/b553.html
BP Human Focus
The Sociotechnical Axis of Cohesion for the IS Discipline (Sarker et al., 2019)
Broadly speaking, the sociotechnical perspective considers the technical artifacts as well as
the individuals/collectives that develop and use the artifacts in social (e.g., psychological,
cultural, and economic) contexts (Briggs et al. 2010). This perspective privileges neither the
technical nor the social, and sees outcomes as emerging from the interaction between the
two. Further, it espouses a focus on instrumental outcomes such as efficiency and
productivity as well as on humanistic outcomes, such as well-being, equality, and freedom
(Beath et al. 2013; Mumford 2006).
The origin of sociotechnical thinking can be traced to the multiple post-World War II field
studies undertaken in the British coal-mining industry by the Tavistock Institute (Rice and
Trist 1952; Trist and Bamforth 1951). It emerged as a new way of thinking, which challenged
the prevailing worldview on technologies as being external antecedents to organizational
and social structure and behavior (Beath et al. 2013), and paved the way for establishing
what could be considered among the earliest IS programs (Management Science 1967).
Such programs sought to bridge the divide between the socially oriented approaches to
solving organizational problems advocated by psychological/organizational disciplines and
the technically oriented approaches advocated by disciplines such as computer science and
operations research (Davis and Olson 1985).
Explaining information systems change (Lyytinen & Newman, 2008)
Information system (IS) change is concerned with generating a deliberate change to an
organization’s technical and organizational subsystems that deal with information (Swanson,
1994).
IS change re-configures a work system by embedding into it new information technology (IT)
components. Such work systems execute, coordinate, and manage information-related work
(Alter, 2002; Bergman et al., 2002a, b; Mumford, 2003). They are characterized by low
malleability due to path dependencies, habitualization, cognitive inertia, and high complexity.
Because of this rigidity and complexity, IS change must be planned and deliberate (Lyytinen,
1987b; Alter, 2002). Traditionally, socio-technical thinking has assumed that such systems will remain stable due to low component variation and strong mutual interdependencies.
Occasionally, when any one component becomes incompatible with others due to increased variation (e.g. malfunctioning, learning, replacement), we can observe a structural misalignment, which we label here a gap: a property of a system that affects the system's behavior and its repertoire of responses. A gap is any contingency in the system which, if left
unattended, will reduce the system’s performance and threaten its viability. Often events that
generate gaps are abrupt: a system failing, a financial crisis, or key people leaving. In other
situations the system can drift towards misalignment: a gradual and innocent change in one component reaches a tipping point that pushes the whole system out of alignment
(Plowman et al., 2007). For example, a gradual and small increase in the system’s input
volume can break the technology and affect the whole work system. We call any event that
generates a gap a critical incident.
Attempts to remove these gaps are specific types of events called interventions. These are measures oriented towards one or more socio-technical elements, or towards a system that can be controlled or manipulated as a whole (e.g. a work system), so as to mitigate or remove an observed gap. Interventions can succeed (i.e. remove the gap), but they can also fail, or even weaken the system's stability. This can be due to failed cognition, the system's complex
interdependencies (Cohen et al., 1972; van de Ven et al., 1999), or an actor’s deficient
performance. Sometimes interventions fail because of bad luck (randomness). Owing to
circularity, interventions can result in unintended second- and third-order effects that
produce path-dependent impacts on the system. This can over time morph into an
unpredictable wake of change, or stall the system in paralysis.
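Read as a state model, this vocabulary can be paraphrased in a short sketch (the class, the success probability, and the gap labels are illustrative assumptions, not from the paper): a work system accumulates gaps through critical incidents, and interventions attempt, without guarantee, to remove them.

    import random

    class WorkSystem:
        """Toy paraphrase of the gap/intervention vocabulary (illustrative only)."""
        def __init__(self, components: set[str]):
            self.components = components
            self.gaps: set[str] = set()  # current structural misalignments

        def critical_incident(self, gap: str) -> None:
            # Any event that generates a gap is a critical incident.
            self.gaps.add(gap)

        def intervene(self, gap: str, p_success: float = 0.7) -> bool:
            # Interventions target an observed gap but can fail, and owing to
            # circularity a failed intervention may itself generate a new gap.
            if random.random() < p_success:
                self.gaps.discard(gap)
                return True
            self.critical_incident(f"second-order effect of failed fix for {gap}")
            return False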
Affective response and job impact with ERP adoption (Seymour & Roode,
2008)
The research was done at a South African Higher Education Institution. An ERP system was
implemented as part of a strategic integrated information management system for student
administration. We shall refer to this system using the acronym PERP. Interviews were the
primary method of gathering data in this study and, at a secondary level, organisational
documentation relevant to the PERP implementation project was also reviewed. Most
studies on technology acceptance and resistance to organisational change give little
analytical attention to the emotions or affective responses of users [7]. Yet, the first primary
theme to emerge from our interview analysis is the affective experiences of participants
when engaging with PERP. In Table 2 below, the affective responses expressed by
interviewees were classified according to satisfaction and engagement. Generally, respondents expressed more statements of high (as opposed to low) engagement but low
(as opposed to high) satisfaction.
Emotions can be classified according to two dimensions, the activation dimension and the pleasantness dimension, as depicted in Figure 1 (Feldman 1998). Affective responses to an ICT implementation can hence be expected to signal engagement, disengagement, satisfaction or dissatisfaction with a change [25] (see the sketch after this paragraph). Overall satisfaction, an important IS outcome variable, is positively related to satisfaction with facilitation, task, process and outcomes. In
contrast, negative emotions such as anger and frustration emerge when there are major
disagreements or when parties interfere with the attainment of each other’s important goals
[11]. Aladwani [12] noted negative affective responses to ERP implementations ranging from feeling disengaged, apprehensive and fearful, to being unsure about job tasks and responsibilities, unsure of role and position in the organisation, and feeling incompetent and threatened. These negative
emotions can adversely affect group outcomes and affective acceptance. When these
interviews were performed, users were still struggling to maintain adequate levels of job
performance due to high workloads and subsequent exhaustion. This is not unlike the
situation faced by other Higher Educational Institutions undergoing the same experience
[16]. The increase in workload was attributed to the timing of the implementation, the parallel
systems being run, the amount of learning required on the job and the attributes and
nomenclature of the implemented solution.
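The two-dimensional classification referred to above can be made concrete in a small sketch (the [-1, 1] scaling and the function itself are illustrative assumptions):

    def classify_affect(activation: float, pleasantness: float) -> tuple[str, str]:
        # Two dimensions after Feldman (1998), both scaled here to [-1, 1]:
        # activation signals engagement vs disengagement; pleasantness signals
        # satisfaction vs dissatisfaction.
        engagement = "engagement" if activation >= 0 else "disengagement"
        satisfaction = "satisfaction" if pleasantness >= 0 else "dissatisfaction"
        return engagement, satisfaction

    # The pattern reported above: high engagement but low satisfaction.
    assert classify_affect(0.8, -0.5) == ("engagement", "dissatisfaction")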
[Figure 2: Derived framework. Five interlinked categories emerged from the analysis: Implementation (adaptations, consultations, requirements, communication, planning, testing); Facilitating Conditions (support, documentation, training); System Perceptions (ease of use, usefulness, user friendliness, flexibility, inefficiency, slowness); Affective Response; and Job Impact (meaningful, valuable, recognition, independence, networking, learning, procrastinate, workload, access).]
The derived framework (Figure 2) shows the high levels of connection between the various
categories that emerged from the analysis. In the first instance, implementation choices were
found to impact on system perceptions. In this case, the decision to run systems in parallel
gave users the perception that the system was inefficient, and the amount of modifications and incorrect requirements influenced users' perception of the system's value. All
four other categories were found to impact affective response. In terms of facilitating
conditions, lack of support and training increased frustration while peer support reduced
anxiety. In the implementation category, project communication caused excitement and
anxiety while poor planning decisions and a lack of determining correct requirements
increased frustration. A range of emotional responses was elicited by the various concepts
under job impact, such as the amount of learning, increased workload and the restriction of
access to the system. System perceptions such as ease-of-use, inefficiencies and slowness
also elicited emotional responses. The job impact was in turn influenced by the other four
categories. In terms of facilitating conditions, peer support had the strongest effect on job
impact, allowing for networking, peer recognition, and increased independence. System
perceptions had a job impact in that inefficiencies in the system resulted in increased work
hours while the user friendliness of the system made it valuable in meeting job requirements.
In terms of the implementation, planning decisions such as when to train users and start
using PERP resulted in reduced learning and workload increases which both had a job
impact. Finally, the job impact also influenced system perceptions. For example, reduced
access given to staff resulted in perceptions of system inefficiencies and as learning in the
job increased, the perceptions of ease-of-use of the system increased.
The framework shows that implementation actions and decisions; the support or facilitating
conditions provided by the organisations; and the perceptions of the system all influence
both users' affective response and their job performance. Rather than being ignored or suppressed, emotive responses need to be 'carefully dealt with by creating trustful spaces of interaction, patiently over time' [6]. According to McGrath [7], emotions not only indicate
concerns of personal loss of status or power but also suggest legitimate directions for an
organization.
Hackman and Oldham’s (1980) approach to job enrichment aimed at increasing employees’
critical psychological states that lead to intrinsic motivation, job satisfaction, and
performance outcomes. The job characteristics of skill variety, task identity, and task
significance positively influence the sense of meaningfulness; the job characteristic of
autonomy positively influences the sense of responsibility, and feedback as a job characteristic positively influences knowledge of results, energizing the self-regulation process of self-setting goals and monitoring, evaluating, and reinforcing behavior (Bandura,
1986).
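Hackman and Oldham condensed these relationships into their Motivating Potential Score (MPS); the formula is not quoted in the passage above, but it is consistent with it: the three characteristics feeding meaningfulness are averaged, while autonomy and feedback enter multiplicatively.

    MPS = ((Skill Variety + Task Identity + Task Significance) / 3) × Autonomy × Feedback

Because autonomy and feedback are multiplied rather than added, a job scoring near zero on either has low motivating potential regardless of how varied, identifiable, or significant its tasks are.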
In parallel to the development of the job enrichment model in the United States, a group of
researchers at the Tavistock Institute in England explored ways to improve productivity and
morale in organizations under the assumption that organizations constitute the relationship
between a technological system and a human system (Trist, 1981). These researchers
disagreed with Taylor’s rational system view that standardization and routine increase
efficiency. The socio-technical job design model optimizes the integration of the social and
technical aspects of the work system, taking into consideration the cultural values that
specify the norms, rules, and regulations of behavior in organizations. Rather than focusing
on individual employees, socio-technical designs implement at the group level psychological
mechanisms that are similar to those in the job characteristics model. These mechanisms
include group autonomy and responsibility, group feedback on performance, and task
meaningfulness as enhanced by skill variety, task identity, and significance.
References
Erez, M. (2010). Culture and job design. Journal of Organizational Behavior, 31(2/3), 389-400.
Lyytinen, K., & Newman, M. (2008). Explaining information systems change: a punctuated socio-technical change model. European Journal of Information Systems, 17(6), 589-613.
Sarker, S., Chatterjee, S., Xiao, X., & Elbanna, A. (2019). The sociotechnical axis of cohesion for the
IS discipline: Its historical legacy and its continued relevance. MIS Quarterly, 43(3), 695-720.
Seymour, L. F., & Roode, J. D. (2008). Investigating affective response and job impact with ERP
adoption. South African Computer Journal, 40, 74-82.