Algorithmic decision-making? The user interface and its role for human involvement in decisions
Verena Bader and Stephan Kaiser
Organization, 1–18. DOI: 10.1177/1350508419855714
Abstract
Artificial intelligence can provide organizations with prescriptive options for decision-making. Based
on the notions of algorithmic decision-making and user involvement, we assess the role of artificial
intelligence in workplace decisions. Using a case study on the implementation and use of cognitive
software in a telecommunications company, we address how actors can become distanced from or
remain involved in decision-making. Our results show that humans are increasingly detached from
decision-making spatially as well as temporally and in terms of rational distancing and cognitive
displacement. At the same time, they remain attached to decision-making because of accidental
and infrastructural proximity, imposed engagement, and affective adhesion. When human and
algorithmic intelligence become unbalanced in regard to humans’ attachment to decision-making,
three performative effects result: deferred decisions, workarounds, and (data) manipulations. We
conceptualize the user interface that presents decisions to humans as a mediator between human
detachment and attachment and, thus, between algorithmic and human decisions. These findings contrast with the traditional view of automated media as diminishing user involvement and have useful
implications for research on artificial intelligence and algorithmic decision-making in organizations.
Keywords
Algorithmic and human-based intelligence, algorithmic decision-making, artificial intelligence,
interface, workplace decisions
Introduction
Algorithmic decision-making refers to the automation of decisions and is considered a form of remote control and standardization of routinized workplace decisions (Möhlmann and Zalmanson, 2017).
notion of user involvement in media theory. Then we present our research design, outlining the
case study’s empirical setting and our methodological approach to data collection and analysis.
Next, we illustrate the findings of our empirical case study on the implementation of cognitive
software in a call center. Based on these findings, we present our framework for the role of AI in
workplace decisions and discuss our contribution to the literatures of digital media and organiza-
tion studies. Finally, we describe how our framework can inform future research.
We conceptualize algorithmic decision-making as an assemblage of human actors and algorithms mediated by the user interface
(Figure 1). This mediation evokes the balance or imbalance between low user involvement, which
means human detachment from the decision, and high user involvement, which means human
attachment to the decision.
Methods
Empirical setting: introduction of cognitive software in a call center
Studying human involvement in algorithmic decision-making on a practice level (Günther et al.,
2017) requires in-depth qualitative data that allow the exploration of ‘technologies-in-use’ (e.g.
Gherardi, 2012: 79). We assess the introduction of software and affective user interactions (Bracha
and Brown, 2012) using a case-study research design that can help to explain complex scenarios
(Eisenhardt, 1989; Yin, 2013). Fieldwork is commonly used as a qualitative approach to assessing
technologies-in-use and material arrangements (Gherardi, 2012) and to observing the use of soft-
ware ‘in [a] natural setting’ (Eisenhardt and Bourgeois, 1988).
This article presents research that was part of a broader study on algorithms’ capacity to act.
Part of that study was investigating the use of decision-support software in the call center of a
large cable operator that has about 1500 employees and about 1.3 million customers. In 2005,
the firm was acquired by an internationally operating group and, during the course of several
group-wide software standardization processes, a new cognitive system, IBM Interact, was
launched in a call center. Call centers provide a relevant empirical context, as the agents’ tradi-
tional tasks have considerable potential for automation, so they are among the first empirical
settings in which the implementation of AI and actors’ reactions to it can be analyzed. IBM
Interact is a cognitive system that operates in a prescriptive way (Van der Vlist, 2016) by ana-
lyzing historical and real-time customer data over time and providing its users, the call center
agents in this case, with predetermined and increasingly well-suited sales options for the cus-
tomer who is on the line. During the phone conversation with the customer, the call center agent
has to react and decide which offer is best for the company. Our analysis focuses on the company's side of the sales negotiation, where the call center agents' decisions were newly supported by AI.
Before IBM Interact’s implementation, call center agents assessed a customer’s needs using a
‘manual demand analysis’ that was conducted by asking the customer for information or manually
gathering the information from various internal databases. The customer demand analysis was usu-
ally geared to selling a product, and the procedure followed a strict protocol on which the agents
were well trained during their onboarding phase and afterward. While the customer demand analy-
sis and customer interactions were scripted to make as many sales as possible, IBM Interact was
based on predictive modeling that sought to make not only the highest-priced but also the most
suitable offer to the customer. This predictive modeling considered internal and external data like
the customer’s purchasing and surfing behavior, age, and residency and the marketing and sales
departments’ requirements to predict the likelihood of customer churn and promote customized
service and individualized offers. Thus, with the implementation of this software, the quantity-
focused manual demand analysis was succeeded by a quality-focused, algorithm-based demand
analysis. By automating the demand analysis, the decision about what to offer the client was pre-
sented to the agent via the user interface of IBM Interact, a simple display that gave the agent little
information about how the decision was made, as the agent was not involved in the data collection
and analysis.
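To make the automated demand analysis more concrete, the following is a minimal, hypothetical sketch of a next-best-action step of the kind described above: a churn-risk score computed from internal and external customer attributes that filters and ranks candidate offers before they reach the agent's interface. All names, features, and thresholds are illustrative assumptions; they do not reproduce IBM Interact's actual logic.

```python
# Hypothetical next-best-action step; features, scoring rule, and thresholds
# are illustrative assumptions, not IBM Interact's actual implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class CustomerProfile:
    tenure_months: int            # internal, historical data
    monthly_page_clicks: int      # external, near real-time data
    has_internet_elsewhere: bool  # known only if the customer mentions it


@dataclass
class Offer:
    name: str
    monthly_price: float
    is_internet_product: bool


def churn_risk(profile: CustomerProfile) -> float:
    """Toy churn score in [0, 1]; a production system would use a trained model."""
    score = 0.2
    if profile.tenure_months < 12:
        score += 0.3
    if profile.monthly_page_clicks > 500:  # heavy usage, e.g. intense video gaming
        score += 0.2
    return min(score, 1.0)


def next_best_actions(profile: CustomerProfile, catalog: List[Offer]) -> List[Offer]:
    """Return the ranked offers shown on the agent's interface (possibly none)."""
    risk = churn_risk(profile)
    candidates = [
        o for o in catalog
        if not (o.is_internet_product and profile.has_internet_elsewhere)
    ]
    if risk < 0.3:
        return []  # 'no offer' is itself a valid recommendation
    # High churn risk: cheaper retention offers first; otherwise higher-value offers first.
    return sorted(candidates, key=lambda o: o.monthly_price, reverse=(risk < 0.6))
```

In a sketch like this, the agent-facing interface would display only the returned list, which is consistent with the observation that agents see the ranked options but not the data or logic behind them.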
Our analysis considered the interplay between human and algorithmic intelligence and how
human involvement in decisions played out when the agents’ decisions were succeeded by choices
presented via IBM Interact’s user interface.
Findings
This section assesses the involvement of human actors in algorithmic decision-making and dis-
cusses the decision situations the call center agents faced.
Spatial and temporal separation. IBM Interact accesses external customer data and performs predictive modeling. External actors feed data into its underlying database in various ways and at various times. For
instance, the marketing department wanted to drive high-priced products; a customer started to
play video games intensely and his or her real-time clicks on the webpage were fed back as data
from the customer’s home to IBM Interact (Head of Marketing Intelligence Department, Interview
21); internal databases included the customers’ historical purchasing behavior. Thus, call center
agents were confronted with prescriptive algorithmic decisions made from data they would not
otherwise have had access to. These functionalities meshed with the fast-response environment of the inbound call center, which takes customers' calls about administrative or technical problems, so agents have no time to prepare but must address the customer's concern while interacting with him or her to sell a product. Thus, with the software's decision based on internal, historical data and on external, foresight data to which the agents have no access, agents are spatially and temporally separated from the basis of the decision:
At that moment, you need to be able to identify what is now the next best action that we can offer this
customer, given everything we know about this customer and his past purchase behavior. […] Before
[IBM Interact], the agents were drilled to try to make a sale […] in every single call, never mind what the
customer’s situation is, so there are customers who ordered a product last Thursday and they are calling in
today asking, ‘Hey, where is my stuff?’ [and], of course, there is no way that you have the chance to sell
something to this customer, because he is still waiting […] for his first product. Now, with [IBM Interact],
we are able to filter those customer groups out. […] Then, based on that segmentation and all the customer
insights we have of this customer, we can offer the agents what we call the next best action. (Head of
Marketing Intelligence Department, Interview 21)
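To illustrate this separation, the sketch below shows how a decision basis of this kind might be assembled from sources the agent never sees: historical purchases from internal databases, near real-time click events from the customer's home, and marketing campaign rules, including a filter for customers who ordered a product only days ago. The data structures and rules are hypothetical illustrations, not the operator's actual pipeline.

```python
# Hypothetical assembly of a decision basis from spatially and temporally
# distant sources; the structures and rules are illustrative only.
from datetime import date


def decision_basis(customer_id, purchases_db, clickstream, campaign_rules, today=None):
    """Merge internal/historical and external/real-time data into one record."""
    today = today or date.today()
    history = purchases_db.get(customer_id, [])       # internal, historical data
    recent_clicks = clickstream.get(customer_id, [])  # external, near real-time data

    last_purchase_days_ago = min(
        ((today - p["date"]).days for p in history), default=None
    )
    return {
        "last_purchase_days_ago": last_purchase_days_ago,
        "gaming_related_clicks": sum(
            1 for c in recent_clicks if c["category"] == "gaming"
        ),
        "pushed_products": campaign_rules.get("priority_products", []),
        # Mirror the quoted example: a customer still waiting for last week's
        # order is filtered out of sales prompts entirely.
        "eligible_for_offer": not (
            last_purchase_days_ago is not None and last_purchase_days_ago <= 7
        ),
    }
```

Because none of these inputs originates at the agent's desk or at the moment of the call, the agent encounters only the resulting recommendation, not its basis.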
Rational distancing. The agent has at least two roles that are expressed in the human-material
arrangement of the selling situation: the agent’s interaction with IBM Interact’s user interface,
which presents the best offers for the next customer in line, and the interaction with the customer
through the phone headset that links the agent to the customer, who is elsewhere. The elements of
this triangle arrangement among IBM Interact’s interface, the agent, and the customer jointly build
the situation in which decisions concerning what the agent offers and what the customer buys are
made based on specific goals and rationalities. Therefore, the interface is only the surface of the
rational-instrumental goals (i.e. performance targets) of management and the marketing and sales
department. IBM Interact is configured to allow the organization to make a shift toward customer
service and to change the agents’ focus to ‘quality versus quantity’ and ‘more value than volume’
(Manager, Interview 25), but the agents were still focused on the number of sales as the most
important performance indicator, a view that a manager described as ‘conditioned’ (Manager,
Interview 25). The third perspective was that of the customer, who had his or her own reasons to
act in the situation. In this arrangement of different and even conflicting rationalities, the agent
became rationally distanced, not least because the data that formed the basis for the algorithmic decision were unavailable to the agent. Hence, we refer to rational distancing when the human decision differs from what the interface presents, that is, when the opaque algorithmic decision instruction and the human's decision logic diverge.
with artificial competencies like efficiency and accuracy that are badly needed in the context of a
call center but that cognitively displaced the agents from decisions.
Accidental and infrastructural proximity. When the agents were first introduced to IBM Interact, they
had no use for it. The user interface's design did not support the work routines that had been determined by their individual procedures when they used manual demand analysis, so they often (unintentionally) failed to feed data, or even fed incorrect data, into the system. A manager who
worked at the intersection between the IT department and sales and customer relations described
the outcome of the poorly recorded agent–customer interactions that initially biased the database
on which IBM Interact based its decisions: ‘[The interaction] is only recorded when the agent
presses “save.” Well, if you forget that, of course, nothing is recorded’ (Sales & Customer Opera-
tions-Release-Manager, Interview 27). Agents were also attached to their own decision-making
because of infrastructural and material conditions, the most obvious of which was the agent’s role
as a user of multiple media. For example, the agent filtered and checked IBM Interact’s user inter-
face when he or she was talking to the customer, as biases in the database resulted in IBM Interact’s
proposing offers that the agent understood as not applicable when he or she engaged more fully
with the customer and received more information on which to base a decision. IBM Interact’s
biases might be based on no data (e.g. when the customer gets service from another telecommuni-
cations provider) or flawed data (e.g. when customers make accidental clicks on the webpage or
someone else uses the hardware), in which case the interface might present five Internet offers even
though the customer said at the beginning of the conversation that he had an Internet contract with
another company (Interviewee 21). In such cases, the agent filtered out all Internet-related offers
from IBM Interact. Therefore, accidental and infrastructural proximity (e.g. produced through the
telephone) was another reason for agents to intervene in algorithmic decisions when the interface’s
design did not support their routine work habits or when other surrounding media stimulated sen-
sual and cognitive engagement.
Imposed engagement. In contrast to attachment situations, where agents intervened in the decision
process on their own, in some situations agents were commanded to be attached to decisions.
While managers and developers of IBM Interact urged the agents to adhere strictly to the soft-
ware’s proposals (Head of Marketing Intelligence, Interview 21; Manager, Interview 25), team
leaders allowed or even instructed the agents to ignore IBM Interact when they felt that their own
analysis was better. In one case, a team leader met his team in a private cubicle, where he instructed
them to ignore IBM Interact entirely so that the team could meet its sales goals (memo from an
informal conversation with a Call Center Agent). We refer to this form of forced attachment that is
due to organizational conditions as imposed engagement.
Affective adhesion. Agents were also attached to decisions by their emotions. Agents had been con-
stantly informed of their individual sales numbers and key performance indicators (KPIs) via emails,
monitors, rankings, and tournaments (Team Leader, Interview 7) and had been ‘conditioned over
years’ (Manager, Interview 25) to focus on their sales in drill sessions and private briefings (Call
Center Agents, Interviews 14, 15, 17). In contrast to this omnipresent selling focus, the aim of IBM
Interact was to provide the agent with the best solution for the customer on the line, including the
option not to make an offer at all. Especially when IBM Interact advised the agent not to make an
offer, such as when it would have been disadvantageous to their individual or team goals, the agent’s
emotions sometimes came into play. As a supervisor described it:
In the beginning, we said you must not make an offer if [IBM Interact] tells you not to make an offer, but
the problem is that selling is a goal for them. They must sell. Well, that means, if they can record a sale in
[…], our selling-tool, […] then it is a sale for them. The employees won’t let a tool ruin their sales. I
understand that. It is really hard. They are getting pushed. […] They’re under pressure to make sales, and
if a tool says you must not sell, but you could make a sale, then the employee, of course, makes the sale
[…] because he wants to reach his goals. He wants to have his commission. He just wants to be good.
(Interview 15)
Affective attachment is also closely linked to a managerial narrative about IBM Interact as a
tool that will take over the agents’ work. Using the same narrative, agents refused to use IBM
Interact and switched to the manual customer-demand analysis because they wanted to make the sale themselves as a matter of 'professional ethos' (Manager, Interview 25). These empirical examples show
that both the need and the wish to use unique human competencies were forms of affective adhe-
sion where agents held on to their decisions.
Deferred decisions. In some cases, the conflicting rationalities among the goals of IBM Interact,
customer, managers and team leaders, and agents brought the agents into conflicting situations that
they could not resolve themselves, so they either refused to take calls or deferred decisions during
the phone conversation instead of asking the team leader what to do. One supervisor, who was a
key user of IBM Interact, told that the agents could ‘wait a bit longer to accept the call’ or ‘switch
to AUX’ (Supervisor, Interview 15), a management control system that counts the time agents are
logged off the coordination system during, for instance, meetings or coaching sessions.
Workarounds. When agents had access to the tools they had used previously for the manual demand
analyses process, they often switched to these tools. For instance, one agent complied with IBM
Interact if its decision coincided with the agent’s personal goal of making three sales per day. If she
had already sold three items and IBM Interact still demanded a sale, the agent delayed accepting
the next call or used an older system that was still accessible. However, when she decided to work
around IBM Interact, she still documented her decision by clicking the ‘save’ button, thus feeding
information about the customer interaction to IBM Interact (Call Center Agent, Interview 22).
Manipulation. Although the rigidity of the work environment was frequently compared to the mili-
tary, the tools that were available to the agents gave them opportunities to cheat and manipulate the
system: ‘These are all ways you can cheat as an employee. But apart from that, the numbers are
right’ (Supervisor, Interview 15).
One agent described her own sales goals as conflicting with IBM Interact’s instruction not to
sell a product:
Those from the project feel hoaxed because they work and work and sweat because IBM Interact doesn’t
work how it should work. I manipulate the numbers because I say, ‘Yes, okay, but what [IBM Interact] says
doesn’t interest me’. I manipulate the numbers, but if it is obvious to me that this customer will call in the
next two or three months because their sixteen-year-old daughter says she wants to see this program, I
would miss my sale otherwise. (Interview 16)
Clearly, individual call center agents were involved in and committed to decision-making in
spite of the system’s algorithmic propositions. We found their attachment to decisions took three
forms: accidental and infrastructural proximity, a form of materiality-driven attachment; imposed
engagement, where agents were forced to make decisions in reaction to organizational conditions;
and affective adhesion, which was driven primarily by emotions. Our findings also reveal the per-
formative effects of a lack of balance between human and algorithmic involvement, where human
attachment resulted in deferred decisions, workarounds, and manipulation.
Discussion
Organization and media studies’ research on the work of algorithms has captured algorithmic deci-
sion-making from a critical perspective as an approach to automatic management that dictates
decisions to humans (Beer, 2017; Gillespie, 2012, 2014; Introna, 2016; Newell and Marabelli,
2015). However, questions concerning how workers deal with algorithmic decision-making, how
the user interface influences their ongoing human involvement, and how workplace decisions are
affected by AI, have remained unanswered. Our study addresses these questions by examining the
ongoing human involvement in AI-supported workplace decisions. Taking a media-theoretical per-
spective, we analyze human decision-makers’ confrontations with the essence of the algorithmic
decision via the user interface and show that AI has a dual role in workplace decisions by creating
both human attachment to and detachment from decisions, which result from both high and low
levels of human involvement in interactions with IBM Interact’s user interface. On one hand, the
user interface evokes low human involvement, increasingly detaching the human decision-maker
from decision-making in the form of spatial and temporal separation, rational distancing, and cog-
nitive displacement. On the other hand, the decisions presented via the interface sometimes also
lead to a high degree of human involvement. Therefore, our findings suggest a simultaneous attachment to and detachment from the decisions brought about by the functionalities of AI. We identified
accidental and infrastructural proximity to decisions as a materiality-driven form of attachment and
revealed the significance of contextual factors that only humans can take into account. The third
form of human attachment to decisions, affective adhesion, is emotion-driven. Finally, our findings
suggest that a lack of balanced involvement of humans in decisions has negative performative
effects because of deferred decisions, workarounds, and manipulations.
Figure 2 presents a framework that summarizes this dual role of AI in workplace decisions, a
role that simultaneously evokes humans’ detachment from and attachment to decisions, which the
users master situationally.
The framework contributes to the human-intelligence versus algorithmic-intelligence debate on
the workplace level (Günther et al., 2017). Discourse on algorithmic decision-making frequently
points to superior software characteristics like machine learning and to problems like algorithmic
opacity (e.g. Zarsky, 2016). However, we argue that, depending on the user interface, AI detaches
humans from decisions while at the same time encouraging their attachment.
Figure 2. The role of the user interface in algorithmic intelligence supported workplace decisions.
The framework supports a dual view of the superior side and the 'dark side' (e.g. Marabelli
et al., 2018) of AI functionalities, which are black boxes for most users. First, our findings suggest
that, with the application of algorithmic decision-making on the worker level, the software’s func-
tionalities, such as its access to external data and predictive modeling, can generate spatial and
temporal separation, rational distancing, and cognitive displacement as forms of humans’ detach-
ment from its decisions, increasing humans’ reliance on algorithmic decisions (e.g. Fuller and
Goffey, 2012). This argument is supported by previous research that has suggested that algorithms
gain control over humans’ decisions and actions (Barocas et al., 2013), albeit on a theoretical level
(e.g. Abbasi et al., 2016). The extant research, then, gains empirical grounding with our study.
Second, our analysis shows that the software’s advanced functionalities and its opaque decision
logic can also lead to users’ strong engagement with the decision, whether because of infrastruc-
tural proximity, imposed engagement, or affective adhesion. Our findings suggest that, although
the software lacks the ability to consider contextual factors, its superior functionalities, such as
access to external data and predictive modeling, attach humans to decisions. Thus, humans decide
differently than algorithms do since humans cannot reconstruct the algorithms’ decision logic. We
find that this unbalanced human involvement results in negative outcomes like deferred decisions,
workarounds, and manipulations. Thus, we go beyond studies on algorithmic decision-making that
treat the neglect of contextual factors as a major hazard (e.g. Marabelli et al., 2018) and reveal the
ambivalent character of algorithms that is determined by both human autonomy and human
dependency.
Research has often pointed to the potential of automated data analysis and decisions (Helbing,
2019), treating algorithmic decisions as disclosed units (e.g. Dewett and Jones, 2001) that facilitate
remote managerial control (Bailey et al., 2012). In contrast, we conceptualize algorithmic decision-
making as an assemblage (DeLanda, 2016) of algorithms and humans (Lichtenthaler, 2018). In
doing so, we first refine the frequent classification of users as either free from (detached) or bound to (attached) specific technologies (Latour, 1999) and show that a user interface that presents algorithmic decisions provokes human detachments as well as attachments. In contrast to previous analyses of the subject in algorithmic decision-making (Borche and Lange, 2017) and of attachments, we position the interface as a mediator of users' involvement.
Therefore, we address Latour’s (1999) call for researchers to look beyond the mere opposition
of attachment and detachment by finding more subtle distinctions in their components. We respond
to this call by identifying the constitutive elements of attachments and detachments. The sophisti-
cated software functionalities result in low user involvement in terms of spatial and temporal sepa-
ration, rational distancing, and cognitive displacement. At the same time, the interface’s simplicity
evokes individuals’ active involvement and scrutiny when it activates human senses and discern-
ment related to decisions, especially if other media in the environment (e.g. the telephone) or prevalent organizational conditions draw human attachment to themselves. In addition, the
findings that relate to other media in the environment suggest that, in response to the introduction
of a new technology to be used by employees (e.g. Orlikowski, 2000), the extant media can deter-
mine the extent to which the new technology plays a role in decisions.
Our findings, which are based on the idea of assemblage, also extend existing scholarly work on
managerial control. Research in this field has analyzed unintended uses or ‘drift’ of new manage-
ment systems (e.g. Ciborra and Hanseth, 2000), although it neglects the role of technologies as
carriers of rationality (Bader and Kaiser, 2017; Cabantous and Gond, 2011). In pursuing our notion
of rational distancing, we examine how the user interface builds a site on which decision-makers
with different and sometimes opposing decision rationalities meet. Hence, similar to existing work
that has debated the various knowledge groups involved in using analytics (Pachidi et al., 2014),
we add to the drift debate information regarding how unintended uses unfold if decision logics do
not coincide and/or are not transparent to the human decision-maker.
Our third contribution addresses the literature on the relationship of algorithms to human prac-
tices, those practices’ reaction to the algorithms (Gillespie, 2014), and the algorithms’ relevance to
social outcomes (Beer, 2017; Newell and Marabelli, 2015). The results of our empirical investigation
emphasize how learning algorithms depend on humans, so our study joins those of researchers (e.g.
Suchman, 2014) who have highlighted the simultaneous making and using of data and have agreed
that the human-algorithmic interaction is more important in understanding the implications of algo-
rithms at work than are algorithms on their own (Couldry, 2012; Lowrie, 2017; Orlikowski, 2007;
Wegner, 1997). We enlarge this perspective through our empirical data and shed light on how human-
algorithmic interactions in the workplace affect organizational processes (Yoo et al., 2012).
Specifically, we find that users face the challenge of situationally mastering their detachment and
attachment, as an unbalanced involvement of humans in algorithmic decision-making results in
deferred decisions, workarounds, and manipulations. Earlier work on human manipulations as a
response to algorithms has highlighted how actors overly engage with algorithms by orienting their
actions to their suppositions about the algorithms’ computations to make themselves more recogniz-
able (e.g. Gillespie, 2017). In contrast to this over-engagement and identification with algorithms, we
find that, if humans are under-engaged with algorithms—that is, if they are disproportionally detached
from the algorithmic decision—flawed data may be fed back to the database, causing negative outcomes because the algorithms then work with biased data (e.g. Cunha and Carugati, 2018).
Conclusion
Our framework informs future research on algorithmic decision-making and the use of AI in auto-
mated data analysis in organizations. These findings are based on the notion of user involvement, which media theory has traditionally treated as a psychological construct (e.g. Krugman, 1971). Our framework on the role of AI is built primarily on the constitutive elements of humans'
detachment from and attachments to decision-making that users face in mastering their involve-
ment in decision-making. In doing so, we answer the question concerning the distance between
humans and their decision authority.
However, we also raise the issue of the ontological distance between or convergence of humans and
AI. In this context, our framework works as a theoretical starting point for researchers who seek to
address the ontological categorization of human versus algorithmic intelligence (Westerhoff, 2005).
Similarly, our findings on rational distancing, which are grounded in the divergence between algorith-
mic decisions and human decision logic, suggest that future research in organization studies consider
in more detail the role of epistemologies in algorithmic decision-making (Abbasi et al., 2016; Pachidi
et al., 2014). On the workplace level, considering the user interface as the site of clashing rationalities
could be a fruitful approach to explaining decision-making in organizations (Bader and Kaiser, 2017).
Our findings are based on a single case study on the implementation of a cognitive system in a
call center, an empirical setting that is at the forefront of the development of algorithmic decisions
since workplace decisions in this setting are usually structured and can be easily automated. Future
research may use the framework as theoretical guidance not only in contexts in which human-
algorithmic decision assemblages are part of operational, routine, and daily workplace decisions,
such as high-frequency trading (Borche and Lange, 2017) and loan processing (Chae, 2014), but
also in more complex and unstructured decision-making domains, such as people analytics
(Boudreau and Cascio, 2017; Loebbecke and Picot, 2015; Markus, 2017).
Finally, our study shows the potential of drawing on the rich corpus of media studies in analyzing
the empirical phenomenon of digitization in organizations. Envisioning user interfaces that pre-
scribe decision options as mediators that involve humans more or less in decisions allowed us to
delve into the organizational context of the use of digital media and to elaborate our framework on
the dual detaching and attaching roles of AI’s functionalities in workplace decisions. This approach
points to the role media studies can play in explaining organizational phenomena during the process
of digitization and beyond. Without taking into account the interfaces that present algorithmic deci-
sions as mediators of specific forms of humans’ detachments and attachments, and without empha-
sizing this dual role of AI in workplace decisions, future research might neglect the ongoing human
involvement and go too far in the debate about algorithms as autonomous elements.
ORCID iD
Verena Bader https://orcid.org/0000-0002-4732-506X
References
Abbasi, A., Sarker, S. and Chiang, R. H. K. (2016) ‘Big Data Research in Information Systems: Toward an
Inclusive Research Agenda’, Journal of the Association of Information Systems 17(2): i–xxxii.
Ananny, M. (2016) ‘Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness’,
Science, Technology & Human Values 41(1): 93–117.
Anteby, M. and Chan, C. K. (2018) ‘The Self-Fulfilling Cycle of Coercive Surveillance’, Organization
Science 29(2): 247–63.
Bader, V. and Kaiser, S. (2017) ‘Autonomy and Control? How Heterogeneous Sociomaterial Assemblages
Explain Paradoxical Rationalities in the Digital Workplace’, Management Revue 28(3): 338–58.
Bailey, D. E., Leonardi, P. M. and Barley, S. R. (2012) ‘The Lure of the Virtual’, Organization Science 23(5):
1485–504.
Barocas, S., Hood, S. and Ziewitz, M. (2013) ‘Governing Algorithms: A Provocation Piece’, SSRN Electronic
Journal. Available at: https://ssrn.com/abstract=2245322 (accessed 12 January 2019).
Beer, D. (2017) ‘The Social Power of Algorithms’, Information, Communication & Society 20(1): 1–13.
Belanger, F., Hiller, J. S. and Smith, W. J. (2002) ‘Trustworthiness in Electronic Commerce: The Role of
Privacy, Security, and Site Attributes’, Journal of Strategic Information Systems 11(3–4): 245–70.
Boudreau, J. and Cascio, W. (2017) ‘Human Capital Analytics: Why Are We Not There?’, Journal of
Organizational Effectiveness: People and Performance 4(2): 119–26.
Bracha, A. and Brown, D. J. (2012) ‘Affective Decision Making: A Theory of Optimism Bias’, Games and
Economic Behavior 75(1): 67–80.
Brunton, F. and Coleman, G. (2014) ‘Closer to the Metal’, in T. Gillespie, P. J. Boczkowski and K. A. Foot
(eds) Media Technologies: Essays on Communication, Materiality, and Society, pp. 77–97. Cambridge,
MA: The MIT Press.
Cabantous, L. and Gond, J. -P. (2011) ‘Rational Decision Making as Performative Praxis: Explaining
Rationality’s Eternel Retour’, Organization Science 22(3): 573–86.
Callon, M. (1984) ‘Some Elements of a Sociology of Translation: Domestication of the Scallops and the
Fishermen of St Brieuc Bay’, The Sociological Review 32(1): 196–233.
Chae, B. K. (2014) ‘A Complexity Theory Approach to IT-Enabled Services (IESs) and Service Innovation:
Business Analytics as an Illustration of IES’, Decision Support Systems 57: 1–10.
Chen, H., Chiang, R. H. and Storey, V. C. (2018) ‘Business Intelligence and Analytics: From Big Data to Big
Impact’, MIS Quarterly 36(4): 1165–88.
Ciborra, C. U. and Hanseth, O. (2000) ‘Introduction: From Control to Drift’, in C. U. Ciborra, K. Braa, A.
Cordella, et al. (eds) From Control to Drift: The Dynamics of Corporate Information Infrastructures, pp.
1–14. Oxford: Oxford University Press.
Clark, T. D., Jones, M. C. and Armstrong, C. P. (2007) ‘The Dynamic Structure of Management Support
Systems: Theory Development, Research Focus and Directions’, MIS Quarterly 31(3): 579–615.
Constantiou, I. D. and Kallinikos, J. (2015) ‘New Games, New Rules: Big Data and the Changing Context of
Strategy’, Journal of Information Technology 30(1): 44–57.
Couldry, N. (2012) Media, Society, World: Social Theory and Digital Media Practice. Cambridge: Polity.
Cramer, F. and Fuller, M. (2008) ‘Interface’, in M. Fuller (ed.) Software Studies: A Lexicon, pp. 149–52.
Cambridge, MA: The MIT Press.
Cunha, J. and Carugati, A. (2018) ‘Transfiguration Work and the System of Transfiguration: How Employees
Represent and Misrepresent Their Work’, MIS Quarterly 42(3): 873–94.
Davenport, T. H. (2013) ‘Linking Decisions and Analytics for Organizational Performance’, in T. H.
Davenport (ed.), Enterprise Analytics: Optimize Performance, Process, and Decision through Big Data,
pp. 135–54. Upper Saddle River, NJ: FT Press.
DeLanda, M. (2016) Assemblage Theory. Edinburgh: Edinburgh University Press.
Dewett, T. and Jones, G. R. (2001) ‘The Role of Information Technology in the Organization: A Review,
Model, and Assessment’, Journal of Management 27(3): 313–46.
Downey, G. (2014) ‘Making Media Work: Time, Space, Identity, and Labor in the Analysis of Information and
Communication Infrastructures’, in T. Gillespie, P. J. Boczkowski and K. A. Foot (eds) Media Technologies:
Essays on Communication, Materiality, and Society, pp. 141–66. Cambridge; London: The MIT Press.
Eisenhardt, K. M. (1989) ‘Building Theories From Case Study Research’, The Academy of Management
Review 14(4): 532–50.
Eisenhardt, K. M. and Bourgeois, L. J. (1988) ‘Politics of Strategic Decision Making in High-Velocity
Environments: Toward a Midrange Theory’, Academy of Management Journal 31(4): 737–70.
Faraj, S., Pachidi, S. and Sayegh, K. (2018) ‘Working and Organizing in the Age of the Learning Algorithm’,
Information and Organization 28(1): 62–70.
Fuller, M. and Goffey, A. (2012) Evil Media. Cambridge, MA: The MIT Press.
Gherardi, S. (2012) How to Conduct a Practice-Based Study: Problems and Methods. Cheltenham: Edward
Elgar.
Gillespie, T. (2012) ‘Can an Algorithm Be Wrong?’, Limn 1(2). Retrieved January 31, 2018 from https://escholarship.org/uc/item/0jk9k4hj
Gillespie, T. (2014) ‘The Relevance of Algorithms’, in T. Gillespie, P. J. Boczkowski and K. A. Foot (eds)
Media Technologies: Essays on Communication, Materiality, and Society, pp. 167–94. Cambridge, MA:
The MIT Press.
Gillespie, T. (2017) ‘Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum
Problem’, Information, Communication & Society 20(1): 63–80.
Gitelman, L. (2006) Always Already New: Media, History, and the Data of Culture. Cambridge, MA: The
MIT Press.
Goffey, A. (2008) ‘Intelligence’, in M. Fuller (ed.) Software Studies: A Lexicon, pp. 132–42. Cambridge, MA:
The MIT Press.
Greenwood, D. N. (2008) ‘Television as Escape from Self: Psychological Predictors of Media Involvement’,
Personality and Individual Differences 44(2): 414–24.
Günther, W. A., Mehrizi, M. H. R., Huysman, M., et al. (2017) ‘Debating Big Data: A Literature Review on
Realizing Value from Big Data’, The Journal of Strategic Information Systems 26(3): 191–209.
Helbing, D. (2019) ‘Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From Big
Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies’, in D. Helbing (ed.)
Towards Digital Enlightenment. Essays on the Dark and Light Sides of the Digital Revolution, pp. 47–
72. Cham: Springer.
Helbing, D., Frey, B. S., Gigerenzer, G., et al. (2019) ‘Will Democracy Survive Big Data and Artificial
Intelligence?’, in D. Helbing (ed.) Towards Digital Enlightenment. Essays on the Dark and Light Sides
of the Digital Revolution, pp. 73–98. Cham: Springer.
Hennion, A. (2015) The Passion for Music: A Sociology of Mediation. Farnham: Ashgate.
Hennion, A. (2017a) ‘Attachments, You Say? How a Concept Collectively Emerges in One Research Group’,
Journal of Cultural Economy 10(1): 112–21.
Hennion, A. (2017b) ‘From Valuation to Instauration: On the Double Pluralism of Values’, Valuation Studies
5(1): 69–81.
Introna, L. D. (2016) ‘Algorithms, Governance, and Governmentality: On Governing Academic Writing’,
Science, Technology, & Human Values 41(1): 17–49.
Jung, J., Shroff, R., Feller, A., et al. (2018) ‘Algorithmic Decision Making in the Presence of Unmeasured
Confounding’, arXiv. Retrieved from https://arxiv.org/abs/1805.01868
Klein, G. A. (2017) Sources of Power: How People Make Decisions. Cambridge, MA: The MIT Press.
Krugman, H. E. (1971) ‘Brain Wave Measures of Media Involvement’, Journal of Advertising Research
11(1): 3–9.
Latour, B. (1999) ‘Factures/Fractures: From the Concept of Network to the Concept of Attachment’, Res:
Anthropology and Aesthetics 36(1): 20–31.
LaValle, S., Lesser, E., Shockley, R., et al. (2011) ‘Big Data, Analytics and the Path from Insights to Value’,
MIT Sloan Management Review 52(2): 21–32.
Lichtenthaler, U. (2018) ‘Substitute or Synthesis: The Interplay between Human and Artificial Intelligence’,
Research-Technology Management 61(5): 12–14.
Livingstone, S. (2014) ‘Identifying the Interests of Digital Users as Audiences, Consumers, Workers,
and Publics’, in T. Gillespie, P. J. Boczkowski and K. A. Foot (eds) Media Technologies: Essays on
Communication, Materiality, and Society, pp. 241–50. Cambridge, MA: The MIT Press.
Loebbecke, C. and Picot, A. (2015) ‘Reflections on Societal and Business Model Transformation Arising
from Digitization and Big Data Analytics: A Research Agenda’, The Journal of Strategic Information
Systems 24(3): 149–57.
Lowrie, I. (2017) ‘Algorithmic Rationality: Epistemology and Efficiency in the Data Sciences’, Big Data &
Society 4(1): 1–13.
McFarland, D. A. and McFarland, H. R. (2015) ‘Big Data and the Danger of Being Precisely Inaccurate’, Big
Data & Society 2(2): 1–4.
Mackenzie, A. (2006) Cutting Code: Software and Sociality. New York: Peter Lang.
Mackenzie, A. (2013) ‘Programming Subjects in the Regime of Anticipation: Software Studies and
Subjectivity’, Subjectivity 6(4): 391–405.
McLuhan, M. (1994) Understanding Media: The Extensions of Man. Cambridge, MA: The MIT Press.
Marabelli, M., Newell, S. and Page, X. (2018) ‘Algorithmic Decision-Making in the US Healthcare Industry’,
in IFIP 8.2 Working Conference, San Francisco, CA, 11–12 December, pp. 1–5. Retrieved from https://
papers.ssrn.com/sol3/papers.cfm?abstract_id=3262379
Markus, M. L. (2017) ‘Datification, Organizational Strategy, and IS Research: What’s the Score?’, The
Journal of Strategic Information Systems 26(3): 233–41.
Möhlmann, M. and Zalmanson, L. (2017) ‘Hands on the Wheel: Navigating Algorithmic Management and
Uber Drivers’ Autonomy’, in Proceedings of the International Conference on Information Systems
(ICIS), Seoul, South Korea, 10–13 December.
Newell, S. and Marabelli, M. (2015) ‘Strategic Opportunities (and Challenges) of Algorithmic Decision-
Making: A Call for Action on the Long-Term Societal Effects of “Datification”’, The Journal of Strategic
Information Systems 24(1): 3–14.
Orlikowski, W. J. (2000) ‘Using Technology and Constituting Structures: A Practice Lens for Studying
Technology in Organizations’, Organization Science 11(4): 404–28.
Orlikowski, W. J. (2007) ‘Sociomaterial Practices: Exploring Technology at Work’, Organization Studies
28(9): 1435–48.
Orlikowski, W. J. and Scott, S. V. (2014) ‘What Happens When Evaluation Goes Online? Exploring
Apparatuses of Valuation in the Travel Sector’, Organization Science 25(3): 868–91.
Oudshoorn, N. E. J. and Pinch, T. (2003) How Users Matter: The Co-Construction of Users and Technologies.
Cambridge, MA: The MIT Press.
Pachidi, S., Berends, H., Faraj, S., et al. (2014) ‘What Happens When Analytics Lands in the Organization?
Studying Epistemologies in Clash’, Academy of Management Proceedings 2014(1): 15590.
Peterson, M. (2017) An Introduction to Decision Theory. Cambridge: Cambridge University Press.
Schneeweiss, C. (2012) Distributed Decision Making. Heidelberg: Springer.
Seaver, N. (2017) ‘Attending to the Mediators’, Journal of Cultural Economy 10(3): 309–13.
Shaikh, M. and Vaast, E. (2016) ‘Material Agency as Counter-Performativity: A Second-Order Perspective’,
Academy of Management Proceedings 2016(1): 11067.
Sharma, R., Mithas, S. and Kankanhalli, A. (2014) ‘Transforming Decision-Making Processes: A Research
Agenda for Understanding the Impact of Business Analytics on Organisations’, European Journal of
Information Systems 23(4): 433–41.
Shollo, A. and Galliers, R. D. (2016) ‘Towards an Understanding of the Role of Business Intelligence Systems
in Organisational Knowing’, Information Systems Journal 26(4): 339–67.
Shollo, A. and Kautz, K. (2010) ‘Towards an Understanding of Business Intelligence’, ACIS 2010 Proceedings
2010(21): 86.
Suchman, L. (2014) ‘Mediations and Their Others’, in T. Gillespie, P. J. Boczkowski and K. A. Foot (eds)
Media Technologies: Essays on Communication, Materiality, and Society, pp. 129–39. Cambridge, MA:
The MIT Press.
Van der Vlist, F. N. (2016) ‘Accounting for the Social: Investigating Commensuration and Big Data Practices
at Facebook’, Big Data & Society 3(1): 1–16.
Wegner, P. (1997) ‘Why Interaction Is More Powerful Than Algorithms’, Communications of the ACM 40(5):
80–91.
Weich, A. and Othmer, J. (2016) ‘Unentschieden? Subjektpositionen Des (Nicht-) Entscheiders in
Empfehlungssystemen’, in T. Conradi, F. Hoof and R. F. Nohr (eds) Medien der Entscheidung,
pp. 131–49. Münster; Hamburg; Berlin; London: Lit.
Westerhoff, J. (2005) Ontological Categories: Their Nature and Significance. Oxford: Oxford University Press.
Author biographies
Verena Bader is a PhD candidate in human resources and organization at the Bundeswehr University Munich.
Her research interests lie at the intersection of information systems, organization, and work. Specifically, she
focuses on research questions in the areas of digital technologies and artificial intelligence in their relation to human actors, as well as their intertwinement's implications for work and organizing.
Stephan Kaiser is a professor in the School of Economics and Management at the Bundeswehr University
Munich, where he has been a faculty member since 2009. He received his Ph.D. from the Catholic University
of Eichstaett-Ingolstadt. His main research interests are in organizational theory, work and human resources.