
Human-Machine Communication

Volume 6, 2023
https://doi.org/10.30658/hmc.6.4

ChatGPT, LaMDA, and the Hype Around Communicative AI: The Automation of Communication as a Field of Research in Media and Communication Studies

Andreas Hepp1, Wiebke Loosen2, Stephan Dreyer2, Juliane Jarke3, Sigrid Kannengießer4, Christian Katzenbach1, Rainer Malaka5, Michaela Pfadenhauer6, Cornelius Puschmann1, and Wolfgang Schulz2

1 ZeMKI, Centre for Media, Communication and Information Research, University of Bremen, Germany
2 Leibniz-Institute for Media Research | Hans-Bredow-Institut Hamburg, Germany
3 BANDAS-Center & Department of Sociology, University of Graz, Austria
4 Institute for Communication Studies, University of Münster, Germany
5 TZI, Center for Computing Technologies, University of Bremen, Germany
6 Institute for Sociology, University of Vienna, Austria

Abstract

The aim of this article is to more precisely define the field of research on the automation of
communication, which is still only vaguely discernible. The central thesis argues that to be
able to fully grasp the transformation of the media environment associated with the auto-
mation of communication, our view must be broadened from a preoccupation with direct
interactions between humans and machines to societal communication. This more widely
targeted question asks how the dynamics of societal communication change when com-
municative artificial intelligence—in short: communicative AI—is integrated into aspects
of societal communication. To this end, we recommend an approach that follows the tradi-
tion of figurational sociology.

Keywords: automation of communication, artificial intelligence, communicative AI, algorithms, agency, communication, figuration

CONTACT Andreas Hepp • [email protected] • ZeMKI, Centre for Media, Communication and Information Research •
University of Bremen • Linzerstr. 4 • D-28359 Bremen, Germany

ISSN 2638-602X (print)/ISSN 2638-6038 (online)


www.hmcjournal.com

Copyright 2023 Authors. Published under a Creative Commons Attribution 4.0 International (CC BY-NC-ND 4.0) license.


Introduction
Current media coverage surrounding ChatGPT, LaMDA, and Luminous has brought ques-
tions about the automation of communication into the mainstream: Artificially intelligent
media are no longer merely mediating instances of communication, but are themselves
becoming communicative participants. This has generated broad public discussion about
these systems and the challenges they bring to fields such as education, public discourse,
and journalistic production.1 In light of this intensifying discussion, researchers who have
been working on the topic for some time warn against blindly embracing the hype.2
As media and communications researchers, we do not want to ignore these warnings and want to avoid getting caught up in the hyperbole. Nevertheless, communication technologies such as ChatGPT, LaMDA, and Luminous need to be taken seriously, as they genuinely
represent a new step in the automation of communication—a process that is persistent and opens up a great deal of further discussion. The role played by bots and
algorithmic personalization on social media platforms in the spread of fake news and hate speech, for example, has inspired fervent academic discussion (e.g., Lazer et al., 2018). Systems such as Amazon Alexa, Google Assistant, Microsoft Cortana, or Apple Siri have existed for nearly a decade, forcing us to question our thinking about human communication and agency (e.g., Guzman, 2015). Questions of automation have been addressed further in discussions about news production (e.g., Thurman et al., 2019), surveillance capitalism (e.g., Zuboff, 2019), and data colonialism (e.g., Couldry & Mejías, 2019).
In principle, the automation of communication has a much longer history than recent
public discussions might imply and can affect all areas of social life. However, it is partic-
ularly important where societal communication is concerned, as can be illustrated in the
example of journalism. Here, the automation of communication plays a dual role: inter-
nally, for example, when journalistic working practices change as a result of the automated
production and distribution of content (Carlson, 2018; Diakopoulos, 2019), and externally,
when content created in this way becomes part of the public discussion (Graefe & Bohlken,
2020; Volcic & Andrejevic, 2023).
These examples indicate that automated communication systems have become part of
our media environment and are thereby appropriated in specific ways in various societal
domains, such as public discourse, journalism, politics, and education. This development
poses considerable challenges (Fortunati & Edwards, 2020): empirically, in terms of how
automated communication can be researched, and theoretically, in that the fundamental
concepts of agency, media, and communication are dramatically altered.
With this article we want to define the automation of communication as a research
area in more detail. Our main thesis is that if we are ever going to comprehensively deal
with the transformation of our media environment associated with the automation of

1. This is exemplified by a simple Google Trends analysis, which shows increasing interest in ChatGPT worldwide from November 27, 2022, with a peak on February 12, 2023. Retrieved March 10, 2023, from https://trends.google.de/trends/explore?q=ChatGPT
2. As an example, among others, reference can be made in this regard to a discussion between Emily M. Bender and Casey Fiesler. Retrieved March 10, 2023, from https://web.archive.org/web/20230303074525/https://www.radicalai.org/chatgpt-limitations

communication, we have to broaden our investigation from the direct interaction of humans and machines to societal communication. In broadening our
view, we are compelled to ask how the dynamics of societal communication change when
ChatGPT, LaMDA, Luminous, and comparable technologies become an integral part of it.
To support this reasoning, we first take a closer look at the automation of communica-
tion as a phenomenon. Against this background, we engage with the notion of communi-
cative AI, which we believe can operate as a “sensitizing concept” (Blumer, 1954, p. 7) that
directs us to the true breadth and depth of the phenomenon. Subsequently, we show how a
figurational approach can be used to analyze automated communication as part of societal communication and connect the discussion to already existing "definitive concepts" (Blumer,
1954, p. 7) familiar to media and communication studies.

The Automation of Communication: An Emerging Subject in Media and Communication Studies
It would certainly be a misconception to assume that the automation of communication is
only a subject of the most recent media and communications research. If we look histori-
cally at the emergence of today’s digital systems of automated communication, we can see
that they are closely interconnected with cybernetics (Turner, 2006), and identify links to
media and communication theory from as far back as the 1970s. Cybernetics has always
addressed questions of (communicative) automation, albeit through a primarily technical
lens (e.g., Bibby et al., 1975). At the same time, a rapprochement between cybernetics and
the social sciences took place only to a limited extent. One of the reasons for this restraint
was that the mathematical theory of communication present in the early cybernetic discussion (Baecker, 1997, p. 11) stood in contrast to social science's interests, which have tended to
focus on meaning and understanding.3 This also applies to the “post-discipline” (Waisbord,
2019, p. 1) of media and communication studies4: Even in James R. Beniger’s The Con-
trol Revolution, the automation of communication remained a rather marginal topic (1986,
pp. 304–307). This was contrasted by research in informatics, where the potential of auto-
mating communications was an important research topic very early on, inspired in large
part by The Computer as Communications Device, a 1968 article by J. C. R. Licklider and
Robert W. Taylor. This discussion, for example, about the Turing Test or Weizenbaum’s
(1966) ELIZA, took place largely outside the purview of media and communication studies
(e.g., Searle, 1980) and was only really addressed as a historical discussion after systems
of automated communication became a more widespread phenomenon (Natale, 2021b).
There are very few exceptions (Gunkel, 2012).
In a coarse simplification—sometimes necessary in the context of reconstruction—
we can describe media and communication studies’ increasing interest in questions of

3. This is exemplified by the analysis of the dominant communication theorists until the end of the 1980s
(Beniger, 1990).
4. Silvio Waisbord uses the term "post-disciplinary" to summarize that for media and communication studies "disciplinary boundaries are fluid" and that it is an "intellectually open enterprise rather than a traditional endeavor interested in defining and patrolling epistemological boundaries" (2019, pp. 127, 131); see also Livingstone, 2009; McQuail & Deuze, 2020.

automated communication as having taken place in three stages of addressing digital
communication. Temporally speaking, there are various overlaps between them, although
their distinction makes sense in that they each stand for different discursive contexts in
scholarly thinking about automation in relation to communication.
In the first stage, media and communication studies turned to digital communication
by asking how communication itself and social relationships change when media become
digital. The dominant concept in media and communication studies became that of
“computer-mediated communication” (CMC) (Chesebro & Bonsall, 1989; Jones, 1998)
and scientific interests turned to person-computer interaction (Cathcart & Gumpert, 1985;
Morris & Ogan, 1996) as well as the growth of online relationships and communities (Baym,
1994; Wellman et al., 1996). This research on transforming communications was related
to more general discussions about an emerging information society (see, among others,
Castells, 2000; Mattelart, 2003). Later, media and communication studies research into dig-
ital communications turned to broader questions such as the “mediatization of society”
(Hepp & Krotz, 2014; Hjarvard, 2013; Lundby, 2014). In all these cases, however, the auto-
mation of communication remained a marginal topic, addressed by only a small number of
scientists or those working at the fringes of the discipline (e.g., Steels & Kaplan, 2000).
In the second stage, questions of digital data and their (societal) contexts of use and
exploitation came to the fore—parallel to the fact that technology companies and state
actors increasingly discovered the potential of digital data as a commodity or a resource
(Zuboff, 2019). The core of the discussion was, at first, a critical engagement with big data as
an economic, social, and cultural resource (Andrejevic, 2014; Crawford et al., 2014; Gitel-
man, 2013), which then led to a critique of the progressing datafication of society (Dencik
& Kaun, 2020; Flensburg & Lomborg, 2021; van Dijck, 2014). Here, there was also a stron-
ger rapprochement between media and communication studies and science and technol-
ogy studies, where, for example, expert systems and artificial intelligence had already been
closely studied for much longer (e.g., Star & Ruhleder, 1996; Suchman, 1987). This led to,
among other things, so-called critical data studies that sat at the intersection of media and
communication studies, sociology, and science and technology studies (Burns et al., 2019;
Dalton & Thatcher, 2014; Hepp et al., 2022; Iliadis & Russo, 2016; Kitchin, 2014). In contexts
like these, discussions have focused on the influence of datafied “platforms” (van Dijck et
al., 2018), the need for their “regulation” (Hofmann et al., 2017), “surveillance capitalism”
(Zuboff, 2019), “deep mediatization” (Hepp, 2020b), and “data colonialism” (Couldry &
Mejías, 2019). Questions of automation have always played and continue to play a role in
this discussion about datafication—but less in the sense of automating communication than
in the sense of automating data processing.
In the third stage of research on digital communication, a turn takes place toward the forms of communicative automation. As mentioned above, there were early precursors to
this discussion (Gunkel, 2012; for an overview: Richards et al., 2022); however, the foun-
dation of journals such as Human-Machine Communication (Fortunati & Edwards, 2020)
or a corresponding interest group in the International Communication Association were
exemplary for increasing the discursive momentum. A broad discussion took place to clar-
ify the field of human-machine communication (HMC), as well as an institutionalization of
the research landscape (Fortunati & Edwards, 2021; Guzman, 2018; Guzman et al., 2023).
Nevertheless, it is important to keep in mind that the preoccupation with the automation

of communication in media and communication studies goes beyond the institutionalizing power of HMC, and continues to address questions around topics such as "robot journalism" (Carlson, 2015), "social bots" (Gehl & Bakardjieva, 2016), the "automation of communicative labor" (J. Reeves, 2016), "algorithmic content moderation" (Gorwa et al., 2020), or
“automated media” (Andrejevic, 2020; Napoli, 2014).
In a sense, one can say that there are not only genealogical interrelations between the
three stages of engagement with the automation of communication in media and com-
munication studies, but that this refers to a general characteristic of digital communica-
tion: If one understands algorithms in terms of their ability "to act when triggered without any
regular human intervention or oversight” (Gillespie, 2014, p. 170), automation—generally
understood as the machine-autonomous achievement of specific goals for action—has been
a key aspect of software-based media from the beginning. Digitization, datafication, and
algorithmization represent both the conditions of possibility and the need for automatic
communication processes. However, the technical basis of automation can vary considerably, ranging from simple scripts with determinate steps (i.e., linear algorithms in informatics terms), on which many social bots are based (cf. Veale & Cook, 2018), to complex
technical machine learning systems (cf. Heuer et al., 2021).
The crucial point is that we are dealing with the automation of communication and not,
for example, with forms of automation such as product manufacturing processes where
robotic systems build things. The automation of communication is based on digital traces
as inherent byproducts of datafication. These have a materiality of their own that is far more
opaque than that of automation by locally placed material-machine systems such as manu-
facturing robots (Burrell, 2016). This has significant consequences for various forms of auto-
mated communication processes (Esposito, 2017, p. 251): For all their heterogeneity—for
example, in health care, justice, politics, journalism, everyday practice, science, the public
sector, or education—it is a materiality that refers to the globalized digital infrastructures of
today’s automated communication systems (Crawford, 2021). Accordingly, the three stages
do not simply mean that the last one represents increasing hype or interest, but that a broad
view of the automation of communication seems all the more necessary.

Broadening the Perspective: Moving From the Individual to the Societal
Initially, and in the trajectory of computer-mediated communication, media and commu-
nication studies approached the phenomenon of automated communication mainly from
the perspective of the individual (i.e., the question of how individuals deal with automated
systems, what agency they attribute to them, or what form of agency can be theoretically
distinguished from them). This can be illustrated by publications from the 2010s that were
particularly influential to the discussion: Robert W. Gehl and Maria Bakardjieva, for example, developed this perspective in their essay on social bots, describing bots as
“intended to present a self, to pose as an alter-ego, as a subject with personal biography,
stock of knowledge, emotions and body, as a social counterpart, as someone like me, the
user, with whom I could build a social relationship” (2016, p. 2). In the same period, Andrea
Guzman defined the field of human-machine communication more precisely as "the creation of meaning between human and machine" (2018, p. 3).

Looking at these texts now, they seem particularly concerned with direct interac-
tion between humans and machines, as well as with the agency that automated systems
may or may not have or that is attributed to them. This is also apparent in more media-
psychology-oriented approaches such as CASA research (“Computers-Are-Social-
Actors”). At its core, the CASA paradigm holds that the moment computers or other tech-
nical systems look, communicate, or act like a person, people respond to them as if they
were “real” people (Lee & Nass, 2010; Nass et al., 2006). The CASA approach can be traced
to Byron Reeves and Clifford Nass’ text (1996), in which they addressed the “media equa-
tion”; that is, the tendency of users to put new media on a par with natural persons and
places. CASA research has led to important findings; for example, on the perception of the
communication qualities of automated systems (Edwards et al., 2014), on the relationship
norms of humans toward Twitter bots (Li & Li, 2014), or on the anthropomorphism of
smartphones (Wang, 2017). However, when it comes to expanding CASA research, the dis-
cussion is less focused on going beyond the individual-machine relationship and more on
how we appropriately frame it: The argument is that if a person appropriates new systems
of automated communication today (for example, an artificial companion), they will apply not only scripts that are familiar from their interactions with humans, but also those
from interaction with machines (Gambino et al., 2020). Such arguments fundamentally
expand the CASA approach but remain trapped in the relationship between individual and
machine.
From our point of view, we should go a step further and broaden the perspective
beyond the direct interaction of humans and machines when addressing issues of auto-
mated communication. It is apparent from the example of social bots that focusing solely
on the direct interaction of humans and machines does not do justice to the phenomenon.
Although direct interaction between humans and bots is undoubtedly a relevant topic (Fer-
rara et al., 2016; Varol et al., 2018), as is the question of how bots can be empirically deter-
mined (Cresci, 2020; Martini et al., 2021), research that focuses on the role of bots in public
communication points to dynamics that go further. Florian Muhle (2022), for example,
points out that the significance of Twitter bots is less their direct interaction with humans
but, rather, their indirect influence on communication processes: Bots on Twitter primarily
attempt to “exploit the amplification potential of the service to reach the broad journalis-
tically manufactured public” (Muhle, 2022, p. 48). In other words, traffic is generated by
the bots’ retweets, whereby the platform’s algorithms assign a higher relevance to certain
hashtags, tweets, or accounts than to others. In this way, bots generate “public resonance”
(Fürst, 2017, p. 4). In many cases, this is aimed at journalists to influence their attitudes
toward certain people and topics and, as a consequence, coverage in journalistic media.5
Against this background, the automation of communication is to be seen both in
greater depth and breadth than has often been the case. The depth of the phenomenon arises
from the fact that the automation of communication impacts the “hybrid media system”
(Chadwick, 2017) and its overall communication dynamics. Automated systems are entan-
gled with communications across various levels through which, for example, the publics of

5. This broader view is also addressed by informatics research into human-computer interaction under the
notion of tertiary users—that is, users who do not interact directly with the system but “who are affected by the
introduction of the system or influence its purchase” (Alsos & Svanæs, 2011, p. 85).

online platforms and journalistic publics are placed in a dynamic relationship. However,
communication dynamics can also be thought of even more broadly if we keep in mind
that the data generated in automated communication become the basis for more extensive
automations as is the case, for example, with automated decision-making and how this is
assessed and evaluated by humans (Araujo et al., 2020; Carlson, 2018; Zarsky, 2015). The
breadth of automated communication results from the diversity of its different technologies
such as artificial companions (Pfadenhauer & Lehmann, 2022), chat bots (Beattie et al.,
2020), news bots (Lokot & Diakopoulos, 2016), social bots (Keller & Klinger, 2019), work
bots (Loosen & Solbach, 2020), as well as a diverse range of emerging systems.
In order to grasp this depth and breadth, we should take the automation of commu-
nication more seriously in relation to its overarching, societal character. This means not
stopping at the communicative relationship between individual humans and machines but
expanding our view to the role played by automation in societal communication. It is this
perspective that we would like to assert as necessary when examining the concept of com-
municative AI.

Communicative AI: A Sensitizing Concept


As the last two sections outline, the automation of communication is still a relatively young
and dynamic field of research. In recent years, a range of conceptual proposals have emerged for how to name and frame this field. For example, references are made to "automated media"
(Andrejevic, 2020), “communicative robots” (Hepp, 2020a), or “media agents” (Gambino
et al., 2020). Increasingly, however, the term “communicative AI” has become established
in the international research discussion (e.g., Dehnert & Mongeau, 2022; Guzman & Lewis,
2020; Natale, 2021b; Schäfer & Wessler, 2020; Stenbom et al., 2021). Andrea Guzman and
Seth Lewis, who originally proposed the term, define communicative AI as “technologies
designed to carry out specific tasks within the communication process that were formerly
associated with humans” (2020, p. 3), a definition also shared by Agnes Stenbom et al.
(2021, p. 1), and Marco Dehnert and Paul Mongeau (2022, p. 3). Mike Schäfer and Hartmut
Wessler lean toward such an understanding but argue that these technologies should be
understood “no longer just as mediators of communication between people, but as com-
municators” (2020, p. 311).
All of these proposals emphasize the communicative aspect but remain generic in the
sense that they outline a specific genre of media and communication technologies without
analytically reflecting both their commonality with and distinction from other technologies. For example,
Guzman and Lewis’s (2020) definition raises the question of whether all automation in the
communication process—including editing videos or automated translations—should be
called communicative AI. In the other publications quoted above it remains unclear to what
extent the term artificial intelligence in communicative AI is merely a buzzword—and thus
a reference to the current hype around ChatGPT and similar systems—or if it is intended
to refer to specific technologies such as machine learning, or what further implications are
associated with it. Against this background, we propose a definition of communicative AI
based on three criteria.

Communicative AI
(1) is based on various forms of automation designed for the central purpose of communi-
cation,
(2) is embedded within digital infrastructures, and
(3) is entangled with human practices.

Each of these three points requires further explanation, especially if we think of them not simply in terms of individual human-machine interaction but in terms of societal communication.
The first point looks toward a nexus that Elena Esposito already pointed out a few years
ago in an article on what she calls "artificial communication." In contrast to the discussion
about the Turing Test, she emphasizes that the crucial point in “artificial communication”
is not “that the machine is able to think but that it is able to communicate” (2017, p. 250;
see also Esposito, 2022, pp. 14–16). This argument is an important intellectual step in that it
points us to the communicative construction of the concept of artificial intelligence in com-
municative AI. Media and communications studies in particular show that the human attri-
bution of intelligence to technical systems is a variable construct and does not depend on
whether or not it is based on, for example, machine learning (Natale, 2021b, pp. 68–86). For
example, Weizenbaum’s ELIZA, developed in the 1960s, can already be understood as com-
municative AI because it was able to communicate with people in an automated way, which
then led to the attribution of intelligence to it, even if ELIZA was a chat program based on
simple scripts (Natale, 2019; Weizenbaum, 1966). Twitter bots, which are also often based
on simple scripts, are likewise communicative AI according to this understanding because
they are programmed for the purpose of communication and develop their own commu-
nication dynamics. Embracing systems like these into the notion of communicative AI is
helpful because it sensitizes us as media and communications researchers to consider the
issue of constructing attributions of intelligence to simpler systems as well. From a media
and communication studies’ point of view, defining artificial intelligence is not so much a
determination along certain technical characteristics (e.g., Mühlhoff, 2019), but a question
of communicative construction including the attribution of intelligence, which is always a
contested process (Bareis & Katzenbach, 2021). Such processes of construction refer to the
dominant understandings of being human in a societal context, which typically means the
capability of doing something similar to humans (e.g., Guzman, 2020), possibly including
affective and emotional qualities (Beattie et al., 2020; Ling & Björling, 2020).
The second point requires just as much explanation: the embedding of communicative
AI within technical infrastructures. This highlights the need to distinguish between the
interface between communicative AI and its users and the underlying structures behind
it. Kate Crawford and Vladan Joler (2018) have illustrated this through a rich visualization
using Alexa as an example. This artificial companion operates—like Google Assistant, Microsoft Cortana, or Apple Siri—through the infrastructure of the internet, without which it would not be functional. Similarly, social bots rely on the infrastructure of platforms
such as Twitter, which pre-structure communication to an extent that bots can replicate
human actors comparatively easily (Gehl & Bakardjieva, 2016). In this respect, we can say
that many systems of communicative AI constitute media within media as they rely on
existing “infrastructural platforms” (van Dijck et al., 2018, p. 11; van Dijck et al., 2019,

p. 9) as media. The materiality of communicative AI concerns not only the primary sys-
tem of automated communication, but also the materiality of the infrastructures in which
this is embedded: the technical networks and server farms (Constantinides et al., 2018,
p. 381). These infrastructures secure necessary data storage and processing, while simulta-
neously drawing communicative AI into the structures of surveillance capitalism and data
colonialism (Turow, 2021). Furthermore, these infrastructures are associated with extensive
“planetary costs” (Crawford, 2021) (i.e., the socio-ecological consequences of, among other
things, the extraordinarily high levels of energy consumption required for the operation
of digital infrastructures; Brevini, 2021; Kannengießer, 2020). If we see communicative AI
in the realm of societal communication, it is important to also consider those less visible
elements as infrastructures.
The third point—entanglement with human practice—highlights the importance
of understanding that the processing of these systems cannot be understood beyond
human practice. The notion of entanglement, which has gained currency through Sci-
ence and Technology Studies, derives in particular from the work of Karen Barad (2007),
who developed it as an analytical concept. As Susan Scott and Wanda Orlikowski (2014,
pp. 881–882) argue, “the entanglement of matter and meaning is produced in practice
within specific phenomena.” They go on to explain that this means questioning the notion
of predefined categories such as subject and object or human and nonhuman and empha-
sizing that such differences are constituted in the process of their relationalization:

To be entangled is not simply to be intertwined with another, as in the joining of separate entities, but to lack an independent, discrete, self-contained existence. Existence is not an individual affair. Individuals do not pre-exist their interactions; rather, individuals emerge through and as part of their entangled intra-relating. (Barad, 2007, p. ix)

Understood in this way, the concept of entanglement is associated with a certain approach
to the materiality of automated media, which strongly emphasizes their processual and rela-
tional constitution—especially in distinction to concepts seen in actor-network theory that
emphasize the permanence of society in matter (Latour, 1991). More specific to the object
of communicative AI, this means focusing on the coming together of matter and meaning
in human practice. Materiality then becomes graspable in a double form of the techni-
cal on the one hand and the corporeality of practice on the other (Pfadenhauer & Grenz,
2017). This understanding of practices overcomes the reductionism found in some forms
of practice theory (Reckwitz, 2002) by taking relationality—human beings’ inevitable relat-
edness—into account. Yet, a focus on entanglement with human practice is also important
if one wants to capture the technologies of communicative AI in more detail. For example,
models for speech recognition are built on the basis of large datasets obtained via human
practice online.
To sum up: If we define communicative AI in the ways outlined above, this is not sim-
ply a buzzword representing the current hype around ChatGPT, LaMDA, and similar sys-
tems, but can act as a sensitizing concept in Herbert Blumer’s sense of the term. Following
Blumer, the establishment of a sensitizing concept offers “a general sense of reference
and guidance in approaching empirical instances” (Blumer, 1954, p. 7). In this sense,

communicative AI draws our attention to a certain “family resemblance” (Wittgenstein,
1971) that various examples of automated communication systems share, opening up a
guiding orientation that illustrates what their breadth and depth mean and why
a societal perspective matters. The challenge of any sensitizing concept is, however, that it
cannot be empirically operationalized without difficulty. This is the point at which “defini-
tive concepts” (Blumer, 1954, p. 7) gain importance; that is, concepts that can be empirically
operationalized. But, how exactly should we proceed with this if we want to grasp auto-
mated communication as a part of societal communication? Certainly, different answers to
this question are possible; the answer we want to propose is that of a figurational approach.

Agency Between the Individual and the Machine: Taking a Figurational Approach
A figurational approach6 seems to us particularly suitable for researching communicative
AI from a societal perspective for two reasons. First, this approach does not create a con-
tradiction between the individual and society. Society is not understood as a discrete object
that surrounds humans, but as something that emerges from humans—all the while, the
individual is produced by society. In this sense, speaking of the individual and of society
is a matter of perspective, or, as Norbert Elias put it, “the concept ‘individual’ refers to
interdependent people in the singular, and the concept ‘society’ refers to interdependent
people in the plural” (1978, p. 125). Second, a figurational approach is particularly focused
on questions of change and transformation. One of its dominant questions relates to how
societies are structurally transformed, and the role of technologies in this process is an
important subject of study (Elias, 1995). The main “conceptual tool” (Elias, 1978, p. 130)
used to address such nexuses is that of the figuration, which we can understand as a bridging
concept directed toward the necessary definitive conceptualizations.
Speaking of figurations and refigurations is quite common, especially in social science
research on artificial intelligence. In her analysis of “human-machine reconfigurations,”
Lucy Suchman (2012, p. 227), for example, takes up arguments by Donna Haraway (1997,
p. 11; emphasis added) and characterizes technologies as a “materialized figuration that
bring together assemblages of stuff and meaning into more or less stable arrangements.”
Sarah Kember (1998) also considers communication technologies as constituting parts of
figurations, while Hubert Knoblauch and Martina Löw (2017) address them in terms of the
refiguration of spaces.
Put simply, figurations are “processes of interweaving” (Elias, 1978, p. 130) of interdependent
people such as a group, community, or organization. From a media and communications
perspective, we can consider any figuration as a communicative one: It is through
communicative practices that meanings are ascribed to figurations, and these
practices are increasingly mediated. Family members, for example, may be spatially sep-
arated but connected through multimodal communication via (cell) phone, email, and
exchanges on digital platforms, which maintains the everyday-world dynamics of famil-
ial relationships. Organizations are also held together as figurations using databases,

6. On process sociology, which is strongly influenced by Norbert Elias, cf. Baur & Ernst, 2011; Dunne, 2009;
Morrow, 2009.

communication through an intranet, and printed flyers and other media for internal and
external communication. Individuals are involved in these figurations through the roles
and positions they occupy in their respective actor constellations. Conducting media and
communications research using a figurational approach makes it possible to connect the
perspectives of the individual and society and reflect on how the practices of their construc-
tion are closely entangled with media.
There are three basic characteristics that constitute a figuration and can be connected
to established “definitive concepts” in media and communication studies (cf. Couldry &
Hepp, 2016, pp. 66–67; Hepp, 2020b, pp. 100–113; Hepp & Hasebrink, 2018). The structural
basis of every figuration is, first, an actor constellation, a network of actors who are inter-
connected in a certain balance of power and through interrelated communicative practices.
Second, every figuration is characterized by a frame of relevance that guides the practices of
its actors and their mutual orientation toward each other. This frame of relevance defines
the action orientation of the actors involved and the specificity of the figuration. Third,
figurations are constantly rearticulated in communicative practices that are interwoven with
other social practices. These practices are typically entangled with a media ensemble.
A special theoretical feature of a figurational approach is that it opens up a way of
thinking about the agency of communicative AI at all levels of the social scale, combining
the perspective of the individual and their interactional relations with an understanding of
figurations such as organizations and communities as collective actors. Figurations, of which
communicative AI becomes a part, can then be understood as hybrid figurations. Hybrid here does
not mean a dissolution of the boundary between human and machine, as can be seen in the
imaginary of the cyborg (Berscheid et al., 2019; Britton & Semaan, 2017; Haraway, 1991);
hybrid here refers to a unique “supra-individual” (Schimank, 2010, p. 327) agency of the
overall figuration that develops in the coming together of human and machine.
This can be illustrated by the example of a newsroom where journalists use automated
communication systems such as Quill from Narrative Science, ChatGPT from OpenAI, or
Luminous from Aleph Alpha. A newsroom using these systems for “automating the news”
(Diakopoulos, 2019) has a different agency than newsrooms without them. Research in
media and communication studies is then concerned with the question of what is special
about this hybrid agency and how it differs from other forms of supra-individual agency. It
is also concerned with related challenges; for example, questions about authorship and the
accountability of journalistic communications (Lewis et al., 2019; Montal & Reich, 2017),
as well as the emergence of coping strategies for journalists who might begin to feel
disconnected from technological developments (Min & Fink, 2021).
Such a figurational approach avoids dissolving the conceptual boundary between
the agency of humans and machines, as has been proposed in some of the research on
human-machine interaction (Banks & de Graaf, 2020). Our argument for maintaining such
a boundary is an empirical one, since precisely this kind of separation is deeply embed-
ded in everyday life. In the everyday practice of people, the question of what counts as
machine-automated and what counts as human-authentic seemingly persists (Pfadenhauer
& Grenz, 2017, p. 226). Similar demarcations between human and machine are made in law:
The legal classification of automated systems focuses on the simple solution of attributing
system behavior to natural or legal persons who developed, programmed, or implemented a

system (Schulz & Schmees, 2022). Putting it metaphorically, there are no formal or accepted
methods of serving a subpoena to a communicative AI.
A figurational approach makes it more straightforward to understand the contradictory
positions in the discussion about the agency of humans and machines within automated
communication as different analytical perspectives. Constructivist-based theories
such as social phenomenology, communicative constructivism, or systems theory, on
the one hand, emphasize that machines are to be described as an objectification of human
action and that the agency attributed to them is a projection of human actors or a personi-
fication of their expectations (Esposito, 2022; Knoblauch, 2020; Lindemann, 2016; Muhle,
2016; Pfadenhauer, 2015). Approaches from new materialism such as actor-network theory,
or extended action theory, on the other hand, emphasize the idea of distributed or shared
agency between humans and machines (Bellacasa, 2017; Gunkel, 2018a; Hanson, 2009).
Both approaches to theorizing can be understood as different perspectives on hybrid
figurations: From the internal perspective of a hybrid figuration—that is: from the point
of view of the people involved in it—it is a matter of projections and personalized expecta-
tions in regard to communicative AI. To take up once more the example of the newsroom,
journalists do indeed project agency onto systems of automated communication when they
speak of a certain system “writing a story,” and they “forget” in such phrases that this hap-
pens on the basis of scripts and data that they themselves have entered into the system
(Caswell & Dörr, 2018). From an external perspective (i.e., from an overall view of hybrid
figurations by an observer), it is also true that this newsroom as an organizational unit
possesses a different kind of shared agency than one without: certain content can be
published more quickly, and systems of automation free up space for other kinds of journalistic
work, such as follow-up research and in-depth articles (Young & Hermida, 2015).
A figurational approach not only allows us to see communicative AI in terms of broader
societal nexuses, moving beyond the narrow focus on the interaction between individual
humans and machines; it also allows us to connect to existing concepts in media and
communication studies, despite the phenomenon's current novelty. A view of communication is then
developed that keeps its distance from technically induced transfer models and focuses on
meaningful, social construction of which automated communication is a part.
At this point it is worth referring to James Carey (2009), who warned against reducing
communication to defining it as the transfer of information (and its effects). Carey points
out that communication should be understood as a form of symbolic reality construction
(p. 19). We can see parallels when we argue for directing our attention to the various hybrid
figurations of automated communication and their role in communicatively constructing
society. However, Carey also pointed out that as scientists we are always confronted with
the question of whether the concepts we use to grasp reality (still) correspond to how this
reality is actually constructed in communication (p. 24). This also concerns the concept
of communication itself, which seems to be questioned when machines automate it. But,
from our point of view, this represents a misplaced response to the challenge, falling back
as it does into simple transfer understandings of communication by treating the
machine as an actor more or less identical to the human. We now need to face this challenge
to the concept of communication (e.g., Fortunati & Edwards, 2020; Guzman & Lewis, 2020;
Hepp & Loosen, 2023; Natale, 2021a). But, we also need the readiness for more complex
answers than the simple equation of humans and machines.

Conclusion: Resisting the Hype Through Research


We began this article by looking at the hype around ChatGPT and other automated com-
munication systems that are now entering the public consciousness and generating fertile
academic discussion. For all the diversity of the “post-discipline” (Waisbord, 2019) and in
light of earlier approaches (Gunkel, 2012), it is fair to say that our engagement with
automation represents a third stage of research into digital communications. However
tempting it may be to bask in the nascent hype and the academy's enthusiasm for the
discussion, as researchers it is always important to approach new phenomena reflexively. We agree that caution should be
applied in the sense that, from the point of view of media and communication studies, it is
important to not simply adopt the discourse from the tech companies verbatim. From our
point of view, however, we should take note of the hype insofar as it may stand for a funda-
mental change in the ways we all communicate: Its automation is becoming an increasingly
widespread phenomenon, and this will invariably be accompanied by changes in the ways
we construct our realities.
This means, however, that the automation of communication is to be approached differ-
ently than from the limiting perspective of the interaction between individual humans and
machines. We see the concept of communicative AI as a useful sensitizing tool, one that
directs our attention to a phenomenon requiring deeper reflection. While this
increases scientific attention to automated communication, we are at the same time engaged
in a discussion about what an appropriate approach might be if we are to accomplish a soci-
etal perspective on automated communication. Against this background, we have proposed
a figurational approach as one such possibility.
Equipped in this way, our task is to resist the hype on the surface by critically examining
the growth of automated communication. This means that we accept the need to question
existing concepts in the field of media and communications—agency, communication, and
media—and ask whether or to what extent they are still useful in a world where commu-
nication is increasingly automated by machines. At the same time, however, we should be
careful not to lose sight of the boundaries that are still part of ongoing processes of societal
communication. Specifically, this concerns an equation of human and machine agency or
the insinuation that systems of automated communication construct meaning for themselves.
Such thought experiments can certainly sensitize us to the opportunities and risks that the
increasing use of automated communication may bring and are helpful in this respect. But,
it remains an empirical question which constructions of automated communication we
actually observe as part of the everyday. From our point of view, then,
it is a matter of investigating the construction of reality that changes with the automation
of communication and then, on this basis, working toward the further development of the
scientific, conceptual apparatus. A possible point of departure, in our view, is the figura-
tional approach.

Author Biographies
Andreas Hepp, Dr, is Professor of Media and Communications and Head of ZeMKI,
Centre for Media, Communication and Information Research, University of Bremen,
Germany. His research focuses on mediatization, datafication, automation of communica-
tion, communicative AI, pioneer communities, media use and appropriation.
https://orcid.org/0000-0001-7292-4147

Wiebke Loosen, Dr, is a Senior Journalism Researcher at the Leibniz Institute for Media
Research | Hans-Bredow-Institut (HBI) (Germany) as well as a professor at the University
of Hamburg. Her research focuses on the transformation of journalism within a changing
media environment, the journalism-audience relationship, forms of pioneering journalism
and the start-up culture in journalism, as well as the datafication and automation of com-
munication.
https://orcid.org/0000-0002-2211-2260

Stephan Dreyer, Dr, is Senior Researcher for Media Law and Media Governance at the
Leibniz Institute for Media Research | Hans-Bredow-Institut in Hamburg, Germany. He is
working on legal issues of automated communication, transparency as a regulatory resource
and rights-based approaches to child safety online.
https://orcid.org/0000-0002-9450-1193

Juliane Jarke, PhD, is Professor of Digital Societies at the Business Analytics and Data
Science Center (BANDAS-Center) and the Department of Sociology, University of Graz.
She works at the intersection of digital and feminist STS, data studies and participatory
design research.
https://orcid.org/0000-0001-8349-2298

Sigrid Kannengießer, Dr, is Professor of Communication Studies with a focus on Media
Sociology at the Institute for Communication Studies, University of Münster, Germany.
Her research focuses on digital technologies, infrastructures, AI and sustainability, critical
data practices, social movements, and gender media studies.
https://orcid.org/0000-0002-2342-9868

Christian Katzenbach, Dr, is Professor of Media and Communication at ZeMKI, University
of Bremen and an associated researcher at the Alexander von Humboldt Institute for
Internet and Society (HIIG). His research addresses the formation of platforms and their
governance, the discursive and political shaping of “Artificial Intelligence” (AI) and the
increasing automation of communication.
https://orcid.org/0000-0003-1897-2783

Rainer Malaka, Dr, is Professor of Digital Media and Managing Director of the Center for
Computing Technologies at the University of Bremen, Germany. His research focuses on
human-computer interaction and human-centric artificial intelligence. Application areas of
his research range from entertainment computing to robotics and medicine.
https://orcid.org/0000-0001-6463-4828

Michaela Pfadenhauer, Dr, is Professor of Sociology and Vice Dean for Research, Infra-
structure and Sustainability at the Faculty of Social Sciences, University of Vienna. Her
research focus is on sociology of knowledge and culture, social robotics and artificial com-
panionship, mediatization, and the communicative construction of reality.
https://orcid.org/0000-0002-6082-0364

Cornelius Puschmann, Dr, is Professor of Media and Communication at ZeMKI,
University of Bremen, and an affiliate researcher at the Leibniz Institute for Media
Research | Hans-Bredow-Institut in Hamburg, Germany. His interests include digital media
usage, online aggression, the role of algorithms for the selection of media content, and
automated content analysis.
https://orcid.org/0000-0002-3189-0662

Wolfgang Schulz, Dr, is Professor of Media Law, Public Law and Legal Theory at Ham-
burg University and Director of the Leibniz Institute for Media Research | Hans-Bredow-
Institut (HBI) and also of the Humboldt Institute for Internet and Society. His recent
research revolves around information governance, law and technology and freedom of
expression.
https://orcid.org/0000-0002-9999-5508

References
Alsos, O. A., & Svanæs, D. (2011). Designing for the secondary user experience. In P. Campos,
N. Graham, J. Jorge, N. Nunes, P. Palanque, & M. Winckler (Eds.), 13th IFIP TC 13
International Conference, Lisbon, Portugal, September 5–9, 2011, Proceedings, Part IV
(pp. 84–91). Springer. https://doi.org/10.1007/978-3-642-23768-3_7
Andrejevic, M. (2014). The big data divide. International Journal of Communication, 8(1),
1673–1689. https://espace.library.uq.edu.au/view/UQ:348586/UQ348586_OA.pdf
Andrejevic, M. (2020). Automated media. Routledge. https://doi.org/10.4324/9780429242595
Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H. (2020). In AI we trust? Percep-
tions about automated decision-making by artificial intelligence. AI & SOCIETY, 35(3),
611–623. https://doi.org/10.1007/s00146-019-00931-w
Baecker, D. (1997). Reintroducing communication into cybernetics. Systemica, 11, 11–29.
https://ssrn.com/abstract=2200830
Banks, J., & de Graaf, M. M. (2020). Toward an agent-agnostic transmission model: Syn-
thesizing anthropocentric and technocentric paradigms in communication. Human-
Machine Communication, 1, 19–36. https://doi.org/10.30658/hmc.1.2

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of
matter and meaning. Duke University Press. https://doi.org/10.1215/9780822388128-002
Bareis, J., & Katzenbach, C. (2021). Talking AI into being: The narratives and imaginaries
of national AI strategies and their performative politics. Science, Technology, & Human
Values, 47(5), 855–881. https://doi.org/10.1177/01622439211030007
Baur, N., & Ernst, S. (2011). Towards a process-oriented methodology: Modern social
science research methods and Norbert Elias’s figurational sociology. The Sociological
Review, 59(1), 117–139. https://doi.org/10.1111/j.1467-954X.2011.01981.x
Baym, N. K. (1994). Communication, interpretation, and relationships: A study of a computer-
mediated fan community. University of Illinois, Urbana-Champaign.
Beattie, A., Edwards, A. P., & Edwards, C. (2020). A bot and a smile: Interpersonal impres-
sions of chatbots and humans using emoji in computer-mediated communication.
Communication Studies, 71(3), 409–427. https://doi.org/10.1080/10510974.2020.1725082
Bellacasa, M. P. de la. (2017). Matters of care: Speculative ethics in more than human worlds.
University of Minnesota Press.
Beniger, J. R. (1986). The control revolution. Technological and economic origins of the infor-
mation society. Harvard University Press.
Beniger, J. R. (1990). Who are the most important theorists of communication? Communi-
cation Research, 17(5), 698–715. https://doi.org/10.1177/009365090017005006
Berscheid, A. L., Horwath, I., & Riegraf, B. (2019). Einleitung: Cyborgs revisited: Zur
Verbindung von Geschlecht, Technologien und Maschinen. Feministische Studien,
37(2), 241–249. https://doi.org/10.1515/fs-2019-0025
Bibby, K. S., Margulies, F., Rijnsdorp, J. E., Withers, R. M. J., & Makarov, I. M. (1975).
Man’s role in control systems. IFAC Proceedings Volumes, 8(1), 664–683. https://doi.
org/10.1016/S1474-6670(17)67612-2
Blumer, H. (1954). What is wrong with social theory? American Sociological Review, 19,
3–10. https://doi.org/10.2307/2088165
Brevini, B. (2021). Is AI good for the planet? Polity.
Britton, L. M., & Semaan, B. (2017). Manifesting the cyborg through techno-body
modification. Proceedings of the 2017 CHI conference on human factors in comput-
ing systems. Association for Computing Machinery, USA, 2499–2510. https://doi.
org/10.1145/3025453.3025629
Burns, R., Hawkins, B., Hoffmann, A. L., Iliadis, A., & Thatcher, J. (2019). Transdisciplinary
approaches to critical data studies. Proceedings of the Association for Information Sci-
ence and Technology, 55(1), 657–660. https://doi.org/10.1002/pra2.2018.14505501074
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning
algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
Carey, J. W. (2009). Communication as culture: Essays on media and society. Routledge.
https://doi.org/10.4324/9780203928912
Carlson, M. (2015). The robotic reporter. Digital Journalism, 3(3), 416–431. https://doi.org/
10.1080/21670811.2014.976412
Carlson, M. (2018). Automating judgment? Algorithmic judgment, news knowledge,
and journalistic professionalism. New Media & Society, 20(5), 1755–1772. https://doi.
org/10.1177/1461444817706684

Castells, M. (2000). The rise of the network society (2nd ed.). Blackwell Publishing.
Caswell, D., & Dörr, K. (2018). Automated journalism 2.0: Event-driven narratives. Journal-
ism Practice, 12(4), 477–496. https://doi.org/10.1080/17512786.2017.1320773
Cathcart, R., & Gumpert, G. (1985). The person-computer interaction: A unique source. In
B. D. Ruben (Ed.), Information and behavior (Vol. 1, pp. 113–124). Transaction Books.
Chadwick, A. (2017). The hybrid media system: Politics and power (2nd ed.). Oxford Univer-
sity Press. https://doi.org/10.1093/oso/9780190696726.001.0001
Chesebro, J. W., & Bonsall, D. G. (Eds.). (1989). Computer-mediated communication: Human
relationships in a computerized world. University of Alabama Press.
Constantinides, P., Henfridsson, O., & Parker, G. G. (2018). Introduction—Platforms and
infrastructures in the digital age. Information Systems Research, 29(2), 381–400. https://
doi.org/10.1287/isre.2018.0794
Couldry, N., & Hepp, A. (2016). The mediated construction of reality. Polity Press.
Couldry, N., & Mejías, U. A. (2019). The costs of connection. How data is colonizing
human life and appropriating it for capitalism. Stanford University Press. https://doi.
org/10.1515/9781503609754
Crawford, K. (2021). The atlas of AI. Yale University Press. https://doi.org/10.12987/
9780300252392
Crawford, K., & Joler, V. (2018). Anatomy of an AI System. The Amazon Echo as an ana-
tomical map of human labor, data and planetary resources. Virtual Creativity, 9(1–2).
117–120. https://doi.org/10.1386/vcr_00008_7
Crawford, K., Miltner, K. M., & Gray, M. W. (2014). Critiquing big data: Politics, ethics,
epistemology. International Journal of Communications—Special Section Introduction,
8, 1663–1672. https://ijoc.org/index.php/ijoc/article/view/2167/1164
Cresci, S. (2020). A decade of social bot detection. Communications of the ACM, 63(10),
72–83. https://doi.org/10.1145/3409116
Dalton, C., & Thatcher, J. (2014). What does a critical data studies look like, and why do we
care? Seven points for a critical approach to ‘big data.’ Digital Geographies. Retrieved on
March 30, 2023, from https://web.archive.org/web/20200928063532/https://www.society
andspace.org/articles/what-does-a-critical-data-studies-look-like-and-why-do-we-care
Dehnert, M., & Mongeau, P. A. (2022). Persuasion in the age of artificial intelligence (AI):
Theories and complications of AI-based persuasion. Human Communication Research,
48(3), 386–403. https://doi.org/10.1093/hcr/hqac006
Dencik, L., & Kaun, A. (2020). Datafication and the welfare state. Global Perspectives, 1(1).
https://doi.org/10.1525/gp.2020.12912
Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Har-
vard University Press. https://doi.org/10.4159/9780674239302
Dunne, S. (2009). The politics of figurational sociology. Sociological Review, 57(1), 28–57.
https://doi.org/10.1111/j.1467-954X.2008.01803.x
Edwards, C., Edwards, A., Spence, P. R., & Shelton, A. K. (2014). Is that a bot running the
social media feed? Testing the differences in perceptions of communication quality for
a human agent and a bot agent on Twitter. Computers in Human Behavior, 33, 372–376.
https://doi.org/10.1016/j.chb.2013.08.013
Elias, N. (1978). What is sociology? Hutchinson.

Elias, N. (1995). Technization and civilization. Theory, Culture & Society, 12(3), 7–42.
https://doi.org/10.1177/026327695012003002
Esposito, E. (2017). Artificial communication? The production of contingency by algo-
rithms. Zeitschrift für Soziologie, 46(4), 249–265. https://doi.org/10.1515/zfsoz-2017-
1014
Esposito, E. (2022). Artificial communication: How algorithms produce social intelligence.
The MIT Press. https://doi.org/10.7551/mitpress/14189.001.0001
Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots.
Communications of the ACM, 59(7), 96–104. https://doi.org/10.1145/2818717
Flensburg, S., & Lomborg, S. (2021). Datafication research: Mapping the field for a future
agenda. New Media & Society, 0(0). https://doi.org/10.1177/14614448211046616
Fortunati, L., & Edwards, A. P. (2020). Opening space for theoretical, methodological, and
empirical issues in human-machine communication. Human-Machine Communica-
tion, 1, 7–18. https://doi.org/10.30658/hmc.1.1
Fortunati, L., & Edwards, A. P. (2021). Moving ahead with human-machine communica-
tion. Human-Machine Communication, 2, 7–28. https://doi.org/10.30658/hmc.2.1
Fürst, S. (2017). Öffentlichkeitsresonanz als Nachrichtenfaktor—Zum Wandel der Nach-
richtenselektion. MedienJournal, 37(2), 4–15. https://doi.org/10.24989/medienjournal.
v37i2.122
Gambino, A., Fox, J., & Ratan, R. (2020). Building a stronger CASA: Extending the Com-
puters Are Social Actors Paradigm. Human-Machine Communication, 1, 71–86. https://
doi.org/10.30658/hmc.1.5
Gehl, R. W., & Bakardjieva, M. (2016). Socialbots and their friends. In R. W. Gehl & M.
Bakardjieva (Eds.), Socialbots and their friends: Digital media and the automation of
sociality (pp. 1–16). Taylor & Francis. https://doi.org/10.4324/9781315637228
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A.
Foot (Eds.), Media technologies. Essays on communication, materiality, and society (pp.
167–194). The MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009
Gitelman, L. (Ed.). (2013). ‘Raw data’ is an oxymoron. The MIT Press. https://doi.org/10.7551/
mitpress/9302.001.0001
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical
and political challenges in the automation of platform governance. Big Data & Society,
7(1). https://doi.org/10.1177/2053951719897945
Graefe, A., & Bohlken, N. (2020). Automated journalism: A meta-analysis of readers’ per-
ceptions of human-written in comparison to automated news. Media and Communica-
tion, 8(3), 50–59. https://doi.org/10.17645/mac.v8i3.3019
Gunkel, D. J. (2012). Communication and artificial intelligence: Opportunities and chal-
lenges for the 21st century. communication +1, 1(1), 1–25. https://doi.org/10.7275/
R5QJ7F7R
Gunkel, D. J. (2018a). Ars ex machina: Rethinking responsibility in the age of creative
machines. In A. L. Guzman (Ed.), Human-machine communication: Rethinking commu-
nication, technology, and ourselves (pp. 221–236). Peter Lang. https://doi.org/10.3726/
b14399
Gunkel, D. J. (2018b). Robot rights. The MIT Press. https://doi.org/10.7551/mitpress/
11444.001.0001
Hepp, Loosen, Dreyer, Jarke, Kannengießer, Katzenbach, Malaka, Pfadenhauer, Puschmann, and Schulz 59

Guzman, A. L. (2015). Imagining the voice in the machine: The ontology of digital social agents.
PhD dissertation. University of Illinois at Chicago. https://hdl.handle.net/10027/19842
Guzman, A. L. (2018). Introduction: What is human-machine-communication anyway? In
A. L. Guzman (Ed.), Human-machine communication (pp. 1–28). Peter Lang. https://
doi.org/10.3726/b14399
Guzman, A. L. (2020). Ontological boundaries between humans and computers and the
implications for human-machine communication. Human-Machine Communication,
1, 37–54. https://doi.org/10.30658/hmc.1.3
Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A
Human-Machine Communication research agenda. New Media & Society, 22(1), 70–86.
https://doi.org/10.1177/1461444819858691
Guzman, A. L., McEwen, R., & Jones, S. (Eds.). (2023). The SAGE Handbook of Human-
Machine Communication. Sage.
Hanson, F. A. (2009). Beyond the skin bag: On the moral responsibility of extended agen-
cies. Ethics and information technology, 11(1), 91–99. https://doi.org/10.1007/s10676-
009-9184-z
Haraway, D. (1991). A Cyborg manifesto. Science, technology, and socialist-feminism in
the late twentieth century. In D. Haraway (Ed.), Simians, cyborgs and women: The rein-
vention of nature (pp. 149–181). Routledge.
Haraway, D. (1997). Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™.
Feminism and Technoscience (1st ed.). Routledge. https://doi.org/10.1023/a:1004349615837
Hepp, A. (2020a). Artificial companions, social bots and work bots: Communicative robots
as research objects of media and communication studies. Media, Culture & Society,
42(7–8), 1410–1426. https://doi.org/10.1177/0163443720916412
Hepp, A. (2020b). Deep mediatization. Routledge. https://doi.org/10.4324/9781351064903
Hepp, A., & Hasebrink, U. (2018). Researching transforming communications in times of
deep mediatization: A figurational approach. In A. Hepp, A. Breiter, & U. Hasebrink
(Eds.), Communicative figurations: Transforming communications in times of deep medi-
atization (pp. 51–80). London: Palgrave Macmillan. https://doi.org/10.1007/978-3-319-
65584-0_2
Hepp, A., Jarke, J., & Kramp, L. (Eds.). (2022). New perspectives in critical data studies. Pal-
grave Macmillan. https://doi.org/10.1007/978-3-030-96180-0
Hepp, A., & Krotz, F. (Eds.). (2014). Mediatized worlds: Culture and society in a media age.
Palgrave Macmillan. https://doi.org/10.1057/9781137300355
Hepp, A., & Loosen, W. (2023). The interdisciplinarity of HMC: Rethinking communica-
tion, media and agency. In A. L. Guzman, R. McEwen, & S. Jones (Eds.), The SAGE
Handbook of Human-Machine Communication (Preprint). Sage.
Heuer, H., Jarke, J., & Breiter, A. (2021). Machine learning in tutorials—Universal applica-
bility, underinformed application, and other misconceptions. Big Data & Society, 8(1).
https://doi.org/10.1177/20539517211017593
Hjarvard, S. (2013). The mediatization of culture and society. Routledge. https://doi.
org/10.4324/9780203155363
Hofmann, J., Katzenbach, C., & Gollatz, K. (2017). Between coordination and regulation:
Finding the governance in internet governance. New Media & Society, 19(9), 1406–
1423. https://doi.org/10.1177/1461444816639975
60 Human-Machine Communication

Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society,
3(2). https://doi.org/10.1177/2053951716674238
Jones, S. G. (Ed.). (1998). Cybersociety 2.0. Revisiting computer-mediated communication
and technology. Sage. https://doi.org/10.4135/9781452243689
Kannengießer, S. (2020). Acting on media for sustainability. In H. C. Stephansen & E. Treré
(Eds.), Citizen media and practice: Currents, connections, challenges (pp. 176–188).
Routledge. https://doi.org/10.4324/9781351247375-13
Keller, T. R., & Klinger, U. (2019). Social bots in election campaigns: Theoretical, empirical,
and methodological implications. Political Communication, 36(1), 171–189. https://doi.
org/10.1080/10584609.2018.1526238
Kember, S. (1998). Virtual anxiety: Photography, new technologies and subjectivity. Man-
chester University Press.
Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures and their
consequences. Sage. https://doi.org/10.4135/9781473909472
Knoblauch, H. (2020). The communicative construction of reality. Routledge. https://doi.
org/10.4324/9780429431227
Knoblauch, H., & Löw, M. (2017). On the spatial re-figuration of the social world. Sociolog-
ica, 11(2), 1–27. https://doi.org/10.2383/88197
Latour, B. (1991). Technology is society made durable. In J. Law (Ed.), A sociology of mon-
sters. Essays on power, technology and domination (pp. 103–131). Routledge. https://doi.
org/10.1111/j.1467-954x.1990.tb03350.x
Lazer, D., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger,
M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein,
C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Sci-
ence, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
Lee, J.-E. R., & Nass, C. I. (2010). Trust in computers: The Computers-Are-Social-Actors
(CASA) paradigm and trustworthiness perception in human-computer communica-
tion. In D. Latusek & A. Gerbasi (Eds.), Trust and technology in a ubiquitous modern
environment (pp. 1–15). IGI Global. https://doi.org/10.4018/978-1-61520-901-9.ch001
Lewis, S. C., Sanders, A. K., & Carmody, C. (2019). Libel by algorithm? Automated journal-
ism and the threat of legal liability. Journalism & Mass Communication Quarterly, 96(1),
60–81. https://doi.org/10.1177/1077699018755983
Li, Z., & Li, C. (2014). Twitter as a social actor: How consumers evaluate brands differently
on Twitter based on relationship norms. Computers in Human Behavior, 39, 187–196.
https://doi.org/10.1016/j.chb.2014.07.016
Licklider, J. C. R., & Taylor, R. W. (1968). The computer as a communication device. Science
and Technology, 76(2), 21–31.
Lindemann, G. (2016). Social interaction with robots: Three questions. AI & society, 31(4),
573–575. https://doi.org/10.1007/s00146-015-0633-4
Ling, H. Y., & Björling, E. A. (2020). Sharing stress with a robot: What would a robot say?
Human-Machine Communication, 1, 133–159. https://doi.org/10.30658/hmc.1.8
Livingstone, S. M. (2009). On the mediation of everything: ICA presidential address 2008.
Journal of Communication, 59(1), 1–18. https://doi.org/10.1111/j.1460-2466.2008.01401.x
Lokot, T., & Diakopoulos, N. (2016). News bots. Digital Journalism, 4(6), 682–699. https://
doi.org/10.1080/21670811.2015.1081822
Loosen, W., & Solbach, P. (2020). Künstliche Intelligenz im Journalismus? Was bedeutet
Automatisierung für journalistisches Arbeiten? In T. Köhler (Ed.), Fake News, Framing,
Fact-Checking: Nachrichten im digitalen Zeitalter (pp. 177–203). Transcript. https://doi.
org/10.1515/9783839450253-010
Lundby, K. (Ed.). (2014). Mediatization of communication. de Gruyter. https://doi.
org/10.1515/9783110272215
Martini, F., Samula, P., Keller, T. R., & Klinger, U. (2021). Bot, or not? Comparing three
methods for detecting social bots in five political discourses. Big Data & Society, 8(2).
https://doi.org/10.1177/20539517211033566
Mattelart, A. (2003). The information society: An introduction. Sage.
McQuail, D., & Deuze, M. (2020). McQuail’s media and mass communication theory (7th
ed.). Sage.
Min, S. J., & Fink, K. (2021). Keeping up with the technologies: Distressed journalistic labor
in the pursuit of “shiny” technologies. Journalism Studies, 22(14), 1987–2004. https://
doi.org/10.1080/1461670x.2021.1979425
Montal, T., & Reich, Z. (2017). I, robot. You, journalist. Who is the author? Digital Journal-
ism, 5(7), 829–849. https://doi.org/10.1080/21670811.2016.1209083
Morris, M., & Ogan, C. (1996). The internet as mass medium. Journal of Communication,
46(1), 39–50. https://doi.org/10.1111/j.1460-2466.1996.tb01460.x
Morrow, R. A. (2009). Norbert Elias and figurational sociology: The comeback of the
century. Contemporary Sociology: A Journal of Reviews, 38(3), 215–219. https://doi.
org/10.1177/009430610903800301
Muhle, F. (2016). “Are you human?” Plädoyer für eine kommunikationstheoretische Fund-
ierung interpretativer Forschung an den Grenzen des Sozialen. Forum Qualitative
Sozialforschung/Forum: Qualitative Social Research 17(1), 33. https://doi.org/10.17169/
fqs-17.1.2489
Muhle, F. (2022). Socialbots at the Gates. Plädoyer für eine holistische Perspektive auf
automatisierte Akteure in der Umwelt des Journalismus. Medien & Kommunikation-
swissenschaft, 70(1-2), 40–59. https://doi.org/10.5771/1615-634X-2022-1-2-40
Mühlhoff, R. (2019). Human-aided artificial intelligence: Or, how to run large computa-
tions in human brains? Toward a media sociology of machine learning. New Media &
Society, 22(10), 1868–1884. https://doi.org/10.1177/1461444819885334
Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic
media production and consumption. Communication Theory, 24(3), 340–360. https://
doi.org/10.1111/comt.12039
Nass, C., Takayama, L., & Brave, S. (2006). Socializing consistency: From technical homo-
geneity to human epitome. In P. Zhang & D. F. Galletta (Eds.), Human-computer inter-
action and management information systems: Foundations (pp. 373–391). Routledge.
https://doi.org/10.4324/9781315703619
Natale, S. (2019). If software is narrative: Joseph Weizenbaum, artificial intelligence
and the biographies of ELIZA. New Media & Society, 21(3), 712–728. https://doi.
org/10.1177/1461444818804980
Natale, S. (2021a). Communicating through or communicating with: Approaching artifi-
cial intelligence from a communication and media studies perspective. Communication
Theory, 31(4), 905–910. https://doi.org/10.1093/ct/qtaa022
Natale, S. (2021b). Deceitful media. Oxford University Press. https://doi.org/10.1093/
oso/9780190080365.001.0001
Pfadenhauer, M. (2015). The contemporary appeal of artificial companions: Social robots
as vehicles to cultural worlds of experience. The Information Society, 31(3), 284–293.
https://doi.org/10.1080/01972243.2015.1020213
Pfadenhauer, M., & Grenz, T. (2017). Von Objekten zu Objektivierung. Soziale Welt, 68(2–
3), 225–242. https://doi.org/10.5771/0038-6073-2017-2-3-225
Pfadenhauer, M., & Lehmann, T. (2022). Affects after AI: Sociological perspectives on arti-
ficial companionship. In A. Elliott (Ed.), The Routledge Social Science Handbook of AI
(pp. 91–106). Routledge. https://doi.org/10.4324/9780429198533-7
Reckwitz, A. (2002). Toward a theory of social practices. A development in cultur-
alist theorizing. European Journal of Social Theory, 5(2), 245–265. https://doi.
org/10.1177/13684310222225432
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television,
and new media like real people and places. Center for the Study of Language and Infor-
mation: Cambridge University Press.
Reeves, J. (2016). Automatic for the people: The automation of communicative labor. Com-
munication and Critical/Cultural Studies, 13(2), 150–165. https://doi.org/10.1080/14791
420.2015.1108450
Richards, R., Spence, P., & Edwards, C. (2022). Human-machine communication scholar-
ship trends: An examination of research from 2011 to 2021 in communication journals.
Human-Machine Communication, 4, 45–65. https://doi.org/10.30658/hmc.4.3
Schäfer, M. S., & Wessler, H. (2020). Öffentliche Kommunikation in Zeiten künstlicher
Intelligenz. Publizistik, 65(3), 307–331. https://doi.org/10.1007/s11616-020-00592-6
Schimank, U. (2010). Handeln und Strukturen. Einführung in die akteurstheoretische Sozio-
logie (4th ed.). Juventa.
Schulz, W., & Schmees, J. (2022). Möglichkeiten und Grenzen der Künstlichen Intelligenz
in der Rechtsanwendung. In I. Augsberg & G. F. Schuppert (Eds.), Wissen und Recht
(pp. 561–593). Nomos. https://doi.org/10.5771/9783748921479-561
Scott, S. V., & Orlikowski, W. J. (2014). Entanglements in practice: Performing anonym-
ity through social media. Management Information Systems Quarterly, 38(3), 873–893.
https://doi.org/10.25300/misq/2014/38.3.11
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–
424. https://doi.org/10.1017/s0140525x00005756
Star, S. L., & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: Design and
access for large information spaces. Information Systems Research, 7(1), 111–134.
https://doi.org/10.1287/isre.7.1.111
Steels, L., & Kaplan, F. (2000). AIBO’s first words: The social learning of language and mean-
ing. Evolution of communication, 4(1), 3–32. https://doi.org/10.1075/eoc.4.1.03ste
Stenbom, A., Wiggberg, M., & Norlund, T. (2021). Exploring communicative AI: Reflec-
tions from a Swedish newsroom. Digital Journalism, 1–19. https://doi.org/10.1080/2167
0811.2021.2007781
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine commu-
nication. Cambridge University Press.
Suchman, L. A. (2012). Configuration. In C. Lury & N. Wakeford (Eds.), Inventive
methods: The happening of the social (pp. 48–60). Taylor and Francis. https://doi.
org/10.4324/9780203854921
Thurman, N., Lewis, S. C., & Kunert, J. (2019). Algorithms, automation, and news. Digital
Journalism, 7(8), 980–992. https://doi.org/10.1080/21670811.2019.1685395
Turner, F. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Net-
work, and the rise of digital utopianism. University of Chicago Press.
Turow, J. (2021). The voice catchers: How marketers listen in to exploit your feelings, your
privacy, and your wallet. Yale University Press. https://doi.org/10.12987/9780300258738
van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific par-
adigm and ideology. Surveillance and Society, 12(2), 197–208. https://doi.org/10.24908/
ss.v12i2.4776
van Dijck, J., Nieborg, D., & Poell, T. (2019). Reframing platform power. Internet Policy
Review, 8(2). https://doi.org/10.14763/2019.2.1414
van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective
world. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001
Varol, O., Davis, C. A., Menczer, F., & Flammini, A. (2018). Feature engineering for social
bot detection. In G. Dong & H. Liu (Eds.), Feature Engineering for Machine Learn-
ing and Data Analytics (pp. 311–334). CRC Press. https://doi.org/10.1201/978131518
1080-12
Veale, T., & Cook, M. (2018). Twitterbots: Making machines that make meaning. The MIT
Press. https://doi.org/10.7551/mitpress/10859.001.0001
Volcic, Z., & Andrejevic, M. (2023). Automated media and commercial populism. Cultural
Studies, 37(1), 149–167. https://doi.org/10.1080/09502386.2022.2042581
Waisbord, S. (2019). Communication: A post-discipline. John Wiley & Sons.
Wang, W. (2017). Smartphones as social actors? Social dispositional factors in assess-
ing anthropomorphism. Computers in Human Behavior, 68, 334–344. https://doi.
org/10.1016/j.chb.2016.11.022
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language
communication between man and machine. Communications of the ACM, 9(1), 36–45.
https://doi.org/10.1145/365153.365168
Wellman, B., Salaff, J. W., Dimitrova, D. S., Garton, L., Gulia, M., & Haythornthwaite, C.
(1996). Computer networks as social networks: Collaborative work, telework, and vir-
tual community. Annual Review of Sociology, 22(1), 213–238. https://doi.org/10.1146/
annurev.soc.22.1.213
Wittgenstein, L. (1971). Philosophische Untersuchungen (1st ed.). Suhrkamp Verlag.
Young, M. L., & Hermida, A. (2015). From Mr. and Mrs. Outlier to central tendencies:
Computational journalism and crime reporting at the Los Angeles Times. Digital Jour-
nalism, 3(3), 381–397. https://doi.org/10.1080/21670811.2014.976409
Zarsky, T. (2015). The trouble with algorithmic decisions: An analytic road map to examine
efficiency and fairness in automated and opaque decision making. Science, Technology,
& Human Values, 41(1), 118–132. https://doi.org/10.1177/0162243915605575
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new
frontier of power. Profile Books.