8 The Behavioral Code: Recommender Systems and the Technical Code of Behaviorism
M. de Jong (*)
Philosophy Department, University of Groningen, Groningen, Netherlands
e-mail: [email protected]
R. Prey
Department of Media Studies and Journalism, University of Groningen,
Groningen, Netherlands
e-mail: [email protected]
8.1 Introduction
Many early Internet users were affected by a paralysing condition known to psy-
chologists as “overchoice” as a seemingly infinite supply of information, products,
and services greeted them in their first forays online. Recommenders soon came to
the rescue, bringing a degree of order to the chaos of digital information. The fore-
most technique used in early recommenders was collaborative filtering. Collaborative
filtering is a widely used method to filter information by grouping together users
deemed to have similar tastes or preferences. Developed as a research project at
Xerox PARC in 1992, “Tapestry” is widely considered to be the first algorithmic
recommender to use the term “collaborative filtering” (Goldberg et al., 1992). By
the mid-1990s, a team from the University of Minnesota employed the same method
for a Usenet news recommender called GroupLens. This team later created
MovieLens, which asked users to rate movies on a five-star scale and then recom-
mended movies seen by other users who had provided similar ratings. Soon after,
MIT’s Media Lab released “Ringo” (later Firefly), which used collaborative filter-
ing to automate music recommendations (Riedl & Konstan, 2002). As e-commerce
websites began to proliferate in the 1990s, recommenders fulfilled a need by busi-
ness to help customers sort through products and make choices. In the March 1997
issue of the Communications of the ACM, the guest editors marvelled at how “a
flurry of commercial ventures have recently introduced recommender systems for
products ranging from Web URLs to music, videos, and books” (Resnick & Varian,
1997, p. 58).
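The mechanics behind these early systems can be illustrated with a deliberately minimal sketch of user-based collaborative filtering in the spirit of MovieLens' five-star ratings: users are represented as vectors of ratings, similarity between users is computed, and unseen items liked by the most similar users are recommended. The data and function names below are our own invention for illustration, not the code of any actual system.

```python
import math

# Toy rating matrix: user -> {item: rating on a five-star scale},
# in the spirit of MovieLens-style explicit feedback.
ratings = {
    "alice": {"Brazil": 5, "Alien": 4, "Amelie": 1},
    "bob":   {"Brazil": 5, "Alien": 5, "Heat": 4},
    "carol": {"Amelie": 5, "Heat": 1, "Alien": 2},
}

def cosine_similarity(a, b):
    """Similarity between two users over the items both have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = math.sqrt(sum(a[i] ** 2 for i in common))
    norm_b = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def recommend(user, k=1):
    """Recommend items rated highly by the k users most similar to `user`."""
    others = [(cosine_similarity(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    others.sort(reverse=True)
    seen = set(ratings[user])
    scores = {}
    for sim, o in others[:k]:
        for item, r in ratings[o].items():
            if item not in seen:
                scores[item] = scores.get(item, 0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['Heat']: liked by Alice's nearest neighbour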
Today, we encounter recommenders seemingly everywhere online: They filter
books and other products on Amazon, television shows and films on Netflix, and
news and social media posts on Facebook. Indeed, much of the online content that
we consume is delivered to us through algorithmic recommenders. While collabora-
tive filtering remains the archetypal recommender, a wide array of different filtering
systems, such as content-based and context-based recommenders, have been devel-
oped over the years. Most recommenders now use an ensemble or hybrid approach,
combining two or more filtering methods. While the information that drives these
filtering systems can include explicit signals, such as product ratings, there has been
a marked trend in recent years towards favoring implicit feedback, such as clicks or
other trackable user interactions (Ekstrand & Willemsen, 2016).
The majority of research on recommenders (cf. Adomavicius & Tuzhilin, 2005;
Bobadilla et al., 2013; Lops et al., 2011; Pazzani & Billsus, 2007; Ricci et al., 2011)
is focused on explaining or comparing the strengths and weaknesses of different
approaches, or offering suggestions on how to improve recommendations (cf.
Burke, 2007; Linden et al., 2003; Salter & Antonopoulos, 2006; Tkalčič et al.,
2010). More recently, humanities and social science scholars have begun to shine a
critical light on recommenders to explore how they produce, reproduce and manage
consumer desire (Drott, 2018) and individual subjects (Prey, 2018). Other critical
research is concerned with privacy issues that surround recommenders (Perik et al.,
2004) and with how such systems exercise influence over the culture we consume
(Beer, 2009, 2013; Morris, 2015; Seaver, 2012). For example, the specific tech-
niques Netflix utilizes to understand its users’ tastes and to recommend content
could impact the type of television programs and films that get produced (Hallinan
& Striphas, 2016). Importantly, scholars have pointed out how algorithms and the
recommenders they power are always sociotechnical ensembles that extend and
magnify “the all-too-human biases, worldviews, and blind spots of the people who
designed, built, and maintained them” (Seaver, 2021). One such bias or worldview,
we argue, is the assumption that individual preferences can best be defined as
behavioral dispositions and predicted through past action and implicit behavior.
Before we develop this argument, we will briefly review Feenberg’s concept of
“formal bias” and how such a bias emerges out of specific “technical codes.”
Though seemingly neutral, technical codes offer a material affirmation of – and thus a bias
towards – the ruling social values. This does not mean that they lose their claim to
being rational, as for Feenberg rationality is relative to social context. He makes this
clear in his example of “rational” machine design in the era of child labour. As
Feenberg writes in this volume:
[W]hen the socially accepted definition of the labor force included children, features of the
technology such as the placement of controls were designed for small workers. This was
technically rational under the given conditions although today we might consider the whole
business of child labor a scandal.
Since the algorithms that drive recommenders are largely kept secret (they are,
after all, what determines the success of a digital platform), another approach is
needed to study them. We chose to study recommenders from the outside and fill
in the gaps by reading texts from within the field of recommender systems. We per-
formed a close reading of educational textbooks (e.g., Aggarwal, 2016; Falk, 2019)
and papers from the annual ACM Conference on Recommender Systems (e.g.,
Ekstrand & Willemsen, 2016; Wan & McAuley, 2018): “the premier international
forum for the presentation of new research results, systems and techniques in the
broad field of recommenders” (RecSys, n.d.). In what follows we provide an over-
view of the core techniques and data primarily utilized in the development of con-
temporary recommender systems.
Demographic markers for identity, such as age and gender, have long been used by
media and market research as a proxy for preference. Algorithmic recommenders
instigated a break with this method by claiming to circumvent the need for proxies
altogether. “Treat customers as individuals, not demographics,” two pioneers of col-
laborative filtering advised their readers: “Let their preferences, not stereotypes,
dictate which products and messages you present to them” (Riedl & Konstan, 2002).
Indeed, contemporary recommenders could be described as “post-demographic
machines” (Rogers, 2009). As the vice-president of Netflix’s Original Series
remarked: “We found that demographics are not a good indicator of what people
like to watch” (Lynch, 2018). Rather than eliminating proxies for preference altogether, however, recommenders have replaced demographics with real-time behavioral data.
More specifically, users are typically reduced to (1) measurable, (2) implicit, (3)
past, and, increasingly, (4) contextualized behavior.
Recommenders only allow for input that can be processed by algorithms.
Consequently, they work with a certain type of behavior: the kind that can be digi-
tally observed, or “datafied” (Fisher & Mehozay, 2019, p. 10). In other words, their
input is measurable behavior. What cannot be directly observed by recommenders,
however, are inner states such as thoughts, feelings, motives, and preferences.
Recommenders thus cannot directly observe the very thing that they are after. One
way around this difficulty is to ask users directly to communicate
their inner states. Indeed, coaxing users to provide explicit feedback, such as ratings
and reviews, to express their preferences used to be a common approach.
Over time, however, a different method began to be given primacy: tracking
implicit behavior, like clicks or other trackable user interactions (Ekstrand &
Willemsen, 2016; Seaver, 2019, p. 430). This shift resulted from the discovery that
explicit user data – such as ratings – poses a threat to prediction. Explicit ratings
vary significantly depending on time and setting: a user might give a movie three
stars one day and five the next. In addition, explicit data is rela-
tively scarce as it requires users to take time to express preferences. On the other
hand, implicit behavioral data, or interaction data such as clicking and scrolling, is
demonstrably good at predicting future user behavior, and is also readily available
and thus easier to collect (Ekstrand & Willemsen, 2016). Consequently, explicit rat-
ings have been widely replaced with implicit behavioral data (Seaver, 2019).
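To illustrate how implicit signals are typically operationalized, the following sketch converts a logged stream of interactions into per-item confidence scores that could then feed a collaborative filter. The event types and weights are assumptions made for the example; production systems tune such weightings empirically.

```python
# Illustrative weights for implicit signals; both the event types and the
# numbers are assumptions for this sketch, not any platform's actual values.
EVENT_WEIGHTS = {
    "click": 1.0,
    "play_completed": 3.0,
    "added_to_playlist": 4.0,
    "skip": -2.0,  # treated as negative evidence
}

def implicit_confidence(event_log):
    """Aggregate a user's logged interactions into per-item scores.

    `event_log` is a list of (item, event_type) pairs harvested from
    tracking -- the 'measurable, implicit, past' behavior described in
    the text. No ratings or stated preferences are consulted.
    """
    scores = {}
    for item, event in event_log:
        scores[item] = scores.get(item, 0.0) + EVENT_WEIGHTS.get(event, 0.0)
    return scores

log = [("song_a", "click"), ("song_a", "play_completed"),
       ("song_b", "click"), ("song_b", "skip")]
print(implicit_confidence(log))  # {'song_a': 4.0, 'song_b': -1.0}
```

Unlike a rating, such a score requires no effort from the user; it accumulates silently with every interaction.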
Explicit data based on users’ subjective interpretation is now often perceived as
a hindrance to actually understanding the user. In a recent paper, Nick Seaver (2021,
p. 15) describes a conversation he had with “Tom,” a product manager for “audience
understanding” at a music recommendation company anonymized as “Whisper”:
‘We don’t interview users’, he told me. Instead, audience understanding depended on the
same aggregated listening data that powered Whisper’s recommendations. ‘We think we
have real science here’, Tom said.
Netflix developers likewise explain that their platform tracks activity such as “the
time elapsed since viewing, the point of abandonment (mid-program vs. beginning
or end), whether different titles have been viewed since, and the devices used”
(Gomez-Uribe & Hunt, 2015, p. 4). Listening or viewing logs are considered a more
legitimate and reliable form of knowledge that better represents how users “actually
behaved” (Seaver, 2021, p. 15), rather than what they might claim to have consumed
if asked explicitly.
Third, it follows that recommenders work with past behavior. As an influential
early book in the field announced: “In order to know what someone wants, what you
really need to know is what they’ve wanted” (Riedl & Konstan, 2002, para. 13).
This is typical for algorithmic systems: Existing data is used to predict some future
state of affairs. For instance, the music you listened to last week will be used as
input by the recommender to make predictions about your future listening behavior.
With collaborative filtering the basic premise is that “people who agreed in their
subjective evaluation of past [items] are likely to agree again in the future” (Resnick
et al., 1994, p. 176).
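This premise can be made concrete with a toy item-to-item co-occurrence model: the only input is what users have consumed in the past, and the prediction is simply whatever co-occurred with a seed item most often. The histories below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Each user's consumption history: past behavior is the only input.
histories = [
    {"Blade Runner", "Alien", "Solaris"},
    {"Blade Runner", "Alien", "Heat"},
    {"Alien", "Solaris"},
]

# Count how often pairs of items co-occur in the same history.
co_occurrence = Counter()
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_occurrence[(a, b)] += 1

def predict_next(seed):
    """Rank items by how often they co-occurred with `seed` in the past:
    people who agreed before are assumed to agree again."""
    scores = Counter()
    for (a, b), n in co_occurrence.items():
        if seed == a:
            scores[b] += n
        elif seed == b:
            scores[a] += n
    return [item for item, _ in scores.most_common()]

print(predict_next("Alien"))  # past agreement drives the future prediction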
However, this premise assumes that taste is static, with many users complaining
about being “haunted” by their past preferences. In reaction, the field of recom-
mender systems research has recently taken a “contextual turn” (Pagano et al.,
2016). As one paper explains:
[...] a context-driven recommender system, ‘personalizes’ to users’ context states. In this
way, it introduces a disassociation between users and their historical behavior, giving users
room to develop beyond their past needs and preferences. Instead, users receive recommen-
dations based on what is going on around them in the moment (situation) and on what they
are trying to accomplish (intent). (ibid., p. 249)
The input that recommenders work with is thus composed of measurable, implicit,
and past behavioral data (hereafter “behavioral data”), in combination with data
about the context (hereafter “contextual data”) in which the behavior takes place. In
the next section we build from here to identify the underlying technical code of
recommender systems.
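A context-driven system of this kind can be sketched, under simplifying assumptions of our own, as scoring items only from interactions logged in a matching context state rather than from the user's entire history. The context features chosen here (time of day, device) are illustrative.

```python
from collections import Counter

# Logged plays tagged with context: (item, time_of_day, device).
# The context features are assumptions made for this sketch.
plays = [
    ("nocturne_op9", "night", "phone"),
    ("nocturne_op9", "night", "phone"),
    ("gymnopedie_1", "night", "speaker"),
    ("punk_anthem", "commute", "phone"),
]

def contextual_recommend(time_of_day):
    """Score items only from interactions logged in a matching context,
    'personalizing' to the user's context state rather than to the
    user's entire historical behavior."""
    scores = Counter(item for item, t, _ in plays if t == time_of_day)
    return [item for item, _ in scores.most_common()]

print(contextual_recommend("night"))    # classical before bed
print(contextual_recommend("commute"))  # a different selection in the morning
```

The disassociation the quote describes is visible in the code: the commute recommendation is untouched by what the user does at night.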
Technologies offer a material affirmation of, and a bias towards, particular values
and worldviews. More specifically, Feenberg argues that modern technology is
biased by contingent social factors specific to capitalism (Kirkpatrick, 2020).
Developers of recommenders, like technologists more generally, do not typically
aim at specific social benefits or prejudicial outcomes. Instead, they focus on the
efficiency gains expected to result from the technology being developed. Over time, the
technologies as well as the systems of thought that underlie them become seemingly
uncontroversial. As Bernhard Rieder (2020, p. 253) puts it when describing the his-
tory of how observed market behaviour came to stand for consumer preference in
economics, “[w]hat users do is what they want and what they want is what they shall
receive. How could it be otherwise?” It is precisely the apparent incontestability of
this “technical code” that renders recommenders “formally biased.”
Since John B. Watson coined the term in 1913, behaviorism has grown into a highly
influential school of thought spanning multiple scientific fields. For B. F. Skinner
(1904–1990), perhaps the most well-known and influential behaviorist, to know a
person means to know “what he does, has done, or will do” in certain contexts
(Skinner, 1974, p. 176). According to Skinner, “[a] self or personality is at best a
repertoire of behavior imparted by an organized set of contingencies” (1974, p. 149).
Contingencies refer to the relationship between three things: events that occur
immediately before a behavior (antecedents), behavioral responses, and conse-
quences that take place immediately after the response. Certain behavior can be
“reinforced” (e.g., Skinner, 1974, p. 42) when its consequences are positive, or
weakened if the consequences are negative. Thus, the self, for behaviorists, is “at
best” a set of likely behaviors under certain circumstances. As a result, the ingredi-
ents necessary to know someone are their overt behavior and the environment in
which this takes place.
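To make the three-term contingency concrete, the following sketch (our own illustration, not a model Skinner himself specified) records antecedent–behavior–consequence triples and estimates response strength from past reinforcement frequency. It anticipates Skinner's slippery-surface example, quoted below.

```python
from collections import defaultdict

# Three-term contingencies: (antecedent, behavior, consequence).
# "reinforced" strengthens the behavior; "punished" weakens it.
observations = [
    ("slippery_floor", "walk_fast", "punished"),
    ("slippery_floor", "walk_slow", "reinforced"),
    ("slippery_floor", "walk_slow", "reinforced"),
    ("dry_floor", "walk_fast", "reinforced"),
]

# Tally the reinforcement history per (antecedent, behavior) pair.
history = defaultdict(lambda: {"reinforced": 0, "punished": 0})
for antecedent, behavior, consequence in observations:
    history[(antecedent, behavior)][consequence] += 1

def response_strength(antecedent, behavior):
    """Estimate behavior probability from past reinforcement frequency.
    Only overt behavior and environment enter; no inner states appear."""
    h = history[(antecedent, behavior)]
    total = h["reinforced"] + h["punished"]
    return h["reinforced"] / total if total else 0.0

print(response_strength("slippery_floor", "walk_slow"))  # 1.0
print(response_strength("slippery_floor", "walk_fast"))  # 0.0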
The emphasis on overt behavior does not mean that behaviorists deny the exis-
tence of inner states such as feelings, thoughts, and preferences. Instead, feelings
and thoughts are reduced to bodily states and processes. In other words, inner states
are seen as a type of behavior, just as overt actions are.1 However, behaviorists do
object to assigning inner states causal power and, as such, explanatory power.
Skinner, for example, argued that inner states are by-products. The following pas-
sage from his book About Behaviorism (1974) sheds more light on this position:
When a person has been subjected to mildly punishing consequences in walking on a slip-
pery surface, he may walk in a manner we describe as cautious. It is then easy to say that he
walks with caution or that he shows caution. There is no harm in this until we begin to say
that he walks carefully because of his caution (p. 161).
What Skinner objects to, then, is the role of inner states as the subject of scientific
study. “The objection to inner states,” Skinner wrote, “is not that they do not exist,
but that they are not relevant in a functional analysis” (Skinner, 1953, p. 35). Instead,
he argues, we should shift our focus from the inside to the outside – to overt behav-
ior and the environment in which people act. For Skinner and his followers, only
behavior provides publicly observable data upon which to construct rigorous and
scientifically-sound models of how and why people do what they do (Moore, 1999).
What is more, behaviorists believe that to understand behavior means to be able to
both predict and control behavior. In other words, it is about being able to anticipate
what people will do and being able to steer this behavior through reinforcement and
punishment.
1. As such, behaviorists deny the Cartesian mind-body dualism.
Contemporary recommenders posit the internet user in much the same way as
Skinner and other behaviorists posited their test subjects. Behaviorists broadly work
with two variables: overt behavior and environmental factors. Regarding the latter,
recall the turn towards “context-aware” recommenders. Like behaviorists, such sys-
tems emphasize the importance of environmental factors in understanding a per-
son’s behavior. If you listen to classical music almost every night before you go to
bed, Spotify will very likely recommend playlists of this genre to you around
this time.
With regard to the first variable, the primary input of recommenders consists of
behavioral data. The idea that past behavior lends itself to predicting the probabil-
ity of future behavior endorses the behaviorist doctrine. As Skinner wrote: “The
probability of behavior depends upon the kind and frequency of reinforcement in
similar situations in the past” (1974, p. 69). In addition, by focusing on overt and
implicit behavior, recommenders meet the behaviorist “rule” of shifting one’s atten-
tion from inner states to overt behavior. Recommenders focus their attention on
what can be “objectively” and consistently measured. While recommenders used to
collect explicit data, as pointed out above, subjective interpretations of inner
states are now largely dismissed due to their inconsistency and scar-
city. For example, in recommending music, Spotify is not that interested in how
users self-identify as music fans, or even in demographic markers that traditionally
acted as a proxy for music preferences. Instead, a “taste profile” – a dynamic record
of one’s musical identity – is constructed for each user. This profile is generated
primarily through implicit behavioral feedback that is generated every time you
search for an artist, listen to a track, add songs to a playlist, or skip a song.
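Such a taste profile can be sketched as a running vector over musical features, nudged by each implicit signal. The feature dimensions and event weights here are hypothetical; Spotify's actual representation is not public.

```python
def update_profile(profile, track_genres, event):
    """Nudge a taste profile toward (or away from) a track's genres.

    The profile is a dict over hypothetical genre dimensions; events are
    the implicit signals named in the text: searching, listening,
    playlisting, skipping.
    """
    weight = {"search": 0.5, "listen": 1.0,
              "playlist_add": 2.0, "skip": -1.0}[event]
    for genre in track_genres:
        profile[genre] = profile.get(genre, 0.0) + weight
    return profile

profile = {}
profile = update_profile(profile, ["classical"], "listen")
profile = update_profile(profile, ["classical"], "playlist_add")
profile = update_profile(profile, ["punk"], "skip")
print(profile)  # {'classical': 3.0, 'punk': -1.0}: a dynamic record of taste
```

Nothing in the profile records why a track was skipped; only the overt act itself leaves a trace.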
Combining behavioral data and context, recommenders aim to understand the
user by identifying patterns of behavior. In Fisher and Mehozay’s (2019, p. 10)
formulation of the “algorithmic episteme”: “To know someone does not mean to
analytically and empirically understand the reasons for her behavior, but simply to
be able to recognize patterns of behavior.” This appears to follow the behaviorist
doctrine – that to know someone is to know what someone has done, is doing, and
will do in the future.
2. Skinner preferred to avoid mental concepts, but the underlying idea of (analytical or logical)
behaviorism is that a mental state or condition is the idea of a behavioral disposition or family of
behavioral tendencies (Graham, 2019). This means that a behaviorist can in principle continue to
use mental concepts, but they would refer to a certain behavioral disposition rather than inner states.
3. Even though developers and behaviorists work with different motivations – developers work
according to a commercial incentive while behaviorists are motivated by a certain ideal of “real”
science – they eventually both aim for the prediction and control of human behavior. These goals
have proven to be greatly compatible; the founder of behaviorism, John B. Watson, joined an
advertising agency after he left academia and became highly successful in that field (Baars, 1986;
Waldrop, 2001). In addition, Skinner’s analysis has been called the psychological equivalent of
wage-labor capitalism (Baars, 1986), as the prediction and control of human behavior in order to
increase productivity has been a central focus of managerial practices; from “scientific manage-
ment” to “nudge management” more recently (Ebert & Freibichler, 2017).
Between approximately 1920 and the mid-1950s (e.g., Baars, 1986; Chung &
Hyland, 2012; Miller, 2003; Reisberg, 2016), the majority of psychologists in the
United States were behaviorists. By the mid-1950s, however, the popularity of
behaviorism went into fast decline as it was critiqued from several angles. In psy-
chology, behaviorism was largely obliterated by the “cognitive revolution” (e.g.,
Miller, 2003; Reisberg, 2016; Waldrop, 2001). Psychologists grew convinced that a
subject’s behavior was guided by how the subject understood or interpreted a situa-
tion – not by the objective situation itself. By focusing merely on the objective situ-
ation, we misunderstand the motivations people have for their actions and
subsequently make mistakes in predicting future behavior. In other words, it became
clear that psychologists needed to study mental states after all.
From a philosophical perspective, critical theorists contrasted Skinner’s “science
of behaviour” with what they viewed as the much richer Marxist concept of
“praxis.”4 Praxis, according to one critic of Skinner, “refers to man as an active
agent in the world, a world that he constructs and transforms, on which he confers
meaning, and to which he responds” (Mishler, 1976, p. 25). In other words, from the
perspective of praxis the human subject is an interpretive being engaged in mean-
ingful action. One does not simply run, for example, but rather one runs because of
a reason – a reason that emerges out of the subjective interpretation of an event. The
meaning of the behavior is what defines the behavior, which could be as varied as
running from something that scares you or going for a run to clear your head.
4. Apart from the praxis critique, there are roughly three main reasons for the rejection of behavior-
ism within philosophy (Graham, 2019). First of all, many people were, and still are, sceptical about
behaviorism’s commitment to the thesis that behavior can be understood without referring to men-
tal processes. A second reason for the dismissal of behaviorism is the existence of “qualia” (e.g.,
Place, 2000): behaviorism cannot account for the qualitatively distinctive experience underlying
overt behavior. Yet another critique came from Noam Chomsky (1967 [1959]). According to
Chomsky, behaviorism cannot account for the fact that language does not seem to be learned
through explicit teaching. He pointed out that linguistic performance outstripped individual rein-
forcement histories.
Behaviorists, however, reject a focus on meaning not because they deny subjec-
tive or inner states, but because they see them as functionally useless for predicting
rates of response. There is an analogous focus on “rating prediction accuracy” in
recommender system design. Both can be seen as expressions of what Habermas
(1970, p. 105–7) called “technocratic consciousness”:
It is a singular achievement of this (technocratic) ideology to detach society’s self-
understanding from the frame of reference of communicative action and from the concepts
of symbolic interaction and replace it with a scientific mode. . . . This is paralleled subjec-
tively by the disappearance of the difference between purposive-rational action and interac-
tion from the consciousness not only of the sciences of man, but of men themselves. The
concealment of this difference proves the ideological power of the technocratic
consciousness.
For critical theorists, behaviorism represented the further colonization of the life-
world by positivist scientism. As one trenchant critique put it, behaviorism circum-
vents the necessity of interpretation “by defining a single scalar index as the
‘behaviour’ of interest, and by coding many different types of behaviour in this one
category while ignoring other features of the behaviour” (Mishler, 1976, p. 32). It
conveniently ignores why the human subject gently pushes the lever or smashes it.
“Instead of a science constructed so as to be appropriate to its phenomena of study,
the phenomena are transformed so as to be appropriate to a particular methodology”
(ibid., p. 33).5
The principal takeaway here is that the model of human action and motivation
becomes defined by the lens through which it is perceived. Like the example
earlier of the product manager at a music recommendation company who equated
“audience understanding” with aggregated listening data, behaviorism distinguished
itself from alternative methods of human understanding by claiming the mantle of
“real science.” In doing so, it defined the world in its image and allowed for certain
questions while ignoring others. Another vision of science – one that sees human
beings as meaning constructors and symbol users – would result in an alternative
definition of the world.
What made behaviorism especially dangerous was not that it did not work, but
rather that it pretended to be the only scientific approach to the study and under-
standing of humans. As Baars (1986, p. 51–52) put it: “Behaviorism was viewed as
the one right way to do psychological science; every alternative was unscientific.”
As we have shown, however, behaviorists actually worked with a very limited
understanding of the meaning and purpose of science and of human beings. While
behaviorists could lay claim to an undoubted objectivity in their observations, they
had to pay a very high price for it. They had rejected too many things: “[...] in hot
pursuit of scientism, psychology had lost psychology” (Baars, 1986, p. 69). In other
words, and to return to Feenberg, even though behaviorism might have been ratio-
nal, it was also formally biased. As Mishler (1976, p. 29) puts it: “More is at stake
than whether information about ‘inner states’ helps to ‘predict’ a discrete and
meaningless response. Rather, these states are central topics of interest in and of
themselves, as are their complex relationships to behaviour and the rules governing
the stability and change of these relationships.” Behaviorism could have been a way
of doing experimental psychology to be complemented by other forms of study that
focus on meaning and understanding. That way, behaviorists would at least have
recognized and respected the formal bias that was integrated into their program. Yet
this is exactly what they did not allow for.
5. Notice here that behaviorism is a presupposed framework rather than a scientific theory, meaning that it cannot be falsified by any experimental results (Baars, 1986).
Like behaviorism, recommenders get the job done. And as with behaviorism, this
does not mean that they are not formally biased. Recommenders also embody the
same impoverished view of what it means to be human. Interestingly, their develop-
ers show a similar attitude toward their method as behaviorists did. Recall “Whisper,”
the music recommendation company studied by Seaver (2021). This company
believed it had overcome the “challenging alterity” of its users by appealing to
“data,” which was taken to provide a putatively objective position beyond individual
perspectives (p. 14). Note the further similarity with behaviorism when the
employee says: “We think we have real science here” (ibid.).
While the developers of recommenders, unlike behaviorists, may not explicitly
claim that their account of humans is the way to view them, the materialization of
behaviorist assumptions in these omnipresent recommenders does create a formal
bias that reinforces a behaviorist understanding of humans. As such, they might
even cause users to see themselves through a behaviorist lens. After all, recommend-
ers are said to “personalize” content, which critics have argued “imbues the system
with the power to co-constitute users’ experience, identity and selfhood in a perfor-
mative sense” (Kant, 2020, p. 12). There is, however, another concern, namely that
“[i]t is the programmers themselves who are more likely to suffer these conse-
quences. It is the objectification of others that is dehumanizing, and this is integral
to the behaviourist approach” (Mishler, 1976, p. 34).
To summarize, recommenders embody a behavioral code and are as such biased
towards the beliefs and values that underlie behaviorism. This formal bias promotes
an impoverished view of what it means to be human – among users as well as devel-
opers. As such, the formal bias of recommenders should be of public concern.
8.7 Conclusion
Over a decade ago, Google’s former CEO Eric Schmidt pointed out how ubiquitous
recommendation was (Jenkins Jr., 2010). Today, on platforms like Netflix, “every-
thing is a recommendation”: Not only are the films personalized to fit viewing
behavior, but so is the cover art (Mullaney, 2015; Yu, 2019). At the same time, data
is drawn from an ever-widening and growing array of interactions. As Nick Seaver
(2019, p. 11) writes, “algorithmic recommendation has settled deep into the infra-
structure of online cultural life, where it has become practically unavoidable.”
If recommenders exert such a ubiquitous and powerful influence on our lives,
then – as Feenberg asks of technology in general – “why don’t we apply the same
democratic standards to it that we apply to other political institutions?” (Feenberg, 1999).
6. While the purpose of this chapter is not to explore solutions, there are several interesting propos-
als and projects underway. For example, academics and developers have called for and experi-
mented with more user-centric recommenders that allow users some degree of control over how
they are profiled. One example of user-centric design is gobo.social, a social media news aggrega-
tor designed by the MIT Media Lab. This tool offers sliders that users control in order to filter
information: The user can explore a range of political perspectives on a continuum from left to
right, or “the extent of seriousness, rudeness, gender, and other parameters” (Reviglio & Agosti,
2020, p. 6). In another example, Harambam et al. (2018) provide an interesting proposal to grant
users greater “voice” in our algorithmically-driven media ecosystem. The authors propose the
creation of algorithmic recommender personae to “allow people instead to demand from [recom-
menders] to behave in ways that align with their own specific... interests at each single moment”
(ibid. p. 4). It is also possible to involve users in the earliest stages of the design and development
of recommender algorithms. The benefits of participatory design are not only in creating more
user-friendly technologies, but also in making “explicit the critical, and inevitable, presence of
values in the system design process” (Suchman, 1993, p. viii). As Feenberg convincingly argues in
Questioning Technology, by widening opportunities to intervene, user participation in design
serves to limit “the operational autonomy of technical personnel” (Feenberg, 1999, p. 135) who are
socialized into the technical codes of the profession (ibid., p. 142).
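Mechanically, slider-based filtering of this sort can be sketched as user-set weights over interpretable post features. The feature names and values below are invented for illustration; this is not gobo.social's actual code.

```python
# Each post is scored on interpretable dimensions (values are invented).
posts = [
    {"id": 1, "seriousness": 0.9, "politics": -0.7},  # politics: -1 left .. +1 right
    {"id": 2, "seriousness": 0.2, "politics": 0.4},
    {"id": 3, "seriousness": 0.6, "politics": 0.0},
]

def rank(posts, sliders):
    """Order posts by user-set slider weights: the user's explicit settings,
    not platform-inferred implicit preferences, do the filtering."""
    def score(post):
        return sum(weight * post[feature] for feature, weight in sliders.items())
    return sorted(posts, key=score, reverse=True)

# A user who wants serious content from across the political spectrum.
user_sliders = {"seriousness": 1.0, "politics": 0.0}
print([p["id"] for p in rank(posts, user_sliders)])  # [1, 3, 2]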
References
Adomavicius, G., Mobasher, B., Ricci, F., & Tuzhilin, A. (2011). Context-aware recommender
systems. AI Magazine, 32(3), 67–80. https://doi.org/10.1609/aimag.v32i3.2364
Adomavicius, G., & Tuzhilin, A. (2005). Toward the next generation of recommender systems: A
survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and
Data Engineering, 17(6), 734–749.
Aggarwal, C. C. (2016). Recommender systems (Vol. 1). Springer International Publishing.
Baars, B. J. (1986). The cognitive revolution in psychology. Guilford Press.
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological
unconscious. New Media and Society, 11(6), 985–1002.
Beer, D. (2013). Popular culture and new media: The politics of circulation. Springer.
Bobadilla, J., Ortega, F., Hernando, A., & Gutiérrez, A. (2013). Recommender systems survey.
Knowledge-Based Systems, 46, 109–132.
Burke, R. (2007). Hybrid web recommender systems. In The adaptive web (pp. 377–408). Springer.
Cheney-Lippold, J. (2011). A new algorithmic identity: Soft biopolitics and the modulation of
control. Theory, Culture and Society, 28(6), 164–181.
Chomsky, N. (1967 [1959]). Review of B. F. Skinner’s verbal behavior. In L. A. Jakobovits &
M. S. Miron (Eds.), Readings in the psychology of language (pp. 142–143). Prentice-Hall.
Chung, M. C., & Hyland, M. (2012). Behaviourism, and the disappearance and reappearance of
organism (Person) variables. In M. C. Chung & M. Hyland (Eds.), History and philosophy of
psychology (pp. 144–169). Wiley-Blackwell.
Drott, E. (2018). Why the next song matters: Streaming, recommendation, scarcity. Twentieth-
Century Music, 15(3), 325–357.
Ebert, P., & Freibichler, W. (2017). Nudge management: Applying behavioural science to increase
knowledge worker productivity. Journal of Organization Design, 6(1), 1–6.
Ekstrand, M. D., & Willemsen, M. C. (2016, September). Behaviorism is not enough: Better rec-
ommendations through listening to users. In Proceedings of the 10th ACM conference on rec-
ommender systems (pp. 221–224).
Falk, K. (2019). Practical recommender systems. Manning Publications.
Feenberg, A. (1992). Subversive rationalization: Technology, power, and democracy. Inquiry,
35(3–4), 301–322.
Feenberg, A. (1999). Questioning technology. Routledge.
Feenberg, A. (2008). Critical theory of technology: An overview. In G. J. Leckie & J. E. Buschman
(Eds.), Information technology in librarianship: New critical approaches (pp. 31–46). Libraries
Unlimited.
Feenberg, A. (2017). Critical theory of technology and STS. Thesis Eleven, 138(1), 3–12.
Fisher, E., & Mehozay, Y. (2019). How algorithms see their audience: Media epistemes and the
changing conception of the individual. Media, Culture and Society, 41(8), 1176–1191.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot
(Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194).
The MIT Press.
Goldberg, D., Nichols, D., Oki, B. M., & Terry, D. (1992). Using collaborative filtering to weave
an information tapestry. Communications of the ACM, 35(12), 61–70.
Gomez-Uribe, C. A., & Hunt, N. (2015). The Netflix recommender system: Algorithms, busi-
ness value, and innovation. ACM Transactions on Management Information Systems (TMIS),
6(4), 1–19.
Graham, G. (2019, Spring). Behaviorism. In E. N. Zalta (Ed.), The Stanford encyclopedia of phi-
losophy. https://plato.stanford.edu/archives/fall2019/entries/behaviorism/
Habermas, J. (1970). Towards a rational society. Beacon Press.
Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix Prize and the production
of algorithmic culture. New Media and Society, 18(1), 117–137.
Harambam, J., Helberger, N., & van Hoboken, J. (2018). Democratizing algorithmic news rec-
ommenders: How to materialize voice in a technologically saturated media ecosystem.
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering
Sciences, 376(2133), 20180088.
Jenkins, H. W., Jr. (2010, August 14). Google and the search for the future. The Wall Street Journal. https://www.wsj.com/articles/SB10001424052748704901104575423294099527212
Kant, T. (2020). Making it personal: Algorithmic personalization, identity, and everyday life.
Oxford University Press.
Kirkpatrick, G. (2020). Technical politics. In G. Kirkpatrick (Ed.), Technical politics: Andrew
Feenberg’s critical theory of technology (pp. 70–95). Manchester University Press.
Linden, G., Smith, B., & York, J. (2003). Amazon.com recommendations: Item-to-item collabora-
tive filtering. IEEE Internet Computing, 7(1), 76–80.
Lops, P., De Gemmis, M., & Semeraro, G. (2011). Content-based recommender systems: State of
the art and trends. In Recommender systems handbook (pp. 73–105). Springer US.
Lu, Y., Dong, R., & Smyth, B. (2018, September). Why I like it: multi-task learning for recom-
mendation and explanation. In Proceedings of the 12th ACM Conference on Recommender
Systems (pp. 4–12).
Lynch, J. (2018, July). Netflix thrives by programming to ‘taste communities,’ not demographics. AdWeek. Retrieved 1 Nov 2020, from https://www.adweek.com/tv-video/netflix-thrives-by-programming-to-taste-communities-not-demographics/
Miller, G. A. (2003). The cognitive revolution: A historical perspective. Trends in Cognitive
Sciences, 7(3), 141–144.
Mishler, E. G. (1976). Skinnerism: Materialism minus the dialectic. Journal for the Theory of
Social Behaviour 6(1), 21–47.
Moore, J. (1999). The basic principles of behaviorism. In B. Thyer (Ed.), The philosophical legacy
of behaviorism (pp. 41–68). Springer.
Morris, J. W. (2015). Curation by code: Infomediaries and the data mining of taste. European
Journal of Cultural Studies, 18(4–5), 446–463.
Mullaney, T. (2015, March 23). Everything is a recommendation. MIT Technology Review. https://www.technologyreview.com/s/535936/everything-is-a-recommendation/
Pagano, R., Cremonesi, P., Larson, M., Hidasi, B., Tikk, D., Karatzoglou, A., & Quadrana, M. (2016,
September). The contextual turn: From context-aware to context-driven recommender systems.
In Proceedings of the 10th ACM conference on recommender systems (pp. 249–252).
Pazzani, M. J., & Billsus, D. (2007). Content-based recommendation systems. In The adaptive web
(pp. 325–341). Springer Berlin Heidelberg.
Perik, E., De Ruyter, B., Markopoulos, P., & Eggen, B. (2004). The sensitivities of user pro-
file information in music recommender systems. In Proceedings of private, security, trust
(pp. 137–141).
Place, U. T. (2000). The causal potency of qualia: Its nature and its source. Brain and Mind, 1(2),
183–192.
Prey, R. (2018). Nothing personal: Algorithmic individuation on music streaming platforms.
Media, Culture and Society, 40(7), 1086–1100.
RecSys. (n.d.). 15th ACM Conference on Recommender Systems. https://recsys.acm.org/recsys21/
Reisberg, D. (2016). The science of mind. In D. Reisberg (Ed.), Cognition: Exploring the science
of mind (6th ed., pp. 2–27). W. W. Norton & Company.
Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., & Riedl, J. (1994, October). GroupLens: An
open architecture for collaborative filtering of netnews. In Proceedings of the 1994 ACM con-
ference on Computer supported cooperative work (pp. 175–186).
Resnick, P., & Varian, H. R. (1997). Recommender systems. Communications of the ACM,
40(3), 56–58.
Reviglio, U., & Agosti, C. (2020). Thinking outside the black-box: The case for “algorithmic sov-
ereignty” in social media. Social Media + Society, 6(2), 2056305120915613.
Ricci, F., Rokach, L., & Shapira, B. (2011). Introduction to recommender systems handbook. In
Recommender systems handbook (pp. 1–35). Springer.
Rieder, B. (2020). Engines of order: A mechanology of algorithmic techniques. Amsterdam
University Press.
Riedl, J., & Konstan, J. (2002). Word of mouse: The marketing power of collaborative filtering.
Warner Books.
Rogers, R. (2009). Post-demographic machines. Walled Garden, 38(2009), 29–39.
Salter, J., & Antonopoulos, N. (2006). CinemaScreen recommender agent: Combining collabora-
tive and content-based filtering. IEEE Intelligent Systems, 21(1), 35–41.
Seaver, N. (2012). Algorithmic recommendations and synaptic functions. Limn, 1(2). https://escholarship.org/uc/item/7g48p7pb
Seaver, N. (2019). Captivating algorithms: Recommender systems as traps. Journal of Material
Culture, 24(4), 421–436.
Seaver, N. (2021). Seeing like an infrastructure: Avidity and difference in algorithmic recommen-
dation. Cultural Studies, 35(4–5), 771–791.
Skinner, B. F. (1953). Science and human behavior. Macmillan.
Skinner, B. F. (1974). About behaviorism. Knopf.
Suchman, L. (1993). Foreword. In D. Schuler & A. Namioka (Eds.), Participatory design: Principles and practices (pp. vii–x). CRC/Lawrence Erlbaum Associates.
Tkalčič, M., Burnik, U., & Košir, A. (2010). Using affective parameters in a content-based recom-
mender system for images. User Modeling and User-Adapted Interaction, 20(4), 279–311.
Waldrop, M. M. (2001). The dream machine: J.C.R. Licklider and the revolution that made com-
puting personal. Viking.
Wan, M., & McAuley, J. (2018, September). Item recommendation on monotonic behavior chains.
In Proceedings of the 12th ACM conference on recommender systems (pp. 86–94).
Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20(2), 158.
Yu, A. (2019). How Netflix uses AI, data science, and machine learning — From a product perspective. Becoming Human. https://becominghuman.ai/how-netflix-uses-ai-and-machine-learning-a087614630fe
Yoo, K. H., & Gretzel, U. (2011). Creating more credible and persuasive recommender systems:
The influence of source characteristics on recommender system evaluations. In Recommender
systems handbook (pp. 455–477). Springer.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for the future at the new frontier
of power. Profile Books.