
Chapter Title: Do Digital Humanists Need to Understand Algorithms?

Chapter Author(s): BENJAMIN M. SCHMIDT

Book Title: Debates in the Digital Humanities 2016


Book Editor(s): Matthew K. Gold and Lauren F. Klein
Published by: University of Minnesota Press

Stable URL: https://www.jstor.org/stable/10.5749/j.ctt1cn6thb.51

This content is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives
4.0 International License (CC BY-NC-ND 4.0). To view a copy of this license, visit
https://creativecommons.org/licenses/by-nc-nd/4.0/.

University of Minnesota Press is collaborating with JSTOR to digitize, preserve and extend
access to Debates in the Digital Humanities 2016

Part VI ][ Chapter 48

Do Digital Humanists Need to Understand Algorithms?

Benjamin M. Schmidt

Algorithms and Transforms


Ian Bogost recently published an essay1 arguing that fetishizing algorithms can
pollute our ability to accurately describe the world we live in. “Concepts like ‘algo-
rithm,’ ” he writes, “have become sloppy shorthands, slang terms for the act of mis-
taking multipart complex systems for simple, singular ones” (Bogost). Even critics of
computational culture succumb to the temptation to describe algorithms as though
they operate with a single incontrovertible beauty, he argues; this leaves them with
a “distorted, theological view of computational action” that ignores human agency.
As one of the few sites in the humanities where algorithms are created and
deployed, the digital humanities are ideally positioned to help humanists better
understand the operations of algorithms rather than blindly venerate or condemn
them. But too often, we deliberately occlude understanding and meaning in favor of
an instrumental approach that simply treats algorithms as tools whose efficacy can
be judged intuitively. The underlying complexity of computers makes some degree
of ignorance unavoidable. Past a certain point, humanists certainly do not need to
understand the algorithms that produce results they use; given the complexity of
modern software, it is unlikely that they could.
But although there are elements to software we can safely ignore, some basic
standards of understanding remain necessary to practicing humanities data analy-
sis as a scholarly activity and not merely a technical one. While some algorithms
are indeed byzantine procedures without much coherence or purpose, others are
laden with assumptions that we are perfectly well equipped to understand. What
an algorithm does is distinct from, and more important to understand than, how
it does it. I want to argue here that a fully realized field of humanities data analysis
can do better than to test the validity of algorithms from the outside; instead, it will
explore the implications of the assumptions underlying the processes described in
software. Put simply: digital humanists do not need to understand algorithms at all.
They do need, however, to understand the transformations that algorithms attempt
to bring about. If we do so, our practice will be more effective and more likely to
be truly original.
The core of this argument lies in a distinction between algorithms and trans-
formations. An algorithm is a set of precisely specifiable steps that produce an out-
put. “Algorithms” are central objects of study in computer science; the primary
intellectual questions about an algorithm involve the resources necessary for those
steps to run (particularly in terms of time and memory). “Transformations,” on the
other hand, are the reconfigurations that an algorithm might effect. The term is
less strongly linked to computer science: its strongest disciplinary ties are to math-
ematics (for example, in geometry, to describe the operations that can be taken on
a shape) and linguistics (where it forms the heart of Noam Chomsky’s theory of
“transformational grammar”).
Computationally, algorithms create transformations. Intellectually, however,
people design algorithms in order to automatically perform a given transformation.
That is to say: a transformation expresses a coherent goal that can be understood
independently of the algorithm that produces it. Perhaps the simplest example is the
transformation of sorting. “Sortedness” is a general property that any person can
understand independently of the operations that produce it. The uses that one can
make of alphabetical sorting in humanities research — such as producing a con-
cordance to a text or arranging an index of names—are independent of the partic-
ular algorithm used to sort. There are, in fact, a multitude of particular algorithms
that enable computers to sort a list. Certain canonical sorting algorithms, such as
quicksort, are fundamental to the pedagogy in computer science. (The canonical
collection and explanation of sorting algorithms is the first half of Knuth’s canonical
computer science text.) It would be ludicrous to suggest humanists need to under-
stand an algorithm like quicksort to use a sorted list. But we do need to understand
sortedness itself in order to make use of the distinctive properties of a sorted list.
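A minimal sketch in Python can make this concrete (it is offered purely as an
illustration; the toy quicksort and the example names are invented). Two quite
different procedures yield an identical sorted result, and a downstream use such as
looking a name up in an index depends only on the sortedness of that result:

```python
# Illustrative sketch only: two different algorithms, one transformation.
from bisect import bisect_left

def quicksort(items):
    # A toy quicksort; any correct sorting procedure realizes the same transformation.
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

names = ["Woolf", "Alger", "Flaubert", "Howells", "Joyce"]
assert quicksort(names) == sorted(names)   # different procedures, identical result

# A concordance- or index-style lookup relies only on the sortedness of the list,
# not on which algorithm produced it.
index = sorted(names)
print(bisect_left(index, "Flaubert"))      # position of "Flaubert" in the sorted index
```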
The alternative to understanding the meaning of transformations is to use
algorithms instrumentally; to hope, for example, that an algorithm like Latent
Dirichlet Allocation will approximate existing objects like “topics,” “discourses,” or
“themes” and explore the fissures where it fails to do so. (See, for example, Rhody;
Goldstone and Underwood; Schmidt, “Words Alone.”) This instrumental approach
to software, however, promises us little in the way of understanding; in hoping that
algorithms will approximate existing meanings, it in many ways precludes them from
creating new ones. The signal criticism of large-scale textual analysis by traditional
humanists is that it tells scholars nothing they did not know before. This critique is
frequently misguided; but it does touch on a frustrating failure, which is that dis-
tant reading as commonly practiced frequently fails to offer any new ways of under-
standing texts.
Far more interesting, if less immediately useful, will be to marry large-scale
analysis to what Stephen Ramsay calls “algorithmic criticism”: the process of using
algorithmic transformations as ways to open texts for new readings (Ramsay). This
is true even when, as in some of the algorithms Ramsay describes, the transforma-
tion is inherently meaningless. But transformations that embody a purpose them-
selves can help us to create new versions of text that offer fresh or useful perspectives.
Seeking out and describing how those transformations function is a type of work
we can do more to recognize and promote.

The Fourier Transform and Literary Time


A debate between Annie Swafford and Matt Jockers over Jockers’s “Syuzhet” pack-
age2 for exploring the shape of plots through sentiment analysis offers a useful case
study of how further exploring a transformation’s purpose can enrich our vocabu-
lary for describing texts. Although Swafford’s initial critique raised several issues
with the package, the bulk of her continuing conversation with Jockers centered
on the appropriateness of his use of a low-pass filter from signal processing as a
“smoothing function.” Jockers argued it provided an excellent way to “filter out the
extremes in the sentiment trajectories.” Swafford, on the other hand, argued that it
was often dominated by “ringing artifacts” which, in practice, means the curves pro-
duced place almost all their emphasis “at the lowest point only and consider rises
or falls on either side irrelevant” (Jockers, “Revealing Sentiment”; Swafford “Prob-
lems”; Swafford, “Why Syuzhet Doesn’t Work”).
The Swafford and Jockers debate hinged on not just an algorithm, but a con-
cretely defined transformation. The discrete Fourier transform undergirds the low-
pass filters that Jockers uses to analyze plot. The thought that the Fourier transform
might make sense as a formalism for plot is an intriguing one; it is also, as Swafford
argues, quite likely wrong. The ringing artifacts that Swafford describes are effects of
a larger issue: the basic understanding of time embodied in the transformation itself.
The purpose of the Fourier transform is to represent cyclical events as frequen-
cies by breaking complex signals into their component parts. Some of the most
basic elements of human experience—most notably, light and sound—physically
exist as repeating waves. The Fourier transform offers an easy way to describe these
infinitely long waves as a short series of frequencies, constantly repeating. The pure
musical note “A,” for example, is a constant pulsation at 440 cycles per second; as
actually produced by a clarinet, it has (among other components) a large number of
regular “overtones,” less powerful component notes that occur at a higher frequency
and enrich the sound beyond a simple tone. A filter like the one Jockers uses strips
away these regularities; it is typically used in processes like MP3 compression to strip
out notes too high for the human ear to hear. When applied even more aggressively
to such a clarinet tone, it would remove the higher frequencies, preserving the note
“A” but attenuating the distinctive tone of the instrument.3
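The operation can be sketched briefly in Python (a hedged illustration using NumPy's
FFT routines, not the code Syuzhet actually calls; the sampling rate and the 600 Hz
cutoff are arbitrary choices):

```python
# Illustrative sketch of a low-pass filter via the Fourier transform (not Syuzhet's code).
# Build a clarinet-like tone (a 440 Hz fundamental plus weaker overtones), then zero
# out every frequency component above the cutoff.
import numpy as np

sample_rate = 8000                              # samples per second (assumed)
t = np.arange(sample_rate) / sample_rate        # one second of signal
signal = (np.sin(2 * np.pi * 440 * t)           # the fundamental: the note "A"
          + 0.4 * np.sin(2 * np.pi * 880 * t)   # overtones: higher frequency,
          + 0.2 * np.sin(2 * np.pi * 1320 * t)) # less powerful

coeffs = np.fft.rfft(signal)                    # move to the frequency domain
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
coeffs[freqs > 600] = 0                         # discard everything above the cutoff
filtered = np.fft.irfft(coeffs, n=len(signal))  # back to the time domain

# `filtered` keeps the 440 Hz fundamental but loses the overtones that give
# the instrument its distinctive timbre.
```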
The idea that plots might be represented in the frequency domain is fascinat-
ing, but makes some highly questionable assumptions. Perhaps the most striking
assumption is that plots, like sound or light, are composed of endlessly repeat-
ing signals. A low-pass filter like the one Jockers employs ignores any elements
that seem to be regularly repeating in the text and instead focuses on the longest-
term motions; those that take place over periods of time greater than a quarter or
a third the length of the text. The process is analogous to predicting the continuing
sound of the clarinet based on a sound clip of the note “A” just 1/440th of a second
long, a single beat of the base frequency. This, remarkably, is feasible for the musi-
cal note, but only because the tone repeats endlessly. The default smoothing in the
Syuzhet package assumes that books do the same; among other things, this means
the smoothed versions assume the start of every book has an emotional valence that
continues the trajectory of its final sentence. (I have explained this at slightly greater
length in Schmidt, “Commodius Vici.”)
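That wrap-around is easy to observe on artificial data. The sketch below (again an
illustration, not Syuzhet's own routine; the cutoff of three frequency components is
arbitrary) low-pass filters a steadily declining "plot" and shows the smoothed opening
being pulled toward the ending:

```python
# Illustrative sketch of the periodicity assumption (not Syuzhet's own routine).
# A low-pass filter built on the discrete Fourier transform treats a sentiment series
# as one period of an endlessly repeating signal, so the smoothed opening of a book
# is pulled toward the values of its closing sentences.
import numpy as np

def fourier_lowpass(values, keep=3):
    # Keep only the lowest `keep` frequency components of the series.
    coeffs = np.fft.rfft(values)
    coeffs[keep:] = 0
    return np.fft.irfft(coeffs, n=len(values))

falling_plot = np.linspace(1.0, -1.0, 100)   # a steady decline in sentiment
smoothed = fourier_lowpass(falling_plot)

# The raw opening sits at +1.0, but the smoothed opening is dragged far below it,
# because the transform assumes the trajectory loops back to its tragic ending.
print(falling_plot[0], smoothed[0])
```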
For some plots, including Jockers’s primary example, Portrait of the Artist as
a Young Man, this assumption is not noticeably false. But for other plots, it causes
great problems. Figure 48.1 shows the plot of Portrait and four other novels, with
text taken from Project Gutenberg. William Dean Howells's The Rise of Silas Lapham
is a story of ruination; Ragged Dick, by Horatio Alger, is the archetypal “Rags to
Riches” novel of the nineteenth century; Madame Bovary is a classically tragic
tale of decline. Three different smoothing functions are shown: a weighted moving
average, among the simplest possible functions; a loess moving average, which is
one of the most basic and least assumption-laden algorithms used in exploratory
data analysis; and the low-pass filter included with Syuzhet.4

Figure 48.1. Four plot trajectories.

The problems with the Fourier transform here are obvious. A periodic function
forces Madame Bovary to be “as well off” after her death as before her infidelity.
The less assumption-laden methods, on the other hand, allow her fate to collapse at
the end and for Ragged Dick’s trajectory to move upward instead of ending on the
downslope. Andrew Piper suggests5 that it may be quite difficult to answer the ques-
tion, “How do we know when a curve is ‘wrong’?” (Piper, “Validation”). But in this
case, the wrongness is actually quite apparent; only the attempt to close the circle
can justify the downturn in Ragged Dick’s fate at the end of the novel.
What sort of evidence is this? By Jockers’s account,6 the Bovary example is
simply a negative “validation” of the method, by which I believe he means a sort of
empirical falsification of the claim that this is the best method in all cases (Jockers,
“Requiem”). Swafford’s posts imply similarly that case-by-case validation and fal-
sification are the gold standard. In her words, the package (and perhaps the digital
humanities as a whole) need “more peer review and rigorous testing—designed to
confirm or refute hypotheses” (Swafford, “Continuing”).
Seen in these terms, the algorithm is a process whose operations are funda-
mentally opaque; we can poke or prod to see if it matches our hopes, but we can
never truly know it. But when the algorithm is a means of realizing a meaningful
transformation, as in the case of the Fourier transform, we can do better than this
kind of quality assurance testing; we can interpretively know in advance where a
transformation will fail. I did not choose Madame Bovary at random to see if it
looked good enough; instead, the implications of the smoothing method made it
obvious that the tragedy, in general, was a type of novel that this conception of sen-
timent, embodied in Syuzhet's smoothing, could not comprehend. I will admit, with some
trepidation, that I have never actually read either Madame Bovary or Ragged Dick;
but each is the archetype of a plot wholly incompatible with low-pass filter smooth-
ing. Any other novel that ends in death and despair or extraordinary good fortune
would fail in the same way.
These problems carry through to Jockers’s set of fundamental plots: all begin
and end at exactly the same sentiment. But the obvious problems with this assump-
tion were not noted in the first two months of the package’s existence (which surely
included far more intensive scrutiny than any peer-review process might have). One
particularly interesting reason that these failings were not immediately obvious is
that line charts, like Figure 48.1, do not fully embody the assumptions of the Fou-
rier transform. The statistical graphics we use to represent results can themselves
be thought of as meaningful transformations into a new domain of analysis. And in
this case, the geometries and coordinate systems we use to chart plots are themselves
emblazoned with a particular model. Such line charts assume that time is linear and
infinite. In general, this is far and away the easiest and most accurate way to repre-
sent time on paper. It is not, though, true to the frequency domain that the Fourier
transform takes for granted. If the Fourier transform is the right way to look at plots,
we should be plotting in polar coordinates, which wrap around to their beginning. I
have replotted the same data in Figure 48.2, with percentage represented as an angle
starting from 12:00 on a clock face and the sentiment defined not by height but by
distance from the center.

Figure 48.2. Four plot trajectories plotted in polar coordinates.

Here, the assumptions of the Fourier transform are much more clear. For all of
the novels here, time forms a closed loop; the ending points distort themselves to
line up with the beginning, and vice versa. The other algorithms, on the other hand,
allow great gaps: the Madame Bovary arc circles inward as if descending down a
drain, and Ragged Dick propels outward into orbit.
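Replotting a trajectory this way takes only a few lines (a sketch assuming matplotlib,
with an invented declining sentiment curve standing in for the data behind Figure 48.2):

```python
# Illustrative sketch of replotting a trajectory in polar coordinates (matplotlib
# assumed; the sentiment curve is invented, not the data behind Figure 48.2).
import numpy as np
import matplotlib.pyplot as plt

progress = np.linspace(0, 1, 200)            # position in the book, from 0% to 100%
sentiment = np.linspace(1.0, -1.0, 200)      # a hypothetical steady decline

theta = 2 * np.pi * progress                 # fraction of the book as an angle
radius = sentiment - sentiment.min() + 0.1   # shift so every radius is positive

ax = plt.subplot(projection="polar")
ax.set_theta_zero_location("N")              # the opening sits at 12 o'clock
ax.set_theta_direction(-1)                   # narrative time runs clockwise
ax.plot(theta, radius)
plt.show()

# In this projection the gap between the beginning and the end of the book appears
# as a visible gap between the start and end of the curve.
```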

These circular plots are more than falsifications. Fully embracing the underly-
ing assumptions of the transform in this way does not only highlight problems with
the model; it suggests a new perspective for thinking about plots. This view high-
lights the gap between the beginning and end as a central feature of the novel; in
doing so, it challenges us to think of the time that plots occupy as something other
than straightforwardly linear.
This is a conversation worth having, in part because it reminds us to question
our other assumptions about plots and time. The infinite time that the Cartesian plot
implies is, in some ways, just as false as the radial one. Many smoothing methods
(including the one I would like to see used in Syuzhet, loess regression) can easily
extrapolate past the beginning and end of the plot. That this is possible shows that
they are, in some ways, equally unsuitable for the task at hand. The heart of the dis-
tinction between fabula and syuzhet, in fact, is that there is no way to speak about
“before the beginning” of a novel, or what words Shakespeare might have written
if he had spent a few more hours working past the end of Hamlet. Any model that
implies such phrases exist is obviously incorrect.
But even when arguably false, these transformations may yet be productive of
new understandings and forms of analysis. While this cyclical return is manifestly
inappropriate to the novel, it has significant implications for the study of plot more
generally. By asking what sorts of plots the frequency domain might be useful
for, we can abstractly identify whole domains where new applications may be more
appropriate.
For example: the ideal form of the three-camera situation comedy is written
so that episodes can air in any arbitrary order in syndication. That is to say, along
some dimensions they should be cyclical. For sitcom episodes, cyclicality is a
useful framework to keep in mind. The cleanness of the fit of sentiment, theme,
or other attributes may be an incredibly useful tool, both for understanding how com-
mercial implications intertwine with authorial independence and for understanding
the transformation of a genre over time. Techniques of signal processing could be
invaluable in identifying, for example, when and where networks allow writers to
spin out multi-episode plot lines.7
Though the bulk of the Swafford and Jockers conversation centered on the
issue of smoothing, many digital humanists seem to have found a second critique
Swafford offered far more interesting. She argued that the sentiment analysis algo-
rithms provided by Jockers’s package, most of which were based on dictionaries
of words with assigned sentiment scores, produced results that frequently violated
“common sense.” While the first issue seems blandly technical, the second offers
a platform for digital humanists to talk through how we might better understand
the black boxes of algorithms we run. What does it mean for an algorithm to
accord with common sense? For it to be useful, does it need to be right 100 percent
of the time? 95 percent? 50.1 percent? If the digital humanities are to be a field that
appropriates tools created by others, these are precisely the questions it needs to
practice answering.
To phrase the question this way, though, is once again to consider the algo-
rithm itself as unknowable. Just as with the Fourier transform, it is better to ask con-
sciously what the transformation of sentiment analysis does. Rather than thinking
of the sentiment analysis portion of Syuzhet as a set of word lists to be tested against
anonymous human subjects, for example, we should be thinking about the best
way to implement the underlying algorithms behind sentiment analysis—logistic
regression, perhaps — to distinguish between things other than the binary of “pos-
itive” and “negative.” Jockers’s inspiration, Kurt Vonnegut, for example, believed
that the central binary of plot was fortune and misfortune, not happiness and sad-
ness; while sentiment analysis provides a useful shortcut, any large-scale platforms
might do better to create a classifier that actually distinguishes within that desired
binary itself. Andrew Piper’s work on plot structure involves internal comparisons
within the novel itself (Piper, “Novel Devotions”). Work like this can help us to bet-
ter understand plot by placing it into conversation with itself and by finding useful
new applications for transformations from other fields.
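A sketch of what such a classifier might look like follows (scikit-learn is assumed,
and the handful of passages and their fortune/misfortune labels are invented for
illustration; nothing here reproduces Jockers's or Piper's actual methods):

```python
# Illustrative sketch of training a classifier on the binary one actually cares about,
# rather than scoring words against a fixed sentiment dictionary.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training passages labeled for fortune and misfortune.
passages = [
    "He inherited the estate and married happily.",
    "The business failed and the creditors arrived.",
    "Her ship came in; the old debts were forgiven.",
    "Ruin, illness, and disgrace followed in turn.",
]
labels = ["fortune", "misfortune", "fortune", "misfortune"]

# Bag-of-words features feeding a logistic regression classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(passages, labels)

# Classify a new passage; with shared vocabulary it will likely be read as misfortune.
print(model.predict(["The creditors arrived and ruin followed."]))
```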
Doing so means that digital humanists can help to dispel the myths of algorith-
mic domination that Bogost unpacks, rather than participating in their creation.
When historians applied psychoanalysis to historical subjects, we did not suggest
they “collaborate” with psychoanalysts and then test their statements against the
historical record to see how much they held true; instead, historians themselves
worked to deploy concepts that were seen as themselves meaningful. It is good and
useful for humanists to be able to push and prod at algorithmic black boxes when
the underlying algorithms are inaccessible or overly complex. But when they are
reduced to doing so, the first job of digital humanists should be to understand the
goals and agendas of the transformations and systems that algorithms serve so that
we can be creative users of new ideas, rather than users of tools the purposes of
which we decline to know.

Notes

1. http://www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-computation/384300.
2. http://www.matthewjockers.net/2015/02/02/syuzhet.
3. It may be worth emphasizing that a low-pass filter removes all elements above a
certain frequency; it does not reduce to its top five or ten frequencies, which is a different,
equally sensible compression scheme.
4. For all three filters, I have used a span approximating a third of the novel. The loess
span is one-third; the moving average uses a third of the novel at a time; and the cutoff for
the low-pass filter is three. To avoid jagged breaks at outlying points, I use a sine-shaped
kernel to weight the moving average so that each point weights far-away points for its aver-
age less than the point itself.
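A rough sketch of such a kernel-weighted moving average (an approximation offered for
illustration, not the exact code behind Figure 48.1):

```python
# Illustrative sketch of a sine-weighted moving average over roughly a third of the text.
import numpy as np

def sine_weighted_moving_average(values, span_fraction=1/3):
    values = np.asarray(values, dtype=float)
    half = max(1, int(len(values) * span_fraction / 2))    # half-window on each side
    weights = np.sin(np.linspace(0, np.pi, 2 * half + 1))  # peaks at the center point,
                                                           # falls toward the window's edges
    smoothed = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        w = weights[(lo - i + half):(hi - i + half)]       # align kernel with the window
        smoothed.append(np.average(values[lo:hi], weights=w))
    return np.array(smoothed)
```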
5. http://txtlab.org/?p=470.
6. http://www.matthewjockers.net/2015/04/06/epilogue.
7. This does not necessarily mean that the Fourier transform is the best way to think
of plots as radial. Trying to pour plot time into the bottle of periodic functions, as we are
seeing, produces extremely odd results. As Scott Enderle points out, even if a function is
completely and obviously cyclical, it may not be regular enough for the Fourier transform
to accurately translate it to the frequency domain (Enderle).

Bibliography

Bogost, Ian. “The Cathedral of Computation.” The Atlantic, January 15, 2015.
http://www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-computation/384300/.
Enderle, Scott. “What’s a Sine Wave of Sentiment?” The Frame of Lagado (blog), April 2,
2015. http://www.lagado.name/blog/?p=78.
Goldstone, Andrew, and Ted Underwood. “The Quiet Transformations of Literary Studies:
What Thirteen Thousand Scholars Could Tell Us.” New Literary History 45, no. 3
(2014): 359–84. doi:10.1353/nlh.2014.0025.
Jockers, Matthew. “Requiem for a Low Pass Filter.” Matthewjockers.net, April 6, 2015.
http://www.matthewjockers.net/2015/04/06/epilogue/.
———. “Revealing Sentiment and Plot Arcs with the Syuzhet Package.” Matthewjockers.net,
February 2, 2015. http://www.matthewjockers.net/2015/02/02/syuzhet/.
Knuth, Donald E. The Art of Computer Programming: Volume 3: Sorting and Searching.
Reading, Mass.: Addison-Wesley Professional, 1998.
Piper, Andrew. “Novel Devotions: Conversional Reading, Computational Modeling, and
the Modern Novel.” New Literary History 46, no. 1 (2015): 63–98. doi:10.1353/nlh.2015.0008.
———. “Validation and Subjective Computing.” txtLAB@McGill, March 25, 2015.
http://txtlab.org/?p=470.
Ramsay, Stephen. Reading Machines: Toward an Algorithmic Criticism. Urbana: University
of Illinois Press, 2011.
Rhody, Lisa M. “Topic Modeling and Figurative Language.” Journal of Digital Humanities 2,
no. 1 (2013). http://journalofdigitalhumanities.org/2-1/topic-modeling-and-figurative-language-by-lisa-m-rhody/.
Schmidt, Benjamin. “Commodius Vici of Recirculation: The Real Problem with Syuzhet.”
Author’s blog, April 13, 2015. http://benschmidt.org/2015/04/03/commodius-vici-of-recirculation-the-real-problem-with-syuzhet/.
———. “Words Alone: Dismantling Topic Models in the Humanities.” Journal of Digital
Humanities 2, no. 1 (2013). http://journalofdigitalhumanities.org/2-1/words-alone-by-benjamin-m-schmidt/.
Swafford, Annie. “Problems with the Syuzhet Package.” Anglophile in Academia: Annie
Swafford’s Blog, March 2, 2015. https://annieswafford.wordpress.com/2015/03/02/syuzhet/.
———. “Continuing the Syuzhet Discussion.” Anglophile in Academia: Annie Swafford’s Blog,
March 7, 2015. https://annieswafford.wordpress.com/2015/03/07/continuingsyuzhet/.
———. “Why Syuzhet Doesn’t Work and How We Know.” Anglophile in Academia: Annie
Swafford’s Blog, March 30, 2015. https://annieswafford.wordpress.com/2015/03/30/why-syuzhet-doesnt-work-and-how-we-know/.
