Cyber Insecurity
Andrew Odlyzko
University of Minnesota
[email protected]
http://www.dtc.umn.edu/~odlyzko
Revised version, March 18, 2019.
1 Introduction
It is time to acknowledge the wisdom of the “bean counters.” For ages, multitudes of
observers, including this author, have been complaining about those disdained accountants
and business managers. They have been blamed for placing excessive emphasis on short-
term budget constraints, treating cybersecurity as unimportant, and downplaying the risks
of disaster.
With the benefit of what is now several decades of experience, we have to admit those
bean counters have been right. The problems have simply not been all that serious. Further,
if we step back and take a sober look, it becomes clear those problems are still not all that
serious.
All along, the constant refrain has been that we need to take security seriously, and
engineer our systems from the ground up to be truly secure. The recent program of recom-
mended moves [4] opens with a quote from the famous 1970 Ware Report that called for
such steps. This demand has been growing in stridency, and has been increasingly echoed
by higher levels of management and of political leadership. Yet in practice over the last few
decades we have seen just a gradual increase in resources devoted to cybersecurity. Action
has been dominated by minor patches. No fundamental reengineering has taken place.
This essay argues that this “muddle through” approach was not as foolish as is usually
claimed, and will continue to be the way we operate. Cyberinfrastructure is becoming more
important. Hence intensifying efforts to keep it sufficiently secure to let the world function
is justified. But this process can continue to be gradual. There is no need to panic or make
drastic changes, as the threats are manageable, and not much different from those that we
cope with in the physical realm.
This essay reviews from a very high level the main factors that have allowed the world
to thrive in spite of the clear lack of solid cyber security. The main conclusion is that,
through incremental steps, we have in effect learned to adopt techniques from the physical
world to compensate for the deficiencies of cyberspace. This conclusion is diametrically
opposed to the heated rhetoric we observe in the popular media and to the unanimous
opinions of the technical and professional literature. No claim is made that this process
was optimal, just that it was “good enough.” Further, if we consider the threats we face,
we are likely to be able to continue operating in this way. But if we look at the situation
realistically, and plan accordingly, we might
– enjoy greater peace of mind
– produce better resource allocations
The analysis of this essay does lead to numerous contrarian ideas. In particular, many features
of modern technologies, such as "spaghetti code" or "security through obscurity," are
almost universally denigrated, as they are substantial contributors to cyber insecurity. But
while this is true, they are also important contributors to the imperfect but adequate levels
of cyber security that we depend on. Although a widely cited mantra is that “complexity
is the enemy of security,” just the opposite is true in the world we live in, where perfect
security is impossible. Complexity is an essential element of the (imperfect) security we
enjoy, as will be explained in more detail later. Hence one way to improve our security is
to emphasize “spaghetti code” and “security through obscurity” explicitly, and implement
them in systematic and purposeful ways. In general, we should adopt the Dr. Strangelove
approach, which is to
stop worrying and learn to love the bomb.
In other words, not just accept that our systems will be insecure. Recognize that insecurity
often arises in systematic ways, and that some of those ways can be turned into defensive
mechanisms. We do have many incremental ways to compensate, and we have to learn how
to systematically deploy them, so as to live and prosper anyway. The key point is that, in
cyberspace as well as in physical space,
security is not the paramount goal by itself.
Some degree of security is needed, but it is just a tool for achieving other social and
economic goals.
This essay is a substantial revision and expansion of the author’s earlier piece [6], which
was an extended abstract of the WiSec’10 keynote, and also builds on the author’s other
papers, such as [5]. However, no originality is claimed. While this piece is likely to strike
many readers as very contrarian, many of the arguments made here can also be found
elsewhere, for example in [1], and are not inconsistent with many of the recommendations
of mainstream reports such as [4]. Historically, for many observers a serious reassessment
of the traditional search for absolute security was provoked by Dan Geer’s 1998 post [2].
However, awareness of general risk issues, and growing perception that they were key, can
be traced much further back, to various research efforts in the 1980s, and the founding of
Peter Neumann’s RISKS Digest in 1985. No attempt is made here to trace this evolution
of attitudes towards security. That is a nice large subject that is left for future historians
to deal with. This essay considers only the current situation and likely evolution in the
near future.
In the cyber realm itself, we have experienced many prominent disasters. But most
of them, such as airlines being grounded for hours or days, or cash machine networks
not functioning, have arisen not from hostile action, but from ordinary run-of-the-mill
programming bugs or human operational mistakes. And of course we have the myriad
issues such as cost overruns and performance disappointments which plague information
as well as other rapidly evolving technologies. They have little to do with the lack of cyber
security. Yet we suffer from them every day.
There is a third curious incident in information technology (in)security that also appears
to be universally ignored. For several decades we have had simple tools for strengthening
security that did not require any fundamental reengineering of information systems. A
very conspicuous example of such tools is two-factor authentication. The widely cited and
widely accepted explanation for this technology not having been deployed more widely
before is that users disliked the extra bother it involved. So apparently decision makers
felt that the extra security provided by two-factor authentication did not warrant the cost
of inconveniencing users. The big “dog did not bark” question then is, given that this
technology was not deployed, why did nothing terrible happen?
The general conclusion of this essay is that from the start, the “bean counters” un-
derstood the basic issues better than the technologists, even though they usually did not
articulate this well. The main problem all along was risk mitigation for the human world
in which cyberspace played a relatively small role. It was not absolute security for the
visionary cyberspace that technologists dreamed of.
4 Threats
We certainly do face many threats. In particular, we do face many cyberthreats. It seems
inevitable that we will suffer a “digital Pearl Harbor.” What we have to keep in mind is
that we have suffered a physical Pearl Harbor and other non-cyber disasters as large or
larger. Many occurred quite recently, as noted before. It seems absolutely certain we will
suffer many more, and an increasing number of them will surely be coming from the cyber
realm. On the other hand, it is questionable whether the cyber threats are yet the most
urgent ones.
The human race faces many potentially devastating non-cyber dangers, such as aster-
oid strikes, runaway global warming, and large pandemics. These threats could have giant
impacts, but are hard to predict and quantify, and are seemingly remote, so tend to be
ignored by almost all people most of the time. However, we also face a variety of other still
large dangers, such as those from earthquakes and hurricanes. Those occur more frequently,
so the damage they cause is moderately predictable, at least in a long-run statistical sense.
Yet we are not doing anywhere near as much to protect against them as we could, if we
wanted to do so. We accept that they will occur, and rely on general resilience and insur-
ance, whether of the standard variety, or the implicit insurance of governments stepping
in with rescue and recovery assistance.
We also tolerate the ongoing slaughter of over a million people each year in automobile
accidents worldwide (with about 40,000 in the U.S. alone). The horrendous losses of human
life as well as property that involve cars arise mostly from unintentional mistakes. They
result from our accepting the limitations of Homo sapiens when dealing with a dangerous
technology. It’s just that this technology has proven extremely attractive to our species.
Hence we accept the collateral damage that results from its use, even though it far exceeds
that from all wars and civil conflicts of recent times.
On top of accidents we also have the constant ongoing malicious damage, coming from
crime in its many dimensions. Society suffers large losses all the time, and mitigates the
threat, but has never been able to eliminate it. We have large security forces, criminal
courts, jails, and so on. The U.S. alone has close to a million uniformed police officers, and
more than a million private security guards.
Military establishments tend to be substantially larger than law enforcement ones.
The main justification for them is to guard against the far rarer but potentially more
damaging actions of hostile nations. One way or another, most societies have decided to
prioritize protection against those external dangers over that of internal crime. Further, in
recent decades, military spending (and therefore total security-related spending) has been
declining as a fraction of the world’s economic output. So when societies feel threatened
enough, they do manage to put far more effort into security than is the case today.
Yet even military security at its very best is not water-tight, which has to be kept
in mind when considering cyber security. Serious gaps have been uncovered on numerous
occasions, such as a deep penetration of an American nuclear weapons facility by a pacifist
group that included an 82-year-old nun.
The bottom line is that society has always been devoting substantial and sometimes
huge resources to security without ever achieving complete security. But those resources
are still not as great as they could be. That’s because, as noted above, security is not the
paramount goal by itself. We make tradeoffs, and are only willing to give up a fraction
of the goods and services we produce for greater safety. There is even extensive evidence
that humans desire a certain level of risk in their lives: when some safety measures are
introduced, people compensate by behaving with less care.
Still, we do employ many people and extensive resources protecting ourselves from
traditional physical world threats, far more than we devote to cybersecurity. Hence it is
clear, and has been clear for a long time, that more effort could have been dedicated to
cybersecurity, even without consuming productive resources. All we had to do was just shift
some of the effort devoted to traditional physical security to the cyber realm. And indeed
that is what is happening now, at least in a relative sense. More attention and resources are
being devoted to cybersecurity. One measure of the greater stress being placed on this area
is the growing (but still very small) number of CEOs who have lost their jobs as a result of
security breaches. So the question arises, essentially the same one as before, just in
a different form: why was this not done before, and why has so little harm come of it?
What makes our lives tolerable is that the Barlow vision is divorced from reality. Cy-
berspace is intimately tied to what we might call Humanspace, the convoluted world of
physical objects and multiple relations, including institutions such as governments, and
laws, and lawyers. In fact, we can say:
The dream of people like Barlow was to build a Cyberspace that would overcome the
perceived defects of Humanspace. In practice we have used the defensive mechanisms
of Humanspace to compensate for the defects of Cyberspace.
Those defensive mechanisms are what we consider next, starting with the limitations of
attackers in both physical and cyber realms.
cash machines, or even causing a blackout in a city does not carry as strong a message as
blowing up airplanes, bringing down buildings, or causing blood to flow among spectators
in a sports arena.
There is much concern about ongoing technology developments making the lack of cyber
security far more dangerous, especially as more devices go online, and IoT (the Internet
of Things) becomes more pervasive. Those are valid concerns, but let us keep in mind
that those ongoing technology developments are also creating or magnifying many physical
dangers even without taking advantage of cyber insecurity. Just think of drones (or possibly
imaginary drone sightings) shutting down airports recently, or drones or self-driving cars
delivering bombs in the future.
In general, and reinforcing earlier discussions, society has always faced manifold dangers
from its members misusing various technologies. Deterrence, detection, and punishment, in
addition to general social norms, are what have enabled civilized human life to exist. Contrary
to the cyberlibertarian visions of people like Barlow (or many modern advocates of Bitcoin
and blockchain), they are likely to be just as crucial in the future, if not more so.
Of course, as the old saying goes, bank robbers went after banks because that is where
the money was. But now the money is in cyberspace. So that is where criminals are moving.
And that is also where security resources are being redirected. Completely natural and
expected, and happening at a measured pace.
Individually they pose limited and reasonably well-understood dangers. Overall, their potential
impact can be estimated and constrained by standard approaches.
At the other end of the spectrum, though, there are the “black swans,” the giant
security breaches that cause major damage. Those don't fit into the equilibrium framework,
just as catastrophic financial collapses don’t fit into the standard economic equilibrium
framework (and have been almost entirely ignored by mainstream economists). But neither
do the giant physical disasters, such as Pearl Harbor or Hurricane Katrina. Their damaging
effects basically can only be mitigated by designing in general resilience.
Measures that provide resilience against cyber attacks are often the same as those
against traditional physical attacks or against natural disasters. As just one example, there
is much concern about the damage to the electric power grid that might be caused by
malicious actors. But the worst scenarios along those lines are similar to what we are sure
to suffer when something like the Carrington Event recurs. This was the giant geomagnetic
solar storm that hit the Earth in 1859. It caused widespread failures of the telegraphs, the
only electrical grids in existence at that time. Estimates are that if it were to occur today,
it would cause damages in the trillions of dollars. And it is bound to recur some day!
The conclusion that emerges is again that cyberspace is not all that different from the
more traditional physical space we are used to. And security measures for the two are
again similar.
Yet even now, two-factor authentication is nowhere near universal. Further, most de-
ployments of it at this time appear to use the least secure version of it, with texts to
mobile phones. Practical attacks on this version have been developed and applied. The
more secure versions with hardware tokens are used much less frequently. Obviously what
is happening is that choices are being made, the additional inconvenience to users being
weighed against the likely losses from hostile penetrations. Even without any new technol-
ogy breakthroughs, more secure versions of two-factor authentication can be deployed when
they are seen as necessary. But they are clearly not being seen as necessary at present.
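The tradeoff is easy to make concrete. Both app-based and hardware tokens typically compute a time-based one-time password (TOTP, RFC 6238), and the whole mechanism fits in a few lines. A minimal sketch (the secret below is the RFC test value, not anything to use in practice):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, digits: int = 6, period: int = 30) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    counter = int(time.time() if t is None else t) // period
    msg = struct.pack(">Q", counter)              # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # "dynamic truncation" (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and token share the secret; matching codes prove possession of it.
print(totp(b"12345678901234567890", t=59))  # prints 287082 (RFC 6238 test vector)
```

The SMS variant sends such a code over the phone network instead of computing it on the device, and that extra hop is exactly what the practical attacks mentioned above exploit.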
There are many more examples of relatively easy steps that have been available for a
long time, and can strengthen security without any fundamental reengineering of informa-
tion systems, or rearranging how society functions. Consider the adoption of chip credit
cards. They have been universal in much of the world for years, but are only now taking
over in the U.S. The costs have been understood by the banking industry, and it was de-
cided, through a messy process by various stakeholders, that they were too high until the
perceived threats increased.
Electronic voting is another prominent example where simple and well-known steps
would have provided greater security a long time ago. Experts have been arguing from the
start that purely electronic voting basically cannot be made secure, at least not with feasible
technology and the financial resources that are available or are likely to be made available.
All the evidence that has been gathered over the years supports this view. Further, all the
advantages of electronic voting (convenience, accessibility for those with handicaps, quick
collection of results, ...) can be obtained very easily, together with a much higher degree
of security, through the use of printed records that are preserved in physical form. The
additional costs that are involved are very modest, and seem well worth it to most people
who have examined the situation, including this author. Yet in many jurisdictions this
simple solution is being ignored. And it has to be admitted that so far no serious abuses
have been documented. What is likely to happen is that if some big scandal surfaces that
is based on a cyber breach, political leaders will swing into action, and find the resources
to provide the obvious solution. (We should remember that big voting scandals do occur
all the time, based on other aspects of the voting system, and they lead to responses that
vary with circumstances.) But, as seems typical in human affairs, it will likely take a big
scandal to cause this to happen.
Electronic voting provides an interesting illustration of a cyber insecurity that is not
difficult to fix, but is not being fixed. It also provides an example of a common phenomenon,
namely that the fix involves stepping back to the traditional physical world, in this case of
messy paper ballots. (The same could be said of chip cards.) In other words, the insecurity
of the cyber realm is compensated by a measure from the brick-and-mortar world.
An even better example of reliance on the physical world to compensate for defects in cyber
security is that of passwords. They have been pronounced obsolete and dead many times,
but are still ubiquitous. A key element in making them more tolerable in spite of their well-
known weaknesses is the use of paper for users to write them down (or, preferably, to write
down hints for those passwords or passphrases). The security field has finally been forced
to admit that asking users to remember scores of complicated passwords (and change them
every few months) is not going to work, not with the bulk of human users. But paper slips
work out quite well, as physical wallets and purses do not get stolen all that often.
Notice that there are many other direct physical methods for increasing security. Air-
gapped systems, isolated from the Internet, have been standard in high-security environ-
ments. They are again not absolutely secure, as the Stuxnet case demonstrates. But they
do provide very high levels of security, as breaching them requires special skills and exten-
sive effort (as the Stuxnet case demonstrates, again). At a simpler level, allowing certain
operations (such as resetting the options on a router or another device) only through the
press of a physical button on the device also limits what attackers can do.
Frequent backups serve to mitigate ransomware and many other attacks. They can be
automated, so that they do not impose any significant mental transaction costs on the
users. They increase the reversibility of actions, which is a key component of security (but
seems not to be understood by the advocates of Bitcoin and other cryptocurrencies). And
they are not expensive in terms of hardware. Of course, backups increase security only if
they are not subverted. But there are a variety of ways to make backups more trustworthy,
such as using write-once media (some optical disks, for example), or special controllers that limit
what operations can be done.
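As one hedged sketch of such a safeguard: record a digest manifest of the backup and store the manifest itself out of the attacker's reach (for instance on write-once media), so that later tampering with the backup is at least detectable. The file layout here is purely illustrative:

```python
import hashlib
import os

def manifest(root: str) -> dict:
    """SHA-256 digest of every file under root. Store the result somewhere it
    cannot be rewritten; comparing later exposes tampering with the backup."""
    out = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                out[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return out

def verify(root: str, saved: dict) -> list:
    """Paths added, removed, or modified since the manifest was taken."""
    current = manifest(root)
    return sorted(p for p in set(saved) | set(current)
                  if saved.get(p) != current.get(p))
```

A manifest of this kind does not prevent damage; it restores trust in whichever backups pass verification, which is what matters when deciding what to restore.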
We should also remember there is one piece of advice that applies in both cyberspace
and physical space: If it’s dangerous, don’t use it! Some very cautious organizations disable
USB ports on their computers, but such organizations are rare. Email attachments are a
notorious carrier for all sorts of malicious software. They could be blocked, but seldom are.
All these examples show how society has in effect accepted obvious risks in order to get
benefits of insecure information technology solutions.
For the purposes of this essay, the key counterpoint to this line of argument is that
this erosion of privacy we experience has little to do with cyber insecurity. Some of that
erosion does come from illicit hacking of our systems, which is indeed facilitated by the
insecurity of our information systems. But most of it comes by design, as providers of
services and devices purposely build them to collect data about users for exploitation by
those providers and their (almost universally concealed) networks of partners. (Even the
illicit hacking of those devices, databases, and so on, can occur only because of this huge and
legal, even though usually obfuscated, data gathering.) Hence there are no improvements
in cybersecurity that would by themselves make a measurable difference to the erosion of
privacy that we experience. To the extent that society wants to preserve some semblance
of privacy, other methods will have to be used, which likely will have to be based on laws
and regulations, and to some extent on technologies for users to protect themselves.
On the other hand, the erosion of privacy is a key element to maintaining tolerable
levels of security in general. Tens or sometimes hundreds of millions of credit cards are
routinely captured by criminals by compromises of databases. Yet the overall damages are
limited, and often dominated by the cost of arranging for replacement cards. The prices
of stolen credit card credentials on the black market are low, on the order of a dollar
or so each. The reason is that banks have developed techniques for detecting credit card
fraud. Those are based on knowledge of users’ patterns of behavior. A typical card holder
is not an anonymous “standing wave” of Barlow’s imagination, or some account even more
anonymous than those involved in the not-all-that anonymous Bitcoin operations. Instead,
such a person is usually an individual who follows a staid routine in life and in commercial
transactions, say stopping by a particular coffee shop on the way to work, or dropping in
at a grocery store on the way back from work.
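A deliberately toy sketch of what such pattern-based detection amounts to (the merchant and hour-of-day features, and all the names, are invented for illustration; production systems use far richer models):

```python
from collections import Counter

def risk_score(history, txn) -> float:
    """Fraction of a cardholder's past transactions matching this
    (merchant, hour-of-day) pair; low values flag unusual activity."""
    if not history:
        return 0.0
    return Counter(history)[txn] / len(history)

# A staid routine: coffee at 8am on workdays, groceries at 6pm.
history = [("coffee_shop", 8)] * 20 + [("grocery", 18)] * 10
print(risk_score(history, ("coffee_shop", 8)))     # a familiar purchase, high score
print(risk_score(history, ("jewelry_store", 3)))   # prints 0.0: 3am jewelry, flag it
```

The point is that the behavioral history, not the card number itself, is doing the authentication; a thief who has the credentials but not the routine scores as an anomaly.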
There are many measures that erode privacy, such as cross-device tracking (in which
users are identified even though they use different gadgets) or identifying users by the
patterns of their typing, that are often regarded as objectionable or even creepy. Yet they
do serve to identify users, and thereby to prevent mischief, even if this is incidental to
the main purposes for which they are deployed. Organizations that operate these systems
can get a high degree of assurance as to the person they are dealing with, and in such
circumstances stealing a credit card or cracking a password is often of limited use.
It should also be remembered that since enterprises do want to track customers or
potential customers for their own business reasons, they have incentives to develop and
deploy those privacy-invasive methods in preference to providing more direct security. This
is a case where general economic incentives skew what security methods are used. But
those methods are very effective in compensating for cyber insecurity.
But neither can it be assumed that all relevant information will be available in forms that
lead to proper action. The technique of "hiding in plain sight" was popularized by Edgar
Allan Poe nearly two centuries ago. Modern technology creates so much more information that
this often works with minimal efforts at concealment, or even without any such effort.
Even when information is known, it is often not known widely, and is not known by people
who might or should act on it. Just consider Dieselgate, where various groups had obtained
measurements of emissions exceeding legal limits years before the scandal erupted. Or think
of the Danish bank that laundered over $200 billion through a small Estonian branch over
a few years. Not to mention all the various sexual harassment cases that took ages to be
noticed publicly.
As technology advances, the level of information that can be acquired increases, and
so one might argue that the importance of tacit knowledge decreases. But that is very
questionable. Systems are increasingly complicated, so it is harder to formally describe
their functioning and their various failure modes and special features.
Further, modern technology allows for significant enhancements to the basic technique
of “hiding in plain sight.” Obfuscation techniques can be improved, and deployed much
more widely and systematically, since we have increasing ability to create fake information.
Looking forward, we are likely to see an arms race, with AI systems used to create “alternate
realities” on one hand, and to try to penetrate and deconstruct them on the other. The
“post-truth” world is regarded as a danger, but it seems inevitable, and does have positive
angles.
Note that there are many examples of primitive instances of such developments. The
impenetrable legalese in the Terms of Service that users have to accept to use online services
is a frequently encountered instance of what one recent paper referred to as “transparency
[as] the new opacity.” In general, “speed bumps,” steps which offer some protection, rather
than absolute security, proliferate. Non-Disclosure Agreements, or NDAs, are one such
example. Silicon Valley, home of both privacy-abusers and transparency advocates, uses
them widely. Though far from impenetrable, NDAs do substantially limit the spread and
use of information.
the need to station an armed guard at each location. And the costs are far lower than for
physical protective measures.
Finally, there is that basic approach that was mentioned before: If it’s too dangerous,
don’t use it. If high speed is a problem (as it is, as cryptocurrency enthusiasts keep dis-
covering over and over, and fail to learn from), slow things down. Don’t allow large money
transfers to occur until a day or two have passed, and there is a chance for monitoring
systems (possibly ones involving loss of privacy) to collect and analyze data about the
behavior of the entities involved. And so on.
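The delay itself is trivial to mechanize. A hedged sketch of such a hold queue (the threshold and hold period are invented placeholders, not recommendations):

```python
import heapq
import time

THRESHOLD = 10_000        # illustrative "large transfer" cutoff
HOLD_SECONDS = 48 * 3600  # illustrative two-day speed bump

class TransferQueue:
    """Large transfers wait in a hold queue, giving monitoring systems
    (or the account holder) a window in which to cancel them."""

    def __init__(self):
        self._held = []  # min-heap of (release_time, transfer_id)

    def submit(self, transfer_id: str, amount: int, now=None) -> str:
        now = time.time() if now is None else now
        if amount < THRESHOLD:
            return "executed"  # small transfers go through at once
        heapq.heappush(self._held, (now + HOLD_SECONDS, transfer_id))
        return "held"

    def releasable(self, now: float) -> list:
        """Transfer ids whose hold period has elapsed without a cancellation."""
        out = []
        while self._held and self._held[0][0] <= now:
            out.append(heapq.heappop(self._held)[1])
        return out
```

Nothing here is cryptographic; the security comes entirely from buying time for analysis, which is the general character of the "speed bump" measures discussed above.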
These basic techniques underlie the usual approach taken by operators when faced with
serious problems: Bring down the network, repair (by reinstalling basic operating systems
if necessary) all the machines that might be affected, and start bringing up functionality
in sections of the network. That is how the now-ancient Morris Worm infestation was
dealt with. It is also how the collapse of the campus network at a prestigious college was
cured recently [3]. The ability of modern technology to operate in a decentralized fashion,
with multiple ways of providing at least some basic functionality, is very helpful. As the
report on that college’s information systems debacle notes, when the basic network stopped
functioning, the people involved “got creative.” Not something that one would undertake
voluntarily. But it demonstrates the resilience of the system, and, among other things,
makes it that much less attractive a target for attackers.
16 Conclusions
This essay is a brief and very high level view of the cybersecurity area, in particular of how
society has managed to thrive in spite of reliance on insecure information systems. The
main conclusion is that, contrary to the public perception and many calls from prominent
business and government leaders, we are not facing a crisis. This does not mean, though,
that cybersecurity can be neglected, nor that all the effort that has been devoted to new
security technologies has been wasted. Threats have been proliferating, and attackers have
been getting more sophisticated. Hence new measures need to be developed and deployed.
Firewalls are widely claimed to be becoming irrelevant. But they have been very useful
in limiting threats over the last few decades. Now, though, we have to migrate to new
approaches.
Furthermore, just as in the physical realm, dramatically different levels of security are
called for in different organizations. The military and the intelligence agencies can naturally
be expected and required to devote far more attention and resources to security than
civilian enterprises. And they can also be called upon to deal with powerful state actors
that threaten ordinary businesses. We don’t expect hotels to protect against foreign agents
bringing rare and ultra-lethal poison agents to the premises. That is what government
agencies are for, as they can marshal the expertise and resources to deal with such threats.
Still, much can be done even at the level of small civilian enterprises. We do not know
how to build secure systems of substantial complexity. But we can build very secure systems
of limited functionality. Those can be deployed for specialized purposes, such as monitoring
large systems for signs of penetrations or corruptions, or ensuring integrity of backups.
We can also improve laws, regulations, and security standards. Cybersecurity is partic-
ularly rife with problems arising from the “tragedy of the commons” and negative external-
ities, and those problems can be mitigated. Microsoft dramatically improved the security
of its products early in this century as a result of pressure from customers. Much more can
be done this way. For example, it has been known for half a century that array bounds
checking is important, and how to do it. It would not be too difficult to close that
notorious hole, which is key to numerous exploits.
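Written out, the check in question is a single comparison; memory-safe languages perform it implicitly on every access, and even in C it is cheap to add by hand. A sketch (the packet format is invented for illustration):

```python
def read_field(buf: bytes, offset: int, length: int) -> bytes:
    """Bounds-checked read: refuse any request past the end of the buffer.
    This is the comparison an unchecked C-style copy omits."""
    if offset < 0 or length < 0 or offset + length > len(buf):
        raise IndexError(
            f"read of {length} bytes at {offset} exceeds {len(buf)}-byte buffer")
    return buf[offset:offset + length]

packet = b"\x04user-rest-of-heap"
print(read_field(packet, 1, 4))    # prints b'user'
try:
    read_field(packet, 1, 10_000)  # attacker-supplied length field
except IndexError:
    print("overrun caught")        # instead of leaking adjacent memory
```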
The buffer overrun issue cited above brings up one of the main points of this essay,
namely that there are many ways to improve cybersecurity even without new inventions.
As a recent piece notes, “[m]ost of our security vulnerabilities arises from poor practice,
not from inadequate technology” [1]. What that means is that one has to be modest in
expectations for anything truly novel. It may be a worthwhile goal to try for a “moonshot”
or “silver bullet” technological solution, in order to inspire the designers. But even if some
dramatic breakthrough is achieved, it will still have to compete with a slew of other, more
modest “Band-Aid” style approaches. So other factor than pure effectiveness, such as ease
of use, may easily dominate, and result in slow or no adoption.
This essay does suggest some contrarian ideas for increasing security. They are based on
increasing complexity, to enable many of the “speed bumps” that limit what attackers can
do and help trace them. “Spaghetti code” has already been helpful, and can be deployed
in more systematic ways. In general, we should develop what Hilarie Orman has suggested
calling a “theory of bandaids.”
This essay does not claim that a “digital Pearl Harbor” will not take place. One, or
more, almost surely will. But that has to be viewed in perspective. Given our inability
to build secure systems, such events are bound to happen in any case. So all we can affect is
their frequency and severity, just as with large physical dangers. Further, the likelihood of
a "digital Pearl Harbor" has to be considered in comparison to all the other threats we face.
The issue is risk management, deciding how much resources to devote to various areas.
Acknowledgments
The author thanks Ross Anderson, Steve Bellovin, Dorothy Denning, Ben Gaucherin, Bal-
achander Krishnamurthy, Peter Neumann, Hilarie Orman, Walter Shaub, Robert Sloan,
Bart Stuck, Phil Venables, Richard Warner, Bill Woodcock, and the editors of Ubiquity
(Rob Akscyn, Kemal Delic, Peter Denning, Ted Lewis, and Walter Tichy) for their com-
ments. Their providing comments should not be interpreted as any degree of endorsement
of the thesis of this essay.
References
1. P. J. Denning, "The Profession of IT: An interview with William Hugh Murray," Communications of the ACM, vol. 62, no. 3, March 2019, pp. 28–30. Available at https://cacm.acm.org/magazines/2019/3/234920-an-interview-with-william-hugh-murray.
2. D. Geer, "Risk management is where the money is," Risks Digest, vol. 20, issue 06, Nov. 12, 1998. Available at https://catless.ncl.ac.uk/Risks/20/06.
3. L. McKenzie, "Amherst students incredulous about going for days without services they consider absolute necessities," InsideHigherEd, Feb. 21, 2019. Available at https://www.insidehighered.com/news/2019/02/21/almost-week-no-internet-amherst-college.
4. New York Cyber Task Force, "Building a defensible cyberspace," Sept. 2017 report. Available at https://sipa.columbia.edu/ideas-lab/techpolicy/building-defensible-cyberspace.
5. A. Odlyzko, "Cryptographic abundance and pervasive computing," iMP: Information Impacts Magazine, June 2000. Available at https://web.archive.org/web/20030415005519/http://www.cisp.org/imp/june_2000/06_00odlyzko-insight.htm.
6. A. Odlyzko, "Providing security with insecure systems," extended abstract, WiSec'10: Proceedings of the Third ACM Conference on Wireless Network Security, ACM, 2010, pp. 87–88. Available at http://www.dtc.umn.edu/~odlyzko/doc/wisec2010.pdf.