Trust in Computer Systems and the Cloud
By Mike Bursell
About this ebook
Learn to analyze and measure risk by exploring the nature of trust and its application to cybersecurity
Trust in Computer Systems and the Cloud delivers an insightful and practical new take on what it means to trust in the context of computer and network security, and the impact on the emerging field of Confidential Computing. Author Mike Bursell’s experience, ranging from Chief Security Architect at Red Hat to CEO at a Confidential Computing start-up, grounds the reader in fundamental concepts of trust and related ideas before discussing the more sophisticated applications of these concepts to various areas in computing.
The book demonstrates the importance of understanding and quantifying risk and draws on the social and computer sciences to explain hardware and software security, complex systems, and open source communities. It takes a detailed look at the impact of Confidential Computing on security, trust, and risk, and also describes the emerging concept of trust domains, which provide an alternative to standard layered security.
- Foundational definitions of trust from sociology and other social sciences, how they evolved, and what modern concepts of trust mean to computer professionals
- A comprehensive examination of the importance of systems, from open-source communities to HSMs, TPMs, and Confidential Computing with TEEs
- A thorough exploration of trust domains, including communities of practice, the centralization of control and policies, and monitoring
Perfect for security architects at the CISSP level or higher, Trust in Computer Systems and the Cloud is also an indispensable addition to the libraries of system architects, security system engineers, and master’s students in software architecture and security.
Book preview
Trust in Computer Systems and the Cloud - Mike Bursell
Introduction
I am the sort of person who reads EULAs,¹ checks the expiry dates on fire extinguishers, examines the licensing notices in lifts (or elevators), and looks at the certificates on websites before I purchase goods from retailers or give away my personal details to sites purporting to be using my information for good in the world. Like many IT security professionals, I have a (hopefully healthy) disrespect for authority—or, maybe more accurately, for the claims made by authorities or those claiming to be authorities in the various fields of interest in which I've found myself involved over the years.
Around 2001, I found myself without a job as my employer restructured, and I was looking for something to do. I had been getting interested in peer-to-peer interactions in computing, based on a project I'd been involved with at a previous company and the question of how trust relationships could be brokered in this sphere. I did a lot of reading in the area and nearly started a doctorate before getting a new job where finding time to do the requisite amount of study was going to be difficult. Not long after, my wife and I started trying for a family, and the advent of children in the household further reduced the amount of time—and concentration—available to study at the level of depth that I felt the subject merited.
Years went by, and I kept an eye on the field as my professional interests moved in a variety of different directions. Around 2013, I joined a group within ETSI (the European Telecommunications Standards Institute) working on network function virtualisation (NFV). I quickly gravitated to the Security Working Group (Sec-WG), where I found several people with similar professional interests. One of those interests was trust, how to express it, how to define it, and how to operate systems that depended on it. We did some interesting work in the group, producing a number of documents that looked at particular aspects of telecommunications and trust, including the place of law enforcement agencies and regulators in the sector. As the telecommunications industry struggled to get its collective head around virtualisation and virtual machines (VMs), it became clear to the members of the security group that the challenges presented by a move to VMs were far bigger—and more complex—than might originally have been expected.
Operators, as telecommunications providers are known in the industry—think Orange, Sprint, or NTT Docomo—have long known that they need to be careful about the hardware they buy and the software they run on it. There were a handful of powerful network equipment providers (NEPs) whose business model was building a monolithic software stack on top of well-defined hardware platforms and then selling it to the operators, sometimes running and monitoring it for them as well. The introduction of VMs offered the promise (to the operators) and the threat (to the NEPs) of a new model, where entrants into the market could provide more modular software components, some of which could run on less-specialised hardware. From the operators' point of view, this was an opportunity to break the NEPs' stranglehold on the industry, so they (the operators) were all for the new NFV world, while the NEPs were engaged in the ETSI process to try to show that they were still relevant.
From the security point of view, we quickly realised that there was a major shift taking place from a starting point where operators were able to manage risk by trusting the one or two NEPs that provided their existing infrastructure. This was beginning to develop into a world where they needed to consider all of the different NFV vendors, the components they supplied, the interactions the components had with each other, and, crucially, the interactions the components had with the underlying infrastructure, which was now not going to be specialised hardware dedicated to particular functions, but generic computing hardware bought pretty much off the shelf. I think the Sec-WG thoroughly exasperated much of the rest of the ETSI NFV consortium with our continuous banging on about the problem, but we were equally exasperated by their inability to understand what a major change was taking place and the impact it could have on their businesses. The trust relationships between the various components were key to that, but trust was a word that was hardly even in the vocabulary of most people outside the Sec-WG.
At about the same time, I noticed a new trend in the IT security vendor market: people were beginning to talk about a new model for building networks, which they called zero trust. I was confused by this: my colleagues and I were spending huge amounts of time and effort trying to convince people that trust was important, and here was a new movement asserting that the best way to improve the security of your networking was to trust nothing. I realised after some research that the underlying message was more sophisticated and nuanced than that, but I also had a concern that the approach ignored a number of important abstractions and trust relationships. That concern has not abated as zero trust has been adopted as a rallying cry in situations where those involved have paid significantly less attention to those abstractions and relationships.
As virtualisation allowed the growth of cloud computing, and as Linux containers² and serverless computing have led to public cloud offerings that businesses can deploy simply and quickly, security is becoming more of a concern as organisations move from using cloud computing for the odd application here and there to considering it a key part of their computing infrastructure. The issue of trust, however, has not been addressed. From the (seemingly) simple question, "Do I trust my cloud service provider to run my applications for me?", to more complex considerations around dynamic architectures to protect data in transit, at rest, and in use, trust needs to be central to discussions about risk and security in private and public clouds, telecommunications, finance, government, healthcare, the Edge, IoT, automotive computing, blockchain, and AI.
The subject of trust seems, at first blush, to be simple. As you start delving deeper and examining how to apply the concept—or multiple concepts—to computing, it becomes clear that it is actually a very complex field. As we consider how business A deploys software from software provider B, using libraries from open source community C and proprietary software provider D, for consumption by organisation E and its user group F on hardware supplied by manufacturer G running a BIOS from H, an operating system from I, and a virtualisation stack from J, using storage from K, over a network from L, owned by cloud service provider M, we realise that we are already halfway through the alphabet and have yet to consider any of the humans in the mix. We need, as a security and IT community, to be able to talk about trust—but there is little literature or discussion of the subject aimed at our requirements and the day-to-day decisions we make about how to architect, design, write, deploy, run, monitor, patch, and decommission the systems we manage. This book provides a starting point for those decisions, building on work across multiple disciplines and applying it to the world of computing and the cloud.
Notes
1 End user licenses or license agreements.
2 Popularised by Docker, Inc.
CHAPTER 1
Why Trust?
I trust my brother and my sister with my life. My brother is a doctor, and my sister trained as a diving instructor, so I wouldn't necessarily trust my sister to provide emergency medical aid or my brother to service my scuba gear. I should actually be even more explicit because there are times when I would trust my sister in the context of emergency medical aid: I'm sure she'd be more than capable of performing CPR, for example. On the other hand, my brother is a paediatrician, not a surgeon, so I'd not be very confident about allowing him to perform an appendectomy on me. To go further, my sister has not worked as a diving instructor for several years now, so I might consider whether my trust in her abilities should be impacted by that.
This is not a book about human relationships or trust between humans, but about trust in computer systems. In order to understand what that means—or even can mean—however, we need to understand what we mean by trust. Trust is a word that arises out of human interactions and human relationships. Words are tricky. Words can mean different things to different people in different contexts.
The classic example of words meaning different things depending on context is the names of colours—the light frequencies included in the colours I identify as mauve, beige, and ultramarine are very likely different to yours—but there are other examples that are equally or more extreme. If I discuss "scheduling" with an events coordinator, a DevOps expert, and a kernel developer, each person will almost certainly have a different view of what I mean.
Trust is central to the enterprise of this book, and to discuss it, we must come to some shared understanding of what is meant by the word itself.¹ The meaning that we carry forward into our discussion of computer systems must be, as far as is possible, shared. We must, to the extent we can, come to agree on a common referent, impossible as this exercise may seem in a post-modern world.² Our final destination is firmly within the domain of computing, where domain-specific vocabulary is well-established. But since day-to-day usage of the word trust is rooted in a discussion about relationships between humans, this is where we will start.
The sort of decisions that I have described around trusting my sister and brother are ones that humans make all the time, often without thinking about them. Without giving it undue thought, we understand that multiple contexts are being considered here, including:
My relationship to the other person
Their relationship to me
The different contexts of their expertise
The impact that time can have on trust
This list, simple as it is, already exposes several important points about trust relationships to which we will return time and time again in this book: they are asymmetric (trust may be different in one direction to another), they are contextual (medical expertise and diving equipment expertise are not the same), and they are affected by time. As noted earlier, this book is not about human relationships and trust—though how we consider our relationships will be important to our discussions—but about trust in computing systems. Too often, we do not think much about trust relationships between computing systems (hardware, software, and firmware), and when we do, the sort of statements that tend to emerge are "This component trusts the server" or "We connect to this trusted system". Of course, in the absence of significantly greater levels of artificial intelligence than are currently in evidence at the time of writing, computing systems cannot make the sort of complex and nuanced decisions about trust relationships that humans make; but it turns out that trust is vitally important in computing systems, unstated and implicit though it usually is.
There is little discussion about trust—that is, computer-to-computer or machine-to-machine trust—within the discipline or professional practice of computing, and very little literature about it except in small, specialised fields. The discussions that exist tend to be academic, and there is little to find in the popular professional literature—again, with the exception of particular specialised fields. When the subject of trust comes up in a professional IT or computing setting, however, people are often very interested in discussing it. The problem is that when you use the word trust, people think they know what you mean. It turns out that they almost never do. What one person's view of trust entails is almost always different—sometimes radically different—from that of those to whom they are speaking. Within computing, we are used to talking about things and having a shared knowledge, at least to some degree of approximation. Some terms are fairly well defined in the industry, at least in general conversation: for example, cryptography, virtualisation, and kernel. Even a discussion on more nebulous concepts such as software or networking or authentication generally starts from a relatively well-defined shared understanding. The same is not true of trust, but trust is a concept that we definitely need to get our heads around, establishing a core underpinning and beginning to frame the shared meaning we hope to convey.
Why is there such a range of views around trust? We have already looked at some of the complexity of trust between humans. Let us try to tease out some of the reasons for people's confusion by starting with four fairly innocuous, simple-looking statements:
I trust my brother and my sister.
I trust my bank.
My bank trusts its IT systems.
My bank's IT systems trust each other.
When you make four statements like this, it quickly becomes clear that something different is going on in each case. Specifically, the word trust signifies something very different in each of the four statements. Our first step is to make the decision to avoid using the word trust as a transitive verb—a word with a simple object, as in these examples—and instead talk about trust relationships to another entity. This is because there is a danger, when using the word trust transitively, that we may confuse a unidirectional relationship with a bidirectional relationship. In the second case, for example, the bank may well have a relationship with me, but it is how I think of the bank, and therefore how I interact with it, which is the relationship that we want to examine. This is not to say that the relationship the bank has with me is irrelevant to the one I have with it—it may well inform my relationship—but that the bank's relationship with me is not the focus. For the same reason, we will generally talk about the "trust relationship to" another entity, rather than the "trust relationship with" another, to avoid implying a bidirectional relationship. The standard word used to describe the entity doing the trusting is trustor, and the entity being trusted is the trustee—though we should not confuse this word with other uses (such as the word trustee as used in the context of prisons or charity boards).
Analysing Our Trust Statements
The four cases of trust relationships that we have noted may look similar, but there are important differences that will shed light on some important concepts to which we will return throughout the book and that will help us define exactly what our subject matter is.
Case 1: My Trusting My Brother and Sister As we have already discussed, this statement is about trust between individual humans—specifically, my trust relationship to my brother, and my trust relationship to my sister. There are two humans involved in each case (both me and whichever sibling we are considering), with all of the complexity that this entails. But we share a set of assumptions about how we react, and we each have tens of thousands of years of genetics plus societal and community expectations to work out how these relationships should work.
Case 2: My Trusting My Bank Our second statement is about trust between an individual and an organisation: specifically, my trust relationship to a legal entity with particular services and structure. The basis of the expression of this relationship has changed over the years in many places: the relationship I would have had in the UK with my bank 50 years ago, say, would often have been modelled mainly on the relationship I had with one or more individuals employed by the bank, typically a manager or deputy manager of a particular branch. My trust relationship to the bank now is more likely to be swayed by my views on its perceived security practices and its exercising of fiscal and ethical responsibilities than my views of the manager of my local branch—if I have even met them. There is, however, still a human element associated with my relationship, at least in my experience: I know that I can walk into a branch, or make a call on the phone, and speak to a human.³
Case 3: The Bank Trusting Its IT Systems Our third statement is about an organisation trusting its IT systems. When we follow our new resolution to rephrase this as "The bank having a trust relationship to its IT systems", it suddenly feels like we have moved into a very different type of consideration from the initial two cases. Arguably, for some of the reasons mentioned earlier about interacting with humans in a bank, we realise that there is a large conceptual difference between the first and second cases as well. But we are often lulled into a false sense of equivalence because when we interact with a bank, it is staffed by people, and it also enjoys many of the legal protections afforded to an individual. There are still humans in this case, though, in that we can generally assume that it is the intention of certain humans who represent the bank to have a trust relationship to certain IT systems. The question of what we mean by "represent the bank" is an interesting one when we consider when we might use this phrase in practice. Might it be in a press conference, with a senior executive saying that "the bank trusts its IT systems"? What might that mean? Or it could be in a conversation between a regulator or auditor and the chief information security officer (CISO) of the bank. Who is "the bank" that is being referred to in this situation, and what does this trust mean?
Case 4: The IT Systems Trusting Each Other As we move to our fourth case, it is clear that we have transitioned to yet another very different space. There are no humans involved in this set of trust relationships unless we attribute agency to specific systems; and if so, which? What, then, is doing the trusting, and what does the word trust even mean in this context? The question of agency raised earlier—about an entity representing someone else, as a literary agent represents an author or a federal agent represents a branch of government—may allow us to consider what is going on. We will return to this question later in this chapter.
The four cases we have discussed show that we cannot just apply the same word, trust, to all of these different contexts and assume that it means the same thing in each case. We need to differentiate between them: what is going on, who is trusting whom to do what, and what trust in that instance truly means.
What Is Trust?
What, then, is trust? What do we mean, or hope to convey, when we use this word? This question gets a whole chapter to itself; but to start to examine it, its effects, and the impact of thinking about trust within computing systems, we need a definition. Here is the one we will use as the basis for the rest of the book. It is in part derived from a definition by Gambetta⁴ and refined after looking at multiple uses and contexts.
Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.
This is a good start, but we can go a little further, so let us propose three corollaries to sit alongside this definition. We will go into more detail for each later.
First Corollary: Trust is always contextual.
Second Corollary: One of the contexts for trust is always time.
Third Corollary: Trust relationships are not symmetrical.
This set of statements should come as no surprise: it forms the basis for the initial examination of the trust relationships that I have to my brother and sister, described at the beginning of this chapter. Let us re-examine those relationships and try to define them in terms of our definition of trust and its corollaries. First, we deal with the definition:
The entities identified are a) me and b) my siblings.
The actions ranged from performing an emergency appendectomy to servicing my scuba gear.
The expectation was fairly complex, even in this simple example: it turns out that "trusting someone with my life" can mean a variety of things, from performing specific actions to remedy an emergency medical condition, to performing actions that, if neglected or incorrectly carried out, could cause my death.
We find that we have addressed the first corollary—that trust is always contextual:
The contexts included my having a cardiac arrest, requiring an appendectomy, and planning to go scuba diving.
Time, the second corollary, is also covered:
My sister has not recently renewed her diving instructor training, so I might have less trust in her to service my diving gear than I might have done 10 years ago.
The third corollary about the asymmetry of trust is so obvious in human relationships that we often ignore it, but is very clear in our examples:
I am neither a doctor nor a trained scuba diving instructor, so my brother and sister trust me neither to provide emergency medical care nor to service their scuba gear.
Let us restate one of these relationships in the form of our definition and corollaries about trust:
I hold an assurance that my brother will provide me with emergency medical aid in the event that I require immediate treatment.
This is a good statement of how I view the relationship from me to my brother, but what can we gain with more detail? Let us use the corollaries to move us to a better description of the relationship.
First Corollary: The medical aid is within an area of practice in which he has trained or with which he is familiar.
Second Corollary: My brother will only undertake procedures for which his training is still sufficiently recent that he feels confident that he can perform them without further detriment to my health.
Third Corollary: My brother does not expect me to provide him with emergency medical aid.
This may seem like an immense amount of unpacking to do on what was originally presented as a simple statement. But when we move over to the world of computing systems, we need to consider exactly this level of detail, if not an even greater level.
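To preview the kind of explicitness this unpacking demands of computing systems, here is a minimal sketch (in Python; all names and values are hypothetical, not drawn from the book) of a data structure that records a trust relationship in the terms of our definition and corollaries: a trustor, a trustee, expected actions, a context, and a time bound. Because each record captures only one direction, asymmetry falls out of the model naturally.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch: one record per *direction* of trust. The third
# corollary (asymmetry) is modelled by making A->B and B->A separate records.
@dataclass
class TrustRelationship:
    trustor: str            # the entity holding the assurance
    trustee: str            # the entity expected to perform the actions
    actions: list[str]      # the particular actions expected
    context: str            # first corollary: trust is always contextual
    established: datetime   # second corollary: time is always a context
    valid_for: timedelta    # assurance decays; re-evaluate after this period

    def is_current(self, now: datetime) -> bool:
        """Is this assurance still within its time bound?"""
        return now < self.established + self.valid_for

# The relationship to my brother, stated in one direction only.
rel = TrustRelationship(
    trustor="me",
    trustee="my brother",
    actions=["provide emergency medical aid"],
    context="areas of medicine in which he has recently trained",
    established=datetime(2021, 1, 1),
    valid_for=timedelta(days=365),
)
print(rel.is_current(datetime(2021, 6, 1)))  # True: within the time bound
```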
Let us begin moving into the world of computing and see what happens when we start to apply some of these concepts there. We will begin with the concept of a trusted platform: something that is often a requirement for any computation that involves sensitive data or algorithms. Immediately, questions present themselves. When we talk about a trusted platform, what does that mean? It must surely mean that the platform is trusted by an entity (the workload?) to perform particular actions (provide processing time and memory?) whilst meeting particular expectations (not inspecting program memory? maintaining the integrity of data?). But the context of what we mean for a trusted platform is likely to be very different between a mobile phone, a military installation, and an Internet of Things (IoT) gateway. That trust may erode over time (are patches applied? Is there also a higher likelihood that an attacker may have compromised the platform a day, a month, or a year after the workload was provisioned to it?). We should also never simply say, following the third corollary (on the lack of trust symmetry), that these entities "trust each other" without further qualification, even if we are referring to the relationships between one trusted system and another trusted system.
One concrete example that we can use to examine some of these questions is when we connect to a web server using a browser to purchase a product or service. Once they connect, the web server and the browser may establish trust relationships, but these are definitely not symmetrical. The browser has probably established that the web server represents the provider of particular products and services with sufficient assurance for the person operating it to give up credit card details. The web server has probably established that the browser currently has permission to access the account of the user operating it. However, we already see some possible confusion arising about what the entities are: what is the web server, exactly? The unique instance of the server's software, the virtual machine in which it runs (if, in fact, it is running in a virtual machine), a broader and more complex computer system, or something entirely different? And what ability can the browser have to establish that the person operating it can perform particular actions?
These questions—about how trust is represented and to do what—are related to agency and will also help us consider some of the questions that arose around the examples we considered earlier about banks and their IT systems.
What Is Agency?
When you write a computer program that prints out "Hello, world!", who is "saying" those words: you or the computer? This may sound like an idle philosophical question, but it is more than that: we need to be able to talk about entities as part of our definition of trust, and in order to do that, we need to know what entity we are discussing.
What exactly, then, does agency mean? It means acting for someone: being their agent—think of what actors' agents do, for example. When we engage a lawyer or a builder or an accountant to do something for us, we set very clear boundaries about what they will be doing on our behalf. This is to protect both us and the agent from unintended consequences. There exists a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent for another person or organisation. There are contracts and agreed restitutions—basically, punishments—for when things go wrong. Say that my accountant buys 500 shares in a bank with my money, and then I turn around and say that they never had the authority to do so: if we have set up the relationship correctly, it should be entirely clear whether or not the accountant had that authority and whose responsibility it is to deal with any fallout from that purchase.
The situation is not so clear when we start talking about computer systems and agents. To think a little more about this question, here are two scenarios:
In the classic film WarGames, David Lightman (Matthew Broderick's character) has a computer that goes through a list of telephone numbers, dialling them and then recording the number for later investigation if they are answered by another machine that attempts to perform a handshake. Do we consider that the automatic dialling Lightman's computer performs is carried out as an act with agency? Or is it when the computer connects to another machine? Or when it records the details of that machine? I suspect that most people would not argue that the computer is acting with agency once Lightman gets it to complete a connection and interact with the other machine—that seems very intentional on his part, and he has taken control—but what about before?
Google used to run automated programs against messages received as part of the Gmail service.⁵ The programs were looking for information and phrases that Google could use to serve ads. The company were absolutely adamant that they, Google, were not doing the reading: it was just the computer programs.⁶ Quite apart from the ethical concerns that might be raised, many people would (and did) argue that Google, or at least the company's employees, had imbued these automated programs with agency so that philosophically—and probably legally—the programs were performing actions on behalf of Google. The fact that there was no real-time involvement by any employee is arguably unimportant, at least in some contexts.
This all matters because in order to understand trust, we need to identify an entity to trust. One current example of this is self-driving cars: whose fault is it when one goes wrong and injures or kills someone? Equally, when the software in certain Boeing 737 MAX 8 aircraft malfunctioned,⁷ pilots—who can be said to have trusted the software—and passengers—who equally can be said to have trusted the pilots and their ability to fly the aircraft correctly—lost their lives. What exactly was the entity to which they had a trust relationship, and how was that trust managed?
Another example may help us to consider the question of context. Consider a hypothetical automated defence system for a military base in a war zone. Let us say that, upon identifying intruders via its cameras, the system is programmed to play a recording over loudspeakers, warning them to move away; and, in the case that they do not leave within 30 seconds of a warning, to use physical means up to and including lethal force to stop them proceeding any further. The base commander trusts the system to perform its job and stop intruders: a trust relationship exists between the base commander and the automated defence system. Thus, in the language of our definition of trust:
The base commander holds an assurance that the automated defence system will identify, warn, and then stop intruders who enter the area within its camera and weapon range.
We have a fair amount of context already embedded within this example. We stated up front that the base is in a war zone, and we have mentioned the range of the cameras and weapons. A problem arises, however, when the context changes. What if, for instance:
The base is no longer in a war zone, and rules of engagement change
Children enter the coverage area who do not understand the warnings or are unable to leave the area
A surge of refugees enters the area—so many that those at the front are unable to move, despite hearing and understanding the warning
These may seem to be somewhat contrived examples, but they serve to show how brittle trust relationships can be when contexts change. If the entity being trusted with defence of the base were a soldier, we would hope the soldier could be much more flexible in reacting to these sorts of changes, or at least know that the context had changed and protocol dictated contacting a superior or other expert for new orders. The same is not true for computer systems. They operate in specific contexts; and unless they are architected, designed, and programmed to understand not only that other contexts exist but also how to recognise changes in contexts and how their behaviour should change when they find themselves in a new context, then the trust relationships that other entities have with them are at risk. This can be thought of as an example of programmatically encoded bias: only certain contexts were considered in the design of the system, which means inflexibility is inherent in the system when other contexts are introduced or come into play.
In our example of the automated defence system, at least the base commander or empowered subordinate has the opportunity to realise that a change in context is possible and to reprogram or switch off the system: the entity who has the relationship to the system can revise the trust relationship. A much bigger problem arises when both entities are actually computing systems and the context in which they are operating changes or, just as likely, they are used in contexts for which they were not designed—or, put another way, in contexts their designers neglected to imagine. How to define such contexts, and the importance of identifying when contexts change, will feature prominently in later chapters.
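One way to make such design-time context assumptions explicit rather than implicit is for a system to check, at decision time, that it is still operating in a context it was designed for, and to hand control back to a human when it is not. The following is a minimal sketch of that pattern (Python; the context labels and function are invented for illustration, not taken from any real system):

```python
# Hypothetical sketch: act autonomously only in contexts considered at design
# time; in any other context, escalate to a human rather than proceeding.
DESIGNED_CONTEXTS = {"wartime-rules-of-engagement"}

def decide(observed_context: str, proposed_action: str) -> str:
    """Return the action to take, escalating when the context is unrecognised."""
    if observed_context not in DESIGNED_CONTEXTS:
        # The context has changed (or was never imagined by the designers),
        # so the trust others place in this system is no longer well-founded.
        return f"escalate to human operator (context: {observed_context})"
    return f"proceed with: {proposed_action}"

print(decide("wartime-rules-of-engagement", "play warning recording"))
print(decide("peacetime", "play warning recording"))  # escalates instead
```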
Trust and Security
Another important topic in our discussion of trust is security. Our core interest, of course, is security in the realm of computing systems, sometimes referred to as cyber-security or IT security. But although security within the electronic and online worlds has its own peculiarities and specialities, it is generally derived from equivalent or similar concepts in "real life": the non-electronic, human-managed world that still makes up most of our existence and our interactions, even when the interactions we have are "digitally mediated" via computer screens and mobile phones. When we think about humans and security, there is a set of things that we tend to identify as security-related, of which the most obvious and common are probably stopping humans going into places they are not supposed to visit, looking at things they are not supposed to see, changing things they are not supposed to alter, moving things that they are not supposed to shift, and stopping processes that they are not supposed to interrupt. These concepts are mirrored fairly closely in the world of computer systems:
Authorisation: Stopping entities from going into places
Confidentiality: Stopping entities from looking at things
Integrity: Stopping entities from moving and altering things
Availability: Stopping entities from interrupting processes
Exactly what constitutes a core set of security concepts is debatable, but this is a reasonably representative list. Related topics, such as identification and authentication, allow us to decide whether a particular person should be stopped or allowed to perform certain tasks; and categorisation allows us to decide which things particular humans are allowed to alter, or which places they may enter. All of these will be useful as we begin to pick apart in more detail how we define trust.
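As a small illustration of one of these building blocks in code, here is a sketch of authorisation (Python; the policy table and names are invented): identification and authentication are assumed to have already happened, and the check simply decides whether a given entity may enter a given place, denying by default.

```python
# Hypothetical sketch of authorisation: stopping entities from going into
# places. Identification and authentication are assumed to have happened.
POLICY = {
    ("alice", "server room"): True,
    ("bob", "server room"): False,
}

def authorised(entity: str, place: str) -> bool:
    """Deny by default: no explicit grant means the entity is stopped."""
    return POLICY.get((entity, place), False)

assert authorised("alice", "server room")
assert not authorised("bob", "server room")
assert not authorised("mallory", "server room")  # unknown entities are denied
```

The deny-by-default design choice matters: an entity with no recorded relationship at all should be stopped, not waved through.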
Let us look at one of these topics in a little more detail, then, to allow us to consider its relationship to trust. Specifically, we will examine it within the context of computing systems.
Confidentiality is a property that is often required for certain components of a computer system. One oft-used example is when I want to pay for some goods over the Web. When I visit a merchant, the data I send over the Internet should be encrypted; the sign that it is encrypted is typically the little green shield or padlock that I see on the browser bar by the address of the merchant. We will look in great detail at this example later on in the book, but the key point here is that the data—typically my order, my address, and my credit card information—is encrypted before it leaves my browser and decrypted only when it reaches the merchant. The merchant, of course, needs the information to complete the order, so I am happy for the encryption to last until it reaches their server.
What exactly is happening, though? Well, a number of steps are involved to get the data encrypted and then decrypted. This is not the place for a detailed description,⁸ but what happens at a basic level is that my browser and the merchant's server use a well-understood protocol—most likely HTTP + SSL/TLS—to establish enough mutual trust for an encrypted exchange of information to take place. This protocol uses algorithms, which in turn employ cryptography to do the actual work of encryption. What is important to our discussion, however, is that each cryptographic protocol used across the Internet, in data centres, and by governments, banks, hospitals, and the rest, though different, uses the same cryptographic "pieces" as its building blocks. These building blocks are referred to as cryptographic primitives and range from asymmetric and symmetric algorithms through one-way hash functions and beyond. They facilitate the construction of some of the higher-level concepts—in this case, confidentiality—which means that correct usage of these primitives allows for systems to be designed that make assurances about certain properties.
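To make the browser-and-merchant flow a little more concrete, here is a short sketch of the client side of such a protocol using Python's standard ssl module (the host name is purely illustrative): the server's certificate chain is verified against the system's trusted roots, and only if that succeeds is the channel treated as confidential.

```python
import socket
import ssl

# Client side of the handshake described above: verify the merchant's
# certificate against trusted root CAs before sending anything sensitive.
context = ssl.create_default_context()  # loads system trust roots and
                                        # enables host-name checking

host = "example.com"  # illustrative merchant, not a real endpoint
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Reaching this point means the chain verified and the channel is
        # encrypted; a failure raises ssl.SSLCertVerificationError instead.
        print("negotiated:", tls.version(), tls.cipher()[0])
        print("server certificate subject:", tls.getpeercert()["subject"])
```

Note how one-sided this is: the sketch establishes the browser's trust relationship to the server, but says nothing about the server's relationship to the browser or its user.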
One lesson we can learn from the world of cryptography is that while using it should be easy, designing cryptographic algorithms is often very hard. While it may seem simple to create an algorithm or protocol that obfuscates data—think of a simple shift cipher that moves all characters in a given string "up" one letter in the alphabet—it is extremely difficult to do it well enough that it meets the requirements of real-world systems. An oft-quoted dictum of cryptographers is, "Any fool can create a cryptographic protocol that they can't defeat"; and part of learning to understand and use cryptography well is, in fact, the experience of designing such protocols and seeing how other people more expert than oneself go about taking them apart and compromising them.
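To see why the naive approach fails, here is the shift cipher from the paragraph above in a few lines of Python, together with the equally short attack that defeats it: there are only 25 possible non-trivial keys, so an attacker simply tries them all and reads off the plaintext.

```python
import string

ALPHABET = string.ascii_lowercase

def shift(text: str, key: int) -> str:
    """Shift each letter 'up' the alphabet by key positions (a Caesar cipher)."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) + key) % 26] if c in ALPHABET else c
        for c in text
    )

ciphertext = shift("attack at dawn", 1)  # 'buubdl bu ebxo'

# The 'attack': with only 25 candidate keys, exhaustive search is instant.
for key in range(1, 26):
    print(key, shift(ciphertext, -key))  # key 1 recovers the plaintext
```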
Let us return to the topics we noted earlier: authorisation, integrity, etc. None of them defines trust, but we will think of them as acting as building blocks when we start considering trust relationships in more detail. Like the primitives used in encryption, these concepts can be combined in different ways to allow us to talk about trust of various kinds and build systems to model the various trust relationships we need to manage. Also like cryptographic primitives, it is very easy to use these primitives in ways that do not achieve what we wish to achieve and can cause confusion and error for those using them.
Why is all of this important? Because trust is important to security. We typically use security to try to enforce trust relationships because humans are not, sadly, fundamentally trustworthy. This book argues that computing systems are not fundamentally trustworthy either, but for somewhat different reasons. It would be easy to think that computing systems are neutral with regard to trust, that they just sit there and do what they do; but as we saw when we looked briefly at agency, computers act for somebody or something, even when the actions they take are unintended⁹ or not as intended. Equally, they may be maliciously or incompetently directed (programmed or operated). But worst, and most common of all, they are often—usually—unconsciously and implicitly placed into trust relationships with other systems, and ultimately humans and organisations, often outside the contexts for which they were designed. The main goal of this book is to encourage people designing, creating, and operating computer systems to be conscious and explicit in their actions around trust.
Trust as a Way for Humans to Manage Risk
Risk is a key concept to be able to consider when we are talking about security. There is a common definition of risk within the computing community, which is also shared within the business community:
risk = probability × loss

In other words, the risk associated with an event is the likelihood that it will occur multiplied by the impact to be considered if it were to occur. Probability is expressed as a number between 0 and 1 (0 being no possibility of occurrence, 1 being certainty), and the loss can be explicitly stated either as an amount of money or as another type of impact. The point of the formula is to allow risks to be compared; and as long as the different calculations use the same measure of loss, it is generally unimportant what measure is employed. To give an example, let us say that I am interested in the risk of my new desktop computer failing in the first three years of its life. I do some research and discover that the likelihood of the keyboard failing is 4%, or 0.04, whereas the likelihood of the monitor failing is only 1%, or 0.01. If I were to consider this information on its own, it would seem that I should worry more about the keyboard than the monitor, until I take into account the cost of replacement: the keyboard would cost me $15 to replace, whereas the monitor would cost me $400 to replace. We have the following risk calculations then:
risk(keyboard) = 0.04 × $15 = $0.60
risk(monitor) = 0.01 × $400 = $4.00

It turns out that if I care about risk, I should be more concerned about the monitor than the keyboard. Once we have calculated the risk, we can then consider mitigations: what to do to manage the risk. In the case of my desktop computer, I might decide to take out an extended manufacturer's warranty to cover the monitor but just choose to buy a new keyboard if that breaks.
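The same calculation expressed in code (a sketch; the probabilities and replacement costs are the ones given in the example above):

```python
def risk(probability: float, loss: float) -> float:
    """Risk = probability of the event occurring x loss incurred if it does."""
    return probability * loss

components = {
    "keyboard": (0.04, 15.00),   # 4% failure likelihood, $15 to replace
    "monitor":  (0.01, 400.00),  # 1% failure likelihood, $400 to replace
}

for name, (p, loss) in components.items():
    print(f"{name}: ${risk(p, loss):.2f}")
# keyboard: $0.60
# monitor:  $4.00 -> the monitor carries the greater risk
```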
Risk is all around us and has been since before humans became truly human, living in groups and inhabiting a social structure. We can think of risk as arising in four categories: