
8.1 Trust and Threats

Before we address the how’s and why’s of building secure networks, it is important to establish one simple truth: we will inevitably fail. This is because security is ultimately an exercise in making assumptions about trust, evaluating threats, and mitigating risk. There is no such thing as perfect security.

Trust and threats are two sides of the same coin. A threat is a potential failure
scenario that you design your system to avoid, and trust is an assumption you make
about how external actors and internal components you build upon will behave. For
example, if you are transmitting a message over WiFi on an open campus, you
would likely identify an eavesdropper that can intercept the message as a threat (and
adopt some of the methods discussed in this chapter as a countermeasure), but if
you are transmitting a message over a fiber link between two machines in a locked
datacenter, you might trust that the channel is secure, and so take no additional steps.

You could argue that since you already have a way to protect WiFi-based communication, you might just as well use it to protect the fiber-based channel, but that
presumes the outcome of a cost/benefit analysis. Suppose protecting any message,
whether sent over WiFi or fiber, slows the communication down by 10% due to the
overhead of encryption. If you need to squeeze every last ounce of performance out
of a scientific computation (e.g., you are trying to model a hurricane) and the odds of
someone breaking into the datacenter are one in a million (and even if they did, the
data being transmitted has little value), then you would be well-justified in not
securing the fiber communication channel.
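
To make that cost/benefit reasoning concrete, here is a minimal back-of-the-envelope sketch in Python. The 10% overhead and one-in-a-million odds come from the example above; the job length and the cost of a breach are invented purely for illustration.

```python
# Back-of-the-envelope risk calculation for the fiber-link example above.
# The 10% overhead and one-in-a-million odds come from the text; the job
# duration and breach cost are made-up numbers for illustration only.

def expected_cost(job_hours, overhead_fraction, breach_probability, breach_cost_hours):
    """Compare the certain cost of encryption against the expected cost of a breach."""
    encryption_cost = job_hours * overhead_fraction          # always paid
    breach_risk = breach_probability * breach_cost_hours     # paid only if attacked
    return encryption_cost, breach_risk

encrypt_cost, risk = expected_cost(
    job_hours=1000,            # hypothetical hurricane model: 1000 compute-hours
    overhead_fraction=0.10,    # encryption slows communication by 10%
    breach_probability=1e-6,   # odds of someone breaking into the datacenter
    breach_cost_hours=1000,    # assume a breach wastes the entire run
)

print(f"Cost of encrypting everything: {encrypt_cost:.1f} hours")
print(f"Expected cost of trusting the fiber link: {risk:.4f} hours")
# With these numbers, 100 hours of certain overhead versus 0.001 hours of
# expected loss, so skipping encryption on the trusted link is easily justified.
```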

These sorts of calculations happen all the time, although they are often implicit and
unstated. For example, you may run the world’s most secure encryption algorithm on
a message before transmitting it, but you’ve implicitly trusted that the server you’re
running on is both faithfully executing that algorithm and not leaking a copy of your
unencrypted message to an adversary. Do you treat this as a threat or do you trust
that the server does not misbehave? At the end of the day, the best you can do is
mitigate risk: identify those threats that you can eliminate in a cost-effective way, and
be explicit about what trust assumptions you are making so you aren’t caught off-
guard by changing circumstances, such as an ever more determined or
sophisticated adversary.
In this particular example, the threat of an adversary compromising a server has
become quite real as more of our computations move from local servers into the
cloud, and so research is now going into building a Trusted Computing Base (TCB),
an interesting topic, but one that is in the realm of computer architecture rather than
computer networks. For the purpose of this chapter, our recommendation is to pay
attention to the words trust and threat (or adversary), as they are key to
understanding the context in which security claims are made.

There is one final historical note that helps set the table for this chapter. The Internet (and the ARPANET before it) were funded by the U.S. Department of Defense, an organization that certainly understands threat analysis. The original assessment was dominated by concerns about the network surviving in the face of routers and networks failing (or being destroyed), which explains why the routing algorithms are decentralized, with no central point of failure. On the other hand, the original design assumed all actors inside the network were trusted, and so little or no attention was paid to what today we would call cybersecurity (attacks from bad actors that are able to connect to the network). What this means is that many of the tools described in this chapter could be considered patches. They are strongly grounded in cryptography, but “add-ons” nonetheless. If a comprehensive redesign of the Internet were to take place, integrating security would likely be the foremost driving factor.
From Web Services to Cloud Services

If Web Services is what we call it when the web server that implements my
application sends a request to the web server that implements your application, then
what do we call it when we both put our applications in the cloud so that they can
support scalable workloads? We can call both of them Cloud Services if we want to,
but is that a distinction without a difference? It depends.

Moving a server process from a physical machine running in my machine room into a
virtual machine running in a cloud provider’s datacenter shifts responsibility for
keeping the machine running from my system admin to the cloud provider’s
operations team, but the application is still designed according to the Web Services
architecture. On the other hand, if the application is designed from scratch to run on
a scalable cloud platform, for example by adhering to the micro-services
architecture, then we say the application is cloud native. So the important distinction
is cloud native versus legacy web services deployed in the cloud.

We briefly saw the micro-services architecture in Chapter 5 when describing gRPC, and although it’s difficult to definitively declare micro-services superior to web services, the current trend in industry almost certainly favors the former. More interesting, perhaps, is the ongoing debate about REST+JSON versus gRPC+Protobufs as the preferred RPC mechanism for implementing micro-services. Keeping in mind that both run on top of HTTP, we leave it as an exercise for the reader to pick a side and defend it.
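
To give the debate some shape, here is a rough sketch of the same hypothetical “get user” call made both ways in Python. The URL, message fields, and generated module names (user_pb2, user_pb2_grpc, UserService) are invented for illustration, and the gRPC half assumes stubs generated from a corresponding .proto file.

```python
# A hypothetical "get user" call expressed both ways. The URL, message
# fields, and generated stub names are invented for illustration.

# REST+JSON: any HTTP client will do; the payload is self-describing text.
import requests

resp = requests.get("https://api.example.com/users/42")
user = resp.json()            # e.g., {"id": 42, "name": "Alice"}

# gRPC+Protobufs: assumes user_pb2 / user_pb2_grpc were generated from a
# .proto file that defines a UserService with a GetUser RPC.
import grpc
import user_pb2
import user_pb2_grpc

channel = grpc.insecure_channel("api.example.com:50051")
stub = user_pb2_grpc.UserServiceStub(channel)
user = stub.GetUser(user_pb2.GetUserRequest(id=42))   # binary Protobuf on the wire
```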
OpenConfig

SNMP is still widely used and has historically been “the” management protocol for switches and routers, but more flexible and powerful ways to manage networks have recently been attracting growing attention. There isn’t yet complete agreement on an industry-wide standard, but a consensus about the general approach is starting to emerge. We describe one example, called OpenConfig, which is both gaining a lot of traction and illustrative of many of the key ideas being pursued.

The general strategy is to automate network management as much as possible, with the goal of getting the error-prone human out of the loop. This is sometimes called zero-touch management, and it implies two things have to happen. First, whereas historically operators used tools like SNMP to monitor the network but had to log into any misbehaving network device and use a command line interface (CLI) to fix the problem, zero-touch management implies that we also need to configure the network programmatically. In other words, network management is equal parts reading status information and writing configuration information. The goal is to build a closed control loop, although there will always be scenarios where the operator has to be alerted that manual intervention is required.
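
A minimal sketch of such a closed control loop, in Python, might look like the following; the device dictionary and helper functions are simplified stand-ins for real monitoring and configuration calls.

```python
# A minimal sketch of a zero-touch control loop: read the device's status,
# compare it to the desired state, push corrective configuration, and only
# alert a human when automation cannot converge. The device dictionary and
# helpers are stand-ins for real monitoring/configuration calls.
import time

MAX_RETRIES = 3

def read_status(device):
    return device["running_config"]            # stand-in for a monitoring query

def apply_config(device, desired):
    device["running_config"] = dict(desired)   # stand-in for a config push

def alert_operator(device, desired):
    print(f"manual intervention needed on {device['name']}")

def reconcile(device, desired):
    """Closed control loop: equal parts reading status and writing config."""
    for _ in range(MAX_RETRIES):
        if read_status(device) == desired:
            return True                        # device is consistent with intent
        apply_config(device, desired)
        time.sleep(1)                          # give the change time to take effect
    alert_operator(device, desired)            # automation failed; alert a human
    return False

router = {"name": "r1", "running_config": {"mtu": 1500}}
reconcile(router, {"mtu": 9000})
```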

Second, whereas historically the operator had to configure each network device
individually, all the devices have to be configured in a consistent way if they are
going to function correctly as a network. As a consequence, zero-touch also implies
that the operator should be able to declare their network-wide intent, with the
management tool being smart enough to issue the necessary per-device
configuration directives in a globally consistent way.
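
As a rough illustration, the following Python sketch expands a single network-wide intent, such as “put every access port in VLAN 10,” into per-device configuration directives; the intent format and device inventory are hypothetical.

```python
# A sketch of translating one network-wide intent into per-device directives.
# The intent format and device inventory are invented for illustration.

intent = {"vlan": 10, "role": "access"}       # "put every access port in VLAN 10"

inventory = {
    "switch1": {"eth0": "access", "eth1": "uplink"},
    "switch2": {"eth0": "access", "eth1": "access"},
}

def compile_intent(intent, inventory):
    """Expand a single declarative intent into consistent per-device configs."""
    configs = {}
    for device, ports in inventory.items():
        configs[device] = {
            port: {"vlan": intent["vlan"]}
            for port, role in ports.items()
            if role == intent["role"]          # only ports matching the intent's role
        }
    return configs

for device, cfg in compile_intent(intent, inventory).items():
    print(device, cfg)
# switch1 {'eth0': {'vlan': 10}}
# switch2 {'eth0': {'vlan': 10}, 'eth1': {'vlan': 10}}
```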
Figure 234. Operator manages a network through a configuration and management tool,
which in turn programmatically interacts with the underlying network devices (e.g., using
gNMI as the transport protocol and YANG to specify the schema for the data being
exchanged).

Figure 234 gives a high-level depiction of this idealized approach to network management. We say “idealized” because achieving true zero-touch management is still more an aspiration than a reality. But progress is being made. For example, new management tools are starting to leverage standard protocols like HTTP to monitor and configure network devices. This is a positive step because it gets us out of the business of creating yet another request/reply protocol and lets us focus on creating smarter management tools, perhaps by taking advantage of Machine Learning algorithms to determine if something is amiss.

In the same way HTTP is starting to replace SNMP as the protocol for talking to
network devices, there is a parallel effort to replace the MIB with a new standard for
what status information various types of devices can report, plus what configuration
information those same devices are able to respond to. Agreeing to a single
standard for configuration is inherently challenging because every vendor claims
their device is special, unlike any of the devices their competitors sell. (That is to say,
the challenge is not entirely technical.)

The general approach is to allow each device manufacturer to publish a data model that specifies the configuration knobs (and available monitoring data) for its
product, and limit standardization to the modeling language. The leading candidate is
YANG, which stands for Yet Another Next Generation, a name chosen to poke fun at
how often a do-over proves necessary. YANG can be viewed as a restricted version
of XSD, which you may recall is a language for defining a schema (model) for XML.
That is, YANG defines the structure of the data. But unlike XSD, YANG is not XML-
specific. It can instead be used in conjunction with different over-the-wire message
formats, including XML, but also Protobufs and JSON.
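
As a simplified illustration, the following Python snippet renders the same modeled data in two of those formats; the field names loosely echo the flavor of an OpenConfig interface configuration but are not taken from an actual YANG module.

```python
# The same modeled data rendered in two wire formats. The structure is a
# simplified, made-up interface config; a real YANG module would define
# these fields and their types precisely.
import json
import xml.etree.ElementTree as ET

interface = {"name": "eth0", "mtu": 9000, "enabled": True}

# JSON encoding
print(json.dumps({"interface": interface}, indent=2))

# XML encoding of the same data
root = ET.Element("interface")
for key, value in interface.items():
    ET.SubElement(root, key).text = str(value)
print(ET.tostring(root, encoding="unicode"))
```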

What’s important about this approach is that the data model defines the semantics of
the variables that are available to be read and written in a programmatic form (i.e.,
it’s not just text in a standards specification). It’s not a free-for-all with each vendor
defining a unique model since the network operators that buy network hardware
have a strong incentive to drive the models for similar devices towards convergence.
YANG makes the process of creating, using, and modifying models more
programmable, and hence, adaptable to this process.

This is where OpenConfig comes in. It uses YANG as its modeling language, but has
also established a process for driving the industry towards common models.
OpenConfig is officially agnostic as to the RPC mechanism used to communicate
with network devices, but one approach it is actively pursuing is called gNMI (gRPC
Network Management Interface). As you might guess from its name, gNMI uses gRPC, which, you may recall, runs on top of HTTP. This means gNMI also adopts Protobufs as the way it specifies the data actually communicated over the HTTP connection. Thus, as depicted in Figure 234, gNMI is intended as a standard management interface for network devices. What’s not standardized is the richness of the management tool’s ability to automate, or the exact form of the operator-facing interface. As with any application that is trying to serve a need and support more features than the alternatives, there is still much room for innovation in tools for network management.
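
As a sketch of what this looks like in practice, the following Python fragment reads state and writes configuration for an OpenConfig-modeled interface over gNMI. It assumes the third-party pygnmi library; the device address, port, credentials, path, and MTU value are all placeholders.

```python
# A sketch of reading and writing OpenConfig data over gNMI. Assumes the
# third-party pygnmi library and a device speaking gNMI; the address,
# credentials, and values below are placeholders for illustration.
from pygnmi.client import gNMIclient

with gNMIclient(target=("192.0.2.1", 6030),
                username="admin", password="admin", insecure=True) as gc:
    # Monitoring: read state from the openconfig-interfaces model
    state = gc.get(path=["openconfig-interfaces:interfaces/interface[name=eth0]/state"])

    # Configuration: write to the same model's config subtree
    gc.set(update=[
        ("openconfig-interfaces:interfaces/interface[name=eth0]/config",
         {"mtu": 9000}),
    ])
```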

For completeness, we note that NETCONF is another of the post-SNMP protocols for communicating configuration information to network devices. OpenConfig works with NETCONF, but our reading of the tea leaves points to gNMI as the future.

We conclude by emphasizing that a sea change is underway. While discussing SNMP and OpenConfig side by side in this section suggests they are equivalent, it is more accurate to say that each is “what we call” these two approaches, but the approaches are quite different. On the one hand, SNMP is really just a transport protocol, analogous to gNMI in the OpenConfig world. It historically enabled monitoring devices, but had virtually nothing to say about configuring devices. (The latter has historically required manual intervention.) On the other hand, OpenConfig is primarily an effort to define a common set of data models for network devices, roughly similar to the role the MIB plays in the SNMP world, except OpenConfig is (1) model-based, using YANG, and (2) equally focused on monitoring and configuration.
