UNIT 2
Demand Is Increasing
A number of trends are increasing the load on enterprise networks, the Internet, and other internets.
• Cloud computing: There has been a dramatic shift by enterprises to both public and private cloud
services.
• Big data: The processing of huge data sets requires massive parallel processing on thousands of servers,
all of which require a degree of interconnection to each other.
• Mobile traffic: Employees are increasingly accessing enterprise network resources via mobile personal
devices, such as smartphones, tablets, and notebooks. These devices support sophisticated apps that can
consume and generate image and video traffic, placing new burdens on the enterprise network.
• The Internet of Things (IoT): Most “things” in the IoT generate modest traffic, although there are
exceptions, such as surveillance video cameras.
Supply Is Increasing
As the demand on networks rises, so does the capacity of network technologies to absorb the growing load. The
increase in the capacity of the network transmission technologies has been matched by an increase in the
performance of network devices, such as LAN switches, routers, firewalls, intrusion detection system/intrusion
prevention systems (IDS/IPS), and network monitoring and management systems. Year by year, these devices
have larger, faster memories, enabling greater buffer capacity and faster buffer access, as well as faster processor
speeds.
Traffic Patterns Are More Complex
If it were simply a matter of supply and demand, it would appear that today’s networks should be able to cope
with today’s data traffic. But as traffic patterns have changed and become more complex, traditional enterprise
network architectures are increasingly ill suited to the demand.
A number of developments have resulted in far more dynamic and complex traffic patterns within the
enterprise data center, local and regional enterprise networks, and carrier networks. These include the
following:
• Client/server applications typically access multiple databases and servers that must communicate with
each other, generating “horizontal” traffic between servers as well as “vertical” traffic between
servers and clients.
• Network convergence of voice, data, and video traffic creates unpredictable traffic patterns, often of
large multimedia data transfers.
• Unified communications (UC) strategies involve heavy use of applications that trigger access to
multiple servers.
• The heavy use of mobile devices, including personal bring your own device (BYOD) policies, results
in user access to corporate content and applications from any device, anywhere, at any time.
The traditional architecture relies heavily on the network interface identity. At the physical layer of the TCP/IP
model, devices attached to networks are identified by hardware-based identifiers, such as Ethernet MAC
addresses. At the internetworking level, including both the Internet and private internets, the architecture is a
network of networks. Each attached device has a physical layer identifier recognized within its immediate network
and a logical network identifier, its IP address, which provides global visibility.
The design of TCP/IP uses this addressing scheme to support the networking of autonomous networks, with
distributed control. This architecture provides a high level of resilience and scales well in terms of adding new
networks. Using IP and distributed routing protocols, routes can be discovered and used throughout an internet.
Using transport-level protocols such as TCP, distributed and decentralized algorithms can be implemented to
respond to congestion.
Traditionally, routing was based on each packet’s destination address. In this datagram approach, successive
packets between a source and destination may follow different routes through the internet, as routers constantly
seek to find the minimum-delay path for each individual packet. More recently, to satisfy QoS requirements,
packets are often treated in terms of flows of packets. Packets associated with a given flow have defined QoS
characteristics, which affect the routing for the entire flow.
However, this distributed, autonomous approach developed when networks were predominantly static and end
systems predominantly of fixed location. Based on these characteristics, the Open Networking Foundation (ONF)
cites four general limitations of traditional network architectures.
i. Static, complex architecture: To respond to demands such as differing levels of QoS, high and
fluctuating traffic volumes, and security requirements, networking technology has grown more
complex and difficult to manage. This has resulted in a number of independently defined protocols,
each of which addresses a portion of the networking requirements.
ii. Mobility: Control functionality must accommodate mobility, including mobile user devices and virtual
servers.
iii. Integrated security: Network applications must integrate seamless security as a core service instead of as an
add-on solution.
iv. On-demand scaling: Implementations must have the ability to scale up or scale down the network and its
services to support on-demand requests.
SDN Architecture
An analogy can be drawn between the way in which computing evolved from closed, vertically integrated,
proprietary systems into an open approach to computing and the evolution coming with SDN (see Figure 3.1). In
the early decades of computing, vendors such as IBM and DEC provided a fully integrated product, with
proprietary processor hardware, a unique assembly language, a unique operating system (OS), and the bulk if not all
of the application software. In this environment, customers, especially large customers, tended to be locked in to
one vendor, dependent primarily on the applications offered by that vendor. Migration to another vendor’s
hardware platform resulted in major upheaval at the application level.
Today, the computing environment is characterized by extreme openness and great customer flexibility. The bulk
of computing hardware consists of x86 and x86-compatible processors for standalone systems and ARM
processors for embedded systems. Even proprietary systems such as Windows and Mac OS provide programming
environments to make porting of applications an easy matter. This openness also enables the development of
virtual machines that can be moved from one server to another across hardware platforms and operating systems.
The networking environment today faces some of the same limitations faced in the pre-open era of computing.
Here the issue is not developing applications that can run on multiple platforms. Rather, the difficulty is the lack
of integration between applications and network infrastructure. As demonstrated in the preceding section,
traditional network architectures are inadequate to meet the demands of the growing volume and variety of traffic.
The central concept behind SDN is to enable developers and network managers to have the same type of control
over network equipment that they have had over x86 servers. The SDN approach splits the switching function
between a data plane and a control plane that are on separate devices (see Figure 3.2). The data plane is simply
responsible for forwarding packets, whereas the control plane provides the “intelligence” in designing routes,
setting priority and routing policy parameters to meet QoS and QoE requirements and to cope with the shifting
traffic patterns. Open interfaces are defined so that the switching hardware presents a uniform interface regardless
of the details of internal implementation. Similarly, open interfaces are defined to enable networking applications
to communicate with the SDN controllers.
Figure 3.3 shows the SDN approach in more detail. The data plane consists of physical switches and virtual
switches. In both cases, the switches are responsible for forwarding packets. The internal implementation of
buffers, priority parameters, and other data structures related to forwarding can be vendor dependent. However,
each switch must implement a model, or abstraction, of packet forwarding that is uniform and open to the SDN
controllers. This model is defined in terms of an open application programming interface (API) between the
control plane and the data plane (southbound API).
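To make the split concrete, the following minimal Python sketch (hypothetical classes, not any real controller or switch API) shows a control plane programming two data plane switches through one uniform rule-installation interface, in the spirit of an open southbound API:

class Switch:
    """Abstract data plane element: it only forwards according to installed rules."""
    def __init__(self, name):
        self.name = name
        self.rules = []  # vendor-internal details stay hidden behind this interface

    def install_rule(self, match, out_port, priority=0):
        self.rules.append({"match": match, "out_port": out_port, "priority": priority})

    def forward(self, packet):
        # The highest-priority matching rule decides the output port.
        hits = [r for r in self.rules
                if all(packet.get(k) == v for k, v in r["match"].items())]
        if not hits:
            return None  # no match: a real switch would consult the controller
        return max(hits, key=lambda r: r["priority"])["out_port"]

class Controller:
    """Control plane: holds the 'intelligence' and programs every managed switch."""
    def __init__(self, switches):
        self.switches = switches

    def set_policy(self, match, out_port, priority=0):
        for sw in self.switches:
            sw.install_rule(match, out_port, priority)

s1, s2 = Switch("s1"), Switch("s2")
ctl = Controller([s1, s2])
ctl.set_policy({"dst": "10.0.0.5"}, out_port=3, priority=10)
print(s1.forward({"dst": "10.0.0.5", "src": "10.0.0.1"}))  # -> 3

The point of the sketch is that the controller never needs to know each switch's internal buffer or table implementation; it only uses the uniform rule-installation interface.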
Industry Consortia
Consortia for open standards began to appear in the late 1980s. There was a growing feeling within private-sector
multinational companies that standards-developing organizations (SDOs) acted too slowly to provide useful standards
in the fast-paced world of technology. Recently, a number of consortia have become involved in the development of SDN and NFV
standards.
By far the most important consortium (A group of independent organizations joined by common interests)
involved in SDN standardization is the Open Networking Foundation (ONF). ONF is an industry consortium
dedicated to the promotion and adoption of SDN through open standards development. Its most important
contribution to date is the OpenFlow protocol and API. The OpenFlow protocol is the first standard interface
specifically designed for SDN and is already being deployed in a variety of networks and networking products,
both hardware based and software based. The standard enables networks to evolve by giving logically centralized
control software the power to modify the behavior of network devices through a well-defined “forwarding
instruction set.”
The Open Data Center Alliance (ODCA) is a consortium of leading global IT organizations dedicated to
accelerating adoption of interoperable solutions and services for cloud computing. Through the development of
usage models for SDN and NFV, ODCA is defining requirements for SDN and NFV cloud deployment.
OpenStack
OpenStack is an open source software project that aims to produce an open source cloud operating system. It
provides multitenant Infrastructure as a Service (IaaS) and aims to meet the needs of public and private clouds,
regardless of size, by being simple to implement and massively scalable. SDN technology is expected to
contribute to its networking component, making the cloud operating system more efficient, flexible, and reliable.
OpenStack is composed of a number of projects. One of them, Neutron, is dedicated to networking. It provides
Network as a Service (NaaS) to other OpenStack services. Almost all SDN controllers provide plug-ins for
Neutron, and through them other OpenStack services can build rich networking topologies and configure
advanced network policies in the cloud.
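As a small, hedged illustration of Neutron's Network-as-a-Service role, the sketch below uses the openstacksdk Python library to create a tenant network and subnet; the cloud name, resource names, and CIDR are placeholders and assume a working clouds.yaml entry with the necessary permissions:

import openstack

conn = openstack.connect(cloud="mycloud")            # authenticate against the cloud
net = conn.network.create_network(name="demo-net")   # Neutron provides the network
subnet = conn.network.create_subnet(
    name="demo-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.10.0.0/24",
)
print(net.id, subnet.cidr)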
Control support function: Interacts with the SDN control layer to support programmability via resource-
control interfaces. The switch communicates with the controller and the controller manages the switch via the
OpenFlow switch protocol.
Data forwarding function: Accepts incoming data flows from other network devices and end systems and
forwards them along the data forwarding paths that have been computed and established according to the rules
defined by the SDN applications.
These forwarding rules used by the network device are embodied in forwarding tables that indicate, for given
categories of packets, what the next hop in the route should be. In addition to simple forwarding of a packet,
the network device can alter the packet header before forwarding, or discard the packet. As shown, arriving
packets may be placed in an input queue, awaiting processing by the network device, and forwarded packets
are generally placed in an output queue, awaiting transmission.
The OpenFlow specification defines three types of tables in the logical switch architecture.
i. A flow table matches incoming packets to a particular flow and specifies what functions are to be
performed on the packets. There may be multiple flow tables that operate in a pipeline fashion.
ii. A flow table may direct a flow to a group table, which may trigger a variety of actions that affect one
or more flows.
iii. A meter table can trigger a variety of performance-related actions on a flow. Using the OpenFlow
switch protocol, the controller can add, update, and delete flow entries in tables, both reactively (in
response to packets) and proactively.
• Priority: Relative priority of table entries. This is a 16-bit field with 0 corresponding to the lowest
priority. In principle, there could be 2^16 = 65,536 (64K) priority levels.
• Counters: Updated for matching packets. The OpenFlow specification defines a variety of counters.
Table 4.1 lists the counters that must be supported by an OpenFlow switch.
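The sketch below (plain Python, not the OpenFlow wire format) models a flow table whose entries carry match fields, a 16-bit priority, and per-entry counters, and shows priority-ordered matching; the field names are illustrative:

class FlowEntry:
    def __init__(self, match, priority, actions):
        self.match = match        # e.g. {"eth_type": 0x0800, "ipv4_dst": "10.0.0.5"}
        self.priority = priority  # 0..65535, 0 is the lowest priority
        self.actions = actions    # e.g. ["output:2"]
        self.packet_count = 0     # counters updated for matching packets
        self.byte_count = 0

    def matches(self, pkt):
        return all(pkt.get(f) == v for f, v in self.match.items())

class FlowTable:
    def __init__(self):
        self.entries = []

    def lookup(self, pkt):
        hits = [e for e in self.entries if e.matches(pkt)]
        if not hits:
            return None  # table miss: typically a Packet-in to the controller
        best = max(hits, key=lambda e: e.priority)  # highest priority wins
        best.packet_count += 1
        best.byte_count += pkt.get("len", 0)
        return best.actions

table = FlowTable()
table.entries.append(FlowEntry({"ipv4_dst": "10.0.0.5"}, priority=100, actions=["output:2"]))
print(table.lookup({"ipv4_dst": "10.0.0.5", "len": 1500}))  # -> ['output:2']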
For the final table in the pipeline, forwarding to another flow table is not an option. If and when a packet is finally
directed to an output port, the accumulated action set is executed and then the packet is queued for output. Figure
4.7 illustrates the overall ingress pipeline process.
FIGURE 4.6 Simplified Flowchart Detailing Packet Flow Through an OpenFlow Switch
If egress processing is associated with a particular output port, then after a packet is directed to an output port at
the completion of the ingress processing, the packet is directed to the first flow table of the egress pipeline. Egress
pipeline processing proceeds in the same fashion as for ingress processing, except that there is no group table
processing at the end of the egress pipeline. Egress processing is shown in Figure 4.8
Without multiple tables, each of these fine-grained subflows would have to be defined individually in Table 0. The use of multiple
tables simplifies the processing in both the SDN controller and the OpenFlow switch. Actions such as next hop
that apply to the aggregate flow can be defined once by the controller and examined and performed once by the
switch. The addition of new subflows at any level involves less setup. Therefore, the use of pipelined, multiple
tables increases the efficiency of network operations, provides granular control, and enables the network to
respond to real-time changes at the application, user, and session levels.
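A minimal sketch of that pipelining idea follows (hypothetical match fields and action names): table 0 makes the aggregate-flow decision once, then hands the packet to table 1, which only adds per-subflow treatment, so adding a new subflow touches table 1 alone:

def table0(pkt):
    # Aggregate flow: a single entry covers the whole destination subnet.
    if pkt["ipv4_dst"].startswith("10.1."):
        return {"actions": ["set_next_hop:r2"], "goto": table1}
    return None  # miss -> controller

def table1(pkt):
    # Subflows: refine handling without repeating the aggregate decision.
    if pkt.get("tcp_dst") == 5060:
        return {"actions": ["set_queue:voice", "output:3"], "goto": None}
    return {"actions": ["set_queue:best_effort", "output:3"], "goto": None}

def pipeline(pkt):
    actions, table = [], table0
    while table is not None:
        entry = table(pkt)
        if entry is None:
            return None              # table miss
        actions += entry["actions"]  # accumulate the action set
        table = entry["goto"]
    return actions                   # executed when the packet leaves the pipeline

print(pipeline({"ipv4_dst": "10.1.2.3", "tcp_dst": 5060}))
# -> ['set_next_hop:r2', 'set_queue:voice', 'output:3']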
Group Table
In the course of pipeline processing, a flow table may direct a flow of packets to the group table rather than
another flow table. The group table and group actions enable OpenFlow to represent a set of ports as a single
entity for forwarding packets. Different types of groups are provided to represent different forwarding
abstractions, such as multicasting and broadcasting.
Group identifier: A 32-bit unsigned integer uniquely identifying the group. A group is defined as an entry
in the group table.
A group is designated as one of the types depicted in Figure 4.10: all, select, fast failover, and indirect.
The all type executes all the buckets in the group. Thus, each arriving packet is effectively cloned.
Typically, each bucket will designate a different output port, so that the incoming packet is then
transmitted on multiple output ports. This group is used for multicast or broadcast forwarding.
The select type executes one bucket in the group, based on a switch-computed selection algorithm (for
example, hash on some user-configured tuple or simple round-robin). The selection algorithm should
implement equal load sharing or, optionally, load sharing based on bucket weights assigned by the SDN
controller.
The fast failover type executes the first live bucket. Port liveness is managed by code outside of the scope
of OpenFlow and may have to do with routing algorithms or congestion control mechanisms. The buckets
are evaluated in order, and the first live bucket is selected. This group type enables the switch to change
forwarding without requiring a round trip to the controller.
The indirect type allows multiple packet flows (that is, multiple flow table entries) to point to a common
group identifier. This type provides for more efficient management by the controller in certain situations.
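The select and fast failover behaviors can be sketched in a few lines of Python (illustrative only; real switches compute the hash and track port liveness in the switch itself):

import hashlib

def select_bucket(buckets, flow_tuple):
    """buckets: list of (weight, out_port); flow_tuple: the fields the switch hashes on."""
    expanded = [port for weight, port in buckets for _ in range(weight)]
    digest = hashlib.md5(repr(flow_tuple).encode()).hexdigest()
    return expanded[int(digest, 16) % len(expanded)]  # weighted, hash-based load sharing

def fast_failover_bucket(buckets, port_is_live):
    """buckets: ordered list of out_ports; liveness is managed outside of OpenFlow."""
    for port in buckets:
        if port_is_live.get(port, False):
            return port  # first live bucket wins, with no round trip to the controller
    return None          # no live bucket

print(select_bucket([(2, 1), (1, 2)], ("10.0.0.1", "10.0.0.5", 80)))
print(fast_failover_bucket([1, 2, 3], {1: False, 2: True, 3: True}))  # -> 2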
4.3-OPENFLOW PROTOCOL
The OpenFlow protocol describes message exchanges that take place between an OpenFlow controller and an
OpenFlow switch. Typically, the protocol is implemented on top of TLS, providing a secure OpenFlow channel.
The OpenFlow protocol enables the controller to perform add, update, and delete actions to the flow entries in the
flow tables.
Controller to switch: These messages are initiated by the controller and, in some cases, require a response
from the switch. This class of messages enables the controller to manage the logical state of the switch,
including its configuration and details of flow and group table entries. Also included in this class is the
Packet-out message. This message is sent by the controller to a switch when that switch sends a packet to
the controller and the controller decides not to drop the packet but to direct it to a switch output port.
Asynchronous: These types of messages are sent without solicitation from the controller. This class
includes various status messages to the controller. Also included is the Packet-in message, which may be
used by the switch to send a packet to the controller when there is no flow table match.
Symmetric: These messages are sent without solicitation from either the controller or the switch. They
are simple yet helpful. Hello messages are typically sent back and forth between the controller and switch
when the connection is first established. Echo request and reply messages can be used by either the switch
or controller to measure the latency or bandwidth of a controller-switch connection or just verify that the
device is up and running. The Experimenter message is used to stage features to be built in to future
versions of OpenFlow.
In general terms, the OpenFlow protocol provides the SDN controller with three types of information to be
used in managing the network:
Event-based messages: Sent by the switch to the controller when a link or port change occurs.
Flow statistics: Generated by the switch based on traffic flow. This information enables the controller
to monitor traffic, reconfigure the network as needed, and adjust flow parameters to meet QoS
requirements.
Encapsulated packets: Sent by the switch to the controller either because there is an explicit action
to send this packet in a flow table entry or because the switch needs information for establishing a new
flow.
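The following hedged sketch (plain Python with stub objects, not a real OpenFlow library) ties these message types together: a table miss produces a Packet-in, the controller learns from it, installs a flow entry (a controller-to-switch Flow-mod), and returns the packet with a Packet-out:

class StubSwitch:
    """Stand-in for an OpenFlow switch; only what the sketch needs."""
    def __init__(self, name):
        self.name, self.flows, self.sent = name, [], []

    def install_rule(self, match, out_port, priority):  # models a Flow-mod
        self.flows.append((priority, match, out_port))

    def send_packet_out(self, pkt, out_port):            # models a Packet-out
        self.sent.append((pkt, out_port))

class LearningController:
    def __init__(self):
        self.mac_to_port = {}                             # state built from Packet-in events

    def on_packet_in(self, sw, in_port, eth_src, eth_dst, pkt):
        table = self.mac_to_port.setdefault(sw.name, {})
        table[eth_src] = in_port                          # learn where eth_src is attached
        out_port = table.get(eth_dst, "FLOOD")
        if out_port != "FLOOD":
            sw.install_rule({"eth_dst": eth_dst}, out_port, priority=10)
        sw.send_packet_out(pkt, out_port)

sw = StubSwitch("s1")
ctl = LearningController()
ctl.on_packet_in(sw, in_port=1, eth_src="aa:aa", eth_dst="bb:bb", pkt=b"...")
print(sw.sent)  # flooded, because the destination has not been learned yet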
The functionality provided by the SDN controller can be viewed as a network operating system (NOS). As with
a conventional OS, an NOS provides essential services, common application programming interfaces (APIs), and
an abstraction of lower-layer elements to developers. The functions of an SDN NOS, such as those in the
preceding list, enable developers to define network policies and manage networks without concern for the details
of the network device characteristics, which may be heterogeneous and dynamic. The northbound interface,
discussed subsequently, provides a uniform means for application developers and network managers to access
SDN services and perform network management tasks. Further, well-defined northbound interfaces enable
developers to create software that is not only independent of data plane details but also, to a great extent,
portable across a variety of SDN controller servers.
A number of different initiatives, both commercial and open source, have resulted in SDN controller
implementations. The following list describes a few prominent ones:
• OpenDaylight: An open source platform for network programmability to enable SDN, written in Java.
OpenDaylight was founded by Cisco and IBM, and its membership is heavily weighted toward network
vendors. OpenDaylight can be implemented as a single centralized controller, but enables controllers to
be distributed where one or multiple instances may run on one or more clustered servers in the network.
• Open Network Operating System (ONOS): An open source SDN NOS, initially released in 2014. It is
a nonprofit effort funded and developed by a number of carriers, such as AT&T and NTT, and other
service providers. Significantly, ONOS is supported by the Open Networking Foundation, making it
likely that ONOS will be a major factor in SDN deployment. ONOS is designed to be used as a
distributed controller and provides abstractions for partitioning and distributing network state onto
multiple distributed controllers.
• POX: An open source OpenFlow controller that has been implemented by a number of SDN developers
and engineers. POX has a well-written API and documentation. It also provides a web-based graphical
user interface (GUI) and is written in Python, which typically shortens its experimental and
developmental cycles compared to some other implementation languages, such as C++.
• Beacon: An open source package developed at Stanford, written in Java and highly integrated into the
Eclipse integrated development environment (IDE). Beacon was the first controller that made it possible
for beginner programmers to work with and create a working SDN environment.
• Floodlight: An open source package developed by Big Switch Networks. Although its beginning was
based on Beacon, it was built using Apache Ant, which is a very popular software build tool that makes
the development of Floodlight easier and more flexible. Floodlight has an active community and has a
large number of features that can be added to create a system that best meets the requirements of a
specific organization. Both a web-based and Java-based GUI are available and most of its functionality
is exposed through a REST API.
• Ryu: An open source, component-based SDN framework developed by NTT Labs and written entirely in
Python.
• Onix: Another distributed controller, jointly developed by VMWare, Google, and NTT. Onix is a
commercially available SDN controller.
Southbound Interface
The southbound interface provides the logical connection between the SDN controller and the data plane switches
(see Figure 5.3). Some controller products and configurations support only a single southbound protocol. A more
flexible approach is the use of a southbound abstraction layer that provides a common interface for the control
plane functions while supporting multiple southbound APIs.
• OpenFlow protocol: The OpenFlow protocol defines the interface between an OpenFlow Controller
and an OpenFlow switch. The OpenFlow protocol allows the OpenFlow Controller to instruct the
OpenFlow switch on how to handle incoming data packets.
• Open vSwitch Database Management Protocol (OVSDB): Open vSwitch (OVS) is an open source
software project that implements virtual switching and is interoperable with almost all popular
hypervisors. OVS uses OpenFlow for message forwarding in the control plane for both virtual and
physical ports. OVSDB is the protocol used to manage and configure OVS instances.
• Forwarding and Control Element Separation (ForCES): An IETF effort that standardizes the
interface between the control plane and the data plane for IP routers.
• Protocol Oblivious Forwarding (POF): This is advertised as an enhancement to OpenFlow that
simplifies the logic in the data plane to a very generic forwarding element that need not understand the
protocol data unit (PDU) format in terms of fields at various protocol levels. Rather, matching is done
by means of (offset, length) blocks within a packet. Intelligence about the packet format resides at the
control plane level.
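A tiny sketch of the protocol-oblivious idea (illustrative only): the data plane compares raw (offset, length, value) blocks against the packet bytes, while what those bytes mean is known only to the control plane:

def pof_match(packet, rules):
    """rules: list of (blocks, action), where blocks is a list of (offset, length, value)."""
    for blocks, action in rules:
        if all(packet[off:off + ln] == val for off, ln, val in blocks):
            return action
    return "send_to_controller"

# Hypothetical rule: match 2 bytes at offset 12 equal to 0x0800 (the IPv4 EtherType).
rules = [([(12, 2, b"\x08\x00")], "output:1")]
frame = bytes(12) + b"\x08\x00" + bytes(20)
print(pof_match(frame, rules))  # -> 'output:1'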
Northbound Interface
The northbound interface enables applications to access control plane functions and services without needing to
know the details of the underlying network switches. The northbound interface is more typically viewed as a
software API rather than a protocol.
Unlike the southbound and eastbound/westbound interfaces, where a number of heterogeneous interfaces have
been defined, there is no widely accepted standard for the northbound interface. The result has been that a number
of unique APIs have been developed for various controllers, complicating the effort to develop SDN applications.
Figure 5.5 shows a simplified example of an architecture with multiple levels of northbound APIs.
Routing
As with any network or internet, an SDN network requires a routing function. In general terms, the routing
function comprises a protocol for collecting information about the topology and traffic conditions of the network,
and an algorithm for designing routes through the network.
There are two categories of routing protocols: interior router protocols (IRPs) that operate within an autonomous
system (AS), and exterior router protocols (ERPs) that operate between autonomous systems.
An IRP is concerned with discovering the topology of routers within an AS and then determining the best route
to each destination based on different metrics. Two widely used IRPs are Open Shortest Path First (OSPF)
Protocol and Enhanced Interior Gateway Routing Protocol (EIGRP). An ERP need not collect as much detailed
traffic information. Rather, the primary concern with an ERP is to determine reachability of networks and end
systems outside of the AS. Therefore, the ERP is typically executed only in edge nodes that connect one AS to
another. Border Gateway Protocol (BGP) is commonly used for the ERP.
Traditionally, the routing function is distributed among the routers in a network. Each router is responsible for
building up an image of the topology of the network. For interior routing, each router must also collect
information about connectivity and delays and then calculate the preferred route for each IP destination address.
The centralized routing application performs two distinct functions: link discovery and topology management.
For link discovery, the routing function needs to be aware of links between data plane switches. Note that in the
case of an internetwork, the links between routers are networks, whereas for Layer 2 switches, such as Ethernet
switches, the links are direct physical links. In addition, link discovery must be performed between a router and
a host system and between a router in the domain of this controller and a router in a neighboring domain.
Discovery is triggered by unknown traffic entering the controller’s network domain either from an attached host
or from a neighboring router.
The topology manager maintains the topology information for the network and calculates routes in the network.
Route calculation involves determining the shortest path between two data plane nodes or between a data plane
node and a host.
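A compact sketch of that route-calculation step: the topology manager keeps a weighted graph of discovered links and runs a shortest-path algorithm (Dijkstra here); the topology and link costs below are invented for illustration:

import heapq

def shortest_path(graph, src, dst):
    """graph: {node: {neighbor: link_cost}}; returns (cost, [nodes]) or None."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None

topology = {"s1": {"s2": 1, "s3": 4}, "s2": {"s3": 1, "h1": 1}, "s3": {"h1": 1}}
print(shortest_path(topology, "s1", "h1"))  # -> (2, ['s1', 's2', 'h1'])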
5.2-ITU-T MODEL
The SDN high-level architecture is given in Figure 5.6 as defined in ITU-T Y.3300. The Y.3300 model consists
of three layers, or planes: application, control, and resource. As defined in Y.3300, the application layer is where
SDN applications specify network services or business applications by defining a service-aware behavior of
network resources. The applications interact with the SDN control layer via APIs that form an application-control
interface. The applications make use of an abstracted view of the network resources provided by the SDN control
layer by means of information and data models exposed via the APIs.
5.3-OPENDAYLIGHT
The OpenDaylight Project is an open source project hosted by the Linux Foundation and includes the involvement
of virtually every major networking organization, including users of SDN technology and vendors of SDN
products.
• Network applications, orchestration, and services: Consists of business and network logic applications
that control and monitor network behavior. These applications use the controller to gather network
intelligence, run algorithms to perform analytics, and then use the controller to orchestrate the new rules,
if any, throughout the network.
• APIs: A set of common interfaces to OpenDaylight controller functions. OpenDaylight supports the Open
Service Gateway Initiative (OSGi) framework (a set of specifications that defines a dynamic component
system for Java) and bidirectional REST for the northbound API. The OSGi framework is used for
applications that will run in the same address space as the controller, while the REST (web-based) API is
used for applications that do not run in the same address space as the controller.
• Controller functions and services: SDN control plane functions and services.
• Service abstraction layer (SAL): Provides a uniform view of data plane resources, so that control plane
functions can be implemented independent of the specific southbound interface and protocol.
• Southbound interfaces and protocols: Supports OpenFlow, other standard southbound protocols, and
vendor-specific interfaces.
There are several noteworthy aspects to the OpenDaylight architecture.
• First, OpenDaylight encompasses both control plane and application plane functionality. Thus,
OpenDaylight is more than just an SDN controller implementation. This enables enterprise and
telecommunications network managers to host open source software on their own servers to construct an
SDN configuration. Vendors can use this software to create products with value-added additional
application plane functions and services.
OpenDaylight Helium
The most recent release of OpenDaylight is the Helium release, illustrated in Figure 5.9. The controller platform
consists of a growing collection of dynamically pluggable modules, each of which performs one or more SDN-
related functions and services. Five modules are considered base network service functions.
• Topology manager: A service for learning the network layout by subscribing to events of node addition
and removal and their interconnection. Applications requiring network view can use this service.
• Statistics manager: Collects switch-related statistics, including flow statistics, node connector, and
queue occupancy.
• Switch manager: Holds the details of the data plane devices. As a switch is discovered, its attributes
(for example, what switch/router it is, software version, capabilities) are stored in a database by the
switch manager.
• Forwarding rules manager: Installs routes and tracks next-hop information. Works in conjunction with
switch manager and topology manager to register and maintain network flow state. Applications using
this need not have visibility of network device specifics.
• Host tracker: Tracks and maintains information about connected hosts.
5.4-REST
REpresentational State Transfer (REST) is an architectural style used to define APIs. This has become a
standard way of constructing northbound APIs for SDN controllers. A REST API, or an API that is RESTful
(one that adheres to the constraints of REST), is not a protocol, language, or established standard. It is
essentially a set of six constraints that an API must follow to be RESTful. The objective of these constraints is to maximize the
scalability and independence/interoperability of software interactions, and to provide for a simple means of
constructing APIs.
REST Constraints
REST assumes that the concepts of web-based access are used for interaction between the application and the
service that are on either side of the API. REST does not define the specifics of the API but imposes constraints
on the nature of the interaction between application and service. The six REST constraints are as follows:
i. Client-Server
ii. Stateless
iii. Cacheable
iv. Uniform Interface
v. Layered System
vi. Code-On-Demand
i-Client-Server Constraint
This simple constraint dictates that interaction between application and server is in the client-server
request/response style. The principle defined for this constraint is the separation of user interface concerns from
data storage concerns. This separation allows client and server components to evolve independently and supports
the portability of server-side functions to multiple platforms.
ii-Stateless Constraint
The stateless constraint dictates that each request from a client to a server must contain all the information
necessary to understand the request and cannot take advantage of any stored context on the server. Similarly, each
response from the server must contain all the desired information for that request. One consequence is that any
“memory” of a transaction is maintained in a session state kept entirely on the client. Because the server does not
retain any record of the client state, the result is a more efficient SDN controller. Another consequence is that if
the client and server reside on different machines, and therefore communicate via a protocol, that protocol need
not be connection oriented.
iii-Cache Constraint
The cache constraint requires that the data within a response to a request be implicitly or explicitly labeled as
cacheable or noncacheable. If a response is cacheable, then a client cache is given the right to reuse that response
data for later, equivalent requests. That is, the client is given permission to remember this data because the data
is not likely to change on the server side. Therefore, subsequent requests for the same data can be handled locally
at the client, reducing communication overhead between client and server, and reducing the server’s processing
burden.
vi-Code-on-Demand Constraint
REST allows client functionality to be extended by downloading and executing code in the form of applets or
scripts. This simplifies clients by reducing the number of features required to be pre-implemented. Allowing
features to be downloaded after deployment improves system extensibility.
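The sketch below shows what a RESTful northbound call typically looks like from an application, using Python's requests library; the URL, credentials, and JSON layout are hypothetical placeholders rather than any specific controller's API. Note the REST properties at work: the request is self-contained (stateless) and the response may be marked cacheable by its headers:

import requests

resp = requests.get(
    "http://controller.example.com:8181/api/v1/flows/switch-s1",  # hypothetical endpoint
    auth=("admin", "admin"),           # the request carries all the context it needs
    headers={"Accept": "application/json"},
    timeout=5,
)
resp.raise_for_status()
print(resp.headers.get("Cache-Control"))  # the server may mark the representation cacheable
for flow in resp.json().get("flows", []):
    print(flow.get("match"), flow.get("actions"))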
Between each SDN domain and the AS, BGP is used to exchange information, such as the following:
• Reachability update: Exchange of reachability information facilitates inter-SDN domain routing.
This allows a single flow to traverse multiple SDNs and each controller can select the most
appropriate path in the network.
• Flow setup, tear-down, and update requests: Controllers coordinate flow setup requests, which
contain information such as path requirements, QoS, and so on, across multiple SDN domains.
• Capability Update: Controllers exchange information on network-related capabilities such as
bandwidth, QoS and so on, in addition to system and software capabilities available inside the
domain.
1. The SDN controller must be configured with BGP capability and with information about the location
of neighboring BGP entities.
2. BGP is triggered by a start or activation event within the controller.
3. The BGP entity in the controller attempts to establish a TCP connection with each neighboring BGP
entity.
4. Once a TCP connection is established, the controller’s BGP entity exchanges Open messages with the
neighbor. Capability information is exchanged using the Open messages.
5. The exchange completes with the establishment of a BGP connection.
6. Update messages are used to exchange NLRI (network layer reachability information), indicating
what networks are reachable via this entity. Reachability information is used in the selection of the
most appropriate data path between SDN controllers. Information obtained through the NLRI parameter
is used to update the controller’s Routing Information Base (RIB).
7. The Update message can also be used to exchange QoS information, such as available capacity.
8. Route selection is done when more than one path is available, based on the BGP decision process. Once
the path is established, packets can traverse successfully between two SDN domains.
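A toy sketch of steps 4 through 6 (not a real BGP implementation): after the TCP session is up, the peers exchange Open messages carrying capabilities, then Update messages carrying NLRI, which each controller folds into its RIB:

class BgpPeer:
    def __init__(self, asn, capabilities, networks):
        self.asn, self.capabilities, self.networks = asn, capabilities, networks
        self.peer_capabilities = None
        self.rib = {}  # prefix -> AS it was learned from

    def open_message(self):
        return {"type": "OPEN", "asn": self.asn, "capabilities": self.capabilities}

    def update_message(self):
        return {"type": "UPDATE", "asn": self.asn, "nlri": self.networks}

    def receive(self, msg):
        if msg["type"] == "OPEN":
            self.peer_capabilities = msg["capabilities"]      # capability exchange
        elif msg["type"] == "UPDATE":
            for prefix in msg["nlri"]:                        # reachability learned from peer
                self.rib[prefix] = msg["asn"]

a = BgpPeer(64512, {"qos"}, ["10.1.0.0/16"])
b = BgpPeer(64513, {"qos"}, ["10.2.0.0/16"])
a.receive(b.open_message()); b.receive(a.open_message())      # step 4
a.receive(b.update_message()); b.receive(a.update_message())  # step 6
print(a.rib)  # -> {'10.2.0.0/16': 64513}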
IETF SDNi
IETF has developed a draft specification that defines common requirements to coordinate flow setup and
exchange reachability information across multiple domains, referred to as SDNi. The SDNi specification does
not define an east/westbound SDN protocol but rather provides some of the basic principles to be used in
developing such a protocol.
SDNi functionality, as defined in the document, includes the following:
• Coordinate flow setup originated by applications, containing information such as path
requirement, QoS, and service level agreements across multiple SDN domains.
• Exchange reachability information to facilitate inter-SDN routing. This will allow a single flow
to traverse multiple SDNs and have each controller select the most appropriate path when multiple such
paths are available.
The message types for SDNi tentatively include the following:
• Reachability update
• Flow setup/teardown/update request (including application capability requirement such as QoS,
data rate, latency, and so on)
• Capability update (including network-related capabilities, such as data rate and QoS, and system
and software capabilities available inside the domain).
OpenDaylight SDNi
Included in the OpenDaylight architecture is an SDNi capability for connecting multiple OpenDaylight federated
controllers in a network and sharing topology information among them. This capability appears to be compatible
with the IETF specification for an SDNi function. The SDNi application deployable on an OpenDaylight
controller consists of three components, as illustrated in Figure 5.14 and described in the list that follows.
• SDNi aggregator: Northbound SDNi plug-in acts as an aggregator for collecting network
information such as topology, statistics, and host identifiers. This plug-in can evolve to meet the needs
for network data requested to be shared across federated SDN controllers.
• SDNi REST API: SDNi REST APIs fetch the aggregated information from the northbound
plug-in (SDNi aggregator).
• SDNi wrapper: The SDNi BGP wrapper is responsible for sharing and collecting information
to/from federated controllers.
Figure 5.15 shows the interrelationship of the components, with a more detailed look at the SDNi wrapper.
Northbound Interface
The northbound interface enables applications to access control plane functions and services without needing to
know the details of the underlying network switches. Typically, the northbound interface provides an abstract
view of network resources controlled by the software in the SDN control plane.
Figure 6.1 indicates that the northbound interface can be a local or remote interface. For a local interface, the
SDN applications are running on the same server as the control plane software (controller network operating
system). Alternatively, the applications could be run on remote systems and the northbound interface is a protocol
or application programming interface (API) that connects the applications to the controller network operating
system (NOS) running on a central server.
Network Services Abstraction Layer
This layer could provide an abstract view of network resources that hides the details of the underlying data
plane devices.
This layer could provide a generalized view of control plane functionality, so that applications could be written
that would operate across a range of controller network operating systems.
Distribution Abstraction
This abstraction arises in the context of distributed controllers. A cooperating set of distributed controllers
maintains a state description of the network and routes through the networks. The distributed state of the entire
network may involve partitioned data sets, with controller instances exchanging routing information, or a
replicated data set, so that the controllers must cooperate to maintain a consistent view of the global network.
This abstraction aims at hiding complex distributed mechanisms (used today in many networks) and
separating state management from protocol design and implementation. It allows providing a single
coherent global view of the network through an annotated network graph accessible for control via an
API. An implementation of such an abstraction is an NOS, such as OpenDaylight or Ryu.
Specification Abstraction
The distribution abstraction provides a global view of the network as if there is a single central controller, even if
multiple cooperating controllers are used. The specification abstraction then provides an abstract view of the
global network. This view provides just enough detail for the application to specify goals, such as routing or
security policy, without providing the information needed to implement the goals. The presentation by Shenker
summarizes these abstractions as follows:
• Forwarding interface: An abstract forwarding model that shields higher layers from
forwarding hardware.
• Distribution interface: A global network view that shields higher layers from state
dissemination/collection.
• Specification interface: An abstract network view that shields application programs from details of the
physical network.
6.3-TRAFFIC ENGINEERING
Traffic engineering is a method for dynamically analyzing, regulating, and predicting the behavior of data
flowing in networks with the aim of performance optimization to meet service level agreements (SLAs). Traffic
engineering involves establishing routing and forwarding policies based on QoS requirements. With SDN, the
task of traffic engineering should be considerably simplified compared with a non-SDN network. SDN offers a
uniform global view of heterogeneous equipment and powerful tools for configuring and managing network
switches.
PolicyCop
An instructive example of a traffic engineering SDN application is PolicyCop, which is an automated QoS policy
enforcement framework. It leverages the programmability offered by SDN and OpenFlow for
• Dynamic traffic steering
• Flexible Flow level control
• Dynamic traffic classes
• Custom flow aggregation levels
Key features of PolicyCop are that it monitors the network to detect policy violations (based on a QoS SLA)
and reconfigures the network to reinforce the violated policy.
As shown in Figure 6.5, PolicyCop consists of eleven software modules and two databases, installed in both the
application plane and the control plane. PolicyCop uses the control plane of SDNs to monitor the compliance
with QoS policies and can automatically adjust the control plane rules and flow tables in the data plane based on
the dynamic network traffic statistics.
In the control plane, PolicyCop relies on four modules and a database for storing control rules, described as
follows:
• Admission Control: Accepts or rejects requests from the resource provisioning module for reserving
network resources, such as queues, flow-table entries, and capacity.
• Routing: Determines path availability based on the control rules in the rule database.
• Device Tracker: Tracks the up/down status of network switches and their ports.
• Statistics Collection: Uses a mix of passive and active monitoring techniques to measure different
network metrics.
• Rule Database: The application plane translates high-level network-wide policies to control rules
and stores them in the rule database.
A RESTful northbound interface connects these control plane modules to the application plane modules, which
are organized into two components: a policy validator that monitors the network to detect policy violations, and
a policy enforcer that adapts control plane rules based on network conditions and high-level policies.
The modules are as follows:
• Traffic Monitor: Collects the active policies from the policy database and determines the appropriate
monitoring interval, network segments, and metrics to be monitored.
• Policy Checker: Checks for policy violations, using input from the policy database and the Traffic
Monitor.
• Event Handler: Examines violation events and, depending on event type, either automatically invokes
the policy enforcer or sends an action request to the network manager.
• Topology Manager: Maintains a global view of the network, based on input from the device tracker.
• Resource Manager: Keeps track of currently allocated resources using admission control and statistics
collection.
• Policy Adaptation: Consists of a set of actions, one for each type of policy violation.
• Resource Provisioning: This module allocates more resources, releases existing ones, or both,
based on the violation event.
Figure 6.6 shows the process workflow in PolicyCop.
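A conceptual sketch of that workflow is given below; the module names follow the text, but the policies, metrics, and thresholds are invented for illustration:

active_policies = [  # would be read from the policy database
    {"flow": "video", "metric": "latency_ms", "max": 50},
    {"flow": "voice", "metric": "loss_pct", "max": 1.0},
]

def traffic_monitor():
    # Stand-in for measurements gathered via the statistics collection module.
    return {"video": {"latency_ms": 72}, "voice": {"loss_pct": 0.2}}

def policy_checker(policies, measurements):
    violations = []
    for p in policies:
        value = measurements.get(p["flow"], {}).get(p["metric"])
        if value is not None and value > p["max"]:
            violations.append({"policy": p, "observed": value})
    return violations

def event_handler(violations):
    for v in violations:
        # Depending on the event type, PolicyCop would invoke the policy enforcer or
        # send an action request to the network manager; here we just report it.
        print("violation:", v["policy"]["flow"], v["observed"], "-> invoke policy enforcer")

event_handler(policy_checker(active_policies, traffic_monitor()))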
6.5-SECURITY
Applications in this area have one of two goals:
• Address security concerns related to the use of SDN: SDN involves a three-layer architecture
(application, control, data) and new approaches to distributed control and encapsulating data. All of this
introduces the potential for new vectors for attack. Threats can occur at any of the three layers or in the
communication between layers. SDN applications are needed to provide for the secure use of SDN
itself.
Figure 6.7 shows the overall context of the Defense4All application. The underlying SDN network consists of a
number of data plane switches that support traffic among client and server devices. Defense4All operates as an
application that interacts with the controller over an OpenDaylight controller (ODC) northbound API.
To mitigate a detected attack, Defense4All performs the following procedure:
1. It validates that the AMS device is alive and selects a live connection to it. Currently, Defense4All is
configured to work with Radware’s AMS, known as DefensePro.
2. It configures the AMS with a security policy and normal rates of the attacked traffic. This provides the
AMS with the information needed to enforce a mitigation policy until traffic returns to normal rates.
3. It starts monitoring and logging syslogs arriving from the AMS for the subject traffic. As long as
Defense4All continues receiving syslog attack notifications from the AMS regarding this attack,
Defense4All continues to divert traffic to the AMS, even if the flow counters for this PO do not indicate
any more attacks.
4. It maps the selected physical AMS connection to the relevant PO link. This typically involves changing
link definitions on a virtual network, using OpenFlow.
5. It installs higher-priority flow table entries so that the attack traffic flow is redirected to the AMS and
reinjects traffic from the AMS back to the normal traffic flow route. When Defense4All decides that the
attack is over (no attack indication from either flow table counters or from the AMS), it reverts the
previous actions: It stops monitoring for syslogs about the subject traffic, it removes the traffic diversion
flow table entries, and it removes the security configuration from the AMS.
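The diversion in step 5 can be pictured with three illustrative flow entries (priorities, ports, and addresses are invented): a higher-priority entry steers traffic for the protected object to the AMS port, an even higher-priority entry reinjects scrubbed traffic returning from the AMS onto the normal path, and the original entry takes over again once the diversion entries are removed:

normal_entry   = {"priority": 10,  "match": {"ipv4_dst": "203.0.113.10"},
                  "actions": ["output:2"]}   # usual route to the protected server
divert_entry   = {"priority": 100, "match": {"ipv4_dst": "203.0.113.10"},
                  "actions": ["output:7"]}   # port 7 leads to the AMS
reinject_entry = {"priority": 110, "match": {"in_port": 7, "ipv4_dst": "203.0.113.10"},
                  "actions": ["output:2"]}   # scrubbed traffic rejoins the normal path

flow_table = [normal_entry, divert_entry, reinject_entry]

def select_entry(pkt):
    hits = [e for e in flow_table
            if all(pkt.get(k) == v for k, v in e["match"].items())]
    return max(hits, key=lambda e: e["priority"]) if hits else None

print(select_entry({"in_port": 1, "ipv4_dst": "203.0.113.10"})["actions"])  # -> ['output:7']
print(select_entry({"in_port": 7, "ipv4_dst": "203.0.113.10"})["actions"])  # -> ['output:2']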
Compared to electronic switches, optical switches have the advantages of greater data rates with reduced cabling
complexity and energy consumption. A number of projects have demonstrated how to collect network-level
traffic data and intelligently allocate optical circuits between endpoints (for example, top-of-rack switches) to
improve application performance. However, circuit utilization and application performance can be inadequate
unless there is a true application-level view of traffic demands and dependencies.
Figure 6.9 shows a simple hybrid electrical and optical data center network, in which OpenFlow-enabled
top-of-rack (ToR) switches are connected to two aggregation switches: an Ethernet switch and an optical circuit switch
(OCS). All the switches are controlled by an SDN controller that manages physical connectivity among ToR
switches over optical circuits by configuring the optical switch. It can also manage the forwarding at ToR switches
using OpenFlow rules.
The SDN controller is also connected to the Hadoop scheduler, which forms queues of jobs to be scheduled, and to
the HBase Master controller of a distributed, nonrelational database holding data for the big data applications. In addition, the
SDN controller connects to a Mesos cluster manager. Mesos is an open source software package that provides
scheduling and resource allocation services across distributed applications.
• A cloud customer uses a simple policy language to specify network services required by the customer
applications. These policy statements are issued to a cloud controller server operated by the cloud service
provider.
• The cloud controller maps the network policy into a communication matrix that defines desired
communication patterns and network services.
• The logical communication matrix is translated into network-level directives for data plane forwarding
elements.
• The network-level directives are installed into the network devices via OpenFlow.
For efficient operation, two additional challenges need to be addressed: how to measure the popularity of content
accurately and without a large overhead, and how to build and optimize routing tables to perform deflection. To
address these issues, the architecture calls for three new modules in the SDN controller:
• Measurement: Content popularity can be inferred directly from OF flow statistics. The measurement
module periodically queries and processes statistics from ingress OF switches to return the list of most
popular content.
• Optimization: Uses the list of most popular contents as an input for the optimization algorithm. The
objective is to minimize the sum of the delays over deflected contents under the following constraints:
(1) each popular content is cached at exactly one node, (2) the contents cached at a node do not exceed the
node's capacity, and (3) caching should not cause link congestion.
• Deflection: Uses the optimization results to build a mapping, for every content, between the content
name (by means of addresses and ports computed from the content name hash) and an outgoing
interface toward the node where the content is cached.
Finally, mappings are installed on switches’ flow tables using the OF protocol such that subsequent Interest
packets can be forwarded to appropriate caches.
Figure 6.13 shows the flow of packets. The OpenFlow switch forwards every packet it receives from other ports
to the wrapper, and the wrapper forwards it to the CCNx module. The OpenFlow switch needs to help the wrapper
identify the switch source port of the packet. To achieve this, the OF switch is configured to set the ToS value of
all packets it receives to the corresponding incoming port value and then forward all of them to the wrapper’s
port.
The wrapper maps a face of CCNx to an interface (that is, a port) of the OpenFlow switch using the ToS value. Face W
is a special face between the wrapper and the CCNx module. W receives every Content packet from the wrapper and
is used to send every Interest packet from CCNx to the wrapper.