
UNIT 2

The document discusses the evolving requirements of modern networking, emphasizing the inadequacies of traditional network architectures in handling increasing demand, complex traffic patterns, and the need for adaptability. It introduces Software-Defined Networking (SDN) as a solution that separates the control plane from the data plane, allowing for centralized management and programmability of network resources. Additionally, it outlines the various standards and organizations involved in the development of SDN and Network Functions Virtualization (NFV) to address these challenges.


M.Sc. IT Sem II Modern Networking Unit-II

3. SDN: Background and Motivation


3.1-Evolving Network Requirements
A number of trends are driving network providers and users to reevaluate traditional approaches to network
architecture. These trends can be grouped under the categories of demand, supply, and traffic patterns.

Demand Is Increasing
A number of trends are increasing the load on enterprise networks, the Internet, and other internets.
• Cloud computing: There has been a dramatic shift by enterprises to both public and private cloud
services.
• Big data: The processing of huge data sets requires massive parallel processing on thousands of servers,
all of which require a degree of interconnection to each other.
• Mobile traffic: Employees are increasingly accessing enterprise network resources via mobile personal
devices, such as smartphones, tablets, and notebooks. These devices support sophisticated apps that can
consume and generate image and video traffic, placing new burdens on the enterprise network.
• The Internet of Things (IoT): Most “things” in the IoT generate modest traffic, although there are
exceptions, such as surveillance video cameras.
Supply Is Increasing
As the demand on networks is rising, so is the capacity of network technologies to absorb rising loads. The
increase in the capacity of the network transmission technologies has been matched by an increase in the
performance of network devices, such as LAN switches, routers, firewalls, intrusion detection system/intrusion
prevention systems (IDS/IPS), and network monitoring and management systems. Year by year, these devices
have larger, faster memories, enabling greater buffer capacity and faster buffer access, as well as faster processor
speeds.
Traffic Patterns Are More Complex
If it were simply a matter of supply and demand, it would appear that today’s networks should be able to cope
with today’s data traffic. But as traffic patterns have changed and become more complex, traditional enterprise
network architectures are increasingly ill suited to the demand.
A number of developments have resulted in far more dynamic and complex traffic patterns within the
enterprise data center, local and regional enterprise networks, and carrier networks. These include the
following:
• Client/server applications typically access multiple databases and servers that must communicate with
each other, generating “horizontal” traffic between servers as well as “vertical” traffic between
servers and clients.
• Network convergence of voice, data, and video traffic creates unpredictable traffic patterns, often of
large multimedia data transfers.
• Unified communications (UC) strategies involve heavy use of applications that trigger access to
multiple servers.
• The heavy use of mobile devices, including personal bring your own device (BYOD) policies, results
in user access to corporate content and applications from any device anywhere any time.

By- Mr. Bhanuprasad Vishwakarma pg. 1


• The widespread use of public clouds has shifted a significant amount of what previously had been
local traffic onto WANs for many enterprises, resulting in increased and often very unpredictable
loads on enterprise routers.
• The now-common practice of application and database server virtualization has significantly increased
the number of hosts requiring high-volume network access and results in ever-changing physical
location of server resources.

Traditional Network Architectures are Inadequate


Even with the greater capacity of transmission schemes and the greater performance of network devices,
traditional network architectures are increasingly inadequate in the face of the growing complexity, variability,
and high volume of the imposed load. In addition, as quality of service (QoS) and quality of experience (QoE)
requirements imposed on the network are expanded as a result of the variety of applications, the traffic load must
be handled in an increasingly sophisticated and agile fashion.
The traditional internetworking approach is based on the TCP/IP protocol architecture. Three noteworthy
characteristics of this approach are as follows:
1. Two-level end system addressing
2. Routing based on destination
3. Distributed, autonomous control

The traditional architecture relies heavily on the network interface identity. At the physical layer of the TCP/IP
model, devices attached to networks are identified by hardware-based identifiers, such as Ethernet MAC
addresses. At the internetworking level, including both the Internet and private internets, the architecture is a
network of networks. Each attached device has a physical layer identifier recognized within its immediate network
and a logical network identifier, its IP address, which provides global visibility.

The design of TCP/IP uses this addressing scheme to support the networking of autonomous networks, with
distributed control. This architecture provides a high level of resilience and scales well in terms of adding new
networks. Using IP and distributed routing protocols, routes can be discovered and used throughout an internet.
Using transport-level protocols such as TCP, distributed and decentralized algorithms can be implemented to
respond to congestion.
Traditionally, routing was based on each packet’s destination address. In this datagram approach, successive
packets between a source and destination may follow different routes through the internet, as routers constantly
seek to find the minimum-delay path for each individual packet. More recently, to satisfy QoS requirements,
packets are often treated in terms of flows of packets. Packets associated with a given flow have defined QoS
characteristics, which affect the routing for the entire flow.
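The destination-based datagram approach described above can be sketched in a few lines. This is an illustrative example only: the table contents and router names are invented, and real routers implement longest-prefix match in hardware, not in Python.

```python
import ipaddress

# Hypothetical forwarding table: each entry maps a destination prefix to a
# next hop. The longest matching prefix wins, per classic IP routing.
ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "router-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gw"),
]

def next_hop(dst: str) -> str:
    """Choose the next hop for a packet based only on its destination
    address (the datagram approach: each packet is routed independently)."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTING_TABLE if addr in net]
    # The most specific route (largest prefix length) takes precedence.
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop
```

Because each packet is looked up independently, successive packets of the same conversation may take different routes as the table changes; flow-based treatment, by contrast, pins QoS characteristics to the whole flow.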
However, this distributed, autonomous approach developed when networks were predominantly static and end
systems predominantly of fixed location. Based on these characteristics, the Open Networking Foundation (ONF)
cites four general limitations of traditional network architectures.
i. Static, complex architecture: To respond to demands such as differing levels of QoS, high and
fluctuating traffic volumes, and security requirements, networking technology has grown more
complex and difficult to manage. This has resulted in a number of independently defined protocols
each of which addresses a portion of networking requirements.

ii. Inconsistent policies: To implement a network-wide security policy, staff may have to make
configuration changes to thousands of devices and mechanisms. In a large network, when a new
virtual machine is activated, it can take hours or even days to reconfigure ACLs across the entire
network.
iii. Inability to scale: Demands on networks are growing rapidly, both in volume and variety. Adding
more switches and transmission capacity, involving multiple vendor equipment, is difficult because
of the complex, static nature of the network.
iv. Vendor dependence: Given the nature of today’s traffic demands on networks, enterprises and
carriers need to deploy new capabilities and services rapidly in response to changing business needs
and user demands. A lack of open interfaces for network functions leaves the enterprises limited by
the relatively slow product cycles of vendor equipment.

3.2-THE SDN APPROACH


Requirements
The Open Data Center Alliance (ODCA) provides a useful, concise list of the principal requirements
for a modern networking approach, which include the following:
Adaptability: Networks must adjust and respond dynamically, based on application needs, business policy,
and network conditions.
Automation: Policy changes must be automatically propagated so that manual work and errors can be
reduced.
Maintainability: Introduction of new features and capabilities (software upgrades, patches) must be
seamless with minimal disruption of operations.
Model management: Network management software must allow management of the network at a model
level, rather than implementing conceptual changes by reconfiguring individual network elements.

Mobility: Control functionality must accommodate mobility, including mobile user devices and virtual
servers.
Integrated security: Network applications must integrate seamless security as a core service instead of as an
add-on solution.
On-demand scaling: Implementations must have the ability to scale up or scale down the network and its
services to support on-demand requests.

SDN Architecture
An analogy can be drawn between the way in which computing evolved from closed, vertically integrated,
proprietary systems into an open approach to computing and the evolution coming with SDN (see Figure 3.1). In
the early decades of computing, vendors such as IBM and DEC provided a fully integrated product, with a
proprietary processor hardware, unique assembly language, unique operating system (OS), and the bulk if not all
of the application software. In this environment, customers, especially large customers, tended to be locked in to
one vendor, dependent primarily on the applications offered by that vendor. Migration to another vendor’s
hardware platform resulted in major upheaval at the application level.


Today, the computing environment is characterized by extreme openness and great customer flexibility. The bulk
of computing hardware consists of x86 and x86-compatible processors for standalone systems and ARM
processors for embedded systems. Even proprietary systems such as Windows and Mac OS provide programming
environments to make porting of applications an easy matter. This openness also enables the development of virtual machines
that can be moved from one server to another across hardware platforms and operating systems.
The networking environment today faces some of the same limitations faced in the pre-open era of computing.
Here the issue is not developing applications that can run on multiple platforms. Rather, the difficulty is the lack
of integration between applications and network infrastructure. As demonstrated in the preceding section,
traditional network architectures are inadequate to meet the demands of the growing volume and variety of traffic.
The central concept behind SDN is to enable developers and network managers to have the same type of control
over network equipment that they have had over x86 servers. The SDN approach splits the switching function
between a data plane and a control plane that are on separate devices (see Figure 3.2). The data plane is simply
responsible for forwarding packets, whereas the control plane provides the “intelligence” in designing routes,
setting priority and routing policy parameters to meet QoS and QoE requirements and to cope with the shifting
traffic patterns. Open interfaces are defined so that the switching hardware presents a uniform interface regardless
of the details of internal implementation. Similarly, open interfaces are defined to enable networking applications
to communicate with the SDN controllers.
Figure 3.3 shows more detail of the SDN approach. The data plane consists of physical switches and virtual
switches. In both cases, the switches are responsible for forwarding packets. The internal implementation of
buffers, priority parameters, and other data structures related to forwarding can be vendor dependent. However,
each switch must implement a model, or abstraction, of packet forwarding that is uniform and open to the SDN
controllers. This model is defined in terms of an open application programming interface (API) between the
control plane and the data plane (southbound API).
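The split between a centralized control plane and simple forwarding devices can be illustrated with a toy sketch. All class and method names here are invented for illustration; they are not the OpenFlow API, only the shape of the idea: the controller holds the network-wide view and programs every switch through one uniform southbound interface.

```python
class Switch:
    """Data plane: forwards packets; holds no routing intelligence."""
    def __init__(self, name):
        self.name = name
        self.rules = {}                    # dst -> out_port, installed by controller

    def install_rule(self, dst, out_port):
        # The uniform southbound interface: same call regardless of vendor.
        self.rules[dst] = out_port

    def forward(self, dst):
        # No rule installed means the switch cannot decide on its own.
        return self.rules.get(dst, "drop")

class Controller:
    """Control plane: centralized view; programs all switches it manages."""
    def __init__(self, switches):
        self.switches = switches

    def set_policy(self, dst, out_port):
        for sw in self.switches:           # one decision, pushed network-wide
            sw.install_rule(dst, out_port)

sw1, sw2 = Switch("sw1"), Switch("sw2")
ctl = Controller([sw1, sw2])
ctl.set_policy("10.0.0.5", 3)
```

The point of the open interface is that `install_rule` looks identical to the controller no matter how each vendor implements forwarding internally.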


FIGURE 3.3 Software-Defined Architecture


Characteristics of Software-Defined Networking
The key characteristics of SDN are as follows:
• The control plane is separated from the data plane. Data plane devices become simple packet-forwarding
devices.
• The control plane is implemented in a centralized controller or set of coordinated centralized controllers.
The SDN controller has a centralized view of the network or networks under its control. The controller is
portable software that can run on commodity servers and is capable of programming the forwarding
devices based on a centralized view of the network.
• Open interfaces are defined between the devices in the control plane (controllers) and those in the data
plane.
• The network is programmable by applications running on top of the SDN controllers. The SDN
controllers present an abstract view of network resources to the applications.

3.3-SDN- AND NFV-RELATED STANDARDS


Unlike some technology areas, such as Wi-Fi, there is no single standards body responsible for developing open
standards (Documents that provide requirements, specifications, guidelines, or characteristics that can be used
consistently to ensure that materials, products, processes, and services are fit for their purpose) for SDN and NFV.
Rather, there is a large and evolving collection of standards-developing organizations (SDOs), industrial
consortia, and open development initiatives involved in creating standards and guidelines for SDN and NFV.
Standards-Developing Organizations
• Internet Society
• ITU-T
• ETSI
All of the above make key contributions to the standardization of SDN and NFV.
Internet Society
A number of standards-developing organizations (SDOs) are looking at various aspects of SDN. Perhaps the
most active are two groups within the Internet Society (ISOC): IETF and IRTF. ISOC is the coordinating
committee for Internet design, engineering, and management. Areas covered include the operation of the Internet
itself and the standardization of protocols used by end systems on the Internet for interoperability.
The Internet Engineering Task Force (IETF) has working groups developing SDN-related specifications in the
following areas:
Interface to routing systems (I2RS): Develop capabilities to interact with routers and routing protocols to
apply routing policies.
Service function chaining: Develop an architecture and capabilities for controllers to direct subsets of traffic
across the network in such a way that each virtual service platform sees only the traffic it must work with.
ITU-T
The International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) is a UN agency
that issues standards, called recommendations, in the telecommunications area. Its Recommendation Y.3300 addresses

definitions, objectives, high-level capabilities, requirements, and high-level architecture of SDN. It provides a
valuable framework for standards development.
Four ITU-T study groups (SGs) are involved in SDN-related activities:
• SG 13 (Future networks, including cloud computing, mobile, and next-generation networks): This
is the lead study group on SDN in ITU-T and developed Y.3300. This group is studying SDN and
virtualization aspects for next-generation networks (NGNs).
• SG 11 (Signaling requirements, protocols, and test specifications): This group is studying the
framework for SDN signaling and how to apply SDN technologies for IPv6.
• SG 15 (Transport, access, and home): This group looks at optical transport networks, access
networks, and home networks. The group is investigating transport aspects of SDN, aligned with the
Open Network Foundation’s SDN architecture.
• SG 16 (Multimedia): This group is evaluating OpenFlow as a protocol to control multimedia packet
flows, and is studying virtual content delivery networks.

ETSI (European Telecommunications Standards Institute)


ETSI is recognized by the European Union as a European Standards Organization. However, this not-for-profit
SDO has member organizations worldwide and its standards have international impact.
ETSI has taken the lead role in defining standards for NFV. ETSI’s Network Functions Virtualisation (NFV)
Industry Specification Group (ISG) began work in January 2013 and produced a first set of specifications in
January 2015. The 11 specifications cover the NFV architecture, infrastructure, service quality metrics,
management and orchestration, resiliency requirements, and security guidance.

Industry Consortia
Consortia for open standards began to appear in the late 1980s. There was a growing feeling within private-sector
multinational companies that the SDOs acted too slowly to provide useful standards in the fast-paced world of
technology. Recently, a number of consortia have become involved in the development of SDN and NFV
standards.

By far the most important consortium (A group of independent organizations joined by common interests)
involved in SDN standardization is the Open Networking Foundation (ONF). ONF is an industry consortium
dedicated to the promotion and adoption of SDN through open standards development. Its most important
contribution to date is the OpenFlow protocol and API. The OpenFlow protocol is the first standard interface
specifically designed for SDN and is already being deployed in a variety of networks and networking products,
both hardware based and software based. The standard enables networks to evolve by giving logically centralized
control software the power to modify the behavior of network devices through a well-defined “forwarding
instruction set.”

The Open Data Center Alliance (ODCA) is a consortium of leading global IT organizations dedicated to
accelerating adoption of interoperable solutions and services for cloud computing. Through the development of
usage models for SDN and NFV, ODCA is defining requirements for SDN and NFV cloud deployment.

The Alliance for Telecommunications Industry Solutions (ATIS) is a membership organization that provides the
tools necessary for the industry to identify standards, guidelines, and operating procedures that make the
interoperability of existing and emerging telecommunications products and services possible. Although ATIS is
accredited by ANSI, it is best viewed as a consortium rather than an SDO.

Open Development Initiatives


There are a number of other organizations that are not specifically created by industry members and are not
official bodies such as SDOs. Generally, these organizations are user created and driven and have a particular
focus, always with the goal of developing open standards or open source software. A number of such groups have
become active in SDN and NFV standardization. This section lists three of the most significant efforts.
OpenDaylight
OpenDaylight is an open source software activity under the auspices of the Linux Foundation. Its member
companies provide resources to develop an SDN controller for a wide range of applications. Although the core
membership consists of companies, individual developers and users can also participate, so OpenDaylight is more
in the nature of an open development initiative than a consortium. ODL also supports network programmability
via southbound protocols, a set of programmable network services, a collection of northbound APIs, and a set
of applications.

Open Platform for NFV


Open Platform for NFV is an open source project dedicated to accelerating the adoption of standardized NFV
elements. OPNFV will establish a carrier-grade, integrated, open source reference platform that industry peers
will build together to advance the evolution of NFV and to ensure consistency, performance, and interoperability
among multiple open source components. Because multiple open source NFV building blocks already exist,
OPNFV will work with upstream projects to coordinate continuous integration and testing while filling
development gaps.

OpenStack
OpenStack is an open source software project that aims to produce an open source cloud operating system. It
provides multitenant Infrastructure as a Service (IaaS), and aims to meet the needs of public and private clouds
regardless of size, by being simple to implement and massively scalable. SDN technology is expected to
contribute to its networking part, and to make the cloud operating system more efficient, flexible, and reliable.
OpenStack is composed of a number of projects. One of them, Neutron, is dedicated to networking. It provides
Network as a Service (NaaS) to other OpenStack services. Almost all SDN controllers provide plug-ins for
Neutron, through which OpenStack services can build rich networking topologies and configure advanced
network policies in the cloud.


4-SDN Data Plane and OpenFlow


4.1-SDN DATA PLANE
The SDN data plane, referred to as the resource layer in ITU-T Y.3300 and also often referred to as the
infrastructure layer, is where network forwarding devices perform the transport and processing of data according
to decisions made by the SDN control plane. The important characteristic of the network devices in an SDN
network is that these devices perform a simple forwarding function, without embedded software to make
autonomous decisions.

Data Plane Functions


Figure 4.2 illustrates the functions performed by the data plane network devices (also called data plane network
elements or switches). The principal functions of the network device are the following:

Control support function: Interacts with the SDN control layer to support programmability via resource-
control interfaces. The switch communicates with the controller and the controller manages the switch via the
OpenFlow switch protocol.
Data forwarding function: Accepts incoming data flows from other network devices and end systems and
forwards them along the data forwarding paths that have been computed and established according to the rules
defined by the SDN applications.
These forwarding rules used by the network device are embodied in forwarding tables that indicate, for given
categories of packets, what the next hop in the route should be. In addition to simple forwarding of a packet,
the network device can alter the packet header before forwarding, or discard the packet. As shown, arriving
packets may be placed in an input queue, awaiting processing by the network device, and forwarded packets
are generally placed in an output queue, awaiting transmission.
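The data forwarding function just described (input queue, table lookup, output queue or discard) can be sketched as follows. This is a hedged illustration: the flow names, ports, and table contents are invented, and real devices do this per-port in hardware.

```python
from collections import deque

# Hypothetical forwarding rules installed by the control plane:
# flow category -> output port.
forwarding_table = {"flow-web": "port2", "flow-voip": "port1"}

# Arriving packets wait in an input queue pending processing.
input_queue = deque([("flow-web", "payload-a"),
                     ("flow-voip", "payload-b"),
                     ("flow-unknown", "payload-c")])
output_queues = {"port1": deque(), "port2": deque()}
dropped = []

while input_queue:
    flow, payload = input_queue.popleft()
    port = forwarding_table.get(flow)
    if port is None:
        dropped.append((flow, payload))      # no rule for this flow: discard
    else:
        output_queues[port].append(payload)  # queue for transmission on port
```

In a real SDN switch, the "no rule" case would typically trigger a query to the controller rather than a silent discard; the drop here just keeps the sketch short.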

The network device in Figure 4.2 is shown with three I/O ports: one providing control communication with an
SDN controller, and two for the input and output of data packets. This is a simple example. The network device
may have multiple ports to communicate with multiple SDN controllers, and may have more than two I/O ports
for packet flows into and out of the device.
Data Plane Protocols
Data packet flows consist of streams of IP packets. It may be necessary for the forwarding table to define entries
based on fields in upper-level protocol headers, such as TCP, UDP, or some other transport or application
protocol. The network device examines the IP header and possibly other headers in each packet and makes a
forwarding decision.

4.2-OPENFLOW LOGICAL NETWORK DEVICE


To turn the concept of SDN into practical implementation, two requirements must be met:
• There must be a common logical architecture in all switches, routers, and other network devices to be
managed by an SDN controller. This logical architecture may be implemented in different ways on
different vendor equipment and in different types of network devices, as long as the SDN controller sees
a uniform logical switch functionality.
• A standard, secure protocol is needed between the SDN controller and the network device.
These requirements are addressed by OpenFlow, which is both a protocol between SDN controllers and network
devices and a specification of the logical structure of the network switch functionality. OpenFlow is defined in
the OpenFlow Switch Specification, published by the Open Networking Foundation (ONF).

Figure 4.3 indicates the main elements of an OpenFlow environment, consisting of SDN controllers that include
OpenFlow software, OpenFlow switches, and end systems.
Figure 4.4 displays the main components of an OpenFlow switch. An SDN controller communicates with
OpenFlow-compatible switches using the OpenFlow protocol running over Transport Layer Security (TLS). Each
switch connects to other OpenFlow switches and, possibly, to end-user devices that are the sources and
destinations of packet flows. On the switch side, the interface is known as an OpenFlow channel. These
connections are via OpenFlow ports. An OpenFlow port also connects the switch to the SDN controller.

OpenFlow defines three types of ports:


i. Physical port: Corresponds to a hardware interface of the switch. For example, on an Ethernet switch,
physical ports map one to one to the Ethernet interfaces.
ii. Logical port: Does not correspond directly to a hardware interface of the switch. Logical ports are
higher-level abstractions that may be defined in the switch using non-OpenFlow methods (for example,
link aggregation groups, tunnels, loopback interfaces). Logical ports may include packet encapsulation
and may map to various physical ports.
iii. Reserved port: Defined by the OpenFlow specification. It specifies generic forwarding actions such
as sending to and receiving from the controller, flooding, or forwarding using non-OpenFlow methods,
such as “normal” switch processing.

The OpenFlow specification defines three types of tables in the logical switch architecture.
i. A flow table matches incoming packets to a particular flow and specifies what functions are to be
performed on the packets. There may be multiple flow tables that operate in a pipeline fashion.
ii. A flow table may direct a flow to a group table, which may trigger a variety of actions that affect one
or more flows.
iii. A meter table can trigger a variety of performance-related actions on a flow. Using the OpenFlow
switch protocol, the controller can add, update, and delete flow entries in tables, both reactively (in
response to packets) and proactively.
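The relationship between flow tables and group tables can be sketched as below. This is not the OpenFlow wire format; the table contents and dictionary layout are invented for illustration. It shows one entry sending matching packets to an "all"-type group, whose buckets replicate the packet to several ports.

```python
# Hypothetical group table: group 1 replicates a packet to ports 2 and 3,
# in the spirit of an OpenFlow "all" group used for flooding/multicast.
group_table = {
    1: {"type": "all", "buckets": [{"output": 2}, {"output": 3}]},
}

# Hypothetical flow table: (destination MAC to match, action) pairs.
flow_table = [
    ("ff:ff:ff:ff:ff:ff", ("group", 1)),    # broadcast -> group 1
    ("aa:bb:cc:dd:ee:01", ("output", 2)),   # unicast -> single port
]

def process(dst):
    """Return the list of output ports for a packet with this destination."""
    for match_dst, action in flow_table:
        if dst == match_dst:
            kind, arg = action
            if kind == "output":
                return [arg]
            if kind == "group":             # expand the group's buckets
                g = group_table[arg]
                return [b["output"] for b in g["buckets"]]
    return []                               # table miss: no output
```

The value of the group table is that many flow entries can point at the same group, so changing the group's buckets updates the behavior of all those flows at once.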

Flow Table Structure
The basic building block of the logical switch architecture is the flow table. Each packet that enters a switch
passes through one or more flow tables. Each flow table consists of a number of rows, called entries, consisting
of seven components (see part a of Figure 4.5), as defined in the list that follows:

• Priority: Relative priority of table entries. This is a 16-bit field with 0 corresponding to the lowest
priority. In principle, there could be 2^16 = 64k priority levels.
• Counters: Updated for matching packets. The OpenFlow specification defines a variety of counters.
Table 4.1 lists the counters that must be supported by an OpenFlow switch.

• Instructions: Instructions to be performed if a match occurs.


• Timeouts: Maximum amount of idle time before a flow is expired by the switch. Each flow entry has
an idle_timeout and a hard_timeout associated with it. A nonzero hard_timeout field causes the flow
entry to be removed after the given number of seconds, regardless of how many packets it has matched.

A nonzero idle_timeout field causes the flow entry to be removed when it has matched no packets in the
given number of seconds.
• Cookie: 64-bit opaque data value chosen by the controller. May be used by the controller to filter flow
statistics, flow modification and flow deletion; not used when processing packets.
• Flags: Flags alter the way flow entries are managed; for example, the flag OFPFF_SEND_FLOW_REM
triggers flow removed messages for that flow entry.
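The idle_timeout and hard_timeout semantics described above can be made concrete with a small sketch. The class is hypothetical (OpenFlow entries are not Python objects), but the field names and the expiry rules mirror the text: hard_timeout expires the entry a fixed time after installation regardless of traffic; idle_timeout expires it after a period with no matching packets; a zero value disables that timer.

```python
class FlowEntry:
    """Sketch of a flow entry's timeout bookkeeping (times in seconds)."""
    def __init__(self, idle_timeout, hard_timeout, installed_at):
        self.idle_timeout = idle_timeout
        self.hard_timeout = hard_timeout
        self.installed_at = installed_at
        self.last_match = installed_at
        self.packet_count = 0              # one of the per-entry counters

    def match(self, now):
        """Record a matching packet at time `now`."""
        self.packet_count += 1
        self.last_match = now

    def expired(self, now):
        """Apply the two timeout rules; zero disables a timer."""
        if self.hard_timeout and now - self.installed_at >= self.hard_timeout:
            return True                    # expires even while still matching
        if self.idle_timeout and now - self.last_match >= self.idle_timeout:
            return True                    # expires after a quiet period
        return False

entry = FlowEntry(idle_timeout=5, hard_timeout=60, installed_at=0)
entry.match(now=3)
```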
Match Fields Component
The match fields component of a table entry consists of the following required fields:
• Ingress port: The identifier of the port on this switch on which the packet arrived. This may be a
physical port or a switch-defined virtual port. Required in ingress tables.
• Egress port: The identifier of the egress port from the action set. Required in egress tables.
• Ethernet source and destination addresses: Each entry can be an exact address, a bitmasked value
for which only some of the address bits are checked, or a wildcard value (match any value).
• Ethernet type field: Indicates the type of the Ethernet packet payload.
• IP: Version 4 or 6.
• IPv4 or IPv6 source address, and destination address: Each entry can be an exact address, a
bitmasked value, a subnet mask value, or a wildcard value.
• TCP source and destination ports: Exact match or wildcard value.
• UDP source and destination ports: Exact match or wildcard value.
The following fields may be optionally supported.
• Physical port: Used to designate underlying physical port when packet is received on a logical port.
• Metadata: Additional information that can be passed from one table to another during the processing of
a packet. Its use is discussed subsequently.
• VLAN ID and VLAN user priority: Fields in the IEEE 802.1Q virtual LAN header.
• IPv4 or IPv6 DS and ECN: Differentiated Services and Explicit Congestion Notification fields.
• SCTP source and destination ports: Exact match or wildcard value for Stream Control Transmission
Protocol.
• ICMP type and code fields: Exact match or wildcard value.
• ARP opcode: Exact match in Ethernet Type field.
• Source and target IPv4 addresses in ARP payload: Can be an exact address, a bitmasked value, a subnet
mask value, or a wildcard value.
• IPv6 flow label: Exact match or wildcard.
• ICMPv6 type and code fields: Exact match or wildcard value.
• IPv6 neighbor discovery target address: In an IPv6 Neighbor Discovery message.
• IPv6 neighbor discovery source and target addresses: Link-layer address options in an IPv6 Neighbor
Discovery message.
• MPLS label value, traffic class, and BoS (bottom of stack) bit: Fields in the top label of an MPLS label stack.
• Provider bridge traffic ISID: Service instance identifier.
• Tunnel ID: Metadata associated with a logical port.
• TCP flags: Flag bits in the TCP header. May be used to detect start and end of TCP connections.
• IPv6 extension: Extension header.
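The matching rules above can be sketched in a few lines of Python. This is an illustrative model, not the OpenFlow wire format: each match field is an exact value, a (value, bitmask) pair for which only the masked bits are checked, or a wildcard, and a packet matches an entry only if every specified field matches.

```python
# Illustrative sketch (not a real OpenFlow library): a flow-table match
# entry where each field is an exact value, a (value, bitmask) pair, or
# the wildcard None ("match any value"), mirroring the fields above.

WILDCARD = None  # matches any value

def field_matches(rule, packet_value):
    """Check one match field against the corresponding packet field."""
    if rule is WILDCARD:
        return True
    if isinstance(rule, tuple):          # (value, bitmask): only masked bits checked
        value, mask = rule
        return (packet_value & mask) == (value & mask)
    return rule == packet_value          # exact match

def packet_matches(entry, packet):
    """A packet matches an entry only if every specified field matches."""
    return all(field_matches(rule, packet.get(name))
               for name, rule in entry.items())

# Example entry: any ingress port, one exact IPv4 source,
# a /24 destination subnet expressed as a bitmask, one exact TCP port.
entry = {
    "ingress_port": WILDCARD,
    "ipv4_src": 0x0A000001,                    # 10.0.0.1 exactly
    "ipv4_dst": (0xC0A80100, 0xFFFFFF00),      # 192.168.1.0/24 via bitmask
    "tcp_dst": 80,
}
packet = {"ingress_port": 3, "ipv4_src": 0x0A000001,
          "ipv4_dst": 0xC0A80117, "tcp_dst": 80}
```

Here `packet_matches(entry, packet)` returns `True`: the destination 192.168.1.23 falls inside the bitmasked 192.168.1.0/24 block and all other fields match.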
By- Mr. Bhanuprasad Vishwakarma pg. 13
M.Sc.IT Sem II Modern Networking Unit-II
Instructions Component
The instructions component of a table entry consists of a set of instructions that are executed if the packet
matches the entry. Before describing the types of instructions, we need to define the terms action and
action set. Actions describe packet forwarding, packet modification, and group table processing
operations. The OpenFlow specification includes the following actions:
• Output: Forward packet to specified port. The port could be an output port to another switch or the port
to the controller. In the latter case, the packet is encapsulated in a message to the controller.
• Set-Queue: Sets the queue ID for a packet. When the packet is forwarded to a port using the output action,
the queue ID determines which queue attached to this port is used for scheduling and forwarding the
packet. Forwarding behavior is dictated by the configuration of the queue and is used to provide basic
QoS support.
• Group: Process packet through specified group.
• Push-Tag/Pop-Tag: Push or pop a tag field for a VLAN or Multiprotocol Label Switching (MPLS)
packet.
• Set-Field: The various Set-Field actions are identified by their field type and modify the values of
respective header fields in the packet.
• Change-TTL: The various Change-TTL actions modify the values of the IPv4 TTL (time to live), IPv6
hop limit, or MPLS TTL in the packet.
• Drop: There is no explicit action to represent drops. Instead, packets whose action sets have no output
action should be dropped.

The types of instructions can be grouped into four categories:


• Direct packet through pipeline: The Goto-Table instruction directs the packet to a table farther along in
the pipeline. The Meter instruction directs the packet to a specified meter.
• Perform action on packet: Actions may be performed on the packet when it is matched to a table entry.
The Apply-Actions instruction applies the specified actions immediately, without any change to the action
set associated with this packet. This instruction may be used to modify the packet between two tables in
the pipeline.
• Update action set: The Write-Actions instruction merges specified actions into the current action set for
this packet. The Clear-Actions instruction clears all the actions in the action set.
• Update metadata: A metadata value can be associated with a packet. It is used to carry information from
one table to the next. The Write-Metadata instruction updates an existing metadata value or creates a new
value.
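The instruction semantics above can be sketched as follows. This is a hedged model, not the OpenFlow encoding: the action set is represented as a dict keyed by action type, so Write-Actions merges new actions (overwriting an action of the same type), Clear-Actions empties the set, and Apply-Actions acts on the packet immediately without touching the set.

```python
# Hedged sketch of instruction semantics; names and shapes are illustrative.

def write_actions(action_set, new_actions):
    """Write-Actions: merge new actions into the accumulated action set."""
    merged = dict(action_set)
    merged.update(new_actions)       # an action of the same type is overwritten
    return merged

def clear_actions(action_set):
    """Clear-Actions: remove all actions from the action set."""
    return {}

def apply_actions(packet, actions):
    """Apply-Actions: act on the packet now; the action set is unchanged."""
    for kind, arg in actions.items():
        if kind == "set_field":
            field, value = arg
            packet[field] = value    # modify the packet between two tables
    return packet

action_set = {}
action_set = write_actions(action_set, {"set_queue": 2, "output": 1})
action_set = write_actions(action_set, {"output": 4})   # replaces output:1
packet = apply_actions({"vlan_id": 10}, {"set_field": ("vlan_id", 20)})
```

After these calls the action set holds `set_queue: 2` and `output: 4`, while the packet's VLAN ID has already been rewritten in place.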

Flow Table Pipeline


A switch includes one or more flow tables. If there is more than one flow table, they are organized as a pipeline,
with the tables labeled with increasing numbers starting with zero. The use of multiple tables in a pipeline, rather
than a single flow table, provides the SDN controller with considerable flexibility.
The OpenFlow specification defines two stages of processing:

• Ingress processing: Ingress processing always happens, beginning with Table 0, and uses the identity
of the input port. Table 0 may be the only table, in which case the ingress processing is simplified to
the processing performed on that single table, and there is no egress processing.
• Egress processing: Egress processing is the processing that happens after the determination of the
output port. It happens in the context of the output port. This stage is optional. If it occurs, it may
involve one or more tables. The separation of the two stages is indicated by the numerical identifier of
the first egress table. All tables with a number lower than the first egress table must be used as ingress
tables, and no table with a number higher than or equal to the first egress table can be used as an ingress
table.
Pipeline processing always starts with ingress processing at the first flow table; the packet must be first matched
against flow entries of flow Table 0.
When a packet is presented to a table for matching, the input consists of the packet, the identity of the ingress
port, the associated metadata value, and the associated action set. For Table 0, the metadata value is blank and
the action set is null. At each table, processing proceeds as follows (see Figure 4.6).
1. If there is a match on one or more entries, other than the table-miss entry, the match is defined to be with the
highest-priority matching entry. As mentioned in the preceding discussion, the priority is a component of a table
entry and is set via OpenFlow; the priority is determined by the user or application invoking OpenFlow. The
following steps may then be performed:
a. Update any counters associated with this entry.
b. Execute any instructions associated with this entry. This may include updating the action set, updating
the metadata value, and performing actions.
c. Forward the packet to a flow table farther along the pipeline, to the group table, to the meter
table, or to an output port.
2. If there is a match only on a table-miss entry, the table entry may contain instructions, as with any other
entry. In practice, the table-miss entry specifies one of three actions:
a. Send packet to controller. This will enable the controller to define a new flow for this and similar packets,
or decide to drop the packet.
b. Direct packet to another flow table farther down the pipeline.
c. Drop the packet.
3. If there is no match on any entry and there is no table-miss entry, the packet is dropped.

For the final table in the pipeline, forwarding to another flow table is not an option. If and when a packet is finally
directed to an output port, the accumulated action set is executed and then the packet is queued for output. Figure
4.7 illustrates the overall ingress pipeline process.
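The per-table steps above can be sketched as a small matching routine. This is an illustrative model with invented data shapes: each entry has a priority, a match predicate, and a counter, and the table-miss entry is represented as a priority-0 entry that matches everything.

```python
# Sketch of per-table processing: pick the highest-priority matching
# entry, update its counter, or drop the packet if nothing matches and
# there is no table-miss entry. Entry/packet shapes are illustrative.

def process_table(table, packet):
    """Return the highest-priority matching entry, or None (drop)."""
    candidates = [e for e in table if e["match"](packet)]
    if not candidates:
        return None                          # no match, no table-miss: drop
    best = max(candidates, key=lambda e: e["priority"])
    best["counters"] = best.get("counters", 0) + 1   # step (a): update counters
    return best                              # caller then executes instructions

table0 = [
    {"priority": 100, "match": lambda p: p.get("tcp_dst") == 80},
    {"priority": 0,   "match": lambda p: True},      # table-miss entry
]
hit  = process_table(table0, {"tcp_dst": 80})   # the priority-100 entry wins
miss = process_table(table0, {"tcp_dst": 22})   # falls through to table-miss
```

A table built without the priority-0 entry would return `None` for unmatched packets, modeling step 3 (drop when there is no table-miss entry).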


FIGURE 4.6 Simplified Flowchart Detailing Packet Flow Through an OpenFlow Switch


If egress processing is associated with a particular output port, then after a packet is directed to an output port at
the completion of the ingress processing, the packet is directed to the first flow table of the egress pipeline. Egress
pipeline processing proceeds in the same fashion as for ingress processing, except that there is no group table
processing at the end of the egress pipeline. Egress processing is shown in Figure 4.8


Figure 4.8: Packet Flow Through OpenFlow Switch: Egress Processing


The Use of Multiple Tables
The use of multiple tables enables the nesting of flows, or put another way, the breaking down of a single flow
into a number of parallel subflows. Figure 4.9 illustrates this property. In this example, an entry in Table 0 defines
a flow consisting of packets traversing the network from a specific source IP address to a specific destination IP
address. Once a least-cost route between these two endpoints is established, it might make sense for all traffic
between these two endpoints to follow that route, and the next hop on that route from this switch can be entered
in Table 0. In Table 1, separate entries for this flow can be defined for different transport layer protocols, such as
TCP and UDP. For these subflows, the same output port might be retained so that the subflows all follow the
same route. However, TCP includes elaborate congestion control mechanisms not normally found with UDP, so
it might be reasonable to handle the TCP and UDP subflows differently in terms of quality of service (QoS)-
related parameters. Any of the Table 1 entries could immediately route its respective subflow to the output port,
but some or all of the entries may invoke Table 2, further dividing each subflow. The figure shows that the TCP
subflow could be divided on the basis of the protocol running on top of TCP, such as Simple Mail Transfer
Protocol (SMTP) or File Transfer Protocol (FTP). Similarly, the UDP flow could be subdivided based on
protocols running on UDP, such as Simple Network Management Protocol (SNMP). The figure also indicates
other subflows at Tables 1 and 2, which may be used for other purposes.

For this example, it would be possible to define each of these fine-grained subflows in Table 0. The use of multiple
tables simplifies the processing in both the SDN controller and the OpenFlow switch. Actions such as next hop
that apply to the aggregate flow can be defined once by the controller and examined and performed once by the
switch. The addition of new subflows at any level involves less setup. Therefore, the use of pipelined, multiple
tables increases the efficiency of network operations, provides granular control, and enables the network to
respond to real-time changes at the application, user, and session levels.
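The nesting in the Figure 4.9 example can be sketched as a cascade of decisions: Table 0 pins the next hop once for the aggregate (source, destination) flow, Table 1 splits it by transport protocol for QoS, and Table 2 splits it by application protocol. The port number, QoS profile names, and addresses here are invented for illustration.

```python
# Illustrative model of nested flow tables; values are made up.

def classify(packet):
    decisions = {}
    # Table 0: aggregate flow -> next hop chosen once for all subflows
    if packet["ipv4_src"] == "10.0.0.1" and packet["ipv4_dst"] == "10.0.0.9":
        decisions["output_port"] = 7          # least-cost route's next hop
        # Table 1: per-transport QoS handling; the route is retained
        if packet["proto"] == "tcp":
            decisions["qos"] = "tcp-profile"
            # Table 2: per-application subflow (SMTP, FTP, ...)
            decisions["app"] = {25: "smtp", 21: "ftp"}.get(packet["dport"])
        elif packet["proto"] == "udp":
            decisions["qos"] = "udp-profile"
            decisions["app"] = {161: "snmp"}.get(packet["dport"])
    return decisions

d = classify({"ipv4_src": "10.0.0.1", "ipv4_dst": "10.0.0.9",
              "proto": "tcp", "dport": 25})
```

The point of the structure is that the next-hop decision appears exactly once; adding a new application subflow touches only the innermost dispatch, mirroring how a new Table 2 entry requires no change to Tables 0 and 1.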

Group Table
In the course of pipeline processing, a flow table may direct a flow of packets to the group table rather than
another flow table. The group table and group actions enable OpenFlow to represent a set of ports as a single
entity for forwarding packets. Different types of groups are provided to represent different forwarding
abstractions, such as multicasting and broadcasting.

Each entry in the group table consists of the following components:

• Group identifier: A 32-bit unsigned integer uniquely identifying the group. A group is defined as an entry
in the group table.
• Group type: Determines the group's semantics, as explained subsequently.
• Counters: Updated when packets are processed by a group.
• Action buckets: An ordered list of action buckets, where each action bucket contains a set of actions
to execute and associated parameters.
Each group includes a set of one or more action buckets. Each bucket contains a list of actions. Unlike the
action set associated with a flow table entry, which is a list of actions that accumulate while the packet is
processed by each flow table, the action list in a bucket is executed when a packet reaches a bucket. The action
list is executed in sequence and generally ends with the Output action, which forwards the packet to a specified
port. The action list may also end with the Group action, which sends the packet to another group. This enables
the chaining of groups for more complex processing.

A group is designated as one of the types depicted in Figure 4.10: all, select, fast failover, and indirect.


• The all type executes all the buckets in the group. Thus, each arriving packet is effectively cloned.
Typically, each bucket will designate a different output port, so that the incoming packet is then
transmitted on multiple output ports. This group type is used for multicast or broadcast forwarding.
• The select type executes one bucket in the group, based on a switch-computed selection algorithm (for
example, a hash on some user-configured tuple, or simple round-robin). The selection algorithm should
implement equal load sharing or, optionally, load sharing based on bucket weights assigned by the SDN
controller.
• The fast failover type executes the first live bucket. Port liveness is managed by code outside the scope
of OpenFlow and may have to do with routing algorithms or congestion control mechanisms. The buckets
are evaluated in order, and the first live bucket is selected. This group type enables the switch to change
forwarding without requiring a round trip to the controller.
• The indirect type allows multiple packet flows (that is, multiple flow table entries) to point to a common
group identifier. This type provides for more efficient management by the controller in certain situations.
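The group-type semantics can be sketched as follows. This is a hedged model: buckets are callables that forward a packet, the hash used for the select type is an arbitrary illustrative choice, and liveness information is supplied from outside, matching the text's note that port liveness is managed outside OpenFlow.

```python
import zlib

def run_group(gtype, buckets, packet, live=None):
    """Toy dispatcher for the all / select / fast-failover group types."""
    if gtype == "all":            # clone to every bucket (multicast/broadcast)
        return [b(packet) for b in buckets]
    if gtype == "select":         # one bucket, chosen by a hash of the flow
        i = zlib.crc32(repr(sorted(packet.items())).encode()) % len(buckets)
        return buckets[i](packet)
    if gtype == "fast_failover":  # first live bucket, no controller round trip
        for i, b in enumerate(buckets):
            if live[i]:
                return b(packet)
        return None               # no live bucket: the packet is dropped
    raise ValueError(gtype)

out = lambda port: (lambda p: ("out", port))   # bucket forwarding to one port
pkt = {"ipv4_src": "10.0.0.1", "tcp_dst": 80}

all_result = run_group("all", [out(1), out(2)], pkt)
ff_result  = run_group("fast_failover", [out(1), out(2)], pkt,
                       live=[False, True])     # port 1 down -> use port 2
```

Because the select hash is computed over the flow's fields, all packets of one flow keep landing in the same bucket, which is what makes hash-based selection usable for load sharing without packet reordering within a flow.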

4.3-OPENFLOW PROTOCOL
The OpenFlow protocol describes message exchanges that take place between an OpenFlow controller and an
OpenFlow switch. Typically, the protocol is implemented on top of TLS, providing a secure OpenFlow channel.
The OpenFlow protocol enables the controller to perform add, update, and delete actions to the flow entries in the
flow tables.
The OpenFlow specification defines three types of messages exchanged over this channel:
• Controller-to-switch: These messages are initiated by the controller and, in some cases, require a response
from the switch. This class of messages enables the controller to manage the logical state of the switch,
including its configuration and details of flow and group table entries. Also included in this class is the
Packet-out message, which is sent by the controller to a switch when that switch sends a packet to
the controller and the controller decides not to drop the packet but to direct it to a switch output port.
• Asynchronous: These messages are sent by the switch without solicitation from the controller. This class
includes various status messages to the controller. Also included is the Packet-in message, which may be
used by the switch to send a packet to the controller when there is no flow table match.
• Symmetric: These messages are sent without solicitation from either the controller or the switch. They
are simple yet helpful. Hello messages are typically sent back and forth between the controller and switch
when the connection is first established. Echo request and reply messages can be used by either the switch
or the controller to measure the latency or bandwidth of a controller-switch connection, or simply to verify that the
device is up and running. The Experimenter message is used to stage features to be built into future
versions of OpenFlow.
In general terms, the OpenFlow protocol provides the SDN controller with three types of information to be
used in managing the network:
• Event-based messages: Sent by the switch to the controller when a link or port change occurs.
• Flow statistics: Generated by the switch based on traffic flow. This information enables the controller
to monitor traffic, reconfigure the network as needed, and adjust flow parameters to meet QoS
requirements.
• Encapsulated packets: Sent by the switch to the controller either because there is an explicit action
to send this packet in a flow table entry or because the switch needs information for establishing a new
flow.
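The message classification above can be captured in a small lookup table. The message names follow the OpenFlow specification; the three-way grouping is exactly the one described in the text.

```python
# Mapping a few well-known OpenFlow message types to the three protocol
# classes described above.

MESSAGE_CLASS = {
    "flow_mod":     "controller-to-switch",  # add/update/delete flow entries
    "packet_out":   "controller-to-switch",  # controller directs a packet out
    "packet_in":    "asynchronous",          # switch escalates unmatched packet
    "port_status":  "asynchronous",          # link/port change event
    "hello":        "symmetric",             # connection setup
    "echo_request": "symmetric",             # liveness/latency probe
    "echo_reply":   "symmetric",
}

def message_class(msg_type):
    """Return the protocol class of an OpenFlow message type."""
    return MESSAGE_CLASS[msg_type]
```

For example, a Packet-in arriving at the controller is asynchronous, while the Flow-mod the controller sends back to install a new flow entry is controller-to-switch.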

5.-SDN Control Plane


5.1-SDN CONTROL PLANE ARCHITECTURE
The SDN control layer maps application layer service requests into specific commands and directives to data
plane switches and supplies applications with information about data plane topology and activity. The control
layer is implemented as a server or cooperating set of servers known as SDN controllers.
Control Plane Functions
Figure 5.2 illustrates the functions performed by SDN controllers. The figure illustrates the essential functions
that any controller should provide, which include the following:
• Shortest path forwarding: Uses routing information collected from switches to establish
preferred routes.
• Notification manager: Receives, processes, and forwards to applications events such as alarm
notifications, security alarms, and state changes.
• Security mechanisms: Provides isolation and security enforcement between applications and services.
• Topology manager: Builds and maintains switch interconnection topology information.
• Statistics manager: Collects data on traffic through the switches.
• Device manager: Configures switch parameters and attributes and manages flow tables.

The functionality provided by the SDN controller can be viewed as a network operating system (NOS). As with
a conventional OS, an NOS provides essential services, common application programming interfaces (APIs), and
an abstraction of lower-layer elements to developers. The functions of an SDN NOS, such as those in the
preceding list, enable developers to define network policies and manage networks without concern for the details
of the network device characteristics, which may be heterogeneous and dynamic. The northbound interface,
discussed subsequently, provides a uniform means for application developers and network managers to access
SDN services and perform network management tasks. Further, well-defined northbound interfaces enable
developers to create software that is not only independent of data plane details but also, to a great extent,
portable across a variety of SDN controllers.
A number of different initiatives, both commercial and open source, have resulted in SDN controller
implementations. The following list describes a few prominent ones:

• OpenDaylight: An open source platform for network programmability to enable SDN, written in Java.
OpenDaylight was founded by Cisco and IBM, and its membership is heavily weighted toward network
vendors. OpenDaylight can be implemented as a single centralized controller, but enables controllers to
be distributed where one or multiple instances may run on one or more clustered servers in the network.
• Open Network Operating System (ONOS): An open source SDN NOS, initially released in 2014. It is
a nonprofit effort funded and developed by a number of carriers, such as AT&T and NTT, and other
service providers. Significantly, ONOS is supported by the Open Networking Foundation, making it
likely that ONOS will be a major factor in SDN deployment. ONOS is designed to be used as a
distributed controller and provides abstractions for partitioning and distributing network state onto
multiple distributed controllers.
• POX: An open source OpenFlow controller that has been implemented by a number of SDN developers
and engineers. POX has a well written API and documentation. It also provides a web-based graphical
user interface (GUI) and is written in Python, which typically shortens its experimental and
developmental cycles compared to some other implementation languages, such as C++.
• Beacon: An open source package developed at Stanford. Written in Java and highly integrated into the
Eclipse integrated development environment (IDE). Beacon was the first controller that made it possible
for beginner programmers to work with and create a working SDN environment.
• Floodlight: An open source package developed by Big Switch Networks. Although its beginning was
based on Beacon, it was built using Apache Ant, which is a very popular software build tool that makes
the development of Floodlight easier and more flexible. Floodlight has an active community and has a
large number of features that can be added to create a system that best meets the requirements of a
specific organization. Both a web-based and Java-based GUI are available and most of its functionality
is exposed through a REST API.
• Ryu: An open source, component-based SDN framework developed by NTT Labs, written entirely in
Python.
• Onix: Another distributed controller, jointly developed by VMware, Google, and NTT. Onix is a
commercially available SDN controller.

Southbound Interface
The southbound interface provides the logical connection between the SDN controller and the data plane switches
(see Figure 5.3). Some controller products and configurations support only a single southbound protocol. A more
flexible approach is the use of a southbound abstraction layer that provides a common interface for the control
plane functions while supporting multiple southbound APIs.

The most commonly implemented southbound APIs include the following:

• OpenFlow protocol: The OpenFlow protocol defines the interface between an OpenFlow Controller
and an OpenFlow switch. The OpenFlow protocol allows the OpenFlow Controller to instruct the
OpenFlow switch on how to handle incoming data packets.
• Open vSwitch Database Management Protocol (OVSDB): Open vSwitch (OVS) is an open source
software project that implements virtual switching interoperable with almost all popular
hypervisors. OVS uses OpenFlow for message forwarding in the control plane, for both virtual and
physical ports. OVSDB is the protocol used to manage and configure OVS instances.
• Forwarding and Control Element Separation (ForCES): An IETF effort that standardizes the
interface between the control plane and the data plane for IP routers.
• Protocol Oblivious Forwarding (POF): This is advertised as an enhancement to OpenFlow that
simplifies the logic in the data plane to a very generic forwarding element that need not understand the
protocol data unit (PDU) format in terms of fields at various protocol levels. Rather, matching is done
by means of (offset, length) blocks within a packet. Intelligence about packet format resides at the
control plane level.
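The protocol-oblivious idea can be sketched directly: match on raw (offset, length) byte blocks within the packet instead of named protocol fields. The rule format here is invented for illustration and is not the actual POF encoding.

```python
# Sketch of protocol-oblivious matching: the forwarding element knows
# nothing about protocol fields; it only compares byte blocks.

def pof_match(rules, packet_bytes):
    """Each rule is (offset, length, expected_bytes); all must match."""
    return all(packet_bytes[off:off + ln] == want
               for off, ln, want in rules)

# Example: "EtherType == 0x0800 (IPv4)" expressed obliviously as the
# two bytes at offset 12 of an Ethernet frame.
rule = [(12, 2, b"\x08\x00")]
frame = b"\xaa" * 12 + b"\x08\x00" + b"\x00" * 20
```

Note that the meaning "this is the EtherType field" lives only in the controller that wrote the rule; the data plane just compares bytes at offset 12, which is exactly the simplification POF advertises.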

Northbound Interface
The northbound interface enables applications to access control plane functions and services without needing to
know the details of the underlying network switches. The northbound interface is more typically viewed as a
software API rather than a protocol.
Unlike the southbound and eastbound/westbound interfaces, where a number of heterogeneous interfaces have
been defined, there is no widely accepted standard for the northbound interface. The result has been that a number
of unique APIs have been developed for various controllers, complicating the effort to develop SDN applications.
Figure 5.5 shows a simplified example of an architecture with multiple levels of northbound APIs, the levels of
which are described in the list that follows:

• Base controller function APIs: These APIs expose the basic functions of the controller and are used
by developers to create network services.
• Network service APIs: These APIs expose network services to the north.
• Northbound interface application APIs: These APIs expose application-related services that are
built on top of network services.

Routing
As with any network or internet, an SDN network requires a routing function. In general terms, the routing
function comprises a protocol for collecting information about the topology and traffic conditions of the network,
and an algorithm for designing routes through the network.
There are two categories of routing protocols: interior router protocols (IRPs) that operate within an autonomous
system (AS), and exterior router protocols (ERPs) that operate between autonomous systems.
An IRP is concerned with discovering the topology of routers within an AS and then determining the best route
to each destination based on different metrics. Two widely used IRPs are Open Shortest Path First (OSPF)
Protocol and Enhanced Interior Gateway Routing Protocol (EIGRP). An ERP need not collect as much detailed
traffic information. Rather, the primary concern with an ERP is to determine reachability of networks and end
systems outside of the AS. Therefore, the ERP is typically executed only in edge nodes that connect one AS to
another. Border Gateway Protocol (BGP) is commonly used for the ERP.
Traditionally, the routing function is distributed among the routers in a network. Each router is responsible for
building up an image of the topology of the network. For interior routing, each router as well must collect
information about connectivity and delays and then calculate the preferred route for each IP destination address.
The centralized routing application performs two distinct functions: link discovery and topology management.
For link discovery, the routing function needs to be aware of links between data plane switches. Note that in the
case of an internetwork, the links between routers are networks, whereas for Layer 2 switches, such as Ethernet
switches, the links are direct physical links. In addition, link discovery must be performed between a router and
a host system and between a router in the domain of this controller and a router in a neighboring domain.
Discovery is triggered by unknown traffic entering the controller’s network domain either from an attached host
or from a neighboring router.
The topology manager maintains the topology information for the network and calculates routes in the network.
Route calculation involves determining the shortest path between two data plane nodes or between a data plane
node and a host.
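The route calculation performed by the topology manager can be sketched as a standard Dijkstra shortest-path search over the link map built by link discovery. The graph and link costs below are illustrative.

```python
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: {neighbor: cost}}. Returns (cost, [path]) or None."""
    pq = [(0, src, [src])]          # priority queue ordered by path cost
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return None                     # destination unreachable

# Three switches: the direct s1->s3 link costs 4, the two-hop route costs 2.
topo = {"s1": {"s2": 1, "s3": 4}, "s2": {"s3": 1}, "s3": {}}
route = shortest_path(topo, "s1", "s3")   # via s2: cost 2
```

A centralized controller runs this computation once over its global topology view; in traditional distributed routing, every router would have to build the same view and compute routes independently.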

5.2-ITU-T MODEL
The SDN high-level architecture is given in Figure 5.6 as defined in ITU-T Y.3300. The Y.3300 model consists
of three layers, or planes: application, control, and resource. As defined in Y.3300, the application layer is where
SDN applications specify network services or business applications by defining a service-aware behavior of
network resources. The applications interact with the SDN control layer via APIs that form an application-control
interface. The applications make use of an abstracted view of the network resources provided by the SDN control
layer by means of information and data models exposed via the APIs.


The control layer can be viewed as having the following sublayers:


• Application support: Provides an API for SDN applications to access network information and
program application-specific network behavior.
• Orchestration: Provides the automated control and management of network resources and coordination
of requests from the application layer for network resources. Orchestration encompasses physical and
virtual network topologies, network elements, traffic control, and other network-related aspects.
• Abstraction: Interacts with network resources, and provides an abstraction of the network resources,
including network capabilities and characteristics, to support management and orchestration of physical
and virtual network resources. Such abstraction relies upon standard information and data models and is
independent of the underlying transport infrastructure.
The resource layer consists of an interconnected set of data plane forwarding elements (switches).
Collectively, these switches perform the transport and processing of data packets according to decisions
made by the SDN control layer and forwarded to the resource layer via the resource-control interface.
Most of this control is on behalf of applications.
The resource layer can be viewed as having the following sublayers:
• Control support: Supports programmability of resource-layer functions via the resource-control
interface.
• Data transport and processing: Provides data forwarding and data routing functions.

5.3-OPENDAYLIGHT
The OpenDaylight Project is an open source project hosted by the Linux Foundation and includes the involvement
of virtually every major networking organization, including users of SDN technology and vendors of SDN
products.

OpenDaylight Architecture
Figure 5.7 provides a top-level view of the OpenDaylight architecture. It consists of five logical layers.

• Network applications, orchestration, and services: Consists of business and network logic applications
that control and monitor network behavior. These applications use the controller to gather network
intelligence, run algorithms to perform analytics, and then use the controller to orchestrate the new rules,
if any, throughout the network.
• APIs: A set of common interfaces to OpenDaylight controller functions. OpenDaylight supports the Open
Service Gateway Initiative (OSGi) framework (a set of specifications that defines a dynamic component
system for Java) and bidirectional REST for the northbound API. The OSGi framework is used for
applications that run in the same address space as the controller, while the REST (web-based) API is
used for applications that do not run in the same address space as the controller.
• Controller functions and services: SDN control plane functions and services.
• Service abstraction layer (SAL): Provides a uniform view of data plane resources, so that control plane
functions can be implemented independent of the specific southbound interface and protocol.
• Southbound interfaces and protocols: Supports OpenFlow, other standard southbound protocols, and
vendor-specific interfaces.
There are several noteworthy aspects to the OpenDaylight architecture.
• First, OpenDaylight encompasses both control plane and application plane functionality. Thus,
OpenDaylight is more than just an SDN controller implementation. This enables enterprise and
telecommunications network managers to host open source software on their own servers to construct an
SDN configuration. Vendors can use this software to create products with value-added additional
application plane functions and services.

• A second significant aspect of the OpenDaylight design is that it is not tied to OpenFlow or any other
specific southbound interface. This provides greater flexibility in constructing SDN network
configurations. The key element in this design is the SAL, which enables the controller to support multiple
protocols on the southbound interface and provide consistent services for controller functions and for SDN
applications.

OpenDaylight Helium
The most recent release of OpenDaylight is the Helium release, illustrated in Figure 5.9. The controller platform
consists of a growing collection of dynamically pluggable modules, each of which performs one or more SDN-
related functions and services. Five modules are considered base network service functions.

• Topology manager: A service for learning the network layout by subscribing to events of node addition
and removal and their interconnection. Applications requiring network view can use this service.
• Statistics manager: Collects switch-related statistics, including flow statistics, node connector, and
queue occupancy.
• Switch manager: Holds the details of the data plane devices. As a switch is discovered, its attributes
(for example, what switch/router it is, software version, capabilities) are stored in a database by the
switch manager.
• Forwarding rules manager: Installs routes and tracks next-hop information. Works in conjunction with
switch manager and topology manager to register and maintain network flow state. Applications using
this need not have visibility of network device specifics.
• Host tracker: Tracks and maintains information about connected hosts.


5.4-REST
REpresentational State Transfer (REST) is an architectural style used to define APIs, and it has become a
standard way of constructing northbound APIs for SDN controllers. A REST API, or an API that is RESTful
(that is, one that adheres to the constraints of REST), is not a protocol, language, or established standard. It is essentially six
constraints that an API must follow to be RESTful. The objective of these constraints is to maximize the
scalability and independence/interoperability of software interactions, and to provide a simple means of
constructing APIs.

REST Constraints
REST assumes that the concepts of web-based access are used for interaction between the application and the
service that are on either side of the API. REST does not define the specifics of the API but imposes constraints
on the nature of the interaction between application and service. The six REST constraints are as follows:
i. Client-Server
ii. Stateless
iii. Cacheable
iv. Uniform Interface
v. Layered System
vi. Code-On-Demand

i-Client-Server Constraint
This simple constraint dictates that interaction between application and server is in the client-server
request/response style. The principle defined for this constraint is the separation of user interface concerns from
data storage concerns. This separation allows client and server components to evolve independently and supports
the portability of server-side functions to multiple platforms.

ii-Stateless Constraint
The stateless constraint dictates that each request from a client to a server must contain all the information
necessary to understand the request and cannot take advantage of any stored context on the server. Similarly, each
response from the server must contain all the desired information for that request. One consequence is that any
“memory” of a transaction is maintained in a session state kept entirely on the client. Because the server does not
retain any record of the client state, the server design is simpler and more scalable. Another consequence is that if
the client and server reside on different machines, and therefore communicate via a protocol, that protocol need
not be connection oriented.
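As a sketch of the stateless constraint (the request fields here are hypothetical), the server below keeps no session state; the client supplies its position in the result stream with every request:

```python
# Stateless interaction sketch: the server stores nothing between calls.
# The "session state" (the paging offset) lives entirely on the client.

def handle_list_flows(request):
    # All information needed to understand the request is in the request.
    flows = ["flow-%d" % i for i in range(10)]  # stand-in for controller data
    start = request["offset"]
    end = start + request["count"]
    return {"flows": flows[start:end], "next_offset": end}

# The client remembers where it is and sends that context each time.
resp1 = handle_list_flows({"offset": 0, "count": 3})
resp2 = handle_list_flows({"offset": resp1["next_offset"], "count": 3})
print(resp2["flows"])
```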
iii-Cache Constraint
The cache constraint requires that the data within a response to a request be implicitly or explicitly labeled as
cacheable or noncacheable. If a response is cacheable, then a client cache is given the right to reuse that response
data for later, equivalent requests. That is, the client is given permission to remember this data because the data
is not likely to change on the server side. Therefore, subsequent requests for the same data can be handled locally
at the client, reducing communication overhead between client and server, and reducing the server’s processing
burden.
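The cache constraint can be sketched as follows (hypothetical URIs and labels): the client reuses a response marked cacheable, so the second equivalent request never reaches the server:

```python
# Cache constraint sketch: responses labeled cacheable may be reused by the
# client for later, equivalent requests, reducing server load.

server_calls = 0

def server_get(uri):
    global server_calls
    server_calls += 1
    # The server explicitly labels the response as cacheable.
    return {"uri": uri, "body": "topology-v1", "cacheable": True}

cache = {}

def client_get(uri):
    if uri in cache:
        return cache[uri]          # handled locally, no round trip
    resp = server_get(uri)
    if resp["cacheable"]:
        cache[uri] = resp
    return resp

client_get("/stats/topology")
client_get("/stats/topology")      # second call answered from the cache
print(server_calls)
```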

iv-Uniform Interface Constraint
REST emphasizes a uniform interface between components, regardless of the specific client-server application
API implemented using REST. This enables controller services to evolve independently and provides the ability
for an SDN controller provider to use software components from various vendors to implement the controller.

To obtain a uniform interface, REST defines four interface constraints:


• Identification of resources: Individual resources are identified using a resource identifier (for
example, a URI).
• Manipulation of resources through representations: Resources are represented in a format like
JSON, XML, or HTML.
• Self-descriptive messages: Each message has enough information to describe how the message is to
be processed.
• Hypermedia as the engine of the application state: A client needs no prior knowledge of how to
interact with a server, because the API is not fixed but dynamically provided by the server.
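The hypermedia constraint can be sketched as follows (the URIs and fields are invented for illustration): the client starts at a root resource and discovers every subsequent URI from the links in the responses, rather than from prior knowledge of the API:

```python
# Hypermedia-as-the-engine-of-application-state sketch: the client hardcodes
# only the root URI; all other URIs are provided dynamically by the server.

resources = {
    "/": {"links": {"switches": "/switches"}},
    "/switches": {"items": ["/switches/1"], "links": {}},
    "/switches/1": {"dpid": 1, "links": {"stats": "/switches/1/stats"}},
    "/switches/1/stats": {"rx_packets": 42, "links": {}},
}

def get(uri):
    # Stand-in for an HTTP GET against the server.
    return resources[uri]

root = get("/")
first_switch = get(get(root["links"]["switches"])["items"][0])
stats = get(first_switch["links"]["stats"])
print(stats["rx_packets"])
```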
v-Layered System Constraint
The layered system constraint simply means that a given function is organized in layers, with each layer only
having direct interaction with the layers immediately above and below. This is a fairly standard architecture
approach for protocol architectures, OS design, and system services design.

vi-Code-on-Demand Constraint
REST allows client functionality to be extended by downloading and executing code in the form of applets or
scripts. This simplifies clients by reducing the number of features required to be pre-implemented. Allowing
features to be downloaded after deployment improves system extensibility.

Example REST API


We discuss a REST API for the northbound interface of the Ryu SDN network operating system. The particular
API switch manager service function in Ryu is designed to provide access to OpenFlow switches.
Each function that can be performed by the switch manager on behalf of an application is assigned a URI. For
example, consider the function to get a description of all the entries in the group table of a particular switch. The
URI for this function for this switch is as follows:
/stats/groupdesc/<dpid>
where stats (statistics) refers to the set of APIs for retrieving and updating switch statistics and parameters, groupdesc
is the name of the function, and <dpid> (data path ID) is the unique identifier of the switch. To invoke the function
for switch 1, the application issues the following command to the switch manager across the REST API:
GET http://localhost:8080/stats/groupdesc/1
The localhost portion of this command indicates that the application is running on the same server as the Ryu
NOS. If the application were remote, the URI would be a URL that provides remote access via HTTP and the
web. The switch manager responds to this command with a message whose message body includes the dpid, followed
by a sequence of blocks of values, one for each group defined in the switch.
The values are as follows:
By- Mr. Bhanuprasad Vishwakarma pg. 30
M.Sc.IT Sem II Modern Networking Unit-II
• type: All, select, fast failover, or indirect
• group_id: Identifier of an entry in the group table.
• buckets: A structured field consisting of the following subfields:
• weight: Relative weight of bucket (only for select type).
• watch_port: Port whose state affects whether this bucket is live (only required for fast failover groups).
• watch_group: Group whose state affects whether this bucket is live (only required for fast failover
groups).
• actions: A list, possibly null, of actions.
The buckets portion of the message body is repeated, once for each group table entry.
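A short sketch of how an application might parse such a reply. The JSON shape below is a representative example constructed from the field descriptions above, not necessarily the exact Ryu wire format:

```python
import json

# Representative reply: keyed by dpid, with one block of values per group
# table entry (type, group_id, buckets with weight/watch_port/watch_group/
# actions). The concrete values here are invented for illustration.
reply = json.loads("""
{"1": [
  {"type": "ALL",
   "group_id": 7,
   "buckets": [{"weight": 0,
                "watch_port": 4294967295,
                "watch_group": 4294967295,
                "actions": ["OUTPUT:2"]}]}
]}
""")

def group_ids(reply, dpid):
    # One entry per group defined in the switch identified by dpid.
    return [entry["group_id"] for entry in reply[str(dpid)]]

print(group_ids(reply, 1))
```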

5.5-COOPERATION AND COORDINATION AMONG CONTROLLERS
In addition to northbound and southbound interfaces, a typical SDN controller will have an east/westbound
interface that enables communication with other SDN controllers and other networks. This section surveys key
design issues related to the east/westbound interface.

Centralized Versus Distributed Controllers


A key architectural design decision is whether a single centralized controller or a distributed set of controllers
will be used to control the data plane switches. A centralized controller is a single server that manages all the data
plane switches in the network.
In a large enterprise network, the deployment of a single controller to manage all network devices would prove
unwieldy or undesirable. A more likely scenario is that the operator of a large enterprise or carrier network divides
the whole network into a number of nonoverlapping SDN domains, also called SDN islands (Figure 5.10),
managed by distributed controllers. Reasons for using SDN domains include those in the list that follows.

• Reliability: The use of multiple controllers avoids the risk of a single point of failure.
• Privacy: A carrier may choose to implement different privacy policies in different SDN domains. For
example, an SDN domain may be dedicated to a set of customers who implement their own highly
customized privacy policies, requiring that some networking information in this domain should not be
disclosed to an external entity.
• Incremental deployment: A carrier’s network may consist of portions of legacy and nonlegacy
infrastructure. Dividing the network into multiple individually manageable SDN domains allows for
flexible incremental deployment.
Distributed controllers may be collocated in a small area, or widely dispersed, or a combination of the two. Closely
placed controllers offer high throughput and are appropriate for data centers, whereas dispersed controllers
accommodate multilocation networks.
In a distributed architecture, a protocol is needed for communication among the controllers. In principle, a
proprietary protocol could be used for this purpose, although an open or standard protocol would clearly be
preferable for purposes of interoperability.
The functions associated with the east/westbound interface for a distributed architecture include maintaining
either a partitioned or replicated database of network topology and parameters, and monitoring/notification
functions.
High-Availability Clusters
Within a single domain, the controller function can be implemented on a high-availability (HA) cluster.
Typically, there would be two or more nodes that share a single IP address that is used by external systems (both
north and southbound) to access the cluster. An example is the IBM SDN for Virtual Environments product,
which uses two nodes. Each node is considered a peer of the other node in the cluster for data replication and
sharing of the external IP address. When HA is running, the primary node is responsible for answering all traffic
that is sent to the cluster’s external IP address and holds a read/write copy of the configuration data. Meanwhile,
the second node operates as a standby, with a read-only copy of the configuration data, which is kept current with
the primary’s copy. The secondary node monitors the state of the external IP. If the secondary node determines
that the primary node is no longer answering the external IP, it triggers a failover, changing its mode to that of
primary node. It assumes the responsibility for answering the external IP and changes its copy of configuration
data to be read/write. If the old primary reestablishes connectivity, an automatic recovery process is triggered
to convert the old primary to secondary status so that configuration changes that are made during the failover
period are not lost.
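The failover behavior described above can be sketched as follows (class and method names are illustrative, not IBM product code):

```python
# HA-cluster failover sketch: the secondary monitors the external IP and
# promotes itself when the primary stops answering; a returning old primary
# is demoted so changes made during the failover are not lost.

class ClusterNode:
    def __init__(self, role):
        self.role = role  # "primary" or "secondary"
        self.config_mode = "read/write" if role == "primary" else "read-only"

    def on_primary_unreachable(self):
        # Failover: secondary takes over the external IP and upgrades its
        # copy of the configuration data to read/write.
        if self.role == "secondary":
            self.role = "primary"
            self.config_mode = "read/write"

    def on_rejoining_cluster(self):
        # Automatic recovery: a returning primary is demoted to secondary.
        if self.role == "primary":
            self.role = "secondary"
            self.config_mode = "read-only"

standby = ClusterNode("secondary")
standby.on_primary_unreachable()   # heartbeat to primary was lost
print(standby.role, standby.config_mode)
```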

Federated SDN Networks


It is also possible for SDN networks that are owned and managed by different organizations to cooperate using
east/westbound protocols. Figure 5.11 is an example of the potential for inter-SDN controller cooperation.
In this configuration, we have a number of service subscribers to a data center network providing cloud-based
services. Subscribers are connected to the service network through a hierarchy of access, distribution, and core
networks. These intermediate networks may all be operated by the data center network, or they may involve other
organizations. In the latter case, if all the networks implement SDN, they need to share common conventions for
control plane parameters, such as quality of service (QoS), policy information, and routing information.


Border Gateway Protocol


BGP was developed for use in conjunction with internets that use the TCP/IP suite, although the concepts are
applicable to any internet. BGP has become the preferred exterior router protocol (ERP) for the Internet. BGP
enables routers, called gateways in the standard, in different autonomous systems to cooperate in the exchange of
routing information. The protocol operates in terms of messages, which are sent over TCP connections.
Three functional procedures are involved in BGP:
• Neighbor Acquisition
• Neighbor Reachability
• Network Reachability
Two routers are considered to be neighbors if they are attached to the same network or communication link. If
they are attached to the same network, communication between the neighbor routers might require a path through
other routers within the shared network. If the two routers are in different autonomous systems, they may want to
exchange routing information. For this purpose, it is necessary first to perform neighbor acquisition: one router sends an
Open message to another. If the target router accepts the request, it returns a Keepalive message in response.
Once a neighbor relationship is established, the neighbor reachability procedure is used to maintain the
relationship. Each partner needs to be assured that the other partner still exists and is still engaged in the neighbor
relationship. For this purpose, the two routers periodically issue Keepalive messages to each other.
The final procedure specified by BGP is network reachability. Each router maintains a database of the networks
that it can reach and the preferred route for reaching each network. Whenever a change is made to this database,
the router issues an Update message that is broadcast to all other routers for which it has a neighbor relationship.
Because the Update message is broadcast, all BGP routers can build up and maintain their routing information.
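The three procedures can be sketched with a toy simulation (this is not a real BGP implementation; messages are modeled as method calls and the names are invented for illustration):

```python
# Toy BGP sketch: neighbor acquisition (Open answered by Keepalive) and
# network reachability (a database change triggers an Update to neighbors).

class BgpSpeaker:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.reachable = {}   # network -> preferred route
        self.received = []    # log of messages this speaker has received

    def acquire_neighbor(self, peer):
        # Neighbor acquisition: we send Open; the peer accepts with Keepalive.
        peer.received.append(("Open", self.name))
        self.received.append(("Keepalive", peer.name))
        self.neighbors.append(peer)
        peer.neighbors.append(self)

    def announce(self, network, route):
        # Network reachability: whenever the database changes, issue an
        # Update to every router with which a neighbor relationship exists.
        self.reachable[network] = route
        for peer in self.neighbors:
            peer.received.append(("Update", network))
            peer.reachable[network] = route

a = BgpSpeaker("AS1")
b = BgpSpeaker("AS2")
a.acquire_neighbor(b)
a.announce("10.0.0.0/8", "via AS1")
print(b.reachable["10.0.0.0/8"])
```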

Routing and QoS Between Domains


For routing outside a controller’s domain, the controller establishes a BGP connection with each neighboring
router. Figure 5.12 illustrates a configuration with two SDN domains that are linked only through a non-SDN AS.


Between each SDN domain and the AS, BGP is used to exchange information, such as the following:
• Reachability update: Exchange of reachability information facilitates inter-SDN domain routing.
This allows a single flow to traverse multiple SDNs and each controller can select the most
appropriate path in the network.
• Flow setup, tear-down, and update requests: Controllers coordinate flow setup requests, which
contain information such as path requirements, QoS, and so on, across multiple SDN domains.
• Capability Update: Controllers exchange information on network-related capabilities such as
bandwidth, QoS and so on, in addition to system and software capabilities available inside the
domain.

Using BGP for QoS Management


A common practice for inter-AS interconnection is a best-effort interconnection only. That is, traffic forwarding
between autonomous systems is without traffic class differentiation and without any forwarding guarantee. It is
common for network providers to reset any IP packet traffic class markings to zero, the best-effort marking, at
the AS ingress router, which eliminates any traffic differentiation. There is no standardized set of classes, no
standardized marking (class encoding), and no standardized forwarding behavior, that cross-domain traffic could
rely on. However, RFC 4594 (Configuration Guidelines for DiffServ Service Classes, August 2006) provides a set
of “best practices” related to these parameters. QoS policy decisions are made by network providers independently
and in an uncoordinated fashion. This general statement does not cover existing individual agreements, which do
offer quality-based interconnection with strict QoS guarantees. However, such service level agreement (SLA)-
based agreements are of bilateral or multilateral nature and do not offer a means for a general “better than best
effort” interconnection.
IETF is currently at work on a standardized scheme for QoS marking using BGP (BGP Extended Community for
QoS Marking, draft-knoll-idr-qos-attribute-12, July 10, 2015). Meanwhile, SDN providers have implemented
their own capabilities using the extensible nature of BGP. In either case, the interaction between SDN controllers
in different domains using BGP would involve the steps illustrated in Figure 5.13 and described in the list that
follows:

1. The SDN controller must be configured with BGP capability and with information about the location
of neighboring BGP entities.
2. BGP is triggered by a start or activation event within the controller.
3. The BGP entity in the controller attempts to establish a TCP connection with each neighboring BGP
entity.
4. Once a TCP connection is established, the controller’s BGP entity exchanges Open messages with the
neighbor. Capability information is exchanged using the Open messages.
5. The exchange completes with the establishment of a BGP connection.
6. Update messages are used to exchange NLRI (network layer reachability information), indicating
what networks are reachable via this entity. Reachability information is used in the selection of the
most appropriate data path between SDN controllers. Information obtained through the NLRI parameter
is used to update the controller’s Routing Information Base (RIB).
7. The Update message can also be used to exchange QoS information, such as available capacity.
8. When more than one path is available, route selection is made based on the BGP decision process. Once the
path is established, packets can traverse successfully between the two SDN domains.
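Step 8 can be sketched as follows. This is a simplification: the real BGP decision process weighs many attributes; here only AS-path length is used, with the advertised available capacity from step 7 as a tiebreaker:

```python
# Simplified route selection sketch: prefer the shortest AS path; among
# equal-length paths, prefer the one advertising more available capacity.
# Path attributes and values below are invented for illustration.

paths = [
    {"next_hop": "AS-B", "as_path_len": 3, "capacity_mbps": 400},
    {"next_hop": "AS-C", "as_path_len": 2, "capacity_mbps": 100},
    {"next_hop": "AS-D", "as_path_len": 2, "capacity_mbps": 900},
]

def select_path(paths):
    # Sort key: shorter AS path wins; higher capacity breaks ties.
    return min(paths, key=lambda p: (p["as_path_len"], -p["capacity_mbps"]))

print(select_path(paths)["next_hop"])
```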

IETF SDNi
IETF has developed a draft specification that defines common requirements to coordinate flow setup and
exchange reachability information across multiple domains, referred to as SDNi. The SDNi specification does
not define an east/westbound SDN protocol but rather provides some of the basic principles to be used in
developing such a protocol.
SDNi functionality, as defined in the document, includes the following:
• Coordinate flow setup originated by applications, containing information such as path
requirement, QoS, and service level agreements across multiple SDN domains.
• Exchange reachability information to facilitate inter-SDN routing. This will allow a single flow
to traverse multiple SDNs and have each controller select the most appropriate path when multiple such
paths are available.
The message types for SDNi tentatively include the following:

• Reachability update
• Flow setup/teardown/update request (including application capability requirement such as QoS,
data rate, latency, and so on)
• Capability update (including network-related capabilities, such as data rate and QoS, and system
and software capabilities available inside the domain).

OpenDaylight SDNi
Included in the OpenDaylight architecture is an SDNi capability for connecting multiple OpenDaylight federated
controllers in a network and sharing topology information among them. This capability appears to be compatible
with the IETF specification for an SDNi function. The SDNi application deployable on an OpenDaylight
controller consists of three components, as illustrated in Figure 5.14 and described in the list that follows.

• SDNi aggregator: Northbound SDNi plug-in acts as an aggregator for collecting network
information such as topology, statistics, and host identifiers. This plug-in can evolve to meet the needs
for network data requested to be shared across federated SDN controllers.
• SDNi REST API: SDNi REST APIs fetch the aggregated information from the northbound
plug-in (SDNi aggregator).
• SDNi wrapper: The SDNi BGP wrapper is responsible for sharing and collecting information
to/from federated controllers.


Figure 5.15 shows the interrelationship of the components, with a more detailed look at the SDNi wrapper.


6. SDN Application Plane


6.1 SDN APPLICATION PLANE ARCHITECTURE
The application plane contains applications and services that define, monitor, and control network resources and
behavior. These applications interact with the SDN control plane via application-control interfaces, through which
the SDN control layer automatically customizes the behavior and the properties of network resources. The programming
of an SDN application makes use of the abstracted view of network resources provided by the SDN control layer
by means of information and data models exposed via the application-control interface.
An overview of application plane functionality is depicted in Figure 6.1.

Northbound Interface
The northbound interface enables applications to access control plane functions and services without needing to
know the details of the underlying network switches. Typically, the northbound interface provides an abstract
view of network resources controlled by the software in the SDN control plane.
Figure 6.1 indicates that the northbound interface can be a local or remote interface. For a local interface, the
SDN applications are running on the same server as the control plane software (controller network operating
system). Alternatively, the applications could be run on remote systems and the northbound interface is a protocol
or application programming interface (API) that connects the applications to the controller network operating
system (NOS) running on a central server.
Network Services Abstraction Layer
This layer could provide an abstract view of network resources that hides the details of the underlying data
plane devices.
This layer could provide a generalized view of control plane functionality, so that applications could be written
that would operate across a range of controller network operating systems.

This functionality is similar to that of a hypervisor or virtual machine monitor that decouples applications from
the underlying OS and underlying hardware.
This layer could provide a network virtualization capability that allows different views of the underlying data
plane infrastructure.
Network Applications
There are many network applications that could be implemented for an SDN. Different published surveys of SDN
have come up with different lists and even different general categories of SDN-based network applications. Figure
6.1 includes six categories that encompass the majority of SDN applications.
User Interface
The user interface enables a user to configure parameters in SDN applications and to interact with applications
that support user interaction. Again, there are two possible interfaces. A user that is collocated with the SDN
application server (which may or may not include the control plane) can use the server’s keyboard/display. More
typically, the user would log on to the application server over a network or communications facility.

6.2-NETWORK SERVICES ABSTRACTION LAYER


Abstraction refers to the amount of detail about lower levels of the model that is visible to higher levels. More
abstraction means less detail; less abstraction means more detail.
An abstraction layer is a mechanism that translates a high-level request into the low-level commands required
to perform the request. An API is one such mechanism. It shields the implementation details of a lower level of
abstraction from software at a higher level. A network abstraction represents the basic properties or characteristics
of network entities (such as switches, links, ports, and flows) in such a way that network programs can focus on
the desired functionality without having to program the detailed actions.
Abstractions in SDN
SDN can be defined by three fundamental abstractions: forwarding, distribution, and specification, as illustrated
in Figure 6.2 and described further in the sections that follow.
Forwarding Abstraction
The forwarding abstraction allows a control program to specify data plane forwarding behavior while hiding
details of the underlying switching hardware. This abstraction supports the data plane forwarding function. By
abstracting away from the forwarding hardware, it provides flexibility and vendor neutrality. The OpenFlow API
is an example of a forwarding abstraction.

Distribution Abstraction
This abstraction arises in the context of distributed controllers. A cooperating set of distributed controllers
maintains a state description of the network and routes through the networks. The distributed state of the entire
network may involve partitioned data sets, with controller instances exchanging routing information, or a
replicated data set, so that the controllers must cooperate to maintain a consistent view of the global network.
This abstraction aims at hiding complex distributed mechanisms (used today in many networks) and
separating state management from protocol design and implementation. It provides a single
coherent global view of the network through an annotated network graph, accessible for control via an
API. An implementation of such an abstraction is an NOS, such as OpenDaylight or Ryu.


FIGURE 6.2 SDN Architecture and Abstractions

Specification Abstraction
The distribution abstraction provides a global view of the network as if there is a single central controller, even if
multiple cooperating controllers are used. The specification abstraction then provides an abstract view of the
global network. This view provides just enough detail for the application to specify goals, such as routing or
security policy, without providing the information needed to implement the goals. The presentation by Shenker
summarizes these abstractions as follows:

• Forwarding interface: An abstract forwarding model that shields higher layers from
forwarding hardware.
• Distribution interface: A global network view that shields higher layers from state
dissemination/collection.
• Specification interface: An abstract network view that shields application program from details of
physical network.

Frenetic
An example of a network services abstraction layer is the programming language Frenetic. Frenetic enables
network operators to program the network as a whole instead of manually configuring individual network
elements. Frenetic was designed to solve challenges with the use of OpenFlow-based models by working with an
abstraction at the network level as opposed to OpenFlow, which directly goes down to the network element level.
Frenetic includes an embedded query language that provides effective abstractions for reading network state. This
language is similar to SQL and includes segments for selecting, filtering, splitting, merging and aggregating the
streams of packets. Another special feature of this language is that it enables the queries to be composed with
forwarding policies. A compiler produces the control messages needed to query and tabulate the counters on
switches.
Frenetic consists of two levels of abstraction, as illustrated in Figure 6.4. The upper level, which is the Frenetic
source-level API, provides a set of operators for manipulating streams of network traffic. The query language
provides means for reading the state of the network, merging different queries, and expressing high-level
predicates for classifying, filtering, transforming, and aggregating the packet streams traversing the network. The
lower level of abstraction is provided by a run-time system that operates in the SDN controller. It translates high-
level policies and queries into low-level flow rules and then issues the needed OpenFlow commands to install
these rules on the switches.
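The flavor of such query composition can be sketched in Python (this is illustrative only, not actual Frenetic syntax): a query filters a packet stream, splits it into groups, and aggregates each group, with the pieces supplied as composable building blocks:

```python
# Frenetic-flavored query sketch: selecting, filtering, splitting, and
# aggregating a stream of packets. Field names and values are invented.

packets = [
    {"srcip": "10.0.0.1", "dstport": 80, "bytes": 1500},
    {"srcip": "10.0.0.2", "dstport": 22, "bytes": 600},
    {"srcip": "10.0.0.1", "dstport": 80, "bytes": 500},
]

def query(stream, where, group_by, aggregate):
    # Filter the stream, split it into groups, then aggregate each group.
    groups = {}
    for pkt in stream:
        if where(pkt):
            groups.setdefault(group_by(pkt), []).append(pkt)
    return {key: aggregate(pkts) for key, pkts in groups.items()}

# "Bytes of web traffic per source address," composed from small predicates.
result = query(packets,
               where=lambda p: p["dstport"] == 80,
               group_by=lambda p: p["srcip"],
               aggregate=lambda pkts: sum(p["bytes"] for p in pkts))
print(result)
```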

6.3-TRAFFIC ENGINEERING
Traffic engineering is a method for dynamically analyzing, regulating, and predicting the behavior of data
flowing in networks with the aim of performance optimization to meet service level agreements (SLAs). Traffic
engineering involves establishing routing and forwarding policies based on QoS requirements. With SDN, the
task of traffic engineering should be considerably simplified compared with a non-SDN network. SDN offers a
uniform global view of heterogeneous equipment and powerful tools for configuring and managing network
switches.

This is an area of great activity in the development of SDN applications. The SDN survey paper by Kreutz,
published in Proceedings of the IEEE, lists the following traffic engineering functions that have been implemented as SDN
applications:
• On-demand virtual private networks
• Load balancing
• Energy-aware routing
• Quality of service (QoS) for broadband access networks
• Scheduling/optimization
• Traffic engineering with minimal overhead
• Dynamic QoS routing for multimedia apps
• Fast recovery through fast-failover groups
• QoS policy management framework
• QoS enforcement
• QoS over heterogeneous networks
• Multiple packet schedulers
• Queue management for QoS enforcement
• Divide and spread forwarding tables

PolicyCop
An instructive example of a traffic engineering SDN application is PolicyCop, which is an automated QoS policy
enforcement framework. It leverages the programmability offered by SDN and OpenFlow for
• Dynamic traffic steering
• Flexible flow-level control
• Dynamic traffic classes
• Custom flow aggregation levels
Key features of PolicyCop are that it monitors the network to detect policy violations (based on a QoS SLA)
and reconfigures the network to re-enforce the violated policy.

As shown in Figure 6.5, PolicyCop consists of eleven software modules and two databases, installed in both the
application plane and the control plane. PolicyCop uses the control plane of SDNs to monitor the compliance
with QoS policies and can automatically adjust the control plane rules and flow tables in the data plane based on
the dynamic network traffic statistics.
In the control plane, PolicyCop relies on four modules and a database for storing control rules, described as
follows:
• Admission Control: Accepts or rejects requests from the resource provisioning module for reserving
network resources, such as queues, flow-table entries, and capacity.
• Routing: Determines path availability based on the control rules in the rule database.
• Device Tracker: Tracks the up/down status of network switches and their ports.
• Statistics Collection: Uses a mix of passive and active monitoring techniques to measure different
network metrics.
• Rule Database: The application plane translates high-level network-wide policies to control rules
and stores them in the rule database.

A RESTful northbound interface connects these control plane modules to the application plane modules, which
are organized into two components: a policy validator that monitors the network to detect policy violations, and
a policy enforcer that adapts control plane rules based on network conditions and high-level policies.
The modules are as follows:
• Traffic Monitor: Collects the active policies from the policy database, and determines the appropriate
monitoring interval, network segments, and metrics to be monitored.
• Policy Checker: Checks for policy violations, using input from the policy database and the Traffic
Monitor.
• Event Handler: Examines violation events and, depending on event type, either automatically invokes
the policy enforcer or sends an action request to the network manager.
• Topology Manager: Maintains a global view of the network, based on input from the device tracker.
• Resource Manager: Keeps track of currently allocated resources using admission control and statistics
collection.
• Policy Adaptation: Consists of a set of actions, one for each type of policy violation.
• Resource Provisioning: This module either allocates more resources or releases existing ones or both
based on the violation event.
Figure 6.6 shows the process workflow in PolicyCop.
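The policy-checking portion of this workflow can be sketched as follows (the traffic classes, thresholds, and action names are invented for illustration, not taken from PolicyCop itself):

```python
# Sketch of the policy checker / event handler pattern: compare measured
# metrics against SLA-derived policies and hand violations to the enforcer.

policies = {
    "video": {"max_latency_ms": 50},
    "bulk": {"max_latency_ms": 500},
}
measurements = {
    "video": {"latency_ms": 80},   # exceeds its SLA bound
    "bulk": {"latency_ms": 120},   # within its SLA bound
}

def check_policies(policies, measurements):
    # Policy Checker: detect violations using monitored metrics.
    violations = []
    for traffic_class, policy in policies.items():
        if measurements[traffic_class]["latency_ms"] > policy["max_latency_ms"]:
            violations.append(traffic_class)
    return violations

def handle_events(violations):
    # Event Handler: automatically invoke the policy enforcer per violation.
    return {v: "invoke-policy-enforcer" for v in violations}

print(handle_events(check_policies(policies, measurements)))
```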

6.4-MEASUREMENT AND MONITORING


The area of measurement and monitoring applications can roughly be divided into two categories: applications
that provide new functionality for other networking services, and applications that add value to OpenFlow-based
SDNs. An example of the first category is in the area of broadband home connections. If the connection is to an
SDN-based network, new functions can be added to the measurement of home network traffic and demand,
allowing the system to react to changing conditions. The second category typically involves using different kinds
of sampling and estimation techniques to reduce the burden of the control plane in the collection of data plane
statistics.

6.5-SECURITY
Applications in this area have one of two goals:
• Address security concerns related to the use of SDN: SDN involves a three-layer architecture
(application, control, data) and new approaches to distributed control and encapsulating data. All of this
introduces the potential for new vectors for attack. Threats can occur at any of the three layers or in the
communication between layers. SDN applications are needed to provide for the secure use of SDN
itself.
• Use the functionality of SDN to improve network security: Although SDN presents new security
challenges for network designers and managers, it also provides a platform for implementing consistent,
centrally managed security policies and mechanisms for the network. SDN allows the development of
SDN security controllers and SDN security applications that can provision and orchestrate security
services and mechanisms.
OpenDaylight DDoS Application
Defense4All offers carriers and cloud providers distributed denial of service (DDoS) detection and mitigation
as a native network service. Defense4All uses a common technique for defending against DDoS attacks, which
consists of the following elements:
• Collection of traffic statistics and learning of statistics behavior of protected objects during
peacetime. The normal traffic baselines of the protected objects are built from these collected
statistics.
• Detection of DDoS attack patterns as traffic anomalies deviating from normal baselines.
• Diversion of suspicious traffic from its normal path to attack mitigation systems (AMSs) for traffic
scrubbing, selective source blockage, and so on. Clean traffic exiting out of scrubbing centers is re-
injected back into the packet’s original destination.
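The peacetime-baseline element of this technique can be sketched as follows. This is an illustrative simplification: the use of a mean/standard-deviation baseline and a three-sigma threshold are assumptions for the example, not Defense4All's actual statistics.

```python
# Minimal sketch of baseline learning and anomaly detection as described
# above.  The three-sigma threshold and the mean/stddev statistics are
# illustrative assumptions, not Defense4All's actual algorithm.
from statistics import mean, pstdev

def build_baseline(peacetime_rates):
    """Learn normal traffic behavior for a protected object in peacetime."""
    return mean(peacetime_rates), pstdev(peacetime_rates)

def is_anomalous(current_rate, baseline, k=3.0):
    """Flag traffic deviating more than k standard deviations above normal."""
    mu, sigma = baseline
    return current_rate > mu + k * max(sigma, 1e-9)
```

Traffic that tests anomalous against the learned baseline is what gets diverted to the attack mitigation systems for scrubbing.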
Figure 6.7 shows the overall context of the Defense4All application. The underlying SDN network consists of a
number of data plane switches that support traffic among client and server devices. Defense4All operates as an
application that interacts with the controller over an OpenDaylight controller (ODC) northbound API.
To mitigate a detected attack, Defense4All performs the following procedure:
1. It validates that the AMS device is alive and selects a live connection to it. Currently, Defense4All is
configured to work with Radware’s AMS, known as DefensePro.
2. It configures the AMS with a security policy and normal rates of the attacked traffic. This provides the
AMS with the information needed to enforce a mitigation policy until traffic returns to normal rates.
3. It starts monitoring and logging syslogs arriving from the AMS for the subject traffic. As long as
Defense4All continues receiving syslog attack notifications from the AMS regarding this attack,
Defense4All continues to divert traffic to the AMS, even if the flow counters for this protected object
(PO) do not indicate any more attacks.
4. It maps the selected physical AMS connection to the relevant PO link. This typically involves changing
link definitions on a virtual network, using OpenFlow.
5. It installs higher-priority flow table entries that redirect the attack traffic flow to the AMS and reinject
traffic from the AMS back into the normal traffic flow route. When Defense4All decides that the
attack is over (no attack indication from either flow table counters or from the AMS), it reverts the
previous actions: it stops monitoring for syslogs about the subject traffic, it removes the traffic diversion
flow table entries, and it removes the security configuration from the AMS.
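The five-step procedure can be summarized in code. This is a hypothetical sketch: the `ams` and `controller` interfaces are invented for illustration and do not correspond to Defense4All's real classes.

```python
# Hypothetical sketch of Defense4All's mitigation workflow.  The ams and
# controller objects and their methods are illustrative assumptions.

def mitigate_attack(ams, controller, protected_object, attack):
    # 1. Validate that the AMS is alive and select a live connection to it.
    conn = ams.select_live_connection()
    # 2. Configure the AMS with the security policy and normal traffic rates.
    ams.configure(policy=attack.policy, normal_rates=protected_object.baseline)
    # 3. Monitor syslog attack notifications arriving from the AMS.
    ams.start_syslog_monitoring(attack)
    # 4. Map the selected AMS connection to the protected object's link.
    controller.map_link(conn, protected_object.link)
    # 5. Install higher-priority flow entries diverting traffic via the AMS.
    controller.install_diversion_flows(protected_object, conn)
```

When the attack ends, each of these steps would be reverted in the manner the text describes.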
Figure 6.8 shows the principal software components of Defense4All.
• Web (REST) Server: Interface to network manager.
• Framework Main: Mechanism to start, stop, or reset the framework.
• Framework REST Service: Responds to user requests received through the web (REST) server.
• Framework Management Point: Coordinates and invokes control and configuration commands.
• Defense4All Application: Described subsequently.
• Common Classes and Utilities: A library of convenient classes and utilities from which any framework
or SDN application module can benefit.
• Repository Services: One of the key elements in the framework philosophy is decoupling the compute
state from the compute logic.
• Logging and Flight Recorder Services: The logging service logs error, warning, trace, or
informational messages. These logs are mainly for Defense4All developers. The Flight Recorder records
events and metrics during run time from Java applications.
• Health Tracker: Holds aggregated run-time indicators of the operational health of Defense4All and
acts in response to severe functional or performance deteriorations.
• Cluster Manager: Responsible for managing coordination with other Defense4All entities operating in
a cluster mode.
The Defense4All Application module consists of the following elements.
• DF App Root: The root module of the application.
• DF Rest Service: Responds to Defense4All application REST requests.
• DF Management Point: The point to drive control and configuration commands. DFMgmtPoint in turn
invokes methods against other relevant modules in the right order.
• ODL Reps: A pluggable module set for different versions of the ODC. It comprises two functions in two
submodules: statistics collection for, and traffic diversion of, relevant traffic.
• SDN Stats Collector: Responsible for setting “counters” for every protected network (PN) at specified
network locations (physical or logical).
• SDN Based Detection Manager: A container for pluggable SDN-based detectors. It feeds stat reports
received from the SDNStatsCollector to plugged-in SDN based detectors.
• Attack Decision Point: Responsible for maintaining attack lifecycle, from declaring a new attack,
to terminating diversion when an attack is considered over.
• Mitigation Manager: A container for pluggable mitigation drivers. It maintains the lifecycle of each
mitigation being executed by an AMS. Each mitigation driver is responsible for driving attack mitigations
using AMSs in their sphere of management.
• AMS Based Detector: This module is responsible for monitoring/querying attack mitigation by AMSs.
• AMS Rep: Controls the interface to AMSs.
6.6-DATA CENTER NETWORKING
Cloud computing, big data, large enterprise networks, and even in many cases, smaller enterprise networks,
depend strongly on highly scalable and efficient data centers. Following are key requirements for data centers:
high and flexible cross-section bandwidth and low latency, QoS based on the application requirements, high
levels of resilience, intelligent resource utilization to reduce energy consumption and improve overall
efficiency, and agility in provisioning network resources.
Big Data over SDN
A paper by Wang, et al., in the Proceedings of HotSDN’12 [WANG12], reports on an approach to use SDN to
optimize data center networking for big data applications. The approach leverages the capabilities of SDN to
provide application-aware networking. It also exploits characteristics of structured big data applications as well
as recent trends in dynamically reconfigurable optical circuits. With respect to structured big data applications,
many of these applications process data according to well-defined computation patterns, and also have a
centralized management structure that makes it possible to leverage application-level information to optimize the
network.
Compared to electronic switches, optical switches have the advantages of greater data rates with reduced cabling
complexity and energy consumption. A number of projects have demonstrated how to collect network-level
traffic data and intelligently allocate optical circuits between endpoints (for example, top-of-rack switches) to
improve application performance. However, circuit utilization and application performance can be inadequate
unless there is a true application-level view of traffic demands and dependencies.
Figure 6.9 shows a simple hybrid electrical and optical data center network, in which OpenFlow-enabled
top-of-rack (ToR) switches are connected to two aggregation switches: an Ethernet switch and an optical circuit
switch (OCS). All the switches are controlled by an SDN controller that manages physical connectivity among
ToR switches over optical circuits by configuring the optical switch. It can also manage the forwarding at ToR
switches using OpenFlow rules.
The SDN controller is also connected to the Hadoop scheduler, which forms queues of jobs to be scheduled, and to
the HBase Master controller of a distributed, nonrelational database holding data for the big data applications. In addition, the
SDN controller connects to a Mesos cluster manager. Mesos is an open source software package that provides
scheduling and resource allocation services across distributed applications.
Cloud Networking over SDN
Cloud Network as a Service (CloudNaaS) is a cloud networking system that exploits OpenFlow SDN capabilities
to provide a greater degree of control over cloud network functions by the cloud customer. CloudNaaS enables
users to deploy applications that include a number of network functions, such as virtual network isolation, custom
addressing, service differentiation, and flexible interposition of various middleboxes.
Figure 6.10 illustrates the principal sequence of events in the CloudNaaS operation, as described in the list that
follows.
• A cloud customer uses a simple policy language to specify network services required by the customer
applications. These policy statements are issued to a cloud controller server operated by the cloud service
provider.
• The cloud controller maps the network policy into a communication matrix that defines desired
communication patterns and network services.
• The logical communication matrix is translated into network-level directives for data plane forwarding
elements.
• The network-level directives are installed into the network devices via OpenFlow.
The abstract network model seen by the customer consists of VMs and virtual network segments that connect
VMs together. Policy language constructs identify the set of VMs that comprise an application and define
various functions and capabilities attached to virtual network segments. The main constructs are as follows:
• address: Specify a customer-visible custom address for a VM.
• group: Create a logical group of one or more VMs. Grouping VMs with similar functions makes it
possible for modifications to apply across the entire group without requiring changing the service
attached to individual VMs.
• middlebox: Name and initialize a new virtual middlebox by specifying its type and a configuration file.
The list of available middleboxes and their configuration syntax is supplied by the cloud provider.
Examples include intrusion detection and audit compliance systems.
• networkservice: Specify capabilities to attach to a virtual network segment, such as Layer 2 broadcast
domain, link QoS, and list of middleboxes that must be traversed.
• virtualnet: Virtual network segments connect groups of VMs and are associated with network services.
A virtual network can span one or two groups. With a single group, the service applies to traffic between
all pairs of VMs in the group.
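These constructs might compose as follows. This is an illustrative sketch in Python rather than CloudNaaS's actual policy language; the group names, service labels, and the expansion into a communication matrix are invented for the example.

```python
# Illustrative sketch only: not CloudNaaS's actual policy syntax.
# Shows how a virtualnet over VM groups could expand into a communication
# matrix of (src VM, dst VM, services) entries.

def virtualnet(groups, service):
    """Expand a virtual network segment into (src, dst, services) entries."""
    if len(groups) == 1:
        # Single group: the service applies between all pairs of VMs in it.
        vms = groups[0]
        pairs = [(a, b) for a in vms for b in vms if a != b]
    else:
        # Two groups: the service applies to traffic from one group to the other.
        pairs = [(a, b) for a in groups[0] for b in groups[1]]
    return [(src, dst, service) for src, dst in pairs]

web = ["web1", "web2"]   # group: web-tier VMs
db = ["db1"]             # group: database VMs
matrix = virtualnet([web, db], service=["ids_middlebox", "qos:gold"])
```

A communication matrix of this shape is what the cloud controller then translates into network-level directives for the data plane.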
Figure 6.11 provides an overview of the architecture of CloudNaaS. Its two main components are a cloud
controller and a network controller. The cloud controller provides a base Infrastructure as a Service (IaaS) service
for managing VM instances.
FIGURE 6.11 CloudNaaS Architecture
IaaS is a cloud service that provides the customer access to the underlying cloud infrastructure. IaaS offers the
customer virtual machines, storage, networks, and other fundamental computing resources so that the customer
can deploy and run arbitrary software, which may include operating systems and applications.
The network controller uses the communication matrix to configure data plane physical and virtual switches. It
generates virtual networks between VMs and provides VM placement directives to the cloud controller. It
monitors the traffic and performance on the cloud data plane switches and makes changes to the network state as
needed to optimize use of resources to meet tenant requirements.
6.7-MOBILITY AND WIRELESS
In addition to all the traditional performance, security, and reliability requirements of wired networks, wireless
networks impose a broad range of new requirements and challenges. Mobile users are continuously generating
demands for new services with high quality and efficient content delivery independent of location. Network
providers must deal with problems related to managing the available spectrum, implementing handover
mechanisms, performing efficient load balancing, responding to QoS and QoE requirements, and maintaining
security.
SDN can provide much-needed tools for the mobile network provider, and in recent years a number of SDN-based
applications for wireless network providers have been designed. The following are SDN application areas, among
others: seamless mobility through efficient handovers, creation of on-demand virtual access points, load
balancing, downlink scheduling, dynamic spectrum usage, enhanced intercell interference coordination, per client
/ base station resource block allocations, simplified administration, easy management of heterogeneous network
technologies, interoperability between different networks, shared wireless infrastructures, and management of
QoS and access control policies.
SDN support for wireless network providers is an area of intense activity, and a wide range of application
offerings is likely to continue to appear.
6.8-INFORMATION-CENTRIC NETWORKING
Information-centric networking (ICN), also known as content-centric networking, is aimed at providing
native network primitives for efficient information retrieval by directly naming and operating on information
objects.
With ICN, a distinction exists between location and identity, thus decoupling information from its sources. The
essence of this approach is that information sources can place, and information users can find, information
anywhere in the network, because the information is named, addressed, and matched independently of its location.
In ICN, instead of specifying a source-destination host pair for communication, a piece of information itself is
named. In ICN, after a request is sent, the network is responsible for locating the best source that can provide the
desired information. Routing of information requests thus seeks to find the best source for the information, based
on a location-independent name.
Deploying ICN on traditional networks is challenging, because existing routing equipment would need to be
updated or replaced with ICN-enabled routing devices.
A number of projects have proposed using SDN capabilities to implement ICNs. There is no consensus approach
to achieving this coupling of SDN and ICN. Suggested approaches include substantial enhancements or
modifications to the OpenFlow protocol, developing a mapping of names into IP addresses using a hash function,
using the IP option header as a name field, and using an abstraction layer between an OpenFlow (OF) switch and
an ICN router, so that the layer, OF switch, and ICN router function as a single programmable ICN router.
CCNx
CCNx is being developed by the Palo Alto Research Center (PARC) as an open source project. Communication
in CCN is via two packet types: Interest packets and Content packets. A consumer requests content by sending
an Interest packet. Any CCN node that receives the Interest and has named data that satisfies the Interest responds
with a Content packet (also known as a Content Object). Content satisfies an Interest if the name in the Interest packet
matches the name in the Content Object packet. If a CCN node receives an Interest, and does not already have a
copy of the requested Content, it may forward the Interest toward a source for the content. The CCN node has
forwarding tables that determine which direction to send the Interest. A provider receiving an Interest for which
it has matching named content replies with a Content packet. Any intermediate node can optionally choose to
cache the Content Object, and it can respond with a cached copy of the Content Object the next time it receives
an Interest packet with the same name.
The basic operation of a CCN node is similar to that of an IP node. CCN nodes receive and send packets over faces. A
face is a connection point to an application, or another CCN node, or some other kind of channel. A face may
have attributes that indicate expected latency and bandwidth, broadcast or multicast capability, or other useful
features. A CCN node has three main data structures:
• Content Store: Holds a table of previously seen (and optionally cached) Content packets.
• Forwarding Information Base (FIB): Used to forward Interest packets toward potential data sources.
• Pending Interest Table (PIT): Used to keep track of Interests forwarded upstream by that CCN node
toward the content source so that Content packets later received can be sent back to their requestors.
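The Interest/Content handling across these three data structures can be sketched as follows. This is a simplified illustration: the method names are assumptions, and the lookups use exact name matching, whereas a real CCN node uses longest-prefix matching on hierarchical names.

```python
# Simplified CCN node sketch.  Method names are assumptions, and lookups use
# exact name matching; real CCN uses longest-prefix matching on names.

class CCNNode:
    def __init__(self, fib):
        self.content_store = {}   # name -> cached Content packet
        self.fib = fib            # name -> outgoing face toward a source
        self.pit = {}             # name -> faces awaiting this Content

    def receive_interest(self, name, in_face):
        if name in self.content_store:
            # Content Store hit: answer from cache immediately.
            return ("content", self.content_store[name], in_face)
        # Record the pending Interest, then forward it via the FIB.
        self.pit.setdefault(name, []).append(in_face)
        return ("forward", self.fib.get(name), None)

    def receive_content(self, name, data):
        self.content_store[name] = data    # optional in-network caching
        return self.pit.pop(name, [])      # requesters to send Content back to
```

Note how the PIT lets a Content packet retrace the Interest's path, and how caching in the Content Store lets the node satisfy later Interests for the same name locally.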
ICN relies substantially on in-network caching—that is, to cache content on the path from content providers to
requesters. This on-path caching achieves good overall performance but is not optimal as content may be
replicated on routers, thus reducing the total volume of content that can be cached. To overcome this limitation,
off-path caching can be used, which allocates content to well-defined off-path caches within the network and
deflects traffic off the optimal path toward these caches, which are spread across the network. Off-path caching
improves the global hit ratio by efficiently utilizing the network-wide available caching capacity and reduces
bandwidth usage on egress links.
Use of an Abstraction Layer
The central design issue with using an SDN switch (in particular an OF switch) to function as an ICN router is
that the OF switch forwards on the basis of fields in the IP packet, especially the destination IP address, and an
ICN router forwards on the basis of a content name. In essence, the proposed approach hashes the name into
fields that an OF switch can process.
Figure 6.12 shows the overall architecture of the approach. To link a CCNx node software module with an OF
switch, an abstraction layer, called the wrapper, is used. The wrapper pairs a switch interface to a CCNx face,
decodes and hashes content names in CCN messages into fields that an OF switch can process (for example, IP
addresses, port numbers). The forwarding tables in the OF switch are set to forward based on the contents of the
hashed fields. The switch does not “know” that the contents of these fields are no longer legitimate IP addresses,
TCP port numbers, and so forth. It forwards as always, based on the values found in the relevant fields of incoming
IP packets.
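The wrapper's hashing step might look like the following. This is a sketch under stated assumptions: the choice of SHA-256 and the field layout (4 bytes for an IPv4-shaped address, 2 bytes for a port) are invented for illustration and are not the actual CCNx wrapper design.

```python
# Sketch under stated assumptions: SHA-256 and this field layout are
# invented for illustration; the real wrapper's hash scheme may differ.
import hashlib

def hash_name_to_fields(content_name):
    """Hash a CCN content name into IPv4-address- and port-shaped values so
    that an OF switch can match on ordinary header fields."""
    digest = hashlib.sha256(content_name.encode("utf-8")).digest()
    pseudo_ip = ".".join(str(b) for b in digest[:4])    # 4 bytes -> dotted quad
    pseudo_port = int.from_bytes(digest[4:6], "big")    # 2 bytes -> port number
    return pseudo_ip, pseudo_port
```

Because the hash is deterministic, every switch computes the same pseudo-fields for a given name, so ordinary OpenFlow match rules on those fields effectively forward by content name.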
For efficient operation, two additional challenges need to be addressed: how to measure the popularity of content
accurately and without a large overhead, and how to build and optimize routing tables to perform deflection. To
address these issues, the architecture calls for three new modules in the SDN controller:
• Measurement: Content popularity can be inferred directly from OF flow statistics. The measurement
module periodically queries and processes statistics from ingress OF switches to return the list of most
popular content.
• Optimization: Uses the list of most popular contents as an input for the optimization algorithm. The
objective is to minimize the sum of the delays over deflected contents under the following constraints:
(1) each popular content is cached at exactly one node, (2) the content cached at a node does not exceed the
node’s capacity, and (3) caching should not cause link congestion.
• Deflection: Uses the optimization results to build a mapping, for every content, between the content
name (by means of addresses and ports computed from the content name hash) and an outgoing
interface toward the node where the content is cached.
Finally, mappings are installed on switches’ flow tables using the OF protocol such that subsequent Interest
packets can be forwarded to appropriate caches.
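The Measurement module's popularity inference can be sketched as follows. This is illustrative only; the (name, byte_count) format of the flow statistics is an assumption about what the ingress OF switches report.

```python
# Illustrative sketch of the Measurement module; the (name, byte_count)
# statistics format is an assumption about what ingress OF switches report.
from collections import Counter

def most_popular(flow_stats, top_k=2):
    """Aggregate per-flow byte counts by content name and rank the names."""
    totals = Counter()
    for name, byte_count in flow_stats:
        totals[name] += byte_count
    return [name for name, _ in totals.most_common(top_k)]
```

The resulting popularity list is what the Optimization module consumes when deciding where each popular content should be cached.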
Figure 6.13 shows the flow of packets. The OpenFlow switch forwards every packet it receives from other ports
to the wrapper, and the wrapper forwards it to the CCNx module. The OpenFlow switch needs to help the wrapper
identify the switch source port of the packet. To achieve this, the OF switch is configured to set the ToS value of
all packets it receives to the corresponding incoming port value and then forward all of them to the wrapper’s
port.
The wrapper maps a face of CCNx to an interface (that is, a port) of the OpenFlow switch using the ToS value. Face W
is a special face between wrapper and the CCNx module. W receives every Content packet from the wrapper and
is used to send every Interest packet from CCNx to the wrapper.
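The ToS-based port tagging described above can be sketched as follows. This is purely illustrative; the dict-based packet representation and the ToS-to-face mapping structure are assumptions for the example.

```python
# Purely illustrative sketch of the ToS-based ingress-port tagging; the
# dict-based packet representation is an assumption for illustration.

def tag_with_ingress_port(packet, in_port):
    """OF switch: record the ingress port in the ToS field before forwarding
    the packet to the wrapper's port."""
    packet["tos"] = in_port
    return packet

def face_for(packet, tos_to_face):
    """Wrapper: recover the ingress port from ToS and map it to a CCNx face."""
    return tos_to_face[packet["tos"]]
```

This is how the wrapper knows which switch port (and hence which CCNx face) each packet arrived on, even though all packets reach it over a single wrapper port.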