
D2 TECHNICAL BROCHURE

Information systems
& telecommunication

Enabling software defined networking for electric power utilities
Reference: 866

April 2022
TECHNICAL BROCHURE

Enabling software defined networking for electric power utilities
WG D2.43

Members

V. TAN, Convenor (AU)
H. DOI (JP)
D. HOLSTEIN (US)
M. SEEWALD (DE)
Q. YANG (CN)
Z. MBEBE (ZA)
K. SETLHAPELO (ZA)
L. WATTS (AU)
O. AGGAR (FR)
M. COSTA DE ARAUJO (BR)
T. GODFREY (US)
P. ZHANG (US)
G. STUEBING (US)
C. VILLASANTI (PY)
J. MATABOGE (ZA)
G. HELPS (AU)
K. LI (CN)
S. KACAR (CA)
V. KARANTAEV (RU)
Z. JIANG (CN)

Copyright © 2022
“All rights to this Technical Brochure are retained by CIGRE. It is strictly prohibited to reproduce or provide this publication in any
form or by any means to any third party. Only CIGRE Collective Members companies are allowed to store their copy on their
internal intranet or other company network provided access is restricted to their own employees. No part of this publication may
be reproduced or utilized without permission from CIGRE”.

Disclaimer notice
“CIGRE gives no warranty or assurance about the contents of this publication, nor does it accept any responsibility, as to the
accuracy or exhaustiveness of the information. All implied warranties and conditions are excluded to the maximum extent permitted
by law”.

ISBN : 978-2-85873-571-6

Executive summary
This Technical Brochure provides an overview of Software Defined Networking (SDN) in the context of
applications for power utilities.
The focus of this Technical Brochure is to inform the reader about the areas of SDN that are most
applicable to power utilities; namely, core SDN concepts, Network Function Virtualisation (NFV), in
which network functions such as routers and firewalls are virtualised, and a basic technical overview
of the technology. A deep and detailed description of the various technologies and protocols is not
the intent of this Technical Brochure, as these are readily available from other literature and sources.
Rather, we aim to provide context and relevance for the areas most applicable to power utilities.
SDN is a large topic, where many different technologies are brought together and integrated as a
solution to create a modern programmable and agile network. A subset of these core SDN
technologies and protocols is described, along with references to the relevant standards and related
works, which are summarised so that the reader can gain a basic level of appreciation of the
numerous works that have been brought together to define SDN.
Real world use cases of SDN and NFV have materialised and gained maturity, especially within the
networks of telecommunications and cloud service providers. 5G is expected to heavily utilise SDN
and NFV as these networks are being implemented around the world.
For the typical power utility, SDN and NFV have often already been implemented without any
general awareness, aside perhaps from technical staff knowing that a commercial product
implemented in the utility's network is an SDN solution. However, SDN solutions are more commonly
implemented in the datacentre or control centre environment than at the edge - for example, at
remote locations such as substations.
In this Technical Brochure, we describe some SDN and NFV use cases in the context of the utility's
operational technology (OT) environments; namely substation virtualisation, the OT network digital
twin, micro-segmentation (a cyber security use case), OT cloud service integration, IEC 61850
configuration and SD-WAN.
The working group has also carried out a survey of the CIGRE member utilities' views on SDN and
NFV, covering the relevance and perceived benefits of these technologies to their organisations, and
their intended timeframe for SDN and NFV implementation.
A case study based on a Japanese power utility's use case is described, along with the current market
landscape of SDN and NFV.
Future work to extend the overview coverage provided by this Technical Brochure is recommended,
especially in the areas of private 5G networks' integration with SDN and NFV, further development of
the intelligent edge using SDN and NFV, and a more detailed study of extending the utility's OT
environment into the cloud.
We believe that SDN and NFV will be important technologies for power utilities to consider due to their
potential to increase efficiency and agility in provisioning services, as utilities face rapidly changing
telecommunication and information systems requirements brought about by distributed energy
resources (DERs) and renewables.


Contents
Executive summary ............................................................................................................. 3

Figures and Illustrations ..................................................................................................... 6

1 Introduction and Technology Evolution................................................................... 7


1.1 Software Defined Network (SDN) ............................................................................................................. 7
1.2 Early Barriers to Adoption of SDN ........................................................................................................... 8
1.3 SDN's Role in Network Virtualisation....................................................................................................... 8
1.4 Network Function Virtualisation (NFV) .................................................................................................. 10
1.5 5G's Adoption of SDN and NFV .............................................................................................................. 11

2 SDN and NFV Building Blocks ................................................................................ 13


2.1 Overview ................................................................................................................................................... 13
2.2 Controller ................................................................................................................................................. 14
2.3 Programmability through APIs (Application Programming Interfaces) and Protocols ..................... 15
2.4 Other Common Protocols ....................................................................................................................... 18
2.5 Physical Components of SDN ................................................................................................................ 19
2.5.1 Network Devices .............................................................................................................................. 19
2.5.2 SDN-capable Hypervisor .................................................................................................................. 20
2.6 NFV Devices ............................................................................................................................................. 20

3 Standards and Related Work ................................................................................. 22

4 SDN and NFV Utility Use Cases and Architectures ............................................... 25


4.1 Substation Virtualisation ........................................................................................................................ 25
4.2 Network Modelling for Tests, Validation, and Proof-of-Concepts ....................................................... 28
4.3 OT Network Digital Twin ......................................................................................................................... 28
4.4 Micro-segmentation................................................................................................................................. 29
4.5 OT Cloud Service Integration ................................................................................................................. 32
4.6 IEC 61850 Configuration ......................................................................................................................... 33
4.7 SD-WAN .................................................................................................................................................... 38

5 Survey Results ......................................................................................................... 39


5.1 Overview ................................................................................................................................................... 39
5.2 Relevance of SDN and NFV to EPUs ...................................................................................................... 39
5.3 Benefits of SDN and NFV to EPUs ......................................................................................................... 40
5.4 Potential Use Cases to EPUs .................................................................................................................. 40
5.5 Timeframe in Implementation ................................................................................................................. 41


6 Case Study: Reduction Effects of Network Operation Process Using Network Virtualization Techniques in Japanese Electric Power Company ........................ 42
6.1 Overview ................................................................................................................................................... 42
6.2 SDN system structure example on electric power company’s control network ................................ 42
6.3 Benefits of implementing SDN for new lines setup work flow............................................................. 43
6.4 Future Works............................................................................................................................................ 44

7 Current Market Landscape ...................................................................................... 45

8 Future Work ............................................................................................................. 47

APPENDIX A. Definitions, abbreviations and symbols ..................................................... 48


A.1. General Terms.......................................................................................................................................... 48
A.2. Specific Terms Used in this Technical Brochure.................................................................................. 49
A.3. Organisation Acronyms .......................................................................................................................... 52

APPENDIX B. Links and references ................................................................................. 53


B.1. CIGRE Papers and Contributions........................................................................................................... 53
B.2. Other References ..................................................................................................................................... 53


Figures and Illustrations


Figure 1 – History of programmable networks (Feamster, N., et. al., 2013) ........................................... 7
Figure 2 - The concept of multiple independent logical networks being overlaid on top of an existing
network, with the overlay function built into physical servers and SDN capable switches ..................... 9
Figure 3 - The vision for NFV (Chiosi et al., 2012) ................................................................................ 10
Figure 4 - High level system component view of a private 5G network architecture (5G PPP, 2021) ... 12
Figure 5 - An implementation of 5G network components using SDN (Ericsson Technology Review,
2018) ...................................................................................................................................................... 12
Figure 6 - Architecture overview of SDN and NFV ................................................................................ 13
Figure 7 - OpenFlow switch components (Open Networking Foundation, 2015) ................................. 17
Figure 8 - Timeline of IETF Specifications for NETCONF, RESTCONF, YANG (Jethanandi, M., 2017)
............................................................................................................................................................... 18
Figure 9 - The ruggedised substation server as a hypervisor with VXLAN tunnel endpoint (VTEP)
functionality ............................................................................................................................................ 20
Figure 10 - Substation virtualisation architecture using COTS hardware which virtualises the network
and applications (Tan, V., 2018) ........................................................................................................... 25
Figure 11 - Potential EPU network WAN integration and migration approach, where the virtual
substation architecture is implemented alongside the existing technologies ........................................ 27
Figure 12 - An EPU test environment which uses a standard server, along with physical OT
components such as protection relays, and NFV components ............................................................. 28
Figure 13 - A conceptual view of the segments in the utility substation without micro-segmentation .. 31
Figure 14 - A conceptual view of microsegmentation in a utility substation Ethernet network ............. 32
Figure 15 - Cloud service integration into the utility's existing environment, forming a hybrid utility cloud
............................................................................................................................................................... 33
Figure 16 - Levels and logical interfaces in substation automation systems ........................................ 34
Figure 17 - Example IEC 61850 message exchange for bus differential and feeder overcurrent
protection ............................................................................................................................................... 35
Figure 18 - Network diagram of the example IEC 61850 system .......................................................... 36
Figure 19 - Distribution of Respondent Countries ................................................................................. 39
Figure 20 - Relevance of SDN / NFV .................................................................................................... 39
Figure 21 - Benefits of SDN and NFV ................................................................................................... 40
Figure 22 - Potential Use Cases............................................................................................................ 41
Figure 23 - Implementation timeframe .................................................................................................. 41
Figure 24 - Concept of SDN’s Virtual Tenant Network (VTN) ............................................................... 42
Figure 25 - Workflows to create a new network .................................................................................... 43

Tables
Table 1 - Other commonly used protocols in SDN and NFV solutions ................................................. 18
Table 2 – Examples of SDN-capable physical network devices ........................................................... 19
Table 3 - NFV device types and applicability to the site types in the power utility ................................ 21
Table 4 – Standards and related work .................................................................................................. 22
Table 5 - Digital twin representation of physical components in a substation control system
environment ........................................................................................................................................... 28
Table 6 - Example QoS policy for an IEC 61850 substation ................................................................. 34
Table 7 - Logical signal exchange via GOOSE ..................................................................................... 36
Table 8 - Logical signal exchange via SV ............................................................................................. 36
Table 9 - SDN flow list for GOOSE messaging ..................................................................................... 37
Table 10 - SDN flow list for SV messaging ........................................................................................... 37
Table 11 - A sample of SDN solutions in the market............................................................................. 45

Table A.1 - Definition of general terms used in this TB ......................................................................... 48


Table A.2 - Definition of technical terms used in this TB ....................................................................... 49
Table A.3 - Definition of technical terms used in this TB ....................................................................... 52
Table B.1 – CIGRE Papers and Contributions ...................................................................................... 53
Table B.2 – Other References ............................................................................................................... 53


1 Introduction and technology evolution


1.1 Software Defined Network (SDN)
The term SDN was first coined by the Stanford OpenFlow project.
However, the history of programmable networks stretches as far back as the 1990s, as shown in Figure 1.

Figure 1 – History of programmable networks (Feamster, N., et. al., 2013)

A common theme throughout the evolution of SDN is making the network programmable.
Throughout the history of SDN, attempts to make the network programmable had the following
benefits in mind:
1. Ability to define behaviours of the network through external programs or automation tools: In the
1990s, this capability was achieved through the development of the Active Networks program,
supported by the Defense Advanced Research Projects Agency (DARPA), academia and the
telecommunication research industry of that time. Today, this ability has evolved into a set of
industry-accepted open application programming interfaces (APIs), with OpenFlow being one of the
most common and widely used APIs.
2. Ability to virtualise the network by multiplexing and demultiplexing network traffic to applications
based on headers: This capability originated from the need to support innovation through
experimentation in the Internet and networking industry at that time. The ability to virtualise the
network right through to the application layer is what enabled such experimentation.
3. Ability to troubleshoot, manage and control the network at scale: This was enabled by the
separation of the control and data planes. The control plane is a function that determines the
behaviour of the network (for example, the network paths and topology, the functions a network
node plays, etc.) and the data plane performs the actions defined by the control plane (for example,
forwarding packets from one interface to another, applying packet manipulation such as header
re-writing, etc.). Advances in and the commoditisation of computing, with rapidly increasing
computing power, made the centralised processing of the state of the entire network feasible, even
in large Internet service providers (ISPs).
Software Defined Network (SDN) was a much-hyped term throughout the 2010s. Perhaps as a victim
of an overhyped market, SDN has not been clearly understood and has had multiple definitions,
especially in the early days of the technology. Its strong association in the early days with open-source
origins (e.g. OpenFlow) resulted in a narrow definition of what SDN is, although many SDN solutions
today have their origins in these early open-source projects. Overly enthusiastic vendors, who labelled
numerous products with the term SDN to mean everything and anything to do with advanced
automated networks, did not help clarify or bring focus to what SDN is.


In our view, SDN is not limited to solutions (open source or otherwise) consisting of a controller or
switches that implement an Ethernet switch flow specification. Today's SDN solutions have matured to
the extent that they no longer directly bear the name "SDN" in the product or solution name.
Instead, many SDN products today have names containing other terms such as Intelligent, Cloud,
Agile, etc. Nevertheless, all these SDN solutions retain the main characteristics of SDN: they are
programmable, centrally managed, and come with software-based agility and modularity. These
attributes are further described in Chapter 2.

1.2 Early Barriers to Adoption of SDN


The lack of compelling use cases hampered the adoption of SDN in the 1990s and 2000s.
Early scepticism of SDN's main concept of a centralised control plane led to fears that the network
would be less reliable due to a central point of failure. However, there was a realisation that
conventional distributed routing protocols such as Border Gateway Protocol (BGP) and Open Shortest
Path First (OSPF) came with similar failure concerns once the need to scale arose (for example, route
reflectors, and limitations with OSPF areas and flooding mechanisms) (Feamster et al., 2013).
An early misconception of the control plane was that it was essential in forwarding data packets. Most
SDN implementations today do not rely on the control plane’s availability for the continued forwarding
of data in a network. Of course, when the SDN controller goes down, reconfiguration functions such as
traffic engineering, new security policy installations, etc. will not be available until the controller is
online again.
The centralisation of the control plane only needs to be logical – later advancements such as
database replication, backup strategies and the splitting of the control plane into multiple control
plane functions increased the resilience of the control plane.
Another early barrier to SDN adoption was that the dominant network vendors were slow to adopt an
open API, which was critical for the adoption of SDN. However, with the emergence of merchant
silicon that exposed the network chipset's programming interface in a form suitable for SDN use
(such as OpenFlow), high-performance SDN switches became available in the marketplace and are
common nowadays.

1.3 SDN's Role in Network Virtualisation


Network virtualisation provides a way to present multiple networks independent from the physical
hardware.
In the technology evolution of SDN, network virtualisation has been adopted as a primary SDN use
case, especially in the context of the datacentre network or Local Area Network (LAN) scenarios.
For electric power utilities (EPUs), a common use of network virtualisation is to carry applications of
different requirements securely and to discriminate applications based on performance or service
criteria.
Examples of network virtualisation in use today by EPUs are as follows:
1. Virtual LAN (VLAN), as defined in the IEEE 802.1Q standard. VLANs virtualise the Ethernet
topology of a network, by creating multiple virtual Ethernet segments on shared physical
hardware.
2. L2VPN and L3VPN virtualise multiple Ethernet networks and IP networks across a common Wide
Area Network (WAN).
3. TDM-based technologies. SDH and SONET networks virtualise TDM circuits over a Wide Area
TDM network, where a point-to-point TDM circuit can be linked up between geographically
separated locations as though they are directly connected. By the time Ethernet gained popularity,
SDH/SONET had also gained the capability to virtualise Ethernet networks over a common
SDH/SONET core network, using techniques such as Virtual Concatenation (VC) (ITU G.707 and
ITU G.783) and Generic Framing Procedure (GFP) (ITU G.7041) to deliver Ethernet over
SDH/SONET.
One main common shortcoming in the above traditional methods of virtualising networks is that new
virtual networks cannot be dynamically created in a scalable way to meet new requirements.
Equipment vendors did not have an open API which new applications can use to dynamically create
new virtual networks at scale or change their behaviour programmatically.


Although SDN’s scope is much wider than network virtualisation, it is one of the main uses for SDN
today, owing to SDN’s ability to program a network via open Application Programming
Interfaces (APIs).
The popularity of using SDN to virtualise the network also stemmed from the fact that many pioneering
implementations (Nicira – now VMware NSX, Open vSwitch, etc.) did not require SDN support in
existing network hardware, thereby encouraging adoption of SDN by lowering the barrier to entry on
existing networks.
This is possible due to the network overlay techniques used in combination with SDN. Popular current
examples of SDN overlays include VXLAN and NVGRE, whose behaviours are implemented in
software. Figure 2 shows how SDN, with the use of network overlays, provides network virtualisation.
The only assumption on the underlying network is Ethernet or IP reachability between the SDN nodes,
which are either built directly into the physical server or hypervisor, or are SDN-capable switches.

Figure 2 - The concept of multiple independent logical networks being overlaid on top of an existing
network, with the overlay function built into physical servers and SDN capable switches


1.4 Network Function Virtualisation (NFV)


The term Network Function Virtualisation (NFV) was first described in 2012 at the SDN and OpenFlow
World Congress in Germany (M. Chiosi et al., 2012). NFV standardisation work has been carried out
by ETSI through its NFV Industry Specification Group since then.
While SDN is concerned with the programmability of the network, with the main feature of a
centralised control plane, NFV’s focus is on using “standard IT virtualisation technology to consolidate
many network equipment types” (M. Chiosi et al., 2012). Figure 3 shows the original vision of NFV,
where much of the physical network equipment of that time was thought to be virtualisable and able
to be consolidated onto common server hardware using virtualisation technology.
There are now numerous network equipment types to which the NFV principle can be applied,
including mobile network nodes, session border controllers (for VoIP), IP and MPLS routers, load
balancers, firewalls, IPS, VPN concentrators, radio network equipment, etc.

Figure 3 - The vision for NFV (Chiosi et al., 2012)

Although NFV is a distinct concept from SDN, over the years NFV has become closely associated with
SDN for the following reasons:
• The framework for automation and orchestration that resulted from the programmability of SDN
(e.g. RESTful protocols) can also be applied to NFV.
• When managed using a common toolset, the management of SDN and NFV achieves increased
value due to the improved efficiencies of automating SDN and NFV as a whole.
• SDN and NFV use cases are closely related – these include extending networks (for example,
SDN extends a logical network to any location including the cloud provider, and NFV can be used
to secure the extended network through the use of virtual firewalls, etc.) and orchestration (for
example, the SDN Controller uses OpenFlow to program the network segments, and the Controller can
be integrated with NFV devices using REST APIs – the result is that the Controller manages the
entire SDN and NFV network).


In the EPU’s scenario, the potential benefits are as follows:


• Consolidating remote and substation network equipment such as firewalls, IPS/IDS, IP routers and
MPLS routers into a virtualised form which runs on standard ruggedised physical servers or
processing units, such as telecommunication hardware that also acts as a hypervisor. This reduces
the number of physical hardware devices to be maintained at remote sites. The use of a
minimum set of standardised server-based hardware enables a more efficient asset lifecycle
management approach.
• Similarly for non-network equipment such as HMIs and RTUs, we expect to see an increased
adoption of the virtual form, i.e. virtual HMIs and virtual RTUs.
• Consolidating central control site network equipment such as PABX, core firewalls, core security
devices, core IP routers, core MPLS routers, and VPN concentrators.
• NFV provides the flexibility and efficiency in implementing cybersecurity and new services. For
example, new cybersecurity services can be brought online by deploying a virtual firewall using
existing server hardware on-site often without significant physical installation work and cabling,
and the NFV can be managed centrally.

1.5 5G's Adoption of SDN and NFV


In recent years, 5G networks have gained traction and adoption.
With ultra-reliable low-latency communications (URLLC), support for massive machine-type
communication (mMTC) and enhanced mobile broadband with significantly increased data rates
compared to 4G/LTE, the technology has great potential to improve efficiency and to meet the
demands of the rapidly changing power industry, spurred by distributed energy resources (DERs) and
renewables. The CIGRE Study Committee D2 has formed a Working Group to investigate 5G use
cases for power utilities; its work commenced in 2021.
The 5G Infrastructure Public Private Partnership (5G-PPP) describes the 5G network architecture as
one which utilises SDN and NFV extensively, where NFVs and the SDN-capable switches are
important components, and the SDN controller is used extensively to orchestrate and operate the 5G
network (5GPPP Architecture Working Group, 2021). Figure 4 shows a high level 5G architecture,
where SDN is used as a critical part of the infrastructure layer.
It is expected that private 5G networks will be deployed in the future by some power utilities, possibly
to support mission critical real-time applications such as protection and other important utility use
cases. Knowledge of how SDN and NFV are used within a 5G network will be critical for these
utilities.
Figure 5 shows a current implementation of 5G which is described as a "self-contained infrastructure
underlay with an SDN-controlled overlay for a variety of RAN - Radio Access Networks and user
services" (Ericsson Technology Review, 2018).


Figure 4 - High level system component view of a private 5G network architecture (5G PPP, 2021)

Figure 5 - An implementation of 5G network components using SDN (Ericsson Technology Review, 2018)


2 SDN and NFV building blocks


2.1 Overview
The building blocks described here are what we term as conventional SDN building blocks. These are
conventional because in the early days of SDN, various open source and publicly known
implementations of SDN approach the programmability aspects of SDN with these components.
These components may be hidden from view from the end-user, for example, if the SDN solutions are
packaged as black-box systems that utilise a mix of widely known components with additional
proprietary methods.

Figure 6 - Architecture overview of SDN and NFV


Figure 6 shows a high-level view of SDN and NFV. The SDN Controller is the central component
governing the function of SDN. The Controller uses the concept of Southbound and Northbound APIs
(Application Programming Interfaces) and plugins to communicate with network devices and
applications.
It is important to note that there are alternative views to the architecture in Figure 6, as SDN and
NFV are fast-moving areas. Nevertheless, this view is consistent with, and has been derived from, the
SDN work of the Open Networking Foundation, and from ETSI through its Industry Specification Group
on NFV.
Using the API as an interfacing layer provides a significant advantage compared to the traditional
approach of disparate per-device communication methods. For example, a conventional NMS (Network
Management System), which lives in the application layer, may need to directly support the multitude
of methods for interrogating and configuring devices, including the use of SNMP, console interaction
(SSH), web interfaces, etc. – all of which vary greatly between devices and vendors.
By abstracting the network devices’ state and behaviour using the Southbound and Northbound APIs,
the intricacies of multi-device and multi-vendor support are wrapped within the Northbound REST API
(usually carried within HTTPS) to be easily consumed by various applications. The network device
interfacing methods are decoupled from the application layer: changes within the network devices do
not affect the application layer, as the Northbound API stays consistent.
In summary, the Controller abstracts the functionality of the Software Defined Network by using well-
defined interface “contracts” exposed by the Northbound and Southbound APIs.
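
To make the Northbound "contract" concrete, the following minimal sketch queries a hypothetical SDN controller's REST API and requests a new flow. The controller address, endpoint paths, token and JSON fields are illustrative assumptions, not the API of any specific product; real controllers expose equivalent but differently named resources.

```python
import requests

CONTROLLER = "https://sdn-controller.example.net:8443"  # hypothetical address
HEADERS = {
    "Authorization": "Bearer <token>",  # assumed token-based authentication
    "Accept": "application/json",
}

# Read the controller's abstracted topology; the application never talks
# to individual devices or their vendor-specific interfaces.
topology = requests.get(f"{CONTROLLER}/api/v1/topology", headers=HEADERS, timeout=10)
topology.raise_for_status()
for node in topology.json().get("nodes", []):
    print(node.get("id"), node.get("role"))

# Request a new flow as intent; the controller translates this into the
# appropriate Southbound operations (OpenFlow, NETCONF, etc.) per device.
flow = {
    "match": {"eth_type": "ipv4", "ipv4_dst": "10.1.1.0/24"},
    "actions": [{"output": "port-2"}],
    "priority": 100,
}
resp = requests.post(f"{CONTROLLER}/api/v1/flows", json=flow, headers=HEADERS, timeout=10)
resp.raise_for_status()
```

Because the application depends only on this REST contract, the same script works unchanged whether the controller drives one vendor's switches or a multi-vendor estate.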

2.2 Controller
The centralised Controller is at the heart of the SDN. As NFV is increasingly viewed as
complementary to SDN, the Controller may also include managing NFV devices.
The main concept of SDN is that the Controller controls the control plane of the network.
There are 2 planes in most networks. The control plane is a function that determines the behaviour of
the network, and the data plane (sometimes called the forwarding plane) performs the actions defined
by the control plane. There is also the management plane (where management function is carried out,
commonly over a separate out-of-band interface on a network device).
Specific examples of the control plane and data plane are as follows:
• In OSPF, the control plane is the routing protocol, which is distributed by nature. Each router runs
an instance of the routing protocol and holds the full view of the entire OSPF network
topology. Dijkstra’s algorithm is used to determine the optimal path independently within each
router, which determines the behaviour and topology of the entire network in a consistent manner.
As each router receives packets, based on its view of the OSPF topology, it performs forwarding
actions by transmitting the packets from the appropriate interface – this is the data plane of the
network. Changes detected by a router are communicated to the rest of the routers by flooding
packets in the data path – it can be said that the OSPF control plane operates in-band with the data
plane.
• In the Ethernet switch, the control plane is the set of layer 2 protocols, which are also distributed by
nature. Using Spanning Tree Protocol (STP) or one of its 802.1D or 802.1w-based variants, each
switch independently determines its limited view of the overall topology. Governed by STP, a
consistent view of the entire layer 2 network is derived from each switch port’s role as defined by
the protocol (i.e. the STP Blocking, Listening, Learning and Forwarding roles).
Through the information determined by STP, the switch learns and installs records in its
forwarding table (usually in high-speed hardware tables known as the Content Addressable
Memory – CAM), which defines the data plane of the layer 2 network.
• In IP/MPLS routers, the data plane consists of the MPLS header labels working in conjunction with
the forwarding information base (FIB), and the distributed control plane consists of a combination
of protocols including Label Distribution Protocol (LDP), BGP, and OSPF or IS-IS.
• MPLS-TP supports static provisioning and dynamic provisioning (RFC 6373). In the static
provisioning method via a network management system (NMS), the control and configuration of
the network are carried out statically via the centralised NMS.
• In SDH / SONET networks, the configuration of the circuits is similar in concept to a centralised
control plane, where the centralised NMS is used to statically control and configure the network.
It can be seen from the examples above that the control plane can be generally categorised as either
centralised or distributed. In the case of SDN, the controller adopts the centralised model.
In highly available networks of the EPU, it is important to ensure that the failure of the controller does
not impact the continued ability of the network to function.
The controller's role is to configure and monitor the state of the network, and the network must
continue to operate with a temporary failure or short-term outage of the controller. Dual controllers can
also be deployed to ensure controller high availability.
Most commercial implementations of SDN in the market today support controller resilience through
multiple controllers, where the failure of a controller does not impact the availability and operation of
the network. The subject of the optimal placement of multiple controllers has been a research topic in
recent years, and is commonly known as the Controller Placement Problem. Das, T. et al.
(2020) carried out a survey on SDN controller placement and found that the three main objectives in
placing multiple controllers in an SDN network are to optimise one or more of the following
parameters: latency, resilience and quality of service. They found that the placement of the
SDN controllers is a crucial design decision. For power utility applications, controller placement needs
to take into account the various classes of applications carried on the utility's telecommunications
network - for example, transporting SCADA traffic over a large-scale SDN / NFV-based network would
require resilience and quality of service as the primary optimisation parameters, while transporting
real-time protection traffic would require optimisation of all three parameters:
low latency, resilience and quality of service. A toy illustration of latency-driven controller placement
is sketched below.
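
As a toy illustration of the latency objective in the Controller Placement Problem, the sketch below brute-forces the pair of sites that minimises the worst-case controller-to-switch latency on a small weighted graph. The topology, site names and latencies are invented for illustration; real formulations (per Das, T. et al., 2020) also weigh resilience and quality of service.

```python
import itertools
import networkx as nx

# Toy utility WAN: nodes are sites, edge weights approximate one-way latency in ms.
G = nx.Graph()
G.add_weighted_edges_from([
    ("control-centre", "sub-A", 2),
    ("sub-A", "sub-B", 3),
    ("sub-B", "sub-C", 4),
    ("control-centre", "sub-C", 6),
    ("sub-C", "sub-D", 2),
])

# All-pairs shortest-path latency over the weighted graph.
dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))

def worst_case_latency(placement):
    # Each site is served by its nearest controller; score the worst-served site.
    return max(min(dist[c][n] for c in placement) for n in G.nodes)

# Brute force is fine for a toy graph; the general problem is NP-hard,
# which is why the literature proposes heuristic placement methods.
best = min(itertools.combinations(G.nodes, 2), key=worst_case_latency)
print("Best controller sites:", best, "worst-case latency:", worst_case_latency(best))
```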

2.3 Programmability through APIs (Application Programming Interfaces)


and Protocols
An advantage of the SDN and NFV approach is the benefit gained through programmability.
Using Application Programming Interfaces (APIs) and well-defined protocols, the system becomes
extensible and modular, where the behaviour of the network can be extended and improved upon by
"plugging in" modules that integrate into the SDN and NFV network via these APIs and protocols.
The APIs and protocols define the message exchange mechanism between modular components in
the system - these are, in effect, the "contract" of behaviour that governs the exchange of messages,
and these APIs are published so that third parties can develop and extend the network via new
applications. For example, network provisioning, security monitoring, and billing can all be integrated
as modules that conform to the API of the system.
Figure 6 shows the modular nature of the Northbound and Southbound APIs and protocols, which
provide extensibility to a software defined network. Northbound APIs are the mechanisms by which
applications, modules and plugins exchange data with the SDN controller, whereas Southbound
APIs are the methods by which the SDN controller communicates with network devices to exchange
configuration information.

No. API and Protocol Description

1. REST The Representational State Transfer (REST) API has become the
"prevalent choice for northbound API of SDN" (Zhou, W., et al., 2014).
REST is a lightweight and stateless architecture which establishes simple
communication rules for application modules to retrieve or modify the
system state, and is widely used not just in SDN, but in countless public
and private web services and applications.
Although the REST architecture is agnostic to the underlying
telecommunications channel, it is commonly transported over a secure
TLS channel between the application modules and the SDN controller.
For simple data exchange between the application modules and the SDN
controller, the commonly used communication data structures include
XML or JSON. The use of these simple communication data structures
decouples the internal representation of the application modules and the
SDN controller.

2. NETCONF Network Configuration Protocol (NETCONF) is defined in RFC 6241 and


provides the mechanisms to install, manipulate and delete the
configuration of network devices.
In the context of SDN and NFV, NETCONF is usually a Southbound
interface used by the SDN controller to configure network devices.
Similar to REST, the configuration data exchanged uses a simple
encapsulation, which in NETCONF's case is XML.
The use of XML decouples the internal state representation of the SDN
controller from the state of the various network devices, which may form a
multi-vendor environment.
NETCONF is usually transported over a secure communication channel
via an SSH or TLS session between the SDN controller and the network
device.

3. RESTCONF RESTCONF is an HTTP-based Southbound API protocol and is defined in


RFC 8040.
RESTCONF is built on the NETCONF concepts and uses the REST
architecture to exchange data between the SDN controller and the
network device.
Similar to REST, it is usually transported over a secure TLS channel
between the SDN controller and the network device.

4. YANG YANG is a data modelling language defined in RFC 7950.


Using RESTCONF, the SDN controller manipulates the state of the
network devices by updating the YANG data model. Each vendor
publishes the network device YANG data model, which forms the basis of
configuration state exchange between the SDN controller and the network
device.

5. OpenFlow The OpenFlow protocol is defined by the Open Networking Foundation (ONF),
with the latest OpenFlow switch specification published in 2015.
OpenFlow is an open SDN protocol that defines the control
behaviour between the SDN controller and the SDN switches that
implement the OpenFlow specification.
Note that proprietary commercial solutions also exist in the form of a
turnkey single-vendor solution which consists of the proprietary SDN
controller software and physical SDN switches from the same vendor.
Some vendors have built-in flexibility in their switches to operate with
either the proprietary controller or an open-source controller which is
compliant with the OpenFlow specification.
Although the description here applies to the OpenFlow protocol,
proprietary SDN controller to network device approaches would be similar
in concept to how OpenFlow operates.
OpenFlow is the protocol which connects the controller (Control Plane)
and the Ethernet switches (Data Plane). Other types of network devices
that support OpenFlow include routers, and less commonly available,
firewalls.
OpenFlow enables flow-based forwarding that is configured and automated
through standardised programmatic interfaces. Both the SDN
controller and an OpenFlow-compliant network device implement the
OpenFlow specification (a minimal controller sketch follows this table).

Figure 7 - OpenFlow switch components (Open Networking Foundation, 2015)
Figure 7 shows the main OpenFlow switch components.
The Controller manages all aspects of OpenFlow switches (or other
OpenFlow-capable network devices) via the OpenFlow switch protocol.
The OpenFlow protocol defines the management operation interface
between the Controller and the OpenFlow switches that the Controller
manages. This management channel is typically carried over a TCP
connection secured with TLS.
The Controller manages the switch by manipulating the flow table in the
switch. The flow table consists of rules that match Ethernet frames
received on its ports, together with a set of actions to apply to the matched
packets, for example, applying VLAN or MPLS tags, setting QoS actions, etc.
The Controller is fundamental to the operation of the OpenFlow switch.
Without the Controller, the switch cannot function, as an OpenFlow switch
must receive flow table instructions from the Controller. The switch may be
able to operate independently for a short time (by using the flow
instructions previously downloaded from the Controller); however, a prolonged
Controller outage will block even the most fundamental behaviour change,
such as permitting communication with a new host connected to the switch.
Therefore, the Controller is a critical element in an OpenFlow
environment. Deploying multiple Controllers can reduce the single point of
failure risk. However, there is still a stringent requirement in the network
connection between the Controller and the switches that it manages.
Deployment of OpenFlow switches in remote sites such as substations
may not be suitable if there is an absence of a high-quality WAN
connection to the Controller(s).
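
As a minimal sketch of the Controller-to-switch relationship described above, the following uses Ryu, one open-source OpenFlow controller framework, to proactively install a single flow entry when a switch connects. The MAC address and priority are illustrative assumptions; a production deployment would install a fuller rule set (and typically a table-miss entry) according to the utility's policy.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class SubstationAllowList(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Proactively install a flow permitting frames from a known device's
        # (hypothetical) MAC address; traffic with no matching flow is simply
        # dropped, since this sketch installs no table-miss entry.
        match = parser.OFPMatch(eth_src="00:30:a7:00:00:01")
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Run under ryu-manager, the switch keeps forwarding matching frames from this installed flow even if the controller later becomes unreachable, consistent with the short-term independent operation noted above.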


Figure 8 shows the milestone timeline of the various Southbound APIs and protocols.

Figure 8 - Timeline of IETF Specifications for NETCONF, RESTCONF, YANG (Jethanandi, M., 2017)
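
The sketch below illustrates the NETCONF edit-config flow from the table above, using the open-source ncclient library and the standard ietf-interfaces YANG model. The device address and credentials are placeholders, and the device is assumed to support the candidate datastore of RFC 6241; devices without it would target the running datastore instead.

```python
from ncclient import manager

# Interface description change expressed against the ietf-interfaces YANG model.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <description>Provisioned by SDN controller</description>
    </interface>
  </interfaces>
</config>
"""

# NETCONF runs over SSH on port 830; host and credentials are placeholders.
with manager.connect(host="198.51.100.10", port=830,
                     username="admin", password="changeme",
                     hostkey_verify=False) as m:
    # <edit-config> against the candidate datastore, then commit, mirroring
    # RFC 6241's two-phase configuration model.
    m.edit_config(target="candidate", config=CONFIG)
    m.commit()
```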

2.4 Other Common Protocols


It should be obvious to the reader that SDN and NFV are more akin to an approach than to a
fixed set of standards and specifications that define SDN.
Over the years, the networking community and various standards bodies have adapted existing
technologies, standards, Request for Comments (RFCs) and recommendations, and developed some
new ones, with the aim to enable SDN.
This has resulted in a large set of APIs (both Southbound and Northbound) and network protocols that
are commonly adopted to achieve the functionalities of SDN and NFV.
Table 1 shows the description of a small subset of other commonly used protocols in SDN and NFV
solutions.
Table 1 - Other commonly used protocols in SDN and NFV solutions

No. Protocol Description

1. VXLAN Virtual Extensible Local Area Network (VXLAN) is defined in RFC 7348.
The purpose of VXLAN is to extend Layer 2 Ethernet networks over an underlying
(often Layer 3) network. For example, VXLAN can transparently extend the Local
Area Network (LAN) of a substation to another substation over the existing IP/MPLS
network.
Extending a Layer 2 Ethernet network using traditional 802.1D/W Spanning Tree
based protocols is not scalable. A large Spanning Tree Protocol (STP) domain
would often result in reduced availability and performance to the Ethernet network.
VXLAN is often used in an SDN and NFV solution due to the flexibility and
scalability that VXLAN provides, in dynamically extending Layer 2 Ethernet
networks, independent of the underlying Layer 3 network.
In the power utility scenario, an example would be extending a remote control LAN
between sites using SDN switches at the substations.
Although the VXLAN method is simple in operation, it competes with the already
well-established methods of extending Layer 2 Ethernet networks in utilities, using
IP/MPLS (L2VPN), MPLS-TP, and existing TDM mechanisms such as
SDH/SONET.
For utilities that already use any of these methods to extend layer 2 communication
between sites, VXLAN as a standalone technology may struggle to find its value
proposition compared to the existing methods. However, VXLAN, when used in the
context of SDN and NFV, may be valuable due to the benefits provided by SDN and
NFV (a packet-level sketch of VXLAN encapsulation follows this table).

2. NVGRE Network Virtualization Using Generic Routing Encapsulation (NVGRE) is defined in


RFC 7637.


Similar to VXLAN, NVGRE is also a tunneling protocol that extends Layer 2


Ethernet networks over an underlying network.
NVGRE is functionally equivalent to VXLAN, with minor technical differences, which
will not be discussed in this Technical Brochure.

3. BGP EVPN BGP MPLS-Based Ethernet VPN (EVPN) is defined in RFC 7432.
The protocol is very commonly used in conjunction with VXLAN to scale
implementations that extend large networks of VXLAN-capable SDN switches.
VXLAN can be viewed as the data plane, where Layer 2 packets are encapsulated
using the VXLAN protocol.
BGP EVPN can be viewed as the control plane, which provides signalling between
the switches in the environment to establish the VXLAN tunnels.
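
To make the VXLAN data plane concrete, the sketch below uses the scapy packet library to build a VXLAN-encapsulated frame as a VTEP would: the original substation Layer 2 frame rides inside UDP port 4789 across the Layer 3 underlay. All addresses, the VNI and the payload are illustrative assumptions.

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Original Layer 2 frame from a substation device (payload is a placeholder;
# a real GOOSE frame, for instance, would carry Ethertype 0x88B8).
inner = Ether(src="00:30:a7:00:00:01", dst="01:0c:cd:01:00:01") / b"payload"

# VTEP-to-VTEP encapsulation over the Layer 3 WAN (RFC 7348).
outer = (
    Ether()
    / IP(src="10.0.1.1", dst="10.0.2.1")  # underlay addresses of the two VTEPs
    / UDP(sport=49152, dport=4789)        # 4789 is the IANA-assigned VXLAN port
    / VXLAN(vni=5010)                     # 24-bit virtual network identifier
    / inner                               # the untouched original frame
)
outer.show()
```

Note that the inner frame is carried unmodified, which is why the overlay is transparent to the substation devices at either end.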

2.5 Physical Components of SDN


2.5.1 Network Devices
Network devices that work with SDN are those that can be controlled by the SDN controller using the
Southbound protocols (see Figure 6).
SDN network devices are not limited to the Ethernet switches that conform to the OpenFlow switch
specification, although this is the common perception.
The SDN technology and approach have evolved beyond flow table manipulation using OpenFlow
or similar proprietary flow-based protocols to include other Southbound protocols such as
RESTCONF and NETCONF. The SDN solutions in the market today include integration with non-
switch devices.
An enabler in the early days of SDN (in the 2010s) was the introduction of merchant silicon that
implements SDN functionality, usually OpenFlow, but sometimes a proprietary
specification. Merchant silicon refers to chips - usually ASICs (Application Specific Integrated Circuits)
that implement the behaviour of high-speed Ethernet switches - that the switch vendor has not designed or
built. These merchant silicon chips enable various third parties to produce Ethernet switches without
having to invest in the development of the switching chips, which is often expensive and has a long
development lifecycle.
Ethernet switch chip makers that provide SDN functionality within their chips include Barefoot,
Broadcom, Cisco, Marvell and Mellanox.
Table 2 shows some of the SDN-capable devices and common requirements to be SDN-capable.
Note that not all requirements are mandatory for SDN integration, depending on the implementation.
Table 2 – Examples of SDN-capable physical network devices

No. Network SDN Capability Requirements


Device

1. Ethernet Ability for its switch flow table to be manipulated programmatically by the SDN
switch controller via a Southbound protocol;
Layer 2 tunneling integration and automation, for example, VXLAN.

2. Firewall Ability for its firewall policy to be manipulated programmatically by the SDN
controller via a Southbound protocol;
Ability for firewall service insertion into any arbitrary network segment.

3. Load Ability for its load balancing policy to be manipulated programmatically by the SDN
balancer controller via a Southbound protocol;
Ability for load balancer service insertion into any arbitrary network segment.


2.5.2 SDN-capable Hypervisor


The hypervisor is the foundational virtualisation component and is the base-layer operating system
that runs and manages virtual machines (VMs). It provides safe isolation and shared access to
physical server resources, allowing multiple VMs to co-reside within the same physical host.
Since SDN and NFV are closely related to the concept of virtualisation, the hypervisor integration into
SDN is an important element of SDN. Similarly, the hypervisor enables NFV, where virtual network
devices such as firewalls are virtualised to run in physical servers.
Figure 9 shows a conceptual view in which the hypervisor, which may be a ruggedised substation
server located in the substation, runs virtualised services including NFV network devices, virtual HMIs
and virtual RTUs. These hypervisors may be controlled by the SDN controller (or multiple controllers
for redundancy). The hypervisor has a built-in SDN-capable software-based switch (the virtual switch),
which enables the hypervisor to encapsulate Ethernet frames using layer 2 tunneling protocols such
as VXLAN.
The hypervisors in the market today have wide support for SDN integration - for example, Microsoft
Hyper-V, VMware ESXi with NSX, Red Hat KVM / OpenStack, and Citrix Hypervisor all support the
NVGRE and VXLAN encapsulation methods.

Figure 9 - The ruggedised substation server as a hypervisor with VXLAN tunnel endpoint (VTEP)
functionality
It should be clear that there are strong benefits to SDN when it is used alongside virtualisation and NFV.
SDN, NFV and virtualisation can be tightly integrated to form an efficient and flexible network service
for power utilities. For example, deploying virtualised NFV network devices (virtual routers, virtual
firewalls, virtual cybersecurity appliances, etc.) on commodity, standardised ruggedised substation
servers is advantageous from a physical asset management perspective, especially when scaled to a
large number of substations.
Due to the natural fit between SDN, NFV and virtualisation, robust automation methods can be applied
across the networks to provide an agile utility network to better serve the demands of DER and smart
grid.
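
As a sketch of the VTEP concept in Figure 9, the following creates a VXLAN interface on a Linux-based hypervisor using the pyroute2 library, which a virtual switch could then bridge to local VMs. The interface names, VNI and multicast group are illustrative assumptions; the script also assumes root privileges and an existing underlay interface named eth0.

```python
from pyroute2 import IPRoute

ipr = IPRoute()

# Create a VXLAN tunnel endpoint bound to the underlay interface.
underlay = ipr.link_lookup(ifname="eth0")[0]
ipr.link(
    "add",
    ifname="vxlan5010",
    kind="vxlan",
    vxlan_id=5010,            # VNI shared by the VTEPs at both substations
    vxlan_link=underlay,      # underlay (transport) interface
    vxlan_group="239.1.1.1",  # multicast group for broadcast/unknown traffic
    vxlan_port=4789,          # IANA-assigned VXLAN UDP port
)

# Bring the new interface up; a bridge or virtual switch would then attach
# VM ports to it to complete the overlay shown in Figure 9.
vtep = ipr.link_lookup(ifname="vxlan5010")[0]
ipr.link("set", index=vtep, state="up")
```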

2.6 NFV Devices


NFV devices are virtualised network devices. As discussed in the previous sections, virtualising
network functions brings benefits not just from the efficient asset management point of view for the
power utility, but also from the automation and service orchestration perspective.
A wide range of network functions have been identified as targets for virtualisation. ETSI has
described the use cases and targets for virtualisation for numerous network functions (ETSI, 2021).


Table 3 shows the types of NFV devices that may be applicable in the power utility environment. For
the purpose of categorisation in the table below, substations are utility-owned or operated sites that
are essential for the operation of the electricity network, including voltage conversion and power
system monitoring. Remote sites are small sites which may be network-extended from nearby larger
sites such as substations or control centres; they have only rudimentary infrastructure, often lacking
cooling, dust-proofing or redundant power sources. Remote sites include small street-side cabinets or
pole-mounted cabinets and are often close to the end-consumer premises metering.
Table 3 - NFV device types and applicability to the site types in the power utility

No. NFV Device Types Utility Environment

1. Access Router Substations, remote sites, control sites, cloud.

2. Provider Edge Router, e.g. MPLS router Substations, control sites, cloud.

3. Firewall Substations, remote sites, control sites, cloud.

4. Next Generation Firewall Substations, remote sites, control sites, cloud.

5. WAN optimisation controller Substations, control sites, cloud.

6. Deep packet inspection Substations, control sites, cloud.

7. Intrusion prevention system Substations, control sites, cloud.

8. Cryptographic-as-a-Service (CaaS) (e.g. Control sites, cloud.


encryption, SSL proxy)

9. Residential gateway Remote sites.

10. Content distribution network controller Control sites, cloud.

11. Security-as-a-service (SecaaS) (e.g. Substations, control sites, cloud.


network probes, honeypots, monitoring, ML-
based threat intelligence)


3 Standards and related work


There is no single standard that defines SDN and NFV, since SDN and NFV consist of various
components that interoperate as a solution.
However, the following are some of the publications that relate to SDN and NFV, many of which help
shape the direction of these technologies, as shown in Table 4. Note that the activities and
publications listed below are non-exhaustive and are intended to show the breadth of SDN and NFV
activities across various organisations.
Table 4 – Standards and related work

No. Standards / Description Publishing


Frameworks Organisation

1. ETSI NFV ETSI releases specifications for NFV in 2-year phases, ETSI
Specification which started in 2013.
Releases
NFV Release 4 (2019-2020) defines the NFV specifications in the
areas of OS container support, MANO (Management and
Orchestration) optimisations, and security hardening.
In November 2021, NFV Release 5 (2021-2022) was initiated to
increase support for cloud-enabled deployments.

2. ITU-T Publications The following are some related publications by ITU: ITU
• ITU-T Y.3300 - Framework of SDN (2014)
• ITU-T Y.3301 - Functional requirements of SDN (2016)
• ITU-T Y.3011 - Framework of network virtualisation for
future networks (2012)
• ITU-T Y.3320 - Requirements for applying formal
methods to SDN (2014)
• ITU-T Y.3015 - Functional architecture of network
virtualisation for future networks (2016)

3. IEEE Standards The following are some related standards by IEEE: IEEE
• IEEE 1903-2011 - IEEE Standard for the Functional
Architecture of Next Generation Service Overlay
Networks (2011)
• IEEE 1903.1-2017 - IEEE Standard for Content Delivery
Protocols of Next Generation Service Overlay Network
(2017)
• IEEE 1903.2-2017 - IEEE Standard for Service
Composition Protocols of Next Generation Service
Overlay Network (2017)
• IEEE 1903.3-2017 - IEEE Standard for Self-Organizing
Management Protocols of Next Generation Service
Overlay Network (2017)
• IEEE P1915.1 - IEEE Draft Standard for Software
Defined Networking and Network Function Virtualization
Security (DRAFT)
• IEEE P1916.1 - IEEE Draft Standard for Software
Defined Networking and Network Function Virtualization
Performance (DRAFT)

• IEEE P1917.1 - IEEE Draft Standard for Software
Defined Networking and Network Function Virtualization
Reliability (DRAFT)
• IEEE P1921.1 - IEEE Draft Standard for Software-
Defined Networking (SDN) Bootstrapping Procedures
(DRAFT)
• IEEE P1930.1 - IEEE Draft Recommended Practice for
Software Defined Networking (SDN) based Middleware
for Control and Management of Wireless Networks
(DRAFT)

4. IETF Working IETF originates the following related RFCs and activities, IETF
Groups and RFCs many of which form the basis of SDN and NFV-based
approaches:
• RFC 7348 - VXLAN (2014)
• RFC 8040 - RESTCONF (2017)
• RFC 6241 - NETCONF (2011)
• RFC 7950 - YANG (2016)
• RFC 6830 - LISP (2013)

5. BBF Working The Broadband Forum (BBF) working groups have the BBF
Groups following coverage on SDN and related activities:
• SD-303 - Business Requirements and Framework for
SDN in Telecommunication Broadband Networks
(DRAFT)
• SD-326, Flexible Service Chaining (DRAFT)

6. ATIS NFV Forum The Alliance for Telecommunications Industry Solutions ATIS
(ATIS) NFV Forum has established frameworks, use cases
and standards for NFV, including the following:
• Operational Opportunities and Challenges of SDN/NFV
Programmable Infrastructure (2013)
• NFV Infrastructure Metrics for Monitoring Virtualized
Network Deployments (2018)
• NFV Forum Use Cases (2015)

7. Open Networking The Open Networking Foundation (ONF) publishes an open ONF
Foundation SDN specification, i.e. the OpenFlow standard.
Standards
Some of the standards published by ONF are as follows:
• OpenFlow Switch Specification version 1.5.1 (2015)
• Conformance Test Specification for OpenFlow® Switch
Specification 1.3.4 - Basic Single Table (2015)
• SDN Enabled Broadband Access (SEBA) Reference
Design (2021)
• Converged Multi-Access and Core (COMAC) Reference
Design (2020)

8. IEC 61850-90-13 In February 2021, IEC 61850-90-13: Deterministic IEC
Networking Technologies was published.
It contains references to TSN and DetNet, which use
controller-based concepts in their architecture.


4 SDN and NFV utility use cases and architectures


4.1 Substation Virtualisation
Figure 10 shows the substation virtualisation architecture to virtualise the network and applications
using commercial off-the-shelf (COTS) hardware. Note that although two COTS servers are shown in
each substation, the number of servers can vary depending on the EPU’s specific requirements.
Scalability is inherent in the virtualisation technology, where new COTS servers can be added as
required to increase the available computing capacity.
It should be noted that virtualisation supports a heterogeneous configuration of servers, i.e. the
hardware specification of the COTS servers in one substation may differ from another substation. This
allows increased flexibility for the EPU in asset management - for example, the EPU may choose to
deploy environmentally hardened COTS servers with lower computing power in remote and space-
constrained distribution substations, and more capable COTS servers in larger substations with better
cooling. This is possible because virtualisation technology virtualises the physical server resources
and presents them as a pool of resources usable by virtualised applications, agnostic to the underlying
hardware.
Application and network virtualisation technology has reached a level of maturity at which early
doubts about performance and reliability have been overcome. Packet forwarding acceleration
software libraries, encryption accelerators and techniques such as Single Root I/O Virtualisation
(SR-IOV) enable NFV and SDN to achieve performance nearing that of custom-built hardware.
The ability to scale computing resources seamlessly using server and network virtualisation enhances
the EPU's ability to rapidly meet the demands of the Smart Grid - for example, by supplementing
processing power through extending its applications to the private, hybrid or public cloud.

Figure 10 - Substation virtualisation architecture using COTS hardware which virtualises the network and
applications (Tan, V., 2018)


Figure 11 shows an approach to integrating with the existing environment in the substation, which
uses legacy non-Ethernet components. This approach enables the benefits of a virtualised substation
while maintaining interoperability with existing solutions.
The hybrid substation refers to an existing substation whose existing functions and technology stay
unchanged, but with virtualisation implemented alongside the existing infrastructure, i.e.:
• The synchronous communication network (SDH/SONET/PDH) remains unchanged – this allows
the EPU to attain the benefits of the intelligent virtualised substation without compromising existing
operational functions. For example, the Teleprotection signals transported over the synchronous
network remain unchanged, until the EPU is ready to migrate Teleprotection to Ethernet-based
transport (for example, over MPLS using virtual or physical MPLS routers)
• Flexible integration options – Figure 11 shows the use of NFV, where MPLS is enabled over
SONET/SDH using virtual MPLS routers residing in COTS servers. Multiple variations are
possible, for example:
o In a mixed-MPLS environment, a combination of physical MPLS routers and virtual MPLS
routers is implemented in the environment to form a single MPLS WAN. EPUs might choose
this approach when replacing or upgrading their existing hardware-based MPLS equipment.
This approach may involve virtual and hardware MPLS routers from the same vendor, or from
different vendors. Some vendors' virtual router implementations share the same code as their
hardware routers, which may improve interoperability and manageability. Nevertheless, if a
multi-vendor environment is to be implemented (for example, existing physical MPLS routers
from one vendor, new virtual MPLS routers from a different vendor), there is a sufficient level of
MPLS interoperability, as demonstrated in MPLS and Ethernet Congress interoperability tests
since the early 2000s.
o Physical MPLS routers are retained. In this case, the benefits of NFV can still be realised by
replacing other hardware, such as physical firewalls with virtual firewalls. This option may be
attractive for EPUs that have recently implemented their MPLS environment and would like to
realise the benefits of substation virtualisation
o Note that although the scenarios above refer to the MPLS router as a candidate for hardware
replacement, they apply to other network equipment whose functions can be virtualised,
including downstream routers, switches, etc.
• Legacy components continue to operate over existing communication technologies or are migrated to
NFV using protocol translation devices. Examples of protocol translation devices include Serial-to-
IP DNP gateways used to translate serial RTU communication data to Internet Protocol (IP) and
C37.94-to-IP protocol converters for Teleprotection. In some cases, protocol translation may not
be feasible and the EPU may be required to maintain the existing network and gradually migrate
the applications according to its digital technology roadmap.
For an EPU interested in considering the substation virtualisation architecture, formulating a
technology lifecycle roadmap is an important initial step: considering its current application
requirements and legacy environment, and viewing the substation virtualisation architecture as an
enabler for the intelligent substation in the Smart Grid.


Figure 11 - Potential EPU network WAN integration and migration approach, where the virtual substation
architecture is implemented alongside the existing technologies


4.2 Network Modelling for Tests, Validation, and Proof-of-Concepts


Figure 12 shows a reference environment which consists of standard off-the-shelf components used to
provide the EPU an efficient and rapid setup to validate, test and provide proof-of-concepts.
This approach was successfully used by a distribution utility in Australia, as described in paper D2-308
(Tuazon, P, et. al, 2020), where NFVs were used in a virtual lab to validate a teleprotection-over-MPLS
system design. The NFVs used in that scenario included the following virtual network components:
virtual MPLS routers, a virtual WAN emulator (to simulate various conditions of the WAN, including
injection of latency, packet loss, path asymmetry and jitter), and a virtual DNP3 RTU. These NFVs
were used in combination with physical devices to provide various test scenarios.
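
As an illustration of the WAN emulation role described above, the Linux netem queueing discipline can inject the same classes of impairment. The following is a minimal sketch, where the interface name and impairment values are hypothetical lab parameters rather than recommendations.

```python
# Minimal sketch of WAN impairment injection using the Linux "netem"
# queueing discipline, similar in spirit to the virtual WAN emulator
# described above. The interface name and impairment values are
# hypothetical placeholders for a lab scenario.
import subprocess

def set_wan_conditions(iface, delay_ms, jitter_ms, loss_pct):
    """Apply one-way delay, jitter and random packet loss to an interface."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True)

# Example: emulate a 20 ms WAN path with 5 ms jitter and 0.1 % loss, then
# observe teleprotection-over-MPLS behaviour under these conditions.
set_wan_conditions("eth1", delay_ms=20, jitter_ms=5, loss_pct=0.1)
```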

Figure 12 - An EPU test environment which uses a standard server, along with physical OT components
such as protection relays, and NFV components

4.3 OT Network Digital Twin


In 2021, the utility extended the network modelling environment to include software-based virtual
RTUs to further automate the scenario setup of the environment.
This approach amounts to an implementation of the digital twin concept, where a model of the actual
network is implemented in a software-based digital twin with the use of SDN and NFV. Table 5
shows the physical components in a control system environment in the substation and the digital twin
components implemented.
Table 5 - Digital twin representation of physical components in a substation control system environment

No. Physical Components Digital Twin Components

1. RTU Virtual RTU

2. SCADA Master Station Virtual SCADA Master

3. LAN switch Virtual switch (part of the hypervisor)

4. MPLS router Virtual MPLS router

5. Substation firewall Virtual Substation Firewall

6. HMI Virtual HMI

7. IED Virtual IED

The implementation of the digital twin components using the virtual elements is made possible using
NFV and SDN - both of which are closely related to virtualisation technology.
The advantages of having the digital twin implemented in the virtual environment using NFV and SDN
are as follows:
1. Efficiency and rapid setup - Faster setup of the digital twin due to the use of virtual components,
all of which are software components run within hypervisor machines.
2. Ease of simulation - Being implemented within a virtualisation environment enables automation
and orchestration tools to quickly change the simulation scenario to validate proposed changes to
the system. For example, if IPSEC encryption is required on the DNP3 communication between
the RTU and the master station, configuration changes can be designed and tested in the digital
twin prior to updating the physical components. Multiple iterations of tests can be done more
easily within the digital twin due to the common toolset in the digital twin environment, for
example the ability to quickly roll back changes (see the sketch after this list).
3. Training environment - The digital twin is a functional replica of the physical system, and hence
provides a useful training environment.
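
As referenced in point 2 above, the rollback capability can be provided by the hypervisor's snapshot mechanism. The following is a minimal sketch using the libvirt virsh CLI; the domain (VM) name and snapshot name are hypothetical placeholders.

```python
# Minimal sketch of scenario rollback in the digital twin using hypervisor
# snapshots via the libvirt "virsh" CLI. The domain (VM) name and snapshot
# name are hypothetical placeholders.
import subprocess

def snapshot(domain, name):
    """Take a named snapshot of a virtual component (e.g. a virtual RTU)."""
    subprocess.run(["virsh", "snapshot-create-as", domain, name], check=True)

def rollback(domain, name):
    """Revert the virtual component to a previously taken snapshot."""
    subprocess.run(["virsh", "snapshot-revert", domain, name], check=True)

# Example: snapshot the virtual RTU before testing IPSEC-protected DNP3,
# then roll back to re-run the test iteration from a clean state.
snapshot("virtual-rtu-01", "pre-ipsec-baseline")
# ... apply and test configuration changes ...
rollback("virtual-rtu-01", "pre-ipsec-baseline")
```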
To successfully implement a digital twin using NFV, the core physical network components must have
an equivalent virtual product. This requirement can only be met if the network product being used by
the utility comes in both the physical and virtual form. In the context of NFV, where network functions
are virtualised, most vendors in the market today have a virtual equivalent of their physical products.
These NFV components include MPLS routers, network firewalls, wireless LAN controller, etc. By
having the virtual components that are equivalent to the actual physical devices used in the
substations, the design and configuration of the digital twin of the network can closely resemble that of
the physical network.
In the case of virtual RTUs and virtual IEDs, many physical RTU and IED vendors do not have an
equivalent virtualised simulator. However, if the system used is based on open protocols, such as
DNP3 and IEC 61850, replacement software simulators are available in the market to provide the
virtual RTU and virtual IED functions in the digital twin. The alternative is to use the actual physical
devices and integrate them with the NFVs, thereby creating a hybrid physical-virtual digital twin.
The view that SDN and NFV can be leveraged to establish an effective digital twin environment is
further supported by Kherbache, M. et. al. (2021), who discuss the use of SDN in establishing a
network digital twin architecture for IIoT in the context of industrial systems.

4.4 Micro-segmentation
The concept of micro-segmentation is based on the notion that every host on an Ethernet or IP
network should be segmented - this concept relates closely to the Zero Trust cyber security model,
where hosts should not be implicitly trusted.
In many networks today, strong security controls are placed at the edges and borders of the network,
for example next generation firewalls and strong security controls at interfacing points to external
parties and networks, such as a third-party solar IP RTU connection into the substation or the
Internet. The typical network, therefore, has a soft or weak core from the perspective of cyber
security.
Even if security controls are placed between segments (for example, by having a firewall control traffic
between these segments), hosts that reside within the same segment are implicitly trusted by the
network.


A segment is usually a layer 2 Ethernet segment which forms a broadcast domain. Network switches
are usually used to implement network segmentation, using Virtual LANs (VLANs).
Figure 13 shows the conceptual view of the segments in the utility substation common in today's
implementations, where each common function is grouped into its own segment. We refer to this as
the conventional segmentation approach. Within each segment, multiple devices of the same function
exist. Note that this is a simplified conceptual view only and does not show the physical layers such as
the physical switches and redundant network components and links, and other elements such as the
process bus Ethernet network are not pictured.
Each function may be mapped and relate to the utility's security architecture of mutually trusted
devices - for example, the utility's security architecture may prescribe that all HMIs and serial terminal
servers (used to access the out-of-band management console ports of the IEDs and RTUs for remote
engineering management access) are classified as the Engineering access function, and hence being
placed in the same Engineering access segment.
There are two issues in supporting an agile utility network with the conventional segmentation
approach. The first one is the lack of granular cyber security policy enforcement.
In a non-SDN Ethernet switch, devices that belong to the same segment or VLAN are free to
communicate with each other - this is the nature of Ethernet communication, where the switch learns
the host or device MAC addresses, along with trusted mechanisms for hosts to discover each other
within the same segment via the broadcast mechanism and the Address Resolution Protocol (ARP).
Conventional Ethernet communication assumes that all hosts within the same segment are trusted.
Taking advantage of this assumption, a compromised or misbehaving host can compromise the other
trusted hosts in the same segment. Some hosts within the same segment may carry higher cyber
security privileges, where they are allowed to communicate with hosts in other segments or with the
control centres via the WAN. Attackers can leverage the trusted nature of the segment to mount a
lateral attack: first compromising a weak host, then gaining access to a higher-privileged host in the
same segment, and then moving laterally within the utility's network via the WAN.
There are methods, such as the use of Private VLANs (PVLANs) or layer 2 per-port access control
lists (ACLs), to restrict inter-device communication within the same segment. However, these methods
are cumbersome to manage, error-prone to configure and maintain, and hence do not scale well in a
utility environment. Other methods include turning on the host firewall (i.e. the firewall software built
into hosts) - however, this again is difficult to manage due to the diverse environment in a power
utility's substation, and many devices do not have built-in host firewall functionality.
Another issue with the conventional segmentation approach is the IP space design and allocation
problem.
Utilities with many sites are required to design and manage an IP address scheme which is both
scalable and at the same time limits the wastage of usable IP addresses.
The number of segments that may exist in the substation may also be limited due to practicality and IP
address space limitations. Typically, an Ethernet segment or VLAN is mapped to an IP subnet. For a
transmission utility that has over 100 sites, the utility may choose to design the IP address space in
such a way that each substation supports around 250 usable IP addresses (based on a /24 summary
address with the summary mask of 255.255.255.0 per substation). If the utility chooses a granular
segmentation scheme in the substation by creating too many segments, it risks limiting each segment
to too few devices, thereby constraining the growth of a function. Having too many segments also
introduces IP address wastage, as the first and last IP addresses in each IP subnet cannot be used
(they are the subnet's network address and broadcast address, respectively).
Having too few segments risks having a network that places too many unrelated functions within the
few segments, thereby amplifying the cyber security risk described in the first problem.
In short, this second problem of the conventional segmentation approach is a balancing act of meeting
the immediate needs and foreseeing the future growth potential of the utility's IP services.
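
To make the address arithmetic of this trade-off concrete, the short sketch below uses Python's standard ipaddress module; the address block is a hypothetical example.

```python
# Illustration of the segmentation trade-off using Python's standard
# "ipaddress" module. The address blocks are hypothetical.
import ipaddress

substation = ipaddress.ip_network("10.20.30.0/24")
print(len(list(substation.hosts())))    # 254 usable addresses as one segment

# Subdividing the same /24 into eight /27 segments loses two addresses
# (network + broadcast) per segment and caps each segment at 30 devices.
segments = list(substation.subnets(new_prefix=27))
print(len(segments))                    # 8 segments
print(len(list(segments[0].hosts())))   # 30 usable addresses per segment
```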


Figure 13 - A conceptual view of the segments in the utility substation without micro-segmentation
As shown in Figure 14, the micro-segmentation approach, enabled by SDN and NFV, uses SDN-
capable switches. SDN switches are flow-based and allow communication to occur between hosts
based on the flows configured by the SDN controller. NFV security devices (see 2.6 NFV Devices)
offer security controls on a per-host basis. Along with the orchestration and policy automation features
of the SDN controller, the security policy can be consistently applied throughout all substations.
With micro-segmentation, each host is its own segment (i.e. each SDN physical switch port is its own
segment). Whilst all hosts still comply with how the Ethernet protocol works, i.e. they still perform the
usual broadcast and ARP to discover the layer 2 and layer 3 identities of the hosts in the same subnet,
SDN intercepts these messages and, based on the policy information configured in the Controller,
provides the intended communication strictly to the permitted target hosts. No changes to the
behaviour of the hosts are required. In short, micro-segmentation provides a very high level of security
where security controls are applied to all hosts; there are no longer shared Ethernet segments in
which multiple hosts reside.
Due to the notion that each host is on its own segment, a flat IP address space can be configured, if
desired, without having to map a VLAN segment to an IP address subnet as with the conventional IP
Ethernet network design and planning. All devices could be placed in the same IP subnet, and yet
each host is segmented from all other hosts via microsegmentation.
It should be obvious that deploying micro-segmentation, SDN and NFV is a significant paradigm shift
in the design, planning and implementation of a utility's IP and Ethernet networks, especially for
non-datacentre assets such as the substation. Meine, R., (2019) describes the design and
implementation considerations of SDN in an OT environment and offers a high-level overview of how
to approach them.
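
To make the flow-based, default-deny model concrete, below is a minimal sketch using the open-source Ryu controller with OpenFlow 1.3. The MAC addresses and the single permitted host pair are hypothetical placeholders, and a production deployment would also need controller-mediated ARP handling, redundancy and a richer policy store.

```python
# Minimal sketch of micro-segmentation with the open-source Ryu controller
# (OpenFlow 1.3): a default-drop table-miss entry plus explicit flows that
# permit only a configured host pair. MAC addresses and the policy are
# hypothetical placeholders, not a complete substation policy.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

# Hypothetical policy: the HMI may talk to the RTU, and nothing else.
PERMITTED_PAIRS = [("00:00:00:00:00:11", "00:00:00:00:00:21")]

class MicroSegmentation(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Table-miss entry with no actions: anything not explicitly
        # permitted by a flow below is silently dropped.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=parser.OFPMatch(),
                                      instructions=[]))

        # Bidirectional allow flows for each permitted host pair. NORMAL
        # forwarding is used here for brevity; a pure-OpenFlow design
        # would instead output to the specific learned port.
        for mac_a, mac_b in PERMITTED_PAIRS:
            for src, dst in ((mac_a, mac_b), (mac_b, mac_a)):
                inst = [parser.OFPInstructionActions(
                    ofp.OFPIT_APPLY_ACTIONS,
                    [parser.OFPActionOutput(ofp.OFPP_NORMAL)])]
                dp.send_msg(parser.OFPFlowMod(
                    datapath=dp, priority=10,
                    match=parser.OFPMatch(eth_src=src, eth_dst=dst),
                    instructions=inst))
```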


Figure 14 - A conceptual view of microsegmentation in a utility substation Ethernet network

4.5 OT Cloud Service Integration


Public cloud and the advanced cyber security capabilities built into the cloud are becoming
cornerstones of the future utility infrastructure, and an increasing proportion of power utilities will rely
on cloud services to meet the needs of their customers and infrastructure (Bigliani, R. et. al, 2020).
Rather than building internal infrastructure to cope with rising and changing demands, utilities are
expected to leverage cloud computing to augment and supplement their own internal infrastructure, to
respond quickly to the changing electricity market. Building services and applications both in the
internal private network and in a leased or public cloud environment requires a consistent approach
to managing both environments. This consistency can be provided by SDN and NFV, using the SDN
toolsets and characteristics, i.e. the ability to extend networks, orchestrate consistent configuration
and policy, and enable strong cyber security approaches.
Issues such as data ownership, privacy requirements and national regulations are important boundary
conditions when designing and operating cloud-based services in the utility domain. SDN and NFV,
along with their automation toolset, can be used to efficiently implement policies based on these
issues.
Figure 15 shows an overview of the integration of the utility's applications and computing resources
that reside in the cloud, into the utility's private network. Using SDN with SDN overlays and
orchestration, both the private and cloud resources can be managed consistently and efficiently.
Workloads can be balanced and secured regardless of the location, which improves resource
utilisation, disaster recovery, and resilience.


Figure 15 - Cloud service integration into the utility's existing environment, forming a hybrid utility cloud

4.6 IEC 61850 Configuration


IEC 61850-based protection communications networks include time-critical traffic, e.g., SV and
GOOSE messaging on the process bus, and other traffic, e.g., manufacturing message specification
(MMS) and engineering access on the station bus.
Ethernet-based substation protection and control systems, such as IEC 61850 process bus networks,
must be fast, deterministic, and reliable. IEC 61850 details communications architectures for
substation automation systems. The station bus network is the communications channel between the
control house equipment and the IEDs at the bay level, as shown in Figure 16. The station bus
network is typically used for exchanging data between SCADA systems and IEDs. Such
communications networks typically carry MMS, GOOSE, synchrophasor, and other IP-based
protocols.
The process bus network is the communications channel between the bay-level IEDs and the merging
units (MUs) in the switchyard. It is designed to exchange time-critical data. This communication
network usually carries SV, GOOSE, and IEEE 1588 Precision Time Protocol (PTP) (Power Utility
Profile, i.e. IEC 61850-9-3) messages.
When a protection and automation application is designed from the top down, engineers first
determine the response time of the various protection and automation applications. They then design
the process bus network such that signals are exchanged between IEDs according to the application
requirements. Each signal can correspond to a logical network path, and unique identifiers such as
the Ethertype and Application ID can be assigned to identify each signal exchange.


Figure 16 - Levels and logical interfaces in substation automation systems


Traditional managed Ethernet switches use Quality of Service (QoS) to assign priorities to different
traffic, and VLANs to segregate traffic and prioritise traffic within a VLAN. An example policy is shown
in Table 6.
Table 6 - Example QoS policy for an IEC 61850 substation

Category Communication Class Priority Queue

Protection & Control GOOSE, SV, PTP High Yes

Time Synchronisation PTP (IEC 61850-9-3) High Yes

SCADA, HMI, Historian MMS Medium No

Engineering Access FTP, Telnet Low No

SDN programmability simplifies the controlled delivery of unicast and multicast packets from the
publishers to only the subscribers expecting to receive them, by combining the physical and
logical flow between source and destination. This programmability gives engineers great flexibility
to design their network using a bottom-up or a top-down approach.
In the bottom-up approach, network engineers design each flow entry, group entry, fast failover
group, and meter entry on each switch for each different Ethernet message. This approach can be
time-consuming but provides the very fine-grained traffic engineering control a system may require.
In the top-down approach, engineers design the protection and automation applications and define the
signals exchanged between IEDs. When designing an IEC 61850-based substation, engineers start by
building the IEC 61850 Substation Configuration Language (SCL) file. An SCL file includes the
communication network segmentation (subnets) and the GOOSE and SV publications and
subscriptions between IEDs. Each GOOSE and SV publication and subscription represents a binary
or an analogue signal flow, respectively. The signals exchanged are clearly defined in the IEC 61850
SCL file. Applications built to interface with the SDN control plane can be developed to parse such a
file and extract the publications and subscriptions between IEDs. These applications can then
automatically build logical network paths between SDN switches, knowing where the IEDs are
connected in the network. Given a known network architecture, some SDN controllers have built-in
mechanisms to allow users to proactively program failover or redundant network paths. Circuit
provisioning in an SDN system can occur at many layers, depending on what the user prefers. This
provisioning can be automated in learning modes, done circuit by circuit with the user selecting start
and end points and letting path provisioning be automated, or performed as direct SDN configuration
of the flow tables in each network device.
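
As a sketch of such an application, the fragment below parses the Communication section of an SCL file to extract GOOSE publication addressing. The namespace is the standard SCL namespace; the input file name is a hypothetical placeholder, and subscription extraction is elided for brevity.

```python
# Sketch: extracting GOOSE publication addressing from an IEC 61850 SCL
# file as a first step towards auto-generating SDN flow entries. The
# input file name is a hypothetical placeholder.
import xml.etree.ElementTree as ET

NS = {"scl": "http://www.iec.ch/61850/2003/SCL"}

def extract_goose_publications(scl_file):
    """Return (iedName, controlBlockName, mac, appid) per GSE block."""
    root = ET.parse(scl_file).getroot()
    publications = []
    comm = root.find("scl:Communication", NS)
    if comm is None:
        return publications
    for cap in comm.iterfind(".//scl:ConnectedAP", NS):
        ied = cap.get("iedName")
        for gse in cap.iterfind("scl:GSE", NS):
            # Address parameters are P elements typed MAC-Address, APPID,
            # VLAN-ID, etc.
            addr = {p.get("type"): (p.text or "").strip()
                    for p in gse.iterfind("scl:Address/scl:P", NS)}
            publications.append((ied, gse.get("cbName"),
                                 addr.get("MAC-Address"),
                                 addr.get("APPID")))
    return publications

for pub in extract_goose_publications("substation.scd"):
    print(pub)
```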
Figure 17 shows an IEC 61850 logical diagram of bus differential and overcurrent protection in a
single bus scheme. Differential protection logical node PDIF receives SV streams from current
transformer logical nodes TCTR2, TCTR4 and TCTR5. Time overcurrent protection logical node
PTOC1 receives an SV stream from current transformer TCTR1, and PTOC2 receives an SV stream
from TCTR3. Protection trip conditioning logical nodes PTRC1 and PTRC2 receive operate signalling
from PTOC1 and PTOC2, respectively. If PTRC and PTOC are not hosted in the same IED, the
operate signal from PTOC should be communicated to PTRC via GOOSE. Lastly, PTRC
communicates trip signals via GOOSE to each circuit breaker, represented by logical node XCBR.

Figure 17 - Example IEC 61850 message exchange for bus differential and feeder overcurrent protection
Figure 18 shows an example network topology of these IEDs and SDN switches.
An application that interfaces with the SDN control plane can be built to generate Tables 7, 8, 9,
and 10 by extracting publisher and subscriber information from the IEC 61850 Substation
Configuration Language (SCL) configuration. The connected ports and SDN switch numbers require
additional identification by the user, depending on the SDN network topology designed.


Figure 18 - Network diagram of the example IEC 61850 system


To fully automate the process, applications can be built to define a unique set of match fields for each
signal to be exchanged, based on the SCL configuration.
In Substation Automation Systems (SAS), GOOSE and SV are typically implemented as layer 2
messages, although layer 3 mappings based on IP multicast exist as well. Unique identifiers for each
signal exchange can be derived from the message source MAC address, destination MAC address,
and application ID. Flow tables can then be generated based on the IEC 61850 application as defined
in the SCL configuration. Redundancy can be preplanned so that the network response to any link or
switch failure is determined ahead of time and tested for safe and reliable network recovery. SDN
technology works well with dual-attached nodes, dual-NIC nodes in failover modes, and any
combination supported in IEDs with process bus and station bus in the different industry-supported
redundancy modes, without the need for MAC table floods to learn or convergence times to recover.
Table 7 and Table 8 list the signals exchanged between IEDs and the ports on the SDN switches to
which the IEDs are connected.
Table 7 - Logical signal exchange via GOOSE

Piece of Information Application GOOSE Publisher GOOSE Subscriber


for COMmunication ID Publisher connected SDN Subscriber connected SDN
(PICOM) and SDN port and SDN port

PTRC3.Tr.general 1003 IED3 SDN2, port 5 MU3 SDN3, port 3

PTRC3.Tr.general 1003 IED3 SDN2, port 5 MU2 SDN3, port 2

PTRC3.Tr.general 1003 IED3 SDN2, port 5 MU1 SDN3, port 1

PTRC2.Tr.general 1002 IED2 SDN1, port 2 MU2 SDN3, port 2

PTRC1.Tr.general 1001 IED1 SDN1, port 1 MU1 SDN3, port 1

Table 8 - Logical signal exchange via SV

Piece of Information for SV Publisher SV Subscriber


COMmunication (PICOM) Publisher connected SDN and Subscriber connected SDN and
SDN port SDN port

TCTR1 MU1 SDN3, port 1 IED1 SDN1, port 1

TCTR2 MU1 SDN3, port 1 IED3 SDN3, port 1

TCTR3 MU2 SDN3, port 2 IED2 SDN1, port 2

TCTR4 MU2 SDN3, port 2 IED3 SDN3, port 1

TCTR5 MU3 SDN3, port 3 IED3 SDN3, port 1

Table 9 and Table 10 list the flow entries that can be created via IEC 61850 applications.
Table 9 - SDN flow list for GOOSE messaging

Ethernet Type GOOSE GOOSE Source MAC GOOSE Destination


(Hex) Application ID Publisher Subscriber MAC

0x88b8 1003 IED3 xx.xx.xx.xx.xx.13 MU3 xx.xx.xx.xx.xx.23

0x88b8 1003 IED3 xx.xx.xx.xx.xx.13 MU2 xx.xx.xx.xx.xx.22

0x88b8 1003 IED3 xx.xx.xx.xx.xx.13 MU1 xx.xx.xx.xx.xx.21

0x88b8 1002 IED2 xx.xx.xx.xx.xx.12 MU2 xx.xx.xx.xx.xx.22

0x88b8 1001 IED1 xx.xx.xx.xx.xx.11 MU1 xx.xx.xx.xx.xx.21

Table 10 - SDN flow list for SV messaging

Ethernet Type SV Application ID SV Source MAC SV Destination


(Hex) (hex) Publisher Subscriber MAC

0x88ba 0x4003 MU1 xx.xx.xx.xx.xx.13 IED1 xx.xx.xx.xx.xx.23

0x88ba 0x4003 MU1 xx.xx.xx.xx.xx.13 IED3 xx.xx.xx.xx.xx.22

0x88ba 0x4003 MU2 xx.xx.xx.xx.xx.13 IED2 xx.xx.xx.xx.xx.21

0x88ba 0x4002 MU2 xx.xx.xx.xx.xx.12 IED3 xx.xx.xx.xx.xx.22

0x88ba 0x4001 MU3 xx.xx.xx.xx.xx.11 IED3 xx.xx.xx.xx.xx.21
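
The flow lists above lend themselves to programmatic generation. Below is a minimal, vendor-neutral sketch that turns logical GOOSE signal records (mirroring Table 9) into flow entries. Note that the GOOSE APPID resides in the Ethernet payload rather than in a standard OpenFlow match field, so the match here uses only the Ethertype and MAC addresses, with the APPID kept as metadata; all values are illustrative placeholders.

```python
# Sketch: turning the logical GOOSE signal list (cf. Table 9) into SDN
# flow entries. All values are illustrative placeholders from the tables
# above; "xx" octets stand in for real, site-specific MAC addresses.
GOOSE_ETHERTYPE = 0x88B8

signals = [
    # (appid, publisher_mac, subscriber_mac, egress_switch, egress_port)
    (0x1003, "xx:xx:xx:xx:xx:13", "xx:xx:xx:xx:xx:23", "SDN3", 3),
    (0x1003, "xx:xx:xx:xx:xx:13", "xx:xx:xx:xx:xx:22", "SDN3", 2),
    (0x1001, "xx:xx:xx:xx:xx:11", "xx:xx:xx:xx:xx:21", "SDN3", 1),
]

def build_flow(appid, src_mac, dst_mac, switch, port):
    """Build one flow entry as a vendor-neutral dictionary."""
    return {
        "switch": switch,
        "priority": 100,
        "match": {"eth_type": GOOSE_ETHERTYPE,
                  "eth_src": src_mac,
                  "eth_dst": dst_mac},
        "actions": [{"output": port}],
        "metadata": {"goose_appid": appid},
    }

flows = [build_flow(*s) for s in signals]
for f in flows:
    print(f)
```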

Given the granularity in logical network path programming, engineers can proactively orchestrate a
fleet of SDN switches to provide redundant network paths or failover paths for GOOSE and SV
messaging on the IEC 61850 process bus.
The OpenFlow switch specification optionally supports the fast failover group type. Fast failover is
proactively programmed, and the SDN switch executes the failover based on the liveness of monitored
ports or groups.
In the example network shown in Figure 18, GOOSE and SV high-priority traffic is given a dedicated
communication ring, following path a-b-c-d-e-f. IP traffic, including MMS and engineering access, is
assigned to the outer ring, following path A-B-C-D-E-F.
If SDN2 port 2 detects a loss of liveness, e.g. a broken fibre connection, SDN2's preprogrammed fast
failover group routes the egress GOOSE message (APPID 0x1003) out of port 1 instead of port 2. The
GOOSE message travels along path D-E to SDN3, ingresses on port 5 of SDN3, and is passed on to
ports 1, 2, and 3 of SDN3. MU1, MU2 and MU3 may see no message loss depending on the fast
failover speed; with proactive SDN flow programming, network heal times of less than 50 µs are
achievable.


If SDN3 port 4 detects a loss of liveness, SV messages (APPID 0x4001, 0x4002 and 0x4003) cannot
be delivered via path e-d to SDN2 and egress to IED3. Alternate fast failover paths can be proactively
programmed so that SV traffic received on ports 1, 2 and 3 of SDN3 is forwarded to port 7 and travels
along path f-a-b-c to SDN2.
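
The following is a hedged sketch of how such a fast failover group could be proactively programmed, again using Ryu and OpenFlow 1.3 (the same controller assumed in the micro-segmentation sketch in section 4.4). The port numbers, group id and multicast MAC address are illustrative, and the function is intended to be called from a switch-features handler.

```python
# Sketch (Ryu, OpenFlow 1.3): proactively programming a fast failover
# group on SDN2 so that GOOSE traffic normally egressing port 2 fails over
# to port 1 when port 2 loses liveness, mirroring the scenario above.
def install_goose_failover(dp):
    ofp, parser = dp.ofproto, dp.ofproto_parser

    # Fast failover group: the first live bucket is used, so port 2 is the
    # primary path and port 1 the pre-programmed alternative.
    buckets = [
        parser.OFPBucket(watch_port=2, watch_group=ofp.OFPG_ANY,
                         actions=[parser.OFPActionOutput(2)]),
        parser.OFPBucket(watch_port=1, watch_group=ofp.OFPG_ANY,
                         actions=[parser.OFPActionOutput(1)]),
    ]
    dp.send_msg(parser.OFPGroupMod(datapath=dp, command=ofp.OFPGC_ADD,
                                   type_=ofp.OFPGT_FF, group_id=1003,
                                   buckets=buckets))

    # Steer the GOOSE flow (Ethertype 0x88b8, illustrative destination
    # multicast MAC) into the failover group instead of a fixed port.
    match = parser.OFPMatch(eth_type=0x88B8, eth_dst="01:0c:cd:01:00:03")
    inst = [parser.OFPInstructionActions(
        ofp.OFPIT_APPLY_ACTIONS, [parser.OFPActionGroup(1003)])]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=200,
                                  match=match, instructions=inst))
```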
SDN network programmability makes it possible to build IEC 61850 process-bus-based protection
and automation applications with the network engineering automated, while safely controlling how
multicast packets are delivered to only the intended subscribers.

4.7 SD-WAN
Software Defined Networking (SDN) enables EPUs to implement their WAN-based use cases in a
programmable, flexible way, and to connect their applications securely. The Technical Report IEC
61850-90-12 (Wide Area Network Engineering Guidelines) provides a comprehensive overview on
such use cases and their specific requirements. The subsequent bullets list only a few:
• Tele-Protection
• Telecontrol
• Condition Monitoring
• Substation to Control Center
• Wide Area Monitoring System (WAMS)
• Wide area monitoring, protection and control (WAMPAC)
Technically, Software Defined Wide Area Network (SD-WAN) provides a software overlay that runs
over standard network transport technologies, including MPLS and broadband, to connect applications
typically operated in a substation or control center environment. The overlay entity allows
programmability and assurance regarding essential requirements such as latency and jitter.
Furthermore, high availability can be assured by providing redundancy and seamless failover handling
to meet the reliability and dependability requirements of core applications such as tele-protection.
Application requirements provided in a standardized notation and format, as defined by IEC 61850
(SCL definitions), to a WAN controller entity inherently support programmability and an
application-centric architecture. In principle, automated, end-to-end provisioning of connectivity that
addresses the requirements of the applications is feasible.
In addition, SD-WAN provides the capabilities for network visibility with high granularity (end-to-end
network path), which is a prerequisite for identifying any kind of issue in the network with
repercussions on the applications operated by the EPUs. Integrated telemetry, data collection and
event logging allow detailed troubleshooting and root-cause analysis. This includes system security
monitoring and awareness in order to provide capabilities such as threat intelligence and intrusion
detection. In terms of security, SD-WAN enables end-to-end security with highly granular
segmentation in order to provide a zones-and-conduits based approach, as typically required for
control system applications.
For future architectures, SD-WAN provides the features and capabilities to connect the use cases and
related scenarios described earlier in this chapter over the WAN. This especially pertains to
Centralized Substation Protection, Automation & Control Systems (CPC Systems) based on
virtualization technologies, and SDN for Substation Automation Systems (SAS). The latter comprises
the use of Time-Sensitive Networking (TSN), where the control plane is separated from the data plane
and a controller-based approach provides programmability and external control. Use cases,
requirements and migration scenarios regarding TSN in power utility automation are part of the
Technical Report IEC 61850-90-13 (Deterministic Networking Technologies).


5 Survey results
5.1 Overview
A survey was conducted in 2018 with CIGRE member countries as respondents.
The 23 survey responses received came from the countries shown in Figure 19.

Figure 19 - Distribution of Respondent Countries

5.2 Relevance of SDN and NFV to EPUs


Most respondents indicated that SDN and NFV are relevant or somewhat relevant to their
organisation, as shown in Figure 20.

Figure 20 - Relevance of SDN / NFV


5.3 Benefits of SDN and NFV to EPUs


From the responses, the main benefit of SDN and NFV is increased efficiency through automation, as
shown in Figure 21. This is understandable, as increasing demand driven by new applications in the
Smart Grid and DER places additional strain on operations and drives frequent changes to the
network.
The potential of automation through SDN provides the benefit of an agile network which can be
reconfigured to adapt to new requirements.

Figure 21 - Benefits of SDN and NFV

5.4 Potential Use Cases to EPUs


Figure 22 shows the potential use cases for EPUs, as indicated by our survey respondents.
Consistent with the greatest benefit of SDN, the respondents felt that automation has the greatest
potential for EPUs, followed by network optimisation, which includes areas such as traffic engineering
of paths to make use of underutilised links.
These are followed by security and compliance, where a network-wide security policy can be applied
consistently through a centralised controller.
Surprisingly, the respondents thought that the use cases of extending networks internally and
extending networks to the cloud have low potential. This is in contrast with the non-EPU Enterprise or
Corporate environment, where this area is viewed as having high potential, with applications being
extended to the cloud (public clouds such as Amazon, Azure, IBM, etc., or private clouds in
geographically diverse locations).
The IEC 61850 use case is currently viewed as a niche use case. To gain traction, multiple vendors
involved in the IEC 61850 environment (LAN switches, engineering software, etc.) will need to support
this methodology, including adding SDN capabilities to their solutions.


Figure 22 - Potential Use Cases

5.5 Timeframe in Implementation


As shown in Figure 23, it is clear that SDN and NFV are a nascent area for EPUs, at least for those
who responded to the survey.
Although most EPUs who responded thought that there are clear benefits and potential use cases, the
implementation timeframe is either uncertain or long. Although some respondents (40%) thought that
their organisation would implement some form of SDN / NFV solution within the next 12 months,
almost half of the respondents have no plans to deploy SDN / NFV.

Figure 23 - Implementation timeframe


6 Case Study: Reduction Effects of Network


Operation Process Using Network Virtualization
Techniques in Japanese Electric Power Company
6.1 Overview
In Japanese electric power companies, there are efforts to make workflows more efficient, reduce
maintenance and operational costs, and create secure and seamless access environments. As one of
the methods to realize workflow and cost efficiency, network and information system virtualization is
being examined and implemented. SDN (OpenDaylight, 2021) is one of the major network
virtualization approaches. SDN has already been implemented in many Japanese companies and
brings benefits such as reduced maintenance and operation costs (Katsuura, K. et. al, 2014), and the
same kinds of benefits are expected in electric power companies. We therefore examine the benefits
of SDN for network operations in Japanese electric power companies.

6.2 SDN system structure example on electric power company’s control


network

Figure 24 - Concept of SDN’s Virtual Tenant Network (VTN)


The setup of new communication lines is an electric power company workflow that occurs with high
frequency (around 100 lines/day), driven by electrical equipment maintenance work. The Virtual
Tenant Network (VTN) (OpenDaylight, 2021) function of an OpenDaylight SDN controller realizes
automatic setup of new networks, and VTN is expected to ease this new-line setup workflow in an
electric power company. We suppose that SDN with VTN is implemented on an electric power
company's network, and we show this SDN network structure in Figure 24.
VTN provides virtualized conventional network functions, such as L2/L3 switches, gateways and so
on. Once the conventional L2/L3 network is designed on VTN, it is automatically mapped onto the
underlying physical network and then configured on the individual switches leveraging the SDN
control protocol. The electric power company's control network has many network devices. Therefore,
the number of network devices which would otherwise need to be configured manually is large, and
the benefit of automatic configuration is correspondingly large.
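
As an illustration of driving VTN programmatically, the sketch below follows the pattern of the OpenDaylight VTN Manager RESTCONF API. The controller URL, credentials and exact RPC payload are assumptions for illustration and should be verified against the documentation of the deployed OpenDaylight release.

```python
# Sketch: creating a Virtual Tenant Network via the OpenDaylight VTN
# Manager RESTCONF API using the "requests" library. The controller URL,
# credentials and tenant name are hypothetical placeholders; the endpoint
# and payload shape should be checked against the deployed release.
import requests

ODL = "http://odl-controller:8181"
AUTH = ("admin", "admin")  # placeholder credentials

payload = {"input": {"tenant-name": "substation-a-lines",
                     "update-mode": "CREATE"}}

resp = requests.post(f"{ODL}/restconf/operations/vtn:update-vtn",
                     json=payload, auth=AUTH,
                     headers={"Content-Type": "application/json"})
resp.raise_for_status()
print("VTN request accepted:", resp.status_code)
```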


6.3 Benefits of implementing SDN for the new-line setup workflow
We focus on the workflows for the setup of new communication lines in electric power companies to
make the benefits of SDN clear.
These workflows occur very frequently, so reducing their effort is very important. Two typical cases of
new-line setup workflows follow.

Figure 25 - Workflows to create a new network

The workflows are divided into the following 3 steps.


• Step 1: Planning.
• Step 2: Making specifications and preparations
• Step 3: Network device setup and execution on site

We consider the following two cases in the workflow:


1. To create a new network adding new physical devices
2. To create a new network on existing physical devices
We show the workflows (Step 1/2/3) and operation items for each case in Figure 25.

Step 1: Planning.
According to a demand for new communication lines, network planners start to examine the plan for
the new communication lines and the needed network devices.

[case 1] Network planners confirm the existing network devices, examine new network devices, and
decide the specifications and routes for each new communication line. The conventional workflow and
the new workflow on SDN are the same in this step.


[case 2] In the conventional workflow, network planners confirm the existing network devices and the
structure of the existing communication lines to decide the specifications and routes for each new
communication line. In the new workflow on SDN, demanded communication lines or networks are
automatically mapped onto the existing network. Therefore, network planners need not confirm the
structure of the existing network.

Step 2: Deciding on specifications and preparations.

If the planned specifications and routes for the new communication lines are approved, network
planners write detailed network device specifications, including configuration. When the specifications
are fixed, network planners order the network devices and arrange the workers for the network device
setup on site.

[case 1] In the conventional workflow, network planners write not only the performance of the new
network device but also its configuration in the specification, such as the VLAN or MPLS configuration
and so on. When the detailed specifications are fixed, the new network devices are ordered and
added to the device management database. Also, the setup schedule is coordinated and registered in
the schedule management system. In the new workflow on SDN, the main configuration items of an
OpenFlow switch are only enabling OpenFlow on the ports and the controller IP address. This
configuration is easier than the VLAN or MPLS one. Network planners need to create the virtual
networks on VTN. An SDN controller has a database of the whole network topology and the devices
on the network; thus, an external device management database is not needed.
[case 2] In the conventional workflow, network planners write the network device configurations in the
specification, and it is necessary to arrange workers for the network device setup in a machine room.
In the new workflow on SDN, arranging workers is not needed, because an OpenFlow-capable switch
need not be stopped to modify its configuration. The network planners need to operate the virtual
networks on VTN, and coordination of the operation schedule is required.
Step 3: Device setup.
According to the specifications, the network devices are set up and confirmed to be running their
network functions correctly in this step.
[case 1] In both workflows, the workers set up the hardware devices and enter the configuration in a
machine room of the electric power office. In the conventional workflow, workers enter the
configuration for VLAN or MPLS and confirm that the network is functioning correctly. In the new
workflow on SDN, workers enter only the OpenFlow port-enable configuration and the controller IP
address, and confirm the connection between the switch and the controller. This process is easier
than the conventional one. Once a switch can connect to the controller, the VTN function of the
controller automatically sets up the switches.
[case 2] In the conventional workflow, a device setup in a machine room of the electric power office is
needed to change the main route of the existing communication lines to a backup one, because the
workers must stop the communication lines connected to the network device while changing its
configuration and restarting it, and they need to check that all network devices are operating normally.
In the new workflow on SDN, this manual process is omitted and the SDN controller automatically
configures the underlying switches.
Finally, we summarise the workflows to create a new network and show the reduced and omitted
operation items in Figure 25. Many operation items performed by workers are reduced or omitted by
the SDN techniques. As a result, network operation cost and time are reduced when SDN is
implemented on the communication networks of an electric power company.

6.4 Future Works


An electric power company's control network has three key requirements: delay time, grace time for
disruption, and reliability. In conventional operation, the line bandwidth is over-provisioned to avoid
network congestion and increased delay time. To satisfy the grace time for disruption, communication
lines have backups. The reliability of communication lines is manually calculated and checked during
planning. In the case of SDN, these requirements can be satisfied by the same methods. However, we
are studying a new method in which the requirements are automatically satisfied by new functions of
an SDN controller when the demanded communication lines are built on VTN.


7 Current Market Landscape


Table 11 below contains some of the SDN solutions currently available (as of 2021). The list is
non-exhaustive and is provided as-is: it is a small subset of the SDN controllers in the market,
intended only to provide an overview of a sampling of actual SDN implementations. The appearance
of a product in the list, or the omission of any product from it, does not imply our recommendation or
any intended exclusion, and the order of the list is arbitrary and does not imply any preference for, or
superiority of, any solution. These descriptions are high level in nature; readers should undertake their
own detailed investigations into these platforms and solutions if further technical details are required.
Table 11 - A sample of SDN solutions in the market

No. SDN Solution Description

1. Arista Big Switch Big Switch Networks SDN Controller is a commercial controller based
Networks on the open-source Floodlight OpenFlow-based controller.
The SDN controller, Big Cloud Fabric, has since been renamed
Converged Cloud Fabric (CCF).
Along with the Controller, Arista also provides a switch operating
system (Switch Light OS) which runs on white-box switches.

2. Lumina Networks / The Lumina / Brocade SDN Controller is a commercial controller
Brocade SDN Controller based on OpenDaylight. In 2020, Lumina Networks ceased
operations.

3. Cisco ACI and APIC The Cisco ACI and APIC solution is a commercial based solution and
integrates with Cisco Nexus switches, routers and firewalls. Plugins
exist to interoperate with third party SDN switches and NFVs. The
solution uses Cisco's OpFlex protocol as the main Southbound
protocol.

4. Juniper Contrail The Juniper Contrail solution is a commercial based solution and
integrates with Juniper switches, routers and firewalls.

5. Ericsson Cloud SDN and Ericsson Cloud SDN is a commercial solution based on the
Cloud Execution OpenDaylight controller.
Environment
Part of the Cloud SDN is the Cloud Execution Environment, which is
an NFV hosting solution.

6. NEC ProgrammableFlow The NEC ProgrammableFlow is an OpenFlow-based SDN solution


with integration into other solutions including VoIP and Unified
Communications (UC).

7. HP Virtual Application The HP VAN SDN Controller is an OpenFlow-based SDN controller


Network (VAN) and integrates with HP switches or OpenFlow-compliant switches
from other vendors.

8. Huawei Agile Huawei's Agile Controller consists of Campus and Datacenter
variants with different target use cases for SDN. It integrates with
its own and other vendors' devices and NFVs, using OpenFlow as
one of the Southbound protocols.

9. VMware NSX VMware NSX integrates with the vendor's vSphere virtualisation
platform to enable hypervisors to act as virtual SDN switches by using
agent software. It integrates with its own hypervisor product (ESXi),
other vendors' hypervisors, and can also integrate with physical SDN
switches.


The NFV market is mature, and implementations are too numerous to name. Almost every vendor has
a virtualised form of the network functions traditionally represented by physical devices - for example,
virtual access routers, virtual MPLS routers, virtual firewalls and next generation firewalls, etc. With 5G
and cloud computing, the need for NFV is even more pronounced, given that network functions need
to be flexibly provisioned without the constraints of conventional physical forms.


8 Future Work
There is significant potential for 5G to improve operations for power utilities.
5G uses the concepts of SDN and NFV extensively. We propose that future studies include how SDN
and NFV play their roles in meeting the requirements of 5G, covering the architecture,
implementations, cyber security issues, and other challenges. A specific area that may be worth
venturing into is private 5G networks provisioned by the power utility to support its mission-critical
applications, such as protection and SCADA, and how a private utility 5G solution with SDN and NFV
components can successfully interoperate with, and be integrated into, the utility's existing
telecommunication networks and information systems infrastructure.
Another area of interest is the concept of the intelligent edge of the power utility. Due to the demands
of renewables and DER, with the significant increase in telemetry and the flow of information from
various points in the power network, the intelligent substation and intelligent remote sites and assets
may not stay conceptual, as utilities increasingly require methods of providing services and intelligent
processing capabilities at various points of the distributed network - for a transmission utility, these
may be the transmission-level substations and communication sites; for a distribution utility, these may
be street-level cabinets or power poles distributed throughout a vast geographical area. We believe
SDN and NFV, along with virtualisation, would provide the technology required to meet the demands
of the intelligent power utility edge.
The use case of extending the utility's OT networks into the cloud to form a hybrid environment may also be an area of interest, given the ever-converging IT and OT environments and systems within the utility and the requirement for utilities to scale services in an agile manner.
Finally, deterministic network technologies based on Time-Sensitive Networking (TSN) and DetNet provide the capabilities to meet essential requirements for protection, automation and control within a substation environment and over the WAN (for example, inter-substation communication). TSN is a set of standards specified by the IEEE 802.1 working group; DetNet is being developed by the IETF Deterministic Networking (DetNet) Working Group. A centralised controller concept allows flexibility, programmability, and automation, and it would be of interest to investigate SDN's applicability to TSN in the context of power utility applications; the first-order timing sketch following this paragraph illustrates the kind of latency budget such deterministic designs must guarantee.
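To make the determinism requirement tangible, the hedged sketch below computes a first-order, queue-free latency estimate for a maximum-size Ethernet frame crossing several store-and-forward switches. The frame size, link rate, hop count and per-switch processing delay are assumed figures for illustration; the queuing delay excluded here is precisely the term that TSN scheduling mechanisms bound.

# Hedged sketch: first-order, queue-free latency estimate for a
# maximum-size Ethernet frame crossing N store-and-forward switches.
# Frame size, link rate, hop count and per-switch processing delay are
# assumed figures; the queuing term omitted here is what TSN/DetNet
# scheduling mechanisms are designed to bound.

FRAME_BITS = 1522 * 8            # maximum-size VLAN-tagged Ethernet frame
LINK_RATE_BPS = 1_000_000_000    # assumed 1 Gbit/s links throughout
HOPS = 4                         # assumed switches between end stations
SWITCH_DELAY_S = 5e-6            # assumed per-switch processing delay

def queue_free_latency_s() -> float:
    serialisation = FRAME_BITS / LINK_RATE_BPS  # per-link transmission time
    # The frame is serialised onto HOPS + 1 links (source link plus one
    # egress link per switch) and incurs processing delay at each switch.
    return (HOPS + 1) * serialisation + HOPS * SWITCH_DELAY_S

if __name__ == "__main__":
    print(f"Queue-free end-to-end latency: {queue_free_latency_s() * 1e6:.1f} us")

Under these assumptions the budget is roughly 81 microseconds; protection-class traffic such as GOOSE or SV must meet figures of this order even under worst-case queuing, which is why bounded, scheduled forwarding matters.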
Of course, in evaluating whether to formally propose these future studies, the timing, participation interest, and availability of subject matter experts are important considerations.


APPENDIX A. Definitions, abbreviations and symbols


A.1. General Terms

Table A.1 - Definition of general terms used in this TB


Acronym Phrase Definition
TB Technical Brochure A publication produced by CIGRE presenting "state-of-the-art" guidelines and recommendations developed by an SC WG.
SC Study Committee One of the 16 technical domain groups of
CIGRE.
https://www.cigre.org/GB/knowledge-programme/our-16-study-committees-and-domains-of-work
TG Thematic Group A supervisory group relative to a sub-topic
within the scope of a particular SC.
WG Working Group A select group tasked with developing a TB
relative to a defined TOR. e.g. WG B3.40 is a
WG within the SC B3 domain.
JWG Joint Working Group A WG comprising collaboration between two or
more SCs e.g. JWG C3/B3.20 involves both SC
C3 and B3 and is led by SC C3.
TF Task Force Generally a sub-group of a WG tasked with investigating a specific aspect within the overall Terms Of Reference, or a sub-group of an SC with a shorter-term task that may ultimately lead to other CIGRE work.
NC National Committee National entities responsible for local CIGRE
activities and Membership.
TOR Terms Of Reference The scope of work defined for the WG as
approved by the TC.
TC Technical Council The group of SC Chairmen, chaired by the Chairman of the TC.


A.2. Specific terms used in this Technical Brochure


Table A.2 - Definition of technical terms used in this TB

Acronym Definition

4G Fourth Generation Wireless

5G Fifth Generation Wireless

5G-PPP 5G Infrastructure Public Private Partnership

ACL Access Control List

API Application Programming Interface

ARP Address Resolution Protocol

ASIC Application Specific Integrated Circuit

BGP Border Gateway Protocol

CaaS Cryptography as a Service

CAM Content Addressable Memory

COTS Commercial Off the Shelf

DARPA Defense Advanced Research Projects Agency

DER Distributed Energy Resources

DNP3 Distributed Network Protocol 3

EPU Electric Power Utility

EVPN Ethernet VPN

GFP Generic Framing Procedure

GOOSE Generic Object Oriented Substation Event

HMI Human Machine Interface

HTTP Hypertext Transfer Protocol

HTTPS HTTP Secure

IDS Intrusion Detection System

IED Intelligent Electronic Device

IP Internet Protocol

IPS Intrusion Prevention System

IPSEC IP Security

ISP Internet Service Provider

ITU-T International Telecommunication Union - Telecommunication Standardization Sector

L2VPN Layer 2 VPN

L3VPN Layer 3 VPN

LAN Local Area Network

LISP Locator Identifier Separation Protocol

LTE Long Term Evolution

MANO Management and Orchestration

MMS Manufacturing Message Specification

mMTC Massive Machine Type Communication

MPLS Multi-Protocol Label Switching

MU Merging Unit

NETCONF Network Configuration Protocol

NFV Network Function Virtualisation

NSH Network Service Header

NVGRE Network Virtualisation using Generic Routing Encapsulation

ONF Open Networking Foundation

OS Operating System

OSPF Open Shortest Path First

OT Operational Technology

OVSDB Open vSwitch Database

PCEP Path Computation Element Protocol

PTP Precision Time Protocol

PVLAN Private VLAN

QoS Quality of Service

RAN Radio Access Network

REST Representational State Transfer

RESTCONF REST Configuration

RFC Request for Comments

RTU Remote Terminal Unit

SAS Substation Automation System

SCADA Supervisory Control and Data Acquisition

SCL Substation Configuration Language

SDH Synchronous Digital Hierarchy

SDN Software Defined Networking

SD-WAN Software Defined WAN

SNMP Simple Network Management Protocol

SONET Synchronous Optical Network

SR-IOV Single Root Input Output Virtualisation

SSH Secure Shell

SSL Secure Sockets Layer

STP Spanning Tree Protocol

SV Sampled Values

TCP Transmission Control Protocol

TDM Time Division Multiplexing

TLS Transport Layer Security

TSN Time Sensitive Networking

URLLC Ultra-Reliable and Low-Latency Communications

VC Virtual Concatenation

VLAN Virtual LAN

VM Virtual Machine

VoIP Voice over IP

VPN Virtual Private Network

VTN Virtual Tenant Network

VXLAN Virtual Extensible LAN

WAMPAC Wide Area Monitoring, Protection and Control

WAMS Wide Area Monitoring System

WAN Wide Area Network

YANG Yet Another Next Generation


A.3. Organisation Acronyms

Table A.3 - Organisation acronyms

Acronym Full name Web link

CIGRE International Council on Large Electric Systems (previously a French acronym, pronounced in English as "sea-grey") https://www.cigre.org/

IEC International Electrotechnical Commission https://www.iec.ch/

IEEE Institute of Electrical and Electronics Engineers https://www.ieee.org/

NERC North American Electric Reliability Corporation https://www.nerc.com/

IETF Internet Engineering Task Force https://www.ietf.org

ETSI European Telecommunications Standards Institute https://etsi.org

ITU International Telecommunication Union https://itu.int


APPENDIX B. Links and references


B.1. CIGRE Papers and Contributions

Table B.1 – CIGRE Papers and Contributions

Year | SC | Reference # | Title | Author(s) | Web link

2018 | D2 | D2-201 | Substation Virtualisation: An Architecture for Information Technology and Operational Technology Convergence for Resilience, Security and Efficiency | Tan, V. | https://e-cigre.org/publication/SESSION2018_D2-201

2020 | D2 | D2-308 | Telecommunications Network Modernisation in Utilities: Challenges of Migrating from Time Domain Multiplexing (TDM) Technology to Packet Switched Network (PSN) | Tuazon, P., Withanage, S., Tan, V. | https://e-cigre.org/publication/SESSION2020_D2-308

B.2. Other References

Table B.2 – Other References

Journal or Publication | Year | Title | Author(s) | Web link

Sensors (Basel) | 2021 | When Digital Twin Meets Network Softwarization in the Industrial IoT: Real-Time Requirements Case Study | Kherbache, M., Maimour, M., Rondeau, E. | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8704305/

IEEE Communications Surveys & Tutorials | 2020 | A Survey on Controller Placement in SDN | Das, T., Sridharan, V., Gurusamy, M. | https://ieeexplore.ieee.org/document/8802245

International Conference on Advanced Information Networking and Applications | 2014 | REST API Design Patterns for SDN Northbound API | Zhou, W., Li, L., Luo, M., Chou, W. | https://ieeexplore.ieee.org/abstract/document/6844664

Optical Fiber Communication Conference | 2017 | YANG, NETCONF, RESTCONF - What is This All About and How it is Used for Multi-Layer Networks | Jethanandani, M. | https://ieeexplore.ieee.org/document/7937308

Computer Communication Review | 2014 | The Road to SDN: An Intellectual History of Programmable Networks | Feamster, N., Rexford, J., Zegura, E. | http://www.sigcomm.org/node/3488

SDN and OpenFlow World Congress | 2012 | Network Functions Virtualisation - An Introduction, Benefits, Enablers, Challenges & Call for Action | Chiosi, M., et al. | https://portal.etsi.org/nfv/nfv_white_paper.pdf

ETSI Group Report | 2021 | Network Functions Virtualisation (NFV); Use Cases | ETSI | https://www.etsi.org/deliver/etsi_gr/NFV/001_099/001/01.03.01_60/gr_NFV001v010301p.pdf

5GPPP Architecture Working Group | 2021 | View on 5G Architecture | 5GPPP | https://5g-ppp.eu/wp-content/uploads/2021/11/Architecture-WP-V4.0-final.pdf

Ericsson Technology Review | 2018 | Intelligent Transport in 5G | Dahlfort, S., De Gregorio, A., Fiaschi, G., Khan, S., Rosenberg, J., Thyni, T. | https://www.ericsson.com/49d133/assets/local/reports-papers/ericsson-technology-review/docs/2018/intelligent-transport-in-5g.pdf

Power and Energy Automation Conference | 2019 | A Practical Guide to Designing and Deploying OT SDN Networks | Meine, R. | https://selinc.com/api/download/125640/

IDC FutureScape | 2020 | IDC FutureScape: Worldwide Utilities 2021 Predictions | Bigliani, R., Segalotto, J., Gallotti, G., Villali, J., Verma, J., Skalidis, P. | https://www.idc.com/getdoc.jsp?containerId=US45816020

OpenDaylight | Viewed 2021 | OpenDaylight, Linux Foundation Collaborative Project | OpenDaylight contributors | https://www.opendaylight.org

NEC Technical Journal, Vol. 8, No. 2 | 2014 | IaaS Automated Operations Management Solutions That Improve Virtual Environment Efficiency | Katsuura, K., Miyauchi, M., Numazaki, T., Kurogouchi, Y., Satoh, Y., Koseki, T. | https://www.nec.com/en/global/techrep/journal/g13/n02/pdf/130207.pdf

OpenDaylight | Viewed 2021 | OpenDaylight Virtual Tenant Network | OpenDaylight contributors | https://docs.opendaylight.org/en/stable-fluorine/user-guide/virtual-tenant-network-(vtn).html

OpenDaylight | Viewed 2021 | OpenDaylight Controller (Stable Magnesium) | OpenDaylight contributors | https://docs.opendaylight.org/projects/controller/en/stable-magnesium/dev-guide.html

IETF | 2011 | RFC 6373 - MPLS Transport Profile (MPLS-TP) Control Plane Framework | Andersson, L., Berger, L., Fang, L., Bitar, N., Gray, E. | https://datatracker.ietf.org/doc/html/rfc6373#page-9

Open Networking Foundation | 2015 | OpenFlow Switch Specification 1.5.1 | ONF | https://opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.5.1.pdf
