Campus Network Architectures and Technologies (Data Communication Series) 1st Edition
Architectures
and Technologies
Data Communication Series
Ningguo Shen
Bin Yu
Mingxiang Huang
Hailin Xu
First edition published 2021
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
The right of Ningguo Shen, Bin Yu, Mingxiang Huang and Hailin Xu to be identified as authors of this work has
been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has not
been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or
utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying, microfilming, and recording, or in any information storage or retrieval system, without written
permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or con-
tact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For
works that are not available on CCC please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for
identification and explanation without intent to infringe.
Typeset in Minion
by codeMantra
Contents
Summary, xv
Introduction, xvii
Acknowledgments, xxv
Authors, xxvii
Huqun Li, Ninggang An, Fan Yang, Zunying Qin, Zhe Zhang, Yonggang
Cheng, Feilong Wu, Xiaomang Zhu, Chen Liu, Xin Zhang, and Yuetang Wei
While the writers and reviewers of this book have many years of
experience in ICT and have made every effort to ensure accuracy, it may
be possible that minor errors have been included due to time limitations.
We would like to express our heartfelt gratitude to the readers for their
unremitting efforts in reviewing this book.
Authors
Getting to Know
a Campus Network
1. Network scale
Campus networks can be classified into three types: small, midsize,
and large campus networks, each differing in the number of
terminal users or Network Elements (NEs). This is described in
Table 1.1. Sometimes, the small campus network and midsize campus
network are collectively called small- and medium-sized campus
networks.
A large campus network generally has complex requirements and
structures, resulting in a heavy operations and maintenance (O&M)
workload. To handle this, a full-time professional O&M team takes
charge of end-to-end IT management, ranging from campus network
planning, construction, and O&M to troubleshooting. This
team also builds comprehensive O&M platforms to facilitate efficient
O&M. In contrast to large campus networks, small/midsize cam-
pus networks are budget-constrained and usually have no full-time
O&M professionals or dedicated O&M platforms. Typically, only
one part-time employee is responsible for network O&M.
2. Service targets
If we look at campus networks from the perspective of service
targets, we will notice that some campus networks are closed and
restrictive, only allowing internal users, while others are open to
TABLE 1.1 Campus Network Scale Measured by the Number of Terminal Users or NEs

Campus Network Category    Terminal Users    NEs
Small campus network       <200              <25
Midsize campus network     200–2000          25–100
Large campus network       >2000             >100
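The thresholds in Table 1.1 can be expressed as a small classification routine. The sketch below is illustrative only; the function name and the tie-breaking rule are ours, since the book does not state how to resolve a conflict between the two indicators (we take the larger category).

```python
def classify_campus_network(terminal_users: int, nes: int) -> str:
    """Classify a campus network by scale, per Table 1.1.

    When the two indicators disagree, we take the larger category
    (an assumption; the book does not specify a tie-breaking rule).
    """
    def by_users(n: int) -> str:
        if n < 200:
            return "small"
        if n <= 2000:
            return "midsize"
        return "large"

    def by_nes(n: int) -> str:
        if n < 25:
            return "small"
        if n <= 100:
            return "midsize"
        return "large"

    order = ["small", "midsize", "large"]
    return max(by_users(terminal_users), by_nes(nes), key=order.index)

print(classify_campus_network(150, 10))   # small
print(classify_campus_network(500, 30))   # midsize
print(classify_campus_network(5000, 50))  # large
```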
4. Access modes
Campus networks have two access modes: wired and wireless.
Most of today’s campus networks are a hybrid of wired and wireless.
Wireless is free from restrictions on port locations and cables, and
therefore can be flexibly deployed and utilized.
A traditional campus network is a wired campus network. From
the perspective of users, each device on the wired campus net-
work is connected to an Ethernet port in the wall or on the desk-
top through an Ethernet cable. These connections typically do not
affect each other. As such, wired campus network architecture is
usually structured and layered, with clear logic, simple manage-
ment, and easy troubleshooting.
The wireless campus network, however, differs greatly from the
wired campus network. It is usually built on Wi-Fi standards (WLAN
standards). WLAN terminals connect to WLAN access points (APs)
using IEEE 802.11 series air interface protocols. Network deploy-
ment and installation quality determine the effect of network cov-
erage. Network optimization must be performed periodically based
on network service situations to ensure network quality. The wire-
less network is also vulnerable to interference from external signal
sources, causing a series of abnormalities that are difficult to locate.
Given that wireless connections are invisible and discontinuous,
abnormalities occur abruptly and are difficult to reproduce. For
these reasons, O&M personnel for wireless networks must have suf-
ficient knowledge and expertise related to air interfaces.
5. Industry attributes
Campus networks adapt to the industries they serve. To meet the
distinct requirements of different industries, we should design the
campus network architecture in line with the requirements for each
industry. The ultimate goal is to develop campus network solutions
with evident industry attributes. Typical industry campus networks
include enterprise campus networks, educational campus networks,
e-Government campus networks, and commercial campus net-
works, just to name a few.
a. Enterprise campus network: Strictly speaking, an enterprise
campus network covers the largest scope of any network type
The network not only transmits data but also manages online
behaviors of students to avoid extreme and aggressive actions.
Supporting research and teaching is another top priority for the
network. For these reasons, the network must be highly advanced
and support the intense demands of cutting-edge technologies.
c. e-Government campus network: The internal networks of gov-
ernment agencies are a good example of this type of network.
Due to stringent requirements on security, the internal and
external networks on an e-Government campus network are
generally isolated to ensure the absolute security of confidential
information.
d. Commercial campus network: This type of network is deployed
in commercial organizations and venues, such as shopping
malls, supermarkets, hotels, museums, and parks. In most cases,
a commercial campus network facilitates internal office work
via a closed subnet but, most importantly, serves a vast group of
consumers, such as guests in shopping malls, supermarkets, and
hotels. In addition to providing network services, the commer-
cial campus network also serves business intelligence systems.
The resulting benefits include improving customer experience,
reducing operational costs, increasing business efficiency, and
mining more value from the network.
4. Security platform
An advanced security platform utilizes network-wide big data
provided by the network management platform to defend against
Advanced Persistent Threats (APTs). It can also invoke the northbound
APIs provided by the network management platform to isolate
and automatically clean up threat traffic for APT defense.
limited network scale, resulting in the inability to meet these instant mes-
saging requirements. All of these pain points posed higher requirements
on campus networks.
Against this backdrop, Layer 3 switches were developed in 1996.
A Layer 3 switch, also called a routed switch because it integrates both
Layer 2 switching and Layer 3 routing functions, came with a simple and
efficient Layer 3 forwarding engine designed and optimized for campus
scenarios. Its Layer 3 routing performance was approximately equivalent
to its Layer 2 switching performance.
Layer 3 switches introduced the Layer 3 routing function to LANs for
the first time, enabling each LAN to be divided into multiple subnets that
would be connected to different Layer 3 interfaces. As shown in Figure 1.3,
the Layer 3 switch-based networking structure eliminated the limitation
that BDs posed on the scale of the network, meaning that LANs were no
longer constrained by collision domains or BDs. Instead, the network
scale could be expanded on demand simply by adding subnets, and each
user or terminal could easily access the network as required.
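The on-demand expansion described above is easy to illustrate with Python's standard ipaddress module: each subnet placed behind its own Layer 3 interface is an independent broadcast domain, and growing the network means simply allocating another subnet. The address plan below is purely illustrative.

```python
import ipaddress

# Carve a campus address block into per-department subnets, each
# terminating on its own Layer 3 switch interface.
campus = ipaddress.ip_network("10.10.0.0/16")
subnets = list(campus.subnets(new_prefix=24))  # 256 possible /24 subnets

# Adding a department is just allocating the next /24; each broadcast
# domain stays small no matter how large the campus grows.
engineering, finance = subnets[0], subnets[1]
print(engineering)  # 10.10.0.0/24
print(finance)      # 10.10.1.0/24
```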
In addition, the campus network backbone consisting of Layer 3
switches greatly increased bandwidth on the entire network, allowing
users to smoothly access multimedia brought by the WWW. Various office
systems were also migrated to the campus network, and offices gradually
transitioned toward being paperless as they became fully networked.
forwarding mode” of the Wi-Fi network with the unified wired and wireless
forwarding feature of agile switches in order to efficiently resolve the
networking problem of large-scale Wi-Fi user access.
Technological advances have made it possible for Ethernet switches to
solve many problems that previously seemed like impossible challenges.
As mentioned earlier, Ethernet switches have become hugely popular
largely due to the emergence of ASIC forwarding engines.
Huawei takes this to a new level by combining the ASIC forwarding
engines with Network Processors originally used for high-performance
routers, launching high-performance chips. These chips, used as the
forwarding engine, enable agile switches to efficiently support not only
multiservice implementation, but also Wi-Fi features, laying a solid foundation
for wired and wireless convergence.
Wi-Fi has subsequently become deeply integrated into and a typical feature
of third-generation campus networks. Software-Defined Networking
(SDN) has also been introduced to campus networks in order to simplify
services. This generation of campus networks generally meets the requirements
of enterprises that are in the early stages of all-wireless transformation.
However, a number of problems still exist. For example, Wi-Fi
networks cannot deliver a high enough service quality, and can instead
only be used as a supplement to wired networks. Other examples are
that maintenance difficulties related to the large-scale deployment of
Wi-Fi are not completely eliminated, the network architecture is not optimized,
and multiservice support still depends heavily on VPN technology.
All of these lead to insufficient network agility. Despite these problems,
the launch of agile switches provides a good hardware basis for further
evolution of campus networks.
In summary, campus networks have constantly evolved and made
dramatic improvements in terms of bandwidth, scale, and service convergence.
However, they face new challenges in terms of connectivity,
experience, O&M, security, and ecosystem as digital transformation
sweeps across all industries. For example, IoT services require ubiquitous
connections; high-definition (HD) video, Augmented Reality (AR), and
Virtual Reality (VR) services call for high-quality networks; and a huge
number of devices require simplified service deployment and network
O&M. To address these unprecedented challenges, industry vendors
are gradually introducing new technologies such as Artificial Intelligence (AI)
and big data to campus networks. They have launched a series of new
solutions, such as SDN-based management, fully virtualized architecture,
all-wireless access, and comprehensive service automation.
Campus networks are embarking on a new round of technological
innovation and evolution, and they are expected to gradually incorporate
intelligence and provide customers with unprecedented levels of simplified
service deployment and network O&M. Such a future-proof campus
network is called an “intent-driven campus network”, which we will
explore in more detail in subsequent sections.
CHAPTER 2
Campus Network
Development Trends
and Challenges
Shared transportation has redefined the way we travel. Today, there are
more than 100 million bike-sharing users in about 150 cities, and over
400 million online car-hailing users across more than 400 cities.
Video intelligence has revolutionized the man-machine interaction
model. Technologies used for recognition, such as facial and license plate
recognition, are becoming more widespread in fields such as attendance
management, permission control, and commercial payment.
AI has innovated the operation model. In today's world, intelligent
robots can be found in high-value fields such as customer service, language
translation, production, and manufacturing, each replacing between 3 and 5.6
positions that were once performed manually.
As digital technologies become more mature and used on a larger scale,
they offer us unprecedented convenience both at home and at work. They
also inspire us to expect unlimited possibilities for the future. The digital
era — or more accurately, the intelligence era — is coming. Are you ready
for this new era?
In the past, technical experts were required to visit the site and
generally needed one week to solve all problems. Now, however, the
required time is greatly reduced. R&D resources are shared on the
cloud, and cloud services such as the simulation cloud, design cloud,
and test cloud are available. In this way, IT resources and hardware
resources can be centrally allocated in order to support cross-region
joint commissioning by different teams. The time needed to prepare
the environment has been more than halved from the one month it
used to take.
2. Digital operations are first implemented inside enterprise campuses.
Many enterprises have chosen their internal campuses as a start-
ing point for their digital transformation. Employees are the first to
experience the digital services available at their fingertips.
As shown in Figure 2.2, all resources are clearly displayed on the
Geographic Information System (GIS) map for employees to easily
obtain. These include offices, conference rooms, printers, and service
desks. In a conference room, for example, people can experience
intelligent services at any time. Such services include online reser-
vation, automatic adjustment of light, temperature, and humidity
before the conference, wireless projection, facial recognition sign-in,
and automatic output of meeting minutes via speech recognition.
2.1.2 Education
In March 2016, the World Economic Forum in Davos released a research
report entitled New Vision for Education: Fostering Social and Emotional
Learning through Technology. The report points out that a serious gap
exists between the demand for talents in the future intelligent society and
the industrialized education system.
and external talent demand data. This development path provides
recommended majors, communities, and employment choices, giving
students a strong foundation from which to plan their development.
In addition, an accurate teaching evaluation can be performed
for each student based on their self-learning data, attendance data,
credit data, and book borrowing data. Suitable coaching suggestions
are then offered to students, helping them efficiently complete their
studies.
2.1.4 Retail
Over the past 20–30 years, e-commerce has developed explosively due to
unique advantages such as low costs and no geographical restrictions.
E-commerce has shaken up the brick-and-mortar retail industry.
However, as time goes by, the online traffic dividend of e-commerce
is reaching its ceiling. It has become impossible to penetrate further
into the consumer goods retail market. Facing this, the retail industry
has started to enter a new retail era where online and offline retail are
converged.
E-commerce companies want to achieve business breakthroughs by
streamlining online and offline practices and expanding traffic portals
into omni-channels through campus networks. Traditional retail players
are also actively embracing this change, in the hopes of taking a place in
the future new retail era based on new Information and Communication
Technology (ICT).
Digital transformation of the retail industry requires campus networks
to be open at all layers in order to build an enriched digital application
ecosystem. Further to this, traditional network O&M needs to be radically
overhauled and innovations made in order to provide users with network
O&M methods that are far simpler.
In addition, the rise of the IoT drives profound campus network trans-
formation. Huawei’s latest global connectivity index shows that by 2025,
there will be 100 billion connections worldwide and IoT will develop at an
accelerated pace.
Campus networks of the future are expected to not only converge wired,
wireless, and IoT but also provide ubiquitous connections while delivering
always-on services. However, these expectations will bring the following
challenges to campus networks:
1. Wi-Fi will become the mainstream access mode and will need to sup-
port large-scale, high-performance, and scenario-tailored networking.
Since its debut, the Wi-Fi network has been positioned only as a
supplement to the wired network, and so it does not innately sup-
port large-scale networking and high-density access. Due to the
self-interference of air interfaces on the Wi-Fi network, large-scale
networking causes adjacent-channel interference, and high-density
access leads to co-channel interference. As a result, the performance
and quality of the entire Wi-Fi system deteriorate quickly.
Wi-Fi networks use Piggy-Back architecture; the wireless access
controller (WAC) and access points (APs) communicate with each
other through Control and Provisioning of Wireless Access Points
(CAPWAP) tunnels; and data have to be aggregated and centrally
processed by the WAC. As such, the Wi-Fi network is actually overlaid
on top of the wired network. Piggy-Back architecture makes it
difficult to improve the performance of the Wi-Fi backbone network.
For example, if large-capacity WACs are deployed in centralized
mode, there will be high requirements on WAC performance and
costs. Further, if multiple WACs are deployed in distributed mode,
complex roaming between WACs is one of the top concerns, and
traffic on the network is subsequently detoured.
Wi-Fi networks were originally deployed in relatively simple environments
(such as office spaces). However, in the future, they will be
deployed in various complex environments, for example, large stadiums
requiring high-density access and hotels with many rooms
where inter-room interference is severe. As such, various deployment
solutions will be needed in different scenarios, which will pose
high requirements on vendors' products and solutions.
2.2.2 On-Demand Services
Over the course of digital transformation, applications are rapidly chang-
ing and new services constantly emerging. Enterprises hope that the
upper-layer service platform can easily obtain all necessary information
from the underlying network in order to provide more value-added ser-
vices and create greater business outcomes.
To achieve business success, enterprises strive to shorten the time to
market of new services and implement on-demand provisioning of cam-
pus network applications. This requires campus networks to solve the fol-
lowing pain points:
deployed each week during peak times. In this case, if we use traditional
deployment methods, chain stores may miss business
opportunities before network deployment is even complete.
2. Services and networks are tightly coupled, failing to support fre-
quent service adjustment.
On traditional campus networks, service adjustment inevitably
leads to network adjustment. For example, if a new video surveillance
service is to go live, configurations (including route, virtual private
network (VPN), and Quality of Service (QoS) policies) should be
adjusted at different nodes such as the access layer, aggregation layer,
core layer, campus egress, and data center. Such new service rollout
takes at least several months or even one year.
In addition, due to the strong coupling between services and net-
works, the rollout of new services easily affects existing services. To
prevent this, a large amount of verification is required before new
services are brought online.
3. Policies and IP addresses are tightly coupled, making it difficult to
ensure policy consistency in mobile office scenarios.
On traditional campus networks, service access policies are
implemented by configuring a large number of Access Control Lists
(ACLs). Administrators usually plan service policies based on infor-
mation such as IP addresses and VLANs.
However, in mobile access scenarios, information like user access
locations, IP addresses, and VLANs can change at any time, bringing
great challenges to service access policy configuration and adjust-
ment, and affecting user service experience during mobility.
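The coupling problem above can be sketched in a few lines: a policy keyed to an IP address must be rewritten whenever the user roams and receives a new address, while a policy keyed to the user's identity or group is unaffected by re-addressing. The names and data structures below are ours, not a vendor API.

```python
# IP-keyed policy: must be rewritten whenever the user roams and
# receives a new address, as on a traditional ACL-based campus.
acl_by_ip = {"10.1.1.23": "allow-finance-servers"}

# Identity-keyed policy: the permission follows the user's group,
# so the IP-to-user mapping can change freely at access time.
user_groups = {"alice": "finance"}
group_policy = {"finance": "allow-finance-servers"}

def lookup_policy(user: str, current_ip: str) -> str:
    # current_ip plays no part in the decision; it merely identifies
    # the session, which is exactly the decoupling being described.
    return group_policy[user_groups[user]]

# The same policy applies no matter which address the user holds.
assert lookup_policy("alice", "10.1.1.23") == lookup_policy("alice", "192.168.7.5")
```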
of our society. AI, the engine of the fourth industrial revolution, will
promote the development of all industries around the world.
Figure 2.12 shows four different phases along the AI productivity/adoption
curve. AI, as a general purpose technology (GPT), has left the first
phase, where exploration of AI technology and application takes place on
a small scale. It is now in the second phase, where tech development and
the social environment are colliding. Driven by continuous collision, AI is
finding ever-increasing and more comprehensive use in industry application
scenarios. This, in turn, generates greater productivity. Data networks
are a key driving force in the IT era, and will be further developed and
optimized in the AI era.
1. What is an intent?
An intent is a business requirement and service objective described
from a user’s perspective. Intents do not directly correspond to net-
work parameters. Instead, they should be translated using an intent
translation engine into parameter configurations that can be under-
stood and executed by networks and components. After introducing
intents to networks, our approach to managing networks will change
significantly, including the following:
a. A shift from network-centric to user-centric: We no longer use
obscure technical jargon or parameters to define networks.
Instead, we use language that can be easily understood by users.
For example, we used to describe the Wi-Fi network quality
2. IDN architecture
As shown in Figure 2.16, IDN consists of an ultra-broadband,
simplified network, and a smart network brain.
The ultra-broadband, simplified network is the cornerstone of
IDN. It follows an ideal in the same vein as Moore’s law. That is, it
is driven by continuous innovation and breakthroughs in key com-
munications technologies. In addition, node capacity doubles every
24 months to meet the bandwidth and transmission needs of new
services such as 4K/VR, 5G, and cloud computing.
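As a quick check on this cadence, a 24-month doubling implies a growth factor of 2^(n/2) after n years. The computation below is our own illustration; the 10-year horizon is an arbitrary example, not a figure from the text.

```python
def capacity_growth(years: float) -> float:
    """Growth factor for node capacity that doubles every 24 months."""
    return 2 ** (years / 2)

# Over a 10-year horizon, per-node capacity grows 32-fold.
print(capacity_growth(10))  # 32.0
```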
2.4.1 Ultra-Broadband
Ultra-broadband refers to two concepts: wide coverage and large band-
width. The former means that the network must be sufficiently extensive
to allow any terminal to access the network at any time and from any loca-
tion. The latter means that the network must provide sufficient bandwidth
and throughput so that as much data as possible can be forwarded within
a given period of time.
If we compare the network to a highway, wide coverage refers to the
highway covering an extensive area and having sufficient entrances and
exits that enable vehicles to enter and leave at any time. High bandwidth
refers to the highway being sufficiently wide so that enough lanes exist for
vehicles to quickly pass.
2.4.2 Simplicity
An ultra-broadband campus network — one that offers wide coverage and
high bandwidth — functions as the “information highway”, efficiently
connecting everything and implementing a diverse array of services. How
to quickly bring this information highway into operation is the next key
consideration in digital transformation.
In the future, administrators of large or midsize campus networks
will handle thousands, potentially even hundreds of thousands of ter-
minals, and will perform network planning, design, configuration, and
deployment for each data forwarding service. The use of traditional
methods in the future would make it extremely difficult to manage the
campus network. Compounding this is mobility, which leads to frequent
network policy changes, and digital transformation, which requires
faster provisioning of new services in order to adapt to changing market
requirements.
With all of these factors in mind, we need to transform campus net-
works in an E2E manner, from planning and management to deploy-
ment and O&M. This will ensure that the new campus network can
overcome the problems caused by network complexity, provide ultra-
broadband campus network services on demand, and greatly simplify
E2E deployment.
2.4.3 Intelligence
Technical advances and service upgrades complicate network planning,
design, deployment, and O&M, increasing the workload for network engineers.
Enterprises therefore have to invest more labor resources to deploy
networks, analyze network problems, and rectify network faults, leading
to expensive and inefficient O&M. The ideal way to overcome these burdens
is to add intelligence and build an intelligent network.
2.4.4 Security
Traditional campus networks have clear security boundaries and trustworthy
access terminals, and can leverage security devices such as firewalls
to defend against attacks and ensure security. However, as these security
boundaries become increasingly blurred or even disappear entirely, new
security concepts and technical approaches must be developed in order to
implement borderless security defense. To this end, the following ideas are
prevalent in the industry.
2. Technologies such as big data and AI are introduced into the security
analysis field to build an intelligent security threat detection system.
Security probes are installed onto all devices on the network so
that a security analysis engine can collect network-wide data, moni-
tor the security status of the entire network, and perform in-depth
analysis on network behavior data.
Deep neural network algorithms and machine learning technolo-
gies are also used to promptly detect, analyze, and pinpoint security
threats, while network devices collaborate with security devices to
quickly handle security threats.
Although remarkable achievements have been made regard-
ing the performance of various security devices and the use of new
threat defense methods, no enterprise can establish a complete threat
defense system by focusing exclusively on limited security.
In the future, campus networks will become borderless. To
embrace them, enterprises should build an integrated and multidi-
mensional security defense system from the perspective of the entire
network. Only in this way can they benefit from unbreakable campus
network security.
2.4.5 Openness
To upgrade an entire network from the current generation to the next
generally takes at least five years, far longer than it takes for applications. As
such, the campus network and the applications running on it develop at
different speeds.
Take WLAN as an example. Each generation of WLAN standard is usually
released five years after the preceding one, but WLAN applications are
updated more quickly, usually every 2–3 years and even more frequently
nowadays. For this reason, the current campus network is unable to support
the constant emergence of innovative applications if it is not adequately open. The
openness of the campus network can be achieved from the following aspects.
Overall Architecture
of an Intent-Driven
Campus Network
1. Network layer
An intent-driven campus network uses virtualization technology
to divide the network layer into the physical network and virtual
the access location of the terminal, check the access path between
the terminal and application, and collect traffic statistics on a
per-device basis. This situation would differ significantly on an
intent-driven campus network. On such a network, the Analytics
Engine constantly collects statistics about network performance
and the status of each user and application. With its predictive
maintenance capability, the Analytics Engine can notify the network
administrator of a fault even before an end user can detect it,
helping improve user satisfaction and experience.
d. Security Engine: implements the security capability of an intent-
driven campus network. By using big data analytics and machine
learning technologies, the Security Engine extracts key informa-
tion from massive amounts of network-wide data collected by
Telemetry. The Security Engine then analyzes the information from
multiple dimensions to detect potential security threats and risks
on the network and assess the network's overall security posture.
When a security risk is detected, the Security Engine automatically
works with the Intent Engine to instantly isolate or block the threat.
urn:ietf:params:netconf:capability:
{name}:{version}
In the format, name and version indicate the name and version
of a capability, respectively.
The definition of a capability may depend on other capabili-
ties in the same capability set. Only when a server supports all
the capabilities that a particular capability relies on, will that
capability be supported too. In addition, NETCONF provides
specifications for defining the syntax and semantics of capabili-
ties, based on which equipment vendors can define nonstandard
capabilities when necessary.
The client and server exchange capabilities to notify each
other of their supported capabilities. The client can send only the
operation requests that the server supports.
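A capability identifier of this shape can be split mechanically into its name and version. The sketch below assumes the capability-URN form registered by RFC 6241, urn:ietf:params:netconf:capability:{name}:{version}; :writable-running:1.0 is a standard capability, used here only as the example input.

```python
def parse_capability(urn: str) -> tuple:
    """Split a NETCONF capability URN into (name, version).

    Expected format (per RFC 6241's registry):
    urn:ietf:params:netconf:capability:{name}:{version}
    """
    prefix = "urn:ietf:params:netconf:capability:"
    if not urn.startswith(prefix):
        raise ValueError("not a NETCONF capability URN")
    # The version follows the last colon; everything before it is the name.
    name, _, version = urn[len(prefix):].rpartition(":")
    return name, version

print(parse_capability(
    "urn:ietf:params:netconf:capability:writable-running:1.0"))
# ('writable-running', '1.0')
```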
e. NETCONF configuration database
A configuration database is a collection of complete con-
figuration parameters for a device. Table 3.2 lists the configu-
ration databases defined in NETCONF.
3. NETCONF application on an intent-driven campus network
a. The controller uses NETCONF to manage network devices.
Huawei’s intent-driven campus network solution supports
cloud management and uses the SDN controller to manage
network devices through NETCONF, realizing device plug-
and-play while implementing quick and automated deploy-
ment of network services. The following example describes
how NETCONF is used when a device registers with the con-
troller for management. After the device accesses the network
and obtains an IP address through DHCP, it establishes a con-
nection with the controller following the NETCONF-based
interaction process shown in Figure 3.4.
For example, to create a VLAN on a device, the controller sends the device
a NETCONF packet similar to the following.
<huawei-vlan:id>100</huawei-vlan:id>
</huawei-vlan:vlan>
</huawei-vlan:vlans>
</config>
</edit-config>
</rpc>
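The request fragment above is truncated at the start of this excerpt. For illustration, a complete edit-config packet of the same shape can be assembled with Python's standard library; note that the urn:example:huawei-vlan namespace URI and element names below are placeholders of our own, not Huawei's actual YANG module namespace.

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
# Placeholder namespace for illustration only; a real Huawei YANG
# module defines its own namespace URI.
HV = "urn:example:huawei-vlan"

def build_create_vlan_rpc(message_id: str, vlan_id: int) -> str:
    """Assemble an <edit-config> RPC that creates one VLAN."""
    rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": message_id})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}running")  # edit the running datastore
    config = ET.SubElement(edit, f"{{{NC}}}config")
    vlans = ET.SubElement(config, f"{{{HV}}}vlans")
    vlan = ET.SubElement(vlans, f"{{{HV}}}vlan")
    ET.SubElement(vlan, f"{{{HV}}}id").text = str(vlan_id)
    return ET.tostring(rpc, encoding="unicode")

print(build_create_vlan_rpc("101", 100))
```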
</device-state>
</data>
</rpc-reply>
In October 2016, YANG 1.1 (RFC 7950) was published. In fact, the
YANG model has become the undisputed mainstream data model in
the industry and is drawing growing attention from vendors. More and
more vendors require adaptations to the YANG model, and many well-known
network equipment vendors have rolled out NETCONF- and
YANG-capable devices. For example, Huawei SDN controllers support
86 ◾ Campus Network Architectures
module ietf-interfaces {
  namespace "urn:ietf:params:xml:ns:yang:ietf-interfaces";
  prefix if;

  import ietf-yang-types {
    prefix yang;
  }
organization
"IETF NETMOD (NETCONF Data Modeling Language) Working
Group";
contact
"WG Web:<http://tools.ietf.org/wg/netmod/>
WG List:<mailto:[email protected]>
WG Chair: Thomas Nadeau
<mailto:[email protected]>
WG Chair: Juergen Schoenwaelder
<mailto:[email protected]>
Editor: Martin Bjorklund
<mailto:[email protected]>";
description
"This module contains a collection of YANG definitions
for managing network interfaces.
Copyright (c) 2014 IETF Trust and the persons
identified as authors of the code.
All rights reserved.
Redistribution and use in source and binary
forms, with or without modification, is permitted
pursuant to, and subject to the license terms
contained in, the Simplified BSD License set forth in
Section 4.c of the IETF Trust's Legal Provisions
Relating to IETF Documents (http://trustee.ietf.org/
license-info).
This version of this YANG module is part of RFC
7223; see the RFC itself for full legal notices.";
revision 2014-05-08 {
description
"Initial revision.";
reference
"RFC 7223: A YANG Data Model for Interface
Management";
}
typedef my-base-int32-type {
type int32 {
range "1..4 | 10..20";
}
}
YANG allows users to define derived types from base types
using the “typedef” statement based on their own require-
ments. Derived type example:
typedef percent {
type uint16 {
range "0 .. 100";
}
description "Percentage";
}
leaf completed {
type percent;
}
The following example combines multiple type definitions
using the union statement.
typedef threshold {
  description "Threshold value in percent";
  type union {
    type uint16 {
      range "0 .. 100";
    }
    type enumeration {
      enum disabled {
        description "No threshold";
      }
    }
  }
}
grouping target {
leaf address {
type inet:ip-address;
description "Target IP";
}
leaf port {
type inet:port-number;
description "Target port number";
}
}
container peer {
container destination {
uses target;
}
}
RFC 6021 defines a collection of common data types derived from the built-in YANG data types (in the modules ietf-yang-types.yang and ietf-inet-types.yang). These derived types can be directly referenced by developers.
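For instance, a module can import one of these type libraries and reuse a derived type directly; the leaf name below is invented for illustration:

```yang
import ietf-yang-types {
  prefix yang;
}

leaf last-updated {
  type yang:date-and-time;
}
```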
The identity statement is used to define a new, globally
unique, abstract, and untyped identity.
module phys-if {
  identity ethernet {
    description "Ethernet family of PHY interfaces";
  }

  identity eth-1G {
    base ethernet;
    description "1 GigEth";
  }

  identity eth-10G {
    base ethernet;
    description "10 GigEth";
  }
}
The identityref type is used to reference an existing identity.
module newer {
  identity eth-40G {
    base phys-if:ethernet;
  }
}
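A leaf of type identityref then accepts any identity derived from the specified base, such as eth-1G, eth-10G, or the newly added eth-40G; the leaf name below is illustrative:

```yang
leaf eth-type {
  type identityref {
    base phys-if:ethernet;
  }
}
```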
feature has-local-disk {
description
"System has a local file system that can be used for
storing log files";
}
container system {
container logging {
if-feature has-local-disk;
presence "Logging enabled";
leaf buffer-size {
type filesize;
}
}
}
+--rw interfaces
| +--rw interface* [name]
| +--rw name string
| +--rw description? string
| +--rw type identityref
| +--rw enabled? boolean
| +--rw link-up-down-trap-enable? enumeration
+--ro interfaces-state
+--ro interface* [name]
+--ro name string
+--ro type identityref
+--ro admin-status enumeration
+--ro oper-status enumeration
Configuration and state data are defined in two separate containers, holding configured interfaces and dynamically generated interface state, respectively. The interfaces configuration data contains a list of interfaces keyed by the name leaf. This example illustrates how well the YANG model's hierarchical tree structure suits defining the configuration and state data of network devices.
d. Definition of RPC and event notification
RPCs are introduced to express operations that the configuration data model cannot: one-off operations whose results do not need to be saved, and actions that cannot be expressed using standard NETCONF operations (such as system reboot and software upgrade). The latest YANG 1.1 also allows actions to be defined directly on data nodes, so top-level RPCs should be avoided where possible.
An RPC defines the input and output of an operation, whereas
an event notification defines only the information to be sent.
The following is a sample RPC and event notification.
rpc activate-software-image {
input {
leaf image {
type binary;
}
}
output {
leaf status {
type string;
}
}
}
notification config-change {
description "The configuration changed";
leaf operator-name {
type string;
}
leaf-list change {
type instance-identifier;
}
}
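Invoking the activate-software-image RPC over NETCONF could look as follows; the message-id, the module namespace, and the payload are illustrative, not taken from a real module:

```xml
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <activate-software-image xmlns="http://example.com/system">
    <image>BASE64-ENCODED-IMAGE</image>
  </activate-software-image>
</rpc>
```

The server's rpc-reply would then carry the status leaf declared in the output statement.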
e. Augment definition
The augment statement is used to add new schema nodes to a previously defined schema node.
augment /sys:system/sys:user {
leaf expire {
type yang:date-and-time;
}
}
3.2.3 RESTful APIs
The SDN controller interconnects with the application layer through
northbound RESTful APIs, including basic network APIs, VAS APIs,
third-party authentication APIs, and location-based service (LBS) APIs.
Representational State Transfer (REST) is a software architectural style that defines a set of constraints for creating web services. Under this architecture, everything on a network is abstracted as a resource, and each resource is identified by a unique identifier. All resources can be operated on through standard methods without altering their identifiers, and all operations are stateless. A RESTful architecture, one that complies with the constraints and rules of REST, aims to make better use of the rules and constraints already present in web standards.
In REST, anything that needs to be referenced can be considered a
resource (or a “representation”). A resource can be an entity (for example,
a mobile number) or an abstract concept (for example, a value). To make
a resource identifiable, a unique identifier — called a URI in the World
Wide Web — needs to be assigned to it.
APIs developed in accordance with REST design rules are called
RESTful APIs. External applications can use the Hypertext Transfer
Protocol (HTTP), as well as the secure version — HTTPS — for security
purposes, to access RESTful APIs in order to implement functions such as
service provisioning and status monitoring.
Standard HTTP methods for accessing managed objects include GET,
PUT, POST, and DELETE, as described in Table 3.3.
A description of a RESTful API should include the typical scenario,
function, constraints, invocation method, URI, request and response
parameter description, and sample request and response. For example, the
RESTful API provided by Huawei’s SDN controller for user access autho-
rization is described as follows:
POST /controller/cloud/v2/northbound/accessuser/haca/authorization HTTP/1.1
Host: IP address:Port number
Content-Type: application/json
Accept: application/json
Accept-Language: en-US
X-AUTH-TOKEN: CA48D152F6B19D84:637C38259E6974E17788348128A430FEE150E874752CE754B6BF855281219925

{
  "deviceMac" : "48-bit MAC address of the device",
  "deviceEsn" : "ESN of the device",
  "apMac" : "48-bit MAC address of the AP",
  "ssid" : "dcd=",
  "policyName" : "aa",
  "terminalIpV4" : "IPv4 address of the terminal",
  "terminalIpV6" : "IPv6 address of the terminal",
  "terminalMac" : "MAC address of the terminal",
  "userName" : "User name",
  "nodeIp" : "IP address of a node that performs authorization",
  "temPermitTime" : 300
}

TABLE 3.4 Parameters in the Body

Parameter Name | Mandatory | Type      | Parameter Description
body           | Yes       | REFERENCE | Authorization information

TABLE 3.5 Parameters in the Body Object

Parameter Name | Mandatory | Type    | Parameter Description
deviceMac      | No        | STRING  | Media Access Control (MAC) address of a device. Either the device MAC address, ESN, or both must be provided
deviceEsn      | No        | STRING  | ESN of a device. Either the device MAC address, ESN, or both must be provided
apMac          | No        | STRING  | MAC address of an AP
ssid           | Yes       | STRING  | Base64 code of an AP service set identifier (SSID)
policyName     | No        | STRING  | Name of an access control policy. If this parameter is empty, policy-based access control is not performed
terminalIpV4   | No        | STRING  | IPv4 address of a terminal. Mandatory if the terminal has an IPv4 address
terminalIpV6   | No        | STRING  | IPv6 address of a terminal. Mandatory if the terminal has an IPv6 address
terminalMac    | Yes       | STRING  | MAC address of a terminal
userName       | Yes       | STRING  | User name
nodeIp         | Yes       | STRING  | IP address of the node that performs authorization
temPermitTime  | No        | INTEGER | Duration, in seconds (value range [0–600]), for which a user is allowed temporary network access. If this parameter is not carried in a RESTful packet or is set to 0, there is no time limit

A sample response is:

HTTP/1.1 200 OK
Date: Sun, 20 Jan 2019 10:00:00 GMT
Server: example-server
Content-Type: application/json

{
  "errcode" : "0",
  "errmsg" : " ",
  "psessionid" : "5ea660be98a84618fa3d6d03f65f47ab578ba3b4216790186a932f9e8c8c880d"
}
With this model, network administrators can centrally plan and manage network services through GUIs to automate the deployment of these services.
With a service model that is suitably configured, the SDN controller can
accurately display a management GUI from the perspective of user ser-
vices through northbound interfaces and deliver the configurations of
user services to the corresponding NEs through southbound interfaces.
The underlay network of a legacy campus is usually built and adjusted over
a long period of time and therefore may contain devices from multiple
vendors. For example, buildings A and B may each have devices of a differ-
ent vendor, or the core, aggregation, and access layers in the same building
may use devices from three different vendors. A further example is a three-
layer (core/aggregation/access) tree topology that is subsequently changed
to a four-layer (core/aggregation/level-1 access/level-2 access) topology
due to complex and expensive cabling, leading to a network topology that
may be disorganized and difficult to manage. Such examples highlight the
difficulties in building a unified service model for the underlay network of
a traditional campus. By using an SDN controller for modeling, the intent-
driven campus network eliminates the impact of a disorganized topology,
as shown in Figure 3.8.
Underlay network service modeling aims for Layer 3 interoperability
between all devices, with one device assuming the root role and other
devices assuming the leaf role. An administrator needs only to specify the
root device (typically the core switch) and configure IP address resources
used by the underlay network on the root device. The root device then
automatically calculates and configures routes, discovers aggregation
and access switches, and specifies them as leaf devices through negotia-
tion. It also delivers all the underlay network configurations to the leaf
devices, which can be connected in any networking mode, and maintains
the routes between itself and the leaf devices as well as those between leaf
devices. In effect, the root device manages the entire underlay network.
These models are based on cloud platform architectures, among which the open-source OpenStack is the most popular.
OpenStack is the general front end of Infrastructure as a Service (IaaS) resources and manages computing, storage, and network resources. Its virtual network model is typically used in cloud data center scenarios. Unlike campus scenarios, cloud data center scenarios have no wireless networks or wireless users, but do have virtual machines (VMs). Consequently, the virtual network service model needs to be expanded and reconstructed when defined in the intent-driven campus network architecture.
This section starts by describing OpenStack service models in the cloud
data center scenario and then provides details about service models for a
campus virtual network.
FIGURE 3.18 Traffic priority model defined for an intent-driven campus network.
Building Physical Networks for an Intent-Driven Campus Network
Wired and wireless network technologies are highly developed. For wired networks, the 25GE, 100GE, and 400GE technology standards have been released and are ready for large-scale commercial use, thanks to improved chip performance and lower production costs. These standards give campus networks ultra-broadband forwarding capability at a reasonable cost. As for wireless networks, the Wi-Fi 6 standard is quickly being put into commercial use; it supports larger capacity and higher user density than previous Wi-Fi standards, providing comprehensive coverage of campuses. Wireless
device vendors are launching next-generation Wi-Fi 6 access points (APs),
and in terms of IoT, an increasing number of IoT terminals connect to
campus networks based on Wi-Fi standards. IoT APs integrate multiple
wireless communication modes such as Bluetooth and radio frequency
identification (RFID), further expanding the scope of access terminals.
4.1 ULTRA-BROADBAND FORWARDING
ON A PHYSICAL NETWORK
Ultra-broadband forwarding on physical networks involves the wired backbone network as well as wired and wireless access networks. The transmission rate of the wired backbone network will evolve from 25 Gbit/s through 100 Gbit/s to 400 Gbit/s, and that of the wired access network will evolve from 2.5 Gbit/s through 5 Gbit/s to 10 Gbit/s. In addition, the wireless local area network (WLAN) standard evolves from Wi-Fi 4 through Wi-Fi 5 to Wi-Fi 6. All of this requires that the physical network of a campus support ultra-broadband forwarding.
The ultra-broadband forwarding rate of wired networks fundamentally follows the Ethernet evolution rule. Before 2010, the Ethernet transmission rate increased tenfold with each generation: 10 Mbit/s → 100 Mbit/s → 1 Gbit/s → 10 Gbit/s → 100 Gbit/s. However, since 2010, when large numbers of wireless APs started to access campus networks, the Ethernet rate has evolved in a variety of ways, as shown in Figure 4.2.
With copper media, the Ethernet rate evolves as follows: 1 Gbit/s → 2.5 Gbit/s → 5 Gbit/s → 10 Gbit/s, whereas optical media follow their own evolution path.
FIGURE 4.2 Evolution of the port forwarding rate on campus network devices.
In the early days of the optical module industry, no industry standard had yet been formed: each vendor had its own form factor and dimensions for optical modules, so a unified standard was urgently needed. As an official standards body, the IEEE 802.3 Working Group played a key role in defining standards for optical modules and has formulated a series of Ethernet transmission standards, such as 10GE, 25GE, 40GE, 50GE, 100GE, 200GE, and 400GE. The Ethernet transmission rate on a campus network developed as follows:
Physical Networks for Campus Network ◾ 121
1. 10GE
In 2002, IEEE released the IEEE 802.3ae standard, which deter-
mined the 10 Gbit/s (10GE) wired backbone for early campus net-
works. 10GE achieves a leap in Ethernet standards, reaching a
transmission rate of 10 Gbit/s, ten times higher than that of the ear-
lier GE standard. Additionally, 10GE greatly extends the distance of
transmission, eliminating the limitation of the traditional Ethernet
on a Local Area Network (LAN). Its technology is applicable to vari-
ous network structures. It can simplify a network as well as help
construct a simple and cost-effective network that supports multiple
transmission rates, enabling transmission at a large capacity on the
backbone network.
2. 40GE/100GE
On the heels of 10GE, the 40GE and 100GE standards were
released by IEEE 802.3ba in 2010. At that time, however, the 40GE
rate was actually achieved by a 4-lane 10GE serializer/deserializer
(SerDes) bus, and the 100GE rate by a 10-lane 10GE SerDes bus. The
costs of deployment for 40GE and 100GE are high; for example, in
contrast to 10GE optical transceivers that require the small form-
factor pluggable (SFP) encapsulation mode, 40GE optical transceiv-
ers use the quad small form-factor pluggable (QSFP) encapsulation
mode. Additionally, 40GE connections require four pairs of optical
fiber; the optical transceivers and cables result in high costs.
3. 25GE
Originally applied to data center networks, 40GE is high in cost
and does not efficiently leverage Peripheral Component Interconnect
Express (PCIe) lanes on servers. To eliminate these drawbacks, com-
panies including Microsoft, Qualcomm, and Google established the
25 Gigabit Ethernet Consortium and launched the 25GE standard.
To ensure the unification of Ethernet standards, IEEE incorporated 25GE into the IEEE 802.3by standard in June 2016.
25GE optical transceivers use the single-lane 25GE SerDes bus
and SFP encapsulation. In addition, the previous 10GE optical
transceivers can be smoothly upgraded to 25GE without the need to
replace existing network cables, greatly reducing the overall cost of
network deployment.
TABLE 4.1 Frequency Bandwidth, Transmission Rate, and Typical Application Scenarios of Common Twisted Pairs

Cable Category               | Frequency Bandwidth (MHz) | Maximum Data Transmission Rate   | Typical Application Scenario
Category 5 (CAT5) cable      | 100                       | 100 Mbit/s                       | 100BASE-T and 10BASE-T Ethernets
CAT5e cable                  | 100                       | 5 Gbit/s                         | 1000BASE-T, 2.5GBASE-T, and some 5GBASE-T Ethernets
Category 6 (CAT6) cable      | 250                       | 10 Gbit/s (at 37–55 m)           | 5GBASE-T and some 10GBASE-T Ethernets
Augmented category 6 (CAT6a) | 500                       | 10 Gbit/s (at 100 m)             | 10GBASE-T Ethernet
Category 7 (CAT7) cable      | 600                       | 10 Gbit/s (at up to 100 m)       | 10GBASE-T Ethernet
electrical signals, and MAC layer protocols. Table 4.2 shows the evolution
phases of Ethernet standards in chronological order.
802.3ae signifies the beginning of the 10 Gbit/s Ethernet era and lays
a foundation for end-to-end Ethernet transmission.
5. Multi-GE (IEEE 802.3bz)
Initially released in 2016, the IEEE 802.3bz standard comes with
2.5GBASE-T and 5GBASE-T, which reach transmission rates of 2.5
and 5 Gbit/s, respectively, at a distance of 100 m. Physical layer (PHY)
transmission technology of IEEE 802.3bz is based on 10GBASE-T
but operates at a lower signaling rate. Specifically, IEEE 802.3bz reduces the rate to 25% or 50% of that of 10GBASE-T, achieving 2.5 Gbit/s (2.5GBASE-T) or 5 Gbit/s (5GBASE-T). This lowers the cabling requirements, enabling 2.5GBASE-T and 5GBASE-T to be deployed over 100 m of unshielded CAT5e and CAT6 cable, respectively.
The IEEE 802.3bz standard is intended to support the improve-
ment in air interface forwarding performance of wireless APs.
In January 2014, the IEEE 802.11ac standard was officially
released. It uses signals on the 5 GHz frequency band for communi-
cation and provides a theoretical transmission rate of higher than 1
Gbit/s for multistation WLAN communication. With IEEE 802.11ac, a 1 Gbit/s uplink on the backbone network can hardly meet the requirements of next-generation APs, yet it is impractical to carry multigigabit traffic over the CAT5e cables already routed to existing network devices. 10GE can provide the rate required by APs, but achieving it calls for re-cabling with CAT6 or higher-specification cables or optical fibers, resulting in high costs and complex construction.
Against this backdrop, an Ethernet standard that supports intermediate rates between 1 and 10 Gbit/s while eliminating the need for re-cabling offers an immediate solution, and has gained wide recognition from users and network vendors. This technology is known in the industry as multi-GE.
Two technology alliances have been established in the world
to promote the development of 2.5GE and 5GE technologies on
enterprise networks. In October 2014, the NBASE-T Alliance was jointly founded by Aquantia, Cisco, Freescale Semiconductor, and Xilinx, and its members now include most network hardware vendors.
FIGURE 4.3 Wireless transmission rate changes during IEEE 802.11 evolution.
TABLE 4.3 Mappings between Wi-Fi Generations and IEEE 802.11 Standards

Released In | IEEE 802.11 Standard  | Frequency Band (GHz) | Wi-Fi Generation
2009        | IEEE 802.11n          | 2.4 or 5             | Wi-Fi 4
2013        | IEEE 802.11ac Wave 1  | 5                    | Wi-Fi 5
2015        | IEEE 802.11ac Wave 2  | 5                    | Wi-Fi 5
2019        | IEEE 802.11ax         | 2.4 or 5             | Wi-Fi 6
In 2018, the Wi-Fi Alliance introduced a new naming system for IEEE
802.11 standards to identify Wi-Fi generations by a numerical sequence,
providing Wi-Fi users and device vendors with an easy-to-understand
designation for the Wi-Fi technology supported by their devices and used
in Wi-Fi connections. Additionally, the new approach to naming high-
lights the significant progress of Wi-Fi technology, with each new genera-
tion of the Wi-Fi standard introducing a large number of new functions to
provide higher throughput, faster speeds, and more concurrent connec-
tions. Table 4.3 lists the mappings between Wi-Fi generations and IEEE
802.11 standards announced by the Wi-Fi Alliance.
1. IEEE 802.11a/b/g/n/ac
a. IEEE 802.11
In the early 1990s, the IEEE set up a dedicated IEEE 802.11
Working Group to study and formulate WLAN standards. In
June 1997, the IEEE released the first generation WLAN proto-
col, the IEEE 802.11 standard. This standard allows the PHY of
WLANs to work on the 2.4 GHz frequency band at a maximum
data transmission rate of 2 Mbit/s.
b. IEEE 802.11a/IEEE 802.11b
In 1999, the IEEE released the IEEE 802.11a and IEEE 802.11b
standards. The IEEE 802.11a standard operates on the 5 GHz
frequency band and uses orthogonal frequency division multi-
plexing (OFDM) technology. This technology divides a specified
channel into several subchannels, each of which has one subcar-
rier for modulation. The subcarriers are transmitted in parallel,
improving the spectrum utilization of channels and bringing a
PHY transmission rate of up to 54 Mbit/s. IEEE 802.11b is a direct extension of the modulation technique defined in IEEE 802.11; it works on the 2.4 GHz frequency band and provides a maximum rate of 11 Mbit/s.
i. More subcarriers
Compared with IEEE 802.11a/g, IEEE 802.11n has four
more valid subcarriers and increases the theoretical rate from
54 to 58.5 Mbit/s, as shown in Figure 4.4.
ii. Higher coding rate
Data transmitted on WLANs contains valid data and
forward error correction (FEC) code. If errors occur in the
valid data due to attenuation, interference, or other factors,
the FEC code can be used to rectify the errors and restore
data. IEEE 802.11n increases the effective coding rate from
3/4 to 5/6, thereby increasing the data rate by 11%, as shown
in Figure 4.5.
iii. Shorter GI
To prevent intersymbol interference, IEEE 802.11a/b/g
introduces the use of a guard interval (GI) for transmitting data
frames. The GI defined in IEEE 802.11a/b/g is 800 ns, which
IEEE 802.11n also uses by default; however, IEEE 802.11n
allows for half this GI (400 ns) in good spatial environments.
This improvement increases device throughput to approxi-
mately 72.2 Mbit/s (about 10% higher), as shown in Figure 4.6.
iv. Wider channel
With IEEE 802.11n, two adjacent 20 MHz channels can
be bonded into a 40 MHz channel, doubling bandwidth and
therefore delivering a higher transmission capability. Use
of the 40 MHz frequency bandwidth enables the number of
subcarriers per channel to be doubled from 52 to 104. This
increases the data transmission rate to 150 Mbit/s (108%
higher). Figure 4.7 shows the 40 MHz channel bandwidth as
defined in IEEE 802.11n.
v. More spatial streams
In IEEE 802.11a/b/g, a single antenna is used for data trans-
mission between a terminal and an AP in single-input single-
output (SISO) mode. This means that data are transmitted
through a single spatial stream. In contrast, IEEE 802.11n supports multiple-input multiple-output (MIMO) transmission, which enables it to achieve a maximum transmission rate of 600 Mbit/s over four spatial streams.
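The gains in items i–iv compound multiplicatively; a quick tally (a sketch; the 52 → 108 data-subcarrier count for 40 MHz channels is the IEEE 802.11n figure):

```python
# Worked arithmetic for the cumulative IEEE 802.11n per-stream rate gains
# described above (theoretical PHY rates; subcarrier counts follow
# 802.11n: 48 -> 52 data subcarriers at 20 MHz, 52 -> 108 at 40 MHz).

rate = 54.0                    # Mbit/s, IEEE 802.11a/g baseline
rate *= 52 / 48                # i.   more data subcarriers -> 58.5
rate *= (5 / 6) / (3 / 4)      # ii.  higher coding rate (+11%) -> 65.0
rate *= 4.0 / 3.6              # iii. 400 ns short guard interval
print(round(rate, 1))          # 72.2 (matches the ~72.2 Mbit/s in the text)
rate *= 108 / 52               # iv.  40 MHz channel bonding
print(round(rate, 1))          # 150.0 Mbit/s per spatial stream
print(round(rate * 4, 1))      # v.   600.0 Mbit/s with four spatial streams
```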
i. Higher throughput
Every Wi-Fi standard has pursued higher throughput, which has increased from a maximum of 2 Mbit/s in IEEE 802.11 to 54 Mbit/s in IEEE 802.11a, 600 Mbit/s in IEEE 802.11n, and 6.93 Gbit/s in IEEE 802.11ac.
Furthermore, throughput planning and design have enabled
IEEE 802.11ac to better meet high-bandwidth requirements.
Table 4.5 lists the parameter changes of different IEEE 802.11
Wi-Fi standards.
ii. Less interference
IEEE 802.11ac mainly works on the 5 GHz frequency band
where more resources are available. Frequency bandwidth
is only 83.5 MHz on the 2.4 GHz frequency band; whereas
the 5 GHz frequency band allows planning for bandwidth
resources attaining hundreds of megahertz in certain coun-
tries. A higher frequency indicates a lower frequency reuse
degree with the same bandwidth, reducing intrasystem inter-
ference. In addition, as numerous devices (including micro-
wave ovens) work on the 2.4 GHz frequency band, devices
in a Wi-Fi system on this frequency band also face external interference.
TABLE 4.5 Parameter Changes of Different IEEE 802.11 Wi-Fi Standards

Wi-Fi Standard | Frequency Bandwidth (MHz) | Modulation Scheme                          | Maximum Number of Spatial Streams | Maximum Rate
IEEE 802.11    | 20                        | Differential quadrature phase shift keying | 1                                 | 2 Mbit/s
IEEE 802.11b   | 20                        | Complementary code keying                  | 1                                 | 11 Mbit/s
IEEE 802.11a   | 20                        | 64-QAM                                     | 1                                 | 54 Mbit/s
IEEE 802.11g   | 20                        | 64-QAM                                     | 1                                 | 54 Mbit/s
IEEE 802.11n   | 20 or 40                  | 64-QAM                                     | 4                                 | 600 Mbit/s
IEEE 802.11ac  | 20, 40, 80, or 160        | 256-QAM                                    | 8                                 | 6.93 Gbit/s
2. 802.11ax (Wi-Fi 6)
As video conferencing, cloud VR, mobile teaching, and various other service applications diversify, an increasing number of Wi-Fi access terminals are being used. More mobile terminals are also appearing as the Internet of Things (IoT) develops. Even relatively dispersed home Wi-Fi networks, where devices used to be thinly scattered, are becoming crowded with growing numbers of smart home devices. It is therefore imperative that Wi-Fi networks improve at accommodating various types of terminals, so as to meet users' bandwidth requirements for the different applications running on those terminals.
Figure 4.10 shows the relationship between access capacity and
per capita bandwidth in different Wi-Fi standards.
The low efficiency of Wi-Fi networks under ever-denser terminal access needed to be resolved in the next-generation Wi-Fi standard. To address this issue, the IEEE established the High Efficiency WLAN (HEW) Study Group as early as 2014, and the resulting Wi-Fi 6 standard was ratified in 2019.
uplink MU-MIMO, orthogonal frequency division multiple access
(OFDMA), and 1024-QAM high-order coding, Wi-Fi 6 is designed
to resolve network capacity and transmission efficiency problems
from aspects such as spectrum resource utilization and multiuser
access. Compared with IEEE 802.11ac (Wi-Fi 5), Wi-Fi 6 aims to achieve a fourfold increase in average user throughput and to increase the number of concurrent users more than threefold in dense user environments.
FIGURE 4.10 Relationship between the access capacity and per capita bandwidth in different Wi-Fi standards.
Wi-Fi 6 is compatible with the earlier Wi-Fi standards. This means
that legacy terminals can seamlessly connect to a Wi-Fi 6 network.
Furthermore, Wi-Fi 6 inherits all the advanced MIMO features of
Wi-Fi 5 while also introducing several new features applicable to
high-density deployment scenarios. Some of the Wi-Fi 6 highlights
are described as follows:
a. OFDMA
Before Wi-Fi 6, the OFDM mode was used for data transmis-
sion and users were distinguished based on time segments. In
each time segment, one user occupied all subcarriers and sent a
complete data packet, as shown in Figure 4.11.
OFDMA is a more efficient data transmission mode intro-
duced by Wi-Fi 6. It is also referred to as MU-OFDMA since
Wi-Fi 6 supports uplink and downlink MU modes. This tech-
nology enables multiple users to reuse channel resources by
allocating subcarriers to various users and adding multiple
access in the OFDM system. To date, OFDMA has been uti-
lized in 3rd Generation Partnership Project (3GPP) Long
Term Evolution (LTE) among numerous other technolo-
gies. In addition, Wi-Fi 6 groups subcarriers into resource units (RUs), the smallest unit that can be allocated to a user.
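The contrast between OFDM time slicing and OFDMA subcarrier sharing can be sketched as a toy scheduler. The 9-RU layout mirrors a 20 MHz 802.11ax channel; the user names are invented for illustration.

```python
# Toy contrast between OFDM (pre-Wi-Fi 6) and OFDMA (Wi-Fi 6) scheduling.
# A 20 MHz 802.11ax channel is modeled as 9 resource units (RUs) of 26
# subcarriers each; user names are illustrative only.

def ofdm_schedule(users, n_slots):
    """One user occupies all subcarriers in each time slot."""
    return [(users[t % len(users)], "all subcarriers") for t in range(n_slots)]

def ofdma_schedule(users, n_rus=9):
    """RUs within a single transmission are divided among users."""
    alloc = {u: [] for u in users}
    for ru in range(n_rus):
        alloc[users[ru % len(users)]].append(ru)
    return alloc

users = ["laptop", "phone", "sensor"]
print(ofdm_schedule(users, 3))  # three slots, one whole-channel user per slot
print(ofdma_schedule(users))    # {'laptop': [0, 3, 6], 'phone': [1, 4, 7], 'sensor': [2, 5, 8]}
```

With OFDMA, all three users transmit in the same slot on disjoint RUs instead of waiting for their own full-channel turn, which is where the multiuser efficiency gain comes from.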
e. Expanding range
Wi-Fi 6 uses the long OFDM symbol transmission mech-
anism, which leads to an increase in the data transmission
duration from 3.2 to 12.8 μs. A longer transmission time
can reduce the packet loss rate of STAs. Additionally, Wi-Fi
6 can use only 2 MHz bandwidth for narrowband transmis-
sion, which reduces noise interference on the frequency band,
improves the receiver sensitivity of STAs, and increases cover-
age distance, as shown in Figure 4.17.
The preceding core technologies deliver the efficient transmission and high-density capacity of Wi-Fi 6. However, Wi-Fi 6 is not the ultimate Wi-Fi standard; it is only the beginning of the HEW effort. The new Wi-Fi 6 standard still needs to remain compatible with legacy devices while taking into account future-oriented IoT networks and energy conservation, and it introduces further new features to these ends.
FIGURE 4.17 How long OFDM symbol and narrowband transmission increase
coverage distance.
4.2 ULTRA-BROADBAND COVERAGE
ON A PHYSICAL NETWORK
The development of Wi-Fi technologies enables campus networks to provide full wireless coverage over an entire campus. However, ultra-broadband coverage on the physical network faces another challenge: connecting IoT terminals to campus networks.
1. Wi-Fi
As the most important wireless access mode on a campus net-
work, a Wi-Fi network allows access from various Wi-Fi terminals,
such as laptops, mobile phones, tablets, and printers. In recent years,
more and more terminal types are starting to access campus net-
works through Wi-Fi, including electronic whiteboards, wireless
displays, smart speakers, and smart lights. Wi-Fi technologies have
already been detailed in the preceding sections; therefore, they will
not be mentioned here.
2. RFID
RFID is a wireless access technology that automatically identifies
target objects and obtains related data through radio signals without
requiring manual intervention.
a. RFID system
An RFID system is composed of RFID electronic tags (RFID
tags for short) and RFID readers, and connected to an informa-
tion processing system.
i. RFID tag: consists of a chip and a tag antenna or coil. The tag
has built-in identification information.
ii. RFID reader: reads and writes tag information. Read-only
RFID readers are often referred to as card readers.
Figure 4.18 shows an RFID system. In such a system, an
RFID tag communicates with an RFID reader through induc-
tive coupling or electromagnetic reflection. The RFID reader
then reads information from the RFID tag and sends it to the
information processing system over a network for storage and
unified management.
b. Operating frequency
An RFID system typically operates on a low frequency (LF)
band (120–134 kHz), high frequency (HF) band (13.56 MHz),
ultra-high frequency (UHF) band (433 MHz, 865–868 MHz,
902–928 MHz, and 2.45 GHz), or super-high frequency (SHF)
band (5.8 GHz), as shown in Figure 4.19.
RFID tags are cost-effective and energy-efficient because they store only a small amount of data and support a relatively short reading distance.
c. Power supply
RFID tags are classified as active, semi-active, or passive, based on their power supply mode.
i. An active RFID tag uses a built-in battery to provide all or
part of the energy for a microchip, without the need for an
RFID reader to provide energy for startup. Active RFID tags
support a long identification distance (up to 10 m), but have a
limited service life (3–10 years) and high cost.
ii. A semi-active RFID tag features a built-in battery that supplies
power to the internal circuits but does not actively transmit
signals. Before a semi-active RFID tag starts to work, it remains
in a dormant state, making it equivalent to a passive RFID tag.
Because the built-in battery consumes little energy, the tag has
a long service life and a low cost. When the tag enters the read/
write area, it starts to work upon receiving an instruction from
the RFID reader, and the energy sent by the reader supports the
information exchange between the reader and the tag.
iii. A passive RFID tag does not have an embedded battery. This
type of tag stays in a passive state when it is out of the reading
range of an RFID reader. When it enters the reading range, the
tag is powered by the radio energy emitted by the reader and
works at a distance ranging from 10 cm to several meters.
Passive RFID tags are the most widely used as they are light-
weight, small, and have a long service life.
d. Read/write attributes of RFID tags
RFID tags are classified as either read-only or read-write tags,
depending on whether the stored information can be modified.
Information on a read-only RFID tag is written when its inte-
grated circuit is produced; it cannot be modified afterward and
can only be read by a dedicated device. In contrast, a read-write
RFID tag stores information in its internal storage area, and this
information can be rewritten by a dedicated programming or
writing device.
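The tag classes described above can be sketched as a toy model. The class and field names below are purely illustrative and do not correspond to any RFID standard or API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RFIDTag:
    """Toy model of an RFID tag; fields are illustrative only."""
    tag_id: str
    power_mode: str          # "active", "semi-active", or "passive"
    writable: bool = False   # read-only vs. read-write attribute
    data: str = ""

    def respond(self, reader_energy: bool) -> Optional[str]:
        # Passive and semi-active tags need the reader's energy to answer;
        # an active tag can reply using its built-in battery.
        if self.power_mode != "active" and not reader_energy:
            return None
        return self.data

class RFIDReader:
    def read(self, tag: RFIDTag) -> Optional[str]:
        # The reader radiates energy whenever it interrogates a tag.
        return tag.respond(reader_energy=True)

    def write(self, tag: RFIDTag, payload: str) -> bool:
        if not tag.writable:  # read-only tags cannot be rewritten
            return False
        tag.data = payload
        return True

reader = RFIDReader()
passive = RFIDTag("EPC-0001", "passive", writable=True, data="pallet-42")
print(reader.read(passive))                 # pallet-42
print(reader.write(passive, "pallet-43"))   # True
```

The model captures only the two classification axes discussed in the text (power supply mode and read/write attribute), not the radio-layer details.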
Figure 4.25 shows a typical PoE system, which consists of power sourc-
ing equipment (PSE), powered devices (PDs), and a PoE module (built
into a PoE switch).
4.3 BUILDING AN ULTRA-BROADBAND
PHYSICAL NETWORK ARCHITECTURE
The physical network architecture is the foundation of a campus network.
As such, any change to the physical network architecture poses serious
risks to the network and incurs high costs. Most campus networks have
devices distributed across different buildings or floors, making it very dif-
ficult to lay and adjust cables between these devices. To overcome such
challenges, it is paramount that we plan a proper physical network archi-
tecture before network construction. To this end, this section describes
how to build ultra-broadband wired and wireless networks.
from the core layer to the access layer, supports a large network
scale, and is relatively easy to reconstruct. However, the downside
is that it requires more optical modules and devices, leading to high
networking cost.
2. Simplified two-layer tree networking architecture
This networking architecture consists of only the core layer and
access layer, which are directly connected, as shown in Figure 4.27.
The simplified two-layer tree networking architecture has the fol-
lowing advantages:
a. Low network deployment cost
In this architecture, there is no aggregation layer, meaning that
fewer devices and optical modules are required and the overall
Table 4.10 lists the device port models calculated for different
user scales based on the preceding formulas.
management. Fit APs provide radio signals for STAs to access a wire-
less network, but provide almost no management or control capabilities.
The WAC + Fit AP networking architecture is applicable to large-
and medium-sized campus networks. On such a network, a WAC
can be deployed at either the aggregation or core layer, depending on
the wireless network capacity and locations of Fit APs at the access
layer. For reliability purposes, deploying the WAC at the core layer is
considered best practice. The WAC is responsible for service control
such as configuration delivery and upgrade management for all Fit
APs, which are also plug-and-play, greatly reducing WLAN manage-
ment, control, and maintenance costs.
WACs provide the same features in both in-path and off-path
deployments. Figure 4.30 shows the latter deployment mode for
Building Virtual Networks for an Intent-Driven Campus Network
into one logical switch, integrating the control plane and achieving uni-
fied management. Strictly speaking, they are device-level virtualization
technologies and cannot be used as independent protocols on campus net-
works. VLAN and VPN technologies, on the other hand, cannot meet the
network virtualization requirements of intent-driven campus networks.
5.2.1 VN Architecture
VXLAN-based VNs can decouple service networks from the physical net-
work, irrespective of that network’s complexity. When service networks
need to be adjusted, the physical network topology does not need to be
changed. As shown in Figure 5.3, the VN architecture of an intent-driven
campus network features two layers, consisting of underlay and overlay
networks.
The underlay network is the physical infrastructure consisting of vari-
ous physical devices, such as access switches, aggregation switches, core
switches, routers, and firewalls.
The overlay network is completely decoupled from the underlay net-
work. It is a fully connected logical fabric topology built on top of the
physical topology using VXLAN technology. In the logical fabric topol-
ogy, resources such as user IP addresses, VLANs, and access points are
pooled in a unified manner and allocated to VNs on demand. Through
the logical fabric topology, users can create multiple VN instances based
on service requirements to achieve a multi-purpose network, service isola-
tion, and fast service deployment.
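As a rough illustration of this resource pooling, the sketch below models a fabric that hands out VLANs and subnets from shared pools when a VN is created, leaving the physical topology untouched. All names and pool values are hypothetical:

```python
import ipaddress

class Fabric:
    """Toy resource pool for an overlay fabric (illustrative sketch)."""
    def __init__(self, vlan_pool, subnet_pool):
        self.free_vlans = list(vlan_pool)
        self.free_subnets = [ipaddress.ip_network(s) for s in subnet_pool]
        self.vns = {}

    def create_vn(self, name):
        # Allocate one VLAN and one subnet from the shared pools on demand;
        # the underlay network underneath is not changed.
        vlan = self.free_vlans.pop(0)
        subnet = self.free_subnets.pop(0)
        self.vns[name] = {"vlan": vlan, "subnet": subnet}
        return self.vns[name]

fabric = Fabric(vlan_pool=range(100, 200),
                subnet_pool=["10.1.0.0/24", "10.1.1.0/24"])
office = fabric.create_vn("office")
iot = fabric.create_vn("iot")
print(office["vlan"], office["subnet"])  # 100 10.1.0.0/24
```

Each VN receives its own slice of the pooled resources, which mirrors how multiple service networks share one physical fabric while staying logically isolated.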
5.2.2 Roles of a VN
On a VN, four roles are defined: border node, edge node, access node,
and transparent node. The entities of all four roles are physical devices
on the physical network. The border node and edge node are assigned
new functions on the VN. Figure 5.4 shows the VN roles on a campus
network.
network, the office network uses a tree structure, the security network
uses an active/standby structure, and the IoT network uses a ring struc-
ture. The three types of networks are logically independent of each other
and can flexibly construct a VN topology based on service characteristics.
switch (for example, tenants 4–7 share one aggregation switch). Large-
and medium-sized enterprises generally rent multiple floors of the build-
ing. As a result, small-sized data centers need to be deployed. However,
small-sized enterprises rent only a portion of a single floor and just require
access to the Internet. In addition, tenant networks must be isolated from
one another. Traditionally, each time a new tenant arrives, the network
needs to be re-planned and re-commissioned based on the tenant's scale
and requirements. This process is inefficient and results in slow service
provisioning.
In this case, network virtualization technology offers a feasible solution.
Network virtualization does not require reconstruction or complex configuration
of the existing network. Instead, the SDN controller can be used to quickly
create VNs based on various service requirements to provide services for
new tenants. Tenants can manage their own VNs.
VTEPs. This type of MAC/IP route is also called the ARP route.
ARP entry advertisement applies to the following scenarios:
i. ARP broadcast suppression: After a Layer 3 gateway learns
the ARP entries of hosts, it generates host information that
contains the host IP and MAC addresses, Layer 2 VNI, and
gateway’s VTEP IP address. The Layer 3 gateway then trans-
mits an ARP route carrying the host information to a Layer 2
gateway. Upon receiving an ARP request, the Layer 2 gateway
checks whether it includes the host information correspond-
ing to the destination IP address of the packet. If such host
information exists, the Layer 2 gateway replaces the broad-
cast MAC address in the ARP request with the destination
unicast MAC address and unicasts the packet. This imple-
mentation suppresses ARP broadcast packets.
ii. Virtual machine (VM) migration in a distributed gateway
scenario: After a VM migrates from one gateway to another,
the new gateway learns the ARP entry of the VM, and gen-
erates host information that contains the host IP and MAC
addresses, Layer 2 VNI, and gateway’s VTEP IP address. Then,
the new gateway transmits an ARP route carrying the host
information to the original gateway. After the original gate-
way receives the ARP route, it detects a VM location change
and triggers ARP probe. If ARP probe fails, the original gate-
way withdraws the ARP entry and host route of the VM.
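A minimal sketch of the suppression logic at the Layer 2 gateway might look as follows. The host table entries are hypothetical sample data standing in for the host information advertised via ARP routes:

```python
BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"

# Host information learned from the Layer 3 gateway's ARP routes
# (hypothetical sample data).
host_table = {
    "10.1.1.10": {"mac": "00:11:22:33:44:55", "l2_vni": 10, "vtep": "192.168.0.1"},
}

def handle_arp_request(target_ip, dest_mac=BROADCAST_MAC):
    """Return the frame's destination MAC after suppression, or the
    broadcast MAC if no host entry exists for the target IP."""
    entry = host_table.get(target_ip)
    if entry:
        # Replace the broadcast MAC with the known unicast MAC,
        # so the request can be unicast instead of flooded.
        return entry["mac"]
    return dest_mac  # fall back to broadcasting the ARP request

print(handle_arp_request("10.1.1.10"))  # 00:11:22:33:44:55
print(handle_arp_request("10.1.1.99"))  # ff:ff:ff:ff:ff:ff
```

The key point is that suppression is a local table lookup: only when the gateway lacks host information does the ARP request remain a broadcast.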
with the remote VTEP. If the remote VNI is the same as the local
VNI, an ingress replication list is created for subsequent BUM packet
forwarding.
3. Type 5 route: IP prefix route
Figure 5.12 shows the format of an IP prefix route.
Table 5.5 describes the fields.
The IP Prefix Length and IP Prefix fields can identify a host IP
address or network segment.
a. If the IP Prefix Length and IP Prefix fields identify a host IP
address, the route is used for IP route advertisement in distrib-
uted VXLAN gateway scenarios. In such cases, the route func-
tions the same as an IRB route on the VXLAN control plane.
(Figure 5.12 also shows the IP Prefix (4 or 16 bytes) and GW IP Address
(4 or 16 bytes) fields.)
Advertised EVPN routes carry RDs and VPN targets (also known
as route targets).
RDs are used to identify different VXLAN EVPN routes. In addi-
tion, VPN targets are BGP extended community attributes used to
control the export and import of EVPN routes.
A VPN target is either an export target or an import target.
a. Export target: It is carried in the EVPN routes advertised by the
local device and defines which remote devices can accept the
EVPN routes.
b. Import target: It determines whether the local device accepts the
EVPN routes advertised by remote devices. When receiving an
EVPN route, the local device matches the export targets carried
in the received route against its own import targets. If a match
is found, the route is accepted. If no match is found, the route is
discarded.
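The match-and-accept rule can be sketched in a few lines. The RT values below are hypothetical examples written in the common ASN:number notation:

```python
def accept_route(route_export_targets, local_import_targets):
    """A route is accepted when at least one export target carried in the
    route matches one of the local instance's import targets."""
    return bool(set(route_export_targets) & set(local_import_targets))

# Hypothetical values: RTs are conventionally written as "ASN:number".
received_route = {"rd": "65001:10", "export_rts": ["65001:100"]}
local_import_rts = ["65001:100", "65001:200"]

print(accept_route(received_route["export_rts"], local_import_rts))  # True
print(accept_route(["65001:300"], local_import_rts))                 # False
```

A single shared RT value is enough for two VTEPs to import each other's routes, which is why symmetric export/import configurations are common in practice.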
On the network shown in Figure 5.13, Host 1 and Host 3 are attached to
VTEP 2, Host 2 is attached to VTEP 3, and a Layer 3 gateway is deployed on
VTEP 1. To allow Host 3 and Host 2, which are on the same subnet, to com-
municate with each other, a VXLAN tunnel needs to be established between
VTEP 2 and VTEP 3. To allow Host 1 and Host 2 on different subnets to
communicate with each other, VXLAN tunnels need to be established
between VTEP 2 and VTEP 1 and between VTEP 1 and VTEP 3. Although
Host 1 and Host 3 are both attached to VTEP 2, they belong to different
subnets and must communicate through the Layer 3 gateway (VTEP 1). For
this reason, a VXLAN tunnel is also required between VTEP 2 and VTEP 1.
The following example illustrates how to use BGP EVPN to dynami-
cally establish a VXLAN tunnel between VTEP 2 and VTEP 3, as shown
in Figure 5.14.
1. VTEP 2 and VTEP 3 first establish a BGP EVPN peer relationship.
Then, local EVPN instances are created on VTEP 2 and VTEP 3, and
a route distinguisher (RD), export VPN target (ERT), and import
VPN target (IRT) are configured for each EVPN instance. Layer 2
BDs are created and bound to VNIs and EVPN instances. After IP
addresses are configured on VTEP 2 and VTEP 3, they generate a
BGP EVPN route and advertise it to each other. The BGP EVPN
route carries the ERT list of the local EVPN instance and an inclu-
sive multicast route (Type 3 route defined in BGP EVPN).
2. When VTEP 2 and VTEP 3 receive a BGP EVPN route from each
other, they match the ERT list of the remote EVPN instance car-
ried in the route against the IRT list of the local EVPN instance.
If a match is found, the route is accepted. If no match is found,
the route is discarded. If the route is accepted, VTEP 2 and VTEP
3 obtain each other’s IP address and VNI carried in the route.
If the IP addresses are reachable at Layer 3, the VTEPs estab-
lish a VXLAN tunnel. If the remote VNI is the same as the local
VNI, an ingress replication list is created to forward subsequent
BUM packets.
The process of dynamically establishing a VXLAN tunnel between
VTEP 2 and VTEP 1 and between VTEP 3 and VTEP 1 using BGP
EVPN is the same as that between VTEP 2 and VTEP 3.
2. When receiving the BGP EVPN route from VTEP 2, VTEP 3 matches
the ERT list of the EVPN instance carried in the route against
the IRT list of the local EVPN instance. If a match is found, the route
is accepted. If no match is found, the route is discarded. If the route
is accepted, VTEP 3 obtains the mapping between Host 3’s MAC
address, BD ID, and VTEP 2’s IP address (next-hop attribute), and
generates a MAC address entry for Host 3. Based on the next-hop
attribute, the MAC address entry’s outbound interface is recursed to
the VXLAN tunnel destined for VTEP 2.
VTEP 2 learns Host 2’s MAC address in the same way.
3. When Host 3 attempts to communicate with Host 2 for the first time,
Host 3 sends an ARP request for Host 2’s MAC address, with the des-
tination MAC address set to all Fs and the destination IP address set to
IP 2. By default, VTEP 2 broadcasts the ARP request to devices on the
same network segment as the interface that receives the request. To
reduce broadcast packets, ARP broadcast suppression can be enabled
on VTEP 2. With this function enabled, VTEP 2 searches the local
MAC address table for the MAC address of Host 2 based on the desti-
nation IP address in the received ARP request. Then, if Host 2’s MAC
address is found, VTEP 2 replaces the destination MAC address with
this MAC address, and unicasts the ARP request to VTEP 3 through
the VXLAN tunnel established between them. VTEP 3 then forwards
the received ARP request to Host 2. In this way, Host 2 learns Host
3’s MAC address and responds with a unicast ARP reply. After Host
3 receives the ARP reply, it learns Host 2’s MAC address.
By this stage, Host 3 and Host 2 have learned each other's MAC
addresses and can communicate in unicast mode.
Automated Service Deployment on an Intent-Driven Campus Network
6.2.1 Device Plug-and-Play
As network technologies develop rapidly and enterprise networks continue
to expand, enterprise customers need to manage and maintain hundreds
the egress gateway, and configure service set identifiers (SSIDs) for
wireless access devices.
Step 2 Deployment personnel install hardware, connect cables, and
power on the egress gateway onsite, and manually configure the
gateway to connect to the Internet. The gateway then registers with
the SDN controller for management, and the SDN controller delivers
service configurations to the gateway.
Step 3 Deployment personnel install devices attached to the egress gate-
way, connect cables, and power on the devices onsite. The devices
then obtain their IP addresses and the DNS server IP address from
the egress gateway functioning as the DHCP server, and register
with the SDN controller to obtain service configurations.
When an OSPF network is divided into areas, not all areas are equal.
The area with ID 0 is known as the backbone area, which is responsible
for transmitting routes between nonbackbone areas. Therefore, OSPF
requires that all nonbackbone areas be connected to the backbone area,
and that the backbone area itself be contiguous.
On the network shown in Figure 6.4, all switches in an AS run OSPF,
and the AS is divided into three areas. Switch A and Switch B function as
area border routers (ABRs) to forward interarea routes. After basic
OSPF functions are configured, including the Virtual Local Area
Network (VLAN) ID of each interface and the IP address of each
VLANIF interface, each switch learns the routes to all network
segments in the AS.
On the preceding network, traditional route deployment poses many
problems. First, it is time-consuming to log in to each device to con-
figure IP addresses for nearly 20 OSPF-enabled interfaces and OSPF-
dependent interfaces. Second, configuring each device using commands
can lead to errors and faults that are difficult to locate. Lastly, OSPF con-
vergence upon network changes is slow, causing long periods of service
interruption.
The automatic route configuration solution improves on traditional
route deployment. In this solution, the network administrator only needs
to plan the network topology on the SDN controller. The SDN controller
will automatically divide OSPF areas and generate configurations based
on the network topology, and deliver the configurations after verifying
that they match the network service requirements. Therefore, this solu-
tion improves the efficiency and correctness of route configuration, and
reduces the service interruption period.
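One check such an automatic planner might perform is that every nonbackbone area borders area 0 through at least one ABR. A minimal sketch, with a hypothetical topology mirroring Figure 6.4:

```python
def validate_ospf_areas(router_areas):
    """router_areas maps each router to the set of OSPF areas it joins.
    Every nonbackbone area must share at least one ABR with area 0."""
    all_areas = set().union(*router_areas.values())
    problems = []
    for area in sorted(all_areas - {0}):
        # An ABR for this area is a router in both the area and area 0.
        has_abr = any({0, area} <= areas for areas in router_areas.values())
        if not has_abr:
            problems.append(area)
    return problems

# Hypothetical topology: SwitchA and SwitchB act as ABRs, as in Figure 6.4.
topology = {
    "SwitchA": {0, 1},
    "SwitchB": {0, 2},
    "SwitchC": {1},
    "SwitchD": {2},
}
print(validate_ospf_areas(topology))          # [] -> all areas reach area 0
print(validate_ospf_areas({"SwitchE": {3}}))  # [3] -> area 3 is isolated
```

Running such a check before delivering configurations is one concrete way a controller can catch planning errors that would otherwise surface as routing faults.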
The configuration simulation and verification technology is used to
check whether or not the end-to-end network behaviors (determined by
the configuration and status) meet configuration expectations. After the
administrator configures services, the SDN controller performs config-
uration simulation and verification, and delivers only the verified con-
figurations to devices, as shown in Figure 6.5. Additionally, the SDN
controller notifies the administrator of incorrect configurations, and con-
tinues simulation and verification after the administrator modifies these
configurations.
The configuration simulation and verification technology verifies ser-
vice configurations and network forwarding.
(Figure 6.5 flow: user configurations undergo simulation and verification;
correct configurations are delivered to devices, while incorrect ones are
returned for modification.)
may pass through the network, and compares the traffic model with
the configuration objective to verify the configurations. For exam-
ple, the network verification module checks whether there are reach-
able routes between any two nodes on the network and whether the
maximum transmission unit (MTU) is correct.
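A toy version of these two checks, reachability between any two nodes and MTU correctness, might look as follows. The topology and MTU values are invented for illustration:

```python
from collections import deque

def reachable(links, src, dst):
    """Breadth-first search over the modeled topology."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nbr in links.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

def verify(links, mtus, expected_mtu=1500):
    """Check full-mesh reachability and per-node MTU against the objective."""
    nodes = list(links)
    errors = []
    for a in nodes:
        for b in nodes:
            if a != b and not reachable(links, a, b):
                errors.append(f"no path {a}->{b}")
    errors += [f"bad MTU on {n}" for n, m in mtus.items() if m != expected_mtu]
    return errors

links = {"core": ["agg1", "agg2"], "agg1": ["core"], "agg2": ["core"]}
mtus = {"core": 1500, "agg1": 1500, "agg2": 9000}
print(verify(links, mtus))  # ['bad MTU on agg2']
```

A real verification module works on a far richer model (routing tables, ACLs, interface state), but the principle is the same: evaluate the intended behavior against the modeled network before any configuration touches a device.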
The basic architecture for managing digital certificates is the public key
infrastructure (PKI), which uses a pair of keys (a private key and a pub-
lic key) for encryption and decryption. A private key is mainly used for
signing and decryption, and is kept secret, known only to the key
generator. A public key, by contrast, is used for signature verification
and encryption, and can be shared with multiple users. The following
describes how digital certificates work:
• Public key encryption is also known as asymmetric key encryption.
When a confidential file is sent, the sender uses the public key of the
receiver to encrypt the file, and the receiver uses its own private key
to decrypt the file, as shown in Figure 6.6.
• Digital signature: A sender can also use a private key to generate a
digital signature for a message to be sent. The digital signature uniquely
identifies the sender. The receiver can then determine whether the
message has been tampered with by checking the digital signature.
Figure 6.7 shows the encryption and decryption process using digital
signatures.
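The sign-and-verify flow can be illustrated with textbook RSA. The tiny key below is for demonstration only; real PKI uses 2048-bit or larger keys together with padding schemes, neither of which is shown here:

```python
import hashlib

# Tiny textbook-RSA key (p = 61, q = 53), for illustration only.
n, e, d = 3233, 17, 2753   # public key (n, e), private key (n, d)

def digest(message: bytes) -> int:
    # Reduce a SHA-256 digest into the RSA modulus range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # The sender signs with its PRIVATE key.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # The receiver verifies with the sender's PUBLIC key.
    return pow(signature, e, n) == digest(message)

msg = b"config update"
sig = sign(msg)
print(verify(msg, sig))             # True
print(verify(msg, (sig + 1) % n))   # False: an altered signature fails
```

Because only the private-key holder can produce a value that the public key verifies, the signature both identifies the sender and exposes any tampering.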
FIGURE 6.8 Certificate authentication process between the SDN controller and
device.
Step 2 After being powered on, the device obtains an IP address and
sends a registration request carrying the device certificate to the
SDN controller.
Step 3 After obtaining the device certificate, the SDN controller tra-
verses the certificate chain to verify the device certificate and verifies
the device ESN.
Step 4 If authentication is successful, the SDN controller sends an
authentication success message carrying the SDN controller certifi-
cate to the device.
Step 5 Upon receipt of the SDN controller certificate, the device per-
forms certificate authentication and sends an authentication success
message to the SDN controller if authentication is successful.
Step 6 After verifying each other’s identity, the SDN controller and
device start to use the encrypted channel for service delivery.
• Method 1: If the carrier or enterprise has its own CA, apply for SSL
client and SSL server certificates from the CA and obtain the corre-
sponding trust certificates.
• Method 2: Use tools such as OpenSSL or XCA to make digital
certificates.
networks (VNs). To create a VN, they just need to perform two steps on
the SDN controller.
First, specify resources including the physical device roles, IP address
segment, and VN access location for the VN. This step is also called creat-
ing a fabric on an intent-driven network. The fabric virtualizes and pools
all resources on the network and is presented in the form of VNs to carry
services.
Second, create the VN based on service requirements, including speci-
fying the VN name, available IP address segment, and access interfaces.
Throughout the entire process, the network administrator does not
need to consider the specific network implementation. This significantly
reduces the degree to which service requirements and network implemen-
tation are coupled, and improves network planning efficiency.
This two-step operational simplicity can be partially credited to the
orchestration by the SDN controller. The following illustrates what hap-
pens to the SDN controller and network devices in the two steps per-
formed by the network administrator.
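The two administrator steps could be sketched against a hypothetical controller API; the method and parameter names below are illustrative, not a real product SDK:

```python
# Hypothetical controller API; names are illustrative only.
class Controller:
    def __init__(self):
        self.fabrics = {}
        self.vns = {}

    def create_fabric(self, name, devices, ip_pool):
        # Step 1: pool physical resources (device roles, IP segments,
        # and access locations) into a fabric.
        self.fabrics[name] = {"devices": devices, "ip_pool": ip_pool}

    def create_vn(self, fabric, name, subnet, access_ports):
        # Step 2: carve a VN out of the fabric based on service requirements.
        if fabric not in self.fabrics:
            raise ValueError(f"unknown fabric: {fabric}")
        self.vns[name] = {"fabric": fabric, "subnet": subnet,
                          "ports": access_ports}

ctl = Controller()
ctl.create_fabric("hq", devices=["core1", "access1"], ip_pool="10.0.0.0/16")
ctl.create_vn("hq", "finance", subnet="10.0.1.0/24",
              access_ports=["access1:ports 1-12"])
print(ctl.vns["finance"]["subnet"])  # 10.0.1.0/24
```

Note that neither call mentions VXLAN, VNIs, or device commands: the administrator describes intent, and the controller translates it into device configuration.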
6.3.2 Automated VN Deployment
After physical network resources are pooled, the administrator can
create VNs based on service requirements. In real-world situations, a
VN is an independent service network that is typically created for an
independent department. For example, if a company has a marketing
department, a finance department, and an R&D department, a VN can
be created for each of the three departments. Automated deployment
of these VNs involves resource pool instantiation and VN creation by
1. Introduction to NAC
NAC mainly implements network admission and policy-based
control when users access a network.
a. Network admission
An open network environment provides users with more
convenient access to network resources; however, it poses
various security threats. For example, unauthorized users
may access the internal network of a company, compromis-
ing the company’s information security. Diversified terminals
on a campus network are the main source of security threats
as user activities on the network are difficult to manage and
control. For security purposes, the campus network needs to
authenticate users based on their identity and terminal sta-
tus, and grant access permission only to those who pass the
authentication. Figure 6.13 shows an example of network
admission for various terminals.
b. 802.1X authentication
Extensible Authentication Protocol (EAP) is used to exchange
information between terminals, admission devices, and admis-
sion servers. EAP does not require an IP address and can run over
various lower layers, including the data link layer, as well as over
upper-layer protocols such as User Datagram Protocol (UDP) and
Transmission Control Protocol (TCP). This offers much flexibility
to 802.1X authentication.
i. EAP packets transmitted between terminals and admission
devices are encapsulated in the EAP over LAN (EAPoL) format
and transmitted across the Local Area Network (LAN).
ii. The authentication mode used between the admission device
and the admission server can be either of the following,
depending on what the admission device supports and on
network security requirements.
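The EAPoL encapsulation in item i uses a 4-byte header (version, packet type, and body length), which a sketch can parse directly. The frame built below is a sample, not captured traffic:

```python
import struct

EAPOL_TYPES = {0: "EAP-Packet", 1: "EAPOL-Start", 2: "EAPOL-Logoff", 3: "EAPOL-Key"}

def parse_eapol(frame: bytes):
    """Parse the EAPoL header: Version (1 byte), Type (1 byte),
    Body Length (2 bytes, big-endian)."""
    version, ptype, length = struct.unpack("!BBH", frame[:4])
    return {"version": version,
            "type": EAPOL_TYPES.get(ptype, "Unknown"),
            "body": frame[4:4 + length]}

# An EAPOL-Start frame has an empty body: the supplicant uses it to
# kick off authentication before it even has an IP address.
start = struct.pack("!BBH", 2, 1, 0)
print(parse_eapol(start))  # {'version': 2, 'type': 'EAPOL-Start', 'body': b''}
```

The empty-body start frame illustrates why 802.1X can run with no IP address: everything needed fits in a Layer 2 frame.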
c. Portal authentication
Portal authentication is also known as web authentication.
Generally, a Portal authentication website is referred to as a web
portal. When a user accesses the Internet, the user must first be
authenticated on the web portal. If the authentication fails, the user
can access only specified network resources; only after being success-
fully authenticated can the user access more network resources.
The Portal server pushes a web authentication page to the
terminal, on which the user needs to enter account informa-
tion, as shown in Figure 6.18. The terminal, admission device,
and admission server exchange protocol packets to implement
1. Policy automation
Traditionally, policy-based control was focused on IP addresses.
This conventional approach is not suitable for an intent-driven
campus network. Ideally, policy-based control is centered on user
identities, which means it is decoupled from the physical network
topology, IP address plan, and VLAN plan. Huawei’s free mobility
technology perfectly meets these requirements. It focuses on ser-
vices, users, and experience, uses the SDN controller for centralized
management, and uses service languages and global user groups to
maintain consistency in user policies regardless of changes in user
locations and IP addresses.
Free mobility divides a campus network into multiple logical lay-
ers, as shown in Figure 6.27.
a. User terminal layer: provides man–machine interfaces for users
to complete authentication and access resources on servers.
TABLE 6.5 Comparison between Free Mobility and Traditional Access Control Solutions
Free mobility:
• Network experience consistency: Security groups are used to decouple policies from IP addresses, eliminating the need to take into account the network topology, VLANs, and host IP addresses. Consistent user permissions and experience are delivered based on user identities, regardless of terminal types and user locations.
• Deployment and maintenance efficiency: Only security groups and inter-group policies need to be defined based on user identities, simplifying network planning and deployment. The network administrator centrally plans security groups and manages service policies on the SDN controller, facilitating management and maintenance.
Traditional access control:
• Network experience consistency: ACL-based policies are tightly coupled with the network topology and IP addresses, and service policies need to be reconfigured if user locations or the network topology change. In mobility scenarios, configuration planning is complex, the configuration workload is heavy, and it is difficult to ensure consistent network access permissions for users.
• Deployment and maintenance efficiency: A large number of VLANs, IP addresses, and ACL rules need to be planned in the early phase of network design. User policies are mapped to ACL rules based on the planned VLANs and IP addresses, which requires complex configurations. The network administrator needs to manually configure devices one by one, complicating management and maintenance.
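The identity-centric model behind free mobility can be sketched as a small lookup: the access decision depends only on security groups, never on IP addresses or locations. Group names and policies below are invented:

```python
# Hypothetical security groups and inter-group policies, keyed by
# identity rather than by IP address or VLAN.
user_to_group = {"alice": "R&D", "bob": "Finance"}
policies = {("R&D", "CodeRepo"): "permit",
            ("Finance", "CodeRepo"): "deny"}

def check_access(user, resource_group):
    group = user_to_group.get(user)
    # The decision follows the user's security group, so it stays the
    # same no matter which IP address or location the user logs in from.
    return policies.get((group, resource_group), "deny")

print(check_access("alice", "CodeRepo"))  # permit
print(check_access("bob", "CodeRepo"))    # deny
```

Contrast this with an ACL-based design, where every change of user location or IP plan would force the permit/deny rules themselves to be rewritten.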
Step 2 The SDN controller delivers security groups and intergroup poli-
cies to the policy enforcement point.
Security groups and intergroup policies take effect only after
they are deployed on network devices.
Step 3 A user is authenticated and authorized based on the security
group to which the user belongs.
The user can be authenticated using MAC address, Portal, or
802.1X authentication. The authentication process is the same as
that on a traditional campus network. The SDN controller veri-
fies the user identity by checking the user name and password,
and associates the user with the corresponding security group
based on preconfigured authorization rules. If the authentica-
tion succeeds, the SDN controller sends the authentication point
the authorization result, which contains the security group to
which the user belongs.
Step 4 The SDN controller generates dynamic mappings between IP
addresses and security groups and synchronizes them to the policy
enforcement point.
Intelligent O&M on an Intent-Driven Campus Network
These merits make life much easier for network O&M person-
nel, who need to ensure normal running of networks through rou-
tine maintenance, as well as locating and rectifying faults quickly.
Traditionally, this would have to be done by manually analyzing
large amounts of data and relying on personal experience, which is
hugely challenging and time-consuming. For example, when net-
work O&M personnel receive reports about user network access
difficulties or failures, they would need to check all login logs and
capture packets. In addition, network O&M personnel may be unfa-
miliar with the complex user network access process, which varies
greatly between different authentication modes. In such cases, they
would have to seek help from professional engineers to locate user
access failures. Faults may also need to be reproduced on site, mak-
ing troubleshooting even more difficult.
The protocol tracing function is provided in intelligent O&M.
With this function, the SDN controller can display each phase of
user access and the corresponding result (success or failure) on the
graphical user interface (GUI), as well as providing root causes of
user access issues, helping O&M personnel quickly locate faults.
Figure 7.3 details the protocol tracing function, which enables visu-
alization of user access in three phases: association, authentication,
and Dynamic Host Configuration Protocol (DHCP). The function
enables the SDN controller to collect statistics on the results and
time spent in each protocol interaction phase, and performs refined
1. Wireless data
The quality of a wireless network is mainly evaluated based on
KPIs of APs, radios, and users, as listed in Table 7.1. With AI algo-
rithms and the correlation analysis function, the SDN controller
can proactively identify air interface performance and user access
issues, such as weak signal coverage, high interference, and high
channel usage.
2. Wired data
Wired network devices use Telemetry to collect performance
indicator data of devices, interfaces, and optical links, as listed in
Table 7.2. Such devices also predict baselines of KPIs including the
device CPU usage and memory usage using AI algorithms such as
TABLE 7.1 KPI Data Collected by Wireless Network Devices Using Telemetry
• AP (applicable device type: AP; minimum sampling precision: 10 seconds): CPU usage, memory usage, and number of online users
• Radio (applicable device type: AP; minimum sampling precision: 10 seconds): number of online users, channel usage, noise, traffic, backpressure queue, interference rate, and power
• User (applicable device type: AP; minimum sampling precision: 10 seconds): RSSI, negotiated rate, packet loss rate, latency, DHCP IP address obtaining, and 802.1X authentication data
TABLE 7.2 KPI Data Collected by Wired Network Devices Using Telemetry
• Device/card (applicable device type: switch and WAC; minimum sampling precision: 1 minute): CPU usage and memory usage
• Interface (applicable device type: switch and WAC; minimum sampling precision: 1 minute): numbers of received/sent packets, broadcast packets, multicast packets, unicast packets, discarded packets, and error packets
• Optical link (applicable device type: switch and WAC; minimum sampling precision: 1 minute): Rx/Tx power, bias current, voltage, and temperature
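As a simple stand-in for the KPI baseline prediction mentioned above, an exponentially weighted moving average with a deviation threshold illustrates the idea; the sample values are invented:

```python
def ewma_baseline(samples, alpha=0.3):
    """Exponentially weighted moving average as a simple KPI baseline."""
    baseline = samples[0]
    for x in samples[1:]:
        baseline = alpha * x + (1 - alpha) * baseline
    return baseline

def is_anomalous(sample, baseline, tolerance=0.5):
    # Flag samples that deviate from the baseline by more than 50%.
    return abs(sample - baseline) > tolerance * baseline

cpu_history = [20, 22, 21, 23, 20, 22]  # CPU usage (%), sampled per minute
base = ewma_baseline(cpu_history)
print(is_anomalous(21, base))  # False: normal load
print(is_anomalous(80, base))  # True:  investigate this spike
```

Production systems use far richer models than an EWMA, but the workflow is the same: learn a per-KPI baseline from telemetry history, then alert on significant deviations rather than on fixed thresholds.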
FIGURE 7.7 Logical architecture of the intelligent analysis system of the SDN
controller.
b. Data analysis: The big data analytics platform can collect and
analyze millions of data flows per minute based on the distrib-
uted database, high-performance message distribution mech-
anism, and distributed file system. Of these, the distributed
database provides distributed computing, aggregation, and
storage of large amounts of real-time data, as well as supporting
multidimensional data retrieval and statistics query in seconds.
The machine learning algorithm library currently contains
multiple network O&M analysis algorithms, providing AI ser-
vices for upper-layer O&M applications. It can be constantly
expanded.
c. Application service: The intelligent analysis system of the SDN
controller provides a large number of application services for
data analysis based on typical O&M and troubleshooting scenar-
ios of campus networks. For example, it can intelligently detect
connection, air interface performance, roaming, and device
issues, analyze connection and performance issues, play back
user journeys, analyze AP details, and detect the quality of audio
and video services.
Figure 7.9 illustrates how the SDN controller processes data. Starting
from data reporting by network devices and culminating in data dis-
play on the GUI, the data processing flow consists of five parts: subscrip-
tion, collection, buffering/distribution, analysis/AI computing (filtering,
combination, expert library-based analysis, and machine learning), and
storage/display.
TABLE 7.3 Comparison between Telemetry and Traditional Data Collection Methods

Data Collection Method | Description | Sampling Interval | Inter-Vendor Compatibility
SNMP/Syslog/CLI | Pull mode: devices respond only after receiving query requests; the collector queries NEs using SNMP in round-robin mode. Push mode: devices proactively report data such as SNMP traps and Syslog files; the data format varies between vendors, making data analysis more difficult | Minutes | The data format is defined by each vendor. For example, SNMP traps must be parsed based on the Management Information Base (MIB) tree. Character strings are unstructured and defined by each vendor, so adaptation is required for each trap
Telemetry | Push mode: data are proactively reported upon subscription; that is, data are continuously reported as scheduled after a single subscription, avoiding the load that round-robin queries place on the collector and the network | Seconds | The unified data flow format (ProtoBuf) simplifies data analysis
274 ◾ Campus Network Architectures
2. Understanding Telemetry
Telemetry is a network monitoring technology developed to
quickly collect performance data from physical or virtual devices
remotely. Telemetry enables the SDN controller to manage a larger
number of devices, laying a foundation for fast network fault locat-
ing and network optimization. It transforms network quality analy-
sis into big data analytics, effectively supporting intelligent O&M.
As shown in Figure 7.11, Telemetry enables network devices to push
high-precision performance data to the collector in real time and at
high speeds, improving the utilization of devices and networks dur-
ing data collection.
Telemetry has the following advantages over traditional network
monitoring technologies:
a. Sample data are uploaded in push mode, increasing the number
of nodes to be monitored.
In the SNMP query process, the NMS and devices interact
with each other by alternately sending requests and responses.
If the first data query requires 1000 SNMP query requests, the
devices parse those requests 1000 times. The second query requires
another 1000 requests, parsed another 1000 times, and so on. The
requests are in fact identical from one query to the next, yet they
are re-parsed in every cycle.
Parsing these query requests consumes CPU resources of devices,
and therefore the number of monitored nodes must be limited to
ensure normal device running.
In the Telemetry process, the NMS and devices interact with
each other in push mode. In the first subscription, the NMS
sends 1000 subscription request packets to a device and the
device parses these packets 1000 times. During the parsing pro-
cess, the device records the subscription information. Then, in
the subsequent sampling process, the NMS does not need to
send subscription packets to the device again. Instead, the device
automatically and continuously pushes the subscribed data to the
NMS based on the recorded subscription information. Telemetry
saves the packet parsing time and CPU resources of the device,
and increases the device monitoring frequency.
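The difference between the two interaction models above can be sketched in a few lines of Python. This is illustrative only: the device, the request format, and the counters are invented stand-ins rather than a real SNMP or Telemetry implementation. With polling, the device parses every request in every cycle; with a subscription, it parses the requests once and pushes thereafter.

```python
import json

class Device:
    """Toy device that counts how many requests it must parse."""
    def __init__(self):
        self.parse_count = 0
        self.subscriptions = []          # recorded once, reused afterwards

    def handle_poll(self, request):      # SNMP-style pull: parse on every poll
        self.parse_count += 1
        return {"oid": json.loads(request)["oid"], "value": 42}

    def subscribe(self, request):        # Telemetry-style: parse once, remember
        self.parse_count += 1
        self.subscriptions.append(json.loads(request)["path"])

    def push_samples(self):              # later cycles reuse the subscription
        return [{"path": p, "value": 42} for p in self.subscriptions]

def poll_cycles(device, oids, cycles):
    for _ in range(cycles):              # every cycle re-sends the same requests
        for oid in oids:
            device.handle_poll(json.dumps({"oid": oid}))

def telemetry_cycles(device, paths, cycles):
    for p in paths:                      # requests are parsed once, at subscription
        device.subscribe(json.dumps({"path": p}))
    for _ in range(cycles):              # subsequent cycles are pure push
        device.push_samples()

puller, pusher = Device(), Device()
poll_cycles(puller, ["cpu", "mem"], cycles=10)       # 2 OIDs x 10 cycles
telemetry_cycles(pusher, ["cpu", "mem"], cycles=10)  # 2 paths, parsed once
print(puller.parse_count, pusher.parse_count)        # 20 vs 2
```

The parsing cost of the pull model grows with every cycle, while the push model pays it only once, which is why telemetry lets the same device CPU budget cover far more monitored nodes.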
FIGURE 7.11 Comparison between the SNMP query process and Telemetry
sampling process.
b. YANG model
The SDN controller uses Telemetry to collect sample data
based on Huawei’s YANG model, which is compatible with the
openconfig-telemetry.yang model defined by OpenConfig.
c. GPB encoding
GPB is a language-neutral, platform-neutral, and extensible
mechanism for serializing structured data for communication
protocols and data storage. It features high parsing efficiency
and consumes less traffic when transmitting the same amount of
information (2–5 times more efficient encoding/decoding than
JSON). The size of GPB-encoded data is 1/3 to 1/2 that of JSON-
encoded data, thereby ensuring the data throughput perfor-
mance of Telemetry and saving CPU and bandwidth resources.
Compared with common output formats such as XML and JSON,
the GPB format is binary and therefore far less human-readable.
GPB-encoded data are thus suited to machine consumption, in
exchange for high transmission efficiency. In the O&M system of an
intent-driven campus network, raw data are described in a structure
defined in the .proto file and encoded based on the YANG model.
Table 7.4 compares GPB encoding and decoding.
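The size advantage of a binary, schema-driven encoding over JSON can be illustrated with a short Python sketch. Note that `struct` is only a stand-in for GPB here (real ProtoBuf uses field tags and varints), and the KPI field names are hypothetical; the point is that when the schema travels out of band, only the values travel on the wire.

```python
import json
import struct

# One hypothetical KPI sample: interface index, rx/tx packets, drops
sample = {"ifindex": 7, "rx_pkts": 123456789, "tx_pkts": 987654321, "drops": 17}

# Text encoding: field names and digits are carried as characters
json_bytes = json.dumps(sample).encode("utf-8")

# Binary encoding: the fixed schema "!IQQI" plays the role of the .proto
# file, shared out of band, so only the raw values are serialized
bin_bytes = struct.pack("!IQQI",
                        sample["ifindex"], sample["rx_pkts"],
                        sample["tx_pkts"], sample["drops"])

print(len(json_bytes), len(bin_bytes))  # the binary record is much smaller
```

The binary record is 24 bytes regardless of the field values, while the JSON text grows with field names and digit counts, which is the effect behind the 1/3 to 1/2 size ratio quoted above.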
d. Transport protocol
Telemetry supports two transport protocols: gRPC and
UDP. Telemetry-enabled network devices in intent-driven
campuses use the gRPC protocol to report the encoded sample
data to the SDN controller for storage. gRPC is a high-per-
formance, open-source, universal RPC framework designed
for mobile applications and HTTP/2. It also supports multiple
programming languages and SSL encrypted channels. gRPC
essentially provides an open programming framework, based
on which vendors can develop their own server or client
processing logic in different languages, shortening the
development cycle for product interconnection. Figure 7.13 shows
the gRPC protocol stack layers, with Table 7.5 describing each
of the layers.
The Telemetry-based data collection system provides the
SDN controller with accurate, real-time, abundant data sources,
laying a foundation for intelligent O&M. The Telemetry-based
data reporting system, on the other hand, enables wired and
wireless devices on the entire campus network to efficiently
collect and display data, contributing to intelligence and auto-
mation of the O&M system.
packets are sent at low speeds, no video stream in the buffer can
be played, leading to frame freezing. Jitter is thus reflected as
frame freezing, and a larger amount of jitter leads to a lower VMOS.
b. MDI
The MDI is a set of measures that can be used to quantify the
transmission quality of streaming media on the network. The
MDI is typically displayed as two numbers separated by a colon:
Delay Factor (DF) and Media Loss Rate (MLR).
The DF represents the latency and jitter of monitored video
streams, in milliseconds. It converts video stream jitter changes
into buffer requirements for video transmission and decoding
devices. A greater DF value indicates a larger amount of video
stream jitter. According to the WT126 standard, the DF value
of video streams during transmission should not exceed 50 ms.
The MLR, expressed in the number of media data packets lost
per second, indicates the rate of packet loss during the transmis-
sion of monitored media streams. The loss of video data pack-
ets directly affects video playback quality. As such, the desired
MLR value during the transmission of IP video streams is zero.
Typically, each IP packet contains seven TS frames. Therefore, if
one IP packet is lost, seven TS frames (excluding empty ones) are
lost. According to the WT126 standard, the maximum accept-
able MLR for standard definition/Video on Demand videos is
five media packets per 30 minutes, and that for high definition
videos is five media packets per 240 minutes.
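The MLR bookkeeping described above can be sketched as follows. The function names and the window scaling are illustrative; the numeric limits are the WT126 values quoted in the text (SD/VoD: five media packets per 30 minutes, HD: five per 240 minutes), and seven TS frames per IP packet is the typical payload mentioned above.

```python
WT126_LIMITS = {              # allowed lost media packets per reference window
    "sd_vod": (5, 30 * 60),   # 5 packets / 30 minutes
    "hd":     (5, 240 * 60),  # 5 packets / 240 minutes
}
TS_FRAMES_PER_IP_PACKET = 7   # typical payload: 7 TS frames per IP packet

def lost_ts_frames(lost_ip_packets):
    """Each lost IP packet drops up to seven non-empty TS frames."""
    return lost_ip_packets * TS_FRAMES_PER_IP_PACKET

def mlr_acceptable(lost_media_packets, window_seconds, profile):
    """Scale the observed loss to the WT126 reference window and compare."""
    limit, ref_window = WT126_LIMITS[profile]
    scaled = lost_media_packets * ref_window / window_seconds
    return scaled <= limit

print(lost_ts_frames(2))                      # 2 lost IP packets -> 14 TS frames
print(mlr_acceptable(3, 30 * 60, "sd_vod"))   # 3 losses in 30 min: True
print(mlr_acceptable(1, 600, "hd"))           # 1 loss in 10 min scales to 24/240 min: False
```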
c. eMDI
eMDI is enhanced MDI. Compared with VMOS and MDI,
eMDI reduces the packet parsing overhead. For audio and video
services transmitted over UDP, effective packet loss factors are
proposed in the forward error correction (FEC) and retransmis-
sion (RET) compensation mechanisms in order to accurately
describe the impact of packet loss on audio and video services
and improve the fault locating accuracy. For audio and video ser-
vices transmitted over TCP, the SDN controller analyzes infor-
mation such as the TCP sequence number and calculates the
packet loss rate and latency of upstream and downstream TCP
flows, which help with fault locating.
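The TCP-side idea can be sketched minimally, assuming we only observe (sequence number, payload length) pairs at a tap point: a segment that revisits already-covered bytes is counted as a retransmission, a proxy for upstream loss. This is a simplification of what an eMDI implementation does (no SACK, reordering, or window handling).

```python
def tcp_loss_estimate(segments):
    """
    segments: list of (seq, payload_len) observed in order at a tap point.
    A segment whose sequence number falls below the highest byte already
    seen is counted as a retransmission.
    """
    expected = None
    retrans = 0
    for seq, length in segments:
        if expected is not None and seq < expected:
            retrans += 1                  # old data seen again: retransmission
        expected = max(expected or 0, seq + length)
    total = len(segments)
    return retrans / total if total else 0.0

# 1000-byte segments; byte 3000 is sent, then repeated -> one retransmission
flow = [(1000, 1000), (2000, 1000), (3000, 1000), (3000, 1000), (4000, 1000)]
print(tcp_loss_estimate(flow))  # 0.2
```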
FIGURE 7.18 Intelligent fault pattern library building based on expert experience.
After logging in to the SDN controller, O&M personnel can view the
network health evaluation of each listed site and view site details such as
the health and device registration status. The SDN controller displays the
following network health information:
The SDN controller also displays the running data of each single device in
real time, including the CPU, memory, traffic, and user quantity, and pro-
vides historical data curves for tracing. The running data of a single device
gives insights into the device’s health status, based on which O&M person-
nel can easily identify abnormal devices and adjust the network efficiently.
As more campus networks are going wireless, radio monitoring is
increasingly important during network maintenance. Wireless networks
differ greatly from wired networks because wireless networks transmit
data using electromagnetic waves. Therefore, wireless networks are vul-
nerable to interference from surroundings, especially when there are a
large number of interfering devices, such as Bluetooth devices and
microwave ovens.
1. Connectivity issues
When a large number of users fail to access the network or access
the network at low speeds, a group fault may have occurred. Group
faults, especially authentication and access failures, have significant
impacts on the network. Such faults must be rectified as soon as pos-
sible; otherwise, a large number of users will be affected.
Users fail to access the network due to various reasons, and not
all the access failures are caused by network faults. As shown in
Figure 7.32, the SDN controller uses machine learning algorithms
to generate baselines by training a significant amount of historical
data, implementing intelligent detection of abnormal network access
behaviors and accurate identification of network faults.
The SDN controller generates an access baseline by training his-
torical big data. Access failures and anomalies within the baseline
are considered individual issues. When the number of access fail-
ures exceeds the baseline, the SDN controller can automatically
determine that an anomaly has occurred and then identify the fault
pattern and analyze the root cause. In addition, the SDN controller
abstracts characteristics of terminals that fail to access the network
and performs group fault analysis using clustering algorithms. Based
on login logs of terminals and KPI data, the SDN controller extracts
possible root causes and provides troubleshooting suggestions for
O&M personnel to rectify the faults.
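The baseline logic can be sketched with simple statistics. The real system trains its baselines with machine learning on historical big data; the mean-plus-k-standard-deviations rule and the threshold k = 3 below are illustrative stand-ins, as are the sample counts.

```python
import statistics

def failure_baseline(history, k=3.0):
    """Upper baseline from historical access-failure counts: mean + k * stdev."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    return mean + k * std

def classify(history, current, k=3.0):
    """Within the baseline: individual issue; above it: suspected group fault."""
    return "group_fault" if current > failure_baseline(history, k) else "individual"

hourly_failures = [4, 6, 5, 7, 5, 6, 4, 5]    # hypothetical history
print(classify(hourly_failures, 6))            # within baseline: individual
print(classify(hourly_failures, 60))           # far above baseline: group_fault
```

Failures classified as individual issues are handled per user, while a group-fault verdict triggers the pattern matching and root-cause analysis described above.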
2. Air interface performance issues
Wireless networks transmit data using electromagnetic waves,
and wireless signals are vulnerable to interferences in the wireless
environment. Therefore, air interface issues are common on wire-
less networks. To address such issues, the SDN controller builds
fault pattern baselines for six typical models of air interface
performance issues based on big data and evaluates collected
performance data against these baselines.
3. Roaming issues
As access locations of wireless terminals change constantly, the
terminals frequently roam between devices. In comparison, the
majority of terminals on wired networks access the network at fixed
locations; even if a wired user’s access location changes, the user
goes offline first and comes online again later. Wireless terminals,
by contrast, may move at any time, so roaming is a basic service
feature of wireless networks. User roaming involves interactions
between the user and multiple devices and may require
re-authentication, making the process complex and prone to
exceptions. Roaming experience also depends on terminal
performance, as terminals differ in roaming sensitivity: in the same
area, more sensitive and aggressive terminals roam preferentially,
while less aggressive ones may stick to the originally connected AP
for a prolonged time, resulting in poor experience. The SDN
controller can analyze
roaming issues and identify common issues, so that O&M personnel
can optimize the network accordingly to improve user access and
roaming experiences. Roaming issues are classified into repeated
roaming issues and roaming faults.
In repeated roaming, a terminal roams multiple times between
APs within a short time period and the terminal’s KPIs before and
after roaming are poor. Roaming faults include roaming failures and
long roaming duration.
Repeated roaming is typically caused by poor signal coverage.
When a terminal in an area is covered by multiple APs, all with
weak signals, the terminal is triggered to roam upon detecting the
poor signal strength. If the signal is still weak after the roam, the
rate is low, and many packets are lost, the terminal roams again,
cycling between multiple APs with poor signal quality. When
repeated roaming occurs, O&M personnel need to enhance the
signal strength or add APs as soon as possible.
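A minimal sketch of repeated-roaming detection follows, under assumed thresholds: the 60-second window, the three-roam count, and the -70 dBm floor are illustrative, not product defaults.

```python
def repeated_roaming(events, window=60, min_roams=3, rssi_floor=-70):
    """
    events: time-ordered (timestamp_s, target_ap, rssi_after_roam) tuples
    for one terminal. Flags the terminal when it roams min_roams or more
    times inside `window` seconds and the signal after each of those roams
    is still at or below rssi_floor (dBm).
    """
    for i in range(len(events)):
        burst = [e for e in events
                 if 0 <= e[0] - events[i][0] <= window]
        if len(burst) >= min_roams and all(r <= rssi_floor for _, _, r in burst):
            return True
    return False

sticky = [(0, "AP1", -78), (20, "AP2", -75), (35, "AP1", -80), (50, "AP3", -76)]
healthy = [(0, "AP1", -78), (400, "AP2", -55)]
print(repeated_roaming(sticky), repeated_roaming(healthy))  # True False
```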
Group roaming faults are not roaming failures or exceptions
of single users. The SDN controller collects statistics on continuous
roaming data of an AP to determine whether a group roaming fault
has occurred.
FIGURE 7.37 Group device fault detection on the wired and wireless integrated
topology.
FIGURE 7.40 An AP’s transmit power decreases when the number of neighbor
APs increases.
FIGURE 7.41 When an AP goes offline or fails, the transmit power of neighbor
APs increases.
FIGURE 7.43 Channel allocation results on the 2.4 and 5 GHz frequency bands.
FIGURE 7.44 Bandwidth allocation results on the 2.4 and 5 GHz frequency bands.
The mobile O&M app currently serves wireless networks, with support for
wired network O&M also planned. The app offers four main functions:
information import by scanning barcodes, device deployment, network
diagnosis, and network monitoring.
2. App-based deployment
In small and micro branches, a single AP is often capable of
meeting network requirements. Network uplinks are usually
Internet lines provided by carriers and a Point-to-Point Protocol
over Ethernet (PPPoE) account needs to be configured for Internet
access in dial-up mode. Traditionally, O&M personnel can log in to
an AP from a laptop or PC and perform account and service con-
figurations on the web system. However, if APs need to be deployed
in a large number of stores and the construction and deployment
personnel do not possess necessary network O&M skills or tools,
professional deployment personnel are required. This increases the
overall deployment time and costs. As shown in Figure 7.49, after the
mobile O&M app connects to an AP and simple configurations are
performed, the AP is ready for deployment. This simplified process
dramatically improves deployment efficiency and reduces associated
costs, while also requiring minimal deployment skills. Thanks to the
mobile O&M app, even installation personnel can successfully com-
plete deployment and bring APs online.
3. Network diagnosis
Network services can be configured in advance by network
administrators or configured after network devices are deployed and
registered. When the network is operating properly, deployment per-
sonnel can perform simple network acceptance or diagnosis tasks
to ensure that services are running correctly. For example, they can
check whether wireless signals can be connected, access the external
network to check network connectivity, or carry out roaming tests
to check the network coverage status. If a network fault occurs, local
IT O&M personnel can use the mobile O&M app for troubleshoot-
ing. They can also test network connectivity and verify the quality
of video and gaming user experiences to proactively determine net-
work faults and quickly remedy simple issues.
The mobile O&M app leverages the diagnosis function of network
devices to remotely display network diagnosis information. The net-
work diagnosis functions include:
a. Ping: APs can ping STAs or other devices to check connectivity
between the local and external networks. Also, the SDN control-
ler can ping APs to check connectivity between the APs and SDN
controller.
b. Throughput testing: The AP or network throughput can be tested
to check network performance.
c. Roaming testing: Network roaming performance can be tested,
including the roaming time and roaming effect.
d. Game testing: The gaming experience on the current network
can be tested by simulating games using the mobile O&M app to
evaluate the network performance.
e. Video testing: The video transmission performance of the cur-
rent network can be tested by playing network videos and exam-
ining the video playback effects.
f. Intelligent diagnosis: CPU, memory, and other anomalies of a
specific device can be diagnosed.
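The connectivity check described in item a can be sketched as follows. This is a stand-in: real APs run ICMP pings on the device side, whereas this sketch simply tests whether a TCP connection can be opened from the local host, and the target names are hypothetical.

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Stand-in for a ping check: can a TCP connection be opened to host:port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def diagnose(targets):
    """Run the reachability check against a dict of name -> (host, port)."""
    return {name: tcp_reachable(*addr) for name, addr in targets.items()}

# Hypothetical target: a local port that is almost certainly closed
report = diagnose({"local_unused_port": ("127.0.0.1", 9)})
print(report)
```

A controller-side diagnosis function of this shape can be extended per check type (throughput, roaming, video), each returning a structured verdict for the app to display.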
4. App-based monitoring
In most cases, O&M personnel are required to log in to the man-
agement system using a PC or laptop in order to perform network
O&M. With the mobile O&M app, O&M personnel can monitor the
network using mobile phones anytime and anywhere, even when
off duty or on business trips. Figure 7.51 shows the device and
traffic monitoring screens on the mobile O&M app.
FIGURE 7.51 Device and traffic monitoring screens on the mobile O&M app.
O&M personnel use the mobile O&M app to check basic monitoring
data, enabling them to monitor the network status anytime and any-
where. The mobile O&M app displays the following monitoring data:
a. Site device status, including the registration and running status
of all site devices of a tenant
b. AP information, such as the IP address, version number, running
status, and connected terminals
c. Traffic data, including traffic statistics for the current day or
week, top SSIDs, and top APs
d. Terminal information, including the IP address, access time,
access duration, signal strength, accumulated traffic volume, and
RET rate of terminals
Chapter 8
E2E Network Security on Campus Network
Before configuring security services, you must first create related secu-
rity zones. Then you can deploy security services based on the priorities of
the security zones.
Typically, three security zones will suffice in a simple environment with
only a small number of networks. The three security zones are described
as follows:
and security levels for the security assets. A security zone consists of
security assets with the same security policies and security levels. Finally,
you can design different security defense capabilities for different security
zones based on their potential risks.
It is recommended that security devices, such as firewalls, be deployed
between different security zones to ensure border defense and isolation,
as shown in Figure 8.3. For example, in a campus network scenario, even
though it is considered a secure network, it will inevitably confront secu-
rity challenges due to being connected to the Internet. Therefore, when
creating security zones, we allocate the Internet to the untrusted zone and
the campus network to the trusted zone; and deploy security devices at the
campus network egress to isolate the campus network from the Internet
and defend against external threats. In addition, we usually allocate the
data center to the DMZ, and deploy security devices in the DMZ to iso-
late traffic between the campus intranet and servers in the data center.
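The default-deny, inter-zone decision described above can be sketched as a small rule lookup. The zone names, security levels, and permitted flows below are illustrative, not a vendor configuration.

```python
# Hypothetical zone table: higher number = higher security level
ZONES = {"untrust": 5, "dmz": 50, "trust": 85}

# Explicit inter-zone rules; anything not listed is denied by default
RULES = {
    ("trust", "untrust"): "permit",   # campus users may reach the Internet
    ("trust", "dmz"):     "permit",   # intranet may reach data-center servers
    ("untrust", "dmz"):   "permit",   # outside users may reach published services
}

def decide(src_zone, dst_zone):
    """Default-deny decision for traffic crossing a zone border."""
    if src_zone not in ZONES or dst_zone not in ZONES:
        raise ValueError("unknown zone")
    if src_zone == dst_zone:
        return "permit"               # intra-zone traffic is not filtered here
    return RULES.get((src_zone, dst_zone), "deny")

print(decide("trust", "untrust"))    # permit
print(decide("untrust", "trust"))    # deny: no rule from Internet into campus
```

Notice that no rule permits untrust-to-trust traffic, which is exactly the isolation the egress firewall enforces between the Internet and the campus intranet.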
office and email systems used by enterprise employees, and the sys-
tems and versions used by corporate web servers.
Step 2 Establishing a foothold: Attackers employ social engineering
to implant malware in the target network using phishing emails,
web servers, and USB flash drives, and wait for target users to open
email attachments, URL links, and files in USB flash drives, or access
“watering hole” websites.
Step 3 Escalating privileges: Once the attackers have established a foot-
hold, malware unleashes a range of viruses. For example, it can
install the remote access Trojan (RAT) in the target system or initiate
encrypted connections to the server specified by the attacker to make
the server download, install, and run even more malicious programs.
Then, malicious programs escalate their permission or add administra-
tors, and they set themselves to start upon system startup or start as a
system service. Some malicious programs may even modify or disable
the firewall settings of their victims in the background to ensure they
remain undetected for an extended period. When attackers success-
fully establish a foothold in the target network and obtain the corre-
sponding permissions, the victim computers become zombies, leaving
them with no choice but to wait for the attackers’ further exploitation.
E2E Network Security on Campus Network ◾ 341
Step 4 Causing damage or exfiltrating data: After all preparations are
complete, attackers can wait for the right moment to cause damage
or exfiltrate data. For example, attackers can use the RAT’s keystroke
logging and screen recording functions to obtain users’ domain,
email, and server passwords and remotely log in to various internal
servers, such as internal forums, team spaces, file servers, and code
servers, through zombies and steal valuable information.
FIGURE 8.5 Big data and AI-based intelligent security collaboration system
defending against APTs.
viewed as a race against time. Therefore, reducing the time from threat
intrusion to damage repair is key to reducing economic and data loss.
As mentioned above, the WannaCry ransomware caused losses to
more than 240,000 victims; however, compared with highly complex
attacks such as Stuxnet, WannaCry was not technically sophisticated.
Rather, it spread quickly and infected hosts on a massive scale.
Even though vendors claim that they can detect the WannaCry
virus, customers are more concerned about whether infected com-
puters can be quickly located and isolated to prevent the virus from
spreading to the internal network. They also require that infected com-
puters can be quickly repaired. Therefore, the automatic response and
repair capability need to be advanced to satisfy the needs of customers.
The big data-powered security collaboration solution uses control-
lers to collaborate with network devices and quickly handle threats.
This solution uses the AI-based threat analysis capability to quickly
detect and respond to unknown worms, for example, blocking port
445 of the egress firewall and router, and updating the IPS signa-
ture database. The solution’s key mechanism uses network devices as
executors which collaborate with access switches through control-
lers to isolate infected computers in a timely manner. It also collects
traffic through network nerve endings, locates the infection path,
instructs associated terminal software to automatically clear worms,
pushes patches in batches to assist O&M personnel in fixing vulner-
abilities, and automatically releases tools to restore encrypted files.
3. Automated policy O&M
Larger enterprises usually have more complex networks. Security
devices guarding networks against threats accumulate massive
amounts of security policies as time goes by, making security policy
O&M the top issue for large- and medium-sized organizations and
enterprises. For example, in the network of a financial services cus-
tomer, the data center may include more than 500 firewalls, with
each firewall including tens of thousands of security policies. The
data center’s firewall policies need to be adjusted each time services
are updated; therefore, thousands of policies may need to be updated
on a daily basis, thereby making policy O&M extremely difficult. In
addition, notifications such as service offline and IP address recla-
mation cannot be delivered to the network security department in a
1. Data collection
Data collection includes both log collection and traffic collection,
which are implemented by the log collector and the flow probe,
respectively.
The log collection process includes log receiving, categorization,
formatting, and forwarding, while the traffic collection process
5. Threat interworking
The analyzer generates an interworking policy based on suspi-
cious analysis results and delivers it to all NEs on the network. This
policy contains precise control instructions, enabling the NEs to
block any suspicious threats.
the straight line, a threat exists. Otherwise, no threat exists. As such, this
method proves unreliable in terms of detection and false positive rates.
To address this, Huawei’s security team has meticulously analyzed the
advantages and disadvantages of dynamic behavior and static data, and
has taken the lead in adding dynamic behavior to the machine learning
pipeline, an approach they call machine learning of dynamic behavior.
Simply put, based on the experience of security experts, dynamic behav-
ior (including function names, parameters, and return values) is digitized
through probability statistics to form eigenvectors. Supervised
learning with the random forest algorithm is then applied to
establish a detection model for each malware family. This new
approach significantly improves detection rates.
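The digitization step, turning a dynamic-behavior trace into an eigenvector through probability statistics, can be sketched as follows. The API-call vocabulary is hypothetical, and the downstream random forest and CNN classifiers are not reproduced here.

```python
from collections import Counter

# Hypothetical vocabulary of API calls observed in sandboxed runs
VOCAB = ["CreateFile", "RegSetValue", "Connect", "WriteProcessMemory", "Sleep"]

def behavior_vector(call_trace):
    """
    Digitize one dynamic-behavior trace into an eigenvector: the relative
    frequency of each known call, i.e. simple probability statistics.
    """
    counts = Counter(c for c in call_trace if c in VOCAB)
    total = sum(counts.values()) or 1
    return [counts[c] / total for c in VOCAB]

trace = ["CreateFile", "Connect", "Connect", "WriteProcessMemory"]
print(behavior_vector(trace))  # [0.25, 0.0, 0.5, 0.25, 0.0]
```

Vectors of this fixed length are what a per-family supervised model (random forest in the text) consumes as training and inference input.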
However, during the process of continuously improving detection rates,
the random forest algorithm also encountered a roadblock: how can non-
linear problems be addressed? The team of experts then employed the deep
learning route, utilizing the convolutional neural network (CNN) algo-
rithm based on backpropagation (BP), as shown in Figure 8.9. Through
meticulous design and exploration of convolutional layer parameters, and
selection of filters for each layer, the team finally achieved effective results.
As more general-purpose graphics processing units (GPUs) are invested
in computing, the parameters of each CNN layer are no longer restricted,
contributing to scalability and automatic improvement of the BP-based
CNN algorithm. For example, as shown in Figure 8.10, through machine
learning of dynamic behavior, the BP-based CNN algorithm transforms the
FIGURE 8.14 Proportions of some cipher suites in black and white samples.
Open Ecosystem
for an Intent-Driven
Campus Network
the programmable object. For individual developers and even small- and
medium-sized development teams, building a development and test envi-
ronment is expensive. Therefore, the support service for a network solu-
tion will involve providing a development environment, for example, an
online remote lab, in addition to traditional expert support.
A network operating system’s commercial monetization capability also
differs from a stand-alone operating system because profit generation
through third-party applications relies heavily on network solution pro-
viders. Therefore, network solution providers need to offer comprehensive
operations and marketing platforms that can help developers promote and
sell third-party applications, in order to attract continuous investment
from developers in platform application development.
Shopping malls and supermarkets can select the most suitable mode
for themselves.
a. Authorization API: After authenticating a user terminal, the
third-party authentication and authorization platform invokes
the authorization API to instruct the SDN controller to authorize
the user terminal.
b. HTTPS+RADIUS: The SDN controller functions as a RADIUS
client to interwork with the third-party authentication and
authorization platform using RADIUS.
2. LBS
For shopping malls and supermarkets, the key to Wi-Fi VASs is
to obtain the location of user terminals. By locating terminals, shop-
ping malls and supermarkets can provide LBS, such as navigation
service, for customers. Terminal location data also gives insights into
customers’ consumption habits, allowing shopping malls and super-
markets to target promotional content. Terminal location informa-
tion can be broken down into two types: accurate terminal location
expressed using coordinates (x, y), and the received signal strength
indicator (RSSI) detected by an Access Point (AP). The SDN con-
troller sends the terminal location information to the LBS server
through an API, so that shopping malls and supermarkets can ana-
lyze customer flow, push promotions to terminals, and provide navi-
gation and other related applications to terminals. Figure 9.3 shows
the LBS architecture.
The SDN controller can aggregate terminal location data collected
by APs and periodically send the data to the third-party LBS server.
The third-party LBS server applies data analysis algorithms to ana-
lyze the data, providing VAS applications such as heat map, tracking,
and customer flow analysis for shopping malls and supermarkets
based on the analysis result. Location data can be sent to the third-
party LBS server in any of the following ways:
a. Hypertext Transfer Protocol Secure (HTTPS) + JavaScript Object
Notation (JSON): APs send detected RSSI information to the
SDN controller, which then releases the RSSI information to the
third-party LBS server.
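Converting an AP-reported RSSI into an approximate distance, a building block for coordinate-based location, can be sketched with the log-distance path-loss model. The reference power at 1 m and the path-loss exponent below are illustrative and must be calibrated per site.

```python
def rssi_to_distance(rssi_dbm, tx_ref_dbm=-40.0, path_loss_exp=3.0):
    """
    Log-distance path-loss model: rssi = tx_ref - 10 * n * log10(d),
    solved for d, where tx_ref is the RSSI at 1 m and n the environment
    exponent. Both parameters are assumptions, not measured values.
    """
    return 10 ** ((tx_ref_dbm - rssi_dbm) / (10 * path_loss_exp))

print(round(rssi_to_distance(-40.0), 1))   # 1.0 m (at the reference point)
print(round(rssi_to_distance(-70.0), 1))   # 10.0 m with n = 3
```

With distance estimates from three or more APs at known positions, the LBS server can trilaterate an (x, y) coordinate for the terminal.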
Open Ecosystem for Campus Network ◾ 375
3. Crowd profiling
Precision marketing is an important marketing technique in the
retail industry that requires retailers to obtain crowd profiles and iden-
tify target customers. Once this information has been obtained, retail-
ers can deliver personalized services or push specific information to
these target customers. Crowd profiling requires two types of data:
online data, which can be collected using an online system, and
offline data, which can be collected using Wi-Fi probes or, when the
data are carried over the network, by the network itself. As shown in
Figure 9.4, the SDN controller provides terminal-related offline data to
the big data analytics platform through the LBS API and VAS API.
4. Network O&M
In some scenarios, customers or Managed Service Providers
(MSPs) may need to use a third-party network management system
(NMS) to manage or monitor devices managed by the SDN control-
ler. Typical management and monitoring operations on a third-party
NMS are creating tenant administrator accounts, managing devices,
configuring networks for specified devices, and monitoring device
status and alarms. In such scenarios, network service O&M can be
implemented in two modes, as shown in Figure 9.5.
a. Network service API: The third-party NMS interconnects with
the SDN controller through a RESTful API to manage and moni-
tor devices.
5. Smart IoT
When partners want to deploy IoT applications (such as elec-
tronic shelf label, IoT positioning, energy efficiency management,
and asset management), they can use the network infrastructure to
provide IoT signal coverage (ZigBee, Bluetooth, Radio Frequency
Identification (RFID), etc.), without the need to deploy a second net-
work. This reduces the capital expenditure for customers. Figure 9.6
shows the smart IoT architecture.
In this scenario, Huawei provides the network infrastructure,
open AP hardware, and basic management and monitoring func-
tions for IoT cards, while partners develop IoT card applications,
card management software, and IoT service software. APs commu-
nicate with IoT cards through console ports or network ports, and
IoT services can be provided based on the network infrastructure
through cooperation between Huawei and its partners.
the network and extracts, compares, and analyzes large amounts of data.
The data analysis results provide insight into users’ consumption behav-
iors and the stores or commodities that are more attractive to consumers.
Then, enterprise customers from the large shopping mall/supermarket,
retail, airport, and hospitality industries can use these insights to deliver a
personalized consumption experience to consumers. In this way, enter-
prise customers can accurately grasp business dynamics, gain more busi-
ness value, improve sales efficiency, and achieve business success. Table 9.2
describes the benefits of the commercial Wi-Fi solution.
Intent-Driven Campus Network Deployment Practices
TABLE 10.5 Overview of the Campus Network Requirements Survey — Network Security
Requirement | Survey Content and Analysis
4.1 Service security: service isolation and interoperability requirements | Determine whether network services need to be isolated and how to isolate them. Physical isolation means construction of multiple networks and the separate design of each network. VXLAN is recommended for logical isolation: with VXLAN technology, a campus network is virtualized into multiple virtual networks to carry different services. Another point worth considering is whether interoperability is required between different services. If interoperability is required, develop interoperability policies and solutions in advance
4.2 Security defense against external threats | Deploy security devices such as the firewall, intrusion prevention system (IPS), intrusion detection system (IDS), and network log audit devices to protect network border security. If a customer has high network security requirements, independent security devices are recommended. Otherwise, use integrated security devices, such as Unified Threat Management (UTM) or value-added security service cards
4.3 Internal network security defense | Use online behavior management software or dedicated devices to prevent security incidents caused by internal users
4.4 Terminal network security defense | Terminal network security defense includes terminal access security check and terminal security check, which determines whether a network access control (NAC) solution is required
Intent-Driven Campus Network Deployment Practices ◾ 397
TABLE 10.6 Overview of the Campus Network Requirements Survey — Network Scale
Requirement | Survey Content and Analysis
5.1 Number of wired users or APs | Determine the port quantity and density of access switches and the approximate network bandwidth requirements based on the service survey findings
5.2 (Optional) Number of wireless users | Determine the WAC specifications and AP quantities, and whether there is a need for high-density access in key areas such as conference rooms
5.3 (Optional) Specifications of the legacy network | Estimate the network reconstruction workload and determine the network upgrade solution, covering device reusability, compatibility, and seamless upgrade
5.4 Network scale for the next 3–5 years, or the highest growth rate in recent years | When designing network interfaces, ensure capacity expansion and smooth upgrade are carefully considered so that the design can be future-proof for the next 3–5 years. Network scale includes both the user scale (number of users or terminals) and the service scale (service type, bandwidth, quantity, and scope)
5.5 (Optional) Branch offices | Consider the connection mode (private line or VPN) between the branches and the headquarters and determine whether link backup is required
TABLE 10.7 Overview of the Campus Network Requirements Survey — Terminal Type
Requirement | Survey Content and Analysis
6.1 Wired user terminals, including desktop computers | Consider the network interface card (NIC) rate of terminals
6.2 Wireless user terminals, including portable computers, smartphones, and mobile smart devices such as tablets | Determine the supported wireless protocols/standards, access frequency band, access authentication mode, whether to use unified authentication for wired and wireless networks, whether to allow guest access and which zones are accessible to guests, as well as the power supply for terminals
6.3 Dumb terminals, including IP phones, network printers, and IP cameras | Determine the access and authentication solutions for dumb terminals
(Continued)
1. Construction objectives
The construction objectives include multinetwork convergence,
advanced architecture, and on-demand expansion.
a. Multi-network convergence: A comprehensive network capable
of carrying wired, wireless, and Internet of Things (IoT) services
must be constructed to meet the access requirements of various
data terminals and sensors at any location.
b. Advanced architecture: The overall architecture should be industry-leading in terms of performance, capacity, reliability, and technology application, and must be future-proofed to meet requirements over the next 5–10 years.
c. On-demand expansion: The overall network architecture must
meet the requirements for coverage area, terminal numbers, and
services, and should be expandable on demand without architec-
ture adjustment.
2. Requirements analysis
University A’s computer center is responsible for network con-
struction and Operation & Maintenance (O&M). The center wants
to introduce network virtualization and software-defined network-
ing (SDN) technologies in order to build a campus network as a ser-
vice that can uniformly carry and flexibly deploy multiple services
such as teaching and scientific research. Table 10.8 lists the specific
network construction requirements.
FIGURE 10.3 Abstracted network model for the intent-driven campus network.
FIGURE 10.5 BGP routing protocol planning on the underlay network (without
an RR).
FIGURE 10.6 BGP routing protocol planning on the underlay network (border
node as an RR).
function as border nodes and edge nodes, respectively. The entire
network uses VNs, simplifying service provisioning and network
management. The border node taking on the role of user gateway is
deployed in centralized mode.
each other, but routes for users on different VNs are isolated. VNs can
be planned based on the following principles:
a. An independent service department is considered a VN.
b. VNs are not used to isolate users of different levels in the same
service department. Instead, intergroup policies of security
groups, which are divided based on user roles, achieve this result.
There are four VLAN access modes for VN subnets, and Table
10.12 describes the application scenarios of each mode. The dynami-
cally authorized VLAN mode requires users to go online again dur-
ing Portal authentication and is therefore not recommended for
Portal authentication.
4. VN communication planning
In the virtualized campus network solution, VNs are isolated at
Layer 3 using VPNs and cannot communicate with each other by
default, unless they reside on the same overlay network or commu-
nicate through firewalls. Table 10.13 describes the application sce-
narios of the two VN communication modes.
Figure 10.9 shows how VN communication traffic travels within
the overlay network. Here, to achieve VN communication, configure
VNs and their subnets and then import each other’s routes.
Figure 10.10 shows how VNs communicate through firewalls.
Different VNs connect to different security zones on the firewalls
through different logical egresses. Interzone policies are deployed on
the firewalls to implement VN communication.
FIGURE 10.11 Firewall security zone division when user gateways are located
inside the overlay network.
FIGURE 10.15 Traffic model for security policies based on security groups.
world. Huawei employees handle more than 2.8 million emails and hold
more than 80,000 meetings every day.
The massive amounts of services mentioned above are enabled by more
than 600 IT applications, 180,000 network devices, and a high-performance
backbone network with a total bandwidth of more than 480 Gbit/s.
These IT applications and underlying network devices effectively support
the smooth, orderly, and efficient operations of Huawei’s overall business
around the world. As shown in Figure 11.1, Huawei IT has undergone four
distinct development phases: localization, internationalization, globaliza-
tion, and digitalization.
1. Phase 1: Localization
Before 1997, Huawei’s IT network was mainly based on the
local campus network in the company’s Shenzhen headquarters.
Meanwhile, Huawei began building WANs and private lines to meet
the needs of branch offices in China, as well as leasing carriers’ net-
work resources to facilitate interconnection between the company’s
branch offices and headquarters in Shenzhen.
2. Phase 2: Internationalization
During the early 2000s, Huawei introduced large-scale IT office
systems into the company’s different departments, including R&D,
marketing, finance, and supply chain, to support its rapid growth.
Huawei IT also started to build data centers and Multiprotocol Label
the campus network also needs to transform itself to adapt to the changes
brought by digital transformation. Key challenges include the following:
To effectively address the above challenges, the network needed for digital
transformation must meet the following four fundamental principles:
• The first layer (also known as the top layer) is the service layer oriented
to each scenario segment and is supported by Huawei’s HIS platform.
Typical scenario segments include the operation scenario, shared ser-
vice scenario, and campus office scenario. It supports different appli-
cations based on the different requirements of each scenario.
• The second layer is the platform layer. This layer provides vari-
ous professional application platform services. It covers the big
data platform, operation support platform, and security platform.
It uses open interfaces to provide basic support for upper-layer
applications.
• The third layer is the network layer. This layer implements full con-
nectivity through cloudification. For example, the MPLS private
network is constructed through the cloud backbone network and
carriers’ private line networks, and wireless and wired networks
are built on the campuses of each research center and branch office.
These deployments all realize full connectivity.
• The bottom layer includes a large number of terminals. Various ter-
minals are on-boarded, authenticated, and managed in a unified
manner to meet different service requirements.
FIGURE 11.2 Three SSIDs for Huawei’s all-scenario wireless campus network.
In the future, an increasing number of IoT terminals will access the campus
network; therefore, an independent SSID will be planned for IoT access on
the existing Wi-Fi network. In addition, IoT access will go through Media
Access Control (MAC) address authentication as well as equipment serial
number (ESN) verification.
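A simplified sketch of that two-step admission check, MAC address authentication followed by ESN verification, might look like the following. The data structures and return values are illustrative assumptions, not the product's actual implementation.

```python
def admit_iot_terminal(mac, esn, mac_whitelist, esn_registry):
    """Admit an IoT terminal only if its MAC passes authentication AND the
    equipment serial number matches the one registered for that MAC.

    `mac_whitelist` and `esn_registry` stand in for whatever authentication
    backend the network actually uses (illustrative only).
    """
    mac = mac.lower()
    if mac not in mac_whitelist:
        return (False, "MAC authentication failed")
    if esn_registry.get(mac) != esn:
        return (False, "ESN verification failed")
    return (True, "admitted")

whitelist = {"5c:f9:38:11:22:33"}
registry = {"5c:f9:38:11:22:33": "ESN-0042"}
print(admit_iot_terminal("5C:F9:38:11:22:33", "ESN-0042", whitelist, registry))
# (True, 'admitted')
```

Requiring both checks means a spoofed MAC address alone is not enough for an attacker's device to join the IoT SSID.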
resources can automatically move with users, thereby ensuring user expe-
rience and security.
As shown in Figure 11.3, Huawei’s free mobility solution deploys a
unified policy control center on the campus network, which allows the
network administrator to formulate service policies based on employees’
identities in advance, without the need to consider employees’ IP addresses
or network access locations. In this solution, network access devices
report employees’ identity information to the unified policy control cen-
ter regardless of whether employees choose to access the network from
Beijing, Shenzhen, or London, or from any building in an office space.
Then, the policy control center notifies network access devices of the ser-
vice policies to be executed. In this way, employees’ service access rights
and user experience remain consistent, and network resources remain
accessible to employees. Huawei appropriately named the solution “free
mobility” because the solution makes it seem like network resources are
moving around freely with employees. Free mobility addresses the poor
mobile office experience issues that have troubled users for many years.
Such advantages have led to it being widely used on the campus networks
of both Huawei and global enterprise customers.
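The essence of free mobility, rights bound to identity rather than to IP address or access location, can be reduced to a lookup like the one below. The roles and resource names are invented for illustration.

```python
# Service policies formulated in advance, keyed by identity (not IP/location).
POLICIES = {
    "rnd_engineer": {"code-repo", "build-farm", "mail"},
    "finance":      {"erp", "mail"},
}

def access_rights(user_role, access_location=None, user_ip=None):
    """Return the services a user may reach.

    Location and IP are accepted but deliberately unused: the policy
    follows the user's identity wherever the user roams.
    """
    return POLICIES[user_role]

# The same employee gets identical rights in Shenzhen and London:
print(access_rights("rnd_engineer", "Shenzhen", "10.1.2.3")
      == access_rights("rnd_engineer", "London", "192.168.9.9"))  # True
```

The real solution's policy control center performs this lookup when access devices report a user's identity, then pushes the resulting policy back to whichever device the user is currently attached to.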
Rough statistics have shown that Huawei possesses more than 400,000
assets in total, including over 13,000 instruments and meters as well as
120,000 electronic devices within the R&D department alone, as shown
in Figure 11.6. Effectively managing and utilizing these fixed assets from
around the world is a major challenge for Huawei to address.
Since 2014, Huawei has used radio frequency identification (RFID)
tags to identify and manage various assets; however, the widespread use
of RFID posed many challenges. For example, it required the deployment
of a large number of RFID tag readers to manage assets. These card
readers needed separate power supplies and network cables, leading to
relatively low management efficiency. A management system was also
absent for these readers; therefore, it was impossible to detect reader faults
quickly. In this case, the relevant administrators did not know whether
an RFID reader was faulty or had been relocated until an asset report was
Huawei IT Best Practices ◾ 435
FIGURE 11.9 Campus evolution from the digital era to the intelligence era.
Intent-Driven Campus Network Products
1. Multi-functional
AirEngine WACs provide Portal or 802.1X authentication through
a built-in server, reducing costs for customers.
2. Built-in application identification server
With a built-in application identification server, AirEngine WACs
provide the following features:
a. Identify over 6000 Layer 4 to Layer 7 applications, from common
office applications to point-to-point download applications, such
as Microsoft Lync, FaceTime, YouTube, and Facebook.
b. Support application-based policy control technologies, includ-
ing traffic blocking, traffic limiting, and priority-based schedul-
ing policies.
c. Support online application update in the application signature
database, without the need for a software upgrade.
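Application-based policy control of this kind boils down to a classification step followed by a per-application action table, roughly as sketched below. The application names echo those above, but the actions, rates, and DSCP values are invented examples, not the WAC's shipped policy.

```python
# Illustrative per-application policy table (actions and values are examples).
APP_POLICY = {
    "Microsoft Lync": {"action": "prioritize", "dscp": 46},     # real-time voice/video
    "YouTube":        {"action": "rate-limit", "kbps": 2048},   # cap streaming
    "BitTorrent":     {"action": "block"},                      # P2P download
}

def enforce(app_name):
    """Return the control action for an identified application;
    unidentified applications are permitted by default."""
    return APP_POLICY.get(app_name, {"action": "permit"})

print(enforce("YouTube")["action"])     # rate-limit
print(enforce("UnknownApp")["action"])  # permit
```

The hard part in a real WAC is the classification itself, which inspects Layer 4 to Layer 7 signatures; once a flow is labeled, applying the policy is a table lookup like this one.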
1. High performance
NetEngine AR series adopts all-new Solar AX architecture and
applies innovative CPU + NP heterogeneous forwarding to SD-WAN
customer premise equipment (CPE).
2. High reliability
NetEngine AR series boasts highly reliable features, including the
following:
a. Complies with carrier-grade design standards and provides reli-
able and high-quality services for enterprise users.
b. Offers hot swappable cards and comes with key hardware (such
as Main Processing Units (MPUs), power modules, and fans) in
redundancy mode, ensuring service security and stability.
c. Provides link backup for enterprise services, improving service
access reliability.
d. Detects and determines faults in milliseconds, minimizing service downtime.
3. Easy O&M
NetEngine AR series features a differentiated O&M design, including the following:
a. Supports multiple management modes, such as SD-WAN man-
agement, SNMP-based network management, and web-based
network management, simplifying network deployment.
4. Service convergence
NetEngine AR series integrates routing, switching, VPN, secu-
rity, and Wi-Fi functions to meet diversified enterprise service
requirements, save space, and reduce enterprise Total Cost of
Ownership (TCO).
5. Security
NetEngine AR series provides comprehensive security protection
capabilities with built-in firewall, intrusion prevention system (IPS),
URL filtering, and multiple VPN technologies.
6. SD-WAN support
1. Intelligent protection
Provides built-in Next-Generation Engine (NGE) and Content-
based Detection Engine (CDE). As the detection engine of the
NGFW, the NGE provides such content security functions as IPS,
antivirus, and URL filtering, protecting intranet servers and users
against threats. Meanwhile, the brand-new CDE virus detection
engine redefines malicious file detection by leveraging AI. In-depth
data analysis is available to quickly detect malicious files and improve
the threat detection rate, while inclusive AI has been designed to
help customers perform more comprehensive network risk assess-
ment, effectively cope with network threats on the attack chain, and
implement truly intelligent defense.
2. Simplified O&M
Integrates the cloud-based deployment solution, implementing
plug-and-play for more simplified, rapid network deployment. The
security controller is deployed as a component and interworks with
iMaster NCE-Campus to enable unified management and policy
delivery, effectively improving firewall O&M efficiency. The innova-
tive web UI 2.0 provides a new visualized security interface, greatly
improving usability and simplifying O&M.
3. Extensive IPv6 capabilities
Provides various IPv6 capabilities, including network switching,
policy control, security defense, and service visualization. These
advanced techniques enable governments, media and entertainment
agencies, carriers, ISPs, and financial services organizations to
implement IPv6 reconstruction.
4. Intelligent traffic steering
Provides both static and dynamic intelligent traffic steering across
multiple egress links. This function dynamically selects outbound
interfaces based on the link bandwidth, weight, and priority set by the
administrator, or on automatically detected link quality; it forwards
traffic to each link in different link selection modes and dynamically tunes
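One way to sketch this selection logic: first filter links by detected quality, then pick among the survivors by administrator-set priority and weight. The field names and thresholds below are assumptions for illustration, not the product's actual algorithm.

```python
def select_link(links, max_latency_ms=50.0, max_loss=0.01):
    """Pick an outbound link for a new flow.

    1. Keep links whose automatically detected quality meets the thresholds.
    2. Among those, prefer the highest administrator-set priority,
       breaking ties with the configured weight.
    Falls back to all links if every link is degraded.
    """
    healthy = [l for l in links
               if l["latency_ms"] <= max_latency_ms and l["loss"] <= max_loss]
    candidates = healthy or links
    return max(candidates, key=lambda l: (l["priority"], l["weight"]))

links = [
    {"name": "mpls",     "priority": 10, "weight": 5, "latency_ms": 20, "loss": 0.001},
    {"name": "internet", "priority": 5,  "weight": 8, "latency_ms": 35, "loss": 0.004},
    {"name": "lte",      "priority": 1,  "weight": 1, "latency_ms": 80, "loss": 0.02},
]
print(select_link(links)["name"])  # mpls
```

If the MPLS link's measured latency later exceeded the threshold, the same call would steer new flows to the Internet link instead, which is the "dynamic" half of the feature.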
12.6 IMASTER NCE-CAMPUS
iMaster NCE-Campus is an all-in-one management, control, and analysis
system developed by Huawei for campus and branch networks. Powered
by cloud computing, SDN, and big data analytics technologies, iMaster
NCE-Campus automatically and centrally manages underlay and overlay
networks to provide greater data collection and analysis capabilities than
traditional solutions. In addition, iMaster NCE-Campus centrally con-
trols access permissions, QoS, bandwidth, applications, and security poli-
cies of campus users. Driven by services, iMaster NCE-Campus provides
simple, fast, and intelligent campus virtualization service provisioning,
enabling the network to be more agile for services.
1. Network automation
a. Automated network deployment: Template-based design and
device plug-and-play significantly reduce the operating expense
of network deployment.
b. Automated VN provisioning: Models are abstracted, one multi-
functional network is achieved, and VXLAN services are provi-
sioned in minutes.
c. Automated policy deployment: User policies, such as network
access, QoS, bandwidth, application, and security, are cen-
trally configured and adjusted in real time to ensure user access
experience.
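Point a above, template-based design combined with plug-and-play, can be illustrated with a toy configuration template: the design is written once and stamped out per site. The variables and rendered directives below are invented for illustration and do not reflect iMaster NCE-Campus's actual template format.

```python
from string import Template

# A toy site template: per-site values are filled in at deployment time.
SITE_TEMPLATE = Template(
    "sysname $site_name\n"
    "vlan $mgmt_vlan\n"
    "wlan ssid $ssid\n"
)

def render_site(site):
    """Render a per-site configuration from the shared template, so that
    N sites need one design instead of N hand-written configurations."""
    return SITE_TEMPLATE.substitute(site)

print(render_site({"site_name": "Branch-01", "mgmt_vlan": 100, "ssid": "Office"}))
```

Combined with plug-and-play device registration, a new branch only needs its site-specific values entered; the controller renders and delivers the rest.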
Future Prospects of an Intent-Driven Campus Network
controller, its nervous system will be the physical network of the campus,
and its nerve endings will be various service terminals and the digital sys-
tems inside and outside the campus.
potential faults are predicted, the network will record the anomalies,
analyze their root causes, and optimize the network in real time. Once
optimization is complete, the network will continuously follow up the
optimization results until user experience returns to normal. What is
more, the entire process will not require any manual intervention.
If a network fault occurs without any precursor, the network will
automatically analyze information such as logs and alarms gener-
ated along with the fault, and rectify the fault in real time. All related
spare parts will be sent to the fault point immediately. Then, once the
consumption of spare parts is confirmed, the warehouse will auto-
matically purchase and supplement spare parts by itself.
Let us use frame freezing in a video conference as an example.
Generally speaking, frame freezing is caused by packet loss. On
autonomous driving networks at lower levels, after receiving a frame
freezing fault report from a user, the administrator has to find the
packet loss point based on the source and destination.
However, this fault locating method is inefficient, because it is dif-
ficult to reproduce the frame freezing fault.
This situation is entirely different on autonomous driving net-
works at higher levels. Specifically, the AI controller collects all
traffic on the network in real time, analyzes the traffic types, and
determines the flow quality in real time for each traffic type. If the
quality of a flow is poor, the diagnosis mechanism is automatically
triggered to locate the packet loss point. Then, once the packet loss
point is found, the AI controller automatically analyzes the running
status and logs of the packet loss point and performs optimization
accordingly. Following optimization, the AI controller continues
to verify whether the corresponding service experience has been
improved and then decides whether to further optimize. In this way,
fault diagnosis is changed from passive to proactive, and fault recti-
fication is shifted from post-event to real-time, effectively enhancing
the satisfaction of campus network users.
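The closed loop described above (measure, diagnose, optimize, re-verify) can be abstracted into a few lines. The callbacks below stand in for the AI controller's real analytics and are purely illustrative.

```python
def closed_loop(flows, threshold, measure, diagnose, optimize):
    """Run one pass of the proactive loop: for each flow whose measured
    quality falls below the threshold, locate the packet-loss point,
    optimize it, and re-verify the experience afterwards."""
    actions = []
    for flow in flows:
        if measure(flow) >= threshold:
            continue                      # quality is fine, nothing to do
        loss_point = diagnose(flow)       # find where packets are dropped
        optimize(loss_point)              # e.g. reroute or adjust queuing
        actions.append((flow, loss_point, measure(flow) >= threshold))
    return actions

# Stub environment: optimizing a node restores the flows crossing it.
quality = {"video-conf": 0.6, "mail": 0.99}
path = {"video-conf": "switch-7", "mail": "switch-2"}
report = closed_loop(
    ["video-conf", "mail"], 0.9,
    measure=lambda f: quality[f],
    diagnose=lambda f: path[f],
    optimize=lambda node: quality.update(
        {f: 0.95 for f, n in path.items() if n == node}),
)
print(report)  # [('video-conf', 'switch-7', True)]
```

The final boolean in each tuple is the re-verification step: the loop does not merely fire a fix, it confirms that service experience actually recovered before closing the case.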
The information that can be sensed and referenced by the net-
work will not be limited to the information of the network itself.
Therefore, the success rate of network fault prediction and defense
will be greatly improved, and services will run uninterrupted due
to reliability assurance that has been planned during the network
design phase.
between ICT teams while most likely combining the network, appli-
cation, and security teams.
When purchasing products for a digital campus, enterprises
will not simply evaluate vendors based on function fulfillment or
performance indicators. Instead, they will evaluate the end-to-end
problem-solving capabilities of vendors’ solutions according to
their service scenarios. As such, vendors will need to have a deeper
understanding of enterprise services. Only then can they perform
scenario-specific abstraction, which can be converted into a sim-
ple, easy-to-use man-machine interface. And in doing so, they can
truly help enterprises improve production efficiency internally and
enhance customer experience externally.
In the future, Huawei will lead industry development in five direc-
tions: redefining the technical architecture, reshaping the product
architecture, setting the industry pace, resetting the industry direc-
tion, and opening up new industry space. Huawei will also strive to
break the limits in four aspects to create a better future:
470 ◾ Acronyms and Abbreviations
BD Bridge Domain
BFD Bidirectional Forwarding Detection
B-frame Bidirectional Predicted Frame
BGP Border Gateway Protocol
BI Business Intelligence
BLE Bluetooth Low Energy
BMS Bare Metal Server
BP Backpropagation
BSC Base Station Controller
BYOD Bring Your Own Device
C&C Command and Control
C/S Client/Server
CA Certificate Authority
CAPEX Capital Expenditure
CAPWAP Control and Provisioning of Wireless Access Points
CCA Clear Channel Assessment
CDE Content-Based Detection Engine
CDMA Code Division Multiple Access
CIO Chief Information Officer
CIS Cybersecurity Intelligence System
CLI Command Line Interface
CMF Configuration Manager Frame
CNN Convolutional Neural Network
CPE Customer Premise Equipment
CRC Cyclic Redundancy Check
CSMA/CA Carrier Sense Multiple Access with Collision Avoidance
CSMA/CD Carrier Sense Multiple Access with Collision Detection
CSP Certified Service Partner
CSS Cluster Switch System
DBS Dynamic Bandwidth Selection
DC Data Center
DCA Dynamic Channel Assignment
DCN Data Center Network
DF Delay Factor
DFA Dynamic Frequency Assignment
DFBS Dynamic Frequency Band Selection
DGA Domain Generation Algorithm
DHCP Dynamic Host Configuration Protocol
OA Office Automation
OFDM Orthogonal Frequency Division Multiplexing
OLT Optical Line Terminal
ONU Optical Network Unit
OPEX Operating Expense
OSI Open System Interconnection
OSPF Open Shortest Path First
P2P Point-to-Point
PAM4 Four-Level Pulse Amplitude Modulation
PCAP Packet Capture
PCI Peripheral Component Interconnect
PCIe Peripheral Component Interconnect Express
PD Powered Device
PES Packetized Elementary Stream
P-frame Predicted Frame
PHY Physical Layer
PKI Public Key Infrastructure
PMSI P-Multicast Service Interface
PON Passive Optical Network
POP3 Post Office Protocol Version 3
PPTP Point-to-Point Tunneling Protocol
PSE Power Sourcing Equipment
PTZ Pan-Tilt-Zoom
QoE Quality of Experience
QR Quick Response
QSFP Quad Small Form-Factor Pluggable
RADIUS Remote Authentication Dial-In User Service
RAT Remote Access Trojan
REST Representational State Transfer
RET Retransmission
RF Radio Frequency
RFC Request for Comments
RFID Radio Frequency Identification
RPC Remote Procedure Call
RR Route Reflector
RSA Rivest-Shamir-Adleman
RSSI Received Signal Strength Indicator
RTP Real-time Transport Protocol
RU Resource Unit
SAE Society of Automotive Engineers
SD Standard Definition
SDK Software Development Kit
SDN Software-Defined Networking
SFP Small Form-Factor Pluggable
SHF Super-High Frequency
SIEM Security Information and Event Management
SIG Special Interest Group
SIP Session Initiation Protocol
SISO Single-Input Single-Output
SLA Service Level Agreement
SME Small- and Medium-Sized Enterprise
SMI Structure of Management Information
SMS Short Message Service
SMTP Simple Mail Transfer Protocol
SNMP Simple Network Management Protocol
SNR Signal-to-Noise Ratio
SOHO Small Office Home Office
SPF Shortest Path First
SPT Shortest Path Tree
SSH Secure Shell
SSID Service Set Identifier
SSL Secure Sockets Layer
STA Station
STP Shielded Twisted Pair
STT Stateless Transport Tunneling
SVF Super Virtual Fabric
TCP/IP Transmission Control Protocol/Internet Protocol
TDMA Time Division Multiple Access
TLS Transport Layer Security
TS Transport Stream
TTM Time to Market
TWT Target Wakeup Time
UDP User Datagram Protocol
UHF Ultra High Frequency
UI User Interface
UL MU-MIMO Uplink MU-MIMO