WS-013 Azure Stack HCI
In Azure Stack HCI, you can virtualize network resources by implementing Windows Server
2019 Software-Defined Networking (SDN). You can either integrate Azure Stack HCI into an
existing VLAN-based infrastructure or isolate its workloads by leveraging SDN-based
network virtualization.
Lessons:
o Overview of Azure Stack HCI core networking technologies
To address the requirements for deploying an isolated VDI farm for users in the Contoso Securities
Research department, which is supposed to replace an aging Windows Server 2012 R2–based RDS
deployment, you’ll implement SDN on hyperconverged infrastructure. As the first step in this process,
you need to provision the SDN infrastructure by using the scripts available online.
Lab A: Deploying Software-Defined Networking
The goal of this lesson is to provide an overview of core networking technologies that serve as
a foundation of Azure Stack HCI SDN
Topics:
o Hyper-V virtual switch, VFP extensions, HNVv2, and VXLAN
o Converged configurations
Hyper-V virtual switch, VFP extensions, HNVv2, and VXLAN (2 of 4)
The Hyper-V vSwitch is extensible and provides support for virtual switch extensions
The SDN functionality in Windows Server 2019-based HNVv2 is implemented by using the
Virtual Filtering Platform (VFP) forwarding extension
The VFP extension can’t be used in conjunction with any other third-party switch extension
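To confirm that the VFP forwarding extension is present and enabled on a host, you can list the
extensions of the Hyper-V vSwitch; this is a minimal sketch, and the switch name is only an
illustrative placeholder:

# List the extensions registered on the vSwitch; on SDN-enabled hosts, the VFP
# forwarding extension appears among them (switch name is illustrative)
Get-VMSwitchExtension -VMSwitchName "SDNSwitch" | Select-Object Name, Vendor, Enabled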
Hyper-V virtual switch, VFP extensions, HNVv2, and VXLAN (3 of 4)
HNVv2 implements L2 switching and L3 routing semantics by using Network Controller, which
is implemented as a Windows Server operating system role.
Network Controller provides programmable interfaces for centralized management and
automation:
o Northbound API for management tools to communicate with Network Controller
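As an illustration of this model, the Network Controller role is installed on designated nodes and
then queried by management tools through its REST-based Northbound API; the feature name is
real, while the REST endpoint below is only a placeholder:

# Install the Network Controller role on a designated node
Install-WindowsFeature -Name NetworkController -IncludeManagementTools

# Management tools then communicate through the Northbound (REST) API, for example
# to enumerate the logical networks that Network Controller manages
Get-NetworkControllerLogicalNetwork -ConnectionUri "https://nc.contoso.com"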
HNVv2 tunneling relies on a mapping between:
o CA space: the tenant overlay network IP addresses, referred to as Customer Addresses (CA), used by VM networks
o PA space: the physical underlay network IP addresses, referred to as Provider Addresses (PA)
[Diagram: Contoso tenant VMs with overlapping CA-space IP addresses on separate VM networks; their traffic is tunneled across the PA-space physical network]
Software-only network offload and optimization
o Benefits Azure Stack HCI scenarios that involve passing network traffic via a Hyper-V
switch by supporting:
• Traditional Hyper-V compute workloads
• Storage Spaces Direct implementations
• SDN deployments
Other SDN features include:
o SDN ACLs
o SDN QoS
o SMB Multichannel
Software and hardware-integrated network offload and optimization (1 of 14)
Switch Embedded Teaming:
Is the primary NIC teaming solution for Windows Server 2019 SDN
Integrates the NIC Teaming functionality into the Hyper-V virtual switch
Groups up to eight physical Ethernet NICs into one or more software-based virtual network
adapters
Software and hardware-integrated network offload and optimization (2 of 14)
Virtual Machine Queue (VMQ):
Was introduced in Windows Server 2012
Makes use of hardware queues in the pNIC
Assigns processing of network traffic for each vNIC and vmNIC to individual and different
CPU cores
Involves a data path through the Hyper-V vSwitch
o Doesn’t apply to technologies that bypass the Hyper-V vSwitch, such as RDMA or SR-
IOV
The limitations and drawbacks of VMQ:
o Functionally disables RSS on any pNIC attached to vSwitch
o A single virtual adapter can reach at most six Gbps on a well-tuned system
[Diagram: incoming traffic on the physical ports passes through a MAC + VLAN filter on the physical NIC into a default queue and VMQ 1 through VMQ n; each queue's processing is assigned to an individual CPU core (CPU 0 through CPU 3) for the corresponding VM]
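The VMQ behavior described above can be inspected and tuned per physical adapter; a minimal
sketch, assuming an adapter named pNIC1:

# Check whether VMQ is enabled and review the queue-to-processor assignments
Get-NetAdapterVmq -Name "pNIC1"
Get-NetAdapterVmqQueue -Name "pNIC1"

# Optionally constrain the cores that VMQ can use, for example to keep core 0 free
Set-NetAdapterVmq -Name "pNIC1" -BaseProcessorNumber 2 -MaxProcessors 4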
Software and hardware-integrated network offload and optimization (4 of 14)
Virtual Receive Side Scaling (vRSS):
Was introduced in Windows Server 2012 R2
Makes use of hardware queues in the pNIC and VMQ
Depends on RSS in vNIC or vmNIC
Requires VMs to have multiple logical processors
Creates a mapping (an indirection table) of the pNIC's VMQs to processors:
o Uses the indirection table to map processing of network traffic for each vNIC or
vmNIC to multiple or different CPU cores
o Defaults to eight CPU cores that you can configure by using pNIC properties
o Tunes the indirection table so VMs can reach and maintain the desired throughput
o ETS implements bandwidth allocation using traffic classes based on CoS tags
o DCBX allows for automatic configuration of hosts based on the DCB configuration of
switches
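To make the ETS and DCBX concepts concrete, the following is a minimal host-side DCB sketch;
the priority value, bandwidth percentage, and adapter name are illustrative and must match the
configuration of the physical switches:

# Tag SMB Direct (RDMA) traffic with CoS priority 3
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control for that priority and reserve bandwidth for it via ETS
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the DCB configuration to the physical adapter
Enable-NetAdapterQos -Name "pNIC1"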
Software and hardware-integrated network offload and optimization (12 of 14)
Remote Direct Memory Access (RDMA):
Provides high-throughput, low-latency communication that minimizes CPU usage
Supports zero-copy networking that allows a pNIC to transfer data directly to and from
memory
Version: Further enhanced in Windows Server 1709 and Windows Server 2019 as NDKPI Mode 3
Features:
• Supports RDMA in Hyper-V guest VMs (running Windows Server 1709 or 2019)
• Results in latency between VMs and the physical network that matches the latency between the Hyper-V host and the physical network
• Precludes the use of Hyper-V ACLs or QoS policies, which can be mitigated by affinitizing VMs to a separate pNIC
Software and hardware-integrated network offload and optimization (13 of 14)
RDMA implementations have the following features:
RDMA over Converged Ethernet v2 (RoCEv2) over UDP/IP:
o Uses DCB for flow control and congestion management
o Low latency
Is available and enabled by default on all currently supported versions of Windows Server
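A hedged way to verify and enable RDMA on a host, with an illustrative adapter name:

# List RDMA-capable adapters and their current state
Get-NetAdapterRdma

# Enable RDMA on a physical adapter if it's disabled
Enable-NetAdapterRdma -Name "pNIC1"

# Confirm that SMB can use RDMA on this host
Get-SmbClientNetworkInterface | Where-Object RdmaCapable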
Hardware-only network offload and optimization
Jumbo Frames:
o Allow for Ethernet frames larger than the default 1,500 bytes (typically 9,000 bytes)
o Work with the MTU_for_HNV offload that Windows Server introduced to ensure that
encapsulated traffic doesn't require segmentation between the host and the
adjacent switch
LSO (Large Send Offload):
o Offloads dividing large blocks of data into MTU-sized packets to the pNIC
RSC (Receive Segment Coalescing):
o Relies on the pNIC to coalesce incoming packets that are part of the same stream into
one packet
o Isn’t available on pNICs that are bound to the Hyper-V vSwitch
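The offloads above can be checked and, in the case of jumbo frames, enabled per adapter; a
sketch assuming an adapter named pNIC1 (the exact registry keyword and value can vary by NIC
vendor):

# Enable jumbo frames (approximately 9,000-byte payloads) on the physical adapter
Set-NetAdapterAdvancedProperty -Name "pNIC1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
Get-NetAdapterAdvancedProperty -Name "pNIC1" -RegistryKeyword "*JumboPacket"

# Review the current LSO and RSC state of the adapter
Get-NetAdapterLso -Name "pNIC1"
Get-NetAdapterRsc -Name "pNIC1"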
Converged NIC configurations expose RDMA capabilities through a host partition vNIC bound
to the Hyper-V vSwitch, which:
o Allows the host partition to access RDMA traffic through the pNICs bound to the
Hyper-V vSwitch
o Minimizes cost by using fewer pNICs
o Improves resource utilization through load sharing
o Facilitates resiliency by supporting SET
[Diagram: VM traffic and host partition traffic (Storage/SMB, Live Migration, Management/Cluster, and other traffic) flow as TCP/IP and RDMA through vmNICs and host vNICs into a Hyper-V Switch (SDN) with embedded teaming, which connects to the pNICs]
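A converged configuration of this kind can be sketched as follows; the switch and vNIC names
are illustrative, and the SET team mapping shown in Lesson 3 can be layered on top:

# Add RDMA-capable host vNICs on top of the SET-enabled vSwitch
Add-VMNetworkAdapter -ManagementOS -SwitchName "SDNSwitch" -Name "SMB_1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SDNSwitch" -Name "SMB_2"

# Expose RDMA to the host partition through the new vNICs
Enable-NetAdapterRdma -Name "vEthernet (SMB_1)", "vEthernet (SMB_2)"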
o Cluster validation doesn’t issue warnings related to multiple pNICs on the same
subnet
o All discovered networks are used for cluster heartbeats
Network virtualization and SDN help you overcome challenges associated with the traditional
network infrastructure by increasing agility, improving security, and optimizing efficiency
Topics:
o Azure Stack HCI network virtualization
o Programs policies for overlay virtual networks by making use of Hyper-V vSwitch
[Diagram: Contoso subnets 2, 3, and 4 and Fabrikam subnet 1, each a customer subnet with its own VSID (5002, 5003, 5005, and 5007), grouped into VM networks identified by routing domains RDID 1, RDID 2, and RDID 3]
Azure Stack HCI–based and Network Function Virtualization
Benefits:
o Seamless capacity expansion and workload mobility
o Minimized operational complexity
o Increased mobility
o Datacenter Firewall
Plan for Software-Defined Networking deployment (1 of 4)
o An external Hyper-V virtual switch created with at least one physical adapter
Logical networks
Management and HNV Provider logical networks:
o All Hyper-V hosts need access to the Management and HNV Provider logical networks
Network hardware:
o pNICs need to support capabilities such as RDMA, SET, and custom MTUs
Network Controller
Security groups that will be used to grant permissions to:
o Configure Network Controller
Local or shared file system locations for Network Controller debug logs
Dynamic DNS registration for Network Controller
Permissions to create and configure Service Principal Name for Kerberos authentication
o Configure Network Controller to run as gMSA
Deploy Azure Stack HCI Software-Defined Networking
o Encryption: SSL
Southbound Communication:
o Authentication and authorization: WCF/TCP/OVSDB, WinRM
3. Create at least one virtual subnet for each IP address prefix identified in the first
step
4. As an option, add ACLs to the virtual subnets or gateway connectivity for tenants
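A hedged sketch of these steps using the NetworkController PowerShell module follows the
pattern Microsoft documents for tenant virtual networks; the connection URI, resource IDs, and
address prefix are illustrative, and the HNV Provider logical network is assumed to have the
resource ID HNVPA:

$uri = "https://nc.contoso.com"

# Reference the HNV Provider logical network that carries the encapsulated traffic
$hnvPA = Get-NetworkControllerLogicalNetwork -ConnectionUri $uri -ResourceId "HNVPA"

# Define the virtual network address space
$vnetProps = New-Object Microsoft.Windows.NetworkController.VirtualNetworkProperties
$vnetProps.AddressSpace = New-Object Microsoft.Windows.NetworkController.AddressSpace
$vnetProps.AddressSpace.AddressPrefixes = @("10.30.1.0/24")
$vnetProps.LogicalNetwork = $hnvPA

# Create one virtual subnet for the address prefix
$subnet = New-Object Microsoft.Windows.NetworkController.VirtualSubnet
$subnet.ResourceId = "Contoso_Subnet1"
$subnet.Properties = New-Object Microsoft.Windows.NetworkController.VirtualSubnetProperties
$subnet.Properties.AddressPrefix = "10.30.1.0/24"
$vnetProps.Subnets = @($subnet)

# Submit the virtual network to Network Controller
New-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Contoso_VNet1" -Properties $vnetProps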
Manage Azure Stack HCI tenant networking (2 of 5)
o Service chaining
Requirements:
o Peered virtual networks must have non-overlapping IP address spaces
o Once peering is established, you can’t change address ranges in either virtual
network
At a high level, to configure virtual network peering:
1. Configure peering from the first virtual network to the second virtual network
2. Configure peering from the second virtual network to the first virtual network
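A hedged sketch of the first direction, assuming the virtual network peering cmdlets in the
Windows Server 2019 NetworkController module and the illustrative resource IDs used earlier:

$uri = "https://nc.contoso.com"

# Reference the second (remote) virtual network
$vnet2 = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Contoso_VNet2"

# Describe and create the peering from Contoso_VNet1 to Contoso_VNet2
$peeringProps = New-Object Microsoft.Windows.NetworkController.VirtualNetworkPeeringProperties
$peeringProps.RemoteVirtualNetwork = $vnet2
$peeringProps.AllowForwardedTraffic = $true
New-NetworkControllerVirtualNetworkPeering -ConnectionUri $uri -VirtualNetworkId "Contoso_VNet1" -ResourceId "VNet1toVNet2" -Properties $peeringProps

# Repeat in the opposite direction (from Contoso_VNet2 to Contoso_VNet1) to complete the peering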
Manage Azure Stack HCI tenant networking (3 of 5)
Requirements:
o Windows Server 2019-based SDN
o Authoritative DNS service for name resolution and DNS registration of tenant
workloads
o Recursive DNS service for resolution of internet names from tenant VMs
o High availability
Requirements:
o iDNS Servers
o iDNS Proxy
o If the VM requires network access on startup, set the interface ID on the vmNIC port
first
At a high level, to connect a VM to a virtual network:
1. Create a VM with a vmNIC that has a static MAC address
2. Retrieve the virtual network that contains the subnet to which you want to connect
the vmNIC
3. Create a network interface object in Network Controller
4. Retrieve the InstanceId of the network interface object from Network Controller
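A hedged sketch of these steps, following the documented Network Controller object model; the
MAC address, IP address, and resource IDs are illustrative:

$uri = "https://nc.contoso.com"

# Step 2: retrieve the virtual network and pick the target subnet
$vnet = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Contoso_VNet1"

# Step 3: build and create the network interface object
$nicProps = New-Object Microsoft.Windows.NetworkController.NetworkInterfaceProperties
$nicProps.PrivateMacAddress = "00-1D-D8-B7-1C-04"   # must match the vmNIC's static MAC from step 1
$nicProps.PrivateMacAllocationMethod = "Static"

$ipConfig = New-Object Microsoft.Windows.NetworkController.NetworkInterfaceIpConfiguration
$ipConfig.ResourceId = "VM1_IP1"
$ipConfig.Properties = New-Object Microsoft.Windows.NetworkController.NetworkInterfaceIpConfigurationProperties
$ipConfig.Properties.PrivateIPAddress = "10.30.1.10"
$ipConfig.Properties.PrivateIPAllocationMethod = "Static"
$ipConfig.Properties.Subnet = New-Object Microsoft.Windows.NetworkController.Subnet
$ipConfig.Properties.Subnet.ResourceRef = $vnet.Properties.Subnets[0].ResourceRef
$nicProps.IpConfigurations = @($ipConfig)
New-NetworkControllerNetworkInterface -ConnectionUri $uri -ResourceId "VM1_NIC1" -Properties $nicProps

# Step 4: retrieve the InstanceId that is later set as the port profile on the Hyper-V vmNIC
(Get-NetworkControllerNetworkInterface -ConnectionUri $uri -ResourceId "VM1_NIC1").InstanceId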
o Port mirroring
o SLB must be configured with a health probe that designates the owner of the
floating IP
At a high level, to implement guest clustering in a virtual network:
1. Select the virtual IP
5. Add a probe to detect which cluster node the floating address is currently active on
6. Add the load balancing rules for the port representing the clustered service
9. Configure the failover clustering service to match the load balancer configuration
Troubleshoot Azure Stack HCI Software-Defined Networking (1 of 3)
Common types of problems in Windows Server 2019 HNVv2 SDN include:
o Invalid or unsupported configuration
o Error in policy application
o External error related to NIC hardware and drivers or the underlay network fabric
Tools:
o Network Controller (control-path) diagnostic tools
o GitHub scripts
• Debug-NetworkController
• Debug-NetworkControllerConfigurationState
• Debug-ServiceFabricNodeStatus
• Get-NetworkControllerDeploymentInfo
• Get-NetworkControllerReplica
To use the HNV Diagnostics (data-path) diagnostic tools, import the HNVDiagnostics
module:
o Run Hyper-V host diagnostics cmdlets
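For example, the data-path tools can confirm the CA-to-PA mappings described earlier in this
module; a minimal sketch, assuming the diagnostics scripts are already installed on the Hyper-V
host:

# Load the data-path diagnostics and inspect the HNV state programmed on this host
Import-Module HNVDiagnostics
Get-ProviderAddress    # PA-space addresses assigned to this host
Get-PACAMapping        # current Customer Address to Provider Address mappings
Get-CustomerRoute      # routes programmed for tenant (CA-space) networks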
Limitations:
Instructor-led lab B: Managing virtual networks by using Windows Admin Center and PowerShell
Lab B scenario
Now you’re ready to start testing the functionality of your SDN environment. You’ll start by provisioning
virtual networks, deploying VMs into them, and validating their connectivity within the same virtual
network and between virtual networks
Lab B: Managing virtual networks by using Windows Admin Center
and PowerShell
Exercise 1: Managing virtual networks by using Windows Admin Center and PowerShell
Lesson 3: Planning for and implementing Switch Embedded Teaming
Lesson 3 overview
SET provides high availability and load balancing for physical network interfaces for Azure
Stack HCI scenarios
Topics:
o Compare SET with NIC Teaming
o Implement SET
Compare SET with NIC Teaming (1 of 2)
SET is compatible with the following networking technologies in Windows Server 2019:
o DCB
o Hyper-V Network Virtualization
o RDMA
o SR-IOV
o Dynamic VMMQ
o vRSS
o RSC in the Hyper-V vSwitch
o Receive-side Checksum offloads (IPv4, IPv6, TCP), when all the SET members
support them
o Transmit-side Checksum offloads (IPv4, IPv6, TCP), when all the SET members
support them
SET isn’t compatible with the following networking technologies in Windows Server 2019:
o 802.1X authentication
o VM-QoS
Compare SET with NIC Teaming (2 of 2)
You must create a SET team at the same time you create the Hyper-V vSwitch
o New-VMSwitch -Name TeamedvSwitch -NetAdapterName "NIC 1","NIC 2" -EnableEmbeddedTeaming $true
You can add and remove team members, change the load balancing algorithm, and set
pNIC affinity
o Set-VMSwitchTeam -Name TeamedvSwitch -NetAdapterName 'NIC 1','NIC 3'
o Set-VMSwitchTeam -Name TeamedvSwitch -LoadBalancingAlgorithm Dynamic
o Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName SMB_1 -ManagementOS -PhysicalNetAdapterName 'NIC1'
o Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName SMB_2 -ManagementOS -PhysicalNetAdapterName 'NIC2'
Lesson 4: Planning for and implementing Datacenter Firewall
Datacenter Firewall is another core component of the NFV-based SDN implementation. Its
purpose is to restrict internal and external connectivity in SDN environments
Topics:
o Datacenter Firewall functionality in Software-Defined Networking
o Firewall rules that filter traffic between VMs on the same virtual subnet and between
VMs on different virtual subnets
o Firewall rules that protect and isolate network traffic between user networks and
their virtual networks hosted by Azure Stack HCI
Datacenter Firewall infrastructure in Software-Defined Networking
[Diagram: PowerShell and other management tools communicate with Network Controller through the Northbound Interface (REST APIs); the Distributed Firewall Manager in Network Controller pushes firewall policies through the Southbound Interface to the vSwitches on Host 1 and Host 2 (each hosting VMs with vmNICs and pNICs) and to the Gateway]
Implement and configure Datacenter Firewall in Software-Defined Networking (1 of 2)
Configure Datacenter Firewall to allow all traffic
To test basic connectivity, create two rules that allow all network traffic
Source IP | Destination IP | Protocol | Source port | Destination port | Direction | Action | Priority
*         | *              | All      | *           | *                | Inbound   | Allow  | 100
*         | *              | All      | *           | *                | Outbound  | Allow  | 101
o Requires you to configure a centralized location for log collection to a remote file
share to minimize the possibility of disk space issues
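A hedged sketch of creating the inbound allow-all rule from the preceding table as a Network
Controller ACL; the outbound rule is analogous, with Type set to Outbound and Priority set to
101 (connection URI and resource IDs are illustrative):

$uri = "https://nc.contoso.com"

# Build the inbound allow-all rule from the table
$rule = New-Object Microsoft.Windows.NetworkController.AclRule
$rule.ResourceId = "AllowAll_Inbound"
$rule.Properties = New-Object Microsoft.Windows.NetworkController.AclRuleProperties
$rule.Properties.Protocol = "All"
$rule.Properties.SourceAddressPrefix = "*"
$rule.Properties.DestinationAddressPrefix = "*"
$rule.Properties.SourcePortRange = "*"
$rule.Properties.DestinationPortRange = "*"
$rule.Properties.Type = "Inbound"
$rule.Properties.Action = "Allow"
$rule.Properties.Priority = "100"
$rule.Properties.Logging = "Enabled"

# Create the ACL in Network Controller; it can then be applied to a virtual subnet
$aclProps = New-Object Microsoft.Windows.NetworkController.AccessControlListProperties
$aclProps.AclRules = @($rule)
New-NetworkControllerAccessControlList -ConnectionUri $uri -ResourceId "AllowAll" -Properties $aclProps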
Instructor-led lab C: Implementing SDN Access Control List by using Windows Admin Center
Lab C scenario
As part of the security requirements within the SDN environment, you need to be able to filter specific
types of traffic between virtual network subnets. You intend to use the SDN functionality for this purpose,
rather than relying exclusively on the operating system to perform this task
Lab C: Implementing SDN Access Control List by using Windows
Admin Center
Exercise 1: Implementing SDN Access Control List by using Windows Admin Center
Lesson 5: Planning for and implementing Software Load Balancing
Lesson 5 overview
The Microsoft SDN Software Load Balancer (SLB) is an L4 load balancer that distributes incoming
traffic among virtual machines defined in a load balancer set. In addition to standard load
balancing features, it's a core component of the NFV-based SDN implementation, offering such
enhancements as DSR, Hyper-V host-based health probes, built-in NAT functionality, and
automatic support for Active/Active mode
Topics:
o SLB functionality in Software-Defined Networking
[Diagram: the SLB Host Agent on a Hyper-V host serving multiple DIP VMs]
SLB functionality in Software-Defined Networking (2 of 7)
SLB uses the Hyper-V virtual switch as its data plane and is controlled by Network Controller
SLB maps virtual IP addresses to dynamic IP addresses:
o Virtual IPs are located on an SLB MUX and provide public or private access to a pool
of load balanced VMs
o Dynamic IPs are IP addresses of VMs in the load balanced pool assigned within a
virtual network
An SLB MUX consists of one or more VMs and serves several functions:
o Implements load balancing (5, 3, and 2 tuples) policies defined by Network
Controller
o Uses BGP to advertise virtual IPs to routers
SLB health probes originate from the host where the dynamic IP VM resides and can be TCP
or HTTP-based
SLB functionality in Software-Defined Networking (3 of 7)
High availability:
o Support for multiple nodes in an Active/Active configuration
o Support for horizontal scaling of MUX pools
o Uptime of 99% of individual MUX instances
o Health monitoring data available to management entities
Integration with SCVMM, RAS Gateway, and Datacenter Firewall
SLB functionality in Software-Defined Networking (5 of 7)
SLB optimizes traffic flow by using Direct Server Return for public access:
1. A client sends a request targeting a public virtual IP
5. The Hyper-V host intercepts the response, rewrites dynamic IP to virtual IP,
identifies the client IP, and sends the response via the edge router, bypassing
the MUX
SLB functionality in Software-Defined Networking (6 of 7)
[Diagram: a client request to http://sharepoint.contoso.com targets VIP 107.105.47.60; an SLB MUX forwards the request to DIPs in the server farms (DIP1 10.10.10.5, DIP2 10.10.20.5), and the Hyper-V host rewrites the DIP to the VIP and returns the response directly, bypassing the MUX]
SLB functionality in Software-Defined Networking (7 of 7)
SLB optimizes traffic flow by using Direct Server Return for private access
1. A client sends a request targeting a private virtual IP
5. The Hyper-V host intercepts the response, identifies the client IP, and sends the
response back to the client, bypassing the MUX
6. After the traffic flow is established, subsequent traffic bypasses the MUX
completely
SLB infrastructure in Software-Defined Networking (1 of 2)
A Network Controller:
o Hosts the SLB Manager
o Processes SLB commands that come in through the Northbound API
An SLB MUX:
o Processes inbound network traffic and maps virtual IPs to dynamic IPs, then forwards
the traffic to dynamic IPs
o Uses BGP to publish virtual IP routes to edge routers
A Hyper-V host:
o Hosts MUX VMs, dynamic IP VMs, and SLB Host Agents
o Programs rules for SLB into SDN-enabled Hyper-V virtual switches
SLB infrastructure in Software-Defined Networking (2 of 2)
o Bypass the MUX for outbound network traffic, sending it to a router using DSR
o For outbound network traffic, use the route provided by the host
o Remove SLB MUXs from the SLB rotation if Keep Alive fails
Management applications:
o Use System Center 2019, SDN Express, Windows PowerShell, or another
management application to install and configure Network Controller and its SLB
infrastructure
Implement SLB in Software-Defined Networking (1 of 3)
o Routers supporting ECMP routing, BGP, and configured to accept BGP peering
requests from the SLB MUX VMs
Implementation scenarios:
o Public virtual IP load balancing
o Outbound NAT
o Inbound NAT
Implement SLB in Software-Defined Networking (2 of 3)
Example 1: To create a public virtual IP for load balancing a pool of two VMs on a virtual
network:
1. Prepare the load balancer object
2. Assign a front-end IP address that will serve as a virtual IP
3. Allocate a back-end address pool, which contains the dynamic IPs that make up the
members of the load-balanced set of VMs
4. Define a health probe that the load balancer uses to determine the health state of
the backend pool members
5. Define a load balancing rule to send traffic that arrives at the front-end IP to the
back-end IP
6. Add network interfaces to the back-end pool. You must repeat this step for each
network interface that can process requests made to the virtual IP
7. Get the load balancer object containing the back-end pool to add a network
interface
8. Get the network interface and add the backendaddress pool to the
loadbalancerbackendaddresspools array
9. Apply the change to the network interface
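Steps 7 through 9 follow the documented Network Controller pattern; a hedged sketch, reusing
illustrative load balancer and network interface resource IDs:

$uri = "https://nc.contoso.com"

# Step 7: get the load balancer that contains the back-end pool
$lb = Get-NetworkControllerLoadBalancer -ConnectionUri $uri -ResourceId "ContosoLB"

# Step 8: get the network interface and add the back-end address pool to its
# loadbalancerbackendaddresspools array
$nic = Get-NetworkControllerNetworkInterface -ConnectionUri $uri -ResourceId "VM1_NIC1"
$nic.Properties.IpConfigurations[0].Properties.LoadBalancerBackendAddressPools += $lb.Properties.BackendAddressPools[0]

# Step 9: apply the change to the network interface
New-NetworkControllerNetworkInterface -ConnectionUri $uri -ResourceId $nic.ResourceId -Properties $nic.Properties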
Implement SLB in Software-Defined Networking (3 of 3)
4. Add network interfaces to the back-end pool. You must repeat this step for each
network interface that can process requests made to the virtual IP
5. Get the load balancer object containing the back-end pool to add a network
interface
6. Get the network interface and add the backendaddress pool to the
loadbalancerbackendaddresspools array
7. Apply the change to the network interface
SLB Diagnostics:
o Software Load Balancer Manager (SLBM) fabric errors (Hosting service provider
actions)
o SLBM Tenant errors (Hosting service provider and tenant actions)
RAS Gateway is a software-based, BGP-capable router based on HNV. RAS Gateways handle
routing and tunneling in scenarios that involve connectivity between virtual and physical
networks
Topics:
o RAS Gateway functionality in Software-Defined Networking
o GRE Tunneling
[Diagram: a physical network with VLAN isolation connects through GRE tunnels to a multitenant GW, which forwards traffic to the Red virtual network and the Green virtual network on the HNV network]
RAS Gateway functionality in Software-Defined Networking (3 of 3)
[Diagram: a hoster's physical network (172.123.10.0/24) with a single-tenant GW at 172.123.10.10 connects through GRE tunnels to a multitenant GW, which forwards traffic to the Red virtual network (192.168.20.0/24) and the Green virtual network (192.170.20.0/24) on the HNV network]
[Diagram: the Red enterprise site and the Green enterprise site connect through S2S tunnels and a load balancer to the hoster's multitenant GW, which uses GRE tunnels to reach the Red and Green virtual networks on the HNV network]
RAS Gateway infrastructure in Software-Defined Networking (1 of 3)
Gateway pools route traffic between physical and virtual networks:
o Each pool is M+N redundant
[Diagram: the Contoso Washington DC site (IP 1), the Contoso HQ (LA) site (IP 2), and the Woodgrove SFO site (IP 3) connect through MUXes and a BGP router to a pool of RAS Gateway VMs (GW 1, GW 2, and GW 3), which forward traffic to the Contoso multitenant virtual network and the Woodgrove virtual network]
RAS Gateway infrastructure in Software-Defined Networking (3 of 3)
BGP Route Reflector:
o Provides an alternative to the BGP mesh topology required for route synchronization
o Functions as the only router connecting to all other routers (BGP Route Reflector
clients)
o Calculates best routes and redistributes them to its clients
o Resides in the control plane between the RAS Gateways and the Network Controller
o Doesn’t participate in data plane routing
o Defaults to the first tenant RAS Gateway
o Serves all the RAS Gateway VMs associated with the same tenant
o Updates Network Controller with the routes that correspond to the tenant's remote
sites
Implement RAS Virtual Gateways and gateway pools (1 of 2)
The RAS Gateway Management feature of Network Controller manages the following features:
Deployment of the gateway pools
Adding and removing RAS gateway VMs in the pool
Provisioning and deprovisioning virtual gateways on tenant virtual networks, including:
o S2S VPN gateway connectivity with remote tenant networks by using IPsec
o S2S VPN gateway connectivity with remote tenant networks by using GRE
o L3 forwarding connectivity
BGP routing
Switching network traffic flows to a standby gateway in the event of a gateway failure
Implement RAS Virtual Gateways and gateway pools (2 of 2)
o ToR Switch
You need to configure VMs on virtual networks that will serve load-balanced workloads accessible from
within the datacenter hosting your SDN infrastructure. In addition, you need to ensure that you’ll be
able to configure VMs on virtual networks to connect to the internet and to accept inbound
connectivity from your datacenter servers. Rather than relying on third-party load balancers, you
intend to use SDN Software Load Balancer for this purpose
Module-review questions (1 of 3)
1. Which of the following technologies serves as the basis for Software-Defined Networking
(SDN) implementation of Azure Stack HCI?
a. Virtual Filtering Platform (VFP) forwarding extension
b. Windows Management Instrumentation (WMI)
c. System Center Virtual Machine Manager
d. Network Virtualization Generic Routing Encapsulation (NVGRE)
2. Which of the following features optimizes distribution of traffic destined for vmNICs across
multiple processors on a Hyper-V host in Windows Server 2019?
a. Virtual Machine Queue (VMQ)
b. Virtual Machine Multi-Queue (VMMQ)
c. Dynamic Virtual Machine Queue (Dynamic VMQ)
d. Dynamic Virtual Machine Multi Queue (Dynamic VMMQ)
Module-review questions (2 of 3)
4. Which of the following isn’t a requirement for Hyper-V hosts hosting tenant workload VMs,
but not SDN infrastructure VMs, in an SDN environment?
a. Windows Server 2019 Standard edition
b. An external Hyper-V virtual switch
c. A Management IPv4 address assigned to the virtual NIC
d. A Management IPv6 address assigned to the virtual NIC
Module-review questions (3 of 3)
5. Which of the following methods can be used for authorization of Southbound API
communication of Network Controller with RAS Gateways?
a. WinRM
b. OVSDB
c. WCF
d. TCP
Module-review answers
1. Which of the following technologies serves as the basis for Software-Defined Networking
(SDN) implementation of Azure Stack HCI?
a. Virtual Filtering Platform (VFP) forwarding extension
2. Which of the following features optimizes distribution of traffic destined for vmNICs across
multiple processors on a Hyper-V host in Windows Server 2019?
d. Dynamic Virtual Machine Multi Queue (Dynamic VMMQ)
3. Which of the following features doesn’t depend on the underlying hardware?
d. Receive Segment Coalescing (RSC) in the Hyper-V Virtual Switch (vSwitch)
4. Which of the following isn’t a requirement for Hyper-V hosts hosting tenant workload VMs,
but not SDN infrastructure VMs, in an SDN environment?
d. A Management IPv6 address assigned to the virtual NIC
5. Which of the following methods can be used for authorization of Southbound API
communication of Network Controller with RAS Gateways?
a. WinRM
Thank you