
WS-013 Azure Stack HCI

© Copyright Microsoft Corporation. All rights reserved.


Module 4: Planning for and implementing Azure Stack HCI Networking
Module overview

In Azure Stack HCI, you have the option to virtualize its network resources by implementing
Windows Server 2019 SDN. You have the choice of integrating Azure Stack HCI into an
existing VLAN-based infrastructure or isolating its workloads by leveraging SDN-based
network virtualization.
 Lessons:
o Overview of Azure Stack HCI core networking technologies

o Overview of network virtualization and Software-Defined Networking

o Planning for and implementing Switch Embedded Teaming

o Planning for and implementing Datacenter Firewall

o Planning for and implementing Software Load Balancing

o Planning for and implementing RAS Gateways


Instructor-led lab A: Deploying Software-Defined Networking
 Deploying Software-Defined Networking
by using PowerShell
Lab A scenario

To address the requirements for deploying an isolated VDI farm for users in the Contoso Securities
Research department, which is supposed to replace an aging Windows Server 2012 R2–based RDS
deployment, you’ll implement SDN on hyperconverged infrastructure. As the first step in this process,
you need to provision the SDN infrastructure by using the scripts available online.
Lab A: Deploying Software-Defined Networking

 Exercise 1: Deploying Software-Defined Networking by using PowerShell


Lesson 1: Overview of Azure Stack HCI core networking technologies
Lesson 1 overview

The goal of this lesson is to provide an overview of core networking technologies that serve as
a foundation of Azure Stack HCI SDN
 Topics:
o Hyper-V virtual switch, VFP extensions, HNVv2, and VXLAN

o Software-only network offload and optimization

o Software and hardware–integrated network offload and optimization

o Hardware-only network offload and optimization

o Converged configurations

o Simplified SMB Multichannel and Multi-NIC Cluster Networks


Hyper-V virtual switch, VFP extensions, HNVv2, and VXLAN (1 of
4)
 The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch
 Hyper-V Virtual Switch offers such core capabilities as:
o ARP/ND poisoning protection
o DHCP Guard protection
o Bandwidth limit and burst support
o Trunk mode to a VM
o Network traffic monitoring
o Isolated (private) VLAN
o ECN marking support
o Port ACLs
o Diagnostics
[Diagram: the Hyper-V Virtual Switch (NetVSP) in the parent partition connects the VM01 and VM02 vmNICs (NetVSC miniports) over vmBus to the host OS miniport driver, the pNIC's physical ports, and the wire]
Hyper-V virtual switch, VFP extensions, HNVv2, and VXLAN (2 of
4)
 vSwitch is extensible and provides support for Virtual Switch Extensions
 The SDN functionality in Windows Server 2019-based HNVv2 is implemented by using the VFP forwarding extension
 The VFP extension can’t be used in conjunction with any other third-party switch extension
Hyper-V virtual switch, VFP extensions, HNVv2, and VXLAN (3 of
4)
 HNVv2 implements L2 switching and L3 routing semantics by using Network Controller. This
is implemented as an OS role.
 Network Controller provides programmable interfaces for centralized management and
automation:
o Northbound API for management tools to communicate with Network Controller

o Southbound API for Network Controller to communicate with managed network


environment
 Network Controller pushes HNVv2 policies via the OVSDB management protocol to Host Agents
 Host Agents run on Hyper-V hosts that are part of the SDN infrastructure
 Host Agents apply the policies in the VFP extension of the Hyper-V switch
 The Hyper-V switch handles the policy enforcement
[Diagram: management and network-aware applications communicate with Network Controller through the Northbound API; Network Controller uses the Southbound API to manage SLB, Firewall, Gateways, and the virtual and physical network infrastructure]
Hyper-V virtual switch, VFP extensions, HNVv2, and VXLAN (4 of
4)
 HNVv2 supports two network virtualization protocols:
o NVGRE
o VXLAN
 Azure Stack HCI uses VXLAN to create a mapping between:
o The tenant overlay network IP addresses, referred to as Customer Addresses (CAs)
o The physical underlay network IP addresses, referred to as Provider Addresses (PAs)
[Diagram: Contoso and Fabrikam tenant "tunnels" in the CA space (VM networks) mapped onto the PA space (physical network)]
Software-only network offload and optimization

 RSC in the Hyper-V vSwitch:


o Optimizes processing of network packets for virtual workloads
o Combines TCP segments in a TCP stream destined for a Hyper-V VM into larger segments
o Is enabled by default in the vSwitch

o Doesn’t require physical hardware support

o Benefits Azure Stack HCI scenarios that involve passing network traffic via a Hyper-V
switch by supporting:
• Traditional Hyper-V compute workloads
• Storage Spaces Direct implementations
• SDN deployments
 Other SDN features include:
o SDN ACLs

o SDN QoS

o SMB Multichannel
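
The RSC behavior described above can be inspected and toggled per vSwitch. A minimal PowerShell sketch, assuming an example switch named 'SDNSwitch' (parameter and property names follow the Windows Server 2019 Hyper-V module):
# Check whether RSC is enabled on the vSwitch
Get-VMSwitch -Name 'SDNSwitch' | Select-Object Name, *Rsc*
# Re-enable RSC in the vSwitch if it was turned off
Set-VMSwitch -Name 'SDNSwitch' -EnableSoftwareRsc $true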
Software and hardware-integrated network offload and
optimization (1 of 14)
Switch Embedded Teaming:
 Is the primary NIC teaming solution for Windows Server 2019 SDN
 Integrates the NIC Teaming functionality into the Hyper-V virtual switch
 Groups up to eight physical Ethernet NICs into one or more software-based virtual network
adapters
Software and hardware-integrated network offload and
optimization (2 of 14)
Virtual Machine Queue (VMQ):
 Was introduced in Windows Server 2012
 Makes use of hardware queues in the pNIC
 Assigns processing of network traffic for each vNIC and vmNIC to a separate, individual CPU core
 Involves a data path through the Hyper-V vSwitch
o Doesn’t apply to technologies that bypass the Hyper-V vSwitch, such as RDMA or SR-
IOV
 The limitations and drawbacks of VMQ:
o Functionally disables RSS on any pNIC attached to vSwitch

o Only one pNIC queue is assigned to each vNIC or vmNIC

o One vNIC or vmNIC maps to a single CPU core

o A single virtual adapter can reach at most six Gbps on a well-tuned system

o The mapping between vNIC or vmNIC and a CPU core is static


Software and hardware-integrated network offload and
optimization (3 of 14)
[Diagram: VMQ in the host OS vSwitch - the pNIC's MAC + VLAN filter assigns each VM's traffic to a dedicated hardware queue (Default, VMQ 1 through VMQ n), and each queue is processed by a single CPU core (for example, VM01 and VM03 on CPU 1, VM02 on CPU 3)]
Software and hardware-integrated network offload and
optimization (4 of 14)
Virtual Receive Side Scaling (vRSS):
 Was introduced in Windows Server 2012 R2
 Makes use of hardware queues in the pNIC and VMQ
 Depends on RSS in vNIC or vmNIC
 Requires VMs to have multiple logical processors
 Creates a mapping, or indirection table, of the pNIC's VMQs to processors:
o Uses the indirection table to map processing of network traffic for each vNIC or
vmNIC to multiple or different CPU cores
o Defaults to eight CPU cores that you can configure by using pNIC properties

 Involves a data path through the Hyper-V vSwitch:


o Doesn’t apply to technologies that bypass the Hyper-V vSwitch, such as RDMA or SR-
IOV
o A single virtual adapter can reach about 15 Gbps on a well-tuned system

 Is enabled by default on Windows Server 2012 R2
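
As a hedged illustration of the vRSS behavior above, the sketch below enables vRSS on a vmNIC from the host and verifies RSS inside the guest (VM and adapter names are examples):
# On the Hyper-V host: make sure vRSS is enabled for the VM's network adapter
Set-VMNetworkAdapter -VMName 'VM01' -VrssEnabled $true
# Inside the guest OS: verify that RSS is active on the virtual adapter
Get-NetAdapterRss -Name 'Ethernet'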


Software and hardware-integrated network offload and
optimization (5 of 14)
[Diagram: vRSS in the host OS vSwitch - the VRSS indirection table (for example, max processors: 3) spreads the traffic of VM01's VMQ across multiple available host CPU cores (for example, cores 2, 5, and 7) and across the VM's virtual CPUs]
Software and hardware-integrated network offload and
optimization (6 of 14)
Dynamic Virtual Machine Queue (Dynamic VMQ):
 Was introduced in Windows Server 2012 R2
 Makes use of hardware queues in the pNIC
 Enhances VMQ by taking advantage of vRSS:
o vRSS dynamically reassigns VMQs to CPU cores to provide balanced distribution

 The limitations and drawbacks of Dynamic VMQ:


o Supports only vmNICs and doesn’t support vNICs

o Doesn’t handle burst workloads well


Software and hardware-integrated network offload and
optimization (7 of 14)
Virtual Machine Multi-Queue (VMMQ):
 Was introduced in Windows Server 2016
 Makes use of embedded Ethernet switch in the pNIC for mappings to queues:
o Offloads packet distribution across queues to pNIC

 Is disabled by default in Windows Server 2016


 A primary drawback is that its queue-to-CPU-core mappings are static:
o Assignments of a queue to a CPU core will stay the same even if the load changes
Software and hardware-integrated network offload and
optimization (8 of 14)
[Diagram: VMMQ in the host OS vSwitch - the NicSwitch embedded in the pNIC maps each VM's vPort to multiple VMQs, and the VRSS indirection table (for example, max processors: 4) distributes each VM's traffic across several host CPU cores and guest virtual CPUs]
Software and hardware-integrated network offload and
optimization (9 of 14)
Dynamic Virtual Machine Multi-Queue (Dynamic VMMQ):
 Was introduced in Windows Server 2019
 Offers three primary benefits:
o Optimizes host efficiency.

o Tunes the indirection table so VMs can reach and maintain the desired throughput

o Handles bursts in demand

 Eliminates administrative overhead associated with optimizing network throughput:


o Dynamically distributes processing of network traffic across multiple CPUs

o Accounts for affinity between pNICs and NUMA nodes

o Benefits from modifying the base processor to avoid using CPU0

 Dynamically distributes processing of network traffic across multiple processors


 Implements queue parking and queue packing
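
A hedged PowerShell sketch of the tuning points mentioned above, assuming example names 'VM01' and 'pNIC1' (VmmqEnabled, VmmqQueuePairs, and BaseProcessorNumber are the documented parameter names):
# Enable VMMQ on a vmNIC and request a specific number of queue pairs
Set-VMNetworkAdapter -VMName 'VM01' -VmmqEnabled $true -VmmqQueuePairs 4
# Move the base (default) processor away from CPU0 on the physical adapter
Set-NetAdapterVmq -Name 'pNIC1' -BaseProcessorNumber 2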
Software and hardware-integrated network offload and
optimization (10 of 14)
Single Root I/O Virtualization (SR-IOV):
 Is an extension to the PCIe specification
 Allows a network adapter to separate access to its resources among various PCIe hardware
functions:
o A PF associated with the Hyper-V parent partition
o One or more PCIe VFs associated with a Hyper-V child partition
 Allows network traffic to bypass the Hyper-V virtualization stack:
o Network traffic flows directly between the VF and an individual child partition
o I/O overhead in the software emulation layer is minimized
o Network performance is nearly the same as in nonvirtualized environments
 Requirements and limitations of SR-IOV:
o The Hyper-V host hardware, including its network device and its driver, must support
it
o Can only be associated with an external Hyper-V vSwitch
o Can only be configured when a virtual switch is created
o Isn’t subject to vSwitch-based policies because its traffic bypasses the Hyper-V
Software and hardware-integrated network offload and
optimization (11 of 14)
Data Center Bridging (DCB):
 Provides hardware queue-based bandwidth management in hosts and adjacent switches
 Allows for dedicated traffic flows in converged scenarios
 Uses CoS tags to designate specific types of traffic
 Can be applied to any pNIC, including those bound to a Hyper-V switch
 Consists of the following protocols:
o PFC implements flow control based on CoS tags

o ETS implements bandwidth allocation using traffic classes based on CoS tags

o DCBX allows for automatic configuration of hosts based on the DCB configuration of
switches
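
A hedged sketch of a typical host-side DCB configuration for the protocols above, tagging SMB Direct traffic with CoS priority 3 (adapter names and the 50% reservation are examples; the adjacent switches must be configured to match):
# Tag SMB Direct (port 445) traffic with CoS priority 3
New-NetQosPolicy -Name 'SMB' -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable PFC for priority 3 only and reserve bandwidth for it with ETS
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass -Name 'SMB' -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
# Ignore DCBX settings propagated from the switch and apply DCB to the pNICs
Set-NetQosDcbxSetting -Willing $false
Enable-NetAdapterQos -Name 'pNIC1','pNIC2'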
Software and hardware-integrated network offload and
optimization (12 of 14)
Remote Direct Memory Access (RDMA):
 Provides high-throughput, low-latency communication that minimizes CPU usage
 Supports zero-copy networking that allows a pNIC to transfer data directly to and from
memory
 Introduced in Windows Server 2012 as NDKPI Mode 1:
• Doesn't support binding pNICs to the Hyper-V vSwitch
• Forces adding dedicated RDMA pNICs for SMB traffic only
 Enhanced in Windows Server 2016 as NDKPI Mode 2:
• Supports binding pNICs to the Hyper-V vSwitch
• Allows for RDMA and Hyper-V traffic on the same pNICs
• Supports SET
 Further enhanced in Windows Server 1709 and Windows Server 2019 as NDKPI Mode 3:
• Supports RDMA in Hyper-V guest VMs (running Windows Server 1709 or 2019)
• Results in latency between VMs and the physical network that matches latency between the Hyper-V host and the physical network
• Precludes the use of Hyper-V ACLs or QoS policies, which can be mitigated by affinitizing VMs to a separate pNIC
Software and hardware-integrated network offload and
optimization (13 of 14)
RDMA implementations have the following features:
 RDMA over Converged Ethernet v2 (RoCEv2) over UDP/IP:
o Uses DCB for flow control and congestion management

o Requires additional configuration of DCB on hosts and network switches

 Internet Wide Area RDMA Protocol (iWARP) over TCP/IP:


o Uses TCP for flow control and congestion management

o Doesn’t require any additional configuration of hosts and network switches

 InfiniBand (IB) over an InfiniBand network:

o Uses proprietary flow control and congestion control mechanisms and requires proprietary switches
Software and hardware-integrated network offload and
optimization (14 of 14)
SMB Direct:
 Optimizes the use of RDMA network adapters for SMB traffic by providing:
o Maximum bandwidth

o Low latency

o Low CPU utilization

 Is available and enabled by default on all currently supported versions of Windows Server
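
The RDMA and SMB Direct state described above can be verified with a few read-only PowerShell checks (a hedged sketch; nothing is reconfigured):
# Confirm which adapters have RDMA enabled
Get-NetAdapterRdma | Format-Table Name, Enabled
# Confirm that SMB recognizes the RDMA-capable interfaces on the client and server side
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable
Get-SmbServerNetworkInterface
# Review active SMB Multichannel connections to verify that RDMA paths are in use
Get-SmbMultichannelConnection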
Hardware-only network offload and optimization

 Jumbo Frames:
o Allow for Ethernet frames larger than the default 1,500 bytes (typically 9,000 bytes)
o Work with the MTU_for_HNV offload that Windows Server introduced to ensure that
encapsulated traffic doesn't require segmentation between the host and the
adjacent switch
 LSO:
o Offloads dividing large blocks of data into MTU-sized packets to pNIC

 RSC:
o Relies on pNIC to coalesce incoming packets that are part of the same stream into
one packet
o Isn’t available on pNICs that are bound to the Hyper-V vSwitch

 Address Checksum Offload:


o Relies on pNIC to calculate address checksums for both send and receive traffic

 Interrupt Moderation (IM):


o Buffers multiple received packets before interrupting the operating system
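
A hedged sketch of configuring jumbo frames on a pNIC ('*JumboPacket' is the standard advanced property keyword, but valid values vary by NIC vendor and driver; 'pNIC1' is an example name):
# Review the current MTU-related advanced property
Get-NetAdapterAdvancedProperty -Name 'pNIC1' -RegistryKeyword '*JumboPacket'
# Set a 9014-byte jumbo frame size (the switch ports must allow at least the same MTU)
Set-NetAdapterAdvancedProperty -Name 'pNIC1' -RegistryKeyword '*JumboPacket' -RegistryValue 9014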
Converged configurations

Converged NIC configurations expose RDMA capabilities through a host partition vNIC that:
 Allows a host partition to access RDMA traffic through pNICs bound to the Hyper-V vSwitch:
o Minimizes cost by using fewer pNICs
o Improves resource utilization through load sharing
o Facilitates resiliency by supporting SET
[Diagram: VM storage, Live Migration, management/cluster, and SMB traffic (TCP/IP and RDMA) from the host partition and the guest vmNICs converge on a Hyper-V switch (SDN) with embedded teaming, bound to multiple DCB-capable pNICs]
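
A minimal PowerShell sketch of the converged configuration above, assuming two RDMA-capable pNICs and example switch and vNIC names:
# Create a SET-based vSwitch over the two pNICs
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'pNIC1','pNIC2' -EnableEmbeddedTeaming $true -AllowManagementOS $false
# Add host partition vNICs for management and SMB/storage traffic
Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'Management'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'SMB_1'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'SMB_2'
# Expose RDMA on the storage vNICs so SMB Direct can flow through the vSwitch
Enable-NetAdapterRdma -Name 'vEthernet (SMB_1)','vEthernet (SMB_2)'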
Simplified SMB Multichannel and Multi-NIC Cluster Networks

Simplified SMB Multichannel and Multi-NIC Cluster Networks:


 Allow the use of multiple pNICs on the same cluster network subnet
 Enable SMB Multichannel to make use of that configuration
 Offer several additional benefits:
o Failover Clustering recognizes all pNICs

o SMB Multichannel is enabled and configured

o Networks with IPv6 Link Local IP addresses are designated as cluster-only

o A single IP Address resource is configured on each Cluster Access Point (CAP) Network Name (NN)

o Cluster validation doesn’t issue warnings related to multiple pNICs on the same
subnet
o All discovered networks are used for cluster heartbeats

o Additional levels of resiliency protect against failures of individual switches and pNICs
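
A hedged sketch of verifying this behavior on a cluster node (requires the Failover Clustering PowerShell tools; nothing is reconfigured):
# Confirm that the cluster recognizes all networks and pNICs on the shared subnet
Get-ClusterNetwork | Format-Table Name, Role, Address
Get-ClusterNetworkInterface | Format-Table Name, Network, State
# Confirm that SMB Multichannel is using multiple interfaces between nodes
Get-SmbMultichannelConnection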
Lesson 2: Overview of network virtualization and Software-Defined Networking
Lesson 2 overview

Network virtualization and SDN help you overcome challenges associated with the traditional
network infrastructure by increasing agility, improving security, and optimizing efficiency
 Topics:
o Azure Stack HCI network virtualization

o Azure Stack HCI–based Network Function Virtualization

o Plan for Software-Defined Networking deployment

o Deploy Azure Stack HCI Software-Defined Networking

o Manage Software-Defined Networking by using Network Controller

o Demonstration: Manage Software-Defined Networking by using Windows Admin


Center
o Manage Azure Stack HCI tenant networking

o Manage Azure Stack HCI tenant workloads

o Troubleshoot Azure Stack HCI Software-Defined Networking


Azure Stack HCI network virtualization (1 of 4)

 Hyper-V network virtualization:


o Allows for overlay virtual networks decoupled from physical networks
o Removes constraints of VLANs and hierarchical IP address assignments

o Programs policies for overlay virtual networks by making use of Hyper-V vSwitch

 Overlay virtual networks:


o Form an isolation boundary through encapsulation by using NVGRE or VXLAN

o Each have a unique RDID

o Consist of one or more virtual subnets which:

• Implement an L3 IP subnet and an L2 broadcast domain boundary


• Are assigned a unique VSID
o Provide connectivity via vmNIC and are associated with two IP addresses:

• CA visible to the virtual machine and reachable by the customer


• PA assigned by hosting providers based on physical network infrastructure
Azure Stack HCI network virtualization (2 of 4)

Switching in Hyper-V Network Virtualization


 A VM in an HNV virtual network attempts to connect to another VM in the same VSID by
using the following steps:
1. The source VM checks for a local ARP entry of the target vmNIC's IP address
2. If not found, the source VM sends an ARP broadcast for the MAC address of the
target vmNIC
3. The Hyper-V vSwitch intercepts the request and sends it to the local Host Agent
4. The Host Agent looks up the MAC address of the target vmNIC in its local database
5. If found, the Host Agent sends an ARP response to the source VM
6. The source VM sends a frame for the target VM to its port on the local Hyper-V
vSwitch
7. The Host Agent encapsulates the frame into an NVGRE or VXLAN packet
• The Host Agent identifies the IP address of the target Hyper-V host based on the CA-PA mapping
8. The Hyper-V vSwitch applies routing rules and VLAN tags to the packet and sends it to the target host
Azure Stack HCI network virtualization (3 of 4)

Routing in Hyper-V network virtualization


 HNV uses a built-in, distributed router that’s part of every host
 The built-in router has an interface in every VSID using the star topology
 The HNV router is the default gateway for all traffic between virtual subnets that are part of the same virtual network (RDID)
Routing between PA subnets
 HNV uses one PA IP address per SET pNIC team member from the same Provider (PA)
logical subnet:
o Constructs the outer IP headers for the encapsulated packet based on the CA-PA
mapping
o Relies on the host IP stack to ARP for the default PA gateway
o Builds the outer Ethernet headers based on the ARP response
o Relies on an L3 router to route encapsulated packets between provider logical
subnets or VLANs
Routing Outside a Virtual Network
 Requires RAS gateways for:
Azure Stack HCI network virtualization (4 of 4)
[Diagram: hoster datacenter hosting three customer networks - Contoso R&D Net (RDID 1), Contoso Sales Net (RDID 2), and Fabrikam Corp Net (RDID 3) - each consisting of customer subnets identified by VSIDs 5001 through 5007]
Azure Stack HCI–based Network Function Virtualization

 Benefits:
o Seamless capacity expansion and workload mobility
o Minimized operational complexity

o Simplified provisioning and management

o Increased mobility

o Support for vertical and horizontal scaling

 Windows Server 2019 SDN network function virtualization-based solutions:


o SDN Load Balancer

o RRAS Multitenant Gateway

o Datacenter Firewall
Plan for Software-Defined Networking deployment (1 of 4)

Compute host configuration and physical networks


 Each Hyper-V host must have:
o Windows Server 2019 installed

• The Datacenter edition for SDN infrastructure role VMs


• The Standard edition for workload VMs
o The Hyper-V role enabled

o An external Hyper-V virtual switch created with at least one physical adapter

o A management IP address assigned to the vNIC designated for management

o Network connectivity through one or more network adapters attached to physical


switch ports
Plan for Software-Defined Networking deployment (2 of 4)

Logical networks
 Management and HNV Provider logical networks:
o All Hyper-V hosts need access to the Management and HNV Provider logical networks

 SLB and Gateways logical networks:


o Transit logical network

o Public virtual IP logical network

o Private virtual IP logical network

o GRE virtual IP logical network

 Logical networks required for RDMA-based storage:


o A subnet for each physical adapter in your compute and storage hosts
Plan for Software-Defined Networking deployment (3 of 4)

Physical network devices


 Default gateways:
o Configured on the adapter used to reach the internet

 Network hardware:
o pNICs need to support capabilities such as RDMA, SET, and custom MTUs

o Switches and routers considerations include support for:

• Custom switchport MTU settings


• Link control
• Availability
• Redundancy
• Routing
• Tagging
• Monitoring
Plan for Software-Defined Networking deployment (4 of 4)

Network Controller
 Security groups that will be used to grant permissions to:
o Configure Network Controller

o Configure and manage network by using Network Controller

 Local or shared file system locations for Network Controller debug logs
 Dynamic DNS registration for Network Controller
 Permissions to create and configure Service Principal Name for Kerberos authentication
o Configure Network Controller to run as gMSA
Deploy Azure Stack HCI Software-Defined Networking

 There are four primary methods of deploying SDN:


o Windows Admin Center
o System Center Virtual Machine Manager 2019
o SDN Express graphical installer
o SDN Express PowerShell module
Manage Software-Defined Networking by using Network Controller
(1 of 3)
 Network Controller features that support configuring and managing virtual and physical
network devices and services:
o Firewall Management

o Software Load Balancer Management

o Virtual Network Management

o RAS Gateway Management

• Site-to-site VPN using IPsec


• Site-to-site VPN using GRE
• Layer 3 forwarding
• BGP routing
Manage Software-Defined Networking by using Network Controller
(2 of 3)
Secure Network Controller by using:
 Northbound Communication:
o Authentication and authorization: Kerberos, X509, None

o Encryption: SSL

 Network Controller Cluster Communication:


o Authentication and authorization: Kerberos, X509, None

o Encryption: WCF Transport

 Southbound Communication:
o Authentication and authorization: WCF/TCP/OVSDB, WinRM

o Encryption: WCF/TCP/OVSDB, WinRM


Manage Software-Defined Networking by using Network Controller
(3 of 3)
 To back up SDN infrastructure:
1. Export a copy of each Network Controller VM
2. If using SCVMM, stop the SCVMM service and back up its SQL Server database
3. Back up the Network Controller database
4. Verify the completion of the backup
5. If using SCVMM, start the service
 To restore SDN infrastructure:
1. If necessary, redeploy Hyper-V hosts and the necessary storage
2. If necessary, restore the Network Controller VMs, RAS gateway VMs, and MUX VMs
from backup
3. Stop the Network Controller host agent and SLB host agent on all Hyper-V hosts
4. Restore the Network Controller
5. If using SCVMM, restore the database
6. If necessary, restore workload VMs from backup
7. Verify system health
Demonstration: Manage Software-Defined Networking by using Windows Admin Center
 Manage Software-Defined Networking (SDN)
by using Windows Admin Center
Manage Azure Stack HCI tenant networking (1 of 5)

Creating, modifying, and deleting tenant virtual networks


 Benefits:
o Tenant isolation

o Support for overlapping IP address spaces

 At a high level, to create a new virtual network:


1. Identify the IP address prefixes of the virtual subnets to be included in the virtual
network
2. Identify the logical provider network into which the tenant traffic is tunneled

3. Create at least one virtual subnet for each IP address prefix identified in the first
step
4. As an option, add ACLs to the virtual subnets or gateway connectivity for tenants
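
A hedged PowerShell sketch of the high-level steps above, run from a management host with the NetworkController module (the connection URI, resource IDs, and prefixes are examples; the cmdlet and type names follow the documented Network Controller patterns):
# Step 2: reference the HNV Provider logical network that carries the encapsulated tenant traffic
$uri = 'https://nc.contoso.com'
$hnvProvider = Get-NetworkControllerLogicalNetwork -ConnectionUri $uri -ResourceId 'HNVProviderLogicalNetwork'
# Steps 1 and 3: define one virtual subnet per identified IP address prefix
$vsubnet = New-Object Microsoft.Windows.NetworkController.VirtualSubnet
$vsubnet.ResourceId = 'Contoso_WebTier'
$vsubnet.Properties = New-Object Microsoft.Windows.NetworkController.VirtualSubnetProperties
$vsubnet.Properties.AddressPrefix = '24.30.1.0/24'
# Create the virtual network that contains the subnet and references the provider network
$vnetprops = New-Object Microsoft.Windows.NetworkController.VirtualNetworkProperties
$vnetprops.AddressSpace = New-Object Microsoft.Windows.NetworkController.AddressSpace
$vnetprops.AddressSpace.AddressPrefixes = @('24.30.1.0/24')
$vnetprops.LogicalNetwork = $hnvProvider
$vnetprops.Subnets = @($vsubnet)
New-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId 'Contoso_VNet1' -Properties $vnetprops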
Manage Azure Stack HCI tenant networking (2 of 5)

Configuring virtual network peering


 Benefits:
o Traffic routed through the backbone infrastructure via private IP addresses only

o A low-latency, high-bandwidth connection between resources in different virtual


networks
o No negative impact on workloads in either virtual network when establishing peering

o Service chaining

 Requirements:
o Peered virtual networks must have non-overlapping IP address spaces

o Peered virtual networks must be managed by the same Network Controller

o Once peering is established, you can’t change address ranges in either virtual
network
 At a high level, to configure virtual network peering:
1. Configure peering from the first virtual network to the second virtual network

2. Configure peering from the second virtual network to the first virtual network
Manage Azure Stack HCI tenant networking (3 of 5)

Configuring encryption for a virtual subnet


 Benefits:
o Protection against eavesdropping, tampering, and forgery originating from the
physical network
 Requirements:
o Encryption certificates installed on each of the SDN-enabled Hyper-V hosts

o A credential object in the Network Controller referencing the thumbprint of that


certificate
o Configuration on each of the virtual networks that contain subnets requiring encryption
 At a high level, to configure encryption for a virtual network subnet:
1. Create the encryption certificate and install it on every Hyper-V host

2. Create a certificate credential

3. Enable virtual network subnets for encryption by referencing the encryption


certificate and credential
Manage Azure Stack HCI tenant networking (4 of 5)

Enabling IPv6 on a virtual network


 Benefits:
o Provides support for scenarios that require IPv6

 Requirements:
o Windows Server 2019-based SDN

 At a high level, to enable IPv6 on a virtual network:


1. Add IPv6 address ranges to virtual subnets

2. Assign IPv6 addresses to virtual machines


Manage Azure Stack HCI tenant networking (5 of 5)

Implementing DNS service (iDNS) for SDN


 Benefits:
o Shared DNS name resolution services for tenant workloads

o Authoritative DNS service for name resolution and DNS registration of tenant
workloads
o Recursive DNS service for resolution of internet names from tenant VMs

o High availability

 Requirements:
o iDNS Servers

o iDNS Proxy

 At a high level, to implement iDNS for SDN:


1. Deploy a VM outside of SDN and install the DNS server role within the OS of the VM

2. Configure iDNS information in Network Controller

3. Configure the iDNS Proxy Service

4. Restart the Network Controller Host Agent Service


Manage Azure Stack HCI tenant workloads (1 of 4)

Creating a VM and connecting it to a tenant virtual network


 Requirements:
o Create the vmNIC with a static MAC address for the lifetime of the VM

o If the VM requires network access on startup, set the interface ID on the vmNIC port
first
 At a high level, to connect a VM to a virtual network:
1. Create a VM with a vmNIC that has a static MAC address

2. Retrieve the virtual network that contains the subnet to which you want to connect
the vmNIC
3. Create a network interface object in Network Controller

4. Retrieve the InstanceId of the network interface object from Network Controller

5. Set the vmNIC port to the InstanceId
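
A hedged sketch of steps 2 through 4, assuming the virtual network from the earlier example plus example names and addresses (cmdlet and type names follow the documented NetworkController module patterns):
$uri = 'https://nc.contoso.com'
# Step 2: retrieve the virtual network that contains the target subnet
$vnet = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId 'Contoso_VNet1'
# Step 3: define the network interface object for the vmNIC (static MAC, one IP configuration)
$nicprops = New-Object Microsoft.Windows.NetworkController.NetworkInterfaceProperties
$nicprops.PrivateMacAddress = '001DC8B70001'
$nicprops.PrivateMacAllocationMethod = 'Static'
$ipconfig = New-Object Microsoft.Windows.NetworkController.NetworkInterfaceIpConfiguration
$ipconfig.ResourceId = 'VM01_IP1'
$ipconfig.Properties = New-Object Microsoft.Windows.NetworkController.NetworkInterfaceIpConfigurationProperties
$ipconfig.Properties.PrivateIPAddress = '24.30.1.101'
$ipconfig.Properties.PrivateIPAllocationMethod = 'Static'
$ipconfig.Properties.Subnet = New-Object Microsoft.Windows.NetworkController.Subnet
$ipconfig.Properties.Subnet.ResourceRef = $vnet.Properties.Subnets[0].ResourceRef
$nicprops.IpConfigurations = @($ipconfig)
New-NetworkControllerNetworkInterface -ConnectionUri $uri -ResourceId 'VM01_Ethernet1' -Properties $nicprops
# Step 4: retrieve the InstanceId that will be set on the vmNIC port profile
(Get-NetworkControllerNetworkInterface -ConnectionUri $uri -ResourceId 'VM01_Ethernet1').InstanceId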


Manage Azure Stack HCI tenant workloads (2 of 4)

Configuring QoS for a tenant VM network interface


 At a high level, there are two ways to implement QoS for a vmNIC:
o DCB by using the Windows PowerShell NetQoS cmdlets

o SDN QoS by using Network Controller:

• Limit bandwidth on a vmNIC to throttle bandwidth of VMs


• Reserve a specific amount of bandwidth for VMs
Manage Azure Stack HCI tenant workloads (3 of 4)

Implementing network virtual appliances on a virtual network


 Benefits:
o User-defined routing

o Port mirroring

 At a high level, to implement an NVA VM on a virtual network:


1. Create a VM that contains the appliance.

2. Connect the VM to one or more virtual network subnets:


• If the NVA requires multiple vmNICs, create each in Network Controller and assign
an interface ID for each additional vmNIC
3. Deploy and configure the NVA:
• For user-defined routing, create a routing table, add user-defined routes to the
table, and associate the routing table to one or more subnets
• Ensure that the VM has the second vmNIC for management and enable mirroring
as a destination on the first network interface
Manage Azure Stack HCI tenant workloads (4 of 4)

Implementing guest clustering in a virtual network


 Requirements:
o Clustering that relies on floating IP addresses requires an SLB virtual IP

o SLB must be configured with a health probe that designates the owner of the
floating IP
 At a high level, to implement guest clustering in a virtual network:
1. Select the virtual IP

2. Create the load balancer properties object

3. Create a front-end IP address of the load balancer

4. Create a back-end pool containing the cluster nodes

5. Add a probe to detect which cluster node the floating address is currently active on

6. Add the load balancing rules for the port representing the clustered service

7. Create the load balancer in Network Controller

8. Add the cluster nodes to the back-end pool

9. Configure the failover clustering service to match the load balancer configuration
Troubleshoot Azure Stack HCI Software-Defined Networking (1 of
3)
 Common types of problems in Windows Server 2019 HNVv2 SDN include:
o Invalid or unsupported configuration
o Error in policy application

o Configuration drift or software bug

o External error related to NIC hardware and drivers or the underlay network fabric

 Tools:
o Network Controller (control-path) diagnostic tools

o HNV Diagnostics (data-path) diagnostic tools

o Network Controller logging

o GitHub scripts

o Packet Monitor (PacketMon)


Troubleshoot Azure Stack HCI Software-Defined Networking (2 of
3)
 To use the Network Controller (control-path) diagnostic tools:
o Install the RSAT-NetworkController feature and the
NetworkControllerDiagnostics module.
o Run Network Controller diagnostics cmdlets:

• Debug-NetworkController
• Debug-NetworkControllerConfigurationState
• Debug-ServiceFabricNodeStatus
• Get-NetworkControllerDeploymentInfo
• Get-NetworkControllerReplica
 To use the HNV Diagnostics (data-path) diagnostic tools, import the HNVDiagnostics
module:
o Run Hyper-V host diagnostics cmdlets

 Network Controller logging


o Automatically enabled on each node

o Consider configuring centralized logging
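
A brief, hedged illustration of running the control-path and data-path tools above ('nc.contoso.com' is an example Network Controller name):
# Control path: dump the configuration state reported by Network Controller
Debug-NetworkControllerConfigurationState -NetworkController 'nc.contoso.com'
# Data path: on a Hyper-V host, import the HNV diagnostics module and inspect PA and CA-to-PA state
Import-Module HNVDiagnostics
Get-ProviderAddress
Get-PACAMapping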


Troubleshoot Azure Stack HCI Software-Defined Networking (3 of
3)
 Packet Monitor (PacketMon):
o A cross-component network diagnostics tool included with Windows Server 2019 and Windows 10
o Helpful in virtualization scenarios including SDN and container networking

o Available as pktmon.exe command line utility and as a Windows Admin Center


extension
 Capabilities:
o Packet capture at multiple locations of the networking stack

o Packet drop detection, including drop reason reporting

o Runtime packet filtering with encapsulation support

o Flexible packet counters

o Real-time on-screen packet monitoring

o High-volume in-memory logging

o Microsoft Network Monitor (NetMon) and Wireshark (pcapng) compatibility

 Limitations:
Instructor-led lab B: Managing virtual networks by using Windows Admin Center and PowerShell
 Managing virtual networks by using
Windows Admin Center and PowerShell
Lab B scenario

Now you’re ready to start testing the functionality of your SDN environment. You’ll start by provisioning
virtual networks, deploying VMs into them, and validating their connectivity within the same virtual
network and between virtual networks
Lab B: Managing virtual networks by using Windows Admin Center
and PowerShell
 Exercise 1: Managing virtual networks by using Windows Admin Center and PowerShell
Lesson 3: Planning for and implementing Switch Embedded Teaming
Lesson 3 overview

SET provides high availability and load balancing for physical network interfaces for Azure
Stack HCI scenarios
 Topics:
o Compare SET with NIC Teaming

o Plan for SET

o Implement SET
Compare SET with NIC Teaming (1 of 2)

 SET is compatible with the following networking technologies in Windows Server 2019:
o DCB
o Hyper-V Network Virtualization
o RDMA
o SR-IOV
o Dynamic VMMQ
o vRSS
o RSC in the Hyper-V vSwitch
o Receive-side Checksum offloads (IPv4, IPv6, TCP), when all the SET members
support them
o Transmit-side Checksum offloads (IPv4, IPv6, TCP), when all the SET members
support them
 SET isn’t compatible with the following networking technologies in Windows Server 2019:
o 802.1X authentication
o VM-QoS
Compare SET with NIC Teaming (2 of 2)

 Differences between SET and NIC Teaming include:


o All pNICs in SET are active and there isn’t support for standby pNICs
o SET supports only the Switch Independent teaming mode, while NIC Teaming supports three teaming modes
o SET doesn't support vmNIC teaming, while NIC Teaming supports vmNIC teaming
o SET allows you to map individual vmNICs to separate physical NICs
o SET supports:
• RDMA Teaming
• RDMA in guest VMs
• SDN VFP
• VMMQ
• Dynamic VMMQ
• RSC in the Hyper-V switch (vSwitch)
o SET does not support:
• LACP
• Asymmetric NICs
Plan for SET

 SET teaming supports two load balancing distribution modes:


o Hyper-V Port:
• Distributes ingress load to the port where the MAC address is located
• Integrates with VMQs
• Might not be granular enough to achieve a well-balanced distribution with fewer
VMs
• Limits ingress to a single VM to bandwidth available on a single pNIC
o Dynamic:
• Distributes egress loads based on a hash of the TCP ports and IP addresses
• Rebalances loads in real time
• Distributes ingress loads in the same manner as the Hyper-V port mode
• Replaces source MAC addresses on egress frames
Implement SET

 You must create a SET team at the same time you create the Hyper-V vSwitch
o New-VMSwitch -Name TeamedvSwitch -NetAdapterName "NIC 1","NIC 2" -EnableEmbeddedTeaming $true

 You can add and remove team members, change the load balancing algorithm, and set
pNIC affinity
o Set-VMSwitchTeam -Name TeamedvSwitch -NetAdapterName 'NIC 1','NIC 3'
o Set-VMSwitchTeam -Name TeamedvSwitch -LoadBalancingAlgorithm Dynamic
o Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName SMB_1 -ManagementOS -PhysicalNetAdapterName 'NIC1'
o Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName SMB_2 -ManagementOS -PhysicalNetAdapterName 'NIC2'
Lesson 4: Planning for and implementing Datacenter Firewall
Lesson 4 overview

Datacenter Firewall is another core component of the NFV-based SDN implementation. Its
purpose is to restrict internal and external connectivity in SDN environments
 Topics:
o Datacenter Firewall functionality in Software-Defined Networking

o Datacenter Firewall infrastructure in Software-Defined Networking

o Implement and configure Datacenter Firewall in Software-Defined Networking

o Troubleshoot Datacenter Firewall in Software-Defined Networking


Datacenter Firewall functionality in Software-Defined Networking

 Datacenter Firewall is a network layer, 5-tuple, stateful, distributed multitenant firewall


o Admins can apply firewall policies on the vmNIC and subnet level by using the
Northbound API of Network Controller
 Benefits for Azure Stack HCI admins:
o A software-based firewall solution that’s highly scalable and manageable

o Support for moving VMs across Hyper-V hosts

o Protection of VMs regardless of their guest operating system

 Benefits for Azure Stack HCI users:


o Firewall rules that filter traffic targeting Internet-facing workloads

o Firewall rules that filter traffic between VMs on the same virtual subnet and between VMs on different virtual subnets
o Firewall rules that protect and isolate network traffic between user networks and
their virtual networks hosted by Azure Stack HCI
Datacenter Firewall infrastructure in Software-Defined Networking
[Diagram: PowerShell and other management tools call the Network Controller Northbound Interface (REST APIs); the Distributed Firewall Manager in Network Controller pushes policies through the Southbound Interface to the vSwitch on each Hyper-V host (Host 1, Host 2), where they're enforced at the vmNICs of the tenant VMs and at the gateway]
Implement and configure Datacenter Firewall in Software-Defined
Networking (1 of 2)
 Configure Datacenter Firewall to allow all traffic
 To test basic connectivity, create two rules that allow all network traffic:

Source IP | Destination IP | Protocol | Source port | Destination port | Direction | Action | Priority
*         | *              | All      | *           | *                | Inbound   | Allow  | 100
*         | *              | All      | *           | *                | Outbound  | Allow  | 101

 Use ACLs to limit traffic within a subnet:

Source IP      | Destination IP | Protocol | Source port | Destination port | Direction | Action | Priority
192.168.0.1    | *              | All      | *           | *                | Inbound   | Allow  | 100
*              | 192.168.0.1    | All      | *           | *                | Outbound  | Allow  | 101
192.168.0.0/24 | *              | All      | *           | *                | Inbound   | Block  | 102
*              | 192.168.0.0/24 | All      | *           | *                | Outbound  | Block  | 103
*              | *              | All      | *           | *                | Inbound   | Allow  | 104
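
A hedged PowerShell sketch of creating the two allow-all rules from the first table as a Network Controller ACL (the connection URI and resource IDs are examples; cmdlet and type names follow the documented NetworkController module patterns):
$uri = 'https://nc.contoso.com'
# Inbound allow-all rule (priority 100)
$ruleIn = New-Object Microsoft.Windows.NetworkController.AclRule
$ruleIn.ResourceId = 'AllowAll_Inbound'
$ruleIn.Properties = New-Object Microsoft.Windows.NetworkController.AclRuleProperties
$ruleIn.Properties.Protocol = 'All'
$ruleIn.Properties.SourcePortRange = '0-65535'
$ruleIn.Properties.DestinationPortRange = '0-65535'
$ruleIn.Properties.SourceAddressPrefix = '*'
$ruleIn.Properties.DestinationAddressPrefix = '*'
$ruleIn.Properties.Action = 'Allow'
$ruleIn.Properties.Type = 'Inbound'
$ruleIn.Properties.Priority = '100'
$ruleIn.Properties.Logging = 'Enabled'
# Outbound allow-all rule (priority 101)
$ruleOut = New-Object Microsoft.Windows.NetworkController.AclRule
$ruleOut.ResourceId = 'AllowAll_Outbound'
$ruleOut.Properties = New-Object Microsoft.Windows.NetworkController.AclRuleProperties
$ruleOut.Properties.Protocol = 'All'
$ruleOut.Properties.SourcePortRange = '0-65535'
$ruleOut.Properties.DestinationPortRange = '0-65535'
$ruleOut.Properties.SourceAddressPrefix = '*'
$ruleOut.Properties.DestinationAddressPrefix = '*'
$ruleOut.Properties.Action = 'Allow'
$ruleOut.Properties.Type = 'Outbound'
$ruleOut.Properties.Priority = '101'
$ruleOut.Properties.Logging = 'Enabled'
# Publish the ACL to Network Controller; it can then be assigned to virtual subnets or vmNICs
$aclprops = New-Object Microsoft.Windows.NetworkController.AccessControlListProperties
$aclprops.AclRules = @($ruleIn, $ruleOut)
New-NetworkControllerAccessControlList -ConnectionUri $uri -ResourceId 'AllowAll' -Properties $aclprops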
Implement and configure Datacenter Firewall in Software-Defined
Networking (2 of 2)
 To add an ACL to a network interface:
1. Get or create the network interface to which you’ll add the ACL
2. Get or create the ACL you’ll add to the network interface

3. Assign the ACL to the AccessControlList property of the network interface

4. Add the network interface in Network Controller


 To remove an ACL from a network interface:
1. Get the network interface from which you’ll remove the ACL

2. Assign $NULL to the AccessControlList property of the ipConfiguration

3. Add the network interface object in Network Controller


Troubleshoot Datacenter Firewall in Software-Defined Networking

 Datacenter Firewall auditing:


o Introduced in Windows Server 2019
 To implement Datacenter Firewall auditing:
• Apply a one-time configuration to the Network Controller
• Enable logging on individual ACL rules
• Network flows that match the ACL rules are automatically logged
 Centralized logging of Network Controller:
o Includes debug logs for the Network Controller cluster, the Network Controller
application, gateway logs, SLB, virtual networking, and the Distributed Firewall
o Is automatically enabled during installation of a Network Controller cluster

o Requires you to configure a centralized location for log collection to a remote file
share to minimize the possibility of disk space issues
Instructor-led lab C: Implementing SDN Access Control List by using Windows Admin Center
 Implementing SDN Access Control List by
using Windows Admin Center
Lab C scenario

As part of the security requirements within the SDN environment, you need to be able to filter specific
types of traffic between virtual network subnets. You intend to use the SDN functionality for this purpose,
rather than relying exclusively on the operating system to perform this task
Lab C: Implementing SDN Access Control List by using Windows
Admin Center
 Exercise 1: Implementing SDN Access Control List by using Windows Admin Center
Lesson 5: Planning for and implementing Software Load Balancing
Lesson 5 overview

The Microsoft SDN Software Load Balancer (SLB) is an L4 load balancer that distributes incoming traffic among
virtual machines defined in a load balancer set. In addition to standard load balancing
features, it's a core component of the NFV-based SDN implementation, which offers such
enhancements as DSR, Hyper-V host-based health probes, built-in NAT functionality, and
automatic support for the Active/Active mode
 Topics:
o SLB functionality in Software-Defined Networking

o SLB infrastructure in Software-Defined Networking

o Implement SLB in Software-Defined Networking

o Troubleshoot SLB in Software-Defined Networking


SLB functionality in Software-Defined Networking (1 of 7)
[Diagram: a management workstation configures SLB through Network Controller (SLB Manager and Health Monitor); Network Controller pushes policies to SLB Host Agents on the Hyper-V hosts and to SLB MUX VMs; a client sends traffic to a VIP on an SLB MUX, which forwards it to DIP VMs running on the Hyper-V hosts]
SLB functionality in Software-Defined Networking (2 of 7)

SLB is implemented by using the Hyper-V virtual switch as the data plane and is controlled by using Network Controller
 SLB maps virtual IP addresses to dynamic IP addresses:
o Virtual IPs are located on an SLB MUX and provide public or private access to a pool
of load balanced VMs
o Dynamic IPs are IP addresses of VMs in the load balanced pool assigned within a
virtual network
 An SLB MUX consists of one or more VMs and serves several functions:
o Implements load balancing (5, 3, and 2 tuples) policies defined by Network
Controller
o Uses BGP to advertise virtual IPs to routers

 SLB health probes originate from the host where the dynamic IP VM resides and can be TCP
or HTTP-based
SLB functionality in Software-Defined Networking (3 of 7)

SLB functionality can be grouped into the following categories


 Core functionality:
o Layer 4 load balancing for North-South and East-West TCP/UDP traffic
o Support for HNV and VLAN networks for VMs connected to an SDN Hyper-V vSwitch
o Sharing of MUX VMs across tenants
o Support for scalable, low-latency return path using DSR
o Integration with SET and SR-IOV
o Support for NAT in S2S gateway scenarios with a single public IP supporting all
connections
 Scale and performance:
o Support for horizontal and vertical scaling of MUXs and Host Agents
o Optimized use of SLB Manager Network Controller module (support for up to eight
MUX instances)
SLB functionality in Software-Defined Networking (4 of 7)

 High availability:
o Support for multiple nodes in an Active/Active configuration
o Support for horizontal scaling of MUX pools
o Uptime of 99% of individual MUX instances
o Health monitoring data available to management entities
 Integration with SCVMM, RAS Gateway, and Datacenter Firewall
SLB functionality in Software-Defined Networking (5 of 7)

SLB optimizes traffic flow by using Direct Server Return for public access:
1. A client sends a request targeting a public virtual IP

2. A MUX selects a dynamic IP corresponding to the virtual IP, encapsulates the


packet, and forwards it to the physical network IP address of the Hyper-V host
where the dynamic IP is located
3. The Hyper-V host removes encapsulation from the packet, rewrites the virtual IP
to a dynamic IP, remaps the ports, and forwards the packet to the dynamic IP VM
4. The dynamic IP VM generates a response

5. The Hyper-V host intercepts the response, rewrites dynamic IP to virtual IP,
identifies the client IP, and sends the response via the edge router, bypassing
the MUX
SLB functionality in Software-Defined Networking (6 of 7)

[Diagram: DSR traffic flow - a request to http://sharepoint.contoso.com arrives at VIP 107.105.47.60 on a MUX, which forwards it to a DIP in the server farm (DIP1: 10.10.10.5 or DIP2: 10.10.20.5); the response returns from the server farm directly to the client, bypassing the MUX]
SLB functionality in Software-Defined Networking (7 of 7)

SLB optimizes traffic flow by using Direct Server Return for private access
1. A client sends a request targeting a private virtual IP

2. A MUX selects a dynamic IP corresponding to the virtual IP, encapsulates the


packet, and forwards it to the physical network IP address of the Hyper-V host
where the dynamic IP is located
3. The Hyper-V host removes encapsulation from the packet, translates the virtual
IP to a dynamic IP, remaps the ports, and forwards the packet to the dynamic IP
VM
4. The dynamic IP VM generates a response

5. The Hyper-V host intercepts the response, identifies the client IP, and sends the
response back to the client, bypassing the MUX
6. After the traffic flow is established, subsequent traffic bypasses the MUX
completely
SLB infrastructure in Software-Defined Networking (1 of 2)

 A Network Controller:
o Hosts the SLB Manager
o Processes SLB commands that come in through the Northbound API

o Calculates policy for distribution to Hyper-V hosts and SLB MUXs

o Provides the health status of the SLB infrastructure

 An SLB MUX:
o Processes inbound network traffic and maps virtual IPs to dynamic IPs, then forwards
the traffic to dynamic IPs
o Uses BGP to publish virtual IP routes to edge routers

 A Hyper-V host:
o Hosts MUX VMs, dynamic IP VMs, and SLB Host Agents

 An SLB Host Agent:

o Accepts SLB policy updates from Network Controller

o Programs rules for SLB into SDN-enabled Hyper-V virtual switches
SLB infrastructure in Software-Defined Networking (2 of 2)

 SDN enabled Hyper-V virtual switches:


o Process data path for SLB
o Receive inbound network traffic from the MUX

o Bypass the MUX for outbound network traffic, sending it to a router using DSR

 BGP enabled routers:


o Route inbound traffic to the MUX by using ECMP

o For outbound network traffic, use the route provided by the host

o Accept route updates for virtual IPs from SLB MUX

o Remove SLB MUXs from the SLB rotation if Keep Alive fails

 Management applications:
o Use System Center 2019, SDN Express, Windows PowerShell, or another
management application to install and configure Network Controller and its SLB
infrastructure
Implement SLB in Software-Defined Networking (1 of 3)

 Prerequisites for implementation:


o Network Controller
o One or more Hyper-V hosts with the SDN-enabled Hyper-V vSwitches running SLB
Host Agent
o One or more SLB MUX VMs

o Routers supporting ECMP routing, BGP, and configured to accept BGP peering
requests from the SLB MUX VMs
 Implementation scenarios:
o Public virtual IP load balancing

o Private virtual IP load balancing

o Outbound NAT

o Inbound NAT
Implement SLB in Software-Defined Networking (2 of 3)

Example 1-To create a public virtual IP for load balancing a pool of two VMs on a virtual
network:
1. Prepare the load balancer object
2. Assign a front-end IP address that will serve as a virtual IP
3. Allocate a back-end address pool, which contains the dynamic IPs that make up the
members of the load-balanced set of VMs
4. Define a health probe that the load balancer uses to determine the health state of
the backend pool members
5. Define a load balancing rule to send traffic that arrives at the front-end IP to the
back-end IP
6. Add network interfaces to the back-end pool. You must repeat this step for each
network interface that can process requests made to the virtual IP
7. Get the load balancer object containing the back-end pool to add a network
interface
8. Get the network interface and add the backendaddress pool to the
loadbalancerbackendaddresspools array
9. Apply the change to the network interface
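
A partial, hedged sketch of steps 1 through 3 and step 9 from Example 1 (the connection URI, resource IDs, VIP, and logical network name are examples; the health probe and load balancing rule from steps 4 and 5 are built the same way with LoadBalancerProbe and LoadBalancingRule objects before the final call):
$uri = 'https://nc.contoso.com'
# Step 1: prepare the load balancer properties object
$lbprops = New-Object Microsoft.Windows.NetworkController.LoadBalancerProperties
# Step 2: front-end IP configuration that takes a VIP from the public VIP logical network
$vipNetwork = Get-NetworkControllerLogicalNetwork -ConnectionUri $uri -ResourceId 'PublicVIP'
$fe = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfiguration
$fe.ResourceId = 'FE1'
$fe.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfigurationProperties
$fe.Properties.PrivateIPAddress = '10.127.134.5'
$fe.Properties.PrivateIPAllocationMethod = 'Static'
$fe.Properties.Subnet = New-Object Microsoft.Windows.NetworkController.Subnet
$fe.Properties.Subnet.ResourceRef = $vipNetwork.Properties.Subnets[0].ResourceRef
$lbprops.FrontendIpConfigurations = @($fe)
# Step 3: back-end address pool that the DIP network interfaces join later (steps 6 through 9)
$be = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPool
$be.ResourceId = 'BE1'
$be.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPoolProperties
$lbprops.BackendAddressPools = @($be)
# Create the load balancer object in Network Controller
New-NetworkControllerLoadBalancer -ConnectionUri $uri -ResourceId 'LB_Web1' -Properties $lbprops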
Implement SLB in Software-Defined Networking (3 of 3)

Example 2-To configure outbound NAT:


1. Create the load balancer properties, front-end IP, and back-end pool

2. Define the outbound NAT rule

3. Add the load balancer object in Network Controller

4. Add network interfaces to the back-end pool. You must repeat this step for each
network interface that can process requests made to the virtual IP
5. Get the load balancer object containing the back-end pool to add a network
interface
6. Get the network interface and add the backendaddress pool to the
loadbalancerbackendaddresspools array
7. Apply the change to the network interface

8. Add the load balancer configuration to Network Controller


Troubleshoot SLB in Software-Defined Networking

Use several types of data to troubleshoot SLB-related issues:


 SLB Configuration State:
o Included in the output of the Debug-NetworkController cmdlet

 SLB Diagnostics:
o Software Load Balancer Manager (SLBM) fabric errors (Hosting service provider
actions)
o SLBM Tenant errors (Hosting service provider and tenant actions)

 SLBMuxDriver Windows Analytics and Debug Log:


o In Windows Event Viewer, first enable the option Show Analytic and Debug Logs,
then observe the SlbMuxDriver log
o Keep logging enabled only while you’re trying to reproduce a problem
Lesson 6: Planning for and implementing RAS Gateways
Lesson 6 overview

RAS Gateway is a software-based, BGP-capable router based on HNV. RAS Gateways handle
routing and tunneling in scenarios that involve connectivity between virtual and physical
networks
 Topics:
o RAS Gateway functionality in Software-Defined Networking

o RAS Gateway infrastructure in Software-Defined Networking

o Implement RAS Virtual Gateways and gateway pools

o Troubleshoot RAS Virtual Gateways


RAS Gateway functionality in Software-Defined Networking (1 of
3)
 RAS Gateway offers the following primary features:
o IKEv2 site-to-site VPN and point-to-site VPN
o L3 forwarding gateway

o GRE Tunneling

• Access from tenant virtual networks to tenant physical networks


• High-speed connectivity
• Integration with VLAN based isolation
• Access to shared resources
• Services of third-party devices to tenants
o Dynamic routing with BGP

 RAS Gateway performance

o In Windows Server 2019, the throughput has increased significantly:

• 1.8 Gbps for IPsec connections

• 15 Gbps for GRE connections
RAS Gateway functionality in Software-Defined Networking (2 of
3)
[Diagram: GRE tunneling scenarios - the Red enterprise site connects over GRE tunnels and an MPLS tunnel (including a VLAN-isolated physical network) to multitenant gateways in the hoster's datacenter, which terminate the tunnels into the Red and Green virtual networks on the HNV network]
RAS Gateway functionality in Software-Defined Networking (3 of
3)
[Diagram: two scenarios - GRE tunnels connect a physical network (172.123.10.0/24) through single-tenant and multitenant gateways to the Red (192.168.20.0/24) and Green (192.170.20.0/24) virtual networks on the HNV network; a load balancer distributes site-to-site tunnels from the Red and Green enterprise sites across multitenant gateways into their respective virtual networks]
RAS Gateway infrastructure in Software-Defined Networking (1 of
3)
Gateway pools route traffic between physical and virtual networks:
o Each pool is M+N redundant

o A pool can perform any combination of the individual gateway functions

o Assign a single public IP address to all pools or to a subset of pools

o Scale a gateway horizontally by adding or removing gateway VMs in the pool

o Connections of a single tenant can terminate on multiple pools and multiple


gateways in a pool
o Create gateway pools using criteria, including:

• Tunnel types, such as IKEv2 VPN, L3 VPN, or GRE VPN


• Capacity
• Redundancy level
• Customized separation for tenants
RAS Gateway infrastructure in Software-Defined Networking (2 of
3)

[Diagram: Contoso sites (Washington DC and HQ in LA) and the Woodgrove SFO site connect through an edge BGP router and the Software Load Balancing pool (MUXs) to a pool of multitenant RAS Gateways, which terminate the connections into the Contoso and Woodgrove virtual networks]
RAS Gateway infrastructure in Software-Defined Networking (3 of
3)
 BGP Route Reflector:
o Provides an alternative to the BGP mesh topology required for route synchronization
o Functions as the only router connecting to all other routers (BGP Route Reflector
clients)
o Calculates best routes and redistributes them to its clients
o Resides in the control plane between the RAS Gateways and the Network Controller
o Doesn’t participate in data plane routing
o Defaults to the first tenant RAS Gateway
o Serves all the RAS Gateway VMs associated with the same tenant
o Updates Network Controller with the routes that correspond to the tenant's remote
sites
Implement RAS Virtual Gateways and gateway pools (1 of 2)

The RAS Gateway Management feature of Network Controller manages the following features:
 Deployment of the gateway pools
 Adding and removing RAS gateway VMs in the pool
 Provisioning and deprovisioning virtual gateways on tenant virtual networks, including:
o S2S VPN gateway connectivity with remote tenant networks by using IPsec

o S2S VPN gateway connectivity with remote tenant networks by using GRE

o L3 forwarding connectivity

 BGP routing
 Switching network traffic flows to a standby gateway in the event of a gateway failure
Implement RAS Virtual Gateways and gateway pools (2 of 2)

 To add a virtual gateway to a tenant virtual network:


1. Identify a gateway pool object in Network Controller
2. Identify a subnet to be used for routing packets out of the tenant's virtual network
in Network Controller
3. Create an object for the tenant virtual gateway and update the gateway pool
reference
4. Create a connection with IPsec, GRE, or L3 forwarding
a. For L3 forwarding, configure a logical network
b. Create a network connection object and add it to Network Controller
5. Configure the gateway as a BGP router and add it to Network Controller
a. Add a BGP router for the tenant
b. Add a BGP Peer for this tenant, corresponding to the site-to-site VPN network
connection that you added in the previous step
Troubleshoot RAS Virtual Gateways

 Perform RAS Gateway validation from:


o Network Controller
o Gateway VM

o ToR Switch

o A Windows BGP router

 Take advantage of the centralized logging capability of Network Controller:


o Network Controller logs all gateway configuration and state changes
Instructor-led lab D: Implementing SDN Software Load Balancing with private virtual IP by using PowerShell
 Implementing SDN Software Load Balancing
by using Windows Admin Center and Windows
PowerShell
Lab D: Implementing SDN Software Load Balancing with private
virtual IP by using PowerShell
 Exercise 1: Implementing SDN Software Load Balancing by using Windows Admin Center
and Windows PowerShell
Lab D scenario

You need to configure VMs on virtual networks that will serve load-balanced workloads accessible from
within the datacenter hosting your SDN infrastructure. In addition, you need to ensure that you’ll be
able to configure VMs on virtual networks to connect to the internet and to accept inbound
connectivity from your datacenter servers. Rather than relying on third-party load balancers, you
intend to use SDN Software Load Balancer for this purpose
Module-review questions (1 of 3)

1. Which of the following technologies serves as the basis for Software-Defined Networking
(SDN) implementation of Azure Stack HCI?
a. Virtual Filtering Platform (VFP) forwarding extension
b. Windows Management Instrumentation (WMI)
c. System Center Virtual Machine Manager
d. Network Virtualization Generic Routing Encapsulation (NVGRE)

2. Which of the following features optimizes distribution of traffic destined for vmNICs across
multiple processors on a Hyper-V host in Windows Server 2019?
a. Virtual Machine Queue (VMQ)
b. Virtual Machine Multi-Queue (VMMQ)
c. Dynamic Virtual Machine Queue (Dynamic VMQ)
d. Dynamic Virtual Machine Multi Queue (Dynamic VMMQ)
Module-review questions (2 of 3)

3. Which of the following features doesn’t depend on the underlying hardware?


a. Switch Embedded Teaming (SET)
b. Dynamic VMQ
c. Single Root I/O Virtualization (SR-IOV)
d. Receive Segment Coalescing (RSC) in the Hyper-V Virtual Switch (vSwitch)

4. Which of the following isn’t a requirement for Hyper-V hosts hosting tenant workload VMs,
but not SDN infrastructure VMs, in an SDN environment?
a. Windows Server 2019 Standard edition
b. An external Hyper-V virtual switch
c. A Management IPv4 address assigned to the virtual NIC
d. A Management IPv6 address assigned to the virtual NIC
Module-review questions (3 of 3)

5. Which of the following methods can be used for authorization of Southbound API
communication of Network Controller with RAS Gateways?
a. WinRM
b. OVSDB
c. WCF
d. TCP
Module-review answers

1. Which of the following technologies serves as the basis for Software-Defined Networking
(SDN) implementation of Azure Stack HCI?
a. Virtual Filtering Platform (VFP) forwarding extension
2. Which of the following features optimizes distribution of traffic destined for vmNICs across
multiple processors on a Hyper-V host in Windows Server 2019?
d. Dynamic Virtual Machine Multi Queue (Dynamic VMMQ)
3. Which of the following features doesn’t depend on the underlying hardware?
d. Receive Segment Coalescing (RSC) in the Hyper-V Virtual Switch (vSwitch)
4. Which of the following isn’t a requirement for Hyper-V hosts hosting tenant workload VMs,
but not SDN infrastructure VMs, in an SDN environment?
d. A Management IPv6 address assigned to the virtual NIC
5. Which of the following methods can be used for authorization of Southbound API
communication of Network Controller with RAS Gateways?
a. WinRM
Thank you

© Copyright Microsoft Corporation. All rights reserved.
