MS Ref - Virtualization and HyperV
Virtualization
Guarded fabric and shielded VMs
Hyper-V
Technology Overview
What's new in Hyper-V
System requirements
Supported Windows guest operating systems
Supported Linux and FreeBSD VMs
CentOS and Red Hat Enterprise Linux VMs
Debian VMs
Oracle Linux VMs
SUSE VMs
Ubuntu VMs
FreeBSD VMs
Feature Descriptions for Linux and FreeBSD VMs
Best Practices for running Linux
Best practices for running FreeBSD
Feature compatibility by generation and guest
Get started
Install Hyper-V
Create a virtual switch
Create a virtual machine
Plan
VM Generation
Networking
Scalability
Security
Discrete Device Assignment
Deploy
Export and import virtual machines
Set up hosts for live migration
Upgrade virtual machine version
Deploy graphics devices using DDA
Deploy storage devices using DDA
Manage
Configuring Persistent Memory Devices for Hyper-V VMs
Choose between standard or production checkpoints
Create a VHD set
Enable or disable checkpoints
Manage hosts with Hyper-V Manager
Manage host CPU resource controls
Using VM CPU Groups
Manage hypervisor scheduler types
About Hyper-V scheduler type selection
Manage Integration Services
Manage Windows VMs with PowerShell Direct
Set up Hyper-V Replica
Move VMs with live migration
Live Migration Overview
Set up hosts for live migration
Use live migration without Failover Clustering
Hyper-V Performance Tuning
Hyper-V Virtual Switch
Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)
Manage Hyper-V Virtual Switch
Configure and View VLAN Settings on Hyper-V Virtual Switch Ports
Create Security Policies with extended Port Access Control lists
TIP
Looking for information about older versions of Windows Server? Check out our other Windows Server libraries on
docs.microsoft.com. You can also search this site for specific information.
Virtualization in Windows Server is one of the foundational technologies required to create your software defined infrastructure.
Along with networking and storage, virtualization features deliver the flexibility you need to power workloads for your customers.
Windows Containers
Windows Containers provide operating system-level virtualization that allows multiple isolated applications to be run on a
single system. Two different types of container runtimes are included with the feature, each with a different degree of
application isolation.
Hyper-V Virtual Switch is available in Hyper-V Manager after you install the Hyper-V server role.
Included in Hyper-V Virtual Switch are programmatically managed and extensible capabilities that allow you to connect
virtual machines to both virtual networks and the physical network.
In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels.
Related
Hyper-V requires specific hardware to create the virtualization environment. For details, see System requirements for
Hyper-V on Windows Server 2016.
For more information about Hyper-V on Windows 10, see Hyper-V on Windows 10.
Guarded fabric and shielded VMs
Applies to: Windows Server 2019, Windows Server (Semi-Annual Channel), Windows Server 2016
One of the most important goals of providing a hosted environment is to guarantee the security of the virtual
machines running in the environment. As a cloud service provider or enterprise private cloud administrator, you
can use a guarded fabric to provide a more secure environment for VMs. A guarded fabric consists of one Host
Guardian Service (HGS) - typically, a cluster of three nodes - plus one or more guarded hosts, and a set of shielded
virtual machines (VMs).
IMPORTANT
Ensure that you have installed the latest cumulative update before you deploy shielded virtual machines in production.
Videos, blog, and overview topic about guarded fabrics and shielded
VMs
Video: How to protect your virtualization fabric from insider threats with Windows Server 2019
Video: Introduction to Shielded Virtual Machines in Windows Server 2016
Video: Dive into Shielded VMs with Windows Server 2016 Hyper-V
Video: Deploying Shielded VMs and a Guarded Fabric with Windows Server 2016
Blog: Datacenter and Private Cloud Security Blog
Overview: Guarded fabric and shielded VMs overview
Planning topics
Planning guide for hosters
Planning guide for tenants
Deployment topics
Deployment Guide
Quick start
Deploy HGS
Deploy guarded hosts
Configuring the fabric DNS for hosts that will become guarded hosts
Deploy a guarded host using AD mode
Deploy a guarded host using TPM mode
Confirm guarded hosts can attest
Shielded VMs - Hosting service provider deploys guarded hosts in VMM
Deploy shielded VMs
Create a shielded VM template
Prepare a VM Shielding helper VHD
Set up Windows Azure Pack
Create a shielding data file
Deploy a shielded VM by using Windows Azure Pack
Deploy a shielded VM by using Virtual Machine Manager
The Hyper-V role in Windows Server lets you create a virtualized computing environment where you can create
and manage virtual machines. You can run multiple operating systems on one physical computer and isolate the
operating systems from each other. With this technology, you can improve the efficiency of your computing
resources and free up your hardware resources.
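As a minimal sketch (run from an elevated PowerShell session on Windows Server; installation is covered in detail in the Install Hyper-V topic), the role and its management tools can be added with a single cmdlet:
# Install the Hyper-V role and management tools, then restart to finish enabling the hypervisor.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart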
See the topics in the following table to learn more about Hyper-V on Windows Server.
Evaluate Hyper-V
Blogs
- Virtualization Blog
- Windows Server Blog
- Ben Armstrong's Virtualization Blog (archived)
Related technologies
The following table lists technologies that you might want to use in your virtualization computing environment.
Failover Clustering - Windows Server feature that provides high availability for Hyper-V hosts and virtual machines.
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software version of a computer,
called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and
programs. When you need computing resources, virtual machines give you more flexibility, help save time and
money, and are a more efficient way to use hardware than just running one operating system on physical hardware.
Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual
machine on the same hardware at the same time. You might want to do this to avoid problems such as a crash
affecting the other workloads, or to give different people, groups or services access to different systems.
Related technologies
These are some technologies from Microsoft that are often used with Hyper-V:
Failover Clustering
Remote Desktop Services
System Center Virtual Machine Manager
Various storage technologies: cluster shared volumes, SMB 3.0, storage spaces direct
Windows containers offer another approach to virtualization. See the Windows Containers library on MSDN.
What's new in Hyper-V on Windows Server
Applies To: Windows Server 2019, Microsoft Hyper-V Server 2016, Windows Server 2016
This article explains the new and changed functionality of Hyper-V on Windows Server 2019, Windows Server
2016, and Microsoft Hyper-V Server 2016. To use new features on virtual machines created with Windows Server
2012 R2 and moved or imported to a server that runs Hyper-V on Windows Server 2019 or Windows Server
2016, you'll need to manually upgrade the virtual machine configuration version. For instructions, see Upgrade
virtual machine version.
Here's what's included in this article and whether the functionality is new or updated.
IMPORTANT
The vmguest.iso image file is no longer needed, so it isn't included with Hyper-V on Windows Server 2016.
For more information about Linux virtual machines on Hyper-V, see Linux and FreeBSD Virtual Machines on
Hyper-V. For more information about the cmdlet, see Set-VMFirmware.
More memory and processors for generation 2 virtual machines and Hyper-V hosts (updated)
Starting with version 8, generation 2 virtual machines can use significantly more memory and virtual processors.
Hosts also can be configured with significantly more memory and virtual processors than were previously
supported. These changes support new scenarios such as running large in-memory e-commerce databases for
online transaction processing (OLTP) and data warehousing (DW). The Windows Server blog recently published
the performance results of a virtual machine with 5.5 terabytes of memory and 128 virtual processors running a 4
TB in-memory database. Performance was greater than 95% of the performance of a physical server. For details,
see Windows Server 2016 Hyper-V large-scale VM performance for in-memory transaction processing. For details
about virtual machine versions, see Upgrade virtual machine version in Hyper-V on Windows 10 or Windows
Server 2016. For the full list of supported maximum configurations, see Plan for Hyper-V scalability in Windows
Server 2016.
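As an illustrative sketch only (the VM name "BigSQLVM" and the sizes below are assumptions, not values from this article), a large generation 2 virtual machine can be sized with the Hyper-V PowerShell cmdlets while it is turned off:
# Assign many virtual processors and a large amount of static memory to a stopped VM.
Stop-VM -Name "BigSQLVM"
Set-VMProcessor -VMName "BigSQLVM" -Count 64
Set-VMMemory -VMName "BigSQLVM" -DynamicMemoryEnabled $false -StartupBytes 1TB
Start-VM -Name "BigSQLVM"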
Nested virtualization (new)
This feature lets you use a virtual machine as a Hyper-V host and create virtual machines within that virtualized
host. This can be especially useful for development and test environments. To use nested virtualization, you'll need:
To run at least Windows Server 2019, Windows Server 2016 or Windows 10 on both the physical Hyper-V
host and the virtualized host.
A processor with Intel VT-x (nested virtualization is available only for Intel processors at this time).
For details and instructions, see Run Hyper-V in a Virtual Machine with Nested Virtualization.
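A minimal sketch of enabling nested virtualization from the physical host (the VM name is illustrative; the VM must be turned off first):
# Expose the processor's virtualization extensions to the VM so it can run Hyper-V itself.
Stop-VM -Name "NestedHost"
Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true
# MAC address spoofing is typically enabled so that nested guests have network connectivity.
Set-VMNetworkAdapter -VMName "NestedHost" -MacAddressSpoofing On
Start-VM -Name "NestedHost"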
Networking features (new)
New networking features include:
Remote direct memory access (RDMA) and switch embedded teaming (SET). You can set up RDMA
on network adapters bound to a Hyper-V virtual switch, regardless of whether SET is also used. SET
provides a virtual switch with some of the same capabilities as NIC teaming (see the sketch after this list). For
details, see Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET).
Virtual machine multi queues (VMMQ). Improves on VMQ throughput by allocating multiple hardware
queues per virtual machine. The default queue becomes a set of queues for a virtual machine, and traffic is
spread between the queues.
Quality of service (QoS) for software-defined networks. Manages the default class of traffic through
the virtual switch within the default class bandwidth.
For more about new networking features, see What's new in Networking.
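As a hedged sketch of the SET scenario described above (the adapter and switch names are illustrative), a SET-backed virtual switch and an RDMA-enabled host virtual NIC can be created with PowerShell:
# Create a virtual switch backed by an embedded team of two physical adapters.
New-VMSwitch -Name "SETSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
# Add a host (management OS) virtual NIC on that switch and enable RDMA on it.
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETSwitch" -Name "SMB1"
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"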
Production checkpoints (new)
Production checkpoints are "point-in-time" images of a virtual machine. These give you a way to apply a checkpoint
that complies with support policies when a virtual machine runs a production workload. Production checkpoints
are based on backup technology inside the guest instead of a saved state. For Windows virtual machines, the
Volume Snapshot Service (VSS ) is used. For Linux virtual machines, the file system buffers are flushed to create a
checkpoint that's consistent with the file system. If you'd rather use checkpoints based on saved states, choose
standard checkpoints instead. For details, see Choose between standard or production checkpoints in Hyper-V.
IMPORTANT
New virtual machines use production checkpoints as the default.
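A minimal sketch of choosing a checkpoint type and taking a checkpoint with PowerShell (the VM and checkpoint names are illustrative):
# Use production checkpoints, falling back to standard checkpoints if one can't be created.
Set-VM -Name "WebServer01" -CheckpointType Production
# Or require production checkpoints only, with no fallback.
Set-VM -Name "WebServer01" -CheckpointType ProductionOnly
# Create a checkpoint of the running virtual machine.
Checkpoint-VM -Name "WebServer01" -SnapshotName "Before-update"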
IMPORTANT
After you update the cluster functional level, you can't return it to Windows Server 2012 R2.
For a Hyper-V cluster with a functional level of Windows Server 2012 R2 with nodes running Windows Server
2012 R2, Windows Server 2019 and Windows Server 2016, note the following:
Manage the cluster, Hyper-V, and virtual machines from a node running Windows Server 2016 or Windows
10.
You can move virtual machines between all of the nodes in the Hyper-V cluster.
To use new Hyper-V features, all nodes must run Windows Server 2016 or later, and the cluster functional level
must be updated.
The virtual machine configuration version for existing virtual machines isn't upgraded. You can upgrade the
configuration version only after you upgrade the cluster functional level.
Virtual machines that you create are compatible with Windows Server 2012 R2, virtual machine
configuration level 5.
After you update the cluster functional level:
You can enable new Hyper-V features.
To make new virtual machine features available, use the Update-VMVersion cmdlet to
manually update the virtual machine configuration version, as shown in the sketch below. For instructions, see Upgrade virtual machine
version.
You can't add a node to the Hyper-V Cluster that runs Windows Server 2012 R2.
NOTE
Hyper-V on Windows 10 doesn't support failover clustering.
For details and instructions, see the Cluster Operating System Rolling Upgrade.
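As a hedged sketch of the final steps of a rolling upgrade (the VM name is illustrative; Update-ClusterFunctionalLevel comes from the FailoverClusters module):
# Raise the cluster functional level once all nodes run the newer operating system. This can't be undone.
Update-ClusterFunctionalLevel
# Then upgrade individual virtual machine configuration versions; the VM must be turned off.
Stop-VM -Name "AppVM01"
Update-VMVersion -Name "AppVM01"
Start-VM -Name "AppVM01"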
Shared virtual hard disks (updated)
You can now resize shared virtual hard disks (.vhdx files) used for guest clustering, without downtime. Shared
virtual hard disks can be grown or shrunk while the virtual machine is online. Guest clusters can now also protect
shared virtual hard disks by using Hyper-V Replica for disaster recovery.
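For example (a minimal sketch; the path and target size are illustrative), a shared .vhdx can be grown with Resize-VHD while the guest cluster stays online:
# Expand a shared virtual hard disk to 500 GB without taking the guest cluster down.
Resize-VHD -Path "C:\ClusterStorage\Volume1\GuestClusterData.vhdx" -SizeBytes 500GB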
Enable replication on the collection. Enabling replication on a collection is only exposed through the WMI
interface. See the documentation for the Msvm_CollectionReplicationService class for more details. You cannot
manage replication of a collection through PowerShell cmdlets or the UI. The VMs should be on hosts that are
part of a Hyper-V cluster to access features that are specific to a collection. This includes Shared VHD - shared
VHDs on stand-alone hosts are not supported by Hyper-V Replica.
Follow the guidelines for shared VHDs in Virtual Hard Disk Sharing Overview, and be sure that your shared VHDs
are part of a guest cluster.
A collection with a shared VHD but no associated guest cluster cannot create reference points for the collection
(regardless of whether the shared VHD is included in the reference point creation or not).
Virtual machine backup (new)
If you are backing up a single virtual machine (regardless of whether the host is clustered or not), you should not use a
VM group, nor should you use a snapshot collection. VM groups and snapshot collections are meant to be used
solely for backing up guest clusters that are using shared VHDX files. Instead, you should take a snapshot using the
Hyper-V WMI v2 provider. Likewise, do not use the Failover Cluster WMI provider.
Shielded virtual machines (new)
Shielded virtual machines use several features to make it harder for Hyper-V administrators and malware on the
host to inspect, tamper with, or steal data from the state of a shielded virtual machine. Data and state are encrypted,
Hyper-V administrators can't see the video output and disks, and the virtual machines can be restricted to run only
on known, healthy hosts, as determined by a Host Guardian Server. For details, see Guarded Fabric and Shielded
VMs.
NOTE
Shielded virtual machines are compatible with Hyper-V Replica. To replicate a shielded virtual machine, the host you want to
replicate to must be authorized to run that shielded virtual machine.
IMPORTANT
The .vmcx file name extension indicates a binary file. Editing .vmcx or .vmrs files isn't supported.
IMPORTANT
After you update the version, you can't move the virtual machine to a server that runs Windows Server 2012 R2.
You can't downgrade the configuration to a previous version.
The Update-VMVersion cmdlet is blocked on a Hyper-V Cluster when the cluster functional level is Windows Server 2012
R2.
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
Hyper-V has specific hardware requirements, and some Hyper-V features have additional requirements. Use the
details in this article to decide what requirements your system must meet so you can use Hyper-V the way you plan
to. Then, review the Windows Server catalog. Keep in mind that requirements for Hyper-V exceed the general
minimum requirements for Windows Server 2016 because a virtualization environment requires more computing
resources.
If you're already using Hyper-V, it's likely that you can use your existing hardware. The general hardware
requirements haven't changed significantly from Windows Server 2012 R2. But you will need newer hardware to
use shielded virtual machines or discrete device assignment. Those features rely on specific hardware support, as
described below. Other than that, the main difference in hardware is that second-level address translation (SLAT) is
now required instead of recommended.
For details about maximum supported configurations for Hyper-V, such as the number of running virtual
machines, see Plan for Hyper-V scalability in Windows Server 2016. The list of operating systems you can run in
your virtual machines is covered in Supported Windows guest operating systems for Hyper-V on Windows Server.
General requirements
Regardless of the Hyper-V features you want to use, you'll need:
A 64-bit processor with second-level address translation (SLAT). To install the Hyper-V virtualization
components such as the Windows hypervisor, the processor must have SLAT. However, SLAT isn't required to
install Hyper-V management tools like Virtual Machine Connection (VMConnect), Hyper-V Manager, and
the Hyper-V cmdlets for Windows PowerShell. See "How to check for Hyper-V requirements," below, to find
out if your processor has SLAT.
VM Monitor Mode extensions
Enough memory - plan for at least 4 GB of RAM. More memory is better. You'll need enough memory for
the host and all virtual machines that you want to run at the same time.
Virtualization support turned on in the BIOS or UEFI:
Hardware-assisted virtualization. This is available in processors that include a virtualization option -
specifically processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V)
technology.
Hardware-enforced Data Execution Prevention (DEP) must be available and enabled. For Intel
systems, this is the XD bit (execute disable bit). For AMD systems, this is the NX bit (no execute bit).
Run Systeminfo.exe from Windows PowerShell or a command prompt, and then
scroll to the Hyper-V Requirements section to review the report.
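Alternatively, a minimal PowerShell sketch that surfaces the same checks (these property names come from Get-ComputerInfo on Windows Server 2016 and later):
# Show whether a hypervisor is already running and whether SLAT, VM monitor mode extensions,
# firmware virtualization support, and hardware DEP are available.
Get-ComputerInfo | Select-Object HyperVisorPresent,
    HyperVRequirementSecondLevelAddressTranslation,
    HyperVRequirementVMMonitorModeExtensions,
    HyperVRequirementVirtualizationFirmwareEnabled,
    HyperVRequirementDataExecutionPreventionAvailable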
Hyper-V supports several versions of Windows Server, Windows, and Linux distributions to run in virtual
machines, as guest operating systems. This article covers supported Windows Server and Windows guest
operating systems. For Linux and FreeBSD distributions, see Supported Linux and FreeBSD virtual machines for
Hyper-V on Windows.
Some operating systems have the integration services built-in. Others require that you install or upgrade
integration services as a separate step after you set up the operating system in the virtual machine. For more
information, see the sections below and Integration Services.
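As a hedged example (the VM name is illustrative), integration services can be inspected and enabled per virtual machine from the host with PowerShell:
# List the integration services offered to a VM and whether each is enabled.
Get-VMIntegrationService -VMName "Guest01"
# Enable guest file copy, which is exposed as the "Guest Service Interface" service.
Enable-VMIntegrationService -VMName "Guest01" -Name "Guest Service Interface"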
Windows Server 2008 with Service Pack 2 (SP2) - 8 virtual processors - Install all critical Windows updates after you
set up the guest operating system - Datacenter, Enterprise, Standard and Web editions (32-bit and 64-bit).
Supported Windows client guest operating systems
Following are the versions of Windows client that are supported as guest operating systems for Hyper-V in
Windows Server 2016 and Windows Server 2019.
Windows 10 - 32 virtual processors - Built-in integration services.
Windows 7 with Service Pack 1 (SP1) - 4 virtual processors - Upgrade the integration services after you set up the
guest operating system - Ultimate, Enterprise, and Professional editions (32-bit and 64-bit).
Windows Server 2012 R2 and Windows 8.1 - Supported Windows Guest Operating Systems for Hyper-V in Windows
Server 2012 R2 and Windows 8.1; Linux and FreeBSD Virtual Machines on Hyper-V
Windows Server 2012 and Windows 8 - Supported Windows Guest Operating Systems for Hyper-V in Windows
Server 2012 and Windows 8
Windows Server 2008 and Windows Server 2008 R2 - About Virtual Machines and Guest Operating Systems
See also
Linux and FreeBSD Virtual Machines on Hyper-V
Supported Guest Operating Systems for Client Hyper-V in Windows 10
Supported Linux and FreeBSD virtual machines for
Hyper-V on Windows
Applies To: Windows Server 2019, Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2,
Hyper-V Server 2012 R2, Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows
10, Windows 8.1, Windows 8, Windows 7.1, Windows 7
Hyper-V supports both emulated and Hyper-V-specific devices for Linux and FreeBSD virtual machines. When
running with emulated devices, no additional software is required to be installed. However, emulated devices do not
provide high performance and cannot leverage the rich virtual machine management infrastructure that the Hyper-V
technology offers. To make full use of the benefits that Hyper-V provides, it is best to use Hyper-V-specific
devices for Linux and FreeBSD. The collection of drivers that are required to run Hyper-V-specific devices
is known as Linux Integration Services (LIS) or FreeBSD Integration Services (BIS).
LIS has been added to the Linux kernel and is updated for new releases. But Linux distributions based on older
kernels may not have the latest enhancements or fixes. Microsoft provides a download containing installable LIS
drivers for some Linux installations based on these older kernels. Because distribution vendors include versions of
Linux Integration Services, it is best to install the latest downloadable version of LIS, if applicable, for your
installation.
For other Linux distributions, LIS changes are regularly integrated into the operating system kernel and applications,
so no separate download or installation is required.
For older FreeBSD releases (before 10.0), Microsoft provides ports that contain the installable BIS drivers and
corresponding daemons for FreeBSD virtual machines. For newer FreeBSD releases, BIS is built in to the FreeBSD
operating system, and no separate download or installation is required except for a KVP ports download that is
needed for FreeBSD 10.0.
TIP
Download Windows Server 2019 from the Evaluation Center.
The goal of this content is to provide information that helps facilitate your Linux or FreeBSD deployment on Hyper-
V. Specific details include:
Linux distributions or FreeBSD releases that require the download and installation of LIS or BIS drivers.
Linux distributions or FreeBSD releases that contain built-in LIS or BIS drivers.
Feature distribution maps that indicate the features in major Linux distributions or FreeBSD releases.
Known issues and workarounds for each distribution or release.
Feature description for each LIS or BIS feature.
Want to make a suggestion about features and functionality? Is there something we could do better? You can
use the Windows Server User Voice site to suggest new features and capabilities for Linux and FreeBSD Virtual
Machines on Hyper-V, and to see what other people are saying.
In this section
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Best practices for running FreeBSD on Hyper-V
Supported CentOS and Red Hat Enterprise Linux
virtual machines on Hyper-V
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following feature distribution maps indicate the features that are present in built-in and downloadable versions
of Linux Integration Services. The known issues and workarounds for each distribution are listed after the tables.
The built-in Red Hat Enterprise Linux Integration Services drivers for Hyper-V (available since Red Hat Enterprise
Linux 6.4) are sufficient for Red Hat Enterprise Linux guests to run using the high performance synthetic devices on
Hyper-V hosts. These built-in drivers are certified by Red Hat for this use. Certified configurations can be viewed on
this Red Hat web page: Red Hat Certification Catalog. It isn't necessary to download and install Linux Integration
Services packages from the Microsoft Download Center, and doing so may limit your Red Hat support as described
in Red Hat Knowledgebase article 1067: Red Hat Knowledgebase 1067.
Because of potential conflicts between the built-in LIS support and the downloadable LIS support when you
upgrade the kernel, disable automatic updates, uninstall the LIS downloadable packages, update the kernel, reboot,
and then install the latest LIS release, and reboot again.
NOTE
Official Red Hat Enterprise Linux certification information is available through the Red Hat Customer Portal.
In this section:
RHEL/CentOS 8.x Series
RHEL/CentOS 7.x Series
RHEL/CentOS 6.x Series
RHEL/CentOS 5.x Series
Notes
Table legend
Built in - LIS are included as part of this Linux distribution. The kernel module version numbers for the built
in LIS (as shown by lsmod, for example) are different from the version number on the Microsoft-provided
LIS download package. A mismatch does not indicate that the built in LIS is out of date.
✔ - Feature available
[Table: feature distribution maps for RHEL/CentOS on Hyper-V. Separate maps cover the 7.x series (7.0-7.6, with LIS
4.3 or built-in LIS), the 6.x series (6.0-6.10, with LIS 4.3 or built-in LIS), and the 5.x series (5.2-5.11). For each feature
the maps list the applicable Windows Server versions (2019, 2016, 2012 R2, 2012, 2008 R2) and availability per
distribution version, grouped into Core, Windows Server 2016 Accurate Time, Networking (jumbo frames, VLAN
tagging and trunking, Live Migration, static IP injection, vRSS, TCP segmentation and checksum offloads, SR-IOV),
Storage (VHDX resize, virtual Fibre Channel, live virtual machine backup, TRIM support, SCSI WWN), Memory (PAE
kernel support, configuration of MMIO gap, Dynamic Memory hot-add and ballooning, runtime memory resize),
Video (Hyper-V-specific video device), Miscellaneous (key-value pair, non-maskable interrupt, file copy from host to
guest, lsvmbus command, Hyper-V sockets, PCI passthrough/DDA), and Generation 2 virtual machine (boot using
UEFI, secure boot) features, with note references as listed below.]
Notes
1. For this RHEL/CentOS release, VLAN tagging works but VLAN trunking does not.
2. Static IP injection may not work if Network Manager has been configured for a given synthetic network
adapter on the virtual machine. For smooth functioning of static IP injection, make sure that
Network Manager is either turned off completely or has been turned off for the specific network adapter
through its ifcfg-ethX file.
3. On Windows Server 2012 R2 while using virtual fibre channel devices, make sure that logical unit number 0
(LUN 0) has been populated. If LUN 0 has not been populated, a Linux virtual machine might not be able to
mount fibre channel devices natively.
4. For built-in LIS, the "hyperv-daemons" package must be installed for this functionality.
5. If there are open file handles during a live virtual machine backup operation, then in some corner cases, the
backed-up VHDs might have to undergo a file system consistency check (fsck) on restore. Live backup
operations can fail silently if the virtual machine has an attached iSCSI device or direct-attached storage
(also known as a pass-through disk).
6. While the Linux Integration Services download is preferred, live backup support for RHEL/CentOS 5.9 -
5.11/6.4/6.5 is also available through Hyper-V Backup Essentials for Linux.
7. Dynamic memory support is only available on 64-bit virtual machines.
8. Hot-Add support is not enabled by default in this distribution. To enable Hot-Add support you need to add a
udev rule under /etc/udev/rules.d/ as follows:
a. Create a file /etc/udev/rules.d/100-balloon.rules. You may use any other desired name for the file.
b. Add the following content to the file: SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"
The Linux Integration Services download can be applied to existing Generation 2 VMs but does not impart
Generation 2 capability.
15. In Red Hat Enterprise Linux or CentOS 5.2, 5.3, and 5.4 the filesystem freeze functionality is not available, so
Live Virtual Machine Backup is also not available.
See Also
Set-VMFirmware
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Red Hat Hardware Certification
Supported Debian virtual machines on Hyper-V
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following feature distribution map indicates the features that are present in each version. The known issues and
workarounds for each distribution are listed after the table.
Table legend
Built in - LIS are included as part of this Linux distribution. The Microsoft-provided LIS download package
doesn't work for this distribution so do not install it. The kernel module version numbers for the built in LIS
(as shown by lsmod, for example) are different from the version number on the Microsoft-provided LIS
download package. A mismatch does not indicate that the built in LIS is out of date.
✔ - Feature available
[Table: feature distribution map for Debian on Hyper-V, covering Debian 10 (buster), 9.0-9.6 (stretch), 8.0-8.11
(jessie), and 7.0-7.11 (wheezy), grouped into Networking, Storage, Memory, Video, Miscellaneous, and Generation 2
virtual machine features with the applicable Windows Server versions. Live virtual machine backup (Windows Server
2019, 2016, 2012 R2) is available on all four releases; see Notes 4 and 5 (Note 4 only for wheezy).]
Notes
1. Creating file systems on VHDs larger than 2TB is not supported.
2. On Windows Server 2008 R2, SCSI disks create 8 different entries in /dev/sd*.
3. On Windows Server 2012 R2, a VM with 8 cores or more will have all interrupts routed to a single vCPU.
4. Starting with Debian 8.3 the manually-installed Debian package "hyperv-daemons" contains the key-value
pair, fcopy, and VSS daemons. On Debian 7.x and 8.0-8.2 the hyperv-daemons package must come from
Debian backports.
5. Live virtual machine backup will not work with ext2 file systems. The default layout created by the Debian
installer includes ext2 file systems, so you must customize the layout to not create this file system type.
6. While Debian 7.x is out of support and uses an older kernel, the kernel included in Debian backports for
Debian 7.x has improved Hyper-V capabilities.
7. On Windows Server 2012 R2 Generation 2 virtual machines have secure boot enabled by default and some
Linux virtual machines will not boot unless the secure boot option is disabled. You can disable secure boot in
the Firmware section of the settings for the virtual machine in Hyper-V Manager or you can disable it
using Powershell:
Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off
8. The latest upstream kernel capabilities are only available by using the kernel included in Debian backports.
See Also
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Applies To: Windows Server 2019, Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2,
Hyper-V Server 2012 R2, Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows
10, Windows 8.1, Windows 8, Windows 7.1, Windows 7
The following feature distribution map indicates the features that are present in each version. The known issues and
workarounds for each distribution are listed after the table.
In this section:
Red Hat Compatible Kernel Series
Unbreakable Enterprise Kernel Series
Notes
Table legend
Built in - LIS are included as part of this Linux distribution. The kernel module version numbers for the built
in LIS (as shown by lsmod, for example) are different from the version number on the Microsoft-provided
LIS download package. A mismatch doesn't indicate that the built in LIS is out of date.
✔ - Feature available
[Table: feature distribution maps for Oracle Linux on Hyper-V. Separate maps cover the Red Hat Compatible Kernel
series (RHCK 6.4-6.8 and 7.0-7.6, with LIS 4.1/4.2 or built-in LIS) and the Unbreakable Enterprise Kernel series (UEK
R3 QU1-QU3, UEK R4, and UEK R5). For each feature the maps list the applicable Windows Server versions (2019,
2016, 2012 R2, 2012, 2008 R2) and availability per kernel version, grouped into Core, Networking (jumbo frames,
VLAN tagging and trunking, Live Migration, static IP injection, vRSS, TCP segmentation and checksum offloads,
SR-IOV), Storage (VHDX resize, virtual Fibre Channel, live virtual machine backup, TRIM support, SCSI WWN),
Memory (configuration of MMIO gap, Dynamic Memory hot-add and ballooning, runtime memory resize), Video
(Hyper-V-specific video device), Miscellaneous (key-value pair, non-maskable interrupt, file copy from host to guest,
lsvmbus command, Hyper-V sockets, PCI passthrough/DDA), and Generation 2 virtual machine (boot using UEFI,
secure boot) features, with note references as listed below. In the UEK series, Key-Value Pair (see Notes 11 and 12)
and file copy from host to guest (see Note 11) are available across UEK R3 QU1-QU3, UEK R4, and UEK R5.]
Notes
1. For this Oracle Linux release, VLAN tagging works but VLAN trunking does not.
2. While using virtual fibre channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If
LUN 0 has not been populated, a Linux virtual machine might not be able to mount fibre channel devices
natively.
3. If there are open file handles during a live virtual machine backup operation, then in some corner cases, the
backed-up VHDs might have to undergo a file system consistency check (fsck) on restore.
4. Live backup operations can fail silently if the virtual machine has an attached iSCSI device or direct-attached
storage (also known as a pass-through disk).
5. Live backup support for Oracle Linux 6.4/6.5/UEKR3 QU2 and QU3 is available through Hyper-V Backup
Essentials for Linux.
6. Dynamic memory support is only available on 64-bit virtual machines.
7. Hot-Add support is not enabled by default in this distribution. To enable Hot-Add support you need to add a
udev rule under /etc/udev/rules.d/ as follows:
a. Create a file /etc/udev/rules.d/100-balloon.rules. You may use any other desired name for the file.
b. Add the following content to the file: SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"
The Linux Integration Services download can be applied to existing Generation 2 VMs but does not impart
Generation 2 capability.
14. Static IP injection may not work if Network Manager has been configured for a given synthetic network
adapter on the virtual machine. For smooth functioning of static IP injection, make sure that
Network Manager is either turned off completely or has been turned off for the specific network adapter
through its ifcfg-ethX file.
See Also
Set-VMFirmware
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Supported SUSE virtual machines on Hyper-V
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following is a feature distribution map that indicates the features in each version. The known issues and
workarounds for each distribution are listed after the table.
The built-in SUSE Linux Enterprise Server drivers for Hyper-V are certified by SUSE. An example configuration
can be viewed in this bulletin: SUSE YES Certification Bulletin.
Table legend
Built in - LIS are included as part of this Linux distribution. The Microsoft-provided LIS download package
does not work for this distribution, so do not install it. The kernel module version numbers for the built in LIS
(as shown by lsmod, for example) are different from the version number on the Microsoft-provided LIS
download package. A mismatch doesn't indicate that the built in LIS is out of date.
✔ - Feature available
[Table: feature distribution map for SUSE on Hyper-V, covering SLES 15, SLES 12 SP1-SP4, SLES 11 SP4, and SLES 11
SP3, grouped into Networking, Storage, Memory, Video, Miscellaneous, and Generation 2 virtual machine features
with the applicable Windows Server versions. Live virtual machine backup (Windows Server 2019, 2016, 2012 R2) is
available on all listed versions; see Notes 2, 3, and 8.]
Notes
1. Static IP injection may not work if Network Manager has been configured for a given Hyper-V-specific
network adapter on the virtual machine. To ensure smooth functioning of static IP injection please ensure
that Network Manager is turned off completely or has been turned off for a specific network adapter
through its ifcfg-ethX file.
2. If there are open file handles during a live virtual machine backup operation, then in some corner cases, the
backed-up VHDs might have to undergo a file system consistency check (fsck) on restore.
3. Live backup operations can fail silently if the virtual machine has an attached iSCSI device or direct-attached
storage (also known as a pass-through disk).
4. Dynamic memory operations can fail if the guest operating system is running too low on memory. The
following are some best practices:
Startup memory and minimal memory should be equal to or greater than the amount of memory
that the distribution vendor recommends.
Applications that tend to consume the entire available memory on a system are limited to consuming
up to 80 percent of available RAM.
5. Dynamic memory support is only available on 64-bit virtual machines.
6. If you are using Dynamic Memory on Windows Server 2016 or Windows Server 2012 operating systems,
specify Startup memory, Minimum memory, and Maximum memory parameters in multiples of 128
megabytes (MB). Failure to do so can lead to Hot-Add failures, and you may not see any memory increase in
a guest operating system.
7. In Windows Server 2016 or Windows Server 2012 R2, the key/value pair infrastructure might not function
correctly without a Linux software update. Contact your distribution vendor to obtain the software update in
case you see problems with this feature.
8. VSS backup will fail if a single partition is mounted multiple times.
9. On Windows Server 2012 R2, Generation 2 virtual machines have secure boot enabled by default and
Generation 2 Linux virtual machines will not boot unless the secure boot option is disabled. You can disable
secure boot in the Firmware section of the settings for the virtual machine in Hyper-V Manager or you can
disable it using Powershell:
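Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off

Supported Ubuntu virtual machines on Hyper-V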
Applies To: Windows Server 2019, 2016, Hyper-V Server 2019, 2016, Windows Server 2012 R2, Hyper-V
Server 2012 R2, Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10,
Windows 8.1, Windows 8, Windows 7.1, Windows 7
Beginning with Ubuntu 12.04, loading the "linux-virtual" package installs a kernel suitable for use as a guest virtual
machine. This package always depends on the latest minimal generic kernel image and headers used for virtual
machines. While its use is optional, the linux-virtual kernel will load fewer drivers and may boot faster and have less
memory overhead than a generic image.
To get full use of Hyper-V, install the appropriate linux-tools and linux-cloud-tools packages to install tools and
daemons for use with virtual machines. When using the linux-virtual kernel, load linux-tools-virtual and linux-
cloud-tools-virtual.
The following feature distribution map indicates the features in each version. The known issues and workarounds
for each distribution are listed after the table.
Table legend
Built in - LIS are included as part of this Linux distribution. The Microsoft-provided LIS download package
doesn't work for this distribution, so don't install it. The kernel module version numbers for the built in LIS
(as shown by lsmod, for example) are different from the version number on the Microsoft-provided LIS
download package. A mismatch doesn't indicate that the built in LIS is out of date.
✔ - Feature available
[Table: feature distribution map for Ubuntu on Hyper-V, covering Ubuntu 19.04, 18.10, 18.04 LTS, 16.04 LTS, 14.04
LTS, and 12.04 LTS, grouped into Networking, Storage, Memory, Video, Miscellaneous, and Generation 2 virtual
machine features with the applicable Windows Server versions. Boot using UEFI (Windows Server 2019, 2016,
2012 R2) is available; see Notes 11 and 12.]
Notes
1. Static IP injection may not work if Network Manager has been configured for a given Hyper-V-specific
network adapter on the virtual machine. To ensure smooth functioning of static IP injection please ensure
that Network Manager is turned off completely or has been turned off for a specific network adapter
through its ifcfg-ethX file.
2. While using virtual fiber channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If
LUN 0 has not been populated, a Linux virtual machine might not be able to mount fiber channel devices
natively.
3. If there are open file handles during a live virtual machine backup operation, then in some corner cases, the
backed-up VHDs might have to undergo a file system consistency check (fsck) on restore.
4. Live backup operations can fail silently if the virtual machine has an attached iSCSI device or direct-attached
storage (also known as a pass-through disk).
5. On long term support (LTS) releases, use the latest virtual Hardware Enablement (HWE) kernel for up to date
Linux Integration Services.
To install the Azure-tuned kernel on 14.04, 16.04 and 18.04, run the following commands as root (or sudo):
# apt-get update
# apt-get install linux-azure
12.04 does not have a separate virtual kernel. To install the generic HWE kernel on 12.04, run the following
commands as root (or sudo):
# apt-get update
# apt-get install linux-generic-lts-trusty
On Ubuntu 12.04 the following Hyper-V daemons are in a separately installed package:
VSS Snapshot daemon - This daemon is required to create live Linux virtual machine backups.
KVP daemon - This daemon allows setting and querying intrinsic and extrinsic key value pairs.
fcopy daemon - This daemon implements a file copying service between the host and guest.
To install the KVP daemon on 12.04, run the following commands as root (or sudo).
# apt-get install hv-kvp-daemon-init linux-tools-lts-trusty linux-cloud-tools-generic-lts-trusty
Whenever the kernel is updated, the virtual machine must be rebooted to use it.
6. On Ubuntu 18.10 or 19.04, use the latest virtual kernel to have up-to-date Hyper-V capabilities.
To install the virtual kernel on 18.10 or 19.04, run the following commands as root (or sudo):
# apt-get update
# apt-get install linux-azure
Whenever the kernel is updated, the virtual machine must be rebooted to use it.
7. Dynamic memory support is only available on 64-bit virtual machines.
8. Dynamic Memory operations can fail if the guest operating system is running too low on memory. The
following are some best practices:
Startup memory and minimal memory should be equal to or greater than the amount of memory
that the distribution vendor recommends.
Applications that tend to consume the entire available memory on a system are limited to consuming
up to 80 percent of available RAM.
9. If you are using Dynamic Memory on Windows Server 2019, Windows Server 2016 or Windows Server
2012/2012 R2 operating systems, specify Startup memory, Minimum memory, and Maximum
memory parameters in multiples of 128 megabytes (MB). Failure to do so can lead to Hot-Add failures, and
you might not see any memory increase on a guest operating system.
10. In Windows Server 2019, Windows Server 2016 or Windows Server 2012 R2, the key/value pair
infrastructure might not function correctly without a Linux software update. Contact your distribution vendor
to obtain the software update in case you see problems with this feature.
11. On Windows Server 2012 R2, Generation 2 virtual machines have secure boot enabled by default and some
Linux virtual machines will not boot unless the secure boot option is disabled. You can disable secure boot in
the Firmware section of the settings for the virtual machine in Hyper-V Manager or you can disable it
using Powershell:
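Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off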
12. Before attempting to copy the VHD of an existing Generation 2 VHD virtual machine to create new
Generation 2 virtual machines, follow these steps:
a. Log in to the existing Generation 2 virtual machine.
b. Change directory to the boot EFI directory:
# cd /boot/efi/EFI
See Also
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Set-VMFirmware
Ubuntu 14.04 in a Generation 2 VM - Ben Armstrong's Virtualization Blog
Supported FreeBSD virtual machines on Hyper-V
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
The following feature distribution map indicates the features in each version. The known issues and workarounds
for each distribution are listed after the table.
Table legend
Built in - BIS (FreeBSD Integration Service) are included as part of this FreeBSD release.
✔ - Feature available
[Table: feature distribution map for FreeBSD on Hyper-V, covering FreeBSD 11.1/11.2, 11.0, 10.3, 10.2, 10.0-10.1, and
9.1-9.3/8.4, grouped into Networking, Memory, Video, Miscellaneous, and Generation 2 virtual machine features
with the applicable Windows Server versions.]
Additional Notes: The feature matrix of 10 stable and 11 stable is the same as the FreeBSD 11.1 release. In
addition, FreeBSD 10.2 and previous versions (10.1, 10.0, 9.x, 8.x) are end of life. Please refer here for an up-
to-date list of supported releases and the latest security advisories.
See Also
Feature Descriptions for Linux and FreeBSD virtual machines on Hyper-V
Best practices for running FreeBSD on Hyper-V
Feature Descriptions for Linux and FreeBSD virtual
machines on Hyper-V
Applies To: Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2,
Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows 10, Windows 8.1, Windows
8, Windows 7.1, Windows 7
This article describes features available in components such as core, networking, storage, and memory when using
Linux and FreeBSD on a virtual machine.
Core
FEATURE DESCRIPTION
Integrated shutdown With this feature, an administrator can shut down virtual
machines from the Hyper-V Manager. For more information,
see Operating system shutdown.
Time synchronization This feature ensures that the maintained time inside a virtual
machine is kept synchronized with the maintained time on the
host. For more information, see Time synchronization.
Windows Server 2016 Accurate Time This feature allows the guest to use the Windows Server 2016
Accurate Time capability, which improves time synchronization
with the host to 1ms accuracy. For more information, see
Windows Server 2016 Accurate Time
Multiprocessing support With this feature, a virtual machine can use multiple
processors on the host by configuring multiple virtual CPUs.
Heartbeat With this feature, the host can track the state of the virtual
machine. For more information, see Heartbeat.
Integrated mouse support With this feature, you can use a mouse on a virtual machine's
desktop and also use the mouse seamlessly between the
Windows Server desktop and the Hyper-V console for the
virtual machine.
Hyper-V specific Storage device This feature grants high-performance access to storage
devices that are attached to a virtual machine.
Hyper-V specific Network device This feature grants high-performance access to network
adapters that are attached to a virtual machine.
Networking
FEATURE DESCRIPTION
Jumbo frames With this feature, an administrator can increase the size of
network frames beyond 1500 bytes, which leads to a
significant increase in network performance.
VLAN tagging and trunking This feature allows you to configure virtual LAN (VLAN) traffic
for virtual machines.
Live Migration With this feature, you can migrate a virtual machine from one
host to another host. For more information, see Virtual
Machine Live Migration Overview.
Static IP Injection With this feature, you can replicate the static IP address of a
virtual machine after it has been failed over to its replica on a
different host. Such IP replication ensures that network
workloads continue to work seamlessly after a failover event.
vRSS (Virtual Receive Side Scaling) Spreads the load from a virtual network adapter across
multiple virtual processors in a virtual machine. For more
information, see Virtual Receive-side Scaling in Windows Server
2012 R2.
TCP Segmentation and Checksum Offloads Transfers segmentation and checksum work from the guest
CPU to the host virtual switch or network adapter during
network data transfers.
SR-IOV Single Root I/O devices use DDA to allow guests access to
portions of specific NIC cards allowing for reduced latency and
increased throughput. SR-IOV requires up to date physical
function (PF) drivers on the host and virtual function (VF)
drivers on the guest.
Storage
FEATURE DESCRIPTION
VHDX resize With this feature, an administrator can resize a fixed-size .vhdx
file that is attached to a virtual machine. For more information,
see Online Virtual Hard Disk Resizing Overview.
Virtual Fibre Channel With this feature, virtual machines can recognize a Fibre
Channel device and mount it natively. For more information,
see Hyper-V Virtual Fibre Channel Overview.
Live virtual machine backup This feature facilitates zero down time backup of live virtual
machines.
Note that the backup operation does not succeed if the virtual
machine has virtual hard disks (VHDs) that are hosted on
remote storage such as a Server Message Block (SMB) share or
an iSCSI volume. Additionally, ensure that the backup target
does not reside on the same volume as the volume that you
back up.
TRIM support TRIM hints notify the drive that certain sectors that were
previously allocated are no longer required by the app and can
be purged. This process is typically used when an app makes
large space allocations via a file and then self-manages the
allocations to the file, for example, to virtual hard disk files.
SCSI WWN The storvsc driver extracts World Wide Name (WWN)
information from the port and node of devices attached to the
virtual machine and creates the appropriate sysfs files.
Memory
FEATURE DESCRIPTION
PAE Kernel Support Physical Address Extension (PAE) technology allows a 32-bit
kernel to access a physical address space that is larger than
4GB. Older Linux distributions such as RHEL 5.x used to ship a
separate kernel that was PAE enabled. Newer distributions
such as RHEL 6.x have pre-built PAE support.
Configuration of MMIO gap With this feature, appliance manufacturers can configure the
location of the Memory Mapped I/O (MMIO) gap. The MMIO
gap is typically used to divide the available physical memory
between an appliance's Just Enough Operating Systems (JeOS)
and the actual software infrastructure that powers the
appliance.
Dynamic Memory - Hot-Add The host can dynamically increase or decrease the amount of
memory available to a virtual machine while it is in operation.
Before provisioning, the administrator enables Dynamic
Memory in the Virtual Machine Settings panel and specifies the
Startup Memory, Minimum Memory, and Maximum Memory
for the virtual machine. When the virtual machine is in
operation, Dynamic Memory cannot be disabled and only the
Minimum and Maximum settings can be changed. (It is a best
practice to specify these memory sizes as multiples of 128MB.)
Dynamic Memory - Ballooning The host can dynamically increase or decrease the amount of
memory available to a virtual machine while it is in operation.
Before provisioning, the administrator enables Dynamic
Memory in the Virtual Machine Settings panel and specifies the
Startup Memory, Minimum Memory, and Maximum Memory
for the virtual machine. When the virtual machine is in
operation, Dynamic Memory cannot be disabled and only the
Minimum and Maximum settings can be changed. (It is a best
practice to specify these memory sizes as multiples of 128MB.)
Runtime Memory Resize An administrator can set the amount of memory available to a
virtual machine while it is in operation, either increasing
memory ("Hot Add") or decreasing it ("Hot Remove"). Memory
is returned to Hyper-V via the balloon driver (see "Dynamic
Memory - Ballooning"). The balloon driver maintains a
minimum amount of free memory after ballooning, called the
"floor", so assigned memory cannot be reduced below the
current demand plus this floor amount. The Memory tab of
Hyper-V manager will display the amount of memory assigned
to the virtual machine, but memory statistics within the virtual
machine will show the highest amount of memory allocated.
(It is a best practice to specify memory values as multiples of
128MB.)
Video
FEATURE DESCRIPTION
Hyper-V-specific video device This feature provides high-performance graphics and superior
resolution for virtual machines. This device does not provide
Enhanced Session Mode or RemoteFX capabilities.
Miscellaneous
FEATURE DESCRIPTION
KVP (Key-Value pair) exchange This feature provides a key/value pair (KVP) exchange service
for virtual machines. Typically administrators use the KVP
mechanism to perform read and write custom data operations
on a virtual machine. For more information, see Data
Exchange: Using key-value pairs to share information between
the host and guest on Hyper-V.
File copy from host to guest This feature allows files to be copied from the host physical
computer to the guest virtual machines without using the
network adaptor. For more information, see Guest services.
lsvmbus command This command gets information about devices on the Hyper-V
virtual machine bus (VMBus), similar to information from
commands like lspci.
PCI Passthrough/DDA With Windows Server 2016, administrators can pass through
PCI Express devices via the Discrete Device Assignment
mechanism. Common devices are network cards, graphics
cards, and special storage devices. The virtual machine
requires the appropriate driver to use the exposed hardware,
and the hardware must be assigned to the virtual machine
before it can be used.
Boot using UEFI This feature allows virtual machines to boot using Unified
Extensible Firmware Interface (UEFI).
Secure boot This feature allows virtual machines to use UEFI based secure
boot mode. When a virtual machine is booted in secure mode,
various operating system components are verified using
signatures present in the UEFI data store.
See Also
Supported CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V
Supported Debian virtual machines on Hyper-V
Supported Oracle Linux virtual machines on Hyper-V
Supported SUSE virtual machines on Hyper-V
Supported Ubuntu virtual machines on Hyper-V
Supported FreeBSD virtual machines on Hyper-V
Best Practices for running Linux on Hyper-V
Best practices for running FreeBSD on Hyper-V
Best Practices for running Linux on Hyper-V
6/20/2019 • 4 minutes to read • Edit Online
Applies To: Windows Server 2019, Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2,
Hyper-V Server 2012 R2, Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows
10, Windows 8.1, Windows 8, Windows 7.1, Windows 7
This topic contains a list of recommendations for running Linux virtual machines on Hyper-V.
The ext4 format is preferred to ext3 because ext4 is more space efficient than ext3 when used with dynamic
VHDX files.
When creating the file system, specify the number of groups to be 4096, for example:
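A representative command follows; the device name /dev/sdX1 is a placeholder for your data disk partition:
# mkfs.ext4 -G 4096 /dev/sdX1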
Specifying a kickstart file to the pre-install kernel would also avoid the need for keyboard and mouse input during
installation.
NUMA
Linux kernel versions earlier than 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue
primarily impacts older distributions using the upstream Red Hat 2.6.32 kernel, and was fixed in Red Hat Enterprise
Linux (RHEL) 6.6 (kernel-2.6.32-504). Systems running custom kernels older than 2.6.37, or RHEL-based kernels
older than 2.6.32-504 must set the boot parameter numa=off on the kernel command line in grub.conf. For more
information, see Red Hat KB 436883.
See also
Supported Linux and FreeBSD virtual machines for Hyper-V on Windows
Best practices for running FreeBSD on Hyper-V
Deploy a Hyper-V Cluster
Create Linux Images for Azure
Optimize your Linux VM on Azure
Best practices for running FreeBSD on Hyper-V
7/26/2019 • 2 minutes to read • Edit Online
Applies To: Windows Server 2019, Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2,
Hyper-V Server 2012 R2, Windows Server 2012, Hyper-V Server 2012, Windows Server 2008 R2, Windows
10, Windows 8.1, Windows 8, Windows 7.1, Windows 7
This topic contains a list of recommendations for running FreeBSD as a guest operating system on a Hyper-V
virtual machine.
1. Reboot the system into single user mode. This can be accomplished by selecting boot menu option 2 for
FreeBSD 10.3+ (option 4 for FreeBSD 8.x), or performing a 'boot -s' from the boot prompt.
2. In Single user mode, create GEOM labels for each of the IDE disk partitions listed in your fstab (both root
and swap). Below is an example of FreeBSD 10.3.
# cat /etc/fstab
# Device Mountpoint FStype Options Dump Pass#
/dev/da0p2 / ufs rw 1 1
/dev/da0p3 none swap sw 0 0
Additional information on GEOM labels can be found at: Labeling Disk Devices.
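The labels described in step 2 can be created with glabel before leaving single user mode; a sketch, assuming the device names from the fstab above (adjust to match your partitions):
# glabel label rootfs /dev/da0p2
# glabel label swap /dev/da0p3
# exit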
3. The system will then continue with multi-user boot. After the boot completes, edit /etc/fstab and replace the
conventional device names with their respective labels. The final /etc/fstab will look like this:
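A sketch of the resulting file, assuming the rootfs and swap labels created above:
# cat /etc/fstab
# Device Mountpoint FStype Options Dump Pass#
/dev/label/rootfs / ufs rw 1 1
/dev/label/swap none swap sw 0 0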
4. The system can now be rebooted. If everything went well, it will come up normally and mount will show:
# mount
/dev/label/rootfs on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
# sysctl net.link.ether.inet.max_age=60
See also
Supported FreeBSD virtual machines on Hyper-V
Hyper-V feature compatibility by generation and
guest
6/7/2019 • 2 minutes to read • Edit Online
The tables in this article show you the generations and operating systems that are compatible with some of the
Hyper-V features, grouped by categories. In general, you'll get the best availability of features with a generation 2
virtual machine that runs the newest operating system.
Keep in mind that some features rely on hardware or other infrastructure. For hardware details, see System
requirements for Hyper-V on Windows Server 2016. In some cases, a feature can be used with any supported
guest operating system. For details on which operating systems are supported, see:
Supported Linux and FreeBSD virtual machines
Supported Windows guest operating systems
Compute
FEATURE GENERATION GUEST OPERATING SYSTEM
Mobility
FEATURE GENERATION GUEST OPERATING SYSTEM
Networking
FEATURE GENERATION GUEST OPERATING SYSTEM
Single root input/output virtualization (SR-IOV)    1 and 2    64-bit Windows guests, starting with Windows Server 2012 and Windows 8.
Discrete device assignment (DDA)    1 and 2    Windows Server 2016, Windows Server 2012 R2 only with Update 3133690 installed, Windows 10. Note: For details on Update 3133690, see this support article.
Storage
FEATURE GENERATION GUEST OPERATING SYSTEM
Shared virtual hard disks (VHDX only)    1 and 2    Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
Use the following resources to set up and try out Hyper-V on the Server Core or GUI installation option of
Windows Server 2019 or Windows Server 2016. But before you install anything, check the System Requirements
for Windows Server and the System Requirements for Hyper-V.
Download and install Windows Server
Install the Hyper-V role on Windows Server
Create a virtual switch for Hyper-V virtual machines
Create a virtual machine in Hyper-V
Install the Hyper-V role on Windows Server
6/7/2019 • 2 minutes to read • Edit Online
To create and run virtual machines, install the Hyper-V role on Windows Server by using Server Manager or the
Install-WindowsFeature cmdlet in Windows PowerShell. For Windows 10, see Install Hyper-V on Windows 10.
To learn more about Hyper-V, see the Hyper-V Technology Overview. To try out Windows Server 2019, you can
download and install an evaluation copy. See the Evaluation Center.
Before you install Windows Server or add the Hyper-V role, make sure that:
Your computer hardware is compatible. For details, see System Requirements for Windows Server and System
requirements for Hyper-V on Windows Server.
You don't plan to use third-party virtualization apps that rely on the same processor features that Hyper-V
requires. Examples include VMWare Workstation and VirtualBox. You can install Hyper-V without uninstalling
these other apps. But, if you try to use them to manage virtual machines when the Hyper-V hypervisor is
running, the virtual machines might not start or might run unreliably. For details and instructions for turning off
the Hyper-V hypervisor if you need to use one of these apps, see Virtualization applications do not work
together with Hyper-V, Device Guard, and Credential Guard.
If you want to install only the management tools, such as Hyper-V Manager, see Remotely manage Hyper-V hosts
with Hyper-V Manager.
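The installation command that the note below refers to is not shown above; a representative form, where <computer_name> is a placeholder for the remote server name, is:
Install-WindowsFeature -Name Hyper-V -ComputerName <computer_name> -IncludeManagementTools -Restart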
If you're connected locally to the server, run the command without -ComputerName <computer_name> .
4. After the server restarts, you can see that the Hyper-V role is installed and see what other roles and features
are installed by running the following command:
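For example (the -ComputerName value is a placeholder for the server name):
Get-WindowsFeature -ComputerName <computer_name>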
If you're connected locally to the server, run the command without -ComputerName <computer_name> .
NOTE
If you install this role on a server that runs the Server Core installation option of Windows Server 2016 and use the
parameter -IncludeManagementTools , only the Hyper-V Module for Windows PowerShell is installed. You can use the GUI
management tool, Hyper-V Manager, on another computer to remotely manage a Hyper-V host that runs on a Server Core
installation. For instructions on connecting remotely, see Remotely manage Hyper-V hosts with Hyper-V Manager.
See also
Install-WindowsFeature
Create a virtual switch for Hyper-V virtual machines
5/24/2019 • 3 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019,
Microsoft Hyper-V Server 2019
A virtual switch allows virtual machines created on Hyper-V hosts to communicate with other computers. You can
create a virtual switch when you first install the Hyper-V role on Windows Server. To create additional virtual
switches, use Hyper-V Manager or Windows PowerShell. To learn more about virtual switches, see Hyper-V Virtual
Switch.
Virtual machine networking can be a complex subject. And there are several new virtual switch features that you
may want to use like Switch Embedded Teaming (SET). But basic networking is fairly easy to do. This topic covers
just enough so that you can create networked virtual machines in Hyper-V. To learn more about how you can set
up your networking infrastructure, review the Networking documentation.
Allow management operating system to share this network adapter    Select this option if you want to allow the Hyper-V host to share the use of the virtual switch and NIC or NIC team with the virtual machine. With this enabled, the host can use any of the settings that you configure for the virtual switch, like Quality of Service (QoS) settings, security settings, or other features of the Hyper-V virtual switch.
Enable single-root I/O virtualization (SR-IOV)    Select this option only if you want to allow virtual machine traffic to bypass the virtual machine switch and go directly to the physical NIC. For more information, see Single-Root I/O Virtualization in the Poster Companion Reference: Hyper-V Networking.
7. If you want to isolate network traffic from the management Hyper-V host operating system or other virtual
machines that share the same virtual switch, select Enable virtual LAN identification for management
operating system. You can change the VLAN ID to any number or leave the default. This is the virtual LAN
identification number that the management operating system will use for all network communication
through this virtual switch.
8. Click OK.
9. Click Yes.
Get-NetAdapter
4. Create a virtual switch by using the New-VMSwitch cmdlet. For example, to create an external virtual switch
named ExternalSwitch, using the ethernet network adapter, and with Allow management operating
system to share this network adapter turned on, run the following command.
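A representative command, assuming the physical adapter reported by Get-NetAdapter is named Ethernet:
New-VMSwitch -Name ExternalSwitch -NetAdapterName Ethernet -AllowManagementOS $true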
For more advanced Windows PowerShell scripts that cover improved or new virtual switch features in Windows
Server 2016, see Remote Direct Memory Access and Switch Embedded Teaming.
Next step
Create a virtual machine in Hyper-V
Create a virtual machine in Hyper-V
6/7/2019 • 4 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019,
Microsoft Hyper-V Server 2019
Learn how to create a virtual machine by using Hyper-V Manager and Windows PowerShell and what options you
have when you create a virtual machine in Hyper-V Manager.
4. Use the New-VM cmdlet to create the virtual machine. See the following examples.
NOTE
If you might move this virtual machine to a Hyper-V host that runs Windows Server 2012 R2, use the -Version
parameter with New-VM to set the virtual machine configuration version to 5. The default virtual machine
configuration version for Windows Server 2016 isn't supported by Windows Server 2012 R2 or earlier versions. You
can't change the virtual machine configuration version after the virtual machine is created. For more information, see
Supported virtual machine configuration versions.
Existing virtual hard disk - To create a virtual machine with an existing virtual hard disk, you can
use the following command where,
-Name is the name that you provide for the virtual machine that you're creating.
-MemoryStartupBytes is the amount of memory that is available to the virtual machine at
start up.
-BootDevice is the device that the virtual machine boots to when it starts like the network
adapter (NetworkAdapter) or virtual hard disk (VHD).
-VHDPath is the path to the virtual machine disk that you want to use.
-Path is the path to store the virtual machine configuration files.
-Generation is the virtual machine generation. Use generation 1 for VHD and generation 2
for VHDX. See Should I create a generation 1 or 2 virtual machine in Hyper-V?.
-Switch is the name of the virtual switch that you want the virtual machine to use to connect
to other virtual machines or the network. See Create a virtual switch for Hyper-V virtual
machines.
For example:
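A sketch of such a command, assuming the names and paths used in the description that follows:
New-VM -Name Win10VM -MemoryStartupBytes 4GB -BootDevice VHD -VHDPath .\VMs\Win10.vhdx -Path .\VMData -Generation 2 -Switch ExternalSwitch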
This creates a generation 2 virtual machine named Win10VM with 4GB of memory. It boots
from the folder VMs\Win10.vhdx in the current directory and uses the virtual switch named
ExternalSwitch. The virtual machine configuration files are stored in the folder VMData.
New virtual hard disk - To create a virtual machine with a new virtual hard disk, replace the -
VHDPath parameter from the example above with -NewVHDPath and add the -
NewVHDSizeBytes parameter. For example,
New-VM -Name Win10VM -MemoryStartupBytes 4GB -BootDevice VHD -NewVHDPath .\VMs\Win10.vhdx -Path
.\VMData -NewVHDSizeBytes 20GB -Generation 2 -Switch ExternalSwitch
New virtual hard disk that boots to operating system image - To create a virtual machine with a
new virtual disk that boots to an operating system image, see the PowerShell example in Create
virtual machine walkthrough for Hyper-V on Windows 10.
5. Start the virtual machine by using the Start-VM cmdlet. Run the following cmdlet where Name is the name
of the virtual machine you created.
For example:
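Using the virtual machine name from the earlier example (substitute your own VM name):
Start-VM -Name Win10VM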
Connect to the virtual machine by using Virtual Machine Connection (VMConnect.exe).
Options in Hyper-V Manager New Virtual Machine Wizard
The following table lists the options you can pick when you create a virtual machine in Hyper-V Manager and the
defaults for each.
PAGE    DEFAULT FOR WINDOWS SERVER 2016 AND WINDOWS 10    OTHER OPTIONS
Specify Name and Location    Name: New Virtual Machine. Location: C:\ProgramData\Microsoft\Windows\Hyper-V\. This is where the virtual machine configuration files will be stored.    You can also enter your own name and choose another location for the virtual machine.
Assign Memory    Startup memory: 1024 MB. Dynamic memory: not selected.    You can set the startup memory from 32MB to 5902MB. You can also choose to use Dynamic Memory. For more information, see Hyper-V Dynamic Memory Overview.
Configure Networking    Not connected    You can select a network connection for the virtual machine to use from a list of existing virtual switches. See Create a virtual switch for Hyper-V virtual machines.
Connect Virtual Hard Disk    Create a virtual hard disk. Name: <vmname>.vhdx. Location: C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\. Size: 127GB.    You can also choose to use an existing virtual hard disk or wait and attach a virtual hard disk later.
Installation Options    Install an operating system later    These options change the boot order of the virtual machine so that you can install from an .iso file, bootable floppy disk or a network installation service, like Windows Deployment Services (WDS).
Summary    Displays the options that you have chosen, so that you can verify they are correct:
- Name
- Generation
- Memory
- Network
- Hard Disk
- Operating System
Tip: You can copy the summary from the page and paste it into e-mail or somewhere else to help you keep track of your virtual machines.
See also
New-VM
Supported virtual machine configuration versions
Should I create a generation 1 or 2 virtual machine in Hyper-V?
Create a virtual switch for Hyper-V virtual machines
Plan for Hyper-V on Windows Server
3/12/2019 • 2 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019,
Microsoft Hyper-V Server 2019
NOTE
If you plan to ever upload a Windows virtual machine (VM) from on-premises to Microsoft Azure, generation 1 and
generation 2 VMs that are in the VHD file format and have a fixed-size disk are supported. See Generation 2 VMs on Azure to learn
more about generation 2 capabilities supported on Azure. For more information on uploading a Windows VHD or VHDX, see
Prepare a Windows VHD or VHDX to upload to Azure.
Your choice to create a generation 1 or generation 2 virtual machine depends on which guest operating system you
want to install and the boot method you want to use to deploy the virtual machine. We recommend that you create
a generation 2 virtual machine to take advantage of features like Secure Boot unless one of the following
statements is true:
The VHD you want to boot from is not UEFI-compatible.
Generation 2 doesn't support the operating system you want to run on the virtual machine.
Generation 2 doesn't support the boot method you want to use.
For more information about what features are available with generation 2 virtual machines, see Hyper-V feature
compatibility by generation and guest.
You can't change a virtual machine's generation after you've created it. So, we recommend that you review the
considerations here, as well as choose the operating system, boot method, and features you want to use before you
choose a generation.
The following table shows which 64-bit versions of Windows you can use as a guest operating system for
generation 1 and generation 2 virtual machines.
GUEST OPERATING SYSTEM    GENERATION 1    GENERATION 2
Windows 10 ✔ ✔
Windows 8.1 ✔ ✔
Windows 8 ✔ ✔
Windows 7 ✔ ✖
The following table shows which 32-bit versions of Windows you can use as a guest operating system for
generation 1 and generation 2 virtual machines.
GUEST OPERATING SYSTEM    GENERATION 1    GENERATION 2
Windows 10 ✔ ✖
Windows 8.1 ✔ ✖
Windows 8 ✔ ✖
Windows 7 ✔ ✖
CentOS and Red Hat Enterprise Linux guest operating system support
The following table shows which versions of Red Hat Enterprise Linux (RHEL ) and CentOS you can use as a guest
operating system for generation 1 and generation 2 virtual machines.
For more information, see CentOS and Red Hat Enterprise Linux virtual machines on Hyper-V.
Debian guest operating system support
The following table shows which versions of Debian you can use as a guest operating system for generation 1 and
generation 2 virtual machines.
FreeBSD 8.4 ✔ ✖
The following table shows which versions of Unbreakable Enterprise Kernel you can use as a guest operating
system for generation 1 and generation 2 virtual machines.
Ubuntu 12.04 ✔ ✖
GENERATION 1 DEVICE    GENERATION 2 REPLACEMENT    GENERATION 2 ENHANCEMENTS
IDE controller    Virtual SCSI controller    Boot from .vhdx (64 TB maximum size, and online resize capability)
IDE CD-ROM    Virtual SCSI CD-ROM    Support for up to 64 SCSI DVD devices per SCSI controller.
Legacy network adapter    Synthetic network adapter    Network boot with IPv4 and IPv6
Universal asynchronous receiver/transmitter (UART) for COM ports    Optional UART for debugging    Faster and more reliable
i8042 keyboard controller    Software-based input    Uses fewer resources because there is no emulation. Also reduces the attack surface from the guest operating system.
2. Add a COM port. Use the Set-VMComPort cmdlet to do this. For example, the following command
configures the first COM port on virtual machine, TestVM, to connect to the named pipe, TestPipe, on the
local computer:
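A representative form of that command; the pipe path uses the standard \\.\pipe\ prefix plus the pipe name:
Set-VMComPort -VMName TestVM -Number 1 -Path \\.\pipe\TestPipe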
NOTE
Configured COM ports aren't listed in the settings of a virtual machine in Hyper-V Manager.
See Also
Linux and FreeBSD Virtual Machines on Hyper-V
Use local resources on Hyper-V virtual machine with VMConnect
Plan for Hyper-V scalability in Windows Server 2016
Plan for Hyper-V networking in Windows Server
3/12/2019 • 4 minutes to read • Edit Online
Applies To: Microsoft Hyper-V Server 2016, Windows Server 2016, Microsoft Hyper-V Server 2019, Windows
Server 2019
A basic understanding of networking in Hyper-V helps you plan networking for virtual machines. This article also
covers some networking considerations when using live migration and when using Hyper-V with other server
features and roles.
This article gives you details about the maximum configuration for components you can add and remove on a
Hyper-V host or its virtual machines, such as virtual processors or checkpoints. As you plan your deployment,
consider the maximums that apply to each virtual machine, as well as those that apply to the Hyper-V host.
Maximums for memory and logical processors are the biggest increases from Windows Server 2012, in response
to requests to support newer scenarios such as machine learning and data analytics. The Windows Server blog
recently published the performance results of a virtual machine with 5.5 terabytes of memory and 128 virtual
processors running a 4 TB in-memory database. Performance was greater than 95% of the performance of a
physical server. For details, see Windows Server 2016 Hyper-V large-scale VM performance for in-memory
transaction processing. Other numbers are similar to those that apply to Windows Server 2012. (Maximums for
Windows Server 2012 R2 were the same as Windows Server 2012.)
NOTE
For information about System Center Virtual Machine Manager (VMM), see Virtual Machine Manager. VMM is a Microsoft
product for managing a virtualized data center that is sold separately.
Size of physical disks attached directly to a virtual machine    Varies    Maximum size is determined by the guest operating system.
Virtual hard disk capacity    64 TB for VHDX format; 2040 GB for VHD format    Each virtual hard disk is stored on physical media as either a .vhdx or a .vhd file, depending on the format used by the virtual hard disk.
Virtual network adapters    Windows Server 2016 supports 12 total: 8 Hyper-V specific network adapters and 4 legacy network adapters. Windows Server 2019 supports 72 total: 64 Hyper-V specific network adapters and 4 legacy network adapters.    The Hyper-V specific network adapter provides better performance and requires a driver included in integration services. For more information, see Plan for Hyper-V networking in Windows Server.
- Hardware-assisted virtualization
- Hardware-enforced Data Execution Prevention (DEP)
Memory    24 TB    None.
Network adapter teams (NIC Teaming)    No limits imposed by Hyper-V.    For details, see NIC Teaming.
Virtual network switch ports per server    Varies; no limits imposed by Hyper-V.    The practical limit depends on the available computing resources.
Running virtual machines per cluster and per node    8,000 per cluster    Several factors can affect the real number of virtual machines you can run at the same time on one node, such as:
- Amount of physical memory being used by each virtual machine.
- Networking and storage bandwidth.
- Number of disk spindles, which affects disk I/O performance.
Plan for Hyper-V security in Windows Server
3/12/2019 • 3 minutes to read • Edit Online
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
Secure the Hyper-V host operating system, the virtual machines, configuration files, and virtual machine data. Use
the following list of recommended practices as a checklist to help you secure your Hyper-V environment.
Applies To: Microsoft Hyper-V Server 2016, Windows Server 2016, Microsoft Hyper-V Server 2019, Windows
Server 2019
Discrete Device Assignment allows physical PCIe hardware to be directly accessible from within a virtual machine.
This guide will discuss the type of devices that can use Discrete Device Assignment, host system requirements,
limitations imposed on the virtual machines as well as security implications of Discrete Device Assignment.
For Discrete Device Assignment's initial release, we have focused on two device classes to be formally supported
by Microsoft: Graphics Adapters and NVMe Storage devices. Other devices are likely to work and hardware
vendors are able to offer statements of support for those devices. For these other devices, please reach out to those
hardware vendors for support.
If you are ready to try out Discrete Device Assignment, you can jump over to Deploying Graphics Devices Using
Discrete Device Assignment or Deploying Storage Devices using Discrete Device Assignment to get started!
System Requirements
In addition to the System Requirements for Windows Server and the System Requirements for Hyper-V, Discrete
Device Assignment requires server class hardware that is capable of granting the operating system control over
configuring the PCIe fabric (Native PCI Express Control). In addition, the PCIe Root Complex has to support
"Access Control Services" (ACS ), which enables Hyper-V to force all PCIe traffic through the I/O MMU.
These capabilities usually aren't exposed directly in the BIOS of the server and are often hidden behind other
settings. For example, the same capabilities are required for SR -IOV support and in the BIOS you may need to set
"Enable SR -IOV." Please reach out to your system vendor if you are unable to identify the correct setting in your
BIOS.
To help ensure the hardware is capable of Discrete Device Assignment, our engineers have put together a
Machine Profile Script that you can run on a Hyper-V enabled host to test whether your server is correctly set up
and which devices are capable of Discrete Device Assignment.
Device Requirements
Not every PCIe device can be used with Discrete Device Assignment. For example, older devices that leverage
legacy (INTx) PCI interrupts are not supported. Jake Oshins' blog posts go into more detail - however, for the
consumer, running the Machine Profile Script will display which devices are capable of being used for Discrete
Device Assignment.
Device manufacturers can reach out to their Microsoft representative for more details.
Device Driver
As Discrete Device Assignment passes the entire PCIe device into the Guest VM, a host driver is not required to be
installed prior to the device being mounted within the VM. The only requirement on the host is that the device's
PCIe Location Path can be determined. The device's driver can optionally be installed if this helps in identifying the
device. For example, a GPU without its device driver installed on the host may appear as a Microsoft Basic Render
Device. If the device driver is installed, its manufacturer and model will likely be displayed.
Once the device is mounted inside the guest, the Manufacturer's device driver can now be installed like normal
inside the guest virtual machine.
Security
Discrete Device Assignment passes the entire device into the VM. This means all capabilities of that device are
accessible from the guest operating system. Some capabilities, like firmware updating, may adversely impact the
stability of the system. As such, numerous warnings are presented to the admin when dismounting the device from
the host. We highly recommend that Discrete Device Assignment is only used where the tenants of the VMs are
trusted.
If the admin desires to use a device with an untrusted tenant, we have provided device manufacturers with the
ability to create a Device Mitigation driver that can be installed on the host. Please contact the device manufacturer
for details on whether they provide a Device Mitigation Driver.
If you would like to bypass the security checks for a device that does not have a Device Mitigation Driver, you will
have to pass the -Force parameter to the Dismount-VMHostAssignableDevice cmdlet. Understand that by doing so,
you have changed the security profile of that system, and this is only recommended during prototyping or in trusted
environments.
MMIO Space
Some devices, especially GPUs, require additional MMIO space to be allocated to the VM for the memory of that
device to be accessible. By default, each VM starts off with 128MB of low MMIO space and 512MB of high MMIO
space allocated to it. However, a device might require more MMIO space, or multiple devices may be passed
through such that the combined requirements exceed these values. Changing MMIO space is straightforward and
can be performed in PowerShell using the following commands:
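A sketch of those commands, assuming a VM named ddatest1 and example sizes (set the values to what your devices require):
Set-VM -VMName ddatest1 -LowMemoryMappedIoSpace 3GB
Set-VM -VMName ddatest1 -HighMemoryMappedIoSpace 33280MB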
The easiest way to determine how much MMIO space to allocate is to use the Machine Profile Script. To download
and run the machine profile script, run the following commands in a PowerShell console:
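The download URL is not reproduced here. A minimal sketch of running the script, assuming the Machine Profile Script (SurveyDDA.ps1, from Microsoft's Virtualization-Documentation samples; the file name is an assumption) has already been saved to the current folder:
.\SurveyDDA.ps1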
For devices that are able to be assigned, the script will display the MMIO requirements of a given device like the
example below:
NVIDIA GRID K520
Express Endpoint -- more secure.
...
And it requires at least: 176 MB of MMIO gap space
...
The low MMIO space is used only by 32-bit operating systems and devices that use 32-bit addresses. In most
circumstances, setting the high MMIO space of a VM will be enough since 32-bit configurations aren't very
common.
IMPORTANT
When assigning MMIO space to a VM, the user needs to be sure to set the MMIO space to the sum of the requested MMIO
space for all desired assigned devices plus an additional buffer if there are other virtual devices that require a few MB of
MMIO space. Use the default MMIO values described above as the buffer for low and high MMIO (128 MB and 512 MB,
respectively).
If a user were to assign a single K520 GPU as in the example above, they must set the MMIO space of the VM to
the value output by the machine profile script plus a buffer: 176 MB + 512 MB. If a user were to assign three
K520 GPUs, they must set the MMIO space to three times 176 MB plus a buffer, or 528 MB + 512 MB.
For a more in-depth look at MMIO space, see Discrete Device Assignment - GPUs on the TechCommunity blog.
Can Discrete Device Assignment be used to pass a USB device into a VM?
Although not officially supported, our customers have used Discrete Device Assignment to do this by passing the
entire USB3 controller into a VM. As the whole controller is being passed in, each USB device plugged into that
controller will also be accessible in the VM. Note that only some USB3 controllers may work, and USB2 controllers
cannot be used with Discrete Device Assignment.
Deploy Hyper-V on Windows Server
3/12/2019 • 2 minutes to read • Edit Online
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
Use these resources to help you deploy Hyper-V on Windows Server 2016.
Configure virtual local area networks for Hyper-V
Set up hosts for live migration without Failover Clustering
Upgrade virtual machine version in Hyper-V on Windows 10 or Windows Server 2016
Deploy graphics devices using Discrete Device Assignment
Deploy storage devices using Discrete Device Assignment
3/12/2019 • 3 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019,
Microsoft Hyper-V Server 2019
Restore
To import the virtual machine specifying your own path for the virtual machine files, run a command like this,
replacing the examples with your values:
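A sketch of such a command; the .vmcx path and destination folders are placeholders you replace with your own values:
Import-VM -Path 'C:\<ExportFolder>\Virtual Machines\<VM GUID>.vmcx' -Copy -VhdDestinationPath 'D:\MyVHDs' -VirtualMachinePath 'D:\MyVMConfig'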
Import as a copy
To complete a copy import and move the virtual machine files to the default Hyper-V location, run a command like
this, replacing the examples with your values:
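A sketch of such a command; the .vmcx path is a placeholder you replace with your own value:
Import-VM -Path 'C:\<ExportFolder>\Virtual Machines\<VM GUID>.vmcx' -Copy -GenerateNewId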
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
This article shows you how to set up hosts that aren't clustered so you can do live migrations between them. Use
these instructions if you didn't set up live migration when you installed Hyper-V, or if you want to change the
settings. To set up clustered hosts, use tools for Failover Clustering.
Step 2: Set up the source and destination computers for live migration
This step includes choosing options for authentication and networking. As a security best practice, we recommend
that you select specific networks to use for live migration traffic, as discussed above. This step also shows you how
to choose the performance option.
Use Hyper-V Manager to set up the source and destination computers for live migration
1. Open Hyper-V Manager. (From Server Manager, click Tools >>Hyper-V Manager.)
2. In the navigation pane, select one of the servers. (If it isn't listed, right-click Hyper-V Manager, click
Connect to Server, type the server name, and click OK. Repeat to add more servers.)
3. In the Action pane, click Hyper-V Settings >>Live Migrations.
4. In the Live Migrations pane, check Enable incoming and outgoing live migrations.
5. Under Simultaneous live migrations, specify a different number if you don't want to use the default of 2.
6. Under Incoming live migrations, if you want to use specific network connections to accept live migration
traffic, click Add to type the IP address information. Otherwise, click Use any available network for live
migration. Click OK.
7. To choose Kerberos and performance options, expand Live Migrations and then select Advanced
Features.
If you have configured constrained delegation, under Authentication protocol, select Kerberos.
Under Performance options, review the details and choose a different option if it's appropriate for your
environment.
8. Click OK.
9. Select the other server in Hyper-V Manager and repeat the steps.
Use Windows PowerShell to set up the source and destination computers for live migration
Three cmdlets are available for configuring live migration on non-clustered hosts: Enable-VMMigration, Set-
VMMigrationNetwork, and Set-VMHost. This example uses all three and does the following:
Configures live migration on the local host
Allows incoming migration traffic only on a specific network
Chooses Kerberos as the authentication protocol
Each line represents a separate command.
PS C:\> Enable-VMMigration
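The remaining two commands are not shown above; a sketch, where 192.168.10.1 is a placeholder for the network you want to use for migration traffic:
PS C:\> Set-VMMigrationNetwork 192.168.10.1
PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos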
Set-VMHost also lets you choose a performance option (and many other host settings). For example, to choose
SMB but leave the authentication protocol set to the default of CredSSP, type:
PS C:\> Set-VMHost -VirtualMachineMigrationPerformanceOption SMBTransport
OPTION DESCRIPTION
Next steps
After you set up the hosts, you're ready to do a live migration. For instructions, see Use live migration without
Failover Clustering to move a virtual machine.
Upgrade virtual machine version in Hyper-V on
Windows 10 or Windows Server
6/7/2019 • 6 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2019, Windows Server 2016, Windows Server (Semi-Annual
Channel)
Make the latest Hyper-V features available on your virtual machines by upgrading the configuration version. Don't
do this until:
You upgrade your Hyper-V hosts to the latest version of Windows or Windows Server.
You upgrade the cluster functional level.
You're sure that you won't need to move the virtual machine back to a Hyper-V host that runs a previous
version of Windows or Windows Server.
For more information, see Cluster Operating System Rolling Upgrade and Perform a rolling upgrade of a Hyper-V
host cluster in VMM.
You can also see the configuration version in Hyper-V Manager by selecting the virtual machine and looking at the
Summary tab.
Update-VMVersion <vmname>
If you need to create a virtual machine that you can move to a Hyper-V host that runs an older version of
Windows, use the New-VM cmdlet with the -Version parameter. For example, to create a virtual machine that you
can move to a Hyper-V host that runs Windows Server 2012 R2, run the following command. This command will
create a virtual machine named "WindowsCV5" with configuration version 5.0.
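A representative command:
New-VM -Name "WindowsCV5" -Version 5.0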
NOTE
You can import virtual machines that have been created for a Hyper-V host running an older version of Windows or restore
them from backup. If the VM's configuration version is not listed as supported for your Hyper-V host OS in the table below,
you have to update the VM configuration version before you can start the VM.
HYPER-V HOST WINDOWS VERSION    9.1    9.0    8.3    8.2    8.1    8.0    7.1    7.0    6.2    5.0
Windows Server 2019    ✖ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows 10 Enterprise LTSC 2019    ✖ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows Server 2016    ✖ ✖ ✖ ✖ ✖ ✔ ✔ ✔ ✔ ✔
Windows 10 Enterprise 2016 LTSB    ✖ ✖ ✖ ✖ ✖ ✔ ✔ ✔ ✔ ✔
Windows 10 Enterprise 2015 LTSB    ✖ ✖ ✖ ✖ ✖ ✖ ✖ ✖ ✔ ✔
Windows Server 2012 R2    ✖ ✖ ✖ ✖ ✖ ✖ ✖ ✖ ✖ ✔
Windows 8.1    ✖ ✖ ✖ ✖ ✖ ✖ ✖ ✖ ✖ ✔
Windows 10 May 2019 Update (version 1903)    ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows Server, version 1903    ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows Server, version 1809    ✖ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows 10 October 2018 Update (version 1809)    ✖ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows Server, version 1803    ✖ ✖ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows 10 April 2018 Update (version 1803)    ✖ ✖ ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows 10 Fall Creators Update (version 1709)    ✖ ✖ ✖ ✔ ✔ ✔ ✔ ✔ ✔ ✔
Windows 10 Creators Update (version 1703)    ✖ ✖ ✖ ✖ ✔ ✔ ✔ ✔ ✔ ✔
Windows 10 Anniversary Update (version 1607)    ✖ ✖ ✖ ✖ ✖ ✔ ✔ ✔ ✔ ✔
Virtual hard disk Stores virtual hard disks for the virtual machine.
File name extension: .vhd or .vhdx
Default location: C:\ProgramData\Microsoft\Windows\Hyper-
V\Virtual Hard Disks
Automatic virtual hard disk Differencing disk files used for virtual machine checkpoints.
File name extension: .avhdx
Default location: C:\ProgramData\Microsoft\Windows\Hyper-
V\Virtual Hard Disks
NOTE
Once the VM configuration version is updated, the VM won't be able to start on hosts that do not support the updated
configuration version.
The following table shows the minimum virtual machine configuration version required to use some Hyper-V
features.
FEATURE MINIMUM VM CONFIGURATION VERSION
For more information about these features, see What's new in Hyper-V on Windows Server.
Deploy graphics devices using Discrete Device
Assignment
7/24/2019 • 4 minutes to read • Edit Online
Applies To: Microsoft Hyper-V Server 2016, Windows Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
Starting with Windows Server 2016, you can use Discrete Device Assignment, or DDA, to pass an entire PCIe
Device into a VM. This will allow high performance access to devices like NVMe storage or Graphics Cards from
within a VM while being able to leverage the device's native drivers. Please visit the Plan for Deploying Devices
using Discrete Device Assignment for more details on which devices work, what are the possible security
implications, etc.
There are three steps to using a device with Discrete Device Assignment:
Configure the VM for Discrete Device Assignment
Dismount the Device from the Host Partition
Assign the Device to the Guest VM
All commands can be executed on the host in a Windows PowerShell console as an Administrator.
If no Partitioning driver is provided, during dismount you must use the -force option to bypass the security
warning. Please read more about the security implications of doing this on Plan for Deploying Devices using
Discrete Device Assignment.
What’s Next
After a device is successfully mounted in a VM, you’re now able to start that VM and interact with the device as you
normally would if you were running on a bare metal system. This means that you’re now able to install the
Hardware Vendor’s drivers in the VM and applications will be able to see that hardware present. You can verify this
by opening device manager in the Guest VM and seeing that the hardware now shows up.
You can then re-enable the device in device manager and the host operating system will be able to interact with the
device again.
Examples
Mounting a GPU to a VM
In this example we use PowerShell to configure a VM named “ddatest1” to take the first GPU available by the
manufacturer NVIDIA and assign it into the VM.
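The commands below are a sketch of that flow using standard PowerShell cmdlets (Get-PnpDevice, Dismount-VMHostAssignableDevice, Add-VMAssignableDevice); the VM name ddatest1 comes from the text, while the device filtering and the automatic stop action setting are illustrative assumptions:
# Configure the VM so it can host an assigned device (VMs with assigned devices cannot be saved)
$vm = "ddatest1"
Set-VM -Name $vm -AutomaticStopAction TurnOff

# Locate the first Display-class device from NVIDIA that is present on the host
$gpu = Get-PnpDevice -PresentOnly |
    Where-Object { $_.Class -eq "Display" -and $_.Manufacturer -like "*NVIDIA*" } |
    Select-Object -First 1

# Get the device's PCIe location path, then disable the device on the host
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false

# Dismount the device from the host partition
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Assign the device to the guest VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vm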
Starting with Windows Server 2016, you can use Discrete Device Assignment, or DDA, to pass an entire PCIe
Device into a VM. This will allow high performance access to devices like NVMe storage or Graphics Cards from
within a VM while being able to leverage the device's native drivers. Please visit the Plan for Deploying Devices
using Discrete Device Assignment for more details on which devices work, what are the possible security
implications, etc. There are three steps to using a device with DDA:
Configure the VM for DDA
Dismount the Device from the Host Partition
Assign the Device to the Guest VM
All commands can be executed on the host in a Windows PowerShell console as an Administrator.
You can then re-enable the device in device manager and the host operating system will be able to interact with the
device again.
Manage Hyper-V on Windows Server
3/12/2019 • 2 minutes to read • Edit Online
Use the resources in this section to help you manage Hyper-V on Windows Server 2016.
Choose between standard or production checkpoints
Enable or disable checkpoints
Manage hosts with Hyper-V Manager
Manage host CPU resource controls
Using VM CPU Groups
Manage Windows virtual machines with PowerShell Direct
Set up Hyper-V Replica
Use live migration without Failover Clustering to move a virtual machine
Cmdlets for configuring persistent memory devices
for Hyper-V VMs
3/12/2019 • 2 minutes to read • Edit Online
This article provides system administrators and IT Pros with information about configuring Hyper-V VMs with
persistent memory (also known as storage class memory or NVDIMM). JEDEC-compliant NVDIMM-N persistent memory
devices are supported in Windows Server 2016 and Windows 10 and provide byte-level access to very low latency
non-volatile devices. VM persistent memory devices are supported in Windows Server 2019.
Add-VMPmemController ProductionVM1x
Persistent memory devices within a Hyper-V VM appear as a persistent memory device to be consumed and
managed by the guest operating system. Guest operating systems can use the device as a block or DAX volume.
When persistent memory devices within a VM are used as a DAX volume, they benefit from low latency byte-level
address-ability of the host device (no I/O virtualization on the code path).
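A sketch of how such a device might be created and attached, assuming a host with persistent memory, a VM named ProductionVM1, and an example backing file path; the names, sizes, and the PMEM controller type value are illustrative assumptions:
# Create a fixed-size persistent memory backing file
New-VHD -Path D:\VMPMEMDevice1.vhdpmem -Fixed -SizeBytes 4GB
# Add a persistent memory controller to the VM
Add-VMPmemController -VMName ProductionVM1
# Attach the backing file to the VM's persistent memory controller
Add-VMHardDiskDrive -VMName ProductionVM1 -ControllerType PMEM -ControllerLocation 1 -Path D:\VMPMEMDevice1.vhdpmem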
NOTE
Persistent memory is only supported for Hyper-V Gen2 VMs. Live Migration and Storage Migration are not supported for
VMs with persistent memory. Production checkpoints of VMs do not include persistent memory state.
Choose between standard or production checkpoints
in Hyper-V
6/14/2019 • 2 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019,
Microsoft Hyper-V Server 2019
Starting with Windows Server 2016 and Windows 10, you can choose between standard and production
checkpoints for each virtual machine. Production checkpoints are the default for new virtual machines.
Production checkpoints are "point in time" images of a virtual machine, which can be restored later on in a
way that is completely supported for all production workloads. This is achieved by using backup technology
inside the guest to create the checkpoint, instead of using saved state technology.
Standard checkpoints capture the state, data, and hardware configuration of a running virtual machine and
are intended for use in development and test scenarios. Standard checkpoints can be useful if you need to
recreate a specific state or condition of a running virtual machine so that you can troubleshoot a problem.
NOTE
Only Production Checkpoints are supported on guests that run the Active Directory Domain Services role (domain controllers)
or the Active Directory Lightweight Directory Services role.
See also
Production checkpoints
Enable or disable checkpoints
Create Hyper-V VHD Set files
5/31/2019 • 2 minutes to read • Edit Online
VHD Set files are a new shared Virtual Disk model for guest clusters in Windows Server 2016. VHD Set files
support online resizing of shared virtual disks, support Hyper-V Replica, and can be included in application-
consistent checkpoints.
VHD Set files use a new VHD file type, .VHDS. VHD Set files store checkpoint information about the group virtual
disk used in guest clusters, in the form of metadata.
Hyper-V handles all aspects of managing the checkpoint chains and merging the shared VHD set. Management
software can run disk operations like online resizing on VHD Set files in the same way it does for .VHDX files. This
means that management software doesn't need to know about the VHD Set file format.
PS c:\>Remove-VMHardDiskDrive existing.vhdx
PS c:\>Add-VMHardDiskDrive new.vhds
Enable or disable checkpoints in Hyper-V
3/12/2019 • 2 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019,
Microsoft Hyper-V Server 2019
You can choose to enable or disable checkpoints for each virtual machine.
1. In Hyper-V Manager, right-click the virtual machine and click Settings.
2. Under the Management section, select Checkpoints.
3. To allow checkpoints to be taken of this virtual machine, make sure Enable checkpoints is selected. To
disable checkpoints, clear the Enable checkpoints check box.
4. Click Apply to apply your changes. If you are done, click OK to close the dialog box.
See also
Choose between standard or production checkpoints in Hyper-V
Remotely manage Hyper-V hosts with Hyper-V
Manager
5/31/2019 • 6 minutes to read • Edit Online
Applies To: Windows Server 2016, Windows Server 2012 R2, Windows 10, Windows 8.1
This article lists the supported combinations of Hyper-V hosts and Hyper-V Manager versions and describes how
to connect to remote and local Hyper-V hosts so you can manage them.
Hyper-V Manager lets you manage a small number of Hyper-V hosts, both remote and local. It's installed when
you install the Hyper-V Management Tools, which you can do either through a full Hyper-V installation or a tools-
only installation. Doing a tools-only installation means you can use the tools on computers that don't meet the
hardware requirements to host Hyper-V. For details about hardware for Hyper-V hosts, see System requirements.
If Hyper-V Manager isn't installed, see the instructions below.
HYPER-V MANAGER VERSION    HYPER-V HOST VERSION
Windows Server 2016, Windows 10    - Windows Server 2016 (all editions and installation options, including Nano Server) and corresponding version of Hyper-V Server
- Windows Server 2012 R2 (all editions and installation options) and corresponding version of Hyper-V Server
- Windows Server 2012 (all editions and installation options) and corresponding version of Hyper-V Server
- Windows 10
- Windows 8.1
Windows Server 2012 R2, Windows 8.1    - Windows Server 2012 R2 (all editions and installation options) and corresponding version of Hyper-V Server
- Windows Server 2012 (all editions and installation options) and corresponding version of Hyper-V Server
- Windows 8.1
Windows Server 2012    - Windows Server 2012 (all editions and installation options) and corresponding version of Hyper-V Server
Windows Server 2008 R2 Service Pack 1, Windows 7 Service Pack 1    - Windows Server 2008 R2 (all editions and installation options) and corresponding version of Hyper-V Server
Windows Server 2008, Windows Vista Service Pack 2    - Windows Server 2008 (all editions and installation options) and corresponding version of Hyper-V Server
NOTE
Service pack support ended for Windows 8 on January 12, 2016. For more information, see the Windows 8.1 FAQ.
Enable-PSRemoting
NOTE
This will only work for Windows Server 2016 or Windows 10 remote hosts.
Connect to a Windows 2016 or Windows 10 remote host outside your domain, or with no domain
To do this:
1. On the Hyper-V host to be managed, open a Windows PowerShell session as Administrator.
2. Create the necessary firewall rules for private network zones:
Enable-PSRemoting
3. To allow remote access on public zones, enable firewall rules for CredSSP and WinRM:
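A representative command for that step; it configures the host to accept credentials delegated with CredSSP:
Enable-WSManCredSSP -Role server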
NOTE
This will only work for Windows Server 2016 or Windows 10 remote hosts.
add-windowsfeature rsat-hyper-v-tools
See also
Install Hyper-V
Hyper-V Host CPU Resource Management
3/12/2019 • 4 minutes to read • Edit Online
Hyper-V host CPU resource controls introduced in Windows Server 2016 or later allow Hyper-V administrators to
better manage and allocate host server CPU resources between the “root”, or management partition, and guest
VMs. Using these controls, administrators can dedicate a subset of the processors of a host system to the root
partition. This can segregate the work done in a Hyper-V host from the workloads running in guest virtual
machines by running them on separate subsets of the system processors.
For details about hardware for Hyper-V hosts, see Windows 10 Hyper-V System Requirements.
Background
Before setting controls for Hyper-V host CPU resources, it’s helpful to review the basics of the Hyper-V
architecture.
You can find a general summary in the Hyper-V Architecture section. These are important concepts for this article:
Hyper-V creates and manages virtual machine partitions, across which compute resources are allocated and
shared, under control of the hypervisor. Partitions provide strong isolation boundaries between all guest
virtual machines, and between guest VMs and the root partition.
The root partition is itself a virtual machine partition, although it has unique properties and much greater
privileges than guest virtual machines. The root partition provides the management services that control all
guest virtual machines, provides virtual device support for guests, and manages all device I/O for guest
virtual machines. Microsoft strongly recommends not running any application workloads in a host partition.
Each virtual processor (VP ) of the root partition is mapped 1:1 to an underlying logical processor (LP ). A
host VP will always run on the same underlying LP – there is no migration of the root partition’s VPs.
By default, the LPs on which host VPs run can also run guest VPs.
A guest VP may be scheduled by the hypervisor to run on any available logical processor. While the
hypervisor scheduler takes care to consider temporal cache locality, NUMA topology, and many other
factors when scheduling a guest VP, ultimately the VP could be scheduled on any host LP.
When Minroot is active, Task Manager will display the number of logical processors currently allotted to the host,
in addition to the total number of logical processors in the system.
3/12/2019 • 11 minutes to read • Edit Online
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
NOTE
Only the Host Compute Service (HCS) may be used to create and manage CPU groups; the Hyper-V Manager applet, WMI, and PowerShell
management interfaces don't support CPU groups.
Microsoft provides a command line utility, cpugroups.exe, on the Microsoft Download Center which uses the HCS
interface to manage CPU groups. This utility can also display the CPU topology of a host.
For example, consider a CPU group configured with 4 logical processors (LPs), and a cap of 50%.
G = n * C
G = 4 * 50%
G = 2 LP's worth of CPU time for the entire group
In this example, the CPU group G is allocated 2 LP's worth of CPU time.
Note that the group cap applies regardless of the number of virtual machines or virtual processors bound to the
group, and regardless of the state (e.g., shutdown or started) of the virtual machines assigned to the CPU group.
Each VM bound to the same CPU group therefore receives a fraction of the group's total CPU allocation, and
this fraction changes with the number of VMs bound to the CPU group. As VMs are bound to or unbound from
a CPU group, the overall CPU group cap must be readjusted to maintain the desired per-VM cap. The VM host
administrator or virtualization management software layer is responsible for managing group caps as necessary
to achieve the desired per-VM CPU resource allocation.
G = n * C
G = 8 * 50%
G = 4 LP's worth of CPU time for the entire group
The host administrator adds a single “B” tier VM. At this point, our “B” tier VM can use at most 50% worth of the
host's CPU, or the equivalent of 4 LPs in our example system.
Now, the admin adds a second "Tier B" VM. The CPU group's allocation is divided evenly among all the VMs.
We’ve got a total of 2 VMs in Group B, so each VM now gets half of Group B's total of 50%, 25% each, or the
equivalent of 2 LPs worth of compute time.
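To make the arithmetic concrete, here is a minimal PowerShell sketch of the calculation described above. The group size, cap, and VM count are illustrative values, and the 0-65536 cap scale is inferred from the CpuGroups.exe output shown later (a CpuCap of 65536 appears to correspond to 100%).

# Group-cap arithmetic sketch (illustrative values).
$lpCount  = 8        # logical processors assigned to the CPU group
$groupCap = 0.50     # group cap of 50%
$vmCount  = 2        # VMs currently bound to the group

$groupLpEquivalent = $lpCount * $groupCap           # 4 LPs' worth of CPU time for the whole group
$perVmLpEquivalent = $groupLpEquivalent / $vmCount  # 2 LPs' worth per VM

# CpuGroups.exe appears to express caps on a 0..65536 scale, so a 50% cap maps to 32768.
$cpuCapValue = [int](65536 * $groupCap)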
NOTE
Command line parameters for the CpuGroups tool are passed using only spaces as delimiters. No ‘/’ or ‘-‘ characters should
precede the desired command line switch.
C:\vm\tools>CpuGroups.exe GetGroups
CpuGroupId CpuCap LpIndexes
------------------------------------ ------ ---------
36AB08CB-3A76-4B38-992E-000000000001 65536 0,1,16,17
36AB08CB-3A76-4B38-992E-000000000002 32768 4,5,6,7,8,9,10,11,20,21,22,23
36AB08CB-3A76-4B38-992E-000000000003 65536 12,13,14,15
36AB08CB-3A76-4B38-992E-000000000004 65536 24,25,26,27,28,29,30,31
Now let's confirm our setting by displaying the group we just updated.
Example 6 – Print CPU group ids for all VMs on the host
C:\vm\tools>CpuGroups.exe GetVmGroup
VmName VmId CpuGroupId
------ ------------------------------------ ------------------------------------
G2 4ABCFC2F-6C22-498C-BB38-7151CE678758 36ab08cb-3a76-4b38-992e-000000000002
P1 973B9426-0711-4742-AD3B-D8C39D6A0DEC 36ab08cb-3a76-4b38-992e-000000000003
P2 A593D93A-3A5F-48AB-8862-A4350E3459E8 36ab08cb-3a76-4b38-992e-000000000004
G3 B0F3FCD5-FECF-4A21-A4A2-DE4102787200 36ab08cb-3a76-4b38-992e-000000000002
G1 F699B50F-86F2-4E48-8BA5-EB06883C1FDC 00000000-0000-0000-0000-000000000000
C:\vm\tools>CpuGroups.exe GetVmGroup
VmName VmId CpuGroupId
------ ------------------------------------ ------------------------------------
G2 4ABCFC2F-6C22-498C-BB38-7151CE678758 36ab08cb-3a76-4b38-992e-000000000002
P1 973B9426-0711-4742-AD3B-D8C39D6A0DEC 36ab08cb-3a76-4b38-992e-000000000003
P2 A593D93A-3A5F-48AB-8862-A4350E3459E8 36ab08cb-3a76-4b38-992e-000000000004
G3 B0F3FCD5-FECF-4A21-A4A2-DE4102787200 36ab08cb-3a76-4b38-992e-000000000002
G1 F699B50F-86F2-4E48-8BA5-EB06883C1FDC 36AB08CB-3A76-4B38-992E-000000000001
C:\vm\tools>CpuGroups.exe GetGroupVms
CpuGroupId VmName VmId
------------------------------------ ------ ------------------------------------
36AB08CB-3A76-4B38-992E-000000000001 G1 F699B50F-86F2-4E48-8BA5-EB06883C1FDC
36ab08cb-3a76-4b38-992e-000000000002 G2 4ABCFC2F-6C22-498C-BB38-7151CE678758
36ab08cb-3a76-4b38-992e-000000000002 G3 B0F3FCD5-FECF-4A21-A4A2-DE4102787200
36ab08cb-3a76-4b38-992e-000000000003 P1 973B9426-0711-4742-AD3B-D8C39D6A0DEC
36ab08cb-3a76-4b38-992e-000000000004 P2 A593D93A-3A5F-48AB-8862-A4350E3459E8
Example 12 – Unbind the only VM from a CPU group and delete the group
In this example, we'll use several commands to examine a CPU group, remove the single VM belonging to that
group, then delete the group.
First, let's list the existing CPU groups and enumerate the VMs in our group.
C:\vm\tools>CpuGroups.exe GetGroups
CpuGroupId CpuCap LpIndexes
------------------------------------ ------ -----------------------------
36AB08CB-3A76-4B38-992E-000000000002 32768 4,5,6,7,8,9,10,11,20,21,22,23
36AB08CB-3A76-4B38-992E-000000000003 65536 12,13,14,15
36AB08CB-3A76-4B38-992E-000000000004 65536 24,25,26,27,28,29,30,31
C:\vm\tools>CpuGroups.exe GetGroupVms
CpuGroupId VmName VmId
------------------------------------ -------------------------------- ------------------------------------
36ab08cb-3a76-4b38-992e-000000000002 G2 4ABCFC2F-6C22-498C-BB38-7151CE678758
36ab08cb-3a76-4b38-992e-000000000002 G3 B0F3FCD5-FECF-4A21-A4A2-DE4102787200
36AB08CB-3A76-4B38-992E-000000000002 G1 F699B50F-86F2-4E48-8BA5-EB06883C1FDC
36ab08cb-3a76-4b38-992e-000000000003 P1 973B9426-0711-4742-AD3B-D8C39D6A0DEC
36ab08cb-3a76-4b38-992e-000000000004 P2 A593D93A-3A5F-48AB-8862-A4350E3459E8
Managing Hyper-V hypervisor scheduler types
6/4/2019 • 10 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2016, Windows Server, version 1709, Windows Server, version
1803, Windows Server 2019
This article describes new modes of virtual processor scheduling logic first introduced in Windows Server 2016.
These modes, or scheduler types, determine how the Hyper-V hypervisor allocates and manages work across guest
virtual processors. A Hyper-V host administrator can select hypervisor scheduler types that are best suited for the
guest virtual machines (VMs) and configure the VMs to take advantage of the scheduling logic.
NOTE
Updates are required to use the hypervisor scheduler features described in this document. For details, see Required updates.
Background
Before discussing the logic and controls behind Hyper-V virtual processor scheduling, it’s helpful to review the
basic concepts covered in this article.
Understanding SMT
Simultaneous multithreading, or SMT, is a technique utilized in modern processor designs that allows the
processor’s resources to be shared by separate, independent execution threads. SMT generally offers a modest
performance boost to most workloads by parallelizing computations when possible and increasing instruction
throughput, though there may be no performance gain, or even a slight loss, when threads contend for shared
processor resources. Processors supporting SMT are available from both Intel and AMD.
Intel refers to their SMT offerings as Intel Hyper Threading Technology, or Intel HT.
For the purposes of this article, the descriptions of SMT and how it is utilized by Hyper-V apply equally to both
Intel and AMD systems.
For more information on Intel HT Technology, refer to Intel Hyper-Threading Technology
For more information on AMD SMT, refer to The "Zen" Core Architecture
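Guest SMT is controlled with the HwThreadCountPerCore setting of the Set-VMProcessor cmdlet; a minimal sketch, with placeholders for the VM name and thread count, looks like this:

Set-VMProcessor -VMName <VMName> -HwThreadCountPerCore <n>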
Where <n> is the number of SMT threads per core the guest VM will see.
Note that <n> = 0 will set the HwThreadCountPerCore value to match the host's SMT thread count per core value.
NOTE
Setting HwThreadCountPerCore = 0 is supported beginning with Windows Server 2019.
Below is an example of System Information taken from the guest operating system running in a virtual machine
with 2 virtual processors and SMT enabled. The guest operating system is detecting 2 logical processors belonging
to the same core.
NOTE
Microsoft recommends that all customers running Windows Server 2016 Hyper-V select the core scheduler to ensure their
virtualization hosts are optimally protected against potentially malicious guest VMs.
NOTE
The following updates are required to use the hypervisor scheduler features described in this document. These updates
include changes to support the new 'hypervisorschedulertype' BCD option, which is necessary for host configuration.
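Once the updates are applied, the scheduler type is selected through that BCD option and takes effect after a host reboot. For example, to select the core scheduler:

bcdedit /set hypervisorschedulertype core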
NOTE
The hypervisor root scheduler is not supported on Windows Server Hyper-V at this time. Hyper-V administrators should not
attempt to configure the root scheduler for use with server virtualization scenarios.
2 = Classic scheduler
3 = Core scheduler
4 = Root scheduler
Querying the Hyper-V hypervisor scheduler type launch event using PowerShell
To query for hypervisor event ID 2 using PowerShell, enter the following commands from a PowerShell prompt.
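A query along these lines works (the provider name is the hypervisor's event provider; adjust if your environment logs these events elsewhere):

Get-WinEvent -FilterHashTable @{ProviderName="Microsoft-Windows-Hyper-V-Hypervisor"; ID=2} -MaxEvents 1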
Applies To:
Windows Server 2016
Windows Server, version 1709
Windows Server, version 1803
Windows Server 2019
This document describes important changes to Hyper-V's default and recommended use of hypervisor scheduler
types. These changes impact both system security and virtualization performance. Virtualization host
administrators should review and understand the changes and implications described in this document, and
carefully evaluate the impacts, suggested deployment guidance and risk factors involved to best understand how to
deploy and manage Hyper-V hosts in the face of the rapidly changing security landscape.
IMPORTANT
Currently known side-channel security vulnerabilities identified in multiple processor architectures could be exploited by a
malicious guest VM through the scheduling behavior of the legacy hypervisor classic scheduler type when run on hosts with
Simultaneous Multithreading (SMT) enabled. If successfully exploited, a malicious workload could observe data outside its
partition boundary. This class of attacks can be mitigated by configuring the Hyper-V hypervisor to utilize the hypervisor
core scheduler type and reconfiguring guest VMs. With the core scheduler, the hypervisor restricts a guest VM's VPs to run
on the same physical processor core, therefore strongly isolating the VM's ability to access data to the boundaries of the
physical core on which it runs. This is a highly effective mitigation against these side-channel attacks, which prevents the VM
from observing any artifacts from other partitions, whether the root or another guest partition. Therefore, Microsoft is
changing the default and recommended configuration settings for virtualization hosts and guest VMs.
Background
Starting with Windows Server 2016, Hyper-V supports several methods of scheduling and managing virtual
processors, referred to as hypervisor scheduler types. A detailed description of all hypervisor scheduler types can
be found in Understanding and using Hyper-V hypervisor scheduler types.
NOTE
New hypervisor scheduler types were first introduced with Windows Server 2016, and are not available in prior releases. All
versions of Hyper-V prior to Windows Server 2016 support only the classic scheduler. Support for the core scheduler was
only recently published.
NOTE
While the hypervisor's internal support for the scheduler types was included in the initial release of Windows Server 2016,
Windows Server 1709, and Windows Server 1803, updates are required in order to access the configuration control which
allows selecting the hypervisor scheduler type. Please refer to Understanding and using Hyper-V hypervisor scheduler types
for details on these updates.
NOTE
Windows Server 2016 does not support setting HwThreadCountPerCore to 0.
1. Configure VMs to run as SMT-enabled, optionally inheriting the host SMT topology automatically
The SMT configuration for a VM is displayed in the Summary pane in the Hyper-V Manager console.
Configuring a VM's SMT settings may be done by using the VM Settings or PowerShell.
Configuring VM SMT settings using PowerShell
To configure the SMT settings for a guest virtual machine, open a PowerShell window with sufficient permissions,
and type:
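A sketch of the command, with <VMName> and <n> as placeholders:

Set-VMProcessor -VMName <VMName> -HwThreadCountPerCore <n>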
Where:
0 = Inherit SMT topology from the host (this setting of HwThreadCountPerCore=0 is not supported on Windows
Server 2016)
1 = Non-SMT
Values > 1 = the desired number of SMT threads per core. May not exceed the number of physical SMT threads per
core.
To read the SMT settings for a guest virtual machine, open a PowerShell window with sufficient permissions, and
type:
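For example (the VM name is a placeholder):

(Get-VMProcessor -VMName <VMName>).HwThreadCountPerCore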
Note that configuring a guest VM with HwThreadCountPerCore = 0 indicates that SMT will be enabled for the
guest, and the guest will see the same number of SMT threads per core as the underlying virtualization
host provides, typically 2.
Guest VMs may observe changes to CPU topology across VM mobility scenarios
The OS and applications in a VM may see changes to both host and VM settings before and after VM lifecycle
events such as live migration or save and restore operations. During an operation in which VM state is saved and
restored, both the VM's HwThreadCountPerCore setting and the realized value (that is, the computed combination
of the VM's setting and source host’s configuration) are migrated. The VM will continue running with these
settings on the destination host. At the point the VM is shut down and restarted, it’s possible that the realized value
observed by the VM will change. This should be benign, as OS and application layer software should look for CPU
topology information as part of their normal startup and initialization code flows. However, because these boot
time initialization sequences are skipped during live migration or save/restore operations, VMs that undergo these
state transitions could observe the originally computed realized value until they are shut down and re-started.
Alerts regarding non-optimal VM configurations
Virtual machines configured with more VPs than there are physical cores on the host result in a non-optimal
configuration. The hypervisor scheduler will treat these VMs as if they are SMT-aware. However, OS and
application software in the VM will be presented a CPU topology showing SMT is disabled. When this condition is
detected, the Hyper-V Worker Process will log an event on the virtualization host warning the host administrator
that the VM's configuration is non-optimal, and recommending SMT be enabled for the VM.
How to identify non-optimally configured VMs
You can identify non-SMT VMs by examining the System Log in Event Viewer for Hyper-V Worker Process event
ID 3498, which will be triggered for a VM whenever the number of VPs in the VM is greater than the physical core
count. Worker process events can be obtained from Event Viewer, or via PowerShell.
Querying the Hyper-V worker process VM event using PowerShell
To query for Hyper-V worker process event ID 3498 using PowerShell, enter the following commands from a
PowerShell prompt.
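A query along these lines works (the log and provider names are assumptions based on where Hyper-V worker events are normally written):

Get-WinEvent -FilterHashTable @{LogName="System"; ProviderName="Microsoft-Windows-Hyper-V-Worker"; ID=3498} -MaxEvents 1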
Impacts of guest SMT configuration on the use of hypervisor enlightenments for guest operating systems
The Microsoft hypervisor offers multiple enlightenments, or hints, which the OS running in a guest VM may query
and use to trigger optimizations, such as those that might benefit performance or otherwise improve handling of
various conditions when running virtualized. One recently introduced enlightenment concerns the handling of
virtual processor scheduling and the use of OS mitigations for side-channel attacks that exploit SMT.
NOTE
Microsoft recommends that host administrators enable SMT for guest VMs to optimize workload performance.
The details of this guest enlightenment are provided below, however the key takeaway for virtualization host
administrators is that virtual machines should have HwThreadCountPerCore configured to match the host’s
physical SMT configuration. This allows the hypervisor to report that there is no non-architectural core sharing, so
any optimizations that require the enlightenment may be enabled in guest operating systems that support them. On Windows
Server 2019, create new VMs and leave the default value of HwThreadCountPerCore (0). Older VMs migrated
from Windows Server 2016 hosts can be updated to the Windows Server 2019 configuration version. After doing
so, Microsoft recommends setting HwThreadCountPerCore = 0. On Windows Server 2016, Microsoft
recommends setting HwThreadCountPerCore to match the host configuration (typically 2).
NoNonArchitecturalCoreSharing enlightenment details
Starting in Windows Server 2016, the hypervisor defines a new enlightenment to describe its handling of VP
scheduling and placement to the guest OS. This enlightenment is defined in the Hypervisor Top Level Functional
Specification v5.0c.
Hypervisor synthetic CPUID leaf CPUID.0x40000004.EAX:18[NoNonArchitecturalCoreSharing = 1] indicates that
a virtual processor will never share a physical core with another virtual processor, except for virtual processors that
are reported as sibling SMT threads. For example, a guest VP will never run on an SMT thread alongside a root VP
running simultaneously on a sibling SMT thread on the same processor core. This condition is only possible when
running virtualized, and so represents a non-architectural SMT behavior that also has serious security implications.
The guest OS can use NoNonArchitecturalCoreSharing = 1 as an indication that it is safe to enable optimizations,
which may help it avoid the performance overhead of setting STIBP.
In certain configurations, the hypervisor will not indicate that NoNonArchitecturalCoreSharing = 1. As an example,
if a host has SMT enabled and is configured to use the hypervisor classic scheduler,
NoNonArchitecturalCoreSharing will be 0. This may prevent enlightened guests from enabling certain
optimizations. Therefore, Microsoft recommends that host administrators using SMT rely on the hypervisor core
scheduler and ensure that virtual machines are configured to inherit their SMT configuration from the host to
ensure optimal workload performance.
Summary
The security threat landscape continues to evolve. To ensure our customers are secure by default, Microsoft is
changing the default configuration for the hypervisor and virtual machines starting in Windows Server 2019
Hyper-V, and providing updated guidance and recommendations for customers running Windows Server 2016
Hyper-V. Virtualization host administrators should:
Read and understand the guidance provided in this document
Carefully evaluate and adjust their virtualization deployments to ensure they meet the security, performance,
virtualization density, and workload responsiveness goals for their unique requirements
Consider re-configuring existing Windows Server 2016 Hyper-V hosts to leverage the strong security
benefits offered by the hypervisor core scheduler
Update existing non-SMT VMs to reduce the performance impacts from scheduling constraints imposed by
VP isolation that addresses hardware security vulnerabilities
6/7/2019 • 9 minutes to read • Edit Online
Applies To: Windows 10, Windows Server 2016, Windows Server 2019
IMPORTANT
Each service you want to use must be enabled in both the host and guest in order to function. All integration services except
"Hyper-V Guest Service Interface" are on by default on Windows guest operating systems. The services can be turned on and
off individually. The next sections show you how.
Earlier guest operating systems will not have all available services. For example, Windows Server 2008 R2 guests
cannot have the "Hyper-V Guest Service Interface".
IMPORTANT
Stopping an integration service may severely affect the host's ability to manage your virtual machine. To work correctly, each
integration service you want to use must be enabled on both the host and guest. As a best practice, you should only control
integration services from Hyper-V using the instructions above. The matching service in the guest operating system will stop
or start automatically when you change its status in Hyper-V. If you start a service in the guest operating system but it is
disabled in Hyper-V, the service will stop. If you stop a service in the guest operating system that is enabled in Hyper-V,
Hyper-V will eventually start it again. If you disable the service in the guest, Hyper-V will be unable to start it.
Use Windows Services to start or stop an integration service within a Windows guest
1. Open Services manager by running services.msc as an Administrator or by double-clicking the Services
icon in Control Panel.
2. Find the services that start with "Hyper-V".
3. Right-click the service you want start or stop. Click the desired action.
Use Windows PowerShell to start or stop an integration service within a Windows guest
1. To get a list of integration services, run:
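Integration services appear in the guest as Windows services whose names start with "vmic", so a simple filter works; a sketch:

Get-Service -Name vm*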
3. Run either Start-Service or Stop-Service. For example, to turn off Windows PowerShell Direct, run:
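For example, assuming the guest service name for PowerShell Direct is vmicvmsession (verify the name against the list from the first step):

Stop-Service -Name vmicvmsession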
3. To find out if the required daemons are running, use this command.
ps -ef | grep hv
compgen -c hv_
hv_vss_daemon
hv_get_dhcp_info
hv_get_dns_info
hv_set_ifconfig
hv_kvp_daemon
hv_fcopy_daemon
Integration service daemons that might be listed include the following. If any are missing, they might not be
supported on your system or they might not be installed. For details, see Supported Linux and FreeBSD
virtual machines for Hyper-V on Windows.
hv_vss_daemon: This daemon is required to create live Linux virtual machine backups.
hv_kvp_daemon: This daemon allows setting and querying intrinsic and extrinsic key value pairs.
hv_fcopy_daemon: This daemon implements a file copying service between the host and guest.
Examples
These examples demonstrate stopping and starting the KVP daemon, named hv_kvp_daemon .
1. Use the process ID (PID) to stop the daemon's process. To find the PID, look at the second column of the
output, or use pidof. Hyper-V daemons run as root, so you'll need root permissions.
ps -ef | grep hv
3. To start the daemon again, run the daemon as root:
sudo hv_kvp_daemon
4. To verify that the hv_kvp_daemon process is listed with a new process ID, run:
ps -ef | grep hv
NOTE
The image file vmguest.iso isn't included with Hyper-V on Windows 10 because it's no longer needed.
GUEST                                UPDATE MECHANISM            NOTES
Windows Vista (SP 2)                 Windows Update              Requires the Data Exchange integration service.*
Windows Server 2012                  Windows Update              Requires the Data Exchange integration service.*
Windows Server 2008 R2 (SP 1)        Windows Update              Requires the Data Exchange integration service.*
Windows Server 2008 (SP 2)           Windows Update              Extended support only in Windows Server 2016 (read more).
Windows Home Server 2011             Windows Update              Will not be supported in Windows Server 2016 (read more).
Windows Small Business Server 2011   Windows Update              Not under mainstream support (read more).
Linux guests                         Package manager             Integration services for Linux are built into the distro, but there may be optional updates available.**
* If the Data Exchange integration service can't be enabled, the integration services for these guests are available
from the Download Center as a cabinet (cab) file. Instructions for applying a cab are available in this blog post.
For virtual machines running on Windows 8.1 hosts:
GUEST                                UPDATE MECHANISM            NOTES
Windows Server 2008 (SP 2)           Integration Services disk   See instructions, below.
Windows Home Server 2011             Integration Services disk   See instructions, below.
Windows Small Business Server 2011   Integration Services disk   See instructions, below.
Windows Server 2003 R2 (SP 2)        Integration Services disk   See instructions, below.
Windows Server 2003 (SP 2)           Integration Services disk   See instructions, below.
Linux guests                         Package manager             Integration services for Linux are built into the distro, but there may be optional updates available.**
For more details about Linux guests, see Supported Linux and FreeBSD virtual machines for Hyper-V on Windows.
Applies To: Windows 10, Windows Server 2016, Windows Server 2019
You can use PowerShell Direct to remotely manage a Windows 10, Windows Server 2016, or Windows Server
2019 virtual machine from a Windows 10, Windows Server 2016, or Windows Server 2019 Hyper-V host.
PowerShell Direct allows Windows PowerShell management inside a virtual machine regardless of the network
configuration or remote management settings on either the Hyper-V host or the virtual machine. This makes it
easier for Hyper-V Administrators to automate and script virtual machine management and configuration.
There are two ways to run PowerShell Direct:
Create and exit a PowerShell Direct session using PSSession cmdlets
Run script or command with the Invoke-Command cmdlet
If you're managing older virtual machines, use Virtual Machine Connection (VMConnect) or configure a virtual
network for the virtual machine.
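As a minimal sketch (the VM name is a placeholder and you'll be prompted for guest credentials), an interactive session and a one-off command look like this:

# Interactive session inside the guest
Enter-PSSession -VMName <VMName> -Credential (Get-Credential)
# ...run commands in the guest, then close the session...
Exit-PSSession

# Run a single script block in the guest
Invoke-Command -VMName <VMName> -Credential (Get-Credential) -ScriptBlock { Get-Process }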
When you're finished with an interactive session, run Exit-PSSession to close it.
See Also
Enter-PSSession
Exit-PSSession
Invoke-Command
Set up Hyper-V Replica
6/20/2019 • 11 minutes to read • Edit Online
Hyper-V Replica is an integral part of the Hyper-V role. It contributes to your disaster recovery strategy by
replicating virtual machines from one Hyper-V host server to another to keep your workloads available. Hyper-V
Replica creates a copy of a live virtual machine to a replica offline virtual machine. Note the following:
Hyper-V hosts: Primary and secondary host servers can be physically co-located or in separate
geographical locations with replication over a WAN link. Hyper-V hosts can be standalone, clustered, or a
mixture of both. There's no Active Directory dependency between the servers and they don't need to be
domain members.
Replication and change tracking: When you enable Hyper-V Replica for a specific virtual machine, initial
replication creates an identical replica virtual machine on a secondary host server. After that happens,
Hyper-V Replica change tracking creates and maintains a log file that captures changes on a virtual machine
VHD. The log file is played in reverse order to the replica VHD based on replication frequency settings. This
means that the latest changes are stored and replicated asynchronously. Replication can be over HTTP or
HTTPS.
Extended (chained) replication: This lets you replicate a virtual machine from a primary host to a
secondary host, and then replicate the secondary host to a third host. Note that you can't replicate from the
primary host directly to the second and the third.
This feature makes Hyper-V Replica more robust for disaster recovery because if an outage occurs you can
recover from both the primary and extended replica. You can fail over to the extended replica if your primary
and secondary locations go down. Note that the extended replica doesn't support application-consistent
replication and must use the same VHDs that the secondary replica is using.
Failover: If an outage occurs in your primary (or secondary in case of extended) location you can manually
initiate a test, planned, or unplanned failover.
When should I run?
  Test: Verify that a virtual machine can fail over and start in the secondary site.
  Planned: During planned downtime and outages.
  Unplanned: During unexpected events.
Where is it initiated?
  Test: On the replica virtual machine.
  Planned: Initiated on the primary and completed on the secondary.
  Unplanned: On the replica virtual machine.
How often should I run?
  Test: We recommend once a month for testing.
  Planned: Once every six months, or in accordance with compliance requirements.
  Unplanned: Only in case of disaster, when the primary virtual machine is unavailable.
Is there any data loss?
  Test: None.
  Planned: None. After failover, Hyper-V Replica replicates the last set of tracked changes back to the primary to ensure zero data loss.
  Unplanned: Depends on the event and recovery points.
Is there any downtime?
  Test: None. It doesn't impact your production environment. It creates a duplicate test virtual machine during failover. After the test finishes, you select Stop Test Failover on the replica virtual machine and the test virtual machine is automatically cleaned up and deleted.
  Planned: The duration of the planned outage.
  Unplanned: The duration of the unplanned outage.
Recovery points: When you configure replication settings for a virtual machine, you specify the recovery
points you want to store from it. Recovery points represent a snapshot in time from which you can recover a
virtual machine. Obviously less data is lost if you recover from a very recent recovery point. You can access
recovery points up to 24 hours ago.
Deployment prerequisites
Here's what you should verify before you begin:
Figure out which VHDs need to be replicated. In particular, VHDs that contain data that is rapidly
changing and not used by the Replica server after failover, such as page file disks, should be excluded from
replication to conserve network bandwidth. Make a note of which VHDs can be excluded.
Decide how often you need to synchronize data: The data on the Replica server is synchronized and
updated according to the replication frequency you configure (30 seconds, 5 minutes, or 15 minutes). The
frequency you choose should consider the following: Are the virtual machines running critical data with a
low RPO? What are your bandwidth considerations? Your highly critical virtual machines will obviously need
more frequent replication.
Decide how to recover data: By default Hyper-V Replica only stores a single recovery point that will be
the latest replication sent from the primary to the secondary. However if you want the option to recover data
to an earlier point in time you can specify that additional recovery points should be stored (to a maximum of
24 hourly points). If you do need additional recovery points you should note that this requires more
overhead on processing and storage resources.
Figure out which workloads you'll replicate: Standard Hyper-V Replica replication maintains consistency
in the state of the virtual machine operating system after a failover, but not the state of applications that are
running on the virtual machine. If you want to be able to recover your workload state, you can create app-
consistent recovery points. Note that app-consistent recovery isn't available on the extended replica site if
you're using extended (chained) replication.
Decide how to do the initial replication of virtual machine data: Replication starts by transferring the
current state of the virtual machines. This initial state can be transmitted directly over
the existing network, either immediately or at a later time that you configure. You can also use a pre-existing
restored virtual machine (for example, if you have restored an earlier backup of the virtual machine on the
Replica server) as the initial copy. Or, you can save network bandwidth by copying the initial copy to external
media and then physically delivering the media to the Replica site. If you want to use a pre-existing virtual
machine, delete all previous snapshots associated with it.
Deployment steps
Step 1: Set up the Hyper-V hosts
You'll need at least two Hyper-V hosts with one or more virtual machines on each server. Get started with Hyper-V.
The host server that you'll replicate virtual machines to will need to be set up as the replica server.
1. In the Hyper-V settings for the server you'll replicate virtual machines to, in Replication Configuration,
select Enable this computer as a Replica server.
2. You can replicate over HTTP or encrypted HTTPS. Select Use Kerberos (HTTP) or Use certificate-based
Authentication (HTTPS). By default, HTTP 80 and HTTPS 443 are enabled as firewall exceptions on the
replica Hyper-V server. If you change the default port settings you'll need to also change the firewall
exception. If you're replicating over HTTPS, you'll need to select a certificate and you should have certificate
authentication set up.
3. For authorization, select Allow replication from any authenticated server to allow the replica server to
accept virtual machine replication traffic from any primary server that authenticates successfully. Select
Allow replication from the specified servers to accept traffic only from the primary servers you
specifically select.
For both options you can specify where the replicated VHDs should be stored on the replica Hyper-V server.
4. Click OK or Apply.
Step 2: Set up the firewall
To allow replication between the primary and secondary servers, traffic must get through the Windows firewall (or
any other third-party firewalls). When you installed the Hyper-V role on the servers, exceptions for HTTP
(80) and HTTPS (443) were created by default. If you're using these standard ports, you'll just need to enable the rules:
To enable the rules on a standalone host server:
1. Open Windows Firewall with Advanced Security and click Inbound Rules.
2. To enable HTTP (Kerberos) authentication, right-click Hyper-V Replica HTTP Listener (TCP-In) >
Enable Rule. To enable HTTPS certificate-based authentication, right-click Hyper-V Replica
HTTPS Listener (TCP-In) > Enable Rule.
To enable rules on a Hyper-V cluster, open a Windows PowerShell session using Run as Administrator,
then run one of these commands:
For HTTP:
get-clusternode | ForEach-Object {Invoke-Command -ComputerName $_.Name -ScriptBlock {Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"}}
For HTTPS:
get-clusternode | ForEach-Object {Invoke-Command -ComputerName $_.Name -ScriptBlock {Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"}}
Run a failover
After completing these deployment steps your replicated environment is up and running. Now you can run
failovers as needed.
Test failover: If you want to run a test failover right-click the primary virtual machine and select Replication >
Test Failover. Pick the latest or other recovery point if configured. A new test virtual machine will be created and
started on the secondary site. After you've finished testing, select Stop Test Failover on the replica virtual machine
to clean it up. Note that for a virtual machine you can only run one test failover at a time. Read more.
Planned failover: To run a planned failover right-click the primary virtual machine and select Replication >
Planned Failover. Planned failover performs prerequisite checks to ensure zero data loss. It checks that the
primary virtual machine is shut down before beginning the failover. After the virtual machine is failed over, it starts
replicating the changes back to the primary site when it's available. Note that for this to work, the primary server
should be configured to receive replication from the secondary server, or from the Hyper-V Replica Broker in the
case of a primary cluster. Planned failover sends the last set of tracked changes. Read more.
Unplanned failover: To run an unplanned failover, right-click on the replica virtual machine and select
Replication > Unplanned Failover from Hyper-V Manager or Failover Clustering Manager. You can recover
from the latest recovery point or from previous recovery points if this option is enabled. After failover, check that
everything is working as expected on the failed over virtual machine, then click Complete on the replica virtual
machine. Read more.
Live Migration Overview
3/12/2019 • 2 minutes to read • Edit Online
Live migration is a Hyper-V feature in Windows Server. It allows you to transparently move running Virtual
Machines from one Hyper-V host to another without perceived downtime. The primary benefit of live migration is
flexibility; running Virtual Machines are not tied to a single host machine. This allows actions like draining a specific
host of Virtual Machines before decommissioning or upgrading it. When paired with Windows Failover Clustering,
live migration allows the creation of highly available and fault tolerant systems.
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016, Windows Server 2019, Microsoft Hyper-V
Server 2019
This article shows you how to set up hosts that aren't clustered so you can do live migrations between them. Use
these instructions if you didn't set up live migration when you installed Hyper-V, or if you want to change the
settings. To set up clustered hosts, use tools for Failover Clustering.
Step 2: Set up the source and destination computers for live migration
This step includes choosing options for authentication and networking. As a security best practice, we recommend
that you select specific networks to use for live migration traffic, as discussed above. This step also shows you how
to choose the performance option.
Use Hyper-V Manager to set up the source and destination computers for live migration
1. Open Hyper-V Manager. (From Server Manager, click Tools > Hyper-V Manager.)
2. In the navigation pane, select one of the servers. (If it isn't listed, right-click Hyper-V Manager, click
Connect to Server, type the server name, and click OK. Repeat to add more servers.)
3. In the Action pane, click Hyper-V Settings > Live Migrations.
4. In the Live Migrations pane, check Enable incoming and outgoing live migrations.
5. Under Simultaneous live migrations, specify a different number if you don't want to use the default of 2.
6. Under Incoming live migrations, if you want to use specific network connections to accept live migration
traffic, click Add to type the IP address information. Otherwise, click Use any available network for live
migration. Click OK.
7. To choose Kerberos and performance options, expand Live Migrations and then select Advanced
Features.
If you have configured constrained delegation, under Authentication protocol, select Kerberos.
Under Performance options, review the details and choose a different option if it's appropriate for your
environment.
8. Click OK.
9. Select the other server in Hyper-V Manager and repeat the steps.
Use Windows PowerShell to set up the source and destination computers for live migration
Three cmdlets are available for configuring live migration on non-clustered hosts: Enable-VMMigration,
Set-VMMigrationNetwork, and Set-VMHost. This example uses all three and does the following:
Configures live migration on the local host
Allows incoming migration traffic only on a specific network
Chooses Kerberos as the authentication protocol
Each line represents a separate command.
PS C:\> Enable-VMMigration
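# The remaining two commands of this example are sketched below; the network address and
# authentication type are placeholder values to adapt to your environment.
PS C:\> Set-VMMigrationNetwork 192.168.10.1
PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos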
Set-VMHost also lets you choose a performance option (and many other host settings). For example, to choose
SMB but leave the authentication protocol set to the default of CredSSP, type:
PS C:\> Set-VMHost -VirtualMachineMigrationPerformanceOption SMBTransport
Next steps
After you set up the hosts, you're ready to do a live migration. For instructions, see Use live migration without
Failover Clustering to move a virtual machine.
Use live migration without Failover Clustering to
move a virtual machine
6/7/2019 • 2 minutes to read • Edit Online
This article shows you how to move a virtual machine by doing a live migration without using Failover Clustering.
A live migration moves running virtual machines between Hyper-V hosts without any noticeable downtime.
To be able to do this, you'll need:
A user account that's a member of the local Hyper-V Administrators group or the Administrators group on
both the source and destination computers.
The Hyper-V role in Windows Server 2016 or Windows Server 2012 R2 installed on the source and
destination servers and set up for live migrations. You can do a live migration between hosts running
Windows Server 2016 and Windows Server 2012 R2 if the virtual machine is at least version 5.
For version upgrade instructions, see Upgrade virtual machine version in Hyper-V on Windows 10 or
Windows Server 2016. For installation instructions, see Set up hosts for live migration.
The Hyper-V management tools installed on a computer running Windows Server 2016 or Windows 10,
unless the tools are installed on the source or destination server and you'll run them from there.
Troubleshooting
Failed to establish a connection
If you haven't set up constrained delegation, you must sign in to the source server before you can move a virtual
machine. If you don't do this, the authentication attempt fails, an error occurs, and this message is displayed:
"Virtual machine migration operation failed at migration Source.
Failed to establish a connection with host computer name: No credentials are available in the security package
0x8009030E."
To fix this problem, sign in to the source server and try the move again. To avoid having to sign in to a source
server before doing a live migration, set up constrained delegation. You'll need domain administrator credentials to
set up constrained delegation. For instructions, see Set up hosts for live migration.
Failed because the host hardware isn't compatible
If a virtual machine doesn't have processor compatibility turned on and has one or more snapshots, the move fails
if the hosts have different processor versions. An error occurs and this message is displayed:
The virtual machine cannot be moved to the destination computer. The hardware on the destination
computer is not compatible with the hardware requirements of this virtual machine.
To fix this problem, shut down the virtual machine and turn on the processor compatibility setting.
1. From Hyper-V Manager, in the Virtual Machines pane, right-click the virtual machine and click Settings.
2. In the navigation pane, expand Processors and click Compatibility.
3. Check Migrate to a computer with a different processor version.
4. Click OK.
To use Windows PowerShell, use the Set-VMProcessor cmdlet:
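A sketch of the equivalent command (the VM name is a placeholder):

Set-VMProcessor -VMName <VMName> -CompatibilityForMigrationEnabled $true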
Hyper-V is the virtualization server role in Windows Server 2016. Virtualization servers can host multiple virtual
machines that are isolated from each other but share the underlying hardware resources by virtualizing the
processors, memory, and I/O devices. By consolidating servers onto a single machine, virtualization can improve
resource usage and energy efficiency and reduce the operational and maintenance costs of servers. In addition,
virtual machines and the management APIs offer more flexibility for managing resources, balancing load, and
provisioning systems.
See also
Hyper-V terminology
Hyper-V architecture
Hyper-V server configuration
Hyper-V processor performance
Hyper-V memory performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Hyper-V Virtual Switch
3/12/2019 • 4 minutes to read • Edit Online
This topic provides an overview of Hyper-V Virtual Switch, which provides you with the ability to connect virtual
machines (VMs) to networks that are external to the Hyper-V host, including your organization's intranet and the
Internet.
You can also connect to virtual networks on the server that is running Hyper-V when you deploy Software Defined
Networking (SDN ).
NOTE
In addition to this topic, the following Hyper-V Virtual Switch documentation is available.
Manage Hyper-V Virtual Switch
Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)
Network Switch Team Cmdlets in Windows PowerShell
What's New in VMM 2016
Set up the VMM networking fabric
Create networks with VMM 2012
Hyper-V: Configure VLANs and VLAN Tagging
Hyper-V: The WFP virtual switch extension should be enabled if it is required by third party extensions
For more information about other networking technologies, see Networking in Windows Server 2016.
Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available in Hyper-V Manager
when you install the Hyper-V server role.
Hyper-V Virtual Switch includes programmatically managed and extensible capabilities to connect VMs to both
virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for
security, isolation, and service levels.
NOTE
Hyper-V Virtual Switch only supports Ethernet, and does not support any other wired local area network (LAN) technologies,
such as Infiniband and Fibre Channel.
Hyper-V Virtual Switch includes tenant isolation capabilities, traffic shaping, protection against malicious virtual
machines, and simplified troubleshooting.
With built-in support for Network Device Interface Specification (NDIS ) filter drivers and Windows Filtering
Platform (WFP ) callout drivers, the Hyper-V Virtual Switch enables independent software vendors (ISVs) to create
extensible plug-ins, called Virtual Switch Extensions, that can provide enhanced networking and security
capabilities. Virtual Switch Extensions that you add to the Hyper-V Virtual Switch are listed in the Virtual Switch
Manager feature of Hyper-V Manager.
In the following illustration, a VM has a virtual NIC that is connected to the Hyper-V Virtual Switch through a
switch port.
Hyper-V Virtual Switch capabilities provide you with more options for enforcing tenant isolation, shaping and
controlling network traffic, and employing protective measures against malicious VMs.
NOTE
In Windows Server 2016, a VM with a virtual NIC accurately displays the maximum throughput for the virtual NIC. To view
the virtual NIC speed in Network Connections, right-click the desired virtual NIC icon and then click Status. The virtual NIC
Status dialog box opens. In Connection, the value of Speed matches the speed of the physical NIC installed in the server.
This topic provides information on configuring Remote Direct Memory Access (RDMA) interfaces with Hyper-V in
Windows Server 2016, in addition to information about Switch Embedded Teaming (SET).
NOTE
In addition to this topic, the following Switch Embedded Teaming content is available.
TechNet Gallery Download: Windows Server 2016 NIC and Switch Embedded Teaming User Guide
TIP
In editions of Windows Server previous to Windows Server 2016, it is not possible to configure RDMA on network adapters
that are bound to a NIC Team or to a Hyper-V Virtual Switch. In Windows Server 2016, you can enable RDMA on network
adapters that are bound to a Hyper-V Virtual Switch with or without SET.
In Windows Server 2016, you can use fewer network adapters while using RDMA with or without SET.
The image below illustrates the software architecture changes between Windows Server 2012 R2 and Windows
Server 2016.
The following sections provide instructions on how to use Windows PowerShell commands to enable Data Center
Bridging (DCB ), create a Hyper-V Virtual Switch with an RDMA virtual NIC (vNIC ), and create a Hyper-V Virtual
Switch with SET and RDMA vNICs.
Enable Data Center Bridging (DCB )
Before using any RDMA over Converged Ethernet (RoCE ) version of RDMA, you must enable DCB. While not
required for Internet Wide Area RDMA Protocol (iWARP ) networks, testing has determined that all Ethernet-based
RDMA technologies work better with DCB. Because of this, you should consider using DCB even for iWARP
RDMA deployments.
The following Windows PowerShell example commands demonstrate how to enable and configure DCB for SMB
Direct.
Turn on DCB
Install-WindowsFeature Data-Center-Bridging
Enable-NetQosFlowControl -Priority 3
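A typical DCB configuration for SMB Direct also tags SMB traffic and reserves bandwidth for it. The following additional commands are a sketch of that pattern; the policy name, bandwidth percentage, and adapter name are assumptions to adapt to your environment.

New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT 2"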
If you have a kernel debugger installed in the system, you must configure the debugger to allow QoS to be set by
running the following command.
Override the Debugger - by default the debugger blocks NetQos:
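The override is a registry value; a sketch follows (the key path and value name reflect the NDIS QoS debugger block and should be verified for your build):

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\NDIS\Parameters" -Name AllowFlowControlUnderDebugger -Type DWord -Value 1 -Force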
NOTE
Using SET teams with RDMA-capable physical NICs provides more RDMA resources for the vNICs to consume.
New-VMSwitch -Name RDMAswitch -NetAdapterName "SLOT 2"
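# Sketch of the usual follow-up steps (the vNIC name SMB_1 is a placeholder): add a host
# virtual NIC to the switch and enable RDMA on it.
Add-VMNetworkAdapter -SwitchName "RDMAswitch" -Name SMB_1 -ManagementOS
Enable-NetAdapterRDMA "vEthernet (SMB_1)"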
Get-NetAdapterRdma
Many switches won't pass traffic class information on untagged VLAN traffic, so make sure that the host adapters
for RDMA are on VLANs. This example assigns the two SMB_* host virtual adapters to VLAN 42.
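A sketch of that assignment (the vNIC names are placeholders matching the example above):

Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_1" -VlanId 42 -Access -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_2" -VlanId 42 -Access -ManagementOS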
Get-NetAdapterRdma | fl *
SET Overview
SET is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the
Software Defined Networking (SDN ) stack in Windows Server 2016. SET integrates some NIC Teaming
functionality into the Hyper-V Virtual Switch.
SET allows you to group between one and eight physical Ethernet network adapters into one or more software-
based virtual network adapters. These virtual network adapters provide fast performance and fault tolerance in the
event of a network adapter failure.
SET member network adapters must all be installed in the same physical Hyper-V host to be placed in a team.
NOTE
The use of SET is only supported in Hyper-V Virtual Switch in Windows Server 2016. You cannot deploy SET in Windows
Server 2012 R2 .
You can connect your teamed NICs to the same physical switch or to different physical switches. If you connect
NICs to different switches, both switches must be on the same subnet.
The following illustration depicts SET architecture.
Because SET is integrated into the Hyper-V Virtual Switch, you cannot use SET inside of a virtual machine (VM ).
You can, however use NIC Teaming within VMs.
For more information, see NIC Teaming in Virtual Machines (VMs).
In addition, SET architecture does not expose team interfaces. Instead, you must configure Hyper-V Virtual Switch
ports.
SET Availability
SET is available in all versions of Windows Server 2016 that include Hyper-V and the SDN stack. In addition, you
can use Windows PowerShell commands and Remote Desktop connections to manage SET from remote
computers that are running a client operating system upon which the tools are supported.
NOTE
When you use SET in conjunction with Packet Direct, the teaming mode Switch Independent and the load balancing mode
Hyper-V Port are required.
Because the adjacent switch always sees a particular MAC address on a given port, the switch distributes the
ingress load (the traffic from the switch to the host) to the port where the MAC address is located. This is
particularly useful when Virtual Machine Queues (VMQs) are used, because a queue can be placed on the specific
NIC where the traffic is expected to arrive.
However, if the host has only a few VMs, this mode might not be granular enough to achieve a well-balanced
distribution. This mode will also always limit a single VM (i.e., the traffic from a single switch port) to the bandwidth
that is available on a single interface.
Dynamic
This load balancing mode provides the following advantages.
Outbound loads are distributed based on a hash of the TCP Ports and IP addresses. Dynamic mode also re-
balances loads in real time so that a given outbound flow can move back and forth between SET team
members.
Inbound loads are distributed in the same manner as the Hyper-V port mode.
The outbound loads in this mode are dynamically balanced based on the concept of flowlets. Just as human speech
has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have naturally
occurring breaks. The portion of a TCP flow between two such breaks is referred to as a flowlet.
When the dynamic mode algorithm detects that a flowlet boundary has been encountered - for example when a
break of sufficient length has occurred in the TCP flow - the algorithm automatically rebalances the flow to another
team member if appropriate. In some uncommon circumstances, the algorithm might also periodically rebalance
flows that do not contain any flowlets. Because of this, the affinity between TCP flow and team member can change
at any time as the dynamic balancing algorithm works to balance the workload of the team members.
NOTE
SET always presents the total number of queues that are available across all SET team members. In NIC Teaming, this is called
Sum-of-Queues mode.
Most network adapters have queues that can be used for either Receive Side Scaling (RSS ) or VMQ, but not both
at the same time.
Some VMQ settings appear to be settings for RSS queues but are really settings on the generic queues that both
RSS and VMQ use depending on which feature is presently in use. Each NIC has, in its advanced properties, values
for *RssBaseProcNumber and *MaxRssProcessors .
Following are a few VMQ settings that provide better system performance.
Ideally each NIC should have the *RssBaseProcNumber set to an even number greater than or equal to two (2).
This is because the first physical processor, Core 0 (logical processors 0 and 1), typically does most of the system
processing so the network processing should be steered away from this physical processor.
NOTE
Some machine architectures don't have two logical processors per physical processor, so for such machines the base processor
should be greater than or equal to 1. If in doubt, assume your host is using a 2 logical processor per physical processor
architecture.
The team members' processors should be, to the extent that it's practical, non-overlapping. For example, in a 4-
core host (8 logical processors) with a team of 2 10Gbps NICs, you could set the first one to use base processor
of 2 and to use 4 cores; the second would be set to use base processor 6 and use 2 cores.
If you want to create a SET-capable switch with a single team member so that you can add a team member at a
later time, then you must use the EnableEmbeddedTeaming parameter.
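For example, a sketch of creating such a switch with a single member (the switch and adapter names are placeholders):

New-VMSwitch -Name SETvSwitch -NetAdapterName "NIC 1" -EnableEmbeddedTeaming $true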
To remove the SET switch, run:
Remove-VMSwitch "SETvSwitch"
This topic is examined in more depth in section 4.2.5 of the Windows Server 2016 NIC and Switch Embedded
Teaming User Guide.
Manage Hyper-V Virtual Switch
3/12/2019 • 2 minutes to read • Edit Online
You can use this topic to access Hyper-V Virtual Switch management content.
This section contains the following topics.
Configure VLANs on Hyper-V Virtual Switch Ports
Create Security Policies with Extended Port Access Control Lists
Configure and View VLAN Settings on Hyper-V
Virtual Switch Ports
3/12/2019 • 2 minutes to read • Edit Online
You can use this topic to learn best practices for configuring and viewing virtual Local Area Network (VLAN )
settings on a Hyper-V Virtual Switch port.
When you want to configure VLAN settings on Hyper-V Virtual Switch ports, you can use either Windows® Server
2016 Hyper-V Manager or System Center Virtual Machine Manager (VMM ).
If you are using VMM, VMM uses the following Windows PowerShell command to configure the switch port.
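The command in question is Set-VMNetworkAdapterIsolation; a sketch with placeholder values (the exact parameters VMM passes may differ):

Set-VMNetworkAdapterIsolation -VMName <VMName> -IsolationMode Vlan -DefaultIsolationID <VlanId> -AllowUntaggedTraffic $true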
If you are not using VMM and are configuring the switch port in Windows Server, you can use the Hyper-V
Manager console or the following Windows PowerShell command.
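Here the relevant command is Set-VMNetworkAdapterVlan; a sketch with placeholder values:

Set-VMNetworkAdapterVlan -VMName <VMName> -Access -VlanId <VlanId>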
Because of these two methods for configuring VLAN settings on Hyper-V Virtual Switch ports, it is possible that
when you attempt to view the switch port settings, it appears to you that the VLAN settings are not configured -
even when they are configured.
Use the same method to configure and view switch port VLAN settings
To ensure that you do not encounter these issues, you must use the same method to view your switch port VLAN
settings that you used to configure the switch port VLAN settings.
To configure and view VLAN switch port settings, you must do the following:
If you are using VMM or Network Controller to set up and manage your network, and you have deployed
Software Defined Networking (SDN ), you must use the VMNetworkAdapterIsolation cmdlets.
If you are using Windows Server 2016 Hyper-V Manager or Windows PowerShell cmdlets, and you have not
deployed Software Defined Networking (SDN ), you must use the VMNetworkAdapterVlan cmdlets.
Possible issues
If you do not follow these guidelines you might encounter the following issues.
In circumstances where you have deployed SDN and use VMM, Network Controller, or the
VMNetworkAdapterIsolation cmdlets to configure VLAN settings on a Hyper-V Virtual Switch port: If you
use Hyper-V Manager or Get-VMNetworkAdapterVlan to view the configuration settings, the command
output will not display your VLAN settings. Instead you must use the Get-VMNetworkAdapterIsolation cmdlet to
view the VLAN settings.
In circumstances where you have not deployed SDN, and instead use Hyper-V Manager or the
VMNetworkAdapterVlan cmdlets to configure VLAN settings on a Hyper-V Virtual Switch port: If you use
the Get-VMNetworkAdapterIsolation cmdlet to view the configuration settings, the command output will not display
your VLAN settings. Instead you must use the Get-VMNetworkAdapterVlan cmdlet to view the VLAN
settings.
It is also important not to attempt to configure the same switch port VLAN settings by using both of these
configuration methods. If you do this, the switch port is incorrectly configured, and the result might be a failure in
network communication.
Resources
For more information on the Windows PowerShell commands that are mentioned in this topic, see the following:
Set-VmNetworkAdapterIsolation
Get-VmNetworkAdapterIsolation
Set-VMNetworkAdapterVlan
Get-VMNetworkAdapterVlan
Create Security Policies with Extended Port Access
Control Lists
3/12/2019 • 9 minutes to read • Edit Online
This topic provides information about extended port Access Control Lists (ACLs) in Windows Server 2016. You can
configure extended ACLs on the Hyper-V Virtual Switch to allow and block network traffic to and from the virtual
machines (VMs) that are connected to the switch via virtual network adapters.
This topic contains the following sections.
Detailed ACL rules
Stateful ACL rules
NOTE
If you want to add an extended ACL to one network adapter rather than all, you can specify the network adapter with
the parameter -VMNetworkAdapterName.
Add-VMNetworkAdapterExtendedAcl [-VMName] <string[]> [-Action] <VMNetworkAdapterExtendedAclAction> {Allow | Deny}
[-Direction] <VMNetworkAdapterExtendedAclDirection> {Inbound | Outbound} [[-LocalIPAddress] <string>]
[[-RemoteIPAddress] <string>] [[-LocalPort] <string>] [[-RemotePort] <string>] [[-Protocol] <string>] [-Weight] <int>
[-Stateful <bool>] [-IdleSessionTimeout <int>] [-IsolationID <int>] [-Passthru] [-VMNetworkAdapterName <string>]
[-ComputerName <string[]>] [-WhatIf] [-Confirm] [<CommonParameters>]
2. Add an extended ACL to a specific virtual network adapter on a specific VM. Syntax:
3. Add an extended ACL to all virtual network adapters that are reserved for use by the Hyper-V host
management operating system.
NOTE
If you want to add an extended ACL to one network adapter rather than all, you can specify the network adapter with
the parameter -VMNetworkAdapterName.
4. Add an extended ACL to a VM object that you have created in Windows PowerShell, such as $vm = get-vm
"my_vm". In the next line of code you can run this command to create an extended ACL with the following
syntax:
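A sketch of that pattern with an illustrative deny-all rule (the weight is an assumption, and the VM-object parameter set should be verified for your build):

$vm = Get-VM "my_vm"
$vm | Add-VMNetworkAdapterExtendedAcl -Action Deny -Direction Inbound -Weight 1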
NOTE
The values for the rule parameter Direction in the tables below are based on traffic flow to or from the VM for which you are
creating the rule. If the VM is receiving traffic, the traffic is inbound; if the VM is sending traffic, the traffic is outbound. For
example, if you apply a rule to a VM that blocks inbound traffic, the direction of inbound traffic is from external resources to
the VM. If you apply a rule that blocks outbound traffic, the direction of outbound traffic is from the local VM to external
resources.
Following are two examples of how you can create rules with Windows PowerShell commands. The first example
rule blocks all traffic to the VM named "ApplicationServer." The second example rule, which is applied to the
network adapter of the VM named "ApplicationServer," allows only inbound RDP traffic to the VM.
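Sketches of those two rules (the weight values are assumptions; see the note below about how weights order rule processing):

Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Deny" -Direction "Inbound" -Weight 1
Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Allow" -Direction "Inbound" -LocalPort 3389 -Protocol "TCP" -Weight 10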
NOTE
When you create rules, you can use the -Weight parameter to determine the order in which the Hyper-V Virtual Switch
processes the rules. Values for -Weight are expressed as integers; rules with a higher integer are processed before rules with
lower integers. For example, if you have applied two rules to a VM network adapter, one with a weight of 1 and one with a
weight of 10, the rule with the weight of 10 is applied first.
Following are examples of how you can create these rules with Windows PowerShell commands.
NOTE
IGMP has a designated IP protocol number of 0x02.
SOURCE IP   DESTINATION IP   PROTOCOL   SOURCE PORT   DESTINATION PORT   DIRECTION   ACTION
*           *                0x02       *             *                  In          Allow
Following is an example of how you can create these rules with Windows PowerShell commands.
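A sketch of an allow rule for IGMP like the one in the table (the VM name and weight are placeholders; confirm whether your build expects the protocol value as 2 or 0x02):

Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Allow" -Direction "Inbound" -Protocol 2 -Weight 20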
NOTE
Due to formatting restrictions and the amount of information in the table below, the information is displayed differently than
in previous tables in this document.
Source IP * * *
Destination IP * * *
Protocol * * TCP
Source Port * * *
Destination Port * * 80
Stateful No No Yes
The stateful rule allows the VM application server to connect to a remote Web server. When the first packet is sent
out, Hyper-V Virtual switch dynamically creates two flow states to allow all packets sent to and all returning packets
from the remote Web server. When the flow of packets between the servers stops, the flow states time out in the
designated timeout value of 3600 seconds, or one hour.
Following is an example of how you can create these rules with Windows PowerShell commands.
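A sketch of the stateful rule described above (the VM name and weight are placeholders; the timeout matches the one-hour value mentioned):

Add-VMNetworkAdapterExtendedAcl -VMName "ApplicationServer" -Action "Allow" -Direction "Outbound" -RemotePort 80 -Protocol "TCP" -Weight 100 -Stateful $true -IdleSessionTimeout 3600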