Intel Ethernet Adapters and Devices User Guide
Overview
Welcome to the User's Guide for Intel Ethernet Adapters and devices. This guide covers hardware and
software installation, setup procedures, and troubleshooting tips for Intel network adapters, connections, and
other devices.
Compatibility Notes
In order for an adapter based on the XL710 controller to reach its full potential, you must install it in a PCIe
Gen3 x8 slot. Installing it in a shorter slot, or a Gen2 or Gen1 slot, will limit the throughput of the adapter.
Some older Intel Ethernet Adapters do not have full software support for the most recent versions of
Microsoft Windows*. Many older Intel Ethernet Adapters have base drivers supplied by Microsoft Windows.
Lists of supported devices per OS are available at
http://www.intel.com/support/go/network/adapter/nicoscomp.htm
NOTE: Microsoft* Windows* 32-bit operating systems are only supported on Intel 1GbE Ethernet
Adapters and slower devices. All adapters support 32-bit versions of Linux* and FreeBSD*.
Basic software and drivers are supported on the following operating systems:
• DOS
• SunSoft* Solaris* (drivers and support are provided by the operating system vendor)
Advanced software and drivers are supported on the following operating systems:
• Microsoft* Windows* 7
• Microsoft Windows 8
• Microsoft Windows 8.1
• Microsoft Windows 10
• Microsoft Windows Server* 2008 R2
• Microsoft Windows Server 2012
• Microsoft Windows Server 2012 R2
• Microsoft Windows Server 2016
• Microsoft Windows Server 2016 Nano Server
• Linux*, v2.4 kernel or higher
• Red Hat* Linux*
• Novell* SUSE* Linux
• FreeBSD*
• VMware* ESXi* 5.5
• VMware ESXi 6.0
• VMware ESXi 6.5
Hardware Compatibility
Before installing the adapter, check your system for the following:
• The latest BIOS for your system
• One open PCI Express slot
NOTE: The Intel 10 Gigabit AT Server Adapter will only fit into x8 or larger PCI Express slots.
Some systems have physical x8 PCI Express slots that actually support lower speeds. Please
check your system manual to identify the slot.
Cabling Requirements
Intel Gigabit Adapters
NOTE: To ensure compliance with CISPR 24 and the EU's EN55024, devices based on the 82576
controller should be used only with CAT 5E shielded cables that are properly terminated according
to the recommendations in EN50174-2.
Copper Cables
• Maximum lengths for Intel 10 Gigabit Server Adapters and Connections that use 10GBASE-T on Category 6, Category 6a, or Category 7 wiring, twisted 4-pair copper:
  • Maximum length for Category 6 is 55 meters.
  • Maximum length for Category 6a is 100 meters.
  • Maximum length for Category 7 is 100 meters.
  • To ensure compliance with CISPR 24 and the EU's EN55024, Intel 10 Gigabit Server Adapters and Connections should be used only with CAT 6a shielded cables that are properly terminated according to the recommendations in EN50174-2.
• 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
  • Length is 10 meters max.
Installation Overview
Installing the Adapter
1. Turn off the computer and unplug the power cord.
2. Remove the computer cover and the adapter slot cover from the slot that matches your adapter.
3. Insert the adapter edge connector into the slot and secure the bracket to the chassis.
4. Replace the computer cover, then plug in the power cord.
You must have administrative rights to the operating system to install the drivers.
1. Download the latest drivers from the support website and transfer them to the system.
2. If the Found New Hardware Wizard screen is displayed, click Cancel.
3. Start the autorun located in the downloaded software package. The autorun may automatically start after you have extracted the files.
4. Click Install Drivers and Software.
5. Follow the instructions in the install wizard.
Installing Linux* Drivers from Source Code
Optimizing Performance
You can configure Intel network adapter advanced settings to help optimize server performance.
The examples below provide guidance for three server usage models:
• Optimized for quick response and low latency: useful for video, audio, and High Performance Computing Cluster (HPCC) servers
• Optimized for throughput: useful for data backup/retrieval and file servers
• Optimized for CPU utilization: useful for application, web, mail, and database servers
NOTES:
• The recommendations below are guidelines and should be treated as such. Additional factors such as installed applications, bus type, network topology, and operating system also affect system performance.
• These adjustments should be performed by a highly skilled network administrator. They are not guaranteed to improve performance. Not all settings shown here may be available through network driver configuration, operating system or system BIOS. Linux users, see the README file in the Linux driver package for Linux-specific performance enhancement details.
• When using performance test software, refer to the documentation of the application for optimal results.
General Optimization
• Install the adapter in an appropriate slot.
NOTE: Some PCIe x8 slots are actually configured as x4 slots. These slots have insufficient bandwidth for full line rate with some dual port devices. The driver can detect this situation and will write the following message in the system log: "PCI-Express bandwidth available for this card is not sufficient for optimal performance. For optimal performance a x8 PCI-Express slot is required." If this error occurs, moving your adapter to a true x8 slot will resolve the issue.
• In order for an Intel X710/XL710 based Network Adapter to reach its full potential, you must install it in a PCIe Gen3 x8 slot. Installing it in a shorter slot, or a Gen2 or Gen1 slot, will impact the throughput the adapter can attain.
• Use the proper cabling for your device.
• Enable Jumbo Packets, if your other network components can also be configured for it.
• Increase the number of TCP and Socket resources from the default value. For Windows based systems, we have not identified system parameters other than the TCP Window Size which significantly impact performance.
• Increase the allocation size of Driver Resources (transmit/receive buffers). However, most TCP traffic patterns work best with the transmit buffer set to its default value, and the receive buffer set to its minimum value.
• When passing traffic on multiple network ports using an I/O application that runs on most or all of the cores in your system, consider setting the CPU Affinity for that application to fewer cores. This should reduce CPU utilization and in some cases may increase throughput for the device. The cores selected for CPU Affinity must be local to the affected network device's Processor Node/Group. You can use the PowerShell command Get-NetAdapterRSS to list the cores that are local to a device. You may need to increase the number of cores assigned to the application to maximize throughput. Refer to your operating system documentation for more details on setting the CPU Affinity.
• If you have multiple 10 Gbps (or faster) ports installed in a system, the RSS queues of each adapter port can be adjusted to use non-overlapping sets of processors within the adapter's local NUMA Node/Socket. Change the RSS Base Processor Number for each adapter port so that the combination of the base processor and the max number of RSS processors settings ensure non-overlapping cores, as shown in the steps and sketch below.
1. Identify the adapter ports to be adjusted and inspect their RssProcessorArray using the Get-NetAdapterRSS PowerShell cmdlet.
2. Identify the processors with NUMA distance 0. These are the cores in the adapter's local NUMA Node/Socket and will provide the best performance.
3. Adjust the RSS Base Processor Number on each port to use a non-overlapping set of processors within the local set of processors. You can do this manually or using the following PowerShell command:
Set-NetAdapterAdvancedProperty -Name <Adapter Name> -DisplayName "RSS Base Processor Number" -DisplayValue <RSS Base Proc Value>
4. Use the Get-NetAdapterAdvancedProperty cmdlet to check that the right values have been set:
Get-NetAdapterAdvancedProperty -Name <Adapter Name>
For example: for a 4 port adapter with local processors 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, and 'Max RSS processor' of 8, set the RSS base processors to 0, 8, 16 and 24.
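Scripted, that same assignment could look like the following minimal sketch (the port names are hypothetical placeholders; substitute the names Get-NetAdapter reports on your system):

# Spread each port's RSS base processor across the local socket in steps of 8
$ports = "SLOT 4 Port 1", "SLOT 4 Port 2", "SLOT 4 Port 3", "SLOT 4 Port 4"
$base = 0
foreach ($port in $ports) {
    Set-NetAdapterAdvancedProperty -Name $port -DisplayName "RSS Base Processor Number" -DisplayValue "$base"
    $base += 8   # the next port gets a non-overlapping set of cores
}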
Remote Storage
The remote storage features allow you to access a SAN or other networked storage using Ethernet protocols.
This includes Data Center Bridging (DCB), iSCSI over DCB, and Fibre Channel over Ethernet (FCoE).
NOTE: Support for new operating systems will not be added to FCoE. The last operating system versions that support FCoE are as follows:
• Microsoft* Windows Server* 2012 R2
• RHEL 7.2
• RHEL 6.7
• SLES 12 SP1
• SLES 11 SP4
• VMware* ESX 6.0
Jumbo Frames
The base driver supports FCoE mini-Jumbo Frames (2.5k bytes) independent of the LAN Jumbo Frames
setting.
FCoE VN to VN, also called VN2VN, is a standard for connecting two end-nodes (ENodes) directly using FCoE. An ENode can create a VN2VN virtual link with another remote ENode without connecting to FC or FCoE switches (FCFs) in between, so neither port zoning nor advanced fibre channel services are required. The
storage software controls access to, and security of, LUNs using LUN masking. The VN2VN fabric may have
a lossless Ethernet switch between the ENodes. This allows multiple ENodes to participate in creating more
than one VN2VN virtual link in the VN2VN fabric. VN2VN has two operational modes: Point to Point (PT2PT)
and Multipoint.
Remote Boot
Remote Boot allows you to boot a system using only an Ethernet adapter. You connect to a server that
contains an operating system image and use that to boot your local system.
Supported Devices
Intel Boot Agent supports all Intel 10 Gigabit Ethernet, 1 Gigabit Ethernet, and PRO/100 Ethernet Adapters.
Intel Ethernet iSCSI Boot
Intel Ethernet iSCSI Boot provides the capability to boot a client system from a remote iSCSI disk volume
located on an iSCSI-based Storage Area Network (SAN).
NOTE: Release 20.6 is the last release in which Intel Ethernet iSCSI Boot supports Intel Ethernet Desktop Adapters and Network Connections. Starting with Release 20.7, Intel Ethernet iSCSI Boot no longer supports Intel Ethernet Desktop Adapters and Network Connections.
There are two ways to navigate to the FCoE properties in Windows Device Manager: by using the "Data
Center" tab on the adapter property sheet or by using the Intel "Ethernet Virtual Storage Miniport Driver for
FCoE Storage Controllers" property sheet.
Virtualization Support
Virtualization makes it possible for one or more operating systems to run simultaneously on the same physical
system as virtual machines. This allows you to consolidate several servers onto one system, even if they are
running different operating systems. Intel Network Adapters work with, and within, virtual machines with
their standard drivers and software.
NOTES:
• Some virtualization options are not available on some adapter/operating system combinations.
• The jumbo frame setting inside a virtual machine must be the same, or lower than, the setting on the physical port.
• When you attach a Virtual Machine to a tenant overlay network through the Virtual NIC ports on a Virtual Switch, the encapsulation headers increase the Maximum Transmission Unit (MTU) size on the virtual port. The Encapsulation Overhead feature automatically adjusts the physical port's MTU size to compensate for this increase.
• See http://www.intel.com/technology/advanced_comm/virtualization.htm for more information on using Intel Network Adapters in virtualized environments.
NOTES:
• If Fibre Channel over Ethernet (FCoE)/Data Center Bridging (DCB) is present on the port, configuring the device in Virtual Machine Queue (VMQ) + DCB mode reduces the number of VMQ VPorts available for guest OSes. This does not apply to Intel Ethernet Controller X710 based devices.
• When sent from inside a virtual machine, LLDP and LACP packets may be a security risk. The Intel Virtual Function driver blocks the transmission of such packets.
• The Virtualization setting on the Advanced tab of the adapter's Device Manager property sheet is not available if the Hyper-V role is not installed.
• While Microsoft supports Hyper-V on the Windows* 8 client OS, Intel Ethernet adapters do not support virtualization settings (VMQ, SR-IOV) on Windows 8 client.
• ANS teaming of VF devices inside a Windows 2008 R2 guest running on an open source hypervisor is supported.
The virtual machine switch is part of the network I/O data path. It sits between the physical NIC and the
virtual machine NICs and routes packets to the correct MAC address. Enabling Virtual Machine Queue (VMQ)
offloading in Intel PROSet will automatically enable VMQ in the virtual machine switch. For driver-only
installations, you must manually enable VMQ in the virtual machine switch.
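For a scripted driver-only deployment, the in-box NetAdapter and Hyper-V cmdlets can handle this step; a minimal sketch (the adapter and VM names are placeholders):

# Enable VMQ on the physical adapter backing the virtual switch
Enable-NetAdapterVmq -Name "Ethernet 2"
# Give the VM's virtual NIC a nonzero VMQ weight so the switch will place it on a queue
Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 100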
If you create ANS VLANs in the parent partition, and you then create a Hyper-V Virtual NIC interface on an
ANS VLAN, then the Virtual NIC interface *must* have the same VLAN ID as the ANS VLAN. Using a
different VLAN ID or not setting a VLAN ID on the Virtual NIC interface will result in loss of communication on
that interface.
Virtual Switches bound to an ANS VLAN will have the same MAC address as the VLAN, which will have the
same address as the underlying NIC or team. If you have several VLANs bound to a team and bind a virtual
switch to each VLAN, all of the virtual switches will have the same MAC address. Clustering the virtual
switches together will cause a network error in Microsoft's cluster validation tool. In some cases, ignoring this
error will not impact the performance of the cluster. However, such a cluster is not supported by Microsoft.
Using Device Manager to give each of the virtual switches a unique address will resolve the issue. See the
Microsoft TechNet article Configure MAC Address Spoofing for Virtual Network Adapters for more
information.
Virtual Machine Queues (VMQ) and SR-IOV cannot be enabled on a Hyper-V Virtual NIC interface bound to a
VLAN configured using the VLANs tab in Windows Device Manager.
If you want to use a team or VLAN as a virtual NIC, you must follow these steps:
NOTES:
• This applies only to virtual NICs created on a team or VLAN. Virtual NICs created on a physical adapter do not require these steps.
• Receive Load Balancing (RLB) is not supported in Hyper-V. Disable RLB when using Hyper-V.
NOTE: This step is not required for the team. When the Virtual NIC is created, its protocols
are correctly bound.
Microsoft Windows Server* Core does not have a GUI interface. If you want to use an ANS Team or VLAN as a Virtual NIC, you must use Microsoft* Windows PowerShell* to set up the configuration. Use Windows PowerShell to create the team or VLAN.
NOTE: Support for the Intel PROSet command line utilities (prosetcl.exe and crashdmp.exe) has
been removed, and is no longer installed. This functionality has been replaced by the Intel
Netcmdlets for Microsoft* Windows PowerShell*. Please transition all of your scripts and
processes to use the Intel Netcmdlets for Microsoft Windows PowerShell.
The following is an example of how to set up the configuration using Microsoft* Windows PowerShell*.
1. Get all the adapters on the system and store them into a variable.
$a = Get-IntelNetAdapter
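A plausible next step is to create a team from two of the adapters found above. The cmdlet below follows the documented Intel Netcmdlets pattern, but verify the exact parameter names for your release with Get-Help New-IntelNetTeam:

# Create an Adapter Fault Tolerance team from the first two adapters
New-IntelNetTeam -TeamMemberNames $a[0].Name, $a[1].Name -TeamMode AdapterFaultTolerance -TeamName "Team1"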
NOTE: This does not apply to devices based on the Intel Ethernet X710 controllers.
Intel PROSet displays the number of virtual ports available for virtual functions under Virtualization properties
on the device's Advanced Tab. It also allows you to set how the available virtual ports are distributed between
VMQ and SR-IOV.
Teaming Considerations
• If VMQ is not enabled for all adapters in a team, VMQ will be disabled for the team.
• If an adapter that does not support VMQ is added to a team, VMQ will be disabled for the team.
• Virtual NICs cannot be created on a team with Receive Load Balancing enabled. Receive Load Balancing is automatically disabled if you create a virtual NIC on a team.
• If a team is bound to a Hyper-V virtual NIC, you cannot change the Primary or Secondary adapter.
Virtual Machine Multiple Queues (VMMQ) enables Receive Side Scaling (RSS) for virtual ports attached to a
physical port. This allows RSS to be used with SR-IOV and inside a VMQ virtual machine, and offloads the
RSS processing to the network adapter. RSS balances receive traffic across multiple CPUs or CPU cores.
This setting has no effect if your system has only one processing unit.
SR-IOV Overview
Single Root IO Virtualization (SR-IOV) is a PCI SIG specification allowing PCI Express devices to appear as
multiple separate physical PCI Express devices. SR-IOV allows efficient sharing of PCI devices among
Virtual Machines (VMs). It manages and transports data without the use of a hypervisor by providing
independent memory space, interrupts, and DMA streams for each virtual machine.
SR-IOV architecture includes two functions:
• Physical Function (PF) is a full featured PCI Express function that can be discovered, managed, and configured like any other PCI Express device.
• Virtual Function (VF) is similar to a PF but cannot be configured and only has the ability to transfer data in and out. The VF is assigned to a Virtual Machine.
NOTES:
• SR-IOV must be enabled in the BIOS.
• In Windows Server 2012, SR-IOV is not supported with teaming and VLANs. This occurs because the Hyper-V virtual switch does not enable SR-IOV on virtual interfaces such as teaming or VLANs. To enable SR-IOV, remove all teams and VLANs.
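When scripting SR-IOV with the Hyper-V cmdlets, both halves of the configuration are exposed; a minimal sketch (switch, adapter, and VM names are placeholders, and SR-IOV must already be enabled in the BIOS):

# Create an external switch with SR-IOV enabled (it cannot be turned on after creation)
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true
# Request a virtual function for the VM's NIC by giving it a nonzero IOV weight
Set-VMNetworkAdapter -VMName "VM1" -IovWeight 100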
SR-IOV Benefits
SR-IOV has the ability to increase the number of virtual machines supported per physical host, improving I/O
device sharing among virtual machines for higher overall performance:
• Provides near native performance due to direct connectivity to each VM through a virtual function
• Preserves VM migration
• Increases VM scalability on a virtualized server
• Provides data protection
iWARP (Internet Wide Area RDMA Protocol)
Remote Direct Memory Access, or RDMA, allows a computer to access another computer's memory without
interacting with either computer's operating system data buffers, thus increasing networking speed and
throughput. Internet Wide Area RDMA Protocol (iWARP) is a protocol for implementing RDMA across
Internet Protocol networks.
Microsoft* Windows* provides two forms of RDMA: Network Direct (ND) and Network Direct Kernel (NDK).
ND allows user-mode applications to use iWARP features. NDK allows kernel mode Windows components
(such as File Manager) to use iWARP features. NDK functionality is included in the Intel base networking
drivers. ND functionality is a separate option available during Intel driver and networking software installation.
If you plan to make use of iWARP features in applications you are developing, you will need to install the user-
mode Network Direct (ND) feature when you install the drivers. (See Installation below.)
NOTE: Even though NDK functionality is included in the base drivers, if you want to allow NDK's
RDMA feature across subnets, you will need to select "Enable iWARP routing across IP Subnets"
on the iWARP Configuration Options screen during base driver installation (see Installation below).
Requirements
The Intel Ethernet User Mode iWARP Provider is supported on Linux* operating systems and Microsoft*
Windows Server* 2012 R2 or later. For Windows installations, Microsoft HPC Pack or Intel MPI Library must
be installed.
Installation
NOTE: For installation on Windows Server 2016 Nano Server, see Installing on Nano Server below.
Network Direct Kernel (NDK) features are included in the Intel base drivers. Follow the steps below to install
user-mode Network Direct (ND) iWARP features.
1. From the installation media, run Autorun.exe to launch the installer, then choose "Install Drivers and
Software" and accept the license agreement.
2. On the Setup Options screen, select "Intel Ethernet User Mode iWARP Provider".
3. On the iWARP Configuration Options screen, select "Enable iWARP routing across IP Subnets" if
desired. Note that this option is displayed during base driver installation even if user mode iWARP was
not selected, as this option is applicable to Network Direct Kernel functionality as well.
4. If Windows Firewall is installed and active, select "Create an Intel Ethernet iWARP Port Mapping Service rule in Windows Firewall" and the networks to which to apply the rule. If Windows Firewall is disabled or you are using a third party firewall, you will need to manually add this rule.
5. Continue with driver and software installation.
Follow the steps below to install the Intel Ethernet User Mode iWARP Provider on Microsoft Windows
Server 2016 Nano Server.
1. Create a directory from which to install the iWARP files. For example, C:\Nano\iwarp.
2. Copy the following files into your new directory:
• \Disk\APPS\PROSETDX\Winx64\DRIVERS\i40wb.dll
• \Disk\APPS\PROSETDX\Winx64\DRIVERS\i40wbmsg.dll
• \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.cat
• \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.inf
• \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.sys
3. Run the DISM command to inject the iWARP files into your Nano Server image, using the directory you created in step 1 for the AddDriver path parameter. For example, "DISM .../Add-Driver C:\Nano\iwarp"
4. Create an inbound firewall rule for UDP port 3935.
5. If desired, use the Windows PowerShell commands below to enable iWARP routing across IP Subnets.
• Set-NetOffloadGlobalSetting -NetworkDirectAcrossIPSubnets Allow
• Disable Adapter
• Enable Adapter
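Steps 4 and 5 can also be scripted end to end with in-box cmdlets; a minimal sketch (the rule display name and adapter name are placeholders):

# Step 4: inbound firewall rule for the iWARP port mapper (UDP 3935)
New-NetFirewallRule -DisplayName "Intel iWARP Port Mapper" -Direction Inbound -Protocol UDP -LocalPort 3935 -Action Allow
# Step 5: restart the adapter so the routing change takes effect
Disable-NetAdapter -Name "Ethernet 2" -Confirm:$false
Enable-NetAdapter -Name "Ethernet 2"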
Customer Support
• Main Intel web support site: http://support.intel.com
• Network products information: http://www.intel.com/network
Legal / Disclaimers
Copyright (C) 2016, Intel Corporation. All rights reserved.
Intel Corporation assumes no responsibility for errors or omissions in this document. Nor does Intel make any
commitment to update the information contained herein.
Intel is a trademark of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
This software is furnished under license and may only be used or copied in accordance with the terms of the
license. The information in this manual is furnished for informational use only, is subject to change without
notice, and should not be construed as a commitment by Intel Corporation. Intel Corporation assumes no
responsibility or liability
for any errors or inaccuracies that may appear in this document or any software that may be provided in
association with this document. Except as permitted by such license, no part of this document may be
reproduced, stored in a retrieval system, or transmitted in any form or by any means without the express
written consent of Intel Corporation.
Installing the Adapter
Select the Correct Slot
One open PCI-Express slot, x4, x8, or x16, depending on your adapter.
NOTE: Some systems have physical x8 PCI Express slots that actually only support lower speeds.
Please check your system manual to identify the slot.
CAUTION: Turn off and unplug the power before removing the computer's cover. Failure to
do so could endanger you and may damage the adapter or computer.
CAUTION: Some PCI Express adapters may have a short connector, making them more fragile than PCI adapters. Excessive force could break the connector. Use caution when pressing the board into the slot.
NOTE: For the Intel 10 Gigabit AT Server Adapter, to ensure compliance with CISPR 24 and the EU's EN55024, this product should be used only with Category 6a shielded cables that are properly terminated according to the recommendations in EN50174-2.
• For 1000BASE-T or 100BASE-TX, use Category 5 or Category 5e wiring, twisted 4-pair copper:
  • Make sure you use Category 5 cabling that complies with the TIA-568 wiring specification. For more information on this specification, see the Telecommunications Industry Association's web site: www.tiaonline.org.
  • Length is 100 meters max.
  • Category 3 wiring supports only 10 Mbps.
CAUTION: If using less than 4-pair cabling, you must manually configure the speed and duplex setting of the adapter and the link partner. In addition, with 2- and 3-pair cabling the adapter can only achieve speeds of up to 100 Mbps.
Remove and save the fiber optic connector cover. Insert a fiber optic cable into the ports on the network
adapter bracket as shown below.
Most connectors and ports are keyed for proper orientation. If the cable you are using is not keyed, check to
be sure the connector is oriented properly (transmit port connected to receive port on the link partner, and vice
versa).
The adapter must be connected to a compatible link partner operating at the same laser wavelength as the
adapter.
Conversion cables to other connector types (such as SC-to-LC) may be used if the cabling matches the
optical specifications of the adapter, including length limitations.
Insert the fiber optic cable as shown below.
Connection requirements
NOTES:
• Some Intel branded network adapters based on the X710/XL710 controller only support Intel branded modules. On these adapters, other modules are not supported and will not function.
• For connections based on the X710/XL710 controller, support is dependent on your system board. Please see your vendor for details.
• In all cases Intel recommends using Intel optics; other modules may function but are not validated by Intel. Contact Intel for supported media types.
• In systems that do not have adequate airflow to cool the adapter and optical modules, you must use high temperature optical modules.
SR Modules
LR Modules
1G SFP Modules
The following 3rd party 1G SFP modules have received some testing. Not all modules are applicable to
all devices.
QSFP+ Modules
X710/XL710 based SFP+/QSFP+ adapters support passive SFP+/QSFP+ Direct Attach cables. Intel recommends using Intel Ethernet SFP+/QSFP+ Twinaxial Cables. Other cables may function but are not validated by Intel. Contact Intel for supported media types.
XL710 based QSFP+ adapters support all passive and active limiting direct attach cables that comply
with SFF-8436 v3.1 specifications.
XXV710-Based Adapters
Intel Ethernet SFP28 Twinaxial Cable (1M, 2M, 3M): XXVDACBL1M, XXVDACBL2M, XXVDACBL3M
For XXV710 based SFP+ adapters Intel recommends using Intel optics and cables. Other modules may
function but are not validated by Intel. Contact Intel for supported media types.
82599-Based Adapters
NOTES:
• If your 82599-based Intel Network Adapter came with Intel optics, or is an Intel Ethernet Server Adapter X520-2, then it only supports Intel optics and/or the direct attach cables listed below.
• 82599-Based adapters support all passive and active limiting direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
SR Modules
LR Modules
QSFP Modules
Supplier: Intel. Type: TRIPLE RATE 1G/10G/40G QSFP+ SR (bailed) (40G not supported on 82599). Part Number: E40GQSFPSR.
The following is a list of 3rd party SFP+ modules that have received some testing. Not all modules are
applicable to all devices.
82598-Based Adapters
NOTES:
• Intel Network Adapters that support removable optical modules only support their original module type (i.e., the Intel 10 Gigabit SR Dual Port Express Module only supports SR optical modules). If you plug in a different type of module, the driver will not load.
• 82598-Based adapters support all passive direct attach cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach cables are not supported.
• Hot Swapping/hot plugging optical modules is not supported.
• Only single speed, 10 Gigabit modules are supported.
• LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module types are not supported. Please see your system documentation for details.
The following is a list of SFP+ modules and direct attach cables that have received some testing. Not all
modules are applicable to all devices.
THIRD PARTY OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE LISTED ONLY FOR THE PURPOSE OF HIGHLIGHTING THIRD
PARTY SPECIFICATIONS AND POTENTIAL COMPATIBILITY, AND ARE NOT RECOMMENDATIONS OR ENDORSEMENT OR SPONSORSHIP OF
ANY THIRD PARTY'S PRODUCT BY INTEL. INTEL IS NOT ENDORSING OR PROMOTING PRODUCTS MADE BY ANY THIRD PARTY AND THE
THIRD PARTY REFERENCE IS PROVIDED ONLY TO SHARE INFORMATION REGARDING CERTAIN OPTIC MODULES AND CABLES WITH THE
ABOVE SPECIFICATIONS. THERE MAY BE OTHER MANUFACTURERS OR SUPPLIERS, PRODUCING OR SUPPLYING OPTIC MODULES AND
CABLES WITH SIMILAR OR MATCHING DESCRIPTIONS. CUSTOMERS MUST USE THEIR OWN DISCRETION AND DILIGENCE TO PURCHASE
OPTIC MODULES AND CABLES FROM ANY THIRD PARTY OF THEIR CHOICE. CUSTOMERS ARE SOLELY RESPONSIBLE FOR ASSESSING
THE SUITABILITY OF THE PRODUCT AND/OR DEVICES AND FOR THE SELECTION OF THE VENDOR FOR PURCHASING ANY PRODUCT. THE
OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL ASSUMES NO LIABILITY
WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY
PRODUCTS OR SELECTION OF VENDOR BY CUSTOMERS.
Type of cabling:
• 40 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
  • Length is 7 meters max.
• 10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
  • Length is 10 meters max.
NOTE: To replace an existing SLA-teamed adapter in a Hot Plug slot, first unplug the adapter
cable. When the adapter is replaced, reconnect the cable.
NOTES:
• The MAC address and driver from the removed adapter will be used by the replacement adapter unless you remove the adapter from the team and add it back in. If you do not remove and restore the replacement adapter from the team, and the original adapter is used elsewhere on your network, a MAC address conflict will occur.
• For SLA teams, ensure that the replacement NIC is a member of the team before connecting it to the switch.
Microsoft* Windows* Installation and Configuration
Installing Windows Drivers and Software
NOTE: To successfully install or uninstall the drivers or software, you must have administrative
privileges on the computer completing installation.
NOTE: This will update the drivers for all supported Intel network adapters in your system.
Before installing or updating the drivers, insert your adapter(s) in the computer and plug in the network cable.
When Windows discovers the new adapter, it attempts to find an acceptable Windows driver already installed
with the operating system.
If found, the driver is installed without any user intervention. If Windows cannot find the driver, the Found New
Hardware Wizard window is displayed.
Regardless of whether Windows finds the driver, it is recommended that you follow the procedure below to
install the driver. Drivers for all Intel adapters supported by this software release are installed.
1. Download the latest drivers from the support website and transfer them to the system.
2. If the Found New Hardware Wizard screen is displayed, click Cancel.
3. Start the autorun located in the downloaded software package. The autorun may automatically start after you have extracted the files.
4. Click Install Drivers and Software.
5. Follow the instructions in the install wizard.
NOTES:
• Intel PROSet is installed by default when you install the device drivers.
• If you want to install the Simple Network Management Protocol (SNMP) subagent, select Intel SNMP Subagent from the list of installation options under Device Drivers. For more information, see Simple Network Management Protocol in this guide.
See the link below for more information on deploying a Nano Server image and using the cmdlet:
https://msdn.microsoft.com/en-us/library/mt126167.aspx
NOTES:
• You must install Intel PROSet for Windows Device Manager if you want to use Intel ANS teams or VLANs.
• Intel PROSet for Windows Device Manager is installed by default when you install the device drivers. For information on usage, see Using Intel PROSet for Windows Device Manager.
Intel PROSet for Windows Device Manager is installed with the same process used to install drivers.
NOTES:
• You must have administrator rights to install or use Intel PROSet for Windows Device Manager.
• Upgrading PROSet for Windows Device Manager may take a few minutes.
NOTE: You can also run setup64.exe from the files downloaded from Customer Support.
2. Proceed with the installation wizard until the Custom Setup page appears.
3. Select the features to install.
4. Follow the instructions to complete the installation.
If Intel PROSet for Windows Device Manager was installed without ANS support, you can install support by
clicking Install Base Drivers and Software on the autorun, or running setup64.exe, and then selecting the
Modify option when prompted. From the Intel Network Connections window, select Advanced Network
Services then click Next to continue with the installation wizard.
The driver install utility install_cmd.exe allows unattended installation of drivers from a command line.
NOTES:
• Intel 10GbE Network Adapters do not support unattended driver installation.
• Intel PROSet cannot be installed with msiexec.exe. You must use DxSetup.exe.
These utilities can be used to install the base driver, intermediate driver, and all management applications for
supported devices.
install_cmd.exe Command Line Options
By setting the parameters in the command line, you can enable and disable management applications. If
parameters are not specified, only existing components are updated.
install_cmd.exe supports the following command line parameters:
Parameter  Definition
BD  Base Driver
  "0", do not install the base driver.
  "1", install the base driver (default).
  NOTE: If the ANS parameter is set to ANS=1, both Intel PROSet and ANS will be installed.
  NOTE: If DMIX=0, ANS will not be installed. If DMIX=0 and Intel PROSet, ANS, and FCoE are already installed, Intel PROSet, ANS, and FCoE will be uninstalled.
NOTE: Although the default value for the SNMP parameter is 1 (install), the
SNMP agent will only be installed if:
• The Intel SNMP Agent is already installed. In this case, the SNMP agent will be updated.
• The Windows SNMP service is installed. In this case, the SNMP window will pop up and you may cancel the installation if you do not want it installed.
NOTE: Even if FCOE=1 is passed, FCoE will not be installed if the operating
system and installed adapters do not support FCoE.
ISCSI  iSCSI
  "0", do not install iSCSI (default). If iSCSI is already installed, it will be uninstalled.
  "1", install iSCSI. The ISCSI property requires DMIX=1.
-a  Extract the components required for installing the base driver to C:\Program Files\Intel\Drivers. The directory where these files will be extracted to can be modified unless silent mode (/qn) is specified. If this parameter is specified, the installer will exit after the base driver is extracted. Any other parameters will be ignored.
NOTE: If the installed version is newer than the current version, this parameter
needs to be set.
/qn  Silent install.
/l[i|w|e|a]  Log file option for DMIX and SNMP installation.
NOTES:
• You must include a space between parameters.
• If you specify a path for the log file, the path must exist. If you do not specify a complete path, the install log will be created in the current directory.
• You do not need to specify default values. To install the base drivers, Intel PROSet, and ANS, the following examples are equivalent:
install_cmd.exe
install_cmd.exe BD=1 DMIX=1 ANS=1
• The ANS property should only be set to ANS=1 if DMIX=1 is set. If DMIX=0 and ANS=1, the ANS=1 is ignored and only the base driver will be installed.
• Even if FCOE=1 is passed, FCoE using DCB will not be installed if the operating system and installed adapters do not support it. If FORCE=1 is also passed, FCoE will be installed if the operating system supports it.
• Even if ISCSI=1 is passed, iSCSI using DCB will not be installed if the operating system and installed adapters do not support it. If FORCE=1 is also passed, iSCSI will be installed if the operating system supports it.
• Public properties are not case sensitive. No white space is allowed between characters. For example:
install_cmd.exe /qn DMIX=1
NOTE: To install teaming and VLAN support on a system that has adapter base drivers and Intel
PROSet for Windows Device Manager installed, type the command line D:\install_cmd.exe
ANS=1.
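Combining the parameters above, a fully silent install of the base driver, Intel PROSet, ANS, and iSCSI with a log file might look like the following (illustrative only; adjust the properties and log path to your deployment):

install_cmd.exe /qn /liew install.log BD=1 DMIX=1 ANS=1 ISCSI=1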
Switch Description
/s  Silent install
/nr  No reboot (must be used with the /s switch. This switch is ignored if it is included with the /r switch)
Examples:
Option Description
SetupBD Installs and/or updates the driver(s) and displays the GUI.
SetupBD /s /r Installs and/or updates the driver(s) silently and forces a reboot.
SetupBD /s /r /nr Installs and/or updates the driver(s) silently and forces a reboot (/nr is ignored).
Other information
You can use the /r and /nr switches only with a silent install (i.e. with the "/s" option).
• Before you install the Intel SNMP Agent on a computer, you must install SNMP on the computer. See your operating system documentation for more information.
• To use the Intel SNMP Agent with an SNMP management application, you must first compile the Intel MIB (Management Information Base) into the management application's database. This allows the management application to recognize and support the adapter.
This utility should only be employed by experienced network administrators. Additional software/services
must be installed on your network prior to installing the Intel SNMP Agent.
To install the SNMP Agent, start the autorun menu from the download directory and click Install Drivers and
Software. Follow the instructions on the screen.
NOTE: Support for the Intel PROSet command line utilities (prosetcl.exe and crashdmp.exe) has
been removed, and is no longer installed. This functionality has been replaced by the Intel
Netcmdlets for Microsoft* Windows PowerShell*. Please transition all of your scripts and
processes to use the Intel Netcmdlets for Microsoft Windows PowerShell.
Compatibility Notes
The following devices do not support Intel PROSet for Windows Device Manager:
• Intel 82552 10/100 Network Connection
• Intel 82567V-3 Gigabit Network Connection
• Intel X552 10G Ethernet devices
• Intel X553 10G Ethernet devices
• Any platform with a System on a Chip (SoC) processor that includes either a server controller (designated by an initial X, such as X552) or both a server and client controller (designated by an initial I, such as I218)
• Devices based on the Intel Ethernet Controller X722
Overview
The Link Speed and Duplex setting lets you choose how the adapter sends and receives data packets over
the network.
In the default mode, an Intel network adapter using copper connections will attempt to auto-negotiate with its
link partner to determine the best setting. If the adapter cannot establish link with the link partner using auto-
negotiation, you may need to manually configure the adapter and link partner to the identical setting to
establish link and pass packets. This should only be needed when attempting to link with an older switch that
does not support auto-negotiation or one that has been forced to a specific speed or duplex mode.
Auto-negotiation is disabled by selecting a discrete speed and duplex mode in the adapter properties sheet.
NOTES:
• When an adapter is running in NPar mode, Speed settings are limited to the root partition of each port.
• Fiber-based adapters operate only in full duplex at their native speed.
NOTES:
• Although some adapter property sheets (driver property settings) list 10 Mbps and 100 Mbps in full or half duplex as options, using those settings is not recommended.
• Only experienced network administrators should force speed and duplex manually.
• You cannot change the speed or duplex of Intel adapters that use fiber cabling.
Intel 10 Gigabit adapters that support 1 gigabit speed allow you to configure the speed setting. If this option is
not present, your adapter only runs at its native speed.
If the adapter cannot establish link with the gigabit link partner using auto-negotiation, set the adapter to 1
Gbps Full duplex.
Intel 10 gigabit fiber-based adapters and SFP direct-attach devices operate only in full duplex, and only at their
native speed. Multi-speed 10 gigabit SFP+ fiber modules support full duplex at 10 Gbps and 1 Gbps.
Auto-negotiation and Auto-Try are not supported on devices based on the Intel Ethernet Connection X552 controller and Intel Ethernet Connection X553 controller.
Manually Configuring Duplex and Speed Settings
Configuration is specific to your operating system driver. To set a specific Link Speed and Duplex mode, refer
to the section below that corresponds to your operating system.
CAUTION: The settings at the switch must always match the adapter settings. Adapter performance may suffer, or your adapter might not operate correctly if you configure the adapter differently from your switch.
The default setting is for auto-negotiation to be enabled. Only change this setting to match your link partner's
speed and duplex setting if you are having trouble connecting.
1. In Windows Device Manager, double-click the adapter you want to configure.
2. On the Link Speed tab, select a speed and duplex option from the Speed and Duplex drop-down
menu.
3. Click OK.
More specific instructions are available in the Intel PROSet help.
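On systems with the in-box NetAdapter module, the same setting can be changed from PowerShell; a minimal sketch (the adapter name is a placeholder, and the exact display strings vary by driver, so list them first):

# Show the values this driver accepts for the setting
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Speed & Duplex" | Format-List DisplayValue, ValidDisplayValues
# Force 1 Gbps full duplex
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Speed & Duplex" -DisplayValue "1.0 Gbps Full Duplex"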
Advanced Tab
The settings listed on Intel PROSet for Windows Device Manager's Advanced tab allow you to customize
how the adapter handles QoS packet tagging, Jumbo Packets, Offloading, and other capabilities. Some of the
following features might not be available depending on the operating system you are running, the specific
adapters installed, and the specific platform you are using.
Default: Disabled
Range: Enabled, Disabled
DMA (Direct Memory Access) allows the network device to move packet data directly to the system's
memory, reducing CPU utilization. However, the frequency and random intervals at which packets arrive do
not allow the system to enter a lower power state. DMA Coalescing allows the NIC to collect packets before it
initiates a DMA event. This may increase network latency but also increases the chances that the system will
consume less energy. Adapters and network devices based on the Intel Ethernet Controller I350 (and later
controllers) support DMA Coalescing.
Higher DMA Coalescing values result in more energy saved but may increase your system's network latency.
If you enable DMA Coalescing, you should also set the Interrupt Moderation Rate to 'Minimal'. This minimizes
the latency impact imposed by DMA Coalescing and results in better peak network throughput performance.
You must enable DMA Coalescing on all active ports in the system. You may not gain any energy savings if it
is enabled only on some of the ports in your system. There are also several BIOS, platform, and application
settings that will affect your potential energy savings. A white paper containing information on how to best
configure your platform is available on the Intel website.
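If you manage these settings from PowerShell rather than Device Manager, the pairing described above can be applied per port; a minimal sketch (the adapter name is a placeholder, and the accepted coalescing values vary by adapter, so list them first):

# Inspect the accepted DMA Coalescing values for this port
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "DMA Coalescing" | Format-List DisplayValue, ValidDisplayValues
# Pair coalescing with minimal interrupt moderation, as recommended above
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Interrupt Moderation Rate" -DisplayValue "Minimal"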
Flow Control
Enables adapters to more effectively regulate traffic. Adapters generate flow control frames when their
receive queues reach a pre-defined limit. Generating flow control frames signals the transmitter to slow
transmission. Adapters respond to flow control frames by pausing packet transmission for the time specified
in the flow control frame.
By enabling adapters to adjust packet transmission, flow control helps prevent dropped packets.
NOTES:
• For adapters to benefit from this feature, link partners must support flow control frames.
• When an adapter is running in NPar mode, Flow Control is limited to the root partition of each port.
Range: Disabled, RX Enabled, TX Enabled, RX & TX Enabled
Determines whether the adapter or link partner is designated as the master. The other device is designated as
the slave. By default, the IEEE 802.3ab specification defines how conflicts are handled. Multi-port devices
such as switches have higher priority over single port devices and are assigned as the master. If both devices
are multi-port devices, the one with higher seed bits becomes the master. This default setting is called
"Hardware Default."
NOTE: In most scenarios, it is recommended to keep the default value of this feature.
Setting this to either "Force Master Mode" or "Force Slave Mode" overrides the hardware default.
NOTE: Some multi-port devices may be forced to Master Mode. If the adapter is connected to such
a device and is configured to "Force Master Mode," link is not established.
Sets the Interrupt Throttle Rate (ITR). This setting moderates the rate at which Transmit and Receive
interrupts are generated.
When an event such as packet receiving occurs, the adapter generates an interrupt. The interrupt preempts the CPU and any application running at the time, and calls on the driver to handle the packet. At greater link speeds, more interrupts are created and CPU utilization rises, which degrades system performance. When you use a higher ITR setting, the interrupt rate is lower and the result is better CPU performance.
NOTE: A higher ITR rate also means that the driver has more latency in handling packets. If the
adapter is handling many small packets, it is better to lower the ITR so that the driver can be more
responsive to incoming and outgoing packets.
Altering this setting may improve traffic throughput for certain network and system configurations, however
the default setting is optimal for common network and system configurations. Do not change this setting
without verifying that the desired change will have a positive effect on network performance.
Default: Adaptive
Range: Adaptive, Extreme, High, Medium, Low, Minimal, Off
This allows the adapter to compute the IPv4 checksum of incoming and outgoing packets. This feature
enhances IPv4 receive and transmit performance and reduces CPU utilization.
With Offloading off, the operating system verifies the IPv4 checksum.
With Offloading on, the adapter completes the verification for the operating system.
Range: Disabled, RX Enabled, TX Enabled, RX & TX Enabled
Jumbo Frames
Enables or disables Jumbo Packet capability. The standard Ethernet frame size is about 1514 bytes, while Jumbo Packets are larger than this. Jumbo Packets can increase throughput and decrease CPU utilization.
However, additional latency may be introduced.
Enable Jumbo Packets only if ALL devices across the network support them and are configured to use the
same frame size. When setting up Jumbo Packets on other network devices, be aware that network devices
calculate Jumbo Packet sizes differently. Some devices include the frame size in the header information
while others do not. Intel adapters do not include frame size in the header information.
Jumbo Packets can be implemented simultaneously with VLANs and teaming. If a team contains one or more
non-Intel adapters, the Jumbo Packets feature for the team is not supported. Before adding a non-Intel adapter
to a team, make sure that you disable Jumbo Packets for all non-Intel adapters using the software shipped
with the adapter.
Restrictions
• Jumbo frames are not supported in multi-vendor team configurations.
• Supported protocols are limited to IP (TCP, UDP).
• Jumbo frames require compatible switch connections that forward Jumbo Frames. Contact your switch vendor for more information.
• When standard-sized Ethernet frames (64 to 1518 bytes) are used, there is no benefit to configuring Jumbo Frames.
• The Jumbo Packets setting on the switch must be set to at least 8 bytes larger than the adapter setting for Microsoft Windows operating systems, and at least 22 bytes larger for all other operating systems.
Default: Disabled
Range: Disabled (1514), 4088, or 9014 bytes. (Set the switch 4 bytes higher for CRC, plus 4 bytes if using VLANs.)
NOTES:
• Jumbo Packets are supported at 10 Gbps and 1 Gbps only. Using Jumbo Packets at 10 or 100 Mbps may result in poor performance or loss of link.
• End-to-end hardware must support this capability; otherwise, packets will be dropped.
• Intel adapters that support Jumbo Packets have a frame size limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes.
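As with the other Advanced tab settings, the jumbo frame size can also be set per port from PowerShell; a minimal sketch (the adapter name is a placeholder, and the display value string may vary by driver; remember to raise the switch setting as described above):

Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"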
Sets the adapter to offload the task of segmenting TCP messages into valid Ethernet frames. The maximum
frame size limit for large send offload is set to 64,000 bytes.
Since the adapter hardware is able to complete data segmentation much faster than operating system
software, this feature may improve transmission performance. In addition, the adapter uses fewer CPU
resources.
Default: Enabled
Range: Enabled, Disabled
Overrides the initial MAC address with a user-assigned MAC address. To enter a new network address, type
a 12-digit hexadecimal number in this box.
Default: None
Range: 0000 0000 0001 - FFFF FFFF FFFD
Exceptions:
• Do not use a multicast address (Least Significant Bit of the high byte = 1). For example, in the address 0Y123456789A, "Y" cannot be an odd number. (Y must be 0, 2, 4, 6, 8, A, C, or E.)
• Do not use all zeros or all Fs.
If you do not enter an address, the address is the original network address of the adapter.
This setting is used to enable/disable the logging of link state changes. If enabled, a link up change event or a
link down change event generates a message that is displayed in the system event logger. This message
contains the link's speed and duplex. Administrators view the event message from the system event log.
The following events are logged.
• The link is up.
• The link is down.
• Mismatch in duplex.
• Spanning Tree Protocol detected.
Default: Enabled
LLI enables the network device to bypass the configured interrupt moderation scheme based on the type of
data being received. It configures which arriving TCP packets trigger an immediate interrupt, enabling the
system to handle the packet more quickly. Reduced data latency enables some applications to gain faster
access to network data.
Default: Disabled
Range: Disabled, PSH Flag-Based, Port-Based
Network Virtualization using Generic Routing Encapsulation (NVGRE) increases the efficient routing of
network traffic within a virtualized or cloud environment. Some Intel Ethernet Network devices perform
Network Virtualization using Generic Routing Encapsulation (NVGRE) processing, offloading it from the
operating system. This reduces CPU utilization.
Performance Options
Performance Profile
Performance Profiles are supported on Intel 10GbE adapters and allow you to quickly optimize the
performance of your Intel Ethernet Adapter. Selecting a performance profile will automatically adjust some
Advanced Settings to their optimum setting for the selected application. For example, a standard server has
optimal performance with only two RSS (Receive-Side Scaling) queues, but a web server requires more RSS
queues for better scalability.
You must install Intel PROSet for Windows Device Manager to use Performance profiles. Profiles are
selected on the Advanced tab of the adapter's property sheet.
NOTES:
• Not all options are available on all adapter/operating system combinations.
• If you have selected the Virtualization Server profile or the Storage + Virtualization profile, and you uninstall the Hyper-V role, you should select a new profile.
Teaming Considerations
When you create a team with all members of the team supporting Performance Profiles, you will be asked
which profile to use at the time of team creation. The profile will be synchronized across the team. If there is
not a profile that is supported by all team members then the only option will be Use Current Settings. The team
will be created normally. Adding an adapter to an existing team works in much the same way.
If you attempt to team an adapter that supports performance profiles with an adapter that doesn't, the profile
on the supporting adapter will be set to Custom Settings and the team will be created normally.
Enables the adapter to offload the insertion and removal of priority and VLAN tags for transmit and receive.
Quality of Service
Quality of Service (QoS) allows the adapter to send and receive IEEE 802.3ac tagged frames. 802.3ac tagged
frames include 802.1p priority-tagged frames and 802.1Q VLAN-tagged frames. In order to implement QoS,
the adapter must be connected to a switch that supports and is configured for QoS. Priority-tagged frames
allow programs that deal with real-time events to make the most efficient use of network bandwidth. High
priority packets are processed before lower priority packets.
Tagging is enabled and disabled in Microsoft* Windows* Server* using the "QoS Packet Tagging" field in the
Advanced tab in Intel PROSet. For other versions of the Windows operating system, tagging is enabled
using the "Priority/VLAN Tagging" setting on the Advanced tab in Intel PROSet.
Once QoS is enabled in Intel PROSet, you can specify levels of priority based on IEEE 802.1p/802.1Q frame
tagging.
The supported operating systems, including Microsoft* Windows Server*, have a utility for 802.1p packet
prioritization. For more information, see the Windows system help and Microsoft's knowledge base.
NOTE: The first generation Intel PRO/1000 Gigabit Server Adapter (PWLA 8490) does not
support QoS frame tagging.
Receive Buffers
Defines the number of Receive Buffers, which are data segments. They are allocated in the host memory and
used to store the received packets. Each received packet requires at least one Receive Buffer, and each
buffer uses 2KB of memory.
You might choose to increase the number of Receive Buffers if you notice a significant decrease in the
performance of received traffic. If receive performance is not an issue, use the default setting appropriate to
the adapter.
Default: 512 for the 10 Gigabit Server Adapters; 256 for all other adapters, depending on the features selected.
When Receive Side Scaling (RSS) is enabled, all of the receive data processing for a particular TCP
connection is shared across multiple processors or processor cores. Without RSS all of the processing is
performed by a single processor, resulting in less efficient system cache utilization. RSS can be enabled for a
LAN or for FCoE. In the first case, it is called "LAN RSS". In the second, it is called "FCoE RSS".
LAN RSS
LAN RSS applies to a particular TCP connection.
NOTE: This setting has no effect if your system has only one processing unit.
RSS is enabled on the Advanced tab of the adapter property sheet. If your adapter does not support RSS, or if the SNP or SP2 is not installed, the RSS setting will not be displayed. If RSS is supported in your system environment, the following will be displayed:
• Port NUMA Node. This is the NUMA node number of a device.
• Starting RSS CPU. This setting allows you to set the preferred starting RSS processor. Change this setting if the current processor is dedicated to other processes. The setting range is from 0 to the number of logical CPUs - 1. In Server 2008 R2, RSS will only use CPUs in group 0 (CPUs 0 through 63).
• Max number of RSS CPU. This setting allows you to set the maximum number of CPUs assigned to an adapter and is primarily used in a Hyper-V environment. By decreasing this setting in a Hyper-V environment, the total number of interrupts is reduced, which lowers CPU utilization. The default is 8 for Gigabit adapters and 16 for 10 Gigabit adapters.
• Preferred NUMA Node. This setting allows you to choose the preferred NUMA (Non-Uniform Memory Access) node to be used for memory allocations made by the network adapter. In addition, the system will attempt to use the CPUs from the preferred NUMA node first for the purposes of RSS. On NUMA platforms, memory access latency is dependent on the memory location. Allocation of memory from the closest node helps improve performance. The Windows Task Manager shows the NUMA Node ID for each processor.
NOTES:
• This setting only affects NUMA systems. It will have no effect on non-NUMA systems.
• Choosing a value greater than the number of NUMA nodes present in the system selects the NUMA node closest to the device.
l Receive Side Scaling Queues. This setting configures the number of RSS queues, which determine
the space to buffer transactions between the network adapter and CPU(s).
NOTES:
l The 8 and 16 queues are only available when PROSet for
Windows Device Manager is installed. If PROSet is not
installed, only 4 queues are available.
l Using 8 or more queues will require the system to reboot.
l If RSS is not enabled for all adapters in a team, RSS will be disabled for the team.
l If an adapter that does not support RSS is added to a team, RSS will be disabled for the team.
l If you create a multi-vendor team, you must manually verify that the RSS settings for all adapters in the
team are the same.
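These LAN RSS settings map onto the inbox NetAdapter RSS cmdlets on Windows Server 2012 and later. A minimal sketch; the adapter name and processor numbers are placeholders to adapt to your system:
# Inspect the current RSS state, base processor, and NUMA information ("Ethernet" is a placeholder)
Get-NetAdapterRss -Name "Ethernet"
# Start RSS on logical processor 2 and cap RSS at 8 processors
Set-NetAdapterRss -Name "Ethernet" -BaseProcessorNumber 2 -MaxProcessors 8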
FCoE RSS
If FCoE is installed, FCoE RSS is enabled and applies to FCoE receive processing that is shared across
processor cores.
FCoE RSS Configuration
If your adapter supports FCoE RSS, the following configuration settings can be viewed and changed on the
base driver Advanced Performance tab:
l FCoE NUMA Node Count. This setting specifies the number of consecutive NUMA Nodes where
the allocated FCoE queues will be evenly distributed.
l FCoE Starting NUMA Node. This setting specifies the NUMA node representing the first node within
the FCoE NUMA Node Count.
l FCoE Starting Core Offset. This setting specifies the offset to the first NUMA Node CPU core that
will be assigned to an FCoE queue.
l FCoE Port NUMA Node. This setting is an indication from the platform of the optimal closest NUMA
Node to the physical port, if available. This setting is read-only and cannot be configured.
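On Windows Server 2012 and later you can review these values with the inbox NetAdapter cmdlets. A sketch; the adapter name "Ethernet" is a placeholder, and the exact display names depend on the installed driver:
# List the FCoE performance tuning settings the driver exposes ("Ethernet" is a placeholder)
Get-NetAdapterAdvancedProperty -Name "Ethernet" |
    Where-Object { $_.DisplayName -like "FCoE*" }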
Performance Tuning
The Intel Network Controller provides a new set of advanced FCoE performance tuning options. These
options will direct how FCoE transmit/receive queues are allocated in NUMA platforms. Specifically, they
direct what target set of NUMA node CPUs can be selected from to assign individual queue affinity. Selecting
a specific CPU has two main effects:
l It sets the desired interrupt location for processing queue packet indications.
l It sets the relative locality of the queue to available memory.
As indicated, these are intended as advanced tuning options for those platform managers attempting to
maximize system performance. They are generally expected to be used to maximize performance for multi-
port platform configurations. Since all ports share the same default installation directives (the .inf file, etc.),
the FCoE queues for every port will be associated with the same set of NUMA CPUs which may result in
CPU contention.
The software exporting these tuning options defines a NUMA Node to be equivalent to an individual processor
(socket). Platform ACPI information presented by the BIOS to the operating system helps define the relation
of PCI devices to individual processors. However, this detail is not currently reliably provided in all platforms.
Therefore, using the tuning options may produce unexpected results. Consistent or predictable results when
using the performance options cannot be guaranteed.
The performance tuning options are listed in the FCoE RSS Configuration section, above.
Example 1: A platform with two physical sockets, each socket processor providing 8 core CPUs (16 when
hyper threading is enabled), and a dual port Intel adapter with FCoE enabled.
By default, 8 FCoE queues will be allocated per NIC port. Also by default, the first (non-hyper-thread) CPU
cores of the first processor will be assigned affinity to these queues. As a result, both ports would be
competing for CPU cycles from the same set of CPUs on socket 0.
The user of these performance options will want to determine the affinity of FCoE queues to CPUs in order to
verify their actual effect on queue allocation. This is easily done by using a small packet workload and an I/O
application such as IoMeter. IoMeter monitors the CPU utilization of each CPU using the built-in performance
monitor provided by the operating system. The CPUs supporting the queue activity should stand out. They
should be the first non-hyper thread CPUs available on the processor unless the allocation is specifically
directed to be shifted via the performance options discussed above.
To make the locality of the FCoE queues even more obvious, the application affinity can be assigned to an
isolated set of CPUs on the same or another processor socket. For example, the IoMeter application can be
set to run only on a finite number of hyper thread CPUs on any processor. If the performance options have
been set to direct queue allocation on a specific NUMA node, the application affinity can be set to a different
NUMA node. The FCoE queues should not move and the activity should remain on those CPUs even though
the application CPU activity moves to the other processor CPUs selected.
Single Root I/O Virtualization (SR-IOV)
SR-IOV lets a single network port appear to be several virtual functions in a virtualized environment. If you
have an SR-IOV capable NIC, each port on that NIC can assign a virtual function to several guest partitions.
The virtual functions bypass the Virtual Machine Manager (VMM), allowing packet data to move directly to a
guest partition's memory, resulting in higher throughput and lower CPU utilization. SR-IOV support was
added in Microsoft Windows Server 2012. See your operating system documentation for system
requirements.
For devices that support it, SR-IOV is enabled in the host partition on the adapter's Device Manager property
sheet, under Virtualization on the Advanced Tab. Some devices may need to have SR-IOV enabled in a
preboot environment.
NOTES:
l Configuring SR-IOV for improved network security: In a virtualized envir-
onment, on Intel Server Adapters that support SR-IOV, the virtual function
(VF) may be subject to malicious behavior. Software-generated frames are not
expected and can throttle traffic between the host and the virtual switch, redu-
cing performance. To resolve this issue, configure all SR-IOV enabled ports
for VLAN tagging. This configuration allows unexpected, and potentially mali-
cious, frames to be dropped.
l You must enable VMQ for SR-IOV to function.
l SR-IOV is not supported with ANS teams.
l VMWare ESXi does not support SR-IOV on 1GbE ports.
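If you manage Hyper-V hosts with PowerShell, the following sketch shows the equivalent host-side steps using the inbox NetAdapter and Hyper-V cmdlets. The adapter name "Ethernet 2", VM name "VM01", and VLAN ID 10 are placeholders:
# Enable SR-IOV on the physical adapter ("Ethernet 2" is a placeholder name)
Enable-NetAdapterSriov -Name "Ethernet 2"
# Request an SR-IOV virtual function for a VM's network adapter
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 50
# Tag the VM's traffic, as recommended in the security note above
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 10
Note that the virtual switch the VM connects to must also be created with SR-IOV enabled (for example, with New-VMSwitch and -EnableIov $true); SR-IOV cannot be added to an existing switch afterward.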
TCP Checksum Offload (IPv4 and IPv6)
Allows the adapter to verify the TCP checksum of incoming packets and compute the TCP checksum of
outgoing packets. This feature enhances receive and transmit performance and reduces CPU utilization.
With Offloading off, the operating system verifies the TCP checksum.
With Offloading on, the adapter completes the verification for the operating system.
Range:
l Disabled
l RX Enabled
l TX Enabled
l RX & TX Enabled
Thermal Monitoring
Adapters and network controllers based on the Intel Ethernet Controller I350 (and later controllers) can
display temperature data and automatically reduce the link speed if the controller temperature gets too hot.
NOTE: This feature is enabled and configured by the equipment manufacturer. It is not available on
all adapters and network controllers. There are no user configurable settings.
Transmit Buffers
Defines the number of Transmit Buffers, which are data segments that enable the adapter to track transmit
packets in the system memory. Depending on the size of the packet, each transmit packet requires one or
more Transmit Buffers.
You might choose to increase the number of Transmit Buffers if you notice a possible problem with transmit
performance. Although increasing the number of Transmit Buffers can enhance transmit performance,
Transmit Buffers do consume system memory. If transmit performance is not an issue, use the default
setting. This default setting varies with the type of adapter.
View the Adapter Specifications topic for help identifying your adapter.
UDP Checksum Offload (IPv4 and IPv6)
Allows the adapter to verify the UDP checksum of incoming packets and compute the UDP checksum of
outgoing packets. This feature enhances receive and transmit performance and reduces CPU utilization.
With Offloading off, the operating system verifies the UDP checksum.
With Offloading on, the adapter completes the verification for the operating system.
Range:
l Disabled
l RX Enabled
l TX Enabled
l RX & TX Enabled
Wait for Link
Determines whether the driver waits for auto-negotiation to be successful before reporting the link state. If this
feature is off, the driver does not wait for auto-negotiation. If the feature is on, the driver does wait for auto-
negotiation.
If this feature is on and the speed is not set to auto-negotiation, the driver will wait for a short time for link to be
established before reporting the link state.
If the feature is set to Auto Detect, this feature is automatically set to On or Off depending on speed and
adapter type when the driver is installed. The setting is:
l Off for copper Intel gigabit adapters with a speed of "Auto".
l On for copper Intel gigabit adapters with a forced speed and duplex.
l On for fiber Intel gigabit adapters with a speed of "Auto".
Range:
l On
l Off
l Auto Detect
VLANs Tab
The VLANs tab allows you to create, modify, and delete VLANs. You must install Advanced Network
Services in order to see this tab and use the feature.
Virtual LANs
Overview
NOTES:
l You must install the latest Microsoft* Windows* 10 updates before you can create Intel ANS
Teams or VLANs on Windows 10 systems. Any Intel ANS Teams or VLANs created with a
previous software/driver release on a Windows 10 system will be corrupted and cannot be
upgraded. The installer will remove these existing teams and VLANs. Intel ANS is only sup-
ported on the Windows 10 Anniversary Update (Windows 10 Version 1607, build 10.0.14393)
branch, and may not be supported on future versions.
l Microsoft Windows Server 2012 R2 is the last Windows Server operating system version
that supports Intel Advanced Networking Services (Intel ANS). Intel ANS is not supported
on Microsoft Windows Server 2016 and later.
l Intel ANS VLANs are not compatible with Microsoft's Load Balancing and Failover (LBFO)
teams. Intel PROSet will block a member of an LBFO team from being added to an Intel
ANS VLAN. You should not add a port that is already part of an Intel ANS VLAN to an LBFO
team, as this may cause system instability.
The term VLAN (Virtual Local Area Network) refers to a collection of devices that communicate as if they
were on the same physical LAN. Any set of ports (including all ports on the switch) can be considered a
VLAN. LAN segments are not restricted by the hardware that physically connects them.
CAUTION: When using IEEE 802 VLANs, settings must match between the switch and those
adapters using the VLANs.
CAUTION:
l VLANs cannot be used on teams that contain non-Intel network adapters
l Use Intel PROSet to add or remove a VLAN. Do not use the Network and Dial-up
Connections dialog box to enable or disable VLANs. Otherwise, the VLAN driver may
not be correctly enabled or disabled.
NOTES:
l The VLAN ID keyword is supported. The VLAN ID must match the VLAN ID configured on
the switch. Adapters with VLANs must be connected to network devices that support IEEE
802.1Q.
l If you change a setting under the Advanced tab for one VLAN, it changes the settings for all
VLANS using that port.
l In most environments, a maximum of 64 VLANs per network port or team are supported by
Intel PROSet.
l ANS VLANs are not supported on adapters and teams that have VMQ enabled. However,
VLAN filtering with VMQ is supported via the Microsoft Hyper-V VLAN interface. For more
information see Using Intel Network Adapters in a Microsoft* Hyper-V* Environment.
l You can have different VLAN tags on a child partition and its parent. Those settings are sep-
arate from one another, and can be different or the same. The only instance where the VLAN
tag on the parent and child MUST be the same is if you want the parent and child partitions to
be able to communicate with each other through that VLAN. For more information see Using
Intel Network Adapters in a Microsoft* Hyper-V* Environment.
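If the IntelNetCmdlets PowerShell module is installed (see the Windows PowerShell section later in this guide), ANS VLANs can also be managed from the command line. A sketch, assuming the module's Add/Get/Remove-IntelNetVLAN cmdlets; the parent adapter name is a placeholder:
# Create VLAN 100 on a parent adapter (the adapter name is a placeholder)
Add-IntelNetVLAN -ParentName "Intel(R) Ethernet Server Adapter X520-2" -VLANID 100
# List the VLANs on that adapter, then remove the one just created
Get-IntelNetVLAN -ParentName "Intel(R) Ethernet Server Adapter X520-2"
Remove-IntelNetVLAN -ParentName "Intel(R) Ethernet Server Adapter X520-2" -VLANID 100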
Teaming Tab
The Teaming tab allows you to create, modify, and delete adapter teams. You must install Advanced
Network Services in order to see this tab and use the feature.
Adapter Teaming
Intel Advanced Network Services (Intel ANS) Teaming lets you take advantage of multiple adapters in a
system by grouping them together. ANS teaming can use features like fault tolerance and load balancing to
increase throughput and reliability.
Before creating a team or adding team members, make sure each team member has been configured
similarly. Settings to check include VLANs and QoS Packet Tagging, Jumbo Packets, and the various
offloads. Pay particular attention when using different adapter models or adapter versions, as adapter
capabilities vary.
Configuration Notes
l You must install the latest Microsoft* Windows* 10 updates before you can create Intel ANS Teams or
VLANs on Windows 10 systems. Any Intel ANS Teams or VLANs created with a previous soft-
ware/driver release on a Windows 10 system will be corrupted and cannot be upgraded. The installer
will remove these existing teams and VLANs. Intel ANS is only supported on the Windows 10
Anniversary Update (Windows 10 Version 1607, build 10.0.14393) branch, and may not be supported
on future versions.
l Microsoft* Windows Server* 2012 R2 is the last Windows Server operating system version that sup-
ports Intel Advanced Networking Services (Intel ANS). Intel ANS is not supported on Microsoft Windows
Server 2016 and later.
l To configure teams in Linux, use Channel Bonding, available in supported Linux kernels. For more
information see the channel bonding documentation within the kernel source.
l Not all team types are available on all operating systems.
l Be sure to use the latest available drivers on all adapters.
l Not all Intel devices support Intel ANS or Intel PROSet. Intel adapters that do not support Intel ANS or
Intel PROSet may still be included in a team. However, they are restricted in the same way non-Intel
adapters are. See Multi-Vendor Teaming for more information.
l You cannot create a team that includes both Intel X710/XL710-based devices and Intel I350-based
devices. These devices are incompatible with each other in a team and will be blocked during team setup.
Previously created teams that include this combination of devices will be removed upon upgrading.
l NDIS 6.2 introduced new RSS data structures and interfaces. Because of this, you cannot enable
RSS on teams that contain a mix of adapters that support NDIS 6.2 RSS and adapters that do not.
l If a team is bound to a Hyper-V virtual NIC, you cannot change the Primary or Secondary adapter.
l To assure a common feature set, some advanced features, including hardware offloading, are auto-
matically disabled when an adapter that does not support Intel PROSet is added to a team.
l Hot Plug operations in a Multi-Vendor Team may cause system instability. We recommended that you
restart the system or reload the team after performing Hot Plug operations with a Multi-Vendor Team.
l Spanning tree protocol (STP) should be disabled on switch ports connected to teamed adapters in
order to prevent data loss when the primary adapter is returned to service (failback). Alternatively, an
activation delay may be configured on the adapters to prevent data loss when spanning tree is used.
Set the Activation Delay on the advanced tab of team properties.
l Fibre Channel over Ethernet/Data Center Bridging will be automatically disabled when an adapter is
added to a team with non-FCoE/DCB capable adapters.
Configuring ANS Teams
Advanced Network Services (ANS) Teaming, a feature of the Advanced Network Services component, lets
you take advantage of multiple adapters in a system by grouping them together. ANS teaming can use
features like fault tolerance and load balancing to increase throughput and reliability.
NOTES:
l NLB will not work when Receive Load Balancing (RLB) is enabled. This occurs because
NLB and iANS both attempt to set the server's multicast MAC address, resulting in an ARP
table mismatch.
l Teaming with the Intel 10 Gigabit AF DA Dual Port Server Adapter is only supported with
similar adapter types and models or with switches using a Direct Attach connection.
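In addition to the Intel PROSet GUI steps below, the IntelNetCmdlets module (described later in this guide) can create ANS teams from PowerShell. A sketch under that assumption; the team name, mode spelling, and member names are placeholders to verify with Get-Help New-IntelNetTeam on your system:
# Create an Adapter Fault Tolerance team from two ports (names are placeholders)
New-IntelNetTeam -TeamName "Team1" `
    -TeamMode AdapterFaultTolerance `
    -TeamMemberNames "Intel(R) Ethernet Server Adapter I350-T2", "Intel(R) Ethernet Server Adapter I350-T2 #2"
# Verify the result
Get-IntelNetTeam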
Creating a team
NOTE: If you want to set up VLANs on a team, you must first create the team.
NOTE: A team member should be removed from the team while its link is down.
1. Open the Team Properties dialog box by double-clicking on a team listing in the Computer Man-
agement window.
2. Click the Settings tab.
3. Click Modify Team, then click the Adapters tab.
4. Select the adapters that will be members of the team.
l Click the checkbox of any adapter that you want to add to the team.
l Clear the checkbox of any adapter that you want to remove from the team.
5. Click OK.
Renaming a Team
1. Open the Team Properties dialog box by double-clicking on a team listing in the Computer Man-
agement window.
2. Click the Settings tab.
3. Click Modify Team, then click the Name tab.
4. Type a new team name, then click OK.
Removing a Team
1. Open the Team Properties dialog box by double-clicking on a team listing in the Computer Man-
agement window.
2. Click the Settings tab.
3. Select the team you want to remove, then click Remove Team.
4. Click Yes when prompted.
NOTE: If you defined a VLAN or QoS Prioritization on an adapter joining a team, you may have to
redefine it when it is returned to a stand-alone mode.
Replacing an Adapter in a Team
After installing an adapter in a specific slot, Windows treats any other adapter of the same type as a new
adapter. Also, if you remove the installed adapter and insert it into a different slot, Windows recognizes it as a
new adapter. Make sure that you follow the instructions below carefully.
1. Open Intel PROSet.
2. If the adapter is part of a team, remove the adapter from the team.
3. Shut down the server and unplug the power cable.
4. Disconnect the network cable from the adapter.
5. Open the case and remove the adapter.
6. Insert the replacement adapter. (Use the same slot, otherwise Windows assumes that there is a new
adapter.)
7. Reconnect the network cable.
8. Close the case, reattach the power cable, and power-up the server.
9. Open Intel PROSet and check to see that the adapter is available.
Microsoft* Load Balancing and Failover (LBFO) teams
Intel ANS teaming and VLANs are not compatible with Microsoft's LBFO teams. Intel PROSet will block a
member of an LBFO team from being added to an Intel ANS team or VLAN. You should not add a port that is
already part of an Intel ANS team or VLAN to an LBFO team, as this may cause system instability. If you use
an ANS team member or VLAN in an LBFO team, perform the following procedure to restore your
configuration:
1. Reboot the machine.
2. Remove the LBFO team. Even though LBFO team creation failed, after a reboot Server Manager will report
that LBFO is Enabled, and the LBFO interface is present in the 'NIC Teaming' GUI.
3. Remove the ANS teams and VLANs involved in the LBFO team and recreate them. This step is optional
(all bindings are restored when the LBFO team is removed), but it is strongly recommended.
NOTES:
l If you add an Intel AMT enabled port to an LBFO team, do not set the port to Standby in the
LBFO team. If you set the port to Standby you may lose AMT functionality.
l Data Center Bridging (DCB) is incompatible with Microsoft Server 2012 NIC Teaming
(LBFO). Do not create an LBFO team using Intel 10G ports when DCB is installed. Do not
install DCB if Intel 10G ports are part of an LBFO team. Install failures and persistent link
loss may occur if DCB and LBFO are used on the same port.
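For step 2 of the recovery procedure above, the inbox NetLbfo cmdlets can identify and remove the offending LBFO team. A minimal sketch; "Team1" is a placeholder team name:
# List native (LBFO) teams, then remove the conflicting one ("Team1" is a placeholder)
Get-NetLbfoTeam
Remove-NetLbfoTeam -Name "Team1"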
Using Intel ANS Teams and VLANs inside a Guest Virtual Machine
Intel ANS Teams and VLANs are only supported in the following guest virtual machines
Supported Adapters
Teaming options are supported on Intel server adapters. Selected adapters from other manufacturers are also
supported. If you are using a Windows-based computer, adapters that appear in Intel PROSet may be
included in a team.
NOTE: In order to use adapter teaming, you must have at least one Intel server adapter in your
system. Furthermore, all adapters must be linked to the same switch or hub.
During team creation or modification, the list of available team types or list of available devices may not
include all team types or devices. This may be caused by any of several conditions, including:
l The device does not support the desired team type or does not support teaming at all.
l The operating system does not support the desired team type.
l The devices you want to team together use different driver versions.
l You are trying to team an Intel PRO/100 device with an Intel 10GbE device.
l TOE (TCP Offload Engine) enabled devices cannot be added to an ANS team and will not appear in the
list of available adapters.
l You can add Intel Active Management Technology (Intel AMT) enabled devices to Adapter Fault
Tolerance (AFT), Switch Fault Tolerance (SFT), and Adaptive Load Balancing (ALB) teams. All other
team types are not supported. The Intel AMT enabled device must be designated as the primary
adapter for the team.
l The device's MAC address is overridden by the Locally Administered Address advanced setting.
l Fibre Channel over Ethernet (FCoE) Boot has been enabled on the device.
l The device has OS Controlled selected on the Data Center tab.
l The device has a virtual NIC bound to it.
l The device is part of a Microsoft* Load Balancing and Failover (LBFO) team.
Teaming Modes
l Adapter Fault Tolerance (AFT) - provides automatic redundancy for a server's network connection. If
the primary adapter fails, the secondary adapter takes over. Adapter Fault Tolerance supports two to
eight adapters per team. This teaming type works with any hub or switch. All team members must be
connected to the same subnet.
l Switch Fault Tolerance (SFT) - provides failover between two adapters connected to separate
switches. Switch Fault Tolerance supports two adapters per team. Spanning Tree Protocol (STP) must
be enabled on the switch when you create an SFT team. When SFT teams are created, the Activation
Delay is automatically set to 60 seconds. This teaming type works with any switch or hub. All team
members must be connected to the same subnet.
l Adaptive Load Balancing (ALB) - provides load balancing of transmit traffic and adapter fault tolerance.
In Microsoft* Windows* operating systems, you can also enable or disable receive load balancing
(RLB) in ALB teams (by default, RLB is enabled).
l Virtual Machine Load Balancing (VMLB) - provides transmit and receive traffic load balancing across
Virtual Machines bound to the team interface, as well as fault tolerance in the event of switch port,
cable, or adapter failure. This teaming type works with any switch.
l Static Link Aggregation (SLA) - provides increased transmission and reception throughput in a team of
two to eight adapters. This team type replaces the following team types from prior software releases:
Fast EtherChannel*/Link Aggregation (FEC) and Gigabit EtherChannel*/Link Aggregation (GEC). This
type also includes adapter fault tolerance and load balancing (only routed protocols). This teaming type
requires a switch with Intel Link Aggregation, Cisco* FEC or GEC, or IEEE 802.3ad Static Link Aggreg-
ation capability.
All adapters in a Link Aggregation team running in static mode must run at the same speed and must be
connected to a Static Link Aggregation capable switch. If the speed capabilities of adapters in a Static
Link Aggregation team are different, the speed of the team is dependent on the lowest common
denominator.
l IEEE 802.3ad Dynamic Link Aggregation - creates one or more teams using Dynamic Link Aggregation
with mixed-speed adapters. Like the Static Link Aggregation teams, Dynamic 802.3ad teams increase
transmission and reception throughput and provide fault tolerance. This teaming type requires a switch
that fully supports the IEEE 802.3ad standard.
l Multi-Vendor Teaming (MVT) - adds the capability to include adapters from selected other vendors in a
team. If you are using a Windows-based computer, you can team adapters that appear in the Intel
PROSet teaming wizard.
IMPORTANT:
l Be sure to use the latest available drivers on all adapters.
l Before creating a team, adding or removing team members, or changing advanced settings
of a team member, make sure each team member has been configured similarly. Settings
to check include VLANs and QoS Packet Tagging, Jumbo Frames, and the various off-
loads. These settings are available in Intel PROSet's Advanced tab. Pay particular atten-
tion when using different adapter models or adapter versions, as adapter capabilities vary.
l If team members implement Advanced features differently, failover and team functionality
will be affected. To avoid team implementation issues:
l Create teams that use similar adapter types and models.
l Reload the team after adding an adapter or changing any Advanced features. One
way to reload the team is to select a new preferred primary adapter. Although there
will be a temporary loss of network connectivity as the team reconfigures, the team
will maintain its network addressing schema.
NOTES:
l Hot Plug operations for an adapter that is part of a team are only available in Windows
Server.
l For SLA teams, all team members must be connected to the same switch. For AFT, ALB,
and RLB teams, all team members must belong to the same subnet. The members of an
SFT team must be connected to a different switch.
l Teaming only one adapter port is possible, but provides no benefit.
Specifying a Preferred Primary or Secondary Adapter
1. In the Team Properties dialog box's Settings tab, click Modify Team.
2. On the Adapters tab, select an adapter.
3. Click Set Primary or Set Secondary.
4. Click OK.
The adapter's preferred setting appears in the Priority column on Intel PROSet's Team Configuration tab. A
"1" indicates a preferred primary adapter, and a "2" indicates a preferred secondary adapter.
Failover and Failback
When a link fails, either because of port or cable failure, team types that provide fault tolerance will continue to
send and receive traffic. Failover is the initial transfer of traffic from the failed link to a good link. Failback
occurs when the original adapter regains link. You can use the Activation Delay setting (located on the
Advanced tab of the team's properties in Device Manager) to specify how long the failover adapter waits
before becoming active. If you don't want your team to failback when the original adapter gets link back, you
can set the Allow Failback setting to disabled (located on the Advanced tab of the team's properties in Device
Manager).
Adapter Fault Tolerance (AFT)
Adapter Fault Tolerance (AFT) provides automatic recovery from a link failure caused from a failure in an
adapter, cable, switch, or port by redistributing the traffic load across a backup adapter.
Failures are detected automatically, and traffic rerouting takes place as soon as the failure is detected. The
goal of AFT is to ensure that load redistribution takes place fast enough to prevent user sessions from being
disconnected. AFT supports two to eight adapters per team. Only one active team member transmits and
receives traffic. If this primary connection (cable, adapter, or port) fails, a secondary, or backup, adapter takes
over. After a failover, if the connection to the user-specified primary adapter is restored, control passes
automatically back to that primary adapter. For more information, see Primary and Secondary Adapters.
AFT is the default mode when a team is created. This mode does not provide load balancing.
NOTES:
l AFT teaming requires that the switch not be set up for teaming and that spanning tree pro-
tocol is turned off for the switch port connected to the NIC or LOM on the server.
l All members of an AFT team must be connected to the same subnet.
Switch Fault Tolerance (SFT)
NOTE: SFT teaming requires that the switch not be set up for teaming and that spanning tree
protocol is turned on.
Configuration Monitoring
You can set up monitoring between an SFT team and up to five IP addresses. This allows you to detect link
failure beyond the switch. You can ensure connection availability for several clients that you consider critical.
If the connection between the primary adapter and all of the monitored IP addresses is lost, the team will
failover to the secondary adapter.
Adaptive/Receive Load Balancing (ALB/RLB)
Adaptive Load Balancing (ALB) is a method for dynamic distribution of data traffic load among multiple
physical channels. The purpose of ALB is to improve overall bandwidth and end station performance. In ALB,
multiple links are provided from the server to the switch, and the intermediate driver running on the server
performs the load balancing function. The ALB architecture utilizes knowledge of Layer 3 information to
achieve optimum distribution of the server transmission load.
ALB is implemented by assigning one of the physical channels as Primary and all other physical channels as
Secondary. Packets leaving the server can use any one of the physical channels, but incoming packets can
only use the Primary Channel. With Receive Load Balancing (RLB) enabled, the team also balances IP
receive traffic. The intermediate driver analyzes the send and transmit loading on each adapter and balances the rate across the
adapters based on destination address. Adapter teams configured for ALB and RLB also provide the benefits
of fault tolerance.
NOTES:
l ALB teaming requires that the switch not be set up for teaming and that spanning tree
protocol is turned off for the switch port connected to the network adapter in the server.
l ALB does not balance traffic when protocols such as NetBEUI and IPX* are used.
l You may create an ALB team with mixed speed adapters. The load is balanced according to
the adapter's capabilities and bandwidth of the channel.
l All members of ALB and RLB teams must be connected to the same subnet.
l Virtual NICs cannot be created on a team with Receive Load Balancing enabled. Receive
Load Balancing is automatically disabled if you create a virtual NIC on a team.
Virtual Machine Load Balancing (VMLB)
NOTES:
l VMLB does not load balance non-routed protocols such as NetBEUI and some IPX* traffic.
l VMLB supports from two to eight adapter ports per team.
l You can create a VMLB team with mixed speed adapters. The load is balanced according to
the lowest common denominator of adapter capabilities and the bandwidth of the channel.
l You cannot use an Intel AMT enabled adapter in a VMLB team.
Static Link Aggregation
Static Link Aggregation (SLA) is very similar to ALB, taking several physical channels and combining them
into a single logical channel.
This mode works with:
l Cisco EtherChannel capable switches with channeling mode set to "on"
l Intel switches capable of Link Aggregation
l Other switches capable of static 802.3ad
NOTES:
l All adapters in a Static Link Aggregation team must run at the same speed and must be
connected to a Static Link Aggregation-capable switch. If the speed capabilities of adapters
in a Static Link Aggregation team are different, the speed of the team is dependent on the
switch.
l Static Link Aggregation teaming requires that the switch be set up for Static Link Aggregation
teaming and that spanning tree protocol is turned off.
l An Intel AMT enabled adapter cannot be used in an SLA team.
IEEE 802.3ad Dynamic Link Aggregation
NOTES:
l IEEE 802.3ad teaming requires that the switch be set up for IEEE 802.3ad (link aggregation)
teaming and that spanning tree protocol is turned off.
l Once you choose an aggregator, it remains in force until all adapters in that aggregation team
lose link.
l In some switches, copper and fiber adapters cannot belong to the same aggregator in an
IEEE 802.3ad configuration. If there are copper and fiber adapters installed in a system, the
switch might configure the copper adapters in one aggregator and the fiber-based adapters in
another. If you experience this behavior, for best performance you should use either only
copper-based or only fiber-based adapters in a system.
l An Intel AMT enabled adapter cannot be used in a Dynamic Link Aggregation team.
l Verify that the switch fully supports the IEEE 802.3ad standard.
l Check your switch documentation for port dependencies. Some switches require pairing to start on a
primary port.
l Check your speed and duplex settings to ensure the adapter and switch are running at full duplex,
either forced or set to auto-negotiate. Both the adapter and the switch must have the same speed and
duplex configuration. The full-duplex requirement is part of the IEEE 802.3ad specification:
http://standards.ieee.org/. If needed, change your speed or duplex setting before you link the adapter to the switch.
Although you can change speed and duplex settings after the team is created, Intel recommends you
disconnect the cables until settings are in effect. In some cases, switches or servers might not appro-
priately recognize modified speed or duplex settings if settings are changed when there is an active link
to the network.
l If you are configuring a VLAN, check your switch documentation for VLAN compatibility notes. Not all
switches support simultaneous dynamic 802.3ad teams and VLANs. If you do choose to set up
VLANs, configure teaming and VLAN settings on the adapter before you link the adapter to the switch.
Setting up VLANs after the switch has created an active aggregator affects VLAN functionality.
Multi-Vendor Teaming
Multi-Vendor Teaming (MVT) allows teaming with a combination of Intel and non-Intel adapters.
If you are using a Windows-based computer, adapters that appear in the Intel PROSet teaming wizard can be
included in a team.
MVT Design Considerations
l In order to activate MVT, you must have at least one Intel adapter or integrated connection in the team,
which must be designated as the primary adapter.
l A multi-vendor team can be created for any team type.
l All members in an MVT must operate on a common feature set (lowest common denominator).
l Manually verify that the frame setting for the non-Intel adapter is the same as the frame settings for the
Intel adapters.
l If a non-Intel adapter is added to a team, its RSS settings must match the Intel adapters in the team.
Removing Phantom Teams and Phantom VLANs
If you physically remove all adapters that are part of a team or VLAN from the system without removing them
via the Device Manager first, a phantom team or phantom VLAN will appear in Device Manager. There are two
methods to remove the phantom team or phantom VLAN.
Removing the Phantom Team or Phantom VLAN through the Device Manager
Follow these instructions to remove a phantom team or phantom VLAN from the Device Manager:
1. In the Device Manager, double-click on the phantom team or phantom VLAN.
2. Click the Settings tab.
3. Select Remove Team or Remove VLAN.
Preventing the Creation of Phantom Devices
To prevent the creation of phantom devices, make sure you perform these steps before physically removing
an adapter from the system:
1. Remove the adapter from any teams using the Settings tab on the team properties dialog box.
2. Remove any VLANs from the adapter using the VLANs tab on the adapter properties dialog box.
3. Uninstall the adapter from Device Manager.
You do not need to follow these steps in hot-replace scenarios.
Power Management Tab
The Intel PROSet Power Management tab replaces the standard Microsoft* Windows* Power
Management tab in Device Manager. The standard Windows power management functionality is included on
the Intel PROSet tab.
NOTES:
l The options available on the Power Management tab are adapter and system dependent. Not
all adapters will display all options. There may be BIOS or operating system settings that
need to be enabled for your system to wake up. In particular, this is true for Wake from S5
(also referred to as Wake from power off).
l The Intel 10 Gigabit Network Adapters do not support power management.
l If your system has a Manageability Engine, the Link LED may stay lit even if WoL is
disabled.
Power Options
The Intel PROSet Power Management tab includes several settings that control the adapter's power
consumption. For example, you can set the adapter to reduce its power consumption if the cable is
disconnected.
Reduce Power if Cable Disconnected & Reduce Link Speed During Standby
Enables the adapter to reduce power consumption when the LAN cable is disconnected from the adapter and
there is no link. When the adapter regains a valid link, adapter power usage returns to its normal state (full
power usage).
The Hardware Default option is available on some adapters. If this option is selected, the feature is disabled or
enabled based on the system hardware.
Range: The range varies with the operating system and adapter.
Ultra Low Power (ULP) Mode
NOTE: If you experience link issues when two ULP-capable devices are connected back to back,
disable ULP mode on one of the devices.
Energy Efficient Ethernet
NOTES:
l Both ends of the EEE link must automatically negotiate link
speed.
l EEE is not supported at 10 Mbps.
Wake on LAN Options
The ability to remotely wake computers is an important development in computer management. This feature
has evolved over the last few years from a simple remote power-on capability to a complex system interacting
with a variety of device and operating system power states.
Microsoft Windows Server is ACPI-capable. Windows does not support waking from a power-off (S5) state,
only from standby (S3) or hibernate (S4). When shutting down the system, these states shut down ACPI
devices, including Intel adapters. This disarms the adapter's remote wake-up capability. However, in some
ACPI-capable computers, the BIOS may have a setting that allows you to override the operating system and
wake from an S5 state anyway. If there is no support for wake from S5 state in your BIOS settings, you are
limited to Wake From Standby when using these operating systems in ACPI computers.
The Intel PROSet Power Management tab includes Wake on Magic Packet and Wake on directed packet
settings. These control the type of packets that wake up the system from standby.
For some adapters, the Power Management tab in Intel PROSet includes a setting called Wake on Magic
Packet from power off state. Enable this setting to explicitly allow wake-up with a Magic Packet* from
shutdown under APM power management mode.
NOTES:
l To use the Wake on Directed Packet feature, WoL must first be enabled in the EEPROM
using BootUtil.
l If Reduce speed during standby is enabled, then Wake on Magic Packet and/or Wake
on directed packet must be enabled. If both of these options are disabled, power is
removed from the adapter during standby.
l Wake on Magic Packet from power off state has no effect on this option.
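On Windows Server 2012 and later, the inbox NetAdapter cmdlets expose the same wake controls. A minimal sketch; "Ethernet" is a placeholder adapter name, and not every adapter supports every wake option:
# Show the adapter's current wake settings ("Ethernet" is a placeholder name)
Get-NetAdapterPowerManagement -Name "Ethernet"
# Allow wake on Magic Packet only
Set-NetAdapterPowerManagement -Name "Ethernet" -WakeOnMagicPacket Enabled -WakeOnPattern Disabled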
Windows PowerShell* Module
Install the IntelNetCmdlets module by checking the Windows PowerShell Module checkbox during the driver
and PROSet installation process. Then use the Import-Module cmdlet to import the new cmdlets. You may
need to restart Windows PowerShell to access the newly imported cmdlets.
To use the Import-Module cmdlet, you must specify the path. For example:
PS c:\> Import-Module -Name "C:\Program Files\Intel\IntelNetCmdlets"
NOTE: If you include a trailing backslash ("\") at the end of the Import-Module command, the import
operation will fail. In Microsoft Windows* 10 and Windows Server* 2016, the auto-complete function
appends a trailing backslash. If you use auto-complete when entering the Import-Module command,
delete the trailing backslash from the path before pressing Return to execute the command.
See Microsoft TechNet for more information about the Import-Module cmdlet.
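To confirm the import succeeded, you can list the cmdlets the module exports (a quick check, using the module name shown above):
# List the cmdlets provided by the IntelNetCmdlets module
Get-Command -Module IntelNetCmdlets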
System requirements for using IntelNetCmdlets:
l Microsoft* Windows PowerShell* version 2.0
l .NET version 2.0
NOTE: If an adapter is bound to an ANS team, do not change settings using the Set-
NetAdapterAdvancedProperty cmdlet from Windows PowerShell*, or any other cmdlet not provided
by Intel. Doing so may cause the team to stop using that adapter to pass traffic. You may see this
as reduced performance or the adapter being disabled in the ANS team. You can resolve this issue
by changing the setting back to its previous state, or by removing the adapter from the ANS team
and then adding it back.
Saving and Restoring an Adapter's Configuration Settings
NOTES:
l You must have Administrator privileges to run scripts. If you do not have Administrator priv-
ileges, you will not receive an error, the script just will not run.
l Only adapter settings are saved (these include ANS teaming and VLANs). The adapter's
driver is not saved.
l Restore using the script only once. Restoring multiple times may result in unstable con-
figuration.
l The Restore operation requires the same OS as when the configuration was Saved.
l Intel PROSet for Windows* Device Manager must be installed for the SaveRestore.ps1
script to run.
l For systems running a 64-bit OS, be sure to run the 64-bit version of Windows PowerShell,
not the 32-bit (x86) version, when running the SaveRestore.ps1 script.
-ConfigPath: Optional. Specifies the path and filename of the main configuration save file. If not
specified, it is the script path and default filename (saved_config.txt).
-BDF: Optional. Default configuration file names are saved_config.txt and Saved_StaticIP.txt.
If you specify -BDF during a restore, the script attempts to restore the configuration based
on the PCI Bus:Device:Function:Segment values of the saved configuration. If you
removed, added, or moved a NIC to a different slot, this may result in the script applying
the saved settings to a different device.
NOTES:
l If the restore system is not identical to the saved system, the script may not
restore any settings when the -BDF option is specified.
l Virtual Function devices do not support the -BDF option.
Examples
Save Example
To save the adapter settings to a file on a removable media device, do the following.
1. Open a Windows PowerShell Prompt.
2. Navigate to the directory where SaveRestore.ps1 is located (generally c:\Program Files\Intel\DMIX).
3. Type the following:
SaveRestore.ps1 -Action Save -ConfigPath e:\settings.txt
Restore Example
To restore the adapter settings from a file on removable media, do the following:
1. Open a Windows PowerShell Prompt.
2. Navigate to the directory where SaveRestore.ps1 is located (generally c:\Program Files\Intel\DMIX).
3. Type the following:
SaveRestore.ps1 -Action Restore -ConfigPath e:\settings.txt
Intel Network Drivers for DOS
The NDIS2 (DOS) driver is provided solely for the purpose of loading other operating systems -- for example,
during RIS or unattended installations. It is not intended as a high-performance driver.
You can find adapter drivers, PROTOCOL.INI files, and NET.CFG files in the PRO100\DOS or
PRO1000\DOS directory in the download folder. For additional unattended install information, see the text
files in the operating system subdirectories under the APPS\SETUP\PUSH directory.
DRIVERNAME
Normal Behavior: The driver finds its section in PROTOCOL.INI by matching its instance ID to the value for
this parameter.
Possible Errors: The device driver uses a DOS function to display the name of the driver it is expecting. This
function cannot display a '$' character. For this reason, the user may see a message referring to this value
without the '$'; the user must remember to enter the '$' character as part of the parameter's value.
SPEEDDUPLEX
This parameter disables Auto-Speed-Detect and causes the adapter to function at the speed indicated. Do not
include this parameter if you want your Gigabit adapter to connect at 1000 Mbps.
Syntax: SPEEDDUPLEX = [0 | 1 | 2 | 3]
Example: SPEEDDUPLEX = 2
SLOT
This parameter makes it possible for the driver to uniquely identify which of the adapters is to be controlled by
the driver. The parameter can be entered in hexadecimal or decimal.
Syntax: SLOT = [0x0..0x1FFF]
SLOT = [0..8191]
Normal Behavior: The driver uses the value of the parameter to decide which adapter to control.
Possible Errors: If only one adapter is installed, and the value does not correctly indicate the adapter
slot:
l A message indicates that the value does not match the actual configuration
l The driver finds the adapter and uses it
If more than one adapter is installed, and the value does not correctly indicate an
adapter slot:
l A message indicates possible slots to use
l The driver loads on the next available slot
NODE
This parameter sets the Individual Address of the adapter, overriding the value read from the EEPROM.
Normal The Current Station Address in the NDIS MAC Service-Specific Characteristics (MSSC)
Behavior: table is assigned the value of this parameter. The adapter hardware is programmed to
receive frames with the destination address equal to the Current Station Address in the
MSSC table. The Permanent Station Address in the MSSC table will be set to reflect the
node address read from the adapter's EEPROM.
Possible If any of the rules described above are violated, the driver treats this as a fatal error and an
Errors: error message occurs, indicating the correct rules for forming a proper address.
ADVERTISE
This parameter can be used to restrict the speeds and duplexes advertised to a link partner during auto-
negotiation. If AutoNeg = 1, this value is used to determine what speed and duplex combinations are
advertised to the link partner. This field is treated as a bit mask.
Syntax: ADVERTISE = [1 | 2 | 4 | 8 | 0x20 | 0x2F]:
0x01 = 10 Half, 0x02 = 10 Full, 0x04 = 100 Half, 0x08 = 100 Full, 0x20 = 1000 Full,
0x2F = all rates
Example: ADVERTISE = 1
FLOWCONTROL
This parameter, which refers to IEEE 802.3x flow control, helps prevent packets from being dropped and can
improve overall network performance. Specifically, the parameter determines what flow control capabilities
the adapter advertises to its link partner when auto negotiation occurs. This setting does NOT force flow
control to be used. It only affects the advertised capabilities.
NOTES:
l Due to errata in the 82542 silicon, the chip is not able to receive PAUSE frames if the
ReportTxEarly parameter is set to 1. Thus, if ReportTxEarly =1 and the driver is running on
an adapter using this silicon (such as the PWLA8490), the driver will modify the FlowControl
parameter to disable the ability to receive PAUSE frames.
l If half-duplex is forced or auto-negotiated, the driver will completely disable flow control.
Example: FLOWCONTROL = 1
Default: 3
Possible Errors: An error message is displayed if the value given is out of range.
USELASTSLOT
This parameter causes the driver to load on the device in the last slot found in the slot scan. The default
behavior of the driver is to load on the first adapter found in the slot scan. This parameter forces the driver to
load on the last one found instead.
Syntax: UseLastSlot = [0 | any other value ]
Example: USELASTSLOT = 1
Default: 0
TXLOOPCOUNT
This parameter controls the number of times the transmit routine loops while waiting for a free transmit buffer.
This parameter can affect Transmit performance.
Default: 1000
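Putting these keywords together, a PROTOCOL.INI section for the NDIS2 driver might look like the following sketch. The section name and DRIVERNAME value (E1000$) are typical for the PRO/1000 DOS driver but vary by adapter family, and the remaining values simply reuse the examples above; check the PROTOCOL.INI files in the PRO100\DOS or PRO1000\DOS directory for the exact names for your adapter.
[E1000]
    DRIVERNAME = E1000$
    SPEEDDUPLEX = 2
    FLOWCONTROL = 1
    SLOT = 0x0018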
Data Center Bridging (DCB)
NOTE: The OS DCBX stack defaults to the CEE version of DCBX, and if a peer is transmitting
IEEE TLVs, it will automatically transition to the IEEE version.
For more information on DCB, including the DCB Capability Exchange Protocol Specification, go to
http://www.ieee802.org/1/pages/dcbridges.html
iSCSI Over Data Center Bridging
NOTE: DCB does not install in a VM. iSCSI over DCB is only supported in the base OS. An iSCSI
initiator running in a VM will not benefit from DCB Ethernet enhancements.
The Intel iSCSI Agent is responsible for maintaining all packet filters for the purpose of priority tagging iSCSI
traffic flowing over DCB-enabled adapters. The iSCSI Agent will create and maintain a traffic filter for an ANS
Team if at least one member of the team has an "Operational" DCB status. However, if any adapter on the
team does not have an "Operational" DCB status, the iSCSI Agent will log an error in the Windows Event Log
for that adapter. These error messages are to notify the administrator of configuration issues that need to be
addressed, but do not affect the tagging or flow of iSCSI traffic for that team, unless it explicitly states that the
TC Filter has been removed.
Linux Configuration
Virtually all Open Source distributions include support for an Open iSCSI Software Initiator, and Intel
Ethernet adapters support them. Please consult your distribution documentation for additional configuration
details on its particular Open iSCSI initiator.
Intel 82599 and X540-based adapters support iSCSI within a Data Center Bridging cloud. Used in
conjunction with switches and targets that support the iSCSI/DCB application TLV, this solution can provide
guaranteed minimum bandwidth for iSCSI traffic between the host and target. This solution enables storage
administrators to segment iSCSI traffic from LAN traffic, similar to how they can currently segment FCoE
from LAN traffic. Previously, iSCSI traffic within a DCB supported environment was treated as LAN traffic by
switch vendors. Please consult your switch and target vendors to ensure that they support the iSCSI/DCB
application TLV.
Remote Boot
Remote Boot allows you to boot a system using only an Ethernet adapter. You connect to a server that
contains an operating system image and use that to boot your local system.
Flash Images
"Flash" is a generic term for nonvolatile RAM (NVRAM), firmware, and option ROM (OROM). Depending on
the device, it can be on the NIC or on the system board.
Using Intel PROSet to flash the UEFI Network Driver Option ROM
Intel PROSet for Windows Device Manager can install the UEFI network driver on an Intel network
adapter's option ROM. The UEFI network driver will load automatically during system UEFI boot when
installed in the option ROM. UEFI specific *.FLB images are included in the downloaded release media. The
"Boot Options" tab in Intel PROSet for Windows Device Manager will allow the UEFI *.FLB image to be
installed on the network adapter.
BootUtil can only be used to program add-in Intel network adapters. LOM (LAN On Motherboard) network
connections cannot be programmed with the UEFI network driver option ROM.
See the bootutil.txt file for details on using BootUtil.
Installing the UEFI Network Driver Option ROM from the UEFI Shell
The BootUtil command line utility can install the UEFI network driver on an Intel network adapter's option
ROM. The UEFI network driver will load automatically during system UEFI boot when installed into the option
ROM. For example, run BootUtil with the following command line options to install the UEFI network driver on
all supported Intel network adapters:
FS0:\>bootutil64e up=efi all
BootUtil can only be used to program add-in Intel Ethernet network adapters. LOM (LAN On Motherboard)
network connections cannot be programmed with the UEFI network driver option ROM.
See the bootutil.txt file for details on using BootUtil.
Enable Remote Boot
If you have an Intel Desktop Adapter installed in your client computer, the flash ROM device is already
available in your adapter, and no further installation steps are necessary. For Intel Server Adapters, the flash
ROM can be enabled using the BootUtil utility. For example, from the command line type:
BOOTUTIL -E
BOOTUTIL -NIC=1 -FLASHENABLE
The first line will enumerate the ports available in your system. Choose a port. Then type the second line,
selecting the port you wish to enable. For more details, see the bootutil.txt file.
The ifconfig UEFI shell command must be used to configure each network interface. Running "ifconfig -?"
from the UEFI shell will display usage instructions for ifconfig.
Diagnostic Capability
The UEFI network driver features built in hardware diagnostic tests. The diagnostic tests are called with the
UEFI shell drvdiag command.
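For example, you might enter the following at the UEFI shell prompt to list the controllers that can be tested and then run the standard test set. The -s flag selects standard diagnostics in most shell versions; the exact syntax can vary, so run drvdiag -? to confirm:
Shell> drvdiag
Shell> drvdiag -s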
Long initialization times observed with Intel's UEFI driver are caused when the UNDI.Initialize command is
called with the PXE_OPFLAGS_INITIALIZE_CABLE_DETECT flag set. In this case, UNDI.Initialize will try
to detect the link state.
If the port is connected and link is up, initialize will generally finish in about 3.5 seconds (the time needed to
establish link, dependent on link conditions, link speed and controller type) and return PXE_STATFLAGS_
COMMAND_COMPLETE. If the port is disconnected (link is down), initialize will complete in about 5
seconds and return PXE_STATFLAGS_INITIALIZED_NO_MEDIA (the driver initializes the hardware, then
waits for link and times out when link is not established within 5 seconds).
When UNDI.Initialize is called with PXE_OPFLAGS_INITIALIZE_DO_NOT_DETECT_CABLE the function
will not try to detect link status and will take less than 1 second to complete.
The behavior of UNDI.Initialize is described in the UEFI specification, version 2.3.1: initializing the network
device will take up to four seconds for most network devices and in some extreme cases (usually poor
cables) up to twenty seconds. Control will not be returned to the caller and the COMMAND_COMPLETE
status flag will not be set until the adapter is ready to transmit.
Configuring the Intel Boot Agent
If you use the Windows operating system on your client computer, you can use Intel PROSet for Windows*
Device Manager to configure and update the Intel Boot Agent software. Intel PROSet is available through the
device manager. Intel PROSet provides a special tab, called the Boot Options tab, used for configuring and
updating the Intel Boot Agent software.
To access the Boot Options tab:
1. Open Intel PROSet for Windows Device Manager by opening the System Control Panel. On the
Hardware tab, click Device Manager.
2. Select the appropriate adapter and click the Boot Options tab. If the tab does not appear, update your
network driver.
3. The Boot Options tab shows a list of current configuration parameters and their corresponding values.
Corresponding configuration values appear for the selected setting in a drop-down box.
4. Select a setting you want to change from the Settings selection box.
5. Select a value for that setting from the Value drop-down list.
6. Repeat the preceding two steps to change any additional settings.
7. Once you have completed your changes, click Apply Changes to update the adapter with the new
values.
Intel provides a utility, Intel Ethernet Flash Firmware Utility (BootUtil) for installing and configuring the Intel
Boot Agent using the DOS environment. See bootutil.txt for complete information.
You can customize the behavior of the Intel Boot Agent software through a pre-boot (operating system
independent) configuration setup program contained within the adapter's flash ROM. You can access this pre-
boot configuration setup program each time the client computer cycles through the boot process.
When the boot process begins, the screen clears and the computer begins its Power On Self Test (POST)
sequence. Shortly after completion of the POST, the Intel Boot Agent software stored in flash ROM executes.
The Intel Boot Agent then displays an initialization message, similar to the one below, indicating that it is
active:
Initializing Intel(R) Boot Agent Version X.X.XX
PXE 2.0 Build 083
NOTE: This display may be hidden by the manufacturer's splash screen. Consult your man-
ufacturer's documentation for details.
The configuration setup menu shows a list of configuration settings on the left and their corresponding values
on the right. Key descriptions near the bottom of the menu indicate how to change values for the configuration
settings. For each selected setting, a brief "mini-Help" description of its function appears just above the key
descriptions.
1. Highlight the setting you need to change by using the arrow keys.
2. Once you have accessed the setting you want to change, press the spacebar until the desired value
appears.
3. Once you have completed your changes, press F4 to update the adapter with the new values. Any
changed configuration values are applied as the boot process resumes.
The table below provides a list of configuration settings, their possible values, and their detailed descriptions:
Network Boot Protocol
Possible values: PXE (Preboot eXecution Environment)
Description: Select PXE for use with network management programs, such as LANDesk* Management
Suite.
NOTE: Depending on the configuration of the Intel Boot Agent, this parameter may not be changeable.

Boot Order
Possible values: Use BIOS Setup Boot Order; Try network first, then local drives; Try local drives first, then
network; Try network only; Try local drives only
Description: Sets the boot order in which devices are selected during boot up if the computer does not have
its own control method. If your client computer's BIOS supports the BIOS Boot Specification (BBS), or allows
PnP-compliant selection of the boot order in the BIOS setup program, then this setting will always be Use
BIOS Setup Boot Order and cannot be changed. In this case, refer to the BIOS setup manual specific to your
client computer to set up boot options.

Legacy OS Wakeup Support (for 82559-based adapters only)
Possible values: 0 = Disabled (default); 1 = Enabled
Description: If set to 1, the Intel Boot Agent will enable PME in the adapter's PCI configuration space during
initialization. This allows remote wakeup under legacy operating systems that don't normally support it. Note
that enabling this makes the adapter technically non-compliant with the ACPI specification, which is why the
default is disabled.
NOTE: If more than one adapter is installed in a computer and you want to PXE boot from the boot
ROM on a specific adapter, move that adapter to the top of the BIOS Boot Order or disable the flash
on the other adapters.
While the configuration setup menu is displayed, diagnostics information is also displayed in the lower half of
the screen. This information can be helpful during interaction with Intel Customer Support personnel or your IT
team members. For more information about how to interpret the information displayed, refer to Diagnostics
Information for Pre-boot PXE Environments.
For the Intel Boot Agent software to perform its intended job, there must be a server set up on the same
network as the client computer. That server must recognize and respond to the PXE or BOOTP boot protocols
that are used by the Intel Boot Agent software.
NOTE: When the Intel Boot Agent software is installed as an upgrade for an earlier version boot
ROM, the associated server-side software may not be compatible with the updated Intel Boot
Agent. Contact your system administrator to determine if any server updates are necessary.
Consult your Linux* vendor for information about setting up the Linux Server.
Nothing is needed beyond the standard driver files supplied on the media. Microsoft* owns the process and
associated instructions for Windows Deployment Services. For more information on Windows Deployment
Services perform a search of Microsoft articles at: http://technet.microsoft.com/en-us/library/default.aspx
The Intel Boot Agent may display the following error and warning messages:

Message: Invalid PMM function number.
Cause: PMM is not installed or is not working correctly. Try updating the BIOS.

Message: PMM allocation error.
Cause: PMM could not or did not allocate the requested amount of memory for driver usage.

Message: Option ROM initialization error. 64-bit PCI BAR addresses not supported, AX=
Cause: This may be caused by the system BIOS assigning a 64-bit BAR (Base Address Register) to the network port. Running the BootUtil utility with the -64d command line option may resolve this issue.

Message: PXE-E00: This system does not have enough free conventional memory. The Intel Boot Agent cannot continue.
Cause: The system does not have enough free memory to run the PXE image. The Intel Boot Agent was unable to find enough free base memory (below 640K) to install the PXE client software. The system cannot boot via PXE in its current configuration. The error returns control to the BIOS and the system does not attempt to remote boot. If this error persists, try updating your system's BIOS to the most recent version. Contact your system administrator or your computer vendor's customer support to resolve the problem.

Message: PXE-E01: PCI Vendor and Device IDs do not match!
Cause: Image vendor and device ID do not match those located on the card. Make sure the correct flash image is installed on the adapter.

Message: PXE-E04: Error reading PCI configuration space. The Intel Boot Agent cannot continue.
Cause: PCI configuration space could not be read. Machine is probably not PCI compliant. The Intel Boot Agent was unable to read one or more of the adapter's PCI configuration registers. The adapter may be misconfigured, or the wrong Intel Boot Agent image may be installed on the adapter. The Intel Boot Agent will return control to the BIOS and not attempt to remote boot. Try to update the flash image. If this does not solve the problem, contact your system administrator or Intel Customer Support.

Message: PXE-E05: The LAN adapter's configuration is corrupted or has not been initialized. The Intel Boot Agent cannot continue.
Cause: The adapter's EEPROM is corrupted. The Intel Boot Agent determined that the adapter EEPROM checksum is incorrect. The agent will return control to the BIOS and not attempt to remote boot. Try to update the flash image. If this does not solve the problem, contact your system administrator or Intel Customer Support.

Message: PXE-E06: Option ROM requires DDIM support.
Cause: The system BIOS does not support DDIM. The BIOS does not support the mapping of the PCI expansion ROMs into upper memory as required by the PCI specification. The Intel Boot Agent cannot function in this system. The Intel Boot Agent returns control to the BIOS and does not attempt to remote boot. You may be able to resolve the problem by updating the BIOS on your system. If updating your system's BIOS does not fix the problem, contact your system administrator or your computer vendor's customer support to resolve the problem.

Message: PXE-E07: PCI BIOS calls not supported.
Cause: BIOS-level PCI services are not available. Machine is probably not PCI compliant.

Message: PXE-E09: Unexpected UNDI loader error. Status == xx
Cause: The UNDI loader returned an unknown error status. xx is the status returned.

Message: PXE-E20: BIOS extended memory copy error.
Cause: BIOS could not move the image into extended memory.

Message: PXE-E20: BIOS extended memory copy error. AH == xx
Cause: An error occurred while trying to copy the image into extended memory. xx is the BIOS failure code.

Message: PXE-E51: No DHCP or BOOTP offers received.
Cause: The Intel Boot Agent did not receive any DHCP or BOOTP responses to its initial request. Please make sure that your DHCP server (and/or proxyDHCP server, if one is in use) is properly configured and has sufficient IP addresses available for lease. If you are using BOOTP instead, make sure that the BOOTP service is running and is properly configured.

Message: PXE-E53: No boot filename received.
Cause: The Intel Boot Agent received a DHCP or BOOTP offer, but has not received a valid filename to download. If you are using PXE, please check your PXE and BINL configuration. If using BOOTP, be sure that the service is running and that the specific path and filename are correct.

Message: PXE-E61: Media test failure.
Cause: The adapter does not detect link. Please make sure that the cable is good and is attached to a working hub or switch. The link light visible from the back of the adapter should be lit.

Message: PXE-EC1: Base-code ROM ID structure was not found.
Cause: No base code could be located. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC3: BC ROM ID structure is invalid.
Cause: Base code could not be installed. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC4: UNDI ID structure was not found.
Cause: UNDI ROM ID structure signature is incorrect. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC5: UNDI ROM ID structure is invalid.
Cause: The structure length is incorrect. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC6: UNDI driver image is invalid.
Cause: The UNDI driver image signature was invalid. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: PXE-EC8: !PXE structure was not found in UNDI driver code segment.
Cause: The Intel Boot Agent could not locate the needed !PXE structure resource. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image. This may also be caused by the system BIOS assigning a 64-bit BAR (Base Address Register) to the network port. Running the BootUtil utility with the -64d command line option may resolve this issue.

Message: PXE-EC9: PXENV+ structure was not found in UNDI driver code segment.
Cause: The Intel Boot Agent could not locate the needed PXENV+ structure. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

Message: This option has been locked and cannot be changed.
Cause: You attempted to change a configuration setting that has been locked by your system administrator. This message can appear either from within Intel PROSet's Boot Options tab when operating under Windows* or from the Configuration Setup Menu when operating in a stand-alone environment. If you think you should be able to change the configuration setting, consult your system administrator.

Message: PXE-M0E: Retrying network boot; press ESC to cancel.
Cause: The Intel Boot Agent did not successfully complete a network boot due to a network error (such as not receiving a DHCP offer). The Intel Boot Agent will continue to attempt to boot from the network until successful or until canceled by the user. This feature is disabled by default. For information on how to enable this feature, contact Intel Customer Support.
The following list of problems and associated solutions covers a representative set of problems that you might
encounter while using the Intel Boot Agent.
After booting, my computer experiences problems
After the Intel Boot Agent product has finished its sole task (remote booting), it no longer has any effect on
the client computer operation. Thus, any issues that arise after the boot process is complete are most likely
not related to the Intel Boot Agent product.
If you are having problems with the local (client) or network operating system, contact the operating system
manufacturer for assistance. If you are having problems with some application program, contact the
application manufacturer for assistance. If you are having problems with any of your computer's hardware or
with the BIOS, contact your computer system manufacturer for assistance.
Cannot change boot order
If you are accustomed to redefining your computer's boot order using the motherboard BIOS setup program,
the default settings of the Intel Boot Agent setup program can override that setup. To change the boot
sequence, you must first override the Intel Boot Agent setup program defaults. A configuration setup menu
appears allowing you to set configuration values for the Intel Boot Agent. To change your computer's boot
order setting, see Configuring the Boot Agent in a Pre-boot PXE Environment.
My computer does not complete POST
If your computer fails to boot with an adapter installed, but does boot when you remove the adapter, try
moving the adapter to another computer and using BootUtil to disable the Flash ROM.
If this does not work, the problem may be occurring before the Intel Boot Agent software even begins
operating. In this case, there may be a BIOS problem with your computer. Contact your computer
manufacturer's customer support group for help in correcting your problem.
There are configuration/operation problems with the boot process
If your PXE client receives a DHCP address, but then fails to boot, you know the PXE client is working
correctly. Check your network or PXE server configuration to troubleshoot the problem. Contact Intel
Customer Support if you need further assistance.
POST hang may occur if two or more ports on Quad Port Server Adapters are configured for PXE
If you have an Intel Gigabit VT Quad Port Server Adapter, Intel PRO/1000 PT Quad Port LP Server
Adapter, or an Intel PRO/1000 PF Quad Port Server Adapter with two or more ports configured for PXE, you
may experience POST hangs on some server systems. If this occurs, the suggested workaround is to move
the adapter to another system and disable PXE on all but one port of the adapter. You may also be able to
prevent this problem by disabling any on-board SCSI or SAS controllers in your system BIOS.
PXE option ROM does not follow the PXE specification with respect to the final "discover" cycle
In order to avoid long wait periods, the option ROM no longer includes the final 32-second discover cycle. (If
there was no response in the prior 16-second cycle, it is almost certain that there will be none in the final 32-
second cycle.)
Anytime the configuration setup menu is displayed (see Configuring the Boot Agent in a Pre-boot PXE
Environment), diagnostics information is also displayed on the lower portion of the screen. This information
can be helpful during interaction with Intel Customer Support personnel or your IT team members.
NOTE: Actual diagnostics information may vary, depending upon the adapter(s) installed in your
computer.
Diagnostics information may include the following items:
PWA Number: The Printed Wire Assembly number identifies the adapter's model and version.
Memory: The memory address assigned by the BIOS for memory-mapped adapter access.
I/O: The I/O port address assigned by the BIOS for I/O-mapped adapter access.
UNB: The address in upper memory where the Boot Agent is installed by the BIOS.
PCI ID: The set of PCI identification values from the adapter, in the form
VendorID/DeviceID/SubvendorID/SubdeviceID/Revision.
Requirements
1. Make sure the iSCSI initiator system starts the iSCSI Boot firmware. The firmware should be
configured properly, be able to connect to the iSCSI target, and detect the boot disk.
2. You will need Microsoft* iSCSI Software Initiator with integrated software boot support. This boot
version of the initiator is available from Microsoft.
3. To enable crash dump support, follow the steps in Crash Dump Support.
Intel Ethernet iSCSI Boot features a setup menu which allows two network ports in one system to be
enabled as iSCSI Boot devices. To configure Intel iSCSI Boot, power on or reset the system and press
Ctrl-D when the message "Press <Ctrl-D> to run setup..." is displayed. After pressing
Ctrl-D, you will be taken to the Intel iSCSI Boot Port Selection Setup Menu.
NOTE: When booting an operating system from a local disk, Intel Ethernet iSCSI Boot should be
disabled for all network ports.
The iSCSI CHAP Configuration menu has the following options to enable CHAP authentication:
l Use CHAP - Selecting this checkbox will enable CHAP authentication for this port. CHAP allows the
target to authenticate the initiator. After enabling CHAP authentication, a user name and target
password must be entered.
l User Name - Enter the CHAP user name in this field. This must be the same as the CHAP user name
configured on the iSCSI target.
l Target Secret - Enter the CHAP password in this field. This must be the same as the CHAP password
configured on the iSCSI target and must be between 12 and 16 characters in length. This password
cannot be the same as the Initiator Secret.
l Use Mutual CHAP - Selecting this checkbox will enable Mutual CHAP authentication for this port.
Mutual CHAP allows the initiator to authenticate the target. After enabling Mutual CHAP
authentication, an initiator password must be entered. Mutual CHAP can only be selected if Use
CHAP is selected.
l Initiator Secret - Enter the Mutual CHAP password in this field. This password must also be
configured on the iSCSI target and must be between 12 and 16 characters in length. This password
cannot be the same as the Target Secret.
The CHAP Authentication feature of this product requires the following acknowledgements:
This product includes cryptographic software written by Eric Young ([email protected]). This product
includes software written by Tim Hudson ([email protected]).
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit.
(http://www.openssl.org/).
Intel PROSet for Windows* Device Manager
Many of the functions of the Intel iSCSI Boot Port Selection Setup Menu can also be configured or revised
from Windows Device Manager. Open the adapter's property sheet and select the Boot Options tab. You
must install the latest Intel Ethernet Adapter drivers and software to access this tab.
NOTE: To support iSCSI Boot, the target needs to support multiple sessions from the same ini-
tiator. Both the iSCSI Boot firmware initiator and the OS High Initiator need to establish an iSCSI
session at the same time. Both these initiators use the same Initiator Name and IP Address to con-
nect and access the OS disk but these two initiators will establish different iSCSI sessions. In order
for the target to support iSCSI Boot, the target must be capable of supporting multiple sessions and
client logins.
1. Configure a disk volume on your iSCSI target system. Note the LUN ID of this volume for use when
configuring the Intel Ethernet iSCSI Boot firmware setup.
2. Note the iSCSI Qualified Name (IQN) of the iSCSI target, which will likely look like:
iqn.1986-03.com.intel:target1
This value is used as the iSCSI target name when you configure your initiator system's Intel
Ethernet iSCSI Boot firmware.
3. Configure the iSCSI target system to accept the iSCSI connection from the iSCSI initiator. This
usually requires listing the initiator's IQN or MAC address to permit the initiator to access the disk
volume. See the Firmware Setup section for information on how to set the iSCSI initiator name.
4. One-way authentication can optionally be enabled for secure communication. Challenge-Handshake
Authentication Protocol (CHAP) is enabled by configuring a username and password on the iSCSI
target system. For setting up CHAP on the iSCSI initiator, refer to the Firmware Setup section.
You can connect and boot from a target LUN that is larger than 2 Terabytes with the following restrictions:
l The block size on the target must be 512 bytes
l The following operating systems are supported:
l VMware* ESX 5.0, or later
l Red Hat* Enterprise Linux* 6.3, or later
l SUSE* Enterprise Linux 11SP2, or later
l Microsoft* Windows Server* 2012, or later
l You may be able to access data only within the first 2 TB.
NOTE: The Crash Dump driver does not support target LUNs larger than 2TB.
If you are using Dynamic Host Configuration Protocol (DHCP), the DHCP server needs to be configured to
provide the iSCSI Boot configuration to the iSCSI initiator. You must set up the DHCP server to specify
Root Path option 17 and Host Name option 12 to respond with iSCSI target information to the iSCSI initiator.
DHCP option 3 (Router List) may be necessary, depending on the network configuration.
DHCP Root Path Option 17:
The iSCSI root path option configuration string uses the following format:
iscsi:<server name or IP address>:<protocol>:<port>:<LUN>:<targetname>
The target name must be in IQN format, for example: iqn.1986-03.com.intel:target1
DHCP Host Name Option 12:
Configure option 12 with the hostname of the iSCSI initiator.
DHCP Option 3, Router List:
Configure option 3 with the gateway or router IP address, if the iSCSI initiator and
iSCSI target are on different subnets.
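As an illustration, a minimal ISC dhcpd host entry supplying options 17, 12, and 3 might look like the following sketch (the MAC address, IP addresses, hostname, and IQN are placeholder values; in the root path string, 6 selects TCP and 3260 is the default iSCSI port):

host iscsi-initiator {
    hardware ethernet 00:AA:BB:CC:DD:EE;
    # Option 12: hostname of the iSCSI initiator
    option host-name "iscsi-initiator";
    # Option 17: iscsi:<server>:<protocol>:<port>:<LUN>:<targetname>
    option root-path "iscsi:192.168.1.10:6:3260:0:iqn.1986-03.com.intel:target1";
    # Option 3: gateway, needed if initiator and target are on different subnets
    option routers 192.168.1.1;
}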
Crash dump file generation is supported for iSCSI-booted Windows Server x64 by the Intel iSCSI Crash
Dump Driver. To ensure a full memory dump is created:
1. Set the page file size equal to or greater than the amount of RAM installed on your system; this is
necessary for a full memory dump.
2. Ensure that the amount of free space on your hard disk is able to handle the amount of RAM installed
on your system.
To setup crash dump support follow these steps:
1. Setup Windows iSCSI Boot.
2. If you have not already done so, install the latest Intel Ethernet Adapter drivers and Intel PROSet for
Windows Device Manager.
3. Open Intel PROSet for Windows Device Manager and select the Boot Options Tab.
4. From Settings, select iSCSI Boot Crash Dump, set the Value to Enabled, and click OK.
iSCSI Troubleshooting
The table below lists problems that can possibly occur when using Intel Ethernet iSCSI Boot. For each
problem a possible cause and resolution are provided.
Problem: Intel Ethernet iSCSI Boot does not load on system startup and the sign-on banner is not displayed.
Resolution:
l While the system logon screen may display for a longer time during system startup, Intel Ethernet iSCSI Boot may not be displayed during POST. It may be necessary to disable a system BIOS feature in order to display messages from Intel iSCSI Remote Boot. From the system BIOS menu, disable any quiet boot or quick boot options. Also disable any BIOS splash screens. These options may be suppressing output from Intel iSCSI Remote Boot.
l Intel Ethernet iSCSI Remote Boot has not been installed on the adapter or the adapter's flash ROM is disabled. Update the network adapter using the latest version of BootUtil as described in the Flash Images section of this document. If BootUtil reports the flash ROM is disabled, use the "BOOTUTIL -flashenable" command to enable the flash ROM and update the adapter.
l The system BIOS may be suppressing output from Intel Ethernet iSCSI Boot.
l Sufficient system BIOS memory may not be available to load Intel Ethernet iSCSI Boot. Attempt to disable unused disk controllers and devices in the system BIOS setup menu. SCSI controllers, RAID controllers, PXE-enabled network connections, and shadowing of system BIOS all reduce the memory area available to Intel Ethernet iSCSI Boot. Disable these devices and reboot the system to see if Intel iSCSI Boot is able to initialize. If disabling the devices in the system BIOS menu does not resolve the problem, then attempt to remove unused disk devices or disk controllers from the system. Some system manufacturers allow unused devices to be disabled by jumper settings.

Problem: After installing Intel Ethernet iSCSI Boot, the system will not boot to a local disk or network boot device. The system becomes unresponsive after Intel Ethernet iSCSI Boot displays the sign-on banner or after connecting to the iSCSI target.
Resolution:
l A critical system error has occurred during iSCSI Remote Boot initialization. Power on the system and press the 's' key or 'ESC' key before Intel iSCSI Remote Boot initializes. This will bypass the Intel Ethernet iSCSI Boot initialization process and allow the system to boot to a local drive. Use the BootUtil utility to update to the latest version of Intel Ethernet iSCSI Remote Boot.
l Updating the system BIOS may also resolve the issue.

Problem: "Intel iSCSI Remote Boot" does not show up as a boot device in the system BIOS boot device menu.
Resolution:
l The system BIOS may not support Intel Ethernet iSCSI Boot. Update the system BIOS with the most recent version available from the system vendor.
l A conflict may exist with another installed device. Attempt to disable unused disk and network controllers. Some SCSI and RAID controllers are known to cause compatibility problems with Intel iSCSI Remote Boot.

Problem: Error message displayed: "Failed to detect link"
Resolution: Intel Ethernet iSCSI Boot was unable to detect link on the network port. Check the link detection light on the back of the network connection. The link light should illuminate green when link is established with the link partner. If the link light is illuminated but the error message still displays, then attempt to run the Intel link and cable diagnostics tests using DIAGS.EXE for DOS or Intel PROSet for Windows.

Problem: Error message displayed: "DHCP Server not found!"
Resolution: iSCSI was configured to retrieve an IP address from DHCP but no DHCP server responded to the DHCP discovery request. This issue can have multiple causes:
l The DHCP server may have used up all available IP address reservations.
l The client iSCSI system may require static IP address assignment on the connected network.
l There may not be a DHCP server present on the network.
l Spanning Tree Protocol (STP) on the network switch may be preventing the Intel iSCSI Remote Boot port from contacting the DHCP server. Refer to your network switch documentation on how to disable Spanning Tree Protocol.

Problem: Error message displayed: "PnP Check Structure is invalid!"
Resolution: Intel Ethernet iSCSI Boot was not able to detect a valid PnP PCI BIOS. If this message displays, Intel Ethernet iSCSI Boot cannot run on the system in question. A fully PnP-compliant PCI BIOS is required to run Intel iSCSI Remote Boot.

Problem: Error message displayed: "Unsupported SCSI disk block size!"
Resolution: The iSCSI target system is configured to use a disk block size that is not supported by Intel Ethernet iSCSI Boot. Configure the iSCSI target system to use a disk block size of 512 bytes.

Problem: Error message displayed: "ERROR: Could not establish TCP/IP connection with iSCSI target system."
Resolution: Intel Ethernet iSCSI Boot was unable to establish a TCP/IP connection with the iSCSI target system. Verify that the initiator and target IP address, subnet mask, port, and gateway settings are configured properly. Verify the settings on the DHCP server if applicable. Check that the iSCSI target system is connected to a network accessible to the Intel iSCSI Remote Boot initiator. Verify that the connection is not being blocked by a firewall.

Problem: Error message displayed: "ERROR: CHAP authentication with target failed."
Resolution: The CHAP user name or secret does not match the CHAP configuration on the iSCSI target system. Verify that the CHAP configuration on the Intel iSCSI Remote Boot port matches the iSCSI target system CHAP configuration. Disable CHAP in the iSCSI Remote Boot setup menu if it is not enabled on the target.

Problem: Error message displayed: "ERROR: Login request rejected by iSCSI target system."
Resolution: A login request was sent to the iSCSI target system but the login request was rejected. Verify that the iSCSI initiator name, target name, LUN number, and CHAP authentication settings match the settings on the iSCSI target system. Verify that the target is configured to allow the Intel iSCSI Remote Boot initiator access to a LUN.

Problem: When installing Linux to NetApp Filer, after a successful target disk discovery, error messages similar to the following may be seen:
Iscsi-sfnet:hostx: Connect failed with rc -113: No route to host
Iscsi-sfnet:hostx: establish_session failed. Could not connect to target
Resolution:
l If these error messages are seen, unused iscsi interfaces on the NetApp Filer should be disabled.
l Continuous=no should be added to the iscsi.conf file.

Problem: Error message displayed: "ERROR: iSCSI target not found."
Resolution: A TCP/IP connection was successfully made to the target IP address; however, an iSCSI target with the specified iSCSI target name could not be found on the target system. Verify that the configured iSCSI target name and initiator name match the settings on the iSCSI target.

Problem: Error message displayed: "ERROR: iSCSI target can not accept any more connections."
Resolution: The iSCSI target cannot accept any new connections. This error could be caused by a configured limit on the iSCSI target or a limitation of resources (no disks available).

Problem: Error message displayed: "ERROR: iSCSI target has reported an error."
Resolution: An error has occurred on the iSCSI target. Inspect the iSCSI target to determine the source of the error and ensure it is configured properly.

Problem: Error message displayed: "ERROR: There is an IP address conflict with another system on the network."
Resolution: A system on the network was found using the same IP address as the iSCSI Option ROM client.
l If using a static IP address assignment, attempt to change the IP address to something which is not being used by another client on the network.
l If using an IP address assigned by a DHCP server, make sure there are no clients on the network which are using an IP address which conflicts with the IP address range used by the DHCP server.
iSCSI Known Issues
A device cannot be uninstalled if it is configured as an iSCSI primary or secondary port.
Disabling the iSCSI primary port also disables the secondary port. To boot from the secondary port, change it
to be the primary port.
iSCSI Remote Boot: Connecting back-to-back to a target with a Broadcom LOM
Connecting an iSCSI boot host to a target through a Broadcom LOM may occasionally cause the connection
to fail. Use a switch between the host and target to avoid this.
iSCSI Remote Boot Firmware may show 0.0.0.0 in DHCP server IP address field
When using a Linux-based DHCP server, the iSCSI Remote Boot firmware may show 0.0.0.0 in the DHCP
server IP address field. The iSCSI Remote Boot firmware reads the DHCP server IP address from the Next-
Server field in the DHCP response packet, but a Linux-based DHCP server may not set that field by default.
Add "Next-Server <IP Address>;" in dhcpd.conf to show the correct DHCP server IP address.
iSCSI traffic stops after disabling RSC
To prevent a lost connection, disable Receive Segment Coalescing (RSC) before configuring a VLAN bound
to a port that will be used for connecting to an iSCSI target. Disabling RSC before setting up the VLAN avoids
this traffic stop.
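On Microsoft* Windows Server* 2012 and later, one way to disable RSC on the port beforehand is with PowerShell (the adapter name is a placeholder; on older systems, use the adapter's Advanced settings in Device Manager instead):

Disable-NetAdapterRsc -Name "Ethernet"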
Microsoft Initiator does not boot without link on boot port:
After setting up the system for Intel Ethernet iSCSI Boot with two ports connected to a target and
successfully booting the system, if you later try to boot the system with only the secondary boot port
connected to the target, Microsoft Initiator will continuously reboot the system.
To work around this limitation, follow these steps:
1. Using Registry Editor, expand the following registry key:
\System\CurrentControlSet\Services\Tcpip\Parameters
Starting with version 2.2.0.0, the iSCSI crash dump driver gained the ability to support platforms booted using
the native UEFI iSCSI initiator over supported Intel Network Adapters. This support is available on Windows
Server or newer and only on x64 architecture. Any hotfixes listed above must also be applied.
Since network adapters on UEFI platforms may not provide legacy iSCSI option ROM, the boot options tab in
DMIX may not provide the setting to enable the iSCSI crash dump driver. If this is the case, the following
registry entry has to be created:
HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<InstanceID>\Parameters
DumpMiniport REG_SZ iscsdump.sys
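One way to create that entry is with reg.exe from an elevated command prompt; in this sketch, 0007 is a placeholder for your adapter's <InstanceID>, so verify the full path in Registry Editor first:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0007\Parameters" /v DumpMiniport /t REG_SZ /d iscsdump.sys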
Moving iSCSI adapter to a different slot:
In a Windows* installation, if you move the iSCSI adapter to a PCI slot other than the one that it occupied
when the drivers and MS iSCSI Remote Boot Initiator were installed, then a System Error may occur during
the middle of the Windows Splash Screen. This issue goes away if you return the adapter to its original PCI
slot. We recommend not moving the adapter used for iSCSI boot installation. This is a known OS issue.
If you have to move the adapter to another slot, then perform the following:
1. Boot the operating system and remove the old adapter
2. Install a new adapter into another slot
3. Set up the new adapter for iSCSI Boot
4. Perform iSCSI boot to the OS via the original adapter
5. Make the new adapter iSCSI-bootable to the OS
6. Reboot
7. Move the old adapter into another slot
8. Repeat steps 2 - 5 for the old adapter you have just moved
Uninstalling Driver can cause blue screen
If the driver for the device in use for iSCSI Boot is uninstalled via Device Manager, Windows will blue screen
on reboot and the OS will have to be re-installed. This is a known Windows issue.
During OS install, injecting drivers from a DUP can cause a blue screen
During installation of Microsoft Windows Server 2012 on an iSCSI LUN, if you inject drivers from a DUP
during the installation, you may experience a blue screen. Please install the hotfix described in kb2782676 to
resolve the issue.
Adapters flashed with iSCSI image are not removed from the Device Manager during uninstall
During uninstallation, all other Intel Network Connection software is removed, but drivers for iSCSI Boot
adapters that have boot priority are not removed.
I/OAT Offload may stop with Intel Ethernet iSCSI Boot or with Microsoft Initiator installed
A workaround for this issue is to change the following registry value to "0":
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IOATDMA\Start
Only change the registry value if iSCSI Boot is enabled and if you want I/OAT offloading. A blue screen will
occur if this setting is changed to "0" when iSCSI Boot is not enabled. It must be set back to "3" if iSCSI Boot
is disabled or a blue screen will occur on reboot.
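From an elevated command prompt, the change and its reversal look like the following sketch (apply the first line only while iSCSI Boot is enabled, as cautioned above):

rem Allow I/OAT offload alongside iSCSI Boot
reg add HKLM\SYSTEM\CurrentControlSet\Services\IOATDMA /v Start /t REG_DWORD /d 0 /f
rem Restore the default before disabling iSCSI Boot
reg add HKLM\SYSTEM\CurrentControlSet\Services\IOATDMA /v Start /t REG_DWORD /d 3 /f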
NDIS Driver May Not Load During iSCSI Boot F6 Install With Intel PRO/1000 PT Server Adapter
If you are using two Intel PRO/1000 PT Server Adapters in two PCI Express x8 slots of a rack mounted
Xeon system, Windows installation can be done only via a local HDD procedure.
Automatic creation of iSCSI traffic filters for DCB, using Virtual Adapters created by Hyper-V, is only supported on Microsoft*
Windows Server* 2008 R2 and later.
The iSCSI for Data Center Bridging (DCB) feature uses Quality of Service (QOS) traffic filters to tag outgoing
packets with a priority. The Intel iSCSI Agent dynamically creates these traffic filters as needed for Windows
Server 2008 R2 and later.
Invalid CHAP Settings May Cause Windows Server 2008 to Blue Screen
If an iSCSI Boot port CHAP user name and secret do not match the target CHAP user name and secret,
Windows Server 2008 may blue screen or reboot during installation or boot. Ensure that all CHAP settings
match those set on the target(s).
Microsoft* Windows Server* 2008 Installation When Performing a WDS Installation
If you perform a WDS installation and attempt to manually update drivers during the installation, the drivers
load but the iSCSI Target LUN does not display in the installation location list. This is a known WDS limitation
with no current fix. You must therefore either perform the installation from DVD or USB media or inject the
drivers into the WDS WinPE image.
Microsoft has published a knowledge base article explaining the limitation in loading drivers when installing
with iSCSI Boot via a WDS server.
http://support.microsoft.com/kb/960924
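If you choose to inject the drivers into the WDS WinPE image, a DISM sequence along these lines is one approach (a sketch; the paths and image index are placeholders, and the commands assume an elevated prompt with the Windows deployment tools installed):

dism /Mount-Wim /WimFile:C:\images\boot.wim /Index:2 /MountDir:C:\mount
dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\intel /Recurse
dism /Unmount-Wim /MountDir:C:\mount /Commit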
With high iSCSI traffic on Microsoft* Windows 2003 Server* R2, link flaps can occur with 82598-based silicon
This issue is caused by the limited support for Large Send Offload (LSO) in this operating system. Please
note that if iSCSI traffic is required for Windows 2003 Server R2, LSO will be disabled.
F6 Driver Does Not Support Standby Mode.
If you are performing an F6 installation using the Windows without a Local Disk procedure, do not use
Standby Mode.
F6 installation may fail with some EMC targets
An F6 installation may fail during the reboot in step 10 of Installing Windows 2003 without a Local Disk
because of a conflict between the Intel F6 driver, the Microsoft iSCSI Initiator and the following EMC target
model firmware versions:
l AX4-5 arrays: 02.23.050.5.705 or higher.
l CX300, CX500, CX700, and CX-3 Series arrays: 03.26.020.5.021 or higher.
l CX-4 Series arrays: 04.28.000.5.701 or higher, including all 04.29.000.5.xxx revisions.
To avoid the failure, ensure that the secondary iSCSI port cannot reach the target during the reboot in step 10.
iSCSI Boot and Teaming in Windows
Teaming is not supported with iSCSI Boot. Creating a team using the primary and secondary iSCSI adapters
and selecting that team during the Microsoft initiator installation may fail with constant reboots. Do not select
a team for iSCSI Boot, even if it is available for selection during initiator installation.
For load balancing and failover support, you can use MSFT MPIO instead. Check the Microsoft Initiator User
Guide for instructions on how to set up MPIO.
Setting LAA (Locally Administered Address) on an iSCSI Boot-Enabled Port Will Cause System Failure on Next Reboot
If a device is not set to primary but is enumerated first, the BIOS will still use that device's version of iSCSI
Boot. Therefore the user may end up using an earlier version of Intel Ethernet iSCSI Boot than expected.
The solution is that all devices in the system must have the same version of iSCSI Boot. To do this the user
should go to the Boot Options Tab and update the devices' flash to the latest version.
iSCSI over DCB (priority tagging) is not possible on the port on which VMSwitch is created. This is by design
in Microsoft* Windows Server* 2012.
Automatic creation of iSCSI traffic filters for DCB is only supported on networks which make use of IPv4 addressing
The iSCSI for Data Center Bridging (DCB) feature uses Quality of Service (QOS) traffic filters to tag outgoing
packets with a priority. The Intel iSCSI Agent dynamically creates these traffic filters as needed on networks
using IPv4 addressing.
IPv6 iSCSI login to Dell EqualLogic arrays using jumbo frames
To establish an iSCSI session using IPv6 and jumbo frames with Dell EqualLogic arrays, TCP/UDP
checksum offloads on the Intel iSCSI adapter should be disabled.
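On Windows Server 2012 and later, a PowerShell sketch for disabling those offloads follows (the adapter name is a placeholder; older systems expose the same settings on the adapter's Advanced tab in Device Manager):

Disable-NetAdapterChecksumOffload -Name "Ethernet" -TcpIPv6 -UdpIPv6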
Linux Channel Bonding has basic compatibility issues with iSCSI Boot and should not be used.
Authentication errors on EqualLogic target may show up in dmesg when running Red Hat* Enterprise Linux 4
These error messages do not indicate a block in login or booting and may safely be ignored.
LRO and iSCSI Incompatibility
LRO (Large Receive Offload) is incompatible with iSCSI target or initiator traffic. A panic may occur when
iSCSI traffic is received through the ixgbe driver with LRO enabled. The driver should be built and installed
with:
# make CFLAGS_EXTRA=-DIXGBE_NO_LRO install
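Where rebuilding the driver is impractical, LRO can sometimes also be disabled at runtime, assuming your ixgbe version exposes the flag through ethtool:

# ethtool -K eth1 lro off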
Installing and Configuring Intel Ethernet FCoE Boot on a Microsoft* Windows* Client
WARNINGS:
l Do not update the base driver via the Windows Update method
Doing so may render the system inoperable, generating a blue screen. The FCoE Stack
and base driver need to be matched. The FCoE stack may get out of sync with the base
driver if the base driver is updated via Windows Update. Updating can only be done via the
Intel Network Connections Installer.
l If you are running Microsoft* Windows Server* 2012 R2, you must install
KB2883200.
Failure to do so may result in an Error 1719 and a blue screen.
New Installation on a Windows Server* system
From the Intel downloaded media: Click the FCoE/DCB checkbox to install Intel Ethernet FCoE Protocol
Driver and DCB. The MSI Installer installs all FCoE and DCB components including Base Driver.
Microsoft Hotfixes
The following Microsoft hotfixes have been found to be needed for specific use cases:
Windows 2008 R2
l KB983554 - High-performance storage devices fix
l KB2708811 - Data corruption occurs under random write stress
Multipath I/O (MPIO)
Windows 2008 R2
l KB979743 - MPIO - write errors
l KB981379 - MS DSM - target issues
Windows 2008 R2 SP1
l KB2406705
Configuring MPIO Timers: <http://technet.microsoft.com/en-us/library/ee619749(WS.10).aspx>
contains additional information about these registry settings.
Set the PathRecoveryInterval value to 60
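Assuming the standard MPIO parameters key (verify it against the TechNet page above), the value can be set from an elevated command prompt as in this sketch:

reg add HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters /v PathRecoveryInterval /t REG_DWORD /d 60 /f
rem PathRecoveryInterval only takes effect when UseCustomPathRecoveryInterval is 1
reg add HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters /v UseCustomPathRecoveryInterval /t REG_DWORD /d 1 /f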
Intel Ethernet FCoE Configuration Using Intel PROSet for Windows* Device Manager
Many FCoE functions can also be configured or revised using Intel PROSet for Windows* Device Manager,
accessed from the FCoE Properties button within the Data Center tab. You can use Intel PROSet to
perform the following tasks:
l Configure FCoE initiator specific settings
l Go to the corresponding port driver
l Review FCoE initiator information
l Obtain general information
l Review statistics
l Obtain information about the initiator
l Obtain information about attached devices
l FIP discovered VLANs and status
In addition, you can find some FCoE RSS performance settings under the Performance Options of the
Advanced tab of the Network Adapter device properties. For additional information see Receive Side Scaling.
NOTES:
l From the Boot Options tab, the user will see the Flash Information button. Clicking on the
Flash Information button will open the Flash Information dialog. From the Flash Information
dialog, clicking on the Update Flash button allows Intel iSCSI Remote Boot, Intel Boot Agent
(IBA), Intel Ethernet FCoE Boot, EFI, and CLP to be written. The update operation writes a
new image to the adapter's Flash and modifies the EEPROM, which may temporarily disable
the operation of the Windows* network device driver. You might need to reboot the computer
following this operation.
l You cannot update the flash image of a LOM; this button will be disabled.
1. Create a disk target (LUN) on an available Fibre Channel target. Configure this LUN to be accessible to
the WWPN address of the initiator of the host being booted.
2. Make sure the client system starts the Intel Ethernet FCoE Boot firmware. The firmware should be
configured properly, be able to connect to Fibre Channel target, and detect the boot disk.
Many of the functions of the Intel Ethernet FCoE Boot Port Selection Setup Menu can also be configured or
revised using Intel PROSet for Windows Device Manager.
l Intel Ethernet FCoE Boot version is displayed on the Boot Options tab if the combo image supports
FCoE Boot.
l Intel Ethernet FCoE Boot is an Active Image option if FCoE Boot is supported by the combo image.
l The Active Image setting enables/disables Intel Ethernet FCoE Boot in the EEPROM.
l Intel Ethernet FCoE Boot settings are displayed if FCoE Boot is the active image.
Upgrading an FCoE-booted system can only be done via the Intel Network Connections Installer. A reboot
is required to complete the upgrade. You cannot upgrade a port's Windows driver and software package if the
port is in the path to the virtual memory paging file and is also part of a Microsoft Server 2012 NIC Team
(LBFO Team). To complete the upgrade, remove the port from the LBFO team and restart the upgrade.
The software for Intel Ethernet FCoE comprises two major components: the Intel Ethernet base driver and
the Intel Ethernet FCoE driver. They are developed and validated as an ordered pair. You are strongly
encouraged to avoid scenarios, whether through upgrades or Windows Update, where the Intel Ethernet
driver version is not the version released with the corresponding Intel Ethernet FCoE driver. For more
information, visit the download center.
NOTES:
l Individually upgrading/downgrading the Intel Ethernet FCoE driver will not work and may
even cause a blue screen; the entire FCoE package must be the same version. Upgrade the
entire FCoE package using the Intel Network Connections installer only.
l If you uninstalled the Intel Ethernet Virtual Storage Miniport Driver for FCoE component,
just find the same version that you uninstalled and re-install it; or uninstall and then re-install
the entire FCoE package.
To configure Intel Ethernet FCoE Boot, switch on or reset the system and press Ctrl-D when the message
"Press <Ctrl-D> to run setup..." is displayed. After pressing Ctrl-D, you will be taken to the Intel
Ethernet FCoE Boot Port Selection Setup Menu.
The first screen of the Intel Ethernet FCoE Boot Setup Menu displays a list of Intel FCoE Boot-capable
adapters. For each adapter port, the associated SAN MAC address, PCI device ID, PCI bus/device/function
location, and a field indicating FCoE Boot status is displayed. Up to 10 FCoE Boot-capable ports can be
displayed within the Port Selection Menu. If there are more Intel FCoE Boot-capable adapters, these are not
listed in the setup menu.
Highlight the desired port and press Enter.
FCoE Boot Targets Configuration Menu
FCoE Boot Targets Configuration: Discover Targets is highlighted by default. If the Discover VLAN
value displayed is not what you want, enter the correct value. Highlight Discover Targets and then press
Enter to show targets associated with the Discover VLAN value. Under Target WWPN, if you know the
desired WWPN you can manually enter it or press Enter to display a list of previously discovered targets.
FCoE Target Selection Menu
Highlight the desired Target from the list and press Enter.
Boot Order valid values are 0-4, where 0 means no boot order or ignore the target. A 0 value
also indicates that this port should not be used to connect to the target. Boot order values of 1-4
can only be assigned once to target(s) across all FCoE boot-enabled ports.
VLAN value is 0 by default. You may do a Discover Targets which will display a VLAN. If the
VLAN displayed is not the one you require, enter the VLAN manually and then perform
Discover Targets on that VLAN.
Hit Save.
NOTE: After the Discover Targets function is executed, the Option ROM will attempt to remain
logged into the fabric until the FCoE Boot Targets Configuration Menu is exited.
l Keyboard Shortcuts: Up/Down, TAB and SHIFT-TAB to move between the controls.
Left/Right/Home/End/Del/Backspace in the edit boxes.
l Press the Esc key to leave the screen.
NOTE: The UEFI FCoE driver must be loaded before you perform the following steps.
Adding an FCoE Attempt
An FCoE Attempt is a configured instance of a target from which the system will attempt to boot over FCoE.
1. From the FCoE Configuration menu, select Add an Attempt. All supported ports are displayed.
2. Select the desired port. The FCoE Boot Targets Configuration screen is displayed.
3. Select Discover Targets to automatically discover available targets (alternatively, you can manually
enter the fields on the FCoE Boot Targets Configuration screen). The Select from Discovered
Targets option displays a list of previously discovered targets.
4. Select Auto-Discovery. Note that the auto-discovery process may take several minutes. When auto-
discovery is complete, the Select Target screen is displayed. Discover VLAN is the VLAN associated
with a discovered adapter. There can be more than one target on a given VLAN.
5. Select the desired target from the list. The FCoE Boot Targets Configuration screen is displayed with
completed fields for the selected target.
6. Press F10 (Save) to add this FCoE attempt. The FCoE Configuration screen is displayed with the
newly added FCoE attempt listed.
Deleting an Existing FCoE Attempt
3. To delete the selected attempts, choose Commit Changes and Exit. To exit this screen without
deleting the selected attempts, choose Discard Changes and Exit.
Changing the Order of FCoE Attempts
3. Use the arrow keys to change the attempt order. When satisfied, press the Enter key to exit the dialog.
The new attempt order is displayed.
4. To save the new attempt order, select Commit Changes and Exit. To exit without saving changes,
select Discard Changes and Exit.
NOTE: If your FCoE Boot target is on a VLAN other than VLAN #1, then you must use the POST
Boot Menu (Ctrl-D) to discover the target.
The Intel Ethernet Virtual Storage Miniport Driver for FCoE may disappear from the Device Manager after
either:
l A virtual network is removed.
l The underlying Intel NIC adapter settings are modified.
This can occur when the corresponding Intel adapter is virtualized to create a new virtual network, or when an
existing virtual network is deleted or modified. It can also happen when the underlying Intel NIC adapter
settings are modified, including disabling or re-enabling the adapter.
As a workaround, remove all the resource dependencies of the Intel Ethernet Virtual Storage Miniport Driver
for FCoE that are currently being used by the system before making any changes to the Intel adapter for
virtualization. For example, in one use case scenario, the user may have assigned the FCoE disk(s) from the
FCoE storage driver to run one of its Virtual Machines, and at the same time the user wants to alter the
configuration of the same Intel adapter for virtualization. In this scenario the user must remove the FCoE
disks(s) from the Virtual Machine before altering the Intel adapter configuration.
Virtual Port may disappear from Virtual Machine
When the Virtual Machine starts, it asks the Intel Ethernet Virtual Storage Miniport Driver for FCoE ("the
driver") to create a Virtual Port. If the driver is subsequently disabled, the Virtual Port may disappear. The only
way to get the Virtual Port back is to enable the driver and reboot the Virtual Machine.
When installing FCoE after installing ANS and creating AFT Team, Storports are not installed
If the user installs ANS and creates an AFT team and then installs FCoE/DCB, the result is that DCB is off by
default. If the user then enables DCB on one port, the OS detects Storports, and the user must manually click
through the new hardware wizard prompts for each of them to complete installation. If the user does not do
that, DCB status is non-operational and the reason given is no peer.
Intel PROSet for Windows Device Manager (DMiX) is not synched with FCoE CTRL-D Utility
When the user disables FCoE via the Control-D menu, the Intel PROSet for Windows Device Manager User
Interface states that the flash contains an FCoE image, but that the flash needs to be updated. Updating the
flash with the FCoE image again, re-enables FCoE and returns the user to the state where all the FCoE
settings are available.
If the user uses the control-D menu to disable FCoE, then they should use the control-D menu to enable it
because Intel PROSet for Windows Device Manager does not support enabling or disabling FCoE.
82599 and X540-based adapters don't display as SPC-3 compliant in Windows MPIO configuration
Because the FCoE initiator is a virtualized device it does not have its own unique hardware ID and thus is not
displayed as a SPC-3 compliant device in Windows MPIO configuration.
When removing ALB teaming, all FCoE functions fail, all DMIX tabs are grayed out, and both adapter ports fail
For ANS teaming to work with Microsoft Network Load Balancer (NLB) in unicast mode, the team's LAA must
be set to cluster node IP. For ALB mode, Receive Load Balancing must be disabled. For further configuration
details, refer to http://support.microsoft.com/?id=278431
ANS teaming will work when NLB is in multicast mode, as well. For proper configuration of the adapter in this
mode, refer to http://technet.microsoft.com/en-ca/library/cc726473(WS.10).aspx
FCoE and TCP/IP traffic on the same VLAN may not work on some switches
The FCoE Option ROM may not discover the desired VLAN when performing VLAN discovery from the
Discover Targets function. If the Discover VLAN box is populated with the wrong VLAN, then enter the
desired VLAN before executing Discover Targets.
Windows Known Issues
Windows uses a paging file on the local disk
After imaging, if the local disk is not removed before booting from the FCoE disk then Windows may use the
paging file from the local disk.
Crash dump to FCoE disks is only supported to the FCoE Boot LUN
When the FCoE Option ROM connects to an FCoE disk during boot, the Windows installer may be unable to
determine if the system was booted from FCoE or not and will block the FCoE uninstall. To uninstall,
configure the Option ROM so that it does not connect to an FCoE disk.
Unable to create VLAN interfaces with Intel Ethernet FCoE Boot enabled
When booted with FCoE, a user cannot create VLANs and/or Teams for other traffic types. This prevents
converged functionality for non-FCoE traffic.
Server adapter configured for FCoE Boot available as External-Shared vnic via Hyper-V
If a port is set as a boot port and the user installs the Hyper-V role in the system and then goes into the
Hyper-V Network Manager to select which port to externally virtualize, the boot port displays, which it should
not.
When setting the port to a boot port in Intel PROSet for Windows Device Manager, a message shows that the
user should restart the system for the changes to be effective, but does not force a restart. As a result, the
user-level applications are in boot mode (i.e., the Data Center tab is grayed out) but kernel-level drivers
haven't been restarted to indicate to the OS that the port is a boot port. When the user then adds the Hyper-V
service to the system, the OS takes a snapshot of the available ports, and this is the snapshot that it uses
after the Hyper-V role is added, the system is restarted, and the user goes into the Hyper-V Virtual Network
Manager to virtualize the ports. As a result, the boot port also shows up.
Solutions:
Restart the system after setting a port to a boot port and before adding the Hyper-V role. The port then does
not appear in the list of virtualizable ports in the Hyper-V Virtual Network Manager.
Disable/enable the port in Device Manager after setting it to boot and before adding the Hyper-V role. The port
then does not appear in the list of virtualizable ports in the Hyper-V Virtual Network Manager.
FCoE Linkdown Timeout fails prematurely when Remote Booted
If an FCoE-booted port loses link for longer than the time specified in the Linkdown Timeout advanced
setting in the Intel Ethernet Virtual Storage Miniport Driver for FCoE, the system will crash. Linkdown
Timeout values greater than 30 seconds may not provide extra time before a system crash.
Windows fails to boot properly after using the image install method
The following situation may arise when installing Windows for FCoE Boot using the imaging method:
Windows boots successfully from the FCoE LUN when the local drive is installed, but when the local drive is
removed, Windows seems to boot, but fails before reaching the desktop.
In this case it is likely that the Windows installation resides on both the FCoE LUN and local drive. This can
be verified by booting from the FCoE LUN with the local drive installed, then comparing the drive letter in the
path of files on the desktop with the drive letter for the boot partition in Windows' Disk Management tool. If the
drive letters are different, then the Windows installation is split between the two disks.
If this situation is occurring, please ensure that fcoeprep is run prior to capturing the image, and that the
system is not allowed to local boot between running fcoeprep and capturing the image. In addition, the local
drive could be removed from the system prior to the first boot from the FCoE LUN.
Troubleshooting
Common Problems and Solutions
Many network problems are simple and easy to fix. Review each of the following before
going further.
l Check for recent changes to hardware, software, or the network that may have disrupted
communications.
l Check the driver software.
l Make sure you are using the latest appropriate drivers for your adapter from the Intel support
website.
l Disable (or unload), then re-enable (reload) the driver or adapter.
l Check for conflicting settings. Disable advanced settings such as teaming or VLANs to see if it
corrects the problem.
l Re-install the drivers.
l Check the cable. Use the best available cabling for the intended data rate.
l Check that the cable is securely attached at both points.
l Make sure the cable length does not exceed specifications.
l For copper connections, make sure the cable is a 4-pair Category 5 for 1000BASE-T or
100BASE-TX or a 4-pair Category 6 for 10GBASE-T.
l Perform a cable test.
l Replace the cable.
l Check the link partner (switch, hub, etc.).
l Make sure the link partner is active and can send and receive traffic.
l Make sure the adapter and link partner settings match one another, or are set to auto-negotiate.
l Make sure the port is enabled.
l Re-connect to another available port or another link partner.
l Look for adapter hardware problems.
l Re-seat the adapter.
l Insert the adapter in another slot.
l Check for conflicting or incompatible hardware devices and settings.
l Replace the adapter.
l Check the Intel support website for possible documented issues.
l Select your adapter from the adapter family list.
l Check the Frequently Asked Questions section.
l Check the Knowledge Base.
l Check your process monitor and other system monitors.
l Check to see that there is sufficient processor and memory capacity to perform networking
activity.
l Look for any unusual activity (or lack of activity).
l Use network testing programs to check for basic connectivity.
l Check your BIOS version and settings.
l Use the latest appropriate BIOS for your computer.
l Make sure the settings are appropriate for your computer.
The following troubleshooting table assumes that you have already reviewed the common problems and
solutions.
Problem: Your computer cannot find the adapter
Solution: Make sure your adapter slots are compatible with the type of adapter you are using:
l PCI Express v1.0 (or newer)
l PCI-X v2.0
l PCI v2.2

Problem: Diagnostics pass but the connection fails
Solution: Make sure the cable is securely attached, is the proper type, and does not exceed the recommended lengths. Try running the Sender-Responder diagnostic test. Make sure the duplex mode and speed setting on the adapter matches the setting on the switch.

Problem: Another adapter stops working after you installed the Intel Network Adapter
Solution: Make sure your PCI BIOS is current. See PCI / PCI-X / PCI Express Configuration below. Check for interrupt conflicts and sharing problems. Make sure the other adapter supports shared interrupts. Also, make sure your operating system supports shared interrupts. Unload all PCI device drivers, then reload all drivers.

Problem: The device does not connect at the expected speed
Solution: When Gigabit Master/Slave mode is forced to "master" mode on both the Intel adapter and its link partner, the link speed obtained by the Intel adapter may be lower than expected.

Problem: The adapter stops working without apparent cause
Solution: Run the adapter and network tests described under "Test the Adapter".

Problem: The Link indicator light is off
Solution: Run the adapter and network tests described under "Test the Adapter". Make sure the proper (and latest) driver is loaded. Make sure that the link partner is configured to auto-negotiate (or is forced to match the adapter). Verify that the switch is IEEE 802.3ad-compliant.

Problem: The link light is on, but communications are not properly established
Solution: Make sure the proper (and latest) driver is loaded. Both the adapter and its link partner must be set to either auto-detect or manually set to the same speed and duplex settings.

Problem: RX or TX light is off
Solution: The network may be idle; try creating traffic while monitoring the lights.

Problem: The diagnostic utility reports the adapter is "Not enabled by BIOS"
Solution: The PCI BIOS isn't configuring the adapter correctly. See PCI / PCI-X / PCI Express Configuration.

Problem: The computer hangs when the drivers are loaded
Solution: Try changing the PCI BIOS interrupt settings. See PCI / PCI-X / PCI Express Configuration.

Problem: The Fan Fail LED of the 10 Gigabit AT Server Adapter is on (red)
Solution: The fan cooling solution is not functioning properly. Contact customer support for further instructions.

PCI / PCI-X / PCI Express Configuration
If the adapter is not recognized by your OS or if it does not work, you may need to change some BIOS settings. Try the following only if you are having problems with the adapter and are familiar with BIOS settings.
l Check to see that the "Plug-and-Play" setting is compatible with the operating system you are using.
l Make sure the slot is enabled.
l Install the adapter in a bus-master slot.
l Configure interrupts for level-triggering, as opposed to edge-triggering.
l Reserve interrupts and/or memory addresses. This prevents multiple buses or bus slots from using the same interrupts. Check the BIOS for IRQ options for PCI / PCI-X / PCIe.
Multiple Adapters
When configuring a multi-adapter environment, you must upgrade all Intel adapters in the computer to the
latest software.
If the computer has trouble detecting all adapters, consider the following:
l If you enable Wake on LAN* (WoL) on more than two adapters, the Wake on LAN feature may overdraw
your system's auxiliary power supply, resulting in the inability to boot the system and other unpredictable
problems. For multiple desktop/management adapters, it is recommended that you install one adapter at a
time and use the IBAUtil utility (ibautil.exe in \APPS\BOOTAGNT) to disable the WoL feature on adapters
that do not require WoL capabilities. On server adapters, the WoL feature is disabled by default.
l Adapters with the Intel Boot Agent enabled require a portion of the limited startup memory for each
enabled adapter. Disable the service on adapters that do not need to boot Pre-Boot Execution Environment
(PXE).
Test the Adapter
NOTE: The Cable Test is not supported on all adapters; it is only available on adapters that support it.
NOTE: Hardware tests will fail if the adapter is configured for iSCSI Boot.
To access these tests, select the adapter in Windows Device Manager, click the Link tab, and click
Diagnostics. A Diagnostics window displays tabs for each type of test. Click the appropriate tab and run the
test.
The availability of these tests is dependent on the adapter and operating system. Tests may be disabled if:
l iSCSI Boot is enabled on the port.
l FCoE Boot is enabled on the port.
l The port is used as a manageability port.
l The tests are being run from a virtual machine.
Linux Diagnostics
The driver utilizes the ethtool interface for driver configuration and diagnostics, as well as displaying statistical
information. ethtool version 1.6 or later is required for this functionality.
The latest release of ethtool can be found at: http://sourceforge.net/projects/gkernel.
NOTE: ethtool 1.6 only supports a limited set of ethtool options. Support for a more complete ethtool
feature set can be enabled by upgrading ethtool to the latest version.
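For example, the following standard ethtool commands identify the loaded driver, run the driver's self-test, and display statistics. This is a sketch; the interface name eth0 is an assumption, so substitute your own interface name:

    # Show the driver name and version bound to the interface
    ethtool -i eth0
    # Run the driver self-test; "offline" runs the full test set and interrupts normal traffic
    ethtool -t eth0 offline
    # Display the adapter and driver statistics counters
    ethtool -S eth0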
Responder Testing
The Intel adapter can send test messages to another Ethernet adapter on the same network. This testing is
available in DOS via the diags.exe utility downloaded from Customer Support.
DOS Diagnostics
Intel's diagnostic software lets you test the adapter to see if there are any problems with the adapter
hardware, the cabling, or the network connection. You can also use diagnostics to isolate problems during
troubleshooting.
DIAGS.EXE runs under MS-DOS* and compatible operating systems. It will not run from a Windows*
Command Prompt in any version of the Microsoft Windows operating system, or in any other non-MS-DOS
operating system.
This utility is designed to test hardware operation and confirm the adapter's ability to communicate with
another adapter in the same network. It is not a throughput measurement tool.
DIAGS can test the adapter whether or not a responder is present. To do a thorough test, however, you
should set up a second system on the network as a responder prior to starting a test. If the utility has hot
keys, the corresponding letters are highlighted on screen.
NOTE: If there is an MS-DOS network driver present, such as NDIS2 or DOS-ODI, the test utility
and the network driver could become unstable. You should reboot and ensure that there are no
network drivers loaded.
1. Boot to MS-DOS.
2. Navigate to the \DOSUtilities\UserDiag directory, then type DIAGS at the prompt and press <Enter>.
The test utility program automatically scans the hardware and lists all Intel-based adapters. They are
listed in this manner:
l If you have only one network connection in your system, this screen will be bypassed.
l If you have a dual-port or quad-port adapter, each port is listed separately starting with Port A,
then Port B, etc. You can find the port information on the bracket label.
3. Select the adapter you want to test by moving the highlight and pressing <Enter>. The test utility pro-
gram displays its main menu.
Selecting View Adapter Configuration will bring up the adapter configuration screen. This screen describes
various properties of the adapter.
Press <F5> to view additional information on the PCI Express slot occupied by the adapter. This information
is primarily used for troubleshooting by Customer Support.
Press any key to return to Adapter Configuration.
Selecting Test Adapter from the Main Menu brings up the Test Menu. This menu allows you to select which
tests to perform on the adapter and configure the test options.
Begin Adapter Tests
Selecting this option brings up the test screen. While tests are being performed, a rotating spinner is shown to
indicate the application is still "alive." The results of the tests are displayed as each test is performed. If
multiple test passes are selected, the results contain a count of test failures. A list containing zeros means
that all tests have passed. Single tests will display "Passed" or "Failed" for each pass.
Change Test Options
The test setup screen allows you to configure and select the specific tests desired. Each option is toggled by
moving the cursor with the arrow keys and pressing <Enter> to change the option. The number of tests is
simply entered from the keyboard in the appropriate box. If there is a gap in the menu, that means the test is
not supported by your adapter. By default, local diagnostics run automatically, while network diagnostics are
disabled.
NOTE: The test program will test attributes that are applicable to your adapter. Only supported tests
are displayed.
Device Registers - Test patterns are written, read, and verified through the adapter's device registers to
ensure proper functionality.
FIFOs - This test writes test bit patterns to the adapter's FIFO buffers to make sure the FIFOs are working
properly. Not all adapters have FIFO buffers, so this test does not appear in all test lists.
EEPROM - This test verifies both the readability of the EEPROM and the integrity of the data stored in it. It
reads the EEPROM and calculates the checksum, which is then compared to the checksum stored in the
EEPROM. If the values are not the same, the test reports failure.
Interrupt - This tests the adapter's ability to generate an interrupt and have it propagated through the system
to the Programmable Interrupt Controller (PIC). The test triggers an interrupt by setting the interrupt cause
register and then verifies that an interrupt has been triggered.
Loopback - There are two internal loopback tests. These tests set the adapter in the appropriate loopback
mode and send packets back through the adapter's receive circuitry and logic. These tests are chipset-
dependent and may not be selectable.
Link - Checks whether the adapter has link.
Network Test - The Network Test looks for a responder, and then sends packets. If no responder is found, the
test reports failure. If packets are received back from the responder, the test reports success.
NOTE: In some instances, the test may fail when it is connected to a switch with Spanning Tree
Protocol enabled.
Networking Menu
The networking menu contains network-specific tests, such as Spanning Tree detection and Network test
responder.
Set Up as Responder
This allows the user to set up the adapter as a responder so a connected system can perform the network test
portion of the diagnostics tests. Although you can use a variety of adapters as the responder and connect
directly or through a switch, the best results are obtained with a cross-over cable and a same-type adapter.
When you press <Esc>, the responder operation is canceled and control is immediately returned to the
Networking menu.
Detect Spanning Tree
Spanning trees can be troublesome in a networking configuration. The Detect Spanning Tree option attempts
to detect if a spanning tree exists on the network. This is done by resetting the link and listening for spanning
tree packets.
Indicator Lights
The Intel Server and Desktop network adapters feature indicator lights on the adapter backplate that serve to
indicate activity and the status of the adapter board. The following tables define the meaning of the possible
states of the indicator lights for each adapter board.
The Intel Ethernet 40G 2P XL710 QSFP+ rNDC has the following indicator lights:
Label     Indication        Meaning
ACT/LNK   Green             Linked at 40 Gb
          Blinking on/off   Actively transmitting or receiving data
          Off               No link; no activity
The Intel Ethernet Converged Network Adapter X520-Q1 has the following indicator lights:
Label     Indication        Meaning
ACT/LNK   Green             Linked at 10 Gb
          Yellow            Linked at 1 Gb
          Blinking on/off   Actively transmitting or receiving data
          Off               No link
The Intel 10 Gigabit AF DA Dual Port Server Adapter and Intel Ethernet Server Adapter X520
series of adapters have the following indicator lights:
Label              Indication        Meaning
ACT/LNK (A or B)   Green, blinking on/off   Actively transmitting or receiving data
                   Off               No link
LNK                Green             Linked at 10 Gb
                   Yellow            Linked at 1 Gb
ACT                Off               No link
The Intel Ethernet Converged Network Adapter X520-4 has the following indicator lights:
Label     Indication        Meaning
ACT/LNK   Green             Linked at 10 Gb
          Yellow            Linked at 1 Gb
          Blinking on/off   Actively transmitting or receiving data
          Off               No link
Dual Port Copper Adapters
The Intel Ethernet Converged Network Adapter X550-T2 has the following indicator lights:
Indication   Meaning
Off          No link
The Intel 10 Gigabit CX4 Dual Port Server Adapter has the following indicator lights:
Indication   Meaning
Off          No link
The Intel Ethernet Server Adapter I350-T2, I340-T2, PRO/1000 P, PT Dual Port, and Gigabit ET Dual
Port Server Adapters have the following indicator lights:
Label     Indication       Meaning
ACT/LNK   Green flashing   Data activity
          Off              No link
          Off              10 Mbps
The Intel PRO/1000 MT and GT Dual Port Server Adapters have the following indicator lights for each
port:
Label         Indication       Meaning
ACT/LNK       Green flashing   Data activity
              Off              No link
10/100/1000   Off              10 Mbps
              Green            100 Mbps
              Orange           1000 Mbps
The PRO/100+ Dual Port Server adapter (with three LEDs per port) has the following indicator lights:
Label   Indication   Meaning
        Off          The adapter and switch are not receiving power; the cable connection between the
                     switch and adapter is faulty; or you have a driver configuration problem.
The Intel PRO/100 S Dual Port Server adapter (with 64-bit PCI connector) has the following indicator
lights:
Indication   Meaning
Off          No link
The Intel 10 Gigabit AT Server Adapter has the following indicator lights:
Indication       Meaning
Green blinking   Data activity
Off              No link
Yellow           1 Gbps
The Intel PRO/1000 MT Server Adapter has the following indicator lights:
Label         Indication       Meaning
ACT/LNK       Green flashing   Data activity
              Off              No link
10/100/1000   Off              10 Mbps
              Green            100 Mbps
              Orange           1000 Mbps
The Intel Gigabit CT2, Gigabit CT, PRO/1000 T, and PRO/1000 MT Desktop Adapters have the fol-
lowing indicator lights:
Label         Indication       Meaning
ACT/LNK       Green flashing   Data activity
              Off              No link
10/100/1000   Off              10 Mbps
              Green            100 Mbps
              Yellow           1000 Mbps
The Intel PRO/1000 XT Server Adapter has the following indicator lights:
Label         Indication       Meaning
ACT/LNK       Green flashing   Data activity
              Off              No link
10/100/1000   Off              10 Mbps
              Green            100 Mbps
              Yellow           1000 Mbps
The Intel PRO/1000 T Server Adapter has the following indicator lights:
Indication   Meaning
Off          10 Mbps
The Intel PRO/100+, PRO/100 M, PRO/100 S, and PRO/100 VE and VM Desktop adapters and Net-
work Connections have the following indicator lights:
Label     Indication       Meaning
ACT/LNK   Green flashing   Data activity
          Off              No link
          Off              10 Mbps
The Intel PRO/1000 MT, GT and PT Quad Port Server Adapters have the following indicator lights for
each port:
Label               Indication       Meaning
Top LED (ACT/LNK)   Green flashing   Data activity
                    Off              No link
The Intel PRO/1000 MF, PF, and Gigabit EF Dual Port Server Adapters have the following indicator
lights for each port:
Indication   Meaning
Off          No link
Single Port Fiber Adapters
The Intel 10 Gigabit XF SR and LR Server Adapters have the following indicator lights:
Indication   Meaning
Off          No link
The Intel PRO/1000 MF and PF Server Adapters have the following indicator lights:
Indication   Meaning
Off          No link
The Intel PRO/1000 XF Server Adapter has the following indicator lights:
Indication   Meaning
Off          No link
The Intel PRO/1000F Server Adapter has the following indicator lights:
Indication   Meaning
Off          No link
The Intel PRO/1000 PF Quad Port Server Adapter has the following indicator lights:
Indication   Meaning
Off          No link
Known Issues
NOTE: iSCSI Known Issues and FCoE Known Issues are located in their own sections of this
manual.
When ports are teamed or bonded together in an active/passive configuration (for example, in a switch fault
tolerance team, or a mode 1 bond), the inactive port may send out frequent LLDP packets, which results in
lost data packets. This may occur with Intel ANS teaming on Microsoft Windows operating systems or with
channel bonding on Linux systems. To resolve the issue, set one of the ports to be the Primary port.
On a system running Microsoft Windows Server 2016, inside a Virtual Machine running Microsoft Windows
Server 2016 or Windows Server 2012 R2, Intel Ethernet connections may show a Code 10 error (yellow
bang) in Windows Device Manager. Installing a cumulative update that contains Microsoft KB3192366 and
KB3176936 will resolve the issue.
If you have an Intel PCI Express adapter installed, running at 10 or 100 Mbps, half-duplex, with TCP
Segmentation Offload (TSO) enabled, you may observe occasional dropped receive packets. To work around
this problem, disable TSO or update the network to operate in full duplex or at 1 Gbps.
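On Linux, TSO can be checked and disabled with ethtool's offload options; a sketch, assuming the interface name eth0:

    # List the current offload settings, including tcp-segmentation-offload
    ethtool -k eth0
    # Disable TCP Segmentation Offload as a workaround
    ethtool -K eth0 tso off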
If an Intel gigabit adapter is under extreme stress and is hot-swapped, throughput may significantly drop. This
may be due to the PCI property configuration by the Hot-Plug software. If this occurs, throughput can be
restored by restarting the system.
Setting RSS Queues to a value greater than 4 is only advisable for large servers with several processors.
Values greater than 4 may increase CPU utilization to unacceptable levels and have other negative impacts
on system performance.
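On Linux, a comparable queue limit can be inspected and applied with ethtool's channel options; a sketch, assuming the interface name eth0 and a driver that exposes combined channels:

    # Show the supported and current queue (channel) counts
    ethtool -l eth0
    # Cap the device at 4 combined queues
    ethtool -L eth0 combined 4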
If you try to install an unsupported module, the port may no longer install any subsequent module,
regardless of whether that module is supported. The port will show a yellow bang in Windows Device
Manager, and Event ID 49 (unsupported module) will be added to the system log when this issue occurs. To
resolve this issue, the system must be completely powered off.
If you enable NPar and SR-IOV on the same device, the number of virtual functions enabled and displayed in
lspci may be 8 or fewer. ESXi limits the number of virtual functions to 8 per device. Also, due to ESXi
limitations, the number of virtual functions created may be less than the number requested. See the ESXi
documentation at http://pubs.vmware.com/ for details.
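As a quick check of how many virtual functions the host actually created, you can count the virtual function entries reported by lspci; a sketch, assuming a shell where SR-IOV VFs are listed as "Virtual Function" devices:

    # Count PCI functions that identify themselves as SR-IOV virtual functions
    lspci | grep -ci "virtual function"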
Windows Known Issues
Intermittent Link Loss and Degraded Performance at High Stress Can Occur on Windows Server
2012 Systems
In a Windows Server 2012-based system with multi-core processors, possible intermittent link loss and
degraded performance at high stress may occur due to incorrect RSS processor assignments. More
information and a Microsoft hotfix are available at: http://support.microsoft.com/kb/2846837.
Virtual machine loses link on a Microsoft Windows Server 2012 R2 system
On a Microsoft Windows Server 2012 R2 system with VMQ enabled, if you change the BaseRssProcessor
setting, then install Microsoft Hyper-V and create one or more virtual machines, the virtual machines may lose
link. Installing the April 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2
(2919355) and hotfix 3031598 will resolve the issue. See http://support2.microsoft.com/kb/2919355 and
http://support2.microsoft.com/kb/3031598 for details.
DCB QoS and Priority Flow Control do not act as expected
If you use Microsoft's Data Center Bridging (DCB) implementation to configure Quality of Service (QoS) and
Priority Flow Control (PFC), the actual traffic flow segregation per traffic class may not match your
configuration, and PFC may not pause traffic as expected. If you mapped more than one priority to a Traffic
Class, enabling only one of the priorities and disabling the others will work around the issue. Installing Intel's
DCB implementation will also resolve this issue. This issue affects Microsoft Windows Server 2012 and
Server 2012 R2.
Link loss after changing the Jumbo Frames setting
Inside a guest partition on a Microsoft Windows Server 2012 R2 Hyper-V virtual machine, if you change the
jumbo frame Advanced setting on an Intel X540 based Ethernet Device or associated Hyper-V NetAdapter,
you may lose link. Changing any other Advanced Setting will resolve the issue.
Virtual Machine Queues are not allocated until reboot
On a Microsoft Windows Server 2012 R2 system with Intel Ethernet Gigabit Server adapters installed, if you
install Hyper-V and create a VM switch, Virtual Machine Queues (VMQ) are not allocated until you reboot the
system. Virtual machines can send and receive traffic on the default queue, but no VMQs will be used until
after a system reboot.
Application Error Event IDs 789, 790, and 791 in the Event Log
If Data Center Bridging (DCB) is enabled, and the enabled port loses link, the following three events may be
logged in the event log:
l Event ID 789: Enhanced Transmission Selection feature on a device has changed to non-operational
l Event ID 790: Priority Flow Control feature on a device has changed to non-operational
l Event ID 791: Application feature on a device has changed to non-operational (FCoE)
This is the expected behavior when a DCB-enabled port loses link. DCB will begin working again as soon as
link is reestablished. A port will lose link if the cable is disconnected, the driver or software package is
updated, if the link partner goes down, or for other reasons.
"Malicious script detected" Warning from Norton AntiVirus During PROSet Uninstall
The Intel PROSet uninstall process uses a Visual Basic script as part of the process. Norton AntiVirus and
other virus scanning software may mistakenly flag this as a malicious or dangerous script. Letting the script
run allows the uninstall process to complete normally.
Unexpected Connectivity Loss
If you uncheck the "Allow the computer to turn off this device to save power" box on the Power Management
tab and then put the system to sleep, you may lose connectivity when you exit sleep. You must disable and
enable the NIC to resolve the issue. Installing Intel PROSet for Windows Device Manager will also resolve
the issue.
VLAN Creation Fails on a Team that Includes a Non-Intel Phantom Adapter
If you are unable to create a VLAN on a team that includes a non-Intel phantom adapter, use Device Manager
to remove the team, then recreate the team without the phantom adapter, and add the team to the VLAN.
A VLAN Created on an Intel Adapter Must be Removed Before a Multi-Vendor Team Can be
Created.
In order to create the team, the VLAN must first be removed.
Receive Side Scaling value is blank
Changing the Receive Side Scaling setting of an adapter in a team may cause the value for that setting to
appear blank when you next check it. It may also appear blank for the other adapters in the team. The adapter
may be unbound from the team in this situation. Disabling and enabling the team will resolve the issue.
RSS Load Balancing Profile Advanced Setting
Setting the "RSS load balancing profile" Advanced Setting to "ClosestProcessor" may significantly reduce
CPU utilization. However, in some system configurations (such as a system with more Ethernet ports than
processor cores), the "ClosestProcessor" setting may cause transmit and receive failures. Changing the
setting to "NUMAScalingStatic" will resolve the issue.
Opening Windows Device Manager property sheet takes longer than expected
The Windows Device Manager property sheet may take 60 seconds or longer to open. The driver must
discover all Intel Ethernet devices and initialize them before it can open the property sheet. This data is
cached, so subsequent openings of the property sheet are generally quicker.
When Jumbo Frames is set to 9K with a 10GbE adapter, a 90%/10% ETS traffic split will not actually be
attained on any particular port, despite settings being made on the DCB switch. When ETS is set to a
90%/10% split, an actual observed split of 70%/30% is more likely.
You must not lower Receive_Buffers or Transmit_Buffers below 256 if jumbo frames are enabled on an Intel
10GbE Device. Doing so will cause loss of link.
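On Linux, the equivalent ring sizes can be read and adjusted with ethtool; a sketch, assuming the interface name eth0 and keeping both rings at or above the 256-descriptor floor noted above:

    # Show the current and maximum RX/TX ring sizes
    ethtool -g eth0
    # Set both rings well above 256 descriptors
    ethtool -G eth0 rx 512 tx 512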
If you have non-Intel networking devices capable of Receive Side Scaling installed in your system, the
Microsoft Windows registry keyword RSSBaseCpu may have been changed from the default value of 0x0
to point to a logical processor. If this keyword has been changed, then devices based on Intel 82598 or
82599 10 Gigabit Ethernet Controllers might not pass traffic. Attempting to make driver changes in this state
may cause system instability. Set the value of RSSBaseCpu to 0x0, or to a value corresponding to a physical
processor, and reboot the system to resolve the issue.
Continuous PFC pause frames sent from Intel Ethernet X520 based devices
If you have an Intel Ethernet X520 based device connected to a switch port and modify the DCB bandwidth
settings on the switch port, the Intel Ethernet X520 device may perpetually send pause frames, causing a
storm, and fail to transfer data to and from the storage targets it was using. To recover from this issue, disable
the X520 ports, re-enable them, and then reconnect to the iSCSI target volumes. To avoid the issue, if the
DCB bandwidth settings need to be changed, do one of the following:
l Power down the server that contains the Intel Ethernet X520 device prior to modifying the DCB bandwidth
settings.
l Disable the switch ports connected to the Intel X520 based device.
l Have no traffic running on the Intel X520 based device.
If you set the PCIe Maximum Payload Size to 256 bytes in your system BIOS and install an 82599-based
NIC, you may receive an NMI when the NIC attains link. This happens when the physical slot does not
support a payload size of 256 Bytes even if the BIOS does. Moving the adapter to a slot that supports 256
bytes will resolve the issue. Consult your system documentation for information on supported payload values.
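On Linux, you can verify what payload size the slot and device actually negotiated with lspci; a sketch, where the device address 01:00.0 is a placeholder for your adapter:

    # DevCap shows what the device supports; DevCtl shows what was negotiated
    # (run as root for the full capability dump)
    lspci -vv -s 01:00.0 | grep -i MaxPayload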
Safety Compliance
The following safety standards apply to all products listed above.
l UL 60950-1, 2nd Edition, 2011-12-19 (Information Technology Equipment - Safety - Part 1: General
Requirements)
l CSA C22.2 No. 60950-1-07, 2nd Edition, 2011-12 (Information Technology Equipment - Safety - Part 1:
General Requirements)
l EN 60950-1:2006/A11:2009/A1:2010/A12:2011 (European Union)
l IEC 60950-1:2005 (2nd Edition); Am 1:2009 (International)
l EU LVD Directive 2006/95/EC
Class B products:
l FCC Part 15 (Class B) Radiated & Conducted Emissions (USA)
l CAN ICES-3(B)/NMB-3(B) Radiated & Conducted Emissions (Canada)
l CISPR 22 Radiated & Conducted Emissions (International)
l EN55022: 2010 Radiated & Conducted Emissions (European Union)
l EN55024: 2010 Immunity (European Union)
l EU EMC Directive 2004/108/EC
l VCCI (Class B) Radiated & Conducted Emissions (Japan) (excluding optics)
l CNS13438 (Class B)-2006 Radiated & Conducted Emissions (Taiwan) (excluding optics)
l AS/NZS CISPR 22 Radiated & Conducted Emissions (Australia/New Zealand)
l KN22; KN24 Korean emissions and immunity
l NRRA No. 2012-13 (2012.06.28), NRRA Notice No. 2012-14 (2012.06.28) (Korea)
NOTE: This equipment has been tested and found to comply with the limits for a Class A digital
device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable
protection against harmful interference when the equipment is operated in a commercial environment.
This equipment generates, uses and can radiate radio frequency energy and, if not installed and
used in accordance with the instructions, may cause harmful interference to radio communications.
Operation of this equipment in a residential area is likely to cause harmful interference in which
case the user will be required to correct the interference at his own expense.
CAUTION: If the device is changed or modified without permission from Intel, the user may void
his or her authority to operate the equipment.
NOTE: This device complies with Part 15 of the FCC Rules. Operation is subject to the following
two conditions: (1) this device may not cause harmful interference, and (2) this device must accept
any interference received, including interference that may cause undesired operation.
The following products have been tested to comply with FCC Standards for Home or Office Use.
PRO/1000 MT, PRO/1000 PT, PRO/1000 GT, Gigabit PT, Gigabit ET, I210-T1, I340-T2/T4, I350-T2/T4,
PRO/100 M Desktop Adapter, PRO/100 S Desktop Adapter, PRO/100 S Server Adapter, and PRO/100 S
Dual Port Server Adapter
Manufacturer Declaration
Intel Corporation declares that the equipment described in this document is in conformance with the
requirements of the European Council Directive listed below:
l Low Voltage Directive 2006/95/EC
l EMC Directive 2004/108/EC
l RoHS Directive 2011/65/EU
These products follow the provisions of the European Directive 1999/5/EC.
Dette produkt er i overensstemmelse med det europæiske direktiv 1999/5/EC.
Dit product is in navolging van de bepalingen van Europees Directief 1999/5/EC.
Tämä tuote noudattaa EU-direktiivin 1999/5/EC määräyksiä.
Ce produit est conforme aux exigences de la Directive Européenne 1999/5/EC.
Dieses Produkt entspricht den Bestimmungen der Europäischen Richtlinie 1999/5/EC.
Þessi vara stenst reglugerð Evrópska Efnahags Bandalagsins númer 1999/5/EC.
Questo prodotto è conforme alla Direttiva Europea 1999/5/EC.
Dette produktet er i henhold til bestemmelsene i det europeiske direktivet 1999/5/EC.
Este produto cumpre com as normas da Diretiva Européia 1999/5/EC.
Este producto cumple con las normas del Directivo Europeo 1999/5/EC.
Denna produkt har tillverkats i enlighet med EG-direktiv 1999/5/EC.
This declaration is based upon compliance of the Class A products listed above to the following standards:
EN 55022:2010 (CISPR 22 Class A) RF Emissions Control.
EN 55024:2010 (CISPR 24) Immunity to Electromagnetic Disturbance.
EN 60950-1:2006/A11:2009/A1:2010/A12:2011 Information Technology Equipment - Safety - Part 1: General
Requirements.
EN 50581:2012 - Technical documentation for the assessment of electrical and electronic products with
respect to the restriction of hazardous substances.
This declaration is based upon compliance of the Class B products listed above to the following standards:
EN 55022:2010 (CISPR 22 Class B) RF Emissions Control.
EN 55024:2010 (CISPR 24) Immunity to Electromagnetic Disturbance.
EN 60950-1:2006/A11:2009/A1:2010/A12:2011 Information Technology Equipment - Safety - Part 1: General
Requirements.
EN 50581:2012 - Technical documentation for the assessment of electrical and electronic products with
respect to the restriction of hazardous substances.
WARNING: In a domestic environment, Class A products may cause radio interference, in which
case the user may be required to take adequate measures.
Responsible Party
Intel Corporation, Mailstop JF3-446
5200 N.E. Elam Young Parkway
Hillsboro, OR 97124-6497
Phone 1-800-628-8686
China RoHS Declaration
LICENSES
Please Note:
l If you are a network or system administrator, the "Site License" below shall apply to you.
l If you are an end user, the "Single User License" shall apply to you.
l If you are an original equipment manufacturer (OEM), the "OEM License" shall apply to you.
SITE LICENSE: You may copy the Software onto your organization's computers for your organization's use,
and you may make a reasonable number of back-up copies of the Software, subject to these conditions:
1. This Software is licensed for use only in conjunction with (a) physical Intel component
products, and (b) virtual ("emulated") devices designed to appear as Intel component
products to a Guest operating system running within the context of a virtual machine. Any
other use of the Software, including but not limited to use with non-Intel component
products, is not licensed hereunder.
2. Subject to all of the terms and conditions of this Agreement, Intel Corporation ("Intel") grants to you a
non-exclusive, non-assignable, copyright license to use the Software.
3. You may not copy, modify, rent, sell, distribute, or transfer any part of the Software except as provided
in this Agreement, and you agree to prevent unauthorized copying of the Software.
4. You may not reverse engineer, decompile, or disassemble the Software.
5. The Software may include portions offered on terms differing from those set out here, as set out in a
license accompanying those portions.
SINGLE USER LICENSE: You may copy the Software onto a single computer for your personal use, and
you may make one back-up copy of the Software, subject to these conditions:
1. This Software is licensed for use only in conjunction with (a) physical Intel component
products, and (b) virtual ("emulated") devices designed to appear as Intel component
products to a Guest operating system running within the context of a virtual machine. Any
other use of the Software, including but not limited to use with non-Intel component
products, is not licensed hereunder.
2. Subject to all of the terms and conditions of this Agreement, Intel Corporation ("Intel") grants to you a
non-exclusive, non-assignable, copyright license to use the Software.
3. You may not copy, modify, rent, sell, distribute, or transfer any part of the Software except as provided
in this Agreement, and you agree to prevent unauthorized copying of the Software.
4. You may not reverse engineer, decompile, or disassemble the Software.
5. The Software may include portions offered on terms differing from those set out here, as set out in a
license accompanying those portions.
OEM LICENSE: You may reproduce and distribute the Software only as an integral part of or incorporated in
your product, as a standalone Software maintenance update for existing end users of your products, excluding
any other standalone products, or as a component of a larger Software distribution, including but not limited to
the distribution of an installation image or a Guest Virtual Machine image, subject to these conditions:
1. This Software is licensed for use only in conjunction with (a) physical Intel component
products, and (b) virtual ("emulated") devices designed to appear as Intel component
products to a Guest operating system running within the context of a virtual machine. Any
other use of the Software, including but not limited to use with non-Intel component
products, is not licensed hereunder.
2. Subject to all of the terms and conditions of this Agreement, Intel Corporation ("Intel") grants to you a
non-exclusive, non-assignable, copyright license to use the Software.
3. You may not copy, modify, rent, sell, distribute or transfer any part of the Software except as provided
in this Agreement, and you agree to prevent unauthorized copying of the Software.
4. You may not reverse engineer, decompile, or disassemble the Software.
5. You may only distribute the Software to your customers pursuant to a written license agreement. Such
license agreement may be a "break-the-seal" license agreement. At a minimum such license shall
safeguard Intel's ownership rights to the Software.
6. You may not distribute, sublicense or transfer the Source Code form of any components of the
Software and derivatives thereof to any third party without the express written consent of Intel.
7. The Software may include portions offered on terms differing from those set out here, as set out in a
license accompanying those portions.
LICENSE RESTRICTIONS. You may NOT: (i) use or copy the Software except as provided in this
Agreement; (ii) rent or lease the Software to any third party; (iii) assign this Agreement or transfer the Software
without the express written consent of Intel; (iv) modify, adapt, or translate the Software in whole or in part
except as provided in this Agreement; (v) reverse engineer, decompile, or disassemble the Software; (vi)
attempt to modify or tamper with the normal function of a license manager that regulates usage of the
Software; (vii) distribute, sublicense or transfer the Source Code form of any components of the Software and
derivatives thereof to any third party without the express written consent of Intel; (viii) permit, authorize,
license or sublicense any third party to view or use the Source Code; (ix) modify or distribute the Source Code
or Software so that any part of it becomes subject to an Excluded License. (An "Excluded License" is one that
requires, as a condition of use, modification, or distribution, that (a) the code be disclosed or distributed in
source code form; or (b) others have the right to modify it.); (x) use or include the Source Code or Software in
deceptive, malicious or unlawful programs.
NO OTHER RIGHTS. No rights or licenses are granted by Intel to you, expressly or by implication, with
respect to any proprietary information or patent, copyright, mask work, trademark, trade secret, or other
intellectual property right owned or controlled by Intel, except as expressly provided in this Agreement. Except
as expressly provided herein, no license or right is granted to you directly or by implication, inducement,
estoppel, or otherwise. Specifically, Intel grants no express or implied right to you under Intel patents,
copyrights, trademarks, or other intellectual property rights.
OWNERSHIP OF SOFTWARE AND COPYRIGHTS. The Software is licensed, not sold. Title to all copies
of the Software remains with Intel. The Software is copyrighted and protected by the laws of the United States
and other countries and international treaty provisions. You may not remove any copyright notices from the
Software. You agree to prevent any unauthorized copying of the Software. Intel may make changes to the
Software, or to items referenced therein, at any time without notice, but is not obligated to support or update
the Software.
ADDITIONAL TERMS FOR PRE-RELEASE SOFTWARE. If the Software you are installing or using under
this Agreement is pre-commercial release or is labeled or otherwise represented as "alpha-" or "beta-"
versions of the Software ("pre-release Software"), then the following terms apply. To the extent that any
provision in this Section conflicts with any other term(s) or condition(s) in this Agreement with respect to pre-
release Software, this Section shall supersede the other term(s) or condition(s), but only to the extent
necessary to resolve the conflict. You understand and acknowledge that the Software is pre-release
Software, does not represent the final Software from Intel, and may contain errors and other problems that
could cause data loss, system failures, or other errors. The pre-release Software is provided to you "as-is" and
Intel disclaims any warranty or liability to you for any damages that arise out of the use of the pre-release
Software. You acknowledge that Intel has not promised that pre-release Software will be released in the
future, that Intel has no express or implied obligation to you to release the pre-release Software and that Intel
may not introduce Software that is compatible with the pre-release Software. You acknowledge that the
entirety of any research or development you perform that is related to the pre-release Software or to any
product making use of or associated with the pre-release Software is done at your own risk. If Intel has
provided you with pre-release Software pursuant to a separate written agreement, your use of the pre-release
Software is also governed by such agreement.
LIMITED MEDIA WARRANTY. If the Software has been delivered by Intel on physical media, Intel
warrants the media to be free from material physical defects for a period of ninety days after delivery by Intel.
If such a defect is found, return the media to Intel for replacement or alternate delivery of the Software as Intel
may select.
EXCLUSION OF OTHER WARRANTIES. EXCEPT AS PROVIDED ABOVE, THE SOFTWARE IS
PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND
INCLUDING WARRANTIES OF MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR A
PARTICULAR PURPOSE. Intel does not warrant or assume responsibility for the accuracy or completeness
of any information, text, graphics, links, or other items contained within the Software.
LIMITATION OF LIABILITY. IN NO EVENT SHALL INTEL OR ITS SUPPLIERS BE LIABLE FOR
ANY DAMAGES WHATSOEVER (INCLUDING, WITHOUT LIMITATION, LOST PROFITS, BUSINESS
INTERRUPTION, OR LOST INFORMATION) ARISING OUT OF THE USE OF OR INABILITY TO USE
THE SOFTWARE, EVEN IF INTEL HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES. SOME JURISDICTIONS PROHIBIT EXCLUSION OR LIMITATION OF LIABILITY FOR
IMPLIED WARRANTIES OR CONSEQUENTIAL OR INCIDENTAL DAMAGES, SO THE ABOVE
LIMITATION MAY NOT APPLY TO YOU. YOU MAY ALSO HAVE OTHER LEGAL RIGHTS THAT
VARY FROM JURISDICTION TO JURISDICTION. In the event that you use the Software in conjunction
with a virtual ("emulated") device designed to appear as an Intel component product, you acknowledge that
Intel is neither the author nor the creator of the virtual ("emulated") device. You understand and acknowledge
that Intel makes no representations about the correct operation of the Software when used with a virtual
("emulated") device, that Intel did not design the Software to operate in conjunction with the virtual
("emulated") device, and that the Software may not be capable of correct operation in conjunction with the
virtual ("emulated") device. You agree to assume the risk that the Software may not operate properly in
conjunction with the virtual ("emulated") device. You agree to indemnify and hold Intel and its officers,
subsidiaries and affiliates harmless against all claims, costs, damages, and expenses, and reasonable
attorney fees arising out of, directly or indirectly, any claim of product liability, personal injury or death
associated with the use of the Software in conjunction with the virtual ("emulated") device, even if such claim
alleges that Intel was negligent regarding the design or manufacture of the Software.
UNAUTHORIZED USE. THE SOFTWARE IS NOT DESIGNED, INTENDED, OR AUTHORIZED FOR
USE IN ANY TYPE OF SYSTEM OR APPLICATION IN WHICH THE FAILURE OF THE SOFTWARE
COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR (E.G.,
MEDICAL SYSTEMS, LIFE SUSTAINING OR LIFE SAVING SYSTEMS). If you use the Software for any
such unintended or unauthorized use, you shall indemnify and hold Intel and its officers, subsidiaries and
affiliates harmless against all claims, costs, damages, and expenses, and reasonable attorney fees arising
out of, directly or indirectly, any claim of product liability, personal injury or death associated with such
unintended or unauthorized use, even if such claim alleges that Intel was negligent regarding the design or
manufacture of the part.
TERMINATION OF THIS AGREEMENT. Intel may terminate this Agreement at any time if you violate its
terms. Upon termination, you will immediately destroy the Software or return all copies of the Software to
Intel.
APPLICABLE LAWS. Claims arising under this Agreement shall be governed by the laws of the State of
California, without regard to principles of conflict of laws. You agree that the terms of the United Nations
Convention on Contracts for the Sale of Goods do not apply to this Agreement. You may not export the
Software in violation of applicable export laws and regulations. Intel is not obligated under any other
agreements unless they are in writing and signed by an authorized representative of Intel.
GOVERNMENT RESTRICTED RIGHTS. The enclosed Software and documentation were developed at
private expense, and are provided with "RESTRICTED RIGHTS." Use, duplication, or disclosure by the
Government is subject to restrictions as set forth in FAR 52.227-14 and DFARS 252.227-7013 et seq. or its
successor. The use of this product by the Government constitutes acknowledgment of Intel's proprietary
rights in the Software. Contractor or Manufacturer is Intel.
LANGUAGE; TRANSLATIONS. In the event that the English language version of this Agreement is
accompanied by any other version translated into any other language, such translated version is provided for
convenience purposes only and the English language version shall control.
Before returning any adapter product, contact Intel Customer Support and obtain a Return Material
Authorization (RMA) number by calling +1 916-377-7000.
If the Customer Support Group verifies that the adapter product is defective, they will have the RMA
department issue you an RMA number to place on the outer package of the adapter product. Intel cannot
accept any product without an RMA number on the package.
Return the adapter product to the place of purchase for a refund or replacement.