
Intel® Ethernet Controller Products

26.8 Release Notes

Ethernet Products Group


December 2021

Revision 1.0
690423-001

Revision History

Revision Date Comments

1.0 December 2021 Initial release.


1.0 Overview
This document provides an overview of the changes introduced in the latest Intel® Ethernet controller/
adapter family of products. References to more detailed information are provided where necessary. The
information contained in this document is intended as supplemental information only; it should be used
in conjunction with the documentation provided for each component.
These release notes list the features supported in this software release, known issues, and issues that
were resolved during release development.

1.1 New Features

1.1.1 Hardware Support

Release New Hardware Support

• 26.8 • Linux*-only support for Intel® Ethernet Network Adapter E810-XXV-4T

1.1.2 Software Features

Release New Software Support

• 26.8 • Intel® Ethernet 800 Series devices support RDMA in a Linux* VF on a Microsoft Windows* or Linux host
• FreeBSD* drivers now support DCB on Intel® Ethernet 800 Series devices
• Limited support for the Linux ice driver on CentOS/Red Hat* Enterprise Linux* (RHEL) 7.2. No new features
were added; only limited regression testing was performed to find and address issues with the ice driver.
• Support for Red Hat* Enterprise Linux* (RHEL) 8.5.

1.1.3 Removed Features

Release Hardware/Feature Support

• 26.8 • The Intel® Ethernet Network Adapter E810-XXV-2 for OCP 2.0 is no longer supported. Release 26.7 is the last
release that supports Intel® Ethernet Network Adapter E810-XXV-2 for OCP 2.0.

1.1.4 Firmware Features

Release New Firmware Support

• 26.8 None for this release.

1.2 Supported Intel® Ethernet Controller Devices


Note: Bold Text indicates the main changes for Software Release 26.8.
For help identifying a specific network device as well as finding supported devices, click here:
https://www.intel.com/content/www/us/en/support/articles/000005584/network-and-i-o/ethernet-products.html


1.3 NVM
Table 1 shows the NVM versions supported in this release.

Table 1. Current NVM


Series | Device | NVM Version

800 Series | E810 | 3.10
700 Series | 700 | 8.5
500 Series | X550 | 3.5
200 Series | I210 | 2.0


1.4 Operating System Support

1.4.1 Levels of Support


The next sections use the following notations to indicate levels of support.
• Full Support = FS
• Not Supported = NS
• Inbox Support Only = ISO
• Supported Not Tested = SNT
• Supported by the Community = SBC

1.4.2 Linux
Table 2 shows the Linux distributions that are supported in this release and the accompanying driver
names and versions.
Refer to Section 1.4.1 for details on Levels of Support.

Table 2. Supported Operating Systems: Linux


Driver | RHEL 8.5 | RHEL 8.4 | RHEL 8.x (8.3 and previous) | RHEL 7.9 | RHEL 7.x (7.8 and previous) | SLES 15 SP3 | SLES 15 SP2 and previous | SLES 12 SP5 | SLES 12 SP4 and previous | Ubuntu 20.04 LTS | Ubuntu 18.04 LTS
(RHEL = Red Hat* Enterprise Linux*; SLES = SUSE* Linux Enterprise Server; Ubuntu = Canonical* Ubuntu*)

Intel® Ethernet 800 Series

ice 1.7.16 1.7.16 SNT 1.7.16 SNT 1.7.16 SNT 1.7.16 SNT 1.7.16 1.7.16

Intel® Ethernet 700 Series

i40e 2.17.4 2.17.4 SNT 2.17.4 SNT 2.17.4 SNT 2.17.4 SNT 2.17.4 2.17.4

Intel® Ethernet Adaptive Virtual Function

iavf 4.3.19 4.3.19 SNT 4.3.19 SNT 4.3.19 SNT 4.3.19 SNT 4.3.19 4.3.19

Intel® Ethernet 10 Gigabit Adapters and Connections

ixgbe 5.13.4 5.13.4 SNT 5.13.4 SNT 5.13.4 SNT 5.13.4 SNT 5.13.4 5.13.4

ixgbevf 4.13.3 4.13.3 SNT 4.13.3 SNT 4.13.3 SNT 4.13.3 SNT 4.13.3 4.13.3

Intel® Ethernet Gigabit Adapters and Connections

igb 5.85 5.85 SNT 5.85 SNT 5.85 SNT 5.85 SNT 5.85 5.85

Remote Direct Memory Access (RDMA)

irdma 1.7.72 1.7.72 SNT 1.7.72 SNT 1.7.72 SNT 1.7.72 SNT 1.7.72 1.7.72


1.4.3 Windows Server


Table 3 shows the versions of Microsoft Windows Server that are supported in this release and the
accompanying driver names and versions.
Refer to Section 1.4.1 for details on Levels of Support.

Table 3. Supported Operating Systems: Windows Server


Driver | Microsoft Windows Server 2022 | Microsoft Windows Server 2019 | Microsoft Windows Server 2016 | Microsoft Windows Server 2012 R2 | Microsoft Windows Server 2012

Intel® Ethernet 800 Series

icea 1.10.51 1.9.65.0 1.9.65.0 NS NS

Intel® Ethernet 700 Series

i40ea 1.16.139.x 1.16.62.x 1.16.62.x 1.16.62.x 1.16.62.x

i40eb 1.16.141.x 1.16.62.x 1.16.62.x 1.16.62.x NS

Intel® Ethernet Adaptive Virtual Function

iavf 1.13.8.x 1.12.9 1.12.9 1.12.9 NS

Intel® Ethernet 10 Gigabit Adapters and Connections

ixe NS NS NS NS 2.4.36.x

ixn NS 4.1.239.x 4.1.239.x 3.14.214.x 3.14.206.x

ixs 4.1.246.x 4.1.239.x 4.1.239.x 3.14.222.x 3.14.222.x

ixt NS 4.1.288.x 4.1.229.x 3.14.214.x 3.14.206.x

sxa NS 4.1.243.x 4.1.243.x 3.14.222.x 3.14.222.x

sxb NS 4.1.239.x 4.1.239.x 3.14.214.x 3.14.206

vxn NS 2.1.249.x 2.1.243.x 1.2.309.x 1.2.309.x

vxs 2.1.241.x 2.1.230.x 2.1.232.x 1.2.254.x 1.2.254.x

Intel® Ethernet 2.5 Gigabit Adapters and Connections

e2f NS 1.0.2.x NS NS NS


Intel® Ethernet Gigabit Adapters and Connections

e1c NS NS 12.15.31.x 12.15.31.x 12.15.31.x

e1d NS 12.19.1.x 12.18.9.x 12.17.8.x 12.17.8.x

e1e NS NS NS NS 9.16.10.x

e1k NS NS NS NS 12.10.13.x

e1q NS NS NS NS 12.7.28.x

e1r 13.0.9.x 12.18.12.x 12.16.4.x 12.15.1.x 12.14.8.x

e1s 12.16.14.x 12.15.184.x 12.15.184.x 12.13.27.x 12.13.27.x

e1y NS NS NS NS 10.1.17.x

v1q NS 1.4.7.x 1.4.7.x 1.4.5.x 1.4.5.x


1.4.4 Windows Client


Table 4 shows the versions of Microsoft Windows that are supported in this release and the
accompanying driver names and versions.
Refer to Section 1.4.1 for details on Levels of Support.

Table 4. Supported Operating Systems: Windows Client


Driver | Microsoft Windows 10 | Microsoft Windows 10, version 1809 | Microsoft Windows 8.1 | Microsoft Windows 8

Intel® Ethernet 800 Series

icea NS NS NS NS

Intel® Ethernet 700 Series

i40ea NS NS NS NS

i40eb NS NS NS NS

Intel® Ethernet Adaptive Virtual Function

iavf NS NS NS NS

Intel® Ethernet 10 Gigabit Adapters and Connections

ixe NS NS NS NS

ixn 4.1.239.x 4.1.239.x 3.14.214.x NS

ixs 4.1.239.x 4.1.239.x 3.14.222.x NS

ixt 4.1.288.x 4.1.229.x 3.14.214.x NS

sxa 4.1.243.x 4.1.243.x 3.14.222.x NS

sxb 4.1.239.x 4.1.239.x 3.14.214.x NS

vxn 2.1.249.x 2.1.243.x 1.2.309.x NS

vxs 2.1.230.x 2.1.232.x 1.2.254.x NS

Intel® Ethernet 2.5 Gigabit Adapters and Connections

e2f NS NS NS NS


Intel® Ethernet Gigabit Adapters and Connections

e1c NS 12.15.31.x 12.15.31.x 12.15.31.x

e1d 12.19.1.x 12.18.9.x 12.17.8.x 12.17.8.x

e1e NS NS NS 9.16.10.x

e1k NS NS NS 12.10.13.x

e1q NS NS NS 12.7.28.x

e1r 12.18.12.x 12.16.4.x 12.15.1.x 12.14.8.x

e1s 12.15.184.x 12.15.184.x 12.13.27.x 12.13.27.x

e1y NS NS NS 10.1.17.x

v1q 1.4.7.x 1.4.7.x 1.4.5.x 1.4.5.x

1.4.5 FreeBSD
Table 5 shows the versions of FreeBSD that are supported in this release and the accompanying driver
names and versions.
Refer to Section 1.4.1 for details on Levels of Support.

Table 5. Supported Operating Systems: FreeBSD


Driver | FreeBSD 13 | FreeBSD 12.2 | FreeBSD 12.1 and previous

Intel® Ethernet 800 Series

ice 1.34.2 1.34.2 SNT

Intel® Ethernet 700 Series

ixl 1.12.29 1.12.29 SNT

Intel® Ethernet Adaptive Virtual Function

iavf 3.0.26 3.0.26 SNT

Intel® Ethernet 10 Gigabit Adapters and Connections

ix 3.3.29 3.3.29 SNT

ixv 1.5.30 1.5.30 SNT

Intel® Ethernet Gigabit Adapters and Connections

igb 2.5.21 2.5.21 SNT


Remote Direct Memory Access (RDMA)

irdma 0.0.50 0.0.50 SNT

iw_ixl 0.1.30 0.1.30 SNT


2.0 Fixed Issues

2.1 Intel® Ethernet 800 Series

2.1.1 General

2.1.2 Linux Driver


• Linux ice driver versions 1.4.x through 1.6.x have a performance regression that affects NVMe over
Fabric baseline TCP performance using interrupt mode. This issue impacts interrupt mode
workloads only; there is no impact to workloads using busy poll mode, such as ADQ. Fixed in ice
1.7.x and above.
• When changing the RSS queue value in a VM while Double VLAN is configured, the rx_vlano [rx-vlan-offload] counter may stop incrementing correctly.
• For configurations in which a VF is attached to a VM, avoid setting the MTU value of the VF within
the range of 9199-9202. When the MTU is set to one of these values, the VM may experience a
memory leak leading to a kernel panic when TCP traffic is running from a link partner to the VF.
• Rapid unloading and loading of the irdma and ice drivers may cause a kernel panic.
• There was a statistics issue where ifconfig/ip -s link show <dev> could potentially show overall
statistics counting backwards due to a timing issue in stats collection. Ethtool -S statistics were
unaffected by this issue and would show correct counts. This issue is fixed in ice driver 1.7.x and
above.
• Packets received during driver load were displayed as dropped because the packet drop counter
was started before the driver was fully loaded. This was a cosmetic issue with no functional impact
other than an increase in the dropped packets count. This issue is fixed in 1.7.x and above.
• On CentOS 7.2, ethtool may report Speed as Unknown and you may see warnings in dmesg log.
This does not affect traffic or device functionality.
• QoS bandwidth shaping and priority tagging may not be functional in CentOS 7.2.
• When the double VLAN or Queue in Queue feature is enabled, the innermost traffic source might need
to be limited to an MTU size of 1496 or less to avoid connection issues.
• When using Double VLAN configuration with a specific non-Intel link partner, TCP traffic might fail to
pass through the inner VLAN interface when the MTU of this interface matches the MTU of the outer
VLAN interface. Workaround: change MTU of inner VLAN to be 4 bytes less than the MTU of the
outer VLAN.
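
As an illustration of that workaround (the interface names, VLAN IDs, and MTU values below are placeholders, not from this release), the inner QinQ VLAN is sized 4 bytes below the outer VLAN:

   # Parent and outer (802.1ad) VLAN keep the full MTU; inner (802.1Q) VLAN is 4 bytes smaller.
   ip link set dev eth0 mtu 9000
   ip link add link eth0 name eth0.100 type vlan protocol 802.1ad id 100
   ip link add link eth0.100 name eth0.100.200 type vlan protocol 802.1Q id 200
   ip link set dev eth0.100 mtu 9000
   ip link set dev eth0.100.200 mtu 8996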

2.1.3 Windows Driver


• Running the command /dxsetup.exe PROSET=0 ANS=0 initialized a logging subsystem that is no longer
used, which resulted in the file rule.txt being created on the system under C:\. The issue has been
resolved and the rule.txt file is no longer created.

2.1.4 Linux RDMA Driver


• RDMA stability at application close is improved with NVM version 3.1+.
• Prior to the combination of NVM 3.1, ice 1.7.x, and irdma 1.7.x, PFC for RDMA traffic was only
functional on TC0. With NVM 3.1+, ice 1.7.x+, and irdma 1.7.x, RDMA traffic
can be configured for PFC on TCs other than TC0.


• A PSOD might appear during a PF reset when an NVMe over RDMA connection is active.
• In order to send or receive RDMA traffic, the network interface associated with the RDMA device
must be up. If the network interface experiences a link down event (for example, a disconnected
cable or ip link set <interface> down), the associated RDMA device is removed and no longer
available to RDMA applications. When the network interface link is restored, the RDMA device is
automatically re-added.
• After a system reboot, an Intel® Ethernet Network Adapter E810 RDMA device in RoCEv2 mode
might occasionally become active with a missing or incorrect GID. To correct the GID value, unload
and reload the irdma driver.
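
A minimal recovery sketch (the RDMA device name and GID index below are examples, not specific to this release):

   # Reload the irdma driver to repopulate the GID table.
   modprobe -r irdma
   modprobe irdma
   # Confirm the RoCEv2 GID is present (device name and index are illustrative).
   cat /sys/class/infiniband/rocep24s0f0/ports/1/gids/0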

2.1.5 NVM Update Tool


None for this release.

2.1.6 NVM
None for this release.

2.1.7 Firmware
• FW treated SFI C2C as type SFF PHY, which caused a BMC link down.
• When the admin queue command was executed to get the PHY recovery clock (0x0631), it was
missing the Port number in the AQC.
• The interface type was displayed as twisted pair instead of optical fiber.
• Fixed an issue on the Intel Ethernet Network Adapter XXVDA4T where external PHYs did not report
temperature correctly.
• The PackageID0 SDP0 and PackageID1 SDP1 no longer have incorrect assignments to the SDP pins.
• The NC-SI OEM command (0x4B) no longer fails.
• Fixed an issue where the FW did not properly map the physical port to the logical port when reading
interface sensors.
• Fixed an issue where the FW tried to read the PHY threshold even when the module does not have that
sensor.
• Fixed an issue where the FW was trying to get power consumer parameters on a module that was
not present.
• Fixed an issue where the FW was handling GPIO interrupts before the board initialization was
complete.
• Fixed an issue where the FW PLDM command GetFirmwareParameters contained additional null
characters.
• Fixed an issue where the FW was reporting the module power value from an incorrect location.
• The FW mapped the wrong cage index to logical port 0. As a result, customers could see the link API
return an incorrect indication for the cage index related to logical port 0.
• For PLDM Type 2, the device reported the port thermal sensor PDR and status for all types of
connectivity, even those without a temperature sensor (temperature and thresholds were reported as zeros).


2.1.8 Manageability
• When bifurcation is enabled on the Chapman Beach adapter, both Intel Ethernet 800 Series
Adapters set the NC-SI Package ID as “0”.

2.1.9 FreeBSD Driver


None for this release.

2.1.10 Application Device Queues (ADQ)


• VXLAN stateless offloads (checksum, TSO) and TC filters directing traffic to a VXLAN interface are
not supported with Linux v5.9 or later.

2.2 Intel® Ethernet 700 Series

2.2.1 General
• When using multiple Traffic Classes (TCs) for ADQ application traffic, adding ntuple rules to the first
queue of TC2 or higher does not work as expected. The ethtool ntuple rule fails for the first queue.
This does not affect ntuple rules on TC1.

2.2.2 Linux Driver


None for this release.

2.2.3 Intel® PROSet


None for this release.

2.2.4 EFI Driver


None for this release.

2.2.5 NVM
None for this release.

2.2.6 Windows Driver


None for this release.

2.2.7 Intel® Ethernet Flash Firmware Utility


None for this release.

2.3 Intel® Ethernet 500 Series


None for this release.

2.4 Intel® Ethernet 300 Series


None for this release.


2.5 Intel® Ethernet 200 Series


None for this release.


3.0 Known Issues

3.1 Intel® Ethernet 800 Series

3.1.1 General
• Link issues (like false link, long time-to-link (TTL), excessive link flaps, no link) may occur when
Parkvale (C827/XL827) retimer is interfaced with SX/LX, SR/LR, SR4/LR4, AOC limiting optics. This
issue is isolated to Parkvale line side PMD RX susceptibility to noise.
• Intel Ethernet 800 Series adapters in 4x25GbE or 8x10GbE configurations will be limited to a
maximum total transmit bandwidth of roughly 28Gbps per port for 25GbE ports and 12Gbps per
port on 10GbE ports.
This maximum is a total combination of any mix of network (leaving the port) and loopback (VF -> VF,
VF -> PF, PF -> VF) TX traffic on a given port, and is designed to allow each port to maintain
transmit bandwidth at the specific port speed when in 25GbE or 10GbE mode.
If the PF is transmitting traffic as well as the VF(s), under contention the PF has access to up to
50% TX bandwidth for the port and all VFs have access to 50% bandwidth for the port, which will
also impact the total available bandwidth for forwarding.
Note: When calculating the maximum bandwidth under contention for bi-directional loopback
traffic, the number of TX loopback actions are twice that of a similar unidirectional loopback case,
since both sides are transmitting.
• The version of the Ethernet Port Configuration Tool available in Release 26.1 may not be working as
expected. This has been resolved in Release 26.4.
• E810 currently supports a subset of 1000BASE-T SFP module types, which use SGMII to connect
back to the E810. In order for the E810 to properly know the link status of the module's BASE-T
external connection, the module must indicate the BASE-T side link status to the E810. An SGMII
link between E810 and the 1000BASE-T SFP module allows the module to indicate its link status to
the E810 using SGMII Auto Negotiation. However 1000BASE-T SFP modules implement this in a
wide variety of ways, and other methods which do not use SGMII are currently unsupported in
E810. Depending on the implementation, link may never be achieved. In other cases, if the module
sends IDLEs to the E810 when there is no BASE-T link, the E810 may interpret this as a link partner
sending valid data and may show link as being up even though it is only connected to the module
and there is no link on the module's BASE-T external connection.
• Bandwidth/throughput might vary across different virtual functions (VFs) if VF rate limiting is not
applied. Workaround: To avoid this situation it is recommended to apply VF rate limiting.
• If the PF has no link, then a Linux VM previously using a VF will not be able to pass traffic to other
VMs without the patch found here:
https://lore.kernel.org/netdev/BL0PR2101MB093051C80B1625AAE3728551CA4A0@BL0PR2101MB0930.namprd21.prod.outlook.com/T/#m63c0a1ab3c9cd28be724ac00665df6a82061097d
This patch routes packets to the virtual interface.
Note: This is a permanent third-party issue. No action is expected on the part of Intel.
• Some devices support auto-negotiation. Selecting this causes the device to advertise the value
stored in its NVM (usually disabled).
• VXLAN switch creation on Windows Server 2019 Hyper V might fail.


• Intel does its best to find and address interoperability issues, however there might be connectivity
issues with certain modules, cables or switches. Interoperating with devices that do not conform to
the relevant standards and specifications increases the likelihood of connectivity issues.
• When priority or link flow control features are enabled, traffic at low packet rates might increment
priority flow control and/or packet drop counters.
• In order for an Intel® Ethernet 800 Series-based adapter to reach its full potential, users must
install it in a PCIe Gen4 x16 slot. Installing on fewer lanes (x8, x4, x2) and/or Gen3, Gen2 or Gen1,
impedes the full throughput of the device.
• On certain platforms, the legacy PXE option ROM boot option menu entries from the same device
are pre-pended with identical port number information (first part of the string that comes from
BIOS).
This is not an option ROM issue. The first device option ROM initialized on a platform exposes all
boot options for the device, which is misinterpreted by BIOS.
The second part of the string from the option ROM indicates the correct slot (port) numbers.
• When having link issues (including no link) at link speeds faster than 10 Gb/s, check the switch
configuration and/or specifications. Many optical connections and direct attach cables require RS-
FEC for connection speeds faster than 10 Gb/s. One of the following might resolve the issue:
— Configure the switch to use RS-FEC mode.
— Specify a 10 Gb/s, or slower, link speed connection.
— If attempting to connect at 25 Gb/s, try using an SFP28 CA-S or CS-N direct attach cable. These
cables do not require RS-FEC.
— If the switch does not support RS-FEC mode, check with the switch vendor for the availability of
a software or firmware upgrade.
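
For example, assuming an interface named eth0 (the name and speed values are placeholders), the FEC mode or link speed can be adjusted with ethtool:

   # Request RS-FEC to match the switch configuration.
   ethtool --set-fec eth0 encoding rs
   # Alternatively, force a 10 Gb/s link, which does not require RS-FEC.
   ethtool -s eth0 speed 10000 duplex full autoneg off
   # Verify the active FEC mode and link state.
   ethtool --show-fec eth0
   ethtool eth0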

3.1.2 Firmware
• Following a firmware update and reboot/power cycle on the Intel Ethernet CQDA2 Adapter, Port 1 is
displaying NO-CARRIER and is not functional.
• When software requests the port parameters for port 0 from firmware (the connectivity type, via AQ),
the response is BACKPLANE_CONNECTIVITY when it should be CAGE_CONNECTIVITY.
• The RDE device reports its status as Starting (with low power), even though it is in standby mode.
• Wake On LAN flow is unexpectedly triggered by the E810 CQDA2 for OCP 3.0 adapter. The server
unexpectedly wakes up from the S5 power state a few seconds after being shut down from the
OS, and it is impossible to shut down the server.
• Health status messages are not cleared with a PF reset, even after the reported issue is resolved.
• Flow control settings have no effect on traffic, and counters do not increment, with flow control set
to TX=ON and RX=OFF. However, flow control works fine with both TX=ON and RX=ON.
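
For reference, the two flow control configurations mentioned above can be set with ethtool (the interface name is a placeholder):

   # Configuration where the issue is observed: transmit pause only.
   ethtool -A eth0 tx on rx off
   # Configuration that works as expected: pause enabled in both directions.
   ethtool -A eth0 tx on rx on
   # Display the resulting pause parameters.
   ethtool -a eth0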

3.1.3 Linux Driver


• The Intel ice and iavf drivers may experience kernel panics that cause the server to reboot abnormally
during a long reboot cycle test (about 250-500 reboots) of Intel Ethernet 800 Series
adapters.
• When the maximum allowed number of VLAN filters are created on a trusted VF, and the VF is then
set to untrusted and the VM is rebooted, the iavf driver may not load correctly in the VM and may
show errors in the VM dmesg log.


• Changing the FEC value from BaseR to RS results in an error message in dmesg, and may result in
link issues.
• UEFI PXE installation of Red Hat Enterprise Linux 8.4 on a local disk results in the system failing
to boot.
• When a VF interface is set as 'up' and assigned to a namespace, and the namespace is then
deleted, the dmesg log may show the error "Failed to set LAN Tx queue context, error:
ICE_ERR_PARAM" followed by error codes from the ice and iavf drivers.
• If trusted mode is enabled for a VF while promiscuous mode is disabled and multicast promiscuous
mode is enabled, unicast packets may be visible on the VF and multicast packets may not be visible
on the VF. Alternatively, if promiscuous mode is enabled and multicast promiscuous mode is
disabled, then both unicast and multicast packets may not be visible on the VF interface.
• A VF may incorrectly receive additional packets when trusted mode is disabled but promiscuous
mode is enabled.
• When using bonding mode 5 (i.e., balance-tlb or adaptive transmit load balancing), if you add
multiple VFs to the bond, they are assigned duplicate MAC addresses. When the VFs are joined with
the bond interface, the Linux bonding driver sets the MAC address for the VFs to the same value.
The MAC address is based on the first active VF added to that bond. This results in balance-tlb
mode not functioning as expected. PF interfaces behave as expected.
The presence of duplicate MAC addresses may cause further issues, depending on your switch
configuration.
• The commands ethtool -C [rx|tx]-frames are not supported by the iavf driver and will be
ignored.
Setting [tx|rx]-frames-irq using ethtool -C may not correctly save the intended setting and may
reset the value back to the default value of 0.
• The Intel Ethernet 800 Series adapter in an 8-port 10Gb configuration may generate errors such
as the example below on Linux PF or VF driver load due to RSS profile allocation. Ports that report
this error will experience RSS failures resulting in some packet types not being properly distributed
across cores.

dmesg: VF add example


ice_add_rss_cfg failed for VSI:XX, error:ICE_ERR_AQ_ERROR
VF 3 failed opcode 45, retval: -5
DPDK v20.11 testpmd example:
Shutting down port 0...
Closing ports...
iavf_execute_vf_cmd(): No response or return failure (-5) for cmd 46
iavf_add_del_rss_cfg(): Failed to execute command of OP_DEL_RSS_INPUT_CFG
• If single VLAN traffic is active on a PF interface and a CORER or GLOBR reset is triggered manually,
PF traffic will resume after the reset whereas VLAN traffic may not resume as expected. For a
workaround, issue the ethtool command: "ethtool -K PF_devname rx-vlan-filter off" followed by
"ethtool -K PF_devname rx-vlan-filter on" and VLAN traffic will resume.
• Interrupt Moderation settings reset to default when the queue settings of a port are modified using
the "ethtool -L ethx combined XX" command.


• Adding a physical port to the Linux bridge might fail and result in Device or Resource Busy message
if SR-IOV is already enabled on a given port. To avoid this condition, create SR-IOV VFs after
assigning a physical port to a Linux bridge. Refer to Link Aggregation is Mutually Exclusive with SR-
IOV and RDMA in the ICE driver README.
• If a Virtual Function (VF) is not in trusted mode and eight or more VLANs are created on one VF, the
VLAN that is last created might be non-functional and an error might be seen in dmesg.
• When using a Windows Server 2019 RS5 Virtual Machine on a RHEL host, a VLAN configured on the
VF using iproute2 might not pass traffic correctly when an ice driver older than version 1.3.1 is used
in combination with a newer AVF driver version.
• It has been observed that when using iSCSI, the iSCSI initiator intermittently fails to connect to
the iSCSI target.
• When the Double VLAN Mode is enabled on the host, disabling then re-enabling a Virtual Function
attached to a Windows guest might cause error messages to be displayed in dmesg. These
messages will not affect functionality.
• With the current ice PF driver, there might not be a way for a trusted VF to enable unicast
promiscuous and multicast promiscuous mode without turning on ethtool --priv-flags with vf-true-
promisc-support. As such, the expectation is to not use vf-true-promisc-support to gate VF's
request for unicast/multicast promiscuous mode.
• Repeatedly assigning a VF interface to a network namespace then deleting that namespace might
result in an unexpected error message and might possibly result in a call trace on the host system.
• Receive hashing might not be enabled by default on Virtual Functions when using an older iavf
driver in combination with a newer PF driver version.
• When Double VLAN is created on a Virtual Machine, tx_tcp_cso [TX TCP Checksum Offload] and
tx_udp_cso [TX UDP Checksum Offload] statistics might not increment correctly.
• If a VLAN with an Ethertype of 0x9100 is configured to be inserted into the packet on transmit, and
the packet, prior to insertion, contains a VLAN header with an Ethertype of 0x8100, the 0x9100
VLAN header is inserted by the device after the 0x8100 VLAN header. The packet is transmitted by
the device with the 0x8100 VLAN header closest to the Ethernet header.
• A PCI reset performed on the host might result in traffic failure on VFs for certain guest operating
systems.
• On RHEL 7.x and 8.x operating systems, it has been observed that the rx_gro_dropped statistic
might increment rapidly when Rx traffic is high. This appears to be an issue with the RHEL kernels.
• When ICE interfaces are part of a bond with arp_validate=1, the backup port link status flaps
between up and down. Workaround: It is recommended to not enable arp_validate when
bonding ICE interfaces.
• Changing a Virtual Function (VF) MAC address when a VF driver is loaded on the host side might
result in packet loss or a failure to pass traffic. As a result, the VF driver might need to be restarted.
• Current limitations of minimum Tx rate limiting on SR-IOV VFs (a configuration sketch follows at the end of this list):
— If DCB or ADQ are enabled on a PF then configuring minimum Tx rate limiting on SR-IOV VFs on
that PF is rejected.
— If both DCB and ADQ are disabled on a PF then configuring minimum Tx rate limiting on SR-IOV
VFs on that PF is allowed.
— If minimum Tx rate limiting on a PF is already configured for SR-IOV VFs and a DCB or ADQ
configuration is applied, then the PF can no longer guarantee the minimum Tx rate limits set for
SR-IOV VFs.


— If minimum Tx rate limiting is configured on SR-IOV VFs across multiple ports that have an
aggregate bandwidth over 100Gbps, then the PFs cannot guarantee the minimum Tx rate limits
set for SR-IOV VFs.
• Some distros may contain an older version of iproute2/devlink, which may result in errors.
Workaround: Update to the latest devlink version.
• On Intel Ethernet Adapter XXVDA4T, the driver may not link at 1000baseT and 1000baseX. The link
may go down after advertising 1G.
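
As a sketch of minimum Tx rate limiting under the conditions above (the PF name, VF index, and rates in Mb/s are illustrative), the limits are applied through iproute2:

   # Guarantee 1 Gb/s and cap at 5 Gb/s for VF 0 on the PF, with DCB and ADQ disabled on that PF.
   ip link set dev enp24s0f0 vf 0 min_tx_rate 1000 max_tx_rate 5000
   # Verify the configured VF rates.
   ip link show dev enp24s0f0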

3.1.4 FreeBSD Driver


• The driver can be configured with both link flow control and priority flow control enabled even
though the adapter only supports one mode at a time. In this case, the adapter will prioritize the
priority flow control configuration. Verify whether link flow control is active by checking the
"active:" line in the ifconfig output.

3.1.5 Windows Driver


• When a VM is running heavy traffic loads and is attached to a Virtual Switch with either SR-IOV
enabled or VMQ offload enabled, repeatedly enabling and disabling the SR-IOV/VMQ setting on the
vNIC in the VM may result in a VM freeze/hang.
• The visibility of the iSCSI LUN is dependent upon being able to establish a network connection to
the LUN. In order to establish this connection, factors such as the initialization of the network
controller, establishing link at the physical layer (which can take on the order of seconds) must be
considered. Because of these variables, the LUN might not initially be visible at the selection screen.
• When a VM is running heavy traffic loads and is attached to a Virtual Switch with either SR-IOV
enabled or VMQ offload enabled, repeatedly enabling and disabling the SR-IOV/VMQ setting on the
vNIC in the VM may result in a BSOD.
• Small performance limitations have been seen for some workloads on Windows when stressing both
ports of the E810-2CQDA2 adapter.
• Intel® Ethernet Controller E810 devices are in the DCBX CEE/IEEE willing mode by default. In CEE
mode, if an Intel® Ethernet Controller E810 device is set to non-willing and the connected switch is
in non-willing mode as well, this is considered an undefined behavior. Workaround: Configure
Intel® Ethernet Controller E810 devices for the DCBX willing mode (default).
• In order to use guest processor numbers greater than 16 inside a VM, you might need to remove
the *RssMaxProcNumber (if present) from the guest registry.

3.1.6 Windows RDMA Driver


• The Intel® Ethernet Network Adapter E810 might experience an adapter-wide reset on all ports.
When in firmware managed mode, a DCBx willing mode configuration change that is propagated
from the switch removes a TC that was enabled by RDMA. This typically occurs when removing a TC
associated with UP0 because it is the default UP on which RDMA based its configuration. The reset
results in a temporary loss in connectivity as the adapter re-initializes.
• With a S2D storage cluster configuration running Windows Server 2019, high storage bandwidth
tests might result in a crash for a BSOD bug check code 1E (KMODE_EXCEPTION_NOT_HANDLED)
with smbdirect as the failed module. Customers should contact Microsoft via the appropriate
support channel for a solution.


3.1.7 Linux RDMA Driver


• Any usermode test that uses ibv_create_ah will fail. For example, RoCEv2 usermode tests such as udaddy
will fail.
• iWARP mode requires a VLAN to be configured to fully enable PFC.
• When adding the ports to a VM in passthrough mode, all ports on the device must be added to the
same VM with the same instance numbers.
• When using Intel MPI in Linux, Intel recommends to enable only one interface on the networking
device to avoid MPI application connectivity issues or hangs. This issue affects all Intel MPI
transports, including TCP and RDMA. To avoid the issue, use ifdown <interface> or ip link set
down <interface> to disable all network interfaces on the adapter except for the one used for MPI.
OpenMPI does not have this limitation.
• In order to send or receive RDMA traffic, the network interface associated with the RDMA device
must be up. If the network interface experiences a link down event (for example, a disconnected
cable or ip link set <interface> down), the associated RDMA device is removed and no longer
available to RDMA applications. When the network interface link is restored, the RDMA device is
automatically re-added.
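
The behavior can be confirmed with iproute2 (the interface name is a placeholder):

   # Taking the netdev down removes the associated RDMA device.
   ip link set eth0 down
   rdma link show
   # Restoring the link re-adds the RDMA device automatically.
   ip link set eth0 up
   rdma link show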

3.1.8 NVM Update Tool


• Updating using an external OROM (FLB file) and opting for delayed reboot in the configuration file is
not supported.
• After downgrading to Release 25.6 (and previous), a loss of traffic may result. Workaround: Unload
and reload the driver to resume traffic. Rebooting the system would also help.
• VMware ESX 7.0 operating system may experience a kernel panic (also known as a PSOD) during
the NVMUpdate process. The issue occurs if the installed RDMA driver is older than 1.3.4.23.
Workaround: Update the RDMA driver before the NVMUpdate process, or disable RDMA in the icen
module parameters and then perform a platform reboot.

3.1.9 Application Device Queues (ADQ)


The code contains the following known issues:
• The latest RHEL and SLES distros have kernels with back-ported support for ADQ. For all other OS
distros, you must use the LTS Linux kernel v4.19.58 or higher to use ADQ. The latest out-of-tree
driver is required for ADQ on all Operating Systems.


• ADQ configuration must be cleared following the steps outlined in the ADQ Configuration Guide. The
following issues may result if steps are not executed in the correct order:
— Removing a TC qdisc prior to deleting a TC filter will cause the qdisc to be deleted from
hardware and leave an unusable TC filter in software.
— Deleting a ntuple rule after deleting the TC qdisc, then re-enabling ntuple, may leave the
system in an unusable state which requires a forced reboot to clear.
— Mitigation — Follow the steps documented in the ADQ Configuration Guide to "Clear the ADQ
Configuration"
• ADQ configuration is not supported on a bonded or teamed Intel® E810 Network adapter interface.
Issuing the ethtool or tc commands to a bonded E810 interface will result in error messages from
the ice driver to indicate the operation is not supported.
• If the application stalls for some reason, this can cause a queue stall for application-specific queues
for up to two seconds.
— Workaround - Recommend configuration of only one application per Traffic Class (TC) channel.
• DCB and ADQ are mutually exclusive and cannot coexist. A switch with DCB enabled might remove
the ADQ configuration from the device.
— Workaround - Do not enable DCB on the switch ports being used for ADQ. Disable LLDP on the
interface by turning off firmware LLDP agent using:
ethtool --set-priv-flags $iface fw-lldp-agent off
• Note (unrelated to Intel drivers): The 5.8.0 Linux kernel introduced a bug that broke the interrupt
affinity setting mechanism.
— Workaround - Use an earlier or later version of the kernel to avoid this error.
• The iavf driver must use Trusted mode with ADQ: Trusted mode must be enabled for ADQ inside a
VF. If TC filters are created on a VF interface with trusted mode off, the filters are added to the
software table but are not offloaded to the hardware.
• VF supports Max Transmit Rate only: the iavf driver only supports setting maximum transmit rate
(max_rate) for Tx traffic. Minimum transmit rate (min_rate) setting is not supported with a VF.
• VF Max Transmit Rate: The tc qdisc add command on a VF interface does not verify that the max_rate
value(s) for the TCs are specified in increments of 500 Kbps. TC max_rate is expected to be a
multiple of (or equal to) 500 Kbps.
• A core-level reset of an ADQ-configured VF port (rare events usually triggered by other failures in
the NIC/iavf driver) results in loss of ADQ configuration. To recover, reapply ADQ configuration to
the VF interface.
• VF errors occur when deleting TCs or unloading iavf driver in a VF: ice and iavf driver error
messages might get triggered in a VF when TCs are configured, and TCs are either manually
deleted or the iavf driver is unloaded. Reloading the ice driver recovers the driver states.
• Commands such as tc qdisc add and ethtool -L cause the driver to close the associated RDMA
interface and reopen it. This disrupts RDMA traffic for 3-5 seconds until the RDMA interface is
available again for traffic.
• When the number of queues is increased using ethtool -L, the new queues will have the same
interrupt moderation settings as queue 0 (i.e., Tx queue 0 for new Tx queues and Rx queue 0 for
new Rx queues). This can be changed using the ethtool per-queue coalesce commands.
• To fully release hardware resources and have all supported filter type combinations available, the
ice driver must be unloaded and re-loaded.


• When ADQ is enabled on VFs, TC filters on the VF TC0 (default TC) are not supported and will not
pass traffic. Adding TC filters to TC0 is not expected, since TC0 is reserved for non-filtered default
traffic.
• If a reset occurs on a PF interface containing TC filter(s), traffic does not resume to the TC filter(s)
after the PF interface is restored.
• TC filters can unexpectedly match packets that use IP protocols other than what is specified as the
ip_proto argument in the tc filter add command. For example, UDP packets may be matched on
a TCP TC filter created with ip_proto tcp without any L4 port matches.
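
For reference, a filter of the kind described above might look like the following (the interface, address, port, and traffic class are illustrative, and ADQ traffic classes are assumed to already be configured per the ADQ Configuration Guide); note that UDP packets to the same destination may also match this TCP filter:

   # Ensure an ingress qdisc exists, then steer TCP port 5201 traffic to ADQ traffic class 1 in hardware.
   tc qdisc add dev eth0 ingress
   tc filter add dev eth0 protocol ip ingress prio 1 flower \
       dst_ip 192.168.1.10 ip_proto tcp dst_port 5201 skip_sw hw_tc 1
   # List the installed filters.
   tc filter show dev eth0 ingress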

3.1.10 Manageability
• Intel updated the E810 FW to align the sensor ID design as defined by DMTF DSP2054 starting from
Release 26.4. Previous versions of the E810 FW were based on a draft version of the specification. As
a result, updating to the newer NVM with this FW will change the numbering of the thermal
sensor IDs and PDR handles. Anyone using hard-coded values for these will see changes. A
proper description of the system through PLDM Type 2 PDRs shall give a BMC enough information to
understand which sensors are available, what they are monitoring, and what their IDs are.

3.2 Intel® Ethernet 700 Series

3.2.1 General
• Devices based on the Intel® Ethernet Controller XL710 (4x10 GbE, 1x40 GbE, 2x40 GbE) have an
expected total throughput for the entire device of 40 Gb/s in each direction.
• The first port of Intel® Ethernet Controller 700 Series-based adapters displays the correct branding
string. All other ports on the same device display a generic branding string.
• In order for an Intel® Ethernet Controller 700 Series-based adapter to reach its full potential, users
must install it in a PCIe Gen3 x8 slot. Installing on fewer lanes (x4, x2) and/or Gen2 or Gen1,
impedes the full throughput of the device.

3.2.2 Intel® Ethernet Controller V710-AT2/X710-AT2/TM4


• Incorrect DeviceProviderName is returned when using RDE NegotiateRedfishParameters. This issue
has been root caused and the fix should be integrated in the next firmware release.

3.2.3 Windows Driver


None for this release.

3.2.4 Linux Driver


• On kernel version 5.0.9 and higher, setting promiscuous mode on a trusted VF leads to a periodic and
endless update of this mode. It can be observed via dmesg.

3.2.5 Intel® PROSet


None for this release.


3.2.6 EFI Driver


• In the BIOS Controller Name as part of the Controller Handle section, a device path appears instead
of an Intel adapter branding name.

3.2.7 NVM
None for this release.

3.3 Intel® Ethernet 500 Series

3.3.1 General
None for this release.

3.3.2 EFI Driver


• In the BIOS Controller Name as part of the Controller Handle section, a device path appears instead
of an Intel adapter branding name.

3.3.3 Windows Driver


None for this release.

3.4 Intel® Ethernet 300 Series

3.4.1 EFI Driver


• In the BIOS Controller Name as part of the Controller Handle section, a device path appears instead
of an Intel adapter branding name.

3.5 Intel® Ethernet 200 Series


None for this release.

3.6 Legacy Devices


Some older Intel® Ethernet adapters do not have full software support for the most recent versions of
Microsoft Windows*. Many older Intel® Ethernet adapters have base drivers supplied by Microsoft
Windows. Lists of supported devices per operating system are available here.

4.0 NVM Upgrade/Downgrade 800 Series/700 Series and X550
Refer to the Feature Support Matrix (FSM) links listed in Related Documents for more detail. FSMs list
the exact feature support provided by the NVM and software device drivers for a given release.


5.0 Languages Supported


Note: This only applies to Microsoft Windows and Windows Server Operating Systems.
This release supports the languages listed in the table that follows:

Languages

English Spanish
French Simplified Chinese
German Traditional Chinese
Italian Korean
Japanese Portuguese


6.0 Related Documents


Contact your Intel representative for technical support about Intel® Ethernet Series devices/adapters.

6.1 Feature Support Matrix


These documents contain additional details of features supported, operating system support, cable/
modules, etc.

Device Series Support Link

Intel® Ethernet 800 Series https://cdrdv2.intel.com/v1/dl/getContent/630155

Intel® Ethernet 700 Series:


— X710/XXV710/XL710 https://cdrdv2.intel.com/v1/dl/getContent/332191
— X722 https://cdrdv2.intel.com/v1/dl/getContent/336882
— X710-TM4/AT2 and V710-AT2 https://cdrdv2.intel.com/v1/dl/getContent/619407

Intel® Ethernet 500 Series https://cdrdv2.intel.com/v1/dl/getContent/335253

Intel® Ethernet 300 Series N/A

Intel® Ethernet 200 Series N/A

6.2 Specification Updates


These documents provide the latest information on hardware errata as well as device marking
information, SKU information, etc.

Device Series Support Link

Intel® Ethernet 800 Series https://cdrdv2.intel.com/v1/dl/getContent/616943

Intel® Ethernet 700 Series:


— X710/XXV710/XL710 https://cdrdv2.intel.com/v1/dl/getContent/331430
— X710-TM4/AT2 and V710-AT2 https://cdrdv2.intel.com/v1/dl/getContent/615119

Intel® Ethernet 500 Series


— X550 https://cdrdv2.intel.com/v1/dl/getContent/333717
— X540 https://cdrdv2.intel.com/v1/dl/getContent/334566

Intel® Ethernet 300 Series https://cdrdv2.intel.com/v1/dl/getContent/333066


Intel® Ethernet 200 Series
— I210 https://cdrdv2.intel.com/v1/dl/getContent/332763
— I211 https://cdrdv2.intel.com/v1/dl/getContent/333015

6.3 Software Download Package


The release software download package can be found here.

6.4 Intel Product Security Center Advisories


Intel product security center advisories can be found at:
https://www.intel.com/content/www/us/en/security-center/default.html


LEGAL

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
This document (and any related software) is Intel copyrighted material, and your use is governed by the express license under which
it is provided to you. Unless the license provides otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit
this document (and related materials) without Intel's prior written permission. This document (and related materials) is provided as
is, with no express or implied warranties, other than those that are expressly stated in the license.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a
particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in
trade.
This document contains information on products, services and/or processes in development. All information provided here is subject
to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors which may cause deviations from published specifications.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725
or by visiting www.intel.com/design/literature.htm.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Other names and brands may be claimed as the property of others.
© 2021 Intel Corporation.
