Intel Ethernet Controller Products: 26.8 Release Notes
Revision 1.0
690423-001
Intel® Ethernet Controller Products
26.8 Release Notes
1.0 Overview
This document provides an overview of the changes introduced in the latest Intel® Ethernet controller/
adapter family of products. References to more detailed information are provided where necessary. The
information contained in this document is intended as supplemental information only; it should be used
in conjunction with the documentation provided for each component.
These release notes list the features supported in this software release, known issues, and issues that
were resolved during release development.
• Linux*-only support for the Intel® Ethernet Network Adapter E810-XXV-4T.
• Intel® Ethernet 800 Series devices support RDMA in a Linux* VF on a Microsoft Windows* or Linux host.
• FreeBSD* drivers now support DCB on Intel® Ethernet 800 Series devices.
• Limited support for the Linux ice driver on CentOS/Red Hat* Enterprise Linux* (RHEL) 7.2. No new features were added; only limited regression testing was performed to find and address issues with the ice driver.
• Support for Red Hat* Enterprise Linux* (RHEL) 8.5.
• The Intel® Ethernet Network Adapter E810-XXV-2 for OCP 2.0 is no longer supported. Release 26.7 is the last release that supports it.
1.3 NVM
Table 1 shows the NVM versions supported in this release.
Series        Device   NVM Version
800 Series    E810     3.10
700 Series    700      8.5
500 Series    X550     3.5
200 Series    I210     2.0
1.4.2 Linux
Table 2 shows the Linux distributions that are supported in this release and the accompanying driver
names and versions.
Refer to Section 1.4.1 for details on Levels of Support.
ice 1.7.16 1.7.16 SNT 1.7.16 SNT 1.7.16 SNT 1.7.16 SNT 1.7.16 1.7.16
i40e 2.17.4 2.17.4 SNT 2.17.4 SNT 2.17.4 SNT 2.17.4 SNT 2.17.4 2.17.4
iavf 4.3.19 4.3.19 SNT 4.3.19 SNT 4.3.19 SNT 4.3.19 SNT 4.3.19 4.3.19
ixgbe 5.13.4 5.13.4 SNT 5.13.4 SNT 5.13.4 SNT 5.13.4 SNT 5.13.4 5.13.4
ixgbevf 4.13.3 4.13.3 SNT 4.13.3 SNT 4.13.3 SNT 4.13.3 SNT 4.13.3 4.13.3
igb 5.85 5.85 SNT 5.85 SNT 5.85 SNT 5.85 SNT 5.85 5.85
irdma 1.7.72 1.7.72 SNT 1.7.72 SNT 1.7.72 SNT 1.7.72 SNT 1.7.72 1.7.72
ixe NS NS NS NS 2.4.36.x
e2f NS 1.0.2.x NS NS NS
Driver  Windows Server 2022  Windows Server 2019  Windows Server 2016  Windows Server 2012 R2  Windows Server 2012
e1e     NS                   NS                   NS                   NS                      9.16.10.x
e1k     NS                   NS                   NS                   NS                      12.10.13.x
e1q     NS                   NS                   NS                   NS                      12.7.28.x
e1y     NS                   NS                   NS                   NS                      10.1.17.x
icea NS NS NS NS
i40ea NS NS NS NS
i40eb NS NS NS NS
iavf NS NS NS NS
ixe NS NS NS NS
e2f NS NS NS NS
Driver  Windows 10, version 1809  Windows 10  Windows 8.1  Windows 8
e1e     NS                        NS          NS           9.16.10.x
e1k     NS                        NS          NS           12.10.13.x
e1q     NS                        NS          NS           12.7.28.x
e1y     NS                        NS          NS           10.1.17.x
1.4.5 FreeBSD
Table 5 shows the versions of FreeBSD that are supported in this release and the accompanying driver
names and versions.
Refer to Section 1.4.1 for details on Levels of Support.
2.1.1 General
• A PSoD might appear during a PF reset when the NVMe over RDMA connection is active.
• In order to send or receive RDMA traffic, the network interface associated with the RDMA device
must be up. If the network interface experiences a link down event (for example, a disconnected
cable or ip link set <interface> down), the associated RDMA device is removed and no longer
available to RDMA applications. When the network interface link is restored, the RDMA device is
automatically re-added.
• After a system reboot, an Intel® Ethernet Network Adapter E810 RDMA device in RoCEv2 mode
might occasionally become active with a missing or incorrect GID. To correct the GID value, unload
and reload the irdma driver.
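The driver reload workaround above can be sketched as follows; this is a hedged example, and the sysfs path and device names shown are illustrative and vary by system:

```shell
# Reload the irdma driver to refresh the GID table after a reboot
modprobe -r irdma
modprobe irdma

# Verify the RDMA device came back and inspect a GID entry
# (path below is an example; adjust for the actual device/port)
ibv_devices
cat /sys/class/infiniband/*/ports/1/gids/0
```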
2.1.6 NVM
None for this release.
2.1.7 Firmware
• FW treated SFI C2C as type SFF PHY, which caused a BMC link down.
• When the admin queue command was executed to get the PHY recovery clock (0x0631), it was
missing the Port number in the AQC.
• The interface type was displayed as twisted pair instead of optical fiber.
• Fixed an issue on the Intel Ethernet Network Adapter XXVDA4T where external PHYs do not report
temperature correctly.
• The PackageID0 SDP0 and PackageID1 SDP1 no longer have incorrect assignments to the SDP pins.
• The NC-SI OEM command (0x4B) no longer fails.
• Fixed an issue where the FW had not properly mapped the physical port to the logical port when getting interface sensors.
• Fixed an issue where the FW tried to read the PHY threshold even when the module does not have that sensor.
• Fixed an issue where the FW was trying to get power consumer parameters on a module that was
not present.
• Fixed an issue where the FW was handling GPIO interrupts before the board initialization was
complete.
• Fixed an issue where the FW PLDM command GetFirmwareParameters contained additional null characters.
• Fixed an issue where the FW was reporting a module's power value from an incorrect location.
• Fixed an issue where the FW mapped the wrong cage index to logical port 0. As a result, customers could see the link API return an incorrect indication for the cage index related to logical port 0.
• Fixed an issue where, for PLDM Type 2, the device reported the port thermal sensor PDR and status for all connectivity types, even those without a temperature sensor (temperature and thresholds were reported as zeros).
2.1.8 Manageability
• When bifurcation is enabled on the Chapman Beach adapter, both Intel Ethernet 800 Series
Adapters set the NC-SI Package ID as “0”.
2.2.1 General
• When using multiple Traffic Classes (TCs) for ADQ application traffic, adding ntuple rules to the first
queue of TC2 or higher does not work as expected. The ethtool ntuple rule fails for the first queue.
This does not affect ntuple rules on TC1.
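As an illustration, a rule of the form below targeting the first queue of TC2 is the failing case; the queue-to-TC layout and interface name are hypothetical:

```shell
# Hypothetical layout: TC1 = queues 4-7, TC2 = queues 8-11
# Directing a flow to queue 8 (first queue of TC2) fails:
ethtool -N eth0 flow-type tcp4 dst-port 5201 action 8
# Directing to a TC1 queue (e.g., queue 4) works as expected:
ethtool -N eth0 flow-type tcp4 dst-port 5201 action 4
```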
2.2.5 NVM
None for this release.
3.1.1 General
• Link issues (like false link, long time-to-link (TTL), excessive link flaps, no link) may occur when
Parkvale (C827/XL827) retimer is interfaced with SX/LX, SR/LR, SR4/LR4, AOC limiting optics. This
issue is isolated to Parkvale line side PMD RX susceptibility to noise.
• Intel Ethernet 800 Series adapters in 4x25GbE or 8x10GbE configurations will be limited to a
maximum total transmit bandwidth of roughly 28Gbps per port for 25GbE ports and 12Gbps per
port on 10GbE ports.
This maximum is a total combination of any mix of network (leaving the port) and loopback (VF->VF, VF->PF, PF->VF) TX traffic on a given port and is designed to allow each port to maintain transmit bandwidth at the specific port speed when in 25GbE or 10GbE mode.
If the PF is transmitting traffic as well as the VF(s), under contention the PF has access to up to
50% TX bandwidth for the port and all VFs have access to 50% bandwidth for the port, which will
also impact the total available bandwidth for forwarding.
Note: When calculating the maximum bandwidth under contention for bi-directional loopback
traffic, the number of TX loopback actions are twice that of a similar unidirectional loopback case,
since both sides are transmitting.
• The version of the Ethernet Port Configuration Tool available in Release 26.1 might not work as expected. This has been resolved in Release 26.4.
• E810 currently supports a subset of 1000BASE-T SFP module types, which use SGMII to connect
back to the E810. In order for the E810 to properly know the link status of the module's BASE-T
external connection, the module must indicate the BASE-T side link status to the E810. An SGMII
link between E810 and the 1000BASE-T SFP module allows the module to indicate its link status to
the E810 using SGMII Auto-Negotiation. However, 1000BASE-T SFP modules implement this in a
wide variety of ways, and other methods which do not use SGMII are currently unsupported in
E810. Depending on the implementation, link may never be achieved. In other cases, if the module
sends IDLEs to the E810 when there is no BASE-T link, the E810 may interpret this as a link partner
sending valid data and may show link as being up even though it is only connected to the module
and there is no link on the module's BASE-T external connection.
• Bandwidth/throughput might vary across different virtual functions (VFs) if VF rate limiting is not applied. Workaround: Apply VF rate limiting to avoid this situation.
• If the PF has no link, then a Linux VM previously using a VF will not be able to pass traffic to other VMs without the patch found at:
https://lore.kernel.org/netdev/BL0PR2101MB093051C80B1625AAE3728551CA4A0@BL0PR2101MB0930.namprd21.prod.outlook.com/T/#m63c0a1ab3c9cd28be724ac00665df6a82061097d
This patch routes packets to the virtual interface.
Note: This is a permanent 3rd-party issue. No action is expected on the part of Intel.
• Some devices support auto-negotiation. Selecting this causes the device to advertise the value
stored in its NVM (usually disabled).
• VXLAN switch creation on Windows Server 2019 Hyper V might fail.
• Intel does its best to find and address interoperability issues; however, there might be connectivity issues with certain modules, cables, or switches. Interoperating with devices that do not conform to the relevant standards and specifications increases the likelihood of connectivity issues.
• When priority or link flow control features are enabled, traffic at low packet rates might increment
priority flow control and/or packet drop counters.
• In order for an Intel® Ethernet 800 Series-based adapter to reach its full potential, users must install it in a PCIe Gen4 x16 slot. Installing it on fewer lanes (x8, x4, x2) and/or in a Gen3, Gen2, or Gen1 slot impedes the full throughput of the device.
• On certain platforms, the legacy PXE option ROM boot option menu entries from the same device are prepended with identical port number information (the first part of the string, which comes from the BIOS).
This is not an option ROM issue. The first device option ROM initialized on a platform exposes all
boot options for the device, which is misinterpreted by BIOS.
The second part of the string from the option ROM indicates the correct slot (port) numbers.
• When having link issues (including no link) at link speeds faster than 10 Gb/s, check the switch configuration and/or specifications. Many optical connections and direct attach cables require RS-FEC for connection speeds faster than 10 Gb/s. One of the following might resolve the issue:
— Configure the switch to use RS-FEC mode.
— Specify a 10 Gb/s, or slower, link speed connection.
— If attempting to connect at 25 Gb/s, try using an SFP28 CA-S or CS-N direct attach cable. These cables do not require RS-FEC.
— If the switch does not support RS-FEC mode, check with the switch vendor for the availability of a software or firmware upgrade.
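The switch-side change in the first option has a host-side counterpart. Assuming the installed driver exposes ethtool FEC control (interface name is illustrative), the mode can be checked and set as follows:

```shell
# Query the configured and active FEC mode
ethtool --show-fec eth0
# Request RS-FEC (required by many 25 Gb/s+ optics and DACs)
ethtool --set-fec eth0 encoding rs
# Alternatively, force a 10 Gb/s link speed instead
ethtool -s eth0 speed 10000 autoneg off
```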
3.1.2 Firmware
• Following a firmware update and reboot/power cycle on the Intel Ethernet CQDA2 Adapter, Port 1 displays NO-CARRIER and is not functional.
• When software requests the port parameters for port 0 from firmware (the connectivity type, via AQ), the response is BACKPLANE_CONNECTIVITY when it should be CAGE_CONNECTIVITY.
• The RDE device reports its status as Starting (with low power), even though it is in standby mode.
• A Wake-on-LAN flow is unexpectedly triggered by the E810 CQDA2 for OCP 3.0 adapter. The server unexpectedly wakes up from the S5 power state a few seconds after being shut down from the OS, making it impossible to shut down the server.
• Health status messages are not cleared with a PF reset, even after the reported issue is resolved.
• Flow control settings have no effect on traffic, and counters do not increment, with flow control set to Tx=on and Rx=off. However, flow control works fine with both Tx=on and Rx=on.
• Changing the FEC value from BaseR to RS results in an error message in dmesg, and may result in
link issues.
• UEFI PXE installation of Red Hat Enterprise Linux 8.4 on a local disk results in the system failing to boot.
• When a VF interface is set as 'up' and assigned to a namespace, and the namespace is then
deleted, the dmesg log may show the error "Failed to set LAN Tx queue context, error:
ICE_ERR_PARAM" followed by error codes from the ice and iavf drivers.
• If trusted mode is enabled for a VF while promiscuous mode is disabled and multicast promiscuous
mode is enabled, unicast packets may be visible on the VF and multicast packets may not be visible
on the VF. Alternatively, if promiscuous mode is enabled and multicast promiscuous mode is
disabled, then both unicast and multicast packets may not be visible on the VF interface.
• A VF may incorrectly receive additional packets when trusted mode is disabled but promiscuous
mode is enabled.
• When using bonding mode 5 (i.e., balance-tlb or adaptive transmit load balancing), if you add multiple VFs to the bond, they are assigned duplicate MAC addresses. When the VFs are joined with the bond interface, the Linux bonding driver sets the MAC address for the VFs to the same value, based on the first active VF added to that bond. This results in balance-tlb mode not functioning as expected. PF interfaces behave as expected.
The presence of duplicate MAC addresses may cause further issues, depending on your switch
configuration.
• The commands ethtool -C [rx|tx]-frames are not supported by the iavf driver and will be
ignored.
Setting [tx|rx]-frames-irq using ethtool -C may not correctly save the intended setting and may
reset the value back to the default value of 0.
• The Intel Ethernet 800 Series adapter in an 8-port 10Gb configuration may generate errors on Linux PF or VF driver load due to RSS profile allocation. Ports that report this error will experience RSS failures, resulting in some packet types not being properly distributed across cores.
• Adding a physical port to the Linux bridge might fail and result in a Device or Resource Busy message if SR-IOV is already enabled on the given port. To avoid this condition, create SR-IOV VFs after assigning a physical port to a Linux bridge. Refer to Link Aggregation is Mutually Exclusive with SR-IOV and RDMA in the ice driver README.
• If a Virtual Function (VF) is not in trusted mode and eight or more VLANs are created on one VF, the
VLAN that is last created might be non-functional and an error might be seen in dmesg.
• When using a Windows Server 2019 RS5 Virtual Machine on a RHEL host, a VLAN configured on the
VF using iproute2 might not pass traffic correctly when an ice driver older than version 1.3.1 is used
in combination with a newer AVF driver version.
• It has been observed that when using iSCSI, the iSCSI initiator intermittently fails to connect to the iSCSI target.
• When the Double VLAN Mode is enabled on the host, disabling then re-enabling a Virtual Function
attached to a Windows guest might cause error messages to be displayed in dmesg. These
messages will not affect functionality.
• With the current ice PF driver, there might not be a way for a trusted VF to enable unicast and multicast promiscuous mode without turning on the vf-true-promisc-support private flag via ethtool --set-priv-flags. As such, the expectation is that vf-true-promisc-support is not used to gate a VF's request for unicast/multicast promiscuous mode.
• Repeatedly assigning a VF interface to a network namespace then deleting that namespace might
result in an unexpected error message and might possibly result in a call trace on the host system.
• Receive hashing might not be enabled by default on Virtual Functions when using an older iavf
driver in combination with a newer PF driver version.
• When Double VLAN is created on a Virtual Machine, tx_tcp_cso [TX TCP Checksum Offload] and
tx_udp_cso [TX UDP Checksum Offload] statistics might not increment correctly.
• If a VLAN with an Ethertype of 0x9100 is configured to be inserted into the packet on transmit, and
the packet, prior to insertion, contains a VLAN header with an Ethertype of 0x8100, the 0x9100
VLAN header is inserted by the device after the 0x8100 VLAN header. The packet is transmitted by
the device with the 0x8100 VLAN header closest to the Ethernet header.
• A PCI reset performed on the host might result in traffic failure on VFs for certain guest operating
systems.
• On RHEL 7.x and 8.x operating systems, it has been observed that the rx_gro_dropped statistic
might increment rapidly when Rx traffic is high. This appears to be an issue with the RHEL kernels.
• When ice interfaces are part of a bond with arp_validate=1, the backup port link status flaps between up and down. Workaround: Do not enable arp_validate when bonding ice interfaces.
• Changing a Virtual Function (VF) MAC address when a VF driver is loaded on the host side might
result in packet loss or a failure to pass traffic. As a result, the VF driver might need to be restarted.
• Current limitations of minimum Tx rate limiting on SR-IOV VFs:
— If DCB or ADQ are enabled on a PF then configuring minimum Tx rate limiting on SR-IOV VFs on
that PF is rejected.
— If both DCB and ADQ are disabled on a PF then configuring minimum Tx rate limiting on SR-IOV
VFs on that PF is allowed.
— If minimum Tx rate limiting on a PF is already configured for SR-IOV VFs and a DCB or ADQ
configuration is applied, then the PF can no longer guarantee the minimum Tx rate limits set for
SR-IOV VFs.
— If minimum Tx rate limiting is configured on SR-IOV VFs across multiple ports that have an
aggregate bandwidth over 100Gbps, then the PFs cannot guarantee the minimum Tx rate limits
set for SR-IOV VFs.
• Some distros may contain an older version of iproute2/devlink, which may result in errors. Workaround: Update to the latest devlink version.
• On Intel Ethernet Adapter XXVDA4T, the driver may not link at 1000baseT and 1000baseX. The link
may go down after advertising 1G.
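Several of the VF issues above (uneven bandwidth, min Tx rate limits, duplicate MACs under balance-tlb) have iproute2-based workarounds. A hedged sketch, with hypothetical PF name, VF indices, rates, and MACs:

```shell
# Apply per-VF rate limiting to even out bandwidth across VFs (Mbps)
ip link set dev ens1f0 vf 0 max_tx_rate 10000
# min_tx_rate is rejected while DCB or ADQ is enabled on the PF
ip link set dev ens1f0 vf 0 min_tx_rate 1000
# Assign distinct administrative MACs to VFs before adding them to a
# balance-tlb bond, avoiding the duplicate-MAC problem
ip link set dev ens1f0 vf 0 mac 02:11:22:33:44:00
ip link set dev ens1f0 vf 1 mac 02:11:22:33:44:01
```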
3.1.9 Workaround: Update the RDMA driver before the NVMUpdate process, or disable RDMA in the icen module parameters, then perform a platform reboot.
• ADQ configuration must be cleared following the steps outlined in the ADQ Configuration Guide. The
following issues may result if steps are not executed in the correct order:
— Removing a TC qdisc prior to deleting a TC filter will cause the qdisc to be deleted from
hardware and leave an unusable TC filter in software.
— Deleting a ntuple rule after deleting the TC qdisc, then re-enabling ntuple, may leave the
system in an unusable state which requires a forced reboot to clear.
— Mitigation — Follow the steps documented in the ADQ Configuration Guide to "Clear the ADQ Configuration".
• ADQ configuration is not supported on a bonded or teamed Intel® E810 Network adapter interface.
Issuing the ethtool or tc commands to a bonded E810 interface will result in error messages from
the ice driver to indicate the operation is not supported.
• If the application stalls for some reason, this can cause a queue stall for application-specific queues
for up to two seconds.
— Workaround - Recommend configuration of only one application per Traffic Class (TC) channel.
• DCB and ADQ are mutually exclusive and cannot coexist. A switch with DCB enabled might remove
the ADQ configuration from the device.
— Workaround - Do not enable DCB on the switch ports being used for ADQ. Disable LLDP on the
interface by turning off firmware LLDP agent using:
ethtool --set-priv-flags $iface fw-lldp-agent off
• Note (unrelated to Intel drivers): The 5.8.0 Linux kernel introduced a bug that broke the interrupt
affinity setting mechanism.
— Workaround - Use an earlier or later version of the kernel to avoid this error.
• The iavf driver must use Trusted mode with ADQ: Trusted mode must be enabled for ADQ inside a
VF. If TC filters are created on a VF interface with trusted mode off, the filters are added to the
software table but are not offloaded to the hardware.
• VF supports Max Transmit Rate only: the iavf driver only supports setting maximum transmit rate
(max_rate) for Tx traffic. Minimum transmit rate (min_rate) setting is not supported with a VF.
• VF Max Transmit Rate: The tc qdisc add command on a VF interface does not verify that the max_rate value(s) for the TCs are specified in increments of 500 Kbps. TC max_rate is expected to be a multiple of (or equal to) 500 Kbps.
• A core-level reset of an ADQ-configured VF port (rare events usually triggered by other failures in
the NIC/iavf driver) results in loss of ADQ configuration. To recover, reapply ADQ configuration to
the VF interface.
• VF errors occur when deleting TCs or unloading iavf driver in a VF: ice and iavf driver error
messages might get triggered in a VF when TCs are configured, and TCs are either manually
deleted or the iavf driver is unloaded. Reloading the ice driver recovers the driver states.
• Commands such as tc qdisc add and ethtool -L cause the driver to close the associated RDMA
interface and reopen it. This disrupts RDMA traffic for 3-5 seconds until the RDMA interface is
available again for traffic.
• When the number of queues is increased using ethtool -L, the new queues will have the same interrupt moderation settings as queue 0 (i.e., Tx queue 0 for new Tx queues and Rx queue 0 for new Rx queues). This can be changed using the ethtool per-queue coalesce commands.
• To fully release hardware resources and have all supported filter type combinations available, the
ice driver must be unloaded and re-loaded.
• When ADQ is enabled on VFs, TC filters on the VF TC0 (the default TC) are not supported and will not pass traffic. TC filters are not expected to be added to TC0 since it is reserved for non-filtered default traffic.
• If a reset occurs on a PF interface containing TC filter(s), traffic does not resume to the TC filter(s)
after the PF interface is restored.
• TC filters can unexpectedly match packets that use IP protocols other than what is specified as the
ip_proto argument in the tc filter add command. For example, UDP packets may be matched on
a TCP TC filter created with ip_proto tcp without any L4 port matches.
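The safe teardown order described above (delete TC filters before the qdisc, and handle ntuple last) can be sketched as follows; the interface name and qdisc details are illustrative, and the ADQ Configuration Guide remains the authoritative reference:

```shell
# 1. Delete TC filters first, while the qdisc still exists
tc filter del dev eth0 ingress
# 2. Then remove the TC qdisc configuration
tc qdisc del dev eth0 root mqprio
tc qdisc del dev eth0 clsact
# 3. Only then disable ntuple, if desired
ethtool -K eth0 ntuple off
```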
3.1.11 Manageability
• Intel updated the E810 FW to align the sensor ID design as defined by DMTF DSP2054, starting from Release 26.4. Previous versions of the E810 FW were based on a draft version of the specification. As a result, updating to the newer NVM with this FW will result in updated numbering for the thermal sensor IDs and PDR handles. Anyone using hard-coded values for these will see changes. A proper description of the system through PLDM Type 2 PDRs gives a BMC enough information to understand what sensors are available, what they are monitoring, and what their IDs are.
3.2.1 General
• Devices based on the Intel® Ethernet Controller XL710 (4x10 GbE, 1x40 GbE, 2x40 GbE) have an
expected total throughput for the entire device of 40 Gb/s in each direction.
• The first port of Intel® Ethernet Controller 700 Series-based adapters displays the correct branding string. All other ports on the same device display a generic branding string.
• In order for an Intel® Ethernet Controller 700 Series-based adapter to reach its full potential, users must install it in a PCIe Gen3 x8 slot. Installing it on fewer lanes (x4, x2) and/or in a Gen2 or Gen1 slot impedes the full throughput of the device.
3.2.7 NVM
None for this release.
3.3.1 General
None for this release.
Languages
• English
• French
• German
• Italian
• Japanese
• Spanish
• Simplified Chinese
• Traditional Chinese
• Korean
• Portuguese
LEGAL
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
This document (and any related software) is Intel copyrighted material, and your use is governed by the express license under which
it is provided to you. Unless the license provides otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit
this document (and related materials) without Intel's prior written permission. This document (and related materials) is provided as
is, with no express or implied warranties, other than those that are expressly stated in the license.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a
particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in
trade.
This document contains information on products, services and/or processes in development. All information provided here is subject
to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors which may cause deviations from published specifications.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725
or by visiting www.intel.com/design/literature.htm.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Other names and brands may be claimed as the property of others.
© 2021 Intel Corporation.