Ug20303 683517 836498
Contents
1. Introduction................................................................................................................... 4
1.1. Terms and Acronyms..............................................................................................4
1.2. MCDMA IP Modes...................................................................................................5
2. Design Example Detailed Description.............................................................................. 7
2.1. Design Example Overview.......................................................................................7
2.1.1. H-Tile MCDMA IP - Design Examples for Endpoint........................................... 7
2.1.2. P-Tile MCDMA IP - Design Examples for Endpoint........................................... 9
2.1.3. F-Tile MCDMA IP - Design Examples for Endpoint..........................................11
2.1.4. R-Tile MCDMA IP - Design Examples for Endpoint......................................... 13
2.1.5. Design Example BAR Mappings.................................................................. 15
2.2. Hardware and Software Requirements.................................................................... 16
2.3. PIO using MCDMA Bypass Mode............................................................................. 16
2.3.1. Avalon-ST PIO using MCDMA Bypass Mode.................................................. 16
2.3.2. Avalon-MM PIO using MCDMA Bypass Mode................................................. 17
2.4. Avalon-ST Packet Generate/Check..........................................................................19
2.4.1. Single-Port Avalon-ST Packet Generate/Check..............................................19
2.5. Avalon-ST Device-side Packet Loopback.................................................................. 22
2.5.1. Simulation Results................................................................................... 23
2.5.2. Hardware Test Results.............................................................................. 24
2.6. Avalon-MM DMA...................................................................................................27
2.6.1. Simulation Results................................................................................... 28
2.6.2. Hardware Test Results.............................................................................. 29
2.7. BAM_BAS Traffic Generator and Checker................................................................. 32
2.7.1. BAM_BAS Traffic Generator and Checker Example Design Register Map........... 33
2.8. External Descriptor Controller................................................................................36
2.8.1. Registers................................................................................................ 40
2.8.2. Hardware Test Results.............................................................................. 42
3. Design Example Quick Start Guide................................................................................ 43
3.1. Design Example Directory Structure....................................................................... 43
3.2. Generating the Example Design using Quartus Prime................................................ 45
3.2.1. Procedure............................................................................................... 45
3.3. Simulating the Design Example..............................................................................47
3.3.1. Testbench Overview................................................................................. 47
3.3.2. Supported Simulators............................................................................... 48
3.3.3. Example Testbench Flow for DMA Test with Avalon-ST Packet Generate/Check Design Example..............52
3.3.4. Run the Simulation Script..........................................................................54
3.3.5. Steps to Run the Simulation...................................................................... 54
3.3.6. View the Results...................................................................................... 55
3.4. Compiling the Example Design in Quartus Prime...................................................... 55
3.5. Running the Design Example Application on a Hardware Setup...................................56
3.5.1. Program the FPGA....................................................................................56
3.5.2. Quick Start Guide.................................................................................... 57
4. Multi Channel DMA Intel FPGA IP for PCI Express Design Example User Guide Archives................108
Multi Channel DMA Intel® FPGA IP for PCI Express* Design Example User Guide
5. Revision History for the Multi Channel DMA Intel FPGA IP for PCI Express Design Example User Guide...............109
683517 | 2024.11.04
1. Introduction
1.1. Terms and Acronyms

Term | Definition
D2H | Device-to-Host
HIP | Hard IP
H2D | Host-to-Device
IP | Intellectual Property
PD | Packet Descriptor
© Altera Corporation. Altera, the Altera logo, the ‘a’ logo, and other Altera marks are trademarks of Altera Corporation. Altera and Intel warrant performance of its FPGA and semiconductor products to current specifications in accordance with Altera’s or Intel's standard warranty as applicable, but reserves the right to make changes to any products and services at any time without notice. Altera and Intel assume no responsibility or liability arising out of the application or use of any information, product, or service described herein except as expressly agreed to in writing by Altera or Intel. Altera and Intel customers are advised to obtain the latest version of device specifications before relying on any published information and before placing orders for products or services.
ISO 9001:2015 Registered
*Other names and brands may be claimed as the property of others.
1.2. MCDMA IP Modes

Table 2. MCDMA IP Modes and FPGA Development Kit for Design Examples

MCDMA IP | PCI Express Hard IP Mode | Application Data Width | Application Clock Frequency | FPGA Development Kit Board for Design Example Hardware Test
MCDMA H-Tile | Gen3 x16 | 512 bits | 250 MHz | Stratix® 10 GX H-Tile Production FPGA Development Kit; Stratix 10 MX H-Tile Production FPGA Development Kit
| Gen3 x8 | 256 bits | 250 MHz |
MCDMA R-Tile | Gen5 2x8, Gen4 x16 | 512 bits | 500/475/450/425/400 MHz | Intel Agilex 7 FPGA I-Series Development Kit (ES 2 x R-Tile and 1 x F-Tile); Intel Agilex 7 FPGA I-Series Development Kit (ES1 2 x R-Tile and 1 x F-Tile)
| Gen4 2x8, Gen3 x16, Gen3 2x8 | 512 bits | 300/275/250 MHz |
| Gen5 4x4, Gen4 2x8 | 256 bits | 500/475/450/425/400 MHz |
| Gen3 2x8, Gen4 4x4, Gen3 4x4 | 256 bits | 300/275/250 MHz |
MCDMA F-Tile | Gen4 1x16 | 512 bits | 500/400/350/250/225/200/175 MHz | Intel Agilex 7 FPGA F-Series Development Kit (ES1 2 x F-Tile)
| Gen4 1x8, Gen4 2x8 | 256 bits | 500/400/350/250/225/200/175 MHz |
Note: Intel Agilex 7 FPGA I-Series Development Kit (ES 2 x R-Tile and 1 x F-Tile) R-Tile A0 die revision supports only Gen5 2x8 / 512 bits, Gen4 2x8 / 512 bits and Gen3 2x8 / 512 bits.
Note: Intel Agilex 7 FPGA I-Series Development Kit (ES1 2 x R-Tile and 1 x F-Tile) R-Tile B0 die revision supports all PCIe Hard IP Modes defined in the MCDMA R-Tile row.
For more information about MCDMA IP, refer to the Multi Channel DMA Intel FPGA IP
for PCI Express User Guide.
You can generate the design example from the Example Designs tab of the Multi Channel DMA for PCI Express IP Parameter Editor. You can choose the user interface type, either Avalon-ST or Avalon-MM. You can allocate up to 2048 DMA channels (with a maximum of 512 channels per function) when the Avalon-MM interface type or Avalon-ST 1-port interface is selected.
2. Design Example Detailed Description
683517 | 2024.11.04
Design Example | User Mode | Interface Type | Driver Support | App Support
... | ... | ... | ... | Testpmd (Testpmd-DMA)
Note: 1. MCDMA H-Tile design example supports SR-IOV with only 1 PF and its VFs for simulation in these configurations.
2. Netdev driver supports 4 PFs and 1 channel per PF.

... | BAM + BAS + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, DMA Loopback Test, BAM Test and BAS Test)
Design Example | User Mode | Interface Type | Driver Support | App Support
PIO using MCDMA Bypass Mode | Multi-Channel DMA; BAM + MCDMA; BAM + BAS + MCDMA | AVMM; AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test)
| | | DPDK | Mcdma_test (DPDK PIO Test)
Traffic Generator/Checker | BAM + BAS | n/a | Custom | Perfq app (Custom PIO Read Write Test and BAS Test)
External Descriptor Controller | Data Mover Only | AVMM | Custom | Perfq app (External Descriptor Mode Verification)
Note: P-Tile MCDMA IP design example doesn’t support multiple physical functions and SR-
IOV for simulation.
Design Example | User Mode | Interface Type | Driver Support | App Support
AVMM DMA | Multi-Channel DMA; BAM + MCDMA | AVMM | Custom | Perfq app (Custom PIO Read Write Test, Verifying on AVMM DMA and BAM Test)
Device-side Packet Loopback | Multi-Channel DMA; BAM + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, BAM Test and DMA Loopback Test)
| BAM + BAS + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, DMA Loopback Test, BAM Test and BAS Test)
Packet Generate/Check | Multi-Channel DMA; BAM + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, BAM Test and Packet Gen Test - DMA)
| BAM + BAS + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, BAM Test, Packet Gen Test - DMA and BAS Test)
PIO using MCDMA Bypass Mode | Multi-Channel DMA; BAM + MCDMA; BAM + BAS + MCDMA | AVMM; AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test)
| | | DPDK | Mcdma_test (DPDK PIO Test)
Traffic Generator/Checker | BAM + BAS | n/a | Custom | Perfq app (Custom PIO Read Write Test and BAS Test)
External Descriptor Controller | Data Mover Only | AVMM | Custom | Perfq app (External Descriptor Mode Verification)
Note: F-Tile MCDMA IP design example doesn’t support multiple physical functions and SR-
IOV for simulation.
Note: F-Tile MCDMA IP 1x4 design example does not support simulation.
Note: F-Tile does not support simulation on the ModelSim* - Intel FPGA Edition, Questa*
Intel FPGA Edition, and Xcelium* simulators.
Note: For F-Tile System PLL reference clock requirement, refer to the Multi Channel DMA
Intel FPGA IP for PCI Express User Guide.
Refer to Table 2, MCDMA IP Modes and FPGA Development Kit for Design Examples, for the Hard IP Modes that have design example support.
Design Example | User Mode | Interface Type | Driver Support | App Support
AVMM DMA | Multi-Channel DMA; BAM + MCDMA | AVMM | Custom | Perfq app (Custom PIO Read Write Test, Verifying on AVMM DMA and BAM Test)
| BAM + BAS + MCDMA | AVMM | Custom | Perfq app (Custom PIO Read Write Test, Verifying on AVMM DMA, BAM Test and BAS Test)
Device-side Packet Loopback | Multi-Channel DMA; BAM + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, BAM Test and DMA Loopback Test)
| BAM + BAS + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, DMA Loopback Test, BAM Test and BAS Test)
Packet Generate/Check | Multi-Channel DMA; BAM + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, BAM Test and Packet Gen Test - DMA)
| BAM + BAS + MCDMA | AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test, BAM Test, Packet Gen Test - DMA and BAS Test)
PIO using MCDMA Bypass Mode | Multi-Channel DMA; BAM + MCDMA; BAM + BAS + MCDMA | AVMM; AVST 1 Port | Custom | Perfq app (Custom PIO Read Write Test)
| | | DPDK | Mcdma_test (DPDK PIO Test)
| Data Mover Only | AVMM | Custom | Perfq app (Custom PIO Read Write Test)
Traffic Generator/Checker | BAM + BAS | n/a | Custom | Perfq app (Custom PIO Read Write Test and BAS Test)
External Descriptor Controller | Data Mover Only | AVMM | Custom | Perfq app (External Descriptor Mode Verification)
Note: R-Tile MCDMA IP PIO using Bypass Mode design example simulation is supported in x16 and x8 topologies. The remaining R-Tile design example simulations are not supported.
Note: R-Tile MCDMA IP 4x4 design example does not support simulation.
Note: R-Tile MCDMA IP design example doesn’t support multiple physical functions and SR-
IOV for simulation.
Note: Data Mover Only Mode is not available in R-Tile MCDMA IP x4 topology.
Note: The R-Tile 32-bit address PIO is not supported when both the 32-bit addressing and
32-bit PIO are enabled.
Note: The R-Tile MCDMA BAM + MCDMA design example generation fails when the 32-bit
address routing is enabled.
Note: The R-Tile design example does not support PIPE mode simulations.
2.1.5. Design Example BAR Mappings

User Mode | Interface | Design Example | BAR Selected in MCDMA IP | BAR Selected for PIO/BAM Design Example | BAR Selected for BAS Design Example
MCDMA | AVST | Device-side Packet Loopback | BAR0 | BAR2 | N/A
MCDMA | AVST | Packet Generate/Check | BAR0 | BAR2 | N/A
BAM + MCDMA | AVST | Device-side Packet Loopback | BAR0 | BAR2 | N/A
BAM + MCDMA | AVST | Packet Generate/Check | BAR0 | BAR2 | N/A
BAM + BAS + MCDMA | AVST | Device-side Packet Loopback | BAR0 | BAR2 | BAR4
BAM + BAS + MCDMA | AVST | Packet Generate/Check | BAR0 | BAR2 | BAR4
BAM + BAS | N/A | Traffic Generator/Checker | N/A | BAR0 | BAR2
For details on the design example simulation steps and running the hardware test, refer to the Design Example Quick Start Guide.
For more information on development kits, refer to Intel FPGA Development Kits on
the Intel website.
Related Information
• Design Example Quick Start Guide on page 43
• Agilex 7 FPGA I-Series Development Kit
This design example enables an Avalon-MM PIO master that bypasses the DMA path. The Avalon-MM PIO master (rx_pio_master) allows your application to perform single, non-bursting register read/write operations with on-chip memory. This design example only supports PIO functionality and does not perform DMA operations. Hence, the Avalon-ST DMA ports are not connected.
Figure 1. MCDMA IP Single Port Avalon-ST Interface PIO Example Design using MCDMA Bypass Mode
[Block diagram: the Host connects to the PCIe HIP over hip_serial; the rx_pio_master Avalon-MM port connects to the MEM_PIO on-chip memory; the H2D DMA (h2d_st_0) and D2H DMA (d2h_st_0) Avalon-ST ports are left unconnected.]
[Block diagram, Avalon-MM variant: the Host connects to the PCIe HIP over hip_serial; rx_pio_master connects to MEM_PIO; the H2D DMA (h2ddm_master) and D2H DMA (d2hdm_master) Avalon-MM ports are left unconnected.]
This design example enables an Avalon-MM PIO master that bypasses the DMA path. The Avalon-MM PIO master allows your application to perform single, non-bursting register read/write operations with on-chip memory.
This design example only supports PIO functionality and does not perform DMA operations (similar to the design examples in Avalon-ST PIO using MCDMA Bypass Mode on page 16). Hence, the Avalon-MM DMA ports are not connected.
The design example includes the Multi Channel DMA for PCI Express IP Core with the
parameters you specified and other supporting components:
• resetIP – Reset Release IP that holds the Multi Channel DMA in reset until the
entire FPGA fabric enters user mode.
• MEM_PIO – On-chip memory for the PIO operation. Connected to the MCDMA
Avalon-MM PIO Master (rx_pio_master) port that is mapped to PCIe BAR2.
For a description of which driver(s) to use with this design example, refer to Driver
Support on page 57.
Note: Metadata Support is not available in AVST Interface mode, PIO using MCDMA Bypass
Mode example design.
Note: User-FLR Interface is not available in AVMM Interface mode, PIO Using MCDMA Bypass
Mode example design.
The testbench writes a 4 KB incrementing pattern to on-chip memory and reads it back via the Avalon-MM PIO interface. This design example testbench doesn’t simulate the H2D/D2H data movers.
Note: The Custom Driver was used to generate the following output.
Note: If the R-Tile MCDMA IP or F-Tile MCDMA IP is configured with the 32-bit PIO enabled,
use the following command instead:
For a description of which driver(s) to use with this design example, refer to Driver
Support on page 57.
Note: Metadata Support is not available in AVST Packet Generator/Checker design example.
Note: This test was run with Agilex 7 F-Series P-Tile FPGA PCIe Gen4 x16 configuration using
Custom Driver.
Note: This test was run with Agilex 7 F-Series P-Tile FPGA PCIe Gen4 x16 configuration using
CentOS 7 platform.
----------------------------------------------------------------------
BDF: 0000:2d:00.0
Channels Allocated: 8
QDepth 508
Number of pages: 8
Completion mode: WB
Payload Size per descriptor: 32768 Bytes
SOF on descriptor: 1
EOF on descriptor: 1
File Size: 32768 Bytes
PKG Gen Files: 64
Rx Batch Size: 127 Descriptors
DCA: OFF
-----------------------------------------------------------------------
Thread initialization in progress ...
Thread initialization done
-----------------------------------------------------------------------------------
Dir   #queues   Time_elpsd   B_trnsfrd        TBW        IBW        MIBW       HIBW       LIBW       MPPS       #stuck
Rx    8         00:05:000    127333248.00KB   24.28GBPS  24.28GBPS  03.04GBPS  03.04GBPS  03.04GBPS  00.80MPPS  0
-----------------------------------------------------------------------------------
All Threads exited
-----------------------------------------------------------------------------------
Dir   #queues   Time_elpsd   B_trnsfrd        TBW        IBW        MIBW       HIBW       LIBW       MPPS       #stuck
Rx    8         00:10:000    251390912.00KB   23.97GBPS  23.66GBPS  02.96GBPS  02.96GBPS  02.96GBPS  00.78MPPS  0
-----------------------------------------------------------------------------------
This design example performs H2D and D2H multi channel DMA via the device-side Avalon-ST streaming interface. With device-side packet loopback, the Host to Device (H2D) data stream is looped back to the Host (D2H) through an external FIFO.
For H2D streaming, the Multi Channel DMA sends the data to the Avalon-ST loopback FIFOs via four Avalon-ST Source ports. For D2H streaming, the Multi Channel DMA receives the data from the Avalon-ST loopback FIFOs via Avalon-ST Sink ports.
In this device-side loopback example, the Host first sets up memory locations within
the Host memory. Data from the Host memory is then sent to the device-side memory
by the Multi Channel DMA for PCI Express IP via H2D DMA operations. Finally, the IP
loops this data back to the Host memory using D2H DMA operations.
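The round trip described above can be modeled end to end. The sketch below is a behavioral illustration only, with a Python deque standing in for the Avalon-ST loopback FIFO; it is not IP or driver code, and all names are assumptions.

```python
from collections import deque

# Behavioral model of the device-side loopback path: the H2D DMA pushes
# host data into a FIFO, the D2H DMA drains the FIFO back into a host
# buffer, and the host verifies the round trip.

def device_side_loopback(host_tx: bytes, chunk: int = 64) -> bytes:
    fifo = deque()
    # H2D: the data mover streams the host buffer into the loopback FIFO
    for i in range(0, len(host_tx), chunk):
        fifo.append(host_tx[i:i + chunk])
    # D2H: the data mover drains the FIFO back to host memory
    host_rx = bytearray()
    while fifo:
        host_rx += fifo.popleft()
    return bytes(host_rx)

payload = bytes(range(256)) * 16          # pattern set up in host memory
assert device_side_loopback(payload) == payload
```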
In addition, the design example enables an Avalon-MM PIO master that bypasses the DMA path. It allows your application to perform single, non-bursting register read/write operations with an on-chip memory block.
The design example includes the Multi Channel DMA for PCI Express IP Core with the parameters you specified and the following components:
• resetIP – Reset Release IP that holds the Multi Channel DMA in reset until the
entire FPGA fabric enters user mode
• GEN_CHK - Packet Generator and Checker for MCDMA. It interfaces with DUT
Avalon Streaming H2D/D2H interfaces (h2d_st_0, d2h_st_0) for DMA operation.
DUT AVMM PIO Master (rx_pio_master) performs read and write operations to the
CSR and On-chip memory.
• PIO_INTERPRETER - This maps DUT AVMM PIO Master address width to AVMM
Slave side address based on its parameter setting such as MAP_PF, MAP_VF, and
MAP_BAR.
For a description of which driver(s) to use with this design example, refer to Driver
Support on page 57.
Note: Metadata Support is available in AVST Device-side Packet Loopback design example.
Note: The same test options can be used with DPDK driver and Kernel Mode driver to
generate comparable results.
PIO Test
[root@bapvemb005t perfq_app]# ./perfq_app -b 0000:01:00.0 -o -v
PIO Write and Read Test ...
Pass
----------------------------------------------------------------------
BDF: 0000:2d:00.0
Channels Allocated: 4
QDepth 508
Number of pages: 8
Completion mode: WB
Payload Size per descriptor: 8192 Bytes
#Descriptors per channel: 32768
SOF on descriptor: 1
EOF on descriptor: 1
File Size: 8192 Bytes
Tx Batch Size: 127 Descriptors
Rx Batch Size: 127 Descriptors
DCA: OFF
-----------------------------------------------------------------------
Thread initialization in progress ...
Thread initialization done
All Threads exited
-------------------------------OUTPUT SUMMARY--------------------------------------------
Dir   #queues   Time_elpsd   B_trnsfrd      TBW        MPPS       Passed   Failed   %passed
Tx    4         00:00:429    1048576.00KB   02.30GBPS  00.30MPPS  4        0        100.00%
To enable data validation using the -v option, set the software flags in user/common/mk/common.mk as follows:
• cflags += -UPERFQ_PERF
• cflags += -DPERFQ_LOAD_DATA
Note: This hardware test was run with Agilex 7 P-Tile PCIe Gen4x16 configuration.
[Block diagram: the Host connects to the PCIe HIP over hip_serial; rx_pio_master connects to the MEM_PIO on-chip memory; the h2ddm_master and d2hdm_master ports connect to the dual-port on-chip memory MEM.]
This design example performs H2D and D2H multi channel DMA via the Avalon-MM memory-mapped interface. The Multi Channel DMA for PCI Express IP core provides one Avalon-MM Write/Read Master port. You can allocate up to 2K DMA channels when generating this example design.
This example design contains on-chip memories to support PIO and H2D/D2H DMA
operations.
For the H2D (Tx) DMA, the host populates the descriptor rings, allocates Tx packet
buffers in the host memory, and fills the Tx buffers with a predefined pattern. When
the application updates the Queue Tail Pointer register (Q_TAIL_POINTER), the
MCDMA IP starts the H2D DMA and writes received data to the on-chip memory.
For the D2H (Rx) DMA, the host initializes the FPGA on-chip memory with a predefined
pattern. The MCDMA IP reads the packet data from the on-chip memory and transmits
it to the host memory.
For bidirectional DMA, H2D is started before D2H and then both DMAs operate
simultaneously.
In addition, the design example enables an Avalon-MM PIO master that bypasses the DMA path. It allows your application to perform single, non-bursting register read/write operations with an on-chip memory block.
The design example includes the Multi Channel DMA for PCI Express IP Core with the parameters you specified and the following components:
• resetIP – Reset Release IP that holds the Multi Channel DMA in reset until the
entire FPGA fabric enters user mode
• MEM_PIO – On-chip memory for the PIO operation. Connected to the MCDMA
Avalon-MM PIO Master (rx_pio_master) port that is mapped to PCIe BAR2
• MEM – Dual port on-chip memory. One port is connected to the Avalon-MM Write
Master (h2ddm_master) and the other port to Avalon-MM Read Master
(d2hdm_master)
For a description of which driver(s) to use with this design example, refer to Driver
Support on page 57.
Note: User FLR Interface is not available in AVMM DMA design example.
Figure 17. H2D Avalon-MM Write Agilex 7 F-Series P-Tile PCIe Gen4 x16
The following hardware test was run with Agilex 7 F-Series P-Tile PCIe Gen4 x16 configuration using Custom
Driver.
To enable data validation using the -v option, set the software flags in user/common/mk/common.mk as follows:
• cflags += -UPERFQ_PERF
• cflags += -DPERFQ_LOAD_DATA
Note: Hardware test with P-Tile Gen4 x16 may be added in a future release.
Figure 20. D2H Avalon-MM Read Agilex 7 F-Series P-Tile PCIe Gen4 x16
This design example instantiates the Traffic Generator and Checker for MCDMA module (BAS_TGC), which creates read and write transactions to exercise the Bursting Avalon-MM Slave module of the Multi Channel DMA for PCI Express IP configured in BAM+BAS mode. You can program the BAS_TGC by writing to its control registers through its Control and Status Avalon-MM slave interface. The traffic that it generates, and the traffic that the checker expects, is a sequence of incrementing dwords.
For traffic generation, the host software allocates a block of memory in the PCIe space and then programs one of the windows in the Address Mapper to point to the allocated memory block. It also sets the start address (the base address of the selected space for write transactions), sets the write count to cover the size of the allocated memory block, and sets the transfer size before kicking off traffic generation. The number of completed transfers can be checked by reading the write count register.
For traffic checking, the host software sets the read address to point to the start address of the write transactions, sets the read count to cover the size of the allocated memory block, and sets the transfer size before kicking off the traffic checker. The number of completed data checks and the number of errors that occurred can be checked by reading the read count and read error count registers, respectively.
Note: MSI Interface is supported for Traffic Generator and Checker Example Design in
H/P/F/R-Tile MCDMA IP.
The count registers report the number of transfers that have occurred since they were last read.
The figure below is the high-level block diagram of the example design.
[Block diagram: the Host connects to the DUT (MCDMA P-Tile/F-Tile) PCIe HIP; the HIP Interface/Scheduler/Arbiter routes the H2D AVMM Write Data Mover master and the D2H AVMM Read Data Mover master to DMA_MEM; the BAM connects through a BAM Interpreter; the external descriptor controller connects over the d2hdm_desc, d2hdm_desc_status, h2ddm_desc, h2ddm_desc_status, and h2ddm_desc_cmpl interfaces; a Reset IP is included.]
The figure below is the internal representation of the external descriptor controller block.
Global/Queue CSR: Implements the global CSR and Queue CSR registers required for controlling the DMA operation. Read/Write access to these registers happens through the BAM interface of the MCDMA IP. For details about the registers, refer to the Registers on page 40 section.
Descriptor Status Processing: This block processes the status information received on the h2ddm_desc_status and d2hdm_desc_status interfaces and generates appropriate writeback commands to the MCDMA IP on the d2hdm_desc interface.
The differences between the main MCDMA IP and the example design are shown in the table below.

Feature | MCDMA IP | Example Design
SR-IOV | Yes | No
MSI-X | Yes | No (MSI-X may be supported in a future release)
MSI | No | No
Note: The descriptor format is the same for D2H and H2D, but the descriptors are kept in separate rings.
Field | Bits | Description
PYLD_CNT[147:128] | 20 | DMA payload size in bytes. Max 1 MB, with 20’h0 indicating 1 MB.
RSRV[159:148] | 12 | Reserved
DESC_IDX[175:160] | 16 | Unique identifier for each descriptor; the same number is applied to Q_COMPLETED_POINTER. Same as the descriptor count, which is used as the tail pointer. For example, the descriptor count runs from 1 to 128 for 128 descriptors in a 4K page.
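The PYLD_CNT encoding (a 20-bit field where 20'h0 means the full 1 MB) can be captured by a small helper. This is an illustration of the encoding rule stated above, not code from the design.

```python
MAX_PAYLOAD = 1 << 20  # 1 MB: the maximum PYLD_CNT payload size

def encode_pyld_cnt(size_bytes: int) -> int:
    # 20'h0 encodes the full 1 MB; any other size is encoded directly.
    if not 0 < size_bytes <= MAX_PAYLOAD:
        raise ValueError("payload must be 1 byte to 1 MB")
    return 0 if size_bytes == MAX_PAYLOAD else size_bytes

def decode_pyld_cnt(field: int) -> int:
    return MAX_PAYLOAD if field == 0 else field
```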
2.8.1. Registers
This space contains control and status registers for the external descriptor controller. The host maintains a separate descriptor ring for each of the D2H and H2D queues. The external descriptor controller example design supports 16 channels. The descriptor context for each channel can be updated in the following CSR registers.
[0:0]   Soft_reset   R/W   0   Soft-reset register to reset the QCSR block. Software needs to write 1 to assert the reset and then write 0 to release it.
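The write-1-then-write-0 soft-reset handshake can be sketched with the devmem utility described later in this guide. The BDF, BAR, and register offset below are placeholders, and the helper takes the devmem command as a parameter so it can be pointed at your build:

```shell
# Sketch of the QCSR soft-reset handshake (assumption: devmem takes
# <bdf> <bar> <offset> <value>, as in the examples later in this guide).
# BDF, BAR, and OFFSET are placeholders; substitute values for your system
# and the QCSR register map.
soft_reset() {
  local devmem=$1 bdf=$2 bar=$3 offset=$4
  "$devmem" "$bdf" "$bar" "$offset" 0x1   # write 1: assert soft reset
  "$devmem" "$bdf" "$bar" "$offset" 0x0   # write 0: release soft reset
}

# Example invocation (placeholder values, run on the target system):
# soft_reset ./devmem 0000:01:00.0 0 0x0
```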
The generated design example reflects the parameters that you specify. The design
example automatically creates the files necessary to simulate and compile in the
Quartus Prime software. You can download the compiled design to your FPGA
Development Board. To download to custom hardware, update the Quartus Prime
Settings File (.qsf) with the correct pin assignments.
Figure: Development steps for the design example — Design Example Generation → Compilation (Simulator) / Functional Simulation → Compilation (Quartus Prime) → Hardware Testing.
pcie_ed
  sim/pcie_ed.v                  Design example top-level HDL
  sim/<simulators>               <simulation scripts> simulation directory
  synth/pcie_ed.v                Design example top-level HDL
pcie_ed_tb
  pcie_ed_tb/sim/pcie_ed_tb.v    Testbench including Intel FPGA BFM
continued...
© Altera Corporation. Altera, the Altera logo, the ‘a’ logo, and other Altera marks are trademarks of Altera Corporation. Altera and Intel warrant performance of its FPGA and semiconductor products to current specifications in accordance with Altera’s or Intel's standard warranty as applicable, but reserves the right to make changes to any products and services at any time without notice. Altera and Intel assume no responsibility or liability arising out of the application or use of any information, product, or service described herein except as expressly agreed to in writing by Altera or Intel. Altera and Intel customers are advised to obtain the latest version of device specifications before relying on any published information and before placing orders for products or services. ISO 9001:2015 Registered.
*Other names and brands may be claimed as the property of others.
3. Design Example Quick Start Guide
pcie_ed_tb
  pcie_ed_tb/sim/<simulators>    <simulation script> simulation directory
  pcie_ed_tb.qsys                Testbench Platform Designer file
pcie_ed.ipx
pX_software, where X = 0, 1, 2, 3 (IP core numbers)
  dpdk
    dpdk patches (v20.05-rc1, v21.11.2)
      drivers/net
      examples/mcdma-test
    Licenses: license_bsd.txt, version.txt
  kernel/kmod                    Kernel drivers
    common
    mcdma-custom-driver
    mcdma-netdev-driver
  user                           <test application software>
    cli/perfq_app                Test application
    cli/devmem
    src
    README                       Readme file
    Licenses: license_bsd.txt
pcie_ed.qpf                      Quartus project file
pcie_ed.qsf                      Quartus setting file
pcie_ed.qsys                     Design example Platform Designer file
Note: The software directory is created multiple times depending on the Hard IP mode selected (1x16, 2x8 or 4x4) from Quartus Prime Pro Edition version 23.3 onwards.
• p0_software folder is generated for the 1x16, 2x8 and 4x4 Hard IP modes.
• p1_software folder is generated for the 2x8 Hard IP mode.
• p2_software and p3_software folders are generated for the 4x4 Hard IP mode.
Note: You must use the corresponding software folder with each IP port.
Figure: Design generation flow — Start Parameter Editor → Specify IP Variation and Select Device → Specify Design Parameters → Select Example Design and Target Board → Initiate Design Generation.
3.2.1. Procedure
1. In the Quartus Prime Pro Edition software, create a new project (File → New Project Wizard).
2. Specify the Directory, Name, and Top-Level Entity.
3. For Project Type, accept the default value, Empty project. Click Next.
4. For Add Files, click Next.
5. For Family, Device & Board Settings, select Stratix 10 (GX/SX/MX/TX/DX)
or Agilex 7 F-Series or Agilex 7 I-Series and the Target Device for your
design.
Note: The selected device is only used if you select None in Step 10c below.
6. Click Finish.
7. In the IP Catalog locate and add the H-Tile Multichannel DMA Intel FPGA IP
(Stratix 10 GX/MX devices), P-Tile Multichannel DMA Intel FPGA IP (Stratix
10 DX and Agilex 7 devices), F-Tile Multichannel DMA Intel FPGA IP or R-Tile
Multichannel DMA Intel FPGA IP (Agilex 7 devices), which brings up the IP
Parameter Editor.
8. In the New IP Variant dialog box, specify a name for your IP. Click Create.
9. On the IP Settings tabs, specify the parameters for your IP variation.
10. On the Example Designs tab, make the following selections:
a. For Example Design Files, turn on the Simulation and Synthesis options.
If you do not need these simulation or synthesis files, leaving the
corresponding option(s) turned off significantly reduces the example design
generation time.
b. For Generated HDL Format, only Verilog is available in the current release.
c. For Target Development Kit, select the appropriate option.
Note: If you select None, the generated design example targets the device
specified. Otherwise, the design example uses the device on the
selected development board. If you intend to test the design in
hardware, make the appropriate pin assignments in the .qsf file.
Note: Appropriate pin assignments must be added in the .qsf file before compilation for P/F/R Tiles when the Enable CVP (Intel VSEC) option is checked and Target Development Kit is set to None. Otherwise, example design compilation in Quartus Prime throws an error in the Fitter stage.
d. For Currently Selected Example Design, select a design example from the pulldown menu. The available design examples depend on the User Mode and Interface type settings in MCDMA Settings under the IP Settings tab.
Available design examples for the MCDMA, BAM+MCDMA, or BAM+BAS+MCDMA modes with the Avalon-ST interface type:
• PIO using MCDMA Bypass Mode
• Packet Generate/Check
• Device-side Packet Loopback
Available design examples for the MCDMA, BAM+MCDMA, or BAM+BAS+MCDMA modes with the Avalon-MM interface type:
• PIO using MCDMA Bypass Mode
• AVMM DMA
Available design example for the BAM-only User mode:
• PIO using MCDMA Bypass Mode
Available design examples for the BAM+BAS User mode:
• PIO using MCDMA Bypass Mode
• Traffic Generator / Checker
Available design examples for the Data Mover Only User mode:
• PIO using MCDMA Bypass Mode
• External descriptor controller
11. Select Generate Example Design to create a design example that you can
simulate and download to hardware. If you target one of the Intel FPGA
development kits, the device on that board supersedes the device previously
selected in the Quartus Prime Pro Edition project if the devices are different. When the prompt asks you to specify the directory for your example design, you can choose to accept the default directory ./intel_pcie_mcdma_0_example_design or choose another directory.
12. Click Close on Generate Example Design Completed message.
13. Close the IP Parameter Editor. Click File → Exit. When prompted with Save
changes?, you do not need to save the .ip. Click Don’t Save.
The design example, pcie_ed_inst, is generated with the link width you select in the
IP Parameter Editor. The Intel FPGA BFM, DUT_pcie_tb, is a Root Port BFM.
Note: The H-Tile Root Port BFM only supports up to Gen3 x8 width and downtrains a x16 Endpoint to Gen3 x8. If you want to simulate x16 link width with an MCDMA H-Tile Endpoint, you can use a third-party Root Complex BFM.
The testbench uses a Root Port driver module to initiate the configuration and exercise
the target memory and DMA channel in the Endpoint. This is the module that you can
modify to vary the transactions sent to the example Endpoint design or your own
design.
For more information about Intel FPGA BFM, refer to the Testbench overview in the
user guides provided below:
• P-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
• F-Tile Avalon Streaming Intel FPGA IP for PCI Express User Guide
• R-Tile Avalon Stream Intel FPGA IP for PCI Express Design Example User Guide
• L- and H-tile Avalon Streaming and Single Root I/O Virtualization (SR-IOV) Intel
FPGA IP for PCI Express User Guide
Note: Root Port mode MCDMA IP simulation is supported by VCS simulator only.
Note: For 2x8 Hard IP modes, example design simulation is supported on PCIe0 only.
Note: MCDMA R-Tile PIO using Bypass Mode design example simulation is supported for x16
and x8 topologies. The remaining R-Tile design example simulations are not
supported. This feature may be supported in a future release of the Quartus Prime
software.
Note: For x4 (1x4 or 2x4 or 4x4) Hard IP modes, example design simulation is not
supported.
Note: SR-IOV simulation support is provided only for 1 physical function and its VFs.
Note: MCDMA F-Tile 1x4 design example does not support simulation.
Tile   Design Example              User Mode             VCS   VCS MX   Xcelium   QuestaSim   Questa Intel FPGA Edition   Aldec Riviera Pro
       Device-side Packet          BAM + MCDMA           No    No       No        No          No                          No
       Loopback                    BAM + BAS + MCDMA
                                   Multi channel DMA
       Packet Generate/Check       BAM + MCDMA           No    No       No        No          No                          No
                                   BAM + BAS + MCDMA
                                   Multi channel DMA
       Traffic Generator/Checker   BAM + BAS             No    No       No        No          No                          No
Note: MCDMA R-Tile 4x4 PIO using Bypass Mode design example does not support
simulation.
Note: Data Mover Only Mode is not available in R-Tile MCDMA IP x4 topology.
Note: The R-Tile design example does not support PIPE mode simulations.
3.3.3. Example Testbench Flow for DMA Test with Avalon-ST Packet
Generate/Check Design Example
The DMA testbench for the Avalon-ST Packet Generate/Check design example
demonstrates the following two major tasks:
• Host-to-Device: Transferring packets stored in the host memory to the Packet
Checker in the design example user logic, where a checker module verifies the
integrity of the packet
• Device-to-Host: Packets generated from a Generator module are transferred to the
host memory where the host checks the packet integrity
Note: This testbench implements transfer of one packet with length of 4096 bytes.
The DMA testbench for the design example completes the following tasks:
1. Set up 4096 bytes of incrementing data pattern for testing data movement from
the host to the device and then back to the host.
2. Write the expected packet length value (4096 bytes) to the Packet Generation and
Checker in the design example user logic through the PIO. This value is used by
the Packet checker module for testing packet integrity.
3. MSI-X is enabled and configured for launching a memory write to signal the end of
each descriptor’s DMA transaction. Write-Back function is kept disabled for the
simulation.
4. Set up the H2D (Host-to-Device) queue in the Multi Channel DMA.
5. Set up three H2D descriptors in the host memory, with the source address
pointing to the incrementing data pattern locations in the host memory. The start
of packet (SOF) and end of packet (EOF) markers along with packet length are
indicated in the descriptors.
6. At the last step of the Queue programming, the Multi Channel DMA tail pointer
register is written, which triggers the Multi Channel DMA to start the H2D DMA
transaction.
7. The previous step instructs the H2D Data Mover to fetch the descriptors from the
host memory.
8. The Multi Channel DMA H2D Data Mover reads the data from the host memory and
forwards the packet to the Packet Generator and Checker through the AVST
Streaming interface.
9. The checker module receives the packet and checks its integrity by testing the data pattern, the expected length, and proper receipt of the “end of packet” marker. If the packet is proper, the good packet count is incremented by 1; otherwise, the bad packet count is incremented.
10. The testbench does a PIO read access of the Good Packet Count and Bad Packet
Count registers and displays the test success or failure status.
11. MSI-X write commands are triggered for every descriptor completion and are checked by the testbench for proper receipt.
12. Next, set up the D2H (Device-to-Host) Queue.
13. Set up three D2H descriptors in the host memory, with the destination address pointing to a new address space in host memory which is pre-filled with all zeroes.
14. At the last step of the Queue programming, the Multi Channel DMA tail pointer
register is written, which triggers the Multi Channel DMA to start the D2H DMA
transaction.
15. The previous step instructs the H2D Data Mover to fetch the descriptors from the
host memory to start the D2H DMA transaction.
16. The Multi Channel DMA D2H Data Mover reads the incoming packet from the
Packet Generator and writes the data to the host memory according to the
descriptors fetched in the previous step.
17. MSI-X write commands are triggered for every descriptor completion and are checked by the testbench for proper receipt.
18. Compare the data written back to the system memory in the D2H task with the standard incrementing pattern and declare test success/failure.
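The three-descriptor setup in steps 5 and 13 can be sketched as follows. This is an illustrative helper showing how a 4096-byte packet might be split across descriptors with SOF/EOF marking, not code taken from the testbench:

```shell
# Illustrative sketch: split one packet across N descriptors, marking SOF on
# the first descriptor and EOF on the last. Any remainder bytes land in the
# final descriptor so the lengths sum to the packet size.
split_packet() {
  local total=$1 n=$2
  local chunk=$(( total / n )) rem=$(( total % n )) i len sof eof
  for i in $(seq 1 "$n"); do
    len=$chunk; sof=0; eof=0
    if [ "$i" -eq 1 ]; then sof=1; fi
    if [ "$i" -eq "$n" ]; then eof=1; len=$(( chunk + rem )); fi
    echo "desc$i len=$len sof=$sof eof=$eof"
  done
}

split_packet 4096 3
# prints:
#   desc1 len=1365 sof=1 eof=0
#   desc2 len=1365 sof=0 eof=0
#   desc3 len=1366 sof=0 eof=1
```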
3.3.5.1. Steps to Run the Simulation: QuestaSim / Questa Intel FPGA Edition
Simulation Directory
Instructions
1. Invoke vsim (by typing vsim, which brings up a console window where you can
run the following commands).
2. set USER_DEFINED_ELAB_OPTIONS "-voptargs=\"-noprotectopt\""
3. do msim_setup.tcl
Note: Alternatively, instead of doing Steps 1 and 2, you can type: vsim -c -do
msim_setup.tcl
4. ld_debug
5. run -all
Simulation Directory
<example_design>/pcie_ed_tb/pcie_ed_tb/sim/synopsys/vcsmx
Instructions
Note: If R-Tile MCDMA is configured in PIPE Mode, use the following command instead:
sh vcs_setup.sh USER_DEFINED_COMPILE_OPTIONS="" \
USER_DEFINED_ELAB_OPTIONS="+define+XTOR_PCIECXL_LM_SVS_SERDES_ARCHITECTURE +define+RTILE_PIPE_MODE" \
USER_DEFINED_SIM_OPTIONS="" | tee simulation.log
Simulation Directory
<example_design>/pcie_ed_tb/pcie_ed_tb/sim/xcelium
Instructions
Note: Set the PCIe refclk switch on the board to select the common refclk.
Note: This section describes how to program the FPGA using the Stratix 10 Development Board. If you are using one of the boards listed in the previous section, substitute the name of that development board throughout this section.
The following host configuration is used to test the functionality of the design example:
• Kernel 3.10
• Kernel v5.15: support added in the Custom, DPDK and NetDev drivers to run on Ubuntu 22.04 with kernel 5.15.0-xx-generic
Check the kernel version:
$ uname -r
Expected output:
5.15.0-xx-generic
If this is not the kernel version on your Ubuntu 22.04 system, follow the steps below. These steps change the kernel from HWE to GA Linux 5.15.0-xx-generic and install the Linux headers and gcc required for the MCDMA drivers.
// Install GA.
// Remove HWE.
// Clean up.
$ gcc --version
To switch between the installed gcc versions, use the update-alternatives tool (sudo update-alternatives --config gcc) and select gcc-11.
Note: Ensure that the proxy is set. Otherwise, some of these updates do not work.
The table below summarizes the driver support for the MCDMA design example
variants. It uses the following acronyms:
• User Space I/O (UIO): A kernel base module that the PCIe device uses to expose
its resources to user space.
• Virtual Function I/O (VFIO) driver: An IOMMU/device agnostic framework for
exposing direct device access to user space in a secure, IOMMU-protected
environment.
• Data Plane Development Kit (DPDK): Consists of libraries to accelerate packet
processing workloads running on a wide variety of CPU architectures.
Note: The software directory is created multiple times depending on the Hard IP mode selected (1x16, 2x8 or 4x4) from Quartus Prime Pro Edition version 23.3 onwards.
• p0_software folder is generated for the 1x16 Hard IP mode.
• p1_software folder is generated for the 2x8 Hard IP mode.
• p2_software and p3_software folders are generated only for the 4x4 Hard IP mode.
Note: You must use the corresponding software folder with each IP port.
Description:
• Custom Driver: Also known as the user mode driver, this driver is created to support both the UIO and VFIO base kernel modules. It provides custom APIs and can be used without depending on any framework.
• DPDK Poll Mode Driver (PMD): Uses the DPDK framework. The PMD exposes the device as an Ethernet device and supports both the UIO and VFIO base kernel modules. Existing DPDK applications can be integrated with the MCDMA PMD.
• Kernel Mode Netdev Driver: Exposes the MCDMA IP as a network device and enables standard applications to perform network data transfers using the Linux network stack.
Multi channel DMA Avalon 1-port Device-side Packet Loopback Design Example — Yes | Yes, 256 channels | Yes, supports 4 PFs, 64 channels per PF
3.5.2.3.1. Prerequisites
Make sure the IOMMU is enabled on the host by using the following command:
External Packages
To run the DPDK software driver, you must install the numactl-devel package:
• CentOS:
yum install numactl-devel
• Ubuntu:
apt-get install numactl
• CentOS:
yum install qemu-kvm qemu-img libvirt virt-install libvirt-client
• Ubuntu:
apt-get install qemu qemu-kvm virt-manager
apt-get install python-is-python3 libjpeg62
apt-get install make ninja-build pkg-config libglib2.0-dev libpixman-1-dev
Note: Install libpng using the following steps and create a softlink.
Download from here: https://sourceforge.net/projects/libpng/files/libpng15/1.5.30/libpng-1.5.30.tar.gz/download
Copy the downloaded file to ~/Downloads.
$ cd ~/Downloads/
$ tar -xvf libpng-1.5.30.tar.gz
$ cd libpng-1.5.30/
$ ./configure --prefix=/usr/local
$ make check
$ sudo make install
$ sudo ln -s /usr/local/lib/libpng15.so /usr/lib/
2. Download the QEMU software from the official site.
Note: For testing over VM, you need to generate the necessary qcow2 file.
Follow the steps below to modify the default hugepages setting in the grub files:
1. Edit the /etc/default/grub file and append the following parameters to the GRUB_CMDLINE_LINUX line:
CentOS: GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb default_hugepagesz=1G hugepagesz=1G hugepages=40 panic=1 intel_iommu=on iommu=pt"
Ubuntu: GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=20 intel_iommu=on iommu=pt panic=1 quiet splash vt.handoff=7"
The file contents after the edit for CentOS are shown below:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb default_hugepagesz=1G hugepagesz=1G hugepages=40 panic=1 intel_iommu=on iommu=pt"
GRUB_DISABLE_RECOVERY="true"
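After rebooting with the new GRUB command line, you can confirm that the hugepage reservation took effect by inspecting /proc/meminfo. A small helper, written so the parsing can be exercised on any meminfo-style input:

```shell
# Extract the HugePages_Total count from a meminfo-style file. Pass
# /proc/meminfo on the target host; expect 40 (CentOS) or 20 (Ubuntu)
# per the GRUB settings above.
hugepages_total() {
  awk '/^HugePages_Total:/ {print $2}' "$1"
}

# On the target host:
# hugepages_total /proc/meminfo
```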
In the case of memory allocation failure at the time of Virtual Function creation,
add the following boot parameters:
"pci=hpbussize=10,hpmemsize=2M,nocrs,realloc=on"
To bind the device to vfio-pci and use IOMMU, enable the following parameter:
intel_iommu=on
To use UIO and not enable the IOMMU lookup, add the following parameter:
iommu=pt
To use the AMD platform and the UIO driver, add the following parameter at boot
time: iommu=soft
An example /etc/default/grub file on ubuntu after the edits can be seen
below:
root@bapvecise042:~# cat /etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
GRUB_DEFAULT="1>2"
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
#GRUB_TERMINAL=console
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480
#GRUB_DISABLE_LINUX_UUID=true
#GRUB_DISABLE_RECOVERY="true"
grub-mkconfig -o /boot/efi/EFI/ubuntu/grub.cfg
or
grub2-mkconfig -o /boot/efi/EFI/ubuntu/grub.cfg
or
sudo update-grub
In this example, we have two NUMA nodes. If only one NUMA node is present, ignore this step:
b. Check which device is provisioned:
$ cat /sys/class/pci_bus/<Domain:Bus>/device/numa_node
__cflags += -UUIO_SUPPORT
__cflags += -UIFC_PIO_64
__cflags += -DIFC_PIO_32
__cflags += -DIFC_32BIT_SUPPORT
For a 64-bit OS, if the 32-bit PIO is to be enabled instead of the 64-bit PIO, modify
"software/user/common/mk/common.mk" as follows:
__cflags += -UIFC_PIO_64
__cflags += -DIFC_PIO_32
1. Install vfio-pci module.
$ modprobe vfio-pci
2. Bind the device to vfio-pci
Note: For VFIO, in multi-PF scenarios, you must check whether the BDFs are in the same IOMMU group using the command: readlink /sys/bus/pci/devices/<BDF>/iommu_group
Note: If the BDFs are in the same IOMMU group, you need to apply the ACS patch; otherwise, it is not required.
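The IOMMU-group check above can be wrapped in a small helper. The sysfs layout is the standard Linux one, and the demonstration below uses a mock directory tree so the logic can be exercised without the hardware:

```shell
# Print the IOMMU group number of a PCI device.
# $1 = sysfs root (normally /sys), $2 = BDF.
# Relies on the standard iommu_group symlink under the device directory.
iommu_group_of() {
  basename "$(readlink "$1/bus/pci/devices/$2/iommu_group")"
}

# Demonstration against a mock sysfs tree (no hardware needed); on a real
# host you would call: iommu_group_of /sys 0000:01:00.0
root=$(mktemp -d)
mkdir -p "$root/bus/pci/devices/0000:01:00.0" "$root/kernel/iommu_groups/12"
ln -s "$root/kernel/iommu_groups/12" \
      "$root/bus/pci/devices/0000:01:00.0/iommu_group"
iommu_group_of "$root" 0000:01:00.0   # prints 12
```

Two BDFs whose helper outputs match are in the same IOMMU group and need the ACS patch as described in the note above.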
Follow the steps below to create the guest environment and assign a VF device to the VM using QEMU:
1. Enable virtual functions based on your requirements:
$ echo 2 > /sys/bus/pci/devices/<bdf>/max_vfs
Host configuration:
  Linux Kernel: CentOS: 3.10.0-1127.10.1.el7.28.x86_64; Ubuntu: 5.15.0-52-generic x86_64
  CPU Cores: 96
  Hypervisor: KVM
Guest VM configuration:
  CPU Cores: 2
  RAM: 8 GB
2. This command displays the available options in the application.
Here, the -b option should be provided with the correct BDF in the system.
4. Perform an IP reset. You can perform this step before every run.
Build devmem utility:
$ cd software/user/cli/devmem
$ make clean all
$ ./devmem 0000:01:00.0 0 0x00200120 0x1
__cflags += -DPERFQ_LOAD_DATA
__cflags += -UPERFQ_PERF
__cflags += -DCID_PAT
__cflags += -DIFC_PROG_DATA_EN
Configuration:
a. bdf (-b 0000:01:00.0)
b. 2 channels (-c 2)
c. Loopback (-i)
d. Payload length of 32,768 bytes in each descriptor (-p 32768)
e. Time Limit (-l 5)
f. Debug log enabled (-d 2)
g. One thread per queue (-a 4)
Note: This hardware test was run with the Stratix 10 GX H-tile PCIe Gen3 x16 configuration.
Note: This hardware test was run with the Agilex 7 P-Tile PCIe Gen4 x16 configuration.
Figure 31. Custom AVST DMA Gen4 x16 : P-Tile Hardware Test Result
Command:
$ ./perfq_app -b 0000:01:00.0 -p 32768 -l 5 -u -c 2 -d 2 -a 4
Configuration:
a. bdf (-b 0000:01:00.0)
b. 2 channels (-c 2)
c. bi-directional DMA transfer (-u)
d. Payload length of 32768 bytes in each descriptor (-p 32768)
e. Time Limit set to 5 (-l 5)
f. Debug log enabled (-d 2)
g. One thread per queue (-a 4)
Note: To test the data validity, you need to perform H2D then D2H operations.
Figure 32. Custom AVMM DMA Gen4 x16 : P-Tile Hardware Test Result
To enable data validation using the -v option, set the software flags in user/common/mk/common.mk as follows:
__cflags += -UPERFQ_PERF
__cflags += -DPERFQ_LOAD_DATA
Meta data is the 8-byte private data that the host sends to the device in the H2D direction and the device sends to the host in the D2H direction.
1. If meta data is enabled in the bitstream, modify IFC_QDMA_META_DATA as below in common/mk/common.mk:
__cflags += -DIFC_QDMA_META_DATA
__cflags += -DPERFQ_LOAD_DATA
2. Continue with user library build and installation steps.
3. Command:
./perfq_app -p 128 -s 128 -d 1 -i -a 2 -b 0000:01:00.0 -v -c 1
Note: To verify the meta data functionality, every descriptor should have eof set to 1. Use the commands in such a way that every descriptor contains a single file. Use perfq_app with the -s parameter.
You can read and write the PIO address range in BAR 2 from any valid custom memory.
-b <bdf>
-o
--pio_w_addr=<address>
--pio_w_val=<value to write>
--bar=<bar number>
Example (from software/user/cli/perfq_app):
# ./perfq_app -b 0000:01:00.0 -o --pio_w_addr=0x1010 --pio_w_val=0x30 --bar=2
-b <bdf>
-o
--pio_r_addr=<address>
--bar=<bar number>
Example (from software/user/cli/perfq_app):
# ./perfq_app -b 0000:01:00.0 -o --pio_r_addr=0x1010 --bar=2
BAM Test
If the BAM support is enabled on hardware, enable the following flag in common/mk/
common.mk:
For PIO using Bypass with the BAM/BAS user mode, you are required to change the define parameter to undef: #undef IFC_QDMA_INTF_ST (in software/user/common/include/mcdma_ip_params.h)
Command:
[root@bapvemb005t perfq_app]# ./perfq_app -b 0000:01:00.0 --bam_perf -o
PIO 64 Write and Read Perf Test ...
Total Bandwidth: 0.14GBPS
Pass
[root@bapvemb005t perfq_app]#
• By default, the BAM/BAS BAR is 2. If the DMA hardware supports both BAM and BAS and the BAR numbers are different, pass the BAR number parameter as below:
--bar=2 for BAM
--bar=0 for BAS
• For example:
./perfq_app -b 0000:01:00.0 --bam_perf -o --bar=2
• The performance mode of BAM sends data for 10 seconds and calculates the bandwidth.
• The PIO 256b test may display a failure because only 2K of memory is enabled in the device and the PIO test tries to access memory beyond 2K.
BAS Test
Note: For the Traffic Generator/Checker example design, you must disable the MSI-X parameter, IFC_QDMA_MSIX_ENABLE, in the Custom Driver's software/kernel/common/include/mcdma_ip_params.h if MSI-X is not enabled in the IP Parameter Editor GUI. By default, the Custom Driver software parameter is enabled while MSI-X is disabled in the IP. This mismatch prevents the ifc_uio kernel module from being loaded.
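One way to flip the define when MSI-X is disabled in the IP is a one-line sed edit of the header. This is a hypothetical convenience, and the path is the one quoted in the note above, so verify it against your generated tree before use:

```shell
# Hypothetical helper: turn the '#define IFC_QDMA_MSIX_ENABLE' line into an
# #undef in the given header so the Custom Driver matches an IP built
# without MSI-X. Uses GNU sed (-i in-place edit).
disable_msix_define() {
  sed -i 's/^#define[[:space:]]\+IFC_QDMA_MSIX_ENABLE.*/#undef IFC_QDMA_MSIX_ENABLE/' "$1"
}

# Usage (path quoted in the note above; verify it in your tree):
# disable_msix_define software/kernel/common/include/mcdma_ip_params.h
```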
The MCDMA BAS programming sequence consists of the following steps defined in the
following sections:
If BAS support is enabled on the hardware, enable the following flag in user/common/include/ifc_libmqdma.h:
#define PCIe_SLOT 0 /* 0 – x16, 1 – x8, 2 - x4 */
Commands:
Performance test:
Note: You may not be able to proceed with the -z option. Add the flag #define IFC_QDMA_INTF_ST in user/common/include/mcdma_ip_params.h as a workaround.
Note: In the case of VFIO, to run BAM+BAS+MCDMA, you need to create at least 3 VFs and run on each VF respectively. If you try to use one VF to run BAM+BAS+MCDMA simultaneously with VFIO, it prompts with a resource busy error.
2. Continue the procedure in the steps in Software Setup on page 64 for building and
installing MCDMA.
3. Run the perfq_app application command:
$ ./perfq_app -b 0000:01:00.0 -p 32768 -d 2 -c 1\
-a 2 -l 5 -z -n –pf=<pf number> --vf=<vfnumber>
Note: If you run DMA on a PF only, then "--vf" might not be required. However, if you run DMA on a VF, then both "--pf" and "--vf" might be needed, as you need to know from which PF the VF was spawned.
Note: The hardware test was run with Gen3 x16. Test with Gen4 x16 may be
added in a future release.
Note: • PF and VF numbers start with 1 in command line parameters.
• Run the performance test on PF0 with the -n parameter.
3.5.2.3.4. Testing Bitstream Configuration beyond 256 Channels (for MCDMA Custom
Driver)
Note: If the *.sof file was generated with more than 256 channels, follow
this procedure.
__cflags += -DIFC_MCDMA_DIDF
__cflags += -UIFC_MCDMA_SINGLE_FUNC
Note: In the case of AVMM, the BDF is provided as an argument and you must define
IFC_MCDMA_SINGLE_FUNC:
__cflags += -DIFC_MCDMA_SINGLE_FUNC
2. Go to this location:
cd user/cli/perfq_app/
Note:
a. Currently, in DIDF mode, only a single page is supported.
b. Simultaneous processes are currently not supported in DIDF mode. You can
run one process with 2K channels.
cd ./software/user/cli/devmem/
5. Command:
./perfq_app -b 0000:09:00.0 -d 1 \
  -c 16 -a 8 -p 1024 -s 102400 -u
3.5.2.4.1. Prerequisites
CentOS: GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root
rd.lvm.lv=centos/swap rhgb default_hugepagesz=1G hugepagesz=1G
hugepages=40 iommu=pt panic=1"
In the case of memory allocation failure at the time of Virtual Function creation, add
the following boot parameters:
"pci=hpbussize=10,hpmemsize=2M,nocrs,realloc=on"
To bind the device to vfio-pci and use IOMMU, enable the following parameter:
intel_iommu=on
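A minimal sketch that composes the parameters discussed above into one GRUB_CMDLINE_LINUX line. The hugepage count of 40 comes from the CentOS example above; the regeneration commands in the comments are the usual ones for each distribution, so adjust paths for your system.

```shell
# Compose the kernel parameters discussed above into one GRUB_CMDLINE_LINUX line.
EXTRA="default_hugepagesz=1G hugepagesz=1G hugepages=40 iommu=pt intel_iommu=on"
echo "GRUB_CMDLINE_LINUX=\"\$GRUB_CMDLINE_LINUX $EXTRA\""
# After editing /etc/default/grub, regenerate the boot config, for example:
#   grub2-mkconfig -o /boot/grub2/grub.cfg   # CentOS
#   update-grub                              # Ubuntu
```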
To use UIO and not enable the IOMMU lookup, add the following parameter:
iommu=pt
To use the AMD platform and the UIO driver, add the following parameter at boot time:
iommu=soft
# Sample /etc/default/grub (GRUB regenerates /boot/grub/grub.cfg from this file):
GRUB_DEFAULT="1>2"
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480
# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true
#GRUB_DISABLE_RECOVERY="true"
To check whether the boot system is legacy or EFI-based, check for the existence of
the following directory:
$ls -al /sys/firmware/efi
If this directory is present, the boot system is EFI-based. Otherwise, it is a legacy system.
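The same check can be scripted; this minimal sketch only inspects /sys/firmware/efi as described above:

```shell
# Print whether the running system booted via EFI or legacy BIOS.
if [ -d /sys/firmware/efi ]; then
    echo "EFI-based boot"
else
    echo "legacy boot"
fi
```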
Note: test-pmd is supported only for CentOS and not for Ubuntu.
If testing the MCDMA by using test-pmd, use the following steps; otherwise, if using
perfq_app, skip to Install PMD and Test Application (for CentOS) on page 86:
1. Download dpdk and apply the build patches.
Execute the following commands as the root user.
$ cd software/dpdk/dpdk/patches/v20.05-rc1
$ sh apply-patch.sh
2. Enable IGB_UIO module in build configuration.
Update the following macro in ./config/common_base to "y". By default, igb_uio is
disabled.
CONFIG_RTE_EAL_IGB_UIO=y
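One way to make this edit non-interactively is with sed. The snippet below is a sketch that demonstrates the substitution on a scratch copy so it is self-contained; in the dpdk tree you would apply the same sed to ./config/common_base directly.

```shell
# Flip the igb_uio build switch (demonstrated on a scratch file; in the dpdk
# tree, run the sed against ./config/common_base instead).
cfg=$(mktemp)
printf 'CONFIG_RTE_EAL_IGB_UIO=n\n' > "$cfg"
sed -i 's/^CONFIG_RTE_EAL_IGB_UIO=.*/CONFIG_RTE_EAL_IGB_UIO=y/' "$cfg"
cat "$cfg"
```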
5. To avoid Tx drop count, enable the following macro in the file
drivers/net/mcdma/rte_pmd_mcdma.h.
This is applicable for pkt-gen only; skip this step for loopback.
#define AVOID_TX_DROP_COUNT
Note: --burst=0: If set to 0, the driver default burst size (16) is used. Otherwise, the
Test PMD default burst size (32) is used. The default Testpmd pkt-len for Tx
is 64.
$./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -w 0000:01:00.0 -- \
  --tx-first --nb-cores=1 --rxq=1 --txq=1 --rxd=512 --txd=512 \
  --max-pkt-len=64 --no-flush-rx --stats-period 1 --burst=127 --txpkts=64
Parameters used:
1. BDF of device. (-w 0000:01:00.0)
2. Forwarding mode (--tx-first)
3. Number of cores (--nb-cores=1)
4. Number of RX and TX queues per port (--rxq=1 --txq=1)
5. Number of descriptors in the RX and TX rings (--rxd=512 --txd=512)
6. Max packet length (--max-pkt-len=64)
7. Display statistics every PERIOD seconds (--stats-period 1)
8. Number of packets per burst (--burst=127)
Note: This hardware test was run with the Stratix 10 GX H-tile PCIe Gen3 x16 configuration.
Note: Hardware test with P-Tile Gen4 x16 may be added in a future release.
Note: You must ensure that the meson and pyelftools utilities are installed on the system
before building DPDK v21.11.2. If they are not installed, use the commands
"sudo apt install meson" and "sudo apt install python3-pyelftools".
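Once those utilities are present, the usual meson/ninja flow for DPDK v21.11.x looks like the following sketch. It assumes you run it from the dpdk source root and guards against being run elsewhere.

```shell
# Typical meson/ninja build flow for DPDK v21.11.x (run from the dpdk source root).
if [ -f meson.build ]; then
    meson setup build        # configure the build directory
    ninja -C build           # compile
else
    echo "run this from the dpdk source root"
fi
```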
#sh ./apply-patch.sh
#cd build
6. cd examples/mcdma-test/perfq
Enable VFs
1. Refer to the Install PMD and Test Application (for CentOS) on page 86 section for
building and installing the igb_uio kernel driver.
2. Enable Virtual Functions based on your requirements:
Follow the steps needed to create a guest environment and assign VF device to VM by
using QEMU.
1. Unbind the device from UIO driver:
$./dpdk-devbind.py -u <bdf>
$modprobe vfio-pci
For example:
echo "0000:01:00.0" > /sys/bus/pci/devices/\
0000\:01\:00.0/driver/unbind
For example:
echo 1172 0000 > /sys/bus/pci/drivers/vfio-pci/new_id
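The unbind and new_id steps above can be combined into one guarded sketch. The BDF is an assumption from this guide's examples, and vendor/device IDs 1172 0000 are the values used in the new_id example above.

```shell
# Sketch: unbind an assumed BDF from its current driver and hand it to vfio-pci.
BDF=0000:01:00.0
if [ -e /sys/bus/pci/devices/$BDF/driver ]; then
    modprobe vfio-pci
    echo "$BDF" > /sys/bus/pci/devices/$BDF/driver/unbind
    echo 1172 0000 > /sys/bus/pci/drivers/vfio-pci/new_id
    echo "bound $BDF to vfio-pci"
else
    echo "device $BDF not present; skipping"
fi
```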
4. For CentOS VMs: Use QEMU version 3.0.0 rc0 on Intel machines. Creating a VM
with 8 GB RAM is advisable.
./qemu-3.0.0-rc0/x86_64-softmmu/qemu-system-x86_64 -smp 8 -m 10240M -boot c \
  -machine q35,kernel-irqchip=split -cpu host -enable-kvm -nographic \
  -L /home/dev/QEMU/qemu-3.0.0-rc0/pc-bios -name offload1 \
  -hda /home/dev/QEMU/vm1.qcow2 -device vfio-pci,host=01:00.4 \
  -netdev type=tap,id=net6551,script=no,downscript=no,vhost=on,ifname=net6551 \
  -device virtio-net-pci,netdev=net6551 \
  -device intel-iommu,intremap=on,caching-mode=on \
  -serial telnet::5551,server,nowait \
  -object memory-backend-file,id=mem,size=10240M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc -monitor telnet::3331,server,nowait &
On another terminal, connect to the VM (log in as user root, password root):
telnet localhost 5551
Bring up the interface:
$ifconfig eth0 up
Assign the IP address to eth0.
$ifconfig eth0 <1.1.1.11>
On Host
$ifconfig net6551 up
$ifconfig net6551 <1.1.1.12>
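After both addresses are assigned, reachability can be sanity-checked from the host. The interface name and address below come from the example above; adjust them for your setup.

```shell
# Sanity-check host<->VM reachability after the addresses above are assigned.
HOST_IF=net6551
if ip link show "$HOST_IF" >/dev/null 2>&1; then
    ping -c 3 -W 2 1.1.1.11
else
    echo "interface $HOST_IF not present"
fi
```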
Get QEMU version 7.0.0 and compile it using the following steps:
wget https://download.qemu.org/qemu-7.0.0.tar.xz
tar xvJf qemu-7.0.0.tar.xz
cd qemu-7.0.0
./configure
make -j$(nproc)
cd ..
mv ubuntu-22.04-server-cloudimg-amd64.img ubuntu-22.04.qcow2
Host System Configuration: CPU Cores: 96, Hypervisor: KVM
Guest VM Configuration: CPU Cores: 2, RAM: 8 GB
$ifconfig eth0 up
$ifconfig net6551 up
$ifconfig net6551 <1.1.1.12>
3. Do a PIO test to check that the setup is correct. If successful, the application
shows a Pass status.
[root@BAPVENC011T perfq]# ./build/mcdma-test -- -b
0000:01:00.0 -o
[Figure: Testing strategy — DPDK Testapp → MCDMA PMD → igb_uio/vfio-pci on the Host, with H2D and D2H DMA]
Configuration in examples/mcdma-test/perfq/perfq_app.h
In the case of static channel mapping, modify the following parameters:
• /* PF count starts from 1 */
#define IFC_QDMA_CUR_PF <pf number>
• /* VF count starts from 1. Zero implies PF was used instead of VF */
#define IFC_QDMA_CUR_VF <vf number>
• /* Number of PFs */
#define IFC_QDMA_PFS <number of PFs>
/* Channels available per PF */
#define IFC_QDMA_PER_PF_CHNLS <number of channels per PF>
• /* Channels available per VF */
#define IFC_QDMA_PER_VF_CHNLS <number of channels per VF>
• /* Number of VFs per PF */
#define IFC_QDMA_PER_PF_VFS <number of VFs per PF>
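For illustration only — the counts below are hypothetical, not defaults — a static mapping with 4 PFs, 8 channels per PF, 4 VFs per PF, and 2 channels per VF, running the test on PF1 directly (VF = 0), might read:

```c
#define IFC_QDMA_CUR_PF 1
#define IFC_QDMA_CUR_VF 0   /* 0: using the PF itself, not a VF */
#define IFC_QDMA_PFS 4
#define IFC_QDMA_PER_PF_CHNLS 8
#define IFC_QDMA_PER_VF_CHNLS 2
#define IFC_QDMA_PER_PF_VFS 4
```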
Configuration:
a. 1 channel (-c 1)
b. Packet generator bidirectional (-z)
c. Payload length of 32,768 bytes in each descriptor (-p 32768)
d. Transfer the data every 5 seconds (-l 5)
e. Dump the progress log every 2 seconds (-d 2)
f. Configure the number of channels in ED (-n)
g. Number of threads to be used for DMA purpose. (-a 2)
Note: This hardware test was run with the Stratix 10 GX H-tile PCIe Gen3 x16 configuration.
Figure 37. DPDK Avalon-ST Packet Generate/Check Design Example Gen4 x16 : P-Tile
Hardware Test Results
5. The DPDK driver can also be used with the AVST/AXIST Device-side Packet
Loopback design example to test loopback.
The following diagram shows the testing strategy.
[Figure: Loopback testing strategy — DPDK Testapp → MCDMA PMD → igb_uio/vfio-pci on the Host; H2D and D2H DMA with HW Loopback in the FPGA]
Note: This hardware test was run with the Stratix 10 GX H-tile PCIe Gen3 x16 configuration.
Figure 38. DPDK Avalon-ST Device-side Packet Loopback Design Example Gen4 x16 : P-
Tile Hardware Test Result
Channel ID VF PF Verification for AVST/AXIST LB
a. software/dpdk/dpdk/examples/mcdma-test/perfq/meson.build (For
Ubuntu)
b. software/dpdk/dpdk/examples/mcdma-test/perfq/Makefile (For
CentOS).
2. Define the flags: CID_PAT and IFC_PROG_DATA_EN in the configuration files
mentioned in the above step.
3. Define the flag: DCID_PFVF_PAT.
__cflags += -DCID_PFVF_PAT
Configuration:
• bdf (-b 0000:01:00.0)
• 2 channels (-c 2)
• Loopback (-i)
• Payload length of 32768 bytes in each descriptor (-p 32768)
Note: Reset the IP before starting DMA by using the following command:
./build/mcdma-test -- -b <bdf> -e
Testing Bitstream Configuration beyond 256 Channels (for DPDK Poll Mode Driver)
Note: If the *.sof file was generated with more than 256 channels, follow
this procedure.
#define IFC_MCDMA_DIDF
In the case of AVMM, the BDF is provided as an argument and you must define
-DIFC_MCDMA_SINGLE_FUNC in examples/mcdma-test/perfq/meson.build.
Note: In the current release, simultaneous processes are not supported in DIDF mode. You can
run one process with 2K channels.
BAM Test
1. If BAM support is enabled on the hardware, enable one of the following flags in
dpdk/dpdk/drivers/net/mcdma/rte_pmd_mcdma.h and undef the other size:
#define IFC_PIO_256 /* 256-bit read/write operations on the PIO BAR */
or
#define IFC_PIO_128 /* 128-bit read/write operations on the PIO BAR */
2. For example, to enable 256-bit read or write operations in dpdk/dpdk/
drivers/net/mcdma/rte_pmd_mcdma.h:
#define IFC_PIO_256
#undef IFC_PIO_128
BAS Test
For x4 BAS:
1. Set PCIe_SLOT to "2" in rte_pmd_mcdma.h (dpdk/drivers/net/mcdma/
rte_pmd_mcdma.h).
2. x4 BAS supports a burst length of 32 by default, set in perfq_app.h (dpdk/
examples/mcdma-test/perfq/perfq_app.h):
#define IFC_MCDMA_BAS_X4_BURST_LENGTH 32
If BAS support is enabled on the hardware, enable the following flag in dpdk/dpdk/
drivers/net/mcdma/rte_pmd_mcdma.h:
#define PCIe_SLOT 0 /* 0 - x16, 1 - x8, 2 - x4 */
Commands:
1. To verify the write operation:
Performance test:
The following log was collected on a Gen3 x16 H-Tile configuration:
./build/mcdma-test -- -b 0000:01:00.0 --bar=0 --bas_perf -s 16384 -z
Note: For DPDK: A VF/PF cannot run BAM+BAS+MCDMA simultaneously within one
VM or hypervisor. You need to run each VF/PF instance in an independent
VM.
$ cd software/kernel
$ make -C driver/kmod/mcdma-netdev-driver
$ insmod driver/kmod/mcdma-netdev-driver/ifc_mcdma_netdev.ko
Select the MTU value so that the sum of the MTU value and the Ethernet header length is
aligned to 64.
For example, the following command sets the MTU value to 1522. In this case, the
sum of the MTU and the Ethernet header is 1536, which is aligned to 64.
$ifconfig ifc_mcdma0 mtu 1522
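The alignment rule above can be checked with simple shell arithmetic before applying a new MTU (the 14-byte Ethernet header length is the standard untagged value):

```shell
# Verify that MTU + 14-byte Ethernet header is a multiple of 64 before applying it.
mtu=1522
hdr=14
if [ $(( (mtu + hdr) % 64 )) -eq 0 ]; then
    echo "MTU $mtu is 64-byte aligned ($((mtu + hdr)))"
else
    echo "MTU $mtu is NOT 64-byte aligned"
fi
```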
The Loopback ED can switch packets from one channel to another
channel.
cd /sys/kernel/debug/ifc_mcdma_config/
For example:
• Configure the communication from channel number 64 to channel number 0.
echo 64 > src_chnl
echo 0 > dst_chnl
echo 1 > update
The MCDMA Network Device Driver supports multiple queues. The following
mechanisms are provided to configure the transmit (H2D/Tx) queue usages:
• MCDMA Queue Selection
• Transmit Packet Steering (XPS)
• Default
When using the MCDMA Queue Selection algorithm, you can map multiple transmit
queues to a core. Any application running on that particular core uses one of these
mapped queues to transfer the data.
Multiple application instances can use multiple queues mapped to one core or different
cores enabling parallel transmission streams of packets.
Configuration
Transmit Packet Steering is a mechanism for selecting which transmit queue to use
when transmitting a packet on a multi-queue device.
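As a sketch of XPS configuration via the standard kernel sysfs interface, the snippet below maps Tx queue 0 of the MCDMA netdev to CPUs 0-3. The interface name ifc_mcdma0 follows the naming used earlier in this guide; the CPU mask is an illustrative choice.

```shell
# Sketch: steer Tx queue 0 of the MCDMA netdev to CPUs 0-3 via XPS.
IF=ifc_mcdma0
XPS=/sys/class/net/$IF/queues/tx-0/xps_cpus
if [ -w "$XPS" ]; then
    echo f > "$XPS"      # hex bitmask 0xf = CPUs 0-3
    echo "XPS set for $IF tx-0"
else
    echo "no writable XPS node for $IF; is the netdev driver loaded?"
fi
```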
Configuration
Default
Configuration
Use the following script to create name spaces and execute the commands.
1. Stop the network manager by using the following command:
For example:
ip netns add vm0
ip netns add vm1
ip netns add vm2
ip netns add vm3
Note: Packet loss is currently observed with TCP and UDP traffic over time after
starting the traffic. This can lead to hanging or closing of the iperf or
netperf connections.
$cd software/user/cli/netdev_app
$make clean all
Related Information
Build and Install Netdev Driver on page 103
© Altera Corporation. Altera, the Altera logo, the 'a' logo, and other Altera marks are trademarks of Altera
Corporation. Altera and Intel warrant performance of its FPGA and semiconductor products to current
specifications in accordance with Altera's or Intel's standard warranty as applicable, but reserves the right to
make changes to any products and services at any time without notice. Altera and Intel assume no
responsibility or liability arising out of the application or use of any information, product, or service described
herein except as expressly agreed to in writing by Altera or Intel. Altera and Intel customers are advised to
obtain the latest version of device specifications before relying on any published information and before placing
orders for products or services.
ISO 9001:2015 Registered
*Other names and brands may be claimed as the property of others.
2024.11.04 — 24.3 (H-Tile: 24.2.0, P-Tile: 8.2.0, F-Tile: 9.2.0, R-Tile: 5.2.0)
• Added a Note about PIPE mode simulations using VCS* in Steps to Run the Simulation: VCS/VCS MX.

2024.07.30 — 24.2 (H-Tile: 24.1.0, P-Tile: 8.1.0, F-Tile: 9.1.0, R-Tile: 5.1.0)
• Design Example Directory Structure: Removed the sample, testapp, and simple_app rows from the Directory Structure table.
• Replaced the config IFC_QDMA_INTF_AVST with IFC_QDMA_INTF_ST in various sections.

2024.01.19 — 23.4 (H-Tile: 23.1.0, P-Tile: 7.1.0, F-Tile: 8.0.0, R-Tile: 4.1.0)
• R-Tile MCDMA IP - Design Examples for Endpoint: Note added about Data Mover mode support for R-Tile MCDMA IP.
• Hardware Test Results: Information added about setting flags to enable data validation using -v option.
• Supported Simulators: Note added about Data Mover mode support for R-Tile MCDMA IP.
• Steps to Run the Simulation: Simulation command for F-Tile MCDMA IP added.
• BAM Test: Commands updated in all steps.

2023.10.06 — 23.3 (H-Tile: 23.0.0, P-Tile: 7.0.0, F-Tile: 7.0.0, R-Tile: 4.0.0)
• MCDMA IP Modes: R-Tile and F-Tile information updated
• P-Tile MCDMA IP - Design Examples for Endpoint: New note added about simulation support
• F-Tile MCDMA IP - Design Examples for Endpoint: New note added about simulation support
• R-Tile MCDMA IP - Design Examples for Endpoint: New notes added about simulation support
• Avalon-MM PIO using MCDMA Bypass Mode: New notes added at the end of the section
• Single-Port Avalon-ST Packet Generate/Check: Note added about Metadata support
• Avalon-ST Device-side Packet Loopback: Note added about Metadata support
• Avalon-MM DMA: Note added about User FLR interface support
• BAM_BAS Traffic Generator and Checker: Note added about MSI interface support
• Design Example Directory Structure: Note added at the end of the section
• Procedure: Note added in Step 10 (c)
• Procedure: Available design examples information updated for all modes in Step 10 (d)
• Supported Simulators: Information about x4 Hard IP mode design example simulation support updated
5. Revision History for the Multi Channel DMA Intel FPGA IP for PCI Express Design Example
User Guide
683517 | 2024.11.04
2023.04.17 — 23.1 (H-Tile: 22.2.0, P-Tile: 5.1.0, F-Tile: 5.1.0, R-Tile: 2.0.0)
• Updated product family name to "Intel Agilex® 7".
• MCDMA R-Tile Design Examples for Endpoint: DPDK Driver Support information added to the table
• MCDMA R-Tile Design Examples for Endpoint: External Descriptor Controller information added to the table
• Hardware and Software Requirements: Operating system information added
• Testbench Overview: MCDMA R-Tile Testbench information added
• Supported Simulators: New table added Supported Simulators for MCDMA IP R-Tile
• Run the Simulation Script: Table Steps to Run the Simulation removed
• Software Test Setup: Operating System information updated
• Set the Boot Parameters: CentOS and Ubuntu information added
• Enabling VFs and Create Guest VM by Using QEMU: Host System Configuration table updated with operating system information
• Run the Reference Example Application:
  — Note added in Step (5)
  — Custom AVMM DMA Gen4 x16 : P-Tile Hardware Test Result: New screenshot added
• BAS Test: BAS x4 information added
• Testing Bitstream Configuration beyond 256 Channels (for MCDMA Custom Driver): Note added in Step (4)
2023.02.14 — 22.4 (H-Tile: 22.1.0, P-Tile: 5.0.0, F-Tile: 5.0.0, R-Tile: 1.0.0)
• Kernel mode char driver is no longer supported. MCDMA Kernel Mode Character Device Driver section removed. All other Chardev driver information also removed.
• MCDMA R-Tile information added in the following sections:
  — MCDMA IP Modes
  — Design Example Overview
  — Hardware and Software Requirements
• MCDMA R-Tile Design Examples for Endpoint: New section added
• BAM+BAS+MCDMA User Mode support information added in following sections:
  — MCDMA H-Tile Design Examples for Endpoint
  — MCDMA P-Tile Design Examples for Endpoint
  — MCDMA F-Tile Design Examples for Endpoint
  — Driver Support
  — Supported Simulators
• Hardware and Software Requirements: Development Kit information updated
• Procedure: Step (7) updated
• Running the Design Example Application on a Hardware Setup: Development Kit information updated
• Program the FPGA: Note added
• Software Test Setup: Information added to run custom driver with Ubuntu 22.04
• External Packages: Commands added for CentOS and Ubuntu. Note added in Step (1)
• BAS Verification: Note added about running BAM+BAS+MCDMA
• Testing Bitstream Configuration beyond 256 Channels: Note added in Step (1)

2022.10.28 — 22.3 (H-Tile IP: 22.0.0, P-Tile IP: 4.0.0, F-Tile IP: 4.0.0)
• MCDMA IP Modes: Table MCDMA IP Modes and FPGA Development Kit for Design Examples updated for P-Tile and F-Tile rows
• Kernel Mode Driver Support removed for Device-side Packet Loopback Design Example in
  — MCDMA H-Tile Design Examples for Endpoint
  — MCDMA P-Tile Design Examples for Endpoint
• MCDMA F-Tile Design Examples for Endpoint: Table updated MCDMA F-Tile Design Examples for Endpoint
2021.12.01 — 21.3 (H-Tile IP: 21.2.0, P-Tile IP: 2.1.0, F-Tile IP: 1.0.0)
• Rev H-Tile 21.2.0: 2K channel support for D2H
• Rev P-Tile 2.1.0: CS address width reduced from 29 to 14 bits
• Rev F-Tile 1.0.0: F-Tile support added; BAS EP design example added
• Added new design example: Traffic Generator/Tracker