Xilinx ZynqMP Guide
ELECTGON
www.electgon.com
25.04.2024
Contents
1 ZynqMP Description
  1.1 APU
  1.2 RPU
    1.3.1 Connections
      1.3.1.3 Clocks
      1.3.1.4 EMIO
      1.3.1.6 PMU
      1.3.1.7 DMA
    1.3.2 Logic
  1.4 Memory
      1.4.1.3 PL Memories
  1.5 Peripherals
  1.6 Interconnects
  1.7 Interrupts
    1.9.1 Trustzone
2 Hardware
  2.1 Processor Configuration
3 Firmware
  3.1 Preparations
    3.2.3 Tools
  3.3 PMU
  3.4 ATF
      3.5.1.2 Binding
      3.5.1.3 Addressing
      3.5.1.4 Interrupt
      3.5.1.6 Overwriting
  3.6 UBoot
  3.7 FileSystem
    3.7.1 Definition
    3.9.3 Tools
4 Software
  4.1 Linux Drivers
Abstract
This book is a practical guide for building an integrated system on a Xilinx Zynq-family FPGA. What is meant by an integrated system is the complete flow, starting from the hardware: an FPGA that consists of a Programmable Logic part (PL) and a built-in Processing System (PS). This FPGA can thus be used to build customized hardware, which is then driven through an Operating System, usually a Linux build that can be obtained from a Xilinx repository. This book therefore shows the important steps needed to build such a system.
Chapter 1
ZynqMP Description
Understanding the target hardware platform shall be the first step, in order to build the needed binaries and bitstream easily and to know the capabilities of this platform for best use. This book discusses build procedures based on the Xilinx ZynqMP UltraScale+. Xilinx already provides very beneficial documentation and a support platform, so explaining the ZynqMP in depth here adds little value. A summarized overview of this platform is sufficient as a starting point; for real explanations it is always advised to consult the official Xilinx documentation.
ZynqMP is basically partitioned into a Processing System (PS) and Programmable Logic (PL). The PS consists of hard-wired ARM processors implemented in the chip; the PL is the configurable hardware part of the FPGA. The chip can also be classified in terms of power domains: through a component called the Platform Management Unit (PMU), power management and isolation between these domains are controlled. Figure 1.1 shows the other main components of ZynqMP.
Each of these components is a peripheral that serves a special function, and some components have running processors inside. Each processing system can run independently; that is why they are called asymmetric processing systems. For example, the APU has four cores that can run different Operating Systems and do not share workload. The RPU has two cores that can also run independently. So the user has the option to choose which processor to work with (APU, RPU, etc.) and how to employ the cores of the target processor.
1.1 APU
The Application Processing Unit consists mainly of a quad-core Cortex-A53 ARM processor (some chips have only a dual-core A53). The A53 is based on the ARMv8-A architecture, is 64-bit, and can operate at up to 1.5 GHz. It has independent Memory Management Units.
The four cores of the A53 can run symmetrically (they can share workload) and also independently (Asymmetric MultiProcessing); in this latter case they can be unsupervised (no arbiter between the AMP blocks) or supervised (a hypervisor coordinates the AMP blocks).
1.2 RPU
The Real-time Processing Unit is used for real-time applications. It consists mainly of a dual-core ARM Cortex-R5 processor, built on the ARMv7 architecture, 32-bit, which can operate at up to 600 MHz. It also has 128 KB of Tightly Coupled Memory per core and a dedicated L1 cache. The Cortex-R5 can operate in two different modes: split mode or lock-step mode.
Split mode:
This is the default mode. One core may be running an OS while the other runs bare-metal, or both may run different OSes.
Lock-step mode:
The dual core acts as a single CPU. Both cores run the same instructions in parallel, delayed by 1.5 cycles.
1.3 Programmable Logic
The PS and PL are connected through several interface signals, which can be grouped by direction: bi-directional, from PL, and to PL.
1.3.1.3 Clocks
There are four clock signals that can be provided from the PS to the PL.
1.3.1.4 EMIO
ZynqMP has ready GPIO peripherals that can be configured by the user. These GPIO peripherals can be routed directly outside the chip through the MIO, or they can be routed into the PL through the EMIO to reach PL pins or PL port peripherals.
1.3.1.6 PMU
The PL can communicate with the PMU: there are 32-bit signals for data exchange, in addition to other control signals.
1.3.1.7 DMA
Interfaces to the memory units are also available to the PL part.
1.3.2 Logic
Any implemented PL logic should have an AXI interface to communicate with the PS.
If there is more than one logic block, the designer should instantiate an AXI Interconnect.
The PL includes integrated blocks for PCI Express, Interlaken, and 100G Ethernet.
It is possible to partially load the bitstream file; that is, the FSBL can load part of the bitstream and the rest can be loaded dynamically while the initially loaded logic keeps running. The design is then split into multiple configurations that together result in the full bitstream. This procedure is called Partial Reconfiguration.
1.4 Memory
Zynq UltraScale+ MPSoC devices include:
1. Memory components
OCM: the FSBL loads the code for booting the PS into the On-Chip Memory. The OCM is also used to program non-volatile flash memory through the JTAG boot mode.
TCM: the RPU has Tightly Coupled Memories of 128 KB per core. Each TCM is formed by two banks, ATCM and BTCM, so in total there are four banks in the RPU.
When the RPU is operating in split mode, each RPU core can access two 64 KB TCM banks.
When the RPU is operating in lock-step mode, the RPU can access a block of 256 KB of TCM memory.
TCM is also mapped into the global system memory so that it can be accessed by the APU or the PL part. It can then be accessed as two blocks of 128 KB or one block of 256 KB.
1.4.1.3 PL Memories
The PL has three types of memory blocks, which are faster and consume less power: BRAM, UltraRAM, and LUTRAM. These blocks can be used as dual- or single-port memories.
The FPD-DMA is connected to a 128-bit AXI bus and uses a 4K buffer. The LPD-DMA is its counterpart in the low-power domain. Some peripherals (such as SATA, SDIO, DisplayPort, and QSPI) come with their own DMA controllers.
The external memory is connected to the system through the AXI interconnect. The interface supports multiple memory standards (DDR3, DDR3L, LPDDR3, DDR4, and LPDDR4).
1.5 Peripherals
As with any system block of ZynqMP, the peripherals are distributed over the four power domains. Figure 1.3 locates each peripheral in its operating power domain along with the interface it uses.
GPIO: Although the 78 MIO pins are shared between the LPD peripherals, in case no other peripherals are used, the GPIO can control the entire 78 pins. It can also use pins of the EMIO.
SPI: The SPI controllers don't support direct memory access (DMA), hence the communication is done through register reads and writes on the controller's AXI interface.
UART: Configurable baud rate; 6, 7, or 8 data bits; 1, 1.5, or 2 stop bits; odd, even, space, mark, or no parity.
QSPI: The Quad SPI controller of ZynqMP includes two types of controllers: the Generic Quad SPI controller and the Legacy Quad SPI controller.
Ethernet: The GEM controllers operate at 10/100/1000 Mb/s, supporting GMII (through PL pins using EMIO) and RGMII (through MIO).
1.6 Interconnects
The interconnect links together all of the processing blocks and enables them to interface with each other. It is based on AXI. An AXI interface consists of five different channels: the Read Address channel, Read Data channel, Write Address channel, Write Data channel, and Write Response channel. Because there are several components in Zynq, there is almost no direct connection between two components; traffic passes through the interconnect.
1.7 Interrupts
In Zynq there are two interrupt controllers:
GIC-V1: It is implemented for the RPU. Each Cortex-R5 has two interrupt lines (normal interrupt nIRQ and fast interrupt nFIQ).
GIC-V2: This is the same as GIC-V1 but enables interrupt virtualization, and it is implemented for the APU. Each Cortex-A53 has four interrupt lines (normal interrupt nIRQ, normal high priority nFIQ, virtual interrupt vIRQ, virtual high priority vFIQ).
Interrupt Sources:
Interprocessor Interrupts (IPI): in ZynqMP there are 11 IPI channels, four of them dedicated to the PMU.
The firmware of the CSU is stored by the manufacturer in a ROM, so it is not customizable. When this firmware starts, it loads the FSBL firmware into the OCM to be accessed by the processor; if the boot image is encrypted, the CSU performs the decryption and then loads the FSBL into the OCM.
The protection mechanisms mainly filter the traffic passing to a resource based on its ID. ZynqMP resources can be broadly described as processing units, memory units, and peripheral units. Each of these resources is protected by the SMMU (System Memory Management Unit), the XMPU (Xilinx Memory Protection Unit), or the XPPU (Xilinx Peripheral Protection Unit).
1.9.1 Trustzone
The APU provides a further mechanism (TrustZone) for isolating and partitioning resources. This mechanism adds a security level beneath the OS level. Traditional processing systems give the OS more privilege than the running software, which is known as the split between user space and kernel space. A TrustZone-capable ARM processor adds a layer with even more privilege than the OS, while the OS still thinks it has the highest privilege. The privilege levels are:
EL1: OSes are given this privilege (an OS is also called a supervisor).
EL2: The hypervisor is given this privilege (used when running virtualized systems).
EL3: The ARM Trusted Firmware (ATF) is given this privilege, as it is the security monitor.
The main power-saving techniques are:
1. Feature Disabling
3. Frequency Scaling
4. Clock Gating
5. Use of PL Acceleration
Chapter 2
Hardware
ZynqMP can't be used without its processing unit, i.e. developers can't utilize the PL part independently. That is because the PL is programmed through the FSBL firmware, which is executed by the processing system. Therefore any hardware design used in ZynqMP must instantiate the processing system.
This leads us to conclude that Xilinx tools are needed in order to instantiate the processing unit into the hardware model, which means we have to use Vivado to build that model, and we have to make sure that we have the appropriate license for the instantiated modules or IPs.
A typical system should look like what is shown in figure 2.1: a simple system consisting of the processing unit, an AXI interconnect, and a GPIO unit. The GPIO unit is a peripheral attached to the processing system through an AXI interconnect instance; this is mandatory, i.e. we can't attach the peripheral directly to the processing system.
To build such a model in Vivado, we have to create what is called a Block Design. This Block Design contains the important instances of the hardware model, such as the processing system. Note that the hardware model shall contain this block design at the second level: the top design file shall contain a wrapper of a block design that contains the ZynqMP processor. Without that, the device tree model will not be generated for the design, as putting the ZynqMP processor at any lower level will not allow the processor to be detected by the tools. To configure the processor, double-click on the processing system instance in the block design in the Vivado GUI. Figure 2.2 shows the main processing system components; you can then choose which component to enable or configure.
2.2 HDF and Bitstream Generation
After building the hardware model, we have the HDF file and the bitstream file. The HDF file is used as a description of the hardware, and it is used in the software flow in order to generate the device tree and other needed firmware. The bitstream file is optional; it is needed only if the hardware model is utilizing the PL part. To generate the HDF file in the Vivado GUI, choose from the top menu File >> Export >> Export Hardware, then choose where you want to store this HDF file. To generate the bitstream, choose `Generate Bitstream'.
Chapter 3
Firmware
3.1 Preparations
3.1.1 Cross Compiler
ZynqMP has ARM Cortex-A53 and Cortex-R5 processors. For any firmware or software that should run on these processors, we have to cross compile it. By default, the Xilinx tools (namely SDK) provide cross compiler toolchains for both under:
<INSTALLATION_DIR>/SDK/<version>/gnu/
In this directory you can find the cross compiler for the ARM64 architecture (aarch64) and for the R5 architecture (armr5). Inside aarch64, you can find the cross compiler for Linux-based firmware as well as one for bare-metal firmware.
3.2 First Stage Boot Loader
Each boot stage sets the system up and hands the booting process to the next routine. In our case, the FSBL is the first routine that initializes the ZynqMP FPGA and makes the necessary settings. The following lines describe how to generate it.
3.2.3 Tools
The FSBL can be generated using SDK (GUI), HSI (command line), or XSCT (command line). HSI also provides the procedures needed for third-party tools; details about this tool are included in Xilinx UG1138. XSCT can be used in command-line mode. To generate an FSBL elf file, the following routine can be used:
setws .
createhw -name hw0 -hwspec <HW_HDF_FILE_HERE>
createapp -name fsbl1 -app {Zynq MP FSBL} -proc psu_cortexa53_0 -hwproject hw0 -os standalone -lang c -arch 64
projects -build
You can get help about any of the above commands by typing:
$ <command name> -help
Choose the shown options and make sure that the Hardware Platform is the one you have built previously.
The generated FSBL elf file can be found under the created FSBL project >> Debug >> example_fsbl.elf.
3.3 PMU
The Platform Management Unit is a special unit in the ZynqMP used to power up the system and to perform power management (switching the PS between different power modes). It also supports Inter-Processor Interrupts (IPI) for communication between the system processors.
The PMU has a MicroBlaze processor, which is connected to 128 kilobytes of RAM with error-correcting code (ECC) that is used for data and firmware storage.
It provides a power-integrity check using the system monitor (SysMon), assuring proper supply levels, and it powers down any power islands and other IP disabled via eFuse. The PMU firmware can be tailored to manage and control the power of the chip. This firmware can be generated using the Xilinx tools (e.g. SDK), which include a ready template for it.
3.4 ATF
ATF is responsible for preserving and restoring the non-secure context when switching to the secure context. It is also responsible for part of the power management: the PMU, which implements power management, is a secure AXI slave and will therefore not accept any commands issued by a hypervisor or a non-secure OS running on the APU. Power-management requests made by non-secure OSes, such as Linux, are therefore routed through the ATF.
Xilinx software stacks running on the Zynq UltraScale+ MPSoC APU conform to the standard ARMv8 topology, where Linux running at ARM EL1/0 has hardware-limited access to system or security-critical registers and devices. All interactions from Linux to those devices or registers are routed through the ARM Trusted Firmware, which runs at EL3.
ARM Trusted Firmware provides a reference implementation of secure software for the ARMv8-A architecture, including PSCI (Power State Coordination Interface) and Secure Monitor code for interfacing with Normal-world software. The Xilinx ARM Trusted Firmware is based on the mainline ARM Trusted Firmware and implements the EL3 firmware layer for the Xilinx Zynq UltraScale+ MPSoC. The platform only uses the runtime part of the ATF (EL3 firmware), as ZynqMP already has its own earlier boot stages.
Xilinx>>SDK>>2017.3>>gnu>>aarch64>>nt>>aarch64-linux
Note that if you want to build the ATF for a 32-bit ARM target, use the corresponding 32-bit cross compiler. The ATF sources can be obtained from:
https://github.com/Xilinx/arm-trusted-firmware/releases/
After a successful build, the output can be found in the build/zynqmp/release/bl31 directory.
setws -switch .
after 1000
exec cp -r $ATF_Source .
File>>New>>Project
Provide the Project Name and Location. Choose the Build Configuration field as Release. Hit Finish.
Note that the ATF source files are downloaded by default by SDK into the following path:
3.5 Device Tree
When devices are attached to the processing system, a description of these devices shall be provided to the Operating System, or specifically to the kernel. For that purpose, a dts file (device tree source) is used to list all the attached devices. This dts file is compiled into a dtb file (device tree blob), which is a binary file loaded during the booting sequence into RAM and read by the kernel (the boot loader copies that chunk of data to a known address in the RAM and passes that address to the kernel).
Assume you have built a hardware design in the Xilinx Zynq with the processing system and a GPIO controller. This GPIO controller is connected to the Zynq processor as a slave via the AXI bus.
To build the device tree for such a system, the Xilinx SDK tool can be used, as discussed later. What can be shown here is that the AXI bus is considered a hardware node attached to the processor, and the GPIO controller is considered another hardware node attached to the AXI bus. So in the DTS file, the description of the GPIO controller will be inside the description of the AXI bus; the DTS file for the shown architecture follows this structure.
As you can see, a DTS file contains a tree structure of hardware nodes. Each node has some parameters and properties. A basic format of this structure can be considered as:
/dts-v1/;
/ {
    node1 {
        a-string-property = "A string";
        a-string-list-property = "first string", "second string";
        // hex is implied in byte arrays. no '0x' prefix is required
        a-byte-data-property = [01 23 34 56];
        child-node1 {
            first-child-property;
            second-child-property = <1>;
            a-string-property = "Hello, world";
        };
        child-node2 {
        };
    };
    node2 {
        an-empty-property;
        a-cell-property = <1 2 3 4>; /* each number (cell) is a uint32 */
        child-node1 {
        };
    };
};
Source: [2]
The label is an optional field. The node name shall be in the format <DeviceName>@<PhysicalAddress>. The device name does not have to be unique, i.e. many devices can use the same name.
3.5.1.2 Binding
The first and most important parameter is the compatible parameter. It should carry a unique string. This string is used with the device's SW driver in order to link the SW driver and the HW device: when the Linux kernel starts, it interrogates the device tree and kicks off the corresponding kernel module according to the value specified in the compatible parameter. The developer is free to choose any name or string for his device, but it should be the same name used in the SW driver. The convention for this string is:
<manufacturer>,<model>
Assume we are building the SW driver for the GPIO controller. The driver will define the match as follows in the driver source file:
/* note: the __devinitdata annotation was removed from the kernel in 3.8; omit it on modern kernels */
static struct of_device_id xillybus_of_match[] __devinitdata = {
    { .compatible = "xlnx,xillybus-1.00.a", },
    {}
};
MODULE_DEVICE_TABLE(of, xillybus_of_match);
More details about building the driver are discussed in the SW drivers section later.
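On the device-tree side, the matching node carries the same string. A minimal sketch (the node name and address reuse the example values from this chapter):

```dts
gpio@80001000 {
    compatible = "xlnx,xillybus-1.00.a"; /* must match the driver's of_device_id */
    reg = <0x80001000 0x1000>;
};
```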
3.5.1.3 Addressing
In the attached node, the `reg' parameter contains the assigned address of the hardware node.
which means that the device is assigned the physical address 0x80001000 and allocated a size of 0x1000 (4K). A device can also carry two address ranges in one reg property; the fragment shown here ends with the pair 0x101f4000 0x0010>, i.e. a second region at 0x101f4000 of size 0x10.
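As a quick check of the size arithmetic, converting the hex length from the reg property to decimal:

```shell
# 0x1000 bytes from the reg property, expressed in decimal
printf '%d\n' 0x1000
```

which confirms the allocated region is 4096 bytes (4 KiB).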
The parent node (amba_pl@0 in our example) contains information about the format of the children's reg properties:
#address-cells = <1>;
#size-cells = <1>;
In this case, it indicates that the child nodes will have 1 cell for the address and 1 cell for the size. Each cell is a 32-bit integer, so a definition like reg = <0x80001000 0x1000> is one address cell followed by one size cell.
In our example of the GPIO controller, address-cells and size-cells are both defined with the value 2, because this system is built for a 64-bit processor (ARMv8). So the addressing of the GPIO controller was assigned as reg = <0x0 0x80000000 0x0 0x1000>;, with 0x0 carrying the upper 32 bits of the address and of the size.
3.5.1.4 Interrupt
An interrupt is a signal between two devices to indicate that a special activity has taken place. In the DTS file, the device that receives interrupts is marked with the statement interrupt-controller.
interrupt-controller: as shown in figure 3.15, this marks the device that receives the interrupt signal.
interrupt-parent: this property is set on the device generating the signal to declare to whom the signal is sent. See figure 3.15 for an example.
interrupts: this is the parameter that describes how the interrupt is signalled. In figure 3.16 the interrupt is described with the numbers <0 89 4>.
Xilinx uses 3 cells for describing an interrupt. In our example, the first cell is 0, which marks it as a shared peripheral interrupt (SPI). The second cell, 89, is the ID of the interrupt; this ID can be found in the Xilinx Zynq technical reference manual (UG1085), and it is mapped to the Linux IRQ number 89 + 32 = 121. From UG1085, this is an interrupt signal sent from the PL to the PS. The third cell is 4, which represents the trigger type of the interrupt; the possible values are:
0 Leave it as it was (power-up default or what the bootloader set it to, if it did)
1 Rising edge triggered
2 Falling edge triggered
4 Level sensitive, active high
8 Level sensitive, active low
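Putting the three properties together, a sketch for the example interrupt (the node name is illustrative; the cells are the <0 89 4> values discussed above):

```dts
gpio@80001000 {
    interrupt-parent = <&gic>;
    /* SPI interrupt (0), ID 89 (Linux IRQ 89 + 32 = 121), level-sensitive active-high (4) */
    interrupts = <0 89 4>;
};
```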
Vendors can also write their own custom information in the device tree. For example, the GPIO controller has a parameter defining the width of the output pins: xlnx,gpio-width = <0x8>;. The xlnx, prefix protects against name collisions and is an arbitrary string. More information about the conventions Xilinx uses for its DTS files can be found in [3].
There are some other properties that can be described in the DTS file, such as aliases and the chosen node.
3.5.1.6 Overwriting
To overwrite a property, the node needs to be referenced using the ampersand character and its label. Later device tree entries overwrite earlier entries (the sequence order of entries is what matters, hence the include order matters). Typically the higher layers (e.g. the carrier-board device tree) overwrite the lower layers (e.g. the SoC device tree), since the higher layers include the lower layers at the very beginning. For example, for USB controllers which are capable of being device or host (dual-role), one can overwrite the default mode explicitly in the board-level device tree.
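As a sketch of the overwrite pattern (the label and property mirror the UART example used later in this chapter):

```dts
/* board-level dts, included after the SoC dtsi that sets status = "disabled" */
&uart1 {
    status = "okay"; /* the later entry wins, enabling the peripheral */
};
```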
Describing the whole system node by node needs a lot of effort; therefore it is better to use the Xilinx tools to automate this process. Using SDK, the following DTS can be generated for the system:
/dts-v1/;
/include/ "zynqmp.dtsi"
/include/ "zynqmp-clk-ccf.dtsi"
/include/ "pl.dtsi"
/include/ "pcw.dtsi"
/ {
    chosen {
        bootargs = "earlycon clk_ignore_unused";
        stdout-path = "serial0:115200n8";
    };
    aliases {
        ethernet0 = &gem3;
        serial0 = &uart0;
        serial1 = &uart1;
        spi0 = &qspi;
    };
    memory {
        device_type = "memory";
        reg = <0x0 0x0 0x0 0x80000000>, <0x00000008 0x00000000 0x0 0x80000000>;
    };
};
Note the include mechanism, which organizes the hardware nodes into different files. zynqmp.dtsi is a ready description of the ZynqMP processing system; zynqmp-clk-ccf.dtsi describes the clock configuration of each attached node; pl.dtsi describes the hardware nodes that are built in the PL; and pcw.dtsi adjusts the PS peripherals described in zynqmp.dtsi.
For example, by default all peripherals attached to the processing system are disabled.
All the peripherals shown here have status = "disabled". For a user who wants to activate the UART1 peripheral, pcw.dtsi will contain the override shown in figure 3.18.
These values are used while building the system; for example, the aliases values define which node is targeted by names such as serial0. Also, there are in general three sources for the kernel boot command line, one of them being the arguments included in the device tree under chosen/bootargs (see the listing above).
From the top menu open Xilinx >> Repositories. The following window shall open. At this step we need to add the repository of the Xilinx device tree generator, which can be downloaded from:
https://github.com/Xilinx/device-tree-xlnx.git
Add the local repository by hitting the `New' button and browsing to the location of the downloaded repository. Click `OK'. Then in the main SDK window, from the top menu, choose File >> New >> Project. Browse to add the Target Hardware Specification declared previously, then click `Finish'. Then in the SDK main window choose File >> New >> Board Support Package.
Choose the location of the BSP project; it is important to choose device_tree in the OS field. The following window will open to choose further configuration/settings for the generated device tree. For example, you can change the name of the generated dts file or add boot arguments. Click `OK'. The generated dts files can be obtained from the created BSP directory.
The device tree compiler (dtc) is built on the host machine during the build, so the resulting compiler is available; export it to the PATH variable:
$ export PATH=/path/to/dtc:$PATH
The DTB file can be obtained then by:
$ dtc -I dts -O dtb -o <devicetree name>.dtb <devicetree name>.dts
3.6 UBoot
The Universal Bootloader (U-Boot) is used for booting the kernel and the rest of the operating system components. Building starts by loading the default configuration of the target board from its defconfig file; make sure to use the right name of this file. You can further add your own configurations:
$ make menuconfig
Building is now ready:
$ make
After the make process is finished, the output file u-boot.elf is needed in the other steps.
It might seem that U-Boot can be built as-is, but this is not the case, as we need to configure U-Boot to match our target hardware. U-Boot is configured and built in a Linux-like manner. This means a .config file is created in the U-Boot directory in order to hold the final configuration of the system. As described before, this .config file is loaded from the target architecture configuration file located in the configs directory, which contains all needed configurations. Among other system configurations, there are two important configurations we have to take care about: the System Configuration and the Device Tree Configuration, which are described next.
CONFIG_SYS_CONFIG_NAME.
This parameter tells the compiler where to find a further extension of the configuration. This extension is not included in the defconfig file, as it is usually tailored to the used system. For example, suppose the user board is designed to boot from QSPI: in the main configuration file (defconfig) we can enable booting from QSPI, but in the System Configuration file we can state which partition of the QSPI we shall boot from. In our example here, the extended system configuration file is called Xilinx_zynqmp_zcu102.h.
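In the defconfig this parameter is a one-line string option; a sketch for the ZCU102 example (the exact value should be checked against your U-Boot tree):

```
CONFIG_SYS_CONFIG_NAME="xilinx_zynqmp_zcu102"
```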
To make it clear: the running hardware has to be defined for U-Boot in order to enable U-Boot to run on it. In our example here, the file zynqmp-zcu102.dts shall be provided in the directory ./arch/arm/dts/.
It is worth mentioning here that special drivers are built in U-Boot to handle working with the attached devices that are described in the device tree file.
Each board also has a board configuration file located in the ./board directory. Here you can find some ready board configuration files for the supported devices; for our case, the Xilinx board configuration file is located in the xilinx subdirectory. What is important in this file is the setting of the boot mode.
In this sample, the U-Boot configuration reads a specific register in the Xilinx FPGA from which it recognizes which medium is set for booting. The value of this register is set according to some jumper settings on the host board. This in turn will set some values back in the extended system configuration file, i.e. the System Configuration file knows the boot medium from this register.
The developer may also need to stop the auto-booting of U-Boot in order to tweak some variable settings. In this case we disable auto-booting by setting the boot delay to -1. This setting can be added using menuconfig (main menuconfig window >> delay in seconds before automatically booting, set to -1), or in the main configuration file we can set CONFIG_BOOTDELAY=-1.
Depending on the board configuration file, the modeboot variable is set to either sdboot, nandboot, qspiboot, etc. In the System Configuration file, there is a description of how to boot using these values.
As a common technique, you can describe the booting approach in an external file usually called uEnv.txt. This is not common to all U-Boot builds but is widely used. For example, when booting from SD, U-Boot reads information about the SD device, then runs a predefined command called uenvboot, or runs another command defined in uEnv.txt, a separate text file. For example, the following can be defined in this text file:
ethaddr=00:0a:35:00:01:22
kernel_image=uImage
devicetree_image=devicetree.dtb
sdboot=if mmcinfo; then echo UENV Copying Linux from SD to RAM... && load mmc 0 0x3000000 ${kernel_image} && load mmc 0 0x2A00000 ${devicetree_image} && bootm 0x3000000 - 0x2A00000; fi
What is needed then is to put this file on the SD card together with the other booting files.
To avoid recompiling U-Boot each time you modify a global environment variable, you can direct the boot command to such variables after replacing $sdbootdev with its value; in our example here, $sdbootdev is the index of the boot SD device. Both environment variables below have the same arguments to boot from the SD card except for the partition number:
"sdroot0=setenv bootargs $bootargs root=/dev/mmcblk0p2 rw rootwait\0" \
"sdroot1=setenv bootargs $bootargs root=/dev/mmcblk1p2 rw rootwait\0" \
What we can see from these variables is that they use the name of the partition (mmcblk0p2 or mmcblk1p2). One drawback of this definition is that the name of the mmc device may change when the Linux kernel is loading: a device enumerated first in U-Boot may be labeled in reverse order by the sdhci driver once the Linux kernel is loaded. In this case, sdroot0 and sdroot1 shall be reversed in the device tree, or, as a better solution, we can point to the target partition in U-Boot using a UUID. This can be achieved as follows.
In the configuration file Xilinx_zynqmp.h, the following variable is defined for sdboot:
sdboot=mmc dev $sdbootdev && mmcinfo && run uenvboot || run sdroot$sdbootdev;
in which it runs sdroot$sdbootdev. We can replace this last command with another command that first obtains the UUID of the target partition (the root partition of the SD card) and then boots according to the obtained UUID.
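A sketch of this UUID-based variant, using U-Boot's standard part command (the variable names are illustrative):

```
part uuid mmc ${sdbootdev}:2 sduuid
setenv bootargs ${bootargs} root=PARTUUID=${sduuid} rw rootwait
```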
Files can also be loaded over the network in U-Boot using the tftpboot command. This is needed especially during the development phases of a project, for example to fetch the kernel image, the device tree, and the root file system. The following command can be used in the environment variables declaration space:
tftpboot 80000 Image && tftpboot $fdt_addr system.dtb && tftpboot 6000000 rootfs.cpio.ub
Here we load the pointed files into the declared RAM addresses of the processor; afterwards, all we need is to boot from these loaded items. For the transfer to work, the board needs its own IP address and the address of the TFTP server. These can be easily defined in the U-Boot environment variables space using the following sample:
setenv ipaddr 192.168.1.10
setenv serverip 192.168.1.1
In which we set ip address for the target board and declare the ip address of the server.
At the server itself, a tftp server shall be running. In windows, free tool can be down-
Transferring the whole file system on every boot can be avoided by what is called NFS, in which the root file system is hosted on a remote machine in the network without the need to transfer it to the target board. The Linux
kernel, when loaded, will point to this remote root file system. This can be configured in U-Boot's boot arguments, which tell the kernel where the NFS is. On the remote machine, an NFS server shall be installed; then we can boot.
As a developer, you might need to change all these configurations while building your system. One common approach is to write an initial script that U-Boot can understand, so that it behaves according to the instructions described in that file. The idea is similar to the uEnv.txt mentioned previously, with the difference that in the script you can use the full U-Boot command set. The script text file (here called sample_uboot_script.txt) has to be converted into a special format. To prepare this format, there is a utility produced when compiling U-Boot. This utility is called mkimage and can be found, after compiling U-Boot, in the subdirectory tools. So you can add it to your PATH variable in Linux.
$ cd tools
$ export PATH=$PWD:$PATH
Then, to create the needed script format, use a command like the following:
mkimage -T script -A arm64 -C none -n 'Our U-Boot script.' -d sample_uboot_script.txt sample.scr
Here sample_uboot_script.txt is the text file that contains your U-Boot script commands, and sample.scr is the output of this mkimage command. The -A switch states which architecture the script will run on, -C is set to none to mean an uncompressed image, and -T defines the type of the image, which is a script in our case. Note that you can review details about any image using mkimage -l <image_name> on the host machine, or run iminfo at the U-Boot prompt on the target machine.
Fetching the script over the network can also be done by assigning a dynamic IP address to the chip using the dhcp command instead of setting a static address. A simple alternative, which requires no changes to the environment variables, is to issue the dhcp command with the file name that we need to fetch:
Chapter 3. Firmware 3.7. FileSystem
dhcp 0x12000 sample.scr

where 0x12000 is the RAM address to which the file will be loaded.
3.7 FileSystem
3.7.1 Definition
Every embedded Linux system needs a root file system which includes all of the files needed for the system to work. To boot a system, enough software and data must be present on the root partition to mount other file systems; this includes utilities, configuration and boot files. What is meant here by file system is also the entire hierarchy of directories that is used to organize files on a computer system; this should not be confused with the file system type. When the bootloader loads and starts the kernel, the kernel will mount the target file system. The Linux Foundation released what is called the Filesystem Hierarchy Standard (FHS), which defines the basic directory layout shown in figure 3.27.
Since these basic configurations are important and may differ from one system to another, a build tool such as BusyBox can be used to create these basic components. BusyBox can be downloaded from https://busybox.net/downloads/. Before starting the build operation, the architecture of the target system and a cross compiler for it shall be defined:
$ export ARCH=arm
$ export CROSS_COMPILE=/path/to/your/CrossCompiler
$ make defconfig
$ make menuconfig
$ make
$ make install
This will result in a directory _install which contains the output of the build process, but not all of the needed directories shown in the previous figure 3.27. BusyBox will generate only the utilities that the user chose, contained in the bin and sbin directories; the user still needs to add the remaining directories.
As an alternative, Buildroot can be used to prepare the needed file system as well as the needed utilities. In fact, Buildroot uses BusyBox to generate the needed utilities and binaries. Moreover, Buildroot can be used to generate the entire Linux system, from the bootloader and Linux kernel to the file system; it is therefore an attractive all-in-one build tool. A ready board configuration can be selected using

$ make your_target_file_defconfig

You can add more changes to these configurations, or, if you don't have a ready configuration, start from menuconfig.
It is important to make sure that you have the right Target options. Note also that for the ARM64 architecture only glibc can be used, as the other C libraries do not support this architecture [7]. There are also behavioral differences between the libraries: calling malloc(0) in glibc returns a valid pointer to something, while in uClibc it returns NULL [8].
Then you can navigate through the other options to choose other settings and utilities. Save your configurations, then start the build process. A cross compiler needs to be defined:
$ export CROSS_COMPILE=/path/to/your/CrossCompiler
$ make
This build process takes about 40 minutes to finish, depending on your chosen preferences. The output is the root file system compressed as a tar file. To install it, mount the root partition of the SD card:
$ mkdir -p ./temp_mnt
$ sudo mount /dev/sdb2 temp_mnt
then extract the compressed file in the mounted location:
$ sudo tar -xpf rootfs.tar -C /path/to/mounted/location
Root File System is now ready.
Note:
While building the rootfs, downloading of the Linux headers may fail because the certificate of the download source cannot be verified. The configuration is set to use the wget utility to download these headers, so the error can be worked around by adding the switch --no-check-certificate, which turns off checking whether the link is secure. But where to add this switch? In the Buildroot directory, open the hidden file `.config'; there is one parameter called BR2_WGET to which the switch can be appended. Then you can build the rootfs using the make utility; alternatively, you can set it via menuconfig.
A user may need to create a home directory for another user. This can be done using overlays: simply create the additional needed directory tree and direct the configuration of Buildroot to add this tree by mentioning its path. This setting can be done in the configuration file as

BR2_ROOTFS_OVERLAY=/path/to/location/of/neededtree

Alternatively, it can be set in menuconfig under System configuration > Root filesystem overlay directories.
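As a minimal sketch of preparing such an overlay tree on the host (the user name and the configuration file are illustrative assumptions), run:

```shell
# prepare an overlay tree that Buildroot copies over the target
# file system; 'devuser' and the interfaces file are assumptions
mkdir -p overlay/home/devuser
mkdir -p overlay/etc/network
printf 'auto eth0\niface eth0 inet dhcp\n' > overlay/etc/network/interfaces
```

Pointing BR2_ROOTFS_OVERLAY at this overlay directory then merges the tree into the generated root file system.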
Some settings need to be adjusted after building the system, for example assigning an IP address to a network interface, which requires adding some configuration to the interfaces file. Such modifications can be scripted in an external file, ready for execution before the root file system image is assembled.
For example, assume that we need to configure a network interface to acquire an IP address dynamically. A file called `interfaces' shall then contain the following lines:
# interface file auto-generated by buildroot
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
Here we set interface eth0 to get an IP address dynamically. Let's assume that this prepared file is stored next to the build tree. We then need to create the script that will run to replace the default interfaces file in the target file system with this prepared one.
#!/bin/sh -e
FS_DIR=$1
According to the Buildroot manual, the path to the target file system "is passed as the first argument to each script"; that is why we use FS_DIR=$1 to capture the target directory. In case you want to pass other arguments to these post scripts, write them down in the configuration: since the first argument is by default the path to the target file system (i.e. the image source directory), user-supplied arguments come afterwards (i.e. refer to them in the post scripts as $2, $3, etc.).
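A complete sketch of such a post-build script, together with a simulated Buildroot invocation, could look like this (the script name follows the NetworkSettings example used in this section; paths are illustrative):

```shell
# write the post-build script (a sketch; names are illustrative)
cat > NetworkSettings.sh <<'EOF'
#!/bin/sh -e
FS_DIR=$1                      # Buildroot passes the target dir as $1
mkdir -p "$FS_DIR/etc/network"
cp interfaces "$FS_DIR/etc/network/interfaces"
EOF
chmod +x NetworkSettings.sh

# simulate Buildroot calling the script with the target directory
printf 'auto eth0\niface eth0 inet dhcp\n' > interfaces
mkdir -p target_fs
./NetworkSettings.sh target_fs
```

After the run, target_fs/etc/network/interfaces contains the prepared configuration, exactly as it would appear in the generated root file system.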
It remains to direct Buildroot to use this script before creating the file system image. This can be done in the configuration file using the following setting:
BR2_ROOTFS_POST_BUILD_SCRIPT="../NetworkSettings.sh"
Here `NetworkSettings' is the name of the script that contains the copying commands. Alternatively, we can set it in the menuconfig window under System configuration > Custom scripts to run before creating filesystem images. If you want to use more than one post script, simply list them space-separated in this field.
Sometimes you need to modify a file that belongs to a package that is going to be installed in the root file system, and waiting until the package is deployed to edit it on the target is inconvenient. Such a situation can be handled by preparing a patch file that contains the changes you are going to apply. First you have to create a patch file with your changes. As an example, we discuss here how to enable an ssh connection for our target system (as ssh server) before deploying the image. In the `sshd_config' file that is located in /etc/ssh, the following parameter should be set:
@@ -29,7 +29,7 @@
 # Authentication:
 
 #LoginGraceTime 2m
-#PermitRootLogin prohibit-password
+PermitRootLogin yes
 #StrictModes yes
 #MaxAuthTries 6
 #MaxSessions 10
This file was created using the `diff' command:
$ diff -u orig_sshd_config sshd_config > sshd.patch
where orig_sshd_config is the unmodified file, obtained during early development phases of the project.
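A minimal reproduction of this flow, with the file contents abbreviated to the changed line, can be verified on the host:

```shell
# original and modified copies of the changed sshd_config line
printf '#PermitRootLogin prohibit-password\n' > orig_sshd_config
printf 'PermitRootLogin yes\n' > sshd_config

# create the patch; diff exits with status 1 when files differ
diff -u orig_sshd_config sshd_config > sshd.patch || true

# verify the patch applies cleanly to a fresh copy of the original
cp orig_sshd_config check_config
patch check_config < sshd.patch
```

This confirms the patch is well-formed before handing it to Buildroot.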
The patch file should be named as filename.patch, as Buildroot uses this extension to detect patches. To direct Buildroot to apply this patch file to our target package, we need to store the file in a directory with the same name as our package; so it should be maintained at openssh/sshd.patch. Note that the first two lines of the patch define where exactly to find the target patched file in the package source directory.
Let's assume now that this openssh directory is located in a directory called external_mod. It remains to direct Buildroot to apply the external patch file. In the Buildroot configuration, the package itself is enabled with

BR2_PACKAGE_OPENSSH=y

and we need to set the following parameter to indicate where to find patch files:

BR2_GLOBAL_PATCH_DIR=/path/to/external_mod

Alternatively, we can use menuconfig in section Build options > global patch directories.
The patch should then be applied. If we have more than one patch to apply to the same package, we have to name the patch files so that they sort in alphabetical order, as they are applied in that order. For quick modifications there is no need to redeploy the whole root file system: only the target package can be rebuilt, and the resulting binary can replace the existing binary on the target.
Some developers may prefer to use their own version of a package included in the build, so that downloading the package is not needed at all. In this case, we can use
<pkg_name>_OVERRIDE_SRCDIR
to direct Buildroot to use our package source; we assign the path to the package source to this variable (Buildroot reads such overrides from a local.mk file). The importance of this mechanism appears when we run make clean: the source stays untouched and can be integrated any time we rebuild the root file system.
There are other build phases that you can check in the Buildroot user manual; for simplicity not all of them are mentioned here. Fortunately, developers can add some control in between these phases using what are called hooks. In Buildroot there are two hooks for each phase, pre and post. Using these hooks, developers can define actions to be executed before or after the phase. To define at which package the hook shall be applied, Buildroot uses the naming scheme

<PACKAGENAME>_<PREorPOST>_<PHASE>_HOOKS

The hook variable can be defined in the package .mk file (each defined package has a packagename.mk file in which we declare information about configuration and build options for the package). Prior to defining the hook variable, we first have to define (also in the .mk file) the actions that shall be executed at the hook point:
define PACKAGENAME_DEVELOPER_ACTION_SET
	echo "User specific configurations"
	cp file1 file2
	# ... other developer's actions
endef
Then we assign these actions to the hook variable, for example:
OURPACKAGE_PRE_CONFIGURE_HOOKS += PACKAGENAME_DEVELOPER_ACTION_SET
Buildroot allows hook variables for the following phases:
PACKAGENAME_PRE_DOWNLOAD_HOOKS
PACKAGENAME_POST_DOWNLOAD_HOOKS
PACKAGENAME_PRE_EXTRACT_HOOKS
PACKAGENAME_POST_EXTRACT_HOOKS
PACKAGENAME_PRE_RSYNC_HOOKS
PACKAGENAME_POST_RSYNC_HOOKS
PACKAGENAME_PRE_PATCH_HOOKS
PACKAGENAME_POST_PATCH_HOOKS
PACKAGENAME_PRE_CONFIGURE_HOOKS
PACKAGENAME_POST_CONFIGURE_HOOKS
PACKAGENAME_PRE_BUILD_HOOKS
PACKAGENAME_POST_BUILD_HOOKS
PACKAGENAME_PRE_INSTALL_IMAGES_HOOKS
PACKAGENAME_POST_INSTALL_IMAGES_HOOKS
PACKAGENAME_PRE_LEGAL_INFO_HOOKS
PACKAGENAME_POST_LEGAL_INFO_HOOKS
A plain tar output can't simply be extracted into the system RAM; more details about the reasons can be found in the kernel documentation (rootfs-initramfs.txt). Once the kernel has mounted the root file system, the init program will enumerate the devices and typically mount other filesystems. The root file system itself may reside in different locations: system RAM, flash or SD storage, or on the Network (NFS).
Prior to getting more details about these different locations, we first have to understand that a Linux system (or any operating system) can, and indeed does, run from the system RAM. In other words, the filesystem images that we get from Buildroot or BusyBox are loaded into the system RAM. This is possible because Buildroot and BusyBox generate a minimal file system with only the needed binaries, so that it can fit into the system RAM. This minimal filesystem is then called an initial RAM disk (initrd); a related RAM-backed type is tmpfs.
This note is important because it shows that a Linux system can successfully run based on this minimized filesystem (initrd or tmpfs). What happens next is that Linux recognizes other attached devices and starts to mount them; the whole filesystem is then extended. The structure can simply be thought of as a tree: the original filesystem (the root filesystem) is already
Chapter 3. Firmware 3.8. Linux Kernel
started in system RAM, and other filesystem trees are connected to it. The attached filesystem shall also be organized within the root filesystem; therefore we define a location in the root filesystem tree. This location is called a mount point, through which the additional filesystem is attached. For example, when you attach a CD-ROM to the system, one directory is created in the root filesystem (usually /mnt/cdrom). This directory becomes the root of the newly attached CD-ROM while at the same time acting as part of the root filesystem. Mounting the CD-ROM does not mean it is loaded into the system RAM; it is only connected through the created mount point (/mnt/cdrom).
In Linux, all attached filesystems can be reviewed using the command df; the same output also shows how much space is occupied by the root filesystem in the RAM.
One more point to mention: since the initrd or tmpfs is located in system RAM, it is lost after the system powers down. Whenever a Linux system starts, it needs to know the location of the root filesystem image. The user can provide the root filesystem image manually (using the tftp methods described in the U-Boot section); for stable deployments it is stored on persistent media. Based on these notes, we can easily understand that a Linux system can start while the root filesystem resides in any of these locations.
The Xilinx kernel tree can be used to compile a Linux kernel for Zynq devices. As a quick overview, the usual kernel build commands can be used to get a Linux kernel for ZynqMP; only a cross compiler is needed. The uncompressed Image target is supported by default for ZynqMP; other development areas refer to the resulting uncompressed image as vmlinux, which is simply an ELF file. The Linux kernel can also be produced as a compressed zImage.
It can also be produced as a uImage, which is the compressed zImage prepended by a U-Boot header: information that lets U-Boot recognize the uImage when booting. To get a uImage, we must have the mkimage utility (obtained from U-Boot in its tools directory); we add the path to mkimage to the PATH environment variable, then run the corresponding make target. Each image type has its own boot command in U-Boot: the uncompressed Image is booted using booti, the zImage using bootz, and the uImage using bootm.
In the same way, the device tree file can be represented as an image, the file system can be compressed into an image, and likewise FPGA bit files, U-Boot scripts, etc. It is useful then to have a way to include all the needed images in one image, so that transferring and loading them becomes a single step. The Flattened Image Tree (FIT) can be used to build such an image. Its approach is similar to the device tree file structure, in which we describe each piece of hardware as a node that contains other sub-hardware nodes. In a FIT image we describe each image as a node and provide the information
needed for each node. As a sample, the following FIT source file is provided in U-Boot:
/dts-v1/;

/ {
    description = "Simple image with single Linux kernel and FDT blob";
    #address-cells = <1>;

    images {
        kernel@1 {
            description = "Vanilla Linux kernel";
            data = /incbin/("./vmlinux.bin.gz");
            type = "kernel";
            arch = "ppc";
            os = "linux";
            compression = "gzip";
            load = <00000000>;
            entry = <00000000>;
            hash@1 {
                algo = "crc32";
            };
            hash@2 {
                algo = "sha1";
            };
        };
        fdt@1 {
            description = "Flattened Device Tree blob";
            data = /incbin/("./target.dtb");
            type = "flat_dt";
            arch = "ppc";
            compression = "none";
            hash@1 {
                algo = "crc32";
            };
            hash@2 {
                algo = "sha1";
            };
        };
    };

    configurations {
        default = "conf@1";
        conf@1 {
            description = "Boot Linux kernel with FDT blob";
            kernel = "kernel@1";
            fdt = "fdt@1";
        };
    };
};
So we can prepare the images we need in such a description, save it as a .its file, create the .itb binary from it, and boot it with the bootm command. Recall from the last section that bootm is used to boot images with a U-Boot header. Note here that the Linux kernel used to create this FIT image shall be a uImage. More details about working with FIT images can be found in the U-Boot source tree documentation.
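The conversion itself uses mkimage's -f switch; a sketch (file names follow the all_images example used in this section):

mkimage -f all_images.its all_images.itb

mkimage compiles the .its description and bundles all referenced binaries into the single .itb image.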
We can load one or more FIT images into the system RAM and then pick a combination, e.g. the first FIT's kernel with the second FIT's device tree, to boot. We can do that when calling bootm:
bootm 20000:kernel@1 25000:filesystem@2 30000:fdt@3
where 20000, 25000 and 30000 are the addresses of the FIT images in system RAM. We can also choose to boot using only a kernel and device tree (for example) as follows:
bootm 20000:kernel@1 - 30000:fdt@3
where the file system slot has been given as - to tell U-Boot to skip it.
It is important to note that when loading all_images.itb, the bootargs environment variable shall be correctly defined; otherwise the kernel will fail to boot, as no arguments have been passed to it. Another important point is to choose the load and entry addresses of the kernel such that it does not overwrite other loaded images. For resolving issues that may face you when starting to work with FIT images, you may need to load an image and extract it manually to debug the behavior. To extract it manually:
imxtract 0x10000000 kernel@1 0x06000000
where 0x10000000 is the address where the FIT image has been loaded and 0x06000000 is the address to which you want to extract the kernel image. The same can be executed for the fdt:
imxtract 0x10000000 fdt@1 0x06e00000
Then, if successful, you can try to boot the extracted images:
bootm 0x06000000 - 0x06e00000
Chapter 3. Firmware 3.9. Boot Image
3.9.3 Tools
Boot images can be generated using Xilinx SDK (GUI) or Bootgen (command line).
A BIF file lists the input files for the boot image, along with optional attributes for addressing and other optional settings. To write this file manually, you can add lines like the following.
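The figure referenced below is not reproduced here; a minimal BIF sketch for ZynqMP (file names are illustrative assumptions) looks like:

the_ROM_image:
{
    [bootloader, destination_cpu=a53-0] fsbl.elf
    [destination_cpu=pmu] pmufw.elf
    [destination_device=pl] design.bit
    [destination_cpu=a53-0, exception_level=el-3, trustzone] bl31.elf
    [destination_cpu=a53-0, exception_level=el-2] u-boot.elf
}

Each bracketed attribute list tells Bootgen where the following partition is to be loaded or executed.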
Note that in Figure 3.31 the paths to the elf files are specified in Windows format (with backslashes); on Linux operating systems these paths shall be modified to use forward slashes.
For more information about the available attributes that can be used within a BIF file, you can type

exec bootgen -bif_help

to get a detailed view.
After completing the BIF file, you can generate the boot image using the bootgen command:

bootgen -image sd_boot.bif -arch zynqmp -w -o C:\BOOT.bin
Chapter 3. Firmware 3.10. Boot Media
In the Output BIF file path field, browse to choose the location of the output files; the Output path field will be updated accordingly. Then, in the Boot image partitions area, click the Add button to add the partition images. Click the Create Image button, and the BOOT.bin file shall be generated in your output directory.
The generated images are loaded into processing system RAM to start operation. These images can be passed (or loaded) in several ways, for example over the network from a server to
a client machine. This method is useful for keeping central booting files that can be deployed to a cluster of FPGAs needing the same files. Each FPGA will need to have its own first booting file stored internally; this first booting file then directs the boot sequence to get the other booting files (Linux kernel, device tree, file system image) from the server rather than from the FPGA itself.
- Kernel, device tree and file system are described in the booting script, including how they are to be fetched and where to be loaded, e.g.

setenv kernel_img_name "Image"
setenv kernel_loadaddr 0x6000000
When booting from SD card is chosen, the BOOT.bin file is loaded first into processing system RAM, then it starts to search for the other boot files (kernel image, device tree, filesystem). The SD card must therefore hold both the booting files and the root file system. Booting files are the files used to boot the system, like BOOT.bin and the Linux Image; the root file system is the directory structure of your Linux system. The best way to arrange this is to partition the SD card so that it has one partition for boot files and another partition for the root file system.
In Linux, partitioning the SD card can be performed using the fdisk utility. The following script pipes the required answers into fdisk (here the card appears as /dev/sdb):
(
echo x        # expert mode
echo h        # set number of heads
echo 255
echo s        # set number of sectors
echo 63
echo c        # number of cylinders: accept default
echo
echo r        # return to main menu
echo n        # new partition
echo p        # primary
echo 1        # partition 1
echo 2048     # first sector
echo +400M    # 400 MB boot partition
echo n        # new partition
echo p        # primary
echo 2        # partition 2
echo          # default first sector
echo          # default last sector (rest of the card)
echo a        # toggle bootable flag
echo 1        # on partition 1
echo t        # change partition type
echo 1
echo c        # type c = W95 FAT32 (LBA)
echo t
echo 2
echo 83       # type 83 = Linux
echo p        # print the table
echo w        # write changes
) | fdisk /dev/sdb
QSPI flash is connected to the FPGA, and the configuration of the platform board is set to boot from it. Best practice when using QSPI is to divide it into several partitions (five or six), allowing each boot file its own region.
These partitions are described in the device tree file used when writing to the QSPI. For example, the default device tree defines four partitions at addresses 0x0, 0x100000, 0x600000 and 0x620000, named (planned) for U-Boot, kernel, device tree and filesystem respectively. The layout can be adapted to the project's needs.
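The referenced device tree listing is not reproduced in this extract; a hedged sketch of such a QSPI partition layout (node labels assumed, sizes derived from the addresses above) could look like:

&qspi {
    flash@0 {
        partition@0 {
            label = "boot";
            reg = <0x0 0x100000>;
        };
        partition@100000 {
            label = "kernel";
            reg = <0x100000 0x500000>;
        };
        partition@600000 {
            label = "device-tree";
            reg = <0x600000 0x20000>;
        };
        partition@620000 {
            label = "rootfs";
            reg = <0x620000 0x9e0000>;
        };
    };
};

Each reg property gives the partition's offset and size within the flash.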
Now, in order to repartition the QSPI, the first step is to specify the system RAM address where the DTB file is stored (this means we need to load our device tree into the RAM first; "our device tree" here means the Linux one, not the default U-Boot device tree). In U-Boot, the fdt addr command specifies this address, and fdt set can then modify node properties. What we have done is change the reg property of the Linux partition nodes to match our layout.
Now we can load the first boot file into the QSPI. Usually an SD card is used for that, through the U-Boot prompt. Writing into the QSPI is done from the system RAM; this is actually the only available way in U-Boot, as it is not possible to write directly from the SD card. The SD card is used only to bring up a working U-Boot. This means that boot files have to be written first into the RAM, and then each one written into its QSPI partition. For loading boot files into system RAM, the usual way is tftp.
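As a sketch of this flow (addresses, offsets and file names are assumptions), the sequence at the U-Boot prompt could look like:

sf probe 0 0 0                          # initialize the QSPI flash
tftpboot 0x3000000 Image                # load the kernel into RAM; sets $filesize
sf erase 0x100000 0x500000              # erase the kernel partition
sf write 0x3000000 0x100000 $filesize   # write from RAM to the partition offset

The erase size must cover the whole partition, while the write uses the exact file size.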
The start address of each partition is denoted in U-Boot as an offset, so to write into the right partition we have to mention its offset in the command. After a tftp transfer, U-Boot stores the size of the fetched file in the filesize variable, so that we can use it to write the exact size into the QSPI memory.
Booting from the QSPI in subsequent operations (i.e. after the QSPI has been written with boot files) can then use the sf read command, for example:

sf read $fdt_addr $fdt_offset $fdt_size
sf read $kernel_addr $kernel_offset $kernel_size

in which we read the defined size from the defined offset of the QSPI into the defined RAM address.
When frequent updates have to be applied, removing and inserting the SD card to update booting files may not be a practical approach. The more convenient approach is to keep the FPGA under development connected to the development machine and transfer newly updated files to replace the old ones.
At this point we have to distinguish between development files. For applications running on the operating system, replacing the files while the system is running is an easy task, as only the compiled binaries of the application are replaced. Booting files, on the other hand, can also be replaced, but the system must be rebooted to boot with the new modified files. What is intended here is to keep the system connected to the development machine and find a way to update any of the booting files (like the kernel image, file system image or device tree). This can be done by rebooting the system normally but halting it at the U-Boot stage; at this stage we can load modified files directly into the RAM using tftp.
U-Boot can take its loading instructions from an external file; this external file is just a script written for U-Boot, as described earlier. Overwriting the old files can't be performed directly. Instead, the modified boot file is stored temporarily in the system RAM, and then we replace and overwrite the old boot file from that temporary location.
The bitstream can be loaded into the processor RAM, which will program the PL part accordingly. The normal or traditional way is to build the bitstream file into the boot file while creating it. The boot file is created using Xilinx tools (the bootgen utility, for instance), as described in the previous section (Boot Image Generation), in which we list the bitstream file in the BIF file so it is included while creating the BOOT.bin file. The bitstream file is then loaded into the FPGA at boot time.
Another way to load the bitstream file is through the Linux operating system already running on the processing system of the ZynqMP. Xilinx provides a driver that handles programming of the PL part; this driver can be accessed through the sysfs of Linux. An initial step shall be performed first, which is to convert the bitstream file (the .bit file) into a binary file (.bin file). Again the Xilinx bootgen tool can be used for that, as follows:
bootgen -image bitstream.bif -arch zynqmp -process_bitstream bin
where bitstream.bif is a BIF file which shall contain the following:
all:
{
    [destination_device=pl] bitstream.bit
}
Executing bootgen will then result in a bitstream.bit.bin file. After this step, the generated bin file shall be stored exactly in a path like the following:

/lib/firmware/bitstream.bit.bin

The Xilinx driver uses this path to identify the target bitstream files. The last step is to trigger the programming through sysfs.
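A hedged sketch of that last step on the target (device paths follow the common Linux fpga_manager sysfs layout; the flag value is an assumption for a full bitstream):

echo 0 > /sys/class/fpga_manager/fpga0/flags
echo bitstream.bit.bin > /sys/class/fpga_manager/fpga0/firmware
cat /sys/class/fpga_manager/fpga0/state    # check the programming state

Writing the file name to the firmware attribute makes the driver look up the file under /lib/firmware and program the PL.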
Another possible way of loading the bitstream file is to use U-Boot. U-Boot has an fpga command that can be used to program the attached PL; in this case the bitstream file also has to be converted into a binary file, as described previously. The command takes the load address and size of the binary in RAM, e.g. fpga load 0 <addr> <size>.
Chapter 4
Software
Controlling the processing system and its attached devices is achieved mainly through operating systems or bare-metal applications. In both cases, the developer has to define some firmware that tells the running software how to work with and control the attached hardware device. This firmware is what we call a driver for the device, and it is what we need to build in the path between the hardware interface and the end user interface.
This path starts in user space, where we need a simple and easy-to-use program or application to work with. This application communicates with device drivers that reside in kernel space; the kernel layer then provides the mechanisms and firmware drivers for the hardware. The end of this path is the hardware interface, which is seen by the kernel through the device tree file.
Since hardware devices differ in functionality, properties and attributes, each driver shall be tailored to its device. Device properties shall be communicated correctly between hardware developers and driver developers; in FPGA development it can happen that the same developer needs to build both the hardware logic and its driver. The following guide may help to clarify how to build a driver in the Linux kernel.
We can start with a simple `hello world' example to get an idea of how a kernel module is built. The Xilinx kernel source can be downloaded by

$ git clone https://github.com/Xilinx/linux-xlnx.git

After downloading this kernel and building it, make sure that it can successfully run on the target.
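The module listing is only partially preserved in this extract; a minimal sketch of the functions that the two lines below refer to (message text is assumed) is:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
        printk(KERN_INFO "hello world\n");
        return 0;
}

static void __exit hello_exit(void)
{
        printk(KERN_INFO "goodbye world\n");
}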
Chapter 4. Software 4.1. Linux Drivers
module_init(hello_init);
module_exit(hello_exit);
The init and exit functions are registered by module_init & module_exit respectively. These macros cause the functions to be invoked automatically when the module is loaded into, and removed from, the Linux kernel; they are defined in the module.h header.
When building, take into account that this module is built for the Linux running on the Xilinx Zynq chip, so the kernel source of this Linux has to be pointed to. The following Makefile can be used to build the module:
obj-m := hello_printk.o
PWD := $(shell pwd)

all:
	$(MAKE) -C $(KERNEL) M=$(PWD) modules
clean:
	$(MAKE) -C $(KERNEL) M=$(PWD) clean
When running this Makefile, it should be executed as

$ make ARCH=arm64 KERNEL=/path/to/built/kernel CROSS_COMPILE=/path/to/CrossCompiler
The following shell script can easily be used to do everything:

#!/bin/sh -e
export Current_Dir=$PWD
export CROSS_COMPILE=/path/to/CRS_COMP/aarch64-linux/bin/aarch64-linux-gnu-
Chapter 4. Software 4.2. Linux Drivers Chain
cd $Mod_Dir
make ARCH=arm64
cd $Current_Dir
To install the module, use the insmod command; to remove it, use rmmod; to see the list of loaded modules, use lsmod. If the printk output is not displayed, try the dmesg | grep world command.
While this example has printed a simple message for the user, it is not connected to any hardware device yet. We still need to clarify the case of a module interacting with attached hardware devices. In this case there is a chain of libraries and functions used to create the module. To organize and represent them, we will represent the kernel module as a chain of devices and drivers, as shown in figure 4.2, and then walk through each link.
This figure is an attempt to simplify the basic steps needed to build a Linux driver. In fact, there are no references or resources mentioning a `Kernel Device' or a `User Device', for instance, but there is what is called a `Platform Device'; the author uses this illustration only to simplify the explanation.
What this figure indicates is that there are a user space and a kernel space in a Linux system. As its name reflects, user space is the place where the user can access his own functions; the user is not allowed to access functions defined in kernel space. There are some specific functions that can be used to handle interactions between the two spaces.
Figure 4.2 also indicates that a Linux driver is built as a chain of entities. Each entity is bound to the next with a keyword or attribute, so the user application can access the hardware device through this chain using the specified keyword between each pair of entities.
The kernel uses the device tree file to get information about the attached hardware devices, like register addresses, bus widths and other parameters. One of these parameters is the compatible string; the driver developer uses this string to define a platform device. The Linux kernel provides the of_device_id structure, which can be used to store data about the hardware device. It is defined as follows:
struct of_device_id {
        char name[32];
        char type[32];
        char compatible[128];
#ifdef __KERNEL__
        void *data;
#else
        kernel_ulong_t data;
#endif
};
This structure is designed to contain strings and data about the device. Usually, most developers use only the compatible parameter, matching it with the compatible parameter defined in the device tree file; both compatible parameters shall have the same value, otherwise the kernel will fail to match them. The structure is typically instantiated as an array. This is the usual approach for defining a platform device, as it allows the developer to link as many hardware devices as he wants to this platform device. Eventually, this structure shall be terminated with a null member {}.
Once the platform device is defined, it needs a platform driver to drive it. Using linux/platform_device.h, we can build this driver with the platform_driver structure, in which we define what shall be executed when the device is detected (the probe function), what shall be executed when the device is removed (the remove function), and some other details about the driver.
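The listing this refers to is not reproduced in this extract. A hedged sketch, assuming a compatible string and using the userA_driver name from the text:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* must match the compatible property of the node in the device tree */
static const struct of_device_id usera_of_match[] = {
        { .compatible = "vendor,usera-device" },   /* assumed string */
        { /* null terminator */ }
};
MODULE_DEVICE_TABLE(of, usera_of_match);

static int usera_probe(struct platform_device *pdev)
{
        dev_info(&pdev->dev, "device detected\n");
        return 0;
}

static int usera_remove(struct platform_device *pdev)
{
        dev_info(&pdev->dev, "device removed\n");
        return 0;
}

static struct platform_driver userA_driver = {
        .probe  = usera_probe,
        .remove = usera_remove,
        .driver = {
                .name           = "userA_driver",
                .of_match_table = usera_of_match, /* binds driver to device */
        },
};
module_platform_driver(userA_driver);
MODULE_LICENSE("GPL");

module_platform_driver expands to the init/exit boilerplate that calls platform_driver_register, mentioned below.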
The name of the platform driver in this example is userA_driver. The name itself can be chosen freely, but it is important to set the value of of_match_table to the match table of the platform device, as this is how the platform driver is bound to the platform device. Platform drivers need to be registered after they have been defined; this can be done using platform_driver_register.
Moreover, the device needs to be registered in the kernel; this is like giving the device an existence in the kernel. This registration can be done using several functions and at different steps. For instance, the device registration can be called immediately when the device is detected (in the probe function) or at module load (in the init function). This is because the registration step is actually independent of the device; you can even register a device that doesn't physically exist, as long as you perform valid operations on the registered device. In Linux, devices are classified as block devices or character devices: block devices are devices built for storage purposes, while character devices handle IO streaming. Therefore there are several functions to register a device according to its type; for character devices, for instance, there are alloc_chrdev_region and register_chrdev.
discussed soon. In this example it is chosen 89. Second argument is a name for the device.
Developer is free to choose any name since it is not bound to other objects in the module.
Third argument is a name of a structure used for building the kernel driver. Upon success,
Kernel Driver here means a set of functions used to perform necessary operations on
the device. This set of functions is defined using the structure type file_operations, which
is defined in linux/fs.h. For example, the following structure defines the basic operations
that can be performed on the device. Linux handles devices as files, so using this structure
we can tell the kernel what to do when it opens, reads, writes or closes the device. The
developer has to define these functions and describe what shall happen in each of them.
The needed settings and configuration in kernel space will be ready if the previous entities
were defined correctly. It remains to make some preparations in user space: the device has
to be instantiated in user space as well. This is important as the final user space
application should be in contact with the device. Devices in the /dev directory are built
for that purpose.
We can instantiate our device in user space using the mknod command if it has not been
created already. If the device has not been instantiated, display the file /proc/devices
and find the Major Number that you used while registering the kernel device. Beside this
number you can find the name of the device as defined in the kernel driver (let's assume
it is Dev_Name). The device node can then be created with mknod /dev/Dev_Name c 89 0,
where 89 is the Major Number assigned to the device in kernel space and 0 is the Minor
Number of the device.
The Major Number links the device node to its driver, as shown in the previous declaration.
If the driver is managing different devices, there is what is called the Minor Number, which
is used to differentiate between those different devices. Now comes the point of how to pick
a Major Number to assign to the registered device. One possibility is to register the device
dynamically, which means letting the kernel allocate this number based on the available
numbers. In this way we avoid a conflict with a Major Number already in use. Dynamic
allocation is performed when registering the device with Major Number 0, for example as
follows:
regdev_ret = register_chrdev(0, "Dev_Name", &gpio_fops);
The return value of this dynamic allocation will be the assigned Major Number. If you
want to unregister the device in the driver code, you can use the generated number:
unregister_chrdev(regdev_ret, "Dev_Name");
If you used dynamic allocation, you can't then search for the registered device in the file
/proc/devices by its Major Number, as you don't know which number the kernel has generated.
In this case you can search for it using the defined name (Dev_Name for instance). You can
use a script like the following to get this number:
#!/bin/sh -e
MajorNum=$(awk '$2 ~ /'"$ModuleName"'/ { print $1 }' /proc/devices)
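The lookup can be wrapped into a small helper; the optional second argument lets the same logic run against any file (useful for testing off-target), while Dev_Name and the mknod line reuse the example names from above:

```shell
#!/bin/sh -e
# Print the Major Number registered under a given device name.
# Reads /proc/devices by default; an optional file argument overrides it.
major_of() {
    awk -v name="$1" '$2 == name { print $1 }' "${2:-/proc/devices}"
}

# Typical use on the target:
#   MajorNum=$(major_of Dev_Name)
#   mknod /dev/Dev_Name c "$MajorNum" 0
```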
The User Driver handles user commands and transfers them through the kernel driver to
kernel space, where they are finally executed through the platform driver on the platform
device. The user has no access to kernel space and its functions; therefore there are
special functions that can receive user commands and transfer them to kernel space. To use
these functions, we first have to be aware that the user can handle the instantiated device
in Linux as a file. There is a direct mapping in Linux between file operations in user space
and the struct file_operations defined in the Kernel Driver. This means that if the user
opens the device file in user space, the open function of the struct file_operations will
be called. Closing the file, the release function will be called. Writing to the file, the
write function will be called. Reading from the file, the read function will be called.
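This mapping can be exercised with a few lines of user-space C. As a sketch: /dev/Dev_Name is the example node created with mknod earlier, and the function takes the path as a parameter so it works on any file:

```c
/* User-space side of the mapping: each libc call below triggers the
   corresponding file_operations callback in the kernel driver. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Write a string to a device file; returns bytes written, or -1 on error. */
long write_value(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");                /* kernel side: .open runs    */
    if (!f)
        return -1;
    size_t n = fwrite(val, 1, strlen(val), f); /* kernel side: .write runs   */
    fclose(f);                                 /* kernel side: .release runs */
    return (long)n;
}
```

On the target this becomes write_value("/dev/Dev_Name", "1"); the kernel's write callback then receives the buffer and its length.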
In user space, to build the driver we then use the standard fopen, fwrite, fread and fclose
functions. On the kernel side, the write callback receives the file structure as its first
argument; the second argument is a pointer to the string to be written to the file, the
third argument is how many bytes shall be taken from the written string, and the fourth
argument is the offset at which to start taking the bytes. The developer of the kernel
driver has to define a write function that tells the kernel what to do when user space
writes to the file. To clarify this, assume the following write function:
static ssize_t gpio_set(struct file *filep, const char *buf,
                        size_t count, loff_t *f_pos)
{
    printk("GPIO Kernel Driver: User has written some value\n");
    return count;
}
In the Kernel Driver, the struct file_operations shall also be defined, in something like
the following:
struct file_operations gpio_fops = {
    .read    = gpio_get,
    .write   = gpio_set,
    .open    = gpio_open,
    .release = gpio_close
};
As mentioned before, the user has no access to kernel space, and we have seen the mapping
between file operations in user space and kernel space. It remains to understand how to get
the value that the user has written, or how to send the user a value recorded by the device.
There are special functions that transfer the needed values between user space and kernel
space. These functions are defined in asm/uaccess.h; among them you find copy_from_user and
copy_to_user. Using these functions, we can transfer the content of the device file to/from
user space.
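As a sketch (not the book's exact listing) of how the write callback above could pull the user's bytes into kernel space with copy_from_user; the 16-byte buffer is an arbitrary illustrative choice:

```c
static ssize_t gpio_set(struct file *filep, const char *buf,
                        size_t count, loff_t *f_pos)
{
    char kbuf[16];
    size_t len = count < sizeof(kbuf) - 1 ? count : sizeof(kbuf) - 1;

    /* copy the user-space buffer into kernel space */
    if (copy_from_user(kbuf, buf, len))
        return -EFAULT;
    kbuf[len] = '\0';
    printk("GPIO Kernel Driver: received %s\n", kbuf);
    return count;  /* report all bytes as consumed */
}
```

copy_to_user works the same way in the read callback, in the opposite direction.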
Chapter 4. Software 4.3. Driver Integration
Knowing how to include this driver in the final system is the next step that we have to
learn. To integrate your driver with the running Linux, there are two approaches: either
integrating it with the Linux Kernel source files, or integrating it with the filesystem
as a Buildroot package. For the first approach, all we need is to prepare our driver along
with its Makefile and Kconfig file, then indicate in the Makefile and Kconfig files of the
drivers directory that our driver shall be included during the kernel compilation process.
The Makefile of the driver shall contain the following line:
obj-$(CONFIG_OUR_GPIO) += <driver_file_name>.o
The Kconfig of the driver shall contain the following:
config OUR_GPIO
    tristate "Our driver for GPIO peripheral"
    help
      My driver module.
Tristate means that the configuration field can be marked as y (yes), n (no) or m
(modularize). Choosing n means the driver will not be compiled, y means it will be compiled
into the kernel, and m means it will be compiled as a separate module, i.e. not included in
the final kernel Image. As an alternative, we can define the configuration field as boolean,
which takes only the two values y and n:
bool "enable our GPIO driver"
default y
We can also define a default value for the configuration, as declared above. These three
files shall be placed in their own subdirectory under the kernel's drivers directory. Next,
in the Makefile of the drivers directory, we need to mention the new driver by adding the
following line at the end of the file:
obj-$(CONFIG_OUR_GPIO) += our_driver/
Then open <Linux_Source_dir>/drivers/Kconfig and add the following line before endmenu:
source "drivers/our_driver/Kconfig"
The driver is now ready for compilation. Before starting the main kernel compilation, you
can choose whether to include this driver with the kernel. This can be performed using
make menuconfig or the default configuration file (***_defconfig). Note that if the driver
configuration is defined as tristate, it can also be selected as m here; building and
loading it as a separate module is not discussed here.
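In the defconfig this choice appears as a single line, using the symbol declared in the Kconfig entry above:

```
CONFIG_OUR_GPIO=y
```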
Sysfs represents attached hardware devices as files for the user in order to control them.
Sysfs can be considered as a file system that is mounted automatically (in most Linux
systems) by the kernel. It represents the attached devices in the form of files, which means
it exports these devices from their binary representation in the kernel into the user
interface, in order to view and manipulate them. Figure 4.4 shows the directory structure
of sysfs. At the top level of the sysfs mount point are a number of directories. These
directories represent the major subsystems that are registered with sysfs and are created
at system startup.
Chapter 4. Software 4.4. GPIO Driver
The block directory contains subdirectories for each block device that has been discovered
in the system. The bus directory contains subdirectories for each physical bus type that
has support registered in the kernel (either statically compiled or loaded via a module).
The class directory contains representations of every device class that is registered with
the kernel. A device class describes a functional type of device. For example, figure 4.5
shows the content of the class directory; you can find that all registered devices are
gathered here according to their function (block, gpio, graphics, power_supply, tty, etc).
The contents of this directory are symbolic links to the corresponding locations in the
devices directory.
The Devices directory contains the global device hierarchy. This contains every physical
device that has been discovered by the bus types registered with the kernel. There are two
types of devices that are exceptions to this representation: platform devices and system
devices. Platform devices are peripheral devices that are inherent to a particular platform.
They usually have some I/O ports, or MMIO, that exist at a known, fixed location.
Examples of platform devices are GPIO controllers, Serial controllers. System devices are
non-peripheral devices that are integral components of the system. In many ways, they are
nothing like any other device. Examples of system devices are CPUs, APICs, and timers.
To use sysfs for dealing with GPIO, the gpio device shall be exported, i.e. represented to
the user interface in the form of a file. In the /sys/class/gpio directory you can find
hardware nodes of the devices registered in the device tree source file, as shown in
figure 4.6.
Using the `export' file we can export the hardware into the file system as follows. First
we have to know the base number assigned in the kernel to our target GPIO chip:
$ cat gpiochip499/base
In our example the result will be 499. Then we need to export this device:
$ echo 499 > /sys/class/gpio/export
You can find then that the gpio device has been exported into the file system.
Note that you can use the unexport file to revert the export of this device. Finally, what
we need is to set the direction of the pin (pin 499) to either out or in via the direction
file, then write the value we need into the value file, as shown in figure 4.8.
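The steps above can be collected into one small helper. The sysfs root is a parameter so the sequence is visible at a glance and easy to try against a plain directory; on a real board it is /sys/class/gpio, and 499 is our example pin:

```shell
#!/bin/sh -e
# Sysfs sequence for driving a pin: export it, set direction, write value.
gpio_out() {
    root=$1 pin=$2 val=$3
    echo "$pin" > "$root/export"
    echo out > "$root/gpio$pin/direction"
    echo "$val" > "$root/gpio$pin/value"
}

# On the target:
#   gpio_out /sys/class/gpio 499 1
```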
Note that in the previous steps we exported pin no. 499 of the device gpiochip499. In order
to know how many pins this device has, we can check the chip's ngpio attribute:
$ cat /sys/class/gpio/gpiochip499/ngpio
In our case here, the device has 8 pins; the first pin has no. 499. If you need to control
the fourth pin for instance, then you have to export that pin (no. 502) and use the same
sequence as before. To read a value from the pin, the same sequence can be used but with
direction set to in.
The kernel also provides the GPIOLIB library for controlling the GPIO pins. To use this
library, we have to know first that it is designed to be used in kernel space; that is
because it needs some information about the GPIO device that it is going to communicate
with. This information is taken from the definition of the GPIO node in the device tree
file. So to work with this library, the device tree file shall be configured to provide
the needed information for GPIOLIB. This library has a sufficient set of functions; what
we can discuss here is its usage in basic steps.
In the device tree, the group of gpio pins is declared with a reference to the device node
axi_gpio_1; this means that the main device axi_gpio_1 shall also be defined in the device
tree file. This group of gpio pins will be named org, so we have to stick to this format
when defining the name of the gpio pins: gpio_name-gpios (here org-gpios).
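A device tree fragment following this convention could look as shown below; the node name, the vendor,userA compatible string and the pin indices are illustrative assumptions, while axi_gpio_1 and the org prefix come from the text:

```
our_device {
    compatible = "vendor,userA";
    org-gpios = <&axi_gpio_1 0 0>, <&axi_gpio_1 1 0>;
};
```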
A single pin is obtained with the gpiod_get function; in case of a group of pins we can use
gpiod_get_array, as in the following probe function:
static struct gpio_descs *our_gpio;

static int gpio_probe(struct platform_device *pdev)
{
    int gpio_width;
    struct device *dev = &pdev->dev;

    printk(KERN_ALERT "GPIO: Starting Module\n");
    our_gpio = gpiod_get_array(dev, "org", GPIOD_OUT_LOW);
    if (IS_ERR(our_gpio))
        return PTR_ERR(our_gpio);
    gpio_width = our_gpio->ndescs;

    return 0;
}
The generic User IO (UIO) driver lets user space access the assigned memory address of a
peripheral; according to the data written to the peripheral registers, the peripheral
reacts. The advantage is that the end user doesn't have to know the address of the
peripheral to write into it; instead, he can write to a dedicated file to pass data to the
peripheral. For each peripheral using this driver, the generic UIO driver exports a file to
user space so that this file can be used for passing the needed data. The following lines
discuss how to use this generic User IO driver with a hardware peripheral.
First, the UIO driver shall be bound to the peripheral in the device tree file. As a
convention, the string generic-uio shall be used as the compatible property of the
peripheral node. Then UIO support shall be enabled in the kernel:
"Device Drivers" --->
  - "Userspace I/O drivers" --->
    - <*> Userspace I/O platform driver with generic IRQ handling
    - <*> Userspace I/O OF driver with generic IRQ handling
The string generic-uio used for the property compatible in the device tree is not known to
the kernel driver by default, so we need to configure the UIO driver to use this string.
This step has to be done before booting the kernel, so we can pass it to the kernel as part
of the bootargs.
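The usual way is the of_id module parameter of the generic platform UIO driver, appended to bootargs; shown here as a U-Boot command, to be adapted to your boot flow:

```
setenv bootargs "${bootargs} uio_pdrv_genirq.of_id=generic-uio"
```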
When the system boots, the UIO device is represented in the filesystem as "/dev/uioN",
where N is an incrementing integer value for each separate UIO device. To know which uio
node belongs to which peripheral, the name attribute under /sys/class/uio/uioN can be
checked. Alternatively, we can use the 'lsuio' utility to list all the devices that are
bound to the generic UIO driver:
$ lsuio
To read and write the peripheral registers through this file, a tool can be used called
'memtool'.
## To read
memtool md -s /dev/uio$N <offsetAddress>
## To write
memtool mw -d /dev/uio$N <offsetAddress> <data>
Note again that $N represents the uio number assigned to the device. Another tool,
'devmem', can also be used with similar syntax.
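The same register access can also be done from your own C code instead of memtool: open the uio node, mmap it, and poke a 32-bit word. This is a sketch; /dev/uio0 and the 4 KiB length are examples, and on a real system the map length comes from /sys/class/uio/uio0/maps/map0/size:

```c
/* Map a UIO device file and write a 32-bit register inside the mapping. */
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Write val at byte offset off inside the mapping of path; 0 on success. */
int poke32(const char *path, size_t len, size_t off, uint32_t val)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;
    uint8_t *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
    close(fd);                                 /* mapping stays valid */
    if (base == (void *)MAP_FAILED)
        return -1;
    *(volatile uint32_t *)(base + off) = val;  /* register write */
    munmap(base, len);
    return 0;
}
```

For example poke32("/dev/uio0", 4096, 0x8, 0x1) would set a hypothetical control bit; mmap works the same way on a plain file, which is what makes the routine easy to try off-target.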
Bibliography
[1] Xilinx Vivado and SDK Programm User Interface
[2] https://elinux.org/Device_Tree_Usage
[3] https://www.kernel.org/doc/Documentation/devicetree/bindings/xilinx.txt
[4] http://www.wiki.xilinx.com/Arm+Trusted+Firmware
[5] https://emreboy.wordpress.com/2012/12/20/building-a-root-file-system-using-busybox/
[6] http://xilinx.wikidot.com/zynq-rootfs
[7] http://www.etalabs.net/compare_libcs.html
[8] https://mirrors.edge.kernel.org/pub/linux/libs/uclibc/Glibc_vs_uClibc_Differences.txt
[9] https://elinux.org/images/f/f4/Elc2013_Fernandes.pdf
[10] http://www.wiki.xilinx.com/U-Boot+Flattened+Device+Tree
[11] http://www.wiki.xilinx.com/Solution+ZynqMP+PL+Programming
[12] https://www.kernel.org/pub/linux/kernel/people/mochel/doc/papers/ols-2005/mochel.pdf
[13] https://www.kernel.org/doc/Documentation/gpio/