Introducing Adaptive System-on-Modules (SOMs)
CHAPTER 1: EXECUTIVE SUMMARY
CHAPTER 2: AN INTRODUCTION TO SYSTEM-ON-MODULES (SOMS)
CHAPTER 3: THE CHALLENGES OF EDGE PROCESSING
CHAPTER 4: INTRODUCING ADAPTIVE COMPUTING
CHAPTER 5: THE ADAPTIVE SYSTEM-ON-MODULE (SOM)
CHAPTER 6: ADAPTIVE SOM BENEFITS FOR HARDWARE DEVELOPERS
CHAPTER 7: ADAPTIVE SOM BENEFITS FOR SOFTWARE DEVELOPERS
CHAPTER 8: WHAT TO LOOK FOR WHEN SELECTING AN EDGE SOLUTION
CHAPTER 9: SUMMARY
CHAPTER 1
Executive Summary
AI-enabled applications are increasingly being deployed at the edge and endpoint. A plethora of applications are being developed that use the latest AI techniques across a range of industries and geographic locations.

Cities are becoming smarter with automated vision applications that manage safety and alert emergency services when help is needed. Industrial IoT applications increasingly require high-performance AI inference processing as factories become smarter and more automated. Even the retail experience is beginning to change with smart retail bringing automated shopping experiences not thought possible a decade ago. All these applications must operate with utmost reliability, in harsh conditions, for 10 years or more. The applications require high performance, yet must be delivered in an efficient, reliable, and compact form factor.
Figure 1. AI edge global chipset revenue is forecasted to exceed $50 billion by 2025. (Source: Omdia)
CHAPTER 2
An Introduction to System-on-Modules (SOMs)

SOM USE CASE: 3D PRINTING ARM CONTROL
The 3D printing space is rapidly evolving and designers are looking to improve both speed and quality of the process using precise, deterministic control over a scalable number of axes of motion.
When building an AI-enabled edge application there are numerous options available. In many industries, it's common for a hardware design team to perform a "chip-down" development, where specific silicon devices are chosen, and a fully customized circuit board is developed for the application. While this does produce a highly optimized implementation, it can take significant development time and cost to reach production readiness.

To save the expense and time of a chip-down development, design teams may consider utilizing a more integrated solution such as a Multi-Chip Module (MCM), System-in-Package (SIP), Single-Board Computer (SBC), or System-on-Module (SOM).
Figure 2. Levels of integration: Multi-Chip Module (MCM), System-in-Package (SIP), Single-Board Computer (SBC), and System-on-Module (SOM)
CHAPTER 3
The Challenges of Edge Processing
Edge computing is typically limited by power consumption, footprint, and cost. As processing demands increase, the challenge of providing the performance level required, within the limitations of edge processing, has increased exponentially.

To implement advanced AI applications at the edge, a domain-specific architecture (DSA) is needed. DSAs provide a highly optimized implementation of an application vs. an unaccelerated CPU. They also provide determinism and low latency.
Edge AI applications combine AI inference with non-AI pre- and post-processing functions, all of which have increasingly higher performance requirements. A DSA-based approach is required to implement efficient AI-enabled applications at the edge (and elsewhere). Fixed silicon devices can implement DSAs for these applications; however, like any fixed silicon solution, they have limitations. Primarily, the pace of AI innovation is incredibly rapid, rendering AI models obsolete much quicker than non-AI technologies. Fixed silicon devices that implement AI can quickly become obsolete due to the emergence of newer, more-efficient AI models. It can take several years to tape out a fixed silicon device, by which time the state of the art in AI models will have advanced significantly.
Figure 3. AI models evolve rapidly – much faster than silicon development cycles
CHAPTER 4
Introducing Adaptive Computing

SOM USE CASE: ROBOTIC CONTROL IN FACTORIES
A true industrial solution supports extended temperatures, harsh environments, and long lifecycles in order to be trusted in applications where downtime is impermissible.
One of the most promising technologies for AI-enabled edge applications is adaptive computing. Adaptive computing encompasses hardware that can be highly optimized for specific applications, such as Field Programmable Gate Arrays (FPGAs). In addition to FPGAs, new types of adaptive hardware have recently been introduced, including the adaptive System-on-Chip (SoC), which contains FPGA fabric coupled with one or more embedded CPU subsystems.

Adaptive computing is more than just hardware, however. It also encompasses a comprehensive set of design and runtime software that, when combined, delivers a unique adaptive platform from which highly flexible, yet efficient systems can be built.
CHAPTER 5
The Adaptive System-on-Module (SOM)

SOM USE CASE: ACCESS CONTROL
AI-powered edge applications like access control require a high-performance, low-latency implementation, but must remain within limited power and footprint requirements.
As we have seen, SOMs provide a good platform for edge applications. However, to achieve the performance required by modern AI-enabled applications, acceleration is needed. Adaptive computing provides the acceleration required for AI applications at the edge.

Typical application development teams are composed of several types of hardware engineers and software engineers. Circuit board designers are responsible for designing custom boards for the required application. In cases where adaptive computing (including FPGAs) is used, RTL designers have traditionally been responsible for configuring the adaptive devices using Hardware Description Languages (HDLs) such as Verilog and VHDL. And software developers write code running on embedded CPUs using languages such as C++ and frameworks such as OpenCV, which is commonly used for embedded vision applications.
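To make the software developer's role concrete, the following minimal C++ sketch shows the kind of OpenCV-based pre-processing that might run on the embedded CPUs of such a system. It simply captures one frame and prepares it for an AI model; the camera index and the 224x224 input size are illustrative assumptions, not requirements of any particular platform.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Open the default camera (index 0 is an assumption; an embedded board
    // may expose its sensor through a different device or capture pipeline).
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) {
        std::cerr << "Unable to open camera\n";
        return 1;
    }

    cv::Mat frame, resized;
    if (!cap.read(frame)) {
        std::cerr << "Unable to read a frame\n";
        return 1;
    }

    // Typical pre-processing before AI inference: resize to the model's
    // expected input size (224x224 is assumed here) and convert BGR to RGB.
    cv::resize(frame, resized, cv::Size(224, 224));
    cv::cvtColor(resized, resized, cv::COLOR_BGR2RGB);

    std::cout << "Pre-processed frame: " << resized.cols << "x" << resized.rows << "\n";
    return 0;
}

On an adaptive SOM, call sites like these could later be redirected to hardware-accelerated equivalents without changing the surrounding application logic.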
CHAPTER 6
Adaptive SOM Benefits for Hardware Developers
Typically, hardware developers build AI-enabled edge hardware by selecting their main silicon devices, then developing an edge form factor board to accommodate the chosen devices. As discussed, this chip-down development can be a costly and complex process with a long development cycle.

First, developers need to evaluate and select a device, build a prototype, and then prove the architecture works sufficiently well with the required software and AI models they plan to deploy. The evaluation process alone can take months. Once evaluation is completed, the production board needs to be developed, integrated, and manufactured before it can be deployed.
Figure 5. SOMs can help accelerate time to market and reduce development cost
CHAPTER 7
Adaptive SOM Benefits for Software Developers

SOM USE CASE: SURGICAL ROBOTS
Industrial control, communications, machine vision, machine learning, human-machine interfaces, cybersecurity, and safety are key technology considerations.
The advantages of adaptive SOMs are not just limited to hardware developers. Software developers can accelerate their design cycles as well by using pre-built configurations for the underlying adaptive SoCs.

Recent advancements in software tools, libraries, and frameworks can enable some design teams to use adaptive computing without burdening hardware engineers. The available comprehensive software platforms allow software engineers to utilize the capability of the entire adaptive SoC without needing specific hardware customization.
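As a hedged illustration of what "using a pre-built configuration" can look like from the software side, the sketch below uses the Xilinx Runtime (XRT) native C++ API to load a pre-built programmable-logic image and invoke an accelerator from ordinary host code. The whitepaper does not prescribe this flow, and the file name accel.xclbin, the kernel name preprocess, and the argument layout are hypothetical.

#include <xrt/xrt_device.h>
#include <xrt/xrt_kernel.h>
#include <xrt/xrt_bo.h>
#include <algorithm>

int main() {
    // Open the first device and program it with a pre-built accelerator image.
    // "accel.xclbin" and the kernel name "preprocess" are placeholders only.
    xrt::device device(0);
    auto uuid = device.load_xclbin("accel.xclbin");
    xrt::kernel kernel(device, uuid, "preprocess");

    constexpr size_t count = 1024;
    constexpr size_t bytes = count * sizeof(int);

    // Allocate buffers in the memory banks connected to the kernel's
    // first two arguments.
    xrt::bo input(device, bytes, kernel.group_id(0));
    xrt::bo output(device, bytes, kernel.group_id(1));

    // Fill the input buffer on the host, then flush it to the device.
    auto in_map = input.map<int*>();
    std::fill(in_map, in_map + count, 1);
    input.sync(XCL_BO_SYNC_BO_TO_DEVICE);

    // Launch the accelerator and wait for it to finish.
    auto run = kernel(input, output, static_cast<int>(count));
    run.wait();

    // Bring the results back to host-visible memory.
    output.sync(XCL_BO_SYNC_BO_FROM_DEVICE);
    auto out_map = output.map<int*>();  // results are now readable here
    (void)out_map;
    return 0;
}

The point of the sketch is that no HDL appears anywhere: the hardware configuration is consumed as a pre-built artifact, and the software developer works entirely in C++.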
CHAPTER 8
What to Look for When Selecting an Edge Solution
When evaluating a solution for edge applications, several factors should be considered:

Development Time and Cost
Successful design teams will look for a turnkey solution that accelerates development time and enables fast prototyping. More than just hardware alone, the solution should consist of an operating system, a board support package, reference designs, and open, system-oriented documentation with easy explanation and guidance. Design cycles can further be sped up using pre-built starter kits that allow developers to immediately start application development with a direct path to production using the same hardware and software. Reduced development costs especially help small and medium-sized companies benefit from adaptive computing.
Ecosystem
The solution should feature a sizable ecosystem, offering a variety of services and solutions, including complementary hardware, software, and tools that are pre-integrated with the chosen AI platform or alternative AI platforms. And the solution should include accelerated application libraries to give software developers the same jumpstart on design that reference designs do for hardware developers.

"We've been recently seeing a lot of interest in SOMs for ML and AI applications. Many customers do not want to get involved in the specifics of complex designs. SOMs allow them to get to market fast and save development costs."
– Immanuel Rathinam, Associate Director and Head of R&D and Business Operations; iWave Systems Technologies Pvt Ltd.

Adaptive SOMs bring the time-to-market advantage of a SOM-based solution, with the application optimization that can only be achieved with adaptive computing. This makes them an ideal approach for implementing AI-enabled edge applications.
CHAPTER 9
Summary
Complex, AI-enabled workloads are increasingly moving to the edge. These applications require a large amount of processing to be performed with low latency, low power consumption, and in a small footprint. To achieve this, the whole application (both the AI and non-AI functions) must be accelerated.

As AI models rapidly evolve, the acceleration platform must also be adaptable. This allows optimal implementation of not just today's AI techniques, but tomorrow's as well.

SOMs provide an ideal edge processing platform. When coupled with adaptive SoCs, the resulting adaptive SOMs provide a comprehensive, production-ready platform for AI-enabled edge applications.

Companies that move to adaptive SOMs benefit from a unique combination of performance, flexibility, and rapid development time. They can enjoy the benefits of adaptive computing without the need to build their own circuit boards, something that has only recently been possible at the edge with the introduction of Xilinx's Kria™ portfolio of adaptive SOMs.
About Xilinx:
Xilinx delivers adaptive platforms. Our adaptive SoCs, FPGAs, accelerator cards, and System-on-Modules give leading-edge companies the freedom to innovate and deploy rapidly. We partner with our customers to create
scalable, differentiated and intelligent solutions from the cloud to the edge. In a world where the pace of change is
accelerating, more and more innovators trust Xilinx to help them get to market faster, and with optimal efficiency and
performance. For more information, visit www.xilinx.com.
Xilinx Europe
Bianconi Avenue
Citywest Business Campus
Saggart, County Dublin
Ireland
Tel: +353-1-464-0311
www.xilinx.com
India
Xilinx India Technology Services Pvt. Ltd.
Block A, B, C, 8th & 13th floors,
Meenakshi Tech Park, Survey No. 39
Gachibowli(V), Seri Lingampally (M),
Hyderabad -500 084
Tel: +91-40-6721-4747
www.xilinx.com
© Copyright 2021 Xilinx, Inc. Xilinx, the Xilinx logo, Artix, ISE, Kintex, Kria, Spartan, Versal, Virtex, Vitis, Vivado, Zynq, and other designated
brands included herein are trademarks of Xilinx in the United States and other countries. AMBA, AMBA Designer, ARM, ARM1176JZ-S,
CoreSight, Cortex, and PrimeCell are trademarks of ARM in the EU and other countries. PCIe, and PCI Express are trademarks of PCI-SIG
and used under license. All other trademarks are the property of their respective owners.
Printed in the U.S.A. WW4/9/21