Domain-Specific Computer Architectures for Emerging Applications: Machine Learning and Neural Networks, 1st Edition
Author: Chao Wang
ISBN: 9780429355080, 0429355084
Edition: 1
File details: PDF, 37.90 MB
Year: 2024
Language: English
Domain-Specific Computer Architectures for Emerging Applications
With the end of Moore’s Law, domain‑specific architecture (DSA) has become
a crucial mode of implementing future computing architectures. This book
discusses the system‑level design methodology of DSAs and their applica‑
tions, providing a unified design process that guarantees functionality,
performance, energy efficiency, and real‑time responsiveness for the target
application.
DSA design typically starts from domain-specific algorithms or applications, analyzes their characteristics, such as computation, memory access, and communication, and proposes a heterogeneous accelerator architecture suited to that particular application. This book places
particular focus on accelerator hardware platforms and distributed systems
for various novel applications, such as machine learning, data mining, neural
networks, and graph algorithms, and also covers RISC‑V open‑source instruc‑
tion sets. It briefly describes the system design methodology based on DSAs
and presents the latest research results in academia around domain‑specific
acceleration architectures.
Providing cutting‑edge discussion of big data and artificial intelligence
scenarios in contemporary industry and typical DSA applications, this book
appeals to industry professionals as well as academicians researching the
future of computing in these areas.
Dr. Chao Wang is a Professor with the University of Science and Technology of China, where he is also Vice Dean of the School of Software Engineering. He serves as an Associate Editor of ACM TODAES and IEEE/ACM TCBB. Dr. Wang received an ACM China Rising Star Honorable Mention, a Best IP nomination at DATE 2015, and a Best Paper Candidate at CODES+ISSS 2018. He is a senior member of ACM, a senior member of IEEE, and a distinguished member of CCF.
First edition published 2024
by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
The right of Chao Wang to be identified as author of this work has been asserted in accordance with
sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form
or by any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying and recording, or in any information storage or retrieval system, without permission
in writing from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
DOI: 10.1201/9780429355080
Typeset in Palatino
by codeMantra
Contents
Preface
Index
Preface
within the same system. Because those books were written years ago, their coverage of theories and methods differs somewhat from the algorithms and applications that attract the most attention today, and hardware acceleration techniques based on new devices and semiconductor processes are not included. Domain-specific computing and its design theory and methods are a core technology for manufacturing development and upgrading. This is especially true in the current era of AI, in which various compute-intensive and data-intensive algorithms continue to emerge: domain-specific computing is of key significance for reducing energy consumption and hardware costs, and a large number of highly skilled professionals are urgently needed in this field. At present, however, the relevant education at domestic universities is relatively weak, and the capacity to train practical talent is insufficient.
To address the lack of practical material in classic textbooks at home and abroad, and drawing on our years of teaching and research practice, this book surveys the technical methods and development directions of the field from the perspective of computer science and technology. It takes the customized needs of different algorithmic application areas as a thread and discusses their respective domain-specific system design issues by category. The book covers mainstream algorithm types including neural networks, data mining, and graph computing, combines macro-level theory with concrete cases, and is organized around the construction of domain-specific accelerator microarchitectures and acceleration systems based on field-programmable gate arrays. System optimization analysis and specific hardware and software customization methods for different algorithmic scenarios are discussed in detail in separate chapters. Due to space limitations, it is difficult to cover all methods and ideas for every application field on every hardware platform. For more detailed and comprehensive domain-specific system customization methods for individual applications, interested readers can refer to AI Computing Systems, edited by Chen Yunji et al., and Handbook of Signal Processing Systems, edited by Shuvra S. Bhattacharyya et al.
The publication of this book unites the efforts of many teachers and students in the Energy Efficient Intelligent Computing Lab, University of Science and Technology of China. Dr. Lei Gong, Prof. Xuehai Zhou, Prof. Xi Li, Haoran Li, Haoyu Cai, Yingxue Gao, Yang Yang, Songsong Li, Wenqi Lou, Xuan Wang, Jiali Wang, Yangyang Zhao, Haijie Fang, Wenbin Teng, Zheyuan Zou, Yuxing He, Qiaochu Liang, Jize Pang, Hanyuan Gao, and many other researchers took part in preparing the manuscript. The material of this book draws on a large number of relevant textbooks, courseware, and academic papers at home and abroad. I would like to express my heartfelt thanks to the authors of the cited documents, and I apologize to any authors whose sources are missing. Given the author's limited knowledge, there are bound to be shortcomings in the book; readers are invited to point them out and correct them. For any comments and suggestions, please contact [email protected].
Chao Wang
University of Science and Technology of China
1 Overview of Domain-Specific Computing
DOI: 10.1201/9780429355080-1
Speedup is the ratio of the running time of the serial version of a program to the running time of its parallel version. Parallelizing a program only makes sense when this ratio is greater than 1, and, generally, the larger the ratio, the greater the benefit of parallelization.
Efficiency is the ratio of a program's speedup to the number of processing units; it reflects how well multiple processing units are utilized: the higher the efficiency, the higher their utilization rate.
Scalability describes how the efficiency of a program changes as the number of processing units increases. Scalability is generally tied to efficiency: the higher the efficiency, the better the program's scalability, and vice versa.
Resource utilization primarily concerns the use of FPGA platforms for acceleration. When using an FPGA to design accelerator architectures, hardware resources are often very limited, so they cannot be used blindly in a design; instead, a balance must be found between resources and performance.
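The first two metrics above can be computed directly from measured running times. A minimal illustrative helper (not from the book) makes the definitions concrete:

```python
# Parallel performance metrics as defined above:
# speedup S = T_serial / T_parallel, efficiency E = S / p.

def speedup(t_serial, t_parallel):
    """Ratio of serial running time to parallel running time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, num_units):
    """Speedup divided by the number of processing units."""
    return speedup(t_serial, t_parallel) / num_units

# Example: a program takes 12 s serially and 2 s on 8 processing units.
s = speedup(12.0, 2.0)        # 6.0 -> parallelization is worthwhile (S > 1)
e = efficiency(12.0, 2.0, 8)  # 0.75 -> 75% utilization of the 8 units
print(s, e)
```

A program whose efficiency stays near 1.0 as `num_units` grows is the scalable case described above; efficiency that decays quickly signals poor scalability.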
2 Machine Learning Algorithms and Hardware Accelerator Customization
DOI: 10.1201/9780429355080-2
FIGURE 2.1
Using regression algorithms to make predictions about data.
FIGURE 2.2
Matching an objective function using an instance-based algorithm.
FIGURE 2.3
Reducing overfitting with regularization methods.
FIGURE 2.4
Solving classification and regression problems using decision tree algorithms. (The example tree branches on "Customized acceleration?" and "Reconfigurable?", leading to FPGA-based, ASIC-based, or CPU/GPU-based implementations.)
FIGURE 2.5
Solving classification and regression problems using Bayesian methods.
FIGURE 2.6
Solving classification and regression problems using kernel-based algorithms. (The plot marks the support vectors on either side of the separating boundary.)
FIGURE 2.7
Data categorization using clustering algorithms.
FIGURE 2.8
Using association rule learning to extract association rules in a multi-variate data set.
FIGURE 2.9
Artificial neural network structure.
FIGURE 2.10
Neural network structure in deep learning. (Convolutional layers followed by fully connected layers, from input to output.)
FIGURE 2.11
Using dimensionality reduction algorithms to analyze the intrinsic structure of data.
FIGURE 2.12
Integration of independent learning models for integrated prediction using integration algorithms. (The example trains models on random subsets of all data; at each node, a small random subset of variables is chosen and the variable and value that optimize the split are selected.)
TABLE 1.1
Statistics of the Most Time‑Consuming Computational Kernels for Various Machine
Learning Algorithms
Top Three Kernels (%)
Application Kernel 1 (%) Kernel 2 (%) Kernel 3 (%) Sum (%)
k‑Means Distance (68) Clustering (21) minDist (10) 99
Fuzzy K‑Means Clustering (58) Distance (39) fuzzySum (1) 98
BIRCH Distance (54) Variance (22) Redistribution (10) 86
HOP Density (39) Search (30) Gather (23) 92
Naive Bayesian ProbCal (49) Variance (38) dataRead (10) 97
ScalParC Classify (37) giniCalc (36) Compare (24) 97
Apriori Subset (58) dataRead (14) Increment (8) 80
Eclat Intersect (39) addClass (23) invertClass (10) 71
SNP CompScore (68) updateScore (20) familyScore (2) 90
GeneNet CondProb (55) updateScore (31) familyScore (9) 95
SEMPHY bestBrnchLen (59) Expectation (39) IenOpt (1) 99
Rsearch Covariance (90) Histogram (6) dbRead (3) 99
SVM‑RFE quotMatrx (57) quadGrad (38) quotUpdate (2) 97
PLSA pathGridAssgn (51) fillGridCache (34) backPathFind (14) 99
Utility dataRead (46) Subsequence (29) Main (23) 98
architectures of the designed accelerators, which can exploit both task-level parallelism and data-level parallelism. In addition, pipelining techniques are often used in accelerators to increase throughput.
FIGURE 2.13
FPGA-based C4.5 algorithm accelerator architecture designed in paper [7]. (The conceptual design feeds the input feature vector through a discretizer and then a C4.5 classifier; the hardware architecture realizes each as a pipeline of PE stages with distributed RAM, producing a discretized feature vector.)
The attribute vectors of the data are fed from the left side to the discretiza‑
tion module, and after each level of the discretization processing unit, the
data is discretized corresponding to a particular attribute value. The data
is then fed to the classification module and after each level, the data goes
one level down in the decision tree. A classification unit has all the inter‑
mediate/leaf nodes of this level in the corresponding decision tree stored
in its local memory, and the next level of the classification unit receives the
parameters (data attribute set, intermediate node address) and then finds the
corresponding intermediate node to continue the classification.
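The level-per-stage organization can be sketched in software as follows. This is a hypothetical model for illustration, not the RTL from paper [7]: each stage stores only the nodes of its own tree level in "local memory", receives a (feature vector, node index) pair, and forwards the index of the chosen child to the next stage.

```python
# Hypothetical software model of the level-pipelined classifier:
# stage k holds only level k of the decision tree in its local memory.
# An internal node is (feature_index, threshold, left_child, right_child);
# a leaf is ("leaf", label). Child indices refer to the next level's array.

def make_stage(level_nodes):
    """Return a stage function that advances classification by one tree level."""
    def stage(features, node_idx, label):
        if label is not None:          # already classified by an earlier stage
            return node_idx, label
        node = level_nodes[node_idx]
        if node[0] == "leaf":
            return node_idx, node[1]
        feat, thresh, left, right = node
        nxt = left if features[feat] <= thresh else right
        return nxt, None               # pass child index to the next stage
    return stage

# A tiny 2-level tree: the root splits on feature 0; level 1 holds the leaves.
level0 = [(0, 0.5, 0, 1)]
level1 = [("leaf", "A"), ("leaf", "B")]
pipeline = [make_stage(level0), make_stage(level1)]

def classify(features):
    node_idx, label = 0, None
    for stage in pipeline:             # in hardware, stages run concurrently
        node_idx, label = stage(features, node_idx, label)
    return label

print(classify([0.3]))  # A
print(classify([0.9]))  # B
```

In hardware, while one vector is at stage 2, the next vector can already occupy stage 1, which is where the throughput gain comes from; the sequential loop here only models the data flow.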
The specific accelerator structure designed in this paper still has some shortcomings. For example, in the classification module each layer of the decision tree is handled by one PE; since the number of nodes differs from layer to layer, this inevitably leads to an imbalance in computational resources, and the accelerator may hit performance bottlenecks when the input data size is relatively large.
FIGURE 2.14
Accelerator architecture for FPGA-based SVM inference proposed in paper [8]. (Vectors stream from off-chip memory into vector processor clusters built from VPE arrays of N functional units each, driven by a SIMD instruction sequencer programmed from the host; results are consolidated and written back to off-chip memory.)
FIGURE 2.15
Improvements made in paper [9] for FPGA-based SVM inference accelerators. (Test data and support vectors feed per-dimension SV memories and multipliers whose products are combined in an adder tree; the per-dimension arithmetic runs in the fixed-point domain, with an FX2FP conversion and a kernel processor in the floating-point domain before the class label is produced, and multiple Classifier Hypertiles operate in parallel on memory banks.)
The accelerator architecture designed in paper [8] does not accelerate the computation of kernel functions and does not support operations on heterogeneous data. Paper [9] addresses these problems and proposes a novel cascaded SVM accelerator architecture.
Paper [9] proposes an improved structure for the shortcomings of paper [8], as shown in Figure 2.15. In this accelerator structure, multiple Classifier Hypertiles act as the PEs of the accelerator. For a given test datum, each PE handles the operations between the test data and a portion of the support vectors. All the support vectors are stored in the on-chip memory of the FPGA, and all the test data are stored in off-chip memory; each test datum is streamed into multiple PEs. Each Classifier Hypertile (PE) essentially performs a multiply-accumulate operation, but unlike a conventional MAC unit, the Hypertile has a finer granularity: for each attribute it uses a multiplier matched to that attribute's precision, so it can handle heterogeneous data better. In addition, the accelerator architecture uses specialized computational units to accelerate the computation of kernel functions.
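The per-attribute precision idea can be mimicked in software. The sketch below is a hypothetical model, not the circuit in paper [9]: each dimension is quantized with its own number of fractional bits, products are formed in the fixed-point domain, and each product is rescaled back to floating point (loosely corresponding to the FX2FP step in Figure 2.15).

```python
# Hypothetical model of a heterogeneous fixed-point dot product:
# dimension i is quantized with its own fractional bit width frac_bits[i].

def to_fixed(x, frac_bits):
    """Quantize a real value to a fixed-point integer with frac_bits fraction bits."""
    return round(x * (1 << frac_bits))

def hetero_dot(test_vec, support_vec, frac_bits):
    """Multiply-accumulate with a per-dimension fixed-point format."""
    acc = 0.0
    for x, sv, fb in zip(test_vec, support_vec, frac_bits):
        prod = to_fixed(x, fb) * to_fixed(sv, fb)   # fixed-point domain
        acc += prod / float(1 << (2 * fb))          # rescale product back
    return acc

# Coarse 4-bit fractions for dims 0-1, a finer 12-bit fraction for dim 2.
val = hetero_dot([0.5, 0.25, 0.1], [1.0, 1.0, 1.0], [4, 4, 12])
print(val)  # close to the exact dot product 0.85
```

The design trade-off this models is that narrow multipliers are cheap on an FPGA, so attributes that need little precision should not pay for wide arithmetic.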
Beyond this improvement, paper [9] also proposes a novel SVM accelerator structure, namely a cascaded SVM accelerator, as shown in Figure 2.16. The cascade is a pipelined concatenation of multiple SVM classifiers, each of which may have a different classification model and different classification capability. This applies the idea of boosting, in which multiple weak classifiers are combined to form a strong classifier. If a given stage of the cascade cannot determine the class of an input with sufficient confidence, the input is handed to the next stage. Naturally, each later classifier should be stronger than the one before it.
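The cascade behavior can be sketched as follows. This is an illustrative software analogue with made-up decision functions, not the SVM models of paper [9]: each stage returns a signed score, and a sample whose score magnitude falls below that stage's confidence margin falls through to the next, stronger stage.

```python
# Illustrative classifier cascade: cheap stages first, expensive stage last.
# Each stage is (decision_function, margin); |score| < margin means the stage
# is not confident, so the sample is passed on to the next stage.

def cascade_predict(x, stages):
    for decide, margin in stages[:-1]:
        score = decide(x)
        if abs(score) >= margin:       # far from the hyperplane: decide now
            return 1 if score > 0 else -1
    score = stages[-1][0](x)           # the last stage always decides
    return 1 if score > 0 else -1

# Stage 1: a fast linear rule; stage 2: a (pretend) stronger classifier.
linear = (lambda x: x[0] - x[1], 0.5)
strong = (lambda x: x[0] ** 2 - x[1], 0.0)
stages = [linear, strong]

print(cascade_predict([3.0, 0.0], stages))   # 1, decided by stage 1
print(cascade_predict([1.1, 1.0], stages))   # score 0.1 -> stage 2 decides: 1
```

The performance win comes from most samples lying far from the decision boundary: they exit at the fast first stage, and only the ambiguous minority reaches the slow, complex kernel.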
FIGURE 2.16
FPGA-based cascaded SVM accelerator structure designed in paper [9].
The paper designs a two-level classifier. The first level handles points far from the hyperplane using a simple kernel function and runs quickly, while the second level classifies points near the hyperplane's margin (the points the first level cannot judge) using a possibly more complex kernel function and runs somewhat slower.
The widespread use of SVM algorithms makes their acceleration highly relevant. Considerable work has addressed the inference process of SVM algorithms, which is now relatively mature, while comparatively little work has accelerated the learning process. In addition, SVM inference is often preceded by preprocessing steps such as orthogonalization and regularization before data classification; these steps are often inefficient and occupy a high proportion of time on the CPU, so accelerating the preprocessing process is also a promising research entry point.
FIGURE 2.17
Structure of the FPGA-based accelerator for the Apriori algorithm designed in paper [10].
in Figure 2.17. The paper divides the support-counting half of Apriori into three phases: candidate generation, candidate pruning, and support calculation. The accelerator structure can be reused across all three phases and shows good acceleration results.
Candidate generation produces candidate frequent itemsets: if two k-frequent itemsets (itemsets already known to be frequent) share their first k − 1 items, a (k + 1)-candidate can be generated by joining them. Candidate pruning pre-checks each newly generated (k + 1)-candidate: for each of its k + 1 items, remove that item and test whether the remaining k-itemset appears in the set of k-frequent itemsets already generated; if any such k-subset is not frequent, the (k + 1)-candidate cannot be frequent and is discarded. Support calculation then counts, over the whole data set, the frequency of each (k + 1)-candidate that passed the pre-check; only candidates whose frequency reaches the threshold are considered frequent and added to the set of (k + 1)-frequent itemsets.
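The three phases map directly onto the classic software formulation of Apriori. The following is a standard textbook-style sketch of the algorithm itself, not the hardware datapath of paper [10]:

```python
from itertools import combinations

def generate_candidates(freq_k):
    """Join two k-itemsets sharing their first k-1 items -> (k+1)-candidates."""
    freq_k = sorted(freq_k)
    cands = set()
    for a, b in combinations(freq_k, 2):
        if a[:-1] == b[:-1]:                     # first k-1 items identical
            cands.add(tuple(sorted(set(a) | set(b))))
    return cands

def prune(cands, freq_k):
    """Drop candidates having any infrequent k-subset (remove one item, test)."""
    freq_set = set(freq_k)
    return {c for c in cands
            if all(c[:i] + c[i + 1:] in freq_set for i in range(len(c)))}

def support_count(cands, transactions, min_count):
    """Count each surviving candidate over the data set; keep the frequent ones."""
    return {c for c in cands
            if sum(set(c) <= t for t in transactions) >= min_count}

transactions = [{1, 2, 3}, {1, 2, 4}, {1, 2, 3, 4}, {2, 3, 4}]
freq_2 = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
cands = prune(generate_candidates(freq_2), freq_2)
print(sorted(support_count(cands, transactions, 2)))  # [(1, 2, 3), (2, 3, 4)]
```

The hardware advantage lies in the support-count loop: it is a simple, massively parallel subset test over the transaction stream, which is exactly the kind of counting the FPGA structure in Figure 2.17 reuses across phases.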
FPGA acceleration of the Apriori algorithm is not yet well researched; paper [10] is among the few works in this direction. However, since the Apriori algorithm is essentially a counting and statistics process, using FPGAs to accelerate it should have good prospects. In addition, most Apriori implementations require the data to be pre-numbered in dictionary order before processing, so the preprocessing step of sorting the data into dictionary order is also a potential acceleration point.
FIGURE 2.18
FPGA-based accelerator structure for accelerating Gini coefficient computation, proposed in paper [11].