FTT-NAS: Discovering Fault-Tolerant Convolutional Neural Architecture
XUEFEI NING, GUANGJUN GE, WENSHUO LI, and ZHENHUA ZHU, Department of
Electronic Engineering, Tsinghua University, China
YIN ZHENG, Weixin Group, Tencent, China
XIAOMING CHEN, State Key Laboratory of Computer Architecture, Institute of Computing Technology,
Chinese Academy of Sciences, China
ZHEN GAO, School of Electrical and Information Engineering, Tianjin University, China
YU WANG and HUAZHONG YANG, Department of Electronic Engineering, Tsinghua University,
China
With the rapid evolution of embedded deep-learning computing systems, applications powered by deep learning are moving from the cloud to the edge. When deploying neural networks (NNs) onto devices operating under
complex environments, there are various types of possible faults: soft errors caused by cosmic radiation and
radioactive impurities, voltage instability, aging, temperature variations, malicious attackers, and so on. Thus,
the safety risk of deploying NNs is now drawing much attention. In this article, after the analysis of the pos-
sible faults in various types of NN accelerators, we formalize and implement various fault models from the
algorithmic perspective. We propose Fault-Tolerant Neural Architecture Search (FT-NAS) to automatically discover convolutional neural network (CNN) architectures that are resilient to various faults in today's devices. Then, we incorporate fault-tolerant training (FTT) in the search process to achieve better results, which
is referred to as FTT-NAS. Experiments on CIFAR-10 show that the discovered architectures outperform other
manually designed baseline architectures significantly, with comparable or fewer floating-point operations
(FLOPs) and parameters. Specifically, with the same fault settings, F-FTT-Net discovered under the feature
fault model achieves an accuracy of 86.2% (vs. 68.1% achieved by MobileNet-V2), and W-FTT-Net discovered under the weight fault model achieves an accuracy of 69.6% (vs. 60.8% achieved by ResNet-18). By inspecting
the discovered architectures, we find that the operation primitives, the weight quantization range, the capacity of the model, and the connection pattern all influence the fault resilience capability of NN models.
This work was supported by National Natural Science Foundation of China (No. U19B2019, 61832007, 61621091), National
Key R&D Program of China (No. 2017YFA02077600); Beijing National Research Center for Information Science and Tech-
nology (BNRist); Beijing Innovation Center for Future Chips; the project of Tsinghua University and Toyota Joint Research
Center for AI Technology of Automated Vehicle (TT2020-01); Beijing Academy of Artificial Intelligence.
Authors’ addresses: X. Ning, G. Ge, W. Li, Z. Zhu, Y. Wang, and H. Yang, Tsinghua University, Department of Electronic En-
gineering, Rohm Building, Beijing, China, 100084; emails: [email protected], [email protected],
[email protected], [email protected], [email protected], [email protected]; Y.
Zheng, Weixin group, Tencent, Beijing, China, 100080; email: [email protected]; X. Chen, State Key Laboratory
of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, China, 100190; email:
[email protected]; Z. Gao, School of Electrical and Information Engineering, Tianjin University, China, 300072; email:
[email protected].
Additional Key Words and Phrases: Neural architecture search, fault tolerance, neural networks
ACM Reference format:
Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming Chen, Zhen Gao, Yu Wang,
and Huazhong Yang. 2021. FTT-NAS: Discovering Fault-tolerant Convolutional Neural Architecture. ACM
Trans. Des. Autom. Electron. Syst. 26, 6, Article 44 (August 2021), 24 pages.
https://doi.org/10.1145/3460288
1 INTRODUCTION
Convolutional Neural Networks (CNNs) have achieved breakthroughs in various tasks, includ-
ing classification [14], detection [31], segmentation [32], and so on. Due to their promising perfor-
mance, CNNs have been utilized in various safety-critical applications, such as autonomous driving,
intelligent surveillance, and identification. Meanwhile, driven by the recent academic and indus-
trial efforts, the neural network accelerators based on various hardware platforms (e.g., Applica-
tion Specific Integrated Circuits (ASIC) [9], Field Programmable Gate Array (FPGA) [37],
Resistive Random-Access Memory (RRAM) [10]) have been rapidly evolving.
The robustness and reliability related issues of deploying neural networks onto various embed-
ded devices for safety-critical applications are attracting more and more attention. There is a large
stream of algorithmic studies on various robustness-related characteristics of NNs, e.g., adversarial
robustness [44], data poisoning [41], interpretability [53], and so on. However, no hardware mod-
els are taken into consideration in these studies. Besides the issues from the purely algorithmic
perspective, there exist hardware-related reliability issues when deploying NNs onto today's
embedded devices. With the down-scaling of CMOS technology, circuits become more sensitive to
cosmic radiation and radioactive impurities [16]. Voltage instability, aging, and temperature vari-
ations are also common effects that could lead to errors. As for the emerging metal-oxide RRAM
devices, due to the immature technology, they suffer from many types of device faults [7], among
which hard faults such as Stuck-at-Faults (SAFs) damage the computing accuracy severely and cannot be easily mitigated [49]. Moreover, malicious attackers can attack edge devices by embedding hardware Trojans, manipulating back-doors, and performing memory injection [54].
Recently, some studies [28, 40, 46] analyzed the sensitivity of NN models. They proposed to
predict whether a layer or a neuron is sensitive to faults and protect the sensitive ones. For fault
tolerance, a straightforward way is to introduce redundancy in the hardware. Triple Modular
Redundancy (TMR) is a commonly used but expensive method to tolerate a single fault [4, 42,
55]. References [28, 49] proposed various redundancy schemes for Stuck-at-Fault tolerance in RRAM-based computing systems. For increasing the algorithmic fault resilience capability,
References [12, 15] proposed to use fault-tolerant training (FTT), in which random faults are
injected in the training process.
Although redesigning the hardware for reliability is effective, it is not flexible and inevitably
introduces a large overhead. It would be better if the issues could be mitigated as far as possible
from the algorithmic perspective. Existing methods are mainly concerned with designing training methods and analyzing the weight distribution [12, 15, 40]. Intuitively, the neural architecture
might also be important for the fault tolerance characteristics [1, 25], since it determines the “path”
of fault propagation. To verify these intuitions, the accuracies of baselines under a random bit-bias feature fault model¹ are shown in Table 1, and the results under the SAF weight fault model² are
shown in Table 2. These preliminary experiments on the CIFAR-10 dataset show that the fault
tolerance characteristics vary among neural architectures, which motivates the employment of
¹The random bit-bias feature fault model is formalized in Section 3.4.
²The SAF weight fault model is formalized in Section 3.5.
and formalize the fault models. In Section 4, we elaborate on the design of the fault-tolerant
NAS system. Then in Section 5, the effectiveness of our method is illustrated by experiments,
and the insights are also presented. Finally, we discuss and conclude our work in Section 6 and
Section 7.
in CMOS circuits is
[-R_f, R_f] = [-2^{Q-l}, 2^{-l}(2^Q - 1)].     (4)
where A is the architecture search space, ∼ is the sampling operator, and x_t, x_v denote the data sampled from the training and validation data splits D_t, D_v, respectively. E_{x∼D}[·] denotes the expectation with respect to the data distribution D, R denotes the evaluated reward used to instruct the sampling process, and L denotes the loss criterion for back propagation during the training of the weights w.
Originally, for the performance evaluation of each sampled architecture α, one needs to find the corresponding w^*(α) by fully training the candidate network from scratch. This process is
extremely slow, and shared weights evaluation is commonly used for accelerating the evaluation.
In shared weights evaluation, each candidate architecture α is a subgraph of a super network and
is evaluated using a subset of the super network weights. The shared weights of the super network
are updated along the search process.
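To make the shared-weights evaluation concrete, the following minimal PyTorch sketch shows how one edge of a super network can hold all candidate operations while a sampled architecture exercises only one of them; the class and argument names (SharedOpEdge, op_index) are ours for illustration and do not come from the released code.

```python
import torch
import torch.nn as nn

class SharedOpEdge(nn.Module):
    """One edge of a super network: it holds every candidate operation,
    but a sampled sub-architecture only uses (and updates) one of them."""
    def __init__(self, candidates):
        super().__init__()
        self.ops = nn.ModuleList(candidates)

    def forward(self, x, op_index):
        # Only the selected primitive's shared weights participate.
        return self.ops[op_index](x)

# Hypothetical usage: evaluate a sampled choice without training it from scratch.
edge = SharedOpEdge([nn.Conv2d(16, 16, 3, padding=1),
                     nn.Conv2d(16, 16, 1),
                     nn.Identity()])
x = torch.randn(2, 16, 32, 32)
y = edge(x, op_index=1)  # the controller sampled operation index 1
```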
3 FAULT MODELS
In Section 3.1, we motivate and discuss the formalization of application-level statistical fault models. Platform-specific analyses are conducted in Section 3.2 and Section 3.3. Finally, the MAC-i.i.d. Bit-Bias (MiBB) feature fault model and the arbitrary-distributed Stuck-at-Fault (adSAF) weight fault model are described in Section 3.4 and Section 3.5; these two models are used in the
neural architecture search process. The analyses in this part are summarized in Figure 4(a) and
Table 3.
Table 3. Summary of the NN Application-level Statistical Fault Models Due to Various Types of Errors on Different Platforms
more susceptible to soft errors [43]. An unprotected SRAM cell usually has a larger bit soft error
rate (SER) than flip-flops. Since the occurrence probability of hard errors is much smaller than that of soft errors, we focus on the analysis of soft errors, even though hard errors lead to permanent failures.
The soft errors in the weight buffer could be modeled as i.i.d. weight random bit-flips. Given
the original value as x_0, the distribution of a faulty value x under the random bit-flip (BF) model could be written as

x ∼ BF(x_0; p)  indicates  x = 2^{-l}(2^l x_0 ⊕ e),   e = \sum_{q=1}^{Q} e_q 2^{q-1},   e_q ∼ Bernoulli(p), q = 1, . . . , Q,     (6)

where e_q denotes whether a bit-flip occurs at bit position q, and ⊕ is the XOR operator.
By assuming that an error occurs at each bit with an i.i.d. bit SER of r_s, we know that each Q-bit weight has an i.i.d. probability p_w of encountering an error, with p_w = 1 − (1 − r_s)^Q ≈ r_s × Q, as r_s × Q ≪ 1. It is worth noting that throughout the analysis, we assume that the SERs of all components are ≪ 1; hence the error rate at each level is approximated as the sum of the error rates of the independent sub-components. As each weight encounters errors independently, a weight tensor is distributed as i.i.d. random bit-flip (iBF): w ∼ iBF(w_0; r_s), where w_0 denotes the golden weights. Reagen et al. [38] showed that the iBF model could capture the bit error behavior exhibited by real SRAM hardware.
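For illustration, a minimal PyTorch-style sketch of iBF injection on quantized weights is given below. It handles the sign separately and does not reproduce any particular accelerator's number format, so it should be read as an approximation of the statistical model above rather than as the paper's implementation.

```python
import torch

def inject_ibf(w, bit_ser, q_bits=8, frac_len=6):
    """i.i.d. random bit-flip (iBF) injection on quantized weights, a sketch
    of Equation (6). The sign is handled separately here, so this does not
    exactly reproduce a particular hardware number format."""
    ints = torch.round(w.abs() * (2 ** frac_len)).to(torch.int64)
    # each of the q_bits bits flips independently with probability bit_ser
    flips = (torch.rand(*w.shape, q_bits) < bit_ser).to(torch.int64)
    bit_values = 2 ** torch.arange(q_bits, dtype=torch.int64)
    e = (flips * bit_values).sum(dim=-1)                  # e = sum_q e_q 2^(q-1)
    faulty = ((ints ^ e) & (2 ** q_bits - 1)).float() * (2.0 ** -frac_len)
    return torch.sign(w) * faulty

# example: flip bits of a weight tensor with an assumed bit SER of 1e-4
w0 = torch.randn(64, 32, 3, 3).clamp(-1, 1)
w_faulty = inject_ibf(w0, bit_ser=1e-4)
```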
The soft errors in the feature buffer are modeled similarly as i.i.d. random bit-flips, with a fault probability of approximately r_s × Q for Q-bit feature values. The distribution of the output feature map (OFM) values could be written as y ∼ iBF(y_0; r_s), where y_0 denotes the golden results.
Actually, FPGA-based implementations are usually more vulnerable to soft errors than their ASIC counterparts [2]. Since the majority of an FPGA chip's area is filled with memory cells, the overall SER is much higher. Moreover, soft errors occurring in the logic configuration bits lead to persistent faulty computation, rather than the transient faults seen in ASIC logic. Persistent errors cannot be mitigated by simple retry methods and lead to statistically significant performance degradation. Moreover, since persistent errors accumulate if no correction is made, the equivalent error rate keeps increasing over time. We abstract this effect with a monotonically increasing function M_p(t) ≥ 1, where the subscript p denotes "persistent" and t denotes the time. For example, if the FPGA weight buffer or LUTs are reloaded every period T in a radioactive environment [4, 6], then a multiplier of M_p(T) would be the worst-case
bound. Note that the exact choice of t is not important in our experiments, since our work mainly aims at comparing different neural architectures under a certain fault injection pattern and ratio, and the temporal effect modeled by M_p(t) does not influence the architectural preference.
Let us recap how one convolution is mapped onto the FPGA-based accelerator to see what the configuration bit errors could cause on the OFM values. If the dimension of the convolution kernel is (c, k, k) (channel, kernel height, and kernel width, respectively), then ck^2 − 1 ≈ ck^2 additions are needed for the computation of one feature value. We assume that the add operations are spatially expanded onto adder trees constructed by LUTs, i.e., no temporal reuse of adders is used for computing one feature value. That is to say, the add operations are mapped onto different hardware adders³ and encounter errors independently. The per-feature error rate could be approximated by the adder-wise SER times M_l, where M_l ≈ ck^2. Now, let us dive into the adder-level computation: in a 1-bit adder with scale s, a bit-flip in one LUT bit would add a bias ±2^s to the output value if the input bit signals match the address of this LUT bit. If each LUT cell has an i.i.d. SER of r_s, in a Q′-bit adder, denoting the fraction length of the operands and result as l′, the distribution of the faulty output x with random bit-bias (BB) faults could be written as
of the faulty output x with the random bit-bias (BB) faults could be written as
x ∼ BB(x 0 ; p, Q , l )
Q
−l
indicates x = x 0 + e, e=2 (−1) β 2q−1eq
q=1 (7)
eq ∼ Bernoulli(p)
βq ∼ Bernoulli(0.5), q = 1, . . . , Q .
As for the result of the adder tree constructed by multiple LUT-based adders, since the probability that multiple bit-bias errors co-occur is orders of magnitude smaller, we ignore the accumulation of biases that are smaller than the OFM quantization resolution 2^{-l}. Consequently, the OFM feature values before the activation function follow the i.i.d. random bit-bias distribution f ∼ iBB(f_0; r_s × M_l × M_p(t), Q, l), where Q and l are the bit-width and fraction length of the OFM values, respectively.
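A minimal sketch of injecting such i.i.d. random bit-bias feature faults on an OFM tensor is shown below; the function and parameter names are ours, and p_bit stands for the combined rate r_s × M_l × M_p(t).

```python
import torch

def inject_ibb(ofm, p_bit, q_bits=8, frac_len=4):
    """i.i.d. random bit-bias (iBB) feature fault injection, a sketch of the
    distribution above. The biases are added to the OFM values before the
    activation function."""
    e = (torch.rand(*ofm.shape, q_bits) < p_bit).float()    # which bits are hit
    beta = (torch.rand(*ofm.shape, q_bits) < 0.5).float()   # bias direction
    signs = 1.0 - 2.0 * beta                                # (-1)^beta
    bit_values = 2.0 ** torch.arange(q_bits).float()        # 2^(q-1) for q = 1..Q
    bias = (2.0 ** -frac_len) * (e * signs * bit_values).sum(dim=-1)
    return ofm + bias

# example: LUT-cell bit SER r_s amplified by M_l = c * k^2 for a 3x3 conv with 32 input channels
r_s, c, k = 1e-7, 32, 3
feat = torch.randn(2, 64, 16, 16)
feat_faulty = inject_ibb(feat, p_bit=r_s * c * k * k)
```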
We can make an intuitive comparison between the equivalent feature error rates induced by LUT soft errors and feature buffer soft errors. As the majority of FPGAs are SRAM-based, and considering the bit SER r_s of LUT cells and BRAM cells to be close, we can see that the feature error rate induced by LUT errors is amplified by M_l × M_p(t). As we have discussed, M_p(t) ≥ 1 and M_l = ck^2 > 1, so the performance degradation induced by LUT errors could be significantly larger than that induced by feature buffer errors.
Fig. 2. An example of injecting feature faults under the iBB fault model (soft errors in FPGA LUTs).
for SAF1 and p_0 = 1.75% for SAF0) in a fabricated RRAM device. The statistical model of SAFs in single-bit and multi-bit RRAM devices will be formalized in Section 3.5.
As the RRAM crossbars also serve as the computation units, some non-ideal factors (e.g., IR-
drop, wire resistance) could be abstracted as feature faults. They are not considered in this work,
since the modeling of these effects highly depends on the implementation (e.g., crossbar dimension,
mapping strategy) and hardware-in-the-loop testing [15].
Fig. 3. An example of injecting weight faults under the adSAF fault model (SAF errors in RRAM cells).
y = g(Ŵ x + b)
s.t.  Ŵ = (1 − θ) · W + θ · e,
θ ∼ Bernoulli(p_0 + p_1)^{C_o × c × k × k},
m ∼ Bernoulli(p_1 / (p_0 + p_1))^{C_o × c × k × k},     (9)
e = R_w sgn(W) · m,
where R_w refers to the representation bound in Equation (3), θ is the mask indicating whether a fault occurs at each weight position, m is the mask representing the SAF type (SAF0 or SAF1) at faulty weight positions, and e is the mask representing the faulty target values (0 or ±R_w). Each weight has an i.i.d. probability of p_0 to be stuck at 0, and a probability of p_1 to be stuck at the positive or negative bound of the representation range, for positive and negative weights, respectively. An example of injecting this type of weight fault is illustrated in Figure 3.
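The following PyTorch sketch mirrors Equation (9); the function name and the example value for the representation bound are ours.

```python
import torch

def inject_adsaf(w, p0, p1, r_w):
    """adSAF weight fault injection following Equation (9): each weight is
    stuck at 0 with probability p0, and stuck at +R_w / -R_w (following its
    own sign) with probability p1. A sketch on the float weight tensor."""
    theta = (torch.rand_like(w) < (p0 + p1)).float()       # fault mask
    m = (torch.rand_like(w) < (p1 / (p0 + p1))).float()    # SAF1 vs. SAF0
    e = r_w * torch.sign(w) * m                            # stuck-at targets
    return (1.0 - theta) * w + theta * e

# example: 8% SAF ratio as in Section 5 (p0 = 6.7%, p1 = 1.3%); r_w = 0.25 is hypothetical
w0 = torch.randn(64, 32, 3, 3) * 0.05
w_faulty = inject_adsaf(w0, p0=0.067, p1=0.013, r_w=0.25)
```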
Note that the weight fault model, referred to as arbitrary-distributed Stuck-at-Fault model
(adSAF), is much harder to defend against than SAF faults with a specific known defect map.
A neural network model that behaves well under the adSAF model is expected to achieve high
reliability across different specific SAF defect maps.
The above adSAF fault model assumes the underlying hardware is multi-bit RRAM devices;
adSAFs in single-bit RRAM devices are also of interest. In single-bit RRAM devices, multiple bits
of one weight value are mapped onto different crossbars, of which the results would be shifted and
added together [56]. In this case, a SAF fault that occurs in a cell would cause the corresponding
bit of the corresponding weight to be stuck at 0 or 1. The effects of adSAF faults on a weight value
in single-bit RRAM devices can be formulated as
4 FAULT-TOLERANT NAS
In this section, we present the FTT-NAS framework. We first give out the problem formalization
and framework overview in Section 4.1. Then, the search space is described in Section 4.2, and the sampling and assembling process in Section 4.3. Finally, the search process is elaborated
in Section 4.4.
where A is the architecture search space, and D_t, D_v denote the training and validation data splits, respectively. R and L denote the reward and loss criterion, respectively. The major difference of Equation (11) from the vanilla NAS problem in Equation (5) lies in the introduction of the fault model F.
As the cost of finding the best weights w^*(α) for each architecture α is prohibitively high, we use the shared-weights based evaluator, in which the shared weights are directly used to evaluate the sampled architectures. The resulting method, FTT-NAS, solves this NAS problem approximately. FT-NAS can be viewed as a degraded special case of FTT-NAS, in which no fault is injected in the inner optimization of finding w^*(α).
The overall neural architecture search framework is illustrated in Figure 4(b). There are multiple components in the framework: a controller that samples different architecture rollouts from the search space; a candidate network assembled by taking the corresponding subset of weights from the super-net; and a shared-weights based evaluator that evaluates the performance of different rollouts on the CIFAR-10 dataset using fault-tolerant objectives.
Fig. 4. Illustration of the overall workflow. (a) The setup of the application-level statistical fault models.
(b) The FTT-NAS framework. (c) The final fault-tolerant training stage.
In every cell, there are B nodes, and node 1 and node 2 are treated as the cell’s inputs, which are
the outputs of the two previous cells. For each of the other B − 2 nodes, two incoming connections
will be selected and element-wise added. For each connection, the 11 possible operations are: none;
skip connect; 3 × 3 average (avg.) pool; 3 × 3 max pool; 1 × 1 Conv; 3 × 3 ReLU-Conv-BN block; 5
× 5 ReLU-Conv-BN block; 3 × 3 SepConv block; 5 × 5 SepConv block; 3 × 3 DilConv block; 5 × 5
DilConv block.
The complexity of the search space can be estimated as follows. For each cell type, there are (11^{B−2} × (B − 1)!)^2 possible choices. As there are two independent cell types, there are (11^{B−2} × (B − 1)!)^4 possible architectures in the search space, which is roughly 9.5 × 10^24 with B = 6 in our experiments.
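As a quick sanity check of this estimate, the short Python snippet below reproduces the arithmetic:

```python
from math import factorial

B, num_ops = 6, 11
per_cell_type = (num_ops ** (B - 2) * factorial(B - 1)) ** 2   # (11^(B-2) x (B-1)!)^2
total = per_cell_type ** 2                                     # two independent cell types
print(f"{total:.2e}")  # ~9.53e+24, i.e., roughly 9.5 x 10^24
```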
Fig. 5. Illustration of the search space design. Left: The layout and connections between cells. Right: The
possible connections in each cell and the possible operation types on every connection.
ALGORITHM 1: FTT-NAS
1: EPOCH: the total search epochs
2: w: shared weights in the super network
3: θ: the parameters of the controller π
4: epoch = 0
5: while epoch < EPOCH do
6:     for all x_t, y_t ∼ D_t do
7:         a ∼ π(a; θ)  # sample an architecture from the controller
8:         f ∼ F(f)  # sample faults from the fault model
9:         L_c = CE(Net(a; w)(x_t), y_t)  # clean cross entropy
10:        L_f = CE(Net(a; w)(x_t, f), y_t)  # faulty cross entropy
11:        L(x_t, y_t, Net(a; w), f) = (1 − α_l) L_c + α_l L_f
12:        w = w − η_w ∇_w L  # for clarity, we omit momentum calculation here
13:    end for
14:    for all x_v, y_v ∼ D_v do
15:        a ∼ π(a; θ)  # sample an architecture from the controller
16:        f ∼ F(f)  # sample faults from the fault model
17:        R_c = Acc(Net(a; w)(x_v), y_v)  # clean accuracy
18:        R_f = Acc(Net(a; w)(x_v, f), y_v)  # faulty accuracy
19:        R(x_v, y_v, Net(a; w), f) = (1 − α_r) R_c + α_r R_f
20:        θ = θ + η_θ (R − b) ∇_θ log π(a; θ)
21:    end for
22:    epoch = epoch + 1
23:    schedule η_w, η_θ
24: end while
25: return a ∼ π(a; θ)
carried out experiments under two different settings: without/with FTT. When training with FTT, a weighted sum of the clean cross-entropy loss CE_c and the cross-entropy loss with fault injection CE_f is used to train the shared weights. The FTT loss can be written as

L = (1 − α_l) · CE_c + α_l · CE_f.     (13)
As shown in lines 7–12 of Algorithm 1, in each step of training the shared weights, we sample an architecture α using the current controller, then backpropagate using the FTT loss to update the parameters of the candidate network. Training without FTT (in FT-NAS) is a special case with α_l = 0.
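A minimal sketch of one such shared-weights training step is given below; run_with_faults is a hypothetical callable standing in for the fault-injecting forward pass (e.g., built from the injection sketches in Section 3).

```python
import torch
import torch.nn.functional as F

def ftt_step(net, x, y, run_with_faults, alpha_l, optimizer):
    """One fault-tolerant training (FTT) step for the shared weights,
    cf. Equation (13) and lines 7-12 of Algorithm 1."""
    loss_clean = F.cross_entropy(net(x), y)
    loss_faulty = F.cross_entropy(run_with_faults(net, x), y)
    loss = (1.0 - alpha_l) * loss_clean + alpha_l * loss_faulty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```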
As shown in lines 15–20 of Algorithm 1, in each step of training the controller, we sample an architecture from the controller, assemble this architecture using the shared weights, and then get the reward R on one data batch from D_v. Finally, the reward is used to update the controller by applying the REINFORCE technique [47], with the reward baseline denoted as b.
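A minimal sketch of this controller update is shown below; the argument names are ours, and the entropy regularization mentioned in Section 5.1 is omitted for brevity.

```python
import torch

def controller_step(log_prob, reward, baseline, optimizer, momentum=0.99):
    """REINFORCE update of the controller (lines 15-20 of Algorithm 1):
    minimizing -(R - b) * log pi(a) ascends the expected reward. `log_prob`
    is the log-probability of the sampled architecture under the controller."""
    loss = -(reward - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # moving-average reward baseline, as described in Section 5.1
    return momentum * baseline + (1.0 - momentum) * reward
```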
5 EXPERIMENTS
In this section, we demonstrate the effectiveness of the FTT-NAS framework and analyze the dis-
covered architectures under different fault models. First, we introduce the experiment setup in
Section 5.1. Then, the effectiveness under the feature and weight fault models is shown in Section 5.2 and Section 5.3, respectively. The effectiveness of the learned controller is illustrated in
Section 5.4. Finally, the analyses and illustrative experiments are presented in Section 5.5.
5.1 Setup
Our experiments are carried out on the CIFAR-10 [23] dataset. CIFAR-10 is one of the most com-
monly used computer vision datasets and contains 60,000 32 × 32 RGB images. Three manually
designed architectures VGG-16, ResNet-18, and MobileNet-V2 are chosen as the baselines. 8-bit
dynamic fixed-point quantization is used throughout the search and training process, and the frac-
tion length is found following the minimal-overflow principle.
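The snippet below sketches one way to implement such dynamic fixed-point quantization, choosing the fraction length under our reading of the minimal-overflow principle; it is an illustration, not the exact quantizer used in the released code.

```python
import math
import torch

def quantize_dfp(x, q_bits=8):
    """Dynamic fixed-point quantization sketch: pick the fraction length l so
    that the representable range just covers max|x|, then round onto the grid."""
    max_abs = float(x.abs().max())
    int_bits = math.floor(math.log2(max_abs)) + 1 if max_abs > 0 else 0
    frac_len = q_bits - 1 - int_bits           # one bit is reserved for the sign
    step = 2.0 ** -frac_len
    bound = step * (2 ** (q_bits - 1) - 1)     # the representation bound
    return torch.clamp(torch.round(x / step) * step, -bound, bound), frac_len

w = torch.randn(64, 32, 3, 3) * 0.1
w_q, l = quantize_dfp(w)   # e.g., l = 8 when max|w| is around 0.4
```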
In the neural architecture search process, we split the training dataset into two subsets. 80% of
the training data is used to train the shared weights, and the remaining 20% is used to train the
controller. The super network is an 8-cell network, with all the possible connections and opera-
tions. The channel number of the first cell is set to 20 during the search process, and the channel
number increases by 2 upon every reduction cell. The controller network is an RNN with one hid-
den layer of size 100. The learning rate for training the controller is 1e-3. The reward baseline b
is updated using a moving average with momentum 0.99. To encourage exploration, we add an
entropy encouraging regularization to the controller’s REINFORCE objective, with a coefficient of
0.01. For training the shared weights, we use an SGD optimizer with momentum 0.9 and weight
decay 1e-4, and the learning rate is scheduled by a cosine annealing scheduler [33] started from
0.05. Each architecture search process is run for 100 epochs. Note that all these are typical settings
that are similar to Reference [36].
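As a sketch, these optimizer and scheduler settings map onto PyTorch as follows; the stand-in modules and the choice of Adam for the controller are our assumptions (the text only specifies the controller's learning rate).

```python
import torch
import torch.nn as nn

# Stand-ins for the super network and controller (the real ones are assembled
# from the search space described above).
super_net = nn.Sequential(nn.Conv2d(3, 20, 3, padding=1), nn.ReLU())
controller = nn.RNN(input_size=32, hidden_size=100)

# Shared-weights optimizer: SGD with momentum 0.9, weight decay 1e-4, and a
# cosine-annealed learning rate starting from 0.05 over 100 search epochs.
w_optimizer = torch.optim.SGD(super_net.parameters(), lr=0.05,
                              momentum=0.9, weight_decay=1e-4)
w_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(w_optimizer, T_max=100)

# Controller optimizer with learning rate 1e-3.
c_optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)
```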
To conduct the final training of the architectures (Figure 4(c)), we run fault-tolerant training
for 100 epochs. The learning rate is set to 0.1 initially and decayed by 10 at epoch 40 and 80.
We have experimented with a fault-tolerant training choice: whether to mask out the error po-
sitions in feature/weights during the backpropagation process. If the error positions are masked
out, then no gradient would be backpropagated through the erroneous feature positions, and no
gradient would be calculated w.r.t. the erroneous weight positions. We find that this choice does
not affect the fault-tolerant training result, thus, we do not use the masking operation in our final
experiments.
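For reference, the masking variant can be sketched in PyTorch as below: the forward pass keeps the faulty values, while detach() blocks the gradients at the fault-injected positions (the same idea applies to erroneous weight positions).

```python
import torch

def mask_fault_gradients(clean, faulty):
    """Forward with the faulty values, but let gradients flow only through
    positions that were left unchanged by the fault injection. A sketch."""
    changed = clean != faulty
    return torch.where(changed, faulty.detach(), clean)
```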
We build the neural architecture search framework and fault injection framework upon the
PyTorch framework, and all the codes are available at https://github.com/walkerning/aw_nas.
Table 4. Comparison of Different Architectures under the MiBB Feature Fault Model
Fig. 7. The discovered cell architectures under the MiBB feature fault model. (a) Normal cell. (b) Reduction
cell.
We can see that FTT-NAS is much more effective than its degraded variant, FT-NAS. We con-
clude that, generally, NAS should be used in conjunction with FTT, as suggested by Equation (11).
Another interesting fact is that, under the MiBB fault model, the relative rankings of the resilience
capabilities of different architectures change after FTT: After FTT, MobileNet-V2 suffers from the
smallest accuracy degradation among three baselines, whereas it is the most vulnerable one with-
out FTT.
Table 5. Comparison of Different Architectures under the adSAF Weight Fault Model
Fig. 8. The discovered cell architectures under the adSAF weight fault model. (a) Normal cell. (b) Reduction
cell.
Fig. 9. Accuracy curves under different weight fault models. (a) W-FTT-Net under 8bit-adSAF model.
(b) W-FTT-Net under 1bit-adSAF model. (c) W-FTT-Net under iBF model.
iBF model.
As shown in Figures 9(b) and 9(c), under the 1bit-adSAF and iBF weight fault models, W-FTT-Net outperforms all the baselines consistently at different noise levels.
per-MAC fault injection probability of 3e-4 is used for feature faults, and a SAF ratio of 8% (p_0 = 6.7%, p_1 = 1.3%) is used for weight faults.
As shown in Table 6 and Table 7, the performance of different architectures in the search space
varies a lot, and the architectures sampled by the learned controllers, F-FTT-Net and W-FTT-Net,
outperform all the randomly sampled architectures. Note that, as we use different preprocess operations for feature faults and weight faults (ReLU-Conv-BN 3 × 3 and SepConv 3 × 3, respectively), there exist differences in FLOPs and parameter count even with the same cell architectures.
the primitives in F-FTT-Net with SepConv 5 × 5 blocks. The best result achieved by these six
architectures is 77.5% with p_m = 1e-4 (versus 86.2% achieved by F-FTT-Net). These illustrative experiments indicate that the connection pattern and the combination of different primitives both play a role in the fault resilience capability of a neural network architecture.
Weight faults: Under the adSAF fault model, the controller prefers ReLU-Conv-BN blocks over
SepConv and DilConv blocks. This preference is not so easy to anticipate. We hypothesize that the
weight distribution of different primitives might lead to different behaviors when encountering
SAF faults. For example, if the quantization range of a weight value is larger, then the value devi-
ation caused by a SAF1 fault would be larger, and we know that a large increase in the magnitude
of weights would damage the performance severely [12]. We conduct a simple experiment to ver-
ify this hypothesis: We stack several blocks to construct a network, and in each block, one of the
three operations (a SepConv 3 × 3 block, a ReLU-Conv-BN 3 × 3 block, and a ReLU-Conv-BN 1 ×
1 block) is randomly picked in every training step. The SepConv 3 × 3 block is constructed with a
DepthwiseConv 3 × 3 and two Conv 1 × 1, and the ReLU-Conv-BN 3 × 3 and ReLU-Conv-BN 1 ×
1 contain a Conv 3 × 3 and a Conv 1 × 1, respectively. After training, the weight magnitude ranges
of Conv 3 × 3, Conv 1 × 1, and DepthwiseConv 3 × 3 are 0.036±0.043, 0.112±0.121, 0.140±0.094,
respectively. Since the magnitude of the weights in 3 × 3 convolutions is smaller than that of the 1 × 1
convolutions and the depthwise convolutions, SAF weight faults would cause larger weight deviations
in a SepConv or DilConv block than in a ReLU-Conv-BN 3 × 3 block.
6 DISCUSSION
6.1 Orthogonality
Most of the previous methods exploit the inherent fault resilience capability of existing NN architectures to tolerate different types of hardware faults. In contrast, our methods improve the inherent fault resilience capability of NN models, thus effectively increasing the algorithmic fault resilience "budget" available to hardware-specific methods. Our methods are orthogonal to existing fault-tolerance methods and can be easily integrated with them, e.g., helping hardware-based methods to largely reduce their overhead.
training techniques should be incorporated when training the supernet weights to reduce the
gap between the search and final training stages, since it is a common technique for training a fault-
tolerant NN model. Note that these two amendments both concern the evaluator component. Although we choose to use the popular reinforcement learning-based controller, other controllers, such as evolution-based [39] and predictor-based ones [35], could easily be incorporated into the FTT-NAS framework. The application of other controllers is outside the scope of this work.
FPGA platform: In the MiBB feature fault model, we assume that the add operations are spatially
expanded onto independent hardware adders, which applies to the template-based designs [45].
For ISA (Instruction Set Architecture)-based accelerators [37], the NN computations are
orchestrated using instructions and time-multiplexed onto hardware units. In this case, the accumulation of the faults follows a different model and might show different preferences among architectures. In any case, the FTT-NAS framework is general and could be used with different fault models. We leave the exploration and experiments of other fault models for future work.
RRAM platform: As for the RRAM platform, this article mainly focuses on discovering fault-
tolerant neural architecture to mitigate SAFs, which have significant impacts on computing
accuracy. In addition to SAFs, the variation is another typical RRAM non-ideal factor that may
lead to inaccurate computation. There exist various circuit-level optimizations that can mitigate
the computation error caused by the RRAM variation. First, with the development of the RRAM
device technology, a large on/off ratio of RRAM devices (i.e., the resistance ratio between the high resistance state and the low resistance state) can be obtained (e.g., 10^3 [48]). A large on/off ratio makes the bit-line current differences among different computation results more distinguishable and thus improves the fault tolerance capability against variation. Second, in existing RRAM-based
accelerators, the number of activated RRAM rows at one time is limited. For example, only
four rows are activated in each cycle, which provides a sufficient signal margin against process
variation [51]. In contrast, compared with the process variation, it is more costly to mitigate SAFs
by circuit-level optimization (e.g., existing work utilizes costly redundant hardware to tolerate
SAFs [18]). Thus, we aim at tolerating SAFs from the algorithmic perspective. Nevertheless, simulating the variation is a meaningful extension of the general FTT-NAS framework for the RRAM platform, and we leave it for future work.
Combining multiple fault models: We experiment with one fault model at a time and do not combine different fault models. Our experimental results show that the architectural preferences of the adSAF and iBB feature fault models are distinct (see the discussion in Section 5.5). Fortunately, the two types of faults that we experiment with would not co-exist in the same part of an NN model: iBB feature faults (caused by FPGA LUT errors) and adSAF weight faults (in the RRAM crossbar). Nevertheless, there indeed exist scenarios in which weight and feature errors could happen simultaneously on one platform. For example, iBF in the feature buffer and SAF in the crossbar can occur simultaneously in an RRAM-based accelerator. However, on the same platform
in the same environment, the influences of different types of errors would usually be vastly different. For example, compared with the accuracy degradation caused by the SAF errors in the RRAM cells, the influence of iBF errors in the feature buffer can usually be ignored on the same device.
As a future direction, it might be interesting to combine these fault models to search for a neural
architecture to be partitioned and deployed onto a heterogeneous hardware system. In that case,
the fault patterns, along with the computation and memory access patterns of multiple platforms,
should be considered jointly.
7 CONCLUSION
In this article, we analyze the possible faults in various types of NN accelerators and formalize the
statistical fault models from the algorithmic perspective. After the analysis, the MAC-i.i.d. Bit-Bias
(MiBB) model and the arbitrary-distributed Stuck-at-Fault (adSAF) model are adopted in the neural
architecture search for tolerating feature faults and weight faults, respectively. To search for the
fault-tolerant neural network architectures, we propose the multi-objective Fault-Tolerant NAS (FT-NAS) and Fault-Tolerant Training NAS (FTT-NAS) methods. In FTT-NAS, the NAS technique is employed in conjunction with Fault-Tolerant Training (FTT). The discovered architectures, F-FTT-Net and W-FTT-Net, outperform multiple manually designed architecture baselines in fault resilience, with comparable or fewer FLOPs and parameters. Moreover, W-FTT-Net
trained under the 8bit-adSAF model can defend against other types of weight faults. Generally,
compared with FT-NAS, FTT-NAS is more effective and should be used. In addition, through the
inspection of the discovered architectures, we find that, since operation primitives differ in their
MACs, expressiveness, and weight distributions, they exhibit different resilience capabilities un-
der different fault models. The connection pattern is also shown to have influences on the fault
resilience capability of NN models.
REFERENCES
[1] Austin P. Arechiga and Alan J. Michaels. 2018. The robustness of modern deep learning architectures against single
event upset errors. In IEEE High Performance Extreme Computing Conference (HPEC’18). 1–6.
[2] Hossein Asadi and Mehdi B. Tahoori. 2007. Analytical techniques for soft error rate modeling and mitigation of FPGA-
based designs. IEEE Trans. Very Large Scale Integ. Syst. 15, 12 (Dec. 2007), 1320–1331.
[3] Bowen Baker, Otkrist Gupta, R. Raskar, and N. Naik. 2017. Accelerating neural architecture search using performance
prediction. arXiv preprint arXiv:1705.10823 (2017).
[4] Cristiana Bolchini, Antonio Miele, and Marco D. Santambrogio. 2007. TMR and partial dynamic reconfiguration to
mitigate SEU faults in FPGAs. In IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT’07).
87–95.
[5] Shekhar Borkar. 2005. Designing reliable systems from unreliable components: The challenges of transistor variability
and degradation. IEEE Micro 25, 6 (2005), 10–16.
[6] Carl Carmichael, Michael Caffrey, and Anthony Salazar. 2000. Correcting single-event upsets through Virtex partial
configuration. Xilinx Application Notes 216 (2000), v1.
[7] Ching-Yi Chen, Hsiu-Chuan Shih, Cheng-Wen Wu, C. Lin, Pi-Feng Chiu, S. Sheu, and F. Chen. 2015. RRAM defect
modeling and failure analysis based on march test and a novel squeeze-search scheme. IEEE Trans. Comput. 64, 1 (Jan.
2015), 180–190.
[8] Lerong Chen, Jiawen Li, Yiran Chen, Qiuping Deng, Jiyuan Shen, X. Liang, and L. Jiang. 2017. Accelerator-friendly
neural-network training: Learning variations and defects in RRAM crossbar. In IEEE/ACM Design, Automation and
Test in Europe Conference (DATE’17). 19–24.
[9] Tianshi Chen, Zidong Du, Ninghui Sun, J. Wang, Chengyong Wu, Yunji Chen, and Olivier Temam. 2014. DianNao:
A small-footprint high-throughput accelerator for ubiquitous machine-learning. In ACM International Conference on
Architectural Support for Programming Languages and Operating Systems (ASPLOS’14).
[10] Ping Chi, Shuangchen Li, C. Xu, Tao Zhang, J. Zhao, Yongpan Liu, Y. Wang, and Yuan Xie. 2016. PRIME: A novel
processing-in-memory architecture for neural network computation in ReRAM-based main memory. In IEEE/ACM
International Symposium on Computer Architecture (ISCA’16). IEEE Press, 27–39.
[11] Kaiyuan Guo, Shulin Zeng, Jincheng Yu, Yu Wang, and Huazhong Yang. 2019. A survey of FPGA-based neural network
inference accelerators. ACM Trans. Reconfig. Technol. Syst. 12, 1 (Mar. 2019).
[12] Ghouthi Boukli Hacene, François Leduc-Primeau, Amal Ben Soussia, Vincent Gripon, and F. Gagnon. 2019. Training
modern deep neural networks for memory-fault robustness. In IEEE International Symposium on Circuits and Systems
(ISCAS’19). 1–5.
[13] Mahta Haghi and Jeff Draper. 2009. The 90 nm double-DICE storage element to reduce single-event upsets. In IEEE
International Midwest Symposium on Circuits and Systems (MWSCAS’09). IEEE, 463–466.
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE
Conference on Computer Vision and Pattern Recognition (CVPR’16). 770–778.
[15] Zhezhi He, Jie Lin, Rickard Ewetz, J. Yuan, and Deliang Fan. 2019. Noise injection adaption: End-to-end ReRAM
crossbar non-ideal effect adaption for neural network mapping. In ACM/IEEE Design Automation Conference (DAC’19).
[16] Jörg Henkel, Lars Bauer, Nikil Dutt, Puneet Gupta, Sani Nassif, Muhammad Shafique, Mehdi Tahoori, and Norbert
Wehn. 2013. Reliable on-chip systems in the nano-era: Lessons learnt and future trends. In ACM/IEEE Design Automa-
tion Conference (DAC’13). IEEE, 1–10.
[17] Miao Hu, Hai Li, Yiran Chen, Q. Wu, and G. Rose. 2013. BSB training scheme implementation on memristor-based
circuit. In IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA’13). IEEE, 80–87.
[18] Wenqin Huangfu, Lixue Xia, Ming Cheng, Xiling Yin, Tianqi Tang, Boxun Li, Krishnendu Chakrabarty, Yuan Xie, Yu
Wang, and Huazhong Yang. 2017. Computation-oriented fault-tolerance schemes for RRAM computing systems. In
IEEE/ACM Asia and South Pacific Design Automation Conference (ASPDAC’17). IEEE, 794–799.
[19] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2017. Quantized neural net-
works: Training neural networks with low precision weights and activations. J. Mach. Learn. Res. 18 (2017).
[20] Sachhidh Kannan, Naghmeh Karimi, Ramesh Karri, and Ozgur Sinanoglu. 2015. Modeling, detection, and diagnosis
of faults in multilevel memristor memories. IEEE Trans. Comput.-aided Des. Integ. Circ. Syst. 34 (2015), 822–834.
[21] Sachhidh Kannan, Jeyavijayan Rajendran, Ramesh Karri, and Ozgur Sinanoglu. 2013. Sneak-path testing of memristor-
based memories. In 26th International Conference on VLSI Design and 12th International Conference on Embedded Sys-
tems. 386–391.
[22] Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on
Learning Representations (ICLR’15).
[23] Alex Krizhevsky. 2009. Learning multiple layers of features from tiny images. http://www.cs.toronto.edu/~kriz/cifar.
html.
[24] Binh Q. Le, Alessandro Grossi, Elisa Vianello, Tony Wu, Giusy Lama, Edith Beigne, H.-S. Philip Wong, and Subhasish
Mitra. 2018. Resistive RAM with multiple bits per cell: Array-level demonstration of 3 bits per cell. IEEE Transactions
on Electron Devices 66, 1 (2018), 641–646. DOI:10.1109/TED.2018.2879788
[25] Guanpeng Li, S. Hari, M. Sullivan, T. Tsai, K. Pattabiraman, J. Emer, and Stephen W. Keckler. 2017. Understanding
error propagation in deep learning neural network (DNN) accelerators and applications. In ACM/IEEE Supercomputing
Conference (SC’17). ACM, 8.
[26] F. Libano, B. Wilson, J. Anderson, M. Wirthlin, C. Cazzaniga, C. Frost, and P. Rech. 2019. Selective hardening for neural
networks in FPGAs. IEEE Trans. Nucl. Sci. 66 (2019), 216–222.
[27] Beiye Liu, Hai Li, Yiran Chen, Xin Li, Qing Wu, and Tingwen Huang. 2015. Vortex: Variation-aware training for
memristor X-bar. In ACM/IEEE Design Automation Conference (DAC’15). 1–6.
[28] Chenchen Liu, Miao Hu, John Paul Strachan, and Hai Li. 2017. Rescuing memristor-based neuromorphic design with
high defects. In 54th ACM/EDAC/IEEE Design Automation Conference (DAC’17). IEEE, 1–6.
[29] Hanxiao Liu, K. Simonyan, and Yiming Yang. 2019. DARTS: Differentiable architecture search. In International Con-
ference on Learning Representations (ICLR’19).
[30] Tao Liu, Wujie Wen, Lei Jiang, Yanzhi Wang, Chengmo Yang, and Gang Quan. 2019. A fault-tolerant neural network
architecture. In ACM/IEEE Design Automation Conference (DAC’19). 55:1–55:6.
[31] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg.
2016. SSD: Single shot MultiBox detector. In European Conference on Computer Vision (ECCV’16). Springer, 21–37.
[32] Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'15). 3431–3440.
[33] Ilya Loshchilov and Frank Hutter. 2017. SGDR: Stochastic gradient descent with warm restarts. In International Con-
ference on Learning Representations (ICLR’17).
[34] Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu. 2018. Neural architecture optimization. In Conference
on Neural Information Processing Systems (NIPS’18). 7816–7827.
[35] Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, and Huazhong Yang. 2020. A generic graph-based neural architec-
ture encoding scheme for predictor-based NAS. In European Conference on Computer Vision (ECCV’20).
[36] Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. 2018. Efficient neural architecture search via
parameter sharing. In International Conference on Machine Learning (ICML’18).
[37] Jiantao Qiu, J. Wang, Song Yao, K. Guo, Boxun Li, Erjin Zhou, J. Yu, T. Tang, N. Xu, S. Song, Yu Wang, and H. Yang. 2016.
Going deeper with embedded FPGA platform for convolutional neural network. In ACM International Symposium on
Field-Programmable Gate Arrays (FPGA’16). ACM, 26–35.
[38] Brandon Reagen, Udit Gupta, L. Pentecost, P. Whatmough, S. Lee, Niamh Mulholland, D. Brooks, and Gu-Yeon Wei.
2018. Ares: A framework for quantifying the resilience of deep neural networks. In ACM/IEEE Design Automation Conference (DAC'18).
[39] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. 2019. Regularized evolution for image classifier archi-
tecture search. In AAAI Conference on Artificial Intelligence, Vol. 33. 4780–4789.
[40] Christoph Schorn, Andre Guntoro, and Gerd Ascheid. 2018. Accurate neuron resilience prediction for a flexible reli-
ability management in neural network accelerators. In IEEE/ACM Design, Automation and Test in Europe Conference
(DATE’18).
[41] Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein.
2018. Poison frogs! Targeted clean-label poisoning attacks on neural networks. In Conference on Neural Information
Processing Systems (NIPS’18). 6103–6113.
[42] Xiaoxuan She and N. Li. 2017. Reducing critical configuration bits via partial TMR for SEU mitigation in FPGAs. IEEE
Trans. Nucl. Sci. 64 (2017), 2626–2632.
[43] Charles Slayman. 2011. Soft error trends and mitigation techniques in memory devices. In Reliability and Maintain-
ability Symposium. 1–5.
[44] Christian Szegedy, W. Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian J. Goodfellow, and R. Fergus. 2013. Intriguing
properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).
[45] Stylianos I. Venieris and C. Bouganis. 2019. fpgaConvNet: Mapping regular and irregular convolutional neural net-
works on FPGAs. IEEE Trans. Neural Netw. Learn. Syst. 30 (2019), 326–342.
[46] Jean-Charles Vialatte and François Leduc-Primeau. 2017. A study of deep learning robustness against computation
failures. arXiv:1704.05396 (2017).
[47] Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning.
Mach. Learn. 8, 3-4 (1992), 229–256.
[48] Jiyong Woo, Tien Van Nguyen, Jeong-Hun Kim, J. Im, Solyee Im, Yeriaron Kim, Kyeong-Sik Min, and S. Moon. 2020.
Exploiting defective RRAM array as synapses of HTM spatial pooler with boost-factor adjustment scheme for defect-
tolerant neuromorphic systems. Sci. Rep. 10 (2020).
[49] Lixue Xia, Wenqin Huangfu, Tianqi Tang, Xiling Yin, K. Chakrabarty, Yuan Xie, Y. Wang, and H. Yang. 2018. Stuck-at
fault tolerance in RRAM computing systems. IEEE J. Emerg. Select. Topics Circ. Syst. 8 (2018), 102–115.
[50] Lixue Xia, Mengyun Liu, Xuefei Ning, K. Chakrabarty, and Yu Wang. 2017. Fault-tolerant training with on-line fault
detection for RRAM-based neural computing systems. In ACM/IEEE Design Automation Conference (DAC’17). 1–6.
[51] Cheng-Xin Xue, J.-M. Hung, H.-Y. Kao, Y.-H. Huang, S.-P. Huang, F.-C. Chang, P. Chen, T.-W. Liu, C.-J. Jhang, C.-I.
Su, W.-S. Khwa, C.-C. Lo, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, Y.-D. Chih, T.-Y. J. Chang, and M.-F. Chang. 2021. A 22nm
4Mb 8b-precision ReRAM computing-in-memory macro with 11.91 to 195.7 TOPS/W for tiny AI edge devices. In IEEE
International Solid-State Circuits Conference (ISSCC’21).
[52] Zheyu Yan, Yiyu Shi, Wang Liao, M. Hashimoto, Xichuan Zhou, and Cheng Zhuo. 2020. When single event upset
meets deep neural networks: Observations, explorations, and remedies. In IEEE/ACM Asia and South Pacific Design
Automation Conference (ASPDAC’20). 163–168.
[53] Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, and Song-Chun Zhu. 2018. Interpreting CNN knowledge via
an explanatory graph. In AAAI Conference on Artificial Intelligence.
[54] Yang Zhao, X. Hu, Shuangchen Li, Jing Ye, Lei Deng, Y. Ji, Jianyu Xu, Dong Wu, and Yuan Xie. 2019. Memory Trojan
attack on neural network accelerators. In IEEE/ACM Design, Automation and Test in Europe Conference (DATE'19). 1415–
1420.
[55] Zhuoran Zhao, D. Agiakatsikas, N. H. Nguyen, E. Cetin, and O. Diessel. 2018. Fine-grained module-based error recov-
ery in FPGA-based TMR systems. ACM Trans. Reconfig. Technol. Syst. 11, 1 (2018), 4.
[56] Zhenhua Zhu, Hanbo Sun, Yujun Lin, Guohao Dai, L. Xia, Song Han, Yu Wang, and H. Yang. 2019. A configurable
multi-precision CNN computing framework based on single bit RRAM. In ACM/IEEE Design Automation Conference
(DAC’19). 1–6.
[57] Barret Zoph and Quoc V. Le. 2017. Neural architecture search with reinforcement learning. In International Conference
on Learning Representations (ICLR'17).