
Forty-Seventh Annual Allerton Conference

Allerton House, UIUC, Illinois, USA


September 30 - October 2, 2009

Secure Control Against Replay Attacks

Yilin Mo, Bruno Sinopoli ∗†

∗ The authors are with the Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA. Email: [email protected], [email protected]
† This research is supported in part by CyLab at Carnegie Mellon under grant DAAD19-02-1-0389 from the Army Research Office Foundation. The views and conclusions contained here are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either express or implied, of ARO, CMU, or the U.S. Government or any of its agencies.

Abstract

This paper analyzes the effect of replay attacks on a control system. We assume an attacker wishes to disrupt the operation of a control system in steady state. In order to inject an exogenous control input without being detected, the attacker will hijack the sensors, observe and record their readings for a certain amount of time and repeat them afterwards while carrying out his attack. This is a very common and natural attack (we have seen numerous times intruders recording and replaying security videos while performing their attack undisturbed) for an attacker who does not know the dynamics of the system but is aware of the fact that the system itself is expected to be in steady state for the duration of the attack. We assume the control system to be a discrete time linear time invariant Gaussian system applying an infinite horizon Linear Quadratic Gaussian (LQG) controller. We also assume that the system is equipped with a χ² failure detector. The main contributions of the paper, beyond the novelty of the problem formulation, consist in 1) providing conditions on the feasibility of the replay attack on the aforementioned system and 2) proposing a countermeasure that guarantees a desired probability of detection (with a fixed false alarm rate) by trading off either detection delay or LQG performance, i.e. by either decreasing control accuracy or increasing control effort.

1. Introduction

Cyber Physical Systems (CPS) refer to the embedding of widespread sensing, computation, communication and control into physical spaces [1]. Application areas are as diverse as aerospace, chemical processes, civil infrastructure, energy, manufacturing and transportation. Many of these applications are safety-critical. The availability of cheap communication technologies such as the internet makes such infrastructures susceptible to cyber security threats. National security may be affected, as infrastructures such as the power grid and the telecommunication networks are vital to the normal operation of our society. Any successful attack may significantly hamper the economy, the environment, or may even lead to loss of human life. As a result, the security of CPS is of primary importance to guarantee their safe operation. The research community has acknowledged the importance of addressing the challenge of designing secure CPS [2] [3].

The impact of attacks on cyber physical systems is addressed in [4]. The authors consider two possible classes of attacks on CPS: Denial of Service (DoS) and deception attacks. The DoS attack prevents the exchange of information, usually either sensor readings or control inputs, between subsystems, while the deception attack affects the data integrity of packets by modifying their payloads. A robust feedback control design against DoS attacks is further discussed in [5]. We feel that the deception attack can be subtler than the DoS attack, as it is in principle more difficult to detect and has not been adequately addressed. Hence, in this paper, we will develop a methodology to detect a particular kind of deception attack.

A significant amount of research effort has been carried out to analyze, detect and handle failures in CPS. Sinopoli et al. study the impact of random packet drops on controller and estimator performance [6] [7]. In [8], the author reviews several failure detection algorithms in dynamic systems. Results from robust control [9], a discipline that aims to design controllers that function properly under uncertain parameters or unknown disturbances, are applicable to some CPS scenarios. However, a large proportion of the literature assumes that the failure is either random or benign. On the other hand, a cunning attacker can carefully design his attack strategy and deceive both detectors and robust controllers. Hence, the applicability of failure detection algorithms is questionable in the presence of a smart attacker.



In this paper, we study the effect of a data replay attack on control systems. We assume an attacker wishes to disrupt the operation of a control system in steady state. In order to inject an exogenous control input without being detected, the attacker will hijack the sensors, observe and record their readings for a certain amount of time and repeat them afterwards while carrying out his attack. This is a very common and natural attack (we have seen numerous times intruders recording and replaying security videos while performing their attack undisturbed) for an attacker who does not know the dynamics of the system but is aware that the system itself is expected to be in steady state for the duration of the attack. We assume the control system to be a discrete time linear time invariant (LTI) Gaussian system applying an infinite horizon Linear Quadratic Gaussian (LQG) controller. We also assume that the system is equipped with a χ² failure detector. The main contributions of the paper, beyond the novelty of the problem formulation, consist in providing conditions on the feasibility of the replay attack on the aforementioned system and suggesting a countermeasure that guarantees a desired probability of detection (with a fixed false alarm rate) by trading off either detection delay or LQG cost, i.e. by either decreasing control accuracy or increasing control effort.

The rest of the paper is organized as follows: In Section 2, we provide the problem formulation by revisiting and adapting the Kalman filter, the LQG controller and the χ² failure detector to our scenario. In Section 3, we define the threat model of the replay attack and analyze its effect on the control schemes discussed in Section 2. In Section 4 we discuss one possible countermeasure, the efficiency of which is illustrated by several numerical examples in Section 5. Finally, Section 6 concludes the paper. The appendix contains several proofs, some of which had to be removed due to space constraints.

2. Problem Formulation

In this section we formulate the problem by deriving the Kalman filter, the LQG controller and the χ² detector for our case. We will use the notation below for the remainder of the paper.

Consider the following linear, time invariant (LTI) system whose state dynamics are given by

xk+1 = A xk + B uk + wk,    (1)

where xk ∈ ℝⁿ is the vector of state variables at time k, wk ∈ ℝⁿ is the process noise at time k and x0 is the initial state. We assume wk, x0 are independent Gaussian random variables, x0 ∼ N(x̄0, Σ), wk ∼ N(0, Q).

A sensor network is monitoring the system described in (1). At each step all the sensor readings are sent to a base station. The observation equation can be written as

yk = C xk + vk,    (2)

where yk ∈ ℝᵐ is a vector of measurements from the sensors and vk ∼ N(0, R) is the measurement noise, independent of x0 and wk.

2.1. Kalman Filter

It is well known that for the system of equations (1), (2) the Kalman filter is the optimal estimator, as it provides the minimum variance unbiased estimate of the state xk given the previous observations y0, . . . , yk. The Kalman filter is recursive and takes the following form:

x̂0∣−1 = x̄0,  P0∣−1 = Σ,    (3)
x̂k+1∣k = A x̂k∣k + B uk,  Pk+1∣k = A Pk∣k Aᵀ + Q,
Kk = Pk∣k−1 Cᵀ (C Pk∣k−1 Cᵀ + R)⁻¹,
x̂k∣k = x̂k∣k−1 + Kk (yk − C x̂k∣k−1),  Pk∣k = Pk∣k−1 − Kk C Pk∣k−1.

Although the Kalman filter uses a time varying gain Kk, it is known that this gain will converge if the system is detectable. In practice the Kalman gain usually converges in a few steps. Hence, let us define

P ≜ lim_{k→∞} Pk∣k−1,  K ≜ P Cᵀ (C P Cᵀ + R)⁻¹.    (4)

Since control systems usually run for a long time, we can assume the filter to be running at steady state from the beginning. Hence, we assume the initial condition Σ = P. In that case, the Kalman filter is a fixed gain estimator, taking the following form:

x̂0∣−1 = x̄0,  x̂k+1∣k = A x̂k∣k + B uk,  x̂k∣k = x̂k∣k−1 + K (yk − C x̂k∣k−1).
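For concreteness, the fixed gain in (4) can be computed offline from the model matrices. The following is a minimal Python sketch (assuming NumPy/SciPy; function and variable names are illustrative, not from the paper) that obtains P from the filter Riccati equation by duality and then forms K. The scalar values in the example match the system used later in Section 5.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_kalman_gain(A, C, Q, R):
    """Return (P, K) for the fixed-gain estimator in (3)-(4).

    P solves the filter DARE  P = A P A^T + Q - A P C^T (C P C^T + R)^-1 C P A^T,
    obtained from SciPy's control-form DARE via the duality A -> A^T, B -> C^T.
    """
    P = solve_discrete_are(A.T, C.T, Q, R)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    return P, K

def fixed_gain_filter_step(x_pred, y, u, A, B, C, K):
    """One step of the fixed-gain filter: measurement update, then prediction."""
    x_filt = x_pred + K @ (y - C @ x_pred)   # x̂_{k|k}
    x_pred_next = A @ x_filt + B @ u         # x̂_{k+1|k}
    return x_filt, x_pred_next

# Scalar example matching Section 5 (A = C = 1, Q = 1, R = 0.1):
A = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[0.1]])
P, K = steady_state_kalman_gain(A, C, Q, R)   # P ≈ 1.092, K ≈ 0.916
```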

2.2. Linear Quadratic Gaussian (LQG) Optimal Control

Given the state estimate x̂k∣k, the LQG controller minimizes the following objective function¹:

J = min lim_{T→∞} E[ (1/T) ∑_{k=0}^{T−1} (xkᵀ W xk + ukᵀ U uk) ],    (5)

where W, U are positive semidefinite matrices and uk is measurable with respect to y0, . . . , yk, i.e. uk is a function of previous observations. It is well known that the solution of the above minimization problem leads to a fixed gain controller, which takes the following form:

uk = u∗k = −(Bᵀ S B + U)⁻¹ Bᵀ S A x̂k∣k,    (6)

where u∗k is the optimal control input and S satisfies the following Riccati equation:

S = Aᵀ S A + W − Aᵀ S B (Bᵀ S B + U)⁻¹ Bᵀ S A.    (7)

Let us define L ≜ −(Bᵀ S B + U)⁻¹ Bᵀ S A; then u∗k = L x̂k∣k. The objective function given by the optimal estimator and controller is in our case

J = trace(S Q) + trace[(Aᵀ S A + W − S)(P − K C P)].    (8)

¹ Here we just discuss the case of the infinite horizon LQG control problem.

2.3. χ² Failure Detector

The χ² detector [10] is widely used to detect anomalies in control systems. Before introducing the detector, we will characterize the probability distribution of the residue of the Kalman filter:

Theorem 1. For the LTI system defined in (1) with Kalman filter and LQG controller, the residues yi − C x̂i∣i−1 of the Kalman filter are i.i.d. Gaussian distributed with mean 0 and covariance 𝒫, where 𝒫 = C P Cᵀ + R.

Proof. Due to space constraints, we cannot give the proof here. Please refer to [10] for the details.

By Theorem 1, we know that the probability of getting the sequence yk−T+1, . . . , yk when the system is operating normally is

P(yk−T+1, . . . , yk) = [ 1 / ((2π)^{N/2} ∣𝒫∣^{1/2}) ]^T exp(−gk/2),    (9)

where

gk = ∑_{i=k−T+1}^{k} (yi − C x̂i∣i−1)ᵀ 𝒫⁻¹ (yi − C x̂i∣i−1).    (10)

When this probability is low, it means that the system is likely to be subject to a certain failure. In order to check the probability, we only need to compute gk. Hence, the χ² detector at time k takes the following form:

gk = ∑_{i=k−T+1}^{k} (yi − C x̂i∣i−1)ᵀ 𝒫⁻¹ (yi − C x̂i∣i−1) ≶ threshold,    (11)

where T is the window size of the detection. By Theorem 1, the left side of the equation is χ² distributed with mT degrees of freedom². Hence, it is easy to calculate the false alarm rate from the χ² distribution. If gk is greater than the threshold, then the detector will trigger an alarm.

² The degrees of freedom follow from the definition of the χ² distribution. Please refer to [11] for more details.
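A minimal sketch of the detector in (10)–(11), assuming the residues yi − C x̂i∣i−1 and the covariance 𝒫 are already available from the filter above (Python; names are illustrative): the threshold is taken from the χ² distribution with mT degrees of freedom so that a desired per-step false alarm rate is met.

```python
import numpy as np
from scipy.stats import chi2

def chi2_statistic(residues, P_script):
    """g_k as in (10): sum of r_i^T 𝒫^{-1} r_i over the last T residues."""
    P_inv = np.linalg.inv(P_script)
    return sum((r.T @ P_inv @ r).item() for r in residues)

def chi2_threshold(m, T, false_alarm_rate):
    """Threshold with P(g_k > threshold) = false_alarm_rate when no attack occurs,
    using the fact that g_k is chi-square with m*T degrees of freedom."""
    return chi2.ppf(1.0 - false_alarm_rate, df=m * T)

# Example: m = 1 sensor, window T = 5, 5% false alarm rate (values as in Section 5).
threshold = chi2_threshold(m=1, T=5, false_alarm_rate=0.05)
# At each step, keep the last T residues y_i - C x̂_{i|i-1} and raise an alarm whenever
# chi2_statistic(window, P_script) > threshold.
```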

3. Replay Attack against Control System

In this section, we assume that a malicious third party wants to break the control system described in Section 2. We will define an attack model similar to the replay attack in computer security and analyze the feasibility of such an attack on the control system. We will later generalize our analysis to other classes of control systems.

We suppose the attacker has the capability to perform the following actions:

1. It can inject a control input uᵃk into the system at any time.

2. It knows all sensor readings and can modify them. We will denote the reading modified by the attacker by y′k.

Given these abilities, the attacker will implement the following attack strategy, which can be divided into two stages:

1. The attacker records a sufficient number of yk's without giving any input to the system.

2. The attacker gives a sequence of desired control inputs while replaying the previously recorded yk's.

Remark 1. The attack on the sensors can be done by breaking the cryptography algorithm. Another way to perform an attack, which we think is much harder to defend against, is to induce false sensor readings by changing the local conditions around the sensor. Such an attack may be easy to carry out when sensors are spatially distributed in remote locations.

Remark 2. We assume that the attacker has control over all the sensors. This could be accomplished for a small system consisting of a few sensors. For a large system, usually the whole system can be broken down into several small and weakly coupled subsystems. For example, consider the temperature control problem in a building. One can think of the temperature in each room as a subsystem, and the subsystems hardly affect each other. Hence, the attacker only needs to control the sensors of a small subsystem in order to perform the replay attack on that subsystem.

Remark 3. The attack strategy is fairly simple. In principle, if the attacker has more knowledge of the system model and the controller design, it can perform a much more subtle and powerful attack. However, identifying the underlying model of the system is usually a hard problem and not all attackers have the knowledge and power to do so. Hence, we will only focus on a simple attack strategy which is easy to implement.

Remark 4. When the system is under attack, the central computer will be unable to perform closed loop control on the system since the sensory information is not available. Hence, we cannot guarantee any control performance of the system under this attack. Any counter-attack will need to be able to detect the attack.

It is worth noticing that in the attacking stage, the goal of the attacker is to make the fake readings y′k look like normal yk's. Replaying the previous yk's is just the easiest way to achieve this goal. There are other methods, such as machine learning, to generate a fake sequence of readings. In order to provide a unified framework to analyze such kinds of attacks, we can think of the y′k's as the output of the following virtual system (this does not necessarily mean that the attacker runs a virtual system):

x′k+1 = A x′k + B u′k + w′k,  y′k = C x′k + v′k,
x̂′k+1∣k = A x̂′k∣k + B u′k,  x̂′k∣k = x̂′k∣k−1 + K (y′k − C x̂′k∣k−1),
u′k = L x̂′k∣k,

with initial conditions x′0 and x̂′0∣−1. If the attacker actually learns the system, then the virtual system will be the system the attacker runs. For the replay attack, suppose that the attacker records the sequence of yk's starting from time t. Then the virtual system is just a time shifted version of the real system, with x′k = xt+k, x̂′k∣k = x̂t+k∣t+k (note that the attacker may not know xt+k and x̂t+k∣t+k).

Suppose the system is under attack and the defender is using the χ² detector to perform intrusion detection. We will rewrite the estimate of the Kalman filter x̂k∣k−1 in the following recursive way:

x̂k+1∣k = A x̂k∣k + B uk = (A + BL) x̂k∣k
        = (A + BL)[x̂k∣k−1 + K (y′k − C x̂k∣k−1)]
        = (A + BL)(I − KC) x̂k∣k−1 + (A + BL) K y′k.    (12)

For the virtual system, it is easy to see that the same equation holds true for x̂′k∣k−1:

x̂′k+1∣k = (A + BL)(I − KC) x̂′k∣k−1 + (A + BL) K y′k.    (13)

Define 𝒜 ≜ (A + BL)(I − KC); then³

x̂k∣k−1 − x̂′k∣k−1 = 𝒜^k (x̂0∣−1 − x̂′0∣−1).    (14)

Define x̂0∣−1 − x̂′0∣−1 ≜ ζ. Now write the residue as

y′k − C x̂k∣k−1 = (y′k − C x̂′k∣k−1) + C 𝒜^k ζ,    (15)

and

gk = ∑_{i=k−T+1}^{k} [ (y′i − C x̂′i∣i−1)ᵀ 𝒫⁻¹ (y′i − C x̂′i∣i−1) + 2 (y′i − C x̂′i∣i−1)ᵀ 𝒫⁻¹ C 𝒜^i ζ + ζᵀ (𝒜^i)ᵀ Cᵀ 𝒫⁻¹ C 𝒜^i ζ ].    (16)

By the definition of the virtual system, we know that y′k − C x̂′k∣k−1 follows exactly the same distribution as yk − C x̂k∣k−1. Hence, if 𝒜 is stable, the second and third terms in (16) will converge to 0. As a result, y′k − C x̂k∣k−1 will converge to the same distribution as yk − C x̂k∣k−1, and the detection rate given by the χ² detector will be the same as the false alarm rate. In other words, the detector is useless.

On the other hand, if 𝒜 is unstable, the attacker cannot replay y′k for long, since gk will soon become unbounded. In this case, the system is resilient to the replay attack, as the detector will be able to detect the attack.

³ For simplicity, here we consider the time the attack begins as time 0.
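The feasibility condition above therefore amounts to checking whether 𝒜 = (A + BL)(I − KC) is stable. A small sketch, under the same NumPy/SciPy assumptions as before (names illustrative), assembles 𝒜 from the steady-state gains and inspects its spectral radius:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def replay_attack_feasible(A, B, C, Q, R, W, U):
    """Return (vulnerable, spectral_radius): the replay attack of Section 3 is
    feasible when 𝒜 = (A + B L)(I - K C) is stable, i.e. ρ(𝒜) < 1."""
    # Steady-state Kalman gain K, see (4).
    P = solve_discrete_are(A.T, C.T, Q, R)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    # LQG feedback gain L, see (6)-(7).
    S = solve_discrete_are(A, B, W, U)
    L = -np.linalg.inv(B.T @ S @ B + U) @ B.T @ S @ A
    script_A = (A + B @ L) @ (np.eye(A.shape[0]) - K @ C)
    rho = max(abs(np.linalg.eigvals(script_A)))
    return rho < 1.0, rho

# Scalar example of Section 5: A = B = C = 1, Q = W = U = 1, R = 0.1 gives
# ρ(𝒜) ≈ 0.032, so the unmodified LQG loop is vulnerable to the replay attack.
one = lambda v: np.array([[v]])
vulnerable, rho = replay_attack_feasible(one(1.0), one(1.0), one(1.0),
                                         one(1.0), one(0.1), one(1.0), one(1.0))
```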

It turns out the feasibility result derived for a specific estimator, controller and detector implementation is actually applicable to virtually any system. In fact, we can generalize the technique used here to analyze more general controllers, estimators and detectors. Suppose the state of the estimator at time k is sk and it evolves according to

sk+1 = f(sk, yk).    (17)

Define the norm of f to be

∥f∥ ≜ sup_{∆s≠0, y, s} ∥f(s, y) − f(s + ∆s, y)∥ / ∥∆s∥.    (18)

Suppose that the defender is using the following criterion to perform intrusion detection:

g(sk, yk) ≶ threshold,    (19)

where g is an arbitrary continuous function.

Theorem 2. If ∥f∥ ≤ 1, then

lim_{k→∞} g(sk, y′k) = g(s′k, y′k),    (20)

where s′k is the state variable of the virtual system. The detection rate βk at time k converges to

lim_{k→∞} βk = αk,    (21)

where αk is the false alarm rate of the virtual system at time k.

Proof. Due to space limits, we will just give an outline of the proof. First, ∥f∥ ≤ 1 ensures that sk converges to s′k. By the continuity of g, g(sk, y′k) converges to g(s′k, y′k). The detection rate of the system and the false alarm rate of the virtual system are given by

βk = Prob(g(sk, y′k) > threshold),  αk = Prob(g(s′k, y′k) > threshold).    (22)

Hence βk converges to αk.

The LQG controller, Kalman filter and χ² detector become just a special case, where the state sk of the estimator at time k is yk−T+1, . . . , yk and x̂k−T+1∣k−T, . . . , x̂k∣k−1. The f function is given by (3) and g is given by (11).

Remark 5. The convergence of the detection rate under the replay attack to the false alarm rate indicates that the information given by the detector will asymptotically go to 0. In other words, the detector becomes useless and the system is not resilient to the replay attack.

4. Detection of Replay Attack

As discussed in the previous section, there exist control systems that are not resilient to the replay attack. In this section, we want to design a detection strategy against replay attacks. Throughout this section we will always assume that 𝒜 is stable.

The main problem of the LQG controller and Kalman filter is that they use a fixed gain, or a gain that converges very fast. Hence, the whole control system is static in some sense. In order to detect the replay attack, we redesign the controller as

uk = u∗k + ∆uk,    (23)

where u∗k is the optimal LQG control signal and the ∆uk's are drawn from an i.i.d. Gaussian distribution with zero mean and covariance 𝒬, chosen to be independent of u∗k. Figure 1 shows the diagram of the whole system.

Figure 1. System Diagram: actuator, plant and sensor form the forward path; the attacker monitors and controls the sensor channel (yk / y′k) and can inject uᵃk at the actuator; the controller and estimator (fed by the delayed input uk−1 through a z⁻¹ block) close the loop, and the detector operates on the residue yk − C x̂k∣k−1.

We add ∆uk as an authentication signal. We choose it to be zero mean because we do not wish to introduce any bias to xk. It is clear that without the attack, the controller is no longer optimal in the LQG sense, which means that in order to detect the attack, we need to sacrifice control performance.
The following theorem characterizes the loss of LQG performance when we inject ∆uk into the system:

Theorem 3. The LQG performance after adding ∆uk is given by

J′ = J + trace[(U + Bᵀ S B) 𝒬].    (24)

Proof. See the appendix.

We now wish to consider the χ² detector after adding the random control signal. The following theorem shows the effectiveness of the detector under the modified control scheme.

Theorem 4. In the absence of an attack,

E[(yk − C x̂k∣k−1)ᵀ 𝒫⁻¹ (yk − C x̂k∣k−1)] = m.    (25)

Under attack,

lim_{k→∞} E[(y′k − C x̂k∣k−1)ᵀ 𝒫⁻¹ (y′k − C x̂k∣k−1)] = m + 2 trace(Cᵀ 𝒫⁻¹ C 𝒰),    (26)

where 𝒰 is the solution of the following Lyapunov equation:

𝒰 − B 𝒬 Bᵀ = 𝒜 𝒰 𝒜ᵀ.    (27)

Proof. The first equation is trivial to prove using Theorem 1. Rewrite x̂k+1∣k as

x̂k+1∣k = 𝒜 x̂k∣k−1 + (A + BL) K y′k + B ∆uk.    (28)

For the virtual system,

x̂′k+1∣k = 𝒜 x̂′k∣k−1 + (A + BL) K y′k + B ∆u′k.    (29)

Hence,

x̂k∣k−1 − x̂′k∣k−1 = 𝒜^k (x̂0∣−1 − x̂′0∣−1) + ∑_{i=0}^{k−1} 𝒜^{k−i−1} B (∆ui − ∆u′i).    (30)

As a result,

y′k − C x̂k∣k−1 = (y′k − C x̂′k∣k−1) + C 𝒜^k (x̂0∣−1 − x̂′0∣−1) + C ∑_{i=0}^{k−1} 𝒜^{k−i−1} B (∆ui − ∆u′i).    (31)

The first term has exactly the same distribution as yk − C x̂k∣k−1. The second term will converge to 0 when 𝒜 is stable. Also, ∆ui is independent of the virtual system, and for the virtual system y′k − C x̂′k∣k−1 is independent of ∆u′i. Hence

lim_{k→∞} Cov(y′k − C x̂k∣k−1) = lim_{k→∞} Cov(y′k − C x̂′k∣k−1) + ∑_{i=0}^{∞} Cov(C 𝒜^i B ∆ui) + ∑_{i=0}^{∞} Cov(C 𝒜^i B ∆u′i)
    = 𝒫 + 2 ∑_{i=0}^{∞} C 𝒜^i B 𝒬 Bᵀ (𝒜^i)ᵀ Cᵀ.

By the definition of 𝒰, it is easy to see that

𝒰 = ∑_{i=0}^{∞} 𝒜^i B 𝒬 Bᵀ (𝒜^i)ᵀ.

Hence, lim_{k→∞} Cov(y′k − C x̂k∣k−1) = 𝒫 + 2 C 𝒰 Cᵀ and

lim_{k→∞} E[(y′k − C x̂k∣k−1)ᵀ 𝒫⁻¹ (y′k − C x̂k∣k−1)] = trace[ lim_{k→∞} Cov(y′k − C x̂k∣k−1) × 𝒫⁻¹ ] = m + 2 trace(Cᵀ 𝒫⁻¹ C 𝒰).    (32)

Corollary 1. In the absence of an attack, the expectation of the χ² detector statistic is

E(gk) = mT.    (33)

Under attack, the asymptotic expectation becomes

lim_{k→∞} E(gk) = mT + 2 trace(Cᵀ 𝒫⁻¹ C 𝒰) T.    (34)

The difference in the expectation of gk illustrates that the detection rate will not converge to the false alarm rate, which will also be shown in the next section. Another thing worth noticing is that to design 𝒬, one possible criterion is to minimize J′ − J = trace[(U + Bᵀ S B) 𝒬] while maximizing trace(Cᵀ 𝒫⁻¹ C 𝒰).
5. Simulation Result

In this section we provide some simulation results on the detection of the replay attack. Consider the control system described in Section 2 controlling the temperature inside one room. Let Tk be the temperature of the room at time k and T∗ the desired temperature. Define the state as xk = Tk − T∗. Suppose that

xk+1 = xk + uk + wk,    (35)

where uk is the input from the air conditioning unit and wk is the process noise. Suppose that just one sensor is measuring the temperature, so that

yk = xk + vk,    (36)

where vk is the measurement noise. We choose R = 0.1, Q = W = U = 1. One can compute that P = 1.092, K = 0.9161, L = −0.6180. Hence 𝒜 = 0.0321 and the system is vulnerable to the replay attack. The LQG cost is J = 1.7076, and J′ = J + 2.618 𝒬.

We will first fix the window size T = 5 and show the detection rate for different 𝒬's. We assume that the attacker records the yk's from time 1 to time 10 and then replays them from time 11 to time 20. We also fix the false alarm rate to be 5% at each step.

Figure 2. Detection rate (vertical axis) at each time step k = 10, . . . , 20 for 𝒬 = 0.6 (blue dashed line), 𝒬 = 0.4 (brown dotted line), 𝒬 = 0.2 (red dash-dot line) and 𝒬 = 0 (black solid line).

Figure 2 shows the detection rate at each time step for different 𝒬's. Each detection rate is the average of 10,000 experiments. Note that the attack starts at time 11. Hence, each line starts at the false alarm rate of 5% at time 10. One can see that without the additional input signal, the detection rate soon converges to 5%, which proves that the detector is inefficient for the replay attack. With 𝒬 = 0.6, the loss of LQG performance is 2.618 × 0.6/1.7076 = 91% with respect to the optimal LQG cost. As a result of the high control performance loss, one can get more than a 35% detection rate at each step.

Next we would like to fix 𝒬 = 0.6 and compare the detection rate for different window sizes T. We still assume the attack starts at time 11 and the false alarm rate is 5%. Figure 3 shows the detection rate for the different window sizes.

Figure 3. Detection rate (vertical axis) at each time step k = 10, . . . , 20 for T = 5 (blue dashed line), T = 4 (brown dotted line), T = 3 (red dash-dot line) and T = 2 (black solid line).

It is worth noticing that choosing a small window size makes the detector respond faster to the replay attack. However, the asymptotic detection rate will be lower than that of a larger window size. On the other hand, by the law of large numbers, the asymptotic detection rate converges to 1 as T increases; however, the detector will then respond very slowly to the replay attack. For more details on the choice of window size, please refer to [8].
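The experiment above can be reproduced with a short Monte Carlo simulation. The sketch below is not the authors' code; it uses the scalar example with 𝒬 = 0.6 and T = 5, starts the state at the origin for simplicity, replays the readings recorded during k = 1, . . . , 10 over k = 11, . . . , 20, and tallies how often the windowed χ² statistic exceeds the 5% threshold.

```python
import numpy as np
from scipy.stats import chi2

# Scalar room-temperature system: A = B = C = 1, Q = W = U = 1, R = 0.1,
# steady-state gains K = 0.9161, L = -0.6180, 𝒫 = C P Cᵀ + R ≈ 1.192.
A, B, C = 1.0, 1.0, 1.0
Q, R = 1.0, 0.1
K, L = 0.9161, -0.6180
P_script = 1.092 + R
T_win, Q_auth = 5, 0.6
threshold = chi2.ppf(0.95, df=1 * T_win)        # m = 1 sensor -> mT degrees of freedom
steps, runs = 20, 10_000
rng = np.random.default_rng(1)

alarms = np.zeros(steps + 1)
for _ in range(runs):
    x, x_pred = 0.0, 0.0                        # start at the origin for simplicity
    recorded, residues = [], []
    for k in range(1, steps + 1):
        y_true = C * x + rng.normal(0.0, np.sqrt(R))
        recorded.append(y_true)
        y_seen = recorded[k - 11] if k > 10 else y_true   # replayed reading after k = 10
        r = y_seen - C * x_pred                 # residue seen by the detector
        residues.append(r)
        g = sum(ri * ri / P_script for ri in residues[-T_win:])
        alarms[k] += g > threshold
        x_filt = x_pred + K * r
        u = L * x_filt + rng.normal(0.0, np.sqrt(Q_auth))  # u*_k + Δu_k, eq. (23)
        x = A * x + B * u + rng.normal(0.0, np.sqrt(Q))
        x_pred = A * x_filt + B * u

detection_rate = alarms / runs  # ≈ 5% once the window fills, noticeably higher for k >= 11
```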

6. Conclusions

In this paper we defined a replay attack model on cyber physical systems and analyzed the performance of the control system under the attack. We discovered that for some control systems, the classical estimation, control and failure detection strategies are not resilient to the replay attack. For such systems, we provide a technique that can improve the detection rate at the expense of control performance.

7. Appendix: Proof of Theorem 3

To simplify notation, let us first define the sigma algebra generated by yk, . . . , y0, ∆uk−1, . . . , ∆u0 to be ℱk. Due to space limits, we will only outline the proofs. Before proving Theorem 3, we need the following lemmas:

Lemma 1. The following equations about the Kalman filter are true:

x̂k∣k = E(xk ∣ ℱk),  Pk∣k = E(ek∣k ek∣kᵀ ∣ ℱk),

where ek∣k = xk − x̂k∣k.

Lemma 2. The following equation is true:

E(xkᵀ 𝒮 xk ∣ ℱk) = trace(𝒮 Pk∣k) + x̂k∣kᵀ 𝒮 x̂k∣k,    (37)

where 𝒮 is any positive semidefinite matrix.

Now define

JN ≜ min E[ ∑_{i=0}^{N−1} (xiᵀ W xi + uiᵀ U ui) ].    (38)

By the definition of J′, we know that J′ = lim_{N→∞} JN/N. Now fix N and define

Vk(xk) ≜ min E[ ∑_{i=k}^{N−1} (xiᵀ W xi + uiᵀ U ui) ∣ ℱk ],    (39)

with VN(xN) = 0. By definition, we know that E(V0) = JN. Also, from dynamic programming, we know that Vk satisfies the following backward recursive equation:

Vk(xk) = min_{u∗k} E[ xkᵀ W xk + ukᵀ U uk + Vk+1(xk+1) ∣ ℱk ].    (40)

Let us define

Sk−1 ≜ Aᵀ Sk A + W − Aᵀ Sk B (Bᵀ Sk B + U)⁻¹ Bᵀ Sk A,
ck−1 ≜ ck + trace[(W + Aᵀ Sk A − Sk−1) Pk−1∣k−1] + trace(Sk Q) + trace[(Bᵀ Sk B + U) 𝒬],

with SN = 0, cN = 0.

Lemma 3. Vk(xk) is given by

Vk(xk) = E[xkᵀ Sk xk ∣ ℱk] + ck,  k = N, . . . , 0.    (41)

Proof. We will use backward induction to prove (41). First, it is trivial to see that VN = 0 satisfies (41). Now suppose that Vk+1 satisfies (41); then by (40)

Vk(xk) = min E[ xkᵀ W xk + ukᵀ U uk + Vk+1(xk+1) ∣ ℱk ]
       = min E[ xkᵀ W xk + (u∗k + ∆uk)ᵀ U (u∗k + ∆uk) + xk+1ᵀ Sk+1 xk+1 + ck+1 ∣ ℱk ].

First, we know that u∗k is measurable with respect to ℱk and ∆uk is independent of ℱk, hence

E[(u∗k + ∆uk)ᵀ U (u∗k + ∆uk) ∣ ℱk] = (u∗k)ᵀ U u∗k + trace(U 𝒬).    (42)

Then let us write xk+1 as

xk+1 = A xk + B u∗k + B ∆uk + wk.

By the fact that ∆uk, wk are independent of A xk + B u∗k, one finally gets

E(xk+1ᵀ Sk+1 xk+1 ∣ ℱk) = E(xkᵀ Aᵀ Sk+1 A xk ∣ ℱk) + 2 (u∗k)ᵀ Bᵀ Sk+1 A x̂k∣k + (u∗k)ᵀ Bᵀ Sk+1 B u∗k + trace(Sk+1 Q) + trace(Bᵀ Sk+1 B 𝒬).    (43)

By (42) and (43), we know that

Vk(xk) = min_{u∗k} [ (u∗k)ᵀ (U + Bᵀ Sk+1 B) u∗k + 2 (u∗k)ᵀ Bᵀ Sk+1 A x̂k∣k ] + E[xkᵀ (W + Aᵀ Sk+1 A) xk ∣ ℱk] + trace(Sk+1 Q) + E(ck+1 ∣ ℱk) + trace[(Bᵀ Sk+1 B + U) 𝒬].

Hence, the optimal u∗k is

u∗k = −(U + Bᵀ Sk+1 B)⁻¹ Bᵀ Sk+1 A x̂k∣k,    (44)

and Vk(xk) is

Vk(xk) = −x̂k∣kᵀ Aᵀ Sk+1 B (Bᵀ Sk+1 B + U)⁻¹ Bᵀ Sk+1 A x̂k∣k + E[xkᵀ (W + Aᵀ Sk+1 A) xk ∣ ℱk] + trace(Sk+1 Q) + ck+1 + trace[(Bᵀ Sk+1 B + U) 𝒬]
       = E(xkᵀ Sk xk ∣ ℱk) + trace[(W + Aᵀ Sk+1 A − Sk) Pk∣k] + ck+1 + trace(Sk+1 Q) + trace[(Bᵀ Sk+1 B + U) 𝒬]
       = E(xkᵀ Sk xk ∣ ℱk) + ck,    (45)

which completes the proof⁴.

Now we are ready to prove Theorem 3.

Proof of Theorem 3. Since

JN = E(V0) = E(x0ᵀ S0 x0) + trace[ ∑_{k=0}^{N−1} (W + Aᵀ Sk+1 A − Sk) Pk∣k ] + trace( ∑_{k=0}^{N−1} Sk+1 Q ) + trace[ ∑_{k=0}^{N−1} (Bᵀ Sk+1 B + U) 𝒬 ],

we know that

J′ = lim_{N→∞} JN/N = trace[(W + Aᵀ S A − S)(P − K C P)] + trace(S Q) + trace[(Bᵀ S B + U) 𝒬] = J + trace[(Bᵀ S B + U) 𝒬].    (46)

⁴ We use Lemma 2 in the second equality.

References

[1] E. A. Lee, "Cyber physical systems: Design challenges," EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2008-8, Jan 2008. [Online]. Available: http://www.eecs.berkeley.edu/Pubs/TechRpts/2008/EECS-2008-8.html
[2] E. Byres and J. Lowe, "The myths and facts behind cyber security risks for industrial control systems," VDE Congress, 2004.
[3] A. A. Cárdenas, S. Amin, and S. Sastry, "Research challenges for the security of control systems," in HOTSEC'08: Proceedings of the 3rd Conference on Hot Topics in Security. Berkeley, CA, USA: USENIX Association, 2008, pp. 1–6.
[4] A. A. Cárdenas, S. Amin, and S. Sastry, "Secure control: Towards survivable cyber-physical systems," in Distributed Computing Systems Workshops, 2008. ICDCS '08. 28th International Conference on, June 2008, pp. 495–500.
[5] S. Amin, A. Cárdenas, and S. S. Sastry, "Safe and secure networked control systems under denial-of-service attacks," in Hybrid Systems: Computation and Control, Lecture Notes in Computer Science. Springer Berlin / Heidelberg, April 2009, pp. 31–45. [Online]. Available: http://chess.eecs.berkeley.edu/pubs/597.html
[6] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. Jordan, and S. Sastry, "Kalman filtering with intermittent observations," Automatic Control, IEEE Transactions on, vol. 49, no. 9, pp. 1453–1464, Sept. 2004.
[7] L. Schenato, B. Sinopoli, M. Franceschetti, K. Poolla, and S. Sastry, "Foundations of control and estimation over lossy networks," Proceedings of the IEEE, vol. 95, no. 1, pp. 163–187, Jan. 2007.
[8] A. Willsky, "A survey of design methods for failure detection in dynamic systems," Automatica, vol. 12, pp. 601–611, Nov. 1976.
[9] R. Stengel and L. Ryan, "Stochastic robustness of linear time-invariant control systems," Automatic Control, IEEE Transactions on, vol. 36, no. 1, pp. 82–87, Jan. 1991.
[10] R. Mehra and J. Peschon, "An innovations approach to fault detection and diagnosis in dynamic systems," Automatica, vol. 7, pp. 637–660, 1971.
[11] L. L. Scharf, Statistical Signal Processing: Detection, Estimation, and Time Series Analysis. New York: Addison-Wesley Publishing Co., 1990.

