On Optimal Deadlock Detection Scheduling
Yibei Ling, Shigang Chen
$n_D'(t_p) > 0$, $t_p \geq 0$, and (3) bounded: $n_D(\infty) \leq n$, where $n_D'(t_p)$ is the derivative of $n_D(t_p)$. The first property states that the initial deadlock size at $t_p = 0$ is zero. The second property reflects the fact that the number of blocked processes in the deadlock increases monotonically with the deadlock persistence time $t_p$, and the third property indicates that the eventual deadlock size is bounded by the total number of distributed processes. For ease of presentation, we drop the subscript $p$ hereafter.
Now let us revisit the message complexity achieved by the deadlock resolution algorithm proposed by Mendivil et al. [7], which is $O(m\,n_D^2) = O(n\,n_D^2)$, where $m$ is the number of processes having priority values greater than those of the deadlocked processes. Notice that the deadlock size, $n_D$, is a function of the deadlock persistence time. To make this dependency concrete, the message overhead can be written as $c\,n\,n_D^2(t)$ for some constant $c$. This result will be used later to derive the optimal frequency of deadlock detection scheduling.
4 Mathematical Formulation
In this section, we begin with a generic cost model that accounts for both deadlock detection and deadlock resolution and that is independent of the deadlock detection/resolution algorithms being used. We then prove the existence and uniqueness of an optimal deadlock detection frequency that minimizes the long-run mean average cost in terms of the message complexities of the best known deadlock detection/resolution algorithms.

In this paper we choose message complexity as the performance metric for measuring the detection/resolution cost. The reason is that communication overhead, as compared with processing speed and storage space, is generally the dominant factor that affects overall system performance in a distributed system [26, 10, 13, 14].
Note that the worst-case message complexity can normally be expressed as a polynomial of $n$. The per-detection cost is denoted as $C_D$. The resolution cost for a deadlock is denoted as $C_R(t)$, which is a function of the deadlock persistence time $t$. In general, the resolution cost is a polynomial of $n_D(t)$. For example, the deadlock resolution cost for Mendivil's algorithm [7] is $c\,n\,n_D^2(t)$. Because $n_D(t)$ is a monotonically increasing function of the deadlock persistence time, $C_R(t)$ is also monotonically increasing with the deadlock persistence time. We assume that deadlock formation follows a Poisson process for two reasons. First, the Poisson process is widely used to approximate a sequence of events that occur randomly and independently. Second, the Poisson process is mathematically tractable, which allows us to characterize the essential aspects of complicated processes while keeping the problem analytically tractable.
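To make the cost model tangible, the sketch below expresses $C_D$ and $C_R(t)$ as Python functions. The detection cost $2n^2$ and the resolution form $c\,n\,n_D^2(t)$ are the quantities cited from [14] and [7]; the exponential shape of $n_D(t)$ and all constants are assumptions made purely for illustration.

```python
import math

# Hypothetical parameters, chosen only for illustration.
n = 1000   # total number of distributed processes
c = 1.0    # proportionality constant in the resolution cost

def n_D(t):
    """Deadlock size after persistence time t: an assumed continuous model
    satisfying n_D(0) = 0, strictly increasing, and bounded above by n."""
    return n * (1.0 - math.exp(-t))

def C_D():
    """Per-detection message cost, 2n^2 for the detection algorithm of [14]."""
    return 2 * n * n

def C_R(t):
    """Resolution cost c*n*n_D(t)^2 (the form cited for [7]); it increases
    monotonically with the deadlock persistence time t."""
    return c * n * n_D(t) ** 2

print(C_D(), round(C_R(0.0), 1), round(C_R(1.0), 1), round(C_R(10.0), 1))
```

Any other deadlock-size function with the three properties above could be substituted without changing the analysis that follows.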
The following theorem presents the long-run mean average cost of deadlock handling in
connection with the rate of deadlock formation and the frequency of deadlock detection.
Theorem 1 Suppose deadlock formation follows a Poisson process with rate $\lambda$. The long-run mean average cost of deadlock handling, denoted by $C(T)$, is
\[
C(T) = \frac{C_D}{T} + \frac{\lambda \int_0^T C_R(t)\,dt}{T}, \tag{1}
\]
where the frequency of deadlock detection scheduling is $1/T$.
Proof: Let $X_i$, $i \geq 1$, be the interarrival times of independent deadlock formations, where the random variables $X_i$, $i \geq 1$, are independent and exponentially distributed with mean $1/\lambda$. Define $S_0 = 0$ and $S_n = \sum_{i=1}^{n} X_i$, where $S_n$ represents the time instant at which the $n$th independent deadlock occurs.
Let $N(t) = \sup\{n : S_n \leq t\}$ represent the number of deadlock occurrences within the time interval $(0, t]$. The long-run mean average cost is
\[
\lim_{t\to\infty} \frac{E(\text{random cost in } (0, t])}{t}, \tag{2}
\]
where $E$ is the expectation function. In order to associate this cost with the deadlock detection frequency ($1/T$), we partition the time interval $(0, t]$ into non-overlapping subintervals of length $T$. Let $\Phi_k(T)$ be the cost of deadlock handling on the subinterval $((k-1)T, kT]$, $k > 0$; $\Phi_k(T)$ is a random variable. According to the stationary and independent increments of the Poisson process [25], $E(\Phi_i(T)) = E(\Phi_j(T))$, $i \neq j$. The long-run mean average cost becomes
\[
C(T) = \lim_{t\to\infty} \frac{E(\text{random cost in } (0, t])}{t}
     = \lim_{t\to\infty} \frac{E\big(\sum_{k=1}^{\lfloor t/T \rfloor} \Phi_k(T)\big)}{t}
     = \lim_{t\to\infty} \frac{E\big(\lfloor \tfrac{t}{T} \rfloor\, \Phi_1(T)\big)}{t}
     = \frac{E(\Phi_1(T))}{T}, \tag{3}
\]
where $\lfloor x \rfloor$ is the floor function of $x$.
The cost $\Phi_1(T)$ on the interval $(0, T]$ is the sum of a deadlock detection cost $C_D$ and a deadlock resolution cost for those deadlocks independently formed within the interval $(0, T]$. For the $i$th deadlock formed at time $S_i \leq T$, the resolution cost $C_R(T - S_i)$ is a function of the deadlock persistence time $T - S_i$. Hence, the accrued total cost over $(0, T]$ is
\[
\Phi_1(T) = C_D + \sum_{i=1}^{N(T)} C_R(T - S_i)\, I_{\{N(T)>0\}}, \tag{4}
\]
where $I_{\{\cdot\}}$ is the indicator function whose value is 1 (or 0) if the predicate is true (or false).
In particular, the deadlock resolution cost on the interval $(0, T]$ is
\[
\sum_{i=1}^{N(T)} C_R(T - S_i)\, I_{\{N(T)>0\}} = \sum_{i=1}^{\infty} C_R(T - S_i)\, I_{\{S_i \leq T\}}, \tag{5}
\]
\[
E\Big[ C_R(T - S_i)\, I_{\{S_i \leq T\}} \Big] = \int_0^T C_R(T - t)\, f_i(t)\,dt, \tag{6}
\]
where $f_i(t)$ is the probability density function of $S_i$, which follows the gamma distribution given below:
\[
f_i(t) = \frac{\lambda^i}{(i-1)!}\, t^{\,i-1} e^{-\lambda t}, \quad t > 0. \tag{7}
\]
Substituting Eq(7) into Eq(6) gives rise to
\[
E\Big[ C_R(T - S_i)\, I_{\{S_i \leq T\}} \Big] = \int_0^T C_R(T - t)\, \frac{\lambda^i}{(i-1)!}\, t^{\,i-1} e^{-\lambda t}\,dt. \tag{8}
\]
The expected total resolution cost over the time interval $(0, T]$ is
\[
E\Big(\sum_{i=1}^{N(T)} C_R(T - S_i)\, I_{\{N(T)>0\}}\Big)
 = \sum_{i=1}^{\infty} \int_0^T C_R(T - t)\, \frac{\lambda^i t^{\,i-1}}{(i-1)!}\, e^{-\lambda t}\,dt
 = \int_0^T C_R(T - t)\, \lambda e^{-\lambda t} \Big(\sum_{i=1}^{\infty} \frac{(\lambda t)^{i-1}}{(i-1)!}\Big)\,dt
 = \lambda \int_0^T C_R(T - t)\,dt
 = \lambda \int_0^T C_R(t)\,dt. \tag{9}
\]
Combining Eqs(3), (4), and (9) yields
\[
C(T) = \frac{E(\Phi_1(T))}{T}
     = \frac{C_D}{T} + \frac{\lambda \int_0^T C_R(T - t)\,dt}{T}
     = \frac{C_D}{T} + \frac{\lambda \int_0^T C_R(t)\,dt}{T}. \tag{10}
\]
Theorem 1 is thus established.
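Theorem 1 can be spot-checked numerically. The sketch below (using the hypothetical deadlock-size model from the earlier snippet and arbitrarily chosen $\lambda$ and $T$) simulates many detection periods, adds up the detection and resolution costs, and compares the empirical cost per unit time with Eq (1).

```python
import math
import random

random.seed(1)

# Assumed parameters: formation rate, detection interval, and cost model.
lam, T, n, c = 1.0 / 30.0, 5.0, 200, 1.0
C_D = 2 * n * n

def C_R(t):
    return c * n * (n * (1.0 - math.exp(-t))) ** 2

def simulate_period():
    """Cost over one detection period (0, T]: one detection plus the
    resolution cost of every deadlock formed at Poisson arrival times."""
    cost, t = C_D, random.expovariate(lam)
    while t <= T:
        cost += C_R(T - t)          # persistence time of this deadlock is T - t
        t += random.expovariate(lam)
    return cost

periods = 200_000
empirical = sum(simulate_period() for _ in range(periods)) / (periods * T)

# Eq (1): C(T) = C_D/T + lam * integral_0^T C_R(t) dt / T  (midpoint quadrature).
steps = 10_000
integral = sum(C_R((k + 0.5) * T / steps) for k in range(steps)) * (T / steps)
analytic = C_D / T + lam * integral / T

print(f"simulated cost rate: {empirical:.1f}   Eq(1): {analytic:.1f}")
```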
Theorem 1 is mainly concerned with the impact of deadlock detection frequency and
deadlock formation rate on the long-run mean average cost of overall deadlock handling.
It is independent of the choice of deadlock detection/resolution algorithms. The following
corollary is an immediate consequence of Theorem 1.
Corollary 1 The long-run mean average cost of deadlock handling is proportional to the rate of deadlock formation $\lambda$.
Proof: The proof is straightforward and thus omitted.
Theorem 1 and Corollary 1 state that the overall cost of deadlock handling is closely associated not only with the per-deadlock detection cost and the aggregated resolution cost, but also with the rate of deadlock formation, $\lambda$. In the following lemma, we show the existence and uniqueness of an asymptotically optimal frequency of deadlock detection when deadlock resolution is more expensive than deadlock detection in terms of message complexity.
Lemma 1 Suppose that the message complexity of deadlock detection is $O(n^{\alpha})$ and that of deadlock resolution is $O(n^{\beta})$, where $\alpha < \beta$. Then there exists a unique deadlock detection frequency $1/T^*$ that yields the minimum long-run mean average cost when $n$ is sufficiently large.
Proof: Differentiating Eq(1) yields
\[
C'(T) = -\frac{C_D}{T^2} + \frac{\lambda C_R(T)}{T} - \frac{\lambda \int_0^T C_R(t)\,dt}{T^2}. \tag{11}
\]
Define a function $\phi(T)$ as follows:
\[
\phi(T) \equiv T^2\, C'(T) = -C_D + \lambda T\, C_R(T) - \lambda \int_0^T C_R(t)\,dt. \tag{12}
\]
Notice that $C'(T)$ and $\phi(T)$ share the same sign. Differentiating $\phi(T)$, we have
\[
\phi'(T) = \lambda T\, C_R'(T). \tag{13}
\]
Because $C_R(T)$ is a monotonically increasing function, $C_R'(T) > 0$, which means $\phi'(T) > 0$. Therefore, $\phi(T)$ is monotonically increasing. Rewriting Eq(12) as
\[
\phi(T) = -C_D + \lambda \int_0^T \big(C_R(T) - C_R(t)\big)\,dt, \tag{14}
\]
and noting that $C_R(T) - C_R(t) \geq C_R(T) - C_R(\delta)$ whenever $t \leq \delta \leq T$, we have, for any fixed $\delta > 0$,
\[
\phi(T) \geq -C_D + \lambda\delta\big(C_R(T) - C_R(\delta)\big), \qquad T \geq \delta. \tag{15}
\]
Letting $T \to \infty$ gives
\[
\lim_{T\to\infty}\phi(T) \geq -C_D + \lambda\delta\big(C_R(\infty) - C_R(\delta)\big). \tag{16}
\]
Using $C_D = c_1 n^{\alpha}$ and $C_R(\infty) = c_2 n^{\beta}$ in Eq(16), we obtain
\[
\lim_{T\to\infty}\phi(T) > -c_1 n^{\alpha} + \lambda\delta\big(c_2 n^{\beta} - C_R(\delta)\big). \tag{17}
\]
Since $\alpha < \beta$, $\lim_{T\to\infty}\phi(T)$ is asymptotically dominated by the term $c_2 n^{\beta}$ and is positive when $n$ is sufficiently large. Observe that $\phi(0) = -C_D < 0$ and $\phi(T)$ is monotonically increasing. By the intermediate value theorem, it must be true that there exists a unique $T^*$, $0 < T^* < \infty$, such that
\[
\phi(T) = T^2 C'(T)
\begin{cases}
< 0, & 0 \leq T < T^*,\\
= 0, & T = T^*,\\
> 0, & T > T^*.
\end{cases}
\]
It means that $C(T)$ reaches its minimum at and only at $T = T^*$; that is, the existence and uniqueness of $T^* = \arg\min_{T>0} C(T)$ is proved.
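The sign-change argument translates directly into a numerical procedure: $\phi(T)$ is negative near zero and eventually positive, so $T^*$ can be located by bisection on $\phi$. A minimal sketch under the same hypothetical cost model as before (all constants are assumptions):

```python
import math

# Assumed constants and cost model (same hypothetical model as before).
n, c, lam = 200, 1.0, 1.0 / 30.0
C_D = 2 * n * n

def C_R(t):
    return c * n * (n * (1.0 - math.exp(-t))) ** 2

def integral_C_R(T, steps=2000):
    """Midpoint-rule approximation of integral_0^T C_R(t) dt."""
    h = T / steps
    return sum(C_R((k + 0.5) * h) for k in range(steps)) * h

def phi(T):
    """phi(T) = T^2 C'(T) = -C_D + lam*T*C_R(T) - lam*integral (Eq 12)."""
    return -C_D + lam * T * C_R(T) - lam * integral_C_R(T)

# phi is increasing, negative at 0+ and eventually positive: bracket and bisect.
lo, hi = 1e-6, 1.0
while phi(hi) < 0.0:
    hi *= 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) < 0.0 else (lo, mid)

print(f"optimal detection interval T* ~= {0.5 * (lo + hi):.4f}")
```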
To make the idea behind this derivation concrete, we apply the up-to-date results on deadlock detection/resolution algorithms. As discussed before, the best-known message complexity of a distributed deadlock detection algorithm is $2n^2$ [14] when it is written as a polynomial of $n$. The best-known message complexity of a deadlock resolution algorithm is $O(n\,n_D^2)$ [7]. Therefore, $C_D = 2n^2$ and $C_R(t) = c\,n\,n_D^2(t)$, where $c$ is a positive constant. Because the deadlock size $n_D(t)$ is always bounded by $n$, from (15) we have
\[
\phi(\infty) = \lim_{T\to\infty}\phi(T) > -C_D + \lambda\delta\big(C_R(\infty) - C_R(\delta)\big) \approx -2n^2 + c\,\delta\lambda\, n^3. \tag{18}
\]
Note that $\delta$ is a fixed value that can be arbitrarily chosen. For a sufficiently large $n$, Eq(18) becomes
\[
\phi(\infty) \approx c\,\delta\lambda\, n^3 > 0, \tag{19}
\]
while $\phi(0) = -C_D = -2n^2 < 0$. Because $\phi(T)$ is monotonically increasing, there exists an optimal deadlock detection frequency $1/T^*$ such that $\phi(T^*) = 0$ and thus $C'(T^*) = 0$; the optimal frequency is the one that balances the deadlock detection cost and the deadlock resolution cost such that their sum is minimized. The condition that the asymptotic deadlock resolution cost, $C_R(\infty)$, is greater than the cost of deadlock detection, $C_D$, constitutes the natural mathematical basis to justify distributed deadlock detection algorithms.
We are now ready to state the asymptotically optimal frequency for deadlock detection based on the up-to-date results on distributed deadlock detection and resolution algorithms. Recall that the best-known message complexity for distributed deadlock detection algorithms is $2n^2$ [14] and that for deadlock resolution algorithms is $O(n\,n_D^2)$ [7].
Theorem 2 Suppose the message complexity for distributed deadlock detection is $2n^2$, and that for distributed deadlock resolution is $O(n\,n_D^2(t))$. Then the asymptotically optimal frequency for scheduling deadlock detections is $O((\lambda n)^{1/3})$.
Proof: Assume that the deadlock size function $n_D(t)$ is both differentiable and integrable.$^2$ Then $n_D(t)$ can be expressed in the form of a Maclaurin series as follows:
\[
n_D(t) = \sum_{i=0}^{\infty} \frac{n_D^{(i)}(0)\, t^i}{i!} = \sum_{i=0}^{\infty} c_i t^i, \tag{20}
\]
where $n_D^{(i)}(0)$ denotes the $i$th derivative of the deadlock size function $n_D(t)$ at point zero and $c_i = n_D^{(i)}(0)/i!$.
$^2$ Recall that $n_D(t)$ is a continuous approximation function whose curves between jumping points can be chosen.
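For instance, taking the hypothetical deadlock-size function $n_D(t) = n(1 - e^{-t})$ (an assumption, not a model from the paper), the Maclaurin coefficients of Eq (20) can be read off symbolically, confirming $c_0 = 0$ and $c_1 = n_D'(0) > 0$:

```python
from sympy import exp, series, symbols

t, n = symbols("t n", positive=True)

# Assumed deadlock-size function: n_D(0) = 0, increasing, bounded by n.
n_D = n * (1 - exp(-t))

# Maclaurin expansion: n*t - n*t**2/2 + n*t**3/6 + O(t**4),
# i.e. c_0 = 0 and c_1 = n_D'(0) = n > 0, as required.
print(series(n_D, t, 0, 4))
```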
By the properties of the deadlock size function $n_D(t)$, we have $n_D(0) = 0$ and $n_D'(0) > 0$. It can be easily verified that $c_0 = 0$ and $c_1 = n_D'(0) > 0$. The resolution cost $C_R(t)$ can be written as $c\,n\,n_D^2(t)$ for some constant $c$. By Theorem 1, the long-run mean average cost becomes
\[
C(T) = \frac{2n^2}{T} + \frac{c\lambda n \int_0^T n_D^2(t)\,dt}{T}. \tag{21}
\]
Inserting Eq(20) into Eq(21), we have
\[
C(T) = \frac{2n^2}{T} + c\lambda n^3\, T^{-1}\int_0^T \Big(\sum_{i=1}^{\infty} c_i t^i\Big)^2 dt
     = \frac{2n^2}{T} + \frac{c\lambda n^3 \int_0^T \big(c_1 t + \sum_{i=2}^{\infty} c_i t^i\big)^2 dt}{T}. \tag{22}
\]
Through a lengthy calculation, Eq(22) can be simplified as
\[
C(T) = \frac{2n^2}{T} + c\lambda n^3\Big(\frac{c_1^2 T^2}{3} + \frac{2c_1 c_2 T^3}{4}\Big) + c\lambda n^3\Big(\sum_{i=2}^{\infty}\sum_{j=2}^{\infty} \frac{c_i c_j T^{i+j}}{i+j+1}\Big). \tag{23}
\]
Taking the derivative of Eq(23) with respect to $T$, we have
\[
C'(T) = -\frac{2n^2}{T^2} + c\lambda n^3\Big(\frac{2c_1^2 T}{3} + \frac{3c_1 c_2 T^2}{2}\Big) + c\lambda n^3\Big(\sum_{i=2}^{\infty}\sum_{j=2}^{\infty} \frac{c_i c_j (i+j)\, T^{i+j-1}}{i+j+1}\Big). \tag{24}
\]
By Lemma 1, there exists a unique optimal detection frequency $1/T^*$ when $n$ is sufficiently large, such that $C(T^*)$ is the minimum and $C'(T^*) = 0$. Based on (24), we transform $C'(T^*) = 0$ into
\[
\frac{2}{c\lambda n} = \frac{2c_1^2 (T^*)^3}{3} + \frac{3c_1 c_2 (T^*)^4}{2} + \sum_{i=2}^{\infty}\sum_{j=2}^{\infty} \frac{c_i c_j (i+j)\,(T^*)^{i+j+1}}{i+j+1}. \tag{25}
\]
Only $n$, $T^*$, and $\lambda$ are free variables; the rest are constants. By performing the Big-O reduction we obtain
\[
\frac{1}{\lambda n} = \Theta\big((T^*)^3 + (T^*)^4 + (T^*)^5 + \cdots\big). \tag{26}
\]
When $n$ is sufficiently large, $T^* \ll 1$ and the right-hand side of Eq(26) is dominated by its leading term $(T^*)^3$; hence $1/(\lambda n) = \Theta((T^*)^3)$ and
\[
T^* = \Theta\Big(\frac{1}{(\lambda n)^{1/3}}\Big). \tag{27}
\]
Therefore, the asymptotically optimal deadlock detection frequency $1/T^*$ is $O((\lambda n)^{1/3})$.
Figure 2: Cost of Deadlock Handling vs. Detection Interval (n: number of processes). [Log-log plot; x-axis: deadlock detection interval; y-axis: long-run mean average cost; curves for n = 50, 100, 200, 500, 1000 with λ = 1/30 per second.]
Figure 3: Cost of Deadlock Handling vs. Deadlock Formation Rate. [Log-log plot; x-axis: deadlock detection interval; y-axis: long-run mean average cost; curves for λ = 1, 1/30, 1/60, 1/90, 1/120 (per second) with n = 1000.]
As an illustration, we consider the following example. Let $C_R(t) = n^3(1 - \exp(-t))$ and $C_D = n^2$. In accordance with Theorem 1, the long-run mean average cost of deadlock handling is then written as
\[
C(T) = \frac{n^2 + \lambda n^3\big(T + \exp(-T) - 1\big)}{T}. \tag{28}
\]
Figs(2)-(3) show log-log plots of a family of curves illustrating the dependence of long-
run mean average cost of deadlock handling upon detection interval. The x-axis denotes
the deadlock detection interval and the y-axis denotes the long-run mean average cost of
deadlock handling.
# of Processes | Optimal Detection Interval (λ = 1) | Optimal Detection Interval (λ = 1/30)
50   | 0.214699 s | 2.0223 s
100  | 0.148555 s | 1.0973 s
200  | 0.103495 s | 0.6832 s
500  | 0.064189 s | 0.3942 s
1000 | 0.045402 s | 0.2675 s
Table 1: Optimal Detection Interval vs. # of Processes
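Taking Eq (28) as reconstructed above, the entries of Table 1 can be recovered by a direct one-dimensional minimization of $C(T)$; a minimal sketch using golden-section search:

```python
import math

def C(T, n, lam):
    """Long-run mean average cost of Eq (28)."""
    return (n ** 2 + lam * n ** 3 * (T + math.exp(-T) - 1.0)) / T

def optimal_interval(n, lam, lo=1e-4, hi=20.0, iters=200):
    """Golden-section search for the minimizer of the unimodal cost C(T)."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        x1, x2 = b - g * (b - a), a + g * (b - a)
        if C(x1, n, lam) < C(x2, n, lam):
            b = x2
        else:
            a = x1
    return 0.5 * (a + b)

for lam in (1.0, 1.0 / 30.0):
    for n in (50, 100, 200, 500, 1000):
        print(f"lambda={lam:.4f}  n={n:5d}  T* = {optimal_interval(n, lam):.6f} s")
```

Running it for the listed values of $n$ and $\lambda$ yields intervals in agreement with Table 1.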
In Fig(2), we plot the cost of deadlock handling against the deadlock detection interval for different total numbers of processes: 50, 100, 200, 500, and 1000. Fig(3) shows the relationship between the overall cost of deadlock handling and the deadlock detection interval under different deadlock formation rates: 1, 1/30, 1/60, 1/90, and 1/120 per second. Figs(2)-(3) visualize the convexity of the cost curves, which suggests the existence of an optimal detection frequency, and illustrate that the overall cost of deadlock handling increases with the total number of processes and the deadlock formation rate.

A detailed calculation given in Table 1 shows that as the number of processes in a distributed system increases, the optimal detection interval decreases, which is clearly in line with our theoretical analysis. In the sequel, we study the impact of coordinated vs. random deadlock detection scheduling on the performance of deadlock handling. We consider two strategies of deadlock detection scheduling: (1) centralized, coordinated deadlock detection scheduling, and (2) fully distributed, uncoordinated deadlock detection scheduling.
The centralized scheduling excels in simplicity of implementation and system maintenance, but it undermines reliability and resilience against failures because one and only one process is elected as the initiator of deadlock detections in the distributed system. In contrast, the fully distributed scheduling excels in reliability and resilience against failures because every process in the distributed system can independently initiate detections [15], without a single point of failure. However, due to the lack of coordination in deadlock detection initiation among processes, it presents a different mathematical problem from the centralized deadlock detection scheduling.
In the previous discussions we have focused on the derivation of the optimal frequency of deadlock detection in connection with the rate of deadlock formation and the message complexities of deadlock detection and resolution algorithms, assuming deadlock detections are centrally scheduled at a fixed rate of 1/T. To capture the lack of coordination in fully distributed scheduling, we will study the case where processes randomly and independently initiate the detection of deadlocks.
Let n be the number of processes in a distributed system and T be the optimal time inter-
val between any two consecutive deadlock detections in the centralized scheduling. Consider
a fully distributed deadlock detection scheduling, where each process initiates deadlock de-
tection at a rate of 1/(nT) independently. Although the average interval between deadlock
detections in the fully distributed scheduling remains T (the same as its centralized counter-
part), the actual occurrence times of those detections are likely to be non-uniformly spaced
because the initiation of deadlock detection is performed by the processes in a completely
uncoordinated fashion.
In the following we study the fully distributed (random) scheduling and compare it with the centralized scheduling. Consider a sequence of independent and identically distributed (iid) random variables $Y_i$, $i \geq 1$, defined on $(0, \infty)$ following a certain distribution $H$. The sequence $Y_i$, $i \geq 1$, represents the inter-arrival times of deadlock detections initiated by the fully distributed scheduling, and it is assumed to be independent of the arrivals of deadlock formations. It is obvious that the centralized scheduling is a special case of the fully distributed scheduling.
Let $\mathcal{H}$ be the family of all distribution functions on $(0, \infty)$ with finite first moment. Namely,
\[
\mathcal{H} = \Big\{ H : H \text{ is a CDF on } (0, \infty),\ \int_0^{\infty} \bar{H}(t)\,dt < \infty \Big\}, \tag{29}
\]
where $\bar{H}(t) \equiv 1 - H(t)$, $t \geq 0$.
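The finite-moment condition in Eq (29) is what makes the tail identity $E(Y) = \int_0^{\infty}\bar{H}(t)\,dt$ available; this identity is used in the last step of the proof of Theorem 3 below. A quick numerical sanity check for an exponentially distributed $Y$ (an arbitrary member of $\mathcal{H}$; the rate is an assumption):

```python
import math

rate = 0.5                 # Y ~ Exp(rate); an arbitrary choice
mean = 1.0 / rate          # E(Y)

def H_bar(t):
    """Survival function (tail) of the exponential distribution."""
    return math.exp(-rate * t)

# Numerically integrate the tail: integral_0^inf H_bar(t) dt should equal E(Y).
upper, steps = 60.0, 200_000
h = upper / steps
tail_integral = sum(H_bar((k + 0.5) * h) for k in range(steps)) * h

print(mean, round(tail_integral, 6))   # both ~= 2.0
```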
The following theorem states that the lack of coordination in deadlock detection initiation under fully distributed scheduling introduces additional overhead in deadlock handling. Therefore the fully distributed scheduling in general cannot perform as efficiently as its centralized counterpart.
Theorem 3 Let $C_H$ denote the long-run mean average cost under fully distributed scheduling with a random detection interval $Y$ characterized by a certain distribution $H \in \mathcal{H}$ with mean $\mu$, and let $C(T)$ denote the long-run mean average cost under centralized scheduling with a fixed detection interval $T$. Then
\[
C_H \geq C(T), \tag{30}
\]
when $E(Y) = \mu = T$.
Proof: Since the sequence $Y_i$, $i \geq 1$, of interarrival times of deadlock detections is assumed to be independent of the Poisson deadlock formations, it is easy to see that the random costs over the intervals $(0, Y_1], (Y_1, Y_1 + Y_2], \ldots$ are iid. Using the same line of reasoning as in the proof of Theorem 1, the long-run mean average cost is expressed as
\[
C_H = \frac{E(\text{random cost over } Y)}{E(Y)}, \tag{31}
\]
where $Y \sim H$ is a random variable representing the interval between two consecutive deadlock detections. Let $\Phi(Y)$ be the random cost in the interval $Y$. The expected cost over the interval $Y$ is given by
\[
E(\Phi(Y)) = E\big\{E[\Phi(Y)\mid Y]\big\} = \int_0^{\infty} E\Big(C_D + \sum_{n=1}^{N(y)} C_R(y - S_n)\, I_{\{N(y)>0\}}\Big)\,dH(y), \tag{32}
\]
where $S_n = \sum_{i=1}^{n} X_i$ denotes the time of the $n$th deadlock formation and $N(y)$ represents the number of independent deadlocks that occurred in the time interval $(0, y)$. It follows from the independence of $X_i$, $i \geq 1$, and $Y_i$, $i \geq 1$, and from Eq(32) that the long-run mean average cost is
\[
C_H = \frac{E(\Phi(Y))}{E(Y)}
    = \frac{\int_0^{\infty} \big(C_D + \lambda \int_0^{y} C_R(t)\,dt\big)\,dH(y)}{E(Y)}
    = \frac{C_D}{E(Y)} + \frac{\lambda \int_0^{\infty} \big(\int_t^{\infty} C_R(t)\,dH(y)\big)\,dt}{E(Y)}
    = \frac{C_D}{E(Y)} + \frac{\lambda \int_0^{\infty} C_R(t)\,\bar{H}(t)\,dt}{E(Y)}. \tag{33}
\]
When $E(Y) = \mu = T$, meaning that the fixed deadlock detection interval $T$ equals the mean value of the random detection interval $Y$, we compare the centralized (fixed) detection scheduling with rate $1/T$ against the fully distributed (random) one with mean rate $1/E(Y) = 1/\mu$. According to Theorem 1, the long-run mean average cost of fixed detection is given as
\[
C(T) = \frac{C_D + \lambda \int_0^{\mu} C_R(t)\,dt}{\mu}. \tag{34}
\]
Subtracting Eq(34) from Eq(33) yields
\[
\begin{aligned}
C_H - C(T) &= \frac{\lambda}{\mu}\Big(\int_0^{\infty} C_R(t)\,\bar{H}(t)\,dt - \int_0^{\mu} C_R(t)\,dt\Big)
            = \frac{\lambda}{\mu}\Big(\int_{\mu}^{\infty} C_R(t)\,\bar{H}(t)\,dt - \int_0^{\mu} C_R(t)\,H(t)\,dt\Big)\\
           &\geq \frac{\lambda}{\mu}\Big(C_R(\mu)\int_{\mu}^{\infty} \bar{H}(t)\,dt - C_R(\mu)\int_0^{\mu} H(t)\,dt\Big)
            = \frac{\lambda}{\mu}\,C_R(\mu)\Big(\int_{\mu}^{\infty} \bar{H}(t)\,dt - \int_0^{\mu} \big(1 - \bar{H}(t)\big)\,dt\Big)\\
           &= \frac{\lambda}{\mu}\,C_R(\mu)\Big(\int_0^{\infty} \bar{H}(t)\,dt - \mu\Big) = 0.
\end{aligned} \tag{35}
\]
Hence we have
\[
C_H \geq C(T). \tag{36}
\]
Theorem 3 is thus established.
It can be seen from Eq(36) that $C_H \geq C(T)$, and the equality holds if and only if $Y$ is a degenerate random variable with $\mathrm{Prob}(Y = T) = 1$. Theorem 3 asserts that the fully distributed (random) deadlock detection scheduling in general results in an increased overhead in overall deadlock handling.
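Theorem 3 can also be observed in simulation: with the same mean detection interval, exponentially distributed (uncoordinated) intervals should incur a higher long-run average cost than a fixed interval. A minimal sketch under the hypothetical cost model used in the earlier snippets (the exponential choice for $Y$ and all constants are assumptions):

```python
import math
import random

random.seed(7)

# Assumed parameters and cost model (same hypothetical model as earlier).
n, c, lam, T = 200, 1.0, 1.0 / 30.0, 5.0
C_D = 2 * n * n

def C_R(t):
    return c * n * (n * (1.0 - math.exp(-t))) ** 2

def cost_of_interval(length):
    """Detection cost plus resolution cost of deadlocks formed in one interval."""
    cost, t = C_D, random.expovariate(lam)
    while t <= length:
        cost += C_R(length - t)
        t += random.expovariate(lam)
    return cost

def long_run_cost(interval_sampler, periods=200_000):
    total_cost = total_time = 0.0
    for _ in range(periods):
        y = interval_sampler()
        total_cost += cost_of_interval(y)
        total_time += y
    return total_cost / total_time

fixed_cost = long_run_cost(lambda: T)                            # centralized, fixed T
random_cost = long_run_cost(lambda: random.expovariate(1.0 / T)) # E(Y) = T

print(f"fixed interval T  : {fixed_cost:.1f}")
print(f"exponential Y     : {random_cost:.1f}   (>= fixed, per Theorem 3)")
```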
5 Conclusion
Deadlock detection scheduling is an important, yet often overlooked, aspect of distributed deadlock detection and resolution. The performance of deadlock handling depends not only upon the per-execution complexity of deadlock detection/resolution algorithms, but also fundamentally upon the deadlock detection scheduling and the rate of deadlock formation. Excessive initiation of deadlock detection results in an increased number of message exchanges in the absence of deadlocks, while insufficient initiation of deadlock detection incurs an increased cost of deadlock resolution in the presence of deadlocks. As a result, reducing the per-execution cost of distributed deadlock detection/resolution algorithms alone does not guarantee an overall performance improvement in deadlock handling.
The main thrust of this paper is to bring an awareness to the problem of deadlock detection scheduling and its impact on the overall performance of deadlock handling. The key element in our approach is a time-dependent model that associates the deadlock resolution cost with the deadlock persistence time. It enables the study of the time-dependent deadlock resolution cost in connection with the rate of deadlock formation and the frequency of deadlock detection initiation, differing significantly from past research that focuses on minimizing per-detection and per-resolution costs.
Our stochastic analysis, which solidifies the ideas presented in [10, 26, 23, 11], shows that there exists a unique deadlock detection frequency that guarantees a minimum long-run mean average cost of deadlock handling when the total number of processes in a distributed system is sufficiently large, and that the cost of overall deadlock handling grows linearly with the rate of deadlock formation.
In addition, we study the fully distributed (random) deadlock detection scheduling and
its impact on the performance of deadlock handling. We prove that in general the lack of
coordination in deadlock detection initiation among processes will increase the overall cost
of deadlock handling.
Theoretical results obtained in this paper could help system designers and practitioners better understand the fundamental performance tradeoff between deadlock detection and deadlock resolution costs, as well as the innate dependency of the optimal detection frequency upon the deadlock formation rate. However, there are still many questions regarding how to use these theoretical results to fine-tune the performance of a distributed system. Determination of the actual rate of deadlock formation and verification of the Poisson assumption are problems of great complexity that can be influenced by many known and unknown factors, such as the granularity of locking, the actual distribution of resources, the process mix, and resource request and release patterns [26]. Tapping into system logging files and inferring the actual deadlock formation rate via data mining could provide an effective and feasible way to translate theoretical insights into actual system performance gains.
6 Acknowledgements
We would like to thank Drs. Marek Rusinkiewicz and Ritu Chadh at Applied Research,
Telcordia Technologies for their constructive comments on the manuscript of this paper. We
would also like to thank three anonymous reviewers for critically reviewing the manuscript
and for their insightful comments. We would like to especially thank Dr. Shu-Chan Hsu in
Department of Cell Biology and Neuroscience at Rutgers University for her encouragement
and support.
References

[1] Roberto Baldoni and Silvio Salz. Deadlock Detection in Multidatabase Systems: a Performance Analysis. Distributed Systems Engineering, 4:244–252, December 1997.

[2] Azzedine Boukerche and Carl Tropper. A Distributed Graph Algorithm for the Detection of Local Cycles and Knots. IEEE Transactions on Parallel and Distributed Systems, 9(8):748–757, August 1998.

[3] G. Bracha and S. Toueg. Distributed Deadlock Detection. Distributed Computing, 2:127–138, 1987.

[4] K.M. Chandy, J. Misra, and L. Hass. Distributed Deadlock. ACM Transactions on Computer Systems, 1(2):144–156, May 1983.

[5] Shigang Chen, Yi Deng, and Wei Sun. Optimal Deadlock Detection in Distributed Systems Based on Locally Constructed Wait-for Graph. In Proceedings of the 16th International Conference on Distributed Computing Systems, pages 613–619, 1996.

[6] Shigang Chen and Yibei Ling. Stochastic Analysis of Distributed Deadlock Scheduling. In Proceedings of the 24th ACM Symposium on Principles of Distributed Computing, pages 265–273, July 17-20, 2005.

[7] Jose Ramon Gonzales de Mendivil, Jose Ramon Garitagoitia, Carlos F. Alastruey, and J.M. Bernabeu-Auban. A Distributed Deadlock Resolution Algorithm for the AND Model. IEEE Transactions on Parallel and Distributed Systems, 10(5):433–447, May 1999.

[8] Jim Gray, P. Homan, Ron Obermarck, and Henry Korth. A Straw-man Analysis of the Probability of Waiting and Deadlock in a Database System. IBM Research, RJ3066, February 1981.

[9] Young M. Kim, Tan H. Lai, and Neelam Soundarajan. Efficient Distributed Deadlock Detection and Resolution Using Probes, Tokens, and Barriers. In Proceedings of the International Conference on Parallel and Distributed Systems, pages 584–591, 1997.

[10] Edgar Knapp. Deadlock Detection in Distributed Databases. ACM Computing Surveys, 19(4):303–328, 1987.

[11] Natalija Krivokapic, Alfons Kemper, and Ehud Gudes. Deadlock Detection in Distributed Database Systems: A New Algorithm and a Comparative Performance Analysis. VLDB Journal: Very Large Data Bases, 8(2):79–100, 1999.

[12] Ajay D. Kshemkalyani and Mukesh Singhal. Efficient Detection and Resolution of Generalized Distributed Deadlocks. IEEE Transactions on Software Engineering, 20(1):43–54, January 1994.

[13] Ajay D. Kshemkalyani and Mukesh Singhal. Distributed Detection of Generalized Deadlocks. In Proceedings of the 1997 International Conference on Distributed Computing Systems, pages 553–560, 1997.

[14] Ajay D. Kshemkalyani and Mukesh Singhal. A One-Phase Algorithm to Detect Distributed Deadlocks in Replicated Databases. IEEE Transactions on Knowledge and Data Engineering, 11(6):880–895, 1999.

[15] Soojung Lee. Fast, Centralized Detection and Resolution of Distributed Deadlocks in the Generalized Model. IEEE Transactions on Software Engineering, 30(8):561–573, September 2004.

[16] Soojung Lee and Junguk L. Kim. Performance Analysis of Distributed Deadlock Detection Algorithms. IEEE Transactions on Knowledge and Data Engineering, 13(3):623–636, 2001.

[17] Xuemin Lin and Jian Chen. An Optimal Deadlock Resolution Algorithm in Multidatabase Systems. In Proceedings of the 1996 International Conference on Parallel and Distributed Systems, pages 516–521, 1996.

[18] Yibei Ling, Jie Mi, and Xiaola Lin. A Variational Calculus Approach to Optimal Checkpoint Placement. IEEE Transactions on Computers, 50(7):699–708, July 2001.

[19] Philip P. Macri. Deadlock Detection and Resolution in a CODASYL Based Data Management System. In Proceedings of the 1976 ACM SIGMOD International Conference on Management of Data, pages 45–49, 1976.

[20] William A. Massey. A Probabilistic Analysis of a Database System. ACM SIGMETRICS Performance Evaluation Review, 14(1):141–146, 1986.

[21] Jayadev Misra. Distributed Discrete-Event Simulation. ACM Computing Surveys, 18(1):39–65, March 1986.

[22] Ron Obermarck. Distributed Deadlock Detection Algorithm. ACM Transactions on Database Systems, 7(2):187–208, June 1982.

[23] Young Chul Park, Peter Scheuermann, and Snag Ho Lee. A Periodic Deadlock Detection and Resolution Algorithm with a New Graph Model for Sequential Transaction Processing. In Proceedings of the Eighth International Conference on Data Engineering, pages 202–209, February 1992.

[24] M. Roesler and W. A. Burkhard. Semantic Lock Models in Object-oriented Distributed Systems and Deadlock Resolution. In Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, pages 361–370, 1988.

[25] Sheldon M. Ross. Stochastic Processes. John Wiley & Sons, Inc., New York, 1996.

[26] Mukesh Singhal. Deadlock Detection in Distributed Systems. IEEE Computer Magazine, 40(8):37–48, November 1989.

[27] Igor Terekhov and Tracy Camp. Time Efficient Deadlock Resolution Algorithms. Information Processing Letters, 69:149–154, 1999.

[28] Carl Tropper and Azzedine Boukerche. Parallel Simulations of Communicating Finite State Machines. In Proceedings of the SCS Multiconference on Parallel and Distributed Simulation, pages 143–150, May 1993.

[29] Jesus Villadangos, Federico Farina, Jose Ramon Gonzales de Mendivil, Jose Ramon Garitagoitia, and Alberto Cordoba. A Safe Algorithm for Resolving OR Deadlocks. IEEE Transactions on Software Engineering, 29(7):608–622, July 2003.

[30] J.W. Wang, Shing-Tsaan Huang, and Nian-Shing Chen. A Distributed Algorithm for Detecting Generalized Deadlocks. Technical Report (SF-C-010-1), Computer Science, National Tsing-Hua University, 1990.

[31] Yi-Min Wang, Michael Merritt, and Alexander B. Romanovsky. Guaranteed Deadlock Recovery: Deadlock Resolution with Rollback Propagation. Technical Report Number 648, 1998.

[32] Sugath Warnakulasuriya and Timothy Mark Pinkston. A Formal Model of Message Blocking and Deadlock Resolution in Interconnection Networks. IEEE Transactions on Parallel and Distributed Systems, 11(3):212–229, March 2000.