
A Randomized Algorithm and Performance Bounds for Coded Cooperative Data Exchange

Alex Sprintson, Parastoo Sadeghi, Graham Booker, and Salim El Rouayheb

(The first and third authors are with Texas A&M University, College Station, Texas, USA, Email: {spalex,gbooker}@tamu.edu. The second author is with the Australian National University, Canberra, ACT, Australia, Email: [email protected]. The fourth author is with the University of California at Berkeley, Berkeley, California, USA, Email: [email protected]. The work of Alex Sprintson was supported by NSF grant CNS-0954153 and by Qatar Telecom (Qtel), Doha, Qatar. The work of Parastoo Sadeghi was supported under the Australian Research Council's Discovery Projects funding scheme (project no. DP0984950).)

Abstract—We consider scenarios where wireless clients are missing some packets, but they collectively know every packet. The clients collaborate to exchange missing packets over an error-free broadcast channel with capacity of one packet per channel use. First, we present an algorithm that allows each client to obtain its missing packets with the minimum number of transmissions. The algorithm employs random linear coding over a sufficiently large field. Next, we show that the field size can be reduced while maintaining the same number of transmissions. Finally, we establish lower and upper bounds on the minimum number of transmissions that are easily computable and often tight, as demonstrated by numerical simulations.

I. INTRODUCTION

The ever-growing demand of mobile wireless clients for large file downloads and video applications is straining cellular networks in terms of bandwidth and network cost. Inspired by the Internet paradigm, where peer-to-peer (P2P) content delivery systems are more efficient than a server-client based model, one solution to address these issues is to allow the mobile clients to cooperate and exchange data directly among each other.

In this paper, we consider the problem of information exchange between a group of wireless clients. Each client initially holds a subset of packets and needs to obtain all the packets held by other clients. Each client can broadcast the packets in its possession (or a combination thereof) via a noiseless broadcast channel of capacity one packet per channel use. Assuming that clients can cooperate with each other and know which packets are available to other nodes, the aim is to minimize the total number of transmissions needed to satisfy the demands of all clients.

For example, Fig. 1 shows three wireless clients that are interested in obtaining three packets of m bits each, x1, x2 and x3 ∈ GF(2^m). The first, second and third clients have already obtained packets {x2, x3}, {x1, x3} and {x1, x2}, respectively, i.e., each of these clients misses one packet. A simple cooperation scheme would consist of three uncoded transmissions. However, this is not an optimal solution, since the clients can send coded packets and help multiple clients with a single transmission. The number of transmissions for this example can be decreased to two, as shown in the figure.

[Fig. 1. Coded data exchange among three clients: c1 holds {x2, x3}, c2 holds {x1, x3}, and c3 holds {x1, x2}; the two broadcasts x2 + x3 and x1 satisfy all three clients.]
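The two-transmission scheme of Fig. 1 can be checked mechanically. The sketch below is an illustration, not code from the paper: it represents the packets as m-bit integers, applies the two broadcasts x2 XOR x3 and x1 as in the figure, and verifies that every client recovers all three packets; the simple "peeling" decoder is an assumption made for this toy case only.

```python
# Illustrative check of the Fig. 1 example: 3 clients, packets in GF(2^m) as m-bit ints.
import random

m = 8
x = {1: random.getrandbits(m), 2: random.getrandbits(m), 3: random.getrandbits(m)}
has = {"c1": {2, 3}, "c2": {1, 3}, "c3": {1, 2}}     # initial side information
transmissions = [("c1", {2, 3}), ("c2", {1})]        # packet indices XORed per broadcast

for client, known in has.items():
    decoded = {j: x[j] for j in known}
    for _sender, combo in transmissions:
        value = 0
        for j in combo:
            value ^= x[j]                            # the broadcast coded packet
        unknown = [j for j in combo if j not in decoded]
        if len(unknown) == 1:                        # peel off the single unknown packet
            for j in combo:
                if j in decoded:
                    value ^= decoded[j]
            decoded[unknown[0]] = value
    assert decoded == x, f"{client} failed to decode"
print("all three clients recover x1, x2, x3 from 2 transmissions")
```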
The problem we consider may arise in many practical settings. For example, consider a wireless network in which some clients are interested in the same data (such as a popular video clip or an urgent alert message). Initially, the entire data is available at a base station and is broadcast to the interested clients. The communication link between the base station and the mobile clients can be not only expensive and slow, but also unreliable or sometimes even non-existent, which causes some clients to receive only a portion of the data. Partial reception can be caused by channel fading or shadowing, connection loss, network saturation, or asynchronous client behavior such as in P2P systems. Despite this, whenever the whole data is collectively known by the interested clients, they can help each other to acquire the whole data using short-range client-to-client communication links or cooperative relaying, which can be more affordable or reliable.

In this paper we investigate theoretical aspects of such client cooperation and are interested in finding efficient data exchange strategies which require the minimum total number of transmissions. This problem was introduced in our preliminary work [1], where lower and upper bounds on the minimum number of transmissions were presented, in addition to a data exchange algorithm. We establish in this work new and improved lower and upper bounds. Furthermore, we propose an optimal data exchange algorithm based on random linear coding over a large field and then show how coding can be performed over a smaller field, once the number of transmissions from each client is determined.

A closely related problem is that of index coding [2]–[4], in which different clients cannot communicate with each other, but can receive transmissions from a server possessing all the data. Gossip algorithms [5] and physical layer cooperation [6] are also related concepts which are extensively studied in the literature.

II. SYSTEM MODEL

Consider a set of n packets X = {x1, . . . , xn} to be delivered to k clients belonging to the set C = {c1, . . . , ck}. The packets are elements of a finite alphabet, which will be assumed to be a finite field Fq throughout this paper. At the beginning, each client ci knows a subset of packets denoted by Xi ⊆ X, while the clients collectively know all packets in X, i.e., ∪_{ci∈C} Xi = X. We denote by X̄i = X \ Xi the set of packets required by client ci. We assume that each client knows the indices of the packets that are available to the other clients.

The clients exchange packets over a lossless broadcast channel with the purpose of making all packets in X available to all clients. The data is transferred in communication rounds, such that at round i one of the clients, say cj, broadcasts a packet pi ∈ Fq to the rest of the clients in C. Packet pi may be one of the packets in Xj, or a combination of packets in Xj and the packets {p1, . . . , pi−1} previously transmitted over the channel. Our goal is to devise a scheme that enables each client ci ∈ C to obtain all packets in X̄i while minimizing the total number of transmissions. We focus on schemes that use linear coding over the field Fq. As discussed in Section III below, the restriction to linear coding operations does not result in loss of optimality.

With linear coding, any transmitted packet pi is a linear combination of the original packets in X, i.e.,

    pi = Σ_{xj∈X} γi^j xj,

where γi^j ∈ Fq are the encoding coefficients of pi. We refer to the vector γi = [γi^1, γi^2, . . . , γi^n] as the encoding vector of pi. The i-th unit encoding vector, which corresponds to the original packet xi, is denoted by ui = [ui^1, ui^2, . . . , ui^n], where ui^i = 1 and ui^j = 0 for j ≠ i. We also denote by Ui the set of unit vectors that correspond to the packets in Xi.

Let ni = |Xi| be the number of packets initially known to client ci. The number of packets unknown to client ci is therefore n̄i = |X̄i| = n − ni. We denote by nmin = min_{1≤i≤k} ni the minimum number of packets known to a client. The corresponding client or clients form a subset Cmin of C.

A client ci is said to have a unique packet xj if xj ∈ Xi and xj ∉ Xℓ for all ℓ ≠ i. A unique packet can be broadcast by the client holding it in an uncoded fashion at any stage without any penalty in terms of optimality. Without loss of generality, we can assume that there are no unique packets in the system.

We note that the results of this paper can be applied, with minor modifications, to settings where the initial data available to the clients includes linear combinations of the packets in X. However, these settings are beyond the scope of this paper.
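The objects defined in this section are straightforward to represent in code. The following sketch is illustrative only (the client sets are made up, not from the paper); it assumes packets are indexed 1..n, stores each Xi as a set of indices, and derives X̄i, ni, n̄i, nmin, Cmin and the unit encoding vectors Ui from them.

```python
# Illustrative representation of the system model (hypothetical instance).
n = 5
X = set(range(1, n + 1))                       # packet indices {1, ..., n}
has = {1: {1, 2, 3}, 2: {2, 4}, 3: {3, 4, 5}}  # X_i for clients c_1, c_2, c_3 (assumed)

assert set().union(*has.values()) == X         # clients collectively know all of X

wants = {i: X - Xi for i, Xi in has.items()}   # X-bar_i = X \ X_i
n_i = {i: len(Xi) for i, Xi in has.items()}    # n_i = |X_i|
n_bar = {i: n - ni for i, ni in n_i.items()}   # n-bar_i = n - n_i
n_min = min(n_i.values())
C_min = {i for i, ni in n_i.items() if ni == n_min}

def unit_vector(j):
    """Unit encoding vector u_j of packet x_j, as a length-n 0/1 list."""
    return [1 if g == j else 0 for g in range(1, n + 1)]

U = {i: [unit_vector(j) for j in sorted(Xi)] for i, Xi in has.items()}  # U_i
print(wants, n_min, C_min)
```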
III. A RANDOMIZED ALGORITHM

In this section, we present a randomized algorithm for the data exchange problem. The algorithm operates over a finite field Fq of size q > k·n and identifies an optimal solution with probability at least 1 − nk/q. The probability of success can be amplified by repeated application of the algorithm. We also show how to reduce the field size to O(k) using bounds from the network coding literature.

A. Algorithm description and analysis

For clarity, we describe and analyze the algorithm in terms of encoding vectors, rather than original packets. That is, instead of saying that a packet pi = Σ_{xj∈X} γi^j xj has been transmitted, we say that we transmit the corresponding encoding vector γi = [γi^1, γi^2, . . . , γi^n].

The algorithm operates in rounds. Assume that in round i, the encoding vector γi is transmitted by client cti, ti ∈ {1, . . . , k}. Then, the transmitted vector γi is a random linear combination of the unit vectors in Uti, i.e., γi^j = 0 for xj ∉ Xti, while the other elements of γi are selected at random from the field Fq. The set Γi−1 = {γ1, . . . , γi−1} contains the packets that have been transmitted during rounds 1, . . . , i − 1 of the algorithm. In general, the transmitted vector γi can be a linear function of the initial side information of cti and the transmitted vectors in Γi−1. But since the vectors in Γi−1 were received simultaneously by all the clients, there is no loss of generality in taking γi ∈ span(Uti). The proofs of the correctness and the optimality of our algorithm, presented below, imply that this is optimal.

The formal description of the algorithm, referred to as Random Data Exchange (RDE), appears in Fig. 2. The steps performed by the algorithm can be summarized as follows. At each iteration i, we select a client cti with the highest rank of initial plus received encoding vectors up to the beginning of round i, that is,

    ti = arg max_{cj∈C} {rank(Uj ∪ Γi−1)};    (1)

The chosen client cti then selects a random linear combination of the packets it has, which is then broadcast to all other clients. The process is repeated until all clients possess n linearly independent combinations of packets and hence are able to obtain all the original packets in X.

Algorithm RDE (C, {Uj, cj ∈ C}, Fq):
input:
    C - set of clients
    Uj - set of encoding vectors available to client cj, j = 1, . . . , k
    Fq - the finite field
1   i ← 1
2   Γ0 ← ∅
3   while there exists a client cj ∈ C for which it holds that rank(Uj ∪ Γi−1) < n do
4       Select a client cti for which the set Uti ∪ Γi−1 is of maximum rank, i.e.,
            ti = arg max_{cj∈C} {rank(Uj ∪ Γi−1)};
5       Create a new encoding vector γi, such that γi^j = 0 for xj ∉ Xti; otherwise γi^j is a random element of the field Fq.
6       Γi ← Γi−1 ∪ {γi}
7       i ← i + 1
    endwhile
8   return ct1, . . . , cti−1 and γ1, . . . , γi−1

Fig. 2. Algorithm RDE.
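A compact Python rendering may help make Fig. 2 concrete. The sketch below is illustrative, not the authors' implementation: it assumes q is prime (so integers mod q form Fq), represents Uj by the unit vectors of the packets in Xj, and computes ranks by Gaussian elimination; the instance at the bottom is hypothetical, with q = 23 > k·n = 20.

```python
# Sketch of the RDE loop of Fig. 2 over a prime field F_q (illustrative only).
import random

def rank_mod_q(vectors, q):
    """Rank of a list of equal-length vectors over F_q (q prime), by Gaussian elimination."""
    rows = [list(v) for v in vectors]
    if not rows:
        return 0
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] % q != 0), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], q - 2, q)             # inverse exists since q is prime
        rows[rank] = [(a * inv) % q for a in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % q:
                f = rows[r][col]
                rows[r] = [(a - f * b) % q for a, b in zip(rows[r], rows[rank])]
        rank += 1
        if rank == len(rows):
            break
    return rank

def rde(has, n, q):
    """has[c] = set of packet indices (1..n) initially known to client c."""
    unit = lambda j: [1 if g == j else 0 for g in range(1, n + 1)]
    U = {c: [unit(j) for j in sorted(Xc)] for c, Xc in has.items()}
    gammas, senders = [], []
    # Steps 3-7 of Fig. 2: repeat until every client reaches rank n.
    while any(rank_mod_q(U[c] + gammas, q) < n for c in has):
        t = max(has, key=lambda c: rank_mod_q(U[c] + gammas, q))      # step 4
        gamma = [random.randrange(q) if j in has[t] else 0             # step 5
                 for j in range(1, n + 1)]
        gammas.append(gamma)                                           # step 6
        senders.append(t)
    return senders, gammas

# Hypothetical instance: k = 4 clients, n = 5 packets, prime q = 23 > k*n.
has = {1: {1, 2, 3}, 2: {2, 3, 4}, 3: {3, 4, 5}, 4: {1, 5}}
senders, gammas = rde(has, n=5, q=23)
print(len(gammas), "transmissions by clients", senders)
```

With a small q an unlucky draw in step 5 may produce a redundant round, which mirrors the analysis below: each round is useful with probability at least 1 − k/q.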
We analyze the correctness and optimality of the algorithm. For each round i, we denote by OPTi the minimal number of packets that still need to be transmitted after round i, i.e., in addition to the first i transmissions, in order to satisfy the demands of all the clients.

Consider iteration i of the algorithm. Let Qi−1 be an optimal set of encoding vectors required to complete the delivery of the packets to all clients after round i − 1 has been completed. That is, Qi−1 includes OPTi−1 encoding vectors such that:
1) For each γ ∈ Qi−1 it holds that γ ∈ span(Uj) for some cj ∈ C;
2) For each client cj ∈ C it holds that the set Γi−1 ∪ Qi−1 ∪ Uj is of rank n.

Lemma 1: Let cti be the client selected at Step 4 of the algorithm. Then, there exists at least one encoding vector, v, that can be removed from Qi−1 such that Γi−1 ∪ (Qi−1 \ {v}) ∪ Uti remains of rank n.

Proof: Let µ = rank(Uti ∪ Γi−1) be the rank of the set of encoding vectors available to client cti. Note that at the beginning of iteration i the rank of the set (Uti ∪ Γi−1) is at least as large as the rank of (Uj ∪ Γi−1) of any other client cj ∈ C. This implies that OPTi−1 is at least n − µ + 1. Indeed, if there exists a client with strictly lower rank than µ, then this client would require at least n − (µ − 1) transmissions. Otherwise, if all clients have the same rank µ < n, then the required number of transmissions is also at least n − µ + 1 (note that client cti does not benefit from its own transmission at round i and hence OPTi−1 ≥ 1 + (n − µ); a similar argument is used in Lemma 8 in Section IV). Thus, there exists at least one encoding vector, v, that can be removed from Qi−1 such that Γi−1 ∪ (Qi−1 \ {v}) ∪ Uti remains of rank n.

Let v be a vector whose existence is guaranteed by Lemma 1 at the end of round i − 1. We denote Q̃i−1 = Qi−1 \ {v}.

Note that for each client cj ∈ C \ {cti} it holds that the rank of the vector set Sj = Γi−1 ∪ Q̃i−1 ∪ Uj is at least n − 1. Let C′ be a subset of C \ {cti} such that for each cj ∈ C′ it holds that rank(Sj) = n − 1. Our goal is to show that the vector γi, chosen randomly from span(Uti), increases the rank of each client cj ∈ C′ with probability at least 1 − k/q.

Let cj be a client in C′ and let ζj be the normal vector to the span of Sj, which is non-zero according to the definition of C′. Note that ζj can be written as

    ζj = Σ_{ug∈Uti} βg ug + Σ_{ug∈Ūti} βg ug,

where Ūti is the set of unit encoding vectors that correspond to X̄ti = X \ Xti.

Lemma 2: There exists ug ∈ Uti such that βg ≠ 0.

Proof: Suppose that it is not the case. Then, ζj can be expressed as ζj = Σ_{ug∈Ūti} βg ug. Then, ζj is a normal to span(Uti). Since ζj is a normal to span(Sj), it is also normal to span(Γi−1 ∪ Q̃i−1). Thus, ζj is a normal to span(Γi−1 ∪ Q̃i−1 ∪ Uti), which contradicts the fact that rank{Γi−1 ∪ Q̃i−1 ∪ Uti} = n.

Let ζ′j be the projection of ζj onto span(Uti), i.e., ζ′j = Σ_{ug∈Uti} βg ug. Lemma 2 implies that ζ′j is not zero.

Lemma 3: If for each client cj ∈ C′ it holds that ⟨ζ′j, γi⟩ ≠ 0, then OPTi = OPTi−1 − 1.

Proof: If the condition of the lemma is satisfied, then the rank of the vector set Γi−1 ∪ Q̃i−1 ∪ Uj ∪ {γi} is n for all cj ∈ C. This is because for clients cj ∉ C′, the rank of Γi−1 ∪ Q̃i−1 ∪ Uj is n by definition, and for cj ∈ C′, ⟨ζ′j, γi⟩ ≠ 0 implies that the rank of both Γi−1 ∪ Q̃i−1 ∪ Uj ∪ {γi} and Γi−1 ∪ Q̃i−1 ∪ Uj ∪ {ζ′j} is equal to n. Therefore, after iteration i of the algorithm, the data transfer can be completed within OPTi−1 − 1 rounds by transmitting the vectors in Q̃i−1.

Recall that γi is a random linear combination of the vectors in Uti, i.e., γi = Σ_{ug∈Uti} γi^g ug, where the γi^g's are i.i.d. random coefficients chosen from the field Fq.

Lemma 4: For each client cj ∈ C′, the probability that ⟨ζj, γi⟩ is equal to zero is 1/q.

Proof: The inner product ⟨ζj, γi⟩ can be written as

    ⟨ζj, γi⟩ = Σ_{ug∈Uti} βg γi^g.    (2)

Let Û be a subset of Uti such that for each ug ∈ Û it holds that βg ≠ 0. Lemma 2 implies that the set Û is not empty. Thus, ⟨ζj, γi⟩ = Σ_{ug∈Û} βg γi^g. Since the coefficients γi^g are i.i.d. uniformly distributed over Fq, the probability that ⟨ζj, γi⟩ is equal to zero is 1/q.
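Lemma 4 is easy to check numerically. The snippet below is an illustration only (not from the paper): it fixes a hypothetical nonzero coefficient vector β over a small prime field and estimates, by Monte Carlo, the probability that the inner product Σ βg γi^g vanishes when the γi^g are drawn i.i.d. uniformly from Fq; the estimate should be close to 1/q.

```python
# Monte Carlo illustration of Lemma 4 over a small prime field (hypothetical values).
import random

q = 7                                   # small prime, for illustration
beta = [3, 0, 5, 1]                     # assumed beta_g coefficients, not all zero
trials = 200_000
zeros = 0
for _ in range(trials):
    gamma = [random.randrange(q) for _ in beta]          # i.i.d. uniform over F_q
    if sum(b * g for b, g in zip(beta, gamma)) % q == 0:
        zeros += 1
print(zeros / trials, "vs 1/q =", 1 / q)                 # the two numbers should be close
```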
Lemma 5: With probability at least 1 − k/q, it holds that OPTi = OPTi−1 − 1.

Proof: Lemma 4 implies that for each client cj ∈ C′, the probability that ⟨ζj, γi⟩ is equal to zero is 1/q. By using the union bound we can show that the probability that ⟨ζj, γi⟩ = 0 for some client cj ∈ C is bounded by k/q. Thus, with probability at least 1 − k/q it holds that Γi−1 ∪ Q̃i−1 ∪ Uj ∪ {γi} is of rank n for every client cj ∈ C. By Lemma 3, OPTi = OPTi−1 − 1 with probability at least 1 − k/q.

Theorem 6: The algorithm computes, with probability at least 1 − k·n/q, an optimal solution for the data exchange problem, provided that the size q is larger than n.

Proof: Let OPT be the optimum number of transmissions required to solve the data exchange problem. Note that OPT0 = OPT. By Lemma 5, after each iteration, the number of required transmissions reduces by one with probability at least (1 − k/q). Thus, the data transfer will be completed after OPT iterations with probability at least

    (1 − k/q)^OPT ≥ (1 − k/q)^n ≥ 1 − k·n/q,

where the last inequality holds for q > n.

By selecting a sufficiently large q (e.g., q ≥ 4k·n), we can guarantee a certain probability of success (e.g., 3/4), which can then be amplified to be arbitrarily close to 1 by performing multiple iterations and choosing the iteration that yields the minimum number of transmissions.

Corollary 7: For any ε > 0 the algorithm can find an optimal solution to the data exchange problem with probability at least 1 − ε in time polynomial in the size of the input and log(ε).
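As a rough numeric illustration of this amplification argument (the numbers k, n and OPT below are made up, not taken from the paper): one run succeeds with probability at least (1 − k/q)^OPT, and repeating the run while keeping the best outcome fails only if every repetition fails.

```python
# Hypothetical numbers illustrating the success guarantee and its amplification.
k, n, q = 5, 40, 4 * 5 * 40           # q = 4*k*n, as suggested above
opt = 35                               # OPT is at most n; any value works here
single = (1 - k / q) ** opt            # lower bound on per-run success probability
eps = 1e-6
r = 1
while 1 - (1 - single) ** r < 1 - eps: # repetitions needed to reach 1 - eps
    r += 1
print(f"single run >= {single:.3f}; {r} repetitions give probability >= {1 - eps}")
```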
Reducing the field size

We can now construct a multicast problem as shown in Fig. 3 to reduce the required field size to |Fq| ≥ k. The multicast setting consists of a source node s and 4 layers. The first layer has n nodes corresponding to n source packets. The source node s is connected by a link to each node in layer 1. Layer 2 comprises k nodes corresponding to k clients. An existing edge eℓj between node ℓ in the first layer and node j in the second layer means that client cj knows packet xℓ. Client nodes in layer 2 are connected to a single node, w, in layer 3, where the edge capacity bj represents the number of transmissions from cj determined by the algorithm (bj = Σ_i 1[cti = cj], where 1[a] is the indicator function and becomes 1 only when condition a is true). And finally, w distributes coded packets to all k destination client nodes with edge capacities equal to b = Σ_{j=1}^k bj. Obviously, client cj is interested in all n source packets but also has side information Xj, which can also be represented by direct edges from the second to the last layer with capacities equal to nj. This is a standard multicast problem of transmitting n packets from the source node s to k destinations. Using [7], we can find a network coding solution to the problem with |Fq| ≥ k.

[Fig. 3. Example multicast graph for 4 clients and 5 packets: source s, packet nodes x1–x5, client nodes c1–c4 with capacities b1–b4 into the relay node w, and destination nodes t1–t4 reached through w (capacity b) and through side-information edges of capacities n1–n4.]

We have thus shown that with linear coding we can achieve the optimal number of transmissions and achieve the capacity of the equivalent multicast problem. Hence, linear coding is sufficient for the data exchange problem.
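The construction above is mechanical once the Xj and the per-client transmission counts bj are known. The sketch below is hypothetical code (node names s, x*, c*, w, t* follow Fig. 3; the edge capacities on the source and packet links are assumed to be one packet each, and the bj values are made up); it builds the layered graph as a dictionary mapping directed edges to capacities, on which any standard multicast network-coding or max-flow routine could then be run.

```python
# Illustrative construction of the Fig. 3 multicast graph (assumed capacities noted inline).
def multicast_graph(has, b, n):
    E = {}
    for p in range(1, n + 1):                       # source s to packet nodes (capacity 1 assumed)
        E[("s", f"x{p}")] = 1
    for j, Xj in has.items():                       # packet node -> client node if c_j knows x_p
        for p in Xj:
            E[(f"x{p}", f"c{j}")] = 1               # capacity 1 assumed
    for j in has:                                   # client node -> w: b_j coded transmissions
        E[(f"c{j}", "w")] = b[j]
    total_b = sum(b.values())
    for j, Xj in has.items():                       # w -> destinations, plus side-information edges
        E[("w", f"t{j}")] = total_b
        E[(f"c{j}", f"t{j}")] = len(Xj)             # capacity n_j: X_j is known for free
    return E

# Hypothetical instance with 4 clients and 5 packets, and made-up b_j values.
has = {1: {1, 2}, 2: {2, 3, 4}, 3: {3, 4, 5}, 4: {1, 5}}
b = {1: 1, 2: 1, 3: 1, 4: 0}
graph = multicast_graph(has, b, n=5)
for edge, cap in sorted(graph.items()):
    print(edge, cap)
```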
IV. LOWER AND UPPER BOUNDS

Before running the optimal randomized algorithm, the actual minimum number of transmissions OPT cannot be known a priori. It is therefore useful to be able to compute bounds on OPT. We first review one lower bound and one upper bound on OPT that were proved in [1]. We then establish some new bounds and comment on how they compare with previous bounds.

Lemma 8: [1] The minimum number of transmissions OPT satisfies OPT ≥ n − nmin. Moreover, if all clients initially have the same number of packets nmin < n, then OPT ≥ n − nmin + 1.

Lemma 9: [1] For |Fq| ≥ k, the minimum number of transmissions OPT satisfies

    OPT ≤ min_{1≤i≤k} { |X̄i| + max_{1≤j≤k} |X̄j ∩ Xi| }.    (3)

The upper bound is obtained by making a client a leader with uncoded transmissions from other clients and then asking the leader to satisfy the demands of others.

Lemma 10: The minimum number of transmissions OPT satisfies

    OPT ≥ ⌈ (Σ_{i=1}^k n̄i) / (k − 1) ⌉ = ⌈ (kn − Σ_{i=1}^k ni) / (k − 1) ⌉,    (4)

where ⌈·⌉ is the integer ceiling function.

Proof: The goal of transmissions is to reduce the total number of unknown packets from Σ_{i=1}^k n̄i to zero. In each transmission round, the transmitting client cannot benefit from its own transmission. Therefore, at most k − 1 clients will receive innovative information about their unknown packets. The lower bound follows by noting that the number of transmissions has to be an integer.
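Lemmas 8–10 are simple enough to evaluate directly from the side-information sets. The helper below is a hedged sketch rather than the authors' code (the example instance is made up); it returns the Lemma 8 and Lemma 10 lower bounds together with the Lemma 9 upper bound.

```python
# Closed-form bounds of Lemmas 8, 9 and 10 from the side-information sets X_i.
def bounds(has, n):
    k = len(has)                                          # assumes k >= 2
    X = set(range(1, n + 1))
    wants = {i: X - Xi for i, Xi in has.items()}          # X-bar_i
    n_i = [len(Xi) for Xi in has.values()]
    n_min = min(n_i)

    lb8 = n - n_min
    if all(ni == n_min for ni in n_i) and n_min < n:      # second case of Lemma 8
        lb8 += 1
    total_unknown = sum(len(w) for w in wants.values())
    lb10 = (total_unknown + k - 2) // (k - 1)             # integer ceiling, Lemma 10
    ub9 = min(len(wants[i]) + max(len(wants[j] & has[i]) for j in has)
              for i in has)                               # Lemma 9 (requires |F_q| >= k)
    return lb8, lb10, ub9

# Hypothetical example with k = 3 clients and n = 4 packets.
has = {1: {1, 2, 3}, 2: {2, 4}, 3: {1, 3, 4}}
print(bounds(has, n=4))
```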
The next lower bound is a generalization of Lemma 8 and states that when there is at least one packet that is known only to clients with nmin known packets, the number of transmissions is at least n − nmin + 1. Let CO = C \ Cmin be the set of clients ci with ni > nmin. If CO is non-empty, let X̄O denote the set of common unknown packets for the clients in CO.

Lemma 11: Whenever CO = C \ Cmin is empty (ni = nmin < n for all clients), the minimum number of transmissions OPT satisfies OPT ≥ n − nmin + 1. When CO is non-empty, we have OPT ≥ n − nmin + min(|X̄O|, 1), where X̄O = ∩_{ci∈CO} X̄i.

Compared with the Lemma 9 upper bound, here we wish to make the client ci a leader (such that it acquires the packets in X̄i) using coded transmissions from other clients, called helpers.

Let the vector hj = [hj,1, hj,2, · · · , hj,kj] denote the indices of the helper clients, where kj is the number of helpers and hj,m for 1 ≤ m ≤ kj is the index of the m-th helper. The transmissions from the helpers are done in the order of the elements of hj. The overall index j in hj refers to a particular choice of helpers. To make ci a leader, we can have at most (k − 1)! distinct ordered sets of helpers. The helpers should collectively satisfy the demands of the leader: X̄i ⊆ ∪_{m=1}^{kj} Xhj,m. Each helper client chj,m is responsible for transmitting Ahj,m = |Ehj,m| packets from its known set which are unknown to the leader and to all previous helpers, where Ehj,m is defined as

    Ehj,m = X̄i ∩ X̄hj,1 ∩ X̄hj,2 ∩ · · · ∩ X̄hj,m−1 ∩ Xhj,m.

Let Γj,m = {γm,1, · · · , γm,Ahj,m} be the set of coding vectors transmitted by helper chj,m from its known set of unit vectors, such that rank(Ui ∪ (∪_{q=1}^m Γj,q)) increases by Ahj,m compared to rank(Ui ∪ (∪_{q=1}^{m−1} Γj,q)). This is required to guarantee that ci becomes the leader in the process, and it requires an appropriate choice of coding coefficients corresponding to the unit vectors of Ehj,m for the elements of Γj,m. Coefficients at other positions can be chosen at random from the chosen field Fq, since they do not affect decoding at the leader.

After the helpers complete their transmissions, a client such as cp will know Bj,p = rank(Up ∪ (∪_{m=1}^{kj} Γj,m)) linearly independent equations and hence needs an extra n − Bj,p transmissions from the leader. Summarizing the results, we arrive at the following upper bound.

Lemma 12: The minimum number of transmissions OPT satisfies

    OPT ≤ min_{1≤i≤k} min_j { |X̄i| + max_{1≤p≤k} {n − Bj,p} },    (5)

where Bj,p = rank(Up ∪ (∪_{m=1}^{kj} Γj,m)), Γj,m = {γm,1, · · · , γm,Ahj,m} is the set of coding vectors transmitted by helper chj,m from its known set of unit vectors Uhj,m, and the second minimization is over all or only a subset of the choices of helpers.
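For a fixed leader and a fixed ordered helper list, the term inside (5) can be estimated by actually drawing the helper coding vectors. The sketch below is a hedged illustration, not the paper's code: it assumes q is prime and reasonably large, draws each helper's Ahj,m vectors with random nonzero coefficients on the helper's whole known set (the paper instead fixes the coefficients on Ehj,m appropriately and randomizes only the rest, so this mirrors the randomized evaluation used for the numerical results), and then evaluates |X̄i| + max_p (n − Bj,p) via Gaussian-elimination ranks; the instance is hypothetical.

```python
# Randomized evaluation of one term of the Lemma 12 bound (illustrative sketch).
import random

def rank_mod_q(vectors, q):
    rows = [list(v) for v in vectors if any(v)]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] % q), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], q - 2, q)                  # q assumed prime
        rows[r] = [(a * inv) % q for a in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col] % q:
                f = rows[i][col]
                rows[i] = [(a - f * b) % q for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def lemma12_term(has, n, q, leader, helpers):
    X = set(range(1, n + 1))
    wants = {c: X - Xc for c, Xc in has.items()}
    assert wants[leader] <= set().union(*(has[h] for h in helpers))   # helpers cover X-bar_i
    unit = lambda j: [1 if g == j else 0 for g in range(1, n + 1)]
    gammas = []
    remaining = set(wants[leader])                 # leader's packets not yet covered
    for h in helpers:
        E = remaining & has[h]                     # E_{h_m}: new packets this helper covers
        for _ in range(len(E)):                    # A_{h_m} = |E_{h_m}| transmissions
            gammas.append([random.randrange(1, q) if j in has[h] else 0
                           for j in range(1, n + 1)])
        remaining -= has[h]
    # B_{j,p} = rank(U_p together with all helper vectors), for every client c_p.
    B = {p: rank_mod_q([unit(j) for j in has[p]] + gammas, q) for p in has}
    return len(wants[leader]) + max(n - B[p] for p in B)

has = {1: {1, 2, 3}, 2: {2, 3, 4, 5}, 3: {1, 4, 5}}    # hypothetical instance, n = 5
print(lemma12_term(has, n=5, q=101, leader=1, helpers=[2, 3]))
```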
Finally, we present some numerical results. The bottom curve in Fig. 4 shows the maximum of the Lemma 10 and Lemma 11 lower bounds on OPT for k = 5 clients versus the number of packets. The combined lower bounds provide very tight closed-form results on the optimal randomized solution, which is also shown in the figure. The Lemma 12 upper bound is also shown, using randomized coding with only k − 1 choices of helpers for each leader. It significantly improves the Lemma 9 upper bound, which used uncoded transmissions.

[Fig. 4. Comparison of the derived lower and upper bounds with the optimal solution for k = 5 clients versus number of packets. The plot shows the maximum of the two lower bounds, the randomized algorithm, the helper-based upper bound with k − 1 choices of helpers, the uncoded upper bound of [1], and the trivial upper bound n; x-axis: number of packets, n (10 to 50); y-axis: bounds and algorithm performance.]

V. CONCLUSION

We presented a randomized algorithm for finding an optimal solution for the cooperative data exchange problem with high probability. While the algorithm gives a solution over a relatively large field, we showed that the field size can be reduced, through an efficient procedure, without any penalty in terms of the total number of transmissions. We also provided two tight lower bounds and one upper bound which can be easily computed and are therefore helpful in evaluating system performance.

In the future, we would like to explore two interrelated issues of (i) incentives and their overheads to guarantee continued cooperation from clients and (ii) fairness to clients (in terms of the number of transmissions) during data exchange.

REFERENCES

[1] S. El Rouayheb, A. Sprintson, and P. Sadeghi, "On coding for cooperative data exchange," in Proc. Information Theory Workshop, Cairo, Egypt, 2009, pp. 118–122.
[2] Y. Birk and T. Kol, "Coding-on-demand by an informed source (ISCOD) for efficient broadcast of different supplemental data to caching clients," IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2825–2830, June 2006.
[3] Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, "Index coding with side information," in Proc. of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006, pp. 197–206.
[4] S. El Rouayheb, A. Sprintson, and C. N. Georghiades, "On the relation between the index coding and the network coding problems," in Proc. of IEEE International Symposium on Information Theory (ISIT), 2008.
[5] D. Shah, Gossip Algorithms (Foundations and Trends in Networking). Now Publishers Inc., 2007, vol. 3, no. 1.
[6] A. Sendonaris, E. Erkip, and B. Aazhang, "User cooperation diversity–Part I: System description," IEEE Trans. Commun., vol. 51, no. 11, pp. 1927–1938, Nov. 2003.
[7] S. Jaggi, P. Sanders, P. A. Chou, M. Effros, S. Egner, K. Jain, and L. Tolhuizen, "Polynomial time algorithms for multicast network code construction," IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 1973–1982, 2005.
