Graph Theory
C(A, B) = ∑_{u∈A, v∈B} c(u, v)
Definition 1.1.3. (Flow) A flow is a mapping f : E → R, denoted by f_uv or f(u, v), subject to the following constraints:

(1) f(u, v) ≤ c(u, v) for each (u, v) ∈ E (capacity constraint)
(2) f(u, v) = −f(v, u) (skew symmetry)
(3) ∑_{v∈V} f(u, v) = 0 ∀ u ∈ V \ {s, t} (conservation of flow)

The value of a flow is defined by |f| = ∑_{v∈V} f(s, v), where s is the source of N. It represents the amount of flow passing out of the source to the sink. Figure 1.1 shows a typical network and a flow in that network. Here f can be viewed as a vector over the directed edges² of the network.

¹ It is also referred to as the Maximum Concurrent Flow Problem.
Figure 1.1: (a) A network with edge capacities. (b) The corresponding flow network with |f| = 4. The dotted line forms a cut, which is also a minimum cut.
Definition 1.1.4. (Flow across a cut) If (A, B) is any cut in the network, then the flow across the cut is defined as

f(A, B) = ∑_{u∈A, v∈B} f(u, v)
Lemma 1.1.1. If f is a flow in a network G, then for any s, t-cut (A, B), f(A, B) = |f|.

Proof. We can prove this by considering the flow over the cut and reducing it to the flow from the source vertex, which is the actual flow.

f(A, B) = ∑_{u∈A, v∈B} f(u, v)
        = ∑_{u∈A, v∈B} f(u, v) + ∑_{{w,w′}⊆A} f(w, w′)
        = ∑_{u∈A, v∈V} f(u, v)
        = ∑_{v∈V} f(s, v) + ∑_{u∈A\{s}, v∈V} f(u, v)
        = ∑_{v∈V} f(s, v)
        = |f|

since, by skew symmetry, ∑_{{w,w′}⊆A} f(w, w′) = 0, and by conservation of flow the sum ∑_{v∈V} f(u, v) vanishes for every u ≠ s.

² The terms directed edge and arc may be used interchangeably.
Definition 1.1.5. (Residual Graph) If f is a flow in a graph G, then the residual graph is defined as G_f with c_f(u, v) = c(u, v) − f(u, v).

For a directed edge (a, b) with c(a, b) = 0, the residual capacity in G_f will be c_f(a, b) = 0 − f(a, b) = −f(a, b), which by skew symmetry equals f(b, a). Figure 1.2 gives an example of a residual graph (network).
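Concretely, the residual capacities of Definition 1.1.5 can be computed edge by edge. The following is a small illustrative sketch; the dictionary-based representation and the toy network are our own choices, not the thesis's:

```python
# Residual capacities per Definition 1.1.5: c_f(u, v) = c(u, v) - f(u, v).
# Pairs absent from c are treated as capacity 0, so a flow f(a, b) > 0
# induces residual capacity f(a, b) on the reverse pair (b, a).

def residual(c, f, vertices):
    """Return the positive residual capacities as a dict over vertex pairs."""
    cf = {}
    for u in vertices:
        for v in vertices:
            cap = c.get((u, v), 0)
            flow = f.get((u, v), 0) - f.get((v, u), 0)  # skew symmetry
            r = cap - flow
            if r > 0:
                cf[(u, v)] = r
    return cf

# Toy network: s -> a -> t with capacity 2 on each arc, and a flow of 1.
c = {('s', 'a'): 2, ('a', 't'): 2}
f = {('s', 'a'): 1, ('a', 't'): 1}
cf = residual(c, f, ['s', 'a', 't'])
print(cf)  # {('s','a'): 1, ('a','s'): 1, ('a','t'): 1, ('t','a'): 1}
```

Note how the reverse pairs (a, s) and (t, a) acquire residual capacity 1 even though they carry no capacity in c, exactly as described above.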
Lemma 1.1.2. f′ is a flow in G_f iff f + f′ is a flow in G.

Proof. We prove this by proving the following two conditions.

(1) A flow f′ is a flow in G_f iff it satisfies f′(u, v) ≤ c_f(u, v):

f′(u, v) ≤ c_f(u, v)
⟺ f′(u, v) ≤ c(u, v) − f(u, v)
⟺ (f + f′)(u, v) ≤ c(u, v)
Figure 1.2: (a) A network N of positive integral edge capacities. (b) The residual graph obtained after pushing a flow of 1 through the path s → c → b → a → t.
(2) |f + f′| = |f| + |f′|:

|f + f′| = ∑_{v∈V} (f + f′)(s, v)
         = ∑_{v∈V} [f(s, v) + f′(s, v)]
         = ∑_{v∈V} f(s, v) + ∑_{v∈V} f′(s, v)
         = |f| + |f′|
Lemma 1.1.3. If f′ is maximum in G_f, then f + f′ is maximum in G.

Proof. Suppose some flow g in G satisfies |g| > |f + f′|. Then

|g| − |f| > |f + f′| − |f| = |f| + |f′| − |f| = |f′|,

and by Lemma 1.1.2, g − f is a flow in G_f of value |g| − |f| > |f′|. But this contradicts the fact that |f′| is maximum in G_f.
Definition 1.1.6. (Capacity of a path) The capacity of a path P is given as

C(P) = min_{(u,v)∈P} c(u, v)

Definition 1.1.7. (Augmenting Path) An augmenting path P is an s, t-path³ of positive capacity.
1.1.2 Bounds

Let (A, B) be any s, t-cut; then the value of the flow is at most the capacity of the cut:

|f| ≤ C(A, B)

since, from Lemma 1.1.1,

|f| = f(A, B) = ∑_{u∈A, v∈B} f(u, v) ≤ ∑_{u∈A, v∈B} c(u, v) = C(A, B)
1.1.3 Proof of the Max-flow Min-cut Theorem

Theorem 1. (Ford-Fulkerson, 1956) [5] In a network G, let f be any maximum flow in G; then there exists a cut (A, B) for which f(A, B) = c(A, B).

Proof. The proof given by Ford and Fulkerson is an algorithmic proof. Here, we give the algorithm that iteratively finds an augmenting path, augments the flow, and computes the residual graph.

The total value of the resulting flow will be f_1 + f_2 + f_3 + ... + f_k, where k is the number of iterations and f_i is the flow obtained in the i-th iteration. The above algorithm terminates by producing a maximum flow, and we can also show the existence of a cut which will be the minimum cut. Figure 1.3 shows the execution of the Ford-Fulkerson algorithm on an example network.

³ Here path means a directed path from s to t.
Algorithm 1 Maxflow(G)
  Find an augmenting path P in G
  if P does not exist then
    return 0
  else
    let f be a flow of value C(P) along P
    return f + Maxflow(G_f)
  end if
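Algorithm 1 can be rendered directly in code. The sketch below follows the same recursive structure, using a DFS to find an augmenting path in the residual graph; the example network at the end is made up for illustration (the capacities of Figure 1.1 are not reproduced here):

```python
# A direct rendering of Algorithm 1 (a sketch, not the thesis's code):
# repeatedly find an augmenting s-t path of positive residual capacity,
# push C(P) units along it, and recurse. Capacities are a dict
# (u, v) -> int; the residual graph is maintained by updating this dict.

def find_path(cf, u, t, path, seen):
    """DFS for a u-t path of positive residual capacity."""
    if u == t:
        return path
    seen.add(u)
    for (a, b), r in list(cf.items()):
        if a == u and r > 0 and b not in seen:
            found = find_path(cf, b, t, path + [(a, b)], seen)
            if found is not None:
                return found
    return None

def max_flow(cf, s, t):
    path = find_path(cf, s, t, [], set())
    if path is None:
        return 0
    bottleneck = min(cf[e] for e in path)            # C(P), Definition 1.1.6
    for (u, v) in path:
        cf[(u, v)] -= bottleneck                     # forward residual down
        cf[(v, u)] = cf.get((v, u), 0) + bottleneck  # reverse residual up
    return bottleneck + max_flow(cf, s, t)

cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
       ('a', 't'): 2, ('b', 't'): 3}
print(max_flow(dict(cap), 's', 't'))  # 5
```

The reverse-edge update is what lets a later augmenting path cancel flow pushed earlier, mirroring the residual-graph recursion Maxflow(G_f).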
In order to prove this theorem, we first consider the following three statements; by showing a circular chain of implications between them, we prove the theorem.

(1) G has a maximum flow f.
(2) G_f has no augmenting path.
(3) Some cut of G is saturated.

As we can see, 1 ⟹ 2 and 3 ⟹ 1, and these two are immediate. If there exists an augmenting path, the flow can be further increased by augmenting through that path, contradicting the fact that f is maximum. For the second, since the flow cannot exceed the capacity of any cut, if some cut is saturated, then the flow has to be maximum.

What remains is to prove that 2 ⟹ 3. For this, consider two sets of vertices A and B in G_f, where A consists of all the vertices reachable from s and B consists of all the vertices from which t is reachable. Note that the graph we consider is the residual graph G_f. Now, we can say that s ∉ B and t ∉ A; if either failed, there would exist an augmenting path, which contradicts the hypothesis. Now, consider the cut (A, B): it forms a saturated cut. Hence proved.
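The 2 ⟹ 3 step above is constructive: once no augmenting path exists, the set A of vertices reachable from s can be computed by a simple search of the residual graph. A small sketch, with a made-up residual graph in which the maximum flow has already been reached:

```python
# Sketch of the 2 => 3 step: in a residual graph with no augmenting path,
# A = vertices reachable from s along positive residual capacity; every
# original edge leaving A is then saturated.

def reachable(cf, s):
    """Vertices reachable from s along positive residual capacity."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for (a, b), r in cf.items():
            if a == u and r > 0 and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

# Residuals for a made-up network (cap 2 on s-edges, cap 1 into t) after a
# maximum flow of value 2 has been pushed along s->a->t and s->b->t:
cf = {('s', 'a'): 1, ('a', 's'): 1, ('a', 't'): 0, ('t', 'a'): 1,
      ('s', 'b'): 1, ('b', 's'): 1, ('b', 't'): 0, ('t', 'b'): 1}
A = reachable(cf, 's')
print(sorted(A))  # ['a', 'b', 's'] -- t is unreachable: (A, B) is saturated
```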
Figure 1.3: (a) The initial graph (network) G. (b) The residual graph G′ after sending a flow of 1 unit along the path s → c → d → t. (c) The final residual graph G″.
Lemma 2.2.1. In a graph G(V, E), the number of pairwise edge-disjoint x, y-paths in G equals the number of pairwise edge-disjoint x, y-paths in G′, the corresponding directed graph.

Proof. It is easy to see that there will be at least as many pairwise edge-disjoint x, y-paths in G′ also.
Figure 2.1: (a) The paths P_1 and P_2, where (a, b) ∈ P_1 and (b, a) ∈ P_2. (b) The corresponding paths P_3 and P_4.
To show that there can be no more in G′, observe that whenever two of the paths use a pair of oppositely directed arcs, as P_1 and P_2 do in Figure 2.1(a), they can be rerouted into paths like P_3 and P_4 of Figure 2.1(b), and these have to be edge-disjoint in G also. In this way all the pairwise edge-disjoint x, y-paths in G′ yield a set U of pairwise edge-disjoint x, y-paths in G with |U| = k.
Theorem 2. (Menger, 1927) (Edge version) If x, y are two vertices in a graph G and (x, y) ∉ E(G), then the minimum size of an x, y-cut equals the maximum number of pairwise edge-disjoint x, y-paths.

Proof. The graph G(V, E) is an undirected graph. Let k be the maximum number of pairwise edge-disjoint x, y-paths (undirected) in G. The necessary part of the theorem can be easily verified: since a minimum x, y-cut must contain at least one edge from each of the k edge-disjoint x, y-paths, its size is at least k; equality will follow from sufficiency.

In order to prove sufficiency, we first transform the given undirected graph into a network with x as source and y as sink. Let G′ be the corresponding directed graph with unit capacities on the arcs. Then by Lemma 2.2.1 the number of pairwise edge-disjoint x, y-paths in G′ is equal to k. Since the edges are of unit capacity, the maximum flow possible will be equal to the number of pairwise edge-disjoint x, y-paths. So the maximum flow in G′ is k. Now, by applying the Max-flow Min-cut theorem, we can see that the size of the minimum cut (directed) in G′ is k.
From Lemma 2.2.2, the number of pairwise internally-disjoint paths in G is k. Now, we will transform the directed graph G′ as follows. Let G″ be obtained from G′ by splitting every vertex v ∉ {x, y} into two vertices v_in and v_out: v_in receives all the inward edges to v and v_out carries all the outward edges from v. Add a directed edge from v_in to v_out. Assign unit capacity to all the edges of G″. x_in and y_out can be left out because there will be no flow into the source vertex and out of the sink vertex.
Now, we show that the maximum flow in G″ is k, carried by the unit-capacity paths that correspond to the pairwise internally-disjoint x, y-paths in G′.

Now, by applying the Ford-Fulkerson algorithm, we can get a cut of size k.
Select a set S of vertices, one for each edge in the cut, as follows:

(1) If the edge is (v_in, v_out), select v.
(2) If the edge is (v_out, u_in), where neither u nor v is x or y, select either u or v.
(3) In case of (x_out, v_in) or (v_out, y_in), select the vertex v.

Clearly, the size of S is k. This set also forms a separator in the network G″; otherwise there would remain an x, y-path avoiding the cut, which would be an x, y-augmenting path, contradicting the fact that f is maximum in G″. Also it is easy to see that this set S will be a separator in the original graph G. And since any x, y-separator must contain at least one internal vertex of each of the k internally-disjoint paths, its size is at least k, so S is a minimum separator. Hence proved.
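The vertex-splitting transformation used in this proof can be sketched as follows; the v_in / v_out naming and the tiny example graph are illustrative choices, not taken from the thesis:

```python
# A sketch of the vertex-splitting transformation: each internal vertex v
# becomes the arc v_in -> v_out, an arc (u, v) becomes u_out -> v_in, and
# the source/sink keep a single copy (x_in and y_out are left out).

def split_vertices(arcs, source, sink):
    """Return the unit-capacity arcs of the split graph."""
    new_arcs = set()
    vertices = {u for a in arcs for u in a}
    for v in vertices:
        if v not in (source, sink):
            new_arcs.add((v + '_in', v + '_out'))
    def tail(u):  # the side of u that arcs leave from
        return u if u in (source, sink) else u + '_out'
    def head(v):  # the side of v that arcs enter
        return v if v in (source, sink) else v + '_in'
    for (u, v) in arcs:
        new_arcs.add((tail(u), head(v)))
    return new_arcs

arcs = {('x', 'a'), ('a', 'y')}
print(sorted(split_vertices(arcs, 'x', 'y')))
# [('a_in', 'a_out'), ('a_out', 'y'), ('x', 'a_in')]
```

A minimum cut of the split graph that uses only the internal arcs v_in → v_out then reads off directly as a minimum vertex separator.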
2.3 König-Egerváry Theorem

Theorem 4. (König's theorem, 1931) In any bipartite graph, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover.

Proof. Let G(V, E) be the bipartite graph with partitions X, Y. Let k be the size of a maximum matching M. Then the size of the minimum vertex cover is at least k: since no two edges in a matching have a common vertex, a vertex cover must contain at least one distinct vertex for each edge of M.

What remains is to show that there exists a vertex cover of size k. For this, we first transform the given graph as below. Construct the new graph G′(V′, E′), where V′ consists of all the vertices from G along with two new vertices s, t corresponding to the source and sink, and add edges to E′ as below:

(1) for every edge {u, v} ∈ E with u ∈ X and v ∈ Y, add a directed edge (u, v)
(2) ∀ u ∈ X, add the directed edge (s, u)
(3) ∀ v ∈ Y, add the directed edge (v, t)

Assign unit capacities to all the edges in E′.
Figure 2.2: (a) A bipartite graph with partitions {a, b, c} and {d, e, f}. (b) The resulting
network after transformation.
We now show that the number of pairwise internally-disjoint s, t-paths in G′ is equal to the size of the maximum matching in G. Since no two edges in M have a common vertex, all the s, t-paths corresponding to the edges in M are pairwise internally-disjoint. There cannot be any other path that is internally-disjoint to all these paths: if there existed such a path, say s → x → y → t, then the matching M could be enlarged by adding the edge (x, y). Note that every path in G′ has the form s → x → y → t, so the number of pairwise internally-disjoint s, t-paths in G′ is k.

By applying Menger's Theorem (vertex version) to G′, the minimum size of an s, t-separator is k. Such a separator contains neither s nor t, so it is a set of k vertices of G meeting every edge, that is, a vertex cover of size k. Hence proved.
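König's equality itself is easy to confirm by brute force on small instances. The following sketch checks it on a made-up bipartite graph (the edges of Figure 2.2 are not recoverable here):

```python
# Brute-force check of König's equality: max matching = min vertex cover
# on a small made-up bipartite graph, X = {a, b, c}, Y = {d, e, f}.
from itertools import combinations

edges = [('a', 'd'), ('a', 'e'), ('b', 'e'), ('c', 'f')]
vertices = sorted({v for e in edges for v in e})

def is_matching(es):
    used = [v for e in es for v in e]
    return len(used) == len(set(used))     # no shared endpoints

def is_cover(vs):
    return all(u in vs or v in vs for (u, v) in edges)

max_matching = max(k for k in range(len(edges) + 1)
                   for es in combinations(edges, k) if is_matching(es))
min_cover = min(k for k in range(len(vertices) + 1)
                for vs in combinations(vertices, k) if is_cover(set(vs)))
print(max_matching, min_cover)  # 3 3
```

On this instance the disjoint edges (a, d), (b, e), (c, f) force both quantities to 3, matching the theorem.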
[ 1 1 0 ]
[ 0 1 1 ]
[ 0 0 1 ]

[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]

[ 1 1 0 0 0 1 ]
[ 1 0 1 1 0 0 ]
[ 0 1 1 0 1 0 ]
[ 0 0 0 1 1 1 ]
Definition 3.1.2. (Linear Program) Let P be a maximization problem. Consider this as the primal. Then the linear program formulation can be given as

Maximize c^T x
Subject to Ax ≤ b
           x ≥ 0          (3.1.1)

where A is an m × n matrix and c, x, b are column vectors of order n, n, m respectively; c^T denotes the transpose of c.
A = [ a_11 a_12 ... a_1n ]
    [ a_21 a_22 ... a_2n ]
    [  .    .         .  ]
    [ a_m1 a_m2 ... a_mn ]

x = (x_1, x_2, ..., x_n)^T,  c = (c_1, c_2, ..., c_n)^T,  b = (b_1, b_2, ..., b_m)^T
Definition 3.1.3. (Dual of a Linear Program) The dual of the LPP (P), say D(P), can be given as

Minimize b^T y
Subject to A^T y ≥ c
           y ≥ 0

where y = (y_1, y_2, ..., y_m)^T is a column vector of order m.
Theorem 5. (Weak Duality) For any feasible solutions x* and y* of P and D(P), c^T x* ≤ b^T y*.

Proof. For a feasible x* and y*,

Ax* ≤ b ⟹ y*^T A x* ≤ y*^T b          (3.1.2)
Now, consider the objective function c^T x* of P. We have

A^T y* ≥ c ⟹ (A^T y*)^T x* ≥ c^T x* ⟹ y*^T A x* ≥ c^T x*          (3.1.3)

Now, from equations 3.1.2 and 3.1.3 we have

c^T x* ≤ y*^T A x* ≤ y*^T b          (3.1.4)
We can see that y*^T b = b^T y*, since

y*^T b = [ y*_1 y*_2 ... y*_m ] (b_1, b_2, ..., b_m)^T
       = b_1 y*_1 + b_2 y*_2 + ... + b_m y*_m
       = [ b_1 b_2 ... b_m ] (y*_1, y*_2, ..., y*_m)^T
       = b^T y*          (3.1.5)

Therefore,

c^T x* ≤ y*^T A x* ≤ b^T y*          (3.1.6)
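Weak duality can be illustrated numerically. The sketch below builds a tiny made-up LP and checks the chain of inequalities (3.1.6) for one feasible primal-dual pair, using plain lists rather than an LP solver:

```python
# Numeric illustration of weak duality (3.1.6): for any feasible pair
# (x, y) of a primal-dual LP pair, c^T x <= y^T A x <= b^T y.
# The LP data below is made up for illustration.

A = [[1, 2],
     [3, 1]]
b = [4, 5]
c = [1, 1]

def matvec(M, v):
    return [sum(m * w for m, w in zip(row, v)) for row in M]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

x = [1.0, 1.0]   # feasible for P:  Ax = [3, 4] <= b,  x >= 0
y = [0.5, 0.5]   # feasible for D:  A^T y = [2, 1.5] >= c,  y >= 0

primal = dot(c, x)             # c^T x   = 2.0
middle = dot(y, matvec(A, x))  # y^T A x = 3.5
dual = dot(b, y)               # b^T y   = 4.5
print(primal, middle, dual)    # 2.0 3.5 4.5
assert primal <= middle <= dual
```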
Theorem 6. (Strong Duality) [6] If the primal P has an optimal solution x*, then the dual D(P) also has an optimal solution y* such that c^T x* = b^T y*.

Theorem 7. (Unimodular LP) For the LPP (P), if A is a totally unimodular matrix and b is integral, then some optimal solution is integral.

The proofs of the above two theorems are beyond the scope of this report. The proof of Theorem 6 is found in [6] and the proof of Theorem 7 is found in [2].
3.2 König's Theorem

Theorem 8. (König's theorem, 1931) In any bipartite graph, the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover.

Proof. Let G(V, E) be a bipartite graph and X, Y be the two partitions of V. Let |X| = m, |Y| = n, and label the vertices so that X = {1, 2, ..., m} and Y = {m + 1, m + 2, ..., m + n}.

If E is empty, then the size of both the minimum vertex cover and maximum matching is zero. Without loss of generality we shall assume that E is non-empty and |E| = r. We shall introduce two families of variables p and q corresponding to every vertex and edge respectively. The linear program (P) for finding the maximum matching is as follows,
Maximize ∑_{(i,j)∈E} q_ij          (3.2.1a)
Subject to ∑_{j:(i,j)∈E} q_ij ≤ 1  ∀ i ∈ X          (3.2.1b)
           ∑_{i:(i,j)∈E} q_ij ≤ 1  ∀ j ∈ Y          (3.2.1c)
           q_ij ∈ {0, 1}  ∀ (i, j) ∈ E          (3.2.1d)

The first two constraints imply that at most one edge can be selected corresponding to every node.

Here, the given problem is an Integer Linear Program. We relax the integrality constraints on q_ij so that it can take fractional values. The resulting LPP, say P′, is
Maximize ∑_{(i,j)∈E} q_ij
Subject to ∑_{j:(i,j)∈E} q_ij ≤ 1  ∀ i ∈ X
           ∑_{i:(i,j)∈E} q_ij ≤ 1  ∀ j ∈ Y
           q_ij ≥ 0  ∀ (i, j) ∈ E          (3.2.2)
Let A be the coefficient matrix of the LPP (P′), and write it in the form of Definition 3.1.2 with

x = (q_ij)_{(i,j)∈E},  c = (1, 1, ..., 1)^T,  b = (1, 1, ..., 1)^T,  y = (p_1, ..., p_m, p_{m+1}, ..., p_{m+n})^T          (3.2.3)
where the orders of the matrices are: x is r × 1, c is r × 1, b is (m + n) × 1 and y is (m + n) × 1. The objective function with respect to Definition 3.1.3 can be given as

b^T y = [ 1 1 ... 1 ] (p_1, ..., p_m, p_{m+1}, ..., p_{m+n})^T = ∑_{i∈X} p_i + ∑_{j∈Y} p_j          (3.2.4)
From Theorem 3.2.1 (proved below), we know the coefficient matrix is totally unimodular. In the matrix A, with respect to a variable q_ij, every column has exactly two ones, corresponding to the i-th and j-th rows. Thus, in A^T every row has two ones, one in the i-th column and the other in the j-th column. So the set of constraints A^T y ≥ c for the dual can be given as follows,

p_i + p_j ≥ 1  ∀ (i, j) ∈ E          (3.2.5)
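The incidence structure behind (3.2.5) is easy to exhibit on a toy instance: each column of A has exactly two 1s, so each row of A^T yields a constraint p_i + p_j ≥ 1. A sketch with a made-up bipartite graph:

```python
# The incidence structure behind (3.2.5): for a made-up bipartite graph
# with X = {1, 2} and Y = {3, 4}, build the coefficient matrix A of (P')
# and check that every column has exactly two 1s.

m, n = 2, 2
edges = [(1, 3), (1, 4), (2, 4)]   # r = 3 edge variables q_ij

# A has m + n rows (one per vertex) and one column per edge; the entry is
# 1 iff the row's vertex is an endpoint of the column's edge.
A = [[1 if row in e else 0 for e in edges] for row in range(1, m + n + 1)]
col_sums = [sum(A[r][c] for r in range(m + n)) for c in range(len(edges))]
print(A)
print(col_sums)  # [2, 2, 2] -- every column has exactly two 1s
```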
Along with the non-negativity constraints, the dual can be given as,

Minimize ∑_{i∈X} p_i + ∑_{j∈Y} p_j
Subject to p_i + p_j ≥ 1  ∀ (i, j) ∈ E
           p_i ≥ 0  ∀ i ∈ X
           p_j ≥ 0  ∀ j ∈ Y          (3.2.6)
This actually is the LP formulation for the relaxed minimum vertex cover problem that we see below.

Now, consider the problem of finding the minimum vertex cover in G. Then the LPP formulation, say Q, is given as follows,

Minimize ∑_{i∈X} p_i + ∑_{j∈Y} p_j
Subject to p_i + p_j ≥ 1  ∀ (i, j) ∈ E
           p_i ∈ {0, 1}  ∀ i ∈ X
           p_j ∈ {0, 1}  ∀ j ∈ Y          (3.2.7)
The above given problem is an ILP (Integer Linear Program). We shall relax the integrality constraints on p_i, p_j in order to obtain an LP. Let Q′ be

Minimize ∑_{i∈X} p_i + ∑_{j∈Y} p_j
Subject to p_i + p_j ≥ 1  ∀ (i, j) ∈ E
           p_i ≥ 0  ∀ i ∈ X
           p_j ≥ 0  ∀ j ∈ Y          (3.2.8)
Let B be the coefficient matrix of Q′.

Theorem 3.2.1. The coefficient matrix A of the LPP (P′) is totally unimodular.
Proof. Clearly, A has m + n rows and r columns, in which the first m constraints contribute the first m rows and the remaining n constraints contribute the last n rows. Each column of A has exactly two 1s, one in the first m rows and one in the last n rows; all other elements are 0. Let D be a square sub-matrix of order k. We will prove the theorem by induction on k.

Clearly, for k = 1, |D| = 0 or 1. Assume that all square sub-matrices of order k − 1 have determinant equal to 0, 1 or −1. We shall consider different cases:

(1) If D has at least one column containing only zeros, then |D| = 0.
(2) If D has at least one column containing a single 1, then |D| = ±|E|, where E is the sub-matrix obtained from D by deleting that column and the row containing the 1. By induction, |E| = 0, 1 or −1, hence |D| = 0, 1 or −1.
(3) If every column of D has exactly two 1s, then in each column the first 1 comes from the first m rows of A and the second 1 comes from the last n rows of A. In particular, every column has exactly a single 1 within the rows of D that came from the first m rows of A; so, by adding all these rows, we obtain a row with all ones. The same argument applies to the remaining rows of D, which came from the last n rows of A. Hence the rows of D are linearly dependent and |D| = 0.

Hence A is totally unimodular.
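For very small matrices, total unimodularity can also be checked directly from the definition by enumerating all square sub-matrices. The sketch below does this for the incidence matrix of a made-up bipartite graph; the enumeration is exponential, so it is only a sanity check, not an algorithm to use at scale:

```python
# Brute-force verification of total unimodularity: every square submatrix
# must have determinant in {-1, 0, 1}.
from itertools import combinations

def det(M):
    """Cofactor-expansion determinant (exact, for small integer matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def is_totally_unimodular(A):
    rows, cols = len(A), len(A[0])
    for k in range(1, min(rows, cols) + 1):
        for rs in combinations(range(rows), k):
            for cs in combinations(range(cols), k):
                if det([[A[r][c] for c in cs] for r in rs]) not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of a made-up bipartite graph with X = {1, 2},
# Y = {3, 4} and edges (1,3), (1,4), (2,4):
A = [[1, 1, 0],
     [0, 0, 1],
     [1, 0, 0],
     [0, 1, 1]]
print(is_totally_unimodular(A))  # True
```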
3.3 The Max-Flow Min-Cut Theorem

Theorem 9. (Ford-Fulkerson, 1956) In a network G, let f be any maximum flow in G; then there exists a cut (A, B) for which f(A, B) = c(A, B).

Proof. Let N(V, E, c, s, t) be the network with |V| = n and |E| = m. We shall write the linear programming formulation for the maximum flow.

We want to find the maximal flow that can be sent from the source vertex s to the sink vertex t. Let v be the value of any flow from s to t and x_ij be the flow sent along the arc (i, j). Let the vertices be labeled using integers 1 to n such that the source s is labeled 1 and the sink t is labeled n. Then the LPP (R) corresponding to the maximum flow is,
Maximize v
Subject to ∑_{(i,j)∈E} x_ij − ∑_{(k,i)∈E} x_ki − v = 0  if i = 1
           ∑_{(i,j)∈E} x_ij − ∑_{(k,i)∈E} x_ki + v = 0  if i = n
           ∑_{(i,j)∈E} x_ij − ∑_{(k,i)∈E} x_ki = 0  if i = 2, 3, ..., n − 1
           x_ij ≤ c_ij  ∀ (i, j) ∈ E
           x_ij ≥ 0  ∀ (i, j) ∈ E
           v ≥ 0          (3.3.1)
The first constraint implies that the net flow out of the source vertex 1 is equal to v, and the second implies that the net flow into the sink vertex n is v. The third implies that the total flow into any intermediate vertex equals the total outflow of that vertex. By relaxing the equality in these three constraints, we obtain the following inequalities:

∑_{(i,j)∈E} x_ij − ∑_{(k,i)∈E} x_ki − v ≤ 0  if i = 1
∑_{(i,j)∈E} x_ij − ∑_{(k,i)∈E} x_ki + v ≤ 0  if i = n
∑_{(i,j)∈E} x_ij − ∑_{(k,i)∈E} x_ki ≤ 0  if i = 2, 3, ..., n − 1          (3.3.2)
Let R′ denote the relaxed LPP. In the form of Definition 3.1.2, its vectors are

x = (v, x_ij, ...)^T,  b = (0, ..., 0, c_ij, ...)^T,  c = (1, 0, 0, ...)^T          (3.3.3)

where x lists v followed by the m flow variables, b lists n zeros followed by the m capacities, and c selects v.
Now, we shall derive the dual for the above LP in equation 3.3.1. Let u_i, where 1 ≤ i ≤ n, be the dual variables corresponding to the first three sets of equations (flow constraints) and y_ij, where (i, j) ∈ E, be the dual variables corresponding to the fourth set of constraints (capacity constraints). Then the vector y is

y = (u_1, ..., u_n, y_ij, ...)^T          (3.3.4)
Therefore, the objective function for the dual can be given as

b^T y = (0, ..., 0, c_ij, ...) (u_1, ..., u_n, y_ij, ...)^T = ∑_{(i,j)∈E} c_ij y_ij          (3.3.5)
Consider the coefficient matrix, say A, of the LP formulation for Max-flow. We shall see the properties of this matrix. There will be (n + m) rows corresponding to the (n + m) constraints and m + 1 columns corresponding to the variable v and the m variables x_ij. The coefficient matrix A will look as below,
        1     2        3       ...   m+1
  1  [ -1  a_{2,1}  a_{3,1}    ...  a_{m+1,1} ]
  2  [  0  a_{2,2}  a_{3,2}    ...  a_{m+1,2} ]
  .  [  .     .        .                .     ]
  n  [ +1  a_{2,n}  a_{3,n}    ...  a_{m+1,n} ]
x_ij [  0     1        0       ...     0      ]
  .  [  0     0        1       ...     0      ]
  .  [  .     .        .                .     ]
  .  [  0     0        0       ...     1      ]

where column 1 corresponds to v, columns 2, ..., m + 1 to the variables x_ij, the first n rows are the flow constraints and the last m rows the capacity constraints, and a_{i,j} = 1, −1 or 0 for 2 ≤ i ≤ m + 1, 1 ≤ j ≤ n. This matrix is actually the transpose of the coefficient matrix for the Min-cut LP that we see below in equation 3.3.6. The objective function of the Min-cut is also the same as the function in equation 3.3.5. From this, it is easy to see that the dual of the Max-flow problem is Min-cut.
Now we shall formulate the linear program (T) for finding the minimum cut capacity as follows,

Minimize ∑_{(i,j)∈E} c_ij y_ij
Subject to −u_1 + u_n ≥ 1
           u_i − u_j + y_ij ≥ 0  ∀ (i, j) ∈ E
           u_i ∈ {0, 1}  ∀ i
           y_ij ∈ {0, 1}  ∀ i, j          (3.3.6)

The solution to the above LPP will result in a cut such that, corresponding to a cut (S, S̄) of the network N,

u_i = 0 if vertex i ∈ S,  u_i = 1 if vertex i ∈ S̄
y_ij = 1 if i ∈ S, j ∈ S̄,  y_ij = 0 otherwise          (3.3.7)
Now, relax the integrality constraints on u_i and y_ij. The resulting LPP (T′) will be

Minimize ∑_{(i,j)∈E} c_ij y_ij
Subject to −u_1 + u_n ≥ 1
           u_i − u_j + y_ij ≥ 0  ∀ (i, j) ∈ E
           u_i ≥ 0  ∀ i
           y_ij ≥ 0  ∀ i, j          (3.3.8)
In any optimal solution to T, (u_i, u_j) = (0, 0), (1, 0) or (1, 1) will imply y_ij = 0, and (u_i, u_j) = (0, 1) implies y_ij = 1; hence T will give the capacity of the cut (S, S̄).
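The interpretation (3.3.7) can be sketched in code: given the side S of a cut, set u and y as above and read off the objective ∑ c_ij y_ij as the cut capacity. The network below is a made-up example with s = 1 and t = 4:

```python
# Sketch of (3.3.7): from a cut side S, derive the dual variables u_i and
# y_ij, and evaluate the Min-cut objective sum c_ij * y_ij.

cap = {(1, 2): 3, (1, 3): 2, (2, 4): 2, (3, 4): 3, (2, 3): 1}
vertices = [1, 2, 3, 4]

def cut_value(S):
    u = {i: 0 if i in S else 1 for i in vertices}           # u_i per (3.3.7)
    y = {e: 1 if e[0] in S and e[1] not in S else 0 for e in cap}
    return sum(cap[e] * y[e] for e in cap)

print(cut_value({1}))        # 5 : edges (1,2) and (1,3) cross
print(cut_value({1, 2, 3}))  # 5 : edges (2,4) and (3,4) cross
```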
Now, consider the LPP (T′). Let B′ be the coefficient matrix of T′. Since B′ is totally unimodular and the right-hand side is integral, by Theorem 7 some optimal solution of T′ is integral, and it is also the optimal solution of T.

Now, by Theorem 6, the optimal values of R′ and T′ are equal; hence the value of the maximum flow equals the capacity of the minimum cut, and that cut is saturated. Hence proved.
∑_{v∈V} f_i(u, v) = 0  ∀ u ∈ V \ {s_i, t_i}  (conservation of flow).

The value of the flow of a commodity i is given by |f_i| = ∑_{w∈V} f_i(s_i, w).
Definition 4.1.3. (Concurrent Multi-commodity Flow Problem) Given a multi-commodity network along with demands D_1, D_2, ..., D_k corresponding to the k commodities, the objective is to assign flow to the commodities so as to maximize a fraction λ such that, for every commodity i, the value of the flow of the commodity |f_i| is at least λ D_i. The assignment should satisfy the following constraint along with the flow constraints,

∑_{i=1}^{k} f_i(u, v) ≤ c(u, v)  ∀ (u, v) ∈ E          (4.1.1)
4.2 Linear Programming Formulation

Below, we give the LP formulation for the Concurrent Multi-commodity Flow Problem (CMFP).

Maximize λ
Subject to ∑_{i=1}^{k} f_i(u, v) ≤ c(u, v)  ∀ (u, v) ∈ E
           ∑_{w∈V} f_i(u, w) = 0  ∀ 1 ≤ i ≤ k, u ∈ V \ {s_i, t_i}
           ∑_{w∈V} f_i(s_i, w) ≥ λ D_i  ∀ 1 ≤ i ≤ k          (4.2.1)
The above formulation is very intuitive and straightforward. We will now see another formulation of the same problem which uses paths. Let P represent the set of all non-trivial paths in the network and P_j be the set of paths corresponding to the commodity j (paths from s_j to t_j). Let x(π) be a variable corresponding to every path π ∈ P. Then the LPP formulation (primal) can be given as,

Maximize λ
Subject to ∑_{π : e∈π} x(π) ≤ c(e)  ∀ e ∈ E
           λ D_j − ∑_{π∈P_j} x(π) ≤ 0  ∀ 1 ≤ j ≤ k
           x(π) ≥ 0  ∀ π ∈ P          (4.2.2)
The dual problem for the above linear program can be interpreted as assigning weights z_j to the commodities and lengths y(e) to the edges such that, for any commodity j, the length of every path from s_j to t_j is at least z_j. The length of a path is given as the sum of the lengths of all the edges in that path. The LPP formulation for the dual is as follows,

Minimize ∑_{e∈E} c(e) y(e)
Subject to ∑_{e∈π} y(e) − z_j ≥ 0  ∀ π ∈ P_j, ∀ j
           ∑_{1≤j≤k} D_j z_j ≥ 1
           y(e) ≥ 0  ∀ e ∈ E
           z_j ≥ 0  ∀ j          (4.2.3)
We shall consider the example in Figure 4.1 with two commodities, say K_1 and K_2. Let the demands be D_1 and D_2 respectively. First, the primal formulation of the problem using paths is given. We shall then derive the dual formulation of the problem.
Figure 4.1: An example of a two-commodity flow network with unit demands on the commodities.
In the example figure, there are two paths from source to sink for each of the commodities K_1 and K_2. The edges are labeled e_1, e_2, ..., e_8. Let us name the paths with respect to edges as follows,

π_1 = e_1 e_2 e_3 e_4
π_2 = e_1 e_6 e_7 e_4
π_3 = e_5 e_6 e_7 e_8
π_4 = e_5 e_2 e_3 e_8          (4.2.4)
The paths π_1, π_2 belong to commodity K_1 and the paths π_3, π_4 belong to commodity K_2. The LP formulation for the example is as follows,
Maximize λ
Subject to x(π_1) + x(π_2) ≤ c(e_1)
           x(π_1) + x(π_4) ≤ c(e_2)
           x(π_1) + x(π_4) ≤ c(e_3)
           x(π_1) + x(π_2) ≤ c(e_4)
           x(π_3) + x(π_4) ≤ c(e_5)
           x(π_3) + x(π_2) ≤ c(e_6)
           x(π_3) + x(π_2) ≤ c(e_7)
           x(π_3) + x(π_4) ≤ c(e_8)
           λ D_1 − x(π_1) − x(π_2) ≤ 0
           λ D_2 − x(π_3) − x(π_4) ≤ 0
           x(π_i) ≥ 0  ∀ 1 ≤ i ≤ 4          (4.2.5)
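The edge constraints of (4.2.5) can be generated mechanically from the path-edge incidence in (4.2.4); the sketch below does exactly that (the names pi1, ..., pi4 stand for π_1, ..., π_4):

```python
# Generate the edge-capacity constraints of (4.2.5) from the path-edge
# incidence of (4.2.4): for each edge e, the paths containing e share c(e).

paths = {
    'pi1': ['e1', 'e2', 'e3', 'e4'],
    'pi2': ['e1', 'e6', 'e7', 'e4'],
    'pi3': ['e5', 'e6', 'e7', 'e8'],
    'pi4': ['e5', 'e2', 'e3', 'e8'],
}

constraints = {}
for name, edges in paths.items():
    for e in edges:
        constraints.setdefault(e, []).append(name)

for e in sorted(constraints):
    lhs = ' + '.join(f'x({p})' for p in constraints[e])
    print(f'{lhs} <= c({e})')
# which prints, for example,
#   x(pi1) + x(pi2) <= c(e1)
#   x(pi1) + x(pi4) <= c(e2)
```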
Now, multiply the constraints by the dual variables (y(e) and z_j) on both sides.
[ x(π_1) + x(π_2) ≤ c(e_1) ] · y(e_1)
[ x(π_1) + x(π_4) ≤ c(e_2) ] · y(e_2)
[ x(π_1) + x(π_4) ≤ c(e_3) ] · y(e_3)
[ x(π_1) + x(π_2) ≤ c(e_4) ] · y(e_4)
[ x(π_3) + x(π_4) ≤ c(e_5) ] · y(e_5)
[ x(π_3) + x(π_2) ≤ c(e_6) ] · y(e_6)
[ x(π_3) + x(π_2) ≤ c(e_7) ] · y(e_7)
[ x(π_3) + x(π_4) ≤ c(e_8) ] · y(e_8)
[ λ D_1 − x(π_1) − x(π_2) ≤ 0 ] · z_1
[ λ D_2 − x(π_3) − x(π_4) ≤ 0 ] · z_2          (4.2.6)
By combining the equations on the left-hand and the right-hand sides we get the following inequality,

[ x(π_1) + x(π_2) ] y(e_1) + [ x(π_1) + x(π_4) ] y(e_2) +
[ x(π_1) + x(π_4) ] y(e_3) + [ x(π_1) + x(π_2) ] y(e_4) +
[ x(π_3) + x(π_4) ] y(e_5) + [ x(π_3) + x(π_2) ] y(e_6) +
[ x(π_3) + x(π_2) ] y(e_7) + [ x(π_3) + x(π_4) ] y(e_8) +
[ λ D_1 − x(π_1) − x(π_2) ] z_1 +
[ λ D_2 − x(π_3) − x(π_4) ] z_2
≤ ∑_{i=1}^{8} c(e_i) y(e_i)          (4.2.7)
Now, represent the inequality in terms of the x(π),

[ y(e_1) + y(e_2) + y(e_3) + y(e_4) − z_1 ] x(π_1) +
[ y(e_1) + y(e_6) + y(e_7) + y(e_4) − z_1 ] x(π_2) +
[ y(e_5) + y(e_6) + y(e_7) + y(e_8) − z_2 ] x(π_3) +
[ y(e_5) + y(e_2) + y(e_3) + y(e_8) − z_2 ] x(π_4) +
[ D_1 z_1 + D_2 z_2 ] λ
≤ ∑_{i=1}^{8} c(e_i) y(e_i)          (4.2.8)
Now, with respect to the objective function of the primal, the dual can be formulated as below,

Minimize ∑_{i=1}^{8} c(e_i) y(e_i)
Subject to y(e_1) + y(e_2) + y(e_3) + y(e_4) − z_1 ≥ 0
           y(e_1) + y(e_6) + y(e_7) + y(e_4) − z_1 ≥ 0
           y(e_5) + y(e_6) + y(e_7) + y(e_8) − z_2 ≥ 0
           y(e_5) + y(e_2) + y(e_3) + y(e_8) − z_2 ≥ 0
           D_1 z_1 + D_2 z_2 ≥ 1
           y(e_i) ≥ 0  ∀ 1 ≤ i ≤ 8
           z_j ≥ 0  ∀ 1 ≤ j ≤ k          (4.2.9)
This is the resulting LPP formulation for the dual of the example we considered in Figure 4.1. The dual we have given in equation 4.2.3 is a generalization of this resulting formulation.

The multi-commodity flow problem is very well studied in combinatorics. Unlike single-commodity flow, the structural properties of this problem are not well understood when the number of commodities is greater than two (k > 2). The problem can be solved in polynomial time using linear programming. However, the problem of finding an integer flow is NP-complete when k ≥ 2.
Chapter 5
Conclusion

In this thesis, we reviewed the classical Max-flow Min-cut theorem and its proof using the Ford-Fulkerson algorithm. We have also presented König's theorem and Menger's theorem as consequences of the Max-flow Min-cut theorem. While these proofs are very well established, they are proved from an algorithmic perspective. In the third chapter, we have presented proofs for König's theorem and the Max-flow Min-cut theorem using a completely different technique, based on the total unimodularity property of the coefficient matrix in their linear program formulations. Finally, we have briefly discussed multi-commodity flow and the Concurrent Multi-commodity Flow Problem (CMFP).

Although these results and the formulations are not new, an attempt was made to present complete proofs for the results from first principles, as the material does not seem to be consolidated and presented elsewhere in an easily accessible form.

Many more primal-dual relations exist in graph theory, and the approach generalizes to investigating these relations and discovering LP-based proofs for such min-max relations. Hence, this approach is a general tool, and the results presented here are just sample cases.

This study gives an insight into different techniques and tools that can be used to prove primal-dual relations in graph theory. Though we have not established any new results, the treatment given in this thesis gives us good scope for applying these techniques in order to establish new results in both graph theory and combinatorics. We have also attempted to apply a technique called Lagrangian relaxation [7] from linear programming to some of these relations in order to gain some insight into its effectiveness, but that work is not included in this thesis as it did not yield insightful results. A possibility is to apply different techniques in combination and investigate the outcome, which could lead to interesting observations.
Bibliography

[1] Ron Aharoni and Eli Berger. Menger's theorem for infinite graphs. Inventiones Mathematicae, 176(1):1-62, 2009.
[2] A. Chandra Babu, P.V. Ramakrishnan, and C.R. Seshan. New proofs of König-Egerváry theorem and maximal flow-minimal cut capacity theorem using O.R. techniques. August 1990.
[3] Thomas Cormen. Introduction to Algorithms. The MIT Press, Cambridge, Mass., 2nd edition, 2001.
[4] Thomas Cormen. Introduction to Algorithms. The MIT Press, Cambridge, Mass., 2nd edition, 2001.
[5] L. R. Ford and D. R. Fulkerson. Maximal flow through a network. Canadian Journal of Mathematics, 8:399-404, January 1956.
[6] Anders Forsgren. An elementary proof of optimality conditions for linear programming. Technical Report TRITA-MAT-2008-OS6, Department of Mathematics, Royal Institute of Technology (KTH), SE-100 44 Stockholm, Sweden, June 2008.
[7] Arthur M. Geoffrion. Lagrangian relaxation for integer programming. In Michael Jünger, Thomas M. Liebling, Denis Naddef, George L. Nemhauser, William R. Pulleyblank, Gerhard Reinelt, Giovanni Rinaldi, and Laurence A. Wolsey, editors, 50 Years of Integer Programming 1958-2008, pages 243-281. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010.
[8] Alon Itai. Two-commodity flow. Journal of the ACM, 25(4):596-611, October 1978.
[9] Dénes Kőnig. Gráfok és alkalmazásuk a determinánsok és a halmazok elméletére [Graphs and their application to the theory of determinants and sets]. Matematikai és Természettudományi Értesítő, 34:104-119, 1916.
[10] Eugene Lawler. Combinatorial Optimization: Networks and Matroids, sections 4.5 (Combinatorial Implications of Max-Flow Min-Cut Theorem) and 4.6 (Linear Programming Interpretation of Max-Flow Min-Cut Theorem), pages 117-120. Dover, 2001.
[11] Eugene L. Lawler. Combinatorial Optimization: Networks and Matroids. Courier Dover Publications, March 2001.
[12] Tom Leighton and Satish Rao. Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. Journal of the ACM, 46(6):787-832, November 1999.
[13] László Lovász and M. D. Plummer. Matching Theory. North-Holland / Elsevier Science Publishers B.V., Amsterdam; New York, 1986.
[14] Christos Papadimitriou. Combinatorial Optimization: Algorithms and Complexity. Dover Publications, Mineola, N.Y., 1998.
[15] Farhad Shahrokhi and D. W. Matula. The maximum concurrent flow problem. Journal of the ACM, 37(2):318-334, April 1990.
[16] Douglas B. West. Introduction to Graph Theory. Prentice Hall, 2nd edition, September 2000.