Solution Manual For Introduction To Optimization 5th Edition by Edwin Chong

This solution manual offers step-by-step answers to optimization problems covered in the 5th Edition of Introduction to Optimization by Edwin Chong. It is an excellent companion for learners aiming to strengthen their skills in optimization techniques and algorithms.


Part I: Mathematical Review

1. Methods of Proof and Some Notation


1.1
 A   B   not A   not B   A ⇒ B   (not B) ⇒ (not A)
 F   F     T       T       T             T
 F   T     T       F       T             T
 T   F     F       T       F             F
 T   T     F       F       T             T
1.2
 A   B   not A   not B   A ⇒ B   not (A and (not B))
 F   F     T       T       T              T
 F   T     T       F       T              T
 T   F     F       T       F              F
 T   T     F       F       T              T
1.3
 A   B   not (A and B)   not A   not B   (not A) or (not B)
 F   F         T           T       T             T
 F   T         T           T       F             T
 T   F         T           F       T             T
 T   T         F           F       F             F
1.4
 A   B   A and B   A and (not B)   (A and B) or (A and (not B))
 F   F      F            F                      F
 F   T      F            F                      F
 T   F      F            T                      T
 T   T      T            F                      T
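The four equivalences above can also be checked mechanically. The following Python sketch is not part of the original solutions (the helper name implies is ours); it enumerates every truth assignment and confirms each table:

from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is True and q is False.
    return (not p) or q

for A, B in product([False, True], repeat=2):
    # 1.1: (A => B) is equivalent to ((not B) => (not A)).
    assert implies(A, B) == implies(not B, not A)
    # 1.2: (A => B) is equivalent to not (A and (not B)).
    assert implies(A, B) == (not (A and (not B)))
    # 1.3: not (A and B) is equivalent to (not A) or (not B).
    assert (not (A and B)) == ((not A) or (not B))
    # 1.4: (A and B) or (A and (not B)) reduces to A, as the last column shows.
    assert ((A and B) or (A and (not B))) == A

print("All four equivalences hold for every truth assignment.")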
1.5
The cards that you should turn over are 3 and A. The remaining cards are
irrelevant to ascertaining the truth or falsity of the rule. The card with S is
irrelevant because S is not a vowel. The card with 8 is not relevant because
the rule does not say that if a card has an even number on one side, then it
has a vowel on the other side.
Turning over the A card directly verifies the rule, while turning over the 3
card verifies the contrapositive.
1.6
If n is even, then we can represent it as n = 2k for some integer k. Then, we have

    5n + 2 = 5(2k) + 2 = 2(5k + 1) = 2m,

where m = 5k + 1 is an integer, and therefore, 5n + 2 is even, which completes
the proof.
1.7
It is difficult, to say the least, to prove this statement using direct proof. On
the other hand, it is easy to prove it using proof by contraposition. Indeed,
we start with an equivalent, contrapositive, statement: If n is not odd, then
n^2 is not odd. Equivalently, if n is even, then n^2 is even. If n is even, then
we can represent it as n = 2k for some integer k. Then, we have

    n^2 = (2k)^2 = 2(2k^2) = 2m,

where m = 2k^2 is an integer, and therefore, n^2 is even, which completes the
proof using a proof by contraposition.
Now we attempt to prove the statement using contradiction. For this, we
assume that n^2 is odd and n is not odd, and then we derive a contradiction.
Equivalently, n^2 is odd and n is even. Because n is even, we can represent it
as n = 2k for some integer k. Then, we have

    n^2 = (2k)^2 = 2(2k^2) = 2m,

where m = 2k^2 is an integer, and therefore n^2 is even, which is a contradiction
to our assumption that n^2 is odd.
1.8
We use induction on n. The statement is true for n = 1 since

    11^1 - 6 = 11 - 6 = 5

is divisible by 5. Let P(n) be the statement that 11^n - 6 is divisible by 5.
To proceed, we assume that P(k) is true (this is the induction hypothesis). We
next show that P(k + 1) is true. Note that if P(k) is true, then 11^k - 6 = 5m
for some integer m. That is, 11^k = 5m + 6. Then, we have

    11^(k+1) - 6 = 11(11^k) - 6 = 11(5m + 6) - 6 = 55m + 60 = 5(11m + 12),

where 11m + 12 is an integer. Therefore, P(k + 1) is true, which completes
the proof.
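As a quick numerical sanity check of the induction result (not a substitute for the proof), one can verify the divisibility directly in Python:

for n in range(1, 20):
    assert (11**n - 6) % 5 == 0   # matches P(n): 11^n - 6 is divisible by 5
print("11**n - 6 is divisible by 5 for n = 1, ..., 19")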
1.9
1. There are eight possible subsets:

    ∅, {2}, {3}, {{4, 3}}, {2, 3}, {2, {4, 3}}, {3, {4, 3}}, {2, 3, {4, 3}}.

2. There are four possible subsets: ∅, {7}, {{∅}}, {7, {∅}}.
3. There is only one subset: ∅.
4. There are two possible subsets: ∅, {∅}.
This exercise highlights the difference between sets and their elements. Con-
fusing the two constitutes a category error. For example, ∅ is the empty set,
but {∅} is a set containing one element: ∅ (as an object).
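To illustrate part 1 concretely, the following Python sketch (ours, not part of the original solution; frozenset stands in for the element {4, 3}) enumerates the subsets of {2, 3, {4, 3}} and confirms there are eight:

from itertools import combinations

# frozenset({4, 3}) plays the role of the element {4, 3}.
S = [2, 3, frozenset({4, 3})]

subsets = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]
print(len(subsets))   # 8, matching the count above
for s in subsets:
    print(s)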
1.10
The first statement is false because {4, 3} ⊄ {3, {4, 3}}. On the other hand,

    {{4, 3}} ⊂ {3, {4, 3}}.


1.11
Applying DeMorgan's law to the second statement gives: not ((not A) or (not
B)) = (not (not A)) and (not (not B)) = A and B, which is the first statement.
1.12
We construct the truth table:
 A   B   not B   A and (not B)   B and (not B)   (A and (not B)) ⇒ (B and (not B))   A ⇒ B
 F   F     T           F               F                          T                     T
 F   T     F           F               F                          T                     T
 T   F     T           T               F                          F                     F
 T   T     F           F               F                          T                     T
The columns corresponding to the statements in question agree, and therefore,
the two statements are logically equivalent.

2. Vector Spaces and Matrices


2.1
We show this by contradiction. Suppose n < m. Then, the number of columns
of A is n. Since rank A is the maximum number of linearly independent
columns of A, then rank A cannot be greater than n < m, which contradicts
the assumption that rank A = m.
2.2
⇒: Since there exists a solution, then by Theorem 2.1, rank A = rank[A, b].
So, it remains to prove that rank A = n. For this, suppose that rank A < n
(note that it is impossible for rank A > n since A has only n columns). Hence,
there exists y ∈ R^n, y ≠ 0, such that Ay = 0 (this is because the columns of
A are linearly dependent, and Ay is a linear combination of the columns of
A). Let x be a solution to Ax = b. Then clearly x + y ≠ x is also a solution.
This contradicts the uniqueness of the solution. Hence, rank A = n.
⇐: By Theorem 2.1, a solution exists. It remains to prove that it is unique.
For this, let x and y be solutions, i.e., Ax = b and Ay = b. Subtracting, we
get A(x - y) = 0. Since rank A = n and A has n columns, then x - y = 0
and hence, x = y, which shows that the solution is unique.
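The rank criterion can be illustrated numerically. The sketch below uses a small example matrix of our own choosing (not from the text) together with numpy's matrix_rank:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])               # m = 3 equations, n = 2 unknowns
b = A @ np.array([1.0, -1.0])            # b chosen to lie in the range of A

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)                   # both equal n = 2, so the solution is unique
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                                 # recovers [1, -1]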
2.3
Consider the vectors āi = [1, ai^T]^T ∈ R^(n+1), i = 1, ..., k. Since k ≥ n + 2, the
vectors ā1, ..., āk must be linearly dependent in R^(n+1). Hence, there
exist α1, ..., αk, not all zero, such that

    Σ_{i=1}^k αi āi = 0.

The first component of the above vector equation is Σ_{i=1}^k αi = 0, while the
last n components have the form Σ_{i=1}^k αi ai = 0, completing the proof.

2.4
Let A = [a1, ..., an] be a given square matrix. Suppose that the columns are
linearly dependent. Then one of the columns, say a1, is a linear combination
of the others: a1 = α2 a2 + ... + αn an. Now repeatedly replace the first
column by adding -αi ai for i = 2, ..., n, so that the first column is replaced
by a1 - (α2 a2 + ... + αn an) = 0. Recall that the determinant of a matrix is
unchanged if we add a scalar multiple of one column to another column and
replace the other column with the result. Moreover, the determinant of any
matrix with a zero column is zero. Therefore, we deduce that det A = 0.
For the converse, suppose that the determinant is zero. Then, for λ = 0,
we have det[λI - A] = 0, which implies that λ = 0 is an eigenvalue of A.
Thus, there exists a nonzero vector x such that Ax = λx = 0, which implies
that a nonzero linear combination of the columns of A is zero. By definition
of linear independence, the columns are linearly dependent.
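A small numerical illustration of this equivalence (the example matrix is ours, not from the text) follows:

import numpy as np

a1 = np.array([1.0, 2.0, 3.0])
a2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([a1, a2, 2*a1 - a2])   # third column depends on the first two
print(np.linalg.det(A))                    # 0 up to floating-point roundoff
print(np.linalg.matrix_rank(A))            # 2 < 3, confirming linear dependence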
2.5
a. We first postmultiply M by the matrix

    [ I_k          O       ]
    [ -M_{m-k,k}   I_{m-k} ]

to obtain

    [ M_{m-k,k}   I_{m-k} ] [ I_k          O       ]   [ O         I_{m-k} ]
    [ M_{k,k}     O       ] [ -M_{m-k,k}   I_{m-k} ] = [ M_{k,k}   O       ].

Note that the determinant of the postmultiplying matrix is 1. Next we post-
multiply the resulting product by

    [ O         I_k ]
    [ I_{m-k}   O   ]

to obtain

    [ O         I_{m-k} ] [ O         I_k ]   [ I_{m-k}   O       ]
    [ M_{k,k}   O       ] [ I_{m-k}   O   ] = [ O         M_{k,k} ].

Notice that

    det M = det [ I_{m-k}   O       ]  det [ O         I_k ],
                [ O         M_{k,k} ]      [ I_{m-k}   O   ]

where

    det [ O         I_k ] = ±1.
        [ I_{m-k}   O   ]

The above easily follows from the fact that the determinant changes its sign
if we interchange columns, as discussed in Section 2.2. Moreover,

    det [ I_{m-k}   O       ] = det(I_{m-k}) det(M_{k,k}) = det(M_{k,k}).
        [ O         M_{k,k} ]

Hence,

    det M = ± det M_{k,k}.

b. We can see this in the following examples. We assume, without loss of
generality, that M_{m-k,k} = O and let M_{k,k} = 2. Thus k = 1. First consider
the case when m = 2. Then we have

    M = [ O         I_{m-k} ] = [ 0   1 ].
        [ M_{k,k}   O       ]   [ 2   0 ]

Thus,

    det M = -2 = det(-M_{k,k}).

Next consider the case when m = 3. Then,

        [ O         I_{m-k} ]       [ 0   1   0 ]
    det [ M_{k,k}   O       ] = det [ 0   0   1 ] = 2 ≠ det(-M_{k,k}).
                                    [ 2   0   0 ]

Therefore, in general,

    det M ≠ det(-M_{k,k}).

However, when k = m/2, that is, when all sub-matrices are square and of the
same dimension, then it is true that

    det M = det(-M_{k,k}).

See [147].
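The sign behavior derived above can be spot-checked numerically. The sketch below (the random blocks are our own, not from the text) confirms that det M = ± det M_{k,k} in general, and that det M = det(-M_{k,k}) when k = m/2:

import numpy as np

rng = np.random.default_rng(0)

def build_M(m, k):
    # M = [[M1, I_{m-k}], [M2, O]] with M1 of size (m-k) x k and M2 of size k x k.
    M1 = rng.standard_normal((m - k, k))
    M2 = rng.standard_normal((k, k))
    top = np.hstack([M1, np.eye(m - k)])
    bottom = np.hstack([M2, np.zeros((k, m - k))])
    return np.vstack([top, bottom]), M2

for m, k in [(2, 1), (3, 1), (6, 3), (8, 4)]:
    M, M2 = build_M(m, k)
    dM, dM2 = np.linalg.det(M), np.linalg.det(M2)
    print(m, k, np.isclose(abs(dM), abs(dM2)), end=" ")   # |det M| = |det M_{k,k}| always
    if m == 2 * k:
        print(np.isclose(dM, np.linalg.det(-M2)))         # det M = det(-M_{k,k}) when k = m/2
    else:
        print()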
2.6
Let

    M = [ A   B ]
        [ C   D ]

and suppose that each block is k × k. John R. Silvester [147] showed that if at
least one of the blocks is equal to O (the zero matrix), then the desired formula
holds. Indeed, if a row or column block is zero, then the determinant is equal to
zero, as follows from the determinant's properties discussed in Section 2.2. That
is, if A = B = O, or A = C = O, and so on, then obviously det M = 0. This
includes the case when any three or all four block matrices are zero matrices.
If B = O or C = O, then

    det M = det [ A   B ] = det(AD).
                [ C   D ]

The only case left to analyze is when A = O or D = O. We will show that
in either case,

    det M = det(-BC).

Without loss of generality suppose that D = O. Following arguments of
John R. Silvester [147], we premultiply M by the product of three matrices
whose determinants are unity:

    [ I_k   -I_k ] [ I_k   O   ] [ I_k   -I_k ] [ A   B ]   [ -C   O ]
    [ O      I_k ] [ I_k   I_k ] [ O      I_k ] [ C   O ] = [  A   B ].

Hence,

    det [ A   B ] = det [ -C   O ]
        [ C   O ]       [  A   B ]
                  = det(-C) det(B)
                  = det(-I_k) det(C) det(B).

Thus we have

    det [ A   B ] = det(-BC) = det(-CB).
        [ C   O ]
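A numerical spot-check of this formula, with random k × k blocks of our own choosing:

import numpy as np

rng = np.random.default_rng(1)
k = 3
A, B, C = (rng.standard_normal((k, k)) for _ in range(3))
M = np.block([[A, B], [C, np.zeros((k, k))]])   # D = O

print(np.isclose(np.linalg.det(M), np.linalg.det(-B @ C)))   # True
print(np.isclose(np.linalg.det(M), np.linalg.det(-C @ B)))   # True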
2.7
We represent the given system of equations in the form Ax = b, where

    A = [ 1   -1   2    1 ],   x = [ x1 ],   and   b = [ 1 ].
        [ 1    2   0   -1 ]        [ x2 ]             [ 2 ]
                                   [ x3 ]
                                   [ x4 ]

Using elementary row operations yields

    A = [ 1   -1   2    1 ]  →  [ 1   -1    2    1 ],   and
        [ 1    2   0   -1 ]     [ 0    3   -2   -2 ]

    [A, b] = [ 1   -1   2    1   1 ]  →  [ 1   -1    2    1   1 ],
             [ 1    2   0   -1   2 ]     [ 0    3   -2   -2   1 ]

from which rank A = 2 and rank[A, b] = 2. Therefore, by Theorem 2.1, the
system has a solution.
We next represent the system of equations as

    [ 1   -1 ] [ x1 ]   [ 1 - 2x3 - x4 ]
    [ 1    2 ] [ x2 ] = [ 2 + x4       ].

Assigning arbitrary values to x3 and x4 (x3 = d3, x4 = d4), we get

    [ x1 ]   [ 1   -1 ]^(-1) [ 1 - 2d3 - d4 ]
    [ x2 ] = [ 1    2 ]      [ 2 + d4       ]

           = (1/3) [  2   1 ] [ 1 - 2d3 - d4 ]
                   [ -1   1 ] [ 2 + d4       ]

           = [ (4 - 4d3 - d4)/3  ].
             [ (1 + 2d3 + 2d4)/3 ]

Therefore, a general solution is

    [ x1 ]   [ (4 - 4d3 - d4)/3  ]   [ -4/3 ]      [ -1/3 ]      [ 4/3 ]
    [ x2 ] = [ (1 + 2d3 + 2d4)/3 ] = [  2/3 ] d3 + [  2/3 ] d4 + [ 1/3 ],
    [ x3 ]   [ d3                ]   [   1  ]      [   0  ]      [  0  ]
    [ x4 ]   [ d4                ]   [   0  ]      [   1  ]      [  0  ]

where d3 and d4 are arbitrary values.
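The general solution can be verified numerically; the following check (ours, not part of the original solution) substitutes several choices of d3 and d4 into Ax = b:

import numpy as np

A = np.array([[1.0, -1.0, 2.0,  1.0],
              [1.0,  2.0, 0.0, -1.0]])
b = np.array([1.0, 2.0])

for d3, d4 in [(0.0, 0.0), (1.0, -2.0), (3.5, 0.7)]:
    x = (np.array([4/3, 1/3, 0.0, 0.0])
         + d3 * np.array([-4/3, 2/3, 1.0, 0.0])
         + d4 * np.array([-1/3, 2/3, 0.0, 1.0]))
    assert np.allclose(A @ x, b)
print("general solution verified for several choices of d3 and d4")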


2.8
1. Apply the definition of |-a|:

    |-a| = -a,      if -a > 0,
         = 0,       if -a = 0,
         = -(-a),   if -a < 0,

         = -a,   if a < 0,
         = 0,    if a = 0,
         = a,    if a > 0,

         = |a|.

2. If a ≥ 0, then |a| = a. If a < 0, then |a| = -a > 0 > a. Hence |a| ≥ a.
On the other hand, |-a| ≥ -a (by the above). Hence, -a ≤ |-a| = |a|
(by property 1).
3. We have four cases to consider. First, if a, b ≥ 0, then a + b ≥ 0. Hence,
|a + b| = a + b = |a| + |b|.
Second, if a, b ≤ 0, then a + b ≤ 0. Hence |a + b| = -(a + b) = -a - b =
|a| + |b|.
Third, if a ≥ 0 and b ≤ 0, then we have two further subcases:
1. If a + b ≥ 0, then |a + b| = a + b ≤ |a| + |b|.
2. If a + b ≤ 0, then |a + b| = -a - b ≤ |a| + |b|.
The fourth case, a ≤ 0 and b ≥ 0, is identical to the third case, with a and
b interchanged.
4. We first show |a - b| ≤ |a| + |b|. We have

    |a - b| = |a + (-b)|
            ≤ |a| + |-b|   by property 3
            = |a| + |b|    by property 1.

To show ||a| - |b|| ≤ |a - b|, we note that |a| = |a - b + b| ≤ |a - b| + |b|,
which implies |a| - |b| ≤ |a - b|. On the other hand, from the above we have
|b| - |a| ≤ |b - a| = |a - b| by property 1. Therefore, ||a| - |b|| ≤ |a - b|.
5. We have four cases. First, if a, b ≥ 0, we have ab ≥ 0 and hence
|ab| = ab = |a||b|. Second, if a, b ≤ 0, we have ab ≥ 0 and hence |ab| =
ab = (-a)(-b) = |a||b|. Third, if a ≥ 0, b ≤ 0, we have ab ≤ 0 and hence
|ab| = -ab = a(-b) = |a||b|. The fourth case, a ≤ 0 and b ≥ 0, is identical to
the third case, with a and b interchanged.
6. We have

    |a + b| ≤ |a| + |b|   by property 3
            ≤ c + d.

7. ⇒: By property 2, a ≤ |a| and -a ≤ |a|. Therefore, |a| < b implies
a ≤ |a| < b and -a ≤ |a| < b.
⇐: If a ≥ 0, then |a| = a < b. If a < 0, then |a| = -a < b.
For the case when "<" is replaced by "≤", we simply repeat the above
proof with "<" replaced by "≤".
8. This is simply the negation of property 7 (apply DeMorgan's Law).
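The properties proved above can be spot-checked on random samples; the sketch below (the sampling scheme is ours, not part of the proof) exercises properties 1 through 5:

import random

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(-a) == abs(a)                                 # property 1
    assert -abs(a) <= a <= abs(a)                            # property 2
    assert abs(a + b) <= abs(a) + abs(b) + 1e-12             # property 3 (small float slack)
    assert abs(abs(a) - abs(b)) <= abs(a - b) + 1e-12        # property 4
    assert abs(a * b) == abs(a) * abs(b)                     # property 5 (exact for floats)
print("all sampled values satisfy properties 1-5")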
2.9
Observe that we can represent ⟨x, y⟩_2 as

    ⟨x, y⟩_2 = x^T [ 2   3 ] y = (Qx)^T (Qy) = x^T Q^2 y,
                   [ 3   5 ]

where

    Q = [ 1   1 ].
        [ 1   2 ]

Note that the matrix Q = Q^T is nonsingular.
1. Now, ⟨x, x⟩_2 = (Qx)^T (Qx) = ‖Qx‖^2 ≥ 0, and

    ⟨x, x⟩_2 = 0  ⇔  ‖Qx‖^2 = 0
                  ⇔  Qx = 0
                  ⇔  x = 0,

since Q is nonsingular.
2. ⟨x, y⟩_2 = (Qx)^T (Qy) = (Qy)^T (Qx) = ⟨y, x⟩_2.
3. We have

    ⟨x + y, z⟩_2 = (x + y)^T Q^2 z
                 = x^T Q^2 z + y^T Q^2 z
                 = ⟨x, z⟩_2 + ⟨y, z⟩_2.

4. ⟨rx, y⟩_2 = (rx)^T Q^2 y = r x^T Q^2 y = r⟨x, y⟩_2.
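A brief numerical illustration (the random test vectors are ours) that Q^2 reproduces the defining matrix and that ⟨x, y⟩_2 = (Qx)^T (Qy):

import numpy as np

P = np.array([[2.0, 3.0],
              [3.0, 5.0]])     # the matrix defining <., .>_2
Q = np.array([[1.0, 1.0],
              [1.0, 2.0]])
assert np.allclose(Q @ Q, P)   # Q^2 reproduces the defining matrix

rng = np.random.default_rng(2)
x, y = rng.standard_normal(2), rng.standard_normal(2)
print(np.isclose(x @ P @ y, (Q @ x) @ (Q @ y)))   # True: <x, y>_2 = (Qx)^T (Qy)
print(x @ P @ x >= 0)                             # positivity of <x, x>_2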
2.10
We have ‖x‖ = ‖(x - y) + y‖ ≤ ‖x - y‖ + ‖y‖ by the Triangle Inequality.
Hence, ‖x‖ - ‖y‖ ≤ ‖x - y‖. On the other hand, from the above we have
‖y‖ - ‖x‖ ≤ ‖y - x‖ = ‖x - y‖. Combining the two inequalities, we obtain
|‖x‖ - ‖y‖| ≤ ‖x - y‖.
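A quick spot-check of this reverse triangle inequality on random vectors (the sampling is ours):

import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    assert abs(np.linalg.norm(x) - np.linalg.norm(y)) <= np.linalg.norm(x - y) + 1e-12
print("reverse triangle inequality holds on all samples")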
2.11
Let ε > 0 be given. Set δ = ε. Hence, if ‖x - y‖ < δ, then by Exercise 2.10,
|‖x‖ - ‖y‖| ≤ ‖x - y‖ < δ = ε.

3. Transformations