
Chapter 5 Optimization techniques

5.1 Introduction
 Almost all engineering design problems are multi-parameter problems.
 For example a solid object has a three-dimensional shape in addition to the material properties
relevant to the application.
 Material properties might include bulk mechanical properties, electromagnetic properties and
chemical properties in addition to surface treatments such as a passivation layer in
microelectronics and environmental protection coatings for pipelines.
 The simplest structure, a three-dimensional rectangular shaped piece of material, will require
engineering design decisions about the length, width and thickness of the object.
 In some cases the external dimensions of an object are determined by engineering standards
or previous commonly accepted sizes.

Example 5.1 Standard measures
 In the building industry the relative dimensions of bricks are similar world-wide.
 The USB (universal serial bus) connector is mandated by international standards.
 The design engineer must find the best design from this large array of independent variables.
 An engineer engaged in research might seek to find a design which satisfies all of many
different requirements relating to:
 Physical size;
 Construction/fabrication materials;
 Monetary cost and material availability;
 Environmental cost and material transport.

• Mathematically the cost of a structure such as a brick wall can be represented as a ‘cost function’ C which includes
all of the relevant components and the strength of the wall.
• The best design (the optimized design) will have the minimum cost function providing that a
minimum set of structural requirements is met.
• Optimization is the mathematical process of defining all of the parameters which will lead to
the best possible design taking into account all of the engineering, the monetary cost and the
environmental costs.
• Such calculations provide excellent opportunities for engineering research for both novice and
experienced engineering research teams.

• The cost function is calculated and compared with the cost function derived from the
alternative designs.
• Forward modelling is the process of calculating the cost for a set of known parameters.
• This method of approach can be quite slow as the research team attempts to fine tune the
design by changing the parameters to obtain an optimal result.
• The alternative is to use a mathematical method which solves the problem many times in a
logical, strategic manner to determine the best possible design. For each new design the cost
function is recalculated until the best possible result has been determined. This process is
referred to as inverse modelling or optimization.

 There will be range limits on all of the parameters as the optimization routine must not be
allowed to give answers where the parameters lie outside these ranges.
 For example the physical size of an object should not exceed the maximum handling capacity
of the user and the maximum fabrication length possible in the factory.
Example 5.2 Wider impacts on a cost function
 The design of a 400 km pipeline might ideally use a single pipe of length 400 km. However,
the manufacture, transport and positioning of a 400km length of pipe will be regarded as
impossible. The design team needs to consider a balance between the length of the individual
sections and the number of joins along the length of the pipeline.
 The design of an undersea optical fibre communications cable will have similar challenges to
the pipeline. Unlike the pipeline, an optical cable can be coiled so that larger lengths can be
transported easily.

5.2 Two-parameter optimization methods
 The methods commonly used to optimize a design problem are most simply illustrated if only
two parameters are involved.
 This allows the optimization process to be represented on three-dimensional plots where the
third dimension is the value of the cost function.
 These techniques are illustrated using a curve fitting task related to engineering.
 The equations used here apply to several engineering disciplines, such as the movement of pistons and pumps in
mechanical engineering, and of electrons in alternating current circuits and electromagnetic
waves.

 The position of a piston as a function of time, D(t), can be modelled using a sinusoidal squared
equation over the range t = 0 to t = π/2ω = 1:

D(t) = L sin²(ωt),    (5.1)

 where L is the amplitude of the movement (angular displacement or linear distance), ω is the
angular frequency of the movement and t is the time.
 Confining this example to linear movement, D is the distance and 0 < D < L = 1 for 0 < t < π/2ω.
This function is illustrated in Figure 5.1a.

Figure 5.1 Variation in (a) distance D, (b) velocity V and (c) acceleration A as a function of time
t for a linear displacement following Equations (5.1)–(5.3).

 Equation (5.1) satisfies the requirements that the distance travelled changes monotonically
from 0 to L, and the velocity is zero at the start and the end of the movement. Differentiating
Equation (5.1) gives the velocity V(t), where

V(t) = dD/dt = 2Lω cos(ωt) sin(ωt) = Lω sin(2ωt).    (5.2)

 At t = 0 and t = π/2ω, V(t) = 0. This is shown in Figure 5.1b. The acceleration A(t) is given by

A(t) = d²D/dt² = 2Lω² cos(2ωt).    (5.3)

 The function in Equation (5.3) is illustrated in Figure 5.1c.
 The acceleration can be measured using accelerometers attached to the implement or piston of
the car.
 From these acceleration measurements, a fitted curve will allow the determination of the
constants L and ω and so a complete interpretation of D(t) and V(t).
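For reference, the three curves in Figure 5.1 can be reproduced with a short MATLAB sketch of Equations (5.1)–(5.3). The values L = 1 and ω = π/2 (so that π/2ω = 1 s) are assumptions chosen only for illustration.

% Sketch of Equations (5.1)-(5.3), reproducing the shape of Figure 5.1.
L = 1;                        % amplitude of the movement (assumed)
w = pi/2;                     % angular frequency, so that pi/(2*w) = 1 s (assumed)
t = linspace(0, pi/(2*w), 200);

D = L*sin(w*t).^2;            % Equation (5.1): distance
V = L*w*sin(2*w*t);           % Equation (5.2): velocity
A = 2*L*w^2*cos(2*w*t);       % Equation (5.3): acceleration

subplot(3,1,1); plot(t, D); ylabel('D');
subplot(3,1,2); plot(t, V); ylabel('V');
subplot(3,1,3); plot(t, A); ylabel('A'); xlabel('t');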

 The objective is to find the best (optimal) values of L and ω, which are determined by
comparing the function given in Equation (5.3) with the measured data from the
accelerometer.
 The numerical value to be minimized is called the cost function. In this case the cost function
C(L, ω) is the root mean square error value (for an alternating electric current, the RMS value is
equal to the constant direct current that would produce the same power dissipation), given by

C(L, ω) = Σ_{i=1}^{N} (A_i − A(t_i))²,    (5.4)

 where A_i are the experimentally measured values at times t_i, A(t_i) are the values determined
using Equation (5.3) and N is the total number of sample points.
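As a concrete illustration of Equation (5.4), a minimal MATLAB sketch is given below. The helper name costC and the synthetic data vectors t_meas and A_meas are assumptions introduced here for illustration only; the later sketches in this chapter reuse the same helper.

% Minimal sketch of the cost function in Equation (5.4).
% t_meas, A_meas : measured sample times and accelerations (assumed given).
% L, w           : trial values of the amplitude L and angular frequency omega.
costC = @(L, w, t_meas, A_meas) ...
    sum((A_meas - 2*L*w^2*cos(2*w*t_meas)).^2);    % model from Equation (5.3)

% Example call with synthetic data (assumed values, 50 samples per second):
t_meas = (0:0.02:1).';
A_meas = 2*1.0*2.0^2*cos(2*2.0*t_meas) + 0.05*randn(size(t_meas));   % L = 1, omega = 2
C = costC(1.0, 2.0, t_meas, A_meas);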

 For this example, assume that the acceleration is sampled at 50 samples per second (i.e. at
0.02 s intervals).
 The experimental data are illustrated in Figure 5.2. As the initial time is not known, the data
will only be fitted to the two parameters if an offset time t0 is included in Equation (5.3). The
new equation becomes

A(t) = 2Lω² cos(2ω(t − t0)).    (5.5)

 The acceleration A at times t = 0 and t = π/2ω is not zero. This is physically not possible, as
all time derivatives of D(t) must be zero at t = 0 and t = π/2ω.
 There must be a lead time before the acceleration reaches its maximum values at the start and
the end of the movement.
 For simplicity, the monotonic portion of the acceleration will be used in this example.

Figure 5.2 Experimentally determined acceleration A(ti ) values from a linear movement. The
acceleration changes from negative to positive values. This indicates that the positive axis of the
accelerometer is in the opposite direction to the movement. The sample rate is 50 per second.

 In two-dimensional optimization t0 will be fixed and L and ω will be varied to minimize C.
 The optimization will be improved if t0 is also added to the list of parameters but for
illustration purposes only 2D optimization will be implemented in this example.
 Two-parameter optimization methods are readily adapted to three-parameter optimization.
 The cost function C must be calculated multiple times (Equation 5.4) using the estimated
values of L and ω, until a minimum C value is found.
 Optimization techniques can be either guided or unguided.
 The unguided techniques rely on a predefined strategy to determine the next set of
parameters L and ω. (In communications, ‘unguided’ media transport electromagnetic waves
without using a physical conductor; here the term refers only to the search strategy.)
 A value of C is then calculated and compared to the previous cost function values.

 The algorithm terminates when all predefined values have been tested.
 The values of L and ω which contribute to the minimum value of C are the optimized solution
to the problem.
 Guided optimization techniques use recently derived values of C to select (i.e. to guide) the
next set of parameters L and ω, and C is again determined.
 The next values of L and ω are determined from the previous values of C.
 The algorithm terminates when no further improvement in C can be achieved.
 This coincides with the optimal values of L and ω and the minimum value of C.

 There are many optimization strategies. In the following subsections, four techniques are
illustrated.
 Sequential uniform sampling is an exhaustive search method in which all possible
solutions are tested.
 The Monte Carlo method is a purely random search method.
 The simplex method and gradient methods are based on vector calculus concepts.
 The quality of the solution depends on a number of preset limiting conditions and the (as yet
unknown) shape of the cost function:
 The range of the parameters L and ω (called the solution space);
 The step size of L and ω (the resolution);
 The number of minimum values of C in the solution space.

 It is possible that there is more than one minimum value of C in the solution space.
 If so, there is more than one solution to the optimization routine.
 It is essential to locate the best possible solution which coincides with the minimum value of
C.
 If there is no minimum value in the solution space, then the range of parameters must be
increased or one must conclude that there is no optimal solution.
 When using a guided optimization routine, it is possible that the first solution obtained will
coincide with a minimum value of C which is not the overall minimum value of C in the
solution space. This is called a local minimum.
 For these reasons, all guided optimization routines should be run more than once using
randomly generated start values.

5.2.1 Sequential uniform sampling
 The most obvious approach to optimization is to test all possible values of L and ω.
 It is still necessary to choose the range limits for L and ω, a relatively small increment for
both L and ω, and sequentially step through all possible values of both parameters in the
solution space range.
 This technique is referred to as sequential uniform sampling (Figure 5.3).
 In order to explore and demonstrate the usefulness of the various algorithms, a full view of
the cost function over the parameter range of interest is presented.
 This is an outcome of the process of sequential uniform sampling.

 Figure 5.3 Sequential uniform sampling algorithm for two parameter optimization. C is the cost function
calculated from the RMS error between the fitted equation and the experimental data. The two
parameters to be optimized are L and ω. The routine checks C for all possible values exhaustively. The
optimum values of L and ω correspond to the minimum value of the cost function.
 The flow chart in Figure 5.3 outlines the algorithm used for sequential uniform sampling.
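A minimal MATLAB sketch of the flow chart in Figure 5.3 might look as follows. The ranges and step sizes for L and ω are assumed values, and costC, t_meas and A_meas are the illustrative helpers sketched earlier in this section, not part of the original text.

% Sequential uniform sampling: exhaustive grid search over L and omega.
Lvals = 0.5:0.01:1.5;          % assumed range and resolution for L
wvals = 1.0:0.01:3.0;          % assumed range and resolution for omega

Cbest = inf;
for Li = Lvals
    for wi = wvals
        C = costC(Li, wi, t_meas, A_meas);    % Equation (5.4)
        if C < Cbest
            Cbest = C; Lopt = Li; wopt = wi;  % keep the best design found so far
        end
    end
end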

5.2.2 Monte Carlo optimization


 The Monte Carlo optimization technique is based on random selection of parameter values
within the defined range. Figure 5.4 illustrates the algorithm.
 The number of random samples (Li and ωi) tested with the cost function depends on the
sensitivity and range of the parameters.
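A corresponding MATLAB sketch of the Monte Carlo search in Figure 5.4 follows; the parameter ranges and the number of random samples are assumed values, and costC is the illustrative helper sketched earlier in this section.

% Monte Carlo optimization: random sampling of L and omega within their ranges.
Lmin = 0.5; Lmax = 1.5;        % assumed range for L
wmin = 1.0; wmax = 3.0;        % assumed range for omega
Nsamples = 5000;               % assumed (predefined) total number of samples

Cbest = inf;
for k = 1:Nsamples
    Li = Lmin + (Lmax - Lmin)*rand;           % random trial value of L
    wi = wmin + (wmax - wmin)*rand;           % random trial value of omega
    C  = costC(Li, wi, t_meas, A_meas);
    if C < Cbest
        Cbest = C; Lopt = Li; wopt = wi;
    end
end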

Figure 5.4 Monte Carlo sampling algorithm for two-parameter optimization. C is the cost
function calculated from the RMS error between the fitted equation and the experimental data.
The two parameters to be optimized are L and ω. The routine checks C for the randomly chosen
values of each parameter until the predefined total number of samples has been completed. The
optimum values correspond to the minimum value of the cost function.
5.2.3 Simplex optimization method
 This optimization method employs a directed strategy to determine the minimum value of the
cost function.
 The start point for L and ω in the range is selected randomly.
 The cost function for two adjacent points is calculated.
 The first is a δω step in the ω direction from the start point.
 The second is a δL step in the L direction from the start point.
 These three adjacent points are called a simplex, see Figure 5.5.
 In three-dimensional analysis the simplex has four points and in an N-dimensional problem
the simplex has N+1 points.

Figure 5.5 A simplex triangle defined in the two-dimensional solution space by the points (Li, ωj),
(Li+1, ωj) and (Li, ωj+1) is shown by the black dots. Depending on the cost function values at each of
these three points, the point with the maximum cost function is replaced by one of the open-circle
positions.

 The path towards the optimal solution is guided by comparing the cost functions for each of
the three adjacent points in the simplex.
 From this information, a new point is selected by stepping in the direction of the minimum
C value (away from the position of maximum C in the simplex). This new point is added to
the simplex and the point with the highest C value is eliminated.
 This forms a new three point simplex. The process is repeated until no reduction in C is
observed. The algorithm can be written as follows:
If C(Li, ωj) > C(Li+1, ωj) > C(Li, ωj+1) then (Li, ωj) → (Li, ωj+2);
If C(Li, ωj) > C(Li, ωj+1) > C(Li+1, ωj) then (Li, ωj) → (Li+2, ωj);
If C(Li+1, ωj) > C(Li, ωj) > C(Li, ωj+1) then (Li+1, ωj) → (Li−1, ωj+2);
If C(Li+1, ωj) > C(Li, ωj+1) > C(Li, ωj) then (Li+1, ωj) → (Li−1, ωj);
If C(Li, ωj+1) > C(Li, ωj) > C(Li+1, ωj) then (Li, ωj+1) → (Li+2, ωj−1);
If C(Li, ωj+1) > C(Li+1, ωj) > C(Li, ωj) then (Li, ωj+1) → (Li, ωj−1).

 The cost function is calculated for the new position on the L, ω axes. The process is repeated
until no reduction in the cost function is possible.
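The six update rules above amount to reflecting the vertex with the largest cost function through the vertex with the smallest cost function. A minimal MATLAB sketch of this fixed-step simplex is shown below; the start point, step sizes, iteration cap and the costC helper from Section 5.2 are assumptions for illustration.

% Fixed-step simplex sketch: reflect the worst vertex through the best vertex.
dL = 0.01; dw = 0.01;                        % assumed step sizes delta-L and delta-omega
S  = [1.2 2.5; 1.2+dL 2.5; 1.2 2.5+dw];      % initial simplex, rows are [L w] (assumed start)

for iter = 1:1000                            % iteration cap for safety
    C = arrayfun(@(k) costC(S(k,1), S(k,2), t_meas, A_meas), 1:3);
    [~, iworst] = max(C);                    % vertex with the maximum cost function
    [~, ibest]  = min(C);                    % vertex with the minimum cost function
    newpt = 2*S(ibest,:) - S(iworst,:);      % step away from the worst vertex
    if costC(newpt(1), newpt(2), t_meas, A_meas) >= C(iworst)
        break;                               % no further reduction in C is possible
    end
    S(iworst,:) = newpt;                     % replace the worst vertex, forming a new simplex
end
[~, ibest] = min(arrayfun(@(k) costC(S(k,1), S(k,2), t_meas, A_meas), 1:3));
Lopt = S(ibest,1); wopt = S(ibest,2);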

5.2.4 Gradient optimization method
 This method is similar to the simplex method, but takes a slightly more direct route to the
solution by placing more emphasis on the relative values of the cost function at the three points
of the simplex.
 The slope of the path through the solution space is calculated using the gradient vector ∇C,
defined by:

∇C = (∂C/∂L) x + (∂C/∂ω) y,    (5.6)

 where x is the unit vector parallel to the L axis and y is the unit vector parallel to the ω axis.
 This straight-line path is the line of steepest descent. The slope of the line, m, is given by:

m = (∂C/∂ω) / (∂C/∂L).    (5.7)

 The next position in the solution space is given by the new value of L and the equation:

ω = m(L − Lmax) + ωmax.    (5.8)

 The start point for the initial simplex is randomly selected. The position of the maximum C
value in the simplex is denoted (Lmax, ωmax).
 The cost function C is then calculated at successive positions along the line defined by
Equation (5.8).
 The path follows the straight line defined by the gradient ∇C in Equation (5.6). When
C ceases to decrease along this path, the simplex is again defined at that minimum point and a
new gradient vector is calculated.
 The process continues until no further reduction in C is obtained. The values of L and ω
identified at this point form the best solution.
 It is possible that the path can become trapped in a local cost function minimum.
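A minimal MATLAB sketch of this strategy, estimating ∇C with finite differences over the simplex spacing and walking along the line of steepest descent until C stops decreasing, is given below. The start point, increments and step length are assumed values, and costC is the illustrative helper from Section 5.2.

% Gradient (steepest descent) sketch for C(L, omega) using finite differences.
dL = 0.01; dw = 0.01; step = 0.02;           % assumed increments and step length
Lc = 1.2;  wc = 2.5;                         % assumed (randomly selected) start point

for outer = 1:100
    C0 = costC(Lc, wc, t_meas, A_meas);
    gL = (costC(Lc+dL, wc, t_meas, A_meas) - C0)/dL;    % estimate of dC/dL
    gw = (costC(Lc, wc+dw, t_meas, A_meas) - C0)/dw;    % estimate of dC/domega
    g  = [gL gw];
    if norm(g) == 0, break; end
    dirn = -g/norm(g);                       % direction of steepest descent
    improved = false;
    for s = 1:1000                           % walk along the line until C stops decreasing
        Ltry = Lc + step*dirn(1);  wtry = wc + step*dirn(2);
        if costC(Ltry, wtry, t_meas, A_meas) >= costC(Lc, wc, t_meas, A_meas)
            break;
        end
        Lc = Ltry;  wc = wtry;  improved = true;
    end
    if ~improved, break; end                 % no reduction along the new gradient: stop
end
Lopt = Lc; wopt = wc;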

 As with the simplex method outlined in Section 5.2.3, not all solutions result in locating the
position of the minimum value of the cost function in the solution space.
 It is therefore essential to run the algorithm a number of times using randomly located start
positions. This can reduce the likelihood of the algorithm terminating in a local
minimum.
 The optimal solution is that solution where the cost function is a minimum.
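One simple way to follow this advice is a multi-start wrapper such as the MATLAB sketch below. Here guided_opt is a hypothetical stand-in for any of the guided routines sketched earlier (for example the gradient descent sketch), and the parameter ranges and number of restarts are assumed values.

% Multi-start wrapper: rerun a guided optimizer from random start points and keep the best.
% guided_opt is an assumed handle of the form [Lopt, wopt, Copt] = guided_opt(L0, w0);
% it is not a built-in MATLAB function.
Lmin = 0.5; Lmax = 1.5; wmin = 1.0; wmax = 3.0;   % assumed parameter ranges
Nstarts = 20;                                     % assumed number of random restarts

Cbest = inf;
for k = 1:Nstarts
    L0 = Lmin + (Lmax - Lmin)*rand;               % randomly generated start values
    w0 = wmin + (wmax - wmin)*rand;
    [Lk, wk, Ck] = guided_opt(L0, w0);
    if Ck < Cbest
        Cbest = Ck; Lbest = Lk; wbest = wk;       % keep the lowest cost function found
    end
end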

5.3 Multi-parameter optimization methods
 The four methods outlined in Section 5.2 have demonstrated the algorithmic details for a
two-dimensional investigation.
 The routines can all be modified simply to allow for an additional set of parameters to be
optimized simultaneously.
 For example the offset parameter t0 should be included to obtain a more precise fit to the
theoretical relationship given by Equation (5.1).
 All optimization algorithms can also be used for such multi-parameter investigations.

Example 5.3 Curve fitting
The electromagnetic surface impedance is a function of the depth and conductivity of
horizontally stratified layers. In the simplest case of a single layer above an infinitely deep earth
half-space, there are three unknowns: the conductivity of the upper layer, the conductivity of the
lower layer and the depth of the lower layer. The complex surface impedance measured at a
single frequency only provides two measurements – the real and imaginary parts of the
impedance. Given the depth of penetration of the radiation and the limits on the conductivity it
is possible to deduce the three parameters from the complex surface impedance using
optimization.

5.4 The cost function
 The selection of the cost function is of critical importance to all optimization algorithms.
 In the examples given in this chapter, the cost function was a single number – the RMS error
between a set of experimental data and a theoretical function.
 Consider the case of a design engineer seeking to minimize the weight W and maximize the
strength S of a structural beam. The cost function might be defined as:

C = w1 W + w2 / S,    (5.9)

 where w1 and w2 are the weighting factors used to balance the importance of the two
parameters.
 The weights can be modified to change the emphasis on one parameter relative to the other.

 It also allows the cost function to have no units as w1 will have the units of inverse W and w2
will have the units of S.
 With most structural beam designs, an increase in strength is accompanied by an increase in
weight.
 With this type of dependency, it is possible to define a set of optimal solutions, rather than one
unique solution.
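As a small numerical illustration of Equation (5.9), the sketch below evaluates the weighted cost for a few candidate beam designs; all values, units and weighting factors are invented for illustration only.

% Weighted cost function of Equation (5.9): C = w1*W + w2/S.
W  = [120 150 180];        % candidate beam weights in kg (assumed values)
S  = [30  45  55];         % corresponding strengths in kN (assumed values)
w1 = 1/100;                % weighting factor with units 1/kg (assumed)
w2 = 40;                   % weighting factor with units kN (assumed)

C = w1*W + w2./S;          % cost function for each candidate design
[Cmin, ibest] = min(C);    % design with the minimum cost function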
Example 5.4 Cost function definition
The design of radio frequency identification tags (RFID) for mass production of more than
10M per production run is aimed at optimizing the design for minimum cost D, adequate
material properties M, minimum size S, lower frequency f, and minimal environmental impact E
assuming disposal in a landfill site. The cost function C might be written as:
C  w1 D  w2 M  w3 S  w4 E ,
where w1 , w2 , w3, and w4 are the weighting factors.
5.5 Least Square Optimization
 Consider the set of linear equations

Ax = b,    (5.10)

where A ∈ R^(m×n), x ∈ R^n, b ∈ R^m, and rank A = n.
 If b is in the range space of A and A is square and nonsingular (m = n), then the unique
solution is x = A^(-1) b.
 However, suppose that b is not in the range space of A. The equation is then said to be
inconsistent and it has no solution.
 In this case, instead of searching for a solution of Ax = b, it is of interest to find a point x*
such that ||Ax* − b||² ≤ ||Ax − b||² for all x ∈ R^n. In other words, the problem reduces to:

min_x ||Ax − b||².    (5.11)

 This problem is called the least squares optimization problem, and the optimal point x* is
called the least squares solution.

 The solution of the least squares optimization problem, i.e. the unique point that
minimizes ||Ax − b||², is given by

x* = (A^T A)^(-1) A^T b.

 Therefore, x* = (A^T A)^(-1) A^T b satisfies the necessary and sufficient conditions of
optimality and it is the minimum point.
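For example, the closed-form solution can be checked in MATLAB against the backslash operator; the small A and b below are arbitrary illustrative values, not data from the text.

% Least squares solution x* = (A'*A)^(-1)*A'*b for an overdetermined system.
A = [1 1; 1 2; 1 3; 1 4];       % arbitrary 4-by-2 example with rank A = 2
b = [2; 3; 5; 7];

x_normal = (A'*A)\(A'*b);       % normal equations form of x* = (A^T A)^(-1) A^T b
x_mldiv  = A\b;                 % backslash returns the same least squares solution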
Example 5.5
Consider a wireless communication channel in Fig. 5.6 with transmit power pt. The received
power pr in decibels (dB) is modelled as:

pr = pt + K − 10γ log10(d),

where K is a constant depending on the radio frequency and antenna gains, γ is the path loss
exponent, and d (in metres) is the distance between the transmitter and the receiver. For the set of
empirical measurements of pr − pt in dB given in Table 5.1, find the constants K and γ that
minimize the mean square error between the model and the empirical measurements.

d        pr − pt
10 m     −70 dB
20 m     −75 dB
50 m     −90 dB
100 m    −110 dB
300 m    −125 dB

Table 5.1 Empirical measurements
Figure 5.6 Wireless communication channel

Solution
Define M_model = pr − pt = K − 10γ log10(d) and M_measure = pr − pt as given in Table 5.1.
The mean square error is

e = (1/5) Σ_{i=1}^{5} (M_model(d_i) − M_measure(d_i))² = (1/5) ||Ax − b||²,

where

x = [K; γ],

A = [1  −10 log10(10);
     1  −10 log10(20);
     1  −10 log10(50);
     1  −10 log10(100);
     1  −10 log10(300)],

b = [−70; −75; −90; −110; −125].

Substituting A and b into x* = (A^T A)^(-1) A^T b, we obtain

x* = [−26.74; 3.96],

and thus K = −26.74 and γ = 3.96.

Moreover, the following MATLAB code yields the same solution:

A = [1 -10*log10(10); 1 -10*log10(20); 1 -10*log10(50); ...
     1 -10*log10(100); 1 -10*log10(300)];
b = [-70; -75; -90; -110; -125];
x = A\b

x =

   -26.7440
     3.9669
5.6 Convex Optimization
 Convex optimization includes least squares and the solution of linear equations as special cases.
 A number of problems can be posed as constrained optimization problems of the standard
form

minimize    f0(x)
subject to  fi(x) ≤ 0,  i = 1, ..., m
            hi(x) = 0,  i = 1, ..., p,

 where x ∈ R^n is the vector of optimization variables and f0, fi, hi : R^n → R are, respectively,
the objective (or cost) function, the inequality constraint functions and the equality constraint
functions.
 A point x is feasible if it satisfies all the constraints.
 The set of all feasible points forms the feasible set C. The optimization problem is said to be
feasible if set C is non-empty and is said to be unconstrained if m = p = 0.

 The solution to the optimization problem, i.e. the optimal value, is denoted by f*, and a point
x* ∈ C is an optimal point if f0(x*) = f*.
 In other words, f* = inf_{x ∈ C} f0(x) and x* = argmin_{x ∈ C} f0(x).
 The conditions that guarantee such problems can be solved reliably lead us to convex
optimization.
 An optimization problem in standard form is a convex optimization problem if f0 and the fi are
all convex functions and the hi are all affine functions, hi(x) = ai^T x − bi. This is often written in
the following form:

minimize    f0(x)
subject to  fi(x) ≤ 0,  i = 1, ..., m
            Ax = b,

 where A ∈ R^(p×n) and b ∈ R^p.
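This standard form maps directly onto CVX syntax (CVX is used again in Example 5.7). The sketch below solves a small convex problem with a quadratic objective, a linear inequality constraint and an affine equality constraint; the problem data F, g, A and b are invented for illustration.

% Minimal CVX sketch of a convex problem in standard form.
n = 3;
F = [1 1 0; 0 1 1];  g = [2; 2];    % inequality constraint data (assumed)
A = [1 1 1];         b = 1;         % affine equality constraint data (assumed)

cvx_begin
    variable x(n)
    minimize( sum_square(x) )       % convex objective f0(x) = ||x||^2
    subject to
        F*x <= g;                   % inequality constraints fi(x) <= 0
        A*x == b;                   % affine equality constraints
cvx_end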

Example 5.6
Consider an object of interest that is composed of four equal-sized grid cells, see Fig. 5.7. For
illustrative purposes we consider a 2D object; once the idea is grasped, it can readily be extended to
the 3D case.

Figure 5.7. A simple grid example that well illustrates the idea of Computed Tomography (CT) .

 The object has two black grids on left upper and right bottom, while having two white
grids on right upper and left bottom.
 Suppose that X-ray absorbs no photon when passing through a black grid, i.e., the density is 0.
 Here we define the unit of the density as the number of photons per unit length.
 On the other hand, assume that X-ray absorbs something (say, the density is 1) when passing
through a white grid.
 Suppose we project a horizontal X-ray beam to the upper part of the object so that it passes
through the two upper grids.
 Then, it would absorb nothing in the black grid zone while absorbing something in the white
grid zone.
 Since the unit of the density that we define here is w.r.t. the unit length, the intensity of the X-
ray beam would be proportional to the width of the white grid zone.

 For simplicity, assume that the width is 1 unit length. Then, the intensity would be 1.
 But there is always an error in measurement, say that the measurement noise here is 0.1
(marked in red in Fig. 5.7).
 We also project another horizontal X-ray beam to the bottom part of the object. We then
measure the corresponding intensity, say 1 − 0.05.
 Projecting two vertical downward beams, we get two measurements, say: 1 + 0.02 and 1 −
0.1.
 Shooting a top-left to bottom-right diagonal beam to the object, we would absorb nothing
since it passes only through the black grids, so the measurement would be close to 0, say 0 −
0.03.
 On the other hand, the other bottom-left to top-right diagonal beam would pass only through
the white grids.

 Since the length of the passing zone is 2√2, the intensity measurement would be close to
2√2, say 2√2 + 0.2.
 What we want to figure out are the densities of the four grids, so let us denote them by the
unknown variables d1 (top left), d2 (top right), d3 (bottom left) and d4 (bottom right).
 Using this notation, we can express the above six measurements as the following linear
equations:

d1 + d2 = 1.1
d3 + d4 = 0.95
d1 + d3 = 1.02
d2 + d4 = 0.9
√2 d1 + √2 d4 = −0.03
√2 d2 + √2 d3 = 2√2 + 0.2.

 Least squares formulation: Notice that the linear equations above give six equations in only
four unknowns, so in general there is no exact solution. This is indeed a no-solution case, but we
can invoke Gauss's idea to address it.
 In other words, we can formulate it as a least squares (LS) problem. Defining

A := [1   1   0   0;
      0   0   1   1;
      1   0   1   0;
      0   1   0   1;
      √2  0   0   √2;
      0   √2  √2  0]

and

b := [1.1; 0.95; 1.02; 0.9; −0.03; 2√2 + 0.2],

 we can formulate it as:

min_d ||Ad − b||².
 Substituting into x* = (A^T A)^(-1) A^T b, the solution is d* = (A^T A)^(-1) A^T b. For the
above example, we obtain:

(d1, d2, d3, d4) = (0.0400, 1.0613, 1.0463, −0.0950).

 The solution makes intuitive sense, as it is close to the ground truth (0, 1, 1, 0).

 Moreover, the following MATLAB code yields the same solution:

A = [1 1 0 0; 0 0 1 1; 1 0 1 0; 0 1 0 1; ...
     sqrt(2) 0 0 sqrt(2); 0 sqrt(2) sqrt(2) 0];
b = [1.1; 0.95; 1.02; 0.9; -0.03; 2*sqrt(2)+0.2];
x = A\b

x =

    0.0400
    1.0613
    1.0463
   -0.0950
Example 5.7 Joint Power and Bandwidth Allocation
Consider the wireless network in Fig. 5.8 consisting of n users. The power and bandwidth
resources in the network are to be allocated to the users. Allocating power pi and bandwidth wi
to user i, the achieved transmit rate is

ri = αi wi log2(1 + βi pi / wi),

where αi and βi are positive scalars. Let W be the available bandwidth in the network.

Now consider the problem of allocating power and bandwidth jointly in the network such that
the transmit rate of each user i is not less than ri^th and an economic measure of the transmit
powers is minimized. Formulate this problem as a convex program.

Fig. 5.8 Power and bandwidth allocation in wireless networks.
 This problem can be formulated as in (5.12), with a squared cost function of the powers:

minimize_{p,w}  ( Σ_{i=1}^{n} pi )²                                      (5.12a)
subject to      ri^th − αi wi log2(1 + βi pi / wi) ≤ 0,  i = 1, ..., n,  (5.12b)
                w1 + w2 + ... + wn = W,                                  (5.12c)
                pi ≥ 0,  i = 1, ..., n,                                  (5.12d)
                wi ≥ 0,  i = 1, ..., n.                                  (5.12e)

 The objective (5.12a) is a quadratic, non-negative function and is thus convex.
 The left-hand side of constraint (5.12b) is a convex function. Moreover, the equality constraint
(5.12c) is affine.
 Finally, the last two constraints are also convex. Consequently, (5.12) is a convex problem.
 The following MATLAB code computes the joint power and bandwidth allocation using CVX
for n = 2 users and assumed values for W, ri^th, α and β.
 Note that rel_entr(x, y) in CVX is equivalent to x log(x/y).

%===============================================
W = 1;
rth = [2; 1];
a = [1; 1];
b = [1; 0.5];
cvx_begin
    variables p(2) w(2)
    minimize( sum(p)^2 )
    subject to
        for i = 1:2
            rth(i) - (-a(i)/log(2))*rel_entr(w(i), w(i)+b(i)*p(i)) <= 0;
        end
        sum(w) == W;
        w >= 0;
        p >= 0;
cvx_end
p
w
Status: Solved
Optimal value (cvx_optval): +83.3532

p=

5.1567
3.9731

w=

0.6219
0.3781
%===============================================

