Maximum Slope Method
I. Introduction
The Maximum Slope method generally converges to the solution only linearly, but it is global in nature: convergence occurs from almost any initial value, even a poor one. It is therefore used to obtain initial approximations accurate enough for techniques based on Newton's method, in the same way that the bisection method is used for a single equation.
The nonlinear system

    f1(x1, x2, …, xn) = 0,
    f2(x1, x2, …, xn) = 0,
         ⋮
    fn(x1, x2, …, xn) = 0

has a solution at x = (x1, x2, …, xn)^t precisely when the function g defined by

    g(x1, x2, …, xn) = ∑_{i=1}^{n} [fi(x1, x2, …, xn)]^2

has the minimal value zero.
The Maximum Slope method for finding a local minimum of an arbitrary function g from ℝ^n into ℝ can be described intuitively as follows:

→ Evaluate g at an initial approximation x^(0) = (x1^(0), x2^(0), …, xn^(0))^t.
→ Determine a direction from x^(0) along which the value of g decreases.
→ Move an appropriate amount in this direction and call the new vector x^(1).
The gradient of g at x is

    ∇g(x) = (∂g/∂x1 (x), ∂g/∂x2 (x), …, ∂g/∂xn (x))^t.

For a unit vector v, that is, one satisfying

    ‖v‖2^2 = ∑_{i=1}^{n} vi^2 = 1,

the directional derivative of g at x in the direction of v is

    Dv g(x) = lim_{h→0} (1/h) [g(x + hv) − g(x)] = v^t · ∇g(x).
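As a quick numerical sanity check, the limit definition above agrees with v^t · ∇g(x). A minimal sketch in Python; the quadratic g below is an arbitrary stand-in, not the article's example:

```python
import numpy as np

# Stand-in function g and its analytic gradient (illustrative only).
g = lambda x: x[0]**2 + 3*x[1]**2
grad_g = lambda x: np.array([2*x[0], 6*x[1]])

x = np.array([1.0, 2.0])
v = np.array([3.0, 4.0]) / 5.0      # unit vector, ||v||_2 = 1

h = 1e-6
fd = (g(x + h*v) - g(x)) / h        # limit definition with a small finite h
exact = v @ grad_g(x)               # v^t * grad g(x)
print(fd, exact)                    # both ~ 10.8; they agree to about 1e-5
```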
The direction of greatest decrease in the value of g at x^(0) is the direction of −∇g(x^(0)), so the next approximation is taken as

    x^(1) = x^(0) − α ∇g(x^(0))      …… (**)

for some constant α > 0. To choose α, consider the single-variable function

    h(α) = g(x^(0) − α ∇g(x^(0))).
The value of α that minimizes h is the value required in equation (**). Minimizing h directly would be costly; instead, h is interpolated at three numbers α1 < α2 < α3, chosen (it is hoped) close to the minimum, by a quadratic polynomial P. The value α̂ that minimizes P on [α1, α3] is then used to determine the new iterate in the search for the minimum value of g.

The point α̂ where P(α) attains its minimum on [α1, α3] is either the only critical point of P or the right endpoint α3 because, by assumption, P(α3) = h(α3) < h(α1) = P(α1). Since P(α) is a polynomial of the second degree, this critical point can be determined easily.
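In code, locating that critical point takes one line once the divided differences are known. A minimal sketch (the helper name and the sample values are mine; α1 = 0 is assumed, as in the algorithm below):

```python
# Newton divided-difference quadratic through (0, g1), (a2, g2), (a3, g3).
def critical_point(a2, a3, g1, g2, g3):
    h1 = (g2 - g1) / a2              # first divided difference on [0, a2]
    h2 = (g3 - g2) / (a3 - a2)       # first divided difference on [a2, a3]
    h3 = (h2 - h1) / a3              # second divided difference
    return 0.5 * (a2 - h1 / h3)      # zero of P'(alpha)

# Sample data (illustrative only): values of a true quadratic, so the
# interpolant reproduces it exactly and the critical point is 0.3.
h = lambda a: (a - 0.3)**2 + 1.0
a2, a3 = 0.5, 1.0
a0 = critical_point(a2, a3, h(0.0), h(a2), h(a3))
print(a0)                            # ~ 0.3
```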
II. Algorithm

Input: number of variables n; initial approximation x = (x1, x2, …, xn)^t; tolerance TOL; maximum number of iterations N.

Output: approximate solution x = (x1, x2, …, xn)^t or a failure message.
Step 1. Take k = 1 .
Step 2. While (k ≤ N), do steps 3-15.
Step 3. Take g1 = g(x1, x2, …, xn);  (Note: g1 = g(x^(k)).)
        z = ∇g(x1, x2, …, xn);  (Note: z = ∇g(x^(k)).)
        z0 = ‖z‖2.
Step 4. If z0 = 0, then OUTPUT ('Zero gradient');
        OUTPUT (x1, x2, …, xn, g1);
        (Procedure completed; a minimum may have been reached.)
        STOP.
Step 5. Take z = z/z0;  (convert z to a unit vector.)
        α1 = 0;
        α3 = 1;
        g3 = g(x − α3 z).
Step 6. While (g3 ≥ g1), do steps 7-8.
Step 7. Take α3 = α3/2;
        g3 = g(x − α3 z).
Step 8. If α3 < TOL/2, then OUTPUT ('No likely improvement');
        OUTPUT (x1, x2, …, xn, g1);
        STOP.
Step 9. Take α2 = α3/2;
        g2 = g(x − α2 z).
Step 10. Take h1 = (g2 − g1)/α2;
        h2 = (g3 − g2)/(α3 − α2);
        h3 = (h2 − h1)/α3.
Step 11. Take α0 = 0.5(α2 − h1/h3);  (the critical point of P occurs at α0.)
        g0 = g(x − α0 z).
Step 12. Find α from {α0, α3} such that g = g(x − α z) = min{g0, g3}.
Step 13. Take x = x − α z.
Step 14. If |g − g1| < TOL, then OUTPUT (x1, x2, …, xn, g);
        (Procedure completed successfully.)
        STOP.
Step 15. Take k = k + 1.
Step 16. OUTPUT ('Maximum number of iterations exceeded').
        (Procedure completed without success.)
        STOP.
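The steps above translate almost line by line into code. A sketch in Python with NumPy; the function names and the analytic-gradient interface are my own, not part of the original algorithm statement:

```python
import numpy as np

def steepest_descent(g, grad_g, x, tol=1e-5, N=100):
    """Minimize g from the initial vector x by the Maximum Slope method."""
    x = np.asarray(x, dtype=float)
    for _ in range(N):                       # Step 2
        g1 = g(x)                            # Step 3
        z = grad_g(x)
        z0 = np.linalg.norm(z)
        if z0 == 0.0:                        # Step 4: zero gradient
            return x, g1
        z = z / z0                           # Step 5: unit direction
        a3, g3 = 1.0, g(x - z)
        while g3 >= g1:                      # Steps 6-8: shrink until decrease
            a3 /= 2.0
            g3 = g(x - a3 * z)
            if a3 < tol / 2.0:               # no likely improvement
                return x, g1
        a2 = a3 / 2.0                        # Steps 9-10: divided differences
        g2 = g(x - a2 * z)
        h1 = (g2 - g1) / a2
        h2 = (g3 - g2) / (a3 - a2)
        h3 = (h2 - h1) / a3
        a0 = 0.5 * (a2 - h1 / h3)            # Step 11: critical point of P
        g0 = g(x - a0 * z)
        a, gmin = (a0, g0) if g0 < g3 else (a3, g3)   # Step 12
        x = x - a * z                        # Step 13
        if abs(gmin - g1) < tol:             # Step 14: converged
            return x, gmin
    raise RuntimeError("Maximum number of iterations exceeded")  # Step 16
```

For example, minimizing g(x) = (x1 − 1)^2 + (x2 + 2)^2 from (0, 0) returns a point near (1, −2) with a value of g near zero.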
III. Example
Solve the nonlinear system

    f1(x1, x2, x3) = 3x1 − cos(x2 x3) − 1/2 = 0,
    f2(x1, x2, x3) = x1^2 − 81(x2 + 0.1)^2 + sin x3 + 1.06 = 0,
    f3(x1, x2, x3) = e^(−x1 x2) + 20x3 + (10π − 3)/3 = 0.

Using the Maximum Slope method, calculate an approximation of the solution, starting at the initial point x^(0) = (0, 0, 0)^t.
Solution:
Let g(x1, x2, x3) = [f1(x1, x2, x3)]^2 + [f2(x1, x2, x3)]^2 + [f3(x1, x2, x3)]^2; then

    ∇g(x1, x2, x3) ≡ ∇g(x) = ( 2f1(x) ∂f1/∂x1 (x) + 2f2(x) ∂f2/∂x1 (x) + 2f3(x) ∂f3/∂x1 (x),
                               2f1(x) ∂f1/∂x2 (x) + 2f2(x) ∂f2/∂x2 (x) + 2f3(x) ∂f3/∂x2 (x),
                               2f1(x) ∂f1/∂x3 (x) + 2f2(x) ∂f2/∂x3 (x) + 2f3(x) ∂f3/∂x3 (x) )^t

    = 2 J(x)^t F(x),

where J(x) is the Jacobian matrix of F(x) = (f1(x), f2(x), f3(x))^t.
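The identity ∇g = 2 J^t F is easy to verify numerically against finite differences. A sketch with a made-up two-equation system (not the example system above):

```python
import numpy as np

# Made-up system F(x) = 0 and its Jacobian (illustrative only).
F = lambda x: np.array([x[0]**2 - x[1], np.sin(x[0]) + x[1]])
J = lambda x: np.array([[2*x[0],       -1.0],
                        [np.cos(x[0]),  1.0]])
g = lambda x: np.sum(F(x)**2)            # sum of squared residuals

x = np.array([0.7, -0.2])
analytic = 2 * J(x).T @ F(x)             # grad g = 2 J^t F

h = 1e-6                                 # central finite differences
fd = np.array([(g(x + h*e) - g(x - h*e)) / (2*h) for e in np.eye(2)])
print(np.max(np.abs(fd - analytic)))     # tiny: the two gradients agree
```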
Let

    z = (1/z0) ∇g(x^(0)) = (−0.0214514, −0.0193062, 0.999583)^t,

where z0 = ‖∇g(x^(0))‖2.
With α1 = 0, α2 = 0.5 and α3 = 1, we form the quadratic polynomial

    P(α) = g1 + h1 α + h3 α(α − α2)

that interpolates h(α) = g(x^(0) − α z) at the data

    α1 = 0,    g1 = 111.975,
    α2 = 0.5,  g2 = 2.53557,   h1 = (g2 − g1)/(α2 − α1) = −218.878,
    α3 = 1,    g3 = 93.5649,   h2 = (g3 − g2)/(α3 − α2) = 182.059,
                               h3 = (h2 − h1)/(α3 − α1) = 400.937.

Therefore:

    P(α) = 111.975 − 218.878 α + 400.937 α(α − 0.5).

Setting P′(α) = 0 gives the critical point α0 = 0.5(α2 − h1/h3) = 0.522959, so

    x^(1) = x^(0) − α0 z = (0.0112182, 0.0100964, −0.522741)^t

and g(x^(1)) = 2.32762.
The following table contains the remaining results. A real solution of the nonlinear system is (0.5, 0, −0.5235988)^t.

    k | x1^(k) | x2^(k) | x3^(k) | g(x1^(k), x2^(k), x3^(k))
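The quoted root can be checked directly by evaluating g there. A sketch in Python; f2 and f3 are as given above, and f1 is taken as 3x1 − cos(x2 x3) − 1/2 = 0, the usual first equation of this well-known test system (an assumption on my part):

```python
import numpy as np

f1 = lambda x: 3*x[0] - np.cos(x[1]*x[2]) - 0.5   # assumed first equation
f2 = lambda x: x[0]**2 - 81*(x[1] + 0.1)**2 + np.sin(x[2]) + 1.06
f3 = lambda x: np.exp(-x[0]*x[1]) + 20*x[2] + (10*np.pi - 3)/3

x = np.array([0.5, 0.0, -0.5235988])   # quoted solution; x3 ~ -pi/6
g = f1(x)**2 + f2(x)**2 + f3(x)**2     # sum of squared residuals
print(g)                               # tiny (~1e-13): effectively zero
```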
IV. Application
This method can be used to locate maxima and minima of functions, which has applications in many branches of science. In statistics, for example, it can be used to study the behavior of a population by following the direction of maximum slope of an objective function determined by the variance the population presents.
V. Computational algorithm
The flow of the program is:

Start
  Read f(x), N, n, x, TOL
  k = 1
  While k ≤ N:
      g1 = g(x^(k));  z = ∇g(x^(k));  z0 = ‖z‖2
      If z0 = 0: output x, g1 and stop  (zero gradient)
      z = z/z0;  α1 = 0;  α3 = 1;  g3 = g(x − α3 z)
      While g3 ≥ g1:
          α3 = α3/2;  g3 = g(x − α3 z)
          If α3 < TOL/2: output x, g1 and stop  (no likely improvement)
      α2 = α3/2;  g2 = g(x − α2 z)
      h1 = (g2 − g1)/α2;  h2 = (g3 − g2)/(α3 − α2);  h3 = (h2 − h1)/α3
      α0 = 0.5(α2 − h1/h3);  g0 = g(x − α0 z)
      α = the value in {α0, α3} minimizing g(x − α z);  x = x − α z
      If |g(x) − g1| < TOL: output x and stop
      k = k + 1
  Output 'Maximum number of iterations (N) exceeded'
End

VI. Conclusions and recommendations
VII. Appendix
Program in MATLAB
function [x,varargout]= maxSlope(a,b,varargin)
% MAXSLOPE Steepest descent (maximum slope) iteration for the linear
% system a*x = b. Optional arguments: maximum number of iterations,
% tolerance, and initial guess.
n=length(a); x=zeros(n,1);
mmax=40; eps=1e-6;
if nargin>2
mmax=varargin{1};
end
if nargin>3
eps=varargin{2};
end
if (nargin>4)
x=varargin{3};
end
res=zeros(1,mmax);
r=b-a*x; res(1)=dot(r,r); aux=norm(b); % initial residual
for m=1:mmax
p=a*r;
xi=res(m)/dot(r,p);
x=x+xi*r;
r=r-xi*p;
res(m+1)=dot(r,r); % store the squared residual norms
if (sqrt(res(m+1))<eps*aux)
break
end
end
res=res(1:m+1);
if (m==mmax) && nargout<=3
disp( 'maximum number of iterations exceeded' )
end
if nargout>1
varargout{1}=m;
end
if nargout>2
varargout{2}=sqrt(res(:));
end
if (nargout>3)
if m==mmax
varargout{3}=0;
else
varargout{3}=1;
end
end
return
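For comparison, the same iteration in Python: a transcription of the routine above (with the residual initialized as r = b − a·x, the standard steepest-descent form for a linear system; the simplified interface is mine):

```python
import numpy as np

def max_slope(a, b, mmax=40, eps=1e-6, x=None):
    """Steepest descent for a*x = b, a symmetric positive definite."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if x is None:
        x = np.zeros(len(b))
    r = b - a @ x                    # initial residual
    aux = np.linalg.norm(b)
    for m in range(1, mmax + 1):
        p = a @ r
        xi = (r @ r) / (r @ p)       # exact line-search step length
        x = x + xi * r
        r = r - xi * p               # updated residual
        if np.linalg.norm(r) < eps * aux:
            break
    return x, m

a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, m = max_slope(a, b)
print(x)                             # ~ (1/11, 7/11), the solution of a x = b
```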