
Copyright © IFAC Control Applications of Nonlinear Programming and Optimization, Paris, France, 1989

A GENERALIZED REDUCED GRADIENT ALGORITHM FOR SOLVING LARGE-SCALE DISCRETE-TIME NONLINEAR OPTIMAL CONTROL PROBLEMS

J. L. D. Facó¹

INRIA, Rocquencourt, 78153 Le Chesnay, France

¹On leave from Computer Science Department, Instituto de Matemática, Universidade Federal do Rio de Janeiro, Brazil.

Abstract. Nonlinear dynamical control systems are considered in a unified approach as discrete-time nonlinear optimal control problems with time delays and inequality constraints on the state and the control variables (usually double bounds). These problems are in general large-scale, and efficient numerical solutions can be obtained by constrained Nonlinear Programming methods if reliable techniques able to deal with these numerical difficulties are employed. A generalized reduced gradient algorithm is proposed that exploits the staircase structure of the jacobian matrix of the dynamic equations, using some priority principles on the partition of the variables into basic and independent sets for each time period when a reinversion is needed, and for choosing a substitute basic variable when changes of basis occur for regularity reasons. Factorized representations of the basic matrix facilitate the resolution of the linear systems of equations in different parts of the algorithm. Gaussian eliminations (LU decompositions) of the diagonal blocks of the main factor of the representation improve the numerical stability of these processes. In the reduced dimension space of the bounded independent variables we have an unconstrained differentiable nonlinear objective function. The search directions can be computed by unconstrained optimization methods with limited memory requirements, such as conjugate gradients. If some amount of extra storage is available, limited-storage methods combining properties of the quasi-Newton BFGS and conjugate gradient methods present superior convergence rates. These alternative search directions can improve the convergence of the algorithm. Time delays in the nonlinear dynamic equations can be handled by specific sparse matrix techniques with no influence on the main strategy of the algorithm. A computer code has been designed, and numerical experiments with different optimization models for applications to electric power generation planning, macroeconomics and fishery management have been carried out with encouraging results.

Keywords. Nonlinear control systems; large-scale nonlinear programming; reduced gradient method; limited-storage quasi-Newton algorithms.

INTRODUCTION

Nonlinear dynamical control systems are considered in a unified approach as discrete-time nonlinear optimal control problems with time delays and inequality constraints on the state and the control variables:

$$\max\; L(x_0, x_1, \ldots, x_T;\; u_0, u_1, \ldots, u_{T-1}) \tag{1}$$

so that

$$x_{t+1} = f_t(x_t, x_{t-k_1}, x_{t-k_2}, \ldots, x_{t-k_s};\; u_t, u_{t-\ell_1}, u_{t-\ell_2}, \ldots, u_{t-\ell_r}), \tag{2}$$

$$\alpha_t \le u_t \le \beta_t, \quad t = 0, 1, \ldots, T-1, \tag{3}$$

$$a_t \le x_t \le b_t, \quad t = 0, 1, \ldots, T, \tag{4}$$

where for all $t$:
- $x_t \in \mathbb{R}^m$ is the column vector of state variables,
- $u_t \in \mathbb{R}^n$ is the column vector of control variables,
- $a_t, b_t \in \mathbb{R}^m$ and $\alpha_t, \beta_t \in \mathbb{R}^n$ are the column vectors formed by the bounds,
- $L$ and $f_t$ are continuously differentiable nonlinear functions,

$T$ is the total number of time periods, and the time delays are $k_i$ ($i = 1, 2, \ldots, s$) and $\ell_j$ ($j = 1, 2, \ldots, r$).

This model is a constrained nonlinear programming problem with $(m+n)T + m$ variables and $mT$ equality constraints, generally large-scale unless $m$, $n$ and $T$ are very small. We shall present a reduced gradient method that exploits the special structure of the dynamic equality constraints (2), and employs large-scale unconstrained nonlinear programming techniques able to take the inequalities (3), (4) into account.

An algorithm based on Abadie's Generalized Reduced Gradient (GRG), exploiting the staircase structure of the jacobian matrix of the dynamic equations and the sparse entries corresponding to the time delays, is presented in the following section. Limited-storage quasi-Newton methods are briefly introduced in the next section, and we discuss the use of these techniques as alternative search directions in the reduced space of the independent variables submitted to simple bounds. A new reduced gradient algorithm for solving optimal control problems with these features is presented in the last section.
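To make the staircase structure concrete, the following sketch (ours, not part of the paper; the variable ordering and function name are illustrative assumptions) builds the block sparsity pattern of the jacobian of the dynamics (2): an identity block on each $x_{t+1}$, diagonal blocks $\partial f_t/\partial x_t$ and $\partial f_t/\partial u_t$, and sparse sub-diagonal blocks for the delayed states and controls.

```python
import numpy as np

def staircase_pattern(m, n, T, state_delays=(), control_delays=()):
    """Block sparsity pattern of the jacobian of the dynamics (2).

    Rows: the mT equations x_{t+1} - f_t(...) = 0, one block row per period.
    Columns: the (m+n)T + m variables ordered x_0, u_0, x_1, u_1, ..., x_T.
    Returns a boolean matrix marking the nonzero m x m / m x n blocks.
    """
    def xcol(t):  # first column of the block of x_t
        return t * (m + n)

    def ucol(t):  # first column of the block of u_t
        return t * (m + n) + m

    J = np.zeros((m * T, (m + n) * T + m), dtype=bool)
    for t in range(T):                                # block row t: equation for x_{t+1}
        r = slice(t * m, (t + 1) * m)
        J[r, xcol(t + 1):xcol(t + 1) + m] = True      # identity block on x_{t+1}
        J[r, xcol(t):xcol(t) + m] = True              # df_t/dx_t (staircase diagonal)
        J[r, ucol(t):ucol(t) + n] = True              # df_t/du_t
        for k in state_delays:                        # sparse sub-diagonal delay blocks
            if t - k >= 0:
                J[r, xcol(t - k):xcol(t - k) + m] = True
        for l in control_delays:
            if t - l >= 0:
                J[r, ucol(t - l):ucol(t - l) + n] = True
    return J

# e.g. m=2 states, n=1 control, T=5 periods, one state delay k_1 = 2:
print(staircase_pattern(2, 1, 5, state_delays=(2,)).astype(int))
```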
THE ALGORITHM GRECO

Reduced Gradient Methods

The reduced gradient method was introduced by Wolfe (1963) to optimize a nonlinear function

submitted to linear constraints and simple bounds on the variables:

$$\max\; \{ f(x) \mid Ax = b,\; a_1 \le x \le a_2,\; x \in \mathbb{R}^n \},$$

where $A$ ($m \times n$) is a rank-$m$ matrix and $f$ is a differentiable nonlinear function.

Abadie and Carpentier (1966) extended the approach to the case of general nonlinear equality constraints:

$$\max\; \{ f(x) \mid c_i(x) = 0,\; i = 1, 2, \ldots, m,\; a \le x \le b,\; x \in \mathbb{R}^n \},$$

where $f$ and the $c_i$ are differentiable functions on an open set containing $P = \{ a \le x \le b \}$.

The GRG method consists in the solution of a sequence of unconstrained nonlinear programmes in a reduced dimension space, a manifold defined by the nonlinear equations, that stops when a constrained local optimal solution (a Kuhn-Tucker point) is reached.

The original variables are partitioned into $m$ basic and $n-m$ independent (nonbasic) sets respecting the regularity conditions:

(i) the basic variables are strictly between their lower and upper bounds;

(ii) the basic matrix defined by the columns of the jacobian corresponding to the basic variables is non-singular.

Murtagh and Saunders (1978) introduced the terminology superbasic for the nonbasic variables that are strictly between their bounds, as in (i), keeping as nonbasic the variables fixed at one of their bounds; only the superbasics are really independent, in a manifold of dimension $k \le n-m$.

Several GRG algorithms exist, and computer codes have been designed and applied in different domains with practical success. They have recently been rated among the most efficient and robust of the available constrained nonlinear programming codes by several comparative numerical studies.
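To fix ideas, the sketch below (ours, not from the paper) computes the reduced gradient for the linearly constrained case above: eliminating the basic variables through $B\,dx_B = -N\,dx_N$ leaves $g_N = \nabla_N f - N^T B^{-T} \nabla_B f$ as the gradient of $f$ seen as a function of the independent variables only.

```python
import numpy as np

def reduced_gradient(A, grad, basic, independent):
    """Reduced gradient for Max{f(x) | Ax = b} under a basic/independent split.

    Eliminating the basic variables via dx_B = -B^{-1} N dx_N gives
    g_N = grad_N - N^T B^{-T} grad_B.
    """
    B = A[:, basic]                           # basic matrix (must be non-singular)
    N = A[:, independent]
    pi = np.linalg.solve(B.T, grad[basic])    # multipliers: B^T pi = grad_B
    return grad[independent] - N.T @ pi

# Tiny example: f(x) = -||x - c||^2 with one equality constraint.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 3.0])
x = np.array([2.0, 2.0, 2.0])                 # feasible point
g = -2.0 * (x - c)                            # gradient of f at x
print(reduced_gradient(A, g, basic=[0], independent=[1, 2]))
```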
The Optimal Control Version

The greatest difficulty in the optimal control problem introduced in the first section, besides its dimension, is the inequality constraints (4) on the state variables; in fact they cannot always be considered as basic and easily computed by (2) after the optimal control has been found. Abadie (1970) and Mehra and Davies (1972) showed that the state variables should be considered as independent during parts of the trajectory of the dynamic system. Abadie and Bichara (1973) presented an algorithm with good numerical results, but its application had some restrictions. Mantell and Lasdon (1978) and Drud (1985) developed well known GRG-based algorithms for large-scale dynamic nonlinear problems.

In each GRG iteration we must solve three linear systems using the inverse of the basic matrix to compute the next solution estimate. In large-scale problems it is impossible to invert matrices, so the algorithm GRECO (Gradient REduit pour le Contrôle Optimal), developed by Facó (1977), solves the linear systems by special Gaussian eliminations.

The bounded state variables cannot be chosen as basic everywhere (as would be desirable): if some of them are at a bound, they shall be included in the independent set for some iterations.

We introduce certain priority principles in order to exhibit an easy working structure in the GRG basic matrix:

P1 - introduce in the basis as many state variables as you can.

P2 - if a state variable is at a bound, look for a substitute among the controls of the same time period.

P3 - if you cannot use P2, keep the state variable in the "working basis" and introduce in the "GRG basis" a control of a preceding period, building the elementary transformation matrix (pivoting).

P4 - do an LU decomposition of each diagonal block.

Using these principles we can exhibit basic matrix factorizations that minimize the fill-in in the basis representation and present good numerical stability, as the sketch below illustrates.
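A schematic rendering (ours; the tuple encoding of variables is an illustrative assumption, not GRECO's data structure) of how P1-P3 could drive the basis choice for one time period:

```python
def partition_period(m, t, states_at_bound, free_controls):
    """Schematic basis choice for period t following P1-P3 (a sketch, ours).

    states_at_bound: indices of period-t state variables sitting at a bound.
    free_controls:   indices of period-t controls strictly inside their bounds.
    Returns (basic, deferred): `deferred` lists the bound states kept in the
    "working basis", for which a control of a preceding period must enter the
    "GRG basis" through an elementary transformation matrix, as in P3.
    """
    basic, deferred = [], []
    substitutes = list(free_controls)
    for i in range(m):
        if i not in states_at_bound:
            basic.append(("x", t, i))                   # P1: free state goes basic
        elif substitutes:
            basic.append(("u", t, substitutes.pop(0)))  # P2: same-period control
        else:
            deferred.append(("x", t, i))                # P3: resolved by pivoting
    return basic, deferred

# e.g. period 3 with state 1 at a bound and controls 0, 2 free:
print(partition_period(4, 3, {1}, [0, 2]))
```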
The Algorithm

Step 1. For each $t$ obtain the sets $B$ and $N$ of basic and independent variables by the application of principles P1, P2, P3 and P4; the GRG basis is factorized as

$$B = A \cdot E_1 \cdot E_2 \cdots E_k,$$

where $A$ is the working basis, $B$ the GRG basis, and the $E_i$ are eta-vectors (elementary matrices), $i = 1, 2, \ldots, k$.

Step 2. Compute the search direction $d_N$ of the independent variables:

(i) compute the Lagrange multipliers by a procedure BACKW(x, u):

$$\pi B = \bar{b}, \quad \text{where } \bar{b} = -c_B \text{ (the basic components of the gradient vector)};$$

(ii) compute the reduced gradient $g_N = (w_N, \xi_N)$:

$$w^t = \frac{\partial L}{\partial x_t} - \pi^{t-1} + \sum_{\tau = t}^{\min\{T,\, t+s\}} \pi^\tau \frac{\partial f_\tau}{\partial x_t}, \quad t = 1, 2, \ldots, T-1,$$

$$w^T = \frac{\partial L}{\partial x_T} - \pi^{T-1},$$

$$\xi^t = \frac{\partial L}{\partial u_t} + \sum_{\tau = t}^{\min\{T,\, t+r\}} \pi^\tau \frac{\partial f_\tau}{\partial u_t}, \quad t = 0, 1, \ldots, T-1;$$

(iii) compute the projected reduced gradient $P_N$:

a) $P_N = g_N$ or $0$ (zero for a component whose independent variable is at a bound with the gradient component pointing out of the bounded interval);

b) if $P_N = 0$, stop; otherwise apply a conjugate gradient method to obtain the improving direction $d_N$;

Step 3. Compute the search direction $d_B$ of the basic variables by a procedure FORW(x, u):

$$B d_B = \tilde{b}, \quad \text{where } \tilde{b} = -A_N d_N;$$

Step 4. Compute an improved feasible solution:

(i) line search on the tangent hyperplane, giving a new point $(x^\theta, u^\theta)$ that usually does not satisfy the nonlinear constraints;

(ii) feasibility restoration: by modifying only the basic variables, solve the system of nonlinear equations (2) by a Newton-like
procedure FORW($x^\theta$, $u^\theta$), and, by adjusting $\theta$ if necessary, find a new point such that

$$L(x^\theta, u^\theta) > L(x, u);$$

(iii) go to step 1.
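For intuition about BACKW and FORW, the sketch below (ours, not the GRECO Fortran) specializes them to the delay-free case in which every state is basic: setting $w^t = 0$ in Step 2(ii) collapses the multiplier system $\pi B = -c_B$ to the backward recursion $\pi^{T-1} = \partial L/\partial x_T$, $\pi^{t-1} = \partial L/\partial x_t + \pi^t\, \partial f_t/\partial x_t$, and the tangent system of Step 3 to a forward recursion on the step of the basic variables.

```python
import numpy as np

def backw(dLdx, dfdx):
    """BACKW specialized to the delay-free case with every state basic
    (a sketch, ours; not the GRECO Fortran).

    dLdx[t-1] = dL/dx_t   for t = 1..T    (vectors in R^m)
    dfdx[t]   = df_t/dx_t for t = 0..T-1  (m x m jacobians)
    Returns pi with pi[t] playing the paper's pi^t, from
    pi^{T-1} = dL/dx_T,  pi^{t-1} = dL/dx_t + (df_t/dx_t)^T pi^t.
    """
    T = len(dLdx)
    pi = [None] * T
    pi[T - 1] = dLdx[T - 1]
    for t in range(T - 1, 0, -1):
        pi[t - 1] = dLdx[t - 1] + dfdx[t].T @ pi[t]
    return pi

def forw(dfdx, dfdu, du):
    """FORW on the tangent hyperplane (same assumptions): the basic step
    follows d_{t+1} = (df_t/dx_t) d_t + (df_t/du_t) du_t, with d_0 = 0
    since x_0 is fixed."""
    d = [np.zeros(dfdx[0].shape[0])]
    for t in range(len(du)):
        d.append(dfdx[t] @ d[t] + dfdu[t] @ du[t])
    return d
```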
Remarks Concerning the Algorithm

(a) line search: as conjugate gradients are very sensitive to line searches, in step 4(i) we compute an optimal stepsize as in the original GRG method [see Abadie (1975)].

(b) in step 4(ii) a basic variable may hit a bound: we do a change of basis, choosing the entering variable by the application of the same principles used for the reinversions.

(c) when the number of factors in the basic matrix representation attains a predetermined number, a reinversion is automatically done.

(d) different levels of anti-degeneracy techniques are employed to avoid cycling risks.

(e) time delays have only a passive role in the matrix factorization strategy, being considered by a specific sparse matrix procedure, not interfering in the principles stated before.

(f) other types of constraints, like supplementary inequalities, can be incorporated into the formulation (1)-(4) by the introduction of slack variables and the augmentation of the state vector dimension.

(g) if the initial state vector is free, we just consider the control vector of the first period with $m+n$ components.

(h) a Fortran code has been designed by Facó (1977), and different models for applications in electric energy generation, macroeconomics, and fishery management have been solved by Facó (1979, 1980, 1983, 1988).

LIMITED-STORAGE QUASI-NEWTON METHODS

Conjugate gradient algorithms have been widely used to optimize large-scale problems because they use very few memory locations, $O(n)$, but their convergence is quite slow; quasi-Newton methods are very fast but need a large amount of storage, $O(n^2)$, to approximate the hessian matrix of the objective function. Common properties of these methods were analyzed by many people, like Nazareth (1979), Shanno (1978), Buckley (1978) and Nocedal (1980).

Limited-storage methods combine properties of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method and conjugate gradient methods using a variable amount of available extra storage (a controlled multiple of $n$). The performance of CG methods can be substantially increased by the use of this supplementary information.

We consider the methods CONMIN by Shanno and Phua (1978), VSCG by Buckley and LeNir (1983) and L-BFGS by Nocedal (1980), proposed to solve large-scale unconstrained nonlinear programmes. All of them present theoretical superlinear convergence.

Buckley-LeNir's method extends the memoryless quasi-Newton algorithm introduced by Shanno (1978), which uses 2 pairs of vectors to approximate the second order information, to a predetermined variable number of vectors depending on the available extra storage. The algorithm has a QN part and a CG part, and has been considered the best one; but recent numerical comparisons made independently by Liu and Nocedal (1988) and by Gilbert and Lemaréchal (1988) show a better performance of Nocedal's method. This method is a BFGS that uses the $m$ most recent changes in the gradients and the solution estimates to compute a QN search direction.
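The heart of this method is Nocedal's two-loop recursion, which applies the implicitly updated inverse hessian approximation $H_k$ to a vector using only the $m$ stored pairs. A standard sketch (ours, not taken from any of the cited codes):

```python
import numpy as np

def lbfgs_apply(g, s_list, y_list):
    """Two-loop recursion of Nocedal (1980): computes H_k g from the m most
    recent pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i, without ever
    forming the n x n matrix H_k. The search direction is then -H_k g for
    minimization, or +H_k g for the maximization in (1)."""
    q = g.astype(float).copy()
    stack = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        stack.append((a, rho, s, y))
        q -= a * y
    if s_list:                                             # H_0 scaling (gamma_k I)
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for a, rho, s, y in reversed(stack):                   # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return q
```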
Consideration of the Simple Bounds on the Variables

We shall consider two approaches to deal with these constraints, trying to preserve the superlinear convergence property of the unconstrained methods. The first one was introduced by Rosen (1960); it is based on gradient projections, manifold suboptimizations and active set strategies. Lenard (1979) presents a comparative numerical study of these procedures.

Stimulated by the deficiencies of these methods in large-scale problems, Murtagh and Saunders (1978) presented a reduced gradient method for linear constraints, extending the revised simplex techniques to a nonlinear objective function, implemented in their code MINOS, which incorporates bound constraints. Some of the $n-m$ independent variables (called nonbasic) are fixed at a bound, and they use a conjugate gradient or a BFGS subprogramme to solve the successive unconstrained manifold optimizations, with matrix updates to manage the two situations: (i) a superbasic variable hits a bound and becomes nonbasic, and (ii) a basic variable hits a bound and is exchanged with a superbasic.

Shanno and Marsten (1982) proposed a similar reduced gradient formulation with Shanno's memoryless quasi-Newton CG method, trying to choose the largest set of superbasics at each reinversion, and a more efficient treatment of the binding situations. Comparative numerical experiments also show the advantage of allowing looser convergence criteria in the initial manifolds.

The other approach was introduced by Bertsekas (1982a, b) to solve the problem $\min \{ f(x) \mid x \ge 0,\ x \in \mathbb{R}^n \}$, proposing projected Newton methods of the form

$$x_{k+1} = [x_k - \alpha_k D_k g_k]^+,$$

where $[\cdot]^+$ denotes projection on the positive orthant, $\alpha_k$ is a stepsize obtained by an Armijo-like rule, and $D_k$ is a symmetric positive definite, partly diagonal matrix.

A key property of this technique is the capacity to identify the manifold of binding constraints at a solution in a finite number of iterations. To avoid the zigzag phenomena of feasible directions algorithms, he suggests certain enlargements of the binding sets by the definition of scalars $\varepsilon^k$ measuring the proximity to the bounds. An extension to deal with double bounds is also described.

Bonnans (1983) proposes a projected variable metric method for bound constrained optimization following Bertsekas' techniques, proving superlinear convergence for a different linestep choice: Wolfe's rule [see Wolfe (1969), Lemaréchal (1981)]. Bonnans (1987) presents some extensions to limited-memory methods.

Conn, Gould and Toint (1988) extend the global convergence properties of trust region algorithms from Moré (1982) to the case with simple bounds, including results for nonconvex functions, by Bertsekas's projected operator methodology.

ALTERNATIVE SEARCH DIRECTIONS IN THE GRECO ALGORITHM

Limited-storage methods can be introduced in the algorithm GRECO proposed before. In Step 2(iii)(a) the reduced gradient is projected taking into account the bounds on the independent variables and the possibility of becoming infeasible, just looking at the sign of the corresponding reduced gradient component:

$$p_j = \begin{cases} 0 & \text{if } x_j \text{ is at a bound and } g_j \text{ points out of the feasible interval}, \\ g_j & \text{otherwise}. \end{cases}$$

The number of components different from 0 defines the manifold dimension $k \le n-m$. Any unconstrained optimization method could be applied, like gradient (linear convergence), conjugate gradients or quasi-Newton methods (superlinear convergence rates), if we are aware of the large-scale dimension of the discrete-time optimal control problem. As was observed above, limited-storage quasi-Newton methods are in our view the only class of methods that can benefit from the two properties: efficiency and controlled memory size.

Concerning the bound constraints discussed before, both approaches can be adapted to a GRG method, but we think that we must distinguish between medium-to-large problems and really large-scale ones. The Bertsekas approach could theoretically be applied in any case, while the other class of devices can only be used in the first category, because of the practical impossibility of introducing the matrix update operators. Following Lemaréchal (1989) we introduce the Bertsekas techniques only for really general large-scale situations.

The unconstrained optimization method being able to consider these simple constraints, its introduction in the GRG method is straightforward.

Modifications in the GRECO Algorithm

The modifications use the following techniques (a sketch of the resulting step follows the list):

1. Define the set of active bounds:

$$I_+^k = \{ j \mid a_j \le x_j^k \le a_j + \varepsilon^k \text{ and } g_j < 0, \ \text{or}\ b_j - \varepsilon^k \le x_j^k \le b_j \text{ and } g_j > 0 \}.$$

2. Define the matrix $D_k$ diagonal with respect to $I_+^k$.

3. Compute the new solution estimate, projecting onto the bounds:

$$[\bar{x}_j]^* = \begin{cases} b_j & \text{if } \bar{x}_j \ge b_j, \\ \bar{x}_j & \text{if } a_j < \bar{x}_j < b_j, \\ a_j & \text{if } \bar{x}_j \le a_j. \end{cases}$$

The line search uses safeguarded cubic interpolations, and $\alpha_k$ is chosen by Wolfe's rule:

$$f(x_k + \alpha_k \Delta_k) \le f(x_k) + \rho_1 \alpha_k g_k^T \Delta_k,$$

$$|g(x_k + \alpha_k \Delta_k)^T \Delta_k| \le \rho_2 |g_k^T \Delta_k|.$$

4. The $D_k$ matrices are never computed; $m$ couples of gradient and solution estimate changes are used, following Nocedal (1980).
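A minimal sketch (ours) of one such modified step on the box $a \le x \le b$, in the maximization convention of problem (1); `direction` stands for any limited-memory operator on the free components, for instance the two-loop recursion above applied so as to give an ascent direction:

```python
import numpy as np

def projected_ascent_step(x, g, a, b, eps, direction, alpha):
    """One epsilon-active projected step following steps 1-3 above
    (a sketch, ours; maximization convention, as in problem (1)).

    direction: maps the gradient of the free components to an ascent
    direction, e.g. the L-BFGS two-loop recursion sketched earlier.
    """
    # Step 1: epsilon-active bounds whose gradient points out of the box.
    active = ((x <= a + eps) & (g < 0)) | ((x >= b - eps) & (g > 0))
    # Step 2 is implicit: D_k acts as the identity on the active components
    # (they stay put) and as the quasi-Newton operator on the free ones.
    d = np.zeros_like(x)
    free = ~active
    d[free] = direction(g[free])
    # Step 3: trial point projected back onto the box; alpha would then be
    # accepted or rejected by Wolfe's rule (the two conditions above).
    return np.clip(x + alpha * d, a, b)
```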


EXAMPLES

The following problems have been modelled by Optimal Control and solved by the code GRECO, designed in Fortran. The introduction of the alternative quasi-Newton search directions and inexact linesearches can improve the efficiency of the method.

Electrical Energy Generation Scheduling for Hydrothermic Power Systems

Hydrothermic electric power systems consisting of a few thermal plants and many hydraulic reservoirs are considered. The problem is to determine the required power generation in order to satisfy a predicted demand at minimal cost while respecting some operation constraints over a planning horizon $T$. The model is deterministic, for a one to two year horizon, with time periods representing weeks or months.

NH = number of hydraulic reservoirs
NTER = number of thermal plants
T = number of time periods (horizon)

and:
- $X_{i,t}$ = storage of reservoir $i$ at the end of period $t$,
- $Y_{i,t}$ = inflow to the reservoir $i$ at period $t$,
- $H_{i,t}$ = power generated by the hydro plant $i$ at period $t$,
- $S_{j,t}$ = power generated by the thermal plant $j$ at period $t$,
- $Q_{i,t}$ = turbined water,
- $V_{i,t}$ = spilled water.

Model

$$\min\; C$$

s.t.

$$X_{i,t} = X_{i,t-1} + Y_{i,t} + \sum_{k \in M_i} (Q_{k,t-\tau_k} + V_{k,t-\tau_k}) - Q_{i,t} - V_{i,t}, \quad i = 1, \ldots, NH, \quad t = 1, \ldots, T,$$

where $M_i$ is the set of reservoirs immediately upstream of reservoir $i$ and $\tau_k$ the water travel-time delay from reservoir $k$, with bounds on the storages and on the turbined and spilled water,

$$QI_i \le Q_{i,t} \le QS_i, \quad i = 1, \ldots, NH,$$

$$0 \le V_{i,t}, \quad t = 1, \ldots, T,$$

and where the objective function is the thermal generation cost to be minimized,

$$C = \sum_{t=1}^{T} \sum_{j=1}^{NTER} G_j(S_{j,t}),$$

where $G_j(S_{j,t})$, the cost of the thermal generation of the plant $j$, is a nonlinear, usually quadratic, function.
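The sketch below (ours; the array layout, the `upstream`/`tau` encoding of the river network and the quadratic cost coefficients are illustrative assumptions, not the GRECO data structures) spells out the two pieces of this model: the water-balance equalities, which play the role of the dynamics (2), and the thermal cost objective.

```python
import numpy as np

def water_balance_residual(X, Y, Q, V, upstream, tau):
    """Residual of the reservoir dynamics for i = 1..NH, t = 1..T:
    X[i,t] - X[i,t-1] - Y[i,t] - sum_{k in M_i}(Q[k,t-tau_k] + V[k,t-tau_k])
           + Q[i,t] + V[i,t]   (zero when the water balance holds).
    Arrays are indexed [i, t] with column 0 holding the initial values."""
    NH, T1 = X.shape
    R = np.zeros((NH, T1 - 1))
    for i in range(NH):
        for t in range(1, T1):
            inflow = sum(Q[k, t - tau[k]] + V[k, t - tau[k]]
                         for k in upstream[i] if t - tau[k] >= 0)
            R[i, t - 1] = (X[i, t] - X[i, t - 1] - Y[i, t]
                           - inflow + Q[i, t] + V[i, t])
    return R

def thermal_cost(S, c0, c1, c2):
    """C = sum_t sum_j G_j(S[j,t]) with quadratic G_j (the usual case);
    S has shape (NTER, T), the coefficient vectors shape (NTER,)."""
    return float(np.sum(c0[:, None] + c1[:, None] * S + c2[:, None] * S**2))
```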
Fishery Management of a Tuna Ecosystem

The tuna fisheries along the southeastern coast of Brazil are important economic activities. Optimal Control models have been built by Thomaz, Gomes and Facó (1984), Thomaz (1986) and Facó (1988), considering interacting multispecies fish populations submitted to increasing fishing pressure. Regulation and limitation of the quantities captured are most important to achieve a maximum sustainable yield. The following model is a modified version of the Lotka-Volterra multispecies interaction with a bioeconomic objective function.

Three competitive tuna species are considered. We can represent the dynamic ecosystem by coupled difference equations:

$$x_i(t+1) - x_i(t) = [r_i - a_{i1} x_1(t) - a_{i2} x_2(t) - a_{i3} x_3(t)]\, x_i(t),$$

for $i = 1, 2, 3$ and $t = 0, 1, 2, \ldots, T$, where:
- $x_i(t)$ = biomass of population $i$ at time $t$,
- $r_i$ = net growth rate of population $i$,
- $a_{ij}$ = inhibition coefficient of the population $j$ on the population $i$.

The inhibition coefficients $a_{ij}$ are estimated taking into account the $s$ clusters generated by the optimal spatial distribution:

$$a_{ij} = \frac{r_i}{K_i}\, c_{ij}, \qquad c_{ij} = \frac{\sum_{h=1}^{s} p_{ih}\, p_{jh}}{\sum_{h=1}^{s} p_{ih}^2}, \quad i, j = 1, 2, 3,$$

where:
- $p_{ih}$ = percentage of the population $i$ in the cluster $h$,
- $K_i$ = carrying capacity of the environment for the population $i$ in the absence of the other competing species (saturation level).

Let us consider the optimal control model:

$$\max\; J = \sum_{t=0}^{T} e^{-\delta t} \sum_{i=1}^{3} d_i u_i(t),$$

so that

$$x_i(t+1) - x_i(t) = \Big[ r_i - \sum_{j=1}^{3} a_{ij} x_j(t) \Big]\, x_i(t) - u_i(t),$$

$$\alpha_i \le u_i(t) \le \beta_i, \qquad a_i \le x_i(t) \le b_i,$$

$i = 1, 2, 3$, and $t = 0, 1, 2, \ldots, T$, where:
- $u_i(t)$ = harvesting rate of population $i$,
- $d_i$ = unitary profit of harvesting population $i$,
- $T$ = planning horizon,
- $\alpha_i$, $\beta_i$ and $a_i$, $b_i$ = bounds on $u_i$ and $x_i$, respectively,
- $x_i(0)$ known, and $\delta$ = mean discount rate on $[0, T]$.
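To make the model concrete, the sketch below (ours; all parameter values are invented for illustration) simulates the harvested dynamics and evaluates the discounted objective $J$ over the harvesting periods.

```python
import numpy as np

def simulate(x0, r, a, u, delta, d):
    """Harvested Lotka-Volterra dynamics
    x_i(t+1) = x_i(t) + [r_i - sum_j a_ij x_j(t)] x_i(t) - u_i(t)
    and the bioeconomic objective J = sum_t e^{-delta t} sum_i d_i u_i(t),
    summed here over the harvesting periods t = 0..T-1."""
    T = u.shape[1]
    x = np.zeros((3, T + 1))
    x[:, 0] = x0
    for t in range(T):
        growth = (r - a @ x[:, t]) * x[:, t]
        x[:, t + 1] = x[:, t] + growth - u[:, t]
    J = sum(np.exp(-delta * t) * (d @ u[:, t]) for t in range(T))
    return x, J

# Illustrative data only (not from the paper):
r = np.array([0.8, 0.6, 0.5])                  # net growth rates
K = np.array([100.0, 80.0, 60.0])              # carrying capacities
c = np.array([[1.0, 0.3, 0.2],                 # overlap coefficients c_ij
              [0.4, 1.0, 0.3],                 # (note c_ii = 1 by construction)
              [0.2, 0.5, 1.0]])
a = (r / K)[:, None] * c                       # a_ij = (r_i / K_i) c_ij
u = np.full((3, 20), 1.0)                      # constant harvesting rates
x, J = simulate(np.array([50.0, 40.0, 30.0]), r, a, u,
                delta=0.05, d=np.array([3.0, 2.0, 1.5]))
print(x[:, -1], J)
```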
REFERENCES

Abadie, J. (1970). Application of the GRG method to optimal control problems. In J. Abadie (Ed.), Integer and Nonlinear Programming, North-Holland, Amsterdam. pp. 191-211.

Abadie, J. (1975). Méthode du gradient réduit généralisé. Note EDF HI 1756/00, Clamart, France.

Abadie, J., and M. Bichara (1973). Résolution numérique de certains problèmes de commande optimale. Rev. Franç. d'Autom., Inform. et Rech. Opér., 7, 77-105.

Abadie, J., and J. Carpentier (1966). Généralisation de la méthode du gradient réduit de Wolfe au cas de contraintes non linéaires. In D.B. Hertz and J. Mélèse (Eds.), Proceedings of the 4th IFORS Conference, Wiley, New York. pp. 1041-1053.

Bertsekas, D. (1982a). Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York.

Bertsekas, D. (1982b). Projected Newton methods for optimization problems with simple constraints. SIAM J. Contr. & Opt., 20, 221-246.

Bonnans, J.F. (1983). A variant of a projected variable metric method for bound constrained optimization problems. RR 242, INRIA, Rocquencourt, France.

Bonnans, J.F. (1987). Méthodes à métrique variable et programmation quadratique successive. Note EDF-INRIA, France.

Buckley, A. (1978). A combined conjugate gradient quasi-Newton minimization algorithm. Math. Progr., 15, 200-210.

Buckley, A., and A. LeNir (1983). QN-like variable storage conjugate gradients. Math. Progr., 27, 155-175.

Conn, A.R., N.I.M. Gould, and P.L. Toint (1988). Global convergence of a class of trust region algorithms for optimization with simple bounds. SIAM J. Num. An., 25, 433-460.

Drud, A. (1985). CONOPT: a GRG code for large sparse dynamic nonlinear optimization problems. Math. Progr., 31, 153-191.

Facó, J.L.D. (1977). Commande optimale des systèmes dynamiques non-linéaires à retards avec contraintes d'inégalités sur l'état et la commande: une spécialisation de la méthode GRG. Dr.-Ing. thesis, Université Pierre et Marie Curie, Paris, France.

Facó, J.L.D. (1979). Application of the GRECO algorithm to the optimal generation scheduling for electric power systems. X Int. Symp. on Math. Progr., Montreal, Canada.

Facó, J.L.D. (1980). Optimization of a dynamical planning model by the GRG method. V Symp. über OR, Köln, Germany.

Facó, J.L.D. (1983). Optimization of a dynamical ecosystem by Nonlinear Programming. 11th IFIP Conf. Syst. Mod. & Opt., Copenhagen, Denmark.

Facó, J.L.D. (1988). Mathematical Programming solutions for fishery management. In G. Mitra (Ed.), Mathematical Models for Decision Support, NATO ASI Series, Vol. F48, Springer-Verlag, Heidelberg. pp. 197-205.

Gilbert, J.C., and C. Lemaréchal (1988). Some numerical experiments with variable-storage quasi-Newton algorithms. IIASA Working Paper WP-88-121, Laxenburg, Austria.

Lemaréchal, C. (1981). A view of line searches. In A. Auslender, W. Oettli and J. Stoer (Eds.), Optimization and Optimal Control, Lecture Notes in Control and Information Sciences 30, Springer-Verlag, Heidelberg. pp. 59-78.

Lenard, M. (1979). A computational study of active set strategies in nonlinear programming with linear constraints. Math. Progr., 16, 81-97.

Liu, D.C., and J. Nocedal (1988). On the limited memory BFGS method for large scale optimization. Tech. Rep. NAM 03, Northwestern Univ., Evanston, USA.
Mantell, J.B., and L.S. Lasdon (1978). A GRG algorithm for econometric control problems. Ann. Econ. & Soc. Meas., 6, 581-597.

Mehra, R.K., and R.E. Davies (1972). A generalized gradient method for optimal control problems with inequality constraints and singular arcs. IEEE Trans. on Autom. Control, 17.

Moré, J. (1982). Recent developments in algorithms and software for trust region methods. In A. Bachem, M. Grötschel and B. Korte (Eds.), Mathematical Programming: The State of the Art, Springer-Verlag, Berlin. pp. 258-287.

Murtagh, B.A., and M.A. Saunders (1978). Large-scale linearly constrained optimization. Math. Progr., 14, 41-72.

Nazareth, L. (1979). A relationship between the BFGS and conjugate gradient algorithms and its implications for new algorithms. SIAM J. Num. An., 16, 794-800.

Nocedal, J. (1980). Updating quasi-Newton matrices with limited storage. Math. of Comp., 35, 773-782.

Rosen, J.B. (1960). The gradient projection method for nonlinear programming, part I: linear constraints. SIAM J., 8, 181-217.

Shanno, D.F. (1978). Conjugate gradient methods with inexact searches. Math. of OR, 3, 244-256.

Shanno, D.F., and R.E. Marsten (1982). Conjugate gradient methods for linearly constrained nonlinear programming. Math. Progr. Study, 16, 149-161.

Shanno, D.F., and K.H. Phua (1978). Matrix conditioning and nonlinear optimization. Math. Progr., 14, 149-160.

Thomaz, A.C.F. (1986). Otimização de Sistemas Dinâmicos Não-lineares com Aplicação a Políticas de Pesca. D.Sc. thesis, Universidade Federal do Rio de Janeiro.

Thomaz, A.C.F., F.J.N. Gomes, and J.L.D. Facó (1984). Dynamic ecosystem with optimal strategy for catch of tunas and tuna-like fishes in southeastern coast of Brazil. IFORS-84, Washington, D.C.

Wolfe, P. (1969). Convergence conditions for ascent methods. SIAM Review, 11, 226-235.

Wolfe, P. (1963). Methods of nonlinear programming. In R.L. Graves and P. Wolfe (Eds.), Recent Advances in Mathematical Programming, McGraw-Hill, New York. pp. 67-86.
