
© Birkhäuser Verlag Basel

Hamilton-Jacobi-Bellman equations and optimal control

Italo Capuzzo Dolcetta∗

1. Introduction

The aim of this paper is to offer a quick overview of some applications of the theory of viscosity
solutions of Hamilton-Jacobi-Bellman equations connected to nonlinear optimal control problems.
The central role played by value functions and Hamilton-Jacobi equations in the
Calculus of Variations was recognized as early as C. Caratheodory's work; see [1] for a survey.
Similar ideas reappeared under the name of Dynamic Programming in the work of R. Bellman and
his school and became a standard tool for the synthesis of feedback controls for discrete time systems.
However, the lack of smoothness of value functions, even in simple problems, was recognized as a
severe restriction on the range of applicability of Hamilton-Jacobi-Bellman theory (in short, HJB from
now on) to continuous time processes.
The main reasons for this limitation are twofold:
(i) the basic difficulty of giving an appropriate global meaning to the HJB equation (a fully
nonlinear partial differential equation), which the value function satisfies at all points of
differentiability;
(ii) the difficulty of identifying the value function as the unique solution of that equation. A related
important issue is the stability of value functions, especially in connection with the approximation
procedures required for computational purposes.
Several non-classical notions of solution have therefore been proposed to overcome these difficulties.
Let us mention, in this respect, the Kruzkov theory, which applies, in the case of sufficiently smooth
Hamiltonians, to semiconvex functions satisfying the HJB equation almost everywhere (see [2, 3] and
also [4, 5] for recent results on semiconcavity of value functions).
Only in the '80s, however, did a decisive impulse toward a satisfactory mathematical framework
for Dynamic Programming come from the introduction by Crandall and Lions [6] of the notion of
viscosity solution of Hamilton-Jacobi equations.

The presentation here, which is mainly based on material contained in the forthcoming book [7],
to which we refer for detailed proofs, focuses on optimization problems for controlled ordinary
differential equations and discrete time systems.

The content of the paper is as follows:


2 Some examples of HJB equations in optimal control
3 Some basic facts about viscosity solutions
4 Necessary and sufficient conditions
5 Approximate synthesis of feedbacks
6 Final remarks

∗ Dipartimento di Matematica, Università di Roma "La Sapienza", Piazzale A. Moro 2 - I 00185 Roma, Italy;
e-mail: [email protected]. Text of a lecture given at the 12th Conference on Variational Calculus,
Optimal Control and Applications, Trassenheide, September 23-27, 1996. Thanks are due to Istituto per le
Applicazioni del Calcolo C.N.R. Roma for financial support.

2. Some examples of HJB equations in optimal control

Let us consider the control system, whose solution will be denoted by y_x^α,
\[ \dot y(t) = f(y(t), \alpha(t)) \quad (t > 0), \qquad y(0) = x. \tag{2.1} \]
In (2.1), f : ℝ^N × A → ℝ^N, A is a topological space, and
\[ \alpha \in M(A) = \{\alpha : [0, +\infty) \to A,\ \alpha \text{ measurable}\}. \]

We assume:
(A₀) A is compact, f is continuous on ℝ^N × A;
(A₁) ∃ L_f ≥ 0 : |f(x,a) − f(y,a)| ≤ L_f |x − y|, ∀a ∈ A.

Here we list a few classical examples of optimal control problems and associated HJB equations
(complemented in some cases by initial or boundary conditions) for system (2.1).

2.1 The infinite horizon discounted regulator


For this problem the value function is
\[ v(x) = \inf_{\alpha \in M(A)} \int_0^{+\infty} l(y_x^\alpha(t), \alpha(t))\, e^{-\lambda t}\, dt, \tag{2.2} \]
where the running cost l is a real valued function on ℝ^N × A and the discount factor λ is a positive
number.
The corresponding HJB equation is
\[ \lambda v(x) + \sup_{a \in A}\big[ -f(x,a) \cdot Dv(x) - l(x,a) \big] = 0 \quad \text{in } \mathbb{R}^N. \tag{2.3} \]
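To fix ideas, here is a one-dimensional illustration of ours (the data below are not taken from the paper): let N = 1, A = [−1, 1], f(x,a) = a and l(x,a) = |x|. Since sup_{a∈[−1,1]}(−ap) = |p|, equation (2.3) becomes the eikonal-type equation
\[ \lambda v(x) + |v'(x)| - |x| = 0 \quad \text{in } \mathbb{R}, \]
for which one expects the optimal control to steer the state toward the origin, where the running cost vanishes.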

2.2 The Mayer problem


In this case the value function is
\[ v(x, t) = \inf_{\alpha \in M(A)} g\big(y_x^\alpha(t)\big), \]
where the terminal cost g is a given real valued function on ℝ^N.
The HJB equation takes now the form of a Cauchy problem, namely:
\[ \frac{\partial v}{\partial t} + \sup_{a \in A}\big[ -f(x,a) \cdot D_x v(x,t) \big] = 0 \quad \text{in } \mathbb{R}^N \times (0, +\infty), \]
\[ v(x, 0) = g(x). \]
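As a simple illustration (our choice of data, not from the paper): take f(x,a) = a and A the closed unit ball of ℝ^N. Then sup_{|a|≤1}(−a·p) = |p|, the trajectories of (2.1) starting at x reach exactly the ball of radius t around x by time t, and the Cauchy problem reduces to
\[ \frac{\partial v}{\partial t} + |D_x v| = 0, \qquad v(x, 0) = g(x), \]
whose value function is v(x,t) = inf_{|y−x| ≤ t} g(y).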

2.3 Exit time problems


In these problems a subset 𝒯 of ℝ^N (the target) is given. If t_x^α denotes the first time the
trajectory y_x^α of system (2.1) hits the target, we consider the value function
\[ v(x) = \inf_{\alpha \in M(A)} J(x, \alpha). \]
Here we set:
\[ J(x, \alpha) = \int_0^{t_x^\alpha} l(y_x^\alpha(t), \alpha(t))\, e^{-\lambda t}\, dt + e^{-\lambda t_x^\alpha}\, g\big(y_x^\alpha(t_x^\alpha)\big), \quad \text{if } t_x^\alpha < +\infty, \]
and
\[ J(x, \alpha) = \int_0^{+\infty} l(y_x^\alpha(t), \alpha(t))\, e^{-\lambda t}\, dt, \quad \text{otherwise}. \]
The corresponding HJB equation is now the Dirichlet problem:
\[ \lambda v(x) + \sup_{a \in A}\big[ -f(x,a) \cdot Dv(x) - l(x,a) \big] = 0 \quad \text{in } \mathbb{R}^N \setminus \mathcal{T}, \]
\[ v(x) = g(x) \quad \text{if } x \in \partial\mathcal{T}. \]
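A classical special case, recalled here for orientation (it is not spelled out in the text): taking l ≡ 1, g ≡ 0 and λ = 1 gives v(x) = 1 − e^{−T(x)}, where T(x) = inf_{α∈M(A)} t_x^α is the minimum time function, and the Dirichlet problem becomes
\[ v(x) + \sup_{a \in A}\big[ -f(x,a) \cdot Dv(x) \big] = 1 \quad \text{in } \mathbb{R}^N \setminus \mathcal{T}, \qquad v = 0 \ \text{on } \partial\mathcal{T}, \]
the so-called Kruzkov transform of the minimum time problem.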

2.4 State-constrained problems


For this kind of problem the value function is
\[ v(x) = \inf_{\alpha \in M_x(A)} \int_0^{+\infty} l(y_x^\alpha(t), \alpha(t))\, e^{-\lambda t}\, dt. \]
Here, the feasible controls are taken in the set
\[ M_x(A) = \{\alpha \in M(A) : y_x^\alpha(t) \in \Omega\ \ \forall t \geq 0\}. \]
The set Ω plays here the role of a constraint on the states of system (2.1).
For the present problem the HJB equation (2.3) is complemented with a quite unusual boundary
condition, namely
\[ \lambda v(x) + \sup_{a \in A}\big[ -f(x,a) \cdot Dv(x) - l(x,a) \big] \geq 0 \quad \text{on } \partial\Omega. \]

2.5 The monotone control problem


In this example the control set A is the interval [0, 1] and the value function
v : ℝ^N × [0, 1] → ℝ is
\[ v(x, a) = \inf_{\alpha \in M_a^m(A)} \int_0^{+\infty} l(y_x^\alpha(t), \alpha(t))\, e^{-\lambda t}\, dt. \]
Here we set:
\[ M_a^m(A) = \{\alpha \in M(A) : \alpha \text{ nondecreasing},\ \alpha(0) \geq a\}. \]
The HJB equation for this problem takes the form of the evolutionary variational inequality:
\[ \max\Big[ \lambda v(x,a) - f(x,a) \cdot D_x v(x,a) - l(x,a)\,;\ -\frac{\partial v}{\partial a} \Big] = 0 \quad \text{in } \mathbb{R}^N \times [0, 1), \]
\[ v(x, 1) = \int_0^{+\infty} l(y_x^1(t), 1)\, e^{-\lambda t}\, dt \quad \text{in } \mathbb{R}^N. \]

3. Some basic facts about viscosity solutions

It is well-known that the value functions of the various optimal control problems described above
satisfy the corresponding HJB equations at all points of differentiability. This fact can be proved by
means of the Dynamic Programming Principle, a functional equation relating the value of v at the
initial point x to its value at some point reached later by a trajectory of system (2.1).
Let us just indicate that, in the simplest case of the infinite horizon discounted regulator problem, the
Dynamic Programming Principle is expressed by the identity
\[ v(x) = \inf_{\alpha \in M(A)} \left\{ \int_0^T l(y_x^\alpha(t), \alpha(t))\, e^{-\lambda t}\, dt + e^{-\lambda T}\, v\big(y_x^\alpha(T)\big) \right\}, \tag{3.4} \]
which holds for all x ∈ ℝ^N and T > 0.


It is well-known as well that everywhere differentiability of v cannot hold in general. Indeed, simple
examples show that two different optimal trajectories may exist for some initial point x, implying
non-differentiability of v at x.
A concept of solution which allows one to understand HJB equations globally, in a weak sense, is
provided by the notion of viscosity solution.
The definition is based on the notion of first order semidifferentials of a function u : Ω → ℝ, Ω
being an open subset of ℝ^N.

These are the convex sets
\[ D^+ u(x) = \left\{ p \in \mathbb{R}^N : \limsup_{\Omega \ni y \to x} \frac{u(y) - u(x) - p \cdot (y - x)}{|x - y|} \leq 0 \right\}, \]
\[ D^- u(x) = \left\{ p \in \mathbb{R}^N : \liminf_{\Omega \ni y \to x} \frac{u(y) - u(x) - p \cdot (y - x)}{|x - y|} \geq 0 \right\}. \]
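As a simple illustration (our example, for concreteness): let u(x) = |x| on Ω = ℝ. At every x ≠ 0 both sets reduce to {u'(x)}, while at the corner
\[ D^+ u(0) = \emptyset, \qquad D^- u(0) = [-1, 1], \]
since |y| − p·y ≥ 0 for all y exactly when |p| ≤ 1, whereas no p satisfies |y| − p·y ≤ o(|y|). Recall also that D⁺u(x) and D⁻u(x) are simultaneously nonempty if and only if u is differentiable at x, in which case both reduce to {Du(x)}.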
Consider the partial differential equation
\[ F(x, u(x), Du(x)) = 0, \quad x \in \Omega, \tag{3.5} \]
where F : Ω × ℝ × ℝ^N → ℝ is a continuous function.
Definition 1 Let u : Ω → ℝ. Then:
(i) u is a viscosity subsolution of (3.5) if u is upper semicontinuous on Ω and
\[ F(x, u(x), p) \leq 0 \quad \forall p \in D^+ u(x),\ \forall x \in \Omega; \]
(ii) u is a viscosity supersolution of (3.5) if u is lower semicontinuous on Ω and
\[ F(x, u(x), p) \geq 0 \quad \forall p \in D^- u(x),\ \forall x \in \Omega; \]
(iii) u is a viscosity solution of (3.5) if u satisfies both (i) and (ii).


Here we list some facts and remarks about this definition:
- if u is continuous on Ω, then the sets A± = {x ∈ Ω : D±u(x) ≠ ∅} are dense in Ω;
- if u is a viscosity solution of (3.5), then F(x, u(x), Du(x)) = 0 at any x where u is differentiable;
- if u is Lipschitz continuous and satisfies (3.5) in the viscosity sense, then
F(x, u(x), Du(x)) = 0 almost everywhere;
- if u ∈ C¹(Ω) satisfies (3.5) at all points, then u is a viscosity solution of (3.5); on the other hand, if
u ∈ C¹(Ω) is a viscosity solution of (3.5), then u is a classical solution of (3.5);
- the equations (3.5) and −F(x, u(x), Du(x)) = 0 are not equivalent in the viscosity sense;
- a Lipschitz continuous function u is a solution of equation (3.5) in the extended sense if
\[ \sup_{p \in \partial u(x)} F(x, u(x), p) = 0, \]
where ∂u is Clarke's generalized gradient of u; if u is a solution in the extended sense, then u is both
a viscosity subsolution of (3.5) and a viscosity supersolution of
−F(x, u(x), Du(x)) = 0;
on the other hand, a Lipschitz continuous viscosity solution u of (3.5) is a solution in the extended
sense as well.
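A standard example may help here (added for illustration): on Ω = (−1, 1) consider the eikonal equation |u'(x)| − 1 = 0. The function u(x) = 1 − |x| is a viscosity solution: away from 0 it solves the equation classically, while at the corner
\[ D^+ u(0) = [-1, 1], \qquad D^- u(0) = \emptyset, \]
so (i) holds there (|p| − 1 ≤ 0 on [−1,1]) and (ii) holds vacuously. By contrast, w(x) = |x| − 1 satisfies the equation at every x ≠ 0 but is not a viscosity supersolution, since p = 0 belongs to D⁻w(0) = [−1, 1] and |0| − 1 ≥ 0 fails; w is instead a viscosity solution of −(|w'(x)| − 1) = 0, which illustrates the remark that (3.5) and −F = 0 are not equivalent in the viscosity sense.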
Although the notion of viscosity solution is a weak one, good comparison, uniqueness and stability
properties hold. Sample results in these directions are the following ones, where, for simplicity, we take
F(x, r, p) = r + H(x, p).
Theorem 1 Assume that H is continuous and satisfies
\[ |H(x,p) - H(y,p)| \leq \omega\big(|x - y|\,(1 + |p|)\big) \]
and
\[ |H(x,p) - H(x,q)| \leq \omega(|p - q|) \]
for all x, y ∈ ℝ^N and p, q ∈ ℝ^N, where ω : ℝ⁺ → ℝ⁺ is continuous, nondecreasing, ω(0) = 0.

If u, w are bounded continuous viscosity solutions of

\[ u(x) + H(x, Du(x)) = 0 \quad \text{in } \mathbb{R}^N, \]
then
\[ u \equiv w \quad \text{in } \mathbb{R}^N. \]

Let us just sketch the proof of the inequality u ≤ w, which uses only that u is a subsolution and w a
supersolution; the argument to prove the reverse inequality is completely similar. Assume, by
contradiction, the existence of some x₀ such that:
\[ u(x_0) - w(x_0) =: \delta > 0. \]
Define then
\[ \Phi(x, y) = u(x) - w(y) - \frac{|x - y|^2}{2\varepsilon} - \beta\,\big(g(x) + g(y)\big), \]
where ε > 0 and
\[ g(x) = \frac{1}{2} \log(1 + |x|^2). \]
The parameter β is chosen so as to satisfy
\[ \beta \leq \frac{\delta}{4\, g(x_0)}, \qquad \omega(2\beta) \leq \frac{\delta}{6}. \]
The above choices yield:
\[ \sup_{\mathbb{R}^N \times \mathbb{R}^N} \Phi \geq \Phi(x_0, x_0) \geq \frac{\delta}{2}. \tag{3.6} \]
By the assumptions made on u, w it is not hard to prove
the existence of (x_ε, y_ε) such that
\[ \Phi(x_\varepsilon, y_\varepsilon) = \sup_{\mathbb{R}^N \times \mathbb{R}^N} \Phi \]
and that the (x_ε, y_ε) remain uniformly bounded with respect to ε. The key point is to observe that
\[ p_\varepsilon := \frac{x_\varepsilon - y_\varepsilon}{\varepsilon} + \beta\, Dg(x_\varepsilon) \in D^+ u(x_\varepsilon), \]
and
\[ q_\varepsilon := \frac{x_\varepsilon - y_\varepsilon}{\varepsilon} - \beta\, Dg(y_\varepsilon) \in D^- w(y_\varepsilon). \]
By definition of viscosity sub- and supersolution, then
\[ u(x_\varepsilon) + H(x_\varepsilon, p_\varepsilon) \leq 0 \leq w(y_\varepsilon) + H(y_\varepsilon, q_\varepsilon). \]
By the assumptions on H this implies:
\[ u(x_\varepsilon) - w(y_\varepsilon) \leq \omega(|p_\varepsilon - q_\varepsilon|) + \omega\big(|x_\varepsilon - y_\varepsilon|(1 + |p_\varepsilon|)\big). \]
Therefore, by the choice of β and the fact that |Dg| ≤ 1,
\[ \Phi(x_\varepsilon, y_\varepsilon) \leq \frac{\delta}{6} + \omega\Big( \varepsilon^{-1}|x_\varepsilon - y_\varepsilon|^2 + \big(1 + \tfrac{\delta}{4 g(x_0)}\big)|x_\varepsilon - y_\varepsilon| \Big). \tag{3.7} \]
Observe now that the inequality
\[ \Phi(x_\varepsilon, x_\varepsilon) + \Phi(y_\varepsilon, y_\varepsilon) \leq 2\,\Phi(x_\varepsilon, y_\varepsilon) \]
yields, since u and w are bounded, the following estimate
\[ |x_\varepsilon - y_\varepsilon| \leq (\varepsilon C)^{1/2} \quad \text{for some } C > 0, \]
and, consequently,
\[ \frac{|x_\varepsilon - y_\varepsilon|^2}{\varepsilon} \to 0 \quad \text{as } \varepsilon \to 0. \]
At this point it is easy to realize that inequality (3.7) contradicts (3.6): as ε → 0 the argument of ω
in (3.7) tends to 0, so that Φ(x_ε, y_ε) ≤ δ/6 + o(1), while (3.6) forces Φ(x_ε, y_ε) ≥ δ/2.
This concludes the proof that u ≤ w.
As for stability we have:

Theorem 2 Assume that H_n is continuous on Ω × ℝ^N for each n = 1, 2, . . . and that
\[ u_n(x) + H_n(x, Du_n(x)) = 0 \quad \text{in } \Omega, \text{ in the viscosity sense.} \]
Assume also that, as n → +∞,
\[ u_n \to u, \quad H_n \to H \quad \text{locally uniformly in } \Omega \text{ and in } \Omega \times \mathbb{R}^N, \text{ respectively.} \]
Then,
\[ u(x) + H(x, Du(x)) = 0 \quad \text{in } \Omega, \text{ in the viscosity sense.} \]

The uniform convergence of u_n to u guarantees that for any x ∈ Ω and p ∈ D⁺u(x) there exist
x_n ∈ Ω and p_n ∈ D⁺u_n(x_n) such that
\[ x_n \to x, \qquad p_n \to p. \]
From this fact it follows easily that u is a subsolution of the limit equation.
A completely similar argument, with D⁺ replaced by D⁻, shows that u is a supersolution as well.
The theory outlined up to now does not depend on the convexity of the map
\[ p \mapsto H(x, p). \]
When this property holds (as is typical of the Hamiltonians occurring in optimal control
problems), some special results are valid. Let us just mention the non obvious fact that in this
case a function u is a viscosity solution of equation (3.5) if and only if
\[ F(x, u(x), p) = 0 \quad \forall p \in D^- u(x) \]
(see [14, 15]).

4. Necessary and sufficient conditions

The value functions in the examples 2.1, 2.2, 2.5 of Section 2 are continuous under the assumptions
(A₀), (A₁) plus some uniform continuity conditions on the costs l, g. Continuity of v in problems 2.3
and 2.4 is guaranteed under an additional restriction involving the behaviour of the dynamics f on
the boundary of Ω or of 𝒯. For problem 2.4 this condition is
\[ \inf_{a \in A} f(x,a) \cdot n(x) < 0 \quad \forall x \in \partial\Omega, \]
where n(x) denotes the outward normal to ∂Ω at x; see [16, 17].


The link between the optimal control problem and the HJB equation is provided by the Dynamic
Programming Principle. In all the examples presented the value function turns out to be the viscosity
solution of the corresponding HJB equation.
Let us be more specific on this point with reference to Example 2.1; similar results hold, however, for
the various examples shown in Section 2.
For the infinite horizon discounted regulator problem the Dynamic Programming Principle is expressed
by the identity
\[ v(x) = \inf_{\alpha \in M(A)} \left\{ \int_0^T l(y_x^\alpha(t), \alpha(t))\, e^{-\lambda t}\, dt + e^{-\lambda T}\, v\big(y_x^\alpha(T)\big) \right\}, \tag{4.8} \]
which holds for all x ∈ ℝ^N and T > 0.
We have:

Theorem 3 Assume (A₀), (A₁); assume also that l is continuous and bounded on ℝ^N × A and
(A₂) ∃ L_l ≥ 0 : |l(x,a) − l(y,a)| ≤ L_l |x − y|, ∀a ∈ A.
Then the value function v of the infinite horizon discounted regulator problem is a bounded, continuous
viscosity solution of (2.3).
Moreover, v is the unique viscosity solution of (2.3) in the class of bounded, continuous functions on
ℝ^N.

For the proof, note that the second statement follows from the first and the uniqueness Theorem 1.
Let us just indicate how to prove that v is a viscosity solution of (2.3). Observe that (4.8) with
α(t) ≡ a ∈ A yields
\[ \frac{v(y_x^a(T)) - v(x)}{T} \geq -\frac{1}{T} \left[ \int_0^T l(y_x^a(t), a)\, e^{-\lambda t}\, dt + \big(e^{-\lambda T} - 1\big)\, v\big(y_x^a(T)\big) \right] \tag{4.9} \]
for all T > 0.


On the other hand, p ∈ D⁺v(x) implies
\[ \frac{v(y_x^a(T)) - v(x)}{T} \leq \frac{1}{T}\, p \cdot \big(y_x^a(T) - x\big) + \frac{o(T)}{T}, \quad \text{as } T \to 0^+. \]
Hence, taking (2.1) into account,
\[ \frac{v(y_x^a(T)) - v(x)}{T} \leq p \cdot \frac{1}{T} \int_0^T f(y_x^a(t), a)\, dt + \frac{o(T)}{T}, \quad \text{as } T \to 0^+, \]
for all p ∈ D⁺v(x).


This and (4.9) imply, by continuity of f, l and v, that
\[ p \cdot f(x,a) \geq -l(x,a) + \lambda v(x) \]
for all p ∈ D⁺v(x) and a ∈ A.

This shows that v is a viscosity subsolution of (2.3); a similar, but slightly more difficult, argument
shows that v is a supersolution as well.
From the above result, which characterizes the value function of the infinite horizon problem
in terms of the Hamilton-Jacobi-Bellman equation (2.3), one can deduce necessary and sufficient
conditions of optimality. Under the assumptions of Theorem 3 we get the following weak formulation
of the Pontryagin Maximum Principle.
Theorem 4 Assume α* = α*(x) is an optimal control, corresponding to the initial position x, for the
infinite horizon discounted regulator, i.e.
\[ v(x) = \int_0^{+\infty} l\big(y_x^{\alpha^*}(t), \alpha^*(t)\big)\, e^{-\lambda t}\, dt. \]
Then the following hold for all p ∈ D⁺v(y_x^{α*}(t)) ∪ D⁻v(y_x^{α*}(t)):
\[ p \cdot \dot y_x^{\alpha^*}(t) + l\big(y_x^{\alpha^*}(t), \alpha^*(t)\big) = \lambda v\big(y_x^{\alpha^*}(t)\big) \quad \text{a.e. } t > 0, \]
\[ p \cdot f\big(y_x^{\alpha^*}(t), \alpha^*(t)\big) + l\big(y_x^{\alpha^*}(t), \alpha^*(t)\big) = \min_{a \in A}\big[\, p \cdot f\big(y_x^{\alpha^*}(t), a\big) + l\big(y_x^{\alpha^*}(t), a\big) \big]. \]

As for sufficient conditions we have:



Theorem 5 Under the assumptions of Theorem 3 and the condition
\[ D^+ v(x) \cup D^- v(x) \neq \emptyset \quad \text{for all } x \in \mathbb{R}^N, \]
if α* and x satisfy
\[ p \cdot f\big(y_x^{\alpha^*}(t), \alpha^*(t)\big) + l\big(y_x^{\alpha^*}(t), \alpha^*(t)\big) = \lambda v\big(y_x^{\alpha^*}(t)\big) \quad \text{a.e. } t > 0 \]
for all p ∈ D⁺v(y_x^{α*}(t)) ∪ D⁻v(y_x^{α*}(t)), then α* is optimal for the initial position x.

Note that the condition D⁺v(x) ∪ D⁻v(x) ≠ ∅ is rather restrictive; it is fulfilled, for example, when v is
semiconcave, i.e.
\[ v(x+z) - 2v(x) + v(x-z) \leq C|z|^2 \]
for some C and all x, z in ℝ^N.
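For instance (our example): v(x) = −|x| is semiconcave with C = 0, since 2|x| ≤ |x+z| + |x−z| by the triangle inequality, and at the corner
\[ D^+ v(0) = [-1, 1] \neq \emptyset, \qquad D^- v(0) = \emptyset, \]
so the condition D⁺v(x) ∪ D⁻v(x) ≠ ∅ holds even at points where v is not differentiable.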

5. Approximate synthesis

Consider again the infinite horizon problem 2.1, the associated HJB equation
\[ \lambda v(x) + \sup_{a \in A}\big[ -f(x,a) \cdot Dv(x) - l(x,a) \big] = 0 \quad \text{in } \mathbb{R}^N, \]
and set, for h > 0:
\[ u_h(x) + \sup_{a \in A}\big[ -(1 - \lambda h)\, u_h(x + h f(x,a)) - h\, l(x,a) \big] = 0 \quad \text{in } \mathbb{R}^N. \tag{5.10} \]

Under the assumptions of Theorem 3, the Contraction Mapping Principle applies to show that for each
h ∈ (0, 1/λ) there exists a unique bounded, continuous solution u_h of the above functional equation.
The functions u_h can be interpreted as value functions of a discrete time version of the infinite
horizon problem. To this purpose, let us define
\[ M_h(A) = \{\alpha \in M(A) : \alpha(t) \equiv \text{constant on } [kh, (k+1)h) \text{ for each } k\} \]

and, for α ∈ M_h(A), the discrete time control system
\[ y_h(x, k+1) = y_h(x, k) + h\, f\big(y_h(x, k), \alpha(kh)\big), \qquad y_h(x, 0) = x, \tag{5.11} \]
where k = 0, 1, . . . .
Define then a feedback law a_h^* : ℝ^N → A by selecting
\[ a_h^*(x) \in \operatorname{argmax}_{a \in A}\big[ -(1 - \lambda h)\, u_h(x + h f(x,a)) - h\, l(x,a) \big], \]
where u_h is the solution of equation (5.10). Consider now the control α_h^* ∈ M_h(A) given by
\[ \alpha_h^*(t) = a_h^*\big(y_h^*(x, [t/h])\big), \]
where y_h^* is the trajectory obtained from (5.11) by means of the feedback a_h^*.


It is not hard to check then that the solution u_h of (5.10) is given by
\[ u_h(x) = h \sum_{k=0}^{+\infty} (1 - \lambda h)^k\, l\big(y_h^*(x, k), \alpha_h^*(kh)\big), \]
and also that
\[ u_h(x) = \inf_{\alpha \in M_h(A)}\, h \sum_{k=0}^{+\infty} (1 - \lambda h)^k\, l\big(y_h(x, k), \alpha(kh)\big). \]
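As an illustration of how (5.10) can be solved in practice, here is a minimal sketch of ours (not from the paper; the one-dimensional dynamics f(x,a) = a, the control grid A = {−1, 0, 1}, the cost l(x,a) = x², the computational box and the clamping at its boundary are all illustrative assumptions). It iterates the map u ↦ Tu, (Tu)(x) = min_{a∈A} [(1 − λh) u(x + h f(x,a)) + h l(x,a)], which is equivalent to (5.10), on a uniform grid with linear interpolation:

```python
import numpy as np

# Illustrative data (not from the paper): 1-D dynamics f(x, a) = a,
# control set A = {-1, 0, 1}, running cost l(x, a) = x^2.
def f(x, a):
    return a * np.ones_like(x)

def l(x, a):
    return x**2

A = [-1.0, 0.0, 1.0]
lam, h = 1.0, 0.05                  # discount rate and time step, h in (0, 1/lam)
grid = np.linspace(-2.0, 2.0, 401)  # computational box (an illustrative truncation)
u = np.zeros_like(grid)             # initial guess for u_h

for _ in range(5000):
    # (T u)(x) = min over a of (1 - lam*h) * u(x + h f(x,a)) + h * l(x,a)
    candidates = []
    for a in A:
        xn = np.clip(grid + h * f(grid, a), grid[0], grid[-1])  # clamp to the box
        candidates.append((1 - lam * h) * np.interp(xn, grid, u) + h * l(grid, a))
    u_new = np.minimum.reduce(candidates)
    if np.max(np.abs(u_new - u)) < 1e-12:  # T is a contraction of factor (1 - lam*h)
        u = u_new
        break
    u = u_new

# Approximate feedback a_h^*(x): a control realizing the minimum at each grid node.
a_star = np.array(A)[np.argmin(np.stack(candidates), axis=0)]
```

The grid interpolation and the clamping at the boundary of the box introduce further approximations on top of the time discretization; only the latter is estimated in the convergence result below.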

The next result states that uh converges to the value function v of the infinite horizon problem as
the time step h → 0+ .

Theorem 6 Under the assumptions of Theorem 3 we have:
\[ \sup_K |u_h(x) - v(x)| \longrightarrow 0 \quad \text{as } h \to 0^+ \]
for all compact sets K ⊂⊂ ℝ^N. Under the further conditions λ > 2L_f, f smooth and l semiconcave,
the estimate
\[ \sup_{\mathbb{R}^N} |u_h(x) - v(x)| \leq C\, h \]
holds for some constant C.

As a consequence of this convergence result it follows that any optimal pair (α_h^*, y_h^*) for the above
described discrete time problem converges weakly to an optimal relaxed pair (µ*, y*) for the original
problem (2.2); see [11, 12]. Theorem 6 is also the starting point for a numerical approach to the
computation of value functions and optimal feedbacks. We refer, for example, to [18, 19, 21, 22, 23].

6. Final remarks

In this paper we restricted our attention to the role played by viscosity solutions in optimal control
problems for systems governed by ordinary differential equations. Only a few examples have been
briefly described, but many more can be approached in a similar way; see [7] for impulse and switching
control problems, the minimum time problem and H∞ control.
Among the important topics we did not touch upon are discontinuous viscosity solutions and their
applications to control and game problems with discontinuous value functions (e.g. the classical Zermelo
navigation problem). Discontinuous viscosity solutions and the closely related weak limits technique
are relevant also in the analysis of some asymptotic problems occurring, for example, in connection
with ergodic systems (see [24]), large deviations (see [25]) or control of singularly perturbed systems
(see [26]).
Let us mention, finally, that the viscosity solutions approach is flexible enough to be applicable
to control problems for stochastic and distributed parameter systems as well as to differential games
(we refer for this purpose to [8, 9, 10, 13]).

REFERENCES

[1] PESCH H.J., BULIRSCH R. The Maximum Principle, Bellman's equation, and Caratheodory's
work. J. Optim. Theory Appl. 80 (1994).
[2] KRUZKOV S.N. The Cauchy problem in the large for certain nonlinear first order differential
equations. Sov. Math. Dokl. 1 (1960).
[3] KRUZKOV S.N. Generalized solutions of the Hamilton-Jacobi equations of eikonal type I.
Math. USSR Sbornik 27 (1975).
[4] CANNARSA P., SINESTRARI C. Convexity properties of the minimum time function. Calc.
Var. 3 (1995).
[5] SINESTRARI C. Semiconcavity of solutions of stationary Hamilton-Jacobi equations. Nonlinear
Analysis 24 (1995).
[6] CRANDALL M.G., LIONS P.L. Viscosity solutions of Hamilton-Jacobi equations.
Trans. Amer. Math. Soc. 277 (1983).
[7] BARDI M., CAPUZZO DOLCETTA I. Optimal control and viscosity solutions of Hamilton-
Jacobi-Bellman equations. To appear, Birkhäuser (1997).
[8] LIONS P.L. Optimal control of diffusion processes and Hamilton-Jacobi equations. Comm. Partial
Differential Equations 8 (1983).
[9] FLEMING W.H., SONER M.H. Controlled Markov processes and viscosity solutions. Springer
Verlag (1993).
[10] LI X., YONG J. Optimal control theory for infinite dimensional systems. Birkhäuser (1995).
[11] CAPUZZO DOLCETTA I. On a discrete approximation of the Hamilton-Jacobi equation of
dynamic programming. Appl. Math. Optim. 10 (1983).
[12] CAPUZZO DOLCETTA I., ISHII H. Approximate solutions of the Bellman equation of deter-
ministic control theory. Appl. Math. Optim. 11 (1984).
[13] BARDI M. Some applications of viscosity solutions to optimal control and differential games. In
I. Capuzzo Dolcetta, P.L. Lions (eds.), Viscosity Solutions and Applications. To appear in Lecture
Notes in Mathematics, Springer (1997).
[14] BARRON E.N., JENSEN R. Semicontinuous viscosity solutions of Hamilton-Jacobi equations
with convex Hamiltonians. Comm. Partial Differential Equations 15 (1990).
[15] BARLES G. Discontinuous viscosity solutions of first order Hamilton-Jacobi equations: a guided
visit. Nonlinear Analysis 20 (1993).
[16] SONER M.H. Optimal control problems with state-space constraints I-II. SIAM J. Control Optim.
24 (1986).
[17] CAPUZZO DOLCETTA I., LIONS P.L. Hamilton-Jacobi equations with state constraints. Trans.
Amer. Math. Soc. 318 (1990).
[18] GONZALEZ R., ROFMAN E. On deterministic control problems: an approximation procedure
for the optimal cost. SIAM J. Control Optim. 23 (1985).
[19] FALCONE M. A numerical approach to the infinite horizon problem of deterministic control
theory. Appl. Math. Optim. 15 (1987).
[20] LORETI P., TESSITORE M.E. Approximation and regularity results on constrained viscosity
solutions of Hamilton-Jacobi-Bellman equations. J. Math. Systems Estimation Control 4 (1994).
[21] ROUY A. Numerical approximation of viscosity solutions of Hamilton-Jacobi equations with
Neumann type boundary conditions. Math. Models Methods Appl. Sci. 2 (1992).
[22] FALCONE M., FERRETTI R. Discrete high-order schemes for viscosity solutions of Hamilton-
Jacobi equations. Numer. Math. 67 (1994).
[23] BARDI M., BOTTACIN S., FALCONE M. Convergence of discrete schemes for discontinuous
value functions of pursuit-evasion games. In G.J. Olsder (ed.), New Trends in Dynamic Games
and Applications. Birkhäuser (1995).
[24] ARISAWA M. Ergodic problem for the Hamilton-Jacobi-Bellman equation I and II. Cahiers du
CEREMADE (1995).
[25] BARLES G. Solutions de Viscosité des Equations de Hamilton-Jacobi. Mathématiques et
Applications, Vol. 17. Springer (1994).
[26] BARDI M., BAGAGIOLO F., CAPUZZO DOLCETTA I. A viscosity solutions approach to some
asymptotic problems in optimal control. In J.P. Zolesio (ed.), Proceedings of the Conference
"PDE's Methods in Control, Shape Optimization and Stochastic Modelling". M. Dekker (1996).
