
SECOND-ORDER HAMILTON-JACOBI EQUATIONS IN INFINITE DIMENSIONS*

PIERMARCO CANNARSA† AND GIUSEPPE DA PRATO‡

Abstract. Some second-order Hamilton-Jacobi equations connected to stochastic optimal control problems for infinite-dimensional systems driven by a white noise are studied. A direct method to prove existence and uniqueness of mild solutions is developed. Then this solution is identified as the value function of the related stochastic control problem, and a feedback formula for optimal controls is derived.

Key words. Hamilton-Jacobi equations, stochastic optimal control, dynamic programming, viscosity solutions, white noise, infinite dimensions

AMS(MOS) subject classifications. 49C10, 49A60, 93E20

1. Introduction. Second-order Hamilton-Jacobi equations in infinite dimensions have been studied by several authors in connection with the stochastic optimal control of distributed parameter systems; see [Lecture Notes in Mathematics, Vol. 1390, Springer-Verlag, Berlin, 1989] and the references quoted therein. Most of the works on this subject concern systems governed by stochastic partial differential equations driven by a Hilbert space-valued Wiener process. In this paper, we focus our attention on the case of stochastic systems that are driven by a white noise. For such problems fewer results are available in the literature.

In order to explain the context we have in mind, let X be a separable Hilbert space, and consider the problem of minimizing

(1.1)    J(t,x;z) = E\Big\{ \int_t^T \big[ g(y(s)) + \tfrac12 |z(s)|^2 \big]\, ds + \phi(y(T)) \Big\}

over all controls z \in M_W^2(t,T;X) satisfying |z(s)| \le R almost surely for all s \in [t,T]. Here \varepsilon, R, and T are given positive numbers and g, \phi: X \to R are bounded uniformly continuous functions. In (1.1), y is the mild solution of the stochastic differential equation

(1.2)    dy(s) = \big( Ay(s) + F(y(s)) + z(s) \big)\, ds + \sqrt{\varepsilon}\, dW(s), \qquad t \le s \le T, \qquad y(t) = x,

where W is a cylindrical Wiener process (or white noise) on a probability space (\Omega, \mathcal{F}, P). Moreover, M_W^2(t,T;X) denotes the space of the X-valued processes x(s) that are adapted to W and satisfy E \int_t^T |x(s)|^2\, ds < \infty.

As is well known, the dynamic programming approach to problem (1.1), (1.2) consists of studying the value function V, defined as

(1.3)    V(t,x) = \inf\big\{ J(t,x;z) : z \in M_W^2(t,T;X),\ |z(s)| \le R \text{ a.s. } \forall s \in [t,T] \big\}.

* Received by the editors December 4, 1989; accepted for publication (in revised form) April 25, 1990.
† Dipartimento di Matematica, Università di Pisa, Via F. Buonarroti 2, 56127 Pisa, Italy. This research was partially supported by the Italian National Project M.P.I. Equazioni Differenziali e Calcolo delle Variazioni.
‡ Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy. This research was partially supported by the Italian National Project M.P.I. Equazioni di Evoluzione e Applicazioni Fisico-Matematiche.


The function u(t,x) = V(T-t,x) is related to the Hamilton-Jacobi-Bellman equation

(1.4)    \frac{\partial u}{\partial t} = \frac{\varepsilon}{2}\,\mathrm{Tr}(u_{xx}) + \langle Ax + F(x), u_x\rangle - H(u_x) + g(x) \quad \text{in } ]0,T[\,\times X, \qquad u(0,x) = \phi(x),

where

(1.5)    H(p) = \begin{cases} \tfrac12 |p|^2 & \text{if } |p| \le R,\\[2pt] R|p| - \tfrac{R^2}{2} & \text{if } |p| \ge R. \end{cases}

The main goal of this paper is to develop a direct method of solution for equation (1.4). By "direct method" we mean a method that makes no use of the control theoretic interpretation of problem (1.4). More precisely, we will solve the above problem as an initial value problem for a semilinear parabolic equation. Then, after having identified the direct solution of (1.4) as the value function (1.3), we can transfer information from a partial differential equation context into a variational setting, by deriving a feedback formula for optimal controls.

We now explain the main ideas of our method. As in finite dimensions, we first consider the linear problem

(1.6)    \frac{\partial u}{\partial t} = \frac{\varepsilon}{2}\,\mathrm{Tr}(u_{xx}) + \langle Ax, u_x\rangle \quad \text{in } ]0,T[\,\times X, \qquad u(0,x) = \phi(x),

whose solution can be represented by the probabilistic formula

(1.7)    u(t,x) = E\,\phi\Big( e^{tA}x + \sqrt{\varepsilon}\int_0^t e^{(t-s)A}\, dW(s) \Big) =: (T_t\phi)(x).

Indeed, when A is self-adjoint, strictly negative and A^{-1} is nuclear, it is shown in [7] that (1.7) is the unique classical solution of problem (1.6). This result is recalled and improved in §3 of this paper, by proving the uniform convergence of some finite-dimensional approximations of (1.7).
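In coordinates this representation is quite explicit. Assuming, as in §2 below, that A has eigenvectors {e_k} with A e_k = -\alpha_k e_k and that W(t) = \sum_k e_k \beta_k(t), the argument of \phi in (1.7) decomposes into independent one-dimensional Ornstein-Uhlenbeck processes:

\Big\langle e^{tA}x + \sqrt{\varepsilon}\int_0^t e^{(t-s)A}\, dW(s),\; e_k \Big\rangle = e^{-\alpha_k t}(x,e_k) + \sqrt{\varepsilon}\int_0^t e^{-\alpha_k(t-s)}\, d\beta_k(s),

which is Gaussian with mean e^{-\alpha_k t}(x,e_k) and variance \varepsilon(1 - e^{-2\alpha_k t})/(2\alpha_k); this is essentially the computation behind Proposition 2.1 below.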

Then, we define a mild solution of (1.4) as a solution of the integral equation

(1.8)    u(t,\cdot) = T_t\phi + \int_0^t T_{t-s}\big( \langle F, u_x(s,\cdot)\rangle - H(u_x(s,\cdot)) + g \big)\, ds.

We solve (1.8) by fixed-point arguments in a space of functions which are C^1 in x for t > 0 and satisfy a suitable blow-up condition at zero (see §4).

In order to apply the above result to problem (1.1), (1.2) we have to identify the function u(T-t,x) as the value function V(t,x). This could be done by standard verification techniques, computing the Itô differential du(T-t, y(t)), if u were sufficiently smooth and the covariance of W had finite trace. To overcome this difficulty, we study a suitable finite-dimensional approximation of (1.4), for which the smoothness of the solution u_n is well known. We then apply the Itô formula to u_n and pass to the limit as n → ∞. To make this procedure rigorous, it is essential to show that u_n converges to u, uniformly on the bounded sets of [\tau, T] \times X for all 0 < \tau < T (see Theorem 4.5).


The techniques of this paper could be easily arranged to study the equation

(1.9)    \frac{\partial u}{\partial t} = \frac{\varepsilon}{2}\,\mathrm{Tr}(Q u_{xx}) + \langle Ax + F(x), u_x\rangle - H(u_x) + g(x) \quad \text{in } ]0,T[\,\times X, \qquad u(0,x) = \phi(x),

where Q is a self-adjoint positive nuclear operator in X. This problem is related to the optimal control of a system driven by a "genuine" Wiener process. Unlike (1.4), for which no other result seems available in the literature, equation (1.9) has been considered by several authors. In [1], problem (1.9) is studied with F = 0 and assuming g and \phi to be convex. In [9], the case of A = 0 is treated by the theory of abstract Wiener spaces. Note that, even though the equation considered in [9] looks like (1.4), it is indeed equivalent to (1.9). The general equation (1.9) is solved in [5] by using a probabilistic formula like (1.7). In this case, however, we do not get C^1 regularity, but only differentiability in some special directions related to Q.
We conclude this Introduction by recalling that several results on viscosity solutions are available today for Hamilton-Jacobi equations in infinite dimensions (see [4] for first-order equations). Second-order equations have been treated by Lions in [10]-[12]. In this theory, the existence of solutions to (1.9), for weakly continuous data, is usually obtained by variational methods based on the representation formula (1.3).

2. Preliminaries. Let X be a separable Hilbert space, with norm |·|. For any R > 0 we set

B_R = \{ x \in X : |x| \le R \}.

For any x, y \in X we denote by x \otimes y the operator defined by (x \otimes y) z = (y, z)\, x. Let Y be another Hilbert space. We denote by C_b(X, Y) the Banach space of all bounded uniformly continuous mappings \phi: X \to Y endowed with the sup norm \|\cdot\|_0. Likewise, C_b^h(X, Y), h = 0, 1, 2, \dots, endowed with the natural norm \|\cdot\|_h, is the set of all the mappings \phi: X \to Y which are h times Fréchet differentiable and such that the kth derivative \phi^{(k)} is uniformly continuous and bounded for all k \le h. Moreover, we set C_b^h(X, R) = C_b^h(X).
For any \phi \in C_b(X) we denote by \omega_\phi a continuity modulus of \phi, i.e., a continuous function \omega_\phi: [0,\infty[\, \to [0,\infty[ satisfying \omega_\phi(0) = 0 and such that |\phi(x) - \phi(y)| \le \omega_\phi(|x - y|) for all x, y \in X. It is well known that any function \phi \in C_b(X) possesses a concave continuity modulus.¹
Lip(X, Y) is the space of all Lipschitz continuous and bounded functions from X to Y, endowed with the norm

\|\phi\|_1 = \sup\Big\{ \frac{|\phi(x) - \phi(y)|}{|x - y|} : x, y \in X,\ x \ne y \Big\} + \|\phi\|_0.

Throughout the whole paper we fix a complete orthonormal system in X, denoted by \{e_k\}_{k \in N}. We define the projection \Pi_n of X onto the span of \{e_1, e_2, \dots, e_n\} as follows:

(2.1)    \Pi_n = \sum_{k=1}^{n} e_k \otimes e_k, \qquad n \in N.

¹ Indeed, set \omega(t) = \sup\{ |\phi(x) - \phi(y)| : |x - y| \le t \}. Then \omega is a nondecreasing subadditive continuity modulus for \phi, so we can check that the concave envelope of \omega has the required properties.


Now, let \{\alpha_k\} be a sequence of positive real numbers. Then there exists a unique self-adjoint operator A in X such that A e_k = -\alpha_k e_k. As is well known, A is densely defined and closed, with domain

D(A) = \Big\{ x \in X : \sum_{k=1}^{\infty} \alpha_k^2 (x, e_k)^2 < \infty \Big\}.

Moreover, since A is negative, A generates an analytic semigroup e^{tA} in X and

(2.2)    e^{tA} x = \sum_{k=1}^{\infty} e^{-t\alpha_k} (x, e_k)\, e_k

for all x \in X.

Consider now a complete probability space \{\Omega, \mathcal{F}, P\} and a sequence \{\beta_k\} of standard one-dimensional Brownian motions, mutually independent. We denote by W^n(t) the n-dimensional Brownian motion given by

(2.3)    W^n(t) = \sum_{k=1}^{n} \beta_k(t)\, e_k

for all t \ge 0. We set

(2.4)    W_A^n(t) = \sum_{k=1}^{n} e_k \int_0^t e^{-\alpha_k (t-s)}\, d\beta_k(s),

(2.5)    W_A(t) = \sum_{k=1}^{\infty} e_k \int_0^t e^{-\alpha_k (t-s)}\, d\beta_k(s).

We note that W_A^n(t) is the stochastic convolution

W_A^n(t) = \int_0^t e^{(t-s)A}\, dW^n(s).

In general, (2.5) is meaningless since the series in the right-hand side may not converge. The following proposition shows that it becomes meaningful, under some restrictions on the sequence \{\alpha_k\}.

PROPOSITION 2.1. Assume that

(2.6)    \sum_{k=1}^{\infty} \frac{1}{\alpha_k} < \infty.

Then the series in (2.5) converges in L^2(\Omega, \mathcal{F}, P; X) for all t \ge 0. Moreover, W_A(t) is a Gaussian process with mean zero and covariance operator Q_t given by

(2.7)    Q_t x = \sum_{k=1}^{\infty} \frac{1 - e^{-2\alpha_k t}}{2\alpha_k}\, (x, e_k)\, e_k, \qquad x \in X.

Proof. For all t \ge 0 we have

E\Big| \int_0^t e^{-\alpha_k (t-s)}\, d\beta_k(s) \Big|^2 = \int_0^t e^{-2\alpha_k s}\, ds = \frac{1 - e^{-2t\alpha_k}}{2\alpha_k} \le \frac{1}{2\alpha_k},

which is summable over k in view of (2.6). Therefore, the series in (2.5) converges in X for all t \ge 0 almost surely to a Gaussian process W_A(t). In order to prove (2.7) it suffices to remark that, for all x, y \in X, we have

E\,(W_A(t), x)(W_A(t), y) = \sum_{k=1}^{\infty} (x, e_k)(y, e_k) \int_0^t e^{-2\alpha_k (t-s)}\, ds. \qquad \Box
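It may be worth noting the consequence of (2.6) for the covariance operator: taking x = y = e_k in the identity above and summing over k gives

\mathrm{Tr}\, Q_t = \sum_{k=1}^{\infty} \frac{1 - e^{-2\alpha_k t}}{2\alpha_k} \le \sum_{k=1}^{\infty} \frac{1}{2\alpha_k} < \infty,

so under (2.6) each Q_t is a nuclear operator; its trace is exactly the function q(t) introduced after (2.8) below.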


Remark 2.2. If (2.6) is fulfilled, we write the stochastic convolution (2.5) as follows:

W_A(t) = \int_0^t e^{(t-s)A}\, dW(s),

where W(t) = \sum_{k=1}^{\infty} e_k \beta_k(t) is usually interpreted as a white noise.
In order to show that W_A(t) has continuous trajectories, we will strengthen (2.6), assuming

(2.8)    \sum_{k=1}^{\infty} \alpha_k^{2\sigma - 1} < \infty \quad \text{for some } \sigma \in \,]0, 1/2[.

We set

q(t) = \sum_{k=1}^{\infty} \frac{1 - e^{-2\alpha_k t}}{2\alpha_k}.

We note that (2.8) yields

(2.9)    q(t) \le M\, t^{2\sigma}

for all t \ge 0 and some constant M \ge 0.
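The step from (2.8) to (2.9) is the elementary interpolation bound \min(a,b) \le a^{2\sigma} b^{1-2\sigma} for a, b \ge 0 and \sigma \in ]0, 1/2[, applied termwise:

\frac{1 - e^{-2\alpha_k t}}{2\alpha_k} \le \min\Big( t, \frac{1}{2\alpha_k} \Big) \le t^{2\sigma}\, (2\alpha_k)^{2\sigma - 1},

so that q(t) \le 2^{2\sigma - 1} t^{2\sigma} \sum_k \alpha_k^{2\sigma - 1}; this gives (2.9) with, for instance, M = 2^{2\sigma - 1} \sum_k \alpha_k^{2\sigma - 1}.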

PROPOSITION 2.3. Assume (2.8). Then W_A(t) has \alpha-Hölder continuous trajectories for all \alpha \in ]0, \sigma[.
Proof. Since the \{\beta_k\} are independent, we have, for all t \ge s \ge 0,

E|W_A(t) - W_A(s)|^2 = \sum_{k=1}^{\infty} \Big[ \int_0^t e^{-2\alpha_k(t-\rho)}\, d\rho + \int_0^s e^{-2\alpha_k(s-\rho)}\, d\rho - 2\int_0^s e^{-\alpha_k(t-\rho)}\, e^{-\alpha_k(s-\rho)}\, d\rho \Big].

Now, by changing the variable \rho into t + s - 2\rho in the last integral, we obtain

(2.10)    E|W_A(t) - W_A(s)|^2 = q(t) + q(s) - 2\Big[ q\Big(\frac{t+s}{2}\Big) - q\Big(\frac{t-s}{2}\Big) \Big]

for all t \ge s \ge 0. Next, note that, since q(t) - q(s) \le q(t-s), (2.9) yields q \in C^{2\sigma}([0, \infty[). Therefore, from (2.10) it follows that there exists C > 0 such that

(2.11)    E|W_A(t) - W_A(s)|^2 \le C\, |t - s|^{2\sigma}.

Since W_A(t) - W_A(s) is a Gaussian random variable, (2.11) implies

(2.12)    E|W_A(t) - W_A(s)|^{2m} \le C_m\, |t - s|^{2\sigma m}

for all t, s \ge 0, all m \in N, and suitable constants C_m > 0. The Kolmogorov test yields the conclusion. \Box

PROPOSITION 2.4. Assume (2.8). Then, for all T > 0 and all m \in N with 2m\sigma > 1,

\lim_{n \to \infty} E\Big( \sup_{0 \le t \le T} |W_A(t) - W_A^n(t)|^{2m} \Big) = 0.

Proof. We use the factorization method as in [6]. Set

Y(s) = \sum_{k=1}^{\infty} e_k \int_0^s (s-r)^{-\sigma} e^{-\alpha_k (s-r)}\, d\beta_k(r), \qquad Y_n(s) = \sum_{k=1}^{n} e_k \int_0^s (s-r)^{-\sigma} e^{-\alpha_k (s-r)}\, d\beta_k(r),

for all s \ge 0. Then, by a straightforward computation,

(2.13)    W_A(t) = \frac{\sin \pi\sigma}{\pi} \int_0^t (t-s)^{\sigma - 1}\, e^{(t-s)A}\, Y(s)\, ds,

(2.14)    W_A^n(t) = \frac{\sin \pi\sigma}{\pi} \int_0^t (t-s)^{\sigma - 1}\, e^{(t-s)A_n}\, Y_n(s)\, ds,

where A_n = A\Pi_n. Therefore,

(2.15)    W_A(t) - W_A^n(t) = B_n(t) + C_n(t),

where

B_n(t) = \frac{\sin \pi\sigma}{\pi} \int_0^t (t-s)^{\sigma - 1} \big[ e^{(t-s)A} - e^{(t-s)A_n} \big]\, Y(s)\, ds,

C_n(t) = \frac{\sin \pi\sigma}{\pi} \int_0^t (t-s)^{\sigma - 1}\, e^{(t-s)A_n} \big[ Y(s) - Y_n(s) \big]\, ds.

We will estimate B_n(t) and C_n(t) separately. After some computations we obtain

(2.16)    E|Y(s)|^2 = \sum_{k=1}^{\infty} \int_0^s e^{-2\alpha_k (s-r)} (s-r)^{-2\sigma}\, dr \le k_0 \sum_{k=1}^{\infty} \alpha_k^{2\sigma - 1} =: k_1

for some constant k_0 > 0. Since Y(s) is a Gaussian process, (2.16) implies that, for all m \in N,

(2.17)    E|Y(s)|^{2m} \le k_m

for some constant k_m > 0.
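A possible choice of the constant k_0 in (2.16) comes from extending the inner integral to [0, \infty[ and recognizing a Gamma integral; since 0 < 2\sigma < 1,

\int_0^s e^{-2\alpha_k (s-r)} (s-r)^{-2\sigma}\, dr \le \int_0^{\infty} e^{-2\alpha_k r}\, r^{-2\sigma}\, dr = \Gamma(1 - 2\sigma)\, (2\alpha_k)^{2\sigma - 1},

so one may take k_0 = 2^{2\sigma - 1}\, \Gamma(1 - 2\sigma), and summability over k is exactly condition (2.8).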

Now, by Hölder's inequality and (2.13) it follows that

E\Big( \sup_{0 \le t \le T} |W_A(t)|^{2m} \Big) \le \Big( \frac{\sin \pi\sigma}{\pi} \Big)^{2m} \Big( \int_0^T s^{2m(\sigma-1)/(2m-1)}\, ds \Big)^{2m-1} \int_0^T E|Y(s)|^{2m}\, ds.

Moreover, we conclude that

E\Big( \sup_{0 \le t \le T} |B_n(t)|^{2m} \Big) \le \Big( \frac{\sin \pi\sigma}{\pi} \Big)^{2m} \Big( \int_0^T s^{2m(\sigma-1)/(2m-1)} \| e^{sA} - e^{sA_n} \|^{2m/(2m-1)}\, ds \Big)^{2m-1} \int_0^T E|Y(s)|^{2m}\, ds.

Now, since the semigroup e^{tA} is analytic, \| e^{sA} - e^{sA_n} \| \to 0 for all s > 0 as n \to \infty. Thus, in view of (2.17), the dominated convergence theorem yields

(2.18)    \lim_{n \to \infty} E\Big( \sup_{0 \le t \le T} |B_n(t)|^{2m} \Big) = 0,

provided that (1 - \sigma)\, 2m/(2m-1) < 1. We will now estimate C_n(t): we have

(2.19)    E|Y(s) - Y_n(s)|^2 = \sum_{k=n+1}^{\infty} \int_0^s e^{-2\alpha_k (s-r)} (s-r)^{-2\sigma}\, dr \le k_0 \sum_{k=n+1}^{\infty} \alpha_k^{2\sigma - 1} =: k_{1,n}

for some constants k_{1,n} > 0 such that \lim_{n \to \infty} k_{1,n} = 0. Since Y(s) - Y_n(s) is a Gaussian process, (2.19) implies that, for all m \in N,

(2.20)    E|Y(s) - Y_n(s)|^{2m} \le c_m\, (k_{1,n})^m

for some constant c_m > 0. Now, by Hölder's inequality we conclude that

E\Big( \sup_{0 \le t \le T} |C_n(t)|^{2m} \Big) \le \Big( \frac{\sin \pi\sigma}{\pi} \Big)^{2m} \Big( \int_0^T s^{2m(\sigma-1)/(2m-1)}\, ds \Big)^{2m-1} \int_0^T E|Y(s) - Y_n(s)|^{2m}\, ds.

Hence

(2.21)    \lim_{n \to \infty} E\Big( \sup_{0 \le t \le T} |C_n(t)|^{2m} \Big) = 0,

which, along with (2.18), gives the conclusion. \Box

3. Linear parabolic equations. In this section we study the linear problem

(3.1)    \frac{\partial u}{\partial t} = \frac{\varepsilon}{2}\,\mathrm{Tr}(u_{xx}) + \langle Ax, u_x\rangle \quad \text{in } [0,T] \times X, \qquad u(0,x) = \phi(x),

where \varepsilon is a given positive number and \phi \in C_b(X). Here A: D(A) \subset X \to X is a self-adjoint negative operator in X satisfying, for all k \in N,

(3.2)    A e_k = -\alpha_k e_k,

with \alpha_k > 0. In (3.1) the subscript x represents the Fréchet partial derivative with respect to x and Tr denotes the trace; i.e.,

\mathrm{Tr}(u_{xx}) = \sum_{k=1}^{\infty} \langle u_{xx} e_k, e_k\rangle.

The following result, proved in [7], states that, for any \phi \in C_b(X), problem (3.1) has a unique classical solution given by

(3.3)    u(t,x) = E\,\phi\big( e^{tA}x + \sqrt{\varepsilon}\, W_A(t) \big) =: (T_t\phi)(x),

where W_A(t) is the process defined in (2.5). The method of [7], based on a procedure of Galerkin type, uses the following sequence of semigroups, which is proved to converge pointwise to the semigroup defined in (3.3):

(3.4)    (T_t^n\phi)(x) := E\,\phi\big( e^{tA}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t) \big)

for all \phi \in C_b(X). In this paper we improve the result of [7], by showing that (T_t^n\phi)(x) converges to (T_t\phi)(x) uniformly. We will use this convergence result in §5. In the following we denote by e^{tA}Q_t^{-1/2} the bounded operator defined by

(3.5)    e^{tA}Q_t^{-1/2}\, e_k = \frac{\sqrt{2\alpha_k}\; e^{-t\alpha_k}}{\sqrt{1 - e^{-2t\alpha_k}}}\, e_k, \qquad k \in N.
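One reason this operator is the right object to single out is that its Hilbert-Schmidt norm is exactly the quantity \rho(t) appearing in the gradient estimates (3.9)-(3.11) below:

\| e^{tA} Q_t^{-1/2} \|_{HS}^2 = \sum_{k=1}^{\infty} \frac{2\alpha_k\, e^{-2t\alpha_k}}{1 - e^{-2t\alpha_k}} = \rho^2(t),

so the square-summability of its eigenvalues for t > 0 is what makes T_t\phi differentiable even for \phi merely bounded and uniformly continuous.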

PROPOSITION 3.1. Assume (3.2) and (2.6). Then, for any \tau \in ]0, T[,

(3.6)    (i) \lim_{n \to \infty} (T_t^n\phi)(x) = (T_t\phi)(x), \qquad (ii) \lim_{n \to \infty} (T_t^n\phi)_x(x) = (T_t\phi)_x(x),

uniformly on the bounded sets of [\tau, T] \times X. Moreover, the function u: [0,\infty[\, \times X \to R, u(t,x) = (T_t\phi)(x), is continuous. Furthermore, u(t,\cdot) \in C_b^2(X) for all t > 0, u(\cdot, x) \in C^1(]0,\infty[) for all x \in D(A), and

(3.7)    u_x(t,x) = \sqrt{\varepsilon}\; E\big( e^{tA}Q_t^{-1} W_A(t)\; \phi(e^{tA}x + \sqrt{\varepsilon}\, W_A(t)) \big),

(3.8)    u_{xx}(t,x) = \varepsilon\; E\big( e^{tA}Q_t^{-1} W_A(t) \otimes e^{tA}Q_t^{-1} W_A(t)\; \phi(e^{tA}x + \sqrt{\varepsilon}\, W_A(t)) \big),

(3.9)    |u_x(t,x)| \le \sqrt{\varepsilon}\, \rho(t)\, \|\phi\|_0,

(3.10)   |\mathrm{Tr}[u_{xx}(t,x)]| \le \varepsilon\, \rho^2(t)\, \|\phi\|_0,

where

(3.11)   \rho^2(t) = \sum_{k=1}^{\infty} \frac{2\alpha_k\, e^{-2t\alpha_k}}{1 - e^{-2t\alpha_k}}.

Finally, (3.1) is fulfilled for all x \in D(A) and t > 0.

Proof. First, we note that formulas (3.7) and (3.8) are derived in [7]. We will give a detailed proof of the uniform convergence in (3.6). The proofs of the remaining statements will be only sketched for the reader's convenience.
From (3.3) and (3.4) we obtain

(3.12)    |(T_t\phi)(x) - (T_t^n\phi)(x)| \le E\,\big| \phi(e^{tA}x + \sqrt{\varepsilon}\, W_A(t)) - \phi(e^{tA}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t)) \big|
          \le E\,\omega_\phi\big( |e^{tA}x - e^{tA}\Pi_n x| + \sqrt{\varepsilon}\, |W_A(t) - W_A^n(t)| \big)
          \le \omega_\phi\big( |e^{tA}x - e^{tA}\Pi_n x| + \sqrt{\varepsilon}\, E|W_A(t) - W_A^n(t)| \big),

where \omega_\phi is a concave continuity modulus for \phi. Now, since e^{tA} is compact for t > 0,

(3.13)    \lim_{n \to \infty} |e^{tA}x - e^{tA}\Pi_n x| = 0

uniformly for (t,x) \in [\tau, T] \times B_R, R > 0. Moreover,

(3.14)    E|W_A(t) - W_A^n(t)| \le \big( E|W_A(t) - W_A^n(t)|^2 \big)^{1/2}.

Thus, (3.6)(i) follows from (3.13) and (3.14), in light of Proposition 2.1. Equation (3.6)(ii) follows by a similar argument, using formula (3.7) for u_x(t,x). Next, we have

(3.15)    |\langle u_x(t,x), e_k\rangle|^2 = \varepsilon\, e^{-2t\alpha_k} \Big( \frac{2\alpha_k}{1 - e^{-2t\alpha_k}} \Big)^2 \Big| E\Big( \int_0^t e^{-(t-s)\alpha_k}\, d\beta_k(s)\; \phi(e^{tA}x + \sqrt{\varepsilon}\, W_A(t)) \Big) \Big|^2 \le \varepsilon\, e^{-2t\alpha_k}\, \frac{2\alpha_k}{1 - e^{-2t\alpha_k}}\, \|\phi\|_0^2,

and (3.9) follows taking the sum over k. Similarly, (3.8) yields

(3.16)    |\langle u_{xx}(t,x)\, e_k, e_k\rangle| \le \varepsilon\, e^{-2t\alpha_k}\, \frac{2\alpha_k}{1 - e^{-2t\alpha_k}}\, \|\phi\|_0,

which in turn implies (3.10). \Box

(9)

(3.17)

where

Remark 3.2. Under the strongerJcondition

(2.8),

we have

2

(20lke--tak)

2e

-2tak --t

‘L

(t)<_-sup

qt)<_-- q),

k

1--(3.18)

L=4max e_2,. 0 1

Hence,

estimate

(3.10)

and

(2.9)

yield

(3.19)

ITr

[lgxx( t,

X)]

t2_2c

4. Semilinear parabolic equations. Let T > 0 and consider the Cauchy problem

(4.1)    \frac{\partial u}{\partial t} = \frac{\varepsilon}{2}\,\mathrm{Tr}(u_{xx}) + \langle Ax + F(x), u_x\rangle - H(u_x) + g(x) \quad \text{in } [0,T] \times X, \qquad u(0,x) = \phi(x),

where we assume the following, in addition to (3.2) and (2.8):

(4.2)    (i) \phi, g \in C_b(X); \quad (ii) H \in \mathrm{Lip}(X),\ H(0) = 0; \quad (iii) F \in C_b(X; X).

Obviously, the requirement H(0) = 0 implies no loss of generality, as we can replace g by g - H(0). We will solve problem (4.1) in the Banach space

\Sigma = \{ v \in C_b([0,T] \times X) : v_x \in C(]0,T] \times X; X),\ t^{1-\sigma} v_x \in B(]0,T] \times X; X) \},

\|v\|_\Sigma = \sup\{ |v(t,x)| + t^{1-\sigma} |v_x(t,x)| : (t,x) \in ]0,T] \times X \},

where \sigma is defined in (2.8) and B(]0,T] \times X; X) denotes the space of all bounded X-valued functions defined in ]0,T] \times X.

DEFINITION 4.1. A function u \in \Sigma is called a mild solution of problem (4.1) if u is a solution of the integral equation

(4.3)    u(t,\cdot) = T_t\phi + \int_0^t T_{t-s}\big( \langle F, u_x(s,\cdot)\rangle - H(u_x(s,\cdot)) + g \big)\, ds \qquad \forall t \in [0,T],

where T_t is the semigroup defined in (3.3).

LEMMA 4.2. Assume (2.8) and let \psi: ]0,T] \times X \to R be such that
(i) \psi \in C_b([\tau, T] \times X) for all \tau \in ]0,T],
(ii) |t^{1-\sigma}\psi(t,x)| \le K for all (t,x) \in ]0,T] \times X and some constant K > 0.
Set

(4.4)    f(t,\cdot) = \int_0^t T_{t-s}(\psi(s,\cdot))\, ds \qquad \forall t \in ]0,T].

Then f \in \Sigma.

Proof. Step 1: f \in C_b([0,T] \times X). Fix \eta > 0 and let \tau \in ]0,T] be such that K\tau^\sigma \le \sigma\eta. Let (t,x), (t',x') \in [0,T] \times X. We shall consider two cases separately.
Case 1: t, t' \in [0,\tau]. Then we have, obviously,

|f(t,x) - f(t',x')| \le \int_0^t \big| T_{t-s}(s^{1-\sigma}\psi(s,\cdot))(x) \big|\, s^{\sigma-1}\, ds + \int_0^{t'} \big| T_{t'-s}(s^{1-\sigma}\psi(s,\cdot))(x') \big|\, s^{\sigma-1}\, ds \le 2K \int_0^\tau s^{\sigma-1}\, ds \le 2\eta.

Case 2: t \in ]\tau/2, T], t' \in [\tau, T]. Then we have

|f(t,x) - f(t',x')| \le 2\eta + \Big| \int_{\tau/2}^t T_{t-s}(\psi(s,\cdot))(x)\, ds - \int_{\tau/2}^{t'} T_{t'-s}(\psi(s,\cdot))(x')\, ds \Big|.

In view of (ii), standard continuity properties of the integral in the right-hand side imply that there exists \delta > 0 such that, if |t - t'| + |x - x'| < \delta, then

\Big| \int_{\tau/2}^t T_{t-s}(\psi(s,\cdot))(x)\, ds - \int_{\tau/2}^{t'} T_{t'-s}(\psi(s,\cdot))(x')\, ds \Big| < \eta.

To conclude the reasoning, set \delta' = \min\{\delta, \tau/2\}. Then, from the above analysis it follows that |f(t,x) - f(t',x')| \le 3\eta provided |t - t'| + |x - x'| < \delta'. This proves that f \in C_b([0,T] \times X).

Step 2: t^{1-\sigma} f_x(t,x) is bounded on ]0,T] \times X. First, note that, by (3.9), (2.8), and Remark 3.2, we obtain

(4.5)    \big| (T_{t-s}\psi(s,\cdot))_x(x) \big| \le \frac{C_0 K}{(t-s)^{1-\sigma}\, s^{1-\sigma}}, \qquad 0 < s < t,

where C_0 = \sqrt{\varepsilon L M}, L and M being defined in (3.17) and (2.9), respectively. Hence, f_x(t,x) exists for all t > 0 and x \in X. Moreover, by (ii),

(4.6)    |t^{1-\sigma} f_x(t,x)| \le t^{1-\sigma} C_0 K \int_0^t (t-s)^{\sigma-1} s^{\sigma-1}\, ds.

On the other hand, (4.6) yields the conclusion of Step 2 since

(4.7)    \int_0^t (t-s)^{\sigma-1} s^{\sigma-1}\, ds = t^{2\sigma-1}\, \beta(\sigma, \sigma) \le \frac{2^{2(1-\sigma)}}{\sigma}\, t^{2\sigma-1},

where \beta is the Euler beta function.
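The bound on \beta(\sigma, \sigma) used in (4.7) (and again in the proof of Theorem 4.3) can be checked directly: splitting the integral at 1/2 and using symmetry,

\beta(\sigma, \sigma) = \int_0^1 x^{\sigma-1}(1-x)^{\sigma-1}\, dx = 2\int_0^{1/2} x^{\sigma-1}(1-x)^{\sigma-1}\, dx \le 2\,(1/2)^{\sigma-1} \int_0^{1/2} x^{\sigma-1}\, dx = \frac{2^{2(1-\sigma)}}{\sigma}.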

Step 3: f_x \in C([t_0, T] \times X; X) for all t_0 \in ]0,T[. Fix t_0 \in ]0,T[ and t_0 \le t \le t' \le T, x, x' \in X. Then

(4.8)    |f_x(t,x) - f_x(t',x')| \le \int_0^t \big| (T_{t-s}\psi(s,\cdot))_x(x) - (T_{t'-s}\psi(s,\cdot))_x(x') \big|\, ds + \int_t^{t'} \big| (T_{t'-s}\psi(s,\cdot))_x(x') \big|\, ds.

Moreover, recalling (4.5), we obtain

(4.9)    \int_t^{t'} \big| (T_{t'-s}\psi(s,\cdot))_x(x') \big|\, ds \le C_0 K \int_t^{t'} (t'-s)^{\sigma-1} s^{\sigma-1}\, ds \le \frac{C_0 K}{\sigma\, t_0^{1-\sigma}}\, (t' - t)^{\sigma}.

On the other hand, for all \eta > 0 there exists \tau \in ]0, t_0/2] such that

(4.10)   \int_0^t \big| (T_{t-s}\psi(s,\cdot))_x(x) - (T_{t'-s}\psi(s,\cdot))_x(x') \big|\, ds \le 2\eta + \int_\tau^{t-\tau} \big| (T_{t-s}\psi(s,\cdot))_x(x) - (T_{t'-s}\psi(s,\cdot))_x(x') \big|\, ds.

Indeed, it suffices to take \tau so small that the contributions of the integrals over [0,\tau] and [t-\tau, t], estimated by means of (4.5) and (4.7), do not exceed \eta. Finally, the conclusion follows from (4.8)-(4.10) and the uniform continuity of the mapping (s,t,x) \mapsto (T_{t-s}\psi(s,\cdot))_x(x) on \{(s,t,x) : t_0 \le t \le T,\ \tau \le s \le t - \tau,\ x \in X\}. The proof of the lemma is thus complete. \Box

THEOREM 4.3. Assume (3.2), (2.8), and (4.2). Then problem (4.1) has a unique mild solution.

Proof. Suppose first that T is sufficiently small, i.e.,

(4.12)    \big( 1 + 2^{2(1-\sigma)} C_0 \big)\big( \|F\|_0 + \|H\|_1 \big)\, \frac{T^\sigma}{\sigma} \le \frac{1}{2},

where C_0 = \sqrt{\varepsilon L M}, L and M being defined in (3.17) and (2.9), respectively. Define a map \Gamma on \Sigma as follows:

(4.13)    (\Gamma v)(t,\cdot) = T_t\phi + \int_0^t T_{t-s}\big( \langle F, v_x(s,\cdot)\rangle - H(v_x(s,\cdot)) + g \big)\, ds

for all t \in [0,T]. From Lemma 4.2, it follows that \Gamma: \Sigma \to \Sigma. Moreover,

(4.14)    |(\Gamma v)(t,x) - (\Gamma z)(t,x)| \le \big( \|F\|_0 + \|H\|_1 \big)\, \frac{T^\sigma}{\sigma}\, \|v - z\|_\Sigma,

(4.15)    t^{1-\sigma} |(\Gamma v)_x(t,x) - (\Gamma z)_x(t,x)| \le C_0 \big( \|F\|_0 + \|H\|_1 \big)\, \beta(\sigma, \sigma)\, T^\sigma\, \|v - z\|_\Sigma,

where \beta is the Euler beta function. Since \beta(\sigma, \sigma) \le 2^{2(1-\sigma)}/\sigma, (4.13)-(4.15) imply that \Gamma is a contraction in \Sigma and the conclusion follows by the contraction mapping principle. Finally, condition (4.12) can be removed by a finite number of iterations of the previous fixed-point argument.
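As a sanity check on the smallness condition, adding (4.14) and (4.15) and using \beta(\sigma,\sigma) \le 2^{2(1-\sigma)}/\sigma gives

\|\Gamma v - \Gamma z\|_\Sigma \le \Big( \frac{1}{\sigma} + C_0\, \beta(\sigma,\sigma) \Big) \big( \|F\|_0 + \|H\|_1 \big)\, T^\sigma\, \|v - z\|_\Sigma \le \big( 1 + 2^{2(1-\sigma)} C_0 \big)\big( \|F\|_0 + \|H\|_1 \big)\, \frac{T^\sigma}{\sigma}\, \|v - z\|_\Sigma,

so (4.12) makes \Gamma a 1/2-contraction on \Sigma; on a general interval [0,T] one applies this on finitely many subintervals whose length satisfies (4.12), restarting each time from the value reached at the end of the previous one.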

In the sequel we will consider the following "finite-dimensional" approximation of (4.1):

(4.16)    \frac{\partial u_n}{\partial t} = \frac{\varepsilon}{2}\,\mathrm{Tr}(u_{n,xx}) + \langle A\Pi_n x + \Pi_n F(\Pi_n x), u_{n,x}\rangle - H(u_{n,x}) + g(\Pi_n x), \qquad u_n(0,x) = \phi(\Pi_n x),

which has the integral form

(4.17)    u_n(t,\cdot) = T_t^n(\phi \circ \Pi_n) + \int_0^t T_{t-s}^n\big( \langle F \circ \Pi_n, u_{n,x}(s,\cdot)\rangle - H(u_{n,x}(s,\cdot)) + g \circ \Pi_n \big)\, ds.


THEOREM 4.5. Assume (3.2), (2.8), and (4.2) and let u and u_n be the solutions of (4.1) and (4.16), respectively. Then, for all \tau \in ]0,T[,

(4.18)    (i) \lim_{n \to \infty} |u(t,x) - u_n(t,x)| = 0, \qquad (ii) \lim_{n \to \infty} |u_x(t,x) - u_{n,x}(t,x)| = 0,

uniformly for t \in [\tau, T] and x in bounded sets of X.

We first prove the following lemma.
LEMMA 4.6. Assume (2.8) and let \psi_n, \psi: ]0,T] \times X \to R, n \in N, be such that
(i) \psi_n, \psi \in C_b([\tau, T] \times X) for all \tau \in ]0,T],
(ii) |t^{1-\sigma}\psi(t,x)| \le K and |t^{1-\sigma}\psi_n(t,x)| \le K for all (t,x) \in ]0,T] \times X and some constant K > 0,
(iii) \lim_{n \to \infty} \sup\{ |t^{1-\sigma}(\psi(t,x) - \psi_n(t,x))| : t \in [0,T],\ |x| \le R \} = 0 for all R > 0.
Set

f(t,\cdot) = \int_0^t T_{t-s}(\psi(s,\cdot))\, ds, \qquad f_n(t,\cdot) = \int_0^t T_{t-s}^n(\psi_n(s,\cdot))\, ds.

Then, for all R > 0,

(4.19)    \lim_{n \to \infty} |f(t,x) - f_n(t,x)| = 0,

(4.20)    \lim_{n \to \infty} t^{1-\sigma} |f_{n,x}(t,x) - f_x(t,x)| = 0,

uniformly for t \in [0,T] and |x| \le R.

Proof. First, we note that f_n, f \in \Sigma in view of Lemma 4.2. Now, fix R > 0 and let t \in [0,T], |x| \le R. We have

(4.21)    |f(t,x) - f_n(t,x)| \le \Big| \int_0^t T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big](x)\, ds \Big| + \Big| \int_0^t \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot)(x)\, ds \Big|.

We claim that

(4.22)    \lim_{n \to \infty} \int_0^t \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot)(x)\, ds = 0

uniformly for t \in [0,T], |x| \le R. Indeed, fix \eta > 0 and let \tau \in ]0,T[ be such that (K/\sigma)\tau^\sigma < \eta. Then, if 0 \le t \le \tau,

(4.23)    \Big| \int_0^t \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot)(x)\, ds \Big| \le 2K \int_0^t s^{\sigma-1}\, ds \le 2\eta.

On the other hand, if \tau < t \le T, then

(4.24)    \Big| \int_0^t \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot)(x)\, ds \Big| \le 2\eta + \Big| \int_{\tau/2}^t \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot)(x)\, ds \Big| \le 4\eta + \Big| \int_{\tau/2}^{t-\tau/2} \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot)(x)\, ds \Big|,

and (4.22) follows from (3.6)(i).

Next, let us show that

(4.25)    \lim_{n \to \infty} \int_0^t T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big](x)\, ds = 0

uniformly for t \in [0,T], |x| \le R. Recalling (3.4), we obtain

(4.26)    T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big](x) = E\big[ \psi\big(s, e^{(t-s)A}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t-s)\big) - \psi_n\big(s, e^{(t-s)A}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t-s)\big) \big].

Moreover, Proposition 2.4 implies that there exists a random variable C such that

(4.27)    \big| e^{(t-s)A}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t-s) \big| \le R + C

for n \in N, 0 \le s \le t \le T, and |x| \le R. So, in light of hypothesis (iii),

(4.28)    \lim_{n \to \infty} \sup_{|x| \le R,\ \tau \le s \le t} \big| \psi\big(s, e^{(t-s)A}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t-s)\big) - \psi_n\big(s, e^{(t-s)A}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t-s)\big) \big| = 0

almost surely for all \tau \in ]0,T]. Hence, by the dominated convergence theorem, for all \tau \in ]0,T],

(4.29)    \lim_{n \to \infty} \int_\tau^t \big| T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big](x) \big|\, ds = 0

uniformly for |x| \le R. Now fix \eta > 0 and choose \tau so that (K/\sigma)\tau^\sigma < \eta. Then

(4.30)    \Big| \int_0^t T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big](x)\, ds \Big| \le 2\eta + \Big| \int_\tau^t T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big](x)\, ds \Big|,

and (4.25) follows from (4.29). Finally, (4.21), (4.22), and (4.25) imply (4.19).
Next, we prove (4.20). For all t \in ]0,T], |x| \le R we have

(4.31)    t^{1-\sigma} |f_x(t,x) - f_{n,x}(t,x)| \le t^{1-\sigma} \Big| \int_0^t \big( T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big] \big)_x(x)\, ds \Big| + t^{1-\sigma} \Big| \int_0^t \big( \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot) \big)_x(x)\, ds \Big|.

Now, fix \eta > 0 and let \tau \in ]0,T[ be such that 8 C_0 K \tau^\sigma < \sigma\eta, where C_0 = \sqrt{\varepsilon L M}, L and M being defined in (3.17) and (2.9), respectively. Then, by (4.5) and (4.7) we conclude that, if 0 \le t \le \tau,

t^{1-\sigma} \Big| \int_0^t \big( \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot) \big)_x(x)\, ds \Big| \le 2 C_0 K\, \frac{2^{2(1-\sigma)}}{\sigma}\, t^\sigma \le \eta.

On the other hand, if \tau < t \le T, we obtain, as in (4.24),

(4.32)    t^{1-\sigma} \Big| \int_0^t \big( \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot) \big)_x(x)\, ds \Big| \le 4\eta + t^{1-\sigma} \Big| \int_{\tau/2}^{t-\tau/2} \big( \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot) \big)_x(x)\, ds \Big|.

So, by (3.6)(ii),

(4.33)    \lim_{n \to \infty} t^{1-\sigma} \Big| \int_0^t \big( \big[ T_{t-s} - T_{t-s}^n \big] \psi(s,\cdot) \big)_x(x)\, ds \Big| = 0

uniformly for t \in ]0,T] and |x| \le R. Next we prove that

(4.34)    \lim_{n \to \infty} t^{1-\sigma} \Big| \int_0^t \big( T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big] \big)_x(x)\, ds \Big| = 0

uniformly for t \in ]0,T] and |x| \le R. From (3.7) it follows that

(4.35)    \big( T_{t-s}^n\big[ \psi(s,\cdot) - \psi_n(s,\cdot) \big] \big)_x(x) = \sqrt{\varepsilon}\; E\Big\{ e^{(t-s)A} Q_{t-s}^{-1} \Pi_n W_A^n(t-s)\, \big[ \psi\big(s, e^{(t-s)A}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t-s)\big) - \psi_n\big(s, e^{(t-s)A}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t-s)\big) \big] \Big\}.

Recalling (4.28) and (2.11), we conclude that, if t \ge \tau, then

(4.36)    \lim_{n \to \infty} \sup \big| e^{(t-s)A} Q_{t-s}^{-1} \Pi_n W_A^n(t-s)\, \big[ \psi - \psi_n \big]\big(s, e^{(t-s)A}\Pi_n x + \sqrt{\varepsilon}\, W_A^n(t-s)\big) \big| = 0

almost surely for |x| \le R, \tau/2 \le s \le t - \tau/2. Now, arguing as above, we can prove (4.34) and so (4.20). \Box

Proof of Theorem 4.5. Set

(4.37)    G_n(x, p) = \langle F(\Pi_n x), p\rangle - H(p) + g(\Pi_n x), \qquad G(x, p) = \langle F(x), p\rangle - H(p) + g(x)

for all x, p \in X, and

(4.38)    (\Gamma_n v)(t,\cdot) = T_t^n(\phi \circ \Pi_n) + \int_0^t T_{t-s}^n\, G_n(\cdot, v_x(s,\cdot))\, ds \qquad \forall v \in \Sigma.

From Lemma 4.2 it follows that \Gamma_n maps \Sigma into \Sigma. Arguing as in the proof of Theorem 4.3 it follows that

(4.39)    \|\Gamma_n v - \Gamma_n z\|_\Sigma \le \tfrac12\, \|v - z\|_\Sigma,

provided that T satisfies (4.12). Therefore, \Gamma_n has a unique fixed point u_n, which is the unique mild solution of (4.16). Moreover,

(4.40)    \|u_n - \Gamma_n^\mu(0)\|_\Sigma \le 2^{1-\mu}\big( T\|g\|_0 + \|\phi\|_0 \big),

where \Gamma_n^\mu denotes the \mu-iterate of \Gamma_n. We claim that, for all \mu \in N and all R > 0,

(4.41)    \lim_{n \to \infty} \Gamma_n^\mu(0)(t,x) = \Gamma^\mu(0)(t,x)

uniformly for t \in [\tau, T] and |x| \le R. In fact, (4.41) is true for \mu = 1, in view of (3.6)(i). Now, suppose that (4.41) holds for \mu \in N. Then the functions

(4.42)    \psi_n(t,x) = G_n\big(x, (\Gamma_n^\mu(0))_x(t,x)\big), \qquad \psi(t,x) = G\big(x, (\Gamma^\mu(0))_x(t,x)\big)

satisfy the assumptions of Lemma 4.6. Consequently, by (4.19),

(4.43)    \lim_{n \to \infty} \Gamma_n^{\mu+1}(0)(t,x) = \Gamma^{\mu+1}(0)(t,x)

uniformly for t \in [\tau, T], |x| \le R. Therefore (4.41) holds for all \mu \in N. Finally, to prove (4.18)(i) note that for all \mu, n \in N

(4.44)    |u(t,x) - u_n(t,x)| \le |u(t,x) - \Gamma^\mu(0)(t,x)| + |\Gamma^\mu(0)(t,x) - \Gamma_n^\mu(0)(t,x)| + |\Gamma_n^\mu(0)(t,x) - u_n(t,x)| \le 2^{2-\mu}\big( T\|g\|_0 + \|\phi\|_0 \big) + |\Gamma^\mu(0)(t,x) - \Gamma_n^\mu(0)(t,x)|.

Fix \eta > 0 and let \mu \in N be such that 2^{1-\mu}( T\|g\|_0 + \|\phi\|_0 ) < \eta. Then (4.44) yields

(4.45)    |u(t,x) - u_n(t,x)| \le 2\eta + |\Gamma^\mu(0)(t,x) - \Gamma_n^\mu(0)(t,x)|.

Now, (4.18)(ii) can be easily derived by minor modifications of the above argument (using (3.6)(ii) instead of (3.6)(i) and (4.20) instead of (4.19)). Therefore, the proof is complete. \Box

5. Application to stochastic optimal control. Let \{\Omega, \mathcal{F}, P\} be a complete probability space and \{\beta_k\} a sequence of standard one-dimensional Brownian motions, mutually independent. For any t \ge 0 let \mathcal{F}_t be the \sigma-algebra generated by \{\beta_k(s) : k = 1, 2, \dots,\ 0 \le s \le t\}. Let M_W^2(t, T; X) denote the space of the X-valued processes x such that x(s) is \mathcal{F}_s-measurable for all s \le T and E \int_t^T |x(s)|^2\, ds < \infty.
Consider a stochastic system governed by the state equation

(5.1)    y(s) = e^{(s-t)A} x + \int_t^s e^{(s-r)A}\big[ F(y(r)) + z(r) \big]\, dr + \sqrt{\varepsilon}\, W_A(t, s), \qquad s \ge t \ge 0,

where x \in X, A is a self-adjoint operator satisfying (3.2) and (2.8), F \in \mathrm{Lip}(X, X), z \in M_W^2(t, T; X), and W_A(t, s) is defined by

(5.2)    W_A(t, s) = \sum_{k=1}^{\infty} e_k \int_t^s e^{-\alpha_k (s-r)}\, d\beta_k(r).

Equation (5.1) can be regarded as the "mild" form of the stochastic differential equation

(5.3)    dy(s) = \big\{ A y(s) + F(y(s)) + z(s) \big\}\, ds + \sqrt{\varepsilon}\, dW(s), \qquad t \le s \le T, \qquad y(t) = x,

where W(t) is a cylindrical Wiener process (see Remark 2.2).

2.2).

Wenowprove the existence of solutionsto

(5.3)

aswellas aGalerkin approxima-tion result.

POPOSTION 5.1.

Assume (3.2),

(2.8)

and let FLip

(X,

X).

en,

for

all z

M(t,

T;

X),

equation

(5.1)

hasaunique solution

y(.;

t,

x,

z),

whichis continuouswith

probabilityone.

2

Proof

Let

A= {v Mw(t, T;X):

on

A

as follows"

(5.4)

a(v)(s)=e(S-t)Ax+

e(X-r)A[F(v(r))+z(r)]

dr+

WA(t,S),

tNS

From

Proposition 2.4 it follows that

Wa(t," )@

m.

Hence,

1"A A.

Moreover,

is a

contraction provided that

T-t<I/[[FII

and the conclusion follows by standard

fixed-point arguments.

PROPOSITION 5.2. Assume (3.2), (2.8) and let F \in \mathrm{Lip}(X, X). Let y_n(\cdot\,; t, x, z) be the solution of

(5.5)    dy_n(s) = \big\{ A\Pi_n y_n(s) + \Pi_n F(\Pi_n y_n(s)) + \Pi_n z(s) \big\}\, ds + \sqrt{\varepsilon}\, dW^n(s), \qquad y_n(t) = \Pi_n x, \qquad t \le s \le T,

where z \in M_W^2(t, T; X) and W^n(t) is defined in (2.3). Then,

(5.6)    \lim_{n \to \infty} E\Big( \sup_{t \le s \le T} |y(s; t, x, z) - y_n(s; t, x, z)| \Big) = 0

for all x \in X.
Proof. Let \Lambda be the space defined in the previous proof and define a map \gamma_n on \Lambda as follows:

\gamma_n(v)(s) = e^{(s-t)A}\Pi_n x + \int_t^s e^{(s-r)A}\Pi_n\big[ F(\Pi_n v(r)) + \Pi_n z(r) \big]\, dr + \sqrt{\varepsilon}\; \Pi_n W_A(t, s), \qquad t \le s \le T.

Then \gamma_n is a contraction in \Lambda, uniformly with respect to n \in N, provided that T - t < 1/\|F\|_1. Moreover, Proposition 2.4 implies that

\lim_{n \to \infty} E\Big( \sup_{t \le s \le T} |\gamma_n(v)(s) - \gamma(v)(s)| \Big) = 0

for all v \in M_W^2(t, T; X). The conclusion then follows by the contraction mapping theorem depending on a parameter. \Box

We will now study the following stochastic optimal control problem. Given R > 0, minimize the cost functional

(5.7)    J(t, x; z) = E\Big\{ \int_t^T \Big[ g(y(s; t, x, z)) + \tfrac12 |z(s)|^2 \Big]\, ds + \phi(y(T; t, x, z)) \Big\}

over all controls z \in M_W^2(t, T; X) satisfying |z(s)| \le R almost surely for all s \in [t, T]. The value function of problem (5.7) is given by

(5.8)    V(t, x) = \inf\big\{ J(t, x; z) : z \in M_W^2(t, T; X),\ |z(s)| \le R \big\}.

The corresponding Hamilton-Jacobi-Bellman equation reads as follows:

(5.9)    \frac{\partial v}{\partial t} + \frac{\varepsilon}{2}\,\mathrm{Tr}(v_{xx}) + \langle Ax + F(x), v_x\rangle - H(v_x) + g(x) = 0 \quad \text{in } [0,T] \times X, \qquad v(T, x) = \phi(x),

where H is defined by

(5.10)    H(p) = \begin{cases} \tfrac12 |p|^2 & \text{if } |p| \le R,\\[2pt] R|p| - \tfrac{R^2}{2} & \text{if } |p| \ge R. \end{cases}

From Theorem 4.3 we obtain the result below.
THEOREM 5.3. Assume (3.2), (2.8), (4.2)(i) and let F \in \mathrm{Lip}(X, X). Then problem (5.9) has a unique mild solution, which coincides with the value function V. Moreover, for any (t, x) \in [0,T] \times X, there exists an optimal control for problem (5.7). Furthermore, any optimal control z^* is related to the corresponding optimal state y^* by the feedback formula

(5.11)    z^*(s) = -h\Big( \frac{\partial v}{\partial x}(s, y^*(s)) \Big), \qquad t \le s \le T,

where

(5.12)    h(p) = \begin{cases} p & \text{if } |p| \le R,\\[2pt] \dfrac{R\, p}{|p|} & \text{if } |p| \ge R. \end{cases}
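It may help to record, in one place, the elementary relation between H and h that drives the whole verification argument. For fixed p, the Hamiltonian in (5.10) is the value of a constrained minimization over the control set B_R,

H(p) = -\min_{|z| \le R}\Big( \langle p, z\rangle + \tfrac12 |z|^2 \Big) = \max_{|z| \le R}\Big( -\langle p, z\rangle - \tfrac12 |z|^2 \Big),

and the minimum is attained exactly at z = -h(p), with h as in (5.12). Equivalently, for every z with |z| \le R,

\tfrac12 |z|^2 + \langle z, p\rangle + H(p) = \tfrac12\Big( |z + p|^2 - \chi^2(|p| - R) \Big) \ge 0, \qquad \chi(a) = \max(a, 0),

with equality if and only if z = -h(p). This is the inequality (5.14) used in the proof below, and equality for the feedback choice (5.16) is what yields optimality.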

Proof. First, we note that the existence and uniqueness of a mild solution v to (5.9) follows from Theorem 4.3, since the function H defined in (5.10) fulfills (4.2)(ii). Let us show that v = V. We claim that v satisfies the dynamic programming principle below: for any t \in ]0, T[, x \in X and z \in M_W^2(t, T; X) such that |z(s)| \le R almost surely, we have

(5.13)    v(t, x) = -\frac12\, E \int_t^T \Big\{ |z(s) + v_x(s, y(s; t, x, z))|^2 - \chi^2\big( |v_x(s, y(s; t, x, z))| - R \big) \Big\}\, ds + E\Big\{ \int_t^T \Big[ g(y(s; t, x, z)) + \tfrac12 |z(s)|^2 \Big]\, ds + \phi(y(T; t, x, z)) \Big\},

where \chi(a) = 0 if a \le 0 and \chi(a) = a if a \ge 0.

Indeed, let u_n be the solution of problem (4.16) with H given by (5.10) and set v_n(t, x) = u_n(T - t, x). We claim that v_n is regular. To show this fact let \zeta(t, x_1, \dots, x_n) be defined as

\zeta(t, x_1, \dots, x_n) = v_n(t, x_1 e_1 + \dots + x_n e_n)

for all (t, x_1, \dots, x_n) \in ]0, T[\, \times R^n. Then \zeta is a classical solution of the problem

\frac{\partial \zeta}{\partial t} + \frac{\varepsilon}{2} \sum_{i=1}^{n} \frac{\partial^2 \zeta}{\partial x_i^2} + \sum_{i=1}^{n} \Big[ -\alpha_i x_i + \langle F(x_1 e_1 + \dots + x_n e_n), e_i\rangle \Big] \frac{\partial \zeta}{\partial x_i} - H\Big( \frac{\partial \zeta}{\partial x_1} e_1 + \dots + \frac{\partial \zeta}{\partial x_n} e_n \Big) + g(x_1 e_1 + \dots + x_n e_n) = 0,

\zeta(T, x_1, \dots, x_n) = \phi(x_1 e_1 + \dots + x_n e_n).

So, we can use the Itô formula to differentiate v_n(s, y_n(s)), where y_n(s) = y_n(s; t, x, z). Thus, we obtain

dv_n(s, y_n(s)) = \frac{\partial v_n}{\partial s}(s, y_n(s))\, ds + \langle dy_n(s), v_{n,x}(s, y_n(s))\rangle + \frac{\varepsilon}{2}\,\mathrm{Tr}\big( v_{n,xx}(s, y_n(s)) \big)\, ds.

Now, recall (4.16) and (5.5), integrate on [t, T] and take expectation to obtain

v_n(t, x) = -\frac12\, E \int_t^T \Big\{ |\Pi_n z(s) + v_{n,x}(s, y_n(s))|^2 - \chi^2\big( |v_{n,x}(s, y_n(s))| - R \big) \Big\}\, ds + E\Big\{ \int_t^T \Big[ g(y_n(s; t, x, z)) + \tfrac12 |\Pi_n z(s)|^2 \Big]\, ds + \phi(y_n(T; t, x, z)) \Big\}.

By Proposition 5.2 and Theorem 4.5, we obtain (5.13) in the limit as n \to \infty.

Next, we note that the following inequality holds:

(5.14)    |z(s) + v_x(s, y(s; t, x, z))|^2 - \chi^2\big( |v_x(s, y(s; t, x, z))| - R \big) \ge 0.

Thus, from (5.13) and (5.14) it follows that v(t, x) \le V(t, x). To prove the reverse inequality, let us consider the closed-loop equation

(5.15)    y(s) = e^{(s-t)A} x + \int_t^s e^{(s-r)A}\big[ F(y(r)) - h(v_x(r, y(r))) \big]\, dr + \sqrt{\varepsilon}\, W_A(t, s), \qquad T \ge s \ge t \ge 0,

which can be solved by the Schauder fixed-point theorem (see, e.g., [13, Cor. 2.3]). Indeed, from (2.8) it follows that e^{tA} is compact for t > 0. Let y^* be a mild solution of (5.15). Taking

(5.16)    z(s) := -h\big( v_x(s, y^*(s)) \big),

we have the equality in (5.14), and so v(t, x) \ge V(t, x) for all t < T. Moreover, the choice (5.16) provides an optimal control at (t, x). Finally, the feedback formula (5.11) follows from (5.13) and the fact that v(t, x) = V(t, x). \Box

Example 5.4. Let X = L^2(0, \pi) and define D(A) = H^2(0, \pi) \cap H_0^1(0, \pi),

(5.17)    A x = \frac{\partial^2 x}{\partial \xi^2} \quad \forall x \in D(A), \qquad F(x)(\xi) = f(x(\xi)), \qquad g(x) = \int_0^\pi \gamma(x(\xi))\, d\xi, \qquad \phi(x) = \int_0^\pi \phi_1(x(\xi))\, d\xi,

where f, \gamma, \phi_1 \in C_b(R) and f is Lipschitz continuous. By inspection, F, g, and \phi fulfill the hypotheses of Theorem 5.3. As for the operator A, (3.2) is satisfied with \alpha_k = k^2, and so (2.8) holds true for any \sigma \in ]0, 1/4[.
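To see why the admissible range of \sigma is ]0, 1/4[ here, note that the eigenvalues of A = d^2/d\xi^2 on (0, \pi) with Dirichlet boundary conditions are -k^2, so condition (2.8) becomes

\sum_{k=1}^{\infty} \alpha_k^{2\sigma - 1} = \sum_{k=1}^{\infty} k^{2(2\sigma - 1)} = \sum_{k=1}^{\infty} k^{4\sigma - 2} < \infty,

which holds if and only if 4\sigma - 2 < -1, i.e. \sigma < 1/4.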

Therefore, the results of this section apply to the following stochastic optimal control problem. Minimize

(5.18)    J(t, x; z) = E\Big\{ \int_t^T \int_0^\pi \Big[ \gamma(y(s, \xi)) + \tfrac12 |z(s, \xi)|^2 \Big]\, d\xi\, ds + \int_0^\pi \phi_1(y(T, \xi))\, d\xi \Big\}

over all controls z \in M_W^2(t, T; L^2(0, \pi)) satisfying \int_0^\pi |z(s, \xi)|^2\, d\xi \le R^2 almost surely for all s \in [t, T], where the state y is subject to

(5.19)    dy(s, \xi) = \Big[ \frac{\partial^2 y}{\partial \xi^2}(s, \xi) + f(y(s, \xi)) + z(s, \xi) \Big]\, ds + \sqrt{\varepsilon}\, dW(s), \qquad y(s, 0) = y(s, \pi) = 0, \quad s \in [t, T], \qquad y(t, \cdot) = x(\cdot).

Remark 5.5. Let us consider the same problem as in Example 5.4 for an N-dimensional parabolic state equation, i.e., taking X = L^2([0, \pi]^N) and A x = \Delta x, with Dirichlet or Neumann boundary conditions. Then Theorem 5.3 does not apply. In fact, we can show that q(t) = t^{1 - N/2}\, O(1) (see [7]) and (2.6) is not satisfied. However, if we consider the iterated Laplace operator A x = -(-\Delta)^m x (with Dirichlet boundary conditions), we have q(t) = t^{1 - N/(2m)}\, O(1) and Theorem 5.3 applies if N < 2m.
Remark 5.6. Using Theorem 5.3 and the variational technique of [8] we can characterize the value functions of deterministic optimal control problems as limits, as \varepsilon \downarrow 0, of the mild solutions u_\varepsilon of (4.1). For example, consider the following problem.

Given R > 0, minimize

(5.20)    J(t, x; z) = \int_t^T \Big[ g(y(s; t, x, z)) + \tfrac12 |z(s)|^2 \Big]\, ds + \phi(y(T; t, x, z))

over all controls z \in L^2(t, T; X) satisfying |z(s)| \le R. Here y(\cdot\,; t, x, z) is the mild solution of the state equation

y'(s) = A y(s) + F(y(s)) + z(s), \qquad t \le s \le T, \qquad y(t) = x.

Define the value function of problem (5.20) as

V(t, x) = \inf\big\{ J(t, x; z) : z \in L^2(t, T; X),\ |z(s)| \le R \big\}.

Then, for all (t, x) \in [0, T] \times X, we can show that

(5.21)    |u_\varepsilon(T - t, x) - V(t, x)| \le \dots ,

where \omega_g (respectively, \omega_\phi) denotes a concave modulus of continuity for g (respectively, \phi) and C_\varepsilon = \sqrt{\varepsilon\, (q(T) - q(t))}.

REFERENCES

[1] V. BARBU AND G. DA PRATO, Hamilton-Jacobi Equations in Hilbert Spaces, Pitman, Boston, 1982.
[2] P. CANNARSA AND G. DA PRATO, The vanishing viscosity method in infinite dimensions, Atti Accad. Naz. Lincei, to appear.
[3] ——, A semigroup approach to Kolmogoroff equations in Hilbert spaces, Appl. Math. Lett., to appear.
[4] M. G. CRANDALL AND P. L. LIONS, Hamilton-Jacobi equations in infinite dimensions. Part IV, preprint.
[5] G. DA PRATO, Some results on Bellman equation in Hilbert spaces, SIAM J. Control Optim., 23 (1985), pp. 61-71.
[6] G. DA PRATO, S. KWAPIEN, AND J. ZABCZYK, Regularity of solutions of linear stochastic equations in Hilbert spaces, Stochastics, 23 (1987), pp. 1-23.
[7] G. DA PRATO AND J. ZABCZYK, Smoothing properties of transition semigroups in Hilbert spaces, Stochastics, to appear.
[8] W. H. FLEMING, The Cauchy problem for a nonlinear first order partial differential equation, J. Differential Equations, 5 (1969), pp. 515-530.
[9] T. HAVÂRNEANU, Existence for the dynamic programming equation of control diffusion processes in Hilbert space, Nonlinear Anal. Theory Methods Appl., 9 (1985), pp. 619-629.
[10] P. L. LIONS, Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. Part I: The case of bounded stochastic evolutions, Acta Math., 161 (1988), pp. 243-278.
[11] ——, Viscosity Solutions of Fully Nonlinear Second-Order Equations and Optimal Stochastic Control in Infinite Dimensions. Part II: Optimal Control of Zakai's Equation, Lecture Notes in Mathematics, Vol. 1390, Springer-Verlag, Berlin, 1989.
[12] ——, Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. Part III: Uniqueness of viscosity solutions for general second-order equations, J. Funct. Anal., 86 (1989), pp. 1-18.
[13] A. PAZY, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-Verlag, New York, 1983.
