
2.1.3 Simple finite difference discretization

We now proceed with the simplest numerical approach for solving (2.1) in the case $d = 1$, where we write $q = \lambda$ and assume $\lambda > 0$. Hence, our model problem is:

$$u_t + \lambda u_x = 0, \qquad u(x, 0) = u_0(x) \qquad (\lambda > 0).$$

We use the finite difference method, that is, we use various incremental ratios to approximate the time and space derivatives. To this aim, we subdivide the temporal and spatial intervals uniformly into subintervals of size $\Delta t$ and $\Delta x$, respectively, whereby $x_j = x_0 + j\Delta x$, $j = 0, \ldots, n$, and $t_k = t_0 + k\Delta t$, $k = 0, \ldots, m$, and denote by $u^k_{h,j} \approx u(x_j, t_k)$ the numerical approximation of the solution at the grid point $(j, k)$ (Figure 2.2). We can approximate the temporal and spatial derivatives using simple incremental ratios with step $\Delta x$ or $\Delta t$, or even $2\Delta x$ or $2\Delta t$. We obtain the following:

$$u_t(x_j, t_k) \approx \frac{u^{k+1}_{h,j} - u^k_{h,j}}{\Delta t}, \qquad u_t(x_j, t_k) \approx \frac{u^{k+1}_{h,j} - u^{k-1}_{h,j}}{2\Delta t},$$

$$u_x(x_j, t_k) \approx \frac{u^k_{h,j+1} - u^k_{h,j}}{\Delta x}, \qquad u_x(x_j, t_k) \approx \frac{u^k_{h,j} - u^k_{h,j-1}}{\Delta x}, \qquad u_x(x_j, t_k) \approx \frac{u^k_{h,j+1} - u^k_{h,j-1}}{2\Delta x}.$$
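To make the accuracy of these incremental ratios concrete, the following short Python sketch (an illustration added here, not part of the original text; the test function and step sizes are arbitrary choices) compares one-sided and centered quotients on a smooth function.

import numpy as np

# Test function and its exact derivative (chosen arbitrarily for this illustration).
f = np.sin
df_exact = np.cos
x0 = 0.7

for dx in (0.1, 0.05, 0.025):
    fwd = (f(x0 + dx) - f(x0)) / dx              # one-sided quotient, error O(dx)
    bwd = (f(x0) - f(x0 - dx)) / dx              # one-sided quotient, error O(dx)
    ctr = (f(x0 + dx) - f(x0 - dx)) / (2 * dx)   # centered quotient, error O(dx^2)
    print(dx, abs(fwd - df_exact(x0)), abs(bwd - df_exact(x0)), abs(ctr - df_exact(x0)))

Halving $\Delta x$ roughly halves the one-sided errors and divides the centered error by four, consistent with the Taylor expansions used below.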

We can then use forward discretizations of the first derivatives in both space and time to write the following difference equation:

$$\frac{u^{k+1}_{h,j} - u^k_{h,j}}{\Delta t} = -\lambda\,\frac{u^k_{h,j+1} - u^k_{h,j}}{\Delta x},$$

which, setting $\mu = \Delta t/\Delta x$, can be written as:

$$u^{k+1}_{h,j} = u^k_{h,j} - \mu\lambda\,\bigl(u^k_{h,j+1} - u^k_{h,j}\bigr) = (1 + \mu\lambda)\,u^k_{h,j} - \mu\lambda\,u^k_{h,j+1}. \qquad (2.5)$$

It is easy to see that this method is consistent, i.e., substitution of the real solution $u$ in place of the numerical solution $u_h$ yields a residual that goes to zero as $\Delta t$ and $\Delta x$ tend to zero. This is readily obtained using the following Taylor series expansions:

$$u(x_j + \Delta x, t_k) = u(x_j, t_k) + u_x(x_j, t_k)\,\Delta x + u_{xx}(x_j, t_k)\,\frac{\Delta x^2}{2} + O(\Delta x^3),$$

$$u(x_j, t_k + \Delta t) = u(x_j, t_k) + u_t(x_j, t_k)\,\Delta t + u_{tt}(x_j, t_k)\,\frac{\Delta t^2}{2} + O(\Delta t^3),$$

and substituting the real solution $u(x_j, t_k)$ in place of $u^k_{h,j}$ into eq. (2.5) we can calculate the residual (local truncation error) as:

$$\begin{aligned}
\tau_h(x_j, t_k) = {}& u(x_j, t_k) + u_t(x_j, t_k)\,\Delta t + u_{tt}(x_j, t_k)\,\frac{\Delta t^2}{2} + O(\Delta t^3) \\
& - (1 + \mu\lambda)\,u(x_j, t_k) + \mu\lambda\left[ u(x_j, t_k) + u_x(x_j, t_k)\,\Delta x + u_{xx}(x_j, t_k)\,\frac{\Delta x^2}{2} + O(\Delta x^3) \right] \\
= {}& u_{tt}(x_j, t_k)\,\frac{\Delta t^2}{2} + O(\Delta t^3) + \mu\lambda\left[ u_{xx}(x_j, t_k)\,\frac{\Delta x^2}{2} + O(\Delta x^3) \right], \qquad (2.6)
\end{aligned}$$

where the first-order terms cancel because $u_t + \lambda u_x = 0$ and $\mu\lambda\,\Delta x = \lambda\,\Delta t$.
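As a cross-check of (2.6), the following sympy sketch (an illustration added here; it uses the particular exact solution $\sin(x - \lambda t)$ and sets $\Delta x = \Delta t/\mu$, both our own choices) expands the residual of scheme (2.5) in powers of $\Delta t$ and confirms that the first-order term cancels.

import sympy as sp

x, t = sp.symbols('x t', real=True)
dt, lam, mu = sp.symbols('dt lambda mu', positive=True)
dx = dt / mu                                   # mu = dt/dx, hence dx = dt/mu

# A particular smooth exact solution of u_t + lambda*u_x = 0, used only for this check.
u = lambda X, T: sp.sin(X - lam * T)

# Residual of scheme (2.5) applied to the exact solution.
tau = u(x, t + dt) - (1 + mu * lam) * u(x, t) + mu * lam * u(x + dx, t)

# Expand in powers of dt: the O(dt) term cancels and an O(dt^2) residual remains.
print(sp.simplify(sp.series(tau, dt, 0, 3).removeO()))

The printed leading term is $-\tfrac{1}{2}(\lambda^2 + \lambda/\mu)\sin(x - \lambda t)\,\Delta t^2$, i.e., exactly $u_{tt}\Delta t^2/2 + \mu\lambda\,u_{xx}\Delta x^2/2$ for this particular solution, as in (2.6).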

Equation (2.6) shows that the local error tends to zero with the discretization sizes $\Delta t$ and $\Delta x$. Since this is a local error (between two mesh nodes and two time steps), the global error is one order less, i.e., $O(\Delta t + \Delta x)$. Other schemes can be devised and analyzed. We can write the following:

$$\begin{aligned}
u^{k+1}_{h,j} &= u^k_{h,j} - \mu\lambda\,\bigl(u^k_{h,j+1} - u^k_{h,j}\bigr) && \text{forward Euler - downwind} \\
u^{k+1}_{h,j} &= u^k_{h,j} - \mu\lambda\,\bigl(u^k_{h,j} - u^k_{h,j-1}\bigr) && \text{forward Euler - upwind} \\
u^{k+1}_{h,j} &= u^k_{h,j} - \frac{\mu\lambda}{2}\,\bigl(u^k_{h,j+1} - u^k_{h,j-1}\bigr) && \text{forward Euler - central} \\
u^{k+1}_{h,j} &= u^{k-1}_{h,j} - \mu\lambda\,\bigl(u^k_{h,j+1} - u^k_{h,j-1}\bigr) && \text{leap-frog} \\
u^{k+1}_{h,j} &= \frac{u^k_{h,j+1} + u^k_{h,j-1}}{2} - \frac{\mu\lambda}{2}\,\bigl(u^k_{h,j+1} - u^k_{h,j-1}\bigr) && \text{Lax-Friedrichs}
\end{aligned}$$

However, convergence towards the real solution is not guaranteed by consistency alone, as stated by the Lax-Richtmyer equivalence theorem (theorem 1.1), but requires that the scheme be stable. We need to define stability in the case of our FD approximations, but we also need to properly define convergence, although the latter is a more intuitive concept. We will not delve into these issues, which are at the heart of numerical analysis, but give only an intuitive and practical explanation of these concepts.
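For concreteness, the one-step updates of the schemes listed above can be written compactly with periodic boundary conditions using np.roll. This is a sketch added for illustration; the function names are ours, and c denotes the Courant number $\mu\lambda = \lambda\Delta t/\Delta x$.

import numpy as np

def step_downwind(u, c):
    # forward Euler - downwind: u_j - c*(u_{j+1} - u_j)
    return u - c * (np.roll(u, -1) - u)

def step_upwind(u, c):
    # forward Euler - upwind: u_j - c*(u_j - u_{j-1})
    return u - c * (u - np.roll(u, 1))

def step_central(u, c):
    # forward Euler - central: u_j - (c/2)*(u_{j+1} - u_{j-1})
    return u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))

def step_leapfrog(u_old, u, c):
    # leap-frog: u_j^{k-1} - c*(u_{j+1}^k - u_{j-1}^k), needs two previous levels
    return u_old - c * (np.roll(u, -1) - np.roll(u, 1))

def step_lax_friedrichs(u, c):
    # Lax-Friedrichs: neighbour average minus (c/2)*(u_{j+1} - u_{j-1})
    return 0.5 * (np.roll(u, -1) + np.roll(u, 1)) - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))

Here np.roll(u, -1) and np.roll(u, 1) play the role of the shift operators, with periodic wrap-around at the boundaries.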

Example 2.2. Solve the transport problem:

$$u_t + u_x = 0, \qquad u(x, 0) = \begin{cases} 1 - |x| & \text{if } |x| \le 1, \\ 0 & \text{otherwise,} \end{cases}$$

using the above schemes.

Algorithm 1 Forward-backward finite difference method for the one-dimensional transport equation with constant velocity. We use periodic boundary conditions and assume the velocity of propagation is positive, $\lambda > 0$. This implies that the periodic BC must be imposed at the first node of the FD mesh and nothing is imposed on the last node.

1: Set $\lambda > 0$; $a$, $b$ (interval boundaries); $\Delta x$; $T$ (final time); $\Delta t$;

2: $N = (b - a)/\Delta x$; $M = T/\Delta t$; $\mu = \Delta t/\Delta x$.

3: for $j = 1, 2, \ldots, N$ do

4: $u_{h,j} = u_0(a + (j - 1)\Delta x)$;

5: end for

6: for $k = 1, \ldots, M$ do

7: $u_{\mathrm{old}} = u_h$ (copy of the values at time level $k-1$);

8: $u_{h,1} = (1 - \lambda\mu)\,u_{\mathrm{old},1} + \lambda\mu\,u_{\mathrm{old},N}$;

9: for $j = 2, \ldots, N$ do

10: $u_{h,j} = (1 - \lambda\mu)\,u_{\mathrm{old},j} + \lambda\mu\,u_{\mathrm{old},j-1}$;

11: end for

12: end for
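A direct Python transcription of Algorithm 1 might look as follows. This is a sketch under the same assumptions (constant $\lambda > 0$, periodic boundary conditions, the triangular initial profile of Example 2.2); the function names and default parameter values are our own choices. The array is copied at each step so that only values from the previous time level are used.

import numpy as np

def u0(x):
    # Initial condition of Example 2.2: triangular hat of unit height.
    return np.where(np.abs(x) <= 1.0, 1.0 - np.abs(x), 0.0)

def upwind_transport(lam=1.0, a=-2.0, b=2.0, dx=0.05, T=0.2, dt=0.025):
    # Forward-backward (upwind) scheme with periodic BC, following Algorithm 1.
    N = int(round((b - a) / dx))       # nodes x_j = a + (j-1)*dx, j = 1..N
    M = int(round(T / dt))
    mu = dt / dx                       # lam*mu is the Courant number
    x = a + dx * np.arange(N)
    uh = u0(x)
    for _ in range(M):
        uold = uh.copy()               # keep the values at the previous time level
        uh = (1.0 - lam * mu) * uold + lam * mu * np.roll(uold, 1)
    return x, uh

x, uh = upwind_transport()
print(np.max(np.abs(uh - u0(x - 1.0 * 0.2))))  # distance from the exactly translated profile

The final print compares the numerical solution with the translated initial profile $u_0(x - \lambda t)$; refining $\Delta x$ and $\Delta t$ (with fixed Courant number) makes this error decrease, as reported in Figure 2.3.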

We report in fig. 2.3 (top row) the results obtained at $t = 0.1$ and $t = 0.2$ using the leap-frog (left panel) and the forward-forward (right panel) schemes. We see immediately that the leap-frog shows some small oscillations at the left foot of the wave, while the forward-forward shows large oscillations that prevent any visually correct representation of the solution. The bottom row reports the behavior of the forward-backward scheme, implemented following Algorithm 1. The results are obtained at two successive refinements of the spatial grid (halving the value of $\Delta x$ each time) and show experimentally the convergence of the numerical solution towards the real solution, represented by the translation of the initial condition by the quantity $\lambda t$.

FIGURE 2.3: Top panel: finite difference solution of the transport equation of example 2.2 using the leap-frog scheme (left) and the forward-forward scheme (right). Bottom panel: convergence in space of the stable forward-backward scheme.

We would like to explain the behavior of the three schemes seen in the previous example; hence we need to test the stability of the schemes. To this aim we introduce the shift operator $E_h$, defined as $E_h g(x) = g(x + \Delta x)$, so that we can formally write:

$$u^{k+1}_{h,j} = (1 + \mu\lambda - \mu\lambda E_h)\,u^k_{h,j} = T_h\,u^k_{h,j},$$

where $T_h$ is the so-called "transfer operator" (spatial transfer). To ascertain stability we need to look at the error and its norm, defined as:

$$\varepsilon^k_{h,j} = u^k_{h,j} - u(x_j, t_k); \qquad \|\varepsilon^k_{h,\cdot}\|_p = \left( \Delta x \sum_{j=-\infty}^{\infty} |\varepsilon^k_{h,j}|^p \right)^{1/p},$$

which are a discrete approximation of the real error $\varepsilon_h(x, t)$ and its $L^p$ norm. Because of linearity, we can write:

$$u^{k+1}_{h,j} = T_h\,u^k_{h,j} = T_h\bigl(\varepsilon^k_{h,j} + u(x_j, t_k)\bigr),$$

from which we have:

$$\begin{aligned}
\varepsilon^{k+1}_{h,j} &= u^{k+1}_{h,j} - u(x_j, t_{k+1}) \\
&= T_h\,u^k_{h,j} - u(x_j, t_{k+1}) \\
&= T_h\bigl(\varepsilon^k_{h,j} + u(x_j, t_k)\bigr) - u(x_j, t_{k+1}) \\
&= T_h\bigl(\varepsilon^k_{h,j} + u(x_j, t_k)\bigr) - T_h\,u(x_j, t_k) + T_h\,u(x_j, t_k) - u(x_j, t_{k+1}) \\
&= T_h\bigl(\varepsilon^k_{h,j} + u(x_j, t_k)\bigr) - T_h\,u(x_j, t_k) - \tau_h(x_j, t_k).
\end{aligned}$$

Taking norms and using the triangle inequality, we now have:

$$\|\varepsilon^{k+1}_{h,\cdot}\| \le \bigl\| T_h\bigl(\varepsilon^k_{h,\cdot} + u(\cdot, t_k)\bigr) - T_h\,u(\cdot, t_k) \bigr\| + \|\tau_h(\cdot, t_k)\| \le C_T\,\|\varepsilon^k_{h,\cdot}\| + \|\tau_h(\cdot, t_k)\|,$$

where $C_T$ is the norm of the operator $T_h$. If $C_T < 1$, i.e., $T_h$ is contractive, looking at the expression for $\tau_h(x, t)$ we see that the scheme converges as $\Delta t$ and $\Delta x$ tend to zero if $\sup|u_{tt}|$ and $\sup|u_{xx}|$ are bounded.
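The contrast between a contractive and a non-contractive transfer operator can be observed numerically. The sketch below (added for illustration, with periodic boundary conditions and an arbitrary random perturbation of the data) applies the downwind operator $1 + \mu\lambda - \mu\lambda E_h$ and the upwind operator $1 - \mu\lambda + \mu\lambda E_h^{-1}$ repeatedly to a small perturbation and prints the growth of its maximum norm.

import numpy as np

rng = np.random.default_rng(0)
c = 0.5                                  # Courant number mu*lambda
eps = 1e-6 * rng.standard_normal(200)    # small perturbation of the initial data

down, up = eps.copy(), eps.copy()
for k in range(1, 101):
    down = (1 + c) * down - c * np.roll(down, -1)   # downwind transfer operator
    up = (1 - c) * up + c * np.roll(up, 1)          # upwind transfer operator
    if k % 25 == 0:
        print(k, np.max(np.abs(down)), np.max(np.abs(up)))

The downwind perturbation grows roughly like $(1 + 2\mu\lambda)^k$, while the upwind one stays bounded, anticipating the analysis below.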

A better look at the error can be derived as follows. By induction on k, the scheme can be written as:

$$u^k_{h,j} = (1 + \mu\lambda - \mu\lambda E_h)^k\,u^0_{h,j},$$

and thus, by linearity and using consistency, we have that:

$$\varepsilon^k_{h,j} = (1 + \mu\lambda - \mu\lambda E_h)^k\,\varepsilon^0_{h,j}.$$

Using the binomial theorem:

$$u^k_{h,j} = (1 + \mu\lambda - \mu\lambda E_h)^k\,u^0_{h,j} = \sum_{r=0}^{k} \binom{k}{r} (1 + \mu\lambda)^{k-r} (-\mu\lambda)^r\, u^0_{h,j+r}.$$

We can define the domain of dependence of this difference equation for $u^k_{h,j}$ as the set of points $x, x + \Delta x, \ldots, x + k\Delta x$, i.e., the set of grid points on the $x$-axis that lie between $x$ and $x + k\Delta x$. As we have seen, the domain of dependence of the original equation is given by the single point $\xi = x - \lambda t = x - \lambda k\mu\Delta x$: the domain of dependence of the difference equation does not contain the domain of dependence of the differential equation. The initial value $f(\xi)$ is never used in forming the numerical solution. Thus, it is intuitive that the scheme will not converge to the correct solution. In fact, this scheme fails the "Courant-Friedrichs-Lewy" (CFL) condition, which states that the domain of dependence of the numerical scheme must contain the domain of dependence of the PDE. The scheme is also not stable, as can be seen easily if we assume that $f(x)$ is calculated with an error denoted by $\epsilon$. Then, considering that an initial error with alternating signs makes all the terms of the expansion add up in absolute value, the error at the $k$-th step can be as large as approximately:

$$\varepsilon^k \approx (1 + 2\mu\lambda)^k\,\epsilon,$$

which, being $\mu\lambda > 0$, explodes exponentially with $k$.

If now we use the forward discretization in time and the backward discretization in space, we obtain the scheme:

$$u^{k+1}_{h,j} = u^k_{h,j} - \mu\lambda\,\bigl(u^k_{h,j} - u^k_{h,j-1}\bigr) = (1 - \mu\lambda)\,u^k_{h,j} + \mu\lambda\,E_h^{-1}u^k_{h,j},$$

where $E_h^{-1}u_{h,j} = u_{h,j-1}$.

FIGURE 2.4: Downwind (left) vs. upwind (right) FD in the $t$-$x$ plane.

FIGURE 2.5: Upwind FD in the $u$-$x$ plane. Crosses denote grid points in space. We can easily see that for $\lambda\,\Delta t/\Delta x = 1$, i.e., CFL = 1, the upwind FD scheme returns the exact solution.

Written in terms of the initial data, we have the equivalent difference equation:

$$u^k_{h,j} = \bigl((1 - \mu\lambda) + \mu\lambda\,E_h^{-1}\bigr)^k u^0_{h,j} = \sum_{r=0}^{k} \binom{k}{r} (1 - \mu\lambda)^{k-r} (\mu\lambda)^r\, u^0_{h,j-r}.$$

Now the domain of dependence of the scheme is the set of points $x, x - \Delta x, \ldots, x - k\Delta x = x - t/\mu$. At convergence ($\Delta x, \Delta t \to 0$, keeping $\mu = \Delta t/\Delta x$ constant) this set tends to the interval $[x - t/\mu, x]$.

Hence the Courant-Friedrichs-Lewy condition is satisfied if this interval contains the point $\xi = x - \lambda t$, i.e., if $\mu$ satisfies the stability criterion, or CFL condition (Figure 2.4):

$$\mu\lambda = \frac{\lambda\,\Delta t}{\Delta x} \le 1.$$

The stability of the scheme is proved as before: since for $\mu\lambda \le 1$ both coefficients $(1 - \mu\lambda)$ and $\mu\lambda$ are nonnegative and sum to one, an error of size $\epsilon$ on the initial data is amplified at most by

$$\bigl((1 - \mu\lambda) + \mu\lambda\bigr)^k = 1,$$

i.e., it does not grow with $k$.

It can be shown that the backward (or "upwind") scheme converges if $\mu\lambda \le 1$. In other words, we want to see that whenever $\Delta x \to 0$, $j \to \infty$ and $\Delta t \to 0$, $k \to \infty$ in such a way that $x = j\Delta x$ and $t = k\Delta t$ remain fixed, the numerical solution tends to the exact one. Indicating with $w = u - u_h$ the error function, we have:

$$\bigl| w(x, t + \Delta t) - (1 - \mu\lambda)\,w(x, t) - \mu\lambda\, w(x - \Delta x, t) \bigr| \le C\,\Delta x^2.$$

Taking the sup over $x$ and using the triangle inequality, we can write:

$$\sup_x |w(x, t + \Delta t)| \le \sup_x |w(x, t)| + C\,\Delta x^2,$$

so that, starting from $w(x, 0) = 0$ and iterating over the $k = t/\Delta t$ steps,

$$\sup_x |w(x, t)| \le k\,C\,\Delta x^2 = \frac{C\,t}{\mu}\,\Delta x,$$

from which convergence is proved since $t$ is fixed and $\mu = \mathrm{const}$.

2.2 The method of characteristics

A Lagrangian point of view. We assume that a certain quantity (energy, density, etc.) is transported in a velocity field $\lambda(x, t)$. If we look at the trajectory of every particle, we are using what is called the Lagrangian approach, i.e., we try to infer the solution to the problem by measuring data in a reference frame that moves with $\lambda$. If, on the other hand, we stand at a point $x$ and measure the velocity and the transported quantity, then we are using the Eulerian approach, i.e., we infer the solution by measuring data at a fixed set of points. This is what we have done in the previous section.

For every initial point $x_0 \in \mathbb{R}^d$, the trajectory starting from $x_0$ is given by:

$$\frac{dx}{dt} = q(x, t), \qquad x(0) = x_0.$$

The solution to this ODE is given by $x = \gamma_{x_0}(t)$, i.e., a curve written in parametric form, $\gamma_{x_0} : \mathbb{R} \to \mathbb{R}^d$.

If the quantity (call it $u(x, t)$) transported with the field $q$ does not change in time, its total derivative is zero:

$$D_t u = \frac{du}{dt}\bigl(\gamma_{x_0}(t), t\bigr) = \frac{\partial u}{\partial t} + q\cdot\nabla u = 0,$$

which in $\mathbb{R}^1$ is written as:

$$D_t u = \frac{du}{dt}\bigl(\gamma_{x_0}(t), t\bigr) = u_t + \lambda u_x = 0,$$

i.e., the advection equation we have studied in the previous section.

Let us now fix an initial condition. In other words, we want to solve the following transport problem:

$$u_t + \lambda u_x = 0, \qquad u(x, 0) = f(x).$$

To find the solution of this problem, we want to find the parameterized curve $x = \gamma_{x_0}(t)$ for every $x_0 \in \mathbb{R}$ such that:

$$\frac{dx}{dt} = \lambda(x, t), \qquad x(0) = x_0.$$

For the transported quantity, since $D_t u = 0$, $u$ is constant along each trajectory:

$$u\bigl(\gamma_{x_0}(t), t\bigr) = f(x_0).$$

Denoting by $x_0 = \gamma^{-1}(x, t)$ the starting point of the trajectory passing through $(x, t)$, we obtain:

$$u(x, t) = f\bigl(\gamma^{-1}(x, t)\bigr).$$

In other words, the solution u is given by the function f (the initial data) along the characteristic curve γ. The task is thus to find the characteristic curves, along which the signal is propagated “without changing shape”. This last sentence is in quotes as it is true only for linear advection and with no source term, as we will see in the following examples.
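When $\lambda$ depends on $x$ and $t$ the characteristics are no longer straight lines, but the same recipe applies: to evaluate $u(x, t)$, integrate the trajectory backward from $(x, t)$ to time 0 and read off the initial datum at its foot. A minimal sketch, assuming a smooth field $\lambda(x, t)$ and using scipy's solve_ivp (the specific $\lambda$ and $f$ below are illustrative choices, not taken from the text):

import numpy as np
from scipy.integrate import solve_ivp

lam = lambda x, t: 1.0 + 0.5 * np.sin(x)   # illustrative smooth velocity field
f = lambda x: np.exp(-x**2)                # illustrative initial datum

def u_characteristics(x, t):
    # Trace the characteristic through (x, t) backward to time 0 and read the initial datum.
    sol = solve_ivp(lambda s, y: [lam(y[0], s)], (t, 0.0), [x], rtol=1e-8, atol=1e-10)
    x0 = sol.y[0, -1]                      # foot of the characteristic
    return f(x0)

print(u_characteristics(0.3, 0.5))

For constant $\lambda$ the backward integration simply returns $x_0 = x - \lambda t$, recovering the examples below.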

Example 2.3. Find the solution to the problem:

$$u_t + u_x = 0, \qquad (2.8a)$$
$$u(x, 0) = \sin(x). \qquad (2.8b)$$

In this case, λ = 1, and the ODE determining the trajectories for (2.8) is:

$$\frac{dx}{dt} = 1, \qquad x(0) = x_0,$$

which has solution given by x = t + x0, so that:

$$x_0 = x - t, \qquad u(x, t) = \sin(x - t).$$

Example 2.4. Find the solution to the problem:

$$u_t + u_x = u, \qquad (2.9a)$$
$$u(x, 0) = f(x). \qquad (2.9b)$$

In this case, λ = 1, but the equation is not homogeneous. The ODE for the trajectories of (2.9) is the same, but now the transported quantity is not conserved any longer as it undergoes a transformation directly proportional to u, i.e.:

$$D_t u = u.$$

This equation has a general solution given by $u = C\,e^{t}$ along each trajectory, with $C = f(x_0)$. The complete solution is then given by:

$$x_0 = x - t, \qquad u(x, t) = f(x - t)\,e^{t}.$$
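A quick symbolic check (added here for illustration, with an arbitrary smooth $f$) confirms that $u(x, t) = f(x - t)\,e^{t}$ satisfies $u_t + u_x = u$:

import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')
u = f(x - t) * sp.exp(t)                  # candidate solution of (2.9)

residual = sp.diff(u, t) + sp.diff(u, x) - u
print(sp.simplify(residual))              # prints 0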

Quasilinear equations Consider the general first order quasilinear equation of the form:

$$a(x, y, u)\,u_x + b(x, y, u)\,u_y = c(x, y, u), \qquad (2.10)$$

where we may think of the variable $x$ as space and $y$ as time. The solution function is a surface in the $(x, y, z)$ space, $z = u(x, y)$, called an "integral surface". The functions $a(x, y, u)$, $b(x, y, u)$ and $c(x, y, u)$ define a vector field in the $(x, y, z)$ space, and identify the field of local "characteristic directions" with direction numbers $(a, b, c)$. The vector normal to the surface $z = u(x, y)$ at each point is given by $(u_x, u_y, -1)$. Thus, equation (2.10) states that $(a, b, c)\cdot(u_x, u_y, -1) = 0$, i.e., the integral surface is at each point tangent to the characteristic direction. We can associate to this surface the family of "characteristic curves", defined so that they are tangent to the direction field at each point. Along a characteristic curve given in parametric form as $(x, y, z) = \Gamma(\sigma) = (f(\sigma), g(\sigma), h(\sigma))$, the tangent condition can be written as:

$$\frac{dx}{d\sigma} = a(x, y, z), \qquad \frac{dy}{d\sigma} = b(x, y, z), \qquad \frac{dz}{d\sigma} = c(x, y, z).$$

The solution of this system of ODEs is formed by a 3-parameter family of curves $(x(\sigma), y(\sigma), z(\sigma))$ (the parameters corresponding to the constants that arise from solving each of the three ODEs above). This gives rise to a 2-parameter family of characteristic curves, since multiplying the functions $a$, $b$, $c$ by a constant is equivalent to rescaling the parameter $\sigma$, so that the equation and its solution are scaled but the characteristic curves are not changed. It can be shown that there exists a characteristic curve passing through every point of an integral surface, and vice-versa. The integral surface is then the union of characteristic curves. If two integral surfaces intersect at a point $P$, then they intersect along an entire characteristic curve passing through $P$.

As a consequence, a unique solution is defined by choosing one single member of the family of characteristic curves. Let Γ be represented in parametric form by the equations:

$$x = f(\sigma), \qquad y = g(\sigma), \qquad z = h(\sigma).$$

Then the solution $u(x, y)$ along $\Gamma$ must satisfy the relationship:

$$h(\sigma) = u\bigl(f(\sigma), g(\sigma)\bigr). \qquad (2.11)$$

We look only for “local” solutions, i.e., solutions arising for σ in a neighborhood of a point σ0, or equivalently x0 = f (σ0) and y0 = g(σ0). Then varying σ0 we can find other solutions. It can be shown that an integral surface is the union of all these solutions.

This gives rise to the Initial Value problem:

Find u(x, y) such that:

$$a(x, y, u)\,u_x + b(x, y, u)\,u_y = c(x, y, u),$$

and such that

$$u(x, 0) = h(x).$$
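This construction translates directly into a numerical recipe: for each $\sigma$ on the initial line, integrate the characteristic system in $\tau$ starting from $(\sigma, 0, h(\sigma))$. The sketch below is an illustration with arbitrary coefficient functions a, b, c and initial datum h (all names are ours); it produces one characteristic curve of the parametric surface (2.12) introduced below.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative coefficients a, b, c and initial datum h (not from the text).
a = lambda x, y, z: 1.0 + 0.1 * z
b = lambda x, y, z: 1.0
c = lambda x, y, z: 0.0
h = lambda s: np.tanh(s)

def characteristic(sigma, tau_max=1.0, n=50):
    # Integrate dx/dtau = a, dy/dtau = b, dz/dtau = c from (sigma, 0, h(sigma)).
    rhs = lambda tau, w: [a(*w), b(*w), c(*w)]
    taus = np.linspace(0.0, tau_max, n)
    sol = solve_ivp(rhs, (0.0, tau_max), [sigma, 0.0, h(sigma)], t_eval=taus)
    return sol.y                           # rows are X(sigma, .), Y(sigma, .), Z(sigma, .)

X, Y, Z = characteristic(0.5)              # one characteristic curve of the integral surface
print(X[-1], Y[-1], Z[-1])

Sweeping $\sigma$ over a grid of initial points yields the family of curves whose union is the integral surface.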

Remark 2.5. This yields the form of the equation that we have seen in the previous sections. Often $y$ coincides with time and thus $y \ge 0$, so that the IVP specifies initial data for $y = 0$. Thus, the initial-value problem is the special Cauchy problem where the prescribed initial curve is given by $x = \sigma$, $y = 0$, $z = h(\sigma)$. This shows that in this IVP we look for the integral surface that intersects the $xz$-plane ($y = 0$).

Going back to the general case, given $\sigma$ we consider the characteristic curve that passes through the point $P_\sigma = (x_\sigma, y_\sigma, z_\sigma) = (f(\sigma), g(\sigma), h(\sigma))$. Looking for "local" solutions, we let $\sigma$ vary in a neighborhood of $\sigma_0$ and, varying the parameter $\tau$ in a neighborhood of $0$ (i.e., around $P_\sigma$), we form the solution as the set of points $P = (x, y, z)$ with coordinates:

$$x = X(\sigma, \tau), \qquad y = Y(\sigma, \tau), \qquad z = Z(\sigma, \tau). \qquad (2.12)$$

Inverting the first two equations of (2.12), we can (locally) express $\sigma = S(x, y)$ and $\tau = T(x, y)$. Substituting in the third equation of (2.12), we have an explicit representation of the solution surface $z = u(x, y)$:

$$z = u(x, y) = Z\bigl(S(x, y), T(x, y)\bigr),$$

and correspondingly we can find the functions:

$$x = X\bigl(S(x, y), T(x, y)\bigr), \qquad y = Y\bigl(S(x, y), T(x, y)\bigr),$$

satisfying the initial conditions $\sigma_0 = S(x_0, y_0)$ and $\tau_0 = 0 = T(x_0, y_0)$, and thus completing the procedure of finding the solution to our problem. However, this can be done provided the determinant of the Jacobian matrix of the map $(\sigma, \tau) \mapsto (X, Y)$ does not vanish:

$$\det(J) = X_\sigma Y_\tau - X_\tau Y_\sigma = f'(\sigma_0)\,b - g'(\sigma_0)\,a \ne 0,$$

the last expression being evaluated at $\tau = 0$. If instead $\det(J) = 0$, then along the initial curve $x = f(\sigma_0)$, $y = g(\sigma_0)$ we can calculate the following relationships:

$$f'\,b = g'\,a, \qquad h' = f'\,u_x + g'\,u_y, \qquad c = a\,u_x + b\,u_y,$$

from which we obtain:

$$b\,h' = c\,g' \qquad\text{and}\qquad a\,h' = c\,f'.$$

In practice this shows that $\det(J) = 0$ implies that $f'$, $g'$, and $h'$ are proportional to $a$, $b$, and $c$, i.e., the vector field $(f', g', h')$ is parallel to the vector field $(a, b, c)$ and thus defines again the characteristic passing through $\sigma_0$, showing that this procedure is well defined.

Example 2.6. Find the solution to the problem:

$$u_y + c\,u_x = 0, \qquad (2.13a)$$
$$u(x, 0) = f(x). \qquad (2.13b)$$

In this case, $\lambda = c$ and the equation is the same as (2.1). The characteristic ODEs corresponding to (2.13) become the following system:

$$\frac{dx}{d\tau} = c, \qquad \frac{dy}{d\tau} = 1, \qquad \frac{dz}{d\tau} = 0,$$

or:

$$\frac{dx}{d\tau} = c, \qquad D_y u = 0.$$

The integral surface has the parametric representation:

$$x = X(\sigma, \tau) = \sigma + c\tau, \qquad y = Y(\sigma, \tau) = \tau, \qquad z = Z(\sigma, \tau) = f(\sigma).$$

Eliminating σ and τ we obtain the solution:

$$z = u(x, y) = f(x - cy).$$

Example 2.7. Find the solution to the problem:

$$u_y + u\,u_x = 0, \qquad (2.14a)$$
$$u(x, 0) = f(x). \qquad (2.14b)$$

In this case, $\lambda = u$, but the equation is now nonlinear (quasilinear). The system of characteristic ODEs corresponding to (2.14) is:

$$\frac{dx}{d\tau} = u, \qquad \frac{dy}{d\tau} = 1, \qquad \frac{dz}{d\tau} = 0,$$

or:

$$\frac{dx}{d\tau} = u, \qquad D_\tau u = 0,$$

to which we need to add the initial data (2.14b). The solution of this system leads to the parametric form:

$$x = \sigma + u\tau, \qquad y = \tau, \qquad z = f(\sigma).$$

The solution $u = u(x, \tau) = u(x, y)$ is then given (implicitly) by:

$$u(x, y) = f(x - uy).$$

The projection of the characteristic curve in the plane xy is given by the line (named Cσ):

$$x = \sigma + f(\sigma)\,y.$$

Along $C_\sigma$ the solution assumes the value $u = f(\sigma)$. Note that, strictly speaking, a characteristic curve lies in an integral surface in the space $xyz$; however, its projection is often also called a characteristic curve. We will use this nomenclature, as there is no risk of confusion given the different dimensionality of the two curves, but we will name $\Gamma$ and $C_\sigma$ the characteristic curve in $xyz$ and its projection onto $xy$, respectively. The curve $C_\sigma$ represents the trajectory of a particle initially (at time $y = 0$) located at $x = \sigma$. Depending on the value of $f(\sigma)$, the characteristic curves may have different slopes. Thus, in general, two characteristic curves intersect at a point unless $f(\sigma)$ is a nondecreasing function of $\sigma$. Given two characteristics $C_{\sigma_1}$ and $C_{\sigma_2}$ expressed by:

$$C_{\sigma_1}:\ x = \sigma_1 + f(\sigma_1)\,y, \qquad C_{\sigma_2}:\ x = \sigma_2 + f(\sigma_2)\,y,$$

they intersect at the point $(x, y)$ given by:

$$y = -\frac{\sigma_2 - \sigma_1}{f(\sigma_2) - f(\sigma_1)}, \qquad x = \sigma_1 + f(\sigma_1)\,y.$$

If $\sigma_1 \ne \sigma_2$ and $f(\sigma_1) \ne f(\sigma_2)$, $u$ at the point $(x, y)$ must take the two distinct values $f(\sigma_1)$ and $f(\sigma_2)$, and thus $u$ is no longer single-valued. Looking at the first derivative of $u$, we can write:

$$u(x, y) = f(x - uy); \qquad u_x = f'(\sigma)\,\frac{\partial \sigma}{\partial x}, \qquad \frac{\partial \sigma}{\partial x} = \frac{\partial}{\partial x}\bigl(x - u(x, y)\,y\bigr) = 1 - y\,u_x,$$

from which we obtain:

$$u_x = \frac{f'(\sigma)}{1 + y\,f'(\sigma)}.$$

Hence, if $f'(\sigma) < 0$, $u_x$ becomes infinite at time:

$$y = \frac{-1}{f'(\sigma)}. \qquad (2.15)$$

The smallest time at which this occurs corresponds to the value $\sigma = \sigma_0$ at which $f'$ attains its minimum. Thus the solution to equation (2.14) is unique only until the finite time $T = -1/f'(\sigma_0)$.

FIGURE 2.6: Characteristic lines for the equation in Example 2.8.

Example 2.8. We now look at this problem from a physical (geometrical) point of view, i.e., we look at what the solution looks like qualitatively, and why the loss of uniqueness occurs. We consider the particular case where $f(x) = 1/(1 + x^2)$ (here we will use the more common variable name $t$ for $y$). The problem is then:

Find the solution u(x, t) such that

$$u_t + u\,u_x = 0, \qquad (2.16a)$$
$$u(x, 0) = \frac{1}{1 + x^2}. \qquad (2.16b)$$

This is called the “inviscid Burgers” equation. As seen before, since the directional derivative vanishes along the direction of the vector (1, u), the solution u must be constant along the characteristic lines Cσ in the plane t, x given by:

$$C_\sigma:\ \bigl(t,\ x + u(x, 0)\,t\bigr) = \Bigl(t,\ x + \frac{t}{1 + x^2}\Bigr).$$

The solution is then given implicitly by:

$$u\Bigl(x + \frac{t}{1 + x^2},\ t\Bigr) = \frac{1}{1 + x^2}. \qquad (2.17)$$

FIGURE 2.7: Wave forms for Example 2.8 at times $t = 0$, $t = T_1$, and $t = T_2$, with $T_2 > T_1$.

The previous equation is valid only for early times, in particular for $t < T = 8\sqrt{3}/9$, as given by (2.15) evaluated at $x_0 = 1/\sqrt{3}$, the point that minimizes $f'(x)$. The situation is depicted in Figure 2.6.
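The value of $T$ can be checked symbolically. The following sketch (added here for illustration) minimizes $f'$ for $f(x) = 1/(1 + x^2)$ and evaluates (2.15):

import sympy as sp

x = sp.symbols('x', real=True)
f = 1 / (1 + x**2)
fp = sp.diff(f, x)

# The first breaking time corresponds to the minimizer of f'.
crit = sp.solve(sp.diff(fp, x), x)
x0 = min(crit, key=lambda s: float(fp.subs(x, s)))
T = -1 / fp.subs(x, x0)

print(sp.simplify(x0), sp.simplify(T), float(T))   # 1/sqrt(3), 8*sqrt(3)/9, about 1.54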

Given (2.17), when two characteristic lines intersect, at that point the solution $u(x, t)$ assumes two different values, identified by the particular $x_0$ from which each characteristic line originates. Thus we have a loss of uniqueness in the solution.

We now look at the graph of the function $u$ for various fixed times. The points of the graph of $u$ move in the direction of increasing $x$ with speed $u$, equal to their distance from the $x$-axis. Hence, as time progresses, the points of the graph with larger values of $u$ move faster. At times $t \ge T$, these points tend to overtake points that at previous times lagged behind, a situation depicted in Figure 2.7.
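The wave forms of Figure 2.7 can be reproduced from the implicit relation (2.17), which for each $(x, t)$ with $t < T$ amounts to solving $u = f(x - ut)$ for $u$. Below is a sketch using a scalar root finder (added for illustration; the grid of evaluation points and the chosen times are arbitrary).

import numpy as np
from scipy.optimize import brentq

f = lambda s: 1.0 / (1.0 + s**2)          # initial datum of Example 2.8
T = 8.0 * np.sqrt(3.0) / 9.0              # breaking time

def u_implicit(x, t):
    # Solve u = f(x - u*t) for u, which is unique for t < T.
    g = lambda u: u - f(x - u * t)
    return brentq(g, 0.0, 1.001)          # u takes values in (0, 1]

xs = np.linspace(-2.0, 3.0, 11)
for t in (0.0, 0.5 * T, 0.9 * T):
    print([round(u_implicit(x, t), 3) for x in xs])

As $t$ approaches $T$ the profile steepens near its right flank, which is the onset of the overturning described above.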