4.2 The vector case
$$x = \xi(s_j)\,, \qquad t = \tau(s_j) \qquad (4.17)$$
such that
$$\frac{d\xi}{ds_j} = \xi'_j(\xi, \tau, u)\,, \qquad \frac{d\tau}{ds_j} = \tau'_j(\xi, \tau, u)$$
where the index $j$ refers to the $j$-th eigenvalue of problem (4.14). Note that the integration of the previous equations is not immediate, since $\xi'$ and $\tau'$ depend not only on $\xi$ and $\tau$ but also on the unknown field $u$; indeed, we can rewrite (4.16) in the form
$$\left\langle w^{(j)} \,\Big|\, \frac{du}{ds_j}\right\rangle + \left\langle v^{(j)} \,\big|\, \varphi \right\rangle = 0$$
which requires differentiating all the unknown components of the field $u$ with respect to the $j$-th characteristic parameter.
This approach to the initial problem does not lead to any significant solution; we therefore abandon this route and turn to the method of Riemann invariants.
4.2.3 The Riemann invariants
We start again from the equation:
$$\left\langle w^{(j)} \,\Big|\, \frac{du}{ds_j}\right\rangle + \left\langle v^{(j)} \,\big|\, \varphi \right\rangle = 0 \qquad (4.18)$$
where the index $j$ refers to the $j$-th eigenvector of problem (4.14), and hence to the $j$-th characteristic curve.
Let us now fix the value of the index $j$ or, equivalently, focus on one particular characteristic curve:
$$\left\langle w \,\Big|\, \frac{du}{ds}\right\rangle + \langle v \,|\, \varphi \rangle = 0 \qquad (4.19)$$
Riemann observed¹ that the term $\langle w \,|\, du/ds\rangle$ takes the form of the power associated with a force, provided one identifies $w = f$ and $du/ds = v$, with $f$ the force acting on the system and $v$ the velocity of the particles making up the fluid.
¹B. Riemann became interested in the problem of characteristics in 1858, studying a two-dimensional gas-dynamics problem in his doctoral dissertation.
In complete analogy with requiring the force to be conservative, we set
$$w = \lambda\, \nabla_u \mu \equiv \lambda \left( \frac{\partial\mu}{\partial u^{(1)}},\ \frac{\partial\mu}{\partial u^{(2)}},\ \ldots,\ \frac{\partial\mu}{\partial u^{(n)}} \right) \qquad (4.20)$$
with $u^{(i)}$ the $i$-th component of the unknown vector $u$. The functions $\lambda$ and $\mu$ will clearly be functions of $u$ and of the space-time coordinates $x$ and $t$. This assumption immediately puts (4.19) in the form
$$\lambda \left\langle \nabla_u \mu \,\Big|\, \frac{du}{ds}\right\rangle + \langle v \,|\, \varphi \rangle = 0 \qquad (4.21)$$
which, holding for each of the $N$ eigenvectors $w^{(j)}$, reads
$$\lambda_j\, \frac{d\mu_j}{ds_j} + \left\langle v^{(j)} \,\big|\, \varphi \right\rangle = 0 \qquad (4.22)$$
Under the additional hypothesis that the forcing $\varphi$ vanishes², we obtain that the $\mu_j$ are constant along the curves of parameter $s_j$, and for this reason they are called Riemann invariants. In general the $\mu_j$ are known as Riemann variables.
The following result holds:
1. for $N = 2$ the Riemann variables always exist: one has a system of two equations in the two unknowns $\lambda$ and $\mu$ introduced by the transformation (4.20);
2. for $N > 2$ the Riemann variables do not always exist: the system is overdetermined.
Once the Riemann variables $\mu^{(j)}$ are known, the $u^{(j)}$ are reconstructed from the transformation (4.20).
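As a concrete illustration of the $N = 2$ case, the sketch below (our own, not part of the thesis) takes the constant-coefficient linearized system $r_t + \rho_0 v_x = 0$, $v_t + (c_0^2/\rho_0) r_x = 0$ used later in Chapter 5, computes the left eigenvectors of its coefficient matrix, and reads off the Riemann variables $\mu_\pm = v \pm (c_0/\rho_0)\, r$; since the eigenvectors are constant, $\lambda$ in (4.20) can be taken equal to one and each $\mu$ is simply linear in $u$.

```python
# Illustrative sketch (not from the thesis): Riemann invariants of the 2x2
# constant-coefficient system  u_t + A u_x = 0,  u = (r, v),
# A = [[0, rho0], [c0**2/rho0, 0]]  (the linearized gas-dynamics system).
import sympy as sp

rho0, c0 = sp.symbols('rho0 c0', positive=True)
r, v = sp.symbols('r v')

A = sp.Matrix([[0, rho0], [c0**2 / rho0, 0]])

# Left eigenvectors:  w A = lambda w   <=>   A^T w^T = lambda w^T
for lam, mult, vecs in A.T.eigenvects():
    w = vecs[0].T                                   # row (left) eigenvector
    # Here w is constant, so w . du is an exact differential and the Riemann
    # variable is simply mu = w . u (take lambda = 1 in the transformation (4.20)).
    mu = sp.simplify((w * sp.Matrix([r, v]))[0])
    print(f"characteristic speed {lam}:  mu = {mu}  is constant along dx/dt = {lam}")
```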
Chapter 5
Conservation laws
The Arnold-Liouville theorem tells us that if a Hamiltonian system with $n$ degrees of freedom admits $n$ independent first integrals of the motion in involution (i.e. whose mutual Poisson brackets vanish), then it is integrable [8].
The problem we want to address in this chapter is the determination of conservation laws associated with systems with infinitely many degrees of freedom.
For a generic partial differential equation
$$\Delta(x, t, u(x,t)) = 0$$
where $t \in \mathbb{R}$ and $x \in \mathbb{R}$ are the time and space variables and $u(x,t) \in \mathbb{R}$ is the dependent variable, a conservation law is an equation of the form
$$D_t T_i + D_x X_i = 0$$
which is satisfied by all solutions $u(x,t)$ of the PDE under consideration. $T_i(x,t)$ is called the conserved density and $X_i(x,t)$ the conserved flux; they are in general functions of the space-time coordinates, of the field $u$ and of its derivatives [3].
Consider, for example, the wave equation (1.22) for the scalar field $u(x,t)$. We know that this equation is integrable and admits travelling-wave solutions:
$$u(x,t) = F(x - ct) + G(x + ct)$$
We might therefore expect that, since the system under consideration has infinitely many degrees of freedom, by a generalization of the Arnold-Liouville theorem it may admit infinitely many conservation laws.
We also know that the equation is invariant under time and space translations; by Noether's theorem, therefore, both the total energy and the total momentum are conserved.
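As a quick check of the conservation-law form $D_t T_i + D_x X_i = 0$ introduced above, the sketch below (our own; it assumes the wave equation (1.22) in the form $u_{tt} = c^2 u_{xx}$, and the density/flux pairs are the standard energy and momentum pairs, not quoted from the thesis) verifies symbolically that both pairs satisfy the conservation law on solutions.

```python
# Hedged check (our own illustration): the standard energy and momentum
# density/flux pairs satisfy D_t T + D_x X = 0 on solutions of u_tt = c^2 u_xx.
import sympy as sp

x, t, c = sp.symbols('x t c', real=True)
u = sp.Function('u')(x, t)
on_shell = {sp.Derivative(u, t, 2): c**2 * sp.Derivative(u, x, 2)}   # impose u_tt = c^2 u_xx

pairs = {
    'energy':   (sp.Rational(1, 2) * (u.diff(t)**2 + c**2 * u.diff(x)**2),   # density T
                 -c**2 * u.diff(t) * u.diff(x)),                             # flux X
    'momentum': (u.diff(t) * u.diff(x),
                 -sp.Rational(1, 2) * (u.diff(t)**2 + c**2 * u.diff(x)**2)),
}
for name, (T, X) in pairs.items():
    residual = (sp.diff(T, t) + sp.diff(X, x)).subs(on_shell)
    print(name, sp.simplify(residual))    # -> 0 in both cases
```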
How, though, can we determine the other infinitely many expected conservation laws? Consider again the system (3.15)
$$r_t + \rho_0 v_x = 0\,, \qquad v_t + \frac{c^2(\rho_0)}{\rho_0}\, r_x = 0$$
which immediately yields the wave equation for the density field $r(x,t)$ and for the velocity field $v(x,t)$.
Introducing the two-component vector
$$u = \begin{pmatrix} r \\ v \end{pmatrix}$$
we can rewrite the system in the form (3.12):
$$u_t + \begin{pmatrix} 0 & \rho_0 \\ c_0^2/\rho_0 & 0 \end{pmatrix} u_x = 0\,, \qquad \text{i.e.} \qquad u_t = M u \quad \text{with} \quad M = -\begin{pmatrix} 0 & \rho_0 \\ c_0^2/\rho_0 & 0 \end{pmatrix} \partial_x$$
The mathematical structure we are dealing with is that of an evolution governed by a non-commutative algebra: in general the matrix $M$ does not commute with other matrices, nor does $\partial_x$ commute with multiplication operators.
Let us set aside for the moment the particular case of the system (3.15) and consider a generic dynamics of the form
$$u_t = M u$$
with $M$ a generic non-singular $N \times N$ matrix and $u(x,t)$ a vector of $L^2(\mathbb{R})$. Given two vectors $u_1$ and $u_2$, we define their scalar product as:
$$\langle u_1 \,|\, u_2 \rangle \equiv \int_{-\infty}^{+\infty} u_1^\dagger(x,t)\, u_2(x,t)\, dx$$
We want to determine functionals $c(u)$, defined as scalar products (hence quadratic in their argument), such that $\dot c(u) = 0$.
We therefore construct them as quadratic forms in $u$:
$$c(u) = \langle u \,|\, \Gamma u \rangle \qquad (5.2)$$
with $\Gamma$ a time-independent linear operator, to be determined by imposing the condition $\dot c(u) = 0$:
$$\dot c(u) = \langle u_t \,|\, \Gamma u \rangle + \langle u \,|\, \Gamma u_t \rangle = \langle M u \,|\, \Gamma u \rangle + \langle u \,|\, \Gamma M u \rangle = \left\langle u \,\big|\, \left(M^\dagger \Gamma + \Gamma M\right) u \right\rangle = 0 \qquad (5.3)$$
Since the vector $u$ is completely arbitrary, (5.3) must hold for every $u$, whence the operator equation
$$M^\dagger \Gamma + \Gamma M = 0\,. \qquad (5.4)$$
Since $M$ is known, being the operator that characterizes the evolution equation under consideration, (5.4) is an equation for the operator $\Gamma$.
If $\Gamma$ is a solution of (5.4), then
$$\Gamma^{(n)} = \Gamma M^n \qquad (5.5)$$
is also a solution for every $n \in \mathbb{N}$. Indeed, substituting into (5.4) we obtain
$$M^\dagger \Gamma^{(n)} + \Gamma^{(n)} M = M^\dagger \Gamma M^n + \Gamma M^{n+1} = \left(M^\dagger \Gamma + \Gamma M\right) M^n = 0\,,$$
so that, $M$ being non-singular, the condition on $\Gamma^{(n)}$ is equivalent to (5.4).
From these observations it follows that we have found infinitely many conserved quantities
$$c_n(u) = \langle u \,|\, \Gamma M^n u \rangle \qquad (5.6)$$
which, depending on the problem at hand, will take different forms.
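Before returning to the wave system, here is a toy finite-dimensional check of (5.4)-(5.6) (our own sketch, not from the text): one convenient way to build a pair satisfying (5.4) is to take $\Gamma$ symmetric positive definite and $M = \Gamma^{-1} K$ with $K$ antisymmetric, since then $M^\dagger\Gamma + \Gamma M = K^\top + K = 0$; the quantities $c_n(u)$ are then constant along $u_t = Mu$.

```python
# Toy finite-dimensional illustration of (5.4)-(5.6) (our own sketch):
# Gamma symmetric positive definite, K antisymmetric, M = Gamma^{-1} K, so that
# M^T Gamma + Gamma M = 0; then c_n(u) = <u | Gamma M^n u> is conserved along u_t = M u.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 4
B = rng.standard_normal((N, N))
Gamma = B @ B.T + N * np.eye(N)                 # symmetric positive definite
K = rng.standard_normal((N, N)); K = K - K.T    # antisymmetric
M = np.linalg.solve(Gamma, K)                   # M = Gamma^{-1} K

u0 = rng.standard_normal(N)
for t in (0.0, 0.5, 2.0):
    u = expm(t * M) @ u0                        # exact solution of u_t = M u
    cs = [u @ Gamma @ np.linalg.matrix_power(M, n) @ u for n in range(4)]
    print(f"t = {t}:", np.round(cs, 8))
# Each c_n is the same at every t (up to round-off).  Note that for odd n the
# matrix Gamma M^n is antisymmetric here, so those quadratic forms vanish identically.
```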
Let us take again the system (3.15) as an example and look for the operators $\Gamma$ that define the conserved quantities (5.6). Since
$$M = -\begin{pmatrix} 0 & \rho_0 \\ c_0^2/\rho_0 & 0 \end{pmatrix} \partial_x$$
and given the anti-Hermiticity of the derivative operator, we obtain
$$M^\dagger = \begin{pmatrix} 0 & c_0^2/\rho_0 \\ \rho_0 & 0 \end{pmatrix} \partial_x$$
Equation (5.4) then takes the form
$$\begin{pmatrix} 0 & c_0^2/\rho_0 \\ \rho_0 & 0 \end{pmatrix} \partial_x\, \Gamma - \Gamma \begin{pmatrix} 0 & \rho_0 \\ c_0^2/\rho_0 & 0 \end{pmatrix} \partial_x = 0$$
For simplicity we require that $\Gamma$ not be a differential operator but a constant-coefficient matrix:
$$\begin{pmatrix} 0 & c_0^2/\rho_0 \\ \rho_0 & 0 \end{pmatrix} \Gamma - \Gamma \begin{pmatrix} 0 & \rho_0 \\ c_0^2/\rho_0 & 0 \end{pmatrix} = 0$$
which we seek in the form
$$\Gamma = \begin{pmatrix} \gamma_1 & 0 \\ 0 & \gamma_2 \end{pmatrix}$$
whence, using (5.4), one obtains:
$$\Gamma = \begin{pmatrix} c_0^2 & 0 \\ 0 & \rho_0^2 \end{pmatrix}$$
The other infinitely many conserved quantities then follow directly from (5.5):
$$\Gamma^{(n)} = (-1)^n \begin{pmatrix} c_0^2 & 0 \\ 0 & \rho_0^2 \end{pmatrix} \begin{pmatrix} 0 & \rho_0 \\ c_0^2/\rho_0 & 0 \end{pmatrix}^{\!n} \frac{\partial^n}{\partial x^n}\,.$$
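A short symbolic check (our own sketch, not part of the thesis) of the first of these invariants: with the $\Gamma = \mathrm{diag}(c_0^2, \rho_0^2)$ found above, the density $c_0^2 r^2 + \rho_0^2 v^2$ entering $c_0(u) = \langle u \,|\, \Gamma u\rangle$ has a time derivative that is a total $x$-derivative on solutions of (3.15), so its integral is conserved for decaying or periodic boundary conditions; the flux $X$ below is our own choice.

```python
# Sketch (our own check): T = c0^2 r^2 + rho0^2 v^2 from Gamma above is a conserved
# density for the system (3.15), with flux X = 2 c0^2 rho0 r v.
import sympy as sp

x, t = sp.symbols('x t', real=True)
rho0, c0 = sp.symbols('rho0 c0', positive=True)
r = sp.Function('r')(x, t)
v = sp.Function('v')(x, t)

# system (3.15):  r_t = -rho0 v_x ,  v_t = -(c0^2/rho0) r_x
on_shell = {sp.Derivative(r, t): -rho0 * sp.Derivative(v, x),
            sp.Derivative(v, t): -(c0**2 / rho0) * sp.Derivative(r, x)}

T = c0**2 * r**2 + rho0**2 * v**2      # conserved density, the integrand of <u | Gamma u>
X = 2 * c0**2 * rho0 * r * v           # corresponding flux (our choice)

residual = sp.diff(T, t).subs(on_shell) + sp.diff(X, x)
print(sp.simplify(residual))           # -> 0, so D_t T + D_x X = 0 on solutions
```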
Chapter 6
Multiscale expansion and integrability of dispersive wave equations
[Lectures given at the Euro Summer School "What is integrability?", 13-24 August 2001, Isaac Newton Institute, Cambridge, U.K.]
6.1 Introduction
The propagation of nonlinear dispersive waves is of great interest and relevance in a variety of physical situations for which model equations, as infinite-dimensional dynamical systems, have been investigated from various perspectives and for different purposes. In the ideal case in which waves propagate in a one-dimensional medium (no diffraction) without losses and sources, particular progress has been made thanks to the discovery of integrable models, whose investigation has provided important contributions to such matters as stability, wave collisions and long-time asymptotics, among others. On the mathematical side, such progress on integrable models has also contributed considerably to our present (admittedly not concise) answer to the question in the title of this School. The same question can be found in [9], and a partial guide to the vast literature on the theory of solitons is given in [10]. It is plain that integrable models, though both useful and fascinating, remain exceptional: nonlinear partial differential equations (PDEs) in 1+1 variables (space+time) are generically not integrable. The aim of these notes is to show how an algorithmic technique, based on perturbation theory, may be devised as a tool to establish how far a given PDE is from being integrable. This approach [11] has been known in applicative contexts [12] for
several decades, as it provides approximate solutions when only one, or a few, monochromatic "carrier waves" propagate in a strongly dispersive and weakly nonlinear medium. More recently [13] it has also proved to be a simple way to obtain necessary conditions which a given PDE has to satisfy in order to be integrable, and to discover integrable PDEs as well [14].
The basic philosophy of this approach is to derive from a nonlinear PDE one or many other PDEs whose integrability properties are either already known or easily found. In this respect, a general remark on this method of reduction is the following. Integrability is not a precise notion, and different degrees of integrability can be attributed to a PDE within a certain class of solutions and boundary conditions, according to the technique of solving it. For instance, C-integrable are termed those nonlinear equations which can be transformed into linear equations via a change of variables [14], and S-integrable are those equations whose solution requires the method of the spectral (or scattering) transform (see, f.i., [15]). Examples of C-integrability are the equations ($u_t = \partial u/\partial t$, $u_x = \partial u/\partial x$, etc.)
$$u_t + a_1 u_x - a_3 u_{xxx} = a_3\,(3 u u_x + u^3)_x\,, \qquad u = u(x,t) \qquad (6.1)$$
$$u_t + a_1 u_x - a_3 u_{xxx} = 3 a_3 c\,(u^2 u_{xx} + 3 u u_x^2) + 3 a_3 c^2 u^4 u_x\,, \qquad u = u(x,t) \qquad (6.2)$$
which are both mapped to their linearized version
$$v_t + a_1 v_x - a_3 v_{xxx} = 0\,, \qquad v = v(x,t) \qquad (6.3)$$
the first one, 6.1, by the (Cole-Hopf) transformation
$$u = v_x/v \qquad (6.4)$$
and the second one, 6.2, by the transformation [14]
$$u = v/(1 + 2cw)^{1/2}\,, \qquad w_x = v^2 \qquad (6.5)$$
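The linearizing property of the Cole-Hopf map can be verified directly. The sketch below (our own illustration, not from the notes) builds a simple two-term solution $v = 1 + e^{kx - \omega t}$ of the linear equation 6.3, with $\omega = a_1 k - a_3 k^3$, applies the transformation 6.4 and checks symbolically that the resulting $u$ solves 6.1.

```python
# Hedged check (our own illustration): the Cole-Hopf transform (6.4) of a simple
# solution of the linear equation (6.3) solves the C-integrable equation (6.1).
import sympy as sp

x, t, k, a1, a3 = sp.symbols('x t k a1 a3', real=True)
omega = a1 * k - a3 * k**3                     # dispersion relation of (6.3)
v = 1 + sp.exp(k * x - omega * t)              # a two-term solution of (6.3)
u = sp.diff(v, x) / v                          # Cole-Hopf transformation (6.4)

residual = (sp.diff(u, t) + a1 * sp.diff(u, x) - a3 * sp.diff(u, x, 3)
            - a3 * sp.diff(3 * u * sp.diff(u, x) + u**3, x))
print(sp.simplify(residual))                   # -> 0, i.e. u solves (6.1)
```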
Well-known examples of S-integrable equations are the modified Korteweg-de Vries (mKdV) equation
$$u_t + a_1 u_x - a_3 u_{xxx} = 6 a_3 c\, u^2 u_x\,, \qquad u = u(x,t) \qquad (6.6)$$
and the nonlinear Schroedinger (NLS) equation
$$u_t - i a_2 u_{xx} = 2 i a_2 c\, |u|^2 u\,, \qquad u = u(x,t) \qquad (6.7)$$
whose method of solution is based on the eigenvalue problem (6.8),
where $\psi$ is a 2-dimensional vector, $\sigma$ is the diagonal matrix $\mathrm{diag}(1, -1)$ and $Q(x,t)$ is the off-diagonal matrix
$$Q = \begin{pmatrix} 0 & u \\ -cu & 0 \end{pmatrix} \qquad (6.9)$$
for the mKdV equation 6.6 and (the asterisk indicates complex conjugation)
$$Q = \begin{pmatrix} 0 & u \\ -cu^* & 0 \end{pmatrix} \qquad (6.10)$$
for the NLS equation 6.7. Here $k$ is the spectral variable and $c$ is a real constant. In any case, whatever type of integrability is involved, we adopt in our treatment the "first principle" (axiom) that integrability is preserved by the reduction method. Though in some specific cases, where integrability can be formulated as a precise mathematical property, one can give this principle a rigorous status, we prefer to maintain it throughout our treatment as a robust assumption. Its use, according to contexts, may lead to interesting consequences. One is that the implication that a PDE derived by the reduction method from an integrable PDE is itself integrable provides a way to obtain other (possibly new) integrable equations. On the other hand, if a PDE which has been obtained by reduction from a given PDE is proved to be nonintegrable, then from our first principle it follows that the given PDE cannot be integrable, and this implication leads to quite a number of necessary conditions of integrability. Some of these conditions are simple and, therefore, of ready practical use. Other conditions are instead the result of lengthy algebraic manipulations which require rather heavy computer assistance. Finally, this way of reasoning leads to the following observation, which has been clearly pointed out in [14]. Suppose the same PDE is obtained by reduction from any member of a fairly large family of PDEs; we can then call it a "model PDE". Then the principle stated above explains why a model PDE can be at the same time widely applicable (because it derives from a large class of different PDEs) and integrable (because it suffices that just one member equation of that large family of PDEs be integrable). The most widely known example of such a case is the NLS equation 6.7, which is certainly a model equation (as shown also below) with many applications (f.i. nonlinear optics and fluid dynamics [12]), and whose integrability was discovered in 1971 [16] but could have been found even earlier by reduction from the KdV equation $u_t + u_{xxx} = 6 u u_x$ (the way to infer the S-integrability of the NLS equation from that of the KdV equation was first pointed out in [17]), whose integrability was unveiled in 1967 [18]. The method of reduction which we now introduce is a perturbation technique based on three main ingredients: i) Fourier expansion in harmonics,
ii) power expansion in a small parameter $\varepsilon$, iii) dependence on a (finite or infinite) number of "slow" space and time variables, which are first introduced via an $\varepsilon$-dependent rescaling of $x$ and $t$ and are then treated as independent variables. Because of this last feature this approach is also referred to as the multiscale perturbation method.
In order to briefly illustrate how these basic ingredients naturally come into play in the simpler context of ordinary differential equations (ODEs), let us consider the well-known Poincaré-Lindstedt perturbation scheme to construct small-amplitude oscillations of an anharmonic oscillator around a stable equilibrium position. Thus our one-degree-of-freedom dynamical system is given by the nonlinear equation ($\dot q \equiv dq/dt$)
$$\ddot q + \omega_0^2 q = c_2 q^2 + c_3 q^3 + \ldots\,, \qquad q = q(t,\varepsilon) \qquad (6.11)$$
where the small perturbative parameter $\varepsilon$ is here introduced as the initial amplitude,
$$q(0,\varepsilon) = \varepsilon\,, \qquad \dot q(0,\varepsilon) = 0 \qquad (6.12)$$
The equation of motion 6.11 is autonomous, as all the coefficients $\omega_0, c_2, c_3, \ldots$ are time-independent, and it has been written with its linear part in the lhs and its nonlinear (polynomial or, more generally, analytic) part in the rhs. In this elementary context, the model equation associated with this family of dynamical systems is of course the harmonic oscillator equation, $\ddot q + \omega_0^2 q = 0$, which obtains when the amplitude $\varepsilon$ is so small that all nonlinear terms can be neglected. In fact, the purpose of the Poincaré-Lindstedt approach is to capture the deviations from the harmonic motion which are due to the nonlinear terms in the rhs of 6.11. Since, for sufficiently small $\varepsilon$, the motion is periodic, namely
$$q(t,\varepsilon) = q\!\left(t + \frac{2\pi}{\omega(\varepsilon)},\ \varepsilon\right), \qquad (6.13)$$
it is natural to change the time variable t into the phase variable θ,
$$\theta = \omega(\varepsilon)\, t\,, \qquad q(t,\varepsilon) = f(\theta,\varepsilon)\,, \qquad (6.14)$$
even if the frequency $\omega(\varepsilon)$ is not known, as it is expected to depend on the initial amplitude $\varepsilon$. Then the equations 6.11 and 6.12 now read ($f' \equiv df/d\theta$)
$$\omega^2(\varepsilon)\, f'' + \omega_0^2 f = c_2 f^2 + c_3 f^3 + \ldots\,, \qquad f(0,\varepsilon) = \varepsilon\,, \quad f'(0,\varepsilon) = 0 \qquad (6.15)$$
and we look for approximate solutions via the power expansions
$$\omega^2(\varepsilon) = \omega_0^2\left(1 + \gamma_1 \varepsilon + \gamma_2 \varepsilon^2 + \ldots\right) \qquad (6.16)$$
$$f(\theta,\varepsilon) = \varepsilon f_1(\theta) + \varepsilon^2 f_2(\theta) + \ldots \qquad (6.17)$$
We note that the periodicity condition $f(\theta) = f(\theta + 2\pi)$ implies that $\omega(0) = \omega_0$; inserting the expansions 6.16 and 6.17 in the differential equation 6.15 and equating the lhs coefficients with the rhs coefficients of each power of $\varepsilon$ yields an infinite system of differential equations: the first one, at $O(\varepsilon)$, is homogeneous, while all the others, at $O(\varepsilon^n)$ with $n > 1$, are nonhomogeneous, i.e.
$$O(\varepsilon): \qquad f_1'' + f_1 = 0\,, \qquad f_1(0) = 1\,, \quad f_1'(0) = 0 \qquad (6.18)$$
$$O(\varepsilon^n): \qquad f_n'' + f_n = \{-n, -n+1, \ldots, -1, 0, 1, \ldots, n-1, n\}\,, \qquad f_n(0) = 0\,, \quad f_n'(0) = 0 \qquad (6.19)$$
The notation in this last equation refers to the harmonic expansion and has the following meaning. Since each function $f_n(\theta)$ is periodic in the interval $(0, 2\pi)$, one can Fourier-expand it; however, because of the differential equations they satisfy, only a finite number of the Fourier basis functions $\exp(i\alpha\theta)$, $\alpha$ being an integer, enters their representation. This is easily seen by recursion: $f_1(\theta) = \frac{1}{2}(\exp(i\theta) + \exp(-i\theta))$, and since $f_n(\theta)$, for $n > 1$, satisfies the forced harmonic oscillator equation where the forcing term in the rhs of 6.19 is an appropriate polynomial in $f_1, f_2, \ldots, f_{n-1}$, its expansion can only contain the harmonics $\exp(i\alpha\theta)$ with $|\alpha| \leq n$. Thus, the integers in the curly bracket in the rhs of 6.19 indicate the harmonics which enter the Fourier expansion of the forcing term, and this implies that $f_n(\theta)$ itself has the Fourier expansion
$$f_n(\theta) = \sum_{\alpha=-n}^{n} f_n^{(\alpha)} \exp(i\alpha\theta)\,, \qquad n \geq 1 \qquad (6.20)$$
where the complex numbers $f_n^{(\alpha)}$ have to be computed recursively. To this aim, the coefficients $\gamma_n$ in the expansion 6.16 must also be computed, and the way to do it is to use the periodicity condition
$f_n(\theta) = f_n(\theta + 2\pi)$, or, equivalently, the condition that the $\varepsilon$-expansion 6.17 be uniformly asymptotic (note that we do not address here the problem of convergence of the series 6.17 but limit ourselves to establishing uniform asymptoticity). The point is that, for each $n \geq 2$, the forcing term in 6.19 contains the fundamental harmonics $\exp(i\theta)$ and $\exp(-i\theta)$, which are solutions of the lhs equation (i.e. of the homogeneous equation), and are therefore secular (or, equivalently, at resonance).
At this point, and for future use, we observe that, in a more general setting, if
$$v'(\theta) - A v(\theta) = w(\theta) + u(\theta) \qquad (6.21)$$
is the equation of motion of a vector $v(\theta)$ in a linear (finite- or infinite-dimensional) space and $A$ is a linear operator, and if the vector $w(\theta)$ solves the homogeneous equation,
$$w'(\theta) - A w(\theta) = 0\,, \qquad (6.22)$$
then the forcing term $w(\theta)$ in 6.21 is secular. This is apparent from the $\theta$-dependence of the general solution of 6.21, which reads
$$v(\theta) = \tilde v(\theta) + \theta\, w(\theta) \qquad (6.23)$$
where $\tilde v(\theta)$ is the general solution of the equation $\tilde v'(\theta) - A \tilde v(\theta) = u(\theta)$.
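In the scalar case this mechanism is easy to display explicitly: the short sketch below (our own example, not from the notes) solves a first-order equation whose forcing is proportional to the homogeneous solution $e^{i\theta}$, and the secular factor $\theta$ of 6.23 shows up in the general solution.

```python
# Scalar illustration of (6.21)-(6.23) (our own example): resonant forcing
# proportional to the homogeneous solution exp(I*theta) produces secular growth.
import sympy as sp

th = sp.symbols('theta', real=True)
v = sp.Function('v')
sol = sp.dsolve(sp.Eq(v(th).diff(th) - sp.I * v(th), sp.exp(sp.I * th)), v(th))
print(sol)   # v(theta) = (C1 + theta)*exp(I*theta): the forcing term is secular
```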
In our present case, the occurrence of the harmonics $\exp(i\theta)$ and $\exp(-i\theta)$ in the rhs of 6.19 forces the solution $f_n(\theta)$ to have a nonperiodic dependence on $\theta$, and therefore the condition that the coefficients of $\exp(i\theta)$ and $\exp(-i\theta)$ vanish should be added to our computational scheme. In fact, this condition fixes the value of the coefficient $\gamma_{n-1}$, and this completes the recursive procedure of computing, at each order in $\varepsilon$, both the frequency
$$\omega(\varepsilon) = \omega_0 + \omega_1 \varepsilon + \omega_2 \varepsilon^2 + \ldots\,, \qquad (6.24)$$
and the solution $f(\theta,\varepsilon)$, see 6.17. As an instructive exercise, we suggest that the reader compute the frequency $\omega(\varepsilon)$ up to $O(\varepsilon^2)$ (answer: $\omega_1 = 0$, $\omega_2 = -(10 c_2^2 + 9 \omega_0^2 c_3)/(24\,\omega_0^3)$).
This approach has often been used in applications with the aim of computing approximate solutions; in that context the property of the series 6.16 and 6.17 of being convergent, or asymptotic, and uniformly so in $t$, is of crucial importance (see, f.i., [19] and the references quoted there), particularly when one is interested also in the large-time behaviour. Our emphasis here is instead on the formal use of the double expansion (see 6.17 and 6.20)
$$q(t,\varepsilon) = \sum_{n=1}^{\infty} \sum_{\alpha=-n}^{n} \varepsilon^n \exp(i\alpha\theta)\, f_n^{(\alpha)} \qquad (6.25)$$
where $\theta = \omega_0 t + \omega_1 \varepsilon t + \omega_2 \varepsilon^2 t + \ldots$; here and in the following we therefore drop any question related to convergence and approximation.
Let us now consider the propagation of nonlinear waves, and let us apply the Poincaré-Lindstedt method to PDEs. For the sake of simplicity, here and also below throughout these notes, we focus our attention on the following family of equations of first order in time
$$Du = F[u, u_x, u_{xx}, \ldots]\,, \qquad u = u(x,t), \qquad (6.26)$$
with the assumptions that this equation be real, that the linear differential operator $D$ in the lhs have the expression
$$D = \partial/\partial t + i\,\omega(-i\partial/\partial x)\,, \qquad (6.27)$$
where $\omega(k)$ is a real odd analytic function,
$$\omega(k) = \sum_{m=0}^{\infty} a_{2m+1}\, k^{2m+1}\,, \qquad (6.28)$$
and that $F$ in the rhs be a nonlinear real analytic function of $u$ and its $x$-derivatives. For instance, the subfamily
$$\omega(k) = a_1 k + a_3 k^3\,, \qquad F = c\, u_x^3 + (c_2 u^2 + c_3 u^3 + \ldots)_x\,, \qquad (6.29)$$
contains three S-integrable equations, i.e. the KdV equation ($c = 0$, $c_n = 0$ for $n \geq 3$), the mKdV equation 6.6 and the equation [20]
$$u_t + a_1 u_x - a_3 u_{xxx} = -a_3\left[\alpha \sinh u + \beta\,(\cosh u - 1) + u_x^2/8\right] u_x\,. \qquad (6.30)$$
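With the dispersion law 6.29, the operator 6.27 reduces to $\partial_t + a_1\partial_x - a_3\partial_x^3$, the common linear part of 6.1, 6.2, 6.6 and 6.30. The quick sketch below (ours, as a sanity check) verifies this on a plane wave, i.e. the harmonic solution that will be used as 6.31.

```python
# Quick check (our own): with omega(k) = a1 k + a3 k^3, the plane wave
# exp[i(k0 x - omega(k0) t)] is annihilated by u_t + a1 u_x - a3 u_xxx,
# i.e. by D = d/dt + i omega(-i d/dx) of (6.27) for the subfamily (6.29).
import sympy as sp

x, t, k0, a1, a3 = sp.symbols('x t k0 a1 a3', real=True)
omega = lambda k: a1 * k + a3 * k**3
u = sp.exp(sp.I * (k0 * x - omega(k0) * t))
print(sp.simplify(sp.diff(u, t) + a1 * sp.diff(u, x) - a3 * sp.diff(u, x, 3)))  # -> 0
```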
x/8¤ux. (6.30) Since the linearized version of the PDE 6.26, Du = 0, has the harmonic stationary wave solution
u = exp[i(k0x − ˜ω0t)] , ω˜0 = ω(k0) , (6.31) one way to extend the Poincar´e-Lindstedt approach to the PDE 6.26 is to look for solutions, if they exist, which are periodic plane waves,
u(x, t) = f (θ, ²), θ = k(²) x − ˜ω(²) t, f (θ, ²) = f (θ + 2π, ²) , (6.32) together with the power expansions
f (θ, ²) = ²f1(θ)) + ²2f2(θ) + . . . ,
This approach can easily be carried out as for the anharmonic oscillator, since the function $f(\theta,\varepsilon)$ now satisfies the real ODE
$$-\tilde\omega(\varepsilon)\, f^{(1)}(\theta,\varepsilon) + i\,\omega(-ik\, d/d\theta)\, f(\theta,\varepsilon) = F\!\left[f,\ k f^{(1)},\ k^2 f^{(2)},\ \ldots\right], \qquad k = k(\varepsilon), \qquad (6.34)$$
where $f^{(j)} \equiv d^j f(\theta,\varepsilon)/d\theta^j$. Periodic plane waves in fluid dynamics have been investigated along these lines and, though exact solutions are known for instance for water-wave models (such as the KdV equation) in terms of Jacobian elliptic functions (cnoidal waves), approximate expressions were found more than a century ago (Stokes's approximation) [1].
The class of periodic plane-wave solutions (if they exist) is too restrictive for our purpose. In fact their construction requires going from the PDE 6.26 to the ODE 6.34, a step which implies a loss of information about the PDE itself. Therefore we now turn our attention to the class of solutions of the wave equation 6.26 whose leading term in the perturbative expansion is a quasi-monochromatic wave, namely a wave packet whose Fourier spectrum is not one point but is well localized in a small interval of the wave-number axis, $(k - \Delta k, k + \Delta k)$, where $k$ is a fixed real number and $\Delta k/k$ is small,
$$u(x,t) \simeq \Delta k \int_{-\infty}^{+\infty} d\eta\, A(\eta)\, \exp\{i[x(k + \eta\Delta k) - t\,\omega(k + \eta\Delta k)]\} + \mathrm{c.c.}\,; \qquad (6.35)$$
here the amplitude $A(\eta)$ is sharply peaked at $\eta = 0$, and the additional complex-conjugate term is required by the condition (which we maintain here and in the following) that $u(x,t)$ is real, $u = u^*$.
The perturbation formalism suited to dealing with this class of solutions is still close to the Poincaré-Lindstedt approach to the anharmonic oscillator. In fact, let us go back to the two-index series 6.25 and substitute for $\theta$ the expansion $\theta = \omega_0 t + \omega_1 t_1 + \omega_2 t_2 + \ldots$, where we have formally introduced the rescaled "slow" times $t_n = \varepsilon^n t$; then the formal expansion 6.25 reads
$$q(t,\varepsilon) = \sum_{n=1}^{\infty} \sum_{\alpha=-n}^{n} \varepsilon^n E^\alpha\, q_n^{(\alpha)}(t_1, t_2, \ldots)\,, \qquad E \equiv \exp(i\omega_0 t)\,, \qquad (6.36)$$
where the functions $q_n^{(\alpha)}$ depend only on the slow-time variables $t_n$. The scheme of computation based on the expansion 6.36 is equivalent to that shown above, and it proceeds by inserting the expansion 6.36 into the equation 6.11 and by treating the time variables $t_n$ as independent variables. In particular the derivative operator $d/dt$ takes the $\varepsilon$-expansion
$$\frac{d}{dt}\left(E^\alpha q_n^{(\alpha)}\right) = E^\alpha\left(i\alpha\omega_0 + \varepsilon\,\partial/\partial t_1 + \varepsilon^2\,\partial/\partial t_2 + \ldots\right) q_n^{(\alpha)}\,, \qquad (6.37)$$
and similarly expanding the lhs and rhs of 6.11 in powers of $\varepsilon$ and of $E$ finally yields a system of PDEs whose solution (after eliminating secular terms) gives the same result as the (much simpler) frequency-renormalization method based on 6.14 and 6.16. In this case the service of the multiscale technique is merely to display the three ingredients of the approach we use below for PDEs, i.e. the power expansion in a small parameter $\varepsilon$, the expansion in harmonics and the dependence on slow variables.
Let us now proceed with applying the multiscale perturbation approach to solutions of the PDE 6.26 along the lines discussed above. As a preliminary observation, in the case in which the PDE 6.26 is linear, i.e. $F = 0$, the expression 6.35 is exact, as it yields the Fourier representation of the solution. If we introduce the harmonic solution
$$E(x,t) \equiv \exp[i(kx - \omega t)]\,, \qquad \omega = \omega(k)\,, \qquad (6.38)$$
the small parameter $\varepsilon \equiv \Delta k/k$ and the slow variables $\xi \equiv \varepsilon x$, $t_n \equiv \varepsilon^n t$ for $n \geq 1$, the Fourier integral takes the expression of a "carrier wave" whose small amplitude is modulated by a slowly varying envelope
$$u(x,t) = \varepsilon\, E(x,t)\, u^{(1)}(\xi, t_1, t_2, \ldots) + \mathrm{c.c.}\,. \qquad (6.39)$$
Since the envelope function is (see 6.35)
$$u^{(1)}(\xi, t_1, t_2, \ldots) = k \int_{-\infty}^{+\infty} d\eta\, A(\eta)\, \exp\!\left[i\left(k\eta\,\xi - k\omega_1\eta\, t_1 - k^2\omega_2\eta^2\, t_2 - \ldots\right)\right], \qquad (6.40)$$
it satisfies the set of PDEs