Facoltà di Scienze Matematiche Fisiche e Naturali

Corso di Laurea Specialistica in Scienze Fisiche

Academic Year 2007 - 2008

Master's Thesis

Towards a Theory of Complexity Matching

Candidate

Supervisor

Contents

1 Introduction
2 Rate Matching
 2.1 Stochastic Resonance
 2.2 Rate Matching
  2.2.1 Single Realisation
3 Non-Poisson Systems
 3.1 Background
 3.2 Theoretical Preliminaries
  3.2.1 Non-Poisson Process
  3.2.2 Aging
  3.2.3 Autocorrelation
  3.2.4 Power Spectrum of a Stationary Process
 3.3 Power Spectrum of Non-Poisson Systems
  3.3.1 Numerical Results
  3.3.2 Interpretation of the Numerical Results
4 Conclusion

Today there is still much debate regarding the definition of a complex system and complexity. We limit ourselves to two-state systems which jump randomly from one state to another and thus give rise to a dichotomous stochastic process. Our definition of a complex system is based on two properties: power-law statistics and renewal. The former implies that the waiting time distribution for both states is an inverse power law with a finite exponent. The latter is a property whereby the time of permanence in one state is completely independent of the time of permanence in the other. If the system obeys power-law statistics with an exponent smaller than 2, then it is not ergodic. If, on the other hand, the exponent is greater than 2 but smaller than 3, then an equilibrium condition exists, but the system can reside for a very long period of time in an out-of-equilibrium condition. Recently, it has been shown that the linear response of a complex system to a coherent perturbation vanishes in the long-time limit. This result, together with the hypothesis that complex systems can be excited only by other complex systems, is the key motivation for the work presented in this thesis. It is part of an ongoing search for a theory of complexity matching, a theory showing that complex systems respond only if they are excited by other complex systems and that otherwise the response is attenuated. In the first part of the thesis we explore the possibility of coupling two Poisson processes. Our approach is based on the experience obtained in the field of stochastic resonance. We try to perturb a system that obeys ordinary Poisson statistics using Poissonian signals with different rates. To this end, we adopt a simple model that reproduces aperiodic stochastic resonance and we show that such a phenomenon is not present in the more general case in which the rate is not necessarily induced by some kind of noise. Furthermore, we adopt the concept of events and use it to study the interaction of Poisson systems. We discover that, when the system produces events at a lower rate with respect to the perturbation, the events of the perturbation become attractors of the system events and vice versa. We use the term rate matching to identify the condition when the two rates of event production are the same.

The second part of the thesis deals with signals produced by complex systems. In order to present the full theory of complexity matching, a new fluctuation-dissipation theorem must be introduced, but this goes beyond the scope of the work presented here. However, an understanding of the non-ergodic nature of complex systems is fundamental to the application of the new fluctuation-dissipation theorem. Therefore, here we study the power spectrum of complex signals and show that 1/f noise is produced by systems that lie on the border that separates ergodic systems from non-ergodic ones. We do this by generalising the Wiener-Khinchin theorem and extending it to non-stationary, non-ergodic processes. We distinguish between two different types of truncation effects: the physical truncation, where we use a truncated waiting time distribution, and an observation-induced effect, which is a consequence of finite acquisition times. It is the finite observation time that allows us to apply the generalised Wiener-Khinchin theorem in the non-ergodic case. Our final results show that the power spectrum is related to the frequency via an inverse power law and that, in the non-ergodic condition, the power spectrum also depends on the observation time.


Introduction

We investigate systems that jump randomly between two well-defined states. The rate at which the jumps occur may depend on the absolute time. We assume that all the jumps are completely independent of one another, so that each jump resets the system's memory to zero. We call this property renewal. Such systems produce a time series, ξ(t), of some physical property that fluctuates between two values. For simplicity we shall assume that the two states are symmetric and that the sojourn times in each state are governed by a waiting time distribution ψ(τ). The form of such a distribution determines the nature of the system and the corresponding time series. There are two forms in particular that interest us:

ψ(τ) = g exp(−gτ) (1.1)

and

ψ(τ) = (µ − 1) T^(µ−1)/(T + τ)^µ, (1.2)

where g, µ and T are constant parameters that characterise the distributions. The first distribution is responsible for Poissonian processes and the parameter g determines the rate at which events occur [8]. The second distribution generates power-law statistics. Any system with renewal and a waiting time distribution equal to the one in Eq. (1.2) will be considered complex. Our goal is to study the interaction of these complex systems. We are searching for a matching condition under which there is a maximal exchange of information between these systems. We call such a condition complexity matching. Nevertheless, research is still in progress and therefore only some aspects shall be presented in this thesis. Before considering complex systems, we studied the effects of coupling two Poisson systems. These systems are stationary and therefore much easier to deal with. The results are shown in the first part of the thesis. They have also been published recently in the journal “Physics Letters A” [9].
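Since everything that follows depends on drawing sojourn times from Eqs. (1.1) and (1.2), it may help to see how such times are generated in practice. The following minimal sketch (Python with NumPy; an illustration of ours, not part of the original material) uses inverse-transform sampling: for Eq. (1.2) the survival probability is (T/(T + τ))^(µ−1), which inverts to τ = T(u^(−1/(µ−1)) − 1) for a uniform random number u.

    import numpy as np

    rng = np.random.default_rng(0)

    def poisson_times(g, n):
        # Sojourn times from psi(tau) = g exp(-g tau), Eq. (1.1).
        return rng.exponential(1.0 / g, n)

    def powerlaw_times(mu, T, n):
        # Sojourn times from psi(tau) = (mu-1) T^(mu-1) / (T+tau)^mu,
        # Eq. (1.2), via inversion of the survival probability.
        u = rng.uniform(size=n)
        return T * (u ** (-1.0 / (mu - 1.0)) - 1.0)

    print(poisson_times(0.5, 3))
    print(powerlaw_times(1.8, 1.0, 3))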

Chapter 2 is dedicated to the study of Poisson systems and their interaction. We start by presenting the phenomenon of stochastic resonance, whereby a two-state system is perturbed by a sinusoidal signal. The important work done by McNamara and Wiesenfeld [11] makes it possible to understand the essence of the phenomenon, by essentially replacing the double-well system with a dichotomous Poisson process ξ(t). We extend this idea by substituting the deterministic signal with a stochastic one, Poissonian, to be precise. The stochastic nature of the perturbation, ξP(t), forces us to use the Gibbs ensemble approach, whereby many realisations of the same process are used. Here we present our first important result, the rate matching condition. We discover that, when the system produces events at a lower rate with respect to the perturbation, the events of the perturbation become attractors of the system events and vice versa.

In chapter 3 we analyse the power spectra of stochastic processes that obey power-law statistics. The main theme of this second part of the thesis is the fact that systems characterised by µ = 2 reside on the border that separates ergodic from non-ergodic systems. We show that the power spectrum produced by such systems has the form 1/f and that, more generally, for 1 < µ < 3, the power spectrum is given by

S(f) ∝ 1/f^(3−µ). (1.3)

We generalise the Wiener-Khinchin theorem and use it to calculate the power spectra of finite non-ergodic processes. We consider this result important because there is a lot of empirical evidence that shows that there is communication between 1/f-noise systems. For example, Mutch [2] proved that a ventilator tuned to the 1/f nature of the breathing process is much more efficient than ordinary ventilators. There is experimental evidence that a 1/f-noise system is more sensitive to 1/f noise than to white noise: the brain is proven to be sensitive to 1/f noise much more than to white noise [3]. The work of Buiatti et al. [5] proves that the brain, as well as music and painting, is a source of 1/f noise, in accordance with [6]. The brain generates 1/f noise and so do the psycho-physiological processes under the brain's control [7]. Finally, we show that in the non-ergodic case, the power spectrum depends also on the duration, L, of the process. For a fixed frequency, f0, we have that

S(f0, L) ∝ 1/L^(2−µ).


Rate Matching

2.1 Stochastic Resonance

The well-known phenomenon of stochastic resonance [12] rests on the apparently non-intuitive notion that an increase in noise intensity can improve the transmission of a signal. We are not going to discuss stochastic resonance in detail; we just want to present the essentials that will be useful for studying the interaction of two Poissonian systems.

Originally, the phenomenon was studied using systems composed of a particle in a bistable double-well potential. We start by analysing such a system when it is subjected to thermal noise and perturbed by a periodic signal. The equation of motion is given by the Langevin equation

ẋ = v
v̇ = −γv − ∂V/∂x + ξ(t) + ε cos(ω0t). (2.1)

Here ξ(t) denotes a Gaussian white noise with zero average and autocorrelation function ⟨ξ(t)ξ(t′)⟩ = 2γkBT δ(t − t′). For simplicity, we consider a symmetric potential. In the absence of the periodic forcing, the noise causes the particle to jump randomly over the potential barrier from one well into the other. We assume that the signal amplitude, ε, is small enough so that in the absence of any noise, it is insufficient to force the particle over the potential barrier.

For very large values of γ (overdamped approximation), v̇ = 0 and

ẋ = −(1/γ) ∂V/∂x + ξ(t)/γ + (ε/γ) cos(ω0t). (2.2)

The position of the particle, x(t), is considered to be the output of the system. In the absence of modulation (ε = 0), x(t) fluctuates around its local stable states (x = ±a) with a statistical variance proportional to the noise intensity γkBT. In the overdamped approximation, the inter-well transition rate is given by the Kramers rate [10, 22]

q = (ωaωm/(2πγ)) exp(−V0/kBT), (2.3)

where ωa² = V″(a) is the squared angular frequency of the potential in the potential minima at ±a, and ωm² = |V″(0)| is the squared angular frequency at the top of the barrier, located at x = 0. We can now say that in the overdamped approximation, γ ≫ ωm. Originally, Kramers considered a Brownian particle caught in a potential well. He calculated the escape rate of the particle over the potential barrier, and used his result in the theory of the velocity of chemical reactions [10]. Because the expression for the Kramers transition rate depends only on the potential barrier height, V0, and the curvature of the potential at the maximum and minima, it is not necessary that we know the exact form of the potential, V(x). The height of the barrier has to be much greater than the mean (thermal) energy of the particle, otherwise it would diffuse more or less freely from one well to the other. Also, if it were of comparable height, the time scales for equilibration and escape would not be clearly separated. The perturbative force ε cos(ω0t) modifies the double-well potential, producing the effective potential

Veff(x, t) = V(x) − εx cos(ω0t). (2.4)

In that case, the barrier height changes too,

ΔVeff(t) = V0 + εa cos(ω0t), (2.5)

and the perturbed Kramers rate is given by

q±(t) = (ωaωm/(2πγ)) exp(−(V0 ± εa cos(ω0t))/kBT) = q exp(∓(εa/kBT) cos(ω0t)). (2.6)

Figure 2.1: This is an example of a typical double-well potential. Depending on the amount of kinetic energy possessed by the particle, it may be able to jump over the barrier, from one well to the other.

The Kramers rate formula is derived under the assumption that the probability density within a well is roughly at equilibrium, a Gaussian distribution centred about the minimum. Thus, in order to use Eq. (2.6), the signal frequency must be much lower than the characteristic rate at which the probability equilibrates within a well. This rate is set by the curvature of the well minimum, and therefore the adiabatic limit is valid only for ω0 ≪ ωa. On assuming that the modulation amplitude is small, i.e.

εa ≪ kBT, we can use the following expansion in the small parameter εa/(kBT):

q± = q [1 ∓ (εa/kBT) cos(ω0t) + . . .] ≃ q [1 ∓ η cos(ω0t)]. (2.7)

Before we proceed with the calculation of the expectation value of x(t), we introduce the concept of events. It is not an essential part of stochastic resonance but will be very useful in the next chapter.

Whenever the particle reaches the top of the potential barrier we shall say that an event has occurred. Nevertheless, this does not mean that the system has changed its state. When the particle reaches the top of the barrier it can either fall back to the bottom of the well from which it started or it can continue and jump into the other well. We shall assume that these two scenarios have the same probability of occurring. Since the particle has the same probability of falling back or continuing into the other well, we can interpret the outcome as the result of a coin toss. If the particle manages to jump into the other well after it has reached the top of the barrier, we shall say that a collision has occurred. In other words, a collision causes the system to change its state. Note that many events can occur before a single collision. The particle may reach the top several times before it actually manages to advance into the other well, and thus convert the final event into a collision. The probability that a collision occurs after n events is given by (1/2)^(n+1). In this case, the (n + 1)th event is converted into a collision. It is the same as saying that the particle moves randomly in a single well, reaching the top n times before actually making the jump into the other well at the (n + 1)th attempt. If we use the new rate,

g = 2q, (2.8)

together with the coin-tossing procedure, then the correlation function of the corresponding process will be equal to the correlation function related to the Kramers rate, q. The proof of this statement shall be postponed until subsection 3.2.2.
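The coin-tossing rule is straightforward to simulate. The sketch below (our illustration, with assumed parameter values; Python with NumPy) produces events at Poisson rate g = 2q and flips the state with probability 1/2 at each event; the times between the resulting collisions then have mean 1/q, consistent with the correspondence just stated.

    import numpy as np

    rng = np.random.default_rng(1)

    def collision_times(g, n_events):
        # Events occur at Poisson rate g; a fair coin converts an event
        # into a collision (an actual change of state) with probability 1/2.
        events = np.cumsum(rng.exponential(1.0 / g, n_events))
        heads = rng.random(n_events) < 0.5
        return events[heads]

    g = 2.0                      # event rate, g = 2q with q = 1
    sojourns = np.diff(collision_times(g, 200000))
    print("mean time between collisions:", sojourns.mean(), "(1/q = 1.0)")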

In the presence of a moderate amount of random forcing and heavy damping, the particle will spend most of its time at the bottom of the wells, near x = ±a, making occasional transitions over the barrier. We have already mentioned that the exact shape of the potential is not important for the study of the dynamics of the system. Following McNamara and Wiesenfeld [11], we substitute the double-well with an arbitrary two-state system and maintain the Kramers transition rate shown in Eq. (2.3). We assume that the particle can occupy only two discrete states. For example, we shall assume that the particle occasionally jumps from the position x = a to x = −a and vice versa. In order to calculate the expectation value of x(t), we can proceed by using the following master equation (the system has two states and hence the master equation proves to be an effective tool). P1 and P2 are the probabilities of finding the particle in the positions x = a and x = −a, respectively:

(d/dt)P1 = −(g+(t)/2) P1 + (g−(t)/2) P2
(d/dt)P2 = −(g−(t)/2) P2 + (g+(t)/2) P1. (2.9)

The factor 1/2 present in the master equation reflects the fact that g is the rate at which the particle reaches the top of the potential barrier and not the rate at which it overcomes it. When it is at the top of the barrier, the particle may either overcome it or it may return to its original position. Both cases are equally likely and that is why we need the factor 1/2. In order to solve the master equation we introduce the variable

Π(t) = P1(t) − P2(t) (2.10)

and substitute it into (2.9). After some simplifications we obtain

Π̇(t) = −((g+ + g−)/2) Π(t) + (g− − g+)/2. (2.11)

Substituting (2.7) into (2.11),

Π̇(t) = −gΠ(t) − gη cos(ω0t). (2.12)

We can find the solution of the master equation by using exp(gt) as the integrating factor in the first order differential equation. We get

Π(t) = −gη ∫₀ᵗ e^(−g(t−t′)) cos(ω0t′) dt′ + Π(0) e^(−gt). (2.13)

From the definition of the expectation value over a Gibbs ensemble, we have that

⟨x(t)⟩ = aΠ(t), (2.14)

and therefore

Π(0) = ⟨x(0)⟩/a. (2.15)


We shall assume that the system starts from equilibrium, in which case Π(0) = 0. This means that before we apply the perturbation, the expectation value of the output is zero for every t. The expectation value of x(t) is obtained by calculating the integral in (2.13):

∫₀ᵗ e^(−g(t−t′)) cos(ω0t′) dt′ = e^(−gt) ∫₀ᵗ e^(gt′) (e^(iω0t′) + e^(−iω0t′))/2 dt′
= (e^(−gt)/2) [(e^((g+iω0)t) − 1)/(g + iω0) + (e^((g−iω0)t) − 1)/(g − iω0)]. (2.16)

Since the Poisson system needs time to adapt, we study the limit t ≫ 1/g, in which exp(−gt) → 0. In that case,

Π(t) = −(gη/2) [e^(iω0t)/(g + iω0) + e^(−iω0t)/(g − iω0)]
= −(gη/(2(g² + ω0²))) [(g − iω0)(cos(ω0t) + i sin(ω0t)) + (g + iω0)(cos(ω0t) − i sin(ω0t))]
= −gη (g cos(ω0t) + ω0 sin(ω0t))/(g² + ω0²)
= −(gη/√(g² + ω0²)) cos(ω0t − φ). (2.17)

In conclusion, when the system adapts itself to the perturbation,

⟨x(t)⟩ = −aC cos(ω0t − φ), (2.18)

where

C = gη/√(g² + ω0²) and tan φ = ω0/g. (2.19)

Note that when we switch off the perturbation (ε = 0) the expectation value is zero, just as we mentioned earlier. Since (see Eq. (2.3))

g = 2q = (ωaωm/(πγ)) exp(−V0/kBT) and η = εa/kBT, (2.20)

we have that

C(T) = 2A exp(−V0/kBT) · (εa/kBT) · [4A² exp(−2V0/kBT) + ω0²]^(−1/2), (2.21)

where A = ωaωm/(2πγ). The graph of Eq. (2.21) is displayed in Fig. (2.2). Notice that the function has a maximum value. This maximum response of the system to the external perturbation is known as stochastic resonance.

Figure 2.2: This graph is a plot of Eq. (2.21) with ε = 0.1 and with V0, a, A, ω0 all equal to unity. It shows that for a particular temperature, and consequently, for a certain amount of noise, the intensity of the output signal is maximum.

Sometimes it is useful to

define stochastic resonance in terms of the signal-to-noise ratio (SNR). Following Gammaitoni et al. [12], we present the following definition:

SNR = (2/SN(ω0)) lim_{Δω→0} ∫_{ω0−Δω}^{ω0+Δω} S(ω) dω, (2.22)

where S(ω) is the power spectral density (power spectrum from now on) of x(t). The periodic component of x(t) contributes to S(ω) with a series of delta spikes centred at ω = (2n + 1)ω0 with n = 0, ±1, ±2, . . . Therefore, SN(ω) is the total power spectrum minus the delta spikes; it is the background power spectrum. For small forcing amplitudes, aε ≪ V0, SN(ω) is not very different from the power spectrum of the unperturbed system. In that case,

SN(ω) ≈ 2 ∫₀^∞ ⟨xN(t)xN(t + τ)⟩ cos(ωτ) dτ = 2a² ∫₀^∞ e^(−gτ) cos(ωτ) dτ = 2a² g/(g² + ω²), (2.23)

where xN(t) is the output produced in the absence of the periodic signal. The full power spectrum is then

S(ω) = (π/2) a²C² [δ(ω − ω0) + δ(ω + ω0)] + SN(ω). (2.24)

The signal-to-noise ratio is given (in first approximation) by

SNR ≈ (π/2) (εa/kBT)² g = π (εa/kBT)² A exp(−V0/kBT). (2.25)

Fig. (2.3) shows that the signal-to-noise ratio has a maximum similar to the one seen in the case of the amplitude. However, note that the noise intensity at which the SNR assumes its maximum does not coincide with the value of kBT that maximises the response amplitude C.

Figure 2.3: This figure shows the plot of Eq. (2.25) with ε = 0.1 and V0, a, A all equal to unity. There is also a maximum when we consider the signal-to-noise ratio. However, the value of T for which the maximum occurs does not coincide with the temperature that produces the maximum in Fig. (2.2).
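The two maxima can be located numerically. The short sketch below (Python with NumPy and SciPy assumed; the parameter values mirror those quoted for Figs. 2.2 and 2.3, and the code is ours, not part of the thesis) evaluates Eqs. (2.21) and (2.25) and confirms that the amplitude C and the SNR peak at different noise intensities.

    import numpy as np
    from scipy.optimize import minimize_scalar

    eps, V0, a, A, w0 = 0.1, 1.0, 1.0, 1.0, 1.0   # values used in the figures

    def C(kT):
        # Response amplitude, Eq. (2.21), with g = 2A exp(-V0/kT), Eq. (2.20).
        g = 2 * A * np.exp(-V0 / kT)
        return g * (eps * a / kT) / np.sqrt(g**2 + w0**2)

    def snr(kT):
        # Signal-to-noise ratio, Eq. (2.25).
        return np.pi * (eps * a / kT) ** 2 * A * np.exp(-V0 / kT)

    kT_C = minimize_scalar(lambda x: -C(x), bounds=(0.05, 7.0), method="bounded").x
    kT_S = minimize_scalar(lambda x: -snr(x), bounds=(0.05, 7.0), method="bounded").x
    print("C peaks near kBT =", round(kT_C, 2), "; SNR peaks near kBT =", round(kT_S, 2))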

2.2 Rate Matching

We have seen that stochastic resonance is the result of statistical synchronisation, which takes place when the average waiting time, 1/g, between two interwell transitions is comparable with half the period of the periodic forcing. This yields the time-scale matching condition for stochastic resonance,

1/g = (1/2)(2π/ω0) ⇒ g = ω0/π. (2.26)

Stochastic resonance involves a Poisson system that is perturbed by a periodic signal. However, Eq. (2.6), and all the subsequent approximations, can be used even when the forcing signal (perturbation) is not a deterministic function of time. We are now going to adopt the approach introduced by McNamara and Wiesenfeld [11]. We consider a two-state system that jumps randomly from one state to the other, without paying attention to the physical details, just the essential properties: the Poisson waiting time distribution and the hopping rate gS. Generalising even further,

we shall assume that gS is an arbitrary rate, not the noise-induced Kramers rate of

Eq. (2.3). As for the perturbation, we consider a dichotomous signal with random fluctuations that obey Poisson statistics with rate gP. So, we have a situation where

a stochastic signal ξP(t) with rate gP is used to perturb a two-state system, which,

in the absence of the perturbation, jumps randomly form one state to the other at a rate gS. For simplicity, we assume that the two states are |+i = +1 and |−i = −1.

Since we are not going to involve the Kramers rate, we have to generalise Eq. (2.6), in which the perturbation intensity η depends on the thermal noise. From now on, we are going to use ε as the perturbation intensity with the assumption that it is ar-bitrary and dimensionless. Hence, the generalised interaction will mediated through

g±(t) = gS(1 ± εξP(t)). (2.27)

When the system is in state |+i, the transition rate is g+(t) = gS(1 + εξP(t)) and

similarly for the state |−i. The output, ξS(t), will also be a dichotomous function

(18)

its two states so that the output, ξS(t) will still fluctuate between the two values

+1 and −1. The output can be calculated using the master equation in Eq. (2.9) together with the rate in (2.27). By following the steps seen in the previous section, we find that Π(t) = gSε Z t 0 e−gS(t−t0)ξ P(t0)dt0. (2.28)

It is obvious that if we use a single realisation of ξP(t), the mean response Π(t) = ⟨ξS(t)⟩S is a stochastic function of time. In order to make analytical predictions we have to move from the single realisation of ξP(t) to infinitely many realisations of ξP(t) that share the same initial value. Thus, we assume that every realisation of the perturbation satisfies the condition ξP(0) = 1. Otherwise, we would have ⟨ξP(t)⟩P = 0. Using Eq. (2.28), we obtain the two-fold expectation value

⟨Π(t)⟩P = ⟨⟨ξS(t)⟩S⟩P = gSε ∫₀ᵗ e^(−gS(t−t′)) ⟨ξP(t′)⟩P dt′. (2.29)

The condition ξP(0) = 1, shared by all realisations of ξP(t), yields

⟨ξP(t)⟩P = exp(−gPt). (2.30)

By substituting (2.30) into (2.29) we have

⟨⟨ξS(t)⟩S⟩P = gSε ∫₀ᵗ e^(−gS(t−t′)) e^(−gPt′) dt′ (2.31)

and after some straightforward algebra

⟨⟨ξS(t)⟩S⟩P = (gSε/(gS − gP)) [exp(−gPt) − exp(−gSt)]. (2.32)
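Equation (2.32) can be checked by a direct Monte Carlo experiment. In the sketch below (ours; the parameters match those of Fig. 2.4, and Python with NumPy is assumed) an ensemble of two-state systems is propagated in small time steps. Each system flips at the master-equation rate g±/2; we adopt the sign convention in which the sojourn in the state aligned with ξP is prolonged, which is the choice that reproduces Eq. (2.32).

    import numpy as np

    rng = np.random.default_rng(2)

    gS, gP, eps = 0.07, 0.02, 0.1
    dt, tmax, n = 0.05, 200.0, 20000
    t = np.arange(0.0, tmax, dt)

    xi_S = rng.choice([-1.0, 1.0], size=n)   # equilibrium start, Pi(0) = 0
    xi_P = np.ones(n)                        # every realisation has xi_P(0) = 1
    mean_S = np.empty(t.size)
    for k in range(t.size):
        mean_S[k] = xi_S.mean()
        # master-equation rates g/2; agreement with xi_P slows the exit
        flip_S = rng.random(n) < 0.5 * gS * (1 - eps * xi_P * xi_S) * dt
        flip_P = rng.random(n) < 0.5 * gP * dt
        xi_S[flip_S] *= -1
        xi_P[flip_P] *= -1

    theory = gS * eps / (gS - gP) * (np.exp(-gP * t) - np.exp(-gS * t))
    print("max deviation from Eq. (2.32):", np.abs(mean_S - theory).max())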

A particular case of the function ⟨Π(t)⟩P is shown in Fig. (2.4). The fact that ⟨Π(t)⟩P decays can mean two things: either the single-realisation means ⟨ξS(t)⟩S decay themselves, or they fluctuate in such a way that when we sum them up, the resultant decays. We shall now show that the latter case is true. We take a closer look at Eq. (2.32) and consider the two limits, gS ≪ gP and gS ≫ gP. In the first case we have

⟨Π(t)⟩P ≈ ε (gS/gP) exp(−gSt) (2.33)

and in the second case

⟨Π(t)⟩P = ε exp(−gPt). (2.34)

Figure 2.4: This figure shows a plot of Eq. (2.32) with gS = 0.07, gP = 0.02 and ε = 0.1.

Before proceeding, we have to remind ourselves that in the absence of perturbation ⟨Π(t)⟩P = 0. Therefore, the fact that ⟨Π(t)⟩P is different from zero implies that the perturbation has some effect on the system, regardless of the limit considered. By observing Eqs. (2.33) and (2.34) we can conclude that the perturbation is more effective when gS ≫ gP. It is enough to look at the coefficients; the factor εgS/gP in Eq. (2.33) is much smaller than the coefficient ε in Eq. (2.34). Moreover, when gS ≫ gP, ⟨Π(t)⟩P decays exponentially with the factor gP, leading us to believe that in this case the system is able to reproduce the signal fairly well. Similarly, it is plausible that for gS ≪ gP, the system is not able to reproduce the signal since the decay factor is gS, so the output remains almost unchanged. We must not forget that an ensemble of Poisson processes with rate g decays exponentially with a factor g when we use the coin-tossing method. These deductions can be consolidated by considering the crosscorrelation between the perturbation and the system.

Following the authors of Ref. [45], we use the crosscorrelation function

C(t) ≡ ⟨ξP(t)ξS(t)⟩SP. (2.35)

In order to get an analytical expression, we proceed as before. We consider all the cases where the experiment was done with the same ξP(t) and we average over all the resulting ξS(t). Then we average over all the perturbation realisations. Thus, we write

C(t) = ⟨ξP(t)ξS(t)⟩SP = ⟨⟨ξS(t)⟩S ξP(t)⟩P. (2.36)

We can now use Eq. (2.28) and by substituting it into the crosscorrelation function we get

C(t) = εgS ∫₀ᵗ exp(−gS(t − t′)) ⟨ξP(t′)ξP(t)⟩P dt′. (2.37)

By noting that

⟨ξP(t′)ξP(t)⟩P = exp(−gP(t − t′)), (2.38)

we obtain

C(t) = εgS e^(−(gS+gP)t) ∫₀ᵗ e^((gS+gP)t′) dt′ = (εgS/(gS + gP)) [1 − e^(−(gS+gP)t)]. (2.39)

We consider the asymptotic limit of the crosscorrelation function so that

χ ≡ lim_{t→∞} C(t) = εgS/(gS + gP). (2.40)

From Eq. (2.40) we can conclude that the maximum correlation (χ = ε) occurs when gS ≫ gP. The correlation parameter, χ, increases monotonically from 0 to ε, and at gS = gP no deviation from this monotonic behaviour appears. With the help of Eq. (2.40) we can conclude that the system reproduces the perturbation reasonably well in the case where gS ≫ gP and t → ∞. In other words, ⟨ξS(t)⟩S ≈ εξP(t). On the other hand, when gS ≪ gP, the correlation is very poor or even absent. This tells us that the system is not able to reproduce the perturbation ξP. Nevertheless, a weak response is present even when gS is very small. The response is due to the short sojourn times produced by the system. Such times are very rare because the rate is very low; however, they are present. The fact that they are rare explains the weak response.

By differentiating ⟨Π(t)⟩P and equating the result to zero we get

−gP exp(−gPt) + gS exp(−gSt) = 0. (2.41)

Therefore, the maximum occurs at

tM = (ln(gP) − ln(gS))/(gP − gS). (2.42)

Substituting (2.42) into (2.32) yields the maximum:

⟨Π(tM)⟩P = (gSε/(gS − gP)) [exp(−(ln(gP/gS)/(gP − gS)) gP) − exp(−(ln(gP/gS)/(gP − gS)) gS)]
= (gSε/(gS − gP)) [(gP/gS)^(−gP/(gP−gS)) − (gP/gS)^(−gS/(gP−gS))]
≡ (gSε/(gS − gP)) [A^(−gP) − A^(−gS)], (2.43)

where

A = (gP/gS)^(1/(gP−gS)). (2.44)

If we fix the perturbation rate gP and vary the system rate, ⟨Π(tM)⟩P remains a monotonic function of gS (see Fig. (2.5)). In conclusion, neither the crosscorrelation C(t) nor the maximum value of the output ⟨Π(tM)⟩P produces a resonance effect. All calculations show that the amplitude and quality of the output increase monotonically as we increase the system rate gS. In section 2.1 we considered the interaction between a Poisson system and a deterministic signal (cos(ω0t)). We saw that for a particular rate of the system, the output intensity had a maximum. However, it should be noted that the maximum is obtained as a consequence of the fact that the rate g is a non-linear function of kBT.

Figure 2.5: This figure shows a plot of Eq. (2.43) with gP = 0.05 and ε = 0.1. It shows that ⟨Π(tM)⟩P tends to ε as gS increases.

In fact, if we fix the rate g and vary only the perturbation intensity ε, we are left with the monotonic function of Eq. (2.19),

C(g) = gε/√(g² + ω0²). (2.45)

Stochastic resonance is present only if the rate is controlled in a specific way by some other physical quantity, such as the temperature.

2.2.1 Single Realisation

Let us now move from the Gibbs ensemble perspective to the observation of a single realisation of the perturbation-response process. We start by creating two different realisations of a set of laminar regions (representing the time distance between two consecutive events) that obey the Poisson distribution. Therefore each of the two realisations is characterised by a single, constant parameter, namely, the rate of events g. The realisations may possess different rates. One realisation is considered to be the perturbation, P, with a rate gP, and the other is considered to be the system, S, with a rate gS. It has to be pointed out that the definition of an event corresponds to the system reaching the barrier top. The experiment is realised as follows: we perturb S using P and we repeat the process by changing either gS

or gP while the other rate is kept constant. We want to see whether there is some kind of effect when we set the matching condition gP = gS. In order to establish the effect produced by rate matching, if any exists, we have to define a marker, R, which will reveal it. The simplest marker is realised using what we call the time difference method. We adopt the idea of the authors of Ref. [13], but without the entropic method. Instead, we use a simplified version adapted for our needs. The unperturbed system, S, is confronted with the perturbation and the time differences, tn, are measured as shown in Fig. 2.6. We decide which of the two realisations will be the reference one and apply the following rule: for each event of the reference realisation we measure the time that it takes for the next event of the other realisation to occur. Let us say that we are considering the nth event, En^ref, of the reference realisation; then we have to start measuring the time from En^ref and wait until an event of the other realisation occurs; we consider only the first neighbour. If no such event occurs before E(n+1)^ref, then the event En^ref is disregarded. The time interval measured in this way is the time difference that we are talking about and we shall denote it by the symbol tn. All the time differences measured in this way are summed up so that we have

L0 = Σ_{n=0}^{N} tn, (2.46)

where N is the index of the final event of the reference realisation and t0 is always zero because both realisations are prepared in such a way that at time t = 0 they both have an event. The next thing to do is to repeat the above procedure, this time switching on the interaction between S and P, namely by assigning a non-vanishing value to ε in Eq. (2.27). In this case, we denote the time differences by tn^P, n = 0, 1, . . . , N, and we have

LP = Σ_{n=0}^{N} tn^P. (2.47)

Finally, we define the marker

R = LP/L0 = Σ_{n=0}^{N} tn^P / Σ_{n=0}^{N} tn. (2.48)
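The time difference method translates directly into code. The sketch below (an illustration of ours; event times are assumed to be stored as sorted NumPy arrays that begin with the common event at t = 0) computes the time differences of Fig. 2.6 and the marker R of Eq. (2.48).

    import numpy as np

    def time_differences(ref, other):
        # For each reference event, the delay to the first event of the
        # other realisation; the reference event is disregarded if no such
        # event occurs before the next reference event.
        diffs = []
        for a, b in zip(ref[:-1], ref[1:]):
            between = other[(other >= a) & (other < b)]
            if between.size:
                diffs.append(between[0] - a)
        return np.array(diffs)

    def marker_R(ref, sys_free, sys_coupled):
        # Eq. (2.48): R < 1 signals attraction, R > 1 repulsion.
        return (time_differences(ref, sys_coupled).sum()
                / time_differences(ref, sys_free).sum())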

Besides defining a “good” marker, it is important to select an appropriate reference realisation. We have done some investigations relative to this case and the results obtained are shown in Fig. 2.7. In one case the perturbation is kept constant while the system varies, and in the other case the system is kept constant.

Figure 2.6: In this figure two realisations are shown. The full line represents the reference realisation. The intervals indicated by t1, t2, t3 and t4 are the time differences. Each dot represents an event. The time intervals between two consecutive events are laminar regions which are extracted from a Poisson distribution.

The result of Fig. 2.7 shows that there exists an influence of the P events on the occurrence times of the S events. In the absence of this influence, we would expect R = 1, the neutral condition. The condition R < 1 indicates that the time of occurrence of S events is closer to that of the P events than in the absence of coupling. This suggests that the P events attract the S events. The condition R > 1 indicates the opposite, a repulsion effect. If we keep gS fixed and we increase gP from values smaller to values larger than gS (broken line in Fig. 2.7), we see the following: for gP < gS, R > 1 (repulsion condition). In the close proximity of gS = gP, R passes through the value 1 in the course of an almost abrupt transition to Rmin < 1, the attraction condition. The system assumes this minimum value at gS = gP, and it maintains it in the whole region gP > gS. Analogously, if we keep gP fixed, and we decrease gS from values larger than gP to values smaller (full line in Fig. 2.7), the parameter R moves from the condition R > 1, the repulsion condition, to the neutral condition R = 1, in the close proximity of gP = gS, where it assumes the value Rmin, which is then maintained in the whole region gP < gS. A somewhat simplified description of all this is as follows: the rate matching condition gS = gP marks the abrupt transition from repulsion to attraction.

Figure 2.7: This figure shows the two cases that we examined: one in which gS is kept constant (broken line) and the other where gP is kept constant (full line). The broken-line curve represents the case where gS = 0.04 is kept constant and the perturbation is taken as the reference realisation. The full-line curve represents the case where gP = 0.04 is kept constant and the perturbation is taken as the reference realisation. In both cases the value of the perturbation strength is ε = 0.5.


Non-Poisson Systems

3.1 Background

In section 2.2 we dealt with a two-state system generating a stochastic process with renewal. The evolution of such a process was represented by a two-state time series with events occurring at times ti and separated by time intervals randomly selected from an exponential distribution. Each process was characterised by just one parameter, namely, the rate g at which the system switched between the two available states. We shall now consider similar processes with the only difference that the time intervals are selected from an inverse-power probability distribution density. Such processes maintain the renewal property but are no longer Poissonian. Our goal is to study the power spectrum of non-Poisson processes generated from a waiting time distribution density of the form

ψ(τ) = (µ − 1) T^(µ−1)/(T + τ)^µ, (3.1)

where 1 < µ < ∞, keeping in mind that µ = 2 is a special case in such a class. A theoretical study of such power spectra has already been done by the authors of Refs. [14, 36]. However, they do not emphasise the case µ = 2. The function in Eq. (3.1) is a normalised probability density function. A closer inspection shows that for µ > 2, the first moment is given by

⟨τ⟩ = ∫₀^∞ τ′ψ(τ′) dτ′ = T/(µ − 2). (3.2)

On the other hand, when µ < 2, the function τψ(τ) is not integrable and hence the average time interval does not exist. Stochastic processes with a waiting time distribution of the form shown in Eq. (3.1) can be divided into two groups: those with a finite first moment (µ > 2) and those with no definite time scale (µ < 2). The former are ergodic whilst the latter are not. Furthermore, the processes with µ > 2 are further divided into those with a finite second moment, i.e. when µ > 3, and those without one, i.e. when 2 < µ < 3. Right from the beginning we shall neglect processes with µ > 3 since they all obey the central limit theorem and thus become Gaussian in the asymptotic limit.

Our decision to study a non-ergodic dichotomous process with an inverse-power waiting time distribution was inspired by the results obtained in the study of colloidal nanocrystals, or quantum dots. Under the right circumstances, such systems emit light intermittently, switching irregularly between bright and dark states [24]. Furthermore, the durations of these bright and dark periods follow an inverse-power distribution of the form

ψ(τ) ∝ 1/τ^µ, (3.3)

with µ < 2 [25, 26]. This unexpected behaviour has also been observed in the fluorescence of single organic molecules [27, 28, 29, 30]. We used the power law in the form shown in Eq. (3.1) purely for practical reasons. The inclusion of the parameter T, the time scale at which the power-law regime is reached, avoids problems at the origin (τ = 0) when integrating. In the case of semiconductor nanocrystals the hypothesis was made that the time of sojourn in a given state, either “on” or “off”, does not have any correlation with the other sojourn times [31]. This property, referred to as the renewal condition, has been confirmed by the careful statistical analysis of real data made by the authors of Ref. [32]. The renewal and the inverse power law condition with µ < 2 generate perennial aging [34] and consequently a conflict with the ergodic assumption of statistical physics [35], thereby posing a challenge to the adoption of the prescriptions of ordinary statistical physics. Ergodicity breakdown has recently been observed also in the spectroscopy of single organic molecules

[37]. In the case of fluorescence blinking in nanocrystal quantum dots, the authors of Refs. [23, 38] found that these fluctuations have the spectral form of 1/f (flicker) noise. The phenomenon of flicker noise has been known for 82 years [39]. It describes the deviation of the noise spectral density from the flat condition, through the form

P(f) ∝ 1/f^η, (3.4)

where f is the frequency and η ranges from η = 0.5 to η = 1.5. The authors of Refs. [23] and [38] found η = 1.3 and η = 1.1, respectively. In the literature on 1/f noise no attention has been devoted so far to the fact that, although η < 1 is compatible with the ergodic assumption, η > 1 [23, 38] may imply ergodicity breakdown, in spite of the fact that η > 1 is frequently found in the literature. In addition to the η > 1 of Refs. [23, 38], see, for instance, the flicker noise emerging from scanning tunnelling microscopy [40], with η ≈ 1.08.

The fact that the ergodic assumption is valid for η < 1 can be seen by adopting a very simple model made up of a collection of infinitely many independent linear oscillators [42]. The model shows that by assuming that the process under examination is ergodic, we end up with a power spectrum having η < 1. The equation of motion of a single, one-dimensional harmonic oscillator with angular frequency ω is given by

x(t) = x(0) cos(ωt) + (v(0)/ω) sin(ωt), (3.5)

where x(0) is the initial displacement and v(0) is the initial velocity. Alternatively, we can write

x(t) = A cos(ωt − ϕ), (3.6)

where

A cos(ωt − ϕ) ≡ x(0) cos(ωt) + (v(0)/ω) sin(ωt), (3.7)

so that

A = √(x²(0) + v²(0)/ω²) and tan ϕ = v(0)/(ωx(0)). (3.8)

Consider now a very large number of identical oscillators, all with the same angular frequency ω. A pair of initial conditions xi(0) and vi(0) is chosen randomly and assigned to each oscillator of the system. If they are allowed to interact, the system will reach an equilibrium state after some time if left on its own in the absence of external forces. An example of a large number of one-dimensional interacting oscillators is a very long chain of identical springs with identical masses attached to them. Imagine now that we somehow “freeze” the system together with all the oscillators in it. We can randomly extract a single oscillator and determine its displacement from the equilibrium position and the velocity it had just before being frozen. We can consider these quantities as being the initial conditions of the oscillator when we replace it and unfreeze the system. The probability, p(x, v)dxdv, that the randomly selected (frozen) oscillator has displacement x ± dx and velocity v ± dv is given by Boltzmann's theory. We consider the selected oscillator to be the system; all the others are part of the heat reservoir. In that case the probability density function is given by

p(x, v) = (ωm/(2πkBT)) exp(−(m/(2kBT))(v² + ω²x²)), (3.9)

where T is the temperature of the system. Therefore, we conclude that the initial conditions of all the oscillators placed together form a distribution equal to (3.9). In other words, the initial conditions of the oscillators are picked randomly from the distribution in (3.9). Consequently, the phase ϕ and amplitude A in Eq. (3.6) are also random values. This random phase assumption is the key ingredient of the approach to 1/f noise proposed by Voss and Clarke [33]. According to the equipartition theorem, the mean energy of a single oscillator is given by

⟨(1/2)mv² + (1/2)mω²x²⟩ = 2 · (1/2)kBT = kBT. (3.10)

So far we have only considered oscillators with the same angular frequency, ω. Henceforth, we are going to explore a system composed of many oscillators with different frequencies. We follow Weiss [42] and assume that the frequency distribution is given by

cj = c(ωj) = ωj^((δ+1)/2). (3.11)

This is an attractive way of going beyond ordinary statistical mechanics, or, using the terminology of Ref. [42], beyond the ohmic condition where δ = 1. As we shall see below, it enables us to construct a stochastic process with a power spectrum that obeys an inverse power law. We create a fluctuation ξ(t) by summing up the coordinates xi(t) of infinitely many linear harmonic oscillators:

ξ(t) = Σ_{j=0}^{M} Σ_{i=Nj}^{Nj+1} [xi(0) cos(ωjt) + (vi(0)/ωj) sin(ωjt)], (3.12)

where

Nj+1 − Nj = c(ωj) and c(ωj) ≫ 1 ∀ j. (3.13)

We can also write Eq. (3.12) in the following form:

ξ(t) = Σ_{j=0}^{M} cj [x̃j(0) cos(ωjt) + (ṽj(0)/ωj) sin(ωjt)], (3.14)

where

x̃j(0) = (1/(Nj+1 − Nj)) Σ_{i=Nj}^{Nj+1} xi(0), ṽj(0) = (1/(Nj+1 − Nj)) Σ_{i=Nj}^{Nj+1} vi(0).

The correlation function of the stochastic process ξ(t) is given by

⟨ξ(t)ξ(t′)⟩ = Σ_{j=0}^{M} cj² [⟨x̃j²(0)⟩ cos(ωjt) cos(ωjt′) + (⟨ṽj²(0)⟩/ωj²) sin(ωjt) sin(ωjt′)]. (3.15)

We have omitted the mixed terms because the velocities and displacements are independent from one another and therefore

⟨xj(0)vk(0)⟩ = 0 ∀ j, k. (3.16)

Since the oscillators are independent, we also have that

⟨xj(0)xk(0)⟩ = 0 and ⟨vj(0)vk(0)⟩ = 0 when j ≠ k. (3.17)

Again, according to the equipartition theorem, we have that

⟨vj²⟩ = ⟨xj²⟩ωj² = kBT/m. (3.18)

Consequently,

⟨ξ(t)ξ(t′)⟩ = (kBT/m) Σ_{j=0}^{M} (cj²/ωj²) cos(ωj(t − t′)). (3.19)

By substituting (3.11) into (3.19), the normalised correlation function of ξ(t) becomes

Φξ(t, t′) ≡ ⟨ξ(t)ξ(t′)⟩/⟨ξ²(t)⟩ = Σ_{j=0}^{M} ωj^(δ−1) cos(ωj(t − t′)) / Σ_{j=0}^{M} ωj^(δ−1) = Σ_{j=0}^{M} ωj^(δ−1) cos(ωjτ) / Σ_{j=0}^{M} ωj^(δ−1), (3.20)

where τ = t − t′. We have that

lim_{τ→∞} Φξ(τ) ∝ 1/τ^δ for 0 < δ < 1, (3.21)

and

lim_{τ→∞} Φξ(τ) ∝ −1/τ^δ for 1 < δ < 2. (3.22)

Furthermore, Φξ depends only on the time difference; the process is therefore wide-sense stationary. Consequently, it is enough to calculate the Fourier transform of Φξ in order to obtain the power spectrum, S(ω) (Wiener-Khinchin theorem):

Φ̂ξ(ω) = S(ω) = A⁻¹ ∫_{−∞}^{∞} Σ_j ωj^(δ−1) cos(ωjτ) e^(iωτ) dτ = A⁻¹ Σ_j ωj^(δ−1) ∫_{−∞}^{∞} cos(ωjτ) e^(iωτ) dτ, (3.23)

where A = Σ_j ωj^(δ−1) is a constant. Now,

F[cos(ωjτ)] = (1/2) F[e^(iωjτ) + e^(−iωjτ)] = (1/2) [2πδ(ω + ωj) + 2πδ(ω − ωj)]. (3.24)

Before we proceed with the evaluation of (3.23) we have to make a few assumptions. If we consider a reasonably large range of angular frequencies, ωj, we can approximate the sum in (3.23) with an integral. Furthermore, there must be a lower and an upper limit, so we write:

Σ_{j=0}^{jM} → ∫_{ω0}^{ωM} dω̃, (3.25)

where ωM ≫ ω0 and ω0 > 0. In that case,

S(ω) = A⁻¹π ∫_{ω0}^{ωM} ω̃^(δ−1) [δ(ω + ω̃) + δ(ω − ω̃)] dω̃. (3.26)

Since the integration is performed over a positive set of values, the first term under the integral vanishes. Therefore,

S(ω) ∝ 1/ω^η, ω ∈ [ω0, ωM], (3.27)

where η = 1 − δ. The case δ > 1 does not yield 1/f noise, so it is of no interest. Therefore, we only consider the case 0 < δ < 1, which corresponds to 0 < η < 1. Note that δ = 1 corresponds to white noise. In conclusion, the random phase model used by Voss and Clarke is adequate for explaining the 1/f behaviour of noise only in the case where η < 1. The adoption of a random phase resulted in a stationary correlation function. In the next section, we are going to study non-ergodic systems and in section 3.3 we will present a model that is compatible with both cases, η < 1 and η > 1. We shall see that the case η > 1 corresponds to the non-ergodic condition.
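The Weiss construction can be reproduced numerically. In the sketch below (ours; a finite frequency grid and assumed parameter values stand in for the infinite collection of oscillators) the weights follow Eq. (3.11), the initial conditions are drawn from the equilibrium distribution (3.9), and the periodogram of the synthesised ξ(t) should show a slope roughly equal to η = 1 − δ inside the band [ω0, ωM]; a single realisation gives a noisy spectrum, so the fit is only approximate.

    import numpy as np

    rng = np.random.default_rng(4)

    delta, m, kBT = 0.5, 1.0, 1.0
    w = np.linspace(0.01, 5.0, 1500)              # frequency band [w0, wM]
    c = w ** ((delta + 1) / 2)                    # weights, Eq. (3.11)
    x0 = rng.normal(0, np.sqrt(kBT / m) / w)      # <x^2> = kBT/(m w^2)
    v0 = rng.normal(0, np.sqrt(kBT / m), w.size)  # <v^2> = kBT/m

    dt = 0.05
    t = np.arange(0.0, 400.0, dt)
    xi = np.zeros_like(t)
    for wj, cj, xj, vj in zip(w, c, x0, v0):      # superposition, Eq. (3.14)
        xi += cj * (xj * np.cos(wj * t) + (vj / wj) * np.sin(wj * t))

    spec = np.abs(np.fft.rfft(xi)) ** 2
    omega = 2 * np.pi * np.fft.rfftfreq(t.size, d=dt)
    band = (omega > 0.1) & (omega < 2.0)
    slope = np.polyfit(np.log(omega[band]), np.log(spec[band]), 1)[0]
    print("fitted exponent eta ~", -slope, "(expected", 1 - delta, ")")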

3.2 Theoretical Preliminaries

3.2.1 Non-Poisson Process

In the previous section, two important experimental results were mentioned: the fact that blinking quantum dots produce a time series that displays power-law statistics with µ < 2, and a power spectrum of the form 1/f^η. Having these results in mind, we proceed to uncover the entire picture by finding a relationship between µ and η.

We start by writing down the procedure for creating a single realisation, or time series, ξ(t), of a stochastic process that displays power-law statistics. Since we are considering only two-state systems, the time series will fluctuate between two values that are, in one way or another, related to the states of the system. The first thing to do is determine the length, L, of the realisation. The next step is to identify the times at which the events of the stochastic process occur. We have already introduced the concept of events and collisions in section 2.1. In the case of blinking quantum dots there is no physical equivalent for events. Nevertheless, we shall use this concept because it is the only way we can directly link the waiting time distribution ψ(τ) to the autocorrelation function of the stochastic process [44]. What we actually see in the time series are the collisions and not the events. A collision is when there is a change of state, i.e. when the crystal either starts emitting light or ceases to do so. We always assume that an event occurs at time t = 0 and by doing so, we create what we call a prepared system. So, every time we start a realisation with an event, we say that the system has been prepared. Having placed the first event at time t = 0, we randomly select a time interval τ1 from the distribution in Eq. (3.1) and place an event at time t = τ1. We shall also refer to the selected time intervals as laminar regions. We then select another time interval, τ2, randomly from ψ(τ) and place the second event at time t = τ1 + τ2. We continue in this way until we reach the end of the realisation, i.e. when t = L. Clearly, the final event will very rarely fall exactly on the point t = L, so the interval between the last event and t = L will not be equivalent to a time interval selected randomly from ψ(τ). The fact that we select a time interval (or laminar region), τ, randomly from a probability density function such as ψ(τ) creates renewal in the process. Renewal simply means that the time intervals are randomly selected from a distribution. The reason why we insist on using renewal events will be explained in subsection 3.2.2. As we have mentioned earlier, the stochastic process under study, ξ(t), is related to a two-state system. We call the two states |+⟩ and |−⟩ and we assign the value +1 or −1 to ξ(t) when the system is in the state |+⟩ or |−⟩, respectively. The final step is to assign one of the two possible values of ξ(t) to each laminar region of the realisation. We do this by tossing a fair coin so that the values 1 and −1 are obtained with the same probability. For example, if we want to know what value to assign to the function ξ(t) in the interval between the first and second event, we simply have to toss a coin; if we get heads, we assign the value 1, if we get tails, we assign the value −1. In this way, we construct the collisions of the process. A collision coincides with an event that separates two neighbouring laminar regions with different values. Note that two or more consecutive laminar regions may share the same value. In the end, we obtain a time series characterised by the parameter µ from the waiting time distribution ψ(τ) in Eq. (3.1).
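The construction just described is compact enough to state as code. The sketch below (ours; Python with NumPy) builds one prepared realisation: renewal events drawn from Eq. (3.1) starting with an event at t = 0, and a fair coin assigning a value to each laminar region.

    import numpy as np

    rng = np.random.default_rng(5)

    def prepared_realisation(mu, T, L):
        # Event times of a prepared renewal sequence plus the coin-tossed
        # value (+1 or -1) of each laminar region.
        times, t = [0.0], 0.0
        while t < L:
            t += T * (rng.uniform() ** (-1.0 / (mu - 1.0)) - 1.0)
            times.append(min(t, L))
        signs = rng.choice([-1.0, 1.0], size=len(times) - 1)
        return np.array(times), signs

    def xi_at(s, times, signs):
        # Value of xi at a time 0 <= s < L: the sign of the laminar
        # region containing s.
        return signs[np.searchsorted(times, s, side="right") - 1]

    times, signs = prepared_realisation(mu=1.8, T=1.0, L=1000.0)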

In the Poisson case, each stochastic process, and therefore each system, was characterised by the rate, g, of event production. The difference between the ξ(t) presented in this section and the ξ(t) used in chapter 2 is that the laminar regions in the former case are extracted from

ψ(τ) = (µ − 1) T^(µ−1)/(T + τ)^µ, 1 < µ < 3, (3.28)

whilst the laminar regions in the latter case are extracted from

ψ(τ) = g exp(−gτ). (3.29)

3.2.2 Aging

As we have already mentioned, the absence of a characteristic time scale in systems with µ < 2 makes them non-ergodic or, following Bouchaud, weakly non-ergodic (see [35]). In the case of weak non-ergodicity, the system does visit all of its phase space, but the fraction of time spent in a given volume of phase space is not equal to the fraction of phase space volume occupied by it. The specific systems that we are treating can only occupy one of two states at a time. Weakly non-ergodic means that globally, considering the total duration of the process, the system will spend more time in one state than it will in the other. If L is the total duration of the stochastic process, we would have

lim_{L→∞} (1/L) ∫₀^L ξ(t) dt ≠ 0. (3.30)

We have seen in section 3.1 that systems with µ > 2 have a characteristic time scale. Their mean sojourn time is given by the first moment of the waiting time distribution ψ(τ) (see Eq. (3.2)). Therefore, we can find an intermediate time interval such that the time average and the ensemble average coincide. The fact is that all processes with renewal, governed by the inverse-power law in Eq. (3.1), undergo perennial aging (aging from now on). Processes with µ > 2 become stationary when they are infinitely aged. So, what is aging? Let us consider an ensemble of the realisations of ξ(t), created in subsection 3.2.1 with the help of the waiting time distribution in Eq. (3.1). We shall assume that the realisations are infinitely long and, more importantly, that they are prepared at the origin, t = 0. At the moment, we shall not give importance to whether the first laminar regions are positive or negative. What is important is that there is an event at t = 0. If we set out to create a histogram using only the lengths of the first laminar regions of the realisations in the ensemble, it is obvious that we will end up with the waiting time distribution in (3.1),

ψ(τ) = (µ − 1) T^(µ−1)/(T + τ)^µ. (3.31)

After all, it is the waiting time distribution that we used to create each realisation. We shall refer to (3.31) as the infinitely young waiting time distribution. Now, we select a single realisation from the ensemble and move from the origin to an arbitrary position, t = ta. It is possible that at t = ta there is an event. It is not important in which laminar region we find ourselves or whether it is positive or negative. What is important is the absolute time, t = ta. We move continuously from the position t = ta until we reach an event, say at some point t = t*. We shall indicate with τres the distance between ta and t*, i.e. τres = t* − ta. Of course, if at time t = ta there is an event, then τres = 0. We calculate the residual time, τres, present in each realisation of the ensemble. Note that ta is the same for each realisation. It is t* that changes. If we plot a histogram with the residual times, we do not get the waiting time distribution in (3.31). What we get is a waiting time distribution of age ta, and we shall indicate it with ψta. The continuous change of the waiting time distribution from the time of preparation is an effect of aging. The analytical expression for the aged function is

ψta(τres) = Σ_{n=0}^{∞} ∫₀^{ta} ψn(t′) ψ(t* − t′) dt′ = Σ_{n=0}^{∞} ∫₀^{ta} ψn(t′) ψ(ta + τres − t′) dt′, (3.32)

where ψn(t′) dt′ is the probability that the nth event occurs in the time interval [t′, t′ + dt′]. The function ψta(t) is the probability density of observing an event in the time interval [t, t + dt] under the condition that the observation starts at t = ta. When we start observing at t = ta we cannot know how many events have already occurred. Since we are considering an ensemble, we have to include all the possibilities, from the case where no events occurred to the case where infinitely many events occurred before t = ta. This explains the presence of a sum in Eq. (3.32). The product ψn(t′)ψ(t* − t′) is the probability density that n events occurred before we started the observation and that the first event after t = ta occurs at t = t*. Although it is easy to explain Eq. (3.32) qualitatively, it is difficult to find an analytical expression if we substitute a waiting time distribution of the form shown in (3.31). The difficulty lies in finding an analytical expression for the event rate

P(t) = Σ_{n=0}^{∞} ψn(t), (3.33)

where

ψn(t) = ∫₀^t ψ(n−1)(τ) ψ(t − τ) dτ. (3.34)

Note that

ψ1(t) = ψ(t) and ψ0(t) = δ(t). (3.35)

According to the authors of Ref. [18], a good approximation is to assume that P(t) is constant when considering systems with µ close to 2. They show that in the asymptotic limit t → ∞,

P(t) = 1/(Γ(2 − µ)Γ(µ − 1)) · (1/T^(µ−1)) · (1/t^(2−µ)) for 1 < µ < 2, (3.36)

and

P(t) = ((µ − 2)/T) [1 + (T^(µ−2)/(3 − µ)) (1/t^(µ−2))] for 2 < µ < 3. (3.37)

We shall not provide a proof of the above statements since it is only a matter of mathematics. Details can be found in Ref. [19].

The infinitely young waiting time distribution that is used to create all the sequences in the ensemble is given by Eq. (3.31). We start from t = 0 and we let the ensemble evolve in time. After an amount of time ta, the waiting time distribution becomes

ψta(τ) = ∫₀^{ta} P(t) ψ(ta + τ − t) dt. (3.38)

We assume that P(t) is constant, thus

ψta(τ) = (1/A(ta)) ∫₀^{ta} ψ(ta + τ − t) dt = (1/A(ta)) ∫₀^{ta} ψ(y + τ) dy, (3.39)

where A(ta) is a normalisation constant to be determined and y = ta − t. The normalisation is given by

A(ta) = ∫₀^∞ dτ ∫₀^{ta} ψ(τ + y) dy = ∫₀^{ta} dy ∫₀^∞ ψ(τ + y) dτ = ∫₀^{ta} dy ∫_y^∞ ψ(t′) dt′
= ∫₀^{ta} (T/(T + y))^(µ−1) dy = (T^(µ−1)/(2 − µ)) [(T + ta)^(2−µ) − T^(2−µ)].

The aged waiting time distribution is therefore

ψta(τ) = (2 − µ) [(τ + T)^(1−µ) − (τ + T + ta)^(1−µ)] / [(T + ta)^(2−µ) − T^(2−µ)]. (3.40)
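Equation (3.40) can be compared with a direct simulation of the residual times. The sketch below (ours; Python with NumPy) prepares an ensemble of sequences, records τres = t* − ta, and evaluates the empirical density against the formula. Since Eq. (3.40) rests on the constant-rate approximation for P(t), the agreement is only approximate.

    import numpy as np

    rng = np.random.default_rng(6)

    mu, T, ta, n_seq = 1.5, 1.0, 50.0, 100000

    res = np.empty(n_seq)
    for i in range(n_seq):
        t = 0.0
        while t < ta:                  # advance the renewal sequence past ta
            t += T * (rng.uniform() ** (-1.0 / (mu - 1.0)) - 1.0)
        res[i] = t - ta                # residual time tau_res = t* - ta

    def psi_aged(tau):
        # Aged waiting time distribution, Eq. (3.40).
        num = (tau + T) ** (1 - mu) - (tau + T + ta) ** (1 - mu)
        return (2 - mu) * num / ((T + ta) ** (2 - mu) - T ** (2 - mu))

    for lo, hi in [(0.0, 1.0), (5.0, 10.0), (50.0, 100.0)]:
        emp = ((res >= lo) & (res < hi)).mean() / (hi - lo)
        mid = 0.5 * (lo + hi)
        print(f"tau ~ {mid}: empirical {emp:.4g}, Eq. (3.40) {psi_aged(mid):.4g}")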

In the ergodic case (2 < µ < 3), we see that for ta → ∞ the distribution ψta(τ) approaches a limiting value:

lim_{ta→∞} ψta(τ) ≡ ψ∞(τ) = (µ − 2) T^(µ−2)/(T + τ)^(µ−1). (3.41)

For 2 < µ < 3, the process evolves towards a stationary condition, where the waiting time distribution is given by (3.41). For 1 < µ < 2, there is no limiting function, so the process does not tend to a stationary condition; the limit cannot be applied. Nevertheless, Eq. (3.40) is valid in the non-ergodic case (1 < µ < 2) if we always consider finite values of ta. In the limit ta → 0, with 2 < µ < 3, we recover the infinitely young distribution, ψ(τ): expanding the numerator of Eq. (3.40) to first order in ta gives (µ − 1)ta(τ + T)^(−µ), while the denominator gives (2 − µ)T^(1−µ)ta, so that

ψta(τ) ≃ (µ − 1) T^(µ−1)/(τ + T)^µ. (3.42)

Similarly, when 1 < µ < 2, we have for ta → 0

ψta(τ) ≃ (µ − 1) T^(µ−1)/(τ + T)^µ. (3.43)

Another important aspect of aging is the so-called rejuvenation effect. We con-sider Eq. (3.40) without considering the limit for which ta → ∞. Instead, we insist

on the fact that the waiting time distribution is of the finite age ta. We shall now

study the two conditions: τ  ta and τ  ta. We shall first consider the case where

µ > 2. When τ  ta, we rewrite Eq. (3.40) in the following form:

ψta(τ ) = (2 − µ)  1 (τ + T )µ−1 − 1 (τ + T + ta)µ−1   1 (T + ta)µ−2 − 1 Tµ−2 −1 = (µ − 2)  1 (τ + T )µ−1 − 1 tµ−1a ((τ + T )/ta+ 1)µ−1  1 Tµ−2 − 1 tµ−2a 1 (1 + T /ta)µ−2 ' (µ − 2)1/(τ + T ) µ−1 1/Tµ−2 = (µ − 2) T µ−2 (τ + T )µ−1 (3.44)

This result coincides with Eq. (3.41), even though we have kept tafinite. Observing

(39)

older than it is. Under the other condition, τ  ta, therefore ψta(τ ) = (µ − 2) 1 (τ + T )µ−1 − 1 (τ + T )µ−1 1 (1 + ta/(τ + T ))µ−1 1 Tµ−2 − 1 (T + ta)µ−2 ' (µ − 2) 1 (τ + T )µ−1 − 1 (τ + T )µ−1  1 − (µ − 1) ta (τ + T )  1/Tµ−2 = (µ − 2)ta T (µ − 1) Tµ−1 (τ + T )µ. (3.45)

This time, ψ_{t_a}(τ) is proportional to the infinitely young distribution ψ(τ). This effect is called rejuvenation: the condition τ ≫ t_a produces the opposite effect with respect to the condition τ ≪ t_a. Notice how the power index changes from µ − 1 (infinitely aged condition) to µ (infinitely young condition). Aging implies that t_a is so large as to make it impossible for us to observe sojourn times with a duration longer than t_a; in that case we are confined to τ ≪ t_a, and this is when we observe aging. If t_a < ∞, both τ ≪ t_a and τ ≫ t_a are possible; the latter condition corresponds to rejuvenation, the former to the infinitely aged state. If t_a = ∞, only the infinitely aged condition is possible. For µ < 2, the results are very similar; the numerators in (3.44) and (3.45) remain the same. When τ ≪ t_a,

\[
\psi_{t_a}(\tau) \simeq (2 - \mu)\,\frac{1/(\tau + T)^{\mu-1}}{(T + t_a)^{2-\mu} - T^{2-\mu}} \simeq \frac{2 - \mu}{t_a^{2-\mu}}\,\frac{1}{(\tau + T)^{\mu-1}} \sim \frac{1}{\tau^{\mu-1}}. \tag{3.46}
\]

On the other hand, when τ ≫ t_a,

\[
\psi_{t_a}(\tau) \simeq (2 - \mu)(\mu - 1)\,\frac{t_a/(\tau + T)^{\mu}}{(T + t_a)^{2-\mu} - T^{2-\mu}} \simeq (2 - \mu)\,\frac{t_a^{\mu-1}}{T^{\mu-1}}\,\frac{(\mu - 1)\,T^{\mu-1}}{(\tau + T)^{\mu}} = (2 - \mu)\,\frac{t_a^{\mu-1}}{T^{\mu-1}}\,\psi(\tau) \propto \psi(\tau). \tag{3.47}
\]
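The two regimes can be checked numerically. The short script below (an illustration under assumed parameter values, for the ergodic case µ > 2) compares the exact Eq. (3.40) with the asymptotic forms (3.44) and (3.45): the first ratio tends to 1 for τ ≪ t_a, the second for τ ≫ t_a.

```python
# Numerical illustration of the two regimes of Eq. (3.40) for mu > 2.
T, MU, TA = 1.0, 2.5, 1.0e4          # illustrative values, TA >> T

def psi_aged(tau):                   # exact Eq. (3.40)
    num = (tau + T) ** (1 - MU) - (tau + T + TA) ** (1 - MU)
    den = (T + TA) ** (2 - MU) - T ** (2 - MU)
    return (2 - MU) * num / den

def psi_inf(tau):                    # infinitely aged form, Eq. (3.44)
    return (MU - 2) * T ** (MU - 2) / (tau + T) ** (MU - 1)

def psi_young(tau):                  # rejuvenated form, Eq. (3.45)
    return (MU - 2) * (TA / T) * (MU - 1) * T ** (MU - 1) / (tau + T) ** MU

for tau in (1.0e1, 1.0e7):           # tau << t_a, then tau >> t_a
    print(tau, psi_aged(tau) / psi_inf(tau), psi_aged(tau) / psi_young(tau))
# The first ratio is ~1 for tau << t_a, the second ratio is ~1 for tau >> t_a.
```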

We have often stressed that the events used to create the realisations ξ(t) must obey renewal. The reason is that a system without the renewal property is not compatible with the aging theory presented above. Such systems


usually do not experience aging at all, and when they do, the effect is only partial. For example, a stochastic process that is modulated by a deterministic prescription does not obey renewal; the authors of Ref. [20] found that under certain conditions such systems age, but only partially. This explains why we are interested in processes with renewal: they experience the aging process completely.

We now want to show that a process characterised by Poisson statistics does not experience aging. We use the dichotomous realisations ξ(t) introduced in section 2.2. In that case, the waiting time distribution is given by

\[
\psi(\tau) = g \exp(-g\tau), \tag{3.48}
\]

where g is the rate of the events in the Poisson process. All we have to do is substitute (3.48) into Eq. (3.32). Before we continue, we have to calculate P(t). The simplest method is to use the Laplace transform of ψ_n(t) (we use the tilde to indicate the Laplace transform):

\[
\tilde\psi_n(u) = \tilde\psi_{n-1}(u)\,\tilde\psi_1(u) = \left[\tilde\psi(u)\right]^n = \left(\frac{g}{g + u}\right)^{\!n}. \tag{3.49}
\]

Anti-transforming, we obtain

\[
\psi_n(t) = (gt)^{n-1}\,\frac{g\,e^{-gt}}{(n - 1)!}. \tag{3.50}
\]

We can now calculate the aged waiting time distribution in the Poisson case:

\[
\begin{aligned}
\psi_{t_a}(t) &= \psi(t) + \sum_{n=1}^{\infty} \int_0^{t_a} \psi_n(t')\,\psi(t - t')\,dt' \\
&= g \exp(-gt) + g^2 \sum_{n=1}^{\infty} \int_0^{t_a} \frac{(gt')^{n-1}}{(n - 1)!}\,\exp(-gt')\exp(-g(t - t'))\,dt' \\
&= g \exp(-gt) + g^2 \exp(-gt) \int_0^{t_a} \sum_{n'=0}^{\infty} \frac{(gt')^{n'}}{n'!}\,dt' \\
&= g \exp(-gt) + g^2 \exp(-gt) \int_0^{t_a} \exp(gt')\,dt' = g \exp[-g(t - t_a)]. \tag{3.51}
\end{aligned}
\]


In the Poisson case, writing τ_res = t − t_a for the residual waiting time,

\[
\psi_{t_a}(\tau_{\mathrm{res}}) = \psi_{t_a}(t) = \psi(t - t_a) = \psi(\tau_{\mathrm{res}}). \tag{3.52}
\]

The waiting time distribution remains the same. Aging has been studied extensively in the literature. See, for example, Refs. [16, 17, 21] for more details. We have introduced the concept of aging because we shall need it shortly, when we investigate the autocorrelation functions of non-ergodic processes. What we have said so far regarding aging should be enough for such a task.
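The absence of aging in the Poisson case is easy to verify numerically: with exponential waiting times, the residual time measured after any age t_a is again exponential with the same rate. A minimal sketch (illustrative parameters):

```python
# Numerical check of Eq. (3.52): a Poisson process does not age.
import numpy as np

rng = np.random.default_rng(1)
G, T_AGE, N = 1.0, 10.0, 100_000     # illustrative rate, age, ensemble size

residuals = np.empty(N)
for i in range(N):
    t = 0.0
    while t <= T_AGE:
        t += rng.exponential(1.0 / G)    # psi(tau) = g exp(-g tau)
    residuals[i] = t - T_AGE             # residual waiting time

# For a memoryless process the residual time is Exp(g), so its mean is 1/g.
print(residuals.mean(), 1.0 / G)
```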

Figure 3.1: A typical time series or realisation. The function ξ(t) fluctuates between the two values +1 and −1.

3.2.3 Autocorrelation

By definition, the autocorrelation function of a stochastic process ξ(t) is

\[
\Phi_\xi(\tau; t) = \langle \xi(t)\,\xi(t + \tau) \rangle, \tag{3.53}
\]

where ⟨...⟩ denotes an ensemble average. This definition is applicable to both stationary and non-stationary processes. If the process is stationary, then Φ_ξ is independent of the time t. An ensemble average means that we have to

use an infinite number of realisations of the stochastic process ξ(t).

Let us consider a Gibbs ensemble of prepared sequences ξ(t), with an arbitrary µ. All the sequences are prepared so that at t = 0 there is an event, and all of them begin with a positive laminar region, i.e. ξ(t) = +1 for 0 ≤ t ≤ τ_1. By definition, the Gibbs average of ξ(t) is given by

\[
\langle \xi(t) \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \xi_n(t), \tag{3.54}
\]


where ξ_n(t) is the nth realisation of the ensemble. By construction, we have ⟨ξ(0)⟩ = 1. We now argue that the function ⟨ξ(t)⟩ will decay in the same way in which its corresponding survival probability decays. In other words,

\[
\langle \xi(t) \rangle = \Psi(t) = \int_t^\infty \psi(t')\,dt'. \tag{3.55}
\]

A more rigorous demonstration can be found in Ref. [17]. A sequence chosen randomly from the ensemble will have its first laminar region positive. Let us imagine that its second laminar region is also positive, and that the first laminar region is of length τ_1 and the second of length τ_2. Since there is an infinite number of realisations, there are surely others whose first two laminar regions are also of lengths τ_1 and τ_2. Moreover, we are using the coin-tossing method to decide the value of ξ in every laminar region apart from the first. Therefore half of the second laminar regions of length τ_2 will be positive and half will be negative. When we sum up these realisations, the second laminar regions cancel each other out. This argument can be applied to all the other sequences. If we sum all the realisations present in the ensemble and normalise, we end up with the ensemble average of ξ(t). It is obvious that ⟨ξ(t)⟩ is a decaying function of time; after all, the first laminar region is always positive. What is less obvious is that ⟨ξ(t)⟩ decays as the survival probability, Ψ(τ),

\[
\Psi(\tau) = \int_\tau^\infty \psi(t)\,dt. \tag{3.56}
\]

This is because all the first laminar regions survive the summation while the second laminar regions cancel each other out. The surviving laminar regions are distributed according to the waiting time distribution and therefore add up to form the survival probability function. We can now use Eq. (3.53) to calculate the autocorrelation function for t = 0. We have

\[
\Phi_\xi(\tau; t = 0) = \langle \xi(0)\,\xi(\tau) \rangle. \tag{3.57}
\]

But we know that at t = 0 the realisation is always positive, therefore

\[
\Phi_\xi(\tau; 0) = \langle \xi(\tau) \rangle = \Psi(\tau). \tag{3.58}
\]


This result rests on the assumption that the events in ξ(t) are generated with the renewal property.
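The cancellation argument can be verified directly. The sketch below (illustrative parameters) prepares an ensemble with ξ(0) = +1, draws waiting times from Eq. (3.31), applies the coin tossing at every subsequent event, and compares the Gibbs average ⟨ξ(t)⟩ with the survival probability Ψ(t).

```python
# Numerical check that <xi(t)> = Psi(t) for a prepared coin-tossing ensemble.
import numpy as np

rng = np.random.default_rng(2)
T, MU = 1.0, 2.5                      # illustrative parameters
N_TRAJ, T_MAX = 20_000, 20.0
t_grid = np.linspace(0.0, T_MAX, 200, endpoint=False)

acc = np.zeros_like(t_grid)
for _ in range(N_TRAJ):
    t, sign = 0.0, 1.0                # first laminar region always positive
    xi = np.empty_like(t_grid)
    while t < T_MAX:
        tau = T * (rng.random() ** (-1.0 / (MU - 1.0)) - 1.0)
        xi[(t_grid >= t) & (t_grid < t + tau)] = sign
        t += tau
        sign = 1.0 if rng.random() < 0.5 else -1.0   # coin tossing
    acc += xi

mean_xi = acc / N_TRAJ
survival = (T / (T + t_grid)) ** (MU - 1.0)          # Psi(t) for Eq. (3.31)
print(np.max(np.abs(mean_xi - survival)))            # small for large N_TRAJ
```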

Stochastic processes with µ < 2 are always non-stationary and therefore their correlation functions change with time; the change is due to the aging process [17]. We can use an argument similar to the one above and calculate the correlation function at any time t = t_a. We prepare the ensemble in such a way that at t = 0 half of the realisations are positive and half are negative. This produces a vanishing mean at the origin, ⟨ξ(t = 0)⟩ = 0. Nevertheless, at t = 0 the ensemble is non-stationary and it is not at equilibrium. Then, we let the ensemble evolve up to time t = t_a.

At this time we select all the sequences characterised by ξ(t_a) = 1. The sum over all these realisations corresponds to a macroscopic out-of-equilibrium fluctuation of size M, with M ≈ N/2, where N is the total number of realisations. In this case, ⟨ξ(t_a)⟩ = 1 and the Gibbs average regresses slowly to zero. We again invoke the property of the coin-tossing procedure and the fact that the ensemble was prepared at t = 0 in order to determine the autocorrelation function. The fact that we prepared the ensemble implies that the residual times τ_res = t* − t_a, where t* is the time of the first event after t_a, are distributed according to the aged waiting time distribution ψ_{t_a}(τ_res). The fact that we used the coin-tossing method implies that the laminar regions that start at t = t* will vanish during the summation process. Consequently, if we move the origin to t = t_a and forget about what happened before this time, we can conclude that

\[
\langle \xi(t) \rangle = \Psi_{t_a}(t) = \int_t^\infty \psi_{t_a}(t')\,dt'. \tag{3.59}
\]

By construction, ξ(t_a) = 1, therefore we can easily determine the autocorrelation function. In fact,

\[
\Phi_\xi(\tau; t_a) = \langle \xi(t_a)\,\xi(t_a + \tau) \rangle = \langle \xi(t_a + \tau) \rangle = \langle \xi(\tau) \rangle = \Psi_{t_a}(\tau). \tag{3.60}
\]

According to the results obtained above, a stationary process with µ > 2 will have the correlation function

\[
\Phi(\tau) = \Psi_\infty(\tau). \tag{3.61}
\]

This is because a prepared sequence, having µ > 2, becomes stationary only after it has evolved for an infinite time. In that case, t_a = ∞ and the autocorrelation function coincides with


the limiting autocorrelation function. At t = 0, Φ_ξ(τ) = Ψ(τ). As time passes, the autocorrelation evolves and tends towards the limit Ψ_∞(τ). When µ < 2, the process never approaches a stationary condition, so the autocorrelation keeps changing without approaching a limiting function. In conclusion, we have shown that the use of the coin-tossing procedure generates an autocorrelation function that is related to the waiting time distribution through Eqs. (3.59) and (3.60). This is the reason why in section 2.1 (see Eq. (2.8)) we decided to use the rate g = 2q instead of just the Kramers rate q. If we apply the results of this section, the autocorrelation function of a Poisson process with rate g, constructed with the coin-tossing procedure, is given by

\[
\Phi_\xi(\tau) = \Psi(\tau) = \int_\tau^\infty g \exp(-g t')\,dt' = \exp(-g\tau). \tag{3.62}
\]

On the other hand, the autocorrelation function of a Poisson process with the Kramers rate q, without the coin-tossing procedure, is given by

\[
C_\xi(\tau) = \exp(-2q\tau). \tag{3.63}
\]

Since the two methods have to produce the same realisation ξ(t), comparing C_ξ(τ) with Φ_ξ(τ) we conclude that g = 2q. We now give a proof of the statement

in Eq. (3.63). A Poisson process is characterised by a waiting time distribution of the form

\[
\psi(\tau) = q \exp(-q\tau). \tag{3.64}
\]

The rate of events is given by P(t) in Eq. (3.33), whose Laplace transform is

\[
\tilde P(u) = \int_0^\infty \exp(-ut)\,P(t)\,dt = \sum_{n=0}^\infty \tilde\psi_n(u) = \sum_{n=0}^\infty \left[\tilde\psi(u)\right]^n = \frac{1}{1 - \tilde\psi(u)}. \tag{3.65}
\]

Since

\[
\tilde\psi(u) = q \int_0^\infty \exp(-ut) \exp(-qt)\,dt = \frac{q}{q + u}, \tag{3.66}
\]


we have \tilde P(u) = (q + u)/u = 1 + q/u, whose anti-transform gives

\[
P(t) = \delta(t) + q. \tag{3.67}
\]

Therefore in the Poisson case the rate is constant, apart from the singularity at zero. If we consider a two-state system with a waiting time distribution given by Eq. (3.64), then the rate at which the system switches between states is q. Let us assume that the two states are |+⟩ and |−⟩. As the system evolves in time, it produces an output signal ξ(t) which is related to the state occupied by the system at time t. When the system is in the state |+⟩ the output signal ξ(t) has the value +1, and when it is in |−⟩, ξ has the value −1. The evolution of such a process can be described by the master equation

\[
\begin{aligned}
\frac{d}{dt} P_{|+\rangle}(t) &= -q\,P_{|+\rangle}(t) + q\,P_{|-\rangle}(t), \\
\frac{d}{dt} P_{|-\rangle}(t) &= -q\,P_{|-\rangle}(t) + q\,P_{|+\rangle}(t), \tag{3.68}
\end{aligned}
\]

where P_{|±⟩}(t) is the probability of finding the system in the state |±⟩. We introduce the variable

\[
\langle \xi(t) \rangle = \Pi(t) = P_{|+\rangle}(t) - P_{|-\rangle}(t).
\]

Substituting into (3.68) yields

\[
\dot\Pi(t) = -2q\,\Pi(t). \tag{3.69}
\]

Integrating, we get

\[
\Pi(t) = \Pi(0) \exp(-2qt). \tag{3.70}
\]

If we consider an ensemble of systems and place all of them in the state |+⟩ at t = 0, then Π(0) = 1 and the mean value becomes

\[
\langle \xi(t) \rangle = \exp(-2qt). \tag{3.71}
\]


Consequently, the autocorrelation function is given by

\[
C_\xi(\tau) \equiv \langle \xi(t + \tau)\,\xi(t) \rangle = \langle \xi(\tau) \rangle = \exp(-2q\tau). \tag{3.72}
\]
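The relation g = 2q can also be checked in simulation. The sketch below (illustrative parameters and helper names; the signals are coarse-grained on a time grid, so a small discretisation bias is expected) builds one dichotomous signal from renewal events at rate g = 2q with coin tossing, and another whose state flips deterministically at the Kramers rate q, and compares their autocorrelations at lag τ = 1 with exp(−2qτ).

```python
# Numerical comparison of the two constructions of a Poisson dichotomous
# signal; both autocorrelations should approach exp(-2 q tau).
import numpy as np

rng = np.random.default_rng(3)
Q, DT, T_MAX = 0.5, 0.05, 100_000.0   # illustrative values
n = int(T_MAX / DT)

def make_signal(rate, deterministic_flip):
    out = np.empty(n)
    i, s = 0, 1.0
    while i < n:
        tau = rng.exponential(1.0 / rate)
        j = min(n, i + max(1, int(round(tau / DT))))
        out[i:j] = s                  # hold the state for one laminar region
        i = j
        if deterministic_flip:        # Kramers picture: flip at every event
            s = -s
        else:                         # coin tossing at every event
            s = 1.0 if rng.random() < 0.5 else -1.0
    return out

xa = make_signal(2 * Q, deterministic_flip=False)   # events at rate g = 2q
xb = make_signal(Q, deterministic_flip=True)        # flips at rate q
lag = int(1.0 / DT)                                 # lag tau = 1
print(np.mean(xa[:-lag] * xa[lag:]),
      np.mean(xb[:-lag] * xb[lag:]),
      np.exp(-2 * Q))                               # all roughly equal
```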

3.2.4 Power Spectrum of a Stationary Process

In the previous subsection we saw that processes with µ > 2 are stationary if they are not prepared. The advantage of dealing with stationary processes is that we can apply the Wiener-Khinchin theorem in order to obtain the power spectrum. To do so, we first have to find the corresponding autocorrelation function. Using a Gibbs ensemble of realisations, we found that the correlation function of an infinitely long realisation with µ > 2 is equal to its infinitely aged survival probability function (see for example Ref. [17] for details). In other words, if ψ(τ) is the waiting time distribution, then the correlation function is

\[
\Phi_\xi(\tau) = \Psi_\infty(\tau) \equiv \int_\tau^\infty \psi_\infty(t)\,dt. \tag{3.73}
\]

We are going to show that the same result can be obtained by using only one realisation of ξ(t). Let us assume that we are given a single realisation of ξ(t), infinite in length. Then, by definition, the correlation function is

\[
\Phi_\xi(\tau) \equiv \lim_{L \to \infty} \frac{1}{L} \int_0^{L-\tau} \xi(t)\,\xi(t + \tau)\,dt. \tag{3.74}
\]

Eq. (3.74) is obtained by moving a window of length τ along the entire realisation ξ(t). If we adopt the coin-tossing method described earlier, we can use the following expression (see Ref. [15]):

\[
\Phi_\xi(\tau) = \frac{1}{\langle \tau \rangle} \int_\tau^\infty d\tau'\, \frac{(\tau' - \tau)}{\tau'}\, \tau'\,\psi(\tau') = \frac{1}{\langle \tau \rangle} \int_\tau^\infty d\tau'\,(\tau' - \tau)\,\psi(\tau'), \tag{3.75}
\]

where ψ(τ) is the waiting time distribution density used to create the process ξ(t) and ⟨τ⟩ is the mean laminar region. We still use the method whereby a window of length τ is moved along the entire realisation. Since the realisation was prepared by the coin-tossing procedure, the only situations that contribute to the correlation function are those in which the window falls entirely within a laminar region. In that case the contribution is always positive, no matter what sign was attributed to the laminar region.
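As a closing check (assumed parameters again), the time average of Eq. (3.74) computed from one long realisation can be compared with Eq. (3.75). For the waiting time distribution of Eq. (3.31) with µ > 2, a short calculation, using ∫_τ^∞ (τ' − τ)ψ(τ')dτ' = ∫_τ^∞ Ψ(t)dt and ⟨τ⟩ = T/(µ − 2), reduces Eq. (3.75) to Φ_ξ(τ) = (T/(T + τ))^{µ−2}.

```python
# Single-realisation check of Eq. (3.75) for mu > 2 (illustrative values).
import numpy as np

rng = np.random.default_rng(4)
T, MU, DT, N = 1.0, 2.8, 0.05, 4_000_000   # mu > 2, so <tau> = T/(mu-2) is finite

xi = np.empty(N)                  # one long realisation on a discrete grid
i, s = 0, 1.0
while i < N:
    tau = T * (rng.random() ** (-1.0 / (MU - 1.0)) - 1.0)
    j = min(N, i + max(1, int(round(tau / DT))))   # coarse-grained region
    xi[i:j] = s
    i = j
    s = 1.0 if rng.random() < 0.5 else -1.0        # coin tossing

def phi_theory(tau):
    # Eq. (3.75) with psi(t) = (mu-1) T^(mu-1)/(T+t)^mu reduces to
    # (T / (T + tau))^(mu - 2).
    return (T / (T + tau)) ** (MU - 2.0)

for tau in (0.5, 2.0, 8.0):
    lag = int(tau / DT)
    print(tau, np.mean(xi[:-lag] * xi[lag:]), phi_theory(tau))
# The time averages should match the theory up to statistical and
# discretisation error.
```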

