Mode decomposition of a neural field equation with homogeneous and heterogeneous connectivities

Candidate: Daniele Daini


Contents

1 Introduction 9

1.1 Models of brain activity . . . 9

1.2 Dynamical systems . . . 14

2 Neural field equation 19

2.1 Neural field dynamics with local and global connectivity and time delay . . . 19

2.1.1 Unidimensional case . . . 19

2.1.2 A first unfruitful approach to the spherical domain . . . 24

2.2 Circumference . . . 26

2.2.1 Instantaneous homogeneous connectivity . . . 26

2.2.2 Homogeneous connectivity with delay . . . 28

2.2.3 Addition of the heterogeneous term . . . 29

2.3 Sphere . . . 31

2.3.1 Instantaneous homogeneous connectivity . . . 31

2.3.2 Homogeneous connectivity with delay . . . 32

2.3.3 Addition of the heterogeneous term . . . 34

3 Results 39

3.1 Circumference . . . 39

3.1.1 Mode Decomposition . . . 39

3.1.2 Stability analysis . . . 52

3.1.3 Exact Equation . . . 57

3.2 Sphere . . . 64

3.2.1 Mode Decomposition . . . 64

3.2.2 Stability Analysis . . . 71

3.2.3 Exact Equation . . . 75

3.3 Code . . . 80

3.3.1 Matlab scripts . . . 80

3.3.2 Python scripts . . . 80

3.3.3 Notes on the final scripts . . . 80

4 Conclusions 83


Abstract

Neural field equations, studied since the 1950s, describe the activity of a neural network in a continuous spatiotemporal domain. One of the first works to use this approach was the paper by Beurle [1]. During the last decades, much effort has been devoted to obtaining numerical solutions of these equations, which has made it possible to test some of the theoretical assumptions underlying the neural field equations by comparing these solutions with human EEG and MEG recordings.

Neural field equations describe average neural network activities. An essential feature of a neural network is its connectivity, that is, how and where neuronal units are connected to each other. It must be noted that each neuronal unit represents a population of homogeneous neurons, according to an approach called the neural mass model [2, 3]. Whereas it is theoretically easy to introduce this quantity, on the experimental side the procedure for obtaining the brain connectivity is still an active research area of neuroscience [4, 5].

The work of this thesis starts from the following delayed integro-differential equation, describing the temporal and spatial evolution of a neural field:

$$\dot{\psi}(x, t) = -\psi(x, t) + \int_\Gamma W_{hom}(|x - y|)\, S[\psi(y, t - |x - y|/c)]\, dy + \int_\Gamma W_{het}(x, y)\, S[\psi(y, t - |x - y|/v)]\, dy \qquad (0.1)$$

This equation, derived in [6] and analyzed in [7], assumes that the connectivity can be decomposed into the sum of two terms: the first one, called the homogeneous connectivity, contains the spatially invariant connections that surround any neural unit, while the second one, called the heterogeneous connectivity, describes the long-distance connections. Both terms contain a temporal delay due to the finite propagation velocity of the neural signal.

The results reported in [7] by solving this equation were obtained on a one-dimensional spatial domain. In this thesis, we have extended the spatial domain to a sphere, motivated by the possibility of mapping the cortical surface to a sphere using already-implemented algorithms that inflate it. Thus, in the future, it will be possible to compare the results obtained in this thesis to experimentally collected data, mainly EEG and MEG signals.

In order to speed up the simulations of the neural field equation on the spherical domain, we introduced a truncated mode decomposition of the solution, using the spherical harmonics as a complete set of functions. This has proven to be very efficient: as long as the truncation does not affect the dynamics of the solution too much, the simulation achieves arbitrary precision in the spatial domain, which is entirely described by the spherical harmonics, so that the gain in computational efficiency is remarkable. We have shown that the spherical harmonics up to order twenty are sufficient to reproduce the dynamics of the neural field predicted by the complete equation shown above, assuming that a valid spatial resolution of the cortical surface is of the order of one centimetre, which is a realistic experimental spatial resolution.

Extensive use of Matlab and Python has been necessary; the computational part of the thesis has been developed mainly in Marseille (FR), with the theoretical neuroscience group of Viktor Jirsa.


Words are not good for the secret meaning; everything always becomes a bit different as soon as it is put into words, gets distorted a bit, a bit silly - yes, and this is also very good, and I like it a lot; I also very much agree with this: that what is one man's treasure and wisdom always sounds like foolishness to another person.

Hermann Hesse - Siddhartha


Moreover, I would like to thank my friend and colleague Roberto, my partner Sara and my family for the invaluable moral support.


Introduction

Models of brain activity

Figure 1.1 – A chimpanzee brain at the Science Museum London.

The first reference to the brain anywhere in human records was found in a papyrus written in the seventeenth century B.C., as reported in [8]. The study of the brain, and more generally of the mind, belonged to philosophy for centuries; only more recently has it become the object of multidisciplinary scientific investigations.

A unified vision of how this organ works is still missing, but in the last decades many steps forward in the understanding of specific features of its activity have been made, thanks to the progress in experimental techniques and computational power.

Neuroscience, the science that studies nervous systems, includes the anatomy, biochemistry, and molecular biology of neurons and neural circuits. Experimental data in neuroscience also come from pharmacology, psychology, and medicine. Some of the aims of neuroscience are the understanding of the neuronal mechanisms underlying perception, action, memory, emotion, and consciousness, both in health and in disease.

The experimental techniques range from the molecular and cellular biology of neurons and electrophysiology to functional magnetic resonance imaging (fMRI) [9, 10], and this implies a high degree of specialization and fragmentation of neuroscience. Among all the approaches, one that is assuming an ever-increasing role is computational neuroscience [11, 12], whose main task is to build bridges between the different areas in which neuroscience is fragmented [13].

Moreover, neural dynamics involves phenomena varying on spatial and temporal scales extending over many orders of magnitude. Thereby, models can address intracellular signalling pathways, synapses, single neurons, and small neural networks up to large-scale neural networks. At each spatial scale, models differ in the level of detail they include [14].

In this thesis, we have focused on the cerebral cortex and its dynamics, which can be captured with different modelling approaches. The cerebral cortex has been shown to be strongly linked to the higher cognitive functions, like language, perception, memory, and consciousness. It consists of a sheet of neurons and white matter about 3 mm thick and, for each cerebral hemisphere, once it is unfolded, it occupies an area of about 0.12 m². One mm³ of cortex contains around 10⁴-10⁵ neurons [15].

The elementary unit of the nervous system is called the neuron, which, even though it appears in hundreds of morphological shapes, can by and large be schematized as divided into four functional zones: the soma-dendrite zone, the action potential initiation zone, the action potential propagation zone, and the axonal output zone.

The soma-dendrite zone integrates the incoming signals, and the action potential initiation zone is endowed with a threshold mechanism which determines whether an action potential is generated. The last two zones propagate and transmit the neural signal to other neurons.

Figure 1.2 – On the left side, SMI32-immunoreactive pyramidal neuron in medial prefrontal cortex of macaque. On the right, Golgi-stained neurons in the somatosensory cortex of the primate, Macaca fascicularis.

We have said that there are different approaches for modelling neurons and neural networks. For example, phase models [16, 17, 18] are mostly used for weakly coupled neurons, and statistical approaches are used in a wide range of conditions, such as in [19]. Neural mass models are able to describe networks composed of homogeneous pools of neurons [3, 20, 21]. Some neural field models [7, 22, 23, 24] and phase models [16, 25] take into account time delays due to a finite speed of signal transmission. The neural field approach aims to capture the average neural activity by considering the spatial distribution of a large population; for example, neural fields have been used to describe the higher functions of the brain [26, 27, 28, 29]. We report here a brief introduction to the background and approach [6] used in this thesis: define Z as a vector that contains the variables describing the dynamics of a single neuron; then the dynamics of the whole system generally obeys:

$$\frac{dZ(t)}{dt} = F_a[Z] \qquad (1.1)$$

Different neural models with this form are able to predict the experimentally detected behaviours of a single neuron; see for example [30, 31, 32]. We are interested in describing the interactions between several neurons, so we need a function that models these interactions; let us define it as H_c. The neuron in position x_i then evolves according to:

$$\frac{dZ(x_i, t)}{dt} = F_a\!\left( Z(x_i, t), \sum_{j \neq i} H_c[Z(x_j, t)] \right) \qquad (1.2)$$

It is often found that the strength of connection depends on the distance; moreover, it is usually possible to consider the local dynamics and the inputs due to other neurons as separate terms:

$$\frac{dZ(x_i, t)}{dt} = F_a(Z(x_i, t)) + \sum_{j \neq i} H_c[Z(x_j, t)] \qquad (1.3)$$

The threshold behaviour of the dynamics of a neuron is due to the action of the ion channels, proteins embedded in the cell membrane that permit or stop the flow of specific currents. These channels depolarize or repolarize the membrane, and are sensitive to specific chemical signals that can activate or deactivate them. In order to stress the role of these proteins, we rewrite the last equation as:

$$\frac{dZ(x_i, t)}{dt} = \sum_{ion} f_{ion}(Z(x_i, t)) + I(x_i, t) \qquad (1.4)$$

where I is the synaptic current due to the action of the presynaptic neurons, which activates ligand-gated ion channels, obeying:

$$I(x_i, t) = \sum_j H_c[Z(x_j, t - \tau_j)] + I_{external} \qquad (1.5)$$

It should be noticed that the (discrete) delay due to the finite speed of the signal transmission has been introduced here.

A continuous ensemble of neurons can be modelled with similar relations; the sums become integrals, and we obtain the integro-differential equation:

$$\frac{dZ(x, t)}{dt} = N(Z(x, t)) + \int_\Omega \int_{-\infty}^{t} h(x - x')\, H(Z(x - x', t - t'))\, dt'\, dx' \qquad (1.6)$$

where N[Z(x, t)] = Σ_ion f_ion[Z(x, t)].

This expression, thanks to its generality, has been widely used in the literature.

Models of a single neuron can be built at many levels of detail. The neural electric activity can be accurately described with interconnected RC circuits. When the resistance is allowed to vary as a function of time and membrane voltage, it is possible to describe the genesis and propagation of the so-called action potential, which is a variation of the membrane potential transmitted without dispersion (see, for example, the historical work of Hodgkin and Huxley [30]). With the same electric circuit analogy, it is possible to describe the connections between neurons, which are called synapses, and in this way it is possible to model neural networks. This modelling approach is known as conductance based, and it is the most expensive in computational cost.

Thereby, in many cases, the choice is to work in a coarse-grained framework that allows reproducing the approximate behaviour of a network at the cost of losing some details of the dynamics of the single neuron. Different models have been developed in this direction, using the mathematics of dynamical systems (see the next chapter for a brief review of the principal features).

For example, the cortex seems to have a columnar organization which has been experimentally detected (see for example Chapter 12 of [13]); by means of electrodes, it is possible to record the electrical response of a part of a neural network to a stimulus. By moving the electrode vertically through the different layers of neurons in the cortex, the responses were similar, while by moving the electrode horizontally a continuous change was detected; these experiments have been carried out mainly on the visual receptive field, which is particularly suited to this kind of test because of the strong response it shows for the same stimulus. Thus, the idea grew up of describing the recorded structures as quasi-microscopic entities [33] called neural populations [13]. The edges of a neuronal population are still not well defined, but normally they are constituted by a large number of neurons, ranging from a few hundred to several thousand, sharing the same characteristics. The population activity is the mean number of active neurons (defined as the ones that are spiking) in an infinitesimal time interval ∆t. If the neural population is constituted by N neurons, and the number of spikes detected in the time interval [t, t + ∆t] is n_act(t; t + ∆t), then the population activity is defined as:

$$A(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\, \frac{n_{act}(t; t + \Delta t)}{N} = \frac{1}{N} \sum_{j=1}^{N} \sum_{f} \delta(t - t_j^f) \qquad (1.7)$$

where δ denotes the Dirac delta function, and the t_j^f are the firing times of all the neurons of the population. An interesting example that shows how the dynamics of the single neuron can be used to build the dynamics of a neural population is reported in [13]: suppose a neural network is composed of identical neurons that are all connected to each other with the same coupling strength. Briefly, we say that the neural network has a homogeneous all-to-all coupling. Moreover, suppose that the dynamics of the single neuron is known; for simplicity, we suppose that the activity of the single neuron can be summarized by a single variable u(t) that evolves according to:

$$\tau_m \frac{du_i}{dt} = -u_i + R\, I_i(t) \quad \text{for } u_i < \theta \qquad (1.8)$$

where τ_m is the typical time of the system (related to some features of the cellular membrane of the neuron), R is a constant and I_i is the input current received from the other neurons of the network. We combine this equation with the reset condition that, once u_i ≥ θ, the variable u_i restarts at the value u_r = 0. This kind of assumption is often used in the literature of neural networks; this is an example of the leaky integrate-and-fire neuron model.

Let us assume a coupling strength w_ij = w_0; then, the input current I_i can be easily modelled as:

$$I_i = \sum_{j=1}^{N} \sum_f w_{ij}\, \alpha(t - t_j^f) + I^{ext}(t) \qquad (1.9)$$

where α(t − t_j^f) represents the postsynaptic current generated by an input spike, and the sum on the right-hand side runs over all the firing times of all neurons. In this condition, it is of interest to notice that the input current is the same for every neuron: in fact, using the definition of population activity given in (1.7), the last equation becomes:

$$I(t) = w_0 N \int_0^\infty \alpha(s)\, A(t - s)\, ds + I^{ext}(t) \qquad (1.10)$$

This current does not depend on the neural index i. In this example, we have shown that a homogeneous network can easily be described as a single entity.
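As a concrete illustration of equations (1.7)-(1.10), the following minimal Python sketch simulates a homogeneous all-to-all network of leaky integrate-and-fire neurons and feeds back a filtered estimate of the population activity A(t) as the common input current. All parameter values and the exponential synaptic kernel are illustrative assumptions, not values taken from [13].

```python
import numpy as np

# Illustrative all-to-all LIF network, cf. eqs. (1.7)-(1.10).
# Parameter values and the exponential kernel are assumptions made for this sketch.
N, T, dt = 100, 1.0, 1e-4               # neurons, total time [s], time step [s]
tau_m, R, theta, u_r = 0.02, 1.0, 1.0, 0.0
w0, I_ext, tau_s = 1e-4, 1.2, 0.005     # coupling, external drive, synaptic time constant

steps = int(T / dt)
u = np.random.uniform(0.0, theta, N)    # membrane variables u_i
filt = 0.0                              # low-pass filtered population activity
A = np.zeros(steps)

for k in range(steps):
    I = I_ext + w0 * N * filt                      # same input for every neuron, eq. (1.10)
    u += dt / tau_m * (-u + R * I)                 # leaky integration, eq. (1.8)
    fired = u >= theta
    u[fired] = u_r                                 # reset condition
    A[k] = fired.sum() / (N * dt)                  # population activity, eq. (1.7), finite bin
    filt += dt / tau_s * (-filt + A[k])            # exponential kernel standing in for alpha(s)

print("mean population activity [spikes/s]:", A.mean())
```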

Let us take a step further and consider a network of homogeneous populations; the activity of the n-th pool is

$$A_n(t) = \frac{1}{N_n} \sum_{j \in \Gamma_n} \sum_f \delta(t - t_j^f) \qquad (1.11)$$

where we indicated by N_n the number of neurons of the n-th neural population and by Γ_n the set of neurons belonging to the n-th neural population. Suppose that every neuron i in pool n is linked to every neuron j in neural population m with strength w_ij = J_nm/N_m. With a notation similar to the one used before, the input current in group Γ_n is:

$$\sum_j \sum_f w_{ij}\, \alpha_{ij}(t - t_j^f) = \sum_m J_{nm} \int_0^\infty \alpha_{nm}(s) \sum_{j \in \Gamma_m} \sum_f \frac{\delta(t - t_j^f - s)}{N_m}\, ds \qquad (1.12)$$

Using the definition of the activity of a pool of neurons stated in (1.11), it is possible to obtain:

$$I_n = \sum_m J_{nm} \int_0^\infty \alpha(s)\, A_m(t - s)\, ds \qquad (1.13)$$

Thus, the index i does not play a role, because the input current is the same for every neuron in pool n. In this example, the population activity A(t) is a reliable descriptive variable of the interaction between neurons of different pools. Theories of population dynamics are generally called neural mass models, where a neural population is redefined as a neural mass that has its own non-reducible dynamics; this kind of model is often used to emulate and describe non-invasively obtained brain imaging data, like EEG and MEG. Typically, a neural mass contains from thousands (as for the corticocortical columns) to millions (as for the macrocolumns) of neurons.
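As a small illustration of (1.13), the following Python sketch computes the input currents to each pool as a discrete convolution of the pool activities with a postsynaptic kernel. The coupling matrix, the toy pool activities and the exponential kernel are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of eq. (1.13): the input to pool n is a kernel-filtered,
# connectivity-weighted sum of the pool activities. All values are assumptions.
dt = 1e-3                                   # time step [s]
t = np.arange(0.0, 2.0, dt)
J = np.array([[0.0, 0.8],                   # J_nm: coupling from pool m to pool n
              [0.5, 0.0]])
A = np.vstack([50 + 10 * np.sin(2 * np.pi * 3 * t),    # toy pool activities A_m(t) [spikes/s]
               40 + 10 * np.cos(2 * np.pi * 5 * t)])

tau_s = 0.01
s = np.arange(0.0, 0.1, dt)
alpha = np.exp(-s / tau_s) / tau_s          # normalized postsynaptic kernel alpha(s)

def pool_input(J, A, alpha, dt):
    """I_n(t) = sum_m J_nm * integral of alpha(s) A_m(t - s) ds, as a discrete convolution."""
    filtered = np.array([np.convolve(A_m, alpha, mode="full")[:A.shape[1]] * dt for A_m in A])
    return J @ filtered                     # shape: (n_pools, n_times)

I = pool_input(J, A, alpha, dt)
print(I.shape, I[:, -1])                    # input currents to each pool at the final time
```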

When the distance between the neural masses becomes infinitesimally small, the continuous function that describes the activity of the system is called a neural field. Notice that in this case the mathematical framework develops interesting analogies with fluid dynamics; average macroscopic quantities (like the population activity) are defined over a continuum of elements (the neural masses) that are supposed to be formed by a large number of primitive units (the neurons).

In this case, in [6], the analogue of equation (1.6) is presented for the large-scale situation. Instead of Z, the activity is now described by the neural field Ψ(x, t). If Q is the analogue of N and S is the analogue of H, it is possible to prove that the neural field obeys the equation:

$$\frac{d\Psi(x, t)}{dt} = Q(\Psi(x, t)) + \int_\Omega \int_{-\infty}^{t} h(x - x')\, S(\Psi(x - x', t - t'))\, dt'\, dx' \qquad (1.14)$$

Different models with this structure have been developed in the last decades; in Amari's classic article [34], a neural field that describes networks without delay, and with symmetric and translationally invariant connection topologies, has been studied. With these assumptions, equation (1.14) becomes:

$$\tau \frac{d\Psi(x, t)}{dt} = -\Psi(x, t) + \int h(x - x')\, S(\Psi(x', t))\, dx' + c + s(x, t) \qquad (1.15)$$

where S in this context is typically the Heaviside function, s represents external input and c is a constant resting potential.

When the spatial scale of the network increases, the time delays play a more important role; thus, it is worthwhile to develop a model that takes the time delays into account. The works of Wilson & Cowan [35] and of Nunez [36, 37] proved to be crucial; for example, the neural field used to study the dispersion relations of the linearized neural field dynamics, once the distributions of intracortical and corticocortical fiber systems are specified, follows the integro-differential equation:

$$\frac{d\Psi(x, t)}{dt} = -\Psi(x, t) + s(x, t) + \int_\Omega \int_0^\infty h(x - x', v)\, S\!\left( \Psi\!\left( x', t - \frac{|x - x'|}{v} \right) \right) dv\, dx' \qquad (1.16)$$

This equation shares the same form as (2.1), which is the equation on which the whole work presented in this thesis has been built, and whose aim is to develop a model able to use experimental data in order to estimate some parameters, such as the elements of the connectivity matrix.

Regarding the spatial domain, it has been shown that it is possible to map the connectome obtained on the cortical surface to a sphere, inflating the mesh of the folded cortical surface with different algorithms. So, if we are able to develop a neural field model with a spherical topology, we should be able to compare the experimental measurements with the solution of the model.

In order to simulate an equation of the form of (1.16) on a spherical surface, we need to discretize the sphere and the time, and to compute the evolution of the discretized neural field. It is not difficult to understand why it is not easy to construct a simulation of this equation that is at the same time accurate and not too expensive in computing power. In order to keep the spatial structure as precise as we want, without increasing the simulation time too much, we developed a truncated mode decomposition of the neural field. Spherical harmonics have been used, as suggested by the dynamic behaviour of the activity of the brain [38].

The spherical harmonics are composed of the product of an associated Legendre polynomial with a plane wave, as can be seen from the definition in (3.36); thus, the spherical harmonic Y_{l,m}(θ, φ) can be rewritten in order to make explicit the dependence on the terms cos^l(θ) and cos^m(φ). In a Fourier decomposition, these are the terms that contain the highest power of the trigonometric functions; this property is relevant since it gives us some insight into the spatial resolution reached by the truncated mode decomposition. In fact, the well-known property of the cosine function, cos^k(x) ≃ cos(kx) + const·cos((k−1)x) + ..., implies that a term like cos^k(x) is able to resolve a length of the interval of order T/k, where T is the period of the x variable. Thus, if we truncate the mode decomposition at the k-th order, we lose every variation that happens on a scale smaller than T/k; in our case, we are interested in resolving the surface of a sphere, so if we decompose the neural field using all the spherical harmonics up to order l*, we will be able to resolve an area of A ≃ R² (2π/l*)(π/l*). Hence

$$l^* \simeq \frac{\pi R \sqrt{2}}{\sqrt{A}} \qquad (1.17)$$

where R is the radius of the sphere and A is the resolved area. Using this rough estimate we can choose where to truncate the mode decomposition depending on the desired spatial resolution. Better estimates need to take into account that the topology of the sphere differs from that of the plane and does not allow a rectangular mesh of the spherical surface. We anticipate that with l* ≃ 20 we obtained results sufficiently similar to what is obtained with a mesh surface that, on average, considering R ≃ 20 cm, resolved an area of A ≃ 1 cm².
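As an illustration of such a truncated expansion, the following Python sketch projects a smooth field on the sphere onto the spherical harmonics up to degree l* = 20 with scipy and reconstructs it. The grid, the simple Riemann-sum quadrature and the test field are illustrative assumptions, not the discretization used in the thesis.

```python
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, polar)

l_max = 20                                        # truncation order l* ~ 20, as in the text
n_th, n_ph = 64, 128                              # polar x azimuthal grid (illustrative)
theta = (np.arange(n_th) + 0.5) * np.pi / n_th    # polar angle in (0, pi)
phi = np.arange(n_ph) * 2 * np.pi / n_ph          # azimuth in [0, 2pi)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dA = np.sin(TH) * (np.pi / n_th) * (2 * np.pi / n_ph)   # area element on the unit sphere

# A toy smooth field standing in for psi(Omega); purely illustrative.
psi = np.cos(3 * TH) + 0.5 * np.sin(2 * PH) * np.sin(TH) ** 2

# Forward transform: psi_{l,m} = integral of Y_{l,m}^*(Omega) psi(Omega) dOmega, cf. eq. (2.22)
coeffs = {}
for l in range(l_max + 1):
    for m in range(-l, l + 1):
        Y = sph_harm(m, l, PH, TH)
        coeffs[(l, m)] = np.sum(np.conj(Y) * psi * dA)

# Truncated reconstruction, cf. eq. (2.21)
recon = sum(c * sph_harm(m, l, PH, TH) for (l, m), c in coeffs.items()).real
print("max reconstruction error:", np.abs(recon - psi).max())
```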

Dynamical systems

In the last section, we introduced one of the most commonly used frameworks for building reliable models of a neural network. As can easily be noticed, dynamical systems have been used intensely; in this section, we introduce some mathematical topics of dynamical systems theory. We will present only the very basic ideas and concepts behind it; for a more general and rigorous treatment see for example [39, 40]. The tools of dynamical systems have been used intensely in neuroscience, and some of the most important works are [32, 41, 42, 43, 44, 45, 46]. In order to define a dynamical system, we need some introductory concepts, and we will use the notation of [40]:

1. The state space is a set X that contains the points x that represent the state of the system and how this state is changing; it is also called the phase space. For example, in the case of an isolated system with s degrees of freedom, using the generalized coordinates q_i and p_i, we find X = R^{2s}; if there are k cyclic coordinates, the space reduces to X = S^k × R^{2s−k}, where S^k is the k-dimensional sphere.

2. Time is the variable that drives the evolution of the system; from now on, we will suppose for simplicity t ∈ R.

3. The evolution is generally described by a function φ^t : X → X, x_0 ↦ x(t) = φ^t(x_0), called the evolution operator.

According to the notation just introduced, a dynamical system is a triple {T, X, φ^t}, where T is a time set, X is a state space, and φ^t is a family of evolution operators parametrized by t ∈ T and satisfying φ^0 = identity and φ^{t+s} = φ^t ∘ φ^s for every t, s ∈ T.

This definition allows us to define in an elegant way the most commonly used features of a dynamical system:

• We call the orbit starting from x_0 the ordered subset of X defined as:

$$Or(x_0) = \{ x \in X \mid x = \varphi^t(x_0)\ \forall t \in T \text{ such that } \varphi^t(x_0) \text{ is well defined} \} \qquad (1.18)$$

• x_0 ∈ X is called a fixed point if φ^t(x_0) = x_0 for all t ∈ T. A fixed point can be stable, unstable or a saddle point depending on the behaviour of the solutions near it.

• A cycle is a periodic orbit L_0 such that for every x_0 ∈ L_0 and every t ∈ T there exists T_0 > 0 with φ^{t+T_0}(x_0) = φ^t(x_0). The smallest T_0 that satisfies this property is called the period.

• A cycle with no other cycles near it is called a limit cycle.

• The phase portrait of a dynamical system is a subdivision of X into orbits; it is quite useful in order to determine the asymptotic states of the system. The appearance of a topologically non-equivalent phase portrait under variation of the parameters of the dynamical system is called a bifurcation.

It should be noticed that every differential equation defines a dynamical system; in fact, consider f : R^n → R^n and the equation ẋ = f(x), and call x(t, x_0) a solution of this equation. Then, once we define the evolution operator as φ^t(x_0) = x(t, x_0), {R, R^n, φ^t} is a dynamical system. The big challenge of dynamical systems theory is to obtain information about the phase portrait of {R, R^n, φ^t} without solving the equation ẋ = f(x). For example, if we are interested in determining the fixed points φ^t(x_0) = x_0, we only need to solve f(x_0) = 0. If we want to know whether a fixed point is stable, we can use a Lyapunov theorem: suppose A is the Jacobian matrix of f(x) evaluated at the fixed point x_0; then x_0 is a stable point if every eigenvalue λ of A has a negative real part. An example of analysis of a dynamical system is represented in Figure 1.3.
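To illustrate the fixed-point analysis just described, the following short Python sketch finds a fixed point of a toy two-dimensional system numerically and applies the eigenvalue criterion to a finite-difference Jacobian. The example system, the finite-difference step and the initial guess are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy two-dimensional system x' = f(x) (an illustrative damped oscillator with a cubic term)
def f(x):
    return np.array([x[1], -x[0] - 0.5 * x[1] - x[0] ** 3])

def jacobian(f, x0, h=1e-6):
    """Finite-difference Jacobian of f at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2 * h)
    return J

x_fp = fsolve(f, np.array([0.1, 0.1]))        # solve f(x0) = 0 for a fixed point
eigvals = np.linalg.eigvals(jacobian(f, x_fp))
print("fixed point:", x_fp)
print("eigenvalues:", eigvals)
print("stable:", np.all(eigvals.real < 0))    # Lyapunov criterion: all Re(lambda) < 0
```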

All the very basic concepts that we have just introduced have proven to be fundamental in neuroscience. In the historical paper [31], a set of equations is introduced and studied that is able to reproduce the periodic bursting observed in some neurons of the pond snail Lymnaea; the equations are not particularly sophisticated (for an even simpler model that exhibits bursting dynamics, see [43]):

$$\dot{x} = y - ax^3 + bx^2 + I - z \qquad (1.19)$$
$$\dot{y} = c - dx^2 - y \qquad (1.20)$$
$$\dot{z} = r(s(x - x_1) - z) \qquad (1.21)$$

Figure 1.3 – On the left, the phase plane with the orbits of ẋ = (d/τ)(−f x³ + e x² + g x + α y + γ I) and ẏ = (d/τ)(c x² + b x − β y + a). On the right, the trajectories obtained by varying the parameter I; notice that a bifurcation is captured, since a cycle breaks and a fixed point is generated. The black lines represent the nullclines.

But they are still used in a wide range of cases. For example, the group of Viktor Jirsa in [27] used these equations in order to build a dynamical system capable of reproducing (and, in part, predicting) the features of an epileptic seizure [47]; this model has been named the Epileptor. Using XPPAUT [48], we implemented the Epileptor model and ran some simulations. Figure 1.4 reports one of those simulations, with a zoom on one of the epileptic seizures. The equations used are:

$$\dot{x}_1 = y_1 - f_1(x_1, x_2) - z + I_{rest1} \qquad (1.22)$$
$$\dot{y}_1 = y_0 - 5x_1^2 - y_1 \qquad (1.23)$$
$$\dot{z} = \frac{1}{\tau_0}\left( 4(x_1 - x_0) - z \right) \qquad (1.24)$$
$$\dot{x}_2 = -y_2 + x_2 - x_2^3 + I_{rest2} + 0.002\, g(x_1) - 0.3(z - 3.5) \qquad (1.25)$$
$$\dot{y}_2 = \frac{1}{\tau_2}\left( -y_2 + f_2(x_1, x_2) \right) \qquad (1.26)$$

where the definitions of the functions and of the parameters are reported in [27].
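The following Python sketch shows how a five-variable system with the structure of (1.22)-(1.26) can be integrated with scipy. The functions f1, f2, g and all parameter values below are simplified placeholders chosen only to make the sketch self-contained and runnable; the actual definitions and parameter values are those reported in [27].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters and nonlinearities (illustrative only; see [27] for the real ones).
x0_, y0_, tau0, tau2 = -1.6, 1.0, 2857.0, 10.0
I_rest1, I_rest2 = 3.1, 0.45

def f1(x1, x2):
    return x1 ** 3 - 3.0 * x1 ** 2                      # placeholder cubic nonlinearity

def f2(x1, x2):
    return 0.0 if x2 < -0.25 else 6.0 * (x2 + 0.25)     # placeholder threshold-linear term

def g(x1):
    return 0.0                                          # placeholder for the filtered coupling of [27]

def epileptor(t, s):
    x1, y1, z, x2, y2 = s
    dx1 = y1 - f1(x1, x2) - z + I_rest1
    dy1 = y0_ - 5.0 * x1 ** 2 - y1
    dz = (4.0 * (x1 - x0_) - z) / tau0
    dx2 = -y2 + x2 - x2 ** 3 + I_rest2 + 0.002 * g(x1) - 0.3 * (z - 3.5)
    dy2 = (-y2 + f2(x1, x2)) / tau2
    return [dx1, dy1, dz, dx2, dy2]

sol = solve_ivp(epileptor, (0.0, 2000.0), [0.0, -5.0, 3.0, 0.0, 0.0], max_step=0.5)
lfp = sol.y[0] + sol.y[3]                # the "fast" observable x1 + x2, as in Figure 1.4
print("x1 + x2 range:", lfp.min(), lfp.max())
```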

We want to underline how important these results are for understanding epilepsy, a neurological disorder that affects a large part of the world population. This is one of the reasons why dynamical systems theory has found renewed interest among scientists.

Figure 1.4 – Simulation of the Epileptor model; the slow variable z is plotted in blue, while the sum x1 + x2 is plotted in red. This simulation presents all the features underlined in [27].

Another interesting example of dynamical systems applied to neuroscience problems is the work of Assisi [49]. In the last section we introduced the concept of a neural population as a pool of neurons with exactly the same parameters; in [49], the more realistic case with parameter dispersion and heterogeneous neurons is considered. Using a globally coupled network of N FitzHugh-Nagumo neurons, the evolution equations are:

$$\dot{x}_i = c\left( x_i - \frac{x_i^3}{3} + y_i \right) + K(X - x_i) + c\, z_i \qquad (1.28)$$
$$\dot{y}_i = \frac{1}{c}\left( x_i - b y_i + a \right) \qquad (1.29)$$

where i runs from 1 to N, K is the coupling strength and X = (1/N) Σ_{i=1}^{N} x_i is the average activity of the network. The parameters z_i introduce the heterogeneity in the neural network; they are supposed to be drawn from a distribution g(z) with standard deviation σ. It is observed that, varying K and σ, the network shows different behaviours, like bistability, oscillatory death and settling to a fixed point. We simulated this system; the results are shown in Figure 1.5 (they are quantitatively different from what has been shown by Assisi; this should not be surprising, since not every parameter has been reported in [49]. Qualitatively, the behaviour is the same as the one presented in the article).

Thus, if heterogeneity is introduced in a neural network, we should expect different results depending on the parameter values of the system. In describing this neural network, the mean field approximation might lose some important dynamics of the neural network. Looking at Figure 1.5, we can see the mean activity capturing the whole dynamics (top), whereas in the second and in the third row of the figure the mean has been taken over a bistable system, and thus it fails to represent the dynamics of the whole neural network. In [49], a mode decomposition in the space of the parameters has been presented as a possible improvement over the canonical mean field approach. The work presented in [50] and in [51] is a natural evolution of [49], with more biologically realistic cases simulated.
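The following Python sketch integrates the globally coupled FitzHugh-Nagumo network (1.28)-(1.29) with heterogeneity drawn from g(z). The values of a, b, c, the time step and the Gaussian choice for g(z) are illustrative assumptions, since not all parameters are reported in [49].

```python
import numpy as np

# Sketch of the globally coupled FitzHugh-Nagumo network of eqs. (1.28)-(1.29).
# The values of a, b, c and the Gaussian form of g(z) are illustrative assumptions.
rng = np.random.default_rng(0)
N, a, b, c = 200, 0.7, 0.8, 3.0
K, sigma = 2.0, 0.1                     # coupling strength and heterogeneity spread
z = rng.normal(0.0, sigma, N)           # heterogeneity parameters z_i drawn from g(z)

dt, steps = 0.01, 20000
x = rng.uniform(-1, 1, N)
y = rng.uniform(-1, 1, N)
mean_x = np.empty(steps)

for k in range(steps):
    X = x.mean()                                        # average network activity
    dx = c * (x - x ** 3 / 3 + y) + K * (X - x) + c * z
    dy = (x - b * y + a) / c
    x += dt * dx
    y += dt * dy
    mean_x[k] = X

print("late-time mean activity (last 10%):", mean_x[-steps // 10:].mean())
```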


(a) σ = 0.1 , K = 2 (b) σ = 0.1 , K = 2

(c) σ = 0.3 , K = 0.4 (d) σ = 0.3 , K = 0.4

(e) σ = 0.4 , K = 2 (f) σ = 0.4 , K = 2

Figure 1.5 – Three different behaviours observed by simulating the neural network of [49]. The first column shows how the x variables of the neurons evolve in time (arbitrary units are used), while the second one shows the corresponding evolution in time of the mean activity of the population in the same row. Notice the fixed point and limit cycle behaviours.


Neural field equation

In this chapter, the results of Jirsa's paper [7] are collected and expanded. In [7], a system is considered whose connectivity can be written as the sum of a homogeneous connectivity term (which is invariant under translation by definition) and a heterogeneous connectivity term. The latter term is, by hypothesis, considered as a delta-shaped link; in this case, the calculations are particularly easy to carry out. The neural field has been defined on a spatial domain that is one-dimensional and has the topology of a straight line. This chapter aims to present a derivation of the delayed integro-differential equation that the neural field must satisfy, following the derivation in [7] and underlining the hypotheses and the approximations that are necessary. Then, the same calculation is repeated with different homogeneous connectivity functions in order to show that, as long as the homogeneous connectivity is chosen with a short spatial range, the details of this function do not change the shape of the neural field equation. After that, a mode decomposition around a stationary state is presented, first in one dimension, following again what has been done in [7], and then the analogous calculations are performed on a spherical domain. We stress the calculations in the case of homogeneous connections only, since the heterogeneous term is straightforward to treat and does not need any particular attention.

After these introductory sections, the neural field delayed integro-differential equation is derived both for a spatial domain chosen as a circumference and for a sphere, both in the absence and in the presence of a delay and of the heterogeneous term.

Neural field dynamics with local and global connectivity and time delay

In this section, we present and expand the results obtained in [7]. The first part is dedicated to emphasizing the approximations and the limits taken in the paper, with a particular focus on the effects of the homogeneous term of the connectivity on the neural field equation. The calculations about the heterogeneous term are not reported, because they are straightforward and would not be helpful to the discussion that we are going to present here. In the second part, the same results of [7] are obtained using a spherical spatial domain instead of a straight line.

Unidimensional case

Assume ψ(x, t) is the neural field that represents the mean activity of the network in the spatial and temporal domains. Suppose it is a sufficiently smooth function of the space x ∈ Γ ⊂ R and of the time t ∈ R⁺. Since we are following the work of [7], we can specify Γ = [0, L]. Then, the delayed integro-differential equation that describes the spatial and temporal evolution of the neural field reduces to:

$$\frac{\partial \psi(x, t)}{\partial t} = -\epsilon\, \psi(x, t) + \int_\Gamma W_{hom}(|x - y|)\, S[\psi(y, t - |x - y|/c)]\, dy + \int_\Gamma W_{het}(x, y)\, S[\psi(y, t - |x - y|/v)]\, dy \qquad (2.1)$$

where ε represents the relaxation coefficient, c is the velocity of the signal in the homogeneous connections, v is the velocity of the signal in the heterogeneous connections and S is the sigmoid function. Consider the case where there are no heterogeneous connections. By defining the Green function G:

$$G(x - y, t - T) = W_{hom}(|x - y|)\, \delta(t - T + |x - y|/c)$$

and Ψ = ψ̃ the Fourier transform of the neural field, with a Fourier transform in the spatial and temporal variables, after some calculations based on the properties of the plane waves, it is straightforward, as shown in [7], to reduce equation (2.1) to:

$$(\epsilon - i\omega)\Psi = 2\pi\, \tilde{G}(k, \omega)\, \tilde{S}(k, \omega) \qquad (2.2)$$

It is possible to explicitly calculate the Fourier transform of G, which we indicate as G̃(k, ω), once the function W_hom(|x − y|) is defined. The homogeneous term is invariant under translation and generally dies out with increasing distance. In [7], it has been chosen equal to:

$$W_{hom}(|x - y|) = \frac{1}{2\sigma} \exp(-|x - y|/\sigma) \qquad (2.3)$$

The expression of G̃ reported in [7] is

$$\tilde{G}(k, \omega) = \frac{1}{2\pi}\, \frac{1 + i\omega\sigma/c}{(1 + i\sigma\omega/c)^2 + k^2\sigma^2} \qquad (2.4)$$

We perform the full calculation in order to obtain it, because we explicitly want to stress some approximations that have been used. It is useful to proceed with the substitution of variables ξ = x − y and τ = t − T:

$$\begin{aligned}
\tilde{G}(k, \omega) &= \frac{1}{2\pi} \int_{-L/2}^{L/2} d\xi \int_{-\infty}^{\infty} d\tau\, G(\xi, \tau)\, \exp(-ik\xi + i\omega\tau) \\
&= \frac{1}{2\pi} \int_{-L/2}^{L/2} d\xi \int_{-\infty}^{\infty} d\tau\, \frac{1}{2\sigma} \exp(-|\xi|/\sigma)\, \delta(\tau - |\xi|/c)\, \exp(-ik\xi + i\omega\tau) \\
&= \frac{1}{2\pi} \frac{1}{2\sigma} \int_{-L/2}^{L/2} d\xi\, e^{-|\xi|/\sigma}\, e^{-ik\xi}\, e^{i\omega|\xi|/c} \\
&= \frac{1}{2\pi} \frac{1}{2\sigma} \left( \int_{-L/2}^{0} d\xi\, e^{\left(\frac{1}{\sigma} - i\left(k + \frac{\omega}{c}\right)\right)\xi} + \int_{0}^{L/2} d\xi\, e^{-\left(\frac{1}{\sigma} + i\left(k - \frac{\omega}{c}\right)\right)\xi} \right) \\
&= \frac{1}{2\pi}\, \frac{1 - i\omega\sigma/c}{(1 - i\sigma\omega/c)^2 + k^2\sigma^2} + \frac{1}{4\pi}\, \frac{e^{-L/2\sigma}}{(1 - i\sigma\omega/c)^2 + k^2\sigma^2}\, [\dots]
\end{aligned}$$

where the dots contained in the square brackets represent an oscillating function that does not depend on L. In the limit σ ≪ L, which is valid in the short-range hypothesis, the second term goes to zero exponentially and thus, as is well known, dies out faster than any power of L/σ. Then, a decomposition of G̃ using a Taylor expansion coincides with a decomposition of G̃ without the exponentially decaying term. It should be noticed that the equation obtained is similar but not equivalent to equation (2.4) from [7]; nevertheless, the other results are compatible with what is reported in [7].

By assuming the further approximations σω/c → 0 and kσ → 0, which will be clearer in the next sections, G̃ becomes

$$2\pi \tilde{G}(k, \omega) \simeq \left( 1 - i\frac{\sigma\omega}{c} \right) \left( 1 + 2i\frac{\sigma\omega}{c} - 3\left(\frac{\sigma\omega}{c}\right)^2 - \sigma^2 k^2 \right) = 1 + i\frac{\sigma\omega}{c} - \sigma^2 k^2 - \left(\frac{\sigma\omega}{c}\right)^2 .$$

The sigmoid function S is then linearized:

$$\tilde{S} = \nu \Psi$$

The linear hypothesis has already been used in many different works, like [52, 53], and is nicely explained in [33]: it means that we suppose the system is working far from the saturation zones of the sigmoid function. The ν coefficient does not show up in [7] because, in the proper units of time, it can be supposed equal to one; anyway, since it is important for dimensional reasons, we decided to keep it general. It should be noticed that ν has the same unit of measurement as the ε coefficient.

By substituting this relationship into equation (2.2) it is easy to obtain:

$$(\epsilon - i\omega)\Psi = \nu \left( 1 + i\frac{\sigma\omega}{c} - \sigma^2 k^2 - \left(\frac{\sigma\omega}{c}\right)^2 \right) \Psi \qquad (2.5)$$

Moreover, by assuming c ≫ σν, only the first-order term in σω/c is relevant. This statement is true when it is possible to assume a very fast velocity or a small typical time for the communication between nearby neural nodes. Thereby, the neural field equation in the Fourier representation is the following:

$$(\epsilon - i\omega)\Psi = \left( \nu + i\omega\frac{\sigma}{c}\nu - \sigma^2 k^2 \nu \right) \Psi$$

Hence, antitransforming the last equation:

$$\epsilon\psi + \frac{\partial \psi}{\partial t} = \nu \left( 1 - \frac{\sigma}{c}\frac{\partial}{\partial t} + \sigma^2 \frac{\partial^2}{\partial x^2} \right) \psi \qquad (2.6)$$

which can be written as:

$$\frac{\partial \psi}{\partial t} + \epsilon'\, \psi = D\, \frac{\partial^2 \psi}{\partial x^2} \qquad (2.7)$$

where ε′ = (ε − ν)/(1 + σν/c) and D = σ²ν/(1 + σν/c). With a change of the units of measure of D and ε′, this equation is compatible with the one obtained in [7]. It must be noted that this is a diffusion equation with a relaxation term (ε′ > 0).
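As an illustration of the behaviour encoded in (2.7), the following minimal Python sketch integrates the diffusion equation with a relaxation term on a periodic one-dimensional grid with explicit finite differences. The grid size, the values of ε′ and D and the initial bump are illustrative assumptions.

```python
import numpy as np

# Minimal finite-difference sketch of eq. (2.7): d(psi)/dt + eps' * psi = D * d2(psi)/dx2,
# on a periodic 1-D grid. The values of eps', D, L and the initial bump are illustrative.
L, nx = 10.0, 200
dx = L / nx
x = np.arange(nx) * dx
eps_p, D = 0.5, 0.05
dt = 0.4 * dx ** 2 / D                      # explicit-Euler stability bound for diffusion

psi = np.exp(-((x - L / 2) ** 2) / 0.1)     # localized initial perturbation

for _ in range(2000):
    lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx ** 2   # periodic Laplacian
    psi += dt * (-eps_p * psi + D * lap)

# The field both spreads (diffusion) and decays (relaxation, eps' > 0).
print("max |psi| after integration:", np.abs(psi).max())
```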

Gaussian W_hom

In the previous sections we imposed a particular form for the homogeneous connectivity W_hom. Here, we want to stress the fact that changing the shape of this function only changes the coefficients of equation (2.7); thus, when W_hom is spatially invariant and decays rapidly as the distance increases, the delayed integro-differential equation that describes the evolution of the neural field is always diffusive with a relaxation term.

Suppose that the homogeneous connectivity decays as a Gaussian function:

$$W_{hom}(|x - y|) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2}\left(\frac{|x - y|}{\sigma}\right)^2} \qquad (2.8)$$

This case is particularly interesting because, as shown in [54], every biologically realistic homogeneous connectivity typically used in the literature can be represented as the sum of two Gaussian terms:

$$W_{hom}(x) = a_1 e^{-\left(\frac{x}{\sigma_1}\right)^2} - a_2 e^{-\left(\frac{x}{\sigma_2}\right)^2} \qquad (2.9)$$

This is why we want to show that the Gaussian connectivity implies a diffusion equation with a relaxation term. Treating carefully the absolute value |ξ| in equation (2.8), it is possible to obtain two different integrals:

$$\tilde{G}(k, \omega) = \frac{1}{(2\pi)^{3/2}\sigma} \left[ e^{-\frac{\sigma^2}{2}(k + \omega/c)^2} \int_0^{L/2} e^{-\frac{1}{2\sigma^2}\left(\xi + \sigma^2 i (k + \omega/c)\right)^2} d\xi + e^{-\frac{\sigma^2}{2}(k - \omega/c)^2} \int_{-L/2}^{0} e^{-\frac{1}{2\sigma^2}\left(\xi + \sigma^2 i (k - \omega/c)\right)^2} d\xi \right] \qquad (2.10)$$

By applying Cauchy's integral theorem on a rectangular path in the complex plane, in the limit L → ∞ the following equations are obtained:

$$\int_0^{\infty} e^{-\frac{1}{2\sigma^2}\left(\xi + \sigma^2 i (k + \omega/c)\right)^2} d\xi = \frac{\sqrt{2\pi\sigma^2}}{2} + \int_{\sigma^2(k + \omega/c)}^{0} e^{y^2/(2\sigma^2)}\, dy \qquad (2.11)$$

$$\int_{-L/2}^{0} e^{-\frac{1}{2\sigma^2}\left(\xi + \sigma^2 i (k - \omega/c)\right)^2} d\xi = \frac{\sqrt{2\pi\sigma^2}}{2} + \int_{0}^{\sigma^2(k - \omega/c)} e^{y^2/(2\sigma^2)}\, dy \qquad (2.12)$$

And, after a substitution:

$$\tilde{G} = \frac{1}{(2\pi)^{3/2}\sigma} \left[ \frac{\sqrt{2\pi\sigma^2}}{2} \left( e^{-\frac{\sigma^2}{2}(k + \omega/c)^2} + e^{-\frac{\sigma^2}{2}(k - \omega/c)^2} \right) + e^{-\frac{\sigma^2}{2}(k + \omega/c)^2} \int_{\sigma^2(k + \omega/c)}^{0} e^{y^2/(2\sigma^2)}\, dy + e^{-\frac{\sigma^2}{2}(k - \omega/c)^2} \int_{0}^{\sigma^2(k - \omega/c)} e^{y^2/(2\sigma^2)}\, dy \right]$$

It must be noticed that the terms that would contain products between the k and ω variables are of higher than second order, so at the same order of approximation as before, the equation obtained has the same form as the one obtained with an exponential connectivity (with different coefficients).

Dirac delta W_hom

The diffusion is caused by the homogeneous connectivity: by supposing W_hom = 0 it is straightforward to see that there is no diffusion of the neural field. But there is also another interesting limit case: if every neural population is connected only with itself, it is still expected that the signal does not propagate in space. Thus, we suppose that the homogeneous connectivity can be represented as a Dirac delta:

$$W_{hom} = \delta(x - y) \qquad (2.14)$$

Calculating G̃ it is easy to find:

$$\tilde{G} = \frac{1}{2\pi} \int_{-L/2}^{L/2} d\xi \int_{-\infty}^{\infty} d\tau\, \delta(\xi)\, \delta(\tau + |\xi|/c)\, e^{-ik\xi + i\omega\tau} = \frac{1}{2\pi} \int_{-L/2}^{L/2} d\xi\, \delta(\xi)\, e^{-ik\xi - i|\xi|\omega/c} = \frac{1}{2\pi}$$

Using the linearity of S[ψ]:

$$(\epsilon - i\omega)\Psi = \nu\Psi \qquad (2.15)$$

And, antitransforming:

$$\frac{\partial \psi}{\partial t} = (\nu - \epsilon)\psi \qquad (2.16)$$

This is exactly what we expected: while the relaxation term is still present in the equation, there are no terms that may induce a spatial evolution of the neural field.


Mode decomposition

Consider again the equation:

$$\frac{\partial \psi(x, t)}{\partial t} = -\epsilon\, \psi(x, t) + \int_\Gamma W_{hom}(|x - y|)\, S[\psi(y, t - |x - y|/c)]\, dy \qquad (2.17)$$

In Jirsa's paper [7] the following ansatz about the shape of the neural field is imposed: suppose {φ_k} constitutes a complete and orthonormal (under the scalar product of L²) set of functions; any perturbation of the system in the stationary condition ψ₀(x) leads to a solution that can be written as:

$$\psi(x, t) = \psi_0(x) + \sum_{k=0}^{\infty} \varphi_k(x)\, \xi_k(t)\, . \qquad (2.18)$$

With a substitution, the delayed integro-differential equation is derived:

$$\left( \frac{\partial}{\partial t} + \epsilon \right) \xi_l(t) = \sum_{k=0}^{\infty} \int_\Gamma dx \int_\Gamma dy\, W_{hom}(|x - y|)\, \varphi_l^\dagger(x)\, \varphi_k(y)\, \xi_k(t - |x - y|/c)\, .$$

We want to show how this result can be obtained by linearizing the S function. In order to proceed, we perform a Taylor expansion of the function S truncated at first order:

$$S[\psi(x, t)] \simeq S[\psi_0(x)] + \frac{\partial S}{\partial \psi}[\psi_0(x)]\, \left( \psi(x, t) - \psi_0(x) \right) \qquad (2.19)$$

For readability it is useful to introduce the substitution A(x) = (∂S/∂ψ)[ψ₀(x)] in equation (2.18), recovering the specific connectivity used in [7]:

$$\sum_k \varphi_k(x) \frac{d\xi_k(t)}{dt} + \epsilon \left[ \psi_0(x) + \sum_{k=0}^{\infty} \varphi_k(x)\, \xi_k(t) \right] = \int_\Gamma \frac{dy}{2\sigma}\, e^{-|x - y|/\sigma} \left[ S(\psi_0) + A(y) \sum_{k=0}^{\infty} \varphi_k(y)\, \xi_k(t - |x - y|/c) \right] ,$$

That, with some easy calculations, becomes:

$$\left( \frac{\partial}{\partial t} + \epsilon \right) \sum_k \varphi_k(x)\, \xi_k(t) + \epsilon\, \psi_0(x) = \int_\Gamma \frac{dy}{2\sigma}\, e^{-|x - y|/\sigma}\, S(\psi_0) + \int_\Gamma \frac{dy}{2\sigma}\, e^{-|x - y|/\sigma}\, A(y) \sum_{k=0}^{\infty} \varphi_k(y)\, \xi_k(t - |x - y|/c)\, .$$

By definition, ψ₀ is a solution of the stationary differential equation, thus the two terms involving ψ₀ cancel each other. Now, it is possible to obtain the equation for the weight ξ_l(t) of the l-th mode φ_l(x) using a projection; with the L² inner product, a projection means that we need to multiply by φ_l†(x) and integrate over x:

$$\left( \frac{\partial}{\partial t} + \epsilon \right) \sum_k \int_\Gamma dx\, \varphi_l^\dagger(x)\, \varphi_k(x)\, \xi_k(t) = \frac{1}{2\sigma} \int_\Gamma dx \int_\Gamma dy\, e^{-|x - y|/\sigma}\, A(y) \sum_{k=0}^{\infty} \varphi_l^\dagger(x)\, \varphi_k(y)\, \xi_k(t - |x - y|/c) \qquad (2.20)$$

This equation corresponds to the analogous equation reported in [7] once the term due to the heterogeneous connectivity is removed.
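As a simple illustration of this projection step, the following Python sketch computes the mode weights ξ_k of a given field by numerical integration against the conjugate basis functions and checks the truncated reconstruction. The periodic domain, the plane-wave basis and the test field are illustrative assumptions.

```python
import numpy as np

# Sketch of extracting mode weights xi_k by projection, as in the step leading to (2.20):
# multiply by the conjugate basis function and integrate over x. The domain, the number
# of modes and the test field are illustrative assumptions.
L, nx, n_modes = 2 * np.pi, 512, 8
x = np.arange(nx) * L / nx

def basis(k, x):
    """Orthonormal plane waves on [0, L): phi_k(x) = exp(i k x) / sqrt(L)."""
    return np.exp(1j * k * x) / np.sqrt(L)

psi = 0.3 * np.cos(2 * x) + 0.1 * np.sin(5 * x)      # toy perturbation around psi_0 = 0

def project(psi, k, x):
    """xi_k = integral of phi_k^dagger(x) psi(x) dx, approximated by a Riemann sum."""
    dx = x[1] - x[0]
    return np.sum(np.conj(basis(k, x)) * psi) * dx

xi = {k: project(psi, k, x) for k in range(-n_modes, n_modes + 1)}
recon = sum(c * basis(k, x) for k, c in xi.items()).real
print("truncation error:", np.abs(recon - psi).max())
```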


A first unfruitful approach to the spherical domain

So far, a more detailed presentation of the one-dimensional analysis carried out in [7] has been shown; from now on, we extend this approach to different domains, aiming to recover the analogous mathematical relationships shown before.

The neural field satisfies equation (2.1), where in this section the spatial domain Γ must be thought of as a spherical surface S². With a straight-line domain, the spatial symmetry of the homogeneous connectivity has been interpreted as an invariance under translation; with periodic boundary conditions, this symmetry must be interpreted as an invariance under rotations of the (θ, φ) polar coordinates (here, θ ∈ [0, π] represents the polar coordinate and φ ∈ [0, 2π[ represents the azimuthal coordinate).

From now on, since we are interested in showing a proof-of-concept case, we will consider the system in the limit of instantaneous signals, c → ∞.

As we showed in the previous section, if the domain is a straight line it is possible to obtain the PDE that determines the evolution of the neural field ψ(x, t); the Fourier transform allowed us to obtain the coefficients of the equation in an elegant and brief way. With a modification of the topology, the complete set of functions needed to transform the function will change accordingly; we choose the simplest functions that respect the symmetries of the spatial domain. For the sphere, the spherical harmonics are the best complete set, because of the periodicity in the variables of these functions and because of the simple transformation rules that they obey once a rotation is performed. Since we are neglecting the delay, there is no need to transform in the time variable, as opposed to what has been done in [7] for the straight-line case. Defining Ω_i = (θ_i, φ_i), we have to deal with an enumerable complete set of functions:

$$\psi(\Omega, t) = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} Y_{l,m}(\Omega)\, \tilde{\psi}_{l,m}(t) \qquad (2.21)$$

where the transformed function of ψ is defined as:

$$\tilde{\psi}_{l,m}(t) = \int_{S^2} Y_{l,m}^\dagger(\Omega)\, \psi(\Omega, t)\, d\Omega \qquad (2.22)$$

and the homogeneous connectivity can be represented as:

$$W_{hom}(d(\Omega_1, \Omega_2)) = \sum_{l_1=0}^{\infty} \sum_{m_1=-l_1}^{l_1} \sum_{l_2=0}^{\infty} \sum_{m_2=-l_2}^{l_2} Y_{l_1,m_1}(\Omega_1)\, Y_{l_2,m_2}(\Omega_2)\, \tilde{W}(l_1, m_1, l_2, m_2) \qquad (2.23)$$

where d(Ω₁, Ω₂) is the distance between the points with coordinates Ω₁ and Ω₂, calculated along the spherical surface.

The transform of W_hom depends on two pairs of indices:

$$\tilde{W}(l_1, m_1, l_2, m_2) = \int_{S^2} \int_{S^2} Y_{l_1,m_1}^\dagger(\Omega_1)\, Y_{l_2,m_2}^\dagger(\Omega_2)\, W_{hom}(d(\Omega_1, \Omega_2))\, d\Omega_1\, d\Omega_2\, . \qquad (2.24)$$

We will assume S(ψ) = νψ. By substituting equations (2.21) and (2.23) into equation (2.17):

$$\left( \frac{\partial}{\partial t} + \epsilon \right) \sum_{l_2=0}^{\infty} \sum_{m_2=-l_2}^{l_2} Y_{l_2,m_2}(\Omega_1)\, \tilde{\psi}_{l_2,m_2}(t) = \nu \int_{S^2} \left[ \sum_{l_2,m_2,l_3,m_3} Y_{l_2,m_2}(\Omega_1)\, Y_{l_3,m_3}(\Omega_2)\, \tilde{W}_{l_2,m_2,l_3,m_3} \right] \cdot \left[ \sum_{l_1=0}^{\infty} \sum_{m_1=-l_1}^{l_1} Y_{l_1,m_1}(\Omega_2)\, \tilde{\psi}_{l_1,m_1}(t) \right] d\Omega_2 \qquad (2.25)$$

In order to reduce this relation to a simpler form, we need to calculate the product of two infinite series. It is possible to apply Cauchy's product theorem.

Cauchy's product theorem: suppose we have two infinite series

$$\sum_{k=0}^{\infty} a_k \quad \text{and} \quad \sum_{k=0}^{\infty} b_k$$

which are convergent, at least one of them absolutely. Then the product of these two series exists and converges to the Cauchy product:

$$\sum_{k=0}^{\infty} a_k \cdot \sum_{k=0}^{\infty} b_k = \sum_{n=0}^{\infty} \sum_{k=0}^{n} a_k b_{n-k} = \sum_{n=0}^{\infty} c_n\, . \qquad (2.26)$$
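As a quick sanity check of (2.26), the following Python snippet compares the product of the sums with the sum of the Cauchy coefficients c_n; the two geometric series are an illustrative choice.

```python
import numpy as np

# Numerical check of the Cauchy product (2.26) using two geometric series
# a_k = 0.5**k and b_k = 0.3**k (an illustrative choice; both converge absolutely).
K = 60
a = 0.5 ** np.arange(K)
b = 0.3 ** np.arange(K)

lhs = a.sum() * b.sum()                                         # product of the two sums
c = np.array([np.dot(a[:n + 1], b[n::-1]) for n in range(K)])   # c_n = sum_k a_k b_{n-k}
rhs = c.sum()

print(lhs, rhs)   # both close to 1/(1-0.5) * 1/(1-0.3), up to truncation error
```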

Using the notation of equation (2.26), it is possible to collect the sum over l₃ and the sum over l₁. Moreover, once we collect the series, the integral over Ω₂ is straightforward because of the orthonormality properties of the spherical harmonics. We obtain:

$$a_k = \sum_{m=-k}^{k} \sum_{l_2,m_2} Y_{l_2,m_2}(\Omega_1)\, Y_{k,m}(\Omega_2)\, \tilde{W}_{l_2,m_2,k,m}\, , \qquad (2.27)$$

$$b_{n-k} = \sum_{m=k-n}^{n-k} Y_{n-k,m}(\Omega_2)\, \tilde{\psi}_{n-k,m}(t)\, , \qquad (2.28)$$

and:

$$c_n = \sum_{k=0}^{n} \left[ \sum_{m_3=-k}^{k} \sum_{l_2,m_2} Y_{l_2,m_2}(\Omega_1)\, Y_{k,m_3}(\Omega_2)\, \tilde{W}_{l_2,m_2,k,m_3} \right] \cdot \left[ \sum_{m_1=k-n}^{n-k} Y_{n-k,m_1}(\Omega_2)\, \tilde{\psi}_{n-k,m_1}(t) \right] . \qquad (2.29)$$

Recollecting all the terms and including the sums over m₁ and m₃, the integrand of (2.25) becomes:

$$\sum_{n=0}^{\infty} c_n = \left[ \sum_{l_2,m_2,l_3,m_3} Y_{l_2,m_2}(\Omega_1)\, Y_{l_3,m_3}(\Omega_2)\, \tilde{W}_{l_2,m_2,l_3,m_3} \right] \cdot \left[ \sum_{l_1=0}^{\infty} \sum_{m_1=-l_1}^{l_1} Y_{l_1,m_1}(\Omega_2)\, \tilde{\psi}_{l_1,m_1}(t) \right] = \sum_{n=0}^{\infty} \sum_{k=0}^{n} \sum_{m_3=-k}^{k} \sum_{m_1=k-n}^{n-k} \sum_{l_2,m_2} Y_{l_2,m_2}(\Omega_1)\, Y_{k,m_3}(\Omega_2)\, \tilde{W}_{l_2,m_2,k,m_3}\, Y_{n-k,m_1}(\Omega_2)\, \tilde{\psi}_{n-k,m_1}(t)\, .$$

After the substitution in (2.25), we swap the summations with the integral and obtain the evolution equation for the transformed neural field:

$$\left( \frac{\partial}{\partial t} + \epsilon \right) \tilde{\psi}_{l,m} = \sum_{n=0}^{\infty} \sum_{k=0}^{n} \sum_{m_3=-k}^{k} \sum_{m_1=k-n}^{n-k} \tilde{W}_{l,m,k,m_3}\, \tilde{\psi}_{n-k,m_1}(t) \int_{S^2} Y_{k,m_3}(\Omega_2)\, Y_{n-k,m_1}(\Omega_2)\, d\Omega_2\, , \qquad (2.30)$$

where the completeness of the basis Yl2,m2 has been used.

We simplify the right-hand side by using the normalization of the spherical harmonics. For these calculations, we have used the convention for the spherical harmonics typical of quantum mechanics (Y† is the complex conjugate); we suppose that they are defined as:

$$Y_{l,m}(\Omega) = (-1)^m \sqrt{\frac{2l + 1}{4\pi}\, \frac{(l - m)!}{(l + m)!}}\, P_l^m(\cos\theta)\, e^{im\phi}\, ,$$

$$\int_{S^2} Y_{l_1,m_1}(\Omega)\, Y_{l_2,m_2}(\Omega)^\dagger\, d\Omega = \delta_{l_1,l_2}\, \delta_{m_1,m_2}\, ,$$

$$Y_{l,m}^\dagger(\Omega) = (-1)^m\, Y_{l,-m}(\Omega)\, .$$

Then, the integral can be evaluated:

$$\int_{S^2} Y_{k,m_3}(\Omega_2)\, Y_{n-k,m_1}(\Omega_2)\, d\Omega_2 = \delta_{k,n-k}\, \delta_{m_3,-m_1}$$

Hence, we have simplified the transformed equation (2.30) to:

$$\left( \frac{\partial}{\partial t} + \epsilon \right) \tilde{\psi}_{l,m} = \sum_{l_1=0}^{\infty} \sum_{m_1=-l_1}^{l_1} \tilde{W}(l, m, l_1, m_1)\, \tilde{\psi}_{l_1,-m_1}\, . \qquad (2.31)$$

Thus, we have obtained a result analogous to that of [7], which considered a straight-line domain. Although the expression we obtained may appear simple, antitransforming it has proven to be a daunting endeavour, because it is difficult to calculate the transformed homogeneous connectivity. In order to obtain the neural field equation on a spherical domain, we therefore developed a different approach. As a proof of concept, we start by applying the new approach to a circumference domain, since the calculations are lighter and the related mathematics is clearer.

Circumference

In this section, we present the approach we followed in order to obtain the neural field equation on a circumference domain. We start with the hypothesis of an instantaneous homogeneous connectivity and no heterogeneous term. Then we add the delay via a finite c parameter and, in the end, we analyze the heterogeneous connectivity term.

Thanks to the periodic boundaries, the circumference domain is the natural missing link between the straight line and the sphere: as we will show later, the same concepts that we apply here can be straightforwardly applied on the sphere, with an increased mathematical complexity.

Instantaneous homogeneous connectivity

In the absence of a heterogeneous term, the neural field equation analogous to the one used by Jirsa in [7] is, if x and y are the Cartesian coordinates and s is the coordinate along the circumference:

$$\frac{\frac{\partial}{\partial t} + \epsilon}{\nu}\, \psi(r\cos\theta_1, r\sin\theta_1) = \frac{\frac{\partial}{\partial t} + \epsilon}{\nu}\, \psi(x, y) = \int_\Gamma f(x, y)\, ds = \int_{-\pi}^{\pi} r\, W_{hom}(d(\theta_1, \theta_2))\, \psi(r\cos\theta_2, r\sin\theta_2)\, d\theta_2 \qquad (2.32)$$

where the parametrization for the circumference

$$(x, y) = (r\cos\theta, r\sin\theta)$$

has been used.

The distance along the circumference is defined as (in what follows, we will suppose for simplicity that the radius of the circumference is r = 1):

$$d(\theta_1, \theta_2) = \begin{cases} |\theta_1 - \theta_2| & \text{if } |\theta_1 - \theta_2| \le \pi \\ 2\pi - |\theta_1 - \theta_2| & \text{if } |\theta_1 - \theta_2| > \pi \end{cases}$$

It is relevant for the subsequent discussion to underline that this distance function reduces to the absolute value if one of the angles is zero.

The angle is defined as θ ∈ [−π, π[. This interval has been chosen because it makes the integral easier to solve than with the canonical interval θ ∈ [0, 2π[.
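For reference, the distance just defined can be written as a small Python helper (purely illustrative):

```python
import numpy as np

def circ_dist(theta1, theta2):
    """Distance along a unit circumference between angles in [-pi, pi):
    the shorter of the two arcs, as in the definition above."""
    diff = np.abs(theta1 - theta2)
    return np.where(diff <= np.pi, diff, 2 * np.pi - diff)

# When one angle is zero the distance reduces to the absolute value of the other:
print(circ_dist(0.0, 2.5), abs(2.5))        # identical
print(circ_dist(3.0, -3.0))                 # ~0.28, the short way around the circle
```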

First of all, we apply the ansatz of equation (2.18), where the φ_k are chosen as the set of eigenfunctions of the Laplacian in polar coordinates, which corresponds to the plane waves (formally, this is equivalent to performing a discrete Fourier transform). This choice has been suggested by the knowledge of the final form of the neural field equation, which contains the Laplacian operator. It should be noted that the ansatz implies that the spatial dependence of the neural field is entirely described by the plane waves, naturally taking into account the modular nature (in the sense of the modulo operation) of the angle variables, and automatically simplifying the formalism. With a substitution of the ansatz, (2.32) becomes:

$$\frac{\frac{\partial}{\partial t} + \epsilon}{\nu}\, \psi(\theta_1, t) = \int_{-\pi}^{\pi} W_{hom}(d(\theta_1, \theta_2)) \sum_m \tilde{\psi}_m\, e^{im\theta_2}\, d\theta_2 \qquad (2.33)$$

where m ∈ Z is the index that spans the plane wave set. Since, as we stressed before, it is particularly easy to calculate the distance between two points if one of them is at θ = 0, we rotate θ₂ by an angle opposite to θ₁. Thanks to this change of variable, the distance simply becomes the absolute value of the new coordinate.

Similarly to the straight-line case seen before, the homogeneous connectivity is chosen as W_hom(z) = N(σ) e^{−z/σ}, where z = |θ₂ − θ₁|; with the new variable γ = θ₂ − θ₁, the right-hand side of the equation becomes:

$$\begin{aligned}
\frac{\frac{\partial}{\partial t} + \epsilon}{\nu}\, \psi(\theta_1, t) &= \int_{-\pi}^{\pi} N(\sigma)\, e^{-\frac{|\gamma|}{\sigma}} \sum_m \tilde{\psi}_m\, e^{im\theta_2}\, d\theta_2 \\
&= N(\sigma) \sum_m e^{im\theta_1}\, \tilde{\psi}_m \int_{-\pi}^{\pi} e^{-\frac{|\gamma|}{\sigma}}\, e^{im\gamma}\, d\gamma \\
&= N(\sigma) \sum_m e^{im\theta_1}\, \tilde{\psi}_m \int_{-\pi}^{\pi} e^{-\frac{|\gamma|}{\sigma}} \left( 1 + im\gamma - \frac{m^2}{2}\gamma^2 \right) d\gamma \\
&= N(\sigma) \sum_m e^{im\theta_1}\, \tilde{\psi}_m \left( \frac{1}{N(\sigma)} - \frac{m^2}{2} \int_{-\pi}^{\pi} \gamma^2 e^{-\frac{|\gamma|}{\sigma}}\, d\gamma \right)
\end{aligned}$$

Notice that we have truncated the Taylor series of the plane wave at second order in γ; this is justified by the fact that the homogeneous connectivity is considered short-ranged, so that the terms with γ far from zero do not contribute to the integral. Thus, we obtain:

$$\frac{\frac{\partial}{\partial t} + \epsilon}{\nu}\, \psi(\theta, t) = \psi(\theta, t) + D\, \frac{\partial^2 \psi}{\partial \theta^2}(\theta, t) \qquad (2.34)$$

where it has been used that the second derivative in θ of ψ corresponds to −m²ψ̃, which is a well-known property of the plane waves. D is the diffusion coefficient, which we can easily compute; in fact we know that:

$$\int_{-\pi}^{\pi} \gamma^2 e^{-\frac{|\gamma|}{\sigma}}\, d\gamma = 2\, e^{-\frac{\pi}{\sigma}}\, \sigma \left( 2\left(e^{\frac{\pi}{\sigma}} - 1\right)\sigma^2 - 2\pi\sigma - \pi^2 \right)$$

So, the diffusion coefficient on the circumference is equal to:

$$D = N(\sigma)\, e^{-\frac{\pi}{\sigma}}\, \sigma \left( 2\left(e^{\frac{\pi}{\sigma}} - 1\right)\sigma^2 - 2\pi\sigma - \pi^2 \right) \qquad (2.35)$$
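The closed form of the integral used for D can be checked numerically, for example with the following short Python snippet (σ = 0.3 is an illustrative value):

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the closed form used for the diffusion coefficient:
# integral over [-pi, pi] of gamma^2 * exp(-|gamma|/sigma).
sigma = 0.3

numeric, _ = quad(lambda g: g ** 2 * np.exp(-abs(g) / sigma), -np.pi, np.pi)
closed = 2 * np.exp(-np.pi / sigma) * sigma * (2 * (np.exp(np.pi / sigma) - 1) * sigma ** 2
                                               - 2 * np.pi * sigma - np.pi ** 2)
print(numeric, closed)   # the two values should agree to quadrature accuracy
```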

Homogeneous connectivity with delay

In the previous section, we considered the case of homogeneous connectivity without delay, which means that the neural signals are transmitted instantaneously. This may be considered a rough approximation, since in physics it is well known that nothing can propagate instantaneously. Nevertheless, the homogeneous connectivity is typically characterized by a small length scale; thus, it can be used, together with a typical time, which has to be defined, to build a typical velocity of the system. Moreover, if we suppose that this typical velocity is much smaller than c, then the instantaneous approximation gives us an insight into the behaviour of the system.

In this section, we are interested in a different regime, where the typical velocity is of the same order as c. In this case, we are going to show that the only role of the delay in the homogeneous connectivity is to change the numerical values of the D and ε coefficients of equation (2.34). Following a Green function method, we artificially introduce a Dirac delta function that separates the time dependence from the distance-dependent delay. The equation we want to approximate is:

$$\frac{\frac{\partial}{\partial t} + \epsilon}{\nu}\, \psi(\theta_1, t) = \int_{-\pi}^{\pi} r\, W_{hom}(d(\theta_1, \theta_2))\, \psi\!\left( r\cos\theta_2, r\sin\theta_2, t - \frac{d(\theta_1, \theta_2)}{c} \right) d\theta_2 = \int_{-\infty}^{\infty} \int_{-\pi}^{\pi} r\, W_{hom}(d(\theta_1, \theta_2))\, \psi(r\cos\theta_2, r\sin\theta_2, T)\, \delta\!\left( T - t + \frac{d(\theta_1, \theta_2)}{c} \right) dT\, d\theta_2$$

With the same ansatz used in the last section, we perform the convenient rotation already used. The neural field must satisfy the equation:

$$\frac{\frac{\partial}{\partial t} + \epsilon}{\nu}\, \psi(\theta_1, t) = \int_{-\infty}^{\infty} \int_{-\pi}^{\pi} r\, W_{hom}(|\gamma|)\, \delta\!\left( T - t + \frac{|\gamma|}{c} \right) \sum_m e^{im(\theta_1 + \gamma)}\, \tilde{\psi}_m(T)\, dT\, d\gamma$$

The exponential can be written as the sum of the first three orders of its Taylor series. Here we need to take care of the presence of the delay; if f(x) is a continuous function with zeros x_i where the derivative f′(x_i) is not zero, the following well-known properties of the Dirac delta can be used:

$$\delta(f(x)) = \sum_{i=0}^{n} \frac{\delta(x - x_i)}{|f'(x_i)|}\, , \qquad \delta(cx) = \frac{\delta(x)}{|c|}$$

Then, readjusting the terms in order to carry out the integral over the γ variable:

$$\delta\!\left( T - t + \frac{|\gamma|}{c} \right) = c\, \delta\!\left( c(T - t) + |\gamma| \right) = c\, \big[ \delta(\gamma + c(T - t)) + \delta(\gamma - c(T - t)) \big] \qquad (2.36)$$

It is well known that the integral of a delta function over an interval that does not contain a zero of its argument vanishes. By using this property, the substitution can be performed if T ∈ [t − π/c, t]. Then it is straightforward to find the new diffusion equation; after the change of variable s = T − t we expand ψ(t + s) as a Taylor series for small s.

$$\begin{aligned}
\frac{\frac{\partial}{\partial t} + \epsilon}{\nu}\, \psi(x, y) &= r N \sum_m e^{im\theta_1} \int_{t - \frac{\pi}{c}}^{t} \tilde{\psi}_m(T) \left[ 2c\, e^{-\frac{c(t - T)}{\sigma}} \left( 1 - \frac{m^2 c^2}{2}(t - T)^2 \right) \right] dT \\
&= r N \sum_m e^{im\theta_1} \int_{-\frac{\pi}{c}}^{0} \left[ \tilde{\psi}_m(t) + \frac{\partial \tilde{\psi}_m}{\partial s}(t)\, s \right] \cdot \left[ 2c\, e^{\frac{cs}{\sigma}} \left( 1 - \frac{m^2 c^2}{2} s^2 \right) \right] ds \\
&= r N \sum_m e^{im\theta_1} \left( \tilde{\psi}_m \int_{-\frac{\pi}{c}}^{0} 2c\, e^{\frac{cs}{\sigma}}\, ds + 2 \frac{\partial \tilde{\psi}_m}{\partial s}(t) \int_{-\frac{\pi}{c}}^{0} s\, e^{\frac{cs}{\sigma}}\, ds - \tilde{\psi}_m \int_{-\frac{\pi}{c}}^{0} m^2 c^2 s^2 e^{\frac{cs}{\sigma}}\, ds \right) \\
&= r N c \left( 2 A_0 \psi - 2 A_1 \frac{\partial \psi}{\partial t} + c^2 A_2 \nabla^2 \psi \right)
\end{aligned}$$

where A_n is the n-th moment of the distribution. With some simplifications, the final result is the relationship:

$$\frac{\partial \psi}{\partial t} + \frac{\epsilon - 2 r N c A_0 \nu}{1 + 2 A_1 r N c \nu}\, \psi = \frac{\nu\, r N c^3 A_2}{1 + 2 A_1 r N c \nu}\, \nabla^2 \psi \qquad (2.37)$$

Notice that this equation is consistent with the case of no delay, as can be verified by calculating the limits of the coefficients for c → ∞. In the case of an exponentially decaying homogeneous connectivity, as in [7], we find:

$$\lim_{c \to \infty} c A_0 = -\sigma \left( e^{-\frac{\pi}{\sigma}} - 1 \right), \qquad \lim_{c \to \infty} c A_1 = 0, \qquad \lim_{c \to \infty} c^3 A_2 = 2\sigma^3 - \sigma\, e^{-\frac{\pi}{\sigma}} \left( 2\sigma^2 + 2\pi\sigma + \pi^2 \right)$$

Taking the limit and calculating the normalization factor:

$$N(\sigma) = -\frac{1}{2\sigma \left( e^{-\frac{\pi}{\sigma}} - 1 \right)}$$

Notice that we have obtained the same equation as in the case with no delay,

$$\frac{\partial \psi(\theta_1, t)}{\partial t} + \epsilon'\, \psi = D\, \nabla^2 \psi(\theta_1, t) \qquad (2.38)$$

but with coefficients that are related to the parameters of the system via:

$$\epsilon' = \frac{c\left(e^{\frac{\pi}{\sigma}} - 1\right)(\epsilon - \nu)}{\pi\nu - c + \nu\sigma + c\, e^{\frac{\pi}{\sigma}} - \nu\sigma\, e^{\frac{\pi}{\sigma}}} \qquad (2.39)$$

$$D = -\frac{c\nu\left( 2\pi\sigma - 2\sigma^2 e^{\frac{\pi}{\sigma}} + \pi^2 + 2\sigma^2 \right)}{2\pi\nu - 2c + 2\nu\sigma + 2c\, e^{\frac{\pi}{\sigma}} - 2\nu\sigma\, e^{\frac{\pi}{\sigma}}} \qquad (2.40)$$

Addition of the heterogeneous term

In this section, we treat the heterogeneous connectivity of the system with a circumference spatial domain. After some algebraic manipulations, we can write it as the product of two terms, one depending on the assigned topology, and the other one depending on how the delay enters into the integro-differential equation.

(30)

∂ψ(θ1, t) ∂t = −ψ(θ1, t) + D∇ 2ψ(θ 1, t) + N X i,j=1 µijδ(θ1− θi)ψ(θj, t − d v) (2.41)

Please notice that, if c is finite, µ is rescaled by the factor at the denominator of equation (2.37). As it has been shown before, we substitute the ansatz (2.18) in order to decompose this term in modes. The delayed differential equation for the coefficient of the k-th mode is obtained with a projection:

˙ ξk(t) = −( + D)ξk(t) + X i,j µi,j X k0 Wk†(θi)Wk0(θjk0(t − d(θi, θj) v ) (2.42)

We are going to derive this equation in the next chapter. We report it here because we want to further simplify the heterogeneous term; moreover, in order to generalize, we do not assume that the heterogeneous connection must be Dirac delta shaped. Instead, we keep the connectivity as a continuous function of the spatial variable. In this case, equation (2.41) becomes:

∂ψ(θ1, t) ∂t = −ψ(θ1, t) + D∇ 2 ψ(θ1, t) + Z θ2 µ(θ1, θ2)ψ(θ2, t − d(θ1, θ2) v ) dθ2 (2.43)

The delta-shaped connectivity is obtainable imposing µ(θ1, θ2) =Pi,jµijδ(θ1− θi)δ(θ2− θj).

Let us call H(t, k) the term due to the heterogeneous connection in the last equation. By using the ansatz (2.18) and projecting on the k-th mode we get:

$$H(t, k) = \frac{1}{2\pi}\int_{\theta_1}\int_{\theta_2}\mu(\theta_1,\theta_2)\, e^{-ik\theta_1}\sum_{n} e^{in\theta_2}\,\xi_n\!\left(t - d(\theta_1,\theta_2)/v\right)\, d\theta_1\, d\theta_2\,. \tag{2.44}$$
Next, we:

1. Expand the heterogeneous term on the plane-wave basis, $\mu(\theta_1,\theta_2) = \frac{1}{2\pi}\sum_{m_1,m_2}\mu_{m_1,m_2}\, e^{im_1\theta_1} e^{im_2\theta_2}$

2. Change variable θ2 → γ through a rotation so that |γ| = d(θ1, θ2) and relabel θ ≡ θ1

3. Fourier transform the $\xi_n(t)$ coefficients in the time domain.

We obtain (we do not report the detailed calculations), with a slight abuse of notation, the final result:

$$H(\omega, m) = \frac{1}{4\pi^2}\int_{\theta}\int_{\gamma}\sum_{m_1,m_2}\sum_{n}\mu_{m_1,m_2}\, e^{i\theta(m_1+m_2-m+n)}\, e^{-i\gamma(m_2+n)}\, e^{i\omega|\gamma|/v}\,\tilde\xi_n(\omega)\, d\theta\, d\gamma\,. \tag{2.45}$$

The structure of this equation is of the form:

$$H = \frac{1}{4\pi^2}\sum_n\sum_{m_1,m_2} T(m, m_1, m_2, n)\, W(\omega, m_2, n)\,\mu_{m_1,m_2}\,\tilde\xi_n \tag{2.46}$$
where
$$T = \int_{\theta} e^{i\theta(m_1+m_2-m+n)}\, d\theta \tag{2.47}$$
$$W = \int_{\gamma} e^{-i\gamma(m_2+n)}\, e^{i\omega|\gamma|/v}\, d\gamma \tag{2.48}$$

T is related to the topology of the space and W is related to the physical way the delay enters the field equation. In the present case, in which the domain is the circumference, the integral in W can


be explicitly evaluated in terms of elementary functions (we do not report the expression), while the topology function is simply given by $T = 2\pi\,\delta_{m,\, m_1+m_2+n}$, so that

$$H = \frac{1}{4\pi^2}\sum_n\sum_{m_2} 2\pi\, W(\omega, m_2, n)\,\mu_{m-n-m_2,\, m_2}\,\tilde\xi_n\,. \tag{2.49}$$

In this form, the first sum is a series expansion over the $\tilde\xi$ coefficients, which is where the truncation approximation will be introduced, while the second sum is a “convolution-like” term, which cannot be further simplified without some assumption on the heterogeneous connectivity.
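For numerical purposes, equation (2.49) suggests assembling H from a finite set of modes; the following is a minimal sketch of such a truncated assembly, assuming the indices are cut at $|n|, |m_2| \le M$. The connectivity spectrum mu_hat, the mode coefficients xi, and the crude quadrature of W are illustrative stand-ins, not code from the thesis.

# Minimal sketch of the truncated assembly of H in (2.49); mode indices cut at |n|, |m2| <= M.
# mu_hat, xi and the crude quadrature of W below are illustrative stand-ins, not code from the thesis.
import numpy as np

def W_num(omega, m2, n, v, n_pts=4000):
    # crude quadrature of W = int_{-pi}^{pi} exp(-i*g*(m2+n)) * exp(i*omega*|g|/v) dg
    g = np.linspace(-np.pi, np.pi, n_pts, endpoint=False)
    f = np.exp(-1j * g * (m2 + n)) * np.exp(1j * omega * np.abs(g) / v)
    return np.sum(f) * (g[1] - g[0])

def assemble_H(m, omega, xi, mu_hat, v, M):
    acc = 0.0 + 0.0j
    for n in range(-M, M + 1):
        for m2 in range(-M, M + 1):
            acc += 2 * np.pi * W_num(omega, m2, n, v) * mu_hat(m - n - m2, m2) * xi[n]
    return acc / (4 * np.pi**2)

mu_hat = lambda p, q: 1.0 / (1 + p**2 + q**2)       # arbitrary smooth connectivity spectrum
xi = {n: 1.0 / (1 + n**2) for n in range(-4, 5)}    # arbitrary mode coefficients
print(assemble_H(m=0, omega=1.0, xi=xi, mu_hat=mu_hat, v=10.0, M=4))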

Finally, we consider the v → ∞ limit: we find $W(m_2, n) = 2\pi\,\delta_{m_2,-n}$, so that $H = \sum_n \mu_{m,-n}\,\tilde\xi_n$, which is the correct result that one would get starting directly from H without the delay.
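This limiting behaviour is easy to verify numerically; a small sketch, assuming γ ranges over [−π, π]:

# Quick numerical check that W(omega, m2, n) -> 2*pi*delta_{m2,-n} for very large v,
# assuming gamma in [-pi, pi] (crude quadrature, accurate to a few decimal places).
import numpy as np

def W_num(omega, m2, n, v, n_pts=200000):
    g = np.linspace(-np.pi, np.pi, n_pts, endpoint=False)
    f = np.exp(-1j * g * (m2 + n)) * np.exp(1j * omega * np.abs(g) / v)
    return np.sum(f) * (g[1] - g[0])

print(W_num(omega=5.0, m2=3, n=-3, v=1e8))   # ~ 2*pi
print(W_num(omega=5.0, m2=3, n=2, v=1e8))    # ~ 0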

Sphere

In this section, we are going to use, for a spherical domain, the same approach seen for the circumference. The formalism is slightly different, but the underlying idea is the same: we rotate the system in such a way that the point (θ1, φ1) coincides with the north pole. In this way, the distance along the sphere between any point and the north pole is given by the polar coordinate of that point. This choice considerably simplifies the expression of the connectivity. We use a spherical-harmonic decomposition of ψ in order to avoid some unnecessary formal difficulties due to the nature of spherical coordinates. Within the same spatial domain, we will consider the different cases of: homogeneous instantaneous connectivity, homogeneous connectivity with delay, and the full case with the heterogeneous connectivity.

Instantaneous homogeneous connectivity

Suppose the radius of the sphere is equal to one. In this case, the starting equation (2.32) in spherical coordinates becomes:
$$\left(\frac{\partial}{\partial t} + \epsilon\right)\psi(\Omega_1, t) = \nu\int_{\Omega_2} r^2\, W_{hom}(d(\Omega_1,\Omega_2))\,\psi(\Omega_2, t)\, d\Omega_2 \tag{2.50}$$

For simplicity, the right-hand side of (2.50) can be called I(θ1, φ1, t). Hence:

$$
\begin{aligned}
I(\theta_1, \phi_1, t) &= \nu\int_{\Omega_2} W_{hom}(d(\Omega_1,\Omega_2))\,\psi(\Omega_2, t)\, d\Omega_2 \qquad (2.51)\\
&= \nu\int_{\Omega_2} W_{hom}(d(\Omega_1,\Omega_2))\sum_{l,m}\tilde\psi_{l,m}(t)\, Y_{l,m}(\Omega_2)\, d\Omega_2 \qquad (2.52)
\end{aligned}
$$

where we have introduced the expansion in spherical harmonics.
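Since the whole treatment rests on this expansion, it may help to see how coefficients of this kind can be extracted numerically from a given field; a minimal sketch using scipy's sph_harm (note its argument order: azimuthal angle first, then polar angle), with a toy field built from two harmonics:

# Minimal sketch: estimate psi_tilde_{l,m} = int Y*_{l,m} psi dOmega by brute-force quadrature.
# scipy.special.sph_harm(m, l, az, pol) takes the azimuthal angle first, then the polar angle.
import numpy as np
from scipy.special import sph_harm

def project(psi, l, m, n_theta=400, n_phi=800):
    theta = np.linspace(0, np.pi, n_theta)                      # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)      # azimuthal angle
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    TH, PH = np.meshgrid(theta, phi, indexing='ij')
    Y = sph_harm(m, l, PH, TH)
    return np.sum(np.conj(Y) * psi(TH, PH) * np.sin(TH)) * dth * dph

# toy field built from two harmonics; the projection should recover the coefficients
psi = lambda th, ph: sph_harm(1, 2, ph, th) + 0.5 * sph_harm(0, 3, ph, th)
print(project(psi, 2, 1))   # ~ 1
print(project(psi, 3, 0))   # ~ 0.5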

Since the connectivity function has a simple form in spherical coordinates whose polar axis is along the observation direction (θ1, φ1), we need to perform the rotation

$$Y_{l,m}(\Omega) = \sum_{m'} D^{(l)}_{m',m}(R(\Omega_1))\, Y_{l,m'}(\Omega_2)\,, \tag{2.53}$$

where $R(\Omega_1) \equiv R(\alpha = \phi_1, \beta = \theta_1, \gamma = 0)$ is the rotation, parametrized through the Euler angles (α, β, γ), that actively takes the polar axis into the desired direction. We have used the Wigner rotation matrices $D^{(l)}$ with integer l, well known from the theory of angular momentum in quantum mechanics. In order to perform the change of integration variables Ω2 → Ω, we use the following properties:


$$D^{(l)}_{m',m}(R^{-1}) = D^{(l)\dagger}_{m',m}(R) = D^{(l)*}_{m,m'}(R)\,, \tag{2.54}$$
$$D^{(l)}_{m,0}(\phi, \theta, \gamma) = \sqrt{\frac{4\pi}{2l+1}}\, Y^{*}_{l,m}(\theta, \phi) \;\longrightarrow\; D^{(l)}_{0,m}(R^{-1}(\Omega)) = \sqrt{\frac{4\pi}{2l+1}}\, Y_{l,m}(\theta, \phi)\,. \tag{2.55}$$

By assuming for $W_{hom}$ the expression analogous to the one used in (2.3), the integral I(θ1, φ1, t) becomes

$$
\begin{aligned}
I(\theta_1, \phi_1) &= \nu N(\sigma)\sum_{l,m}\tilde\psi_{l,m}\sum_{m'} D^{(l)}_{m',m}(R^{-1}(\Omega_1))\int_{\Omega} e^{-\theta/\sigma}\, Y_{l,m'}(\Omega)\, d\Omega \qquad (2.56)\\
&= \nu N(\sigma)\sum_{l,m}\tilde\psi_{l,m}\, D^{(l)}_{0,m}(R^{-1}(\Omega_1))\int_{\Omega} e^{-\theta/\sigma}\, Y_{l,0}(\Omega)\, d\Omega \qquad (2.57)\\
&= \nu N(\sigma)\sum_{l,m}\tilde\psi_{l,m}\sqrt{\frac{4\pi}{2l+1}}\, Y_{l,m}(\theta_1, \phi_1)\int_{\Omega} e^{-\theta/\sigma}\, Y_{l,0}(\Omega)\, d\Omega \qquad (2.58)
\end{aligned}
$$

where N is the proper normalization for the connectivity on the sphere, and we do not explicitly write the time dependence of I and $\tilde\psi_{l,m}$. It must be noted that, in order to go from equation (2.56) to equation (2.57), the resulting integrand does not depend on φ, so that only the m′ = 0 term survives.

Assuming once again a fast-decaying connectivity function, we use the Taylor expansion for the spherical harmonics inside the integral to get the second order approximation:

$$I(\theta_1, \phi_1) = \nu N(\sigma)\sum_{l,m}\tilde\psi_{l,m}\, Y_{l,m}(\theta_1, \phi_1)\sqrt{\frac{4\pi}{2l+1}}\int_{\Omega}\left[Y_{l,0}(0,0) + \frac{\partial Y_{l,0}}{\partial\theta}(0,0)\,\theta + \frac{1}{2}\frac{\partial^2 Y_{l,0}}{\partial\theta^2}(0,0)\,\theta^2\right] e^{-\theta/\sigma}\, d\Omega\,, \tag{2.59}$$

where ∂φYl,0= 0 has been used so that all the terms that contain azimuthal derivatives vanish.

We need to calculate the values of the derivatives, with respect to the polar angle, of the spherical harmonics with m = 0 at the north pole. These values are $\partial_\theta Y_{l,0}(0,0) = 0$ and $\partial^2_\theta Y_{l,0}(0,0) = -\frac{1}{2}\, l(l+1)\, Y_{l,0}(0,0)$. By substituting them into the integral, we finally get:

$$I(\theta_1, \phi_1) = \nu\,\psi(\theta_1, \phi_1) - \sum_{l,m} l(l+1)\,\tilde\psi_{l,m}\, Y_{l,m}(\theta_1, \phi_1)\,\nu\,\frac{N(\sigma)}{4}\int_{\Omega}\theta^2 e^{-\theta/\sigma}\, d\Omega\,, \tag{2.60}$$

where $Y_{l,0}(0,0) = \sqrt{(2l+1)/4\pi}\, P_l(1) = \sqrt{(2l+1)/4\pi}$ has been used.
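The north-pole values quoted above can be verified symbolically; a minimal sympy sketch, using the fact that $Y_{l,0}(\theta) \propto P_l(\cos\theta)$ (the normalization factor is irrelevant for these identities):

# Sympy check of the north-pole derivatives used above: with Y_{l,0}(theta) proportional to P_l(cos(theta)),
# verify d/dtheta Y_{l,0}|_0 = 0 and d^2/dtheta^2 Y_{l,0}|_0 = -(1/2) l (l+1) Y_{l,0}(0).
import sympy as sp

theta = sp.symbols('theta')
for l in range(0, 6):
    Y = sp.legendre(l, sp.cos(theta))                 # angular part only; normalization cancels
    d1 = sp.diff(Y, theta).subs(theta, 0)
    d2 = sp.diff(Y, theta, 2).subs(theta, 0)
    assert sp.simplify(d1) == 0
    assert sp.simplify(d2 + sp.Rational(l * (l + 1), 2) * Y.subs(theta, 0)) == 0
print("north-pole derivative identities verified for l = 0..5")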

We thus obtain the neural field equation as a diffusion equation on the unit sphere:
$$\frac{\partial\psi(\theta_1, \phi_1, t)}{\partial t} + (\epsilon - \nu)\,\psi(\theta_1, \phi_1, t) = D\,\nabla^2\psi(\theta_1, \phi_1, t)\,, \tag{2.61}$$

where the diffusion coefficient is given by
$$D = \nu\,\frac{N(\sigma)}{4}\int_{\Omega}\theta^2 e^{-\theta/\sigma}\, d\Omega\,. \tag{2.62}$$
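For concreteness, the coefficient (2.62) can be evaluated numerically; a small sketch, assuming that the normalization N(σ) is fixed by requiring the homogeneous kernel $e^{-\theta/\sigma}$ to integrate to one over the unit sphere (our reading of the normalization mentioned above):

# Numerical sketch of the spherical diffusion coefficient (2.62), assuming N(sigma) is fixed by
# int_Omega exp(-theta/sigma) dOmega = 1/N(sigma), i.e. the kernel integrates to one on the unit sphere.
import numpy as np
from scipy.integrate import quad

def D_sphere(sigma, nu=1.0):
    area = lambda th: 2 * np.pi * np.sin(th)     # azimuthal integration already carried out
    norm, _ = quad(lambda th: np.exp(-th / sigma) * area(th), 0, np.pi)
    I2, _ = quad(lambda th: th**2 * np.exp(-th / sigma) * area(th), 0, np.pi)
    return nu * I2 / (4 * norm)

print(D_sphere(sigma=0.1))   # a short-range kernel gives a small diffusion coefficient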

Homogeneous connectivity with delay

We assume that the signal transmission velocity c has the same order of magnitude as the typical velocities of the system. Here we derive the changes to the neural field equation introduced by the presence of a delay.


As we have already seen in the case of the circumference, we are going to show that the only change lies in the numerical value of the parameters.

By using the same approach and approximations of the last section, we obtain:

$$
\begin{aligned}
I &= \int_{\Omega_2} r^2\, W_{hom}(d(\Omega_1,\Omega_2))\,\psi\!\left(\Omega_2,\, t - \frac{d(\Omega_1,\Omega_2)}{c}\right) d\Omega_2 \\
&= \int_{\Omega_2}\int_{-\infty}^{+\infty} r^2\, W_{hom}(d(\Omega_1,\Omega_2))\,\delta\!\left(T - t + \frac{d(\Omega_1,\Omega_2)}{c}\right)\psi(\Omega_2, T)\, dT\, d\Omega_2 \\
&= \int_{-\infty}^{+\infty}\int_{\Omega_2} r^2\, W_{hom}(d(\Omega_1,\Omega_2))\,\delta\!\left(T - t + \frac{d(\Omega_1,\Omega_2)}{c}\right)\sum_{l,m} Y_{l,m}(\Omega_2)\,\tilde\psi_{l,m}(T)\, d\Omega_2\, dT \\
&= \int_{-\infty}^{+\infty}\int_{\Omega} r^2\, W_{hom}(\theta)\,\delta\!\left(T - t + \frac{\theta}{c}\right)\sum_{l,m}\tilde\psi_{l,m}(T)\sum_{m'} D_{m',m}(R^{-1}(\Omega_1))\, Y_{l,m'}(\Omega)\, d\Omega\, dT \\
&= N\int_{-\infty}^{+\infty}\sum_{l,m}\tilde\psi_{l,m}(T)\, D_{0,m}(R^{-1}(\Omega_1))\int_{\Omega} r^2 e^{-\frac{\theta}{\sigma}}\,\delta\!\left(T - t + \frac{\theta}{c}\right) Y_{l,0}(\Omega)\, d\Omega\, dT \\
&= 2\pi N\int_{t-\pi/c}^{t}\sum_{l,m}\tilde\psi_{l,m}(T)\, D_{0,m}(R^{-1}(\Omega_1))\, r^2 c\, e^{-\frac{c(t-T)}{\sigma}}\sin(c(t-T))\left[Y_{l,0}(0,0) + \frac{\partial Y_{l,0}}{\partial\theta}\, c(t-T) + \frac{1}{2}\frac{\partial^2 Y_{l,0}}{\partial\theta^2}\, c^2(t-T)^2\right] dT \\
&= 2\pi N r^2 c\int_{-\pi/c}^{0}\sum_{l,m}\left(\tilde\psi_{l,m}(t) + \frac{\partial\tilde\psi_{l,m}}{\partial s}\, s\right)\left(\sqrt{\frac{2l+1}{4\pi}} - \frac{1}{2}\,\frac{l(l+1)}{2}\sqrt{\frac{2l+1}{4\pi}}\, c^2 s^2\right)\sqrt{\frac{4\pi}{2l+1}}\, Y_{l,m}(\Omega_1)\, e^{\frac{cs}{\sigma}}\sin(-cs)\, ds
\end{aligned}
$$

The last integral can be further simplified; by neglecting the term that mixes the derivatives in space with the derivatives in time, as in [7]:

$$I(\theta_1, \phi_1, t) = 2\pi r^2 N c\left(\psi(t)\int_{0}^{\pi/c} e^{\frac{cs}{\sigma}}\sin(-cs)\, ds - \frac{\partial\psi}{\partial t}\int_{0}^{\pi/c} s\, e^{\frac{cs}{\sigma}}\sin(-cs)\, ds + \frac{r^2 c^2}{4}\,\nabla^2\psi\int_{0}^{\pi/c} e^{\frac{cs}{\sigma}} s^2\sin(-cs)\, ds\right) \tag{2.63}$$
We want to find the mathematical relationship that links the coefficients of the last equation with the parameters of the system; by renaming some terms, we obtain the neural field equation, which is once again a diffusion equation:

$$\dot\psi + \epsilon_0\,\psi = D\,\nabla^2\psi \tag{2.64}$$

In the limit of an infinite c, we get the same result as if there were no delay. Explicitly, the coefficients are:
$$\epsilon_0 = -\,\frac{\epsilon - \nu}{\dfrac{\nu e^{-\frac{\pi}{\sigma}}\left(2\sigma + \pi + \pi\sigma^2 + 2\sigma e^{\frac{\pi}{\sigma}}\right)}{c\,(\sigma^2+1)\left(e^{-\frac{\pi}{\sigma}} + 1\right)} - 1}$$
$$D = -\,\frac{\nu e^{-\frac{\pi}{\sigma}}\left(6\sigma^2 e^{\frac{\pi}{\sigma}} - 2\sigma^4 e^{\frac{\pi}{\sigma}} + 4\pi\sigma + 4\pi\sigma^3 + \pi^2 + 6\sigma^2 - 2\sigma^4 + 2\sigma^2\pi^2 + \sigma^4\pi^2\right)}{2\,(\sigma^2+1)^2\left(e^{-\frac{\pi}{\sigma}} + 1\right)\left(\dfrac{2\nu e^{-\frac{\pi}{\sigma}}\left(2\sigma + \pi + \pi\sigma^2 + 2\sigma e^{\frac{\pi}{\sigma}}\right)}{c\,(\sigma^2+1)\left(e^{-\frac{\pi}{\sigma}} + 1\right)} - 2\right)}$$
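The three moment-like integrals appearing in (2.63) admit closed forms, which is how coefficients of this kind can be obtained; a sympy sketch (we write the kernel as $e^{cs/\sigma}\sin(cs)$, i.e. up to the sign convention used above):

# Sympy sketch: closed forms of the moment-like integrals appearing in (2.63),
# writing the kernel as exp(c*s/sigma)*sin(c*s), i.e. up to the sign convention used in the text.
import sympy as sp

s = sp.symbols('s', real=True)
c, sigma = sp.symbols('c sigma', positive=True)
kernel = sp.exp(c * s / sigma) * sp.sin(c * s)
for n in range(3):
    In = sp.integrate(s**n * kernel, (s, 0, sp.pi / c))
    print(n, sp.simplify(In))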


Addition of the heterogeneous term

We are going to add the heterogeneous part of the connectivity to the neural field equation. It is straightforward to follow the mathematical calculations of [7] and apply their results using spherical harmonics instead of plane waves. For brevity, we denote a Dirac delta in spherical coordinates as δ(Ω). The full equation, taking into account the approximate expression of the homogeneous term calculated in the previous section, is:

$$\frac{\partial\psi(\Omega_1, t)}{\partial t} = -\epsilon\,\psi(\Omega_1, t) + D\,\nabla^2\psi(\Omega_1, t) + \sum_{i,j=1}^{N}\mu_{ij}\,\delta(\Omega_1 - \Omega_i)\,\psi\!\left(\Omega_j,\, t - \frac{d}{v}\right) \tag{2.65}$$
We want to obtain the temporal evolution of the modes. By using the ansatz (2.18), where in this case the spherical harmonics are used instead of the $\phi_k$, so that it becomes $\psi = \sum_{l,m}\xi_{l,m} Y_{l,m}$, and performing the projection on $Y^{\dagger}_{l_1,m_1}$, the last equation becomes:

$$
\begin{aligned}
\left(\frac{\partial}{\partial t} + \epsilon\right)\xi_{l_1,m_1}(t) &= \int_{\Omega_2}\int_{\Omega_1} W_{hom}(d(\Omega_1,\Omega_2))\sum_{l,m} Y^{\dagger}_{l_1,m_1}(\Omega_1)\, Y_{l,m}(\Omega_2)\,\xi_{l,m}\!\left(t - \frac{d(\Omega_1,\Omega_2)}{c}\right) + \qquad (2.66)\\
&\quad + \sum_{i,j=1}^{N}\mu_{ij}\sum_{l,m} Y^{\dagger}_{l_1,m_1}(\Omega_i)\, Y_{l,m}(\Omega_j)\,\xi_{l,m}\!\left(t - \frac{d(\Omega_i,\Omega_j)}{v}\right) \qquad (2.67)
\end{aligned}
$$

If we solve these equations, obtaining $\xi_{l,m}(t)$, we can predict the behaviour of the system. If the typical length of the homogeneous connectivity is short enough, the system must evolve according to:

$$\left(\frac{\partial}{\partial t} + \epsilon\right)\xi_{l_0,m_0}(t) = -D\, l_0(l_0+1)\,\xi_{l_0,m_0}(t) + \sum_{i,j=1}^{N}\mu_{i,j}\sum_{l,m} Y^{\dagger}_{l_0,m_0}(\Omega_i)\, Y_{l,m}(\Omega_j)\,\xi_{l,m}\!\left(t - \frac{d}{v}\right) \tag{2.68}$$

In order to study the instabilities of the last equation, we are going to show how to find the dispersion relation in the case of a single heterogeneous connection. The same calculation has been carried out in [7] for the straight-line domain. Moreover, we consider the approximation of a single mode describing the whole dynamics of the system (this is a restrictive hypothesis that we use only because we aim to provide a proof-of-concept example; in the next chapter we perform the stability analysis using more modes). The last equation becomes:

$$\left(\frac{\partial}{\partial t} + \epsilon\right)\xi_{l,m}(t) = -D\, l(l+1)\,\xi_{l,m}(t) + A(l, m, \Omega_1, \Omega_2)\,\xi_{l,m}\!\left(t - \frac{d}{v}\right) \tag{2.69}$$
where
$$A(l, m, \Omega_1, \Omega_2) = \mu_{1,2}\, Y^{\dagger}_{l,m}(\Omega_1)\, Y_{l,m}(\Omega_2) + \mu_{2,1}\, Y^{\dagger}_{l,m}(\Omega_2)\, Y_{l,m}(\Omega_1) \tag{2.70}$$

Following a standard technique, we can obtain the critical surface by searching for solutions of the form $\xi(t) = e^{\lambda t}$, where λ is a complex coefficient. The stability of the system depends on the real part of λ. It has been proven that on the critical surface λ = iω holds, with ω ∈ ℝ. By substituting, separating the real and imaginary parts, and then summing the squares of the two equations, we obtain:

$$\omega^2 = A^2 - \left(D\, l(l+1) + \epsilon\right)^2 \tag{2.71}$$
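As a proof-of-concept, the coupling A and the corresponding frequency on the critical surface can be evaluated numerically for one pair of connected points; a minimal sketch, assuming a symmetric connection $\mu_{1,2} = \mu_{2,1} = \mu$ (so that A is real) and using scipy's sph_harm (azimuthal angle first); all parameter values are illustrative.

# Illustrative sketch for the single-connection dispersion relation (2.71): compute A for one pair of
# points with a symmetric connection mu_{1,2} = mu_{2,1} = mu, and the corresponding |omega| on the
# critical surface. scipy.special.sph_harm(m, l, az, pol): azimuthal angle first, then polar angle.
import numpy as np
from scipy.special import sph_harm

def coupling_A(l, m, th1, ph1, th2, ph2, mu):
    Y1 = sph_harm(m, l, ph1, th1)
    Y2 = sph_harm(m, l, ph2, th2)
    return float(2 * mu * np.real(np.conj(Y1) * Y2))

def critical_omega(l, m, th1, ph1, th2, ph2, D, eps, mu):
    A = coupling_A(l, m, th1, ph1, th2, ph2, mu)
    rhs = A**2 - (D * l * (l + 1) + eps)**2
    return np.sqrt(rhs) if rhs >= 0 else None   # None: no purely oscillatory solution for these values

print(critical_omega(l=2, m=1, th1=0.3, ph1=0.0, th2=1.2, ph2=2.0, D=0.01, eps=0.05, mu=1.5))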

If we want the explicit result, we need the dependence of A on l and m. Using the expression derived in [55]:
