
Neural Oscillations and Information Transmission in a Thalamocortical Network Model


Academic year: 2021




Candidate: Matteo Saponati

Supervisors: Enrico Cataldo, Alberto Mazzoni

Master's Degree in Physics

Curriculum: Matter Physics

Department of Physics

University of Pisa


Abstract

From the point of view of Physics, the brain is a macroscopic system formed by non-linear elements (neurons) interacting in an intricate fashion. Its behavior is studied within the framework of dynamical system theory. With this approach, cognitive processes are modeled as dynamical paths in complex phase spaces. Computational as well as statistical analyses are needed to characterize the emerging collective phenomena. In the present thesis we follow this approach in the study of communication processes between two areas of the brain: the thalamus and the cerebral cortex. In particular, we study how these two systems interact, creating dynamical couplings for the transfer and processing of information. We develop a scaled model of the system to study its temporal dynamics and rhythmical patterns. An open problem in neuroscience is how neural networks filter out or let pass specific frequencies of oscillatory activity, thereby playing a gating role for the dynamics. Our purpose is to shed new light on the functional role of the thalamus within sensory information processing in this sense. We stress that the thalamocortical system is a complex anatomical structure that is hard to describe in its entirety. For our purposes, we focus on the coupling between a local first-order thalamic network and a local cortical circuit of the respective primary cortical area.

Our model is able to reproduce typical collective oscillations of the thalamocortical system. In particular, the thalamic model dynamics is characterized by spindle oscillations in the 7-14 Hz range enclosed within slower δ-rhythms in the 1-4 Hz range. On the other hand, the cortical model exhibits typical fast γ-oscillations in response to sustained external perturbations. We study if and how the thalamus is able to modulate cortical activity through its intrinsic rhythms. We first study the thalamocortical system in the isolated regime, i.e. when there is no external input influencing the system. We call this regime the asleep state because it models an isolated state of the brain in which external influences can be neglected. We find that in a certain region of parameter space the cortical model embodies the slow δ-rhythms while filtering out the spindle oscillations. The same phenomenon also occurs during the awake state, i.e. when external informative perturbations influence the dynamics. We study the processes underlying spindle-filtering with several investigations. Moreover, we perform a statistical analysis to characterize possible functional roles of frequency-couplings between the networks. From a first-order correlation analysis we find that the networks show non-vanishing correlation only in the δ-rhythms, with a certain degree of phase-locking. Furthermore, a higher-order correlation analysis based on mutual information shows that the networks multiplex information into two main pathways. The first is determined by the low δ-rhythms, while the second by the cortical γ-rhythms encoding input strength. In this scenario, spindle oscillations are filtered out by the cortex and remain an internal, intrinsic thalamic mechanism.

Our model is general and can be used to study any local thalamocortical network (for instance, of any sensory system) that respects our approximations. Further developments of this work would concern the inclusion of cortical feedback, which plays a crucial role in several cognitive processes. This would possibly extend the number of processes described by our model.


I want to thank my supervisors Prof. Cataldo and Dr. Mazzoni for their crucial help during these months, and all the colleagues who contributed to this work, in particular Bartolomeo Fiorini and Matteo Breschi. I also want to thank Prof. Garcia-Ojalvo for being an inspiring mentor, and the people of the Dynamical System Biology Lab who supported me during my stay in Barcelona.

I want to thank especially my family for the priceless and unforgettable support during all these years, my friends for all the good moments and Ilaria for being always a support and a partner through this life.


To Giulio, who opened my eyes.


"If a man never contradicts himself, the reason must be that he virtually never says anything at all."

Erwin Schrödinger


Contents

1 INTRODUCTION
1.1 Neurons as Dynamical Systems
1.2 Networks and Collective Phenomena
1.3 Thalamocortical System

2 MODEL
2.1 Single Neuron
2.2 Neural Networks

3 RESULTS
3.1 Cortical Network Dynamics
3.2 Thalamic Network Dynamics
3.3 Thalamocortical System Dynamics
3.3.1 Multimodal Analysis
3.3.2 Effects of Thalamocortical Synaptic Coupling
3.4 Statistical Analysis
3.4.1 First Order Correlation
3.4.2 Mutual Information

4 CONCLUSION

A Numerical Methods
A.1 Numerical Simulation
A.2 Robustness of the Model
A.3 Signal Processing
A.4 Statistical Analysis


CHAPTER 1

INTRODUCTION

Figure 1.1: Detail from an 1899 sketch by Santiago Ramon y Cajal of a Purkinje neuron from the human cerebellum.

The complexity of the human brain is bewildering. It continuously elaborates sensory stimuli from the environment, interfacing with it by means of several cognitive processes. Furthermore, it controls many hierarchical internal activities, such as control of the body or the modulation of subconscious processes regarding emotions, instincts and other abstract sensations.

The study of the functional, anatomical and physiological features of the brain has been interdisciplinary since ancient times. As reported in [1], the earliest recorded reference to the brain is in the Edwin Smith Papyrus, where the diagnosis and prognosis of head injuries were described. One of the first links between brain and intelligence appears instead in the work of Hippocrates, physician and "Father of Medicine". From antiquity to the 18th century many structural and anatomical studies were made, in particular during the Renaissance, when fine structures of the brain were discovered. Only in 1875, after the development of the theory of electromagnetism, did the physician Richard Caton report in his pioneering work [2] the first recording of electrical activity in the brain. Then in 1929, Hans Berger published the first recording of electrical activity on the human scalp, the ancestor of the modern


electroencephalogram (EEG) [3]. In his work, prominent oscillatory behavior was detected, setting the basis for one of the most studied features of neural network activity [4]. Meanwhile, Santiago Ramón y Cajal used a novel microscope to uncover many cell types (Fig 1.1), the neurons, and proposed different functions for them. He obtained the Nobel Prize in Physiology or Medicine in 1906 for this work. From the discovery of the neuron, a large body of experimental research led to a deep understanding of cellular phenomena like excitability and action potential transmission.

Only from the second half of the twentieth century has neuroscience been recognized as a separate academic branch, articulated into several disciplines such as anatomy, biochemistry and physics, as well as medicine and psychology [5]. From the galvanometer used by Caton at the end of the nineteenth century, considerable progress in experimental techniques has been made, ranging from single-electrode measurements to functional magnetic resonance imaging (fMRI) for macroscopic behavior [?]. However, the difficulty of brain activity measurements and the fragmented approaches to data recording give a central role to theoretical and computational modeling in this field. The latter links the different approaches and expands our knowledge. Computational modeling extends over the different spatial and temporal scales of the various brain activities. The level of detail depends on the particular process to be described, ranging from intracellular signaling pathways to spiking networks or large-scale neural mass models. Furthermore, it is possible to describe macroscopic neural phenomena starting from single-cell dynamics (bottom-up) or to infer microscopic dynamics from knowledge of the collective behavior (top-down) [7].

It is often difficult to identify the appropriate level of modeling for a particular problem. A frequent mistake is to assume that a more detailed model is necessarily superior. Because models act as bridges between levels of understanding, they must be detailed enough to make contact with the lower level yet simple enough to provide clear results at the higher level.

Peter Dayan and L.F. Abbott, ”Theoretical neuroscience” (2005)

From the point of view of Physics, the brain is a macroscopic system formed by non-linear elements, neurons, interacting in an intricate fashion. Its functionalities are studied in the framework of dynamical system theory, where cognitive processes are modeled as dynamical paths in the complex phase space of this many-body system. Computational as well as statistical analyses are needed to characterize the collective phenomena observed in experimental measurements.

In the present thesis we follow this approach in the study of communication between two areas of the brain: the thalamus and the cerebral cortex. In particular, we study how these two systems interact, creating dynamical couplings by transferring and processing information. Typical EEG and LFP (Local Field Potential) measurements show organized collective behavior such as nested oscillatory activities. How these are correlated with cognitive functions is an open issue in neuroscience. In order to explain possible features of rhythmical organization in the network, we develop a computational model and study its dynamics with theoretical and computational tools. In the rich landscape of computational modeling, we position our work in the regime of spiking network models. At this level of detail, it is possible to build structured networks of hundreds or thousands of elements coupled together in different fashions. The dynamics of every neuron and every synaptic connection is taken into account and simulated, following the differential equations defining their dynamics. This approach lets us maintain a well-detailed description of microscopic processes while portraying the important


macroscopic phenomena. In the next paragraphs we give a general introduction to some theoretical and experimental elements composing the landscape of computational neuroscience. Then, we summarize our approach and define the issues we have tried to solve with this thesis.

1.1 Neurons as Dynamical Systems

As already mentioned, at the end of the nineteenth century the decisive works of Cajal uncovered the role of neurons as the building blocks of the nervous system, the so-called neuron doctrine. Neurons are specialized cells containing a large number of ions and molecules and are characterized by complex electrophysiological activity. These electrically excitable cells are capable of receiving, processing, and transmitting information through electrical signals called action potentials. The latter are traveling membrane depolarizations and constitute the basic communication between neurons. The uniqueness of neural cells is that they can transmit these pulses over long distances, allowing large and complex communication architectures. All the neural network dynamics underlying higher-order cognitive functions in the brain derive from this "simple" excitation mechanism.

Roughly speaking, neurons are composed of three main structures: soma, dendrites and axons. The soma is the body of the neuron and contains the nucleus of the cell. Dendrites are cellular extensions with many branches, and they carry the majority of the input from other neurons. Finally, the axon is the main branch for output communication, along which action potentials are carried to the axon terminals and then sent to target neurons through synapses. The electrophysiological properties emerge from the dynamics of the ionic currents flowing through the neural membrane, which is endowed with ionic channels that are voltage-dependent and permeable to different ionic species. These channels, distributed throughout the neuron, change the intra- and extracellular ionic concentrations by opening and closing following a voltage-dependent mechanism. This produces a feedback process leading to fluctuations of the membrane potential. If the membrane potential reaches a certain threshold value, an action potential is emitted. After an action potential the neuron shows a period of inactivity, called the refractory period, which can last for some milliseconds, after which the membrane potential resumes fluctuating.

Due to their electrical properties, neurons are typically modeled using an equivalent electric circuit. The membrane is made of a lipid bilayer that acts as a passive capacitor C. The ionic channels creating ionic currents act as dynamical resistances R, and the membrane resting potential Er is represented by a voltage source. Different ionic currents lead to different typologies of neurons with different excitability mechanisms. Moreover, the same neuron can exhibit various responses to the same stimulus depending on its state.

Looking at neurons as complex RC circuits is the way to relate their electrophysiological and dynamical properties. It is important to stress that neural activity is not defined just by the electrophysiological properties of the neuron, but also by its dynamical behavior. This is a recurrent relationship in theoretical neuroscience.


Information-processing depends not only on the electrophysiological properties of neurons but also on their dynamical properties. Even if two neurons in the same region of the nervous system possess similar electrophysiological features, they may respond to the same synaptic input in very different manners because of each cell's bifurcation dynamics.

Eugene M. Izhikevich, "Dynamical Systems in Neuroscience" (2006)

In this thesis, as in the majority of computational and theoretical studies in neuroscience, we describe our models in the framework of dynamical system theory. The latter is widely used in applied mathematical and physical modeling because it is able to describe complex non-linear phenomena like those characterizing the biological world. Neurons are excitable because they are systems near a bifurcation, i.e. a point where a qualitative change in the dynamics occurs.

At the end of the nineteenth century Henri Poincaré, in his famous paper on celestial mechanics [8], laid the basis for a novel understanding of non-linear systems and their dynamical description. Here we briefly summarize the basic ideas of dynamical system theory that will be widely used in our work. For a general and rigorous treatment we refer to [9, 10, 11]. To define a dynamical system abstractly we need a triple of entities (Ω, ℝ, f^t), where

f^t : Ω → Ω
x_0 ↦ x(t) = f^t(x_0)   (1.1)

where Ω is the state space containing all the possible values (states) of a certain variable x. The set ℝ refers to the time-line t for temporal evolution, while f^t is a map which defines the rule that x follows during its evolution. Therefore, the map f^t defines a family of evolution operators parametrized by t ∈ ℝ. We stress that every differential equation ẋ = g(x) can define a dynamical system, by considering f^t(x_0) = x(t; x_0) as the evolution operator. In this framework one can model a certain phenomenon by defining differential equations for the state variables and looking at the dynamical evolution in the phase space. The maps of the model define trajectories in this abstract space, which show the qualitative behavior of the solutions without actually solving the system. The points where g(x) = 0 define the set of states in which x does not evolve in time. One therefore obtains a partition of the state space that catalogues the dynamics, and linearization around these fixed points gives information about the local stability of the system. These do not give all the information about the evolution, but they provide a schematic representation of the dynamics in terms of regions called manifolds. If the map f^t depends on some parameter set β = {β_1, ..., β_n}, one can study qualitative changes in the dynamics for different values of the parameters. In particular, fixed points can change their stability properties, and the system then undergoes a bifurcation. This can be analyzed using the continuation technique, i.e. continuously varying the control parameters while tracking the fixed points.
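The fixed-point and linearization analysis just described can be illustrated numerically. The snippet below is a minimal sketch of ours, not part of the thesis model: it takes the one-dimensional saddle-node normal form ẋ = β + x² (an illustrative choice), locates its fixed points by Newton iteration from several initial guesses, and classifies their stability from the sign of the linearization dg/dx at each fixed point.

```python
import numpy as np

def fixed_points(g, dg, guesses, tol=1e-10):
    """Locate zeros of g (the fixed points of dx/dt = g(x)) by Newton iteration."""
    found = []
    for x in guesses:
        for _ in range(100):
            if abs(dg(x)) < 1e-12:      # avoid division by a vanishing derivative
                break
            step = g(x) / dg(x)
            x -= step
            if abs(step) < tol:
                break
        if abs(g(x)) < 1e-8 and not any(abs(x - y) < 1e-6 for y in found):
            found.append(x)
    return sorted(found)

# Saddle-node normal form dx/dt = beta + x**2: two fixed points for beta < 0,
# none for beta > 0; the bifurcation happens at beta = 0.
dg = lambda x: 2.0 * x
for beta in (-1.0, 1.0):
    fps = fixed_points(lambda x, b=beta: b + x**2, dg, np.linspace(-2, 2, 9))
    # Linearization: a fixed point x* is stable iff g'(x*) < 0 (here, iff x* < 0).
    labels = ["stable" if dg(x) < 0 else "unstable" for x in fps]
    print(beta, list(zip(fps, labels)))
```

Repeating this scan over a fine grid of β values while tracking each fixed point is exactly the continuation idea mentioned above: for β = −1 the pair x* = ∓1 (stable/unstable) exists, and the two branches collide and disappear as β crosses zero.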

The works that opened the doors to the dynamical modeling of neural activity are those of Hodgkin, Huxley and Katz [12, 13, 14, 15, 16]. They developed a remarkable and detailed biophysical model of action potential generation and propagation that is still widely used nowadays. They discovered that the principal membrane ionic currents are due to sodium and potassium diffusion. However, it is known from later studies that the neural membrane contains many more types of ionic channels [17]. They described the neuron as a 4-dimensional system whose variables are the membrane potential v and the gating particles n, m and h of the different ionic channels. According to Kirchhoff's law, one can write down


a differential equation for the membrane potential of the form

C dv/dt = −[ḡ_Na m³h (v − E_Na) + ḡ_K n⁴ (v − E_K) + ḡ_L (v − E_L)]
τ_m dm/dt = m∞(v) − m
τ_h dh/dt = h∞(v) − h
τ_n dn/dt = n∞(v) − n   (1.2)

Different levels of approximation, such as the linear quasi-ohmic approximation or the independence between ionic currents and between gating particles, were used to achieve a rather simple system. Nevertheless, this model was able to describe the electrophysiological recordings of the squid giant axon very accurately and has been successfully used in modeling different types of neurons.
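As a concrete illustration, system (1.2) can be integrated numerically. The sketch below is ours, not taken from the thesis: it uses the standard squid-axon parameter values and the equivalent α/β form of the gating kinetics (dm/dt = α_m(v)(1 − m) − β_m(v)m, from which τ_m and m∞ follow); the step current, the forward-Euler scheme and the 0 mV spike-detection threshold are illustrative choices.

```python
import numpy as np

def hh_sim(I_amp=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of the Hodgkin-Huxley equations (1.2).
    Standard squid-axon parameters; I_amp (uA/cm^2) is switched on at t = 5 ms.
    Returns (number of action potentials, voltage trace)."""
    gNa, gK, gL = 120.0, 36.0, 0.3        # maximal conductances (mS/cm^2)
    ENa, EK, EL = 50.0, -77.0, -54.387    # reversal potentials (mV)
    C = 1.0                               # membrane capacitance (uF/cm^2)

    # voltage-dependent rate functions of the gating particles m, h, n
    am = lambda v: 0.1*(v+40.0)/(1.0-np.exp(-(v+40.0)/10.0))
    bm = lambda v: 4.0*np.exp(-(v+65.0)/18.0)
    ah = lambda v: 0.07*np.exp(-(v+65.0)/20.0)
    bh = lambda v: 1.0/(1.0+np.exp(-(v+35.0)/10.0))
    an = lambda v: 0.01*(v+55.0)/(1.0-np.exp(-(v+55.0)/10.0))
    bn = lambda v: 0.125*np.exp(-(v+65.0)/80.0)

    v = -65.0                             # start at rest, gating at steady state
    m = am(v)/(am(v)+bm(v)); h = ah(v)/(ah(v)+bh(v)); n = an(v)/(an(v)+bn(v))
    trace = np.empty(int(T/dt))
    for k in range(len(trace)):
        I = I_amp if k*dt > 5.0 else 0.0
        I_ion = gNa*m**3*h*(v-ENa) + gK*n**4*(v-EK) + gL*(v-EL)
        v += dt/C * (I - I_ion)
        m += dt*(am(v)*(1.0-m) - bm(v)*m)
        h += dt*(ah(v)*(1.0-h) - bh(v)*h)
        n += dt*(an(v)*(1.0-n) - bn(v)*n)
        trace[k] = v
    # count action potentials as upward crossings of 0 mV
    return int(np.sum((trace[:-1] < 0.0) & (trace[1:] >= 0.0))), trace

n_spk, v_tr = hh_sim()
print(n_spk)
```

With a sustained 10 µA/cm² step the model fires repetitively, while for zero input the membrane simply stays near rest, reproducing the all-or-none excitability discussed above.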

Starting from this kind of model one can add different levels of complexity, e.g. considering multi-compartment units for spatial degrees of freedom or calcium-dependent ionic currents. Adding complexity makes it possible to follow the more detailed dynamics of the single-neuron system.

On the other hand, the Hodgkin-Huxley model is typically too computationally heavy to be inserted in a multi-unit description. Large-scale or mesoscopic models need reduced neuron descriptions that retain just the essential properties of the process. Starting from the four-dimensional Hodgkin-Huxley dynamics, FitzHugh [18] and then Morris and Lecar [19] developed reduced two-variable models

C dv/dt = −[ḡ_Ca m∞(v)(v − E_Ca) + ḡ_K n (v − E_K) + ḡ_L (v − E_L)]
dn/dt = φ [n∞(v) − n] / τ_n(v)   (1.3)

These are the prototype models for stability analysis, because a qualitative representation is easily achieved in a 2D phase space. Following an earlier work by Lapicque [20] at the beginning of the twentieth century, a further approximation can be made by reducing the system to a 1D dynamics. This is the case of the Leaky Integrate-and-Fire model, where the neuron is modeled as a leaky RC circuit that just integrates its inputs

C dv/dt = −(v − E_r)   (1.4)

The dynamics is discretized by an imposed threshold process: when the membrane voltage reaches a certain fixed value, it is instantaneously brought to a reset value and an action potential is emitted. This simple model can be generalized, considering non-linear functions and/or coupled variables, to achieve more complex dynamics [21].

C dv/dt = −(v − E_r) + f(v)   (1.5)
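A minimal implementation makes the hybrid continuous/discrete character of the integrate-and-fire scheme explicit. The parameter values and injected currents below are illustrative choices of ours, not those used in the thesis model.

```python
import numpy as np

def lif(I, T=200.0, dt=0.1, tau=10.0, Er=-65.0, v_th=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire: tau dv/dt = -(v - Er) + I, plus an imposed
    threshold/reset process. Returns the list of spike times (ms)."""
    v, spikes = Er, []
    for k in range(int(T / dt)):
        v += dt / tau * (-(v - Er) + I)   # subthreshold (continuous) dynamics
        if v >= v_th:                     # discrete spike-and-reset event
            spikes.append(k * dt)
            v = v_reset
    return spikes

# Suprathreshold drive fires regularly; subthreshold drive never reaches v_th.
print(len(lif(I=20.0)), len(lif(I=10.0)))
```

The threshold is what discretizes the dynamics: for I = 20 the steady state Er + I lies above v_th and the neuron fires periodically, while for I = 10 it lies below and the membrane relaxes silently, exactly the artificial reset mechanism described in the text.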

In these models, action potential generation is not a dynamical property of the system (a limit-cycle attractor) but rather an artificially imposed reset. These models are hybrid dynamical systems defined both by a continuous dynamics, the subthreshold behavior, and by a discrete dynamics, the spike-and-reset process. General integrate-and-fire models are highly popular


in network modeling because they simplify the action potential mechanism to the minimum. In this way, collective phenomena and macroscopic observables are easily linked to the firing activity of single neurons. Continuing with the practical use of simple neuron models in large-scale networks, further approximations can be made to focus just on the synchronization patterns of non-linear oscillators. This approach was introduced by Kuramoto in a seminal paper [22], where an even more abstract view of neuron dynamics was developed. In the Kuramoto model the biophysical properties are neglected and the dynamics is mimicked by an oscillator with its own natural frequency ω_i

dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i)   (1.6)

where a simple sinusoidal phase coupling with all the other elements of the network is added.
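Model (1.6) is easy to explore numerically. The sketch below uses illustrative parameters of our own (Gaussian natural frequencies, N = 100 oscillators, forward-Euler integration) and measures synchrony through the Kuramoto order parameter r = |⟨e^{iθ}⟩|, which grows from near 0 to near 1 as the coupling K crosses its critical value.

```python
import numpy as np

def kuramoto_r(K, N=100, T=20.0, dt=0.01, seed=0):
    """Integrate dtheta_i/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i)
    and return the final order parameter r = |mean(exp(i*theta))|."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)          # natural frequencies
    theta = rng.uniform(0.0, 2*np.pi, N)     # random initial phases
    for _ in range(int(T / dt)):
        # pairwise coupling term (K/N) * sum_j sin(theta_j - theta_i)
        coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
    return abs(np.mean(np.exp(1j * theta)))

print(kuramoto_r(K=0.1), kuramoto_r(K=4.0))   # weak vs strong coupling
```

For weak coupling the phases drift incoherently and r stays at the O(1/√N) level, while for strong coupling almost all oscillators phase-lock and r approaches 1: a minimal example of the collective synchronization phenomena discussed in the next section.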

1.2 Networks and Collective Phenomena

Neurons in the mammalian brain exhibit a wide range of computational properties on their own. However, the main cognitive functions are generated by collective phenomena emerging from the interactions between them. These take place in complex interconnected structures, the neural networks. Such cooperative many-body systems are characterized by rich phase spaces and high-dimensional dynamics [23]. The operating regime of neural networks is very far from the typical equilibrium states studied in classical statistical physics. Influences from the environment continuously affect the dynamics of the network, which reacts with self-organized activities and multistable patterns [24]. How do organized phenomena emerge? How can such a disordered system show highly structured cognitive abilities?

Let us consider a set of variables X = (x_1, x_2, ..., x_n) describing the neural state. A general dynamical evolution for X is

dX/dt = f(X)   (1.7)

The response of this system to a certain external parameter, say an injected current I(t),

dX/dt = f(X) + I(t)   (1.8)

depends on the map f defining its dynamics. Now, taking a set of these neurons, say

Ω_k = {X_1, ..., X_k}   (1.9)

we can build a neural network by defining the connections between them. Neural communication is mediated by action potentials conveying information through their timing. Their duration in time can easily be neglected (with respect to the other characteristic times in the brain), and therefore we can treat them as discrete points, i.e. all-or-none processes (similarly to a Morse code). With this approach, we define the neural activity as a linear sum of point events in time

ρ_j(t) = Σ_{i=1}^{n} δ(t − t_i)   (1.10)

where we sum over the n action potentials emitted by a single neuron j at different times t_i during a certain time window T. Neurons use neural activities of the form of Eq (1.10), also called spike trains, for the encoding and transmission of information. The evoked spike trains, in relation to the type of stimulation, characterize the computational properties of the cell. We know from experimental data that neurons show highly variable responses to equivalent


inputs because of their non-linearity. In other words, neural activity is a stochastic process and a statistical treatment is needed. A purpose of computational neuroscience is to find the rules that neurons follow to distribute these point processes in time, i.e. to find the neural code. For instance, one can look at the mean firing rate in a certain time window

ν = n/T = (1/T) ∫_0^T ρ(t) dt   (1.11)

or at how the firing rate changes over small time steps Δt

ν(t) = (1/Δt) ∫_t^{t+Δt} ρ(t′) dt′   (1.12)

This is a way to parametrize neural activity and define rules describing neural dynamics. Furthermore, one can look at the firing rate of a population of neurons, say a network as before, averaging over the total number of elements

ν(t) = lim_{Δt→0} #spikes(t, t+Δt) / (N Δt) = (1/(N Δt)) ∫_t^{t+Δt} Σ_{j=1}^{N} ρ_j(t′) dt′   (1.13)

A complete description of the stochastic response with respect to a stimulus would require knowledge of the probabilities corresponding to every possible sequence of spikes evoked by the stimulus itself. It is possible to compute the probability that a spike occurs within a specified interval [t, t + Δt] similarly to Eq (1.12) or Eq (1.13). In this way we can estimate the probability of an arbitrary spike occurring, given the previous information. In order to compute probabilities, the firing rate is the only thing we need if spikes are statistically independent. This means that a spike has a certain probability of occurring at a certain time independently of the previous history of the system, i.e. of the previous spikes emitted. With this assumption, it is possible to describe spike trains statistically as Poisson stochastic processes with ν(t) as the parameter for the mean rate of occurrence. Homogeneous or inhomogeneous Poisson processes faithfully reproduce spike trains and can also describe higher-order statistics. However, temporal correlation (for example the neural refractory period) is typically present, and the statistical treatment then needs to be generalized to renewal processes. Such a statistical approach is useful for considering correlation and coherence between neural activities. The power spectrum of the stochastic processes can shed light on synchronization and oscillatory phenomena in the brain. Likewise, the distribution of inter-spike intervals (ISI) brings information about neural activity: the closer the process is to Poissonian, the more the inter-spike intervals follow an exponential distribution with a certain decay constant τ = ν⁻¹.
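These statistical notions can be illustrated with a synthetic spike train. The sketch below (illustrative rate and duration of our choosing) draws a homogeneous Poisson process by cumulating exponential inter-spike intervals, then recovers the mean firing rate of Eq (1.11) and checks that the ISI statistics are exponential, whose signature is a coefficient of variation close to 1.

```python
import numpy as np

rng = np.random.default_rng(1)
nu, T = 20.0, 1000.0                     # mean rate (spikes/s) and duration (s)

# A homogeneous Poisson process has i.i.d. exponential inter-spike intervals,
# so we draw more than enough ISIs and cumulate them into spike times.
isi = rng.exponential(scale=1.0/nu, size=int(2 * nu * T))
spike_times = np.cumsum(isi)
spike_times = spike_times[spike_times < T]

rate_hat = len(spike_times) / T          # empirical mean rate, cf. Eq (1.11)
isi_emp = np.diff(spike_times)
cv = isi_emp.std() / isi_emp.mean()      # CV = 1 exactly for a Poisson process
print(rate_hat, isi_emp.mean(), cv)
```

The estimated rate converges to ν, the mean ISI to ν⁻¹, and the CV to 1; introducing a refractory period would push the CV below 1, which is precisely why renewal processes are needed for real spike trains.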

Synaptic Model

Now, we can insert synaptic connections in the neural set Ω_k as some function of the activities of the other neurons, say s(ρ):

dX_i/dt = f(X_i) + Σ_{j=1}^{n} s(ρ_j)   (1.14)

where the sum runs over all the neurons j connected to the i-th neuron. The functional form of s(ρ) depends on the level of detail considered, but it is necessarily a function of the supra-threshold dynamics ρ(t) of the presynaptic neurons. The lowest level of description is based on pulse-like communication only

s(ρ_j) = w_ij ρ_j = w_ij Σ_i δ(t − t_i)   (1.15)


where every spike to the i-th postsynaptic neuron changes the membrane potential by a constant value w_ij. The most famous network model of this kind is reported in the pioneering work of John Hopfield [25], which describes binary neurons coupled with constant synaptic weights. This model is based on the idea developed by Ernst Ising for the description of ferromagnetism in statistical mechanics [26]. Even in such a simplified model, collective phenomena such as the memorization of activity patterns and multistable dynamics are possible. Emerging oscillatory activities were also found in similar disordered networks of binary neurons [27]. In dynamical system theory it is known that even the simplest evolution map can show complex or chaotic structure for particular values of the control parameters; another example is the famous logistic map studied by the biologist Robert May in a seminal paper [28].
Most real synapses in vertebrate brains are chemical synapses and exhibit finite temporal dynamics. The arrival of action potentials from presynaptic neurons triggers a complex chain of biochemical processes for the release of neurotransmitters. These reach the post-synaptic side, where they are detected by specialized receptors that trigger the opening of specific channels in the post-synaptic membrane. This converts chemical signals into electrical responses for the modulation of the membrane voltage. Synapses are thus dynamical systems with finite characteristic timescales which shape the time-evolution of the pulse-like action potential communication. For instance, some typical models are the single-exponential (first equation) or double-exponential (second equation) models

s(ρ_j, t) = Σ_{X_j} w_j Σ_{t_j∈ρ_j} (1/τ↓) exp(−(t − t_j)/τ↓) Θ(t − t_j)

s(ρ_j, t) = Σ_{X_j} w_j Σ_{t_j∈ρ_j} [τ↓τ↑/(τ↓ − τ↑)] [exp(−(t − t_j)/τ↑) − exp(−(t − t_j)/τ↓)] Θ(t − t_j)   (1.16)

where Θ(t − t_j) is the Heaviside function, so that every exponential term is considered only from the respective discrete time t_j representing a spike arrival. These more physiological models introduce new timescales in the dynamics and can greatly influence the behavior of the system. Synaptic currents coming from thousands of connections are a useful lens for looking at mean collective processes. The total synaptic current of a certain group of neurons represents its mean activity, and such currents are typically the leading contributors to macroscopic signals such as the Local Field Potential or the EEG. Considering the number of pre-synaptic neurons N in Eq (1.14) in the physiologically plausible order of hundreds to thousands of elements, the total input to the i-th neuron becomes a really complex signal. This stresses even more the central role of noise in neural network dynamics. In the synaptic bombardment regime, input currents are a sum of stochastic processes. Consequently, using a diffusion approximation, the total synaptic input can be written as a mean current Ī with superimposed random fluctuations [29]

Σ_{j=1}^{n} s(ρ_j, t) ≃ Ī + σ_I η(t)   (1.17)

where η(t) is a Gaussian white-noise process with zero mean and unit variance, E[η(t)η(s)] = δ(t − s). When temporal correlation is considered, as in Eq (1.16), the stochastic process is modeled as an Ornstein-Uhlenbeck process [30]

τ_n dI(t)/dt = −[I(t) − Ī] + σ_I η(t)   (1.18)

Synaptic bombardment is the main source of noise in the brain, the network noise. Other sources are ion channel noise and heterogeneity in the network connectivity scheme (quenched



Figure 1.2: An example of Rubin's vase-face illusion, first developed by the Danish psychologist Edgar Rubin. This is an example of bistable perception, i.e. a perceptual phenomenon in which there are unpredictable sequences of spontaneous subjective changes. Neurons in a network are able to switch from one firing state to another in order to give different perceptive representations of a given stimulus. This is just one example of emerging collective phenomena in neural networks.

disorder). The latter is a main characteristics of real neural networks. Tipically one model network connections as randomly distributed structures following certain probabilities distri-butions. Otherwise, neural networks typically arrange in clustered fashion and small-network structures [31]. This type of quenched noise is similar to the one found in spin-glass ma-terials [32]. The role of noise in neural system had been studied for decades. Random influences on the membrane potential of neurons affect their responsiveness, their ability of integrate external informations and the overall network behavior. Noise emerges from collective fluctuating phenomena and it is reasonable to think about its possible function-ality [33]. Several studies confirmed noise role in enhancing responsiveness of the system [34] and facilitating transitions between multistable states [35]. The latter is a phenomenon called stochastic resonance [36] between multistable states of complex system such as the brain. Recently, various works have been focusing on the so called critical brain hypothesis: neural systems dynamics is concentrated on states near a phase transition, i.e. close to a critical point. This enhance the processing proprieties and the reaction times to external stimuli. The idea of self organized criticality was firstly introduced by Per Bak in 1988 [37]. From his point of view brain uses stochastic background activity as a resting state where efficient computational functions can emerge. Different works studied the typical 1/f spec-trum of cortical ongoing activity has the result of a multistable critical landscape [38, 39, 40]. This collective phenomena could emerge spontaneously in a neural set Ωk which

dynamics is described by k differential equations of the form (1.14). The disordered structure and the non-linear dynamics create an intricate energy landscape for the overall state of the network. Other examples of collective phenomena can arise if we consider the dependency of the map f on some parameter set θ = {θ1, ..., θj}. Furthermore, we can consider the neural dynamics driven by a control parameter a(t) (external stimuli):

\frac{d}{dt}\vec{X}_i = f_\theta(\vec{X}_i) + \sum_{j=1}^{n} s(\rho_j, t) + a(t) \qquad (1.19)

The dynamics described by k coupled differential equations of the form (1.19) can show phase transitions in an out-of-equilibrium fashion. Interaction with the environment can reduce the dimensionality of the dynamics and trigger self-organized behavior [41]. Furthermore, we can generalize the model to m different populations of neurons. This means considering m neural sets Ω^α_{k_α} = {X^α_1, ..., X^α_{k_α}} (α = 1, ..., m) that can have recurrent interactions with neurons from the same set and feedforward interactions with neurons of different sets:

\frac{d}{dt}\vec{X}_i^{\alpha} = f_\theta(\vec{X}_i^{\alpha}) + \sum_{j=1}^{n_\alpha} s(\rho_j^{\alpha}, t) + \sum_{\beta \neq \alpha} \sum_{j=1}^{n_\beta} s(\rho_j^{\beta}, t) + a(t), \qquad \alpha = 1, ..., m \qquad (1.20)

where the sum on β runs over all the neural sets Ω^β_{k_β} different from Ω^α_{k_α}. We stress that starting from Eq (1.20) it is possible to add arbitrary complexity, such as different synaptic response functions for every population, different connectivity schemes, etc. Here we are just introducing the key concepts in the theory of networks of coupled differential equations. One of the most studied collective phenomena emerging from network activity are rhythmical patterns and synchronization. Oscillations on different time scales and with different correlation lengths are experimentally demonstrated phenomena in essentially all neural activities [44, 45]. These oscillations emerge from the rhythmical behavior of neurons embedded in the reservoir of network inputs.

Figure 1.3: Classes of oscillatory activity in the cortex. The range and common term for each frequency band are shown. Note the linearity of the classes in logarithmic scale. Taken from Buzsáki (2004) [4].


Brain oscillators, as biological rhythms, are related to limit cycles and weakly chaotic oscillators, and share features of both harmonic and relaxation oscillators. They are supposed to be a mechanism for organizing internal activity and structuring the response to external perturbations. Fig 1.3 shows a schematic representation of typical oscillatory activity in the cerebral cortex. Slow oscillations such as Slow 4, ..., Slow 1 or delta are observed during sleep and anesthesia and have long correlation lengths across brain areas. They are therefore supposed to be related to default states or internal states of the brain as an isolated system. During awake behavioral states brain areas typically exhibit faster rhythms such as alpha, beta or gamma. Extremely fast oscillations, such as fast and ultra fast, are visible in particular brain areas such as the hippocampus.

Correlated oscillations in different parts of the brain are thought to underlie transmission of information and cognitive tasks [42]. For example, fast γ-oscillations in the cortical network arise when external inputs are able to trigger internal mechanisms between excitatory and inhibitory populations [46, 47]. Synchronization between neural populations arises as a self-organized phenomenon in response to perturbations. One of the most famous scientists who devoted part of his work to organized non-equilibrium systems is Hermann Haken. Inspired by laser theory, he developed the idea of Synergetics as an interdisciplinary science explaining collective phenomena in open disordered systems [48, 49, 50].

The brain's abilities of information storage, transmission and processing can be well described by the properties of non-linear dynamical systems delineated so far. These are some of the theoretical tools that we use in the present work to describe such collective phenomena in the thalamocortical system. Numerical simulations and data analysis are other important instruments for dynamical modeling. In Appendix A we explain the codes and the toolboxes we use throughout this thesis to simulate the dynamics and analyze the results.

1.3 Thalamocortical System

The brain contains roughly 10^11 neurons and even orders of magnitude more synapses.

It is impossible to model such an intricate network as a whole. In neuroscience, as in other scientific areas, simplifications are needed to achieve a tractable computational model of the system. Typically, attention is focused on a particular part of the brain, and a scaled network mimicking its dynamics is considered. Sensory systems have always attracted particular attention in neuroscience because they offer a good experimental ground in which the relationship between the physical world and the neural response can be established at different stages of cortical processing. Our present work focuses on the collective dynamics of the thalamocortical system. We start from the works of Mazzoni [51, 52, 55] regarding cortical and thalamic networks separately. We then create a unified novel model coupling the thalamic and cortical models and, moreover, add new dynamical characteristics to the system. Before describing our model in detail in the next chapter, we give a brief introduction to the anatomical and physiological features of the thalamocortical system.

The thalamocortical system constitutes the vast majority of the mammalian brain and contributes to most cognitive functions such as consciousness, sleep, decision making, alertness and processing of sensory information. In particular, all sensory inputs from the peripheral nervous system (with the exception of the olfactory system) pass through specific areas of the thalamus (thalamic nuclei) and are then projected to other specific areas of the cortex. For example, visual inputs from the retina are sent to the lateral geniculate nucleus of the thalamus, which in turn projects to the primary visual cortex in the occipital lobe. Thalamus and cerebral


Figure 1.4: Sketch of thalamocortical pathways in a human brain. Different thalamic nuclei (central part of the figure) project to different primary sensory cortical areas (outer part of the figure). Credits: Nieuwenhuys et al. (2008). The Human Central Nervous System, 4th Ed.

cortex create a complex interconnected system with feedforward and feedback projections. The thalamus is a paired symmetrical structure located near the center of the brain. It contains three main populations of cells: thalamocortical relay neurons (TC), interneurons (INT) and reticular neurons (RE). The first are the main excitatory population of the thalamic nuclei, i.e. neurons coupled to other ones through glutamatergic-like (excitatory) synapses. The latter two instead couple through GABAergic-like inhibitory synapses and form the inhibitory populations. From an anatomical point of view, a particular thalamic nucleus is shaped by groups of relay cells surrounded by a shell of the reticular nucleus. Thalamic populations are believed to be divided into two functional groups, the drivers and the modulators [56]. Drivers carry the main information flow between parts of the brain (for example from the retina to the primary visual cortex), while modulators typically affect the functional modes of the relay process and contribute to information processing. The typical modification of thalamic relay properties is the transition from bursting mode to tonic mode, managed by the driver-modulator interaction. Moreover, we can classify particular areas of the thalamus by the types of input they receive. In particular, the thalamus is divided into first order and higher order relay areas. The former are the principal relay connections from particular subcortical sources of information (i.e. sensory systems) to preferred areas of primary cortex. The latter instead are involved in higher cortico-thalamo-cortical pathways, in particular between layers 5 and 6 of the related cortical area. Both are also influenced by different levels of cortical feedback and other brainstem inputs.

The cerebral cortex is the largest part of the mammalian brain and its structure has been studied for decades. It has a highly specialized modular structure; in particular, it is divided into functional areas for sensory, motor and cognitive functions (e.g. for the processing of visual information through highly specialized cortical areas). Furthermore, it is subdivided into several


layers for input/output connectivity (i.e. layers for receiving thalamic input and layers organizing feedback output, as discussed above). Cortical layers contain different distributions of neural cell types and different connections with other cortical areas or external inputs. In a particular cortical area it is possible to find a maximum of 6 layers. The first 3 layers, in particular the first and third, are the main target of corticocortical afferents, and layer 3 is the principal source of corticocortical efferents. The fourth layer is the main target of thalamocortical relay pathways as well as of intracortical connections. The last two layers (namely layer 5 and layer 6) receive corticocortical inputs, especially from layer 3. Layer 6 plays an important role in cortical feedback because its excitatory populations project back to the thalamic nucleus connected to layer 4 of the same cortical column. The excitatory populations are mainly composed of pyramidal neurons (PY), which have pyramidal-shaped somas and long axons innervating different cortical layers. They are supposed to compose the excitatory pathways for information transmission within cortical and thalamic areas. On the other hand, the inhibitory populations are composed of different neural types such as stellate cells or interneurons. Typically, excitatory neurons show wide spatial structures while inhibitory neurons are more localized inside their respective cortical layer.


Again, we stress that thalamocortical connections play a fundamental role in the processing and transmission of sensory inputs from the peripheral nervous system to higher cortical areas. In past years several steps toward a more detailed comprehension of thalamocortical mechanisms have been made. Nowadays the state of the art of systems neuroscience provides a detailed description of anatomical structures and functional network activities. However, it fails to describe rigorously the possible functional roles of neural oscillations in the thalamocortical system. From recent findings we know that the thalamus is not only a relay station of information, but rather plays a critical part in modulating cortical dynamics [57]. During asleep states, slow waves [0.1-1 Hz or 1-4 Hz] are present in both thalamic and cortical circuitry and seem to correlate and organize both network dynamics [58, 59]. On the other hand, different fast rhythms appear during awake states, with different frequency bands in thalamus and cortex. Moreover, the thalamic network shows oscillations at certain frequencies (the spindle oscillations [7-14 Hz]) which are peculiar to the thalamus but are hardly observed in primary cortical dynamics. Actually, an open problem in neuroscience is how neural networks choose to filter out or let pass specific frequencies of oscillatory activity and therefore have a gating role for the system dynamics. In this sense, our purpose is to shed new light on the functional role of the thalamus within sensory information processing. Our computational model is mainly developed to characterize the temporal dynamics of the system and in particular its oscillations. In the complex anatomical framework of the thalamocortical system discussed before, we focus on the coupling between a local first order thalamic network and a local cortical circuitry of the respective primary cortical area.
We consider a local thalamic network in the approximation that cortical feedback can be neglected; therefore, we focus only on the modulation that thalamic activity can induce in cortical dynamics. Moreover, our local cortical network belongs to the fourth layer of the specific cortical area, and other brainstem inputs are neglected. Our model is general and can be applied, within those approximations, to every set of local thalamocortical connections, from the visual system to other sensory systems.


CHAPTER 2

MODEL

Figure 2.1: Image processed by the Google DeepDream computer vision program. It is based on a convolutional neural network, i.e. a feedforward neural network model used in machine learning and inspired by biological visual cortical structures in animals. The program was created for pattern recognition (such as faces or landscapes) in digital images, but it also creates hallucinated output when iterated on the same input image. Taken from Inceptionism: Going deeper into Neural Networks, Google Photo (2015).

In this chapter we describe in detail the computational model we developed for our purposes. Starting from the single neuron level up to synaptic coupling models, we compose the complete thalamocortical network picture. As explained in previous sections, our aim is to describe collective phenomena of thalamocortical activity without losing the links between macroscopic observables and the microscopic dynamics of neurons. The spiking network approach lies in between macroscopic and microscopic descriptions. Numerical simulations of such models are computationally heavy because one tracks the dynamics of every neuron and every synapse. This requires long computational times, especially in high-activity states of the networks. Therefore, such complex network models need an accurate but simple neuron modeling framework. From the nineties onwards the idea of a simple neuron model with a superimposed spiking threshold mechanism has become more and more popular. The seminal paper of Lapicque in 1907 inspired the formulation of Integrate-and-Fire models [60]. In this way it is easy to separate the sub-threshold evolution of neural variables from the supra-threshold


dynamics of spiking activity. We stress that such simplifications are justified only for the purpose of large network descriptions. More biologically-grounded behavior of neurons can be described using non-linear integrate-and-fire models. Consequently, we consider such a type of single neuron model in our network description. We will describe the particular model we use in the next paragraph. We use the same approach also for synaptic modeling: every synapse is described by differential equations with maps that are functions of time and of the pre-synaptic neural activities of Eq (1.16). In summary, our description is based on point-neurons, i.e. spatial degrees of freedom are neglected. Synapses too are described as functions of time and of the interaction between neurons only. As we will describe in the next sections, we consider a delay parameter τ mimicking the propagation of action potentials through neural axons. In this way we take into account the spatial distribution of the network in a simplified manner. Spatial organization has a role in the functional properties of neural networks, and recent studies on the connectome and on spatial neural fields are promising for macroscopic fMRI descriptions. However, networks of spiking point-neurons efficiently describe the complex behavior of macroscopic activity and have an increasing role in machine learning networks and artificial intelligence in general [61].

2.1 Single Neuron

We follow the Integrate-and-Fire approach for neuron dynamics, by now the most used in computational network modeling, considering the Adaptive Exponential Integrate-and-Fire (AeIF) model, developed for the first time by Brette and Gerstner in 2005 [62]. This kind of two-dimensional phenomenological model generates multiple firing patterns depending on the choice of parameter values. Furthermore, its phase diagram shows transitions from one firing type to the other in a dynamical description. In 2008 this model proved the most efficient one in a quantitative competition based on reliability performance [63]. In the following years it has been studied for different applications [64, 65] and successfully used to describe neurons with complex features such as thalamic cells [66]. To maintain a homogeneous formalism in our thalamocortical model we use the AeIF model with different parameter sets for cortical and thalamic cells. In this way we keep the thalamic cell description based on [55] while achieving an accurate cortical neuron model including adaptation and different spiking responses.

Every neuron is then modeled as a two-dimensional system with state variables (v(t), w(t)). The dynamical evolution is governed by two coupled differential equations for the state variables (Adaptive Exponential Integrate-and-Fire, AeIF, model):

\tau_m \frac{d}{dt} v(t) = f(v) - \frac{w(t)}{g_l} \qquad (2.1)

\tau_w \frac{d}{dt} w(t) = g(v) - w(t) \qquad (2.2)

with discrete reset dynamics

\text{if } v(t) > \Theta_{th} \longrightarrow (v(t), w(t)) = (v_r, w(t) + b), \qquad v(t : t + \tau_{rp}) = v_r \qquad (2.3)


In particular, the v-map is composed of two terms:

f(v) = -[v - \Pi] + \Delta \exp\left( \frac{v - \Theta_{th}}{\Delta} \right) \qquad (2.4)

The former describes a leaky ionic current in the quasi-ohmic approximation. The parameter Π is the reversal potential of the membrane, i.e. the steady value at which there is no net flow of ions between the intra- and the extra-cellular medium. The second term describes the rapid evolution of the membrane potential near the threshold: when the difference v − Θth is comparable

with the slope factor ∆, the subthreshold dynamics rapidly changes until a spike is triggered. The exponential term mimics the electrophysiological positive feedback produced by amplifying currents, such as Na+ currents, for the emission of action potentials. The slope factor ∆ defines the sharpness

of the threshold, i.e. the range of values in which spike emission is exponentially triggered. The overall membrane evolution is characterized by an intrinsic time τm, determined by the capacitance of the lipid bilayer and the conductance of the leak ionic current. The state variable v(t) represents the neural membrane voltage.

The second state variable w(t) mimics non-linear adaptation processes and couples to the dynamics of v(t). This second degree of freedom enriches the dynamical behavior of the system. As a matter of fact, adaptation is a key feature of non-linear systems such as neurons. The w-map is defined by

g(v) = a[v − Π] (2.5)

The dynamics of w(t) is determined by the main parameters a and b, related to two decoupled adaptation mechanisms. The parameter a determines sub-threshold adaptation as a linear coupling with the membrane potential v. This type of adaptation can arise from linearized ion channels [67] or from the interaction with a passive dendritic compartment. The parameter b determines a supra-threshold spike-triggered adaptation which depends only on the spike times {si} of the neuron.

Every time a spike is emitted, the adaptation current is instantaneously increased by an amount b. This adaptation mechanism arises from slow ionic currents such as I_M or I_K(Ca)

[68]. The time evolution of w(t) is determined by a characteristic time τw that is equal for both adaptation processes. The discrete reset dynamics enriches the behavior of the system and models action potentials as discontinuous jump processes in time. Near the threshold Θth the membrane potential is quickly depolarized with a step ∆ and a spike is emitted, bringing v(t) to its reset value vr for a refractory time τrp. After the refractory time the membrane restarts

to evolve in time according to the map f. The supra-threshold dynamics is indeed described by point processes in time, i.e. the neural activity function of Eq (1.10). With this approach we completely describe the dynamical state of a neuron by the pair (v(t), w(t)), and the evolution is tracked by paths in the phase plane (v, w).
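The sub-threshold equations together with the reset rule can be integrated with a simple forward-Euler scheme. The following sketch uses illustrative parameter values in roughly physiological ranges; they are not the cortical or thalamic parameter sets used in this thesis, and the numerical spike-detection cutoff above Θth is a common implementation choice rather than part of the model definition:

```python
import numpy as np

# Forward-Euler integration of the AeIF model, Eqs (2.1)-(2.5), with the
# discrete reset rule. Parameter values are illustrative placeholders in
# roughly physiological ranges (mV, ms, nS, pA), not the thesis parameter
# sets; the spike cutoff above threshold is a common numerical choice.

tau_m, tau_w = 20.0, 100.0     # membrane and adaptation time constants (ms)
g_l, Pi = 10.0, -70.0          # leak conductance (nS) and reversal potential (mV)
Delta, theta = 2.0, -50.0      # slope factor and threshold (mV)
a, b = 2.0, 40.0               # sub-threshold (nS) and spike-triggered (pA) adaptation
v_r, tau_rp = -58.0, 2.0       # reset value (mV) and refractory period (ms)
i0 = 400.0                     # constant injected current (pA)

dt, T = 0.05, 500.0            # integration step and duration (ms)
v, w, refractory = Pi, 0.0, 0.0
spikes = []

f = lambda v: -(v - Pi) + Delta * np.exp((v - theta) / Delta)   # Eq (2.4)
g = lambda v: a * (v - Pi)                                      # Eq (2.5)

for k in range(int(T / dt)):
    if refractory > 0.0:
        refractory -= dt
        v = v_r                                          # clamp v while refractory
    else:
        v += dt / tau_m * (f(v) - w / g_l + i0 / g_l)    # Eq (2.1) plus input
    w += dt / tau_w * (g(v) - w)                         # Eq (2.2)
    if v > theta + 5.0 * Delta:                          # numerical spike condition
        spikes.append(k * dt)
        v, w = v_r, w + b                                # reset rule and jump b
        refractory = tau_rp

print(len(spikes))
```

With a sustained supra-rheobase current the sketch produces a tonic spike train whose inter-spike intervals lengthen as w accumulates, i.e. the adaptation behavior discussed above.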

The response of the system to external interaction is defined by its parameter setting. In a work by Naud et al. [69], the parameters of the AeIF model were divided into two classes: scaling parameters and bifurcation parameters. The former define scaling, stretching of time and changes in offset of the state variables, and they do not affect the dynamical structure. The latter determine the qualitative structure of the dynamics. In Fig 2.2 we report the phase space of the AeIF system for an arbitrary parameter set. A nullcline is the set of points in which a given variable remains constant. The nullclines partition the phase space into areas where different dynamics occur. The shape and position of the nullclines depend mostly on the scaling parameters. In particular, Θth defines the minimum of the v-nullcline, Π defines the position

of the left intersection between the nullclines (a fixed point), and the slope of the left branch of the v-nullcline is proportional to gl. On the other hand, the bifurcation parameters a, b, τw and vr can change


Figure 2.2: Left) Phase space (v, w) of the AeIF model for an arbitrary parameter set (in particular, i = 0). The v-nullcline and w-nullcline are also reported, in orange and green respectively. The minimum of the v-nullcline (blue point) is defined by Θth and it partitions the phase space into two sections: the blue section is excluded from the dynamics because of the reset process for spike emission. One fixed point (orange point) is always in the region of evolution (left side of the threshold) while the other is always outside (right side of the threshold) until a bifurcation happens. Black lines indicate the direction field of the dynamical evolution, and the length of the arrows is proportional to the "velocity" of the evolution in phase space. Right) A trajectory in the phase space for a fixed value of the control parameter i. In this case the system loses stability via an Andronov-Hopf bifurcation: as the stable fixed point moves, it can lose stability if the slope of the w-nullcline is sufficiently high. The evolution of the system brings the trajectory across the threshold four times at a slightly decreasing rate. Every time this happens a spike is added to the neural activity as a function of time, as described graphically on the right.

τw and a change the slope of the w-nullcline and modulate the way the system loses stability. To

study the linear stability of the fixed points, we write down the Jacobian matrix of the system

J = \begin{pmatrix} \frac{f'(v)}{\tau_m} & -\frac{1}{\tau_m} \\ \frac{a}{\tau_w} & -\frac{1}{\tau_w} \end{pmatrix} \qquad (2.6)

and the fixed points of the neural model satisfy

f(v) - \frac{w}{g_l} = 0, \qquad w = a(v - \Pi) \qquad (2.7)

The stable fixed point is always on the allowed side of the phase space, while the unstable one is always on the other side of the threshold partition. The right plot of Fig 2.2 shows a particular trajectory of the system evolving in the phase space. This stresses further the relation between sub- and supra-threshold dynamics. The state variables evolve in time, drawing trajectories in a partitioned phase space. When they reach the threshold boundary, the supra-threshold dynamics is updated and a point process (spike) is added.
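The fixed-point conditions and the Jacobian above can also be checked numerically. The sketch below brackets the two roots with a plain bisection and evaluates the eigenvalues at each; the parameter values are illustrative placeholders, and the 1/g_l factor is kept in the Jacobian entry J[0,1] for dimensional consistency with Eq (2.1):

```python
import numpy as np

# Numerical linear-stability check of the AeIF fixed points, Eqs (2.6)-(2.7).
# Parameters are illustrative placeholders; the 1/g_l factor is kept in
# J[0,1] for dimensional consistency with Eq (2.1).

tau_m, tau_w = 20.0, 100.0
g_l, Pi = 10.0, -70.0
Delta, theta = 2.0, -50.0
a, i0 = 2.0, 0.0            # no injected current: two fixed points exist

f = lambda v: -(v - Pi) + Delta * np.exp((v - theta) / Delta)
df = lambda v: -1.0 + np.exp((v - theta) / Delta)
# fixed points: f(v) - w/g_l + i0/g_l = 0 with w = a (v - Pi)
h = lambda v: f(v) - a * (v - Pi) / g_l + i0 / g_l

def bisect(fn, lo, hi, n=200):
    # plain bisection, assuming fn changes sign on [lo, hi]
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if fn(lo) * fn(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

v_min = theta + Delta * np.log(1.0 + a / g_l)   # minimum of h, where h'(v) = 0
v_stable = bisect(h, -120.0, v_min)             # root left of the minimum
v_unstable = bisect(h, v_min, 0.0)              # root right of the minimum

def eig_real(v):
    J = np.array([[df(v) / tau_m, -1.0 / (g_l * tau_m)],
                  [a / tau_w,     -1.0 / tau_w]])
    return np.linalg.eigvals(J).real

print(v_stable, eig_real(v_stable).max())       # negative real parts: stable
print(v_unstable, eig_real(v_unstable).max())   # positive real part: unstable
```

For these placeholder values the left fixed point has eigenvalues with negative real parts while the right one is a saddle, matching the stable/unstable pair discussed above.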

The AeIF model belongs to the family of nonlinear Integrate-and-Fire models whose v-map respects certain regularity assumptions. In particular, f(v) is

1. three times continuously differentiable,

2. strictly convex,

3. such that f'(v) satisfies the conditions

\lim_{v \to -\infty} f'(v) \le 0, \qquad \lim_{v \to +\infty} f'(v) = +\infty \qquad (2.8)


Figure 2.3: Left) Vertical shift of the v-nullcline in the phase space (v, w) for increasing values of the control parameter i. The fixed points change their position for higher values of i until the stable and unstable points merge approximately at v = Θth. Right) Related (saddle-node) bifurcation diagram as a function of the control parameter i. The red line indicates the values of the stable fixed point while the black line indicates the values of the unstable fixed point. They merge at the threshold value, as discussed.

and moreover the second derivative f''(v) never vanishes because of the exponential term. In this situation the function h(v) = f(v) − a(v − Π) also satisfies the previous assumptions for every a ∈ R+. Therefore h(v) has a unique minimum m(a), which is reached at the point v*(a) where f'(v*(a)) = a. The dynamics of this two-dimensional system is easily described using dynamical system theory and looking for trajectories in the phase space. Excitability, as explained in the introduction, is a qualitative change in dynamics caused by some parameter change influencing the system. With regard to neurons, an external parameter for excitability is the external current i (in general it can be a function of the time t, of the membrane potential v of the post-synaptic neuron, and so on), which depolarizes the neural membrane and potentially triggers a spike. From Eq (2.1) and Eq (2.2) we can write

\tau_m \frac{d}{dt} v(t) = f(v) - \frac{w(t)}{g_l} + \frac{i(t, v, ...)}{g_l} \qquad (2.9)

\tau_w \frac{d}{dt} w(t) = g(v) - w(t) \qquad (2.10)

In general one could consider external parameters acting on the state variable w(t) or on both variables. Adding the external parameter, we find for the fixed points:

f(v) - \frac{w}{g_l} + \frac{i}{g_l} = 0, \qquad w = a(v - \Pi) \qquad (2.11)

It is possible to prove that, if f(v) satisfies the three assumptions described before, the minimum m(a) and the control parameter i can trigger different kinds of bifurcations, i.e. qualitative changes of the dynamical properties of the system. For the particular case of the AeIF model, the system typically loses stability via a saddle-node or an Andronov-Hopf bifurcation. For very particular parameter values the system can also show a Bogdanov-Takens bifurcation or a Bautin bifurcation with different saddle-node limit cycles. Different kinds of bifurcation mean different firing patterns and thus different computational properties of the system. Considering an applied current as in Eq (2.9), in the case of a constant parameter i ≡ i0, involves a vertical shift of the v-nullcline without changing its shape. Therefore this will

lead to the qualitative change of stability of the fixed points mentioned before.

The left panel of Fig 2.3 shows this vertical shifting of the v-nullcline. The larger the applied constant current, the larger the shift, until the fixed points qualitatively change their properties and a bifurcation occurs. If a/gl < τm/τw then the system undergoes a saddle-node bifurcation, i.e.

the two fixed points merge at the threshold and disappear. When the fixed points disappear, the vector field is almost null around the former fixed point (the ghost of the fixed point). Since the vector field can be arbitrarily small close to the bifurcation, the trajectory can be trapped for an arbitrarily long time in the ghost of the fixed point, so that the firing rate can be arbitrarily small when i is close to the threshold. This capability to encode weak stimuli with very low firing rates is typical of Class I excitable cells. If a/gl > τm/τw the neural

system undergoes a subcritical Andronov-Hopf bifurcation, meaning that the stable fixed point becomes unstable and small-amplitude limit cycles can emerge. This generally implies that the model has type II excitability, that is, the current-frequency curve is discontinuous at threshold. For a more detailed bifurcation analysis of the AeIF model we refer to [64]. The right panel of Fig 2.3 shows the bifurcation diagram as a function of the control parameter. The red line indicates the stable fixed point while the black line indicates the unstable one. For increasing values of i they approach each other until they merge at the threshold and a saddle-node bifurcation occurs.
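The two excitability classes can be probed numerically by measuring the firing rate under constant injected currents (the current-frequency curve). The sketch below uses illustrative placeholder parameters satisfying a/g_l < τ_m/τ_w, i.e. the saddle-node (Class I) regime, where the rate is expected to grow continuously from zero above rheobase; it is a minimal Euler scheme, not the thesis implementation:

```python
import numpy as np

# Sketch of an f-I curve measurement: constant currents i0 of increasing
# amplitude are injected into an AeIF neuron and the firing rate is counted.
# Parameters are illustrative placeholders with a/g_l < tau_m/tau_w, so the
# model is in the saddle-node (Class I) regime.

tau_m, tau_w, g_l, Pi = 20.0, 100.0, 10.0, -70.0
Delta, theta, a, b = 2.0, -50.0, 1.0, 20.0
v_r = -58.0
dt, T = 0.05, 2000.0   # integration step and duration (ms)

def rate(i0):
    v, w, n = Pi, 0.0, 0
    for k in range(int(T / dt)):
        v += dt / tau_m * (-(v - Pi) + Delta * np.exp((v - theta) / Delta)
                           - w / g_l + i0 / g_l)
        w += dt / tau_w * (a * (v - Pi) - w)
        if v > theta + 5.0 * Delta:       # numerical spike condition
            v, w, n = v_r, w + b, n + 1   # reset and spike-triggered adaptation
    return 1000.0 * n / T                 # firing rate in spikes per second

currents = [100.0, 200.0, 300.0, 400.0]   # pA; rheobase is ~200 pA here
rates = [rate(i) for i in currents]
print(rates)
```

Below rheobase the neuron settles onto the stable fixed point and the rate is zero; above it the rate rises smoothly with i0, the continuous f-I curve characteristic of Class I excitability.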

Figure 2.4: Time evolution of the state variables v(t) and w(t) under the influence of the complex external parameter i(v, t) defined in equation (2.13). The parameter set is based on typical electrophysiological values of cortical neurons. The state variable v(t) fluctuates following the external parameter and occasionally emits a spike in an irregular fashion. The second state variable w(t) is not able to follow the membrane fluctuations: it integrates the v(t) dynamics into a slowly increasing value between the discrete jumps of spike-triggered adaptation. The lower plot shows the time evolution of the external parameter as a sum of decaying exponentials. The spike jump to vmax = −10 mV is artificially added for a better visualization of a physiological spike.

A common practice in electrophysiological experiments is to inject constant step currents (typically in vitro) and measure the neural activity. From those measurements one can obtain the characteristic f-I curve, i.e. the output firing rate of the system as a function of the injected current. This means considering an external parameter i(t) with a simple

time dependence of the type

i(t) = i_0 \, \Theta_{[t_i, t_f]}(t) \qquad (2.12)

where Θ is the Heaviside step function. We will use this when we characterize the behavior of cortical and thalamic neurons with specified parameter sets. On the other hand, more interesting phenomena happen when a time-dependent, or even v-dependent, external parameter is considered. In fact, the AeIF model of Eqs (2.1) and (2.2) is defined by two characteristic times τm and τw, which can in general be (and mostly are) of different orders of

magnitude. For instance, linearized adaptation is typically a slower process with respect to the spike emission mechanism. Furthermore, the two state variables operate in an external environment with yet other timescales, creating a rich dynamical scenario. The role of noise is crucial because it can lead to phenomena such as stochastic resonance or coherence resonance [77, 78]. Figure 2.4 shows an example: the time evolution of the state variables under the influence of a complex external parameter i(v, t), defined by

i(v, t) = K \, v(t) \, s_\nu(t) \qquad (2.13)

\tau_\nu \frac{d}{dt} \nu(t) = -[\nu(t) - \bar{\nu}] + \sigma_\nu \sqrt{2 \tau_\nu} \, \eta(t) \qquad (2.14)

where s_ν(t) mimics the time evolution of a synapse described by the single-exponential model of Eq (1.16). The neuron receives pre-synaptic inputs modeled as a Poisson spike train ρ(t), as in Eq (1.10), with time-dependent rate ν(t) given by Eq (2.14). This stochastic process is the Ornstein-Uhlenbeck process described before in Eq (1.18), i.e. a stochastic noise with a finite correlation time τ_ν. The resulting synaptic current i(v, t) (the external parameter) is computed as a function of the synaptic time evolution s_ν(t) and of the state variable v(t) of the receiving neuron. The resulting dynamics of i(v, t) is plotted in the lower panel of Figure 2.4. This kind of external parameter mimics the typical physiological regime in which neurons operate, the so-called synaptic bombardment. The Poisson-driven synaptic current i(t, v) fluctuates around a mean value and can occasionally make the state variable v(t) cross the threshold boundary. This is just an example showing the typical dynamical evolution of the state variables (v(t), w(t)) under a simulated synaptic bombardment. We refer to the next section for a detailed description of the synaptic models used in our network model.
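A minimal sketch of the input generation of Eqs (2.13)-(2.14): an Ornstein-Uhlenbeck rate ν(t) drives an inhomogeneous Poisson spike train, which is filtered by a single-exponential synaptic variable s_ν(t). The values of ν̄, σ_ν, τ_ν and τ_s below are illustrative placeholders, and the multiplicative factor K v(t) of Eq (2.13) is omitted, showing only the synaptic drive:

```python
import numpy as np

# Sketch of the synaptic-bombardment input of Eqs (2.13)-(2.14): an
# Ornstein-Uhlenbeck rate nu(t) drives an inhomogeneous Poisson spike train,
# filtered by a single-exponential synapse s_nu(t). All parameter values are
# illustrative placeholders; the factor K v(t) of Eq (2.13) is omitted.

rng = np.random.default_rng(1)
dt, T = 0.1, 2000.0                        # time step and duration (ms)
steps = int(T / dt)
tau_nu, nu_bar, sigma_nu = 16.0, 2.0, 0.5  # OU parameters (rate in spikes/ms)
tau_s = 5.0                                # synaptic decay time (ms)

nu, s = nu_bar, 0.0
s_trace = np.empty(steps)
n_spikes = 0

for k in range(steps):
    # Euler-Maruyama discretization of the Ornstein-Uhlenbeck rate, Eq (2.14)
    nu += dt / tau_nu * (nu_bar - nu) \
          + sigma_nu * np.sqrt(2.0 * dt / tau_nu) * rng.standard_normal()
    # inhomogeneous Poisson spiking: probability nu*dt per bin (rate rectified)
    spike = rng.random() < max(nu, 0.0) * dt
    n_spikes += spike
    # single-exponential synaptic variable, as in Eq (1.16)
    s += -dt / tau_s * s + (1.0 if spike else 0.0)
    s_trace[k] = s

print(n_spikes, round(float(s_trace.mean()), 2))
```

In the stationary regime the synaptic variable fluctuates around ν̄·τ_s, the mean-plus-fluctuations picture of synaptic bombardment described above.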

In the mammalian brain there is a large number of different cell types. Neuroscientists study their differences to group them into populations by anatomical and electrophysiological properties. For instance, in the cerebral cortex one can find molecular, extragranular, ganglionic, or multiform neural cells. On the other hand, thalamic networks are composed of excitatory relay nuclei as well as inhibitory reticular nuclei or interneurons. Every group is related to particular computational abilities of neurons and to functional roles in networks [70].

In the framework of spiking network models it is computationally impossible to include all the different kinds of neurons found in a certain brain region. A common practice is instead to decrease the number of different neuron populations until a sufficiently detailed description of a particular phenomenon is reached. Different bifurcations mean different firing patterns and different excitability schemes [71]. We will focus on this in the next paragraphs, where we describe the particular parameter sets we use for modeling cortical and thalamic neurons. Particular choices of neuron parameters are linked also to the network phenomena to be described. Therefore, the motivations for each parameter set will be explained in detail in the next paragraphs.


CHAPTER 2. MODEL

Cortical Neurons

We consider two different types of neurons in our cortical spiking network: excitatory pyramidal neurons (PY) and inhibitory interneurons (INT). The former have a pyramidal-shaped cell body (soma) and two distinct dendritic trees. They are common in forebrain structures, and in particular are the most numerous excitatory cell type in mammalian cortical networks [72]. We consider pyramidal neurons as the excitatory population of the cortical network. Interneurons, instead, are cells of various types whose axons and dendrites are confined to a given cortical area. For this reason they are typically modeled as local inhibitory populations related to local ensemble processing of information. We consider interneurons as the local inhibitory population of the cortical network.

We model the excitatory pyramidal neurons as cortical regular spiking (RS) neurons, i.e. neurons with a tonic firing response which exhibit a certain level of adaptation to sustained input. These features are typically found in cortical pyramidal neurons and play a role in the computation of low-frequency inputs [73]. Typical regular spiking neurons show Class I excitability, which is mostly related to a loss of stability via a saddle-node bifurcation.

On the other hand, we model the inhibitory interneuron populations as cortical fast spiking (FS) neurons, i.e. neurons with a tonic firing response and negligible adaptation. They typically show Class II excitability and a rather steep f-I curve. This difference in dynamical characteristic times is commonly found in cortical slices and lies at the heart of cortical oscillatory phenomena.

Figure 2.5: Time evolution of state variables v(t) and w(t) for the pyramidal parameter set (A) and the interneuron parameter set (B). The time evolution is triggered by the control parameter i: we mimic the typical step-current injection of electrophysiological experiments. The pyramidal neuron exhibits a slower dynamics, given by higher adaptation values and longer characteristic times with respect to interneurons. The two parameter sets mimic the typical regular spiking (RS) and fast spiking (FS) behavior of cortical neurons. The different behavior of w(t) for the two parameter sets is also visible.
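The dynamics shown in Figure 2.5 can be reproduced in sketch form with a simple Euler integration of the AdEx (AeIF) equations, with a step current between ti = 100 ms and tf = 1000 ms. The parameter values below are generic illustrative choices (regular spiking with adaptation vs. fast spiking without), not the exact sets adopted in our model.

```python
import numpy as np

def adex_spikes(i_ext, a, b, tau_w, dt=0.1, T=1200.0):
    """Euler integration of an AdEx neuron; i_ext is a function of t [ms].
    Units: pF, nS, mV, pA, ms. Returns the spike times [ms]."""
    C, gL, EL = 200.0, 10.0, -70.0
    DT, vT, vr, vpeak = 2.0, -50.0, -58.0, 0.0
    v, w, spikes = EL, 0.0, []
    for k in range(int(T / dt)):
        t = k * dt
        dv = (-gL * (v - EL) + gL * DT * np.exp((v - vT) / DT) - w + i_ext(t)) / C
        dw = (a * (v - EL) - w) / tau_w
        v += dt * dv
        w += dt * dw
        if v >= vpeak:            # spike boundary: reset v, jump of adaptation w
            v = vr
            w += b
            spikes.append(t)
    return np.array(spikes)

# Step current between ti = 100 ms and tf = 1000 ms (illustrative amplitude)
step = lambda t: 500.0 if 100.0 <= t <= 1000.0 else 0.0
rs = adex_spikes(step, a=2.0, b=60.0, tau_w=300.0)  # regular spiking: strong adaptation
fs = adex_spikes(step, a=0.0, b=0.0, tau_w=30.0)    # fast spiking: no adaptation
```

With these sets the FS neuron fires at a visibly higher rate than the RS one, reproducing the qualitative difference shown in Figure 2.5.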

Figure 2.5 shows the time evolution of the state variables for the cortical parameter sets we considered. We choose the cortical parameters following recent works in the computational neuroscience literature [66, 73, 74], in order to describe the specific neural phenomena we are interested in. Panel A and Panel B show the pyramidal parameter set and the interneuron parameter set, respectively. These particular time traces are triggered by a constant step current as in Eq (2.12), with ti = 100 ms and tf = 1000 ms. Different levels of adaptation lead to different neural behavior.


Figure 2.6: Rate of spike emission ν (number of spikes per second, spk/s) as a function of the control parameter i. Panel A shows the pyramidal-neuron firing rate, panel B the interneuron one. With respect to the pyramidal neuron model, interneurons lose stability at lower values of the control parameter. Furthermore, they start firing at higher rates, in agreement with fast spiking (FS) characteristics. Both panels show several curves: darker green curves correspond to models with higher adaptation parameters a and b. Higher values of adaptation lead to a qualitative change of the curve, with lower firing rates.

Figure 2.6 shows a diagram of the supra-threshold dynamics as a function of the parameter i. There is a threshold value of the control parameter (the rheobase current, in electrophysiological language) at which the neuron starts firing, i.e. the sub-threshold dynamics starts to flow through the spike boundary. The supra-threshold observable ν is the mean firing rate as defined in Eq (1.11). These diagrams are strictly related to electrophysiological f-I curve measurements of neural activity. We plot the bifurcation diagram for the pyramidal neuron (A) and for the interneuron (B). As one would expect, interneurons exhibit a steeper curve than pyramidal neurons. For both models we report four curves corresponding to four different adaptation sets, as explained in the caption of Fig 2.6.
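The f-I diagrams of Figure 2.6 can be sketched by counting spikes under a constant current for each value of i, mirroring the experimental f-I protocol. The parameter values are again illustrative assumptions; the point is that a stronger adaptation set (larger a and b) yields a lower curve.

```python
import numpy as np

def adex_rate(i_amp, a, b, dt=0.1, T=1000.0):
    """Mean firing rate [Hz] of an AdEx neuron under a constant current
    i_amp [pA]. Illustrative parameters, not the exact sets of the model."""
    C, gL, EL = 200.0, 10.0, -70.0
    DT, vT, vr, vpeak, tau_w = 2.0, -50.0, -58.0, 0.0, 300.0
    v, w, n_spk = EL, 0.0, 0
    for k in range(int(T / dt)):
        dv = (-gL * (v - EL) + gL * DT * np.exp((v - vT) / DT) - w + i_amp) / C
        dw = (a * (v - EL) - w) / tau_w
        v += dt * dv
        w += dt * dw
        if v >= vpeak:                       # spike: reset and adaptation jump
            v, w, n_spk = vr, w + b, n_spk + 1
    return n_spk / (T / 1000.0)

# f-I curves for two adaptation sets: stronger adaptation lowers the curve
currents = np.arange(200.0, 501.0, 50.0)     # swept control parameter i [pA]
weak = [adex_rate(i, a=2.0, b=20.0) for i in currents]
strong = [adex_rate(i, a=4.0, b=80.0) for i in currents]
```

Sub-rheobase currents give zero rate, so the curve also exposes the rheobase value directly.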

Thalamic Neurons

We consider two different types of neurons for our thalamic spiking network: excitatory thalamocortical relay neurons (TC) and inhibitory reticular neurons (RE). The former are the main kind of excitatory cells in the thalamus and have the functional role of information transmission, innervating distinct areas of the cortex via thalamocortical fibers. The latter are local inhibitory cells whose functional role seems to be the regulation and modulation of thalamic activity by external influences [75]. We therefore consider RE neurons as the inhibitory population of the thalamic circuitry. We stress that thalamic nuclei need a different treatment from cortical cells because of their particular dynamical properties. As introduced in the first chapter, we refer to the work of Barardi et al. [55] for our modeling of the thalamic system. In this paper they present a phenomenological AeIF model for single thalamic cell dynamics, using proper parameter sets to describe the peculiar dynamical phenomena of thalamocortical relay and reticular nuclei. In particular, phenomena such as post-inhibitory rebound spikes and small-group oscillatory activities are widely observed in thalamic circuits [79, 81]. For our aim it is crucial to accurately describe thalamic oscillations in order to study their functional effects on communication with the cortex [80]. Following [55], we first consider two different dynamical regimes of thalamic nucleus activity. In particular, these cells can exhibit tonic firing with adaptation (similarly to cortical


neurons) in response to depolarizing inputs [82], while they show post-inhibitory rebound bursts in response to steep hyperpolarizing inputs. The latter phenomenon happens if the state variable w(t) becomes negative due to hyperpolarizing input and v(t) then suddenly goes back to values higher than Π (the hyperpolarizing input suddenly ends). The state variable w(t) acts as a positive feedback on v(t), triggering a small group of spikes in a very short interval of time (the so-called burst firing) until it reaches positive values again. Time traces for both dynamical regimes are shown in Fig 2.7.

Figure 2.7: Time evolution of state variables v(t) and w(t) for the thalamocortical relay parameter set (A,C) and the reticular parameter set (B,D). The time evolution is triggered by the control parameter i: as before, we mimic the typical step-current injection of electrophysiological experiments. The upper plots show the tonic firing regime, with different levels of adaptation for the two neurons. Note the different evolution of w(t) in the two cases. The lower plots show the hyperpolarization burst regime: fast spiking activity is triggered for sufficiently negative values of w(t).
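The rebound-burst mechanism described above can be sketched with the same AdEx scheme: a long hyperpolarizing step drives w(t) strongly negative, and on release the negative adaptation current acts as positive feedback on v(t), pushing it over threshold and producing a short burst. The parameter values below (in particular the reset above threshold and the large a) are illustrative thalamic-like assumptions, not the set of [55].

```python
import numpy as np

# Post-inhibitory rebound burst in an AdEx neuron (illustrative parameters).
C, gL, EL = 200.0, 10.0, -70.0              # pF, nS, mV
DT, vT, vpeak = 2.0, -50.0, 0.0             # mV
vr, a, b, tau_w = -46.0, 40.0, 50.0, 300.0  # reset above vT sustains the burst
dt, T = 0.1, 1200.0                         # ms
t_on, t_off, i_hyp = 100.0, 700.0, -500.0   # hyperpolarizing step [pA]

v, w = EL, 0.0
spikes = []
for k in range(int(T / dt)):
    t = k * dt
    i_ext = i_hyp if t_on <= t <= t_off else 0.0
    dv = (-gL * (v - EL) + gL * DT * np.exp((v - vT) / DT) - w + i_ext) / C
    dw = (a * (v - EL) - w) / tau_w
    v += dt * dv
    w += dt * dw
    if v >= vpeak:        # spike: reset above vT while w(t) is still negative
        v, w = vr, w + b
        spikes.append(t)

burst = [t for t in spikes if t > t_off]    # rebound spikes appear after release
```

The burst terminates once the per-spike increments b bring w(t) back up: the adaptation variable then suppresses further firing and the neuron relaxes to rest.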

Burst oscillations are a prominent feature of thalamic dynamics. For decades they have been considered the hallmark activity of the sleep state, that is, of the thalamocortical system during sleep or anesthesia (essential absence of sensory input to the system). This single-cell dynamics influences the overall thalamic network activity and leads to the so-called spindle oscillations. During slow-wave sleep, thalamic networks display collective spindle oscillations (7-15 Hz) which are independent of external stimuli.
