Evolutionary Games on Networks: from Brain Connectivity to Social Behavior
Chiara Mocenni
Dept. of Information Engineering and Mathematics, University of Siena (IT) (mocenni@dii.unisi.it)
In collaboration with D. Madeo, J. Moraes, J. P. Zubelli, E.
Santarnecchi, N. Bobko, A. Talarico, S. Mattera
IMPA, Rio de Janeiro (Brazil), February 25, 2016
Selection and Evolution in Homogeneous and Inhomogeneous Populations
Homogeneous case (J. Weibull, 1995)
Infinite population of phenotype-driven individuals
Individuals transfer their phenotype to their offspring
The share of the phenotype with optimal payoff increases: natural selection

$\dot{x}_s = x_s (p_s - \phi)$

Inhomogeneous case (D. Madeo and CM, 2015)
Finite population of individuals
Players are not phenotype-driven: mixed strategies are allowed
The population structure is described by a graph

$\dot{x}_{v,s} = x_{v,s} \left( p^G_{v,s} - \phi^G_v \right)$
The two models: a comparison
(Selection mechanism, initial conditions, payoff function, adjacency matrix)

Standard case
$\dot{x}_s = x_s (p_s - \phi)$, with $x_s(0) = c_s$
$p_s(x) = e_s^T B x$, $\phi(x) = x^T B x$
$x = [x_1 \ldots x_M]^T \in \Delta_M$

Networked case (EGN)
$\dot{x}_{v,s} = x_{v,s} \left( p^G_{v,s} - \phi^G_v \right)$, with $x_{v,s}(0) = c_{v,s}$
$p^G_{v,s}(X) = e_s^T B_v k_v(X)$, $\phi^G_v(X) = x_v^T B_v k_v(X)$
$k_v(X) = \sum_{w=1}^{N} a_{vw} x_w$, $X = \{x_1, \ldots, x_N\}$
$x_v = [x_{v,1} \ldots x_{v,M}]^T \in \Delta_M$

The term $k_v(X)$ represents the "average player" seen by node $v$.
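To make the networked equation concrete, here is a minimal simulation sketch (not from the talk; NumPy-based, with illustrative names `egn_rhs` and `simulate`). It assumes the payoff matrices are stacked as `B[v]` and uses forward-Euler integration:

```python
import numpy as np

def egn_rhs(X, A, B):
    """EGN right-hand side: x_v,s' = x_v,s (e_s^T B_v k_v(X) - x_v^T B_v k_v(X)).
    X: (N, M) strategy shares, A: (N, N) adjacency, B: (N, M, M) payoff matrices."""
    K = A @ X                                    # average players k_v(X), shape (N, M)
    P = np.einsum('vst,vt->vs', B, K)            # payoffs p_v,s = (B_v k_v(X))_s
    phi = np.sum(X * P, axis=1, keepdims=True)   # phi_v = x_v^T B_v k_v(X)
    return X * (P - phi)

def simulate(X0, A, B, dt=0.01, steps=10000):
    """Forward-Euler integration (dt small enough to keep shares in [0, 1];
    the row sums are preserved exactly, so each x_v stays on the simplex)."""
    X = X0.copy()
    for _ in range(steps):
        X = X + dt * egn_rhs(X, A, B)
    return X
```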
What kind of phenomena can be explained by the extended equation?
In the new framework selection depends not only on the payoff matrix, but also on social interactions
Initial Conditions set the system configuration at the initial time, including the extinction of strategies ($x_{v,s}(0) = 0$ for some $v$)
Payoff Matrix describes the internal decision mechanism of individuals, e.g. their propensity to cooperate or not with the opponent, the presence of dominant strategies, indifference, etc.
Moreover, it follows from the EGN structure that the preference ordering encoded in the payoff matrix may be changed by the entries of the adjacency matrix (the entries need not be positive)
The (N, 2) equation
If only two strategies are considered, then $x_{v,2} = 1 - x_{v,1}$ and $x_v = [x_{v,1} \;\; 1 - x_{v,1}]$. Setting $y_v = x_{v,1}$ and $y = [y_1 \ldots y_N]$:

$\dot{y}_v = y_v (1 - y_v) f_v(y)$

$f_v(y) = \sigma_{v,1} k_{v,1}(y) - \sigma_{v,2} k_{v,2}(y)$

$B_v = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$, $\sigma_{v,1} = b_{11} - b_{21}$, $\sigma_{v,2} = b_{22} - b_{12}$

$k_{v,1}(y) = \sum_{w=1}^{N} a_{vw} y_w$, $k_{v,2}(y) = \sum_{w=1}^{N} a_{vw} (1 - y_w)$
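The same reduction in code, as a sketch under the assumptions above (NumPy; $\sigma$ values may be scalars shared by all nodes or per-node arrays):

```python
import numpy as np

def sigmas(Bv):
    """sigma_v,1 = b11 - b21 and sigma_v,2 = b22 - b12 for a 2x2 payoff matrix."""
    return Bv[0, 0] - Bv[1, 0], Bv[1, 1] - Bv[0, 1]

def f(y, A, s1, s2):
    """f_v(y) = sigma_v,1 k_v,1(y) - sigma_v,2 k_v,2(y), where
    k_v,1(y) = sum_w a_vw y_w and k_v,2(y) = sum_w a_vw (1 - y_w)."""
    return s1 * (A @ y) - s2 * (A @ (1.0 - y))

def ydot(y, A, s1, s2):
    """Reduced (N, 2) dynamics: y_v' = y_v (1 - y_v) f_v(y)."""
    return y * (1.0 - y) * f(y, A, s1, s2)
```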
Three sets of steady states
$\Theta^* = \left\{ y^* \in [0,1]^N : \forall v \; \left( y^*_v = 0 \,\vee\, y^*_v = 1 \,\vee\, f_v(y^*) = 0 \right) \right\}$

1. Pure steady states: $\Theta_p = \{0,1\}^N \subseteq \Theta^*$
2. Interior (mixed) steady states: $\Theta_m = (0,1)^N \cap \Theta^*$
3. Pure/mixed steady states: $\Theta_{pm} = \Theta^* \setminus (\Theta_p \cup \Theta_m)$
Feasibility and stability of mixed steady states for generic adjacency matrices
Theorem
Suppose $\sigma_{v,1} = \sigma_1$ and $\sigma_{v,2} = \sigma_2$ for all $v$, with $\operatorname{sign}(\sigma_1) = \operatorname{sign}(\sigma_2) \neq 0$.
Then there always exists a steady state $y^* \in \Theta_m$ such that, for all $v$,

$y^*_v = \dfrac{\sigma_2}{\sigma_1 + \sigma_2}$

Moreover,

$\lambda(J(y^*)) = \dfrac{\sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \, \lambda(A)$
The steady state depends on the payoff matrix, while its
stability depends mainly on the graph
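The eigenvalue relation is easy to check numerically. A toy verification (my own sketch, assuming a symmetric random adjacency so the spectrum is real; since $f(y^*) = 0$, the Jacobian at $y^*$ reduces to a scalar multiple of $A$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, s1, s2 = 8, 0.5, 1.0                    # shared sigmas with equal sign
A = rng.random((N, N)); A = (A + A.T) / 2  # symmetric => real eigenvalues
np.fill_diagonal(A, 0.0)

y_star = np.full(N, s2 / (s1 + s2))        # the interior steady state

# At y*, f(y*) = 0, so only the derivative of f survives:
# J = diag(y* (1 - y*)) (s1 + s2) A = (s1 s2 / (s1 + s2)) A.
J = np.diag(y_star * (1 - y_star)) @ ((s1 + s2) * A)

print(np.linalg.eigvalsh(J) / np.linalg.eigvalsh(A))
# every ratio equals s1*s2/(s1+s2), i.e. lambda(J(y*)) = s1*s2/(s1+s2) * lambda(A)
```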
From connected to reduced connectivity adjacency matrices
In the previous results we assumed some conditions on the payoff matrices (all nodes share the same $\sigma$s, with the same sign) and made no assumption on the adjacency matrix
Does the feasibility of internal steady states depend on the graph's structure? How can we weaken a central node?
We take a fully connected graph, choose one specific node, and iteratively delete all links from this node
What is the effect of this procedure on the mixed steady state? (D. Madeo, CM, J. C. Moraes, J. P. Zubelli, submitted 2015)
Mixed steady states with reduced connectivity
For each node and for each link removal, we find lower and upper bounds on the ratio $d_v = \dfrac{\sigma_{v,2}}{\sigma_{v,1} + \sigma_{v,2}}$ that must hold for the mixed steady state to be feasible
These bounds depend on the average of the $d_v$s of the connected nodes
For large networks ($N \to \infty$), the only possibility for the steady state to be interior is that all nodes share the same $d_v$
Moreover, the average of the steady state components corresponds to the average of the $d_v$s
Asymptotic behavior of a network with 50 nodes
Asymptotic behavior of the solution for nodes 1, 20, 30 and 40. Homogeneous initial conditions of 0.6 are set for all nodes, except for node 1, where $y_1(0) \in [0, 1]$.
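The plot itself is not reproduced here, but an experiment of this kind is straightforward to re-run. A sketch under my own illustrative choices (fully connected graph, $\sigma_1 = \sigma_2 = 1$, forward Euler):

```python
import numpy as np

N, s1, s2 = 50, 1.0, 1.0
A = np.ones((N, N)) - np.eye(N)          # fully connected, no self-loops

def run(y0, dt=0.005, steps=20000):
    y = y0.copy()
    for _ in range(steps):
        y += dt * y * (1 - y) * (A @ ((s1 + s2) * y - s2))  # y_v' = y_v(1-y_v) f_v(y)
    return y

for y1_0 in np.linspace(0.0, 1.0, 5):
    y0 = np.full(N, 0.6)
    y0[0] = y1_0                         # sweep node 1's initial condition
    yT = run(y0)
    print(f"y1(0) = {y1_0:.2f}  ->  y(T) ~ {yT[:3].round(3)}")
```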
Modeling brain connectivity by EGN equation
Brain connectivity refers to patterns of connectivity between distinct units within a nervous system
The units correspond to individual neurons, neuronal populations, or anatomically segregated brain regions
The connectivity can be:
Anatomical (anatomical links)
Functional (statistical dependencies)
Effective (causal interactions)
fMRI data for brain connectivity
The network of connections can be reconstructed by means of functional magnetic resonance imaging (fMRI)
fMRI data consist of spatio-temporal measures of the oxygen consumption in unitary volumes (voxels) of the brain
Network reconstruction
Nodes of the network are anatomical regions of the brain
Edges are obtained by calculating correlation coefficients (functional connectivity) or Granger causality (effective connectivity) between the time courses recorded by the fMRI in two nodes (usually 7-10 minute experiments)
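As an illustration of the functional-connectivity step, a minimal sketch (my own; the function name and thresholding choice are illustrative, and signed weights are deliberately kept):

```python
import numpy as np

def functional_adjacency(Z, threshold=None):
    """Z: (N, T) array with one fMRI time course per region (node).
    Returns signed Pearson correlations as edge weights; negative
    a_vw are kept, since they are meaningful in the EGN model."""
    A = np.corrcoef(Z)              # (N, N) correlation matrix
    np.fill_diagonal(A, 0.0)        # no self-loops
    if threshold is not None:
        A[np.abs(A) < threshold] = 0.0  # optionally drop weak links
    return A

# e.g. 11 regions, ~7 minutes sampled every 2.3 s (about 182 samples)
Z = np.random.default_rng(1).standard_normal((11, 182))
A = functional_adjacency(Z, threshold=0.2)
```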
Why evolutionary games?
Brain behavior is regulated by activation and inhibition
With payoff matrix $B_v$, node $v$ is always activated by node $w$: strategy 1 is the optimal response to strategy 1 ($b_{11} > b_{21}$) and strategy 2 is the optimal response to strategy 2 ($b_{22} > b_{12}$)
However, negative edge weights can reverse the response
Negative $a_{vw}$ may produce oscillating behavior in the model!
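The oscillation claim can be seen in a two-node toy example (my sketch, with $\sigma_1 = \sigma_2 = 1$ and a single negative weight; RK4 keeps the closed orbit from drifting the way plain Euler would):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                # one negative edge weight

def rhs(y):
    return y * (1 - y) * (A @ (2 * y - 1))  # f_v(y) with sigma_1 = sigma_2 = 1

def rk4(y, dt):
    k1 = rhs(y); k2 = rhs(y + dt / 2 * k1)
    k3 = rhs(y + dt / 2 * k2); k4 = rhs(y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([0.6, 0.5])
for step in range(4001):
    if step % 800 == 0:
        print(step, y.round(3))             # cycles around (0.5, 0.5), never settles
    y = rk4(y, 0.05)
```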
Some examples of oscillations: comparison with real data
No optimization or inverse problems solved!
A real example
The network considers 11 anatomical regions (nodes)
In each node, a time series $z_t$ of 7 minutes, with a sampling time of 2.3 s, is recorded
Two strategies are available: activation and inhibition
Payoff matrix $B = \begin{bmatrix} 0.5 & -0.5 \\ -0.5 & 0.5 \end{bmatrix}$, hence $\sigma_1 = 1$ and $\sigma_2 = 1$ for all $v$
EGN equation:

$\dot{x}_v = x_v (1 - x_v) f_v(x) = x_v (1 - x_v) \left[ (\sigma_1 + \sigma_2) \sum_{w=1}^{N} a_{v,w} x_w - \sigma_2 \sum_{w=1}^{N} a_{v,w} \right]$
How to reconstruct the elements of A from the data points
The linear dependence of the system on the parameters $a_{v,w}$ suggests estimating $\theta = \{a_{v,w}\}$ by least-squares optimization on the basis of observations of $x_v$
Measurements of the variables $x_v$ are available at equispaced time instants: $z_{v,k} = x_v(t_k)$, with $t_k = k\tau_c$ for all $k = 0, \ldots, T-1$, $\tau_c > 0$
Considering the Euler discretization of EGN:

$x_{v,k+1} = x_{v,k} + \tau_c \, m_v(x_k)^\top \theta$

where $x_{v,k} = x_v(t_k)$, $x_k = [x_{1,k}, \ldots, x_{N,k}]^\top$, and $m_v(x) \in \mathbb{R}^{N^2}$ with components $m_{v,w} = x_v(t_k)(1 - x_v(t_k)) \left( (\sigma_1 + \sigma_2) x_w(t_k) - \sigma_2 \right)$, $t_k = k\tau_c$ and $\tau_c > 0$ for all $k = 0, \ldots, T-1$. Then we can write the discretized EGN in matrix form:

$Y(x) = U(x)\theta \qquad (1)$
The linear formulation of the problem
$Y(x) = \begin{bmatrix} x_{1,1} - x_{1,0} \\ x_{2,1} - x_{2,0} \\ \vdots \\ x_{N,1} - x_{N,0} \\ x_{1,2} - x_{1,1} \\ x_{2,2} - x_{2,1} \\ \vdots \\ x_{N,2} - x_{N,1} \\ \vdots \\ x_{1,T} - x_{1,T-1} \\ x_{2,T} - x_{2,T-1} \\ \vdots \\ x_{N,T} - x_{N,T-1} \end{bmatrix} \quad \text{and} \quad U(x) = \tau_c \begin{bmatrix} m_1(x_0)^\top \\ m_2(x_0)^\top \\ \vdots \\ m_N(x_0)^\top \\ m_1(x_1)^\top \\ m_2(x_1)^\top \\ \vdots \\ m_N(x_1)^\top \\ \vdots \\ m_1(x_{T-1})^\top \\ m_2(x_{T-1})^\top \\ \vdots \\ m_N(x_{T-1})^\top \end{bmatrix}$
Parameter estimation
We want to find the parameters $\theta$ which minimize the distance between the observations and the model
We can minimize the objective function:

$\dfrac{1}{NT} \left\| Y(z) - U(z)\theta \right\|^2 = \dfrac{1}{NT} \sum_{v=1}^{N} \sum_{k=0}^{T-1} \left( z_{v,k+1} - z_{v,k} - \tau_c \, m_v(z_k)^\top \theta \right)^2$
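Putting the pieces together, a sketch of the estimation step (my implementation, not the authors' code): since the equation for node $v$ involves only row $v$ of $A$, the global problem (1) decomposes into $N$ independent least-squares problems, one per row.

```python
import numpy as np

def estimate_A(Z, tau_c, s1, s2):
    """Estimate the adjacency matrix from samples z_v,k = x_v(t_k).
    Z: (N, T) observations, tau_c: sampling time, s1/s2: the sigmas.
    Row v of A is recovered from the Euler increments of node v alone."""
    N, T = Z.shape
    A_hat = np.zeros((N, N))
    for v in range(N):
        g = Z[v, :-1] * (1.0 - Z[v, :-1])                         # x_v (1 - x_v) at t_k
        U_v = tau_c * g[:, None] * ((s1 + s2) * Z[:, :-1].T - s2)  # rows tau_c m_v(z_k)^T
        Y_v = Z[v, 1:] - Z[v, :-1]                                # z_v,k+1 - z_v,k
        A_hat[v], *_ = np.linalg.lstsq(U_v, Y_v, rcond=None)
    return A_hat
```

With the payoff matrix of the real example ($\sigma_1 = \sigma_2 = 1$ and sampling time 2.3 s), `estimate_A(Z, 2.3, 1.0, 1.0)` would return the reconstructed $11 \times 11$ adjacency matrix.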