(i) To propose "good" models, namely biologically relevant and mathematically well-posed;
(ii) To analyse the dynamics of these models, with rigorous analytical results where possible and with good control of the numerical simulations;
(iii) To interpret and extrapolate these results and compare them to the behaviour of real neuronal networks.
This constitutes a real challenge, since neuronal networks are dynamical systems with a huge number of degrees of freedom and parameters, and a multi-scale organisation with complex interactions. For example, neuron dynamics depend on synaptic connections, while synapses evolve according to neuronal activity. Analysing this interwoven activity requires the development of new methods. Methods coming from theoretical physics and applied mathematics can be adapted to produce efficient analysis tools while providing useful concepts. I am developing such methods based on statistical physics (mean-field theory, Gibbs distribution analysis), dynamical systems theory (global stability, bifurcation analysis, characterisation of chaotic dynamics) and ergodic theory (symbolic coding, thermodynamic formalism). I believe that such an analysis is an important step towards the characterisation of in vitro or in vivo neuronal networks, from spatial scales corresponding to a few neurons to scales characterising e.g. cortical columns. My colleagues and I have characterised the dynamics of several firing-rate and spiking neuron models (see publication list below).
Spike train analysis. Neuronal activity results from complex and nonlinear mechanisms, leading to a wide variety of dynamical behaviours. This activity is revealed by the emission of action potentials, or ``spikes''. While the shape of an action potential is essentially always the same for a given neuron, the succession of spikes emitted by this neuron can display a wide variety of patterns (isolated spikes, periodic spiking, bursting, tonic spiking, tonic bursting, ...), depending on physiological parameters, but also on excitations coming either from other neurons or from external inputs. It therefore seems natural to consider spikes as ``information quanta'' or ``bits'' and to seek the information exchanged by neurons in the structure of spike trains. In doing so, one switches from a description of neurons in terms of membrane potential dynamics to a description in terms of spike trains. This point of view is adopted experimentally in the analysis of raster plots, where the activity of a neuron is represented by a vertical bar each time the neuron emits a spike. Though this change of description raises many questions, it is commonly admitted in the computational neuroscience community that spike trains constitute a ``neural code''. This raises further questions, however. How is ``information'' encoded in a spike train? How can one measure the information content of a spike train? As a matter of fact, a prerequisite to handling ``information'' in a spike train is the definition of a suitable probability distribution matching the empirical averages obtained from measurements, and there is currently a wide debate on the canonical form of these probabilities. We are developing methods for the characterisation of spike trains from empirical data. On the one hand, we have shown how Gibbs distributions (in a more general sense than in statistical mechanics; see e.g. refs 1, 3, 4 below) are natural candidates for spike train statistics. On the other hand, we are developing numerical methods for the characterisation of Gibbs distributions from experimental data (see the webpage http://enas.gforge.inria.fr/v3/).
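To make the moment-matching idea concrete, here is a minimal sketch, assuming a synthetic binary raster and a maximum-entropy model with fields and pairwise couplings (a special case of the Gibbs distributions discussed above; the raster, neuron count and gradient-ascent parameters are all hypothetical). For a handful of neurons the state space can be enumerated exactly:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical synthetic raster: N neurons x T time bins, binary spikes.
N, T = 5, 20000
raster = (rng.random((N, T)) < 0.1).astype(float)

# Empirical averages that the Gibbs distribution must reproduce:
# firing rates <s_i> and pairwise moments <s_i s_j>.
mean_emp = raster.mean(axis=1)
corr_emp = (raster @ raster.T) / T

# All 2^N binary patterns, enumerated exactly (feasible only for small N).
states = np.array(list(product([0, 1], repeat=N)), dtype=float)

def model_averages(h, J):
    """Moments under the Gibbs distribution P(s) proportional to exp(h.s + s'Js)."""
    E = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

# Fit fields h and couplings J by gradient ascent on the log-likelihood;
# its fixed point is where the model moments match the empirical ones.
h, J = np.zeros(N), np.zeros((N, N))
lr = 0.5
for _ in range(2000):
    mean_mod, corr_mod = model_averages(h, J)
    h += lr * (mean_emp - mean_mod)
    dJ = np.triu(corr_emp - corr_mod, 1)
    J += lr * (dJ + dJ.T)          # symmetric couplings, zero diagonal

mean_mod, corr_mod = model_averages(h, J)
print("max rate error:", np.abs(mean_emp - mean_mod).max())
print("max pairwise error:", np.abs(corr_emp - corr_mod).max())
```

This toy fit is memoryless; the more general Gibbs distributions mentioned above also capture temporal dependencies between spike patterns, which this sketch deliberately leaves out.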
Mean-field analysis of neuronal networks. This method, well known in statistical physics and quantum field theory, is used in the field of neural network dynamics with the aim of modelling neural activity at scales integrating the effect of thousands of neurons. This is of central importance for several reasons. First, most imaging techniques are not able to measure individual neuron activity (the ``microscopic'' scale), but instead measure mesoscopic effects resulting from the activity of several hundred to several hundred thousand neurons. Second, anatomical data recorded in the cortex reveal the existence of structures, such as cortical columns, with a diameter of about 50 micrometers to 1 millimeter, containing of the order of one hundred to one thousand neurons belonging to a few different types. In this case, information processing does not occur at the scale of individual neurons but rather corresponds to an activity integrating the collective dynamics of many interacting neurons and resulting in a mesoscopic signal. Dynamic mean-field theory makes it possible to derive the evolution equations of the effective mean field from the microscopic dynamics in several model examples. We have obtained them in several examples of discrete-time neural networks with firing rates, and derived rigorous results on the mean-field dynamics of models with several populations, allowing the rederivation of classical phenomenological equations for cortical columns such as Jansen and Rit's. We are now developing a mean-field theory for correlated synaptic weights.
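As a minimal numerical sketch of this programme (a hypothetical model and parameter choice, not the models of the cited papers), one can compare a discrete-time firing-rate network with random couplings to the mean-field recursion for its population mean and second moment, where the local field is treated as Gaussian in the large-network limit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical microscopic model: x_i(t+1) = tanh(sum_j J_ij x_j(t)),
# with independent couplings J_ij ~ N(Jbar/N, sigma^2/N).
N, Tmax = 2000, 30
Jbar, sigma = 0.5, 2.0
J = Jbar / N + (sigma / np.sqrt(N)) * rng.standard_normal((N, N))

x = rng.uniform(0.0, 1.0, N)
m_micro, q_micro = [], []
for _ in range(Tmax):
    x = np.tanh(J @ x)
    m_micro.append(x.mean())
    q_micro.append((x ** 2).mean())

# Mean-field recursion for the population mean m(t) and second moment q(t):
# for large N the local field of a neuron is Gaussian with mean Jbar*m(t)
# and variance sigma^2*q(t); the Gaussian average is done by quadrature.
z, w = np.polynomial.hermite_e.hermegauss(41)  # nodes/weights, weight e^{-z^2/2}
w = w / np.sqrt(2.0 * np.pi)                   # normalise to a standard Gaussian

m, q = 0.5, 1.0 / 3.0   # first two moments of the uniform(0,1) initial state
m_field, q_field = [], []
for _ in range(Tmax):
    fm = np.tanh(Jbar * m + sigma * np.sqrt(q) * z)
    m, q = w @ fm, w @ fm ** 2
    m_field.append(m)
    q_field.append(q)

for t in range(0, Tmax, 5):
    print(f"t={t:2d}  m: network={m_micro[t]:+.3f} mean-field={m_field[t]:+.3f}   "
          f"q: network={q_micro[t]:.3f} mean-field={q_field[t]:.3f}")
```

The network averages should track the mean-field recursion up to finite-size fluctuations of order 1/sqrt(N), which is the kind of agreement the rigorous results mentioned above make precise.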
Dynamical effects induced by synaptic and intrinsic plasticity. This collaboration aims to understand how the structure of biological neural networks conditions their functional capacities, in particular learning. On the one hand, we are analysing the dynamics of neural network models subject to Hebbian learning and investigating how the capacity to recognise objects emerges. What are the effects on dynamics and topology? For this we are using concepts coming from random networks and nonlinear analysis (see next item). On the other hand, we are using the information obtained via this analysis to construct artificial neural network architectures able to learn basic objects and then to perform generalisation through emergent dynamics. A typical example, which we use as a guideline, is the architecture of the primary visual cortex (V1). In the long term, the goal would be to produce new computer architectures inspired by biological networks.
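As a minimal, hypothetical illustration of how a Hebbian rule lets recognition emerge from dynamics (a classical Hopfield-style sketch, not the specific models under study), the following stores random patterns with the Hebbian prescription and recalls one of them from a corrupted cue; network size, pattern count and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: store P random binary patterns in a recurrent network.
N, P = 200, 10
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian rule: a synapse is strengthened when pre- and post-synaptic
# activities agree across the stored patterns.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Cue: the first pattern with 20% of the units flipped.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1

# Synchronous dynamics s(t+1) = sign(W s(t)) until a fixed point is reached:
# the corrupted cue is attracted back to the stored pattern.
s = cue
for _ in range(20):
    s_new = np.sign(W @ s)
    s_new[s_new == 0] = 1
    if np.array_equal(s_new, s):
        break
    s = s_new

overlap = (s @ patterns[0]) / N
print(f"overlap with stored pattern after recall: {overlap:+.3f}")
```

Here the learned patterns become attractors of the network dynamics, which is the simplest instance of the interplay between learning, dynamics and topology discussed above.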
Interplay between synaptic graph structure and dynamics. Neuronal networks can
be regarded as graphs where each neuron is a vertex and each synaptic connection is an edge. Models usually have simple topologies (e.g. feed-forward or recurrent neural networks), but recent research on nervous and brain systems suggests that the actual topology of a real-world neuronal system is much more complex: small-world and scale-free properties are, for example, observed in human brain networks. There is also a complex interplay between the topological structure of the synaptic graph and the nonlinear evolution of the neurons. Thus, the existence of synapses between a neuron (A) and another one (B) is implicitly attached to a notion of ``influence'', or causal and directed action, from A to B. However, neuron B usually receives synapses from many other neurons, each of them being ``influenced'' by many other neurons, possibly acting on A, and so on. The actual ``influence'' or action of A on B must therefore be considered dynamically and in a global sense, by considering A and B not as isolated objects but as entities embedded in a system with a complex, interwoven dynamical evolution. It is thus necessary to develop tools able to handle this interplay. In this spirit we are using the linear response approach (see here for details); a numerical sketch of this idea is given after this paragraph. These results could lead to new directions in neural network analysis and, more generally, in the analysis of nonlinear dynamical systems on graphs. However, the results quoted above were obtained in a specific model example and further investigations must be carried out in a more general setting. In this spirit, the present project aims to explore two directions: recurrent models with spiking neurons (see item 1 above) and complex architectures and learning (item 2 above).
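To make the notion of dynamical, global ``influence'' concrete, here is a small hypothetical sketch (not the specific model of the work referenced above): in a stable recurrent rate network, the linear response of neuron B to a small input perturbation at neuron A involves all paths through the graph, and generally differs from the direct synaptic weight.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical model: x(t+1) = tanh(W x(t) + I). The susceptibility
# dx_B/dI_A measures the effective influence of neuron A on neuron B,
# mixing all paths through the graph, not just the direct synapse W[B, A].
N = 8
W = 0.4 * rng.standard_normal((N, N)) / np.sqrt(N)  # weak coupling: stable regime
I = 0.1 * rng.standard_normal(N)

def run(I, T=500):
    x = np.zeros(N)
    for _ in range(T):
        x = np.tanh(W @ x + I)
    return x   # converges to a fixed point in this contracting regime

A, B = 0, 3
eps = 1e-6
x0 = run(I)
I_pert = I.copy()
I_pert[A] += eps
x1 = run(I_pert)

chi_BA = (x1[B] - x0[B]) / eps   # numerical susceptibility of B to A
print(f"direct synaptic weight W[B,A] = {W[B, A]:+.4f}")
print(f"linear response chi[B,A]      = {chi_BA:+.4f}")

# Analytic check at the fixed point: chi = (1 - D W)^{-1} D, where
# D = diag(1 - x*^2) holds the local slopes of tanh at the fixed point.
D = np.diag(1 - x0 ** 2)
chi = np.linalg.inv(np.eye(N) - D @ W) @ D
print(f"analytic chi[B,A]             = {chi[B, A]:+.4f}")
```

The resolvent (1 - DW)^{-1} expands into contributions from paths of every length between A and B, which is precisely why influence must be read off the dynamics embedded in the whole graph rather than from the synapse A to B alone; linear response around chaotic or stochastic regimes, as in the work cited above, requires more care than this fixed-point sketch.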