Home page.

Modeling cortical column dynamics.




Modeling neural activity at scales that integrate the effect of thousands of neurons is of central importance for several reasons. On one hand, most imaging techniques cannot measure the activity of individual neurons (the ``microscopic'' scale), but instead measure mesoscopic effects resulting from the activity of several hundred to several thousand neurons. On the other hand, anatomical data reveal the existence, in the cortex, of structures such as cortical columns, with a diameter of about 100 µm, containing on the order of one thousand neurons belonging to a few different types. These columns have specific functions. For example, in the visual cortex V1, they respond to preferred orientations of bar-shaped visual stimuli. In this case, information processing does not occur at the scale of individual neurons but rather at a mesoscopic scale integrating the collective dynamics of many interacting neurons. Describing this collective dynamics requires models different from individual-neuron models. In particular, if the number of neurons is large enough, one expects ``averaging'' effects, such that the collective dynamics is well described by an effective mean field, summarizing the effect of the interactions of a neuron with the other neurons and depending on a few effective control parameters. This vision, inherited from statistical physics, requires that the spatial scale be large enough to include a large number of microscopic components (here, neurons) and small enough that the region considered is homogeneous. Cortical columns are a case in point.
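The averaging effect mentioned above can be illustrated with a small numerical sketch (a hypothetical i.i.d. setup chosen for illustration only, not the model analyzed on this page): when synaptic weights scale as 1/N, the total synaptic input received by one neuron concentrates around its mean as the population size N grows, with fluctuations shrinking like 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

def input_fluctuation(N, trials=200):
    """Empirical std of the total synaptic input to one neuron,
    for i.i.d. weights scaled as 1/N (illustrative setup)."""
    J, sigma = 1.0, 1.0
    rates = rng.uniform(0.0, 1.0, size=(trials, N))            # presynaptic rates
    weights = (J + sigma * rng.standard_normal((trials, N))) / N
    total_input = (weights * rates).sum(axis=1)                # one neuron's drive
    return total_input.std()

for N in (100, 1000, 10000):
    print(N, input_fluctuation(N))
```

Each tenfold increase in N reduces the input fluctuations by roughly a factor sqrt(10), which is the law-of-large-numbers mechanism behind the mean-field description.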


However, obtaining the evolution equations of the effective mean field from the microscopic dynamics is far from straightforward. In simple physical models this can be achieved via the law of large numbers and the central limit theorem, provided that time correlations decrease sufficiently fast. This type of approach has been generalized to fields such as quantum field theory and non-equilibrium statistical mechanics. To the best of our knowledge, the idea of applying mean-field methods to neural networks dates back to Amari [Amari:72, Amari:77]. Later on, Crisanti, Sompolinsky and coworkers [Sompolinsky-Zippelius:82, Crisanti-Sompolinsky:87a, Crisanti-Sompolinsky:87b, Sompolinsky-et-al:88] used a dynamic mean-field approach to conjecture the existence of chaos in a homogeneous neural network with random, independent synaptic weights. This approach was made rigorous in [Ben Arous-Guionnet:95, 97, Guionnet:97]. Mean-field methods are widely used in the neural network community. The main advantage of dynamic mean-field techniques is that they allow one to consider neural networks with random synaptic weights, and thereby to establish genericity results about the dynamics in terms of the statistical parameters controlling the probability distribution of the synaptic weights [Samuelides-Cessac:07]. They provide not only the evolution of the ``mean'' activity of the network but also information on fluctuations and correlations.
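A minimal simulation sketch of the kind of homogeneous random network studied by Sompolinsky and coworkers (the rate equation dx/dt = -x + J tanh(x) with i.i.d. centred Gaussian weights of variance g²/N; the parameter values and Euler scheme are illustrative choices, not those of any specific paper): for a gain g above the critical value, individual units fluctuate irregularly, while the population-averaged activity stays close to the mean-field prediction, which is zero for centred weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rate network with i.i.d. Gaussian weights of variance g^2 / N.
N, g, dt, steps = 500, 1.5, 0.05, 2000
J = rng.standard_normal((N, N)) * g / np.sqrt(N)
x = rng.standard_normal(N)

means = []
for _ in range(steps):
    x += dt * (-x + J @ np.tanh(x))   # Euler step of dx/dt = -x + J tanh(x)
    means.append(np.tanh(x).mean())

# Time-averaged population activity (near zero) vs. spread across units.
print(abs(np.mean(means[steps // 2:])), np.tanh(x).std())
```

The contrast between the small population average and the sizable spread across units is what the dynamic mean-field equations capture: they describe not just the mean, but the statistics of single-unit fluctuations.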


In this spirit, we have rigorously analyzed the mean-field equations for a multi-population neural network. One motivation for this work is to give an effective description of populations of neurons, in order to better understand neuronal assembly models, or neural mass models, such as Jansen and Rit's cortical column model [jansen-rit:95].
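For concreteness, here is a minimal Euler-integration sketch of the Jansen-Rit cortical column model, using parameter values commonly reported in the literature (the noisy external input and the crude integration scheme are simplifications for illustration, not part of the analysis presented here):

```python
import numpy as np

# Standard Jansen-Rit parameters (values as commonly reported).
A, B = 3.25, 22.0           # mV, excitatory / inhibitory synaptic gains
a, b = 100.0, 50.0          # 1/s, inverse synaptic time constants
v0, e0, r = 6.0, 2.5, 0.56  # sigmoid parameters
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C

def S(v):                    # population firing-rate sigmoid
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

dt, steps = 1e-4, 50000
y = np.zeros(6)              # y0, y1, y2 and their derivatives y3, y4, y5
out = np.empty(steps)
rng = np.random.default_rng(2)

for k in range(steps):
    p = 220.0 + 22.0 * rng.standard_normal()   # noisy external input (Hz)
    y0, y1, y2, y3, y4, y5 = y
    dy = np.array([
        y3, y4, y5,
        A * a * S(y1 - y2) - 2 * a * y3 - a * a * y0,
        A * a * (p + C2 * S(C1 * y0)) - 2 * a * y4 - a * a * y1,
        B * b * C4 * S(C3 * y0) - 2 * b * y5 - b * b * y2,
    ])
    y = y + dt * dy
    out[k] = y1 - y2         # EEG-like output of the column

print(out[-5000:].mean(), out[-5000:].std())
```

The output y1 - y2 plays the role of the mesoscopic (EEG-like) observable: a single variable summarizing the collective activity of the three interacting populations of the column.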


Main Results.


Bibliography