Modeling cortical column dynamics.
Modeling neural activity at scales that integrate the effect of thousands of neurons is of central importance for several reasons. On the one hand, most imaging techniques cannot measure individual neuron activity (the ``microscopic'' scale); instead, they measure mesoscopic effects resulting from the activity of several hundreds to several thousands of neurons. On the other hand, anatomical data reveal, in the cortex, the existence of structures such as cortical columns, with a diameter of about 100 µm, containing on the order of one thousand neurons belonging to a few different types. These columns have specific functions. For example, in the primary visual cortex V1, they respond to preferential orientations of bar-shaped visual stimuli. In such cases, information processing does not occur at the scale of individual neurons but rather at a mesoscopic scale integrating the collective dynamics of many interacting neurons. Describing this collective dynamics requires models that differ from individual-neuron models. In particular, if the number of neurons is large enough, one expects ``averaging'' effects such that the collective dynamics is well described by an effective mean field, summarizing the effect of the interactions of a neuron with the other neurons and depending on a few effective control parameters. This vision, inherited from statistical physics, requires that the spatial scale be large enough to include a large number of microscopic components (here, neurons) and small enough that the region considered is homogeneous. Cortical columns are a case in point.
However, obtaining the equations of evolution of the effective mean field from the microscopic dynamics is far from trivial. In simple physical models this can be achieved via the law of large numbers and the central limit theorem, provided that time correlations decrease sufficiently fast. This type of approach has been generalized to fields such as quantum field theory and non-equilibrium statistical mechanics. To the best of our knowledge, the idea of applying mean-field methods to neural networks dates back to Amari [Amari:72, Amari:77]. Later on, Crisanti, Sompolinsky and coworkers [Sompolinsky-Zippelius:82, Crisanti-Sompolinsky:87a, Crisanti-Sompolinsky:87b, Sompolinsky-et-al:88] used a dynamic mean-field approach to conjecture the existence of chaos in a homogeneous neural network with random, independent synaptic weights. This approach was made rigorous in [Ben Arous-Guionnet:95, Ben Arous-Guionnet:97, Guionnet:97]. Mean-field methods are widely used in the neural network community. The main advantage of dynamic mean-field techniques is that they can handle neural networks whose synaptic weights are random, which makes it possible to establish genericity results about the dynamics as a function of the statistical parameters controlling the probability distribution of the synaptic weights [Samuelides-Cessac:07]. This approach provides not only the evolution of the ``mean'' activity of the network but also information about the fluctuations and correlations.
In this spirit, we have rigorously analyzed the mean-field equations for a multi-population neural network. One motivation for this work is to give an effective description of groups of neurons, in order to better ground neural assembly models and neural mass models, such as Jansen and Rit's cortical column model [jansen-rit:95].
Main Results.
Olivier Faugeras, Jonathan Touboul, Bruno Cessac, "A constructive mean field analysis of multi-population neural networks with random synaptic weights and stochastic inputs", Frontiers in Comput. Neuroscience, (2009) 3:1
We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behaviour is described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. These neural mass models relate such quantities as the average membrane potential or the average activity of the neurons in one population to those of the other populations, through population synaptic connections whose weights can also be thought of as averages. We establish a correspondence between these two levels of description from first principles. We consider an arbitrary number of neuron populations and assume that the synaptic weights at the microscopic level are independent Gaussian random variables whose moments depend only upon the populations and not on the individual neurons. We then let the total number of neurons go to infinity while keeping the proportion of each population with respect to the total number of neurons constant. We then derive the Mean Field equations that describe the behaviour of the populations. These are not stochastic differential equations but rather functional equations on a set of stochastic processes. This point of view allows us to prove that these equations are well-posed on any finite time interval and to provide, by a fixed-point method, a constructive way of effectively computing their unique solution. We also describe some partial results along the same lines for infinite time intervals, where the solutions are stationary random processes.
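The fixed-point construction can be illustrated in a drastically simplified setting. The sketch below is our illustration, not the paper's algorithm: it assumes a single population, a tanh activation, centred Gaussian weights, and a stationary Gaussian ansatz, so that the functional equations reduce to two scalar self-consistency conditions for the mean activity m and the second moment q, solved by straightforward fixed-point iteration.

```python
import numpy as np

def mean_field_fixed_point(J_mean, J_var, n_quad=101, tol=1e-10, max_iter=1000):
    """Solve the stationary self-consistency equations
        m = E[tanh(J_mean*m + sqrt(J_var*q) * Z)],
        q = E[tanh(J_mean*m + sqrt(J_var*q) * Z)^2],   Z ~ N(0, 1),
    by fixed-point iteration (a toy analogue of the constructive method)."""
    # Gauss-Hermite nodes/weights for expectations under the standard Gaussian
    z, w = np.polynomial.hermite_e.hermegauss(n_quad)
    w = w / np.sqrt(2.0 * np.pi)
    m, q = 0.1, 0.1  # arbitrary starting point of the iteration
    for _ in range(max_iter):
        s = np.tanh(J_mean * m + np.sqrt(J_var * q) * z)
        m_new, q_new = np.sum(w * s), np.sum(w * s**2)
        if abs(m_new - m) + abs(q_new - q) < tol:
            return m_new, q_new
        m, q = m_new, q_new
    return m, q

m, q = mean_field_fixed_point(J_mean=0.0, J_var=4.0)
```

With centred weights (J_mean = 0) the mean activity vanishes, while q converges to a nonzero value when J_var exceeds 1 — the regime in which the dynamic mean-field theory of [Sompolinsky-et-al:88] predicts chaotic activity.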
These results shed new light on neural mass models such as that of Jansen and Rit: their dynamics approximate the much richer dynamics that emerges from our analysis, because their approach neglects the random fluctuations around the mean values as well as their correlations. Our preliminary numerical experiments confirm that the framework we propose, and the numerical methods we derive from it, may provide a new and powerful tool for exploring neural behaviours at different scales.
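The kind of sanity check such experiments perform can be sketched as follows: simulate a finite random network at the microscopic scale and compare its empirical population statistics with the mean-field prediction. This is a minimal illustration under our own simplifying assumptions (one population, rate dynamics x' = -x + J tanh(x), centred Gaussian weights of variance g²/N), not the paper's code.

```python
import numpy as np

# Microscopic simulation of N rate neurons with random Gaussian couplings.
rng = np.random.default_rng(0)
N, g, dt, T = 1000, 2.0, 0.05, 50.0
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))   # synaptic weights, variance g^2/N
x = rng.normal(0.0, 0.5, size=N)                   # initial membrane potentials

steps = int(T / dt)
q_samples = []
for t in range(steps):
    s = np.tanh(x)                                 # firing-rate nonlinearity
    x = x + dt * (-x + J @ s)                      # Euler step of x' = -x + J tanh(x)
    if t >= steps // 2:                            # discard the transient
        q_samples.append(np.mean(s**2))

q_emp = float(np.mean(q_samples))                  # empirical analogue of the mean-field q
```

For g greater than 1 the network settles into irregular activity, and q_emp approaches the self-consistent mean-field value as N grows; the finite-N fluctuations around it are precisely the correlations that standard neural mass models neglect.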
Bibliography
[Amari:72] S. Amari. Characteristics of random nets of analog neuron-like elements. IEEE Transactions on Systems, Man, and Cybernetics, SMC-2:643–657, 1972.
[Amari:77] S.I. Amari, K. Yoshida, and K.I. Kanatani. A Mathematical Foundation for Statistical Neurodynamics. SIAM Journal on Applied Mathematics, 33(1):95–126, 1977.
[Ben Arous-Guionnet:95] G. Ben Arous and A. Guionnet. Large deviations for Langevin spin glass dynamics. Probability Theory and Related Fields, 102(4):455–509, 1995.
[Ben Arous-Guionnet:97] G. Ben Arous and A. Guionnet. Symmetric Langevin Spin Glass Dynamics. The Annals of Probability, 25(3):1367–1422, 1997.
[Crisanti-Sompolinsky:87a] A. Crisanti and H. Sompolinsky. Dynamics of spin systems with randomly asymmetric bonds: Langevin dynamics and a spherical model. Physical Review A, 36(10):4922–4939, 1987.
[Crisanti-Sompolinsky:87b] A. Crisanti and H. Sompolinsky. Dynamics of spin systems with randomly asymmetric bonds: Ising spins and Glauber dynamics. Physical Review A, 37(12):4865, 1988.
[Guionnet:97] A. Guionnet. Averaged and quenched propagation of chaos for spin glass dynamics. Probability Theory and Related Fields, 109(2):183–215, 1997.
[jansen-rit:95] B.H. Jansen and V.G. Rit. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological Cybernetics, 73:357–366, 1995.
[Samuelides-Cessac:07] M. Samuelides and B. Cessac. Random recurrent neural networks. European Physical Journal - Special Topics, 142:7–88, 2007.
[Sompolinsky-et-al:88] H. Sompolinsky, A. Crisanti, and H.J. Sommers. Chaos in Random Neural Networks. Physical Review Letters, 61(3):259–262, 1988.
[Sompolinsky-Zippelius:82] H. Sompolinsky and A. Zippelius. Relaxational dynamics of the Edwards-Anderson model and the mean-field theory of spin-glasses. Physical Review B, 25(11):6860–6875, 1982.
[van Rotterdam-et-al:82] A. van Rotterdam, F.H. Lopes da Silva, J. van den Ende, M.A. Viergever, and A.J. Hermans. A model of the spatial-temporal characteristics of the alpha rhythm. Bulletin of Mathematical Biology, 44(2):283–305, 1982.