Big Data in Neuroscience: let's address this issue using semantic web formalisms.

Models studied in computational neuroscience are fed with facts drawn from the large literature reporting knowledge obtained from biological experimentation. Such knowledge is expressed in words, leading to phenomenological descriptions of the underlying behaviors. Then, part of this information is translated into equations and distributed algorithms in order to check some quantitative or qualitative aspects of the presumed global understanding. Surprisingly enough, this work is mainly done by hand, craftsman-like, and more or less by intuition.

Other domains of knowledge have taken advantage of recent and powerful developments in semantic representations and have introduced an intermediate framework between knowledge-in-words and knowledge-in-equations: knowledge-in-facts. From very large and heterogeneous corpora such as Wikipedia, turned into the DBpedia semantic database, to very specialized fields of knowledge formalized as ontologies, it is clear that powerful methodological tools are now available to manage data at a much larger scale.

The goal of the proposed work is to address this issue in three steps:

1/ Considering a well-defined topic (either early-vision processing from the retina to the thalamus, or functional models of the basal ganglia and related action-selection structures, the choice being made with the student), build an ontology that describes our current knowledge of these brain sub-systems.
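As a rough illustration of what a fragment of such an ontology could look like in the early-vision case, a few starting triples might resemble the following sketch. All class and property names here (`neuro:BrainStructure`, `neuro:projectsTo`, the `neuro:` namespace itself) are hypothetical placeholders for illustration, not an agreed vocabulary:

```turtle
@prefix neuro: <http://example.org/neuro#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

# Hypothetical classes and properties, for illustration only.
neuro:Retina   rdfs:subClassOf neuro:BrainStructure .
neuro:Thalamus rdfs:subClassOf neuro:BrainStructure .

# A phenomenological fact lifted from the literature:
neuro:Retina   neuro:projectsTo neuro:Thalamus .
```

The actual vocabulary would be designed during the internship, ideally aligned with existing neuroscience ontologies rather than invented from scratch.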

2/ From this real example, propose an intermediate language (say, Turtle) that allows computer scientists to easily annotate publications while building the related ontology. This step is carried out with colleagues specialized in semantic web formalisms.
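To make this annotation step concrete, here is a minimal sketch of the kind of helper such a tool could provide: turning simple publication annotations into Turtle triples. The namespace and all names (`neuro:`, `projectsTo`, etc.) are illustrative assumptions, and a real implementation would rather rely on a dedicated RDF library:

```python
# Hypothetical namespace for the neuroscience vocabulary (assumption).
NS = "http://example.org/neuro#"

def to_turtle(annotations):
    """Serialize (subject, predicate, object) annotations as Turtle.

    Each annotation is a triple of local names in the hypothetical
    `neuro:` namespace, e.g. ("Retina", "projectsTo", "Thalamus").
    """
    lines = ["@prefix neuro: <%s> ." % NS]
    for s, p, o in annotations:
        lines.append("neuro:%s neuro:%s neuro:%s ." % (s, p, o))
    return "\n".join(lines)

# Example: annotating two claims reported in a publication.
doc = to_turtle([("Retina", "projectsTo", "Thalamus"),
                 ("Thalamus", "projectsTo", "VisualCortex")])
print(doc)
```

The point of such an intermediate layer is that annotators manipulate plain structured facts, while the Turtle output remains directly loadable into standard semantic web toolchains.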

3/ Considering existing tools and existing initiatives, propose the design of a collaborative platform allowing the scientific community in the field to construct such big data, beyond this seminal work.