ResearchActivities.ResearchActivities History


June 23, 2013, at 09:20 PM by 82.228.254.112 -
Changed line 177 from:
* Design of efficient fault tolerant protocols and Algorithm-based fault tolerance.
to:
* Design of efficient fault tolerant protocols and algorithm-based fault tolerance.
February 01, 2013, at 12:33 PM by 138.96.201.175 -
Changed line 152 from:
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~fvivien | Frédéric Vivien]] (Roma project-team) and %newwin% [[http://www.lri.fr/~fci/Site-FC/Site-Home-page.html | Franck Cappello]] (Grand-Large project-team)
to:
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~fvivien | Frédéric Vivien]] (Roma project-team) and %newwin% [[https://who.rocq.inria.fr/Laura.Grigori | Laura Grigori]] (Alpines project-team)
August 17, 2012, at 03:44 PM by 138.96.201.175 -
August 17, 2012, at 03:22 PM by 138.96.201.175 -
Changed lines 160-162 from:
systems, NSF Track 1 and Track 2 systems or the DoE Leadership
computing
systems), where high performance computers are pushed to
their limits in scale and technologies, faults are so frequent that
to:
systems, NSF Track 1 and Track 2 systems or the Department of Energy Leadership computing systems), where high performance computers are pushed to their limits in scale and technologies, faults are so frequent that
Changed lines 177-178 from:
* Design of efficient fault tolerant protocols and Algorithm-based
fault
tolerance.
to:
* Design of efficient fault tolerant protocols and Algorithm-based fault tolerance.
August 17, 2012, at 03:21 PM by 138.96.201.175 -
Changed lines 155-184 from:
to:
Simulation is developing as a third methodological pillar for many
science disciplines, in addition to theory and experiments.
Simulations, which are done on high performance computers, are slowed
down by preventive and corrective actions used to mask hardware or
software component faults. At extreme scale (such as the PRACE
systems, NSF Track 1 and Track 2 systems or the DoE Leadership
computing systems), where high performance computers are pushed to
their limits in scale and technologies, faults are so frequent that
not only do they challenge the robustness of the mechanisms that are
supposed to ensure reliable executions, but they also force these
mechanisms to be used much more frequently, thereby significantly
decreasing simulation performance. With the advent of exascale
computing, involving 10,000,000 processor cores or more, experts in
fault tolerance for HPC systems consider that the time between two
consecutive faults will be lower than the time required for a
restart-checkpoint cycle. At this point, no actual progress of the
application can be expected. An important objective is to develop new
algorithmic techniques to solve the exascale resilience problem.
Solving this problem implies a break with current approaches, and
calls for yet-to-be-discovered algorithms, protocols and software
tools. In the project the following research directions will be
considered:
* Energy effective fault tolerant protocols for HPC applications.

* Design of efficient fault tolerant protocols and Algorithm-based
fault tolerance.

* Performance execution models for fault-tolerant applications.

* Resilience for sparse linear algebra.
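
The checkpoint-restart argument above can be made concrete with a rough,
back-of-the-envelope utilization model (an illustrative sketch only, not part
of the project description): it assumes periodic coordinated checkpointing
with a given checkpoint time, restart time and mean time between failures,
uses Young's classical approximation for the checkpoint interval, and the
numbers plugged in are hypothetical.

[@
import math

def expected_utilization(mtbf, checkpoint, restart):
    """Rough fraction of wall-clock time spent on useful work under
    periodic coordinated checkpointing (first-order model only)."""
    # Young's approximation for the optimal checkpoint interval.
    interval = math.sqrt(2.0 * checkpoint * mtbf)
    # Overhead of writing checkpoints during fault-free execution.
    checkpoint_overhead = checkpoint / (interval + checkpoint)
    # Expected rework plus restart cost per failure, amortized over the MTBF:
    # on average half a checkpoint interval of work is lost per failure.
    failure_overhead = (restart + interval / 2.0) / mtbf
    return max(0.0, 1.0 - checkpoint_overhead - failure_overhead)

# Hypothetical numbers (in minutes): checkpointing a full machine is assumed
# to take 30 minutes and restarting 10 minutes.
for mtbf in (24 * 60, 8 * 60, 60, 30):
    u = expected_utilization(mtbf, checkpoint=30, restart=10)
    print(f"MTBF = {mtbf:5d} min  ->  useful work ~ {100 * u:5.1f} %")
@]

Once the mean time between failures approaches the length of a
checkpoint-restart cycle, the computed utilization drops to zero, which is
exactly the situation described above.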
August 17, 2012, at 03:20 PM by 138.96.201.175 -
Deleted line 100:
Deleted line 128:
Changed lines 151-152 from:
Pole 5: \\
Coordinators: %newwin% [[ | ]] ( project-team) and %newwin% [[ | ]] (project-team)
to:
Pole 5: Resilience for exascale computing\\
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~fvivien | Frédéric Vivien]] (Roma project-team) and %newwin% [[http://www.lri.fr/~fci/Site-FC/Site-Home-page.html | Franck Cappello]] (Grand-Large project-team)
August 17, 2012, at 03:17 PM by 138.96.201.175 -
Deleted line 10:
August 17, 2012, at 03:16 PM by 138.96.201.175 -
Changed lines 132-133 from:
* Component models. Here a hierarchical approach is adopted where, at the lower level, each component is assumed to efficiently exploit the hardware, while at the upper level the coordination between components is responsible to manage communication and synchronization taking into account the hierarchical and heterogeneous characteristics of the underlying architecture as well as the knowledge of interactions between distributed components. However, current HPC applications are still written according to a low level of abstraction. This puts a
heavy burden on application developers as they have to be expert of the target machine to make efficient implementation choices. Moreover, as this work as to be done for each new machine, porting an application to a new machine is a very long, difficult and expensive task. As resources become hierarchical and heterogeneous, this approach seems less and less acceptable.\\
to:
* Component models. Here a hierarchical approach is adopted where, at the lower level, each component is assumed to efficiently exploit the hardware, while at the upper level the coordination between components is responsible for managing communication and synchronization, taking into account the hierarchical and heterogeneous characteristics of the underlying architecture as well as the knowledge of interactions between distributed components. However, current HPC applications are still written at a low level of abstraction. This puts a heavy burden on application developers as they have to be experts of the target machine to make efficient implementation choices. Moreover, as this work has to be done for each new machine, porting an application to a new machine is a very long, difficult and expensive task. As resources become hierarchical and heterogeneous, this approach seems less and less acceptable.\\
Changed line 135 from:
* High level parallel programming models. Although very attractive, this direction is difficult to manage: on one hand, standard programming models, such as MPI, OpenMP are well adopted by the HPC community that have billion of lines of codes that reclaims huge amounts of work to port to new standards; on the other hand with the complexity of new exascale architectures, several researchers have made criticisms about the capacity of these programming models to be able to be productive with good portability of efficiency of the programs. Several new languages, such as X10 (IBM), Fortress (Sun) or Chapel (Cray) which are funded by DARPA’s High Productivity Computing Systems project, have been proposed in order to abstract the architecture (physical parallelism) to allow the development of architecture-oblivious algorithms. High-level programming offers a high level and uniform view of the (heterogeneous) hardware. Most high-level programming models are based on a program-centric view to describe interaction between tasks. Researchers aims at adapting the required number of tasks and studying scheduling algorithms that maps logical parallelism of the application to the physical available parallelism.\\
to:
* High level parallel programming models. Although very attractive, this direction is difficult to manage: on one hand, standard programming models such as MPI and OpenMP are well adopted by the HPC community, which has billions of lines of code that would require huge amounts of work to port to new standards; on the other hand, given the complexity of new exascale architectures, several researchers have questioned the capacity of these programming models to remain productive while preserving the performance portability of the programs. Several new languages, such as X10 (IBM), Fortress (Sun) or Chapel (Cray), which are funded by DARPA’s High Productivity Computing Systems project, have been proposed in order to abstract the architecture (physical parallelism) and allow the development of architecture-oblivious algorithms. High-level programming offers a high-level and uniform view of the (heterogeneous) hardware. Most high-level programming models are based on a program-centric view to describe interactions between tasks. Researchers aim at adapting the required number of tasks and at studying scheduling algorithms that map the logical parallelism of the application to the physically available parallelism.\\
August 17, 2012, at 03:15 PM by 138.96.201.175 -
Changed lines 114-152 from:
to:
Exascale computing will offer the programmer the opportunity to use
millions of cores. Cores will certainly be heterogeneous: a single
chip may contain numerous simple cores and a few complex cores
(CPUs); memory will be hierarchically organized. Moreover, as exascale
computing will provide more computing and storage capabilities, it
will enable new kinds of applications. For example, multiphysics
applications will be able to integrate more phenomena. Programmers
cannot be expected to deal directly with these hardware and software
complexities, which must be handled by the software stack between the
application and the hardware rather than by
the programmers themselves. In this thematic pole, programming models
will be investigated to reduce the previously mentioned complexities.
The common idea is to structure the parallelism and its description
using known skeletons or patterns with (guaranteed) good performance
on multiple architectures. The absence of a consensus on how to write
or compose HPC applications motivates research on alternative ways to
program them. Two research directions have been identified:

* Component models. Here a hierarchical approach is adopted where, at the lower level, each component is assumed to efficiently exploit the hardware, while at the upper level the coordination between components is responsible for managing communication and synchronization, taking into account the hierarchical and heterogeneous characteristics of the underlying architecture as well as the knowledge of interactions between distributed components. However, current HPC applications are still written at a low level of abstraction. This puts a
heavy burden on application developers as they have to be experts of the target machine to make efficient implementation choices. Moreover, as this work has to be done for each new machine, porting an application to a new machine is a very long, difficult and expensive task. As resources become hierarchical and heterogeneous, this approach seems less and less acceptable.\\
Without suitable coordination steps, the approach of aggregating multiple parallel codes, libraries and frameworks into composite applications is bound to fail at some point, as the number of computing units and their level of heterogeneity keep increasing. Indeed, most existing codes have been written with the assumption of a mostly exclusive access to the underlying hardware. Now, application development has to evolve together with runtime systems (pole 3) to make every piece of code cooperate rather than interfere, so as to ensure an efficient overall behavior on exascale architectures and an efficient use of their power.

* High level parallel programming models. Although very attractive, this direction is difficult to manage: on one hand, standard programming models such as MPI and OpenMP are well adopted by the HPC community, which has billions of lines of code that would require huge amounts of work to port to new standards; on the other hand, given the complexity of new exascale architectures, several researchers have questioned the capacity of these programming models to remain productive while preserving the performance portability of the programs. Several new languages, such as X10 (IBM), Fortress (Sun) or Chapel (Cray), which are funded by DARPA’s High Productivity Computing Systems project, have been proposed in order to abstract the architecture (physical parallelism) and allow the development of architecture-oblivious algorithms. High-level programming offers a high-level and uniform view of the (heterogeneous) hardware. Most high-level programming models are based on a program-centric view to describe interactions between tasks. Researchers aim at adapting the required number of tasks and at studying scheduling algorithms that map the logical parallelism of the application to the physically available parallelism.\\
Widely adopted standards are nevertheless invaluable as a foundation for experimenting with new programming-model ideas, by testing carefully selected extensions that improve expressiveness and better guide the underlying runtime systems. Moreover, annotation-based programming models present several advantages of their own. They allow incremental ports of existing applications, thus avoiding expensive and hazardous whole-application rewrites. They enable developers to continuously test their applications during the porting process and thus to quickly address issues that may arise from such a port, instead of waiting for the whole port to be complete. Finally, they bring in the often underused analysis power of the compiler, which can be exploited to generate hints during the processing of pragmas.

The key points here are providing a correct abstraction of the
underlying architecture, finding good ways to describe the application
parallelism or to identify computational patterns, and providing a
scheduling algorithm that maps the logical parallelism of the
application to the physically available hardware resources. The right
features and design decisions allow reducing the overhead inherent to
architecture abstraction. This is why the thematic pole 4 is strongly
connected with thematic pole 3, which addresses runtime tools and
algorithms to exploit parallel architectures. Resource managers,
scheduling algorithms, graph partitioning and load balancing
algorithms that are studied in the context of the thematic pole 3 are
viewed as basic parallel building blocks that will be coordinated at
an upper level by taking into account both architecture abstraction
and structural properties of parallel programs.
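
As a purely illustrative sketch of the mapping discussed above (the task
function, the problem sizes and the use of a process pool are assumptions
made for the example, not project choices): an application can over-decompose
its work into many logical tasks and let a runtime pool map them onto the
physically available cores.

[@
import os
from concurrent.futures import ProcessPoolExecutor

def kernel(chunk):
    """Hypothetical compute kernel standing in for one logical task."""
    return sum(i * i for i in range(chunk))

if __name__ == "__main__":
    # Logical parallelism: many more tasks than cores (over-decomposition),
    # so the runtime has room to balance the load dynamically.
    logical_tasks = [50_000 + 1_000 * i for i in range(256)]

    # Physical parallelism: the pool size matches the available cores; the
    # actual mapping of tasks to cores is left to the pool's scheduler.
    workers = os.cpu_count() or 4
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(kernel, logical_tasks, chunksize=8))

    print(f"{len(logical_tasks)} logical tasks mapped onto {workers} workers")
    print("checksum:", sum(results))
@]

Over-decomposition (many more logical tasks than physical workers) is
precisely what gives the underlying scheduler room to balance the load
dynamically.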
August 17, 2012, at 03:10 PM by 138.96.201.175 -
Changed lines 110-111 from:
Pole 4: \\
Coordinators: %newwin% [[ | ]] ( project-team) and %newwin% [[ | ]] (project-team)
to:
Pole 4: Programming models\\
Coordinators: %newwin% [[http://moais.imag.fr/membres/thierry.gautier/TG/home_page.html | Thierry Gautier]] (Moais project-team) and %newwin% [[http://graal.ens-lyon.fr/~cperez/web | Christian Perez]] (Avalon project-team)
August 17, 2012, at 03:06 PM by 138.96.201.175 -
Deleted line 0:
August 17, 2012, at 03:06 PM by 138.96.201.175 -
Added line 1:
August 17, 2012, at 03:06 PM by 138.96.201.175 -
Added lines 1-2:
(:title Research and development activities:)
Added line 11:
Added line 13:
Added line 15:
Added line 17:
Added line 102:
Added line 104:
Changed lines 107-108 from:
* Techniques and tools for static and dynamic processing of numerical data sets. Finite element or finite volume differential equation solvers such as those considered in the project are naturally adapted to the use of unstructured meshes for the discretization of the underlying computational domain. Such meshes are amenable to local refinement or coarsening so as to match the complex geometrical features or/and the steep variations of the solved physical quantities. However, this enhanced flexibility comes at the expense of an increased difficulty to optimize and preserve data locality. Moreover, very large-scale 3D simulations generally involve unstructured meshes sizes ranging from a few tens up to several hundreds million elements if not more. Distributing such meshes on several thousands of processing units in view of their parallel processing by the PDE solvers is a task that can hardly be tackled as a sequential process, especially if this partitioning has to be performed more than once during a simulation as a result of an auto-adaptive discretization process. Overall, several subtopics have to be considered here that are related to the manipulation of very large numerical data sets for their distribution either before (i.e. static partitioning) or during (i.e. dynamic repartitioning - or re-meshing) a numerical simulation, dynamic load
balancing, and for their runtime analysis (i.e. computational steering).
to:
* Techniques and tools for static and dynamic processing of numerical data sets. Finite element or finite volume differential equation solvers such as those considered in the project are naturally adapted to the use of unstructured meshes for the discretization of the underlying computational domain. Such meshes are amenable to local refinement or coarsening so as to match the complex geometrical features and/or the steep variations of the solved physical quantities. However, this enhanced flexibility comes at the expense of an increased difficulty to optimize and preserve data locality. Moreover, very large-scale 3D simulations generally involve unstructured mesh sizes ranging from a few tens of millions up to several hundred million elements, if not more. Distributing such meshes on several thousands of processing units in view of their parallel processing by the PDE solvers is a task that can hardly be tackled as a sequential process, especially if this partitioning has to be performed more than once during a simulation as a result of an auto-adaptive discretization process. Overall, several subtopics have to be considered here that are related to the manipulation of very large numerical data sets for their distribution either before (i.e. static partitioning) or during (i.e. dynamic repartitioning or re-meshing) a numerical simulation, dynamic load balancing, and for their runtime analysis (i.e. computational steering).
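
Purely as an illustration of the static partitioning versus dynamic
repartitioning distinction mentioned in the item above (the 1D element
ordering, the uniform-then-refined weights and the greedy splitter are
simplistic placeholders, not the partitioning tools the project relies on):

[@
import numpy as np

def partition_by_weight(weights, nparts):
    """Greedy 1D partitioning of consecutive elements into nparts chunks
    of roughly equal total weight (a stand-in for a real graph partitioner)."""
    target = weights.sum() / nparts
    parts, current, acc = [], [], 0.0
    for elem, w in enumerate(weights):
        current.append(elem)
        acc += w
        if acc >= target and len(parts) < nparts - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts

# Static partitioning: uniform cost per mesh element, decided before the run.
nelems, nprocs = 1000, 4
weights = np.ones(nelems)
static_parts = partition_by_weight(weights, nprocs)

# Dynamic repartitioning: after adaptive refinement some elements become
# more expensive, so the same elements are redistributed to rebalance load.
weights[600:] = 3.0
dynamic_parts = partition_by_weight(weights, nprocs)

for p, (s, d) in enumerate(zip(static_parts, dynamic_parts)):
    print(f"rank {p}: static {len(s):4d} elems -> rebalanced {len(d):4d} elems")
@]

Real partitioners of course work on general graphs and also minimize the
amount of data moved between the old and the new distribution.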
August 17, 2012, at 03:04 PM by 138.96.201.175 -
Deleted line 11:
Changed lines 96-102 from:
* Resource management and scheduling strategies. The resource management topic can be subdivided in two parts. First, as applications become more complex, their mapping to resource also becomes more complex. For example, some parts of an application can depend on the input data size or the number of actual resources. The current solution is to mix application optimization code with the functional code of an application. While it may work for static applications, it is even less satisfactory for dynamic
applications. Therefore, the issue is to revisit interactions between application and resource management systems. Second, system services should be able to operate efficiently at very large scale. Hence, the issue is to design distributed algorithms for example for deployment, discovery or composition/orchestration services. Finally, applications can be written as workflows with data dependencies between tasks (which in turn can be parallel). These applications need specific resource and data management heuristics. Moreover, typical high performance applications require both an initial mapping of data and tasks and periodic re-scheduling to address the dynamic behavior of the system environment (e.g. failure) or of the application itself. The programming model and the role of the programmer are important in order to allow for flexibility in the scheduling algorithm. For instance, using an abstract data flow graph representation allows using, transparently with respect to the application code, graph partitioning algorithms in order to redistribute the workload together with a local (or global) work stealing algorithm to adapt variation of the load among the cores.
* Runtime systems. Traditional processors have reached architectural limits which heterogeneous multicore designs and hardware specialization (e.g. coprocessors, accelerators, etc.) intend to address. However, exploiting such machines introduces numerous challenging issues at all levels, ranging from programming models and compilers to the design of scalable hardware solutions. The design of efficient runtime systems for these architectures is a critical
issue. In the project, we intend to address research topics that are all essential steps in solving the general parallel composability issue at the runtime system level: (1) micro runtime system supporting multiple parallel paradigms, (2) framework for experimenting with hybrid programs, (3) adaptive parallelism and code auto-tuning and, (4) validation using well known parallel kernels.

* Techniques and tools for static and dynamic processing of numerical data sets. Finite element or finite volume differential equation solvers such as those
considered in the project are naturally adapted to the use of unstructured meshes for the discretization of the underlying computational domain. Such meshes are amenable to local refinement or coarsening so as to match the complex geometrical features or/and the steep variations of the solved physical quantities. However, this enhanced flexibility comes at the expense of an increased difficulty to optimize and preserve data locality. Moreover, very large-scale 3D simulations generally involve unstructured meshes sizes ranging from a few tens up to several hundreds million elements if not more. Distributing such meshes on several thousands of processing units in view of their parallel processing by the PDE solvers is a task that can hardly be tackled as a sequential process, especially if this partitioning has to be performed more than once during a simulation as a result of an auto-adaptive discretization process. Overall, several subtopics have to be considered here that are related to the manipulation of very large numerical data sets for their distribution either before (i.e. static partitioning) or during (i.e. dynamic repartitioning - or re-meshing) a numerical simulation, dynamic load
to:
* Resource management and scheduling strategies. The resource management topic can be subdivided in two parts. First, as applications become more complex, their mapping to resources also becomes more complex. For example, some parts of an application can depend on the input data size or the number of actual resources. The current solution is to mix application optimization code with the functional code of an application. While it may work for static applications, it is even less satisfactory for dynamic applications. Therefore, the issue is to revisit interactions between application and resource management systems. Second, system services should be able to operate efficiently at very large scale. Hence, the issue is to design distributed algorithms for example for deployment, discovery or composition/orchestration services. Finally, applications can be written as workflows with data dependencies between tasks (which in turn can be parallel). These applications need specific resource and data management heuristics. Moreover, typical high performance applications require both an initial mapping of data and tasks and periodic re-scheduling to address the dynamic behavior of the system environment (e.g. failure) or of the application itself. The programming model and the role of the programmer are important in order to allow for flexibility in the scheduling algorithm. For instance, using an abstract data flow graph representation allows using, transparently with respect to the application code, graph partitioning algorithms in order to redistribute the workload together with a local (or global) work stealing algorithm to adapt to variations of the load among the cores.
* Runtime systems. Traditional processors have reached architectural limits which heterogeneous multicore designs and hardware specialization (e.g. coprocessors, accelerators, etc.) intend to address. However, exploiting such machines introduces numerous challenging issues at all levels, ranging from programming models and compilers to the design of scalable hardware solutions. The design of efficient runtime systems for these architectures is a critical issue. In the project, we intend to address research topics that are all essential steps in solving the general parallel composability issue at the runtime system level: (1) micro runtime system supporting multiple parallel paradigms, (2) framework for experimenting with hybrid programs, (3) adaptive parallelism and code auto-tuning and, (4) validation using well known parallel kernels.

* Techniques and tools for static and dynamic processing of numerical data sets. Finite element or finite volume differential equation solvers such as those considered in the project are naturally adapted to the use of unstructured meshes for the discretization of the underlying computational domain. Such meshes are amenable to local refinement or coarsening so as to match the complex geometrical features and/or the steep variations of the solved physical quantities. However, this enhanced flexibility comes at the expense of an increased difficulty to optimize and preserve data locality. Moreover, very large-scale 3D simulations generally involve unstructured mesh sizes ranging from a few tens of millions up to several hundred million elements, if not more. Distributing such meshes on several thousands of processing units in view of their parallel processing by the PDE solvers is a task that can hardly be tackled as a sequential process, especially if this partitioning has to be performed more than once during a simulation as a result of an auto-adaptive discretization process. Overall, several subtopics have to be considered here that are related to the manipulation of very large numerical data sets for their distribution either before (i.e. static partitioning) or during (i.e. dynamic repartitioning or re-meshing) a numerical simulation, dynamic load
August 17, 2012, at 03:03 PM by 138.96.201.175 -
Deleted line 8:
Deleted line 9:
Deleted line 10:
Changed lines 97-160 from:

* Resource management and scheduling strategies. The resource
management topic can be subdivided in two parts. First, as
applications become more complex, their mapping to resource also
becomes more complex. For example, some parts of an application can
depend on the input data size or the number of actual resources. The
current solution is to mix application optimization code with the
functional code of an application. While it may work for static
applications, it is even less satisfactory for dynamic
applications. Therefore, the issue is to revisit interactions between
application and resource management systems. Second, system services
should be able to operate efficiently at very large scale. Hence, the
issue is to design distributed algorithms for example for deployment,
discovery or composition/orchestration services. Finally,
applications can be written as workflows with data dependencies
between tasks (which in turn can be parallel). These applications need
specific resource and data management heuristics. Moreover, typical
high performance applications require both an initial mapping of data
and tasks and periodic re-scheduling to address the dynamic behavior
of the system environment (e.g. failure) or of the application
itself. The programming model and the role of the programmer are
important in order to allow for flexibility in the scheduling
algorithm. For instance, using an abstract data flow graph
representation allows using, transparently with respect to the
application code, graph partitioning algorithms in order to
redistribute the workload together with a local (or global) work
stealing algorithm to adapt variation of the load among the cores.

* Runtime systems. Traditional processors have reached architectural
limits which heterogeneous multicore designs and hardware
specialization (e.g. coprocessors, accelerators, etc.) intend to
address. However, exploiting such machines introduces numerous
challenging issues at all levels, ranging from programming models and
compilers to the design of scalable hardware solutions. The design of
efficient runtime systems for these architectures is a critical
issue. In the C2S@Exa project, we intend to address research topics
that are all essential steps in solving the general parallel
composability issue at the runtime system level: (1) micro runtime
system supporting multiple parallel paradigms, (2) framework for
experimenting with hybrid programs, (3) adaptive parallelism and code
auto-tuning and, (4) validation using well known parallel kernels.

* Techniques and tools for static and dynamic processing of numerical
data sets. Finite element or finite volume PDE solvers such as those
considered in the C2S@Exa project are naturally adapted to the use of
unstructured meshes for the discretization of the underlying
computational domain. Such meshes are amenable to local refinement or
coarsening so as to match the complex geometrical features or/and the
steep variations of the solved physical quantities. However, this
enhanced flexibility comes at the expense of an increased difficulty
to optimize and preserve data locality. Moreover, very large-scale 3D
simulations generally involve unstructured meshes sizes ranging from a
few tens up to several hundreds million elements if not more.
Distributing such meshes on several thousands of processing units in
view of their parallel processing by the PDE solvers is a task that
can hardly be tackled as a sequential process, especially if this
partitioning has to be performed more than once during a simulation as
a result of an auto-adaptive discretization process. Overall, several
subtopics have to be considered here that are related to the
manipulation of very large numerical data sets for their distribution
either before (i.e. static partitioning) or during (i.e. dynamic
repartitioning - or re-meshing) a numerical simulation, dynamic load
balancing, and for their runtime analysis (i.e. computational
steering).
to:
* Resource management and scheduling strategies. The resource management topic can be subdivided in two parts. First, as applications become more complex, their mapping to resources also becomes more complex. For example, some parts of an application can depend on the input data size or the number of actual resources. The current solution is to mix application optimization code with the functional code of an application. While it may work for static applications, it is even less satisfactory for dynamic
applications. Therefore, the issue is to revisit interactions between application and resource management systems. Second, system services should be able to operate efficiently at very large scale. Hence, the issue is to design distributed algorithms for example for deployment, discovery or composition/orchestration services. Finally, applications can be written as workflows with data dependencies between tasks (which in turn can be parallel). These applications need specific resource and data management heuristics. Moreover, typical high performance applications require both an initial mapping of data and tasks and periodic re-scheduling to address the dynamic behavior of the system environment (e.g. failure) or of the application itself. The programming model and the role of the programmer are important in order to allow for flexibility in the scheduling algorithm. For instance, using an abstract data flow graph representation allows using, transparently with respect to the application code, graph partitioning algorithms in order to redistribute the workload together with a local (or global) work stealing algorithm to adapt to variations of the load among the cores.
* Runtime systems. Traditional processors have reached architectural limits which heterogeneous multicore designs and hardware specialization (e.g. coprocessors, accelerators, etc.) intend to address. However, exploiting such machines introduces numerous challenging issues at all levels, ranging from programming models and compilers to the design of scalable hardware solutions. The design of efficient runtime systems for these architectures is a critical
issue. In the project, we intend to address research topics that are all essential steps in solving the general parallel composability issue at the runtime system level: (1) micro runtime system supporting multiple parallel paradigms, (2) framework for experimenting with hybrid programs, (3) adaptive parallelism and code auto-tuning and, (4) validation using well known parallel kernels.

* Techniques and tools for static and dynamic processing of numerical data sets. Finite element or finite volume differential equation solvers such as those
considered in the project are naturally adapted to the use of unstructured meshes for the discretization of the underlying computational domain. Such meshes are amenable to local refinement or coarsening so as to match the complex geometrical features and/or the steep variations of the solved physical quantities. However, this enhanced flexibility comes at the expense of an increased difficulty to optimize and preserve data locality. Moreover, very large-scale 3D simulations generally involve unstructured mesh sizes ranging from a few tens of millions up to several hundred million elements, if not more. Distributing such meshes on several thousands of processing units in view of their parallel processing by the PDE solvers is a task that can hardly be tackled as a sequential process, especially if this partitioning has to be performed more than once during a simulation as a result of an auto-adaptive discretization process. Overall, several subtopics have to be considered here that are related to the manipulation of very large numerical data sets for their distribution either before (i.e. static partitioning) or during (i.e. dynamic repartitioning or re-meshing) a numerical simulation, dynamic load
balancing, and for their runtime analysis (i.e. computational steering).
August 17, 2012, at 02:59 PM by 138.96.201.175 -
Added lines 66-178:
Pole 3: Optimization of the performance of numerical solvers\\
Coordinators: François Pellegrini (Bacchus project-team) and %newwin% [[http://runtime.bordeaux.inria.fr/oaumage/oa | Olivier Aumage]] (Runtime project-team)

>>frame indent width=50pct<<
The research and development activities at the heart of the thematic
poles 1 and 2 deal with core ingredients of the numerical treatment of
the systems of differential equations modeling the computational
physics problems considered in the project. A feature shared by the
techniques adopted for the discretization of these systems of
differential equations is that they rely on unstructured meshes and
modern principles for building high order, possibly auto-adaptive
(i.e. hp-adaptive) finite element or finite volume type methods. At
the discrete level, these methods result in large sparse or hybrid
sparse-dense linear systems of equations for which we propose to study
solution strategies that combine, in a hierarchical way, iterative and
direct building blocks. Then, the activities undertaken in these
poles aim at designing differential equation solvers and associated
numerical kernels that exploit as far as possible the architectural
characteristics of modern massively parallel computing platforms.
Besides, these solvers will be characterized by complex data
structures and highly irregular data access patterns which will
drastically impact the sustained computational performance, in
particular if one does not take into consideration crucial
complementary questions related to the mapping of data sets and
numerical algorithms to the targeted computing platforms. This is
exactly the objective of the thematic pole 3 which is concerned with
the optimization of the performance of numerical solvers by
considering topics that often make a link or are at the interface
between computer science techniques and tools for exploiting high
performance computing systems and application software. While doing
so, two architectural considerations that must be dealt with are the
hierarchical organization of the memory and the heterogeneity of
processing units and networks. The topics to be addressed in this
thematic pole are the following:

* Resource management and scheduling strategies. The resource
management topic can be subdivided in two parts. First, as
applications become more complex, their mapping to resources also
becomes more complex. For example, some parts of an application can
depend on the input data size or the number of actual resources. The
current solution is to mix application optimization code with the
functional code of an application. While it may work for static
applications, it is even less satisfactory for dynamic
applications. Therefore, the issue is to revisit interactions between
application and resource management systems. Second, system services
should be able to operate efficiently at very large scale. Hence, the
issue is to design distributed algorithms for example for deployment,
discovery or composition/orchestration services. Finally,
applications can be written as workflows with data dependencies
between tasks (which in turn can be parallel). These applications need
specific resource and data management heuristics. Moreover, typical
high performance applications require both an initial mapping of data
and tasks and periodic re-scheduling to address the dynamic behavior
of the system environment (e.g. failure) or of the application
itself. The programming model and the role of the programmer are
important in order to allow for flexibility in the scheduling
algorithm. For instance, using an abstract data flow graph
representation allows using, transparently with respect to the
application code, graph partitioning algorithms in order to
redistribute the workload together with a local (or global) work
stealing algorithm to adapt to variations of the load among the
cores (see the sketch after this list).

* Runtime systems. Traditional processors have reached architectural
limits which heterogeneous multicore designs and hardware
specialization (e.g. coprocessors, accelerators, etc.) intend to
address. However, exploiting such machines introduces numerous
challenging issues at all levels, ranging from programming models and
compilers to the design of scalable hardware solutions. The design of
efficient runtime systems for these architectures is a critical
issue. In the C2S@Exa project, we intend to address research topics
that are all essential steps in solving the general parallel
composability issue at the runtime system level: (1) micro runtime
system supporting multiple parallel paradigms, (2) framework for
experimenting with hybrid programs, (3) adaptive parallelism and code
auto-tuning and, (4) validation using well known parallel kernels.

* Techniques and tools for static and dynamic processing of numerical
data sets. Finite element or finite volume PDE solvers such as those
considered in the C2S@Exa project are naturally adapted to the use of
unstructured meshes for the discretization of the underlying
computational domain. Such meshes are amenable to local refinement or
coarsening so as to match the complex geometrical features or/and the
steep variations of the solved physical quantities. However, this
enhanced flexibility comes at the expense of an increased difficulty
to optimize and preserve data locality. Moreover, very large-scale 3D
simulations generally involve unstructured meshes sizes ranging from a
few tens up to several hundreds million elements if not more.
Distributing such meshes on several thousands of processing units in
view of their parallel processing by the PDE solvers is a task that
can hardly be tackled as a sequential process, especially if this
partitioning has to be performed more than once during a simulation as
a result of an auto-adaptive discretization process. Overall, several
subtopics have to be considered here that are related to the
manipulation of very large numerical data sets for their distribution
either before (i.e. static partitioning) or during (i.e. dynamic
repartitioning - or re-meshing) a numerical simulation, dynamic load
balancing, and for their runtime analysis (i.e. computational
steering).
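
As a toy illustration of the work-stealing scheme mentioned in the resource
management item above (the deque-per-worker design, the single global lock,
the task sizes and the worker count are all assumptions made to keep the
sketch short, not the algorithms the project will actually design):

[@
import random
import threading
from collections import deque

def run_with_work_stealing(task_sizes, nworkers=4):
    """Toy work-stealing scheduler: each worker owns a deque of tasks and an
    idle worker steals from the back of a randomly chosen victim's deque."""
    deques = [deque() for _ in range(nworkers)]
    for i, size in enumerate(task_sizes):        # initial, possibly unbalanced, mapping
        deques[i % nworkers].append(size)
    completed = [0] * nworkers
    lock = threading.Lock()                      # one global lock keeps the toy simple

    def worker(me):
        while True:
            with lock:
                if deques[me]:
                    size = deques[me].popleft()  # pop from the front of my own deque
                else:
                    victims = [v for v in range(nworkers) if deques[v]]
                    if not victims:
                        return                   # no work left anywhere
                    size = deques[random.choice(victims)].pop()  # steal from the back
            sum(i * i for i in range(size))      # simulate the actual computation
            completed[me] += 1

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(nworkers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return completed

if __name__ == "__main__":
    # Deliberately unbalanced load: a few large tasks and many small ones.
    tasks = [500_000] * 4 + [5_000] * 60
    print("tasks completed per worker:", run_with_work_stealing(tasks))
@]

Production runtime systems implement the same idea with lock-free deques and
locality-aware victim selection; the sketch only shows the scheduling
structure.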
>><<

Pole 4: \\
Coordinators: %newwin% [[ | ]] ( project-team) and %newwin% [[ | ]] (project-team)

>>frame indent width=50pct<<

>><<

Pole 5: \\
Coordinators: %newwin% [[ | ]] ( project-team) and %newwin% [[ | ]] (project-team)

>>frame indent width=50pct<<

>><<
August 17, 2012, at 02:34 PM by 138.96.201.175 -
Changed lines 46-47 from:
physical problems.\\
to:
physical problems.\\\
Changed lines 49-68 from:
pole is about the mathematical analysis and numerical study of PDEs
modeling various types of physical problems namely, transport,
diffusion, convection-diffusion, waves and N-body problems. In the
following, we describe research activities and expected methodological
contributions in relation with the numerical treatment of these PDE
models. While doing so, we do not exclusively refer to the scientific
and engineering use cases discussed in section 3.3 but also target
foreseen applications that will be considered in the course of the
project. Indeed, nuclear energy production and radioactive waste
management are the two application domains that will be addressed at
the beginning of the project. The corresponding scientific and
engineering use cases typically deal with transport, diffusion and
convection-diffusion problems. This includes in particular the use
cases have been proposed by two external partners of the project which
are ANDRA (French National Agency for Radioactive Waste Management)
and CEA (French Alternative Energies and Atomic Energy
Commission). Other applications that are (or will be) considered by
the project-teams involved in this thematic pole deal with transport,
diffusion and wave propagation problems.
to:
pole is about the mathematical analysis and numerical study of
differential equations modeling various types of physical problems.
Nuclear energy production and radioactive waste management are two
application domains that are addressed first. The corresponding
scientific and engineering use cases deal with transport, diffusion
and convection-diffusion problems. These use cases are proposed by the
two external partners that are participating in the project at its
start: ANDRA (French National Agency for Radioactive Waste Management)
and CEA (French Alternative Energies and Atomic Energy Commission). In
addition to these two central application challenges, other physical
problems (such as seismic wave propagation and electromagnetic wave
propagation) are considered by the project partners for the purpose of
demonstrating the benefits of the methodological contributions of this
thematic pole (and the other thematic poles as well).
August 17, 2012, at 02:19 PM by 138.96.201.175 -
Added lines 23-69:

>>frame indent width=50pct<<
The availability of massively parallel systems with theoretical
floating point performances in the petascale range, and in the
exascale range in a near future, will enable the numerical treatment
of more challenging problems involving, on the one hand, discretized
models with higher spatial resolution and, on the other hand, the
access to longer time scales but also to more complex physical models
possibly encompassing multiple space and time scales. However, the
simulation of such complex physical phenomena will require very
accurate and efficient numerical schemes that will ideally be able to
automatically switch between arbitrarily high order accuracy in regions
where the problem solution is smooth, and low order accuracy combined
with local adaptivity of the discretization mesh in less regular
regions. Moreover, the targeted simulation tools will possibly have to
couple several mathematical models in a multiscale space-time
setting. From the algorithmic point of view, the implementation of the
proposed numerical schemes will have to be optimized for maximizing
both the single-core computational performance and the scalability in
view of exploiting massive parallelism. In this context, the objective
of the activities undertaken in this thematic pole is to address these
needs for some general widely used types of differential equations, as
well as to develop complex solvers tuned for some specific challenging
physical problems.\\
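
As a minimal, purely illustrative sketch of the local mesh adaptivity
mentioned above (the 1D mesh, the tanh test profile and the jump-based
refinement criterion are assumptions made for the example, not the
hp-adaptive schemes targeted by the project):

[@
import numpy as np

def refine_where_steep(nodes, f, threshold):
    """Split every 1D cell whose solution jump exceeds the threshold;
    smooth regions keep their coarse cells."""
    values = f(nodes)
    new_nodes = [nodes[0]]
    for left, right, vl, vr in zip(nodes[:-1], nodes[1:], values[:-1], values[1:]):
        if abs(vr - vl) > threshold:            # steep variation: refine this cell
            new_nodes.append(0.5 * (left + right))
        new_nodes.append(right)
    return np.array(new_nodes)

# Hypothetical solution profile: smooth on the left, a sharp front near x = 0.7.
f = lambda x: np.tanh(40.0 * (x - 0.7))

mesh = np.linspace(0.0, 1.0, 21)                # uniform coarse mesh
for sweep in range(3):                          # a few adaptation sweeps
    mesh = refine_where_steep(mesh, f, threshold=0.2)
    print(f"sweep {sweep + 1}: {len(mesh) - 1} cells")
@]

After a few sweeps the cells concentrate around the front while the smooth
region keeps its coarse resolution, which is the behavior the paragraph above
asks of adaptive schemes.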

The expertise of the project-teams that are active in this thematic
pole is about the mathematical analysis and numerical study of PDEs
modeling various types of physical problems, namely transport,
diffusion, convection-diffusion, waves and N-body problems. In the
following, we describe research activities and expected methodological
contributions in relation with the numerical treatment of these PDE
models. While doing so, we do not exclusively refer to the scientific
and engineering use cases discussed in section 3.3 but also target
foreseen applications that will be considered in the course of the
project. Indeed, nuclear energy production and radioactive waste
management are the two application domains that will be addressed at
the beginning of the project. The corresponding scientific and
engineering use cases typically deal with transport, diffusion and
convection-diffusion problems. This includes in particular the use
cases that have been proposed by two external partners of the project, which
are ANDRA (French National Agency for Radioactive Waste Management)
and CEA (French Alternative Energies and Atomic Energy
Commission). Other applications that are (or will be) considered by
the project-teams involved in this thematic pole deal with transport,
diffusion and wave propagation problems.

>><<
August 17, 2012, at 02:14 PM by 138.96.201.175 -
Changed lines 21-24 from:
(:div class="Organization":)
!!!! [[ANDRA use case/ANDRA use case]]
!!!!
[[CEA use case/CEA use case]]
(:divend:)
to:
Pole 2: Numerical schemes for PDE models\\
Coordinators: %newwin% [[http://www.irisa.fr/sage/jocelyne | Jocelyne Erhel]] (Sage project-team) and %newwin% [[http://www-irma.u-strasbg.fr/~helluy | Philippe Helluy]] (Calvi project-team)
August 17, 2012, at 02:02 PM by 138.96.201.175 -
Changed lines 16-18 from:
* Continuous solvers. They consist of domain decomposition techniques that take into account the mathematical properties of the PDEs. They are often optimal but they still require the solution of some PDEs on the subproblems. Those local solutions can be obtained using either sparse direct solvers or iterative solvers depending on the size of the local problems and features of the target computing platforms.

The combination of the above-mentioned techniques will enable to exploit many levels of parallelism with various granularity suited for an effective use of the forthcoming heterogeneous extreme-scale platforms. The advances in the methods will benefit in particular but not exclusively to the solution of the systems of PDEs characterizing the uses cases considered in the C2S@Exa project.
to:
* Continuous solvers. They consist of domain decomposition techniques that take into account the mathematical properties of the differential equations. They are often optimal but they still require the solution of some differential equations on the subproblems. Those local solutions can be obtained using either sparse direct solvers or iterative solvers depending on the size of the local problems and features of the target computing platforms.

The combination of the above-mentioned techniques will make it possible to exploit many levels of parallelism with various granularities suited for an effective use of the forthcoming heterogeneous extreme-scale platforms. The advances in the methods will benefit, in particular but not exclusively, the solution of the systems of differential equations characterizing the use cases considered in the project.
August 17, 2012, at 02:01 PM by 138.96.201.175 -
Changed lines 10-54 from:
# Core numerical kernels. Many HPC applications intensively rely on
the same core and critical kernels such as dense linear algebra
solvers and Fast Fourier Transforms. Their efficient implementation
via innovative algorithms is consequently central for the overall
performance. Those kernels usually have a regular data access pattern
which makes them candidates of choice for designing novel algorithms
on each new hardware generation.

# Sparse direct solvers. They are based on Gaussian elimination and
intensively use dense linear algebra kernels. They are very
attractive because of their robustness and accuracy. Although the size
of the problems that can be solved with direct solvers keeps
increasing thanks to the evolution of algorithms and architectures
their intrinsic memory and time complexity is not always not
affordable for huge 3D problems. Nevertheless, direct solvers can be
used as building-blocks for iterative methods.

# Preconditioned iterative solvers. We consider general purpose
techniques, namely preconditioned Krylov subspace solvers. The
targeted preconditioning techniques only exploit information available
in the matrices of the linear systems associated with the problem (so
called algebraic solvers). Compared with direct solvers, the
complexity is significantly reduced by attempting to provide a
trade-off between memory/processing cost on one side and
robustness/accuracy on the other side. Algebraic domain decomposition
techniques exhibit such computational features and exploit sparse
direct solver capabilities on subproblems, their convergence can also
be enhanced thanks to deflating or filtering techniques. Although
generic, they may still be suboptimal for some classes of problems
where information from the continuous problems (giving rise to the
linear system) might be exploited.

# Continuous solvers. They consist of domain decomposition techniques
that take into account the mathematical properties of the PDEs. They
are often optimal but they still require the solution of some PDEs on
the subproblems. Those local solutions can be obtained using either
sparse direct solvers or iterative solvers depending on the size of
the local problems and features of the target computing platforms.

The combination of the above-mentioned techniques will enable to
exploit many levels of parallelism with various granularity suited for
an effective use of the forthcoming heterogeneous extreme-scale
platforms. The advances in the methods will benefit in particular but
not exclusively to the solution of the systems of PDEs characterizing
the uses cases considered in the C2S@Exa project.
to:
* Core numerical kernels. Many HPC applications intensively rely on the same core and critical kernels such as dense linear algebra solvers and Fast Fourier Transforms. Their efficient implementation via innovative algorithms is consequently central for the overall performance. Those kernels usually have a regular data access pattern which makes them candidates of choice for designing novel algorithms on each new hardware generation.

* Sparse direct solvers. They are based on Gaussian elimination and intensively use dense linear algebra kernels. They are very attractive because of their robustness and accuracy. Although the size of the problems that can be solved with direct solvers keeps increasing thanks to the evolution of algorithms and architectures, their intrinsic memory and time complexity is not always affordable for huge 3D problems. Nevertheless, direct solvers can be used as building-blocks for iterative methods.

* Preconditioned iterative solvers. We consider general purpose techniques, namely preconditioned Krylov subspace solvers. The targeted preconditioning techniques only exploit information available in the matrices of the linear systems associated with the problem (so called algebraic solvers). Compared with direct solvers, the complexity is significantly reduced by attempting to provide a trade-off between memory/processing cost on one side and robustness/accuracy on the other side. Algebraic domain decomposition techniques exhibit such computational features and exploit sparse direct solver capabilities on subproblems; their convergence can also be enhanced thanks to deflation or filtering techniques. Although generic, they may still be suboptimal for some classes of problems where information from the continuous problems (giving rise to the linear system) might be exploited.

* Continuous solvers. They consist of domain decomposition techniques that take into account the mathematical properties of the PDEs. They are often optimal but they still require the solution of some PDEs on the subproblems. Those local solutions can be obtained using either sparse direct solvers or iterative solvers depending on the size of the local problems and features of the target computing platforms.

The combination of the above-mentioned techniques will make it possible to exploit many levels of parallelism with various granularities suited for an effective use of the forthcoming heterogeneous extreme-scale platforms. The advances in the methods will benefit, in particular but not exclusively, the solution of the systems of PDEs characterizing the use cases considered in the C2S@Exa project.
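
To give a concrete, deliberately simplified flavor of the preconditioned
Krylov and domain decomposition items above, here is a sketch of the
conjugate gradient method with a block-Jacobi preconditioner, i.e. a crude
one-level algebraic decomposition where each diagonal block plays the role of
a subdomain solved directly; the 1D Laplacian test matrix, the block size and
the tolerances are illustrative assumptions, not the project's solvers.

[@
import numpy as np

def block_jacobi_preconditioner(A, block):
    """Invert diagonal blocks of A with a direct solver (toy domain decomposition)."""
    n = A.shape[0]
    inv_blocks = []
    for start in range(0, n, block):
        end = min(start + block, n)
        inv_blocks.append((start, end, np.linalg.inv(A[start:end, start:end])))
    def apply(r):
        z = np.zeros_like(r)
        for start, end, inv in inv_blocks:
            z[start:end] = inv @ r[start:end]
        return z
    return apply

def preconditioned_cg(A, b, precond, tol=1e-10, maxit=500):
    """Standard preconditioned conjugate gradient for SPD systems."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Toy SPD system: 1D Laplacian (tridiagonal) of size 200, random right-hand side.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)

M = block_jacobi_preconditioner(A, block=25)     # 8 "subdomains" solved directly
x, iters = preconditioned_cg(A, b, M)
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
@]

Real algebraic domain decomposition preconditioners add overlap, coarse
spaces and deflation on top of this basic subdomain-solve idea.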
August 17, 2012, at 01:59 PM by 138.96.201.175 -
Added lines 9-54:

# Core numerical kernels. Many HPC applications intensively rely on
the same core and critical kernels such as dense linear algebra
solvers and Fast Fourier Transforms. Their efficient implementation
via innovative algorithms is consequently central for the overall
performance. Those kernels usually have a regular data access pattern
which makes them candidates of choice for designing novel algorithms
on each new hardware generation.

# Sparse direct solvers. They are based on Gaussian elimination and
intensively use dense linear algebra kernels. They are very
attractive because of their robustness and accuracy. Although the size
of the problems that can be solved with direct solvers keeps
increasing thanks to the evolution of algorithms and architectures,
their intrinsic memory and time complexity is not always
affordable for huge 3D problems. Nevertheless, direct solvers can be
used as building-blocks for iterative methods.

# Preconditioned iterative solvers. We consider general purpose
techniques, namely preconditioned Krylov subspace solvers. The
targeted preconditioning techniques only exploit information available
in the matrices of the linear systems associated with the problem (so
called algebraic solvers). Compared with direct solvers, the
complexity is significantly reduced by attempting to provide a
trade-off between memory/processing cost on one side and
robustness/accuracy on the other side. Algebraic domain decomposition
techniques exhibit such computational features and exploit sparse
direct solver capabilities on subproblems, their convergence can also
be enhanced thanks to deflating or filtering techniques. Although
generic, they may still be suboptimal for some classes of problems
where information from the continuous problems (giving rise to the
linear system) might be exploited.

# Continuous solvers. They consist of domain decomposition techniques
that take into account the mathematical properties of the PDEs. They
are often optimal but they still require the solution of some PDEs on
the subproblems. Those local solutions can be obtained using either
sparse direct solvers or iterative solvers depending on the size of
the local problems and features of the target computing platforms.

The combination of the above-mentioned techniques will make it possible to
exploit many levels of parallelism with various granularities suited for
an effective use of the forthcoming heterogeneous extreme-scale
platforms. The advances in the methods will benefit, in particular but
not exclusively, the solution of the systems of PDEs characterizing
the use cases considered in the C2S@Exa project.
August 17, 2012, at 01:58 PM by 138.96.201.175 -
Changed line 4 from:
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~jylexcel | Jean-Yves L’Excellent]] (Roma project-team) and %newwin% [[http://hiepacs.bordeaux.inria.fr/giraud | Luc Giraud]] (Hiepacs project-team)\\\
to:
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~jylexcel | Jean-Yves L’Excellent]] (Roma project-team) and %newwin% [[http://hiepacs.bordeaux.inria.fr/giraud | Luc Giraud]] (Hiepacs project-team)
August 17, 2012, at 01:58 PM by 138.96.201.175 -
Added line 6:
>>frame indent width=50pct<<
Changed line 9 from:
to:
>><<
August 17, 2012, at 01:57 PM by 138.96.201.175 -
Deleted line 5:
%block bgcolor=#f0f9ff border='1px solid gray' padding=1px%[@
Deleted line 7:
@]
August 17, 2012, at 01:57 PM by 138.96.201.175 -
Changed lines 7-26 from:
The design of the extreme-scale computing platforms that are expected
to become available in the forthcoming decade will represent a
convergence of technological trends and the boundary conditions
imposed by over half a century of algorithm and application software
development. These platforms will be hierarchical as they will provide
coarse grain parallelism between nodes and fine grain parallelism
within each node. They are also expected to be very heterogeneous
since multicore chips and accelerators have completely different
architectures and potentials. It is clear that such a degree of
complexity will embody radical changes regarding the software
infrastructure for large-scale scientific applications. Central
numerical kernels such as fast transforms or numerical linear algebra
solvers –dense or sparse– are intensively used in many large-scale
computer simulations in science and engineering, where they often
account for the most time-consuming part of the computations. It is
widely recognized that there is not a single strategy that outperforms
all the others for a large class or problems. To address these
challenges, we consider in the project numerical kernels that
exhibit a natural hierarchical dependency, which can be summarized in
a bottom-up description as follows:
to:
The design of the extreme-scale computing platforms that are expected to become available in the forthcoming decade will represent a convergence of technological trends and the boundary conditions imposed by over half a century of algorithm and application software development. These platforms will be hierarchical as they will provide coarse grain parallelism between nodes and fine grain parallelism within each node. They are also expected to be very heterogeneous since multicore chips and accelerators have completely different architectures and potentials. It is clear that such a degree of complexity will embody radical changes regarding the software infrastructure for large-scale scientific applications. Central numerical kernels such as fast transforms or numerical linear algebra solvers –dense or sparse– are intensively used in many large-scale computer simulations in science and engineering, where they often
account for the most time-consuming part of the computations. It is widely recognized that there is not a single strategy that outperforms all the others for a large class or problems. To address these challenges, we consider in the project numerical kernels that exhibit a natural hierarchical dependency, which can be summarized in a bottom-up description as follows:
August 17, 2012, at 01:56 PM by 138.96.201.175 -
Changed line 6 from:
%block bgcolor=#f0f9ff border='1px solid gray' padding=2px%[@
to:
%block bgcolor=#f0f9ff border='1px solid gray' padding=1px%[@
August 17, 2012, at 01:55 PM by 138.96.201.175 -
Changed line 6 from:
%block bgcolor=#f0f9ff border='1px solid gray' padding=5px%[@
to:
%block bgcolor=#f0f9ff border='1px solid gray' padding=2px%[@
August 17, 2012, at 01:55 PM by 138.96.201.175 -
Added line 6:
%block bgcolor=#f0f9ff border='1px solid gray' padding=5px%[@
Changed line 27 from:
to:
@]
August 17, 2012, at 01:45 PM by 138.96.201.175 -
Changed lines 4-6 from:
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~jylexcel | Jean-Yves L’Excellent]] (Roma project-team) and %newwin% [[http://hiepacs.bordeaux.inria.fr/giraud | Luc Giraud]] (Hiepacs project-team)\\

>>seeright<<
to:
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~jylexcel | Jean-Yves L’Excellent]] (Roma project-team) and %newwin% [[http://hiepacs.bordeaux.inria.fr/giraud | Luc Giraud]] (Hiepacs project-team)\\\
Changed line 23 from:
challenges, we consider in the C2S@Exa project numerical kernels that
to:
challenges, we consider in the project numerical kernels that
Deleted line 25:
>><<
August 17, 2012, at 01:45 PM by 138.96.201.175 -
Changed lines 6-7 from:
>>nutshell bgcolor=lightblue<<
Test
to:
>>seeright<<
The design of the extreme-scale computing platforms that are expected
to become available in the forthcoming decade will represent a
convergence of technological trends and the boundary conditions
imposed by over half a century of algorithm and application software
development. These platforms will be hierarchical as they will provide
coarse grain parallelism between nodes and fine grain parallelism
within each node. They are also expected to be very heterogeneous
since multicore chips and accelerators have completely different
architectures and potentials. It is clear that such a degree of
complexity will embody radical changes regarding the software
infrastructure for large-scale scientific applications. Central
numerical kernels such as fast transforms or numerical linear algebra
solvers –dense or sparse– are intensively used in many large-scale
computer simulations in science and engineering, where they often
account for the most time-consuming part of the computations. It is
widely recognized that there is not a single strategy that outperforms
all the others for a large class or problems. To address these
challenges, we consider in the C2S@Exa project numerical kernels that
exhibit a natural hierarchical dependency, which can be summarized in
a bottom-up description as follows:
August 17, 2012, at 01:41 PM by 138.96.201.175 -
Changed line 6 from:
>>nutshell bgcolor=blue<<
to:
>>nutshell bgcolor=lightblue<<
August 17, 2012, at 01:41 PM by 138.96.201.175 -
Changed line 6 from:
>>nutshell bgcolor=ivory<<
to:
>>nutshell bgcolor=blue<<
August 17, 2012, at 01:40 PM by 138.96.201.175 -
Changed line 6 from:
>>nutshell border=blue<<
to:
>>nutshell bgcolor=ivory<<
August 17, 2012, at 01:40 PM by 138.96.201.175 -
Changed line 6 from:
>>nutshell border="blue"<<
to:
>>nutshell border=blue<<
August 17, 2012, at 01:40 PM by 138.96.201.175 -
Changed line 6 from:
>>nutshell border="1px blue"<<
to:
>>nutshell border="blue"<<
August 17, 2012, at 01:39 PM by 138.96.201.175 -
Changed line 6 from:
>>nutshell<<
to:
>>nutshell border="1px blue"<<
August 17, 2012, at 01:38 PM by 138.96.201.175 -
Changed lines 6-8 from:
to:
>>nutshell<<
Test
>><<
August 17, 2012, at 01:29 PM by 138.96.201.175 -
Changed lines 11-12 from:
!!!!! [[ANDRA use case/ANDRA use case]]
!!!!! [[CEA use case/CEA use case]]
to:
!!!! [[ANDRA use case/ANDRA use case]]
!!!! [[CEA use case/CEA use case]]
August 17, 2012, at 01:29 PM by 138.96.201.175 -
Changed lines 11-12 from:
!! [[ANDRA use case/ANDRA use case]]
!! [[CEA use case/CEA use case]]
to:
!!!!! [[ANDRA use case/ANDRA use case]]
!!!!! [[CEA use case/CEA use case]]
August 17, 2012, at 01:29 PM by 138.96.201.175 -
Changed lines 11-12 from:
* [[ANDRA use case/ANDRA use case]]
* [[CEA use case/CEA use case]]
to:
!! [[ANDRA use case/ANDRA use case]]
!! [[CEA use case/CEA use case]]
August 17, 2012, at 01:28 PM by 138.96.201.175 -
Changed line 10 from:
(:div class="list-cat":)
to:
(:div class="Organization":)
August 17, 2012, at 01:28 PM by 138.96.201.175 -
Changed line 10 from:
(:div class="list-blogroll":)
to:
(:div class="list-cat":)
August 17, 2012, at 01:28 PM by 138.96.201.175 -
Changed lines 4-11 from:
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~jylexcel | Jean-Yves L’Excellent]](Roma project-team) and %newwin% [[http://hiepacs.bordeaux.inria.fr/giraud | Luc Giraud]] (Hiepacs project-team)\\




(:div class="Organization":)
!!!! [[ANDRA use case/ANDRA use case]]
!!!! [[CEA use case/CEA use case]]
to:
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~jylexcel | Jean-Yves L’Excellent]] (Roma project-team) and %newwin% [[http://hiepacs.bordeaux.inria.fr/giraud | Luc Giraud]] (Hiepacs project-team)\\





(:div class="list-blogroll":)
* [[ANDRA use case/ANDRA use case]]
* [[CEA use case/CEA use case]]
August 17, 2012, at 01:18 PM by 138.96.201.175 -
Changed lines 1-5 from:
Research and development activities undertaken in the project are organized along 5 thematic poles.
to:
Research and development activities undertaken in the project are organized along 5 thematic poles.\\\

Pole 1: Numerical linear algebra\\
Coordinators: %newwin% [[http://graal.ens-lyon.fr/~jylexcel | Jean-Yves L’Excellent]](Roma project-team) and %newwin% [[http://hiepacs.bordeaux.inria.fr/giraud | Luc Giraud]] (Hiepacs project-team)\\
August 17, 2012, at 01:11 PM by 138.96.201.175 -
Added lines 1-4:
Research and development activities undertaken in the project are organized along 5 thematic poles.

August 16, 2012, at 03:59 PM by 138.96.201.175 -
Added lines 1-4:
(:div class="Organization":)
!!!! [[ANDRA use case/ANDRA use case]]
!!!! [[CEA use case/CEA use case]]
(:divend:)
August 16, 2012, at 03:48 PM by 138.96.201.175 -