`TECSER project home page`

`Novel high performance numerical solution techniques for RCS computations`

`Research activities`

`High order hybridized discontinuous Galerkin methods`

During the last 10 years, discontinuous Galerkin (DG) methods have been extensively considered for obtaining approximate solutions of the system of Maxwell equations modeling electromagnetic wave propagation. Thanks to the discontinuity of the approximation, these methods offer many advantages, such as adaptivity to complex geometries through the use of unstructured, possibly non-conforming meshes, easily obtained high order accuracy, hp-adaptivity, and natural parallelism. Despite these advantages, however, DG methods have one main drawback that is particularly acute for stationary problems: the number of globally coupled degrees of freedom (d.o.f.) is much greater than the number of d.o.f. required by conforming finite element methods for the same accuracy. Consequently, DG methods are expensive in terms of both CPU time and memory consumption, especially for time-harmonic problems. Hybridization of DG methods aims to address this issue while keeping all the advantages of DG methods. In the TECSER project, we consider and further develop a family of hybridized discontinuous Galerkin methods (HDGM) recently introduced for the 2D and 3D time-harmonic Maxwell equations in mixed form. HDGM introduce an additional "hybrid" variable on the faces of the elements, on which the definition of the local (element-wise) solutions is based. A so-called "conservativity condition" is imposed on the numerical trace, whose definition involves the hybrid variable, at the interface between neighboring elements. As a result, HDGM produce a linear system in terms of the d.o.f. of the hybrid variable only, so the number of globally coupled d.o.f. is reduced. The local values of the electromagnetic fields can then be obtained by solving local problems element-by-element. Recently, we have studied such HDGM formulations for the systems of 2D and 3D time-harmonic Maxwell equations.
In the 3D case, we have proposed an HDG formulation taking the tangential component of the magnetic field as the hybrid variable, and we have shown that the reduced system in the hybrid variable has a wave-equation-like characterization, and that the tangential components of the numerical traces of both the electric and magnetic fields are single-valued. Very promising results have been obtained; however, the development of the method has so far been limited to a piecewise linear interpolation of the electromagnetic field components and the use of conforming tetrahedral meshes. Extension to higher order accuracy, combined with a local (i.e. element-wise) definition of the interpolation order and a local adaptation (i.e. refinement) of the mesh in a non-conforming way (i.e. with hanging nodes), will be considered in this project.
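The algebraic mechanism behind hybridization can be illustrated with a minimal NumPy sketch. This is a toy dense system, not an actual Maxwell discretization: the interior unknowns of a single "element" are eliminated in favor of trace (hybrid) unknowns via a Schur complement, and the interior solution is then recovered by a purely local back-substitution, mirroring the element-by-element recovery described above.

```python
import numpy as np

# Toy block system  [A B; C D] [u; lam] = [f; g]  where u stands for the
# element-interior d.o.f. and lam for the trace (hybrid) d.o.f.
rng = np.random.default_rng(0)
n_int, n_tr = 6, 3
A = rng.standard_normal((n_int, n_int)) + n_int * np.eye(n_int)  # interior block
B = rng.standard_normal((n_int, n_tr))                           # interior-trace coupling
C = rng.standard_normal((n_tr, n_int))
D = rng.standard_normal((n_tr, n_tr)) + n_tr * np.eye(n_tr)      # trace block
f = rng.standard_normal(n_int)
g = rng.standard_normal(n_tr)

# Condensed ("hybridized") system: only the trace unknowns are globally coupled.
Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
S = D - C @ Ainv_B                       # Schur complement
lam = np.linalg.solve(S, g - C @ Ainv_f)

# Local recovery: element-wise solve for the interior unknowns.
u = np.linalg.solve(A, f - B @ lam)

# Sanity check against the monolithic solve of the full system.
K = np.block([[A, B], [C, D]])
full = np.linalg.solve(K, np.concatenate([f, g]))
assert np.allclose(np.concatenate([u, lam]), full)
```

In a real HDG solver the blocks `A` are block-diagonal over the elements, so both the condensation and the recovery are embarrassingly parallel.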

`Numerical treatment of unbounded propagation domains`

The simulation of electromagnetic wave propagation in open domains (e.g. the scattering of a plane wave by a complex structure) naturally raises the question of the appropriate treatment of the artificial truncation of the infinite propagation domain. The main issue is to find a suitable way to efficiently treat the unbounded region beyond the volume mesh of the heterogeneous propagation media. Two main numerical treatments have been developed and intensively used for this purpose. In the first approach, the computational domain is artificially bounded and either an appropriate absorbing boundary condition is imposed on the artificial boundary, or the method of perfectly matched layers is adopted. Such approaches can be called "approximate methods" in the sense that an inherent error remains present even in the absence of any discretization error. When solving the time-harmonic Maxwell equations, the main advantage of these methods lies in the fact that they lead to a linear system with a purely sparse matrix. The second class of methods, referred to as "exact methods", is based on coupling a volume discretization method with a boundary element method (BEM). A major disadvantage of these methods is that they lead to a matrix which is partly sparse, for the part of the discrete equations related to the volume discretization method, and partly dense, for those arising from the BEM discretization. Since existing linear system solvers are tailored to one or the other of these two types of matrices, the efficient numerical treatment of the hybrid linear systems characterizing such coupled formulations is far from being a trivial task, especially in the context of high performance parallel computing platforms. To overcome this difficulty, a natural approach is to use domain decomposition principles to uncouple the part of the discrete equations related to the BEM from the part related to the volume discretization method, by means of an iterative procedure.
Thanks to this approach, only one kind of system, with either a dense or a sparse matrix, has to be solved at each iteration of the domain decomposition algorithm. In the TECSER project, we study such a domain decomposition based approach for the numerical treatment of the system of time-harmonic Maxwell equations modeling propagation in heterogeneous media, coupled to an integral representation of the propagation in the exterior domain. From the discrete point of view, we are concerned with the coupling of the HDG method discussed previously with a BEM.
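The uncoupling idea can be caricatured with a toy block system in NumPy (dense stand-ins for both blocks; the sizes and coupling strengths are arbitrary assumptions, not taken from the project): a block Gauss-Seidel iteration solves the "volume" and "exterior" blocks separately, exchanging interface data, so each sub-solve involves only one type of matrix.

```python
import numpy as np

# Toy coupled system: A_v stands in for the sparse volume (e.g. HDG) block,
# A_b for the dense BEM block, C_vb/C_bv for the interface coupling.
rng = np.random.default_rng(1)
n = 8
A_v = 10.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A_b = 10.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
C_vb = 0.1 * rng.standard_normal((n, n))
C_bv = 0.1 * rng.standard_normal((n, n))
f, g = rng.standard_normal(n), rng.standard_normal(n)

x_v = np.zeros(n)
x_b = np.zeros(n)
for _ in range(50):
    # Volume solve: in practice a sparse direct/iterative solver.
    x_v = np.linalg.solve(A_v, f - C_vb @ x_b)
    # Exterior solve: in practice a dense BEM solver.
    x_b = np.linalg.solve(A_b, g - C_bv @ x_v)

# The fixed point of the iteration solves the monolithic coupled system.
K = np.block([[A_v, C_vb], [C_bv, A_b]])
ref = np.linalg.solve(K, np.concatenate([f, g]))
assert np.allclose(np.concatenate([x_v, x_b]), ref)
```

Convergence of such an iteration depends on the strength of the interface coupling relative to the block solves; the toy matrices above are chosen block-diagonally dominant so that the sketch converges quickly.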

`Scalable linear system solvers`

The design of the extreme-scale computing platforms that are expected
to become available in the forthcoming decade will represent a
convergence of technological trends and the boundary conditions
imposed by over half a century of algorithm and application software
development. These platforms will be hierarchical as they will provide
coarse grain parallelism between nodes and fine grain parallelism
within each node. They are also expected to be very heterogeneous
since multicore chips and accelerators have completely different
architectures and potentials. It is clear that such a degree of
complexity will embody radical changes regarding the software
infrastructure for large-scale scientific applications. Central
numerical kernels such as fast transforms or numerical linear algebra
solvers -- dense or sparse -- are intensively used in many large-scale
computer simulations in science and engineering, where they often
account for the most time-consuming part of the computations. It is
widely recognized that no single strategy outperforms all the others
for a large class of problems. To address these
challenges, we consider in the TECSER project solvers that exhibit a
natural hierarchical dependency, which can be summarized in a
bottom-up description as follows:

(1) Core numerical kernels. Many HPC applications intensively rely on
the same core and critical kernels such as dense linear algebra
solvers and Fast Fourier Transforms. Their efficient implementation
via innovative algorithms is consequently central for the overall
performance. Those kernels usually have a regular data access pattern
which makes them candidates of choice for designing novel algorithms
on each new hardware generation.
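As a minimal illustration of why such a kernel is central, the following NumPy sketch checks the property that makes the FFT so valuable: it turns O(n²) circular convolution into pointwise products in frequency space, at O(n log n) cost.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x = rng.standard_normal(n)
h = rng.standard_normal(n)

# Direct circular convolution: O(n^2) operations.
direct = np.array(
    [sum(x[k] * h[(i - k) % n] for k in range(n)) for i in range(n)]
)

# FFT-based convolution: O(n log n) via the convolution theorem.
fast = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

assert np.allclose(direct, fast)
```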

(2) Sparse direct solvers. They are based on Gaussian elimination and
intensively use dense linear algebra kernels. They are very attractive
because of their robustness and accuracy. Although the size of the
problems that can be solved with direct solvers keeps increasing
thanks to the evolution of algorithms and architectures, their
intrinsic memory and time complexity is not always affordable for huge
3D problems. Nevertheless, direct solvers can be used as building
blocks for iterative methods.
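A minimal sketch of this building-block usage, assuming SciPy's SuperLU interface (`scipy.sparse.linalg.splu`): the matrix, here a 1D Laplacian rather than one of the project's Maxwell systems, is factored once and the factorization is then reused cheaply for several right-hand sides, as happens when a direct solver serves subdomain solves inside an iterative method.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Sparse 1D Laplacian in CSC format (required by splu).
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = splu(A)  # Gaussian elimination: the expensive step, done once.

# Reuse the factors for multiple right-hand sides: only cheap
# triangular solves remain.
b1 = np.ones(n)
b2 = np.arange(n, dtype=float)
x1 = lu.solve(b1)
x2 = lu.solve(b2)

assert np.allclose(A @ x1, b1)
assert np.allclose(A @ x2, b2)
```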

(3) Preconditioned iterative solvers. We consider general purpose
techniques, namely preconditioned Krylov subspace solvers. The
targeted preconditioning techniques only exploit information available
in the matrices of the linear systems associated with the problem
(so-called algebraic solvers). Compared with direct solvers, the
complexity is significantly reduced by attempting to provide a
trade-off between memory/processing cost on one side and
robustness/accuracy on the other side. Algebraic domain decomposition
techniques exhibit such computational features and exploit sparse
direct solver capabilities on subproblems; their convergence can also
be enhanced thanks to deflation or filtering techniques. Although
generic, they may still be suboptimal for some classes of problems
where information from the continuous problem (giving rise to the
linear system) might be exploited.
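A minimal sketch of such a purely algebraic preconditioned Krylov solve, assuming SciPy: GMRES on a 1D Laplacian (again a stand-in, not a Maxwell system) preconditioned with an incomplete LU factorization built from the matrix entries alone, with no knowledge of the continuous problem.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Sparse 1D Laplacian in CSC format.
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Algebraic preconditioner: incomplete LU with a drop tolerance,
# built only from the entries of A.
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), ilu.solve)

# Preconditioned Krylov subspace solve.
x, info = gmres(A, b, M=M)
assert info == 0  # 0 means GMRES converged
assert np.linalg.norm(A @ x - b) < 1e-3 * np.linalg.norm(b)
```

The trade-off mentioned above is visible in the `drop_tol` parameter: a looser tolerance gives a cheaper, less memory-hungry preconditioner at the price of more Krylov iterations.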