One of the most important features missing
in today's sound rendering systems is the ability to **render sound occlusion**.
Usually the problem is simply ignored. Even when it is treated, it often relies on
a binary visibility operator between two points (i.e. visible or invisible),
as in lighting simulations.

Obviously, this is not a satisfying model
for sound simulation, because of transmission and diffraction:
regions of space that lie in the geometrical shadow of a source still receive
some energy.

Several approaches have been developed
to solve this difficult problem, but they are usually very costly in terms
of computing power, or limited to canonical cases. Part of my PhD work
aimed at developing a new technique that takes the effects
of sound occlusion into account and gives a **satisfying approximation** to the
solution very **quickly** and for **general cases**. To
achieve fast calculations, we proposed to **use a 3D rendering** of the
3D model of the environment to be simulated. The required information
can be obtained through the **standard hardware graphics pipeline**
using the **OpenGL** graphics library, available on most platforms.
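As a rough illustration of the idea (not the actual implementation, which uses the hardware graphics pipeline), the sketch below rasterizes occluder footprints into a small software buffer standing in for an off-screen OpenGL render, and returns the fraction of covered pixels. The function name, the rectangle representation of occluders, and the resolution are all illustrative assumptions:

```python
# Minimal software stand-in for an off-screen render between source and
# listener (hypothetical helper, not the author's actual implementation).
# Occluders are given as rectangles in the normalized [0,1]^2 image plane;
# the occlusion ratio is the fraction of covered pixels.

def occlusion_ratio(occluders, res=64):
    """occluders: list of (xmin, ymin, xmax, ymax) rectangles in the
    normalized image plane perpendicular to the source-listener axis."""
    covered = 0
    for j in range(res):
        for i in range(res):
            # pixel center in normalized image-plane coordinates
            x, y = (i + 0.5) / res, (j + 0.5) / res
            if any(xa <= x <= xb and ya <= y <= yb
                   for xa, ya, xb, yb in occluders):
                covered += 1
    return covered / (res * res)

# An occluder covering the lower half of the view blocks half the pixels:
print(occlusion_ratio([(0.0, 0.0, 1.0, 0.5)]))  # → 0.5
```

In the real pipeline the same count would come from reading back an off-screen buffer rendered by the graphics hardware, which is what makes the approach fast.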

Based on this framework we developed **two
methods**. The first is a **real-time qualitative** method based
on the occlusion ratio of the **first Fresnel ellipsoid**. The second
is an extension to **more quantitative results** using **Fresnel-Kirchhoff
diffraction theory**. Both methods compute attenuation
values for different frequencies, and thus diffraction maps.
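The text gives no formulas, so the sketch below shows only the standard relations these methods build on: the radius of the first Fresnel ellipsoid at a point between source and listener, and the classical single knife-edge attenuation derived from Fresnel-Kirchhoff theory (using the common closed-form approximation of the Fresnel integral). Function names and the speed-of-sound constant are assumptions, not the author's code:

```python
import math

C = 343.0  # speed of sound in air, m/s (assumed)

def fresnel_radius(f, d1, d2):
    """Radius of the first Fresnel ellipsoid at a point d1 metres from
    the source and d2 metres from the listener, for frequency f (Hz)."""
    lam = C / f
    return math.sqrt(lam * d1 * d2 / (d1 + d2))

def knife_edge_attenuation_db(f, d1, d2, h):
    """Single knife-edge attenuation (dB) from Fresnel-Kirchhoff theory,
    via the usual approximation of the Fresnel integral (valid for
    nu > -0.7); h is the edge height above the line of sight (m)."""
    lam = C / f
    nu = h * math.sqrt(2 * (d1 + d2) / (lam * d1 * d2))
    if nu <= -0.7:
        return 0.0  # edge well below the line of sight: no extra loss
    return 6.9 + 20 * math.log10(math.sqrt((nu - 0.1) ** 2 + 1) + nu - 0.1)

# High frequencies diffract less, so attenuation grows with frequency:
for f in (125, 1000, 4000):
    print(f, round(knife_edge_attenuation_db(f, 5.0, 5.0, 1.0), 1))
```

Evaluating the attenuation over a grid of listener positions, one frequency band at a time, is what produces the frequency-dependent diffraction maps mentioned above.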

They can also be used to derive a filter
that auralizes the changes in sound caused by occlusion.
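One plausible way to build such a filter (a sketch with hypothetical helpers, assuming per-band attenuation values in dB as input) is to convert the attenuations into linear gains and interpolate them at arbitrary frequencies, e.g. FFT bin centers, to obtain a magnitude response:

```python
def band_gains(atten_db):
    """Convert per-band attenuation values {freq_hz: dB} into linear gains."""
    return {f: 10 ** (-a / 20) for f, a in atten_db.items()}

def filter_response(atten_db, freqs):
    """Piecewise-linear interpolation of the gain curve at arbitrary
    frequencies (e.g. FFT bin centers), clamped outside the known bands."""
    pts = sorted(band_gains(atten_db).items())
    out = []
    for f in freqs:
        if f <= pts[0][0]:
            out.append(pts[0][1])
            continue
        if f >= pts[-1][0]:
            out.append(pts[-1][1])
            continue
        for (f0, g0), (f1, g1) in zip(pts, pts[1:]):
            if f0 <= f <= f1:
                t = (f - f0) / (f1 - f0)
                out.append(g0 + t * (g1 - g0))
                break
    return out

# 6 dB of attenuation at 125 Hz, 26 dB at 4 kHz, sampled at three points:
print(filter_response({125: 6.0, 4000: 26.0}, [125, 2000, 4000]))
```

Multiplying an FFT of the dry signal by this response (and inverting) is one simple way to apply the occlusion filter; more careful designs would interpolate in log-frequency and smooth the phase.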