Information given by the user
Since the classification is supervised, the user must provide the number of classes (textures), as well as the parameters of each class (the first- and second-order moments of the energy distribution in each sub-band of the wavelet packet decomposition).
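These class parameters can be estimated from training patches of each texture. The sketch below is a minimal illustration, assuming a Haar wavelet packet decomposition and patch sizes divisible by 2**depth; the function names and the choice of the Haar filter are ours, not the paper's.

```python
import numpy as np

def haar_split(x):
    """One-level 2D Haar split into (LL, LH, HL, HH) sub-bands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return [(a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2]

def packet_leaves(x, depth):
    """Full wavelet packet decomposition: recursively split every sub-band."""
    if depth == 0:
        return [x]
    leaves = []
    for sb in haar_split(x):
        leaves.extend(packet_leaves(sb, depth - 1))
    return leaves

def class_parameters(patches, depth=2):
    """First- and second-order moments of the sub-band energies,
    estimated over a set of training patches of one texture class."""
    energies = np.array([[np.mean(sb ** 2) for sb in packet_leaves(p, depth)]
                         for p in patches])  # shape (n_patches, 4**depth)
    return energies.mean(axis=0), energies.std(axis=0)
```

At depth 2 this yields 4² = 16 sub-bands per patch, hence a 16-dimensional mean and standard deviation per class.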
Parameters:
In our experiments, we always choose and ( ). There remain only two parameters to set: the partition term coefficient , and the common value of the contour regularization terms .
Initialization
At first, we performed a manual initialization with circles or squares. Each circle then represents the zero level set of one of the classes (see figure 7).
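Such a circle initialization can be built as a signed distance function, negative inside the circle and zero on its contour. Below is a minimal sketch; the image size, circle centers, and radius are arbitrary illustration values, not those of the paper.

```python
import numpy as np

def circle_level_set(shape, center, radius):
    """Signed distance to a circle: negative inside, zero on the contour."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2) - radius

# One level-set function per class, each initialized with its own circle
# (hypothetical positions for a 4-class example).
shape = (64, 64)
phis = [circle_level_set(shape, c, 10)
        for c in [(16, 16), (16, 48), (48, 16), (48, 48)]]
```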
To obtain an automatic initialization, independent of the user, we then used ``seeds'': we split the initial image into small sub-images (in practice, 5*5 sub-images). In each sub-image, for each class , we compute the data term by assuming that all the pixels of the sub-image belong to that class . We then assign all the pixels of the sub-image to the class for which the whole sub-image's energy is smallest (see figure 8). This initialization is used in the examples presented hereafter.
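The seed assignment itself can be sketched as follows, assuming the per-pixel data-term values for each class are already available as an array (a hypothetical input; in the paper they come from the sub-band energy model):

```python
import numpy as np

def seed_initialization(data_terms, block=5):
    """Assign each block-by-block sub-image to the class whose summed
    data term over the sub-image is smallest.

    data_terms: array of shape (n_classes, H, W) with per-pixel
    data-term values. Returns an (H, W) label map, constant on blocks.
    """
    n_classes, H, W = data_terms.shape
    labels = np.empty((H, W), dtype=int)
    for i in range(0, H, block):
        for j in range(0, W, block):
            # total cost of assigning the whole block to each class
            costs = data_terms[:, i:i + block, j:j + block].sum(axis=(1, 2))
            labels[i:i + block, j:j + block] = np.argmin(costs)
    return labels
```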
Synthetic image with four textures
This example (see figure 9) clearly shows that our model can handle triple junctions. In contrast, as with the classical approach of the Mumford-Shah functional, a junction of four textures is split into two triple junctions (at 120 degrees) in the classified image.
Synthetic image with two textures
This example (see figure 10) shows that our model can handle any kind of geometrical shape.
Synthetic image with six textures
This example (see figure 11) shows that our model can handle complex textured images. Here, some of the textures are visually very close, yet the geometrical shapes of the contours are quite well detected. To obtain more homogeneous classes, we applied a Gaussian mask to the data term.
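Smoothing the data term with a Gaussian mask can be sketched with a separable convolution, as below. This is our own minimal numpy implementation; the kernel width sigma is an illustration value, not the one used in the experiments.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian kernel, truncated at 3*sigma and normalized to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth_data_term(data, sigma=1.0):
    """Separable Gaussian smoothing of a per-pixel data-term map."""
    k = gaussian_kernel(sigma)
    # convolve rows, then columns ('same' keeps the original size)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, data)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)
```

Averaging the data term over a small neighborhood in this way makes the per-pixel class energies less noisy, which yields the more homogeneous classes mentioned above.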