Table 9 shows the performances on the individual clusters of the MecaGRID. For this relatively large mesh, the executable size is 871 MB, well within the 1 GB of RAM at the IUSTI but greater than the available RAM on the other clusters; this may account for the better performance of the IUSTI cluster relative to the others. The performances on the individual clusters are approximately what one would expect from the processor speeds. Note that the 2 GHz clusters (nina and the IUSTI) show the same performance. The INRIA-pf cluster performs better than expected, with a ratio of 1.2 rather than 2, and one would expect the INRIA-pf and CEMEF clusters to perform approximately the same rather than at 1.7 and 2.3. Given that the 871 MB executable would seem to exceed the available RAM on both the INRIA-pf and the CEMEF, the performances of these two clusters are notable. Note also that all the Communication/Work ratios are in the range 3-7 percent; processor time is thus dominated by work, a desirable characteristic in parallel codes.
Table 10 shows some of the inter-cluster performances using the explicit solver. The inter-cluster performances are quite good: the Communication/Work ratios are less than 0.65.
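The Communication/Work ratio used above can be illustrated with a minimal sketch. The function name and the sample timings below are hypothetical and only demonstrate how such a ratio is formed from measured communication and computation times; they are not taken from the MecaGRID runs.

```python
def comm_work_ratio(comm_time: float, work_time: float) -> float:
    """Return the Communication/Work ratio: time spent communicating
    divided by time spent on useful computation (work)."""
    if work_time <= 0:
        raise ValueError("work time must be positive")
    return comm_time / work_time

# Hypothetical example: 5 s of communication against 100 s of work
# gives a ratio of 0.05 (5 percent), within the 3-7 percent range
# reported for the individual clusters.
print(comm_work_ratio(5.0, 100.0))  # 0.05
```

A small ratio indicates that processor time is dominated by work rather than message passing, which is the desirable regime for a parallel code.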