The emergence of new modalities such as Diffusion Tensor Imaging (DTI) is of great interest for the characterization and temporal study of Multiple Sclerosis (MS). DTI provides information on water diffusion within tissues and could therefore reveal alterations in white matter fibers before they become visible in conventional MRI. However, recent studies generally rely on scalar measures derived from the tensors, such as fractional anisotropy (FA) or mean diffusivity (MD), instead of the full tensor itself, leaving part of the available information unused. Moreover, many sources can bias the images, for example errors in nonlinear registration, or artifacts and distortions in the acquisitions; this bias may in turn distort the results of any statistical analysis.
We have presented in [2] a framework to study the benefits of using the full diffusion tensor to detect statistically significant differences between each individual MS patient and a database of control subjects. This framework is based on the construction of a mean DTI atlas from the control subjects, in the following manner:
The mean \( \bar{D}_{\mathrm{Log}} \) and covariance \( C \) of the tensors in each voxel can conveniently be computed in a vector space thanks to the Log-Euclidean framework for tensors proposed by Arsigny et al. [1]. Once this DTI atlas has been built, each MS patient's DTI is compared to it using the following approach, illustrated in Fig. 1:
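The per-voxel Log-Euclidean statistics can be sketched as follows. This is a minimal numpy illustration of the framework of [1], not the authors' implementation: each tensor is mapped to its matrix logarithm, the symmetric log-matrices are vectorized (with \( \sqrt{2} \) weights on the off-diagonal terms so the Euclidean norm matches the Frobenius norm), and the mean and covariance are computed in that vector space. All function names are ours.

```python
import numpy as np

def sym_logm(D):
    """Matrix logarithm of a symmetric positive-definite tensor (spectral)."""
    w, V = np.linalg.eigh(D)
    return V @ np.diag(np.log(w)) @ V.T

def sym_expm(L):
    """Matrix exponential of a symmetric tensor (spectral)."""
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(w)) @ V.T

def vec6(L):
    """Vectorize a symmetric 3x3 matrix; sqrt(2) weights on off-diagonal
    entries so the Euclidean norm of the vector equals the Frobenius norm."""
    s = np.sqrt(2.0)
    return np.array([L[0, 0], L[1, 1], L[2, 2],
                     s * L[0, 1], s * L[0, 2], s * L[1, 2]])

def log_euclidean_stats(tensors):
    """Log-Euclidean mean tensor and 6x6 covariance for one voxel,
    given the tensors of the control subjects at that voxel."""
    logs = np.array([vec6(sym_logm(D)) for D in tensors])
    mean_vec = logs.mean(axis=0)
    cov = np.cov(logs, rowvar=False)
    # Map the mean log-vector back to a symmetric matrix, then exponentiate.
    s = 1.0 / np.sqrt(2.0)
    L = np.array([[mean_vec[0], s * mean_vec[3], s * mean_vec[4]],
                  [s * mean_vec[3], mean_vec[1], s * mean_vec[5]],
                  [s * mean_vec[4], s * mean_vec[5], mean_vec[2]]])
    return sym_expm(L), cov
```

Because the averaging happens on the logarithms, this mean is affine-invariant in scale: averaging a tensor with eigenvalues \((1,1,1)\) and one with \((e^2,1,1)\) yields eigenvalues \((e,1,1)\), the geometric mean, rather than the swollen arithmetic mean.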
This framework allows us to look for differences both in normal-appearing white matter and in and around the lesions of each patient. We have presented in [2] a study on a database of 11 MS patients (an example result is shown in Fig. 2), demonstrating the ability of DTI to detect significant differences not only on the lesions themselves but also in regions around them, enabling an early detection of an extension of the MS disease.
[Fig. 2: panels (a)–(f)]
As mentioned previously, many sources, such as errors in nonlinear registration or artifacts and distortions in the acquisitions, can bias the images and hence the results of any statistical analysis. To reduce the influence of these sources of bias, we have presented in [3] a new algorithm, called continuous STAPLE, to estimate a reference standard from a dataset of registered vector images. It is based on an Expectation-Maximization (EM) algorithm and is similar in principle to the STAPLE validation method [4]: the reference standard is treated as unknown data in a maximum-likelihood framework. Continuous STAPLE therefore iterates over two steps to estimate it (more details on these two steps may be found in [3]):
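To give a flavor of the iteration, here is a deliberately simplified EM sketch, not the exact parameterization of [3]: each registered vector image is modeled as the unknown reference standard plus a global per-image bias and Gaussian noise with a per-image covariance. The E-step estimates the reference as a precision-weighted combination of the bias-corrected images; the M-step re-estimates each image's bias and covariance from its residuals. All names and the choice of a single global bias per image are our assumptions.

```python
import numpy as np

def continuous_staple_sketch(images, n_iter=20):
    """Schematic EM. `images` has shape (J, N, d): J registered vector
    images, N voxels, d components per voxel. Returns the estimated
    reference standard (N, d), per-image biases (J, d) and covariances
    (J, d, d). Simplified sketch inspired by [3], not its implementation."""
    J, N, d = images.shape
    b = np.zeros((J, d))                 # per-image bias
    S = np.array([np.eye(d)] * J)        # per-image noise covariance
    for _ in range(n_iter):
        # E-step: precision-weighted combination of the unbiased images.
        P = np.array([np.linalg.inv(S[j]) for j in range(J)])
        num = sum(P[j] @ (images[j] - b[j]).T for j in range(J))  # (d, N)
        ref = (np.linalg.inv(P.sum(axis=0)) @ num).T              # (N, d)
        # M-step: re-estimate each image's parameters from its residuals.
        for j in range(J):
            r = images[j] - ref
            b[j] = r.mean(axis=0)
            S[j] = np.cov((r - b[j]).T) + 1e-8 * np.eye(d)
    return ref, b, S
```

An image whose residual covariance stays large contributes with low precision to the E-step, which is how a robust reference standard can down-weight outlier images.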
A comparison framework has been associated with this algorithm, based on the Kullback-Leibler divergence between the Gaussian distributions defined by each set of parameters, in order to compare the images of the dataset. This framework can then be used to detect significant differences between the images.
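The divergence between two multivariate Gaussians has a well-known closed form, which a comparison framework of this kind can evaluate directly; a minimal version (function name ours, and whether [3] uses the one-sided or symmetrized form is not specified here):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N(mu0, S0) || N(mu1, S1))."""
    d = mu0.size
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def sym_kl_gaussian(mu0, S0, mu1, S1):
    """Symmetrized variant, usable as a (pseudo-)distance between images."""
    return kl_gaussian(mu0, S0, mu1, S1) + kl_gaussian(mu1, S1, mu0, S0)
```

The divergence vanishes only when the two distributions coincide, so thresholding it per voxel yields a map of candidate differences.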
We present in Fig. 3 experimental results on a simulated database, showing that the reference standard obtained by continuous STAPLE is more accurate than the one obtained by a classical average when outliers are present in the dataset. The tensors in the average estimate (c,f) are indeed swollen and rotated compared to the known reference standard.
[Fig. 3: panels (a)–(f)]
Moreover, we have shown in [3] that our comparison framework is able to detect differences in these simulated experiments. We have then applied it to the multiple sclerosis patients, showing significant differences in the lesion regions and in their vicinity, confirming the results obtained in the previous section.
We believe that many other applications exist for this algorithm. As it takes general vector images as input, it could be used on nonlinear transformations or DT images. Another potential application is the study of the Jacobians of transformations for the detection of abnormal anatomy.