P. G. Batchelor, PhD
Medical Physics and Bioengineering, Malet Place Engineering Building, University College London, WC1E 6BT, UK
CONTENTS

46.1 Introduction
46.2 Computational Aspects
46.2.1 Coil Sensitivities
46.2.2 Coil Array Quality
46.2.3 Image Reconstruction
46.2.4 Other Artefacts Correction
46.2.5 Regularization and Preconditioning
46.3 Practical Aspects
46.3.1 Manufacturers
46.3.2 Online Code
46.4 Conclusion
References
46 Future Software Developments
Philip G. Batchelor
46.1 Introduction
Future software developments in parallel MRI will certainly be mainly influenced by future, and present, hardware developments. The most likely direction for hardware developments, at least in the short-term future, is toward a higher number of coils. What effect will this have on software? As the software will mainly involve the computational reconstruction and exploitation of images, the question could be restated as: what are the computational issues?
First of all, let us clarify what will be considered as software issues in this chapter. Strictly speaking, the hardware in parallel MRI consists of array coils, providing images of the object modulated by the coil sensitivities. Thus, in theory, any scanner that allows one specific parallel imaging technique could also perform any of the other ones. In that sense, the distinction between parallel MRI techniques (SMASH, SENSE, GRAPPA, etc., see Chap. 2) could be interpreted as a distinction between software implementations. If our scanner has 32 coils, we have the ability to get up to 32 times more data out of it than we would from a single coil, and the role of the software computations is to exploit these data.
Furthermore, physiology and physics limit the speeding up of MR image acquisition by purely technological methods, thus software computations will have to shoulder more and more of future improvements.
46.2 Computational Aspects
A crucial distinction in MR is the distinction between k-space and image space. The bridge between the two is the Fourier transform, generally implemented in software by the fast Fourier transform (FFT); see (Edelman et al. 1999) for a discussion of future developments in this field. This operation is so standard that we consider it a given, thus we can go back and forth to k-space according to our needs, without prohibitive computational costs.
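As an illustration, the following NumPy sketch shows this round trip between image space and k-space; the array contents are made-up placeholders, and the centred-FFT convention shown is only one common choice:

```python
# A minimal sketch of the image <-> k-space round trip via the FFT.
import numpy as np

image = np.random.rand(256, 256)  # stand-in for a 2D MR image

# Image space -> k-space: centred 2D FFT
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))

# k-space -> image space: the inverse operation
recovered = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

# The round trip is exact up to floating-point error
assert np.allclose(image, recovered.real)
```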
In parallel imaging, the hardware delivers a wealth of data, one dataset per coil, and the role of the software computation is to reconstruct a clinically useful image. What can we expect in the future from this point of view? Current commercial products offer parallel imaging in general with a Cartesian sampling. Speed up is then achieved by acquiring only a certain fraction 1/R of all lines in k-space. The limit on R is the number of coils n_c. This gives rise to very specific undersampled images. Artefacts appear as aliases separated by exactly a fraction 1/R of the field of view. Computationally, this is reflected in the fact that we only have to solve small linear systems involving the n_c coil values for each voxel. Thus a failure or imperfect reconstruction will result mostly in residual aliases which preserve an anatomical structure, and may be relatively easily recognised by an experienced radiologist, although g-factor noise (see also Chaps. 3 and 4) may still be present for unfavourable coil configurations.
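A minimal sketch of this effect, assuming a simple toy object and Cartesian sampling, might look as follows; keeping only every R-th phase-encode line reproduces the aliases separated by a fraction 1/R of the field of view:

```python
# Illustrative sketch: Cartesian undersampling by R produces aliases
# separated by exactly FOV/R (here 256/4 = 64 pixels).
import numpy as np

R = 4
img = np.zeros((256, 256))
img[96:160, 96:160] = 1.0  # toy object

k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))
k_under = np.zeros_like(k)
k_under[::R, :] = k[::R, :]  # keep only 1/R of the phase-encode lines

alias = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k_under)))
# 'alias' shows R overlapping copies of the object, each scaled by 1/R,
# shifted by 1/R of the field of view along the undersampled axis
```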
What one may want as future developments is, first, to perfect these reconstructions, and thus to be able to manipulate the parameters which affect the reconstruction. This means we need to find these parameters. Furthermore, Cartesian trajectories are often not the best choice, and radial, spiral, etc., are more appropriate in a lot of applications; see, for example, (Cashen and Carroll 2005; Liu et al. 2005a) and Chap. 6. Even for Cartesian trajectories, some software optimisation is possible, as shown by Xu et al. (2005). But this raises a big difficulty in parallel imaging, as we cannot any more just skip some lines in k-space, and more importantly, the artefacts are of a totally different nature.
In summary, some of the computational problems of parallel MRI are: we do not normally know the coil sensitivities; even if we know them, the reconstruction may be of poor quality; and if the subsampling in k-space is not regular, the reconstruction becomes much harder and slower, with different artefacts. We will now have a closer look at these issues from a computational software point of view.
46.2.1 Coil Sensitivities
In the standard form of parallel imaging, we have as many datasets as coils, but of size reduced by speed up. A linear operation relates these datasets and the unknown clinically useful image. In between, we would have a rectangular matrix whose coefficients depend on the coil sensitivities. To solve the system, we thus need to assume that this matrix is known, in other words, that the coil sensitivities are known.
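As a sketch of such a per-voxel system (not any particular vendor's implementation), the example below unfolds a single voxel with assumed, known sensitivities; the names c_matrix and folded and the toy sizes n_c = 4, R = 2 are inventions for the illustration:

```python
# Minimal per-voxel unfolding sketch with known (simulated) sensitivities.
import numpy as np

def unfold_voxel(c_matrix, folded_values):
    """Solve the small n_c x R system c_matrix @ rho = folded_values in the
    least-squares sense; rho holds the R overlapping voxel values."""
    rho, *_ = np.linalg.lstsq(c_matrix, folded_values, rcond=None)
    return rho

# Toy numbers: n_c = 4 coils, speed up R = 2
c_matrix = np.random.randn(4, 2) + 1j * np.random.randn(4, 2)
true_rho = np.array([1.0 + 0.5j, 0.3 - 0.2j])
folded = c_matrix @ true_rho           # what the coils would measure

print(unfold_voxel(c_matrix, folded))  # recovers true_rho (noise-free case)
```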
This normally then makes the reconstruction a two-step process: first, fit a reconstruction operator that can explain the relationship among the data we have, then use this operator to predict the data we have not acquired. When this prediction is perfect, we get a least squares (LS) problem. This implicitly assumes that all the errors due to noise are in the data. In some applications of statistics and numerical linear algebra, a method has been developed to account for errors in the operator as well: total least squares (TLS) (see (Golub et al. 1996), p. 595), see Fig. 46.1. As the coil sensitivities are unlikely to be perfectly known, TLS methods seem particularly appropriate in parallel MRI; see Figs. 46.2 and 46.3 from Raj (TL-SENSE), and (Raj 2005; Raj and Zabih 2004; Liang et al. 2002) (maximum likelihood formulation). Note that the difference between these images is due to software.
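To make the LS/TLS distinction concrete, here is a small NumPy sketch, with made-up data, of the classical SVD-based TLS solution of an overdetermined system (this is the generic textbook construction, not the TL-SENSE algorithm itself):

```python
# LS vs. TLS: TLS also allows errors in the operator A, not only in the
# data b, and is computed from the SVD of the augmented matrix [A | b].
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
A_noisy = A + 0.05 * rng.standard_normal(A.shape)  # errors in the operator
b_noisy = b + 0.05 * rng.standard_normal(b.shape)  # errors in the data

# Ordinary least squares: all error attributed to b
x_ls, *_ = np.linalg.lstsq(A_noisy, b_noisy, rcond=None)

# Total least squares: right singular vector of [A | b] for the smallest
# singular value, rescaled so that its last component is -1
C = np.column_stack([A_noisy, b_noisy])
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]
x_tls = -v[:-1] / v[-1]

print(x_ls, x_tls)  # both near x_true; TLS models both error sources
```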
The standard hardware method to determine coil sensitivities is an additional scan using a body coil. For this reason, computational methods have been developed that should make it possible to avoid this step.
Such methods, self-calibrating or auto-calibrating, in general acquire additional lines near the centre of k-space, thus some extra data [see (Breuer et al. 2005; Kholmovski and Samsonov 2005; Lew et al. 2005) for recent examples, and Chaps. 2 and 8 for more details]. A simple method in SENSE to avoid such a step is to use a sum of squares (SOS) of the coil images.
Let $s_k$ be the image from coil $k$. If $c_k$ is the ideal coil sensitivity, we should have the relationship $s_k = c_k s$, where $s$ is the ideal image. Thus, the sum of squares is $\mathrm{sos} = \sum_k |s_k|^2 = |s|^2 \sum_k |c_k|^2$, and the estimated coil sensitivities are $\hat{c}_k = s_k/\sqrt{\mathrm{sos}} = c_k e^{i\varphi} / \sqrt{\sum_k |c_k|^2}$, where $\varphi$ is the phase of the ideal image. Thus, the SOS approximation is good if the coils have a good 'coverage' of the field of view, in the sense that the sum of squares of the coil magnitudes is approximately uniform, see also Fig. 46.4 (Bydder et al. 2002).
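The SOS estimate above translates directly into a few lines of NumPy; the simulated image and sensitivities below are placeholders used only to check the algebra:

```python
# Sketch of the sum-of-squares (SOS) sensitivity estimate derived above,
# with simulated coil images standing in for acquired data.
import numpy as np

n_c, N = 8, 128
s = np.random.rand(N, N) * np.exp(1j * np.random.rand(N, N))   # 'ideal' image
c = np.random.rand(n_c, N, N) + 1j * np.random.rand(n_c, N, N)  # sensitivities
s_k = c * s[None, :, :]                 # coil images s_k = c_k * s

sos = np.sum(np.abs(s_k) ** 2, axis=0)  # sum of squares over coils
c_hat = s_k / np.sqrt(sos)[None, :, :]  # estimated sensitivities

# c_hat equals c_k * exp(i*phi) / sqrt(sum_k |c_k|^2), phi the image phase
```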
Fig. 46.1. The difference between least squares (LS) and total least squares (TLS): LS minimises the sum of the lengths of the hyphenated lines, whereas TLS minimises the sum of the lengths of the dotted lines
46.2.2 Coil Array Quality
If all the coils were uniform, or simply similar, the n_c images would all be similar, and we would have lost the advantage of multiple information. If on the other hand the coils were not overlapping at all, every coil would simply give information on its own portion of the image (see Chap. 3 for coil design issues). In both cases, with speed up, i.e., loss of information, we end up with not enough data to reconstruct an image. It may be instructive to consider what factors affect this. Computationally, this appears to depend on how well the coils cover the field of view in a sum-of-squares sense.

Using the above notation, let us compute the condition number of the system defined by $[s_k] = [c_k]\,s$, $k = 1, \ldots, n_c$. This is the ratio of maximum to minimum singular value and controls error propagation in linear computations. To simplify the computation, we assume no speed up. With speed up, we expect the conditioning to get worse, thus our estimate should be seen as a simple lower bound.
Fig. 46.2a–c. SENSE reconstructions of brains: note the excessive noise amplification suffered by standard SENSE (a). This noise can be reduced by introducing regularization (b) [see (King and Angelos 2001)], but at the cost of insufficient unfolding performance. TL-SENSE removes the noise without degrading unfolding performance (c). Images courtesy of A. Raj

Fig. 46.3. SENSE reconstructions of torso: note the distortion in the heart–stomach boundary introduced by standard regularised SENSE, which was effectively removed by TL-SENSE. Strip artefacts across the liver were also removed by TL-SENSE. Images courtesy of A. Raj
It is easy to see that the normal matrix of the system is a square diagonal matrix with as many rows as voxels. The diagonal elements are the sums of squared magnitudes of the coil sensitivities for each voxel, thus the condition number is the square root of the ratio of maximum to minimum (over voxels) of these SOSs:

$$\kappa = \sqrt{\frac{\max_{\text{voxels}} \sum_k |c_k|^2}{\min_{\text{voxels}} \sum_k |c_k|^2}}$$
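Given a set of sensitivity maps, this bound is straightforward to evaluate; the following sketch uses simulated maps as stand-ins for measured ones:

```python
# Sketch of the conditioning bound just derived. With no speed up, the
# condition number is the square root of max/min (over voxels) of the
# SOS of the coil sensitivities.
import numpy as np

c = np.random.rand(8, 128, 128) + 1j * np.random.rand(8, 128, 128)
sos = np.sum(np.abs(c) ** 2, axis=0)  # SOS of sensitivities per voxel

kappa = np.sqrt(sos.max() / sos.min())  # lower bound once speed up is added
print(kappa)
```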