Efficient projection-space resolution modelling for image reconstruction in Positron Emission Tomography

Academic year: 2021
Department of Physics

Ph.D. Course in Physics

Efficient projection-space resolution modelling for image reconstruction in Positron Emission Tomography

Author: Alessandro Pilleri

Supervisor: Prof. Nicola Belcari


Abstract

Positron Emission Tomography (PET) is a functional imaging technique used for the measurement of the spatial distribution of a radiotracer in a living subject.

PET images are typically obtained using iterative reconstruction algorithms, instead of analytical methods, as they provide superior image quality thanks to a better modelling of the statistical properties of the data and to an accurate description of the acquisition process. The acquisition process, in terms of the relation between the object space and the measurement (or projection) space, is defined in the so-called system model, and the modelling of the phenomena that occur during the process is referred to as resolution modelling. The accuracy with which the system model is defined plays a critical role in the quality of the reconstructed images, as the model can incorporate various resolution-degrading factors. However, its efficient application to the reconstruction process remains a challenging aspect. In fact, accurate models require high computational resources when computed on-the-fly during the reconstruction process, or high memory resources when pre-computed stored models are used.

The main purposes of this thesis are the development of an efficient method to perform the resolution modelling for pixellated non-TOF PET scanners and its implementation in a generic reconstruction software, to provide fast and accurate image reconstruction for user-defined scanner geometries. This thesis targets pre-clinical and application-dedicated scanners (like brain tomographs), in which the reconstructed field of view extends to a large portion of the scanner bore and the detector response is the main degrading factor of the image quality. The approach used in this work is the factorization of the system model into several components, with particular regard to the efficient computation and application of the geometric (G) component and of the projection-space (D) component.

This manuscript is subdivided into two parts. In the first part, we cover the background and the literature related to positron emission tomography and to resolution modelling. The second part covers the reconstruction software, the methods and the results associated with the proposed solutions, providing a quantitative image-quality evaluation performed on simulated phantoms.


Acknowledgements

First of all I would like to thank my Supervisor Prof. Nicola Belcari, for the funding support, the helping discussions and the valuable input to this research work. I thank Prof. Alberto Del Guerra for all the support and the helping discussions during these years.

I would like to thank all the members of the research group, and Esther and Pietro for all the time we spent together in our office.

A special thanks goes to Dott. Niccolò Camarlinghi for guiding me at the start of this journey and all the support he gave me during this thesis.

Finally, my deepest appreciation goes to Sara, for staying at my side during this intense period of my life.


Contents

Abstract 3

Acknowledgements 5

I Background 11

1 Positron Emission Tomography 13

1.1 Review of the physics principles . . . 14

1.1.1 Positron emission and annihilation . . . 14

1.1.2 Photons interaction with matter . . . 16

1.1.3 Radiation detectors for PET . . . 19

1.2 The PET system . . . 22

1.2.1 Data acquisition . . . 23

1.2.2 Detection efficiency . . . 25

1.2.3 Spatial resolution . . . 26

1.2.4 Noise and bias in PET measurement . . . 28

1.2.5 Data Representation . . . 30

1.2.6 Data Correction . . . 30

1.2.7 Detector Normalization . . . 33

2 Image reconstruction methods for PET 35

2.1 Problem formulation for PET . . . 35

2.2 Continuous-Continuous approximation . . . 37

2.2.1 Projection and Back-projection . . . 37

2.2.2 Filtered Back-Projection . . . 38

2.2.3 Three dimensional Analytic Reconstruction . . . 39

2.3 Discrete-Discrete approximation . . . 41

2.3.1 The objective function . . . 43

2.3.2 ML-EM and OS-EM algorithms . . . 44

2.4 The reconstruction task as an inverse problem . . . 47


2.5 System model and resolution modelling . . . 48

2.5.1 Model factorization . . . 50

2.5.2 Image-space resolution modelling . . . 50

2.5.3 Projection-space resolution modelling . . . 52

2.5.4 Sinogram restoration . . . 53

II Application of the resolution modelling 55

3 Basic design of the reconstruction software 61

3.1 Symmetries and geometries . . . 62

3.2 Model computation . . . 64

3.3 Reconstruction tasks . . . 65

3.4 Scanner geometries used in this work . . . 66

4 Geometric component: investigation of the projection algorithms 71

4.1 Introduction . . . 71

4.2 Methods . . . 75

4.2.1 Simulated data . . . 76

4.2.2 Metrics . . . 78

4.2.3 Reconstruction . . . 80

4.3 Results . . . 82

4.3.1 Discussion . . . 92

4.4 Conclusions . . . 95

5 Geometric component: an efficient alternative to the use of TOR projectors 97

5.1 Introduction . . . 97

5.2 Methods . . . 97

5.2.1 Reconstruction . . . 99

5.3 Results: Jacobs + image-space modelling . . . 101

5.4 Results: Joseph + image-space modelling . . . 106

5.5 Discussion and Conclusions . . . 111

6 Investigation of the efficient application of the projection-space resolution modelling 113

6.1 Introduction . . . 113



6.2.1 Detector matrix computation . . . 116

6.2.2 Factorization approach: application of the resolution modelling by exploiting the matrix factorization . . . 121

6.2.3 Two-step approach: application of the resolution modelling by performing a histogram restoration . . . 123

6.2.4 Summary of the reconstruction methods . . . 125

6.2.5 Comparison of the reconstruction methods . . . 125

6.3 Results: Factorization approach . . . 130

6.4 Results: Two-Step reconstruction . . . 137

6.4.1 Results using a tight smoothing filter . . . 138

6.4.2 Results using a large smoothing filter . . . 142

6.4.3 Parameters selection . . . 146

6.5 Results: Low-counts comparison . . . 148

6.6 Comparison with the CASToR reconstruction software . . . 152

6.7 Conclusions . . . 158

7 Application of the projection-space resolution modelling to a brain scanner with dual-layer detectors 161

7.1 Introduction . . . 161

7.2 Methods . . . 162

7.3 Results . . . 164

7.4 Conclusions . . . 172

8 General conclusions 173

8.1 Resolution modelling . . . 173

8.2 Reconstruction software . . . 175

8.3 General conclusions . . . 176

8.4 Future works . . . 177

8.5 Contributions . . . 177

8.6 Publications related to this work . . . 178

A Unmatched projection/backprojection pair 179

A.1 Results and Discussion . . . 180

List of Figures 187


Part I

Background


Chapter 1

Positron Emission Tomography

Positron Emission Tomography (PET) is a functional molecular imaging technique used for the measurement of the spatial distribution of a β+ emitting radiotracer in a living subject, by detecting the annihilation γ-rays emitted outside the subject.

It has applications both in clinical and pre-clinical studies. Clinical PET imaging is used mainly in three medical areas:

• cancer diagnosis and management (localisation of tumours and metastases);

• cardiology and cardiac surgery (measurement of myocardial perfusion and viability);

• neurology and psychiatry (management of brain tumours, pre-surgical evaluation of epilepsy, diagnosis of dementia).

In the field of pre-clinical studies, PET is used as a research tool on small animal subjects such as mice and rats. The principle at the basis of a PET imaging system is the use of a positron-emitting radioactive tracer (radiotracer), which is a chemical compound in which one or more atoms have been replaced by a radioactive isotope. The radiotracer follows the usual biochemistry of the compound and, by studying the distribution of the activity of the radioisotope, it is possible to gather biochemical information about the tissue where the tracer has localised. All the radioisotopes used in PET studies undergo β+-decay, thus emitting a positron that annihilates in the biological tissues with the emission of two photons. The detection of the photons is the key to determining the distribution of the radiotracer inside the patient.

The radiotracer to be used depends on the pathology and organ of interest; one of the most common is Fluorodeoxyglucose (FDG). FDG has a chemical structure close to that of glucose and a similar behaviour: it is taken up by high-glucose-using cells, such as those of the brain and heart, and by cancer cells. This compound contains 18F, a β+ emitter, so it can be used in combination with a PET scanner to determine the location of cancer cells inside the patient body.

1.1 Review of the physics principles

The fundamental principle of a PET imaging system is the detection of the two photons that arise from the annihilation of a positron emitted inside the patient body by the decay of a radioactive tracer.

1.1.1 Positron emission and annihilation

The β+ decay is common when a nucleus has an excess of protons with respect to the number of neutrons, so that it can gain stability by converting one of the exceeding protons into a neutron [1]:

{}_{Z}X \rightarrow {}_{Z-1}Y + \beta^{+} + \nu_{e}

The kinetic energy of the recoiling nucleus can be neglected due to its large mass, hence the released energy is mostly shared between the positron and the neutrino. The beta emission is governed by the standard exponential law, where the number of nuclei at a given time is

N(t) = N_0 \, e^{-t/\tau}

where N_0 is the number of nuclei at time t = 0 and τ is the mean lifetime of the isotope. The activity is defined as the number of disintegrations per second:

A(t) = -\frac{dN(t)}{dt} = \lambda N(t) = \frac{N(t)}{\tau}

The time necessary to halve the number of atoms is called half-life and is given by:

T_{1/2} = \tau \ln 2
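As a minimal numerical sketch of the relations above (the function name and activity values are illustrative; the 18F half-life is the one listed in Table 1.1):

```python
import math

def activity(a0_mbq, t_min, half_life_min):
    """Residual activity after t_min minutes: A(t) = A0 * exp(-t/tau),
    using the mean lifetime tau = T1/2 / ln 2."""
    tau = half_life_min / math.log(2)
    return a0_mbq * math.exp(-t_min / tau)

# 18F (T1/2 = 109.8 min): after one half-life, half the activity remains.
print(activity(100.0, 109.8, 109.8))   # ~50.0 MBq
```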

The positron sources used in nuclear medicine are artificially produced by targeting stable isotopes with positively charged particles.

The most commonly used atoms for PET imaging are called physiological radioisotopes, because their corresponding stable atoms are among the main constituents of the human body or can easily replace some functional group. All of them have a short lifetime. This is a favourable aspect because it limits the decay of the compound to a narrow temporal window, so the subject has a limited residual activity after the PET acquisition is performed and it is possible to minimise the amount of tracer needed to perform the analysis. However, for very short lifetimes (as for 15O) it is necessary to produce the radiotracer just before the experiment, with an in-situ cyclotron.

After the emission, the positrons lose their energy mainly through multiple Coulomb interactions within the biological tissues and reach thermal equilibrium before annihilation. The range of the particle is the distance between the point of emission and the point of annihilation and depends on the density and the Z of the medium. Table 1.1 shows some physical properties of typical physiological radioisotopes.

Isotope | Half-life (min) | Positron average kinetic energy (MeV) | Positron endpoint kinetic energy (MeV) | Positron average range in water (mm)
11C | 20.4 | 0.385 | 0.960 | 1.2
13N | 10.0 | 0.491 | 1.198 | 1.6
15O | 2.0 | 0.735 | 1.732 | 2.8
18F | 109.8 | 0.242 | 0.633 | 0.6

Table 1.1: Physical properties of physiological radioisotopes [2].

In the simplest approximation, where the annihilation takes place at rest, the result is the emission of two back-to-back 511 keV photons, to guarantee the conservation of energy and momentum. However, the annihilation is never exactly at rest, because the electron is bound to an atom and its energy cannot be neglected. In the reference frame of the centre of mass of the system the process always leads to two back-to-back γ-rays, but in the laboratory frame a non-collinearity is present, i.e. the two photons are emitted at an angle slightly smaller than 180°. It is also possible to have an annihilation in flight; for radioisotopes in water this process occurs in about 2% of the annihilations. Another possible process is the annihilation via 3γ, but it is usually neglected because of its small cross-section (σ_3γ = σ_2γ/372 ≪ σ_2γ).

An extensive study [3] showed that the distribution of the non-collinearity of the annihilation process is a sum of two Gaussian curves with different σ.

(16)

The narrower component is consistent with the annihilation of the free positron, whereas the broader part can be explained by the formation of a metastable bound state between a positron and an electron, called positronium [4]. This system has two minimum-energy configurations: para-positronium (singlet state) and ortho-positronium (triplet state). Both systems can decay via self-annihilation or via pick-off, that is, the annihilation of the positron with a free electron. A summary of the annihilation processes is shown in Tab. 1.2.

The angular deviation from collinearity, together with the presence of a non-zero positron range, are fundamental physical limits that degrade the spatial resolution in the reconstructed images.

State | Annihilation process | Comments | Lifetime | Ang. dev.
non-bound | in-flight via 2γ emission | of the order of 2% | ∼1 ps | narrow
non-bound | at rest via 2γ emission | standard PET situation | ∼1 ns | narrow
non-bound | at rest via 3γ emission | improbable | |
positronium | para-positronium self-annihilation | 1/4 of the bound states; preferred annihilation for para-positronium | ∼100 ps | narrow
positronium | para-positronium pick-off | improbable | ∼1 ns | narrow
positronium | ortho-positronium self-annihilation | via 3γ; it is anticipated by pick-off | ∼100 ns | narrow
positronium | ortho-positronium pick-off | 3/4 of the bound states | ∼1 ns | large

Table 1.2: Summary of the annihilation processes [2].

1.1.2 Photons interaction with matter

The emitted photons can interact with matter in several ways: photoelectric absorption, Compton and Rayleigh scattering, and pair production [5].

• The photoelectric effect is the interaction of a photon with an absorber atom, in which the γ-ray completely disappears. In its place, an energetic electron, called photoelectron, is ejected from one of the bound shells of the atom (typically the K shell). This is a threshold effect: the electron is emitted only if the incident photon energy (hν) is greater than the electron binding energy (E_B). The energy of the emitted particle is:

E_e = h\nu - E_B

A rough approximation of the probability of interaction is:

\sigma \propto \frac{Z^{n}}{E_{\gamma}^{3.5}}, \quad n = 4\text{--}5

• Compton scattering is the interaction between a photon and an electron of the medium, in which a portion of the gamma energy is transferred to the electron and the direction of the photon is changed. All deflection angles are possible, so the transferred energy can vary from zero to a large fraction of the incident photon energy. The probability of interaction depends linearly on Z and falls off with increasing energy.

• Rayleigh scattering is the elastic scattering of the photon off an atom: there is no change in the energy of the incident particle, but only a change in its propagation direction. The probability of the interaction is:

\sigma \propto \frac{Z^{2}}{E^{2}}

• Pair production is the creation of an electron-positron pair by a photon in the proximity of a nucleus. This can happen only if the energy of the incident photon is greater than 1.022 MeV, the sum of the rest-mass energies of the two particles.

The only interactions that play an important role in a PET system are photoelectric absorption and Compton scattering: pair production cannot happen because the photons emitted in an annihilation process have energy below the threshold, and the cross-section of Rayleigh scattering is never dominant with respect to the cross-sections of the other interactions (see Fig. 1.1 and 1.2).


Interaction inside the patient

In the biological tissues, for energies of 511 keV, the most important interaction that can occur is Compton scattering, as shown in Fig. 1.1, assuming the body as made of water. The mean free path of a 511 keV photon in water is of the order of 7 cm, so a fraction of the photons is deflected from its initial direction. This fraction can be a considerable part of the total number of photons in some experiments. The interactions inside the patient degrade the final image resolution, as it is difficult to discriminate between scattered and unscattered photons.

Figure 1.1: Photon cross-sections in water [6].

Attenuation

If the photons are scattered to a direction outside of the Field of View of the scanner, or are absorbed, the number of γ-rays that reach the detector is less than the number of emitted particles. This effect is called attenuation and, for a collimated beam travelling along the x-direction, can be expressed by the formula:

I(x) = I_0 \, e^{-\mu x}

where I(x) is the intensity after a distance x from the source, I_0 is the intensity at the source and µ is the linear attenuation coefficient, which accounts for all kinds of interactions of the photon.


In most cases, it is necessary to correct the registered photon counts taking into account the attenuation effect.
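The attenuation law above can be sketched numerically; the 20 cm path length and the function name are illustrative assumptions, while µ is derived from the ~7 cm mean free path in water quoted earlier:

```python
import math

def transmitted_fraction(mu_per_cm, x_cm):
    """Surviving fraction of a collimated 511 keV beam: I(x)/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * x_cm)

# Taking the ~7 cm mean free path in water, mu = 1/7 cm^-1; over 20 cm of
# tissue only a few percent of the photons survive unscattered.
mu_water = 1.0 / 7.0
print(transmitted_fraction(mu_water, 20.0))   # ~0.057

# Note: for a coincidence, both photons must escape, so the combined factor is
# exp(-mu * L) with L the total path length through the object along the LOR,
# independent of where on the LOR the annihilation occurred.
```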

Interaction inside the detector

The detector is usually made of a high-Z, high-density material, aiming to maximise the number of photons that interact via photoelectric absorption. If the energy deposition is localised in a small volume, it is possible to better determine the position where the photons interacted. This affects both the spatial resolution and the reconstructed image noise. Fig. 1.2 shows the cross-sections of the interactions for two typical detector materials.

Figure 1.2: Photon cross-sections in typical scintillation materials: (a) NaI:Tl, (b) LYSO:Ce [6].

1.1.3 Radiation detectors for PET

After escaping from the biological tissue, the photons travel through the air and then reach the detector, where they can be detected. A single PET detector is usually composed of a matrix of scintillators coupled to photodetectors and an electronic read-out system.

Scintillators

A scintillator consists of a dense crystalline material that emits visible light when an incident photon interacts with its constituents. The emitted light can be measured by the coupled photodetector and is proportional to the deposited energy.

(20)

Scintillators can be organic or inorganic, and in the solid or liquid state. The most common form used in nuclear medicine is a solid inorganic material.

An inorganic crystal scintillator is usually formed by adding impurities to a pure crystal with the aim of changing its energy levels. Without the impurities, the electrons of the material can only be bound to the lattice sites (in the valence band) or free to move throughout the crystal in the conduction band. With the absorption of energy, an electron can be raised from the valence band to the conduction band, and then the gap in the lattice is rapidly filled by another electron with the emission of another photon. The value of the energy gap usually falls in the range of U.V. light energies, so it cannot be detected by the photodetector, which is capable of detecting only light in the visible portion of the electromagnetic spectrum. Moreover, the crystal is not transparent to that wavelength. The effect of adding impurities is to create new energy levels in the forbidden gap to allow the process of fluorescence. In this way, the emitted photons have typical energies in the visible spectrum and can travel across the scintillator without being reabsorbed, until they reach the coupled photocathode.

An ideal scintillator for PET measurements should have:

• good stopping power for 511 keV photons, to maximise the number of photons that interact and deposit energy in the detector;

• a short decay time from the excited state, to make it possible to discriminate between two close successive events (high time resolution);

• a high number of photons emitted per unit energy (light output), to achieve good energy and timing resolution.

The properties of typical scintillators are shown in Tab. 1.3.

As the photon interaction is a probabilistic event, the particle may penetrate the scintillator and, if the direction of incidence is not orthogonal to the detector surface, an interaction may occur in a different crystal with respect to the one where the γ-ray first entered. This issue is called the penetration effect (Fig. 1.3). Moreover, the photoelectric absorptions are just a fraction of the total number of interactions and, due to Compton scattering, a photon can deposit its energy in several points of the detectors. This phenomenon is called inter-crystal scatter (Fig. 1.3). A high number of photons are thus misplaced due to these effects.


Figure 1.3: Graphical representation of the inter-crystal scatter and the penetration effect. These effects are important blurring factors, causing errors when a photon does not deposit most of its energy in the crystal where it entered the scintillator.

Material | Density (g/cm³) | Light yield (photons/MeV) | Decay time (ns) | µ at 511 keV (cm⁻¹) | Photofraction at 511 keV
Sodium iodide (NaI:Tl) | 3.67 | 41000 | 230 | 0.34 | 17%
Bismuth germanate (BGO) | 7.13 | 8200 | 300 | 0.96 | 40%
Lutetium oxyorthosilicate (LSO:Ce) | 7.40 | 30000 | 40 | 0.87 | 32%
Lutetium yttrium oxyorthosilicate (LYSO:Ce) | 7.10 | 32000 | 40 | 0.82 | 30%
Gadolinium oxyorthosilicate (GSO:Ce) | 6.71 | 8000 | 60 | 0.70 | 25%
Yttrium aluminum perovskite (YAP:Ce) | 5.37 | ∼21000 | 27 | 0.46 | 4.2%
Lutetium aluminum perovskite (LuAP:Ce) | 8.3 | 12000 | 18 | 0.95 | 30%
Barium fluoride (BaF2) | 4.89 | 1400 (fast) / 9500 (slow) | 0.6 (fast) / 630 (slow) | 0.43 |
Lanthanum bromide (LaBr3:Ce) | 5.08 | 63000 | 16 | 0.47 | 15%

Table 1.3: Properties of typical scintillators.


Photodetector

The detection of the scintillation photons is usually performed with a PhotoMultiplier Tube (PMT). A PMT is a device capable of converting visible light into an electrical pulse. In its simplest form, the PMT is made of a vacuum glass envelope containing a series of electrodes called dynodes. The inner part of the entrance window is coated with a thin layer of a material that easily emits electrons, via the photoelectric effect, when energy is deposited on it. This part of the tube is called the photocathode because it is kept at a negative potential to accelerate the electrons away from it. The probability of the electron emission is called quantum efficiency and is of the order of 15-25%. The electron is then accelerated toward a dynode, striking it and freeing more electrons. At each stage from the cathode to the anode more and more electrons are released at each dynode interaction, resulting in a current pulse that can be extracted from the glass envelope and measured.
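The multiplication chain can be sketched as follows; the secondary-emission factor and the number of dynodes are illustrative assumptions, not values from the text:

```python
# Each dynode multiplies the incoming electron count by a secondary-emission
# factor delta, so n_dynodes stages give an overall gain of delta ** n_dynodes.
def pmt_gain(delta, n_dynodes):
    return delta ** n_dynodes

print(pmt_gain(4.0, 10))   # 4^10 ~= 1e6 electrons per photoelectron
```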

In a PET detector, it is necessary to know the position in the crystal matrix where the initial photon interacted. This information can be retrieved by using a special PMT configuration (block detector [9]) or more complex devices such as Position Sensitive PhotoMultiplier Tubes (PsPMT).

The construction of hybrid systems integrating PET and Magnetic Resonance Imaging led to the use of solid-state photodetectors, like avalanche photodiodes (APD) or silicon photomultipliers (SiPM), as these detectors are, in contrast to PMTs, insensitive to magnetic fields. SiPM detectors have a signal rise time shorter than one nanosecond and are suitable for Time-Of-Flight PET. In recent years, SiPM detectors have been replacing PMTs also in PET scanners that are not operated in a magnetic field.

1.2 The PET system

A standard PET system is made of a set of detectors positioned around the object under study to detect the pairs of emitted annihilation photons. The tomographic acquisition requires the collection of a full set of line integrals, defined by the possible lines of flight obtained by sampling the object along the spatial and angular directions. A fundamental concept is the Line of Response (LOR), which is the line connecting the two crystal pixels in which the photons are detected by the scanner.


Several detector arrangements can properly scan the object. Similarly to SPECT, the detector could rotate around the object; in this case at least a pair of facing detectors is needed to acquire a full set of line integrals. A much more convenient arrangement is a ring geometry (or a polygonal approximation of a ring), in which many LORs can be sampled simultaneously without any detector movement.

The intersection of the possible LORs between all detectors defines the Field of View (FOV) of the scanner: in a ring geometry, the FOV is a cylinder centred on the scanner axis. To increase the FOV size along the axial dimension, modern PET scanners have more than one ring of detectors, creating a so-called multi-ring geometry.

1.2.1 Data acquisition

Exploiting the back-to-back emission (the non-collinearity is not considered, for simplicity) it is possible to determine the line along which the annihilation of the positron occurred.

Every time a photon reaches a detector and interacts with it, it is recorded as a single event along with a timestamp. As the two γ-rays are generated at the same time, they should reach the detectors almost simultaneously. The difference between the two timestamps is due to the different path lengths they travel before reaching the scintillators and to the finite timing resolution of the detectors. To take these effects into account, a pair of single events is identified as a coincident event if the difference between the timestamps of the two photons is less than a defined constant or, in other words, if it lies inside a so-called time window.

The time window is usually taken at least twice the timing resolution of the scanner, that is, the ability of a pair of detectors to determine the difference in arrival time of the annihilation photons. The timing resolution depends mostly on the properties of the scintillator and is usually of the order of a few hundred picoseconds [10] for modern scanners.
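The coincidence-window logic described above can be sketched as follows; the event format, the window value and the simple single-pass pairing policy are illustrative assumptions:

```python
def find_coincidences(singles, window_ps):
    """Pair time-sorted single events (timestamp_ps, crystal_id) whose time
    difference lies inside the coincidence window. Minimal sketch: pairs each
    single with the next one and ignores multiple-coincidence handling."""
    pairs = []
    i = 0
    while i < len(singles) - 1:
        t0, c0 = singles[i]
        t1, c1 = singles[i + 1]
        if t1 - t0 <= window_ps:
            pairs.append((c0, c1))   # this crystal pair defines a LOR
            i += 2                   # both singles are consumed
        else:
            i += 1                   # no partner in the window: discard it
    return pairs

# Window of 4000 ps, i.e. several times a few-hundred-ps timing resolution.
events = [(0, 7), (1500, 42), (9000, 3), (100000, 11), (100900, 29)]
print(find_coincidences(events, 4000))   # -> [(7, 42), (11, 29)]
```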

The determination of the line where the annihilation event occurred by exploiting the time coincidence window is called electronic collimation, by analogy with the passive collimation used in other nuclear medicine imaging techniques such as scintigraphy or SPECT.

Moreover, a coincidence event is regarded as valid and recorded only if:

• the LOR is within a valid acceptance angle of the tomograph;


• the energy deposited by each of the detected events is inside a selected energy window centred around 511 keV. A lower energy threshold is used to reject object-scattered events and an upper threshold is used to reject pile-up events.

Despite these criteria, some of the recorded events are unwanted because they descend from scattered photons or from two photons coming from different annihilation processes. The various kinds of events that can occur during a PET acquisition are (see Fig. 1.4):

• true coincidence event: it derives from the detection of two photons, arising from a single annihilation event, which reach two facing detectors within the timing and energy windows without interacting with any other medium apart from the detector;

• random coincidence event: it consists in the detection of two photons arising from different annihilation events that enter the detectors within the coincidence timing window;

• scattered event: it occurs when one or both photons originating from a single annihilation event undergo scattering before entering the detector within the timing coincidence window. This event leads to a misplaced LOR. Since Compton scattering involves a loss of energy, some of these events can be rejected using a narrower energy window.

The sum of true, scattered and random events is usually called prompt counts, of which only the true counts contain useful information for the image reconstruction.

The acquisition in PET can be performed in 2D or 3D mode. In 2D mode, coincidence events between detectors belonging to different rings are physically avoided by the interposition of absorbing material (septa) between adjacent rings, or are simply discarded at acquisition time. The acquired data are thus limited to a set of direct planes, i.e. the imaging planes perpendicular to the scanner or patient axis. This has the advantage of restricting the number of recorded scatter and random events, at the cost of a remarkable loss of sensitivity. To exploit the full potential of the electronic collimation it is possible to remove the septa and perform a fully 3D acquisition, where all kinds of coincidences between opposing detectors are allowed: the acquisition spans the direct planes as well as the LORs lying on oblique imaging planes that cross the direct ones.


(a) True (b) Scatter (c) Random

Figure 1.4: Graphical examples of the different kinds of coincidence events.

The 2D acquisition mode is nowadays obsolete and the 3D mode is the standard in modern PET scanners.

It is important to note that both 2D and 3D PET imaging lead to 3D images: in the case of a 2D acquisition, the various planes are stacked together to form a 3D volume (Multi-sliced Imaging, MS). The increased sensitivity of the 3D mode has many advantages, as it contributes to a higher SNR and thus allows a reduction of the radiotracer amount needed to perform the scan, but at the cost of a more complex reconstruction task, both in terms of data representation and storage and in terms of the image reconstruction process.

1.2.2 Detection efficiency

The term detection efficiency refers to the efficiency with which a radiation measuring instrument converts the emissions from the radiation source into a useful signal. To obtain maximum information with a minimum amount of injected radioactivity, the highest possible detection efficiency is desirable.

In a PET scanner, the detection efficiency can be defined as

D = g \cdot \varepsilon \cdot f \cdot F

where

• g: geometrical efficiency, that is, the solid-angle coverage of the scanner ring. It can be increased by reducing the ring diameter or by increasing the axial dimension; the former is limited by the patient size and the latter by the building cost.

• ε: intrinsic detection efficiency, obtained as the product of the detection efficiencies of the two detectors involved in the detection of the 511 keV photons. It can be raised by increasing the crystal thickness, but at the cost of a larger parallax error.

• f: electronic recording efficiency.

• F: scatter and absorption in the object.

1.2.3 Spatial resolution

The spatial resolution of a PET system represents its ability to distinguish between two point sources after image reconstruction. This quantity is usually expressed as the Full Width at Half Maximum (FWHM) of the three-dimensional Point Spread Function (PSF), measured in different positions of the FOV, as it is neither constant nor isotropic over the whole FOV. A theoretical formulation of the FWHM of the spatial resolution is usually derived for a point source located in the centre of the FOV. It is expressed as the quadrature combination of several terms, assuming a Gaussian distribution for each of them [11].

The degradation of the spatial resolution is essentially related to the uncertainty in the determination of the LOR, which depends on the detector geometry of the scanner, on the physics of the β+ decay (in terms of positron range and non-collinearity of the emitted photons) and on other aspects related to the detection process and the technology used.

The positron range depends on the energy of the emitted β particle and on the density of the medium in which the positron is travelling. The effective blurring ranges from 0.54 mm FWHM for 18F to 6.14 mm FWHM for 82Rb [11]. The range distribution is not Gaussian, with a sharp central cusp and relatively broad tails. As an approximation, assuming a Gaussian model and denoting by rms the root-mean-square of the range distribution, the contribution of the positron range to the FWHM of the spatial resolution is:

r = 2.35 · rms

The contribution of the non-collinearity of the annihilation photons is represented by:

FWHM ≈ ∆θ · D / 4 ≃ 0.0022 · D

where ∆θ is the mean angle of non-collinearity (in water ∆θ ∼ 0.5◦ [2]) and D is the distance between two opposing detectors.

To investigate the geometrical aspects, let us consider a ring geometry and a pixellated detector module. When a coincidence is detected, the LOR is defined by the pair of crystals that detected the single events. Due to the finite size of the crystals, the LOR is actually represented by a region named tube of response. From simple geometrical considerations, near the centre of the FOV there is a higher probability for the γ-rays to have been generated in the centre of the tube of response. The position of the annihilation is then known with an uncertainty whose FWHM equals half the size of the crystal. More precisely, the relevant distance d is the crystal pitch, i.e. the distance between the centres of two consecutive crystals (which includes the thickness of the optically reflective material between two scintillators).

FWHM = d / 2

When the crystals are not facing each other, the parallax error arises; it is usually indicated with the letter p:

p = α · r / √(r² + R²)

where r is the radial position where the PSF is computed, R is the radius of the PET ring and α is a factor that depends on the thickness of the crystals and on the attenuation coefficient of the scintillation material. The parallax error increases moving toward the border of the PET ring and is, therefore, a major source of error in pre-clinical or brain scanners, in which the FOV occupies a large portion of the scanner bore. The typical artefact produced by this error is the elongation of a point source along the radial direction and the shifting of its position toward the centre of the FOV. The parallax error can be reduced by decreasing the thickness of the crystals or by using detectors able to measure the depth of interaction (DOI) of the photons in the scintillation matrix.

Assuming that both crystal elements have been correctly identified as the regions of the detector where the first interactions of the two photons occurred, the geometrical and parallax errors are the only two contributions to the spatial resolution that do not arise directly from the physics of the annihilation process.


However, some errors can also occur in the identification of the crystals. The crystals are usually identified by the photodetector via a light-sharing technique that calculates the centroid of the light spot emerging from the crystal; the procedure is referred to as pixel identification. Errors in the pixel identification and the effect of multiple interactions in more than one crystal (inter-crystal scattering, ICS) are included in the so-called coding error term, usually indicated by the letter b.

Summarizing, the best achievable spatial resolution of a PET system in the centre of the FOV can be expressed as:

FWHM = √[ (d/2)² + (0.0022 D)² + p² + b² + r² ]   (1.1)
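As an illustration, the quadrature combination of Eq. 1.1 can be evaluated numerically; the sketch below uses purely illustrative numbers (the crystal pitch, detector separation and positron range are assumptions, not values from the text):

```python
import math

def pet_fwhm(d, D, p=0.0, b=0.0, range_rms=0.0):
    """Quadrature combination of the resolution terms of Eq. 1.1.

    d: crystal pitch (mm), D: distance between opposing detectors (mm),
    p: parallax term (mm), b: coding term (mm),
    range_rms: rms of the positron range distribution (mm).
    """
    r = 2.35 * range_rms          # Gaussian approximation of the positron range
    nc = 0.0022 * D               # photon non-collinearity contribution
    return math.sqrt((d / 2.0) ** 2 + nc ** 2 + p ** 2 + b ** 2 + r ** 2)

# Illustrative: 4 mm pitch, 80 cm detector separation, 18F-like range
print(round(pet_fwhm(d=4.0, D=800.0, range_rms=0.23), 2))
```

For such geometries the non-collinearity and crystal-size terms dominate, which is consistent with the discussion above.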

1.2.4 Noise and bias in PET measurement

An ideal PET experiment consists of the acquisition of the number of true (T) coincidences in each line of response of the scanner. As the detector has finite sensitivity, finite energy and timing resolution, and due to the probabilistic nature of the radioactive decay process, the measured data typically differ from the expected theoretical value. The true number of counts is affected by a statistical uncertainty which can be described by assuming a Poisson distribution as statistical model, with parameter λT. Moreover, the number of counts recorded by each LOR is corrupted by the presence of scatter (S) and random (R) counts, each one characterized by its own Poisson distribution, with parameters λS and λR. The total number of counts (η) recorded by each LOR is obtained as the sum of these three components:

η = T + S + R

The capability of a PET system to reject scattered and random events depends respectively on the energy and time resolution of the detectors; in turn, these features depend on the scintillator used, in terms of light yield and scintillation decay time. The scatter and random counts contribute to the statistical noise, as η is characterized by a Poisson distribution with parameter λη = λT + λS + λR, but they also introduce a bias by shifting the count value away from the value T. It is, therefore, necessary to correct the measurements to take into account the S and R counts (see section 1.2.6) or to accurately model these aspects during the reconstruction process.

Several measurements are performed to characterize the noise in the data acquired by a PET scanner.

A general measure is the so-called uniformity measurement, which quantifies the relative standard deviation over a region of a particular phantom that presents a uniform activity concentration.

The measure of the number of scattered events during an acquisition is called scatter fraction and is defined as the ratio between the scattered event rate and the sum of the true and scattered rates:

SF = S / (T + S)

The measured scattered and true events vary linearly with the activity, so their ratio can be considered constant. The scattered events generate a low spatial-frequency background that reduces the contrast in the reconstructed image. An annihilation photon may scatter by as much as 45 degrees and lose as much as 115 keV of its energy to the recoil electron [12]. Because of the limited energy resolution of PET scanners, it is likely that a coincidence event involving such a scattered photon is accepted within the energy window, whose width is of course determined by the energy resolution of the scanner. For modern tomographs the energy resolution is in the order of 10%-20% FWHM [13, 14, 15] and the scatter fraction can represent up to 40% of the total recorded coincidences in a 3D thorax acquisition [16].

The other source of noise arises from the random coincidences, which generate a more uniform background. Their importance becomes significant as the activity increases, because the random count rate is proportional to the product of the single event count rates and thus grows quadratically with the activity.

To compare the count-rate performance of different tomographs, or of the same scanner operating in different conditions, it is possible to use the Noise Equivalent Count Rate (NECR) [17], defined as:

NECR = T² / (T + S + kR)

where T, S and R are the true, scatter and random coincidence count rates and k is a parameter that depends on the way the R count rate is determined. The parameter k is equal to 1 when R is estimated from the single count rate and 2 when it is estimated using the delayed window method.


1.2.5 Data Representation

The term data representation indicates the way the information obtained from the scanner is stored in a digital form to be used for image reconstruction.

Raw data from the scanner are usually stored in a format called list-mode. This format includes, for each measurement, the coordinates of the pixels defining the LOR, the energies and the timestamps. Additional information can also be stored, like the depth of interaction (DOI) in the crystals.

The list-mode format is the natural way to store PET data but, in some configurations, it can be optimized to reduce the occupied disk space. In these cases, the list-mode format is replaced by a matrix-based format, often called histogram, in which the only information stored is an index associated with the LOR and the number of events (counts) belonging to that LOR that meet particular criteria (like an energy window). All other information is lost.

A common way of storing the histogram is the so-called sinogram in which, apart from storing the counts, each LOR is indexed by a set of physical coordinates. For example, in the simplified case of a 2D scanner, in a single plane at a distance z from the centre of the scanner, a LOR is identified by two coordinates, s and φ: the s coordinate represents the geometrical distance from the axis and φ represents the inclination. So, in 2D PET each LOR is identified by the set (s, φ, z), while in 3D PET a fourth coordinate is needed to account for the inter-ring LORs.
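As an illustration of this indexing, the sketch below maps a 2D LOR, given by its two detection points, to (s, φ) coordinates; the sign and angle conventions are one possible choice, not prescribed by the text:

```python
import math

def lor_to_sino(x1, y1, x2, y2):
    """Map a 2D LOR through points (x1, y1)-(x2, y2) to sinogram
    coordinates (s, phi): phi is the angle of the normal to the LOR,
    folded into [0, pi), and s the signed distance from the origin."""
    phi = math.atan2(y2 - y1, x2 - x1) + math.pi / 2   # normal direction
    s = x1 * math.cos(phi) + y1 * math.sin(phi)        # distance from origin
    if phi >= math.pi:                                 # fold phi into [0, pi)
        phi -= math.pi
        s = -s
    elif phi < 0:
        phi += math.pi
        s = -s
    return s, phi

# A vertical LOR at x = 3 maps to s = 3, phi = 0
print(lor_to_sino(3.0, -1.0, 3.0, 1.0))
```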

1.2.6 Data Correction

During the acquisition, the count number of the different LORs is altered by the presence of attenuation and of scattered and random coincidences. Therefore, the data need to be corrected to recover the right count number.

Scatter Correction

The scatter correction consists of estimating the number of scatter counts contributing to a given LOR as a result of a Compton interaction. The methods used for the scatter correction can be classified according to the approach used to estimate the scatter count distribution: analytic methods, Monte Carlo simulation techniques, multiple energy window methods and model-based scatter correction algorithms. A treatment of the vast literature related to the various approaches to scatter correction is beyond the scope of this thesis, as it will not be implemented in this work, and we refer to [12] for an introduction.

Random Correction

Apart from limiting the occurrence of random events by reducing the time window, it is also possible to apply a correction for random coincidences in the reconstructed image. This process is based on an estimation of the random counts distribution, which may be either stored in a separate histogram or subtracted from the prompt counts.

An indirect measurement of the random counts during an acquisition time T, in a particular line of response defined by the detector elements i and j with single count-rates Ci and Cj, can be computed as:

R_ij = 2τ ∫₀ᵀ C_i(t) C_j(t) dt

where τ is the duration of the timing window.

A better estimation, not affected by the systematic error in the a priori estimation of τ, is obtained with the delayed window method. In this approach, the logic pulse from one detector is delayed in such a way that it cannot be correlated with the other photon of the annihilation pair. With the use of another coincidence processor, the coincidences between the delayed pulse from detector i and the prompt pulse from detector j are counted in Rij. This method is accurate but presents two major drawbacks: the increased time taken to process the random coincidences contributes to the overall system dead time, and the estimate of the random counts on each line is subject to Poisson counting statistics. If the random counts are stored in a separate data-set, instead of being directly subtracted by the scanner during the acquisition, they can be post-processed to reduce the noise in the measurements. Several variance reduction algorithms have been developed [18] and the most accurate has been shown to be the one proposed by Casey and Hoffman [19].
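For constant singles rates, the integral above reduces to R_ij = 2τ · C_i · C_j; a minimal sketch (the window width and rates below are illustrative):

```python
def randoms_rate(tau, c_i, c_j):
    """Expected random-coincidence rate on the LOR defined by detectors
    i and j, for a coincidence window of width tau (s) and constant
    singles rates c_i, c_j (counts/s): R_ij = 2 * tau * c_i * c_j."""
    return 2.0 * tau * c_i * c_j

# Illustrative: 4 ns coincidence window, 10 kcps singles on each detector
print(randoms_rate(tau=4e-9, c_i=10e3, c_j=10e3))
```

The quadratic dependence on the singles rate is why randoms dominate at high activity.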

Attenuation Correction

For photons of 511 keV, the attenuation coefficient µ is ∼ 0.096 cm⁻¹ in soft tissues and ∼ 0.17 cm⁻¹ in bones, so there is a relatively high probability for a γ-ray to interact inside the patient before reaching the detectors. If we express the number of counts in a particular LOR as the line integral of the activity distribution, including the effect of the attenuation, we obtain:

N_ij = k ∫_{LOR_ij} ρ(x, y, z) dx dy dz · P_i · P_j   (1.2)

where

• i and j represent the crystal indexes;
• N_ij is the number of counts in LOR_ij;
• P_i = e^(−∫_{L_i} µ(x) dx) is the probability of one of the photons to reach detector i, along the path L_i between the annihilation point and the detector;
• P_j = e^(−∫_{L_j} µ(x) dx) is the probability of the other photon to reach detector j, along the path L_j between the annihilation point and the detector.

The probability P for both the γ-rays to reach the detectors can be written as:

P = P_i · P_j = e^(−∫_L µ(x) dx)   with L = L_i + L_j

and it does not depend on the position of the annihilation point along the LOR, but only on the line integral of the attenuation coefficient along the LOR itself. The quantity

1/P = e^(∫_L µ(x) dx)

is usually called the attenuation correction factor (ACF). For example, a LOR crossing 10 cm of soft tissue has an ACF value of ∼ 3.

The attenuation has two effects on the final image: counts are underestimated, strongly affecting quantitative imaging, and image artefacts can arise due to the different lengths of tissue crossed or to the presence of strong attenuation factors like bones.

In past years, the most common approach used to derive the ACFs for each LOR of a PET scanner was to directly measure µ(x, y, z) in the so-called transmission scan, which can be performed with 511 keV coincidence photons. This method consists of comparing the counts recorded in a certain line in the presence of the object (transmission scan) to those obtained without the object (blank scan). Most modern scanners, in addition to the PET acquisition, can acquire and process the information of a CT (Computed Tomography) transmission scan, and the CT data provide a fast source for the correction of photon attenuation in PET emission data [20].

1.2.7 Detector Normalization

The lines of response of the scanner have different sensitivities for a variety of reasons, such as variations in the detector electronics efficiency (e.g. detector thresholds), in the crystal size and light yield, and geometric factors (such as the solid angle subtended). It is necessary to know this variation to reconstruct artefact-free images and to perform quantitative studies. The process of correcting for these effects is known as normalization, and the individual correction factors of each LOR are known as normalization coefficients [12]. These coefficients are experimentally derived from the scan of a geometrically uniform positron source, like a planar object or a cylinder.

The simplest possible approach to correct the data is known as direct normalization, in which the coefficients are assumed to be proportional to the inverse of the counts in each LOR [12]. The main problems of this approach are:

• to obtain normalization factors with low statistical noise, it is necessary to collect a high number of counts in each LOR. As the source used usually has a relatively low activity density, to limit excessive dead time and high random count rates, the scan times are quite long, typically several hours;

• the source used must have a very uniform activity concentration, or the resultant coefficients will be biased;

• the amount of scatter events and its distribution in the normalization scan may be substantially different from those encountered in normal imaging, and this can result in bias and possible artefacts.

To overcome these limitations, a so-called component-based approach can be used, in which the coefficients are factorised into a series of components, each one reflecting a particular source of sensitivity variation. It has the drawback that the accuracy of the normalization depends on the accuracy of the model used to describe the tomograph, but it can help in reducing the acquisition time of the normalization phantom [12].
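A sketch of the direct normalization described above; the handling of empty LORs and the scaling to unit mean are illustrative implementation choices, not prescriptions from the text:

```python
def direct_norm_coeffs(counts, eps=0.0):
    """Direct normalization: coefficients proportional to the inverse of
    the counts recorded in each LOR during the uniform-source scan,
    rescaled so that the mean of the valid coefficients is 1.
    LORs with counts <= eps are flagged with a 0 coefficient."""
    inv = [1.0 / c if c > eps else 0.0 for c in counts]
    valid = [v for v in inv if v > 0.0]
    mean = sum(valid) / len(valid)
    return [v / mean for v in inv]

# Toy uniform-source scan over 4 LORs (counts are illustrative)
print(direct_norm_coeffs([100.0, 200.0, 0.0, 100.0]))
```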


Chapter 2

Image reconstruction methods for PET

An excellent review of the problem formulation in emission tomography is given by Lewitt and Matej [21]. This chapter starts from this paper to introduce the problem of image reconstruction from projections. As a complete dissertation on the topic is beyond the scope of this thesis, several aspects may be left out.

2.1 Problem formulation for PET

A positron emission tomography experiment consists of a set of discrete measurements in which each measurement ηi corresponds to the number of counts recorded by the i-th line of response of the scanner. ηi can be obtained by adding up three separate components: true, scatter and random counts. The three components are described by random variables with a Poisson probability density function; therefore ηi is also described as a random variable with a Poisson distribution. By denoting η_i^t, η_i^s and η_i^r the expectation values of these components, the expectation value for the number of counts in LOR_i is:

η_i = η_i^t + η_i^s + η_i^r   (2.1)

The true-counts component corresponds to some integral transformation of a function of continuous space variables which in turn represents the spatial distribution of the radiotracer administered to the subject under investigation. This component is therefore represented as a discrete-continuous (D-C) model which relates the discrete data to the function of continuous spatial variables ρ(x, y, z). This relationship is usually assumed to be linear and space-variant, under the assumption that the detector response is linear over the range of count-rates under analysis. By denoting h_i(x, y, z) the contribution to η_i^t of a point source of unit strength located in (x, y, z), the D-C model is defined as:

η_i^t = ∫∫∫_Ω h_i(x, y, z) ρ(x, y, z) dx dy dz,   i = 1, ..., M   (2.2)

with M the number of measurements and Ω the finite domain of the spatial tracer distribution.

The integration kernel h_i(x, y, z) has non-zero values in a tube-shaped region of space defined by the two crystals in coincidence. The scatter contribution is a linear function of ρ(x, y, z) and could, in principle, be added to the integration kernel. However, the resulting kernel would have a much broader region of support with respect to the ideal tube-shaped kernel, so it is usually preferred to leave this contribution out of the kernel and to consider it as an additive component of ηi.

To obtain a solution to the problem it is possible to follow three directions.

• Try to solve directly the D-C model without introducing further discretization or approximation. The methods based on this approach have limited use in emission tomography and will not be discussed further.

• Consider the discrete data ηi as samples of a function of continuous variables in the measurement space and obtain a continuous-continuous (C-C) model. For particular forms of the integration kernel and scanner geometry, it is possible to obtain an inversion formula for ρ(x, y, z). These methods usually lead to closed analytic solutions of the problem and are known as analytic image reconstruction algorithms.

• Represent the unknown function ρ(x, y, z) by a linear combination of a finite number of basis functions to obtain a discrete-discrete (D-D) approximation. These methods usually lead to iterative image reconstruction algorithms.

The choice of the most suitable reconstruction algorithm relies on a compromise between the accuracy/quality of the images and the resource requirements (time and computing power) for performing the reconstruction. Nowadays, iterative methods are becoming dominant in PET, due to the possibility of describing the statistical properties of the measurements and of including a better physical model of the acquisition system, despite the larger computational effort required.


2.2 Continuous-Continuous approximation

2.2.1 Projection and Back-projection

In the context of the C-C model approximation, the true-counts component η_i^t is expressed through the concept of projection. For simplicity, let ρ(x, y) be the tracer density function in the x-y plane; the projection is defined as the line integral of the density distribution ρ along a particular line:

p(s, φ) = ∫_{−∞}^{+∞} ρ(s·cos φ − t·sin φ, s·sin φ + t·cos φ) dt   (2.3)

For each angle, the projections correspond to a line profile. Putting together the profiles for every angle leads to the sinogram, which is usually represented as an image with the s coordinate along the abscissa and φ along the ordinate, where each row of the image represents a projection.

The analytical process of transforming the object into its projections, to create a set of sinograms, one for each plane, is called Radon transform. The links between the Radon transform and the PET acquisition are:

• the data recorded by the scanner, after the appropriate corrections and neglecting finite spatial resolution and noise, correspond exactly to the projections;

• the integration lines along which the projections are computed correspond to the lines of response of the scanner.

To investigate the problem of recovering the original activity distribution, we introduce the back-projection operator, defined as:

b(x, y) = ∫₀^π p(x·cos φ + y·sin φ, φ) dφ   (2.4)

This operation is not the analytic inverse of the projection but consists, in practice, in re-distributing the activity recorded by each line integral along the integration path. Because the exact location of the source is not known, the activity value is uniformly distributed along the whole path. By repeating this procedure for all the angles between 0 and π and exploiting superposition, the result obtained is an image close to the original one, with a lot of blurring in correspondence of the activity spikes. It can be demonstrated that, for an infinite number of angular projections and infinite spatial sampling, the back-projected image b(r, φ) (in polar coordinates) is equivalent to the original image ρ(r, φ) convolved with a 1/r function:

b(r, φ) = ρ(r, φ) ∗ (1/r)   (2.5)

Central Slice Theorem

This effect can be simply understood by introducing the central slice theorem. In two dimensions it states that the 1D Fourier transform of the projections p(s, φ) with respect to the variable s, for a fixed angle φ, is equal to a slice through the centre, at the same angle, of the 2D Fourier transform of the distribution function ρ(x, y):

F₁[p(s, φ)] = P(v_s, φ) = F(v_s cos φ, v_s sin φ) = F₂[ρ](v_s cos φ, v_s sin φ)   (2.6)

where v_s is the conjugate Fourier variable of s.

The central slice theorem implies that, by knowing P(v_s, φ) for all angles 0 < φ < π, it is possible to reconstruct the whole F(v_x, v_y); the inverse 2D Fourier transform of F(v_x, v_y) then gives back ρ(x, y).
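The theorem can be verified numerically for φ = 0, where the projection is simply a sum along one image axis; this small self-contained check is illustrative and not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = rng.random((64, 64))             # arbitrary test "tracer distribution"

proj = rho.sum(axis=1)                 # p(s, phi=0): integrate along t
lhs = np.fft.fft(proj)                 # F1 of the projection
rhs = np.fft.fft2(rho)[:, 0]           # central slice of F2[rho] at phi=0

print(np.allclose(lhs, rhs))
```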

2.2.2 Filtered Back-Projection

One of the most common algorithms that use this approach is the so-called Filtered Back-Projection, or FBP, which can be summarized by these steps:

• one-dimensional Fourier transform of each projection;

• filtering of each projection in the Fourier space by multiplication with the ramp function;

• one-dimensional inverse Fourier transform of each filtered projection;

• back-projection of the filtered projections.

Mathematically, its expression is:

ρ(x, y) = ∫₀^π p_F(x·cos φ + y·sin φ, φ) dφ   (2.7)

where p_F represents the filtered projection:

p_F(s, φ) = F₁⁻¹[ F₁[p(s, φ)] · |v_s| ]   (2.8)

In a real situation, the results of the FBP reconstruction are not exact, due to the limited angular and spatial sampling and to the presence of noise. When the angular sampling is not infinite, the image is affected by the typical star artefact since, during the back-projection, counts are accumulated preferentially along the projection angles.

The ramp filter amplifies the higher frequencies, causing noise amplification. To reduce this effect, low-pass filters are used.
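A minimal numerical sketch of the FBP steps above (ramp filtering in Fourier space followed by back-projection); the discretization, interpolation and normalization choices are illustrative, not the text's implementation:

```python
import numpy as np

def fbp_2d(sino, phis, npix):
    """Minimal 2D filtered back-projection sketch.
    sino: (n_s, n_phi) parallel projections p(s, phi), s axis centred on
    the scanner axis; phis: projection angles in radians."""
    n_s = sino.shape[0]
    # Ramp filter |v_s| applied to each projection in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_s))
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=0) * ramp[:, None], axis=0))
    # Back-project: accumulate p_F(x cos(phi) + y sin(phi), phi)
    coords = np.arange(npix) - npix // 2
    x, y = np.meshgrid(coords, coords, indexing="ij")
    s_axis = np.arange(n_s) - n_s // 2
    image = np.zeros((npix, npix))
    for k, phi in enumerate(phis):
        s = x * np.cos(phi) + y * np.sin(phi)
        image += np.interp(s, s_axis, filtered[:, k], left=0.0, right=0.0)
    return image * np.pi / len(phis)

# Sinogram of a point source on the axis: a delta at s = 0 for every angle
phis = np.linspace(0.0, np.pi, 90, endpoint=False)
sino = np.zeros((65, len(phis)))
sino[65 // 2, :] = 1.0
img = fbp_2d(sino, phis, npix=65)
print(np.unravel_index(np.argmax(img), img.shape))
```

For this point-source sinogram the reconstructed maximum falls on the central pixel, as expected from the discussion above.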

2.2.3 Three dimensional Analytic Reconstruction

In the 2D acquisition mode, the acquired data can be stored in the form of a sinogram p(s, φ), that is a set of 1D parallel projections of the object f(x, y), orthogonal to the scanner axis, for a set of orientations φ ∈ [0, π].

In the same way, the LORs measured in a 3D acquisition can be grouped into a set of lines parallel to a direction specified by a unit vector

~n = (−cos θ sin φ, cos θ cos φ, sin θ)

where the angle θ represents the angle between the LOR and the trans-axial plane, so that the data for θ = 0 correspond to a 2D acquisition.

The set of line integrals parallel to ~n defines the 2D parallel projection of the tracer distribution

p(~s, ~n) = ∫ f(~s + t~n) dt

where the position of the line is specified by the vector ~s ∈ ~n⊥, the projection plane orthogonal to ~n.

In the case of a cylindrical scanner with more than one ring (see Fig. 2.1), assuming continuous sampling, the scanner samples all the LORs such that the line defined by (~s, ~n) has two intersections with the lateral surface of the scanner, corresponding to two detectors in coincidence.

For each θ ≠ 0, not all the LORs parallel to ~n that cross the FOV of the scanner are measured, because some of them have only one intersection with the lateral surface of the scanner and cannot be measured in coincidence. In this case, a parallel projection is measured only for a subset of LORs, and it is called a truncated projection. The existence of this problem is sometimes referred to as the lack of shift-invariance of the scanner in the 3D acquisition mode.

Figure 2.1: Graphical representation of non-truncated and truncated projections; a ring is defined by each pair of facing crystals. (a) 2D parallel projections, equal to 3D parallel projections for θ = 0. (b) 3D parallel projection for θ ≠ 0: the projection is truncated because there exist lines intercepting only one of the border crystals (dashed lines).

A positive aspect of this acquisition method is, instead, the fact that the direct (θ = 0) parallel planes are already sufficient to perform the reconstruction, so the 3D data are redundant. A formulation of the FBP algorithm is possible also for a 3D data set, but it can be applied only in the presence of non-truncated projections. Several analytical techniques have been developed to solve this problem; the main approaches are data re-binning and the re-projection algorithm. The simplest of the re-binning methods is the Single-Slice Re-binning algorithm (SSRB) [22], in which, after a coincidence event has been detected, the average of the axial coordinates of the two photons is calculated and the event is placed in the closest sinogram. This method offers good image quality only if the scanned object occupies a small fraction of the FOV around the axis of the scanner. A more sophisticated re-binning method is the FOurier RE-binning algorithm (FORE) [23, 24].

A different approach is the 3D re-projection algorithm [25], which exploits the data redundancy. It consists of four steps:

• reconstruction of a first image estimate by applying the 2D-FBP algorithm to the non-truncated parallel projections;

• forward projection of this image to estimate the unmeasured portions of the truncated projections;

• merging of the measured and estimated data sets to form a complete set of non-truncated projections;

• reconstruction of the image from the full data set with the 3D-FBP algorithm.

In some applications, where the object occupies a small fraction of the FOV, it is possible to restrict the axial angle range so as to use only non-truncated projections and perform a direct 3D-FBP. However, this is usually not feasible for clinical and pre-clinical applications, as the subject occupies the entire axial length of the scanner.

It has to be noted that the majority of modern 3D image reconstruction is performed using statistical iterative algorithms, presented in the next section.

2.3 Discrete-Discrete approximation

In this context, the function ρ(x, y, z) is approximated by a finite series expansion using a set of M basis functions. By denoting the j-th basis function as b_j(x, y, z) and ρ̃_j the coefficient which multiplies b_j in the expansion, we get for the approximated function:

ρ̃(x, y, z) = Σ_{j=1}^{M} ρ̃_j b_j(x, y, z)   (2.9)

Considering the approximation above, the D-C model for the true-counts component becomes:

η̃_i^t = Σ_{j=1}^{M} s_{i,j} ρ̃_j   (2.10)

where

s_{i,j} = ∫∫∫_Ω h_i(x, y, z) b_j(x, y, z) dx dy dz   (2.11)


As we are dealing with a linear expression, the various objects are usually expressed using standard algebra tools to obtain a matrix form:

η̃^t = S ρ̃   (2.12)

• η̃^t ∈ R^{N×1} is the column vector with components (η̃_1^t, ..., η̃_N^t)^T, where N is the number of LORs of the scanner.

• ρ̃ ∈ R^{M×1} is the column vector with components (ρ̃_1, ..., ρ̃_M)^T.

• S ∈ R^{N×M} is a two-dimensional matrix containing the components s_{i,j}. This object is called projection matrix, system matrix or system model. The element s_{i,j} represents the probability that the two γ-rays, emitted by the annihilation of a positron occurred in the volume related to the basis function j, are recorded in the LOR i.
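A toy numerical instance of the matrix formulation above; the matrix and vector values are purely illustrative:

```python
import numpy as np

# Sketch of the D-D forward model: N = 3 LORs, M = 4 basis functions.
S = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.2, 0.0, 0.0, 0.8]])    # s_ij: detection probabilities
rho = np.array([10.0, 20.0, 0.0, 5.0])  # basis-function coefficients
eta_s = np.zeros(3)                     # scatter estimate (here: none)
eta_r = np.full(3, 0.5)                 # random estimate per LOR

eta = S @ rho + eta_s + eta_r           # expectation of the measurements
print(eta)
```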

The expectation value for the vector of the measured data becomes:

η = S ρ̃ + η̃^s + η̃^r   (2.13)

where η̃^s and η̃^r are the estimates of the scatter and random coincidences. The goal of the D-D methods is to find a vector ρ̃ of coefficients for the basis functions such that η is close to the measured data with respect to some chosen metric. To find the solution, it is necessary to specify various aspects of the method among a wide set of possibilities, which can be subdivided into five general components.

• A set of basis functions, which allows the image to be represented by a finite set of parameters. The most common basis function is the voxel, which corresponds to the characteristic function of boxes located on a regular three-dimensional grid in space. Other localised basis functions have been investigated in the literature, such as polar grids or voxels [26, 27] and the so-called blobs [28, 27]. Polar voxels are used mostly to exploit the symmetries of the problem description, in order to reduce the computational requirements. Blobs provide an intrinsic higher smoothness of the solution.

• A model of the physics of the experimental setup: this component corresponds to the system model S in Eq. 2.11. In principle, it is possible to include in the model several aspects of the acquisition process, by modelling the physics of the imaging system from the positron emission to the detection of the annihilation photons.

• A model of the measurement uncertainty, which is a statistical relationship between the measured values and the expected values of the measurements. In other words, this component describes how the measured values vary around their mean value, and it is derived from the physical understanding of the acquisition process. In most algorithms, as photon detection is Poisson distributed, a Poisson model is used.

• An objective function, which is used to determine how well the solution fits the data and matches a priori desired properties of the image.

• A numerical algorithm to compute the values of the coefficients of the basis functions and obtain the final image, based on the choices made for the previous components. The role of the algorithm is to find the vector of coefficients that maximises (or minimises) the objective function. For a typical setup, a D-D model requires the use of an iterative algorithm that provides a progression of estimates converging toward a solution maximizing (or minimizing) the objective function.

The advantages of the discrete-discrete approximation over the continuous-continuous approximation are twofold:

• the physics of the acquisition process is well modelled in the system matrix, and this provides a better image quality with respect to the simple Radon transform inversion;

• the statistical nature of the measured data is now modelled in the reconstruction process, and this provides images with less noise.

Of course, the D-D model comes at the price of a much higher computational cost.

2.3.1 The objective function

As we are dealing with random variables the best estimate of the image is com-puted using statistical theory. In a frequentist approach, the problem is mostly solved using a maximum likelihood method. The objective function Ω(eρ, η) is


therefore associated with the likelihood function of the image given the measurements:

$$\Omega(\tilde{\rho}, \eta) = \log L(\tilde{\rho}, \eta) \qquad (2.14)$$

where $L(\tilde{\rho}, \eta)$ is the likelihood. When the data are very noisy, a regularization term is usually added to penalize solutions with high image roughness or other deviations from an a priori image model. The objective function becomes:

$$\Omega(\tilde{\rho}, \eta) = \log L(\tilde{\rho}, \eta) - \beta R(\tilde{\rho}) \qquad (2.15)$$

where $\beta$ determines the relative weight of the two terms.

The same objective function can be obtained in a Bayesian framework by using as the image prior a Gibbs distribution of the form $\exp(-\beta R(\tilde{\rho}))$. This method is known as maximum a posteriori (MAP) [29].
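The penalised objective of Eq. 2.15 can be sketched numerically. In this Python sketch, the quadratic roughness penalty $R$, the toy $2\times 2$ system matrix, and all array values are illustrative assumptions, not the choices made in this work.

```python
import numpy as np

# Sketch of the penalised objective of Eq. 2.15 for Poisson data.

def log_likelihood(rho, s, eta):
    """Poisson log-likelihood with the constant log(eta!) term dropped."""
    mean = s @ rho                       # expected projections
    return float(np.sum(eta * np.log(mean) - mean))

def roughness(rho):
    """R(rho): illustrative quadratic penalty on neighbouring differences."""
    return float(np.sum(np.diff(rho) ** 2))

def objective(rho, s, eta, beta):
    """Omega(rho, eta) = log L(rho, eta) - beta * R(rho)."""
    return log_likelihood(rho, s, eta) - beta * roughness(rho)

s = np.array([[1.0, 0.5],
              [0.2, 1.0]])               # toy 2x2 system matrix
eta = np.array([4.0, 3.0])               # toy measurements
```

For $\beta = 0$ the objective reduces to the log-likelihood, while increasing $\beta$ lowers the score of rough images for the same data-fit term.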

2.3.2 ML-EM and OS-EM algorithms

In this work we will use the Maximum Likelihood Expectation Maximisation (ML-EM) [30, 31, 32, 33] algorithm and its accelerated version with ordered subsets (OS-EM) [34].

The ML-EM algorithm uses the statistical optimization strategy known as expectation maximization (EM) applied to the log-likelihood of the data, modelled as Poisson distributed, without any regularization term.

Let $Y_1, \dots, Y_n$ be $n$ independent random variables with probability density functions $f_i(y_i; \theta)$ depending on a vector parameter $\theta$. The joint probability density of the $n$ independent observations $y = (y_1, \dots, y_n)$ is:

$$f(y; \theta) = \prod_{i=1}^{n} f_i(y_i; \theta) = L(\theta, y)$$

This expression, viewed as a function of the unknown parameter $\theta$ given the data $y$, represents the likelihood function.

The EM approach is composed of two steps:

• Expectation step (E step): calculate the expectation value of the log-likelihood function, given the data $y$, under the current estimate of the parameter $\theta$.

• Maximization step (M step): find the parameter value that maximizes the expected likelihood.

For an imaging system, the activity distribution $\rho$ is the unknown parameter and the projections $\eta$ represent the measured data. Each projection $\eta_i$, along the $i$-th LOR, is the sum of the numbers of recorded photons arising from the voxels $\rho_j$ crossed by that LOR:

$$\eta_i = \sum_j \eta_{ij} \qquad (2.16)$$

The value $\eta_{ij}$ is the number of photons recorded from each voxel; it is a random variable with a Poisson distribution whose expectation value is given by Eq. 2.12:

$$E(\eta_{ij}) = s_{ij}\rho_j \qquad (2.17)$$

The probability mass function for a Poisson distribution of parameter $\lambda$ is:

$$P(p; \lambda) = \frac{e^{-\lambda}\lambda^{p}}{p!} \qquad (2.18)$$

where $\lambda$ is the expected value of the random variable.

Thus, the probability of detecting $\eta_{ij}$ photons with expected value $E(\eta_{ij})$ is:

$$P(\eta_{ij}; E(\eta_{ij})) = \frac{e^{-E(\eta_{ij})}\,E(\eta_{ij})^{\eta_{ij}}}{\eta_{ij}!} = \frac{e^{-s_{ij}\rho_j}(s_{ij}\rho_j)^{\eta_{ij}}}{\eta_{ij}!} \qquad (2.19)$$
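As a quick numeric illustration of Eqs. 2.18–2.19, the following Python sketch evaluates the Poisson probabilities for an assumed toy value of $s_{ij}\rho_j$ (the values and names are illustrative only):

```python
import math

# The counts from voxel j along LOR i are Poisson distributed with
# mean s_ij * rho_j (Eq. 2.17).

def poisson_pmf(p, lam):
    """P(p; lambda) = exp(-lambda) * lambda**p / p!   (Eq. 2.18)."""
    return math.exp(-lam) * lam ** p / math.factorial(p)

s_ij, rho_j = 0.5, 4.0         # assumed toy values
lam = s_ij * rho_j              # E(eta_ij) = 2.0
probs = [poisson_pmf(k, lam) for k in range(50)]
```

Summing the probabilities over the (effectively complete) range of counts recovers 1, and the mean count recovers $s_{ij}\rho_j$, as expected for a Poisson model.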

It is possible to set up the log-likelihood function as the logarithm of the joint probability mass function of all the random variables $\eta_{ij}$:

$$\log L(\rho) = \log \prod_{i,j} \frac{e^{-s_{ij}\rho_j}(s_{ij}\rho_j)^{\eta_{ij}}}{\eta_{ij}!} = \sum_{i,j} \left[\eta_{ij}\log(s_{ij}\rho_j) - s_{ij}\rho_j\right] - \sum_{i,j} \log(\eta_{ij}!) \qquad (2.20)$$

The last term does not contain the parameter $\rho_j$ to be estimated and can be neglected without changing the maximization problem.

The objective function contains the random variables $\eta_{ij}$. The E step of the algorithm consists of replacing each random variable by its expected value, using the measured values $\eta_i$ and the current estimate of the parameter $\rho$:

$$E(\eta_{ij}) = \eta_i \frac{s_{ij}\rho_j^{\mathrm{cur}}}{\sum_k s_{ik}\rho_k^{\mathrm{cur}}} \qquad (2.21)$$


The expected likelihood becomes:

$$\log EL(\rho) = \sum_{i,j} \left[\eta_i \frac{s_{ij}\rho_j^{\mathrm{cur}}}{\sum_k s_{ik}\rho_k^{\mathrm{cur}}} \log(s_{ij}\rho_j) - s_{ij}\rho_j\right] \qquad (2.22)$$

The M step of the algorithm is the maximization of the $EL$ function, obtained by taking the derivative with respect to the parameter $\rho_j$ and setting it to zero:

$$\frac{\partial \log EL}{\partial \rho_j} = \sum_i \left[\eta_i \frac{s_{ij}\rho_j^{\mathrm{cur}}}{\sum_k s_{ik}\rho_k^{\mathrm{cur}}} \frac{s_{ij}}{s_{ij}\rho_j} - s_{ij}\right] = \frac{1}{\rho_j}\sum_i \left[\eta_i \frac{s_{ij}\rho_j^{\mathrm{cur}}}{\sum_k s_{ik}\rho_k^{\mathrm{cur}}}\right] - \sum_i s_{ij} \qquad (2.23)$$

Solving for $\rho_j$ and writing the result in terms of the iteration number $n$:

$$\rho_j^{(n+1)} = \frac{\rho_j^{(n)}}{\sum_{i=1}^{N} s_{ij}} \sum_{i=1}^{N} s_{ij} \frac{\eta_i}{\sum_{k=1}^{M} s_{ik}\rho_k^{(n)}} \qquad (2.24)$$

A single iteration of the ML-EM algorithm starts from an initial distribution $\rho_j^{(0)}$, usually chosen uniform and non-negative, and is composed of four steps:

• forward projection of the current image: gives an estimate of the projections based on the current image;

• division of the measurements by the estimated projections: gives a multiplicative correction factor for each projection;

• back-projection of the correction factors into the image domain: gives a multiplicative correction factor for each voxel;

• correction of the image by a weighting term based on the system matrix (the sensitivity $\sum_i s_{ij}$).

The process is then repeated until the estimate approaches the maximum likelihood solution. The stopping point of the reconstruction is usually chosen empirically, when the obtained image is the best trade-off between spatial resolution and noise.
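The four steps above can be sketched in a few lines of Python. This is a toy illustration of Eq. 2.24 on a random dense system matrix; in a real reconstruction the (back-)projection steps would use the scanner's system model, and all names here are illustrative.

```python
import numpy as np

def mlem_iteration(rho, s, eta, eps=1e-12):
    """One ML-EM iteration (Eq. 2.24) for a dense system matrix s."""
    estimate = s @ rho                        # 1) forward projection
    ratio = eta / np.maximum(estimate, eps)   # 2) measured / estimated
    backproj = s.T @ ratio                    # 3) back-projection
    sens = s.sum(axis=0)                      # 4) sensitivity sum_i s_ij
    return rho * backproj / np.maximum(sens, eps)

rng = np.random.default_rng(0)
s = rng.random((8, 4)) + 0.1     # toy system matrix: 8 LORs, 4 voxels
rho_true = np.array([1.0, 2.0, 0.5, 3.0])
eta = s @ rho_true                # noiseless measurements

rho = np.ones(4)                  # uniform, non-negative initial image
for _ in range(500):
    rho = mlem_iteration(rho, s, eta)
```

On noiseless, consistent data the forward projection of the estimate converges toward the measurements; with noisy data the iteration would instead be stopped early, as discussed above.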

Using the terminology and objects from Section 2.3, we have:

$$\tilde{\rho}_j^{(n+1)} = \frac{\tilde{\rho}_j^{(n)}}{\sum_{i=1}^{N} s_{ij}} \sum_{i=1}^{N} s_{ij} \frac{\eta_i}{\sum_{k=1}^{M} s_{ik}\tilde{\rho}_k^{(n)} + \tilde{\eta}_i^{s} + \tilde{\eta}_i^{r}} \qquad (2.25)$$
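The update of Eq. 2.25 differs from Eq. 2.24 only in the additive terms in the denominator of the forward model. A minimal Python sketch, with assumed toy values for the additive estimates and illustrative names throughout:

```python
import numpy as np

def mlem_iteration_sr(rho, s, eta, eta_s, eta_r, eps=1e-12):
    """ML-EM update of Eq. 2.25: the additive estimates eta_s and eta_r
    enter the denominator of the forward model."""
    estimate = s @ rho + eta_s + eta_r
    ratio = eta / np.maximum(estimate, eps)
    return rho * (s.T @ ratio) / np.maximum(s.sum(axis=0), eps)

s = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])                    # toy system matrix
rho_true = np.array([2.0, 1.0])
eta_s = np.full(3, 0.10)                      # assumed additive estimate
eta_r = np.full(3, 0.05)                      # assumed additive estimate
eta = s @ rho_true + eta_s + eta_r            # noiseless, consistent data

rho = np.ones(2)
for _ in range(500):
    rho = mlem_iteration_sr(rho, s, eta, eta_s, eta_r)
```

Because the additive terms appear only in the forward model, the multiplicative structure of the update, and hence the non-negativity of the estimate, is preserved.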
