
Denoising techniques for low dose computed tomography imaging

Academic year: 2021

Contents

1 Computed tomography 5
1.1 Basic principle . . . 5
1.1.1 X-ray production . . . 5
1.1.2 X-ray attenuation . . . 9
1.1.3 X-ray detection . . . 12
1.2 CT scanner evolution . . . 13
1.2.1 Spiral CT . . . 16

1.3 Image reconstruction techniques . . . 18

1.3.1 Filtered backprojection . . . 20

1.3.2 Reconstruction from fan projections . . . 25

1.3.3 Spiral CT image reconstruction . . . 29

1.3.4 Iterative reconstruction . . . 30

1.4 Basics of radiation protection and CT dosimetry . . . 32

1.4.1 Dose reduction techniques . . . 34

1.5 Reconstruction artifacts in CT . . . 36

2 Denoising techniques 41
2.1 Noise in CT imaging . . . 41

2.1.1 Noise Power Spectrum . . . 43

2.1.2 Spatial resolution . . . 43

2.1.3 Noise and image quality . . . 44

2.2 Image filters . . . 46

2.2.1 Image space denoising filters . . . 47

2.2.2 Edge detection . . . 49

2.2.3 Edge-preserving smoothing by median filtering . . . 51

2.2.4 Edge-preserving smoothing by bilateral filtering . . . 51

2.3 Proposal of a novel hybrid-space edge-preserving smoothing filter 53

3 Implementation of CT denoising filters 57
3.1 Phantom simulation . . . 57

3.1.1 System setting . . . 58

3.1.2 Simulation results . . . 63

3.2 Phantom images filtration . . . 66

3.2.1 Multi scale filtering . . . 66

3.2.2 Bilateral filtering . . . 69


4 Results 71

4.1 Image quality evaluation . . . 71

4.2 CNR values . . . 72

4.3 Point Spread Function analysis . . . 76

4.4 Preliminary test in clinical images . . . 82


Introduction

Over the last few decades CT has become a powerful tool in medical imaging, because it allows an accurate diagnosis in several pathological conditions without being invasive. The introduction of spiral CT, which allows the acquisition of 3D data and volume reconstruction, increased its importance in the diagnostic field. This led to an increasing number of CT examination prescriptions over the years and, consequently, to an increase of the population's exposure to ionizing radiation. The radiation dose associated with a typical CT scan (1–14 mSv depending on the exam) is comparable to the annual dose received from natural sources of radiation, such as radon and cosmic radiation (2–3 mSv), depending on geographic region [1].

Despite its undoubted importance in the non-invasive diagnostic field, a fundamental physical limitation of CT is that radiation dose is one of the most significant factors determining image quality, because of its relation with the noise level in an image: the lower the radiation dose, the higher the noise in the resulting image. It is therefore important to investigate techniques able to decrease the potential risk by reducing the radiation dose during CT examination, while avoiding unacceptable losses in image quality. Reducing noise in the original image is indeed a fundamental task in the diagnostic imaging field. Image filtering is one such tool, which could enable us to obtain good image quality at low radiation dose.

In this thesis work some image filters have been studied through their application to simulated images. In particular, a new filter has been developed and preliminarily investigated, here referred to as the multi scale filter, which consists of a hybrid smoothing filter with edge-preserving capability. It uses an iterative approach, exploiting information from different image resolutions and combining them to obtain a restored image.

In the first chapter the fundamentals of CT will be presented, by describing its physical principles: X-ray production, X-ray interaction with matter and X-ray detection. The image reconstruction techniques will be described, focusing on the filtered backprojection algorithm. Finally, typical CT artifacts will be presented.

In the second chapter the noise in CT images and its correlation with radiation dose and image quality will be described. The concept of image denoising will be introduced, the most used linear and non-linear filters will be described, and the differences between smoothing filters and edge-preserving filters will be explained. In particular, the two filters studied in this thesis, the bilateral filter and the multi scale filter, will be presented and described.

In the third chapter the simulation algorithm used to obtain images at different radiation doses will be described. The three steps of the image acquisition will be treated, by describing the X-ray beam simulation, the setting of the CT scanner and phantom geometry, the interaction of the X-ray beam with the phantom material, and the simulation of the detection process. The implementation of the filters studied will also be explained.

In the fourth chapter the results will be presented, by comparing the CNR and the width of the PSF of the full dose image with those of the filtered ones.


Chapter 1

Computed tomography

Computed tomography is a diagnostic imaging technique exploiting ionizing radiation to visualize soft tissue, otherwise invisible with conventional projective radiographic procedures.

It became achievable with the development of modern computer technology, but it would not have been possible without the discovery of X radiation by Wilhelm Conrad Röntgen in 1895 and without the studies of J.H. Radon on integral geometry in 1917.

A practical implementation of this theory was first reached in 1972 by G.N. Hounsfield and A. Cormack, who are recognized as the inventors of computed tomography.

A CT scanner makes use of computer-processed combinations of many X-ray measurements taken from different angles to produce tomographic images of a scanned object, allowing the reconstruction of the density of the body. CT permitted, for the first time, the accurate imaging of a living human body with a noninvasive technique, creating detailed images of internal organs, bones, soft tissue and blood vessels.

1.1

Basic principle

Computed tomography aims at reconstructing the spatial distribution of the linear attenuation coefficient µ(x, y).

The three relevant stages of CT imaging are: X-ray production, X-ray attenuation and X-ray detection.

1.1.1

X-ray production

The most used source of X-rays is the X-ray tube.


Figure 1.1: X-ray tube scheme.

The tube consists of a cathode, which emits electrons into the vacuum and an anode to collect the electrons.

The cathode is an electrode which, when an electric current flows through it, heats up and emits electrons. This establishes a flow of electrical current through the tube. A high voltage generator is connected across cathode and anode to accelerate the electrons. Typical voltage values range from 20 to 150 kV.

Electrons from the cathode collide with the anode material, usually tungsten or molybdenum, interacting with the electrons, ions and nuclei within it. Photons are generated via interactions of the accelerated electrons with the electrons and nuclei within the tube anode.

The maximum energy of the produced X-rays is limited by the energy of the incident electron, which is equal to the voltage on the tube times the electron charge.

When the electrons hit the target, photons are emitted by two different atomic processes: characteristic X-ray emission (fluorescence) and bremsstrahlung. Characteristic X-rays are produced when an electron has enough energy to knock an atomic electron out of an inner orbital (e.g., the K orbital) of an atom. This process is especially important when K electrons of high-Z materials (e.g., metals) are involved.


As a result, electrons from higher energy levels (e.g., the L orbital) fall into the inner shell, emitting photons with an energy equivalent to the energy difference between the higher and lower states, which is unique for each anode element.

Bremsstrahlung radiation is due to the slowing down of high-speed electrons because of the inelastic scattering by the strong electric field near the high-Z nuclei of the target (anode) or near the electrons in the material.

Figure 1.3: Bremsstrahlung emission.

The lost kinetic energy is given off in the form of a photon. So the X-ray spectrum emitted from the X-ray tube is a combination of a continuous bremsstrahlung spectrum, ranging from zero to the maximum electron kinetic energy (dependent on the tube accelerating voltage), and characteristic peaks of the specific anode material.


The X-ray beam may be modified to suit the needs of the application by altering tube current (mA) and voltage (kV), as well as the beam shape by proper collimation. When pulsed X-ray sources are employed, the pulse width may also be set depending on the application.

The current across the tube determines how many electrons are released to hit the anode. As the mA increases, more power is applied to the filament, which heats up and releases more electrons colliding with the anode, with a consequent linear increase in the number of photons produced. It is common practice to express the amount of radiation produced as the product of time and tube current, mAs.

The voltage across the tube affects the velocity of the electrons: increasing the kV increases the potential difference between the cathode and anode, thus increasing the energy of each electron when it strikes the target. This affects the energy of the photons produced by the tube.

Only a small fraction, about 1%, of the energy generated is emitted as X-rays. The rest of the energy is released as heat, possibly causing local melting of the anode or rupture of the tube housing, if proper cooling is not addressed.

X-ray tubes are constructed to maximize X-ray production and to dissipate heat as rapidly as possible. Most anodes are shaped as beveled disks and attached to the shaft of an electric motor that rotates at relatively high speeds during the X-ray production process, to dissipate heat.


1.1.2

X-ray attenuation

The intensity of an X-ray beam is reduced by interaction with the matter it encounters, because as the X-ray beam passes through the object, photons may get absorbed or scattered. This attenuation results from interactions of individual photons in the beam with atoms in the absorber.

The simplest case for understanding the attenuation process is given by a homogeneous object crossed by monochromatic radiation.

The amount of radiation that passes through an object of thickness x is given by the Beer-Lambert law:

I = I_0 \, e^{-\mu_{tot} x}

Figure 1.6: Scheme of X-ray-absorber interaction: as the monochromatic beam passes through material, the number of photons decreases exponentially.

µtot is known as the linear attenuation coefficient.

The attenuation of the X-ray beam in matter is due, in the diagnostic energy range, to three mechanisms: coherent or Rayleigh scattering, Compton scattering, and photoelectric absorption, so it can be expressed as the sum of the individual attenuation coefficients for each type of interaction:

\mu_{tot} = \mu_{Rayleigh} + \mu_{photoelectric} + \mu_{Compton}

It is dependent on the atomic number Z of the material and the energy of the radiation.
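As a numerical illustration of the Beer-Lambert law above, the following Python sketch computes the transmitted intensity through a homogeneous absorber. The attenuation coefficient used is an illustrative round value, not a tabulated datum.

```python
import numpy as np

# Beer-Lambert law for a monochromatic beam through a homogeneous absorber:
# I = I0 * exp(-mu_tot * x)
def transmitted_intensity(I0, mu, x):
    """Transmitted intensity after thickness x (mu in 1/cm, x in cm)."""
    return I0 * np.exp(-mu * x)

I0 = 1e6        # incident photon count (arbitrary)
mu = 0.2        # illustrative attenuation coefficient, 1/cm
I = transmitted_intensity(I0, mu, 10.0)   # 10 cm of absorber
```

Doubling either µ or x divides the transmitted fraction by the same exponential factor, which is why small attenuation differences produce measurable contrast.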


Figure 1.7: Cross section of X-ray interaction depends on X-ray energy and atomic number Z of the absorber.

Coherent scattering, or Rayleigh scattering, may occur when a low-energy incident photon passes near an atom, causing it to vibrate momentarily at the same frequency. This vibration causes the electron to radiate energy in the form of another X-ray photon with the same frequency and energy as the incident photon, but in an altered direction.

Figure 1.8: Coherent or Rayleigh scattering.

Photoelectric absorption occurs when an incident photon collides with an inner-shell electron in an atom of the object and is totally absorbed, while the electron is ejected from its shell. The kinetic energy imparted to the recoil electron is equal to the energy of the incident photon minus that used to overcome the binding energy of the electron.

This process leaves the atom in an ionized state; it regains its electrical neutrality by rearrangement of the orbital electrons. The electrons undergoing these rearrangements emit energy as a photon, known as characteristic radiation of the atom.

The recoil electrons ejected (photoelectrons) travel only a short distance in the absorber before they give up their energy. As a consequence, all the energy of incident photons that undergo photoelectric interaction is deposited.


This type of interaction takes place with the electrons of the K, L, M shells.

Figure 1.9: Photoelectric absorption

Compton scattering occurs when a photon interacts with an outer orbital electron, which receives kinetic energy and recoils from the point of impact. The incident photon is then deflected and scattered away from the site of the collision, with a decrease in its energy.

As with photoelectric absorption, Compton scattering results in the loss of an electron and ionization of the absorbing atom.

Unlike photoelectric absorption, the Compton effect degrades the image, because the photon is not totally absorbed but can travel in all directions and reach the detector, with a consequent image blurring.

Figure 1.10: Compton scattering.

The spectrum of photon energies in an X-ray beam is polychromatic and the object is heterogeneous. The probability of absorption of individual photons depends on their energy, and the linear attenuation coefficient is obtained by summing over the various components i:

\mu(E) = \sum_i \mu_i(E)

The Beer-Lambert formula has to be modified as follows:

I = \int_0^{E_{max}} I_0(E)\, e^{-\int_0^x \mu(E)\, ds}\, dE

Low-energy photons are much more absorbed than high-energy ones. As a consequence, the superficial layers of an absorber tend to remove the low-energy photons and transmit the higher-energy photons. Therefore, as an X-ray beam passes through matter, the intensity of the beam decreases but the mean energy of the resultant beam increases.
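The hardening of a polychromatic beam can be illustrated with a toy two-bin spectrum in Python. The energies and attenuation coefficients below are invented for illustration only; they merely show that the transmitted beam's mean energy rises with absorber thickness.

```python
import numpy as np

# Toy two-bin polychromatic spectrum (illustrative numbers only).
E  = np.array([40.0, 80.0])   # photon energies, keV
I0 = np.array([1.0, 1.0])     # incident fluence per energy bin
mu = np.array([0.5, 0.2])     # low-energy photons are attenuated more (1/cm)

def mean_energy(x_cm):
    """Mean energy of the beam after x_cm of absorber (Beer-Lambert per bin)."""
    I = I0 * np.exp(-mu * x_cm)
    return float(np.sum(E * I) / np.sum(I))

mean_energy(0.0)    # 60 keV before the absorber
mean_energy(10.0)   # shifted towards 80 keV: the beam has hardened
```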

CT image reconstruction is based on the computation of the spatial distribution of the linear attenuation coefficient µ, a physical quantity strongly dependent on the X-ray beam energy. To partially remove the energy dependence of µ, it is convenient to express the reconstructed voxel values in CT as a relative variation with respect to some reference material (i.e., water). This unit is the Hounsfield unit and its values are called CT values.

For an arbitrary tissue T with attenuation coefficient \mu_T, the CT value is [2]:

\text{CT value} = \frac{\mu_T - \mu_{water}}{\mu_{water}} \times 1000\ \text{HU}

Consequently, water or water-equivalent tissue has the value of 0 HU and air corresponds to a CT value of -1000 HU.
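The CT value definition can be sketched directly in code; the water attenuation coefficient below is an illustrative value, not a measured one.

```python
def ct_value(mu_tissue, mu_water):
    """Hounsfield unit of a tissue relative to water."""
    return (mu_tissue - mu_water) / mu_water * 1000.0

mu_water = 0.19               # illustrative value, 1/cm
ct_value(mu_water, mu_water)  # water -> 0 HU
ct_value(0.0, mu_water)       # air (mu ~ 0) -> -1000 HU
```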

1.1.3

X-ray detection

In the majority of today’s detectors for CT imaging, the conversion is indirect: X-rays are first converted into visible light, which is converted into an electrical signal by a photodiode.

CT detectors are configured as matrices of detection elements (pixels), each composed of a fast inorganic scintillator crystal, a solid-state photodiode for the conversion of the scintillation light into an electrical signal, and an electronic chain for signal amplification, digitization, and subsequent readout[3].

Scintillators emit visible light when energy is deposited in the material, because of the excitation effect of the incident radiation. A photodiode converts the light to an electrical signal. For cone beam CT and for micro CT, the most used photodiode readout architectures are semiconductor charge-coupled devices (CCD), which came into play in the mid 90s, or more recently active pixel sensors in complementary metal–oxide–semiconductor (CMOS) technology [4].

A CCD is an integrated circuit composed of semiconductor elements, each able to accumulate an electric charge proportional to the light intensity at that location. These elements are coupled in order to transfer their charge contents to their neighbours. The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage.

A CMOS sensor is an active-pixel sensor (APS), consisting of an integrated circuit containing an array of pixel sensors, each pixel containing a photodetector and an active amplifier. Each pixel does its own conversion, outputting digital bits. This means that the design complexity is high and the area available for light capture is reduced. Furthermore, the active circuitry in CMOS pixels takes some area on the surface which is not light-sensitive, reducing the photon-detection efficiency of the device.

Both CCD and CMOS sensors are charge-integrating detectors, which produce a voltage output proportional to the integrated value of the input charge produced by the incident photons. In particular, the signal output is proportional to the energy released by the photons. Such detectors produce a dark current (electronic noise) due to the electrons thermally generated during the detection and ADC conversion process. The main advantages of these detectors are the possibility of a large sensitive area and a good spatial resolution. On the other hand, dark current, noise and limited dynamic range penalize image quality in terms of contrast and signal-to-noise ratio (SNR).

Figure 1.11: CCD vs CMOS technology: while photosites in CCD are passive, the CMOS photosites do local processing, so the readout in CMOS is local, as opposed to the CCD, where it is global.

Another detection approach is represented by Single Photon Counting (SPC) detectors, in which the pixel response is proportional to the number of detected photons. This means that zero photons counted corresponds to zero signal. This is obtained by adding to the electronic readout chain several miniaturized components, included on-board on SPC detectors in application specific integrated circuits (ASIC) [4]. Detectors of this type count single events by selecting photons by means of a single/double energy threshold. SPC detectors are not affected by electronic noise: in fact, all the events below a selectable low energy threshold are rejected [4].

1.2

CT scanner evolution

CT scanners were first introduced in 1971 by Godfrey Hounsfield, with a single detector intended for brain studies. Since this first development, there have been a variety of changes applied to the CT scanner instrumentation.

High priority has always been given to the reduction of scan time and the improvement of image quality; that is why the instrumentation has undergone several changes, with an increase in the number of detectors and a decrease in scan time.

The first CT scanner used a rotate/translate system with a single parallel pencil beam, produced by a pinhole collimator to ensure only a single beam of X-rays was interacting with the patient; in addition, it had only one X-ray detector, located on the opposite side of the patient from the X-ray tube. The detector measured the amount of X-rays passing through the patient. To acquire every slice across the body, the X-ray tube and detector moved linearly before rotating the position of the X-ray tube to acquire images at a different projection angle. The advantage associated with the first generation CT scanner was the significant decrease in the amount of scatter radiation interacting with the detector, since it rarely detected scattered radiation. But the first generation CT scanner took a long time to acquire the images and to reconstruct them using the computer.

In order to decrease the amount of time, the second generation CT scanner was fitted with a narrow fan X-ray beam and more detectors, even if it still required the linear movement of the X-ray tube and detectors at each projection angle. But the amount of linear displacement required was reduced.

Unlike with the pencil beam geometry, the detectors were exposed to more scattered radiation, decreasing the resolution of the images.

At this stage the main goal was to cut the acquisition time to less than 20 seconds, because it meant that a patient could hold their breath during image acquisition of the abdomen, reducing artifacts from lung movement in the reconstructed images. This was achieved by introducing a wide aperture fan X-ray beam, which could cover the entire patient at one time, and by rotating the X-ray tube and the detectors through each of the projection angles without stopping, to collect multiple slices per projection angle.

The third generation also had disadvantages. The larger number of detector elements made it much more expensive than the first or second generation. Furthermore, third generation CT scanners produce a characteristic image artifact known as ring artifacts, caused by the lack of calibration between the detectors. Fourth generation scanners solved the ring artifact problem by removing the detectors from the rotating gantry and putting them in a stationary ring around the patient. In this way the detectors were able to maintain calibration.


Fifth generation CT scanners were developed for cardiac tomography imaging, and are known as electron beam scanners. They contain no moving parts: an electron gun behind the patient emits an electron beam onto a large, half-circle tungsten target ring that encircles the patient. The interaction of the electrons with the target ring generates an X-ray beam, which travels through the patient and is detected by a detector ring on the opposite side.

Figure 1.13: Fifth generation CT scanner.

In previous CT system generations, data acquisition could not be a continuous process, since the gantry had to be stopped after every slice. This problem was solved by introducing slip ring technology in the medical imaging field. A slip ring provides the electricity supply to rotating components, allowing the gantry to rotate continuously through all of the patient slices, therefore creating shorter scan times. This led to the development of the sixth generation CT scanner, also known as helical CT (or spiral CT).

With helical CT scanners, the data are acquired along a helical path and do not form planar sections; this is compensated for in the reconstruction process.

The most recent generation of CT scanner consists of a multiple detector array and a cone shaped X-ray beam.

The linear detector array of previous generation scanners had to be modified to make a flat panel detector or a multiple detector array. The combination of the cone shaped X-ray beam and the paneled detector makes possible the acquisition of a very large number of slices in a very short time.

Within the last decades pre-clinical imaging has gained remarkable success. It deals with tissue samples, biopsies and living small animals using X-ray microtomography. Micro-CT, or µCT, is an established imaging technique for high-resolution non-destructive assessment of samples, creating slices that can also be used to recreate a virtual 3D model [5]. The prefix micro indicates that the pixel sizes are in the micrometre range, with a resulting high resolution imaging of biopsies and tissue samples, in vitro imaging, or in vivo small animal imaging.


Figure 1.14: Single slice spiral CT and multi slice spiral CT

Figure 1.15: Example of the capabilities of modern micro-CT systems. The whole body images (top row) were obtained in vivo with the IRIS micro-PET/CT scanner[4].

1.2.1

Spiral CT

Conventional CT is based on a step-and-shoot mode, consisting of two serialized stages: data acquisition and patient repositioning. This limits the volume coverage speed, which refers to the capability of rapidly scanning a large volume with high longitudinal (z-axis) resolution and low image artifacts. One of the main purposes in CT development is to improve its volume coverage speed performance, in order to reduce scan time, with a consequent reduction of X-ray exposure, and to minimize motion artifacts. Helical computed tomography revolutionized body scanning, involving translatory movement of the patient through the gantry and X-ray source rotation [6]. The X-ray source traces a helix circling the patient, resulting in a helix of raw projection data, and each rotation of the tube generates data at an angled plane of section [7].

Figure 1.16: Conventional step and shoot CT vs spiral CT

To obtain a transaxial image, data must be interpolated by using one of two reconstruction algorithms: the 360° linear interpolation (360° LI) algorithm or the 180° linear interpolation (180° LI) algorithm.

The 360° LI algorithm exploits the 360° periodicity in the data, using projections from the two adjacent full turns of the helix to obtain one set of projections at a given z-position.

The 180° LI algorithm exploits the 180° periodicity, due to the fact that two measurements along the same path, but 180° apart, would be the same in the absence of patient motion, noise variation and other errors [6]: a photon undergoes the same attenuation whether it passes through tissue forward or backward. So it uses two sets of opposite projections to estimate one projection at a prescribed location z.
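A minimal sketch of the linear interpolation idea behind the 360° LI algorithm, with invented projection values: the projection at the desired z-position is estimated by weighting the two measurements of the same view angle on adjacent helix turns.

```python
def li_360(p_prev, p_next, z_prev, z_next, z):
    """Linearly interpolate a projection value at table position z from the
    two measurements of the same view angle on adjacent helix turns.
    Didactic sketch of the 360-degree LI idea; inputs are illustrative."""
    w = (z - z_prev) / (z_next - z_prev)   # fractional position between turns
    return (1.0 - w) * p_prev + w * p_next

li_360(10.0, 20.0, 0.0, 5.0, 2.5)   # halfway between turns -> 15.0
```

The 180° LI algorithm follows the same weighting scheme, but draws the second sample from the opposite (180° apart) ray, halving the effective z-distance between samples.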

Parameters are not so different from those in sequential CT, but they may vary because of the constraints of the spiral scan mode. For example, the maximum tube current for a spiral scan is lower than in a sequential one, to avoid anode overheating during the scan time. Furthermore, there is a single additional parameter describing the distance of one set of measurements to the adjacent turn: the pitch. It is defined as the ratio of the table translation distance per gantry rotation, s, expressed in mm, to the thickness of the individual X-ray beam, D, expressed in mm.

The helical pitch for a single slice CT is [6]:

P = \frac{s\,(\text{mm})}{D\,(\text{mm})}


Pitch is a dimensionless quantity of great importance for image quality, because it affects the z sampling in data collection. Increasing pitch increases scan speed, because the same volume is covered in less time.

Figure 1.17: Different pitch values

But there is a limit to the increase of the pitch: the gantry needs to rotate at least halfway around the patient to get data from all sides. The standard range is 1 ≤ p ≤ 2 for a single slice system. The pitch factor should be larger than 1 to cover a given scan volume as fast as possible and potentially to reduce the dose compared to sequential CT. It should also not exceed the value 2, in order to exclude gaps in sampling the object along the z-axis [2]. As the helical pitch increases, the z gap increases as well, with a consequent image quality deterioration.

An additional improvement in the volume coverage speed has been reached with the introduction of multi-slice CT (MSCT) systems, equipped with a multi-row detector array to simultaneously collect data at different slice locations. In this case the helical pitch is defined as follows:

P = \frac{s\,(\text{mm})}{d\,(\text{mm})}

where the thickness of the individual X-ray beam, D (mm), is replaced by the detector row collimation, d (mm).

The helical pitch of the multi-slice CT still indicates the number of contiguous slices that can be generated over the table translation distance in one gantry rotation. By using a multi-row detector array, projection data along a path are measured multiple times by different detector rows; it is therefore important to select a preferred helical pitch to reduce the redundant measurements and to improve the z sampling efficiency [6].
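Both pitch definitions reduce to the same ratio of table feed per rotation to collimation; a small sketch with illustrative scan parameters:

```python
def helical_pitch(table_feed_mm, collimation_mm):
    """Pitch = table travel per gantry rotation / beam (or detector-row) collimation."""
    return table_feed_mm / collimation_mm

helical_pitch(10.0, 5.0)   # single-slice example: 10 mm feed, 5 mm beam -> pitch 2
helical_pitch(10.0, 2.5)   # multi-slice example with 2.5 mm detector rows -> pitch 4
```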

Similarly to single slice helical CT, the X-ray beam describes a spiral path around the patient, but it is divided among the N detector rows, so the helices are multiple and interlacing, and form one set of multi-slice helical CT data. This kind of data is three dimensional and enables the reconstruction of a 3D image using particular reconstruction algorithms.

1.3

Image reconstruction techniques

CT image reconstruction computes the information about the attenuation coefficients µ along different X-ray absorption paths, obtained as a set of data (projections). The CT computer system generates the image of the section from the radiation transmission profiles, acquired by the CT detector system, through the application of a suitable algorithm.

The principle of image formation is based on two steps: data acquisition and data processing.

The first step consists in recording attenuation measurements along a number of X-ray paths in the section of interest (projections), and the second step consists in processing all these measurements to reconstruct the digital image of the section.

Figure 1.18: The two steps of image formation: data acquisition and data processing.

The fundamental mathematical problem in computed tomography was solved for the first time by the mathematician J. H. Radon in 1917. He proved that the distribution of a material in an object layer can be calculated if the integral values along any number of lines passing through the same layer are known [8]. Data acquisition consists of recording the transmitted intensity along different X-ray paths. The simplest case is given by a homogeneous object with monochromatic radiation:

I = I_0\, e^{-\mu d}

Attenuation in this case, if the thickness of the absorber d is known, is given by the product of the linear attenuation coefficient µ with the thickness.

The recorded attenuation profile is a projection:

P = \ln\frac{I}{I_0} = -\mu d

In this simple case, \mu can be determined directly:

\mu = -\frac{1}{d}\,\ln\frac{I}{I_0}

If the object is inhomogeneous and the radiation is polychromatic:

I = \int_0^{E_{max}} I_0(E)\, e^{-\int_0^x \mu(E)\, ds}\, dE \;\Longrightarrow\; P = \ln\frac{\sum_E I(E)}{\sum_E I_0(E)} = -\sum_{E,i} \mu_i(E)\, d_i

The linear attenuation coefficient depends strongly on the energy and on the materials composing the object. In measuring intensities it is necessary to integrate over the whole energy interval and the ray path length.

µ(x, y) cannot be extracted directly, but Radon showed in his early work [8] that the two-dimensional distribution of an object characteristic can be determined exactly if an infinite number of line integrals is given. A finite number of measurements of the distribution of the attenuation coefficient µ(x, y) is sufficient to compute an image to a good approximation [2].

Image reconstruction from a tomographic acquisition may be interpreted as a mathematical linear inverse problem, in which the unknown object must be recovered from the measured data. Inverse problems do not always have a unique solution, because of the lack of data compared to the unknown variables. In such a case the inverse problem is called ill-posed; otherwise it is well-posed.

There are two possible approaches to solve the reconstruction problem: analytical methods and iterative methods.

The analytical method is based on the inversion of an integral transformation. The physical model behind this method assumes that data are continuous and noiseless. Any deviation between data and reality leads to artifacts or image noise.

The iterative method models the acquisition process as a linear system, where y is the acquired data vector and x is the image to be reconstructed. These vectors are related through the linear operator matrix A, describing the physical system, especially in terms of its scanning geometry. The more accurate the model is, the more computational time is required to iteratively find an approximate inversion.

In both the analytical and the iterative method, the resulting image is the best representation of the object obtainable through the measuring instrument.

The object function f(x, y) and the image g(x, y) do not coincide, because the image comes from a measuring process. In the case of CT the object is µ(x, y). The functions f(x, y) and g(x, y) are related by:

g(x, y) = f (x, y) ∗ h(x, y) + n(x, y)

where h(x, y) models the impulse response of the instrument and n(x, y) represents the noise. Mathematically speaking, the signals recorded are continuous, so it would be possible to have infinite samples, both radial and angular, but in practice this is not possible. Furthermore, the object function should be band limited.
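The imaging model g = f ∗ h + n can be illustrated with a one-dimensional toy example in numpy; the PSF and noise level below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D version of g = f * h + n: an object blurred by the system
# impulse response plus additive noise.
f = np.zeros(64)
f[30:34] = 1.0                            # object function (a small bright bar)
h = np.array([0.25, 0.5, 0.25])           # illustrative impulse response (PSF)
n = 0.01 * rng.standard_normal(64)        # additive noise term
g = np.convolve(f, h, mode="same") + n    # measured image
```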

The two major categories of reconstruction methods are analytical reconstruction, in the form of filtered backprojection (FBP), and iterative reconstruction (IR).

1.3.1 Filtered backprojection

The object to be found is µ(x, y). The line integral represents the total attenuation encountered by a beam of X-rays as it travels in a straight line through the object [9].

The simplest way to understand the method is the parallel projection system (as in first-generation scanners).


A line is described by x cos θ + y sin θ = r. The line integral along (θ, r) is:

P_θ(r) = ∫_(θ,r) f(x, y) ds

Figure 1.19: Parallel projection representation.

This equation can be rewritten using a δ function:

P_θ(r) = ∫∫_{−∞}^{∞} f(x, y) δ(x cos θ + y sin θ − r) dx dy

The function Pθ(r) is known as the Radon transform of the function f (x, y).

Combining a set of line integrals a projection is obtained.

The set of projections constitutes the Radon transform of the image, which is commonly represented as a sinogram in 2D imaging.
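The formation of a sinogram from parallel projections can be sketched numerically. The following is an illustrative implementation (not code from this thesis): each pixel value is accumulated into the detector bin r = x cos θ + y sin θ, a crude discretization of the line integrals; the phantom and function names are ours.

```python
import numpy as np

def sinogram(image, angles_deg):
    """Parallel-beam sinogram P_theta(r): for every angle, each pixel's value
    is summed into the nearest detector bin r = x*cos(theta) + y*sin(theta)."""
    n = image.shape[0]
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    sino = np.zeros((len(angles_deg), n))
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        r = x * np.cos(theta) + y * np.sin(theta)
        bins = np.clip(np.round(r + (n - 1) / 2.0).astype(int), 0, n - 1)
        sino[i] = np.bincount(bins.ravel(), weights=image.ravel(), minlength=n)
    return sino

# Centered disk phantom: its projection is the same chord profile at every angle
n = 64
yy, xx = np.mgrid[:n, :n] - (n - 1) / 2.0
phantom = (xx**2 + yy**2 < (n / 4.0) ** 2).astype(float)
sino = sinogram(phantom, np.linspace(0.0, 180.0, 90, endpoint=False))
```

Since every projection sums all pixels exactly once, each row of the sinogram carries the same total mass, which mirrors a basic consistency condition of the Radon transform.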


The most important theorem underlying image reconstruction is the Fourier Slice Theorem, which relates the sinogram to the object function f(x, y):

The one-dimensional Fourier transform of a parallel projection taken at a given angle θ is equal to a slice of the two-dimensional Fourier transform of the original object function, taken at the same angle.

Figure 1.21: Fourier Slice Theorem.

Given the projections data, it is possible to estimate the object by performing a two dimensional inverse Fourier transform.

The two-dimensional Fourier transform of the object can be written as:

F(u, v) = ∫∫_{−∞}^{∞} f(x, y) e^{−j2π(ux+vy)} dx dy

Likewise the Fourier transform of the projection at angle θ, P_θ(r), is:

S_θ(ω) = ∫_{−∞}^{∞} P_θ(r) e^{−j2πωr} dr

Considering the Fourier transform of the object in the frequency domain at v = 0:

F(u, 0) = ∫∫_{−∞}^{∞} f(x, y) e^{−j2πux} dx dy = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f(x, y) dy ] e^{−j2πux} dx

where:

P_{θ=0}(r) = ∫_{−∞}^{∞} f(x, y) dy  ⟹  F(u, 0) = ∫_{−∞}^{∞} P_{θ=0}(r) e^{−j2πur} dr

This establishes the relationship between the vertical projection and the coordinate system: the Fourier transform of the projection at θ = 0 gives the values of the object's two-dimensional Fourier transform along the line v = 0.


This result is independent of the orientation between the object and the coordinate system.

By taking the projections of an object function at angles θ₁, ..., θ_k and Fourier transforming each of these, it is possible to determine the values of F(u, v) on radial lines.

If an infinite number of projections were taken, then F(u, v) would be known at all points in the uv-plane [9].

The object function can be derived using the inverse Fourier transform:

f(x, y) = ∫∫_{−∞}^{∞} F(u, v) e^{j2π(ux+vy)} du dv

In practice only a finite number of projections of an object can be taken, and the function F(u, v) is known along a finite number of radial lines [9]. The more projections are acquired, the better the image will be.

Figure 1.22: Dependency between the number of projections and the final reconstructed image: the more projections are acquired, the better the image will be.

If the function is bounded by −A/2 < x < A/2 and −A/2 < y < A/2:

f(x, y) = (1/A²) Σ_{m,n} F(m/A, n/A) e^{j2π(mx/A + ny/A)}

but only a finite number of Fourier components is known:

f(x, y) ≈ (1/A²) Σ_{m=−N/2}^{N/2} Σ_{n=−N/2}^{N/2} F(m/A, n/A) e^{j2π(mx/A + ny/A)}

This equation can be implemented by using the Fast Fourier Transform (FFT) algorithm, providing the N² Fourier coefficients F(m/A, n/A). These frequency data have to be sampled on a rectangular grid, but they are recorded on a radial one. Therefore it is necessary to determine the values on the square grid by interpolation from the radial points in the frequency domain. The interpolation process introduces errors: the density of the radial points becomes sparser away from the center, so the errors grow with frequency. The result is image degradation, because the errors are larger in the high-frequency components than in the low ones, producing blurring in the reconstructed image.

The filtering process compensates for this unequal sampling density. Each parallel-beam projection has to be convolved with a ramp filter h(r) to obtain a filtered projection:

P̃_θ(r) = P_θ(r) ∗ h(r)

where h(r) is a generalized function defined by the inverse Fourier transform:

h(r) = ∫_{−∞}^{∞} |ω| e^{j2πωr} dω

This filter attenuates the oversampled low frequencies and passes the undersampled high frequencies. This is why the analytical reconstruction process is named Filtered Back Projection (FBP).

The complete FBP reconstruction algorithm:

• Measure the projection P_θ(r)
• Fourier transform it to find S_θ(ω)
• Multiply it by the weighting function |ω| to filter the Fourier transform [9]
• Inverse Fourier transform to obtain the filtered projection Q_θ(r), and backproject it over the image plane

Recalling the formula for the inverse Fourier transform, the object function f (x, y) can be expressed as:

f(x, y) = ∫∫_{−∞}^{∞} F(u, v) e^{j2π(ux+vy)} du dv

For a polar coordinate system (ω, θ):

u = ωcosθ, v = ωsinθ, dudv = ωdωdθ

The inverse Fourier transform in polar coordinates is:

f(x, y) = ∫_0^{2π} ∫_0^{∞} F(ω, θ) e^{j2πω(x cos θ + y sin θ)} ω dω dθ

Splitting the integral into two parts, with θ from 0° to 180° and then from 180° to 360°, and using the property

F(ω, θ + 180°) = F(−ω, θ)

f(x, y) may be written as:


f(x, y) = ∫_0^{π} [ ∫_{−∞}^{∞} F(ω, θ) |ω| e^{j2πωr} dω ] dθ

where r = x cos θ + y sin θ.

By substituting the Fourier transform of the projection at angle θ:

f(x, y) = ∫_0^{π} [ ∫_{−∞}^{∞} S_θ(ω) |ω| e^{j2πωr} dω ] dθ

This integral may be expressed as:

f(x, y) = ∫_0^{π} Q_θ(x cos θ + y sin θ) dθ,  where  Q_θ(r) = ∫_{−∞}^{∞} S_θ(ω) |ω| e^{j2πωr} dω

This equation represents the filtering operation, where the frequency response of the filter is given by |ω|. Q_θ(r) is called the filtered projection, and f(x, y) is obtained by backprojecting it.
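The filter-and-backproject steps above can be sketched numerically. This is an illustrative discretization (a bare |ω| ramp with no apodization window, numpy FFTs, linear interpolation in the backprojector), not the thesis implementation; the test object is a centered unit-density disk, whose parallel projection is the chord length 2√(R² − r²).

```python
import numpy as np

def ramp_filter(p):
    """Filtered projection Q_theta(r): multiply S_theta(omega) by |omega|
    in the frequency domain and transform back."""
    omega = np.fft.fftfreq(p.size)              # frequencies in cycles/sample
    return np.fft.ifft(np.abs(omega) * np.fft.fft(p)).real

def backproject(filtered_sino, angles_rad, n):
    """f(x,y) = integral over theta of Q_theta(x cos t + y sin t),
    approximated by a sum over the measured angles."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    image = np.zeros((n, n))
    for q, t in zip(filtered_sino, angles_rad):
        r = x * np.cos(t) + y * np.sin(t) + (n - 1) / 2.0
        image += np.interp(r.ravel(), np.arange(n), q).reshape(n, n)
    return image * np.pi / len(angles_rad)      # dtheta = pi / N_angles

# Analytic projections of a centered unit-density disk of radius R
n, R = 64, 16.0
r = np.arange(n) - (n - 1) / 2.0
p = 2.0 * np.sqrt(np.clip(R**2 - r**2, 0.0, None))
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
recon = backproject([ramp_filter(p) for _ in angles], angles, n)
```

With this normalization the reconstructed values inside the disk approach the true density of 1, and the values outside approach 0, up to discretization error of the ramp filter.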

1.3.2 Reconstruction from fan projections

With the introduction of a wide X-ray aperture, the fan-beam geometry was adopted in order to reduce acquisition time and motion artifacts.

There are two types of fan projections: equiangular rays, with an arc detector, and equally spaced rays, with a flat-panel detector.

Equiangular rays

Let R_β(γ) denote a fan projection, where β is the angle between the source and the reference axis and γ is the ray angle within the fan.

If the projection data came from a set of parallel rays, the ray SA would belong to a parallel projection P_θ(t), with:

θ = β + γ  and  t = D sin γ


Figure 1.23: Equiangular fan, where D is the distance of the source S from the origin O and L is the distance from the source to the pixel at location (x, y). Each ray is identified by its angle γ from the central ray.

From parallel projections, f (x, y) would be reconstructed as:

f(x, y) = ∫_0^{2π} ∫_{−t_m}^{t_m} P_θ(t) h(x cos θ + y sin θ − t) dt dθ

If the points (x, y) are expressed in polar coordinates (r, φ), with γ and β defined above:

f(r, φ) = (1/2) ∫_0^{2π} ∫_{−γ_m}^{γ_m} R_β(γ) h(r cos(β + γ − φ) − D sin γ) D cos γ dγ dβ

Let L be the distance from the source S to a point (x, y) and γ′ the angle of the ray that passes through (x, y). Together they determine the pixel location (r, φ):

L cos γ′ = D + r sin(β − φ),  L sin γ′ = r cos(β − φ)

and the function f(r, φ) becomes:

f(r, φ) = (1/2) ∫_0^{2π} ∫_{−γ_m}^{γ_m} R_β(γ) h(L sin(γ′ − γ)) D cos γ dγ dβ

h(L sin(γ′ − γ)) has to be expressed in terms of h(t), the inverse Fourier transform of |ω| in the frequency domain:

h(t) = ∫_{−∞}^{∞} |ω| e^{j2πωt} dω


f(r, φ) = ∫_0^{2π} (1/L²) ∫_{−γ_m}^{γ_m} R_β(γ) g(γ′ − γ) D cos γ dγ dβ

where g(γ) = (1/2) (γ / sin γ)² h(γ).

This equation may be interpreted as a weighted Filtered Back Projection:

f(r, φ) = ∫_0^{2π} (1/L²) Q_β(γ′) dβ,  where  Q_β(γ) = (R_β(γ) D cos γ) ∗ g(γ)

Equally spaced rays

In the case of an equally spaced detector, the position along the straight line corresponding to the detector bank is indicated by s, replacing the angle γ of the equiangular case. The corresponding fan projection is denoted R_β(s).

Figure 1.24: Equispaced detectors on a straight line, where R_β(s) denotes the projection.

Projections are measured on the detector line D₁D₂ passing through the origin.

If parallel projections are considered, the ray SA would belong to P_θ(t) [9]:

t = s cos γ = sD / √(D² + s²),  θ = β + γ = β + tan⁻¹(s/D)


In terms of parallel projections:

f(r, φ) = (1/2) ∫_0^{2π} ∫_{−t_m}^{t_m} P_θ(t) h(r cos(θ − φ) − t) dt dθ
= (1/2) ∫_{−tan⁻¹(s_m/D)}^{2π − tan⁻¹(s_m/D)} ∫_{−s_m}^{s_m} P_{β+γ}( sD/√(D² + s²) ) · h( r cos(β + tan⁻¹(s/D) − φ) − sD/√(D² + s²) ) · D³/(D² + s²)^{3/2} ds dβ

where the expression P_{β+γ} above corresponds to the ray integral in the parallel projection data P_θ(t), which in the fan geometry is represented by R_β(s). Then:

f(r, φ) = (1/2) ∫_0^{2π} ∫_{−s_m}^{s_m} R_β(s) h( r cos(β + tan⁻¹(s/D) − φ) − Ds/√(D² + s²) ) · D³/(D² + s²)^{3/2} ds dβ

In order to make the argument of h easier to implement, two new variables are introduced:

U(r, φ, β) = [D + r sin(β − φ)] / D

s′ = D r cos(β − φ) / [D + r sin(β − φ)]

Figure 1.25: This figure illustrates the parameters used in the derivation of the reconstruction algorithm.


f(r, φ) = ∫_0^{2π} (1/U²) ∫_{−∞}^{∞} R_β(s) g(s′ − s) D/√(D² + s²) ds dβ

where g(s) = (1/2) h(s). The equation above may be interpreted as a weighted filtered backprojection algorithm, and it can be rewritten as:

f(r, φ) = ∫_0^{2π} (1/U²) Q_β(s′) dβ,  where  Q_β(s) = ( R_β(s) D/√(D² + s²) ) ∗ g(s)

1.3.3 Spiral CT image reconstruction

Spiral CT image reconstruction consists of two basic steps: estimate a complete set of projection measurements by a z-interpolation process, and reconstruct the slice with the standard 2D FBP algorithm described above. If the spiral data are not interpolated, the computed image will show artifacts due to the patient translation, since different sections are measured. The z-interpolation generates a consistent planar data set from the spiral data for an arbitrary image position z_R [2]. This is an important advantage over sequential CT, because in spiral CT there is no direct coupling between the scan position and the image position. On the other hand, projection data have to be interpolated to obtain a slice at the chosen z position. There are two interpolation approaches: the 360° linear interpolation (360° LI) algorithm and the 180° linear interpolation (180° LI) algorithm.

The 360° LI approach consists of a linear interpolation between data measured at a given angular position before and after the desired table position z_R, 360° apart along the spiral trajectory.

The projection P_z(i, α) for a projection angle α at a selected z_R is computed as [2]:

P_z(i, α) = (1 − w) P_j(i, α) + w P_{j+1}(i, α)

where the interpolation weight is:

w = (z_R − z_j) / d

P_j(i, α) is the projection j at position z_j < z_R, P_{j+1}(i, α) is the corresponding projection in rotation j+1, 360° apart from rotation j, and d is the table feed per rotation.

The 180° LI algorithm makes use of the data redundancy in 360° CT scanning: in any 360° rotation of the X-ray tube, each projection value is measured twice, by two opposing rays. By exploiting this redundancy, the projection at an arbitrary angular position can be determined from projections measured in the opposing direction. By repeating this for each angular position, a second synthesized spiral can be calculated from the measured one, with the two data sets offset from each other by 180° [2].


(a) 360° LI algorithm (b) 180° LI algorithm

Figure 1.26: Different slice reconstruction approaches: the 360° LI algorithm and the 180° LI algorithm.

Both the 360° LI and the 180° LI algorithms provide a set of projections calculated at the same z position, as if the patient had not translated during the acquisition. These data can then be fed to a standard single-slice reconstruction algorithm.
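The 360° LI weighting above amounts to a one-line blend of two projections measured one rotation apart. A minimal sketch (function name and toy numbers are ours, assuming d is the table feed per rotation):

```python
import numpy as np

def z_interpolate_360li(p_j, p_j1, z_r, z_j, d):
    """360-degree linear interpolation: P_z = (1 - w) P_j + w P_{j+1},
    with w = (z_R - z_j) / d, blending two projections measured at the
    same angle alpha but one rotation (table feed d) apart."""
    w = (z_r - z_j) / d
    return (1.0 - w) * np.asarray(p_j, float) + w * np.asarray(p_j1, float)

# Desired slice position halfway between the two measured spiral positions
p_z = z_interpolate_360li(p_j=[10.0, 20.0], p_j1=[30.0, 40.0],
                          z_r=5.0, z_j=0.0, d=10.0)   # w = 0.5
```

For z_R halfway between z_j and z_{j+1} the weight is 0.5 and the synthesized projection is the arithmetic mean of the two measured ones.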

1.3.4 Iterative reconstruction

The iterative reconstruction algorithms represent a different approach to image reconstruction. They consider an array of unknowns and set up algebraic equations for the unknowns in terms of the measured projection data [9]. They present several advantages over analytical methods: iterative methods can handle complex acquisition geometries and model both the geometry and the physics of the acquisition system.

Both the object function f(x, y) and the data p(θ, t) are discretized as f_j, j = 1, ..., N and p_i, i = 1, ..., M, with N being the number of pixels and M being the number of measured projection values. The imaging system is modeled via the system matrix A, made up of M×N elements, where the matrix entry a_ij expresses the contribution of image pixel j to the detector bin i.

The imaging process is expressed as a linear system of equations:

p = Af + n

where n represents additive noise; in what follows it is implicitly included in the measurements p.


To recover the object function f(x, y) several reconstruction approaches are possible. The most intuitive is to invert the relation above:

f = A⁻¹ p

The basic principle of the iterative algorithm is to find a solution by guessing f and improving it iteratively: a first image is estimated, then projection data are simulated from it via forward projection and compared with the measured ones. In case of discrepancy, the estimated image is updated, and this correction of image and projection data is repeated until a stopping condition is satisfied and the final image is generated [10].

Figure 1.27: Iterative reconstruction process.
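The guess/forward-project/compare/correct loop can be sketched with a Landweber update, a simple gradient-descent rule chosen here for illustration (vendor algorithms use more elaborate statistical models and regularization); the tiny system matrix below is ours:

```python
import numpy as np

def iterative_reconstruction(A, p, n_iter=200):
    """Sketch of the iterate/compare/correct loop with a Landweber update."""
    A = np.asarray(A, float)
    p = np.asarray(p, float)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step < 2/||A||^2 ensures convergence
    f = np.zeros(A.shape[1])                 # initial image estimate
    for _ in range(n_iter):
        residual = p - A @ f                 # compare simulated vs measured data
        f = f + step * (A.T @ residual)      # correct with backprojected residual
    return f

# Two-pixel "object" probed by three ray sums
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f_true = np.array([2.0, 3.0])
f_rec = iterative_reconstruction(A, A @ f_true)
```

For consistent, noiseless data the iterates converge to the true object; with noisy data the loop is typically stopped early or regularized, mirroring the stopping condition mentioned above.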

The complexity of the IR algorithm increases when additional components of the acquisition process, such as the gantry geometry and source characteristics, or other physical effects such as scattering, are included in the system model.

There are vendor-specific IR algorithms, differing in the measurement and estimation of the projections and in the kind of correction applied.

GE Healthcare developed the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm. It is a hybrid algorithm that uses information from the FBP algorithm as a building block for each individual image reconstruction. The ASIR model uses matrix algebra to convert the measured value of each pixel to a new estimate of the pixel value. This pixel value is compared with the ideal value predicted by a noise model. The process is iterated until the final estimated and ideal pixel values converge [10].

Philips Healthcare developed the iDose4 reconstruction algorithm, which analyzes the projection data to identify and correct the noisiest CT measurements, applying a photon statistics model to each projection. Through an iterative process, the noisy data are penalized and edges are preserved [10]. The remaining noise in the image space is highly localized and can be removed, preventing photon starvation artifacts and maintaining image quality.


Siemens Healthcare developed two IR algorithms: iterative reconstruction in image space (IRIS) and sinogram-affirmed iterative reconstruction (SAFIRE). IRIS operates in the image domain: the image is reconstructed from the raw data, with the goal of enhancing object contrast after each iteration. SAFIRE is the second-generation IR algorithm released by Siemens Healthcare. It uses both raw-data and image-data iterations, controlled by a regularization term. Starting from an initial reconstruction obtained with a weighted FBP, two different correction loops are introduced. In the first loop, new raw data are produced by forward projection and compared with the original ones to derive corrections, which are used to reconstruct a corrected image; this loop is repeated a number of times depending on the scan mode. The second correction loop works in image space, removing noise through a statistical optimization process. The corrected image is compared with the original and the process is repeated a number of times depending on the examination type [10].

1.4 Basics of radiation protection and CT dosimetry

As described in the previous section, CT examinations use ionizing radiation, and their increasing number raises concerns about the potential risk of cancer induction from CT.

Ionizing radiation can remove tightly bound electrons from the orbit of an atom, causing the atom to become charged, or ionized. This capability of ionizing matter also affects cells in living organisms.

Most adverse health effects of radiation exposure may be grouped into two general categories: deterministic effects, due to the killing or malfunction of cells following high doses, and stochastic effects, involving either cancer development in exposed individuals or heritable disease in their offspring [1].

Regarding deterministic effects, the induction of tissue reactions is generally characterized by a threshold dose, because radiation damage (serious malfunction or death) of a cell population needs to be sustained before the injury is expressed in a clinically relevant form. Above the threshold dose the severity of the injury increases with dose.

Stochastic effects consist primarily of cancer and genetic effects and often show up years after exposure. As the dose to an individual increases, the probability that cancer or a genetic effect will occur also increases; but even for high doses, it is not certain that cancer or genetic damage will result. For stochastic effects there is no threshold dose below which an adverse effect certainly cannot occur.

Special dosimetric quantities have been developed for the assessment of doses from radiation exposures. The fundamental protection quantities are based on measures of the energy deposited in organs and tissues of the human body [1]. The absorbed dose, D, is the basic physical dose quantity. It is defined as:

D = dε/dm

where dε is the mean energy imparted to matter of mass dm by ionising radiation. In the SI system of units, the unit of measure is joules per kilogram, or gray (Gy): 1 J/kg = 1 Gy.

Absorbed dose is a measurable quantity and primary standards exist to determine its value. It does not take into account the biological effects, which depend on the type of radiation and the tissue exposed.

A quantity representing the stochastic effects of ionizing radiation on the human body is the equivalent dose, which is derived from the absorbed dose.

Equivalent dose is defined as the mean absorbed dose deposited in tissue or organ T due to radiation of type R, D_{T,R}, multiplied by the radiation weighting factor w_R, which represents the relative biological effectiveness of the radiation:

H_T = Σ_R w_R D_{T,R}

The unit of equivalent dose is the sievert (Sv); it has the same dimensions as the absorbed dose, but it also represents the radiation effects. The relevant quantity describing the risk of cancer in irradiated tissue is the effective dose. This quantity considers the fact that the risk of cancer induction from an equivalent dose depends on the organ receiving the dose. The effective dose is calculated by multiplying the equivalent dose of each irradiated organ by a tissue-specific weighting factor, w_T, for each organ or tissue type. The products of equivalent dose and tissue weighting factor are then summed over all the irradiated organs, resulting in the effective dose. This quantity is calculated, not measured:

E = Σ_T w_T H_T = Σ_T w_T Σ_R w_R D_{T,R}

where w_T is the tissue weighting factor for tissue T.

The sum is performed over all organs and tissues of the human body considered to be sensitive to the induction of stochastic effects [1]. The w_T values are chosen to represent the contributions of the individual organs and tissues to the overall radiation detriment from stochastic effects. The effective dose is also expressed in Sv.
It is known that the ionization process alters the atomic and, consequently, the molecular structure, which is reflected in functional and morphological alterations leading to cell damage. The biological effects are mostly due to damage to the cell nucleus, through chemical alteration of the nucleic acids. DNA is the most sensitive target, especially in the case of double strand breaks (DSB). An alteration may not produce detrimental effects on the exposed tissue, but a damage determines a clinically observable effect. If cellular damage occurs, it may cause cell death or faulty replication.

Figure 1.28: DNA alteration caused by ionizing radiation
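The double sum defining the effective dose reduces, once the equivalent doses H_T are known, to a weighted sum over tissues. A minimal sketch, where the w_T values are an illustrative subset and the organ doses are hypothetical, not a complete ICRP weighting table:

```python
def effective_dose(organ_doses_sv, tissue_weights):
    """E = sum over tissues T of w_T * H_T (the equivalent doses H_T are
    assumed already weighted by the radiation factors w_R)."""
    return sum(tissue_weights[t] * h for t, h in organ_doses_sv.items())

# Hypothetical chest-scan organ doses; w_T values are an assumed subset.
w_T = {"lung": 0.12, "breast": 0.12, "thyroid": 0.04}
H_T = {"lung": 0.010, "breast": 0.008, "thyroid": 0.002}   # Sv
E = effective_dose(H_T, w_T)
```

Note that E is a risk-weighted summary, not a measurable physical quantity: changing which organs lie in the irradiated volume changes E even at identical absorbed doses.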


The patient dose depends on the selected X-ray spectrum, that is, on the tube voltage and current. There is a simple relation between these quantities and the dose [2]:

D ∝ mAs · kVp²

Dose and tube current-time product are in a linear relation: a reduction of the mAs by a factor of two reduces the dose by the same factor.

Increasing the kVp also increases the penetration power of the X-rays, which affects the total photon flux produced.

The introduction of spiral CT had a positive effect in terms of dose reduction. Patient dose information is required by law, but the effective dose is difficult to obtain, because it depends on a number of parameters.

The standardized measure used to describe the radiation output of a scanner is the Computed Tomography Dose Index (CTDI), which is useful to compare the radiation output of different scanners. CTDI is a measure of the amount of radiation delivered by a series of contiguous irradiations to a standardized acrylic phantom [11].

There are variants of CTDI, describing specific steps in the measurements and calculation process.

CTDI_vol accounts for gaps or overlaps between the X-ray beams during consecutive volume scanning and for variations in dose across the field of view. It is a useful indicator of the radiation output for a specific exam protocol, because it takes into account protocol-specific information such as pitch [11]; however, it is not strictly related to the effective dose and hence to the patient's risk.

CTDI_vol integrated along the scan length gives the Dose Length Product (DLP), which better represents the overall energy released by a given scan protocol.

The precise effects caused by low-dose exposure are not fully known, but it is important to decrease the potential risk without any clinically relevant loss in image quality. This can be managed by applying the ALARA (As Low As Reasonably Achievable) principle [12], namely keeping the radiation dose as low as reasonably achievable.
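The dose relations quoted in this section can be collected in a small sketch; the D ∝ mAs·kVp² rule is the approximation stated above, not an exact physical law, and the reference values and numbers are arbitrary:

```python
def relative_dose(mas, kvp, mas_ref=200.0, kvp_ref=120.0):
    """Relative dose change from the rule of thumb D ∝ mAs * kVp^2."""
    return (mas / mas_ref) * (kvp / kvp_ref) ** 2

def dose_length_product(ctdi_vol_mgy, scan_length_cm):
    """DLP = CTDI_vol integrated over the scan length, in mGy*cm."""
    return ctdi_vol_mgy * scan_length_cm

halved = relative_dose(100.0, 120.0)     # halving the mAs halves the dose
dlp = dose_length_product(10.0, 30.0)    # 10 mGy over a 30 cm scan
```

The quadratic kVp dependence is why even a modest tube-voltage reduction (when image quality permits it) saves considerably more dose than the same fractional mAs reduction.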

1.4.1 Dose reduction techniques

To achieve radiation dose reduction it is important to define the adequate image quality for the clinical purpose, selecting specific scan parameters and choosing adequate image reconstruction methods.

It is obvious that the required level of image quality depends on the objective of the examination. Searching for small or faint lesions with low contrast requires images of high quality and with an acceptable noise level [12].

Anatomy-adapted tube current modulation is a key technique for dose reduction. Traditionally the tube current in CT is kept constant during the scan, but to reduce patient dose it is necessary to modify the scan acquisition parameters. As mentioned above, the dose increases linearly with the tube current-time product. During a scan, the tube current, exposure time and tube potential can be altered to give the appropriate exposure to the patient, considering the patient size: small patients may otherwise receive an excess radiation dose. It is a fundamental responsibility of the CT operator to take patient size into account when selecting scan parameters [11].


To adapt the exposure to the patient, the variations in patient absorption with projection angle and anatomic region may also be considered.

Figure 1.29: Relative attenuation values associated with body region and projection angle

It is possible to modulate the tube current both angularly and along the patient axis.

Angular mA modulation accounts for the variation in X-ray attenuation around the patient: the tube current is varied as the X-ray tube rotates, either according to the attenuation information from the CT radiograph or, in real time, according to the attenuation measured in the projection 180° earlier.

Longitudinal mA modulation considers the variation of the anatomy along the z axis of the patient, for example shoulders versus abdomen.

The technology used to adjust the X-ray tube current in real time is known as Automatic Exposure Control (AEC), with preprogrammed current modulation. The system sets the baseline tube current and adjusts it dynamically during a single scan. The operator does not select the tube current but simply the desired image quality, which determines the extent of dose reduction: if the image quality is set at a high level, the AEC system increases the radiation dose to maintain the image quality as set by the operator [12].
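The angular modulation idea can be sketched as follows; the linear scaling rule and the toy attenuation values are illustrative choices of ours, not a vendor AEC implementation:

```python
import numpy as np

def angular_ma_modulation(attenuation_per_angle, ma_ref):
    """Scale a reference tube current with the relative attenuation seen at
    each projection angle, keeping the average mAs of the rotation unchanged,
    so that projection noise stays roughly constant over the rotation."""
    a = np.asarray(attenuation_per_angle, float)
    return ma_ref * a / a.mean()

# Lateral views (through the shoulders) attenuate more than AP views
ma = angular_ma_modulation([1.0, 4.0, 1.0, 4.0], ma_ref=100.0)
```

The current is lowered in the low-attenuation (AP) views and raised in the high-attenuation (lateral) views, which is exactly where photon starvation streaks would otherwise appear.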

Regarding the image reconstruction method, iterative reconstruction generally provides less noisy images than the standard FBP method. It is therefore possible to improve image quality even with a reduced radiation dose. This is due to the incorporation of a physical model of the CT system into the reconstruction process, which can accurately characterize the data acquisition, including noise and artifacts [11]. Iterative reconstruction becomes necessary to preserve image quality in low-dose acquisitions, where the data are significantly non-ideal, contrary to what the analytical method requires.


1.5 Reconstruction artifacts in CT

In computed tomography, the term artifact is applied to any systematic discrepancy between the CT numbers in the reconstructed image and the true attenuation coefficients of the object [13]. Indeed the imaging system may produce artificial structures that are not associated with real features of the scanned object.

The origins of these artifacts can be classified as: physics-based artifacts, patient-based artifacts, scanner-based artifacts, and helical artifacts.

Physics-based artifacts

Physics-based artifacts result from the physical processes involved in the acquisition of CT data.

Beam hardening artifacts are observed because an X-ray beam is composed of individual photons with a range of energies. Higher-energy photons travel more easily through tissue than lower-energy ones. As the beam passes through an object, it becomes harder: its mean energy increases, because the lower-energy photons are absorbed more rapidly than the higher-energy photons. This implies the already discussed dependence of the linear attenuation coefficient on photon energy.

Two types of artifacts result from this effect: cupping artifacts and the appearance of dark bands or streaks between dense objects in the image [13].

To understand cupping artifacts, consider a uniform cylindrical phantom: X-rays passing through its middle portion are hardened more than those passing through the edges, because they traverse more material. The beam is therefore more intense when it reaches the detectors than would be expected had it not been hardened, because it is less attenuated. The resulting attenuation profile differs from the ideal one, displaying a characteristic cupped shape.

The second type of artifact is the appearance of dark bands or streaks between two dense objects in an image: the beam passing through only one of the objects is hardened less than when it passes through both. This type of artifact can occur both in bony regions of the body and in scans where a contrast medium has been used.


(a) Cupping artifacts (b) Attenuation profile with beam hardening effects

(c) Dark bands or streaks between two dense objects

Figure 1.30: Beam hardening artifacts

Partial volume effects occur when tissues of a number of different substances are encompassed in the same CT voxel; the resulting CT value then represents some average of their properties.

They occur when high-contrast structures extend only partly into the examined slice, since each detector element inevitably averages the radiation intensities in the z direction instead of the attenuation value to be determined, intrinsically ignoring the logarithmic relationship between intensities and attenuation values. Partial volume effects can occur not only in the z direction but also in the scan plane. In such cases, we generally speak of sampling artifacts; they occur primarily at transitions with very high contrast, such as at metallic objects [2].

Photon starvation is another source of artifacts, in the form of streaking in the image. This effect occurs in highly attenuating areas, such as the shoulders: when the X-ray beam travels horizontally, the attenuation is greatest and insufficient photons reach the detectors [13].

One of the most important factors in image quality is the number of projections used during the reconstruction process. If the interval between projections is too large, aliasing effects due to undersampling can occur. Aliasing appears in the form of stripes in the image, in particular at edges and close to dense structures. Aliasing may not have too serious an effect on the diagnostic quality of an image, since the evenly spaced lines do not normally mimic any anatomic structure [13]. But undersampling affects the resolution of fine details, so it should be avoided as far as possible.


Figure 1.31: Aliasing artifacts: stripes appear in the image due to the undersampling.

Patient-based artifacts

Patient-based artifacts are caused by patient movement or by the presence of metallic materials in or on the patient.

The presence of metal objects intensifies beam hardening and partial volume artifacts and leads to severe streaking artifacts [13]. They occur because the density of the metal allows the penetration of only the highest-energy photons, resulting in incomplete attenuation profiles; this extinguishes the image content in the vicinity of the metallic object, leaving only disturbing noise structures.

Other patient-caused artifacts occur in case of motion, which produces misregistration, usually appearing as shading, blurring or streaking in the reconstructed image.

A portion of the patient lying outside the scan field of view may also cause artifacts, because the computer then has incomplete information. If contrast discontinuities are present in this outer region, such as the patient's arms, streak artifacts can result, affecting the entire image [2].

(a) Metal artifacts (b) Motion artifacts


Scanner-based artifacts

Scanner-based artifacts result from imperfections in scanner function. If one of the detectors is out of calibration, it will give an erroneous reading at each angular position, resulting in a circular artifact, called a ring artifact. A scanner where all the detectors are separate entities is in principle more prone to ring artifacts than a scanner with gas detectors, in which the detector array consists of a single xenon-filled chamber subdivided by electrodes.

Even if rings are visible, they would rarely be confused with disease. However, they can impair the diagnostic quality of an image; this is particularly likely when central detectors are affected, creating a dark smudge at the center of the image.

Figure 1.33: Ring artifacts

Helical Artifacts

The same artifacts occur in helical CT as in sequential CT. Additional artifacts are due to the helical interpolation and reconstruction process: if there is a high-contrast edge between the two detector rows, the interpolated value may not be accurate. These artifacts therefore occur where anatomic structures change rapidly in the z direction (e.g., at the top of the skull) and are worse for higher pitches [13]. This inaccuracy creates periodic streaks originating from high-contrast edges, which are called windmill artifacts.


Chapter 2

Denoising techniques

The most important property of a good denoising algorithm is that it should remove as much noise as possible while preserving high-spatial-frequency details such as edges.

All denoising algorithms are based on a noise model and on a generic image smoothness model, either local or global [14].

All of these methods assume that the noise is oscillatory and the image is smooth, so they try to separate the smooth part from the oscillatory one. However, many fine structures in images are as oscillatory as noise is, and noise also has low-frequency, and therefore smooth, components [14]. Since denoising algorithms see no difference between small details and noise, they may remove the former.

Noise reduction can occur in projection space, before reconstruction, or in image space, after reconstruction.

Projection space denoising filters make use of photon statistics and smooth the data using a statistical noise model or a non-linear adaptive filter [15, 16]. Image space denoising is a filtering process working on the reconstructed image. Sinogram processing is usually much more effective, because the properties of the noise in the projection data are well known, unlike the statistics of the noise in the reconstructed image, which are not as well understood. Conventional shift-invariant filtration seeks to suppress the high-frequency component in the sinogram under the simple assumption that all measurements are equally reliable, which may result in severe degradation of spatial resolution. More sophisticated methods implement locally adapted filters that preserve edge information in the sinogram, thus maintaining high spatial resolution.
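A locally adapted sinogram filter of this kind can be sketched as follows. This is an illustrative toy example, not a published method: the function name `adaptive_sinogram_filter` and its parameters are assumptions, and the smoothing weight uses the attenuation value itself as a crude stand-in for a local noise variance estimate.

```python
import numpy as np

def adaptive_sinogram_filter(sino, window=5, max_strength=0.9):
    """Locally adaptive smoothing of projection data (illustrative sketch).

    In CT, samples with high attenuation correspond to few detected
    photons and are therefore noisier, so they are blended more heavily
    with their local neighbourhood mean; low-attenuation (reliable)
    samples are left almost untouched, preserving edge information.
    """
    pad = window // 2
    padded = np.pad(sino, pad, mode="edge")
    out = np.empty_like(sino, dtype=float)
    smax = sino.max() + 1e-12
    for i in range(sino.shape[0]):
        for j in range(sino.shape[1]):
            patch = padded[i:i + window, j:j + window]
            # Smoothing weight grows with the attenuation value, a
            # rough proxy for the Poisson noise level of that sample.
            w = max_strength * sino[i, j] / smax
            out[i, j] = (1.0 - w) * sino[i, j] + w * patch.mean()
    return out
```

A real projection-space filter would derive the weight from a proper statistical noise model of the measurement, as in [15, 16]; the structure, however, is the same: per-sample smoothing strength adapted to the estimated local noise.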

2.1 Noise in CT imaging

Image noise is the uncertainty introduced by an imaging system due to the stochastic or systematic fluctuations of the image values at each point [3]. In CT imaging, noise is introduced during X-ray beam generation and then propagates as the beam traverses the body and reaches the detection system. For example, a CT image of a uniform object shows random variation in CT numbers about some mean value.

The noise of an ideal system would be purely statistical in origin, caused by fluctuations in the number of X-ray quanta registered by the detector [2], but all electronic devices also add electronic noise. Round-off error may also be considered: digital computers introduce noise in the reconstruction process because of the limited number of bits used to represent numbers.

X-ray imaging is characterized by several random processes: the emission of photons from the source is a Poisson process, while the number of photons passing through the object and captured by the detector, and the subsequent production of light photons per captured X-ray photon, are binomial.

Because the cascading of Poisson and binomial distributions results in a Poisson distribution, the overall noise distribution in X-ray imaging is approximately Poisson. Photon counting is a Poisson process: individual photon detections are independent events that follow a random temporal distribution. The number of photons N measured by a given sensor element is described by the discrete probability distribution:

P(N = k) = \frac{e^{-\lambda t} (\lambda t)^{k}}{k!}

where λ is the expected number of photons per unit time interval and λt is the expected incident photon count. P is the probability of detecting k photons in the time interval t.

The variance of the Poisson distribution is equal to its expectation value: E[N] = Var[N] = λt. This means that if the expected number of photons impacting the detector is N, it will measure on average N photons with a standard deviation \sqrt{N}.

For a small number of counts, the noise is generally dominated by signal-independent sources (such as electronic noise); for larger counts, the central limit theorem ensures that the Poisson distribution approaches a Gaussian distribution [17].
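These Poisson properties are easy to verify numerically. The following sketch simulates repeated readings of one detector element with NumPy; the expected count of 1000 photons is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Expected incident photon count per detector element (λt in the text).
expected = 1000.0

# Simulate many independent readings of the same detector element.
counts = rng.poisson(expected, size=200_000)

# For a Poisson process E[N] = Var[N] = λt, so the relative noise
# std/mean equals 1/sqrt(N): quadrupling the photon count (i.e. the
# dose) halves the relative noise in the measurement.
print(counts.mean())                  # ≈ 1000
print(counts.var())                   # ≈ 1000
print(counts.std() / counts.mean())   # ≈ 1/sqrt(1000) ≈ 0.032
```

The last line is the practical consequence exploited in dose reduction: relative noise, not absolute noise, is what degrades low-contrast detectability as the tube current is lowered.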

Electronic noise originates from the X-ray detection system, because electronic circuits inevitably add some noise to the signal. Analog circuits are the most susceptible: this noise is generated by the thermal agitation of the charge carriers inside an electrical conductor at equilibrium. In CT, the source of the electronic noise is the detection system, and the noise is added to the signal during the analog-to-digital conversion (ADC) stage [18].

Modern CT systems are made of solid-state detector cells, each consisting of radiation-sensitive material that converts the absorbed X-rays into visible light. The light is detected by a photodiode, which produces a small analog electrical current that is amplified and converted into a digital signal [19]. The current arising from the thermal agitation of the charge carriers adds up to this electrical current, so the resulting digital signal is made of two contributions: the original signal from the detector and the thermal-agitation current. When the signal is comparable to or lower than the electronic noise level, the result is degraded: there is a threshold below which a signal cannot be detected and merges with the noise.


2.1.1 Noise Power Spectrum

At the simplest level, noise is described by the standard deviation of the reconstructed image voxels over a specified region of interest in a homogeneous area, but this does not capture the frequency content of the correlations present in the noise. A frequency-domain characterization is then more useful than a spatial one. The Noise Power Spectrum (NPS) is a spectral decomposition of the variance, which provides an estimate of the spatial frequency dependence of the point-wise fluctuations in the image [3].

It can be calculated as:

NPS(u, v) = \lim_{X,Y \to \infty} \frac{1}{4XY} \left\langle \left| \int_{-X}^{X} \int_{-Y}^{Y} \Delta d(x, y)\, e^{-2\pi i (ux + vy)}\, dx\, dy \right|^{2} \right\rangle

where ∆d(x, y) is the image intensity minus the average background intensity. This formula describes the NPS of an analog system; for a digital X-ray system it has to be modified to include the pixel pitch of the digital image in x and y.

The final NPS is the average of spectra obtained from a series of uniform-irradiation images taken under identical conditions [3]: an image containing nothing but noise is decomposed into its two-dimensional frequency components. These components are squared to obtain the power contribution at each frequency, and this result is averaged with the results obtained from other similar images to arrive at the average noise power for each frequency. The noise power spectrum is therefore proportional to the average (mean) square value of the frequency amplitudes of the noise [20]. In particular, the NPS shows to what extent the random variation in CT number at one point of the image depends on the random variation at another point. It follows that:

\sigma^{2} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} NPS(u, v)\, du\, dv

which means that the pixel value variance can be estimated by integrating the NPS over all frequencies.

2.1.2 Spatial resolution

The spatial resolution expresses the ability of an imaging system to represent sharp edges between distinct anatomical features within the object being imaged [3].

The description of an imaging system is based on linear systems theory. If f(r) is the input of the system, g(r) the output, and S{·} the system operator, these elements are related as:

g(r) = S\{f(r)\}

Furthermore, it is possible to consider the system as linear and shift-invariant (LSI), i.e. the response to an input signal is the same at all locations r. When the input signal to a linear system is a Dirac delta function, δ(r − r0)
