
5 Quantitative Techniques in PET

Steven R Meikle and Ramsey D Badawi

Introduction

PET has long been regarded as a quantitative imaging tool. That is, the voxel values of reconstructed images can be calibrated in absolute units of radioactivity concentration with reasonable accuracy and precision. The ability to accurately and precisely map the radiotracer concentration in the body is important for two reasons.

First, it ensures that the PET images can be interpreted correctly since they can be assumed to be free of physical artefacts and to provide a true reflection of the underlying physiology. Second, it enables the use of tracer kinetic methodology to model the time-varying distribution of a labelled compound in the body and quantify physiological parameters of interest.

The reputation of PET as a quantitative imaging tool is largely based on the fact that an exact correction for attenuation of the signal due to absorption of photons in the body is theoretically achievable. However, accurate attenuation correction is not so easy to achieve in practice and there are many other factors, apart from photon attenuation, that potentially impact on the accuracy and precision of PET measurements. These include count-rate losses due to dead time limitations of system components, variations in detector efficiency, acceptance of unwanted scattered and random coincidences and dilution of the signal from small structures (partial volume effect). The ability to accurately measure or model these effects and correct for them, while minimizing the impact on signal-to-noise ratio, largely determines the accuracy and precision of PET images.

This chapter discusses the various sources of measurement error in PET. Methodological approaches to correct for these sources of error are described, and their relative merits and impact on the quantitative accuracy of PET images are evaluated. The sequence of the following sections corresponds approximately to the order in which the various corrections are typically applied. It should be noted, however, that the particular sequence of corrections varies from scanner to scanner and depends on the choice of algorithms.

Randoms Correction

Origin of Random Coincidences

Random coincidences, also known as “accidental” or “chance” coincidences, arise because of the finite width of the electronic time window used to detect true coincidences. This finite width allows the possibility that two uncorrelated single detection events occurring sufficiently close together in time can be mistakenly identified as a true coincidence event, arising from one annihilation. This is shown schematically in Fig. 5.1.

The rate at which random coincidences occur between a detector pair is related to the rate of single events on each detector and to the width of the time window. The exact relationship is dependent upon the implementation of the counting electronics. Figure 5.2 shows an implementation whereby each timing signal opens a gate of duration τ; if gates on two channels are open at the same time, a coincidence is recorded. If there is a timing signal on channel i at time T, there will be a coincidence on the relevant line-of-response L_ij if there is a timing signal on channel j at any time between T – τ and T + τ. Therefore, the total time during which a coincidence may be recorded with the event on channel i (a parameter known as the resolving time of the circuit, or the coincidence time window) is 2τ. So, if the rate of single events in channel i is r_i counts per second, then in one second the total time during which coincidences can be accepted on L_ij will be 2τr_i. If we can assume that the single events occurring on channel j are uncorrelated with those on channel i (i.e., there are no true coincidences), then C_ij, the number of random coincidences on L_ij per second, will be given by

C_ij = 2τ r_i r_j    (1)

where r_j is the rate of single events on channel j.

* Figures 1–3, 5, 6, 12–16 and 19–21 are reproduced from Valk PE, Bailey DL, Townsend DW, Maisey MN. Positron Emission Tomography: Basic Science and Clinical Practice. Springer-Verlag London Ltd 2003, 91–114.

Figure 5.1. Types of coincidence event recorded by a PET scanner: true, scattered and random coincidences (annihilation events, gamma rays and assigned LORs are indicated in the figure).

Figure 5.2. Example coincidence circuitry. Each detector generates a pulse when a photon deposits energy in it; this pulse passes to a time pick-off unit. Timing signals from the pick-off unit are passed to a gate generator which generates a gate of width τ. The logic unit generates a signal if there is a voltage on both inputs simultaneously. This signal then passes to the sorting circuitry.

While it is obviously not generally true that there is no correlation between the single events on channel i and the single events on channel j, the number of single events acquired during a PET acquisition is typically 1 to 2 orders of magnitude greater than the number of coincidences. In such an environment, equation (1) provides a good estimate of the random coincidence rate.

The timing of commercial tomographs is usually governed by a system clock. A timing signal on channel i is thus assigned to a particular clock cycle. If there is a timing signal on channel j within a certain range of, say, n neighbouring clock cycles, a coincidence is recorded on L_ij (Fig. 5.3). Therefore the randoms rate on L_ij would be given by

C_ij = n t_c r_i r_j    (2)

where t_c is the duration of a single clock cycle. A typical BGO tomograph might have a 2.5 nanosecond clock cycle, and n = 5 clock cycles. Thus, the total coincidence time window n t_c (equivalent to 2τ for an analog system) would equal 12.5 nanoseconds.

Equations (1) and (2) indicate that the overall randoms rate for an acquisition will change at a rate proportional to the square of the overall singles rate.

Provided dead time is small, this means that for a given source distribution the randoms will change roughly in proportion to the square of the activity concentration.
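As an illustration of equations (1) and (2), the following sketch (not part of the original text; the singles rates and window parameters are assumed values) computes the randoms rate for a single detector pair:

```python
# Sketch of equations (1) and (2): randoms rate on a single LOR from the singles
# rates on its two detectors. All numbers are illustrative assumptions.

def randoms_rate_analog(r_i, r_j, tau):
    """Equation (1): analog circuit, gate width tau (coincidence window = 2*tau)."""
    return 2.0 * tau * r_i * r_j

def randoms_rate_clocked(r_i, r_j, n_cycles, t_clock):
    """Equation (2): clocked system, n_cycles clock cycles of duration t_clock."""
    return n_cycles * t_clock * r_i * r_j

r_i = r_j = 1.0e5                 # assumed singles rates (counts/s) on channels i and j
print(randoms_rate_analog(r_i, r_j, tau=6.25e-9))                  # 12.5 ns window -> 125 randoms/s
print(randoms_rate_clocked(r_i, r_j, n_cycles=5, t_clock=2.5e-9))  # same window, clocked form
# Doubling both singles rates quadruples the randoms rate (square-law behaviour).
```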

Random coincidences can form a significant fraction of all recorded coincidences in PET imaging, particularly if large amounts of activity are used or if scans are performed in 3D mode. The number of randoms detected may be reduced by shortening the coincidence window. However, the window must be large enough to prevent loss of true coincidences due to the difference in arrival times (which may be up to 2 ns for an annihilation pair originating 30 cm from the centre of the tomograph) or statistical variations in the triggering of the event timing circuitry. Thus, selection of the coincidence window is a trade-off between minimising acceptance of randoms and loss of sensitivity to true coincidences. The coincidence window is typically set to 3 to 4 times the full width half maximum (FWHM) timing resolution of the tomograph.

The use of fast scintillators such as LSO or GSO reduces timing uncertainty (compared to that obtainable with slower scintillators such as BGO or NaI), but the window width cannot be less than 3 nsec to 4 nsec without accounting for time-of-flight effects. Randoms may also be reduced by shielding the detectors from activity that lies outside the tomograph field of view – this reduces the singles rates without adversely affecting sensitivity to true coincidences [1, 2].

Randoms tend to be fairly uniformly distributed across the field of view. This contrasts with true coincidences, which follow activity concentration and are reduced in regions of high attenuation. Thus, the fraction of random coincidences in regions of high attenuation can become very large and, if uncorrected, substantial quantitative errors can arise.

Corrections for Random Coincidences

Tail Fitting

Because the distribution of random coincidences in sinogram or projection space tends to be a slowly changing function, it may be possible to estimate the distribution within the object by fitting a function such as a paraboloid or Gaussian to the tails falling outside the object. This method requires that the object subtend only a fraction of the field of view, so that the tails are of reasonable length and contain a reasonable number of counts – otherwise small changes in the tails will result in large changes in the randoms estimate. In some systems this method has been used to correct for both scatter and randoms simultaneously [3].
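A minimal sketch of the tail-fitting idea, assuming a synthetic one-dimensional radial profile and a known object radius (both illustrative; real implementations work on full sinograms and may fit Gaussians or paraboloids):

```python
import numpy as np

# Illustrative tail fit on one radial profile of a randoms (+ scatter) background.
# The profile is synthetic; in practice it would be a row of the measured sinogram.
rng = np.random.default_rng(0)
r = np.linspace(-1.0, 1.0, 129)                           # normalised radial coordinate
profile = rng.poisson(40.0 - 15.0 * r**2).astype(float)   # slowly varying background + noise

object_radius = 0.5                                       # assumed extent of the object
tails = np.abs(r) > object_radius                         # samples known to lie outside the object

# Fit a parabola (a 1D "paraboloid") to the tails only, then evaluate it everywhere
# to obtain a smooth background estimate underneath the object.
coeffs = np.polyfit(r[tails], profile[tails], deg=2)
background_estimate = np.polyval(coeffs, r)
```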

Estimation from Singles Rates

The total number of randoms on a particular line of response L_ij can, in principle, be determined directly from the singles rates r_i and r_j using equation (1) or (2).

Figure 5.3. Detecting coincidences using the tomograph clock (a coincidence is accepted if timing signals on channels i and j fall within n neighbouring clock cycles; for the example shown, 5 cycles of 2.5 nsec give a 12.5 nsec window).

Consider an acquisition of duration T. The random coincidences R_ij in the data element corresponding to the line of response L_ij may be found by integrating equation (1) or (2) over time:

R_ij = ∫_0^T C_ij(t) dt = 2τ ∫_0^T r_i(t) r_j(t) dt    (3)

If r_i(t) and r_j(t) change in the same way over time, we can factor out this variation to obtain

R_ij = 2τ s_i s_j ∫_0^T f(t) dt = k s_i s_j    (4)

where k is a constant and s_i and s_j are the single event rates at, say, the start of the acquisition. For an emission scan, f(t) is simply the square of the appropriate exponential decay expression, provided that tracer redistribution can be ignored. R_ij can then be determined from the single events accumulated on channels i and j over the duration of the acquisition. It should be noted that the randoms total is proportional to the integral of the product of the singles rates, and not simply the product of the integrated singles rates. Failure to account for this leads to an error of about 4% when the scan duration T is equal to the isotope half-life T_1/2, and about 15% when T = 2T_1/2.
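The following sketch illustrates this point numerically for a pure-decay acquisition (the isotope and half-life are assumed for illustration); the naive product-of-integrals estimate falls short by an amount comparable to the figures quoted above:

```python
import numpy as np

# For singles rates that decay as r(t) = s * exp(-lambda * t), compare the correct
# randoms total (integral of the product of the singles rates) with the naive
# estimate based on the product of the integrated singles. Isotope assumed: 18F.
half_life = 109.8 * 60.0                  # seconds (assumed isotope half-life)
lam = np.log(2.0) / half_life

for n_half_lives in (1, 2):
    T = n_half_lives * half_life
    t = np.linspace(0.0, T, 200_000)
    f = np.exp(-lam * t)                  # common decay of each singles rate
    correct = np.mean(f * f)              # ~ (1/T) * integral of r_i(t) * r_j(t), per s_i*s_j
    naive = np.mean(f) ** 2               # ~ product of integrated singles / T^2, per s_i*s_j
    print(f"T = {n_half_lives} x T1/2: naive/correct = {naive / correct:.3f}")
# Prints ratios of roughly 0.96 and 0.87, i.e. underestimates of the same order as
# the errors quoted in the text.
```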

For coincidence-based transmission scans, where positron-emitting sources are rotating in the field of view, f(t) becomes a complicated function dependent on position as well as time, and equation (4) is no longer valid. However, in principle the total number of randoms could still be obtained by sampling the singles rates with sufficiently high frequency.

The singles rates used for calculating randoms should ideally be obtained from data that have already been qualified by the lower energy level discriminator – they are not the same as the singles rates that determine the detector dead-time. Correction schemes have been implemented which use detector singles rates prior to LLD qualification [4], but the differences between the energy spectrum of events giving rise to randoms and that giving rise to trues and scatter must be carefully taken into account. These differences are dependent upon the object being imaged and upon the count-rate, since pulse pileup can skew the spectra [5].

In its simplest form this method does not account for the electronics dead-time arising from the coincidence processing circuitry (to which the randoms in the coincidence data are subject).

Delayed Coincidence Channel Estimation

The most accurate (and currently the most commonly implemented) method for estimating random coincidences is the delayed channel method. In this scheme, a duplicate data stream containing the timing signals from one channel is delayed for several times the duration of the coincidence window before being sent to the coincidence processing circuitry. This delay removes the correlation between pairs of events arising from actual annihilations, so that any coincidences detected are random. The resulting coincidences are then subtracted from the coincidences in the prompt channel to yield the number of true (and scattered) coincidences. The coincidences in the delayed channel encounter exactly the same dead-time environment as the coincidences in the prompt channel, and the accuracy of the randoms estimate is not affected by the time-dependence of the activity distribution.

While accurate, this method has two principal drawbacks. Firstly, the increased time taken to process the delayed coincidences contributes to the overall system dead time. Secondly, and more importantly, the estimates of the randoms on each line-of-response are individually subject to Poisson counting statistics. The noise in these estimates propagates directly back into the data, resulting in an effective doubling of the statistical noise due to randoms. This compares poorly to the estimation from singles method, since the singles rates are typically two orders of magnitude greater than the randoms rates, so that the fractional noise in the resulting randoms estimate is effectively negligible. To reduce noise, the delayed channel can be implemented with a wider coincidence time window. However, this will further increase the contribution of delayed channel coincidences to system dead time.

Randoms Variance Reduction

Where randoms form a significant fraction of the acquired events, as is frequently the case in 3D imaging, it becomes desirable to obtain randoms estimates that are accurate but contain less noise than those obtained using the delayed channel method. Most delayed channel implementations allow the acquisition of separate datasets from the prompt and delayed coincidence channels – this allows the possibility of post-processing the randoms estimate to reduce noise, prior to subtraction from the prompt coincidence channel data.

The simplest form of variance reduction is to smooth the delayed data. The success of this approach will depend somewhat on the architecture of the scanner. In full-ring block detector systems, there are significant differences between the efficiency of adjacent detectors. This information is lost during smoothing, and if unaccounted for, high-frequency circular artefacts can appear in the reconstructed images [6].

However, in rotating systems, lines of response may be sampled by many detectors (particularly in the centre of the field of view), so that efficiency differences become less important. Caution must still be exercised, because rotational sampling effects can result in varying sensitivity to randoms across the field of view [7]. One solution is to smooth only over lines of response which share a common radius.

More accurate methods of variance reduction can be envisaged. A randoms sinogram consists of noisy estimates of the R_ij, the randoms in the prompt data. A typical data acquisition may consist of a few million such estimates (one for each LOR), but there may only be a few thousand of the singles values s_i (one for each detector element). There is therefore substantial redundancy in the data, which may be used to reduce the effects of statistical noise. Let us consider two opposing groups of N detectors, A and B. Detector i is a member of group A and j is a member of B (Fig. 5.4).

If the singles flux varies in the same way for all detectors for the duration of the acquisition, so that equation (4) is valid, then R_iB, the sum of the randoms on all the lines of response joining detector i and group B, may be written

R_iB = k s_i Σ_{j=1}^N s_j    (5)

Similarly, R_jA, the sum of the randoms on all the lines of response joining detector j and group A, may be written

R_jA = k s_j Σ_{i=1}^N s_i    (6)

Now R_AB, the sum of all the randoms over all possible lines of response between groups A and B, is simply the sum of R_jA over all possible j:

R_AB = Σ_{j=1}^N [k s_j Σ_{i=1}^N s_i] = k (Σ_{i=1}^N s_i)(Σ_{j=1}^N s_j)    (7)


Figure 5.4. Accurate randoms variance reduction. To obtain a variance-reduced estimate of the number of random coincidences in the LOR joining detectors i and j, the product of the mean values of the LORs in each of the two LOR fans shown is calculated, and divided by the mean value of all possible LORs between detectors in groups A and B. For ease of implementation, the LOR data from the relevant sinograms can be re-binned into histograms as shown. (From [9], with kind permission from Kluwer Academic Publishers.)

If we multiply equations (5) and (6) and divide by equation (7), we get

(R_jA × R_iB) / R_AB = k s_i s_j = R_ij    (8)

All of the terms on the left-hand side of equation (8) can be obtained from the data, and we have obtained another estimate of R_ij. However, if N is large enough the variance of this estimate is less than that of the original estimate, since the line-of-response sums R_jA, R_iB and R_AB are all larger than R_ij by factors of approximately N, N and N² respectively (assuming that there are roughly the same number of randoms on each line of response). This method was devised by Casey and Hoffman [8], who also showed that the ratio Q of the variance of the noise-reduced estimate and the original estimate of R_ij is given by

Q = (2N + 1) / N²    (9)

so that there is an improvement in the noise provided N, the number of detectors per group, is three or more.
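A sketch of this estimator on simulated delayed-channel data (the array layout, group size and count levels are all assumed for illustration):

```python
import numpy as np

# Sketch of the variance-reduction estimator of equation (8), applied to a noisy
# delayed-channel randoms array R[i, j] between two opposing detector groups
# A (rows) and B (columns). The "true" randoms used to simulate the data are arbitrary.
rng = np.random.default_rng(0)
N = 64                                               # detectors per group (assumed)
s_A = rng.uniform(0.5, 1.5, N)                       # assumed relative singles rates, group A
s_B = rng.uniform(0.5, 1.5, N)
R_true = 5.0 * np.outer(s_A, s_B)                    # expected randoms per LOR (k * s_i * s_j)
R_delayed = rng.poisson(R_true).astype(float)        # noisy delayed-channel measurement

# Equation (8): R_ij ~ (fan sum over group B) * (fan sum over group A) / (total sum)
R_iB = R_delayed.sum(axis=1, keepdims=True)          # sum over group B for each detector i
R_jA = R_delayed.sum(axis=0, keepdims=True)          # sum over group A for each detector j
R_smooth = R_iB * R_jA / R_delayed.sum()

print("rms error, raw delayed :", np.sqrt(np.mean((R_delayed - R_true) ** 2)))
print("rms error, eq. (8)     :", np.sqrt(np.mean((R_smooth - R_true) ** 2)))
```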

Several related algorithms have been developed and applied to the problem of randoms variance reduction, but the one described here has been shown to be the most accurate [9]. The only significant drawback of this method (compared to direct subtraction of the delayed channel data) is that acquiring a separate randoms sinogram doubles the size of the dataset. This can be a particular problem for fast dynamic scanning in 3D mode, where sorter memory and data transfer time can be a limiting factor.

Normalisation

Lines of response in a PET dataset have differing sensitivity for a variety of reasons including variations in detector efficiency, solid angle subtended and summation of neighbouring data elements. Information on these variations is required for the reconstruction of quantitative and artefact-free images – indeed, most algorithms require that these variations be removed prior to reconstruction. The process of correcting for these effects is known as normalisation, and the individual correction factors for each LOR are referred to as normalisation coefficients.

Causes of Sensitivity Variations

Summing of Adjacent Data Elements

It is common practice to sum adjacent data elements in order to simplify reconstruction or to reduce the size of the dataset. This is usually performed axially, but may also be performed radially (a process known as “mashing”). Summation of data elements axially cannot be performed uniformly across the entire field of view, and image planes at the ends of the field of view have substantially reduced sensitivity compared to those in the centre. This effect is fairly simple to account for, since the degree of summing is always known. However, it can complicate the process of correcting for other effects if the summing is performed prior to normalisation [10, 11].

Rotational Sampling

In a rotating system, LORs at the edge of the field of view are sampled just once per half-rotation, while those near the centre are sampled many times (see Fig. 5.5). As a result, sensitivity falls as radius increases.

Detector Efficiency Variations

In a block detector system, detector elements vary in efficiency because of the position of the element in the block, physical variations in the crystal and light guides and variations in the gains of the photomultiplier tubes. These variations result in substantial high-frequency non-uniformities in the raw data. In particular there is a systematic variation in detector efficiency with the crystal position within the block (the “block profile”) which results in significant variations in the sensitivity of the tomograph in the axial direction. Radially the effect is not so great, because any one pixel in the image is viewed by many detectors and there is a tendency for these effects to cancel out during reconstruction. Nevertheless, failure to correct for them leads to radial streaking in the image, and the systematic block profile effects can reinforce during reconstruction, resulting in circular “saw-tooth” artefacts.

Detector efficiency, and in particular the block profile, can be affected by count rate. One result of pulse pileup within a block detector is the shifting of detected events towards the centre of the block [12]. This is not really a normalisation effect in the conventional sense, but since it results in a systematic change in the apparent efficiency of the lines of response with position in the block it manifests itself in a very similar way. The effect can be reduced by measuring normalisation coefficients at a similar count-rate to that used during data acquisition, or by creating a rate-dependent look-up table of normalization coefficients [13].

If this is not possible, any resulting image artefacts may be reduced by extracting systematic effects from the raw data after normalisation but prior to reconstruction.

Geometric and Solid Angle Effects

Figure 5.6 shows that in a system with segmented detectors, such as a block-detector based system, lines of response close to the edge of the field of view are narrower and more closely spaced than those at the centre. This geometric effect is also apparent axially and can be significant for large area tomographs operating in 3D mode. The narrowing of the LORs results in a tighter acceptance angle and in reduced sensitivity, although in the transaxial plane this effect is partially compensated by the fact that the separation between opposing detectors is less towards the edge of the field of view, so that the acceptance angle is changed in the opposite direction. The narrowing of LORs also results in reduced sampling distance. However, this effect is easily describable analytically and can be corrected for at reconstruction time – a process known as “arc correction”. Arc correction may not be an issue for systems that employ continuous detectors, as it is usually possible to bin the data directly into LORs of uniform width.

An effect that is relevant for systems employing either continuous or discrete detectors, and that is not so easy to describe analytically, is related to the angle of incidence of the line of response at the detector face.

A photon entering a crystal at an angle will usually have more material in its path than one entering normally, thus having an increased probability of interaction. In the case of a ring scanner, this results in measurable changes in sensitivity as the radial position of the line of response is increased and is known as the radial, or transaxial, geometric effect (Fig. 5.7). However, a photon entering a detector close to its edge and at an angle may have significantly less material along its path and may therefore be more likely to escape. For block detector systems this results in a pattern of sensitivity change which varies both with radial position and with the position of the line of response with respect to the block (Fig. 5.8). This has become known as the “crystal interference” effect [10]. Again, similar effects can be found in the axial direction [11].

Figure 5.5. Rotational sampling. (Left) Lines of response at the edge of the transaxial field of view are sampled once per detector half-rotation. (Right) Lines of response close to the centre of the field of view are sampled many times, as more detector elements are brought to bear.

Figure 5.6. Lines of response narrow as the radial distance increases.

It should be noted that the photon incidence angle is most strongly correlated with the line of response for true coincidences – these geometric effects would be expected to be much weaker or non-existent for random and scattered coincidences [14].

Time Window Alignment

For coincidence detection to work efficiently, timing signals from each detector must be accurately synchronised. Asynchronicity between detector pairs results in an offset and effective shortening of the time window for true and scattered (but not random) coincidences.

This, in turn, results in variations in the sensitivity to true and scattered coincidences. For block detector systems, the greatest source of such variations occurs at the block level. Figure 5.9 shows the variations in efficiency resulting from time alignment effects in a block tomograph plotted as a sinogram. Each diamond corresponds to a different block combination.

Structural Alignment

In a ring tomograph, the accuracy with which the detectors are aligned in the gantry can affect line of response efficiency. Such variations will manifest in different ways depending on the exact design of the tomograph, the detectors and any casing in which the detectors are contained. Frequently, block detectors are mounted in modules or cartridges, each containing several units. Misalignments of these modules can have noticeable effects on LOR sensitivity [11, 15]. Some full-ring systems have a “wobble” feature designed to improve spatial sampling – this feature allows the detectors to describe a small orbit about the mean detector position. As a result, it is possible that the transmission sources can rotate about a point which is not actually the centre of the detector ring, and if they are used to perform normalisation measurements, erroneous asymmetries can be introduced into the normalisation coefficients [11].

Figure 5.7. Mean radial geometric profiles (relative sensitivity versus LOR incidence angle, in degrees) for three block-detector tomographs – the Siemens/CTI ECAT 951, the Siemens/CTI ECAT 962 and the GE Advance – measured using the rotating transmission sources. The 951 data shows asymmetry due to the fact that the centre of rotation of the transmission sources is not coincident with the centre of the detector ring. (From [11], with permission.)

Figure 5.8. Crystal interference factors for the Siemens/CTI ECAT 951. (From [15], with permission.)

Figure 5.9. Time-window alignment factors for the Siemens/CTI ECAT 951. The factors range in value from 0.872 to 1.120. (From [15], with permission.)

Septa

Septa can affect LOR sensitivity in a variety of ways. They have a significant shadowing effect on the detectors, which can reduce sensitivity by 40% or more [16]. For block detector systems, they also preferentially shadow the edges of the detectors, which may change their relative performance. On systems which can operate either with or without septa, it is therefore preferable to have a separate normalisation measurement for each case.

Direct Normalisation

The simplest approach to normalisation is to illuminate all possible LORs with a planar or rotating line positron source (usually 68Ge). Once an analytical correction for non-uniform radial illumination has been applied, the normalisation coefficients are assumed to be proportional to the inverse of the counts in each LOR. This process is known as “direct normalisation”.

Problems with this approach include:

1. To obtain adequate statistical quality in the normalisation dataset, scan times are long, typically several hours.

2. The sources used must have a very uniform activity concentration or the resultant normalisation coefficients will be biased.

3. The amount of scatter and its distribution in the normalisation acquisition may be substantially different from that encountered in normal imaging, particularly if the tomograph is operating in 3D mode. This can result in bias and possibly artefacts.

To reduce normalisation scan times, variance reduction techniques similar to those devised for randoms correction can be applied. However, in order to implement these, the normalisation coefficients must be factored into a series of components, each reflecting a particular source of sensitivity variation. A drawback of this approach is that the accuracy of the normalisation is dependent on the accuracy of the model used to describe the tomograph. However, it has the advantage that a more intelligent treatment of the different properties of scattered and true coincidences is possible, which can be very helpful in 3D imaging.

A Component-based Model for Normalization

Consider a tomograph where detectors are indexed using the coordinate system shown in Fig. 5.4. A general expression for the activity contained in a particular LOR joining a detector i in ring u and detector j in ring v can be written as follows:

A_uivj = (P_uivj − S_uivj − R_uivj) · AC_uivj · DT_uivj · η^true_uivj    (10)

where A_uivj is the activity within the LOR, P_uivj, S_uivj and R_uivj are the prompt, scattered and random count rates respectively, AC_uivj is the attenuation correction factor for the LOR, DT_uivj is the dead time correction factor for the LOR and η^true_uivj is the normalization coefficient for true coincidences. We will assume that R_uivj, AC_uivj and DT_uivj can be measured accurately for each LOR.

However, S_uivj cannot be measured directly and must be calculated. Most algorithms for calculating scatter result in a smoothly varying function that does not include normalization effects. Where scatter is only a small proportion of the signal (e.g., 2D imaging) this is probably unimportant. In 3D imaging, where scatter can make up a significant fraction of detected events, we can modify equation (10) as follows:

A_uivj = (P_uivj − S^calculated_uivj / η^scatter_uivj − R_uivj) · AC_uivj · DT_uivj · η^true_uivj    (11)

where the η^scatter_uivj are the normalization coefficients for scattered coincidences.

As a first approximation, we could say that η^scatter = η^true. However, this leads to bias, because some of the more important normalization effects for true coincidences arise because photons resulting in coincidences on a particular LOR have a tightly constrained angle of incidence at the detector face, a condition which is clearly not met for scatter. Allowing η^scatter to take different values to η^true was first proposed by Ollinger [14]. This is still an approximation, because the distribution of incidence angles and photon energies for scattered photons will be dependent on the source and attenuation distribution, so that, in general, there will be no unique value for η^scatter_uivj. However, at the present time, errors in this formulation are likely to be small compared to errors in the scatter estimate itself.
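As an illustration of how equation (11), as reconstructed above, might be applied element-wise to sinogram-sized arrays (all array names, shapes and values here are placeholders, not real scanner data):

```python
import numpy as np

# Element-wise application of equation (11) to sinogram-sized arrays: prompts P,
# randoms estimate R, calculated (smooth, normalised-space) scatter S_calc,
# attenuation correction AC, dead time correction DT, and the normalisation
# coefficients for trues and scatter. All values below are placeholders.
shape = (192, 256)                       # assumed (angles, radial bins)
P = np.full(shape, 120.0)
R = np.full(shape, 20.0)
S_calc = np.full(shape, 30.0)
AC = np.full(shape, 4.0)
DT = np.full(shape, 1.05)
eta_true = np.full(shape, 1.1)
eta_scatter = np.full(shape, 1.2)

# Equation (11): undo normalisation on the calculated scatter before subtracting it
# from the raw prompts, then apply attenuation, dead time and trues normalisation.
A = (P - S_calc / eta_scatter - R) * AC * DT * eta_true
```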

The task of normalization is to obtain values for the η^scatter_uivj and the η^true_uivj. It is clear from the discussion above that there is no generally applicable model which will yield the η^scatter_uivj and the η^true_uivj for all tomograph designs. We can, however, write down an example expression for a block detector system:

η^true_uivj = ε_ui ε_vj b^tr_ui b^tr_vj b^ax_u b^ax_v t_uivj g^tr_uivj g^ax_uv m_uivj    (12)

where

• the ε are the intrinsic detector efficiency factors, describing the random variations in detector efficiency due to effects such as crystal non-uniformity and variations in PMT gain

• the b^tr are the transaxial block profile factors, describing the systematic transaxial variation in detector efficiency with position in the block detector. These are frequently incorporated into the ε; however, it can be useful to consider them separately if count-rate dependent effects are to be included in the normalization process

• the b^ax are the axial block profile factors – they are the relative efficiencies of each axial ring of detectors. Again, these are rate dependent to a degree – however, the primary reason for separating them from the ε is to simplify the process of measurement (see section on axial block profile factors below)

• the t are the time-window alignment factors

• the g^tr are the transaxial geometric factors, describing the relationship between LOR efficiency, photon incidence angle and detector position within the block. In this formulation they include the crystal interference effect.

• the g^ax are the axial geometric factors. There is one factor for each ring combination. As with the axial block profile factors, they are separated from their transaxial counterparts simply for ease of measurement.

• the m are the structural misalignment factors. These are similar to the geometric factors in that they will usually vary with photon incidence angle.

The analytically derivable components are missing from this model since they do not need to be measured.

The normalization coefficients for scatter may be written as follows:

η^scatter_uivj = ε_ui ε_vj b^tr_ui b^tr_vj b^ax_u b^ax_v t_uivj    (13)

The geometric components have been removed and the efficiency components retained. This model makes the assumption that scattered photons have a random distribution of incidence angles for any particular LOR [14], and that the efficiency factors are the same for trues and scatter. Thus, any dependence of the distribution of incidence angles for scattered photons on the source and attenuation distribution is ignored, as are any changes in detection efficiency with photon energy.
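A minimal sketch of equations (12) and (13) for a single LOR; the component values are placeholders rather than measured factors:

```python
# Sketch of equations (12) and (13): assembling the normalisation coefficients for
# one LOR (detector i in ring u, detector j in ring v) from its component factors.
# The numerical values are placeholders, not measured factors.
def eta_true(eps_ui, eps_vj, btr_ui, btr_vj, bax_u, bax_v, t_uivj, gtr_uivj, gax_uv, m_uivj):
    # Equation (12): all components, including the geometric and misalignment factors.
    return (eps_ui * eps_vj * btr_ui * btr_vj * bax_u * bax_v
            * t_uivj * gtr_uivj * gax_uv * m_uivj)

def eta_scatter(eps_ui, eps_vj, btr_ui, btr_vj, bax_u, bax_v, t_uivj):
    # Equation (13): geometric components removed, efficiency components retained.
    return eps_ui * eps_vj * btr_ui * btr_vj * bax_u * bax_v * t_uivj

print(eta_true(1.02, 0.97, 1.01, 0.99, 1.00, 1.03, 1.00, 1.05, 0.98, 1.00))
print(eta_scatter(1.02, 0.97, 1.01, 0.99, 1.00, 1.03, 1.00))
```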

Measurement of the Components

Although several components must be accounted for in component-based normalisation, they can be measured from just two separate scans using a relatively simple protocol. A typical protocol involves scanning a rotating rod source with nothing in the field of view and a uniform cylindrical source. Both scans are performed with low activity concentrations to minimise dead time effects and the scan times are quite long, typically several hours, to ensure adequate counting statistics. The rod scan is used to calculate the geometric effects while the uniform cylinder scan is used to calculate the crystal efficiencies. The details of how the various factors are extracted from each of these scans are given in the following sections (Fig. 5.10).

Axial Block Profile Factors, b^ax_u, and Axial Geometric Factors, g^ax_uv

The axial block profile factors may be calculated from an acquisition of a central uniform right cylinder source. If scatter is not significant, the calculation is straightforward – the total counts C_u in each of the direct plane (i.e., ring difference = 0) sinograms are computed, and the b^ax_u are then given by

b^ax_u = C̄ / C_u    (14)

where C̄ is the mean value of the C_u over all direct-plane sinograms. In 3D imaging, the amount of scatter can be large, and more importantly, the distribution can vary in the axial direction. The data should therefore be scatter corrected prior to the calculation of the b^ax_u. A simple algorithm such as fitting a Gaussian to the scatter tails is usually sufficient for this purpose, but care must be taken to ensure that high-frequency variations in detector efficiencies do not bias the results [11].
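A sketch of equation (14), assuming the scatter-corrected direct-plane sinograms are available as a NumPy array (the array here is synthetic):

```python
import numpy as np

# Sketch of equation (14): axial block profile factors from a uniform cylinder scan.
# 'direct_sinograms' stands for the scatter-corrected direct-plane (ring difference = 0)
# sinograms, one per detector ring; the array below is synthetic.
rng = np.random.default_rng(1)
n_rings, n_angles, n_bins = 32, 192, 256
direct_sinograms = rng.poisson(50, size=(n_rings, n_angles, n_bins)).astype(float)

C_u = direct_sinograms.sum(axis=(1, 2))     # total counts per direct-plane sinogram
b_ax = C_u.mean() / C_u                     # equation (14): mean over rings / per-ring counts
# The axial geometric factors g_ax_uv would be obtained analogously from the oblique
# sinogram totals C_uv (mean of all C_uv divided by each C_uv).
```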


The axial geometric factors g^ax_uv are also computed from cylinder data, after they have been corrected for scatter and for the axial block profile. If C_uv is the sum of the counts in the sinogram indexed by ring u and ring v, the corresponding axial geometric factor g^ax_uv is obtained simply by dividing C_uv by the mean value of all the C_uv and inverting the result.

If the axial acceptance angle is large, it may be necessary to correct for the variation in source attenuation between sinograms corresponding to large and small ring differences prior to calculating the g^ax_uv.

An unfortunate consequence of calculating the values of the axial components in this way is that errors in the scatter correction give rise to bias in the normalisation coefficients [15].

In some implementations, the axial block profile and geometric factors are not calculated directly. Instead, the cylinder data are reconstructed and correction factors are computed by comparing the counts in each image plane with the mean for all planes. This works well in 2D imaging, where the data used to reconstruct any one image plane is effectively independent of those used to reconstruct any other. In 3D imaging this is not the case, and the use of post-reconstruction correction factors entangles effects due to normalisation, reconstruction and source distribution.

Intrinsic Detector Efficiencies, ε_ui, and Transaxial Block Profile, b^tr_u

The intrinsic detector efficiencies are again usually computed from an acquisition of a central uniform right cylinder source, although planar or rotating line sources can also be used. Variance reduction may be effected using the fan-sum algorithm, which is essentially a simplified version of that used in randoms variance reduction. In the fan-sum algorithm, the fans of LORs emanating from each detector and defining a group A of opposing detectors are summed (see Fig. 5.11). It is assumed that the activity distribution intersected by each fan is the same, and that the effect of all normalisation components apart from detector efficiency is also the same for each fan. The total counts in each fan C_ui then obeys the following relation:

C_ui ∝ ε_ui Σ_{v,j∈A} ε_vj,  or  C_ui / ε_ui ∝ Σ_{v,j∈A} ε_vj    (15)

Figure 5.11. Lines of response in the fan-sum algorithm.

Figure 5.10. Effects of normalisation on image uniformity. Images (summed over all axial planes) from a low-variance 20 cm cylinder acquisition, performed in 3D mode on a Siemens/CTI ECAT 951: (a) no scatter correction; (b) no normalisation; (c) no correction for the radial profile; (d) no crystal efficiency correction; (e) no transaxial block profile correction; (f) no crystal interference correction; (g) no time alignment correction; (h) fully normalised and scatter corrected. (Upper row) linear grey scale covering the entire dynamic range. (Lower row) linear grey scale, zero-point set to 70% of image maximum. (From [15], with permission.)

If A contains a sufficiently large number of detectors, it can be assumed that the sum Σ_{v,j∈A} ε_vj is also a constant (the fan-sum approximation, attributable to [17]).

The ε_ui are then given by the following expression:

ε_ui = C̄_u / C_ui    (16)

where C̄_u is the mean value of all the fan-sums for detector ring u. Note that the efficiencies are not determined using the mean value of the C_ui computed over all detector rings as the numerator in equation (16). This avoids potential bias arising from the fact that the mean angle of incidence of the LORs at the detector face varies from the axial centre to the front or back of the tomograph.

If the C_ui are calculated by summing only over LORs lying within detector ring u, the method is known as the 2D fan-sum algorithm. This method is quite widely implemented because of its simplicity, and because it can be used for both 2D and 3D normalization. However, in the 3D case it is both less accurate and less precise than utilizing all possible LORs [18]. The accuracy of the fan-sum approximation also depends crucially on utilizing an accurately centered source distribution [19–21]. Other algorithms for calculating the ε_ui also exist (see, for example, [17, 18, 20–22]).

The ε_ui calculated in this way incorporate the transaxial block profile factors b^tr_u. If required, they can be extracted from the ε_ui very simply – they are just the mean values of the detector efficiencies calculated for each position in the block detector:

b^tr_ui = (D/N) Σ_{n=0}^{N/D−1} ε_{u, nD + (i mod D)}    (17)

where N is the number of detectors around the ring and D is the number of detectors across a detector block. In practice, the b^tr_u are obtained by averaging data across such a large and evenly sampled proportion of the field of view that they are effectively independent of the source distribution. As a result, changes in the transaxial block profile factors due to count-rate effects can be computed directly from the emission data, a process known as self-normalisation [24].
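A sketch of the fan-sum estimate (equations 15 and 16) and the block profile extraction (equation 17), using the simpler 2D (single-ring) fan-sum variant on simulated data; detector counts, ring size and block size are all assumed:

```python
import numpy as np

# 2D (single-ring) fan-sum sketch for the crystal efficiency factors (equations 15
# and 16) and the transaxial block profile (equation 17). The counts are simulated;
# in this toy example every detector pair in the ring is treated as a valid LOR.
rng = np.random.default_rng(2)
N, D = 384, 8                                # detectors per ring, detectors per block (assumed)
true_sensitivity = rng.normal(1.0, 0.08, N)  # simulated relative crystal sensitivities

C = rng.poisson(200.0 * np.outer(true_sensitivity, true_sensitivity)).astype(float)
np.fill_diagonal(C, 0.0)                     # no LOR joins a detector to itself

fan_sums = C.sum(axis=1)                     # C_ui of equation (15): sum over each detector's fan
eps = fan_sums.mean() / fan_sums             # equation (16): ring mean / fan sum (a correction factor)

# Equation (17): block profile factor = mean of the efficiency factors at each
# in-block position (position = detector index modulo D).
b_tr = np.array([eps[pos::D].mean() for pos in range(D)])

# eps should track the inverse of the simulated sensitivity (up to a scale factor).
print(np.corrcoef(eps, 1.0 / true_sensitivity)[0, 1])
```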

The Transaxial Geometric Factors, g^tr_uivj

Rotating transmission sources, planar sources and scanning line sources have all been used to generate data for calculating the transaxial geometric factors [e.g., 16, 22, 25, 26]. Once the data have been collected, an analytic correction is applied to compensate for non-uniform illumination of the LORs by the source. The data are then corrected for variations in detector efficiency and block profile. For systems where “crystal interference” is not expected to be a problem, the transaxial geometric factors can be obtained by averaging the data in each sinogram over all LORs sharing a common radius. Thus, one “radial profile” describing the transaxial geometric effect is obtained for each sinogram. Otherwise, the data are averaged over LORs which share a common radius and a common position within block detectors, resulting in D radial profiles per sinogram, where D is the number of detectors across a block. Each radial profile is then divided by its mean and inverted to yield the transaxial geometric factors.

The Time-Window Alignment Factors, t_uivj

As with the transaxial geometric factors, time-window alignment factors can be derived from the data acquired using rotating transmission sources, planar sources or scanning line sources. Non-uniform illumination is compensated for, and the data are then corrected for intrinsic detector efficiency, block profile and all geometric effects. Data elements with common block detector combinations are summed to produce an array with one element for each block combination. This array is then divided by the mean of all its elements and inverted to yield the t_uivj.
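A sketch of this block-pair summation on a synthetic detector-pair array (ring and block sizes are assumed, and the handling of block pairs that cannot physically be in coincidence is ignored):

```python
import numpy as np

# Sketch of the time-alignment factor calculation: data already corrected for
# efficiency, block profile and geometric effects are summed over all LORs that
# share the same (block, block) combination, then divided by the mean and inverted.
rng = np.random.default_rng(3)
N, D = 384, 8                                 # detectors per ring, detectors per block (assumed)
n_blocks = N // D
corrected = rng.poisson(100, size=(N, N)).astype(float)   # stand-in for pre-corrected LOR data

# One entry per (block, block) combination, summing the D x D LORs it contains.
block_sums = corrected.reshape(n_blocks, D, n_blocks, D).sum(axis=(1, 3))
t_factors = block_sums.mean() / block_sums    # divide by the mean, then invert
```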

The Structural Misalignment Factors, m_uivj

The effects of structural misalignment are not easy to predict. They can often be determined by examining data used for calculating the transaxial geometric factors after normalisation for all other known components. On the GE Advance (GE Medical Systems, Milwaukee, WI), this process reveals high-frequency non-uniformities which are consistent in every sinogram, regardless of ring difference. These non-uniformities are correlated with rotational misalignments of the block modules, which extend for the entire length of the tomograph. However, examination of data from the Siemens/CTI ECAT 962 tomograph (CTI Inc., Knoxville, TN), which also has block modules that extend for the entire length of the tomograph, does not reveal these consistent non-uniformities. On the Advance, the consistency of the non-uniformities can be exploited in a simple manner to yield the required correction factors. Data from a rotating line source are corrected for all known normalisation effects and then summed over all ring differences, yielding a matrix with the same dimensions as a single sinogram. Each element in the matrix is then divided by the mean over all the matrix elements and inverted to yield the m_uivj.

Frequency of Measurement

The geometric factors do not normally change with time and need only be measured once. Depending on their nature, the misalignment factors may either be fixed, or may need to be re-measured as components are replaced. The time window alignment factors should be re-measured whenever detector components are replaced. The detector efficiency and block profile components can change with time, as photomultiplier tube gains drift, and should be re-measured routinely (usually monthly or quarterly, but possibly more often in a less stable environment such as that found in a mobile PET system). The rate-dependent component of the transaxial block profile can, if necessary, be determined for each individual scan using self-normalisation.

Dead Time Correction

Definition of Dead Time

PET scanners may be regarded as a series of sub-systems, each of which requires a minimum amount of time to elapse between successive events for them to be registered as separate. Since radioactive decay is a random process, there is always a finite probability that successive events will occur within this minimum time, and at high count-rates, the fraction of events falling in this category can become very significant.

The principal effect of this phenomenon is to reduce the number of coincidence events counted by the PET scanner, and since the effect becomes stronger as the photon flux increases, the net result is that the linear response of the system is compromised at high count-rates. The parameter that characterises the counting behaviour of the system at high event rates is known as the “dead time”. The fractional dead time of a system at a given count-rate is defined as the ratio of the measured count-rate and the count-rate that would have been obtained if the system behaved in a linear manner.
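For example, a minimal sketch of this definition with assumed numbers, extrapolating the linear response from a low-activity measurement:

```python
# Sketch of the dead time definition in the text, using assumed numbers: the ideal
# (linear) count-rate is extrapolated from a low-activity measurement and compared
# with the rate actually measured at high activity.
low_activity_kBq_per_ml = 1.0
rate_at_low_activity_cps = 2_000.0            # assumed: dead time negligible here

high_activity_kBq_per_ml = 40.0
measured_rate_cps = 64_000.0                  # assumed measured rate at high activity

ideal_rate_cps = rate_at_low_activity_cps * (high_activity_kBq_per_ml / low_activity_kBq_per_ml)
ratio = measured_rate_cps / ideal_rate_cps    # measured / linearly-extrapolated rate
print(f"measured/ideal = {ratio:.2f}; counts lost to dead time = {1 - ratio:.0%}")
```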

Sources of Dead Time

The degree to which a system suffers from dead time and the sources of dead time within a system are highly dependent on its design and architecture.

We now describe three sources of dead time typically found in clinical PET scanners. A more detailed discussion of this topic can be found in [12] and [27].

Within a well-designed scintillation detector sub-system, the primary factor affecting the minimum time between separable events is the integration time, that is, the time spent integrating charge from the photomultiplier tubes arising from a scintillation flash in the detector crystal. If a photon deposits energy in the detector crystal while charge is still being integrated from the previous event (a phenomenon known as “pulse pileup”), there are two possible outcomes. Either the total collected charge is sufficiently great that the upper energy level discriminator threshold is exceeded, in which case both events will be rejected, or the two events are treated as one, with incorrect position and energy (Fig. 5.12). In addition to the integration time, the detector electronics will usually have a “reset” time, during which the sub-system is unable to accept further events. The effects of pulse pileup in block detectors have been investigated by Germano and Hoffman [28], and in large-area PET detectors by Wear et al [29]. To reduce the limiting effect of integration time, several groups have implemented schemes for fast digitisation of the detector output signal. This signal can then be post-processed to separate overlapped pulses [e.g., 30].

Within the coincidence detection circuitry, there is the possibility that more than two events might occur during the coincidence time window. This is known as a “multiple” coincidence, and since it is impossible to ascertain which is the correct coincidence pair, all events comprising the multiple coincidence are rejected.

Processing a coincidence event also takes time, during which no further coincidences may be accepted.

Although the number of coincidences is usually small compared to the number of single events, dead time arising from coincidence processing can be significant because of the architecture of the coincidence electronics. There are too many detector pairs in a PET scanner for each to have its own coincidence circuit. To overcome this problem, the data channels are multiplexed into a much smaller number of shared circuits. These shared circuits have commensurately higher data rates, and as a result become important contributors to overall system dead time.
