Infrared image processing techniques for automatic target detection

Academic year: 2021

Contents

1 Introduction to IR-FPA
1.1 History of IR Focal-Plane Array technology
1.2 Types of staring FPAs in the IR spectrum
1.3 Applications
1.4 Overview on FPAs performance
1.4.1 Spectral Quantum Efficiency
1.4.2 Detectors and coolers
1.4.3 Generic staring FPA parameters

I Non-uniformity correction

2 Noise sources in IR devices
2.1 Introduction to fixed-pattern noise
2.2 Description of noise sources in IR devices
2.2.1 Noise sources classification
2.3 Mathematical formalization of FPN in IR-FPA

3 An Overview on Non-uniformity Correction
3.1 Reference-based methods
3.2 Scene-based NUC methods
3.2.1 Statistical algorithms

4 The problem of ghosting
4.1 Scribner's algorithm
4.2 Ghosting: causes and effects
4.3 De-ghosting methods in the literature
4.3.1 Adaptive learning-rate de-ghosting
4.3.2 Edge-detection based de-ghosting

5 Novel de-ghosting methods
5.1 Bilateral filter-based de-ghosting
5.2 Temporal statistics-based de-ghosting

6 Experimental results
6.1 Data and experiments setup
6.2 BF de-ghosting results
6.3 TS de-ghosting results
6.4 Overall comparison

II Clutter removal

7 Introduction and motivation

8 Description of the problem
8.1 Signal model
8.2 Background clutter removal and target detection

9 Automatic setting of Background Estimation Algorithm
9.1 Description of the proposed method
9.2 Extension to BEA selection
9.3 Practical issues

10 Experimental results
10.1 Data setup
10.2 Summary of analyzed BEAs
10.3 Results and analysis

11 Conclusions


List of Figures

1.1 Development of staring sensors in terms of imaging pixels over the years
1.2 Quantum efficiency (in a logarithmic scale) of visible and IR detectors as a function of wavelength
1.3 Sensor design alternatives and related wavelengths: scanning sensors vs. staring sensors
2.1 Examples of frames affected by FPN: (a) corrected with laboratory-measured gain and offset, (b) shutter calibration, (c) calibrated with manufacturer nominal values, (d) gain corrected, (e) offset corrected, and (f) neither gain nor offset corrected
2.2 The effect of striping noise on scanning sensors: (a) frame with striping effects and (b) the respective stripe-removed frame
2.3 Comparison of theoretical and experimentally measured noise components on an IR-FPA
3.1 Graphical description of the two-point linear reference-based non-uniformity correction
4.1 LMS Scribner's algorithm: block scheme
4.2 Corrected frame No. 860 of the analyzed IR sequence using Scribner's algorithm with (a) η = 2.5×10⁻³, (b) η = 10⁻², and (c) η = 2×10⁻², and a Gaussian spatial filter of dimension 11×11 pixels. Original uncorrupted frame (d)
4.3 Trace of motion of the hot object moved by the operator from frame No. 760 to frame No. 880 of the test sequence
4.4 Temporal evolution of (a) the offset estimates and (b) the error signal related to the edge pixel of coordinates (229, 359) extracted from the test sequence, with NUC operated with Scribner's algorithm employing a moving-average spatial filter of dimension 7×7
4.5 Zoom of frame No. 172 of the test sequence: original frame (a) and frame corrected (b) with Scribner's algorithm employing a spatial moving-average filter of dimension 7×7 pixels
4.6 Original uncorrupted frame (a); frame corrected with Scribner's algorithm (b) setting η = 10⁻² and a moving-average spatial filter of dimension 7×7 pixels
4.7 ALR de-ghosting technique: block scheme
4.8 ED de-ghosting technique: block scheme
5.1 Bilateral filter: example of filtering in the spatial and intensity domains
5.2 Bilateral filter: test image (a) and example of BF operated by varying the parameters σs and σr (b)
5.3 BF-based de-ghosting technique: block scheme
5.4 BF de-ghosting: (a) region of a corrected frame extracted from the test sequence, processed for target signal computation using a mask of dimension 5×5 pixels; (b) mask of the values of the differences between the center pixel and its neighbors
5.5 TS-based de-ghosting technique: block scheme
6.1 Seq. #1: example frames extracted from Seq. #1 (a, c) and related frames corrupted with FPN (b, d)
6.2 Seq. #1: SNR temporal evolution
6.3 BF de-ghosting: MSE of the offset estimation by varying (a) the value of the standard deviation σr and (b) the dimension D of the BF mask
6.4 BF de-ghosting: original uncorrupted frame No. 850 of Seq. #1 (a); noise frame (b); frames corrected with Scribner's algorithm (η = 10⁻²) by varying the spatial filter: moving average (7×7) (c), Gaussian (7×7) (d), 4-nearest pixels (e), and BF (7×7 pixels, σr = 2·σb, σs = 76) (f)
6.5 BF de-ghosting: MSE comparison on Seq. #1 by varying the spatial filter in the LMS-NUC algorithm. The configuration of the filters is the same as in Fig. 6.4
6.6 BF de-ghosting: comparison of the temporal evolutions of the percentage error of the offset estimates related to pixels [200, 50] (a) and [111, 214] (b) extracted from Seq. #1
6.7 Seq. #2: original frames No. 335 (a) and No. 530 (c) and the respective corrupted versions (b) and (d)
6.8 Seq. #2: temporal evolution of SNR
6.9 MSE of the offset estimates operating Scribner's algorithm on Seq. #2 by varying the dimension of the employed Gaussian spatial filter
6.10 TS de-ghosting: comparison of the MSE of the offset estimates on Seq. #2 by varying the width of the tolerance intervals K (a) and the starting frame for TS f.s. (b)
6.11 TS de-ghosting: comparison of the MSE achieved with the LMS-NUC on Seq. #2 in its original version and with the proposed TS de-ghosting
6.12 TS de-ghosting: comparison of frame No. 335 extracted from Seq. #2 with a simulated hot object: original (a), noisy (b), corrected with Scribner's algorithm (c) and using the TS de-ghosting (d)
6.13 TS de-ghosting: comparison of frame No. 534 extracted from Seq. #2 with a simulated hot object: original (a), noisy (b), corrected with Scribner's algorithm (c) and using the TS de-ghosting (d)
6.14 Example frames extracted from Seq. #3: noisy frames No. 138 (a), No. 496 (b), No. 730 (c) and the related original frames (d), (e) and (f)
6.15 Seq. #3: temporal evolution of SNR
6.16 Comparison of the MSE of the offset estimates related to Seq. #3 achieved with TS de-ghosting by varying the width of the tolerance intervals K (a) and the starting frame f.s. (b)
6.17 Comparison of the MSE of the offset estimation achieved operating Scribner's algorithm with no de-ghosting and with TS de-ghosting on Seq. #3
6.18 Comparison of frame No. 1250 extracted from Seq. #3 corrected operating LMS-NUC with no de-ghosting (a) and with TS de-ghosting (b)
6.19 Comparison of the percentage error of the offset estimates related to the edge pixel of coordinates (93, 44) extracted from Seq. #3
6.20 Error signal extracted from the pixel of coordinates (57, 121) in Seq. #3 and related tolerance intervals by varying the width K
6.21 Comparison of the error-signal standard deviation of the edge pixel (93, 44) of Seq. #3 in the TS de-ghosting by varying K
6.22 Comparison of the MSE of the offset estimates operating Scribner's algorithm on Seq. #4 by varying the dimension of the spatial Gaussian filter
6.23 Offset-estimate MSE on Seq. #4 for the state-of-the-art ALR and ED de-ghosting compared with the proposed BF and TS de-ghosting techniques
6.24 Original uncorrupted frame No. 1450 extracted from Seq. #4
6.25 Overall comparison of the processed frames related to frame No. 1450 of Fig. 6.24 extracted from Seq. #4: corrupted (a), original Scribner's with no de-ghosting (b), ALR (c), ED (d), BF (e) and TS (f)
6.26 Comparison of the temporal evolution of the percentage error related to two edge pixels extracted from Seq. #4
9.1 BEA-SC algorithm flowchart
10.1 Example IR images of SeqA (a) and SeqB (b)
10.2 Region of the first frame of SeqA used to test the BEA-SC
10.3 First frame of SeqA (a) with the simulated target, and residual images obtained with the analyzed BEAs: (b) 2D MA, (c) MEDIAN, and (d) BF
10.4 Automatic parameter-selection procedure on SeqA: [email protected] (in logarithmic scale) obtained with different settings of the parameters of the tested BEAs: (a) 2D MA, (b) MEDIAN, and (c) BF
10.5 SeqA: Ex-ROC curves obtained with the detection schemes based on the (a) 2D MA, (b) MEDIAN, and (c) BF, and (d) comparison among the best detection performance provided by each BEA
10.6 Region of the first frame of SeqB used to test the BEA-SC
10.7 First frame of SeqB (a) with the simulated target, and residual images obtained with the analyzed BEAs: (b) 2D MA, (c) MEDIAN, and (d) BF
10.8 Automatic parameter-selection procedure on SeqB: [email protected] (in logarithmic scale) obtained with different settings of the parameters of the (a) 2D MA, (b) MEDIAN, and (c) BF BEAs
10.9 SeqB: Ex-ROC curves obtained with the detection schemes based on the (a) 2D MA, (b) MEDIAN, and (c) BF, and (d) comparison among the best detection performance provided by each BEA


List of Tables

1.1 Military and civilian applications of thermal imaging devices
1.2 Requirements in the design of IR devices
1.3 Overview on typical values of design parameters depending on wavelength
2.1 Classification of main noise sources in IR-FPA cameras
5.1 Spatial distances mask (a), intensity differences mask (b), and final resulting mask (c) related to the BF processing of the mask of Fig. 5.4a
5.2 BF computational load: number of operations per pixel


Abstract

In the framework of remote sensing and optical sensors, infrared (IR) cameras allow the user to form an image using IR radiation, much as common cameras form an image using visible light. The radiation acquired by IR cameras is related to the temperature of the observed scene through Planck's law; indeed, IR cameras are also known as "thermal cameras" for their capability of detecting information concerning the heat of the observed scene. Unlike visible devices, which acquire visible radiation on three distinct bands, and unlike multispectral and hyperspectral sensors, which are characterized by a high spectral resolution, IR cameras acquire radiation on a single channel, or over two separate channels (dual-band devices). Such devices are often known as "wide-band sensors" to indicate a wide spectral response spread over specified bands. Typically, thermal cameras acquire IR radiation in the medium-wave IR (MWIR) and long-wave IR (LWIR) bands.
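The Planck's-law relation mentioned above can be made concrete with a short numeric sketch (not part of the thesis; the constants are standard SI values):

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant [J s]
c = 2.99792458e8     # speed of light [m/s]
kB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_spectral_radiance(wavelength_m, temperature_k):
    """Spectral radiance B(lambda, T) of a black body [W sr^-1 m^-3]."""
    num = 2.0 * h * c**2 / wavelength_m**5
    den = math.exp(h * c / (wavelength_m * kB * temperature_k)) - 1.0
    return num / den

# Radiance of a 300 K scene at 4 um (MWIR) and 10 um (LWIR):
b_mwir = planck_spectral_radiance(4e-6, 300.0)
b_lwir = planck_spectral_radiance(10e-6, 300.0)
# At ambient temperatures the LWIR emission dominates the MWIR emission,
# which is why thermal imagers concentrate on these two bands.
```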

Thermal cameras were originally developed for military purposes during World War II, for detecting and tracking targets in complex scenarios as an alternative to radar systems, since optical systems - due to their passive nature - are more robust to interception countermeasures. For this specific advantage, research in the field of thermal cameras has been driven primarily by military applications. Nevertheless, over the years IR cameras have also been employed in several civilian fields such as fire-fighting operations, structural analysis and monitoring of walls, medicine, archeology, astronomy, and in general in any scientific area where temperature can be useful to analyze or monitor the condition of the observed scene. For example, firefighters use them to see through smoke and localize people, power-line maintenance technicians locate overheating parts to eliminate potential hazards, and building construction technicians look for heat leaks to improve the efficiency of air-conditioning. Thermal imaging cameras are also installed in some cars to aid the driver in difficult meteorological conditions. Some physiological activities in human beings and other warm-blooded animals can also be monitored with thermographic imaging.

A first classification of IR cameras concerns whether or not a cooling system is required. Cooled IR cameras are very sensitive in terms of noise-equivalent temperature difference (NETD), but they are harder to realize, expensive, and demanding in power and cool-down time. On the other hand, uncooled IR cameras, being easier to fabricate, are less expensive and much more portable: they can be stabilized at an operating temperature to reduce image noise, but they are not cooled to low temperatures and do not require bulky and expensive cryogenic coolers.

In addition, a further classification of IR devices concerns the mechanism employed to sense the IR radiation. Over the years, thermal cameras have been developed according to two different architectures: (i) scanning sensors, in which the field of view is covered sequentially by a small number of detectors, and (ii) staring devices, in which an array with a large number of detectors is positioned in the focal image plane of the device so as to acquire the incident IR radiation in parallel. Historically, due to physical constraints, staring focal-plane arrays (FPAs) were employed at shorter wavelengths (MWIR) while scanning sensors were used at longer wavelengths (LWIR). In recent years this division has been abandoned, since staring arrays have fully replaced scanning devices. Research has made the fabrication and cooling requirements of FPAs affordable; FPAs have the clear advantages of easier fabrication and very high sensitivity, thanks to longer integration times that increase the signal-to-noise ratio of the acquired IR radiation.

The rapid development of IR devices has led to the investigation of specific techniques to manage the acquired IR images, whose characteristics differ sensibly from those of images acquired in the visible spectrum. In this context of IR image processing, the military origin of IR devices has made target detection one of the most investigated topics. Generally, in an operational context, target detection can be accomplished in two different ways:

• man-made target detection;
• automatic target detection.

The former is accomplished by an operator, and the reliability of the detection is left to the human perceptual capability of discerning targets embedded in highly structured scenarios and high levels of noise. IR cameras developed for such purposes - or, more generally, to furnish the temperature information of the scene to the operator - are also known as forward-looking infrared (FLIR) cameras. Since modern IR cameras are characterized by a high sensitivity in terms of NETD, the dynamics of the acquired images are too high to be correctly displayed on commercial monitors. In this context, many techniques have been developed over the years for the visualization of IR images on conventional 8-bit monitors. These fields of research are known in the literature as High Dynamic Range Compression (HDRC) and Contrast Enhancement (CE) and are not the subject of this thesis.

On the other hand, automatic target detection (ATD) is the object of investigation of this PhD thesis. The images acquired by IR devices are processed to reveal the presence of targets that typically cannot be easily distinguished from clutter and from spatial and temporal noise sources. Over the years, several infrared search-and-track (IRST) systems have been developed to accomplish ATD tasks. Obviously, the reliability of target detection cannot depend on the visual perception of an operator, especially in complex military scenarios. In this context, the aim of ATD techniques is to furnish detections that can be considered reliable in a statistical sense, i.e. in terms of probability of false alarm (PFA) and probability of detection (PD). In order to obtain reliable ATD performance, appropriate de-noising techniques and clutter removal algorithms have to be developed as a pre-processing step, since the mitigation of noise sources (spatial and temporal) and the accurate estimation of background clutter improve the performance of ATD.

In this framework, this PhD thesis is focused on pre-processing techniques developed to improve the performance of ATD. The first part of the thesis addresses a de-noising problem known in the literature as non-uniformity correction (NUC). Modern IR cameras sense IR radiation by means of FPAs composed of a great number of detectors that, due to physical limits in the fabrication process, are unfortunately characterized by different responses. This source of noise is also known in the literature as fixed-pattern noise (FPN), and it is classified among the spatial noise types since its temporal drift is very slow.

In this thesis, an overview of NUC methods is given, reviewing the best-known techniques proposed in the literature, i.e. reference-based and scene-based techniques. Particular emphasis has been placed on scene-based techniques, in which NUC is operated using only the acquired signal. Such techniques have been developed to overcome the limits of reference-based calibration, which requires halting the standard operations of the IR camera for the calibration setup. Moreover, the mechanical complexity of the shutter mechanism, used as a reference black body, makes such calibration difficult to implement. A detailed overview of scene-based NUC techniques presented in the literature is carried out, highlighting advantages and drawbacks. Focusing on performance, Scribner's algorithm has been analyzed in depth, revealing high NUC performance combined with a small computational load.
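To make the reference-based approach concrete, the classical two-point linear calibration can be sketched as follows. This is a minimal NumPy illustration with a simulated per-pixel gain/offset model; the sensor model and variable names are mine, not the thesis's:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 64, 64

# Simulated per-pixel response: y = gain * irradiance + offset (the FPN).
gain = 1.0 + 0.05 * rng.standard_normal((rows, cols))
offset = 2.0 * rng.standard_normal((rows, cols))

def acquire(irradiance):
    """Sensor model: a uniform scene observed through the non-uniformity."""
    return gain * irradiance + offset

# Two-point calibration: view two uniform black-body references x1 < x2.
x1, x2 = 10.0, 50.0            # known reference radiance levels
y1, y2 = acquire(x1), acquire(x2)
g_hat = (x2 - x1) / (y2 - y1)  # per-pixel correction gain
o_hat = x1 - g_hat * y1        # per-pixel correction offset

# Correcting a new uniform scene leaves almost no residual non-uniformity.
corrected = g_hat * acquire(30.0) + o_hat
```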

In relation to this NUC technique, the problem of ghosting artifacts has been introduced; ghosting is a collateral effect emerging from scene-based NUC techniques based on a statistical approach. Ghosting artifacts are mainly generated by strong edges in the processed scene, and they have to be reduced because they can interfere with ATD techniques. In this context, two de-ghosting methods have been proposed, which aim at reducing the collateral ghosting generated by Scribner's NUC technique:

• Bilateral filter (BF)-based de-ghosting. The bilateral filter is employed to operate accurate spatial estimates of the FPN;


• Temporal statistics (TS)-based de-ghosting. Temporal statistics are computed to obtain an accurate prediction of the trend of the NUC parameter estimates.
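The statistical scene-based scheme that both methods build on can be sketched in a few lines. The following is a generic offset-only LMS update in the spirit of Scribner's algorithm, not the thesis's exact implementation; the parameter names eta (learning rate) and k (spatial-filter size) are my own:

```python
import numpy as np

def lms_nuc_offsets(frames, eta=1e-2, k=7):
    """Offset-only LMS scene-based NUC sketch (Scribner-style).

    A k x k moving-average of the corrected frame serves as the 'true'
    scene estimate; each pixel's offset is driven toward it by a
    steepest-descent update with learning rate eta.
    """
    offset = np.zeros_like(frames[0], dtype=float)
    pad = k // 2
    for y in frames:
        corrected = y - offset
        padded = np.pad(corrected, pad, mode="edge")
        desired = np.zeros_like(corrected)
        for dy in range(k):
            for dx in range(k):
                desired += padded[dy:dy + corrected.shape[0],
                                  dx:dx + corrected.shape[1]]
        desired /= k * k
        # LMS update: strong scene edges leak into this error term,
        # which is precisely what generates ghosting artifacts.
        offset += eta * (corrected - desired)
    return offset
```

With a uniform scene and fixed simulated offsets, the estimate converges to the true FPN up to its low-frequency component, which the spatial filter cannot separate from the scene.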

The proposed techniques have been tested and evaluated, in terms of de-ghosting capability and global NUC effectiveness, on experimental IR data sets with simulated FPN. The de-ghosting effectiveness of the proposed techniques has been shown by analyzing edge pixels, such as hot points or structures, in the examined frames. The results confirm the benefits of the introduced techniques, since the percentage mean-square error (MSE) of the FPN estimate is markedly reduced (from 20% to 1-2%). Such quantitative results have been confirmed by visual inspection of the corrected frames, which show mitigated ghosting artifacts.

In addition, this PhD thesis treats a complementary topic of data processing for ATD applications: background estimation and removal in IR images. A well-established scheme for target detection in IR surveillance systems has been adopted, namely the cascade of background clutter removal and a target detection strategy. Background removal is accomplished by subtracting, from the original image, the estimate of the spatially varying background signal furnished by a background estimation algorithm (BEA). In such an ATD scheme, the overall detection performance is strongly influenced by the effectiveness of the employed BEA. In particular, the BEA and its design parameters should be chosen so as to obtain an accurate estimate of the background signal and to avoid biases caused by the possible presence of targets (target leakage). In the literature, several BEAs have been proposed, each tailored to a particular application and optimized in accordance with the characteristics of the specific operating scenario.

In this framework, a novel method is proposed for choosing and setting the best performing BEA, in its best performing configuration, for the detection of dim point targets in IR images. The proposed procedure relies on a simulation-based off-line approach. Dim simulated targets, with signal-to-clutter ratio (SCR) in accordance with the target specifications of the considered application, are implanted on an acquired sample image representative of the scenario of interest. The best performing configuration for a given BEA is chosen by exploring the performance of the detection scheme, for several configurations of the BEA parameters, over the images with implanted targets.
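The simulation-based selection idea can be illustrated with a toy loop: implant a dim point target at a prescribed SCR, run a candidate BEA over several parameter settings, and keep the setting that best separates the target from the residual. The SCR definition, the median BEA, and the score used below are illustrative stand-ins, not the procedures of the thesis:

```python
import numpy as np

def implant_target(image, pos, scr):
    """Implant a point target with a given signal-to-clutter ratio.

    SCR is taken here as target amplitude over the local clutter
    standard deviation (one common definition; the thesis may differ).
    """
    out = image.copy()
    r, c = pos
    local = image[max(r - 8, 0):r + 9, max(c - 8, 0):c + 9]
    out[r, c] += scr * local.std()
    return out

def median_bea(image, k=5):
    """Illustrative BEA: k x k sliding-window median background estimate."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    stack = [padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
             for dy in range(k) for dx in range(k)]
    return np.median(stack, axis=0)

# Off-line tuning loop: sweep the BEA parameter and keep the setting that
# maximizes the implanted target's peak-to-residual-noise ratio.
rng = np.random.default_rng(0)
clutter = rng.standard_normal((128, 128)).cumsum(axis=1)  # row-correlated clutter
frame = implant_target(clutter, (64, 64), scr=8.0)
scores = {}
for k in (3, 5, 7):
    residual = frame - median_bea(frame, k)
    scores[k] = residual[64, 64] / residual.std()
best_k = max(scores, key=scores.get)
```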

The effectiveness of the proposed BEA selection procedure is evaluated in two case studies typical of IR video-surveillance: maritime and airborne surveillance scenarios. Experimental image sequences have been acquired by MWIR cameras in order to test the proposed BEA selection criterion (BEA-SC). The comparison of the different BEAs, in their different settings, has been accomplished by means of Experimental Receiver Operating Characteristics (Ex-ROC). The achieved results confirm the benefits introduced by the proposed technique, in that the performance of the IR detection system with the BEA tuned according to the proposed selection criterion is sensibly improved: the number of false alarms is markedly reduced.

Chapter 1

Introduction to IR-FPA

1.1 History of IR Focal-Plane Array technology

For hundreds of years following the development of optical systems such as telescopes, microscopes, magnifiers and cameras, the optical image was formed on photographic plates or on film. In the 19th century, the foundations of optical image processing were laid with the discovery of electrolytic photo-cells, photo-conductivity and the photovoltaic effect. Vacuum tubes were successfully developed between the 1920s and the 1930s, leading to the development of television. The first example of a detector array was realized through the use of islands of photosensitive materials, deposited through a screen mesh, in some of the photo-tubes developed in the 1940s [1].

Intensive development of IR solid-state detectors (PbS) began in Germany in the years prior to World War II. During the war, Germany deployed small arrays of IR detectors in aircraft to search for approaching Allied planes. Limited to hand assembly from individual detector elements, detector array size did not begin to grow rapidly until the introduction of photo-lithography in the early 1960s. Photo-lithography enabled the fabrication of silicon monolithic imaging focal planes for the visible spectrum: some of these early developments were intended for technologies to be spread in the mass market. Other efforts were devoted to the development of television cameras, satellite surveillance, and digital imaging. In this context, infrared imaging has been widely pursued in parallel with visible imaging because of its utility in military applications.

Early visible arrays were manufactured in two-dimensional configurations for staring imaging devices, although scanning arrays were developed too, for important applications such as document scanning. Infrared detector arrays were initially built in linear formats, individually wiring each detector element to a preamplifier board: this readout technology limited the array size to about 200 detectors.


Figure 1.1: Development of staring sensors in terms of imaging pixels over the years

Growth beyond this size required silicon readout circuitry able to read and process the signals acquired from the array, together with the ability to display the resulting image. Thus the size of array formats has progressed in step with silicon dynamic random access memory (DRAM) capacity and the number of transistors in a microprocessor. This trend is shown in Figure 1.1, where imaging array formats are compared with the complexity of microprocessor technology, as indicated by the transistor count. Imaging formats have gone beyond what is required for high-definition TV in many detector types. Note the rapid rise of CMOS imaging devices challenging CCDs in the visible spectrum, driving the exponential growth of these capabilities along with the progression of silicon processing technology, as reflected by the reduction in the minimum size of MOS/CMOS features shown at the bottom of the figure. As regards IR-FPA devices, it is worth noting that the limitation on the total number of imaging pixels is mainly due to feasibility difficulties, such as the requirement of uniform responses, which leads to the design of expensive and bulky cooling systems.

1.2 Types of staring FPAs in the IR spectrum

Over the years, detectors have been developed to sense radiation from the entire electromagnetic spectrum: cosmic rays, gamma rays, X-rays, ultraviolet (UV), visible, IR, far-IR/sub-millimeter, microwave, and radio waves. In particular, the visible and IR regions have a wide variety of detector array formats available. Fewer options exist for shorter or longer wavelengths, with only hard-wired arrays of discrete devices at the extremes of the spectrum. Given the subject of this thesis, this introduction focuses on IR detector arrays.

Figure 1.2: Quantum efficiency (in a logarithmic scale) of visible and IR detectors in dependence of the wavelength

A number of detector technologies have been optimized for imaging in the UV, visible and IR bands, extending from approximately 0.1 to 30 µm. Figure 1.2 shows the quantum efficiency (QE) - an accurate measurement of a device's electrical sensitivity to light - of some of the detector materials used to build visible and IR FPAs. Detector arrays in the visible and IR bands are typically integrated with silicon readout circuits with multiplexed outputs. The architecture of these devices has assumed a number of forms, which are now briefly described.

IR sensors can be manufactured following two distinct configurations, presented in Figure 1.3. A field of view can be covered either sequentially, employing a small number of detectors with a scanning mechanism, or in parallel, with a staring mosaic containing a large number of detectors. Historically, the shorter wavelengths have been covered mainly by staring sensors, while the longer wavelengths have been covered by scanning sensors. This division of spectral coverage has been driven mainly by physical principles:

1. Spectral distribution of the photon flux. Photon flux levels from natural backgrounds are generally much lower at shorter wavelengths. The use of staring sensors is therefore desirable, because photons imaged by the optics are collected with essentially 100% efficiency, i.e. with longer integration times. For many long-wavelength applications, the abundance of photons permits the use of scanning arrays with adequate performance, and wide-angle coverage can be achieved with a simple scanner mechanism. However, in order to achieve greater sensitivity, the high photon-collection efficiency of staring sensors is required.

Figure 1.3: Sensor design alternatives and related wavelengths: scanning sensors vs. staring sensors

2. FPA feasibility and cooling requirements. The production of long-wavelength FPAs, compared to short-wavelength devices, is more difficult both to fabricate and to cool adequately. This is the direct result of the small energy transitions associated with long wavelengths. The FPAs must operate at very low temperatures to minimize the dark current, and extreme purity of materials and accurate process control are needed for acceptable levels of response uniformity across the FPA detectors. These stringent requirements can limit the practicality of large FPAs, so scanning becomes necessary for coverage of large fields of view.
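The photon-flux argument in point 1 can be quantified for a shot-noise-limited pixel, whose SNR grows as the square root of the collected photo-electrons; the flux, QE, pixel size and dwell times below are illustrative values, not figures from the thesis:

```python
import math

def shot_noise_snr(photon_flux, qe, area_m2, t_int_s):
    """SNR of a shot-noise-limited pixel: sqrt(collected photo-electrons)."""
    electrons = photon_flux * qe * area_m2 * t_int_s
    return math.sqrt(electrons)

flux = 1e15                    # photons / m^2 / s on the pixel (illustrative)
qe, area = 0.7, (25e-6) ** 2   # QE and a 25 um square pixel

snr_scanning = shot_noise_snr(flux, qe, area, 100e-6)  # 100 us dwell time
snr_staring = shot_noise_snr(flux, qe, area, 10e-3)    # 10 ms integration
# 100x longer integration -> 10x higher SNR for the same pixel, which is
# the staring sensors' sensitivity advantage described in point 1.
```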

Staring FPA sensors collect IR radiation from the optics onto a photon-detecting surface to convert the image to a charge-carrier pattern. This pattern is usually integrated in parallel on a storage surface. Before integration, the photo-current can first be amplified in an intensifier to provide extremely low-light-level operations. Many devices do not have this pre-storage gain, but good sensitivity can still be attained by using a low-noise amplifier in the readout process. Readout of almost all staring sensors is accomplished serially. This is an efficient way to access a large number of resolution elements. For further details on construction of staring devices, the reader is referred to the literature [2–8].

1.3 Applications

The field of application of IR systems is split into two broad categories: military and commercial. Table 1.1 highlights a few applications for each category.

Table 1.1: Military and civilian applications of thermal imaging devices

Military: Reconnaissance; Target acquisition; Fire control; Navigation
Commercial - Civil: Law enforcement; Fire fighting; Border patrol
Commercial - Environmental: Earth resources; Pollution control; Energy conservation
Commercial - Industrial: Maintenance; Manufacturing; Non-destructive testing
Commercial - Medical: Mammography; Soft tissue injury; Arterial constriction

Military and commercial systems are similar in basic design, but each system is built for a specific purpose. As a result, military and commercial systems tend to be described by different performance parameters. Some generic differences are listed in Table 1.2.

In the field of military applications, IR search-and-track (IRST) systems, which are designed to detect point targets, have been highly developed over the years. Generally, in the military framework, the specific system design depends upon the overall application, the atmospheric transmittance, the availability of optics and detectors, and the characteristics of the operative scenario. As an example, the design of a system for target detection differs considerably from that of a system for high-speed aircraft navigation. Military requirements typically call for low noise and maximum reliability. In the commercial world, on the other hand, the requirements are low cost, reproducibility and ease of maintenance, which allow a lower performing device. However, these distinctions are becoming blurred in today's environment, as the trend is to develop a single imaging system able to perform all functions. The development of novel image processing algorithms is required to succeed in that.

Table 1.2: Requirements in the design of IR devices

Design area | Military | Commercial
Vibration stabilization | Required for very narrow fields of view | Usually not required
Image processing algorithms | Specific application | Menu-driven multiple options
Resolution | High resolution (targets at long distances) | The image is magnified by moving closer
Image processing time | Real time | Real time not usually required
Target signature | Usually just perceptible (low NETD) | High-contrast target (NETD not a dominant factor)

1.4 Overview on FPAs performance

1.4.1 Spectral Quantum Efficiency

Quantum efficiency (QE), introduced in the previous section, is an accurate measurement of the device's electrical sensitivity to light. QE refers to the percentage of photons hitting the photo-reactive surface that will produce an electron–hole pair. Since radiation takes different forms over the electromagnetic spectrum, it is more appropriate to introduce the spectral QE, i.e. how much current comes out of the device for an incoming light beam of a given power and wavelength. Spectral QE directly relates the photo-sensitivity of the device to the radiation wavelength. Figure 1.2 shows typical spectral quantum-efficiency characteristics of various photo-detectors. The very long wavelengths are covered by extrinsic-Si materials used mostly in scanning formats. In the short-wavelength regions of the spectrum, either vacuum photo-emitters or thinned and surface-treated charge-coupled devices (CCDs) have been used, mostly in a staring format. In the medium-wave IR (MWIR) band, available detector types include HgCdTe, InSb and PtSi and IrSi Schottky-barrier detectors. In addition to the detectors reported in Figure 1.2, GeSi-heterojunction and GaAs quantum-well detectors have recently been employed successfully in a staring mosaic configuration. However, the final choice of detector materials can be derived from the trade-off among the following specifics:

• coverage format (staring or scanning);
• FPA feasibility;
• photon flux within the spectral band;
• sensor integration time;


• internal noise levels;
• cooling requirements.

The greatest activity of FPA fabrication is in Si-CCD FPAs operated in the visible spectrum. This has been fostered by the huge consumer, industrial, and security markets for video cameras and the technology support provided by the silicon semiconductor industry. Arrays of many detectors, low-noise performance, and excellent pixel uniformity are readily available at low cost.

As regards the IR spectrum, Schottky-barrier and GeSi-heterojunction FPAs are solid-state photo-emissive devices that cover the SWIR, MWIR and LWIR spectral bands and are another extension of visible-spectrum Si-CCD technology. A significant number of industrial organizations are engaged in the fabrication of InSb and HgCdTe staring FPAs. This activity is largely an extension of previous scanning-sensor technology. Recent effort has yielded FPAs of high QE and quite good uniformity in the 3- to 5-µm (MWIR) band, and progress is being made with HgCdTe to attain the performance and feasibility required for 8- to 12-µm (LWIR) band applications.

1.4.2 Detectors and coolers

Coolers classification

Typically, detectors are classified as cooled or uncooled. These are only generic characterizations, since both categories comprise many different types of detectors.

• Cooled detectors. LWIR photon detectors have to be cooled to temperatures below 100 K. These temperatures can be reached only with a mechanical cooler or with liquid nitrogen. Many MWIR detectors can operate at 200 K, a temperature that can easily be reached with a thermoelectric cooler, which appears to have an infinite lifetime compared to mechanical coolers that degrade over time. Generally, coolers add cost and bulk, and consume power.

• Uncooled detectors. Thermal detectors can operate at room temperatures and therefore are called uncooled devices. Although called uncooled, these devices require a cooler to stabilize the detector temperature. Usually a thermoelectric cooler is used. Uncooled devices are lightweight, small in size, and easy to use. Due to the very low power consumption, uncooled devices are hand-held and operate with standard batteries.

Detector classification

Detectors are classified as classical semiconductors, novel semiconductors, and thermal detectors. Detector performance parameters were developed for the classical semiconductor and thermal detectors.


Semiconductors such as the Schottky-barrier photo-diode and the quantum well have been introduced more recently. For these, classical theory is no longer appropriate and new measures of performance have been developed. These categories are briefly described as follows:

• Classical semiconductors. Photo-conductive detectors require a constant bias voltage. The absorbed photons change the bulk resistivity, which, in turn, changes the current. The current change is monitored in an external circuit. Since current is constantly flowing, the detector dissipates heat. As a result, very large arrays are difficult to cool. Photovoltaic detectors are usually realized with a p-n junction in a semiconductor. An absorbed photon produces a voltage change that is sensed in an external circuit. It does not dissipate heat and therefore can be built into very large arrays.

• Novel semiconductors. Schottky-barrier photo-diode devices are photo-emissive devices that produce a voltage. These detectors are compatible with silicon fabrication technology. It is relatively easy to fabricate monolithic devices with the detectors and readout fabricated at the same time. Very large arrays can be built. Band-gap engineered photo-detectors (quantum well) are characterized by a spectral response which can be engineered to any wavelength. Unfortunately, this spectral response is very narrow and these devices need cooling.

• Thermal detectors. Bolometers absorb heat, thus changing their resistance. The change in current, due to the change of resistance, is monitored by means of an external circuit. In large arrays the dissipation of heat becomes a challenging task. Bolometers are usually optically chopped to improve sensitivity and non-uniformity. Pyroelectric detectors can only sense a changing temperature. The thermal changes alter the electrical polarization, which appears as a voltage difference. These AC devices create halos around high-∆T targets. These systems typically have a chopper (to produce a changing scene) between the lens system and the detector. The chopper is synchronized with the frame rate of the camera so that the displayed image appears uniform.

• Specific photon detectors.

Silicide Schottky-barrier devices. The most popular is platinum silicide (PtSi), which is sensitive in the 1.0 to 5.5 µm region. It requires a filter to limit the spectral response, e.g. to create a 3 to 5.5 µm system. Because of the low quantum efficiency, PtSi has poor performance when the background temperature is less than 0 °C. Due to the advances in silicon technology, large arrays can be fabricated very cheaply. The device is often cooled to 70 K to reduce the dark current.

Indium antimonide (InSb) is a high quantum efficiency MWIR detector. It has generally replaced PtSi in those systems using modest-sized arrays. Its peak response is near 5 µm. A filter is used to limit the spectral response to the MWIR region.


Mercury Cadmium Telluride (HgCdTe), also called MCT, is a mixture of the three named elements; by varying the ratio of the mixture, the spectral response can be tailored to the MWIR or LWIR region. The most popular is the LWIR detector with a peak response near 12 µm. HgCdTe detectors are used in all common module systems.

Quantum Well Infrared Photo-detectors (QWIP) are based upon GaAs technology. The wells are created by layers of GaAs/AlGaAs and the response can be tailored from 3 to 19 µm. The LWIR version typically has a spectral response from 8.3 to 10 µm. The responsivity and noise are temperature sensitive, so QWIP devices are cooled to less than 60 K.

1.4.3 Generic staring FPA parameters

In order to provide a perspective for the reader, Table 1.3 reports basic parameters associated with FPAs, i.e. temporal noise (TN) measured in rms-electrons/pixel, non-uniformity (NU) reported in percentage and operating temperature (OT) expressed in Kelvin. Rapid progress has been made in the development and availability of staring FPAs, so the characteristics quoted in Table 1.3 continuously require updating.

The most highly developed FPAs are CCDs in the visible and HgCdTe in the IR, as many detectors have been fabricated for both military and commercial purposes. In the context of military applications, some of these arrays have been tailored in format and in readout characteristics for space-surveillance applications. Special features include low-noise and multi-channel readout, for fabrication of very large FPAs for wide-angle coverage.

UV CCDs have been under development as an extension of visible-spectrum Si CCD designs. Therefore, total resolution capability can be comparable to visible devices. Visible and UV devices have the smallest pixels, and this has evolved from requirements to match optics resolution performance. UV applications require the smallest pixels, but photo-lithographic technology is likely to limit pixel spacing to a few micrometers.

A variety of Schottky-barrier FPAs of a few hundred elements on a side have been under development for commercial and military applications. While quantum efficiency has been demonstrated to be relatively low in the thermal-emission band beyond 3 µm, the high uniformity and high photon-collection efficiency afforded by very large staring arrays have permitted demonstration of excellent thermal sensitivity. Fabrication of large Schottky-barrier FPAs has been very practical. Recently, three-side-abuttable devices with Schottky-barrier and GeSi-heterojunction detectors have become available for very wide-angle applications.

Extremely small pixels are not appropriate for all applications. Because of optics diffraction limits, IR applications require large pixels. In addition, astronomy and space-surveillance applications require very large optics for photon collection, so large pixels are needed to match the optics. Because of requirements for large pixels, there is an implied need for physically large focal planes to satisfy requirements for total resolution and total field of view. From the viewpoint of FPA fabrication yield, this usually can be achieved best by assembling sub-arrays together.

Band     Detector    FPA         Pixel size   TN            NU (%)   OT (K)
UV       Si          CCD         10-30 µm     1 to 5        1        260 to 300
VISIBLE  Si          CCD         10-30 µm     1 to 5        1        260 to 300
MWIR     InSb        Hybrid      40-60 µm     80 to 500     4        55 to 80
         HgCdTe      Hybrid      40-60 µm     100 to 1000   5        60 to 120
         PtSi        Hybrid      20-40 µm     80 to 500     2        55 to 80
         PtSi        CCD         20-40 µm     20 to 100     0.25     55 to 80
LWIR     HgCdTe      Hybrid      40-60 µm     300 to 3000   20       40 to 80
         IrSi, GeSi  CCD         20-40 µm     20 to 100     1        30 to 60
         Bol, BST    Monolithic  50 µm        30000         20       300

Table 1.3: Overview on typical values of design parameters depending on wavelength

The internal noise performance of a sensor is important for low-exposure applications. The level needed depends on the background exposure associated with the spectral band and with the mode of operation. Historically, intensifier vacuum tubes have had the lowest noise performance: due to the very high gain of these sensors, dynamic range is limited compared to solid-state devices. Recently, visible CCDs have been successfully fabricated with internal noise as low as a few rms-electrons/pixel, and very high dynamic range is possible. For the UV and visible bands, in which backgrounds are low, levels of a few rms-electrons/pixel are needed. Since radiation flux from backgrounds is higher in the MWIR and LWIR bands, in those bands levels of a few hundred rms-electrons/pixel are usually satisfactory.

Part I

Non-uniformity correction

Chapter 2

Noise sources in IR devices

In this chapter a brief introduction to fixed-pattern noise is given in order to introduce the reader to the problem of non-uniformity in the FPA detectors of IR devices. In section 2.1 the problem of fixed-pattern noise is introduced, showing its effects on some acquired images. In section 2.2 a physical approach is used to report a commonly adopted model of the acquired signal and of fixed-pattern noise with respect to other, briefly introduced, sources of noise that affect IR images. This physical model is introduced to show how much the fixed-pattern noise component weighs on the overall contribution with respect to other noise sources in IR devices. Finally, in section 2.3 a mathematical model of the acquired signal taking into consideration the contribution of fixed-pattern noise is described: this notation is adopted for the description of the NUC techniques shown in the following chapters of the thesis.

2.1 Introduction to fixed-pattern noise

A staring infrared focal-plane array (IR-FPA) is made up of a two-dimensional mosaic of detectors, with each detector covering a pixel of the scene within the sensor field of view. Compared to mechanically scanned IR imaging devices, staring IR-FPA sensors can be superior not only because of their optical simplicity and consequent compactness, but also because of the high sensitivity achieved with their long integration times.

Unfortunately, in practice the theoretical background-limited IR photo-detector sensitivity cannot be achieved because of fixed-pattern noise (FPN). FPN is a source of noise caused by non-uniformity and dark current contributions among the responses of the detectors in the FPA. In detail, the causes of FPN are listed below:


• Response non-uniformity: variations in the response of the individual detectors, due in part to fabrication process control factors. In some cases, spectral response variations in the detection material itself and non-uniformity in the multiplexer or readout circuits can also contribute to the response non-uniformity.

• Dark current: electric current that flows through photosensitive devices even when no photons are entering the device.

The effect of FPN is especially serious in IR devices, which typically are employed in scenarios with very low-contrast scenes, i.e. with no significant variations of temperature. To show the damaging effect of FPN, Fig. 2.1 reports some example frames affected by FPN and acquired with an IR-FPA camera in different scenarios and several calibration conditions. Since the non-uniformity contribution is random among the detectors, the response image looks like a white-noise-degraded picture. The scene information is buried in this fixed spatial pattern. Please note that in the literature such a source of noise is named "fixed": this is not completely correct, since FPN has a very slow temporal drift whose magnitude is surely negligible if compared to the typical frame rates of thermal cameras, but whose presence, as will be shown in the present part, determines the design of non-uniformity correction techniques. Obviously, fixed-pattern noise affects the acquisition of the IR radiation also in mechanically scanned IR imaging devices, but the visual effect on the acquired images is sensibly different. Since scanning sensors acquire the IR radiation exploiting the horizontal scanning mechanism, in the simplest configuration each row of the resulting image is acquired by a single detector scanning the entire field of view. The resulting FPN consists of "stripes" over the image, as can be seen in Fig. 2.2.

2.2 Description of noise sources in IR devices

The electron-density exposure ε associated with either an object or the background of the acquired image can be expressed by the following:

\varepsilon = \frac{\pi T_i F_s}{4 e^- (f/\#)^2} \int_{\lambda_1}^{\lambda_2} R_\lambda T_{o\lambda} T_{a\lambda} L_\lambda \,\mathrm{d}\lambda \quad \left(\mathrm{e/m^2}\right) \tag{2.1}

where the reported parameters are listed below:

f/#    optics focal ratio
T_i    sensor integration time (s)
F_s    FPA active-area fill factor


Figure 2.1: Examples of frames affected by FPN: (a) corrected with laboratory-measured gain and offset, (b) shutter calibration, (c) calibrated with manufacturer nominal values, (d) gain corrected, (e) offset corrected and (f) neither gain nor offset corrected.


Figure 2.2: The effect of striping noise on scanning sensors: (a) frame with striping effects and (b) the respective stripe-removed frame.


λ_1      lower-wavelength limit (µm)
λ_2      upper-wavelength limit (µm)
R_λ      sensor spectral responsivity (A W^-1)
T_{oλ}   optics transmission
T_{aλ}   atmospheric transmission
L_λ      object or background spectral radiance (W m^-2 sr^-1 µm^-1)

For a gray body, exploiting Planck's law, Eq. 2.1 can be rewritten as:

\varepsilon = \frac{\pi c^2 h T_i F_s}{2 e^- (f/\#)^2} \int_{\lambda_1}^{\lambda_2} \frac{R_\lambda T_{o\lambda} T_{a\lambda} \varepsilon_\lambda \,\mathrm{d}\lambda}{\lambda^5 \left[\exp\!\left(\frac{hc}{kT\lambda}\right) - 1\right]} \quad \left(\mathrm{e/m^2}\right) \tag{2.2}

where T is the object (or background) temperature (K), ε_λ is the object (or background) emissivity, h is the Planck constant (J s), k is the Boltzmann constant (J K^-1), and c is the velocity of light (m/s). The electron-density exposure ε of Eq. 2.2 is useful to introduce the FPA pixel exposure ξ_p, expressed as:

\xi_p = 10^{-6}\,\varepsilon\,\Delta^2 \quad (\mathrm{e}) \tag{2.3}

where ∆ is the mean pixel spacing (mm). Generally, from the electron-density exposure ε the image contrast C can be derived as the ratio of difference-signal to background exposure:

C = \frac{|\varepsilon_o - \varepsilon_b|}{\varepsilon_b} \tag{2.4}

where ε_o is the object exposure and ε_b is the background exposure. For IR systems operating in an environment of very small temperature difference between object and background, the background-normalized differential temperature contrast C_∆ is a useful quantity which better emphasizes local signal differences:

C_\Delta = \frac{1}{\varepsilon_b}\frac{\mathrm{d}\varepsilon_b}{\mathrm{d}T} \quad \left(\mathrm{K^{-1}}\right) \tag{2.5}
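To make Eqs. 2.3 and 2.4 concrete, the short sketch below evaluates the pixel exposure and the contrast; the exposure values and the 40-µm pitch are hypothetical, chosen only to exercise the formulas.

```python
# A minimal numerical sketch of Eqs. 2.3 and 2.4. The exposure values
# below are illustrative assumptions, not measured data.

def pixel_exposure(eps, delta_mm):
    """Eq. 2.3: FPA pixel exposure xi_p (e), from the electron-density
    exposure eps (e/m^2) and the mean pixel spacing delta_mm (mm)."""
    return 1e-6 * eps * delta_mm ** 2

def contrast(eps_o, eps_b):
    """Eq. 2.4: image contrast as the ratio of the difference signal
    to the background exposure."""
    return abs(eps_o - eps_b) / eps_b

eps_b = 2.0e14   # assumed background exposure density (e/m^2)
eps_o = 2.1e14   # assumed object exposure density (e/m^2)

print(pixel_exposure(eps_b, 0.04))        # pixel exposure for a 40-um pitch
print(round(contrast(eps_o, eps_b), 6))   # -> 0.05
```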

2.2.1 Noise sources classification

Typically, sensor design and performance assessment require identification and quantitative representation of noise sources. Noise is generated within a sensor and from the optical radiation environment. Mathematical representation of the various noise sources should include functional dependence on exposure and on spatial and temporal characteristics. For maximum utility, noise levels should be normalized to laboratory test standards such as bar patterns.

Noise          Type          Exposure dependence
Temporal       Quantum       ∝ √exposure (quantum)
               Readout       None (additive)
Fixed-pattern  Responsivity  ∝ exposure (multiplicative)
               Dark-level    None (additive)

Table 2.1: Classification of main noise sources in IR-FPA cameras

A simplified characterization of the various noise sources in terms of temporal and spatial characteristics is presented in Table 2.1. Temporal noise varies from frame to frame, whereas fixed-pattern noise does not: the latter is characterized by a very slow temporal drift. Temporal readout noise and dark-level noise are not a function of exposure, while quantum and responsivity noises are different functions of exposure. The proportionality relationships to exposure are indicated in Table 2.1.

Temporal and fixed-pattern noise generated in background regions of an image (i.e. regions with no structure, where noise can be better estimated) usually determine the noise threshold for both visual imaging and for machine detection. Therefore, the relationships of noise to background exposure and the details of the mechanisms of noise generation within sensing devices are treated in the literature [9–12].

Quantum noise

The quantum-noise variance density n_Q can be computed as:

n_Q = K\varepsilon_b = \frac{10^6 K \xi_{pb}}{\Delta^2} \quad \left(\mathrm{e/m^2}\right) \tag{2.6}

where ε_b is the image-plane exposure density (e/m^2), K is the quantum-noise factor, ξ_pb is the pixel exposure (e/pixel), and ∆ is the pixel spacing (mm). The factor K is a measure of how ideal the photo-detection process is, and it is derived from performance measurements. For a given exposure, it is the ratio of measured quantum-noise variance to shot-noise variance. For a real detector K > 1 and represents excess noise not included by Poisson statistics. A typical mechanism that gives rise to this departure from the ideal is the presence of gain in the sensor. For a sensor containing discrete pixels, it can be computed through the following:

K = \frac{10^6 N_Q^2}{\varepsilon_b \Delta^2} = \frac{N_Q^2}{\xi_{pb}} \tag{2.7}

where N_Q is the measured quantum rms noise (e/pixel).
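As a small numeric check of Eqs. 2.6 and 2.7, the sketch below computes K; the values of N_Q and ξ_pb anticipate the worked measurement example at the end of this section, while the pixel pitch is an assumption.

```python
# Sketch of Eqs. 2.6 and 2.7. N_Q and xi_pb reproduce the worked
# measurement example of this section; delta_mm is an assumed pitch.

def quantum_noise_factor(N_Q, xi_pb):
    """Eq. 2.7: K = N_Q^2 / xi_pb, with N_Q the measured quantum rms
    noise (e/pixel) and xi_pb the pixel exposure (e/pixel)."""
    return N_Q ** 2 / xi_pb

def quantum_noise_variance_density(K, xi_pb, delta_mm):
    """Eq. 2.6: n_Q = 10^6 * K * xi_pb / delta^2 (e/m^2)."""
    return 1e6 * K * xi_pb / delta_mm ** 2

K = quantum_noise_factor(N_Q=1.3e2, xi_pb=1e4)
print(K)   # -> 1.69; an ideal shot-noise-limited detector would give K = 1
print(quantum_noise_variance_density(K, xi_pb=1e4, delta_mm=0.04))
```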


Additive temporal noise

The origin of additive temporal noise includes the FPA readout amplifier, thermal noise, and poor charge-transfer efficiency. The variance density n_AT of additive temporal noise can be determined from FPA noise measurements and is computed from the following:

n_{AT} = \frac{10^6 N_{AT}^2}{\Delta^2} \quad \left(\mathrm{e/m^2}\right) \tag{2.8}

where N_AT is the measured rms noise (e/pixel).

Multiplicative fixed-pattern noise

FPN does not vary temporally, and can be either independent of exposure or some function of exposure. An important type of fixed-pattern noise is represented by an rms level that is directly proportional to exposure and is produced by pixel-to-pixel differences in responsivity or gain. It is considered a significant noise source, especially under low-contrast conditions, even if it can be neglected when compared to the additive FPN component introduced in the next sub-section. The mathematical representation of the multiplicative component of FPN applies to all detectors viewing a source of given spectral distribution, and to all detectors that have no variation in relative responsivity, regardless of the spectral distribution of the source. If each pixel has a different relative spectral responsivity, FPN will obviously also be a function of the spectral distribution of the source. Typically, this applies to detectors that exhibit variations in cut-off wavelength. The mathematical description of this situation is very complex and fixed-pattern noise correction is less effective: it will not be treated here.

Multiplicative FPN can be measured by means of laboratory experiments and expressed as an rms noise-modulation factor M:

M = \frac{10^3 N_{MFP}}{\varepsilon_b f_{en} \Delta^2} = \frac{10^{-3} U}{f_{en}} \quad (\mathrm{m}) \tag{2.9}

where N_MFP is the measured rms multiplicative FPN (e/pixel), f_en is the spatial-noise equivalent bandwidth of the noise (cycles/mm), and U is the ratio of rms noise to mean exposure. For an FPN model that is independent from pixel to pixel,

f_{en} = \frac{1}{2\Delta} \quad (\mathrm{cycles/mm}) \tag{2.10}

Then M = 2 \times 10^{-3} U \Delta (m), and the FPN variance density n_MFP is:

n_{MFP} = \left(\frac{M \varepsilon_b}{2}\right)^2 \quad \left(\mathrm{e/m^2}\right) \tag{2.11}

The rms fixed-pattern noise per pixel N_MFP can be computed from the following:

N_{MFP} = 10^{-3} \left(n_{MFP} \Delta^2\right)^{\frac{1}{2}} \quad (\mathrm{e/pixel}) \tag{2.12}

For those cases in which FPN is not independent from pixel to pixel, computation of the noise factor M is more complicated [13].
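The chain of Eqs. 2.9 to 2.12 can be traced numerically. In the sketch below, U and ∆ match the worked example at the end of this chapter, while the background exposure and the helper functions are illustrative.

```python
import math

# Sketch of Eqs. 2.9-2.12 for pixel-to-pixel-independent multiplicative
# FPN. U and delta_mm match the worked example later in this chapter;
# the background exposure passed to the helpers is an assumed value.

def noise_modulation_factor(U, delta_mm):
    """M = 2e-3 * U * delta (m), i.e. Eq. 2.9 with f_en = 1/(2*delta)."""
    return 2e-3 * U * delta_mm

def mult_fpn_variance_density(M, eps_b):
    """Eq. 2.11: n_MFP = (M * eps_b / 2)^2 (e/m^2)."""
    return (M * eps_b / 2.0) ** 2

def rms_fpn_per_pixel(n_fpn, delta_mm):
    """Eq. 2.12: N_MFP = 1e-3 * sqrt(n_MFP * delta^2) (e/pixel)."""
    return 1e-3 * math.sqrt(n_fpn * delta_mm ** 2)

M = noise_modulation_factor(U=0.0062, delta_mm=0.04)
n = mult_fpn_variance_density(M, eps_b=2.0e14)   # assumed exposure
print(M)                          # noise-modulation factor
print(rms_fpn_per_pixel(n, 0.04)) # rms multiplicative FPN per pixel
```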

Additive fixed-pattern noise

Additive fixed-pattern noise is not dependent on the sensor background exposure. The cause of additive FPN resides in dark-current exposure. The dark current flows through photosensitive devices even when no photons are entering the device. Physically, dark current is due to random generation of electrons and holes within the depletion region of the device that are then swept by the high electric field: dark current is present in all diodes. Variations in this exposure can be modeled in the same manner as described above for multiplicative background-exposure noise. Comparable dark-current-related quantities N_DFP, f_Den, U_D, M_D and n_DFP can be described identically to those represented in Eqs. 2.9 to 2.12. The dark-current exposure ε_D is expressed as:

\varepsilon_D = \frac{J_D T_i}{e^-} \quad \left(\mathrm{e/m^2}\right) \tag{2.13}

where J_D is the dark current density adjusted for pixel fill factor (A/m^2), T_i is the integration time (s),

and e^- is the electron charge (C/e). Then, the additive dark-current FPN variance density is:

n_{AFP} = \left(\frac{M_D \varepsilon_D}{2}\right)^2 \quad \left(\mathrm{e/m^2}\right) \tag{2.14}

and the rms fixed-pattern noise N_AFP is expressed through the following:

N_{AFP} = 10^{-3} \left(n_{AFP} \Delta^2\right)^{\frac{1}{2}} \quad (\mathrm{e/pixel}) \tag{2.15}

Example of sensor noise measurement and modeling

FPA noise measurements can be made to separate the components into the various temporal and fixed-pattern types. All noise sources and exposure components are referred to the FPA detectors by dividing the values measured at the output by the gain between the output and the detectors. Temporal-noise components are obtained from frame-to-frame variations that occur in individual detector elements. Fixed-pattern noise components are obtained from pixel-to-pixel spatial variations, subsequent to frame-to-frame temporal averaging.


Figure 2.3: Comparison of theoretical and experimentally measured noise components on an IR-FPA

Fig. 2.3 presents an example of FPA noise measurement. The measurements carried out through the experiment are superimposed in the graph on the theoretical noise calibration for shot (quantum) noise and multiplicative FPN. It is worth noting that the shot noise component varies as the square root of the background exposure, while the fixed-pattern noise components vary directly with background exposure. The data presented in Fig. 2.3 were obtained by integration of multiple frames to average temporal noise down to a negligible value. Then the rms of the residual noise was obtained from the pixel samples. This procedure was followed for each value of exposure. At high exposures, FPN was found to approach the theoretical direct variation with exposure, i.e. the multiplicative dependence. At low exposures, fixed-pattern noise was found to become independent of exposure: this represents the additive contribution.
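The separation procedure just described (temporal averaging to isolate the fixed pattern, frame-to-frame statistics of a single pixel to isolate temporal noise) can be sketched on synthetic data; all sizes and noise levels below are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

# Synthetic stack: uniform exposure + fixed per-pixel bias (FPN)
# + frame-to-frame temporal noise. All sizes and levels below are
# illustrative assumptions, not measured values.
FRAMES, PIXELS = 400, 1024
fpn = [random.gauss(0.0, 5.0) for _ in range(PIXELS)]
stack = [[100.0 + fpn[p] + random.gauss(0.0, 2.0) for p in range(PIXELS)]
         for _ in range(FRAMES)]

# Fixed-pattern estimate: temporal averaging suppresses temporal noise;
# the spatial rms of the residual in the averaged frame is the FPN level.
mean_frame = [sum(stack[f][p] for f in range(FRAMES)) / FRAMES
              for p in range(PIXELS)]
fpn_rms = statistics.pstdev(mean_frame)

# Temporal estimate: frame-to-frame variation of a single selected
# pixel, which avoids pixel-to-pixel FPN differences.
temporal_rms = statistics.pstdev([stack[f][0] for f in range(FRAMES)])

print(round(fpn_rms, 2))       # close to the injected FPN sigma (5)
print(round(temporal_rms, 2))  # close to the temporal sigma (2)
```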

Temporal-noise data were obtained by observation of frame-to-frame variations of a selected pixel. This procedure avoids pixel-to-pixel variations caused by fixed-pattern noise. At medium and high exposures, the temporal noise is very close to the theoretical shot-noise levels, with the displacement representing the square root of the quantum-noise factor K described in Eq. 2.7. At low exposures, temporal noise becomes independent of exposure and represents the additive component. The noise parameters needed for the performance modeling can be derived from Fig. 2.3. To obtain the quantum-noise factor K, an exposure in the quantum-noise-limited region is selected, for example 10^4 e/pixel. The related rms noise is:

N_Q = 1.3 \times 10^2 \quad (\mathrm{e/pixel}) \tag{2.16}

Then from Eq. 2.7, we obtain:

K = \frac{N_Q^2}{\xi_{pb}} = 1.69 \tag{2.17}

and from Eq. 2.6, the quantum-noise variance density is

n_Q = 1.69\,\varepsilon_b \quad \left(\mathrm{e/m^2}\right) \tag{2.18}

As for the multiplicative FPN, it is obtained by using any point on the linearly increasing asymptote of Fig. 2.3. The rms non-uniformity is U = 0.0062. Using M = 2 \times 10^{-3} U \Delta, the noise-modulation factor is M = 4.96 \times 10^{-7} and, from Eq. 2.11, the multiplicative noise variance density is:

n_{MFP} = 6.15 \times 10^{-14}\,\varepsilon_b^2 \quad \left(\mathrm{e/m^2}\right) \tag{2.19}

For completeness, as regards the additive fixed-pattern noise, from Fig. 2.3 the rms additive FPN is:

N_{AFP} = 2.5 \times 10^2 \quad (\mathrm{e/pixel}) \tag{2.20}

and from Eq. 2.14 the additive FPN variance density is:

n_{AFP} = 3.91 \times 10^{13} \quad \left(\mathrm{e/m^2}\right) \tag{2.21}

Finally, from Fig. 2.3, the additive temporal rms noise is:

N_{AT} = 30 \quad (\mathrm{e/pixel}) \tag{2.22}

From Eq. 2.8, the additive temporal noise variance density is:

n_{AT} = 5.63 \times 10^{11} \quad \left(\mathrm{e/m^2}\right) \tag{2.23}

Finally, the total noise variance density n is:

n = n_Q + n_{MFP} + n_{AFP} + n_{AT} = 1.69\,\varepsilon_b + 6.15 \times 10^{-14}\,\varepsilon_b^2 + 3.91 \times 10^{13} + 5.63 \times 10^{11} \quad \left(\mathrm{e/m^2}\right) \tag{2.24}

It is worth noting that, while the magnitudes of the quantum noise and of the multiplicative FPN depend on the exposure ε_b, the contribution of the additive FPN exceeds the contribution of the additive temporal noise by about two orders of magnitude.
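The total of Eq. 2.24 can be evaluated numerically; the sketch below hard-codes the constants of the worked example (Eqs. 2.18 to 2.23).

```python
# Sketch of Eq. 2.24: the total noise variance density as a function of
# the background exposure, using the constants of the worked example.

def total_noise_variance_density(eps_b):
    n_Q   = 1.69 * eps_b             # quantum (Eq. 2.18)
    n_MFP = 6.15e-14 * eps_b ** 2    # multiplicative FPN (Eq. 2.19)
    n_AFP = 3.91e13                  # additive FPN (Eq. 2.21)
    n_AT  = 5.63e11                  # additive temporal (Eq. 2.23)
    return n_Q + n_MFP + n_AFP + n_AT

# At zero exposure only the additive terms remain; the additive FPN
# term exceeds the additive temporal term by roughly two orders of
# magnitude (3.91e13 / 5.63e11 ~ 69).
print(total_noise_variance_density(0.0))
```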


2.3 Mathematical formalization of FPN in IR-FPA

A mathematical notation is now introduced to deal with the problem of FPN in IR-FPA cameras. In the literature, the most established, and easiest to employ, model of the acquired signal embedded in FPN is a linear model [14]. Consider an image sequence y_{i,j}(n) generated by an M × N IR-FPA, where n = 1, 2, 3, ... is the temporal frame index. A commonly used model for the (i, j)-th FPA sensor output at the n-th frame is:

y_{i,j}(n) = a_{i,j}(n)\, x_{i,j}(n) + b_{i,j}(n) \tag{2.25}

where x_{i,j}(n), here termed the irradiance, is proportional to the number of photons collected by the (i, j)-th detector during the camera integration time, and a_{i,j}(n) and b_{i,j}(n) are the detector gain and bias parameters of the linear model of fixed-pattern noise, referred to the pixel of position (i, j) at the n-th frame. In this thesis, it is assumed that the main source of FPN is due to the detector biases; therefore, to simplify the model of Eq. 2.25, the gain is assumed uniform across all detectors with a value of unity. Thus, the uniform-gain detector model becomes:

y_{i,j}(n) = x_{i,j}(n) + b_{i,j}(n) \tag{2.26}

This assumption and the model of Eq. 2.26 are confirmed by laboratory measurements [15]. Moreover, it is assumed throughout the whole PhD thesis that the temporal drift of fixed-pattern noise is very slow, i.e. from frame to frame at adequate frame rates the parameters of FPN can be considered constant in time, or at least over a fixed number of frames [16]. With this assumption, the model of Eq. 2.26 is further simplified as:

y_{i,j}(n) = x_{i,j}(n) + b_{i,j} \tag{2.27}
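A minimal simulation may clarify the additive-bias model of Eq. 2.26 with frame-independent biases; the array size and bias statistics below are illustrative assumptions.

```python
import random

random.seed(1)

# Minimal sketch of the additive-bias FPN model of Eq. 2.26, with the
# biases held constant over frames as in the constant-bias assumption.
# The array size and bias statistics are illustrative assumptions.
M, N = 4, 4
bias = [[random.gauss(0.0, 10.0) for _ in range(N)] for _ in range(M)]

def observe(x):
    """y_{i,j}(n) = x_{i,j}(n) + b_{i,j}: the bias does not depend on
    the frame index n."""
    return [[x[i][j] + bias[i][j] for j in range(N)] for i in range(M)]

flat = [[50.0] * N for _ in range(M)]   # a spatially uniform scene
y1, y2 = observe(flat), observe(flat)
assert y1 == y2   # the fixed pattern repeats identically frame to frame
```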

Chapter 3

An Overview on Non-uniformity Correction

In this chapter, the state of the art concerning non-uniformity correction (NUC) is presented. Reference-based methods are briefly introduced in section 3.1, showing their limits and the variants proposed over the years. In section 3.2, a detailed overview of scene-based NUC techniques is reported, distinguishing between statistical (sub-section 3.2.1) and registration-based algorithms (sub-section 3.2.2).

3.1 Reference-based methods

Reference-based methods compute an estimate of FPN employing some form of absolute temperature reference, such as a black body radiation source or a uniform temperature shutter. The calibration is performed by heating the source to a uniform and known temperature and placing it within the full camera field of view (FOV). The resulting spatially flat images can then be used to linearly extract the non-uniformity that characterizes the response of the detectors of the FPA.

A standard form of calibration is the two-point calibration (TPC). This method is known as "two-point calibration" because the source is imaged at two distinct and known temperatures. From these two spatially uniform images the gain and bias parameters can be directly extracted. Reference-based calibration is performed by initially cold-shuttering the device to a uniform scene x^{(c)}_{i,j} = x^{(c)} ∀ i, j, producing a response y^{(c)}_{i,j} in each element of the FPA:

y^{(c)}_{i,j} = a_{i,j}\, x^{(c)} + b_{i,j} \tag{3.1}

where a_{i,j} and b_{i,j} represent the gain and offset of the detector of position (i, j). The FPA is then shuttered

at a hot temperature using a uniform hot scene x^{(h)}_{i,j} = x^{(h)} ∀ i, j, thus producing the following response:

y^{(h)}_{i,j} = a_{i,j}\, x^{(h)} + b_{i,j} \tag{3.2}

After that, the FPA is exposed to the non-uniform scene x^{(s)}_{i,j}, producing the response:

y^{(s)}_{i,j} = a_{i,j}\, x^{(s)}_{i,j} + b_{i,j} \tag{3.3}

The two-point reference-based NUC processor solves the above equations to produce a corrected output y^{(out)}_{i,j} proportional to the non-uniform scene x^{(s)}_{i,j}:

y^{(out)}_{i,j} = K_1 \frac{x^{(s)}_{i,j}}{x^{(h)} - x^{(c)}} + K_2 \tag{3.4}

which has the desired property of being independent of a_{i,j} and b_{i,j}. The constants K_1 and K_2 can be chosen such that the range of flux values in a typical scene produces output values that are consistent with the electronics used for display or for further signal processing [17]. These constants form the linear mapping function that maps the different detector voltages resulting from one flux level into a single display voltage. As an example, this mapping is shown schematically in Fig. 3.1 for three distinct detectors: the result of the calibration is indicated as y_D(x^{(S)}). The average values of the detectors' responsivity and offset can be used for K_1 and K_2 if the objective is to reproduce the average detector outputs.

When only the bias parameters are desired, a single uniform-temperature calibration (also known as one-point calibration) may be employed. Similarly, to more accurately characterize the non-linear behavior of the detector response, a multi-point calibration can be used to obtain a higher-order estimate of each detector response curve [14].
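Although the text expresses the correction through the mapping of Eq. 3.4, an equivalent and convenient software realization estimates the per-pixel gain and offset from the two flat-field acquisitions and inverts the linear model. The following is a minimal sketch, with hypothetical array size, gain/offset spreads and flux levels.

```python
import random

random.seed(2)

# Hedged sketch of two-point calibration for the linear model of
# Eqs. 3.1-3.3: per-pixel gain and offset are estimated from two flat
# (blackbody) acquisitions, and the model is then inverted on scene
# frames. All sizes and levels are hypothetical.
N = 8
gain   = [1.0 + random.uniform(-0.1, 0.1) for _ in range(N)]
offset = [random.uniform(-5.0, 5.0) for _ in range(N)]

def acquire(x):
    """y_i = a_i * x_i + b_i, as in Eqs. 3.1-3.3 (x: per-pixel irradiance)."""
    return [gain[i] * x[i] + offset[i] for i in range(N)]

x_cold, x_hot = 20.0, 80.0                 # known flat-field flux levels
y_cold = acquire([x_cold] * N)
y_hot  = acquire([x_hot] * N)

# Per-pixel estimates: a = (y_h - y_c) / (x_h - x_c), b = y_c - a * x_c.
a_hat = [(y_hot[i] - y_cold[i]) / (x_hot - x_cold) for i in range(N)]
b_hat = [y_cold[i] - a_hat[i] * x_cold for i in range(N)]

def correct(y):
    """Invert the linear model to recover the irradiance."""
    return [(y[i] - b_hat[i]) / a_hat[i] for i in range(N)]

scene = [random.uniform(30.0, 70.0) for _ in range(N)]
corrected = correct(acquire(scene))
assert all(abs(c - s) < 1e-9 for c, s in zip(corrected, scene))
```

Under the exactly linear model the correction is perfect; in practice, residual error remains because of non-linearity and drift, as discussed below.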

Reference-based calibration techniques have the desirable property that they provide reasonably good estimates of the non-uniformity and, after correction, the resulting imagery is radiometrically accurate if expressed in units of radiance. Note that to get a truly accurate temperature profile, additional information about the object emissivity and atmospheric properties is required.

On the other hand, the main disadvantage of reference-based techniques is that, due to drift in the FPN, the FPA must be re-calibrated on the order of minutes throughout camera operation. The camera is thus "blind" during the calibration process, i.e. the standard acquisition operation of the camera is interrupted. Furthermore, because the calibration procedure itself can take a considerable amount of time, e.g. several seconds in the TPC, most real-time imaging systems are forced to use the less accurate one-point calibration procedure. In addition, it may be undesirable to bear the expense of the black-body source, particularly for applications where radiometry is of no concern.


Figure 3.1: Graphical description of the two-point linear reference-based non-uniformity correction

Milton et al. discussed the sources of the residual error that remains after attempting to correct for non-uniformity in a staring FPA. The investigation in [17] was carried out using both linear and non-linear responsivity variations, showing that in the linear case the residual pattern noise is introduced by the A/D conversion, while in the non-linear case gain variations acting through the non-linearity are not compensated by the standard two-point correction algorithm. For this purpose, multi-point calibration can be used to reduce the effect of non-linearity [17].

Mooney et al. mathematically treated and experimentally verified the effect of spatial non-uniformity noise and studied the residual pattern noise on corrected frames, pointing out that neither variations in the detectors' spectral response nor excess low-frequency noise can be fully corrected by means of reference-based calibration techniques [18].

The residual spatial noise after correction has also been treated and measured in [14] for both two-point and one-point calibration, taking into account the properties of the IR source, the sensor constants and the spatial variations in the IR imaging device; the results are derived for a platinum silicide IR sensor.

Such residual errors of non-uniformity correction have been reviewed by Friedenberg et al. [19], who carried out an accurate study on non-linearity, noticing that reference-based techniques correct fixed-pattern noise to a magnitude below the temporal noise, and who introduced two figures of merit for the status of non-uniformity correction: (i) the "correctability" and (ii) the long-term stability constant. The latter indicates the time elapsed (after non-uniformity correction) before the spatial noise increases to values higher than that of the temporal noise. The results highlight that correctability degrades in time, but the process is very slow: depending on the examined detector, the order of the long-term stability constant can vary from minutes to hours.
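As a rough illustration of such a figure of merit, the ratio between the residual spatial (fixed-pattern) noise and the temporal noise of a corrected uniform-scene sequence can be estimated as below. This sketch assumes one common operational definition of that ratio; it is not the exact formulation of [19], and the function name is hypothetical.

```python
import numpy as np

def spatial_to_temporal_noise_ratio(frames):
    """frames: (T, H, W) corrected frames of a uniform scene.

    Spatial (fixed-pattern) noise: std across pixels of the temporal mean.
    Temporal noise: per-pixel std over time, averaged over the array.
    A ratio below 1 means the residual FPN lies below the temporal noise.
    """
    spatial_noise = frames.mean(axis=0).std()
    temporal_noise = frames.std(axis=0).mean()
    return spatial_noise / temporal_noise
```

Tracking this ratio over time after a calibration gives a direct estimate of when re-calibration becomes necessary, i.e. of the long-term stability constant.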

Finally, in the context of FPN characterization and reference-based NUC techniques, it is important to highlight the dramatic effect of fixed-pattern noise on target identification. Driggers et al. investigated this issue by carrying out a target acquisition experiment describing the relative effects of FPN on target identification: U.S. Army soldiers were tasked to identify targets under several FPN and temporal noise conditions, allowing a direct comparison between the effects of fixed-pattern noise and of temporal noise on target identification [21].

3.2 Scene-based NUC methods

In recent years, a new approach to calibration has been proposed that relies on scene-based techniques. Scene-based methods exploit the spatial and temporal information of the data sensed by the imaging system in order to perform non-uniformity correction. Unlike reference-based NUC techniques, scene-based NUC methods do not require any black body in the calibration process. Scene-based NUC methods have the following benefits:

• Reduction of manufacturing complexity. Typically, reference-based techniques are implemented on IR devices by means of shutter mechanisms that have to be cooled down and heated to temperatures around the operating point. This is not necessary for scene-based methods.

• Device management. Unlike reference-based techniques, which perform calibration by interrupting the standard operation of the IR camera in order to compensate for the FPN temporal drift, scene-based methods achieve on-line NUC, i.e. during the acquisition of the frames, as a pre-processing step.

On the other hand, scene-based methods generally reach a less accurate calibration performance compared to standard reference-based methods. Moreover, for several of the analyzed scene-based calibration techniques it is noticeable that the calibration performance depends on the scene variations and on the time elapsed since the start of the NUC algorithm. Scene-based algorithms are generally identified by two main approaches.
