Chapter 1
Introduction
1.1 Synthetic aperture radar imaging
Imaging sensor systems are classified as passive and active. While passive systems exploit radiation naturally emitted or reflected by the Earth's surface, active systems are equipped with a transmitter and receive the signal backscattered from the illuminated surface [1].
Among active imaging sensors, a predominant role is played by radar systems, which operate in the microwave region of the electromagnetic spectrum. These instruments allow day-and-night and all-weather imaging, which constitutes an important prerequisite for the continuous and global monitoring of the Earth's surface. The main limitation of these sensors (generally referred to as real aperture radars) is the poor resolution achievable in the azimuth direction, which is proportional to the ratio between the sensor-to-surface distance and the sensor antenna dimension. To overcome this limitation, Carl Wiley patented the concept of the synthetic aperture in 1965, according to which a very long antenna can be synthesized by moving a small one along the flight path of the radar platform. Early spaceborne missions then demonstrated that synthetic aperture radar (SAR) is able to reliably map the Earth's surface and acquire information about its physical properties. Nowadays, SAR remote sensing is an established technique with many applications to geophysical problems, either on its own or in conjunction with data from other remote sensing instruments. Examples of such applications include land-use mapping, vegetation and biomass measurements, and soil moisture mapping [1, 2].
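As a rough illustration of the azimuth-resolution limit mentioned above (standard back-of-the-envelope relations with assumed values, not figures taken from this chapter): a real-aperture radar of antenna length L at range R has an azimuth resolution of about λR/L, while the synthetic aperture brings it down to the order of L/2, independently of range.

```latex
% Illustrative relations; lambda = 3 cm, R = 800 km, L = 10 m are assumed values.
\delta_{az}^{\mathrm{RAR}} \approx \frac{\lambda R}{L}
  = \frac{0.03\,\mathrm{m} \times 800\,\mathrm{km}}{10\,\mathrm{m}} = 2.4\,\mathrm{km},
\qquad
\delta_{az}^{\mathrm{SAR}} \approx \frac{L}{2} = 5\,\mathrm{m}.
```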
Basically, the large bandwidth of the transmitted signal, typically a long chirp, ensures a short pulse duration at the output of the receiver matched filter, enabling a high range resolution. Moreover, by coherently (i.e., in amplitude and phase) integrating the compressed pulses from several antenna locations, the output of a synthetic antenna with a very narrow beam width is produced, guaranteeing a high azimuth resolution as well. In doing so, the SAR imaging process provides a geometric projection of the 3-D radar reflectivity function γ(x, y, z) into the 2-D cylindrical coordinates (x, r), followed by a 2-D convolution with the system point-spread function (PSF) f(x, r) [3, 4]:
g(x, r) = [ ∫ γ(x, y, z) r dθ · exp(−j 4π r / λ) ] ⊗⊗ f(x, r)        (1.1)
where g(x, r) is the complex SAR image, x is the along-track or azimuth direction, r is the slant range (the distance of the point scatterer to the SAR sensor), z is the vertical height, θ is the elevation (look) angle, and λ is the wavelength. As shown in equation (1.1), due to the projection, information about the spatial structure and the location of the scatterers gets lost. For many applications, this height-dependent distortion adversely affects the interpretation of the imagery. As a consequence, SAR interferometry (InSAR) techniques have been developed to enable the measurement of the third dimension.
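The range-compression step described above (matched filtering of a long linear FM chirp) can be sketched numerically. This is a minimal illustration: all system parameters below are assumed values, not taken from the text or from any specific SAR system.

```python
import numpy as np

# Sketch of range pulse compression with a linear FM (chirp) pulse.
# All parameter values are illustrative assumptions.
fs = 100e6                      # sampling rate [Hz]
T = 10e-6                       # pulse duration [s]
B = 30e6                        # chirp bandwidth [Hz]
n = int(round(T * fs))          # samples per pulse (1000)
t = np.arange(n) / fs
k = B / T                       # chirp rate [Hz/s]
chirp = np.exp(1j * np.pi * k * t**2)

# Received echo: the long chirp delayed by a point target at some range bin.
delay = 400
echo = np.zeros(2048, dtype=complex)
echo[delay:delay + n] = chirp

# Matched filtering (correlation with the conjugate time-reversed replica)
# compresses the long pulse into a narrow peak at the target position.
compressed = np.convolve(echo, np.conj(chirp[::-1]), mode="same")
peak = int(np.argmax(np.abs(compressed)))
```

The compressed response peaks at the target bin (shifted by half the pulse length in "same" mode), with a mainlobe width set by the bandwidth B rather than by the pulse duration T.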
1.2 SAR interferometry
To resolve the height ambiguity intrinsic in a SAR image, it is necessary to introduce some kind of diversity. SAR interferometry (InSAR) introduces this diversity to provide information about the missing dimension, the elevation angle θ. The first application of interferometry to radar dates back to 1969, when Rogers and Ingalls used it first to solve the ambiguities arising in the radar observation of the planet Venus, and then for the measurement of the Moon's topography [2]. The first report of an InSAR system applied to Earth observation was by Graham in 1974 [5]. He equipped a conventional airborne SAR platform with an additional physical antenna displaced in the cross-track plane from the conventional SAR antenna, forming an imaging interferometer. In short, the phase difference between the signals received at the two antennas (or channels, in the interferometry jargon) is related to the geometric path length to the image point, which depends on the topography. The Graham interferometer mixes the two signals and records the amplitude variations induced by the beating patterns of their relative phase. The resulting amplitude fringe variations track the topography contours. However, given the inherent difficulty of inverting amplitude fringes to obtain the scene topography, subsequent InSAR systems were developed to record the complex amplitude and phase information at the two antennas, allowing a direct reconstruction of the relative phase of each image point [2, 3].
After these first seminal experiments, the so-called across-track interferometry (XTI-SAR) is nowadays a well-established and operational technique for the remote sensing of the Earth's surface elevation, making possible the quick and inexpensive production of digital elevation models (DEMs). As already mentioned, a classic XTI-SAR system estimates the surface height from the phase difference ϕ (the interferometric phase) between the two images collected by two sensors slightly separated by a cross-track baseline. An XTI-SAR system thus operates in viewing-angle (spatial) diversity, and it can also be seen as an elevation direction-of-arrival (DOA) estimation technique [4]. The overall relationship between terrain height z and ϕ is
z = H − r cos[ φ − arccos( −ϕλ / (4π b⊥) ) ]        (1.2)
where H is the platform altitude, φ is the baseline tilt angle (the angle between the baseline and the ground range axis), and b⊥ is the baseline length. The measurement of the interferometric phase ϕ is generally corrupted by thermal noise and by the so-called speckle, which can be well modeled as a complex-valued multiplicative stochastic process. To counteract the effects of the resulting phase noise, especially for natural extended targets, it is often advantageous to make use of some kind of coherent averaging over different sub-images of the same scene. The sub-images formed during the SAR processing are usually called looks. For a pixel at fixed range-azimuth coordinates, let g1(n) and g2(n), n = 1, . . . , N, be
the pixel complex amplitudes for N looks. Therefore, the maximum likelihood estimator of the interferometric phase is given by:
ϕ̂ = arg { ∑_{n=1}^{N} g1(n) g2*(n) }        (1.3)
where the symbol * denotes complex conjugation. It is worth noting that the interferometric processing of two SAR images requires specific pre-processing procedures. First of all, each SAR image must be focused preserving the phase information and compensating for undesired platform motion instabilities, especially in the airborne case. Then, the two complex SAR images must be co-registered with sub-pixel accuracy and spatially smoothed.
Subsequently, the interferometric phase can be estimated through (1.3) and unwrapped to remove the 2π ambiguities. Finally, the height can be computed, geocoded, and converted into the desired cartographic reference system. Each of these steps represents a specific technical and research field per se. Classical two-channel XTI-SAR has also been extended to process more than a single baseline in order to mitigate data noise and phase ambiguity problems [3, 4].
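The multilook maximum likelihood estimator of (1.3) can be sketched numerically as follows. The speckle model, noise level, and all parameter values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Minimal sketch of the multilook ML phase estimate of eq. (1.3).
# Look statistics and noise level are illustrative assumptions.
rng = np.random.default_rng(0)

N = 64                # number of looks
phi_true = 0.8        # simulated interferometric phase [rad]

# Simulate N looks: channel 2 is a phase-shifted, noisy copy of channel 1
# (complex circular Gaussian speckle, unit mean power per look).
g1 = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g2 = g1 * np.exp(-1j * phi_true) + noise

# Eq. (1.3): the ML estimate is the phase of the coherent sum of the
# look-wise products g1 * conj(g2).
phi_hat = np.angle(np.sum(g1 * np.conj(g2)))
```

Averaging the complex products before taking the phase, rather than averaging the phases themselves, is what makes the estimator robust to the look-to-look speckle fluctuations.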
More generally, in addition to measuring topography, InSAR is also a powerful technique for measuring changes over both short and long time scales, as well as other changes in the detailed characteristics of the monitored surface. The key idea is to change the kind of diversity between the two complex-valued SAR images [2]. When the diversity parameter is the acquisition time, the technique is called along-track interferometry (ATI-SAR). This technique was proposed for the first time by Goldstein and Zebker in 1987, who demonstrated it by augmenting a conventional airborne SAR system with an additional aperture, separated along the fuselage of the aircraft. If the flight path and imaging geometries of all the SAR observations are identical, any interferometric phase difference is due to changes over time of the SAR system clock, variable propagation delay, or surface motion in the direction of the radar line of sight. However, for short (fraction of a second) time scales, clock drift and propagation delay are negligible, and ATI-SAR can be used to measure small velocities (e.g., ocean currents, moving vehicles). In the ideal case, since the two antennas are arranged along the flight track of a single platform, there is no cross-track separation of the apertures, and therefore no sensitivity to topography.
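As a hedged numerical sketch of the ATI phase-to-velocity conversion: a commonly used relation is ϕ = (4π/λ) v_r Δt, with Δt the effective time lag between the two channels. Both this relation and every parameter value below are assumptions for illustration, not taken from the text.

```python
import math

# Hedged sketch: converting an ATI phase into a line-of-sight (radial)
# velocity, assuming the two-way relation phi = (4 * pi / lam) * v_r * dt.
# All parameter values are illustrative, not from a real system.
lam = 0.031            # wavelength [m] (X-band, assumed)
baseline = 1.0         # along-track antenna separation [m] (assumed)
v_platform = 200.0     # platform speed [m/s] (assumed)
dt = baseline / v_platform   # effective time lag between channels [s]
phi = 0.5                    # measured ATI phase [rad]

v_r = phi * lam / (4 * math.pi * dt)   # radial velocity [m/s]
```

With these assumed numbers the lag is 5 ms, so a phase of 0.5 rad maps to a radial velocity of roughly 0.25 m/s, illustrating the sensitivity of ATI-SAR to slow surface motion.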
ATI-SAR can also be regarded as a particular case of repeat-pass interferometry, which can be used to measure topography and motion at the same time. If the repeated flight paths result in a cross-track separation, and the surface has not changed between observations, then the repeat-track observation pair acts as an interferometer for topography measurement. On the other hand, if there is no cross-track separation, there is no sensitivity to topography and radial motion can be measured directly. Since the temporal separation between observations is typically hours to days, the ability to detect small radial velocities is substantially better than with conventional ATI-SAR [4]. First investigations of repeat-track interferometry for velocity mapping were carried out by Goldstein in 1993 over the Rutford ice stream in Antarctica with ERS-1 data. More commonly, the track of the sensor does not repeat itself exactly, so the time-induced phase comprises both the topographic phase and the phase originated by the surface or radial movement. In this case, over the last two decades, specific procedures have been developed to measure surface displacements by getting rid of the topography. These techniques are referred to as differential SAR interferometry