DVB – S/S2 BASED BISTATIC ISAR IMAGING

3.1. Monostatic ISAR Geometry

The ISAR (Inverse Synthetic Aperture Radar) imaging technique allows obtaining an electromagnetic image of a given target, comparably to the Synthetic Aperture Radar (SAR) technique. The SAR technique, however, creates a synthetic antenna array by exploiting the movement of the platform on which the radar is mounted.

If the radar is fixed somewhere, we can think of exploiting the target's movement instead of the platform's movement (Fig. 3.1). The main aspect that differentiates ISAR from SAR is the absence of target cooperativity; this non-cooperativity of the target complicates the image formation.

Fig. 3.1: (a) SAR Geometry – (b) ISAR Geometry

In general, for a real-aperture radar, the range resolution ΔR depends on the bandwidth of the transmitted signal:

ΔR = c / (2B)    (3.1)

where c = 299792458 m/s is the speed of light and B is the bandwidth of the signal. The azimuth resolution (or cross-range resolution, in terms of distance) depends on the antenna beam aperture:

ΔR_cr = R·α_az    (3.2)

where α_az is the antenna beam aperture angle and R is the target's distance from the radar.
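The two resolution formulas above can be turned into a short numeric helper. This is an illustrative sketch: the 36 MHz bandwidth, 10 km distance, and 1-degree beam are assumed example values, not taken from the text.

```python
import math

def range_resolution(bandwidth_hz, c=299792458.0):
    """Eq. (3.1): range resolution dR = c / (2B)."""
    return c / (2.0 * bandwidth_hz)

def cross_range_resolution(distance_m, beam_aperture_rad):
    """Eq. (3.2): cross-range resolution dR_cr = R * alpha_az."""
    return distance_m * beam_aperture_rad

# Assumed example: 36 MHz bandwidth, target at 10 km seen by a 1-degree beam
dr = range_resolution(36e6)                           # ~4.16 m
dcr = cross_range_resolution(10e3, math.radians(1.0))  # ~174.5 m
```

The contrast between the two numbers (meters in range, hundreds of meters in cross-range) is what motivates the synthetic-aperture principle described in the rest of the section.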

In this situation, a target with dimensions smaller than those of a resolution cell would be seen as a point target. If, instead, the resolution cell becomes smaller than the target's dimensions, we would receive echoes from all the point scatterers that compose the target.

Knowing the transmitted signal s_T(t) and having measured the received signal s_R(t), in order to obtain the target image we need a model relating the two signals.

3.1.1. Received Signal Model

Supposing to have (x, y) reference axes, and supposing to have in this two-dimensional space a fixed point target at position P = (x_0, y_0), the received signal can be written as an attenuated and delayed version of the transmitted signal:

s_R(t) = A·s_T(t − τ) = A·s_T(t − 2R_0/c)    (3.3)

where R_0 is the distance between the radar and the target, c is the speed of light, and A ∈ ℂ is the attenuation factor, which can introduce a phase rotation due to the particular material the target is made of.

The resulting image has only one pixel with intensity A at position P, while all other points are 0. Thus, (3.3) can be rewritten as:

s_R(t) = ∬_{−∞}^{+∞} A·δ(x − x_0)·δ(y − y_0)·s_T(t − 2R(x, y)/c) dx dy    (3.4)

where δ is the Dirac delta, used here thanks to its sampling property.

If we have a set of point scatterers (Fig. 3.2), we must modify (3.3) and hence (3.4).


Fig. 3.2: Set of point scatterers

We consider the signal scattered by the target as the linear combination of the signals coming from all the scatterers that compose it (although, in reality, the response of every point scatterer depends on the presence of the others).

Thus, (3.3) becomes:

s_R(t) = Σ_{i=1}^{N} A_i·s_T(t − τ_i) = Σ_{i=1}^{N} A_i·s_T(t − 2R_i/c)    (3.5)

and so (3.4) becomes:

s_R(t) = ∬_{−∞}^{+∞} ρ(x, y)·s_T(t − 2R(x, y)/c) dx dy    (3.6)

where:

ρ(x, y) = Σ_{i=1}^{N} A_i·δ(x − x_i)·δ(y − y_i)    (3.7)


where N is the total number of scatterers the target is made of, while ρ(x, y) represents the Reflectivity Function of the target (x and y represent the range and cross-range directions, respectively).
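The point-scatterer model of Eqs. (3.5)–(3.7) can be sketched numerically as a superposition of attenuated, delayed replicas of s_T. The carrier frequency and the scatterer amplitudes/positions below are assumed toy values for illustration only.

```python
import numpy as np

def received_signal(t, scatterers, s_t, c=299792458.0):
    """Eq. (3.5): s_R(t) = sum_i A_i * s_T(t - 2 R_i / c),
    with scatterers given as (A_i, R_i) pairs."""
    out = np.zeros_like(t, dtype=complex)
    for A_i, R_i in scatterers:
        out += A_i * s_t(t - 2.0 * R_i / c)
    return out

# Assumed toy example: sinusoidal s_T at 10 GHz, two scatterers 5 m apart in range
f0 = 10e9
s_t = lambda tau: np.exp(2j * np.pi * f0 * tau)
t = np.linspace(0.0, 1e-6, 100)
sr = received_signal(t, [(1.0, 10e3), (0.5, 10005.0)], s_t)
```

The magnitude of `sr` beats between 0.5 and 1.5 as the two echoes interfere, which is exactly the multi-scatterer superposition that the reflectivity function ρ(x, y) summarizes.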

A real target is a three-dimensional object, and its Reflectivity Function depends on its position (x, y, z). Nevertheless, a radar system is able to “see” only the target rotations around an axis orthogonal to the LOS (Line of Sight): the radar needs the movement/rotation of the target to cause a phase variation of the received echo. A rotation around an axis parallel to the LOS does not create a distance variation between the radar and the target, so it does not create a phase variation in the received signal.

The 2D ISAR image is the 3D target Reflectivity Function projected onto a 2D plane called the Image Projection Plane (IPP), defined as the plane perpendicular to the effective target rotation vector Ω_eff(t). In fact, the total rotation vector Ω_tot(t) can be decomposed into two components, parallel and orthogonal to the LOS, respectively:

Ω_tot(t) = Ω_LOS(t) + Ω_eff(t)    (3.8)

where Ω_LOS(t) is the component with axis parallel to the LOS, and Ω_eff(t) is the component that creates the phase variation. Thus, defining ρ(x, y, z) as the three-dimensional reflectivity function of the target, we have:

∭_{−∞}^{+∞} ρ(x, y, z)·s_T(t − 2R(x, y)/c) dx dy dz = ∬_{−∞}^{+∞} ρ(x, y)·s_T(t − 2R(x, y)/c) dx dy    (3.9)

where

ρ(x, y) = ∫_{−∞}^{+∞} ρ(x, y, z) dz    (3.10)

so, eventually, we have to project the three-dimensional reflectivity function onto the Image Projection Plane.

If we impose î_LOS = î_y, with î_x and î_z orthogonal to each other and to î_y (Fig. 3.3), we obtain that


Ω_eff(t) = î_y(t) × [Ω_tot(t) × î_y(t)]    (3.11)

and it belongs to the plane identified by the pair (î_x, î_z).

Fig. 3.3: Geometry for the Ω_eff(t) definition

The problem is that Ω_eff(t) depends on time; thus, although the image plane is always parallel to the LOS, the cross-range direction can change with time as well (3.12) (Fig. 3.4):

î_cr(t) = î_LOS × Ω_eff(t)    (3.12)

Fig. 3.4: Geometry for the î_cr(t) definition


Regarding R(x, y): as usual in surveillance applications, we are interested in target detection at large distances from the radar (large with respect to the target's dimensions), so we can omit the z dependence, because a scatterer along this coordinate has practically the same distance from the radar as the phase center of the target.

Actually, this is a simplified model of the reflectivity function ρ(x, y): it has further dependences (e.g. on the particular radar frequency, on the wave polarization, etc.); the validity of the model can only be demonstrated experimentally.

3.1.2. ISAR image reconstruction

We now focus our attention on the target image reconstruction, that is, on how to extract the reflectivity function ρ(x, y) knowing s_T(t) and s_R(t). We consider the system represented in (Fig. 3.5), where we have introduced a second reference system fixed to the target.

Fig. 3.5: Introduction of the second reference system

To use this type of reference system we need the Straight Iso-Range Approximation. Thus, we introduce a third reference system (v, u), which is not fixed to the target but only shares the origin of the previous one and has the v axis parallel to the LOS (Fig. 3.6).


Fig. 3.6: Third reference system for the Straight Iso – Range Approximation

The triangle RADAR–P–H is a right triangle, so we can write:

R(v, u, t) = √((R_0(t) + v)² + u²)    (3.13)

If u ≪ (R_0(t) + v), we can make the following simplification:

R(v, u, t) ≅ R_0(t) + v    (3.14)

We cannot drop v as well, because we would fall back into the point-target case. Now, starting from the (v, u) coordinates, to obtain the corresponding (x, y) coordinates we need an axes rotation:

𝑣 = 𝑥 cos[𝑑(𝑡)] + 𝑦 sin[𝑑(𝑡)]

𝑢 = −𝑥 sin[𝑑(𝑡)] + 𝑦 cos[𝑑(𝑡)]

(3.15)

thus (3.14) can be rewritten as:

R(v, u, t) ⇒ R(x, y, t) = R_0(t) + x·cos[d(t)] + y·sin[d(t)]    (3.16)


As we can see, the target's motion can be seen as the sum of the translational motion of its phase center and a rotational motion around this point. It is easy to understand that if d(t) does not change, the radar sees the target from the same point of view, and thus the synthetic-array formation principle cannot work.

Resuming expression (3.6), supposing a time dependence of the distance R because of the moving target, and supposing a sinusoidal transmitted signal (3.17)

s_T(t) = a·e^{j2πf_0·t}    (3.17)

we have (calculations are omitted):

s_R(t) = ξ(t)·γ(t)·∬_{−∞}^{+∞} ρ(x, y)·e^{−j2π(x·X(t) + y·Y(t))} dx dy    (3.18)

where

ξ(t) = a·e^{j2πf_0·t}    (3.19)

γ(t) = e^{−j(4πf_0/c)·R_0(t)}    (3.20)

and

X(t) = (2f_0/c)·cos[d(t)] ;  Y(t) = (2f_0/c)·sin[d(t)]    (3.21)

The AWGN (Additive White Gaussian Noise) term n(t) in the s_R(t) definition is irrelevant to these calculations and is omitted.

Observing equation (3.18), we can make the following considerations:

1. The term (3.19), up to an amplitude constant, is the transmitted signal, and can be removed by coherent demodulation.


2. The term (3.20) is a phase modulation term, due to the movement of the phase center of the target, and can be removed using tracking techniques (from now on, we will suppose that we are able to do so).

3. Without the terms (3.19) and (3.20), (3.18) becomes the 2D Fourier Transform of ρ(x, y), evaluated at the spatial frequencies (3.21).

s_R(t) = ∬_{−∞}^{+∞} ρ(x, y)·e^{−j2π(x·X(t) + y·Y(t))} dx dy    (3.22)

Sampling equation (3.22) at t̄, s_R(t̄) represents the 2D-FT of ρ(x, y) evaluated at the spatial frequencies {X(t̄), Y(t̄)}.

However, looking at (3.21), we notice that:

X²(t) + Y²(t) = 4/λ_0²    (3.23)

thus the 2D-FT of ρ(x, y) is measured only along the circumference of radius 2/λ_0. In particular, because of the direct dependence on d(t), the known part of the circumference is the arc between d_max and d_min (Fig. 3.7).

Fig. 3.7: Fourier Domain

where T_obs is the target observation time.

If we rewrite (3.22) as follows:

s_R(t) = ∬_{−∞}^{+∞} ρ(x, y)·e^{−j2π(x·X(t) + y·Y(t))} dx dy = F(X, Y)·W(X, Y)    (3.24)

where

W(X, Y) = { 1 over the arc where F is known ; 0 otherwise }    (3.25)

we can write:

2D-IFT{F(X, Y)·W(X, Y)} = ρ(x, y) ⊗ w(x, y) = ρ̂(x, y)    (3.26)

ρ̂(x, y) is the estimated version of ρ(x, y), and w(x, y) is the impulse response of the imaging system. Imposing Δ = d_max − d_min, and supposing it sufficiently small, we can approximate the arc with its chord, and the Fourier domain where this function is known is approximated by a line:

ΔX = (4/λ_0)·sin(Δ/2) ;  ΔY = 0    (3.27)

As a direct consequence, along the x-axis w(x, y) has a main lobe of width 1/ΔX. Thus, the cross-range resolution is:

ΔR_cr = 1/ΔX = λ_0 / (4·sin(Δ/2)) ≅ λ_0 / (2Δ)    (3.27)

where the approximation is valid if T_obs is small enough.
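Equation (3.27) and its small-angle approximation can be checked numerically. The 2.5 cm wavelength (Ku band) and the 3-degree aspect-angle change below are assumed example values:

```python
import math

def cross_range_resolution(wavelength_m, delta_rad):
    """Eq. (3.27): exact lambda0 / (4 sin(Delta/2)) vs. the small-angle
    approximation lambda0 / (2 Delta)."""
    exact = wavelength_m / (4.0 * math.sin(delta_rad / 2.0))
    approx = wavelength_m / (2.0 * delta_rad)
    return exact, approx

# Assumed example: lambda0 = 2.5 cm, aspect-angle change Delta = 3 degrees
exact, approx = cross_range_resolution(0.025, math.radians(3.0))  # both ~0.24 m
```

For such small Δ the two expressions agree to a fraction of a millimeter, which is why the λ_0/(2Δ) form is normally used.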

Regarding the range resolution, improving it requires the transmission of a wideband signal. Acting on the received-signal expression, we introduce a new term:

s_R(t) = ξ(t)·γ(t)·∬_{−∞}^{+∞} ρ(x, y)·e^{−j2π(x·X(t) + y·Y(t))} dx dy · rect[t/T_obs]·rect[(f − f_0)/B] =

= ξ(t)·γ(t)·∬_{−∞}^{+∞} ρ(x, y)·e^{−j2π(x·X(t) + y·Y(t))} dx dy · W(t, f)    (3.28)

Applying the previous calculations to (3.28), we arrive at (calculations are omitted):

s_R(t) = ∬_{−∞}^{+∞} ρ(x, y)·e^{−j2π(x·X(t) + y·Y(t))} dx dy · W(X, Y)    (3.29)

The first term again represents the 2D-FT of ρ(x, y), while W(X, Y) is the spatial-frequency domain where this transform is known; its inverse transform is of course the impulse response of the imaging system. This domain looks like a circular crown sector (Fig. 3.8).

Fig. 3.8: circular crown sector domain

If we consider that the total variation of Δ can be small, that sector can be approximated as a rectangular domain whose two sides are 2B/c along the Y-axis and 2f_0Δ/c along the X-axis.

Considering d(t) belonging to a small interval around π/2, we can write:

X = (2f/c)·cos[d(t)] ≅ (2f/c)·d(t) ;  Y = (2f/c)·sin[d(t)] ≅ 2f/c    (3.30)


Along X, due to this approximation, we can consider f ≃ f_0. Thus, we rewrite the W function as follows:

W(X, Y) ≈ W̄(X, Y) = rect[X / (2f_0Δ/c)] · rect[(Y − 2f_0/c) / (2B/c)]    (3.31)

and its inverse Fourier transform is:

w̄(x, y) = (2f_0Δ/c)·sinc((2f_0Δ/c)·x) · (2B/c)·sinc((2B/c)·y)·e^{j(4πf_0/c)·y}    (3.32)

so, considering the main lobes (in both directions) of this two-dimensional function, we obtain the range and cross-range resolutions:

ΔR = c / (2B)  and  ΔR_cr = c / (2f_0Δ)    (3.33)

The range resolution is the same as that of a common radar using the pulse compression technique, while the cross-range resolution depends on Δ, that is, the change of the target aspect angle with respect to the radar, which is unknown. If we consider a nearly constant rotation velocity for the target, remembering that Ω_eff(t) is the effective rotation vector of the target, we have:

Ω_eff(t) ≈ Ω_eff    (3.34)

and, as a consequence, d(t) = |Ω_eff|·t. We can rewrite (3.30) as:

X ≅ (2f/c)·|Ω_eff|·t ;  Y ≅ 2f/c    (3.35)

Substituting (3.35) into (3.29), we have:

s_R(t) = ∬_{−∞}^{+∞} ρ(x, y)·e^{−j2π(νt + τf)} dx dy · W(f, t)    (3.36)

where

ν = (2f_0·|Ω_eff|/c)·x ;  τ = (2/c)·y    (3.37)

The variable 𝜏 represents the round trip delay that is directly proportional to the target distance from the radar; 𝜈 is physically a Doppler frequency.

Considering the geometry in (Fig. 3.9), if we know the distance L of the target from the radar, and |Ω_eff|, we obtain:

v_r = v·cos(α) = |Ω_eff|·L·cos(α) = |Ω_eff|·x    (3.38)

and substituting it into (3.37) we have:

ν = (2f_0·|Ω_eff|/c)·x = (2f_0/c)·v_r  [Hz]    (3.39)

Fig. 3.9: Reference scenario
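Equation (3.39) maps cross-range position to Doppler frequency. A minimal numeric sketch follows; the 11 GHz carrier and the 2 m cross-range position are assumed values, while the 6 deg/s rate is borrowed from the turntable described in Section 3.2.5:

```python
import math

def scatterer_doppler(f0_hz, omega_eff_rad_s, x_m, c=299792458.0):
    """Eq. (3.39): nu = 2 f0 |Omega_eff| x / c, the Doppler frequency of a
    scatterer at cross-range x for effective rotation rate |Omega_eff|."""
    return 2.0 * f0_hz * omega_eff_rad_s * x_m / c

# Assumed example: f0 = 11 GHz, |Omega_eff| = 6 deg/s, scatterer at x = 2 m
nu = scatterer_doppler(11e9, math.radians(6.0), 2.0)  # ~15.4 Hz
```

A Doppler spread of a few tens of hertz across the target is what the cross-range FFT has to resolve within the observation time.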


Thus, in the end, we can see the target's electromagnetic image, but we cannot determine its dimensions: it is a scaling problem. Along the range, actually, we do not have this problem, because τ is directly proportional to y; but ν is not bound to x in the same way, because Ω_eff is unknown. What we do is represent the image in a particular domain called the Range–Doppler Domain, where y is quoted in meters and x is quoted in Hz.

In real life, the system is sampled: along the fast time by the ADC, along the slow time by the pulse repetition. This sampling, after the Fourier Transform, creates repetitions of the resulting ISAR image: this problem is known as aliasing. We must pay attention to it and introduce a non-ambiguity spatial window, whose dimensions depend on T_R (PRI, Pulse Repetition Interval) and Δf (frequency step) in this way (3.35):

δX = (2f_0·Ω_eff/c)·T_R ;  δY = (2/c)·Δf  ⇒  Δx = 1/δX = c/(2f_0·Ω_eff·T_R) ;  Δy = 1/δY = c/(2Δf)    (3.40)

Thus, if this window is smaller than the target dimensions, we will see the target repeat itself in the ISAR image. Of course, Ω_eff is still unknown, and in general we cannot have information about the cross-range ambiguity.
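The non-ambiguity window of Eq. (3.40) can be computed directly; all the parameter values below (carrier, rotation rate, PRI, frequency step) are assumed examples:

```python
import math

def unambiguous_window(f0_hz, omega_eff_rad_s, T_R_s, delta_f_hz, c=299792458.0):
    """Eq. (3.40): non-ambiguity window sides
    Dx = c / (2 f0 Omega_eff T_R) and Dy = c / (2 Df)."""
    dx = c / (2.0 * f0_hz * omega_eff_rad_s * T_R_s)
    dy = c / (2.0 * delta_f_hz)
    return dx, dy

# Assumed example: f0 = 11 GHz, 1 deg/s rotation, PRI = 1 ms, 1 MHz frequency step
dx, dy = unambiguous_window(11e9, math.radians(1.0), 1e-3, 1e6)  # ~781 m x ~150 m
```

With these numbers the window is far larger than any realistic target, so no aliasing replicas would appear; a faster rotation or a longer PRI shrinks Δx proportionally.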

3.2. Bistatic ISAR Geometry

A bistatic radar can be defined as a system where the transmitter and the receiver are located in different places, at a certain distance from each other (Fig. 3.10). The ISAR technique applied to a bistatic radar can create high-resolution images of a target if and only if both the transmitter and the receiver see the target at the same time. A bistatic radar configuration offers the possibility to acquire more information about the target, and to avoid blind velocities and small-RCS directions.

In the case of a passive radar, we do not know a priori the signal emitted by the transmitter, so at the receiver we need a reference channel, in addition to the surveillance channel, that can provide a “clean” version of the transmitted signal.


Fig. 3.10: Bistatic Geometry (in the case of a passive solution)

The bistatic angle β is formed by the two unit vectors of the transmitter and the receiver, called î_TX and î_RX, respectively.

3.2.1. Received Signal Model

As we did in Section 3.1.1, we are interested in deriving the received signal model. Considering the time–frequency domain, we have:

s_R(f, t) = W(f, t)·∫_V ρ(x)·e^{−j(4πf/c)·[R_BI(t) + K(t)·x·î_BI(t)]} dx    (3.41)

where

K(t) = |(î_TX(t) + î_RX(t))/2| = cos( cos⁻¹(î_TX(t)·î_RX(t)) / 2 ) = cos(β(t)/2)    (3.42)

and

R_BI(t) = (R_TX(t) + R_RX(t))/2 ;  î_BI(t) = (î_TX(t) + î_RX(t)) / |î_TX(t) + î_RX(t)|    (3.43)


In (Fig. 3.10), R_TX(t) and R_RX(t) are the distances between the target phase center O and the transmitter and receiver, respectively; x is the vector that localizes a generic target scatterer.

In general, with respect to the monostatic case, a bistatic geometry introduces distortions in the ISAR image, due in particular to the K(t) term.
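The bistatic factor of Eq. (3.42) can be verified numerically against cos(β/2); the 60-degree bistatic angle below is an assumed example:

```python
import numpy as np

def bistatic_factor(i_tx, i_rx):
    """Eq. (3.42): K = |(i_TX + i_RX) / 2| = cos(beta / 2) for unit vectors."""
    i_tx = np.asarray(i_tx, dtype=float)
    i_rx = np.asarray(i_rx, dtype=float)
    return float(np.linalg.norm((i_tx + i_rx) / 2.0))

# Assumed example: bistatic angle beta = 60 degrees
beta = np.radians(60.0)
i_tx = np.array([1.0, 0.0])
i_rx = np.array([np.cos(beta), np.sin(beta)])
K = bistatic_factor(i_tx, i_rx)  # cos(30 deg) ~ 0.866
```

K < 1 is the geometric loss of the bistatic configuration: both resolutions are scaled by 1/K with respect to the monostatic equivalent, as discussed in Section 3.2.5.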

3.2.2. Impulse Response of a Bistatic ISAR

The K(t) term carries the information about the time variation of the bistatic geometry. What actually affects the impulse response of the ISAR image is the time variation of the bistatic angle β(t), in other words the angle between the lines transmitter–target and target–receiver (Fig. 3.10).

As usual, we are interested in “seeing” targets far from the radar, so we can consider the following two hypotheses to hold:

1. far-field condition;
2. T_obs small enough.

Assuming these two hypotheses valid, we can consider the effective rotation vector Ω_eff constant during T_obs. In this case, the received signal from a generic target scatterer can be written (after motion compensation) as:

s_R(t) = W(f, t)·∬_{−∞}^{+∞} ρ(x, y)·e^{−j(4πf/c)·K(t)·(x·sin(Ω_eff·t) + y·cos(Ω_eff·t))} dx dy    (3.44)

where ρ(x, y) is the same as in (3.10), because by choosing z parallel to Ω_eff we can consider ρ(x, y) instead of ρ(x, y, z).

Under the hypotheses mentioned above, the bistatic angle will not change much even for large target movements during T_obs, so β(t) can be approximated with its first-order Taylor polynomial:

β(t) = β(0) + β̇(0)·t    (3.45)

where

|t| ≤ T_obs/2  and  β̇ = ∂β/∂t    (3.46)

Thus, we obtain:

K(t) ≈ K(0) + K̇(0)·t = cos(β(0)/2) − (β̇(0)/2)·sin(β(0)/2)·t = K_0 + K_1·t    (3.47)

Thus, if the total angle variation (which depends on T_obs) is small enough, we can rewrite equation (3.44) as follows:

s_R(t) = W(f, t)·∬_{−∞}^{+∞} ρ(x, y)·e^{−j(4πf/c)·[(K(0) + K_1·t)·Ω_eff·t·x + (K(0) + K_1·t)·y]} dx dy    (3.48)

We must pay attention to the motion compensation: we have to compensate the term

(4πf/c)·R_BI(t) = (4πf/c)·[(R_TX(t) + R_RX(t))/2]    (3.49)

This operation can be done considering the equivalent monostatic geometry (Fig. 3.10), because K(t) does not affect it: this means that we consider a monostatic radar at distance R_BI(0) from the target along the bisector of the bistatic angle.

To obtain the ISAR image, we have to apply two Fourier Transforms, one for the range compression and the other for the cross-range image formation (as seen in Section 3.1.2).

3.2.3. Passive Bistatic ISAR Processing

As shown in (Fig. 3.10), in the case of a passive radar the receiver is composed of two antennas: the reference antenna, which points towards the transmitter to acquire the reference signal s_ref(t), and the surveillance antenna, which points towards the target to acquire the surveillance signal s_sur(t). As already stated, a passive radar is intrinsically bistatic, since the transmitter and the receiver are typically not collocated.


As shown in [10], the cross-ambiguity function (the theory concerning the Ambiguity Function will be explained in Section 3.3) between the surveillance signal s_sur(t) and the reference signal s_ref(t) can be seen as a weighted sum of the ambiguity functions calculated within each batch (Fig. 3.11). In particular, when T_R·f_Dmax ≪ 1, where T_R is the batch interval and f_Dmax is the maximum allowed Doppler frequency, the cross-ambiguity function can be written as follows:

ψ(τ, f_D) = Σ_{n=1}^{N_b} e^{−j2πf_D·n·T_R} ∫ s_s(t, n)·s_r*(t − τ, n) dt    (3.50)

where

s_s(t, n) = s_sur(t + n·T_R)·rect( (t − (T_R + τ_max)/2) / (T_R + τ_max) )

s_r(t, n) = s_ref(t + n·T_R)·rect( (t − T_R/2) / T_R )    (3.51)

t ∈ [0, T_obs], T_obs is the observation time (i.e. the integration time), n is the batch number, and τ_max is the maximum allowed delay time, which is directly related to the maximum target distance.

Fig. 3.11: Graphical representation of the surveillance signal s_sur(t) and the reference signal s_ref(t) [10].


The reference signal s_ref(t) is modeled as the sum of N_b contiguous batches of length T_R:

s_ref(t) = Σ_{n=1}^{N_b} s_r(t − n·T_R, n).    (3.52)

Since the target distance is not known a priori, and in order to guarantee the maximum integration gain at the output of each cross-correlation, s_sur(t) is divided into partially overlapped batches within the integration time (Fig. 3.11):

s_sur(t) = Σ_{n=1}^{N_b} s_s(t − n·T_R, n).    (3.53)

Equation (3.50) can then be reformulated as follows:

ψ(τ, f_D) = Σ_{n=1}^{N_b} e^{−j2πf_D·n·T_R}·ψ_cc(τ, n)    (3.54)

where

ψ_cc(τ, n) = ∫ s_s(t, n)·s_r*(t − τ, n) dt    (3.55)

represents the cross-ambiguity function relative to the n-th batch. The cross-correlation in (3.55) implements the matched filter and therefore the pulse compression. It is straightforward to conclude that ψ_cc(τ, n) represents the range profile relative to the n-th batch.
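The batch processing of Eqs. (3.50)–(3.55) can be sketched as follows. This is a simplified illustration, not the full algorithm: contiguous non-overlapping batches are used (the overlap of Eq. (3.51) is omitted), and the Doppler bins are expressed in cycles per batch, i.e. as f_D·T_R.

```python
import numpy as np

def batch_cross_ambiguity(s_sur, s_ref, n_batch, max_lag, doppler_bins):
    """Sketch of Eqs. (3.50)-(3.55): split the signals into n_batch batches,
    cross-correlate each batch (matched filter -> range profiles psi_cc),
    then combine batches with Doppler phases exp(-j 2 pi f_D n T_R)."""
    L = len(s_ref) // n_batch                      # batch length in samples (~T_R)
    profiles = np.zeros((n_batch, max_lag), dtype=complex)
    for n in range(n_batch):
        ss = s_sur[n * L:(n + 1) * L]
        sr = s_ref[n * L:(n + 1) * L]
        for lag in range(max_lag):                 # psi_cc(tau, n), Eq. (3.55)
            profiles[n, lag] = np.sum(ss[lag:] * np.conj(sr[:L - lag]))
    n_idx = np.arange(n_batch)                     # Doppler DFT over n, Eq. (3.54)
    caf = np.array([
        np.sum(np.exp(-2j * np.pi * fd * n_idx)[:, None] * profiles, axis=0)
        for fd in doppler_bins                     # fd given as f_D * T_R
    ])
    return caf                                     # shape (len(doppler_bins), max_lag)
```

A zero-Doppler target delayed by a few samples produces a peak of the zero-Doppler cut at the corresponding lag, i.e. the range profile of each batch integrated over the batches.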

3.2.4. Passive ISAR Performance Prediction

ISAR system performance is usually evaluated in terms of the spatial resolutions, which can be calculated through the analysis of the system PSF (Section 3.3).


The reference signal s_ref(t) can be modeled as a random signal; for this reason, only the statistical averages of the spatial resolutions can be computed. To do that, we need to derive the statistical average of the power spectral density of s_r(t, n) [10].

As shown in (3.48), the support band W(f, t) is proportional to the power spectral density (PSD) of s_r(t, n) and, if the transmitted signal were known, W(f, t) would be:

W(f, t) = |S_r(f, n)|²·ℛ(n)    (3.56)

where ℛ(n) = rect(n) = u(n) − u(n − N_b) identifies the n-th batch, and u(n) is the discrete unit step function.

By interpreting s_r(t, n) as a realization of a stochastic process, the PSD of this process can be estimated by means of (3.57), considering random realizations of finite length T_R:

P̂_Sr(f) = E[|S_r(f, n)|²] / T_R.    (3.57)

Equation (3.57) is obviously an approximation of P_Sr(f) = lim_{T_R→∞} E[|S_r(f, n)|²]/T_R. However, P̂_Sr(f) is a consistent estimator of P_Sr(f).
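The batch-averaged estimator of Eq. (3.57) is essentially a Bartlett-style periodogram average. A minimal sketch follows; the white-noise input and the batch sizes are assumed purely for illustration:

```python
import numpy as np

def estimate_psd(s_r_batches, T_R):
    """Eq. (3.57): estimate the PSD of the reference-signal process by
    averaging the per-batch periodograms |S_r(f, n)|^2 over the batch
    realizations, then normalizing by T_R."""
    S = np.fft.fft(s_r_batches, axis=1)            # S_r(f, n), one row per batch
    return np.mean(np.abs(S) ** 2, axis=0) / T_R   # E[|S_r|^2] / T_R

# Assumed example: 200 white-noise batches of 256 samples give a flat estimate
rng = np.random.default_rng(1)
batches = rng.standard_normal((200, 256))
psd = estimate_psd(batches, T_R=1.0)
```

Averaging over many batches reduces the variance of each frequency bin, which is the sense in which the estimator is consistent.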

Then, the signal support band can be written as a function of the PSD P̂_Sr(f), instead of |S_r(f, n)|², as follows:

W̄(f, t) = P̂_Sr(f)·ℛ(n).    (3.58)

By inverse Fourier transforming (3.58), the statistical average of the PSF can be derived:

w̄(τ, f_D) = χ̂_Sr(τ)·r(f_D)    (3.59)

where

r(f_D) = e^{jπ(N_b−2)·f_D·T_R} · sin(N_b·T_R·π·f_D) / sin(π·f_D·T_R).    (3.60)

Equation (3.60) is a direct result of applying a two-dimensional Fourier transform to a function that is the product of separate functions of two independent variables.

It should be pointed out that the shape of the PSF along the Doppler coordinate is independent of the reference signal content; therefore, the Doppler resolution is determined only by the observation time T_obs = N_b·T_R. Instead, the characteristics of χ̂_Sr(τ) will affect the PSF along the time-delay coordinate, generating side lobes, grating lobes, and artifacts that will have to be dealt with.
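The Dirichlet-kernel shape of r(f_D) in Eq. (3.60) makes the Doppler-resolution statement easy to verify numerically: the main lobe peaks at N_b for f_D → 0 and has its first null at f_D = 1/(N_b·T_R). The N_b and T_R values below are assumed for illustration:

```python
import numpy as np

def doppler_psf(f_D, N_b, T_R):
    """Eq. (3.60): r(f_D) = exp(j pi (N_b - 2) f_D T_R)
    * sin(N_b T_R pi f_D) / sin(pi f_D T_R)."""
    num = np.sin(N_b * T_R * np.pi * f_D)
    den = np.sin(np.pi * f_D * T_R)
    return np.exp(1j * np.pi * (N_b - 2) * f_D * T_R) * num / den

# Assumed example: N_b = 100 batches of T_R = 1 ms -> T_obs = 0.1 s
N_b, T_R = 100, 1e-3
peak = abs(doppler_psf(1e-9, N_b, T_R))            # ~N_b near f_D = 0
null = abs(doppler_psf(1.0 / (N_b * T_R), N_b, T_R))  # first null at 1/(N_b T_R)
```

The first null sits at 10 Hz here, i.e. a Doppler resolution of 1/T_obs, independent of the reference-signal content, as stated above.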

3.2.5. Turntable Geometry

Our case studies refer to a turntable geometry: this is the simplest geometry through which we can obtain the best ISAR images, because the aspect angle (i.e. the angle formed between the longitudinal axis of a target and the axis of the radar beam) has the largest variation. The turntable used, property of the Fraunhofer FHR Institute, is shown in (Fig. 3.12). It has a rotation velocity of 6 degrees/s, clockwise if seen from above.

Fig. 3.12: Turntable (courtesy of Fraunhofer FHR)

This geometry allows us to simplify the theory described in (3.2.2), and it is used during the first tests of any ISAR radar. Since the target rotates around itself, we do not need any motion compensation (Fig. 3.13).


Fig. 3.13: Geometry during case studies

In this situation, the bistatic angle being constant, we can rewrite (3.45) as:

β(t) = β(0) + β̇(0)·t = β(0) = β    (3.61)

and (3.47) as:

K(t) ≈ K(0) + K̇(0)·t = cos(β/2) = K_0    (3.62)

Thus, the expression of the received signal given in (3.48) can be simplified as:

s_R(t) = W(f, t)·∬_{−∞}^{+∞} ρ(x, y)·e^{−j(4πf/c)·[K_0·Ω_eff·t·x + K_0·y]} dx dy    (3.63)

In a bistatic geometry, the main difference with respect to the monostatic one is that, in general, the unit vectors of the range and cross-range directions are not orthogonal to each other; so the bistatic angle variation has a big impact on the ISAR images. In this case, where the bistatic angle is constant (K_1 = 0), we obtain a cross-range unit vector that is orthogonal to î_BI.

Thus, in the end, the ISAR image that we obtain with a processing equivalent to the one explained in Section 3.1.2 is equal to the image that we would obtain using the equivalent monostatic system, considering the scale factor 1/K_0 in both the range and cross-range resolutions.


For more detailed explanations and general cases, see [9].

3.3. DVB – S/S2 Ambiguity Function

In this chapter, the theory behind the radar ambiguity function is explained, together with its features and its utility for understanding the performance of a given signal in surveillance applications. After that, some ambiguity functions of DVB-S2 signals are presented, with particular attention to different configurations that will be useful in the next sections.

3.3.1. Radar Ambiguity Function

The Radar Ambiguity Function is a basic mathematical tool used for characterizing radar performance in terms of target resolution and clutter rejection, to gain an understanding of how a signal processor responds, or reacts, to a given returned signal [9]. The ambiguity function of a signal s_t(t) is a two-dimensional function of the Doppler frequency shift f_D and time delay τ, defined as:

|ψ(τ, f_D)|² = |∫_{−∞}^{+∞} s_t(t)·s_t*(t + τ)·e^{j2πf_D·t} dt|²    (3.64)

where the asterisk refers to the complex conjugate. The square magnitude in the notation indicates that we are characterizing the power of the signal processor output.

In a strict sense, when one uses the phrase “ambiguity function” there is an underlying assumption that the signal processor is matched to the transmitted waveform. If the signal processor is not matched to the transmitted waveform, the proper terminology in this context would be “cross-ambiguity function”.

The peak value of the ambiguity function is always at the origin:

|ψ(τ, f_D)|² ≤ |ψ(0, 0)|²    (3.65)

The peak |ψ(0, 0)|² means that it is impossible to resolve two nearby targets if their differences in time delay and Doppler frequency shift are both zero. An ideal ambiguity function is a thumbtack-type function, which has a peak at τ = 0, f_D = 0 and is near zero elsewhere. With a thumbtack-type ambiguity function, two nearby targets can be perfectly resolved if their differences in time delay and Doppler frequency shift are not zero. Of course, if τ = 0 and f_D = 0, the ambiguity function has an infinite peak that makes the two targets ambiguous.

Other properties of the ambiguity function include the following:

• The ambiguity function of a time-scaled signal s_t(αt) is

  s_t(t) → s_t(αt)  ⇒  |ψ(τ, f_D)|² → (1/|α|²)·|ψ(ατ, f_D/α)|²    (3.66)

• The ambiguity function of a time-shifted signal s_t(t − Δt) is

  s_t(t) → s_t(t − Δt)  ⇒  |ψ(τ, f_D)|² → |ψ(τ, f_D)|²·|exp{−j2πf_D·Δt}|²    (3.67)

• The ambiguity function of a frequency-modulated signal s_t(t)·exp{j2πft} is

  s_t(t) → s_t(t)·exp{j2πft}  ⇒  |ψ(τ, f_D)|² → |ψ(τ, f_D)|²·|exp{−j2πfτ}|²    (3.68)

If we set the Doppler shift to zero, the ambiguity function becomes the squared auto-correlation function of the signal s_t(t):

|ψ(τ, 0)|² = |∫_{−∞}^{+∞} s_t(t)·s_t*(t + τ) dt|²    (3.69)

As we will see, the ambiguity function provides a wealth of information about radar waveforms and how they interact with the environment and the radar signal processor.
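Equation (3.64) can be evaluated directly on sampled data. A minimal sketch, with assumptions stated up front: circular shifts are used for simplicity (valid here because the pulse is zero-padded), and the rectangular pulse, sampling rate, and lag/Doppler grids are illustrative values.

```python
import numpy as np

def ambiguity(s, fs, lags, dopplers):
    """Eq. (3.64), discretized: |psi(tau, f_D)|^2 with tau = lag / fs."""
    t = np.arange(len(s)) / fs
    out = np.zeros((len(dopplers), len(lags)))
    for i, fd in enumerate(dopplers):
        ph = np.exp(2j * np.pi * fd * t)
        for k, lag in enumerate(lags):
            shifted = np.roll(np.conj(s), -lag)     # s*(t + tau), circular shift
            out[i, k] = np.abs(np.sum(s * shifted * ph) / fs) ** 2
    return out

# Assumed example: 0.1 s rectangular pulse sampled at 1 kHz, zero-Doppler cut
s = np.zeros(300, dtype=complex)
s[100:200] = 1.0
lags = list(range(-50, 51))
af = ambiguity(s, 1e3, lags, [0.0])
```

The zero-Doppler cut of a rectangular pulse is the expected triangular autocorrelation, Eq. (3.69), with its peak at zero lag.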

3.3.2. One Transponder

In Fig. 3.14 (a) and (b) we can observe the DVB-S2 ambiguity function for a real signal recorded from one transponder (integration time T_int = 16 ms):


Fig. 3.14 – (a): DVB-S2 ambiguity function plot

Fig. 3.14 – (b): DVB-S2 ambiguity function plot


Considering the HPBW, a range resolution of 5.332 m is obtained (3.1). The Doppler resolution is 61.9 Hz (it is approximately the inverse of the transmission duration, in our case T_int = 16 ms, so 1/T_int ≅ 61.9 Hz). Regarding the SLL, we have −18.6 dB for the range profile (Fig. 3.15 (a)) and −13.28 dB for the Doppler profile (Fig. 3.15 (b)).

Fig. 3.15 – (a): Range profile

Fig. 3.15 – (b): Doppler profile

As expected from the pulse compression theory, the range resolution is approximately

ΔR = c / (2B)    (3.70)

where c is the speed of light in [m/s] and B is the bandwidth of the signal (Fig. 3.16).

Fig. 3.16: Spectrum measured by recording the signal from one DVB-S2 transponder
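A quick consistency check of Eq. (3.70) against the measured HPBWs reported in this section and the following ones (5.332 m, 2.4 m, 1.2 m):

```python
def implied_bandwidth(range_res_m, c=299792458.0):
    """Invert Eq. (3.70): B = c / (2 dR), the bandwidth implied
    by a measured range resolution."""
    return c / (2.0 * range_res_m)

# HPBW values taken from the text: 5.332 m (one transponder), 2.4 m (two),
# 1.2 m (four); the last is consistent with the quoted ~130 MHz total band.
for hpbw in (5.332, 2.4, 1.2):
    print(f"HPBW = {hpbw:5.3f} m -> B ~ {implied_bandwidth(hpbw) / 1e6:.1f} MHz")
```

The implied bandwidths (roughly 28, 62, and 125 MHz) track the single-, two-, and four-transponder configurations of Sections 3.3.2–3.3.4.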

3.3.3. Two Transponders

In Fig. 3.17 (a) and (b) we can observe the ambiguity function of the signal recorded from two different but neighboring transponders (Fig. 3.19).


Fig. 3.17 – (a): Ambiguity Function plot (two transponders)

Fig. 3.17 – (b): Ambiguity Function plot (two transponders)

Along the range axis we obtained an HPBW of 2.4 m (better than the previous case, of course, due to the increased bandwidth) and an SLL of −11.1 dB (worse than the previous case) (Fig. 3.18). Along the Doppler axis we naturally have the same performance as in the one-transponder case, because we did not change the duration of the recording.

Fig. 3.18: Range profile

Fig. 3.19: Spectrum of the received signal (two transponders)

As expected, the increased bandwidth (Fig. 3.19) allows a better range resolution (a narrower main lobe), but higher side lobes, due to the inhomogeneity of the two transponders' powers and to the gap between the two bands.

3.3.4. Four Transponders

In Fig. 3.20 (a) and (b) we can observe the ambiguity function of the signal recorded from four different transponders (Fig. 3.22), for a total bandwidth of 130 MHz.

Fig. 3.20 – (a): Ambiguity Function plot


Fig. 3.20 – (b): Ambiguity Function plot

Along the range axis we obtained an HPBW of 1.2 m (better than the previous case, of course, due to the increased bandwidth) and an SLL of −9.9 dB (worse than the previous case) (Fig. 3.21). Along the Doppler axis we naturally have the same performance as in the one-transponder case, because we did not change the duration of the recording.

Fig. 3.21: Range profile


Fig. 3.22: Spectrum of the received signal (four transponders)

As expected, the increased bandwidth (Fig. 3.22) allows a better range resolution (a narrower main lobe), but higher side lobes, due to the inhomogeneity of the four transponders' powers and to the gaps between the bands.

3.3.5. Four Transponders with a gap between the two bands

In Fig. 3.23 (a) and (b) we can observe the ambiguity function of the signal recorded from four different transponders, but with a gap of 7 MHz between the two bands, for a total bandwidth of 137 MHz.

This case is aimed at the eventual application of Compressive Sensing (CS) algorithms [15], [16]. These algorithms, by solving a sparsity-driven optimization problem, can significantly improve the image quality (in terms of resolution and grating lobes) degraded by the band gap shown in Fig. 3.25.
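As a hypothetical illustration of this idea (the scene, the gap layout and the regularization weight are invented for the example; the actual algorithms are those described in [15], [16]), the sketch below recovers a sparse range profile from gapped frequency samples with ISTA, a basic sparsity-driven solver:

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """ISTA for the complex LASSO: min_x ||A x - y||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - A.conj().T @ (A @ x - y) / L        # gradient step on the data term
        mag = np.maximum(np.abs(g) - lam / L, 0.0)  # complex soft-thresholding
        x = mag * np.exp(1j * np.angle(g))
    return x

N = 128                                             # range cells
x_true = np.zeros(N, dtype=complex)
x_true[[20, 60, 61, 100]] = [1.0, 0.8, 0.5, 0.6]    # sparse point scatterers (invented)

F = np.fft.fft(np.eye(N), norm="ortho")             # unitary DFT matrix
keep = np.r_[0:50, 78:128]                          # frequency samples with a 28-bin gap
A = F[keep, :]                                      # gapped measurement operator
y = A @ x_true

x_hat = ista(A, y, lam=0.02)                        # sparse recovery across the gap
```

Despite the missing band, the sparsity prior suppresses the grating lobes that a plain IFFT of the gapped spectrum would exhibit.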


Fig. 3.23 – (a): Ambiguity Function plot

Fig. 3.23 – (b): Ambiguity Function plot

We have obtained along the range axis an HPBW of 1.1 m (better than the previous case, of course, due to the increased bandwidth) and a SLL of −8.55 dB (worse than the previous case) (Fig. 3.24).


Along the Doppler axis we naturally have the same performance as in the one-transponder case, because we did not change the duration of the recording.

Fig. 3.24: Range Profile

Fig. 3.25: Spectrum of the received signal (four transponders with 7 MHz of gap)


3.3.6. Effect of deterministic components in the DVB – S/S2 signal

One of the most widely used waveforms in passive radar applications is the DVB – T signal. The DVB – T stations have a high radiated power that, together with the fact that the distance between them and the receivers is at most around 100 km (thus a relatively vast coverage area), allows reaching a high SNR at the receiving antenna. Moreover, the wide bandwidth provides about 20 meters of range resolution.

The transmitted signal has two sets of components: random and deterministic. The random components are the result of the compression algorithm, channel coding, interleaving and OFDM modulation. In particular, the MPEG-2 compression algorithm potentially removes the similarity between successive images, creating a noise-like transmission. The deterministic components arise from the injection of the guard interval, pilot and TPS (Transmission Parameters Signaling) carriers into the DVB – T signal.

Fig. 3.26 shows the ambiguity function of the DVB – T signal containing the complete set of deterministic components: guard interval, pilot and TPS carriers. As shown, the unwanted peaks outside the main lobe area are created by the deterministic components, which introduce some regularity into the random OFDM signal.

Fig. 3.26: DVB – T Ambiguity Function (Range Profile) [14].

These peaks, together with the random side lobe level and noise, can mask the signal reflected from the targets and/or introduce false alarms.


As the cause of the unwanted peaks in the AF is known, we can modify the signal in the reference channel in order to create a flat side-lobe area around the main lobe. In particular, what is actually done is to reconstruct the signal acquired in the reference channel, substituting the known parts of the DVB – T signal with zeros or noise-like signals, and equalizing the power of the pilot carriers. Naturally, this processing creates a mismatch between the surveillance channel signal and the reference channel signal, introducing a certain SNR loss, but allowing the cancellation of the unwanted peaks. For more details see [14].
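A minimal sketch of the substitution step (the signal here is synthetic and `known_idx` is a hypothetical set of deterministic-sample positions; the full procedure, including pilot-carrier equalization, is in [14]):

```python
import numpy as np

def sanitize_reference(ref, known_idx, seed=None):
    """Replace known/deterministic samples of the reference signal with
    power-matched complex noise, flattening the ambiguity-function floor."""
    rng = np.random.default_rng(seed)
    out = ref.copy()
    p = np.mean(np.abs(ref) ** 2)          # average signal power to match
    n = len(known_idx)
    noise = np.sqrt(p / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    out[known_idx] = noise
    return out

rng = np.random.default_rng(1)
ref = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)  # fake reference
known_idx = np.arange(90)                  # e.g. header-like deterministic positions
clean = sanitize_reference(ref, known_idx, seed=2)
```

Only the reference channel is modified, which is why the resulting mismatch costs some SNR in the cross-correlation while removing the deterministic peaks.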

The DVB – S2 signal has similar characteristics, but it does not use OFDM modulation. To allow frame synchronization, the PLHEADER (which has a π/2 – BPSK modulation) is formed by 90 symbols, divided as follows:

 26 symbols (SOF field) come from the modulation of the hexadecimal number 18D2E82, so they are completely known.

 64 symbols (PLS code) come from the coding of 7 bits that specify the payload characteristics, the modulation and the presence/absence of the pilot. For more details about their value and coding see paragraph 2.3.1.

The SOF field is always the same for every PLFRAME; the PLS code can change during the transmission, but since radar applications do not use long periods of target observation, it can be considered fixed and known as well.

The standard provides for the optional introduction of pilot symbols, i.e., 36 fixed and known symbols every 16 slots of 90 symbols of the PLFRAME (header excluded), to help the receiver during the synchronization process, especially in bad weather conditions and with high levels of phase noise. Since the pilot symbols are introduced before the scrambling process for energy dispersion [6], they can be ignored and considered as part of the information.

Considering this situation, we have checked the repetition periods of the header (with and without pilot symbols) for QPSK and 8PSK respectively. In the case of a FECFRAME of 64800 bits (mandatory for broadcasting applications [6], [7]) we have a total number of symbols for each PLFRAME of:

 QPSK:

o without pilot: 32490 symbols;

o with pilot: 33282 symbols;


 8PSK:

o without pilot: 21690 symbols;

o with pilot: 22230 symbols;

and considering the worst condition of R_S = 29.7 Mbaud (the higher the symbol rate, the smaller the repetition period), we have:

 QPSK:

o PLHEADER with and without pilot: 1.1 ms ⇒ ≈ 336 km;

 8PSK:

o PLHEADER with and without pilot: 0.7 ms ⇒ ≈ 221 km;
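These figures follow directly from the symbol counts above; a quick numerical check:

```python
C = 299_792_458.0                       # speed of light [m/s]
R_S = 29.7e6                            # symbol rate [baud], worst case

def repetition_range_km(n_symbols, symbol_rate=R_S):
    """One-way distance travelled by the signal in one PLFRAME period,
    i.e. the range ambiguity introduced by the repeating PLHEADER."""
    return C * n_symbols / symbol_rate / 1e3

qpsk = repetition_range_km(33282)       # QPSK with pilots: ~336 km
psk8 = repetition_range_km(21690)       # 8PSK without pilots: ~219 km
```

Both distances far exceed the few kilometers of useful detection range set by the power budget discussed below, so successive headers never correlate with target echoes.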

Considering the simple power link budget:

P_RX = P_TX G (λ_0 / (4π R_path))^2   (3.71)

under ideal weather conditions, for satellite communications we obtain an average received power at the antenna of −144 dBW (EIRP = 50 dBW, G_RX = 38 dB, f_0 = 11.7 GHz, R_path = 36000 km).
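The (λ_0/4πR_path)^2 factor of Eq. (3.71) can be checked numerically; this sketch evaluates the free-space term only (atmospheric and other losses of the real link budget are not modeled here):

```python
import math

C = 299_792_458.0                       # speed of light [m/s]

def free_space_loss_db(f_hz, r_m):
    """20*log10(4*pi*R/lambda): the path-loss factor of Eq. (3.71) in dB."""
    lam = C / f_hz                      # carrier wavelength [m]
    return 20.0 * math.log10(4.0 * math.pi * r_m / lam)

loss = free_space_loss_db(11.7e9, 36_000e3)   # ~205 dB on the geostationary downlink
```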

In passive radar applications, the surveillance channel antenna has to capture the signal scattered from the target. Supposing an RCS of 50 m² (a big truck or a medium cargo aircraft), the ratio between the scattered received power and the direct-path received power has the trend shown in Fig. 3.27:

P_scatt / P_d_path = σ_RCS / R_extra^2   (3.72)


Fig. 3.27: Loss between the direct path received power and the reflected received power

where σ_RCS is the radar cross section of the target and R_extra is the path between the target and the surveillance channel antenna. As we can see, it is useless to search for targets more than a few kilometers from the radar, because the received power is deep below the noise level.
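Eq. (3.72) can be evaluated directly to quantify this decay (using the σ_RCS = 50 m² value from the text):

```python
import math

def scatter_to_direct_db(rcs_m2, r_extra_m):
    """Eq. (3.72): scattered-to-direct power ratio at the surveillance antenna, in dB."""
    return 10.0 * math.log10(rcs_m2 / r_extra_m ** 2)

loss_3km = scatter_to_direct_db(50.0, 3_000.0)   # ~ -52.6 dB already at 3 km
```

Every doubling of the extra path costs a further ~6 dB, which is why targets beyond a few kilometers fall below the noise floor.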

The only possible cause of unwanted peaks, as in the DVB – T signal, is the presence of the PLHEADER, especially of the SOF field. Since the radar is not able to “see” beyond 2 − 3 km due to the power budget, each signal folding is completely uncorrelated with the previous one, and the ambiguity function (Fig. 3.13 – 3.16) does not show any unwanted peak, allowing good detection performance and avoiding the need for the direct signal reconstruction of [14].
