

4.4.4 Bayesian Algorithms based on double-AR1 Phase Noise Model

All Bayesian algorithms derived until now are carried out by considering a first-order phase noise model, i.e., a discrete-time Wiener model. On the other hand, channels of practical interest are sometimes affected by a phase noise which can be modeled as the sum of two first-order auto-regressive filters. This is the case of the SATMODE phase noise, which impairs satellite communications characterized by low-cost equipment, and whose double-AR1 model is described in Section 2.4.2. In Chapter 2, numerical results show that the detector derived until now, based on just a single-AR1 PN generation, exhibits a significant performance degradation when employed on a double-AR1 PN generation (see Section 2.4.4). Thus, it can be useful to derive some Bayesian algorithms based on such a double-AR1 PN model, as in the case of the D-DP-BCJR algorithm described in Section 2.4.3. Section 2.4.4 demonstrates that the information rate loss achieved by these algorithms with respect to the coherent case is close to zero. However, this good performance is reached at the price of an increased trellis state complexity, since the system state is composed by the union of the CPM state with the states of both phase noise components. Thus, these algorithms cannot be taken into account for systems of practical interest.

Finally, in Section 2.4.3 we also propose an improved version of the DP algorithm (the so-called I-DP), suitable when considering a double-AR1 PN with a very fast component, which can be approximated as jitter, independent from sample to sample. The trellis of the I-DP algorithm is based on the full-complexity CPM trellis based on the Rimoldi decomposition, and thus is composed by $D\, p M^{L-1}$ states. We can reduce the complexity by applying the same technique described in Section 2.4.3 to the MM-DP algorithm of Section 4.4.2, which is based on the MM decomposition and counts just $D$ trellis states. The way to obtain the Improved-MM-DP (I-MM-DP) starting from the MM-DP is very similar to that used to derive the I-DP in Section 2.4.3. In particular, first we just need to change the mean and variance of the Gaussian pdf in (4.22) representing the phase noise transition probability $p(\theta_{n+1}|\theta_n)$, since the PN $\theta_n$ is no longer modeled as a Wiener process but as an AR1 process. Secondly, in order to also take into account the second PN component $\theta_{f,n}$ (i.e., the faster component), we compute the branch metric $H_n(x_n, \psi_{n-1}, \theta_n)$ in (3.11) by averaging over such a component. Since such a component is a zero-mean Gaussian random variable with variance $\sigma_f^2$, the branch metric becomes

$$H_n(x_n, \psi_{n-1}, \theta_n) = \int_{\theta_{f,n}} \exp\left(-\frac{\theta_{f,n}^2}{2\sigma_f^2}\right) \exp\left\{\frac{1}{N_0}\,\mathrm{Re}\left[\sum_{k=0}^{M-2} p_{k,n}\, a_{k,n}(x_n, \psi_{n-1})\, e^{-j(\theta_n + \theta_{f,n})}\right]\right\} d\theta_{f,n} \qquad (4.45)$$

where the integral in (4.45) can be evaluated as a simple sum by discretizing the faster phase component $\theta_{f,n}$, as done for the slower component $\theta_n$. Hence, the I-MM-DP algorithm is obtained from the MM-DP algorithm by just increasing the complexity of the branch metric completion. However, Section 2.4.4 shows that the improved-DP algorithm does not achieve a significant information rate improvement with respect to the simple DP algorithm. We will verify this statement by numerical results in Section 4.5.
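As a purely illustrative sketch of the discretized averaging just described, the following Python fragment evaluates the branch metric (4.45) by replacing the integral over $\theta_{f,n}$ with a finite weighted sum. The complex scalar `z`, standing for the precomputed inner correlation $\sum_k p_{k,n} a_{k,n}(x_n, \psi_{n-1})$, and the number of discretization levels are assumptions introduced only for the example.

```python
import numpy as np

def branch_metric_immdp(z, theta_n, sigma_f, N0, n_levels=8):
    """Minimal sketch of the I-MM-DP branch metric in (4.45).

    z        : assumed precomputed inner sum  sum_k p_{k,n} a_{k,n}(x_n, psi_{n-1})
    theta_n  : hypothesized (discretized) value of the slow PN component
    sigma_f  : standard deviation of the fast PN component theta_{f,n}
    n_levels : number of discretization levels for theta_{f,n} (assumption)
    """
    # Discretize the fast component over +/- 3 standard deviations.
    theta_f = np.linspace(-3.0 * sigma_f, 3.0 * sigma_f, n_levels)

    # Gaussian weights of the first exponential factor, normalized to sum to one.
    weights = np.exp(-theta_f**2 / (2.0 * sigma_f**2))
    weights /= weights.sum()

    # Second exponential factor of (4.45), evaluated at every discrete theta_f.
    exponent = np.real(z * np.exp(-1j * (theta_n + theta_f))) / N0
    return np.sum(weights * np.exp(exponent))
```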

Detection Algorithm Based on Phase Noise Linear Prediction

In the following, we propose a Bayesian algorithm based on linear prediction of the phase noise process. In particular, we extend the approximate linear predictive approach described in [71] to a general time-varying phase process $\theta_n$. We assume $\theta_n$ stationary and described by a given autocorrelation sequence of the phasor process $h_n = e^{j\theta_n}$, denoted by

$$R_h(k) = \mathrm{E}\left\{ e^{j\theta_{n+k}}\, e^{-j\theta_n} \right\}\,. \qquad (4.46)$$

First of all, we focus on the vector $\mathbf{y}$ which collects the set $y_n(x_n, \sigma_n)$ of sufficient statistics of the received signal $r(t)$ for the ideal coherent detector (see Section 4.2). These statistics are generated as indicated in (2.12), i.e., by a bank of filters matched to all possible length-$T$ CPM waveforms $\{\bar{s}(t; x_n, \omega_n)\}$. In the present Section, we propose a different set of sufficient statistics, collected in the vector $\mathbf{r}$, obtained by projecting the received signal $r(t)$ over an alternative orthonormal basis. In detail, the statistics are obtained by oversampling the received signal by an oversampling factor $N_s$ which respects the anti-aliasing condition; hence, by oversampling the signal $r(t)$ in (4.3) at $t_{n,k} = nT + kT/N_s$, we get

$$r_{n,k} = \tilde{s}_{n,k}(x_n, \sigma_n)\, e^{j\theta_n} + w_{n,k}\,, \qquad n = 0, 1, \ldots, N-1\,, \quad k = 0, 1, \ldots, N_s - 1 \qquad (4.47)$$

where we have defined $r_{n,k} \triangleq r(t_{n,k})$, $\tilde{s}_{n,k}(x_n, \sigma_n) \triangleq \tilde{s}(t_{n,k}; x_n, \sigma_n)$ and $w_{n,k} \triangleq w(t_{n,k})$. In other words, $n$ is the symbol index, while $k$ is the oversampling index. It is trivial to demonstrate that $\{w_{n,k}\}_{n,k}$ are independent and identically distributed complex Gaussian random variables, with zero mean and variance $\sigma_w^2 = 2 N_0 N_s / T$. From this consideration, we find the following expression for the pdf $p(\mathbf{r}|\mathbf{x})$:

$$p(\mathbf{r}|\mathbf{x}) = \prod_{n=0}^{N-1} \exp\left\{ -\frac{1}{\sigma_w^2} \sum_{k=0}^{N_s-1} \left| r_{n,k} - \tilde{s}_{n,k}(x_n, \sigma_n)\, e^{j\theta_n} \right|^2 \right\}\,. \qquad (4.48)$$
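To make the construction of these statistics concrete, the following Python sketch generates the oversampled samples of (4.47) from given waveform samples and a given phase noise sequence; the array names and the random number generator are assumptions made only for this illustration.

```python
import numpy as np

def oversampled_statistics(s_tilde, theta, N0, T, Ns, rng=None):
    """Minimal sketch of (4.47): r_{n,k} = s_tilde_{n,k} e^{j theta_n} + w_{n,k}.

    s_tilde : array of shape (N, Ns), the samples s_tilde_{n,k}(x_n, sigma_n)
    theta   : length-N phase noise sequence theta_n
    N0, T   : noise power spectral density and symbol interval
    Ns      : oversampling factor
    """
    rng = rng if rng is not None else np.random.default_rng()
    s_tilde = np.asarray(s_tilde)
    sigma_w2 = 2.0 * N0 * Ns / T              # variance of the complex noise samples
    noise = np.sqrt(sigma_w2 / 2.0) * (rng.standard_normal(s_tilde.shape)
                                       + 1j * rng.standard_normal(s_tilde.shape))
    return s_tilde * np.exp(1j * np.asarray(theta))[:, None] + noise
```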

We define

$$r_{n,k}(x_n, \sigma_n) \triangleq \frac{r_{n,k}}{\tilde{s}_{n,k}(x_n, \sigma_n)} = r_{n,k}\, \tilde{s}^{\,*}_{n,k}(x_n, \sigma_n) \qquad (4.49)$$

so that, replacing (4.49) in (4.47), when considering the transmitted sequence we get

$$r_{n,k}(x_n, \sigma_n) = e^{j\theta_n} + w'_{n,k} \qquad (4.50)$$

where $w'_{n,k} \triangleq w_{n,k}\, \tilde{s}^{\,*}_{n,k}(x_n, \sigma_n)$ is statistically equivalent to $w_{n,k}$. Hence, we can exploit the received samples (4.50) to estimate $e^{j\theta_n}$ by means of linear prediction filtering. We derive

$$e^{j\hat{\theta}_n} = \frac{\sum_{i=1}^{C} p_i\, r_{n-i,0}(x_{n-i}, \sigma_{n-i})}{\left| \sum_{i=1}^{C} p_i\, r_{n-i,0}(x_{n-i}, \sigma_{n-i}) \right|} \qquad (4.51)$$

where $C$ assumes the meaning of prediction order and $\{p_i\}_{i=1}^{C}$ are the predictor coefficients. The predictor coefficients can be computed by solving the Wiener-Hopf linear system $\mathbf{R}\,\mathbf{p} = \mathbf{b}$, where $\mathbf{p} \triangleq (p_1, p_2, \ldots, p_C)^T$ is the unknown vector (the notation $(\cdot)^T$ indicates the transposition operator), $\mathbf{b} = [R_h(1), R_h(2), \ldots, R_h(C)]^T$, and $\mathbf{R}$ is a square $C \times C$ matrix whose elements are defined through the autocorrelation function of the samples $r_{n,k}$, with the following expression

$$R(\ell, m) = \begin{cases} R_h(|\ell - m|) & \text{if } \ell \neq m \\ R_h(0) + \sigma_w^2 & \text{if } \ell = m\,. \end{cases} \qquad (4.52)$$
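A minimal Python sketch of this computation is given below: it builds $\mathbf{R}$ and $\mathbf{b}$ from a user-supplied phasor autocorrelation, solves the Wiener-Hopf system defined by (4.52), and forms the normalized prediction of (4.51). The function and argument names are assumptions introduced only for the example.

```python
import numpy as np

def predictor_coefficients(R_h, C, sigma_w2):
    """Minimal sketch: solve the Wiener-Hopf system R p = b built from (4.52).

    R_h      : callable, R_h(k) returns the phasor autocorrelation of (4.46)
               (real-valued for the PN models considered in this section)
    C        : prediction order
    sigma_w2 : variance of the noise samples w_{n,k}
    """
    # b = [R_h(1), ..., R_h(C)]^T
    b = np.array([R_h(k) for k in range(1, C + 1)])

    # R(l, m) = R_h(|l - m|) off the diagonal, R_h(0) + sigma_w^2 on it.
    R = np.array([[R_h(abs(l - m)) for m in range(C)] for l in range(C)], dtype=float)
    R[np.diag_indices(C)] = R_h(0) + sigma_w2

    return np.linalg.solve(R, b)          # p = (p_1, ..., p_C)^T


def predict_phasor(p, past_samples):
    """Normalized one-step prediction of e^{j theta_n} as in (4.51).

    past_samples[i-1] plays the role of r_{n-i,0}(x_{n-i}, sigma_{n-i}).
    """
    z = np.dot(p, past_samples)
    return z / np.abs(z)
```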

Replacing (4.51) in (4.48), we derive

$$p(\mathbf{r}|\mathbf{x}) \simeq \prod_{n=0}^{N-1} \exp\left\{ -\frac{1}{\sigma_\epsilon^2} \sum_{k=0}^{N_s-1} \left| G_n(x_n, \ldots, x_{n-C}, \sigma_n, \ldots, \sigma_{n-C}) \right|^2 \right\} \qquad (4.53)$$

where $\sigma_\epsilon^2$ is the mean square prediction error, which can be expressed as [71]

$$\sigma_\epsilon^2 = R_h(0) + \sigma_w^2 - \sum_{i=1}^{C} p_i\, R_h(i) \qquad (4.54)$$

and

$$G_n(x_n, \ldots, x_{n-C}, \sigma_n, \ldots, \sigma_{n-C}) \triangleq r_{n,k} - \tilde{s}_{n,k}(x_n, \sigma_n)\, \frac{\sum_{i=1}^{C} p_i\, r_{n-i,0}(x_{n-i}, \sigma_{n-i})}{\left|\sum_{i=1}^{C} p_i\, r_{n-i,0}(x_{n-i}, \sigma_{n-i})\right|}\,. \qquad (4.55)$$

By employing (1.16), we can write the definition of $\tilde{s}_{n,k}(x_n, \sigma_n)$ as

$$\tilde{s}_{n,k}(x_n, \sigma_n) = \bar{s}(t_{n,k}; x_n, \omega_n)\, e^{j\pi_n} = \bar{s}_{n,k}(x_n, \omega_n)\, e^{j\pi_n} \qquad (4.56)$$

where $\bar{s}(t_{n,k}; x_n, \omega_n)$ (1.17) is the component of the CPM waveform depending on just the present symbol $x_n$ and on the correlative state $\omega_n$ (1.4), $\pi_n$ (1.5) is the phase state, and we defined $\bar{s}_{n,k}(x_n, \omega_n) \triangleq \bar{s}(t_{n,k}; x_n, \omega_n)$. Thus, replacing (4.56) and (4.49) in (4.55), we find that the function $G_n(x_n, \ldots, x_{n-C}, \sigma_n, \ldots, \sigma_{n-C})$ becomes

$$G_n(x_n, \ldots, x_{n-C}, \sigma_n, \ldots, \sigma_{n-C}) = r_{n,k} - \bar{s}_{n,k}(x_n, \omega_n)\, \frac{\sum_{i=1}^{C} p_i\, r_{n-i,0}\, \bar{s}^{\,*}_{n-i,0}(x_{n-i}, \omega_{n-i})\, e^{j(\pi_n - \pi_{n-i})}}{\left|\sum_{i=1}^{C} p_i\, r_{n-i,0}\, \bar{s}^{\,*}_{n-i,0}(x_{n-i}, \omega_{n-i})\right|} \qquad (4.57)$$

$$= r_{n,k} - \bar{s}_{n,k}(x_n, \omega_n)\, \frac{\sum_{i=1}^{C} p_i\, r_{n-i,0}\, \bar{s}^{\,*}_{n-i,0}(x_{n-i}, \omega_{n-i})\, e^{j\pi h \sum_{m=0}^{i-1} x_{n-L-m}}}{\left|\sum_{i=1}^{C} p_i\, r_{n-i,0}\, \bar{s}^{\,*}_{n-i,0}(x_{n-i}, \omega_{n-i})\right|}\,. \qquad (4.58)$$

Since the correlative state is composed by the $L-1$ most recent past symbols, we understand that the function $G_n$ just depends on the present symbol $x_n$ and on a set of $L-1+C$ past symbols given by $(x_{n-1}, \ldots, x_{n-(L-1+C)})$. Thus, by defining a new system state

$$\mu_n \triangleq \left(x_{n-1}, \ldots, x_{n-(L-1+C)}\right) \qquad (4.59)$$

we have

$$G_n(x_n, \ldots, x_{n-C}, \sigma_n, \ldots, \sigma_{n-C}) \equiv G_n(x_n, \mu_n)\,, \qquad (4.60)$$

and the pdf in (4.53) can be written as

$$p(\mathbf{r}|\mathbf{x}) \simeq \prod_{n=0}^{N-1} V_n(x_n, \mu_n) \qquad (4.61)$$

where

$$V_n(x_n, \mu_n) \triangleq \exp\left\{ -\frac{1}{\sigma_\epsilon^2} \sum_{k=0}^{N_s-1} \left| G_n(x_n, \mu_n) \right|^2 \right\}\,. \qquad (4.62)$$
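The branch metric computation just defined can be summarized by the short Python sketch below, which evaluates (4.57) for every oversampling index and then forms $V_n(x_n, \mu_n)$ of (4.62); the argument names, and the assumption that the hypothesized past symbols enter through precomputed waveform samples and phase-state differences, are introduced only for this illustration.

```python
import numpy as np

def branch_metric_Vn(r_nk, s_bar_nk, past_r0, past_s_bar0, phase_diffs, p, sigma_eps2):
    """Minimal sketch of the branch metric V_n(x_n, mu_n) of (4.62).

    r_nk        : the N_s received samples r_{n,k} of symbol interval n
    s_bar_nk    : samples s_bar_{n,k}(x_n, omega_n) for the hypothesized (x_n, mu_n)
    past_r0     : r_{n-1,0}, ..., r_{n-C,0}
    past_s_bar0 : s_bar_{n-1,0}(x_{n-1}, omega_{n-1}), ..., s_bar_{n-C,0}(...)
    phase_diffs : pi_n - pi_{n-i} for i = 1, ..., C
    p           : predictor coefficients p_1, ..., p_C
    sigma_eps2  : prediction error variance of (4.54)
    """
    # Normalized phasor prediction: numerator and denominator of (4.57).
    num = np.sum(p * past_r0 * np.conj(past_s_bar0) * np.exp(1j * phase_diffs))
    den = np.abs(np.sum(p * past_r0 * np.conj(past_s_bar0)))
    prediction = num / den

    # G_n(x_n, mu_n) for every oversampling index k, then V_n of (4.62).
    G = r_nk - s_bar_nk * prediction
    return np.exp(-np.sum(np.abs(G) ** 2) / sigma_eps2)
```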

Finally, we can derive the MAP symbol detection strategy by marginalizing $p(\mathbf{x}, \boldsymbol{\mu}|\mathbf{r})$ (where the vector $\boldsymbol{\mu}$ collects the elements $\mu_n$, with $n$ from $0$ to $N-1$) by FG and SPA. In detail, by Bayes rule

$$p(\mathbf{x}, \boldsymbol{\mu}|\mathbf{r}) \propto p(\mathbf{r}|\mathbf{x}, \boldsymbol{\mu})\, P(\boldsymbol{\mu}|\mathbf{x})\, P(\mathbf{x}) = P(\mu_0) \prod_{n=0}^{N-1} V_n(x_n, \mu_n)\, I(x_n, \mu_n, \mu_{n+1})\, P(x_n) \qquad (4.63)$$

where $I(x_n, \mu_n, \mu_{n+1})$ is the trellis indicator function, equal to one if $x_n$, $\mu_n$, and $\mu_{n+1}$ satisfy the trellis constraints and to zero otherwise. It is clear that factorization (4.63) is equivalent to the factorization in (2.14) of the optimal coherent detector, where here the state definition $\mu_n$ replaces the CPM state $\sigma_n$ (and thus, also the trellis indicator function is different), and where the branch metric $V_n(x_n, \mu_n)$ replaces $F_n(x_n, \sigma_n)$. Thus, the forward recursion, the backward recursion, and the completion stage are performed as described in (2.17), (2.18), and (2.21), respectively. From (4.59), we know that the BCJR algorithm we have derived is based on a trellis of $M^{L-1+C}$ states. So, it is clear that its computational complexity rapidly increases with the parameters $L$ and $C$. In the present work, we do not address the problem of complexity reduction for the described algorithm, but we think that a possible solution is represented by reduced-search techniques (described in Section 3.6).

Predictor Coefficients for Wiener and double-AR1 PN Models

In order to compute the predictor coefficients $\mathbf{p}$ by the Wiener-Hopf linear system $\mathbf{R}\,\mathbf{p} = \mathbf{b}$, we need to assume a statistical model for the phase noise $\theta_n$. From such a model we can derive an expression for $R_h(k)$, defined in (4.46), necessary for the computation of the matrix $\mathbf{R}$ in (4.52).

If we model the phase noise as a discrete-time Wiener process of incremental variance $\sigma^2$, it is easy to verify that

$$R_h(k) = e^{-|k|\sigma^2/2}\,. \qquad (4.64)$$

On the other hand, when we model the PN by the double-AR1 model described in Section 2.4.2, we can derive $R_h(k)$ as follows. First of all, we recall that the characteristic function of any random variable $\beta$ is given by

$$\gamma_\beta(t) = \mathrm{E}\left\{ e^{jt\beta} \right\}\,. \qquad (4.65)$$

Hence, we note from (4.46) that $R_h(k)$ is equivalent to the characteristic function of the real random variable $(\theta_{n+k} - \theta_n)$, computed for $t = 1$. The discrete-time phase noise process $\theta_n$ we are considering is the sum of two Gaussian first-order auto-regressive processes,

$$\theta_{a,n} = a\, \theta_{a,n-1} + v_{a,n}\,, \qquad \theta_{b,n} = b\, \theta_{b,n-1} + v_{b,n}$$

where $v_{a,n}$ and $v_{b,n}$ are two Gaussian zero-mean random variables of variance $\sigma_{v,a}^2$ and $\sigma_{v,b}^2$, respectively. $\theta_{a,n}$ and $\theta_{b,n}$ are still Gaussian random variables, of variance $\sigma_a^2$ and $\sigma_b^2$, respectively. Thus, it is easy to prove that $(\theta_{n+k} - \theta_n)$ is still a Gaussian variable, with zero mean and variance $2[R_\theta(0) - R_\theta(k)]$, where $R_\theta(k)$ is the autocorrelation sequence

$$R_\theta(k) = \mathrm{E}\{\theta_n\, \theta_{n+k}\}\,. \qquad (4.66)$$

Thus, since the characteristic function of a zero-mean Gaussian random variable $\beta$ with variance $\sigma^2$ is [72]

$$\gamma_\beta(t) = e^{-\sigma^2 t^2/2} \qquad (4.67)$$

we derive that

$$R_h(k) = e^{R_\theta(k) - R_\theta(0)}\,. \qquad (4.68)$$

Finally, since the autocorrelation function $R_\theta(k)$ is the sum of the autocorrelation functions of its AR1 components $\theta_{a,n}$ and $\theta_{b,n}$, we get

$$R_h(k) = \frac{\exp\left( \sigma_a^2\, a^{|k|} + \sigma_b^2\, b^{|k|} \right)}{\exp\left( \sigma_a^2 + \sigma_b^2 \right)}\,. \qquad (4.69)$$
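The two closed-form autocorrelations (4.64) and (4.69) translate directly into code. The short Python sketch below implements both; either function, with model parameters chosen by the user, can play the role of the `R_h` argument of the Wiener-Hopf sketch given after (4.52).

```python
import numpy as np

def R_h_wiener(k, sigma2):
    """Phasor autocorrelation (4.64) for a discrete-time Wiener PN of
    incremental variance sigma2."""
    return np.exp(-abs(k) * sigma2 / 2.0)

def R_h_double_ar1(k, a, b, sigma_a2, sigma_b2):
    """Phasor autocorrelation (4.69) for the double-AR1 PN model with
    AR coefficients a, b and component variances sigma_a2, sigma_b2."""
    R_theta_k = sigma_a2 * a ** abs(k) + sigma_b2 * b ** abs(k)
    R_theta_0 = sigma_a2 + sigma_b2
    return np.exp(R_theta_k - R_theta_0)
```

For instance, passing `lambda k: R_h_double_ar1(k, a, b, sigma_a2, sigma_b2)` to the `predictor_coefficients` sketch would yield the predictor tailored to the double-AR1 model, while `lambda k: R_h_wiener(k, sigma2)` would yield the one matched to the Wiener model.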