
UNIVERSITÀ DEGLI STUDI DI PADOVA

DEPARTMENT OF INFORMATION ENGINEERING

BACHELOR'S DEGREE IN TELECOMMUNICATIONS ENGINEERING

BACHELOR'S THESIS

BIT-INTERLEAVED CODED MODULATION

SUPERVISOR: Prof. Nevio Benvenuto

CANDIDATE: Alberto Desiderà

Padova, 30 September 2010


So act that the maxim of your will could always hold at the same time as a principle of a universal legislation.

(Immanuel Kant)


Contents

Abstract

1 Introduction

2 System model
2.1 Channel model
2.2 Coded modulation
2.3 Bit-interleaved coded modulation

3 BICM analysis and approximations
3.1 Preliminaries
3.2 Error-rate approximation using Union Bounding and Saddlepoint Approximation
3.3 Closed-form expressions for results
3.4 Numerical results and discussions

4 Introduction of iterative decoding
4.1 System description
4.2 Conventional decoding
4.3 Iterative decoding with hard-decision feedback
4.4 Signal labelling
4.5 Simulation results

5 Applications
5.1 Non-coherent demodulation
5.2 Block-fading
5.3 MIMO - Multiple Input Multiple Output

6 Conclusion

A Viterbi Algorithm

B Turbo-code

Bibliography


Abstract

The principle of coding in the signal space follows directly from Shannon's analysis of waveform Gaussian channels subject to an input constraint. The early design of communication systems focused separately on modulation and on error-correcting codes. More recently, powerful families of binary codes with a good trade-off between performance and decoding complexity have been discovered.

Bit-interleaved coded modulation (BICM) is a pragmatic approach combining the best of both worlds: it takes advantage of the signal-space coding perspective, whilst allowing for the use of powerful families of binary codes with virtually any modulation format. BICM avoids the need for the complicated and somewhat less flexible design typical of coded modulation. As a matter of fact, it has established itself as a de facto standard for bandwidth- and power-efficient communication, as in DSL, wireless LANs and WiMAX. The aim of this thesis is to describe the main aspects of the system, focusing attention on the model characteristics and on the error analysis (based on bit-error rate approximations).

Finally, I also consider BICM with iterative decoding, and I conclude with an overview of some applications of BICM.


Chapter 1

Introduction

Since Shannon's landmark 1948 paper [1], approaching the capacity of the additive white Gaussian noise (AWGN) channel has been one of the most relevant topics in information theory and coding theory. Shannon's promise that rates up to the channel capacity can be reliably transmitted over the channel comes together with the design challenge of effectively constructing coding schemes achieving these rates with limited encoding and decoding complexity. A simple way of constructing codes for the Gaussian channel consists of fixing the modulator signal set, and then considering codewords obtained as sequences over the fixed modulator signal set, or alphabet.

Starting from the trellis-coded modulation analysed in [2], it has been generally accepted that modulation and coding should be combined in a single entity for improved performance, based on the combination of trellis codes and discrete signal constellations through set partitioning. In this situation the code performance depends strongly on the minimum Hamming distance of the code (the 'code diversity') rather than on its minimum Euclidean distance.

The discovery of turbo-codes and the re-discovery of low-density parity-check (LDPC) codes, with their corresponding iterative decoding algorithms, marked a new era in coding theory. These modern codes approach the capacity of binary-input channels with low complexity. The analysis of iterative decoding also led to new methods for their efficient design.

In contrast with the findings in [2], bit-interleaved coded modulation (BICM) was proposed in [3] as a pragmatic approach to coded modulation. There it was recognized that the code diversity, and hence the reliability of coded modulation over the channel, could be further improved. The idea is to make the code diversity equal to the smallest number of distinct bits (rather than channel symbols) along any error event. This is achieved by bit-wise interleaving at the encoder output, and by using an appropriate soft-decision bit metric as an input to the decoder. In [4] there is a comprehensive analysis of BICM in terms of information rates and error probability, and it is shown that the loss incurred by the BICM interface may in fact be very small. Furthermore, this loss can essentially be recovered by using iterative decoding. Since its introduction, BICM has been regarded as a pragmatic yet powerful scheme to achieve high data rates with general signal constellations. Nowadays, BICM is employed in a wide range of practical communications systems, such as DVB-S2, wireless LANs, DSL, WiMAX and the future generation of high data rate cellular systems (the so-called 4th generation). BICM has become the de facto standard for coding over the Gaussian channel in modern systems.

In this work, I introduce in a comprehensive fashion the theory underlying BICM, together with tools for evaluating its performance. First I define the system model, analysing the channel model, the scheme of coded modulation and finally bit-interleaved coded modulation. After that I focus my attention on BICM, considering a more specific characterization; starting from this I introduce some approximations for the study of the error rate, arriving at an important result: closed-form expressions for some well-chosen subsets of signal points, useful for computing the numerical integration of the approximations.

These are valid for general quadrature amplitude modulation and phase shift keying signal constellations and arbitrary bit-to-symbol mapping rules. I then turn my attention to iterative decoding of BICM (BICM-ID), which provides some performance advantages using convolutional codes combined with mappings different from the standard Gray labelling. Finally, I describe a number of applications of BICM; in particular I consider the application of BICM to orthogonal modulation with non-coherent detection, to the block-fading channel and to the multiple-antenna channel (MIMO).


Chapter 2

System model

In this chapter I recall the baseline model of coded modulation (CM) and introduce the model of bit-interleaved coded modulation (BICM). Here I consider a generic stationary finite-memory vector channel, which will be specified further in the subsequent analysis.

Figure 2.1: Block diagram of transmission with coded modulation (CM) and bit-interleaved coded modulation (BICM). For CM, π denotes interleaving at the symbol level; for BICM, π denotes interleaving at the bit level.

The CM and BICM models are illustrated in the block diagram of Figure 2.1. The main blocks of both schemes are: 1) an encoder (ENC); 2) an interleaver π; 3) a modulator, modelled by a labelling map µ and a signal set X, a finite set of points in the complex N-dimensional Euclidean space C^N; 4) a stationary finite-memory vector channel with transition probability density function p_θ(y | x), x, y ∈ C^N; 5) a demodulator (DEM); 6) a branch metric deinterleaver π^{-1}; 7) a decoder (DEC). In this first part of the discussion I give a detailed description of these blocks for CM and BICM.
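To make the chain of blocks concrete, here is a minimal Python sketch (my own illustration, not part of the thesis) that strings together a toy repetition encoder standing in for ENC, a random bit interleaver π and a Gray-labelled 4-PSK mapper µ; the channel, demodulator and decoder blocks are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def repeat_encoder(bits):
    """Toy rate-1/2 encoder: repeats each information bit twice (stands in for ENC)."""
    return np.repeat(bits, 2)

def make_interleaver(n, rng):
    """Random bit permutation pi and its inverse pi^-1."""
    perm = rng.permutation(n)
    return perm, np.argsort(perm)

# Gray-labelled 4-PSK: label (b1, b2) -> unit-energy complex symbol
GRAY_QPSK = {
    (0, 0): np.exp(1j * np.pi / 4),
    (0, 1): np.exp(1j * 3 * np.pi / 4),
    (1, 1): np.exp(1j * 5 * np.pi / 4),
    (1, 0): np.exp(1j * 7 * np.pi / 4),
}

def modulate(coded_bits, perm):
    """Interleave the coded bits, then map r = 2 bits at a time onto symbols (mu)."""
    interleaved = coded_bits[perm]
    pairs = interleaved.reshape(-1, 2)
    return np.array([GRAY_QPSK[tuple(int(b) for b in p)] for p in pairs])

info_bits = rng.integers(0, 2, 10)            # information bits
coded = repeat_encoder(info_bits)             # ENC
perm, inv_perm = make_interleaver(len(coded), rng)
x = modulate(coded, perm)                     # pi followed by mu
print(x)
```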

2.1 Channel model

Now, assuming that x is the input and y is the output of the system (x, y ∈ C^N), I consider a vector channel characterized by a family of transition probability density functions (pdf)
$$\{p_\theta(y \mid x) : \theta \in \mathbb{C}^K;\ x, y \in \mathbb{C}^N\} \qquad (2.1)$$
parametrized by a complex-valued vector θ ∈ C^K (where C^K is a complex K-dimensional Euclidean space, in general different from the space of the input and output vectors) representing the channel state. I assume that θ is independent of the channel input x, and that, conditionally on the sequence θ, the channel is memoryless,

$$p_\theta(y \mid x) = \prod_k p_{\theta_k}(y_k \mid x_k), \qquad (2.2)$$

where x and y are the input and output sequences involved in the transmission.

Moreover, θ is assumed to be a stationary, finite-memory random process. With this model I can represent a large number of typical communication channels, for example the additive white Gaussian noise (AWGN) channel (θ = constant). In the following sections I will also take into account the fading contribution, and then the channel will be modelled by a pdf following a Nakagami-m distribution.

2.2 Coded modulation

A CM scheme can be viewed as the concatenation of an encoder (ENC) for a code C defined over a label alphabet A with an N-dimensional memoryless modulator over a signal set X ⊆ C^N. I assume that A and X are finite and |A| = |X| = M, and I define r = log₂ M. The transmitted code sequence c ∈ C is mapped onto the signal sequence x by the labelling map µ : A → X, acting componentwise. If the code C is designed to correct random errors, it may be convenient to introduce a symbol interleaver π between the encoder (ENC) and the mapper µ. Throughout this work I assume that π is an ideal interleaver. Let y = (y₁, y₂, ..., y_k, ...) denote the channel output sequence resulting from the transmission of the sequence x. Optimum decoding depends on the channel state information (CSI) available at the receiver. Here I will consider only the case of perfect CSI.

Detection: assuming that the receiver has perfect knowledge of the values taken on by each θk, the set of maximum-likelihood (ML) symbol metrics is given by

$$\{\log p_{\theta_k}(y_k \mid z)\}_{z \in \mathcal{X}} \qquad (2.3)$$


After metric deinterleaving (denoted by π^{-1} in Figure 2.1), the ML decision about the transmitted code sequence is made according to the rule

$$\hat{c} = \arg\max_{c \in C} \sum_{k} \log p_{\theta_k}(y_k \mid \mu(c_k)), \qquad (2.4)$$
where the summation runs over the whole sequence. This equation represents the ML decision rule also with finite-depth or with no interleaving, since, conditionally on θ, the channel is memoryless.

2.3 Bit-interleaved coded modulation

BICM can be obtained by concatenating an encoder ENC for a binary code C with an N-dimensional memoryless modulator over a signal set X ⊆ C^N of size |X| = M = 2^r, through a bit interleaver π and a one-to-one binary labelling map µ : {0, 1}^r → X. The code sequence c is first interleaved by π. Next, the interleaved sequence π(c) is broken into subsequences of r bits each, which are mapped onto signals in X. Finally, the resulting signal sequence x is transmitted over the vector channel. The bit interleaver can be seen as a one-to-one correspondence π : k′ → (k, i), where k′ denotes the time ordering of the coded bits c_{k′}, k denotes the time ordering of the signals x_k transmitted over the vector channel, and i indicates the position of the bit c_{k′} in the label of x_k. From here on I assume ideal bit interleaving, to simplify the analysis. Let l_i(x) denote the ith bit of the label of x, and X_b^i the subset of all the signals x ∈ X whose label has the value b ∈ {0, 1} in position i. Further, let b̄ denote the complement of b and y the channel output resulting from the transmission of x. The conditional pdf of y, given l_i(x) = b ∈ {0, 1} and θ, is

$$p_\theta(y \mid l_i(x) = b) = \sum_{x \in \mathcal{X}} p_\theta(y \mid x)\, P(x \mid l_i(x) = b) = 2^{-(r-1)} \sum_{x \in \mathcal{X}_b^i} p_\theta(y \mid x) \qquad (2.5)$$
The last expression is obtained by noting that p_θ(y | x, l_i(x) = b) = p_θ(y | x) and that
$$P(x \mid l_i(x) = b) = \begin{cases} 0, & \text{if } l_i(x) = \bar b \\ 2^{-(r-1)}, & \text{if } l_i(x) = b \end{cases}$$


where I assume a uniform input distribution:
$$P(b = 0) = P(b = 1) = \tfrac{1}{2}.$$

The optimum receiver depends on the available CSI, which in this case is ideal; the bit interleaving is also ideal, so I can assume that at each time k the demodulator (DEM) produces the set of ML bit metrics

$$\lambda^i(y_k, b) = \log \sum_{x \in \mathcal{X}_b^i} p_{\theta_k}(y_k \mid x) \qquad (2.6)$$
for b ∈ {0, 1} and i = 1, ..., r. Finally, the ML decoder (DEC) makes decisions according to the rule
$$\hat{c} = \arg\max_{c \in C} \sum_{k'} \lambda^{i}(y_k, c_{k'}), \qquad (2.7)$$
where π(k′) = (k, i) and c_{k′} → c ∈ C indicates that c_{k′} is a coded bit of the code sequence c. The decision rule (2.7) is completely analogous to (2.4), and can be implemented in a simple way.
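As a rough illustration of (2.6)-(2.7), the following sketch (mine, assuming an AWGN channel with known noise variance and omitting the interleaver for brevity) computes the log-sum bit metrics λ^i(y_k, b) for a Gray-labelled 4-PSK alphabet and selects, by exhaustive search over a tiny example codebook, the codeword that maximizes the summed metrics.

```python
import numpy as np
from itertools import product

# Gray-labelled 4-PSK: LABELS[k] is the label of SYMBOLS[k]
LABELS = [(0, 0), (0, 1), (1, 1), (1, 0)]
SYMBOLS = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
R = 2          # bits per symbol
SIGMA2 = 0.5   # assumed complex noise variance (illustrative value)

def bit_metric(y, i, b):
    """lambda^i(y, b) = log sum_{x in X_b^i} p(y | x), Gaussian channel assumed."""
    like = [np.exp(-abs(y - s) ** 2 / SIGMA2)
            for lab, s in zip(LABELS, SYMBOLS) if lab[i] == b]
    return np.log(sum(like))

def decode(y_seq, codebook):
    """ML rule (2.7): pick the codeword whose bit metrics sum to the maximum."""
    best, best_metric = None, -np.inf
    for c in codebook:
        bits = np.reshape(c, (-1, R))           # r bits per channel symbol
        metric = sum(bit_metric(y, i, bits[k, i])
                     for k, y in enumerate(y_seq) for i in range(R))
        if metric > best_metric:
            best, best_metric = c, metric
    return best

# Toy codebook: all even-weight words of length 4 (stands in for the code C)
codebook = [c for c in product((0, 1), repeat=4) if sum(c) % 2 == 0]
rng = np.random.default_rng(1)
tx = codebook[3]
x = [SYMBOLS[LABELS.index((tx[0], tx[1]))], SYMBOLS[LABELS.index((tx[2], tx[3]))]]
y = np.array(x) + np.sqrt(SIGMA2 / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
print("sent:", tx, "decided:", decode(y, codebook))
```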


Chapter 3

BICM analysis and approximations

After the introduction to the system model in the previous chapter, I now want to obtain a more specific characterization of BICM systems. Since in my analysis I will consider the transmission performance over fading channels, I use a Nakagami-m distribution to model the channel. Under this assumption, I evaluate some approximations for the error rate, comparing the results with the union bound found in [4]. This analysis is very useful for system design, and in the following sections I will derive further simplified expressions for this purpose.

Figure 3.1: Block diagram of BICM transmission over a fading AWGN channel.

3.1 Preliminaries

Figure 3.1 shows the block diagram of the equivalent BICM transmission system.

Transmitter: the BICM codeword x = [x₁, x₂, ..., x_L] comprises L complex-valued symbols and is obtained by first interleaving (π) the output of a binary encoder c = [c₁, c₂, ..., c_N] into c_π = [c_{π₁}, c_{π₂}, ..., c_{π_N}] and subsequently mapping each group of r = log₂(M) bits through µ : {0, 1}^r → X. While the pdf approximation that I will present next is applicable to arbitrary signal constellations, for practical relevance I assume that X is an M-ary QAM or PSK constellation with average unit symbol energy. As usual, the coding and mapping result in a uniform distribution of the signal points. When presenting numerical results, I consider set partitioning labelling (SPL), semi set partitioning labelling (SSPL), modified set partitioning labelling (MSPL), and mixed labelling (ML), in addition to the popular Gray labelling (GL), for QAM and PSK signal constellations. As Gray labelling plays a key role in BICM theory, I restate here its definition in terms of BICM notation: let X denote a signal set of size M = 2^r, with minimum Euclidean distance d_min. A binary map µ : {0, 1}^r → X is a Gray labelling for X if, for all i = 1, ..., r and b ∈ {0, 1}, each x ∈ X_b^i has at most one z ∈ X_b̄^i at distance d_min. Figure 3.2 shows a labelled 16QAM signal set.

Figure 3.2: 16QAM signal set with (a) a suitable set partitioning labelling and (b) Gray labelling.
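The sketch below (my own illustration) builds a unit-energy 16-QAM constellation with a per-axis reflected-Gray labelling, extracts the subsets X_b^i used throughout the analysis, and numerically checks the Gray property stated above (each x ∈ X_b^i has at most one point of X_b̄^i at distance d_min).

```python
import numpy as np

def gray_16qam():
    """16-QAM with Gray labelling on each axis, normalized to unit average energy."""
    gray2 = [0, 1, 3, 2]                      # 2-bit reflected Gray sequence
    levels = [-3, -1, 1, 3]
    const, labels = [], []
    for qi, q in enumerate(levels):
        for ii, iq in enumerate(levels):
            const.append(iq + 1j * q)
            # first two label bits from the I level, last two from the Q level
            labels.append(((gray2[ii] >> 1) & 1, gray2[ii] & 1,
                           (gray2[qi] >> 1) & 1, gray2[qi] & 1))
    const = np.array(const)
    const /= np.sqrt(np.mean(np.abs(const) ** 2))   # average unit symbol energy
    return const, labels

def subset(const, labels, i, b):
    """X_b^i: all signal points whose label bit in position i equals b."""
    return const[[lab[i] == b for lab in labels]]

const, labels = gray_16qam()
dmin = np.min([abs(a - c) for k, a in enumerate(const) for c in const[k + 1:]])
# Gray check: every x in X_b^i has at most one point of the complementary subset at dmin
for i in range(4):
    for b in (0, 1):
        for x in subset(const, labels, i, b):
            n = np.sum(np.isclose(np.abs(subset(const, labels, i, 1 - b) - x), dmin))
            assert n <= 1
print("Gray property verified, dmin =", round(float(dmin), 4))
```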

Channel: I consider BICM transmission over fading AWGN channels. The equivalent baseband discrete-time transmission model can be written as
$$y_i = \sqrt{\bar\gamma}\, h_i x_i + z_i, \qquad (3.1)$$

where y_i ∈ C is the received sample, h_i ∈ R denotes the fading gain, and z_i ∈ C is the additive noise sample at discrete time i. The noise samples are independent and identically distributed (i.i.d.) according to a zero-mean complex Gaussian distribution. I further assume that interleaving effectively renders the fading coefficients h_i i.i.d. random variables. Here γ̄ represents the average SNR, so the instantaneous SNR is given by
$$\gamma_i = \bar\gamma h_i^2. \qquad (3.2)$$
To make matters concrete I consider the Nakagami-m distribution to model multipath fading. Adjustment of the fading parameter renders this distribution very flexible: it includes Rayleigh fading (m = 1), nonfading AWGN (m → ∞) and Rician fading (m = (K+1)²/(2K+1), with K denoting the Rice factor) channels as special cases. The corresponding distribution of the SNR (3.2) is [5] (Γ(·) denotes the Gamma function)
$$f_{\gamma \mid \bar\gamma, m}(\gamma) = \frac{m^m \gamma^{m-1}}{\bar\gamma^m \Gamma(m)} \exp\!\left(-\frac{m\gamma}{\bar\gamma}\right) \qquad (3.3)$$
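As a quick numerical sanity check of (3.2)-(3.3) (my own sketch; the values of m and γ̄ are arbitrary), the code below draws Nakagami-m fading gains, forms the instantaneous SNR γ_i = γ̄ h_i², and compares its histogram with the Gamma-type pdf of (3.3).

```python
import numpy as np
from math import gamma

def snr_pdf(g, g_bar, m):
    """pdf of the instantaneous SNR, eq. (3.3)."""
    return (m ** m * g ** (m - 1)) / (g_bar ** m * gamma(m)) * np.exp(-m * g / g_bar)

rng = np.random.default_rng(0)
m, g_bar, n = 2.0, 10.0, 200_000              # illustrative fading figure and average SNR

# Nakagami-m fading gains: h^2 is Gamma(m, 1/m), so that E[h^2] = 1
h = np.sqrt(rng.gamma(shape=m, scale=1.0 / m, size=n))
snr = g_bar * h ** 2                          # instantaneous SNR, eq. (3.2)

hist, edges = np.histogram(snr, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("mean SNR (should be close to", g_bar, "):", round(float(snr.mean()), 3))
print("max |histogram - eq.(3.3)|:",
      round(float(np.max(np.abs(hist - snr_pdf(centers, g_bar, m)))), 4))
```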

Receiver: at the receiver, the demapper (µ^{-1} in Figure 3.1) produces r bitwise reliability metrics Λ_π per symbol. The Λ_π are deinterleaved into Λ, which are then input to the decoder for the binary code. The bit metric for the jth bit of the ith symbol has the form
$$\Lambda = -\min_{a \in \mathcal{X}_{j,1}} \left| y_i - \sqrt{\bar\gamma}\, h_i a \right|^2 + \min_{a \in \mathcal{X}_{j,0}} \left| y_i - \sqrt{\bar\gamma}\, h_i a \right|^2, \qquad (3.4)$$
where X_{j,b} is the set of symbols with the jth bit in the binary label fixed to b. Equation (3.4) is the so-called max-log simplification of the log-likelihood ratio (LLR), which is known to provide practically maximum-likelihood decoding performance [4].
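A direct transcription of the max-log metric (3.4) might look like the following sketch (mine, using a Gray-mapped QPSK alphabet and illustrative values for the fading gain and average SNR).

```python
import numpy as np

LABELS = [(0, 0), (0, 1), (1, 1), (1, 0)]          # Gray-mapped QPSK labels
SYMBOLS = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

def maxlog_llrs(y, h, snr_bar):
    """Bit metrics of eq. (3.4): Lambda_j = -min over X_{j,1} + min over X_{j,0}."""
    llrs = []
    for j in range(2):                              # bit position in the label
        d2 = {b: min(abs(y - np.sqrt(snr_bar) * h * s) ** 2
                     for lab, s in zip(LABELS, SYMBOLS) if lab[j] == b)
              for b in (0, 1)}
        llrs.append(-d2[1] + d2[0])
    return llrs

rng = np.random.default_rng(3)
snr_bar, h = 8.0, 0.9                                # illustrative average SNR and fading gain
x = SYMBOLS[2]                                       # transmit the symbol labelled (1, 1)
y = np.sqrt(snr_bar) * h * x + (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
print([round(l, 2) for l in maxlog_llrs(y, h, snr_bar)])   # both metrics should be > 0
```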

3.2 Error-rate approximation using Union Bounding and Saddlepoint Approximation

The transmission channel between the encoder output c and the decoder input Λ can be considered as an equivalent binary-input output-symmetric (BIOS) channel (Figure 3.3), which is known as the equivalent BICM channel. Assuming maximum-likelihood decoding, the error rate of linear codes transmitted over this kind of channel is well approximated by the union bound in the region above the cutoff rate.

Figure 3.3: BIOS channel.

As can be seen in [4], the BER union bound for a convolutional code of rate R_c = k_c/n_c is given by
$$P_b \leq \frac{1}{k_c} \sum_{d_H = d_{H,\min}}^{\infty} w_{d_H}\, \mathrm{PEP}(d_H \mid \bar\gamma, m), \qquad (3.5)$$
where w_{d_H} denotes the total input weight of the error events at Hamming distance d_H, d_{H,min} denotes the free distance of the convolutional code, and PEP(d_H | γ̄, m) is the pairwise error probability corresponding to an error event with Hamming weight d_H. For BIOS channels, the PEP can be considered as the tail probability of a random variable generated by summing d_H i.i.d. LLRs Λ₁, ..., Λ_{d_H}. Choosing the all-one codeword as reference codeword, I get

$$\mathrm{PEP}(d_H \mid \bar\gamma, m) = P\!\left(\Delta_{d_H} = \sum_{i=1}^{d_H} \Lambda_i < 0 \,\Big|\, \bar\gamma, m\right). \qquad (3.6)$$

For computing such a probability, a common approach is to use the Laplace transform Φ_{Λ|γ̄,m}(s) of the pdf of Λ. That is,
$$P(\Delta_{d_H} < 0 \mid \bar\gamma, m) = \frac{1}{2\pi j} \int_{\alpha - j\infty}^{\alpha + j\infty} \left[\Phi_{\Lambda \mid \bar\gamma, m}(s)\right]^{d_H} \frac{ds}{s}, \qquad (3.7)$$
where j is the imaginary unit and α ∈ R, 0 < α < α_max, is chosen in the region of convergence of the integral. The computation of this integral is often not straightforward and requires the use of numerical methods. For this reason, a few bounds and estimates are proposed in [6]; one of the most important, for its simple form and accuracy, is the saddlepoint approximation, which gives an approximation of the previous result:

$$P(\Delta_{d_H} < 0 \mid \bar\gamma, m) \approx \frac{\left[\Phi_{\Lambda\mid\bar\gamma,m}(\hat s)\right]^{d_H + 0.5}}{\hat s \sqrt{2\pi d_H\, \Phi''_{\Lambda\mid\bar\gamma,m}(\hat s)}}, \qquad (3.8)$$


where Φ''_{Λ|γ̄,m}(s) denotes the second-order derivative of Φ_{Λ|γ̄,m}(s) and ŝ ∈ R⁺, 0 < ŝ < α_max, is the saddlepoint, defined through the first-order derivative as
$$\Phi'_{\Lambda\mid\bar\gamma,m}(\hat s) = 0. \qquad (3.9)$$
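To make (3.7)-(3.9) concrete, here is a small sketch (my own, for the simplest case of a Gaussian LLR Λ ~ N(µ, 2µ), whose Laplace transform is Φ(s) = e^{µ(s²−s)}): it locates the saddlepoint ŝ numerically from Φ'(ŝ) = 0 and evaluates (3.8), comparing the result with the exact tail probability, which in this Gaussian case is a Q-function.

```python
from math import erfc, sqrt, pi, exp

MU = 6.0      # illustrative value of mu = d^2 * gamma for the dominant Gaussian term

def phi(s):
    """Laplace transform of a Gaussian LLR Lambda ~ N(mu, 2*mu): E[exp(-s*Lambda)]."""
    return exp(MU * (s ** 2 - s))

def d1(f, s, h=1e-6):
    """Central-difference first derivative."""
    return (f(s + h) - f(s - h)) / (2 * h)

def d2(f, s, h=1e-4):
    """Central-difference second derivative."""
    return (f(s + h) - 2 * f(s) + f(s - h)) / h ** 2

def saddlepoint(lo=1e-6, hi=1.0 - 1e-6):
    """Solve Phi'(s_hat) = 0, eq. (3.9), by bisection on the first derivative."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if d1(phi, lo) * d1(phi, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def pep_saddlepoint(d_h):
    """Saddlepoint approximation of the PEP, eq. (3.8)."""
    s_hat = saddlepoint()
    return phi(s_hat) ** (d_h + 0.5) / (s_hat * sqrt(2 * pi * d_h * d2(phi, s_hat)))

def pep_exact(d_h):
    """Exact tail probability for the Gaussian case: Q(sqrt(d_H * mu / 2))."""
    return 0.5 * erfc(sqrt(d_h * MU / 2) / sqrt(2))

for d_h in (4, 6, 10):
    print(d_h, f"saddlepoint={pep_saddlepoint(d_h):.3e}", f"exact={pep_exact(d_h):.3e}")
```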

Now I first derive an approximation for the pdf of the log-likelihood ratios defined in (3.4), which will be useful to derive closed-form expressions for Φ_{Λ|γ̄,m}(s) for s ∈ R⁺ and arbitrary m. Since the channel is BIOS, the symmetry property
$$f_{\Lambda \mid c=b, \gamma}(\lambda) = f_{\Lambda \mid c=\bar b, \gamma}(-\lambda) \qquad (3.10)$$

holds. Here f_{Λ|c=b,γ}(λ) is the pdf when c = b is transmitted and b̄ is the complement of b. I consider the transmission of c = 1, and I evaluate the pdf as a weighted sum of pdfs conditioned on the bit position 1 ≤ j ≤ r, finding the expression
$$f_{\Lambda \mid c=1, \gamma}(\lambda) = \frac{2}{rM} \sum_{j=1}^{r} \sum_{x \in \mathcal{X}_{j,1}} f_{\Lambda \mid j,x,\gamma}(\lambda), \qquad (3.11)$$

where f_{Λ|j,x,γ}(λ) is a function depending on three parameters, since the condition 'c = 1' can also be represented by the two conditions on 'j, x'. In fact, c is the binary output to be transmitted, and j, x are the parameters of the codeword. Now the task is to find expressions for f_{Λ|j,x,γ}(λ). In principle this requires considering all signal points in the constellation; as in [7], a simpler way is to consider the set of all nearest signal points in X_{j,b̄} for a given x ∈ X_{j,b}. So I define a set of nearest competitive signal points of x and use it to approximate f_{Λ|j,x,γ}(λ). This corresponds to the approximation

$$\Lambda \approx -\left| y_i - \sqrt{\bar\gamma}\, h_i x_i \right|^2 + \min_{a \in \mathcal{A}_{j,x_i}} \left| y_i - \sqrt{\bar\gamma}\, h_i a \right|^2, \qquad (3.12)$$

for the log-likelihood ratio in (3.4), which is accurate in the SNR range in which the minimum Euclidean distance events dominate and the BER union bound converges to the true error rate. To this end I quote here the six non-equivalent configurations for the set of nearest competitive signal points A_{j,x} for QAM and PSK constellations (Figure 3.4). Starting from these I will determine the pdf f_{Λ|j,x,γ}(λ), and in the next section I will give the closed-form expressions for all cases. The pdf is expressed by
$$f_{\Lambda \mid j,x,\gamma}(\lambda) = \frac{d}{d\lambda} \int_{\mathcal{D}(\lambda \mid j,x,\gamma)} \frac{1}{\pi} \exp\!\left(-\|z\|^2\right) dz, \qquad (3.13)$$
where D(λ | j, x, γ) is the part of the complex plane in which the log-likelihood ratio is less than λ (in Figure 3.4, λ = 0).


Figure 3.4: Illustration of the possible sets of nearest competitive signal points A_{j,x} for general QAM and PSK constellations. The shaded areas indicate D(λ | j, x, γ) for λ = 0. For λ > 0 the boundaries move towards x; for λ < 0 the boundaries move towards the competitive signal points.


3.3 Closed-form expressions for results

From the expressions found in the previous section, I consider here the most important result of the analysis: the closed-form solution for the kth configuration of Figure 3.4. Table 3.1 specifies f_{Λ,k|·,γ}(λ), the probability density functions of the log-likelihood ratios, together with their Laplace transforms over the nonfading AWGN channel and over Nakagami-m fading channels. The expressions in Table 3.1 are obtained in [7]; the notation used is the following: N_{µ,σ²} is the Gaussian distribution with mean µ and variance σ²; erf(x) is the Gauss error function; u(x) is the unit step function. Then, substituting f_{Λ|j,x,γ}(λ) in (3.11) with the corresponding functions from Table 3.1, I can obtain the desired closed-form expression for f_{Λ|c=1,γ}(λ). In [7] the parameters of the most important constellations and of the different labellings (GL, SPL, MSPL, ML, SSPL) are given. Some results are reported in Figure 3.5.

After this, other important expressions can be obtained by substituting the approximations into the Laplace transform, which then becomes available in closed form. Using what was found in the previous calculation, I can analyse the expressions of Table 3.1 as follows:

1) Nonfading Channel: in this case m → ∞ and γ = γ̄. Using the expressions for f_{Λ,k|·,γ}(λ) from Table 3.1, Φ_{Λ,k|·,γ}(s) can be written as a weighted sum of the integrals [7] I₁(s) and I_{2|µ,ν}(s), whose closed forms are
$$I_{1|\mu}(s) = \frac{1}{4}\, e^{\mu(s^2 - s)} \left( 1 + \operatorname{erf}\!\left(\sqrt{\tfrac{\mu}{2}}\, s\right) \right)^{2} \qquad (3.14)$$

$$I_{2|\mu,\nu}(s) = \frac{1}{4}\, e^{\mu(s^2 - s)}\, \operatorname{erf}\!\left( \sqrt{\tfrac{\mu}{1 + (2\nu)^2}}\, s \right) \qquad (3.15)$$
where µ = d²γ and ν = tan(θ/2).

2) Nakagami-m Fading Channel: in this case
$$\Phi_{\Lambda,k\mid\cdot,\bar\gamma,m}(s) = \int_0^{\infty} f_{\gamma\mid\bar\gamma,m}(\gamma)\, \Phi_{\Lambda,k\mid\cdot,\gamma}(s)\, d\gamma,$$
for which expressions are given in Table 3.1 in terms of the integrals [7] I_{3|µ,ν}(s) and I_{4|µ,ν}(s), which can be approximated in closed form as
$$I_{3|\mu,\nu}(s) \approx \sum_{i=1}^{3} a_i \left( \frac{m}{m + \left[\mu(s - s^2) + b_i(\nu s)^2\right]\bar\gamma} \right)^{m}, \qquad (3.16)$$
where
$$[a_1, a_2, a_3] = \left[1, \tfrac{1}{6}, \tfrac{1}{2}\right], \qquad [b_1, b_2, b_3] = \left[0, 1, \tfrac{4}{3}\right],$$
and
$$I_{4|\mu,\nu}(s) \approx \sum_{i=1}^{6} a_i \left( \frac{m}{m + \left[\mu(s - s^2) + b_i(\nu s)^2\right]\bar\gamma} \right)^{m}, \qquad (3.17)$$
where
$$[a_1, \ldots, a_6] = \left[1, \tfrac{1}{3}, -1, \tfrac{1}{36}, \tfrac{1}{6}, \tfrac{1}{4}\right], \qquad [b_1, \ldots, b_6] = \left[0, 1, \tfrac{4}{3}, 2, \tfrac{7}{3}, \tfrac{8}{3}\right].$$
Here, µ = d² and ν ∈ {d/√2, d, d sin(θ/2)}.

Table 3.1: pdf of the log-likelihood ratios for transmission over the nonfading AWGN channel, and its Laplace transform over the nonfading AWGN channel and over the Nakagami-m channel, for the six different sets of Figure 3.4.

Figure 3.5: pdf of the reliability metrics for BICM transmission over the nonfading AWGN channel for different constellations and labellings. Lines are the pdf approximations derived above, while markers represent histograms estimated by simulation.

Using these results I can evaluate the BER, which is an important performance indicator for BICM transmission. To do this I consider the case of asymptotically high SNR, which further simplifies the expressions for the pdf of the log-likelihood ratios and its Laplace transform; these simplified expressions are reported in Table 3.2. From them I can make a direct evaluation of the PEP for the two cases, and from the PEP I obtain the BER in the usual way.

1) Nonfading Channel: for transmission over the nonfading AWGN channel the approximation is
$$\mathrm{PEP}(d_H \mid \gamma) = \frac{N_1^{d_H}}{d_1 \sqrt{\pi d_H \gamma}}\, \exp\!\left( -\frac{1}{4}\, d_H d_1^2 \gamma \right). \qquad (3.18)$$
2) Nakagami-m Fading Channel: for this case the approximation gives
$$\mathrm{PEP}(d_H \mid \bar\gamma, m) = \frac{\Gamma(m d_H + 1/2)}{2\sqrt{\pi}\,\Gamma(m d_H + 1)} \left( \sum_{l=1}^{l_{\max}} \frac{N_l}{d_l^{2m}} \right)^{d_H} \left( \frac{4m}{\bar\gamma} \right)^{m d_H}. \qquad (3.19)$$
Here I make the following assumptions [7]:
$$l_{\max} = \begin{cases} q_{\max}, & \text{for QAM} \\ M/2, & \text{for PSK} \end{cases}$$
$$N_l = \begin{cases} \dfrac{2\,(n_{1,l} + 2n_{2,l} + 2n_{3,l} + 3n_{4,l} + 4n_{5,l})}{rM}, & \text{for QAM} \\[2mm] \dfrac{2}{rM}\,(n_{1,l} + 2n_{6,l}), & \text{for PSK} \end{cases}$$
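The following sketch (mine; the truncated weight spectrum w_dH, the distance d_1 and the multiplicity N_1 are illustrative placeholders, not the tabulated values of [7]) shows how the asymptotic PEP expressions (3.18)-(3.19) plug into the truncated union bound (3.5) to give quick BER estimates. As discussed above, the bound is only meaningful at sufficiently high SNR.

```python
from math import gamma, sqrt, pi, exp

# Illustrative parameters (placeholders, not the tabulated values of [7]):
KC = 1                                        # k_c of the rate k_c/n_c convolutional code
W = {10: 36, 12: 211, 14: 1404, 16: 11633}    # truncated weight spectrum w_dH (example values)
D1, N1 = 2 ** 0.5, 1.0                        # dominant distance and multiplicity (illustrative)

def pep_awgn(d_h, g):
    """Asymptotic PEP over the nonfading channel, eq. (3.18)."""
    return N1 ** d_h / (D1 * sqrt(pi * d_h * g)) * exp(-0.25 * d_h * D1 ** 2 * g)

def pep_nakagami(d_h, g_bar, m, dists=(2 ** 0.5,), mult=(1.0,)):
    """Asymptotic PEP over Nakagami-m fading, eq. (3.19)."""
    s = sum(n / d ** (2 * m) for n, d in zip(mult, dists))
    return (gamma(m * d_h + 0.5) / (2 * sqrt(pi) * gamma(m * d_h + 1))
            * s ** d_h * (4 * m / g_bar) ** (m * d_h))

def ber_union_bound(pep, *args):
    """Truncated union bound, eq. (3.5)."""
    return sum(w * pep(d_h, *args) for d_h, w in W.items()) / KC

for snr_db in (8, 10, 12):
    g = 10 ** (snr_db / 10)
    print(snr_db, "dB:",
          f"AWGN {ber_union_bound(pep_awgn, g):.2e}",
          f"Nakagami m=1 {ber_union_bound(pep_nakagami, g, 1.0):.2e}")
```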


Table 3.2: asymptotic expressions for the pdf of the log-likelihood ratios for transmission over the nonfading AWGN channel, and for its Laplace transform over the nonfading AWGN channel and over the Nakagami-m channel, for the six different sets of Figure 3.4.


3.4 Numerical results and discussions

Now I compare simulated and analytical BER results as a function of the bitwise SNR, denoted by γ̄_b as in [7]. Figure 3.6 shows analytical (lines) and simulated (markers) BER results for different constellations and labellings for transmission over the nonfading AWGN channel. Solid lines represent the BER union bound, while dashed lines represent the asymptotic approximation for d_H = d_{H,min}. The BER union bound is fairly tight for all modulation schemes and BERs below about 10⁻⁴. Likewise, the proposed simple expression (3.18) accurately predicts the asymptotic error-rate performance at high SNR.

Figure 3.6: BER of BICM transmission over the nonfading channel for a 64-state convolutional code of rate 1/2. Solid lines: BER union bound. Dashed lines: asymptotic analysis. Markers: simulation results.

I then compare analytical and simulated BER results for BICM transmission over fading channels with different constellations and labelling rules. Figure 3.7 shows BER curves obtained from the BER union bound with the exact closed-form solutions for the integrals I_{3|µ,ν}(s) and I_{4|µ,ν}(s) (solid lines) and with their approximations (3.16) and (3.17) (dashed lines). Again there is an excellent match between the results from analysis and from simulations, which confirms the validity of the closed-form BER approximations.

Figure 3.7: BER of BICM transmission over the Nakagami-m fading channel for a 64-state convolutional code of rate 1/2. Solid lines: BER union bound using the exact closed-form solutions for the integrals. Dashed lines: BER union bound using the approximations. Markers: simulation results.

Finally, in Figure 3.8 the asymptotic BER results from (3.19) with d_H = d_{H,min} (solid lines) are plotted together with the BER union bound (markers) for the same transmission scenarios as in Figure 3.7. It can be seen that the asymptotic results correctly predict the coding and fading gains of the BICM scheme. Hence I conclude that the simple expressions found here are very valuable to quickly determine the asymptotic performance of BICM transmission over fading channels.


Figure 3.8: BER of BICM transmission over the Nakagami-m fading channel for a 64-state convolutional code of rate 1/2. Solid lines: asymptotic analysis with (3.19). Markers: BER union bound.


Chapter 4

Introduction of iterative decoding

In this chapter, I study BICM with iterative decoding (BICM-ID). Inspired by the advent of turbo-codes and iterative decoding, BICM-ID was proposed in order to improve the performance of BICM by exchanging messages between the demodulator and the decoder of the underlying binary code. As illustrated in Figure 4.1 and shown in [8], [9], BICM-ID can provide some performance advantages using convolutional codes combined with mappings different from Gray.

Such a system, with an 8-state convolutional code of rate 2/3 and 8-PSK modulation, using hard-decision feedback, achieves an improvement over the conventional BICM scheme that exceeds 1 dB for a fully-interleaved Rayleigh flat-fading channel and 1.5 dB for a channel with additive white Gaussian noise. This robust performance makes BICM with iterative decoding suitable for both types of channels [8]. In this analysis I will use only a system with these characteristics (8-state convolutional code of rate 2/3 and 8-PSK modulation);

the extension to other coding rates and modulation types is straightforward.

4.1 System description

The system block diagram is shown in Figure 4.2. Note the addition of a feedback loop compared with the conventional decoder. I represent the output of the encoder by
$$C = \left[ c_0^1, c_0^2, c_0^3, \cdots, c_t^1, c_t^2, c_t^3, \cdots \right] = [C_0, \cdots, C_t, \cdots], \qquad (4.1)$$
where c_t^i is the ith output bit at time position t and C_t = [c_t^1, c_t^2, c_t^3]. Three independent bit interleavers permute the bits to break the correlation of the fading channel


Figure 4.1: BER for BICM-ID in the AWGN channel with the (5, 7)₈ convolutional code and 16-QAM with set partitioning mapping. Curve labels denote the iteration number; the upper curve corresponds to iteration 1, decreasing down to iteration 20. Dashed lines: results for Gray mapping.


Figure 4.2: Block diagram of the BICM-ID scheme.

as well as the correlation between the bits in the same symbol. At the deinterleavers, the permutation is inverted. The output of the interleavers is represented by
$$V = \left[ v_0^1, v_0^2, v_0^3, \cdots, v_t^1, v_t^2, v_t^3, \cdots \right] = [V_0, \cdots, V_t, \cdots], \qquad (4.2)$$
where v_t^i is the ith output bit at time position t and V_t = [v_t^1, v_t^2, v_t^3]. The interleaving is followed by a signal labelling map µ, an isomorphism between a 3-tuple V_t = [v_t^1, v_t^2, v_t^3] and a signal constellation point x_t,
$$x_t = \mu(V_t), \quad x_t \in \mathcal{X}, \qquad (4.3)$$

where X is the signal set of 8-PSK. The Rayleigh-fading channel with coherent detection gives the output
$$y_t = \rho_t x_t + z_t, \qquad (4.4)$$
as seen in the previous chapter.


4.2 Conventional decoding

For each received signal y_t, a log-likelihood function is calculated for the two possible binary values of each coded bit:
$$\lambda(v_t^i = b) = \log\!\left( \sum_{x \in \mathcal{X}(i,b)} P(x \mid y_t, \rho_t) \right) = \log\!\left( \sum_{x \in \mathcal{X}(i,b)} P(y_t \mid x, \rho_t)\, P(x) \right), \quad i = 1, 2, 3;\ b = 0, 1, \qquad (4.5)$$

where the signal subset X(i, b) = {µ(v¹, v², v³) | vⁱ = b} and the terms common to all i and b are disregarded. In conventional decoding, the a priori probability P(x) is assumed equal for any x ∈ X(i, b). Then, the bit metric becomes

$$\lambda(v_t^i = b) = \log \sum_{x \in \mathcal{X}(i,b)} P(y_t \mid x, \rho_t) \approx \max_{x \in \mathcal{X}(i,b)} \log P(y_t \mid x, \rho_t) = -\min_{x \in \mathcal{X}(i,b)} \| y_t - \rho_t x \|^2, \qquad (4.6)$$

where a constant scalar is disregarded and the approximation is good at high SNR. At the decoder (e.g., a Viterbi decoder), the branch metric corresponding to each of the eight possible 3-tuples is the sum of the corresponding bit metrics after deinterleaving.

4.3 Iterative decoding with hard-decision feedback

Convolutional encoding introduces redundancy and memory into the signal sequence. Yet the equally-likely assumption made before fails to use this information, primarily because it is difficult to specify in advance of any decoding. The a priori information is reflected in the decoding results and therefore can be included through iterative decoding. One approach is to use the Viterbi algorithm (see Appendix A) with soft outputs, although this is computationally complex. Instead, I consider only binary-decision feedback for the calculation of the bit metrics in the second round of decoding. For example, to calculate λ(v_t^1 = 0) I assume, for any x = µ(v¹ = 0, v², v³) ∈ X(1, 0),
$$P(x) = \begin{cases} 1, & \text{if } v^1 = 0,\ v^2 = \hat v_t^2,\ v^3 = \hat v_t^3 \\ 0, & \text{otherwise,} \end{cases} \qquad (4.7)$$
where v̂_t^2 and v̂_t^3 are the first-round decoding decisions. Then the bit metric with the decision feedback becomes
$$\lambda(v_t^1 = 0) = -\left\| y_t - \rho_t\, \mu(0, \hat v_t^2, \hat v_t^3) \right\|^2. \qquad (4.8)$$
The bit metrics for the other bit positions and bit values follow similarly. Given the feedback v̂_t^2 and v̂_t^3, the Euclidean distance between the signals µ(0, v̂_t^2, v̂_t^3) and µ(1, v̂_t^2, v̂_t^3) can be significantly larger than the minimum Euclidean distance between the signals in the subset X(1, 0) and those in X(1, 1). With an appropriate signal labelling, the minimum Euclidean distance between coded sequences can thus be made large for BICM-ID. This is the key reason why BICM-ID outperforms conventional BICM. To avoid severe error propagation, the fed-back bits should be independent of the bit for which the bit metric is calculated. This is made possible by the independent bit interleavers - the three bits making up a channel symbol are typically far apart in the coded sequence. This is clearly a feature not available in a symbol-interleaved system.
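The sketch below (my own, using an 8-PSK alphabet with a plain natural-binary labelling purely for illustration) contrasts the first-round metric (4.6) with the feedback metric (4.8): once v̂_t^2 and v̂_t^3 are available, the metric for bit 1 reduces to the distance from a single candidate symbol, and the decision margin between the two hypotheses typically grows.

```python
import numpy as np

SYMBOLS = np.exp(2j * np.pi * np.arange(8) / 8)
LABELS = [tuple((k >> (2 - i)) & 1 for i in range(3)) for k in range(8)]  # natural binary labels

def mu(v):
    """Labelling map: 3-tuple of bits -> 8-PSK point."""
    return SYMBOLS[LABELS.index(tuple(v))]

def metric_first_round(y, rho, i, b):
    """Eq. (4.6): max-log metric without a priori information."""
    return -min(abs(y - rho * s) ** 2 for lab, s in zip(LABELS, SYMBOLS) if lab[i] == b)

def metric_feedback(y, rho, i, b, v_hat):
    """Eq. (4.8): metric for bit i given hard decisions on the other two bits."""
    v = list(v_hat)
    v[i] = b
    return -abs(y - rho * mu(v)) ** 2

rng = np.random.default_rng(7)
v_tx = (1, 0, 1)
rho = 0.8                                   # fading gain (illustrative)
y = rho * mu(v_tx) + 0.2 * (rng.normal() + 1j * rng.normal())

first = [metric_first_round(y, rho, 0, b) for b in (0, 1)]
v_hat = list(v_tx)                          # pretend the first round decoded bits 2 and 3 correctly
fed = [metric_feedback(y, rho, 0, b, v_hat) for b in (0, 1)]
print("first round margin  :", round(first[1] - first[0], 3))
print("with feedback margin:", round(fed[1] - fed[0], 3))
```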

4.4 Signal labelling

The performance of BICM-ID strongly depends on the signal labelling methods.

It can be seen that a mixed labelling method outperforms both Gray labelling and set-partitioning labelling. With mixed labelling, the eight sequential labels for 8-PSK signals are {000, 001, 010, 011, 110, 111, 100, 101}. The details of the labelling design are discussed in [9]. Here I report only the results.
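As a minimal illustration (mine, not from [9]) of the labelling just quoted, the sketch below assigns the eight sequential mixed labels to the 8-PSK points in order and prints, for each bit position, the smallest distance between the two points that differ only in that bit, which is the distance that the feedback metric (4.8) ends up exploiting, and compares it with a Gray labelling.

```python
import numpy as np
from itertools import product

SYMBOLS = np.exp(2j * np.pi * np.arange(8) / 8)

MIXED = ['000', '001', '010', '011', '110', '111', '100', '101']  # sequential 8-PSK labels
GRAY  = ['000', '001', '011', '010', '110', '111', '101', '100']  # for comparison

def feedback_distance(labels, i):
    """Smallest distance between the two points that differ only in bit i
    (the distance seen by the metric (4.8) once the other bits are fed back)."""
    point = {lab: SYMBOLS[k] for k, lab in enumerate(labels)}
    dists = []
    for other in product('01', repeat=2):
        a = list(other); a.insert(i, '0')
        b = list(other); b.insert(i, '1')
        dists.append(abs(point[''.join(a)] - point[''.join(b)]))
    return min(dists)

for name, labels in (("mixed", MIXED), ("Gray", GRAY)):
    print(name, [round(feedback_distance(labels, i), 3) for i in range(3)])
```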

4.5 Simulation results

Now I show the BER performance of 8-PSK BICM-ID for both Rayleigh fading and AWGN channels. Other types of schemes are also included for comparison.

The performance for Rayleigh fading channels is shown in Figure 4.3.


Figure 4.3: The performance of BICM-ID with the 8-state convolutional code of rate-2/3 and 8-PSK modulation for Rayleigh fading channels.

Figure 4.4: The performance of BICM-ID with the 8-state convolutional code of rate-2/3 and 8-PSK modulation for AWGN channels.


Compared with the BICM scheme described in Chapter 2, there is a 1 dB performance degradation after the first round of decoding (without decision feedback) of BICM-ID, due to the mixed labelling. However, with a second round of decoding, BICM-ID quickly catches up and outperforms BICM by 1 dB. A third round of decoding adds a slight improvement. The performance for AWGN channels is shown in Figure 4.4. The SNR gap between BICM-ID and the TCM scheme is only 0.2 dB. The gain of BICM-ID over BICM is more than 1.5 dB.

