2. Important aspects
For a better understanding of the problems related to RF design, we introduce some theoretical background. In particular, we focus on the parameters that describe the main aspects of RF circuit requirements. From a simplified point of view, an RF design involves the following characteristics, among which the best trade-off must be found according to the project's specifications: noise, power, frequency, gain, supply voltage and linearity.
2.1. Noise
2.1.1. Noise types
Noise can be defined as everything except the desired signal. However, we should make a clear distinction between artificial and natural noise. The first category includes all sorts of interferers characterized by a deterministic behavior, such as the 50-Hz AC mains signal; the second includes unpredictable random noise. The latter is by far the most important, and it is useful to understand its nature in order to reduce its impact on our device. First of all, the random nature of such noise sources makes a deterministic time-domain analysis impossible, hence we need another type of noise model. The only solution is a statistical description of the phenomenon in both the frequency and the amplitude domains. Consequently, we characterize noise by a Power Spectral Density (PSD) and by a Probability Density Function (PDF), respectively.
The main types of electrical noise of interest to us are thermal, shot and flicker noise. Thermal noise is due to resistors, to the base and emitter resistances of BJTs and to the resistance of the MOSFET channel. Its power is directly proportional to the absolute temperature and to the equivalent noise bandwidth. Another type of noise is that generated by charges flowing through a potential barrier, known as Schottky or shot noise; it is found in the forward-biased base-emitter junction of bipolar transistors. Its power is proportional to the average current flowing across the barrier. In addition we have the noise caused by charges trapped at the oxide-semiconductor interface of MOSFETs, whose PSD is inversely proportional to the frequency, called flicker noise. These charges are released with a characteristic lifetime of tens of microseconds, which explains the low-frequency nature of this noise source. One might therefore expect a negligible influence at high frequencies, but nonlinear devices such as mixers can translate the noise PSD to higher frequencies, thereby affecting our signal. For this reason it should be taken into account in an RF project as well. Moreover, MOS devices are affected by a further noise contribution whose effects become visible at very high frequencies, near the transition frequency f_T of the transistor. When the device operates in inversion, any fluctuation of the channel charge induces a current in the gate through the capacitive coupling between gate and channel. This effect occurs in combination with another one: at very high frequency the impedance seen at the gate is not purely capacitive; there is a significant delay between the signal at the gate and that in the channel, which translates into a phase shift, modeled as an input impedance with a real part. Both effects are modeled by a current source and a conductance in parallel with the gate-source capacitance¹.
2.1.2. Noise parameters and noise model
Fundamentally, we have two ways of characterizing a noisy two-port circuit. The first is by means of two input generators, a series voltage source e_n and a parallel current source i_n, that model the noise behavior of the circuit, followed by the noiseless circuit (as if it were ideal). In Fig. 2.1 the source is also shown as noisy, with its current noise generator i_s shunted by the source admittance Y_s.

Figure 2.1 Model for a noisy two-port (source noise generator i_s with admittance Y_s, input-referred generators e_n and i_n, followed by the noiseless two-port)
These input-referred noise generators are, in general, correlated. Accordingly, we can express i_n as the sum of i_c and i_u, namely the parts that are correlated and uncorrelated with e_n. Now, considering the correlated part i_c as proportional to e_n, we can write:

i_c = Y_c e_n = (G_c + jB_c) e_n        (2.1)

where Y_c has the dimensions of an admittance, with a real part G_c and an imaginary part B_c.
c1D. K. Shaeffer, T.H. Lee, “A 1.5-V, 1.5-GHz CMOS Low Noise Amplifier” IEEE J. Solid-State Circuits, vol. 32, pp.
745-759, May 1997
The second possibility is to describe the noisy circuit with a parameter called the noise factor, defined as:

F = SNR_in / SNR_out        (2.2)

where the numerator is the signal-to-noise ratio at the input of the two-port circuit and the denominator is the signal-to-noise ratio at the output. For a noisy circuit this quantity is greater than one (greater than 0 dB when expressed in decibels). It represents a measure of the degradation of the SNR through the two-port.
The Noise Figure is the Noise Factor expressed in dB:

NF = 10 log10(F)        (2.3)
An important formula, useful for computing the total noise factor of a chain of cascaded stages, is the well-known Friis formula:

F_tot = F_1 + (F_2 − 1)/A_p1 + (F_3 − 1)/(A_p1 A_p2) + ... + (F_m − 1)/(A_p1 A_p2 ... A_p(m−1))        (2.4)

where F_i and A_pi are the noise factor and the available power gain of the i-th stage. It shows clearly that the noise figure of the first stage (namely the LNA) has a predominant influence on the overall noise figure of the system if all the stages amplify the signal. On the other hand, if a stage attenuates the signal, then the NF of the following stage is "magnified".
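As a numerical illustration of (2.4), the short Python sketch below computes the cascaded noise figure of a hypothetical three-stage receiver front end; the stage gains and noise figures used here are invented example values, not specifications of this design.

import math

def friis_noise_factor(stages):
    """Cascaded noise factor from (2.4).

    stages: list of (noise_figure_dB, power_gain_dB) tuples, first stage first.
    Returns the total noise factor (linear).
    """
    f_total = 0.0
    gain_product = 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = 10 ** (nf_db / 10.0)                 # noise factor, linear
        if i == 0:
            f_total = f
        else:
            f_total += (f - 1.0) / gain_product  # divided by the gain preceding this stage
        gain_product *= 10 ** (gain_db / 10.0)   # available power gain, linear
    return f_total

# Hypothetical example: LNA, mixer, IF amplifier (values are illustrative only).
stages = [(2.0, 15.0), (10.0, 8.0), (6.0, 20.0)]
print("NF_tot = %.2f dB" % (10 * math.log10(friis_noise_factor(stages))))

Lowering the LNA noise figure or raising its gain in this example immediately lowers the cascade value, which is exactly the point made above.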
We can rewrite the expression (2.2) of F as a function of the input noise generators e_n and i_n (decomposed into correlated and uncorrelated parts) and of the source noise i_s (Fig. 2.1). At this point we introduce some new noise parameters: equivalent resistance and conductance. The effect of a noise generator of the device (voltage or current) on the overall noise is the same as that of the thermal noise produced by a resistance (conductance) if e_n^2 = 4kT Δf R_n and i_u^2 = 4kT Δf G_u.
We will omit some passages, which are explained in [3]. Replacing the noise generators with the above-mentioned resistance and conductance, we obtain a final expression of the Noise Factor as a function of the source admittance and of four noise parameters linked to the noisy circuit:
F = 1 + ( G_u + R_n [ (G_c + G_s)^2 + (B_c + B_s)^2 ] ) / G_s        (2.5)
The meaning of the parameters is explained below:
Parameter      Meaning
G_s, B_s       Source conductance, susceptance
R_n            Equivalent resistance of e_n
G_u            Equivalent conductance of i_u
G_c, B_c       Correlation conductance, susceptance

Table 2.1 Two-port noise modeling: description of the parameters
Our task is to minimize the Noise Factor. Differentiating (2.5) with respect to the source susceptance and conductance and equating the derivatives to zero, we obtain the following important conditions on the source admittance:

B_s = −B_c = B_opt        (2.6)

G_s = sqrt( G_u/R_n + G_c^2 ) = G_opt        (2.7)

From these equations it is clear that, in order to minimize the noise, the source admittance must differ from the one that realizes the maximum power transfer condition. In fact B_c is in general different from the input susceptance of the circuit, and the same holds for the term under the square root compared with the input conductance. Replacing the terms in (2.5) with the expressions (2.6) and (2.7), we obtain the analytical form of the Minimum Noise Factor:
F_min = 1 + 2 R_n ( G_opt + G_c ) = 1 + 2 R_n ( sqrt( G_u/R_n + G_c^2 ) + G_c )        (2.8)

This result is useful for design purposes because it gives the theoretical minimum noise achievable for a given technology and architecture as a function of four parameters that can be related to technological parameters and circuit quantities (such as g_m). In this way we have a useful aid that tells us in which direction we have to change our design in order to reduce noise.
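To make the use of (2.5)-(2.8) concrete, the following Python sketch evaluates the noise figure for an arbitrary source admittance and the minimum achievable value; the four noise parameters in the example are invented placeholder numbers, not extracted from any real device.

import math

def noise_factor(Gs, Bs, Rn, Gu, Gc, Bc):
    """Noise factor of a two-port versus source admittance Ys = Gs + jBs, eq. (2.5)."""
    return 1.0 + (Gu + Rn * ((Gc + Gs) ** 2 + (Bc + Bs) ** 2)) / Gs

def optimum_source(Rn, Gu, Gc, Bc):
    """Optimum source admittance (2.6)-(2.7) and minimum noise factor (2.8)."""
    Gopt = math.sqrt(Gu / Rn + Gc ** 2)
    Bopt = -Bc
    Fmin = 1.0 + 2.0 * Rn * (Gopt + Gc)
    return Gopt, Bopt, Fmin

# Placeholder noise parameters (illustrative values only): ohms and siemens.
Rn, Gu, Gc, Bc = 50.0, 1e-4, 2e-3, -3e-3
Gopt, Bopt, Fmin = optimum_source(Rn, Gu, Gc, Bc)
print("Y_opt = %.3e %+.3ej S" % (Gopt, Bopt))
print("NF_min = %.2f dB" % (10 * math.log10(Fmin)))
print("NF at a 20 mS source = %.2f dB"
      % (10 * math.log10(noise_factor(0.02, 0.0, Rn, Gu, Gc, Bc))))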
2.2. Linearity
We start with the definition of linearity and then explain why an LNA should be linear, along with the problems arising in a nonlinear system. Let us assume that a system responds to an input signal x_1(t) with an output signal y_1(t) and to x_2(t) with y_2(t). If the system responds to the input signal a·x_1(t) + b·x_2(t) with the output a·y_1(t) + b·y_2(t) for every combination of a and b, then we say that the system is linear. In all other cases the system is nonlinear. In electronics we often use a linearized model of the active components. This representation is valid only for small-signal analysis. To take higher-order effects into account we can express the input-output characteristic as a polynomial:

y(t) ≈ α_1 x(t) + α_2 x^2(t) + α_3 x^3(t) + ...        (2.9)

with as many coefficients as the order of approximation we want to use. Normally the input-output characteristic has a saturating behavior, like the one shown below in Fig. 2.2. Nonlinearity has important (and negative) effects on the signal, which we summarize here.
2.2.1. Harmonics
If we stimulate the system with a sinusoidal signal x(t) = A cos(ωt), we obtain the following output:

y(t) = (α_2 A^2)/2 + (α_1 A + (3/4) α_3 A^3) cos(ωt) + ((α_2 A^2)/2) cos(2ωt) + ((α_3 A^3)/4) cos(3ωt)        (2.10)

We observe the presence of a DC term plus other terms whose frequencies are multiples of the fundamental ω. If the system has an odd symmetry, as for example a differential amplifier, the even harmonics are cancelled unless mismatch errors are present. For this reason we will use a differential architecture in our LNA.
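As a quick numerical check of (2.10), the sketch below pushes a single tone through the cubic polynomial (2.9) and extracts the harmonic amplitudes with an FFT; the coefficient values are arbitrary illustrative numbers.

import numpy as np

# Illustrative polynomial coefficients (arbitrary values) and tone parameters.
a1, a2, a3 = 10.0, 1.0, -2.0
A, f0, fs, N = 0.1, 1e6, 64e6, 4096

t = np.arange(N) / fs
x = A * np.cos(2 * np.pi * f0 * t)
y = a1 * x + a2 * x**2 + a3 * x**3           # nonlinear characteristic (2.9)

Y = np.abs(np.fft.rfft(y)) / (N / 2)         # single-sided amplitude spectrum
bins = [round(k * f0 * N / fs) for k in (1, 2, 3)]
print("fundamental :", Y[bins[0]], "expected:", abs(a1 * A + 0.75 * a3 * A**3))
print("2nd harmonic:", Y[bins[1]], "expected:", abs(a2 * A**2 / 2))
print("3rd harmonic:", Y[bins[2]], "expected:", abs(a3 * A**3 / 4))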
2.2.2. Gain compression
Another typical effect is the reduction of the gain as the input signal amplitude becomes larger. This effect is described by the α_3 coefficient, whose value has to be negative (assuming that the small-signal gain α_1 is positive) to account for the saturating trend. We define here a useful parameter to quantify the saturation of the system. On the input-output characteristic we can trace the straight line describing the first-order small-signal gain together with the real curve, as illustrated in Fig. 2.2.

Figure 2.2 Definition of the compression point (output level 20 log A_out versus input level 20 log A_in; the real curve falls 1 dB below the ideal line at CP_1)

The point of the input axis at which the real curve falls 1 dB below the ideal line is called the "Input 1-dB compression point" (CP_1). For a third-order approximation it can be shown that:

CP_1 = sqrt( 0.145 |α_1/α_3| )        (2.11)
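As a sanity check of (2.11), the sketch below sweeps the input amplitude of the third-order model (2.9), finds where the fundamental gain has dropped by 1 dB, and compares the result with the closed-form expression; the coefficients are again arbitrary example values.

import math

a1, a3 = 10.0, -1.0                       # illustrative coefficients, a3 < 0

def fundamental_gain_db(A):
    """Gain of the fundamental tone for the cubic model, from (2.10)."""
    return 20 * math.log10(abs(a1 + 0.75 * a3 * A**2))

small_signal_db = 20 * math.log10(abs(a1))
A = 1e-4
while fundamental_gain_db(A) > small_signal_db - 1.0:
    A *= 1.0001                           # crude sweep; accurate enough for an illustration

print("CP1 (numerical sweep): %.4f" % A)
print("CP1 from (2.11):       %.4f" % math.sqrt(0.145 * abs(a1 / a3)))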
2.2.3. Intermodulation
A detrimental effect of nonlinearity is the following: if two sinusoidal signals with the same amplitude A and pulsations ω_1 and ω_2 are injected into the circuit (two-tone test), we observe at the output further frequencies that are not harmonics of ω_1 and ω_2. Such frequencies are called intermodulation products (IM). For a third-order two-port we obtain at the output the sum of the following terms:
IM products of 1st order:

( α_1 A + (9/4) α_3 A^3 ) ( cos ω_1 t + cos ω_2 t )        (2.12)
IM products of 2nd order:

(α_2 A^2 / 2) [ cos 2ω_1 t + cos 2ω_2 t + 2 cos (ω_1 ± ω_2) t ]        (2.13)
IM products of 3rd order:

(α_3 A^3 / 4) [ cos 3ω_1 t + cos 3ω_2 t + 3 cos (2ω_1 ± ω_2) t + 3 cos (ω_1 ± 2ω_2) t ]        (2.14)
We can observe that, assuming the circuit has a bandpass behavior as illustrated in Fig. 2.3, the only terms that fall very close to the band of interest are, apart from the first-order IM, those with pulsations 2ω_1 − ω_2 and 2ω_2 − ω_1.

Figure 2.3 Intermodulation terms due to nonlinearity (input tones at ω_1, ω_2 and output products at 2ω_1 − ω_2 and 2ω_2 − ω_1)
These terms are also responsible for interference between signals in different frequency ranges. In fact, it can happen that two strong interferers lie out of band, but their third-order intermodulation products (IM3) fall into the signal band, thus disturbing it (Fig. 2.4).

Figure 2.4 Interference due to intermodulation terms (two out-of-band interferers at ω_1 and ω_2 generate IM3 products at 2ω_1 − ω_2 and 2ω_2 − ω_1 that fall inside the signal band)
We can see in formula (2.12) that, as the input amplitude A increases, the amplitude of the fundamental at the output grows linearly, while the third-order intermodulation products (2.14) are proportional to A^3.
For this reason an important parameter is defined here: the input signal amplitude for which the extrapolated small-signal fundamental and the IM3 products have equal amplitude. Equating the coefficients of (2.12) and (2.14) and neglecting the second term in (2.12), we obtain:

α_1 A = (3/4) |α_3| A^3   →   A = sqrt( (4/3) |α_1/α_3| ) = IP_3        (2.15)
The parameter IP_3 is called the "third-order input intercept point" (IIP3). Plotting the fundamental and IM3 output amplitudes of (2.12) and (2.14) as functions of the input amplitude, with both axes in dB, we obtain the graph in Fig. 2.5.

Figure 2.5 Fundamental and IM3 amplitudes at the output (the two extrapolated lines cross at IIP3)

We can note that the slope of the IM3 curve is three times that of the fundamental term.
These straight lines are actually an approximation, since a real two-port saturates beyond a certain input amplitude and its behavior changes accordingly. However, they can also be drawn for a real device simply by extrapolating the values measured with a small input signal.
In conclusion, comparing (2.11) with (2.15), we can relate the parameters CP_1 and IP_3 as follows:

CP_1 / IP_3 = sqrt( 0.145 · 3/4 ) = 0.3298 ≈ −9.6 dB        (2.16)
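The following sketch illustrates (2.15) and (2.16) numerically: it computes IIP3 and CP_1 for the cubic model and checks that CP_1 lies about 9.6 dB below IIP3; the coefficient values are arbitrary examples.

import math

a1, a3 = 10.0, -1.0                             # illustrative coefficients

iip3 = math.sqrt(4.0 / 3.0 * abs(a1 / a3))      # eq. (2.15), in volts
cp1  = math.sqrt(0.145 * abs(a1 / a3))          # eq. (2.11), in volts

print("IIP3 = %.3f V" % iip3)
print("CP1  = %.3f V" % cp1)
print("CP1 - IIP3 = %.2f dB" % (20 * math.log10(cp1 / iip3)))   # about -9.6 dB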
2.2.4. Linearity of cascaded stages
A useful formula for calculating the linearity of a cascaded system is given below²:

1/IP_3,TOT^2 ≈ 1/IP_3,1^2 + α_1^2/IP_3,2^2 + α_1^2 β_1^2/IP_3,3^2        (2.17)

where IP_3,i denotes the third-order input intercept point of the i-th stage, expressed in volts, and α_1 and β_1 represent the gains of the first and the second stage, respectively. It can be seen that, if the individual stage gains are greater than 1, the later stages have a stronger influence on the overall linearity than the early ones. Consequently the linearity requirements on the LNA are not very stringent; on the contrary, the mixer must exhibit a good linear behavior.

² B. Razavi, "RF Microelectronics", Prentice Hall PTR, p. 24, July 1997.
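A small Python helper, sketched below, evaluates (2.17) for a hypothetical LNA + mixer cascade; the gain and IIP3 numbers are placeholders chosen only to show how a high LNA gain degrades the cascade IIP3.

import math

def cascaded_iip3(stages):
    """Cascaded input intercept point from (2.17).

    stages: list of (voltage_gain_linear, iip3_volts) tuples, first stage first.
    Returns the overall IIP3 in volts.
    """
    inv_sq = 0.0
    gain_product = 1.0
    for gain, iip3 in stages:
        inv_sq += gain_product ** 2 / iip3 ** 2   # gain preceding this stage squares into its term
        gain_product *= gain
    return 1.0 / math.sqrt(inv_sq)

# Placeholder example: LNA (gain 10, IIP3 0.5 V) followed by a mixer (gain 3, IIP3 1.0 V).
print("Cascade IIP3 = %.3f V" % cascaded_iip3([(10.0, 0.5), (3.0, 1.0)]))

With these example numbers the mixer term dominates, which is why the text above puts the tighter linearity requirement on the mixer rather than on the LNA.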
2.2.5. Linearity in an LNA
When dealing with the linearity of an LNA we have to consider two important aspects: the input-referred third-order intercept point of the amplifying FET (in our case the input transistor) and the overall IP_3 of the LNA.
The input-output characteristic of a MOS transistor in saturation,

I_D = (µ_n C_OX / 2) (W/L) (V_GS − V_T)^2        (2.18)

shows a quadratic dependence of the output current I_D on the input voltage V_GS. This means that in theory there is no third-order product, so that the third-order intercept point is infinite. Actually, the mobility of the charge carriers µ_n is a function of the electric field in the channel, which depends on the overdrive voltage

V_od = V_GS − V_T        (2.19)
It can be shown³ that the IP_3 of a transistor is proportional to the overdrive voltage and that for a MOS transistor we have:

IP_3 = 3.26 (V_GS − V_T)        (2.20)

In a narrowband architecture like the one used in our LNA, the effective voltage applied between the gate and the source of the input transistor is the input voltage multiplied by the Q of the input network. Since the IIP3 is referred to the input power, its value in this case is decreased by a factor Q².⁴ An approximate table showing the dependence of the IP3 of a single-ended LNA on V_od and Q_in can be found in [8]⁵.

³ H. Klar, "HF-CMOS für die drahtlose Kommunikation", Institut für Mikroelektronik, TU Berlin, p. 62, October 2004.

A third way to increase the linearity of the amplifier is through negative feedback (series or parallel). It can be demonstrated⁶ that the input intercept point after feedback, expressed in volts, is given by:

V_IP3,FB = V_IP3 (1 + fA)^(3/2)        (2.21)

where fA is the loop gain of the amplifier.
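As an illustration of (2.20) and (2.21), the sketch below estimates the IIP3 of an input device from its overdrive voltage and shows how an input network with quality factor Q and a feedback loop gain fA shift that value; the numerical inputs are invented examples, and (2.20) itself is only the approximation quoted above.

import math

def db(v):
    return 20 * math.log10(v)

Vod = 0.2            # overdrive voltage V_GS - V_T, volts (example value)
Q_in = 4.0           # quality factor of the input matching network (example)
loop_gain = 3.0      # feedback loop gain fA (example)

v_ip3 = 3.26 * Vod                          # eq. (2.20), volts
# Referred to the LNA input: the input network multiplies the gate voltage by Q,
# so the voltage IIP3 at the LNA input is Q times smaller (power IIP3 lower by Q^2).
v_ip3_q = v_ip3 / Q_in
v_ip3_fb = v_ip3 * (1 + loop_gain) ** 1.5   # eq. (2.21), with feedback

print("Device IIP3          : %.2f V (%.1f dBV)" % (v_ip3, db(v_ip3)))
print("With input Q = %.1f   : %.2f V" % (Q_in, v_ip3_q))
print("With feedback fA = %.1f: %.2f V" % (loop_gain, v_ip3_fb))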
2.3. Power
As our project is aimed at portable systems, power consumption considerations assume great importance. We should realize an LNA that uses as little power as possible in order to ensure a longer battery life. We can act in two directions: minimize the supply voltage and minimize the bias current. The first objective is achieved with architectures that avoid stacked transistors or resistors and with LC tanks as loads, which have no DC voltage drop.
The second is achieved through techniques that maximize the gain without increasing the current: we can increase the transconductance of a MOSFET by using the smallest channel length, or increase the gain by using an LC resonant load. The higher its Q factor, the higher the gain we can achieve, i.e. the required bias current can be reduced. For this work the maximum power allowed by our specifications (see Par. 5.1) is defined by a maximum current consumption of 10 mA at a supply voltage of 1 V.
2.4. Gain (VGA)
We have seen that, according to the Friis formula, the higher the gain of the initial stages of a chain, the lower the overall noise figure. Thus one may think that it is better to realize our LNA with a gain as high as possible. On the other hand, a very high gain affects linearity in a negative way, as can be seen in the formula for the overall IP_3 of a cascaded system (2.17). As a result, one must find a compromise between noise and linearity. Another issue is that we deal with input signals whose amplitude can vary over a very large range. If we are in the vicinity of a transmission station, our signal is very strong. Conversely, if our receiver is located very far from a base station (BTS), we have a very weak signal which has to be amplified without adding too much noise. A solution to this problem is a Variable Gain Amplifier (VGA). We propose an amplifier whose gain can be set to one of two different levels by means of an integrated logic that switches active components on and off.

⁴ A. A. Abidi, G. J. Pottie, W. J. Kaiser, "Power-Conscious Design of Wireless Circuits and Systems", Proceedings of the IEEE, vol. 88, no. 10, p. 1532, October 2000.
⁵ T. H. Lee, "The Design of CMOS Radio-Frequency Integrated Circuits", Cambridge University Press, 2002, p. 300.
⁶ A. A. Abidi, "General Relations Between IP2, IP3 and Offsets in Differential Circuits and the Effects of Feedback", IEEE Transactions on Microwave Theory and Techniques, vol. 51, no. 5, pp. 1610-1612, May 2003.
2.5. Frequency band (Multiband LNA, GSM, UMTS, BT)
Recently, a great number of technologies have come to rely on RF communications. For example, mobile phones use the GSM standard in the 1800-1900 MHz range, new third-generation phones using UMTS require a frequency band around 2100 MHz, and the Bluetooth wireless communication standard uses the 2400 MHz band. More and more devices are being developed that need to operate in all of these bands; one can imagine new cell phones with dual-mode GSM/UMTS operation and Bluetooth capabilities. The classic solution to this demand is to use three different LNAs, one per frequency band. We propose a new architecture that makes it possible to use the same amplifier in three different bands: 1.8 GHz, 1.9 GHz and 2.1 GHz. The operating frequency band can be selected with two external pins that, through a small digital circuit, switch between the three operating modes.
2.6. Input matching
2.6.1. Z-parameters, S-parameters
A classic way to represent the external behavior of a system with two or more ports, thus neglecting its internal structure, is by means of the Z-parameters, which have the dimension of an impedance. In particular, we define the input impedance Z_in as the ratio between the voltage across the input terminals and the input current. However, at high frequencies it is very difficult to measure these quantities.
For this reason, S-parameters ("scattering parameters") are often used to characterize a network at high frequencies. We regard our circuit as a two-port (or n-port, in case of more ports) network and use as input/output variables the incident and reflected voltage waves instead of currents and voltages. For example, assigning the number 1 to the input port and the number 2 to the output port, we can define the following parameters:

s_11 = E_r1/E_i1 = Γ_1    s_21 = E_r2/E_i1    s_12 = E_r1/E_i2    s_22 = E_r2/E_i2 = Γ_2        (2.22)

where E_ik and E_rk are the complex amplitudes of the incident and reflected voltage waves at the k-th port, respectively, and Γ_k is the reflection coefficient of the k-th port. These quantities are dimensionless and their magnitudes are often given in dB. Perfect input matching means an s_11 equal to zero (that is, −∞ dB).
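The short sketch below shows how the input reflection coefficient s_11 follows from the input impedance seen at port 1 when the source uses a reference impedance Z_0; the 50-ohm reference and the example load value are assumptions, not values taken from this design.

import math

def s11_db(z_in, z0=50.0):
    """Input reflection coefficient Gamma_1 = (Z_in - Z0)/(Z_in + Z0), in dB."""
    gamma = (z_in - z0) / (z_in + z0)
    return 20 * math.log10(abs(gamma))

# Example: an input impedance of 40 - 10j ohms against a 50-ohm reference.
print("s11 = %.1f dB" % s11_db(complex(40, -10)))
# A perfect match (Z_in = Z0) gives Gamma = 0, i.e. s11 -> -infinity dB.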
2.7. Stability
A main requirement of an LNA is unconditional stability. According to the bounded-input, bounded-output (BIBO) definition, this means that every bounded input must produce a bounded output.
One possible path to instability is an unwanted feedback path between output and input due to parasitic components such as capacitances. We must adopt solutions that minimize this phenomenon; in other words, the reverse isolation (measured by s_12) must be maximized. Later on we will describe a way to achieve this.
2.7.1. K-Parameter, considerations
We introduce here a parameter, known as the Stern stability factor, that gives information about the stability of a system. It is defined as:

K = ( 1 − |s_11|^2 − |s_22|^2 + |Δ|^2 ) / ( 2 |s_12 s_21| )

where Δ = s_11 s_22 − s_12 s_21 is the determinant of the S-parameter matrix.
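A minimal sketch of the Stern stability check, assuming a single set of complex S-parameters measured at one frequency (the example values are invented): it evaluates K and |Δ| and applies the usual conditions K > 1 and |Δ| < 1 for unconditional stability.

def stern_k(s11, s12, s21, s22):
    """Stern stability factor K and |Delta| for one set of S-parameters."""
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) / (2 * abs(s12 * s21))
    return k, abs(delta)

# Invented example S-parameters (linear complex values, not dB).
k, mag_delta = stern_k(complex(0.3, -0.2), complex(0.02, 0.01),
                       complex(4.0, 1.5), complex(0.4, -0.3))
print("K = %.2f, |Delta| = %.2f" % (k, mag_delta))
print("Unconditionally stable:", k > 1 and mag_delta < 1)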