Ingegneria dell’Automazione - Sistemi in Tempo Reale
Selected topics on discrete-time and sampled-data systems
Luigi Palopoli
[email protected] - Tel. 050/883444
Outline
Parameters round-off
Sample-rate selection
Parameters round-off
So far we have seen the problem of quantization referred to input/output data. The problem is also present for parametric uncertainties in the controller.
For example, suppose you choose a controller given by:
y(k + 1) = αy(k) + u(k)
Due to finite-precision math, the system we will actually deal with is:
y(k + 1) = (α + δα)y(k) + u(k) + ε(k)
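A minimal sketch of the effect, assuming a hypothetical fixed-point format with 7 fractional bits for the stored pole:

```python
# Hypothetical illustration: store the controller pole alpha with a limited
# number of fractional bits and observe how far the realized pole moves.
alpha = 0.995        # desired pole of y(k+1) = alpha*y(k) + u(k)
bits = 7             # assumed fixed-point fractional bits

alpha_q = round(alpha * 2**bits) / 2**bits   # nearest representable value
delta_alpha = alpha_q - alpha                # the perturbation delta-alpha
```

With 7 fractional bits the quantization step is 2^-7 ≈ 0.0078, so the worst-case pole error is about 0.0039.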
Parameters round-off - (continued)
In the example above, the Z-transform we get has denominator z − (α + δα).
If we want to place a pole at 0.995 we need a precision of at least three decimal digits: |δα| ≤ 0.005.
Robustness analysis can be conducted using root-locus / Jury-criterion methods.
We will show a different approach based on linearised sensitivity analysis.
Linearised sensitivity analysis
Let the closed-loop characteristic polynomial be:
P = z^n + α1 z^(n−1) + . . . + αn
Let the roots of the system be: λi = ri e^(jθi)
Suppose αk is subject to uncertainty and that we are interested in evaluating the variations of λj, which we assume to be a simple root of P. The polynomial P can be thought of as a function of z and αk: P(z, αk).
Lin. sensitivity an. - (continued)
First-order series expansion around z = λj:
P(λj + δλj, αk + δαk) ≈ P(λj, αk) + ∂P/∂z |_{z=λj} · δλj + ∂P/∂αk |_{z=λj} · δαk + . . .
Observing that P(λj, αk) = 0, we get:
δλj ≈ − (∂P/∂αk) / (∂P/∂z) |_{z=λj} · δαk
Numerator:
∂P/∂αk |_{z=λj} = λj^(n−k)
Writing P(z, αk) = (z − λ1) . . . (z − λn), we get for the denominator:
∂P/∂z |_{z=λj} = ∏_{l≠j} (λj − λl)
Lin. sensitivity an. - (continued)
...hence,
δλj = − λj^(n−k) / ∏_{l≠j}(λj − λl) · δαk
Consideration 1: assuming a stable λj (|λj| < 1), the highest sensitivity occurs for k = n, i.e. for the variations of αn.
Consideration 2: looking at the denominator, the roots should be far apart: clusters emphasise sensitivity.
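The first-order formula above can be checked numerically; a small sketch with invented roots (λ = 0.5, 0.8, 0.9), perturbing the constant coefficient α3:

```python
import numpy as np

# Check the first-order sensitivity formula
#   delta_lambda_j ≈ -lambda_j^(n-k) / prod_{l != j}(lambda_j - lambda_l) * delta_alpha_k
# against a direct root computation (all numbers are illustrative).
lams = np.array([0.5, 0.8, 0.9])          # assumed closed-loop roots
coeffs = np.poly(lams)                     # P(z) = z^3 + alpha1 z^2 + alpha2 z + alpha3
n, k, j = 3, 3, 2                          # perturb alpha_3; watch lambda_j = 0.9
d_alpha = 1e-6

lam_j = lams[j]
denom = np.prod([lam_j - l for i, l in enumerate(lams) if i != j])
pred = -lam_j**(n - k) / denom * d_alpha   # predicted first-order shift

pert = coeffs.copy()
pert[k] += d_alpha                         # coeffs[k] multiplies z^(n-k)
new_roots = np.sort(np.roots(pert).real)
actual = new_roots[j] - lam_j              # actual shift of the nearest root
```

Prediction and direct computation agree to first order in δαk.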
Example
Suppose we need to implement a low-pass filter with a sharp cutoff:
we need a cluster of roots → high sensitivity for canonical implementations;
it is much more convenient to use a series or parallel composition of low-order filters (the sensitivity is confined to a single factor).
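A sketch of why the cascade wins, with invented clustered poles near z = 0.9: the same tiny perturbation is applied to a coefficient of the expanded polynomial (canonical form) and to a single first-order factor of a cascade.

```python
import numpy as np

# Illustrative numbers: four clustered poles near z = 0.9.
poles = np.array([0.90, 0.91, 0.92, 0.93])
coeffs = np.poly(poles)              # expanded characteristic polynomial
eps = 1e-9

# Canonical (direct) form: the perturbation hits the constant coefficient.
pert = coeffs.copy()
pert[-1] += eps
shift_direct = np.max(np.abs(np.sort(np.roots(pert).real) - poles))

# Cascade of first-order sections: the same perturbation moves only one
# factor (z - p) -> (z - p + eps), i.e. one pole by exactly eps.
shift_cascade = eps
```

For this cluster the denominator products are of order 10^-6, so the direct-form root shift is several orders of magnitude larger than the cascade one.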
Outline
Parameters round-off
Sample-rate selection
Factors
A famous rule of thumb is to choose the sampling frequency 20-40 times the bandwidth of the system.
Where does it come from?
Shannon theorem limit
Smoothness of the time response
Immunity to noise
Sensitivity to parameter variations
Shannon theorem
Consider the block diagram (not reproduced here): sampled controller D*(s), zero-order hold, plant G(s), in a unity-feedback loop.
Shannon theorem - (cont.)
Recall that setting H*(s) = (1 − e^(−sT)) D*(s) {G(s)/s}*, we get:
Y*(s) = H*(s) / (H*(s) + 1) · R*(s)
If we want to track a signal r with frequency components up to ωb we have to choose:
ωs/ωb > 2.
Indeed, lower sampling frequencies would cause aliasing of the signal r, generating delays and instability.
Typically the closed-loop roots of H*(s)/(H*(s) + 1) should be a little lower than the required bandwidth. Sampling rates lower than the bandwidth would cause aliasing of such roots as well.
In practice the Shannon limit is too low!
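The aliasing effect is easy to reproduce numerically; in this sketch (arbitrary frequencies) a 9 Hz sinusoid sampled at 10 Hz is indistinguishable from its 1 Hz alias:

```python
import numpy as np

# Sampling below the Nyquist rate folds a high frequency onto a low one.
fs = 10.0                    # sampling frequency, Hz
f_true = 9.0                 # signal frequency, above fs/2 = 5 Hz
f_alias = abs(f_true - fs)   # expected alias at 1 Hz

k = np.arange(50)            # sample indices
x_true = np.cos(2 * np.pi * f_true * k / fs)
x_alias = np.cos(2 * np.pi * f_alias * k / fs)
max_gap = np.max(np.abs(x_true - x_alias))   # the two sample streams coincide
```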
Smoothness of time response
Attitude control of a satellite. System dynamics:
I θ̈ = MC + Md, where:
I is the moment of inertia of the satellite around its mass center
MC is the control torque
Md is the disturbance torque
The system dynamics is approximated by a double integrator.
Example - (cont.)
Specifications: tr < 0.5 s, overshoot Mp < 16%.
Approximate relations:
tr ≈ 1.8/ωn
ts ≈ 4.6/(ξωn) = 4.6/σ
Mp ≈ e^(−πξ/√(1−ξ²))
...from which we get: ξ = 0.5, ωn = 3.6 rad/s, σ = 1.8.
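The spec-to-parameter step can be sketched as follows (ωn comes from tr ≈ 1.8/ωn; ξ = 0.5 gives Mp ≈ 0.163, at the edge of the 16% spec):

```python
import numpy as np

# Translate the time-domain specs into (xi, wn, sigma) via the
# approximate second-order relations above.
tr_max = 0.5                                    # rise-time spec, s

wn = 1.8 / tr_max                               # tr ≈ 1.8/wn  ->  wn = 3.6 rad/s
xi = 0.5                                        # damping chosen for Mp ≈ 16%
Mp = np.exp(-np.pi * xi / np.sqrt(1 - xi**2))   # ≈ 0.163
sigma = xi * wn                                 # ≈ 1.8
```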
Example - (cont.)
Try with sample rates equal to 4, 8, 20, 40 times ωn.
The root locations in the z-plane are given by:
z = e^(sT) = e^((−ξωn ± jωn√(1−ξ²))T) = e^(−ξωnT) (cos(ωn√(1−ξ²) T) ± j sin(ωn√(1−ξ²) T))
Case ωs/ωn = 4 → ωnT = π/2 → T = 0.4363 s:
z = e^(sT) = e^(−ξπ/2) (cos((π/2)√(1−ξ²)) ± j sin((π/2)√(1−ξ²))) = 0.0952 ± 0.4459j
Closed-loop characteristic polynomial:
z² + α1z + α2 = z² − 0.1904z + 0.2079
Discrete-time model for the system (double integrator):
x(k + 1) = Φx(k) + Γu(k), with Φ = [1 T; 0 1] and Γ = [T²/2; T]
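The pole mapping and the resulting polynomial can be reproduced in a few lines (ξ = 0.5, ωn = 3.6 rad/s as derived from the specifications):

```python
import numpy as np

# Map the continuous-time pole pair to the z-plane for ws/wn = 4.
xi, wn = 0.5, 3.6
T = (np.pi / 2) / wn                             # wn*T = pi/2  ->  T ≈ 0.4363 s
s = -xi * wn + 1j * wn * np.sqrt(1 - xi**2)      # continuous-time pole
z = np.exp(s * T)                                # discrete-time pole

# Closed-loop characteristic polynomial z^2 + a1 z + a2 from the pole pair.
a1 = -2 * z.real
a2 = abs(z)**2
```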
Example - (cont.)
State feedback: u(k) = −k1x1(k) − k2x2(k). Closed-loop evolution:
x(k + 1) = [1 − k1T²/2, T − k2T²/2; −k1T, 1 − k2T] x(k)
The characteristic polynomial is:
z² + (Tk2 + (T²/2)k1 − 2)z + (T²/2)k1 − Tk2 + 1
The gains are found as follows:
[k1; k2] = [T²/2, T; T²/2, −T]^(−1) [2 + α1; −1 + α2]
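A sketch of the gain computation for the case T = 0.4363 s, checking that the closed-loop eigenvalues realize the desired polynomial:

```python
import numpy as np

# Solve the 2x2 linear system above for the gains (values from the example).
T = 0.4363
a1, a2 = -0.1904, 0.2079          # desired polynomial z^2 + a1 z + a2

A = np.array([[T**2 / 2,  T],
              [T**2 / 2, -T]])
b = np.array([2 + a1, -1 + a2])
k1, k2 = np.linalg.solve(A, b)

# Check: the closed loop Phi - Gamma*K must realize the desired polynomial.
Phi = np.array([[1.0, T], [0.0, 1.0]])
Gamma = np.array([[T**2 / 2], [T]])
K = np.array([[k1, k2]])
cl_poly = np.poly(np.linalg.eigvals(Phi - Gamma @ K)).real
```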
Example - (cont.)
An alternative way: construct the controllability matrix T = [Γ ΦΓ] (not to be confused with the sampling period).
Consider the coordinate transformation x = M x̃, where M = T [1, α1^(ol); 0, 1].
In the new coordinates we find:
x̃(k + 1) = [2, −1; 1, 0] x̃(k) + [1; 0] u(k)
With u(k) = −k̃ x̃ we get the closed-loop evolution:
x̃(k + 1) = [2 − k̃1, −1 − k̃2; 1, 0] x̃(k)
where the top row contains the negated coefficients of the characteristic polynomial.
Find k̃ by solving: 2 − k̃1 = −α1, −1 − k̃2 = −α2; then k = k̃ M^(−1).
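The canonical-form route can be sketched as follows (Tc denotes the controllability matrix, renamed here to avoid clashing with the sampling period):

```python
import numpy as np

# Controllable-canonical-form pole placement with the example's numbers.
T = 0.4363
a1, a2 = -0.1904, 0.2079                       # desired z^2 + a1 z + a2

Phi = np.array([[1.0, T], [0.0, 1.0]])
Gamma = np.array([[T**2 / 2], [T]])

Tc = np.hstack([Gamma, Phi @ Gamma])           # [Gamma, Phi*Gamma]
a1_ol = -2.0                                   # open-loop polynomial: z^2 - 2z + 1
M = Tc @ np.array([[1.0, a1_ol], [0.0, 1.0]])

# Sanity check: M brings Phi to the canonical form [[2, -1], [1, 0]].
A_can = np.linalg.inv(M) @ Phi @ M

# Matching conditions give k-tilde directly, then map back to x coordinates.
kt = np.array([[2 + a1, -1 + a2]])             # 2 - kt1 = -a1, -1 - kt2 = -a2
K = kt @ np.linalg.inv(M)

cl_poly = np.poly(np.linalg.eigvals(Phi - Gamma @ K)).real
```

The eigenvalue check confirms the sign convention k = k̃M^(-1) for u(k) = −kx(k).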