Parameters round-off

Academic year: 2021
(1)

Ingegneria dell’Automazione - Sistemi in Tempo Reale

Selected topics on discrete-time and sampled-data systems

Luigi Palopoli

[email protected] - Tel. 050/883444

(2)

Outline

- Parameters round-off
- Sample-rate selection

(3)

Parameters round-off

So far we have considered the problem of quantization for the input/output data. The same problem exists for parametric uncertainty in the controller.

For example, suppose you choose a controller given by:

y(k + 1) = αy(k) + u(k)

Due to finite-precision arithmetic, the system we will actually deal with is:

y(k + 1) = (α + δα)y(k) + u(k) + ε(k)


(5)

Parameters round-off - (continued)

In the example above the denominator of the Z-transform we get is z − (α + δα).

If we want to place a pole at 0.995 we need a precision of at least three digits: |δα| ≤ 0.005.

Robustness analysis can be conducted using root-locus or Jury-criterion methods.

We will show a different approach, based on linearised sensitivity analysis.
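As a quick illustration (the 0.995 pole is from the example above; the rounding scheme itself is a sketch of my own), rounding the stored coefficient α directly displaces the pole of y(k + 1) = αy(k) + u(k):

```python
# Sketch (not from the slides): effect of coefficient rounding on the pole
# of y(k+1) = a*y(k) + u(k), whose single pole sits exactly at a.

def quantize(value, digits):
    """Round a coefficient to the given number of decimal digits."""
    return round(value, digits)

a = 0.995  # desired pole location

for digits in (2, 3, 4):
    a_q = quantize(a, digits)
    print(f"{digits} digits: stored pole = {a_q}, error = {abs(a - a_q):.4f}")
```

With only two decimal digits the pole moves by 0.005, which for a pole this close to the unit circle is a large relative change in the time constant.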

(6)

Linearised sensitivity analysis

Let the closed-loop characteristic polynomial be:

P = z^n + α1 z^(n−1) + . . . + αn

Let the roots of the system be λi = ri e^(jθi).

Suppose αk is subject to uncertainty and that we are interested in evaluating the variation of λj, which we assume to be a simple root of P. The polynomial P can be thought of as a function of z and αk: P(z, αk).

(7)

Lin. sensitivity an. - (continued)

First-order series expansion around z = λj:

P(λj + δλj, αk + δαk) ≈ P(λj, αk) + ∂P/∂z |_(z=λj) · δλj + ∂P/∂αk |_(z=λj) · δαk + . . .

Observing that P(λj, αk) = 0 we get:

δλj = − (∂P/∂αk) / (∂P/∂z) |_(z=λj) · δαk

Numerator:

∂P/∂αk |_(z=λj) = λj^(n−k)

Writing P(z, αk) = (z − λ1) . . . (z − λn), we get for the denominator:

∂P/∂z |_(z=λj) = ∏_(l≠j) (λj − λl)


(11)

Lin. sensitivity an. - (continued)

...hence,

δλj = − λj^(n−k) / ∏_(l≠j) (λj − λl) · δαk

Consideration 1: assuming a stable λj, the highest sensitivity is for k = n, i.e. for variations of αn.

Consideration 2: looking at the denominator, the roots should be far apart: clusters emphasise sensitivity.
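The formula can be checked numerically. The sketch below (root values and perturbation size are my own choices, not from the slides) compares the linearised prediction with the roots recomputed directly by numpy:

```python
# Sketch: check the linearised root-sensitivity formula
#   δλj ≈ −λj^(n−k) / ∏_{l≠j}(λj − λl) · δαk
# against a direct recomputation of the perturbed roots.
import numpy as np

lam = np.array([0.9, 0.5, -0.3])   # chosen closed-loop roots
coeffs = np.poly(lam)              # [1, α1, α2, α3]
n, k = len(lam), 3                 # perturb αn (the constant term)
d_alpha = 1e-6

j = 0                              # watch the root at 0.9
pred = -lam[j] ** (n - k) / np.prod(
    [lam[j] - l for l in lam if l != lam[j]]) * d_alpha

perturbed = coeffs.copy()
perturbed[k] += d_alpha            # coeffs[k] is α_k, coefficient of z^(n−k)
new_roots = np.roots(perturbed)
actual = new_roots[np.argmin(abs(new_roots - lam[j]))] - lam[j]

print(f"predicted δλ = {pred:.3e}, actual δλ = {actual.real:.3e}")
```

The first-order prediction and the recomputed shift agree to within the second-order error, which is negligible for a perturbation this small.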


(14)

Example

Suppose we need to implement a low-pass filter with a sharp cutoff:

we need a cluster of roots → high sensitivity for canonical implementations; it is much more convenient to use a series or parallel composition of low-order filters (the sensitivity then involves only one factor at a time).
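A minimal numerical sketch of this point (the pole values and perturbation size are my own): perturbing one coefficient of the full fourth-order polynomial moves a clustered root far more than the same perturbation applied inside a single first-order factor of a cascade:

```python
# Sketch: clustered roots make a direct-form (single high-order polynomial)
# implementation far more sensitive to one coefficient perturbation than a
# cascade of first-order sections.
import numpy as np

roots = np.array([0.90, 0.91, 0.92, 0.93])  # a tight cluster near z = 0.9
eps = 1e-8

# Direct form: perturb the constant coefficient of the 4th-order polynomial.
p = np.poly(roots)
p[-1] += eps
new = np.roots(p)
direct_shift = max(min(abs(new - r)) for r in roots)

# Cascade: the same perturbation hits only one first-order factor (z - r),
# whose root moves by exactly eps.
cascade_shift = eps

print(f"direct-form worst root shift: {direct_shift:.3e}")
print(f"cascade root shift:           {cascade_shift:.3e}")
```

The denominator product ∏(λj − λl) is tiny for clustered roots, so the direct-form shift is several orders of magnitude larger than the cascade's.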

(15)

Outline

- Parameters round-off
- Sample-rate selection

(16)

Factors

A famous rule of thumb is to choose the sampling frequency 20-40 times the bandwidth of the system.

Where does it come from?

- Shannon theorem limit
- Smoothness of the time response
- Immunity to noise
- Sensitivity to parameter variations


(21)

Shannon theorem

Consider the block diagram of the sampled-data feedback loop (figure not reproduced here).

(22)

Shannon theorem - (cont.)

Recall that setting H(s) = (1 − e^(−sT)) D(s) {G(s)/s}, we get

Y(s) = H(s)/(H(s) + 1) · R(s)

If we want to track a signal r with frequency components up to ωb we have to choose:

ωs/ωb > 2.

Indeed, lower sampling frequencies would cause aliasing of the signal r, generating delays and instability.

Typically the closed-loop roots of H(s)/(H(s) + 1) should be slightly below the required bandwidth. Sampling rates lower than the bandwidth would cause aliasing of such roots as well.

In practice the Shannon limit is too low!!


(26)

Smoothness of time response

Attitude control of a satellite. System dynamics:

I θ̈ = MC + Md, where:

- I is the moment of inertia of the satellite around its mass center
- MC is the control torque
- Md is the disturbance torque

The system dynamics is approximated by a double integrator.

(27)

Example - (cont.)

Specifications: tr < 0.5 s, overshoot Mp < 16%. Approximated relations:

tr ≈ 1.8/ωn
ts ≈ 4.6/(ξωn) = 4.6/σ
Mp ≈ e^(−πξ/√(1−ξ²))

...from which we get: ξ = 0.5, σ = 1.8
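The two design parameters can be recovered directly from the specs with the approximations above (a sketch; the inversion of the Mp formula is standard second-order algebra):

```python
# Sketch: recover ωn and ξ from the specs tr < 0.5 s and Mp < 16%
# using the second-order approximations quoted above.
import math

tr, Mp = 0.5, 0.16

wn = 1.8 / tr                     # tr ≈ 1.8/ωn  →  ωn = 3.6 rad/s
# Mp = exp(−πξ/√(1−ξ²))  →  ξ = −ln(Mp) / √(π² + ln(Mp)²)
xi = -math.log(Mp) / math.sqrt(math.pi**2 + math.log(Mp)**2)
sigma = xi * wn

print(f"ωn = {wn:.2f} rad/s, ξ = {xi:.2f}, σ = {sigma:.2f}")
```

This reproduces the slide's values ξ ≈ 0.5 and σ ≈ 1.8.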

(28)

Example - (cont.)

Try with sample rates equal to 4, 8, 20, 40 times ωn.

The root locations in the z-plane are given by:

z = e^(sT) = e^((−ξωn ± jωn√(1−ξ²))T) = e^(−ξωnT) (cos(ωn√(1−ξ²) T) ± j sin(ωn√(1−ξ²) T))

Case ωs/ωn = 4 → ωnT = π/2 → T = 0.4363 s:

z = e^(sT) = e^(−ξπ/2) (cos((π/2)√(1−ξ²)) ± j sin((π/2)√(1−ξ²))) = 0.0952 ± 0.4459i

Closed-loop characteristic polynomial:

z² + α1 z + α2 = z² − 0.1904 z + 0.2079

Discrete-time model for the system:

x(k + 1) = Φ x(k) + Γ u(k) = [1 T; 0 1] x(k) + [T²/2; T] u(k)
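The pole locations above can be reproduced numerically (a sketch using the running example's ξ = 0.5 and ωn = 3.6 rad/s):

```python
# Sketch: reproduce the z-plane pole locations for ωs/ωn = 4.
import cmath, math

xi, wn = 0.5, 3.6
T = (math.pi / 2) / wn            # ωn·T = π/2  →  T ≈ 0.4363 s

s = complex(-xi * wn, wn * math.sqrt(1 - xi**2))
z = cmath.exp(s * T)              # z = e^{sT}

a1 = -2 * z.real                  # α1 = −(z + conj(z))
a2 = abs(z) ** 2                  # α2 = z·conj(z)

print(f"T = {T:.4f} s, z = {z.real:.4f} ± {z.imag:.4f}i")
print(f"char. poly: z^2 + ({a1:.4f}) z + ({a2:.4f})")
```

This matches the slide's z = 0.0952 ± 0.4459i and z² − 0.1904 z + 0.2079.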


(33)

Example - (cont.)

State feedback: u(k) = −k1 x1(k) − k2 x2(k)

Closed-loop evolution:

x(k + 1) = [1 − k1T²/2   T − k2T²/2; −k1T   1 − k2T] x(k)

The characteristic polynomial is:

z² + (Tk2 + (T²/2)k1 − 2) z + (T²/2)k1 − Tk2 + 1

The gains are found as follows:

[k1; k2] = [T²/2  T; T²/2  −T]^(−1) [2 + α1; −1 + α2]
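Plugging in the example's T and target polynomial, the gains come out of the 2×2 linear system (a sketch; variable names are mine):

```python
# Sketch: solve for the state-feedback gains of the double integrator,
# using T and the target polynomial z^2 − 0.1904 z + 0.2079 from the example.
import numpy as np

T = 0.4363
a1, a2 = -0.1904, 0.2079

A = np.array([[T**2 / 2,  T],
              [T**2 / 2, -T]])
b = np.array([2 + a1, -1 + a2])
k1, k2 = np.linalg.solve(A, b)

print(f"k1 = {k1:.4f}, k2 = {k2:.4f}")

# Check: the closed-loop matrix has the desired characteristic polynomial.
Phi_cl = np.array([[1 - k1 * T**2 / 2, T - k2 * T**2 / 2],
                   [-k1 * T,           1 - k2 * T]])
print("closed-loop poly:", np.poly(Phi_cl))
```

The final check confirms the closed-loop characteristic polynomial equals the target.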


(37)

Example - (cont.)

An alternative way: construct the controllability matrix T = [Γ  ΦΓ]

Consider the coordinate transformation x = M x̃, where M = T [1 α1^(ol); 0 1]

In the new coordinates we find:

x̃(k + 1) = [2 −1; 1 0] x̃(k) + [1; 0] u(k)

With u(k) = −k̃ x̃ we get the closed-loop evolution:

x̃(k + 1) = [2 − k̃1   −1 − k̃2; 1 0] x̃(k)

where the top row contains (up to sign) the coefficients of the characteristic polynomial.

Find k̃ by solving 2 − k̃1 = −α1, −1 − k̃2 = −α2; then k = k̃ M^(−1).
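Numerically, the canonical-form route gives the same gains (a sketch; variable names are mine, and I take α1^(ol) = −2 from the open-loop characteristic polynomial (z − 1)² of the discretised double integrator):

```python
# Sketch: the controllable-canonical-form route for the double integrator,
# with T and the target polynomial from the running example.
import numpy as np

T = 0.4363
a1, a2 = -0.1904, 0.2079          # desired z^2 + a1 z + a2

Phi = np.array([[1.0, T], [0.0, 1.0]])
Gam = np.array([[T**2 / 2], [T]])

# Controllability matrix and transformation M = [Γ ΦΓ]·[[1, α1_ol],[0, 1]],
# where α1_ol = −2 comes from the open-loop char poly z^2 − 2z + 1.
C = np.hstack([Gam, Phi @ Gam])
M = C @ np.array([[1.0, -2.0], [0.0, 1.0]])

# In canonical coordinates: 2 − k̃1 = −a1, −1 − k̃2 = −a2
kt = np.array([2 + a1, a2 - 1])
k = kt @ np.linalg.inv(M)         # map the gains back: k = k̃ M^(−1)

Phi_cl = Phi - Gam @ k.reshape(1, 2)   # closed loop with u = −k x
print("gains:", k, "closed-loop poly:", np.poly(Phi_cl))
```

The resulting closed-loop characteristic polynomial again matches the target, confirming the two design routes agree.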

