To myself...
Abstract
Over the last decade, the development of fast and powerful microprocessors and DSPs (Digital Signal Processors) has increased and extended the application of model predictive control in power electronics and drives. In permanent magnet synchronous motor (PMSM) drives, especially under dead-beat control, accurate knowledge of the machine parameters is critical to achieve good performance in both control and monitoring systems. In this thesis, a new method for the on-line identification of PMSM parameters in a dead-beat current control scheme is proposed. The continuous update of the motor parameters helps to track any change due to environmental, aging and loading conditions, thus permitting better performance. Furthermore, a traditional recursive least squares method was studied and implemented as a benchmark for validating the results provided by the proposed method. The simulation results prove the validity of the proposed solution, highlighting its potential interest for various applications.
Contents
Introduction

1 Permanent Magnet Synchronous Motors
  1.1 Voltage and Torque Equations in Machine Variables
  1.2 Voltage and Torque Equations in Rotor Reference-Frame Variables

2 Space Vector Pulse Width Modulation for two-level converters
  2.1 Reference Vector
  2.2 Time Duration
  2.3 Switching pattern generation

3 Dead Beat control
  3.1 Dead Beat theory
  3.2 Dead Beat current control
  3.3 Implementation of Predictive Dead Beat Control
  3.4 Compensation for Rotor Movement
  3.5 Parameter Errors

4 Identification Methods
  4.1 Identifiability
  4.2 Least Squares
    4.2.1 Recursive Least Squares
  4.3 The Dead Beat Estimator
    4.3.1 Dead Beat Estimator Basic Idea
    4.3.2 Dead Beat Estimator Procedure

5 Model Description and Simulation Results
  5.1 Model Description
    5.1.1 Motor model Description
    5.1.2 Control System
    5.1.3 Power Converter
  5.2 Simulation Results
    5.2.1 Recursive Least Squares Estimation
    5.2.2 Dead Beat Estimation

6 Experimental Part

7 Conclusions

A Recursive Least Squares

B Dead Beat Estimator
Introduction
Nowadays electrical drives are widely used in many different applications, thanks to their superior characteristics in terms of dynamic performance, energy efficiency, reliability, safety, etc.
The accuracy and dynamic performance of a drive depend on the performance of its control system, which may be designed in many different ways. In particular, one of the most promising approaches consists in embedding inside the controller some knowledge of the model of the apparatus to be controlled, as this may improve the overall performance. Nevertheless, such an approach requires determining the actual values of the parameters of the equivalent model of the machine to be controlled. Such parameters may not be fully known a priori, or they may even vary during machine operation, due to temperature variation, magnet saturation and so on. Therefore, a significant improvement in drive performance can be achieved by devising a suitable methodology to estimate the values of such parameters on-line, i.e. during machine operation.
This problem has prompted research to focus on identification, in particular the estimation of machine parameters.
During operation, such parameters vary due to loading, aging and environmental conditions. The on-line identification of their values then provides an elegant way to improve the control system performance. Such an optimization also turns out to be quite interesting for industrial applications.
In the literature one can find countless publications on identification methods: for example, in [1] two identification methods are presented, based respectively on model-reference and Kalman filter approaches, whereas in [2] a recursive least squares method is implemented. However, research in this field is still in progress.
In this thesis a new approach is presented for the identification of the equivalent parameters of brushless machines. The idea is to build a specific estimator (based on the prediction error) and to update each parameter continuously in order to improve the current control. This document is organized in seven chapters.
In the first chapter the classical analytical model of the brushless machine is presented using the Park transformation.
The second chapter contains the description of the conventional Space Vector Modulation technique used to drive the inverter supplying the machine. In the third chapter, the theory of Dead-Beat control is illustrated and applied, from the basic concept to the development of a specific current control criterion.
In the fourth chapter, after some general considerations about observability and its application to the identification of parameters, two specific methods are presented: Recursive Least Squares and the Dead-Beat Estimator. For both methods, considerations about the practical implementation are derived from the background concepts.
In the fifth chapter, a MATLAB/SIMULINK model of the power part of the drive considered for the application of both methods is first presented. The separate implementation of the two estimation methods in this model is then illustrated. Finally, a summary of the simulation results obtained from the implemented models is presented, commenting on the most significant aspects.
Some preliminary experimental tests were also carried out using the facilities available at the University of Nottingham; the most significant implementation aspects are illustrated in chapter six.
Finally, in chapter seven the conclusions of this thesis are presented, highlighting the results already obtained as well as the aspects that are most likely to benefit from further investigation.
Chapter 1
Permanent Magnet Synchronous Motors
Introduction The permanent-magnet ac machine supplied from a controlled voltage or current source inverter is becoming widely used. This is attributed to a relatively high torque density (torque/mass or torque/volume) and ease of control relative to alternative machine architectures [3]. Depending upon the control strategy, the performance of this inverter-machine combination can be made, for example, to emulate the performance of a permanent-magnet dc motor, operate in a maximum torque per ampere mode, provide a "field weakening" technique to increase the speed range for constant power operation, and shift the phase of the stator applied voltages to obtain the maximum possible torque at any given rotor speed. Fortunately, we are able to become quite familiar with the basic operating features of the permanent-magnet ac machine without getting too involved with the actual inverter or the control strategies. In particular, if we assume that the stator variables (voltages and currents) are sinusoidal and balanced with the same angular velocity as the rotor speed, we are able to predict the predominant operating features of all of the above-mentioned modes of operation without becoming involved with the actual switching or control of the inverter. Therefore, in this chapter we will focus on the performance of the inverter-machine combination, assuming that the inverter is designed and controlled appropriately, and leave how this is done to the following chapters.
1.1 Voltage and Torque Equations in Machine Variables
A two-pole, permanent-magnet ac machine, which is also called a permanent-magnet synchronous machine, is depicted in Fig. 1.1. It has three-phase, wye-connected stator windings and a permanent-magnet rotor. The stator windings are identical windings displaced by 120°, each with Ns equivalent turns and resistance rs. For our analysis, we will assume that the stator windings are sinusoidally distributed. The three sensors shown in Fig. 1.1 are Hall effect devices. When a north pole is under a sensor, its output is nonzero; with a south pole under the sensor, its output is zero. During steady-state operation, the stator windings are supplied from an inverter that is switched at a frequency corresponding to the rotor speed. The states of the three sensors are used to determine the switching logic for the inverter. In the actual machine, the sensors are not positioned over the rotor, as shown in Fig. 1.1. Instead, they are often placed over a ring that is mounted on the shaft external to the stator windings and magnetized in the same direction as the rotor magnets. We will return to these sensors and the role they play later. The voltage equations in machine variables are
\[
\mathbf{v}_{abcs} = \mathbf{r}_s \mathbf{i}_{abcs} + p\,\boldsymbol{\phi}_{abcs} \tag{1.1}
\]
where
\[
(\mathbf{f}_{abcs})^T = \begin{bmatrix} f_{as} & f_{bs} & f_{cs} \end{bmatrix} \tag{1.2}
\]
\[
\mathbf{r}_s = \mathrm{diag}\begin{bmatrix} r_s & r_s & r_s \end{bmatrix} \tag{1.3}
\]
The flux linkages may be written as
\[
\boldsymbol{\phi}_{abcs} = \mathbf{L}_s \mathbf{i}_{abcs} + \boldsymbol{\phi}'_m \tag{1.4}
\]
where, neglecting mutual leakage terms and assuming that, due to the permanent magnet, the d-axis reluctance of the rotor is larger than the q-axis reluctance, Ls may be written as
\[
\mathbf{L}_s =
\begin{bmatrix}
L_{ls} + L_A + L_B\cos 2\theta_r & -\tfrac{1}{2}L_A + L_B\cos 2\!\left(\theta_r - \tfrac{\pi}{3}\right) & -\tfrac{1}{2}L_A + L_B\cos 2\!\left(\theta_r + \tfrac{\pi}{3}\right) \\
-\tfrac{1}{2}L_A + L_B\cos 2\!\left(\theta_r - \tfrac{\pi}{3}\right) & L_{ls} + L_A + L_B\cos 2\!\left(\theta_r - \tfrac{2\pi}{3}\right) & -\tfrac{1}{2}L_A + L_B\cos 2\!\left(\theta_r + \pi\right) \\
-\tfrac{1}{2}L_A + L_B\cos 2\!\left(\theta_r + \tfrac{\pi}{3}\right) & -\tfrac{1}{2}L_A + L_B\cos 2\!\left(\theta_r + \pi\right) & L_{ls} + L_A + L_B\cos 2\!\left(\theta_r + \tfrac{2\pi}{3}\right)
\end{bmatrix}
\tag{1.5}
\]
The flux linkage vector \(\boldsymbol{\phi}'_m\) may be expressed as
\[
\boldsymbol{\phi}'_m = \phi'_m
\begin{bmatrix}
\sin\theta_r \\ \sin\!\left(\theta_r - \tfrac{2\pi}{3}\right) \\ \sin\!\left(\theta_r + \tfrac{2\pi}{3}\right)
\end{bmatrix}
\tag{1.6}
\]
where \(\phi'_m\) is the amplitude of the flux linkages established by the permanent magnet as viewed from the stator phase windings. In other words, \(p\phi'_m\) would be the open-circuit voltage induced in each stator phase winding. Damper windings are neglected since the permanent magnets are typically relatively poor electrical conductors, and the eddy currents that flow in the non-magnetic materials securing the magnets are small. Hence, in general, large armature currents can be tolerated without significant demagnetization. We have assumed by (1.6) that the voltages induced in the stator windings by the permanent magnet are constant-amplitude sinusoidal voltages. The expression for the electromagnetic torque may be written in machine variables using
\[
T_e = \frac{P}{2}\,\frac{\partial W_c}{\partial \theta_r} \tag{1.7}
\]
where
\[
W_c = \frac{1}{2}\,\mathbf{i}_{abcs}^T \mathbf{L}_s \mathbf{i}_{abcs} + \mathbf{i}_{abcs}^T \boldsymbol{\phi}'_m + W_{pm} \tag{1.8}
\]
In 1.8, Wpm is the energy in the coupling field due to the presence of the
permanent magnet. Substituting 1.8 into 1.7 and neglecting any change in Wpm with rotor position, the electromagnetic torque is expressed
\[
T_e = \frac{P}{2}\left\{ \frac{L_{md} - L_{mq}}{3}\left[\left(i_{as}^2 - \tfrac{1}{2}i_{bs}^2 - \tfrac{1}{2}i_{cs}^2 - i_{as}i_{bs} - i_{as}i_{cs} + 2 i_{bs}i_{cs}\right)\sin 2\theta_r + \tfrac{\sqrt{3}}{2}\left(i_{bs}^2 - i_{cs}^2 - 2 i_{as}i_{bs} + 2 i_{as}i_{cs}\right)\cos 2\theta_r\right] + \phi'_m\left[\left(i_{as} - \tfrac{1}{2}i_{bs} - \tfrac{1}{2}i_{cs}\right)\cos\theta_r + \tfrac{\sqrt{3}}{2}\left(i_{bs} - i_{cs}\right)\sin\theta_r\right]\right\} \tag{1.9}
\]
where \(L_{mq}\) and \(L_{md}\) are
\[
L_{mq} = \frac{3}{2}\left(L_A + L_B\right) \tag{1.10}
\]
\[
L_{md} = \frac{3}{2}\left(L_A - L_B\right) \tag{1.11}
\]
The above expression for torque is positive for motor action. The torque and speed may be related as
\[
T_e = J\left(\frac{2}{P}\right) p\,\omega_r + B_m\left(\frac{2}{P}\right)\omega_r + T_L \tag{1.12}
\]
where J is the inertia of the rotor and connected load, expressed in kg·m². Since we will be concerned primarily with motor action, the load torque TL is positive. The constant Bm is a damping coefficient associated with the rotational system, with units N·m·s per radian of mechanical rotation; it is generally small and often neglected.
1.2 Voltage and Torque Equations in Rotor Reference-Frame Variables
The voltage equations in the rotor reference frame may be written directly by using \(\mathbf{K}_s \mathbf{r}_s (\mathbf{K}_s)^{-1} = \mathbf{r}_s\) and \(\mathbf{v}_{dq0s} = \mathbf{r}_s\mathbf{i}_{dq0s} + \omega\boldsymbol{\phi}_{dqs} + p\boldsymbol{\phi}_{dq0s}\) with \(\omega = \omega_r\), obtained through the transformation (1.13); recall that \(\mathbf{f}_{dq0} = \mathbf{T}_{abc\to dq0}\,\mathbf{f}_{abcs}\).
\[
\mathbf{T}_{abc\to dq0} = \frac{2}{3}
\begin{bmatrix}
\cos\theta_{dq} & \cos\!\left(\theta_{dq} - \tfrac{2\pi}{3}\right) & \cos\!\left(\theta_{dq} - \tfrac{4\pi}{3}\right) \\
-\sin\theta_{dq} & -\sin\!\left(\theta_{dq} - \tfrac{2\pi}{3}\right) & -\sin\!\left(\theta_{dq} - \tfrac{4\pi}{3}\right) \\
\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2}
\end{bmatrix}
\tag{1.13}
\]
Considering the reference frame shown in Fig. 1.2, we obtain
\[
\mathbf{v}^r_{dq0s} = \mathbf{r}_s \mathbf{i}^r_{dq0s} + \omega_r \boldsymbol{\phi}^r_{dqs} + p\,\boldsymbol{\phi}^r_{dq0s} \tag{1.14}
\]
where
\[
(\boldsymbol{\phi}^r_{dqs})^T = \begin{bmatrix} \phi^r_{ds} & -\phi^r_{qs} & 0 \end{bmatrix} \tag{1.15}
\]
\[
\boldsymbol{\phi}^r_{dq0s} =
\begin{bmatrix}
L_{ls} + L_{mq} & 0 & 0 \\
0 & L_{ls} + L_{md} & 0 \\
0 & 0 & L_{ls}
\end{bmatrix}
\begin{bmatrix} i^r_{qs} \\ i^r_{ds} \\ i_{0s} \end{bmatrix}
+ \phi'^r_m \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
\tag{1.16}
\]
To be consistent with our previous notation, we have added the superscript r to \(\phi'_m\). In expanded form, we have
\[
v^r_{qs} = r_s i^r_{qs} + \omega_r \phi^r_{ds} + p\,\phi^r_{qs} \tag{1.17}
\]
\[
v^r_{ds} = r_s i^r_{ds} - \omega_r \phi^r_{qs} + p\,\phi^r_{ds} \tag{1.18}
\]
\[
v_{0s} = r_s i_{0s} + p\,\phi_{0s} \tag{1.19}
\]
where
\[
\phi^r_{qs} = L_q i^r_{qs} \tag{1.20}
\]
\[
\phi^r_{ds} = L_d i^r_{ds} + \phi'^r_m \tag{1.21}
\]
\[
\phi_{0s} = L_{ls} i_{0s} \tag{1.22}
\]
where \(L_q = L_{ls} + L_{mq}\) and \(L_d = L_{ls} + L_{md}\). It is readily shown that if mutual leakage between stator windings is included in (1.5), the form of the q- and d-axis flux linkages remains unchanged; the only impact is on the respective leakage terms in \(L_q\) and \(L_d\). Substituting (1.20)–(1.22) into (1.17)–(1.19), and since \(p\phi'^r_m = 0\), we can write
\[
v^r_{qs} = (r_s + p L_q)\, i^r_{qs} + \omega_r L_d i^r_{ds} + \omega_r \phi'^r_m \tag{1.23}
\]
\[
v^r_{ds} = (r_s + p L_d)\, i^r_{ds} - \omega_r L_q i^r_{qs} \tag{1.24}
\]
\[
v_{0s} = (r_s + p L_{ls})\, i_{0s} \tag{1.25}
\]
The expression for the electromagnetic torque in terms of q and d variables may be obtained by substituting the expressions for the machine currents in terms of the q- and d-axis currents into (1.9). This procedure is quite labor intensive; it becomes much simpler once the voltage equations are expressed in terms of reference-frame variables. In particular, the input power is given by
\[
P_{dq0s} = P_{abcs} = \frac{3}{2}\left(v_{qs} i_{qs} + v_{ds} i_{ds} + 2 v_{0s} i_{0s}\right),
\]
and the electromagnetic torque multiplied by the rotor mechanical angular velocity is the output power. Thus we have
\[
T_e \frac{2}{P} \omega_r = \frac{3}{2}\left(v^r_{qs} i^r_{qs} + v^r_{ds} i^r_{ds} + 2 v_{0s} i_{0s}\right) \tag{1.26}
\]
Substituting (1.17)–(1.19) into (1.26) gives
\[
T_e \frac{2}{P} \omega_r = \frac{3}{2} r_s\left(i^{r\,2}_{qs} + i^{r\,2}_{ds} + 2 i^2_{0s}\right) + \frac{3}{2}\left(\phi^r_{ds} i^r_{qs} - \phi^r_{qs} i^r_{ds}\right)\omega_r + \frac{3}{2}\left(i^r_{qs}\, p\phi^r_{qs} + i^r_{ds}\, p\phi^r_{ds} + 2 i_{0s}\, p\phi_{0s}\right) \tag{1.27}
\]
The first term on the right-hand side of (1.27) is the ohmic power loss in the stator windings, and the last term is the rate of change of the stored magnetic energy. If we equate the coefficients of \(\omega_r\), we have
\[
T_e = \frac{3}{2}\frac{P}{2}\left(\phi^r_{ds} i^r_{qs} - \phi^r_{qs} i^r_{ds}\right) \tag{1.28}
\]
Substituting (1.20) and (1.21) into (1.28) yields
\[
T_e = \frac{3}{2}\frac{P}{2}\left[\phi'^r_m i^r_{qs} + (L_d - L_q)\, i^r_{ds} i^r_{qs}\right] \tag{1.29}
\]
In conclusion, we can collect the machine equations as
\[
\begin{aligned}
v_d &= r_s i_d + L_d \frac{di_d}{dt} - \omega_e L_q i_q \\
v_q &= r_s i_q + L_q \frac{di_q}{dt} + \omega_e L_d i_d + \omega_e \phi'_m \\
T_e &= \frac{3}{2}\frac{P}{2}\left[\phi'_m i_q + (L_d - L_q)\, i_d i_q\right]
\end{aligned}
\tag{1.30}
\]
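As a quick illustration of how the model (1.30) can be used in practice, the following short MATLAB sketch integrates the dq equations with a simple forward-Euler step. It is only a sketch: the parameter values (Rs, Ld, Lq, flux, pole pairs, step size, test voltages) are placeholders chosen for the example and do not correspond to the machine used in this thesis.

% Minimal forward-Euler simulation of the PMSM dq model (1.30).
% All numerical values below are illustrative placeholders.
Rs = 1.0; Ld = 5e-3; Lq = 8e-3; phi_m = 0.1; P = 6;    % machine parameters
Ts = 1e-4; Nsteps = 2000;                              % step size and horizon
id = 0; iq = 0; we = 100;                              % initial currents, electrical speed [rad/s]
vd = 0; vq = 20;                                       % constant test voltages [V]
Id = zeros(1,Nsteps); Iq = zeros(1,Nsteps); Te = zeros(1,Nsteps);
for k = 1:Nsteps
    did = ( vd - Rs*id + we*Lq*iq ) / Ld;              % d-axis current derivative
    diq = ( vq - Rs*iq - we*Ld*id - we*phi_m ) / Lq;   % q-axis current derivative
    id = id + Ts*did;                                  % forward-Euler update
    iq = iq + Ts*diq;
    Id(k) = id; Iq(k) = iq;
    Te(k) = 1.5*(P/2)*( phi_m*iq + (Ld-Lq)*id*iq );    % torque from (1.30)
end
plot((1:Nsteps)*Ts, Iq); xlabel('time [s]'); ylabel('i_q [A]');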
Chapter 2
Space Vector Pulse Width Modulation for two-level converters
Introduction Referring to [4], the circuit in Fig. 2.1 shows the foundation of a two-level voltage source converter. It has six switches (SW1–SW6), each represented by an IGBT switching device. A, B and C represent the outputs for the phase-shifted sinusoidal signals. Depending on the switching combination, the inverter produces different outputs, creating the two-level signal. The biggest difference from other PWM methods is that SVPWM uses a vector as a reference, which gives the advantage of a better overview of the system.
2.1 Reference Vector
The reference vector is represented in the αβ-plane. This is a two-dimensional plane obtained by transformation from the three-dimensional plane containing the vectors of
Figure 2.1: Two-level three-phase inverter, with a load
the three phases. The switches being ON or OFF is determined by the location of the reference vector in this αβ-plane.

Figure 2.2: The reference vector in the two- and three-dimensional planes

Table 2.1 shows that the switches can be ON or OFF, meaning 1 or 0. Switches 1, 3, 5 are the upper switches; if one of them is 1, it turns the corresponding upper inverter leg ON and the terminal voltage (Va, Vb, Vc) is positive (+VDC). If the upper switch is zero, the terminal voltage is zero. The lower switches are complementary to the upper switches, so the only possible combinations are the switching states 000, 001, 010, 011, 100, 101, 110, 111. This means that there are 8 possible switching states, two of which are zero switching states and six of which are active switching states.
Switching state   a (S1 S2 → Van)   b (S3 S4 → Vbn)   c (S5 S6 → Vcn)
1                 ON  OFF  Vdc      ON  OFF  Vdc      ON  OFF  Vdc
0                 OFF ON   0        OFF ON   0        OFF ON   0

Table 2.1: Switching states for each phase leg
These are represented by the active (V1–V6) and zero (V0) vectors. The zero vectors are placed at the origin of the axes, as shown in Fig. 2.3. It is assumed that the three-phase system is
Figure 2.3: Space voltage vectors in different sectors
balanced:
Va0+ Vb0+ Vc0 = 0 (2.1)
These are the instantaneous phase voltages:
\[
V_a = V\sin(\omega t) \tag{2.2}
\]
\[
V_b = V\sin\!\left(\omega t + \tfrac{2\pi}{3}\right) \tag{2.3}
\]
\[
V_c = V\sin\!\left(\omega t + \tfrac{4\pi}{3}\right) \tag{2.4}
\]
When the three phase voltages are applied to an AC machine, a rotating flux is created. This flux is represented as one rotating voltage vector. The magnitude and angle of this vector can be calculated with the Clarke transformation:
\[
V_{ref} = V_\alpha + jV_\beta = \frac{2}{3}\left(V_a + aV_b + a^2 V_c\right) \tag{2.5}
\]
where a is given by (2.6):
\[
a = e^{j\frac{2\pi}{3}} \tag{2.6}
\]
The magnitude and angle (determining in which sector the reference vector lies) of the reference vector are:
\[
|V_{ref}| = \sqrt{V_\alpha^2 + V_\beta^2} \tag{2.7}
\]
\[
\theta = \tan^{-1}\frac{V_\beta}{V_\alpha} \tag{2.8}
\]
The reference voltage can then be expressed as:
\[
V_\alpha + jV_\beta = \frac{2}{3}\left(V_a + e^{j\frac{2\pi}{3}} V_b + e^{j\frac{4\pi}{3}} V_c\right) \tag{2.9}
\]
Inserting the phase-shifted values for Va, Vb and Vc gives:
\[
V_\alpha + jV_\beta = \frac{2}{3}\left(V_a + \cos\tfrac{2\pi}{3}V_b + \cos\tfrac{2\pi}{3}V_c\right) + j\frac{2}{3}\left(\sin\tfrac{2\pi}{3}V_b - \sin\tfrac{2\pi}{3}V_c\right) \tag{2.10}
\]
The voltage vector components on the α and β axes can then be described as:
\[
\begin{bmatrix} V_\alpha \\ V_\beta \end{bmatrix}
= \frac{2}{3}
\begin{bmatrix} 1 & \cos\tfrac{2\pi}{3} & \cos\tfrac{2\pi}{3} \\ 0 & \sin\tfrac{2\pi}{3} & -\sin\tfrac{2\pi}{3} \end{bmatrix}
\begin{bmatrix} V_a \\ V_b \\ V_c \end{bmatrix}
= \frac{2}{3}
\begin{bmatrix} 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & \tfrac{\sqrt{3}}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix}
\begin{bmatrix} V_a \\ V_b \\ V_c \end{bmatrix}
\tag{2.11}
\]
\[
V_\alpha = \frac{2}{3}\left(V_a - \tfrac{1}{2}V_b - \tfrac{1}{2}V_c\right), \qquad
V_\beta = \frac{2}{3}\left(\tfrac{\sqrt{3}}{2}V_b - \tfrac{\sqrt{3}}{2}V_c\right)
\tag{2.12}
\]
Having calculated Vα, Vβ, Vref and the reference angle, the first step is taken.
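A compact way to carry out this first step in code is sketched below in MATLAB; the three-phase samples va, vb, vc are placeholder inputs created only for the example.

% Clarke transformation and reference-vector magnitude/angle, eqs. (2.7)-(2.12).
va = 100*sin(0.3); vb = 100*sin(0.3 + 2*pi/3); vc = 100*sin(0.3 + 4*pi/3);  % example samples
Valpha = (2/3)*( va - 0.5*vb - 0.5*vc );            % eq. (2.12)
Vbeta  = (2/3)*( (sqrt(3)/2)*vb - (sqrt(3)/2)*vc );
Vref   = hypot(Valpha, Vbeta);                      % |Vref|, eq. (2.7)
theta  = atan2(Vbeta, Valpha);                      % reference angle, eq. (2.8)
theta  = mod(theta, 2*pi);                          % wrap to [0, 2*pi)
sector = floor(theta/(pi/3)) + 1;                   % sector 1..6 of the alpha-beta plane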
2.2 Time Duration
Vref can be found with two active vectors and one zero vector. For sector 1 (0 to π/3), Vref can be composed of V0, V1 and V2. In terms of the duration times, Vref can be written as:
\[
V_{ref} = V_1\frac{T_1}{T_c} + V_2\frac{T_2}{T_c} + V_0\frac{T_0}{T_c} \tag{2.13}
\]
\[
V_{ref}T_c = V_1 T_1 + V_2 T_2 + V_0 T_0 \tag{2.14}
\]
The total cycle is given by \(T_c = T_1 + T_2 + T_0\). The positions of \(V_{ref}\), \(V_1\), \(V_2\) and \(V_0\) can be described by their magnitudes and angles:
\[
V_{ref} = V_{ref}\,e^{j\theta}, \qquad V_1 = \frac{2}{3}V_{DC}, \qquad V_2 = \frac{2}{3}V_{DC}\,e^{j\frac{\pi}{3}}, \qquad V_0 = 0 \tag{2.15}
\]
\[
T_c V_{ref}\begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}
= \frac{2}{3}T_1 V_{DC}\begin{bmatrix} 1 \\ 0 \end{bmatrix}
+ \frac{2}{3}T_2 V_{DC}\begin{bmatrix} \cos\frac{\pi}{3} \\ \sin\frac{\pi}{3} \end{bmatrix} \tag{2.16}
\]
Splitting into real and imaginary parts simplifies the calculation of each duration time.
Real part:
\[
T_c V_{ref}\cos\theta = \frac{2}{3}T_1 V_{DC} + \frac{1}{3}T_2 V_{DC}
\]
Imaginary part:
\[
T_c V_{ref}\sin\theta = \frac{1}{\sqrt{3}}T_2 V_{DC}
\]
T1 and T2 are then given by:
\[
T_1 = T_c\frac{\sqrt{3}\,V_{ref}}{V_{DC}}\sin\!\left(\frac{\pi}{3} - \theta\right) = T_c\, a\sin\!\left(\frac{\pi}{3} - \theta\right) \tag{2.17}
\]
\[
T_2 = T_c\frac{\sqrt{3}\,V_{ref}}{V_{DC}}\sin\theta = T_c\, a\sin\theta \tag{2.18}
\]
with \(0 < \theta < \frac{\pi}{3}\), where \(a = \frac{\sqrt{3}\,V_{ref}}{V_{DC}}\).
Figure 2.4: Space vector diagram for Sector 1 described with the duty cycle for each vector
The general calculation to obtain the duty times in the remaining sectors is given by:
\[
T_1 = T_c\, a \sin\!\left(\frac{\pi}{3} - \theta + \frac{n-1}{3}\pi\right) = T_c\, a\left[\sin\!\left(\frac{n}{3}\pi\right)\cos\theta - \cos\!\left(\frac{n}{3}\pi\right)\sin\theta\right] \tag{2.19}
\]
\[
T_2 = T_c\, a \sin\!\left(\theta - \frac{n-1}{3}\pi\right) = T_c\, a\left[-\cos\theta\,\sin\!\left(\frac{n-1}{3}\pi\right) + \sin\theta\,\cos\!\left(\frac{n-1}{3}\pi\right)\right] \tag{2.20}
\]
\[
T_0 = T_c - T_1 - T_2 \tag{2.21}
\]
Choosing n as the number of the sector (n = 1, 2, ..., 6), the duration times in each sector can be calculated.
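The following MATLAB sketch applies (2.19)-(2.21) to compute the dwell times for an arbitrary reference vector; Vdc, Tc, Vref and theta are example inputs, not values taken from the thesis setup.

% Dwell-time calculation for SVPWM, eqs. (2.19)-(2.21).
Vdc  = 540;                 % DC-link voltage [V] (placeholder)
Tc   = 1e-4;                % switching period [s] (placeholder)
Vref = 200; theta = 1.2;    % reference magnitude [V] and angle [rad] (placeholders)
a  = sqrt(3)*Vref/Vdc;      % modulation factor
n  = floor(theta/(pi/3)) + 1;          % sector number 1..6
T1 = Tc*a*sin(n*pi/3 - theta);         % eq. (2.19), since pi/3 - theta + (n-1)*pi/3 = n*pi/3 - theta
T2 = Tc*a*sin(theta - (n-1)*pi/3);     % eq. (2.20)
T0 = Tc - T1 - T2;                     % eq. (2.21)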
Sw states (a b c)   Vector   Magnitude    Angle
0 0 0 / 1 1 1       V0       0            0
1 0 0               V1       (2/3)VDC     0
1 1 0               V2       (2/3)VDC     π/3
0 1 0               V3       (2/3)VDC     2π/3
0 1 1               V4       (2/3)VDC     π
0 0 1               V5       (2/3)VDC     4π/3
1 0 1               V6       (2/3)VDC     5π/3

Table 2.2: All switching states and their corresponding voltage vectors
Sector   T1                      T2                       T0
1        Tc·a·sin(π/3 − θ)       Tc·a·sin(θ)              Tc − T1 − T2
2        Tc·a·sin(2π/3 − θ)      Tc·a·sin(θ − π/3)        Tc − T1 − T2
3        Tc·a·sin(π − θ)         Tc·a·sin(θ − 2π/3)       Tc − T1 − T2
4        Tc·a·sin(4π/3 − θ)      Tc·a·sin(θ − π)          Tc − T1 − T2
5        Tc·a·sin(5π/3 − θ)      Tc·a·sin(θ − 4π/3)       Tc − T1 − T2
6        Tc·a·sin(2π − θ)        Tc·a·sin(θ − 5π/3)       Tc − T1 − T2
2.3 Switching pattern generation
Duty cycle. For each sector there are 7 switching states per cycle; the sequence always starts and ends with a zero vector. This also means that no extra switching state is needed when changing sector. In the odd sectors the sequence travels counter-clockwise, in the even sectors clockwise.

Duty cycle for sector 1. Sector 1 goes through the switching states 000-100-110-111-110-100-000, one round and then back again. This happens during the time Tc, which has to be divided amongst the 7 switching states, three of them being zero vectors:
\[
T_c = \frac{T_0}{4} + \frac{T_1}{2} + \frac{T_2}{2} + \frac{T_0}{2} + \frac{T_2}{2} + \frac{T_1}{2} + \frac{T_0}{4} \tag{2.22}
\]
This can be calculated for all the sectors (Fig. 2.5). There are different kinds of waveforms: centre-aligned and edge-aligned. Edge-aligned waveforms make the comparison with the carrier wave easier, but centre-aligned waveforms have the advantage of reducing harmonics and noise.
Figure 2.5: (a) Illustration of the ramp, (b) space vector diagram with every switching state and sequence
Sequencing of switching states in sectors 1-6. Following the pattern for each sector results in an ON/OFF waveform for each sector and phase. Each switch has its switching information depending on where the reference vector is located; the waveforms are shown in Fig. 2.6. For sector 1, the switch is ON between T0/4 and Tc − T0/4 in the first phase, between T0/4 + T1/2 and Tc − (T0/4 + T1/2) for the second phase, and so on. For the switch to know that it should be turned ON at these specific times, a timer is required that can provide this information. Something like a ramp or a repeated sequence can be used as a reference (Fig. 2.5), so the ramp indicates that the switches should be switched ON/OFF at the specific times.
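As an illustration, the sketch below builds the three centre-aligned turn-on/turn-off instants for sector 1 from T0, T1 and T2; the variable names are chosen for this example only, and T0, T1, T2 are assumed to come from the dwell-time calculation above.

% Centre-aligned switching instants for sector 1 (sequence 000-100-110-111-110-100-000).
Tc = T0 + T1 + T2;
ta_on = T0/4;               ta_off = Tc - T0/4;                   % phase a (switch S1)
tb_on = T0/4 + T1/2;        tb_off = Tc - (T0/4 + T1/2);          % phase b (switch S3)
tc_on = T0/4 + T1/2 + T2/2; tc_off = Tc - (T0/4 + T1/2 + T2/2);   % phase c (switch S5)
duty_a = (ta_off - ta_on)/Tc;   % duty cycles to be compared against the ramp/carrier
duty_b = (tb_off - tb_on)/Tc;
duty_c = (tc_off - tc_on)/Tc;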
Sector   Duty time, upper switches                         Duty time, lower switches
1        S1: T1+T2+T0/2   S3: T2+T0/2   S5: T0/2           S2: T0/2          S4: T1+T0/2       S6: T1+T2+T0/2
2        S1: T2+T0/2      S3: T1+T2+T0/2   S5: T0/2        S2: T1+T0/2       S4: T0/2          S6: T1+T2+T0/2
3        S1: T0/2         S3: T1+T2+T0/2   S5: T2+T0/2     S2: T1+T2+T0/2    S4: T0/2          S6: T1+T0/2
4        S1: T0/2         S3: T2+T0/2   S5: T1+T2+T0/2     S2: T1+T2+T0/2    S4: T1+T0/2       S6: T0/2
5        S1: T2+T0/2      S3: T0/2      S5: T1+T2+T0/2     S2: T1+T0/2       S4: T1+T2+T0/2    S6: T0/2
6        S1: T1+T2+T0/2   S3: T0/2      S5: T2+T0/2        S2: T0/2          S4: T1+T2+T0/2    S6: T1+T0/2
Figure 2.6: Waveform showing sequencing of switching states for all the regions
Chapter 3
Dead Beat control
Introduction The remarkable advancement of semiconductor devices in recent decades, and the development of cost-effective and powerful DSPs and microprocessors, have opened up numerous opportunities for many industrial applications. One of the by now well-known applications is AC machine drives [5]. Today, power electronics has increased efficiency and reliability. For that reason increased attention is given to the development of high-performance adjustable speed AC drives, as they can decrease the cost and outdo the performance and efficiency of traditional DC adjustable speed drives. With today's focus on energy efficiency optimization, this is a major industrial concern, while the increased scope of application for electrical machines also sets advanced demands on dynamic performance. Permanent Magnet Synchronous Machine (PMSM) drives have been found to be advantageous over other drives, e.g. due to their high efficiency and high power density. For that reason PMSMs are intensively studied by researchers, scientists and engineers and are considered a hot topic of study. Furthermore, many recent studies have shown a tendency to apply Dead Beat control to PMSM drives, especially for high-performance applications.
3.1 Dead Beat theory
Deadbeat control is one of the most attractive approaches for digital control, because it is able to zero the state variable errors in a finite number of sampling steps, usually giving the fastest dynamic response achievable with a digital implementation. For this reason, deadbeat control has been widely used mainly for inverters, uninterruptible power supplies (UPS) and drive applications. In spite of several potential advantages, deadbeat control has drawbacks which sometimes limit its industrial application. The most relevant is a high sensitivity to model uncertainties, parameter mismatches and noise on the sensed variables, such that a precise knowledge of the system parameters and model is usually needed.
Thus, deadbeat control with simple autotuning features is very attractive, since it combines fast dynamic response with low sensitivity to parameter mismatches.
Deadbeat control is a control strategy found in the predictive control fam-ily. The idea behind predictive control is to use a model of the system to calculate predictions of future controlled variables. Deadbeat control is a discrete-time model based control scheme, which uses the machine model to calculate the voltages that eliminate the current errors after one sampling period. The voltages are subsequently synthesized to the machine terminals using an inverter controlled by PWM.
More generally, the strategy is based on knowledge of the plant to be controlled: the predictive approach can be used to anticipate its behaviour. In this case, an accurate knowledge of the model can improve the control in order to obtain a better response.
If we consider the known discrete-time system
\[
x_{k+1} = A x_k + B u_k + d,
\]
inverting the matrix \(B\) gives the deadbeat control law
\[
u_k = B^{-1}\left[x_{k+1} - A x_k - d\right].
\]
Usually a prediction step is required first, computing the following state
\[
x_{pred} = A x_k + B u_k + d,
\]
and then the output for step \(k+1\)
\[
u_{k+1} = B^{-1}\left[x_{ref} - A x_{pred} - d\right],
\]
where \(x_{ref}\) is the desired state at step \(k+2\) (usually the reference value of the state of interest). This is an important compensation that gives the control unit time to perform the computation: it is not possible to apply the output within the same sampling period in which the measurement is acquired, so the output is applied at the next step, as shown in Figure 3.2.
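A minimal MATLAB sketch of this two-step (prediction plus deadbeat) law is given below; A, B, d, the reference and the states are arbitrary placeholder values used only to show the structure.

% Generic deadbeat control with one-step delay compensation (sketch).
A = [0.9 0.05; -0.05 0.9];  B = [0.1 0; 0 0.1];  d = [0; 0.01];  % placeholder model
x_ref = [1; 0];                       % desired state (reference)
x_k   = [0; 0];                       % measured state at step k
u_k   = [0; 0];                       % input already applied during step k
x_pred  = A*x_k + B*u_k + d;          % predict the state at step k+1
u_next  = B \ (x_ref - A*x_pred - d); % deadbeat input to apply at step k+1
% u_next drives the predicted state to x_ref at step k+2 (one-step delay compensated).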
3.2 Dead Beat current control
The current control system of three-phase pulse-width-modulated (PWM) voltage-source inverters (VSIs) is the key element in the control structure of many applications, such as grid-connected inverters, including active power filters and static synchronous compensators, AC motor drives, and uninterruptible power supplies.
As shown in the previous paragraph, the idea is to use the model (the PMSM equations in our case) to compute the voltage reference to be sent to the modulator. In this case the states xk, xk+1 are the currents and the inputs uk are the voltages.
Figure 3.2: Dead Beat current control
The equations inserted in the Dead Beat current control (DBCC) are:
\[
I_{dq,k+1} = A(k)\,I_{dq,k} + B(k)\,V_{dq,k} + C(k) \tag{3.1}
\]
\[
V_{dq,k+1} = B(k)^{-1}\left[I_{dq}^{ref} - A(k)\,I_{dq,k+1}^{pred} - C(k)\right] \tag{3.2}
\]
According to the Euler discretization:
\[
A(k) = \begin{bmatrix}
1 - \dfrac{R_s}{L_d}T_s & \omega_{el}\dfrac{L_q}{L_d}T_s \\[2mm]
-\omega_{el}\dfrac{L_d}{L_q}T_s & 1 - \dfrac{R_s}{L_q}T_s
\end{bmatrix} \tag{3.3}
\]
\[
B(k) = \begin{bmatrix} \dfrac{1}{L_d} & 0 \\[2mm] 0 & \dfrac{1}{L_q} \end{bmatrix} T_s \tag{3.4}
\]
\[
C(k) = \begin{bmatrix} 0 \\ -\omega_{el}\dfrac{\varphi_0}{L_q} \end{bmatrix} T_s \tag{3.5}
\]
Through equation (3.1) it is possible to compute the current expected at the following step (Idq,pred). Then, using equation (3.2), it is possible to compute the reference voltage.
It is rather obvious that, to have a good response of the system, the parameters in (3.3), (3.4) and (3.5) should be correct.
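The following MATLAB sketch shows one way the matrices (3.3)-(3.5) and the deadbeat voltage could be evaluated each sampling period; the parameter names and numerical values are placeholders for illustration and do not come from the experimental drive.

% One iteration of the deadbeat current control (3.1)-(3.5), illustrative only.
Rs = 0.5; Ld = 4e-3; Lq = 6e-3; phi0 = 0.08; Ts = 1e-4;  % model parameters (placeholders)
wel = 200;                       % electrical speed [rad/s]
Idq   = [0; 2];                  % measured currents [id; iq]
Vdq   = [0; 10];                 % voltage applied during the current period
Idq_ref = [0; 10];               % current reference
A = [1 - Rs/Ld*Ts,   wel*Lq/Ld*Ts;
     -wel*Ld/Lq*Ts,  1 - Rs/Lq*Ts];          % eq. (3.3)
B = [Ts/Ld, 0; 0, Ts/Lq];                    % eq. (3.4)
C = [0; -wel*phi0/Lq*Ts];                    % eq. (3.5)
Idq_pred = A*Idq + B*Vdq + C;                % predicted current at k+1, eq. (3.1)
Vdq_next = B \ (Idq_ref - A*Idq_pred - C);   % deadbeat voltage for step k+1, eq. (3.2)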
A good current response is shown in Fig. 3.3, assuming a step of the reference current Iq from 0 to 10 A and considering an ideal inverter supplying the motor.
To understand how the parameters influence the response, it is useful to show the trends: if the inductance in the control is wrong, there is an overshoot or undershoot during the transient; if the resistance in the control is wrong, there is an offset during the steady state; if the flux in the control is wrong, there is an offset during the steady state when the speed ω ≠ 0.
Figure 3.3: Current response
3.3 Implementation of Predictive Dead Beat Control
Figure 3.4: Current response with wrong inductance
Figure 3.5: Current response with wrong resistance
Figure 3.6: Current response with wrong flux
The complete control algorithm, including the digital implementation, delay compensation and deadbeat voltage constraint, is shown in Figure 3.7. The whole loop is executed once
Figure 3.7: Flowchart of the predictive deadbeat control algorithm including delay compensation
every sampling period. The previous value of the predicted voltage is stored as input for the calculation of the next current prediction. Then the delay-compensated deadbeat control voltages are calculated and a decision is taken whether the voltage restriction must be applied or not. Finally, the delay-compensated voltage vector is synthesized to the machine terminals through the inverter using SVPWM, and the next sample of the current and rotor speed is acquired.
3.4 Compensation for Rotor Movement
Worthy of note is the influence of the compensation of the rotor movement, as reported in [6]. The (k+1)th dq voltage reference \(v^{ref}_{dq}(k+1)\) is calculated without considering the rotor movement. In fact, \(v^{ref}_{dq}(k+1)\) is maintained for the whole (k+1)th sampling period, but during it the rotor moves and the real dq voltages \(v^{real}_{dq}(t)\) differ from \(v^{ref}_{dq}(k+1)\). Especially at high speed, the rotor movement in a sampling period is not negligible. An offset occurs due to the errors between \(v^{ref}_{dq}(k+1)\) and \(v^{real}_{dq}(t)\), since \(v^{ref}_{dq}(k+1)\) is applied at the beginning of the sampling period. The idea is to calculate compensated dq voltage references \(v^{com}_{dq}(k+1)\) such that the average dq voltages \(v^{avg}_{dq}(k+1)\) effectively applied to the motor during the (k+1)th sampling period are equal to \(v^{ref}_{dq}(k+1)\).
The relationship between \(v^{real}_{dq}(t)\) and \(v^{com}_{dq}(k+1)\) is described by equations (3.6); the relationship between the dq frames is shown in Fig. 3.8.
\[
\begin{aligned}
V_d^{real}(t) &= \cos(\theta_{real} - \theta_{com})\,V_d^{com}(k+1) + \sin(\theta_{real} - \theta_{com})\,V_q^{com}(k+1) \\
V_q^{real}(t) &= -\sin(\theta_{real} - \theta_{com})\,V_d^{com}(k+1) + \cos(\theta_{real} - \theta_{com})\,V_q^{com}(k+1)
\end{aligned}
\tag{3.6}
\]
Both sides of the above equations are integrated from 0 to \(T_s\):
\[
\begin{aligned}
T_s V_d^{avg}(k+1) &= \int_0^{T_s} V_d^{real}(t)\,dt = \int_0^{T_s}\!\cos(\omega_e(k)t)\,V_d^{com}(k+1)\,dt + \int_0^{T_s}\!\sin(\omega_e(k)t)\,V_q^{com}(k+1)\,dt \\
T_s V_q^{avg}(k+1) &= \int_0^{T_s} V_q^{real}(t)\,dt = \int_0^{T_s}\!-\sin(\omega_e(k)t)\,V_d^{com}(k+1)\,dt + \int_0^{T_s}\!\cos(\omega_e(k)t)\,V_q^{com}(k+1)\,dt
\end{aligned}
\tag{3.7}
\]
Figure 3.8: dq reference frame for rotor movement
All voltages and the rotor electrical angular speed \(\omega_e(k)\) are assumed constant over the integration interval. By imposing that the applied \(V^{avg}_{dq}(k+1)\) must be equal to \(V^{ref}_{dq}(k+1)\), the following equations are obtained from (3.7):
\[
\begin{aligned}
T_s V_d^{ref}(k+1) &= \frac{\sin(\omega_e(k)T_s)}{\omega_e(k)} V_d^{com}(k+1) - \frac{\cos(\omega_e(k)T_s) - 1}{\omega_e(k)} V_q^{com}(k+1) \\
T_s V_q^{ref}(k+1) &= \frac{\cos(\omega_e(k)T_s) - 1}{\omega_e(k)} V_d^{com}(k+1) + \frac{\sin(\omega_e(k)T_s)}{\omega_e(k)} V_q^{com}(k+1)
\end{aligned}
\tag{3.8}
\]
Therefore, the \(V^{com}_{dq}(k+1)\) are derived from equations (3.8):
\[
V_d^{com}(k+1) = \frac{T_s\sin(\omega_e(k)T_s)\,\omega_e(k)\,V_d^{ref}(k+1) + T_s\left[\cos(\omega_e(k)T_s) - 1\right]\omega_e(k)\,V_q^{ref}(k+1)}{\sin^2(\omega_e(k)T_s) + \left[\cos(\omega_e(k)T_s) - 1\right]^2}
\]
\[
V_q^{com}(k+1) = \frac{-T_s\left[\cos(\omega_e(k)T_s) - 1\right]\omega_e(k)\,V_d^{ref}(k+1) + T_s\sin(\omega_e(k)T_s)\,\omega_e(k)\,V_q^{ref}(k+1)}{\sin^2(\omega_e(k)T_s) + \left[\cos(\omega_e(k)T_s) - 1\right]^2}
\tag{3.9}
\]
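A small MATLAB sketch of this compensation, applying (3.9) to a given voltage reference, is shown below; the speed, sampling period and reference values are placeholders.

% Rotor-movement compensation of the dq voltage references, eq. (3.9).
we = 500; Ts = 1e-4;                 % electrical speed [rad/s] and sampling period (placeholders)
Vd_ref = 5; Vq_ref = 50;             % deadbeat voltage references (placeholders)
s = sin(we*Ts);  c = cos(we*Ts) - 1; % shorthand terms
den = s^2 + c^2;                     % common denominator
Vd_com = ( Ts*s*we*Vd_ref + Ts*c*we*Vq_ref) / den;
Vq_com = (-Ts*c*we*Vd_ref + Ts*s*we*Vq_ref) / den;
% For we*Ts -> 0 the compensation tends to Vdq_com = Vdq_ref, as expected.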
3.5 Parameter Errors
To analyse the influence of parameter errors, some criteria must be set up for evaluating the performance [7]. The resistance of the copper windings in the stator varies with temperature according to (3.10)
Rph = Rph0(1 + α(Tamb− Tref)) (3.10)
where:
α is the temperature coefficient of copper; α = 0.003862 1/°C.
Rph0 is the measured stator resistance at approx. 20 °C.
Tamb is the ambient temperature around the stator windings.
Tref is the reference temperature, approx. 20 °C.
Assuming the parameter measurements acquired are valid for a temperature of approx. 20 °C, and that a maximum ambient temperature of 100 °C can build up around the stator windings, a maximum increase of about 30% is calculated using (3.10). To account for caution and uncertainty, an additional ±20% variation is added. The stator resistance is therefore varied in the interval shown in (3.11).
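As a quick check of the ~30% figure, a one-line MATLAB evaluation of (3.10) at 100 °C ambient with the stated copper coefficient:

% Relative stator resistance increase from 20 degC to 100 degC, eq. (3.10).
alpha = 0.003862;                    % temperature coefficient of copper [1/degC]
ratio = 1 + alpha*(100 - 20);        % Rph / Rph0
fprintf('Rph/Rph0 = %.3f (about +%.0f%%)\n', ratio, 100*(ratio-1));  % ~1.31, i.e. ~31%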
When the stator current amplitude reaches a certain level, the magnetic properties of the iron material in the stator change due to the saturation phenomenon. This causes the reluctance path of the stator flux to change, hence the machine inductances change. Therefore, it is important to analyse the effects of saturation when utilizing model-based deadbeat control. As the synchronous inductances are a product of the mechanical construction dimensions, the material properties and the machine operating point, the interval over which the inductances vary is hard to establish and beyond the scope of this thesis. Therefore, the inductances are varied according to (3.12)
0.5Ldq ≤ Ldq ≤ 1.5Ldq (3.12)
The flux varies with temperature as well, but the variation depends on the magnet material. The Sm-Co magnet has a better high-temperature coercivity and a better temperature coefficient of remanence than the Nd-Fe-B type. In Figure 3.9, the knee of the B-H characteristic lies below the H-axis. As temperature rises, the knee moves up the curve with the danger of intrusion into the operating quadrant. Figure 3.10 shows the relative effects of the knee movements for the two materials in question. The knees below the H-axis move in a way which brings both characteristics closer to the origin, but the movement of the Sm-Co line is small in comparison to that of Nd-Fe-B. The knee of the Sm-Co characteristic does not move up enough to affect the linearity in the operating quadrant, but the movement of Nd-Fe-B is greater and linearity is not maintained. This means that the effects, at typical motor temperatures, of demagnetizing fields with intensities up to the 'hot' value of coercivity will not be permanent for the Sm-Co magnet but may be so for Nd-Fe-B. It should be remembered, however, that even the Sm-Co magnet can still be demagnetized if the maximum allowable motor temperature is exceeded.
Similarly to the other two parameters, we can assume the flux varies according to (3.13)
0.5φ0 ≤ φ0 ≤ 1.5φ0 (3.13)
Chapter 4
Identification Methods
Introduction The determination of an uncertain value is a recurring problem in many fields of engineering and science in general.
Based on measurements and mathematical models, one tries to estimate the value of a specific variable. In this context, we will refer to a particular case: the estimation of a quantity of a discrete-time dynamic system, which is therefore a time-varying signal (variable). The special case of the estimation of a constant quantity of a dynamic system (a parameter) is called parametric identification.
4.1 Identifiability
Our objective is to estimate in real time the most critical parameters of a PM drive application. [8]
This online identification should be reliable and efficient, with the least possible execution time. Also, no additional sensors are permitted due to certification problems. The only admitted measurements are those used for the control:
stator current sensors, rotor position measurement and dc-link voltage sensor. It is, of course, necessary that the online parameter estimation works at steady state as well as during transient periods. It is also desirable that the online estimation requires the modification of neither the current references (Idref and Iqref) nor the control output voltages (Vdref and Vqref). In summary, we have the following requirements:
1) no additional sensor;
2) no signal injection;
3) online estimation during transients and in steady state.
For satisfying these requirements, we should first study the identifiability of the electrical parameters under the aforementioned conditions.
For a given dynamic system, the identifiability of its parameters may be evaluated by using the observability criterion when the state vector of the system is extended to include the parameters.
Thus, for a given speed \(\omega = \omega_0\), the application of this concept to our system begins by extending the state vector to the parameters:
\[
\begin{aligned}
\frac{d}{dt} i_d &= -\frac{R_s}{L_d} i_d + \omega_0 \frac{L_q}{L_d} i_q + \frac{1}{L_d} V_d \\
\frac{d}{dt} i_q &= -\frac{R_s}{L_q} i_q - \omega_0 \frac{L_d}{L_q} i_d + \frac{1}{L_q}\left(V_q - \omega_0 \varphi_0\right) \\
\frac{d}{dt} R_s &\simeq 0, \qquad \frac{d}{dt} L_d \simeq 0, \qquad \frac{d}{dt} L_q \simeq 0, \qquad \frac{d}{dt}\varphi_0 \simeq 0
\end{aligned}
\tag{4.1}
\]
The system (4.1) is nonlinear, and its local observability depends on the rank of the Jacobian matrix of the following output vector:
\[
\Theta = \begin{bmatrix} i_d & i_q & \dfrac{d}{dt}i_d & \dfrac{d}{dt}i_q & \dfrac{d^2}{dt^2}i_d & \dfrac{d^2}{dt^2}i_q \end{bmatrix}^T \tag{4.2}
\]
Here, for the parameter identifiability, we are only interested in the part of the Jacobian matrix concerning the parameters. Taking into account the mentioned requirements on the parameter identification, and particularly that on the estimation in steady state (worst case), we give here the Jacobian submatrix when the electrical currents are settled (id = Id, iq = Iq) and the stator voltages are constant:
\[
D = \begin{bmatrix}
-I_d & 0 & \omega_0 I_q & 0 \\
-I_q & -\omega_0 I_d & 0 & -\omega_0 \\
R_s I_d - \omega_0 L_d I_q & -\omega_0^2 L_d I_d & -R_s \omega_0 I_q & -\omega_0^2 L_d \\
R_s I_q + \omega_0 L_q I_d & R_s \omega_0 I_d & -\omega_0^2 L_q I_q & R_s \omega_0
\end{bmatrix}
\tag{4.3}
\]
In matrix D, according to the local nature of this study, the rows and columns respectively correspond to small variations of
\[
\begin{bmatrix} \dfrac{d}{dt}i_d & \dfrac{d}{dt}i_q & \dfrac{d^2}{dt^2}i_d & \dfrac{d^2}{dt^2}i_q \end{bmatrix}^T
\quad\text{and}\quad
\begin{bmatrix} R_s & L_d & L_q & \varphi_0 \end{bmatrix}^T.
\]
It is obvious from (4.3) that the system (4.1) is locally nonobservable because D is not full rank. Therefore, the parameter vector \(\begin{bmatrix} R_s & L_d & L_q & \varphi_0 \end{bmatrix}^T\) is not entirely identifiable in steady state.
In particular, the second and fourth columns are linearly dependent; this means that Ld and ϕ0 cannot be identified independently.
Meanwhile, most of the drives used in transport applications are nonsalient PMSMs (Ld = Lq = Ls), in which the direct current is often fixed to zero (Id = 0) in order to minimize the machine losses. In this case, the second column of D in (4.3), corresponding to Ld, can be suppressed and Id can be replaced by zero. Then we have:
\[
D = \begin{bmatrix}
0 & \omega_0 I_q & 0 \\
-I_q & 0 & -\omega_0 \\
-\omega_0 L_d I_q & -R_s\omega_0 I_q & -\omega_0^2 L_d \\
R_s I_q & -\omega_0^2 L_q I_q & R_s\omega_0
\end{bmatrix}
\tag{4.4}
\]
Once again, matrix D is not full rank, because the first and third columns are linearly dependent. Thus, the parameter vector \(\begin{bmatrix} R_s & L_q & \varphi_0 \end{bmatrix}^T\) is not entirely identifiable, i.e. Rs and ϕ0 cannot be identified at the same time in steady state.
4.2 Least Squares
To begin our discussion of least squares [9], we take the ARMA model and the equation error, repeated for the nth order:
\[
y(k) + a_1 y(k-1) + a_2 y(k-2) + \ldots + a_n y(k-n) - b_1 u(k-1) - \ldots - b_n u(k-n) = e(k;\theta). \tag{4.5}
\]
We assume that we observe the set of outputs and inputs
\[
\{-y(0), -y(1), \ldots, -y(N),\ u(0), u(1), \ldots, u(N)\}
\]
and wish to compute values for
\[
\theta = \begin{bmatrix} a_1 & \ldots & a_n & b_1 & \ldots & b_n \end{bmatrix}^T,
\]
which will best fit the observed data. Because y(k) depends on past data back to n periods earlier, the first error we can form is e(n; θ). Suppose we define the vector of errors by writing (4.5) over and over for k = n, n+1, ..., N. The results would be
\[
\begin{aligned}
y(n) &= \phi^T(n)\,\theta + e(n;\theta), \\
y(n+1) &= \phi^T(n+1)\,\theta + e(n+1;\theta), \\
&\;\;\vdots \\
y(N) &= \phi^T(N)\,\theta + e(N;\theta),
\end{aligned}
\tag{4.6}
\]
where we have used the fact that the state of the ARMA model is
\[
\phi(k) = \begin{bmatrix} -y(k-1) & -y(k-2) & \ldots & -y(k-n) & u(k-1) & \ldots & u(k-n) \end{bmatrix}^T.
\]
It is convenient to collect these in matrix notation and define
\[
Y(N) = \begin{bmatrix} y(n) & \ldots & y(N) \end{bmatrix}^T, \qquad
\Phi(N) = \begin{bmatrix} \phi(n) & \ldots & \phi(N) \end{bmatrix}^T, \qquad
\epsilon(N;\theta) = \begin{bmatrix} e(n) & \ldots & e(N) \end{bmatrix}^T, \qquad
\theta = \begin{bmatrix} a_1 & \ldots & a_n & b_1 & \ldots & b_n \end{bmatrix}^T. \tag{4.7}
\]
Note that Φ(N ) is a matrix with 2n columns and N-n+1 rows. In terms of these, we can write the equation errors as
\[
Y = \Phi\theta + \epsilon(N;\theta) \tag{4.8}
\]
Least squares is a prescription that one should take the value of θ which makes the sum of the squares of e(k) as small as possible. In terms of (4.6), we define
\[
J(\theta) = \sum_{k=n}^{N} e^2(k;\theta) \tag{4.9}
\]
and in terms of (4.8), this is
\[
J(\theta) = \epsilon^T(N;\theta)\,\epsilon(N;\theta). \tag{4.10}
\]
We want to find \(\hat{\theta}_{LS}\), the least-squares estimate of \(\theta_0\), which is the θ having the property
\[
J(\hat{\theta}_{LS}) \leq J(\theta). \tag{4.11}
\]
But J(θ) is a quadratic function of the 2n parameters in θ, and from calculus we take the result that a necessary condition on \(\hat{\theta}_{LS}\) is that the partial derivatives of J with respect to θ at θ = \(\hat{\theta}_{LS}\) should be zero. This we do as follows:
\[
J(\theta) = \epsilon^T\epsilon = (Y - \Phi\theta)^T(Y - \Phi\theta) = Y^T Y - \theta^T\Phi^T Y - Y^T\Phi\theta + \theta^T\Phi^T\Phi\theta;
\]
extending the rules of differentiation of scalars to vectors, we obtain
\[
J_\theta = \frac{\partial J}{\partial \theta} = -2\, Y^T\Phi + 2\,\theta^T\Phi^T\Phi. \tag{4.12}
\]
If we take the transpose of (4.12) and let θ = \(\hat{\theta}_{LS}\), we must get zero; thus
\[
\Phi^T\Phi\,\hat{\theta}_{LS} = \Phi^T Y \tag{4.13}
\]
These equations are called the normal equations of the problem, and their solution will provide us with the least-squares estimate \(\hat{\theta}_{LS}\). Do the equations have a unique solution? The answer depends mainly on how θ was selected and what input signals u(k) were used. Recall that earlier we saw that a general third-order state model has fifteen parameters, but only six of these are needed to completely describe the input-output dependency. If we stayed with the fifteen-element θ, the resulting normal equations could not have a unique solution. To obtain a unique parameter set, we must select a canonical form having a minimal number of parameters, such as the observer or ARMA forms. By way of definition, a parameter θ having the property that one and only one value of θ makes J(θ) a minimum is said to be identifiable. Two parameters having the property that J(θ1) = J(θ2) are said to be equivalent. As to the selection of the inputs u(k), let us consider an absurd case. Suppose u(k) ≡ c for all k, a step function input. Now suppose
we look at 4.6 again for the third-order case to be specific. The errors are
\[
\begin{aligned}
y(3) &= -a_1 y(2) - a_2 y(1) - a_3 y(0) + b_1 c + b_2 c + b_3 c + e(3), \\
&\;\;\vdots \\
y(N) &= -a_1 y(N-1) - a_2 y(N-2) - a_3 y(N-3) + b_1 c + b_2 c + b_3 c + e(N).
\end{aligned}
\tag{4.14}
\]
It is obvious that in (4.14) the parameters \(b_1\), \(b_2\) and \(b_3\) always appear as the sum \((b_1 + b_2 + b_3)c\); they cannot be separately identified, no matter how long the constant
input is used. Somehow the constant u fails to excite all the dynamics of the plant. This problem has been studied extensively, and the property of persistent excitation has been defined to describe a sequence u(k) that fluctuates enough to avoid the possibility that only linear combinations of elements of θ will show up in the error and hence in the normal equations [Ljung (1987)]. Without being more specific at this point, we can say that an input is persistently exciting of order n if the lower-right (n×n) matrix component of \(\Phi^T\Phi\) [which depends only on u(k)] is nonsingular. It can be shown that a signal is persistently exciting of order n if its discrete spectrum has at least n nonzero points over the range 0 < ωT < π. White noise and a pseudo-random binary signal are examples of frequently used persistently exciting input signals. For the moment, then, we will assume that the u(k) are persistently exciting and that the θ are identifiable and consequently that \(\Phi^T\Phi\) is nonsingular. We can then write the explicit solution
\[
\hat{\theta}_{LS} = (\Phi^T\Phi)^{-1}\Phi^T Y \tag{4.15}
\]
It should be especially noted that although we are here mainly interested in the identification of parameters to describe dynamic systems, the solution (4.15) derives entirely from the error equations (4.8) and the sum-of-squares criterion (4.10). Least squares is used for all manner of curve fitting, including nonlinear least squares when the error is a nonlinear function of the parameters. Numerical methods for solving the least-squares problem (4.13) without ever explicitly forming the product \(\Phi^T\Phi\) have been extensively studied [see Golub (1965) and Strang (1976)]. The performance measure (4.9) is essentially based on the view that all the errors are equally important. This is not necessarily so, and a very simple modification can take account of known differences in the errors. We might know, for example, that data taken later in the experiment were much more in error than data taken early on, and it would seem
reasonable to weight the errors accordingly. Such a scheme is referred to as weighted least squares and is based on the performance criterion
\[
J(\theta) = \sum_{k=n}^{N} w(k)\,e^2(k;\theta) = \epsilon^T W \epsilon \tag{4.16}
\]
In (4.16) we take the weighting function w(k) to be positive and, presumably, to be small where the errors are expected to be large, and vice versa. In any event, derivation of the normal equations from (4.16) follows at once and gives \(\Phi^T W\Phi\,\hat{\theta}_{WLS} = \Phi^T W Y\), and subject to the coefficient matrix being nonsingular, we have
\[
\hat{\theta}_{WLS} = (\Phi^T W\Phi)^{-1}\Phi^T W Y \tag{4.17}
\]
We note that (4.17) reduces to ordinary least squares when W = I, the identity matrix. Another common choice for w(k) in (4.16) is \(w(k) = (1-\gamma)\gamma^{N-k}\) for γ < 1. This choice weights the recent (k near N) observations more than the past (k near n) ones and corresponds to a first-order filter operating on the squared error. The factor 1 − γ causes the gain of the equivalent filter to be 1 for constant errors. As γ nears 1, the filter memory becomes long, and noise effects are reduced; whereas for smaller γ, the memory is short, and the estimate can track changes that occur in θ if the computation is done over and over as N increases. The past is weighted geometrically with weighting γ, but a rough estimate of the memory length is given by 1/(1 − γ); so, for example, γ = 0.99 corresponds to a memory of about 100 samples. The choice is a compromise between a short memory, which permits tracking changing parameters, and a long memory, which incorporates a lot of averaging and reduces the noise variance.
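A compact MATLAB illustration of the batch solution (4.17) with exponential weighting is given below; the true parameters, input sequence and noise level are made up purely to exercise the formula.

% Batch weighted least squares, eq. (4.17), on a synthetic first-order ARMA model.
% y(k) + a1*y(k-1) = b1*u(k-1) + e(k), with made-up true values a1 = -0.8, b1 = 0.5.
N = 500; gamma = 0.99;
u = randn(N,1);                            % persistently exciting input (white noise)
y = zeros(N,1);
for k = 2:N
    y(k) = 0.8*y(k-1) + 0.5*u(k-1) + 0.01*randn;   % data generation
end
Phi = [-y(1:N-1), u(1:N-1)];               % regressor rows phi(k)^T = [-y(k-1) u(k-1)]
Y   = y(2:N);
w   = (1-gamma)*gamma.^(N-1 - (1:N-1)');   % exponential weights, heavier on recent data
W   = diag(w);
theta_wls = (Phi'*W*Phi) \ (Phi'*W*Y);     % estimate of [a1; b1], expected near [-0.8; 0.5]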
4.2.1 Recursive Least Squares
The weighted least-squares calculation for \(\hat{\theta}_{WLS}\) given in (4.17) is referred to as a batch calculation, because by the definition of the several entries the formula presumes that one has a batch of data of length N from which the matrices Y and Φ are composed according to the definitions in (4.7), and from which, with the addition of the weighting matrix W, the normal equations are solved. There are times when the data are acquired sequentially rather than in a batch, and other times when one wishes to examine the nature of the solution as more data are included, to see whether perhaps some improvement in the parameter estimates continues to be made, or whether any surprises occur, such as a sudden change in θ or a persistent drift in one or more of the parameters. In short, one sometimes wishes to do a visual or experimental examination of the new estimates as one or several more data points are included in the computed values of \(\hat{\theta}_{WLS}\). The equations of (4.17) can be put into a form for sequential processing of the type described. We begin with (4.17) as solved for N data points and consider the consequences of taking one more observation. We need to consider the structure of \(\Phi^T W\Phi\) and \(\Phi^T W Y\) as one more datum is added. Consider first \(\Phi^T W\Phi\). To be specific about the weights, we will assume \(w = a\gamma^{N-k}\). Then, if a = 1 and γ = 1, we have ordinary least squares; and if a = 1 − γ, we have exponentially weighted least squares. From (4.7) we have, for data up to time N+1,
\[
\Phi^T = \left[\phi(n) \cdots \phi(N)\ \phi(N+1)\right]
\]
and
\[
\Phi^T W\Phi = \sum_{k=n}^{N+1}\phi(k)\,w(k)\,\phi^T(k) = \sum_{k=n}^{N+1}\phi(k)\,a\gamma^{N+1-k}\phi^T(k)
\]
\[
\Phi^T W\Phi = \sum_{k=n}^{N}\phi(k)\,a\gamma\,\gamma^{N-k}\phi^T(k) + \phi(N+1)\,a\,\phi^T(N+1)
= \gamma\,\Phi^T(N)\,W(N)\,\Phi(N) + \phi(N+1)\,a\,\phi^T(N+1). \tag{4.18}
\]
From the solution (4.17) we see that the inverse of the matrix in (4.18) will be required, and for convenience and by convention we define the 2n×2n matrix P as
\[
P(N+1) = \left[\Phi^T(N+1)\,W\,\Phi(N+1)\right]^{-1} \tag{4.19}
\]
Then we see that (4.18) can be written as
\[
P(N+1) = \left[\gamma P^{-1}(N) + \phi(N+1)\,a\,\phi^T(N+1)\right]^{-1} \tag{4.20}
\]
and we need the inverse of a sum of two matrices. This is a well-known problem, and a formula attributed to Householder (1964), known as the matrix inversion lemma, is
\[
(A + BCD)^{-1} = A^{-1} - A^{-1}B\left(C^{-1} + DA^{-1}B\right)^{-1}DA^{-1}. \tag{4.21}
\]
To apply (4.21) to (4.20) we make the associations
\[
A = \gamma P^{-1}(N), \qquad B = \phi(N+1) = \phi, \qquad C = w(N+1) = a, \qquad D = \phi^T(N+1) = \phi^T,
\]
and we find at once that
\[
P(N+1) = \frac{P(N)}{\gamma} - \frac{P(N)}{\gamma}\phi\left(\frac{1}{a} + \phi^T\frac{P(N)}{\gamma}\phi\right)^{-1}\phi^T\frac{P(N)}{\gamma}. \tag{4.22}
\]
In the solution we also need \(\Phi^T W Y\), which we write as
\[
\Phi^T W Y = \left[\phi(n) \cdots \phi(N)\ \phi(N+1)\right]
\begin{bmatrix}
a\gamma^{N+1-n} & & & \\
& \ddots & & \\
& & a\gamma & \\
& & & a
\end{bmatrix}
\begin{bmatrix} y(n) \\ \vdots \\ y(N) \\ y(N+1) \end{bmatrix}
\tag{4.23}
\]
which can be expressed in two terms as
\[
\Phi^T W Y(N+1) = \gamma\,\Phi^T W Y(N) + \phi(N+1)\,a\,y(N+1). \tag{4.24}
\]
If we now substitute the expression for P(N+1) from (4.22) and for \(\Phi^T W Y(N+1)\) from (4.24) into (4.17), we find [letting \(P(N) = P\), \(\phi(N+1) = \phi\), and \(y(N+1) = y\) for notational convenience]
\[
\hat{\theta}_{WLS}(N+1) = \left[\frac{P}{\gamma} - \frac{P}{\gamma}\phi\left(\frac{1}{a} + \phi^T\frac{P}{\gamma}\phi\right)^{-1}\phi^T\frac{P}{\gamma}\right]\left[\gamma\,\Phi^T W Y(N) + \phi\,a\,y\right] \tag{4.25}
\]
When we multiply the factors in (4.25), we see that the term \(P\,\Phi^T W Y(N) = \hat{\theta}_{WLS}(N)\), so that (4.25) reduces to
\[
\hat{\theta}_{WLS}(N+1) = \hat{\theta}_{WLS}(N) + \frac{P}{\gamma}\phi\,a\,y - \frac{P}{\gamma}\phi\left(\frac{1}{a} + \phi^T\frac{P}{\gamma}\phi\right)^{-1}\phi^T\hat{\theta}_{WLS}(N) - \frac{P}{\gamma}\phi\left(\frac{1}{a} + \phi^T\frac{P}{\gamma}\phi\right)^{-1}\phi^T\frac{P}{\gamma}\phi\,a\,y. \tag{4.26}
\]
If we now insert the identity
\[
\left(\frac{1}{a} + \phi^T\frac{P}{\gamma}\phi\right)^{-1}\left(\frac{1}{a} + \phi^T\frac{P}{\gamma}\phi\right)
\]
between the φ and the a in the second term on the right of (4.26), we can combine the two terms which multiply y and reduce (4.26) to
\[
\hat{\theta}_{WLS}(N+1) = \hat{\theta}_{WLS}(N) + L(N+1)\left[y(N+1) - \phi^T(N+1)\,\hat{\theta}_{WLS}(N)\right] \tag{4.27}
\]
where we have defined
\[
L(N+1) = \frac{P}{\gamma}\phi\left(\frac{1}{a} + \phi^T\frac{P}{\gamma}\phi\right)^{-1} \tag{4.28}
\]
Equations (4.22), (4.27) and (4.28) can be combined into a set of steps that constitute an algorithm for computing \(\hat{\theta}\) recursively. To collect these, we proceed as follows:
1. Select a, γ, and N.
2. Comment: a = γ = 1 is ordinary least squares; a = 1 − γ with 0 < γ < 1 is exponentially weighted least squares.
3. Select initial values for P(N) and \(\hat{\theta}(N)\). Comment: see discussion below.
4. Collect y(0), ..., y(N) and u(0), ..., u(N) and form \(\phi^T(N+1)\).
5. Let k ← N.
6. \(L(k+1) \leftarrow \dfrac{P(k)}{\gamma}\phi(k+1)\left(\dfrac{1}{a} + \phi^T(k+1)\dfrac{P(k)}{\gamma}\phi(k+1)\right)^{-1}\)
7. Collect y(k+1) and u(k+1).
8. \(\hat{\theta}(k+1) \leftarrow \hat{\theta}(k) + L(k+1)\left(y(k+1) - \phi^T(k+1)\,\hat{\theta}(k)\right)\)
9. \(P(k+1) \leftarrow \dfrac{1}{\gamma}\left[I - L(k+1)\,\phi^T(k+1)\right]P(k)\)
10. Form \(\phi(k+2)\).
11. Let k ← k+1.
12. Go to step 6.
A MATLAB sketch of this recursion is given after the list.
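The following MATLAB sketch implements steps 5-12 for the same synthetic first-order model used earlier; the initialization follows option 2 below (P = αI, θ = 0), and all numerical values are illustrative placeholders.

% Recursive least squares (steps 5-12), exponentially weighted, on synthetic data.
gamma = 0.98; a = 1 - gamma;          % exponential weighting
n = 2;                                % two parameters: [a1; b1]
theta = zeros(n,1); P = 1e3*eye(n);   % initialization: theta(N) = 0, P(N) = alpha*I
N = 500; u = randn(N,1); y = zeros(N,1);
for k = 2:N
    y(k) = 0.8*y(k-1) + 0.5*u(k-1) + 0.01*randn;    % plant: a1 = -0.8, b1 = 0.5
end
for k = 2:N
    phi = [-y(k-1); u(k-1)];                        % regressor phi(k)
    L = (P/gamma)*phi / (1/a + phi'*(P/gamma)*phi); % step 6 (scalar division, no inversion)
    theta = theta + L*( y(k) - phi'*theta );        % step 8: prediction-error update
    P = ( eye(n) - L*phi' )*P / gamma;              % step 9
end
theta   % should approach [-0.8; 0.5]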
Especially pleasing is the form of Step 8, the update formula for the next value of the estimate. We see that the term \(\phi^T\hat{\theta}(N)\) is the output to be expected at time N+1 based on the previous data, \(\phi(N+1)\), and the previous estimate, \(\hat{\theta}(N)\). Thus the next estimate of θ is given by the old estimate corrected by a term linear in the error between the observed output, y(N+1), and the predicted output, \(\phi^T\hat{\theta}(N)\). The gain of the correction, L(N+1), is given by (4.28). Note especially that in (4.28) no matrix inversion is required, but only division by the scalar
\[
\frac{1}{a} + \phi^T\frac{P}{\gamma}\phi.
\]
However, one should not conclude that these recursions are without numerical difficulties; their study is beyond the scope of this text. We still have the question of initial conditions to resolve. Two possibilities are commonly recommended:
1. Collect a batch of N > 2n data values and solve the batch formula (4.17) once for P(N), L(N+1) and \(\hat{\theta}(N)\), and enter these values at Step 3.
2. Set \(\hat{\theta}(N) = 0\) and \(P(N) = \alpha I\), where α is a large scalar. The suggestion has been made that a suitable estimate of α is [Soderstrom, Ljung, and Gustavsson (1974)]
\[
\alpha = 10\,\frac{1}{N+1}\sum_{i=0}^{N} y^2(i)
\]
The steps in the table update the least-squares estimate of the parameters θ when one more pair of data points u and y is taken. With only modest effort we can extend these formulas to include the case of vector or multivariable observations, wherein the data y(k) are a vector of p simultaneous observations. We assume that parameters θ have been defined such that the system can be described by
\[
y(k) = \phi^T(k)\,\theta + e(k;\theta) \tag{4.29}
\]
One such set of parameters is defined by the multivariable ARMA model
\[
y(k) = -\sum_{i=1}^{n} a_i\,y(k-i) + \sum_{i=1}^{n} B_i\,u(k-i) \tag{4.30}
\]
where the \(a_i\) are scalars, the \(B_i\) are p×m matrices, θ is (n + nmp)×1, and the φ(k) are now (n + nmp)×p matrices. If we define Φ, Y and ε as in (4.7), the remainder of the batch formula development proceeds exactly as before, leading to (4.15) for the least-squares estimates and (4.17) for the weighted least-squares estimates, with little more than a change in the definition of the elements in the equations. We need to modify (4.16) to \(J = \sum \epsilon^T w\,\epsilon\), reflecting the fact that ε(k) is now also a p×1 vector and the w(k) are p×p nonsingular matrices. To compute the recursive estimate equations, we need to repeat (4.18) and the development following (4.18) with the new definitions. Only minor changes are required. For example, in (4.22) we must replace 1/a by \(a^{-1}\). The resulting equations are, in the format of (4.27):
\[
\begin{aligned}
L(N+1) &= \frac{P}{\gamma}\phi\left(a^{-1} + \phi^T\frac{P}{\gamma}\phi\right)^{-1}, \\
P(N+1) &= \frac{1}{\gamma}\left(I - L(N+1)\,\phi^T\right)P, \\
\hat{\theta}_{WLS}(N+1) &= \hat{\theta}_{WLS}(N) + L(N+1)\left[y(N+1) - \phi^T\hat{\theta}_{WLS}(N)\right]
\end{aligned}
\tag{4.31}
\]
4.3 The Dead Beat Estimator
Introduction As shown in Chapter 3, the Dead Beat control is very sensitive to variations of the parameters between the model and the real motor to be controlled. The error between the real value of the current and the reference current gives good information about the error on each parameter to be estimated. In particular, we can identify the conditions under which it is possible to estimate each parameter best. The basic idea is to estimate a parameter only when the corresponding information about the error is present. The inductances have to be estimated during current transients: indeed, as shown in Fig. 3.4, an error on the inductance produces an error during the current transient (overshoot or undershoot).
The flux can be estimated only when the rotor spins and the current is in steady state; in this case an error on the flux produces an error (offset) during the current steady state, Fig. 3.6.
The same reasoning can be made for the resistance: an error on the resistance produces an error (offset) during the current steady state, Fig. 3.5.
It is obvious that the main problem is to determine whether a given error should be associated with an error on the flux or on the resistance (this is shown in the following paragraphs).
4.3.1 Dead Beat Estimator Basic Idea
To better understand what was mentioned above, we can consider equations (4.32), obtained according to the Euler discretization:
\[
\begin{aligned}
I_{d(k+1)} &= \left(1 - \frac{R}{L}T_s\right) I_{dk} + \omega_r T_s I_{qk} + V_{dk}\frac{T_s}{L} \\
I_{q(k+1)} &= \left(1 - \frac{R}{L}T_s\right) I_{qk} - \omega_r T_s I_{dk} + V_{qk}\frac{T_s}{L} - \frac{\varphi_0}{L}\omega_r T_s
\end{aligned}
\tag{4.32}
\]
Equation (4.32) can be rewritten as:
\[
\begin{aligned}
I_{d(k+1)} - I_{dk} &= -\frac{R}{L}T_s I_{dk} + \omega_r T_s I_{qk} + V_{dk}\frac{T_s}{L} \\
I_{q(k+1)} - I_{qk} &= -\frac{R}{L}T_s I_{qk} - \omega_r T_s I_{dk} + V_{qk}\frac{T_s}{L} - \frac{\varphi_0}{L}\omega_r T_s
\end{aligned}
\tag{4.33}
\]
We can notice that in steady state \(I_{d(k+1)} \simeq I_{dk}\) and \(I_{q(k+1)} \simeq I_{qk}\), so (4.33) becomes:
\[
\begin{aligned}
0 &= -R T_s I_{dk} + \omega_r L T_s I_{qk} + V_{dk} T_s \\
0 &= -R T_s I_{qk} - \omega_r L T_s I_{dk} + V_{qk} T_s - \varphi_0\omega_r T_s
\end{aligned}
\tag{4.34}
\]
If we consider the standstill situation, i.e. \(\omega_r \simeq 0\), (4.34) simplifies to:
\[
\begin{aligned}
0 &= -R T_s I_{dk} + V_{dk} T_s \\
0 &= -R T_s I_{qk} + V_{qk} T_s
\end{aligned}
\tag{4.35}
\]
It is worth noting that in steady state at standstill each prediction error can be associated with an error on the resistance, so this is the best condition to estimate it.
Instead, during a transient \(I_{d(k+1)}\) and \(I_{q(k+1)}\) differ significantly from \(I_{dk}\) and \(I_{qk}\); with the additional hypothesis \(I_{dk} = I_{qk} \simeq 0\), (4.33) can be rewritten as:
\[
I_{d(k+1)} = V_{dk}\frac{T_s}{L}, \qquad
I_{q(k+1)} = V_{qk}\frac{T_s}{L} - \frac{\varphi_0}{L}\omega_r T_s \tag{4.36}
\]
Adding the standstill condition, (4.36) becomes:
\[
I_{d(k+1)} = V_{dk}\frac{T_s}{L}, \qquad I_{q(k+1)} = V_{qk}\frac{T_s}{L} \tag{4.37}
\]
In this case each prediction error can be associated with an error on the inductance, so this is the best condition to estimate it. Finally, we can consider steady state with non-zero speed, \(\omega_r \neq 0\), so we have:
\[
\begin{aligned}
0 &= -R T_s I_{dk} + \omega_r L T_s I_{qk} + V_{dk} T_s \\
0 &= -R T_s I_{qk} - \omega_r L T_s I_{dk} + V_{qk} T_s - \varphi_0\omega_r T_s
\end{aligned}
\tag{4.38}
\]
In this case it is difficult to associate a prediction error with a specific parameter, so another condition is necessary, \(\omega_r\varphi_0 \gg R I_{qk}\), which allows writing, for the q-axis:
\[
0 = -\omega_r L T_s I_{dk} + V_{qk} T_s - \varphi_0\omega_r T_s \tag{4.39}
\]
If \(I_{dk} = 0\), the equation becomes:
\[
0 = V_{qk} T_s - \varphi_0\omega_r T_s \tag{4.40}
\]
It is worth noting that in this case each prediction error can be associated with an error on the flux, so this is the best condition to estimate it.
4.3.2 Dead Beat Estimator Procedure
As described above, a correct approach to the problem is very important: the new value of a parameter should be computed only when the corresponding information is present, and the old value should be kept otherwise.
The computation of the parameters is based on the same criterion as the deadbeat control (hence the name): we can make a prediction of a parameter using the model, considering the parameters as state functions (like the currents). A good implementation requires a definition of the conditions (transient and steady state), a proper sequence of estimation and an initialization procedure.
In this thesis we define a transient as follows: if
\[
\mathrm{abs}\!\left(I_{dq(k-1)} - I_{dqk}\right) \geq Min\_transient\_current\_change \tag{4.41}
\]
then Transient = 1, where \(I_{dq(k-1)}\) is the current at the previous step, \(I_{dqk}\) is the current measured at step k, and Min_transient_current_change is a predefined current threshold (for example 4 A).
The transient can be present on the d-axis, the q-axis or both. We can also define the steady state: if
\[
\mathrm{abs}\!\left(I_{dq(k-1)} - I_{dqk}\right) \leq Max\_steadystate\_ripple \tag{4.42}
\]
then counter_steadystate = counter_steadystate + 1, and if
\[
counter\_steadystate \geq steadystate\_delay\_samples \tag{4.43}
\]
then SteadyState = 1, where \(I_{dq(k-1)}\) is the current at the previous step, \(I_{dqk}\) is the current measured at step k, Max_steadystate_ripple is a predefined current threshold, and steadystate_delay_samples is the number of sampling periods during which we want the current to keep its value almost constant. The steady state can be present on the d-axis, the q-axis or both. In other words, a "large" variation of current between two sampling instants is enough to declare a transient; to declare a steady state we require the current variation to stay "small" for "long enough" (a time that we can predefine, e.g. 0.001 s).
Fig. 4.1 shows the code used in the simulation to define transient and steady state.
Figure 4.1: Implementation of transient and steady-state detection in a MATLAB function
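Since Figure 4.1 is only available as an image, a hedged reconstruction of the kind of logic it describes is sketched below in MATLAB; the thresholds follow the text above, while the exact code of the thesis may differ. Idq_k, Idq_km1 and counter_steadystate are assumed to be maintained by the caller between sampling periods.

% Transient / steady-state detection per axis (sketch of the logic in Fig. 4.1).
Min_transient_current_change = 4;      % [A], predefined threshold, eq. (4.41)
Max_steadystate_ripple       = 0.5;    % [A], predefined threshold, eq. (4.42)
steadystate_delay_samples    = 10;     % number of consecutive "quiet" samples, eq. (4.43)
dI = abs(Idq_k - Idq_km1);             % per-axis current change, [d; q]
Transient = dI >= Min_transient_current_change;           % logical [d; q]
quiet     = dI <= Max_steadystate_ripple;
counter_steadystate = (counter_steadystate + 1).*quiet;   % reset the counter where not quiet
SteadyState = counter_steadystate >= steadystate_delay_samples;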
Once transient and steady state are defined, we can proceed with the definition of the conditions for calculating the parameters. These conditions are:
1) If none of the following conditions holds, keep the previous value of the parameters.
2) If there is a transient on the d-axis, update the value of Ld.
3) If there is a transient on the q-axis, update the value of Lq.
4) If there is a steady state on one or both dq axes, then:
4a) if the ratio between the back-EMF and the voltage drop on R satisfies \(\dfrac{\omega_{re(k-1)}\varphi_0}{R\,I_{dqk}} \geq emf\_res\_ratio\_min\), update the value of the flux, where emf_res_ratio_min is a predefined value (for example 5);
4b) if the ratio between the back-EMF and the voltage drop on R satisfies \(\dfrac{\omega_{re(k-1)}\varphi_0}{R\,I_{dqk}} \leq emf\_res\_ratio\_max\), where emf_res_ratio_max is a predefined value (for example 0.5), and if \(|I_{dqk}| \geq Min\_steadystate\_current\), where Min_steadystate_current is a predefined value (for example 4 A), update the resistance.
As described above, we can use the same equations of the model, simply treating the parameters as state functions. So for case 2):
\[
L_{dk} = \frac{1}{I_{dk} - I_{d(k-1)}}\Bigg[ -R_{(k-1)}T_s I_{d(k-1)} + \omega_{re(k-1)} L_{q(k-1)} T_s I_{q(k-1)} + T_s V_{d(k-1)}\frac{\sin(\omega_{re(k-1)}T_s)}{\omega_{re(k-1)}T_s} + T_s V_{q(k-1)}\frac{\sin^2\!\left(\omega_{re(k-1)}\frac{T_s}{2}\right)}{\omega_{re(k-1)}\frac{T_s}{2}} \Bigg] \tag{4.44}
\]
For case 3):
\[
L_{qk} = \frac{1}{I_{qk} - I_{q(k-1)}}\Bigg[ -R_{(k-1)}T_s I_{q(k-1)} - \omega_{re(k-1)} L_{d(k-1)} T_s I_{d(k-1)} - \omega_{re(k-1)}\varphi_{k-1} T_s + T_s V_{q(k-1)}\frac{\sin(\omega_{re(k-1)}T_s)}{\omega_{re(k-1)}T_s} - T_s V_{d(k-1)}\frac{\sin^2\!\left(\omega_{re(k-1)}\frac{T_s}{2}\right)}{\omega_{re(k-1)}\frac{T_s}{2}} \Bigg] \tag{4.45}
\]
For case 4a):
\[
\varphi_{0k} = \frac{L_q}{\omega_{re(k-1)}T_s}\Bigg[ I_{qk} - I_{q(k-1)} + \frac{R_{(k-1)}}{L_{q(k-1)}} T_s I_{q(k-1)} + \omega_{re(k-1)}\frac{L_{d(k-1)}}{L_{q(k-1)}} T_s I_{d(k-1)} - \frac{T_s}{L_{q(k-1)}} V_{q(k-1)}\frac{\sin(\omega_{re(k-1)}T_s)}{\omega_{re(k-1)}T_s} + \frac{T_s}{L_{q(k-1)}} V_{d(k-1)}\frac{\sin^2\!\left(\omega_{re(k-1)}\frac{T_s}{2}\right)}{\omega_{re(k-1)}\frac{T_s}{2}} \Bigg] \tag{4.46}
\]
For case 4b) it is convenient to use the equation of the axis where the value of the current is larger. For the d-axis:
\[
R_k = \frac{1}{I_{d(k-1)}T_s}\Bigg[ \omega_{re(k-1)} L_{q(k-1)} T_s I_{q(k-1)} + T_s V_{d(k-1)}\frac{\sin(\omega_{re(k-1)}T_s)}{\omega_{re(k-1)}T_s} + T_s V_{q(k-1)}\frac{\sin^2\!\left(\omega_{re(k-1)}\frac{T_s}{2}\right)}{\omega_{re(k-1)}\frac{T_s}{2}} - L_{d(k-1)}\left(I_{d(k-1)} - I_{dk}\right) \Bigg] \tag{4.47}
\]
For the q-axis:
\[
R_k = \frac{1}{I_{q(k-1)}T_s}\Bigg[ -\omega_{re(k-1)} L_{d(k-1)} T_s I_{d(k-1)} - \omega_{re(k-1)}\varphi_{k-1} T_s + T_s V_{q(k-1)}\frac{\sin(\omega_{re(k-1)}T_s)}{\omega_{re(k-1)}T_s} - T_s V_{d(k-1)}\frac{\sin^2\!\left(\omega_{re(k-1)}\frac{T_s}{2}\right)}{\omega_{re(k-1)}\frac{T_s}{2}} - L_{q(k-1)}\left(I_{q(k-1)} - I_{qk}\right) \Bigg] \tag{4.48}
\]
Defining the conditions to compute each parameter may not be sufficient for a correct identification: in fact, all the conditions described above are required (standstill, steady state and transient).
In this thesis the on-line parameter identification procedure is initialized at standstill, during which we can estimate the resistance and the inductances. This procedure can be repeated whenever needed: for example, when the temperature increase is sufficient to cause a large resistance variation, as shown in (3.11). In this thesis we can also choose whether to use the observer only to estimate the parameters ('open loop') or to update every new value on-line ('closed loop'). In the latter case we update the values in this way:
\[
\begin{aligned}
R_{ctrl} &= R_{ctrl} + k\,(R_{est} - R_{ctrl}) \\
L_{d,ctrl} &= L_{d,ctrl} + k\,(L_{d,est} - L_{d,ctrl}) \\
L_{q,ctrl} &= L_{q,ctrl} + k\,(L_{q,est} - L_{q,ctrl}) \\
\varphi_{0,ctrl} &= \varphi_{0,ctrl} + k\,(\varphi_{0,est} - \varphi_{0,ctrl})
\end{aligned}
\tag{4.49}
\]
where 0 < k ≤ 1 (for example 0.8), Rctrl, Ld,ctrl, Lq,ctrl, φ0,ctrl are the parameters used by the controller and Rest, Ld,est, Lq,est, φ0,est are the values provided by the estimator.
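A one-line-per-parameter MATLAB sketch of this 'closed loop' update (4.49) is given below; the gain k is a placeholder, and R_ctrl, R_est and the other variables are assumed to hold the current controller parameters and the latest estimates.

% First-order ('closed loop') update of the control parameters, eq. (4.49).
k = 0.8;                                      % update gain, 0 < k <= 1
R_ctrl   = R_ctrl   + k*(R_est   - R_ctrl);   % resistance used by the deadbeat controller
Ld_ctrl  = Ld_ctrl  + k*(Ld_est  - Ld_ctrl);  % d-axis inductance
Lq_ctrl  = Lq_ctrl  + k*(Lq_est  - Lq_ctrl);  % q-axis inductance
phi_ctrl = phi_ctrl + k*(phi_est - phi_ctrl); % permanent-magnet flux
% With k = 1 the controller adopts each new estimate immediately;
% smaller k low-pass filters the estimates and rejects estimation noise.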
Chapter 5
Model Description and Simulation Results
5.1 Model Description
The simulation environment used in this thesis is the SIMULINK tool of MATLAB R2013a, in which the model of the power part of the drive considered for the application of both methods was developed.
As shown in Fig. 5.1, the power scheme is composed of three subsystems: the motor, the converter and the closed-loop control.
5.1.1 Motor model Description
The machine considered is a Permanent Magnet Synchronous Motor (PMSM). The electrical and mechanical parts of the machine are each represented by a second-order state-space model.
Figure 5.1: Scheme of the power part of the drive

The sinusoidal model assumes that the flux established by the permanent magnets in the stator is sinusoidal, which implies that the electromotive forces are sinusoidal. All the non-linearities of the machine, due to saturation, hysteresis phenomena, displacement currents etc., are neglected. This hypothesis is acceptable because industrial devices typically operate below the saturation threshold of the machine and at low frequencies.
The block implements the following equations:
\[
\begin{aligned}
\frac{d}{dt} i_d &= \frac{1}{L_d} v_d - \frac{R}{L_d} i_d + \frac{L_q}{L_d}\omega_{el}\, i_q \\
\frac{d}{dt} i_q &= \frac{1}{L_q} v_q - \frac{R}{L_q} i_q - \frac{L_d}{L_q}\omega_{el}\, i_d - \frac{\varphi_0\,\omega_{el}}{L_q} \\
T_e &= \frac{3}{2} p\left[\varphi_0\, i_q + (L_d - L_q)\, i_d\, i_q\right]
\end{aligned}
\tag{5.1}
\]
5.1.2 Control System
Fig. 5.2 shows what is under the mask of the control and verification block. In this block a MATLAB function implements the Dead-Beat control, the SVM and the estimation method, as shown in Appendix A for the RLS method and in Appendix B for the Dead-Beat estimation method. The inputs of the controller
Figure 5.2: Scheme of the controller
are: the measured currents (Id and Iq), the electrical speed (ωel), the DC bus voltage (VDC) and the electrical angle (θel); the outputs are the estimated parameters (Rest, Lest, φest), the update pulses of the parameters and of the transient/steady-state detection (Rupd, Lupd, φupd, transientupd, steady-stateupd), the duty cycles and the voltage references (Vdqk). The SVM is implemented as described in Chapter 2, and its outputs are the duty cycles, which are compared with a ramp signal in the FPGA block, as described in paragraph 2.3, to generate the switching signals.
The Dead-Beat current control is implemented taking into account the compensation for rotor movement and the delay compensation described in Chapter 3.
The implementations of the estimation methods are presented in paragraphs 4.2.1 and 4.3.2, respectively.
5.1.3 Power Converter
Fig. 5.3 shows the inverter block. The inverter can be set to ideal or real mode. The real inverter takes into account the voltage drops in the
Figure 5.3: Scheme of the inverter block
components and the dead-time due to switching.
For the sake of simplicity, the simulations have been carried out assuming ideal operation of the inverter: therefore dead-times and voltage drops have been neglected. Anyway, in a second phase it would be possible to take such phenomena into account to ensure a correct behaviour of the estimator in any case.
5.2 Simulation Results
Introduction This paragraph is dedicated to the analysis of the simulation results obtained using the two developed Simulink models, respectively the recursive least squares model and the alternative deadbeat observer model. The simulation models are used to verify the functionality of the estimators,