
POLITECNICO DI MILANO

School of Industrial and Information Engineering

Master of Science in Automation and Control Engineering

Application of Set Membership Identification on Low Performance Hardware

Supervisor:

Lorenzo Mario Fagiano

Co-supervisor: Marco Lauricella

Master Thesis dissertation of:

Colombo Marina Matr. 921030

Academic year 2019-2020


Sommario

In recent years, the study and development of prediction methods has become increasingly in demand in both industry and research. Prediction algorithms make it possible to analyse a system's data and derive hypotheses, as accurate as possible, on the future behaviour of the quantities under examination. The challenge has become that of predicting the required quantities as precisely as possible while having an ever smaller amount of data available. These algorithms find application in very different sectors: from the energy field to the industrial one, from meteorological analyses to financial and market analyses for marketing.

This thesis presents the application of ARX (autoregressive with exogenous input) models to two case studies for the prediction phase; the prediction performance is then reported and discussed in terms of the Mean Absolute Percentage Error (MAPE). Moreover, the proposed linear ARX model allows the use of a recent approach for quantifying the prediction error, which provides guaranteed error bounds on the simulation accuracy. Based on the bounds obtained from this analysis, it is possible to determine a set containing all the models consistent with the available data. This set is then refined by applying an algorithm that reduces its complexity. In this way, the final result is a smaller set, which guarantees a higher probability of obtaining a more precise identification.

The first case study deals with a very simple linear system with only two parameters. The identification is carried out and the simulation accuracy errors are derived; these are used for the implementation of the second phase, namely the refinement of the set containing the possible models. This first case has the purpose of illustrating, in a very simple way, the potential of the algorithms used to reduce the dimensionality of the set.

The second case study refers instead to a real application for the prediction of the energy consumed by an industrial building containing offices and laboratories, based on data collected in only two weeks.

To demonstrate the application of these approaches, the forecasting algorithm was simulated on industrial embedded hardware, in our case a RaspberryPi. The first chapter of this work presents the theoretical concepts that lay the foundations for the subsequent analyses. The second and third chapters illustrate, respectively, the linear system case study and the energy consumption prediction case study.


Abstract

In recent years, the demand for the study and development of prediction methods in industry and research has been rising. With prediction algorithms, it is possible to analyse a system's data and obtain accurate estimates of the future trend of the quantities considered. The challenge now is to obtain increased accuracy in the quantities examined with fewer data available. These algorithms can be employed in a vast range of different domains: from the energy to the industrial field, from meteorological to financial and market analysis.

This thesis illustrates the application of ARX (autoregressive with exogenous input) models to two case studies for the prediction phase; the prediction performance is then reported and discussed, based on the Mean Absolute Percentage Error (MAPE). Moreover, the proposed linear ARX model allows for the use of a recent uncertainty quantification approach, which provides guaranteed bounds on the simulation accuracy. Based on the computed bounds, it is possible to define a set containing all the possible models consistent with the available data. This set can be refined by applying an algorithm that reduces its complexity. In this way, the final result will be a smaller set, resulting in better estimation accuracy.

The first case study concerns a simple linear system with just two parameters. After the identification, the accuracy errors on the simulation are computed; these values are needed in the second phase, that is, the refinement of the set containing all the possible models. This first case has the purpose of illustrating in a simple way the potential of the algorithms for the dimensionality reduction of the initial set.

The second case study refers to a real application: the prediction of the total energy consumed by an industrial building, containing offices and laboratories, based on a data set collected in only two weeks.

In order to show the application of these approaches, the identification algorithm will be simulated on low-performance hardware, a RaspberryPi. In the first chapter of this work, the theoretical concepts will be illustrated. In the second and the third chapter, respectively, the linear system case study and the load forecasting for the considered building will be described.


Contents

1 Problem formulation
1.1 System Identification
1.2 Multistep Predictors
1.3 Multistep Error bound
1.4 Feasible Parameter Set
1.5 Simulation Error Bound
1.6 Embedded implementation

2 Case study: Linear System Parameter Identification
2.1 Problem Description
2.2 System Identification
2.3 Accuracy Bound
2.4 Feasible Parameter Set

3 Case Study: Load Forecasting
3.1 Problem description
3.1.1 Introduction
3.1.2 Artificial input signal
3.2 System identification
3.3 Accuracy Bound


List of Figures

1.1 RaspberryPi
2.1 (a) Input signal (b) Disturbance signal (c) Output measurement not corrupted by disturbance (d) Output measurement corrupted by disturbance
2.2 Comparison between the measured Y (in red) and the predicted Y (in blue). (a) Matlab result (b) Python result
2.3 Comparison between the measured Y (in red) and the predicted Y (in blue). (a) Matlab result (b) Python result
2.4 Multistep error bound. (a) Computation performed with Matlab (b) Computation performed with Python
2.5 Simulation error bound
2.6 Comparison between the set with M1=150 and the sum of M1 with different values of M2. (a) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=50 (pink set); (b) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=100 (pink set); (c) Overlap between the set with M1 and M2=50 (yellow set) and the set with M1 and M2=100 (blue set)
2.7 Comparison between the set with M1=250 and the sum of M1 with different values of M2. (a) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=50 (pink set); (b) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=200 (pink set); (c) Overlap between the set with M1 and M2=50 (yellow set) and the set with M1 and M2=200 (blue set)
2.8 Comparison between the set with M1=300 and the sum of M1 with different values of M2. (a) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=50 (pink set); (b) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=200 (pink set); (c) Overlap between the set with M1 and M2=50 (yellow set) and the set with M1 and M2=200 (blue set)
2.9 τ performed with set M1=150 and M2=50 (in blue) and τ performed with set M1=150 and M2=100 (in red)
2.10 τ performed with set M1=250 and M2=50 (in blue) and τ performed with set M1=250 and M2=200 (in red)
2.11 τ performed with set M1=300 and M2=50 (in blue) and τ performed with set M1=300 and M2=200 (in red)
3.1 ABB building
3.2 Comparison of the measured Y (in red) and the predicted Y (in blue). (a) Matlab result (b) Python result
3.3 Multistep error bound. (a) Computation performed with Matlab (b) Computation performed with Python
3.4 Simulation error bound


List of Tables

2.1 Identified parameters of the linear system
2.2 MAPE of the linear system
2.3 Computational time of the RaspberryPi
3.1 Identified prediction model parameters for the 8th week of year 2018
3.2 MAPE load forecasting
3.3 Computational time of the RaspberryPi


Introduction

System identification is the term used in the automatic control field for estimating dynamical models of systems, based on measurements of the system's input and output signals and on some initial information.

The system identification process begins with the design and execution of experiments in order to collect measurements; the data set is then partitioned into two subsets. The model structure is selected and its parameters are specified; the first subset, the training set, is used in the parameter identification phase to fit the collected data. The process ends with the evaluation of the quality of the obtained model, through model validation performed on the second subset, the validation set.

In control design, system identification is a very important step that interacts with the controller design process, and the properties of the identified model can have a large impact on the control design process itself.

In particular, an inadequate choice of the system identification scheme can make the control design procedure very challenging and limit the efficiency of the resulting control algorithm. On the other hand, a more suitable choice of the identification procedure may ease the design process, making it more attractive for practical use. However, it is not possible to implement an identification procedure that yields a model perfectly corresponding to the actual plant, since some uncertainty related to the model itself will always be present. Moreover, the resulting model and its uncertainty depend on the initial assumptions about the knowledge of the disturbance and measurement signals. The main goal of system identification, hence, is to derive an accurate model despite the presence of these imperfections.

In probabilistic system identification, it is assumed that certain statistical properties of the disturbance signals are known. As a result, the uncertainty related to the identified plant model is associated with a probability density function. In Set Membership identification theory [1, 2, 3], or unknown-but-bounded (UBB) identification error description [4, 5, 6, 7, 8, 9, 10], the uncertainty is described by additive disturbance signals which are only known to be bounded, where the bound is often unknown. Thus, all information about the system is provided by the available measurements, and can be summarized by sets of system states, or sets of model parameter or variable estimates, that are consistent both with the available data and with the constraints on the estimated bound on the identification error. In this thesis, the aim is to develop an identification algorithm with guaranteed worst-case error bounds in simulation, enforcing at the same time a stability property on the resulting models.

In the vast majority of the existing works in the Set Membership literature, the noise bound is considered as a tuning variable. In this case, its choice directly affects the resulting accuracy bounds. In other descriptions, the noise bound is assumed to be known a priori, which can be a limiting assumption, since in many relevant applications the noise bound is not known and a guess on its amplitude can be difficult to verify. Moreover, improper initial assumptions on the disturbance bound may lead to inconsistent estimators.

In this thesis, the theoretical results pertaining to Set Membership identification presented in [11] are reported, allowing one to obtain a convergent estimate of the noise bound from data, for the case of linear time-invariant systems affected by bounded additive measurement noise, where the bound is unknown.

Once the noise bound has been estimated, it is possible to derive the Feasible Parameter Set (FPS). The FPS is defined as the tightest set that contains all possible multistep predictor parameters. However, this FPS description is based on the offline construction of the feasible parameter set obtained using the entire batch of collected data points. If the number of available samples is small, or if there is the need to continuously refine the FPS over time using the information coming from newly collected measurements, it is necessary to adopt an online recursive update rule.

In the literature, several approaches have been proposed where the FPS is updated at each sampling instant, or with a batch procedure, where the update is carried out after Nb new samples are collected from the system, see e.g. [12, 13, 14, 15]. However, these online update approaches incur the problem of set dimensionality, since the number of constraints used to define the FPS, and its number of facets, in general grows linearly over time. One possibility is to resort to a redundant constraint removal routine every time a new pair of constraints is added to the set, but this gives no control over the final number of constraints necessary to define the FPS, which could lead to memory saturation problems. Another possible solution has been proposed in [16], where the first M1 facets of the FPS are obtained by recursion; then, a maximum of M2 additional facets are defined, which can only be parallel to a predefined set of directions. This set contains a finite number of vectors with the same magnitude, which will determine the shape of the resulting polytope. A possible way to construct the set of directions is to take regularly distributed vectors on the unit circle, as proposed in [17]. An alternative approach is to construct the set online in order to learn the directions that should be used based on the system dynamics, and thus reduce the approximation error introduced by using the bounded-complexity polytope update [18].

In this thesis, the implemented identification algorithm has been tested on embedded platforms. The basic idea is that the platform receives the measurements from some sensors, and performs the parameter identification and the prediction. In the literature, embedded platforms are widely used in many fields, both for applications in industrial settings and for identification procedures.

Some examples of industrial applications are reported in [19, 20, 21, 22], while examples of identification applications are described in [23, 24, 25]. Two case studies have been developed in this work: the first concerns a simple linear system, with just two parameters. After the identification, the accuracy errors on the simulation will be computed; these values will be needed in the second phase, that is, the refinement of the set containing all the possible models. This first case has the purpose of illustrating in a simple way the potential of the applied algorithms.

The second case study refers to a real application: the prediction of the total energy consumed by an industrial building, property of ABB, based on a data set collected in only two weeks.

In order to show the application of these approaches, the identification algorithm and the computation of the prediction error bound will be simulated on low-performance hardware, a RaspberryPi.


Chapter 1

Problem formulation

1.1 System Identification

Consider a discrete-time, linear time-invariant (LTI) system, with state x(k) ∈ R^n, input u(k) ∈ R^m, and output y(k) ∈ R^n, where k ∈ Z denotes the discrete time variable, in the form

$$x(k+1) = A\,x(k) + B\,u(k), \qquad y(k) = C\,x(k). \qquad (1.1)$$

The output measurement z(k) ∈ R^n is affected by an additive noise d(k) ∈ R^n:

$$z(k) = y(k) + d(k). \qquad (1.2)$$

We denote by z_i(k), y_i(k), d_i(k) the i-th components of the vectors z(k), y(k), d(k), respectively, where i = 1, ..., n.

The output equation can be modelled using an input-output auto-regressive with exogenous input (ARX) model of order n, where ^T denotes the matrix transpose operation and the regressor φ_i(k) is given by:

$$\phi_i(k) = \begin{bmatrix} Y_{i,n}(k)^T & U_n(k)^T \end{bmatrix}^T \in \mathbb{R}^{n+m(n-1)},$$
$$Y_{i,n}(k) = \begin{bmatrix} y_i(k) & y_i(k-1) & \ldots & y_i(k-n+1) \end{bmatrix}^T \in \mathbb{R}^{n},$$
$$U_n(k) = \begin{bmatrix} u(k-1)^T & u(k-2)^T & \ldots & u(k-n+1)^T \end{bmatrix}^T \in \mathbb{R}^{m(n-1)}. \qquad (1.4)$$

For the identification procedure we consider a discrete-time autoregressive model of order n_y, and we allow the model to be initialized with the first n_u input measurements and the first n_y output measurements [26]. Denoting by ŷ_i(k|n_y) the predicted output at time step k + n_y, given the output measurements up to time step n_y, the one-step-ahead predictor can be written as:

$$\hat y_i(k+1) = \varphi_i(k)^T \theta \qquad (1.5)$$

where θ ∈ R^{n_u+n_y} is the parameter vector of the one-step-ahead prediction model to be identified, and the regressor φ_i(k) is defined as:

$$\varphi_i(k|n_y) = \begin{bmatrix} Y_i(k|n_y)^T & U_i(k)^T \end{bmatrix}^T \in \mathbb{R}^{n_y+n_u},$$
$$Y_i(k|n_y) = \begin{bmatrix} \hat y_i(k|n_y) & \hat y_i(k-1|n_y) & \ldots & \hat y_i(k-n_y+1|n_y) \end{bmatrix}^T \in \mathbb{R}^{n_y},$$
$$U_i(k) = \begin{bmatrix} u_i(k)^T & u_i(k-1)^T & \ldots & u_i(k-n_u+1)^T \end{bmatrix}^T \in \mathbb{R}^{n_u}. \qquad (1.6)$$

For the identification of the parameter vector θ of model (1.5), standard procedures including Prediction Error Minimization (PEM) and Simulation Error Minimization (SEM) are described in [26], [27]. In this work the parameter identification is performed by solving the following unconstrained nonlinear program:

$$\hat\theta = \arg\min_{\theta} \sum_{i=1}^{N} \left\lVert \tilde Y_i - \hat Y_i(\theta) \right\rVert_2^2 \qquad (1.7)$$

where N denotes the number of samples considered, ‖·‖₂² denotes the squared ℓ₂ norm, and

$$\tilde Y_i = \begin{bmatrix} \tilde y_i(1) & \ldots & \tilde y_i(N) \end{bmatrix}^T, \qquad \hat Y_i(\theta) = \begin{bmatrix} \hat y_i(1|n_y) & \ldots & \hat y_i(N-n_y|n_y) \end{bmatrix}^T.$$

The term Ŷ_i(θ) is computed by recursion of (1.5) where, if k < n_y, the estimated values ŷ_i(k−j|n_y), j = k, ..., n_y−1, are replaced by the measured values ỹ_i(k−j+n_y), which are available at the first n_y time steps.

Standard sequential quadratic programming algorithms, such as Gauss-Newton, are well suited for this problem class and very efficient in terms of required computational time [28].
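As an illustration of how (1.7) can be tackled in practice, the following Python sketch identifies the two parameters of a first-order ARX model by simulation error minimization. It is a minimal example under stated assumptions (first-order model, scalar signals, measured arrays u and y_meas already available); scipy's Levenberg-Marquardt solver is used here as a stand-in for the Gauss-Newton iterations mentioned above, and the multi-start loop mirrors the repeated identification with several initial conditions described later in Chapter 2.

```python
# Minimal sketch of the identification step (1.7) for a first-order ARX model
# (n_y = n_u = 1). The residuals are simulation errors: the one-step predictor
# is iterated from the first measured output only.
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, u, y0, N):
    """Iterate the one-step-ahead predictor y_hat(k+1) = a*y_hat(k) + b*u(k)."""
    a, b = theta
    y_hat = np.empty(N)
    y_hat[0] = y0                      # initial condition taken from the data
    for k in range(N - 1):
        y_hat[k + 1] = a * y_hat[k] + b * u[k]
    return y_hat

def residuals(theta, u, y_meas):
    return y_meas - simulate(theta, u, y_meas[0], len(y_meas))

def identify(u, y_meas, n_starts=20, seed=0):
    """Nonlinear least squares (1.7) with several random initial guesses;
    the best local minimum found is kept."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        theta0 = rng.uniform(-1.0, 1.0, size=2)
        sol = least_squares(residuals, theta0, args=(u, y_meas), method="lm")
        if best is None or sol.cost < best.cost:
            best = sol
    return best.x
```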

1.2 Multistep Predictors

Multistep-ahead prediction is the task of predicting a dynamical signal over a longer range of future consecutive sampling instants. A possible approach is to apply a one-step-ahead prediction model step by step and use the predicted value of the current time step to determine its value in the next step. Another possibility is to use a prediction model directly providing an estimation of the system output p-steps ahead in the future. In this thesis work, the multistep predictors are used as a tool that allows the computation of the prediction error bound and the simulation error bound using the results presented in [11]. The multistep predictor has the following general form:

$$\hat y_i(k+p) = \varphi_{ip}(k)^T \theta_{ip}, \qquad (1.9)$$

where the elements of the parameter vector θ_ip are polynomial functions of the entries of θ_i1, i.e.,

$$\theta_{ip} = h_{p,n}(\theta_{i1}). \qquad (1.10)$$

The explicit expression of the polynomial functions h_{p,n} : R^{n_y+n_u} → R^{n_y+n_u+p−1} can be obtained by recursion of (1.5), starting from p = 1, and the regressor φ_ip(k) is defined as:

$$\varphi_{ip}(k) = \begin{bmatrix} Y_{i,n_y}(k)^T & U_{p,n_u}(k)^T \end{bmatrix}^T \in \mathbb{R}^{n_y+n_u+p-1},$$
$$Y_{i,n_y}(k) = \begin{bmatrix} y_i(k) & y_i(k-1) & \ldots & y_i(k-n_y+1) \end{bmatrix}^T \in \mathbb{R}^{n_y},$$
$$U_{p,n_u}(k) = \begin{bmatrix} u(k+p-1)^T & \ldots & u(k)^T & \ldots & u(k-n_u+1)^T \end{bmatrix}^T \in \mathbb{R}^{n_u+p-1}. \qquad (1.11)$$

We refer to p as the prediction horizon or simulation horizon. The multistep predictor is obtained by iteration of the one-step-ahead model (1.5).
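For the first-order model used later in Chapter 2, the polynomial map h_{p,n} of (1.10) can be written explicitly: iterating y(k+1) = a y(k) + b u(k) for p steps gives y(k+p) = a^p y(k) + Σ_{j=0}^{p-1} a^j b u(k+p−1−j). The sketch below (an illustration, not part of the thesis code) builds the corresponding multistep parameter vector and evaluates the predictor (1.9).

```python
# Multistep predictor for the first-order case: theta_p is a polynomial
# function of (a, b), with the regressor ordered as [y(k), u(k+p-1), ..., u(k)].
import numpy as np

def multistep_parameters(a, b, p):
    """theta_p = [a^p, b, a*b, ..., a^(p-1)*b]."""
    return np.concatenate(([a**p], [a**j * b for j in range(p)]))

def multistep_predict(theta_p, y_k, u_future):
    """u_future = [u(k+p-1), ..., u(k)], matching the regressor ordering (2.4)."""
    phi_p = np.concatenate(([y_k], u_future))
    return phi_p @ theta_p
```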

1.3 Multistep Error bound

We start by considering the one-step-ahead ARX predictor defined in (1.5) and we derive, by its recursion, the multistep predictor for every considered time step p. The error between the true p-step-ahead system output and its prediction is bounded, for any finite p, by:

$$\left| y_i(k+p) - \varphi_{ip}(k)^T \theta_{ip} \right| \le \bar\varepsilon_{ip}(\theta_{ip}) \qquad (1.12)$$

where ε̄_ip(θ_ip) represents the global error bound related to the given multistep model parameters θ_ip. Moreover, since the theoretical error bound ε̄_ip(θ_ip) is the solution of an intractable optimization problem, it can be replaced, by adopting a finite set of data points, with an estimate λ_ip ≈ ε̄_ip which is shown to converge to ε̄_ip from below, see [11].

We assume that a finite number of measured pairs (ỹ(k), ũ(k)) is available for the model identification task, where ~ denotes a sample of a given variable and ỹ_ip ≐ ỹ_i(k+p). Here, for simplicity and without loss of generality, we consider that the number N of sampled regressors is the same for any considered value of p. The prediction error bound λ_ip is obtained by solving the following Linear Program (LP):

$$\lambda_{ip} = \alpha \min_{\theta_{ip}\in\Omega_p,\; \lambda\in\mathbb{R}^+} \lambda \quad \text{subject to} \quad \left| \tilde y_{ip} - \tilde\varphi_{ip}^T \theta_{ip} \right| \le \lambda, \;\; \forall (\tilde\varphi_{ip}, \tilde y_{ip}) \qquad (1.13)$$

where α > 0 is a scaling factor used to account for the fact that λ_ip is lower than ε̄_ip, and the set Ω_p is a compact approximation of R^{n_y+m(n_u+p−1)} introduced for technical reasons.
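A possible way to compute λ_ip in practice is to solve (1.13) with an off-the-shelf LP solver, treating [θ_ip, λ] as decision variables and splitting each absolute-value constraint into two linear inequalities. The sketch below assumes the sampled regressors are stacked in a matrix Phi and the corresponding p-step-ahead measurements in a vector y; the compact set Ω_p and the choice of α are left to the user (α = 1 is only a placeholder).

```python
# Sketch of the LP (1.13): minimize lambda over [theta, lambda] subject to
# |y_k - phi_k^T theta| <= lambda for every sampled data point.
import numpy as np
from scipy.optimize import linprog

def prediction_error_bound(Phi, y, alpha=1.0):
    N, d = Phi.shape
    c = np.zeros(d + 1)
    c[-1] = 1.0                                   # cost: minimize lambda
    ones = np.ones((N, 1))
    A_ub = np.vstack([np.hstack([ Phi, -ones]),   #  phi^T theta - lambda <=  y
                      np.hstack([-Phi, -ones])])  # -phi^T theta - lambda <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * d + [(0, None)]     # theta free, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return alpha * res.x[-1]
```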

1.4 Feasible Parameter Set

Starting from the obtained global error bound, we can define, for the p-step-ahead predictor, the set Θ_ip which contains all the model parameters that are consistent with the initial assumptions and the collected measurements. This set is called the Feasible Parameter Set (FPS) and it is defined as follows:

$$\Theta_{ip} \doteq \left\{ \theta_{ip} : \left| \tilde y_{ip} - \tilde\varphi_{ip}^T \theta_{ip} \right| \le \lambda_{ip} \right\}, \;\; \forall (\tilde\varphi_{ip}, \tilde y_{ip}) \qquad (1.14)$$

Each one of the element-wise inequalities in (1.14) comes from the fact that the discrepancy between the measured and the predicted values of the output cannot exceed the prediction error bound. The set Θ_ip, if bounded, is a polytope with 2N facets. If Θ_ip is unbounded, this indicates that the data collected from the system are not informative enough, and new data should be acquired.

The FPSs are polytopes whose number of facets generally grows linearly with the number of data points, and whose dimensionality grows linearly with the horizon p in the multistep approach adopted here. This can cause a dimensionality problem. Here, we adopt the dimensionality reduction approach proposed in [16], based on a recursive update rule. The reduced polytopes are defined as:

$$F_{ip}(k) = \left\{ \theta_{ip} \in \mathbb{R}^{n_y+pn_u} : \Phi_{ip}(k)\,\theta_{ip} \le Y_{ip}(k) + \lambda_{ip} \right\} \qquad (1.15)$$

where Φ_ip ∈ R^{r_j×(n_y+pn_u)}, Y_ip(k) ∈ R^{r_j}, and r_j(k) is the number of non-redundant inequalities pertaining to the j-th row of the matrix θ. The polytope F_ip(k) can be calculated recursively as the intersection of the polytope F_ip(k−1) and the two half-spaces defined by the newly measured plant output y_ip(k):

$$F_{ip}(k) = F_{ip}(k-1) \cap \left\{ \theta_{ip} \in \mathbb{R}^{n_y+pn_u} : \varphi_{ip}(k)^T\theta_{ip} \le y_{ip}(k) + \lambda_{ip} \right\} \cap \left\{ \theta_{ip} \in \mathbb{R}^{n_y+pn_u} : -\varphi_{ip}(k)^T\theta_{ip} \le -y_{ip}(k) + \lambda_{ip} \right\} \qquad (1.16)$$
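In half-space form, the recursive update (1.16) simply appends two rows to the matrix describing the polytope, as in the following minimal sketch (redundant-constraint removal is omitted):

```python
# The polytope F_ip(k) is kept as {theta : A theta <= b}; each new measurement
# (phi, y) adds the half-spaces phi^T theta <= y + lam and -phi^T theta <= -y + lam.
import numpy as np

def fps_update(A, b, phi, y, lam):
    A_new = np.vstack([A, phi, -phi])
    b_new = np.concatenate([b, [y + lam, -y + lam]])
    return A_new, b_new
```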

Since, as said before, the number of facets can become arbitrarily large, we employ a polytope update algorithm with bounded complexity, similar to the one proposed in [16]. In this approach, the polytope F_ip(k) is updated by using (1.16) as long as the number of its facets is smaller than a predefined maximum limit M1. Once this limit is reached, each new facet that is added to the polytope is parallel to a plane that belongs to a predefined set of M2 planes, which makes the total number of bounding facets equal to M1 + M2. In particular, a set D containing a finite number M2 of (n_y + p n_u)-dimensional vectors with the same magnitude, which will determine the shape of the resulting polytope, has to be defined. Based on this set, the update of the polytope F_ip(k) is given by the following intersection:

$$F_{ip}(k) = F_{ip}(k-1) \cap \left\{ \theta_{ip} \in \mathbb{R}^{n_y+pn_u} : \varphi^+_{ip}(k)^T\theta_{ip} \le y_{ip}(k) + \delta^+_{ip} \right\} \cap \left\{ \theta_{ip} \in \mathbb{R}^{n_y+pn_u} : \varphi^-_{ip}(k)^T\theta_{ip} \le y_{ip}(k) + \delta^-_{ip} \right\} \qquad (1.17)$$

where the vectors φ⁺_ip(k) and φ⁻_ip(k) are taken as the elements of D that are "closest", in the inner product sense, to the vectors φ_ip(k) and −φ_ip(k):

$$\varphi^+_{ip}(k) = \arg\max_{v\in D} \; \varphi_{ip}(k)^T v, \qquad \varphi^-_{ip}(k) = \arg\max_{v\in D} \; -\varphi_{ip}(k)^T v, \qquad (1.18)$$

and the scalars δ⁺_ip and δ⁻_ip are selected such that the bounded-complexity polytope represents an outer approximation of the polytope that would be obtained by the normal update rule. Hence, the values of δ⁺_ip and δ⁻_ip can be computed by solving the following linear programs:

$$\delta^+_{ip} = \max_{\theta} \; \varphi^+_{ip}(k)^T\theta - y_{ip}(k), \qquad \delta^-_{ip} = \max_{\theta} \; \varphi^-_{ip}(k)^T\theta + y_{ip}(k)$$
$$\text{subject to} \quad \Phi_{ip}(k-1)\theta \le \tilde Y_{ip}(k-1), \quad \varphi_{ip}(k)^T\theta \le \tilde y_{ip}(k) + \lambda_{ip}, \quad -\varphi_{ip}(k)^T\theta \le -\tilde y_{ip}(k) + \lambda_{ip}. \qquad (1.19)$$


The set D can be a fixed set of vectors chosen beforehand: one possible way to construct D is to take regularly distributed vectors on the unit circle, as presented in [17]. An alternative approach is to construct the set D online, in order to learn the directions that should be used based on the system dynamics, and thus reduce the approximation error introduced by the bounded-complexity polytope updates. The main difference between the described algorithm and the approach of [18] is that here we use a set of non-redundant inequalities to update the polytopes, while in [18] an updated vertex list is used.
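The bounded-complexity update (1.17)-(1.19) can be sketched as follows, assuming the current non-redundant description {θ : Aθ ≤ b} and a fixed direction set D whose rows are the M2 predefined vectors. Here the ±y offsets of (1.19) are folded into the right-hand sides, so each new facet is placed at the support value of the exactly-updated set; this is an illustrative rearrangement, not the thesis implementation.

```python
# Sketch of the bounded-complexity polytope update.
import numpy as np
from scipy.optimize import linprog

def closest_direction(D, phi):
    """Element of D closest to phi in the inner-product sense, as in (1.18)."""
    return D[np.argmax(D @ phi)]

def support_value(A, b, phi, y, lam, direction):
    """max direction^T theta over the exactly-updated polytope (outer bound)."""
    A_ub = np.vstack([A, phi, -phi])
    b_ub = np.concatenate([b, [y + lam, -y + lam]])
    # linprog minimizes, so maximize direction^T theta by minimizing its negative
    res = linprog(-direction, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return -res.fun

def bounded_complexity_update(A, b, D, phi, y, lam):
    phi_plus, phi_minus = closest_direction(D, phi), closest_direction(D, -phi)
    d_plus = support_value(A, b, phi, y, lam, phi_plus)
    d_minus = support_value(A, b, phi, y, lam, phi_minus)
    A_new = np.vstack([A, phi_plus, phi_minus])
    b_new = np.concatenate([b, [d_plus, d_minus]])
    return A_new, b_new
```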

1.5 Simulation Error Bound

Once the FPS is determined, we are able to derive the global simulation error bound related to a given predictor with parameters θ_ip:

$$\tau_{ip}(\theta_{ip}) = \max_{\varphi_{ip}\in\Phi_{ip}} \; \max_{\theta\in\Theta_{ip}} \; \left| \varphi_{ip}^T(\theta - \theta_{ip}) \right| + \lambda_{ip} \qquad (1.20)$$

This bound is given by the maximum distance between the output predicted using the parameters θ_ip and the one predicted using any other parameter vector belonging to the FPS, plus the prediction error bound related to all θ_ip ∈ Θ_ip.

The computation of τ_ip entails the solution of 2N+1 LPs, and it can be recast as follows. We start by introducing the following term:

$$\bar\varphi_{ip}(k) = \begin{cases} \varphi_{ip}(k) & \text{if } k \le N \\ -\varphi_{ip}(k-N) & \text{if } k > N \end{cases} \qquad \text{for } k = 1, \ldots, 2N. \qquad (1.21)$$

This allows us to split the computation of τ_ip in two parts. Firstly, we compute:

$$c_{ikp} \doteq \max_{\theta\in\Theta_{ip}} \; \bar\varphi_{ip}(k)^T\theta \qquad (1.22)$$

by solving 2N linear programs (LPs). Then, the error bound τ_ip can be calculated as:

$$\tau_{ip} = \gamma \min_{\theta_{ip}\in\Theta_{ip},\; \tau} \tau \quad \text{subject to} \quad c_{ikp} - \bar\varphi_{ip}(k)^T\theta_{ip} \le \tau, \quad k = 1, \ldots, 2N, \qquad (1.23)$$

where the scaling factor γ is introduced in order to account for the uncertainty due to the usage of a finite dataset; for more details see [11]. The obtained accuracy bound is used to analyse the performance achieved by a given prediction model as a function of the number of facets considered in the recursive update rule.
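The two-step computation (1.21)-(1.23) can be sketched as follows, assuming the FPS is available in half-space form {θ : Aθ ≤ b} and that the sampled regressors are stacked in Phi; the prediction error bound λ is added at the end as in (1.20), and γ is left as a parameter rather than a value from the thesis.

```python
# Sketch of the simulation error bound computation.
import numpy as np
from scipy.optimize import linprog

def simulation_error_bound(A, b, Phi, lam, gamma=1.0):
    n_theta = A.shape[1]
    Phi_bar = np.vstack([Phi, -Phi])                    # the 2N rows of (1.21)
    # Step 1: c_k = max over the FPS of phi_bar_k^T theta, one LP per row (1.22)
    c = np.array([-linprog(-row, A_ub=A, b_ub=b,
                           bounds=[(None, None)] * n_theta,
                           method="highs").fun
                  for row in Phi_bar])
    # Step 2: final LP (1.23) over the variables [theta, tau]
    cost = np.zeros(n_theta + 1)
    cost[-1] = 1.0                                      # minimize tau
    A_ub = np.vstack([
        np.hstack([-Phi_bar, -np.ones((Phi_bar.shape[0], 1))]),  # c_k - phi_bar_k^T theta <= tau
        np.hstack([A, np.zeros((A.shape[0], 1))])])              # theta must stay in the FPS
    b_ub = np.concatenate([-c, b])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_theta + [(0, None)], method="highs")
    return gamma * (res.x[-1] + lam)                    # lambda added as in (1.20)
```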

1.6 Embedded implementation

Figure 1.1: RaspberryPi

In this thesis, the implemented identification algorithm has been tested on embedded industrial platforms. In particular, we use a Raspberry Pi, a low-cost, credit-card-sized computer that plugs into a computer monitor or a TV and can be controlled through a standard keyboard and mouse. It is capable of doing everything a standard computer is expected to do, from browsing the internet and playing high-definition videos, to creating spreadsheets, word processing, and playing games.

The model used for this work is equipped with 4 GB of RAM, a Gigabit Ethernet port, 4 USB ports, 2 micro-HDMI ports supporting dual 4K displays, a USB-C power connector and a powerful processor.

The basic idea is that the platform receives the measurements from some sensors and performs the prediction. For this purpose, we have established a client-server communication between the RaspberryPi and the computer through the Ethernet port, so as to better simulate the behaviour in a real application. In this way, the computer can send to the Raspberry the data collected from the measurements, and the Raspberry can process them and send the obtained result back to the computer. In order to perform the elaboration, data are sent as .csv files, and the program, initially written in the Matlab environment, has been rewritten in Python to be supported by the Raspberry. In particular, for our applications, we want the RaspberryPi to run the identification algorithm and the computation of the prediction error bound.
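A minimal sketch of the server side of this exchange is reported below, purely for illustration: the host address, port and CSV layout are hypothetical, and the client is expected to shut down its writing side after sending the file so that the server knows the payload is complete.

```python
# Illustrative server running on the RaspberryPi: receive a CSV payload over
# TCP, run a processing function on it, send the result back as CSV text.
import socket
from io import StringIO
import numpy as np

HOST, PORT = "0.0.0.0", 50007        # placeholder address and port

def serve_once(process):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            chunks = []
            while True:                      # read until the client half-closes
                data = conn.recv(4096)
                if not data:
                    break
                chunks.append(data)
            table = np.loadtxt(StringIO(b"".join(chunks).decode()), delimiter=",")
            result = np.atleast_1d(process(table))
            conn.sendall(",".join(f"{v:.6f}" for v in result).encode())
```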

In the next chapters, two application examples are illustrated, along with the corresponding results obtained both with the RaspberryPi and with the computer.


Chapter 2

Case study: Linear System Parameter Identification

2.1 Problem Description

Consider the following one input, one output, asymptotically stable, linear time invariant, autoregressive system:

$$y(k+1) = a\,y(k) + b\,u(k) \qquad (2.1)$$

where a = −0.5, b = 2, and the input sequence is formed by m = 1000 samples generated through a Pseudo-Random Binary Sequence (PRBS) signal, producing the corresponding measurable output y(k). The output measurement is affected by additive noise d(k), leading to:

$$\tilde y(k) = y(k) + d(k) \qquad (2.2)$$

A graphical representation of the signals is shown in Figure (2.1).
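The data generation just described can be reproduced with a few lines of Python; the sketch below uses a random ±1 sequence as a stand-in for the PRBS, and the disturbance amplitude (0.3) is an assumption chosen only for illustration.

```python
# Sketch of the data generation for the case study: system (2.1) driven by a
# binary input and corrupted by bounded additive noise as in (2.2).
import numpy as np

rng = np.random.default_rng(1)
m = 1000
a, b = -0.5, 2.0

u = np.where(rng.random(m) > 0.5, 1.0, -1.0)   # +/-1 binary input (PRBS stand-in)
d = 0.3 * (2 * rng.random(m) - 1)              # bounded disturbance (assumed amplitude)

y = np.zeros(m)
for k in range(m - 1):
    y[k + 1] = a * y[k] + b * u[k]             # system (2.1)

y_tilde = y + d                                # noisy measurement (2.2)
```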

In this chapter, the results obtained with Matlab and Python are reported. Matlab is the software used on the high-level hardware, while Python is used on the embedded platform.


Figure 2.1: (a) Input signal (b) Disturbance signal (c) Output measurement not corrupted by disturbance (d) Output measurement corrupted by disturbance


2.2 System Identification

Considering multistep predictors, for any given p ∈ N the output equation can be written in auto-regressive with exogenous input (ARX) form of order n_y = 1, initialized with n_u = 1 input measurements and n_y = 1 output measurements:

$$y(k+p) = \varphi_p(k)^T \theta_p \qquad (2.3)$$

where the regressor φ_p is given by:

$$\varphi_p(k) = \begin{bmatrix} Y(k)^T & U_p(k)^T \end{bmatrix}^T \in \mathbb{R}^{1+p}, \qquad Y(k) = [y(k)]^T \in \mathbb{R}, \qquad U_p(k) = \begin{bmatrix} u(k+p-1)^T & \ldots & u(k)^T \end{bmatrix}^T \in \mathbb{R}^{p}. \qquad (2.4)$$

In addition, θ_p is the parameter vector to be identified. For the identification procedure, we proceed by splitting the available data set into:

• Training set: the set of data used to learn the parameters of the model;

• Validation set: the set of data which provides an evaluation of the model learned on the training set.

In our case, the original data set is composed of 1000 points, and it has been divided exactly in two halves, the first one corresponding to the training set and the other one corresponding to the validation set.

As anticipated in the previous chapter, the identification of the parameter vector θ is performed by solving (1.7):

$$\hat\theta = \arg\min_{\theta} \left\lVert \tilde Y - \hat Y(\theta) \right\rVert_2^2 \qquad (2.5)$$

where

$$\tilde Y = \begin{bmatrix} \tilde y(1) & \ldots & \tilde y(500) \end{bmatrix}^T, \qquad \hat Y(\theta) = \begin{bmatrix} \hat y(1|n_y) & \ldots & \hat y(500-n_y|n_y) \end{bmatrix}^T, \qquad (2.6)$$

and the first element of Ŷ is replaced by the measured value ỹ(1), which is available at time step n_y = 1.

We repeat the identification procedure M times, and then the identified parameters which best fit the data are selected. Several attempts are needed since the problem is nonlinear, so each solution is only a local minimum; the smallest local minimum found is taken as an approximation of the global minimum.

The results of the parameter identification are reported in Table (2.1).

θ optimal
Matlab: -0.4991, 2.0118
Python: -0.4991, 2.0118

Table 2.1: Identified parameters of the linear system

The identification procedure has been carried out on the RaspberryPi; the resulting computational time is reported in Table (2.3) and refers to 20 iterations of the Gauss-Newton algorithm initialized with a single initial condition. The number of initial conditions is chosen according to the overall available computation time.

The obtained identified parameters are used to reconstruct the signal, starting from the n_u = 1 input measurements and n_y = 1 output measurements contained in the validation set. This signal is compared to the original one so as to verify the correctness of the identification procedure.

The results of the validation phase are shown in Figure (2.3), from which it can be noted that the reconstruction of the signal based on the identified parameters is satisfactory.

In order to better evaluate the quality of the obtained model, we resort to the computation of the Mean Absolute Percentage Error (MAPE) over the validation set. The MAPE is defined as:

$$\mathrm{MAPE} = \frac{100}{N_v} \sum_{t=1}^{N_v} \left| \frac{\tilde y(t) - \hat y(t)}{\tilde y(t)} \right| \qquad (2.7)$$


Figure 2.2: Comparison between the measured Y (in red) and the predicted Y (in blue). (a) Matlab result (b) Python result

where N_v is the number of data samples used during the testing phase, ỹ(t) represents the measured output and ŷ(t) represents the predicted output. Obviously, the lower the MAPE value, the better the forecasting performance.

Figure (2.2) shows the result of the validation phase when the system is not affected by the disturbance: there is no difference between the real output and the predicted one. Figure (2.3) shows the result of the validation phase when a disturbance signal is applied to the system. In order to better assess the quality of the model, the related MAPE values are listed in Table (2.2).
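The MAPE of (2.7) is straightforward to compute; for reference, a direct implementation is:

```python
# Direct implementation of the MAPE definition (2.7).
import numpy as np

def mape(y_meas, y_pred):
    y_meas, y_pred = np.asarray(y_meas), np.asarray(y_pred)
    return 100.0 / len(y_meas) * np.sum(np.abs((y_meas - y_pred) / y_meas))
```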


Figure 2.3: Comparison between the measured Y (in red) and the predicted Y (in blue). (a) Matlab result (b) Python result

MAPE
Matlab: 19.0999
Python: 19.0999

Table 2.2: MAPE of the linear system

2.3 Accuracy Bound

The results of the computation of the multistep error bound and the simulation error bound, obtained from (1.13) and (1.20)-(1.23) as described in Section (1.5), are shown in Figures (2.4) and (2.5). Notice that in Figure (2.5) the computation has been performed considering the whole set of constraints, corresponding to the entire set of training data. The time needed by the RaspberryPi to perform the computation of the simulation error bound with p = 1 is reported in Table 2.3.

Computational time
θ optimal: 0.81 seconds
λ1: 10.54 seconds

Table 2.3: Computational time of the RaspberryPi

2.4 Feasible Parameter Set

In this section, the aim is to illustrate the results obtained by the application of the polytope update algorithm with bounded complexity described in (1.17)-(1.19).

In particular, we want to show how the simulation error bound is affected by the choice of a different number of constraints.

In order to graphically represent the results on a 2D plane, we first show how the FPS changes, when selecting a different number of constraints, in the 2D case, corresponding to p = 1; then, the results for the p-dimensional cases are reported through the graphical analysis of the simulation error bound. Notice that each graph includes a graphical representation of the point corresponding to the true system's parameter values.

Figure 2.4: Multistep error bound. (a) Computation performed with Matlab (b) Computation performed with Python

Figure 2.5: Simulation error bound

From Figures (2.6)-(2.8) we can notice that the number of facets of each polytope is lower than the number of constraints considered: since every constraint represents a direction, it can be concluded that there are only a few principal directions. By analysing the graphs, the largest set is obtained when considering M1 only: when new constraints are selected, considering M1+M2, the set becomes smaller. However, in the last picture M1 is already very large; thus, by adding M2=50 or M2=200 the resulting set is the same, meaning that the added constraints do not provide more information. In Figures (2.9)-(2.11) the results of the simulation error bound in the p-dimensional case are reported, with M1=150, M1=250 and M1=300 respectively. From these graphs it is clear that the simulation error bound increases as the number of constraints considered increases.


Figure 2.6: Comparison between the set with M1=150 and the sum of M1 with different values of M2. (a) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=50 (pink set); (b) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=100 (pink set); (c) Overlap between the set with M1 and M2=50 (yellow set) and the set with M1 and M2=100 (blue set)


Figure 2.7: Comparison between the set with M1=250 and the sum of M1 with different values of M2. (a) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=50 (pink set); (b) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=200 (pink set); (c) Overlap between the set with M1 and M2=50 (yellow set) and the set with M1 and M2=200 (blue set)


Figure 2.8: Comparison between the set with M1=300 and the sum of M1 with different values of M2. (a) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=50 (pink set); (b) Overlap between the set formed considering only the first M1 constraints (green set) and the set considering M1 and M2=200 (pink set); (c) Overlap between the set with M1 and M2=50 (yellow set) and the set with M1 and M2=200 (blue set)

Figure 2.9: τ performed with set M1=150 and M2=50 (in blue) and τ performed with set M1=150 and M2=100 (in red)

Figure 2.10: τ performed with set M1=250 and M2=50 (in blue) and τ performed with set M1=250 and M2=200 (in red)

Figure 2.11: τ performed with set M1=300 and M2=50 (in blue) and τ performed with set M1=300 and M2=200 (in red)


Chapter 3

Case Study: Load Forecasting

3.1 Problem description

3.1.1 Introduction

Buildings are one of the main actors in worldwide energy consumption: they account for 20%-40% of the total energy demand [29, 30, 31]. The future energy requirements of a system are predicted through Load Forecasting (LF) techniques, considering previous (historical) load data and weather conditions such as wind speed, temperature, humidity and solar radiation [32, 33, 34]. LF techniques are becoming more and more common in all sectors, since they allow system operators to plan the energy usage over time and analyse it: this leads to more accurate energy management, shifting requirements to off-peak periods and making energy purchase plans more favourable. Energy consumption forecasts are particularly important and are a basic requirement in all kinds of energy management systems; even a 1% reduction in the average forecast error can lead to savings of hundreds of thousands, or even millions, of dollars, depending on the scale of the considered power grid [35]. The classification of LF varies across the literature, but in general it can be divided into three main areas [33]:


• Short term LF (STLF), which predicts load on an hourly basis up to one week ahead, for daily running and cost minimization;

• Medium term LF (MTLF), which usually predicts load on a weekly, monthly and yearly basis, for efficient operational planning;

• Long term LF (LTLF), which is used to predict up to 50 years ahead, to facilitate expansion planning.

In the literature, one of the most common methods for LF is the autoregressive approach, whose models are entirely developed from a large amount of historical data, typically one or two years. In this thesis work, this approach was applied to a much shorter data set, considering two weeks of data collection. The data set has been constructed by collecting load measurements and weather variables from a three-floor office building, belonging to ABB Italy, located in Bergamo and depicted in Figure (3.1). The energy load in [kW] was measured floor by floor, with a time interval of 15 minutes, i.e. each collected value stands for the average rate of energy consumption within the past quarter of an hour. As a result, 96 samples are collected every 24 hours. The weather variables were collected at the same rate as the load profile, by a weather station located near the considered building. In particular, the available variables are: ambient temperature, relative humidity, solar global irradiation and wind speed. This thesis focuses on Short Term Load Forecasting, specifically one-day-ahead, i.e. 24-hour-ahead, load forecasting. Inspired by a previous work by Lauricella and Fagiano [11] on recent results in the derivation of tight error bounds for linear models using the Set Membership (SM) approach, this thesis focuses on a linear ARX (Autoregressive with Exogenous Inputs) model, identified adopting the Simulation Error Method (SEM), in which the artificial input signal technique proposed in [36] is used to recreate the expected periodic energy consumption behaviour. Then, resorting to the SM approach described in Section 1.1, the error bounds have been evaluated. In particular, with the artificial input signal, the model is provided with empirical information on the periodicities of the energy consumption, making it able to mimic a frequency decomposition of the periodic part of the load behaviour, without requiring a huge amount of data. The forecasting model obtained through these techniques is both accurate and computationally efficient, using a very short data set (e.g. two weeks), and with computable tight guaranteed bounds (since the model is linear).

Figure 3.1: ABB building

3.1.2 Artificial input signal

The artificial input signal idea is inspired by the Fourier decomposition, described in (3.1), stating that a periodic signal, or eventually any function f(x) over a certain interval, can be described by a Fourier series as a sum of sinusoidal functions:

$$f(x) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos n\omega t + b_n \sin n\omega t \right] = a_0 + \sum_{n=1}^{\infty} \left[ \gamma_n \cos(n\omega t - \alpha_n) \right], \qquad \gamma_n = \sqrt{a_n^2 + b_n^2}, \quad \alpha_n = \arctan\frac{b_n}{a_n}. \qquad (3.1)$$

Specifically, the prediction model is excited by the weather condition data and by the artificial input signal, composed of artificial signals designed to simulate the energy consumption and its expected periodic behaviour. The artificial input signal is thus formulated as a linear combination of sinusoidal functions with unitary amplitude. The harmonics of such a signal are not learned from the data, but are chosen a priori as fractions of days and weeks, based on an expert choice of the relevant frequency contributions in the considered load trends. The artificial input signal at the k-th time step of the i-th day is then defined as:

$$u_{f,i}(k) = \begin{bmatrix} \cos(\omega_1 (k + 96(i-1))) \\ \vdots \\ \cos(\omega_{n_\omega} (k + 96(i-1))) \\ \sin(\omega_1 (k + 96(i-1))) \\ \vdots \\ \sin(\omega_{n_\omega} (k + 96(i-1))) \end{bmatrix}, \qquad k = 1, \ldots, 96, \quad i = 1, \ldots, N \qquad (3.2)$$

where n_ω is the number of considered frequency contributions, and each ω_j = 2πf_j corresponds to one of the selected harmonics. Thus, we have u_{f,i}(k) ∈ R^{n_f} with n_f = 2n_ω. The knowledge of low-frequency components coming from seasonal and monthly patterns is not required, since the training data set is short (e.g. N = 14). Instead, the focus is on higher frequencies regarding weekly, daily and hourly patterns. As a result, considering that the sampling period is 15 minutes (i.e. 4 samples per hour), we define f_j = j/(4·24·7), so that, for example, with n_ω = 14 the selected harmonics correspond to periods ranging from 7 days (j = 1) down to 12 hours (j = 14). The multi-variable signal (3.2) is given as input to a linear ARX model to be identified. For simplicity's sake and without loss of generality, the same order n_u is adopted for the weather inputs and the artificial input signal.
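For illustration, the artificial input signal (3.2) can be generated as follows; the function below is a sketch that assumes 96 samples per day and the harmonic choice f_j = j/(4·24·7) described above.

```python
# Sketch of the artificial (fictitious) input signal (3.2): for each 15-minute
# sample of day i, the signal stacks cosines and sines at the selected harmonics.
import numpy as np

def artificial_input(day_index, n_omega=14, samples_per_day=96):
    """Return a (samples_per_day, 2*n_omega) array: row k-1 is u_f,i(k)."""
    j = np.arange(1, n_omega + 1)
    omega = 2 * np.pi * j / (4 * 24 * 7)          # rad per sample, f_j = j/(4*24*7)
    k = np.arange(1, samples_per_day + 1)
    t = k + samples_per_day * (day_index - 1)     # absolute sample index k + 96(i-1)
    phase = np.outer(t, omega)                    # shape (96, n_omega)
    return np.hstack([np.cos(phase), np.sin(phase)])
```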

3.2 System identification

Consider again the problem introduced in Section (1.2), and let us assume that it is desired to use N days to identify the model and to make forecasts on the subsequent M days. In this section we make a slight change in notation: the variable i is used to indicate the i-th day of the considered week. Denote now the predictor for time step k + 1 of the i-th day in the form:

$$\hat y_i(k+1|n_y) = \varphi_i(k|n_y)^T \theta, \qquad k = 0, \ldots, (95 - n_y) \qquad (3.3)$$

where ^T denotes the matrix transpose operation, ŷ_i(k+1|n_y) is a (k+1)-step-ahead predictor, θ ∈ R^{n_y+n_u(n_w+n_f)} is the parameter vector to be identified, and φ_i(k|n_y) ∈ R^{n_y+n_u(n_w+n_f)} is the regressor vector, with n_y and n_u standing respectively for the autoregressive order and the exogenous input order, and n_w and n_f standing for the number of weather variables and fictitious inputs.

The multi-step-ahead prediction is realized by recursion of the one-step-ahead predictor, as defined in (1.9):

$$\varphi_i(k|n_y) = \begin{bmatrix} Y_i(k|n_y)^T & U_i(k)^T \end{bmatrix}^T \qquad (3.4)$$

where, if k ≥ n_y, Y_i(k|n_y) ∈ R^{n_y} in (3.4) is built as:

$$Y_i(k|n_y) = \begin{bmatrix} \hat y_i(k|n_y) & \hat y_i(k-1|n_y) & \ldots & \hat y_i(k-n_y+1|n_y) \end{bmatrix}^T \qquad (3.5)$$

while, if k < n_y, the estimated values ŷ_i(k−j|n_y), j = k, ..., n_y−1, are replaced by the measured values ỹ_i(k−j+n_y), which are available at the first n_y time steps of the i-th day (initial conditions).

Finally, the vector U_i(k) ∈ R^{n_u(n_w+n_f)} in (3.4) contains the input signals:

$$U_i(k) = \begin{bmatrix} u_{w,i}(k)^T, & \ldots, & u_{w,i}(k-n_u+1)^T, & u_{f,i}(k)^T, & \ldots, & u_{f,i}(k-n_u+1)^T \end{bmatrix}^T, \qquad (3.6)$$

where n_u is the exogenous input order, a value between 1 and n_y, u_{w,i}(k) contains the weather variables and u_{f,i}(k) is the fictitious input described in (3.2).

As anticipated in the previous section, the identification of the parameter vector θ is carried out by solving (1.7):

$$\hat\theta = \arg\min_{\theta} \sum_{i=1}^{M} \left\lVert \tilde Y_i - \hat Y_i(\theta) \right\rVert_2^2, \qquad (3.7)$$

where

$$\tilde Y_i = \begin{bmatrix} \tilde y_i(1) & \ldots & \tilde y_i(96) \end{bmatrix}^T, \qquad \hat Y_i(\theta) = \begin{bmatrix} \hat y_i(1|n_y) & \ldots & \hat y_i(96-n_y|n_y) \end{bmatrix}^T. \qquad (3.8)$$

The identified parameter vectors are shown in Table (3.1); the result is reported as a table to make it more readable.

Matlab: -0.3600 0.6080 0.5830 -1.9275 0.6368 -0.0097 1.8880 -0.3578 1.7879 0.2990 -1.7656 2.3705 -0.9258 -8.3099 2.4857 -2.5385 -0.3600 0.6080 0.5830 -1.9275 0.6368 -0.0097 1.8880 -0.3578 1.7879 0.2990 -1.7656 2.3705 -0.9258 -8.3099 2.4857 -2.5385 0.2209 1.1282 -1.8859

Python: -0.3490 0.5812 0.5706 -2.2556 0.7415 -0.0097 2.2420 -0.6460 1.8742 0.4022 -2.0633 2.6525 -1.3610 -9.7870 2.7548 -2.6096 0.2335 1.4703 -1.8214 0.1748 -2.1438 7.7448 -2.2632 1.0021 1.4895 3.6967 2.1241 -0.9267 -2.7244 1.2723 0.0059 -1.2077 -3.5209 0.5815 1.4957


Table 3.1: Identified prediction model parameters for the 8th week of year 2018

The identification procedure has been simulated on the RaspberryPi, as in the previous case study. The resulting computational time is reported in Table (3.3) and refers to 20 iterations of the Gauss-Newton algorithm initialized with a single initial condition. The number of initial conditions is chosen according to the overall available computation time.

The results of the validation procedure performed with the training set and the related MAPE, computed as in (2.7), are displayed in Figure (3.2) and Table (3.2). The results obtained with Matlab and Python present small differences, probably due to the different approximations adopted by the different solvers.

In both cases, the MAPE is small enough to conclude that the identification procedure was successful.

MAPE
Matlab: 7.5305
Python: 7.6035

Table 3.2: MAPE load forecasting

3.3 Accuracy Bound

Through the procedures described in Sections (1.3) and (1.5), the accuracy bounds have been computed. The results are shown in Figures (3.3) and (3.4); we can notice that as the prediction horizon increases, the prediction error bound and the simulation error bound decrease, which is what we expected.

The time needed by the RaspberryPi to perform the computation of the simulation error bound with p = 1 is reported in Table 3.3.


Figure 3.2: Comparison of the measured Y (in red) and the predicted Y (in blue). (a) Matlab result (b) Python result

Computational time
θ optimal: 56.72 seconds
λ: 50.43 seconds

Table 3.3: Computational time of the RaspberryPi


Figure 3.3: Multistep error bound. (a) Computation performed with Matlab (b) Computation performed with Python

Figure 3.4: Simulation error bound


Conclusion

This thesis work aimed to show how guaranteed accuracy bounds on the forecasting error can be obtained, resorting to the Set Membership framework, through the usage of a linear prediction model, in a variety of application fields.

The first case study was a simple setting that permitted verifying, in a very easy and intuitive way, the result of the polytope update algorithm with bounded complexity. The conclusions that can be drawn from this example concern the accuracy of the final result, which can be influenced by the number of constraints. By increasing the number of constraints, the prediction error decreases, while the complexity increases. The trade-off will be chosen based on the future applications. Moreover, the result provided by the RaspberryPi and the computer was the same, mostly because there were only a few parameters to be identified from a limited set of data. The computational times are extremely low and allow us to conclude that real applications are implementable.

In the second case study, concerning the experimental results obtained from a data set with measurements collected from an office building, the validity and the performance of the proposed approach were illustrated. The work shows that the usage of the artificial input signal significantly improves the forecasting accuracy of linear models. The results obtained from the RaspberryPi and the computer are slightly different, but they lead to the same conclusions. Also in this case the computational times turned out to be very low, so the same conclusions of the previous case can be drawn.

Future work could involve the application of the algorithm for the refinement of the set.


Bibliography

[1] Fred Schweppe. Recursive state estimation: Unknown but bounded errors and system inputs. IEEE Transactions on Automatic Control, 13(1):22–28, 1968.

[2] Fred C Schweppe. Uncertain dynamic systems. Prentice Hall, 1973.

[3] HS Witsenhausen. Sets of possible states of linear systems given perturbed observations. IEEE Transactions on Automatic Control, 13(5):556–558, 1968.

[4] D Bertsekas and I Rhodes. Recursive state estimation for a set-membership description of uncertainty. IEEE Transactions on Automatic Control, 16(2):117–128, 1971.

[5] Alexander B Kurzhanski and Vladimir M Veliov. Modeling techniques for uncertain systems. Birkhäuser, 1994.

[6] Mario Milanese, John Norton, Hélène Piet-Lahanier, and Éric Walter. Bounding approaches to system identification. Springer Science & Business Media, 2013.

[7] Mario Milanese and Carlo Novara. Model quality in identification of nonlinear systems. IEEE Transactions on Automatic Control, 50(10):1606–1611, 2005.


[8] Mario Milanese and Antonio Vicino. Optimal estimation theory for dynamic systems with set membership uncertainty: an overview. Automatica, 27(6):997–1009, 1991.

[9] H Piet-Lahanier and E Walter. Exact recursive characterization of feasible parameter sets in the linear case. Mathematics and Computers in Simulation, 32(5-6):495–504, 1990.

[10] E Walter and H Piet-Lahanier. Recursive robust minimax estimation for models linear in their parameters. IFAC Proceedings Volumes, 25(15):215–220, 1992.

[11] Marco Lauricella and Lorenzo Fagiano. Set membership identification of linear systems with guaranteed simulation accuracy. IEEE Transactions on Automatic Control, 2020.

[12] Luigi Chisci, Andrea Garulli, Antonio Vicino, and Giovanni Zappa. Block recursive parallelotopic bounding in set membership identification. Automatica, 34(1):15–22, 1998.

[13] José Manuel Bravo, Teodoro Alamo, María Julia Redondo, and Eduardo F Camacho. An algorithm for bounded-error identification of nonlinear systems based on DC functions. Automatica, 44(2):437–444, 2008.

[14] Sandor M Veres. Polyhedron updating and relaxation for on-line parameter and state bounding. IFAC Proceedings Volumes, 27(8):1315–1320, 1994.

[15] Antonio Vicino and Giovanni Zappa. Sequential approximation of feasible parameter sets for identification with set membership uncertainty. IEEE Transactions on Automatic Control, 41(6):774–785, 1996.

[16] Marko Tanaskovic, Lorenzo Fagiano, Roy Smith, and Manfred Morari. Adaptive receding horizon control for constrained MIMO systems. Automatica, 50(12):3019–3029, 2014.


[17] S Maraoui and H Messaoud. Design and comparative study of limited complexity bounding error identification algorithms. IFAC Proceedings Volumes, 34(13):513–518, 2001.

[18] SM Veres, H Messaoud, and JP Norton. Limited-complexity model-unfalsifying adaptive tracking-control. International Journal of Control, 72(15):1417–1426, 1999.

[19] N. Agrawal and S. Singhal. Smart drip irrigation system using Raspberry Pi and Arduino. In International Conference on Computing, Communication Automation, pages 928–932, 2015. doi: 10.1109/CCAA.2015.7148526.

[20] V. Sandeep, K. L. Gopal, S. Naveen, A. Amudhan, and L. S. Kumar. Globally accessible machine automation using Raspberry Pi based on Internet of Things. In 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 1144–1147, 2015. doi: 10.1109/ICACCI.2015.7275764.

[21] Sheikh Ferdoush and Xinrong Li. Wireless sensor network system design using Raspberry Pi and Arduino for environmental monitoring applications. Procedia Computer Science, 34:103–110, 2014.

[22] P Bhaskar Rao and SK Uma. Raspberry Pi home automation with wireless sensors using smart phone. International Journal of Computer Science and Mobile Computing, 4(5):797–803, 2015.

[23] Assia Arsalane, Noureddine El Barbri, Abdelmoumen Tabyaoui, Abdessamad Klilou, Karim Rhor, and Abdellah Halimi. An embedded system based on DSP platform and PCA-SVM algorithms for rapid beef meat freshness prediction and identification. Computers and Electronics in Agriculture, 152:385–392, 2018.


[24] Martin Dendaluce Jahnke, Francesco Cosco, Rihards Novickis, Joshue Perez Rastelli, and Vicente Gomez-Garay. Efficient neural network implementations on parallel embedded platforms applied to real-time torque-vectoring optimization using predictions for multi-motor electric vehicles. Electronics, 8(2):250, 2019.

[25] Mehdi Maasoumy, Qi Zhu, Cheng Li, Forrest Meggers, and Alberto Sangiovanni-Vincentelli. Co-design of control algorithm and embedded platform for building HVAC systems. In Proceedings of the ACM/IEEE 4th International Conference on Cyber-Physical Systems, pages 61–70, 2013.

[26] Lennart Ljung. Prediction error estimation methods. Circuits, Systems and Signal Processing, 21(1):11–21, 2002.

[27] Sergio Bittanti. Lecture notes of Model Identification and Data Analysis.

[28] Lorenzo Fagiano. Lecture notes of Constrained Numerical Optimization for Estimation and Control.

[29] EIA. Electric power monthly: with data for January 2015. Technical report, US Energy Information Administration, 2015.

[30] Joaquim Massana, Carles Pous, Llorenç Burgas, Joaquim Melendez, and Joan Colomer. Short-term load forecasting in a non-residential building contrasting models and attributes. Energy and Buildings, 92:322–330, 2015.

[31] Luis Pérez-Lombard, José Ortiz, and Christine Pout. A review on buildings energy consumption information. Energy and Buildings, 40(3):394–398, 2008.

[32] Jordan M Malof, Boning Li, Bohao Huang, Kyle Bradbury, and Artem Stretslov. Mapping solar array location, size, and capacity using deep learning and overhead imagery. arXiv preprint arXiv:1902.10895, 2019.


[33] Ahsan Raza Khan, Anzar Mahmood, Awais Safdar, Zafar A Khan, and Naveed Ahmed Khan. Load forecasting, dynamic pricing and DSM in smart grid: A review. Renewable and Sustainable Energy Reviews, 54:1311–1322, 2016.

[34] George Gross and Francisco D Galiana. Short-term load forecasting. Proceedings of the IEEE, 75(12):1558–1573, 1987.

[35] Hesham K Alfares and Mohammad Nazeeruddin. Electric load forecasting: literature survey and classification of methods. International Journal of Systems Science, 33(1):23–34, 2002.

[36] M. Lauricella, Z. Cai, and L. Fagiano. Day-ahead building load forecasting with a small dataset. IFAC World Congress, 2020.
