
Bridging the energy performance gap: an artificial intelligence based model for urban-scale simulations

Roberto BOGHETTI


“All models are wrong, but some are useful.” George Box


Acknowledgements

This thesis marks the end of a long journey that would not have been the same without the many special people I have shared the road with.

None of this would have been possible without the support and patience of my father. You are, and always have been, a source of inspiration.

I would like to express my deepest gratitude to Chiara. You have helped and motivated me more than you know. We have spent incredible moments together and I promise you many new adventures in the days to come.

I would also like to thank my friends. You were always there, carrying me through the toughest moments and celebrating my successes.

I would like to acknowledge my supervisors, Giacomo and Jérôme, who believed in me and pushed me to follow my curiosity. You offered me an incredible opportunity, wise advice and invaluable guidance. I hope I have given you at least a fraction of what you have given me.

I also owe a debt of gratitude to the people at Idiap, who welcomed me as an old friend from the first moment. I have felt at home every single day, and I could never forget how warm my heart felt when you gathered into my office on my last day.

Finally, I dedicate this work to my mother, Rita. You were a force of nature, and never lost your smile even in the darkest hours. I am sure there is much of you between these pages.


Abstract

With a growing awareness of the importance of optimizing buildings’ efficiency, being able to make accurate predictions of their energy demand is an invaluable asset for practitioners and designers. For this reason, it is important to constantly improve existing models as well as to introduce new methods that can help reduce the so-called energy performance gap, which separates predicted from actual consumption values. This is especially true for urban-scale simulations, where even small scenes can be very complex and require a reasonable balance between precision and computational effort. The scope of this thesis is to evaluate the possibility of using artificial intelligence to effectively predict the energy demand of buildings at urban scale. For this purpose, a machine learning based model is created and applied to two case studies, one in Switzerland and one in Italy. Its results are compared with the in situ measured consumption and with the estimations of a simulation software. The research showed that the use of machine learning resulted in a performance gap in line with, if not lower than, the values reported in the current literature. The reasons for this outcome, as well as possible future research directions, are finally discussed.


Riassunto

In a historical period of growing awareness of the importance of optimizing the efficiency of buildings, the ability to carry out reliable energy simulations is an essential requirement for professionals and practitioners. For this reason, it is important to constantly improve existing models and to introduce new tools that help close the performance gap separating real consumption from the values estimated at the calculation stage. This is even more significant when considering urban-scale simulations, where even small case studies can prove extremely complex, introducing the need to find the right balance between precision and computing time. The objective of this thesis is to evaluate the possibility of using artificial intelligence to effectively predict the energy consumption of buildings by addressing the problem at the urban scale. To this end, a machine learning based model was proposed and tested on two case studies, one in Italy and one in Switzerland. The results obtained were then compared both with the real consumption and with that estimated by a simulation software. The research showed that the use of such a model can lead to results in line with, if not better than, those of other calculation methods, and to a performance gap consistent with the values found in the literature. The reasons for this result and possible future scenarios are finally discussed.


Résumé

Awareness of the importance of improving energy efficiency in buildings is growing, and the ability to accurately predict energy demand is a valuable asset for practitioners and designers. For this reason, it is important to constantly improve existing models, as well as to introduce new methods that can help reduce the energy performance gap, that is, the gap between predictions and actual consumption. This is particularly the case for urban-scale simulations, where even small scenes can prove very complex to model and require a reasonable balance between precision and computational effort. The purpose of this thesis is to evaluate the possibility of using artificial intelligence to effectively predict the energy demand of buildings at the urban scale. To this end, a machine learning model is created and applied to two case studies, one in Switzerland and one in Italy. Its results are compared with the in situ measured consumption and with the estimates of a simulation software. The research showed that the use of machine learning leads to a performance gap similar to, if not lower than, that found in the current literature. The reasons for this result, as well as possible future research directions, are finally discussed.


Table of contents

Part I

Chapter 1 – On the importance of energy efficiency
  1.1 Climate change
    1.1.1 Human footprint: greenhouse gases and global warming
    1.1.2 Evidence and the chain of side effects
    1.1.3 Global warming 1.5°: future perspective
  1.2 Energy use
    1.2.1 The building sector

Chapter 2 – The performance gap
  2.1 The gap and its magnitude
  2.2 Root causes of the energy gap
  2.3 How uncertainties are addressed: occupancy behaviour

Chapter 3 – Artificial Intelligence
  3.1 Definition and history
    3.1.1 Machine learning
  3.2 AI in the collective imaginary
    3.2.1 A note on AGI
  3.3 Ethics
  3.4 State of the art of machine learning methods for energy forecasting

Part II

Chapter 1 – Aim and approach

Chapter 2 – Brief overview of the used tools
  2.1 CitySim
    2.1.1 XML Schema
  2.2 PostgreSQL
    2.2.1 PostGIS
  2.3 Python
    2.3.1 Shapely (ver 1.6)
    2.3.2 Pandas (ver 0.24.1)
    2.3.3 Geopandas (ver 0.4.0)
    2.3.4 SQLAlchemy (ver 1.2.18)
    2.3.5 Scikit-learn (ver 0.20.1)

Chapter 3 – Exploratory data analysis
  3.1 Turin
    3.1.1 Climate
  3.2 Broc
    3.2.1 Climate

Chapter 4 – Features extraction
  4.1 Building-scale features
    4.1.1 In Turin
    4.1.2 In Broc
  4.2 Urban-scale features
    4.2.1 In Turin
    4.2.2 In Broc

Chapter 5 – Machine learning algorithms
  5.1 The bias-variance trade-off
  5.2 Linear models
    5.2.1 Ordinary least squares linear regression
    5.2.2 Ridge regressor
    5.2.3 Lasso
    5.2.4 Elastic net
  5.3 K-nearest neighbours (k-NN)
  5.4 Support vector regression machines (SVR)
  5.5 Decision tree regressor
  5.6 Ensemble methods
    5.6.1 Bagged decision trees
    5.6.2 Random forest

Part III

Chapter 1 – Database structure
  1.1 Structure
    1.1.1 City
    1.1.2 Climate
    1.1.3 Occupancy data
    1.1.4 Surfaces data

Chapter 2 – Physics-based simulation: Python DBLinker
  2.1 How it works

Chapter 3 – Artificial intelligence model
  3.1 How it works

Chapter 4 – Genetic algorithm for model calibration
  4.1 The differential evolution algorithm (DE)
    4.1.1 Implementation

Part IV

Chapter 1 – Results
  1.1 Turin
    1.1.1 After calibration
  1.2 Broc
    1.2.1 After calibration

Chapter 2 – Conclusions
  2.1 Possible research directions


Chapter 1

On the importance of energy efficiency

1.1 Climate change

One of the key challenges of our century is to reduce our impact on the environment: it is well known that anthropogenic emissions are changing the composition of the atmosphere, with the side effect of altering some meteorological parameters and leading to a modification in global climate patterns known as climate change [1, 2]. While the literature is not always unanimous in the definition of climate change, the term is usually used, and will be used in this work, to denote such a modification of multiple environmental drivers over time by different causes, which depend both on human activities and on natural processes [3]. The lack of agreement on a proper definition is mainly due to the complexity and constant evolution of the phenomenon in terms of magnitude and time frames, and to the fact that every attempt to fill this gap brings both benefits and problems that should be thoroughly considered when defining such an important term. Furthermore, the word “climate” itself is a source of additional complication: it is not always clear which systems are to be considered under its definition and, therefore, under the definition of climate change. For the scope of this work, these systems are identified with the atmosphere (air), hydrosphere (water), cryosphere (ice and permafrost), biosphere (living things), and lithosphere (Earth's crust and upper mantle).

1.1.1 Human footprint: greenhouse gases and global warming

Among the main climate drivers modified as an effect of climate change, temperature is particularly relevant, as it is also the trigger for the modification of other meteorological parameters. Historically, our planet has seen different cycles of ice ages and warmer interglacial periods, each with its corresponding impact on the different components of the climate system, as an effect of natural processes such as the subtle shift of the Earth’s orbit. In the last century, however, the planet has witnessed an abnormal rise of temperatures that, according to the vast majority of the scientific community, is a direct consequence of human activities. This phenomenon is known as global warming. A concise but proper definition is given by NASA [4]: “Global warming is the unusually rapid increase in Earth’s average surface temperature over the past century primarily due to the greenhouse gases released by people burning fossil fuels”. Two things can be noted in this definition: the first is that, as opposed to the definition of climate change, global warming specifically refers to human-caused changes. The second observation is that the primary driver of this phenomenon is the emission of greenhouse gases originating from the burning of fossil fuels. A greenhouse gas (GHG) is a gas capable of absorbing and emitting radiant energy within the thermal infrared range. As a consequence, the natural concentration of GHGs in the atmosphere allows most of the incoming solar radiation to reach and heat Earth's surface but prevents part of the outgoing thermal radiation from escaping to space. In this way, part of this energy is trapped within the atmosphere and the system acts like a greenhouse, hence the name. According to [5], it is estimated that without naturally occurring greenhouse gases Earth's average temperature would be around -18 °C, as opposed to the actual 15 °C. As for human activity, the most significant share of global emissions, referred to the year 2017, is taken by carbon dioxide (CO2), which represents 73% of the total, followed by methane (CH4) at 18%, nitrous oxide (N2O) at 6% and the so-called F-gases (HFCs, PFCs, SF6 and NF3) at 3%. A closer look at the main causes of CO2 emissions reveals that 89% of the human-produced share is a consequence of coal, oil or natural gas combustion for the conversion of energy [6]. When these fossil fuels are burned, oxygen combines with carbon to form CO2 and with hydrogen to form H2O, and these reactions release heat that is used as a source of energy. Therefore, one of the biggest opportunities for reducing GHG emissions and climate change lies in an evolution of the energy sector, where the transition to renewable sources and the optimization of energy consumption can reduce human-caused emissions in a relatively short time window.

1.1.2 Evidence and the chain of side effects

Because of the changes in global temperatures that occur naturally during Earth’s lifespan, quantifying the magnitude of global warming as previously defined is a complicated task, to the point that its very existence is sometimes disputed. Within the scientific community, however, the consensus on the global scenario is supported by 97-98% of publishing climate scientists [7]. The historical data on global mean temperatures shows a clear increase during the last century, and in particular in the last decade. This trend is visible in Figure 1. The chart, elaborated by NASA using the GISTEMP v4 database [8, 9], shows the land-ocean temperature index, 1880 to present, with base period 1951-1980. The solid black line is the global annual mean and the solid red line is the five-year LOWESS smooth. The grey shading represents the total annual uncertainty at a 95% confidence interval. The temperature anomaly slowly starts growing around 1910, with an increasingly steeper change from 1980 onward, touching a peak close to +1 °C. While at first glance this number might seem relatively small, its impact on the climate is far from negligible.

Figure 1 - Land-ocean temperature index, 1880 to present.

Another point of view on the magnitude of the problem can be gained by looking at regional data. Figure 2, taken from the first chapter of the IPCC special report ”Global Warming of 1.5 °C” [10], illustrates the spatial and seasonal pattern of present-day warming (2006–2015 decade) relative to the pre-industrial base period 1850-1900. The data clearly shows that warming, as might be expected, is not spatially or seasonally uniform: generally speaking, land regions experience a substantially higher increase than oceanic areas, reaching values of more than twice the global average. Furthermore, there is a noticeable concentration of higher temperature anomalies in the Northern Hemisphere, where the relative warming has reached values above 3 °C, demonstrating how local changes can be far more severe than the global average might suggest.

This scenario comes with several side effects that impact all the components of the climate system and, therefore, different aspects of human life. Moreover, these changes are deeply interconnected and interact with each other, effectively creating unpredictable secondary effects. While the topic would deserve an essay of its own, it is beyond the scope of this thesis and will therefore be discussed only briefly, in order to provide a general introduction to the problem. Among these side effects, the most notable are:

Glaciers melting: the rise of temperatures around the globe is leading to a progressive melting of mountain glaciers, the ice sheets covering West Antarctica and Greenland, and Arctic sea ice. As a consequence, more water flows to the seas from glaciers and ice caps, and ocean water warms and expands in volume. This combination of effects has played the major role in raising average global sea level by 10 to 20 centimetres in the past hundred years, according to the Intergovernmental Panel on Climate Change (IPCC) [11]. This, in turn, has severe repercussions on Arctic and oceanic wildlife, leads to stronger erosion and inundation of shores, and intensifies the impact of storms [12].

Wildlife damages: rising temperatures heavily affect wildlife and their habitats. This is true not only for Arctic and oceanic fauna, but for every living being on the planet: the biosphere is a delicate system that finds its balance in the relationships between its elements. Global warming breaks this balance by influencing the populations of different species and, therefore, the resources needed to sustain their growth in number. Among the consequences are damages to other animals’ habitats and to crops. Furthermore, global warming has a direct impact on migration patterns: many species have changed their migration habits in order to find cooler places.

Forest disturbances: there are direct and indirect relationships between global warming and the amplification of forest disturbances [13] caused by both abiotic (fire, drought, wind, snow and ice) and biotic (insects and pathogens) disturbance agents. Trees are considered natural sponges for the absorption of CO2, to the point that it has been estimated that forests and other ecosystems could provide more than one-third of the total CO2 reductions required to keep global warming below 2 °C through to 2030 [14] (scientists, however, are still discussing the actual feasibility of this solution); this effect therefore adds a layer of complexity to the task of mitigating the speed at which the climate is changing.

Altered precipitation patterns: rainfall and snowfall have increased across the globe on average. Extreme events in particular are becoming more frequent and intense and might exceed our expectations in the future [15]. On the other hand, some regions are experiencing more severe droughts, increasing the risk of wildfires, lost crops, and drinking water shortages.

Increased humidity and magnified heatwaves: global warming is causing an increase in the number and intensity of heat waves in many regions of the world, while affecting the levels of humidity at the same time [16]. The coexistence of these two phenomena can create extreme conditions that are dangerous for animals and humans.

While the list notes only the most important effects, the scenario that arises from these points should instil a critical level of concern, even more so if we consider that the effects of a further increase in global average temperatures are hard to predict and might be worse than our forecasts.

Before concluding this section, there is one more question that needs to be addressed: what is the rate of human-induced warming as opposed to the natural evolution of the Earth’s climate? The answer is complex and carries an intrinsic uncertainty due to our limited capability to precisely estimate the natural component of the Earth’s shift to higher temperatures. From the analysis of the period 1950-2010, Jones et al. [18] estimated that the range of possible contributions to the observed warming of 0.6 °C from greenhouse gases could be approximately between 0.6 and 1.2 °C, balanced by a counteracting cooling from other anthropogenic effects of between 0 and −0.5 °C. Similar conclusions are drawn by Haustein et al. [17]: their study estimated an anthropogenic increase, referred to the period 1850-79, of 1.01 °C in May 2017, with an uncertainty range of +0.87 to +1.22 °C (5–95% confidence interval). Furthermore, they estimated the corresponding natural externally driven change to be in the range −0.01 ± 0.03 °C, very small in comparison to the human contribution. The authors therefore concluded that all the observed warming since 1850–79 is to be considered anthropogenic. The results of their research are shown in Figure 3.

1.1.3 Global warming 1.5°: future perspective

The threat of climate change is becoming more and more evident, and its growth is approaching a critical point of no return. Countries and organizations are taking measures to limit the magnitude of future global warming, but the efforts required to mitigate this phenomenon to acceptable levels are enormous. In December 2015, the United Nations drafted an agreement, signed in April 2016, to combat climate change and to accelerate and intensify the actions and investments needed for a sustainable low-carbon future. The document, known as the Paris Agreement, contains the long-term goal of keeping the increase in global average temperature well below 2 °C above pre-industrial levels and, further, of trying to limit the increase to 1.5 °C. Following the agreement, the previously mentioned special report entitled “Global warming of 1.5 °C” was published by the Intergovernmental Panel on Climate Change (IPCC) in 2018 to better understand the impact of this decision, the future scenario we should expect if the goal is met, and the path we should pursue to achieve this result. While a general overview of the main possible consequences of global warming is given in section 1.1.2 of this thesis, a picture of how human life could change in the future can be found in the fifth chapter of the aforementioned report [19]. A summary of the direct implications of the two different levels of global warming estimated in the report can be found in Table 1.

Table 1 - Global warming and its possible future effects.

Water scarcity
  1.5 °C: 4% more people exposed to water stress; 496 (range 103–1159) million people exposed and vulnerable to water stress
  2 °C: 8% more people exposed to water stress, with 184–270 million more people exposed; 586 (range 115–1347) million people exposed and vulnerable to water stress

Ecosystems
  1.5 °C: around 7% of land area experiences biome shifts; 70–90% of coral reefs at risk from bleaching
  2 °C: around 13% (range 8–20%) of land area experiences biome shifts; 99% of coral reefs at risk from bleaching

Coastal cities
  1.5 °C: 31–69 million people exposed to coastal flooding; fewer cities and coasts exposed to sea level rise and extreme events
  2 °C: 32–79 million people exposed to coastal flooding; more people and cities exposed to flooding

Food systems
  1.5 °C: significant declines in crop yields avoided, some yields may increase; 32–36 million people exposed to lower yields
  2 °C: average crop yields decline; 330–396 million people exposed to lower yields

Health
  1.5 °C: lower risk of temperature-related morbidity and smaller mosquito range; 3546–4508 million people exposed to heat waves
  2 °C: higher risks of temperature-related morbidity and mortality and larger geographic range of mosquitoes; 5417–6710 million people exposed to heat waves

The most evident implication of these effects will be the worsening of existing poverty and the exacerbation of inequalities. The study estimates that a severe aggravation of these consequences will be reached by 2030, following the impact on food and water security, health and other components of sustainable development. There is, however, strong agreement within the scientific community that limiting the temperature rise to within 1.5 °C would still make it possible to achieve the sustainable development goals for poverty eradication, water access, safe cities, food security, healthy lives and inclusive economic growth, and would contain the damage to terrestrial ecosystems and biodiversity.

1.2 Energy use

As stated in section 1.1.1, the main source of greenhouse gas emissions is the combustion of fossil fuels for the conversion of energy. It is therefore vital to take quick action to reduce the impact of this sector on global temperatures. According to the IEA [20], between 1971 and 2017 the world total primary energy supply (TPES) increased by more than 2.5 times and kept growing in 2018 at an even faster pace. During these years, coal and natural gas have seen the highest share of growth among the energy sources, while oil reduced its predominance from 44% to 32% of the total, not enough to register a decrease in absolute value. 2018 pictured a similar scenario, but also showed a significant growth of more than 4% in the use of renewables, which now account for 25% of the world energy supply. This trend is attributable to the expansion in electricity generation, where renewables accounted for 45% of the growth in 2018. Among these, solar PV, hydropower, and wind each accounted for about a third of the growth, with most of the remaining share taken by bioenergy. However, while the numbers look encouraging on paper, to meet the goals for the next decades the growth of renewables must proceed at a much quicker rate, with their share in the power mix rising from one-quarter today to two-thirds in 2040.

1.2.1 The building sector

Buildings are the main destination of the produced energy. They account for 46% of final natural gas consumption, 76% of combustible renewables and waste, 52% of electricity use, and 51% of heat. Taking into account the losses generated during the production and transformation of the energy itself, which represent 29.30% of TPES, these numbers sum up to 22.52% of the total primary energy supply. Of this tremendous amount of energy, about half is linked to space and water heating. This scenario, however, is not stable and could be subject to change, as climate change affects heating and cooling energy demands [21]. In particular, it leads to a decrease in heating demand in favour of space cooling, and possibly to a need for higher peak energy in response to extreme events. The side effect of this shift would be a decline in the demand for natural gas and oil and an increased need for electricity, which in turn could bring a disproportionate increase in energy infrastructure. These statistics put the building sector at the centre of the discussion around climate change: given the wide room for improvement, it effectively represents one of the best opportunities for reducing greenhouse gas emissions, both for new and for existing buildings. On the other hand, it has the potential to worsen the situation if we delay our intervention, leaving slow action out of the options.

Chapter 2

The performance gap

2.1 The gap and its magnitude

With a growing concern around the efficiency and sustainability of buildings, reducing their energy consumption and, therefore, their carbon footprint is of primary importance. The design phase is a crucial step that highly influences the future behaviour of a building, and being able to make accurate predictions is vital for the final result. There is, however, a well-documented mismatch between the energy demand predicted during the design phase and the as-built measured performance [22, 23, 24, 25]. This is often referred to as the performance gap. Extensive research has been done on the topic, with many initiatives aimed at giving an order of magnitude to the problem. PROBE (Post-occupancy Review of Buildings and their Engineering) [26] was one of the first projects to highlight the presence of the performance gap, demonstrating that buildings could consume twice as much energy as they were supposed to. To come to this conclusion, 23 buildings in England perceived as perfectly designed were observed during the years 1995-2002. In more recent times, CarbonBuzz [27] was launched by the Royal Institute of British Architects (RIBA) and the Chartered Institution of Building Services Engineers (CIBSE). The platform, open since 2008, allows professionals and researchers to anonymously publish building energy consumption data. An energy audit was conducted by the UCL Energy Institute on office and education buildings in the CarbonBuzz database and published in April 2013 [28]. The audit highlighted, for the two building types respectively, a median error of +59% and +48% for heat consumption and of +71% and +90% for electricity use, as represented in Figure 4.


Figure 4 – Results from the audit done by the UCL Energy Institute on the CarbonBuzz database in April 2013.

Figure 5 – Magnitude of the performance gap in the studies reviewed by Shi et al.

A broader look at the magnitude of the performance gap is given by a comprehensive list of recent studies on the topic compiled by Shi et al. [29]. The paper compares the upper and lower limits of the gaps found in the reviewed studies, showing significant differences between predicted and measured energy performance, with actual consumption ranging from 0.26 to 4.0 times the predicted demand. It also highlights how the magnitude of this gap varies significantly between similar case studies, making a good comprehension of the problem a difficult task. In some cases, however, this fluctuation is a consequence of a lack of standardization in how the performance gap is quantified and presented in the literature, and could therefore be improved in the future. Finally, it is important to note that the predicted energy consumption can be either larger or smaller than the as-built one, and that an estimation of the behaviour of a given building in this regard cannot be made a priori with enough confidence. Of the studies reviewed in the paper, those that reported an upper and lower limit for the performance gap are presented in Figure 5.
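As a point of reference for the figures above, one common (though not universal) way of quantifying the gap expresses it as the relative deviation of the measured from the predicted consumption:

    gap = (E_measured - E_predicted) / E_predicted

Under this convention, the ratios of 0.26 to 4.0 reported by Shi et al. correspond to gaps of roughly -74% to +300%.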

2.2 Root causes of the energy gap

While no correlation can easily be established between the magnitude of the performance gap and commonly used building parameters, the extensive research done on the topic has shed some light on the underlying factors that influence the problem. Following the analysis from [23] and [30], the sources of discrepancy can be grouped into three main categories according to the phase of the building life cycle during which they originate: causes that pertain to the design stage, causes from the construction stage (including hand-over to the client), and causes that relate to the operational stage.

Design stage causes can be the most disparate. A first cause can be found in a miscommunication between the client and the designer on the energy targets for the future building. The same misunderstanding can also arise within the design team itself, leading to an inconsistency between the adopted solutions and the expected performances. Another well-known issue is the use of incorrect models or simulations. Software and tools used to predict energy consumption can be inadequate, or may be based on equations or algorithms that expose the user to a higher gap. While this problem may easily be mitigated by using only appropriately validated and accredited tools, the physical complexity of the problem and the level of uncertainty on some key parameters, coupled with the neglect of some hard-to-factor properties, make a certain degree of error unavoidable. The use of properly validated software, however, does not in itself guarantee acceptable results: it is also important that the user has the right knowledge of the problem and the experience and skills to use these tools in the right manner. Finally, keeping the software up to date is also a necessary practice that is often overlooked, but is needed to prevent the user from relying on outdated or obsolete data. Among the causes strictly related to the design, human errors play a key role. Many factors can drive the design team to make wrong assumptions that may lead to underperforming technical decisions. On the other hand, some solutions, typically energy saving systems, may be overly complex, leading to errors in later phases. A poor design can also take the form of hard-to-build solutions, inadequate choices or wrong assembly sequences. An important role in this regard is also played by how the chosen design is communicated: a lack of specifications or details in the project is a significant issue that is often addressed during the construction stage with poorly improvised solutions. One last error commonly made during this phase is to neglect the future evolution of the building and of its context. On the building side, this means not taking into account possible changes in its use and occupancy, as well as failing to predict performance deteriorations of the design components. Looking at the context instead, both urban development and environmental changes add a layer of complexity that is often overlooked.

The construction stage carries many potential issues that can either originate during this step or have their root in the design phase. As previously noted, miscommunication and lack of proper design details and specifications are common sources of mismatch between the project as conceived during the design phase and the actual building that will be handed to the client. When details are poorly specified or not specified at all, in fact, the final choice is often left to the contractor, who will eventually solve the issue with suboptimal or flawed solutions. The same problem can also be brought up by design adjustments or changes made on site to overcome unexpected hurdles. At the root of these problems there is often a lack of expertise and energy performance knowledge or skills on site, or a lack of interest on the contractor's part in assuring a well-performing building. Furthermore, changing requests from the client can lead to a similar situation where the final design does not match the specifications chosen during the previous phase. One last group of issues that often arises during the construction of a building is attributable to the wrong assembly of components. Gaps in the insulation, poor installation of fabric and hurried refinements can create thermal bridges and alter the performance of the components, negatively affecting the behaviour of the building.

The operational stage is in most cases the phase that contributes the most to the performance gap. This is especially due to the fact that several hypotheses made during the previous phases may turn out to be wrong. The first, and often cited as the main, reason for the performance gap is the number of occupants and their behaviour. Predicting how people will make use of appliances, windows and systems is a complex task that is ultimately governed by subjectivity. Even in highly automated buildings, users’ actions can influence the energy consumption drastically. This becomes even more complicated with technological development, which favours the introduction of IT-related loads in everyday life that are usually higher than anticipated. An overview of the most common techniques used to address occupancy behaviour will be covered in the next section. Other sources of misalignment during this stage can be identified in the use of various devices. Both sensors and energy efficiency systems are, in fact, prone to errors, miscalibrations and outdated software that negatively affect their performance. Finally, poorly executed maintenance, or the lack of it, can lead to a severe modification of the building’s behaviour.


2.3 How uncertainties are addressed: occupancy behaviour

As occupancy behaviour is often pointed out as the main cause of the performance gap, it is relevant to provide a general overview of the problem as well as the approaches that are usually used to model it. A good, if limited, insight into the magnitude of the gap that different behaviours can create is given by Parker et al. [31]: their study compared the total energy consumption of 10 nearly identical homes near Homestead, Florida. The resulting yearly energy consumptions were evenly distributed between the extreme values of 7,257 kWh/year and 20,452 kWh/year, an upper limit that is almost three times the lower one. While the factors that could have contributed to this gap are the most disparate, the considerable similarity of the case studies suggests that occupancy behaviour might have played an important role. Another study by Eguaras-Martínez et al. [32] reports that changes in the occupancy behaviour during simulations could lead to differences in the energy consumption of up to 30% within their case studies. The typical key types of behaviour that need to be addressed, along with their effects, are:

• Presence, which leads to metabolic heat gains

• Windows, which impact the natural ventilation rate

• Blinds, which affect illuminance and transmitted irradiance

• Lights, which introduce heat gains and electrical energy demand

• Waste, which influences the production of combustible and recyclable solids

To cope with this underlying uncertainty, using stochastic models is a common approach. Among these, one of the most popular choices is the hidden Markov model (HMM). Hidden Markov models are Bayesian networks based on the assumption that, in a randomly changing system, future states depend only on the current state, not on the events that occurred before it. These states, however, are not directly visible to the user at any given time, hence the word “hidden”. What is visible is the output data, which is correlated to these states by a probability distribution. The other frequent approach, usually proposed in norms, is to use a deterministic model, with homogeneous occupancy schedules tied to the building type. These schedules are given in the form of daily profiles composed of hourly values that define the fraction of occupancy or of the energy used. In both cases, these models are built upon previously known occupancy data, gathered with different methods that can impact the quality of the forecast. These methods can be direct, when the data is gathered through monitoring, or indirect, when the user behaviour is inferred from surveys. Using artificial intelligence in energy forecasting may eliminate the need to create such models beforehand, as it implicitly takes care of them based on the availability of relevant input data.
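To make the two families of approaches more concrete, the following minimal Python sketch (illustrative only, not the occupancy model used in this thesis) contrasts a deterministic hourly schedule with a simple two-state Markov chain for presence; all numerical values are arbitrary placeholders.

    import numpy as np

    # Deterministic approach: a normative daily profile giving the occupied
    # fraction for each hour of the day (placeholder values).
    residential_profile = np.array([
        1.0, 1.0, 1.0, 1.0, 1.0, 1.0,   # 00-05: night, fully occupied
        0.8, 0.5, 0.2, 0.1, 0.1, 0.1,   # 06-11: occupants leave for work
        0.1, 0.1, 0.1, 0.2, 0.4, 0.6,   # 12-17: gradual return
        0.8, 0.9, 1.0, 1.0, 1.0, 1.0,   # 18-23: evening and night
    ])

    # Stochastic approach: a two-state Markov chain (0 = absent, 1 = present).
    # Future presence depends only on the current state; hidden Markov models
    # add an unobserved state layer on top of this idea.
    p_arrive = 0.2   # P(present at t+1 | absent at t), placeholder
    p_leave = 0.1    # P(absent at t+1 | present at t), placeholder

    rng = np.random.default_rng(seed=0)
    state = 1                      # start the day occupied
    presence = []
    for hour in range(24):
        presence.append(state)
        if state == 0:
            state = 1 if rng.random() < p_arrive else 0
        else:
            state = 0 if rng.random() < p_leave else 1

    print("Deterministic profile:", residential_profile)
    print("One stochastic realisation:", presence)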


Chapter 3

Artificial Intelligence

In the past decade, artificial intelligence (AI) has seen a steady surge in popularity, and its development has reached new levels of maturity. It is quickly becoming a fixed presence in different areas of human life: from industry to social science, there are countless situations where AI has brought an improvement to state-of-the-art models and tools, and sometimes has even replaced them. The engineering world, and in particular the energy sector, is no exception: much recent research has focused on applying this technology to different phases of the energy life cycle, from the planning stage to the operational one. In this chapter, a closer look at the foundations of AI is given, as well as a brief overview of its impact on human society.

3.1 Definition and history

The term Artificial Intelligence (AI), also known as Machine Intelligence, is used to denote the ability of a machine to carry out tasks that are typically associated with human intelligence, simulating the behaviour of our brain. These tasks include, among others, learning, planning and interpreting images and sounds. A more rigorous definition is given by Kaplan et al. [33], who describe it as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. The idea of a similar technology has ancient roots, and has been a leitmotif of stories and legends across all human ages. Its actual existence, however, is much more recent, while still not as recent as the common imaginary might suggest. The first application of AI, in fact, dates back to 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts published a paper on how neurons might work [34] and built a neural network with electrical circuits to illustrate their theory. It is, however, more than 10 years later, in 1956, that artificial intelligence became a proper research field, during a summer workshop at Dartmouth College. After an initial excitement and some remarkable results, progress hit a technological barrier that could not be passed at the time and quickly started to slow down, leading to a period called the “AI winter”, when research on the topic became increasingly scarce. In the following decades, AI went through new cycles of highs and lows, finally culminating in a new prolific phase in the early 1990s that laid the foundation for its exponential growth in the following 20 years. One of the reasons for this success can be found in the increased computational power of machines, which paved the way for more complex and precise algorithms and allowed the use of larger quantities of data, the foundation on which artificial intelligences are built.

3.1.1 Machine learning

Among the many capabilities of artificial intelligence, machine learning (ML) is one of the most widely known and used. The term refers to the application of artificial intelligence that gives systems the ability to automatically learn from a set of inputs, typically vectors, and outputs. This goal can be achieved using many different statistical models, each with its advantages and disadvantages. The origin of the name machine learning dates back to 1959, when the American computer scientist Arthur Lee Samuel coined the term to indicate a “field of study that gives computers the ability to learn without being explicitly programmed”. In the most famous of his studies, he investigated how learning algorithms could be used to shape the behaviour of a computer in the game of checkers [35], which was a recurrent theme in the AI research of that period. What is really noteworthy, however, is that by that year he could affirm that machines were playing better than the average human. Machine learning techniques have greatly evolved since then, and nowadays they have become the underlying technology that fuels self-driving cars, business decisions and even medical discoveries. With regard to the nature of the problem to solve, machine learning algorithms can be divided into four main categories: supervised, semi-supervised, unsupervised and reinforcement learning.

• Supervised learning is the machine learning task of building a mathematical model from a set of fully labelled data containing both the inputs and the desired outputs. Fully labelled means that each example in the training dataset is tagged with the right answer or target variable. Such a dataset of, for example, flower images would teach the model how to tell apart photos of roses, daisies and daffodils. When shown a new image, the model would then compare it to the training data to predict the correct label. Supervised learning models can typically be used for both classification and regression problems.

• Unsupervised learning is the task of building models that infer patterns from unlabelled, unclassified or uncategorized data. The most common practical form of this method is cluster analysis, which is used to explore raw data without human intervention. Unsupervised learning finds its main applications in the fields of density estimation and feature analysis.

• Semi-supervised learning is halfway between the two previous categories. It includes problems where the training set contains both labelled and unlabelled data, with the latter typically being the majority. This method is particularly useful when extracting relevant features from the data is difficult and labelling examples is a time-intensive task.

• Reinforcement learning is the area of machine learning where models are trained using a reward-based approach. The AI agent is built so that, when it attempts to improve its performance on a specific task, it receives a reward based on the level of improvement. The agent iteratively repeats this process with the goal of maximizing the final reward and, therefore, its performance. This approach is particularly useful for games, where following a long-term strategy is usually better than just trying to make the best immediate move every time.

In this work, supervised learning algorithms will be used to estimate the energy demand of buildings by training a model with labelled data.
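As a minimal illustration of this supervised setting (a sketch on synthetic data, not the model developed later in this work), a regressor from scikit-learn, one of the libraries listed among the tools, can be trained on a table of building features labelled with the measured annual heating demand:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    # Synthetic example: each row is a building described by a few features
    # (heated floor area [m2], window-to-wall ratio [-], construction year),
    # labelled with a toy "measured" annual heating demand [kWh].
    rng = np.random.default_rng(0)
    n = 200
    area = rng.uniform(80, 2000, n)
    wwr = rng.uniform(0.1, 0.6, n)
    year = rng.integers(1900, 2020, n)
    X = np.column_stack([area, wwr, year])
    y = 120 * area * (1 - 0.002 * (year - 1900)) + rng.normal(0, 5000, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)        # learn from labelled examples
    pred = model.predict(X_test)       # predict the demand of unseen buildings
    print("MAE [kWh]:", mean_absolute_error(y_test, pred))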

3.2 AI in the collective imaginary

The idea of a machine able to formulate thoughts and take decisions on its own has always been a recurrent theme in the human imaginary. More than two thousand years ago, Greek mythology explored this topic in various forms. The tales of the Argonauts offer one of the first examples of an intelligent being that was “made, not born”: the bronze automaton Talos, crafted by the god of blacksmiths Hephaestus and charged with guarding the shores of Crete. In addition to creating Talos, according to Homer’s poems Hephaestus also created multiple automated servants for his workshop, blessed with the gods’ knowledge, which could be seen as an ancient mythical version of artificial intelligence [36]. Another mythical automaton capable of understanding and following the orders of its creator is widely present in Jewish folklore: the figure of the Golem, an anthropomorphic clay statue at the centre of many tales. It is however during the Middle Ages that the archetype of an intelligent machine with human capabilities started to appear frequently in art and legends [37]. Many attempts were also made at creating mechanical devices which could emulate animal and human movements, and in some cases they were said to have been given a proper consciousness. This is the case of Pope Sylvester II, who according to legend was able to build a mechanical brazen head that could answer “yes” or “no” to his questions. From there to the present day, the history of art is full of different interpretations of what can be described as an artificial intelligence. These stories, movies, paintings and legends have all contributed to shaping our idea of what it could be like, and what it could become in an imaginary future. From the tales of Hephaestus to the modern blockbusters, the wide majority of these representations share a common fear of losing our ability to manage such intelligent machines, a fear that is far from being a prerogative of the world of fiction. Recent studies [38, 39, 40] show how a significant number of people see artificial intelligence as a possible threat: cyber-attacks, social surveillance and loss of data privacy are among the major concerns raised by people globally. Moreover, according to Zhang and Dafoe’s report, 12% of Americans believe that the impact of high-level machine intelligence will be “extremely bad, possibly human extinction”. Another, less dystopian concern regards the loss of jobs due to AI-driven automation. Studies carried out on the topic report contrasting findings: some of them share workers’ concerns and forecast a progressive contraction of the labour market, while others foresee a shift in occupational demand from routine tasks to roles that require social, empathic and interpersonal skills, possibly with an improvement of employees’ conditions and a growing demand in a large number of sectors [41].

3.2.1 A note on AGI

As stated in the previous paragraph, artificial intelligence is a source of concern for many people. Some of these fears, however, refer to a different kind of technology: the so-called artificial general intelligence (AGI), or “strong” AI [42]. The main difference from classical (or “weak”) AI is the ability not only to perform a very specific action, but to interact with unknown problems with the rationale of a human being. An AGI-driven machine would “learn how to learn”, and would follow its own thought process without human intervention. As of now, AGI is nothing more than a concept: while a large number of studies are focused on creating such a technology, a significant share of experts even questions its feasibility.

3.3 Ethics

Unlike the frightening images of a dystopian future in media and popular fiction, AI is changing our daily lives mostly in ways that improve human health, safety, and productivity. There are, however, many ethical implications and questions that have arisen as a consequence [43]. The topic is extremely wide and complex, and most of the questions are still unanswered. Without delving too deep, and consequently losing the generality that this introduction is meant to have, the main ethical concerns are the following:

• Can an AI system be held accountable for its actions? While this may seem like an easy question to address, there are several underlying implications to consider. First of all, understanding an AI system’s decisions is not a straightforward process, as it requires considerable effort and exposes businesses to revealing their trade secrets. Secondly, the AI space comes with a tremendous range of systems different from one another, making generalization a difficult task.

• Can AI be employed in positions that require respect and care? In jobs such as nurse, judge, or police officer, human empathy plays an important role. While these roles can benefit from the introduction of AI from a practical standpoint, would there be a backlash on the ability to exercise respect and care? Could AI be a threat to human dignity in some cases?

• How should we address AI bias? The data on which an artificial intelligence is trained highly influences its behaviour. If this data shows some sort of bias, it may get formalized and ingrained. This is the case, for example, of hiring and recruitment: an algorithm may favour male candidates over female ones, as this has historically been a common trend. Could AI accentuate discrimination rather than eradicating it?

• Can AI be predictable? Predictability is an important requirement of any system that has the power to influence human lives. While the functioning of traditional software is governed by precise sets of rules, AI decisions are driven by a more obscure type of reasoning. This may leave room for fringe cases, unpredictable for the most part, where AI behaviour deviates from expectations, possibly creating dangerous situations. How can this problem be addressed? Is its existence acceptable if it still brings an overall improvement?

These and many other questions are currently debated, and in many cases experts are strongly divided on the answers. Countries are taking action and creating legal frameworks and guidelines in an effort to find unanimous strategies to regulate this complex matter.

3.4 State of the art of machine learning methods for energy forecasting

Given the central importance of energy efficiency in the present day and the promising results that machine learning methods are demonstrating, extensive research is being conducted on the topic. A comprehensive list of recent studies in this regard is given by Amasyali et al. [44]. Results from different studies indicate that machine learning driven approaches can reach a significant level of precision, provided that there is enough quality data to support the training of the models. They can make forecasts with errors comparable to those of traditional physics-driven methods, and reduce the computational times of complex scenes by a significant factor. In some cases, machine learning models are even capable of handling uncertainties and approximations better than traditional methods, outputting predictions closer to reality and therefore reducing the resulting performance gap. Comparing these studies, however, is not always possible, as there is a significant number of parameters that define the problem and several different approaches that can be taken. Nonetheless, this great diversity is indispensable to understand how the different algorithms can perform under different circumstances. An important step towards establishing a more rigorous common thread would be to define proper guidelines and best practices that guarantee a uniform structural workflow while allowing for different approaches to be used. Furthermore, identifying some common case studies on which to focus research, without losing generalizability, would greatly benefit the ability to compare results, potentially allowing for further insights.


Chapter 1

Aim and approach

The scope of this thesis is to analyse the performance gap between the predicted and measured yearly heating demand of buildings at urban scale, and to investigate the possible advantages of using machine learning driven models over traditional methods. While there exists a substantial body of literature on the subject, a standard methodology has not been agreed upon by the scientific community, and the research panorama is vastly inhomogeneous: for this reason, the exploration of different methods, as well as of different case studies, can represent a valuable addition to the ongoing discussion. Furthermore, the availability of quality data provided for this thesis and the uniqueness of the chosen approach differentiate this work from previous research. To achieve this goal, several steps were needed. A diagram that summarises the workflow for each case study is given in Figure 6.

Figure 6 – Brief summary of the work done for each case study, in the form of a diagram.

The first step was to analyse and clean the available data. This is a standard operation that always needs to be done when working with data. In this case, it consisted of retrieving missing data, checking for errors and correcting them, converting values into the desired units of measurement, and removing corrupted entries. After this pre-processing operation, two online databases, one for each case study, were created and organized according to the structure given in Part III, section 1.1. Storing the databases online has the advantage of allowing different users to simultaneously access the latest version of the data while keeping sensitive information private. While this was not of primary importance for this thesis, the choice was made in view of future developments of this work. For each case study, the available data was then uploaded, along with newly created tables, with the support of minor Python scripts. With both databases set up, a first, uncalibrated physical simulation of the two scenarios was made and its performance evaluated. The same input data was then processed using a machine learning model and the results of both methods were compared. The idea behind this step was to see how well an artificial intelligence could support the user during the preliminary energy demand calculation, when a thorough definition of some input parameters that need to be measured in situ is yet to be done. Following this first comparison, the physical model was calibrated using an evolutionary algorithm. The aim of the calibration is to find the optimal values of an arbitrary number of parameters for which the output of the model is as close as possible to the measured energy demand. In particular, the chosen parameters were the setpoint temperature, the number of people and the infiltration rate, because of their direct connection with human behaviour. The machine learning model was then run again and evaluated, to see whether the tuning of these parameters improved its performance. Finally, the findings from both case studies were compared and briefly discussed.
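The calibration step can be sketched as follows. This is an illustrative outline only: it uses SciPy's differential_evolution in place of the implementation described in Part III, and the simulation call, bounds and measured value are placeholders.

    from scipy.optimize import differential_evolution

    measured_demand = 185.0  # measured annual heating demand [MWh], placeholder

    def run_simulation(setpoint, n_people, infiltration):
        """Placeholder for a CitySim run with the given parameters.
        In the real workflow this would rewrite the XML and call the solver."""
        # Toy surrogate: demand grows with setpoint and infiltration,
        # and decreases with internal gains from occupants.
        return 10.0 * setpoint + 40.0 * infiltration - 0.5 * n_people

    def objective(params):
        setpoint, n_people, infiltration = params
        predicted = run_simulation(setpoint, n_people, infiltration)
        return abs(predicted - measured_demand)   # error to minimise

    # Bounds: setpoint temperature [°C], number of people, infiltration rate [1/h]
    bounds = [(18.0, 24.0), (1.0, 200.0), (0.1, 1.0)]

    result = differential_evolution(objective, bounds, seed=0, maxiter=50)
    print("Calibrated parameters:", result.x)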

Chapter 2

Brief overview of the used tools

2.1 CitySim

CitySim is an urban-scale building energy simulation software built as the successor of SUNtool [45]. It consists of two parts: CitySim Solver and CitySim Pro (Figure 7). The former is the command-line solver that runs the simulation. It was developed at the Solar Energy and Building Physics Laboratory (LESO-PB) of the École polytechnique fédérale de Lausanne. The latter is a graphical user interface (GUI) for CitySim Solver designed by Kaemco SARL, a spin-off company of the LESO-PB. The idea behind CitySim is to combine the lighting calculation with a simplified thermal model and energy conversion systems. To assure a good trade-off between accuracy, input data and computing time, the following models are used:

• Simplified Radiation Model (SRA) [46]: a radiation model based on the discretization of the sky into 145 patches developed for SUNtool.

• Simplified Thermal Network (4N) [47]: a simplified thermal model designed to ensure reasonable simulation times even with a large number of buildings, and sufficient flexibility to work with little descriptive information.

• Retroaction on Energy Conversion Systems (ECS) and Occupants’ behaviour.

Three input files are needed in order to run the simulation: a 3D model of the buildings to simulate, an XML file containing their thermo-physical properties and a Meteonorm CLI file for the weather data. It is however possible to have the 3D model written directly in the XML file, as chosen for this work.

Figure 7 – The interface of CitySim Pro with the Broc XML loaded.


2.1.1 XML Schema

Storing all the information needed for the simulation in an XML file allows for a simple automation of the process, since only one file needs to be generated. In order for this set-up to work properly, the XML has to follow a precise schema, which is thoroughly described in the guide provided by Kaemco. The schema can be divided into the following sections:

Header: contains the CitySim default tag and the specification of the simulation period and of the CLI file to use.

FarFieldObstruction: contains information about far field obstructions. Analogously to the HOR files generated by Meteonorm, the points are given with their azimuth (phi) and elevation (theta) angles.

Composites: stores the list of used materials, each one made of one or more layers. Every layer is assigned its thickness, conductivity, specific heat and density.

OccupancyProfiles: the daily and yearly profiles of users are given in this section. A value from 0 to 1 indicates the rate of occupancy.

DeviceTypes: stores all the devices necessary for the simulation, divided according to their type. For each device, the average power, the convective and radiative fractions of the sensible heat gain, and the hourly mean usage probability are specified.

ActivityTypes: stores the different possible activities and their probability of occurrence on an hourly basis.

DHWProfiles: daily and yearly probability of domestic hot water usage, along with the expected daily volume of water consumed per person.

Buildings: this section contains all the buildings, which can be divided into different thermal zones. For the purpose of this work, only one thermal zone is considered for each building, corresponding to the building itself. Each building tag contains information about the internal volume, set temperatures, blinds and installed systems. The thermal zone tags, on the other hand, contain information about the walls, floors and roofs geometries as well as the occupants and the total losses caused by linear and point thermal bridges. Code extract 1 shows an example of how a building is represented in an input XML file for CitySim; a schematic sketch of how such a file can be assembled programmatically is also given after this list.

Footer: finally, the footer contains the closing tags for the sections of the XML that are still open at this point.
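As a complement to the schema description above, the following Python sketch shows how a minimal input file could be assembled with the standard xml.etree.ElementTree module. The element and attribute names are illustrative assumptions and must be checked against the schema guide provided by Kaemco; the geometry and values are arbitrary.

import xml.etree.ElementTree as ET

# Root CitySim tag with a simulation period (tag and attribute names are assumptions)
root = ET.Element("CitySim")
ET.SubElement(root, "Simulation", beginDay="1", endDay="365")

district = ET.SubElement(root, "District")
# Far field obstructions, composites, occupancy, device, activity and DHW profiles
# would be appended to the district here, following the sections described above.

building = ET.SubElement(district, "Building", id="1", Vi="1500")  # internal volume in m3
zone = ET.SubElement(building, "Zone", id="1")
ET.SubElement(zone, "Occupants", n="4")

# A single wall described by its material type and its 3D vertices (x, y, z)
wall = ET.SubElement(zone, "Wall", type="1")
for i, (x, y, z) in enumerate([(0, 0, 239), (10, 0, 239), (10, 0, 251), (0, 0, 251)]):
    ET.SubElement(wall, "V{}".format(i), x=str(x), y=str(y), z=str(z))

ET.ElementTree(root).write("example.xml", encoding="utf-8", xml_declaration=True)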


2.2 PostgreSQL

PostgreSQL, or simply Postgres [48], is an open source object-relational database management system born in 1986 at the University of California, Berkeley (the source code repository is available at https://git.postgresql.org/gitweb/?p=postgresql.git). It is designed to work with many different data types, including custom ones, making it a very flexible tool. Furthermore, the system supports both SQL for relational and JSON for non-relational queries. While Postgres is capable of handling very complex database structures, for the sake of simplicity a basic configuration is used in this work, leaving most of the data handling to the Python scripts. The structure of the databases used here can therefore be summarized as a simple hierarchical tree, in which the database is divided into folders called schemas that contain the tables.

2.2.1 PostGIS

PostGIS is an open source spatial database extender for Postgres that adds support for geographic objects (https://svn.osgeo.org/postgis/). First released by Refractions Research (http://www.refractions.net/) in 2001, it is now widely used by researchers and professionals. PostGIS was chosen for its versatility and compatibility with the other tools needed for this work.

2.3 Python

Python [49] is an interpreted programming language created by the Dutch programmer Guido van Rossum and first released in 1991. Its name is a reference to the British comedy group Monty Python, whose work is mentioned many times in the language documentation. The design philosophy of Python emphasizes code readability and clarity, notably through whitespace indentation and clear formatting. This, together with its higher productivity compared with other languages [50], is often considered one of the main reasons for its success, also reflected by its third place in the TIOBE popularity index [51]. The choice of Python for this work was motivated by its versatility and by the extensive availability of libraries, especially those targeted at data science and machine learning. The version used was Python 3.6, within the Anaconda distribution (https://www.anaconda.com/). Spyder (https://www.spyder-ide.org/) was chosen as the IDE (integrated development environment).

2.3.1 Shapely (ver 1.6)

Shapely [52] is a Python library for the manipulation and analysis of geometric objects in the Cartesian plane, based on the GEOS and JTS libraries. It was created and is currently maintained by Sean Gillies.

2.3.2 Pandas (ver 0.24.1)

Pandas [53] is a library that provides versatile data manipulation and analysis tools, created by Wes McKinney and first released in 2008. Its power lies in the use of flexible yet intuitive data structures. In particular, pandas introduces the DataFrame, a two-dimensional tabular data structure with labelled axes that can be used with the most common data science and artificial intelligence libraries.

2.3.3 Geopandas (ver 0.4.0)

Geopandas [54] is an extension of the pandas library that supports geospatial data. Its main data structure, the GeoDataFrame, can store geometric items and makes it possible to apply shapely methods and functions to them.
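A minimal sketch of how these three libraries interact is given below; the footprints, attribute names and values are purely illustrative.

import geopandas as gpd
from shapely.geometry import Polygon

# Two toy footprints; in this work the geometries come from the case study databases
footprints = [Polygon([(0, 0), (10, 0), (10, 8), (0, 8)]),
              Polygon([(20, 0), (26, 0), (26, 6), (20, 6)])]
gdf = gpd.GeoDataFrame({"building_id": [1, 2], "height_m": [12.0, 9.0]},
                       geometry=footprints)

# Shapely attributes can be evaluated column-wise; the result is a pandas Series
gdf["footprint_area"] = gdf.geometry.area
gdf["gross_volume"] = gdf["footprint_area"] * gdf["height_m"]
print(gdf[["building_id", "footprint_area", "gross_volume"]])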

2.3.4 SQLAlchemy (ver 1.2.18)

SQLAlchemy [55] is a Python SQL toolkit and Object Relational Mapper created by Michael Bayer and initially released in 2006. In the present work, SQLAlchemy was used to store, modify and retrieve data in the PostgreSQL database previously described. The connection is established in steps. First, an engine with the database address and credentials must be created. Second, the .connect() method must be called on the engine. At this point, with the connection established, it is possible to execute queries and other operations. Finally, it is good practice to disconnect from the server; this is usually done automatically by SQLAlchemy, but in some cases the .close() method must be called on the connection.
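The following sketch illustrates these steps; the connection string, schema and table names are placeholders and not those of the actual databases.

from sqlalchemy import create_engine, text

# Step 1: create an engine with the database address and credentials (placeholders)
engine = create_engine("postgresql://user:password@localhost:5432/urban_energy")

# Step 2: open a connection
conn = engine.connect()

# Step 3: execute queries and operations, here creating a schema with one table
trans = conn.begin()
conn.execute(text("CREATE SCHEMA IF NOT EXISTS turin"))
conn.execute(text("""
    CREATE TABLE IF NOT EXISTS turin.buildings (
        building_id INTEGER PRIMARY KEY,
        height_m    DOUBLE PRECISION,
        use_type    TEXT
    )
"""))
conn.execute(text("INSERT INTO turin.buildings VALUES (:id, :h, :use)"),
             {"id": 1, "h": 12.4, "use": "residential"})
trans.commit()

rows = conn.execute(text("SELECT building_id, height_m FROM turin.buildings")).fetchall()

# Step 4: close the connection explicitly (usually handled automatically by SQLAlchemy)
conn.close()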


2.3.5 Scikit-learn (ver 0.20.1)

Scikit-learn [56] is a machine learning library initially developed by David Cournapeau as a Google Summer of Code project in 2007. It contains a wide range of state-of-the-art algorithms for supervised and unsupervised problems. Scikit-learn is designed to work with other common Python libraries, such as matplotlib, numpy for array vectorization, and pandas. Some of the algorithms are written in Cython (a programming language designed to give C-like performance with code that is written mostly in Python) for improved performance.
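The typical scikit-learn workflow is sketched below on synthetic data; this is only an illustration of the library's interface, not the model used in this thesis, and the features and values are invented.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic example: predict an annual demand from three normalized building features
rng = np.random.RandomState(0)
X = rng.uniform(size=(500, 3))
y = 120 * X[:, 0] + 40 * X[:, 1] + rng.normal(scale=5, size=500)

# Hold out part of the data, fit a regressor and evaluate it on the unseen samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))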

Chapter 3

Exploratory data analysis

As the type of input data will significantly impact the generalizability of the tools and methods developed in this work, it is essential to examine the databases beforehand, to understand how the feature values are distributed. Parameters that were extracted from the study of the geometries will be covered in Chapter 4, along with their explanation. This first part of the data analysis is intended to provide a general overview of the complete databases and, therefore, to give some insights on the two cities taken as case studies in their entirety. However, since only a subset of buildings was used in the simulations, where needed this analysis is accompanied by a comparison with the share of the data set that was actually used in this work.

3.1 Turin

The Turin database was provided by the Politecnico di Torino as a shapefile (SHP), a vector data storage format commonly used in geographic information system (GIS) software. It contains data for 58970 buildings, whose geometries are stored as 2D footprints. As the database lacks information on the altitude, a value of 239 m was assumed for the whole city. Coupled with each building's height, the altitude is used to generate the Z coordinates needed to create the 2.5D model of the city.
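A minimal sketch of this operation is shown below: given a footprint, the assumed altitude and the building height, the floor and roof vertex lists of the 2.5D model are generated. The function name and the toy footprint are illustrative.

from shapely.geometry import Polygon

ALTITUDE = 239.0  # constant altitude assumed for the whole city, in metres

def footprint_to_25d(footprint, height, altitude=ALTITUDE):
    """Return the floor and roof vertex lists (x, y, z) of a 2.5D building model."""
    xy = list(footprint.exterior.coords)[:-1]  # drop the repeated closing vertex
    floor = [(x, y, altitude) for x, y in xy]
    roof = [(x, y, altitude + height) for x, y in xy]
    return floor, roof

# Toy footprint extruded to a height of 12 m
floor, roof = footprint_to_25d(Polygon([(0, 0), (10, 0), (10, 8), (0, 8)]), 12.0)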

Figure 8 – Distribution of the buildings along the latitudinal and longitudinal axes, taking into account the occupied surface.

The city of Turin is an important cultural and business centre in the Piedmont region, northern Italy. It is surrounded on the western and northern sides by the Alps, and on the eastern front by a high hill that is the natural continuation of the hills of Monferrato. According to Istat [57], Turin occupies a total surface of 130.01 km². Figure 8 shows the distribution of the built environment along the longitudinal and latitudinal axes. Darker areas correspond to a higher building density, while in lighter ones the constructions are sparser. The graph highlights a development of the city along the North-South axis with a sinuous path: like the majority of European cities whose growth followed an important source of water, Turin was built and extended along Italy's longest river, the Po.

The buildings in the database can be divided into 14 classes according to their use. As expected, the residential type covers the vast majority of the sample, with 44803 buildings, over three quarters of the whole database. Among non-residential buildings, the top spots are taken by industrial, commercial and service buildings, with 4519, 2818 and 1985 representatives respectively. These numbers reflect the historical importance of the city of Turin in the Italian and European industrial panorama. The remaining buildings are, in order: 1035 schools, 918 offices, 719 kindergartens, 606 medical buildings, 403 university buildings, 397 sport facilities, 306 churches, 269 hotels, 108 recreational buildings and 93 swimming pools. The count for non-residential buildings is shown in Figure 9. Only residential buildings were used in the simulations, for a total of 386 data points.

Figure 9 – Count of non-residential buildings in Turin.
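Counts such as those in Figure 9 can be obtained directly from the attribute table of the shapefile; a short sketch is given below, where the file and column names are hypothetical.

import geopandas as gpd

# Load the building attribute table from the shapefile (file and column names are illustrative)
buildings = gpd.read_file("turin_buildings.shp")

# Count the buildings per use class and isolate the non-residential ones
counts = buildings["use_type"].value_counts()
non_residential = counts.drop("residential", errors="ignore")
print(non_residential)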

An overview of the buildings’ age distribution is given in Figure 10. Buildings in the city are relatively old: more than 85% of them were built before 1971, with a concentration of data in the period 1961-1970. No information was provided on possible renovations, however. As the wall stratigraphies used in this work are assumed from the construction and renovation years, the lack of this data can in some cases lead to an underestimation of the thermal performance of the building envelope. Another important consideration concerns the height of the buildings. With a mean of 14.15 m and a median of 12.39 m, Turin has seen a marked upward development that has continued through the years, as shown in Figure 11.


Figure 10 – Count of the buildings’ age in the full database (left) and in the used portion (right).

Figure 11 – Violin plot of the buildings’ height by period.

This trend has a strong influence on urban-scale parameters and impacts the energy performance of buildings in different ways [58]. The median height in an urban area is in fact one of the factors that most strongly affect the amount of direct sunlight that can reach the buildings’ surfaces and, therefore, their solar heat gains. It has also been shown that the median height of a built environment is linked with the presence and magnitude of urban heat islands [59]. This effect, while well known thanks to extensive research on the topic, is often neglected during heating demand calculations due to its complexity. It has, however, an impact on measured performance and it is therefore relevant for
