Context-aware deployment of Fog applications

Academic year: 2021


University of Pisa

Department of Computer Science

Context-aware Deployment of Fog

Applications

MSc in Computer Science

(Software Curriculum)

Candidate:

Andrea Liut

Supervisors:

Antonio Brogi

Stefano Forti

A. Y. 2018/2019



Acknowledgements

I would like to express my gratitude to my supervisors, Professor Antonio Brogi and Stefano Forti, for their guidance, patience, and enthusiastic assistance throughout this thesis, and to the Computer Science Department for funding this research work.

Appreciation is due to the staff of the Computer Science Department for their help and collaboration in the early stages of the work and its final validation.

I would like to thank my colleagues of the Giò Project Alessandro, Alessia and Jacopo, for their assistance, help, and good times during the work.

A special thanks is due to my family and friends for their unconditional support and encouragement throughout my studies.



Abstract

In this thesis, we first design and prototype a microservice-based IoT application (Giò Plants), capable of managing indoor plants within a building by measuring their soil moisture level, temperature and light intensity, and of watering them when needed. We then design and prototype a platform (FogLute), on top of Kubernetes, to support the automated, context-aware deployment of multi-service applications to a Fog infrastructure. Finally, we validate both prototypes by exploiting FogLute to deploy the Giò Plants application over a real infrastructure set up at the Department of Computer Science of the University of Pisa, Italy.


Contents

1 Introduction
  1.1 Context
  1.2 Problem considered
  1.3 Thesis Objectives
  1.4 Thesis Outline
2 Background
  2.1 Fog and Internet of Things
  2.2 Context-aware placement
  2.3 MicroBit
  2.4 Kubernetes
3 Giò Plants
  3.1 Overview
  3.2 Requirements elicitation
    3.2.1 User stories
  3.3 Data model
  3.4 Architecture and Implementation
    3.4.1 Overview
    3.4.2 SmartVase
    3.4.3 Fog Node
    3.4.4 Device Driver
    3.4.5 Devices Microservice
    3.4.6 API Gateway
    3.4.7 Frontend
4 FogLute
  4.1 Overview
  4.2 Data model
  4.3 Architecture and Implementation
    4.3.1 Deployer
      4.3.1.1 Deployment
      4.3.1.2 Objects generation
      4.3.1.3 Withdraw
      4.3.1.4 Nodes information
    4.3.2 Placement Analyzer
      4.3.2.1 EdgeUsher
    4.3.3 Node Watcher
    4.3.4 REST API
  4.4 Testing
5 Use case analysis
  5.1 Set up
  5.2 Testing
6 Conclusion
  6.1 Summary
  6.2 Related Work
    6.2.1 IoT frameworks
      6.2.1.1 FogFlow
      6.2.1.2 EdgeX Foundry
  6.3 Assessment of Contribution
  6.4 Future works
References

Appendix

A Giò Plants Application description


Chapter 1

Introduction

1.1 Context

The Internet of Things (IoT) is undergoing relentless growth, which is only partially supported by current software and infrastructure architectures [5]. IoT environments generate unprecedented amounts of data that can be useful in many ways, particularly if analyzed for insights. However, the data volume can overwhelm today’s storage systems and analytics applications. Cloud systems by themselves cannot fully support IoT applications that must meet stringent latency and bandwidth constraints. Furthermore, Cloud connection latencies are not adequate to host real-time tasks such as life-saving connected devices, augmented reality, or gaming [11].

In this context, a decentralized computing paradigm that shifts the focus of application design away from centralized architectures towards the edge of the network has gained traction in industry. This approach is called Fog computing [30].

Fog computing introduces a hierarchy of computing nodes along the IoT-Cloud continuum so as to enable QoS- and context-aware application deployments that can meet stringent application requirements by taking advantage of the processing capabilities of the devices spread across the network [26]. This approach has been successfully applied in many fields, such as healthcare [17] and agriculture [23].

Compute, storage, and networking resources are the building blocks of both the Cloud and the Fog. Still, the Fog is a non-trivial extension of the Cloud, characterized, as proposed by Bonomi et al. [8], by:

• Edge location, location awareness, and low latency,
• Very large number of nodes, as a consequence of wide geographical distribution,
• Large-scale sensor networks to monitor the environment,
• Support for mobility,
• Real-time interactions,
• Heterogeneity and interoperability.

In the Fog computing context, the Department of Computer Science of the University of Pisa funded the Giò project, which aims at realizing a Fog computing testbed for research and educational purposes.

The Giò project involves the design, implementation and actual deployment of four microservice-based IoT platforms, namely:

1. GiòPlants, a service that manages indoor plants using smart pots, monitoring their soil moisture, environment temperature, and brightness, and acting to maintain their optimal health status,

2. GiòMaps, an interactive virtual assistant interface realized with Amazon Alexa that shows an interactive map of the Computer Science Department,

3. GiòShader, a service that manages a motorized roller shutter to bring natural brightness to the levels desired by the users,

4. GiòRoom, a service that orchestrates GiòPlants and GiòShader to satisfy users’ goals by exploiting the features provided by the two services.

1.2 Problem considered

Fog computing applications can be deployed to any device that can support them over the IoT-Cloud continuum and are expected to be able to offload tasks to nearby devices.


The problem of deciding where to place each application service (i.e., functionality) onto infrastructure nodes is an interesting one to tackle, and it is provably NP-hard [11]. Indeed, in recent years, much literature has focused on determining the best QoS- and context-aware deployment of multi-service IoT applications to Fog infrastructures. Current software products are no longer monolithic but organized as a set of components or services assembled as a system and operating together [9]. Yet, many currently active platforms and frameworks do not apply such techniques to generate deployments of multi-service IoT applications. The problem addressed by this thesis can be stated as follows:

Devise a platform able to manage multi-service application deployments over a dynamic Fog infrastructure, generating placements for application components according to context constraints provided as specification.

1.3 Thesis Objectives

In this work, IoT frameworks, use cases, and tools in the area of Fog computing are analyzed to design and implement a microservice computing platform, namely FogLute, able to place Fog applications on real infrastructures according to a context-aware specification. FogLute provides all the mechanisms to orchestrate application components within the currently available infrastructure, handling resource availability and the assessment of metrics, e.g., service time, latency, and resource usage costs. Finally, the platform allows future development to enhance and improve its performance to fit specific requirements. As a case study, a microservice-based application capable of managing indoor plants, namely Giò Plants, is designed and prototyped to analyze and validate the performance of the platform.



1.4 Thesis Outline

The rest of this manuscript is organized as follows:

Chapter 1 provides the context of the thesis, introducing the research topic and explaining the motivations and problem statement.

Chapter 2 provides some needed background information for the reader to get familiar with the concepts used in this work. In particular, it introduces the Fog computing paradigm, the problem of context-aware placement, the Internet of Things, some state-of-the-art IoT frameworks for application deployment, and Kubernetes, a container orchestrator designed by Google.

Chapter 3 describes the design and implementation of Giò Plants. Requirements elicited from the users are identified and highlighted, and the resulting design is explained as well as the implementation details.

Chapter 4 describes the FogLute platform. As for Giò Plants, the requirements are identified, and the design and implementation are explained.

Chapter 5 describes the testing and validation of FogLute using the deployment of the Giò Plants system as a test case. Performance is then measured and an analysis of the results is discussed.

Chapter 6 presents a summary of the work and its achievements, indicating directions for future work and further development.


Chapter 2

Background

2.1 Fog and Internet of Things

The Internet of Things (IoT) is a new technology paradigm envisioned as a global network of machines and devices capable of interacting with each other [4]. The definition of the Internet of Things has evolved due to the merging of multiple technologies [14], real-time analytics, machine learning and deep learning [29], and automation systems. In the consumer market, IoT technology is mostly related to the concept of the "smart home" [27], covering devices and appliances (such as lighting fixtures, thermostats, home security systems and cameras, and other home appliances) that support one or more common ecosystems, and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers.

The IoT, interconnection and communication between everyday objects, enables many applications in many domains: industry (supply chain management, transportation and logistics, aerospace, aviation, and automotive), society (telecommunication, medical technology, healthcare, smart building) and environmental (agriculture and breeding, recycling, disaster alerting, environmental monitoring) [31].

Hardware components, such as sensors and actuators, constitute the most important elements of the IoT [6]. The typical microprocessor used at the hardware layer is usually based on ARM or X86 architectures, with a real-time operating system supporting high-performance communication mechanisms and cryptographic capabilities for security purposes.

Typically, IoT nodes should operate using low power in the presence of lossy and noisy communication links. Examples of communication protocols used for the IoT are WiFi, Bluetooth, NFC, Z-Wave, and LTE-Advanced [1]. System designers must take several design decisions when building an IoT system, e.g., power consumption, security, provided features, and more [2].

Fog infrastructures greatly support IoT applications. As Fog resources can be deployed anywhere with a network connection, they can be placed close to the IoT devices, thus minimizing latencies and introducing context capabilities while offloading gigabytes of traffic data from the network [1]. This makes the Fog the appropriate platform for a number of critical Internet of Things applications, such as Smart Cities, Connected Vehicles, and Wireless Sensor and Actuator Networks (WSANs) in general [8].

2.2 Context-aware placement

Fog computing aims to extend the IoT+Cloud scenario, enabling ubiquitous access to a shared continuum of scalable computing resources. Fog infrastructures are expected to support Quality-of-Service- (QoS) and context-aware application deployments [18][26] in order to exploit the full capabilities of Fog resources.

Modern multi-service applications consist of many independently deployable components [22], each with its own resource requirements, interacting in a distributed way. Some application services are suitable to be placed in the Cloud (e.g., data mining tools, service back-ends) while others are suitable to be placed at the edge (e.g., drivers, filters, data collectors). Moreover, the overall application deployment may not be straightforward, requiring the analysis of a very large number of services and infrastructure components. Thus, constructing a deployment of a multi-service application over a Fog infrastructure that satisfies all the functional and non-functional constraints is provably NP-hard [9]. Fog computing should support adaptive deployment to edge infrastructures, dynamically taking into account both the application requirements and the current state of the infrastructure. A suitable model of Fog applications and infrastructures is therefore crucial to achieving QoS-aware placements of services [7].

In recent years, several approaches have been proposed to tackle this problem. Skarlat et al. [32] proposed an Integer Linear Programming method. Brogi et al. [11] introduced both an exhaustive and a greedy backtracking algorithm based on the different kinds of requirements of multi-component applications, estimating QoS-assurance, resource consumption and deployment costs. One of the most recent approaches employs Network Function Virtualisation (NFV) [24], a network architecture concept that uses IT virtualization technologies to virtualize entire classes of network node functions into building blocks that may connect, or chain together, to create communication services.
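To give the flavor of such heuristics, the following Go toy sketches a greedy, first-fit placement of services onto nodes by free memory only. All names are hypothetical, and it deliberately ignores latency, bandwidth, and cost, which the cited approaches do consider; a complete search would backtrack where this sketch gives up.

```go
package main

import (
	"fmt"
	"sort"
)

// Service and Node are simplified models: real approaches also account
// for latency, bandwidth, and IoT reachability, not just memory.
type Service struct {
	Name string
	RAM  int // required memory (MB)
}

type Node struct {
	Name string
	RAM  int // free memory (MB)
}

// greedyPlace assigns each service to the first node with enough free RAM,
// trying the most demanding services first. It returns nil if no placement
// is found this way.
func greedyPlace(services []Service, nodes []Node) map[string]string {
	sort.Slice(services, func(i, j int) bool { return services[i].RAM > services[j].RAM })
	free := map[string]int{}
	for _, n := range nodes {
		free[n.Name] = n.RAM
	}
	placement := map[string]string{}
	for _, s := range services {
		placed := false
		for _, n := range nodes {
			if free[n.Name] >= s.RAM {
				free[n.Name] -= s.RAM
				placement[s.Name] = n.Name
				placed = true
				break
			}
		}
		if !placed {
			return nil // an exhaustive or backtracking search would not stop here
		}
	}
	return placement
}

func main() {
	services := []Service{{"frontend", 256}, {"devices", 512}, {"gateway", 128}}
	nodes := []Node{{"edge-1", 600}, {"cloud-1", 4096}}
	fmt.Println(greedyPlace(services, nodes))
}
```

The greedy strategy is fast but incomplete: it may fail on inputs for which a feasible placement exists, which is why backtracking variants are studied.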

2.3 MicroBit

The MicroBit is an open-source hardware ARM-based embedded system designed by the BBC for use in computer education [28].

The MicroBit is equipped with an ARM Cortex-M0 processor, accelerometer and magnetometer sensors, Bluetooth and USB connectivity, a display consisting of 25 LEDs, two programmable buttons, and can be powered by either USB or an external battery pack. The device inputs and outputs are through five ring connectors that form part of a larger 25-pin edge connector.

The MicroBit was created using the ARM mbed development kits, providing a platform for IoT development. The compiled code is flashed onto the device using USB or Bluetooth connections. The code can be written in several languages, such as MicroPython (a Python implementation optimized to run on a microcontroller), JavaScript, and C++. For this thesis’s goals, the micro:bit runtime and the C++ language are used, as they give much more control over the resources of the board. Furthermore, the micro:bit runtime offers better control over all the features provided by the board and over fiber execution, i.e., lightweight threads used to perform operations asynchronously.



2.4 Kubernetes

Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management, designed by Google. It orchestrates application containers across clusters of hosts, providing a set of loosely coupled components that allow a flexible way to manage applications [19]. Many cloud services use Kubernetes as a basis for their platforms (Platform-as-a-Service) or infrastructures (Infrastructure-as-a-Service).

Kubernetes contains many abstractions that represent the state of the system. These abstractions are represented by objects in the Kubernetes API. The main Kubernetes objects are:

Pod A Pod is the smallest deployable unit that Kubernetes can manage. It consists of one or more containers that are guaranteed to be placed on the same host machine, sharing resources as well as the IP address and port space. Containers in the same Pod can communicate with each other using standard inter-process communication, whereas containers in different Pods have distinct IP addresses and can communicate only through a special configuration (Services). Pods are considered ephemeral: they are created, scheduled, and removed according to policies and constraints. With this design, Pods provide a high-level abstraction that is suitable for horizontal scaling and replication.

Service A Service is an abstraction that defines an accessibility policy for a set of Pods. Services are used to allow containers that are placed into different Pods to communicate or to expose applications as a network service.

Volume A Volume provides persistent storage for a Pod for the entire lifetime of the Pod that encloses it. Thus, it persists data even if a container inside the related Pod is restarted.

Namespace A Namespace provides partitioning of resources allowing the definition of virtual clusters backed by the same physical cluster. Namespaces are intended to be used in the presence of multiple users spread across multiple teams providing scopes for names.



Figure 2.4.1: Kubernetes architecture

Kubernetes follows the master/slave architecture, and its components can be divided into those that are part of the Control Plane and those that manage a single node.

The components of the Control Plane manage how Kubernetes communicates with the cluster: they maintain a record of all the Kubernetes objects in the system and keep those objects’ state up to date. The main Control Plane components are as follows:

etcd etcd is a persistent, lightweight, distributed key-value data store that holds the configuration data of the cluster and its overall state. etcd provides consistency to the system, which is crucial to obtain correct scheduling of resources.

API Server The API Server exposes a RESTful interface using JSON over HTTP, providing both the internal and external interface to Kubernetes.

Scheduler The scheduler is the component responsible for assigning unscheduled Pods to nodes, based on resource availability and the assigned constraints.

The Kubernetes Node, or Minion, is a machine that provides a container runtime, such as Docker, and allows containers to be deployed. It must also run the kubelet component, which monitors the state of the node’s containers, taking care of starting, stopping, and restarting them when needed.

Kubernetes allows the usage of add-ons, defined as Pods and Services, to implement cluster features. Many add-ons are available, such as the Web UI or monitoring tools, but the most important is the DNS add-on, which allows Pods to use symbolic names, defined by Services, to communicate with other Pods of the cluster.


Chapter 3

Giò Plants

This chapter presents the Giò Plants platform, describing the whole design and implementation process. It comprises the functional and technical documentation of the devised system. Section 3.1 presents the Giò Plants platform, providing a high-level description of the goals that it aims to achieve. Section 3.2 describes the requirements analysis process, from the very early stages to the establishment of the software specification. Section 3.3 presents the data model, outlining the domain of interest along with a description of the main entities of the system. Section 3.4 covers the essential technical documentation and describes the architecture design in depth, exposing its APIs and implementation.

3.1 Overview

Giò Plants is an IoT software platform that aims to manage indoor plants within a building. Each plant is planted in a smart pot, namely a SmartVase, able to detect its health status and to perform actions to maintain the best possible health status, such as watering the soil. Furthermore, the platform provides a dashboard that lists all the registered SmartVase devices and, for each of them, allows users to get the current status of the plant. Giò Plants is part of a bigger project, namely Giò, which aims to manage smart environments exploiting Fog computing, modern user interfaces, and interaction mechanisms.



Figure 3.1.1: Giò Plants overview

3.2 Requirements elicitation

The requirements analysis aims to define the functional and non-functional requirements, use cases, actors, and workflows to be satisfied by the Giò Plants platform. This phase is crucial because effective requirements make the design and development phases easier, reducing development costs and time.

Giò Plants is developed following the Agile methodology. Agile methods rely on an incremental approach to software specification, development, and delivery, and they are best suited when system requirements change rapidly during the development process. New releases of the software are delivered quickly to customers, who can propose changes to the requirements for the next releases.

3.2.1 User stories

The main actors of the use case scenarios are the staff of the Computer Science Department and the SmartVase devices. In order to devise the functional requirements, the staff of the Computer Science Department was interviewed using a questionnaire divided into two sections: the first asking questions about the plants and their daily care, and the second asking how Giò Plants could improve the quality of their work.

Section 1 – Plants management
• How often do you take care of the plants?
• What are the tasks you perform to take care of the plants?
• What time of day do you most prefer to dedicate to plant care?
• How do you check whether a plant needs water or not?
• Do you also take care of regulating plant lighting and/or temperature? If yes, when?
• Do you water the pot saucer or the plant directly?
• Would you like to be able to automate some of these activities?
• Would you like to be notified about the condition of the plants?

Table 3.2.1: Giò Plants questionnaire - Plant management

Section 2 – How Giò Plants can help
• What tasks related to plant care would you like to complete with Giò?
• Can you think of other tasks in which Giò could help? (Not just about plant care)
• What aspects of these tasks could be automated?
• Would you use a system like Giò Plants at home?
• Would you like to try a Giò prototype?

Table 3.2.2: Giò Plants questionnaire - System need

The interviews were recorded, and the answers were written down and analyzed to produce the following user stories:

As a Plant supervisor, I want to:
• ask for the health of the plants, so that I can assess the plants’ health status;
• ask for the status of the plants, so that I can check the plants’ status remotely;
• manually trigger the watering of a plant, so that I can water a plant if I decide it needs more water;
• see the plant remotely, so that I can check which plant has bad leaves;
• consult a dashboard, so that I can check the status of all plants quickly;
• get notified when the status of a plant changes, so that I can fix the problem as quickly as possible.

Table 3.2.3: Giò Plants user stories

3.3 Data model

This section describes how entities are represented within the system. Structures are presented as Go structures, and the JSON name of each field is shown.

Device A Device is a description of a physical device that interacts with the system. Each Device is identified by an ID, randomly generated when the device registers to the system. Name is a human-friendly name that can be assigned to the device, Mac is the unique identifier assigned to the device’s network interface controller, and Room indicates the ID of the room in which the device is placed.

type Device struct {
    ID   string `json:"id,omitempty"`
    Name string `json:"name"`
    Mac  string `json:"mac"`
    Room string `json:"room"`
}

Listing 3.1: Device structure

Room A Room is a description of the place in which a Device can be located. It is used to logically group devices. It is identified by an ID; Name is a human-friendly name that can be assigned to the room.

type Room struct {
    ID   string `json:"id,omitempty"`
    Name string `json:"name"`
}

Listing 3.2: Room structure

Reading A Reading represents a single value produced by a Device. Each Reading is identified by an ID, randomly generated when the reading is produced. Name is provided by the Device Driver when it first receives the data from the Fog Node, and Value is the actual value produced; it is represented as a string to allow a wide range of data. Unit is an optional indication of the unit of measure relative to the value. CreationTimestamp indicates the time instant at which the Reading is created.

type Reading struct {
    ID                string `json:"id"`
    Name              string `json:"name"`
    Value             string `json:"value"`
    Unit              string `json:"unit"`
    CreationTimestamp string `json:"creation_timestamp"`
}



3.4 Architecture and Implementation

In this section, the essential technical specification is introduced. The technical specification defines how the functional and non-functional requirements of the system are achieved. The elements used to define the technical requirements include a general overview of the platform architecture, the most important design and technology decisions, and the concrete interfaces between the platform components.

Interactions between components are described using Unified Modeling Language (UML) sequence diagrams. Those diagrams describe exactly the sequence of actions that the components perform during their activity.

3.4.1 Overview

The Giò Plants platform is realized following a microservices-based architecture. The microservices architectural pattern is a variant of the Service-Oriented Architecture (SOA) style, which structures an application as a collection of loosely coupled services [21]. Services are fine-grained, independently deployable, highly maintainable, scalable, and testable pieces of software, often maintained by a small cross-functional team.

The Giò Plants architecture consists of several components:

SmartVase The SmartVase is a smart pot for plants that monitors the status of the plant inside it and sends data to the driver to which it is connected.

Fog Node The Fog Node is a tool running on edge hosts that handles connections with BLE devices and forwards data to the local Device Driver.

Device Driver The Device Driver receives data updates from the Fog Node and forwards them to the Devices Microservice.

Devices Microservice The Devices Microservice manages device registration and provides storage for their data readings.

API Gateway The API Gateway service acts as an entry point for clients that want to interact with the platform.


Frontend The Frontend microservice provides a web User Interface for displaying data and interacting with devices.

Figure 3.4.1: Giò Plants architecture

All of the Giò Plants services are written in the Go language, while the SmartVase firmware is written in C++. Go, also known as Golang, is a statically typed, compiled programming language designed at Google, featuring memory safety, garbage collection, structural typing, and CSP-style (communicating sequential processes) concurrency. Its high-performance networking and multiprocessing support and the availability of libraries and tools make Go a good candidate for implementing concurrent microservices.

Communication between components is realized using RESTful APIs over the HTTP protocol. The RESTful interfaces exposed by the microservices are implemented using the Gorilla Mux library [16], which simplifies the construction of RESTful interfaces by providing a request router and dispatcher that matches incoming requests to their respective handlers. For simplicity, no persistence method is implemented for the services: all produced data are stored in memory, and cleanup mechanisms are implemented to avoid memory issues.


The services share the same project structure and a few common design choices. Sources of data are realized using the Repository and Singleton patterns. Using the Repository pattern helps achieve loose coupling and keeps domain objects persistence-ignorant.
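A minimal sketch of how the two patterns can be combined in Go, assuming a hypothetical Room repository (the interface and method names are illustrative, not the actual Giò Plants code):

```go
package main

import (
	"fmt"
	"sync"
)

type Room struct {
	ID   string `json:"id,omitempty"`
	Name string `json:"name"`
}

// RoomRepository hides how rooms are stored, keeping domain code
// persistence-ignorant.
type RoomRepository interface {
	Save(r Room)
	FindByID(id string) (Room, bool)
}

// inMemoryRooms is the only implementation in the prototype; a database-backed
// one could replace it without touching callers.
type inMemoryRooms struct {
	mu    sync.RWMutex
	rooms map[string]Room
}

func (s *inMemoryRooms) Save(r Room) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.rooms[r.ID] = r
}

func (s *inMemoryRooms) FindByID(id string) (Room, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	r, ok := s.rooms[id]
	return r, ok
}

var (
	once     sync.Once
	instance RoomRepository
)

// Rooms returns the process-wide singleton repository, created lazily.
func Rooms() RoomRepository {
	once.Do(func() {
		instance = &inMemoryRooms{rooms: map[string]Room{}}
	})
	return instance
}

func main() {
	Rooms().Save(Room{ID: "r1", Name: "Lab 1"})
	r, _ := Rooms().FindByID("r1")
	fmt.Println(r.Name)
}
```

Callers depend only on the `RoomRepository` interface, so swapping the in-memory store for a persistent one is a local change.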

3.4.2 SmartVase

The SmartVase is a smart plant pot capable of monitoring the status of the plant it carries and performing some actions to maintain the optimal health status of the plant. Data collected by the SmartVase are then sent to the closest Fog Node for further analysis and proper storage.

The status of the plant is devised by considering the following properties: the temperature perceived by the plant, the moisture level of the soil in the pot and the light intensity perceived by the plant.

Depending on the values read by the pot, actions can be taken to achieve a better health status of the plant, for example watering the soil. The SmartVase prototype is equipped with a water pump that is turned on when the soil is considered dry. Furthermore, the user can trigger watering manually from the SmartVase directly.

The pot is realized by using a MicroBit as controller, connected to several sensors and actuators. The thermometer and light sensor are provided by the MicroBit board itself, while the soil moisture sensor and the water pump need to be realized separately.

Thermometer The thermometer is provided by the MicroBit, and it measures the temperature perceived by the board. Temperature values range between 0 and 255, in Celsius.

Light level sensor The light level sensor provides a way to check how bright or dark the environment is. The value ranges between 0 and 255, meaning respectively darkness and bright light.

Soil moisture sensor The soil moisture sensor checks the moisture level of the pot’s soil using two pieces of metal stuck in the soil and connected to the MicroBit. The soil itself has some electrical resistance, which depends on the amount of water and nutrients

in it, acting as a variable resistor in an electronic circuit: the more water there is, the lower the soil’s electrical resistance. To turn on the sensing, one of the two connected pins is driven high and the value is read on the other pin.

Figure 3.4.2: Soil moisture sensor schema

Water pump The water pump is used to water the soil on-demand. The small 3V pump is immersed into a water tank placed on the SmartVase and connected to the MicroBit.

Figure 3.4.3: Water pump schema

The current status of the plant is devised by reading raw sensor data. Depending on the status of the plant, some actions can be performed, such as showing a message on the display when there is not enough water in the water container, or triggering the watering of the pot when the moisture level is too low. The threshold on the soil moisture level can be dynamically updated.
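The SmartVase firmware is written in C++, but the reading-to-decision logic can be sketched in a few lines of Go. The 10-bit raw range and the linear mapping below are assumptions for illustration, not the prototype's actual calibration:

```go
package main

import "fmt"

// rawToPercent maps a raw ADC reading from the moisture probe to a percentage.
// The 0-1023 range assumes a 10-bit analog input; the linear mapping from
// resistance to moisture is an illustrative assumption, not a calibration.
func rawToPercent(raw int) int {
	if raw < 0 {
		raw = 0
	}
	if raw > 1023 {
		raw = 1023
	}
	return raw * 100 / 1023
}

// needsWatering applies the (dynamically updatable) soil moisture threshold.
func needsWatering(moisturePercent, threshold int) bool {
	return moisturePercent < threshold
}

func main() {
	m := rawToPercent(210)
	fmt.Println(m, needsWatering(m, 40))
}
```

Keeping the threshold as a plain parameter is what allows it to be updated at runtime, e.g. through the writable moisture characteristic described below.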

As SmartVases need to be powered by batteries, a careful design of the device is needed. To reduce power consumption, each SmartVase is connected to a SmartVase driver using a Bluetooth Low Energy (BLE) connection: SmartVases implement a BLE peripheral, while the Fog Node implements a BLE central device. The SmartVase exposes a simple RESTful interface to enable manual watering of the pot in case on-demand watering is needed.

The device starts by setting up all the needed BLE services, required to publish data to the connected central device. Each service defines its own specific BLE Service and Characteristic to publish the data of interest. It is crucial to use unique UUIDs to avoid conflicts with other services.

BLE services allow the central device to read and write their values. In particular, the services publishing the temperature and light level perceived by the vase allow only reading the value, the watering service allows only writing (triggering the watering), and the moisture service allows both reading and writing, to provide a way to update the preferred target moisture level. Table 3.4.1 shows all the exposed services along with their supported operations.

Service Characteristic Properties

TemperatureService temperature NOTIFY

MoistureService moisture NOTIFY, WRITE

LightService light NOTIFY

WateringService watering WRITE

Table 3.4.1: SmartVase BLE services

The NOTIFY property lets BLE central devices be notified of new values published by the characteristic, while the WRITE property allows the central device to write a buffer of bytes to the peripheral. By combining these properties, BLE provides bidirectional communication between the Fog Node and the SmartVase.
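The contents of Table 3.4.1 can be modeled in Go as follows; this is an illustrative sketch (the type and field names are ours, not the firmware's), useful for checking which characteristics a central device may write to:

```go
package main

import "fmt"

// Property is a simplified model of BLE characteristic property flags.
type Property uint8

const (
	Notify Property = 1 << iota
	Write
)

// Characteristic models one entry of Table 3.4.1.
type Characteristic struct {
	Service string
	Name    string
	Props   Property
}

// smartVaseServices mirrors the services exposed by a SmartVase.
var smartVaseServices = []Characteristic{
	{"TemperatureService", "temperature", Notify},
	{"MoistureService", "moisture", Notify | Write},
	{"LightService", "light", Notify},
	{"WateringService", "watering", Write},
}

// Writable reports whether a central device may write to the characteristic.
func (c Characteristic) Writable() bool { return c.Props&Write != 0 }

func main() {
	for _, c := range smartVaseServices {
		fmt.Printf("%s/%s writable=%v\n", c.Service, c.Name, c.Writable())
	}
}
```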

3.4.3

Fog Node

The Fog Node is a software tool that manages connections with Giò-compliant devices. The tool has been designed to run on a host close to the devices, in order to guarantee stable connections and context awareness of data. After a successful connection to a device, the Fog Node fetches all available data that the device produces and delivers them to clients.


The Fog Node supports several communication mechanisms so as to provide a generic framework compatible with different kinds of devices. Figure 3.4.4 shows the interaction between the involved components.

Figure 3.4.4: Device registration and data flow

After being connected to a device, the Fog Node fetches the available BLE services and characteristics and prepares them to be notified to clients. The mechanism used to deliver information to clients is a web publish/subscribe pattern: clients register a webhook with the Fog Node in order to be notified when a device produces new data. This mechanism provides looser coupling between the Fog Node and the other components.
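A minimal Go sketch of such a webhook registry follows; the Hub type and its methods are illustrative stand-ins for the Fog Node implementation, not its actual code. Unreachable webhooks are dropped on notification failure, as the Fog Node prototype does:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"sync"
)

// Hub is a minimal sketch of the Fog Node webhook registry.
type Hub struct {
	mu        sync.Mutex
	callbacks map[string]string // callback UUID -> webhook URL
}

func NewHub() *Hub { return &Hub{callbacks: map[string]string{}} }

// Register stores a client webhook under the given UUID.
func (h *Hub) Register(uuid, url string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.callbacks[uuid] = url
}

// Notify POSTs a reading to every registered webhook; webhooks that cannot
// be called correctly are removed, mirroring the Fog Node behaviour.
func (h *Hub) Notify(reading []byte) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for uuid, url := range h.callbacks {
		resp, err := http.Post(url, "application/json", bytes.NewReader(reading))
		if err != nil {
			delete(h.callbacks, uuid) // unreachable: drop the webhook
			continue
		}
		resp.Body.Close()
		if resp.StatusCode >= 300 {
			delete(h.callbacks, uuid)
		}
	}
}

func main() {
	h := NewHub()
	h.Register("cb-1", "http://127.0.0.1:1/unreachable")
	h.Notify([]byte(`{"moisture":42}`))
	fmt.Println("remaining callbacks:", len(h.callbacks))
}
```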

The Fog Node is made of two main components: a transport manager and a RESTful interface.

The transport manager is responsible for setting up all the available communication mechanisms used to communicate with devices - e.g., BLE, WiFi, ZigBee - providing mechanisms for reading and writing data. In this way, the software can be easily extended with other mechanisms. Currently, only the BLE transport is implemented, which is sufficient for the prototype.

The BLE transport scans for devices with a period of 10 seconds. When a BLE peripheral is discovered, it is analyzed to identify its type. If the device is a supported one, a connection is established and data are periodically read through the BLE services and characteristics exposed by the device. For the aims of the prototype, only MicroBits are allowed to connect to the Fog Node.

After connection, BLE characteristics are retrieved from the device. Depending on the operations supported by each BLE characteristic, different actions are taken. If the BLE characteristic has the NOTIFY property, a callback is registered for listening to notifications from that characteristic. If, instead, the BLE characteristic has the WRITE property, the Fog Node waits for write requests to that specific characteristic. Writing is used to trigger actions and update properties on the device.

The RESTful interface exposes endpoints for interacting with Fog Node. Table 3.4.2 briefly shows all available endpoints.

Webhook callbacks can be set and removed with proper HTTP calls to this API. If Fog Node is not able to call a webhook correctly, the webhook is removed.

The Fog Node exposes endpoints for getting information about devices and triggering actions on them. Actions are triggered by indicating their name: the Fog Node retrieves the BLE characteristic UUID related to the requested action and writes to the BLE characteristic as described before.

Method Endpoint Description

POST /callbacks creates a new callback for data notification

DELETE /callbacks/{callbackUUID} deletes a callback given its UUID

GET /devices fetches all connected devices

GET /devices/{deviceID} gets information about a connected device

POST /devices/{deviceID}/actions/{actionName} triggers an action on the selected device

Table 3.4.2: Fog Node RESTful API

3.4.4

Device Driver

The Device Driver is a microservice that subscribes to the local Fog Node to get notified whenever a device connected to it produces new data. This service decides which devices are suitable for the Giò Plants system. When the Fog Node notifies new data from a device, the Device Driver filters and transforms the data based on its registered device specifications, and then sends the result to the Devices microservice for proper storage. The Device Driver is associated to a Room, and each device it handles is registered within its Room object: when the Device Driver receives data from a device, it first registers its room and the device with the Devices microservice, and then forwards the received data to it for storage.

The main goal of the Device Driver is to listen for data notifications from the Fog Node with which it is registered, using the webhook callback mechanism. The webhook is implemented as an endpoint of the RESTful API provided by the microservice, and its URL is built from the actual IP address of the Device Driver service. The Device Driver starts by registering its webhook with the Fog Node deployed on the same host, trying to do so a few times. If the callback cannot be registered, the Device Driver stops; otherwise, the service is started up. A heartbeat mechanism handles disconnections by periodically testing the connection between the Device Driver and the Fog Node, trying to keep the callback registered.

The Device Driver allows clients to send action requests to devices. The device is identified by its physical address (MAC) and the action is identified by its BLE Characteristic UUID.

Table 3.4.3 describes the RESTful API provided by the Device Driver. A richer API could be designed according to the Fog Node callback capabilities to provide a finer-grained callback mechanism, but this design is sufficient for the prototype.

Method Endpoint Description

POST /callbacks/readings callback for device readings

GET /devices returns all devices connected to the local Fog Node

POST /devices/{deviceID}/actions/{actionName} triggers an action on a device

Table 3.4.3: Device Driver RESTful API

3.4.5

Devices Microservice

The Devices Microservice is designed to store device information and produced data, and to provide those data to the rest of the Giò Plants platform. It provides a RESTful API that allows the Device Driver to register rooms and devices and to store their data. Each device is associated with a unique identifier (UUID).

If devices, identified by their name and MAC address, are registered multiple times, the same UUID is given as response. This preserves the history of the device and handles disconnections from the system.
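A minimal sketch of this stable-UUID behaviour is shown below; the Registry type and the sequential identifier scheme are illustrative assumptions (the microservice would use a real UUID generator and persistent storage):

```go
package main

import (
	"fmt"
	"sync"
)

// Registry sketches how the Devices microservice can hand out a stable UUID
// per (name, MAC) pair, so re-registrations of the same device preserve its
// history. newUUID is a stand-in for a real UUID generator.
type Registry struct {
	mu   sync.Mutex
	ids  map[string]string // "name|mac" -> UUID
	next int
}

func NewRegistry() *Registry { return &Registry{ids: map[string]string{}} }

func (r *Registry) newUUID() string {
	r.next++
	return fmt.Sprintf("dev-%04d", r.next)
}

// Register returns the UUID for a device, creating one on first registration
// and returning the same UUID on every later registration.
func (r *Registry) Register(name, mac string) string {
	r.mu.Lock()
	defer r.mu.Unlock()
	key := name + "|" + mac
	if id, ok := r.ids[key]; ok {
		return id
	}
	id := r.newUUID()
	r.ids[key] = id
	return id
}

func main() {
	r := NewRegistry()
	a := r.Register("vase-1", "C0:FF:EE:00:00:01")
	b := r.Register("vase-1", "C0:FF:EE:00:00:01") // same device reconnecting
	fmt.Println("stable:", a == b)
}
```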

Table 3.4.4 shows all available endpoints provided by the RESTful API.

Method Endpoint Description

GET /rooms gets information about all registered rooms

POST /rooms creates a new room

GET /rooms/{roomID} gets information about a room

GET /rooms/{roomID}/devices gets information about all devices in a room

POST /rooms/{roomID}/devices creates a new device in a room

GET /rooms/{roomID}/devices/{deviceID} gets information about a device in a room

GET /rooms/{roomID}/devices/{deviceID}/readings gets all readings produced by a device

POST /rooms/{roomID}/devices/{deviceID}/readings creates a new reading for a device

POST /rooms/{roomID}/devices/{deviceID}/actions/{actionName} triggers an action on a device

Table 3.4.4: Devices Microservice RESTful API

3.4.6

API Gateway

The API Gateway component is a microservice that provides an entry point for the whole Giò Plants platform. It exposes all the functionalities of the system to clients and the Frontend microservice.

The RESTful API exposed by the API Gateway is shown in Table 3.4.5. It is very similar to the Devices Microservice’s RESTful API, as it is the main provider of features in the current application design.

Method Endpoint Description

GET /rooms gets information about all registered rooms

GET /rooms/{roomID} gets information about a room

GET /rooms/{roomID}/devices gets information about all devices in a room

GET /rooms/{roomID}/devices/{deviceID} gets information about a device in a room

GET /rooms/{roomID}/devices/{deviceID}/readings gets all readings produced by a device

POST /rooms/{roomID}/devices/{deviceID}/actions/{actionName} triggers an action on a device

Table 3.4.5: API Gateway RESTful API

3.4.7

Frontend

The Frontend is a microservice that provides a very simple, yet functional, web User Interface (UI) allowing users to interact with the Giò Plants platform. This service interacts with the API Gateway microservice, fetching data and triggering actions on devices. Those data are then properly formatted and displayed to the user.

It provides users with a set of pages, implemented by the RESTful API listed in Table 3.4.6.

Method Endpoint Description

GET /rooms lists all available rooms

GET /rooms/{roomID} gets information about a room

GET /rooms/{roomID}/devices lists all available devices in a room

GET /rooms/{roomID}/devices/{deviceID} gets information about a device in a room

POST /rooms/{roomID}/devices/{deviceID}/actions/{actionName} triggers an action on a device

Table 3.4.6: Frontend RESTful API


Chapter 4

FogLute

This chapter presents the FogLute platform, describing its design and implementation process. As already done for Giò Plants, this chapter comprises the functional and technical documentation produced during the development phase. Section 4.1 presents the FogLute platform, providing a high-level view of the system and its main goals. Section 4.2 presents the model of the data representing the domain of interest. Then, section 4.3 describes in depth the overall architecture of the system, presenting each of the components and covering their essential design, interfaces, and implementation details. Finally, section 4.4 presents the tests performed on the system.

Interactions between components are described using Unified Modeling Language (UML) diagrams and all the code snippets provided are written in the Go language.

4.1

Overview

FogLute is a multi-service application orchestrator able to manage application deployment over a dynamic Fog infrastructure. It takes as input a description of a multi-service application along with its requirements on hardware, bandwidth, latency, IoT, and security. For each application it manages, it determines the best context-aware placement that satisfies all the requirements and performs the deployment on the available underlying infrastructure.

FogLute is built on top of Kubernetes, which manages containerized workloads and service deployment as well as the infrastructure. FogLute transforms cluster node information into an infrastructure specification and uses it whenever an application has to be deployed.

FogLute is able to observe infrastructure status and take actions, if needed, for adjusting active deployments. For the aims of the prototype, all applications are first stopped and then deployed again on the new infrastructure.

4.2

Data model

FogLute employs various relevant entities to achieve its goals. Internally, FogLute employs the EdgeUsher tool to devise possible placements for application services, so their data models are very similar. Structures are presented as Go structs along with the JSON name of each field.

Service A Service is a description of a single component of a multi-service application. It is identified by an ID and contains all its QoS constraints. TProc is the service's average processing time (expressed in ms), HWReqs is the hardware capacity required to deploy the service (e.g. the available RAM), IoTReqs and SecReqs are respectively the lists of IoT devices and security policies required by the service, and NodeName is the name of the node on which the service must be placed. Images is an array containing all the Docker images to be used by the service. Each image object provides the Docker image name, the ports to be used, environment variables, and whether the container should be run with privileged authorization.

type Service struct {
    Id       string   `json:"id"`
    TProc    int      `json:"t_proc"`
    HWReqs   int      `json:"hw_reqs"`
    IoTReqs  []string `json:"iot_reqs"`
    SecReqs  []string `json:"sec_reqs"`
    Images   []Image  `json:"images"`
    NodeName string   `json:"node_name"`
}

Listing 4.1: Service structure


Flow A Flow is a description of a connection between two application Services, specifying requirements on network bandwidth. Src and Dst indicate respectively the source and destination service, and Bandwidth indicates the network bandwidth the pair of services needs to communicate properly.

type Flow struct {
    Src       string `json:"src"`
    Dst       string `json:"dst"`
    Bandwidth int    `json:"bandwidth"`
}

Listing 4.2: Flow structure

Node A Node describes an available host of the underlying infrastructure, on which application services can be placed. A Node is identified by an ID and is given a unique name. Other information is stored for logging purposes, e.g. IP address and geographical location. A Node can have multiple profiles depending on its behavior on the network.

type Node struct {
    ID       string        `json:"id"`
    Name     string        `json:"name"`
    Address  string        `json:"address"`
    Location Location      `json:"location"`
    Profiles []NodeProfile `json:"profiles"`
}

Listing 4.3: Node structure

Node Profile A NodeProfile describes a specific set of capabilities that a node can provide. HWCaps is the available hardware capacity of the node, IoTCaps is the list of IoT devices that the node can utilize, and SecCaps is the list of the security capabilities it features. Probability indicates the likelihood that the node actually provides those capabilities.

type NodeProfile struct {
    Probability float64  `json:"probability"`
    HWCaps      int64    `json:"hw_caps"`
    IoTCaps     []string `json:"iot_caps"`
    SecCaps     []string `json:"sec_caps"`
}

Listing 4.4: NodeProfile structure

Link A Link describes a (either point-to-point or end-to-end) connection between two Nodes. Latency is the latency experienced over the link (in ms) and Bandwidth is the transmission capacity it offers (in Mbps).

type Link struct {
    Probability float64 `json:"probability"`
    Src         string  `json:"src"`
    Dst         string  `json:"dst"`
    Latency     int     `json:"latency"`
    Bandwidth   int     `json:"bandwidth"`
}

Listing 4.5: Link structure

Application An Application describes a multi-service application. Each Application has a unique identifier and a name. Services is the list of services that compose the application, Flows is the list of logical connections between services, and MaxLatencies lists the service chains of the application together with the maximum latency each chain can tolerate to communicate correctly.

type Application struct {
    ID           string                  `json:"id"`
    Name         string                  `json:"name"`
    Services     []Service               `json:"services"`
    Flows        []Flow                  `json:"flows"`
    MaxLatencies []MaxLatencyDescription `json:"max_latency"`
}

Listing 4.6: Application structure

Infrastructure An Infrastructure describes the current set of computational resources available for deployment. It is represented as a collection of Nodes and Links forming a graph.

type Infrastructure struct {
    Nodes []Node `json:"nodes"`
    Links []Link `json:"links"`
}

Listing 4.7: Infrastructure structure

4.3

Architecture and Implementation

FogLute is realized as a standalone software tool written in Go that has to be run on the Kubernetes master node of the managed cluster, so that it can interact properly with the cluster.

FogLute is made of several components:

Deployer stores information about applications and prepares new deployments.

Placement Analyzer generates a placement for a given application.

Node Watcher listens for infrastructure changes, notifying the Deployer so that proper actions can be taken.

REST API provides a RESTful API for clients.

The UML Component diagram shown in Figure 4.3.1 describes the overall structure of the FogLute system and interfaces between its components.

The application deployment process begins when an application is submitted to the system. The REST API component receives a JSON description of the application, constructs the respective Application structure and forwards it to the Deployer component. The Deployer component checks whether the application is already present: in this case, no action is performed. Otherwise, the Deployer component fetches the available nodes of the underlying infrastructure from the Node Watcher component and requests a placement for the application by calling the Placement Analyzer component. If such a placement is generated, the Deployer component begins the actual deployment by generating proper Kubernetes Objects and applying them to the underlying cluster. Otherwise, an error is reported. Figure 4.3.2 provides a graphical representation of the entire deployment process.


Figure 4.3.1: FogLute Component diagram

Applications managed by FogLute should follow the rules and best practices for proper working with Kubernetes. As an example, intra-service communication should be obtained by using the Kubernetes DNS to further decouple services.
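As an illustration of this best practice, a service can address another one via the standard Kubernetes DNS naming scheme instead of hard-coding Pod IPs; the helper below is a sketch with assumed service names, not FogLute or Giò Plants code:

```go
package main

import "fmt"

// serviceURL builds the cluster-internal URL of a deployed service using the
// Kubernetes DNS naming scheme (<service>.<namespace>.svc.<cluster-domain>).
// The function and its arguments are illustrative assumptions.
func serviceURL(service, namespace string, port int) string {
	return fmt.Sprintf("http://%s.%s.svc.cluster.local:%d", service, namespace, port)
}

func main() {
	// e.g. a Frontend pod could reach the API Gateway without knowing Pod IPs:
	fmt.Println(serviceURL("api-gateway", "default", 8080))
	// → http://api-gateway.default.svc.cluster.local:8080
}
```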

The following sections describe each of the FogLute components analyzing their structures and design decisions.

4.3.1

Deployer

The Deployer component is responsible for storing information about the applications deployed by FogLute, managing their deployment and removal from the system.

The main goal of the Deployer component is to set up and start the deployment process of applications managed by FogLute. This is realized by mapping operations on applications into a set of Kubernetes Objects and updates on them. FogLute interacts directly with Kubernetes utilizing the official Kubernetes Go client.

The following sections describe in depth the main parts of the deployment and withdrawal processes of applications.


Figure 4.3.2: Deployment of a new application

4.3.1.1

Deployment

Deploying applications is the main goal of the Deployer component. After receiving a new application to deploy, the Deployer begins the deployment by calling the deploy method. Listing 4.8 shows the code of the method, without the parts related to debugging.

 1 func (manager *Manager) deploy(application *model.Application) (*model.Placement, []error) {
 2     currentInfrastructure, err := manager.getInfrastructure()
 3     if err != nil {
 4         return nil, []error{err}
 5     }
 6
 7     placements, err := (*manager.analyzer).GetDeployment(Normal, application, currentInfrastructure)
 8     if err != nil {
 9         return nil, []error{err}
10     }
11
12     best, err := pickBestPlacement(placements)
13     if err != nil {
14         return nil, []error{fmt.Errorf("cannot devise a placement for app %s: %s",
15             application.ID,
16             err)}
17     }
18
19     deployErrors := manager.performPlacement(application, currentInfrastructure, best)
20
21     if len(deployErrors) > 0 {
22         return best, deployErrors
23     }
24
25     return best, nil
26 }

Listing 4.8: Deploy procedure

At line 2, the Infrastructure description is obtained by fetching the available nodes from the Node Watcher component and connecting them in a complete graph. This approximation fits well the abstraction provided by Kubernetes and suffices for the prototype; more realistic descriptions can be devised by monitoring network performance and the cluster topology.

Then, feasible placements are devised by the Placement Analyzer, as shown at line 7. Among all the feasible placements, the best one is picked by choosing the placement with the highest probability, as shown at line 12. If more than one placement has the highest probability, one of them is selected at random.

Finally, at line 19, the Deployer applies the placement to the infrastructure. The performPlacement method creates the required Kubernetes Objects from the application and applies them to the cluster. Listing 4.9 shows the code for deployment.


4.3.1.2

Objects generation

Each service is transformed into a Deployment Object. Deployments represent a set of multiple, identical Pods with no unique identities. A Deployment may run multiple replicas of a service, and Kubernetes automatically replaces any instance that fails or becomes unresponsive. Using Deployments rather than bare Pods makes FogLute more powerful and leaves room for future work on tuning and optimization.

FogLute may create other Kubernetes Objects in addition to Deployments, depending on the application specification. Those parameters are described by the Images structure. For each image specified, it allows to indicate:

• Name: defines the name of the Docker image used for the service,

• Local: indicates whether the Docker image should be pulled from a remote repository or found locally,

• Env: defines a dictionary of environment variables that need to be set for the Pod,

• Ports: defines a list of port mappings used for exposing the service to external applications,

• Privileged: indicates whether the container should be run with "privileged" capabilities, which gives the container nearly the same access to the host as processes running outside containers.

If a port mapping is specified by the service, a new Kubernetes Service Object is created. Kubernetes can create several types of Services; for the objectives of the prototype, a port mapping generates a LoadBalancer Service, which allows external connections to the service.

Furthermore, a service can be explicitly constrained to be placed on a specific node by providing the node name in the specification.
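This port-mapping rule can be sketched with simplified stand-in types (the real implementation builds corev1.Service objects through the Kubernetes Go client; names and fields here are illustrative). Note that, consistently with Listing 4.10, a port with Expose greater than zero is the one that yields a Service:

```go
package main

import "fmt"

// PortMapping and KubeService are simplified stand-ins for the prototype's
// Image port specification and for a Kubernetes Service object.
type PortMapping struct {
	Name   string
	Expose int // external port; 0 means "not exposed"
	Target int // container port
}

type KubeService struct {
	Name string
	Type string
	Port int
}

// servicesForPorts sketches the rule described above: every exposed port
// mapping yields a LoadBalancer Service so external clients can reach it.
func servicesForPorts(ports []PortMapping) []KubeService {
	var out []KubeService
	for _, p := range ports {
		if p.Expose > 0 {
			out = append(out, KubeService{Name: p.Name, Type: "LoadBalancer", Port: p.Expose})
		}
	}
	return out
}

func main() {
	svcs := servicesForPorts([]PortMapping{
		{Name: "api-gateway", Expose: 8080, Target: 8080},
		{Name: "internal-metrics", Expose: 0, Target: 9090}, // not exposed
	})
	fmt.Println(len(svcs), svcs[0].Type) // 1 LoadBalancer
}
```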

func (manager *Manager) performPlacement(application *model.Application, infrastructure *model.Infrastructure, placement *model.Placement) []error {
    errors := make([]error, 0)

    deploymentsClient := manager.clientset.AppsV1().Deployments(apiv1.NamespaceDefault)
    servicesClient := manager.clientset.CoreV1().Services(apiv1.NamespaceDefault)

    for _, assignment := range placement.Assignments {
        deployment, services, err := manager.createDeploymentFromAssignment(application, infrastructure, &assignment)
        if err != nil {
            errors = append(errors, err)
            continue
        }

        _, err = deploymentsClient.Create(deployment)
        if err != nil {
            errors = append(errors, err)
            continue
        } else {
            log.Printf("Deployment %s created.\n", assignment.ServiceID)
        }

        for _, s := range services {
            serviceResult, err := servicesClient.Create(s)
            if err != nil {
                errors = append(errors, err)
                continue
            } else {
                log.Printf("Service %s created. Ports: %v\n", s.Name, serviceResult.Spec.Ports)
            }
        }
    }

    if len(errors) > 0 {
        return errors
    }

    return nil
}

Listing 4.9: Actual deployment of an application on Kubernetes cluster

4.3.1.3

Withdraw

The withdrawal process of an application is realized by performing delete operations on the Kubernetes Objects created during the deployment of the application.

The Kubernetes Go client permits deleting objects by name. Thus, the names of the generated Deployments and Services are retrieved, and the objects are removed from the cluster. Listing 4.10 shows the code for removing an application from the cluster.

func (manager *Manager) delete(application *model.Application) []error {
    deploymentsClient := manager.clientset.AppsV1().Deployments(apiv1.NamespaceDefault)
    serviceClient := manager.clientset.CoreV1().Services(apiv1.NamespaceDefault)

    errors := make([]error, 0)

    for _, s := range application.Services {
        deploymentName := fmt.Sprintf("%s-%s", application.ID, s.Id)
        deletePolicy := metav1.DeletePropagationForeground

        err := deploymentsClient.Delete(deploymentName, &metav1.DeleteOptions{
            PropagationPolicy: &deletePolicy,
        })
        if err != nil {
            log.Printf("Cannot delete Deployment %s: %s\n", deploymentName, err)
            errors = append(errors, err)
        } else {
            log.Printf("Deployment %s deleted.\n", s.Id)
        }

        for _, image := range s.Images {
            for _, port := range image.Ports {
                if port.Expose > 0 {
                    // Remove the associated service
                    serviceName := port.Name

                    log.Printf("Deleting Service %s ...\n", serviceName)

                    if err := serviceClient.Delete(serviceName, &metav1.DeleteOptions{
                        PropagationPolicy: &deletePolicy,
                    }); err != nil {
                        log.Printf("Cannot delete Service %s: %s\n", serviceName, err)
                        errors = append(errors, err)
                    } else {
                        log.Printf("Service %s deleted.\n", serviceName)
                    }
                }
            }
        }
    }

    if len(errors) > 0 {
        return errors
    }

    return nil
}

Listing 4.10: Withdrawal of an application

4.3.1.4

Nodes information

In order to get a description that can be profitably exploited by the Placement Analyzer, Kubernetes node descriptions are processed and relevant data are extracted from them. Properties are stored using labels associated with the node.

When a new application deployment is requested, those labels are retrieved and parsed into proper property values, such as IoT and security capabilities. This mechanism allows nodes to update their description and permits a finer analysis by the Placement Analyzer. Listing 4.11 shows an example of label parsing for retrieving IoT capabilities.

if iotCaps, exists := node.Labels[config.IotLabel]; exists {
    n.Profiles[0].IoTCaps = strings.Split(iotCaps, ",")
} else {
    n.Profiles[0].IoTCaps = make([]string, 0)
}

Listing 4.11: Label parsing

4.3.2

Placement Analyzer

The Placement Analyzer component is responsible for producing a set of placements for a given application and infrastructure.

A Placement Analyzer implements the interface PlacementAnalyzer which provides only the method getPlacements. This allows the system to be easily extended by simply providing additional implementations of analyzers.

The prototype employs EdgeUsher as a placement analyzer. Being based on Problog, EdgeUsher requires a Problog description of both the application and the infrastructure. Thus, the EdgeUsher placement analyzer starts by transforming the application and infrastructure definitions into proper Problog code. Then, Problog is invoked and its results are parsed to retrieve all the available placements for the services.

In Problog, variables and atoms are distinguished by their first letter: variables start with an uppercase letter, while atoms start with a lowercase one. This restriction makes the transformation process not straightforward.

To avoid incorrect code generation, both the application and the infrastructure are processed so that all names are replaced with symbolic names by means of a symbol table. Each name is associated with a randomly generated alphanumeric identifier, and each occurrence of that name is replaced with the corresponding identifier. Then, Problog code can be safely generated for devising placements.

Since no Problog Go client is available, Problog is run directly as an external process.
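A possible way to realize this is sketched below: the generated code is written to a temporary file and the problog CLI is run on it, capturing stdout for later parsing. The command name, flags, and helper names are assumptions about a local Problog installation, not FogLute's actual code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// problogCmd builds the command used to run Problog on a generated file
// (assumes a "problog" executable on the PATH).
func problogCmd(file string) *exec.Cmd {
	return exec.Command("problog", file)
}

// callProblog writes the generated Problog code to a temporary file, runs
// Problog on it, and returns its standard output for parsing.
func callProblog(code string) (string, error) {
	tmp := filepath.Join(os.TempDir(), "foglute-query.pl")
	if err := os.WriteFile(tmp, []byte(code), 0o600); err != nil {
		return "", err
	}
	defer os.Remove(tmp)

	out, err := problogCmd(tmp).Output()
	if err != nil {
		return "", fmt.Errorf("problog failed: %w", err)
	}
	return string(out), nil
}

func main() {
	if _, err := exec.LookPath("problog"); err != nil {
		fmt.Println("problog not installed; skipping the actual call")
		return
	}
	out, err := callProblog("query(foo).\nfoo.")
	fmt.Println(out, err)
}
```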

After Problog finishes, results are collected and parsed into a list of placements. Those placements must be processed again to replace the identifiers with the respective names. This is realized by retrieving the original name with a reverse lookup of the symbol table.

Finally, corrected placements are provided as a result. Listing 4.12 shows the placement generation process implemented by the EdgeUsher analyzer.

func (eu *EdgeUsher) GetPlacements(mode deployment.Mode, application *model.Application, infrastructure *model.Infrastructure) ([]model.Placement, error) {
    table := NewSymbolTable()

    // Cleanup strings within the objects
    safeApp := cleanApp(application, table)
    safeInfr := cleanInfrastructure(infrastructure, table)

    code := getCode(safeApp, safeInfr, euPath)

    result, err := callProblog(code)
    if err != nil {
        return nil, err
    }

    placements, err := parseResult(result)
    if err != nil {
        return nil, err
    }

    if len(placements) == 1 && placements[0].Probability == 0 {
        return nil, fmt.Errorf("no placements available")
    }

    cleanedPlacements := cleanPlacements(placements, table)

    return cleanedPlacements, nil
}

Listing 4.12: Placement generation

4.3.2.1

EdgeUsher

The EdgeUsher prototype [13] provides a declarative methodology aimed at solving the VNF placement problem in Cloud-Edge scenarios. Written in the Problog language, it exploits probability distributions to model the dynamic behavior of Edge infrastructures, and it ranks the set of valid placements by assessing them against an infrastructure that varies according to such probabilities.

The prototype requires two inputs:

• A description of an application, in the form of VNF chain describing services and QoS requirements,

• A description of the infrastructure, expressing capabilities (hardware, IoT, bandwidth, latency and security) of its components.

Applications and infrastructures are modeled as follows:

Application Applications are described by virtual network service chains indicating the functions that compose the application. Each function is modeled as a Service indicating processing time, hardware capabilities, and IoT and security requirements. Requirements between functions are described by flows, indicating the bandwidth that pairs of functions need to work properly. Finally, the maximum required latency can be indicated for each chain of the application.

Figure 4.3.3: An example instance of a VNF chain of a CCTV system [13]

chain(ucdavis_cctv, [cctv_driver, feature_extr, lightweight_analytics, alarm_driver, wan_optimiser, storage, video_analytics]).

service(cctv_driver, 2, 1, [video1], or(anti_tampering, access_control)).
service(feature_extr, 5, 3, [], and(access_control, or(obfuscated_storage, encrypted_storage))).
...
flow(cctv_driver, feature_extr, 15).
flow(feature_extr, lightweight_analytics, 8).
...
maxLatency([cctv_driver, feature_extr, lightweight_analytics, alarm_driver], 150).

Listing 4.13: EdgeUsher Application description example

Infrastructure Infrastructures are modeled as graphs made of nodes and links between them. Each node describes the hardware, IoT and security capabilities that the node can offer, while links describe the capabilities of the connection between nodes in the form of experienced latency and available bandwidth.

0.8::node(parkingServices, 1, [video1], [authentication, anti_tampering, wireless_security, obfuscated_storage]).
...
0.98::link(parkingServices, westEntry, 15, 70).
0.98::link(parkingServices, lifeSciences, 15, 70).
0.98::link(parkingServices, mannLab, 15, 70).

Listing 4.14: EdgeUsher infrastructure description

Figure 4.3.3 presents an example instance of a VNF chain for a CCTV system. Based on these descriptions, EdgeUsher outputs a ranking of all eligible placements of the VNF chains, together with routing paths for the related traffic flows, over the available Edge-Cloud infrastructure. The ranking considers how well each placement satisfies the chain requirements as the infrastructure setting varies.


4.3.3 Node Watcher

The Node Watcher component stores information about the available hosts of the underlying Kubernetes cluster. It periodically updates its node list to detect changes to the infrastructure. If the infrastructure changes, it notifies the Deployer component, which takes action to adapt the active deployments to the actual cluster setting. Listing 4.15 shows the code of the node update procedure.

Host information is retrieved in two ways: by explicitly fetching the available nodes of the cluster, and by observing host changes.

When the Node Watcher is created, it registers a callback observing node availability. This mechanism allows FogLute to be notified as soon as possible when a host joins or leaves the cluster, and to add or remove that host accordingly.

Nodes can update labels in their Kubernetes Node description to store placement analyzer information. Such an update cannot be observed through the previous mechanism, which notifies the Node Watcher of cluster membership changes only. Thus, the Node Watcher also periodically fetches the information of all nodes and updates each node's description. Kubernetes reports both ready and not-ready nodes currently joined to the cluster, so a filtering phase is needed to discard nodes that are not available for scheduling, e.g. nodes reporting a NotReady status or tainted nodes. For instance, the master node is reported as ready, but it is generally not used for deploying Pods for security reasons and is therefore usually tainted.

As the node list may be accessed by multiple goroutines during execution, a simple mutex-based synchronization mechanism is adopted to guarantee mutually exclusive, safe access.

func (nw *NodeWatcher) updateNodes() {
	newNodes, err := nw.fetchNodes()
	if err != nil {
		log.Printf("Cannot get nodes: %s\n", err)
		return // keep the previous node list on a failed fetch
	}

	nw.nodelistMutex.Lock()
	defer nw.nodelistMutex.Unlock()

	nw.nodelist = make([]apiv1.Node, 0)

	for _, node := range newNodes {
		if isNodeAvailableForScheduling(&node) {
			nw.nodelist = append(nw.nodelist, node)
		} else {
			log.Printf("Cannot use (%s) %s for job scheduling\n", node.UID, node.Name)
		}
	}

	log.Printf("Node list updated.")
}

Listing 4.15: Node updater

4.3.4 REST API

The REST API component exposes a RESTful interface and represents the entry point for interacting with the FogLute platform.

Its endpoints, shown in Table 4.3.1, provide all the basic operations on applications. Each request and response is represented as a JSON object for interoperability purposes.

Application creation and withdrawal operations work asynchronously: the requested operation is scheduled and performed in the background by FogLute, so as to answer the client as soon as possible and avoid timeout errors.

Method   Endpoint                          Description
GET      /applications                     gets information about all active applications
POST     /applications                     requests the deployment of a new application
GET      /applications/{applicationID}     gets information about an application
DELETE   /applications/{applicationID}     requests the withdrawal of an application

Table 4.3.1: FogLute REST API endpoints


4.4 Testing

As the final part of the FogLute development process, we present the results of testing the platform. Tests are run via Postman collections and automated procedures, on the testbed set up in the Computer Science Department: a Kubernetes cluster made of five nodes.

The focus is on system testing, using use cases for verification purposes, i.e. to check whether the software meets all the functional and non-functional requirements defined during the design phase. FogLute presents two main use cases: the addition and the withdrawal of an application. For these tests, the application described in Appendix A has been used.

Addition The expected behavior of adding an application is to produce all the Kubernetes objects from the given application and infrastructure descriptions, apply them to the cluster, and then check the health status of all the components.

List applications The expected behavior of listing applications is to obtain the list of currently deployed applications.

Withdrawal The expected behavior of withdrawing an application is to remove the selected application from the system, removing all of its Kubernetes objects from the cluster.


Chapter 5

Use case analysis

This chapter briefly presents the validation and testing of the FogLute platform by deploying the Giò Plants system. Section 5.1 presents the actual cluster along with its hardware features; the Giò Plants system is then deployed and tested over a real testbed, showing the actual placement devised by FogLute. Finally, in Section 5.2 the Giò Plants platform is validated by collecting feedback and usage data from the staff of the Computer Science Department.

5.1 Set up

The first step of the process is to set up a Kubernetes cluster. The cluster is created using Raspberry Pi 3 Model B+ computers as workers, plus a desktop machine. The Raspberry Pi is a small single-board computer equipped with an ARM processor, a Linux-based operating system, a wide range of connectivity capabilities and a small power consumption footprint. Table 5.1.1 reports the specifications of the Raspberry Pi model used.

Spec               Raspberry Pi 3 Model B+
CPU                ARM Cortex-A53, 1.4 GHz
RAM size           1 GB SDRAM
Integrated Wi-Fi   2.4 GHz and 5 GHz
Network            10/100 Mbps Ethernet, 802.11n wireless LAN
Bluetooth          4.2, Bluetooth Low Energy (BLE)

Table 5.1.1: Raspberry Pi 3 Model B+ specifications
