SDN-aided Companion Fog Computing: towards seamless container migration in a Fog Computing platform with the ONOS controller

Academic year: 2021
Scuola di Ingegneria
Dipartimento di Ingegneria dell'Informazione

Master's Degree in Computer Engineering: Computer Systems and Networks

Master's Degree Thesis

SDN-aided Companion Fog Computing: towards seamless container migration in a Fog Computing platform with the ONOS controller

Thesis Supervisors:
Prof. Enzo Mingozzi
Prof. Antonio Virdis
Prof. Carlo Puliafito

Candidate:
Christian Sileo (Student ID 533161)

Index

1 Abstract
2 Introduction
3 Background
3.1 Fog Computing
3.1.1 Fog vs Cloud
3.1.2 Mobility support issue
3.2 Software Defined Networking
3.2.1 SDN Overview
3.2.2 SDN Controllers: the comparison
3.2.3 ONOS
4 Architectural reference solution
4.1 Presentations of the entities
4.1.1 Mobile application
4.1.2 Mobile device layer
4.1.3 Fog layer
4.1.4 Cloud layer
4.1.5 Network infrastructure
4.2 Operational scenario
5 Prototype implementation
5.1 Simulation topology: GNS3 and Mininet
5.2 Mobile Node VM
5.3 Fog Node VM
5.3.1 Container networking configuration
5.3.2 Buffering VNF through NFQueue
5.4 Cloud VM
5.4.1 ONOS controller setup
5.4.2 ONOS northbound app: Fog-App
5.5 Demo
6 Tests and conclusions
6.1 UDP test-case
6.2 TCP test-case
6.3 Conclusions

1 Abstract

Mobile Internet of Things (MIoT) devices, because of their limited resources, cannot offer complex and intensive services by exploiting only their own local capabilities. Fog Computing represents a valuable solution to this problem: it extends the Cloud towards the network edge, distributing services close to the end devices. In order to preserve the advantages deriving from this approach, the mobility of the devices must be supported, keeping the services provided by the Fog layer constantly close to them. For this purpose, the container deployed in the Fog, which hosts the service acting as the "companion" of the corresponding application running on the mobile node, may need to be migrated appropriately. The aim of this Thesis is to investigate how a Companion Fog Computing platform can exploit the flexibility of the Software Defined Networking (SDN) paradigm. Specifically, the ONOS (Open Network Operating System) SDN controller has been integrated into the system architecture to enable seamless container migration, simplifying the management of the internal network infrastructure. Specific policies and functional components of the platform have been designed, implemented and validated through an operational prototype, with the objective of pursuing service continuity and container migration transparency.

2 Introduction

The Internet of Things (IoT) represents a constantly growing universe, characterized by a huge number of devices, from the "dumbest" to the "smartest", capable of connecting to the Internet and of implementing, through it, a potentially infinite range of heterogeneous services. These services can be simple data transfers, but in most cases they also involve functionalities such as Big Data gathering through environmental and/or human sensors, data storage and processing, and eventually live intervention through commands issued to dedicated actuators.

The Internet of Things ecosystem is proving crucial for the realization of modern and innovative services, but in most cases the end devices involved are not sufficiently equipped with hardware resources, and so they cannot directly host services of medium-to-high complexity. For this reason, the IoT universe needs the support of the Cloud Computing paradigm, exploiting the huge quantity of resources that a Cloud provider typically offers through a flexible, pay-per-use business model. The main drawback of this approach is that Cloud resources (regardless of the Cloud classification: public, private or even hybrid) are concentrated in a few Data Centers, whose locations cannot be close enough to the great number of parties involved in the services. In most cases the distances between service hosts and data consumers and producers lead to degraded performance, which may be unacceptable for certain types of services. For this reason the Fog Computing paradigm has entered the Cloud scenario as its natural extension towards the network edge, introducing computing, storage and networking resources throughout the whole path between the Data Centers and the IoT end devices. This allows a closer end-to-end proximity and a series of non-negligible advantages, while preserving the pros of the Cloud Computing approach, in particular resource virtualization, dynamicity, elasticity and transparency.

The most important open issue of the Fog Computing paradigm is mobility support: when the end device moves away, it unavoidably increases its distance from the Fog node hosting its "companion" application, thus nullifying the advantage of the paradigm. To avoid this counter-effect, the Fog platform has to be able to migrate the service, so that the service and the mobile device can keep their relative distance within an acceptable threshold over time. Although migrating the service from one node to another supports the mobility of the device, keeping the mobile application as close as possible to its Fog "companion", this is not sufficient to guarantee the transparency of the procedure with respect to the mobile application itself. It would be desirable for the latter to be unaware of the migration, so that it can continue to communicate with the service as if nothing were changing. The aim of this Thesis is to present a complete Fog platform reference architecture that addresses this issue by exploiting the presence of a Software Defined Networking (SDN) controller, and to implement a simplified prototype capable of highlighting the functionalities of the model and validating the initial expectations.

Software Defined Networking can be considered the natural evolution of network management in the Cloud/Fog direction: by centralizing the control plane, separated from the data plane by removing it from the physical network devices, SDN makes network handling as flexible and dynamic as the virtualized infrastructure made available first by the Cloud and then by the Fog Computing paradigm. In the next paragraphs it will be shown how an SDN controller can be integrated in a Fog system architecture to cover an essential support role in the orchestration tasks related to service migration, paying special attention to preserving transparency for the end application.

The Thesis is organized as follows. In Section 3, for the sake of completeness, a more detailed overview of the Fog Computing paradigm will be provided, discussing the advantages that this model brings with respect to the "traditional" Cloud approach, with a focus on the mobility support issue and its use cases, and briefly presenting the state of the art in that direction. Moreover, the novel SDN paradigm will be briefly discussed, in order to clarify its advantages and to highlight its suitability in a dynamic and flexible environment like that of Cloud/Fog computing. Then a concise comparison among the currently most widespread SDN controllers will be presented, considering several aspects such as their architecture, modularity, extensibility, scalability and resilience, with a focus on the ONOS (Open Network Operating System) controller and its main features, in order to justify its choice at the implementation level.

Section 4, instead, will cover the detailed presentation of the architectural reference solution: each component will be analyzed with respect to its role and functionalities, and then the interactions among the modules, needed to support service migration, will be discussed.

The aspects related to the building of the prototype and the simulation will be dealt with in Section 5, starting from the supporting tools exploited to set up the emulated topology and ending with the software components realized to enable the designed protocols and policies.

Lastly, Section 6 will conclude this Thesis by presenting some evaluations of the produced prototype based on several tests (using both UDP and TCP as transport layer protocols), the most interesting findings and the final conclusions.

3 Background

The main purpose of this Thesis is to propose an architectural reference solution capable of ensuring seamless mobility support in a Fog Computing environment, by exploiting the novel SDN model. The aim of this section is to present the analysis behind the design and implementation phases, in order to better define the background scenario in which the proposed ideas are situated, highlighting the exploitability of the solution in real and common use cases.

3.1 Fog Computing

The Fog Computing paradigm was born as the extension of the Cloud towards the end users at the network edge, with the idea of distributing the capabilities of the Cloud Computing model (computing, storage and networking resources and services) throughout the whole path, thus considerably increasing the proximity between service consumer and service provider.

As reported in [1], an accurate survey on the Fog Computing paradigm, this model was proposed in [2] as a direct descendant of the Cloud; hence it inherits all the benefits of the latter, from the dynamic, flexible and transparent architectural model enabled by resource virtualization techniques, up to the business model based on pay-per-use provisioning. The OpenFog Consortium provides a complete definition in [3]: "Fog computing is a horizontal, system-level architecture that distributes computing, storage, control and networking functions closer to the users along a cloud-to-thing continuum." It follows that services may be placed anywhere along the mentioned continuum, on so-called Fog Nodes, which can be any devices with enough computing, storage and networking resources to accomplish the target service.

It is worth noting that the Fog paradigm does not replace the Cloud at all; on the contrary, the two typically cooperate in the same architecture, enabling the deployment of complex and modern services able to exploit the advantages of both the Fog and the Cloud. The overall resulting system is typically hierarchical, with heterogeneous nodes covering specific roles also depending on their position. This hierarchical organization and the proximity to end devices can be considered the main features of the Fog Computing paradigm.


Fig. 3.1 - Fog Computing Organization

3.1.1 Fog vs Cloud

Although the Fog paradigm derives directly from the Cloud, it distinguishes itself from the latter by its capacity to overcome some structural limits and weaknesses of its parent. As discussed in the comparison between Cloud and Fog in [1], the first point of separation is the typical placement of Data Centers in the Cloud model: in the context of the integration between the Cloud and IoT devices, the distance between the parties can become an issue in the provisioning of several services. The issue is particularly evident in public Clouds, characterized by off-premises resources with a very wide coverage area, but it can also affect private and hybrid Clouds, the more their end users are spread over large areas.

It follows that the first metric positively affected by the adoption of the Fog approach is latency: the long communication distance between the parties can make it impossible to develop certain application domains, in which specific, strict timing constraints must necessarily be satisfied in order to provide an "operational" service.


The second element considered in the comparison between the Cloud and the Fog is bandwidth consumption: in a traditional Cloud deployment, the huge quantity of data typically produced by end users is directly addressed to the distant Cloud, leading to large amounts of traffic travelling along the whole network path; with the Fog model, instead, the responsibility of hosting the services is off-loaded to the continuum between the end user and the Cloud, onto the so-called Fog nodes, considerably shortening the network segment involved in the traffic exchange.

Another non-negligible difference regards security and privacy: in most cases the data that the end user asks to process belong to sensitive categories (e.g. personal, health-related data), and sending them to the Cloud over the public, untrusted Internet may represent an issue. By introducing the Fog paradigm, this kind of data can be analyzed locally on a Fog node; even when Cloud processing is necessary, several privacy enforcements, which resource-constrained IoT devices cannot perform, can be added at the Fog layer to reduce the risks involved in transferring the data towards the Cloud. Another Cloud security challenge that the Fog can overcome is the very real possibility that sensitive data, collected in a certain area of the world, needs to be transmitted to a very distant Data Center, placed in a region characterized by completely different legal regulations about privacy.

Context-awareness is another important advantage of the Fog with respect to the Cloud: it enables improved services and optimal resource utilization by exploiting information such as local network conditions and the set of nearby elements that can influence or collaborate in the service provisioning. Obviously, the amount of context shared between the end user and a very distant Data Center is limited compared to that shared with a topologically closer Fog node.

The last aspect with respect to which the Fog model can be considered a solution to a Cloud problem is the possible presence of so-called "hostile environments": the long distance that typically separates Data Centers and end users increases the possibility of encountering critical contexts along the path (characterized by weak network infrastructures) that may considerably worsen performance, even risking the interruption of the service availability.


3.1.2 Mobility support issue

The adoption of the Fog Computing paradigm can lead to a considerable number of key benefits, enabling improved and innovative services. Given the importance of the growing IoT market, the Fog paradigm cannot ignore the need to support these devices in all possible utilization scenarios. One of the branches driving the IoT growth is the Internet of Mobile Things (IoMT), thanks to the non-stop diffusion of heterogeneous mobile devices. From this perspective an important issue may arise: the end user needs the continuity of a service hosted by a certain Fog Node while he/she is moving, but the mobility itself risks limiting the benefits underlying the choice of the Fog Computing model, since whenever the device moves, it unavoidably decreases its proximity to the Fog Node hosting the service. This problem is Fog-specific: in the traditional Cloud paradigm the Cloud service is a priori far away from the end user, and mobility does not worsen the scenario in that direction. Hence a fundamental requirement when designing a Fog Computing platform is the realization of a proper service migration mechanism capable of supporting mobility: it must be possible to move (possibly in a stateful manner) each Fog service from one Fog node to another, with the aim of keeping it close enough to the moving device.
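As a toy illustration of such a migration policy (names and the threshold value are hypothetical, not part of the platform described in this Thesis), a trigger could compare the latency measured between the device and each reachable Fog node, migrating only when the current node is no longer close enough:

```python
# Hypothetical sketch of a migration-trigger policy: keep the service on
# its current Fog node while that node is still within an acceptable
# latency threshold, otherwise migrate to the closest candidate.
# All names and values are illustrative.

LATENCY_THRESHOLD_MS = 20.0  # maximum acceptable device-to-service latency

def choose_fog_node(current_node: str, latencies_ms: dict) -> str:
    """Return the node that should host the service.

    latencies_ms maps each reachable Fog node to the latency measured
    from the mobile device (e.g. via periodic probes). The service stays
    where it is while the current node is still within the threshold,
    so that needless migrations are avoided.
    """
    if latencies_ms.get(current_node, float("inf")) <= LATENCY_THRESHOLD_MS:
        return current_node  # still close enough: no migration
    # Otherwise migrate to the candidate with the lowest latency.
    return min(latencies_ms, key=latencies_ms.get)

# The device moved away: fog1 is now 48 ms away, fog2 only 7 ms.
print(choose_fog_node("fog1", {"fog1": 48.0, "fog2": 7.0, "fog3": 12.0}))  # fog2
```

A real platform would of course feed this decision into the stateful container migration machinery rather than just returning a node name; the sketch only shows the "keep the relative distance within a threshold" idea.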

The Fog paradigm, together with appropriate mobility support, enables the deployment of innovative applications such as mobile Augmented Reality (AR) and Virtual Reality (VR) applications, automotive and automated driving applications (often referred to as IoV, Internet of Vehicles) and smart healthcare applications (e.g. exploiting data collected by wearables and the processing capabilities of Fog nodes), but it can also benefit "more traditional" applications, such as rate-adaptive streaming, with the heavy video-rendering tasks off-loaded from the mobile devices to the typically better-resourced Fog nodes. Moreover, it is important to notice that these kinds of applications would all benefit from the key advantages of adopting the Fog model: lower latency is, in most of the mentioned use cases, an enabling metric for the service; the reduced bandwidth usage in the network is a decisive factor when dealing with Big Data; context-awareness is of primary importance in an IoV context; and security and privacy concerns cannot be ignored when handling personal sensitive data.

However, it is worth highlighting that mobility support is not the only reason behind the need for a service migration mechanism in a Fog platform: although some applications require it natively because of the mobile nature of some of their components (as in the application examples listed previously), in other cases service migration can be a direct requirement derived from the management and orchestration needs of the platform, in order to guarantee a higher level of flexibility and dynamicity in service allocation and provisioning.

In [1] it is possible to find an accurate state-of-the-art analysis of the existing Fog Computing platforms enabled for service migration, but none of them is designed to explicitly support mobility by addressing the migration transparency issue. An ideal mobility support should indeed provide service migration and orchestration mechanisms able to hide their complexity from the final application's perspective: the latter should constantly operate in the same way, as if it were executing in a static scenario, without the need to change the endpoint of its communication channel. The main objective of this Thesis is to address this open issue by designing a complete reference architecture, including Fog, Cloud, but also network management (an SDN controller) components. The design of the solution started from the analysis of the existing Fog architectures briefly described in [1] and from the modification and extension of the Fog platform discussed in [12].

In [5], the Follow Me Fog (FMF) platform is presented: it is characterized by the presence of a Software as a Service (SaaS) server placed on each access point, and hence only the pending jobs are migrated whenever a handover occurs, which is not always necessary. Two other platforms, presented in [6] and [7] and called respectively Follow Me Edge (FME) and Follow Me Cloud (FMC), pay particular attention to content and session migration, highlighting the issue linked to the change of IP address after the service relocation. Two different implementations of the FMC ideas are presented in [8] and [9], the first adopting the Locator/ID Separation Protocol (LISP) and the second an SDN approach based on the NOX controller. The Foglets platform, discussed in [10], adopts instead mobile agents, and the runtime state (at a coarse granularity) has to be captured directly by the application. The platform described in [11] allows the stateful migration of whole Virtual Machines (VMs) by extending the OpenStack platform. This introduces the choice between deploying a service on the Fog Node in a VM or in a container. Although the decision depends on the specific context, containers are typically preferred in Fog environments: while VMs guarantee better isolation and multi-platform software compatibility, containers, by sharing the host system kernel, are more lightweight and perform better. Both the proposals in [12] and [4] are characterized by stateful container migration, with the latter solution based on the Message Queue Telemetry Transport (MQTT) publish/subscribe protocol. The platform architecture presented in [12], in particular, can be considered the starting point for the solution proposed in this Thesis: it introduces the Companion Fog Platform, accurately defining the Cloud, Fog and Mobile node components and their interactions in the policies and protocols supporting the migration.


3.2 Software Defined Networking

The main factor that has guided the rapid spread of Software Defined Networking (SDN) solutions in recent years is, above all, the increased flexibility of the network architecture, which allows administrators to respond quickly and easily to dynamic requirements. This feature matches perfectly the motivations behind the evolution of the traditional computing model in the Cloud direction: the SDN approach can definitively be considered the natural networking counterpart of the Cloud Computing model. This consideration justifies the objective of this Thesis, namely exploiting the novel SDN principles to implement the network functions needed in a Fog Computing platform that aims to address and support service migration transparency.

3.2.1 SDN Overview

This paragraph provides a brief overview of the SDN world, its key benefits, its components and a basic description of how the network management is handled, reporting the insights of chapters 2 and 4 of [18] and also exploiting some explicative figures taken from the same reference.

According to the authors, a keyword that explains the SDN diffusion is "simplification". Networking devices have become more and more complex over the years in order to satisfy the new requirements of network functions, and their management, configuration and integration in compound infrastructures can now be considered very time-consuming tasks.

SDN enables the adoption of simpler, policy-based patterns for configuring network devices, by separating the forwarding and control planes, which in the traditional approach resided on the same physical device. Cost is also an important driver towards SDN: simpler devices were introduced, removing the need for complex (and expensive) hardware components; Operational and Capital Expenses (OPEX and CAPEX) can be reduced by exploiting the SDN agility to dynamically adapt to changing requirements and scenarios; and the effort required of device vendors to implement common functionalities for their own equipment (which also raises the vendor lock-in issue) can be overcome through the open, standardized approaches of SDN. As reported earlier, the most important SDN driver, from the perspective of this Thesis, is its suitability for the Cloud/Fog environment: the spread of Data Centers raised new, important virtualization needs with respect to computing, storage, but also networking resources. SDN can provide "network virtualization" by introducing a higher-level abstraction on top of the physical entities, which allows reaching the emerging Cloud requirements of automation, agility, scalability and multi-tenancy.

As the authors of [18] emphasize, SDN is characterized by five concepts: plane separation, simplified devices, centralized control, network automation and virtualization, and openness. The first aspect regards the separation between the forwarding and control planes. The forwarding plane involves the forwarding functionality, thus the logic and tables that allow the network device to properly choose how to treat incoming packets. The logic and all the algorithms used to program or configure the forwarding plane are instead placed in the control plane.

Traditional devices keep both planes locally, even though they are logically well separated. SDN leaves only the forwarding plane in the network device, which hence becomes very simple, moving the control plane into a centralized controller, where it can take advantage of a global network view. It follows that the controller can implement higher-level policies, providing primitive instructions to the simplified devices so that they can decide and operate quickly on the incoming packets.

SDN introduces several important network abstractions. The first one is the distributed state abstraction, which offers programmers a global network view, allowing them to ignore the individual state of the different entities composing the network; the second one is the forwarding abstraction, which gives the possibility to specify only the desired forwarding behaviours, regardless of the vendor-specific hardware; the last one is the configuration or specification abstraction, which allows specifying the desired outcome of the network without considering the implementation details needed to reach that outcome on the physical network.

The centralized controller provides two interfaces, typically called northbound and southbound: the former is for the applications, the latter for the physical network devices (the most common southbound protocol being OpenFlow). Both interfaces are standardized, documented and not proprietary, allowing devices from different vendors to interoperate. The basic components of an SDN architecture are the SDN devices, the controller and the applications.


Fig. 3.2 – SDN operation overview

The simplified SDN devices host both the functionality to decide on incoming packets (forwarding functionality) and the data, represented by the flows defined by the controller. A flow is a certain set of packets transferred unidirectionally from one endpoint to another, and it is represented on the SDN device as a flow entry in a flow table. This table, constructed by the controller, is made of prioritized entries, each composed of two fields, the match and the action: the former is compared against the incoming packets, and the latter specifies the action to be undertaken in case of a match. If no match is found among the flow entries, the switch can forward the incoming packet to the controller or simply drop it, depending on the OpenFlow version and the switch's configuration.
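The match/action lookup described above can be sketched in a few lines of Python (a deliberately simplified model: real OpenFlow tables match on many header fields and support richer action sets; the field names used here are made up for illustration):

```python
# Minimal model of an OpenFlow-style flow table: prioritized entries,
# each a (match, action) pair. Field names are simplified for illustration.

def lookup(flow_table, packet):
    """Return the action of the highest-priority matching entry,
    or None on a table miss (the packet is then sent to the controller
    or dropped, depending on the switch's configuration)."""
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        # An entry matches if every field it names equals the packet's value;
        # an empty match dict acts as a wildcard (matches everything).
        if all(packet.get(f) == v for f, v in entry["match"].items()):
            return entry["action"]
    return None

table = [
    {"priority": 10, "match": {"ip_dst": "10.0.0.2"}, "action": "output:2"},
    {"priority": 5,  "match": {},                     "action": "output:controller"},
]

print(lookup(table, {"ip_dst": "10.0.0.2"}))  # output:2
print(lookup(table, {"ip_dst": "10.0.0.9"}))  # output:controller (wildcard entry)
```

The low-priority wildcard entry plays the role of the table-miss behaviour: without it, `lookup` returns `None` and the packet has no local treatment.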


An important responsibility of the controller is that of abstracting the controlled devices, providing this abstracted representation to the SDN applications, which can set, through the controller, proactive or reactive flows. An SDN switch is composed of three layers: the upper one is the API used to communicate with the controller, the middle (or abstraction) layer hosts the flow tables, and the lower layer contains the packet processing functionality. The latter consists of the logic needed to apply the rules corresponding to the highest-priority matching flow entry. Hence, if a match is found the packet is processed locally; otherwise a copy of the packet, or a reference to it, is typically forwarded to the controller, which in this case "consumes" it. At this point the controller can act in several ways: ignoring the packet, replying with an instruction for the switch containing the proper treatment for that packet, or even replying with the installation or modification of a flow entry (this case is referred to as reactive flow installation).
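The reactive pattern described above can be illustrated with a tiny learning-switch controller (a sketch with made-up class names, not the interface of any real controller; a real controller such as ONOS would program the switch through OpenFlow flow-mod messages instead):

```python
# Illustrative sketch of reactive flow installation: on a table miss the
# switch hands the packet to the controller, which learns host locations
# and replies by installing a flow entry so that subsequent packets of
# the same flow are handled locally on the switch.

class TinySwitch:
    """Stand-in for an SDN switch: just records installed flow entries."""
    def __init__(self):
        self.flow_table = []

    def install_flow(self, match, action):
        self.flow_table.append({"match": match, "action": action})

class TinyController:
    """Stand-in for the controller's packet-in handling logic."""
    def __init__(self):
        self.mac_to_port = {}  # learned end-host locations

    def packet_in(self, switch, in_port, src_mac, dst_mac):
        self.mac_to_port[src_mac] = in_port          # learn where src lives
        out_port = self.mac_to_port.get(dst_mac)
        if out_port is None:
            return {"action": "flood"}               # unknown destination
        # Reactive flow installation: program the switch for this flow.
        switch.install_flow(match={"eth_dst": dst_mac},
                            action=f"output:{out_port}")
        return {"action": f"output:{out_port}"}

sw, ctrl = TinySwitch(), TinyController()
print(ctrl.packet_in(sw, 1, "aa:aa", "bb:bb"))  # destination unknown -> flood
print(ctrl.packet_in(sw, 2, "bb:bb", "aa:aa"))  # known -> output:1, flow installed
print(sw.flow_table)
```

After the second packet-in, the switch holds an entry for traffic towards `aa:aa`, so further packets of that flow no longer reach the controller.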

Fig. 3.4 – SDN software switch
Fig. 3.5 – SDN hardware switch

It is worth noting that network policy implementations are realized not only by the controller, but also with the help of applications interacting with it through the northbound API. Although controllers are often provided with ready-made common application modules, it is important to distinguish them from the core features that a controller must include: end-user device discovery, network device discovery, network device topology management and flow management. In particular, the controller is able to identify the topology by learning about the existence of SDN devices and end-user devices and by tracking the connectivity between them. Moreover, it keeps per-flow statistics collected by the controlled switches.


Fig. 3.6 – Controller architecture

3.2.2 SDN Controllers: the comparison

An important preliminary task concerned the comparison among the most popular open-source SDN controllers currently available, with the aim of properly selecting the most complete and suitable one. Both a qualitative and a quantitative perspective have been considered in the following discussion: in particular, the controllers' main features, seen from a high-level perspective, have been extracted from the summary of SDN controllers proposed in [13], while a more practical view, involving performance evaluations through several benchmarking tools, has been provided by reporting the main findings shown in [14].

The first part of the section includes the presentation of, and the comparison among, the five most widespread SDN controllers (ONOS, ODL, OpenKilda, Ryu and Faucet), focusing on several aspects such as the main architectural characteristics, the available Northbound and Southbound interfaces, modularity and extensibility, scalability and resilience.

The first presented controller is Open Network Operating System, better known as ONOS. It is based on a three-tier architecture: at the lower layer we find the Southbound, including modules related to the protocols adopted for the communication with the network devices; in the middle sits the Distributed Core, responsible for providing network functions in a protocol-agnostic way; the upper layer is the Northbound, hosting the ONOS apps, which exploit and process the network state information exposed by the core. ONOS offers an Intent-based framework through which the client can simply specify what the functional outcome of a network service should be, abstracting any implementation detail. Among those analyzed, ONOS is the controller providing compatibility with the largest set of southbound protocols, to name a few: OpenFlow, NETCONF, SNMP, BGP and PCEP. On the Northbound side, ONOS natively allows external interactions through RPCs, RESTful APIs, a Web GUI and a CLI. Modularity and extensibility are guaranteed through built-in mechanisms for dynamically connecting/disconnecting components (managed as OSGi bundles) while the controller is running: the Open Services Gateway initiative (OSGi) is a modular development framework where loosely coupled modules construct the entire platform; the modules can be built independently, with the ability of importing and exporting data from one another. Several controller instances can be organized in a cluster to scale out. They can join and leave dynamically, with the Atomix distributed datastore responsible for the cluster, assuring CP in the CAP triangle. Obviously, as the cluster grows, the communication and coordination activities rapidly increase, thus limiting the performance gain per additional cluster member. Moreover, ONOS provides BGP routing capabilities to coordinate traffic flows between the several SDN islands composing large-scale SD-WANs. The clustering capabilities, together with an odd number of SDN controller instances, guarantee fault tolerance: in case of master node failure, a new leader is elected to take control of the network.
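To give a concrete taste of the Intent framework, ONOS accepts JSON intent descriptions on its northbound REST API (by default under `/onos/v1/intents`). The sketch below only assembles such a payload without sending it; the host IDs and application ID are made-up examples, and the exact field names should be checked against the REST documentation of the ONOS release in use:

```python
import json

# Sketch: building a HostToHostIntent payload for the ONOS northbound
# REST API. The appId and host IDs below are illustrative placeholders;
# the request is only assembled here, not actually sent to a controller.

ONOS_INTENTS_URL = "http://localhost:8181/onos/v1/intents"  # default endpoint

def host_to_host_intent(app_id: str, host_one: str, host_two: str) -> str:
    """Return the JSON body describing connectivity between two hosts;
    ONOS computes and maintains the actual path itself."""
    intent = {
        "type": "HostToHostIntent",  # the desired outcome, no flow details
        "appId": app_id,
        "one": host_one,             # ONOS host IDs are MAC/VLAN pairs
        "two": host_two,
    }
    return json.dumps(intent)

payload = host_to_host_intent("org.example.fogapp",
                              "00:00:00:00:00:01/None",
                              "00:00:00:00:00:02/None")
print(payload)
```

This is exactly the "specify the outcome, not the implementation" abstraction discussed above: the application states which hosts must communicate, and the Distributed Core translates that into flow rules on the involved switches.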


The OpenDaylight (ODL) controller, like ONOS, presents a three-tier architecture: the lower layer is represented by the Southbound Interfaces & Protocol Plugins, used to interact with the network devices; the middle layer, called the Controller Platform, is composed of the Network Service Functions and the Service Abstraction Layer; the upper layer is the Network Applications, Orchestration & Services, allowing operators to apply high-level policies or to integrate ODL with other platforms. Like ONOS, ODL offers the set of the most common southbound protocols used for the communication with the network devices, including OpenFlow, while for the northbound it provides only RESTful APIs. The adoption of OSGi provides, also in this case, a very flexible and dynamic approach to adding and removing functionalities at runtime. ODL also offers the possibility of a cluster deployment, with the difference that the distributed datastore used is based on Akka, thus assuring AP in the CAP triangle. The controller provides BGP routing capabilities to coordinate traffic flows between SDN islands. With respect to resilience, ODL provides mechanisms very similar to the ONOS approach, with the important difference that ONOS focuses on consistency, while ODL focuses on high availability.

The third analyzed controller is OpenKilda. It uses the Floodlight software to interact with switches using OpenFlow, but pushes decision-making functionality into other parts of the stack. Kafka is used as a message bus in order to pass state information, collected by Floodlight, to an Apache Storm based cluster of processing agents, which produce time-series data; Apache Storm passes these data to OpenTSDB for storage and analysis. Neo4j, a graph analysis and visualization platform, is also integrated in the overall controller architecture. For the southbound, OpenKilda supports only the OpenFlow protocol (through Floodlight), and it offers RESTful APIs for the northbound. This controller is built on several well-supported open-source components to implement a decentralized, distributed control plane, backed by a unique, well-designed cluster of agents to drive network updates as required. Therefore, the modular nature of its architecture makes it reasonably easy to add new features. Scalability is achieved at different layers: indeed the southbound, with idempotent Floodlight instances, and the Storm cluster can scale independently from each other; Floodlight instances have no requirement to share state, while the Storm cluster is horizontally scalable by design and allows throughput to be increased by adding nodes. However, OpenKilda provides neither BGP routing capabilities nor an in-built clustering mechanism, thus relying on external tools to maintain availability: it is possible to run multiple, identically configured instances, or a single instance controlled by an external framework that detects and restarts failed nodes.

The next analyzed controller is Ryu, even if it is better thought of as a toolbox with which SDN controller functionality can be built. It is a component-based software-defined networking framework providing software components with well-defined APIs. At the southbound layer its interfaces support several protocols, such as OpenFlow and NETCONF, while at the northbound interface, also in this case, only RESTful APIs are provided. It is not entirely appropriate to speak of Ryu's modularity or extensibility, because it offers a simple supporting infrastructure that users can exploit as desired by writing their own code: even if this requires development expertise, the complete flexibility of the SDN solution is of course achievable. Ryu does not offer internal clustering ability and requires external tools to share the network state and allow failover between cluster members.

Fig. 3.10– Ryu Architecture

The last discussed controller is Faucet, a lightweight OpenFlow SDN controller built on top of Ryu. Each Faucet instance has two connections to the underlying switches: one for control and configuration updates, the other (Gauge) is a read-only connection for gathering, collating and transmitting state information to be processed elsewhere. At the southbound interface it supports only the OpenFlow protocol, while at the northbound layer it differentiates itself from the other alternatives by adopting YAML configuration files. YAML (YAML Ain't Markup Language) is a human-readable data-serialization language, commonly used for configuration files (as in this case) and in applications where data is being stored or transmitted. These configuration files track the intended system state instead of instantaneous API calls, requiring external tools for dynamically applying configuration. Adding functionality to Faucet is achieved through modifying the systems that make use of its northbound interfaces. This provides the added flexibility of using different tools and languages depending on the problem being solved. Additionally, increasing the complexity of northbound interactions does not negatively impact the SDN directly. Faucet is designed to be deployed at scale, such that each instance is close to the subset of switches under its control. Each instance of Faucet is self-contained and, due to its lightweight nature, no clustering is required. Anyway, Faucet contains no intrinsic clustering capability and requires external tools to distribute state or to maintain availability, if desired. It is worth noting that for Faucet, which is controlled by static configuration files, restarting a controller is a quick and stable operation with no dependency on upstream infrastructure once the configuration is written.

Fig. 3.11 – Faucet Architecture
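As an illustration of this configuration-driven northbound, a minimal Faucet configuration file might look like the following (switch, port and VLAN names are hypothetical): rather than issuing API calls, the file declaratively describes the intended state of the datapaths under Faucet's control.

```yaml
# faucet.yaml - declarative description of the intended network state
vlans:
  office:
    vid: 100
    description: "example access VLAN"
dps:
  sw1:
    dp_id: 0x1
    hardware: "Open vSwitch"
    interfaces:
      1:
        name: "host1"
        native_vlan: office
      2:
        name: "host2"
        native_vlan: office
```

Reloading the file (e.g. by signalling the Faucet process) makes the controller converge the switches towards this declared state.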

In order to extract some interesting insights from the previously presented overview, the following provides a brief recap of the main aspects of the discussed controllers, directly comparing the several alternatives.

From an architectural perspective, ONOS and ODL are typically easier to maintain and allow lower latency between the tightly coupled southbound API, PCE (Path Computation Element) and northbound APIs. However, as the scale increases, their approach (based on several centralized controller instances) can become a bottleneck. Architectures such as OpenKilda and Faucet, instead, are generally more complex to maintain and deploy but can allow the platform to scale more effectively: by decoupling the processing of PCE, telemetry and southbound interface traffic, each function can be scaled independently. Ryu is different from the other options, having a core set of programs that are run as a "platform".

Considering the perspective of the extensibility and modularity of the solution, ONOS and ODL take advantage of OSGi containers for loading bundles at runtime, allowing a very flexible approach to adding functionalities at the expense of centralizing processing in each controller instance. Faucet and OpenKilda allow adding/removing features by modifying the systems that make use of their northbound interfaces. This provides the added flexibility of decoupling SDN tiers and using different tools and languages depending on the problem being solved. Ryu provides a well-defined API for developers to change the way components are managed and configured.

Scalability is another important point of comparison: ONOS and ODL contain internal functionality for maintaining a cluster; they exploit a distributed datastore that shares the current SDN state and allows controller failover in the event of a cluster partition. OpenKilda approaches cluster scalability in an independent, modular way for each level of its stack. Both Ryu and Faucet contain no intrinsic clustering capability and require external tools to distribute the state. Moreover, Ryu, Faucet, ODL and ONOS all include native BGP routing capabilities to coordinate traffic flows between different SDN islands, but universal PCE and telemetry processing would still need to be developed. OpenKilda instead provides a working reference architecture for achieving this universal scope without the partitioning into autonomous SDN islands.

Looking at future SDN developments, it is an important challenge to extract and use properly the available telemetry to infer the system state. With respect to this aspect ODL lacks functionalities, with telemetry still being an experimental module in the latest version. ONOS and Faucet allow telemetry to be exported and processed by dedicated components. In OpenKilda, extracting usable telemetry from the infrastructure was a core design principle as visible from the architecture itself.

Last but not least, the programming languages and the support communities of the proposed controllers can be compared. ONOS, ODL and OpenKilda are written in Java, with several development cases in the market and good supporting documentation and libraries available. It should be noticed that Java processes can tend to be heavyweight and require resource and configuration management to keep them responsive. Ryu and Faucet are written in Python, a well-supported language, although their documentation is concise and technical. Python is not a fast language and has limitations due to both its dynamic type representation and its limited multi-threading capabilities. Both ODL and ONOS benefit from large developer and user communities under the Linux Foundation Networking banner, where many large international players are involved, but a possible downside is that many voices can impact stability. OpenKilda has a small but active community, and between these two extremes there are Ryu and Faucet.

Theoretical comparisons based on features and properties do not reflect the actual performance of any controller; hence, real deployment and benchmarking are necessary for a true evaluation. The second part of this section consists of a summary of a research work [14] aimed at evaluating the performance of some SDN controllers currently available. Benchmarking the performance of a controller is a challenging task: simulation/emulation evaluation can give only an indication of performance and may significantly differ from an actual production environment evaluation because of the limitations imposed by the available tools.

The presented research benchmarking work focused on several classes of SDN controllers: single-threaded single-instance (NOX, POX, Ryu), multi-threaded single-instance (Floodlight, OpenMul, Beacon, Maestro) and multi-threaded multi-instance (ONOS, ODL). For the task, three benchmarking tools have been exploited, each extracting different performance metrics: latency and throughput from CBench and PktBlaster, and the average RTT, the average flow setup latency, the CPU utilization of the vSwitch daemon and the missed flow requests from OFNet.

CBench tests the performance by sending asynchronous messages: for the latency test the messages are sent in series, which means that it sends a packet-in message to the emulated switch and waits for a response before sending the next one; the throughput of the running controller is tested with the same parameters, but packets are not sent in series: requests are sent without waiting for a response. CBench outputs the number of flow messages a controller can handle per second. The results presented are an average of the number of responses per second from all switches in the execution.
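The difference between the two modes can be illustrated with a toy Python sketch. This is not CBench's implementation: `fake_controller`, the worker count and the simulated 1 ms processing time are illustrative stand-ins, but the serial-versus-pipelined structure mirrors the latency and throughput tests described above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_controller(pkt_in):
    """Stand-in for an SDN controller handling one packet-in (~1 ms of work)."""
    time.sleep(0.001)
    return pkt_in

def latency_mode(n):
    """Latency test: send one request, wait for its reply, then send the next.
    Returns the number of handled responses per second."""
    start = time.perf_counter()
    for i in range(n):
        fake_controller(i)
    return n / (time.perf_counter() - start)

def throughput_mode(n, workers=16):
    """Throughput test: requests are pipelined without waiting for
    individual replies (approximated here with a thread pool)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fake_controller, range(n)))
    return n / (time.perf_counter() - start)
```

With the same simulated controller, the pipelined mode reports a much higher flows-per-second figure, which is exactly why the two metrics must be kept distinct when comparing controllers.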

PktBlaster utilizes the in-built TCP-based traffic emulation profile and creates an OpenFlow session between the emulated switch and the controller. The controllers are evaluated based on latency (flow installation rate) and throughput (flow processing rate).

OFNet uses a custom tree-based topology. The number of hosts and switches is limited due to resource constraints of the emulating machines. An in-built traffic generator is used, which initiates and transfers multiple types of traffic (DNS, Web, Ping, Multicast, FTP, Telnet…) among hosts in the emulated network. OFNet provides analysis over time.

Fig. 3.12 – Benchmarking tools recap table

Fig. 3.13 – Benchmarking parameters table

Fig. 3.14 – CBench latency varying switches' number
Fig. 3.15 – CBench latency varying iterations' number (16 switches)

Figure 3.14 shows the latency against the number of switches in the topology, from 2 to 16: it is worth noting two distinct groups, one with high latency and one with significantly lower latency; the Ryu controller shows a negligible impact on its latency performance when varying the number of switches. Similarly, NOX and POX also show minimal change in latency as the switches increase. However, lower latency does not translate into a strong preference, as the capabilities of the controller itself must also be considered (ONOS and ODL are the best solutions from this point of view). Figure 3.15 shows the effect of the tool's own performance on latency measurement, obtained by changing the number of iterations while the number of switches is fixed at 16: most of the controllers change their latency as the results are averaged out over a larger set of repetitions. The basic take-away is that the effect of the setup environment on measurements should never be underestimated.

Fig. 3.16 – PktBlaster latency varying switches’ number

Fig. 3.17 – OFNet flow setup latency

Latency calculation using PktBlaster is also extracted against an increasing number of switches: here three distinct groups of controllers can be distinguished: NOX and POX show minimum latency, while Floodlight, ODL and ONOS have the highest latency in this test; Ryu, OpenMUL, Maestro and Beacon are in the middle. The important factor to note is that the number of switches does not have any significant impact on the latency calculation. Moreover, latency is closer to the RTT between the observing node and the controller, whereas flow installation time (path provisioning) would involve multiple switches, hence increasing the time. OFNet has a different evaluation and reporting method, simulating the SDN network much like Mininet. The output values are reported against time, so the figure shows the averaged result of 10 iterations on a timeline of 300 seconds: it can be observed that there is no specific pattern over time followed by any given controller. The overall effect is that less time is required to install flows as the simulation progresses.

Fig. 3.18 – CBench throughput varying switches’ number

Fig. 3.19 – PktBlaster throughput varying switches’ number

In throughput mode, CBench switches send as many packets as possible at once and do not wait for a reply. The figure shows the comparison with an increasing number of switches: NOX, POX and Ryu remain the lowest performers, while the flow response rate of ONOS is significantly higher, around 400 flows/ms and up to 500 flows/ms. The throughput measurements of PktBlaster show minimal sensitivity to the variation of the number of switches, with Floodlight, ODL and ONOS as the best performers and NOX and POX as the worst.

Fig. 3.22 – OFNet missing flows

The RTT evaluation is an important factor because it identifies the communication delay between the controller and the switch. If the controller and switches are physically far apart, the increased RTT will contribute to increased latency. Similarly, the time complexity of packet processing at the controller affects the overall performance. Based on the established tree topology, ONOS has the highest RTTs. On the other hand, Ryu and OpenMUL have the lowest RTTs, mostly because of the less complex algorithms involved at the controller. However, less complex does not translate into better: rather, it may be attributed to a smaller number of controller capabilities. OFNet's in-built traffic emulation application is used to transmit various packets and to identify the CPU usage of the vSwitch process while interacting with a controller. While running a single-threaded controller like NOX, POX or Ryu, the CPU usage of the vSwitch daemon remains below 30% to 40%. On the contrary, CPU utilization is remarkably higher, at 90%, in the case of a multi-threaded controller like ONOS. Besides, the CPU usage remains under 70% for the rest of the controllers, including Floodlight and ODL. It is important to notice that multi-threading capabilities may have been limited by the capabilities of the vSwitches. Missed flows refer to the number of flows that the controller misses while the test is ongoing. The vSwitch transmits reactive flows to benchmark the SDN controllers, and the figure depicts that ONOS, ODL and Floodlight miss the least number of flows, as opposed to NOX, POX and Ryu. This again is attributed to the multi-threading capabilities of the controllers, which allow them to perform comparatively better than the single-threaded ones.

In the end, to summarize the insights of the benchmark proposed in [14], it is possible to conclude with the following considerations: with respect to latency and throughput, multi-threaded controllers (both single-instance and multi-instance) perform significantly better than single-threaded controllers like NOX, POX and Ryu; however, they also require more physical resources in order to perform efficiently. The position of the controller in the physical topology directly impacts several performance parameters, but limitations and features of the tools also directly affect the benchmarking. It should be noted that single-threaded single-instance controllers can still perform better in simplified topologies, while multi-threaded controllers are more suitable for complex environments.

3.2.3 ONOS

At the end of the comparative analysis presented in the previous section, several factors have directed the final choice of the controller towards ONOS: its completeness, its support for the widest range of southbound protocols, the rich set of alternatives provided for northbound interactions, its usage in several production solutions and, last but not least, the impression given by its well-maintained and accurate official documentation [15]. Starting from the latter, this section provides more details on the ONOS architecture and internals.

The design goals on the basis of which ONOS is built can be summarized in four concepts: code modularity, configurability, separation of concerns and protocol agnosticism. The code is structured in a set of sub-projects, each with an independent source tree (Maven's notion of a hierarchical POM file): the ONOS root contains the top-level POM file. Functionalities can be developed as self-contained units, and the Apache Karaf OSGi framework allows the management and deployment of the components with the highest possible level of flexibility, both at startup and at runtime. Moreover, it provides standard JAX-RS APIs to develop secure REST APIs as a set of bundles (allowing customizable setups) and a local and remote SSH console with an extensible CLI. Clear boundaries separate the subsystems in the three-tier architecture: modules interacting with the network through the southbound API (protocol-aware); core modules tracking and serving information about the network state (protocol-agnostic); application-level modules acting upon the information provided by the core through the northbound API. Protocol agnosticism refers to the idea of providing services not bound to specific protocol libraries or implementations: a new network-facing module can be loaded into the system as a plugin (through the appropriate southbound API) to support a new protocol, while the other modules remain unbound to specific protocols.

The protagonists of the ONOS internals are the so-called Services. A Service or Subsystem is a unit of functionality made of several components creating a vertical slice in the ONOS software stack; each component resides in one of the three tiers and can be identified by one or more implemented Java Interfaces.

Fig. 3.23 – Typical ONOS Service structure

From the figure we can identify the typical components of an ONOS Subsystem (Provider, Manager, Store, Application) but it is worth noting that not all subsystems are composed of these modules.

Providers (lowest tier modules) interact with the network via protocol-specific libraries, and with the core via the ProviderService interface. They supply service-specific sensory data but may also collect data from other subsystems, converting them into service-specific data. Some providers may also need to accept control edicts from the core and apply them to the network using the appropriate protocol-specific means. A Provider is associated with a ProviderId, an externalizable identity of a family of providers, which allows devices and other model entities to remain associated with the identity of the provider responsible for their existence even after the provider is uninstalled/unloaded. A subsystem may be associated with multiple providers designated as either primary or ancillary. The primary provider owns the entities associated with its service, with ancillary providers contributing their information as overlays.

The manager is a component resident in the core, exposing several interfaces:

- A northbound Service interface through which applications or other core components can learn about a particular aspect of the network state;

- An AdminService interface for taking administrative commands and applying them onto the network state or the system;

- A southbound ProviderRegistry interface through which Providers can register with the manager, so that it may interact with it;

- A southbound ProviderService interface presented to a registered Provider, through which it may send and receive information to/from the manager.

The consumers of a Manager’s Service interface may receive information both synchronously by querying the service, and asynchronously as an event listener.

The Store is also a core-resident component. It is closely associated with the Manager, and has the task of indexing, persisting and synchronizing the information received by the Manager. This includes ensuring consistency and robustness of information across multiple ONOS instances by directly communicating with the stores on other ONOS instances.

Fig. 3.24 – ONOS Store components

The Application components provide several functionalities by consuming the information aggregated by the Managers via the AdminService and Service interfaces. Each application is associated with a unique ApplicationId, used by ONOS to track the context associated with an application. In order to obtain a valid ID, applications register with the CoreService, providing their name (which is expected to follow the reverse DNS notation).

Two other important elements of the ONOS internals are Events and Descriptions. Both of them are immutable once created. Descriptions are used to pass information about an element across the southbound API. They are usually made up of one or more Model Objects, ONOS's representations of various network components.

On the other side, Events are used by Managers to notify their listeners about changes in the network, and by Stores to notify their peers of events in a distributed setting. An Event is composed of an event type and a subject, built of Model Objects. Events are generated by the Store, based on input from the Manager. Once generated, an Event is dispatched to interested listeners via the StoreDelegate interface, which ultimately invokes the EventDeliveryService. Essentially, the StoreDelegate moves the event out of the Store, and the EventDeliveryService ensures that the event only reaches interested listeners. Due to how they interact, these two components reside in the Manager, which provides the implementation class of the StoreDelegate to the Store. Event listeners are any components that implement the EventListener interface. EventListener child interfaces are classified by the type of Event subclass they listen for. The typical mode of implementation is for an EventListener to be an inner class of a Manager or an application, from which the appropriate services are invoked based on the received event. This restricts the handling of events external to a subsystem to the subsystem's Manager or to an application, the logical locations where they should be handled.
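The Store → StoreDelegate → EventDeliveryService → EventListener path can be sketched with a simplified Python analogue (these are not ONOS's actual Java interfaces; all class and method names below are illustrative):

```python
from collections import defaultdict

class EventDeliveryService:
    """Dispatches events only to listeners registered for that event type."""
    def __init__(self):
        self._listeners = defaultdict(list)

    def add_listener(self, event_type, listener):
        self._listeners[event_type].append(listener)

    def post(self, event_type, subject):
        for listener in self._listeners[event_type]:
            listener(event_type, subject)

class Store:
    """Generates events and hands them to the delegate supplied by the Manager."""
    def __init__(self, delegate):
        self._delegate = delegate

    def update(self, event_type, subject):
        # The delegate moves the event out of the store
        self._delegate(event_type, subject)

class Manager:
    """Owns the Store and provides it with the StoreDelegate implementation."""
    def __init__(self, delivery):
        self._delivery = delivery
        self.store = Store(self._store_delegate)

    def _store_delegate(self, event_type, subject):
        self._delivery.post(event_type, subject)
```

A listener registered for one event type receives only events of that type, mirroring how EventListener child interfaces are classified by Event subclass.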

Fig. 3.25 – ONOS Event handling

ONOS maintains protocol-agnostic and protocol-specific network element and state representations that can be translated from one to the other. The former are constructs of the core tier, referred to as Model Objects, and the latter are constructs of the appropriate provider. Model Objects are built by the system core from the information found in Descriptions. They are exposed to the applications and also represent the subject body of an Event. Dependencies exist among Model Objects: some entities may rely on other entities. They can be summarized in three main categories:

- Network Topology (Device, Port, Host, Link, EdgeLink, Path, Topology …);

- Network Control (FlowRule, Intent, RoleValue …);

4 Architectural reference solution

The background analysis of the problem has led to the design of the architecture presented in this section. It is composed of several heterogeneous components, integrated in order to properly provide mobility support in a Cloud/Fog environment by enabling service migration and assuring its transparency from the end-application's perspective. Hence, the actors playing an important role in this reference solution are the Cloud environment, the Fog platform, the mobile application (and so the mobile node) and a series of functional sub-components residing within their boundaries.

The starting point for the conception of this architecture is the Companion Fog Computing (CFC) platform presented in [12]: that name is justified by the fact that the Fog service behaves as a "companion" of the corresponding application on the mobile device. The model introduced by the authors has been modified and extended in this Thesis with the aim of adapting it to the presence of a new relevant component such as the SDN controller. In order to fully exploit the possibilities enabled by the latter, the entities' models and the migration orchestration have been elaborated as discussed in the following paragraphs.

For the presentation of the details, a modular approach has been chosen: first the key involved parties are analyzed one by one, then their functional components and placement are deepened, and eventually the interactions needed to describe a complete operational scenario are presented.

Fig. 4.1 – Macro components of the reference architecture: the Cloud, Fog and the Mobile Devices layers

4.1 Presentations of the entities

The first part of the presentation of the architectural reference solution introduces its main actors, their functional sub-components and the modeling approach adopted to abstract their complexity, in a way that allows isolating and highlighting the factors of interest.

4.1.1 Mobile application

The application to be executed exploiting the advantages of the Fog Computing paradigm is actually composed of two tightly coupled modules: the Mobile Application proper (ma) and its "companion", which can be referred to as the Fog Application (fa). As depicted in figure 4.2, the former executes on the mobile IoT device (the mobile node), while the latter is deployed in the Fog layer, specifically within a container on a Fog node, according to the Infrastructure as a Service (IaaS) model.

Fig 4.2 – Model of the mobile application

The fog application, together with the container within which it executes, can be concisely referred to as the Fog Service (fs). It will be the protagonist of the migration process that supports the mobility of the IoT device. The Fog service can be modeled as an entity characterized by the tuple <HWreqs, EtEreqs, PHreqs, n(t)>:

- HWreqs: it represents the set of hardware requirements (e.g. the minimum amount of RAM or the minimum number of virtual CPUs) needed to complete the tasks assigned to the service;

- EtEreqs: it represents the set of end-to-end (between ma and fa) requirements (e.g. the maximum latency or RTT) without which the service cannot correctly execute its tasks;

- PHreqs: they are the set of per-hop network requirements (e.g. the minimum actually available bandwidth on the links) that have to be satisfied on each hop of the path to ensure the minimum service functionality;

- n(t): it is the identifier of the Fog node that hosts fs at a given instant (t).
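The tuple above can be captured, for instance, by a small data model; the concrete field names and units below are illustrative choices, not mandated by the architecture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FogService:
    """Model of a Fog service: <HWreqs, EtEreqs, PHreqs, n(t)>."""
    hw_reqs: dict    # HWreqs, e.g. {"ram_mb": 512, "vcpus": 1}
    ete_reqs: dict   # EtEreqs, e.g. {"max_rtt_ms": 20.0}
    ph_reqs: dict    # PHreqs,  e.g. {"min_bw_mbps": 10.0}
    node_id: str     # n(t): identifier of the hosting Fog node

def hw_satisfied(fs: FogService, node_hw: dict) -> bool:
    """True if the node's available resources cover every HWreqs entry."""
    return all(node_hw.get(k, 0) >= v for k, v in fs.hw_reqs.items())
```

A check of this kind is exactly what the migration orchestration needs when deciding whether a candidate Fog node can host the service.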

4.1.2 Mobile device layer

Consistently with the previously adopted modeling approach, the mobile IoT device, or mobile node (m), can be characterized by the tuple <AP(t), ap(t), EtEmn(t)>:

- AP(t): it is the set of the identifiers of the access points reachable by the mobile node at a given instant (t);

- ap(t): it is the identifier of the access point through which the mobile node connects to the network at a given instant (t);

- EtEmn(t): it is the value of the end-to-end (between m and n(t)) metrics used (e.g. the latency or the RTT) at a given instant (t).

In order to enable the functionalities of the proposed Fog-Computing platform the mobile node has to be equipped with several supporting components, as shown in the figure 4.4.

Fig. 4.4 – Functional components on the mobile node

The mobile node does not host only the mobile application (ma), but also a Mobile Manager. The latter is composed of two sub-components: the Migration Requester and the End-to-End Monitor. The idea, better clarified later, is that the End-to-End Monitor periodically checks the fulfilment of the EtEreqs of the "companion" fs, promptly signalling the Migration Requester if the threshold is exceeded. The Migration Requester is then responsible for starting the fs migration process.
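The interaction between the two sub-components can be sketched as follows. This is a minimal illustration: the RTT as concrete metric, the class and method names, and the single-pending-request policy are all assumptions made here, not specified by the architecture.

```python
class MigrationRequester:
    """Starts the fs migration process when signalled by the monitor."""
    def __init__(self):
        self.pending = False

    def request_migration(self, measured_rtt_ms):
        # In the platform this would contact the Migration Manager at the
        # Cloud layer; here we only record that a migration was requested.
        self.pending = True

class EndToEndMonitor:
    """Periodically checks the EtEreqs of the companion fs (here: max RTT)."""
    def __init__(self, max_rtt_ms, requester):
        self.max_rtt_ms = max_rtt_ms
        self.requester = requester

    def on_sample(self, rtt_ms):
        # Invoked at every monitoring period with the fresh EtEmn(t) sample
        if rtt_ms > self.max_rtt_ms and not self.requester.pending:
            self.requester.request_migration(rtt_ms)
```

As long as EtEmn(t) stays below the EtEreqs threshold, nothing happens; the first violating sample triggers exactly one migration request.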

4.1.3 Fog layer

Several Fog nodes (fn) belong to the Fog layer. Based on the actual needs, each (j-th) node can be modeled with the tuple <idj, HWj(t), sj(t)>.

- idj: it is the unique identifier of the j-th Fog node of the infrastructure;

- HWj(t): it represents the set of actual amounts of available hardware resources on the j-th Fog node at a given instant (t);

- sj(t): it is the actual state of the j-th Fog node at a given instant (t); it can be either "on" or "off".

On each Fog node several fs may be deployed: a Fog service is essentially a container within which the fa runs, and it must be reachable from the mobile IoT node on which the ma is instead executing. The networking strategy adopted for the containers deployed on a Fog node is depicted in figure 4.6.

Fig. 4.6 – Container networking on the Fog node

The idea is to exploit a bridge on the host in order to reach every container deployed locally. A bridge is substantially a virtual switch, and the access route to and from the outside is represented by a physical interface of the Fog node, which has to be attached (as a port) to that bridge. To ensure isolation and multi-tenancy, a dedicated network namespace must be created for each container on the node. The links between the standard network namespace of the host and those brought up for the Fog services are represented by the virtual Ethernet (veth) pairs shown in the above figure: one end of these links is the so-called veth-guest interface, the one exploited by the container to send and receive traffic, while on the other end there is the veth-host interface, attached to the bridge on the host and hence allowing packets to/from the container to pass from one network namespace to another as needed by the circumstances.
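The setup just described maps onto a handful of iproute2 commands. The sketch below only generates them as strings (the namespace, bridge and interface names, and the address, are hypothetical; actually executing these commands requires root privileges on the Fog node):

```python
def container_net_cmds(ns, bridge, host_if, guest_if, addr):
    """iproute2 commands attaching one container namespace to the host bridge
    via a veth pair (illustrative names; run as root in a real deployment)."""
    return [
        f"ip netns add {ns}",                                   # dedicated netns
        f"ip link add {host_if} type veth peer name {guest_if}",  # veth pair
        f"ip link set {guest_if} netns {ns}",                   # guest end into netns
        f"ip link set {host_if} master {bridge}",               # host end onto bridge
        f"ip link set {host_if} up",
        f"ip netns exec {ns} ip addr add {addr} dev {guest_if}",
        f"ip netns exec {ns} ip link set {guest_if} up",
    ]

for cmd in container_net_cmds("fs1", "br0", "veth-host-fs1",
                              "veth-guest", "172.17.0.2/24"):
    print(cmd)
```

Repeating the sequence per container yields exactly the topology of figure 4.6: one veth-guest per namespace, all veth-host ends attached to the same bridge.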

Figure 4.7 instead presents a high-level view of the functional sub-components that each Fog node has to be equipped with.

Fig. 4.7 – Functional components on the fog node

First, it is worth noting the presence of the functional counterpart of the Mobile Manager deployed on the mobile node: the Fog Manager. It is composed of two sub-components: the End-to-End Monitor and the Hardware Monitor. Their functions, better clarified later, are respectively those of tracking the values of the end-to-end metrics used (with respect to the mobile node and to the access points of the network infrastructure) and of monitoring the current state of the available hardware resources of the node.

The role of the Buffering VNF deserves instead to be analyzed separately. Its design and introduction in the proposed architecture address the intrinsic problem of service unavailability during migration. That issue is of course unavoidably linked to the service downtime experienced because of the migration, but some mitigations can be adopted with the aim of limiting its impact. As remarked in [17], a possible solution is that of introducing an entity acting as a "Cloud Hopper": the approach followed by the authors is that of creating a dedicated application to which all the traffic is addressed during the service downtime. The role of this application is simply that of keeping the incoming packets locally and forwarding them to the proper node at the completion of the migration process. It is worth underlining that the scenario in which the authors placed their "Cloud Hopper" application is a traditional Cloud environment, so their solution, as it is, is not suitable for the context taken as reference in this work: indeed it is not desirable, even if only for a limited period, that the traffic goes beyond the Fog layer, thus reducing the advantages behind the choice of the Fog paradigm. For this reason, the proposed idea has been adapted to the reference architecture in the form of the Buffering Virtual Network Function (VNF).

Fig. 4.8 – The Buffering VNF

The component implementing this VNF must be deployed on every fn of the Fog infrastructure. Its role is to intercept, when activated, the locally incoming packets and to delay their delivery until the restore of the fs has been successfully completed on the same node. The policies according to which the VNF is activated/deactivated, and its interaction with the other components, are detailed in Section 4.2.
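The hold-and-release policy of the Buffering VNF can be captured by the following toy model (in the prototype the actual interception is done via NFQueue, as described in Section 5.3.2; this sketch models only the policy, and all names are illustrative):

```python
from collections import deque

class BufferingVNF:
    """Toy model of the Buffering VNF: while active, incoming packets are
    held locally; on deactivation (i.e. once the container restore has
    completed) the buffered packets are released in arrival order."""

    def __init__(self, forward):
        self.forward = forward      # callback delivering a packet to the fs
        self.active = False
        self._buf = deque()

    def activate(self):
        """Migration started on this node: begin buffering."""
        self.active = True

    def on_packet(self, pkt):
        if self.active:
            self._buf.append(pkt)   # hold the packet during the downtime
        else:
            self.forward(pkt)       # normal operation: deliver immediately

    def deactivate(self):
        """Restore completed: stop buffering and flush in FIFO order."""
        self.active = False
        while self._buf:
            self.forward(self._buf.popleft())
```

Delivering the buffered packets in FIFO order preserves the original packet ordering as seen by the restored service, which matters especially for connection-oriented traffic.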

4.1.4 Cloud layer

The proposed architecture also requires the presence of a Cloud layer. It is worth noting that, in real cases, applications are often split into several components, some of which are deployed not only on the mobile or fog nodes but also in the Cloud.

This particular configuration is not taken into consideration in the proposed architecture because it does not affect the features of the presented solution. The architecture nevertheless requires some modules to be deployed at the Cloud layer to act in an orchestration role. These macro-components are shown in Figure 4.9.


Fig. 4.9 – Functional components at the Cloud layer

The first is the Migration Orchestrator: this module is composed of two sub-components, the Migration Manager and the Registry Server. The former is responsible for receiving the migration trigger signal from the mobile node and for starting the migration process by interacting with the involved fog nodes, while the latter maintains up-to-date information about the current state of the fog nodes of the infrastructure, enabling a smart selection of the target node for the migration. On the other side, the SDN controller is used to introduce the flexibility needed to configure the network devices and to dynamically provision communication paths between m and the corresponding fn.
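As a purely illustrative example of how the Migration Manager might exploit the Registry Server's view (the actual selection policy is defined in Section 4.2, and the field names below are hypothetical), a minimal target-selection step could look like:

```python
def select_target_fog_node(registry: dict, reachable: set,
                           cpu_needed: float, mem_needed: float):
    """Pick the migration target among the fog nodes reachable from the new
    access point: lowest end-to-end latency among nodes with enough spare
    resources. Illustrative policy only; returns None if no node fits."""
    candidates = [
        (info["ete_ms"], fn_id)
        for fn_id, info in registry.items()
        if fn_id in reachable
        and info["free_cpu"] >= cpu_needed
        and info["free_mem"] >= mem_needed
    ]
    return min(candidates)[1] if candidates else None
```
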

A deeper description of the policies and interactions enabling migration support is given in Section 4.2, which deals with the operational mechanisms of the architecture.

4.1.5 Network infrastructure

The network infrastructure layer proposed in this work is composed of SDN switches that can be remotely controlled by the in-Cloud centralized SDN controller through the most common SDN southbound protocol: OpenFlow. Among these devices, it is important to distinguish those belonging to the access layer, from now on referred to as access points (ap).


Fig. 4.10 – The network infrastructure

Consistent with the approach adopted for the previously presented entities, each access point is modeled with the tuple <idk, FNk(t), EtEk(t), EtEmk>, as depicted in figure 4.11.

Fig. 4.11 – Model of the access point

- idk: the unique identifier of the k-th access point of the infrastructure;

- FNk(t): the set of identifiers of the Fog nodes actually reachable from the k-th access point at a given instant t;

- EtEk(t): the set of values of the end-to-end metrics (e.g. the latency or the RTT) between the k-th apk and all the fnj such that fnj ∈ FNk(t) at a given instant t;

- EtEmk: the average value of the end-to-end metric between the k-th access point apk and a mobile node m connected to apk; it does not depend on the time, but only on the typology of the specific access point.
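The tuple maps naturally onto a small data structure; a minimal sketch follows (field names are illustrative, not taken from the prototype):

```python
from dataclasses import dataclass, field

@dataclass
class AccessPoint:
    """Model of the k-th access point <idk, FNk(t), EtEk(t), EtEmk>."""
    ap_id: str                                        # idk
    reachable_fns: set = field(default_factory=set)   # FNk(t)
    ete_to_fn: dict = field(default_factory=dict)     # EtEk(t): fn_id -> metric
    ete_to_mobile: float = 0.0                        # EtEmk (per-AP-type constant)

    def update_fn_metric(self, fn_id: str, value: float):
        """Record a fresh end-to-end measurement towards fog node fn_id."""
        self.reachable_fns.add(fn_id)
        self.ete_to_fn[fn_id] = value
```

Note that only FNk(t) and EtEk(t) evolve over time, whereas EtEmk is fixed per access-point typology, which is why it is stored as a plain constant field.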


4.2 Operational scenario

Having presented the entities that compose the proposed architecture, this section analyzes how they interact to form the policies and mechanisms designed to support IoT device mobility through service migration. Figure 4.12 presents the overall architecture, with all the previously seen components placed in a more composite, high-level view.
