
University of Pisa

and

Scuola Superiore Sant’Anna

Master Degree in Computer Science and Networking

Graphical Interface For An

SDN Monitoring Platform

by

Hagos Lemlem Adhane

Thesis Supervisors:

Prof. Piero CASTOLDI

Ing. Barbara MARTINI


Abstract

Within 5G networks, services are imposing stringent requirements in terms of bandwidth and delay. In this context, an SDN monitoring platform is proposed which allows for the orchestration of service chaining paths and the collection of statistics related to the actual status of switches and links in the network. In this thesis, we present a graphical interface for this monitoring platform that is used to display the graphical representation of the SDN network behavior and the actions taken by the orchestrator to adapt the installed service chaining paths to the network status so as to respect the SLA constraints. This platform leverages Grafana, a popular graphical representation tool. The results presented in this thesis show how the SDN orchestrator collects the statistics from the network and how it adapts the installed service chaining paths to the status of the network by redirecting the traffic through unloaded paths in case of congestion events or QoS degradations.


Contents

1 Introduction
2 Background and State of the Art
  2.1 SDN
  2.2 NFV
  2.3 SDN Controllers
    2.3.1 ONOS
    2.3.2 ODC
    2.3.3 POX
  2.4 Orchestrator
3 Graphical representation of monitoring data statistics
  3.1 Time-Series Databases
    3.1.1 InfluxDB
    3.1.2 Gnocchi
    3.1.3 Prometheus
  3.2 Tools for Graphical representation
    3.2.1 Grafana
    3.2.2 Kibana
    3.2.3 Graphite
4 Installation
  4.1 ONOS
  4.2 Mininet
  4.3 Gnocchi
    4.3.1 Installation using pip
    4.3.2 Configuring authentication
    4.3.3 Configuration
    4.3.4 Initialization
  4.4 Grafana
    4.4.1 Installation
    4.4.2 Configuration
    4.4.3 Adding a data source in Grafana
  4.5 Orchestrator
  4.6 Workflow
5 Experiments
  5.1 Graphs with a single request
    5.1.1 Traffic = 1Mb
    5.1.2 Traffic = 10Mb
    5.1.3 Traffic = 5Mb
  5.2 Graphs with three requests
    5.2.1 Traffic = 5Mb
6 Conclusion
A ConnectionManager.java
B Installation of Gnocchi from source

List of Tables

1 Experiments parameters

List of Figures

1 SDN architecture
2 NFV architecture
3 ONOS Controller architecture
4 ODL SDN Controller architecture
5 POX Controller
6 SDN Orchestrator: Building Blocks
7 Gnocchi Architecture
8 Prometheus Architecture
9 ONOS command line interface
10 Mininet command
11 Mininet Topology
12 Adding psql data source in Grafana
13 Workflow sequence diagram
14 N = 1, Traffic = 1Mb, swN = 3
15 N = 1, Traffic = 1Mb, swN = 14
16 N = 1, Traffic = 10Mb, R = 1, swN = 3
17 N = 1, Traffic = 10Mb, R = 1, swN = 14
18 N = 1, Traffic = 10Mb, R = 2, swN = 3
19 N = 1, Traffic = 10Mb, R = 2, swN = 14
20 N = 1, Traffic = 5Mb, R = 1, swN = 3
21 N = 1, Traffic = 5Mb, R = 1, swN = 14
22 N = 1, Traffic = 5Mb, R = 2, swN = 3
23 N = 1, Traffic = 5Mb, R = 2, swN = 14
24 N = 3, Traffic = 5Mb, R = 1, swN = 3
25 N = 3, Traffic = 5Mb, R = 1, swN = 14
26 N = 3, Traffic = 5Mb, R = 2, swN = 3
27 N = 3, Traffic = 5Mb, R = 2, swN = 14
28 N = 1, Traffic = 10Mb, swN = 3, Th = 10Mb
29 N = 1, Traffic = 10Mb, swN = 14, Th = 10Mb
30 N = 1, Traffic = 20Mb, swN = 3, Th = 10Mb


ACKNOWLEDGMENTS

First and foremost, I am grateful to God the almighty for giving me the strength, health, knowledge and opportunity to undertake this master's program and this thesis. Without his blessings and protection, it would have been impossible for me to reach this achievement.

Secondly, I would like to thank my advisors Prof. Piero Castoldi and Ing. Barbara Martini for their continuous guidance and help during the course of my thesis. I would also like to thank all the Professors of my department for giving me the best knowledge and experience during my course of study. I am also deeply thankful to Molka Gharbaoui for her day-to-day help and cooperation, answering my questions and clearing up my ambiguities from day one; I would like to give her special thanks for making my work easy and smooth. Finally, I would like to thank the many people who have done many nice things for me during my journey to fulfill my dreams.


Dedication

I am really honored and blessed to dedicate this thesis to the almighty God who gave me the strength and courage to finish this work.

Next, I would like to dedicate this thesis to the most lovable and sweetest mom in the world, Berhan Ar'aya, and to my beloved uncle Teklay Mehari. It would have been unimaginable and very hard for me to get here and to achieve everything I have achieved without your utmost love, help and care. So, this is not mine but honestly yours! Thank you for everything you did for me! I love you so much!


1 Introduction

Nowadays, users' demand for higher speed, lower end-to-end latency, higher data rates, higher device connectivity and better quality of service is exponentially increasing. Because of these demands, a faster fifth-generation mobile network (5G) is being researched to replace the fourth generation (4G). Within 5G networks, services are imposing stringent requirements in terms of bandwidth and delay. To meet these service requirements, the 5G network's need for network virtualization and network slicing should be addressed first. SDN and NFV, the technologies that give 5G network programmability and the ability to virtualize the network, are the best solutions to these needs. Moreover, these technologies enable on-demand provisioning and scalability solutions to meet the high demands of users in 5G. On the other hand, monitoring is gaining significant interest in SDN networks since it allows for achieving QoS-enabled and reliable service path chains. It also allows for a dynamic adaptation of service function chaining by collecting statistics of the network and taking actions based on the network's status. Monitoring platforms then need to interact with SDN network controllers to adapt service chains to the ever-changing user requirements or to promptly recover from service outages or degradation events. In this context, an SDN monitoring platform is proposed which allows for the orchestration of service chaining paths and the collection of statistics related to the actual status of switches and links in the network. It leverages the controller capabilities to monitor the status of the network and accordingly arrange service chain paths while preserving the predetermined quality of service requirements.

In this thesis, we present a graphical interface for this monitoring platform which is used to display the graphical representation of the SDN network behavior and the actions taken by the orchestrator to adapt the installed service chaining paths to the network status so as to respect the SLA constraints. The results presented in this thesis show how the SDN orchestrator collects the statistics from the network and how it adapts the installed service chaining paths to the status of the network by redirecting the traffic through unloaded paths in case of congestion or QoS degradation. The graphs displayed in the experiments section of this thesis show the network's behavior while traffic with different parameters traverses between two endpoints in the path chain, together with the actions taken by the orchestrator.

The organization of this thesis is as follows. In section 2, we present the background and state of the art of the technologies and tools used in this thesis. Section 3 presents three of the best time-series databases and the graphical representation of SDN statistics using graphical representation tools. The fourth section gives a detailed description of how to install the tools used in this thesis; at the end of that section, we present a clear workflow of the graphical interface for an SDN monitoring platform. In section 5, we run experiments and use the graphical representation tool to visualize the statistics collected by the orchestrator: different graphs show how the orchestrator manages a request for a service chain path setup, collects the throughput of the switches and, accordingly, how it adapts the path chain in case of congestion or QoS degradation by redirecting to a new set of unloaded switches. Finally, section 6 concludes the thesis by summarizing the main results of the work.


2 Background and State of the Art

2.1 SDN

Software Defined Networking (SDN) is a new networking paradigm in which the forwarding hardware is decoupled from control decisions [3]. It promises to overcome the limitations of current network infrastructures. First, it breaks the vertical integration by separating the network's control logic (the control plane) from the underlying routers and switches that forward the traffic (the data plane). Second, with the separation of the control and data planes, network switches become simple forwarding devices and the control logic is implemented in a logically centralized controller (or network operating system), simplifying policy enforcement and network (re)configuration and evolution. The control plane consists of one or more controllers, which are considered the brain of the SDN network where the whole intelligence is incorporated. The separation of the control plane and the data plane can be realized by means of a well-defined programming interface between the switches and the SDN controller [4] [5].

The main idea is to allow software developers to rely on network resources in the same easy manner as they do on storage and computing resources [3]. The main advantages of SDN over the traditional approach are that it allows us to quickly test and deploy new applications in a real network, minimizes capital and operating expenses, and allows centralized management of each switch [8]. The following figure (i.e., Fig. 1) shows the architecture of SDN [16].


2.2 NFV

Network function virtualization (NFV) represents a significant transformation for service provider networks, driven by the goals of reducing cost, increasing flexibility, and providing personalized services [9]. It transforms how network operators architect their infrastructure by leveraging full-blown virtualization technology to separate software instances from hardware platforms, and by decoupling functionality from location for faster networking service provisioning. Essentially, NFV implements network functions through software virtualization techniques and runs them on commodity hardware (i.e., industry standard servers, storage, and switches) [7].

The main idea of NFV is the decoupling of physical network equipment from the functions that run on it [21]. This means that a network function such as a firewall can be dispatched to a TSP (Telecommunication Service Provider) as an instance of plain software. This allows for the consolidation of many network equipment types onto high volume servers, switches and storage, which could be located in data centers, distributed network nodes and at end user premises. This way, a given service can be decomposed into a set of Virtual Network Functions (VNFs), which could then be implemented in software running on one or more industry standard physical servers [6]. In NFV environments, monolithic complex network functions running on specialized hardware are decomposed into smaller functional units and dynamically orchestrated onto a virtualized cloud and edge infrastructure [9].


2.3 SDN Controllers

An SDN controller is a software platform that runs on commercial server technology and provides the necessary resources and virtualizations based on a logical, virtualized view of the network. It is the core of an SDN network: it maintains a global view of the network and manages the network intelligence by installing flow table entries in switches. This is done through a control plane protocol that operates on the southbound interface of the control layer. The controller lies between network devices at one end and applications at the other end; any communication between applications and devices has to go through it. The controller also uses protocols such as OpenFlow to configure network devices and choose the optimal network path for application traffic. Controller implementations in SDN can range from a physically centralized controller implemented on a single server to physically distributed control elements that are logically centralized [10] [11] [17]. In the following subsections, some of the currently most used controllers are presented.

2.3.1 ONOS

Open Network Operating System (ONOS) is the first and leading open source SDN controller for building next-generation SDN/NFV solutions. It supports both configuration and real-time control of the network, eliminating the need to run routing and switching control protocols inside the network fabric. By moving intelligence into the ONOS cloud controller, innovation is enabled and end-users can easily create new network applications without the need to alter the data-plane systems [13].

The following features make ONOS the perfect choice for building next-generation SDN/NFV solutions. It is the only open source controller providing:

• Scalability: Offers virtually unlimited replication for scaling control plane capacity

• High Performance: Performs to the exacting specifications of large network operators

• Resiliency: Provides the availability required for mission critical operator networks.

• Legacy device support: Makes it easy to add or configure traditional devices and services with model-based dynamic configuration

• Next-Generation device support: Gives real-time control for native SDN data-plane devices with OpenFlow and now with P4 support [12].

Fig. 3 [20] below shows the architecture of the ONOS controller.


Figure 3: ONOS Controller architecture

2.3.2 ODC

OpenDaylight Controller (ODC) is an open source project supported by IBM, Cisco, Juniper, VMware and several other major networking vendors. It is an SDN controller platform implemented in Java. ODC presents a new SDN controller architecture based on the Service Abstraction Layer (SAL) concept, such that it supports protocols other than OpenFlow [14] [15]. OpenDaylight is a modular open platform for customizing and automating networks of any size and scale.


2.3.3 POX

POX is a Python-based open source controller for developing SDN applications. The POX controller provides an efficient way to implement the OpenFlow protocol, which is the de facto communication protocol between controllers and switches. It is possible to run different applications like hub, switch, load balancer, and firewall using the POX controller. Its great strength lies in the fact that it can be used with real hardware, in testbeds or with the Mininet emulator. The POX controller has some great features but does not have a GUI [18].

Figure 5: POX Controller

The ONOS architecture is designed to sustain high speed and large scale operation, and its main distinguishing characteristic is its support for hybrid networks [19]. It also offers a simple way to directly interact with the controller through a REST API in its NorthBound Interface (NB-I) [22]. Because of these features, among others, we chose to use the ONOS controller in this thesis.
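For instance, once ONOS is running, its NB-I can be queried directly over HTTP. The following is a minimal sketch using the standard ONOS REST API (default port 8181 and default credentials are assumed; the exact output depends on the deployment):

# list the switches currently known to the controller
curl -u onos:rocks http://localhost:8181/onos/v1/devices

# list the flow rules currently installed in the switches
curl -u onos:rocks http://localhost:8181/onos/v1/flows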

2.4 Orchestrator

The SDN Orchestrator is an application running on top of the ONOS network controller platform which is in charge of offering control capabilities of the underlying network switching nodes through an API. The SDN orchestrator leverages the controller capabilities to monitor the status of the network and accordingly arrange service chain paths while preserving predetermined QoS requirements. It exposes at NorthBound a RESTful interface which enables the upper layer applications to directly demand the set-up/tear-down of service chain paths by specifying an ordered set of end-point IP addresses of VF (Virtual Function) instances according to an intent-based approach. It collects traffic statistics from the switches and elaborates them to evaluate their current load and the actual throughput performance of data flows. By periodically evaluating the availability status (i.e., load) of switches, the SDN orchestrator is able to enforce a regulated usage of those switches to achieve reliable service chain paths. It also adaptively regulates the use of network resources (i.e., switches and links) and, accordingly, dynamically arranges service chain paths to maximize the available data throughput [22, 23].

Figure 6: SDN Orchestrator: Building Blocks

Fig. 6 presents the main software building blocks of the SDN orchestrator and the interactions among them. The main components are the following:

1. Request Manager: exposes a REST API to Applications. It handles requests for the setup of service chain paths between specified source and destination endpoints while ensuring that a specified sequence of Virtual Network Functions (VNFs) is traversed by the traffic flow. Both endpoints and VNFs are specified as IP addresses. The request involves composite paths consisting of a sequence of path segments to be individually provisioned in a coordinated way by leveraging the Service Data Delivery Control and Coordination component (a hypothetical sketch of such a request is shown after this list).

2. Service Data Delivery Control and Coordination: decomposes the requested service chain path into an ordered sequence of path segments and coordinates the provisioning actions of each path segment through proper mapping into ONOS-compliant queries while leveraging the BNServer.

3. Adaptation Module: is in charge of triggering the Service Data Delivery Control and Coordination component for the re-provisioning of (part of) service chain paths throughout a different set of switches (i.e., redirection) as soon as a degradation event (e.g., congestion of one or more switches, decrease of data throughput under a specified threshold) is detected using operational data stored in the Registry. The decision for redirections is triggered according to specified orchestration policies aiming at offering different levels of guarantees (i.e., service availability and/or QoS assurance).

4. Statistics Collector: obtains the operational status data of service chain paths after processing the set of collected OF statistics retrieved leveraging the BNServer. The operational status data are stored in the Registry and consist of (i) throughput on a per-chain (i.e., per-flow) basis, and (ii) switch throughput computed from per-port byte counters.

5. BNServer: is in charge of interacting with the base controller functions through appropriate APIs. This component leverages the ONOS controller functions to handle the low-level directives (i.e., OF messages) for installing the forwarding rules throughout the network and to collect traffic statistics at switches.

6. Registry: contains descriptive and operational data on network nodes and service chain paths. More specifically, it contains (i) a list of the available VNF instances with related descriptive information (i.e., type, network location in terms of IP prefix, Data Path ID and port of the switch they are connected to); (ii) descriptive information about the service chain paths and each comprised segment established throughout the network (e.g., IP addresses of end-points, identifier and port number of intermediate switches); (iii) operational information about the load (i.e., throughput) of switches and the actual QoS performance (i.e., data throughput) of service chain data flows, evaluated using monitoring data collected by the Statistics Collector [23].
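As an illustration of the intent-based NorthBound interface described in the Request Manager item above, a chain setup request could look like the following sketch. The endpoint path and JSON field names are hypothetical placeholders, since the actual REST schema of the orchestrator is not reproduced in this thesis; only the idea of specifying the two endpoints and an ordered list of VF IP addresses comes from the description above.

# hypothetical chain setup request; URL and field names are illustrative only
curl -X POST http://localhost:8080/orchestrator/chains \
     -H "Content-Type: application/json" \
     -d '{
           "source": "10.0.0.1",
           "destination": "10.0.0.10",
           "vnfs": ["10.0.0.103"]
         }'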

3 Graphical representation of monitoring data statistics

The statistics gathered from the network by the SDN Orchestrator will be saved in a time-series database. There are many time-series databases available to store the data, but we will only examine the most used ones. After sending the data to the chosen database, we will also represent it in real time using graphical representation software and observe the behavior of the network. Later on, in subsection 3.2, we will see which are the best graphical representation tools.

3.1 Time-Series Databases

Below are the descriptions and features of some of the best time-series databases for saving time-series statistics. We consider the three leading, most used tools and will choose one among them for storing the statistics in this thesis.


3.1.1 InfluxDB

InfluxDB is the time-series database in the TICK Stack. InfluxDB is used as a data store for any use case involving large amounts of time-stamped data, including DevOps monitoring, application metrics, IoT sensor data, and real-time analytics. Disk space can be conserved by configuring InfluxDB to keep data for a defined length of time, automatically expiring and deleting any unwanted data from the system. InfluxDB also offers an SQL-like query language for interacting with data. InfluxDB is a high-performance data store written specifically for time series data. It allows for high throughput ingest, compression and real-time querying of that same data. InfluxDB is written entirely in Go and it compiles into a single binary with no external dependencies [24].

3.1.2 Gnocchi

Gnocchi is an open-source time series database. The problem that Gnocchi solves is the storage and indexing of time series data and resources at a large scale. This is useful in modern cloud platforms which are not only huge but also are dynamic and potentially multi-tenant. Gnocchi takes all of that into account. It has been designed to handle large amounts of aggregates being stored while being performant, scalable and fault-tolerant. While doing this, the goal was to be sure to not build any hard dependency on any complex storage system. Gnocchi takes a unique approach to time series storage: rather than storing raw data points, it aggregates them before storing them. This built-in feature differs from most other time series databases, which usually support this mechanism as an option and compute aggregation (average, minimum, etc.) at query time. Because Gnocchi computes all the aggregations at ingestion, getting the data back is extremely fast, as it needs to read back the pre-computed results.

Gnocchi has features which make it one of the best time-series databases: an HTTP REST interface, horizontal scalability, metric aggregation, metric value search, archiving policies, structured resources, resource history, a queryable resource indexer, multi-tenancy and Grafana support are some of its key features. Gnocchi needs a database to index the resources and metrics that it will handle. The supported indexer drivers are:

• PostgreSQL (preferred)

• MySQL (at least version 5.6.4)

The indexer is responsible for storing the index of all resources, archive policies and metrics, along with their definitions, types and properties. The indexer is also responsible for linking resources with metrics and the relationships of resources [2].


Figure 7: Gnocchi Architecture

3.1.3 Prometheus

Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.

Prometheus’s main features are:

• a multi-dimensional data model with time series data identified by metric name and key/value pairs

• a flexible query language to leverage this dimensionality

• no reliance on distributed storage; single server nodes are autonomous

• time series collection happens via a pull model over HTTP

• targets are discovered via service discovery or static configuration

• multiple modes of graphing and dashboarding support [25, 26].


Figure 8: Prometheus Architecture

In this thesis, we prefer to use Gnocchi as our time-series database to store the statistics, due to its features and the ease with which it supports Grafana. Since the developers of Gnocchi suggest PostgreSQL as the best indexer, we also choose PostgreSQL as our indexer.

3.2 Tools for Graphical representation

After storing the statistics in the preferred time-series database, we are going to represent and visualize them using graphical representation tools. The following are some of the best tools to visualize our data (i.e., the statistics) graphically.

3.2.1 Grafana

Grafana is an open source, feature-rich metrics dashboard and graph editor for Graphite, Elasticsearch, OpenTSDB, Prometheus, Gnocchi and InfluxDB. It allows users to query, visualize, alert on and understand metrics no matter where they are stored. As mentioned above, Grafana supports many different storage back-ends for time series data (Data Sources). Each Data Source has a specific Query Editor that is customized for the features and capabilities that the particular Data Source exposes. The query language and capabilities of each Data Source are obviously very different. It is possible to combine data from multiple Data Sources onto a single Dashboard, but each Panel is tied to a specific Data Source that belongs to a particular Organization.


Best features of Grafana:

• Visualize: From heatmaps to histograms, graphs to geomaps, Grafana has a plethora of visualization options to help us understand our data, beautifully.

• Alert: Seamlessly define alerts where it makes sense — while we’re in the data.

• Unify: Bring data together to get better context. Grafana supports dozens of databases, natively.

• Open: Grafana gives many options. It’s completely open source, and backed by a vibrant community.

• Extend: Discover hundreds of dashboards and plugins in the official library [1] [27].

3.2.2 Kibana

Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. We can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data. Kibana core ships with the classics: histograms, line graphs, pie charts, sunbursts, and more. Plus, we can use the Vega grammar to design our own visualizations. All of them leverage the full aggregation capabilities of Elasticsearch. It gives us the freedom to select the way we give shape to our data, and we don't always have to know what we're looking for [28].

3.2.3 Graphite

Graphite is an enterprise-ready monitoring tool that runs equally well on cheap hardware or Cloud infrastructure. Professionals use it to track the performance of their websites, applications, business services, and networked servers. It marked the start of a new generation of monitoring tools, making it easier than ever to store, retrieve, share, and visualize time-series data. It stores numeric time-series data and renders graphs of this data on demand. What Graphite does not do is collect data for us; however, there are tools such as collectd that know how to send data to Graphite [29].

In this thesis we choose to use Grafana, since it is easy to integrate with Gnocchi and it has better features than the others for visualizing time-series data graphically.

4 Installation

In this section, we are going to see the installation and configuration of all the tools which are going to be used in this thesis.


4.1 ONOS

ONOS can be installed by downloading the latest version from the ONOS website. Since ONOS is open source software, we could build it from scratch; however, a ready-to-run ONOS distribution is also available on the ONOS website. In this thesis, we downloaded the latest version of the ONOS controller and installed it from a terminal in our virtual machine. As we can see in the following figure, ONOS is running successfully.

Figure 9: ONOS command line interface

4.2 Mininet

Mininet is an emulation tool that allows running several virtual hosts, controllers, switches, and links. It uses container-based virtualization to make a single system act as a complete network. It is a simple, robust and inexpensive network tool to develop and test OpenFlow-based applications. Mininet allows creating topologies of very large size, up to thousands of nodes, and performing tests on them easily. It has very simple command-line tools and an API. Mininet allows the user to easily create, customize, share and test SDN networks. Mininet creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native), in seconds, with a single command:


Figure 10: Mininet command

Fig. 11 displays the Mininet topology that we are using in this thesis. As we can see, the topology has a total of fourteen switches. Some of the switches are connected to emulated cloud platforms where a set of Virtual Functions (VFs) are deployed. Among the switches in the topology, switch3, switch7 and switch11 are the ones connected to the cloud platforms. The other eleven switches are all "normal" OpenFlow switches; they can only forward traffic and one or more hosts are connected to them.

Figure 11: Mininet Topology
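The topology of Fig. 11 is defined in a Python script (NSF.py, launched in subsection 4.6), which is not reproduced here. As a purely illustrative sketch, a custom Mininet topology is declared with the Mininet Python API along the following lines; the names and the three-switch layout below are placeholders, not the actual NSF topology:

from mininet.topo import Topo

class SmallChainTopo(Topo):
    # toy example: three switches in a line, one host at each end;
    # this is NOT the NSF.py topology used in the thesis
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        s2 = self.addSwitch('s2')
        s3 = self.addSwitch('s3')
        self.addLink(h1, s1)
        self.addLink(s1, s2)
        self.addLink(s2, s3)
        self.addLink(s3, h2)

# makes the topology available as --topo smallchain on the mn command line
topos = {'smallchain': (lambda: SmallChainTopo())}

Such a script can then be run against a remote controller like ONOS, e.g. with sudo mn --custom smallchain.py --topo smallchain --controller=remote.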

The easiest way to get started is to download a pre-packaged Mininet/Ubuntu VM. This VM includes Mininet itself, all OpenFlow binaries and tools pre-installed, and tweaks to the kernel configuration to support larger Mininet networks. Even though there are different options to install Mininet, the VM installation is the easiest, recommended and most foolproof way. Once the installation is completed, it can be used for SDN simulations with the OpenFlow protocol [8, 18] [30].

4.3 Gnocchi

There are two options to install Gnocchi [2]: the first is installing it using pip and the second is installing it from source. Let's see how to perform the installation in both ways. In this thesis, we have used the first option to install Gnocchi.

4.3.1 Installation using pip

To install Gnocchi using pip, the following command should be executed in a terminal. During the installation, various extra variants are available as options to be installed with Gnocchi, depending on the features and drivers we need. The selected extra features must be put inside the brackets, replacing the default configurations of the indexer, storage, and authentication.

pip install gnocchi[indexer, storage, authentication]

Here is the list of variants available:

• keystone – provides support for Keystone authentication

• postgresql – provides PostgreSQL indexer support

• mysql – provides MySQL indexer support

• swift – provides OpenStack Swift storage support

• s3 – provides Amazon S3 storage support

• redis – provides Redis storage support

4.3.2 Configuring authentication

The API server supports different authentication methods:

• basic: the default authentication method; it uses the standard HTTP Authorization header. The password is not used.

• keystone: to use OpenStack Keystone. In order to use keystone, some configurations should be set in gnocchi.conf.

• remote-user: where Gnocchi will look at the HTTP server REMOTE_USER environment variable to get the user-name.


In this thesis we are going to use the following extra variants inside the brackets; the default ones do not need to be written, so we only specify the ones that differ from the defaults.

• indexer: PostgreSQL

• storage: file (default)

• authentication: basic (default)

So here is the command to run in a terminal in our case:

pip install gnocchi[postgresql]

This command installs Gnocchi with PostgreSQL support for the indexer, while the file storage driver and basic authentication are used as defaults. The installation of Gnocchi from source is presented in Appendix B.

To store our data, we first create a PostgreSQL database and a table. A user with the same name as the login name of the VM (i.e., ubuntu) is created to avoid having to enter a password each time the database is accessed from a terminal; this user is the owner of the database created to store the statistics. Then, we create a database called gnocchi and, inside it, a table named th_bytes. This table has 14 columns to store the statistics from all 14 switches participating in our Mininet network topology, as stated in subsection 4.2. Since Grafana does not support string data types, the columns have a numeric data type (in our case, float).
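For reference, the database and table can be created from the psql shell along the following lines. The column names (sw1 ... sw14, one float column per switch) are an illustrative assumption, since the exact schema is only described, not listed, in this thesis:

CREATE DATABASE gnocchi;
\c gnocchi
CREATE TABLE th_bytes (
    sw1  float, sw2  float, sw3  float, sw4  float, sw5  float,
    sw6  float, sw7  float, sw8  float, sw9  float, sw10 float,
    sw11 float, sw12 float, sw13 float, sw14 float
);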

4.3.3 Configuration

The default configuration file is found in '/etc/gnocchi/gnocchi.conf'. It is also possible to generate it by running the command given below. It should be configured by editing the appropriate parts of the file: the indexer option is set to the PostgreSQL database (i.e., gnocchi) created previously, and the storage option can be set to an existing directory; in this thesis, we use the default storage path, namely '/var/lib/gnocchi', to store the metrics. By default, the incoming driver is configured to use the same value as the storage driver. Below we present an example of how to configure the 'gnocchi.conf' file:

[indexer]
url = postgresql://root:p4assw0rd@localhost/gnocchi

[storage]
file_basepath = /var/lib/gnocchi

The source code does not provide the configuration file; it is normally created during the installation. If it is not created during the installation, we can generate it by running the following command in a terminal, giving the path where the configuration file should be stored on disk.

gnocchi-config-generator > /path/to/gnocchi.conf

In our case it was not created during the installation; here is how to generate it under the path /etc/gnocchi/:

gnocchi-config-generator > /etc/gnocchi/gnocchi.conf

4.3.4 Initialization

Once Gnocchi has been configured properly and the configuration file changed as needed, the indexer and storage should be initialized as follows:

gnocchi-upgrade

4.4 Grafana

As we have seen above, using Grafana [1] we can visualize our data in tables, graphs, and charts. In order to use Grafana, it should first be installed and configured. We install and configure Grafana using the following detailed steps:

4.4.1 Installation

We install the stable version of Grafana on Debian/Ubuntu using the following steps:

wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_5.1.4_amd64.deb

sudo apt-get install -y adduser libfontconfig
sudo dpkg -i grafana_5.1.4_amd64.deb

After the installation of Grafana, since we are using a database (i.e., PostgreSQL) different from the default one (i.e., sqlite3), the default configuration should be updated with the new information. In the following section, we describe the configuration file of Grafana.

4.4.2 Configuration

The configuration file is located at '/etc/grafana/grafana.ini' and it is already pre-configured with default settings. In this thesis, however, the defaults are not going to be used: a different database is used, so some default configurations in 'grafana.ini' are replaced by the new database information. In the configuration file, the database connection is configured by specifying type, host, name, user, and password in the [database] section of grafana.ini. In our case, the database indexer we use in Gnocchi is PostgreSQL, so we only need to change the type of the database and the host; there is no need to change the others. In the following, we show the configuration file's default configuration and the modified one.

Here is the default configuration of the configuration file:

[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as one string using the url property.
# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =

In our case, after we modify the configuration with the new database's information, the type and host lines look as below (note that the leading ';' must be removed so that these settings are no longer commented out); there is no need to change the others.

type = postgres
host = 127.0.0.1:5432

Finally, once Grafana is installed and configured successfully, the server (an init.d service) should be started before we begin to use it. The command to start the Grafana service is presented below:

sudo service grafana-server start

We can also make the server start at boot time by running the following command:

sudo update-rc.d grafana-server defaults

The default HTTP port of Grafana is 3000 and the default group, user and password are 'admin' (without the quotes). It is possible to change the default port in the Grafana configuration file to any port number. We use Grafana by opening the following URL in a browser and logging in with the default user name and password.

http://localhost:3000

4.4.3 Adding a data source in Grafana

Grafana ships with a built-in PostgreSQL data source plugin that allows us to query and visualize data from a PostgreSQL-compatible database. Below are the steps that show how to add a psql data source to Grafana:

1. Open the side menu by clicking the Grafana icon in the top header.

2. In the side menu, under the Configuration icon, there is a link named Data Sources.

3. Click the + Add data source button in the top header.

4. Select PostgreSQL from the Type drop-down.

Fig. 12 shows how to add psql data source in Grafana.

Figure 12: Adding psql data source in Grafana

The database user we specify when we add the data source should only be granted SELECT permissions on the specific database and tables we want to query. Grafana does not validate that the query is safe; the query could include any SQL statement, and statements like DELETE FROM user; and DROP TABLE user; would be executed. To protect against this, it is highly recommended to create a specific PostgreSQL user with restricted permissions, and we should make sure the user gets no unwanted privileges from the public role. So, we have created a user called 'grafanareader' that has only SELECT permission on the database 'gnocchi' and the table 'th_bytes'. Below is how we create the user with these permissions:

First we log in to the database (i.e., gnocchi) we are going to use. Then, by running the following psql commands one by one in the command line, the permissions we want the user to have are assigned.

CREATE USER grafanareader WITH PASSWORD 'gr0f0n0';
GRANT USAGE ON SCHEMA public TO grafanareader;
GRANT SELECT ON th_bytes TO grafanareader;

4.5 Orchestrator

The statistics collected from ONOS by the orchestrator can either be stored on disk as a CSV file or inserted directly into the Gnocchi database by adding the following pieces of code to the orchestrator. In previous works, the statistics were written to disk as a CSV file. For this thesis, however, we need the statistics to be inserted directly into a time-series database while the orchestrator is running; even if we could later import the data from disk into our database, that would not be an efficient way to do it. So the preferred way is to insert the data directly into Gnocchi by modifying a few lines of code in StatisticsCollector.java and adding a new class (i.e., ConnectionManager.java) for the JDBC connection.

The code added in StatisticsCollector.java creates a hash map called 'switchMap' which stores the ID of each switch as a key and the throughput of that switch as its value, plus a line of code which puts throughput_in_a_period into the hash map.


Iterator<String> iter = packetsReceived.keySet().iterator();
int nb_SW = 0;

// hash map to store the calculated_th_bytes and calculated_th_packets
HashMap<String, String> switchMap = new HashMap<>();

...

double throughput_in_a_period =
        ((double) bytes_in_a_period / (1000000.0 * stats_duration_sec));
switchMap.put(key, String.valueOf(throughput_in_a_period));

Since Gnocchi uses PostgreSQL as its indexer database, a Java database connectivity (JDBC) class is added to the orchestrator to create a connection to the database. The code below creates the connection; the statistics received from the Statistics Collector class are then inserted into Gnocchi in real time every 40 seconds. The table has fourteen columns to store the throughput of the fourteen switches we are using in our topology in this thesis. Appendix A at the end of this thesis shows the whole code of this class.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionManager {

    // JDBC connection parameters for the local Gnocchi/PostgreSQL database
    private static final String url =
            "jdbc:postgresql://localhost:5432/gnocchi";
    private static final String username = "ubuntu";
    private static final String password = "passw0rd";

    private static Connection con;

    // opens the connection used to insert the collected statistics
    public static Connection getConnection() {
        try {
            con = DriverManager.getConnection(url, username, password);
        } catch (SQLException ex) {
            System.out.println("Failed to create the database connection.");
        }
        return con;
    }
}
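The insertion code itself is reported in full in Appendix A. As a rough sketch of the idea, a method along the following lines could write the fourteen collected values into the th_bytes table through a JDBC PreparedStatement (it additionally needs java.sql.PreparedStatement and java.util.HashMap imports). The method name, the sw1..sw14 column names and the "switch<i>" key format are assumptions made for illustration, not the code of Appendix A:

    // illustrative sketch: writes one row with the throughput of the 14 switches;
    // column names sw1..sw14 and the "switch<i>" key format are assumptions
    public static void insertStatistics(HashMap<String, String> switchMap) {
        String sql = "INSERT INTO th_bytes (sw1, sw2, sw3, sw4, sw5, sw6, sw7, "
                   + "sw8, sw9, sw10, sw11, sw12, sw13, sw14) "
                   + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
        try (Connection con = getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            // one column per switch, in switch order
            for (int i = 1; i <= 14; i++) {
                String value = switchMap.get("switch" + i);
                ps.setDouble(i, value != null ? Double.parseDouble(value) : 0.0);
            }
            ps.executeUpdate();
        } catch (SQLException ex) {
            System.out.println("Failed to insert the statistics.");
        }
    }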

4.6 Workflow

The sequence diagram presented below displays the overall workflow of the whole system used in this thesis. To set up and run the whole process, we follow the workflow below:


Figure 13: Workflow sequence diagram

At first, Mininet creates the topology of the network we are using in this thesis. The topology has fourteen switches; three of them are connected to emulated cloud platforms containing a set of virtual functions and the remaining ones are "normal" OpenFlow switches attached to one or more hosts. As we can see in the sequence diagram, the monitoring engine does many jobs in this system. In the beginning, it monitors the behavior of the network by collecting the throughput of all switches and then it notifies the orchestrator of the network status. Next, it sends the collected statistics to the time-series database (i.e., Gnocchi). Once the data is stored inside Gnocchi, it is visualized using Grafana, the graphical representation tool integrated with Gnocchi. If the load of the switches exceeds the specified threshold, the monitoring engine sends a threshold-exceeded notification to the orchestrator. Finally, the orchestrator takes an action to recover from the network congestion by redirecting the traffic to another path segment, if available.

In the following, we summarize the different steps necessary for the execution of the scenario evaluated in this thesis.


1. Run Gnocchi and metricd:

Open a terminal and run the Gnocchi API daemon first as follows.

gnocchi-api

Next, open a new terminal and run the metricd daemon:

gnocchi-metricd

2. Run ONOS controller:

ubuntu@ubuntu-VirtualBox:~$ cd onos
ubuntu@ubuntu-VirtualBox:~/onos$ ok
onos>

Check whether ONOS runs correctly by running the following command:

onos> apps -a -s

If running this command returns the list of active applications, ONOS is running correctly and we can go to the next step and run Mininet.

3. Run Mininet:

cd mininet/topologies
sudo mn -c

Sometimes an error may happen during topology creation if previously created topologies were left behind. Running the mn -c cleanup command solves the problem, so that the topology can then be created successfully; otherwise this step can be skipped.

sudo python NSF.py

4. Run the Orchestrator: open the Java project in NetBeans and run the following classes in order.

• run StatisticsCollector.java

• run DSE.java

After running DSE, in order for the hosts to be discovered by StatisticsCollector, the pingall command should be run in Mininet. It is also possible to ping specific hosts, for example: h1 ping h7. The next thing to do is to check that the hosts are discovered in StatisticsCollector; after that, the following classes are run.

• run BNS.java

• run ApplicationEntity.java

5. Start the grafana-server process (sudo service grafana-server start, as shown in subsection 4.4.2).


5 Experiments

In these experiments, the topology we consider has a total of fourteen switches. Of these, three are connected to emulated cloud platforms where a set of Virtual Functions (VFs) are deployed; they are switch3, switch7, and switch11. The remaining eleven switches are all "normal" OpenFlow switches that only forward traffic. The VFs deployed in the cloud platforms have the same type and the choice of the cloud platform is indifferent. Some of the switches are connected to two hosts and some others are connected only to one host. The traffic sent from source to destination could be UDP or TCP; in these experiments, we have chosen to use UDP traffic because it allows us to easily tune the amount of traffic to be sent. The UDP traffic is generated using iperf and is sent from a randomly chosen source host to a randomly chosen destination host. Along the path of the traffic from source to destination, one of the VNFs is randomly chosen at a time to be traversed by the traffic. In the following graphs, we are going to see how the traffic is managed by the orchestrator when the network is empty and when it is overloaded during the traffic flow from source to destination hosts. To this end, Table 1 summarizes the set of parameters that will be tuned and set to specific values, allowing us to evaluate the behavior of the SDN orchestrator under specific scenarios.

Parameter | Description | Value
Traffic   | amount of traffic sent using iperf for every installed request | variable
N         | number of requests sent in every scenario | variable
AD        | adaptation duration: interval of time separating two consecutive path redirections in case of overloaded switches | 200 sec
SD        | stats duration: interval of time separating two consecutive statistics collections | 40 sec
Th        | threshold: maximum switch overload | variable
swN       | number of switches displayed in the graph | variable
R         | number of redirections displayed in the graph | 1 or 2

Table 1: Experiments parameters
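For reference, the UDP traffic of Table 1 is generated with iperf between the chosen Mininet hosts. A minimal sketch of the kind of commands involved is given below; the host address, rate and duration are only examples, the actual values follow the parameters of each experiment:

# on the destination host (server side)
iperf -s -u

# on the source host (client side): send 5 Mbit/s of UDP traffic for 10 minutes
iperf -c 10.0.0.10 -u -b 5M -t 600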


5.1 Graphs with a single request

In this subsection we demonstrate the behavior of the switches in our topology when a single request is sent for the setup of a service chain path. For each request, a different traffic size is sent from the source to the destination endpoint. The VNFs and the endpoints used in our experiments are specified by unique IP addresses.

5.1.1 Traffic = 1Mb

Fig. 14 shows the status of the three switches that are connected to the emulated cloud platforms when 1Mb of UDP traffic is sent between the endpoints. A single request is sent from host1 to ask the orchestrator for a service path chain setup. As soon as the orchestrator accepts this request, it selects a set of switches and a single virtual function that satisfies the request's requirements. The traffic then traverses this set of switches on its path towards the destination. In this first graph, switch3 is the one selected by the orchestrator among the switches connected to the emulated cloud platforms. The orchestrator also periodically collects the throughput of all switches in the network and uses this data to check if the load of the switches exceeds the specified threshold. Here, however, the traffic of switch3 is always below the 1Mb threshold; as a result, there are no overloaded switches and no redirection happens.

Figure 14: N = 1, Traffic = 1Mb, swN = 3

The same behavior is displayed for all the switches in the topology in Fig. 15. Here, as we can observe from the graph, the set of switches selected by the orchestrator for this path, which are traversed by the traffic, shows a non-zero throughput while the other switches are idle.


Figure 15: N = 1, Traffic = 1Mb, swN = 14

5.1.2 Traffic = 10Mb

In Fig. 16 we sent a request to the orchestrator for the setup of a service chain path from host1 to host2 with 10Mb of traffic. We already specified host1 and host2 as the source and destination endpoints in the orchestrator. After accepting this request for a path setup, the orchestrator randomly chooses one switch among the three switches connected to the emulated cloud platforms, and it also selects a set of switches satisfying our requirements to be traversed by the traffic during its flow in the path chain. In this graph, switch3 is selected by the orchestrator in the first segment. Moreover, the orchestrator collects the throughput performance of all switches and, based on that, it checks whether the switches are overloaded or not. To adapt the path chain if an overload happens, it compares the throughput of switch3 with the specified threshold every 200 seconds. Since the traffic of this switch is greater than the specified 1Mb threshold, as we can observe in the graph, switch3 becomes overloaded. The orchestrator then takes an action to adapt the service path chain by redirecting the path to a newly selected set of switches; this set of switches makes up the second path segment to be traversed by the traffic on its way to the destination. As soon as the orchestrator learns about the overloaded switch, it performs a first redirection of the service path chain to the newly selected path segment and, as a result, the traffic of switch3 begins to drop. A graphical representation of the behavior of the network, specifically the behavior of the three switches connected to the emulated cloud platforms, is displayed in Fig. 16.


Figure 16: N = 1, Traffic = 10Mb, R = 1, swN = 3

We also present the behavior of all switches in our network topology in the following graph. Although everything else is similar to Fig. 16, the figure below shows the behavior of all fourteen switches. As we can recognize from the graph, not all the switches are used during the first segment of the service chain path. After an overload happens, the orchestrator redirects the traffic flow to a new path segment and, as a result, it selects a new set of switches for the new segment of the service path chain. We can observe from the graph that switch1 and switch10 display a non-zero throughput, which means they were part of the first segment of the service path chain. The graph also shows that some switches traversed previously are chosen again in the second path segment. The traffic of the remaining switches drops to zero and they are replaced by the newly chosen switches from the topology; this is because we are considering only one request, so no other flows are traversing the switches. This graph only shows the behavior of the network and the recovery mechanism taken by the orchestrator to adapt the service path chain until the first redirection happens. In Fig. 18 and 19 we will see whether the properties of the network observed up to the first redirection remain consistent after this redirection.


Figure 17: N = 1, Traffic = 10Mb, R = 1, swN = 14

As mentioned above, Fig. 18 and 19 below show the behavior of the three switches not only until the first redirection but also after it. Due to the first redirection, switch11 was randomly selected to be traversed by the traffic in the second path segment. Throughout the path of the traffic in this segment, the orchestrator periodically checks the load of the switches in the network. As soon as switch11 becomes overloaded (i.e., its traffic is greater than the specified threshold), the orchestrator triggers a re-provisioning of the service path chain by redirecting to a new set of switches. Now, switch3 is chosen to replace the overloaded switch in the next segment, while the traffic of the previous switch declines and finally drops to zero. The same behavior continues again and again until the source stops sending traffic to the destination.


Figure 18: N = 1, Traffic = 10Mb, R = 2, swN = 3

While the previous graph visualizes the behavior of the switches connected to the emulated cloud platforms, Fig. 19 displays the behavior of all switches during the traffic flow in the path segment and the actions taken by the orchestrator to adapt the installed service chaining path to the status of the network during a congestion. We can observe from the graph that the new set of switches chosen in this path segment is not exactly the same as the previous path segment's set of switches. This is because the orchestrator does not choose the set of switches deterministically; instead, it chooses them randomly, as long as they satisfy the request's requirements. Among the switches which are part of this path segment, switch9 has the highest traffic, as shown in the graph, because it has been selected by the orchestrator to participate in both segments.


Figure 19: N = 1, Traffic = 10Mb, R = 2, swN = 14

5.1.3 Traffic = 5Mb

In this subsection, a different traffic size, less than 10Mb and greater than 1Mb, is used. In the four graphs presented below, 5Mb of UDP traffic is sent to request the orchestrator for the setup of a service chain path between the specified endpoints. We will examine them one by one; their behavior during the traffic flow on the service path chain and the recovery actions taken by the orchestrator are presented in the graphs.

Even though the traffic size sent in the following graphs is smaller than the 10Mb traffic sent in the previous graphs, the behavior of the three switches, and most importantly the measures taken by the orchestrator to adapt the service chain path when the randomly chosen switch is overloaded, are almost the same. In Fig. 20, a request for the setup of a service path chain from host1 to host10 is sent. Based on the specified requirements, the orchestrator prepares a sequence of path segments for the traffic flow. It chooses switch3, among the three switches connected to the cloud platforms, as the first switch to be traversed by the traffic in the first path segment. While the orchestrator monitors the availability status of the switches periodically, after some minutes the throughput of switch3 exceeds the threshold and the switch becomes overloaded. So, the orchestrator decides to overcome this network congestion by redirecting the traffic to a new switch connected to the emulated cloud platforms and a newly chosen unloaded set of switches. As we can observe from the graph, the newly selected switch among those connected to the emulated cloud platforms is switch11. At the point of the redirection, the traffic of switch3 starts to drop while the throughput of switch11 increases. Only the three switches connected to the cloud platforms are displayed in this graph, and it shows only the first redirection; later on, we will see the behavior of the switches after the first redirection in Fig. 22 and 23.

Figure 20: N = 1, Traffic = 5Mb, R = 1, swN = 3

All the details specified above for Fig. 20 are also valid for Fig. 21, but here we present the behavior of all switches in our topology during the traffic flow between the source and destination endpoints. We can see from the graph that sw1, sw9 and sw10 are the switches traversed by the traffic in the first path segment until the first redirection happens. It is also clearly seen that the throughput of sw1 and sw10 does not drop; instead, they continue as part of the set of switches in the second path segment. On the other hand, the traffic of switch9 starts to drop at the point of redirection and finally becomes zero.


Figure 21: N = 1, Traffic = 5Mb, R = 1, swN = 14

In the following, we examine graphs with two redirections, in Fig. 22 and 23, to compare the network behavior with the graphs that display only the first redirection. Fig. 22 plots the status of the switches connected to a cloud platform while Fig. 23 plots all the switches of the topology. When switch3 is overloaded, the orchestrator chooses switch11 among the three switches connected to a cloud platform to perform the first redirection. Once the flow rules are installed in the switches involved in the new path, traffic starts flowing and we clearly observe that the throughput starts increasing in switch11 while it drops to zero in switch3. After a few minutes, since the value of the threshold is low, switch11 becomes overloaded, and the orchestrator performs a second redirection, choosing again switch3, which was released a few minutes earlier. This behavior is simplistic and allows us to show the redirection process; a more realistic scenario would consider more requests and higher values of the threshold.


Figure 22: N = 1, Traffic = 5Mb, R = 2, swN = 3

The graph displayed below has the same properties as the above figure, the only difference being the number of switches displayed. Fig. 23 presents the behavior of all fourteen switches during the traffic flow on this service chain path. Among all the switches displayed, switch9 shows the highest throughput, since it is chosen by the orchestrator as part of the set of switches used in both segments of the service chain path.

Figure 23: N = 1, Traffic = 5Mb, R = 2, swN = 14

5.2 Graphs with three requests

In this subsection we will show the behavior of the switches when more than one request is sent, i.e. more than one UDP flow will be sent from the same source (i.e. the client) to a single destination (i.e. the server). We again use host1 and host10 as the endpoints of the service chain path (i.e. source and destination).

5.2.1 Traffic = 5Mb

In both Fig. 24 and 25 we sent a path setup request to the orchestrator by sending 5Mb of UDP traffic three times from host1 to host10. After accepting the request for a service path setup, the orchestrator finds a sequence of path segments which satisfies the sum of the requirements of all three requests. As Fig. 24 displays, to manage this request switch3 is selected, among the three switches connected to the emulated cloud platforms, as the first switch of this path segment. The traffic flow then traverses this switch and the other switches of this path segment on its way to the destination host. Moreover, the orchestrator periodically monitors this set of switches in the whole network and collects the throughput of each of them. Based on the collected throughput, it periodically checks whether the throughput of switch3 is greater than the specified threshold. In this case, the traffic of switch3 exceeds the 1Mb threshold and, as a result, the switch becomes overloaded after a few minutes. The orchestrator's job is now to dynamically adapt the installed service chain path to the network status by redirecting the traffic to a new path segment. To do so, it randomly chooses a set of switches for the new segment, selecting switch11 to be traversed by the traffic on its way to the destination. Next, it redirects the traffic to the newly chosen path segment so that it is able to preserve the QoS performance and the SLA requirements. In the following graph, the behavior of the three switches connected to the cloud platforms is presented.
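
To clarify the adaptation logic just described, the sketch below outlines the periodic check performed by the orchestrator in a highly simplified form: collect the throughput of the monitored switch, compare it with the threshold and trigger a redirection when the threshold is exceeded. The function names, the polling interval and the way the throughput is obtained are assumptions made for illustration; they do not correspond to the actual orchestrator implementation.

    # Simplified sketch of the orchestrator's periodic monitoring/adaptation loop.
    # get_throughput_mbps() and redirect_to_new_segment() are assumed callables.
    import time

    THRESHOLD_MBPS = 1.0     # threshold used in these experiments
    POLL_PERIOD_S = 30       # assumed monitoring period

    def monitor_and_adapt(get_throughput_mbps, redirect_to_new_segment, switch_id):
        while True:
            load = get_throughput_mbps(switch_id)
            if load > THRESHOLD_MBPS:
                # Congestion detected: install flow rules on a new, unloaded
                # path segment and keep monitoring the newly selected switch.
                switch_id = redirect_to_new_segment(switch_id)
            time.sleep(POLL_PERIOD_S)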


Figure 24: N = 3, Traffic = 5Mb, R = 1, swN = 3

The graph below displays all the switches in our topology during the traffic flow on this service chain path. Apart from the difference in the number of switches displayed, all the other details are the same as in the above figure. In this graph, the set of switches used in the first path segment is clearly displayed with non-zero traffic, while the remaining switches stay idle during this segment, i.e. they were not selected by the orchestrator for the first path segment of the traffic flow. When a redirection happens, as in our case, some of the switches that were idle are selected by the orchestrator to be used in the succeeding segment. Moreover, not only new switches are selected: some switches that took part in the first path segment are also selected to be part of the second one. The throughput of the remaining switches of the first segment declines until it reaches zero and they become idle.


Figure 25: N = 3, Traffic = 5Mb, R = 1, swN = 14

Fig. 26 shows the behavior of the switches after the orchestrator took a recovery action by performing the first redirection, in order to adapt the service chain path to the congestion that happened in switch3. In this second path segment, switch11 and a new set of switches are selected randomly, so the traffic now traverses these switches on its way to the destination. The orchestrator monitors the status of the switches in the second segment periodically, in the same way as before. Moreover, after switch11 becomes overloaded, switch3 is chosen again to be used in the succeeding segment. As we can observe, this graph shows only the behavior of the three switches connected to the cloud platforms.

Figure 26: N = 3, Traffic = 5Mb, R = 2, swN = 3

Fig. 27, instead, shows the behavior of all the switches during the two path segments of the traffic flow. As discussed for the above graphs, some of the switches are displayed with traffic, while the remaining switches, which are not part of the path segments, are displayed with zero traffic. It also shows that the behavior of the switches is similar to the one observed for the first redirection.

Figure 27: N = 3, Traffic = 5Mb, R = 2, swN = 14

5.3 Graphs with a single request and 10Mb threshold

In the above sections, we have presented the behavior of the network when sending a different number of requests and different traffic sizes. The threshold was fixed to a constant value (i.e. 1.0Mb). In this subsection, we will examine the effect of the threshold on the network by changing its value.

In Fig. 28 and 29, a 10Mb traffic, equal to the threshold, is sent from the source to the destination. Even though this traffic is higher than some of the traffic sizes used in the previous graphs, it is not enough to overload the switches and no redirection happens in these graphs. This is because iperf makes a difference between the specified bandwidth and the traffic actually sent, which is always lower. So the set of switches chosen at the beginning of the traffic flow by the orchestrator is used for the whole service chain path. As before, the orchestrator randomly chooses switch3 and another set of switches in order to set up the requested service chain path, and it periodically monitors the network to check its status. Here, the status of the network is fine and no overloaded switches are present. The first graph below displays the behavior of the three switches connected to the emulated cloud platforms, while the second graph displays the behavior of all the switches in the network. The reason why the traffic displayed in the graphs is always lower than the bandwidth specified on the iperf command line is that the UDP traffic actually transferred on the links from the client to the server is lower than the requested rate. This finding is a result of the utilization of the graphical interface, which allowed us to understand, in real time, the exact amount of traffic sent in the network.
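
The values plotted in the graphs come from the statistics collected by the orchestrator, i.e. from the byte counters of the switch ports, so they reflect the traffic actually transferred on the links rather than the rate requested on the iperf command line. A minimal sketch of this conversion, assuming two successive byte-counter readings taken at a known interval, is shown below; the sampling interval and the counter source are assumptions for illustration.

    # Sketch: derive throughput in Mbit/s from two successive byte-counter
    # readings of a switch port (the sampling interval is assumed known).
    def throughput_mbps(bytes_prev, bytes_curr, interval_s):
        delta_bits = (bytes_curr - bytes_prev) * 8
        return delta_bits / (interval_s * 1_000_000)

    # Example: 18,750,000 bytes transferred in 30 s correspond to 5 Mbit/s.
    print(throughput_mbps(0, 18750000, 30))   # -> 5.0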

Figure 28: N = 1, Traffic = 10Mb, swN = 3, Th = 10Mb

Figure 29: N = 1, Traffic = 10Mb, swN = 14, Th = 10Mb

Now let us increase the 10Mb traffic to 20Mb to observe whether it has any effect on the behavior of the switches. As Fig. 30 and 31 display, there is a small increase in the load of the switches compared with the above graphs with 10Mb traffic, but the load still does not exceed the 10Mb threshold and no redirection happens. The behavior is more or less the same as in the figures presented above in this subsection. The whole process of requesting the setup of a service chain path and the management of the request by the orchestrator is similar to the previous graphs. The switch randomly selected among the switches connected to the emulated cloud platforms is again switch3. Finally, the graphs below display the behavior of the three switches and of all the switches in the topology, respectively.

Figure 30: N = 1, Traffic = 20Mb, swN = 3, Th = 10Mb

Figure 31: N = 1, Traffic = 20Mb, swN = 14, Th = 10Mb

6 Conclusion

This thesis has presented a graphical interface for an SDN monitoring platform. The platform leverages Grafana, a popular graphical representation tool, and integrates it with Gnocchi, a time-series database used to store the statistics collected by the orchestrator. The interface visualizes clear and easy-to-understand graphs of the network behavior and of the recovery actions taken by the orchestrator during congestion events or QoS degradations.
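
As an illustration of this integration, the snippet below sketches how a throughput sample collected by the orchestrator could be pushed to Gnocchi from Python using gnocchiclient, so that Grafana can later query it. The endpoint URL, the credentials and the metric identifier are placeholders and are not the actual configuration of the platform.

    # Sketch: store one throughput sample in Gnocchi with gnocchiclient.
    # Endpoint, user and metric id are placeholders (assumptions).
    from datetime import datetime, timezone
    from gnocchiclient import auth
    from gnocchiclient.v1 import client

    auth_plugin = auth.GnocchiBasicPlugin(user='admin',
                                          endpoint='http://localhost:8041')
    gnocchi = client.Client(session_options={'auth': auth_plugin})

    METRIC_ID = '00000000-0000-0000-0000-000000000000'   # pre-created metric

    def push_sample(value_mbps):
        gnocchi.metric.add_measures(METRIC_ID, [{
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'value': value_mbps,
        }])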

As the experimental results presented in this thesis show, the orchestrator periodically collects the throughput of all switches in the network and, based on these statistics, it evaluates their load. When the throughput associated with the virtual function exceeds the specified threshold, the orchestrator redirects the traffic to a new path segment in order to dynamically adapt the service chain path to the network status.


References

[1] Grafana. Available [Online]: http://www.grafana.org
[2] Gnocchi. Available [Online]: https://gnocchi.xyz
[3] B. Astuto A. Nunes, M. Mendonca, X. Nguyen, K. Obraczka, and T. Turletti. A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks. IEEE, 2014
[4] D. Kreutz, F. M. V. Ramos, P. E. Veríssimo, C. E. Rothenberg, S. Azodolmolky, and S. Uhlig. Software-Defined Networking: A Comprehensive Survey. IEEE, Vol. 103, No. 1, January 2015
[5] Software-Defined Networking. Available [Online]: https://en.wikipedia.org/wiki/Software-defined_networking
[6] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba. Network Function Virtualization: State-of-the-Art and Research Challenges. IEEE Communications Surveys & Tutorials, Vol. 18, No. 1, First Quarter 2016
[7] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee. Network Function Virtualization: Challenges and Opportunities for Innovations. IEEE Communications Magazine, February 2015
[8] K. Kaur, J. Singh and N. S. Ghumman. Mininet as Software Defined Networking Testing Platform. International Conference on Communication, Computing & Systems (ICCCS–2014)
[9] Y. Zhang. Network Function Virtualization: Concepts and Applicability in 5G Networks, First Edition. 2018
[10] H. A. Ammar, Y. Nasser, A. Kayssi. Dynamic SDN Controllers-Switches Mapping For Load Balancing and Controller Failure Handling. Department of Electrical and Computer Engineering, American University of Beirut, IEEE, 2017
[11] SDN controller. Available [Online]: https://goo.gl/ujLf4E
[12] ONOS. Available [Online]: https://onosproject.org/
[13] ONOS2. Available [Online]: https://www.opennetworking.org/onos/
[14] Z. K. Khattak, M. Awais and A. Iqbal. Performance Evaluation of OpenDaylight SDN Controller. Department of Computer Science, Namal College Mianwali, Pakistan. IEEE, 2014
[15] OpenDaylight Consortium. Available [Online]: https://www.opendaylight.org/
[16] Understanding the SDN architecture. Available [Online]: https://www.sdxcentral.com/sdn/definitions/inside-sdn-architecture/
[17] H. Polat and O. Polat. The Effects of DoS Attacks on ODL and POX SDN Controllers. 8th International Conference on Information Technology (ICIT), IEEE, 2017
[18] J. Singh, S. Kaur and N. S. Ghumman. Network Programmability Using POX Controller. International Conference on Communication, Computing & Systems (ICCCS–2014)
[19] O. Salman, I. H. Elhajj, A. Kayssi and A. Chehab. SDN Controllers: A Comparative Study. Proceedings of the 18th Mediterranean Electrotechnical Conference MELECON 2016, IEEE
[20] ONOS architecture. Available [Online]: https://thenewstack.io/open-source-sdn-controllers-part-vii-onos/
[21] NFV Architecture. Available [Online]: https://goo.gl/wM6CTS
[22] M. Gharbaoui, S. Fichera, P. Castoldi and B. Martini. Network Orchestrator for QoS-enabled Service Function Chaining in reliable NFV/SDN infrastructure. IEEE, 2017
[23] B. Martini, M. Gharbaoui, S. Fichera and P. Castoldi. Network Orchestration in Reliable 5G/NFV/SDN infrastructures. Scuola Superiore Sant'Anna & CNIT, Pisa, Italy
[24] InfluxDB. Available [Online]: www.influxdata.com
[25] Prometheus. Available [Online]: https://prometheus.io/docs/
[26] Prometheus. Available [Online]: https://goo.gl/4FNSho
[27] Grafana. Available [Online]: https://github.com/grafana/grafana
[28] Kibana. Available [Online]: https://www.elastic.co/products/kibana, https://en.wikipedia.org/wiki/Kibana
[29] Graphite. Available [Online]: https://graphiteapp.org/
[30] Mininet. Available [Online]: http://mininet.org/
