Analysis, Design and Implementation of an SDN-based Multi-Radio Access and Backhauling Architecture


Academic year: 2021



Department of Information Engineering

Fundació i2CAT - The Internet Research Center

MSc Degree in

Telecommunications Engineering

Analysis, Design and Implementation of an

SDN-based Multi-Radio Access and Backhauling

Architecture

Supervisors:

Prof. Michele Pagano, Università di Pisa

Prof. Stefano Giordano, Università di Pisa

PhD. Daniel Camps Mur, Fundació i2CAT

Prof. David Rincón Rivera, Universitat Politècnica de Catalunya

Candidate:

Matteo Grandi


With the introduction of the fourth generation network (4G), mobile data traffic experienced an exponential increase during the last decade and is expected to grow substantially in the coming years with the release of the fifth generation network (5G). Data rates hundreds of times higher than the current ones, massive media content and a plethora of new end-user services will exceed the boundaries of existing network architectures, calling for the creation of new network paradigms and solutions able to face these challenges. Concepts and visions related to the Information and Communications Technology (ICT) evolution, such as the Internet of Things (IoT), hundreds of billions of connected devices, wearables, the Industrial Internet, etc., describe the range of new services, each coming with specific requirements, that the network will have to handle.

In order to face these challenges, the entire network architecture has to evolve and adapt. The edge and access networks serve as the starting point of this evolution towards the adaptability and flexibility required by 5G networks. Dense Small Cell (SC) deployments seem to be the answer to the high demand for coverage and capacity, two requirements that are hard or even impossible to address simultaneously with a conventional mobile network architecture.

The deployment of heterogeneous and ultra-dense networks also comprises the introduction of adaptive network features such as the integration of access nodes, dynamic resource allocation, per-tenant traffic management, etc. In order to achieve these self-management features, new mechanisms have to be investigated for the backhaul control and data plane management. The development of ultra-dense SC networks also requires the use of alternative technologies when it comes to providing connectivity between SCs and the core network: traditional wired solutions prove to be infeasible and costly. Under these circumstances, the use of wireless technologies to interconnect SCs and provide backhaul connectivity to all of them is a much more efficient and cheaper solution.

This thesis has the goal to present the design and implementation of a Software Defined Networking (SDN) based access and backhauling architecture to face the upcoming challenges of 5G. The architecture aims to provide mobile connectivity over wireless backhauling infrastructures using the Institute of Electrical and Electronics Engineers (IEEE) 802.11ac and IEEE 802.11n standards. First, this thesis describes the benefits of using SDN-based technologies over traditional architectures. Secondly, a novel SDN-based wireless backhaul architecture is presented, providing a description of its main features and the hardware used. Extensive experiments are conducted to discover the most suitable configuration for throughput maximization and to overcome the performance limitations caused by cross-channel interference. A description of the access network and its integration with the existing backhaul architecture is provided. Finally, the software components required for the management of the IP assignment procedure are designed, implemented and validated in the context of client access management.


Foremost, I would like to thank Professor Michele Pagano for giving me the great opportunity to perform this master thesis work and for his valuable feedback and timely suggestions.

I would like to express my sincere gratitude towards my advisor Ph.D. Daniel Camps Mur, i2CAT, for providing me with all the tools and sharing his enormous knowledge despite his busy schedule.

A special gratitude to Ph.D. August Betzler, i2CAT, who supported, advised, and patiently guided me inside and outside the office environment. His contribution to this thesis and to the whole experience is priceless. My sincere thanks to Ing. Flaminio Minerva, who helped fight the bureaucracy and turned the possibility of the i2CAT internship into reality. Thanks also for all the hints.

I would also like to thank my family and Sara. I could count on their support during the whole path, encouraging me anytime and anyway.

I gratefully acknowledge my many colleagues at i2CAT for providing a stimulating environment in which to work and learn. Thank you, Jacint, Joan Josep, Laura, Marisa, Marc, Miguel, Pouria, Ricardo, Shuaib, Xavier, and many others...

Finally, a sincere gratitude to the GDS team: Ezio, Anna, Paolo, Fabio, Gabriele, David and Andrea.


Abstract i

Acknowledgements v

List of Figures xi

List of Tables xiv

List of Acronyms and Abbreviations xvii

1 Introduction 1

1.1 Overview . . . 1

1.2 Structure of this thesis . . . 5

2 Network Architectures and Enabling Technologies 7

2.1 Traditional network structure limitations and new paradigm 8

2.2 The SDN architecture principle . . . 10

2.3 The Open vSwitch project . . . 13

2.4 The SDN Southbound Interface . . . 17

2.4.1 The OpenFlow protocol . . . 18

2.5 The SDN-Controller . . . 20

2.5.1 The OpenDaylight project . . . 24

2.6 The SDN Northbound Interface . . . 25


2.7.1 Backhaul/Fronthaul technology evolution . . . 28

2.7.2 IEEE 802.11ac as Backhaul for Small Cell deployment 32

2.7.2.1 Performance requirements . . . 34

2.7.3 Wi-Fi based access in Multi-RAT environments . . . 36

3 I2CAT’s Wireless SDN-based Backhauling Architecture 39

3.1 I2CAT’s SDN-based system model . . . 40

3.1.1 VLAN tagging . . . 43

3.2 Gateworks Ventana GW5410 SBC . . . 45

3.2.1 Wi-Fi adapters . . . 47

3.2.2 Antennas . . . 48

4 Experimental Study of Wireless Access and Backhaul Technology 51

4.1 Physical description of the devices . . . 52

4.2 Indoor testbed measurement campaign . . . 54

4.2.1 Single NIC indoor experiments . . . 59

4.2.2 Dual NIC indoor experiments . . . 62

4.2.3 Triple NIC indoor experiments . . . 68

4.3 Outdoor measurement campaign . . . 69

4.3.1 Outdoor tests using omnidirectional antennas . . . . 70

4.3.1.1 Single NIC outdoor omnidirectional experiments . . . 70

4.3.1.2 Dual NIC outdoor omnidirectional experiments . . . 72

4.3.1.3 Triple NIC outdoor omnidirectional experiments . . . 75

4.3.1.4 Channel pairs experiments . . . 76

4.3.2 Outdoor directive antennas experiments . . . 81

4.4 Experiments results summary . . . 86

5 Implementation of a Multi-Tenant SDN-based Wireless Radio Access 89

5.1 Access network design . . . 92


5.1.1 Access and Gateway node anatomy . . . 94

5.2 Client association process . . . 96

5.2.1 The dhcp-sesame-access bundle . . . 101

5.2.2 Duplicate IP address detection . . . 104

5.3 Validation of dhcp-sesame-access bundle . . . 106

6 Conclusions and Future Work 111


1.1 Global mobile traffic and device subscription - Cisco VNI 2017 estimation. . . 2

1.2 Percentage of devices and connections share - Cisco VNI 2017 estimation. . . 4

2.1 The three main layers of network functionality. . . 9

2.2 Traditional Networking vs SDN paradigm. . . 11

2.3 Fundamental abstraction of the SDN architecture. . . 12

2.4 Virtual Switch anatomy. . . 14

2.5 OvS packets processing design. . . 15

2.6 OvS main components. . . 16

2.7 OpenFlow basic Architecture and Component. . . 19

2.8 SDN Controller and its basic services. . . 20

2.9 OpenDaylight Controller releases’ roadmap. . . 25

2.10 Backhaul Network Technology. . . 26

2.11 Backhaul Architecture Migration. . . 31

2.12 Wireless access technologies supported by a SC. . . 37

3.1 I2CAT Backhaul and Access Networks model. . . 40

3.2 i2CAT’s SDN-based wireless backhauling architecture overview. . . 42

4.1 I2CAT SDN-based architecture deployment model. . . 52

4.2 Wireless device sketch. . . 53


4.4 Indoor test deployment in i2CAT office environment. . . 57

4.5 I2CAT indoor test methodology for Sub6 wireless device. . . 58

4.6 Single NIC throughput measured at the receiver side. . . 60

4.7 Single NIC experiment IEEE 802.11ac MAC frame aggregation sample. . . 61

4.8 Comparison between the adjacent channel pair 36+52 and the separate channel pair 36+149 in aggregate transmission. . . 63

4.9 Dual NIC aggregate data throughput in MP and AP+STA mode. . . 64

4.10 Dual NIC experiment IEEE 802.11ac MAC frame aggregation sample. . . 65

4.11 Dual NIC relay data throughput in MP and AP+STA mode. . . 66

4.12 Dual NIC aggregate and relay mode data throughput in MP mode using 20 and 6 dBm transmission power. . . 67

4.13 Triple NIC aggregate and relay mode data throughput in MP mode. . . 68

4.14 Single NIC throughput at the receiver side measured in the outdoor environment with omnidirectional antennas. . . 71

4.15 MCS and number of spatial streams (NSS) sample for channel 36. . . 72

4.16 Dual NIC aggregate data throughput in outdoor scenario with omnidirectional antennas. . . 73

4.17 Dual NIC relay data throughput in outdoor scenario with omnidirectional antennas. . . 74

4.18 MCS used on ch. 149 MP relay mode during dual NIC outdoor experiments. . . 74

4.19 Dual NIC aggregate and relay data throughput in outdoor scenario with omnidirectional antennas and 6 dBm transmitted power. . . 75

4.20 Triple NIC aggregate and relay data throughput in outdoor scenario with omnidirectional antennas. . . 76

4.21 Single and overall aggregate throughput for different channel pairs transmitting simultaneously using 80 MHz channel bandwidth. . . 78


4.22 UDP transmission on channel 52 in vertical (V) and horizontal (H) polarization. . . 79

4.23 Single and overall aggregate throughput for different channel pairs transmitting simultaneously using 40 MHz channel bandwidth. . . 80

4.24 Omnidirectional vs directive antennas comparison in single NIC transmission. . . 82

4.25 Three node outdoor testbed for directive antennas. . . 83

4.26 Single and overall throughput for different channel pairs in relay mode. . . 84

4.27 Single and overall throughput for different channel pairs in sink mode. . . 85

4.28 Single and overall throughput for different channel pairs in source mode. . . 85

4.29 Ideal vs overall aggregate throughput comparison. . . 86

5.1 Uplink and downlink end-to-end paths example. . . 91

5.2 I2CAT architecture reference scenario. . . 93

5.3 Access Node anatomy based on OvS. . . 95

5.4 Gateway node anatomy based on OvS. . . 95

5.5 The four basic DHCP messages exchanged during the IP address assignment procedure. . . 97

5.6 Simplified dhcp-sesame-access flowchart. . . 103

5.7 Simple testbed for testing the dhcp-sesame-access bundle. . 107

5.8 PacketIn packet captured on the host veth0 interface. . . 109


2.1 IEEE 802.11n and IEEE 802.11ac available channels list in the European Regulatory Domain. . . 32

2.2 IEEE 802.11ac channels width. . . 33

2.3 IEEE 802.11ac MCS definition. . . 33

2.4 IEEE 802.11ac Theoretical Data Rate for 1, 2 and 3 Spatial Streams. . . 35

3.1 Gateworks Ventana GW5410 Outstanding Characteristics. . 46

3.2 Compex System WLE900VX and WLE200NX outstanding characteristics comparison. . . 49

4.1 80 MHz available channels in the 5 GHz band. . . 77

4.2 MCSs and retry rates measured on the 40 and 80 MHz channel bandwidth simultaneous transmissions. . . 80

5.1 Key header fields of a DHCP Discovery packet. . . 98

5.2 Key header fields of a DHCP Offer packet. . . 99

5.3 Key header fields of a DHCP ACK packet. . . 101


Abbreviations

3G Third Generation

4G Fourth Generation

5G Fifth Generation

ACI Adjacent Channel Interference

AP Access Point

API Application Programming Interface

ARP Address Resolution Protocol

ATM Asynchronous Transfer Mode

BSC Base Station Controller

CAC Channel Availability Check

CapEx Capital Expenditure

DFS Dynamic Frequency Selection

DHCP Dynamic Host Configuration Protocol

DL Down-Link

GPS Global Positioning System

GRE Generic Routing Encapsulation

HSxPA High Speed Downlink/Uplink Packet Access

IEEE Institute of Electrical and Electronics Engineers

IM Instant Messaging

ISM Industrial, Scientific and Medical


ITU-T International Telecommunication Union - Telecommunication Standardization Sector

JVM Java Virtual Machine

LAN Local Area Network

LGI Long Guard Interval

LLID Local Link Identifier

LOS Line Of Sight

LTE Long Term Evolution

M2M Machine to Machine

MAC Medium Access Control

MCS Modulation and Coding Scheme

MNO Mobile Network Operator

MU-MIMO Multi-User Multiple-Input, Multiple Output

NOS Network Operating System

NSS Number of Spatial Streams

ODL OpenDaylight

OVSDB Open vSwitch Database

PCM Pulse Code Modulation

PDH Plesiochronous Digital Hierarchy

PLID Peer Link Identifier

PoE Power over Ethernet

RAN Radio Access Network

RAT Radio Access Technology

RBS Radio Base Station

RPC Remote Procedure Call

RSSI Received Signal Strength Indicator

SBC Single Board Computer

SC Small Cell

SDH Synchronous Digital Hierarchy

SDN Software Defined Networking

SGI Short Guard Interval

SLA Service Layer Abstraction

SNR Signal-to-Noise Ratio


SSID Service Set Identifier

TDM Time Division Multiplexing

U-NII Unlicensed National Information Infrastructure

UDP User Datagram Protocol

UL Up-Link

VM Virtual Machine

VoIP Voice over IP

VPN Virtual Private Network

VXLAN Virtual eXtensible Local Area Network

WCDMA Wideband Code Division Multiple Access

WLAN Wireless Local Area Network


Introduction

This chapter briefly introduces the research area in which this work took place. After a quick overview of the state of the art and the evolution trends of this technological field, the structure of this document is presented.

1.1 Overview

Mobile data traffic has grown 18-fold over the past 5 years, and 60 percent of the total mobile data traffic has been offloaded onto Wi-Fi. Recent studies [1], [2] show that mobile data traffic grew by over 63 percent in 2016, reaching 7.2 ExaBytes per month at the end of that year. Smartphones account for most of that growth, followed by Machine-to-Machine (M2M) communication modules (e.g. Global Positioning System (GPS) in vehicles, asset tracking systems in the shopping and manufacturing sectors, medical applications, wearables, etc.). The Ericsson Mobility Report for the year 2016 [2] forecasts 9.2 billion mobile subscriptions by the end of 2019 and 20 ExaBytes of mobile traffic generated each month. Cisco expects 49 ExaBytes per month by 2021.
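As a back-of-the-envelope check of these forecasts (a calculation of this thesis' own, not taken from the cited reports), the figures above imply a compound annual growth rate of roughly 47 percent between 2016 and 2021:

```python
# Implied compound annual growth rate (CAGR) of monthly mobile traffic,
# using only the figures cited above: 7.2 EB/month at the end of 2016
# and the 49 EB/month Cisco expects by 2021, i.e. 5 years later.
def cagr(initial, final, years):
    return (final / initial) ** (1 / years) - 1

growth = cagr(7.2, 49.0, 5)   # fractional annual growth rate, about 0.47
```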


Figure 1.1: Global mobile traffic and device subscription - Cisco VNI 2017 estimation [1].


in a high data traffic use. It is also expected that by 2021 nearly three-quarters of all devices connected to the mobile network will be “smart” devices. The vast majority of mobile data traffic (around 98 percent) will originate from these smart devices, up from 89 percent in 2016 [1]. Smartphones, tablets and phablets (electronic devices combining the functions of a phone and a tablet) will continue to dominate the mobile traffic, creating a plethora of new service demands. Mobile devices are getting smarter, with an increasing number of user equipments offering higher computing resources and network capabilities that create a growing demand for more capable and intelligent networks.

Mobile operators are finding it hard to provide sufficient data rates from the cellular base stations to their core network and to ensure mobile service availability within densely populated areas, such as shopping centers and transportation terminals. The traditional macro cell oriented mobile network architecture does not suit these environments, considering the high concentration of end-users. A solution that pledges to provide services to a large number of users concentrated in a small area consists in dividing the subscribers’ data traffic over different SCs. The SC deployment avoids other typical macro cell problems such as the need for a larger power supply, expensive cooling systems, and the occupancy of physical space in an appropriate and secure location.

To meet the growing demand of data capacity, services availability and high throughput dictated by the future traffic volumes and subscribers, the existing radio access network (RAN) architecture must be reconsidered and enhanced.

The transition from macro cells to SCs is mainly driven by the high data requirements that characterize modern Radio Access Technologies (RATs) like High Speed Uplink/Downlink Packet Access-Wideband Code Division Multiple Access (HSxPA-WCDMA), Long Term Evolution (LTE) and 5G. One of the major 5G challenges is to define a backhaul (BH) and fronthaul (FH) architecture that meets the requirements of future 5G networks. This can be achieved by means of a converged wireless and optical network that provides a flexible infrastructure capable of supporting


Figure 1.2: Percentage of devices and connections share - Cisco VNI 2017 estimation [1].


a diverse and time-varying set of services. A significant technical hurdle is represented by the backhauling of outdoor SCs. These SCs are normally mounted on lamp posts and street furniture. The backhaul and intra-SC connectivity represents a challenge that needs to be addressed with efficient wireless technology [3]. The wireless elements of the proposed infrastructure include both millimeter-wave and sub-6 GHz technologies, which, combined, offer a good balance between bandwidth, range and flexibility.

In this thesis, the architecture and the design choices of Sub6 GHz wireless devices are described. These devices are designed to provide a comprehensive set of parameters useful for managing and orchestrating the behavior of the entire access and backhaul network. The network management is possible thanks to specific software modules. In particular, the design of the software module that manages the client access procedure is another goal of this thesis.

1.2 Structure of this thesis

Chapter 1 introduces the context of research and the technological trend for access and backhaul networks. Chapter 2 describes the network architecture and its evolution, with particular attention to the SDN paradigm and its benefits in comparison to traditional network architectures. A description of the key features of the hardware devices used for this thesis project is provided. Chapter 3 illustrates the wireless SDN-based backhaul architecture designed by i2CAT. In order to fulfill the requirements of the access and backhauling network, a hardware device is designed and composed for use in future network deployments. Chapter 4 is devoted exclusively to determining the most suitable architecture for the deployment scenario, to the description of the experiments conducted to characterize the achievable performance in terms of throughput and to the detailed elaboration of the obtained results. Finally, Chapter 5 focuses on describing and validating the software components developed to handle the IP address assignment procedure that occurs during a client’s access. Conclusions on the investigation carried out and the next steps to extend and improve the presented architecture are drawn in Chapter 6.


Network Architectures and Enabling Technologies

The incredible growth of devices capable of connecting to the Internet has created a new paradigm where almost everything and everyone is interconnected, providing nearly global accessibility. This imposes important challenges on traditional IP networks, which are widely adopted but no longer flexible enough to face the large amount of users and services, the immense variety of end-user expectations, and the types of offered services. Further, with the evolution of communication technologies, network architectures must evolve in parallel to meet the new extensive service demands. New solutions such as ultra-dense SCs and wireless backhauling architectures represent a mandatory step to be investigated in order to meet the future 5G network requirements. The enormous potential demonstrated by these technologies is driving researchers and developers towards their adoption for the development of the 5G network. The current investigation centers the attention on defining and developing new network concepts able to provide the means to easily configure and dynamically reconfigure network topologies, applying specific policies depending on the network state and conditions. To this aim, Software Defined Networking (SDN) is a paradigm that separates the network control logic from the underlying routers and switches responsible for the data plane, breaking the vertical integration. This means providing a higher level of flexibility and a reduction of the network management complexity. Introducing dynamic programmability in forwarding devices and the centralization of the “networking intelligence” are two key features of SDN enabling this new paradigm.

This chapter provides a basic view of the SDN paradigm, the backhaul and the access infrastructure for dense SC deployments. After a brief introduction to the SDN paradigm and the basic principles of the architecture, its fundamental components are presented, followed by an overview of access and backhaul infrastructures. This overview covers aspects like the backhaul evolution, the benefits brought by the adoption of the SDN paradigm and the access technologies involved. Finally, the attention is focused on the IEEE 802.11ac standard and its role in SC deployments in a multi-RAT environment.

2.1 Traditional network structure limitations and new paradigm

The evolution of traditional IP networks has led to the development of control and transport network protocols running inside physical routers and switches. In order to cope with the high complexity of traditional IP networks, techniques like traffic routing policies, fault recovery, load balancing, etc. are required. The poor flexibility and dynamicity of traditional routers and switches makes it hard to manage and configure them to satisfy these requirements. In order to implement high-level network policies, network operators need to configure each individual network device separately, using low-level and often vendor-specific commands [4]. Furthermore, this configuration complexity does not comply with dynamic fault reaction and load change adaptation mechanisms. Automatic reconfiguration and fast response or fail-safe mechanisms are therefore a hard challenge in conventional network structures.


Figure 2.1: The three main layers of network functionality.

In addition to the management complexity, another big limitation of the traditional structure is that it is vertically integrated. From an abstraction point of view, a generic computer network can be divided into three functionality planes: the data plane, the control plane and the management plane (see Fig. 2.1). The data plane represents all the physical devices involved in networking actions, which are responsible for forwarding data in the most efficient way possible. The control plane is in charge of populating the forwarding tables of the data plane elements. The management plane includes all the services and tools used to monitor and manage the network functionality. It is in this management plane where network policies are defined. The control plane enforces these policies through a protocol, and finally the data plane applies and executes them, properly forwarding data.

In traditional IP networks these three layers coexist in the same networking devices to obtain high network resilience, optimized data processing times (and thus small delays) and a guarantee of rapid performance increase. However, the integration of control and data plane in the same networking device reduces flexibility and places a brake on the configuration prospects, causing inertia in new technology deployment. This approach also produces a rather static architecture, as written in [5], [6] and [7]. For this reason, traditional network architectures are considered rigid and complex to manage. They are also very susceptible to misconfiguration errors: a single misconfigured device can cause important network faults such as forwarding loops, wrong routing paths, service violations, etc.

Moreover, even though some vendors offer solutions working on proprietary hardware and operating systems, network operators have to maintain all of these different solutions, which can be a costly and time-consuming procedure. SDN is the key enabling technology to overcome these limitations.

2.2 The SDN architecture principle

SDN is a networking paradigm that pledges to overcome the limitations of traditional networking infrastructures. Breaking the vertical integration by separating the network’s control logic (the control plane) from the underlying physical devices that forward the traffic (the data plane) is the first key change. The second is that, thanks to the split between data and control planes, the physical data structure is strongly simplified. All the routers, switches and middleboxes become simple forwarding devices, and the control logic is implemented in an SDN-Controller. This simplifies network configuration, reconfiguration, management and development (see Fig. 2.2).

These two approaches are the key to reaching the required network flexibility, to easily introducing new abstractions for simplifying the management of the network and to facilitating innovation. Middleboxes, which have always represented one of the most expensive and problematic elements in the network, are reduced if not eliminated, and their functionality is integrated into software functions, enabling the flexible and easy design and deployment of new solutions. The network control intelligence and decision-making are centralized in a more comfortable and accessible structure and can be defined in more powerful and user-friendly programming languages. The main idea is to allow developers to manage the network resources in the same


Figure 2.2: Traditional Networking vs SDN paradigm.

manner they manage computer resources.

The separation of the forwarding hardware from the control logic also allows the development of new protocols and applications, at the same time opening the access to network virtualization, as highlighted in [8]. Such separation can be realized by means of an Application Programming Interface (API) between the virtual switches and the SDN-Controller. The SDN-Controller uses this API to enforce the control rules over the data plane elements, as depicted in Fig. 2.3. One open source API solution that is commonly used for research purposes is OpenFlow [9], [10].

SDN and OpenFlow started as academic experiments [10], but they have recently had significant success, to the point that many commercial network device vendors now include support for these technologies in their products. The SDN-OpenFlow pair demonstrated enough potential to convince some big companies (like Google, Microsoft, Verizon, etc.) to found the Open Networking Foundation (ONF) [11] with the aim of promoting the adoption of SDN.

Figure 2.3: Fundamental abstraction of the SDN architecture.

Depending on the rules installed by the SDN-Controller, an OpenFlow-capable virtual switch can behave like a router, a switch, or a generic middlebox performing roles such as firewall, load balancer, and so on. As written in [4], the networking industry has used the term SDN in an improper way, referring to any general software participation in networking. According to [4], SDN can be defined as a network architecture with four main characteristics:

1. The control plane is decoupled from the data plane, and control functionality is removed from the network devices that become mere packet forwarders.

2. Each forwarding decision is “flow based” instead of “destination based”. A flow is defined by a set of matches on the header fields of a packet, used as a filtering criterion to perform certain actions. In this context a flow is a sequence of packets between a source and a destination. Each single packet that composes the flow undergoes the same actions at the forwarding devices.

3. Control logic takes place in the SDN-Controller: a software platform running on a server that provides an easy way to program the forwarding devices, basing its decisions on an abstract view of the network.


4. The SDN-Controller interacts with the underlying data plane using the OpenFlow API. In this way the management of the forwarding plane becomes easy, and the network becomes programmable through software applications. This last characteristic also has the benefit of being simpler and less prone to configuration errors, mainly due to the use of high-level languages and software. Further, having a centralized SDN-Controller with global knowledge of the network simplifies the development and evaluation of advanced network applications, new functions and services.
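Characteristic 2 can be illustrated with a minimal sketch of flow-based matching. The Python fragment below is purely illustrative (the dictionaries, field names and actions are hypothetical, not actual OpenFlow structures): each rule matches a set of header fields, unspecified fields act as wildcards, and a packet that matches no rule is handed to the controller.

```python
# Illustrative flow table: a rule matches when every header field it
# specifies has the same value in the packet; unspecified fields are
# wildcards. All names here are hypothetical, not OpenFlow syntax.

def matches(rule_match, packet):
    return all(packet.get(field) == value
               for field, value in rule_match.items())

def forward(flow_table, packet):
    """Return the action of the first matching rule; with no match,
    the packet is sent to the controller (the table-miss behavior)."""
    for rule in flow_table:
        if matches(rule["match"], packet):
            return rule["action"]
    return "send_to_controller"

flow_table = [
    {"match": {"ip_dst": "10.0.0.2", "tcp_dst": 80}, "action": "output:2"},
    {"match": {"ip_dst": "10.0.0.3"}, "action": "output:3"},
]

pkt = {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2", "tcp_dst": 80}
```

Every packet of the same flow (same matched header fields) receives the same action, which is precisely the “flow based” behavior described above.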

The SDN approach promises further benefits. First, since the intelligence of the network is centralized and it receives information about the state of the underlying network, a control program can react automatically to sudden changes of the network conditions (device breakdown, connection loss, congestion, etc.). Second, the configuration and management of such policies is simpler and less prone to misconfiguration errors than using the low-level device configuration languages typical of proprietary solutions. Third, having an open interface that supervises the network’s behavior enables the programming of heterogeneous forwarding devices.

Below, a brief description of the main components of the SDN architecture is presented, starting from the lower level of the SDN abstraction and rising up to the application layer.

2.3 The Open vSwitch project

In networking, a virtual switch acts like an advanced edge switch for Virtual Machines (VMs). Software switches are emerging as one of the most promising solutions for data centers and virtualized network infrastructures [4]. Software switches such as Open vSwitch (OvS) have been used for moving network functions to the edge (with the core performing traditional IP forwarding), thus enabling network virtualization [12]. Open vSwitch (OvS) [13] is a multilayer virtual switch released under an open source license. In order to connect a hypervisor with other VMs and with the outside world, a fast and reliable Linux-based bridge is sufficient. But in a multi-server virtualization deployment, this stack is not well suited. Linux-based bridges are designed to work in physical server machines, thus, depending on the application, it can be difficult to adapt this solution to a virtualized multi-server environment. OvS is targeted to replace the native Linux networking solution, especially in environments characterized by highly dynamic endpoints, a high rate of topology changes and the need of maintaining logical abstractions and integration with particular and special-purpose hardware. OvS can be used concurrently with OpenFlow (see Section 2.4.1) as a control stack for hardware switches. Figure 2.4 shows the role of a virtual switch in the control stack.

Figure 2.4: Virtual Switch anatomy [14].

Some of the main features of OvS are:

• Full VLAN support (IEEE 802.1q)

• OpenFlow implementation

• Generic Routing Encapsulation (GRE) and Virtual eXtensible Local Area Network (VXLAN) tunneling

• NetFlow support

• Fine-grained QoS support

Figure 2.5: OvS packets processing design [14].

OvS was originally designed to operate with wired network interfaces. Thus, in order to make it work with wireless interfaces, as required by the scope of this thesis, some adaptations are necessary.

Any network interface, whether physical or virtual, can be added as a port to the virtual switch. Packets flowing through these ports are processed and, based on the established rules, an action is applied. The decisions about how to process a packet are made in userspace, but only the first packet of a new flow actually reaches the userspace level. Once ovs-vswitchd, the OvS userspace-level process, finishes processing the packet, the matched rule is installed in the OvS Kernel Module. After a certain timeout of inactivity, the rules are deleted. All the following packets of the flow hit this cached entry at kernel level for faster forwarding, as depicted in Fig. 2.5.
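The two-tier lookup described above can be illustrated with a minimal Python sketch (all names here are hypothetical, not the actual OvS data structures): a kernel-side cache forwards known flows directly, escalates the first packet of each new flow to a slow-path handler standing in for ovs-vswitchd, updates per-flow packet and byte counters on every hit, and evicts entries after an idle timeout.

```python
import time

class FlowCache:
    """Toy model of the OvS kernel flow cache (illustrative only)."""

    def __init__(self, slow_path, idle_timeout=10.0):
        self.slow_path = slow_path      # callable: flow_key -> action
        self.idle_timeout = idle_timeout
        self.entries = {}               # flow_key -> [action, last_hit, pkts, bytes]
        self.upcalls = 0                # packets that reached "userspace"

    def process(self, flow_key, pkt_len, now=None):
        now = time.monotonic() if now is None else now
        # Evict idle entries, as OvS does after a period of inactivity.
        for key in [k for k, e in self.entries.items()
                    if now - e[1] > self.idle_timeout]:
            del self.entries[key]
        entry = self.entries.get(flow_key)
        if entry is None:
            # Cache miss: the first packet of the flow goes to userspace
            # (ovs-vswitchd), which decides the action and installs it.
            self.upcalls += 1
            entry = self.entries[flow_key] = [self.slow_path(flow_key), now, 0, 0]
        # Cache hit: apply the cached action, update packet/byte counters.
        entry[1] = now
        entry[2] += 1
        entry[3] += pkt_len
        return entry[0]
```

Real OvS keys its cache on parsed header fields with wildcarding; this sketch keys on an opaque tuple purely to show the caching behavior.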

The main components of OvS are the above-mentioned ovs-vswitchd, the OvS Kernel Module and the ovsdb-server as shown in Fig. 2.6.

The ovs-vswitchd is a userspace daemon that manages and controls every OvS switch on the local machine. The SDN-Controller interacts with the OvS daemon using the OpenFlow protocol, and ovs-vswitchd communicates with the kernel module over netlink. The OvS daemon uses a database (OVSDB) to store and retrieve the switch configuration.



Figure 2.6: OvS main components [14].

The ovsdb-server keeps the database tables up to date and provides a Remote Procedure Call (RPC) interface to one or more OVSDB databases for external clients.

The component that actually handles switching and tunneling is the OvS Kernel Module. It is designed to be fast and simple: for each incoming packet, if an associated rule exists, the rule is executed and the packet and byte counters are updated. The kernel module simply follows the instructions ovs-vswitchd provides. These instructions are called actions and, in the simplest case, they list the physical port through which to forward a packet. An action can also specify modifications of the packet headers, specific packet handling or packet dropping.

There are three basic actions that can be associated with a flow-entry [15]:

• Forward the flow's packets through a specific port (or a set of ports).

• Mark the packet as to be resolved by the SDN-Controller by encapsulating it in a specific header and forwarding it to the SDN-Controller (PacketIn event). This happens when the OvS device does not know how to handle the packet, or when the packet is recognized as a service packet and it must be sent to the SDN-Controller. The details of the PacketIn action and how it is used in the context of this thesis are given in Section 5.1.1.

• Drop the flow's packets, for instance for security reasons or to mitigate congestion.

Determining how a packet must be handled is an ovs-vswitchd task that happens in userspace. After taking this decision, the OvS daemon sends the packet back to the kernel module with the corresponding action for further processing.

2.4 The SDN Southbound Interface

The SDN-Controller's southbound interface allows the communication between the SDN-Controller and the forwarding elements in the data plane, as shown in Fig. 2.3. Also called the southbound API, this interface represents the bridge between the control and forwarding plane elements. According to the ONF, at the time of writing the most deployed and widely diffused open southbound standard for SDN is OpenFlow. OpenFlow is briefly presented in Section 2.4.1.

There are, however, some alternatives to the well-known OpenFlow protocol. Other southbound APIs are Protocol-Oblivious Forwarding (POF) [16], OpenvSwitch Database (OvSDB) [17], Forwarding and Control Element Separation (ForCES) [18], OpFlex [19] and others. These API proposals introduce different concepts for the southbound interface. For instance, ForCES and OpFlex consider the possibility of keeping the control and data plane in the same network elements in order to improve scalability. A more detailed description of ForCES is given in [8]. OvSDB offers advanced networking functions, especially from the management and Quality of Service (QoS) point of view.

The southbound interface essentially aims to provide three information sources for the control plane:

• Event-based notifications to inform the SDN-Controller about changes in the network topology.

• Periodic statistics useful for the SDN-Controller to monitor the state of the nodes and links.



• Managing the packet-in messages that are sent by a forwarding device when a packet matches a specific entry corresponding to this action (for instance because a management or control flow is recognized, or because a specific rule is installed for this purpose).

The SDN-Controller, on its side, uses the southbound interface protocol to install specific rules on the forwarding devices according to the policy to apply, update the flow tables and manage the network topology.

2.4.1 The OpenFlow protocol

OpenFlow is an open communication protocol that enables SDN-Controllers to determine the path of the packets across the forwarding devices (virtual switches). OpenFlow is the de facto protocol for the southbound interface of SDN-Controllers. It acts as an intermediary between the OpenFlow-capable switch and the SDN-Controller, allowing to add, remove and update the flow-entries in the flow-table, and to collect per-flow statistics. OpenFlow allows virtual switches from different vendors to be managed via a single, open protocol. This is particularly important in certain areas of work, such as research and development, where developers often work in a heterogeneous switch environment and with high port density [15]. OpenFlow allows the SDN-Controller to control the flow-table in the OpenFlow-capable switch via the secure channel, as depicted in Fig. 2.7.

An OpenFlow-capable switch basically consists of three parts:

• One or more flow tables, where each flow is associated with a specific action.

• A secure channel allowing the communication between the OpenFlow-capable switch and the SDN-Controller.

• The OpenFlow protocol, which standardizes the messages exchanged between the OpenFlow-capable switch and the SDN-Controller.


Figure 2.7: OpenFlow basic Architecture and Components [15].

OpenFlow allows defining the flow-rules externally, thus avoiding the need to program each switch manually. The OpenFlow-capable switch just forwards the packets through its ports according to the rules defined by the SDN-Controller and written in the flow table thanks to the OpenFlow protocol. A flow can be defined by several fields. For instance, in the project described in the following chapters, VLAN tagging is broadly used to define a flow. Other header fields that allow defining and distinguishing flows are the Medium Access Control (MAC) address (source and/or destination), the IP address (source and/or destination), the protocol used and many others, depending on the OpenFlow protocol version. According to [15] a flow-table entry is composed of three fields:

• A header that defines the flow.

• An action that defines the way to process the packet.

• Statistics keeping track of per-flow parameters.
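These three fields can be sketched as a small Python structure (field names are illustrative, not the OpenFlow wire format): the header acts as a match on packet fields, the action says how to process matching packets, and the statistics counters are updated on every hit. A packet that matches no entry is handed to the SDN-Controller.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict        # header fields defining the flow
    action: str        # how to process matching packets
    packets: int = 0   # per-flow statistics
    bytes: int = 0

    def matches(self, pkt: dict) -> bool:
        # A packet matches if every specified header field agrees;
        # unspecified fields act as wildcards.
        return all(pkt.get(k) == v for k, v in self.match.items())

def lookup(table, pkt, pkt_len):
    for entry in table:
        if entry.matches(pkt):
            entry.packets += 1
            entry.bytes += pkt_len
            return entry.action
    return "packet-in"  # no match: escalate to the SDN-Controller
```

For example, an entry matching VLAN 10 forwards those packets and accumulates their counters, while any other packet falls through to the "packet-in" case.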

In order to support the modifications applied to OvS for the specific purposes of this project, some ad-hoc modifications have also been applied to OpenFlow. These are discussed in Section 3.1.



Figure 2.8: SDN Controller and its basic services.

2.5 The SDN-Controller

The SDN-Controller is also called a Network Operating System (NOS). Following this comparison, one can imagine the SDN-Controller as the operating system and the forwarding devices as the hardware. In this analogy, the southbound interface represents the drivers. An operating system provides an abstraction layer for accessing and managing the hardware resources, simplifying their usage and increasing the productive capacity without losing the essential services and operations. As a generic operating system has to provide basic functions such as input/output (I/O) operations and control, program execution, data presentation, communication, security, and so on, a network controller, by analogy, has to provide some basic networking functions. Some of the basic networking functions an SDN-Controller has to satisfy are topology control, statistics collection, device management, event notification, forwarding mechanism implementation, and security. Figure 2.8 shows a simplified view of this abstraction. Thus, the SDN-Controller is a key element in the SDN architecture. It is placed between the application and data layers and it takes the responsibility of installing virtual flow entries in the devices and of collecting and computing network statistics in order to manage the network's data plane and to define data forwarding policies.

According to [20] it is possible to distinguish two ways of installing flows in a device's forwarding flow table: the proactive mode and the reactive mode. In proactive mode, the flow tables are populated before the first packet of a flow reaches the OpenFlow switch. This approach reduces the flow table set-up delay and the OpenFlow switches do not have to retrieve flow information from the SDN-Controller to know how to manage the proactively installed flows. On the other hand, proactive rules can only be used for a limited set of operations: at the bootstrapping of the control path, for example. Further, in order to proactively apply the rules, a general knowledge of the topology is required. There is also the risk of not covering all the possible flow types the network can deal with, or of overflowing the flow table.

In a reactive approach each forwarding rule is set by the SDN-Controller. When the first packet hits the OpenFlow switch, it is forwarded to the SDN-Controller as a so-called "PacketIn" event. The SDN-Controller then sets up the proper forwarding rule for that flow according to any metric or routing policy that applies to the type of flow. The advantages of this strategy are mainly represented by the scalability and the higher flexibility of the network in comparison with the proactive mode.

In order to avoid table overflow, a flow entry expires after a certain time of inactivity. Despite the flow-by-flow decision that can take into account traffic load conditions and QoS-based policies, the reactive mode suffers from a large round trip time before a flow rule is installed. Furthermore, a strong requirement for using the reactive mode is that the OpenFlow switches know how to reach the SDN-Controller and that there are no nodes of which the SDN-Controller is unaware.

A popular hybrid approach consists in mixing the proactive and reactive modes. Before the first packet of a flow enters the network, the switches are provisioned with a set of flow rules in their flow table. These rules are defined, for instance, to instruct each switch how to reach the SDN-Controller, or to provide a static route for a certain type of traffic, if needed. All the other flows that do not match one of the pre-provisioned entries are treated using the reactive approach.
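A toy Python sketch of this hybrid mode (the rule keys and actions are invented for illustration): a switch starts with pre-provisioned entries, and any table miss triggers a PacketIn-style call into a controller policy, whose answer is installed for subsequent packets of the flow.

```python
def make_flow_table():
    # Proactive phase: entries provisioned before any traffic arrives,
    # e.g. a static rule to reach the SDN-Controller (illustrative key).
    return {"to-controller": "output:ctrl"}

def handle_packet(flow_table, flow_key, controller_policy, stats):
    if flow_key in flow_table:
        return flow_table[flow_key]       # proactive or already-installed rule
    # Miss: reactive phase, equivalent to a PacketIn event.
    stats["packet_in"] += 1
    action = controller_policy(flow_key)
    flow_table[flow_key] = action         # install for later packets
    return action
```

Note that only the first packet of an unknown flow pays the controller round trip; later packets hit the installed entry directly, which is exactly the trade-off discussed above.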

Another aspect that can be managed is the flow granularity. Fine-grained traffic control can provide more flexibility by going deep into the flow details, while coarse-grained flow aggregation is more scalable. By adjusting the granularity it is possible to tune the flexibility/scalability trade-off [20].

The authors in [20] also make a classification concerning push-based vs. pull-based monitoring statistics. Receiving statistics from the forwarding devices is essential for the SDN-Controller in order to get an overview of the traffic and the network conditions. The push-based approach consists in sending statistics from each switch to the SDN-Controller to inform it about specific events such as removing or setting up a new flow table entry. This mechanism does not provide any information about the flow before the entry modification, but it is useful for collecting statistics about certain flows that match a pool of characteristics. The pull-based mechanism allows the SDN-Controller to learn more information about the behavior of a specific flow, but it requires fine tuning of the interval between consecutive requests, and it can hinder the network scalability.

An SDN-Controller is an entity that manages all forwarding devices in the network, and typically it runs on a dedicated server. This structure describes a centralized architecture, where all the intelligence of the network is concentrated in a single entity that also represents a single point of failure. Moreover, depending on the size of the network, the number of devices involved and the management required, a single SDN-Controller might not be sufficient to manage such networks. Studies [21] show that with a sufficiently powerful hardware base, a single SDN-Controller can handle more than 12 million flows per second. In case the SDN-Controller performance is not sufficient for the proper management of the network, or a higher level of reliability is required, a distributed control platform can be used. A distributed design can be scaled to meet the environment's demand (i.e. performing a network partition between the SDN-Controllers) and can be more fault-tolerant to different kinds of failures. For instance, Open Network Operating System (ONOS) [22] uses the distributed approach.


ONOS is an experimental SDN control platform motivated by the performance, scalability and availability requirements of large network operators [23]. To meet the high availability condition imposed by network operators, ONOS adopts a distributed architecture. Although ONOS is physically distributed across multiple servers, it maintains a centralized logical and global network view that can be read by the application layer to make forwarding and policy decisions. Each server hosting ONOS acts as the exclusive SDN-Controller for a subset of virtual switches, and it is responsible for the management of only the virtual switches it controls. Nevertheless, each single server contributes to maintaining the global network view. A virtual switch can be connected to multiple ONOS instances, but only one can control that switch. This is achieved by electing a leader instance for each connected virtual switch. The scalability of this approach becomes clear if a consistent increment of the data plane capacity is considered: ONOS can react to the consequent growing demand for control plane instances by adding SDN-Controller clusters to the distributed control plane. Another advantage provided by the distributed ONOS nature is the high fault tolerance. As stated in [23], if an ONOS instance fails, the overall architecture continues to work because there are multiple redundant instances ready to take over. These characteristics make ONOS one of the most popular SDN-Controller platforms.
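The per-switch leader election can be mimicked in a few lines of Python. This is a deterministic toy election (smallest instance name wins) with invented class and method names; ONOS actually relies on a distributed consensus protocol, but the failover behavior is the same in spirit: when an instance dies, each switch it mastered is re-assigned to another live candidate.

```python
class ControlCluster:
    """Toy model of per-switch mastership in a distributed control plane."""

    def __init__(self, instances):
        self.instances = set(instances)  # live controller instances
        self.candidates = {}             # switch -> instances it connects to
        self.master = {}                 # switch -> current leader

    def connect(self, switch, instances):
        self.candidates[switch] = set(instances)
        self._elect(switch)

    def _elect(self, switch):
        # Deterministic election among live candidates; a real system
        # would use a consensus protocol for this step.
        live = sorted(self.candidates[switch] & self.instances)
        self.master[switch] = live[0] if live else None

    def fail(self, instance):
        # On failure, re-elect a leader for every switch it mastered.
        self.instances.discard(instance)
        for sw, leader in list(self.master.items()):
            if leader == instance:
                self._elect(sw)
```

A switch connected to two instances keeps exactly one master at a time, and survives the loss of that master as long as another connected instance is alive.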

Another SDN-Controller in widespread use is OpenDaylight (ODL) [24]. ODL owes its popularity not only to its characteristics, but also to a strong push from a large community of developers and the contribution of many leading companies in the industry.

There are plenty of available SDN-Controllers; a short overview of their fundamental characteristics is presented in [4], [8] and [20]. The SDN-Controller whose characteristics best meet the design specifications of the project presented in this thesis is OpenDaylight. For this reason, ODL deserves a short overview in the next section.



2.5.1 The OpenDaylight project

OpenDaylight (ODL) is an open source project hosted by the Linux Foundation [25]. It was created to build the basis for a strong and wide SDN adoption, led by an open source community. ODL can also count on an important industry-supported framework with members such as Brocade, Cisco, Citrix, Ericsson, HP, IBM, NEC, and Red Hat, just to name some of the founders. The OpenDaylight Project offers a Java-based SDN platform allowing users to deploy SDN solutions and to develop new logical modules and other functionalities that foster SDN adoption. Collaborating with the Open Networking Foundation permits ODL to support the most recent open standards and to keep up to date with technology changes.

ODL includes support for the OpenFlow protocol, but can also support other open standards for the southbound interface thanks to the Service Abstraction Layer (SAL) that allows several APIs to coexist in the same control platform [4]. On the northbound interface, ODL exposes several types of APIs that can be used to request information about the network state or to act upon the network. The application layer uses the ODL Controller to collect statistics about the network status, make an analysis of the data and then use the SDN-Controller to apply the necessary modifications to the network. The fact that ODL is Java-based software gives the possibility of deploying the SDN-Controller on any hardware and any operating system that supports Java, besides being compatible with third-party components and libraries. Being implemented solely in software gives a high degree of modularity to the ODL Controller. The ODL Controller is able to load software frameworks (bundles) or modules to better meet the required functionalities and to deploy a variety of network environments.

ODL is a project in continuous evolution: the first release of the ODL Controller, called "Hydrogen", was announced in 2014, but developers were already focused on the next release, "Helium". In a few years several releases have succeeded one another, as represented in Fig. 2.9.

The active and dynamic community, the interoperability with different platforms and the extended API support have allowed ODL to become one of the most popular and innovative SDN-Controllers.

Figure 2.9: OpenDaylight Controller releases' roadmap.

2.6 The SDN Northbound Interface

The northbound interface introduces the possibility of accessing and programming the SDN-Controller via the application layer. The northbound interface is considered a software API and its environment does not include strong hardware relations as in the case of the southbound interface API. In contrast to the southbound interface, which already has a widely accepted standard protocol (OpenFlow), the northbound interface standard is still an open issue at the moment of writing. The importance of a commonly accepted standard for northbound APIs is out of the question, both for allowing portability and for promoting interoperability between different control platforms. The northbound API responds to the need for a controller-independent abstraction that guarantees a high degree of programmability.

Currently, each SDN-Controller defines its own northbound API, and it is probably too early to expect that a globally used northbound interface solution will emerge. This is mainly due to the fact that modern network applications are extremely different and vary with every use case and deployment.



Figure 2.10: Backhaul Network Technology [26].

It is indeed difficult to assume that a single API is able to manage applications as distinct as security and routing for real-time traffic. Among the various existing proposals, many SDN-Controllers, including OpenDaylight, are designed to work with APIs based on the REpresentational State Transfer (REST) software architecture.
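As an illustration of this style of API, a REST call to a controller's northbound interface can be composed with the Python standard library alone. The URL below follows the RESTCONF layout exposed by several OpenDaylight releases, and the port and credentials are ODL's historical defaults; all of these values are deployment-specific assumptions.

```python
import base64
from urllib import request

# Assumed ODL defaults; adapt to the actual deployment.
ODL_BASE = "http://localhost:8181/restconf"

def topology_request(user="admin", password="admin"):
    """Build (but do not send) a GET for the operational topology."""
    url = ODL_BASE + "/operational/network-topology:network-topology"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return request.Request(url, headers={
        "Accept": "application/json",
        "Authorization": "Basic " + token,
    })
```

Sending the request with `request.urlopen(...)` against a live controller would return the current topology as JSON, which an application can analyze before pushing configuration back through the same interface.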

2.7 Backhauling and Access infrastructure

Mobile backhaul is a term commonly used to describe the network link between the Radio Base Station (RBS) and the first entry border device of the transport network [26]. Modern backhaul networks provide connectivity with basically three different kinds of transport media: copper, optical fiber and microwave radio links, as illustrated in Fig. 2.10. Each of these transport media has its own characteristics that make it suitable for a certain deployment environment. For instance, optical fibers are leased from different providers in urbanized areas with high data traffic. Microwave links, on their side, represent a solution for those areas where wired connections are difficult to deploy. A microwave radio link can be deployed in various licensed and unlicensed frequency bands. Using unlicensed bands is a cost-efficient solution, but it is prone to suffer from radio interference issues. In an unlicensed band multiple systems can coexist and interfere with each other. Due to asymmetries in different communication technologies and selfish system behavior, sharing unlicensed bands can give rise to unfair and inefficient situations.

Depending on the frequency in use there is a trade-off between coverage range and link capacity. In most current backhaul scenarios, deploying copper links to directly connect each single repeater that provides wireless connection to the final user does not represent a cost-efficient choice for backhauling, due to the cost of the medium, which grows with the capacity. The increased number of mobile subscribers and the intensification of high-speed data service demands have a strong impact both on the access and on the transport network.

The migration from macro cells to heterogeneous backhaul architectures brings along many challenges that need to be addressed. One of these challenges concerns the deployment of SCs in a mobile heterogeneous network. Due to its heterogeneous nature, deploying a SC backhaul network means placing SCs in impromptu locations among the city infrastructure. SCs are typically installed on lamp posts and other street furniture such as traffic lights and walls of buildings [3]. This approach is completely different from the one followed in macro cell deployments, where the RBS location is carefully planned before the installation. A macro cell can count on a prearranged building site which includes one or more wired backhaul connections, and it envisages a vast coverage area. In order to serve the same area covered by a single macro cell, tens of geographically dispersed SCs are needed, thus it is not conceivable to have a wired connection for each one of the SCs. Moreover, a mobile heterogeneous network consists of access nodes of different natures (Base Stations, hotspots, Access Points). Each node might have a different capacity, coverage and other characteristics, and manages a large variety of data traffic. A typical mobile homogeneous network, in contrast, consists of a single type of macro cell Radio Access Network (RAN), typically Base Stations, with similar characteristics.

Moving to SCs solves the capillary coverage and capacity problems but, at the same time, it creates the new challenge of providing a suitable backhaul for all of these SCs. On the other hand, it opens up new possibilities and new types of implementations, providing the ability to manage Quality of Service (QoS) and security.

The RAN is an increasingly critical component of the global network infrastructure and is the primary reason Mobile Network Operators (MNOs) are intensely focused on the mobile backhaul network as a key element of their short- to long-term business strategies. With voracious bandwidth demand growth rates that show no signs of abating, the capacity, reliability, and availability of mobile backhaul networks must improve as wireless access speeds to video-centric user content increase [26]. In the following subsections, the technologies used for backhaul and fronthaul architectures are detailed. Then, the IEEE 802.11ac standard and its use in the SC deployment scenario will be investigated, analyzing its characteristics and performance requirements. A description of the main benefits of using a multi-RAT access environment will conclude this chapter.

2.7.1 Backhaul/Fronthaul technology evolution

The network evolution has provided a wide availability of broadband technologies over the last decades, but an obstacle to the massive deployment of these wireless services was the problem of interacting with the legacy technologies already in place. Some of the major challenges of communication service providers are:

• Reducing the backhaul transport cost while the demand for mobile bandwidth-intensive and multimedia services still grows.

• Adopting packet-optimized networks able to support the most recent generations of wireless mobile communications requirements.

• Guaranteeing availability of these services and satisfying the customers' requirements.


An affordable and cost-efficient mobile backhaul network that can meet capacity demands and provide support for data services, as well as traditional voice architectures, is what communication service providers are seeking.

Since third and fourth generation networks appeared on the market, Ethernet-based connections started replacing legacy Plesiochronous Digital Hierarchy (PDH) E1/T1 links in a prevalent manner. First generation communications networks provided only voice transport services, so PDH E1/T1 were the prevalent backhauling techniques, multiplexing multiple voice channels from the base station to the Base Station Controller (BSC) [26]. E1 and T1 links operate at 2048 and 1544 kbps respectively. The E1 frame consists of 32 time slots (30 for voice communication and 2 reserved for frame synchronization and signaling), each carrying a 64 kbps Pulse Code Modulation (PCM) voice stream. Thanks to a compression algorithm the number of carried voice streams can be quadrupled. Thus E1 can support 30x4 = 120 voice calls, and it is deployed worldwide, except in North America and Japan where T1 is mostly used. The T1 frame is composed of 24 time slots (one dedicated to signaling), each of which can support a 64 kbps PCM voice stream. With the same compression, T1 can support 23x4 = 92 voice calls. A typical E1/T1 carrier is a copper line, but in many cases these links are merged into high rate optical fiber connections. Typical standards for this carrier are Synchronous Optical Networking (SONET), mainly adopted in North America, and Synchronous Digital Hierarchy (SDH) in Europe and the rest of the world.
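The figures above can be checked with a few lines of Python. The only added assumption is that T1's line rate includes an extra 8 kbps of framing overhead (one framing bit per 193-bit frame at 8000 frames per second), which accounts for the difference between 24 × 64 = 1536 kbps and the quoted 1544 kbps:

```python
SLOT_KBPS = 64        # one PCM voice stream per time slot
COMPRESSION = 4       # compression quadruples the carried voice streams

def line_rate_kbps(slots, framing_kbps=0):
    return slots * SLOT_KBPS + framing_kbps

def voice_calls(voice_slots):
    return voice_slots * COMPRESSION

# E1: 32 slots in total, 30 of them usable for voice.
assert line_rate_kbps(32) == 2048
assert voice_calls(30) == 120

# T1: 24 slots plus one framing bit (8 kbps), 23 slots usable for voice.
assert line_rate_kbps(24, framing_kbps=8) == 1544
assert voice_calls(23) == 92
```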

When the first broadband services were launched, communications service providers maintained Time Division Multiplexing (TDM) and Asynchronous Transfer Mode (ATM) backhaul because voice traffic was still predominant. However, the data traffic growth rate highlighted the limitations of this technology. While second generation (2G) networks introduced data packet communication, forcing network operators to introduce combined E1/T1 and Ethernet systems, third generation (3G) networks moved to mainly packet-based and Ethernet-based systems. On the same line, Long Term Evolution (LTE) is a full-packet network. Network Operators first developed solutions to carry voice and data services on the same infrastructure, but they soon realized that TDM-based platforms cannot effectively and economically scale to support the explosive growth of data traffic [27]. In order to maintain backwards compatibility with the previous circuit-switched technology, some solutions for carrying TDM services on Ethernet-based communications systems, such as pseudowires or circuit emulation, were introduced. However, this approach only temporarily solved the data offloading problem and proved unable to scale to High Speed Downlink/Uplink Packet Access (HSxPA) and LTE services. This motivated the introduction of full-packet solutions. Figure 2.11 shows a roadmap of service availability based on mobile access technology.

In order to obtain cost-effective solutions, concerning both equipment and deployment costs, and to obtain a high resiliency level, microwave radio systems are replacing leased copper lines. So far more than 60% of radio base station coverage is ensured by microwave systems, and this share is expected to grow with the massive SC deployments [29]. In recent years the use of radio links in licensed (3.65, 4.9, 6, 11, 18, 23 and 60 GHz) and unlicensed (2.4 and 5 GHz) bands has increased enormously. Microwave-based backhaul can be deployed on unlicensed spectrum to reduce Capital Expenditure (CapEx), but this raises radio interference issues. The design of microwave links also opens up new aspects of telecommunications that are still under investigation.

A typical urban scenario for a SC microwave backhaul network represents a challenge from various points of view. The above-mentioned interference issue is an open case study. This refers to interference generated by other transmissions and to that generated by other data-flows in the same or adjacent radio channels [30], [31], [32], [33], [34] and [35]. The presence or absence of Line of Sight (LOS), the trade-off between frequency, range of coverage and data rate, and antenna design are all aspects that can affect performance, capabilities and future extensions of the entire microwave-based backhaul.



ISM Band (GHz) | Channel Number (ch) | Central Frequency (MHz)
2.4            | 1 - 11              | 2412 + 5 · (ch − 1)
5              | 36 - 144            | 5180 + 5 · (ch − 36)
5              | 149 - 165           | 5180 + 5 · (ch − 36)

Table 2.1: IEEE 802.11n and IEEE 802.11ac available channels list in the European Regulatory Domain.
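The formulas in Table 2.1 can be turned into a small helper. The function below is a direct transcription of the table (channels outside its ranges are rejected); note that both 5 GHz rows reduce to the same linear expression:

```python
def center_freq_mhz(channel):
    """Center frequency for the channels in Table 2.1."""
    if 1 <= channel <= 11:                 # 2.4 GHz band
        return 2412 + 5 * (channel - 1)
    if 36 <= channel <= 165:               # 5 GHz band: both ranges of
        return 5180 + 5 * (channel - 36)   # Table 2.1 follow this line
    raise ValueError("channel outside the ranges of Table 2.1")
```

For example, channel 36 maps to 5180 MHz and channel 149 to 5745 MHz.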

2.7.2 IEEE 802.11ac as Backhaul for Small Cell deployment

IEEE 802.11ac is a wireless network standard defined by the IEEE Standards Association within the IEEE 802.11 family of standards [36]. It provides high throughput in Wireless Local Area Networks (WLANs) on the 5 GHz Industrial, Scientific and Medical (ISM) unlicensed band. The IEEE 802.11ac specification, compared to IEEE 802.11n [37], adds channel bandwidths of 80 MHz and 160 MHz, with both contiguous and non-contiguous 160 MHz channels for flexible channel assignment. It also adds higher order modulation schemes in the form of 256 quadrature amplitude modulation (QAM), providing an additional data rate improvement. A further data throughput increment is achieved by increasing the maximum number of spatial streams to eight and implementing a new technology to support multi-user multiple-input, multiple-output (MU-MIMO). Finally, some other novel features are introduced, such as mandatory frame aggregation, TXOP sharing, and others.

Compared with the 2.4 GHz ISM band, the 5 GHz ISM band offers more non-overlapping channels which can be bonded together to obtain wider channels. Table 2.1 shows a list of the available channels in the 2.4 and 5 GHz bands.

In particular, IEEE 802.11ac adds 80 MHz bandwidth channels to its specification by combining two adjacent 40 MHz channels. The widest channels are built by combining two adjacent 80 MHz channels to form a 160 MHz channel, or two non-adjacent 80 MHz channels, generating the so-called 80+80 MHz channel. Table 2.2 displays the valid channel numbers for various channel widths.

Channel Width | Valid Channel Numbers
20 MHz        | 36, 40, 44, 48, 52, 56, 60, 64, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144, 149, 153, 157, 161, 165
40 MHz        | 38, 46, 54, 62, 102, 110, 118, 126, 134, 142, 151, 159
80 MHz        | 42, 58, 106, 122, 138, 155
160 MHz       | 50, 114

Table 2.2: IEEE 802.11ac channel widths.
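The relation between Table 2.1 and Table 2.2 follows from the 5 MHz spacing of channel numbers: the 20 MHz sub-channels of a bonded channel sit 20 MHz (four channel numbers) apart, centered on the number listed in Table 2.2. A small sketch of this rule (the function name is ours):

```python
def member_channels(center, width_mhz):
    """20 MHz channels spanned by a bonded channel of the given width."""
    n = width_mhz // 20              # number of 20 MHz sub-channels
    first = center - 2 * (n - 1)     # sub-channel numbers are 4 apart,
    return [first + 4 * i for i in range(n)]  # symmetric around `center`
```

For instance, the 80 MHz channel 42 spans the 20 MHz channels 36, 40, 44 and 48, matching the two tables.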

IEEE 802.11ac extends the high modulation and coding scheme (MCS) concept of IEEE 802.11n by adding two new MCSs that use 256-QAM. Table 2.3 illustrates which modulation type and coding rate correspond to each MCS index.

MCS Index   Modulation   Coding Rate
0           BPSK         1/2
1           QPSK         1/2
2           QPSK         3/4
3           16-QAM       1/2
4           16-QAM       3/4
5           64-QAM       2/3
6           64-QAM       3/4
7           64-QAM       5/6
8           256-QAM      3/4
9           256-QAM      5/6

Table 2.3: IEEE 802.11ac MCS definition.

However, 256-QAM requires a higher signal-to-noise ratio (SNR) at the receiver front-end than lower MCSs to keep the bit-error probability low. Another enhancement IEEE 802.11ac introduces concerns the number of MIMO users, extended from four to eight, but with some restrictions. In order to keep the management of MU-MIMO transmissions feasible, the standard fixes the maximum number of simultaneous beams directed to different nodes at four. Each receiver can likewise use at most four spatial streams [38].
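The MU-MIMO constraints just described can be summarized as a small validity check. This is an illustrative sketch (the function name and interface are my own, not from the standard):

```python
# Sketch of the 802.11ac MU-MIMO limits described above: at most 4
# simultaneous beams (users) per transmission, at most 4 spatial streams
# per receiver, and at most 8 spatial streams at the transmitter in total.
def valid_mu_mimo_group(streams_per_user):
    """streams_per_user: list with one entry (stream count) per served user."""
    return (len(streams_per_user) <= 4                  # max 4 simultaneous beams
            and all(1 <= s <= 4 for s in streams_per_user)  # max 4 streams each
            and sum(streams_per_user) <= 8)             # max 8 streams in total

print(valid_mu_mimo_group([2, 2, 2, 2]))  # True: 4 users, 8 streams total
print(valid_mu_mimo_group([4, 4, 4]))     # False: 12 streams exceed the 8 limit
```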

2.7.2.1 Performance requirements

The desired network characteristics, and consequently the backhaul performance requirements, for a SC deployment vary depending on the mobile access technologies that a given cell provides. Performance metrics can also be very different depending on the specific purpose of the SC and the type of traffic handled by the backhaul nodes. Typical metrics are throughput, end-to-end latency, and jitter. From the throughput point of view, the basic idea is that a backhaul link should be capable of providing at least the same throughput as the maximum offered on the access side. The amount of this traffic can be estimated taking into account the capabilities of the access technology, the number of users connected to the SC, and their simultaneous data rates. Nevertheless, precisely defining the raw data rate provided by the access technology and the throughput capabilities of the backhaul link is anything but easy: there is a large gap between the theoretically possible throughput and the one actually achieved. For example, Table 2.4 shows the theoretical peak raw data rates achievable by IEEE 802.11ac depending on the channel bandwidth, the MCS, short or long guard intervals (SGI, LGI), and the number of spatial streams. The throughput achieved in a real deployment is significantly lower than the theoretical one. During experiments conducted in a typical office scenario, it was possible to observe a 30-40% average throughput decrease compared with the theoretical values, even when transmitting on interference-free channels.

IEEE 802.11ac can therefore satisfy the throughput requirements of a backhaul that serves, for instance, an IEEE 802.11n interface on the access side.
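The dimensioning rule described above can be sketched as a simple check: derate the backhaul's theoretical peak by the observed theoretical-to-measured gap and verify that it still covers the access-side peak. The 35% default below is an illustrative mid-point of the measured 30-40% range, and the function names are my own:

```python
# Hedged sketch of the backhaul dimensioning rule described in the text:
# the backhaul must carry at least the access-side peak, after derating
# the theoretical rate by the 30-40% gap observed in the office tests.
def effective_mbps(theoretical_mbps, overhead=0.35):
    """Expected real-world throughput after protocol/channel overhead."""
    return theoretical_mbps * (1.0 - overhead)

def backhaul_ok(backhaul_theoretical_mbps, access_peak_mbps, overhead=0.35):
    """True if the derated backhaul still covers the access-side peak."""
    return effective_mbps(backhaul_theoretical_mbps, overhead) >= access_peak_mbps

# e.g. a 433.3 Mbps theoretical 802.11ac link vs. a 100 Mbps access requirement
print(backhaul_ok(433.3, 100.0))  # True
```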


Theoretical Data Rate (Mbps)

                  20 MHz        40 MHz        80 MHz          160 MHz
MCS   NSS     LGI    SGI    LGI    SGI    LGI    SGI     LGI     SGI
0     1       6.5    7.2    13.5   15     29.3   32.5    58.5    65
1     1       13     14.4   27     30     58.5   65      117     130
2     1       19.5   21.7   40.5   45     87.8   97.5    175.5   195
3     1       26     28.9   54     60     117    130     234     260
4     1       39     43.3   81     90     175.5  195     351     390
5     1       52     57.8   108    120    234    260     468     520
6     1       58.5   65     121.5  135    263.3  292.5   526.5   585
7     1       65     72.2   135    150    292.5  325     585     650
8     1       78     86.7   162    180    351    390     702     780
9     1       n/a    n/a    180    200    390    433.3   780     866.7
0     2       13     14.4   27     30     58.5   65      117     130
1     2       26     28.9   54     60     117    130     234     260
2     2       39     43.3   81     90     175.5  195     351     390
3     2       52     57.8   108    120    234    260     468     520
4     2       78     86.7   162    180    351    390     702     780
5     2       104    115.6  216    240    468    520     936     1040
6     2       117    130    243    270    526.5  585     1053    1170
7     2       130    144.4  270    300    585    650     1170    1300
8     2       156    173.3  324    360    702    780     1404    1560
9     2       n/a    n/a    360    400    780    866.7   1560    1733.3
0     3       19.5   21.7   40.5   45     87.8   97.5    175.5   195
1     3       39     43.3   81     90     175.5  195     351     390
2     3       58.5   65     121.5  135    263.3  292.5   526.5   585
3     3       78     86.7   162    180    351    390     702     780
4     3       117    130    243    270    526.5  585     1053    1170
5     3       156    173.3  324    360    702    780     1404    1560
6     3       175.5  195    364.5  405    n/a    n/a     1579.5  1755
7     3       195    216.7  405    450    877.5  975     1755    1950
8     3       234    260    486    540    1053   1170    2106    2340
9     3       260    288.9  540    600    1170   1300    n/a     n/a

Table 2.4: IEEE 802.11ac theoretical data rates for 1, 2 and 3 spatial streams.
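The entries in Table 2.4 follow the 802.11ac PHY rate formula, rate = N_SD x N_BPSCS x R x N_SS / T_symbol, where N_SD is the number of data subcarriers per channel width, N_BPSCS the coded bits per subcarrier, R the coding rate, and T_symbol the OFDM symbol duration (3.2 us plus a 0.8 us or 0.4 us guard interval). A minimal sketch reproducing a few table entries:

```python
# Sketch of the 802.11ac PHY rate formula:
#   rate = N_SD * N_BPSCS * R * N_SS / T_symbol
# Data-subcarrier counts per channel width (from the 802.11ac VHT PHY):
DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}
MCS = {  # index: (coded bits per subcarrier, coding rate)
    0: (1, 1/2), 1: (2, 1/2), 2: (2, 3/4), 3: (4, 1/2), 4: (4, 3/4),
    5: (6, 2/3), 6: (6, 3/4), 7: (6, 5/6), 8: (8, 3/4), 9: (8, 5/6),
}

def phy_rate_mbps(mcs, n_ss, bw_mhz, short_gi=False):
    """Theoretical PHY rate in Mbps (valid MCS/NSS/BW combinations only)."""
    bits, rate = MCS[mcs]
    t_sym_us = 3.6 if short_gi else 4.0  # 3.2 us symbol + 0.4/0.8 us GI
    return DATA_SUBCARRIERS[bw_mhz] * bits * rate * n_ss / t_sym_us

print(round(phy_rate_mbps(9, 1, 80, short_gi=True), 1))  # 433.3
print(round(phy_rate_mbps(7, 2, 40, short_gi=True), 1))  # 300.0
```

Note that the sketch does not encode the invalid MCS/NSS/width combinations marked "n/a" in the table; those must be filtered out separately.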


A comprehensive study of IEEE 802.11n performance was conducted by Van Winkle in [39]. The study confirms what emerged from tests completed in an office environment and in an outdoor scenario: the maximum throughput on an IEEE 802.11n access interface is around 80 Mbps, both in Up-Link (UL) and Down-Link (DL). It is therefore reasonable to set 100 Mbps in UL and DL as the minimum required throughput for a SC's backhaul link.

Latency and jitter requirements mainly depend on the network services offered in dense SC deployments, so it is necessary first to identify which types of services are used by mobile users in order to derive these requirements. Video streaming and Voice over IP (VoIP) traffic have different requirements from Instant Messaging (IM) and file sharing services. The International Telecommunication Union - Telecommunication Standardization Bureau (ITU-T) recommends one-way transmission time ranges that provide a good-quality user experience for voice calls [40]. A link that meets these latency and jitter conditions can also fulfill best-effort traffic requirements.
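A backhaul qualification check along these lines can be sketched as follows. The 150 ms delay bound reflects the ITU-T guideline for good voice quality; the 30 ms jitter bound is an illustrative assumption of mine, not a figure from the thesis or from the recommendation:

```python
# Hedged sketch: qualify a backhaul link for VoIP traffic. The 150 ms
# one-way delay bound follows the ITU-T guideline cited in the text [40];
# the 30 ms jitter bound is an illustrative assumption.
def qualifies_for_voip(one_way_delay_ms, jitter_ms,
                       max_delay_ms=150.0, max_jitter_ms=30.0):
    """True if the link meets the voice-call delay and jitter budgets."""
    return one_way_delay_ms <= max_delay_ms and jitter_ms <= max_jitter_ms

print(qualifies_for_voip(40.0, 5.0))    # True: well inside both budgets
print(qualifies_for_voip(200.0, 5.0))   # False: one-way delay too high
```

As the text notes, a link that passes this check is also adequate for best-effort traffic, whose latency requirements are looser.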

2.7.3 Wi-Fi based access in Multi-RAT environments

On the client side, a modern SC could announce several radio interface technologies such as WCDMA, LTE, IEEE 802.11n, and IEEE 802.11ac, to list some of them. This approach is called Multi Radio Access Technology (Multi-RAT). Multi-RAT is thought to address the capacity and high user throughput problem in 5G networks. The adoption of the Multi-RAT paradigm offers several advantages, but on the other hand it represents a challenge for the architecture designer. One of the major contributions of Multi-RAT is the possibility to improve the utilization of radio resources by aggregating both licensed and unlicensed spectrum to extend the available system bandwidth. This is a strong enabler of a consistent service experience for the users. However, an accurate and efficient mechanism to allow the coexistence of the different technologies must be developed.

By taking advantage of the unique characteristics of each access technology, a multi-RAT system is able to improve the overall performance and reliability of the system as a whole. For instance, one of the technologies operating over the licensed band can be used for exchanging the control messages needed to maintain the connection and to perform handovers, while a technology operating in an unlicensed frequency band, able to support data rates of hundreds of Megabits per second (Mbps) or even Gigabits per second (Gbps), is more suitable for data traffic.

Figure 2.12: Wireless access technologies supported by a SC.
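The licensed/unlicensed split just described can be sketched as a trivial traffic-steering policy. This is a hypothetical illustration of the idea, not a mechanism from the thesis; the RAT sets and the alphabetical tie-break are my own choices:

```python
# Illustrative sketch (hypothetical policy): steer control-plane traffic
# over a licensed RAT and bulk data over an unlicensed, high-rate RAT.
LICENSED = {"LTE", "WCDMA"}
UNLICENSED = {"802.11n", "802.11ac"}

def pick_rat(traffic_kind, available):
    """Return a RAT for the given traffic kind, or None if none fits."""
    pool = LICENSED if traffic_kind == "control" else UNLICENSED
    for rat in sorted(pool & available):  # deterministic, illustrative tie-break
        return rat
    return None

print(pick_rat("control", {"LTE", "802.11ac"}))          # LTE
print(pick_rat("data", {"LTE", "802.11n", "802.11ac"}))  # 802.11ac
```

A real coexistence mechanism would of course weigh load, link quality, and handover state rather than a static mapping; the sketch only captures the control/data split.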

Figure 2.12 depicts a set of different technologies on the access side, while IEEE 802.11ac is the only technology considered on the network side. Thanks to its Gbps-level data throughput, IEEE 802.11ac seems perfectly suited as a backhaul network technology. Moreover, thanks to its widespread availability in user equipment, IEEE 802.11n is an appropriate access technology.

