This Master's thesis was accomplished within the Erasmus Mundus Joint Master Degree "Photonic Integrated Circuits, Sensors and NETworks (PIXNET)".
Coordinating Institution: Scuola Superiore di Studi Universitari e di Perfezionamento Sant'Anna
Partners: Osaka University, Aston University, Technische Universiteit Eindhoven
Project Data
Start: 01-09-2017 - End: 31-08-2022
Project Reference: 586665-EPP-1-2017-1-IT-EPPKA1-JMD-MOB
EU Grant: 3,334,000 EUR
Website: http://pixnet.santannapisa.it
Programme: Erasmus+
Key Action: Learning Mobility of Individuals
Achieving Deterministic Latency for Optical Metro Network with Edge Computing
Abstract—The future optical metro network with edge computing should provide a low and deterministic latency service to support mission-critical applications alongside the other traffic types. Due to its lack of flexibility and its high latency and jitter, the conventional network architecture cannot fulfil these critical applications' QoS requirements; a dynamic and fast reconfigurable network is therefore needed. Using the novel metro network model introduced in our colleagues' recent study, in this paper we propose and implement a prioritization method at the scheduler and a jitter compensation mechanism in the top-of-rack (TOR) switch to achieve deterministic latency for mission-critical applications. We numerically investigate the performance of the proposed methods via the OMNeT++ simulator in terms of latency, jitter, and packet loss ratio under different edge node locations. The numerical analysis also includes the other two application types (massive Internet of Things and content delivery network) to examine the impact of the methods on the remaining traffic flows. Considering a typical metro access network topology with 20 nodes serving a population of around 1 million, we achieve a deterministic latency of 65 μs with 37 ps jitter and a packet loss ratio of 8×10⁻⁷ at load 0.5 for mission-critical applications by allocating ten edge computing nodes.
Index Terms—5G, data center, jitter, deterministic latency
I. INTRODUCTION
The next generation of mobile networks brings unprecedented levels of heterogeneous traffic, further aggravated by the adoption of the New Radio technology operating in the millimeter-wave (mmWave) spectrum [1]. Among these diversified traffic types, the most anticipated and hyped aspect of 5G is its support for mission-critical applications. Unlike eMBB or the massive Internet of Things (mIOT), these use cases are not an extension of how cellular technology is used today [2].
Nowadays, mission-critical applications have gained popularity in many real-world scenarios. For instance, the automotive industry is developing autonomous vehicles and is moving towards cars that communicate with each other. This communication needs to occur under rigorous real-time constraints, as packets carrying critical information will
be sent, such as the car's next turning direction, the distance between vehicles, information about an incident on the road ahead, the position of the car within its lane, and so on. Also, in proximal-cloud-driven augmented and virtual reality devices, messages must be delivered on time so that the experience of the real-world environment is not distorted. This application category also includes remote medical surgery, remote control of robots and drones, etc.
During this communication, at the transmitter side, packets are sent in a continuous stream with equal guard times. However, due to network congestion, improper queuing, channel properties, or configuration errors, this steady stream can become lumpy: the deviation between packet arrival times varies instead of remaining constant, which results in jitter. Jitter is the variation in the delay of received packets. Jitter mitigation is a major problem for the real-time communication of mission-critical applications, which needs stringent timing guarantees with a jitter limit of about 1 µs [3].
Thus, unless we focus on this real-time communication aspect, random delays and jitter will be introduced at several points along the path. Mission-critical applications require real-time object capturing and processing. This data processing is performed in the nearest edge computing node, where high computing power is deployed. Therefore, the optical metro network is also part of the path where this real-time communication happens.
As shown in Figure 1, in the 5G network these mission-critical applications share the communication resources with the other two application types, each of which has different performance and QoS requirements and a different traffic pattern. Some of the applications are served by the centralized DC located at the core network, and others, such as the mission-critical applications, in the edge servers. To get service in the servers located either in the centralized DC or in the edge computing nodes, the packets of each application, including the mission-critical ones, must pass through several routers and switches, which makes them vulnerable to latency variation, hence jitter. This motivated us to propose a method that can solve this problem where the variation occurs.
To date, a few methods have been widely adopted in different network technologies to control jitter: using a jitter buffer, utilizing traffic shapers, and employing an optimized scheduling algorithm during packet scheduling. The jitter buffer method is usually used to compensate for jitter that occurs due to a rate mismatch between the near-end and far-end devices [4].
Figure 1: Architecture of multi-domain heterogeneous converged optical network [16]
A jitter buffer, located at the destination, intentionally delays the arriving packets so that all packets experience the same latency [5], which means the user gets a jitter-free connection. The second method utilizes traffic shapers [6]. It is implemented to reduce the jitter caused by network congestion, which is mainly due to excessive input being sent on the channel. To limit the amount of input traffic sent into the network, one makes use of traffic shapers, which ensure that only a certain amount of traffic is sent through the channel, hence regulating jitter. The last technique uses a scheduling algorithm at the transmitter side; it is widely used to mitigate jitter for a specific type of traffic in the traffic flow.
In most applications, the aforementioned methods are implemented in combination to minimize jitter. In this work, we adopted the first and the last-mentioned methods at the TOR switch and at the scheduler module of the access interface, respectively. These methods are customized to address the specific causes of jitter for the mission-critical traffic in our network. Note that, in this paper, all applications that need deterministic (jitter-free) latency are categorized under mission-critical traffic.
In this paper, we numerically investigate the performance of the proposed methods in terms of mitigating the jitter of mission-critical packets. The two jitter reduction mechanisms are implemented on an fOADM-based metro access network via OMNeT++. In this analysis, we evaluate the network with the jitter reduction methods in terms of achieving the packet loss ratio (PLR) requirement of mission-critical applications (10⁻⁷) and the methods' impact on the latency and packet loss ratio of the remaining traffic types as a function of metro edge node locations. Note that a PLR of 10⁻⁷ is sufficient for mission-critical applications according to [17].
The rest of the paper is organized as follows. Section 2 details the metro node architecture and its operation, the traffic flow and network slicing strategy, and the jitter compensation mechanism at the TOR. Section 3 explains the details of the network traffic model and the network configuration. In Section 4, we report and discuss the simulation results in different scenarios. Section 5 summarizes the main outcomes of the project.
II. METRO NETWORK SYSTEM WITH EDGE COMPUTING
A. System Operation of the fOADM-Based Node
The schematic of the fOADM-based metro access node is shown in Figure 2(a). Each node consists of a low-cost photonic-integrated wavelength selector/blocker-based fOADM [see Figure 2(b)], an optoelectrical interface [see Figure 2(c)], and, in the case of edge nodes, the computing resources. The metro access network operates in a time-slotted way under the control of an out-of-band supervisory channel, illustrated by the red line in Figure 2(a). The supervisory channel enables fast control of the nodes by carrying the destination information of the data channels in each time slot. Each fOADM node extracts and processes the supervisory channel. The control module in the optoelectrical interface processes the destination information and controls the fast reconfigurable add-drop multiplexer, deciding which wavelength channels should be dropped and, consequently, on which wavelengths the access data traffic will be added. The drop-and-continue function is also supported by the fOADM. Unlike wavelength-based switching to address the node destination (where each node destination has a dedicated wavelength), we use the labels carried by the supervisory channel to address the node destination. This avoids binding each wavelength to a certain node, enables efficient statistical (re)use of the wavelengths, and resolves collisions. It is worth noting that our fOADM system can transmit traffic on any available wavelength in a time slot, while ECOFRAME [7] and POADM [8] can only send on one wavelength.
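To make the per-slot control decision concrete, the following is a minimal C++ sketch of the label-driven drop/continue logic described above. All identifiers (SlotLabel, FoadmControl) and the convention of a negative destination marking an empty slot are our own illustration, not names from the actual simulator model.

// Sketch of the per-time-slot fOADM control decision (illustrative only).
#include <vector>

struct SlotLabel {      // destination label carried by the supervisory
    int wavelength;     // channel for one data channel in this time slot
    int destNode;       // addressed node; < 0 assumed to mean "empty slot"
};

class FoadmControl {
public:
    explicit FoadmControl(int nodeId) : nodeId_(nodeId) {}

    // For each wavelength in the incoming slot, decide whether the
    // SOA-based blocker should drop the channel (label addresses this
    // node) or let it continue; dropped/empty channels become free
    // wavelengths on which local traffic can be added.
    std::vector<bool> processSlot(const std::vector<SlotLabel>& labels) {
        std::vector<bool> drop(labels.size(), false);
        freeWavelengths_.clear();
        for (std::size_t i = 0; i < labels.size(); ++i) {
            if (labels[i].destNode == nodeId_) {
                drop[i] = true;                 // drop to local access
                freeWavelengths_.push_back(labels[i].wavelength);
            } else if (labels[i].destNode < 0) {
                freeWavelengths_.push_back(labels[i].wavelength);
            }
        }
        return drop;    // drives the wavelength blocker gates
    }

    const std::vector<int>& freeWavelengths() const { return freeWavelengths_; }

private:
    int nodeId_;
    std::vector<int> freeWavelengths_;  // usable by the add scheduler
};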
Figure 2: fOADM-based metro access network node construction and its operation [20]
The sub-microsecond fOADM module is shown in Figure 2(b). It uses a semiconductor optical amplifier (SOA) gate-based wavelength blocker to either drop or pass through any wavelength channel. More details about this module are reported in [9-15]. The optoelectrical interface is illustrated in Figure 2(c) and consists of an access interface, a network interface, and a control module. The access interface is responsible for aggregating the incoming 5G data traffic. The traffic is first checked by the destination analyzer of the access interface, and the packets with the same destination and application are aggregated in the specified buffer of the network interface. Each buffer cell represents an optical packet that will be transmitted as soon as a time slot in one of the wavelength channels is available. Each buffer inside the network interface is dedicated to an application, and the buffer can send traffic on any available wavelength, as shown in Figure 3. The available wavelength channels in every time slot are known to the interface from the supervisory channel information. Tunable transmitters are employed to transmit at any available wavelength to fully utilize the wavelength resources in a statistical multiplexing fashion. The control module monitors the free wavelengths, enables the wavelength blockers, and sets the fast-tunable transmitters to add the optical packets into the network without collision. Note that, either because of the prioritization implemented at the scheduler or because there is no available time slot on any possible wavelength, a packet may remain stored in the electronic buffer. Finally, the control module modifies the supervisory channel according to the dropped and added data channels before sending it out to the next node.
B. Network Slicing Strategy
In this work, we consider three slices of traffic (massive Internet of Things (mIOT), content delivery network (CDN), and mission-critical) to study the network performance with the proposed methods in terms of achieving deterministic latency for mission-critical applications and its influence on the remaining traffic types. For each network slice, dedicated resources (computing power and wavelength channels) are flexibly assigned. As per the traffic properties [17-19], the mIOT services consist of actuators and immobile measuring sensors with no constraints on latency. In contrast, the mission-critical traffic comprises autonomous vehicles, virtual reality, and remote surgery, and requires deterministic end-to-end latency and a low packet loss ratio (10⁻⁷). Mobile broadband services such as CDNs demand high bandwidth yet are not so sensitive to latency.
Figure 3: Scheduling method at the normal node (a) and edge node (b)
The bandwidth resource is sliced for each traffic type according to its properties by utilizing a scheduling method. Besides bandwidth, the computing resources deployed in the edge and centralized DCs are also assigned to the slices based on their latency sensitivity.
The two scheduling methods illustrated in Figures 3(a) and 3(b) are implemented at the metro access node without and with computing power, respectively. In the remaining parts of the paper, the term edge node denotes the metro access node with edge computing, and normal node denotes the metro access node without edge computing. The scheduler used in the normal node assigns wavelength resources in proportion to each slice's traffic volume, without priority, to avoid a large buffer demand at the network interface; the packet latency variation is instead compensated at the TOR of the edge node. For example, assume that the three network slices require bandwidths of B1, B2, and B3, aggregated into three dedicated buffers, respectively. At each time slot, if there are B wavelength channels available for sending traffic, those free channels are allocated to the slices in proportion to the required bandwidth. Note that the bandwidth assignment of the slices is also based on their reliability requirements. When the calculated free channel count for a slice is not an integer, the scheduler applies the round-down function to the mIOT slice and the round-up function to the mission-critical traffic; the remaining free wavelength channels are given to the CDN slice.
At the edge node, this scheduling method is only applied when the proportional share of free channels can accommodate the instantaneous mission-critical bandwidth demand. Otherwise, all the bandwidth requested by the mission-critical slice is assigned, as long as the free channels suffice, and the other traffic types share the remaining free channels according to their properties, including reliability. When the free channels cannot support the entire mission-critical bandwidth request, all the available wavelengths are assigned to it, and the remaining data is stored in the electronic buffer, as for the other two slices.
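The following C++ sketch illustrates both allocation rules under the stated rounding conventions. The identifiers and the representation of per-slice demand are our own illustration of the described algorithm, not the simulator's API.

// Minimal sketch of the per-slot wavelength allocation of Section II-B.
#include <algorithm>
#include <cmath>

struct Demand { double miot, critical, cdn; };  // per-slot bandwidth demand
struct Alloc  { int miot, critical, cdn; };     // channels granted per slice

// Normal node: split the B free channels in proportion to the demands,
// rounding down for mIOT, up for mission-critical, remainder to CDN.
Alloc scheduleNormal(int B, const Demand& d) {
    double total = d.miot + d.critical + d.cdn;
    if (total <= 0.0 || B <= 0) return {0, 0, 0};
    int miot = static_cast<int>(std::floor(B * d.miot / total));
    int crit = static_cast<int>(std::ceil(B * d.critical / total));
    crit = std::min(crit, B - miot);          // never exceed the budget
    return {miot, crit, B - miot - crit};     // leftover goes to CDN
}

// Edge node: if the proportional share cannot cover the instantaneous
// mission-critical demand, grant that slice as many free channels as it
// needs (or all of them), and let the other slices share the rest.
Alloc scheduleEdge(int B, const Demand& d, int criticalChannelsNeeded) {
    Alloc prop = scheduleNormal(B, d);
    if (prop.critical >= criticalChannelsNeeded) return prop;
    int crit = std::min(criticalChannelsNeeded, B);
    int rest = B - crit;
    double others = d.miot + d.cdn;
    int miot = (others > 0.0)
                   ? static_cast<int>(std::floor(rest * d.miot / others))
                   : 0;
    return {miot, crit, rest - miot};
}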
To optimize the network performance as well as the computing resource usage, we assign different network functions to the edge computing nodes and to the centralized data center (DC) [21-23]. Thus, the virtualized radio access network (RAN) functions are served in the edge computing node, while the virtualized mobile network gateway functions are served by the centralized DC. Connectivity among the virtual network functions (VNFs) located in the edge computing nodes and the centralized DC is assumed to be provisioned by a uniform control plane. For all three slices, the radio access network functions are virtualized in a distributed unit (DU), which is deployed in the edge computing node. Network gateway functions virtualized in a central unit (CU) are deployed in the centralized DC or in the edge computing node, depending on the latency requirements of the various slices.
In this work, the VNFs of the different slices are assigned either to the distributed unit or to the central unit according to the network slicing strategy. Based on this strategy, the mIOT slice features latency-insensitive processing, so its VNFs can be deployed in the centralized DC. In contrast, the VNFs and the corresponding computing resources of the mission-critical slice should be installed in the edge computing node to minimize the network latency. To reduce the upper-layer traffic, the caches of the most popular CDN content (defined in this paper as CDN-1) should be stored in the edge computing node, while the less popular content (defined in this work as CDN-2) can be stored in the centralized DC to lower the cost of the edge network segment. Note that all types of traffic first need to reach the edge computing node, since that is where the virtualized baseband function is deployed. As seen thus far, dedicated slices are created for services with various requirements. Moreover, the VNFs of each slice are placed in different locations (i.e., either in the edge computing node or in the centralized DC) based on the service itself. This network slicing and virtualization capability enables easy network resource management and flexible assignment, helping accomplish deterministic latency for the mission-critical slice in a cost-effective way.
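As a compact illustration, the placement strategy above could be encoded as follows; the enums and the function are purely illustrative (the simulator does not expose such an API), but the mapping mirrors the text.

// Illustrative encoding of the VNF placement strategy of Section II-B.
enum class Slice { MIOT, CDN1, CDN2, MissionCritical };
enum class Site  { Edge, CentralizedDC };

// The DU (virtualized RAN functions) always runs at the edge for every
// slice; the placement below concerns the CU / service functions and
// follows each slice's latency sensitivity and cache popularity.
Site servicePlacement(Slice s) {
    switch (s) {
        case Slice::MissionCritical: return Site::Edge;          // lowest latency
        case Slice::CDN1:            return Site::Edge;          // popular caches
        case Slice::MIOT:            return Site::CentralizedDC; // latency-insensitive
        case Slice::CDN2:            return Site::CentralizedDC; // less popular caches
    }
    return Site::CentralizedDC;  // unreachable; silences compiler warnings
}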
Figure 4: Network slicing and traffic routing [20]
C. Traffic Flow for Edge and Normal Nodes
The access interface of the optoelectrical interface is responsible for aggregating and grooming the incoming traffic. As shown in Figure 2, the optoelectrical interface comes with two kinds of access interface: one connected with both the antennas and the edge computing node, and one connected only with the antenna access. The access interface of the normal node only aggregates and forwards the traffic according to its type, whereas the interface deployed in the edge node must also provide the switching function among the computing nodes, the antenna accesses, and the optical ring metro network. The computing node serves as a small-scale data center consisting of servers and top-of-rack (TOR) switches. The downlink interfaces of the TOR switches are connected with the servers mounted in the rack where the switch is located. On the opposite side, the TOR switches are also connected with the access interface through their uplink ports for traffic exchange among the antenna accesses, the optical metro network, and the servers located in the other racks.
The generated traffic is routed based on the destination information, which is related to the traffic flow and contains the node destination and the server destination. For the mIOT and CDN-2 traffic, the destination node is the index of the nearest edge node; after the edge node processing, it is modified by the servers to the index of the gateway node. Afterwards, the processed traffic reaches the network interface via the TOR switch. At the network interface, traffic with the same destination node and properties is aggregated into the corresponding dedicated buffers. The available wavelength channels are assigned to each buffer type at each time slot according to the scheduling method described in Section 2B.
The detailed traffic flow of each slice in the network is illustrated in Figure 4. At the normal nodes, the incoming traffic is first classified according to its traffic type and then aggregated into the corresponding dedicated buffer type. The traffic stored in the buffers is forwarded to the nearest metro edge node for data processing and network-function-layer processing. At the edge node, the incoming traffic from the normal nodes and the antenna access is directly forwarded to the attached TOR switch and servers for first-stage processing. The egress traffic of the servers is then sent back to the TOR and further to the edge node. At the edge node, the ingress traffic of the mission-critical slice and half of the CDN (CDN-1) is sent back to the traffic source, which could be the antenna access of the current edge node or the closest normal node reached through the optical ring network. The traffic of mIOT and the other half of the CDN services is aggregated into the network interface and sent to the gateway node, which is responsible for distributing and aggregating the traffic to and from the centralized DC. Thus, the gateway node forwards the traffic to the centralized DC. Eventually, the packets serviced at the centralized data center are sent back to the gateway node and further through the optical metro access ring network towards their source node.
D. Jitter Compensation at the TOR Switch for Mission-Critical Traffic
As mentioned in the previous subsection, at the edge node the traffic coming from both the antenna access and the normal nodes is forwarded to the attached TOR. As shown in Figure 5, the TOR switch stores the traffic in two distinct buffers assigned to mission-critical and non-critical traffic (BUF 4_1 and BUF 4_2, respectively). The prioritizer and jitter controller module always reads the critical traffic from its buffer first, before the other traffic. Next, it checks whether the latency of the packet already fulfils the maximum expected network delay time; if its delay is greater than or equal to the expected time, it immediately sends the packet to the assigned destination server. Otherwise, it calculates the time deviation between the reference network latency and the actual propagation delay of the packet. Afterwards, an instance is created at the time controller module, which is responsible for controlling the remaining time needed for the packet's latency to fulfil the expected network delay time.
Simultaneously, the prioritizer and jitter controller module tags a unique identification number onto the packet and sends it to the waiting buffer (BUF 5 in Figure 5), where packets remain stored until the time controller module alerts the prioritizer and jitter controller module to read them back. Once the alert is sent from the time controller, the unique ID of the packet is recovered from the alert message; using this ID, the specific packet is fetched from the waiting buffer and forwarded to the destination server.
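A minimal C++ sketch of this prioritizer/time-controller interplay is given below. The class, the timer queue, and all identifiers are illustrative; in an OMNeT++ model the time controller would typically be realized with scheduled self-messages rather than an explicit queue.

// Sketch of the jitter compensation logic at the TOR (Section II-D).
#include <cstdint>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

struct Packet {
    uint64_t id;          // unique ID tagged by the prioritizer
    double   enteredAt;   // timestamp set at the traffic source
    int      destServer;
};

class JitterCompensator {
public:
    explicit JitterCompensator(double maxNetworkDelay)
        : maxDelay_(maxNetworkDelay) {}

    // Called when a mission-critical packet is read from its buffer.
    // Returns true if the packet may be forwarded to its server now.
    bool onCriticalPacket(const Packet& p, double now) {
        double experienced = now - p.enteredAt;
        if (experienced >= maxDelay_) return true;   // already at target delay
        double release = now + (maxDelay_ - experienced);
        waiting_[p.id] = p;                          // park in waiting buffer (BUF 5)
        timers_.push({release, p.id});               // time-controller instance
        return false;
    }

    // Called by the time controller when a release time expires: the
    // packet leaves with latency equal to maxDelay_, so all critical
    // packets see the same end-to-end delay (jitter-free in principle).
    bool onTimerExpired(double now, Packet& out) {
        if (timers_.empty() || timers_.top().first > now) return false;
        uint64_t id = timers_.top().second;
        timers_.pop();
        out = waiting_.at(id);                       // fetch by unique ID
        waiting_.erase(id);
        return true;
    }

private:
    using Timer = std::pair<double, uint64_t>;       // (releaseTime, packet ID)
    double maxDelay_;
    std::unordered_map<uint64_t, Packet> waiting_;   // waiting buffer analogue
    std::priority_queue<Timer, std::vector<Timer>, std::greater<Timer>> timers_;
};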
Figure 5: Jitter compensation at the TOR for mission-critical traffic
III. SIMULATION SETUP
The performance of the introduced methods in the fOADM-based metro access network with edge computing nodes, aiming to achieve deterministic latency for critical traffic, has been studied with the OMNeT++ network simulation framework. Since the network model is built on a real operator's metro network in a metropolitan area with a population of 1 million, it gives ample opportunity to test the methods under real-world network scenarios. The connection between the metro access and the core, where the centralized DC is located, has been simulated as an extra fixed delay. In this network, there are 19 clusters, and each cluster consists of 20 nodes connected in a ring topology. The simulation model and the detailed traffic flows are illustrated in Figure 6. Both metro access node types (i.e., the edge and normal nodes) are connected in a ring-based metro access network. The average distance between two consecutive metro access nodes is 9 km. Every cluster has a gateway node connected with the metro core region. The metro core region comprises 33 nodes connected with six backbone nodes; on average, six metro core nodes are connected with one backbone node. The average distance between two metro core nodes is 13 km, and it is 40 km between the first two levels of nodes. It is assumed that every three first-level nodes are connected to a centralized data center. Thus, the distance between the metro access gateway node and the centralized DC is around 200 km, which is equivalent to a 1 ms propagation delay.
Figure 6: Details of functional modules of each node and traffic flow of each network slice in the simulation model
A. Traffic Model
As shown in Table 1, of the population covered by the network, let one-fifth be the operator's share of subscribers; 30 percent of these are mobile users. Assume that at most one-tenth of the mobile subscribers request network services simultaneously. The peak data rate per subscriber is also considered to be a symmetric 1 Gb/s. The total estimated peak traffic of the studied network is then around 160 Tb/s for mobile access. Thus, the peak access traffic of each metro access node is around 400 Gb/s,
since there are 380 metro access nodes in total. So, in the simulation, each metro access node can generate at most 400 Gb/s of traffic, which includes the three traffic types.
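For reference, these figures follow directly from the parameters in Table 1, with the rounding used in the text:

\[
25\times 10^{6}\ \text{people} \times \tfrac{1}{5} \times 0.3 \times 0.1 \times 1\,\mathrm{Gb/s} = 150\,\mathrm{Tb/s} \approx 160\,\mathrm{Tb/s},
\qquad
\frac{160\,\mathrm{Tb/s}}{19\times 20\ \text{nodes}} \approx 400\,\mathrm{Gb/s\ per\ node}.
\]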
Assume that each node generates packets of CDN, mIOT, and mission-critical traffic with probabilities of 70%, 20%, and 10%, respectively [23]. The destination node of each packet is generated according to the traffic flow strategy discussed in Section 2B. Both the mission-critical traffic and half of the CDN traffic (CDN-1) only need to reach the nearest edge node [24]. The remaining half of the CDN (CDN-2) and the mIOT traffic need to reach the nearest edge node first and are then forwarded to the centralized DC. Note that, in the simulation, NFV processing is emulated by making the traffic traverse the edge node servers or the centralized DC. The destination servers located in the edge nodes and the centralized DC receive uniformly distributed traffic.
B. Network Configuration
Consider that the optical metro access nodes aggregate and generate optical packets within a time slot of 1 μs. The per-slot control overhead is 400 ns, which includes a 100 ns supervisory channel processing time, 180 ns for tuning the tunable lasers to the right wavelength [24], a 20 ns switching time of the SOAs, and a 100 ns guard time for slot alignment. The propagation delay between two metro access nodes is 45 μs because of the 9 km distance between consecutive nodes. In the simulation, the distance between the gateway node and the centralized DC is emulated as a 1 ms propagation delay because of the 200 km fiber link length.
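These values are mutually consistent when assuming the usual ≈5 µs/km propagation delay in fiber:

\[
100 + 180 + 20 + 100 = 400\,\mathrm{ns};\qquad
9\,\mathrm{km}\times 5\,\mu\mathrm{s/km} = 45\,\mu\mathrm{s};\qquad
200\,\mathrm{km}\times 5\,\mu\mathrm{s/km} = 1\,\mathrm{ms}.
\]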
Each edge computing node and the centralized DC consist of several servers and TOR switches. Each TOR switch is equipped with several 10 and 40 Gb/s network interfaces and can connect with at most 40 servers, while every server is equipped with a 10 Gb/s network interface. Thus, the oversubscription ratio of the TOR switch is set to 1, which means its uplink bandwidth equals the total aggregated bandwidth of the 40 servers directly connected to it. In the simulation model, the traffic source emulates the antenna source, which connects to the access interface through ten 40 Gb/s transceivers.
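In other words, with the stated interface counts:

\[
\text{uplink} = 40\ \text{servers}\times 10\,\mathrm{Gb/s} = 400\,\mathrm{Gb/s} = \text{aggregated downlink}
\;\Rightarrow\;
\text{oversubscription} = 1.
\]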
At each 1 μs time slot, the traffic source generates uniformly distributed traffic blocks. The generated traffic blocks are forwarded to the access interface of the node, where each traffic block is divided into four small cells of 1.25 Kbytes each. Ten of the small cells with the same application type and node destination are aggregated into a cell of 12.5 Kbytes, stored in a dedicated buffer (BUF 1 in Figure 6), and then further forwarded to the optical network. The TOR switch is connected with the access interface by several 40 Gb/s transceivers, which are used to send or receive the small cells to/from the access interface with (or without) aggregation. Optical packets coming from the metro access network are first segmented into ten small cells and then sent to the buffers (BUF 2 in Figure 6) interfaced to the destination.
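This aggregation step can be sketched in C++ as follows; the class and all identifiers are illustrative only, and cells are keyed by (destination node, application type) exactly as described above.

// Sketch of the access-interface cell aggregation: ten 1.25 kB small
// cells sharing application type and node destination form one 12.5 kB
// optical cell bound for BUF 1.
#include <cstddef>
#include <map>
#include <optional>
#include <utility>
#include <vector>

constexpr std::size_t kSmallCellBytes = 1250;  // 1.25 Kbytes per small cell
constexpr std::size_t kCellsPerPacket = 10;    // ten cells = 12.5 Kbytes

struct SmallCell { std::vector<char> payload; };

class CellAggregator {
public:
    // Accumulate a small cell; once ten compatible cells are queued,
    // return them as one optical cell, otherwise return nothing.
    std::optional<std::vector<SmallCell>> add(int destNode, int appType,
                                              SmallCell cell) {
        auto& q = buckets_[{destNode, appType}];
        q.push_back(std::move(cell));
        if (q.size() < kCellsPerPacket) return std::nullopt;
        std::vector<SmallCell> packet;
        packet.swap(q);           // 12.5 kB optical cell, ready for BUF 1
        return packet;
    }

private:
    std::map<std::pair<int, int>, std::vector<SmallCell>> buckets_;
};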
Table 1: Traffic model parameters summary
Population 25 million
Subscribers 5 million
Oversubscription ratio 0.1
Mobile subscriber ratio 0.3
Peak data rate 1 Gb/s symmetric
Total Peak traffic ~160 Tb/s
Peak traffic per node ~400 Gb/s
IV. SIMULATION RESULTS AND DISCUSSION
In this section, we numerically investigate and analyze the performance of the proposed jitter-controlling methods as a function of the edge computing node count. In Section 4A, we analyze the performance of the jitter controller for mission-critical traffic at different edge node counts. In addition, in Section 4B, to investigate the influence of the implemented methods on the remaining traffic types, we study the packet loss ratio and the average latency of each traffic type.
A. Network Performance Analysis for Different Edge Node Counts
First, the performance of the jitter controller is analyzed as a function of the edge node count. In this analysis, each edge node has 80 servers and 10 transceivers; each transceiver can send/receive at 100 Gb/s. The sizes of BUF 1 and BUF 2 in Figure 6 are set to 187.5 Kbytes; the other detailed simulation parameters are summarized in Table 2. Note that at the gateway the size of BUF 1 is set to 250 Kbytes. Also, there are, in total, 160 servers available in the centralized DC of the metro network model we used. We have investigated the network performance for four cases, in which the edge node count is set to 4, 6, 10, and 20.
Table 2(a): Edge node parameters
Parameter Size/ Amount
BUF 1/ BUF2 187.5 Kbytes
BUF 3_1/BUF 4_1 (Critical buffers) 8.75 Kbytes
BUF 3_2 (Non-critical buffer) 250 Kbytes
BUF 4_2 (Non-critical buffer) 17.5 Kbytes
Number of transceivers 10
Number of servers 80
Table 2(b): Normal node parameters
Parameter Size/ Amount
Number of transceivers 6
BUF 1 size 187.5 Kbytes
BUF 2 size 187.5 Kbytes
Since network resources are plentiful and all the mission-critical packets are treated at the edge node without traversing the optical ring network, the packet latency before arriving at the TOR is not constrained by wavelength availability. Thus, as shown in Figure 7(a), the jitter controller kept the latency variation to 6.67 ps of jitter at any traffic load. The likely cause of this small residual variation is clock misalignment between the source, where the packet gets its first timestamp, and the destination. As we can observe from Figure 7(b), when the edge node count is reduced to ten, two other factors determine the latency variation of the packets in addition to the clock synchronization issue. The first is wavelength availability when packets are sent from the normal nodes: if there is no available bandwidth to send a packet, it remains in the electronic buffer, and the latency deviation between a packet sent as soon as it arrives and one stored in the buffer for a long time results in high jitter. The other factor is the large propagation delay difference between packets generated at the nearest normal node and packets coming from the antenna access attached to the edge node. The introduced jitter compensator achieves a deterministic latency of 65 µs with 37 ps of jitter until the traffic load reaches 0.5.
Figure 7: CDF of mission-critical packet latency for different edge node counts, with and without the jitter controller: jitter controller performance under edge node counts of 20 (a), 10 (b), 6 (c), and 4 (d); without the jitter controller under edge node counts of 20 (e) and 10 (f)
The last two factors play the main role in causing jitter when the edge node count is six or four. As shown in Figure 7(c), the jitter compensator is able to achieve a deterministic latency of 152 µs with a jitter of 102 ps for loads below 0.5. At load 0.5, the jitter begins to rise to ~300 ps due to the higher network load compared with the previous cases. As the network load increases, the probability that nodes obtain a free wavelength drops; hence, packet latency increases because of the long buffering time at the network interface. As illustrated in Figure 7(d), the jitter compensator performs well up to load 0.4 (~240 ps). As the number of edge nodes decreases, more normal nodes are served by a single edge node, which in turn raises the traffic volume at the edge node already at load 0.5. As a result, the network shows microsecond-level jitter.
Figures 7(e) and (f) show the performance of the network without the proposed jitter reduction methods when twenty and ten edge nodes are deployed, respectively. According to Figure 7(e), the network has microsecond-level jitter at any traffic load, which does not fulfil the QoS requirements of the mission-critical applications. However, using the proposed methods, we reduced the jitter to less than seven picoseconds, which is more than enough to accommodate the mission-critical applications in the future 5G network. Also, as illustrated in Figure 7(f), the network with ten edge nodes has a latency spread in the range of tens to hundreds of microseconds, but after applying the proposed methods, the jitter falls below 350 ps for loads below 0.5. All the picosecond-scale latency variations reported above are also illustrated in Figure 8, which better shows the details of the above CDF plots.
B. Impact of the Proposed Methods on the Traffic Types Other Than Mission-Critical
All the aforementioned traffic parameters remain the same for this analysis. In this assessment, the latency is the end-to-end latency, and jitter is defined as the standard deviation of the interarrival times of the packets. In addition, for the packet loss ratio calculation, we consider that the centralized DC does not lose packets, because its resources and capacity are larger than the throughput of the gateway. As shown in Figure 9(a), the network with four and six edge nodes has a PLR offset of 10⁻⁵ and 10⁻⁶, respectively, which does not comply with the mission-critical traffic requirement; thus, a network based on either four or six edge nodes is not suitable to serve critical applications.
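For concreteness, writing \( \Delta t_i \) for the interarrival times of \( N \) consecutive packets, the reported jitter corresponds to the standard deviation of these interarrival times; the sample estimator below is our assumption of the exact form:

\[
J=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\bigl(\Delta t_i-\overline{\Delta t}\bigr)^{2}},
\qquad
\overline{\Delta t}=\frac{1}{N}\sum_{i=1}^{N}\Delta t_i .
\]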
Figure 9: Packet loss ratio of the mission-critical traffic (a) and the remaining traffic types (b), (c) for different edge node counts
Figure 10: (a) & (c) Average latency of the mIOT and CDN-2 traffic types on a network with and without a jitter controller, respectively. (b) & (d) Average latency of the CDN-1 traffic slice with and without the jitter controller methods.
However, when the edge node count increases to ten, the network shows a clear PLR improvement, especially for traffic loads below 0.4. As the traffic increases further, the buffers located at the network interface as well as at the TOR switch become full, and the optical ring metro network becomes congested, which in turn forces the packets to remain in the buffers for a long period of time. Further increasing the edge node count to twenty results in zero PLR for the mission-critical traffic: since this traffic stays at the edge node, its flow does not depend on the instantaneously available wavelengths, and the assigned buffer sizes are enough to hold its packets even at the maximum load. For the remaining traffic types, as shown in Figures 9(b) and (c), when the edge node count is different from twenty, the packet loss ratio (PLR) increases as the load rises, due to network congestion, fully occupied buffers, and the priority at the scheduler. In contrast, in the case of twenty edge nodes, the PLR of mIOT and CDN-2 does not depend on the introduced methods, since the mission-critical packets are not transported through the optical network; instead, it depends on the available wavelengths. As long as the load is below 0.5, each node has a fair chance of obtaining a free wavelength, which results in high loss at the network interface of the gateway node. Once the traffic rises further, the probability of obtaining free wavelengths begins to drop; packets then remain stored in the buffers of their own nodes, which can hold them even at high loads, so the loss at the network interface of the gateway node drops.
As illustrated in Figures 10(a) and (c), the average latencies of the mIOT and CDN-2 traffic have an offset of around 1.5 ms, since this traffic has to be processed by the centralized DC, which is 200 km away from the gateway node. Due to the priority at the scheduler, these two traffic types experience higher latency than in the network without the mission-critical optimization.
Unlike mIOT and CDN-2, as illustrated in Figures 10(b) and (d), CDN-1's latency improves as the number of edge nodes increases, since this traffic is served by the edge servers: more edge nodes mean a shorter propagation distance between the edge and the normal nodes, and also a smaller service time due to the higher available computing resources. Because of the network optimization done for the mission-critical traffic, CDN-1's latency shows a peak at load 0.5: at that load, the mission-critical packets arrive at the TOR switch with latencies deviating only slightly from the maximum delay, so they fulfil the maximum delay requirement after a short buffering time and are serviced at the servers, which forces the CDN-1 packets to stay in the buffer for a long period of time and raises their latency. At loads higher than 0.5, CDN-1's latency decreases, since the packet loss ratio of the mission-critical traffic rises and CDN-1 packets can more easily get serviced at the servers.
V. CONCLUSION AND FUTURE WORK
We have proposed two jitter reduction methods: a jitter compensator at the TOR switch using a jitter-buffering technique, and a scheduler at the network interface of the edge node that gives high priority to mission-critical traffic when sending packets. The performance of the proposed methods has been studied in terms of jitter, latency, and packet loss ratio for different numbers of edge nodes in the metro network. The methods are implemented and numerically investigated on an fOADM-based metro access network built in OMNeT++. Results show that, when the edge node count is 20, the network is ideal for mission-critical traffic, with a latency of 12 µs, which complies with the latency requirement of less than 1 ms, and a jitter of only 37 ps, far below the 1 µs requirement. However, this performance does not come for free: it requires deploying more computing resources at each node. To make the network more cost-effective, six edge nodes can still serve the mission-critical applications, provided their traffic load is kept at 0.5 at most.
As discussed above, in most of the assessments, except for the edge node count of 20, the jitter controller does not control the jitter once the traffic load rises above 0.5. As future work, we therefore recommend optimizing the proposed methods to achieve less than 1 µs jitter even at higher loads (above 0.5) in a cost-effective way.
REFERENCES
[1] M. Mirahsan, R. Schoenen, and H. Yanikomeroglu, "HetHetNets: Heterogeneous traffic distribution in heterogeneous wireless cellular networks," IEEE J. Sel. Areas Commun., vol. 33, no. 10, pp. 2252-2265, Oct. 2015.
[2] IHS Markit, "Mission critical use cases," https://ihs-markit.foleon.com/technology/5g-is-coming-topical-report/use-case-mission-critical-copy/
[3] Geetika-Singh, F. N. U. "Analysis of jitter control using real time scheduling." (2020).
[4] Pogrebinsky, Vladimir, and Noam Caster. "Jitter buffer and methods for control of same." U.S. Patent Application No. 09/267,350.
[5] Benzaoui, Nihel, et al. "DDN: Deterministic dynamic networks." 2018 European Conference on Optical Communication (ECOC). IEEE, 2018.
[6] Davis, Robert I., and Nicolas Navet. "Traffic shaping to reduce jitter in controller area network (CAN)." ACM SIGBED Review 9.4 (2012): 37-40.
[7] Ušćumlić, Bogdan, et al. "Optimal dimensioning of the WDM unidirectional ECOFRAME optical packet ring." Photonic Network Communications 22.3 (2011): 254-265.
[8] Simonneau, Christian, Jean Christophe Antona, and Dominique Chiaroni. "Packet-optical Add/Drop multiplexer technology: A pragmatic way to introduce optical packet switching in the next generation of metro networks." 2009 IEEE LEOS Annual Meeting Conference Proceedings. IEEE, 2009.
[9] Miao, Wang, et al. "Low latency optical label switched add-drop node for multi-Tb/s data center interconnect metro networks." ECOC 2016; 42nd European Conference on Optical Communication. VDE, 2016.
[10] Pan, Bitao, et al. "Performance assessment of metro networks based on fast optical add-drop multiplexers under 5G traffic applications." 2018 European Conference on Optical Communication (ECOC). IEEE, 2018.
[11] Hawilo, Hassan, et al. "NFV: state of the art, challenges, and implementation in next generation mobile networks (vEPC)." IEEE Network 28.6 (2014): 18-26.
[12] R. Buyya, C. S. Yeo, and S. Venugopal, “Market-oriented cloud computing: vision, hype, and reality for delivering it services as computing utilities,” in 10th IEEE International Conference on High Performance Computing and Communications (2008), pp. 5–13.
[13] "Common public radio interface (CPRI); interface specification V7.0," 2015, http://www.cpri.info/downloads/CPRI_v_7_0_2015-10-09.pdf
[14] "Minimum requirements related to technical performance for IMT-2020 radio interface(s)," ITU-R M.2410-0, 2017, https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2410-2017-PDF-E.pdf
[15] "Minimum requirements related to technical performance for IMT-2020 radio interface(s)," ITU-R M.2410-0, 2017, https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2410-2017-PDF-E.pdf
[16] Zong, Yue, et al. "Virtual Network Embedding for Multi-Domain Heterogeneous Converged Optical Networks: Issues and Challenges." Sensors 20.9 (2020): 2655.
[17] H2020 5G-PICTURE project, deliverable D2.1, "5G and Vertical Services, Use Cases and Requirements," Jan. 2018.
[18] Wortmann, F., and K. Flüchter. "Internet of Things." Business & Information Systems Engineering 57.3 (2015): 221-224.
[20] Pan, Bitao, et al. "Performance assessment of a fast optical add-drop multiplexer-based metro access network with edge computing." IEEE/OSA Journal of Optical Communications and Networking 11.12 (2019): 636-646.
[21] Larsen, Line MP, Aleksandra Checko, and Henrik L. Christiansen. "A survey of the functional splits proposed for 5G mobile crosshaul networks." IEEE Communications Surveys & Tutorials 21.1 (2018): 146-172.
[22] NGMN Alliance. "Description of network slicing concept." NGMN 5G P 1.1 (2016).
[23] Zhang, Haijun, et al. "Network slicing based 5G and future mobile networks: mobility, resource management, and challenges." IEEE communications magazine 55.8 (2017): 138-145.
[24] Simsarian, J. E., et al. "A widely tunable laser transmitter with fast, accurate switching between all channel combinations." 2002 28TH European Conference on Optical Communication. Vol. 2. IEEE, 2002.