4 – Traffic Engineering

Academic year: 2021

4.1 – Traffic Engineering and MPLS

One of the reasons for the increasing deployment of MPLS, perhaps the most important one, is the ease with which traffic engineering techniques can be applied.

Traffic Engineering (TE) is the process of measuring, characterizing, modeling, and controlling the traffic through a network, with the purpose of optimizing resource utilization and therefore network performance [RFC 2702].

In an MPLS network, the main objective of TE is to enable efficient and reliable operation while improving network performance, in particular with regard to resource utilization and the quality of service offered to traffic. This objective has become increasingly important, and in some cases essential, in a great number of Autonomous Systems, both because of high network management costs and because of the commercial, competitive nature of the Internet. Even though TE is used mostly in the network core, it can be applied to any label switched network under a single technical administration in which at least two paths exist between two nodes [RFC 2702]. TE thus gives an Internet Service Provider precise control over the traffic flows within its network [RFC 2430]. The main objectives of Traffic Engineering can be divided into:

• Traffic Oriented.

• Resource Oriented.

The Traffic Oriented objectives cover the aspects that improve the QoS of traffic flows. In a model with a single class of service (only the best effort class), such objectives include minimizing packet loss, minimizing delay, maximizing throughput, and enforcing Service Level Agreements. In a Differentiated Services model, where there is more than one class, it is more useful to consider statistically bounded traffic-oriented performance objectives, such as peak-to-peak packet delay variation, loss ratio, and maximum packet transfer delay [RFC 2702]. The Resource Oriented objectives concern the optimization of resource utilization. They are achieved through efficient management of the resources; in particular, by changing the traffic distribution across the network so that all network resources are used in a more balanced way, avoiding heavily loaded zones coexisting with lightly loaded zones [Tofoni, 03].

A fundamental traffic- and resource-oriented performance objective is congestion avoidance. Congestion arises in two main cases [RFC 2702]:

• When network resources are insufficient or inadequate to manage the arriving load.

• When traffic flows are inefficiently mapped onto the available resources, causing subsets of network resources to become over-utilized while others remain underutilized.

Increasing the network capacity could be an obvious solution to both problems. However, as explained in chapter 1, this is not an economical option for many ISPs. In addition, no matter how much bandwidth the network provides, new applications will likely be invented to consume it, so the problem would only be postponed. For this reason, the first problem can be addressed with classical techniques such as limiting the sender traffic rate, managing the router queues (RED, RIO, WRED, etc.), and controlling the scheduling mechanisms. The second type of problem can be solved only with Traffic Engineering, which avoids congestion by routing flows through paths that are not congested.
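As an illustration of one of these queue-management techniques, the following sketch shows the core of the RED algorithm: the drop probability grows linearly with an exponentially weighted average of the queue length between two thresholds. The parameter values are illustrative, not taken from this thesis.

```python
def red_avg(avg, q_len, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - w) * avg + w * q_len

def red_drop_probability(avg_q, min_th, max_th, max_p):
    """RED early-drop probability for the current average queue length."""
    if avg_q < min_th:
        return 0.0                       # queue short: never drop early
    if avg_q >= max_th:
        return 1.0                       # queue too long: drop everything
    # between the thresholds the probability grows linearly up to max_p
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

For example, with min_th = 5, max_th = 30 and max_p = 0.1, an average queue of 20 packets gives a drop probability of 0.06; WRED simply keeps a separate (min_th, max_th, max_p) triple per traffic class.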

MPLS is a very good choice for Traffic Engineering because it can potentially offer a large part of the higher-level functionalities in an integrated way and at a lower cost than the current alternatives. Moreover, as noted in the last section of the previous chapter, MPLS gives full and efficient support to Explicit Routing.

A well-known example used to explain TE is shown in figure 4.1; it is called the “fish example” after the fish-like network structure. Let us assume that:

1. The red traffic follows the shortest path, occupying 60% of the link bandwidth.

2. When R1 sends other traffic requiring 50% of the link bandwidth, if it follows the shortest path (as both the red and the yellow flows are directed to the same egress LSR), the result is an overloaded and congested path through R3, with all the negative consequences of that situation, while the path through R6 and R7, even though it has one more hop than the shortest, remains totally unused.

3. At this point, TE is used to constrain the yellow traffic to follow the other path, assuring a better service to both types of traffic and a better-distributed use of the network.

Figure 4.1 – Fish example

MPLS offers the possibility to automate some aspects of the Traffic Engineering functions. Explicit label switched paths can easily be created either by manual administrative action or automatically by protocols at different levels. A fundamental characteristic of MPLS is its full support for Constraint-based Routing.

4.2 – Constraint-based Routing (CR)

Constraint-based Routing is an important instrument for automating the traffic engineering process. CR overcomes the limitations of simple destination-based routing by determining routes based not only on the network topology, but also on additional metrics such as bandwidth, delay, monetary cost, and jitter. The routing algorithm chooses a path that optimizes one or more of these metrics. Generally, a metric based on the hop count and on the bandwidth is used [RFC 3212].

The algorithm used in the simulations of this thesis searches for the minimum number of hops while keeping track of the bandwidth available on every link, and decides on the basis of these two parameters: among all candidate paths, it chooses the shortest one that still has the requested bandwidth available.
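The selection rule just described can be sketched as a breadth-first search that simply ignores links without enough residual bandwidth; since BFS visits nodes in order of hop count, the first feasible path found is also the shortest. This is a minimal illustration, not the simulator code of the thesis.

```python
from collections import deque

def cspf(links, src, dst, demand):
    """Shortest path by hop count among links whose available bandwidth
    is at least `demand`. `links` maps (u, v) -> available bandwidth."""
    adj = {}
    for (u, v), bw in links.items():
        if bw >= demand:                 # prune infeasible links up front
            adj.setdefault(u, []).append(v)
    parent, seen, queue = {}, {src}, deque([src])
    while queue:                         # BFS explores paths by hop count
        u = queue.popleft()
        if u == dst:                     # first hit is a shortest feasible path
            path = [dst]
            while path[-1] != src:
                path.append(parent[path[-1]])
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                parent[v] = u
                queue.append(v)
    return None                          # no path satisfies the constraint
```

With links = {("a","d"): 5, ("a","b"): 10, ("b","d"): 10}, a demand of 4 is routed over the direct link a–d, while a demand of 6 is forced onto the longer path a–b–d.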

Obviously, using CR implies that every router involved has to recompute its routing table more frequently than with destination-based routing, because the parameters used as metrics can change even without topology changes.

By automating this process, constraint-based routing considerably reduces the level of manual intervention involved in traffic engineering.

To give an idea of how a CR using number of hops and bandwidth availability as metrics works, it is useful to examine the “fish” structure once again. Looking at figure 4.2, let us suppose that:

• A CR with number of hops and bandwidth availability as metrics is in use.

• The red traffic is the first to be routed, followed by the yellow and the green.

• All the three traffic types are directed to the same egress LSR.

Figure 4.2 – Constraint-based Routing example

We will notice that:

1. Even using CR, since the available bandwidth is the same on both possible paths, the red traffic follows the shortest path, occupying 60% of the link bandwidth.

2. The yellow traffic, since the available bandwidth is not sufficient to carry both the red and the yellow flows without losses, follows the path through R6 and R7, even though it has one more hop than the path through R3.

3. When the green traffic is routed, since there is still bandwidth available on the path with the minimum number of hops, that path is chosen.
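The three steps above can be reproduced with a small sequential-allocation sketch. Only the contested R2→R4 segment is modeled; the link capacity and flow rates (1 Mbit/s links; red 600, yellow 500, green 200 Kbit/s) follow the figure labels, and everything else is an illustrative assumption.

```python
from collections import deque

CAP = 1000  # every link has 1 Mbit/s capacity, expressed in Kbit/s
# assumed fish topology between R2 and R4: R2-R3-R4 is the short branch,
# R2-R6-R7-R4 the long one
EDGES = [("R2", "R3"), ("R3", "R4"), ("R2", "R6"), ("R6", "R7"), ("R7", "R4")]

def shortest_feasible(avail, src, dst, demand):
    """BFS over the links that still have `demand` Kbit/s available."""
    parent, seen, queue = {}, {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(parent[path[-1]])
            return path[::-1]
        for a, b in EDGES:
            if a == u and b not in seen and avail[(a, b)] >= demand:
                seen.add(b)
                parent[b] = a
                queue.append(b)
    return None

def allocate(flows):
    """Route each flow in turn, reserving bandwidth along the chosen path."""
    avail = {e: CAP for e in EDGES}
    routes = {}
    for name, demand in flows:
        path = shortest_feasible(avail, "R2", "R4", demand)
        if path is not None:
            for hop in zip(path, path[1:]):
                avail[hop] -= demand
        routes[name] = path
    return routes
```

Calling allocate([("red", 600), ("yellow", 500), ("green", 200)]) sends red and green through R3 and pushes yellow onto the R6–R7 branch, matching the three steps above.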

The case just examined is quite simple, with only two possible paths, so the advantages brought by CR do not fully appear. In an Autonomous System belonging to the Internet there are many possible paths, with a consequent increase in routing complexity. In that case, an automatic routing algorithm with the capabilities of CR can show its potential much better, as will be seen in chapter 5.

Constraint-based routing can be of two types: online and offline. Online constraint-based routing allows the routers to compute paths for LSPs at any time. In offline constraint-based routing, an offline server computes paths for LSPs periodically (the period can be chosen by the administrator, usually hours or days); the LSPs are then configured to take the computed paths.

4.3 – CR-LDP

CR-LDP was created to extend the CR capabilities to the MPLS Label Distribution Protocol, adding the functionality needed for explicit LSP management and signaling. MPLS does not mandate a single protocol to support TE: RSVP with TE extensions could be used, but in this thesis CR-LDP has been chosen. CR-LDP does not require the implementation of an additional protocol: it uses existing LDP message structures and extends LDP only as necessary to implement Traffic Engineering, exploiting the extensibility of LDP explained in the previous chapter. What we obtain is an end-to-end setup mechanism for a constraint-based routed LSP, initiated by the ingress LSR. CR-LDP also specifies mechanisms for reserving resources using LDP [RFC 3212].

The CR extension consists in adding new Type-Length-Value objects (see RFC 3212 for a deeper explanation):

• Explicit Route

• Explicit Route Hop

• Traffic Parameters

• Preemption

• LSPID

• Route Pinning

• Resource Class

• CR-LSP FEC

New procedures are also added to support the required functionalities, such as [Tofoni, 03]:

• Path signaling

• Traffic parameters definition

• LSP management (priorities, administrative constraints, etc.)

A CR-LSP, like any other LSP, is a path through an MPLS network; but the route of a CR-LSP is calculated on the basis of criteria including, but not limited to, routing information. In particular, as will become clearer in chapter 5, in this thesis each LSP has a dedicated bandwidth and behaves like a sort of dedicated circuit having that bandwidth.

Using the CR-LDP protocol, the downstream-on-demand with ordered control is used as label assignment mechanism.

With CR-LDP, an LSP is set up by a series of Label Request messages that propagate forward from the ingress to the egress LSR; if the requested path satisfies the constraints (for example, if sufficient bandwidth is available), labels are then allocated and distributed by a set of Label Mapping messages that propagate backward from the egress LSR to the ingress LSR, as shown in figure 4.3.
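This two-pass setup can be sketched as follows: the forward pass models the Label Request messages performing admission control hop by hop, and the backward pass models the Label Mapping messages assigning a label on each link from the egress back to the ingress. Message formats and table structures are heavily simplified assumptions.

```python
def setup_cr_lsp(path, demand, avail, first_label=100):
    """Sketch of CR-LSP setup along an explicit `path`.
    `avail` maps (u, v) -> available bandwidth; returns a per-link
    label table on success, None on failure."""
    hops = list(zip(path, path[1:]))
    # forward pass: each Label Request triggers admission control;
    # a single infeasible hop aborts the setup (a Notification message
    # would report the failure to the ingress LSR)
    if any(avail[hop] < demand for hop in hops):
        return None
    for hop in hops:
        avail[hop] -= demand             # reservation recorded per link
    # backward pass: Label Mapping messages travel egress -> ingress,
    # allocating one label per hop
    labels = {}
    for i, hop in enumerate(reversed(hops)):
        labels[hop] = first_label + i
    return labels
```

For example, setup_cr_lsp(["R1", "R2", "R4"], 600, {("R1", "R2"): 1000, ("R2", "R4"): 1000}) reserves 600 units on both links and returns a label for each hop; a second request for 500 on the same links then fails and returns None.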

Establishment of a CR-LSP may fail for a variety of reasons; all such failures are signaled by the Notification Message.

Figure 4.3 – Setting up an LSP

In figure 4.4, the resource reservation process of MPLS in nodes and links is shown. When a CR-LDP component receives a CR-LDP Request message, it calls the Admission Control module to check whether the node has the requested resources. If sufficient resources are available, Admission Control reserves them by updating the Resource table. The LDP Request message is then forwarded to the next MPLS node.

When a CR-LDP component receives a CR-LDP Mapping message, it saves the label and interface information in the Label Information Base (LIB) table and the requested CR-LSP information in the Explicit Route information Base (ERB) table. It then calls the Resource Manager to create a queue to serve the requested CR-LSP, and saves its ServiceID in the ERB table. Finally, the LSP Mapping message is forwarded to the previous MPLS node [Ahn].

Figure 4.4 - Resource reservation process

4.4 – MPLS and DiffServ

The MPLS architecture gives very good support not only to TE, but also to DiffServ. As we have seen in the previous sections, both MPLS and DiffServ leave most of the complexity to the network edge, performing classification and encoding at the ingress router. The DSCP field, however, lies inside the IP header, which is never examined inside an MPLS domain; the DSCP alone therefore cannot select a PHB along an MPLS LSP. The idea for avoiding this problem is to use the MPLS labels to encode the DiffServ information as well. There are two ways to achieve this, depending also on how the label header is encoded:

• Using the 3 Class of Service bits in the shim header, which is possible only if a shim header is used to encode the label.

• Using dedicated labels to obtain a DiffServ treatment, which can be employed whether a shim header is present or not (e.g., with an ATM encoding).

The first way allows 8 different classes of service, while the DSCP allows up to 64. If the required number of classes is lower than 8 (which is the case analyzed in this thesis, where a shim header is used), there is no problem mapping the DSCP into the CoS field. If there are more classes, the mapping must be many-to-one, or the second encoding must be used. So, using the shim header, the Label field tells the LSRs where to forward the packet, while the CoS field tells the LSRs which PHB to apply to the packet. Packets belonging to a common PHB scheduling class must travel on the same LSP [Davie, 00].
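A many-to-one mapping of this kind can be expressed as a simple table; the code points below use the standard DSCP values (EF = 46, AF11–AF13 = 10/12/14, AF21–AF23 = 18/20/22), but how they collapse onto the 8 CoS values is an illustrative choice, not one prescribed by this thesis or by a standard.

```python
# hypothetical many-to-one DSCP -> 3-bit CoS mapping
DSCP_TO_COS = {
    46: 5,                   # EF gets a class of its own
    10: 1, 12: 1, 14: 1,     # AF11..AF13 collapse into one class
    18: 2, 20: 2, 22: 2,     # AF21..AF23 collapse into another
    0: 0,                    # best effort
}

def cos_for(dscp):
    """CoS value written in the shim header for a given DSCP;
    unknown code points fall back to best effort."""
    return DSCP_TO_COS.get(dscp, 0)
```

An ingress LSR would apply such a mapping once per packet when pushing the shim header; core LSRs then schedule on the 3 CoS bits alone, never reading the IP header.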

DiffServ-aware Traffic Engineering (DS-TE) integrates the techniques of TE and DS, extending MPLS traffic engineering to perform constraint-based routing of traffic subject to bandwidth constraints more restrictive than those CR applies to regular traffic. The ability to satisfy a more restrictive bandwidth constraint translates into the ability to achieve higher QoS performance (in terms of delay, jitter, throughput, or packet loss).

The idea is to associate with each class of service an LSP whose parameters improve as the quality of the class increases.

Figure 4.5 – DiffServ and CR
There are many possible ways to combine DS and CR. In figure 4.5, an example with three classes is shown:

• Gold class: the CR allocates 60% of a single link capacity (it is assumed that all the links have the same capacity) in the shortest LSP able to guarantee that bandwidth.

• Silver class: the CR allocates 40% of a single link capacity in the shortest LSP able to guarantee that bandwidth. In this case, the bandwidth is available on the shortest path, so there is no problem for the allocation.

• Bronze class: the immediate answer might be to treat the bronze traffic as simple best effort, with no bandwidth allocated; but in a complex network where all the other classes reserve resources, several bronze flows may then end up sharing a link with higher-priority flows, causing losses in the bronze traffic and a worse treatment (delay and jitter) for the higher classes. So the solution is always the same: allocate a small bandwidth and follow the shortest path able to grant it. In this case the bandwidth required is 5%, but, as there is no availability on the shortest path, an alternative path is followed.

In order to obtain better treatment for the higher classes, it is better to reserve the higher-priority classes first. This way of integrating DiffServ with MPLS CR has been used in the simulations of this thesis, with different percentages of required bandwidth, as will be explained in the next chapter.
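The priority-ordered reservation of figure 4.5 can be sketched on a minimal two-branch topology (short LSP A–B–Z, long LSP A–C–D–Z, every link with the same capacity); the node names and the topology are assumptions for illustration only.

```python
from collections import deque

CAP = 100  # all links have the same capacity; demands are percentages of it
EDGES = [("A", "B"), ("B", "Z"), ("A", "C"), ("C", "D"), ("D", "Z")]

def shortest_with_bw(avail, demand, src="A", dst="Z"):
    """Shortest path (by hop count) among links with enough spare bandwidth."""
    parent, seen, queue = {}, {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(parent[path[-1]])
            return path[::-1]
        for a, b in EDGES:
            if a == u and b not in seen and avail[(a, b)] >= demand:
                seen.add(b)
                parent[b] = a
                queue.append(b)
    return None

def reserve_classes(classes):
    """Reserve bandwidth class by class, taking them in priority order."""
    avail = {e: CAP for e in EDGES}
    paths = {}
    for name, pct in classes:            # caller lists classes highest first
        path = shortest_with_bw(avail, pct)
        if path is not None:
            for hop in zip(path, path[1:]):
                avail[hop] -= pct
        paths[name] = path
    return paths
```

Calling reserve_classes([("gold", 60), ("silver", 40), ("bronze", 5)]) places gold and silver on the short branch, which is then full, so the 5% bronze reservation is pushed onto the longer one, as in figure 4.5.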

Another, simpler way to integrate DiffServ with traffic engineering is to use explicit routing to assign different paths to different classes. Figure 4.6 illustrates this approach, with the three classes directed to the same egress LSR following different paths. Obviously, if the paths have different lengths, the shortest one is reserved for the higher-priority traffic (like the Gold traffic in figure 4.6). However, this approach is useful only for simple networks.

Figure 4.6 – DiffServ and TE
