
Master's Degree in Mathematics

Master's Thesis

A Lagrangian approach to a problem with probabilistic constraints

Candidate: Matteo Cacciola

Supervisor: Prof. Antonio Frangioni


Contents

Introduction
1 System Model
1.1 Instances and heuristics
2 Mathematical Model
3 Lagrangian Decomposition
3.1 Sub-problem 1, 2
3.2 Sub-problem h − 1, h
3.3 Sub-problem h = k
4 Solving the mono-dimensional problem
4.1 A modified Newton's method
5 Bundle methods and implementation
5.1 Bundle methods
5.2 Implementation
6 Heuristics
6.1 Heuristics performances
7 Performance evaluation
7.1 Lagrangian lower bound and gaps evaluation
7.2 Disaggregated version

Introduction

In the context of transmission networks, cellular networks are the new forefront. In many applications, information should be delivered to all User Equipments (UEs) within a given area, with a high, quantifiable reliability and within a pre-specified time. Some examples are vehicular collision alerts, malfunctioning alerts in Industry-4.0 manufacturing plants, periodic distribution of coordination information for swarming robots or platooning vehicles, etc. All these applications involve a concept of geographic proximity and have specific requirements in terms of time deadlines and reliability.

Traditional information dissemination mechanisms cannot handle these kinds of tasks, for example because they cannot guarantee the successful reception of the message with high probability unless they use a very large amount of energy. In this framework it can be crucial to use a network that allows two types of transmission: a vertical one, from the base station to the users, which is expensive but reliable, and a horizontal one, device to device (D2D), which is cheaper but less reliable.

In this context it becomes very important to choose which UEs should act as multi-hop relays, so that the message is broadcast to all the intended recipients. In fact, delivery time, reliability and resource usage are all heavily affected by these decisions.

Solving this kind of problem to optimality is very difficult (we will show that it is a non-convex, non-linear mixed-integer problem). Some good heuristic methods have been developed, but there is no proof of how tight these heuristics are with respect to the optimal solutions. Our principal aim is to find a way to compute a lower bound in a reasonable amount of time in order to reduce this gap.


Chapter 1

System Model

We consider a network of mobiles (UEs) which are under the control of a single eNodeB (eNB). The eNB can send them information using unicast downlink (vertical) transmissions. Information can also travel through device-to-device links, i.e., horizontal broadcast transmissions originated at UEs. Vertical links are reliable but costly, and should be avoided if possible. Horizontal transmissions are instead free, but not reliable: there is no ARQ mechanism involved, and it is impractical to try and ascertain which UEs, in the neighbourhood of the transmitter, have successfully decoded a message. However, horizontal transmissions are scheduled by the eNB, which issues grants to the UEs that may transmit: this allows to (reasonably accurately) model the probability that a certain horizontal transmission is successful, given the position of the UEs, the transmission power of the transmitter, and the modulation and coding scheme adopted for transmission.

The floorplan can be modeled as a graph G = (N, A), where N = {0} ∪ N', with node 0 representing the eNB and the nodes in N' representing the UEs. The arc set is A = A' ∪ A'', where A' contains all "vertical" arcs (0, i) for i ∈ N'; these have success probability one, but a high (energy) cost. The set A'' instead contains the "horizontal" arcs (i, j) for i, j ∈ N', each with an associated nonzero probability P_ij > 0 that a transmission from i will be decoded successfully at j; that is, we remove all clearly useless arcs from A'', if any. However, often for simplicity we will ignore the arc set and assume the graph is complete.

Figure 1.1: Example of a scenario

Without loss of generality, we consider a time-slotted network, where grants are issued on each time slot. Multiple grants (up to M) can be given in a slot, to different UEs, and the ensuing transmissions are not mutually interfering. Our problem is to transmit a message to the entire floorplan, exploiting horizontal transmissions as much as possible, by

a) initially allocating a minimum number of vertical transmissions (i.e., selecting the minimum initial set of the UEs that possess the message);

b) exploiting only horizontal transmissions subsequently, to cover all the UEs.

We pose two requirements:

1. the broadcast process must be over in k rounds, with k known a priori;

2. at the end of the broadcast process, each UE must possess the message with a given predefined probability α ∈ (0, 1).

Besides knowing which UEs should be initially targeted with a vertical transmission, we also want to know which UEs should transmit when, i.e., the schedule of the grants that the eNB has to issue in order to compose the broadcast schedule. All other things being equal, we want the solution issuing the least possible number of grants, possibly weighted in such a way as to differentiate between "early" and "late" ones (say, preferring the former to the latter). For this, a (small) weight β^h is defined for grants issued at round h > 1. It is actually easy to allow node-specific weights β^h_i, e.g., depending on the type of node i and/or its remaining battery power.


1.1 Instances and heuristics

To test the model that we will discuss below we needed some realistic scenarios, so we contacted the Department of Information Engineering of Pisa, which is an international landmark in this field. They provided us with a software that permits creating various instances of the problem, tuning many parameters to set the dimension and the difficulty of the instance. Furthermore, the software includes an advanced heuristic method that will allow us to assess our lower bound.

The basic idea of the heuristic method they developed is that the best way to transmit the message from a user that received it to another one may not be the direct arc connecting the two users: a different path that exploits arcs with high probability can be more effective. The probability of a path can be computed as the product of the probabilities of its arcs; for a detailed description of the algorithm we refer to [NSV18].


Algorithm 1: Department of Information Engineering of Pisa heuristic algorithm

1 Inputs: k, α, P
2 for t = 1..k do
3   compute the t-mr paths for all pairs of vertices (i, j) and the associated reliabilities r^(t)_{i,j}
4 end
5 compute the vector x by solving the following problem:
    min Σ_{i=1}^{N} x_i
    Σ_{i≠j} x_i log(1 − r^(t)_{i,j}) ≤ log(1 − α)   ∀j
    x_j ∈ {0, 1}   ∀j
  and set p^(0)_j = x_j ∀j
6 for t = 1..k do
7   compute column g^(t) of the grant matrix G by solving the following problem:
    min Σ_{i=1}^{N} g^(t)_i
    Σ_{i≠j} g^(t)_i log(1 − p^(t−1)_i r^(k−t+1)_{i,j}) ≤ log(1 − α)   ∀j
    g^(t)_j ∈ Z_+   ∀j
  and compute 1 − p^(t)_j = (1 − p^(t−1)_j) Π_{(i,j)∈A''} (1 − g^(t)_i p^(t−1)_i P_ij)   ∀j
8 end
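As an illustration of the path-reliability computation in steps 2–4, the product of arc probabilities along a path can be maximized with a Bellman–Ford-style recursion, since a product of probabilities in (0, 1] becomes a sum of weights −log P_ij. The following sketch (our own illustration, not the code of [NSV18]) computes the best reliability achievable within t hops for all pairs:

#include <algorithm>
#include <vector>

// Reliability r[t][i][j] of the most reliable path from i to j using at
// most t arcs: the product of the arc success probabilities P[i][j],
// maximized by a Bellman-Ford-style recursion. Illustrative sketch only;
// names and signature are ours, not those of the software of [NSV18].
std::vector<std::vector<std::vector<double>>>
hopReliabilities(const std::vector<std::vector<double>>& P, int tMax) {
    const int n = (int)P.size();
    std::vector<std::vector<std::vector<double>>> r(
        tMax + 1, std::vector<std::vector<double>>(n, std::vector<double>(n, 0.0)));
    for (int i = 0; i < n; ++i) r[0][i][i] = 1.0;        // empty path: certainty
    for (int t = 1; t <= tMax; ++t)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                r[t][i][j] = r[t - 1][i][j];             // shorter paths still count
                for (int v = 0; v < n; ++v)              // extend a path by arc (v, j)
                    if (v != j)
                        r[t][i][j] = std::max(r[t][i][j], r[t - 1][i][v] * P[v][j]);
            }
    return r;
}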


Chapter 2

Mathematical Model

We introduce variables x_i ∈ {0, 1} for i ∈ N' indicating which nodes are reached by the initial vertical transmission. We also introduce continuous variables p^h_i for i ∈ N' and h ∈ K = {1, ..., k} indicating the probability that node i has been reached at round h (i.e., it either had been reached before or it is reached during round h). We also need variables g^h_i indicating whether node i receives a grant for broadcasting at round h. Because the first round is clearly "special", it is useful to define the set K' = {2, ..., k} of "normal" rounds. A natural MINLP model of the problem is:

min Σ_{i∈N'} x_i + Σ_{h∈K'} Σ_{i∈N'} β^h_i g^h_i   (2.1)
p^1_i = x_i    i ∈ N'   (2.2)
p^k_i ≥ α    i ∈ N'   (2.3)
1 − p^h_j = (1 − p^{h−1}_j) Π_{(i,j)∈A''} (1 − g^h_i p^{h−1}_i P_ij)    j ∈ N' , h ∈ K'   (2.4)
Σ_{i∈N'} g^h_i ≤ M    h ∈ K'   (2.5)
x_i ∈ {0, 1}    i ∈ N'   (2.6)
0 ≤ p^h_i ≤ 1    i ∈ N' , h ∈ K   (2.7)
g^h_i ∈ {0, 1}    i ∈ N' , h ∈ K'   (2.8)
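To make the recursion (2.4) concrete, the following sketch (an illustration we add here, with hypothetical names) propagates the reception probabilities for given x and g on a complete graph and checks the reliability requirement (2.3):

#include <vector>

// Propagate reception probabilities through the rounds following (2.2)
// and (2.4), then check (2.3): p^k_i >= alpha for every UE. x[i] and
// g[h][i] are 0/1; P[i][j] is the success probability of arc (i, j).
// A complete graph is assumed, as in the text. Illustrative sketch only.
bool feasible(const std::vector<int>& x,
              const std::vector<std::vector<int>>& g,     // one row per round h = 2..k
              const std::vector<std::vector<double>>& P,
              double alpha) {
    const int n = (int)x.size();
    std::vector<double> p(x.begin(), x.end());            // p^1_i = x_i   (2.2)
    for (const auto& gh : g) {                            // one round h
        std::vector<double> q(n);
        for (int j = 0; j < n; ++j) {
            double notRecv = 1.0 - p[j];                  // recursion (2.4)
            for (int i = 0; i < n; ++i)
                if (i != j) notRecv *= 1.0 - gh[i] * p[i] * P[i][j];
            q[j] = 1.0 - notRecv;
        }
        p = q;
    }
    for (double pi : p) if (pi < alpha) return false;     // requirement (2.3)
    return true;
}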

The objective function (2.1) is obvious, as are all the other linear constraints, i.e., all except (2.4). That constraint is the direct expression of the probability computation, namely: the event "j still does not have the message at the end of turn h" is the conjunction of the events "j did not have the message at the end of turn h − 1" and "j does not receive it during turn h". In turn, the latter is the conjunction of the events "j does not receive it during turn h from its neighbour i" for all i; each such event clearly has probability 1 − g^h_i p^{h−1}_i P_ij, in that the probability that i broadcasts the message during turn h is that of the conjunction of the events "i had the message at the end of turn h − 1" and "i receives the grant during turn h".

Clearly, (2.4) are the "nasty" constraints. They nevertheless have some possibly useful properties:

• They can clearly be rewritten as

1 − p^h_j ≥ (1 − p^{h−1}_j) Π_{(i,j)∈A''} (1 − g^h_i p^{h−1}_i P_ij)   (2.9)

in that this only allows under-estimating the reception probability, upon which we have a ≥ constraint. Indeed, each reception probability is an increasing function of the other reception probabilities, so under-estimating them is not a problem: a feasible solution with the under-estimates will a fortiori be feasible with the true values.

• The product of non-negative numbers (f(x) = x_1 x_2 ... x_n) is a quasi-concave function when x ≥ 0, and quasi-concavity is preserved by affine transformations (if f(x) is quasi-concave, then f(1 − x) is also). This is the setting that makes geometric programming possible. Yet, note that the direction of the inequality in (2.9) is the opposite of the one that would be required for naturally having a convex feasible region (if f is quasi-concave, then f(x) ≥ α defines a convex set). Indeed, the model (2.1)–(2.8) is not a geometric program.

In fact, the set of feasible solutions is inherently not convex. To see that, consider a case with N' = {1, 2, 3}, P_ij = 1/2 for all (i, j) and α = 1/2, and take x_1 = x_2 = 1, x_3 = 0. Completing this with either g^2_1 = 1, g^2_2 = g^2_3 = 0 or g^2_2 = 1, g^2_1 = g^2_3 = 0 gives feasible solutions, i.e., p^2_3 = 1 − (1 − 1 · 1 · 1/2) = 1/2. On the other hand, the convex combination of these two (partial) solutions given by g^2_1 = g^2_2 = 1/2, g^2_3 = 0 yields p^2_3 = 1 − (1 − 1/2 · 1 · 1/2)² = 7/16 < 1/2, which is infeasible. In other words, the feasible region is nonconvex, even: a) considering only the x_i and g^h_i variables, i.e., projecting away the p^h_i; b) relaxing integrality of these (of course); and c) with just two levels.

To try to make the problem more manageable one can take logarithms (which is the main idea behind geometric programming), reducing (2.4) to

log(1 − p^h_j) = log(1 − p^{h−1}_j) + Σ_{(i,j)∈A''} log(1 − g^h_i p^{h−1}_i P_ij) .

A further simplification is obtained by noticing that if g^h_i = 0 then log(1 − g^h_i p^{h−1}_i P_ij) = 0 whatever the values of p^{h−1}_i and P_ij; hence, an equivalent form is

log(1 − p^h_j) = log(1 − p^{h−1}_j) + Σ_{(i,j)∈A''} g^h_i log(1 − p^{h−1}_i P_ij) .   (2.10)

The advantage is that the constraint is now linear in the g^h_i; hence, whenever the p^{h−1}_i are fixed, log(1 − p^h_j) is a linear function of the g^h_i. An issue with this formulation is that p^h_i = 1 can (indeed, must) happen, which would make the corresponding logarithms ill-defined. It is in principle possible to sidestep this issue by defining a proper p̄ < 1 "very close to 1" (say, p̄ = 0.9999) and assuming that no success probability can ever be larger than p̄. That is, one replaces (2.2) and (2.7) with, respectively,

p^1_i = p̄ x_i    i ∈ N'   (2.11)

0 ≤ p^h_i ≤ p̄    i ∈ N' , h ∈ K .   (2.12)

This leads to (slightly) under-estimating the overall success probability, and hence the optimal solution of the thusly modified problem is at least feasible for the true one; the advantage is that, clearly, no logarithm of 0 ever has to be computed. However, this is not necessary/useful in the only case that can be dealt with directly with this idea, which is the case of k = 2, i.e., only one round. Specializing (2.10) gives

log(1 − p^2_j) = log(1 − x_j) + Σ_{(i,j)∈A''} log(1 − g^2_i x_i P_ij) .   (2.13)

The observation is that it is clearly useless to have g^2_i = 1 if x_i = 0; we can therefore add the constraints

g^2_i ≤ x_i    i ∈ N' ,   (2.14)

which allow us to substitute the product g^2_i x_i with simply g^2_i. We can ultimately rewrite (2.4) in this case as

log(1 − p^2_j) ≥ log(1 − p̄) x_j + Σ_{(i,j)∈A''} g^2_i log(1 − P_ij)    j ∈ N' .   (2.15)

Despite these improvements, a monolithic formulation of the problem looks rather hard to tackle. This leaves decomposition approaches as the only reasonable resort.


Chapter 3

Lagrangian Decomposition

We therefore discuss a Lagrangian Decomposition approach. The problem is rewritten as

min Σ_{i∈N'} ( x_i + Σ_{h=2,...,k} β^h_i g^h_i )   (3.1)

log(1 − p^2_j) ≥ log(1 − p̄) x_j + Σ_{(i,j)∈A''} g^2_i log(1 − P_ij) x_i    j ∈ N'   (3.2)

log(1 − p^h_j) ≥ log(1 − p^{h−1}_j) + Σ_{(i,j)∈A''} g^h_i log(1 − p^{h−1}_i P_ij)    j ∈ N' , h ∈ K' \ {2}   (3.3)

(2.3) , (2.5) , (2.6) , (2.12) , (2.8)

and we relax the constraints (3.2) and (3.3) with multipliers λ^2_j and λ^h_j, respectively. This decomposes the problem into k independent sub-problems, which we now separately discuss. In our relaxation there are only two things that link variables: one is the constraint (2.5), which links all the variables g^h_i with the same h, and the other is the product g^h_i log(1 − p^{h−1}_i P_ij), which links g^h_i with p^{h−1}_i. So we have the sub-problem with variables g^2_i and x_i (which determine the p^1_i), the sub-problems with variables g^h_i and p^{h−1}_i, one for each h ∈ {3, ..., k}, and the sub-problem with the remaining variables, the p^k_i, which are actually not linked to each other, so that this sub-problem is trivial (we will see this later).

3.1 Sub-problem 1, 2

This sub-problem only has variables x_i and g^2_i:

min Σ_{i∈N'} [ x_i + β^2_i g^2_i + λ^2_i log(1 − p̄) x_i + Σ_{(i,j)∈A''} λ^2_j log(1 − P_ij) x_i g^2_i ]   (3.4)

Σ_{i∈N'} g^2_i ≤ M , (2.6)

g^2_i ∈ {0, 1}    i ∈ N'   (3.5)

We now collect the terms in g^2_i and in x_i to get

Σ_{i∈N'} [ (1 + λ^2_i log(1 − p̄)) x_i + ( β^2_i + Σ_{(i,j)∈A''} λ^2_j log(1 − P_ij) ) x_i g^2_i ] .

It is clearly useless to have g^2_i = 1 if x_i = 0; we can therefore add the constraints (2.14) and replace g^2_i x_i with simply g^2_i. This finally yields

min Σ_{i∈N'} [ (1 + λ^2_i log(1 − p̄)) x_i + ( β^2_i + Σ_{(i,j)∈A''} λ^2_j log(1 − P_ij) ) g^2_i ]
Σ_{i∈N'} g^2_i ≤ M
g^2_i ≤ x_i , x_i ∈ {0, 1} , g^2_i ∈ {0, 1}    i ∈ N'

which is a small MILP. Actually it can be written as a simple flow problem, and therefore solved extremely efficiently. A simple algorithm to solve this problem is the following:

Algorithm 2:

1 for i ∈ N' do
2   x[i] = 0; g[i] = 0
3 end
4 for i ∈ N' do
5   xCost[i] = 1 + λ^2_i log(1 − p̄)
6   if xCost[i] ≤ 0 then x[i] = 1
7 end
8 for i ∈ N' do
9   gCost[i] = β^2_i + Σ_{(i,j)∈A''} λ^2_j log(1 − P_ij)
10  if xCost[i] > 0 then gCost[i] = gCost[i] + xCost[i]
11 end
12 let idx be the permutation sorting gCost in increasing order
13 cont = 0
14 while cont < M and gCost[idx[cont]] < 0 do
15   g[idx[cont]] = 1; x[idx[cont]] = 1; cont = cont + 1
16 end

where idx is the permutation that sorts the costs in increasing order. The algorithm simply calculates the cost of all the variables, paying attention to the fact that, to turn on a g^2_i variable, we also have to turn on the corresponding x_i.
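A possible C++ rendition of Algorithm 2 is sketched below (our own illustration; the costs are assumed to be precomputed as in the algorithm, and an index permutation is sorted so that each cost stays attached to its variable):

#include <algorithm>
#include <numeric>
#include <vector>

// Greedy solution of the (1,2) sub-problem, as in Algorithm 2:
// xCost[i] = 1 + lambda2[i]*log(1 - pBar), gCost[i] = beta2[i] +
// sum_j lambda2[j]*log(1 - P[i][j]). Sketch, not library code.
void solveSub12(const std::vector<double>& xCost, std::vector<double> gCost,
                int M, std::vector<int>& x, std::vector<int>& g) {
    const int n = (int)xCost.size();
    x.assign(n, 0); g.assign(n, 0);
    for (int i = 0; i < n; ++i) {
        if (xCost[i] <= 0) x[i] = 1;           // x_i pays for itself
        else gCost[i] += xCost[i];             // opening g_i must also open x_i
    }
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),          // increasing cost
              [&](int a, int b) { return gCost[a] < gCost[b]; });
    for (int c = 0; c < M && c < n && gCost[idx[c]] < 0; ++c) {
        g[idx[c]] = 1;                         // grant the cheapest negative costs
        x[idx[c]] = 1;
    }
}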

3.2 Sub-problem h − 1, h

This sub-problem only has variables p^{h−1}_i and g^h_i:

min Σ_{i∈N'} [ (λ^h_i − λ^{h−1}_i) log(1 − p^{h−1}_i) + ( β^h_i + Σ_{(i,j)∈A''} λ^h_j log(1 − p^{h−1}_i P_ij) ) g^h_i ]   (3.6)

Σ_{i∈N'} g^h_i ≤ M   (3.7)

0 ≤ p^{h−1}_i ≤ p̄ , g^h_i ∈ {0, 1}    i ∈ N'   (3.8)

Although the problem looks complicated, the significant advantage is that each variable p^{h−1}_i only "talks directly" with the corresponding g^h_i. In other words, consider

f_i(p, g) = (λ^h_i − λ^{h−1}_i) log(1 − p) + ( β^h_i + Σ_{(i,j)∈A''} λ^h_j log(1 − p P_ij) ) g .

Let us call p^{*,1}_i = argmin{ f_i(p, 1) : 0 ≤ p ≤ p̄ } and p^{*,0}_i = argmin{ f_i(p, 0) : 0 ≤ p ≤ p̄ }; then, (3.6)–(3.8) is equivalent to

min Σ_{i∈N'} f_i(p^{*,1}_i, 1) g^h_i + f_i(p^{*,0}_i, 0)(1 − g^h_i)
Σ_{i∈N'} g^h_i ≤ M
g^h_i ∈ {0, 1}    i ∈ N'

i.e., a trivial (flow) problem once the p^{*,1}_i and p^{*,0}_i are computed. Finding the latter is trivial: the solution is

argmin_{p ∈ {0, p̄}} (λ^h_i − λ^{h−1}_i) log(1 − p) .

For the other, p^{*,1}_i, we have to solve a one-dimensional problem of the form

min{ f(p) = c log(1 − p) + Σ_{i∈N'} a_i log(1 − p b_i) : 0 ≤ p ≤ p̄ } .   (3.9)

We will discuss how to do this in the next chapter.

The next algorithm can be used to solve the sub-problem once all the p^{*,0}_i and p^{*,1}_i are computed.

Algorithm 3:

1 for i ∈ N' do g[i] = 0 end
2 for i ∈ N' do gCost[i] = f_i(p^{*,1}_i, 1) − f_i(p^{*,0}_i, 0) end
3 let idx be the permutation sorting gCost in increasing order
4 cont = 0
5 while cont < M and gCost[idx[cont]] < 0 do
6   g[idx[cont]] = 1; cont = cont + 1
7 end

Like the previous one, this algorithm just computes the cost of the various variables and sets to 1 (up to M of) those having negative cost. Since it will be useful in a later chapter, we remark that, once the problem is solved, to obtain the point where the optimum is attained we have to set p^{h−1}_i = p^{*,1}_i if g^h_i = 1 and p^{h−1}_i = p^{*,0}_i otherwise, where p^{*,0}_i = 0 if min{ (λ^h_i − λ^{h−1}_i) log(1 − p̄), 0 } = 0 and p^{*,0}_i = p̄ otherwise.
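In code, the endpoint rule for p^{*,0}_i and the resulting grant cost might read as follows (a sketch; f1 stands for the value f_i(p^{*,1}_i, 1) returned by the one-dimensional minimization discussed in the next chapter):

#include <cmath>

// Endpoint rule for p*0_i: f_i(p, 0) = c * log(1 - p), with
// c = lambda^h_i - lambda^{h-1}_i, is monotone in p, so its minimum over
// [0, pBar] sits at an endpoint: pBar when c > 0, 0 otherwise.
// Illustrative sketch with hypothetical names.
struct Endpoint { double p, value; };

Endpoint pStar0(double c, double pBar) {
    double atBar = c * std::log(1.0 - pBar);   // f_i(pBar, 0)
    if (atBar < 0.0) return {pBar, atBar};     // c > 0: move to pBar
    return {0.0, 0.0};                         // c <= 0: stay at 0
}

// Grant cost used by Algorithm 3: gCost_i = f_i(p*1_i, 1) - f_i(p*0_i, 0).
double grantCost(double f1, double c, double pBar) {
    return f1 - pStar0(c, pBar).value;
}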

3.3 Sub-problem h = k

This sub-problem only has variables p^k_i:

min { Σ_{i∈N'} −λ^k_i log(1 − p^k_i) : α ≤ p^k_i ≤ p̄ , i ∈ N' }

The problem is separable and each univariate function is convex, since the λ^k_i are non-negative; therefore, this sub-problem is trivial. Each term is increasing in p^k_i, so the obvious solution is p^k_i = α for all i, with optimal value Σ_{i∈N'} −λ^k_i log(1 − α).

By appropriate methods (of which we have no lack) the Lagrangian Dual can be solved efficiently, which gives a bound to compare heuristic solutions with.

During the solution of the Lagrangian Dual, integer x_i and g^h_i can be generated. These may be feasible, or simple approaches may be constructed to make them feasible; this provides other ways to produce feasible solutions, hopefully quickly, that may be good ones. Finally, the solution of the Lagrangian Dual ultimately also gives a continuous (convexified) primal solution, which can be used both to drive further heuristics and even to construct a full-fledged B&B approach to the problem. If the bound is decently tight, this might even have some chances of solving it in decent times.


Chapter 4

Solving the mono-dimensional problem

Our aim is to solve a one-dimensional problem of the form

min{ f(p) = c log(1 − p) + Σ_{i∈N'} a_i log(1 − p b_i) : 0 ≤ p ≤ p̄ } .   (3.9)

We are interested only in the case a_i ≥ 0, since we can replace the "=" in (3.3) with a "≥", but we still do not know the sign of c. If c is positive the problem is trivial: we have a decreasing function such that lim_{p→1−} f(p) = −∞, so the minimum is attained at p̄ and, choosing p̄ close enough to 1, we can make the minimum as small as we want. In the case c < 0 we can prove that there is at most one internal point that is a local minimum; moreover, it is easy to see that, since all the b_i are smaller than 1, lim_{p→1} f(p) = +∞. So the minimum is attained either at 0 or at the internal local minimum (if it exists). Our function is the sum of an increasing function with an asymptote at 1 and a decreasing function with asymptotes greater than 1 (all the b_i are smaller than 1); so, choosing p̄ large enough, near p̄ the function is increasing. Near 0 the behaviour depends on the a_i, so we can imagine (we are going to prove this claim) that there are two cases: either the function is decreasing at 0, so there is an interval near 0 where it decreases and, close enough to p̄, it becomes increasing; or the function is increasing at 0, and then it is increasing on the whole interval.

Lemma 1. If c < 0, 0 ≤ b_i ≤ 1 and a_i ≥ 0, then there exists at most one p ∈ [0, 1] such that f'(p) = 0. Moreover, if the function is increasing at a point p_0, then it is increasing for every p_0 ≤ p ≤ 1.

Proof. Let us write the derivative:

f'(p) = −c/(1 − p) − Σ_i a_i b_i/(1 − p b_i) ,

so we have f'(p) ≥ 0 if

−c/(1 − p) − Σ_i a_i b_i/(1 − p b_i) ≥ 0
⟺ −Σ_i a_i b_i/(1 − p b_i) ≥ c/(1 − p)
⟺ −Σ_i (1 − p) a_i b_i / ((1 − p b_i) c) ≤ 1 ,

where the direction of the inequality flips because we divide by c/(1 − p) < 0. Let us call g(p) the left-hand side; we have f'(p) ≥ 0 if and only if g(p) ≤ 1. Differentiating g(p) we obtain

g'(p) = −Σ_i (a_i b_i/c) (p b_i − 1 + b_i − p b_i)/(1 − p b_i)² = −Σ_i (a_i b_i/c) (b_i − 1)/(1 − p b_i)² ,

which is negative on all of [0, 1]; so g crosses the value 1 at most once and, once the function is increasing (g(p) smaller than 1), it remains so for the rest of the interval.

Given the form of the derivative, we can give a similar proof for f''(p).

Lemma 2. If c < 0, 0 ≤ b_i ≤ 1 and a_i ≥ 0, then there exists at most one p ∈ [0, 1] such that f''(p) = 0. Moreover, if the function is convex at a point p_0, then it is convex for every p_0 ≤ p ≤ 1.

Proof. The proof is very similar to the previous one:

f''(p) = −c/(1 − p)² − Σ_i a_i b_i²/(1 − p b_i)² ,

so we have f''(p) ≥ 0 if

−c/(1 − p)² − Σ_i a_i b_i²/(1 − p b_i)² ≥ 0
⟺ −Σ_i a_i b_i²/(1 − p b_i)² ≥ c/(1 − p)²
⟺ −Σ_i (1 − p)² a_i b_i² / ((1 − p b_i)² c) ≤ 1 .

Differentiating the left-hand side we obtain

−Σ_i (a_i b_i²/c) (−2(1 − p)(1 − p b_i)² + 2 b_i (1 − p b_i)(1 − p)²)/(1 − p b_i)⁴
= −Σ_i (a_i b_i²/c) 2(1 − p)(1 − p b_i)(−(1 − p b_i) + b_i(1 − p))/(1 − p b_i)⁴
= −Σ_i (a_i b_i²/c) 2(1 − p)(1 − p b_i)(b_i − 1)/(1 − p b_i)⁴ ,

which is negative on all of [0, 1].

Let us summarize our results; we have three cases:

• The function f(p) is increasing at 0, so it is increasing on all of [0, p̄]; the minimum is 0 and is attained at p = 0.

Figure 4.1: Increasing in all the interval

• The function f(p) is decreasing at 0 but convex at 0, so it is convex on all of [0, p̄] and the minimum can be searched for with standard techniques.

Figure 4.2: Not monotone but convex in all the interval

• The function f(p) is decreasing at 0 and not convex at 0, so we have to find an interval (we can always find one of the type [p̃, p̄]) in which the derivative takes the value 0 and the function is convex, so that we are back in the previous case.

Figure 4.3: Not convex in the first part of the interval, then becoming convex
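For completeness, here is a direct sketch of f, f′ and f′′ of (3.9) as used by the lemmas above (our own illustrative helper):

#include <cmath>
#include <vector>

// f(p) = c*log(1-p) + sum_i a_i*log(1 - p*b_i) of (3.9) and its first
// two derivatives, as analyzed in Lemmas 1 and 2. Plain sketch.
struct OneDim {
    double c;
    std::vector<double> a, b;    // a_i >= 0, 0 <= b_i <= 1

    double f(double p) const {
        double v = c * std::log(1 - p);
        for (size_t i = 0; i < a.size(); ++i) v += a[i] * std::log(1 - p * b[i]);
        return v;
    }
    double f1(double p) const {                   // f'(p)
        double v = -c / (1 - p);
        for (size_t i = 0; i < a.size(); ++i) v -= a[i] * b[i] / (1 - p * b[i]);
        return v;
    }
    double f2(double p) const {                   // f''(p)
        double v = -c / ((1 - p) * (1 - p));
        for (size_t i = 0; i < a.size(); ++i) {
            double d = 1 - p * b[i];
            v -= a[i] * b[i] * b[i] / (d * d);
        }
        return v;
    }
};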

4.1 A modified Newton's method

Our first attempt to solve this problem is to use Newton's method.

The well-known Newton's method consists in the iterative algorithm described below:

Algorithm 4: Newton's method

1 k = 0; choose x_0
2 while |φ'(x_k)| ≥ ε do
3   x_{k+1} = x_k − φ'(x_k)/φ''(x_k)
4   k = k + 1
5 end

The problem is that, in order to have quadratic convergence with this method, we have to find an interval where |φ''(x)φ(x)/φ'(x)²| ≤ 1; furthermore, the algorithm converges only in the interval where the function is convex. So we decided to use a simple bisection step whenever at the current point the function is not convex: if the interval containing the minimum is [a, b], we calculate the derivative of the minimized function at a + (1/2)(b − a) and, depending on its sign, we shrink the interval. Otherwise, if at the current point the function is convex, we check whether the point proposed by the Newton algorithm is in the restricted interval that contains the minimum and whether, using this point, the new restricted interval is "small enough" with respect to the previous one. If these conditions are satisfied we use the Newton point; if not, we make another iteration using bisection.

It is worth noticing that, if the point proposed by Newton is in the interval, it can always be used to shrink the interval, so we always do this. Finally, due to the specific form of our function, the minimum is often close to 1, so instead of using a standard bisection we use the point a + (3/4)(b − a) to reduce the interval until, at this point, our function takes a positive derivative. This should lead to faster convergence in the initial steps of our algorithm. The algorithm we used is the following:

Algorithm 5: Modified Newton's method

1 k = 0; x_0 ∈ [0, 1]; a = 0; b = 1; α ∈ (0, 0.3]; phase1 = true
2 while |φ'(x_k)|(b − a) ≥ ε · max(|φ(x_k)|, 1) do
3   newtonStep = false
4   if φ''(x_k) ≥ 0 then
5     x^N = x_k − φ'(x_k)/φ''(x_k)
6     if a ≤ x^N ≤ b then
7       if a + α(b − a) ≤ x^N ≤ b − α(b − a) then
8         x_{k+1} = x^N; newtonStep = true; k = k + 1
9       end
10      if φ'(x^N) ≥ 0 then b = x^N else a = x^N
11    end
12  end
13  if not newtonStep then
14    if phase1 then
15      x_{k+1} = a + (3/4)(b − a)
16      if φ'(x_{k+1}) ≥ 0 then phase1 = false
17    else
18      x_{k+1} = a + (1/2)(b − a)
19    end
20    if φ'(x_{k+1}) ≥ 0 then b = x_{k+1} else a = x_{k+1}
21    k = k + 1
22  end
23 end
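A compact C++ transcription of this scheme, built on the OneDim helper sketched above, could be the following (our reading of Algorithm 5, with the bookkeeping flags made explicit; alphaFrac plays the role of α):

#include <algorithm>
#include <cmath>

// Safeguarded Newton / bisection for min f on [0, pBar]: Newton steps
// are taken only where f is locally convex and only when they shrink
// the bracket [a, b] by at least the fraction alphaFrac on both sides;
// otherwise a bisection step is taken, biased to a + (3/4)(b - a) in a
// first phase because the minimum is often close to 1. Sketch only.
double modifiedNewton(const OneDim& F, double pBar,
                      double eps = 1e-10, double alphaFrac = 0.25) {
    double a = 0.0, b = pBar, x = 0.5 * pBar;
    bool phase1 = true;
    while (std::fabs(F.f1(x)) * (b - a) >= eps * std::max(std::fabs(F.f(x)), 1.0)) {
        bool newtonUsed = false;
        if (F.f2(x) >= 0) {                          // locally convex: try Newton
            double xN = x - F.f1(x) / F.f2(x);
            if (a <= xN && xN <= b) {
                if (a + alphaFrac * (b - a) <= xN && xN <= b - alphaFrac * (b - a)) {
                    x = xN; newtonUsed = true;       // shrinks the bracket enough
                }
                (F.f1(xN) >= 0 ? b : a) = xN;        // always shrink with xN
            }
        }
        if (!newtonUsed) {                           // bisection step
            x = phase1 ? a + 0.75 * (b - a) : a + 0.5 * (b - a);
            if (F.f1(x) >= 0) { b = x; phase1 = false; }
            else               a = x;
        }
    }
    return x;
}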

Chapter 5

Bundle methods and implementation

5.1 Bundle methods

In order to solve the Lagrangian dual we will use a Bundle method; let us give a brief overview of it. This class of methods is concerned with solving

inf_x { f(x) : x ∈ X }

where f : R^n → R is finite-valued and convex (hence continuous) and X ⊆ R^n is closed and convex. Here f is only known through an oracle ("black box") that, given any x ∈ X, returns the value f(x) and a z ∈ ∂f(x). These methods sample f at a sequence of tentative points x_i to gather the f-values f(x_i) and the bundle of first-order information β = { z_i ∈ ∂f(x_i) }. A distinguished vector x̄ is taken as the current point, and β is used to compute a tentative descent direction d* along which the next tentative point is generated. After a "successful" step the current point can be updated; otherwise, the new information is used to enhance β, hopefully obtaining a better direction at the next iteration. For more details we refer to [Fra02] and [Fra18].

As seen, this method requires the computation of a sub-gradient of the objective function. Let us consider the Lagrangian dual of a problem of the form

min f(x)   (5.1)
g_i(x) ≤ 0    i ∈ I   (5.2)
x ∈ X   (5.3)

with respect to (5.2), that is

max_{λ ≥ 0} { min_{x∈X} f(x) + Σ_{i∈I} λ_i g_i(x) } .   (5.4)

A sub-gradient of the function φ(λ) = min_{x∈X} { f(x) + Σ_{i∈I} λ_i g_i(x) } is the vector with components g_i(x(λ)), where x(λ) is an optimal solution of the problem that has to be solved for the computation of φ(λ).

Proof. φ(y) ≤ f(x(λ)) + Σ_{i∈I} y_i g_i(x(λ)) = φ(λ) + Σ_{i∈I} (y_i − λ_i) g_i(x(λ)) = φ(λ) + (y − λ) g(x(λ)) .

So we can compute a sub-gradient for our problem with the following:

SubG^2_j(x, p, g) = −log(1 − p^2_j) + log(1 − p̄) x_j + Σ_{(i,j)∈A''} g^2_i log(1 − P_ij) x_i   (5.5)

SubG^h_j(x, p, g) = −log(1 − p^h_j) + log(1 − p^{h−1}_j) + Σ_{(i,j)∈A''} g^h_i log(1 − p^{h−1}_i P_ij)   (5.6)
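For instance, the components (5.6) for a round h could be computed as follows (hypothetical helper on a complete graph; names are ours):

#include <cmath>
#include <vector>

// Sub-gradient components (5.6) for one round h: for each j,
// SubG^h_j = -log(1 - p_h[j]) + log(1 - p_hm1[j])
//            + sum_{i != j} g_h[i] * log(1 - p_hm1[i] * P[i][j]).
// Complete graph assumed; illustrative helper.
std::vector<double> subgradientH(const std::vector<double>& p_h,
                                 const std::vector<double>& p_hm1,
                                 const std::vector<int>& g_h,
                                 const std::vector<std::vector<double>>& P) {
    const int n = (int)p_h.size();
    std::vector<double> sg(n);
    for (int j = 0; j < n; ++j) {
        double v = -std::log(1 - p_h[j]) + std::log(1 - p_hm1[j]);
        for (int i = 0; i < n; ++i)
            if (i != j && g_h[i]) v += std::log(1 - p_hm1[i] * P[i][j]);
        sg[j] = v;
    }
    return sg;
}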

Interestingly, as said in the previous sections, solving the Lagrangian Dual is equivalent to solving the convexified problem:

min{ f(x) : g_i(x) ≤ 0 , x ∈ Conv(X) }

The solution of this problem can be useful in order to implement some heuristics that, if the bound provided by the Lagrangian dual is tight, can be very accurate.

5.2 Implementation

In order to implement our algorithm we wrote a program in C++ using the Visual Studio IDE; for the Bundle algorithm we use the NDOSolver/FiOracle C++ suite of NDO solvers, which has been developed at the Department of Computer Science of the University of Pisa over the last 25 years. The most important classes of this library are FiOracle and NDOSolver; the first allows us to implement all that we need to describe the function that has to be optimized. The most important methods and fields that we implemented are:

• The constructor myOracle(std::istream *iStrm = nullptr, std::istream *iStrm2 = nullptr): just creates the object and its fields and initializes them. We have to set the number of users in the network, the number of steps in which the signal must reach all the users, the minimum probability to be reached, the maximum number of transmissions permitted at every step, the connection matrix giving the probability of communication between two users, and the matrix of the transmission costs of the users. Moreover, we set the value of p̄ and the precision to be reached when we solve the mono-dimensional problem;

• virtual HpNum Fi( cIndex wFi ): this method has to compute the value of our function at the point given by the internal field Lambda. In particular, in this method we wrote the implementation of the algorithms that solve the various sub-problems discussed in the previous sections: two functions performing Algorithms 2 and 3 and another that calculates the optimum of the sub-problem k;

• virtual Index GetGi( SgRow SubG , cIndex_Set &SGBse , cIndex Name , cIndex strt , Index stp ): just computes the sub-gradient at the last point where our function was computed, according to (5.5) and (5.6);

• vector<double> xVar; vector<vector<int>> gVar; vector<vector<double>> pVar: the fields where we store the last solution obtained from the computation of our function; this information is necessary to compute the sub-gradient and to construct the convexified solution, and moreover we can use this solution to construct a heuristic;

• a vector solVect of stored solutions, used in the computation of the convexified solution; the solver will automatically calculate and store the associated convex multipliers;

• SetGiName( cIndex Name ): when necessary, stores the solution obtained from the last computation of our function in the "Name" position of solVect.

The class NDOSolver allows using various algorithms for non-differentiable optimization, one of which uses a Bundle method. We can set the function that has to be optimized through the method virtual void SetFiOracle( FiOracle *Fi ), and we can solve our problem simply by calling the method virtual NDOStatus Solve( void ). It is also possible to set a large number of options tuning the algorithm.
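Schematically, the oracle ties these pieces together as in the following indicative skeleton (heavily simplified: the real FiOracle base class of the NDOSolver suite has more methods and different, stricter signatures than shown here; the sub-problem solvers are left as stubs):

#include <vector>

// Indicative skeleton of our oracle class; only the structure described
// in the text is mirrored, nothing here is the actual library interface.
class MyOracleSketch {
public:
    MyOracleSketch(int users, int steps, double alpha, int maxGrants, double pBar)
        : n(users), k(steps), alpha(alpha), M(maxGrants), pBar(pBar) {}

    // Value of the Lagrangian function at the multipliers Lambda:
    // solve the k independent sub-problems of Chapter 3 and add them up.
    double Fi(const std::vector<std::vector<double>>& Lambda) {
        double v = solveSub12(Lambda);                          // Algorithm 2
        for (int h = 3; h <= k; ++h) v += solveSubH(Lambda, h); // Algorithm 3
        return v + solveSubK(Lambda);                           // trivial h = k part
    }

private:
    int n, k; double alpha; int M; double pBar;
    std::vector<int> xVar;                  // last primal solution, kept to
    std::vector<std::vector<int>> gVar;     // build the sub-gradient
    std::vector<std::vector<double>> pVar;  // according to (5.5)-(5.6)

    double solveSub12(const std::vector<std::vector<double>>&) { return 0; }
    double solveSubH(const std::vector<std::vector<double>>&, int) { return 0; }
    double solveSubK(const std::vector<std::vector<double>>&) { return 0; }
};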

Chapter 6

Heuristics

In order to reduce the gap we tried some heuristic approaches to get a better upper bound on our problem. We can do this using the various integer, but (possibly) infeasible, solutions that we obtain every time we compute the Lagrangian function, and using the convexified solution that we can compute after solving the Lagrangian dual.

Let us consider the integer solution obtained from the computation of the Lagrangian function: we can ignore the continuous variables p^h_j, since we can compute them from the x_i and g^h_i variables so that the constraints (2.4) are satisfied; moreover, the constraints (2.5) are naturally satisfied due to the form of the Lagrangian function. The only constraints that may not be satisfied are (2.3); this substantially means that our solution does not have enough "active" integer variables, or rather, that there are not enough transmissions to ensure that all users receive the message with probability at least α.

So our first attempt is to use an integer but infeasible solution as a starting point and set some new variables to 1 to make it feasible. Given an integer solution, we define the score of the integer variables as follows:

S(x_i) = Σ_{j : p^k_j < α} P²_ij (α − p^k_j)³ δ_i

S(g^h_i) = Σ_{j : p^k_j < α} P²_ij (α − p^k_j)³ p^{h−1}_i

So the higher the connection (P_ij) of a user with the other ones that did not receive the message, the higher the score of the integer variables related to him; but being connected with a user that has almost received the signal is less important than being connected with one that is far from receiving it, so we added the term (α − p^k_j) (that is, the gap between the actual value of p^k_j and the minimum it has to take for feasibility). For the g^h_i variables another important factor is the probability that user i has received the signal at step h − 1: it is not worth trying to transmit from a user that has not received the signal, so we add the term p^{h−1}_i. Finally, since setting an x_i to 1 is more expensive than doing it for a g^h_i, we add a factor δ_i < 1; in the case the costs β^h_i are constant with the step, we can use δ_i = β_i. We can change the exponents of these factors to give them more or less impact in the final score.

Now we can write our first heuristic algorithm:

Algorithm 6: Heuristic alg1

1 Input: an integer solution x_i ∈ {0, 1}, g^h_i ∈ {0, 1}
2 while ∃ p^k_j < α do
3   ī = argmax_{i : x_i = 0} S(x_i)
4   (j̄, h̄) = argmax_{(i,h) : g^h_i = 0} S(g^h_i)
5   if S(x_ī) > S(g^h̄_j̄) then x_ī = 1
6   else g^h̄_j̄ = 1
7   for h = 2..k do
8     for i ∈ N' do
9       p^h_i = 1 − (1 − p^{h−1}_i) Π_{(j,i)∈A''} (1 − g^h_j p^{h−1}_j P_ji)
10    end
11  end
12 end

Until we reach a feasible solution, we calculate the score of all the variables that are not active and activate the one that has the best score. This algorithm can be used starting from any integer solution that respects (2.5); so we can use it from the solution given by the computation of the Lagrangian function, or we can use it to create an independent heuristic, for example starting from the null solution.
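The score computation at the heart of Algorithm 6 is straightforward; a sketch of the S(g^h_i) part follows (S(x_i) is analogous, with δ_i in place of p^{h−1}_i):

#include <vector>

// Score S(g^h_i) of Algorithm 6: sum over the users j still below alpha
// of P_ij^2 * (alpha - p^k_j)^3 * p^{h-1}_i. Sketch of the definition above.
double scoreG(int i, const std::vector<double>& pk,          // p^k_j
              double p_hm1_i,                                // p^{h-1}_i
              const std::vector<std::vector<double>>& P, double alpha) {
    double s = 0.0;
    for (size_t j = 0; j < pk.size(); ++j)
        if ((int)j != i && pk[j] < alpha) {
            double gap = alpha - pk[j];
            s += P[i][j] * P[i][j] * gap * gap * gap * p_hm1_i;
        }
    return s;
}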

Another tool that we can use to find a heuristic solution is the convexified solution obtained when we solve the Lagrangian dual: this is a fractional solution that can be rounded in order to find an integer one. Unfortunately, all the rounding strategies that we tried did not lead to a feasible solution, so, once we obtained an integer solution, we had to apply our first heuristic approach again.

The principal rounding strategy we tried is basically to choose the m_0 highest x_i variables (in the fractional solution) and set them to 1 and, likewise, for each step h = 1..k, to set to 1 the m_h highest g^h_i variables. The choice of the parameters m_h can lead our heuristic solution in different directions. m_0 influences how difficult it will be to find a way to transmit the signal through the steps: a value too small can easily lead to an infeasible solution; on the other hand, the cost of these variables is way higher than that of the g^h_i, so setting too many of them to 1 will certainly lead to a bad heuristic solution. The way we choose the m_h for h = 1..k can lead to solutions that prefer to transmit the signal in the initial steps, to make the future transmissions more effective, or to solutions that wait for the later steps, when the signal is more spread, to make a big number of transmissions; finally, we can choose a balanced approach that tends to make the same number of transmissions at every step. So the algorithm is:

Algorithm 7: Heuristic alg2

1 Input: the fractional solution x^Conv_i ∈ [0, 1], g^Conv,h_i ∈ [0, 1]
2 for c = 1..m_0 do
3   x_{ord(c)} = 1
4 end
5 for h = 1..k do
6   for c = 1..m_h do
7     g^h_{ord_h(c)} = 1
8   end
9 end

where ord (resp. ord_h) sorts the x (resp. g^h) variables by decreasing fractional value.
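In code, the rounding reduces to selecting the largest fractional entries (an illustrative sketch; ties are broken arbitrarily):

#include <algorithm>
#include <numeric>
#include <vector>

// Round a fractional vector by setting to 1 its m largest entries
// (the core step of Algorithm 7, applied to xConv and to each g^h
// of the convexified solution). Illustrative sketch.
std::vector<int> roundTopM(const std::vector<double>& frac, int m) {
    const int n = (int)frac.size();
    std::vector<int> idx(n), rounded(n, 0);
    std::iota(idx.begin(), idx.end(), 0);
    std::partial_sort(idx.begin(), idx.begin() + std::min(m, n), idx.end(),
                      [&](int a, int b) { return frac[a] > frac[b]; });
    for (int c = 0; c < std::min(m, n); ++c) rounded[idx[c]] = 1;
    return rounded;
}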

6.1 Heuristics performances

We tested the following heuristic methods:

1. we apply Algorithm 6 to the optimal solution of the Lagrangian relaxation with the optimal Lagrangian multipliers found by the bundle algorithm;

2. we apply Algorithm 6 to every optimal solution of the Lagrangian relaxation with Lagrangian multipliers associated to an element of the bundle, and we keep the best of these;

3. we use Algorithm 7 with m_0 = 1..4 and m_h = 2..4, 2 + 2h, ..., 4 + 2h, 2 + 2(k + 2) − 4 − 2h, ..., 4 + 2(k + 2) − 4 − 2h, and if the output is not a feasible solution we also run Algorithm 6.

In almost all cases the best result is obtained by the second of our heuristics and, as we will see in the next section, in some cases it also outperforms the Department of Information Engineering of Pisa one.

Let us focus on the third group of heuristics: in most of the cases, the choice of making more transmissions in the final steps or in the initial ones performs worse than a balanced approach; figures 6.1 and 6.2 show this trend.

Figure 6.1: Eur1 is a mean over all balanced heuristic methods, Eur2 is a mean over the heuristic methods that do more transmissions in the final steps, and Eur3 is a mean over the methods that transmit more in the initial steps.

Figure 6.2: Changing α, the results are very similar.

We can also divide the heuristic methods of the third group according to the number of variables that they set to one. The results show that the methods that activate fewer variables perform better on the smaller instances, while on the bigger instances the methods that activate more variables perform better (see figures 6.3 and 6.4). This is not unexpected, since the small instances will have optimal solutions with fewer variables set to 1.

Figure 6.3: Plot of the upper bound computed by our heuristic methods; the first number indicates how many x variables we activate, the second gives a measure of the number of g variables set to 1.

Chapter 7

Performance evaluation

To evaluate the performance of our model we created scenarios with the following parameters:

• number of users varying in {10, 25, 50, 75, 100};
• radius varying in {100, 250, 500, 750, 1000};
• minimum probability to be reached (α) varying in {0.92, 0.95, 0.96}.

We also set the number of steps to 5, the maximum number of grants per step to 8, and the value of p̄ to 0.99.

7.1 Lagrangian lower bound and gaps evaluation

The results are summarized in the tables below, where:

• LB stands for the lower bound found by our model;
• UB stands for the upper bound given by the heuristic solution computed by the Department of Information Engineering of Pisa software;
• Iterations stands for the total number of bundle method iterations;
• Total time refers to the time needed for the lower bound computation;
• Time for mono-dim refers to the total time needed for the computation of the mono-dimensional problem solutions.

Users Radius LB UB Iterations Total time Time for mono-dim
10 100 0.14 1.2 9 1.57E-03 4.53E-04
10 250 0.26 1.2 102 1.08E-02 2.87E-03
10 500 0.99 4.2 283 3.65E-02 1.19E-02
10 750 1.92 7.2 134 1.61E-02 6.59E-03
10 1000 3.02 9.0 144 1.85E-02 8.60E-03
25 100 0.13 1.2 169 4.26E-02 7.65E-03
25 250 0.22 1.2 231 9.51E-02 1.09E-02
25 500 0.86 3.4 688 4.61E-01 1.06E-01
25 750 2.63 9.0 1448 9.68E-01 4.15E-01
25 1000 5.86 16.6 993 6.97E-01 2.49E-01
50 100 0.13 1.2 384 3.48E-01 5.51E-02
50 250 0.23 1.2 782 9.84E-01 9.20E-02
50 500 0.72 3.4 2941 3.57E+00 5.36E-01
50 750 1.78 6.4 3312 7.53E+00 2.49E+00
50 1000 3.95 14.0 2834 7.49E+00 3.96E+00
75 100 0.13 1.2 596 1.03E+00 1.44E-01
75 250 0.22 1.2 788 1.67E+00 1.44E-01
75 500 0.70 3.2 4783 1.08E+01 1.06E+00
75 750 1.51 6.6 6331 1.92E+01 5.08E+00
75 1000 3.06 13.4 7372 3.16E+01 1.22E+01
100 100 0.13 1.2 630 1.80E+00 2.17E-01
100 250 0.22 1.2 937 3.47E+00 2.84E-01
100 500 0.70 3.2 6018 2.14E+01 1.73E+00
100 750 1.34 7.2 7166 2.82E+01 2.73E+00
100 1000 2.50 10.4 10000 5.08E+01 1.59E+01

Table 7.1: Results with α = 0.92

Users Radius LB UB Iterations Total time Time for mono-dim
10 100 0.17 1.2 9 1.43E-03 4.13E-04
10 250 0.31 2.4 96 1.02E-02 2.56E-03
10 500 1.17 4.4 293 3.73E-02 1.29E-02
10 750 2.27 7.8 145 1.72E-02 6.57E-03
10 1000 3.58 10.8 127 1.57E-02 7.41E-03
25 100 0.16 1.2 194 4.94E-02 8.73E-03
25 250 0.26 1.4 186 7.21E-02 9.94E-03
25 500 1.02 4.4 541 3.39E-01 8.86E-02
25 750 3.12 8.8 1650 1.17E+00 4.79E-01
25 1000 6.95 16.0 335 1.97E-01 8.89E-02
50 100 0.16 1.2 197 1.77E-01 2.66E-02
50 250 0.27 1.4 777 9.77E-01 8.63E-02
50 500 0.86 3.6 3237 4.09E+00 6.23E-01
50 750 2.12 8.8 2644 6.72E+00 2.28E+00
50 1000 4.68 17.0 3115 8.81E+00 4.25E+00
75 100 0.16 1.2 576 9.92E-01 1.35E-01
75 250 0.26 1.4 684 1.44E+00 1.26E-01
75 500 0.83 3.4 4684 1.15E+01 1.10E+00
75 750 1.79 6.8 5538 1.71E+01 4.41E+00
75 1000 3.63 14.4 8815 3.73E+01 1.35E+01
100 100 0.15 1.2 897 2.58E+00 3.30E-01
100 250 0.26 1.4 1286 4.64E+00 3.82E-01
100 500 0.83 3.2 4788 1.71E+01 1.38E+00
100 750 1.59 6.8 5462 2.24E+01 2.17E+00
100 1000 2.96 11.2 9989 5.19E+01 1.60E+01

Table 7.2: Results with α = 0.95

Users Radius LB UB Iterations Total time Time for mono-dim
10 100 0.18 1.2 9 1.42E-03 4.25E-04
10 250 0.33 2.4 96 1.09E-02 2.87E-03
10 500 1.26 4.4 308 3.76E-02 1.28E-02
10 750 2.44 6.6 133 9.10E-03 4.01E-03
10 1000 3.85 10.8 169 2.15E-02 9.64E-03
25 100 0.17 1.2 189 4.79E-02 9.87E-03
25 250 0.28 1.4 174 6.22E-02 9.21E-03
25 500 1.10 4.0 580 3.70E-01 9.07E-02
25 750 3.36 8.8 1019 6.84E-01 2.69E-01
25 1000 7.46 18.6 589 4.41E-01 1.53E-01
50 100 0.17 1.2 379 3.44E-01 5.43E-02
50 250 0.29 1.4 651 8.57E-01 7.51E-02
50 500 0.92 3.6 2869 3.68E+00 5.57E-01
50 750 2.27 8.8 2507 6.88E+00 2.13E+00
50 1000 5.03 18.2 1757 4.25E+00 2.52E+00
75 100 0.17 1.2 615 1.07E+00 1.50E-01
75 250 0.28 1.4 794 1.71E+00 1.42E-01
75 500 0.89 3.6 4745 1.12E+01 1.12E+00
75 750 1.92 7.0 4899 1.47E+01 3.85E+00
75 1000 3.90 14.0 6097 2.69E+01 1.02E+01
100 100 0.17 1.2 898 2.56E+00 3.23E-01
100 250 0.28 1.4 1033 3.92E+00 3.56E-01
100 500 0.89 3.6 6476 2.42E+01 1.88E+00
100 750 1.71 7.8 4769 1.81E+01 1.87E+00
100 1000 3.18 12.0 10000 5.19E+01 1.59E+01

Table 7.3: Results with α = 0.96

We also tested our heuristic methods on these instances, and we compare the best of them (indicated in the tables below by "BestHeur") with the Department of Information Engineering of Pisa one. In order to do this we report:

• Gap = (UB − LB)/max{1, LB};
• Best Gap = (BestHeur − LB)/max{1, LB}.

Users Radius UB BestHeur Gap Best Gap
10 100 1.2 1.2 105.59% 105.59%
10 250 1.2 1.2 93.89% 93.89%
10 500 4.2 3.6 321.47% 261.47%
10 750 7.2 5.6 275.81% 192.29%
10 1000 9.0 6.8 198.36% 125.42%
25 100 1.2 1.2 106.57% 106.57%
25 250 1.2 1.2 97.93% 97.93%
25 500 3.4 3.4 253.61% 253.61%
25 750 9.0 7.8 241.98% 196.38%
25 1000 16.6 13.0 183.46% 121.99%
50 100 1.2 1.2 106.84% 106.84%
50 250 1.2 1.2 97.18% 97.18%
50 500 3.4 3.2 267.57% 247.57%
50 750 6.4 8.6 258.74% 382.06%
50 1000 14.0 15.4 254.60% 290.06%
75 100 1.2 1.2 106.93% 106.93%
75 250 1.2 1.2 98.14% 98.14%
75 500 3.2 2.8 250.00% 210.00%
75 750 6.6 7.4 338.11% 391.21%
75 1000 13.4 14.6 337.87% 377.08%
100 100 1.2 1.2 106.97% 106.97%
100 250 1.2 1.2 98.31% 98.31%
100 500 3.2 3.0 250.22% 230.22%
100 750 7.2 7.6 437.06% 466.90%
100 1000 10.4 12.2 316.64% 388.75%

Table 7.4: Results with α = 0.92

Users Radius UB BestHeur Gap Best Gap
10 100 1.2 1.2 102.91% 102.91%
10 250 2.4 1.6 209.06% 129.06%
10 500 4.4 3.4 276.49% 190.93%
10 750 7.8 5.6 243.15% 146.36%
10 1000 10.8 7.0 201.85% 95.64%
25 100 1.2 1.2 104.07% 104.07%
25 250 1.4 1.4 113.81% 113.81%
25 500 4.4 3.4 329.43% 231.83%
25 750 8.8 8.0 181.90% 156.28%
25 1000 16.0 13.8 130.37% 98.69%
50 100 1.2 1.2 104.41% 104.41%
50 250 1.4 1.4 112.95% 112.95%
50 500 3.6 3.2 274.09% 234.09%
50 750 8.8 9.0 315.89% 325.34%
50 1000 17.0 16.0 263.03% 241.68%
75 100 1.2 1.2 104.50% 104.50%
75 250 1.4 1.4 114.08% 114.08%
75 500 3.4 3.2 256.98% 236.98%
75 750 6.8 7.8 280.57% 336.53%
75 1000 14.4 15.2 296.72% 318.76%
100 100 1.2 1.2 104.55% 104.55%
100 250 1.4 1.4 114.28% 114.28%
100 500 3.2 3.2 237.23% 237.23%
100 750 6.8 8.0 327.65% 403.12%
100 1000 11.2 12.6 278.28% 325.57%

Table 7.5: Results with α = 0.95

Users Radius UB BestHeur Gap Best Gap
10 100 1.2 1.2 101.64% 101.64%
10 250 2.4 1.4 206.75% 106.75%
10 500 4.4 3.6 250.32% 186.63%
10 750 6.6 6.0 170.43% 145.85%
10 1000 10.8 6.8 180.73% 76.75%
25 100 1.2 1.2 102.87% 102.87%
25 250 1.4 1.4 111.86% 111.86%
25 500 4.0 3.8 263.32% 245.15%
25 750 8.8 8.2 162.24% 144.36%
25 1000 18.6 14.0 149.22% 87.59%
50 100 1.2 1.2 103.23% 103.23%
50 250 1.4 1.4 110.92% 110.92%
50 500 3.6 3.4 267.70% 247.70%
50 750 8.8 9.0 287.05% 295.85%
50 1000 18.2 17.0 261.72% 237.87%
75 100 1.2 1.2 103.34% 103.34%
75 250 1.4 1.4 112.14% 112.14%
75 500 3.6 3.2 270.80% 230.80%
75 750 7.0 7.8 264.60% 306.27%
75 1000 14.0 14.8 258.96% 279.47%
100 100 1.2 1.2 103.40% 103.40%
100 250 1.4 1.4 112.36% 112.36%
100 500 3.6 3.2 271.06% 231.06%
100 750 7.8 8.2 356.53% 379.94%
100 1000 12.0 12.8 277.21% 302.36%

Table 7.6: Results with α = 0.96

We can notice that the gap increases with the radius (at least when there are enough users) but not with the number of users: the bigger the space where the users are distributed, the harder the transmission of the signal; but if we increase the number of users without increasing the space, the transmission becomes easier.

[Figure: two example scenarios. With two users connected with probability 0.85, the optimum is x_1 = 1, g^2_1 = g^3_1 = 1, with cost 1 + β^2_1 + β^3_1; adding a third user connected to both with probability 0.96, the optimum becomes x_3 = 1, g^2_3 = 1, with cost 1 + β^2_3.]

For example (figure above), if we have two users with transmission probability 0.85 and we want both users to receive the signal with probability 0.95, the optimal solution consists in making one vertical transmission and two horizontal ones; but if we put another user between the other two, say with transmission probability 0.96 towards both, the optimal solution is a vertical transmission to the new user and only one horizontal transmission. Figures 7.1 and 7.2 help to notice this trend.

Figure 7.1: An increase in the number of users does not always lead to an increase of the gap.

Figure 7.2: Increasing the radius, the gap increases too.

The trend we noticed in the behaviour of the gap does not appear in the variation of the total time: the bigger the number of users, the more variables there are and the bigger the model that has to be solved, so the total time increases with both the radius and the number of users.

The value of the parameter α does not affect much either the gap or the total time; this shows that the random factor of how the users are distributed in space is more important.

Another interesting trend is the behaviour of the percentage of the total time that is spent in the solution of the mono-dimensional problems. We can see that it reaches its maximum when the radius is at least 750.

Finally, we can see that on the small and medium size instances the best of our heuristic methods outperforms the Department of Information Engineering of Pisa one, while on the biggest instances the trend is the opposite, as figure 7.3 shows.

Figure 7.3: In this plot all the instances have α = 0.95 and are ordered by increasing number of users and then by increasing radius.

To summarize our results, we plot in 3D both the lower and the upper bound, varying the number of users and the radius.


Figure 7.4: 3D plot of upper and lower bound with α = 0.92


Figure 7.6: 3D plot of upper and lower bound with α = 0.96

7.2 Disaggregated version

The bundle method implemented in the NDOSolver suite can also work in a disaggregated fashion: the objective function is seen as a sum of n sub-functions and separate sub-gradients are computed. This disaggregated version of the algorithm can potentially lead to a reduction of the computation time; since this factor is very relevant in our application, we also implemented this version. In our scenario the number of sub-functions is the number of steps; in fact, we can write our objective function in this way:

min Σ_{i∈N'} [ x_i + β^2_i g^2_i + λ^2_i log(1 − p̄) x_i + Σ_{(i,j)∈A''} λ^2_j log(1 − P_ij) x_i g^2_i ]
+ Σ_{h=3}^{k} min Σ_{i∈N'} [ (λ^h_i − λ^{h−1}_i) log(1 − p^{h−1}_i) + ( β^h_i + Σ_{(i,j)∈A''} λ^h_j log(1 − p^{h−1}_i P_ij) ) g^h_i ]
+ min Σ_{i∈N'} −λ^k_i log(1 − p^k_i)   (7.1)

So we can use the methods described in Chapters 3 and 4 to compute the value of the various sub-functions. For the computation of the sub-gradient we only need to separate the part of the aggregated sub-gradient inherent to the sub-function we are considering (it is enough to consider only the part with the variables relative to the corresponding sub-problem).
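Concretely, in our reading, the component for rounds (h − 1, h) contributes only the sub-gradient entries generated by its own variables p^{h−1} and g^h, i.e., the left-hand side of row h − 1 and the right-hand side of row h; summing the component sub-gradients over all sub-problems recovers (5.5)–(5.6). A sketch (complete graph, names ours):

#include <cmath>
#include <vector>

// Sub-gradient of the single sub-function for rounds (h-1, h): only the
// multiplier blocks lambda^{h-1} (via -log(1 - p^{h-1}_j)) and lambda^h
// (via the right-hand side of row h) are touched; every other entry is
// zero. Illustrative sketch of our reading of the separation.
void componentSubgradient(int h, const std::vector<double>& p_hm1,
                          const std::vector<int>& g_h,
                          const std::vector<std::vector<double>>& P,
                          std::vector<std::vector<double>>& SG) { // SG[h][j]
    const int n = (int)p_hm1.size();
    for (int j = 0; j < n; ++j) {
        SG[h - 1][j] -= std::log(1 - p_hm1[j]);            // LHS of row h-1
        double rhs = std::log(1 - p_hm1[j]);               // RHS of row h
        for (int i = 0; i < n; ++i)
            if (i != j && g_h[i]) rhs += std::log(1 - p_hm1[i] * P[i][j]);
        SG[h][j] += rhs;
    }
}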

We tested this version on the same instances presented in the last section. We can see that the number of iterations of the disaggregated version is lower in most of the cases (figure 7.7); only in some low-radius scenarios does the aggregated version need fewer (figure 7.8).

Figure 7.7: The disaggregated version needs fewer iterations to converge (data reported in logarithmic scale).

Figure 7.8: When the radius is small, the number of iterations of the aggregated version is very low (data reported in logarithmic scale).

The total computation time is similar in most of the cases (figure 7.9); in the low-radius instances the aggregated version is way faster (figure 7.10).

Figure 7.9: Only in some scenarios the disaggregated version has a lower total computation time than the aggregated one (data reported in logarithmic scale).

Figure 7.10: In the instances where the radius is equal to 100, the aggregated version needs less computation time than the disaggregated one.

The heuristic method that exploits the solutions given by the bundle needs to be modified for the disaggregated bundle: these solutions become partial solutions, relative only to the variables that appear in the associated sub-problem. So we have to paste various partial solutions together to obtain a complete one; in the absence of better alternatives, we decided to do this randomly. The upper bounds obtained in this way are worse than those obtained by the same heuristic methods in the aggregated version (figure 7.11); a better choice of the solutions to paste could maybe lead to better results.

Figure 7.11: The heuristic methods perform better on the aggregated version.

Since this heuristic method was the best performing one, we expect the best heuristic of the disaggregated version to be worse than that of the aggregated one as well. Figure 7.12 shows that this happens in most of the cases, but in some scenarios the results are very similar.


Figure 7.12: The aggregated version produces better upper bounds in most of the scenarios


Conclusions

We presented a new model for a routing problem with probabilistic constraints. This problem cannot be solved to optimality with the general-purpose tools currently available, so we decomposed the model with a Lagrangian approach. In this way we obtain valid lower bounds for the problem; we also developed some heuristic methods that, in some cases, are shown to be more efficient than the best heuristics currently available. Although the gap is not so good, the methods we developed will allow the implementation of an implicit enumeration (Branch and Bound) algorithm that could lead to finding optimal solutions of the problem.


Bibliography

[Fra02] Antonio Frangioni. Generalized bundle methods. SIAM J. Optim., 13(1):117–156, 2002.

[Fra18] Antonio Frangioni. Standard Bundle Methods: Untrusted Models and Duality. Technical report, Dipartimento di Informatica, Università di Pisa, 2018.

[NSV18] Giovanni Nardini, Giovanni Stea, and Antonio Virdis. Geofenced broadcasts via centralized scheduling of device-to-device communications in LTE-Advanced. In Simonetta Balsamo, Andrea Marin, and Enrico Vicario, editors, New Frontiers in Quantitative Methods in Informatics, pages 3–17, Cham, 2018. Springer International Publishing.
