
To test the functioning of the web service, two scripts were written; they are located in the "testing_client" folder. The task of the first program (testPUT.py) is to send many simultaneous HTTP requests to the server and to check that they are executed correctly.

Since the web service, besides being a data distribution system, is mainly a measurement collection system whose task is to make the measurements persistent on the server, the requests forwarded by the testing client to the backend are PUT requests to the address ending in “/measure”. These requests insert data into the DB and represent one of the most frequent request types received by the server. Using requests of this type also makes it possible to check whether the data is inserted correctly into the database.
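For reference, the body of such a PUT request is a JSON document containing a "data_block" array of measurements, as built by the add_data function shown below. A minimal illustrative payload (the values are made up) could look like this:

# illustrative body of a PUT request to /ws/measure (made-up values)
payload = {
    "data_block": [
        {"sensorID": 12, "timestamp": 2, "data": 7,
         "geoHash": "sadsad", "altitude": 1222},
        {"sensorID": 13, "timestamp": 2, "data": 4,
         "geoHash": "sadsad", "altitude": 1222}
    ]
}
# the test script serializes this dictionary with json.dumps() and sends it
# in the request body together with the authentication headers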

The script contains a function called "add_data", whose task is to generate random measurements for board 0, insert them into a local DB and send them to the web service with an HTTP PUT request. The amount of data generated can be specified by setting the number of iterations of the for loop.

def add_data(index):
    # requires: random, time, sqlite3, json, requests, threading.Thread
    # (imported at the top of the script)

    # each thread sleeps a random amount of time so that the clients are staggered
    num = random.randint(2, 5)
    time.sleep(num)
    print("Thread " + str(index) + " sleeps " + str(num) + " seconds")

    t_conn = sqlite3.connect('TestDB.db', timeout=600)
    t_c = t_conn.cursor()

    # variables to create a well formatted json document to send
    data = {}
    l = []

    insert_query = '''
        INSERT INTO measure_table (sensorID, timestamp, data, geoHash, altitude)
        VALUES (?, 2, ?, "sadsad", 1222)
        '''

    # for loop that generates random data and saves it into the local db
    for x in range(3600):
        # sensors 12, 14, 16 and 18 share one random value
        n = random.randint(2, 9)
        for sensor in (12, 14, 16, 18):
            obj = {"sensorID": sensor, "timestamp": 2, "data": n,
                   "geoHash": "sadsad", "altitude": 1222}
            t_c.execute(insert_query, [sensor, n])
            l.append(obj)

        # sensors 13, 15, 17 and 19 share another random value
        n = random.randint(2, 9)
        for sensor in (13, 15, 17, 19):
            obj = {"sensorID": sensor, "timestamp": 2, "data": n,
                   "geoHash": "sadsad", "altitude": 1222}
            t_c.execute(insert_query, [sensor, n])
            l.append(obj)

        # sensor 20 uses a different random range
        n = random.randint(19, 25)
        obj = {"sensorID": 20, "timestamp": 2, "data": n,
               "geoHash": "sadsad", "altitude": 1222}
        t_c.execute(insert_query, [20, n])
        l.append(obj)

        # sensor 21: the JSON object carries a fixed value,
        # while the local insert reuses n
        obj = {"sensorID": 21, "timestamp": 2, "data": 23323,
               "geoHash": "sadsad", "altitude": 1222}
        t_c.execute(insert_query, [21, n])
        l.append(obj)

    t_conn.commit()

    # create the json data
    data['data_block'] = l
    json_data = json.dumps(data)

    response = requests.put(url, data=json_data, headers=headers)

    res = response.json()
    print(res)

    print("Thread " + str(index) + " finished")

Listing 6.1: add_data function

The main code of the script sets the configuration parameters of the connection to the local DB, creating the database if necessary, together with the connection parameters of the web service.

# create the url and the headers to send data to the server
url = 'http://127.0.0.1:5000/ws/measure'
headers = {"Content-Type": "application/json",
           "x-access-token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyX2lkIjo1MSwiY291bnRlciI6MX0.gBzzmxxxn3TIcB87VKIv1MrfuGEcUw5tBf-FeJGgIlo"}

# create the local DB
conn = sqlite3.connect('TestDB.db')
c = conn.cursor()

# drop the measure_table first
c.execute('''
    DROP TABLE IF EXISTS measure_table
    ''')
conn.commit()

# create the local measure_table
c.execute('''CREATE TABLE measure_table
    ([measureID] INTEGER PRIMARY KEY, [sensorID] INTEGER, [timestamp] INTEGER,
     [data] FLOAT, [geoHash] VARCHAR(12), [altitude] FLOAT)''')
conn.commit()

Listing 6.2: setting parameters

At this point, several parallel threads are spawned that execute the “add_data” function, thus simulating the clients that send data to the web service. Thanks to the use of two for loops, it is possible to specify how many threads run simultaneously and how many times the procedure is repeated.

for z in range(2):
    threads = []

    # create the threads and start them
    for i in range(5):
        t = Thread(target=add_data, args=(i,))
        threads.append(t)
        t.start()

    # wait for the threads
    for x in threads:
        x.join()

Listing 6.3: creating threads

Finally, the testing client queries the local database to obtain the number of rows inserted into it. This value subsequently allows us to check whether the same number of measurements has been stored in the DB used by the backend.

# display the number of rows in the measure_table
c.execute('''
    SELECT COUNT(*)
    FROM measure_table
    WHERE timestamp = 2
    ''')
print(c.fetchall())

Listing 6.4: querying the local DB
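With the values used in the listings above (2 rounds of 5 threads, 3 600 iterations per call and 10 measurements generated per iteration), the query should return 2 × 5 × 3 600 × 10 = 360 000 rows, and the same count is expected in the database managed by the backend.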

The second script has the task of testing the latency and the throughput of the web service, simulating the execution of a variable number of clients through the use of different threads. The number of simulated clients, as well as the total number of requests to send, can be set at the beginning of the script, as sketched below.
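That initialization is not part of the excerpts reported here; a minimal sketch with purely illustrative values (the variable names clients, n_requests and gen_latency are the ones used in the listings) could look like this:

# illustrative values: the real numbers are set by hand at the top of the script
clients = 5        # number of simulated clients, i.e. threads
n_requests = 500   # total number of GET requests, split among the clients
gen_latency = 0    # global accumulator for the per-client latencies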

threads = []

t0 = time.time()

# create the threads and start them
for i in range(clients):
    t = Thread(target=send_requests, args=(i,))
    threads.append(t)
    t.start()

# wait for the threads
for p in threads:
    p.join()

t1 = time.time()

total = t1 - t0

lat = gen_latency / clients
throughput = n_requests / total

print(str(lat))
print(str(throughput))

Listing 6.5: creation of multiple threads

The code above also shows how the throughput and the mean latency are calculated; note that the time interval needed to send and receive all the requests is measured at this point.
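As an illustration, with the hypothetical values of the sketch above (5 clients and 500 requests in total), if the whole run takes 25 seconds the throughput is 500 / 25 = 20 requests per second, while lat is the sum of the five per-client average latencies divided by 5, i.e. the mean response time of a single request.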

It is also visible that each thread executes the “send_requests” function, whose task is to generate GET requests and send them to the web service. The number of requests to be sent is split evenly among the generated clients. Each request concerns the measurements of the day "2018-11-10", filtering the data so as to obtain only those relating to pm2.5 particles and board number 8. This type of request was chosen because it is one of the heaviest on the server.

As the requests are generated and sent to the web service, the response time of each one is measured, thus establishing the latency value.

def send_requests(client_index):
    global gen_latency
    x = 0

    url = 'http://127.0.0.1:5000/ws/measure/pm25/8'
    headers = {"Content-Type": "application/json",
               "x-access-token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyX2lkIjoxMTQsImNvdW50ZXIiOjR9.WAYJy1ypOamJCbvoap25MPfA29MqZADNTFSgMco7tLE",
               "start": "2018-11-10",
               "end": "2018-11-10",
               "board": "True",
               "filter": "True",
               "last-two": "False",
               "from": "9"}

    # each client sends its share of the total number of requests
    n = n_requests / clients

    for j in range(int(n)):
        t2 = time.time()
        response = requests.get(url, headers=headers)
        res = response.json()
        t3 = time.time()
        partial = t3 - t2
        x = x + partial

    # average latency of this client's requests
    latency = x / (n_requests / clients)

    gen_latency = gen_latency + latency

    print("Client " + str(client_index) + ":\n" + "latency: " + str(latency))

Listing 6.6: send_requests function
