

7.3 Test 2: latency required for nodes’ activation and deactivation

Figure 7.17 represents the CDF of the measured latency required to trigger the activation of the considered node, accounting for both message exchange and processing time, in 7 different cases (a minimal measurement sketch follows the case list):

• Case1: a lifecycle_talker (an example managed node whose lifecycle we want to control) is executed in the same LAN and on the same machine as the lifecycle_manager.

• Case2: the lifecycle_manager runs inside the NX board, where all the other ROS2 nodes are in the Unconfigured state. The map_server node was chosen as the example target this time.

• Case3: same as Case2, but alba_v2_localization is taken as the target.

• Case4: the lifecycle_manager is inside the NX (as in Case2 and Case3), but the other nodes inside it are up and running (Active state) rather than Unconfigured. The map_server is taken as the example target.

• Case5: same as Case4, but alba_v2_localization is taken as the target.

• Case6: the lifecycle_manager runs on a laptop and the map_server is executed in the NX, where all the other nodes are Active.

• Case7: same as Case6, but the alba_v2_localization node is taken as the target.
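As a minimal sketch of how such a latency can be measured, the snippet below times a call to the standard change_state service that every managed ROS2 node exposes. The target name map_server, the helper names, and the probe node are illustrative assumptions, not the exact benchmarking code used in this thesis.

```python
# Minimal sketch: time the lifecycle activation of a managed ROS2 node by
# calling its standard change_state service (target name is illustrative).
import time

import rclpy
from lifecycle_msgs.msg import Transition
from lifecycle_msgs.srv import ChangeState


def measure_transition(node, client, transition_id):
    # Send the transition request and measure the round-trip time, which
    # covers both the message exchange and the node's processing time.
    req = ChangeState.Request()
    req.transition = Transition(id=transition_id)
    start = time.perf_counter()
    future = client.call_async(req)
    rclpy.spin_until_future_complete(node, future)
    return (time.perf_counter() - start) * 1e3  # latency in milliseconds


def main():
    rclpy.init()
    node = rclpy.create_node('latency_probe')
    # Every managed node exposes a ChangeState service at <name>/change_state.
    client = node.create_client(ChangeState, '/map_server/change_state')
    client.wait_for_service()
    latency = measure_transition(node, client, Transition.TRANSITION_ACTIVATE)
    node.get_logger().info(f'activation latency: {latency:.2f} ms')
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

The measured interval is the full request/response round trip, matching the "message exchange and processing time" definition given above.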

Considering all the 7 cases together, the measured latencies range from nearly 1ms up to a maximum of about 12ms. There are clear differences in the retrieved data: the curves representing the last considered conditions are shifted towards the right w.r.t. the ones representing the previous cases. This is because the scenarios involving two physical machines (NX and laptop) incur additional latency from traversing the network between the two devices, while the single-machine cases do not.

We can clearly say, then, that no substantial processing time is required to activate a node, even in a context in which other nodes are present (a little more latency is registered when they are Active than when they are Unconfigured; compare Case2 and Case4). The activation latency is on the order of 1ms, hence, according to these data, designing a switching solution based on it should not represent a real problem.

Figure 7.18: Latency required for nodes’ deactivation

Figure 7.18 represents, instead, the CDF of the latency measured during the deactivation of the nodes, in exactly the same scenarios (indeed, the nodes were activated and deactivated in a loop, sketched below, and the two sets of measurements were recorded separately). One important thing emerges from this graph: in most situations, the latencies are equal to the ones required for activation (see the curves on the left), namely on the order of 1ms. However, there are some cases in which a considerably higher amount of time is required to deactivate a certain node, as can be clearly seen for Case5 and Case7. Indeed, the manager spent up to 60ms deactivating the considered node, which was alba_v2_localization in both cases: this means that activation and deactivation times are highly variable and strongly depend on the considered node, for instance in terms of how many operations have to be performed when activating or deactivating it.
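The measurement loop described above could look like the following sketch, reusing the hypothetical measure_transition helper from the earlier snippet; the iteration count is illustrative.

```python
# Sketch of the activate/deactivate loop: latencies for the two transitions
# are collected in separate lists, so that the CDFs of Figures 7.17 and 7.18
# can be computed independently (iteration count is illustrative).
activation_ms, deactivation_ms = [], []
for _ in range(1000):
    activation_ms.append(
        measure_transition(node, client, Transition.TRANSITION_ACTIVATE))
    deactivation_ms.append(
        measure_transition(node, client, Transition.TRANSITION_DEACTIVATE))
```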

In these experiments, the alba_v2_localization node had to go through a lot of processing before being deactivated, but not nearly as much in the opposite transition, as witnessed by the previous graph. The slight difference between the two curves of Case5 and Case7 is given by the additional latency of executing on different machines (Case7), but the much larger gap with respect to the other cases is generated by the difference in processing time, as mentioned.

In conclusion, we can say that even the activation and deactivation phases, which are at the very core of the switching solution proposed by this thesis, can introduce substantial additional latency that can heavily affect the overall system. This latency strongly depends on the node whose transitions we are triggering, and must be assessed case by case in order to decide whether this cloud autonomous driving solution is feasible or not.

Future work

The system developed in this thesis to move the robot’s computational load to the cloud is not complete: as already discussed, additional effort is required to turn it into a working prototype of a cloud solution for autonomous driving, especially regarding its switching capabilities. Some possible future additions, which could be useful in delivering such a prototype, concern both problems faced in this work:

1. ROS2 Remote communication: the solution to this problem, which in this thesis is represented by the DDS Router technology, can definitely be improved.

As the data collected in Section 7.2 confirm, a DDS Router instance cannot live up to the real-time latency constraints of this kind of application, as it is not optimized for such strict requirements. However, different implementations of bridges capable of connecting DDS networks are available as open-source plugins, such as Eclipse Zenoh for Cyclone DDS.

These alternatives, which have not been explored here, could turn out to be the final solution to this problem.

2. Cloud/local switching: the proposed architecture is designed from scratch, and it still has many faults that must be taken into consideration to obtain a working prototype of a switching mechanism for a ROS2 autonomous system. They have already been discussed in Section 4.4.1 (where possible solutions are also proposed), but we list them here once again:

• Synchronization: the activation of the local instance and the deactivation of the remote one (and vice versa) must be synchronized. Each node needs some time to be activated or deactivated, hence we can end up with both the cloud and the local replica in the same lifecycle state (either Active or Inactive), or with two identical nodes publishing the same information to a third node at the same time; one possible ordering is sketched below. Most likely, the majority of nodes will not
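As a minimal sketch of one possible synchronization policy, assumed here rather than taken from the thesis implementation, the hand-over below deactivates the remote replica first and triggers the local activation only once the deactivation is confirmed, so that the two replicas are never Active at the same time (at the price of a short window in which neither is). The trigger helper and the client names are illustrative.

```python
import rclpy
from lifecycle_msgs.msg import Transition
from lifecycle_msgs.srv import ChangeState


def trigger(node, client, transition_id):
    # Call the node's change_state service and wait for its confirmation.
    req = ChangeState.Request()
    req.transition = Transition(id=transition_id)
    future = client.call_async(req)
    rclpy.spin_until_future_complete(node, future)
    return future.result().success


def switch_to_local(node, remote_client, local_client):
    # Deactivate the cloud replica first, and activate the local one only
    # after the deactivation succeeds: the replicas are never Active at the
    # same time, but there is a short gap in which neither of them is.
    if trigger(node, remote_client, Transition.TRANSITION_DEACTIVATE):
        return trigger(node, local_client, Transition.TRANSITION_ACTIVATE)
    return False
```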