
4.3 Problem 1: ROS2 remote communication

4.3.1 Initial Peers List

The first solution is based on the fact that the middleware ROS2 is built on, namely DDS, offers far more capabilities than the ones ROS2 exploits by default. DDS was conceived for distributed applications, such as IoT ones, in which many devices such as sensors and drones, scattered across different parts of the world, talk to each other and form a large distributed DDS network. Like other implementations, eProsima Fast DDS exposes this additional potential beyond what ROS2 normally uses: this allows us to leverage the middleware itself to reach our goal, which is achieving remote ROS2 communication.

By potential, we mean that Fast DDS can easily be configured to behave differently from what ROS2 users usually expect, so that additional mechanisms can be leveraged. In particular, as stated in the official documentation, Fast DDS can be configured by means of XML configuration files. They are the way we can bend the DDS middleware's behaviour to our own goals, by defining the so-called profiles. The most common reason to do so is to unlock Quality of Service (QoS) behaviours different from the default ones, in order to control how messages are exchanged in a DDS Domain.

In our case, though, we do not want to configure QoS settings, but to slightly modify the behaviour of the SIMPLE Discovery protocol, as this first solution takes advantage of the so-called Initial Peers List. According to the RTPS standard, each RTPSParticipant (in our case a ROS2 node) must listen for incoming Participant Discovery Protocol (PDP) discovery metatraffic on two different ports, one linked to a multicast address and one linked to a unicast address.

Fast DDS allows for the configuration of an Initial Peers List which contains one or more such IP-port address pairs corresponding to remote DomainParticipants' PDP discovery listening resources, so that the local DomainParticipant will send its PDP traffic not only to the default multicast address-port specified by its domain, but also to all the IP-port address pairs specified in the Initial Peers List.

Simply put, Fast DDS allows a node to specify this list of Initial Peers, so that the node will send discovery traffic not only to all the other nodes in the LAN, as the SIMPLE Discovery protocol already does on its own, but also to all the Peers declared in that list: they will be contacted one by one in unicast.

A DomainParticipant's Initial Peers List contains the IP-port address pairs belonging to all the other DomainParticipants to which it will send its Discovery traffic. It is a list of addresses that a DomainParticipant will use in the unicast Discovery mechanism, together with or as an alternative to multicast Discovery. Therefore, this approach also applies to scenarios in which multicast functionality is not available.

The following constitutes an example XML configuration file which sets an Initial Peers list with one peer on host 192.168.10.13 with DomainParticipant ID 1 in domain 0:

Figure 4.1: XML configuration file for an Initial Peers List
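A minimal sketch of such a profile, following the Fast DDS XML schema for the Initial Peers List, might look as follows (the profile name is illustrative, and the exact root element may vary slightly between Fast DDS versions):

<?xml version="1.0" encoding="UTF-8" ?>
<profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
    <participant profile_name="initial_peers_example_profile" is_default_profile="true">
        <rtps>
            <builtin>
                <initialPeersList>
                    <locator>
                        <udpv4>
                            <!-- PDP unicast listening address and port of the remote peer:
                                 port = 7400 + 250*0 + 10 + 2*1 = 7412 (domain 0, participant ID 1) -->
                            <address>192.168.10.13</address>
                            <port>7412</port>
                        </udpv4>
                    </locator>
                </initialPeersList>
            </builtin>
        </rtps>
    </participant>
</profiles>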

With this mechanism, remote ROS2 nodes can be declared to be part of the working scenario, following some simple steps.

For the local nodes:

• We declare a set of <locator> fields, one for each node running cloud-side.

• We specify, in every <address> field, the IP address of the cloud server on which the nodes we wish to discover are being executed.

• For each of them, a different value must be set in the <port> field, corresponding to the port the remote node is listening on. Ports are not chosen randomly: each node listens on a specific one, which can be computed with the following equation (a fragment with the resulting ports is sketched after this list):

port = 7400 + 250 ∗ domainID + 10 + 2 ∗ participantID

• We save these tags in an XML file, which will become the configuration file of every node we launch. In order to make each node accept the XML profile, we must first export an environment variable, RMW_FASTRTPS_USE_QOS_FROM_XML. By default, its value is set to 0, and by changing it to 1 we force the node we are launching to look for an XML configuration file.

• Now that we have a proper XML configuration file, we need to force the node to look for it, and we have two options:

1. Declare the FASTRTPS_DEFAULT_PROFILES_FILE environment variable, with the path to the location of the XML file as its value. This will make the node look for the file in the specified path.

2. Name this file DEFAULT_FASTRTPS_PROFILES.xml and save it in the working directory in which we execute the node, so that it will be automatically recognized as an XML configuration file.
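As an illustration, assuming two cloud-side nodes with participantIDs 1 and 2 in domain 0 (hypothetical values), the <initialPeersList> fragment of the local-side profile would contain one locator per remote node, with the ports derived from the equation above:

<initialPeersList>
    <locator>
        <udpv4>
            <address>192.168.10.13</address>
            <port>7412</port> <!-- 7400 + 250*0 + 10 + 2*1 -->
        </udpv4>
    </locator>
    <locator>
        <udpv4>
            <address>192.168.10.13</address>
            <port>7414</port> <!-- 7400 + 250*0 + 10 + 2*2 -->
        </udpv4>
    </locator>
</initialPeersList>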

For the cloud nodes, the procedure is mirrored:

• We declare a set of <locator> fields, one for each node running local-side.

• We specify, in every <address> field, the IP address of the device (in our case the Nvidia Jetson Xavier NX) on which the nodes we wish to discover are being executed (a mirrored fragment is sketched after this list).

• For each of them, a different value must be set in the <port> field, corresponding to the port the remote node is listening on. Ports are not chosen randomly: each node listens on a specific one, found with the same equation:

port = 7400 + 250 ∗ domainID + 10 + 2 ∗ participantID

• We save these tags in an XML file, which will become the configuration file of every node we launch. In order to make each node accept the XML profile, we must first export an environment variable, RMW_FASTRTPS_USE_QOS_FROM_XML. By default, its value is set to 0, and by changing it to 1 we force the node we are launching to look for an XML configuration file.

• Now that we have a proper XML configuration file, we need to force the node to look for it, and we have two options:

1. Declare the FASTRTPS_DEFAULT_PROFILES_FILE environment variable, with the path to the location of the XML file as its value. This will make the node look for the file in the specified path.

2. Name this file DEFAULT_FASTRTPS_PROFILES.xml and save it in the working directory in which we execute the node, so that it will be automatically recognized as an XML configuration file.
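As a mirrored sketch, assuming the Nvidia Jetson Xavier NX is reachable at the hypothetical address 192.168.1.20 and runs a single node with participantID 0 in domain 0, the cloud-side <initialPeersList> fragment would be:

<initialPeersList>
    <locator>
        <udpv4>
            <address>192.168.1.20</address>
            <port>7410</port> <!-- 7400 + 250*0 + 10 + 2*0 -->
        </udpv4>
    </locator>
</initialPeersList>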

Note: There is also the possibility of not defining the Initial Peer listening port. In this case, the Discovery information is sent to all the ports corresponding to participantIDs ranging from 0 up to a maximum number set by the Fast DDS configuration (see TransportDescriptorInterface).
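For instance, a locator declared with only an address, as sketched below, causes the Discovery metatraffic to be sent to that whole range of ports on the remote host (the exact range is governed by the transport descriptor settings):

<locator>
    <udpv4>
        <!-- No <port> element: Discovery traffic is sent to the ports of
             participantIDs 0 up to the configured maximum -->
        <address>192.168.10.13</address>
    </udpv4>
</locator>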

Of course, we must make sure that the RMW_IMPLEMENTATION environment variable is set to rmw_fastrtps_cpp, otherwise we would not be using Fast DDS in the first place.

Note: An important thing to highlight is that the RTPS protocol underneath DDS is not aware of ROS2 types, so the MicroRTPS agent registers a default generic type in FastRTPS. This generic type has a maximum size of 1028, and by default FastRTPS entities use the memory policy PREALLOCATED_MEMORY_MODE. We must address this by explicitly defining a <historyMemoryPolicy> tag inside the <publisher> and/or <subscriber> tags, setting the value DYNAMIC or PREALLOCATED_WITH_REALLOC, so that no errors on the payload size will be generated by RTPS:

<publisher profile_name="/turtle1/cmd_vel" is_default_profile="true">
    <historyMemoryPolicy>DYNAMIC</historyMemoryPolicy>
</publisher>

Pros

This solution has the advantage of being very simple, as it does not imply any modification of the ROS2 architecture of the navigation system: only the generation of a proper XML configuration file for DDS is required, which is a task of minimal effort for the developer. Once the file is in place and the right environment variables have been exported, every node is launched normally and reaches the remote peers on its own.

Cons

Despite this clear advantage, the solution has not been considered feasible for an actual deployment.

The main problem resides in ROS2 itself and, in simple words, is mutual reachability: ROS2 assumes that every machine on which its daemon is running can reach all the other machines executing the ROS2 nodes it has to talk to, and vice versa. Indeed, ROS2 does not follow a client-server paradigm but a publish-subscribe one. In any mechanism based on a request and a response, the request is delivered and the responding entity, although unable to reach the requester independently, uses the address information contained in the request to reply, so mutual reachability is not needed. In our case this is not true, as every node must be able to contact all the others independently: consequently, the machine on one side, in our case the Nvidia Jetson Xavier NX, must be able to reach (physically, e.g. by means of ping) the machine on the other side, in our case the cloud server.

In a real-case scenario, mutual reachability cannot be easily assumed, as it requires network configurations beyond the ones most commonly found: a server is in fact usually reachable from clients, but clients (SEDIA in this case) are not always reachable by the server, which means that the ROS2 nodes in the server publishing on topics cannot independently contact the ones running on SEDIA.

There is another disadvantage: this method consists of each node sending Discovery metatraffic to each other node in unicast (in addition to the normal multicast traffic), to the IP address and port specified in the XML configuration file.

Assuming that the two machines are mutually reachable at the network level, what if either of them is behind a NAT?

In the majority of scenarios, a Cloud server is publicly reachable, with a public IP address, but a device such as our Nvidia Jetson Xavier NX usually lives inside a private network, the one of the facility in which SEDIA is navigating. This means that the network module used for the communication with the cloud server has a private IP address, and a NAT is used at the edge of the network to put it in communication with the outside. In this case, while SEDIA is able to contact the server directly through its public address, the same does not apply to the opposite direction, as the server is not able to independently contact SEDIA.

The way this is usually solved consists in creating port forwarding rules in the NAT, implementing a so-called "NAT traversal," which in our case allows the server to contact SEDIA regardless of whether it has been contacted by SEDIA first. And right here lies the second problem: each node listens for unicast Discovery traffic on a certain port, the one specified in the XML file, which means that a different port forwarding rule must be set for each node, resulting in a very long list of rules.

If the number of nodes is too large, the router might not even be able to support such a long list.

Also, even if we suppose to actually create that set of rules in the edge device, there is no guarantee that the participantIDs of the remote nodes will not change, or that a different domainID will not be chosen by whoever launches the SEDIA application. Changing either of the two parameters changes the listening ports of the launched nodes, which calls for a manual modification of the XML configuration file in order to match the newly allocated ports. It is not the most convenient and plug-and-play solution for proper ROS2 remote communication.

In conclusion, even if very simple and straightforward, the Initial Peers configuration through an XML configuration file turns out to be impractical, since the location of the remote peers is set manually, and this ultimately brings a number of problems. For these reasons, other solutions have been explored.