
5.3 Evaluation Phase

5.3.2 Real Environment

5.3.2.2 ROS Interface

Figure 5.23: Low-level piCORE board based on an STM32 microcontroller IC.

are linked via USB. How data are extracted from the sensors, how the boards connect all the hardware parts, and how the trained network performs inference is explained in the next section.

a rotation matrix is applied in order to convert data from the NED frame to the NWU frame.

This new frame coincides with the reference frame of the nodes that compute the vehicle attitude from IMU data. The rotation matrix is defined as follows:

$$
R_x = \begin{bmatrix}
1 & 0 & 0 \\
0 & \cos(\pi) & -\sin(\pi) \\
0 & \sin(\pi) & \cos(\pi)
\end{bmatrix}
\tag{5.4}
$$

Once the data are converted, a ros::Publisher publishes a message on the /gps/data topic. This message contains three fields, one for each component of the distance vector.
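As a concrete illustration, the following minimal sketch applies the rotation of Eq. (5.4) to a NED distance vector and publishes the result. The geometry_msgs/Vector3 message type, the node name and the helper names are assumptions, since the actual message definition is not reported in the text.

```cpp
// Hedged sketch of the NED -> NWU conversion and publication performed by
// the gps node; message type and names are assumptions.
#include <ros/ros.h>
#include <geometry_msgs/Vector3.h>

geometry_msgs::Vector3 nedToNwu(double n, double e, double d) {
  // Rotation of pi around the x (North) axis, Eq. (5.4):
  // x is unchanged, y and z change sign.
  geometry_msgs::Vector3 v;
  v.x = n;   // North stays North
  v.y = -e;  // East  -> West
  v.z = -d;  // Down  -> Up
  return v;
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "gps_node");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<geometry_msgs::Vector3>("/gps/data", 10);

  ros::Rate rate(10);  // assumed publishing frequency
  while (ros::ok()) {
    double n = 0.0, e = 0.0, d = 0.0;  // distance vector read from the GPS driver
    pub.publish(nedToNwu(n, e, d));
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```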

keyence szv node This node deals with the data provided by the laser scanner. As said before, a server is initialized to receive data from the client. The data are parsed by an algorithm that checks whether the connection was established and extracts the portion of the datagram relative to the distance measures as a 32-bit array. This array is then arranged in a sensor_msgs/LaserScan.msg ROS message, which carries information about the scan sequence, including the ranges field. After the message is created, it is published by a ros::Publisher on the topic /scan.
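A hedged sketch of how the parsed distances could be arranged into the LaserScan message follows; only the message type and the /scan topic come from the text, while the frame name, field of view and range limits are assumptions.

```cpp
// Illustrative packing of parsed distance measures into a LaserScan message;
// angular and range parameters are assumptions, not the sensor's datasheet.
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <vector>

void publishScan(ros::Publisher& pub, const std::vector<float>& distances) {
  sensor_msgs::LaserScan scan;
  scan.header.stamp = ros::Time::now();
  scan.header.frame_id = "laser";      // assumed frame name
  scan.angle_min = -2.35619f;          // assumed +-135 deg field of view
  scan.angle_max = 2.35619f;
  scan.angle_increment =
      (scan.angle_max - scan.angle_min) / (distances.size() - 1);
  scan.range_min = 0.05f;              // assumed sensor limits
  scan.range_max = 10.0f;
  scan.ranges = distances;             // 32-bit distances from the datagram
  pub.publish(scan);                   // published on /scan
}
```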

zed node The depth map from the ZED camera is provided by a node belonging to the existing zed-wrapper package. This package makes it possible to use the ZED stereo camera with ROS, providing access to the stereo images, the depth map, the 3D point cloud and 6-DOF tracking. The depth map is published by this node on the topic /zed/depth/depth_registered. An example frame is shown in Figure 5.24.

Figure 5.24: A frame provided by the ZED camera. Note the colour gradient from closer objects to those farther away.
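For reference, a minimal subscriber to the depth topic could look as follows. The 32-bit float, one-value-per-pixel depth encoding is the one used by the zed-wrapper package; the node name and the callback body are purely illustrative.

```cpp
// Sketch of a subscriber reading the ZED depth map (32-bit float image);
// callback body is illustrative.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void depthCallback(const sensor_msgs::Image::ConstPtr& msg) {
  // Depth is encoded as 32-bit floats, one value (meters) per pixel.
  const float* depths = reinterpret_cast<const float*>(msg->data.data());
  float center = depths[(msg->height / 2) * msg->width + msg->width / 2];
  ROS_INFO("Depth at image center: %.2f m", center);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "depth_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/zed/depth/depth_registered", 1, depthCallback);
  ros::spin();
  return 0;
}
```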

uart This is one of the main nodes of the framework: it is the gate between the other nodes and the low-level board. As depicted in Figure 5.27, the uart node performs multiple tasks:

1. parses incoming data from the low-level board, including IMU readings;

2. reads data from the topic published by the control node;

3. builds the command package to send to the low-level board via serial communication.

Thus, the uart node allows data sharing between the high-level and low-level parts of the system. As explained before, the IMU provides linear acceleration, angular velocity and magnetic field along 3 axes. This information is extracted in this node and published on two different topics, /imu/data_raw for gyroscope and accelerometer data and /imu/mag for magnetometer data; these topics are subscribed to by the nodes that compute the vehicle attitude required for control. Once the control values for the drivers are computed, they are published on the topic atr_cmd, to which uart subscribes. The control signals are then arranged in a package made up of different bytes, as Figure 5.25 shows. This package is sent to the low-level board to drive the vehicle.
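A minimal sketch of how the uart node could republish the parsed IMU readings on the two topics is shown below, assuming sensor_msgs/Imu and sensor_msgs/MagneticField as message types (the text does not name them) and a hypothetical ImuSample struct filled by the serial parser.

```cpp
// Hedged sketch of splitting parsed IMU samples onto /imu/data_raw and
// /imu/mag; struct and field names are assumptions.
#include <ros/ros.h>
#include <sensor_msgs/Imu.h>
#include <sensor_msgs/MagneticField.h>

struct ImuSample {    // hypothetical container filled by the serial parser
  double ax, ay, az;  // linear acceleration [m/s^2]
  double gx, gy, gz;  // angular velocity    [rad/s]
  double mx, my, mz;  // magnetic field      [T]
};

void publishImu(ros::Publisher& raw_pub, ros::Publisher& mag_pub,
                const ImuSample& s) {
  sensor_msgs::Imu imu;
  imu.header.stamp = ros::Time::now();
  imu.linear_acceleration.x = s.ax;
  imu.linear_acceleration.y = s.ay;
  imu.linear_acceleration.z = s.az;
  imu.angular_velocity.x = s.gx;
  imu.angular_velocity.y = s.gy;
  imu.angular_velocity.z = s.gz;
  raw_pub.publish(imu);   // accelerometer + gyroscope on /imu/data_raw

  sensor_msgs::MagneticField mag;
  mag.header.stamp = imu.header.stamp;
  mag.magnetic_field.x = s.mx;
  mag.magnetic_field.y = s.my;
  mag.magnetic_field.z = s.mz;
  mag_pub.publish(mag);   // magnetometer on /imu/mag
}
```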

Figure 5.25: Package with command values sent to the low-level board. Byte sizes are given in brackets.
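Purely for illustration, a command package could be packed as in the sketch below; the actual byte layout is the one shown in Figure 5.25, so the start byte, the field order and the XOR checksum here are assumptions.

```cpp
// Illustrative packing of the serial command package; every field choice
// below is an assumption, the real layout is defined by Figure 5.25.
#include <cstdint>
#include <cstddef>

size_t buildCommandPacket(uint8_t* buf, uint16_t forward, uint16_t steer) {
  size_t i = 0;
  buf[i++] = 0xAA;                                  // assumed start-of-frame
  buf[i++] = static_cast<uint8_t>(forward >> 8);    // forward, high byte
  buf[i++] = static_cast<uint8_t>(forward & 0xFF);  // forward, low byte
  buf[i++] = static_cast<uint8_t>(steer >> 8);      // steer, high byte
  buf[i++] = static_cast<uint8_t>(steer & 0xFF);    // steer, low byte
  uint8_t checksum = 0;
  for (size_t j = 1; j < i; ++j) checksum ^= buf[j];
  buf[i++] = checksum;                              // assumed XOR checksum
  return i;  // number of bytes to write on the serial port
}
```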

imu filter node and toEuler As already said, data from the IMU module pass through the uart node before being processed to obtain the vehicle attitude. Two different nodes work together to compute the attitude:

1. imu filter node;

2. toEuler.

The first node filters and fuses the raw data from the IMU device: it fuses the angular velocities, accelerations and magnetic readings into an orientation quaternion, as described in [10]. It subscribes to the /imu/data_raw and /imu/mag topics, then publishes the fused data on the /imu/data topic. The second node subscribes to the topic published by the first one and converts the quaternion orientation into Euler angles. Given the quaternion q = (w, x, y, z) and the Euler angles (φ, θ, ψ), respectively Roll, Pitch and Yaw, the conversion is computed as follows:

$$\phi = \operatorname{atan2}\!\left(2(wx + yz),\; 1 - 2(x^2 + y^2)\right) \tag{5.5}$$

$$\theta = \arcsin\!\left(2(wy - zx)\right) \tag{5.6}$$

$$\psi = \operatorname{atan2}\!\left(2(wz + xy),\; 1 - 2(y^2 + z^2)\right) \tag{5.7}$$

The computed attitude is published on the rpyTopic topic and used by the neural node.
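A direct implementation of Eqs. (5.5)–(5.7), as the toEuler node could compute them, is sketched below; the struct and function names are assumptions, and the clamp on the arcsin argument is a common numerical safeguard rather than something stated in the text.

```cpp
// Quaternion -> Euler conversion following Eqs. (5.5)-(5.7); names are
// assumptions.
#include <cmath>

struct Euler { double roll, pitch, yaw; };

Euler quaternionToEuler(double w, double x, double y, double z) {
  Euler e;
  e.roll = std::atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y));
  double s = 2.0 * (w * y - z * x);
  // Clamp to avoid NaN from asin when |s| slightly exceeds 1 numerically.
  if (s > 1.0) s = 1.0;
  if (s < -1.0) s = -1.0;
  e.pitch = std::asin(s);
  e.yaw = std::atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z));
  return e;
}
```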

neural This node initializes the structure of the actor neural network from a NetworkModel.uff file created after the evaluation in the 3D simulated environment. Nvidia TensorRT is a platform for high-performance deep learning inference that allows loading a model from file. After the input and output layers of the network have been defined, it creates an engine to run inference.
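A hedged sketch of this step with the UFF-era TensorRT C++ API is given below; the input/output layer names and the state dimension are assumptions, and only the NetworkModel.uff file name comes from the text.

```cpp
// Sketch of building a TensorRT engine from a .uff file (UFF-era API);
// layer names "state"/"action" and the input dims are assumptions.
#include <NvInfer.h>
#include <NvUffParser.h>
#include <iostream>

class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) override {
    if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
  }
} gLogger;

nvinfer1::ICudaEngine* buildEngine() {
  nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
  nvinfer1::INetworkDefinition* network = builder->createNetwork();
  nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

  // Input and output layers must be declared before parsing the .uff file.
  parser->registerInput("state", nvinfer1::Dims3(1, 1, 28),  // assumed size
                        nvuffparser::UffInputOrder::kNCHW);
  parser->registerOutput("action");
  parser->parse("NetworkModel.uff", *network, nvinfer1::DataType::kFLOAT);

  builder->setMaxBatchSize(1);
  builder->setMaxWorkspaceSize(1 << 20);
  return builder->buildCudaEngine(*network);  // engine used for inference
}
```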

Figure 5.26: Nvidia TensorRT workflow. It builds an inference engine from the trained network model.

In order to take advantage of both the CPU and the GPU on the Nvidia Jetson TX2 platform, the computation of the action output by the neural network is split into four steps through TensorRT. The CPU and GPU are referred to as host and device, respectively. The workflow, sketched in code after the list, is:

1. the state array is built on the host (CPU);

2. it is moved from the host (CPU) to the device (GPU), where memory has previously been allocated;

3. the output action is computed on the device (GPU);

4. it is moved back from the device (GPU) to the host (CPU) to complete the inference step.
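Under these assumptions, the four steps map onto CUDA and TensorRT calls roughly as follows; the buffer binding order and the state/action sizes are assumptions.

```cpp
// The four host/device steps with CUDA and a TensorRT execution context.
#include <cuda_runtime.h>
#include <NvInfer.h>

void infer(nvinfer1::IExecutionContext* context,
           const float* state, float* action,
           int stateSize, int actionSize) {
  void* buffers[2];  // index 0: input binding, index 1: output binding
  cudaMalloc(&buffers[0], stateSize * sizeof(float));
  cudaMalloc(&buffers[1], actionSize * sizeof(float));

  // Steps 1-2: state array built on host, copied host -> device.
  cudaMemcpy(buffers[0], state, stateSize * sizeof(float),
             cudaMemcpyHostToDevice);

  // Step 3: output action computed on device.
  context->execute(1, buffers);

  // Step 4: result copied device -> host.
  cudaMemcpy(action, buffers[1], actionSize * sizeof(float),
             cudaMemcpyDeviceToHost);

  cudaFree(buffers[0]);
  cudaFree(buffers[1]);
}
```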

Data from the gps node, the toEuler node, and the laser scanner or camera nodes converge towards the neural node. This node subscribes to all topics containing sensor data and stores the information about attitude, distance from the target and distance from the obstacles in a shared data structure. At each loop, the function spinOnce() updates this structure with a new environment observation, which is used to build the state array for the network. Once the state is built, it is fed to the network. The new action can assume values in the range [0, 1] for linear velocity and [−1, 1] for angular velocity. The action is published on the standard topic cmd_vel, subscribed to by the control node described below.
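A minimal sketch of this loop is given below; buildState() and runNetwork() are hypothetical placeholders for the shared-structure packing and the TensorRT inference described above, and the loop frequency is an assumption.

```cpp
// Sketch of the neural node main loop: spinOnce() refreshes the shared
// observation, the network is queried, and a Twist goes out on cmd_vel.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <vector>

// Hypothetical helpers standing in for the real code:
std::vector<float> buildState() { return std::vector<float>(28, 0.0f); }
std::vector<float> runNetwork(const std::vector<float>& s) {
  return {0.5f, 0.0f};  // placeholder for the TensorRT inference call
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "neural");
  ros::NodeHandle nh;
  ros::Publisher cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);
  // Subscriptions to /gps/data, rpyTopic and /scan (or the depth topic)
  // would fill the shared data structure; omitted here for brevity.

  ros::Rate rate(20);  // assumed loop frequency
  while (ros::ok()) {
    ros::spinOnce();   // update shared structure with new observations

    std::vector<float> state = buildState();
    std::vector<float> action = runNetwork(state);

    geometry_msgs::Twist cmd;
    cmd.linear.x = action[0];   // linear velocity in [0, 1]
    cmd.angular.z = action[1];  // angular velocity in [-1, 1]
    cmd_pub.publish(cmd);
    rate.sleep();
  }
  return 0;
}
```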

control This node has a simple role: it subscribes to the cmd_vel topic, reads the data from the message, and sends the new action using a custom message on the topic atr_cmd. Since this control signal is fed directly to the engines, it must be compatible with the engines' value range, namely [0, 65535]. Thus, the following mapping is applied:

$$\text{forward} = \frac{1}{2}\,(a_{\text{linear}} + 1) \cdot M \tag{5.8}$$

$$\text{steer} = \frac{1}{2}\,(a_{\text{angular}} + 1) \cdot M \tag{5.9}$$

where forward and steer are respectively the linear and angular velocity commands for the engines, $a_{\text{linear}}$ and $a_{\text{angular}}$ are the network outputs, and M is equal to 65535. In this way, the angular output of the network is mapped onto the entire angular-velocity range of the vehicle engines, while the linear output is mapped onto the range [32768, 65535], discarding backward velocity.
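Eqs. (5.8)–(5.9) translate directly into code; in the sketch below the clamp is an added safeguard and the function name is an assumption.

```cpp
// Mapping of a network output from [-1, 1] to the engine range [0, 65535],
// following Eqs. (5.8)-(5.9); the clamp is an assumption.
#include <cstdint>
#include <algorithm>

const double M = 65535.0;

uint16_t mapToEngineRange(double a) {
  a = std::max(-1.0, std::min(1.0, a));  // guard against out-of-range outputs
  return static_cast<uint16_t>(0.5 * (a + 1.0) * M);
}

// forward: the linear output lies in [0, 1], so it lands in [32768, 65535]
//          (no reverse motion);
// steer:   the angular output lies in [-1, 1], so it spans all of [0, 65535].
```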

The following section discusses the behaviour of the vehicle in the real environment, first using the laser scanner and then the camera, with some considerations about the other sensors, the performance of the 2D-trained network in a real scenario, and its adaptability to unseen situations.

Figure 5.27: ROS Interface.
