
Path Planning and Collision Avoidance algorithm for under-ice AUV



Master's Degree Course in Robotics and Automation Engineering

Path Planning and Collision Avoidance algorithm for under-ice AUV

21 June 2018

Piero Pareti Abetoni

Supervisors: Andrea Caiti, Ing. Andrea Munafò


"...on the reason why the goal cannot be reached... When you realise you cannot reach your goals, do not lower your aim; act differently to win them back."


1 Introduction 6

1.1 National Oceanography Centre - Southampton . . . 10

1.2 AUV - Autosub Long Range . . . 12

1.3 Statement of the problem . . . 14

1.4 Contribution of this thesis . . . 15

2 Previous work 18

2.1 Underwater Robotics and Technologies . . . 18

2.2 Path Planning Algorithm . . . 21

2.2.1 Combinatorial Planning . . . 23

2.2.2 Artificial potential fields planning . . . 26

2.2.3 Sampling-based planning . . . 27

2.3 Collision Avoidance System . . . 29

2.4 Algorithms and Methods for Underwater Explorations . . . 31

3 Problem Formulation 34

3.1 Requirements . . . 34

3.2 Design Approach . . . 35

3.3 Algorithm Developed . . . 36

4 Forward-Looking Sonar 38

4.1 How a sonar works . . . 39

4.2 FLS Modelling and Sampling . . . 40

5 Obstacle Detection and Avoidance 43

5.1 Obstacle Detection . . . 43

5.2 Obstacle Reconstruction . . . 44

5.3 Obstacle Avoidance . . . 46

5.4 Local Target Selection . . . 48

6 Planning Procedure 49

6.1 The RRT* Algorithm . . . 49


6.1.1 Implementation Solutions . . . 53

6.2 RRT* with dynamic constraints . . . 56

6.2.1 Algorithm developed . . . 58

6.2.2 2D Path Generation . . . 60

6.2.3 3D Path Generation . . . 62

6.3 New method based on RRT* . . . 64

7 Avoiding Risky Missions 69

8 Simulations and Results 71

8.1 RRT* . . . 73

8.2 RRT* with dynamic constraints . . . 75

8.3 New method based on RRT (Dyn-Cut RRT) . . . 77

8.4 Dyn-Cut-RRT planning inside the gallery . . . 79

9 Conclusions 81

9.1 Future Work . . . 82

References 83


1.1 Mapping statistics for the Moon, Mars and the underwater Earth surface [6] . . . 7

1.2 ALR recovery after a deep water and long range mission in West Antarctica . . . 12

1.3 ALR equipped with both localisation sensors and science payload . . . 13

1.4 Peninsula Antarctica and Alexander Island . . . 14

2.1 Localisation and Mapping problem [10] . . . 20

2.2 Piano Mover’s Problem - from Distributed Systems course notes . . . 22

2.3 Visibility graphs: Trapezoidal and Triangular decomposition . . . 24

2.4 Quadtree decomposition . . . 25

2.5 Pre-built graph for safe exploration . . . 32

3.1 State flow of design approach . . . 36

4.1 Image from [7]. Multibeam sonar types: horizontal-scanning 2-D sonar, vertical-scanning 2-D sonar, horizontal- and vertical-scanning 3-D sonar . . . 39

4.2 2D and 3D Forward-Looking Sonar model . . . 40

4.3 2D and 3D sampling, with 1500 and 9000 samples respectively . . . 41

4.4 Spherical coordinates . . . 42

5.1 3D and 2D obstacle border reconstruction . . . 46

5.2 Sea-floor mantle creation in the 3D environment: a small part of the map used by the seabed detection algorithm. The same applies to the ice-shelf detection algorithm . . . 46

5.3 2D and 3D collision avoidance system . . . 48

6.1 Steering procedure . . . 50

6.2 (a) Sample random selection; (b) Nearest search procedure; (c) Steering procedure; (d) Ball with radius ρ centred at srand; (e) Best parent searching; (f) Update branch; (g-h) Rewire procedure; (i) Final optimal tree . . 51

6.3 "Boundary Surface" . . . 54

6.4 Trajectory-Polygon intersection . . . 54

6.5 (a) Pattern to identify; (b) Resolution Problem . . . 55


6.6 Simulation of 2D RRT*: single iteration . . . 56

6.7 Simulation of 3D RRT*: obstacle avoiding . . . 56

6.8 Intermediate node insertion sint . . . 59

6.9 LSR and RSR Dubins trajectories . . . 60

6.10 Simulation of 2D RRT* with dynamic constraints: single iteration . . . . 61

6.11 3D path generation . . . 63

6.12 Smooth 3D path . . . 63

6.13 3D path: when the Γ1 length is zero and the starting and final points have different altitudes . . . 64

6.14 2D planning with dynamic-cutting RRT . . . 68

7.1 Pre-computed 2D conical trajectories to guarantee vehicle safety . . . 70

7.2 (a) Mission 1 in a dead-end; (b) Mission completed successfully . . . 70

8.1 Complete simulations of 2D RRT* algorithm . . . 73

8.2 Complete simulations of 3D RRT* algorithm . . . 74

8.3 Complete simulations of 2D Dubins RRT* algorithm . . . 75

8.4 Complete simulations of 3D RRT* algorithm with dynamic constraints in real environment . . . 76

8.5 Complete simulations of 2D dynamic-cut RRT algorithm . . . 77

8.6 Complete simulations of 3D dynamic-cut RRT algorithm . . . 78

8.7 3D simulation 1 inside the gallery (real map): in red and cyan, the right-wall and obstacle reconstruction . . . 79

8.8 3D simulation 2 inside the gallery (real map): in red and cyan, the right-wall and obstacle reconstruction . . . 80


Oceans have always fascinated man, both for the mystery that surrounds them and for the hidden resources they possess.

The attraction of discovery has, since ancient times, pushed men beyond their own limits; the presence of these limits, and the desire to overcome them, enabled the rise of new historical eras of wealth and innovation. For instance, the peoples living around the Mediterranean had maritime trade as their main source of wealth, and from the fifteenth century they began to discover new lands beyond the oceans. These are the challenges that the great men of history took on to allow us to be what we are today. The earliest sea "explorations" depended on human endurance, that is, the depth a person could dive. Our ancestors certainly explored the near shore. The Polynesians dived from their seagoing outrigger canoes, but the depths they could explore were limited to relatively shallow water. The women who dive for pearls near Japan and the Greeks who dive for sponges achieved phenomenal endurance records in holding their breath, but without reaching great depths. In the eighth century, Vikings measured sea depth by dropping lead weights attached to ropes overboard and recording how much rope was underwater when the weight reached the bottom; these lengths were measured in fathoms. It continued like this until 1872, when Sir William Thomson replaced the rope with thinner wire and invented the Thomson sounding machine. Using a tension wheel and a brake, his machine provided more accurate measurements of ocean depth.

Deep-sea physical studies began when the French mathematician, astronomer and scientist Pierre Simon de Laplace used only tidal motions along the shores of West Africa and Brazil to calculate the average depth of the Atlantic Ocean. He estimated this average to be 13,000 ft (3,962 m), which scientists later proved, with soundings across the ocean, to be relatively accurate [1].

Before the nineteenth century the seafloor had never been studied. It was assumed to be flat and practically unchanging over time [2]. It was believed that life could not exist deep in the ocean: no light meant no plant life, which ultimately implied no animal life. At 200 meters (650 feet) below the surface, light scatters and fades. At 4,000 meters (13,000 feet), the temperature drops almost to freezing, and the pressure rises to levels nearly unbearable for humans. In 1885 the first bathymetric map was published, revealing the presence of submarine mountain ranges in the Atlantic Ocean. Only after the sinking of the Titanic in 1912 was there an effort to create an acoustic mechanism to detect objects in the water. In 1914, Reginald A. Fessenden developed the Fessenden Oscillator to use a technique called echo ranging, whereby sound and its echoes off objects are used to determine distances. During World War I, researchers further developed the Fessenden Oscillator to detect submarines. This work paved the way for the current system of Sound Navigation and Ranging (SONAR) [3]. Images and data became more precise during the Second World War when, with the advent of echo sounders, the ocean depths began to be measured systematically. From measurements of the seabed, attention then turned to drilling.

Between the 1940s and 1960s, the first dives to the deepest parts of the oceans began. It was first the American explorer William Beebe and later Auguste Piccard who built the first pressurized underwater vessels. Their goal was to reach the deepest parts of the Earth, but it took until 1960 to reach the Mariana Trench, using the bathyscaphe "Trieste". Thanks to new technological discoveries, the ocean has become far more accessible in the last half-century; the development of sonar systems marked the turning point in ocean exploration. In 1966 the "Deep Sea Drilling Project" was launched with the aim of collecting as many samples as possible from the depths of the Atlantic, the Pacific, the Indian Ocean, the Red Sea and the Mediterranean Sea. Unfortunately, the high mission costs, and technologies not yet fully developed for these purposes, did not allow a large amount of data on the ocean floor to be obtained.

Today, thanks to the powerful technological tools at our disposal, we are able to explore every single corner of the emerged land, simply from our desks. The remaining part, the ocean, represents around 72% of the Earth's surface. In 2014, David Sandwell and his colleagues at the Scripps Institution of Oceanography in San Diego presented a new global map of the seafloor with about 5 km maximum resolution, thanks to the use of satellite data and images [4]. Of this unknown portion, only 10-15% has been mapped at higher resolution (around 100 m) and only 5% at a resolution of around 30 m. The remaining 60% of the Earth's surface thus stays relatively unknown [5].


It is a common feeling among Earth scientists, often a complaint, that we know more about other planets in the solar system than about our own sea. Indeed, astronomers have a more complete topographical understanding of the Moon, the ex-planet Pluto and the dwarf planet Ceres than we have of the seabed. Their instruments have managed to map Mercury, almost all of Venus and even Mars. This is strange, because the topography of the seafloor plays such a huge role in keeping the planet habitable, a role we need to fully understand in order to predict what the future of our climate holds. The shape of the ocean floor helps to determine weather patterns, when and where a tsunami will strike, and the management of the fisheries that feed millions.

The amount of money spent by NASA to find out what is around our planet is far higher than anything ocean science can spend. The goals of space exploration are in a way easier to identify with: you can see them using a telescope and observe the immensity that lies above our heads. It was Galileo Galilei who revolutionized the world of astronomy. Starting from the already existing Dutch lenses, he combined an awareness of their capabilities with the scientific method and built his own telescope. With two simple lenses and a wooden tube, he was the one who began cosmic observation: he took the telescope and pointed it at the Moon.

If we had lived in Galileo's time, or even a few centuries later, and decided to take a telescope and point it underwater, what could we expect to see? If we were lucky enough to choose a calm day with good visibility, we might see something within a few metres, and only in coastal regions. Seeing into the deep is far harder. Many factors influence this, from currents to turbidity, from salinity to the composition of the seabed. Part of the reason it is impossible to see into the deep lies in the structure of the H2O molecule itself. It has a tetrahedral structure, with an oxygen atom at its centre and two hydrogen atoms at the extremities; strong hydrogen bonds between molecules produce a compact structure. Water density increases as temperature decreases, reaching a maximum of about 1000 kg/m³ at 3.98 °C. Air, by contrast, is just a mixture of light gases whose density (in standard atmospheric conditions of dry air at 15 °C) is 1.225 kg/m³.

This difference between the two fluids is what allows boats and other objects containing air to float, thanks to hydrostatic thrust (buoyancy). Come to think of it, hydrodynamics and aerodynamics are much more similar than one might think. An aeroplane and an underwater vehicle are studied and designed in much the same way, just for different operating fluids. With the first you can communicate and share information using electromagnetic waves; with the second, only using pressure waves. In water, electromagnetic waves are so strongly attenuated that they are unusable for communication. Consequently, all technologies developed for self-localisation, like GPS, cannot be exploited. Sound waves are the only usable option.


In fact, thanks to the much lower compressibility of water compared with air, sound propagates in it much faster. Sound energy is transmitted as an elastic wave: a succession of compressions and rarefactions of the fluid in the direction of propagation. Imagine dividing the fluid into many layers perpendicular to the propagation direction. When the first layer is hit by the sound wave, it absorbs its energy by compressing; re-expanding, it passes the energy to the next layer, which in turn absorbs it by compressing, and so on. The speed at which each layer compresses and re-expands, and therefore the transmission speed, depends on a property of the fluid, its compressibility: the less compressible the layer, the sooner it transfers the energy to the next. Since water is roughly ten thousand times less compressible than air, sound spreads faster in water than in air. Although acoustic systems are used in everyday life, many problems prevent their efficient and effective use in marine applications. Compared with radio telecommunications, the available bandwidth is reduced by several orders of magnitude. Because of the low sound speed, multipath arrivals are generated that spread over tens of milliseconds, along with significant Doppler shift and spreading. Acoustic communication systems are often limited not by noise but by reverberation, bottom loss, time variability and inherent attenuation.
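The compressibility argument above can be made quantitative with the Newton-Laplace formula c = sqrt(K/ρ), where K is the bulk modulus (the inverse of compressibility) and ρ the density. A minimal Python sketch (the thesis's own modelling was done in Matlab), using textbook values that are assumptions here, not figures from the thesis:

```python
import math

def sound_speed(bulk_modulus_pa, density_kg_m3):
    """Newton-Laplace formula for a fluid: c = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# Textbook values (assumed for illustration, not thesis data):
c_water = sound_speed(2.2e9, 1000.0)   # K ~ 2.2 GPa for water  -> ~1480 m/s
c_air   = sound_speed(1.42e5, 1.225)   # adiabatic K ~ 142 kPa  -> ~340 m/s
```

Note that water is about 800 times denser than air, which alone would slow sound down; the four-orders-of-magnitude lower compressibility dominates, giving the roughly 4-5x higher speed.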

Despite all these problems, and the limitations due to the absence of appropriate technology, we have managed to reach the deepest trench on Earth and to make the first under-ice expeditions in Antarctica and at the North Pole.

The frontiers of scientific exploration are pushing further and further towards the abyss, precisely because it is within reach yet we do not know what it hides. Although the oceanographic discoveries of the last half century have completely transformed our point of view, the ocean remains largely unexplored. Several methods allow data to be obtained from the surface and the seabed, which combined produce medium-resolution maps. Satellites are equipped with hyperspectral and multispectral sensors that provide constant streams of images of coastal areas. They can also use LiDAR (Light Detection and Ranging) or gravity sensors. The first is a remote sensing method that uses light (visible, ultraviolet and near-infrared) in the form of a pulsed laser to see the curves of the underwater landscape. The second measures the gravitational attraction of the sea with respect to the seabed: the higher the mountains, ridges and reefs, the greater the attraction they exert on the water above them. However, the voyage to the ocean bottom is still a challenging experience. There are already ROVs that allow scientists to work from a ship and remotely control an underwater robot, but research is increasingly pushing towards autonomous vehicles, able to perform tasks without human supervision. As Robert Ballard, best known for the rediscovery of the Titanic, said: with only 5% of the seafloor mapped, our knowledge of what's beneath is about as detailed as a dinner table with a wet blanket thrown over it. Fortunately, we're about to whip the blanket off and reveal this aquatic meal with exquisite detail [2].

In June 2017, the Nippon Foundation and GEBCO launched Seabed 2030, an international project aiming to facilitate the complete mapping of the global ocean floor by the year 2030. In addition, both the British government, through NERC, and the US, with NOAA, are planning major investments in the near future. Considerable changes in the marine environment are expected over the next decade.

1.1 National Oceanography Centre - Southampton

The National Oceanography Centre (NOC) is a research institute for marine science and technology. It was opened in 1996 by HRH the Duke of Edinburgh, bringing together the University of Southampton's departments of oceanography and geology with the Institute of Oceanographic Sciences; the name reflects its national prominence in ocean and earth science. In 2010, after the merger with the Proudman Oceanographic Laboratory of Liverpool, it became the National Oceanography Centre, combining the UK's best centres and becoming a world flagship.

Since the late 1950s, the United Kingdom has invested substantial resources in marine development and research, becoming one of the pioneers of ocean technology in the 1960s: some of the first tidal-prediction methods used across the entire world were conceived there. One of the world's first autonomous underwater vehicles was deployed in the mid-1990s, and more recently the NOC has been pioneering the use of coordinated fleets of underwater and unmanned surface vehicles. Currently, the NOC has two sites, Southampton and Liverpool, and hosts numerous university departments, including Geochemistry, Geology and Geophysics, Marine Biology and Ecology, Paleoceanography and Paleoclimate, and Physical Oceanography. In addition, it hosts several NERC (Natural Environment Research Council) laboratories, such as Marine Geoscience, Marine Physics and Ocean Climate, and Marine Systems Modelling. The Southampton centre acts as the main structure for the marine scientific community throughout England, thanks mostly to NERC but also to EU grants and commercial organisations. The NOC possesses the largest marine data centre (the British Oceanographic Data Centre - BODC) and, since 2015, it has been the headquarters of the Marine Robotics Innovation Centre, the seat of submarine robotics research. It owns:

- two research vessels, RRS Discovery and RRS James Cook, both famous for the many discoveries made by the NOC, like the world's deepest undersea volcanic vents, in the Caribbean;

- multiuse aquarium facilities to simulate rocky shores and environments;

- smaller pressurized tanks for the study of deep-sea organisms;


- three fleets of submersibles and autonomous vehicles, comprising Autosub submarines, underwater gliders and remotely operated vehicles.

They aim to work on the most challenging problems in ocean science, observing the oceans and safeguarding the Ocean-Earth system. Their research is focused on eight key scientific and technological goals to predict how the ocean will change:

- Technology innovation: the large fleet of vehicles is proof of this. Besides, the need to collect more data from the abyss has pushed the NOC to start several studies on systems, such as batteries and navigation systems, that allow the vehicles to remain at sea as long as possible;

- Sea-Level Variability and Extremes: they aim to take significant steps in understanding global sea-level variability, driven by strong climate change;

- Heat and Carbon Penetration into the Deep Ocean: the increase of CO2 in the atmosphere has raised the temperature of the Earth system, and therefore of the oceans;

- Resources and Hazards: only 5% of the world's seafloor has been imaged in detail, making it one of the last exploration frontiers of our planet. The NOC seeks to understand the dynamic processes of the seabed, necessary for the sustainable exploitation of the new resources it could offer;

- Microscopic Life, Chemical Cycling and Biological Diversity and Ecosystems;

- Environmental Prediction: predicting national and global weather and natural hazards can avert danger to people and property and facilitates economic activity. The prediction and analysis of these events is done on the time-scale of decades or centuries;

- Coastal and Continental Shelf Seas: the continental platforms are continuously in movement and, for some decades, the effects of the displacement and dissolution of the polar plates have been monitored.


1.2 AUV - Autosub Long Range

Figure 1.2: ALR recovery after a deep water and long range mission in West Antarctica

The Autosub series of AUVs, designed and built at the National Oceanography Centre, Southampton (UK), have been operated in a fully unsupported mode for many years [60],[61]. Many versions of these vehicles have been developed (Autosub1, Autosub2, Autosub3, Autosub6000, ALR1 and ALR2), the latest being the Autosub Long Range series. The ALR is a deep-rated, long-endurance, flight-style AUV that fits well with the requirements of deep and long-range missions. It is a propeller-driven, 3.6 m long vehicle with a dry weight of approximately 660 kg. It is designed for a 6000 m depth rating and a range of up to several thousand kilometres. Such long range is enabled by ensuring a low hotel load and travelling slowly (0.35 - 1.0 m/s). Propulsion power is provided by a custom-designed propulsion unit which drives a two-bladed propeller. The vehicle is manoeuvred using two stern planes and a rudder, while the top fin is fixed and contains antennae for GPS, WiFi and Iridium as well as an optional strobe. The moveable control planes are driven by custom actuators, similar in structure to the propulsion motor, installed along the fore-aft axis of the vehicle. The output drive of the actuators goes through a worm-and-wheel gear to provide the 90° rotation required to drive the control planes. The control planes were sized so that the AUV is hydrodynamically stable in forward motion, and thus they do not need to be continuously adjusted to maintain heading.

Figure 1.3: ALR equipped with both localisation sensors and science payload.

A major challenge is to decide on the best sensor suite to enable long-range capabilities under the Arctic ice. The trade-off between navigation accuracy and the on-board energy available for sensors and data fusion complicates the problem. Given the restrictions in processing power during long-range missions, the AUV is equipped only with (see Fig. 1.3): the SiRFstarIV GSD4e GPS module, to get GPS fixes when the ALR surfaces; the downward-looking 300 kHz Teledyne RDI ADCP, to estimate the 3D AUV velocity (either over ground or relative to water currents) expressed in the AUV body frame; the 6-axis PNI TCM XB compass/attitude module, based on magneto-inductive sensors; and, forward of the front sphere, a large internal space which contains the Sea-Bird SBE 52-MP Conductivity, Temperature and Depth (CTD) probe. The CTD probe is a standard sensor for deriving the operating depth from the measured hydrostatic pressure. The dry space and buoyancy of the ALR are provided by two 6000 m rated forged aluminium pressure spheres. The forward sphere contains the vehicle batteries, which are lithium primary cells, while the rear sphere contains the control electronics [67].
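The depth-from-pressure derivation mentioned for the CTD probe is, to first order, the hydrostatic relation d = p/(ρg). The sketch below assumes a constant seawater density (an assumption for illustration; real CTD processing applies salinity- and latitude-dependent corrections):

```python
RHO_SEAWATER = 1025.0  # kg/m^3, assumed constant for this sketch
G = 9.81               # m/s^2

def depth_from_pressure(gauge_pressure_pa):
    """First-order hydrostatic depth: d = p / (rho * g)."""
    return gauge_pressure_pa / (RHO_SEAWATER * G)

# ~50 bar of gauge pressure corresponds to roughly 500 m of water.
d = depth_from_pressure(50e5)
```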

In order to completely determine the AUV pose in a 3D environment, six independent parameters are required (three for position and three for attitude). Four of the six parameters are directly measured by the on-board sensors: the depth is measured with high accuracy by the CTD probe, and the vehicle attitude is acquired using the magnetic compass and the tilt sensors. Thus, given the operating depth and the vehicle attitude, and considering the AUV velocity compensated for pitch and roll, it can be assumed that the vehicle is levelled horizontally and stabilized in roll and pitch. In this way, the 3D localisation problem can be reduced to a 2D one. The remaining variables needed to completely know the vehicle state, the 2D vehicle position, are estimated using 2D kinematics and dead-reckoning navigation. Although the DR procedure provides sufficient short-term accuracy, uncompensated sensor errors inevitably induce unbounded positioning-error growth. The navigation error is a combined effect of multiple error sources in velocity and heading measurements; the geometry of the AUV trajectory might also affect the estimation error. This error has been quantified as ∼1% of distance travelled in a straight line. For this reason, in long-range missions, additional mathematical methods are needed to compensate and correct such errors.
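The 2D dead-reckoning step described above, together with the quoted ~1% error growth, can be sketched as follows (the function name and the numbers are illustrative assumptions, not thesis data):

```python
import math

def dead_reckon(x, y, speed, heading_rad, dt):
    """One 2D dead-reckoning step: integrate speed along the heading."""
    return (x + speed * math.cos(heading_rad) * dt,
            y + speed * math.sin(heading_rad) * dt)

# Illustrative run: one hour at 0.9 m/s due east, 1 s steps.
x, y = 0.0, 0.0
for _ in range(3600):
    x, y = dead_reckon(x, y, 0.9, 0.0, 1.0)

distance = math.hypot(x, y)        # 3240 m travelled
position_error = 0.01 * distance   # ~1% growth quoted above -> ~32 m drift
```

The unbounded growth is the key point: after 600 km of such travel, a 1% error corresponds to kilometres of position uncertainty, which is why the text calls for additional correction methods.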


1.3 Statement of the problem

Figure 1.4: Peninsula Antarctica and Alexander Island

The National Oceanography Centre is one of the greatest oceanography centres in the world and is specialised in South and North Pole missions.

In addition to countless missions in the Atlantic Ocean and the North Sea, in 2020 the NOC aims to explore a particular area of the Antarctic continent, situated on the Antarctic Peninsula. Using the ALR (Autosub Long Range) AUV, the goal of this mission is to undertake a sub-ice traverse of the King George VI ice shelf at the base of the West Antarctic Peninsula. An underwater channel, completely covered by ice (George VI Sound), runs between the continent and Alexander Island. The length of the traverse will be about 600 km, the expected water depth is 500 m from ice to seabed, and the narrowest part, situated in the middle, should be around 18 km wide. The ice shelf is open at both ends and the water flows from the south margin to the north.

The requirement for the mission is to start at the north end and recover the vehicle at the south end. The mission is expected to last 10 days at a cruise speed of 0.7-1.2 m/s, and it is not strictly necessary that the ALR crosses the entire tunnel. If the full traverse is not possible, the vehicle must abort its mission and return to the starting point. Vehicle safety is the main constraint to be met in such circumstances. Ideally, the vehicle would fly close to the underside of the ice (> 50 m), but it could be better to fly close to the seabed, if possible.
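A quick sanity check on the figures above, assuming straight-line travel with no currents or detours:

```python
DISTANCE_M = 600e3  # ~600 km traverse, as stated in the mission description

def mission_days(speed_m_s):
    """Straight-line transit time in days, ignoring currents and detours."""
    return DISTANCE_M / speed_m_s / 86400.0

days_slow = mission_days(0.7)  # ~9.9 days
days_fast = mission_days(1.2)  # ~5.8 days
```

So the stated 10-day duration is consistent with the traverse only near the lower cruise speed; any detour or adverse current eats directly into that margin.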

Reasonable satellite bathymetry data should be available for the region. These could help in understanding the ice depth and the seabed but, unfortunately, they are not accurate enough (error > 100 m). The ALR will need an INS for navigation and a good Terrain-Aided Navigation system to improve self-localisation, combining the known data with the real bathymetry/altimetry data acquired in real time. The mission is very challenging due to its temporal duration, the unknown nature of the environment (it is, in fact, the first time that an autonomous vehicle is sent to explore that area) and, finally, the impossibility of using powerful processing systems and many sensors. Normally, many more devices would be used for this kind of operation, to self-locate and process all the data, and more energy would have to be available to complete the expedition. Furthermore, using more energy means having much more weight to carry, so every additional device or sensor must be strictly necessary.

1.4 Contribution of this thesis

The mission proposed by the NOC drives more and more engineers towards research into new methods and the development of more sophisticated algorithms, to guarantee vehicle safety and, at the same time, to complete the mission objective. As is widely known, the biggest problem in the underwater environment is estimating position with the best possible accuracy, because the Global Positioning System cannot be used for this kind of application. Hence, advanced mathematical methods and alternative systems are required to help these unmanned vehicles carry out their operations. If our aim were to develop navigation algorithms for a nuclear submarine, we could certainly spend hundreds of thousands of pounds on control systems and sensors; in that case, we would have an almost perfect knowledge of our position without using GPS. Unfortunately, that is not our case: we must make do with less accurate instruments and try to make the most of what we have available.

Therefore, let us imagine being inside an autonomous underwater vehicle travelling in an under-ice area, where it is impossible to communicate with anyone on the surface. What would you need to drive safely? Each autonomous vehicle needs:

- Advanced self-localisation systems, like SLAM (Simultaneous Localization and Mapping) and TAN (Terrain Aided Navigation): they try to fix errors introduced by the navigation system; their estimates seek to improve the vehicle position and attitude, preventing them from diverging over time;

- A collision avoidance system: the basic idea is "if there is an object in the robot's way, avoid it". It preventively identifies any obstacles during the flight;

- A path planning system: it finds the best path to follow, minimizing the distance travelled while avoiding the obstacles. Once the targets have been defined, step by step, these algorithms compute the ideal trajectories that would allow the vehicle to reach them.
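As a concrete illustration of the last component, here is a minimal generic RRT in Python (a textbook sketch, not the thesis's RRT* or Dyn-Cut RRT; all names, parameters and the toy world are invented for the example):

```python
import math
import random

random.seed(0)  # reproducible sketch

def rrt(start, goal, is_free, bounds, step=1.0, iters=4000, goal_tol=1.5):
    """Minimal 2D RRT: grow a tree of (node, parent) pairs toward random
    samples until some node lands within goal_tol of the goal."""
    nodes, parents = [start], [None]
    for _ in range(iters):
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Nearest existing node to the random sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        # Steer one fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # candidate collides with an obstacle: discard it
        nodes.append(new)
        parents.append(i)
        if math.dist(new, goal) < goal_tol:
            path, k = [new], i          # walk parents back to the root
            while k is not None:
                path.append(nodes[k])
                k = parents[k]
            return path[::-1]           # start first
    return None                         # iteration budget exhausted

# Toy query: a 20x20 world with a single disc obstacle at (10, 10).
free = lambda p: math.dist(p, (10.0, 10.0)) > 3.0
path = rrt((1.0, 1.0), (19.0, 19.0), free, [(0.0, 20.0), (0.0, 20.0)])
```

RRT* (used in this thesis) extends this scheme with a best-parent search and a rewiring step that together make the returned path asymptotically optimal.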


The overall aim of this research is to design and develop a path planning and collision avoidance system for AUVs. During my period in Southampton, I analysed several methods existing in the literature, extracting those I thought best suited to underwater applications, and I highlighted the advantages and disadvantages of each. For our purposes, the only usable sensors are the forward-looking sonar and, depending on the task to be performed, the ADCP. After a preliminary study, their functioning was modelled in Matlab. Starting from well-known path planning models and combining them with dynamic programming and collision avoidance methodologies, I developed an algorithm that extracts suboptimal and safe solutions.

The first simulations were performed in static environments where the only obstacles were cylinders placed in a bare scene. To make the simulations as realistic as possible, this unrealistic environment was then replaced with a real environmental map of one of the Antarctic regions in which the NOC's team has carried out missions in the past. After this first processing phase, and seeing the success of the algorithm, we immediately thought of verifying its operation in other environments. In particular, I wanted to simulate the 2009 M431 mission carried out in the Antarctic region of Palmer, where AUTOSUB 2000 crashed into one of the surface ice walls, returning to the surface partially damaged. Unfortunately, I could not get hold of that environmental data; therefore, the final simulations are carried out using a map created ad hoc to simulate an underwater gallery, as planned for the 2020 mission. In this thesis, vehicle localisation has not been treated or developed: it is decoupled from the path planning part, since the latter must work independently of the exact vehicle position. This work is structured in the following way:

In Chapter 1 the general problem of autonomous underwater navigation is presented, partly deepening the acoustic propagation problem. Some of the nineteenth-century history is recalled and the worldwide goals for the future are set. Subsequently, a detailed description is provided of the ALR (Autosub Long Range) vehicle with its onboard sensors. The rest of the introduction presents the National Oceanography Centre of Southampton and my role in that research centre. Chapter 2 is dedicated to the analysis of the existing literature in the path planning and collision avoidance field, evaluating what has been used before in underwater and under-ice explorations.

In Chapter 3 the problem is formalised, explaining the requirements and the assumptions made for the problem under examination. The design approach and the algorithm developed are presented in Sections 3.2 and 3.3. Chapter 4 details the FLS functioning, its modelling in Matlab and the sampling procedure used to explore the environment. In Chapter 5 the collision detection and avoidance method and the local target selection are shown.


Chapter 6 describes the construction of the exploration graphs through the RRT* algorithm. In particular, the classic RRT*, RRT* with dynamic constraints and a new RRT*-based method are presented in Sections 6.1, 6.2 and 6.3, respectively. In Chapter 7 a method is proposed to guarantee vehicle safety by avoiding risky situations during trajectory planning. Chapter 8 presents the results obtained with the various algorithms, together with comments, and, finally, in Chapter 9 the conclusion and future works are explained.


2.1 Underwater Robotics and Technologies

In the last two decades, the interest of various research groups worldwide in underwater exploration and navigation has increased considerably. Until the 90s, progress was limited by the absence or immaturity of the necessary technologies. Current technological progress, instead, has allowed the development of new measuring instruments and techniques that enable humans to explore unknown regions and remote environments. Consequently, there has been an exponential growth of Autonomous Underwater Vehicles in the scientific sector, the off-shore industry and the military [7]. The study and understanding of physical processes, local ecosystems, mixing dynamics and tidal movements in these particularly inaccessible environments is of considerable importance and great scientific interest [8].

Because these environments cannot be visited directly by humans, AUVs have gradually become more popular and play an important role in explorations. They are used for climate and environmental change research [9], monitoring of the marine habitat [10], glacier inspection [11], monitoring of pollution levels [12] and seabed mapping. It is precisely on mapping that the robotics community is investing funds and resources: one of the fundamental and most crucial requirements for an autonomous robot is to have a precise and detailed map of the environment in which it has to move [13]. As the name suggests, AUVs, after having been programmed with the scientific mission requirements, must be able to operate autonomously and self-locate without external intervention. The impossibility of using the GPS positioning system, widely used for land vehicles and airplanes, due to the rapid attenuation of electromagnetic waves through water [14], makes every underwater operation even more difficult. Only acoustic communications can cover long distances: although acoustic waves propagate more slowly than radio waves, they suffer far less from dissipative and attenuation effects, which are higher in liquids than in air, even though their propagation changes with salinity and temperature. The only underwater location systems are based on transponders, but inaccurate long-range position estimates in extreme environments prevent their use. Currently, the American Defense Advanced Research Projects Agency (DARPA) is developing a new underwater positioning system, called Posydon, that would allow an AUV to know its position with greater accuracy, without


surfacing to acquire data. With these new technologies, long-range missions could become more and more feasible in the near future.

Thus, AUVs rely on proprioceptive on-board systems that allow dead-reckoning navigation, such as a compass, a Doppler Velocity Log (DVL) or an INS for underwater positioning. Localization, and consequently navigation, is very inaccurate due to the unbounded growth of the navigation error associated with the inertial system, which tends to increase in long-range missions, such as under-ice ones, where the vehicle cannot surface. This problem is even more evident in low-cost vehicles, where the sensors used are cheap and therefore not very precise. Traditionally, to reduce INS errors as much as possible, an acoustic beacon or more precise sensors can be used, which increase the vehicle's price exponentially. In addition to proprioceptive sensors, an AUV can be equipped with acoustic sensors such as Side Scan Sonar (SSS) or imaging sonars, which are used to gather environmental information.

The greater the autonomy we can give to the vehicle, the lower the cost of managing the cruise will be, because the support vessels are free to perform other tasks. This is what happens in remote expeditions like those carried out in Antarctica or at the North Pole. Despite the considerable work done in underwater robotics, obstacle avoidance technology still remains immature and unreliable. To carry out long missions autonomously, an AUV must have detection and obstacle avoidance systems and be capable of operating in dynamic and highly uncertain environments. Their absence or malfunction may put the safety of the vehicle at risk. Catastrophic failure of a mission is intolerable for two reasons: the recovery process can be particularly difficult, while the replacement of broken parts can be prohibitive in terms of time and costs [15].

Uncertainty about the vehicle position translates into an error in the obstacle position. It is the task of the Forward-Looking Sonar (FLS) to identify the possible presence of an object along the vehicle's travel direction through the sonar scanning beam. These data are sent to a central processing system, which must modify the trajectory to ensure a safe execution of the mission. Collision avoidance is one of the key points to completing a mission, especially in critical situations. Therefore, it is of great scientific interest to analyse FLS data to respond rapidly in terms of safety for the whole system. Although multibeam FLS are widely used in the underwater environment as obstacle identifiers for their superior performance, they are much more expensive than sectorial scan sonars, and this often prevents their use.

Collision avoidance, however, does not refer only to the processes dedicated to preventing collisions, but also to all those systems that provide for the creation of a workspace map and the planning of the entire vehicle motion. Mapping is a rather complex process because it must cover multiple areas of work: space coverage, sensing method, localization


and mapping algorithms and data representation.

Map creation is the generation of an abstract representation of the world in which the vehicle may actually find itself. This representation can be simpler than reality, so that navigation, path planning and obstacle avoidance can be computed more easily. Having a reliable map of the environment is the first requirement for moving without colliding with the various obstacles. Therefore, a mobile robot must be able to create an internal map of its workspace with the sensors at its disposal. Being able to learn a map from scratch would allow the vehicle to adapt to changes without human supervision [16]. Acquiring a map is not trivial. First, because the space of all possible maps is huge, as it is defined over a continuous space. Second, the process of learning maps suffers from a cyclical, chicken-and-egg problem: while it works in an unknown environment, the robot builds a map and, within that map, tries to localise itself. Two representations are necessary: a model of the environment observations (a map of the explored environment) and a localization model (a position estimate) [13]. As the navigation error increases, the error made in map construction grows bigger and bigger. Therefore, a particle filter must be developed or implemented to correct them as much as possible. This is what is proposed in the SLAM technique (Simultaneous Localization and Mapping), where the environment and the detected obstacles are used as a benchmark to improve the position estimate [16].

SLAM is able to solve simultaneously some problems related to navigation and obstacle avoidance, but it is not yet a mature technique that can be fully relied upon: research is very active in this sector.

Figure 2.1: Localisation and Mapping problem [10]

According to [17], this methodology is not appropriate for detecting and avoiding underwater obstacles. Indeed, detecting an object is very difficult in submarine environments because of the enormous amount of background noise and clutter, and many false alarms (due to noise) can be expected in the received scans. With multibeam scan sonars, the traditional approach to managing false alarms is to use image processing techniques such as segmentation or feature extraction, in order to distinguish potential targets


or not. However, these methods may not be reliable in underwater environments, due to the lack of distinct features. In addition, the AUV flight time greatly influences the cost of submarine mapping, thus any improvement in sonar imaging may be of vital importance for marine and off-shore research.

Developing a system that is capable of managing errors and obstacles will make it possible for AUVs to conduct long missions autonomously. In [18], Burguera proposed an interesting alternative strategy to SLAM. It consists in overlapping two consecutive scans to evaluate the errors committed by the proprioceptive sensors in the position estimate and, consequently, to correct the various shifts. Moreover, while the SLAM technique is based on features extracted from the data, the scan matching technique can work even without them. As demonstrated by Burguera himself, this method has the disadvantage of accumulating position errors, with a consequent AUV drift that increases over time.

In the next paragraphs, we will explain in much more detail what path planning algorithms are and the various methods applied in the literature to solve the collision avoidance problem. Then, some strategies used in the underwater field will be presented and, finally, the contribution of this thesis will be explained.

2.2 Path Planning Algorithm

The word planning, in the meaning of the term itself, means scheduling a series of activities according to a pre-established plan. It is the act of getting ready for something that is going to happen in the imminent future. Obviously, it can take a more specific meaning according to the field of use.

Nowadays, we talk more and more about robots and how they are being introduced into daily life. Just think of the autonomous robots that clean our homes, the autonomous car, or the industrial robots that must cooperate in the same environment with other robots or other people. Similarly, a mobile robot carrying baggage in an airport must navigate among obstacles that may be fixed, such as fittings, conveyor belts and construction elements, or mobile, like passengers or workers.

To operate in an effective and safe way, it is therefore necessary that every move is planned, avoiding unforeseen or uncertain situations as much as possible. The desire to improve our wellbeing creates needs, and in this case the need is for algorithms. In robotics, a fundamental need is to have algorithms that convert high-level task specifications from humans into low-level descriptions of how to move. A basic version of this


Figure 2.2: Piano Mover’s Problem - from Distributed Systems course notes

problem consists of finding a sequence of motions for a robot from a start configuration to a given goal configuration while avoiding collisions with any obstacles in the environment. Clearly, one would like to endow the robot with the capability of autonomously planning its motion, starting from a high-level description of the task provided by the user and a geometric characterization of the workspace, either made available entirely in advance (off-line planning) or gathered by the robot itself during the motion by means of on-board sensors (on-line planning). A classical version of motion planning is sometimes referred to as the Piano Mover's Problem (see Figure 2.2).

Imagine having to move your piano from one room to another. An algorithm must determine how to move it through your home without hitting anything. The piano's geometry is the algorithm's input and the moves to make are the output. Robot motion planning usually ignores dynamics and other differential constraints and focuses primarily on the translations. Other aspects, such as uncertainties, differential constraints, modelling errors and optimality, began to be considered only after the introduction of control theory into motion planning. Historically, trajectory planning usually refers to the problem of taking the solution from a robot motion planning algorithm and determining how to move along it in a way that respects the mechanical limitations of the robot [19].

To drive the robot among the obstacles, early methods [20] directly used the 3D CAD models of the robot and obstacles to find a solution; they considered, indeed, the "operational 3D space". In this space, the path planning problem consists of finding the movements of a complex 3D structure (the robot) in a cluttered 3D space. A major step forward was to express the problem in another space, known as the configuration space and denoted by C. The robot position, or configuration, is completely determined by a single point having n parameters. In C space, the path planning problem consists of finding a sequence of robot configurations that take us from the initial to the final state. The positions that are not physically feasible (because of a collision) are represented in a particular C subspace, called Cobs. Once the configuration space is known and the space occupied by the obstacles identified, we can define Cfree = C \ Cobs. This represents the space within which the robot can move without colliding.
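The configuration-space formulation can be sketched in a few lines. The following is a minimal 2D illustration (not from the thesis), in which Cobs is a set of hypothetical disk obstacles and membership in Cfree reduces to a distance check:

```python
from dataclasses import dataclass
import math

@dataclass
class Disk:
    """A circular C-obstacle (hypothetical 2D example)."""
    x: float
    y: float
    r: float

def in_c_free(q, obstacles):
    """Return True if configuration q = (x, y) lies in Cfree = C \\ Cobs."""
    qx, qy = q
    return all(math.hypot(qx - o.x, qy - o.y) > o.r for o in obstacles)

obstacles = [Disk(2.0, 2.0, 1.0), Disk(5.0, 1.0, 0.5)]
print(in_c_free((0.0, 0.0), obstacles))  # True: outside both disks
print(in_c_free((2.3, 2.1), obstacles))  # False: inside the first disk
```

Every planner discussed in this chapter ultimately relies on a predicate of this kind, applied either to sampled configurations or to whole path segments.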


In the last 25 years, many different methods and algorithms have been developed, and there is no dearth of literature regarding the theory of motion planning [19], [22], [23]. Thus, only a limited number of motion planning techniques that are associated with AUVs will be surveyed. Historically, motion planning approaches can be classified into three fundamental categories: combinatorial or roadmap planning, artificial potential fields planning and sampling-based planning.

2.2.1 Combinatorial Planning

All combinatorial motion planning algorithms are based on the construction of a topological graph called a roadmap, which efficiently solves multiple initial-goal queries. A graph, to be called a roadmap, must satisfy the accessibility and connectivity-preserving conditions, which means that for each configuration q there must exist at least one path that reaches it, starting from the initial state [19]. Thus, these algorithms produce a graph in Cfree, where each vertex is a configuration and each edge a collision-free path through Cfree. There are three different approaches: visibility graphs, Voronoi diagrams and cell decomposition methods.

The visibility graph method addresses the problem of finding the minimum path between two points within an environment with polyhedral obstacles. It is known that the shortest path is a polygonal curve whose vertices are precisely the obstacle vertices. So, the algorithm creates lines connecting each vertex to every other, discarding all connections that are not in Cfree [24]. Unfortunately, one obvious disadvantage of using a visibility graph is the assumption that all the obstacles are known. Furthermore, a visibility graph has the tendency to generate paths very close to the obstacle edges. One might correct the size of the obstacles online so as to keep clear of them, or require a first phase of environment exploration. If we are not interested in finding the shortest route to the target, but instead want to move within the environment remaining equidistant from all obstacles, we must use Voronoi diagrams. This can be done through the construction of maximum-clearance roadmaps that try to keep the robot as far as possible from obstacles and environmental limits. Initially, it is necessary to define a finite number of points. Then, the space is tessellated in such a way that the boundary of every region is equidistant from the two points that surround it [25]. For instance, a 2D lattice gives an irregular honeycomb tessellation with equal hexagons with point symmetry; in a rectangular lattice the hexagons reduce to rectangles in rows and columns, while a square lattice gives the regular tessellation of squares. One of the most famous algorithms for creating Voronoi diagrams was developed by Steven Fortune in 1986. It uses a sweep line that moves across the space. During its motion, it


allows tracing parabolas whose foci are precisely the points sampled in the environment. In this way, each parabola's points are equidistant from both the beach line and its focus.

Figure 2.3: Visibility graphs: Trapezoidal and Triangular decomposition

The basic idea of the cell decomposition method is to find a path between the initial and the target configuration by subdividing the robot workspace into smaller and smaller regions called cells. These methods are divided into approximate and exact cell decomposition. The approximate model, also called quadtree (Figure 2.4), uses a recursive method in which each cell is in turn subdivided into four smaller cells with the same shape. This is done to bound the space occupied by obstacles as tightly as possible. The feasible path will be the shortest path passing through the vertices belonging to Cfree and joining qi to qg. In the exact method, unlike the previous one, the cell decomposition is done completely in Cfree, without analysing the entire obstacle borders. In fact, suppose we scan the entire space vertically and plot a vertical line through each vertex of the environment boundary or of the polygonal obstacles: in this way, we create trapezoidal regions free of obstacles. Likewise, we can connect every vertex of the external space with each visible obstacle vertex, creating triangular regions. At this point, if we take the midpoint of every line and join them together, we can find a path between the initial and the final configuration [19], [26].
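As an illustration of the approximate (quadtree) decomposition described above, the sketch below recursively classifies square cells against a single hypothetical disk obstacle, splitting mixed cells into four quadrants until a minimum cell size is reached. The world, obstacle and thresholds are illustrative assumptions, not the thesis implementation:

```python
import math

def disk_overlaps_cell(cx, cy, half, ox, oy, r):
    """True if a disk obstacle intersects the square cell [cx +/- half, cy +/- half]."""
    dx = max(abs(ox - cx) - half, 0.0)
    dy = max(abs(oy - cy) - half, 0.0)
    return math.hypot(dx, dy) <= r

def disk_covers_cell(cx, cy, half, ox, oy, r):
    """True if the disk contains the whole cell (check the farthest corner)."""
    return math.hypot(abs(ox - cx) + half, abs(oy - cy) + half) <= r

def quadtree(cx, cy, half, obstacle, min_half, cells):
    """Recursively split mixed cells into four quadrants (quadtree decomposition)."""
    ox, oy, r = obstacle
    if not disk_overlaps_cell(cx, cy, half, ox, oy, r):
        cells.append((cx, cy, half, "free"))
    elif disk_covers_cell(cx, cy, half, ox, oy, r):
        cells.append((cx, cy, half, "occupied"))
    elif half <= min_half:
        cells.append((cx, cy, half, "mixed"))   # too small to split further
    else:
        h = half / 2
        for sx in (-h, h):
            for sy in (-h, h):
                quadtree(cx + sx, cy + sy, h, obstacle, min_half, cells)

cells = []
quadtree(4.0, 4.0, 4.0, (2.0, 2.0, 1.0), 0.5, cells)  # 8x8 world, one disk obstacle
print(len(cells), sum(1 for c in cells if c[3] == "free"))
```

A shortest path would then be searched over the adjacency graph of the "free" cells, as described in the text.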

All these methodologies are based on the creation of a discrete space in which the best solution must be found. Starting from the initial configuration, a sequence of states is defined that allows the goal configuration to be reached. In other words, these are graph search methods. One important requirement for this kind of algorithm is to be systematic. This means that, if the graph is finite, the algorithm will be able to explore all reachable states in a finite time. Instead, if the graph is infinite, we should be willing to accept a solution with a certain tolerance; otherwise, the algorithm could iterate indefinitely [19]. Searching can be done in several ways: forward, backward or bidirectionally, and their use may vary depending on the application. The first case is used to explore the environment dynamically, the second is used when we already have a graph and want to find the best path in terms of cost, while the last


Figure 2.4: Quadtree decomposition

allows the graph to be expanded bidirectionally, starting both from the initial state and from the final one.

There are many search algorithms that rely on dynamic programming to perform this routine, such as breadth-first search, depth-first search, A* and Dijkstra's algorithm, with countless variants and configurations.

Breadth-first search is based on breadth-wise exploration. This means that, before passing from one depth level to the next, all nodes already in the queue must be analysed; in other words, we proceed uniformly before examining nodes of much greater depth. Contrary to the previous case, depth-first search is based on an in-depth exploration starting from the initial node. This kind of algorithm is chosen in particular applications where it seems more efficient to look for solutions along long paths. Both search algorithms are exhaustive (complete), which means that ultimately all the free space will be searched for solutions. For cases where multiple solutions exist and optimality (in general, shortest distance) is not a concern, depth-first search tends to have a lower memory requirement while providing a quicker answer. However, depth-first search can be deceived into searching long lists of cells, or states, even when the goal may be very near.
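A minimal breadth-first search on an occupancy grid, given here only as an illustration of the breadth-wise exploration described above (the grid and the start/goal cells are hypothetical):

```python
from collections import deque

def bfs(grid, start, goal):
    """Breadth-first search on a 4-connected grid (1 = obstacle cell).
    Returns a shortest path, in number of moves, as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                     # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs(grid, (0, 0), (2, 0))
print(path)  # a shortest path around the wall of obstacles
```

Because nodes are expanded level by level, the first time the goal is popped from the queue the reconstructed path is guaranteed to be shortest in number of moves.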

Dijkstra's algorithm was developed at the Mathematical Center in Amsterdam in 1956 to determine the shortest route for transportation among 64 Dutch cities. It finds the optimal trajectories inside a graph, keeping in memory all nodes previously explored and evaluating which is the best path according to the cost function. Suppose we can assign a cost to moving from one state to another. The total cost of a given path in the graph will be equal to the sum of the costs of the edges that, starting from the initial node, lead to the final one. Nonetheless, in most circumstances, searching the entire free space can be computationally demanding and, if the cost function is not properly defined for the problem, the performance can degrade dramatically.

Finally, the A* search algorithm is an extension of Dijkstra's algorithm that tries to reduce the total number of states explored by incorporating a heuristic estimate of the cost to get to the goal from a given state. Here, we need to define a cost-to-come,


i.e. the cost to reach the current position from the initial configuration, and a cost-to-go, i.e. the cost to reach the final configuration. However, there is no way to know the true optimal cost-to-go in advance but, in many applications, it is possible to construct a reasonable underestimate of it. An extension of A* that addresses the problem of expensive re-planning when obstacles appear in the path of the robot is known as D*. Unlike A*, D* starts from the goal vertex and has the ability to change the costs of the parts of the path that include an obstacle. This allows D* to re-plan around an obstacle while maintaining most of the already calculated path. A* and D* become computationally expensive when either the search space is large, e.g. due to the fine-grained resolution required by the task, or the dimensions of the search problem are high, e.g. when planning for an arm with multiple degrees of freedom.
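The cost-to-come/cost-to-go bookkeeping of A* can be sketched as follows; with the heuristic replaced by zero, the same code behaves as Dijkstra's algorithm. The grid, the unit step cost and the Manhattan-distance heuristic are illustrative assumptions:

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """A* on a 4-connected grid with unit step cost and a Manhattan-distance
    heuristic (an underestimate of the cost-to-go). With h = 0 this is Dijkstra."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()                     # tie-breaker for equal f values
    open_set = [(h(start), next(tie), start)]   # entries are (f = g + h, tie, cell)
    g_best = {start: 0}                         # best known cost-to-come
    came_from = {start: None}
    closed = set()
    while open_set:
        _, _, cell = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:                        # reconstruct the optimal path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1], g_best[goal]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0 and nb not in closed):
                ng = g_best[cell] + 1
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    came_from[nb] = cell
                    heapq.heappush(open_set, (ng + h(nb), next(tie), nb))
    return None, float("inf")

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 1, 0, 0]]
path, cost = a_star(grid, (0, 0), (3, 2))
print(cost)  # optimal number of moves
```

Because the Manhattan heuristic never overestimates the true cost-to-go, the first expansion of the goal is guaranteed to be optimal.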

2.2.2 Artificial potential fields planning

The idea of a potential field is derived from nature [28]: for instance, a charged particle navigating a magnetic field, or a small ball rolling down a hill. The idea is that, depending on the strength of the field, the particle arrives at the source of the field, the magnet. In robotics, we can simulate the same effect by creating an artificial potential field that attracts the robot to the goal.

The artificial potential field planner consists of representing the goal with an attractive field and the obstacles with a repulsive field; the vehicle is then required simply to follow the local gradient of the combined field to reach the goal. This approach links the motion of the autonomous vehicle to the sum of the forces applied to it, generating attractive or repulsive potential fields depending on the presence of obstacles. The most promising direction of motion is given by the negative gradient −∇U(q). The field of artificial forces F(q) in configuration space is produced by a differentiable function U obtained as:

F(q) = −∇U(q),    U(q) = Uatt(q) + Urep(q)    (2.1)

where ∇ denotes the gradient operator, Uatt is the attractive potential associated with the goal configuration qgoal, and Urep is the repulsive potential associated with the C-obstacle region. The attractive field can be formulated as below:

Uatt(q) = (1/2) ξ ρgoal(q)²    (2.2)

The attractive force can then be formulated as Fatt(q) = −ξ(q − qgoal), where ξ is a positive scaling factor and


qgoal is the goal configuration.

The repulsive field and force can be formulated as below:

Urep(q) = (1/2) η (1/ρ(q) − 1/ρ0)²  if ρ(q) ≤ ρ0,   0  if ρ(q) > ρ0    (2.3)

Frep(q) = η (1/ρ(q) − 1/ρ0) (1/ρ²(q)) ∇ρ(q)  if ρ(q) ≤ ρ0,   0  if ρ(q) > ρ0    (2.4)

where η is a positive scaling factor, ρ(q) is the distance from q to the C-obstacle region and ρ0 is the influence distance.

One major advantage of this method is its low computational requirement, which makes it very suitable for real-time implementation. The potential field can be used in several ways to steer a robot. In the case of online planning, the force ft generated by the potential field can be used as a generalized force acting on the dynamical system: τ = ft(q). This approach leads toward a more natural motion, since the effect of ft is filtered by the dynamics of the robot [19]. Considering the vehicle as an object having unit mass and subject to speed or acceleration constraints could surely produce much more reactive responses. In [29], Yoerger applied a local potential planner to drive the Benthic Explorer on a high-level mission over a seabed, limiting itself to controlling the vehicle's forward and vertical speed. However, this method possesses an intrinsic problem, that is, the formation of local minima [30]. For this reason, in most implementations it is combined with another, global path planner that corrects the first when it becomes trapped. In [31], Warren proposed a method to avoid this, which consisted of first generating a linear path to the goal configuration, which was then modified online by the potential field planner to avoid collisions. Trucco and Lane, in [32], rather than exploiting the gradient-descent method, performed a preliminary best-first analysis to identify any local minima; subsequently, these are treated as points to avoid, as if they were obstacles.
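Equations 2.1-2.4 can be turned into a simple gradient-descent planner. The sketch below assumes a 2D configuration space, a single point obstacle and hypothetical gains ξ, η and influence distance ρ0; as discussed above, such a descent may stall in a local minimum:

```python
import math

XI, ETA, RHO0 = 1.0, 0.5, 1.5   # hypothetical xi, eta gains and influence distance rho0

def force(q, goal, obstacle):
    """Total artificial force F(q) = Fatt(q) + Frep(q) (Eqs. 2.1-2.4)."""
    fx = -XI * (q[0] - goal[0])              # Fatt(q) = -xi (q - q_goal)
    fy = -XI * (q[1] - goal[1])
    dx, dy = q[0] - obstacle[0], q[1] - obstacle[1]
    rho = math.hypot(dx, dy)                 # rho(q): distance to the point obstacle
    if 0 < rho <= RHO0:                      # repulsion acts only inside rho0
        mag = ETA * (1 / rho - 1 / RHO0) / rho ** 2
        fx += mag * dx / rho                 # grad rho(q) = (dx, dy) / rho
        fy += mag * dy / rho
    return fx, fy

def descend(start, goal, obstacle, step=0.05, iters=2000):
    """Follow -grad U(q); may stall in a local minimum, as noted in the text."""
    q = list(start)
    for _ in range(iters):
        fx, fy = force(q, goal, obstacle)
        q[0] += step * fx
        q[1] += step * fy
        if math.hypot(q[0] - goal[0], q[1] - goal[1]) < 1e-2:
            break
    return tuple(q)

q_end = descend((0.0, 0.0), (5.0, 4.0), (2.0, 0.5))
print(q_end)  # ends close to the goal when no local minimum is encountered
```

Placing the obstacle exactly between start and goal would instead illustrate the local-minimum failure mode that motivates combining this planner with a global one.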

2.2.3 Sampling-based planning

While up to this point we have considered algorithms that plan trajectories only after an explicit characterization of Cfree and Cobs, let us now evaluate algorithms that explore Cfree without such a characterization.

The stochastic or random approach was first introduced by Barraquand and Latombe [33], and later used by Overmars [34], and more recently by Kavraki [35]. The main


concept is to generate a number of samples (nodes) randomly, eliminate the nodes that fall inside the obstacles, and then connect all the adjacent nodes with straight lines. The primary reason that only adjacent nodes are connected is to avoid saturating the configuration space with too many paths. Later, the resulting roadmap is searched from the start point to the goal point for the shortest path. With the advent of sampling-based methods, a number of algorithms have been implemented that can solve, in real time, planning problems that not long before had to be set up entirely offline. Although these algorithms are not able to guarantee the existence of a feasible solution at a generic instant, the probability of finding a solution increases as the number of samples grows. Asymptotically, sampling randomly allows a better exploration of an unknown environment, because the data obtained become denser. Unlike other motion planning methods, their randomised nature tends to make their performance less susceptible to the dimension of the configuration space [33]. Among the most famous algorithms are the PRM (Probabilistic Roadmap) and RRT (Rapidly-exploring Random Tree) methods, developed by Latombe [35] and LaValle [36], respectively. The basic idea of PRM algorithms is to use a local planner which samples the configuration space with a certain probability distribution. Each sample is subject to a collision check to see whether it belongs to Cfree or not. Finally, through a global planner, we try to connect the created nodes to determine the set of best solutions that allow moving from qinit to qgoal. The first, graph-construction phase is iterated until the graph is dense enough, and then the method moves on to the exploration phase using one of the algorithms explained previously for discrete planning [35]. We must consider that these algorithms are complete only in probability and, therefore, it is necessary to define a maximum number of iterations to prevent the algorithm from iterating indefinitely without finding a solution. This method is notoriously known for its long computation times and for its difficulty in finding a path in Cfree when there are narrow passages. Over the years, several methods have been proposed to solve this problem, such as the one proposed by Simeon in [37], by Kavraki in [38] or by Amato in [39]. All these methods differ from each other in their sampling strategies or in the implemented heuristics.
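The two PRM phases described above (random sampling plus local connection) can be sketched as follows. The disk obstacle, the k-nearest connection rule and the segment check by interpolation are illustrative assumptions, not the thesis implementation:

```python
import math
import random

def build_prm(n_samples, k, in_c_free, edge_free, bounds, seed=0):
    """Minimal PRM sketch: sample Cfree, connect each node to its k nearest
    neighbours with collision-free straight segments, return the roadmap."""
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds
    nodes = []
    while len(nodes) < n_samples:
        q = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        if in_c_free(q):                      # collision check on the sample
            nodes.append(q)
    edges = {i: set() for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        nearest = sorted(range(len(nodes)),
                         key=lambda j: math.dist(q, nodes[j]))
        for j in nearest[1:k + 1]:            # k nearest neighbours (skip self)
            if edge_free(q, nodes[j]):
                edges[i].add(j)
                edges[j].add(i)
    return nodes, edges

# Hypothetical world: one disk obstacle of radius 1 centred at (2, 2).
def in_c_free(q):
    return math.dist(q, (2.0, 2.0)) > 1.0

def edge_free(a, b, steps=20):
    """Approximate segment check by sampling intermediate points."""
    return all(in_c_free((a[0] + (b[0] - a[0]) * t / steps,
                          a[1] + (b[1] - a[1]) * t / steps))
               for t in range(steps + 1))

nodes, edges = build_prm(60, 5, in_c_free, edge_free, ((0, 4), (0, 4)))
print(len(nodes), sum(len(v) for v in edges.values()) // 2)
```

A query is then answered by connecting qinit and qgoal to the roadmap and running one of the discrete searches of the previous section over `edges`.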

One of the major problems when using PRM algorithms lies in considering only algebraic constraints (due to obstacles), rather than differential constraints. The paths generated are typically rigid, with abrupt directional changes, making them not viable for the vehicle actuators because of saturation. Most systematic search algorithms do not function well in high-dimensional spaces and, moreover, the set of paths created can be optimized to reduce the control effort and the distance to be covered. This has prompted the introduction of the RRT, which can be considered as an incremental form of PRM and is designed to search non-convex high-dimensional spaces efficiently. It guarantees probabilistic completeness and returns a feasible path, when one exists, with probability approaching one as the number of samples approaches infinity. The RRT algorithm will be detailed in Chapter 3.

An important step towards efficient optimization using randomized planners was taken in [50]. In particular, Karaman and Frazzoli showed that the RRT algorithm converges to a non-optimal solution with probability one. Furthermore, they introduced a new algorithm, called RRT*, and proved that it is globally asymptotically optimal for systems without differential constraints, while maintaining the same probabilistic completeness and computational efficiency of the baseline RRT. It possesses a few fascinating properties:

- It allows one to take into account both algebraic and differential constraints simultaneously, which is vital for motion planning;

- It is biased to the free space and exploits a probabilistic search method;

- It has also been proven to be probabilistically complete (Cheng and LaValle [40]);
- The simplistic nature of the algorithm facilitates performance analysis.

This method has had considerable success, especially in applications where it is desirable to find a solution quickly, thanks to the speed with which the algorithm finds a feasible path. Moreover, these algorithms run until interrupted and keep improving the solution as the number of samples increases. The choice of the number of samples, the computation time to be dedicated to the algorithm and the resulting solution quality are aspects of considerable study and interest [50], [53], [54].

Kinematic constraints were then introduced in [52], which shows how the new method is able to provide optimal solutions for vehicles such as unicycles and aerial vehicles. Over the last decade, research has applied these methods to holonomic and non-holonomic car-like vehicles, trying to improve the computational performance and make the graph construction more and more efficient. In [55], for instance, improvements to the RRT* computation time are provided. In [56], a new method is proposed to avoid inappropriate sampling by giving greater weight to regions with greater visibility. Rodriguez, in [57], proposes a series of heuristics to deal with the problem of passing through narrow passages with car-like vehicles, but using some information on the environment's shape. If the RRT has a fixed number of nodes, then the memory cost remains constant, making it ideal for use with outdoor car-like robots. The idea of keeping a fixed number of nodes is presented by RRT*FN in [58], and for non-holonomic vehicles in [59].

2.3 Collision Avoidance System

In this thesis, Collision Avoidance refers to the vehicle's ability to avoid collisions with static and dynamic obstacles while trying to complete the mission goal. In the oceanic environment, in addition to static obstacles such as rocky walls, grand canyons or underwater cliffs, a vehicle can face floating objects, large fish or other vehicles. These are the most unpredictable and dangerous, although the occurrence of such events is truly rare. In general, avoidance systems are built ad hoc based on customer needs, risk and application.

In the past, Ivan R. Bertaska in [42] developed a planner, suggested by Fiorini in [41], that combines a local search based on the Velocity Obstacles (VO) concept with a global lattice search to find dynamically feasible trajectories. In those papers, the proposed approach is based on the Velocity Obstacles method. Starting from the hypothesis that obstacles can only move with uniform linear motion, it creates a cone-shaped forbidden region from the relative velocity of each obstacle: the vehicle's velocity vector, and consequently the planned trajectory, must not pass through this obscured region. Starting from VO, in [43] Shiller presented a non-linear method to avoid obstacles, taking into account their shape and size to accurately generate a curvature along the path. A next step was proposed by Shiller himself in [44], where he introduced distance estimation to the obstacles and used the A* algorithm to find the optimal velocity. These methods, however, have several problems which prevent their application in under-ice areas. The principal obstacle to their applicability is that they require perfect knowledge of the environment and, especially, of the shape of the obstacles.
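The collision-cone test underlying the VO concept can be sketched as follows. This is a hypothetical 2D helper written for illustration, not the formulation used in [41]: a candidate vehicle velocity is rejected if, in the obstacle's reference frame, the relative velocity ray passes within the combined safety radius.

```python
import math

def in_velocity_obstacle(p_a, v_a, p_b, v_b, r):
    """True if candidate velocity v_a of the vehicle at p_a leads to a
    collision with an obstacle at p_b moving with constant velocity v_b,
    where r is the combined (vehicle + obstacle) radius."""
    rx, ry = p_b[0] - p_a[0], p_b[1] - p_a[1]      # relative position
    vx, vy = v_a[0] - v_b[0], v_a[1] - v_b[1]      # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return math.hypot(rx, ry) <= r             # no relative motion
    t = (rx * vx + ry * vy) / v2                   # time of closest approach
    if t < 0.0:
        return math.hypot(rx, ry) <= r             # moving apart
    cx, cy = rx - t * vx, ry - t * vy              # miss vector at time t
    return math.hypot(cx, cy) <= r
```

Sweeping this test over the set of reachable velocities yields the conical obscured region described in the text; the planner then selects a velocity outside every obstacle's cone.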

The most common methods for collision prevention are those proposed by Conti [46] on the artificial aperture method and by Tusseya [47] on the dynamic window method. Both have strong adaptability in various environments and exploit the limited information coming from the sensors. They are based on the artificial potential method, where obstacles generate repulsive forces to drive the robot towards a free path within the visible space. In the dynamic window approach, a heuristic function is introduced to find an obstacle-free path based on the vehicle's kinematics, the distance from the obstacles and the distance from the target to be reached. These algorithms, and in general all methods using a potential field approach, are not able to provide correct results when the vehicle is forced to go through narrow passages. In fact, if the repulsive force of the walls is such that the passage cannot be identified, the vehicle may not notice it and may fail to reach the target. A possible modification was proposed by Khatib in [49]. He highlights the problem described above, which is more significant for vehicles with limited acceleration, and introduces two different levels to command the robot motion: in the first, the desired direction is determined; in the second, the driving commands producing the movement are generated, also taking into consideration the thrust that the vehicle is able to produce.
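The narrow-passage weakness mentioned above can be made concrete with a minimal potential-field sketch. This is a generic illustration, not the formulation of [46], [47] or [49]; the gain values and influence distance are illustrative assumptions. Note how two symmetric walls produce repulsive forces that cancel laterally, so the resultant never steers the vehicle through the gap.

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """One gradient step of an artificial potential field: an attractive
    force pulls towards the goal, and every obstacle closer than the
    influence distance d0 adds a repulsive force."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for (ox, oy) in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:
            # repulsion grows without bound as the obstacle is approached
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

For a vehicle centred between two wall points, the lateral components cancel exactly; if the walls are close enough, the longitudinal repulsion can also dominate the attraction and trap the vehicle in a local minimum before the passage.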


For environments with unknown characteristics, intelligent control algorithms are also often used in the collision avoidance process, including fuzzy systems, expert systems and deep reinforcement learning [48].

When noisy data must be handled, as indeed happens in underwater exploration, the preferred technique is the Occupancy Grid Map. It belongs to the group of geometric algorithms, which model the environment through absolute geometric relations between the detected obstacles. There also exist topological maps, where graphs are created whose nodes each represent a reachable position. Hybrid Maps [45] combine both types; however, they are disadvantageous because they require more computation and maintenance. The relationships between the two schemes are not simple and require more powerful hardware systems.
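The robustness of occupancy grids to noisy data comes from fusing evidence rather than overwriting cells. A minimal log-odds sketch follows; the grid resolution, update weights and ray-tracing scheme are illustrative assumptions, not the formulation of any cited work.

```python
import math

class OccupancyGrid:
    """Minimal log-odds occupancy grid: each noisy range reading adds
    per-cell evidence, so repeated measurements average out the noise."""
    def __init__(self, w, h, l_occ=0.85, l_free=-0.4):
        self.w, self.h = w, h
        self.logodds = [[0.0] * w for _ in range(h)]  # 0 = unknown (p = 0.5)
        self.l_occ, self.l_free = l_occ, l_free

    def update_ray(self, x0, y0, x1, y1):
        """Mark cells along the beam as free and the hit cell as occupied."""
        n = max(abs(x1 - x0), abs(y1 - y0))
        for i in range(n):
            cx = x0 + round(i * (x1 - x0) / n)
            cy = y0 + round(i * (y1 - y0) / n)
            self.logodds[cy][cx] += self.l_free
        self.logodds[y1][x1] += self.l_occ

    def prob(self, x, y):
        """Recover the occupancy probability of a cell from its log-odds."""
        return 1.0 - 1.0 / (1.0 + math.exp(self.logodds[y][x]))
```

After a few consistent readings the hit cell converges towards "occupied" and the traversed cells towards "free", while a single spurious echo shifts the probabilities only slightly.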

2.4 Algorithms and Methods for Underwater Explorations

The literature on under-ice exploration is sparse. The main reason is the lack of the means necessary to accomplish this kind of mission. The NOC, instead, is one of the world leaders: since the end of the nineties, it has organized expeditions to the Arctic and Antarctic regions with AUVs.

The Autosub AUV series has already operated in a completely unsupported mode for many years [8], [15], [60], [61]. To ensure a reasonable likelihood of success, the missions carried out have tended to be relatively uncomplicated and with safe altitude limits. Typical examples are straight-line transects between a small number of waypoints, or a grid survey at a safe altitude over a relatively benign and known environment. Despite the great interest in the data that can be gathered there, these areas exhibit steep slopes, vertical cliffs many hundreds of meters high, canyons, and overhangs. The problem for the AUV is not only one of avoiding the hazards, but also one of control stability within an acceptable range of a set-point altitude demand. The ocean-floor colour photographic survey scenario requires the AUV to fly at altitudes of less than 4 m above the ocean floor (owing to the rapid attenuation of red light with increasing distance). At this level the AUV's horizon is very close, and sensing approaching obstacles and impending collisions is particularly difficult.

These vehicles are equipped with low-level detection and control systems, simple but working even in extreme conditions. Only in recent years has the desire for ever more ambitious challenges pushed the team to develop new technologies, and new localization, planning and detection methods, which include powerful mathematical tools and artificial intelligence.

Contrary to the under-ice area, the literature on underwater exploration is full of articles discussing different approaches to mapping, planning and obstacle avoidance. In the unmanned underwater control domain it is crucial to use safe path planning methods, as they allow the vehicle to carry out its exploration mission. Some of these methods are based on algorithms developed for terrestrial vehicles that have been adapted to the present case, like A*, Phi, JPS and RRT. Some of the proposed solutions are based on path planning algorithms for planar environments, already used in the terrestrial and aerial fields. Compared to the 2D case, the 3D one has not been extensively studied, due to the non-linearity of underwater motion and the high complexity of marine environments. One of the main requirements for an underwater vehicle is to acquire data about its surroundings and plan the route to be followed online. In this regard, Petillot in [62] proposed a first mapping and planning approach, using a multi-beam scanning sonar to detect and reconstruct the environment. In [63], Maki presented a method based on landmarks to drive the AUV, even if the results were obtained within a temperature-controlled water tank. These limitations were overcome in [64], where a framework performs online mapping and path planning that lets the AUV navigate while simultaneously building a 2-dimensional representation of the vehicle's surroundings. The framework has a mapping module that uses Octomaps [65] to represent the environment, a planning module that computes collision-free paths online, and a mission handler that coordinates the planner and the AUV controllers. Although those results may be satisfactory, Hernandez in [66] proposes a different approach from the usual RRT*, in which he tries to minimize computation time and improves the technique with which the paths are reshaped for the SPARUS-II vehicle.

Figure 2.5: Pre-built graph for safe exploration

There are several papers proposing genetic and fuzzy control algorithms to identify the fastest path, detect obstacles and minimize the algorithm costs; these have not been analysed here because they are considered unsuitable for the project requirements. In practical applications, however, the current habit in the underwater planning field is not to use sophisticated, complex, probabilistic or intelligent algorithms, but to create simple, computationally low-cost data structures that are at the same time safe. Pre-built graphs (see Figure 2.5) are used to determine the best path to follow while avoiding collisions or reaching targets. Considering the vehicle at time t_k and its predictive model, its states at times t_{k+1}, t_{k+2}, t_{k+3} can be evaluated.
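The pre-built graph idea can be sketched with a standard shortest-path search in which waypoints reported as unsafe by the detection system are simply skipped. This is a generic illustration with hypothetical node names, not the structure shown in Figure 2.5.

```python
import heapq

def shortest_safe_path(graph, start, goal, blocked):
    """Dijkstra over a pre-built waypoint graph; edges leading to nodes
    currently flagged as blocked are ignored, so the search returns the
    cheapest collision-free route still available."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        for v, w in graph.get(u, []):
            if v in blocked:
                continue          # waypoint vetoed by obstacle detection
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return None  # no safe route through the pre-built graph
```

Because the graph is built offline, the online cost reduces to re-running the search whenever the detection system updates the blocked set, which keeps the computational load low and predictable.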
