

Dipartimento dell'Informazione
Laurea magistrale in Ingegneria Robotica e dell'Automazione

Development of a visual navigation system for multirotor vehicles

Student: Ciro Potena
Supervisor: Prof. Lorenzo Pollini
Co-examiner: Prof. Mario Innocenti


Acknowledgments

The biggest thanks go to my family and to my friends, who have always supported me. A special thanks also goes to Angela, always present in times of trouble. Additionally, I would like to thank Lorenzo Pollini and Francesco Di Corato for giving me the opportunity to work on this interesting and challenging master thesis and for supporting me throughout my work. Thanks for helping me and answering my many questions!


Contents

Abstract
1 Introduction
  1.1 Project outline
2 Notation and prerequisites
  2.1 Notation
  2.2 Camera Geometry in Computer Vision
    2.2.1 Ideal Pinhole Camera
    2.2.2 Extrinsic and Intrinsic Parameters
    2.2.3 Epipolar geometry
    2.2.4 Rectification
  2.3 ROS
3 Pose Estimation with Stereo Vision
  3.1 Introduction
  3.2 Algorithm Overview
  3.3 Image Acquisition
    3.3.1 Image Conversion
    3.3.2 Image Preprocessing: Rectification
    3.3.3 Feature detection
    3.3.4 SIFT
    3.3.5 SURF
  3.4 Finding correspondences
  3.5 Triangulation
  3.6 Pose estimation
    3.6.1 DLT
    3.6.2 Robust Approach
    3.6.3 Pose refinement
  3.7 Keyframe change
4 Comparison of Approaches
  4.1 Comparison between SIFT and SURF
  4.2 Comparison between SIFT and SIFT with LM refinement
  4.3 Comparison between different feature thresholds
  4.4 Real data results
5 Conclusions and future works
  5.1 Conclusion
  5.2 Future works

Abstract

MAVs are naturally unstable platforms exhibiting great agility and they thus require a trained pilot to operate them. Industrial inspection with MAVs is a challenging task, and the ability to hover with high accuracy and stability is a key issue. In addition, payload on micro aerial vehicles is very limited, and the use of lightweight sensing equipment directly results in longer flight duration or the ability to carry additional payload.

This motivates the use of vision sensors for control. In this thesis, a modular framework that allows the user to test different components of a Visual Odometry based pose estimation algorithm has been created. Using this framework, different approaches and different implementations for the steps of the algorithm have been tested.

As a result, three approaches using SURF, SIFT and SIFT with a Levenberg-Marquardt refinement are compared, and a solution is selected for the inspection of a simulated environment. Finally, the best of these three implementations was tested on the EuRoC dataset.


Chapter 1

Introduction

Allowing easy access to places where, for some reason or another, humans cannot go without a large effort or without exposure to hazards is an important field of research. One can think of many scenarios such as search and rescue missions after a disaster or fire, inspection of either very small and narrow tunnels or very large rooms, surveillance and observation tasks and so on. Many of those tasks are most easily accomplished by small flying robots. Because of that, a lot of research has been directed at the field of micro aerial vehicles (MAVs) over the past years.

The MAV can either be teleoperated by a human operator or operate completely autonomously. While the latter is obviously still a lot more challenging, even tele-operated flight poses many challenges. MAVs are highly dynamical systems which are not easily controlled by a human operator, even when the operator is close by.

Furthermore, for many of the previously mentioned tasks, the operator might not be able to directly see the MAV, which only makes controlling the helicopter more difficult.

The tasks of this thesis result from an application where a MAV, particularly a quadrotor, will be used to inspect large industrial buildings. There are many difficulties to consider in order to correctly accomplish this task. For example, the walls of the buildings might not offer many distinctive features or patterns that can be used in navigation, and the inspection may require a particular maneuver that a human pilot may not be able to perform.

It will therefore be necessary to find suitable sensors to work in the given environment and to control the MAV to at least provide


stable hovering. On top of the control loop, a human operator or any other navigation process can then maneuver the MAV to the desired locations or waypoints. Another important issue to consider is the characteristics of the vehicle itself. The most important difference between any non-flying robot and a MAV is that the latter is always in motion. Unlike wheeled robots, a helicopter cannot just stand still, wait, take some more measurements and compute the best strategy, and only then move on.

Instead, while computing the next controller output, the MAV still moves and maybe even renders the control action it is currently computing invalid. As a second main problem, MAVs have a very limited power supply on board, and most of this power is used just to keep the MAV in the air.

This makes it difficult to add many sensors to the vehicle and to provide sufficient computational power to perform sophisticated calculations on board. Instead, few and lightweight sensors have to be used, which usually leads to a lower quality of the measurements. Thirdly, MAVs are usually underactuated, which means that the six degrees of freedom (DOF) have to be controlled by less than six independent actuators.

This however requires precise estimates of not only the current position and attitude, but also the current velocity, with a high update rate. Because of those constraints it is interesting to investigate the use of cameras to control the MAV.

Cameras have the advantage that even small and lightweight versions can capture images of acceptable quality.

Furthermore, those images can be acquired at a high enough frequency, so that the acquisition of data is surely never going to be the bottleneck of the control system.

Acquiring images also yields a huge amount of data that can be used to extract whatever information is needed for control. On the other hand, images also include a lot of unnecessary or even unwanted pieces of information.

Also, as mentioned before, it may be difficult to find the relevant information within the images because of the lack of distinctive features or patterns on the walls.


Processing the images to extract this information may also require a large amount of computational power, such that at least part of those computations will have to be off-loaded to an off-board computer.

1.1 Project outline

The goal of this thesis was to investigate the usage of stereo vision to control a MAV. In addition, one specific application where the MAV is used to inspect a large industrial tank was considered. The work consists of the following main tasks:

1. Choose a strategy to use the images to control the RW UAV.

2. Create a framework to test different approaches and different versions of the individual elements of an algorithm that implements this strategy.

3. Use the framework to compare three main approaches and find a suitable solution for the given industrial inspection application.

The framework was programmed using C/C++ on a computer running Ubuntu Linux 12.04. It makes extensive use of the meta-operating system ROS Hydro and the open-source computer vision library OpenCV. This thesis is structured as follows:

Chapter 2 introduces the notation and some basic concepts that are used in the thesis. It also includes a short introduction to ROS.

In chapter 3 all the individual steps contained in the algorithm of the framework are presented, and different approaches for every step are analyzed.

Chapter 4 presents the results of a comparison of the approaches using the framework.

Finally, chapter 5 draws conclusions and highlights future work and possible extensions.


Chapter 2

Notation and prerequisites

2.1 Notation

Throughout this report, the following notation is used:

2D point or vector: x = [x  y]^T, or [x  y  1]^T for homogeneous coordinates

3D point or vector: X = [X  Y  Z]^T, or [X  Y  Z  1]^T for homogeneous coordinates

Matrix: A

Identity matrix: I

It will always be noted if homogeneous coordinates are used. The hat operator denotes the 3x3 skew-symmetric matrix that represents the cross product:

\[
X_1 \times X_2 = \hat{X}_1 X_2 =
\begin{bmatrix} 0 & -Z_1 & Y_1 \\ Z_1 & 0 & -X_1 \\ -Y_1 & X_1 & 0 \end{bmatrix} X_2
\]

2.2 Camera Geometry in Computer Vision

2.2.1 Ideal Pinhole Camera

In order to use cameras for control, it is necessary to relate 3D points of the scene to their 2D projections in the image. Therefore, a relation that maps 3D real-world coordinates to 2D image pixel coordinates (and vice versa) is needed.


This relation can be approximated by starting from the ideal pinhole camera model and then transforming it to the needed form.

Figure 2.1: The Pinhole Camera Model

Figure 2.1 shows the pinhole camera model. The image plane has been moved to the front of the projection center because this representation is more convenient.

This can be done without loss of generality. In this representation P is a 3D real world point and is projected to an image point, p, according to:

\[
\mathbf{x} = \begin{bmatrix} x \\ y \end{bmatrix} = \frac{f}{Z} \begin{bmatrix} X \\ Y \end{bmatrix} \tag{2.1}
\]

where f is the focal length of the camera. In preparation for the following transformations, equation (2.1) can be rewritten to include a camera matrix K_f and the projection matrix P_0:

\[
Z \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
= K_f \, P_0 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{2.2}
\]


The importance of the division into two matrices will become clear in the next section.

2.2.2 Extrinsic and Intrinsic Parameters

In the previous camera model, the origin of the world was placed at the camera center. This is often not the case for many real applications including the one under investigation in this text.

Since our cameras move with the MAV, the camera coordinate frame will obviously not coincide with the fixed real-world frame at all times. The point X_0 in the fixed world coordinate frame is transformed to the camera frame by the transformation T_{0C}. If T_C is the transformation that brings the camera frame from the world frame to its current position, then T_{0C} can be written as T_{0C} = T_C^{-1}, again using homogeneous coordinates.

Therefore, the following equation (2.3) shows the relation between the point X_C expressed in the camera frame and the point X_0 expressed in the world coordinate frame:

\[
X_C = T_{0C} X_0 = T_C^{-1} X_0 =
\begin{bmatrix} R_C^T & -R_C^T T_C \\ 0 & 1 \end{bmatrix} X_0 =
\begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} X_0 \tag{2.3}
\]

The 3x3 matrix R and the 3x1 vector T are called the extrinsic camera parameters. Combining equations (2.2) and (2.3) leads to an equation that relates image plane coordinates to 3D real-world coordinates. In that equation the depth is often replaced by a constant factor λ, because it is usually unknown: the pinhole camera model only allows a reconstruction up to a scaling factor.

\[
\lambda \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \\ 1 \end{bmatrix} \tag{2.4}
\]

Using the abbreviations introduced above, equation (2.4) can be written as

\[
\lambda \mathbf{x} = K_f \, P_0 \, T_{0C} \, X_0 \tag{2.5}
\]

In order to get usable equations, the current image plane coordinates x and y have to be mapped to image pixel coordinates, because the camera is built up of many small photo sensors of a given size and shape, each yielding the image brightness at the corresponding location.

It is therefore necessary to compensate for some effects that are not included in the ideal pinhole camera model.


This compensation has to account for three effects:

Pixels might not be square: the coordinates in x-direction are scaled by S_x and the coordinates in y-direction are scaled by S_y.

Pixels might also not be rectangular, which is compensated by the shearing factor S_θ = cot(θ).

For cameras with chips that are read line by line from left to right, the image principal point has to be shifted from the top left corner to the image center. Other transformations might be possible for other builds, but they can all be summarized by adding an offset [O_x  O_y]^T in the x- and y-directions respectively.

Summarizing all of the above, one can write the mapping from image plane coordinates to pixel coordinates as:

\[
\begin{bmatrix} x' \\ y' \end{bmatrix} =
\begin{bmatrix} S_x & S_\theta \\ 0 & S_y \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} +
\begin{bmatrix} O_x \\ O_y \end{bmatrix} \tag{2.6}
\]

With equation (2.6) and by using homogeneous coordinates again, the camera matrix K_f can be extended to the intrinsic camera matrix K:

\[
K = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} S_x & S_\theta & O_x \\ 0 & S_y & O_y \\ 0 & 0 & 1 \end{bmatrix}
= K_f \begin{bmatrix} S_x & S_\theta & O_x \\ 0 & S_y & O_y \\ 0 & 0 & 1 \end{bmatrix} \tag{2.7}
\]

Using the intrinsic camera matrix K, equation (2.5) can be updated to map from 3D real-world coordinates to pixel coordinates:

\[
\lambda \mathbf{x}' = K \, P_0 \, T_{0C} \, X_0 \tag{2.8}
\]

which can be simplified to

\[
\lambda \mathbf{x}' = P \, X_0 \tag{2.9}
\]

with

\[
P = K \, P_0 \, T_{0C} \tag{2.10}
\]

Equation (2.9) fully describes the mapping from a 3D real world point to a 2D pixel coordinate point for the ideal pinhole camera model. However, in a real camera, lenses have to be used.

Depending on the design and quality of the lenses, the resulting image will be more or less distorted.

For example, narrow field of view lenses tend to impose smaller distortions than wide field of view lenses. In order to use the images for 3D reconstruction, it is absolutely necessary to compensate for those distortions, because otherwise finding valid correspondences will be more of a gamble than a well-defined process.

Therefore, the projected image plane coordinates of a real-world point have to be transformed before multiplying them with the intrinsic camera matrix. Usually, a distortion model including radial and tangential distortions is used, and the transformation from undistorted image plane coordinates x to distorted image plane coordinates x_d is given by:

\[
\mathbf{x}_d = \mathbf{x} \left( 1 + d_1 r^2 + d_2 r^4 + d_5 r^6 \right) +
\begin{bmatrix} 2 d_3 x y + d_4 (r^2 + 2 x^2) \\ d_3 (r^2 + 2 y^2) + 2 d_4 x y \end{bmatrix} \tag{2.11}
\]

with r^2 = x^2 + y^2.

Usually, the distortion parameters are summarized in a vector D; d_5 is sometimes omitted, depending on how much accuracy is needed. To undistort an image, these equations have to be solved. Typically, this cannot be done fast and efficiently enough to run the undistortion in real time; however, it is possible to precompute a lookup table that can then be used to remap the images in real time.

Using equations (2.10) and (2.9) thus allows going from pixel coordinates to 3D real-world coordinates while compensating for lens distortions. A more detailed description can be found in [5].
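As an illustration of the remark above, the following minimal sketch (not taken from the thesis framework) shows how such a lookup table could be precomputed once with OpenCV and then applied to every incoming frame; the camera matrix K and the distortion vector D are assumed to come from an offline calibration.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Precompute the undistortion lookup table once; K and D come from calibration.
void buildUndistortMaps(const cv::Mat& K, const cv::Mat& D,
                        const cv::Size& imageSize,
                        cv::Mat& map1, cv::Mat& map2)
{
    // Identity rectification (single camera); the new camera matrix is kept equal to K.
    cv::initUndistortRectifyMap(K, D, cv::Mat(), K, imageSize,
                                CV_16SC2, map1, map2);
}

// Remap every raw frame in real time using the precomputed tables.
cv::Mat undistortFrame(const cv::Mat& raw,
                       const cv::Mat& map1, const cv::Mat& map2)
{
    cv::Mat undistorted;
    cv::remap(raw, undistorted, map1, map2, cv::INTER_LINEAR);
    return undistorted;
}
```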

2.2.3 Epipolar geometry

Geometrical relations between two camera views can be described by epipolar geometry. Using epipolar relations will later help to find correspondences, remove outliers and test for accuracy.

The geometrical relations hold for either two different views of one single camera, or for the two cameras of a stereo vision setup.

For the following explanation, the first view will be named camera 1 and the second view camera 2, with the corresponding intrinsic camera matrices K_1 and K_2.


Later in this thesis, camera 1 will be the left camera and camera 2 the right camera of the stereo setup. Figure 2.2 shows the basic setup with two camera views. In both views, the real-world point P is captured in the image at positions x_1 and x_2 respectively. From the drawing in figure 2.2 it can be seen that the projection x_2 of P in the second view can only lie on the line l_2 if the points x_1 and x_2 are indeed both projections of the same point P, and vice versa for point x_1.

Figure 2.2: Setup of two camera views with epipolar geometry

The lines l_1 and l_2 are called the epipolar lines and the points e_1 and e_2 are called epipoles. The epipoles are located at the two points where the image plane intersects the baseline between the two views. If the optical axes of the two views are parallel, this intersection lies at infinity and the epipolar lines are parallel. If we call the 3D coordinates of point P relative to the two camera frames X_1 and X_2, they are related by a rigid body transformation in the following way:

\[
X_2 = R X_1 + T \tag{2.12}
\]

Since X_i = λ_i x_i, we can rewrite equation (2.12) in terms of the image coordinates x_i and the depths λ_i as

\[
\lambda_2 \mathbf{x}_2 = \lambda_1 R \mathbf{x}_1 + T \tag{2.13}
\]

In order to eliminate the depths λ_i in the preceding equation, we take the vector product with T, i.e. we premultiply both sides by the skew-symmetric matrix \hat{T}, and obtain


\[
\lambda_2 \hat{T} \mathbf{x}_2 = \lambda_1 \hat{T} R \mathbf{x}_1 \tag{2.14}
\]

Since the vector \hat{T} x_2 = T × x_2 is perpendicular to x_2, the inner product ⟨x_2, \hat{T} x_2⟩ is zero. So, premultiplying equation (2.14) by x_2^T, and since λ_1 > 0, we obtain the epipolar constraint

\[
\mathbf{x}_2^T E \, \mathbf{x}_1 = 0 \tag{2.15}
\]

with E = \hat{T} R. E is called the essential matrix and encodes the relative pose of the camera(s) from one view to the other. Equation (2.15) uses image plane coordinates, but it can be transformed to use pixel coordinates by means of the previously introduced intrinsic camera matrices K_1 and K_2:

\[
\mathbf{x}_2'^T F \, \mathbf{x}_1' = 0 \tag{2.16}
\]

with

\[
F = K_2^{-T} \hat{T} R \, K_1^{-1} = K_2^{-T} E \, K_1^{-1} \tag{2.17}
\]

F is called the fundamental matrix and also contains the intrinsic camera properties. A more detailed description can be found in [4].
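As a small illustrative sketch (not part of the thesis framework), the fundamental matrix of equation (2.16) can be estimated directly from matched pixel coordinates with OpenCV, and the essential matrix recovered by inverting equation (2.17); the intrinsic matrices K1 and K2 are assumed to be known from calibration.

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Estimate F from matched pixel points (RANSAC), then recover E = K2^T * F * K1.
cv::Mat estimateEssential(const std::vector<cv::Point2f>& pts1,   // points in camera 1
                          const std::vector<cv::Point2f>& pts2,   // points in camera 2
                          const cv::Mat& K1, const cv::Mat& K2)
{
    // F satisfies x2'^T F x1' = 0, matching equation (2.16)
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 1.0, 0.99);
    // Invert equation (2.17): E = K2^T * F * K1
    return K2.t() * F * K1;
}
```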

2.2.4 Rectification

Stereo image rectification refers to a process where the two images of a stereo camera pair are undistorted, projected to the same plane and transformed such that their epipolar lines are parallel and exactly row-aligned. Figure 2.3 shows the four steps needed to rectify a stereo image pair:

1. Two images of the scene are taken by the right and left cameras.

2. The images are undistorted using the distortion approximation described above in section 2.2.2.

3. The transformation (R, T) that describes the relative pose of one camera with respect to the other is used to rectify the images, provided that this relation is known. In this thesis calibrated cameras are used, so it is indeed known; there is also a solution to the problem in case this transformation is unknown.

4. The images are cropped, because the warping creates curved edges that might interfere with image segmentation. After the cropping, only the parts of the image remain where no artifacts of the warping process are visible.

Figure 2.3: Schematic of the stereo rectification process: a) take images b) undistort the images c) rectify to row-aligned images projected to the same plane d) crop


2.3 ROS

One of the key components of the framework is the meta-operating system ROS. ROS is, in the words of its developers, "an open-source, meta-operating system for your robot. It provides the services you would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly-used functionality, message-passing between processes, and package management."



The main advantage of using ROS is the much improved modularity. ROS allows the user to write so-called nodes, which are programs written in either C/C++ or Python. Nodes can communicate using topics or services. The former represent data streams: for instance, camera images or the robot joint configuration or position can be modelled as topics, and topic values are typically published at a regular rate to keep the whole system up to date. The latter represent requests which are sent asynchronously, and usually at a lower rate; slower components such as a motion planning node will typically provide services.

Figure 2.4: Example of ROS nodes that communicate through topics and services
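A minimal roscpp sketch of this publish/subscribe pattern is shown below; the node name, topic names and message types are placeholders and do not correspond to the actual nodes of the framework.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <geometry_msgs/PoseStamped.h>

ros::Publisher posePub;

// Called whenever an image message arrives on the subscribed topic.
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    // ... run the pose estimation on the incoming image (omitted) ...
    geometry_msgs::PoseStamped pose;
    pose.header.stamp = msg->header.stamp;
    posePub.publish(pose);   // published even if nobody is currently listening
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "pose_estimation_node");
    ros::NodeHandle nh;
    posePub = nh.advertise<geometry_msgs::PoseStamped>("/vo/pose", 10);
    // The node keeps running even when no image is being published.
    ros::Subscriber sub = nh.subscribe("/stereo/left/image_raw", 1, imageCallback);
    ros::spin();
    return 0;
}
```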

One key advantage is that a node can be coded to receive messages, but it will also run when no such message is there to be received. On the other hand, a node can send messages without the need to know whether another node is actually receiving them.


This allows the user to add and remove nodes to the current setup at will, which makes testing, plotting, visualizing and verifying data a lot more comfortable and fast. A second advantage of using ROS is the many built-in nodes that come with the basic installation. A lot of very useful tools for robotics have been created by the ROS community, which can significantly speed up testing and/or developing new ideas. For example, the framework currently uses the ROS package stereo_image_proc to modify the camera images.

The same thing can of course be achieved by writing the code oneself, in this case using OpenCV functionality, but the implementation is already available in high quality. Other tools like the visualization tool rviz are extremely useful to debug and present results, and would probably take a very long time to develop from scratch.

ROS can also be used together with Gazebo, a multi-robot simulator for outdoor and indoor environments.


Chapter 3

Pose Estimation with Stereo Vision

This chapter will introduce the components of the framework used to estimate the pose of the UAV. For most steps within the algorithm, multiple possibilities are presented and explained on a basic level.

This chapter is intended to give the reader a fundamental understanding of the working principles of all components that have been considered, in order to prepare the reader for the following chapter, where different combinations and their results are compared.

3.1 Introduction

As is often the case, for most of the components of the algorithm there is no 'perfect' solution. In general, it is possible to estimate motion with monocular vision, as done in [8] and [9]. The advantage of such a setup is primarily the potential to save weight and power by having only one camera. Using monocular vision, it is possible to compute the essential or fundamental matrix using a 7- or 8-point algorithm on consecutive images taken by the camera.

Decomposing the matrix leads to the recovery of the translation and rotation, though the translation can only be found up to a scaling factor. In general, a unique solution can be found, but in order to get a robust recovery algorithm, the baseline between consecutive images has to be sufficiently large and the images have to be taken from two distinct vantage points. Both conditions are usually not achievable for cameras mounted on a MAV, because MAVs have very fast dynamics and require control algorithms with high update rates.

Therefore, compared to the update rate, the MAV moves rather slowly and the baseline remains relatively short.

It would be possible to simply skip frames until the baseline is large enough, but


this would again violate the constraint of a high update rate. While monocular visual servoing has been used in reality and proved to work, because of the drawbacks listed above a stereo vision setup was chosen for this thesis. Using two cameras makes it possible to recover accurate estimates of depth and motion. In addition, the recovery is done in only one timestep, instead of having to use two consecutive images.

However, there are some drawbacks of using stereo vision as well: in addition to the increased weight and power consumption, the computational burden is also increased significantly. Two images have to be acquired, and throughout the algorithm all steps have to be carried out for both images, therefore nearly doubling the effort at all of the steps. Additionally, the cameras have to be synchronized, which is not a trivial task given the required accuracy.

3.2 Algorithm Overview

This section will give a high-level overview of all components of the general testing framework and of its centerpiece, the main pose estimation algorithm. In order to test the pose estimation algorithm on the quadrotor, its correct operation was first verified in a simulated environment.

This test was performed in the ROS environment using the hector_quadrotor package, which contains packages related to modeling, control and simulation of quadrotor UAVs through Gazebo. The SIFT and SURF algorithms were taken from the OpenCV library, while the DLT and triangulation functions were developed in C++ in such a way that they can be modified if necessary.


Figure 3.1 shows how ROS is used to connect two or more nodes. This connection is called the framework in this thesis, because this is the main structure that allows modular testing.

It will always include one node which feeds images into the ROS 'universe' and the node running the main pose estimation algorithm. In addition, there are many helper nodes to save images, feed saved images back to simulate the cameras, and visualize data by plotting it, saving it to files or displaying it.

In addition, the ROS package stereo_image_proc can be connected, such that the raw camera images are fed in and the estimated poses are sent back to the ROS 'universe' for further use. For accuracy evaluation, Gazebo's ground truth is also sent to the ROS universe, so that the performance of the pose estimation algorithm can be verified. While figure 3.1 shows how the different nodes are connected, figure 3.2 shows the details of the 'main algorithm: pose estimation' block.

Figure 3.2: Pose estimation algorithm scheme. Top: Initialization to get a keyframe and store its properties. Bottom: Loop to read images, find keypoints, find correspondences with the keyframe and finally estimate the pose


This block is where the position of the MAV with respect to a keyframe is computed, so that this information, transformed into world coordinates, can then be used in the controller.

First of all, the ROS 'universe' has to be fed with images. Usually, this is done by running the camera driver node and the image preprocessing nodes that transform the ROS images into a format suitable for the OpenCV library, which is subsequently used in the main algorithm.

The images are created in Gazebo's simulated 3D environment with a given framerate and are not influenced by any other running nodes.

With the image input running, the main algorithm can be started too.

After the parameters have been tuned according to lighting conditions and noise, it will first go through a keyframe initialization and then enter the loop where the pose estimation and the position update are performed in series.

The initialization consists of the following steps (figure 3.2, top):

1. Prepare all variables for storage.

2. Read an image pair.

3. Detect features in the image pair.

4. Match the features between the left and right image. Delete all features that have no match or are considered outliers.

5. Triangulate the valid matches and get their 3D real-world coordinates.

6. Save the matched features, their properties and the resulting 3D pointcloud. This will be the keyframe reference. Until a new keyframe is taken, all movement is calculated with reference to this frame.

After acquiring the keyframe and the reference data, the main loop runs until the program is shut down. As shown in figure 3.2 (bottom), it contains the following steps:

1. Read in a new pair of images.

2. Extract features.

3. Match features between the current left image and the saved keypoints of the keyframe left image. Delete all keypoints in the current image that have not been matched to a keypoint in the keyframe or are considered outliers.

4. Use the current features and the saved keyframe pointcloud in the pose estimation routine to get the rotation matrix R and the translation vector T, which describe the current position with respect to the keyframe.

5. Update the quadrotor position and orientation with respect to the world reference system.

The camera driver and image preprocessing nodes will be presented first, in section 3.3. After that, the elements of the pose estimation algorithm are investigated.

3.3 Image Acquisition

As shown in figure 3.1, the image acquisition runs directly in the Gazebo simulator. Two cameras with a resolution of 300x300 and a baseline of 30 cm along the camera x axis are mounted on the quadrotor chassis. The left camera was set to be the origin of the camera reference system. The main functionalities and requirements for a correct acquisition are:

Start the left and right cameras with identical parameters and make sure both run at the given framerate.

Make sure that the cameras are synchronized. This means that the two cameras should ideally take images at exactly the same time. In reality, the two images should be taken with the minimum time shift achievable with the given hardware.

Make sure the memory of both cameras is read correctly, so that the two corresponding images are always paired.

Publish the pair of images in the ROS 'universe'.

Correctly subscribe the pose estimation node to the ROS 'universe' and, if necessary, convert the images into a suitable format.

3.3.1 Image Conversion

ROS uses its own sensor_msgs/Image message format for images, but this thesis uses the OpenCV library, which works with a different image format. To convert the images into a format compatible with the OpenCV functions used in the main algorithm, the cv_bridge package was used, which, as the name implies, provides a bridge between the ROS image message and the OpenCV cv::Mat format. An example of this conversion is shown in figure 3.3.


Figure 3.3: CvBridge interface between ROS and the OpenCV library
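A minimal sketch of this conversion, assuming the standard cv_bridge API, could look as follows; the callback is only illustrative and is not the framework's actual code.

```cpp
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core/core.hpp>

// Convert an incoming sensor_msgs/Image into a cv::Mat usable by OpenCV.
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv_bridge::CvImageConstPtr cvPtr;
    try {
        // toCvShare avoids a copy when the encoding already matches
        cvPtr = cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::BGR8);
    } catch (cv_bridge::Exception& e) {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    const cv::Mat& image = cvPtr->image;   // ready for OpenCV processing
    // ... feature detection, matching, etc. (omitted) ...
}
```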

3.3.2 Image Preprocessing: Rectification

As described in section 2.2.2, the images taken by the cameras are each within their own camera coordinate frame and both are distorted according to each camera's distortion coefficients.

For this thesis, the raw images are rectified in order to speed up the pose estimation algorithm. This is not mandatory: all the following steps can be performed on raw images. However, rectification not only leads to increased speed, but also allows for a good and very fast outlier removal step in the left-to-right feature matching. Therefore, as shown in figure 3.1, the raw images are sent to a preprocessing rectification node. OpenCV already offers a function called 'stereoRectify', which can be used to rectify stereo image pairs. For simplicity, this function is used in this node, though the mechanisms it relies on are exactly the same as described in section 2.2.4 and shown in figure 2.3. The output, as previously stated, consists of two images that have been transformed into the same coordinate frame (only the baseline distance in x-direction remains) and have parallel epipolar lines. For the remainder of this thesis, it is assumed that the images read into the pose estimation algorithm have already been rectified.
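A hedged sketch of how this rectification step could be implemented with OpenCV is given below; K1, D1, K2, D2 and the relative pose (R, T) are assumed to come from the stereo calibration, and the exact code used inside stereo_image_proc may differ.

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Rectify a raw stereo pair given the calibration of both cameras.
void rectifyPair(const cv::Mat& K1, const cv::Mat& D1,
                 const cv::Mat& K2, const cv::Mat& D2,
                 const cv::Mat& R,  const cv::Mat& T,
                 const cv::Size& size,
                 const cv::Mat& rawL, const cv::Mat& rawR,
                 cv::Mat& rectL, cv::Mat& rectR)
{
    cv::Mat R1, R2, P1, P2, Q;
    // Compute the rectifying rotations and the new projection matrices
    cv::stereoRectify(K1, D1, K2, D2, size, R, T, R1, R2, P1, P2, Q);

    // Precompute the remap tables and warp both images
    cv::Mat map1L, map2L, map1R, map2R;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, size, CV_16SC2, map1L, map2L);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, size, CV_16SC2, map1R, map2R);
    cv::remap(rawL, rectL, map1L, map2L, cv::INTER_LINEAR);
    cv::remap(rawR, rectR, map1R, map2R, cv::INTER_LINEAR);
}
```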


3.3.3 Feature detection

The first thing to do after reading in a pair of images is to find points of interest within them. This is a very important step, as it lays the groundwork for all the computer vision steps later in the algorithm. While a human eye can easily recognize objects in an image, even if they are partially occluded, distorted, rotated or skewed, a computer is still incapable of achieving much more than very simple object recognition.

There exist algorithms where a computer can learn the shape and characteristics of an object and then find it within images, but science is nowhere near the goal of being able to find whatever is considered 'relevant data' in arbitrary images. Feature detectors might have some or all of the following desirable properties, though it is usually better to select those that are needed rather than generally including 'as many as possible'.

It is also quite obvious that the first property of low computational cost might conflict with the other properties:

Fast computation (low computational cost).

Distinctiveness: Features should be unique and distinguishable.

Repeatability: If the algorithm is applied to the same image twice, the same features should be detected.

Invariance to scale: if the image is viewed at a different scale, the same features should be detected.

Invariance to rotation: if the image is viewed in a rotated version, the same features should be detected.

Invariance to illumination: if the image is viewed under different illuminations, the same features should be detected.

Luckily, finding objects is not really necessary for this work; it is enough to find distinctive regions of interest. For example, some detectors use corners in the image. A corner is a point in the image where image gradients in two distinct directions are present. Other detectors use more sophisticated methods to find suitable regions in the image. The following sections will introduce two interest point detection methods that have been tested in the framework. For each of them, first the functional principle and the resulting properties are presented, and after that particular details of the implementation and integration into the framework are summarized. Both of them are already available in the OpenCV library.


3.3.4 SIFT

SIFT (Scale-Invariant Feature Transform) was first introduced by Lowe in [1]. SIFT can robustly identify objects even among clutter and under partial occlusion, because the SIFT feature descriptor is invariant to uniform scaling and orientation, and partially invariant to affine distortion and illumination changes. This section summarizes Lowe's detection method. SIFT also computes a descriptor for every detected feature. To find features, the Laplacian of Gaussian (LoG) of the image is computed for various σ values. The LoG acts as a blob detector which detects blobs of various sizes as σ changes; in short, σ acts as a scale parameter. A Gaussian kernel with low σ gives a high response for small corners, while a Gaussian kernel with high σ fits larger corners well. So we can find the local maxima across scale and space, which gives a list of (x, y, σ) values, meaning that there is a potential keypoint at (x, y) at scale σ. An example of this is shown in figure 3.4.

Figure 3.4: Features of a corner at different scales

However, the LoG is somewhat costly to compute, so the SIFT algorithm uses the Difference of Gaussians (DoG), which is an approximation of the LoG. The Difference of Gaussians is obtained as the difference of the Gaussian blurring of an image with two different values of σ, say σ and kσ. This process is done for different octaves of the image in a Gaussian pyramid. Once the DoG images are found, they are searched for local extrema over scale and space. For example, one pixel in an image is compared with its 8 neighbours as well as the 9 pixels in the next scale and the 9 pixels in the previous scale.

If it is a local extremum, it is a potential keypoint, which basically means that the keypoint is best represented at that scale. This is shown in figure 3.5.


Figure 3.5: Laplacian of Gaussian and an example of a local maximum between different scales

Once potential keypoint locations are found, they have to be refined to get more accurate results.

A Taylor series expansion of the scale space is used to get a more accurate location of each extremum, and if the intensity at this extremum is less than a threshold value, it is rejected. The DoG has a higher response for edges, so edges also need to be removed. For this, a concept similar to the Harris corner detector is used.

A 2x2 Hessian matrix H is used to compute the principal curvature. We know from the Harris corner detector that for edges one eigenvalue is much larger than the other, so a simple ratio test is applied: if the ratio of the eigenvalues is greater than a threshold, the keypoint is discarded. This eliminates low-contrast keypoints and edge keypoints, and what remains are strong interest points.

The SIFT feature detector thus has the following properties:

+ Scale invariant.

+ Great repeatability.

- Slower than other algorithms (SURF, FAST, etc.).

+ It is able to handle small rotations or translations between two frames.

- For real-time use, because of its very high computational cost, it is necessary to extract a low number of features.

SIFT was implemented using the standardized framework for feature tracking included in OpenCV, which makes the whole implementation extremely easy. Detecting features is simply done by creating an instance of the SIFT feature detector and then calling the detect routine.

The inputs of the function (see figure 3.6) are the maximum number of features, which determines how many features are extracted, and of course the image in which the features are to be detected.


The function outputs a vector containing all the features that were detected, in the form of 2D OpenCV points.

Figure 3.6: Inputs and outputs of the SIFT feature and descriptor extraction function
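A minimal sketch of this usage, assuming the OpenCV 2.4 nonfree module (class names differ in later OpenCV versions), could look as follows; the maximum number of features (200) is an illustrative value, not the threshold tuned in the thesis.

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SIFT lives in the nonfree module (OpenCV 2.4)
#include <vector>

// Detect SIFT keypoints and compute their descriptors for one image.
void extractSift(const cv::Mat& image,
                 std::vector<cv::KeyPoint>& keypoints,
                 cv::Mat& descriptors)
{
    cv::SiftFeatureDetector detector(200);    // keep at most 200 keypoints
    detector.detect(image, keypoints);

    cv::SiftDescriptorExtractor extractor;    // one descriptor per keypoint
    extractor.compute(image, keypoints, descriptors);
}
```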

3.3.5 SURF

SURF (Speeded-Up Robust Features) was first introduced by Bay, Ess, Tuytelaars and Van Gool in [2].

It was designed to speed up the detection of features that have good invariance properties. This is achieved by relying heavily on integral images for image convolution. SURF includes a feature detector and a descriptor extractor. This section will introduce the SURF feature detector in order to keep a clear structure within this report.

The approach to detecting features with the SURF algorithm is a basic Hessian-matrix approximation using integral images. An integral image is defined as an image of the same size as the original image, but instead of holding a pixel intensity at every pixel location x = [x  y]^T, the value at that position is replaced by the sum of all pixels within a rectangle spanning the region from the origin to x:


\[
I_\Sigma(\mathbf{x}) = \sum_{i=0}^{i \le x} \sum_{j=0}^{j \le y} I(i, j) \tag{3.1}
\]

Once the integral image is computed, calculating the sum of intensities over any rectangle only takes three additions, which drastically reduces the computation time needed for this operation. The detector is based on the computation of a Hessian matrix because of its good accuracy, and because its determinant can be used to select the scale. Given a point x in the image I, the Hessian matrix H(x, σ) at scale σ is defined as:

\[
H(\mathbf{x}, \sigma) =
\begin{bmatrix}
L_{xx}(\mathbf{x}, \sigma) & L_{xy}(\mathbf{x}, \sigma) \\
L_{yx}(\mathbf{x}, \sigma) & L_{yy}(\mathbf{x}, \sigma)
\end{bmatrix} \tag{3.2}
\]

where L_{xx}(x, σ) is the convolution of the second-order Gaussian derivative ∂²g(σ)/∂x² with the image I at point x.


When using SURF, features should be located at different scales. Usually, this is done using image pyramids, and a similar structure is also used in the SURF implementation.

However, due to the use of integral images, the image is not downsampled as it usually is in the pyramidal search, but up-scaled.

Using integral images allows this to be done at constant cost, and therefore much faster than the usual pyramidal down-sampling.

After creating the pyramid, the best features have to be localized, both in the image and over scale. For this, a non-maximum suppression in a 3x3x3 space in the image and the scale is applied. The SURF feature detector thus has the following properties:

+ Scale invariant.

+ Great repeatability.

+ Faster than other comparable algorithms (e.g. SIFT, Hessian-Laplace, Harris-Laplace, etc.).

- For small movements between successive pairs of images, the invariances included in SURF are actually superfluous. Since the movement between two frames is so small, non-rotation-invariant detectors will mostly be able to handle the small rotations that can take place in the brief timespan between the two pairs of images.

- When a large number of features is extracted, this algorithm takes too much computational time and becomes unusable in real-time applications.

SURF was implemented using the previously introduced OpenCV framework. Detecting features is simply done by creating an instance of the SURF feature detector and then calling the detect routine.

The first input of the function (see figure 3.8) is the Hessian threshold, which is strongly correlated with the number of features detected and extracted.

The second input is of course the image in which the features are to be detected. The function outputs a vector containing all the features that were detected, in the form of 2D OpenCV points.


Figure 3.8: Inputs and outputs of the SURF feature and descriptor extraction function.
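A minimal sketch of this usage, again assuming the OpenCV 2.4 nonfree module, could look as follows; the Hessian threshold value (400) is illustrative, and lowering it yields more keypoints.

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SURF lives in the nonfree module (OpenCV 2.4)
#include <vector>

// Detect SURF keypoints and compute their descriptors for one image.
void extractSurf(const cv::Mat& image,
                 std::vector<cv::KeyPoint>& keypoints,
                 cv::Mat& descriptors)
{
    cv::SurfFeatureDetector detector(400.0);  // Hessian threshold
    detector.detect(image, keypoints);

    cv::SurfDescriptorExtractor extractor;
    extractor.compute(image, keypoints, descriptors);
}
```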

3.4 Finding correspondences

After extracting features in the images, correspondences between two sets of such features have to be detected. This has to be done between the left and right image of one frame, as well as between the current frame and the keyframe. In this thesis, a descriptor-based matching approach has been used for both left-right and current-frame-to-keyframe matching.

It uses, as its name suggests, descriptors in order to find the corresponding features in two sets. For every feature a descriptor is computed.

This happens in both images independently, which requires a feature detector with good repeatability. In this case both SIFT and SURF can compute a descriptor for every detected feature.

The advantage of using descriptors is that choosing a smart descriptor can have a positive effect on the speed and accuracy of the matching process, might allow invariances to be included and can enable the use of smart matching techniques. The drawback of using descriptors is that it takes some computational effort to compute them.


The matcher used in this framework is the brute-force matcher. It takes the descriptor of one feature in the first set and matches it against all features in the second set using some distance measure, and the closest one is returned. It can be used with several distance measures; here the L2 norm was used:

\[
\operatorname*{argmin}_{j} \; \left\| d_i - d_j \right\|_2 \tag{3.3}
\]

The matcher also returns the distance between the matched features, which will be used for an initial outlier rejection. An example of this matcher is shown in figure 3.9.

Figure 3.9: Straightforward way of matching four descriptors: just test them all in a brute-force approach and select the best match for each
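A minimal sketch of this matching step, assuming OpenCV's BFMatcher with the L2 norm, is shown below; the rejection factor used for the initial outlier removal (twice the minimum distance) is an illustrative choice, not the value tuned in the thesis.

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <algorithm>
#include <vector>

// Brute-force match two descriptor sets, then drop matches whose distance
// is far from the best one (initial outlier rejection).
std::vector<cv::DMatch> matchDescriptors(const cv::Mat& desc1, const cv::Mat& desc2)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);        // best match for every descriptor in desc1

    double minDist = 1e9;
    for (size_t i = 0; i < matches.size(); ++i)
        minDist = std::min(minDist, (double)matches[i].distance);

    std::vector<cv::DMatch> good;
    for (size_t i = 0; i < matches.size(); ++i)
        if (matches[i].distance <= 2.0 * minDist)  // illustrative rejection factor
            good.push_back(matches[i]);
    return good;
}
```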

3.5 Triangulation

Once two corresponding points in the two images have been located, the 3D real world position of them can be reconstructed through triangulation.

At this point in the algorithm, a set of features in the left image and a set of features in the right image, including a correspondence map, have been computed. As introduced in section 2.2, a 3D point X is projected to pixel coordinates x_l and x_r in the left and right images respectively according to

\[
\lambda \mathbf{x}_l = \begin{bmatrix} K_l R_l & K_l T_l \end{bmatrix} X = P_l \, X \tag{3.4}
\]

and analogously for x_r with P_r.

For this thesis, it is assumed that the coordinate frame of the left camera is the reference frame. This does not influence the math, and depending on the application another choice might be better suited.

Since the left camera frame is the reference frame, R_l is equal to the identity matrix I and T_l is equal to zero, while R_r and T_r represent the known transformation between the left camera and the right camera.

Figure 3.10: General triangulation scheme

To triangulate a point, equation (3.4) can be used in the following way:

\[
\begin{bmatrix} \mathbf{x}_l & -R \mathbf{x}_r \end{bmatrix}
\begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} = T \tag{3.5}
\]

\[
\begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} = H^{\dagger} T \tag{3.6}
\]

where H = [x_l  -R x_r]. By calculating the pseudo-inverse H^† for each pair of matched points, it is possible to recover their depths along the z axis.

Because the variables used in this equation are not known perfectly, a plausibility check is usually useful to get rid of outliers.

It is often possible to define rules that limit the 3D real-world space in which the features should be located. For example, it is already helpful to exclude every triangulated point with a negative z-coordinate, since in reality all points need to lie in front of the camera.


The acceptable space for points can be further specified if the setup allows it; for example, using knowledge about the cameras' fields of view and the baseline between them allows an acceptable volume to be defined. Additional information about the task can also help to further refine and minimize the volume. All points triangulated to locations outside this volume are then considered outliers and discarded.
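A minimal sketch of the depth recovery of equations (3.5) and (3.6) is given below; xl and xr are assumed to be the homogeneous, rectified image-plane coordinates of a matched pair (3x1 CV_64F matrices), and (R, T) the known transformation between the two cameras. This is an illustration, not the thesis's exact implementation.

```cpp
#include <opencv2/core/core.hpp>

// Solve [xl  -R*xr] * [Z1; Z2] = T in the least-squares sense (pseudo-inverse)
// and return the 3D point in the left-camera frame.
cv::Point3d triangulatePoint(const cv::Mat& xl, const cv::Mat& xr,
                             const cv::Mat& R,  const cv::Mat& T)
{
    cv::Mat H(3, 2, CV_64F);
    xl.copyTo(H.col(0));
    cv::Mat rightCol = -(R * xr);
    rightCol.copyTo(H.col(1));

    cv::Mat Z;
    cv::solve(H, T, Z, cv::DECOMP_SVD);          // least-squares via SVD
    const double Z1 = Z.at<double>(0, 0);        // depth of the point in the left frame

    // Negative depths would be rejected as outliers by the plausibility check.
    return cv::Point3d(Z1 * xl.at<double>(0, 0),
                       Z1 * xl.at<double>(1, 0),
                       Z1 * xl.at<double>(2, 0));
}
```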

3.6 Pose estimation

The last step of the pose estimation algorithm is to use the previously calculated 3D pointclouds and to actually estimate the current pose with respect to the keyframe pose. In this section, a relatively simple algorithm to determine the relative pose between a set of 3D points and a set of 2D features is presented. If the data were perfect, this estimation would work well.

However, in the real world an algorithm has to be able to handle outliers and other imperfections. Thus, in the second part a method to build a robust estimator is presented.

3.6.1 DLT

Given a set of 3D points and a set of 2D features, an algorithm to compute the transformation between them is the DLT (Direct Linear Transform).

The proposed solution computes the rotation R and the translation T, and it also compensates for degenerate configurations where a rotation might be mistakenly identified as a mirror operation.

Rewriting the projection matrix P in the following way

\[
P = \begin{bmatrix} \mathbf{p}^{1T} & p_{14} \\ \mathbf{p}^{2T} & p_{24} \\ \mathbf{p}^{3T} & p_{34} \end{bmatrix} \tag{3.7}
\]

\[
\mathbf{x}_i = P \, X_i =
\begin{bmatrix} \mathbf{p}^{1T} X_i + p_{14} \\ \mathbf{p}^{2T} X_i + p_{24} \\ \mathbf{p}^{3T} X_i + p_{34} \end{bmatrix} \tag{3.8}
\]

and left-multiplying by the skew-symmetric matrix of the feature vector x_i = (u_i, v_i, 1)^T:

\[
\mathbf{x}_i \times P X_i =
\begin{bmatrix} 0 & -1 & v_i \\ 1 & 0 & -u_i \\ -v_i & u_i & 0 \end{bmatrix} P X_i =
\begin{bmatrix}
v_i \mathbf{p}^{3T} X_i + v_i p_{34} - \mathbf{p}^{2T} X_i - p_{24} \\
\mathbf{p}^{1T} X_i + p_{14} - u_i \mathbf{p}^{3T} X_i - u_i p_{34} \\
u_i \mathbf{p}^{2T} X_i + u_i p_{24} - v_i \mathbf{p}^{1T} X_i - v_i p_{14}
\end{bmatrix} = 0 \tag{3.9}
\]

which can be rewritten as a system A p = 0, where each correspondence contributes the two rows

\[
A_i = \begin{bmatrix} \mathbf{0}^T & \tilde{X}_i^T & -v_i \tilde{X}_i^T \\ \tilde{X}_i^T & \mathbf{0}^T & -u_i \tilde{X}_i^T \end{bmatrix}, \qquad
\mathbf{p} = \begin{bmatrix} \mathbf{p}^{1T} & p_{14} & \mathbf{p}^{2T} & p_{24} & \mathbf{p}^{3T} & p_{34} \end{bmatrix}^T \tag{3.10}
\]

with \tilde{X}_i = (X_i^T, 1)^T the homogeneous 3D point.

Each point correspondence gives rise to two independent equations in the entries of p. We seek a non-zero solution p, since the obvious solution p = 0 is of no interest. Ideally, A has a 1-dimensional null space which provides a solution for p; such a solution can only be determined up to a non-zero scale factor, which may be fixed arbitrarily by a requirement on its norm such as ||p|| = 1. Using a minimum set of 6 point correspondences, the vector p can be computed. Because of inexact measurements of the image coordinates (generally termed noise), in practice there will be no exact solution apart from the zero one.

Instead of demanding an exact solution, one attempts to find an approximate solution.

This is given by the unit eigenvector of the matrix A^T A associated with its smallest eigenvalue.

The projection matrix P is then built from the vector p. The rotation matrix R is computed with a QR decomposition of the matrix formed by the first 3 rows and columns of the projection matrix.

By checking det(R), the result can be tested for the mistake of returning a reflection. The translation T is computed from the last column of the projection matrix.
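A minimal sketch of the linear part of the DLT (building A and extracting its approximate null space, following equation (3.10) up to an irrelevant sign) is given below; the decomposition of P into R and T and the det(R) check are omitted, and the function name is illustrative.

```cpp
#include <opencv2/core/core.hpp>
#include <vector>

// Build the 2n x 12 DLT matrix from 3D-2D correspondences and recover the
// 3x4 projection matrix P as the right singular vector of the smallest
// singular value of A (at least 6 correspondences are needed).
cv::Mat dltProjection(const std::vector<cv::Point3d>& X,
                      const std::vector<cv::Point2d>& x)
{
    const int n = (int)X.size();
    cv::Mat A = cv::Mat::zeros(2 * n, 12, CV_64F);
    for (int i = 0; i < n; ++i) {
        const double Xh[4] = { X[i].x, X[i].y, X[i].z, 1.0 };
        const double u = x[i].x, v = x[i].y;
        for (int k = 0; k < 4; ++k) {
            // row 2i:   [ 0^T     Xh^T   -v*Xh^T ]
            A.at<double>(2 * i,     4 + k) =  Xh[k];
            A.at<double>(2 * i,     8 + k) = -v * Xh[k];
            // row 2i+1: [ Xh^T    0^T    -u*Xh^T ]
            A.at<double>(2 * i + 1,     k) =  Xh[k];
            A.at<double>(2 * i + 1, 8 + k) = -u * Xh[k];
        }
    }
    cv::SVD svd(A);                          // null-space approximation
    cv::Mat p = svd.vt.row(11).t();          // unit-norm 12x1 solution vector
    return p.reshape(1, 3);                  // reshape into the 3x4 matrix P
}
```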

3.6.2 Robust Approach

In order to gain robustness against outliers, a popular method is the so-called RANSAC estimator (RANdom SAmple Consensus), which works as follows:

1. Randomly select a given number of data points from the given contaminated set.

2. Use those points to compute a hypothesis of the solution to the problem that needs to be solved (here: compute R and T using the least-squares optimization).

3. Apply the hypothesis to all data points, use a threshold to exclude points that are deemed outliers and calculate the average error of all inliers.

4. If the average error is smaller than the currently lowest average error, save the hypothesis and the average error as the new best solution.

5. Repeat the above steps until a stopping criterion is fulfilled.

6. After stopping, use the best hypothesis found as the final solution.

A mathematical description of RANSAC can be found in [6]. An example of outliers in the left-right image correspondences is shown in figure 3.11.

Figure 3.11: Outliers in the left-right image correspondences

The RANSAC estimator basically minimizes a cost function

\[
C = \sum_i \rho(e_i^2)
\]

where T is the inlier threshold value, e_i^2 is the squared reprojection error and ρ(·) is given by

\[
\rho(e^2) =
\begin{cases}
e^2 & e^2 < T^2 \\
\text{constant} & e^2 \ge T^2
\end{cases} \tag{3.11}
\]

Thus, inliers are penalized by their reprojection error while outliers are penalized by a constant value. This works well only if the threshold T is chosen correctly: if it is chosen too high, some outliers are not detected (they do not penalize the cost function enough) and the resulting estimate can be poor, while if it is chosen too low, many valid correspondences are rejected as outliers.
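As an illustration (and not the thesis's own implementation), OpenCV bundles exactly this hypothesize-and-verify scheme around its 3D-2D pose solvers in solvePnPRansac; the parameter values below are placeholders, with the reprojection-error threshold playing the role of T.

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Robust 3D-2D pose estimation: keyframe pointcloud vs. current image features.
void robustPose(const std::vector<cv::Point3f>& objectPoints,  // keyframe 3D points
                const std::vector<cv::Point2f>& imagePoints,   // matched current features
                const cv::Mat& K, const cv::Mat& D,
                cv::Mat& rvec, cv::Mat& tvec,
                std::vector<int>& inliers)
{
    cv::solvePnPRansac(objectPoints, imagePoints, K, D, rvec, tvec,
                       false,    // no extrinsic guess
                       200,      // RANSAC iterations
                       3.0,      // inlier threshold T, in pixels
                       100,      // early-exit inlier count
                       inliers); // indices of the correspondences kept as inliers
}
```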


3.6.3 Pose refinement

The pose estimate computed with the DLT, even with the robust RANSAC approach, may be far from the ground truth. This can result in a faster drift accumulation. Possible causes of this may be:

A low number of inliers before the DLT estimate.

Outliers that have not been eliminated.

An existing solution that can resolve this problem is Bundle Adjustment. It was originally conceived in the field of photogrammetry during the 1950s and has increasingly been used by computer vision researchers during recent years. Bundle adjustment boils down to minimizing the reprojection error between the image locations of observed and predicted image points, which is expressed as the sum of squares of a large number of nonlinear, real-valued functions. Thus, the minimization is achieved using nonlinear least-squares algorithms.

Of these, Levenberg-Marquardt has proven to be one of the most successful due to its ease of implementation and its use of an effective damping strategy that lends it the ability to converge quickly from a wide range of initial guesses. By iteratively linearizing the function to be minimized in the neighborhood of the current estimate, the Levenberg-Marquardt algorithm involves the solution of linear systems known as the normal equations.

When solving the minimization problems arising in the framework of bundle adjustment, the normal equations have a sparse block structure owing to the lack of interaction among parameters for different 3D points and cameras. A more detailed description can be found in [3]. The main drawback is that it requires a lot of computational time. The solution proposed in this thesis is to use a technique similar to the one used in bundle adjustment, but minimizing only the camera pose of the current frame. Therefore, a Levenberg-Marquardt based pose refinement algorithm was used. It optimizes the parameters β of the model f(x, β) so that the sum of the squared deviations

\[
S(\beta) = \sum_{i=1}^{m} \left[ y_i - f(x_i, \beta) \right]^2 \tag{3.12}
\]

becomes minimal. It is an iterative procedure: the minimization starts from an initial guess for β, and at each iteration the estimate becomes β + δ. The functions f(x_i, β + δ) are approximated by their linear approximations

\[
f(x_i, \beta + \delta) \approx f(x_i, \beta) + J_i \, \delta \tag{3.13}
\]

where

\[
J_i = \frac{\partial f(x_i, \beta)}{\partial \beta} \tag{3.14}
\]

is the gradient of f with respect to β. The first-order approximation gives

\[
S(\beta + \delta) \approx \left\| y - f(\beta) - J \delta \right\|^2 \tag{3.15}
\]

Taking the derivative with respect to δ and setting the result to zero gives

\[
(J^T J + \lambda I)\,\delta = J^T \left[ y - f(\beta) \right] \tag{3.16}
\]

where λ is a damping factor that is adjusted at each iteration. Here the model used is the reprojection error, which must be minimized with respect to position and orientation. With a correct initialization of the damping factor λ, a fast refinement of the pose estimated with the DLT is possible.

An example of error minimization is shown in figure 3.12.
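A minimal sketch of a single update step of equation (3.16) is shown below; building the Jacobian J and the residual vector r for the actual reprojection model, and the damping-adjustment logic around this step, are omitted.

```cpp
#include <opencv2/core/core.hpp>

// One Levenberg-Marquardt update: solve (J^T J + lambda*I) delta = J^T r,
// where J is the Jacobian of the reprojection residuals r with respect to
// the 6 pose parameters. The returned delta is applied only if it reduces the cost.
cv::Mat lmStep(const cv::Mat& J, const cv::Mat& r, double lambda)
{
    cv::Mat JtJ = J.t() * J;
    cv::Mat A = JtJ + lambda * cv::Mat::eye(JtJ.rows, JtJ.cols, JtJ.type());
    cv::Mat rhs = J.t() * r;

    cv::Mat delta;
    cv::solve(A, rhs, delta, cv::DECOMP_CHOLESKY);   // normal equations
    return delta;                                     // pose update beta <- beta + delta
}
```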


3.7 Keyframe change

Another important issue is the keyframe change. This choice can lead to a lower or greater drift accumulation. However, the keyframe change has several problems:

Changing it in the presence of a low number of feature correspondences between the keyframe and the current frame can decrease the accuracy of the DLT estimate.

Changing it frequently may increase the computational time and prevents the DLT from estimating all of the vehicle's dynamics.

Taking a new keyframe from a new pair of images may result in a low number of 3D points, which brings back the first two problems.

A possible solution to this problem is proposed here: store the information of old frames in a LIFO list and, when the number of correspondences drops below a threshold, promote as new keyframe the stored frame with the greatest number of feature correspondences with the current frame.


The choice of the best frame is the solution of the optimization problem

\[
NewKeyframe = \operatorname*{argmax}_{j} \; Matcher(\mathbf{x}_{CurrentFrame}, \mathbf{x}_j) \tag{3.17}
\]

Finding the solution to this problem can improve the estimate's accuracy; in particular it leads to:

1. A reduction of the computational time, since features need to be extracted only from a single image (the other features are already stored).

2. A reduction of the risk of extracting too few features.

3. A reduction of the risk of having too few inliers before the DLT estimate.

The optimal threshold was found with experimental tests.
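A minimal sketch of this keyframe-promotion rule, equation (3.17), is given below; StoredFrame and the descriptor-distance threshold are illustrative placeholders, not the thesis's actual data structures or tuned values.

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Placeholder for the data stored per past frame (descriptors, keypoints,
// pointcloud, pose, ...); only the descriptors are used here.
struct StoredFrame {
    cv::Mat descriptors;
};

const float kMaxDescriptorDistance = 0.25f;   // illustrative rejection threshold

// Return the index of the stored frame with the most good matches against
// the current frame's descriptors (the argmax of equation (3.17)).
int selectNewKeyframe(const cv::Mat& currentDescriptors,
                      const std::vector<StoredFrame>& stored)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    int best = -1;
    size_t bestCount = 0;
    for (size_t j = 0; j < stored.size(); ++j) {
        std::vector<cv::DMatch> matches;
        matcher.match(currentDescriptors, stored[j].descriptors, matches);
        size_t good = 0;
        for (size_t k = 0; k < matches.size(); ++k)
            if (matches[k].distance < kMaxDescriptorDistance)  // reject weak matches
                ++good;
        if (good > bestCount) {
            bestCount = good;
            best = (int)j;
        }
    }
    return best;   // index of the frame to promote as the new keyframe
}
```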


Another very important parameter to tune is the feature threshold, since different values make the algorithm change keyframe at different times.

This directly influences the accuracy and performance of the pose estimation, since a certain number of correspondences between frame and keyframe is generally needed to get a good and robust result, and to ensure that the already small workspace is not further limited by not having enough features distributed over the entire image.

One also needs to keep in mind that in the matching step quite a large percentage (up to about 50%) of the initially detected features are considered outlier matches and removed. On top of that, the pose estimation algorithm again removes outliers due to the maximum allowed error, and again this removal can target up to 50% of the remaining points in some cases. Therefore, when tuning the feature threshold to an initial number, it is important not to forget that many of the features might not be considered inliers throughout the algorithm.


Chapter 4

Comparison of Approaches

4.1 Comparison between SIFT and SURF

In the previous chapter the algorithm to estimate the pose of the quadrotor has been presented, and for the feature detection two solutions have been introduced, namely SIFT and SURF. In this section these two approaches will be compared to find the best in terms of estimation accuracy. The comparison was done in a simulated scenario on the same path.

Figure 4.1: Roll and pitch angles with SIFT and SURF


Figure 4.1 shows the results for the estimates of the roll and pitch angles in the Gazebo simulated environment. For each angle, the two estimates obtained with SIFT and SURF are shown.

Figure 4.2: Yaw angle and X, Y and Z translations with SIFT and SURF

Figure 4.2 shows the results for the estimates of the yaw angle and the X, Y and Z translations with the two approaches introduced previously.


Looking at the figures, it can be seen that SURF's estimates are less accurate and noisier. As further evidence, the Rank Correlation index is also computed for both of them as follows:

\[
\rho = \frac{\sum_{i,j=1}^{n} a_{ij} b_{ij}}{\sqrt{\sum_{i,j=1}^{n} a_{ij}^2 \, \sum_{i,j=1}^{n} b_{ij}^2}} \tag{4.1}
\]


4.2 Comparison between SIFT and SIFT with LM refinement

As said previously, a possible technique for improving accuracy is the application of a Levenberg-Marquardt algorithm using the estimated pose as initial guess. In this way it should be possible to reduce the reprojection error, improving the relative pose with respect to the keyframe. The aim of this section is to verify whether this method really improves the estimate.

For the comparison, the estimates obtained with SIFT and with an LM refinement using the SIFT estimate as initial guess have been used. The comparison was done on the same path used previously.

Figure 4.4 shows the results for the estimates of the roll and pitch angles in the Gazebo simulated environment. For each angle, the two estimates obtained with SIFT and SIFT with LM refinement are shown.

Figure 4.4: Roll and pitch angles with SIFT and SIFT with LM

Figure 4.5 shows the results for the estimates of the yaw angle and the X, Y and Z translations with the two approaches introduced previously.


Figure 4.5: Yaw angle and X, Y and Z translations with SIFT and SIFT with LM

Looking at the figures, it can be seen that the refinement locally improves accuracy but, because of the presence of outliers that have not been removed, it generates large error spikes. If these spikes occur at the instant of a keyframe change, they cause a large drift accumulation.


As previously done, the Rank Correlation index has been computed for both of them.

In all six degrees of freedom, SIFT's Rank Correlation index is slightly higher.

Figure 4.6: Rank correlation index between SIFT and SIFT with LM estimates


4.3 Comparison between different feature thresholds

In this section, a comparison between SIFT estimates obtained with several feature thresholds has been carried out.

Figure 4.7 shows the Rank Correlation index with respect to the threshold.

Figure 4.7: Rank correlation index of the SIFT estimate with respect to the feature threshold

As can be seen in the figure, a possible optimal choice lies in the range between 80 and 100. A higher threshold does not improve accuracy but increases the computational time, because the keyframe is changed more frequently.


4.4 Real data results

Using the results of the previous simulations, namely the SIFT feature extractor and a threshold value of 80, the proposed algorithm was tested on the EuRoC dataset. It is an industrial scenario with a 60-metre path length over 2800 data samples. As in the previous results, the estimate follows the real pose up to drift.


Chapter 5

Conclusions and future works

5.1 Conclusion

In this master thesis, a framework to test visual servoing strategies has been built up, and the different components and approaches that exist have been investigated. The framework and the literature research should serve as a solid basis to thoroughly test all possible solutions to the given task of industrial inspection with a MAV.

The tests performed and presented in the previous chapter showed that there is at least one approach, namely SIFT features, that is capable of solving the task in terms of accuracy. Another approach (SURF) can be eliminated from consideration because of its noisier and lower accuracy. A third option, where a real-time refinement of position and orientation is combined with SIFT, can improve local accuracy but, in the presence of even a small number of outliers, generates large error peaks. In addition, SIFT was tested with several values of the feature threshold. As shown, there is an optimal range between 80 and 100: a lower threshold produces worse estimates, while a threshold greater than 100 does not improve accuracy but increases the computational time, due to the higher frequency of keyframe changes.

However, it will be interesting to keep an eye on the developments of those algorithms in the future.


5.2 Future works

Throughout the work on this thesis, many possibilities could not be explored because of time constraints. When this project is continued, among the possible algorithms that could improve the results there is loop detection [7].

It can recognize whether the current features have already been seen previously, partially correcting the estimate. Some improvements that can result are:

1. Detecting a loop can reduce uncertainty when a large drift has been accumulated.

2. Loops can be found by evaluating the similarity between the current camera image and past camera images.

3. Visual similarity can be computed using image descriptors.

Figure 5.1 shows the improvements related to a good loop detection.


Bibliography

[1] D. G. Lowe, "Object Recognition from Local Scale-Invariant Features", Proceedings of the International Conference on Computer Vision, 1999.

[2] H. Bay, A. Ess, T. Tuytelaars and L. Van Gool, "SURF: Speeded Up Robust Features", Computer Vision and Image Understanding, 2008.

[3] B. Triggs, P. McLauchlan, R. Hartley and A. Fitzgibbon, "Bundle Adjustment - A Modern Synthesis", Proceedings of the International Workshop on Vision Algorithms, 1999.

[4] R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2004.

[5] Y. Ma, S. Soatto, J. Kosecka and S. Sastry, An Invitation to 3-D Vision: From Images to Geometric Models, Springer, 2003.

[6] M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography", 1981.

[7] A. Angeli, D. Filliat, S. Doncieux and J.-A. Meyer, "A Fast and Incremental Method for Loop-Closure Detection Using Bags of Visual Words", IEEE Transactions on Robotics, 2008.

[8] C. Forster, M. Pizzoli and D. Scaramuzza, "SVO: Fast Semi-Direct Monocular Visual Odometry", IEEE International Conference on Robotics and Automation (ICRA), 2014.

[9] G. Klein and D. Murray, "Parallel Tracking and Mapping for Small AR Workspaces", IEEE and ACM International Symposium on Mixed and Augmented Reality, 2007.
