An Autonomous System for the Evaluation of a Robot Tool Center Point using Computer Vision

N/A
N/A
Protected

Academic year: 2021

Condividi "An Autonomous System for the Evaluation of a Robot Tool Center Point using Computer Vision"

Copied!
76
0
0

Testo completo


UNIVERSITÀ DEGLI STUDI DI PADOVA
DIPARTIMENTO DI INGEGNERIA DELL'INFORMAZIONE

MASTER'S DEGREE THESIS IN COMPUTER ENGINEERING

An Autonomous System for the Evaluation of a Robot Tool Center Point using Computer Vision


Abstract

The main purpose of this thesis is to study a method for finding the geometric relationship between the TCP (tool center point) of a tool attached to the robot flange (the last axis) and the robot itself using a camera.

The proposed method requires the rotation matrix between the robot and camera reference systems, and an initial tool positioning performed with the robot's manual controls so that the TCP is detectable by the camera.

The image processing algorithms were designed to find the TCP on the image of any symmetric tool. Once the TCP has been identified on the image, the method for finding the geometric relationship between the robot and the three-dimensional TCP can be applied to any tool.

From a theoretical point of view, the computed TCP is not an approximation: any error is due to intrinsic system errors such as camera discretization, robot pose accuracy and the accuracy of the image processing algorithms.


Contents

1 Introduction
Background
Objectives
State of the art
1.3.1 Electronic calibration systems
1.3.2 Camera based calibration systems
System setup
Organisation of the document

2 Robotics
KUKA Agilus 1100 sixx
2.1.1 Description
2.1.2 Technical data
Euclid Labs Simulator
Coordinate Systems
2.3.1 Tool Center Point
2.3.2 Robot Base frame
2.3.3 Wrist frame
2.3.4 Tool frame

3 Image processing
Pinhole camera model
Image processing algorithms
3.2.1 Grayscale histogram
3.2.2 Binary image
3.2.3 Gaussian filter
3.2.4 Sobel operator
3.2.5 Canny's Edge Detection
3.2.6 Hough transform
3.2.7 Thinning
3.2.8 Homotopic skeleton detection
Key point detection
3.3.1 Tool direction
3.3.2 TCP localization
3.3.3 TCP optimisation

4 TCP calibration
Tool center point representation
Minimum number of equations
Camera Base calibration
4.3.1 Kuka base calibration
4.3.2 Base calibration using camera images
TCP positioning with camera
TCP computation
4.5.1 Least squares method
4.5.2 Analytic solution
Error analysis

5 Results
Image resolution
TCP accuracy
TCP precision

6 Conclusions
Results
Future Works

A. Appendix
A.1 Least squares method with normal equations


List of Figures

Figure 1.1 – Euclid Labs logo
Figure 1.2 – Leoni TCP calibration system
Figure 2.1 – Kuka Agilus 1100 sixx
Figure 2.2 – Kuka Agilus 1100 sixx size and joint values
Figure 2.3 – Euclid Labs simulator
Figure 2.4 – Robot base frame
Figure 2.5 – Wrist frame
Figure 2.6 – Tool frame
Figure 3.1 – Pinhole camera model
Figure 3.2 – Point projection on image plane
Figure 3.3 – Histogram
Figure 3.4 – Binary image
Figure 3.5 – Edges found with Canny's edge detection
Figure 3.6 – Hough transform line representation
Figure 3.7 – Edge image
Figure 3.8 – Lines found with Hough transform
Figure 3.9 – Source image
Figure 3.10 – Homotopic skeleton
Figure 3.11 – Source image
Figure 3.12 – Wrong skeleton detection
Figure 3.13 – Correct skeleton detection
Figure 3.14 – Line direction
Figure 3.15 – Source image
Figure 3.16 – Wrong tool direction
Figure 3.17 – Source image
Figure 3.18 – Skeleton direction found
Figure 3.19 – Keypoint without TCP optimisation
Figure 3.20 – Keypoint with TCP optimisation
Figure 4.1 – Axis-angle representation
Figure 4.2 – Rotation between two coordinate systems
Figure 5.1 – Accuracy x y axis
Figure 5.2 – Accuracy y z axis
Figure 5.3 – Accuracy x z axis
Figure 5.4 – Accuracy x y z axis
Figure 5.5 – Accuracy x axis
Figure 5.6 – Accuracy y axis
Figure 5.7 – Accuracy z axis
Figure 5.8 – Accuracy – distance of each $TCP_x$ found from $TCP_{mean_x}$
Figure 5.9 – Accuracy – distance of each $TCP_y$ found from $TCP_{mean_y}$
Figure 5.10 – Accuracy – distance of each $TCP_z$ found from $TCP_{mean_z}$
Figure 5.11 – Accuracy – distances histogram from the mean point $TCP_{mean}$
Figure 5.12 – Distance of each $TCP_y$ found from $TCP_{mean_y}$
Figure 5.13 – Precision x y axis
Figure 5.14 – Precision y z axis
Figure 5.15 – Precision x z axis
Figure 5.16 – Precision x y z axis
Figure 5.17 – Precision x axis
Figure 5.18 – Precision y axis
Figure 5.19 – Precision z axis
Figure 5.20 – Precision – distance of each $TCP_x$ found from $TCP_{mean_x}$
Figure 5.21 – Precision – distance of each $TCP_y$ found from $TCP_{mean_y}$
Figure 5.22 – Precision – distance of each $TCP_z$ found from $TCP_{mean_z}$
Figure 5.23 – Precision – distances histogram from the mean point $TCP_{mean}$

1 INTRODUCTION

BACKGROUND

Robots delivered by the manufacturer are usually calibrated during production. The calibration procedure makes the position and orientation of each axis known during a movement.

Any tool can be attached to the last axis (the tool flange). A point called the TCP (tool center point) has to be calibrated in order to move the robot with respect to the assembled tool: it is the origin of a reference system representing the tool position and orientation. A different TCP can be defined for each tool, meaning that when the tool is changed the corresponding calibration has to be performed. Furthermore, while the robot is moving it could collide with something, thus requiring a new TCP calibration.

Typically each robot has a manual procedure to calibrate the TCP. It can be computed using the manual move controls to reach four different flange poses while keeping the TCP on the same point: for each combination of three of the four flange poses, a sphere whose center should be the TCP is computed. The TCP estimate is then the point that minimizes the distance from each sphere center found. The result of this method depends on the operator's skill in positioning the robot in the different poses while keeping the TCP on the same point. Usually the achievable accuracy is around 0.5 mm.
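Since the recorded flange positions are all equidistant from the fixed TCP point, the geometric core of this manual procedure is a sphere fit. The following sketch (my own illustration, not code from the thesis) recovers the sphere center from four or more recorded positions by eliminating the quadratic term between pairs of distance equations:

```python
import numpy as np

def sphere_center(points: np.ndarray) -> np.ndarray:
    """Center of the sphere through N >= 4 points (an (N, 3) array):
    subtracting the distance equations |c - p_i|^2 = r^2 pairwise removes
    the quadratic term and leaves a linear system in the center c."""
    p0 = points[0]
    A = 2.0 * (points[1:] - p0)                        # rows: 2 (p_i - p_0)
    b = (points[1:] ** 2).sum(axis=1) - (p0 ** 2).sum()  # |p_i|^2 - |p_0|^2
    center, *_ = np.linalg.lstsq(A, b, rcond=None)
    return center
```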

OBJECTIVES

The company Euclid Labs requires an autonomous method to calibrate the TCP of symmetric tools using a camera. The objective is to improve the accuracy of the manual method.

Figure 1.1 – Euclid Labs logo

The final demonstration software has to provide a window in which every robot movement is simulated before the real execution.

STATE OF THE ART

1.3.1 Electronic calibration systems

Some manufacturers develop tool calibration systems for robots that use rotation-symmetric tools. These systems measure the tool's position electronically across all six dimensions and automatically adjust it using an in-line sensor system. Figure 1.2 shows the advintec TCP-3D designed by Leoni, which is an instance of this kind of system.

There are several problems with these kinds of systems that lead to the search for other solutions:

• the workspace is restricted
• there is no user interface
• it is a "black box" system

1.3.2 Camera based calibration systems

There are some patents with several solutions to this problem using a camera, but even in this case there are some restrictions. The common basic idea is to move the flange into at least three different poses keeping the TCP on the same point, and then to compute the relationship between TCP and flange by defining and solving a set of constraints related to these flange poses.

The easiest solution computes the TCP without approximations from a theoretical point of view, but it requires two robot flange rotations of 90 degrees around two perpendicular axes [3][4]. The main problems of this solution are that there may not be enough space to perform these rotations and that an initial TCP approximation is required.

Another solution computes the TCP using a mean squares error algorithm [2][8]. In this case there are no restrictions on the amplitude of the rotation angles and it can be applied even without an initial TCP approximation, but to reach a reasonable accuracy many flange poses have to be used to define the system of equations.

A third solution computes the TCP by defining a set of equations using several flange poses and keeping the TCP on the camera optical axis [2]. This method requires at least four flange poses and allows a system of second order equations to be defined. It has no approximations from a theoretical point of view, but the second order constraints lead to a high amplification of intrinsic system errors such as camera discretization and robot pose inaccuracy.

SYSTEM SETUP

The system setup is composed of the following devices:

• Manipulator: KUKA AGILUS 1100 sixx
• Camera: Basler ace acA2040-25gm (2048×2048 pixels)
• Lens: Goyo GMTHR35028MCN (F/2.8, 50 mm)
• LED backlight: PHLOX SLLUB (200 × 200 mm, IP-65)
• PC: Asus N550JK (Intel Core i7-4700HQ, 2.40 GHz)

Manipulator, camera and PC are connected to a local Ethernet network.

Manipulator and PC communicate by reading and writing two shared variables stored on the robot control unit:

• flag: a Boolean variable that is set to true when a new movement has to be performed. Once the movement has been performed it is set back to false.
• pos: contains the manipulator position to reach. It can be expressed in different coordinate systems.

The PC runs the main software. It processes the camera images and computes the manipulator positions.

The following routine is used to command the manipulator from the PC:
1. set the new position in the variable pos
2. set the variable flag to true
3. suspend execution until the manipulator sets the variable flag back to false.

Meanwhile the manipulator executes the following loop:
1. check the variable flag
2. if flag is true, move the active TCP to the new position pos
3. set flag to false.
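A minimal sketch of this handshake, assuming a hypothetical RobotLink wrapper around whatever mechanism reads and writes the two shared variables on the control unit (the transport itself is not specified here):

```python
import time

class RobotLink:
    """Hypothetical interface to the shared variables on the control unit."""
    def read(self, name: str): ...          # stub
    def write(self, name: str, value): ...  # stub

def command_move(link: RobotLink, position) -> None:
    """PC-side routine: request one movement and wait for completion."""
    link.write("pos", position)   # 1. set the new position
    link.write("flag", True)      # 2. signal that a movement is pending
    while link.read("flag"):      # 3. suspend until the robot resets flag
        time.sleep(0.01)

# Robot-side loop (pseudocode): if flag is true, move the active TCP to
# pos, then set flag back to false.
```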

ORGANISATION OF THE DOCUMENT

1 Introduction: describes the purpose of this thesis and the configuration of the devices used

2 Robotics: detailed information about the manipulator and its reference systems, and a brief description of the simulator used

3 Image processing: all the theory regarding the image processing algorithms used, and the explanation of the method used to detect the TCP with the camera

4 TCP calibration: detailed analysis of the method used to calibrate the TCP, including the interaction between software and camera

5 Results: the results obtained with several tests to analyse the precision and the accuracy of the proposed method

6 Conclusions: final discussion of the results achieved and possible future developments

A Appendix: explanation of the least squares method with normal equations

2 ROBOTICS

KUKA AGILUS 1100 SIXX
This small industrial robot is a 6-axis arm designed to achieve high velocity at the expense of a reduced payload.

2.1.1 Description

The robot is composed of the following components, visible in Figure 2.1:
1. In-line wrist A4, A5, A6
2. Arm A3
3. Link arm A2
4. Rotating column A1
5. Electrical installations
6. Base frame

Figure 2.1 – Kuka Agilus 1100 sixx

In-line wrist A4, A5, A6: The robot is fitted with a 3-axis in-line wrist.

Arm A3: The arm is the link between the in-line wrist and the link arm. It is driven by the motor of axis 3.

Link arm A2: The link arm is the assembly located between the arm and the rotating column. It houses the motor and gear unit of axis 2.

Rotating column A1: The rotating column houses the motors of axes 1 and 2. The rotational motion of axis 1 is performed by the rotating column.

Electrical installations: The electrical installations include all the motor and control cables for the motors of axes 1 to 6. They also include the RDC box, which is integrated into the robot.

Base frame: The base frame is the base of the robot. Interface A1 is located at the rear of the base frame.

Table 2.1 – Kuka Agilus 1100 sixx components

2.1.2 Technical data

The robot joints are mechanically and software constrained. The maximum values are shown in Table 2.2, together with the maximum speed that each axis can achieve.

Axis | Range of motion (software limited) | Speed with rated payload
1 | ±170° | 360°/s
2 | +45° to −190° | 360°/s
3 | +156° to −120° | 360°/s
4 | ±185° | 360°/s
5 | ±120° | 360°/s
6 | ±350° | 360°/s

The KUKA Agilus 1100 sixx technical data are listed in Table 2.3. The working area is defined by the size of the robot and the values which the joints can assume, represented in Figure 2.2.

Max. reach: 1,101 mm
Max. payload: 10 kg
Pose repeatability: ±0.03 mm
Number of axes: 6
Mounting position: floor, ceiling, wall
Robot footprint: 209 mm × 207 mm
Weight (excluding controller), approx.: 54 kg

Figure 2.2 – Kuka Agilus 1100 sixx size and joint values

EUCLID LABS SIMULATOR

Figure 2.3 – Euclid Labs simulator

COORDINATE SYSTEMS

2.3.1 Tool Center Point

The TCP (tool center point) is the reference point of the tool attached to the robot flange. Usually the robot movements are programmed by defining a path with respect to the TCP, which can be represented in different coordinate systems. It could be any point, but it typically corresponds to the tool tip, e.g. the muzzle of a welding torch. Furthermore, there can be several TCPs, but only one at a time can be active.

2.3.2 Robot Base frame

Figure 2.4 – Robot base frame

In this work, the robot base coordinate system corresponds to the world coordinate system.

2.3.3 Wrist frame

The wrist frame corresponds to the robot flange (the last axis of the robot, where the tools are attached). The origin of the wrist frame is the center of the visible flange surface. The red, green and blue axes in Figure 2.5 represent the $x$, $y$ and $z$ axes.


2.3.4 Tool frame

The tool frame is the coordinate system whose origin corresponds to the tool center point. The z-axis usually points in the same direction as the tool tip and defines its work direction. The red, green and blue axes in Figure 2.6 represent the $x$, $y$ and $z$ axes.

3 IMAGE PROCESSING

PINHOLE CAMERA MODEL

The pinhole camera model is a projective model that approximates the image acquisition process of a digital camera through simple geometric relationships. The purpose is to find the relation between image points and their positions in the world. It performs well as long as the lens is thin and no wide-angle lens is used.

Figure 3.1 shows the relationships between the following coordinate systems:

1. The world coordinate system has its origin at point $O$ and its orientation is given by $U_x$, $U_y$ and $U_z$.

2. The camera coordinate system has its origin at point $C$, which is also the focal point. The axis $U_z'$ is aligned with the camera optical axis and points towards the image plane. The plane given by $U_x'$ and $U_y'$ is parallel to the camera image plane.

3. The image coordinate system has axes $u$ and $v$ aligned with $U_x'$ and $U_y'$ of the camera coordinate system. The origin is the top left corner and the optical axis intersects the image plane center.

Figure 3.1 – Pinhole camera model

$$z_i \begin{bmatrix} u_{ni} \\ v_{ni} \\ 1 \end{bmatrix} = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} = \begin{bmatrix} R_{OC} & T_{OC} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix} \quad (3.1)$$

Where $[X_i, Y_i, Z_i]^T$ represents a point expressed in the world coordinate system, $[x_i, y_i, z_i]^T$ is the same point expressed in the camera coordinate system and $[u_{ni}, v_{ni}]^T$ is the corresponding normalized image coordinate. $R \in \mathbb{R}^{3 \times 3}$ and $T \in \mathbb{R}^3$ are the extrinsic parameters and represent the rotation and translation from the world coordinate system to the camera coordinate system.

The real camera image plane is usually represented with the following transformation from the normalized image coordinates:

$$\begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_{ni} \\ v_{ni} \\ 1 \end{bmatrix} \quad (3.2)$$

Where $\alpha \in \mathbb{R}$ and $\beta \in \mathbb{R}$ are scaling factors from the normalized image plane to the real image plane along the $u_n$ and $v_n$ directions, $\gamma \in \mathbb{R}$ represents the skew coefficient between the $u$ and $v$ axes and $[u_0, v_0] \in \mathbb{R}^2$ represents the intersection of the real image plane with the optical axis.

A simplified schema of how the image is acquired by the sensor is shown in Figure 3.2, where the point $(x_i, y_i, z_i)$ is expressed in the camera coordinate system.

Figure 3.2 – Point projection on image plane
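As a worked example of equations (3.1) and (3.2), the following sketch projects a world point to pixel coordinates; the numerical parameters are illustrative assumptions, not calibration values from this thesis:

```python
import numpy as np

def project(Pw, R, T, alpha, beta, gamma, u0, v0):
    """World point -> pixel coordinates via the pinhole model."""
    x, y, z = R @ Pw + T                     # eq. (3.1): world -> camera
    un, vn = x / z, y / z                    # normalized image coordinates
    K = np.array([[alpha, gamma, u0],
                  [0.0,   beta,  v0],
                  [0.0,   0.0,  1.0]])
    u, v, _ = K @ np.array([un, vn, 1.0])    # eq. (3.2)
    return u, v

# Example with an identity extrinsic pose and a 2048 x 2048 sensor center:
print(project(np.array([0.1, -0.05, 1.0]), np.eye(3), np.zeros(3),
              alpha=5000.0, beta=5000.0, gamma=0.0, u0=1024.0, v0=1024.0))
```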

IMAGE PROCESSING ALGORITHMS

3.2.1 Grayscale histogram

The histogram of a grayscale image with 256 levels is a function $h_f(x)$ whose domain is the set of integers in the interval $[0, 255]$, i.e. the possible values of the image pixels, while the codomain is the number of pixels found with the corresponding value.

Algorithm 3.1 – Computing the histogram
1. Set all the elements of the array $h_f(x)$ to zero
2. For each image pixel $(x, y)$, increment $h_f(f(x, y))$ by one

Figure 3.3 – Histogram

3.2.2 Binary image

Since the tool is backlit, a threshold value can separate the light background from the darker tool. To avoid interference from external light, the threshold is not fixed: by computing the image histogram, the threshold $T$ can be defined dynamically as the minimum value between the two highest local maxima; this should correspond to a value between the mean intensity of the background and the mean intensity of the tool.

Once the threshold $T$ has been found using the histogram, the binary image $b_f(x, y)$ can be computed as follows:

$$b_f(x, y) = \begin{cases} 0 & \text{if } f(x, y) > T \\ 1 & \text{if } f(x, y) \le T \end{cases} \quad (3.3)$$

3.2.3 Gaussian filter

This filter smooths the image by convolving a Gaussian kernel $H$ with the image itself. $H$ is a matrix of $(2k+1) \times (2k+1)$ elements, with $k \in \mathbb{N}$, whose values are defined as follows:

$$H_{ij} = \frac{1}{2\pi\sigma^2} \, e^{-\frac{(i-k-1)^2 + (j-k-1)^2}{2\sigma^2}} \quad (3.4)$$

Where ๐œŽ is the standard deviation of the probability distribution.

3.2.4 Sobel operator

This operator can be used to approximate the image gradient by convolving the image with two kernels $S_x \in \mathbb{Z}^{3 \times 3}$ and $S_y \in \mathbb{Z}^{3 \times 3}$:

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \quad (3.5)$$

3.2.5 Canny's Edge Detection

John F. Canny developed this algorithm in 1986 to detect a wide range of edges in images. The edge detector is designed around three main criteria:

• detection: false detections should be avoided and important edges should not be missed
• localization: minimize the distance between the detected edge and the edge center
• one response: the algorithm should minimize multiple responses to a single edge

The basic idea of Canny's edge detection algorithm is to consider a point of the image as an edge if the first derivative has a local maximum along its gradient direction.

The algorithm can be broken down into five steps:

1. To remove noise and obtain better results, the first step is to smooth the image by convolving it with a Gaussian filter.

2. Compute the image gradient to estimate the local edge magnitude $G_{ij}$ and direction $\theta_{ij}$ for each image pixel $(i, j)$. The Sobel operator can be used for this purpose.

3. Find the edge locations using the non-maximum suppression algorithm:
   a. Round the direction to the nearest multiple of 45°
   b. Compare the gradient magnitude with that of the next pixel along the gradient direction
   c. Suppress the current pixel value if its gradient magnitude is lower
   The purpose is to find all local maxima of the gradient magnitude along the related gradient directions.

4. At this point the image contains the detected edges, but there could be some spurious responses. A way to keep only the true edges is to threshold the related gradient magnitude with hysteresis:
   a. If the gradient magnitude is above a high threshold then the corresponding pixel is a true edge
   b. If the gradient magnitude is under a low threshold then the corresponding pixel is a false edge
   c. If the gradient magnitude is between the low and high thresholds then the corresponding pixel is a true edge only if there is at least one other true edge among its neighbours

5. Feature synthesis approach: the general principle is to start from the smallest scale, then synthesise the larger scale outputs that would occur if they were the only edges present. Then, comparing the large scale output with the synthesised edge response, additional edges are marked only if the large scale output is significantly greater than the synthetic prediction. Different scales for Canny's edge detector are represented by different standard deviations $\sigma$ of the Gaussian filter.

Algorithm 3.2 – Canny edge detector
1. Convolve the image with a Gaussian filter with standard deviation $\sigma$
2. Compute the image gradient
3. Find the location of the edges using the non-maximum suppression algorithm
4. Threshold the edges with hysteresis

Figure 3.4 – Binary image
Figure 3.5 – Edges found with Canny's edge detection
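Steps 1–4 of Algorithm 3.2 are available in common libraries; a sketch using OpenCV (an assumption of mine, since the thesis does not state which implementation was used). cv2.Canny performs the gradient, non-maximum suppression and hysteresis steps; smoothing is applied beforehand, and the feature synthesis step (step 5) is not included:

```python
import cv2

img = cv2.imread("tool.png", cv2.IMREAD_GRAYSCALE)         # hypothetical file
blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)        # step 1
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)  # steps 2-4
cv2.imwrite("edges.png", edges)
```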


3.2.6 Hough transform

Conceived by Hough in 1962, in its first version the purpose was the recognition of straight lines. The main idea is that every point of the image gives a contribution to the overall recognition: given a point, a sheaf of lines through it can be defined, and for each pair of points there is only one common line. Computing a discrete sheaf of lines for each image point, if a specific line appears $n$ times, then there is a line in the image with $n$ points.

A two-dimensional line can be represented with two parameters; then, defining a two-dimensional matrix whose columns and rows correspond to the first and second line parameters, the lines found can be counted by incrementing the corresponding cell. The result is an accumulation function defined on the line parameter space. Changing the parameter space, this algorithm can be used to find various geometric shapes.

A line can be represented in several ways; the most commonly used equation is $y = mx + q$, where $m$ is the gradient of the line, $q$ is the y-intercept and $(x, y)$ represents a line point. Changing $m$ and $q$, all the lines passing through the point $(x, y)$ can be defined. With this kind of representation the parameter $m$ grows without bound when the line becomes vertical and has small variations when the line becomes horizontal, so it is not well suited to defining a discrete parametric space. A suitable line representation is $r = x \sin\theta + y \cos\theta$, where $r \in \mathbb{R}$ is the line distance from the center and $\theta \in \mathbb{R}$ is the orientation of $r$ with respect to the abscissa axis.

With this representation, for each image point $(x, y) \in T \subset \mathbb{Z}^2$ belonging to the tool, the corresponding lines $l_i^{xy}$, with $i \in [0, 179]$, can be computed by varying the orientation $\theta_i$ from 0 to 179 and computing the corresponding distance $r_i$ from the center as a function of $x$, $y$ and $\theta_i$.

Once all the pixels have been analysed, the points in the parameter space that have accumulated the largest number of "votes" represent the lines that have a high probability of being present in the image.

A further step is to keep only the lines that have a minimum number of points; the decision threshold $\rho$ can be a fraction of the number of points $n_l$ of the largest line found: $\rho = \frac{n_l}{k}$, with $k \in \mathbb{R}$, $k \ge 1$.

Algorithm 3.3 – Hough transform algorithm
1. Define the parameter space $(r, \theta)$ with an accumulation matrix $M$
2. Initialize all the elements of $M$ to zero
3. For each pixel $(x, y)$ compute $r_i = x \sin\theta_i + y \cos\theta_i$ with $\theta_i \in [0, 179]$ and increment $M(r_i, \theta_i)$ by one
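A direct sketch of Algorithm 3.3 with the $r = x \sin\theta + y \cos\theta$ parametrisation used above, taking pixel coordinates relative to the image center; the centering and the threshold $\rho = n_l / k$ follow the text, the rest is an implementation detail of mine:

```python
import numpy as np

def hough_lines(edges: np.ndarray, k: float = 2.0):
    """Accumulate votes over (r, theta) and return the accumulator M plus
    the (r, theta) pairs whose count is at least rho = max_count / k."""
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(180))
    r_max = int(np.hypot(h, w) / 2) + 1
    M = np.zeros((2 * r_max + 1, 180), dtype=np.int64)  # row index: r + r_max
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs - w // 2, ys - h // 2):          # origin at the center
        r = np.round(x * np.sin(thetas) + y * np.cos(thetas)).astype(int)
        M[r + r_max, np.arange(180)] += 1
    rho = M.max() / k                                   # decision threshold
    rs, ts = np.nonzero(M >= rho)
    return M, list(zip(rs - r_max, ts))
```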

Figure 3.7 – Edge image
Figure 3.8 – Lines found with Hough transform

3.2.7 Thinning

This is a morphological operator used to reduce the area of an element contained in a binary image $X \subset \mathbb{Z}^2$. The resulting image $X' \subset \mathbb{Z}^2$ is:

$$X' = X \oslash B = X \setminus (X \otimes B) \quad (3.7)$$

Where $B = (B_1, B_2)$, with $B_1 \subset \mathbb{Z}^2$, $B_2 \subset \mathbb{Z}^2$ and $B_1 \cap B_2 = \emptyset$, is called a composite structuring element and $X \otimes B = \{x : B_1 \subset X \text{ and } B_2 \subset X^c\}$ is the hit-or-miss transformation.

In other words, once the thinning operator has been applied, a pixel $x$ equal to one in the initial image will be zero in the resulting image if the surrounding pixels in $B_1$ are equal to one and the surrounding pixels in $B_2$ are equal to zero.

This operator has been implemented by applying to the image eight composite structuring elements taken from the Golay alphabet (the asterisks denote "don't care" positions belonging to neither $B_1$ nor $B_2$):

$$B^1 = \begin{bmatrix} 0 & 0 & 0 \\ * & 1 & * \\ 1 & 1 & 1 \end{bmatrix} \; B^2 = \begin{bmatrix} * & 0 & 0 \\ 1 & 1 & 0 \\ * & 1 & * \end{bmatrix} \; B^3 = \begin{bmatrix} 1 & * & 0 \\ 1 & 1 & 0 \\ 1 & * & 0 \end{bmatrix} \; B^4 = \begin{bmatrix} * & 1 & * \\ 1 & 1 & 0 \\ * & 0 & 0 \end{bmatrix}$$

$$B^5 = \begin{bmatrix} 1 & 1 & 1 \\ * & 1 & * \\ 0 & 0 & 0 \end{bmatrix} \; B^6 = \begin{bmatrix} * & 1 & * \\ 0 & 1 & 1 \\ 0 & 0 & * \end{bmatrix} \; B^7 = \begin{bmatrix} 0 & * & 1 \\ 0 & 1 & 1 \\ 0 & * & 1 \end{bmatrix} \; B^8 = \begin{bmatrix} 0 & 0 & * \\ 0 & 1 & 1 \\ * & 1 & * \end{bmatrix} \quad (3.8)$$

3.2.8 Homotopic skeleton detection

To define what an image skeleton is, the maximal ball needs to be specified: a ball $B(p, r) \subset \mathbb{R}^n$ with radius $r \in \mathbb{R}$ and center $p \in \mathbb{R}^n$ is said to be maximal in a set $X \subset \mathbb{R}^n$ if and only if there exists no other ball included in $X$ and containing $B$:

$$\text{let } X \subset \mathbb{R}^n \text{ and } B \subset \mathbb{R}^n \text{ a ball; then } B \text{ is a maximal ball of } X \text{ if } \forall B' \text{ ball}, \; B \subseteq B' \subseteq X \Rightarrow B' = B$$

The skeleton $S(X) \subset \mathbb{R}^n$ by maximal balls of a set $X \subset \mathbb{R}^n$ is the set of the centers of its maximal balls:

$$S(X) = \{p \in X : \exists r \ge 0, \; B(p, r) \text{ is a maximal ball of } X\} \quad (3.9)$$

A sequential thinning with the eight composite structuring elements defined in section 3.2.7, applied until idempotency is reached, produces a homotopic substitute in $\mathbb{Z}^2$ of the skeleton defined by maximal balls.

Figure 3.9 – Source image
Figure 3.10 – Homotopic skeleton

In some unfortunate cases this algorithm performs a sequential pixel elimination: in a single iteration a tool portion becomes a line because the tool is eroded from only one side (Figure 3.12), leading to a false skeleton detection. The problem can be avoided by adding a flag matrix $M$, reinitialized at the beginning of each cycle with false values, in which a cell $M(x_i, y_i)$ is set to true when the corresponding image pixel $I(x_i, y_i)$ is going to be deleted in the current iteration.

Figure 3.11 – Source image
Figure 3.12 – Wrong skeleton detection
Figure 3.13 – Correct skeleton detection
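A sketch of the sequential thinning of equation (3.7) iterated until idempotency, using scipy's hit-or-miss transform. The eight structuring elements are generated by rotating the two base Golay masks, with the "don't care" positions excluded from both $B_1$ and $B_2$; the flag-matrix correction described above is omitted for brevity:

```python
import numpy as np
from scipy.ndimage import binary_hit_or_miss

# Base masks: ones of B^1/B^2 (hit) and zeros of B^1/B^2 (miss); '*' is in
# neither mask. The other six elements are 90-degree rotations.
hit1  = np.array([[0,0,0],[0,1,0],[1,1,1]])
miss1 = np.array([[1,1,1],[0,0,0],[0,0,0]])
hit2  = np.array([[0,0,0],[1,1,0],[0,1,0]])
miss2 = np.array([[0,1,1],[0,0,1],[0,0,0]])
GOLAY = [(np.rot90(hit1, r), np.rot90(miss1, r)) for r in range(4)] \
      + [(np.rot90(hit2, r), np.rot90(miss2, r)) for r in range(4)]

def skeleton(X: np.ndarray) -> np.ndarray:
    """Homotopic skeleton: thin with each element in turn until stable."""
    X = X.astype(bool)
    while True:
        before = X.copy()
        for hit, miss in GOLAY:               # one sequential thinning pass
            X = X & ~binary_hit_or_miss(X, structure1=hit, structure2=miss)
        if (X == before).all():               # idempotency reached
            return X
```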

KEY POINT DETECTION

The TCP projection on the image can be detected by finding the tool main direction and then searching for the extremal point along the found direction. With symmetric tools, this point should correspond to the tool tip.

3.3.1 Tool direction

The Hough transform (section 3.2.6) can be used for this purpose. Applying this algorithm to the image, the resulting accumulation matrix contains the number of points belonging to each line found. A line is represented by its distance $r_i$ from the image center and the rotation angle $\theta_i$ of $r_i$ with respect to the $X$ axis (see section 3.2.6 for more details).

The direction of a line can be represented by the angle between the $X$ axis and the line itself, as shown in Figure 3.14.

With this representation the line direction can be obtained as $\theta_i' = \theta_i + \frac{\pi}{2}$, since it is perpendicular to the direction of $r_i$.

Now the problem is which line to consider as the main tool direction.

3.3.1.1 Sum of lines direction

The first idea was to apply the Hough transform to the image containing the edges found with Canny's algorithm and then to find the main direction with the following method:

1. Define a new vector of directions $D$ representing all the possible directions $\theta_i$, where each cell contains the count of the points of all the lines with the corresponding direction.

2. The main tool direction can then be found by identifying the cell of $D$ with the maximum value and thereafter computing the line direction $\theta_l'$.

In some cases this method does not perform well. For instance, applying it to a tool with a triangular tip, the direction found is not accurate (Figure 3.16).

Figure 3.15 – Source image
Figure 3.16 – Wrong tool direction

3.3.1.2 Largest line

In this case, to identify the line representing the main tool direction, it is easier to find the absolute maximum of the accumulation matrix produced by the Hough transform, since it represents the line with the most points found in the image, which should correspond to the main skeleton part.

Figure 3.17 – Source image
Figure 3.18 – Skeleton direction found

3.3.2 TCP localization

Once the tool direction $\theta'$ has been found, rotating the image coordinate space $U_{xy}$ by $\theta'$ anticlockwise, the TCP projection can be found among the tool points of local extrema along the new abscissa axis $x'$.

Let $U_{xy}$ be the initial image coordinate space with origin at the image centre; the new coordinate space $U_{x'y'}$, with the abscissa axis parallel to the tool direction, has coordinates:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta' & -\sin\theta' \\ \sin\theta' & \cos\theta' \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \quad (3.10)$$

The maximum and minimum points representing the tool along the ๐‘ฅโ€ฒ axis

correspond to a point on the tool tip and a point of image borders. Now, evaluating these points found expressed in coordinate system ๐‘ˆ๐‘ฅ๐‘ฆ, understand which is the

(38)

28

3.3.3 TCP optimisation

There could be more than one extremal point that can represent the TCP (Figure 3.19). A way to solve this ambiguity is to calculate the mean of the surrounding points along the axis perpendicular to the tool direction. Let $T' \subset \mathbb{Z}^2$ be the set of points representing the tool in the coordinate system $U_{x'y'}$ (3.10), $(x'_{ext}, y'_{ext}) \in T'$ the point found with the TCP localisation (section 3.3.2) and $S = \{(x', y') \in T' : |x' - x'_{ext}| < k, \; k \in \mathbb{N}\}$ the set of its surrounding points; then the TCP is:

$$(x_{tcp}, y_{tcp}) = \begin{bmatrix} \cos(-\theta') & -\sin(-\theta') \\ \sin(-\theta') & \cos(-\theta') \end{bmatrix} \begin{bmatrix} x'_{ext} \\ y'_{mean} \end{bmatrix} \quad \text{where} \quad y'_{mean} = \frac{\sum_{(x', y') \in S} y'}{\|S\|} \quad (3.11)$$

A result of this optimisation can be seen in Figure 3.20, where the set $S$ is represented by purple pixels and the TCP by grey pixels.

Figure 3.19 – Keypoint without TCP optimisation
Figure 3.20 – Keypoint with TCP optimisation
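A compact sketch of sections 3.3.2–3.3.3 on a set of tool pixel coordinates relative to the image center; choosing the minimum along $x'$ as the tip side is an assumption made here for brevity, since the text resolves that ambiguity by inspecting both extrema:

```python
import numpy as np

def tcp_keypoint(tool_xy: np.ndarray, theta: float, k: int = 5):
    """Rotate the tool points by theta (eq. 3.10), take the extremal point
    along x', average the y' of its neighbourhood S (eq. 3.11) and rotate
    back. tool_xy is an (N, 2) array of tool pixel coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    pts = tool_xy @ R.T                        # coordinates in U_x'y'
    ext = pts[np.argmin(pts[:, 0])]            # extremum along x'
    S = pts[np.abs(pts[:, 0] - ext[0]) < k]    # surrounding points
    tcp = np.array([ext[0], S[:, 1].mean()])   # (x'_ext, y'_mean)
    return R.T @ tcp                           # rotate back by -theta
```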

4 TCP CALIBRATION

TOOL CENTER POINT REPRESENTATION

Let ๐น๐‘– the ๐‘–๐‘กโ„Ž position of the robot flange represented with the homogeneous

transformation matrix ( 4.1 ) expressed in robot base coordinate system. ๐น๐‘– = [๐‘…๐‘– ๐‘‡๐‘–

0 1, ] , ๐‘– โˆˆ [1, ๐‘] ( 4.1 )

Where ๐‘…๐‘– is the three-dimensional rotation matrix from the robot base fame ๐‘‚๐‘ค and ๐‘‡๐‘– is the three-dimensional translation vector from the robot base frame ๐‘‚๐‘ค. ๐น๐‘– can

be obtained with the robot forward kinematics.

The tool center point can be represented with a further homogeneous transformation matrix ๐ป๐‘ก๐‘๐‘ with respect to the robot flange coordinate system.

๐ป๐‘ก๐‘๐‘ = [๐‘…๐‘ก๐‘๐‘ ๐‘‡๐‘ก๐‘๐‘

0 1 ] ( 4.2 )

Where ๐‘…๐‘ก๐‘๐‘ is the three-dimensional rotation matrix from the robot flange frame ๐น๐‘–

and ๐‘‡๐‘ก๐‘๐‘ = (๐‘‡๐‘ก๐‘๐‘๐‘ฅ, ๐‘‡๐‘ก๐‘๐‘๐‘ฆ, ๐‘‡๐‘ก๐‘๐‘๐‘ง)๐‘‡ is the three-dimensional translation vector from the robot flange frame ๐น๐‘–.

MINIMUM NUMBER OF EQUATIONS

A position $P_i \in \mathbb{R}^3$ of the TCP expressed in the robot base coordinate system can be computed as follows:

$$P_i = F_i \cdot t_{tcp}, \quad i \in [1, N] \quad (4.3)$$

Where the frame $F_i$ represents a robot flange pose and $t_{tcp} = (T_{tcp}, 1)^T$ is the homogeneous vector representing the TCP with respect to $F_i$.

For each couple of different robot flange poses $F_i$ and $F_j$, with the TCP constrained to lie on the same point:

$$F_i \cdot t_{tcp} - F_j \cdot t_{tcp} = P_i - P_j, \quad i, j \in [1, N], \; i \ne j$$
$$(P_i = P_j) \Rightarrow [F_i - F_j] \cdot t_{tcp} = D_{ij} \cdot t_{tcp} = 0 \quad (4.4)$$

The dimension of $D_{ij}$ is $4 \times 4$. Since $F_i$ and $F_j$ are homogeneous matrices, all the values of the last row of $D_{ij}$ are zero.

The rank of $D_{ij}$ (4.4) is two; this can be demonstrated, but some concepts have to be defined first.

Let $R$ be a three-dimensional rotation matrix:

1. Orthonormal basis: in $\mathbb{R}^3$ it can be represented with a $3 \times 3$ matrix whose columns represent three unit vectors, each parallel to one and only one axis of the three-dimensional space. $R$ may be seen as an orthonormal basis, since it is an orthogonal matrix with determinant equal to one.

2. Axis-angle representation: $R$ can be represented with the axis-angle representation $e = (\hat{e}, \theta)$, where $\theta$ is the rotation around the axis of rotation represented by the three-dimensional vector $\hat{e}$. As the rotation $\theta$ grows, any point rotates around the rotation axis $\hat{e}$ (Figure 4.1).

3. Rotation plane: a plane defined in $\mathbb{R}^3$ in relation to $R$; it contains the origin of the coordinate system and its normal is parallel to the axis of rotation $\hat{e}$ (Figure 4.1).

4. Difference vector: let $P_1$ be a three-dimensional point and $P_2 = R \cdot P_1$ the point $P_1$ rotated by the matrix $R$. A difference vector of $R$ is $v_{12} = P_1 - P_2$. All the difference vectors of $R$ are contained in its rotation plane, since they are perpendicular to the axis of rotation $\hat{e}$ of $R$.

5. Correlated rotation matrix: since a rotation matrix is an orthonormal basis, for each couple of rotation matrices $R_i$ and $R_j$ there exists a correlated rotation matrix $R_{ij}$ that rotates the three unit vectors represented by the columns of $R_i$ ($x$, $y$, $z$ in Figure 4.2) to the new positions defined by the three columns of $R_j$ ($x'$, $y'$, $z'$ in Figure 4.2).

Figure 4.2 – Rotation between two coordinate systems

Statement 4.1: Let $R_i$ and $R_j$ be two rotation matrices such that $R_i \ne R_j$. The matrix $R_i - R_j$ has rank equal to two.

Proof: The columns of $R_i - R_j$ can be seen as three difference vectors contained in the rotation plane of the correlated rotation matrix $R_{ij}$ of $R_i$ and $R_j$. Then the rank of $R_i - R_j$ must be less than or equal to two [1].

Suppose now, by contradiction, that two columns of $R_i - R_j$ are parallel:

$$R_{i1} - R_{j1} = \alpha (R_{i2} - R_{j2}) \quad (4.5)$$

Where $R_{i1} - R_{j1}$ and $R_{i2} - R_{j2}$ are the first and the second column of $R_i - R_j$. But since $R_i$ and $R_j$ are orthonormal bases:

$$R_{i1} \perp R_{i2} \Rightarrow R_{i1}^T \cdot R_{i2} = 0$$
$$R_{j1} \perp R_{j2} \Rightarrow R_{j1}^T \cdot R_{j2} = 0 \quad (4.6)$$

Where $R_{i1} \ne 0$, $R_{i2} \ne 0$, $R_{j1} \ne 0$ and $R_{j2} \ne 0$.

Then, if (4.5) is true, the following equations are also true:

$$(R_{i1} - R_{j1})^T R_{i2} = \alpha (R_{i2} - R_{j2})^T R_{i2}$$
$$R_{i1}^T R_{i2} - R_{j1}^T R_{i2} = \alpha R_{i2}^T R_{i2} - \alpha R_{j2}^T R_{i2} \;\Rightarrow\; -R_{j1}^T R_{i2} = \alpha R_{i2}^T R_{i2} - \alpha R_{j2}^T R_{i2}$$
$$\Rightarrow\; -R_{j1} = \alpha R_{i2} - \alpha R_{j2}$$
$$\Rightarrow\; -R_{j1}^T R_{j2} = \alpha R_{i2}^T R_{j2} - \alpha R_{j2}^T R_{j2} \;\Rightarrow\; 0 = \alpha R_{i2}^T R_{j2} - \alpha R_{j2}^T R_{j2}$$
$$\Rightarrow\; \alpha (R_{i2} - R_{j2}) = 0 \quad (4.7)$$

Then equation (4.5) can hold only if:

$$R_{i2} - R_{j2} = R_{i1} - R_{j1} = 0 \quad (4.8)$$

But since $R_i \ne R_j$, at most one column of $R_j$ can be equal to the corresponding column of $R_i$, so (4.8) cannot be true; consequently (4.5) cannot be true either, and the difference vectors cannot be parallel. Since at least two columns of $R_i - R_j$ are not parallel, $R_i - R_j$ has rank greater than or equal to two [2].

$[1] \cap [2] \Rightarrow R_i - R_j$ has rank equal to two.

Statement 4.2: If $R_i \ne R_j$, the last column $T_i - T_j$ of $D_{ij}$ (4.4) is a linear combination of the columns of the submatrix $R_i - R_j$ of $D_{ij}$.

Proof: $T_i$ and $T_j$ can be seen as two vectors $v_i$ and $v_j$ expressed in the TCP coordinate system:

$$T_i = TCP_w + v_i$$
$$T_j = TCP_w + v_j \quad (4.9)$$

Where $TCP_w$ is the TCP expressed in the robot base coordinate system.

$$T_i - T_j = TCP_w + v_i - (TCP_w + v_j) = v_i - v_j \quad (4.10)$$

Since $v_j = R_{ij} \cdot v_i$, where $R_{ij}$ is the correlated rotation matrix of $R_i$ and $R_j$, $v_i - v_j$ is a difference vector of $R_{ij}$. This means that $T_i - T_j$ is also a difference vector of $R_{ij}$, so it is contained in the rotation plane of $R_{ij}$, as are the columns of $R_i - R_j$. Then, if $R_i \ne R_j$, $R_i - R_j$ has rank equal to two and $T_i - T_j$ can be expressed as a linear combination of its columns.

Qed

Then, if $R_i \ne R_j$: $R_i - R_j$ has rank equal to two, $T_i - T_j$ is a linear combination of the columns of $R_i - R_j$, and the last row of $D_{ij}$ (4.4) is zero; therefore $D_{ij}$ has rank equal to two.

This means that two flange poses are not enough to solve the system of equations $D_{ij} \cdot t_{tcp} = 0$; at least another pose $F_k$ is needed:

$$\begin{bmatrix} F_i - F_j \\ F_i - F_k \end{bmatrix} \cdot t_{tcp} = \begin{bmatrix} R_i - R_j & T_i - T_j \\ R_i - R_k & T_i - T_k \end{bmatrix} \cdot t_{tcp} = 0 \quad (4.11)$$

If $R_i \ne R_j$, $R_j \ne R_k$ and $\hat{e}_{ij} \ne \hat{e}_{ik}$, where $\hat{e}$ is the axis of rotation of the respective correlated rotation matrix, then the difference vectors represented by the difference matrices $R_i - R_j$ and $R_i - R_k$ in (4.11) are contained in at least two different rotation planes. The system then contains three linearly independent difference vectors or, in other words, the system of equations has a unique solution.
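The rank results above are easy to verify numerically. A sketch with synthetic poses (the TCP vector and the rotation angles are arbitrary assumptions of mine) that keeps $F \cdot t_{tcp}$ on one common point:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
t = np.array([0.01, -0.02, 0.15])          # hypothetical TCP in flange frame

def pose(rotvec, p_target):
    """Flange pose with the given rotation whose TCP lands on p_target."""
    F = np.eye(4)
    F[:3, :3] = Rotation.from_rotvec(rotvec).as_matrix()
    F[:3, 3] = p_target - F[:3, :3] @ t    # enforce F @ (t, 1) = p_target
    return F

p = rng.normal(size=3)                      # common TCP position
Fi = pose([0.0, 0.0, 0.3], p)               # rotation about z
Fj = pose([0.0, 0.0, 0.9], p)               # rotation about z again
Fk = pose([0.5, 0.0, 0.0], p)               # rotation about x
print(np.linalg.matrix_rank(Fi - Fj))                        # -> 2
print(np.linalg.matrix_rank(np.vstack([Fi - Fj, Fi - Fk])))  # -> 3
```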

CAMERA BASE CALIBRATION

A requirement to solve equation (4.11) is that the TCP must be on the same point for each flange pose; but using a camera, the TCP can only be constrained to lie on a line.

The optical axis ๐‘™๐‘œ in the three-dimensional space is a line whose projection corresponds to the image center. Then, for each flange pose, two coordinates can be constrained with fairly accuracy moving the flange on a plane parallel to the camera plane in such a way to constraint the TCP to be on the image center. Let ๐‘Š๐œ‹ the camera base that is a coordinate system with the origin corresponding to the center of the camera image plane, with the plane ๐‘ฅ๐‘ฆ๐œ‹ aligned with the

camera plane ๐‘ฅ๐‘ฆ๐‘ and the z-axis aligned with the camera optical axis: ๐‘Š๐œ‹ = [๐‘…๐œ‹ ๐‘‡๐œ‹

0 1] = ๐‘Š๐‘‚[

๐‘…๐‘œ๐œ‹ ๐‘‡๐‘œ๐œ‹

0 1 ] ( 4.12 )

Where ๐‘…๐‘œ๐œ‹ โˆˆ โ„3๐‘ฅ3 and ๐‘‡

๐‘œ๐œ‹ โˆˆ โ„3 are the rotation matrix and the translation vector

from the robot base coordinate system to the camera plane.

Since the purpose is to move the tool with respect to the current position (thus without setting absolute positions), only the rotation matrix $R_\pi$ is required, because it allows the definition of the direction $d_\pi^i \in \mathbb{R}^3$, with respect to $W_\pi$, along which to move the flange to constrain the TCP projection to be on the image center while keeping the flange on the plane $xy_\pi$. Let $d_c^i \in \mathbb{R}^2$ be the direction given by the vector that connects the TCP projection $(x_i, y_i)$ on the image plane to the image center $(x_c, y_c)$; then:

$$d_\pi^i = \frac{(x_i - x_c, \; y_i - y_c, \; 0)}{|(x_i - x_c, \; y_i - y_c)|} \quad (4.13)$$

Note that the last value of (4.13) is zero, since the coordinate system $W_\pi$ has the $xy$ plane parallel to the camera image plane; it is therefore possible to move the flange along the direction $d_\pi^i$ in such a way as to constrain the TCP projection to be on the image center without changing the z-value.

There are several methods to find the camera rotation matrix $R_\pi$, two of which are the following:

4.3.1 Kuka base calibration

Kuka provides a procedure to define a new reference system called base. This procedure requires the identification of three points $p_1, p_2, p_3 \in \mathbb{R}^3$ by moving the robot using the manual control system:

• $p_1$: origin of the new coordinate system
• $p_2$: point on the x-axis of the new coordinate system
• $p_3$: point on the xy plane of the new coordinate system

The z-axis direction is defined according to the left-hand rule.

Note that, to define the origin with fair accuracy, an already calibrated pointed tool has to be used. Here only the rotation $R_\pi$ is useful, so the base can be defined by moving a pointed tool to the three points but using the corresponding flange positions to set the values $p_1$, $p_2$ and $p_3$, making sure that the flange does not change its orientation during the movements. The resulting rotation matrix is the same, since the flange position can be seen as a constant translation of the tool tip: the computed reference system will have a different origin but the same orientation.

4.3.2 Base calibration using camera images

Another way to compute the rotation matrix $R_\pi$ is to move the tool to different positions contained in a plane of the 3D space expressed in the robot base coordinate system, recording the TCP position found with the camera.

Let $(X_w, Y_w, Z_w)$ be a flange position expressed in the robot base coordinate system, $(u, v)$ the pixel coordinates representing the TCP, $R_\pi \in \mathbb{R}^{3 \times 3}$ a rotation matrix, and $\alpha$ and $\beta$ scaling factors.

Equation (4.14) represents the relation between the flange positions and the corresponding TCP found on the image plane. The camera rotation matrix can then be found by recording these planar correspondences and computing $R_\pi$.

TCP POSITIONING WITH CAMERA

Let $p_\pi^i \in \mathbb{R}^3$ be the current flange position expressed in the camera base frame $W_\pi$ (4.12), $(x_i, y_i)$ the corresponding TCP projection on the camera plane and $(x_c, y_c)$ the image plane center. The flange position expressed in $W_\pi$ such that the TCP projection on the camera image plane corresponds to the image center is:

$$p_\pi^{i-c} = p_\pi^i + \begin{bmatrix} \alpha (x_c - x_i) \\ \beta (y_c - y_i) \\ 0 \end{bmatrix} \quad (4.15)$$

Where $\alpha$ and $\beta$ are the scaling factors from pixels to robot coordinate system units along the $x$ and $y$ directions.

The scaling factors $\alpha$ and $\beta$ can be defined considering a flange position $(f_x, f_y, f_z)$ expressed in the camera coordinate system $W_\pi$, the corresponding TCP projection $(x_1, y_1)$ on the camera plane, a second flange position $(f_x + d, f_y + d, f_z)$ (a translation of $p_\pi^1$ along the plane $xy$) and the corresponding TCP projection $(x_2, y_2)$:

$$\alpha = \frac{d}{x_2 - x_1} \qquad \beta = \frac{d}{y_2 - y_1} \quad (4.16)$$

Note that the plane $xy$ is parallel to the camera image plane.

Algorithm 4.1 – TCP positioning with camera
1. Get the flange position $p_\pi^1$ and the corresponding TCP projection $(x_1, y_1)$ on the camera plane
2. $d_x = x_c - x_1$, $d_y = y_c - y_1$
3. Set the flange position $p_\pi^2 = p_\pi^1 + (\alpha d_x, \beta d_y, 0)^T$ and get the corresponding TCP projection $(x_2, y_2)$ on the camera plane
4. If $(x_2, y_2)$ corresponds to the image center then stop, otherwise repeat

Algorithm 4.1 dynamically changes the scale factors if the flange movements needed to position the TCP are too large or too small.

TCP COMPUTATION

Regardless of the method used to compute the TCP translation vector, there are two main problems:

1. During the early stages no information is known about the tool, so to change the flange pose while keeping the TCP detectable by the camera only small flange rotations are suitable. The maximum rotation suitable for this purpose is related to the tool geometry.

2. As shown above, at least three flange poses are required to solve the system of equations (4.11), and the rotation axes between flange poses must be distinct. Assuming the first rotation has its rotation axis equal to the z-axis with respect to the reference system $W_\pi$ (4.12), using Algorithm 4.1 the TCP can be positioned on the same three-dimensional point as in the first flange pose. The next flange pose, however, must have a different rotation axis;

this means that, applying Algorithm 4.1 to move the TCP, its final position will have a different z-coordinate, so only an approximation of the TCP can be computed using (4.11).
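A sketch of the closed loop of Algorithm 4.1, with hypothetical callbacks standing in for the robot link and the TCP detection of chapter 3:

```python
import numpy as np

def center_tcp(get_flange_pos, move_flange, detect_tcp, image_center,
               alpha, beta, tol=0.5, max_iter=20):
    """Move the flange on the xy plane of W_pi until the detected TCP
    projection coincides with the image center (eq. 4.15); alpha and beta
    are the pixel-to-robot-unit factors of eq. (4.16)."""
    xc, yc = image_center
    for _ in range(max_iter):
        x, y = detect_tcp()                    # TCP projection (pixels)
        dx, dy = xc - x, yc - y
        if np.hypot(dx, dy) <= tol:            # TCP on the image center
            return get_flange_pos()
        move_flange(np.array([alpha * dx, beta * dy, 0.0]))
        # a real implementation would rescale alpha and beta here when the
        # observed movement is too large or too small (see text above)
    raise RuntimeError("TCP centering did not converge")
```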

The first idea was to estimate a solution with the least squares method, which computes a result by minimizing the error of each TCP position related to the flange poses used. Studying the details of this approach, to overcome the problem of the third TCP position having a different z-coordinate, I found another way to solve the problem without approximations from a theoretical point of view, using three flange poses.

4.5.1 Least squares method

The least squares method with normal equations (A.1) can be used to compute the TCP:

$$T_{tcp} = (A^T A)^{-1} A^T y \quad (4.17)$$

Where $A$ is the submatrix of (4.11) containing the differences $R_i - R_j$ and $R_i - R_k$, and $y$ represents the last column of (4.11) containing the differences of the translation vectors with opposite sign:

$$A x - y = \begin{bmatrix} R_i - R_j \\ R_i - R_k \end{bmatrix} T_{tcp} - \begin{bmatrix} T_j - T_i \\ T_k - T_i \end{bmatrix} \quad (4.18)$$

As shown above, with $R_j = R_z R_i$ and $R_k = R_x R_i$, $A$ has rank three, so $A^T A$ also has rank three. This means $A^T A$ is invertible, so the system of equations (4.11) can be solved and has a unique solution.

Other flange poses could be used to add equations to (4.18), but when positioning the flange after a rotation with Algorithm 4.1 there is no guarantee that the TCP will be on the same three-dimensional position. The computed TCP will therefore be an approximation.
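A minimal sketch of the least squares estimate of equations (4.17)–(4.18); np.linalg.lstsq solves the same normal equations numerically:

```python
import numpy as np

def tcp_least_squares(poses):
    """TCP translation from N >= 3 flange poses (R, T) sharing one TCP
    point: stack rows (R_0 - R_j) T_tcp = T_j - T_0 and solve."""
    R0, T0 = poses[0]
    A = np.vstack([R0 - R for R, _ in poses[1:]])
    y = np.hstack([T - T0 for _, T in poses[1:]])
    T_tcp, *_ = np.linalg.lstsq(A, y, rcond=None)
    return T_tcp
```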

4.5.2 Analytic solution

Let $TCP_{img-c} \in \mathbb{Z}^2$ be the image center point and $TCP_{\pi-c} \in \mathbb{R}^3$ the point used to constrain the TCP: for each flange pose, the TCP projection on the image plane must be equal to $TCP_{img-c}$. Let $F_{1\pi}$ be the representation of the first flange position, with the TCP placed on the point $P_{\pi 1} = TCP_{\pi-c}$, with rotation matrix $R_{1\pi}$ and translation vector $T_{1\pi}$.

A rotation ๐‘…๐‘ง around the z-axis applied on ๐น1๐œ‹ moves the TCP to a new 3D position without changing the z-value expressed in coordinate syestem ๐‘Š๐œ‹. Now using

Algorithm 4.1 the flange can be moved along ๐‘ฅ and ๐‘ฆ axis in such a way to place the TCP detected with the camera on the image point ๐‘‡๐ถ๐‘ƒ๐‘–๐‘š๐‘”โˆ’๐‘. The new TCP position ๐‘ƒ๐œ‹2 โˆˆ โ„3 will be equal to ๐‘ƒ๐œ‹1, let ๐น2๐œ‹ the corresponding flange pose with rotation matrix ๐‘…2๐œ‹ = ๐‘…๐‘งโˆ™ ๐‘…1๐œ‹ and translation vector ๐‘‡2๐œ‹.

Note that using the image center point as TCP position constraint, independently in which z-coordinate is placed the TCP, if its projection on the image plane is equal to ๐‘‡๐ถ๐‘ƒ๐‘–๐‘š๐‘”โˆ’๐‘ then x-coordinate and y-coordinate will be equal to those of ๐‘‡๐ถ๐‘ƒ๐œ‹โˆ’๐‘. To found a solution of ( 4.11 ), the next flange rotation has to have a different rotation-axis and the TCP has to be placed on ๐‘‡๐ถ๐‘ƒ๐œ‹โˆ’๐‘. But using another rotation axis the 3D TCP z-value will be not equal to the z-value of ๐‘ƒ๐œ‹1 and ๐‘ƒ๐œ‹2 and the information obtained from camera are not enough to move the flange on a third pose ๐น3๐œ‹ such that the related TCP point ๐‘ƒ๐œ‹3โˆˆ โ„3 is equal to ๐‘ƒ๐œ‹1 and ๐‘ƒ๐œ‹2, only the x-component and the y-component will be equal. This means that the z-component of the translation vector ๐‘‡3๐œ‹ of ๐น3๐œ‹ will be affected by error. The following method

overcome this problem leading to a solution without errors from a theoretical point of view:

Consider now only the first and the second flange poses:

$$P_{\pi 1} = P_{\pi 2} \Rightarrow [F_{1\pi} - F_{2\pi}] \cdot t_{tcp} = \begin{bmatrix} R_{1\pi} - R_z \cdot R_{1\pi} & T_{1\pi} - T_{2\pi} \end{bmatrix} \cdot t_{tcp} = 0 \quad (4.19)$$

Where $t_{tcp} = (T_{tcp}, 1)^T$ is the homogeneous vector representing the TCP expressed in the robot flange coordinate system; then (4.19) can be rewritten as follows:

$$[R_{1\pi} - R_z \cdot R_{1\pi}] \, T_{tcp} + T_{1\pi} - T_{2\pi} = 0 \quad (4.20)$$

As shown in Statement 4.1, $[R_{1\pi} - R_z \cdot R_{1\pi}]$ has rank equal to two, so a full solution for $T_{tcp} = (T_{tcp_x}, T_{tcp_y}, T_{tcp_z})$ cannot be found; however a partial solution can be computed.

If $[R_{1\pi} - R_z \cdot R_{1\pi}]$ had the last column and the last row equal to zero, then $T_{tcp_x}$ and $T_{tcp_y}$ could be computed using the first two rows of the system of equations (4.20); but $R_{1\pi}$ could be any rotation matrix.

Statement 4.3:

$$[R_{1\pi} - R_z \cdot R_{1\pi}] \, T_{tcp} + T_{1\pi} - T_{2\pi} = 0 \quad (4.21)$$
$$\Leftrightarrow \quad [I - R_z] \cdot T'_{tcp} + T_{1\pi} - T_{2\pi} = 0 \quad (4.22)$$

Where $T'_{tcp} = R_{1\pi} \cdot T_{tcp}$.

Proof:

Let ๐ด, ๐ต, ๐ถ โˆˆ โ„3๐‘ฅ3 and ๐ผ โˆˆ โ„3๐‘ฅ3 the identity matrix then:

(๐ด + ๐ต)๐ถ = ๐ด๐ถ + ๐ต๐ถ ( 4.23 )

(๐ด๐ต)๐ถ = ๐ด(๐ต๐ถ) ( 4.24 )

๐ผ๐ด = ๐ด๐ผ = ๐ด ( 4.25 )

๐‘…1๐œ‹ is a rotation matrix then it is invertible:

๐‘…1๐œ‹โˆ™ ๐‘…1๐œ‹โˆ’1= ๐ผ ( 4.26 )

Exploiting properties ( 4.25 ) and ( 4.26 ), ( 4.21 ) can be rewritten as:

[๐‘…1๐œ‹โˆ’ ๐‘…๐‘งโˆ™ ๐‘…1๐œ‹]๐‘…1๐œ‹โˆ’1โˆ™ ๐‘…1๐œ‹โˆ™ ๐‘‡๐‘ก๐‘๐‘+ ๐‘‡1๐œ‹ โˆ’ ๐‘‡2๐œ‹ = 0 ( 4.27 )

(51)

41

There is a bi-unique relation between $T'_{tcp}$ and $T_{tcp}$; then, once $T'_{tcp}$ is found, $T_{tcp}$ can also be computed:

$$T_{tcp} = R_{1\pi}^{-1} \cdot T'_{tcp} \quad (4.29)$$

[๐ผ โˆ’ ๐‘…๐‘ง] of ( 4.22 ) has the last row and the last column equal to zero:

[๐ผ โˆ’ ๐‘…๐‘ง] โˆ™ ๐‘‡๐‘ก๐‘๐‘โ€ฒ + ๐‘‡1๐œ‹ โˆ’ ๐‘‡2๐œ‹ = [ 1 โˆ’cos ๐›ผ๐‘ง sin ๐›ผ๐‘ง 0 โˆ’ sin ๐›ผ๐‘ง 1 โˆ’ cos ๐›ผ๐‘ง 0 0 0 0][ ๐‘‡๐‘ก๐‘๐‘โ€ฒ ๐‘ฅ ๐‘‡๐‘ก๐‘๐‘โ€ฒ ๐‘ฆ ๐‘‡๐‘ก๐‘๐‘โ€ฒ ๐‘ง] + [ ๐‘‡1๐œ‹๐‘ฅ โˆ’ ๐‘‡2๐œ‹๐‘ฅ ๐‘‡1๐œ‹๐‘ฆ โˆ’ ๐‘‡2๐œ‹๐‘ฆ ๐‘‡1๐œ‹๐‘ง โˆ’ ๐‘‡2๐œ‹๐‘ง] = 0 ( 4.30 )

Where ๐›ผ๐‘ง โˆˆ (0, ๐œ‹] is the rotation angle around the z-axis and ๐‘‡๐‘ก๐‘๐‘โ€ฒ =

(๐‘‡๐‘ก๐‘๐‘โ€ฒ ๐‘ฅ, ๐‘‡๐‘ก๐‘๐‘ โ€ฒ ๐‘ฆ, ๐‘‡๐‘ก๐‘๐‘ โ€ฒ ๐‘ง).

Then from (4.30) the following system of equations can be extracted:

$$\begin{bmatrix} 1 - \cos\alpha_z & \sin\alpha_z \\ -\sin\alpha_z & 1 - \cos\alpha_z \end{bmatrix} \begin{bmatrix} T'_{tcp_x} \\ T'_{tcp_y} \end{bmatrix} + \begin{bmatrix} T_{1\pi_x} - T_{2\pi_x} \\ T_{1\pi_y} - T_{2\pi_y} \end{bmatrix} = 0 \quad (4.31)$$

If $\alpha_z \ne 0$, (4.31) can be solved since the matrix determinant is greater than zero:

$$\det \begin{bmatrix} 1 - \cos\alpha_z & \sin\alpha_z \\ -\sin\alpha_z & 1 - \cos\alpha_z \end{bmatrix} = (1 - \cos\alpha_z)^2 + \sin^2\alpha_z = 2(1 - \cos\alpha_z) > 0 \quad (4.32)$$

$$\begin{bmatrix} T'_{tcp_x} \\ T'_{tcp_y} \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & -\dfrac{\sin\alpha_z}{1 - \cos\alpha_z} \\ \dfrac{\sin\alpha_z}{1 - \cos\alpha_z} & 1 \end{bmatrix} \begin{bmatrix} T_{2\pi_x} - T_{1\pi_x} \\ T_{2\pi_y} - T_{1\pi_y} \end{bmatrix} \quad (4.33)$$

Considering the TCP constraint resulting from the first and the third flange poses:

$$[R_{1\pi} - R_x \cdot R_{1\pi}] \, T_{tcp} + T_{1\pi} - T_{3\pi} = 0 \quad (4.34)$$

Using Statement 4.3, (4.34) can also be rewritten as follows:

$$[I - R_x] \cdot T'_{tcp} + T_{1\pi} - T_{3\pi} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 - \cos\alpha_x & \sin\alpha_x \\ 0 & -\sin\alpha_x & 1 - \cos\alpha_x \end{bmatrix} \begin{bmatrix} T'_{tcp_x} \\ T'_{tcp_y} \\ T'_{tcp_z} \end{bmatrix} + \begin{bmatrix} T_{1\pi_x} - T_{3\pi_x} \\ T_{1\pi_y} - T_{3\pi_y} \\ T_{1\pi_z} - T_{3\pi_z} \end{bmatrix} = 0 \quad (4.35)$$

Where $\alpha_x \in (0, \pi]$ is the rotation angle around the x-axis.

As mentioned before, any rotation with a rotation axis not equal to $z$ leads to a $T_{3\pi_z}$ value affected by error, because using Algorithm 4.1 the TCPs related to the two flange poses cannot be constrained to lie on the same point. But knowing $T'_{tcp_y}$ from (4.33), the second row of (4.35) can be used to compute $T'_{tcp_z}$:

$$(1 - \cos\alpha_x) \, T'_{tcp_y} + (\sin\alpha_x) \, T'_{tcp_z} + T_{1\pi_y} - T_{3\pi_y} = 0 \quad (4.36)$$

$$T'_{tcp_z} = \frac{T_{3\pi_y} - T_{1\pi_y} - (1 - \cos\alpha_x) \, T'_{tcp_y}}{\sin\alpha_x} \quad (4.37)$$

Note that in this case $\alpha_x$ must be less than $\pi$ to find a solution.

Putting (4.33) and (4.37) together:

$$\begin{bmatrix} T'_{tcp_x} \\ T'_{tcp_y} \\ T'_{tcp_z} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & -\frac{\sin\alpha_z}{2(1 - \cos\alpha_z)} & 0 \\ \frac{\sin\alpha_z}{2(1 - \cos\alpha_z)} & \frac{1}{2} & 0 \\ -\frac{(1 - \cos\alpha_x)\sin\alpha_z}{2\sin\alpha_x (1 - \cos\alpha_z)} & -\frac{1 - \cos\alpha_x}{2\sin\alpha_x} & \frac{1}{\sin\alpha_x} \end{bmatrix} \begin{bmatrix} T_{2\pi_x} - T_{1\pi_x} \\ T_{2\pi_y} - T_{1\pi_y} \\ T_{3\pi_y} - T_{1\pi_y} \end{bmatrix} \quad (4.38)$$

Using the identity:

$$\frac{1 - \cos\alpha}{\sin\alpha} = \frac{2\sin^2\frac{\alpha}{2}}{2\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}} = \tan\frac{\alpha}{2} \quad (4.39)$$

equation (4.38) becomes:

$$\begin{bmatrix} T'_{tcp_x} \\ T'_{tcp_y} \\ T'_{tcp_z} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2\tan\frac{\alpha_z}{2}} & 0 \\ \frac{1}{2\tan\frac{\alpha_z}{2}} & \frac{1}{2} & 0 \\ -\frac{\tan\frac{\alpha_x}{2}}{2\tan\frac{\alpha_z}{2}} & -\frac{\tan\frac{\alpha_x}{2}}{2} & \frac{1}{\sin\alpha_x} \end{bmatrix} \begin{bmatrix} T_{2\pi_x} - T_{1\pi_x} \\ T_{2\pi_y} - T_{1\pi_y} \\ T_{3\pi_y} - T_{1\pi_y} \end{bmatrix} \quad (4.40)$$

All the terms of (4.40) are known; then, as shown in Statement 4.3, the solution $T_{tcp}$ can be computed as follows:

$$T_{tcp} = R_{1\pi}^{-1} \cdot T'_{tcp} \quad (4.41)$$

ERROR ANALYSIS

Because of the translations performed by the robot and the camera discretization, the analytic solution of section 4.5.2 for computing the TCP has some unavoidable errors. Suppose that for each translation difference term of equation (4.40) there is an error $\epsilon_i \in (-\epsilon_{max}, \epsilon_{max})$, where $i \in \{1, 2, 3\}$ is the corresponding row index:

$$\begin{bmatrix} T'_{tcp_x} + \epsilon_x \\ T'_{tcp_y} + \epsilon_y \\ T'_{tcp_z} + \epsilon_z \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2\tan\frac{\alpha_z}{2}} & 0 \\ \frac{1}{2\tan\frac{\alpha_z}{2}} & \frac{1}{2} & 0 \\ -\frac{\tan\frac{\alpha_x}{2}}{2\tan\frac{\alpha_z}{2}} & -\frac{\tan\frac{\alpha_x}{2}}{2} & \frac{1}{\sin\alpha_x} \end{bmatrix} \begin{bmatrix} T_{2\pi_x} - T_{1\pi_x} + \epsilon_1 \\ T_{2\pi_y} - T_{1\pi_y} + \epsilon_2 \\ T_{3\pi_y} - T_{1\pi_y} + \epsilon_3 \end{bmatrix} \quad (4.42)$$

$(\epsilon_x, \epsilon_y, \epsilon_z)$ is the distance vector from the real solution. This error can be separated from equation (4.42):

$$\begin{bmatrix} \epsilon_x \\ \epsilon_y \\ \epsilon_z \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2\tan\frac{\alpha_z}{2}} & 0 \\ \frac{1}{2\tan\frac{\alpha_z}{2}} & \frac{1}{2} & 0 \\ -\frac{\tan\frac{\alpha_x}{2}}{2\tan\frac{\alpha_z}{2}} & -\frac{\tan\frac{\alpha_x}{2}}{2} & \frac{1}{\sin\alpha_x} \end{bmatrix} \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3 \end{bmatrix} \quad (4.43)$$

The matrix in equation (4.43) represents the error amplification of the method used to compute the TCP, as a function of the rotation angles $\alpha_x$ and $\alpha_z$. Note that as these values tend to zero, the errors tend to infinity.

The minimum amplification for each error corresponds to the optimal values $\alpha_z = \pi$ and $\alpha_x = \frac{\pi}{2}$:

$$\begin{bmatrix} \epsilon_x \\ \epsilon_y \\ \epsilon_z \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & -\frac{1}{2} & 1 \end{bmatrix} \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3 \end{bmatrix} \quad (4.44)$$

Since the rotations are centered on the flange, to keep the TCP detectable by the camera the rotation angles must be small, like one or two degrees; but moving $\alpha_z$ and $\alpha_x$ away from the optimal values, the errors are amplified. Applying the method described in section 4.5.2 with this kind of rotations, the computed $T_{tcp}$ will therefore be inaccurate; however, the result can be used to set a new rotation center, so as to compute $T_{tcp}$ a second time using wider rotations while keeping the tool detectable by the camera.

If the error amplification is still too high, the error $\epsilon_z$ can be reduced with another rotation: after the second rotation around the x-axis with $\alpha_x = \frac{\pi}{2}$, the plane $xz$ is parallel to the camera image plane. Now the same method used to compute $T'_{tcp_x}$ and $T'_{tcp_y}$ (4.30) can be used to compute $T'_{tcp_x}$ and $T'_{tcp_z}$, replacing $T'_{tcp_y}$ with $T'_{tcp_z}$, performing the rotation around the y-axis instead of the z-axis and moving the flange with Algorithm 4.1 along the z-axis instead of the y-axis. Then, replacing only the resulting row related to $T'_{tcp_z}$, (4.42) can be rewritten as follows:

$$\begin{bmatrix} T'_{tcp_x} + \epsilon_x \\ T'_{tcp_y} + \epsilon_y \\ T'_{tcp_z} + \epsilon_z \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & -\frac{1}{2\tan\frac{\alpha_z}{2}} & 0 \\ \frac{1}{2\tan\frac{\alpha_z}{2}} & \frac{1}{2} & 0 \\ \frac{1}{2\tan\frac{\alpha_y}{2}} & 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} T_{2\pi_x} - T_{1\pi_x} + \epsilon_1 \\ T_{2\pi_y} - T_{1\pi_y} + \epsilon_2 \\ T_{4\pi_z} - T_{1\pi_z} + \epsilon_3 \end{bmatrix} \quad (4.45)$$

Where $\alpha_y$ is the rotation angle around the y-axis.

With the optimal values, the amplification becomes:

$$\begin{bmatrix} \epsilon_x \\ \epsilon_y \\ \epsilon_z \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & \frac{1}{2} \end{bmatrix} \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3 \end{bmatrix} \quad (4.46)$$

However, there could be not enough space to perform this kind of rotations; the first method (only two rotations, around the z-axis and the x-axis, without constraints on the rotation angles) could then be more suitable for the purpose.

For instance, $\alpha_z = \alpha_x = \frac{\pi}{4}$ in equation (4.42) could be the right compromise between precision and performance:

$$\begin{bmatrix} \epsilon_x \\ \epsilon_y \\ \epsilon_z \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & -\frac{\sqrt{2}}{4 - 2\sqrt{2}} & 0 \\ \frac{\sqrt{2}}{4 - 2\sqrt{2}} & \frac{1}{2} & 0 \\ -\frac{1}{2} & -\frac{2 - \sqrt{2}}{2\sqrt{2}} & \sqrt{2} \end{bmatrix} \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3 \end{bmatrix} \quad (4.47)$$

$$\begin{bmatrix} \epsilon_x \\ \epsilon_y \\ \epsilon_z \end{bmatrix} \approx \begin{bmatrix} 0.5 & -1.21 & 0 \\ 1.21 & 0.5 & 0 \\ -0.5 & -0.21 & 1.41 \end{bmatrix} \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \epsilon_3 \end{bmatrix} \quad (4.48)$$
