
Master's Degree in Electronic Engineering

Master's Thesis

A new architecture for the control of complex collaborative robots

Candidate: Livia Moro
Supervisor: Dr. Federico Baronti
External supervisor: Dr. Ben Jeppesen

Academic Year 2015/2016


Industrial automation is experiencing a revolution in the relationship between robots and humans. Industrial robots are being developed for more flexible and collaborative tasks, and are being designed to work safely side by side with human workers. Lower-force robots may need sophisticated motion control and behavioral algorithms, but are expected to be more compact and lower cost.

In order to meet these needs, a hard-processor-plus-FPGA architecture is proposed for the control of a 4 DOF articulated robot.

The FPGA provides extreme flexibility in the integration of different modules and in reconfiguration. It is also a good environment for the development of a motor control system. The hard processor can handle the high-level tasks, and can also run an operating system.

The implementation presented combines multi-axis motion control with 3D robot vision: the end effector of the robot manipulator reproduces the movements of the closest point tracked by the camera.

The processor receives the depth frames from the 3D camera, tracks the position of the closest point in each frame, and sends these points to the FPGA as points of a path. The FPGA receives the coordinates of the points and computes position and speed commands for each motor. The motion control algorithm in the FPGA includes kinematics and trajectory planning calculations.

The dissertation starts by discussing the organization of an industrial automation hierarchy, the main features of collaborative robots and the requirements that these features impose on the control system. Based on these, the advantages of an FPGA-based control system are discussed, and an overview of the state of the art in FPGA motor control is provided.

The theoretical background necessary for the motion control of a 4 DOF articulated robot arm is presented. In particular, the geometric solution of the forward and inverse kinematics problems is provided. Singularities are evaluated from the Jacobian matrix, and different approaches for trajectory planning are presented.

Then, the description of the robot hardware and of the control architecture is presented. Details are provided about the tasks that are assigned to each part of the system, according to the real-time requirements.

The software development for the hard processor and the FPGA is described in detail. The environment of the hard processor is composed of the Ubilinux operating system and the libraries for interface handling. The program for the hard processor renders the depth image in the terminal, finds the position of the point closest to the camera and sends this position to the FPGA. The frame rate from the camera is 30 fps, and all the program functionalities run at the same rate.


The development flow for the FPGA is performed with the Quartus II software. First, a System on a Programmable Chip composed of a Nios II processor plus memory and peripherals is designed. A custom IP is designed for the generation of the PWM signals for the control of the motors. The Nios II software implements the inverse kinematics and the trajectory planning algorithms. The code in the FPGA also has the important role of synchronizing the position data from the hard processor, and of sending the commands with the right timing for the motion of the robot arm.

This project shows how the requirements for the control of a collaborative robot are met with an architecture composed of an FPGA plus a hard processor. The demonstration successfully implements the motion control of the robot arm, integrated with the camera vision that provides the interactive behavior.

Future developments may include: the integration of force sensors, safety IP, industrial communication protocols, a different trajectory planning algorithm, a motor control method for bigger industrial robots and the use of a different strategy for the image processing.


I would like to thank the Intel Programmable Solutions Group (PSG), and in particular the System Solution Engineering (SSE) PSG group in High Wycombe. This master thesis was developed during a six-month internship program, and in this period I had the chance to be part of an exciting environment, and to learn a lot from all the inspiring and passionate people working there. I would like to express my gratitude to my company supervisor, Ben Jeppesen, for his constant support and good advice, which always came at the right moment, and always with a smile. Moreover, he was my real reference point in this incredible six-month experience.

I would also like to say thanks, for the help and the good times, to all the Embedded Control and Functional Safety group: Alex, Andrew, Xiang, Kevin, Meena and Thomas.

I would like to thank my supervisor Dr. Federico Baronti of the Electronics Engineering course at the University of Pisa. He made this experience possible by making contact with the company, and was always present for support, clarification and advice. I would also like to thank Prof. Roberto Saletti, for his support and enthusiasm.

This accomplishment would not have been possible without them. Thank you.

Livia Moro


Contents

1 Introduction
1.1 Problem Statement
1.2 Technical Contribution
1.3 Objectives

2 Background
2.0.1 Industrial Robotics Trends
2.1 Collaborative Robots
2.1.1 Applications
2.1.2 Safety
2.1.3 Comparison
2.2 Common Architectures for Industrial Automation Systems
2.2.1 Supervisor Level
2.2.2 Control Level
2.2.3 Field Level
2.2.4 Interconnects
2.3 Robotics Software Platforms
2.3.1 Robot Operating System
2.3.2 ROS Industrial
2.4 A Renowned Example: Baxter
2.5 Potential of FPGAs for Robotic Systems
2.5.1 Flexibility
2.5.2 High Performance
2.5.3 FPGAs and System on Chips
2.5.4 Cost and Time to Market
2.5.5 State of the Art

3 Review of articulated robots
3.1 Structure
3.2 Kinematics
3.2.1 Forward Kinematics
3.2.2 Inverse Kinematics
3.3 Differential Kinematics: the Jacobian Matrix
3.3.1 Example: Jacobian of a 3DOF Manipulator
3.4 Singularity Calculation
3.5 Path Finding and Trajectory Planning
3.5.1 Linear Interpolation
3.5.2 Linear Segments with Parabolic Blends

4 Control system architecture
4.1 Specifications
4.1.1 Characteristics of the Robot Arm
4.1.2 Robot Hardware Choice
4.2 Robot Task and Building Functions
4.3 Control Hardware
4.3.1 Actuator Drivers
4.3.2 Intel MAX 10 FPGA Development Kit
4.3.3 UP Board
4.3.4 Intel Realsense Camera
4.4 Hardware/Software Partitioning
4.4.1 UP Board
4.4.2 Max 10 FPGA

5 Software development
5.1 Overview of the Design Flow and Tool Chain
5.1.1 Hardware Design Flow
5.1.2 Software Design Flow
5.2 QSYS System
5.2.1 Clock and Reset
5.2.2 Avalon ALTPLL
5.2.3 Nios II Processor
5.2.4 Avalon MM Clock Crossing Bridge
5.2.5 DDR3 SDRAM Controller
5.2.6 System ID Peripheral
5.2.7 Interval Timer Core
5.2.8 JTAG to Avalon Master Bridge
5.2.9 SPI Core (3 Wire Serial)
5.2.10 OpenCores I2C
5.2.11 Custom FPGA IP: PWM Controller
5.3 FPGA Nios II
5.3.1 Implementation 1: Using the Adafruit PWM HAT
5.3.2 Implementation 2: Using the Terasic Servo Motor Kit
5.4 UP Board
5.4.1 Ubilinux OS
5.4.2 Libraries
5.4.3 Main Program
5.5 Software Summary

6 Experimental set-up and results
6.1 Image processing
6.2 Path filtering
6.3 Inverse kinematics
6.4 Trajectory planning
6.5 Execution times
6.6 Resource usage

7 Conclusion


List of Figures

1.1 Intel Programmable Solutions Group (PSG)
2.1 Collaborative robots: comparative table [1]
2.2 Collaborative robots: comparative table [1]
2.3 Industrial automation system hierarchy [2]
2.4 PLC architecture [3]
2.5 Baxter collaborative robot [4]
2.6 FPGA-based performance scaling methods [5]
2.7 FPGA as SoC Motion/Motor Control [5]
2.8 Cyclone V SoC Hard processor system [6]
3.1 Structure of the articulated manipulator [7]
3.2 Workspace of the articulated manipulator [7]
3.3 Side view of the studied robot
3.4 Top view of the studied robot
3.5 Geometric approach for inverse kinematics [8]
3.6 Elbow up and elbow down solutions [7]
3.7 Articulated manipulator [7]
3.8 Boundary singularity [7]
3.9 Internal singularity [7]
3.10 Linear trajectory for consecutive points: qualitative graph
3.11 LSPB trajectory: position, velocity and acceleration
3.12 LSPB trajectory for consecutive points: qualitative graph
3.13 S-curve trajectory
4.1 Lynxmotion AL5D dimensions
4.2 Servo position control
4.3 Block scheme of RC servos
4.4 Example of internal control circuit for RC servos
4.5 Block scheme using the Adafruit PWM HAT
4.6 Block scheme using the Terasic Servo Motor Kit
4.7 Adafruit PWM HAT
4.8 Terasic Servo Motor Kit
4.9 Intel MAX 10 FPGA Development Kit Overview
4.10 Intel Max 10 FPGA architecture
4.11 UP Board
4.12 Realsense camera
4.13 Realsense camera depth evaluation
4.14 Hardware/Software partitioning between hard processor system and FPGA
5.1 Hardware and software design flow [9]
5.3 Memory controller
5.4 SPI core configured as a slave [10]
5.5 SPI signals with clock phase=0 and clock polarity=0
5.6 PWM signal
5.7 Read and write timing
5.8 Flowchart of the main program running on the Nios II: implementation 1
5.9 SPI shift registers
5.10 Linear trajectory generation
5.11 Example of PWM output [11]
5.12 I2C protocol
5.13 Read/write a register in the PCA9685 [11]
5.14 Flowchart of the main program running on the Nios II: implementation 2
5.15 LSPB trajectory generation
5.16 Flowchart of the main program running on the UP Board
5.17 System summary
6.1 Running demonstration
6.2 Controller hardware set-up: UP Board, MAX 10 Development Kit and Terasic Servo Motor Kit
6.3 Terminal ASCII based representation of the depth frame
6.4 30 Hz point positions received from the UP Board, and 5 Hz point positions after filtering in the FPGA
6.5 Point position expressed in the Cartesian frame space
6.6 End-effector position expressed in the joint space
6.7 Base joint positions for trajectory planning
6.8 Speed and acceleration values from the trajectory generation for the base joint path in figure 6.7


List of Tables

4.1 Actuator specifications for the Lynxmotion AL5D robot arm
4.2 UP Board specifications
4.3 Realsense Camera specifications
5.1 Interval timer registers
5.2 SPI Core slave registers
5.3 OpenCores I2C registers [12]
5.4 PWM controller registers
6.1 Nios II execution times


Introduction

1.1 Problem Statement

An industrial robot, as defined by ISO 8373, is 'an automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which may be either fixed in place or mobile, for use in industrial automation applications'.

As robot technology has matured, manufacturers and end-users have begun to explore beyond the initial strategic advantages. Current trends in industrial robots include [13]:

• Robots for high-performance applications, where control strategies are different from those in assembly.
• Collaborative robotics coordination, where two or more robots work on one object.
• Collaborative coordination between robots and human workers.
• Versatile robots that can be easily trained and reprogrammed for different applications.
• Cost-efficient robots, with high reliability and productivity.

These new requirements are bringing radical changes in the design of the robots. They are smaller and slower, and may include image and force sensors for a safe interaction with the environment.

Robots satisfying these features can be included in the category of collaborative robots. They can be seen as the direct assistants of humans, working side by side and performing the repetitive and accurate tasks, leaving the creative and technical decisions to the human.

Changes in robot architectures bring more challenging requirements for the control system:

• Flexibility: robots may have different tasks and should adapt in time to changes in the work line.

• Integration: robots are smaller, but require more elements like additional sensors, safety modules, Human Machine Interfaces, etc.

• Performance: complex behavioral algorithms are the frontier of robot-human interaction, but are also very resource-intensive. Another distinctive factor for performance is the choice of the control strategy for the actuators, and its effectiveness.

• Safety: working side by side with humans, robots must meet extra requirements to be safety-compliant.


• Real-time response: both for safety and for performance, it is important to have predictable behavior within short times.

• Cost: keeping the cost at a reasonable level can be the key to the diffusion of automation in small and medium enterprises, which represent the target market for collaborative robots.

These requirements bring new parameters and priorities, and the control systems designed to meet the requirements of standard industrial robots may not be the best trade-off for this new type of collaborative robot.

The standard architecture is composed of an industrial PC for the high-level control plus several micro-controllers for the control of the single motors. For collaborative robots, this approach usually requires extra modules for the integration of sensors, industrial communication buses and safety modules. This leads to increased prices, or alternatively to reduced performance.

Improving these features while keeping a reasonable price is the direction of many studies, and can change even further the role of robots in the automation industry.

1.2 Technical Contribution

Figure 1.1: Intel Programmable Solutions Group (PSG)

This master thesis project is the result of a six-month internship program at the Intel Programmable Solutions Group (PSG). More specifically, the High Wycombe office hosts the System Solution Engineering (SSE) PSG group in Europe. This work is the initial part of a longer project that aims at exploring the potential of combining FPGAs and processors in robotic applications.

The architecture presented here offers a solution for a control system composed of a hard processor plus an FPGA.

The processor performs the high-level tasks, offering an environment where graphics programs and Human Machine Interfaces can easily be developed.

The FPGA provides the real-time control of the robot: it drives the motors of the arm and executes the low-level control of the system. The use of the FPGA as control unit leads to important advantages: more integration, more flexibility and better performance. The key feature is programmable logic, which allows developers to add application-specific functions that would normally require spinning a custom ASIC, or using multiple external modules.

1.3 Objectives

The project presented here aims to demonstrate the potential of processor-plus-FPGA systems in robotic applications.


The objective is to implement a prototype control system that performs the motion control of a 4 DOF articulated robot and integrates a 3D camera for interaction with the environment.

The main goals of the project are:

• 3D vision integration
• Kinematic transformations
• Trajectory planning
• Motor driving

In order to achieve the objectives of this project, the workflow was divided into the following phases:

1. Review of the state of the art in robotic control systems. Particular attention was given to collaborative robots and FPGA-based motor control systems.

2. Study of the theoretical background necessary for the motion control algorithms. Kinematics calculations, singularity calculations and trajectory planning for articulated manipulators were studied.

3. Choice of the hardware for the robot arm, control system and camera.

4. FPGA flow. Design of the PWM IP for the control of the motors. Design of the System on Programmable Chip. Development of the motion control in the Nios II processor, which involves the implementation of kinematics transformations, trajectory planning and motor driving.

5. Hard processor flow. Organization of the environment, with the operating system and libraries. Implementation of the 3D vision feature and the communication with the FPGA.

6. Integration of the FPGA module and the processor module in a working system, matching frequencies and timings for the motion of the arm.

The organization of the dissertation replicates the logic flow and is structured as follows:

• Chapter 2 provides a background on industrial automation and collaborative robots. On that basis, the possible advantages coming from the use of FPGAs are analyzed in depth.

• Chapter 3 contains the theoretical background about articulated manipulators. A detailed description of the algorithms used in the implementation for kinematics transformations and trajectory planning is provided.

• In Chapter 4 a description of the robot arm hardware is provided. Then a detailed overview of the required tasks is given, and on that basis the control system architecture is described in all its parts.

• In Chapter 5 a detailed description of the software implementation is presented for both the FPGA and the processor.

• Chapter 6 contains the experimental results.
• In Chapter 7 conclusions are drawn.


Background

2.0.1 Industrial Robotics Trends

The World Robotics Report 2016, released by the International Federation of Robotics (IFR), highlights the increasing need for robots in industry: the report predicts that worldwide annual sales of industrial robots will increase by at least 13% on average per year from 2017 to 2019. Industry sources estimate that about 90 percent of U.S. companies that could benefit from robotics are not taking advantage of this technology. There are over 300,000 small and medium-sized enterprises (SMEs) in the United States, and this number is growing. These companies represent the greatest potential for increasing the use of robotics, yet many of them must overcome several challenges to add robots to their manufacturing processes. The manufacturing environment especially has a wide range of potential applications for collaborative robots.

In some manufacturing processes, there are applications where it makes sense for workers to perform a task manually, and robots cannot help. In other applications, the best option is overall automation, and human interaction can only obstruct the work.

Collaborative robots are practical for many of the tasks that fall somewhere in between. In these situations, a worker needs to see, feel and react as needed, but the collaborative robot can handle certain physically taxing motions. It is a logical choice to introduce these easy-to-use robots, with their short payback period, into companies that need flexibility and that have to deal with low and variable production volumes. Also, the built-in safety decreases the infrastructural requirements and integration costs: this makes automation more affordable and reduces the time to bring a new work cell on line.

For example, tasks that are dangerous for the worker can be assigned to the robot to remove or lessen the risk. Companies are already using collaborative robots to dispense toxic products such as glue or other chemicals: using robots for these applications removes the worker from direct contact with the toxic product and lets them do value-added jobs such as quality assurance. Another interesting application is a robot placing several screws on an assembly, with a worker doing the final screwing operation or verification. Robot and worker can work alongside each other in collaboration to enhance the productivity of the worker and reduce their stress load due to the manufacturing rhythm [14].

2.1 Collaborative Robots

Collaborative robots (or cobots) are machines designed to be used alongside humans without risk for the worker. They are complex self-standing systems, which include the control, sensing and actuation of the robot. In a more general sense, the definition of collaborative robots can include the robotic cell, which is monitored and therefore safe for human co-workers, even if the robot itself is not collaborative. Collaborative robots generally combine some or all of the following characteristics:

• They are designed to be safe around people, either by force limiting to avoid injury if they touch, by sensors that prevent touching, or by a combination of both.
• They are often relatively lightweight and can be moved from task to task as needed.
• They do not require skill to program. Most cobots are simple enough that anyone who can use a smartphone or tablet can teach or program them.
• They are intended to assist, not replace, the production worker.

2.1.1 Applications

It is important to distinguish between collaborative robots and collaborative applications: the fact that the robot is collaborative does not make the application collaborative, and vice versa. We can identify four types of collaborative applications: safety monitored stops, hand guiding, speed and separation monitoring, and power-force limiting [15].

• In safety monitored stop applications the robot working area is monitored with a laser or a camera that detects the presence of humans. If a human is detected in a dangerous position, the actuators of the robot stop. The control system keeps working: the camera continues monitoring the area and the controller of the robot analyses the effective position of the robot. This kind of collaboration can be used, for example, if the robot has to work with a heavy item and a worker has to do some operations on the part. This method does not require low-speed or low-power robots, and it requires safeguarding of a traditional industrial robot. The benefit it offers is the ease and speed with which automatic operation can be resumed.

• With speed and separation monitoring, different safety zones are delimited in the robot working space. Certain zones allow maximum speed for the robot, while some zones require slower speed because of the potential proximity of the worker. The monitoring of the zones is mostly done using vision. This area division and flexibility can be useful in cases where collaboration between humans and robots is not constant and where the robot can work at maximum speed for most of the time.

• Hand guiding applications cover all robots that can be moved only with the direct input of the operator. The human worker teaches the robot which movements to perform. Force-torque sensors are usually used in these applications in order to detect the contact with the human co-worker.

• Power-force limiting applications are applications where the robot can detect an abnormal force being exerted on its body. In case of contact, only a limited force is imparted by the actuators.

2.1.2 Safety

ISO/TS 15066 provides guidelines about cobot safety. This technical specification from the International Organization for Standardization identifies safety requirements for collaborative industrial robot systems and the environments in which they operate. ISO/TS 15066 supplements the existing industrial robot safety standards ISO 10218-1 and ISO 10218-2. ISO/TS 15066 is based on the idea that if a human has incidental contact with a robot system, pain or injury should not occur. As such, the risk assessment required for power and force limiting (PFL) collaborative robots must incorporate data on pain threshold levels. Even with the cobots' built-in safety systems, ISO/TS 15066 requires a comprehensive risk assessment that evaluates the robot system and the environment in which it is placed. The goal is to validate that not only the robot but the complete work cell is safe [15]. In addition to physical safety features, it must be shown that a control system is sufficiently safe in the presence of faults. The general industrial functional safety standard, IEC 61508, exists to help the design of programmable electronic systems used in industrial machines which could cause a hazard to human life.

Manufacturers of collaborative robots are introducing new safety features and certifying their products to safety standards. For example, Bionic Robotics GmbH's “BioRob Arm” is accredited for safe human-robot collaboration by the German Trade Association in accordance with the European Union machinery directive 2006/42/EC [1]. Yaskawa has certified its collaborative robot, the Motoman HC10 (which can handle loads up to 10 kg and has a reach of 1.2 m), to ISO/TS 15066:2016, which specifies safety requirements for collaborative industrial robot systems and the work environment. The robot uses a force/torque sensor in each axis to help avoid dangerous collisions with operators. MRK-Systeme GmbH offers a 'conversion kit' to convert a Kuka KR 5 ARC HW robot into a force-limited robot. They call this Kleinroboter (small robot) KR 5 SI (for Safe Interaction). This can transform an ordinary industrial robot into a collaborative robot approved by DIN EN ISO 10218 (for general robotic devices) and eventually ISO/TS 15066 regulations [1].

There are two basic approaches for making collaborative robots safe.

One approach is to make the robot inherently safe. If it makes contact with a human co-worker, it immediately stops, so that the worker feels no more than a gentle touch. This approach limits the maximum load that the robot can handle as well as its speed: a robot moving a 50-lb part at high speed is going to hurt no matter how quickly it can stop upon making contact. This approach uses force and torque sensors along the body of the robot to detect contact with the human or the environment.

The design of the actuators can also influence the safety of the robot. [16] provides the well-known example of Series Elastic Actuators. They differ from traditional actuation techniques because they have a spring between the motor/gearing elements and the output of the actuator. This results in better control and in the absorption of impact energy.

This approach is mostly used in collaborative applications like hand guiding and power-force limiting, where the human needs to work side by side with the robot at the same time.

Another sensor-based approach uses cameras or laser sensors to monitor the robot cell. This allows collaborative use in faster and heavier applications. This is the approach of applications like monitored stop or, in a more sophisticated way, speed and separation monitoring. Modern sensors can be very discriminating, sensing not only the presence of a person but their location as well. This allows the robot to slow down, work around the person or stop according to the situation. When the person moves away, the robot can automatically resume normal operation. [17] offers a good example of an implementation of this approach, using a camera with a depth sensor for detecting the presence of obstacles, and Variable Stiffness Actuators.

Other than sensors and actuators, collaborative robots also have special geometries that allow soft impacts. In fact, compared to industrial robots, which have sharp edges, collaborative robots have rounder shapes and integrated features that reduce the risk of pinch points and the severity of an impact.


2.1.3 Comparison

Figures 2.1 and 2.2 below compare the most popular collaborative robots available on the market.

Figure 2.1: Collaborative robots: comparative table [1]

2.2 Common Architectures for Industrial Automation Systems

After this overview of collaborative robots, an analysis of the common organization and control architectures used in industrial robotics is provided, in order to give a technical context for collaborative robots.

Figure 2.2: Collaborative robots: comparative table [1]

Industrial automation (IA) can be defined as the use of a set of technologies and automatic control devices for the operation and control of industrial processes. An industrial automation system platform consists of different elements which perform functions like sensing, control, supervision and monitoring related to the industrial process.

The common organization of industrial automation systems is a three-level hierarchy. Figure 2.3 illustrates in more detail the three main levels as well as the intercommunications between them.

2.2.1 Supervisor Level

The supervisor level performs high-level control over the industrial platform. Within this level there are mainly PC-based systems, equipped with a standard OS (e.g., Windows Embedded), supplier-specific industrial process-control software for process parametrization and visualization, and Human Machine Interfaces. The supervisor level usually does not contain any control architecture such as PLCs. These PCs are responsible for functions like set-point computation, performance monitoring, diagnostics, start-up, shut-down and other emergency operations. This level is typically linked to the control level to collect data from the system, in order to be able to present the system state or to provide input to the system. The connection is implemented via a non-time-critical communications system (e.g. Ethernet).

Figure 2.3: Industrial automation system hierarchy [2]

2.2.2 Control Level

This level includes the robot controllers (programmable logic controllers, PLCs) where the automation programs are executed. A programmable logic controller is a special form of microprocessor-based controller that uses a programmable memory to store instructions and to implement functions such as logic, sequencing, timing, counting and arithmetic in order to control machines [3]. Input devices (sensors) and output devices (actuators) are connected to the PLC. PLC systems are mainly modular, to allow customized and optimized configurations (modular I/O-interface boxes, communication interfaces, function modules, etc.). Figure 2.4 shows the common PLC structure.

Peripheral controllers can also be installed: these systems take over some pre-processing of I/O data to reduce the working load of the main PLC. The data of the control system and the communication buses interact directly with the actuators and sensors. This requires that all system components, including the network and its computer nodes with their hardware and software infrastructure, work in real-time mode.

Figure 2.4: PLC architecture [3]

2.2.3 Field Level

The field level is the lowest level of the automation hierarchy, and includes sensors and actuators. Sensors convert real-time parameters like temperature, pressure and flow into electrical signals. These sensor data are then transferred to the controller in order to monitor and analyze the real-time parameters. On the other hand, actuators convert the electrical signals (from the controllers) into mechanical action to control the processes. This level can be described as the eyes and arms of a particular process. At this level there can also be peripheral PLCs or remote I/O systems offering pre-processing of the collected data and communicating with the main PLC via a field bus. Communication between a peripheral PLC and the end device is usually a point-to-point connection. There are also established intelligent communication standards, which provide, in addition, remote parametrization and diagnostics of the end device. Some field-level devices (e.g. vision sensors) still require interfacing with higher bandwidth; for this purpose field-bus based sensors are also used.

2.2.4 Interconnects

This kind of hierarchical organization is distributed among several modules, and needs interconnects between the different parts.

The real-time features of these communication networks change according to the level and the kind of data transmitted. Usually the connection between the high level and the control level is implemented via a non-time-critical communications system (e.g. Ethernet). Between the control level and the field level, real-time-compliant field bus protocols are implemented (according to IEEE 1588) [18]. The typical kind of data exchange is cyclic, based on a predefined scheme. All network activities are executed within a network cycle, which has the form of one repeated time-window or a repeated series of time-windows. The most common Ethernet-based protocols are: EtherCAT, EtherNet/IP, PROFINET, POWERLINK, SERCOS III and Modbus/TCP.

2.3 Robotics Software Platforms

For the development of robotic systems, large vendors have developed their own proprietary software platforms. However, for research, hobbyists and some smaller companies, several open-source platforms exist. The most famous are the Robot Operating System, Orocos and Microsoft Robotics Developer Studio. The OSGi Alliance platform [19] is also used by KUKA: it is a service platform for the Java programming language, and can be used for the development of distributed systems like robots. An overview of the Robot Operating System is presented here.

2.3.1 Robot Operating System

ROS (Robot Operating System) [20] is a BSD-licensed system for controlling robotic components from a PC. It runs on Linux and provides services like hardware abstraction, low-level device control, implementation of commonly-used functionality, message-passing between processes, and package management.

A ROS system is composed of a number of independent nodes, each of which communicates with the other nodes using a publish/subscribe messaging model. For example, a particular sensor's driver might be implemented as a node which publishes sensor data in a stream of messages. These messages could be consumed by any number of other nodes, including filters, loggers, and higher-level systems such as guidance, path-finding, etc. Note that nodes in ROS do not have to be on the same system or even of the same architecture. This makes ROS flexible and adaptable to the needs of the user. For example, referring again to figure 2.3, a node can run on a general-purpose PC and act as the supervisor level, while other nodes interact with embedded controllers and implement the control unit and the sensor interfaces, keeping contact with the supervisor. ROS is heavily utilized by the research community for service robotics applications, but the system can be applied to other application areas, including industrial robotics.
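To make the publish/subscribe model concrete, here is a minimal roscpp node that publishes a stream of sensor readings. This is an illustrative sketch only, not part of this thesis's implementation (which does not use ROS); the node name, topic name, message type and rate are arbitrary examples.

```cpp
// Minimal ROS publisher node: a sensor driver publishing readings on a topic.
// Any number of other nodes (filters, loggers, planners) can subscribe to it.
#include <ros/ros.h>
#include <std_msgs/Float32.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "depth_sensor_node");   // register the node with the ROS master
  ros::NodeHandle nh;
  ros::Publisher pub =
      nh.advertise<std_msgs::Float32>("closest_point_distance", 10);  // 10 = queue size

  ros::Rate rate(30);                           // publish at 30 Hz, like a camera
  while (ros::ok()) {
    std_msgs::Float32 msg;
    msg.data = 1.25f;                           // placeholder reading; a real driver
    pub.publish(msg);                           // would query the sensor here
    rate.sleep();
  }
  return 0;
}
```

A subscriber node registers a callback for the same topic with nh.subscribe(), and the ROS middleware routes the messages between the two processes, whether or not they run on the same machine.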

2.3.2 ROS Industrial

One of the historic problems with technology is that closed proprietary systems slow down the development of systems which cannot interact easily [21]. With a common non-proprietary structure many developers can solve innumerable problems simultaneously. This moves robot development along at a pace which is equal to the demand for robot applications.

ROS-Industrial is an open-source project that extends the advanced capabilities of ROS to manufacturing automation and robotics. The ROS-Industrial repository includes interfaces for common industrial manipulators, grippers, sensors, and device networks. ROS runs on Linux, which is not a real-time operating system. However, a robot control system needs timing guarantees: this drives robot designers to partition robot systems into real-time and non-real-time subsystems.

In a typical implementation, ROS on Linux plans the motion of the robot arm and passes the plan to a real-time controller that executes it, moving the arm in real time [22]. The ROS-Industrial movement has gained momentum in the past few years owing to the large amount of research done in that field. However, this platform is not always embraced by companies, due to its open-source nature and the processing resources required.

2.4 A Renowned Example: Baxter

Overview

Baxter is a humanoid, anthropomorphic robot from Rethink Robotics [4][23]. It is one of the first and most famous cobots on the market. It consists of two 7 DOF arms with Series Elastic Actuators [16] at each joint, incorporating full position and force sensing. The robot is also equipped with three integrated cameras, along with sonar, accelerometers and range-finding sensors. The processing hardware of Baxter is an Intel Core i7-3770 processor, with the software implemented with ROS (Robot Operating System). The main feature of Baxter, besides the collaborative behavior, is the ease of the human interface. Another strong point of Baxter is its price: it costs 20,000 dollars, a very reasonable price for a complex industrial robot compared to its competitors.

(25)

Figure 2.5: Baxter collaborative robot [4]

Control System

The control system [4] is implemented with a main PC and several microcontrollers. The PC acts as the PLC of the system and performs the calculations for the movement and the behaviour of the robot, with the microcontrollers (one for each actuator) performing the control of each motor.

Baxter's arm control flows through four layers of operation:

1. User Code running via workstation.

2. Joint Control Listeners via ROS (on Baxter’s internal Linux PC).

3. ”RealTime” Motor Control Loop (the highest priority process on Baxter’s internal Linux PC)

4. Joint Control Boards (micro-controllers attached to each arm joint)

Note that, despite the effort to make the system as responsive as possible, ROS is not a real-time OS. However, this does not affect the safety of the system, since the Series Elastic Actuators with their micro-controllers form a real-time subsystem. They directly measure the torque at every joint, monitoring for sudden impacts, soft obstructions, or excess forces or torques, and stopping the robot.


2.5 Potential of FPGAs for Robotic Systems

To conclude the study of collaborative robots and robot architectures, we can summarize the main challenges that their control systems have to face:

• High performance.
• Combination of deterministic control and high-level intelligence.
• Flexibility in the integration of sensors, communication protocols and actuators.
• Safety compliance.
• Real-time response.
• Cheap implementation.

Combining all these features into one solution is challenging. If we consider for example the cost constraint: the time-to-market has to be shortened, the price of the controller devices has to be low and their energy consumption reduced.

This cost reduction is even more challenging if we consider that these control systems must be based on sophisticated control algorithms, which need increasing computing resources and reduced execution time.

For the implementation, designers choose between two main families of digital device technologies.

The first family is based on a pure software platform. The associated devices are micro-controllers and Digital Signal Processors (DSPs). These components integrate a high-performance microprocessor core along with several peripherals which are necessary to control the targeted system in real time and to communicate with the industrial environment.

Most often the whole control system is implemented with a main processor that performs the calculations for the general behaviour of the robot and several microcontrollers that implement the direct control of the actuators. The main advantages of this approach are the maturity of these technologies, the quality of the associated development tools as well as their low price.

The alternative family of digital devices available for implementing industrial control systems is Field Programmable Gate Array (FPGA) technology. Below are the features that make FPGAs the best solution for cobot control systems, in comparison with the pure software solution.

2.5.1 Flexibility

The main advantage of an FPGA is its flexibility. Flexibility makes it possible to rapidly implement or reprogram the logic in an FPGA for a specific feature or capability. This flexibility gives big advantages in system integration and reconfigurability.

System integration

Microprocessors, customized digital logic components, custom peripheral interfaces (e.g. PWM, speed/position sensors) can be integrated on a single FPGA device.

For the safety requirements, Intel FPGA provides devices, IP, an established safety FPGA design flow, and development tools for safety designs up to Safety Integrity Level 3 (SIL3). Integrating a SIL3 “safe” solution in a pre-qualified FPGA with standard application functions on the main board not only lowers the safety cost footprint, but also enhances system flexibility and shortens development time.

FPGAs allow secure communication. Open Secure Socket Layer encryption can be implemented in the FPGA fabric. This encryption enables faster and more secure enterprise communication channels.

Another important use of the integration capabilities of FPGAs concerns the industrial communication network [24]. Designers must contend with the fragmented nature of Fieldbus and Industrial Ethernet protocol standards. FPGAs can implement multiple Industrial Ethernet protocols simultaneously on a single device by instantiating ready-made intellectual property (IP) cores. Protocols are enabled by downloading the relevant protocol stacks, which can be executed in the built-in SoC FPGA Hard Processor System (HPS) (figure 2.8) [5].

Reconfigurability

Programmable hardware provides performance comparable to an ASIC, but has the flexibility to change the functionality of IPs, adapt to new interfaces, etc. This is a very important feature, because it protects the robotic system from obsolescence and allows hardware revisions during development with just a change in software.

2.5.2 High Performance

Unlike sequential processors, FPGAs are massively parallel and can greatly accelerate algorithm execution. With embedded digital signal processing (DSP) blocks and on-chip memory, FPGAs offer faster hardware acceleration at lower cost and power consumption than conventional processors. The high performance of FPGAs comes from their flexibility, which makes it possible to realize a fully optimized circuit for each application [5]. FPGAs provide the following ways to scale the processing performance, as illustrated in figure 2.6:

• Use a high-performance external processor along with one or more embedded processors inside the FPGA. It is also possible to integrate all processing functions into the FPGA as an SoC.

• Add custom instructions in line with the processor code to accelerate specific processor instructions; for example, floating-point calculations.

• Accelerate data transformations with application-specific hardware, like DSP blocks.

On the other hand, FPGAs can implement a dedicated data path and control logic for low latency and jitter. This is essential in real-time control systems, where the success of the tasks depends on low latency and deterministic performance. The high-performance FPGA fabric can easily meet the stringent IEEE 802.1 TSN timing requirements.

2.5.3 FPGAs and System on Chips

Usually the question of whether to implement a function in software or hardware comes down to a decision of flexibility versus scalability. Implementing in software usually results in a rapid and cheap solution, but often to the detriment of performance and scalability. In the case of robot controllers, the growing complexity of systems and algorithms is making the software solution unsatisfactory in terms of performance.


Figure 2.6: FPGA-based performance scaling methods [5]

Current FPGAs allow the coexistence in the same device of hardware designs and efficient processors. As a consequence, FPGAs can be viewed first of all as programmable microcontrollers where designers can combine one or several RISC processors with dedicated peripherals and computing hardware accelerators. The user can choose which functionalities to implement in hardware and which in software. While it takes more time to get a feature into an FPGA than to implement it in software, the time-to-market is still considerably less than doing a similar feature in an ASIC. Designers can also reprogram the FPGA fabric to incorporate hardware changes, avoiding major redesign of entire systems.

Integrating design components on a single SoC FPGA device platform further simplifies design complexity and reduces overall system cost. An SoC FPGA can off-load the PLC's processor by implementing high-performance algorithms and HMI in the hardware fabric. Examples are the Cyclone V SoC FPGA (ARM Cortex A9 + FPGA logic) and the MAX 10 (Nios II soft processor + FPGA logic). Figure 2.7 illustrates a simple industrial motor control system in which the FPGA functions as the SoC, integrating DSP blocks, memory, video graphics controllers, motor encoders, and other components. One can simply add PHYs and other analog and power components to complete the design [5]. Figure 2.8 shows how the Cyclone V SoC integrates an ARM-based hard processor system (HPS), consisting of processor, peripherals, and memory interfaces, with the FPGA fabric using a high-bandwidth interconnect backbone. These devices include additional hard logic such as PCI Express Gen2, multiport memory controllers, error correction code (ECC), memory protection and high-speed serial transceivers. Industry has shown that high-end FPGAs are growing in density while handling higher-speed applications and more complex designs. Furthermore, the evolution of FPGAs over the years has followed Moore's Law, just like that of CPUs, in terms of the amount of logic that can be implemented in them.

2.5.4 Cost and Time to Market

Cost and time to market are two features where software solutions are very competitive: they are very cheap and very easy to program. FPGAs are still more expensive compared to their DSP and microcontroller competitors. However, we have seen how a microprocessor-based control system typically is composed of more elements than the simple processing unit: it sometimes has several microprocessors, DSPs, drivers and communication interfaces, which make the system more complex and may increase both the cost and the time to market. On the other hand, until recently, FPGAs required detailed hardware design expertise on the part of the application developer. More recently, software-to-hardware tools have emerged that allow application developers to describe and generate complete software/hardware systems using higher-level languages. For example, DSP Builder for Intel FPGAs is a digital signal processing (DSP) design tool that allows HDL generation of DSP algorithms from the MathWorks Simulink environment.

Figure 2.7: FPGA as SoC Motion/Motor Control [5]

Figure 2.8: Cyclone V SoC Hard processor system [6]

2.5.5 State of the Art

FPGA-based motor controllers

On the motor control side, Intel FPGA offers the following motor control IPs:

• Field Oriented Control (FOC)
• Pulse Width Modulator (PWM)
• Sigma-Delta ADC Interface
• DC-Link Voltage Monitor
• Drive on Chip System Monitor
• Encoder Interface IP
• Vibration Suppression (FFT, Peak Detection, Notch Filter)

FPGA in Advanced Control

In [25], non-linear adaptive and 'computed torque' algorithms are partitioned between a DSP and an FPGA.

[25] demonstrates the FPGA's capability for high-frequency control loop updates, achieving 120 kHz current control and 20 kHz velocity control. Computationally intensive control algorithms like sensorless motor control [26], direct torque control [27] and model-predictive control [28, 29, 30] can be enabled by FPGAs. The hardware description language code can be generated automatically from languages including C [28] and Simulink [31]. An FPGA-based motion controller for DC motors is used in [32], with the addition of image processing IP for stereo CMOS image sensors.

FPGA-based robot systems

In [33], kinematic calculations are implemented in hardware in an FPGA for faster execution. The 'Motion controller IP' also includes 5-axis speed and position control loops, parallel quadrature encoder interfaces and PWM generation blocks; the motors are 24 V geared DC motors. In [34] an example is presented of an FPGA-based bilateral control system based on acceleration control; the application is a surgical robot with haptic feedback. [35] provides a good example of control for a 7 DOF robot arm: the system provides torque control in each joint, implemented with a DSP for motion control and an FPGA for the control of the single actuators. In [36] and [37] the FPGA implementation of the control of 5 DOF and 6 DOF robotic arms with position feedback is presented. In [8] an FPGA is used for the control of a 4 DOF robot arm similar to the one used in this project, with a similar approach for the kinematics solution. [38] provides an example of FPGA-based real-time acceleration of robot motion planning; it uses four 3D cameras similar to the one used in this project.

Despite the many explored applications of FPGAs in motor control and robotics, an FPGA-based control system for a collaborative robot has not been implemented yet. In order to fill this technical gap, a control system that takes advantage of the FPGA features described above is developed in this work.


Review of articulated robots

3.1 Structure

The articulated manipulator, also called a revolute or anthropomorphic manipulator, is the robot architecture used in this project [7]. All the joints of an articulated robot are single-axis revolute joints. Here we will focus in particular on the 4 DOF manipulator: a 3 DOF planar arm plus the rotating base. The structure of an articulated robot arm with three revolute joints is very similar to the human arm.

The robot has a base fixed to the ground. The body of the robot is connected to the base with a revolute joint which rotates along the axis of the base. The first link is connected to the base through the shoulder, whose axis is perpendicular to the base joint axis. One more link is connected to the arm through a revolute joint parallel to the shoulder joint: this joint forms the elbow of the articulated robot. Finally, a wrist and a gripper are attached to the last link.

The positions of base, shoulder and elbow define the position of the end-effector, while the wrist defines the orientation.

Figure 3.1: Structure of the articulated manipulator [7]

The workspace of a manipulator is the total volume reached by the end-effector as the manipulator executes all possible motions. The workspace is constrained by the geometry of the manipulator and by mechanical constraints of the joints. We can find two more accurate definitions of workspace: reachable workspace and dexterous workspace.

Reachable workspace: the locus of points that can be reached by the end-effector with at least one orientation.


Dexterous workspace: the locus of points that can be reached by the end-effector with any orientation and without singularities.

We will deal with singularities later in this chapter. It is clear that the points of the dexterous workspace are a subset of the points of the reachable workspace [7].

Figure 3.2: Workspace of the articulated manipulator [7]

3.2 Kinematics

Kinematics is the description of the motion of the robot without regard to the forces that cause it. The kinematics problem of any robot manipulator comes in two forms:

• Forward kinematics: the position of the robot's end effector is evaluated from the joint positions and the geometry of the robot. It can be defined as the transformation from joint space to Cartesian space.
• Inverse kinematics: the position of each joint is evaluated from the position and orientation of the end-effector and the geometry of the robot. It can be defined as the transformation from Cartesian space to joint space.

The solution for kinematics problems can be found in two ways: with the algebraic approach or with the geometric approach.

• The algebraic approach is used for complex robots, and is based on the study of the relative rotation and translation of the joints. There are many ways to represent rotation, including the following: Euler angles, the Gibbs vector, Cayley-Klein parameters, Pauli spin matrices, axis and angle, orthonormal matrices, and Hamilton's quaternions. Of these representations, homogeneous transformations have been used most often in robotics. The Denavit-Hartenberg (DH) convention is also very popular and has become the standard for describing robot kinematics. In the DH convention a reference frame is assigned to each link, and homogeneous transformation matrices are used to describe the relative position/rotation of these frames (see the sketch after this list). The reference frames are assigned according to a particular convention, and therefore the number of parameters needed to describe the pose of each link, and consequently of the robot, is minimized.

• The geometric approach is used when the robot has fewer than 4 DOF or when its structure can be decoupled into multiple planar geometry problems. In the case of the 6 DOF articulated arm, for example, one common approach is to split the manipulator into two parts: the arm and the end effector. The first part is responsible for locating the center of the wrist, while the second determines the orientation of the wrist. The calculation of kinematics with this approach is very fast and less computationally expensive than the algebraic one. The algebraic approach requires matrix calculations and the solution of systems of equations. On the other hand, trigonometric functions can easily be implemented in look-up tables, reducing the computational expense considerably. If the system has limited calculation resources, or if the calculations need to be executed at run time, it is the best choice for arms with few DOF. For these reasons the geometric approach is used in this implementation.

Figure 3.3: Side view of the studied robot

Figure 3.4: Top view of the studied robot
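For reference, the homogeneous transformation mentioned in the description of the DH convention above has a standard closed form. The following is the textbook expression (not taken from this thesis) of the transformation between the frames of two consecutive links, built from the four DH parameters $\theta_i$, $d_i$, $a_i$, $\alpha_i$:

$$A_i = \mathrm{Rot}_{z,\theta_i}\,\mathrm{Trans}_{z,d_i}\,\mathrm{Trans}_{x,a_i}\,\mathrm{Rot}_{x,\alpha_i} = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Chaining these matrices, $T = A_1 A_2 \cdots A_n$, gives the pose of the end effector; this is the algebraic counterpart of the geometric solution used in this work.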

The following sections explore in more detail the solution of forward and inverse kinematics with the geometric approach for the studied articulated manipulator.

3.2.1 Forward Kinematics

In this implementation the wrist angle is always set so that the gripper is parallel to the ground. For the calculation of the end-effector position it is sufficient to evaluate the wrist position and then add the gripper offset contribution. For this reason the robot can be treated as a planar 3 DOF manipulator (2 DOF for the arm plus the rotating base). From the definition of forward kinematics we can easily find the wrist position (see figure 3.2):

$$d = l_1 \cos\theta_1 + l_2 \cos(\theta_1 + \theta_2) \quad (3.1)$$

$$z = b + l_1 \sin\theta_1 + l_2 \sin(\theta_1 + \theta_2) \quad (3.2)$$

$$x = d \cos\theta_0 \quad (3.3)$$

$$y = d \sin\theta_0 \quad (3.4)$$

The end-effector position can be evaluated by adding the gripper length $l_{gr}$ (the gripper is horizontal):

$$x_{gr} = x + l_{gr} \cos\theta_0 \quad (3.5)$$

$$y_{gr} = y + l_{gr} \sin\theta_0 \quad (3.6)$$

$$z_{gr} = z \quad (3.7)$$
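As an illustration, equations (3.1)-(3.7) translate directly into a few lines of code. The sketch below is not the thesis's Nios II implementation (which, as noted above, can use look-up tables for the trigonometric functions); it uses the standard math library, and the link lengths are hypothetical placeholders:

```cpp
// Forward kinematics of the 4 DOF arm (rotating base + planar 2 DOF arm),
// following equations (3.1)-(3.7). Angles in radians, lengths in millimetres.
#include <cmath>
#include <cstdio>

struct Pose { double x, y, z; };

// Hypothetical geometry; the real values come from the robot arm's dimensions.
const double l1  = 146.0;  // upper arm length
const double l2  = 187.0;  // forearm length
const double b   = 70.0;   // shoulder height above the ground
const double lgr = 90.0;   // gripper length (gripper kept parallel to the ground)

Pose forwardKinematics(double th0, double th1, double th2) {
  double d = l1 * std::cos(th1) + l2 * std::cos(th1 + th2);     // (3.1) horizontal reach
  double z = b + l1 * std::sin(th1) + l2 * std::sin(th1 + th2); // (3.2) wrist height
  double x = d * std::cos(th0);                                 // (3.3)
  double y = d * std::sin(th0);                                 // (3.4)
  // (3.5)-(3.7): add the horizontal gripper offset along the base direction.
  return Pose{x + lgr * std::cos(th0), y + lgr * std::sin(th0), z};
}

int main() {
  Pose p = forwardKinematics(0.3, 0.8, -0.5);
  std::printf("end effector: x=%.1f y=%.1f z=%.1f\n", p.x, p.y, p.z);
  return 0;
}
```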


3.2.2 Inverse Kinematics

[7][39] Inverse kinematics calculations are much more complicated than forward kinematics. Forward kinematics always has a solution, and always exactly one. Inverse kinematics can present more problems: it can have a solution or not, or it can give an infinite number of solutions, and in some situations, even if a solution exists, it can be unreachable because of the construction of the robot. The algorithm based on the geometric approach used in the implementation is described here (refer to figure 3.5).

Figure 3.5: Geometric approach for inverse kinematics [8]

First, the base angle θ0 is obtained:

θ0 = arctan(y/x)    (3.8)

The solution exists unless x = y = 0; in this case θ0 is undefined and the manipulator is in a singular configuration (see figure 3.9). Note that a second valid solution would have been θ0 = π + arctan(y/x); this of course leads to further possible solutions for θ1 and θ2.

The wrist coordinates can easily be evaluated from the end-effector coordinates by subtracting the grip offset. In our application the grip is always parallel to the ground:

θ4 = 0    (3.9)

x' = x − grlength cos θ0    (3.10)

y' = y − grlength sin θ0    (3.11)

The distance d is evaluated:

d = √(x'² + y'²)    (3.12)

z' = z − b    (3.13)

From the coordinates of the wrist (x', y', z') we can evaluate the joint angles. First, the distance d1 between the wrist and the shoulder is found:

d1 = √(d² + z'²)    (3.14)

Then we can evaluate the intermediate angles α1 and α2:

α1 = arctan(z'/d)    (3.15)

α2 = arccos((l1² − l2² + d1²) / (2 l1 d1))    (3.16)

where α1 is the angle between d1 and the ground, while α2 is the angle between d1 and l1. We can finally find:

θ1 = α1 + α2    (3.17)

θ2 = arccos((l1² + l2² − d1²) / (2 l1 l2))    (3.18)

θ3 = 270° − θ1 − θ2    (3.19)

We can see that even with a very simple 2DOF manipulator there can be two possible solutions for the inverse kinematics. The arccos() function in this algorithm gives a solution between 0 and π, but it could have been defined between −π and 0: this corresponds to the elbow up and elbow down configurations (figure 3.6). The number of possible solutions increases with the number of DOF: a 6DOF manipulator may have up to 16 solutions.

Figure 3.6: Elbow up and elbow down solutions [7]
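The whole geometric algorithm, equations (3.8)–(3.19), condenses into a few lines of C. The sketch below follows the elbow-up branch discussed above and reuses the placeholder constants of the forward kinematics sketch; the reachability test and the function name are illustrative additions.

```c
#include <math.h>

/* Geometric inverse kinematics, equations (3.8)-(3.19), elbow-up
 * branch. Returns 0 on success, -1 if the target is singular or
 * outside the reachable workspace.                                */
static int inverse_kin(double x, double y, double z,
                       double *th0, double *th1, double *th2, double *th3)
{
    if (x == 0.0 && y == 0.0)
        return -1;                      /* theta0 undefined (singularity) */
    *th0 = atan2(y, x);                 /* (3.8); atan2 picks the quadrant */

    double xw = x - GR_LENGTH * cos(*th0);   /* wrist position, (3.10) */
    double yw = y - GR_LENGTH * sin(*th0);   /* (3.11) */
    double d  = sqrt(xw * xw + yw * yw);     /* (3.12) */
    double zw = z - BASE_H;                  /* (3.13) */
    double d1 = sqrt(d * d + zw * zw);       /* (3.14) */

    if (d1 > L1 + L2 || d1 < fabs(L1 - L2))
        return -1;                      /* outside the reachable workspace */

    double a1 = atan2(zw, d);                                   /* (3.15) */
    double a2 = acos((L1*L1 - L2*L2 + d1*d1) / (2.0*L1*d1));    /* (3.16) */
    *th1 = a1 + a2;                                             /* (3.17) */
    *th2 = acos((L1*L1 + L2*L2 - d1*d1) / (2.0*L1*L2));         /* (3.18) */
    *th3 = 1.5 * M_PI - *th1 - *th2;    /* (3.19): 270 deg in radians */
    return 0;
}
```

The reachability test also guarantees that the arguments of acos() stay within [−1, 1], so no additional clamping is needed.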

3.3 Differential Kinematics: the Jacobian Matrix

[7] For the study of the robot movement, the relationship between the joint positions and the end-effector position is not the only important piece of information: equally important is the relationship between the joint velocities and the end-effector velocity. This helps in the definition of the movement and in the control of the joints. In this implementation the Jacobian matrix is not evaluated in the algorithm; it is presented here for the study of the manipulability of the robot and for the computation of singularities.

Singularities are manipulator configurations in which one or more degrees of freedom become redundant. This reduces the ability of the robot to move in 3D space close to a singularity, even though the area may be well within its workspace.

It is important to evaluate the singularities in order to avoid them in the motion control. The analytical expression of the Jacobian is obtained by differentiating the vector

x = f(q) ∈ R^6    (3.20)

The vector x defines the position (in Cartesian coordinates) and the orientation (according to some convention) of the end effector. The vector q defines the joint positions


corresponding to the position x of the end effector. The function f() describes the forward kinematics equations of the robot. By differentiating f(q), one obtains

dx = (∂f(q)/∂q) dq = J(q) dq    (3.21)

where the matrix J(q) ∈ R^{6×n} is the Jacobian matrix. If an infinitesimal period of time dt is considered, one obtains

dx/dt = J(q) dq/dt    (3.22)

dx/dt can be seen as the set of translational and rotational velocities of the end-effector with respect to the base frame F0 [40]. The translational velocity of the point p (the end effector) with respect to F0 is defined as the derivative of p with respect to time, denoted as

ṗ = dp/dt    (3.23)

As for the rotational velocity, a triple γ ∈ R^3 gives the orientation of the end-effector frame with respect to F0, and its derivative is used to define the rotational velocity

γ̇ = dγ/dt    (3.24)

The elements J_{i,j} of the Jacobian are nonlinear functions of q(t); therefore these elements change in time according to the values of the joint variables. The Jacobian dimensions depend on the number n of joints and on the dimension of the considered operational space (usually 6).

3.3.1 Example: Jacobian of a 3DOF Manipulator

Figure 3.7: Articulated manipulator [7]

From equations 3.1, 3.2, 3.3 and 3.4, the end-effector position is:

x = cos θ0 [l1 cos θ1 + l2 cos(θ1 + θ2)]    (3.25)

y = sin θ0 [l1 cos θ1 + l2 cos(θ1 + θ2)]    (3.26)

z = b + l1 sin θ1 + l2 sin(θ1 + θ2)    (3.27)


The rotation of the end effector with respect to the base can be seen as two consecutive rotations: the first of θ1 + θ2 about y0, and the second of θ0 about z0.

Therefore, taking the derivatives, the Jacobian matrix is:

J(q) =
⎡ −sin θ0 [l1 cos θ1 + l2 cos(θ1+θ2)]   −cos θ0 [l1 sin θ1 + l2 sin(θ1+θ2)]   −l2 cos θ0 sin(θ1+θ2) ⎤
⎢  cos θ0 [l1 cos θ1 + l2 cos(θ1+θ2)]   −sin θ0 [l1 sin θ1 + l2 sin(θ1+θ2)]   −l2 sin θ0 sin(θ1+θ2) ⎥
⎢  0                                     l1 cos θ1 + l2 cos(θ1+θ2)             l2 cos(θ1+θ2)        ⎥
⎢  0                                     0                                     0                    ⎥
⎢  0                                     1                                     1                    ⎥
⎣  1                                     0                                     0                    ⎦    (3.28)

3.4 Singularity Calculation

[7] [40] In general, rank(J(q)) = min(6, n). We have seen that the elements of the Jacobian matrix are functions of the joint variables; therefore, some robot configurations exist in which the Jacobian loses rank. These configurations are called kinematic singularities. In such configurations there are “directions” in R^6 without any corresponding “direction” in R^n. The singular configurations are important for several reasons:

• They represent configurations in which the motion capabilities of the robot are reduced: it is not possible to impose arbitrary motions of the end-effector.

• In the proximity of a singularity, small velocities in the operational space may gen-erate large (infinite) velocities in the joint space.

• They correspond to configurations that do not have a well defined solution to the inverse kinematic problem: either no solution or infinite solutions.

We can easily calculate the singularities in the previous example of the 3DOF articulated manipulator. The matrix loses rank when det(J(q)) = 0; since only the first three rows of (3.28) depend on the joint variables, for the evaluation of the singularities we take into account the 3×3 block formed by these rows.

det(J(q)) = l1 l2 sin θ2 [l1 cos θ1 + l2 cos(θ1 + θ2)]    (3.29)

Therefore, for a 3DOF manipulator, there are two types of singularity:

• Boundary singularities, when θ2 = 0° or θ2 = 180°, which correspond to points on the border of the workspace, i.e. when the robot is either fully extended or fully retracted (figure 3.8). These singularities can easily be avoided by not driving the manipulator to the border of its workspace.

• Internal singularities, when l1 cos θ1 + l2 cos(θ1 + θ2) = 0, which occur inside the reachable workspace and are generally caused by the alignment of two or more axes of motion, or by the attainment of particular end-effector configurations. For example, this happens when the x, y coordinates of the wrist are 0 (figure 3.9). These singularities constitute a serious problem, as they can be encountered anywhere in the reachable workspace for a path planned in the operational space.


Figure 3.8: Boundary singularity [7]

Figure 3.9: Internal singularity [7]
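A possible numerical guard based on equation (3.29) is sketched below, reusing the placeholder link lengths of the earlier sketches. The determinant is normalized to be dimensionless, and the threshold is an illustrative assumption to be tuned experimentally.

```c
#include <math.h>
#include <stdbool.h>

#define SING_EPS 1e-2   /* illustrative threshold, tune experimentally */

/* Proximity check based on equation (3.29).
 * sin(th2) -> 0                         : boundary singularity
 * l1*cos(th1) + l2*cos(th1+th2) -> 0    : internal singularity
 * Both factors are normalized to [-1, 1] so that SING_EPS is
 * dimensionless and independent of the link lengths.            */
static bool near_singularity(double th1, double th2)
{
    double radial = (L1 * cos(th1) + L2 * cos(th1 + th2)) / (L1 + L2);
    return fabs(sin(th2) * radial) < SING_EPS;
}
```

Such a check can be run on each target of a planned path, rejecting or re-planning points that would drive the arm too close to a singular configuration.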

3.5 Path Finding and Trajectory Planning

A path denotes the locus of points, in the robot's joint space or in Cartesian space, which the robot has to follow in the execution of the assigned motion. Note that a path does not involve the notion of time: it is a purely spatial concept. The distribution of these points changes according to the specific application, for example according to the distribution of obstacles in the workspace if a collision avoidance algorithm is implemented.

The trajectory is the curve, as a function of time, that passes through the points of the path.

In simple cases, the polynomials describing the trajectory can be obtained analytically by means of circular/straight-line motion primitives. More often, they must be obtained using more complex approaches which guarantee continuity of the curve and of its derivatives up to a desired order. This continuity is desirable because the successive derivatives of the trajectory represent velocity, acceleration, jerk and so on, where jerk is defined as the derivative of the acceleration. Discontinuities in these quantities have adverse effects on the robot dynamics and on the actuators. In most cases the acceleration needs to be continuous, while the jerk may be discontinuous but should stay below a specified limit. Here we focus only on the trajectory generation, which takes the points of the path as an input and creates their temporal representation with continuity of the desired order, adding intermediate points.

The trajectory can be executed in Cartesian space or in joint space. Continuity in Cartesian space does not always correspond to continuity in joint space. Planning the trajectory in Cartesian space is more intuitive and provides immediate knowledge of the position of the end effector at all times, which is necessary for applications like obstacle avoidance. On the other hand, planning in joint space has some important advantages: it does not need on-line kinematic inversion, making the algorithm faster, and it is easier to impose continuity constraints on joint velocity and acceleration.

In this implementation the joint space is chosen for the trajectory planning. The common procedure for trajectory generation in the joint space is the following:

• Calculate the inverse kinematics solution for the points of the path.

• Assign the total time using the maximal velocities of the joints.

• Discretize the individual joint trajectories in time.

• Blend a continuous function between these points.

Therefore, the preferred approach is to use a suitable number of low-order polynomials with the desired continuity constraints at the path points. Generally, at least a cubic polynomial is used, since it allows the imposition of continuity on the velocity at each point.
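As a concrete instance of such a cubic blend, the sketch below fits x(t) = a0 + a1 t + a2 t² + a3 t³ over one segment from its boundary positions and velocities. This is the standard point-to-point cubic, shown for illustration rather than taken verbatim from the implementation.

```c
/* Cubic segment x(t) = a0 + a1*t + a2*t^2 + a3*t^3 on t in [0, T],
 * matching positions x0, x1 and velocities v0, v1 at the endpoints. */
typedef struct { double a0, a1, a2, a3; } cubic_t;

static cubic_t cubic_fit(double x0, double x1, double v0, double v1, double T)
{
    cubic_t c;
    c.a0 = x0;                                           /* x(0) = x0  */
    c.a1 = v0;                                           /* x'(0) = v0 */
    c.a2 = ( 3.0 * (x1 - x0) - (2.0 * v0 + v1) * T) / (T * T);
    c.a3 = (-2.0 * (x1 - x0) + (v0 + v1) * T) / (T * T * T);
    return c;
}

/* Evaluate the cubic at time t using Horner's scheme. */
static double cubic_eval(const cubic_t *c, double t)
{
    return c->a0 + t * (c->a1 + t * (c->a2 + t * c->a3));
}
```

Making consecutive segments share the velocity value at their common path point yields a trajectory with continuous velocity across the whole path.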

Below, the three different solutions for the trajectory generation used in this implementation are explained in more detail.

3.5.1 Linear Interpolation

The simplest trajectory to determine a motion from an initial point x0 to a final point x1 is defined as

x(t) = a0 + a1 (t − t0)    (3.30)

Once the initial and final instants t0, t1 and positions x0, x1 are specified, the parameters a0, a1 can be computed by solving the system

x(t0) = x0 = a0    (3.31)

x(t1) = x1 = a0 + a1 (t1 − t0)    (3.32)

where

T = t1 − t0    (3.33)

is the time duration. Therefore

a0 = x0    (3.34)

a1 = (x1 − x0) / (t1 − t0) = h / T    (3.35)

where

h = x1 − x0    (3.36)

is the displacement. The velocity is constant over the interval [t0, t1] and its value is

v(t) = h / T (= a1)    (3.37)

If we are generating a trajectory from a path that covers more than two points, this approach only guarantees position continuity, while the velocity is discontinuous at the points of the path. Obviously, the acceleration is null in the interior of the trajectory and has an impulsive behavior at the extremities. Therefore, this approach is unsuitable if a smooth and continuous trajectory is desired.


Figure 3.10: Linear trajectory for consecutive points: qualitative graph
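For reference, equations (3.30)–(3.37) reduce to a few lines of C; the function name is illustrative.

```c
/* Linear trajectory, equations (3.30)-(3.37): constant-velocity
 * motion from x0 at time t0 to x1 at time t1.                    */
static double lin_eval(double x0, double x1, double t0, double t1, double t)
{
    double T  = t1 - t0;            /* duration,     (3.33)        */
    double a1 = (x1 - x0) / T;      /* velocity h/T, (3.35)-(3.37) */
    return x0 + a1 * (t - t0);      /* (3.30)                      */
}
```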

3.5.2 Linear Segments with Parabolic Blends

[41] Another way to generate suitable joint-space trajectories is the Linear Segments with Parabolic Blends (LSPB) method. The LSPB trajectory is such that the velocity is initially “ramped up” to its desired value and then “ramped down” when it approaches the goal position. To achieve this, the desired trajectory is divided into three parts:

• The first part from time t0 to time t1 is a quadratic polynomial. This results in a linear “ramp” velocity and a constant acceleration.

• At time t1, called the blend time, the trajectory switches to a linear function. This corresponds to a constant velocity and null acceleration.

• Finally, at time t2 = tf − t1 the trajectory switches once again, this time to a quadratic polynomial, so that the velocity is linear and the acceleration is constant.

Figure 3.11: LSPB trajectory: position, velocity and acceleration

With this approach, continuity of the position and of the velocity is obtained, but not of the acceleration, and the jerk has an impulsive behavior at the extremities.
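A minimal sketch of an LSPB position evaluator is given below, assuming a symmetric profile with blend time tb ≤ tf/2 and computing the cruise velocity from the displacement; the variable names and the interface are illustrative.

```c
/* LSPB profile on t in [0, tf]: accelerate at +a until the blend
 * time tb, cruise at constant velocity v = a*tb, decelerate at -a
 * from tf - tb. Assumes a symmetric profile with 0 < tb <= tf/2.   */
static double lspb_eval(double x0, double x1, double tf, double tb, double t)
{
    double h = x1 - x0;
    double v = h / (tf - tb);          /* cruise velocity: the area   */
                                       /* under v(t) must equal h     */
    double a = v / tb;                 /* blend acceleration          */

    if (t < tb)                        /* first parabolic blend       */
        return x0 + 0.5 * a * t * t;
    else if (t < tf - tb)              /* constant-velocity segment   */
        return x0 + v * (t - 0.5 * tb);
    else {                             /* final parabolic blend       */
        double td = tf - t;
        return x1 - 0.5 * a * td * td;
    }
}
```

At t = tb and t = tf − tb the three branches meet with equal position and velocity, which is exactly the continuity property of the LSPB profile described above.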
