A model mediated tele-auscultation system with multimodal feedback

Academic year: 2021


Information Engineering Department

Institute of Communication, Information and Perception Technologies

University of Pisa, Scuola Superiore Sant’Anna

Multimodal tele-auscultation system

with force feedback

Master Degree in Embedded Computing Systems

Supervisors:

Prof. Carlo Alberto Avizzano, Ing. Alessandro Filippeschi

Author: Sara Falleni

To Babbo Salva

in every choice, milestone, or moment to come. Always with me.

Acknowledgements

First of all, I would like to thank my supervisor, Professor Carlo Alberto Avizzano, for the opportunity to carry out this thesis work: every new path is rich in improvements and offers different perspectives from which to grow. Then my co-supervisor, Alessandro Filippeschi, who followed my work with infinite patience and passed on to me not only the knowledge needed to complete the project but also the passion for what he does, and who always proved a critical and constructive interlocutor with whom to exchange opinions. Still within the Percro laboratory, thanks to Emanuele Ruffaldi for the technical contribution and for the positive energy he conveyed.

Infinite thanks to my family: to Mum, for having always encouraged every choice and every path of mine, for having always believed in me and for having, over the years and with great effort, created the serenity and stability that allowed me to reach every one of my results; to Elena and Elisa because both supported every battle of mine and celebrated every success, and because each of them, simply by being there, was the motivation to do well even in the worst moments.

And then to everyone else: siblings, grandparents, uncles and aunts, cousins, nieces and nephews, acquired or not, who are here today to celebrate this incredible milestone with me; a special thought for Giorgia and Federica, with whom I share parallel paths that sometimes have the good fortune to meet.

A five-year journey offers many opportunities, and among those I received there are certainly the people I met: thanks therefore to Elena, Michele, Antonio and Francesca, with whom I shared the whole journey and who, beyond being colleagues, were fantastic companions with whom to spend mornings, afternoons and evenings, intensive study sessions and endless snack breaks, and whom I can consider true Friends.

University was not the only place where I met incredible people, so a huge thanks to my lifelong friends: to Giulia for everything that has not changed despite countless commitments, to Francesco M. for knowing all my flaws and loving me anyway, to Francesco D. for always being ready to get a smile out of me, and finally to Chiara, Moira, Veronica and Andrea, for all the laughter and for having shared with me, over these years, a fundamental part of my life.

A special thanks to Pasquale: for being an incredible colleague and accomplice along this journey, for being the day and the night of these two years, for being at every moment an inexhaustible source of strength and inspiration. And thank you for having fought to be here, with me, on this special day; you have made me truly happy.

Finally, thanks to myself: for the strength, the courage and the dedication that have always accompanied me. There have been obstacles as big as mountains, problems that seemed unsolvable and sorrows that tore my heart apart, but every time I managed to pick up all the pieces and glue them back together into something better.

Abstract

Remote examination is becoming increasingly important, since the population is aging and the number of required medical examinations is growing.

The system proposed here is a novel remote-examination system for the particular case of auscultation. The system is located at two sites. At the Patient Site the patient lies supine; a robotic interface holds an electronic stethoscope, able to send sounds to a PC, and places it on the patient's trunk, while an RGB-D camera placed above the patient captures the whole scene. At the Doctor Site the doctor has a haptic interface with which he or she moves the robotic interface at the Patient Site; moreover, the doctor can watch the scene captured by the camera at the Patient Site on an LCD screen and can hear the sounds captured by the stethoscope's diaphragm through a headset on which the audio stream from the Patient Site is played.

The presented system is based on three feedback types: audio, video, and haptic. Even though a Local Area Network is used during this project, the three feedback channels can communicate over the Internet using TCP and UDP.

After presenting this novel system, the effectiveness of the developed work is shown through auscultation-like experiments. First the usability of the system for placing the stethoscope is investigated, then the correctness of the listened sounds is checked. Finally, a brief summary of the whole project is provided, together with the encountered problems and possible future improvements.

Table of contents

List of figures

1 Introduction
   1.1 Objectives
   1.2 ReMeDi
   1.3 Publication
   1.4 Structure

2 State of the Art
   2.1 Teleoperation Systems
      2.1.1 Telediagnosis Systems
         2.1.1.1 Remote abdominal telesonography
         2.1.1.2 Femoral palpation and needle insertion training simulation
         2.1.1.3 Remote palpation
      2.1.2 Teleauscultation Systems
         2.1.2.1 Tele Auscultation System for Heart
         2.1.2.2 Tele-Auscultation Support System with Mixed Reality Navigation
      2.1.3 Systems Comparison
   2.2 Bilateral Teleoperation Approaches
      2.2.1 Classical Bilateral Teleoperation
      2.2.2 Bilateral Teleoperation with Wave Variables
      2.2.3 Model Mediated Bilateral Teleoperation
      2.2.4 Model Mediated Choice
   2.3 Model Mediated State of Art
      2.3.1 Point-cloud-based Model-mediated Teleoperation
      2.3.2 Real-Time Modeling and Haptic Rendering of Deformable Objects for Point Cloud-Based Model-Mediated Teleoperation

3 General Description
   3.1 System Overview
   3.2 Organization of Components
      3.2.1 Components for Haptic Feedback

4 Teleauscultation System
   4.1 Patient Site
      4.1.1 Haptic Interfaces
         4.1.1.1 Stethoscope
         4.1.1.2 Stethoscope-Grab Interface
         4.1.1.3 Grab
         4.1.1.4 Electronics
         4.1.1.5 Control
      4.1.2 Video Interface
      4.1.3 Audio Interface
   4.2 Doctor Site
      4.2.1 Haptic Interfaces
      4.2.2 Video Interfaces
      4.2.3 Audio Interfaces
   4.3 Communication

5 Haptic Interfaces Functioning
   5.1 Host-Robot Communication
      5.1.1 Host to Robot Communication
      5.1.2 Robot to Host Communication
   5.2 Host-Host Communication
      5.2.1 General Description
      5.2.2 Matlab Interface
      5.2.3 Initialization
         5.2.3.1 Packets format
         5.2.3.2 Procedure Description
      5.2.4 Position and Force Control
      5.2.5 Workspace Mapping
      5.2.6 Model Mediated Based Haptic Feedback

6 Experiments and Validation
   6.1 Users profile
   6.2 Experiment 1: Positioning
      6.2.1 Description
      6.2.2 Method
      6.2.3 Results
   6.3 Experiment 2: Heart rate recognition
      6.3.1 Description
      6.3.2 Method
      6.3.3 Results
   6.4 Experiment 3: Track comparison
      6.4.1 Description
      6.4.2 Method
      6.4.3 Results
   6.5 Further Information
   6.6 Discussion

7 Conclusions
   7.1 Developed Work
   7.2 Principal problems
      7.2.1 Audio Quality
      7.2.2 Video Feedback
   7.3 Future Improvements
      7.3.1 Audio Filter
      7.3.2 Patient Side Haptic Interface
      7.3.3 Improved Video Feedback
      7.3.4 Smart Model Estimation
   7.4 Conclusions

(9)

List of figures

1.1 ReMeDi System
2.1 Custom robotic arm
2.2 Palpation System
2.3 Auscultation Loop
2.4 Tele-Auscultation System
2.5 Classical Bilateral Tele-Operation Comparison
2.6 Bilateral Tele-Operation Comparison
2.7 Force Scheme
3.1 General System Overview
3.2 System Architecture
3.3 Real System
3.4 Software Loop
4.1 Patient Haptic Interface
4.2 Realized Interface
4.3 Interface functioning
4.4 Stethoscope-Grab Interface
4.5 Percro Grab
4.6 Grab Frames
4.7 Interface Offsets
4.8 Embedded Controller
4.9 Haptic Interface FSM
4.10 Vpoints-Ical Graph
4.11 Control Loop Graph
4.12 Motor 1 Velocity Graph
4.13 Motor 2 Velocity Graph
4.14 Motor 3 Velocity Graph
4.16 Delta
4.17 Delta Hook
4.18 Doctor Haptic Interface
4.19 Multimodal Streaming
5.1 PC2E communication
5.2 E2PC communication
5.3 Matlab Interface
5.4 Delta to Grab communication
5.5 Grab to Delta communication
5.6 Initialization Phase Description
5.7 Grab Initial Point
5.8 Delta Initial Point
5.9 Spring Activation Finite State Machine
5.10 Physical Meaning
5.11 Latency Loop Calculation
5.12 Temporal Latency Scheme
5.13 Latency Distribution Histogram
6.1 Mannequin with target points
6.2 Delta's position and forces
6.3 Position Error
6.4 Heart Rate Error
6.5 Track Comparison Questionnaire
6.6 Track Comparison Results

Chapter 1

Introduction

The aging of the population in developed countries already demands a vast number of medical examinations, and this need is going to grow in the coming years. An efficient answer to the shortage of physicians relative to the high number of examinations is telemedicine: it allows doctors to perform diagnoses and examinations even when they are far from their patients. In this way it is possible to maintain a high-quality healthcare service also in remote areas or in areas with few physicians.

Telemedicine is based on the more general concept of teleoperation, which refers to the possibility of controlling something located in one place from a different place. The origins of teleoperation can be traced back to Nikola Tesla, who used radio waves to control a device at a distance. As for telemedicine, the feature it builds on is the possibility of remotely controlling a robot.

In the last few years, thanks to this possibility, the telediagnosis field has grown very fast; in particular, palpation and ultrasonographic examination have been investigated and many remote-examination systems have been developed. Alongside them, tele-auscultation has raised the attention of engineers and physicians, both as an independent examination and as a complement to ultrasonography for a more accurate medical evaluation. When a doctor performs an auscultation examination, he or she examines the circulatory and respiratory systems by listening to heart and breath sounds, and the gastrointestinal system by listening to bowel sounds.


1.1 Objectives

The work described in the following pages aims at realizing a tele-auscultation system with which a doctor can perform a complete auscultation examination. The system must implement video feedback, which allows the doctor to observe the scene where the patient is; good-quality audio feedback, fundamental for the examination and the subsequent diagnosis; and haptic feedback. The haptic channel has to make the stethoscope positioning independent, meaning that no help is needed at the Patient Site; moreover, it has to implement force feedback.

The realized system must also be able to work in the presence of latency, which is why the force feedback is implemented using a model-mediated technique explained later. Another objective of the work is to give the doctor a high sense of presence: the doctor must have the feeling of sharing the same place with the patient, and the examination must be as similar as possible to a non-teleoperated one.

1.2 ReMeDi

Fig. 1.1 ReMeDi System

The above-mentioned work is developed inside the ReMeDi context. ReMeDi is a European project in which a robotic system is designed to perform tele-examination of patients.


The ReMeDi project develops a robot capable of performing a physical examination, specifically: i) palpation, i.e. pressing the patient's stomach with the doctor's hand and observing the stiffness of the internal organs and the patient's feedback (discomfort, pain); ii) ultrasonographic examination; and iii) auscultation. In addition to high-quality teleconferencing, ReMeDi features a mobile robot (placed in a hospital) equipped with a lightweight and inherently safe manipulator with an advanced sensorized head and/or ultrasonic probe, possibly able to hold a stethoscope; the remote interface (placed at the doctor's location) is equipped with sophisticated force feedback, active vision, audio feedback and locomotion capabilities.

Another objective of the ReMeDi project is to go beyond classical telepresence concepts: the system will be able to capture and process multi-sensory data (integrating visual, haptic, speech, the patient's emotions, and physiological responses) into perception and reasoning capabilities, making ReMeDi a diagnostic assistant offering context-dependent and proactive support for the doctor. Particular attention is devoted to safety aspects: normative standards (both existing and in draft form) and the results of ongoing research projects will be integrated into all the system development phases.

1.3 Publication

The realized work represents progress over current tele-auscultation systems; existing systems, which will be described in Section 2.1.2, need an assistant at the Patient Site and do not offer feedback accurate enough to perform a precise, real-time diagnosis.

The system described here, instead, aims at using more accurate feedback and methods, already adopted in other telediagnosis systems (Section 2.1.1), to perform a tele-auscultation examination.

Since the developed work represents a sign of progress in the tele-auscultation field, it has been possible to submit an article to IEEE RO-MAN 2017, the 26th International Symposium on Robot and Human Interactive Communication, which will be held in Lisbon, Portugal, from August 28 to September 1, 2017.

This symposium is a leading forum where state-of-the-art innovative results, the latest developments, and future perspectives relating to robot and human interactive communication are presented and discussed. The conference covers a wide range of topics related to Robot and Human Interactive Communication, involving theories, methodologies, technologies, and empirical and experimental studies.

1.4 Structure

The developed work is organized as follows. Chapter 2 explores the State of the Art related to telediagnosis and tele-auscultation systems and provides an overview of model-mediated techniques. Chapter 3 gives a general description of the developed system. Chapter 4 describes the interfaces used and the communication between them. Chapter 5 shows the functioning of the haptic loop. Finally, Chapter 6 validates the whole system with the description of some experiments and the related results, and Chapter 7 summarizes the whole system, together with the encountered problems, and describes several possible improvements.

Chapter 2

State of the Art

After the introduction of the project and before the description of implementation details, this chapter analyzes the State of the Art related to the project. In the first part the State of the Art of teleoperation systems is discussed; in particular, two parallel paths are followed: on one side the State of the Art of telediagnosis systems, on the other side the specific case of tele-auscultation. In the second part, an introduction to the latency problem and to the model-mediated technique is provided; then the State of the Art of model-mediated systems is investigated.

2.1 Teleoperation Systems

As already said in the Introduction, the word teleoperation expresses a very general concept: the possibility of controlling a machine from a site different from the machine's site. Since the object of the project is to control a robotic interface using another robotic interface situated in a place arbitrarily far from the previous one, systems with similar characteristics will be illustrated.

2.1.1 Telediagnosis Systems

The first class of systems that will be discussed are telediagnosis systems: these systems give doctors the possibility to perform a medical examination even if they are far from the patient. A list of some innovative telediagnosis systems with interesting characteristics follows.


2.1.1.1 Remote abdominal telesonography

The work developed in [ACA+07] aims at realizing a system which can perform sonography between a remote site and an expert center. Sonography, also called ultrasonography, is a noninvasive diagnostic imaging technique based on the application of ultrasound. It is used to see internal body structures such as tendons, muscles, joints, vessels and internal organs. Its aim is often to find the source of a disease or to exclude any pathology.

The developed system implements a bidirectional transmission channel for audio and video (sonographic and ambient images); moreover, at the Doctor Site the sonographer moves a dummy probe, while at the Patient Site a custom portable robotic arm is held on the patient's body by an operator who is not a trained sonographer and does not manipulate the probe.

As one can see in figure 2.1, the robotic arm holds a curved-array scan probe.

Fig. 2.1 Custom robotic arm

The probe holder is supported by a frame whose bottom ring is held in contact with the skin by an assistant at the Patient Site. The operator at the Patient Site positions the probe holder as indicated by the sonographer at the Doctor Site, and the expert doctor can monitor the position of the probe holder using the video feedback.

2.1.1.2 Femoral palpation and needle insertion training simulation

This work is not exactly focused on telediagnosis, but it gives a different perspective on the importance that new technologies have in the medical field and on the power of Virtual Reality. [CJGC11] develops a virtual environment for training femoral palpation and needle insertion; for the palpation haptic effect, two off-the-shelf force-feedback devices have been linked together to provide a hybrid device that gives five degrees of force feedback. This is combined with a custom-built hydraulic interface to provide a pulse-like tactile effect. The needle interface is based on a modified PHANTOM Omni end effector that allows a real interventional radiology needle to be mounted and used during simulation.

2.1.1.3 Remote palpation

Continuing the itinerary through telediagnosis systems, one finds [FBR+15], shown in figure 2.2: the aim of this work is to realize a palpation system. In addition to the video streaming, it presents an important feature also developed in the tele-auscultation system described later: the force feedback.

Fig. 2.2 Palpation System

The Patient Side is continuously tracked by a Kinect camera, and the possibility to move a haptic device is left to future implementations; the Doctor Side is where most of the work is developed: the doctor is free to move his or her hands, which are tracked by a Leap Motion and reproduced by an avatar in an Augmented Reality environment into which the scene recorded by the Kinect is also copied. The contact between the doctor's hands and the patient's skin is checked in the virtual environment; a positive test activates the force feedback, realized with a 3-DoF haptic interface.


2.1.2 Teleauscultation Systems

After the description of some telediagnosis systems, it is time to investigate the specific field of tele-auscultation.

2.1.2.1 Tele Auscultation System for Heart

The first work that will be analyzed is [KJKM13]; in this system, shown in figure 2.3, two points are fundamental: the possibility to receive good-quality sound and the awareness of the stethoscope position.

Fig. 2.3 Auscultation Loop

The developed system works as follows: since the system is meant to be usable in any location, an electronic stethoscope is used to record heart sounds; it sends everything to a mobile phone, which can transfer the sounds to another mobile phone using a standard call. At the Doctor Side the incoming sounds are amplified and filtered, then sent to a speaker. To make the stethoscope position known, a system based on simple SMS is implemented: since the important heart auscultation points are four, the operator who performs the stethoscope positioning has a platform with four buttons (one for each position); whenever a button is pressed, the corresponding position is sent to the doctor's mobile phone via the SMS service.

One can see that the work just described is a step backward with respect to the telediagnosis systems outlined in section 2.1.1: indeed, no video or haptic feedback is provided; moreover, the assistant's work is still crucial for the examination.

2.1.2.2 Tele-Auscultation Support System with Mixed Reality Navigation

In the last work that will be examined, [HUK+13], a mix of the previous works' characteristics can be found; the implemented system, illustrated in figure 2.4, has an audio and video teleconferencing system both for the scene and for the sounds coming from the stethoscope, which is again an electronic one.

Fig. 2.4 Tele-Auscultation System

Moreover, even if the stethoscope positioning is again left to an assistant, this task is facilitated by a Mixed Reality system which allows the doctor to indicate the desired contact point over the patient's image. Finally, to check the quality of the contact condition, e-textile sensors are attached to the chest piece; they send the sensed pressure to the Doctor Side, where this information is shown with an intuitive system: a ring-shaped colored indicator displayed on the PC_DOC, where red means that the chest piece is pushed firmly on the chest and blue means that it is pressed weakly.

2.1.3 Systems Comparison

To summarize the works described in the previous sections, table 2.1 has been developed. The first column contains the reference to the work; the second and third columns refer to video and audio streaming between the two sides of the tele-diagnosis; the fourth column refers to the possibility for the doctor to independently position the probe at the Patient Side; the fifth column refers to the presence of a Virtual Reality system; and the last column indicates the presence of force feedback.

Work       Video  Audio  Positioning  Virtual Reality  Force Feedback
[ACA+07]   YES    YES    HYBRID       NO               NO
[CJGC11]   NO     NO     NO           YES              YES
[FBR+15]   YES    NO     NO           YES              YES
[KJKM13]   NO     YES    NO           NO               NO
[HUK+13]   YES    YES    NO           YES              NO

Table 2.1 Comparison Table

The five compared parameters are the ones that will be referred to in the developed tele-auscultation system. As one can easily see, none of the state-of-the-art systems has all the characteristics; the best score reached is 3 out of 5 and, in particular, none of them gives the doctor the possibility to independently move the probe at the Patient Side from the Doctor Side.

2.2 Bilateral Teleoperation Approaches

As stated in section 1.1, one of the objectives of this project is to obtain a system which can work also in the presence of latency. To obtain a bilateral teleoperation system, three approaches can be followed: Classical Bilateral Teleoperation, Bilateral Teleoperation with Wave Variables, and Model-Mediated Bilateral Teleoperation.

2.2.1 Classical Bilateral Teleoperation

A Classical Bilateral Teleoperation system is achieved when a direct loop is created between the Patient Site and the Doctor Site, as can be seen in figure 2.5. The problem of this approach is that, when unpredictable latencies are present, the direct loop creates virtual energy which can make the whole system unstable.


Fig. 2.5 Classical Bilateral Tele-Operation Comparison

2.2.2 Bilateral Teleoperation with Wave Variables

A possible alternative to maintain stability in a system with time delays is the use of wave variables. They encode the force and velocity of both sides (Master/Doctor and Slave/Patient) into the variables u and v. A possible implementation of this method is described in [ATPM08] and works as follows: the transformation at the Master Side is described by equations 2.1

u_m = \frac{b\,\dot{x}_m + f_m}{\sqrt{2b}} \qquad v_m = \frac{b\,\dot{x}_m - f_m}{\sqrt{2b}} \qquad (2.1)

where b denotes the characteristic wave impedance, which is a positive constant. The transformation at the Slave Side, instead, is described by the equations in 2.2.

u_s = \frac{b\,\dot{x}_s + f_s}{\sqrt{2b}} \qquad v_s = \frac{b\,\dot{x}_s - f_s}{\sqrt{2b}} \qquad (2.2)

A possible drawback of this approach is the wave reflection phenomenon, which can lead to oscillatory behaviors and which can be managed with a low-pass filter or with an impedance matching procedure.
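As an illustration, the transformations in equations 2.1 and 2.2 can be sketched in code. This is a minimal sketch under the definitions above; the function names and the round-trip decoding are illustrative choices, not taken from [ATPM08]:

```python
import math

def wave_encode(b, x_dot, f):
    """Encode velocity and force into wave variables (Eqs. 2.1 / 2.2).

    b     -- characteristic wave impedance (positive constant)
    x_dot -- velocity at the local side
    f     -- force at the local side
    """
    s = math.sqrt(2.0 * b)
    u = (b * x_dot + f) / s   # forward-travelling wave
    v = (b * x_dot - f) / s   # backward-travelling wave
    return u, v

def wave_decode(b, u, v):
    """Invert the transformation: recover velocity and force."""
    s = math.sqrt(2.0 * b)
    x_dot = (u + v) / s       # since u + v = 2 b x_dot / sqrt(2b)
    f = (u - v) * s / 2.0     # since u - v = 2 f / sqrt(2b)
    return x_dot, f

# Round trip: encoding then decoding returns the original pair.
x_dot, f = wave_decode(2.0, *wave_encode(2.0, 1.5, 0.5))
print(x_dot, f)  # 1.5 0.5
```

In an actual teleoperation loop, u travels from master to slave and v travels back, each through the delayed channel; the decode step at the receiving side recovers the velocity and force it should track.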

2.2.3 Model Mediated Bilateral Teleoperation

The third possibility to realize force feedback in the presence of delays is Model-Mediated Bilateral Teleoperation.

Model-Mediated Bilateral Teleoperation is a technique that allows us to overcome limitations of passivity-based approaches such as wave variables, in which the transparency of the system is often sacrificed to keep the teleoperated system stable. In some applications this loss of transparency can be avoided by model-mediated teleoperation: force rendering at the master interface is calculated based on a model of the slave side, i.e. the user interacts with a virtual model of the slave side that is placed on the master side. Therefore, force feedback is not calculated from force data coming from the slave side but from the parameters of the slave-side model, which can in turn be updated thanks to the force, position and velocity data gathered by the robot at the slave side.

A comparison between classical bilateral teleoperation and model-mediated teleoperation can be seen in figure 2.6. In the upper illustration a traditional bilateral teleoperation system is shown, where the force detected at the Slave side is directly sent to and used by the Master. In the lower illustration the model-mediated bilateral teleoperation is represented: its distinguishing characteristic is that the position-force loop is broken by the model block. The Slave sends the Master information about the model, and the force is then computed at the Master side using the model characteristics.

Fig. 2.6 Bilateral Tele-Operation Comparison

The characteristics a Model-Mediated system tries to achieve are:

• Transparency: the model must represent the modeled item as accurately as possible. Indeed, the doctor should have the feeling of interacting with the real item.


• High Performance: the implemented model must be representative but also easy to compute; the computation of the rendered force must be done within a simulation step, since only in this way is it possible to always have a correct force feedback.

• Robustness: the system must work also in the presence of latency; it must be developed in such a way as to avoid system instability.
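To make the model-mediated idea concrete, the following is a hypothetical one-dimensional sketch (class name, parameters and numbers are illustrative, not the thesis implementation) of a master side rendering force from a local model whose parameters are updated, possibly late, by the slave:

```python
class LocalContactModel:
    """Master-side surrogate of the slave environment (1-D sketch)."""

    def __init__(self, surface_z=0.0, stiffness=200.0):
        self.surface_z = surface_z   # estimated contact surface height [m]
        self.stiffness = stiffness   # estimated environment stiffness [N/m]

    def update(self, surface_z, stiffness):
        """Apply a (possibly delayed) parameter update from the slave.

        A late update only refreshes the model; it never closes a direct
        position-force loop, so no virtual energy is injected by the delay.
        """
        self.surface_z = surface_z
        self.stiffness = stiffness

    def render_force(self, master_z):
        """Spring-like feedback computed from the master position alone."""
        penetration = self.surface_z - master_z
        return self.stiffness * penetration if penetration > 0.0 else 0.0

model = LocalContactModel()
model.update(surface_z=0.02, stiffness=300.0)
print(round(model.render_force(0.01), 6))  # 3.0 -> in contact
print(round(model.render_force(0.05), 6))  # 0.0 -> free space
```

The key structural point is that `render_force` depends only on the master's own position and the locally stored model parameters, which is exactly what breaks the delayed position-force loop of figure 2.6.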

2.2.4 Model Mediated Choice

Now that a description of what a Model-Mediated system is has been given in section 2.2, the reasons why it has been chosen will be explained.

Although the realized project currently uses a Local Area Network, for which a latency estimation can be quickly performed, the system has been imagined for wider use, which means the possibility of using the Internet to communicate from one side to the other. While the average Internet latency from one place to another can be computed with simulations, latency peaks, due to many causes, can be unpredictable.

For these reasons the system had to be implemented to be latency tolerant, which mainly means that the loop must be stable also in the case of large lag. One possibility is to implement the Model-Mediated technique, which allows breaking the communication loop that, as a consequence of large latency, could create virtual energy causing system instability. Indeed, as already stated, the method separates the Patient Site and the Doctor Site: while position data coming from the Doctor Site are directly used by the Patient Site, force information coming from the Patient Site is not directly used by the Doctor Site. Using the force direction and the contact recognition, the Doctor Site calculates the intensity of the rendered force relying only on its own position.

The reason why model-mediated bilateral teleoperation has been chosen over the alternatives (i.e. the wave variables described in section 2.2.2) is that this technique can maintain transparency also in the case of large latency; moreover, with a simple implementation it is possible to reach well-working results.

2.3 Model Mediated State of Art

Given the technique that will be used to implement the haptic feedback, in the following sections the State of the Art of Model-Mediated Bilateral Teleoperation will be explored.


2.3.1 Point-cloud-based Model-mediated Teleoperation

The first work that will be described is [XCS13]; the aim of this work is to reconstruct a model with complex geometry. Indeed, until this work, research done on MMT could only estimate the environment with a rigid planar surface in one or two dimensions. This work, instead, reconstructs a point-cloud model using a Time-of-Flight camera to capture depth images of the object surface. Given the high frame rate of the camera and the flexible working range, the model can be updated quickly, keeping the model at the Doctor Site always up to date.

The contact condition is then based on the point-cloud reconstruction: when the probe is intended to be in contact with a surface, the force feedback is activated, and the probe itself can only follow the surface to avoid collision.

One can easily understand that, despite the high precision and usability of this model, it cannot be used in the tele-auscultation system that has to be developed: since a stable contact between the skin and the stethoscope is needed, a model in which collisions are avoided is not the right choice.

2.3.2 Real-Time Modeling and Haptic Rendering of Deformable Objects for Point Cloud-Based Model-Mediated Teleoperation

The second work that will be described can be found in [XS14] and represents a step forward with respect to the previous work: it starts from the point-cloud-based model reconstruction of the previous work and adapts it to deformable objects. The proposed method is based on a novel radial function-based deformation (RFBD) approach which allows the pcb-MMT concept to be used for deformable objects as well.

According to the geometry of the contact area, the surface deformation can be described by different radial functions with an infinite deformation area; since the region of interest is the deformation near the contact, this paper proposes a sixth-order polynomial radial function to approximate it. The longitudinal displacement z of the object surface as a function of the radial distance r can be described as:

$$z(r) = \begin{cases} c\,(R^2 - r^2)^3 & r \le R \\ 0 & r > R \end{cases} \qquad (2.3)$$


where R is the radius of the deformation area, which depends on the material of the object, and c depends on the object stiffness, damping factor and dynamics.
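As a sanity check of eq. (2.3), the radial function can be sketched in a few lines of Python; the values of R and c below are illustrative placeholders, not taken from [XS14]:

```python
import numpy as np

def rfbd_displacement(r, R=0.02, c=2.0e4):
    """Longitudinal displacement z(r) of eq. (2.3).

    R (deformation radius, m) and c (stiffness-dependent factor)
    are placeholder values chosen only for illustration.
    """
    r = np.asarray(r, dtype=float)
    return np.where(r <= R, c * (R**2 - r**2) ** 3, 0.0)

# Maximal at the contact center, smoothly vanishing at r = R
print(rfbd_displacement([0.0, 0.01, 0.02, 0.03]))
```

Being a sixth-order polynomial that vanishes at r = R together with its first derivative, the displacement field joins the undeformed surface smoothly, which is the property the RFBD approach exploits.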

The force feedback is then calculated as:

$$f_{mt} = k_p\,(x_m - x_p)\,\vec{e}_t \qquad f_{mn} = k_p\,(x_m - x_p)\,\vec{e}_n \qquad (2.4)$$

Fig. 2.7 Force Scheme

where $k_p$ is the virtual stiffness between the master proxy and the Haptic Interaction Point (HIP), $x_p$ and $x_m$ are the positions of the master proxy and the HIP, $\vec{e}_t$ and $\vec{e}_n$ denote the tangential and normal vectors of the contact surface computed from the point cloud, and finally $f_{mt}$ and $f_{mn}$ are the tangential and normal components of the force feedback with respect to the surface.

Since the Doctor Side is aware of the Patient Side behavior, the described system allows a precise and correct body deformation and force feedback even in the presence of large delays.
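A minimal numerical reading of eq. (2.4) can be sketched as follows; it decomposes the spring-like coupling force between master proxy and HIP into its tangential and normal components. The stiffness value is illustrative, and this is only one plausible interpretation of the formula, not the paper's actual implementation:

```python
import numpy as np

def mmt_force(x_m, x_p, e_n, k_p=300.0):
    """Split the elastic force k_p (x_m - x_p) into tangential and
    normal components w.r.t. the contact surface (eq. 2.4).
    k_p (N/m) is an illustrative virtual stiffness."""
    x_m, x_p = np.asarray(x_m, float), np.asarray(x_p, float)
    e_n = np.asarray(e_n, float)
    e_n = e_n / np.linalg.norm(e_n)      # unit surface normal
    f = k_p * (x_m - x_p)                # total spring force
    f_n = np.dot(f, e_n) * e_n           # normal component f_mn
    f_t = f - f_n                        # tangential component f_mt
    return f_t, f_n

# HIP penetrates 1 cm below the surface along -z: force is purely normal
f_t, f_n = mmt_force([0.0, 0.0, -0.01], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
print(f_t, f_n)
```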


Chapter 3

General Description

Now that several teleoperation systems have been described, this chapter gives an overview of the developed system; then a simple explanation of the Hardware and Software interfaces used will be provided, together with the organization of all the components.

3.1

System Overview

To reach the objectives explained in section 1.1, the system in figure 3.1 has been developed. As one can see in the figure, the system is divided into two sides: the Patient Site, which is where

Fig. 3.1 General System Overview

the patient is, and the Doctor Site, which is where the specialized doctor is. Each side has its own hardware and software interfaces, chosen to implement the desired feedbacks; since in this


case the required feedbacks are Haptic, Audio, and Video, each side needs a Haptic Interface, an Audio Interface, and a Video Interface. For the Audio and Video channels at the Patient Site the hardware interfaces are an RGB-D Camera (Microsoft Kinect) and a Digital Stethoscope, both able to collect data. Meanwhile, at the Doctor Site, two interfaces able to reproduce data are used, in our case headphones and an LCD screen.

As for the Haptic interfaces, at the Patient Site a robotic arm is used: it can be moved by providing it the desired position, force or velocity at either joint or End-Effector level, and it does not need to be held by an assistant to reach the patient; at the Doctor Site a parallel robot is used for teleoperation.

Data coming from all the interfaces flow into two Laptops: the one at the Patient Site is the Patient PC, hereafter called PC_PAT, while the one at the Doctor Site is the Doctor PC, hereafter called PC_DOC. The two PCs can communicate over the network, so they are the instruments which allow the interfaces to exchange data.

3.2

Organization of Components

The organization of all the components mentioned in section 3.1 can be seen in figure 3.2. Each channel is represented by horizontal connections; hardware components are represented

Fig. 3.2 System Architecture

by green rectangles, while software components are represented by blue squares. As the figure explains, each Hardware interface is managed by a dedicated software component. For the Audio streaming, at the Patient Site a C# software is used while at the Doctor Site


a Python software is used; for the Video streaming a C++ software is used at both the Patient and Doctor Sites, while for the Haptic streaming two Simulink models are used to create the two Firmwares and two other Simulink models implement the communication between laptop and microcontroller and between laptops.

Each pair of software components implements a communication over a Local Area Network; the Audio and Haptic channels communicate over UDP, while the Video channel communicates over TCP.

The real system can be seen in figure 3.3; each hardware component is described by the related green square; software components, instead, are on Doctor and Patient PCs which are not visible in the illustration.

Fig. 3.3 Real System

3.2.1

Components for Haptic Feedback

To better understand the complex Haptic loop shown in figure 3.2, figure 3.4 has been developed. In this figure the hardware components have been removed. Moreover, the


robotic interface at the Patient Site is called Grab, whereas the haptic interface at the Doctor Site is called Delta. The figure highlights the software components as well as the communication interfaces. The

Fig. 3.4 Software Loop

communication loop is articulated in four steps: for each side a Low Level and a Medium Level are developed, where the Low Level represents the Embedded Controller and the Medium Level represents the Simulink Host. In the following, each block is functionally described:

• Delta Low Level The Delta embedded controller receives direct inputs from the joystick attached to the Doctor's Haptic interface, the three encoders and the drivers, which also represent an output of the block. It can perform kinematics computations and gravity compensation. Regarding the communication with the Simulink Host, the controller sends the Delta Clock value, the Delta Position and the Latency of the whole system; it receives from the Host the working Mode, the target Reference and the information needed to compute the Latency.

• Delta Medium Level The Delta Medium Level has three tasks: using data coming from the Delta, it computes the Grab working modality; it provides a measure of the Delta position to be given to the Grab Medium Level; finally, with data coming from the Grab Host, it performs Force Rendering.

• Grab Medium Level The Grab Medium Level has two tasks: the Slave Manager computes the Grab working mode and the relative target and sends them together with the Delta Cycle to the Grab Low Level; meanwhile the Check Contact must verify the contact condition (when the stethoscope of the audio channel in figure 3.2 is pressing on the patient skin) and compute the direction of the force at the Grab End-Effector.


• Grab Low Level The Grab embedded controller can perform Kinematics computations, Gravity Compensation and Force Sensing using information coming from encoders and drivers; the drivers are also used to implement the force control. It receives from the Grab Host the working modality, the reference, and the Delta Cycle Value, and it sends back to the Host the Delta Cycle Value together with the sensed force.


Chapter 4

Teleauscultation System

After the general description in Chapter 3, this Chapter describes the interfaces used to realize the project. The Chapter is divided into three main sections: the Patient Site, the Doctor Site, and the Communication. For each side the Hardware and Software of the Haptic, Video and Audio interfaces will be explained; then, in section 4.3, the communication strategy used will be shown.

4.1

Patient Site

As already mentioned in the previous chapter, the Patient Site is where the Patient is placed; in the following sections Haptic, Video and Audio interfaces used in the Patient Side will be described.

4.1.1

Haptic Interfaces

At the Patient Site, the patient lies supine on a table and has a haptic interface placed behind. The robotic interface, which can be seen in figure 4.1, has three components: the Grab, the Stethoscope, and the Grab-Stethoscope interface.

4.1.1.1 Stethoscope

The description of the haptic interfaces starts from a geometric analysis of the Stethoscope; in this way it will later be possible to understand the interface in Section 4.1.1.2. A functional description of this interface will be provided in Section 4.1.3. What matters here is that the chest piece of the stethoscope can be represented by a truncated pyramid with a triangular base and rounded edges; attached to the minor base, the part that must be


Fig. 4.1 Patient Haptic Interface

placed on the patient skin can be represented by a dome whose height is smaller than the base radius.

4.1.1.2 Stethoscope-Grab Interface

Before describing the actuated robotic arm used at the Patient Side, the interface which attaches the stethoscope to the robotic arm itself is described.

The chest piece of the stethoscope outlined in the previous section (base dome diameter 5.1) has been separated from the diaphragm and the headset to be mounted on the end effector. The interface between the stethoscope and the robotic arm has to respect the following characteristics:

• The stethoscope must be able to rotate around the X3 and Y3 axes of Grab Frame 3 of figure 4.6 to adapt to the patient body, so the interface must have two degrees of freedom

• The application point of the force exerted by the end effector on the stethoscope must not generate a moment on the stethoscope, so it has to be as near as possible to the center of the stethoscope support


• All the human body’s points useful for auscultation must be reachable

• The stethoscope must be entirely fixed to the interface; no relative motion must be allowed

The interface in figure 4.2 has been designed and then realized via 3D printing in ABSplus-P430 (Stratasys®) plastic. As can be seen in figure 4.3, it is composed of three parts: the first

Fig. 4.2 Realized Interface

allows for an offset of the stethoscope with respect to the end point, whereas parts two and three give the stethoscope two rotational DoFs with respect to the robotic interface by means of two hinge joints. These joints have low friction thanks to ball bearings, and their axes intersect at the point P_S, which is 15 mm above the center P_C of the surface of the stethoscope in contact with the patient. This mechanical solution makes the force F_C generated during the contact with the patient align the stethoscope to the skin without the need for actuated DoFs: in fact, the torques due to F_C on the two passive rotational joints make the stethoscope rotate until the direction of F_C includes the point P_S. Once the force is aligned, the three actuated DoFs of the robotic interface allow any force to be exerted on the patient. Finally, springs were added to parts 2 and 3 to limit the motion of the stethoscope during non-contact phases.


Fig. 4.3 Interface functioning

Rubber and foam layers were added as interfaces around the chest piece to minimize the noise due to the motion of the robotic interface; moreover, the stethoscope has been linked to the interface using cable ties instead of a plastic part, to minimize the vibrations coming from the Grab.

Before the realization of the interface, a workspace study was carried out using SolidWorks: a simple parallelogram reproducing the Grab behavior was modeled, and the interface with the stethoscope was attached to it. The key feature investigated with the simulation was that all the points useful for auscultation had to be reachable.

The realized system can be seen in figure 4.4.


4.1.1.3 Grab

The Grab is the primary haptic interface and the only one to be actuated. In the following sections the Grab will be described from a Mechanical point of view, then the Electronics that drives the arm will be explained, and finally, the Control Strategies will be shown.

Mechanics In this section a General Description of the structure, the Kinematics and the Gravity Compensation of the Grab will be provided.

General Description The Grab in figure 4.5 is a 3-DoF fully actuated manipulator built in the Percro Laboratory which allows placing the stethoscope in any location of the patient's trunk; the interface has a Rotational-Rotational-Prismatic joint kinematics with the following joint limits:

• q1 ∈ [−20, 20]°

• q2 ∈ [−40, 40]°

• q3 ∈ [0.4, 0.8] m

and maximum continuous and peak forces of 4 N and 10 N respectively.

Fig. 4.5 Percro Grab

The Grab has two base motors which move the first two rotational joints with a differential kinematics, and a third motor that moves the third prismatic joint.


Fig. 4.6 Grab Frames

Reference systems are defined as follows:

• Frame 0: the Origin O0 is the common point of the Joint 1 and Joint 2 axes; the z0 axis is vertical, opposite to gravity, and the x0 axis is horizontal and directed towards motor 1

• Frame 1: the Origin O1 is the common point of the Joint 1 and Joint 2 axes; the axes are obtained from Frame 0 by means of the Joint 1 rotation

• Frame 2: the Origin O2 is the common point of the Joint 1 and Joint 2 axes; the axes are obtained from Frame 1 by means of the Joint 2 rotation

• Frame 3: the Origin O3 is at the end of the end effector; the axes are obtained from Frame 2 by means of the Joint 3 translation

As shown in figure 4.6 joints are referred as follows

• Joint 1: rotation about the x0 axis, zero when the z1 axis is vertical

• Joint 2: rotation about the z1 axis, zero when x2 is superimposed on x1


Kinematics After the definition of the reference frames, the direct and the differential kinematics have also been developed considering the grab-stethoscope interface and the stethoscope itself attached to the grab.

Given $q_1$, $q_2$, $d_3$, the three joint variables, and the two offsets $of_y$, $of_z$ introduced by the Grab-Stethoscope interface (figure 4.7), the End-Effector position (that is, the position reached by the center of the stethoscope chest piece) is given by:

Fig. 4.7 Interface Offsets

$$\begin{bmatrix} p_x \\ p_y \\ p_z \end{bmatrix} = \begin{bmatrix} -\sin(q_2)\,(d_3 + of_y) \\ \cos(q_1)\cos(q_2)\,(d_3 + of_y) - of_z\sin(q_1) \\ \cos(q_2)\sin(q_1)\,(d_3 + of_y) + of_z\cos(q_1) \end{bmatrix} \qquad (4.1)$$

For simplicity, the following notation will be used:

• s1 = sin(q1), c1 = cos(q1)

• s2 = sin(q2), c2 = cos(q2)

So the direct and the inverse Jacobian matrices are:

$$J_d = \begin{bmatrix} 0 & -c_2\,(d_3+of_y) & -s_2 \\ -s_1 c_2\,(d_3+of_y) - of_z c_1 & -c_1 s_2\,(d_3+of_y) & c_1 c_2 \\ c_1 c_2\,(d_3+of_y) - of_z s_1 & -s_1 s_2\,(d_3+of_y) & s_1 c_2 \end{bmatrix} \qquad (4.2)$$


$$J_{inv} = \begin{bmatrix} 0 & -\dfrac{s_1}{(d_3+of_y)\,c_2} & \dfrac{c_1}{(d_3+of_y)\,c_2} \\[2ex] -\dfrac{c_2}{d_3+of_y} & -\dfrac{s_2\,(d_3 c_1 c_2 - of_z s_1 + of_y c_1 c_2)}{c_2\,(d_3+of_y)^2} & -\dfrac{s_2\,(of_z c_1 + d_3 c_2 s_1 + of_y c_2 s_1)}{c_2\,(d_3+of_y)^2} \\[2ex] -s_2 & \dfrac{d_3 c_1 c_2 - of_z s_1 + of_y c_1 c_2}{d_3+of_y} & \dfrac{of_z c_1 + d_3 c_2 s_1 + of_y c_2 s_1}{d_3+of_y} \end{bmatrix} \qquad (4.3)$$
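Equations (4.1)–(4.3) can be cross-checked numerically. The following Python sketch (the offset values are illustrative, not the real interface dimensions) compares the analytic Jacobian of eq. (4.2) with a finite-difference approximation of eq. (4.1):

```python
import numpy as np

OFY, OFZ = 0.05, 0.03   # illustrative interface offsets (m)

def fk(q1, q2, d3):
    """End-Effector position, eq. (4.1)."""
    s1, c1, s2, c2 = np.sin(q1), np.cos(q1), np.sin(q2), np.cos(q2)
    a = d3 + OFY
    return np.array([-s2 * a,
                     c1 * c2 * a - OFZ * s1,
                     c2 * s1 * a + OFZ * c1])

def jacobian(q1, q2, d3):
    """Direct Jacobian J_d, eq. (4.2)."""
    s1, c1, s2, c2 = np.sin(q1), np.cos(q1), np.sin(q2), np.cos(q2)
    a = d3 + OFY
    return np.array([
        [0.0,                      -c2 * a,      -s2],
        [-s1 * c2 * a - OFZ * c1,  -c1 * s2 * a,  c1 * c2],
        [ c1 * c2 * a - OFZ * s1,  -s1 * s2 * a,  s1 * c2]])

q = np.array([0.2, -0.3, 0.6])
J = jacobian(*q)
eps = 1e-6
# Central finite differences of fk, one joint variable at a time
J_num = np.column_stack([(fk(*(q + eps * e)) - fk(*(q - eps * e))) / (2 * eps)
                         for e in np.eye(3)])
print(np.allclose(J, J_num, atol=1e-6))   # True
```

The same comparison against `np.linalg.inv(J)` can be used to verify the entries of eq. (4.3).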

Gravity Compensation Since all the physical information about the robotic interface was known, it was decided to perform an exact gravity compensation outside the control loop, in order to obtain a faster convergence of the end effector to the desired reference.

The principle is the following: the motor of joint three, which is prismatic, must compensate the gravity force of link three projected over the direction of joint three; for joint two, the base motors must compensate the moment of the gravity forces of links three and two; finally, for joint one, the base motors must compensate the moment of the gravity forces of links three, two and one.

Given the masses $m_1$, $m_2$, $m_3$ and the centers of gravity of the three links $\vec{b}_1$, $\vec{b}_2$, $\vec{b}_3$ expressed in Frame 0, the resultant moments are computed; they are then projected onto the direction of each joint axis and summed to obtain the total torque needed.

Computation of the moments:

$$\vec{M}_1 = -\vec{b}_1 \times [0,\,0,\,-m_1 g] \qquad (4.4)$$
$$\vec{M}_2 = -\vec{b}_2 \times [0,\,0,\,-m_2 g] \qquad (4.5)$$
$$\vec{M}_3 = -\vec{b}_3 \times [0,\,0,\,-m_3 g] \qquad (4.6)$$

Compensation on Theta 1:

$$\tau_1 = -(\vec{M}_1 + \vec{M}_2 + \vec{M}_3)\,[1,\,0,\,0]^T \qquad (4.7)$$

Compensation on Theta 2:

$$\tau_2 = -(\vec{M}_2 + \vec{M}_3)\,[0,\,-\sin(q_1),\,\cos(q_1)]^T \qquad (4.8)$$

Compensation on Theta 3:

$$\tau_3 = m_3\,g\,\sin(q_1)\cos(q_2) \qquad (4.9)$$


In the computations above the Stethoscope and the Grab-Stethoscope interface are included in link 3, and the center of gravity of this subsystem is approximated with the following coordinates in Frame 3: $b_{st} = [0, 0.042, -0.055]$ m. These coordinates have been obtained in the following way: the x coordinate is necessarily 0 because the structure is symmetric with respect to the yz plane; the y coordinate has been found by balancing the system on a metallic edge directed along x and reading off the equilibrium point; finally, the z coordinate is assumed to be at the level of the first passive joint, where part 1 meets part 2.
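Equations (4.4)–(4.9) translate almost directly into code. In the sketch below the masses and centers of gravity are illustrative placeholders, and the b-vectors are frozen at one configuration for brevity, whereas the real firmware recomputes them from the current joint angles:

```python
import numpy as np

G = 9.81
# Illustrative masses (kg) and centers of gravity in Frame 0 (m)
m = [1.2, 0.8, 0.5]
b = [np.array([0.0, 0.05, 0.00]),
     np.array([0.0, 0.15, 0.02]),
     np.array([0.0, 0.30, 0.05])]

def gravity_torques(q1, q2):
    """Gravity compensation torques, eqs. (4.4)-(4.9)."""
    # Moments of the gravity forces (eqs. 4.4-4.6)
    M = [-np.cross(b[i], [0.0, 0.0, -m[i] * G]) for i in range(3)]
    # Projections on the joint axes (eqs. 4.7-4.9)
    tau1 = -np.dot(M[0] + M[1] + M[2], [1.0, 0.0, 0.0])
    tau2 = -np.dot(M[1] + M[2], [0.0, -np.sin(q1), np.cos(q1)])
    tau3 = m[2] * G * np.sin(q1) * np.cos(q2)
    return tau1, tau2, tau3

print(gravity_torques(0.1, 0.2))
```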

4.1.1.4 Electronics

Board The Grab is driven by a custom embedded controller which includes a STM® Discovery Board based on an ARM Cortex-M4 STM32F407VGT6 microcontroller. The board drives via PWM three H-bridge based drivers (Pololu®VNH5019) featuring current sense. The board also features the interface for the three optical encoders attached to each motor shaft and the serial communication via high-speed USB (3 Mbits/s). The micro of the embedded board runs the firmware obtained directly from Matlab code generation of a Simulink model. The embedded controller with its main features is shown in figure 4.8


Firmware The firmware of the robotic interface allows for setting position, velocity or force control at either joint or end-effector level. It also computes and drives the joint torques needed for gravity compensation and provides the currents measured at each driver. To obtain these features, a Simulink Model composed of seven main blocks has been developed; each block communicates with the others using custom busses. The structure is the following:

• Master Controller: Its main component is the Finite State Machine which manages the various working modes of the Grab as well as the Delta's operation mode; it is shown in figure 4.9. When the system is turned off STATUS is equal to 1; when POWER is set to 1 the system turns ON and the calibration procedure starts; if the calibration process ends successfully the system becomes READY and STATUS becomes 6; at this point each STATUS value from 8 to 13 is related to a working modality (force, position and velocity at either joint or End-Effector level). Whenever an error occurs the system goes into the ERROR state and STATUS is equal to 14.

Fig. 4.9 Haptic Interface FSM

• MONITOR: gives the possibility to insert variables into the MON bus; it is used for debugging

• controlComponents: the block where the high-level control is performed; it computes the motor torque necessary to obtain a target position or velocity


• MOTOR TORQUE DRIVERS: the block which performs the current control, i.e. the low-level control; it then calculates the necessary Duty Cycle of the Voltage reference using information coming from the Driver Calibration

• Kinematics: the block where the Kinematics computations are done and where the Gravity Compensation model is implemented

• Data IOcom_ReMeDi: it is the block that implements the communication with the Simulink Host

• CONSOLE: a block that implements a direct communication between the firmware and the Matlab console. CONSOLE and Data IOcom_ReMeDi cannot be active at the same time
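The state logic of the Master Controller can be summarized in a few lines. The STATUS codes (1 = off, 6 = READY, 8–13 = working modalities, 14 = ERROR) come from the description above, while the transition rules are a simplified sketch, not the actual firmware:

```python
OFF, READY, ERROR = 1, 6, 14
WORKING_MODES = set(range(8, 14))   # force/position/velocity, joint or EE

class MasterController:
    """Simplified sketch of the firmware FSM of figure 4.9."""

    def __init__(self):
        self.status = OFF

    def power_on(self, calibration_ok=True):
        # POWER = 1 starts the calibration; READY only if it succeeds
        self.status = READY if calibration_ok else ERROR

    def set_mode(self, mode):
        # Working modes are exclusive: only one can be active at a time
        if mode in WORKING_MODES and self.status in WORKING_MODES | {READY}:
            self.status = mode
        else:
            self.status = ERROR

fsm = MasterController()
fsm.power_on()
fsm.set_mode(9)      # one of the six working modalities
print(fsm.status)    # 9
```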

4.1.1.5 Control

Driver Calibration To perform a precise control strategy, the first step realized has been the Driver Calibration; this step allows the creation of a graph which correlates the desired current with the needed voltage. In our case eight positive and eight negative voltage values have been used to create the graph in figure 4.10.

Fig. 4.10 Vpoints-Ical Graph

Control Strategy Once the relation between the needed Voltage and the desired Current has been obtained, the control strategy can be implemented. Figure 4.11 shows the whole control scheme. The light blue boxes represent the control modalities available to the user, which are Force, Position, and Velocity at either Joint or End-Effector level; they can be set using the MODE variable with the related value, as also described in figure


4.9, and they work in an exclusive way. A seventh control modality is available, represented by the orange box; this modality is active during the Calibration Phase and allows imposing the Desired Motor Velocity directly. A description of the whole system of figure 4.11 follows, from left to right.

The first step is to obtain a Joint Velocity Reference (JointVelRef in figure 4.11) from the Joint and End-Effector Velocity and Position Controls. To do this, the difference between the Joint Position Reference and the current Joint Position is multiplied by a gain kp1, obtaining a Joint Velocity, while the Joint Velocity Reference is used directly; the difference between the End-Effector Position Reference and the current End-Effector Position is multiplied by a gain kp2, becoming an End-Effector Velocity Reference, and then summed to the End-Effector Velocity Reference of MODE 6. These two End-Effector contributions are multiplied by the Inverse of the Jacobian Matrix (InvJacobian) to obtain a Joint Velocity Reference. It is important to remember that the control contributions can be summed because they are exclusive, so only one of them is different from 0 at any moment.

Once the Joint Velocity Reference is obtained, it is premultiplied by the Kj2m matrix which transforms the Joint Velocity into a Motor Velocity; this is the point where the Calibration Mode can set the Motor Velocity and where the Velocity Control Loop begins. To implement the Velocity Control Loop a PID controller is used with the Derivative Gain equal to 0. The P and I Gains have been tuned, obtaining kiv = [0.1919, 0.1919, 0.0471] and kpv = [0.02, 0.02, 0.0047].

The tuning procedure has been stopped with the results displayed in figures 4.12, 4.13 and 4.14 for the Velocity Control Loop, in which the red line is the desired velocity and the blue line is the measured velocity.


Fig. 4.13 Motor 2 Velocity Graph

Fig. 4.14 Motor 3 Velocity Graph

The obtained Gains kiv and kpv transform the error between the Desired Motor Velocity and the Current Motor Velocity into the Desired Motor Torque. Here two other components are summed: the Motor Torque needed for Friction or Gravity Compensation and the Motor Torque coming from the End-Effector and Joint Force Controls. To obtain Motor Torques from the End-Effector Force Control, the target value is premultiplied by the transpose Jacobian, obtaining a Joint Force which is added to the Joint Force Reference (the reason why they can be summed is still the same: only one control modality can be active); then the Joint Force Reference is premultiplied by the transpose of the Kj2m Matrix to obtain a Motor Torque. This contribution is summed to the Compensation contribution and the Position/Velocity contribution to get the total Motor Torque needed.


The total Motor Torque needed is converted into the Desired Current using the km2i Gain; then, using the relation in figure 4.10, the required Voltage and the subsequent Duty Cycle are computed.
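One iteration of the cascade described above can be sketched as follows. The PI gains are those reported in the text, while kp1, km2i and the 1 kHz control rate are illustrative assumptions, and Kj2m is taken as identity for brevity:

```python
import numpy as np

KPV = np.array([0.02, 0.02, 0.0047])      # velocity-loop P gains (from text)
KIV = np.array([0.1919, 0.1919, 0.0471])  # velocity-loop I gains (from text)
KP_POS = 5.0    # position-to-velocity gain kp1 (illustrative)
KM2I = 0.5      # torque-to-current gain km2i (illustrative)
DT = 0.001      # assumed 1 kHz control step

integral = np.zeros(3)

def control_step(q_ref, q, qd, tau_comp):
    """Position -> velocity PI -> torque (+compensation) -> current."""
    global integral
    vel_ref = KP_POS * (q_ref - q)      # joint velocity reference
    err = vel_ref - qd                  # velocity error
    integral = integral + err * DT
    tau = KPV * err + KIV * integral    # PI controller (D gain is 0)
    tau = tau + tau_comp                # gravity/friction compensation term
    return KM2I * tau                   # desired motor current

cmd = control_step(np.array([0.1, 0.0, 0.5]), np.zeros(3),
                   np.zeros(3), np.zeros(3))
print(cmd)
```

In the real firmware the returned current is then mapped to a PWM duty cycle through the calibrated voltage-current relation of figure 4.10.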

4.1.2

Video Interface

To manage the video channel, a Kinect camera, i.e. an RGB-D sensor, has been used at the Patient Site, placed over the patient so as to minimize the points occluded by the Grab and to allow the doctor to see the stethoscope surface as much as possible. The Kinect provides images at 30 Hz frequency and 640x480 pixel resolution, which are managed by the Compact Components (CoCo, [RB16]) framework for Mixed Reality, an open-source C++ based software designed around the concepts of data-flow and multicore execution flexibility. Even if the actual video streaming uses a 2D visualization of the scene, CoCo has been chosen for a future use of the Depth channel and of alternative viewing modalities like Head Mounted Displays. The use of a Head Mounted Display would take the doctor's sense of presence to the next level, minimizing the differences between a real auscultation and a tele-auscultation.

4.1.3

Audio Interface

Nowadays every area of everyday life is experiencing the diffusion of technology and its benefits; like all the other fields, the medical one is also experimenting with a new generation of smart objects which can be very useful to obtain better results. An example of this trend is the evolution of stethoscopes which, like many other medical instruments, are becoming electronic devices. There are many advantages in using an electronic stethoscope: first of all the possibility to introduce a digital filter, then the capability to record a sound that can be consulted later and, last but not least, the Bluetooth technology which allows the transfer of sounds to a PC.

The audio channel at the Patient Side is managed by the Littmann® Electronic Stethoscope Model 3200 in figure 4.15. It is a Bluetooth model that can be used as a classic stethoscope, with an integrated speaker that works inside the diaphragm. Moreover, the stethoscope can record and save up to twelve 30-second patient soundtracks which can be subsequently listened to; finally, it can communicate with a computer over Bluetooth.


Fig. 4.15 Littmann Stethoscope 3200

For the realization of the thesis project the standard Bluetooth connection has been used; the audio signal has a 4 kHz frequency and is transmitted in blocks of 64 samples, each represented with 16 bits.
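Assuming the figures above (4 kHz sampling, 16-bit samples, blocks of 64 samples), the resulting raw bandwidth is easy to estimate; framing overhead (Bluetooth/UDP headers) is ignored here:

```python
# Back-of-the-envelope audio bandwidth from the figures above
SAMPLE_RATE = 4000        # Hz
BITS_PER_SAMPLE = 16
BLOCK_SAMPLES = 64

payload_bytes = BLOCK_SAMPLES * BITS_PER_SAMPLE // 8   # bytes per block
blocks_per_s = SAMPLE_RATE / BLOCK_SAMPLES             # blocks per second
bytes_per_s = SAMPLE_RATE * BITS_PER_SAMPLE // 8       # raw payload rate

print(payload_bytes, blocks_per_s, bytes_per_s)   # 128 62.5 8000
```

At 8 kB/s the audio stream is light enough that UDP packet loss, rather than bandwidth, is the main concern on a LAN.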

4.2

Doctor Site

Now that all the Patient components have been described, it is time to move to the Doctor Site. As for the Patient Side, first the Haptic Interfaces will be described, then the Video and Audio Interfaces will be shown.

4.2.1

Haptic Interfaces

At the Doctor Site, which is where the specialized doctor is, the haptic interface the doctor acts on is a custom Delta-like parallel haptic interface, shown in figure 4.16.


Fig. 4.16 Delta

The Delta is driven by a copy of the embedded controller which drives the Grab, mounted on an STM board; serial communication via high-speed USB (running at 3 Mbit/s) is available. The micro of the embedded board runs the firmware, again obtained with Matlab Code Generation from the same Simulink Model used for the Grab, with the necessary adjustments; it still allows for setting position, velocity or force control at either joint or end-effector level. The workspace of the robot is comparable to a real arm workspace, since the doctor must be able to move the Delta End-Effector from a stable position and comfortable movements must be guaranteed; it is included in a cylinder whose diameter and height are 0.26 m and 0.12 m. The robot has three Maxon RE40 motors working at 24 V which are used to perform the translational motion of the End-Effector. The force that the device can display is 40 N in each direction within the workspace.

A simple hook, shown in figure 4.17, has been designed and realized via 3D printing in ABSplus-P430 (Stratasys®) plastic to be fixed at the top of the Delta. It has a shape similar to the stethoscope's chest piece; it is hollow inside to be as lightweight as possible, with three supports which give it stability. It is meant to increase the doctor's sense of presence and to facilitate the placing of the stethoscope itself.


Fig. 4.17 Delta Hook

The resultant Haptic interface is shown in figure 4.18

4.2.2

Video Interfaces

The interface used at the Doctor Side to display the video is an LCD monitor; also in this case the video manager is CoCo, which interprets the incoming packets and transforms them into images.

4.2.3

Audio Interfaces

At the Doctor Side simple headphones are used to listen to the sounds. One of them is placed inside the original stethoscope headset, which works as a noise filter for a better result. The audio is managed by two Python programs which can send the incoming audio packets to the headphones or record them as .m4a tracks.

4.3

Communication

Now that the interfaces have been explained, it will be analyzed how each pair of interfaces communicates. As can be seen in figure 4.19, the communications are implemented using three independent channels:


Fig. 4.19 Multimodal Streaming

• Video: the Video Streaming managed by CoCo uses a TCP connection: the Streamer on the Patient PC acts as a server and waits for a connection request; the Player on the Doctor PC acts as a client and sends the connection request. The link between the two sides is mono-directional, from the Patient Side to the Doctor Side.

• Audio: the Audio Streaming, managed at the Patient Side by the C# software and at the Doctor Side by the Python software, uses a UDP connection; also in this case the link between the two sides is mono-directional, from the Patient Side to the Doctor Side.

• Data: differently from the Audio and Video ones, the Data communication implemented between the two haptic interfaces is bi-directional. It is implemented over UDP; for this reason, the information in each packet must be independent of the information in previous packets. If this condition is not respected, latency or packet loss could cause wrong behaviors.
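The "each packet independent" constraint can be illustrated with a minimal sketch: every datagram carries the complete current state (an absolute reference plus a monotonic cycle counter), never a delta with respect to the previous packet, so stale or lost datagrams can simply be dropped. The field layout below is hypothetical, not the thesis' actual packet format:

```python
import struct

FMT = "<I3f"   # cycle counter + absolute x, y, z reference (hypothetical)

def make_packet(cycle, pos):
    """Pack the full current state into one self-contained datagram."""
    return struct.pack(FMT, cycle, *pos)

def handle_packet(data, last_cycle):
    """Accept a datagram only if it is newer than the last one seen."""
    cycle, x, y, z = struct.unpack(FMT, data)
    if cycle <= last_cycle:        # stale or duplicated: safe to ignore
        return last_cycle, None
    return cycle, (x, y, z)

pkt = make_packet(42, (0.1, 0.2, 0.3))
last, pos = handle_packet(pkt, last_cycle=40)
print(last, pos)
```

Because the state is absolute, skipping any number of datagrams leaves the receiver with a valid (merely older) reference, which is exactly the robustness property the text requires.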

Fig. 4.11 Control Loop Graph



Chapter 5

Haptic Interfaces Functioning

All the interfaces used have been described in the previous chapter, together with all the communication channels; now it is time to go deep into the Haptic Interfaces functioning. In fact, the haptic channel manages two crucial and sensitive tasks: the positioning of the stethoscope over the patient trunk and the implementation of the force feedback. The communication between the two robotic interfaces is realized in two steps: each robot communicates with a Simulink host model running on the associated PC via USB; then the Hosts communicate with each other via UDP.

5.1

Host-Robot Communication

In this section the hosts' implementation will be described, to understand which parameters can be sent to the firmware and which parameters can be read from the host. The host that will be analyzed is used at both the Doctor and Patient Sides.

5.1.1

Host to Robot Communication

There are many parameters that can be given to the Robot using the host; they are shown in figure 5.1 and are sent in a 52 Byte packet using serial communication at 3 Mbit/s over high-speed USB. The packet, which sends data from the PC to the embedded controller and which will be referred to as PC2E, has a 6 Byte Head ('PERCRO'), a 4 Byte CRC and a 2 Byte Tail ('\r\n'). A simple explanation of each payload parameter follows:

• POWER: the variable that starts the Finite State Machine that drives the robot; when it is set to 1 the calibration procedure starts, during which the robot actuates the motors to reach a known position (usually a mechanical stop); this procedure is necessary because


Fig. 5.1 PC2E communication

of the use of incremental encoders which, starting from the known registered position, measure the motor shaft rotation

• RUN: the variable that enables the control mode set in the MODE variable

• MODE: lets the user choose which control modality has to be used among force, position and velocity control at either joint or end effector level

• EFFECT: Selects whether Gravity or Friction Compensation must be used or none of them

• VAL: It is a three element vector and gives the reference values which must be followed by the control loop

• LEDV: unused

• CYCLEBACK: in the Delta Host it is a variable used to calculate the latency of the complete system; in the Grab Host it becomes CycDeltaIn, because the arriving Delta Cycle Value must enter the firmware.
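The PC2E framing can be sketched as follows. Only the 6-byte head, 4-byte CRC, 2-byte tail and the total size of 52 bytes come from the text; the payload field widths (and the use of CRC-32 over the payload) are an illustrative guess:

```python
import struct
import zlib

# POWER, RUN, MODE, EFFECT (uint32), VAL[3] (float32), LEDV, CYCLEBACK
# (uint32), plus 4 padding bytes to reach a 40-byte payload (hypothetical)
PAYLOAD_FMT = "<4I3f2I4x"

def make_pc2e(power, run, mode, effect, val, ledv, cycleback):
    payload = struct.pack(PAYLOAD_FMT, power, run, mode, effect,
                          *val, ledv, cycleback)
    crc = struct.pack("<I", zlib.crc32(payload))
    return b"PERCRO" + payload + crc + b"\r\n"

pkt = make_pc2e(1, 1, 8, 0, (0.0, 0.1, 0.5), 0, 12345)
print(len(pkt))   # 52
```

A fixed head and tail plus a CRC let the embedded side resynchronize on the byte stream and discard corrupted frames, which is the usual rationale for this kind of framing.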

5.1.2

Robot to Host Communication

To monitor the state of the robot system, many variables are sent back to the host in a 68 Byte packet, as can be seen in figure 5.2; also in this case the packet has a 6 Byte Head ('PERCRO'), a 4 Byte CRC and a 2 Byte Tail ('\r\n'). A list of the variables that can be read follows, with a

(53)

5.1 Host-Robot Communication 43

Fig. 5.2 E2PC communication

simple explanation:

• STATUS: Reports what the robot is currently doing; its behavior is described by figure 4.9

• CYCLE: The variable that reports the cycle counter of the embedded controller; it is used both to check correct functioning and to calculate latency

• Pos: Depending on the control modality, this variable reports the end-effector or joint position

• Vel: Depending on the control modality, this variable reports the end-effector or joint velocity

• FTvect: A variable available for debugging

• BTN: Since a joystick can be attached to the Delta, this variable is set to 1 whenever a joystick button is pressed

• LAT: In the Delta Host, displays the computed latency, namely the difference between the CYCLEBACK value entering the controller and the controller's current cycle value; in the Grab Host, displays the most recent Delta cycle value that has entered the controller. The loop can be seen in figure 3.4
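The latency estimate reported in LAT can be reproduced with a minimal sketch: the controller's current cycle counter is compared with the echoed CYCLEBACK value and scaled by the loop period. The 1 kHz loop rate used here is an assumption for illustration, not stated in this section.

```python
# Minimal sketch of the LAT computation described above. The 1 kHz control
# loop rate (CYCLE_PERIOD_S) is an assumed value for illustration.
CYCLE_PERIOD_S = 0.001

def round_trip_latency(current_cycle: int, cycleback: int) -> float:
    # Latency = (current controller cycle - echoed CYCLEBACK) * cycle period
    return (current_cycle - cycleback) * CYCLE_PERIOD_S

# A CYCLEBACK echoed 90 cycles earlier corresponds to ~90 ms of latency
print(round_trip_latency(15730, 15640))
```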


5.2

Host-Host Communication

After the description of the Host-Robot communication, all the elements needed to investigate the Host-Host communication are in place. As already mentioned, the communication between the two sides is implemented with two Simulink models integrated into the host Simulink models.

5.2.1

General Description

The overall model pursues several objectives. The most important is to allow the doctor to position the stethoscope easily, which is achieved through workspace mapping, and as realistically as possible, thanks to haptic feedback. Moreover, this task must remain feasible in the presence of latency; for this reason, a model-mediated scheme has been implemented. Finally, many system parameters must be easy to change, which is why a simple Matlab interface has been implemented.

The following sections explain the Matlab interface used; then the Initialization phase, Workspace Mapping, Position and Force Control, Haptic Feedback, and Latency Estimation are described.

5.2.2

Matlab Interface

To manage the whole system, a simple Matlab graphical user interface (GUI), shown in figure 5.3, has been implemented. Each button is explained below:

• START: starts the Delta Host Simulink model and sets PWR_D = PWR_G = 1, which starts both robots' finite state machines and consequently both calibration procedures

• STOP: resets PWR_D = PWR_G = 0, which stops both robots' finite state machines

• HAPTIC ON: enables the haptic feedback

• HAPTIC OFF: disables the haptic feedback

• FORCE_CTRL: sets MODE_G to 2, switching the Grab control mode to end-effector force control

• POS_CTRL: sets MODE_G to 5, switching the Grab control mode to end-effector position control


Fig. 5.3 Matlab Interface

• Experiments Tab: Used to collect and organize experimental data; see Chapter 6

• Frequency Tab: Used to collect data related to frequency recognition; see Chapter 6

• Grab Target Points: Used in the positioning experiment to save target points

The interface not only makes changing parameters easier, it also avoids using the mouse directly in the Simulink model, which in turn prevents unnecessary and problematic interruptions.
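The button behavior described above amounts to writing a few control variables. A minimal sketch of that mapping is shown below; the dictionary-based storage is hypothetical (the actual GUI writes these values into the running Simulink models), while the button names and the values they set come from the list above.

```python
# Hypothetical in-memory store for the control variables; in the real system
# these are written into the running Simulink models.
params = {"PWR_D": 0, "PWR_G": 0, "HAPTIC": 0, "MODE_G": 2}

# Mapping from GUI buttons to the parameter writes described in the text.
BUTTON_ACTIONS = {
    "START":      {"PWR_D": 1, "PWR_G": 1},  # start both FSMs and calibrations
    "STOP":       {"PWR_D": 0, "PWR_G": 0},  # stop both FSMs
    "HAPTIC ON":  {"HAPTIC": 1},
    "HAPTIC OFF": {"HAPTIC": 0},
    "FORCE_CTRL": {"MODE_G": 2},  # end-effector force control
    "POS_CTRL":   {"MODE_G": 5},  # end-effector position control
}

def press(button):
    params.update(BUTTON_ACTIONS[button])

press("START")
press("POS_CTRL")
print(params["PWR_D"], params["MODE_G"])  # prints "1 5"
```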

5.2.3

Initialization

The initialization phase goes from the moment both models are turned ON to when both robots have reached a known fixed position, denoted PI_D for the Delta and PI_G for the Grab. An important design choice is that, even though the communication is bidirectional, the Doctor Host can change parameters in the Patient Host, whereas the Patient Host can only send information back to the Doctor Host.


To describe the initialization task, the following notation will be used: the suffix _D is appended to Delta variable names, while the suffix _G is appended to Grab variable names.

5.2.3.1 Packet format

To exchange data, both the Delta and the Grab Host send a packet via UDP. The format of the Delta2Grab packet is described in figure 5.4.

Fig. 5.4 Delta to Grab communication

PWR_G is the variable used to start the Grab finite state machine; ENABLE notifies the Grab that the Delta is ready to start the tele-operation loop; CYCLE_D is the value of the Delta cycle counter and, as explained later, is used to calculate latency; OFFSET is an array of three single-precision variables representing the current offsets (x, y, z) of the Delta position with respect to PI_D. Finally, MODE_G is used to set the Grab control modality.
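The Delta2Grab fields named above can be serialized and sent over UDP as in the following sketch. The byte layout (field widths and order) and the address/port are illustrative assumptions; only the field names, the three single-precision OFFSET components, and the use of UDP come from the text.

```python
import socket
import struct

# Hypothetical byte layout for the Delta2Grab packet fields named in the
# text. Widths and order are assumptions; OFFSET uses three single-precision
# floats as stated above.
D2G_FMT = "<2BI3fB"  # PWR_G, ENABLE, CYCLE_D, OFFSET[3], MODE_G

def send_delta2grab(sock, addr, pwr_g, enable, cycle_d, offset, mode_g):
    pkt = struct.pack(D2G_FMT, pwr_g, enable, cycle_d, *offset, mode_g)
    sock.sendto(pkt, addr)
    return pkt

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The address and port are placeholders for the Grab Host endpoint.
pkt = send_delta2grab(sock, ("127.0.0.1", 5005),
                      1, 1, 1234, (0.01, -0.02, 0.0), 2)
print(len(pkt))  # 19 bytes with this illustrative layout
```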

As will be described later, for the purpose of collecting experimental data it was decided to merge them in the Delta Host. For this reason, the Grab-to-Delta packet is bigger than the Delta-to-Grab one. The packet composition can be seen in figure 5.5 and is now described:

Fig. 5.5 Grab to Delta communication

FORCE is a three-dimensional vector that always carries the mean of the last 30 force samples measured by the Grab; KOEFF represents the force scaling factor, as will be explained in the Model-Mediated section; CYCLEBACK is a variable used to calculate latency,
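The 30-sample averaging applied to FORCE can be sketched with a simple sliding window. The window length comes from the text; everything else (class name, per-axis layout) is illustrative:

```python
from collections import deque

# Sketch of the 30-sample moving average applied to the measured force
# before it is sent in the Grab-to-Delta packet (window length from the text).
class ForceAverager:
    def __init__(self, window=30):
        self.samples = deque(maxlen=window)  # oldest samples drop out

    def add(self, fx, fy, fz):
        self.samples.append((fx, fy, fz))

    def mean(self):
        n = len(self.samples)
        return tuple(sum(axis) / n for axis in zip(*self.samples))

avg = ForceAverager()
for i in range(30):
    avg.add(1.0, 2.0, float(i))
print(avg.mean())  # prints "(1.0, 2.0, 14.5)"
```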
