Machine-Vision Based Position and Orientation Sensing System for UAV Aerial Refueling


UNIVERSITÀ DI PISA
Facoltà di Ingegneria, Laurea Specialistica (M.Sc.) in Automation Engineering

Master's thesis:
Real-Time Machine-Vision Based Position and Orientation Sensing System for UAV Aerial Refueling
Thesis work carried out at West Virginia University (Morgantown, WV, U.S.A.).

Candidate: Rocco Vincenzo Dell'Aquila
Advisors: Prof. Mario Innocenti, Prof. Lorenzo Pollini

Graduation session of 17/07/2006, academic year 2005/2006.

ABSTRACT

This thesis describes the design of a Real-Time Machine-Vision (MV) position sensing system for the problem of semi-autonomous docking within Aerial Refueling (AR) for Unmanned Aerial Vehicles (UAVs). MV-based algorithms are implemented within the proposed scheme to detect the relative position and orientation between the UAV and the tanker. In this effort, techniques and algorithms for the acquisition of the image from a real webcam, for the Feature Extraction (FE) from the acquired image, for the Detection and Labeling (DAL) of the features, and for the tanker-UAV Pose Estimation (PE) have been developed and extensively tested in a MATLAB/Simulink® Soft Real-Time environment and in a Linux/RTAI Hard Real-Time environment. First, the MV block of the previously developed closed-loop simulation was re-implemented to work with real videos and real images from a webcam instead of the Virtual Reality Toolbox® visualization. The webcam-based MV system was then ported to meet Hard Real-Time requirements; only this second part is illustrated in this short report. Additionally, a new mechanism for inter-process communication between Real-Time and Non-Real-Time processes was developed by implementing Cyclic Asynchronous Buffers (CAB) on RTAI. Finally, the entire sensing system was tested using an 800 MHz PC-104 computer (the on-board computer embedded on the YF-22 UAV models of the WVU laboratories), and the results confirmed the feasibility of executing image processing algorithms in real time using off-the-shelf commercial hardware to obtain reliable relative position and orientation estimates.

Keywords: Real-Time Image Acquisition, Inter-Process Communication, Pose Estimation, Visual Servo Control

1. Introduction

One of the biggest current limitations of Unmanned Aerial Vehicles (UAVs) is their lack of aerial refueling (AR) capabilities. There are currently two hardware configurations used for aerial refueling of manned aircraft. The first configuration is used by the US Air Force and features a refueling boom maneuvered by a boom operator to connect with the fuel receptacle of the aircraft to be refueled. The second configuration is used by the US Navy and features a flexible hose with an aerodynamically stabilized perforated cone, known as the 'Probe and Drogue' system. The effort described in this thesis is relative to the US Air Force refueling boom system, within the general goal of extending the use of this system to the refueling of UAVs. For this purpose, a key issue is the need for an accurate measurement of the tanker-UAV relative position and orientation from the 'pre-contact' to the 'contact' position and during the refueling. Although sensors based on laser, infrared radar, and GPS technologies are suitable for autonomous docking [1], there might be limitations associated with their use. For example, the use of UAV GPS signals might not always be possible since the GPS signals may be distorted by the tanker airframe. Therefore, the use of Machine Vision (MV) technology has recently been proposed in addition, or as an alternative, to these technologies [2][3]. Furthermore, MV-based systems have been investigated for close-proximity operations of aerospace vehicles [4] and for the navigation of UAVs [5]. The focus of this thesis is the development and implementation of a Machine-Vision based Relative Position and Orientation Sensor (RPOS) between UAV and tanker through the use of a real camera. The sensor consists of a camera connected to a computer via a Universal Serial Bus (USB) connection. The computer hosts a Real-Time Operating System (RTOS) executing several Real-Time (RT) and Non-Real-Time (NRT) processes. Specifically, an image acquisition algorithm first captures a stream of images from the camera; these images are then processed by a Feature Extraction (FE) algorithm. Next, the feature coordinates are processed by a Detection and Labeling (DAL) algorithm followed by a Pose Estimation (PE) algorithm, which finally provides estimates of the relative position and orientation of the tanker. The software was developed within a MATLAB/Simulink® environment, which allowed for an initial detailed testing of the sensor algorithms in closed loop using Virtual Reality images [6]. The position sensor alone was initially tested within a Soft Real-Time environment, and later implemented using a Hard Real-Time Linux/RTAI© [7] setup. A novel shared-memory based inter-process communication mechanism, called Cyclic Asynchronous Buffer (CAB) [8], was developed within this effort. Finally, the testing was performed using an 800 MHz PC-104 computer [9]. This synthesis is organized as follows. The MV-based AR problem is briefly described in the initial section. Next, the MV position sensing system is outlined through the documentation of the corner detection, labeling, and pose estimation algorithms. Additionally, the Hard Real-Time Linux/RTAI based implementation is illustrated. Finally, experimental results are presented and discussed.

2. The MV-Based AR Problem

A block diagram of the MV-based AR problem is shown in Fig. 1, along with the relevant geometric distances and the associated reference frames.

2.1. Reference frames and Notation
The study of the AR problem requires the definition of the following Reference Frames (RFs):

• ERF, or E: earth-fixed reference frame.
• PRF, or P: earth-fixed reference frame having the x axis aligned with the planar component of the tanker velocity vector.
• TRF, or T: body-fixed tanker reference frame located at the tanker center of gravity (CG).
• URF, or U: body-fixed UAV reference frame located at the UAV CG.
• CRF, or C: body-fixed UAV camera reference frame.

Fig. 1 – Reference Frames for the AR Problem.

Within this study geometric points are expressed using homogeneous (4D) coordinates and are indicated with a capital letter and a left superscript denoting the associated reference frame. For example, a point P expressed in the F reference frame has coordinates ${}^{F}P = [x, y, z, 1]^T$, where the right 'T' superscript indicates transposition. Vectors are defined as differences between points; therefore their 4th coordinate is always '0'. Also, vectors are denoted by two uppercase letters, indicating the two points at the extremes of the vector. For example, ${}^{E}BR = {}^{E}B - {}^{E}R$ is the vector from the point R to the point B expressed in the Earth Reference Frame. The transformation matrices are (4 x 4) matrices relating points and vectors expressed in an initial reference frame to points and vectors expressed in a final reference frame. They are denoted with a capital T with a right subscript indicating the "initial" reference frame and a left superscript indicating the "final" reference frame. For example, the matrix ${}^{E}T_{T}$ represents the homogeneous transformation matrix that transforms a vector/point expressed in the TRF into a vector/point expressed in the ERF.

2.2. Geometric formulation of the AR problem

The objective of the docking is to guide the UAV such that its fuel receptacle (point R in Fig. 1) is "transferred" to the center of a 3-dimensional window (3DW, also called "Refueling Box") under the tanker (point B in Fig. 1). It is assumed that the boom operator will take control of the refueling operations once the UAV fuel receptacle reaches and remains within this 3DW. It should be emphasized that point B is fixed within the TRF; also, the dimensions $\delta_x$, $\delta_y$, $\delta_z$ of the 3DW are known design parameters. It is additionally assumed that the tanker and the UAV can share a short-range data communication link during the docking maneuver. Furthermore, the UAV is assumed to be equipped with a digital camera along with an on-board computer hosting the MV algorithms acquiring the images of the tanker. Finally, the 2-D image plane of the MV is defined as the 'y-z' plane of the CRF.

2.3. Receptacle-3DW-center vector

The reliability of the AR docking maneuver is strongly dependent on the accuracy of the measurement of the vector ${}^{P}BR$, that is, the distance vector between the UAV fuel receptacle and the center of the 3D refueling window, expressed within the PRF:

$$ {}^{P}BR = {}^{P}T_{T}\,{}^{T}B - {}^{P}T_{U}\,{}^{U}R = {}^{P}T_{T}\,{}^{T}B - {}^{P}T_{T}\,{}^{T}T_{C}\,{}^{C}T_{U}\,{}^{U}R \qquad (1) $$

In the above relationship both ${}^{U}R$ and ${}^{T}B$ are known and constant parameters, since the fuel receptacle (point R) and the 3DW center (point B) are located at known positions with respect to the UAV and tanker frames respectively. The transformation matrix ${}^{C}T_{U}$ represents the position and orientation of the CRF with respect to the URF; therefore, it is also known and assumed to be constant. The transformation matrix ${}^{P}T_{T}$ represents the position and orientation of the tanker with respect to the PRF, which are measured on the tanker and broadcast to the UAV through the data communication link. In particular, if the sideslip angle β of the tanker is negligible, then ${}^{P}T_{T}$ only depends on the tanker roll and pitch angles. Finally, ${}^{T}T_{C}$ is the inverse of ${}^{C}T_{T}$, which can be evaluated either "directly", that is, using the relative position and orientation information provided by the MV system, or "indirectly", that is, by using the formula ${}^{C}T_{T} = {}^{C}T_{U}\,({}^{E}T_{U})^{-1}\,{}^{E}T_{T}$, where the matrices ${}^{E}T_{U}$ and ${}^{E}T_{T}$ can be evaluated using information from the position (GPS) and orientation (gyro) sensors of the UAV and of the tanker respectively.

3. The MV Position Sensing System

The MV position sensing system was developed in Simulink® and is shown in Fig. 2.

Fig. 2 – MV position sensing system Simulink® scheme.

The purpose of the scheme in Fig. 2 is to provide an evaluation of the ${}^{T}T_{C}$ matrix from the image acquired by the camera. The main blocks in Fig. 2 are the 'Image Acquisition' block, the 'Feature Extraction' block, the 'Detection and Labeling' block and the 'Pose Estimation' block. The algorithms relative to these blocks are briefly described in the following sections to give a quick idea of how the RPOS works. The Soft Real-Time Simulink block 'RTBlock' developed by L. Daga [10] is used for executing the simulation within Soft Real-Time constraints in the Windows environment. This block simply holds the execution of the simulation long enough to keep the simulation clock in synchrony with the Central Processing Unit (CPU) clock. In other words, if the CPU elapsed time is shorter than the simulation step, this block waits for the time needed to keep the simulation step within the corresponding CPU elapsed time, leaving the remaining CPU time to the other active Windows processes. It should be emphasized that Hard Real-Time requirements cannot be met under Microsoft® Windows XP, since this OS is not designed to guarantee timing constraints for its processes.
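As an illustration of this pacing idea, the following minimal C sketch holds each step until the wall clock catches up with the simulation clock. It is only a conceptual sketch, not the actual RTBlock implementation; the step size and loop length are arbitrary.

/* Minimal soft real-time pacing sketch: after each simulation step, sleep
 * until the absolute deadline of the next step (illustrative only). */
#include <stdio.h>
#include <time.h>

int main(void)
{
    const double step = 0.033;              /* simulation step in seconds (assumed) */
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int k = 0; k < 300; k++) {
        /* ... run one simulation step here ... */

        /* advance the absolute deadline by one step */
        next.tv_nsec += (long)(step * 1e9);
        while (next.tv_nsec >= 1000000000L) { next.tv_nsec -= 1000000000L; next.tv_sec++; }

        /* sleep only if the CPU finished early; otherwise continue immediately */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}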

3.1. Image Acquisition

A Creative Webcam 5 has been tested (focal length: 15.8 mm; field of view: 95.5° vertical, 127.3° horizontal) and selected for this effort. The image acquisition in the Simulink scheme is performed with a Level-2 S-Function written in the MATLAB language. Within the S-function initialization phase, a 'videoinput' object with the suitable adaptor, device number, and image color format ('winvideo', 2, 'RGB24_320x240' in our case) is created and started. This allows the function to extract one image at every simulation step [11].

3.2. Corner Detection Algorithms

Two feature extraction algorithms were evaluated: the Harris corner detector [12], specifically the version revised by Noble [13], and the SUSAN corner detector [14]. A brief review of these methods is provided below.

3.2.1. Harris Corner Detection Algorithm

This method is based on the analysis of the matrix of the intensity derivatives, also known as the "Plessey operator" [12], which is defined as follows:

$$ M = \begin{bmatrix} I_X^2 & I_{XY} \\ I_{YX} & I_Y^2 \end{bmatrix} \qquad (2) $$

where I is the gray-level intensity of the image, and $I_X$, $I_Y$, $I_{XY}$, $I_{YX}$ are its directional derivatives. The directional derivatives are determined by convolving the image with a kernel given by the corresponding derivative of a Gaussian. If at a certain point the eigenvalues of the matrix M take on large values, then a small change in any direction will cause a substantial change in the gray level. This indicates that the point is a corner. A "cornerness" value C is calculated for each pixel of the image (in Noble's variant [13] it is typically computed as $C = \det(M)/(\operatorname{tr}(M)+\varepsilon)$). If the value of C exceeds a certain threshold, the associated pixel is declared a corner. In both the Harris detector [12] and its variation by Noble [13] a local maxima search is performed as a final step of the algorithm, so that only the corners that locally maximize C are retained.

3.2.2. SUSAN Corner Detection Algorithm

The SUSAN (Smallest Univalue Segment Assimilating Nucleus) corner detection method [14] uses an entirely different approach to low-level image processing than most other corner detection algorithms. A specific characteristic of the method is that no image derivative is used. Furthermore, noise reduction is not required in this algorithm. Within this approach, each image point is associated with a local area of similar intensity. A circular mask (whose central pixel is also referred to as "the nucleus", shown in Fig. 3) is considered at each image position. If the intensity of each pixel within the mask is compared with the intensity of the mask's nucleus, then an area of the mask can be defined which has the same (or similar) intensity as the nucleus. The area of the mask containing pixels of similar intensity is called the USAN, that is, the "Univalue Segment Assimilating Nucleus". A pixel is considered a corner when the USAN area is less than half of the maximum possible area.
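As an illustration of the USAN test described above, the following C sketch counts, within a simplified square window, the pixels whose intensity is similar to the nucleus, and flags a corner when that count is below half of the window area. It is only a conceptual sketch: the actual SUSAN detector [14] uses a circular 37-pixel mask, a smooth similarity function and additional geometric checks.

#include <stdlib.h>

/* Simplified USAN test around pixel (x, y) of a grayscale image of size w x h:
 * 'radius' defines a square window and 't' the intensity similarity threshold. */
static int is_corner_usan(const unsigned char *img, int w, int h,
                          int x, int y, int radius, int t)
{
    int usan = 0, total = 0;
    unsigned char nucleus = img[y * w + x];

    for (int dy = -radius; dy <= radius; dy++)
        for (int dx = -radius; dx <= radius; dx++) {
            int xx = x + dx, yy = y + dy;
            if (xx < 0 || yy < 0 || xx >= w || yy >= h) continue;
            total++;
            if (abs((int)img[yy * w + xx] - (int)nucleus) <= t)
                usan++;                       /* pixel assimilated by the nucleus */
        }
    return usan < total / 2;                  /* corner: USAN smaller than half the area */
}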

Fig. 3 – SUSAN functioning: mask, nucleus and boundary.

3.3. Detection and Labeling Algorithm

Once the 2D coordinates of the corners on the image plane have been detected, the problem is to correctly associate each detected corner with the corresponding feature/corner on the tanker aircraft, whose position in the TRF (3D coordinates) is assumed to be known. Thus, the general approach is to identify a set of detected markers $[u_j, v_j]$ to be matched to a subset of estimated marker positions $[\hat{u}_j, \hat{v}_j]$.

3.3.1. Projection equations

The subset $[\hat{u}_j, \hat{v}_j]$ is the projection in the camera plane of the markers P(j) using the standard "pin-hole" projection model [15]. Specifically, according to the pin-hole model, given a marker 'j' with coordinates ${}^{C}P(j) = [\,{}^{C}x_j, {}^{C}y_j, {}^{C}z_j, 1\,]^T$ in the CRF, its projection into the image plane can be calculated using the projection equation:

$$ \begin{bmatrix} \hat{u}_j \\ \hat{v}_j \end{bmatrix} = \frac{f}{{}^{C}x_{p,j}} \begin{bmatrix} {}^{C}y_{p,j} \\ {}^{C}z_{p,j} \end{bmatrix} = g\!\left(f,\; {}^{C}T_{T}(X)\cdot{}^{T}P(j)\right) \qquad (3) $$

where f is the camera focal length and ${}^{T}P(j)$ are the components of the marker P(j) in the TRF, which are fixed and known 'a priori'. ${}^{C}T_{T}(X)$ is the transformation matrix between the camera and tanker reference frames, which is a function of the current position and orientation vector X:

$$ X = [\,{}^{C}x_T,\; {}^{C}y_T,\; {}^{C}z_T,\; {}^{C}\psi_T,\; {}^{C}\theta_T,\; {}^{C}\varphi_T\,]^T \qquad (4) $$

For labeling purposes the vector X is assumed to be known. In fact, the MV-based estimate of the camera-tanker position and orientation at the previous time instant can be used as a good approximation of the current one (assuming a sufficiently fast sampling rate for the MV system).

3.3.2. The 'Points Matching' problem

Once the "projection" subset $[\hat{u}_j, \hat{v}_j]$ is available, the problem of relating the points extracted from the camera measurements to the actual features on the tanker can be formalized as the matching of the set of points $\{p_1, p_2, \ldots, p_m\}$, where $p_j = [u_j, v_j]$ is the generic 'to be matched' point from the camera, to the set of points $\{\hat{p}_1, \hat{p}_2, \ldots, \hat{p}_n\}$, where $\hat{p}_j = [\hat{u}_j, \hat{v}_j]$ is the generic point obtained by projecting the known nominal corners into the camera plane through Eq. (3).
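Before describing the matching step, the following C sketch illustrates the projection of Eq. (3): the homogeneous transformation is applied to a marker given in the TRF and the result is projected onto the image plane using the focal length. Function and type names are illustrative and do not correspond to the thesis code, which performs this step inside the Simulink scheme.

/* Sketch of the pin-hole projection of Eq. (3): cP = cTt * tP, then divide by
 * the depth along the camera x axis (the image plane is the CRF 'y-z' plane). */
typedef struct { double m[4][4]; } hmat4;

static void project_marker(const hmat4 *cTt, const double tP[4], double f,
                           double *u_hat, double *v_hat)
{
    double cP[4] = {0.0, 0.0, 0.0, 0.0};

    /* homogeneous transformation: cP = cTt * tP (4th component stays 1) */
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            cP[r] += cTt->m[r][c] * tP[c];

    /* perspective division by the distance along the camera x axis */
    *u_hat = f * cP[1] / cP[0];   /* y component -> image coordinate u */
    *v_hat = f * cP[2] / cP[0];   /* z component -> image coordinate v */
}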

Since the two data sets represent the 2D projections of the same points, at the same time instant, on the same plane, a high degree of correlation between the two sets is normally expected. In the ideal case corresponding points would be almost exactly superimposed, resulting in a trivial matching process. However, due to both the camera-tanker relative motion and the presence of different sources of system and measurement noise, a matching problem has to be defined and solved. In the current effort an 'ad-hoc' labeling algorithm was implemented, which solves the matching problem using a heuristic procedure [16]. The algorithm is reviewed below.

3.3.3. Labeling algorithm

The implemented labeling algorithm has the purpose of detecting the points corresponding to real corners and arranging the vector of detected corner coordinates in the format $G_{DAL} = [u_1, v_1, \ldots, u_m, v_m]$. If the k-th corner is not detectable, the overflow value 100 is used instead in positions 2k and 2k+1. Let $\hat{P} = \{\hat{p}_1, \hat{p}_2, \ldots, \hat{p}_n\}$ denote the set of the n projected corners, and let $P = \{p_1, p_2, \ldots, p_m\}$ denote the set of detected corners (not to exceed m). The labeling function creates a matrix Err of dimension n x m, whose entries are all the Euclidean distances between $\hat{P}$ and P. The three vectors MinR, MinC and Index, with dimensions n, m and m respectively, are also created. The minimum element of every column of Err is stored in the row vector MinC, while the index of the row in which that minimum is found is stored in the row vector Index. The minimum element of every row of Err is instead stored in the column vector MinR. The position of the detected corner 'j' in P is deemed "valid" if:

MinC[j] == MinR[Index[j]]   (5)

Detected corners that satisfy the validity condition are assigned to their nearest projected corner; detected corners that do not satisfy the validity condition are discarded. In other words, the validity condition ensures that only one detected corner, among the set of detected corners that are closer to a certain projected corner than to any other projected corner, is assigned to that projected corner. The other detected corners in the same set are not assigned to any other projected corner. The resulting algorithm has a computational complexity proportional to n² and avoids the typical problems associated with a labeling function that simply assigns the detected corners P to the nearest corners in $\hat{P}$ [16].
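The core of this matching step can be sketched in C as follows. The array names (Err, MinR, MinC, Index) follow the description above, while the sizes, the function name and the output convention are illustrative assumptions.

#include <math.h>

#define MAXN 16
#define MAXM 16

/* Heuristic matching sketch: build the n x m distance matrix between projected
 * corners p_hat and detected corners p, then accept detected corner j only if
 * it satisfies the mutual-minimum validity condition of Eq. (5).
 * assignment[j] = index of the projected corner, or -1 if discarded. */
static void label_corners(const double p_hat[][2], int n,
                          const double p[][2], int m,
                          int assignment[MAXM])
{
    double Err[MAXN][MAXM], MinR[MAXN], MinC[MAXM];
    int Index[MAXM];

    for (int i = 0; i < n; i++) {
        MinR[i] = INFINITY;
        for (int j = 0; j < m; j++) {
            double du = p_hat[i][0] - p[j][0];
            double dv = p_hat[i][1] - p[j][1];
            Err[i][j] = sqrt(du * du + dv * dv);
            if (Err[i][j] < MinR[i]) MinR[i] = Err[i][j];     /* row minima */
        }
    }
    for (int j = 0; j < m; j++) {
        MinC[j] = INFINITY;
        Index[j] = 0;
        for (int i = 0; i < n; i++)
            if (Err[i][j] < MinC[j]) { MinC[j] = Err[i][j]; Index[j] = i; }  /* column minima */

        /* validity condition of Eq. (5): mutual minimum */
        assignment[j] = (MinC[j] == MinR[Index[j]]) ? Index[j] : -1;
    }
}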

3.4. Pose Estimation Algorithm

Following the solution of the labeling problem, the information in the set of points must be used to derive the rigid transformation relating the CRF to the TRF in Eq. (1). Within this study, the Gaussian Least Squares Differential Correction (GLSDC) [17] pose estimation method was used ([16], [2]). The GLSDC is based on the application of the Gauss-Newton method to the minimization of a non-linear cost function formulated in terms of the difference between estimated and detected corner positions. Within the GLSDC algorithm, the matrix ${}^{C}T_{T}$ is expressed as a function of an estimate $\hat{X}(k)$ of the unknown vector X(k) at every sample time instant k:

$$ \hat{X}(k) = [\,{}^{C}x_T,\; {}^{C}y_T,\; {}^{C}z_T,\; {}^{C}\psi_T,\; {}^{C}\theta_T,\; {}^{C}\varphi_T\,]^T \qquad (6) $$

Using $\hat{X}(k)$, the 2D coordinates of the projection of corner 'j' in the camera plane are evaluated as:

$$ \begin{bmatrix} u_j \\ v_j \end{bmatrix} = \frac{f}{{}^{C}x_{p,j}} \begin{bmatrix} {}^{C}y_{p,j} \\ {}^{C}z_{p,j} \end{bmatrix} = g\!\left(f,\; {}^{C}T_{T}(\hat{X}(k))\cdot{}^{T}P(j)\right) \qquad (7) $$

By rearranging the coordinates of all the projected corners, the following vector is obtained:

$$ G(\hat{X}(k)) = [u_1, v_1, \ldots, u_m, v_m] \qquad (8) $$

Next, the following MV estimation error can be defined at time k:

$$ \Delta G(k) = G_{DAL}(k) - G(\hat{X}(k)) \qquad (9) $$

where $G_{DAL}(k)$ contains the coordinates of the detected and labeled points extracted from the camera:

$$ G_{DAL}(k) = [u_1, v_1, \ldots, u_m, v_m] \qquad (10) $$

The GLSDC algorithm iteratively refines the initial value of $\hat{X}(k)$ by repeating the following steps for a number of iterations (with index i):

$$ \hat{X}_{i+1}(k) = \hat{X}_i(k) + R_i^{-1}(k)\, A_i^T(k)\, W(k)\, \Delta G_i(k) \qquad (11) $$

where

$$ R_i(k) = A_i^T(k)\, W(k)\, A_i(k) \qquad (12) $$

$$ A_i(k) = \left. \frac{\partial G_i(k)}{\partial X} \right|_{X = \hat{X}_i(k)} \qquad (13) $$

and W(k) is the (2m x 2m) covariance matrix of the estimation error. The initial guess $\hat{X}_0(k)$ at time k is the final estimate at the previous sample time k-1. The original algorithm outlined in Eqs. (11)-(13) was designed to work with a fixed number m of corners. Specific modifications have been introduced for dealing with a time-varying number of corners [16]. In particular, the nominal corners that are not visible are removed from the estimation process at each time step. This implies that at each time instant Eq. (11) is used with the appropriate number of rows, and the dimensions and values of A and W in Eqs. (12) and (13) are adjusted accordingly. Another PE algorithm, the Lu-Hager-Mjolsness (LHM) algorithm [18], was also tested for comparison of the simulated results; it is not reported in this summary because, due to several issues, it was not implemented in real time.
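For illustration, a single GLSDC iteration of Eqs. (11)-(12), with W(k) taken as the identity for simplicity, can be sketched as follows. The Jacobian A and the residual ΔG are assumed to be already available, and the 6x6 normal equations are solved with a naive Gaussian elimination; this is only a sketch, not the thesis implementation, which also handles W(k) and the time-varying number of visible corners.

#include <math.h>

#define NPAR 6

/* One Gauss-Newton step of Eq. (11) with W = I:
 * X <- X + (A^T A)^-1 A^T dG, where A is rows x 6 and dG has 'rows' entries.
 * Returns 0 on success, -1 if the normal equations are (nearly) singular. */
static int glsdc_step(int rows, const double A[][NPAR], const double dG[],
                      double X[NPAR])
{
    double R[NPAR][NPAR] = {{0}}, b[NPAR] = {0};

    /* normal equations: R = A^T A (Eq. (12) with W = I), b = A^T dG */
    for (int r = 0; r < rows; r++)
        for (int i = 0; i < NPAR; i++) {
            b[i] += A[r][i] * dG[r];
            for (int j = 0; j < NPAR; j++)
                R[i][j] += A[r][i] * A[r][j];
        }

    /* Gaussian elimination with partial pivoting on the 6x6 system R*dX = b */
    for (int col = 0; col < NPAR; col++) {
        int piv = col;
        for (int i = col + 1; i < NPAR; i++)
            if (fabs(R[i][col]) > fabs(R[piv][col])) piv = i;
        if (fabs(R[piv][col]) < 1e-12) return -1;
        for (int j = 0; j < NPAR; j++) { double t = R[col][j]; R[col][j] = R[piv][j]; R[piv][j] = t; }
        { double t = b[col]; b[col] = b[piv]; b[piv] = t; }
        for (int i = col + 1; i < NPAR; i++) {
            double fct = R[i][col] / R[col][col];
            for (int j = col; j < NPAR; j++) R[i][j] -= fct * R[col][j];
            b[i] -= fct * b[col];
        }
    }
    for (int i = NPAR - 1; i >= 0; i--) {      /* back substitution */
        double s = b[i];
        for (int j = i + 1; j < NPAR; j++) s -= R[i][j] * b[j];
        b[i] = s / R[i][i];                    /* b now holds the correction dX */
        X[i] += b[i];                          /* update of Eq. (11) */
    }
    return 0;
}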

4. Real-Time Implementation

The availability of a reliable Real-Time Operating System (RTOS) is critical for the computer-based implementation of control laws within flight control systems. Within this effort, the software was implemented in the RTAI-Linux system.

4.1. Linux-RTAI-RtaiLab Environment Description

The environment selected for this study is the Linux 'Fedora Core 4' OS patched with RTAI 3.3 [7]. Linux is a multitasking kernel, which provides non-preemptive scheduling among several dynamically created and deleted processes. However, Linux suffers from a lack of real-time support, since it cannot guarantee response times for its processes. The "Real Time Application Interface" (RTAI) Open Source package, developed at the "DIAPM-Politecnico di Milano" [19][23], is based on a patch to the Linux kernel that inserts a Hardware Abstraction Layer, essentially providing the ability to make the kernel fully preemptable. The latest versions of RTAI feature the ADEOS nano-kernel [20][21][23], which can serve a real-time kernel with the highest priority and a general-purpose kernel with the lowest priority. This feature allows the Real-Time and Non-Real-Time domains to co-exist on the same hardware. Furthermore, RTAI includes a broad variety of services for real-time programming. RTAI-Lab and LXRT are two very important RTAI utilities. The former provides a common structured framework to use code automatically generated by model-based simulation tools (such as Simulink or Scilab), as well as a simple graphical user interface to monitor the real-time execution of such code. LXRT [20][21] stands for LinuX Real-Time and is an RTAI-Linux symmetric API that allows the development of real-time applications entirely in user space. An advantage of this tool is that the hard real-time tasks run under the memory protection umbrella of the standard Linux OS. Additionally, the full range of Linux system calls is available to the task. Specifically, LXRT allows applications to dynamically switch between soft real-time (where the services provided by the Linux system calls are available) and hard real-time (where the Linux system calls are no longer available) by using a single user-space function call. Finally, LXRT allows memory sharing, messages, semaphores, and timings between Linux and RTAI processes. To provide all the above features, LXRT creates a real-time agent task in kernel space for each user-space process. This task is commonly referred to as the "angel" of the user-space task (specific implementation details are shown in Fig. 6); its purpose is to execute the real-time services for the corresponding user-space task [21].
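The LXRT usage pattern described above can be sketched as follows: a plain Linux process creates its kernel-space "angel" agent, locks its memory, and switches to hard real time only around the time-critical section. The RTAI calls are those documented in [20][21], but the task name, priority and overall structure are illustrative assumptions rather than the thesis code.

/* Illustrative LXRT pattern (sketch only): create the agent task, disable
 * paging, switch to hard real time around the critical section, switch back. */
#include <stdio.h>
#include <sched.h>
#include <sys/mman.h>
#include <rtai_lxrt.h>

int main(void)
{
    RT_TASK *task;

    /* create the LXRT agent ("angel") task with FIFO scheduling (name/priority assumed) */
    task = rt_task_init_schmod(nam2num("SNDR"), 0, 0, 0, SCHED_FIFO, 0xF);
    if (!task) { fprintf(stderr, "cannot init LXRT task\n"); return 1; }

    mlockall(MCL_CURRENT | MCL_FUTURE);   /* disable paging for this process */

    rt_make_hard_real_time();             /* enter the hard real-time domain */
    /* ... access the semaphore / shared memory here ... */
    rt_make_soft_real_time();             /* back to soft real time (Linux syscalls available again) */

    rt_task_delete(task);
    return 0;
}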

4.2. Experimental Setup

The hardware utilized for this effort includes an "ASUS A6V" laptop, a PC-104 embedded computer, and a "Creative Webcam 5" camera. The laptop, hosting Windows XP as operating system, is used to run the MATLAB/Simulink environment and to automatically generate code from Simulink. The PC-104 embedded computer, shown in Fig. 4, is used to host the RTAI-Linux system and consequently to run the software in real time. The PC-104 system features an 800 MHz processor and 128 MB of RAM (EmETX-t603, ARBOR Technology Corp.), and uses a Compact Flash card as storage media for the RTOS and the RT applications [9].

Fig. 4 – PC-104 computer (controller board, servo control module, DAQ card, power supply card, CPU card, interface board, CF card reader, computer box).

Finally, the Creative Webcam 5 camera is connected to the embedded computer and used for image acquisition purposes.

4.3. Implementation

The application consists of the following codes.

4.3.1. rt_agent.ko

This is a Linux kernel module which allocates space for both the shared memory (using the "v_malloc" function) and the semaphore; this module, which needs to be inserted into the kernel before starting the following two programs, initializes the whole CAB communication structure (described in Section 4.3.6), as well as the registers and the semaphore.

4.3.2. sender

This is the LXRT Linux process that performs the image acquisition from the USB camera. Specifically, this task uses the Video4Linux API (supported by the Linux driver for the Creative Webcam 5) to set up the camera at 30 fps, 320x240 pixel resolution, and YUV420P image color format. This process also features the creation of the "angel" agent task initialized with the FIFO scheduler, the disabling of memory paging, the LXRT switch from Soft Real-Time to Hard Real-Time before accessing the semaphore, and the LXRT switch back to Soft Real-Time after the semaphore has been accessed. The YUV420P image is converted to a GRAY image (through extraction of the Y component), which is then written to the CAB structure every 0.033 s.

4.3.3. lxrtshm320x240p

This is a Hard Real-Time, user-space process that implements the Simulink scheme shown in Fig. 5, hosting all the machine vision algorithms, including the S-function "lxrtshm2imagetpar.c", which acquires the image data from the CAB structure, and the S-function "savedata.c", which collects data in memory and saves them to a file before the termination of the process [24].

Fig. 5 – Functioning diagram.

4.3.4. Compiling the code

After the selection of the initial conditions, the RTW "build" option allows C code to be automatically generated from the whole Simulink scheme in conjunction with the RTAI Target available with RTAI-Lab. A real-time executable is then built on the Linux platform using the "make -f" command. The generation of the executable files also requires the compilation of the 'sender.c' and 'rt_agent.c' codes.
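Regarding the gray-image extraction performed by the sender process (Section 4.3.2), the following sketch exploits the fact that the YUV420P (planar) layout stores the luminance (Y) plane first; the function and buffer names are illustrative.

#include <string.h>

/* Extract the gray image from a YUV420P frame: the Y plane occupies the first
 * width*height bytes of the buffer, so the U and V planes are simply dropped. */
static void yuv420p_to_gray(const unsigned char *yuv, unsigned char *gray,
                            int width, int height)
{
    memcpy(gray, yuv, (size_t)(width * height));
}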

4.3.5. Design philosophy

As described in the previous sections, the software implementation followed the specific guideline of keeping the 'image acquisition' task separated from the 'image processing' task. There are three main reasons for this approach. The first reason is that keeping the image acquisition as a Linux process allowed the author to use the Linux drivers available for the camera and for the USB communication; otherwise, the acquisition of the images in real time would have required the development of real-time drivers for both the camera and the entire USB communication stack. On the other hand, keeping the feature extraction, labeling, and pose estimation tasks within the Simulink environment allowed the author to completely refine and validate them through several closed-loop Simulink simulations [6]. The second reason for the separation of the tasks is that possible failures in the image acquisition process do not affect the remaining part of the control loop. In fact, in the event of unreliable input data, a tracking and docking control system for refueling would revert to GPS data and/or abort the docking maneuver [6]. Finally, the third reason for the separation of the tasks is that simulation studies [3][6] showed that small delays/uncertainties in the acquisition process can be tolerated as long as the real-time task supplies the docking control laws with a constant real-time stream of position and orientation estimates.

Fig. 6 – Architecture of the LXRT real-time application.

4.3.6. Cyclic Asynchronous Buffer (CAB)

RTAI offers several types of Inter-Process Communication (IPC) mechanisms. For the purposes of this effort, Shared Memory (SHM) was selected as the most suitable. An approach to SHM-based asynchronous communication is provided by the Cyclic Asynchronous Buffer (CAB) mechanism, which was designed for the communication between periodic activities, such as control loops and sensory acquisition processes, for the HARTIK Real-Time Kernel [8]. A CAB provides a "one-to-many" communication channel, which at any instant contains the latest message or data inserted in it. A message is not "consumed" by a receiving process, but is maintained in the CAB structure until it is overwritten by a new message. Therefore, once the first message has been put in a CAB, a task can never be blocked during a 'receive' operation. Similarly, since a new message overwrites the previous one, a sender can never be blocked.

With these semantics, however, a message can be read more than once if the receiver is faster than the sender, while messages can be lost if the sender is faster than the receiver. Communicating through CABs is generally faster than passing messages through IPC ports, and CAB primitives are time-bounded. If a CAB is used by N tasks, it must have at least N+1 buffers to avoid blocking; with N buffers or fewer, several inconsistencies could appear, as shown in [22]. In this effort only one sender (the image acquisition task) and one receiver (the Simulink algorithms task) are present, with the messages consisting of the acquired images. Thus, the CAB needs 3 image-sized buffers. The developed CAB structure also needs two main features for a correct implementation, that is, a semaphore allowing mutually exclusive access to the resource, and a memory space shared by the processes, as shown in Fig. 7. Specifically, the allocated shared memory contains 3 image-sized buffers and 2 flags. This always guarantees an empty buffer for the writing process and the latest available image for the reading process. The two flags are accessed in mutual exclusion by the processes through the use of a handshake semaphore initialized to 1. The reading flag, named "which_read", indicates which buffer the reading process is reading, while the writing flag, named "which_write", indicates the latest available image written by the writing process.

Fig. 7 – CAB.

The writing protocol consists in the mutex (mutual exclusion) access to the flags, the determination of which buffer is currently available (excluding the last written buffer and the one being read), the writing of the image, and finally the mutex update of the writing flag. After the flags have been updated and released, the latest acquired image is available. Similarly, the reading process accesses the flags in mutual exclusion and updates the reading flag to indicate that the last written image will be read. Next, the actual reading of the image is performed.
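A minimal sketch of the CAB layout and of the writing/reading protocols described above is reported below. A POSIX semaphore is used here for illustration, whereas the thesis implementation relies on an RTAI semaphore shared between the LXRT processes; names follow the text ("which_read", "which_write"), the rest is illustrative.

#include <semaphore.h>
#include <string.h>

#define IMG_SIZE (320 * 240)

/* CAB in shared memory: 3 image-sized buffers plus 2 flags, with a handshake
 * semaphore (initialized elsewhere to 1, e.g. sem_init(&c->mutex, 1, 1))
 * protecting only the flags. */
struct cab {
    sem_t mutex;
    int which_write;                  /* latest buffer written by the sender   */
    int which_read;                   /* buffer currently read by the receiver */
    unsigned char buf[3][IMG_SIZE];   /* 3 buffers: 1 writer + 1 reader + 1 free */
};

void cab_write(struct cab *c, const unsigned char *img)
{
    int r, w, free_buf = 0;

    sem_wait(&c->mutex);              /* read the flags in mutual exclusion */
    r = c->which_read;
    w = c->which_write;
    sem_post(&c->mutex);

    while (free_buf == r || free_buf == w)
        free_buf++;                   /* a buffer that is neither read nor last written */

    memcpy(c->buf[free_buf], img, IMG_SIZE);

    sem_wait(&c->mutex);
    c->which_write = free_buf;        /* publish the newly written image */
    sem_post(&c->mutex);
}

const unsigned char *cab_read(struct cab *c)
{
    sem_wait(&c->mutex);
    c->which_read = c->which_write;   /* the receiver will read the latest image */
    sem_post(&c->mutex);
    return c->buf[c->which_read];
}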

5. Discussion of Simulation Results

Several tests were performed using several different small-scale static aircraft models. The results reported here are relative to the use of a static B747-400 model (length: 74.9 cm, wing span: 64.2 cm). The initial condition matrix reported below was selected using a tool developed in the MATLAB environment: knowing the geometric measurements of the model in the TRF, and after manually clicking the selected features on the first image from the camera, the tool outputs the initial condition matrix using two different PE algorithms (GLSDC [16][17] and LHM [18]).

$$ {}^{C}T_{T}(t_0) = \begin{bmatrix} 0.836 & -0.004 & -0.549 & 1.428 \\ 0.093 & 0.987 & 0.134 & 0.001 \\ 0.541 & -0.163 & 0.825 & -0.006 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

The aircraft model was manually moved for 30 seconds, changing the initial condition and finally restoring it, while the MV algorithms were executing. Sample images from the corner detection algorithm, along with the plots of the x, y, z distances and of the roll, pitch, yaw angles, are shown in Figs. 8-10. Specifically, Fig. 8 shows the x, y, z relative coordinates (Scope #1), the pitch, yaw and roll relative angles (Scope #2) and the number of corners detected and labeled (Scope #3 and Meter) produced by the MV algorithms running in the Linux/RTAI environment. The XRtaiLab environment was used to monitor the outputs in real time.

Fig. 8 – Outputs of the LXRT real-time application monitored in XRtaiLab.

Fig. 9 – Video restored from the RTAI application (two different frames: (a) and (b)).

The blue stars in Fig. 9 are all the corners found by the SUSAN algorithm, the green rhombuses are the corners detected and labeled by the labeling algorithm, and the red pentacles are the projected corners in the CRF. Fig. 10 shows the camera-aircraft relative position, in terms of relative distance and relative orientation, as computed by the pose estimation algorithm. Fig. 10 also shows the time history of the number of corners correctly detected and labeled by the MV algorithms. It should be underlined that, in the video restored from the HRT simulations, the projected corners follow the real ones on the aircraft model. The good behaviour of the execution is confirmed by the VRT-based version tests [17], [25] and by coarse, but consistent, manual measurements.

Fig. 10 – Relative x, y, z coordinates [m] (a), relative roll, pitch, yaw angles [rad] (b), and number of correctly detected and labeled corners (c), versus time [sec] over the 30 s run.

One data analysis performed was the check of whether the two processes, the Hard RT one and the Soft RT one, can run properly with a period of 0.1 second; the test consists in examining whether two sequential images are identical or not. In the former case the computational load would be too high for the laptop, and the soft real-time process could not complete a new acquisition in every period. The results over 25 experiments are shown in Table 1. From these results it is clear that using XRtaiLab for results visualization significantly disturbs the application, because it adds one more soft real-time process. It must be underlined that the application runs on the on-board computer without any visualization tool, so the percentage of missed image updates, close to 1%, is tolerable.

Configuration                     Events on 300    Percentage
With XRtaiLab (LXRT version)           31.7          10.57 %
Without XRtaiLab                        4.1           1.37 %

Tab. 1 – Results of 25 experiments on 30 s simulation runs (average number of missed image updates out of 300 acquisitions).

The effort also included the use of the Harris corner detection algorithm in lieu of the SUSAN algorithm. Extensive VR-based closed-loop simulation studies [17], [25] showed that the Harris algorithm was generally able to perform better than SUSAN, but it was substantially more computationally intensive; specifically, it was roughly 4 times slower than SUSAN. In fact, a Simulink scheme similar to the one in Fig. 2, but using the Harris algorithm instead of SUSAN, was used to generate a real-time executable file, which however could not execute in real time on the selected computer. Images from the customizable 'soft' real-time version, developed and executable in the MATLAB/Simulink environment with approximately the same scheme reported in Fig. 2, are shown in Fig. 11. This version is the predecessor of the RTAI-based Hard Real-Time version described above; it runs with pre-recorded videos or with the same webcam (the reader is referred to the thesis for more information).

Fig. 11 – Video from the customizable MATLAB/Simulink based 'soft' real-time version.

Finally, a simpler and computationally lighter version, using only Shared Memory (SHM) without LXRT calls, was also implemented; this version is faster and not hardware-dependent, but it does not theoretically ensure atomic use of the images. It is not reported here for brevity.

Conclusions and Future Developments

This summary describes the real-time implementation of a Machine-Vision based position sensing system for potential application to autonomous docking within the UAV Aerial Refueling problem. The attention in this effort focused on the upgrade of the Machine-Vision based position sensing system from a MATLAB/Simulink® Soft Real-Time environment to a Linux/RTAI Hard Real-Time environment. For this purpose, a novel inter-process communication mechanism, based on shared memory, was implemented on RTAI. The software was implemented on a PC-104 based embedded computer, using a commercial off-the-shelf USB webcam for image acquisition. The study has confirmed the feasibility of interfacing image acquisition with labeling and pose estimation algorithms to obtain reliable real-time relative position and orientation estimates for potential applications to docking problems.

When the real images and videos from the US Air Force become available, the next topics for this project will be:
• a suitable choice of the number and positions of the corners;
• a careful tuning of the parameters of the respective algorithms;
• an image processing filter to obtain better image quality;
• the separation of the operations into different processes running on different hardware and communicating over UDP in Real-Time;
• the definitive choice of the image acquisition camera (possibly a FireWire camera, offering better quality and well supported by Linux).

Furthermore, from a theoretical point of view, an implementation of the Kalman filter for the GPS-Machine Vision integration (sensor fusion) will be carried out in the near future. A dedicated test to analyze the real errors will be performed by positioning and moving the tanker model with a robotic arm (in the DSEA laboratories of the University of Pisa), in order to obtain a correct evaluation and confirmation of the real distances and orientations; this could also be used to improve the choice of the default corners. During this thesis work at WVU I also collaborated in the development and implementation of new Detection And Labeling (DAL) algorithms [27], which will be tested and compared in the coming months. In my last months at WVU I also worked extensively on the MATLAB/Simulink - X-Plane I/O integration. X-Plane is a flight simulator software which provides the visualization, in a simulated world, of many airplanes and all the maps of the Earth (and beyond); more importantly, X-Plane is used in the Flight Simulator Cockpit available at WVU. I do not report this part of the work in this thesis because it would be out of its scope. The results of this thesis were collected in a manuscript [26] submitted to the Journal of Real-Time Image Processing published by Springer.

References

[1] Korbly R. and Sensong L., "Relative attitudes for automatic docking," AIAA Journal of Guidance, Control and Dynamics, Vol. 6, No. 3, 1983, pp. 213-215.
[2] Kimmett J., Valasek J., Junkins J.L., "Autonomous Aerial Refueling Utilizing a Vision Based Navigation System", Proceedings of the 2002 AIAA GNC Conference, Paper 2002-5569, Monterey (CA), August 2002.
[3] Fravolini M.L., Ficola A., Campa G., Napolitano M.R., Seanor B., "Modeling and Control Issues for Autonomous Aerial Refueling for UAVs Using a Probe-Drogue Refueling System," Journal of Aerospace Science and Technology, Vol. 8, No. 7, 2004, pp. 611-618.
[4] Philip N.K., Ananthasayanam M.R., "Relative Position and Attitude Estimation and Control Schemes for the Final Phase of an Autonomous Docking Mission of Spacecraft", Acta Astronautica, Vol. 52, 2003, pp. 511-522.
[5] Sinopoli B., Micheli M., Donato G., Koo T.J., "Vision Based Navigation for an Unmanned Aerial Vehicle", Proceedings of the 2001 IEEE International Conference on Robotics and Automation, Vol. 2, pp. 1757-1764, Seoul, South Korea, May 2001.
[6] Campa G., Napolitano M.R., Vendra S., Fravolini M.L., "A Simulation Environment for Machine Vision Based Aerial Refueling for UAVs", Submitted to: IEEE Transactions on Aerospace and Electronic Systems, April 2006.

[7] Mantegazza P., "DIAPM RTAI for Linux: WHYs, WHATs and HOWs", Real Time Linux Workshop, Vienna University of Technology, Dec. 1999.
[8] Buttazzo G.C., "HARTIK: A Real-Time Kernel for Robotics Applications", Proceedings of the 14th IEEE Real-Time Systems Symposium (RTSS 1993), pp. 201-205, Dec. 1993.
[9] Napolitano M.R., "Development of Formation Flight Control Algorithms Using 3 YF-22 Flying Models", Final Report, Air Force Office of Scientific Research, AFOSR Grant Number F49620-01-1-0373, April 2005.
[10] Daga L., "Dalla simulazione al controllo robusto del modello non lineare di un elicottero monorotore tramite sensori d'assetto GPS/INS", Ph.D. Thesis, Roma, Nov. 1997.
[11] AA. VV., "Image Acquisition Toolbox: User's Guide ver. 1", http://www.mathworks.com/access/helpdesk/help/pdf_doc/imaq/imaq_print.pdf, The MathWorks, March 2006.
[12] Harris C. and Stephens M., "A Combined Corner and Edge Detector", Proceedings of the 4th Alvey Vision Conference, Manchester, pp. 147-151, 1988.
[13] Noble A., "Finding Corners", Image and Vision Computing Journal, 6(2): 121-128, 1988.
[14] Smith S.M. and Brady J.M., "SUSAN: A New Approach to Low Level Image Processing", International Journal of Computer Vision, 23(1): 45-78, 1997.
[15] Hutchinson S., Hager G., Corke P., "A tutorial on visual servo control", IEEE Transactions on Robotics and Automation, Vol. 12, No. 5, 1996, pp. 651-670.
[16] Haralick R.M. et al., "Pose Estimation from Corresponding Point Data", IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 6, 1989, pp. 1426-1446.
[17] Campa G., Mammarella M., Napolitano M.R., Fravolini M.L., Pollini L., "Addressing Pose Estimation Issues for Machine Vision Based UAV Autonomous Aerial Refueling", Submitted to: IEEE Transactions on Systems, Man and Cybernetics, June 2005; accepted for publication in September 2005.
[18] Lu C.P., Hager G.D., Mjolsness E., "Fast and Globally Convergent Pose Estimation from Video Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 6, 2000, pp. 610-622.
[19] Bianchi E., Dozio L., Mantegazza P., "DIAPM-RTAI: A Hard Real Time Support for Linux", Dipartimento di Ingegneria Aerospaziale, Politecnico di Milano, www.aero.polimi.it/~rtai/documentation/reference/rtai_man.pdf, 2000.
[20] Lineo Inc., Mantegazza P., "DIAPM RTAI Programming Guide 1.0", www.aero.polimi.it/~rtai/documentation/reference/rtai_prog_guide.pdf, Sep. 2000.
[21] Sarolahti P., "Real-Time Application Interface", Research Seminar on Real-Time and Java, University of Helsinki, Dept. of Computer Science, 26th Feb. 2001.
[22] Buttazzo G., "Materials for the Course on Real-Time Systems", http://robot.unipv.it/toolleeo/thesis/dea.pdf, Department of Computer Science, University of Pavia, School of Ph.D., 2001.
[23] Mantegazza P., "DIAPM RTAI - Beginner's Guide", RTAI Documentation Article, http://www.rtai.org/, 24th January 2006.
[24] Campa G., "Saving data to a file at the end of the simulation", http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=9986, MATLAB Central File Exchange, 14th February 2006.

[25] Vendra S., Campa G., Napolitano M.R., Mammarella M., Fravolini M.L., "Addressing Corner Detection Issues for Machine Vision Based UAV Aerial Refueling", Submitted to: Machine Vision and Applications, October 2005.
[26] Dell'Aquila R.V., Campa G., Mammarella M., Napolitano M.R., "Real-Time Machine-Vision-Based Position Sensing System for UAV Aerial Refueling", Submitted to: Springer, Journal of Real-Time Image Processing, May 2006.
[27] Campa G., Mammarella M., Napolitano M.R., Dell'Aquila R.V., Fravolini M.L., Brunori V., "Point Matching Algorithm Comparison Addressed to Aerial Refueling for UAVs", Submitted to: IEEE Transactions on Pattern Analysis and Machine Intelligence, June 2006.
[28] Pla F., Marchant J.A., "Matching Feature Points in Image Sequences through a Region-Based Method," Computer Vision and Image Understanding, Vol. 66, No. 3, 1997, pp. 271-285.
[29] Umeyama S., "Parameterized Point Pattern Matching and its Application to Recognition of Object Families," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 2, 1993, pp. 136-144.
[30] Fravolini M.L., Campa G., Napolitano M.R., Ficola A., "Evaluation of Machine Vision Algorithms for Autonomous Aerial Refueling for Unmanned Aerial Vehicles", Submitted to: AIAA Journal of Aerospace Computing, Information and Communication, April 2005.

List of Acronyms

AR: Aerial Refueling
UAV: Unmanned Aerial Vehicle
MV: Machine Vision
GPS: Global Positioning System
3DW: 3-Dimensional Window
FE: Feature Extraction
DAL: Detection and Labeling
PE: Pose Estimation
RPOS: Relative Position and Orientation Sensor
CAB: Cyclic Asynchronous Buffer
RGB: Red Green Blue
CG: Center of Gravity
USB: Universal Serial Bus
RTOS: Real-Time Operating System
RT: Real-Time
NRT: Non Real-Time
WVU: West Virginia University
RTAI: Real-Time Application Interface
SHM: SHared Memory
LXRT: LinuX Real-Time
