
(1)

Sistemi per il Governo dei Robot

Silvia Rossi - Lezione 19

(2)
(3)

Common Sensing Techniques for Reactive Robots

Describe the difference between active and passive sensors, and give an example of each

Define each of the following terms in one or two sentences: proprioception, exteroception, exproprioception, proximity sensor, logical sensor, false positive, false negative, hue, saturation, image, pixel, image function, computer vision

List the metrics for rating individual sensors and a sensor suite, and apply these to a particular application

Describe the problems of specular reflection, cross talk, foreshortening, and if given a 2D line drawing of surfaces, illustrate where each of these problems would be likely to occur

Write perceptual schemas for any logically equivalent range sensor to produce a polar plot percept for an obstacle avoidance behavior

If given a small interleaved RGB image and a range of color values for a region, be able to 1) threshold on color and 2) construct a color histogram

(4)

Motivation

• Sensing is tightly coupled with acting in reactive systems, so we need to know about sensors

• What sensors are out there?

– Ultrasonics, cameras are traditional favorites
– Sick laser ranger is gaining fast in popularity

• How would you describe them (attributes)?

• How would you decide which ones to pick and use for an application?

(5)

Ways of Organizing Sensors

3 types of perception: proprioceptive, exteroceptive, exproprioceptive

Input: active vs. passive

Output: image vs. non-image

(6)

A ROBOT'S PERCEPTUAL SYSTEM CAN BE DIVIDED INTO TWO CATEGORIES

Proprioceptive: internal state (velocity, acceleration, temperature, pressure, ...)

Exteroceptive: extraction of environmental features (illumination, distance from obstacles, ambient temperature, ...)

(7)

Sensor Categories

• Proprioceptive
  • Self-control
    – INS
    – GPS

• Exteroceptive
  • Navigation
  • Object recognition
    – Proximity: range, contact
    – Computer vision

• Exproprioception

(8)

Sensing Model

(9)

Active vs. Passive (Example)

Active sensors

Sensor emits some form of energy and then measures the impact as a way of understanding the environment. Ex.: ultrasonics, laser

Passive sensors

Sensor receives energy already in the environment

Ex. Camera

• Passive sensors consume less energy, but often have signal-to-noise problems

• Active sensors often have restricted environments

Figure: robot sensor suite with stereo camera pair, thermal sensor, laser ranger, sonars, and bump sensor.

(10)

SENSORS ARE PHYSICAL DEVICES THAT MEASURE PHYSICAL QUANTITIES

Physical property: technology
– Contact: bump sensors, switches
– Distance: ultrasound, radar
– Light level: photocells, video cameras
– Rotation: encoders, potentiometers
– Acceleration: gyroscopes
– Temperature: infrared
– Altitude: altimeters
– ...

In general, the same physical property can be measured by different sensor technologies.

(11)

Sensor Output: Imagery vs. Observation

Observation

Single value or vector

Image

A picture-like format where there is a direct physical correspondence to the scene being imaged

Has an image function which maps a signal onto a pixel value

(12)

Popular Non-Imagery Navigation Sensors

Exteroception at a distance is the key.

Direct contact sensors: bump sensors, whiskers

"Look ahead" sensors (range): direct IR rangers, ultrasonic, laser

(13)

Contact Sensors

About them:
– Passive

Advantages:
– Cheap

Disadvantages:
– Poor sensitivity
– Poor coverage
– Poor localization

In development:
– capacitance-based "skins"
– mouse whiskers for robots

Figure: bump sensor.

(14)

SWITCHES ARE PERHAPS THE SIMPLEST SENSORS: THEY ARE BASED ON THE OPEN/CLOSED CIRCUIT PRINCIPLE

They are used in different ways as different sensors:
– contact sensors
– limit sensors (gripper)
– shaft encoders

(15)

SIMPLE SENSORS – LIGHT SENSORS – PHOTOCELLS

Their task is to recognize the degree of illumination of a room.

They measure the amount of light striking a photocell via the resistivity of the material.

Light sensors are simple and can detect a wide range of wavelengths – distinguishing ultraviolet from infrared.

(16)

SHAFT ENCODERS MEASURE ANGULAR ROTATION

They provide information about the position and/or the speed of the shaft to which they are attached: they measure how fast the wheels turn.

Odometric measurements: number of wheel rotations.

Mechanical or optical.
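A minimal sketch of how odometric measurements can be turned into a pose estimate for a differential-drive robot; the ticks per revolution, wheel radius, and axle length below are illustrative values, not parameters from the lecture.

import math

# Illustrative robot parameters (not from the lecture).
TICKS_PER_REV = 360
WHEEL_RADIUS = 0.03   # meters
AXLE_LENGTH = 0.15    # meters between the two wheels

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Integrate encoder ticks accumulated since the last update."""
    d_left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0          # distance travelled
    d_theta = (d_right - d_left) / AXLE_LENGTH   # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Example: both wheels advance 90 ticks, so the robot moves straight ahead.
print(update_pose(0.0, 0.0, 0.0, 90, 90))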

(17)

Infrared and Thermal

Actually a spectrum of wavelengths, often emitted from heat

“IR” is cheap, used in remotes

True infrared, FLIR (forward-looking infrared) produces thermal imagery

Night-vision is not really IR; it is light amplification

(18)

IR

About them:

– Usually a point sensor, active
– Emits a particular wavelength, then detects the time for it to bounce back
– Popular for indoor detection of collisions

Advantages:
– Cheap
– Can also detect dark/light (via signal strength)

Disadvantages:
– Sensitive to lighting conditions
– Specular reflection

(19)

Typical Sensors

• Khepera

– Exteroceptive: IR

– Proprioceptive: shaft encoders

• Nomad

– Exteroceptive: sonar, cameras, laser ranger
– Proprioceptive: shaft encoders for wheels, turret, pan-tilt

(20)

Sensing in Reactive Paradigm

(21)

Behavioral Sensor Fusion:

What if you have multiple sensors?

In schema-theoretic terms: perceptual schemas, motor schemas

(22)

Logical Sensors

• Different sensors/perceptual schemas can produce the same percept - motor schema doesn’t care!

Behavior can pick what’s available

• Example: ring of IRs, ring of sonars, redundant ring and polar plots

• Example: Hershberger and Murphy (video)

If a sensor fails, then another can be substituted without deliberation or explicit modeling

• Example: Gage and Murphy (video)
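To make the idea concrete, here is a minimal Python sketch of a logical sensor; the names (LogicalRangeSensor, sonar_polar_plot, ir_polar_plot) are invented for illustration. Several perceptual schemas all return the same polar-plot percept, so the obstacle-avoidance behavior can use whichever one is currently available.

# Sketch of a "logical sensor": several perceptual schemas produce the same
# polar-plot percept, so the behavior can use whichever one is available.

def sonar_polar_plot():
    # would read the sonar ring; here a placeholder reading
    return [2.0] * 16      # 16 range readings (meters), one per sector

def ir_polar_plot():
    # would read the IR ring; here a placeholder reading
    return [0.5] * 16

class LogicalRangeSensor:
    """Tries each perceptual schema in order and returns the first percept."""
    def __init__(self, schemas):
        self.schemas = schemas

    def read(self):
        for schema in self.schemas:
            try:
                return schema()
            except IOError:
                continue            # sensor failed: substitute the next one
        raise RuntimeError("no range sensor available")

# The obstacle-avoidance motor schema only sees a polar plot:
sensor = LogicalRangeSensor([sonar_polar_plot, ir_polar_plot])
polar_plot = sensor.read()
closest_sector = min(range(len(polar_plot)), key=lambda i: polar_plot[i])
print(closest_sector)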

(23)

How do you rate sensors?

Field of view, range: does it cover the "right" area?

Accuracy & repeatability: how well does it work?

Responsiveness in target domain: how well does it work for this domain?

Power consumption: may suck the batteries dry too fast

• Reliability: can be a bit flaky, vulnerable

Size: always a concern!

Computational Complexity: can you process it fast enough?

Interpretation Reliability: do you believe what it's telling you?

(24)

Ultrasonics

• Physics: membrane vibrates (is shocked), emitting a sound; the range is recovered from the time it takes the echo to return
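A small sketch of the time-of-flight calculation this implies: the range is half the round-trip distance travelled by the pulse at the speed of sound (about 343 m/s in room-temperature air); the example value is illustrative.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def range_from_echo(time_of_flight_s):
    """Range is half the round-trip distance travelled by the sound pulse."""
    return SPEED_OF_SOUND * time_of_flight_s / 2.0

# Example: an echo returning after 10 ms corresponds to about 1.7 m.
print(range_from_echo(0.010))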

(25)

Ultrasonics rating

Field of view, range

Polaroid lab grade: β = 4–15°, R = 25–30 ft

(26)

Ultrasonic rating

• Accuracy &

repeatability

– Within about 0.5 inch

• Responsiveness in target domain

– Depends! Specular reflection, cross talk, foreshortening

(27)

Ultrasonic rating (cont.)

• Power consumption

High

• Reliability

Lots of problems

• Size

Size of a half dollar; the board is a similar size and can be creatively packaged

• Computational Complexity

Low; doesn’t give much information

• Interpretation Reliability

(28)

Ultrasonics Summary

• Physics is: active sensor, works on time of flight

advantages: range, inexpensive ($30 US), small

disadvantages: specular reflection, crosstalk, foreshortening, high power consumption, low resolution

(29)

Range Grid

(30)

Laser Ranger (Sick)

• Physics: plane of laser light, time of flight

• Field of view, range

(31)

Sick

• Accuracy & repeatability

Excellent results

• Responsiveness in target domain

• Power consumption

High; reduces battery run time by half

• Reliability

good

• Size

A bit large

• Computational Complexity

Not bad until you try to "stack up"

• Interpretation Reliability

(32)

Laser Ranger Summary

• 180° plane

Advantages: high accuracy, coverage

Disadvantages: 2D, resistant to miniaturization, cost ($13,000 US)

Figure: NASA/CMU Nomad robot for exploring the Antarctic for meteorites.

(33)

Computer Vision: Navigation

Works on any image, regardless of source (video, thermal imagery, ...)

Optic flow

Stereo range maps

Depth from X algorithms

Note: most computer vision work focuses on a single image, but is now beginning to look at sequences

(34)

Optic Flow

About them

Affordance of time-to-contact

Advantages

Affordance derived from camera

Disadvantages

– Computationally expensive; if not on chip, unlikely to operate fast enough
– Typically brittle under lighting conditions, different speeds, environments with uniform texture
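One common affordance extracted from flow-like measurements is time-to-contact from "looming": if an approaching object's apparent size s grows at rate ds/dt, time-to-contact is roughly s / (ds/dt). A small illustrative sketch, with invented values:

def time_to_contact(size_prev, size_now, dt):
    """Rough time-to-contact from the growth of an object's apparent size."""
    growth_rate = (size_now - size_prev) / dt   # ds/dt in pixels per second
    if growth_rate <= 0:
        return float("inf")                     # not approaching
    return size_now / growth_rate               # seconds until contact

# Object grew from 40 to 44 pixels in 0.1 s, so about 1.1 s to contact.
print(time_to_contact(40.0, 44.0, 0.1))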

(35)

Stereo Range Maps

About them

Use two cameras

Advantages

Passive

Good coverage

Disadvantages

Environment may not support interest operators
Computationally expensive

Sensitive to calibration
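A sketch of the underlying geometry: once a feature is matched in both images, depth follows from the disparity as Z = f·B/d (focal length f, baseline B, disparity d). The focal length and baseline below are illustrative values, not from the lecture.

def depth_from_disparity(disparity_px, focal_length_px=700.0, baseline_m=0.12):
    """Depth of a matched feature from its disparity between the two cameras."""
    if disparity_px <= 0:
        return float("inf")       # zero disparity: feature effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# A 20-pixel disparity with this (illustrative) camera pair is about 4.2 m away.
print(depth_from_disparity(20.0))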

(36)

Microsoft Kinect

A device for playing with the Xbox 360 without using traditional controllers.

Hardware:
– RGB camera
– Infrared sensor
– Microphone array

Software library capabilities:
– User tracking
– Face recognition
– Voice recognition
– Gesture recognition

(37)

Computer Vision

• Physics: light reflecting off of surfaces; responds to wavelength

• Field of view, range: depends on the lens; lenses typically have different vertical and horizontal fields of view (VFOV, HFOV)

• Accuracy & repeatability: good

• Responsiveness in target domain: depends on the lighting source and the inherent contrast between objects of interest

• Power consumption: low

• Reliability: good

• Size: can be miniaturized

• Computational Complexity: absolute best is O(n·m) or O(n²), common is O(n²m²)

(38)

Object Recognition

For purely reactive systems: no reasoning, just affordances.

Typically recognize specific things by a pattern of color, a heat signature, a fusion of these algorithms, or through the Hough transform.

(39)

2 Common Vision Algorithms

• For reactive applications:

– Color segmentation

• Imprint on a color region, then follow it (or remember it)

– Color histogramming

Imprint on a region with a distribution of color, then follow it (or remember it)

Figure: image segmented on red.

(40)

Color Cueing Algorithms

Thresholding/color segmentation, blob analysis

– Make a binary image with all pixels in the color range
– Each group of connected pixels = a region (or blob)
– Extract region statistics: size (relative position), centroid (where to aim)
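A minimal Python sketch of these steps: threshold an RGB image on a color range and report the size and centroid of the matching pixels (the image and thresholds are invented for illustration).

def segment_and_locate(image, lo, hi):
    """image: list of rows of (r, g, b); lo/hi: inclusive per-channel bounds."""
    matches = []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if lo[0] <= r <= hi[0] and lo[1] <= g <= hi[1] and lo[2] <= b <= hi[2]:
                matches.append((x, y))
    if not matches:
        return 0, None
    size = len(matches)                          # region size
    cx = sum(x for x, _ in matches) / size       # centroid: where to aim
    cy = sum(y for _, y in matches) / size
    return size, (cx, cy)

# Example: a tiny 2x2 image with one "red" pixel.
img = [[(200, 10, 10), (0, 0, 0)],
       [(0, 0, 0), (0, 0, 0)]]
print(segment_and_locate(img, lo=(150, 0, 0), hi=(255, 80, 80)))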

(41)

To handle multiple blobs in the image, your algorithm needs to label each blob as a separate entity. To do this, run this algorithm:

go through each pixel in the array:
    if the pixel is not a blob color, label it 0 and move on
    if it is a blob color:
        if it is adjacent to an already-labeled blob pixel, give it that label
        else give it a new label ('1', '2', and so on)
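A runnable sketch of the same idea, using a flood fill so that touching pixels of the target color end up sharing one label; the function name and test image are illustrative.

def label_blobs(binary):
    """binary: 2D list of 0/1; returns a 2D list of blob labels (0 = background)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and labels[y][x] == 0:
                next_label += 1
                stack = [(y, x)]
                while stack:                      # flood fill one blob
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and binary[cy][cx] and labels[cy][cx] == 0:
                        labels[cy][cx] = next_label
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels

# Two separate blobs, so the pixels get labels 1 and 2.
print(label_blobs([[1, 1, 0],
                   [0, 0, 0],
                   [0, 1, 1]]))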

(42)
(43)

Color Cueing Algorithms

Color histogramming

Distinguish an object by the proportions of each color in its signature

Problems with these algorithms

Color constancy is hard

Some colors/color spaces are better than others

Often have to do some pre-processing to clean up images

Mean/median filtering
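A small sketch of the histogramming idea: build a coarse RGB histogram for a region and compare two regions with histogram intersection, where a higher score means more similar color signatures. The bin count and sample patches are illustrative choices.

def color_histogram(pixels, bins=4):
    """pixels: iterable of (r, g, b); returns a normalized histogram dict."""
    hist = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    total = float(sum(hist.values()))
    return {k: v / total for k, v in hist.items()}

def histogram_intersection(h1, h2):
    """Sum of the overlapping mass in each bin; 1.0 means identical signatures."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

red_patch = [(220, 20, 20)] * 8 + [(200, 60, 40)] * 2
other_patch = [(30, 30, 200)] * 10
print(histogram_intersection(color_histogram(red_patch), color_histogram(other_patch)))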

(44)

Color Spaces

• RGB (red, green, blue) is the NTSC output

Poor color constancy in “real world”

• H,S,I (hue, saturation, intensity) has theoretical color constancy

But not with conversion from RGB to HSI

• Alternatives: SCT (spherical coordinate transform)

Figure: original image; segmentation using thresholds that were perfect in the previous image, then the robot moved.
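As a sketch of why hue-based thresholding helps with color constancy, here is a small example using Python's standard colorsys module; HSV is used here as a stand-in for HSI, and the "red" hue band and saturation cutoff are illustrative choices.

import colorsys

def is_red(r, g, b, min_saturation=0.4):
    """Threshold on hue and saturation instead of raw RGB values."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # red hues wrap around 0, so accept values near either end of the hue circle
    return s >= min_saturation and (h <= 0.05 or h >= 0.95)

print(is_red(200, 30, 30))    # True: still red under moderate lighting changes
print(is_red(120, 110, 115))  # False: desaturated gray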

(45)

Edge Detection

Edge detection is a technique to locate the edges of objects in the scene.

This can be useful for locating the horizon, the corner of an object, white line following, or for determining the shape of an object.

sort through the image matrix pixel by pixel
for each pixel, analyze each of the 8 pixels surrounding it
    record the value of the darkest pixel and the lightest pixel
    if (lightest_pixel_value - darkest_pixel_value) > threshold
        then rewrite that pixel as 1
    else rewrite that pixel as 0

What the algorithm does is detect sudden changes in color or lighting, representing the edge of an object.
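A runnable sketch of the lightest/darkest-neighbor edge detector described above, operating on a 2D list of grayscale values; the threshold and test image are illustrative.

def detect_edges(gray, threshold=40):
    """Mark a pixel 1 when its 8 neighbors span a large brightness range."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbors = [gray[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if not (dy == 0 and dx == 0)]
            # a large spread between lightest and darkest neighbor marks an edge
            edges[y][x] = 1 if max(neighbors) - min(neighbors) > threshold else 0
    return edges

img = [[10, 10, 10, 200, 200],
       [10, 10, 10, 200, 200],
       [10, 10, 10, 200, 200]]
print(detect_edges(img))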

(46)

Shape Detection and Pattern Recognition

Shape detection requires preprogramming in a mathematical representation database of the shapes you wish to detect.

For example, suppose you are writing a program that can distinguish between a triangle, a square, and a circle. This is how you would do it:

run edge detection to find the border line of each shape
count the number of continuous edges
    a sharp change in line direction signifies a different line
    do this by determining the average vector between adjacent pixels
if three lines are detected, then it's a triangle
if four lines, then a square
if one line, then it's a circle
by measuring the angles between lines you can determine more info (rhomboid, equilateral triangle, etc.)

(47)

Case Study:

Hors d’Oeuvres, Anyone?

Figure: sensor allocation for the hors d'oeuvres robots.

– Camera pair (redundant): face color
– Digital thermometer: "face" temperature check
– Laser ranger: count treat removal
– Sonars: avoid obstacles, count treat removal
– Sonars: avoid obstacles
– Camera pair (redundant): face color
– If blocked, puffed up
– Sensor fusion: reduced false positives, false negatives

(48)

Behaviors and Sensor Allocation for Borg Shark

(49)

Summary

• Design of a sensor suite requires careful consideration

Almost all robots will have proprioception, but exteroception needs to be closely matched to the task and the environment

• Most common exteroceptive sensors on mobile robots are:

– Ultrasonics
– Computer vision
– Laser range

• Color vision can be hard; almost all vision is computationally expensive unless the focus is on affordances

– Borg Shark and Puffer Fish with color plus heat

(50)

NOISE AND ERRORS CONTRIBUTE TO THE UNCERTAINTY OF A SENSOR READING

Sources of uncertainty:
– sensor noise or errors
– sensor limitations
– noise or errors in effectors and actuators
– partially or completely unobservable states
– lack of a priori knowledge about the environment, or sudden changes in it
