(1)

Image Processing for Biometric Applications

Ruggero Donida Labati

Image Processing Techniques for Different Biometric Traits

Academic year 2016/2017

(2)

• Fingerprint

• Iris

• Face

Summary

(3)

Fingerprint

(4)

• A fingerprint in its narrow sense is an impression left by the friction ridges of a human finger

• One of the most used traits in biometric applications

o High durability

o High distinctiveness

• The most mature biometric trait

Fingerprint

[Figure: ridge and valley structure of a fingerprint]

(5)

• Level 1: the overall global ridge flow pattern

• Level 2: points of the ridges, called minutiae points

• Level 3: ultra-thin details, such as pores and local peculiarities of the ridge edges

Levels of analysis

(6)

• Usually from 0° to 180°

• Evaluation of the gradient orientation in squared regions

For each local region, the ridge orientation is taken as the mean of the angular values of its pixels (see the sketch below)

Level 1: local ridge orientation
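As an illustrative sketch of this block-wise estimation (a grayscale numpy image is assumed; the block size and function name are hypothetical), the orientation of each squared region can be obtained by averaging doubled gradient angles:

```python
import numpy as np

def local_ridge_orientation(img, block=16):
    """Estimate the ridge orientation of each block x block region
    from the image gradients (averaging of the doubled gradient
    angles), as a sketch of the Level 1 analysis."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)               # pixel-wise gradients
    h, w = img.shape
    rows, cols = h // block, w // block
    orientation = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
            sy = gy[i*block:(i+1)*block, j*block:(j+1)*block]
            # averaging is done on doubled angles to avoid the
            # 0/180 degree ambiguity of ridge orientations
            gxx = np.sum(sx * sx)
            gyy = np.sum(sy * sy)
            gxy = np.sum(sx * sy)
            orientation[i, j] = 0.5 * np.arctan2(2 * gxy, gxx - gyy)
    return orientation   # radians, in (-pi/2, pi/2]
```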

(7)

• Global or map

• x-signature

Level 1: local ridge frequency

(8)

Level 1: singular regions

Loop Delta Whorl

• Poincaré algorithm

• The northernmost loop is usually considered as the core point

(9)

Level 1: Poincaré algorithm

Let G be a vector field and C be a curve immersed in G; the Poincaré index P_{G,C} is defined as the total rotation of the vectors of G along C.

Considering the eight neighbouring elements d_k (k = 0, …, 7).

• Classification:

o 0° = the point is not a singular point

o 360° = whorl region

o 180° = core point

o −180° = delta point
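A minimal sketch of the Poincaré index computed on a block-wise orientation map (in radians); the tolerance values and the function name are illustrative assumptions:

```python
import numpy as np

def poincare_index(orientation, i, j):
    """Poincare index at block (i, j) of an orientation map (radians).
    The eight neighbouring orientations d_k are visited along a closed
    curve and the wrapped differences are accumulated."""
    # 8-neighbourhood visited counter-clockwise (the closed curve C)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    d = [orientation[i + di, j + dj] for di, dj in offsets]
    total = 0.0
    for k in range(8):
        diff = d[(k + 1) % 8] - d[k]
        # orientations are defined modulo pi, so wrap into (-pi/2, pi/2]
        if diff > np.pi / 2:
            diff -= np.pi
        elif diff < -np.pi / 2:
            diff += np.pi
        total += diff
    deg = np.degrees(total)
    if abs(deg) < 1:         return "ordinary point"   # ~0 degrees
    if abs(deg - 180) < 1:   return "loop / core"      # ~180 degrees
    if abs(deg + 180) < 1:   return "delta"            # ~-180 degrees
    if abs(deg - 360) < 1:   return "whorl"            # ~360 degrees
    return "unclassified"
```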

(10)

Level 1: ridge count

(11)

• Evaluates both frequency and space

Level 1: Gabor filters (1/2)

(12)

The even-symmetric Gabor filter has the general form (in its standard formulation)

$G(x, y; \Phi, f) = \exp\left\{ -\frac{1}{2}\left[ \frac{x_\Phi^2}{\sigma_x^2} + \frac{y_\Phi^2}{\sigma_y^2} \right] \right\} \cos(2\pi f\, x_\Phi), \quad x_\Phi = x\cos\Phi + y\sin\Phi, \; y_\Phi = -x\sin\Phi + y\cos\Phi,$

where Φ is the orientation of the Gabor filter, f is the frequency of the sinusoidal plane wave, and σx and σy are the space constants of the Gaussian envelope along the x and y axes, respectively.

Level 1: Gabor filters (2/2)
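The following sketch builds an even-symmetric Gabor kernel following the general form above; the kernel size and space constants are illustrative defaults, not values from the slides:

```python
import numpy as np

def even_gabor_kernel(phi, f, sigma_x=4.0, sigma_y=4.0, size=17):
    """Even-symmetric Gabor kernel for ridge enhancement: a Gaussian
    envelope (space constants sigma_x, sigma_y) modulated by a cosine
    plane wave of frequency f along the direction phi (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate the coordinate system by the filter orientation phi
    x_phi = x * np.cos(phi) + y * np.sin(phi)
    y_phi = -x * np.sin(phi) + y * np.cos(phi)
    envelope = np.exp(-0.5 * (x_phi**2 / sigma_x**2 + y_phi**2 / sigma_y**2))
    carrier = np.cos(2 * np.pi * f * x_phi)
    return envelope * carrier
```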

(13)

Level 2: minutiae

• termination

• bifurcation

• lake

• point or island

• independent ridge

• spur

• crossover

(14)

Level 2: the most used minutiae extractor

Crossing number: given the binary values $n_1, \ldots, n_8$ of the 8-neighbourhood of the pixel $p$ on the thinned ridge map,

$CN(p) = \frac{1}{2} \sum_{k=1}^{8} \left| n_k - n_{(k \bmod 8)+1} \right|$

If $CN(p) = 1$ or $CN(p) = 3$, $p$ is a minutia (a ridge termination or a bifurcation, respectively).
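A minimal crossing-number extractor over a thinned binary ridge map might look as follows (function name and structure are illustrative):

```python
import numpy as np

def crossing_number_minutiae(skeleton):
    """Extract minutiae from a thinned (one-pixel-wide) binary ridge
    map with the crossing-number method: CN = 1 marks a ridge
    termination, CN = 3 a bifurcation."""
    sk = (skeleton > 0).astype(np.uint8)
    minutiae = []
    # neighbours visited in circular order around p
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, sk.shape[0] - 1):
        for j in range(1, sk.shape[1] - 1):
            if sk[i, j] == 0:
                continue
            n = [sk[i + di, j + dj] for di, dj in offsets]
            cn = sum(abs(n[k] - n[(k + 1) % 8]) for k in range(8)) // 2
            if cn == 1:
                minutiae.append((i, j, "termination"))
            elif cn == 3:
                minutiae.append((i, j, "bifurcation"))
    return minutiae
```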

(15)

Level 2: minutia patterns

(16)

Level 2: ridge following

(17)

Level 3

• Minimum resolution of 800 DPI

(18)

The biometric recognition process

(19)

Acquisition

• Latent

• Inked and rolled

• Live scan

(20)

• Thermal (sweeping)

• 3-dimensional images

(21)

Quality evaluation (1/2)

NIST NFIQ

(22)

Local standard deviation (16 × 16 pixels)

Preprocessing: segmentation
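A possible sketch of this block-wise segmentation, assuming a grayscale numpy image; the threshold rule is an illustrative assumption:

```python
import numpy as np

def segment_fingerprint(img, block=16, threshold=None):
    """Foreground/background segmentation from the local standard
    deviation computed on 16 x 16 blocks: ridge areas have high local
    contrast, the background is flat."""
    img = img.astype(np.float64)
    if threshold is None:
        threshold = 0.2 * img.std()          # illustrative default
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = img[i:i + block, j:j + block]
            mask[i:i + block, j:j + block] = blk.std() > threshold
    return mask
```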

(23)

Preprocessing: enhancement

• Contextual filtering

(24)

Minutiae-based matching

• Two minutiae are considered as corresponding if their spatial distance sd and direction difference dd are less than fixed thresholds (see the sketch after this list)

• Non-exact point pattern matching:

o Roto-translations

o Non-linear distortions

o False minutiae

o Missed minutiae

o Non-constant number of minutiae
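The sketch below shows only the thresholded pairing step for two already-aligned minutiae sets; the thresholds and function name are illustrative assumptions, and the distortion issues listed above are not handled:

```python
import numpy as np

def match_minutiae(set_a, set_b, sd_max=15.0, dd_max=np.radians(20)):
    """Greedy pairing of two aligned minutiae sets [(x, y, theta), ...]:
    two minutiae correspond when their spatial distance sd and
    direction difference dd are below fixed thresholds."""
    used_b = set()
    matches = 0
    for xa, ya, ta in set_a:
        for idx, (xb, yb, tb) in enumerate(set_b):
            if idx in used_b:
                continue
            sd = np.hypot(xa - xb, ya - yb)
            dd = abs((ta - tb + np.pi) % (2 * np.pi) - np.pi)
            if sd < sd_max and dd < dd_max:
                used_b.add(idx)
                matches += 1
                break
    # normalised score in [0, 1]
    return matches / max(len(set_a), len(set_b), 1)
```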

(25)

Minutiae-based matching:

Delaunay triangulation

• Features

o Side lengths of the triangles

o Greatest angles

o Incenters

o Minutiae features (x, y, θ)

(26)

Minutiae-based matching:

Delaunay triangulation (1/2)

• Method A

• Method B

(27)

Minutiae-based matching:

NIST Bozorth3

• Step 1: Construction of Intra-Fingerprint Minutiae Comparison Tables (see the sketch below)

• Step 2: Construction of Inter-Fingerprint Compatibility Table

• Step 3: Traversal of the Inter-Fingerprint Compatibility Table constructed in Step 2
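As an illustration of Step 1 only (not the actual NIST implementation), a Bozorth3-style intra-fingerprint table can be sketched as pairwise, rotation- and translation-invariant measurements:

```python
import numpy as np

def intra_fingerprint_table(minutiae):
    """Sketch of an intra-fingerprint comparison table: for every
    minutia pair it stores the inter-minutia distance and the two
    minutia angles measured relative to the connecting segment, which
    are invariant to rotation and translation. The exact record
    layout of NIST Bozorth3 differs; this only illustrates the idea."""
    table = []
    for i, (xi, yi, ti) in enumerate(minutiae):
        for j, (xj, yj, tj) in enumerate(minutiae):
            if j <= i:
                continue
            dx, dy = xj - xi, yj - yi
            dist = np.hypot(dx, dy)
            seg = np.arctan2(dy, dx)             # direction of the segment
            beta1 = (ti - seg) % (2 * np.pi)     # minutia angles relative to it
            beta2 = (tj - seg) % (2 * np.pi)
            table.append((dist, beta1, beta2, i, j))
    table.sort(key=lambda r: r[0])               # sorted by distance
    return table
```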

(28)

Other minutiae-based recognition methods:

Cylinder-code (Minutia Cylinder-Code, MCC)

(29)

[1] A. K. Jain et al., “Filterbank-based fingerprint matching,” IEEE Transactions on Image Processing, 2000.

[Block diagram from [1]: input image → locate the reference point → divide the image in sectors → normalize each sector → filtering → compute the A.A.D. feature → input FingerCode; the matching result is the Euclidean distance between the template FingerCode and the input FingerCode]

Other recognition methods: FingerCode

(30)

Iris

(31)

The colored ring around the pupil of the eye is called the iris

Iris biometric trait

(32)

Iris-based identification

[Block diagram:

Enrollment: Acquisition Module → Quality Checker → Feature Extraction Module → iriscode → Database

Identification: Acquisition Module → Quality Checker → Feature Extraction Module → iriscode → Matching Module (compared against the N iriscodes in the Database) → Identity / Not identified]

(33)

Iris scanners

(34)

How iris recognition works

(35)

Iris segmentation using Hough transform

(36)

Iris segmentation using Daugman’s method

(37)

Iris segmentation using active contours

(38)

I = separable eyelashes

II = non-separable eyelashes

Eyelids and eyelashes

(39)

Separable eyelashes

Gabor filter:

$g(x, y) = K \exp\!\left( -\pi \left( a^2 (x - x_0)^2 + b^2 (y - y_0)^2 \right) \right) \exp\!\left( j \left( 2\pi F_0 (x \cos\omega_0 + y \sin\omega_0) + P \right) \right)$

Thresholding:

$L_1(x, y) = \begin{cases} 1 & \text{if } \left| I(x, y) * \mathcal{R}e\{ g(x, y) \} \right| > t_3 \\ 0 & \text{otherwise} \end{cases}$

$t_3 = k \max\left( \left| I_r(x, y) * g(x, y) \right| \right)$
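A minimal sketch of this Gabor-plus-thresholding detector; the filter parameters, k, and the kernel size are illustrative assumptions, and scipy is used for the convolution:

```python
import numpy as np
from scipy.signal import convolve2d

def separable_eyelash_mask(I, k=0.15, size=15, a=0.2, b=0.2,
                           F0=0.1, omega0=np.pi / 2):
    """Sketch of the separable-eyelash mask L1: the image I is
    convolved with the real part of the Gabor filter g(x, y) defined
    above (here centred, i.e. x0 = y0 = 0 and P = 0) and thresholded
    with t3 = k * max(|I * g|). Parameter values are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-np.pi * (a ** 2 * x ** 2 + b ** 2 * y ** 2))
    phase = 2 * np.pi * F0 * (x * np.cos(omega0) + y * np.sin(omega0))
    g_re, g_im = envelope * np.cos(phase), envelope * np.sin(phase)
    resp_re = convolve2d(I, g_re, mode="same", boundary="symm")
    resp_im = convolve2d(I, g_im, mode="same", boundary="symm")
    t3 = k * np.hypot(resp_re, resp_im).max()       # k * max(|I * g|)
    return (np.abs(resp_re) > t3).astype(np.uint8)  # L1(x, y)
```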

(40)

Non-separable eyelashes and reflections

1) $L_2(x, y) = \begin{cases} 1 & \text{if } \operatorname{var}(I(x, y)) > t_4 \\ 0 & \text{otherwise} \end{cases}$

2) $L_3(x, y) = \operatorname{erode}(L_2(x, y), S)$

3) $L_4(x, y) = \begin{cases} 1 & \text{if } I(x, y) > t_5 \\ 0 & \text{otherwise} \end{cases}$

4) $L_5 = L_3 \text{ OR } L_4$

5) Starting from the initial set of points $L_5$, we apply an iterative region-growing technique based on the condition $\mu + \beta\sigma < I(x, y)$, where $\mu$ and $\sigma$ are the local mean and standard deviation of $I(x, y)$ computed in a 3 × 3 window centered at $(x, y)$, and $\beta$ is a fixed threshold.

(41)

Iris normalization: motivation

• Pupil dilation

• Gaze deviation

• Resolution

(42)

Iris normalization: Rubber sheet model
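A minimal sketch of the rubber sheet remapping, assuming the pupil and iris boundaries have already been estimated as circles (xp, yp, rp) and (xi, yi, ri); the sampling resolutions are illustrative:

```python
import numpy as np

def rubber_sheet_normalize(img, pupil, iris, radial_res=64, angular_res=256):
    """Rubber-sheet normalization sketch: the annular iris region
    between the pupil circle and the iris circle is remapped to a
    fixed-size rectangle, so that pupil dilation and acquisition
    resolution no longer affect the representation. Nearest-neighbour
    sampling is used for brevity."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    h, w = img.shape
    out = np.zeros((radial_res, angular_res), dtype=img.dtype)
    thetas = np.linspace(0.0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0.0, 1.0, radial_res)
    for col, theta in enumerate(thetas):
        # boundary points at this angle on the pupil and iris circles
        x_in, y_in = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
        x_out, y_out = xi + ri * np.cos(theta), yi + ri * np.sin(theta)
        for row, r in enumerate(radii):
            x = int(round((1 - r) * x_in + r * x_out))
            y = int(round((1 - r) * y_in + r * y_out))
            out[row, col] = img[np.clip(y, 0, h - 1), np.clip(x, 0, w - 1)]
    return out
```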

(43)

Iriscode (1/2)

“The detailed iris pattern is encoded into a 256-byte ‘IrisCode’ by demodulating it with 2D Gabor wavelets, which represent the texture by phasors in the complex plane. Each phasor angle (Figure *) is quantized into just the quadrant in which it lies for each local element of the iris pattern, and this operation is repeated all across the iris, at many different scales of analysis.”

J. Daugman

(*) [Figure: real part of h; imaginary part of h]
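A sketch of the phase-quadrant quantization described in the quote, assuming the complex Gabor responses have already been computed (the input array and function name are hypothetical):

```python
import numpy as np

def iriscode_from_phase(response):
    """Phase-quadrant quantization sketch: given the complex response
    of the 2D Gabor demodulation at each location/scale, each phasor
    is encoded with two bits, one for the sign of the real part and
    one for the sign of the imaginary part (i.e. the quadrant it
    lies in)."""
    bits_re = (np.real(response) >= 0).astype(np.uint8)
    bits_im = (np.imag(response) >= 0).astype(np.uint8)
    return np.stack([bits_re, bits_im], axis=-1).ravel()   # bit string
```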

(44)

Iriscode (2/2)

(45)

Matching Score = 0.26848

Iriscode matching
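Iriscodes are commonly compared with a fractional Hamming distance; a minimal sketch (masks optional, rotation compensation omitted) could be:

```python
import numpy as np

def iriscode_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two iriscode bit arrays,
    optionally restricted to bits that are valid in both codes
    (eyelid/eyelash masks)."""
    code_a = np.asarray(code_a, dtype=np.uint8)
    code_b = np.asarray(code_b, dtype=np.uint8)
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, dtype=bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, dtype=bool)
    n = valid.sum()
    if n == 0:
        return 1.0
    return float(np.count_nonzero((code_a != code_b) & valid)) / n
```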

(46)

Face

(47)

Face recognition using still images

(48)

Face acquisition sensors

• Cameras (a)

• Webcams

• Scanners for off-line acquisitions

• Thermal cameras (b)

• Multispectral devices (c,d,e,f)

(49)

• Scan a window over the image

• Classify each window as either:

o Face

o Non-face

[Diagram: window → classifier → face / non-face]

Face detection
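A sketch of the scanning loop only; the classifier is passed in as a hypothetical callable, and the window size and step are illustrative:

```python
import numpy as np

def sliding_window_detect(image, classify, window=(24, 24), step=4):
    """Scan a window over the image and label each position as face /
    non-face with a user-supplied classifier. `classify` returns True
    for face windows (e.g. a boosted cascade); only the scanning loop
    is shown, not the classifier or the multi-scale pyramid."""
    wh, ww = window
    detections = []
    for i in range(0, image.shape[0] - wh + 1, step):
        for j in range(0, image.shape[1] - ww + 1, step):
            patch = image[i:i + wh, j:j + ww]
            if classify(patch):
                detections.append((i, j, wh, ww))
    return detections
```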

(50)

• Consider an n-pixel image to be a point in an n-dimensional space, x ∈ ℝⁿ.

• Each pixel value is a coordinate of x.

[Figure: an image represented as a point in the (x₁, x₂, x₃) space]

Images as features

(51)

• {Rⱼ} is the set of training images.

• $ID = \arg\min_j \operatorname{dist}(R_j, I)$

[Figure: probe image I and training images R₁, R₂ as points in the (x₁, x₂, x₃) space]

Nearest Neighbor Classifier
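A minimal sketch of this nearest-neighbour rule on vectorized images (Euclidean distance is assumed as dist):

```python
import numpy as np

def nearest_neighbor_id(I, R, labels):
    """Nearest-neighbour identification: ID = argmin_j dist(R_j, I).
    Images are treated as points in R^n (one coordinate per pixel);
    R holds one vectorized training image per row, I is the
    vectorized probe image."""
    dists = np.linalg.norm(R - I.ravel(), axis=1)   # Euclidean distances
    j = int(np.argmin(dists))
    return labels[j], dists[j]
```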

(52)

• Use Principal Component Analysis (PCA) to reduce the dimensionality

Eigenfaces

(53)

Eigenfaces (2)

• Construct the data matrix [x₁ x₂ x₃ x₄ x₅] by stacking the vectorized images, and then apply the Singular Value Decomposition (SVD) to obtain the projection W
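A minimal Eigenfaces sketch along these lines (the number of components k is left to the caller; names are illustrative):

```python
import numpy as np

def eigenfaces(training_images, k):
    """Eigenfaces sketch: stack the vectorized training images into a
    data matrix, subtract the mean face, take the SVD, and keep the k
    leading right-singular directions as the eigenfaces. Returns the
    mean, the k eigenfaces, and the projections of the training set."""
    X = np.stack([img.ravel().astype(np.float64) for img in training_images])
    mean = X.mean(axis=0)
    Xc = X - mean                       # centred data matrix (n_images x n_pixels)
    # rows of Vt are orthonormal directions in pixel space
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                          # k eigenfaces (each reshapeable to an image)
    projections = Xc @ W.T              # training images in the eigenspace
    return mean, W, projections
```

A probe image can then be projected as (probe.ravel() − mean) @ W.T and classified with the nearest-neighbour rule shown earlier.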

(54)

• Modeling

o Given a collection of n labeled training images

Compute mean image and covariance matrix

Compute k Eigenvectors (note that these are images) of covariance matrix corresponding to k largest Eigenvalues

Project the training images to the k-dimensional Eigenspace

• Recognition

o Given a test image, project to Eigenspace

Perform classification against the projected training images

Eigenfaces (3)

(55)

Problems of PCA

(56)

Fisherfaces

• An n-pixel image x∈Rn can be projected to a low-dimensional feature space y∈Rm by

y = Wᵀx

where W is an n × m matrix.

• Recognition is performed using nearest neighbor in Rm.

• How do we choose a good W?

(57)

• PCA (Eigenfaces)

Maximizes projected total scatter

• Fisher’s Linear Discriminant

Maximizes ratio of projected between-class to projected within-class scatter

$W_{PCA} = \arg\max_W \left| W^T S_T W \right|$

$W_{fld} = \arg\max_W \dfrac{\left| W^T S_B W \right|}{\left| W^T S_W W \right|}$

where $S_T$, $S_B$, and $S_W$ are the total, between-class, and within-class scatter matrices.

[Figure: two classes χ₁, χ₂ and their projections onto the PCA and LDA directions]

PCA and LDA
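A sketch of Fisher's Linear Discriminant on already-reduced features (as in Fisherfaces, where PCA is applied first so that S_W is invertible); names and the pseudo-inverse shortcut are illustrative:

```python
import numpy as np

def fisher_lda(X, y, m):
    """Fisher's Linear Discriminant sketch: build the between-class
    (S_B) and within-class (S_W) scatter matrices and maximize
    |W^T S_B W| / |W^T S_W W| by solving the generalized eigenproblem
    S_B w = lambda S_W w."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    n_features = X.shape[1]
    S_W = np.zeros((n_features, n_features))
    S_B = np.zeros((n_features, n_features))
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_W += (Xc - mu_c).T @ (Xc - mu_c)
        diff = (mu_c - overall_mean)[:, None]
        S_B += len(Xc) * (diff @ diff.T)
    # generalized eigenproblem via pinv(S_W) @ S_B (pseudo-inverse for safety)
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(-eigvals.real)
    W = eigvecs[:, order[:m]].real      # projection matrix (n_features x m)
    return W
```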

(58)

Face recognition based on fiducial points

(59)

• Gabor wavelet

• Discrete cosine transform

Examples of other features
