
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 18, NO. 12, DECEMBER 1996

Multiple Constraints to Compute Optical Flow

Massimo Tistarelli

Abstract - The computation of the optical flow field from an image sequence requires the definition of constraints on the temporal change of image features. In this paper, we consider the implications of using multiple constraints in the computational schema. In the first step, it is shown that differential constraints correspond to an implicit feature tracking. Therefore, the best results (both in terms of measurement accuracy and speed of computation) are obtained by selecting and applying the constraints which are best "tuned" to the particular image feature under consideration. Considering also multiple image points not only allows us to obtain a (locally) better estimate of the velocity field, but also to detect erroneous measurements due to discontinuities in the velocity field. Moreover, by hypothesizing a constant acceleration motion model, the derivatives of the optical flow are also computed. Several experiments are presented from real image sequences.

Index Terms - Optical flow, velocity field, differential constraints, dynamic vision, motion analysis, image velocity, dynamic scene analysis, computer vision.

1 INTRODUCTION

In many computer vision techniques it is often desirable to over constrain the problem. This practice generally allows us to determine the solution more precisely. Moreover, it is also important to use redundant information to enforce robustness with respect to measurement noise in the input data. This is particularly true when dealing with differential methods, where high frequency noise is enhanced by derivative operators. For this reason there are many examples in the literature where optical flow is computed by means of several constraint equations applied to many image points [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12]. Minimization and least squares are the mathematical tools most widely applied to solve this kind of problem.

The constraints are not always obtained by applying the same equation to multiple points, as in [2], [4], [5], [7], [12], but also by defining multiple constraints for each image point, either based on a set of differential equations [1], [9], [10], [11], or obtained by applying the same set of equations to different functions which are related to the local image brightness [3], [6], [8]. Many researchers have also explicitly addressed the problem of occlusions in the computation of smooth flow fields [1], [2], [5], [13] by using multiple constraints.

Regardless of the methodology used to over constrain the problem, there is a question to be answered: is the problem always posed in the correct way? What is the best way to over constrain the computational problem and make it well posed?

The aim of this paper is to analyze the differential constraints commonly used to compute optical flow and to understand their relation with the underlying intensity pattern. The problem is faced in terms of the geometrical properties of the constraint equations in relation to the distribution of the image brightness.

2 MOTION AND OPTICAL FLOW

Uras et al. [14], among others [15], claim that the aperture problem is a "false problem." In fact, it can be easily overcome as soon as enough "structure" is present in the image brightness [15]. For example, assuming the flow field to be locally constant, the stationarity of the image gradient yields:

$$\frac{d}{dt}\nabla E = \vec{0} \quad\Longrightarrow\quad \begin{pmatrix} E_{xx} & E_{xy} \\ E_{xy} & E_{yy} \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} = -\begin{pmatrix} E_{xt} \\ E_{yt} \end{pmatrix} \qquad (1)$$

where E(x, y, t) is the image brightness.

Several researchers [1], [10], [14], [15] exploited the integration of multiple constraint equations to over constrain the computational problem. By adding the brightness constancy equation to (1), three equations in two unknowns are obtained¹ [9], [10]:

$$\frac{d}{dt}E = 0 \qquad \frac{d}{dt}\nabla E = \vec{0} \qquad (2)$$

In this case the optical flow is computed by solving, in closed form, an over-determined system of linear equations in the unknown terms $(u, v) = \vec{v}$.
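As an illustration of how such an over-determined system can be assembled and solved, the following sketch (not from the paper; a minimal NumPy example with hypothetical derivative values) stacks the brightness constancy equation and the two gradient-stationarity equations for a single pixel and computes the least-squares velocity.

```python
import numpy as np

# Hypothetical spatio-temporal derivatives of the brightness E at one pixel.
# In practice these would be estimated from the image sequence with
# smoothed derivative kernels.
Ex, Ey, Et = 3.0, 1.0, -2.5          # first-order derivatives
Exx, Exy, Eyy = 0.8, 0.2, 1.1        # second-order spatial derivatives
Ext, Eyt = -0.9, -1.3                # mixed space-time derivatives

# Three linear constraints in the unknown velocity (u, v):
#   BCE:  Ex*u  + Ey*v  + Et  = 0
#   SIG:  Exx*u + Exy*v + Ext = 0
#         Exy*u + Eyy*v + Eyt = 0
A = np.array([[Ex,  Ey],
              [Exx, Exy],
              [Exy, Eyy]])
b = -np.array([Et, Ext, Eyt])

# Closed-form least-squares (pseudo-inverse) solution of the 3x2 system.
v, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print("estimated flow (u, v):", v)
```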

3 USING MULTIPLE CONSTRAINTS

In this section, we consider three methods to compute the optical flow using multiple differential constraints:²

1) For each point on the image where the Hessian matrix of the image brightness is not singular, the direct solution of (1) provides the velocity vector $\vec{v}$.
2) Simultaneously solve all the equations of (2) applied to a single pixel, computing the pseudo-inverse (least squares) solution of the over-determined system (3).
3) Solve an overconstrained system of linear equations obtained by applying (2) to a set of pixels within a given neighborhood.

These methods have both advantages and disadvantages [16]. Considering Fig. 1, it is interesting to note that in the case of (2) it is impossible to determine, at least from the geometry of the system, which one is the wrong constraint line.

The author is with the University of Genoa, Department of Communication, Computer and Systems Science, Integrated Laboratory for Advanced Robotics (LIRA-Lab), Via Opera Pia 13 - 16145 Genoa, Italy. E-mail: tista@dist.unige.it.

1. It is worth noting that this corresponds to a first order approximation of the optical flow field with a locally constant vector field. By taking into account the values of the spatial derivatives of the velocity, it is possible to extend the approximation to the second order, but the analysis is here limited to the first order to allow the optical flow computation to be based on single-pixel measurements.

2. Even though we are considering three basic constraints, the following analysis and results apply for any number of constraints, at least two, derived from the image brightness function [1], [3], [6], [8].

Manuscript received Nov. 10, 1994; revised Oct. 3, 1996. Recommended for acceptance by A. Singh.
For information on obtaining reprints of this article, please send e-mail to: transpami@computer.org, and reference IEEECS Log Number P96104.


Fig. 1. Effects of errors in one constraint equation. The wrong constraint is shown as a dotted line, while the correct constraint line is solid; the wrong velocity vector is dashed. The geometry of the constraint equations is shown for: the constraint equation (1) (left); the constraint equation (2) (middle); multiple data points applying the brightness constancy equation (right).

Fig. 2. Two images from a set of 37 of a flat picture moving toward the camera along the optical axis (top). Optical flows computed by using the best intersection (left), the least squares solution (middle), and the mean of the two best intersections (right). On the bottom row, an enlargement of the flow fields relative to the marked window is shown.


3.1 Mapping Flow Constraints to the Image Brightness Structure

Instead of taking all three equations of (2) at the same time, it is possible to consider the three equation pairs separately and study their stability. This is equivalent to considering each intersection point in Fig. 1 separately and determining the conditioning of the corresponding system of equations. This is done by analyzing the matrices M_i defined in (4).³

A first option is to consider as the correct solution the most reliable intersection in velocity space:⁴

$$\vec{v} = M_i^{-1}\,\vec{b}_i \quad\Longleftrightarrow\quad \det M_i > \max\left\{\det M_j,\ \det M_k\right\}, \qquad i, j, k \in \{1, 2, 3\},\ i \neq j \neq k \qquad (5)$$
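The selection rule in (5) can be sketched as follows (an illustration only; the grouping of the three constraints into the pairs M_1, M_2, M_3 is an assumption, and the derivative values are hypothetical): form the three 2 × 2 equation pairs and keep the intersection whose matrix has the largest determinant magnitude.

```python
import numpy as np

def best_intersection(A, b):
    """Given the stacked 3x2 constraint matrix A and right-hand side b
    (rows: BCE and the two SIG equations), return the velocity from the
    equation pair with the largest |det|, in the spirit of (5)."""
    # Hypothetical ordering: M1 = SIG pair, M2/M3 = BCE with one SIG row each.
    pairs = [(1, 2), (0, 1), (0, 2)]
    best_v, best_det = None, -np.inf
    for i, j in pairs:
        M = A[[i, j], :]
        d = abs(np.linalg.det(M))      # determinant magnitude as confidence
        if d > best_det:
            best_det = d
            best_v = np.linalg.solve(M, b[[i, j]])
    return best_v, best_det

# Hypothetical constraint system (same layout as the previous sketch).
A = np.array([[3.0, 1.0], [0.8, 0.2], [0.2, 1.1]])
b = -np.array([-2.5, -0.9, -1.3])
v, conf = best_intersection(A, b)
print("flow:", v, "confidence (|det M_i|):", conf)
```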

In Fig. 2, two images from a sequence of 37 are shown. The images are 256 × 256 pixels with eight bits of resolution in intensity. The sequence is relative to a flat picture moving along the optical axis of the camera. This image sequence has been used to compute the optical flow in the following ways:

a) applying the equation pair with the highest determinant;
b) computing the pseudo-inverse as in (3);
c) taking the average of the flow vectors computed from the two equation pairs with the higher determinants.

TABLE 1
STATISTICS OF THE THREE ALGORITHMS PRESENTED

                      Algorithm (a):    Algorithm (b):    Algorithm (c):
                      highest det M     least squares     mean of two best
  N. of vectors       ...               ...               ...
  Computation time    15 sec            17.5 sec          15.4 sec

For 51% of the image points, det M1 is the highest determinant, for 3% it is det M2, and for 46% it is det M3 (compare with (4)). Optical flows have been computed on a Sun SPARC IPC workstation.

The three flow fields are quite similar but, as shown in Table 1, their density is different. In order to facilitate a comparison of the techniques, a small portion of the optical flows is shown enlarged. As can be noticed, the marked vector obtained from the mean of the two best intersections in the velocity space has a correct orientation (at least considering the velocity of the neighboring pixels), while the same vector, computed from the best intersection point in the velocity space, has an evidently wrong orientation. Most probably, this is due to the fact that, by taking the best intersection point, some relevant information is still missing. This information can only be included by considering also the remaining constraint line (or lines, if more than three constraints are given). On the other hand, there are still errors depending on the local structure of the image brightness [17].

3. In general, one can consider a set of n functions of the image brightness $\vec{F}(E)$ with the constraints $\frac{d}{dt}\vec{F}(E) = \vec{0}$ and $\frac{d}{dt}\nabla\vec{F}(E) = \vec{0}$, where $\nabla\vec{F}(E)$ represents the gradient operator applied to each element of the vector of functions $\vec{F}(E)$. These constraints result in a set of 3 × n equation pairs [3], [6], [8].

4. The computation of 2 × 2 matrix determinants is used throughout the paper to measure the reliability or confidence of the computed velocity vector, which corresponds to the intersection point of two straight lines in (u, v) space. In the Appendix this aspect is discussed in more detail.

Fig. 3. Synthetic image used to characterize the behaviour of the constraint equations in response to different intensity patterns. (a) Values of the determinants for the three equations: M1 (left), M2 (middle), M3 (right). The gray value codes the determinant values: light gray is 0, dark represents a negative value, and bright a positive value. (b) Dominance of the determinants. White values indicate points where det M2 is the greatest determinant; gray and black are points where det M1 or det M3 have the greatest values. The zero value is coded as bright gray. (c) The dominance of the determinants shown as the absolute value of the difference between the two greatest determinant values: |det Mi - det Mj|. The zero value is coded as bright gray.

4 IMPLICIT FEATURE TRACKING

Any algorithm for the computation of the optical flow field can be regarded as a solution to the tracking problem [16], [18]. In fact, even though differential methods do not establish any explicit correspondence between image features over time, the differential operators involved (in any framework) still track characteristic intensity patterns over time [11]. This is an interesting aspect which dramatically changes the perspective under which differential techniques should be analyzed and applied. Flow and/or motion constraints should not be applied regardless of the image sequence to be processed, but rather purposively selected as a function of the local distribution of the image brightness.⁵ In general, this is implicitly performed when tuning thresholds on the parameters of the algorithm or discarding flow vectors with a low confidence (in whatever way they have been computed in the first place) [11]. This methodology implies the measurement of wrong data from the beginning, while it would be desirable to avoid useless measurements. This is possible only by tuning the constraints to the underlying intensity distribution. As shown in Table 1, a side effect of this procedure is the reduction in the time required to process the image sequence.

Fig. 4. Optical flows computed by using the weighted sum of the best intersections (on the left is the raw optical flow, and on the right is the smoothed optical flow). The parameters applied for the estimation of the optical flow fields are: (a) σ_s = 2, σ_t = 1, σ_v = 1.2, Δ = 5%, τ = 1; (b) σ_s = 2, σ_t = 1, σ_v = 1, Δ = 5%, τ = 1.

4.1 Effects of the Image Intensity Distribution

By analyzing the values of the derivatives of the image brightness, Nagel [15] demonstrated that the brightness constancy equation (BCE) and the stationarity of the intensity gradient (SIG) equations are not defined when characteristic gray patterns occur in the image. In particular, neither can be applied to compute the image velocity whenever the gray level variation is linear. In this case, at least one of the derivatives E_x and E_y is null, and E_xx = 0 and/or E_yy = 0. On the other hand, the value of the determinant of the matrix which corresponds to the SIG equations (as from (4)) is minimum at gray level corners and maximum at extrema of the image intensity. This fact explains why the optical flow computed by means of the constraint equation expressed in (1) is much less dense than the one obtained by applying all three equations as in (3), because the latter also take into account the corners of the image intensity. Therefore, whenever the gray level pattern is not a corner or a maximum, all three equations are "weak," in the sense that the straight lines represented by the equation pairs do not intersect at right angles (i.e., the corresponding matrix determinant is low).

The behavior of the considered differential constraint equations is explained in Fig. 3. The value of the dominant determinant for each image point is shown in Fig. 3b. Dark pixels correspond to a maximum for det M1, the middle gray identifies pixels where det M3 is maximum, and bright pixels correspond to a maximum for det M2. The areas where the determinant corresponding to the SIG equation pair is prevalent are very small and limited to the peaks in the image intensity. It is now evident that not all the equations are "well tuned" to "sense" the temporal variation of every intensity pattern. Rather, each equation is best suited to compute the motion of a particular intensity distribution.

5. It is worth noting that in the work by Fleet et al. [19] this is performed by applying multiple filtering stages tuned to characteristic frequency bands in the image sequence.
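To make this "tuning" argument concrete, the sketch below (an illustration under the same hypothetical pairing of the constraint equations as above, not the paper's implementation) labels each pixel of a synthetic image with the constraint pair whose determinant has the largest magnitude, analogous to Fig. 3b.

```python
import numpy as np

# Synthetic brightness pattern: a smooth blob plus a gentle linear ramp.
y, x = np.mgrid[0:64, 0:64].astype(float)
E = np.exp(-((x - 32)**2 + (y - 32)**2) / 80.0) + 0.01 * x

# Spatial derivatives (the temporal ones are not needed for the determinants).
Ey, Ex = np.gradient(E)
Exy, Exx = np.gradient(Ex)
Eyy, _ = np.gradient(Ey)

# Determinants of the three 2x2 matrices built from the constraint pairs
# (hypothetical ordering: M1 = SIG pair, M2/M3 = BCE with one SIG row each).
det_M1 = Exx * Eyy - Exy * Exy   # [[Exx, Exy], [Exy, Eyy]]
det_M2 = Ex * Exy - Ey * Exx     # [[Ex,  Ey ], [Exx, Exy]]
det_M3 = Ex * Eyy - Ey * Exy     # [[Ex,  Ey ], [Exy, Eyy]]

dominant = np.argmax(np.abs(np.stack([det_M1, det_M2, det_M3])), axis=0)
for k in range(3):
    print(f"det M{k+1} dominant at {np.mean(dominant == k):.1%} of the pixels")
```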

Assuming the determinants of two equation pairs to be greater than a given threshold τ (stating that the intersections are admissible), it is possible to consider the probability of the intersections of both line pairs and try to maximize the a posteriori probability. In the (u, v) space this corresponds to moving one intersection point toward the other, according to their confidence. If the two intersections have the same probability (or the three intersection points, obtained from the three equation pairs, are the vertices of an isosceles triangle), then the most probable solution will be located at the middle point of the line connecting the two intersections. It is worth noting that this point does not correspond to the least squares solution.

In this way, the correct solution corresponds to the "center of mass" of the two points, where the "mass" of each point is the value of the respective matrix determinant, which is used as a confidence measure:

$$\vec{v} = \left(\frac{N_i^x + N_j^x}{D_i + D_j},\ \frac{N_i^y + N_j^y}{D_i + D_j}\right) \qquad (6)$$

where $N_i^x = b_i^1 M_i^{22} - b_i^2 M_i^{12}$ and $N_i^y = b_i^2 M_i^{11} - b_i^1 M_i^{21}$ are the numerators of the expression for $\vec{v} = (u_1, u_2)$, as from (5), and $D_i = \det M_i$. In the general case, having selected m constraints from the total set:


$$\vec{v} = \left(\frac{\sum_{i=1}^{m} N_i^x}{\sum_{i=1}^{m} D_i},\ \frac{\sum_{i=1}^{m} N_i^y}{\sum_{i=1}^{m} D_i}\right) \qquad (7)$$

where the selected m constraints are those best tuned to the local intensity profile.
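A minimal sketch of this determinant-weighted combination (my illustration of the idea behind (6) and (7), with a hypothetical constraint system and an arbitrary admissibility threshold) is the following; the weighted mean of the admissible intersections reduces to the "center of mass" described above.

```python
import numpy as np

def weighted_flow(A, b, tau=1e-3):
    """Combine the intersections of all admissible 2x2 equation pairs,
    weighting each intersection by |det M_i| (used as confidence)."""
    pairs = [(1, 2), (0, 1), (0, 2)]          # hypothetical pairing M1, M2, M3
    points, weights = [], []
    for i, j in pairs:
        M = A[[i, j], :]
        d = np.linalg.det(M)
        if abs(d) > tau:                      # admissible intersection
            points.append(np.linalg.solve(M, b[[i, j]]))
            weights.append(abs(d))
    if not points:
        return None                           # not enough brightness structure
    points, weights = np.array(points), np.array(weights)
    return (weights[:, None] * points).sum(axis=0) / weights.sum()

A = np.array([[3.0, 1.0], [0.8, 0.2], [0.2, 1.1]])
b = -np.array([-2.5, -0.9, -1.3])
print("weighted flow estimate:", weighted_flow(A, b))
```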

4.2 Experimental Results

The results of two experiments, relative to the computation of the optical flow by integrating multiple constraints for each pixel, are reported in Fig. 4.

In order to allow the numerical computation of the spatiotemporal derivatives of the image brightness, a spatio-temporal 3D Gaussian kernel has been applied to the images. The resulting raw optical flow has been further regularized by also applying a small Gaussian filter to both the u and v components of the vector field⁶ [14], [20]. The parameters which have been set in the algorithm are the following:

σ_s and σ_t are the values of the standard deviation of the Gaussian kernel in space and time;
Δ is the maximum allowed difference in the values of the higher determinants;
τ is the threshold on the value of the determinant for computing the flow vectors;
σ_v is the value of the standard deviation of the Gaussian kernel used to smooth the u and v components of the computed velocity field.

In order to improve visibility, the flow fields have been subsampled by taking one vector every five, along the x and y image coordinates. The spatial and temporal derivatives of the image brightness have been computed with fixed one-dimensional masks, locally approximating the Taylor expansion of the brightness function [20]. The sequences in Fig. 4 have been acquired with an ordinary CCD camera, under different conditions; all the images in the sequences are 256 × 256 pixels with eight bits of resolution in intensity.

The sequence in Fig. 4a (moving sequence) has been acquired inside a room, from a camera mounted on a mobile vehicle moving along a direction slightly inclined with respect to the camera optical axis. The object on the right, in the foreground, is moving along a direction almost orthogonal to the vehicle trajectory. On the bottom, both the raw and the smoothed optical flow are shown. As the object in the foreground is moving very fast, compared to the motion of the camera, the applied differential operators do not allow a correct estimate of the image velocity of the moving object itself.

The second sequence (blocks sequence) has been acquired at the University of Karlsruhe, from a camera mounted on a robot arm [21]. Two images (256 × 256 pixels with eight bits per pixel) from the original sequence are shown in Fig. 4b. On the bottom of the same figure, both the raw and the smoothed optical flow are shown. It is worth noting that the moving block in the foreground cannot be tracked very well because, during the motion, the surfaces in the background are occluded. This is an intrinsic limitation of the applied constraints and can be overcome by taking into account multiple pixels.

6. It is worth noting that the Gaussian filtering of the optical flow is not mandatory for the proposed algorithms. In the experiments it is mainly applied to fill in missing flow vectors and to facilitate the understanding of the presented results.

5 INTEGRATION OF DATA FROM MULTIPLE POINTS

The application of multiple constraints can be extended to multiple data points. This is possible by assuming the flow field to be constant within a given neighborhood of the current pixel and by applying the constraints relative to each pixel simultaneously. The velocity vector is determined as the center of mass of the cloud of intersections (from the constraint lines), in the velocity space, relative to each pixel within the considered neighborhood.

If the velocity field is locally constant or smooth, a very small cloud is expected, because all the pixels will have the same velocity. On the other hand, if the cloud is large, or the intersection points are spread apart, then the pixels exhibit different velocities (see Fig. 5).

Fig. 5. A small window is shown out of the original image. For each pixel within a 3 × 3 window, three constraint lines are defined in the velocity space. One or two intersection points are selected for each pixel, according to the value of the determinant of the corresponding system of equations. As sketched by the diagram in the upper right corner, all the selected intersection points form a cluster in the velocity space, which determines the velocity vector for the central pixel within the 3 × 3 window.

In order to analyze the distribution of the intersection points, the modulus of the mean $\bar{E}(\vec{v})$ and of the variance $\sigma(\vec{v})$ of the distribution can be defined:⁷

7. Other measurements could be applied, like the residual of the least squares error, or other statistical techniques aimed at the definition of the clustering degree of a given data set [22]. The mean and variance of the point positions have been chosen because they directly reflect the measurements performed by the algorithm. On the other hand, other error measurements, like the residual, do not take into account the explicit non-linearity of the algorithm.


$$\bar{E}(\vec{v}) = \frac{1}{N}\sum_{i=1}^{N}\left(\vec{v}_i - \vec{v}\right) \qquad (8)$$

$$\sigma(\vec{v}) = \frac{1}{N}\sum_{i=1}^{N}\left\|\vec{v}_i - \vec{v}\right\|^2 \qquad (9)$$

where N is the number of selected intersection points, $\vec{v} = (u, v)$ is the velocity vector computed from the set of intersection points, and $\vec{v}_i = (u_i, v_i)$ is the velocity vector defined by each intersection point in the velocity space. The vector $\bar{E}$ represents the mean distance of the set of intersection points from the tip of the resulting velocity vector $\vec{v}$. By setting a threshold on the value of $\bar{E}(\vec{v})$, it is possible to determine if the velocity field is locally constant or, conversely, if the current image location corresponds to a flow discontinuity. It is worth noting that this method relies on a local assumption on the differential properties of the velocity field, which is used to compute a flow estimate at each pixel position.
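The multi-point integration step can be sketched as follows (an illustration only: the neighborhood construction, the threshold values, and the exact statistics used here are assumptions): the selected intersections of all pixels in a neighborhood are pooled, their confidence-weighted centroid gives the flow vector, and the mean displacement and variance of the cloud flag possible flow discontinuities.

```python
import numpy as np

def integrate_neighborhood(systems, tau=1e-3, mean_thresh=0.5):
    """systems: list of (A, b) constraint systems, one per pixel in a
    neighborhood (e.g., 3x3). Returns the flow estimate, the modulus of the
    mean displacement of the intersection cloud, its variance, and a flag."""
    points, weights = [], []
    for A, b in systems:
        for i, j in [(1, 2), (0, 1), (0, 2)]:
            M = A[[i, j], :]
            d = np.linalg.det(M)
            if abs(d) > tau:
                points.append(np.linalg.solve(M, b[[i, j]]))
                weights.append(abs(d))
    points, weights = np.array(points), np.array(weights)
    v = (weights[:, None] * points).sum(axis=0) / weights.sum()
    diff = points - v
    mean_disp = np.linalg.norm(diff.mean(axis=0))      # modulus of the mean displacement
    variance = (diff**2).sum(axis=1).mean()            # spread of the intersection cloud
    discontinuity = mean_disp > mean_thresh
    return v, mean_disp, variance, discontinuity

# Hypothetical neighborhood: all pixels share roughly the same constraints.
rng = np.random.default_rng(0)
base_A = np.array([[3.0, 1.0], [0.8, 0.2], [0.2, 1.1]])
base_b = -np.array([-2.5, -0.9, -1.3])
systems = [(base_A + 0.05 * rng.standard_normal((3, 2)), base_b) for _ in range(9)]
v, m, var, disc = integrate_neighborhood(systems)
print("flow:", v, "mean:", round(m, 3), "variance:", round(var, 3), "discontinuity:", disc)
```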

5.1 Including Linear Variations of Velocity

By considering multiple pixels it is also possible to include a first order, nonconstant term in the flow constraints (2). Linear variations of the optical flow [4] can be expressed as:

$$\vec{v}(\vec{x}) = \vec{v}(\vec{x}_0) - \begin{pmatrix} u_x & u_y \\ v_x & v_y \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} \qquad (10)$$

where u_x, u_y, v_x, v_y represent the spatial derivatives of the image velocity at $\vec{x}_0 = (x_0, y_0)$, Δx = x_0 - x and Δy = y_0 - y.

By substituting this expression into (2) for two points $\vec{x}_i$ and $\vec{x}_j$, one obtains a system of linear equations in the unknowns (u_0, v_0, u_x, u_y, v_x, v_y), where (u_0, v_0) is the optical flow at $\vec{x}_0$ and the derivatives of the image brightness are computed at the points $\vec{x}_i$ and $\vec{x}_j$. The optical flow and its first order derivatives can be computed if the parameter matrix is not singular. A necessary condition is that the intensity gradients at the image points $\vec{x}_i$ and $\vec{x}_j$ are not null and not aligned.

In order to reduce the errors in the velocity estimate, and to improve accuracy, more than just two points are used. In general, matrix inversion is a source of round-off errors. For this reason, singular value decomposition (SVD) has been applied to find a robust numerical solution for each possible intersection. In order to reduce the processing time, not all the available equations for each point⁸ are used. For each pixel, det M1, det M2, and det M3 are also computed; only the equation pairs corresponding to the higher determinants are included in the estimation matrix. In this way, not only are computations avoided in areas where there is not enough structure in the gray levels to allow measurements (compare with Fig. 3b), but sources of possible numerical errors are also avoided.
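The following sketch illustrates the extension to a locally linear (affine) flow model solved with an SVD-based least-squares routine. It is a simplified illustration rather than the paper's algorithm: it uses only the brightness constancy equation at each pixel instead of the selected constraint pairs, the pixel offsets follow my own sign convention, and all derivative values are synthetic.

```python
import numpy as np

def affine_flow(samples):
    """samples: iterable of (dx, dy, Ex, Ey, Et) for pixels around a center.
    Solves for (u0, v0, ux, uy, vx, vy) of a locally affine flow model using
    the brightness constancy equation at every pixel; np.linalg.lstsq relies
    on an SVD-based solver, which keeps the inversion numerically stable."""
    rows, rhs = [], []
    for dx, dy, Ex, Ey, Et in samples:
        # Ex*(u0 + ux*dx + uy*dy) + Ey*(v0 + vx*dx + vy*dy) + Et = 0
        rows.append([Ex, Ey, Ex * dx, Ex * dy, Ey * dx, Ey * dy])
        rhs.append(-Et)
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return params  # u0, v0, ux, uy, vx, vy

# Synthetic test: generate derivatives consistent with a known affine flow.
rng = np.random.default_rng(1)
true = np.array([0.5, -0.2, 0.05, 0.00, 0.01, -0.03])   # u0, v0, ux, uy, vx, vy
samples = []
for dx in range(-1, 2):
    for dy in range(-1, 2):
        Ex, Ey = rng.normal(size=2)
        u = true[0] + true[2] * dx + true[3] * dy
        v = true[1] + true[4] * dx + true[5] * dy
        Et = -(Ex * u + Ey * v)                          # exact BCE
        samples.append((dx, dy, Ex, Ey, Et))
print("recovered parameters:", np.round(affine_flow(samples), 3))
```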

5.2 Experimental Results

In Fig. 6 to Fig. 8, several experiments performed by applying multiple constraints from multiple pixels are shown. The parameters which have been set in the algorithm are the same as in the previous experiments, while $\bar{E}(\vec{v})$ is the mean of the distribution of the computed intersection points in the (u, v) space, measured in pixels. In order to improve visibility, the flow fields have been subsampled by taking one vector every five, along the x and y image coordinates.

In Fig. 6 an experiment performed on the "moving sequence" is shown. On top, one original image (left) and the computed raw optical flow (right) are shown. The parameters used for the velocity computation are: σ_s = 2, σ_t = 1, Δ = 5%, τ = 1, $\bar{E}(\vec{v})$ = 0.2. In the middle, the optical flow smoothed with a Gaussian kernel with standard deviation equal to one is shown. On the right, an enlargement of one computed velocity vector in the velocity space, corresponding to the position marked with a cross on the original image, is shown. All the intersection points used to compute the vector are also shown as small dots. On the bottom are the computed mean (left) and variance (right) for all the image points. The light gray indicates a zero value; all other values range from one (dark) to 255 (white). It is worth noting that the area corresponding to the moving object in the foreground has very high values both for the mean and for the variance of the distribution of the intersection points in velocity space. High values of the mean are also reported along the depth discontinuities.

In Fig. 7 the same images have been used to compute the optical flow with two different methods:

• by computing the pseudo-intersections of all the constraints (least squares solution), and
• by integrating the constraints from multiple image points.

Both optical flows have been smoothed with a Gaussian operator with standard deviation equal to one.

As can be noticed, the first flow field reports errors in the area corresponding to the moving object. Conversely, the method based on the integration of constraints from multiple points makes it possible to detect and discard the areas corresponding to wrong measurements.

The last experiment has been performed on an image sequence obtained from the original "blocks sequence," by cropping a 256 × 256 window around the moving marble block. On the top row of Fig. 8 the computed raw optical flow (left) and the optical flow smoothed with a Gaussian kernel with standard deviation equal to one (right) are shown. The parameters used for the computation are: σ_s = 2, σ_t = 1, Δ = 5%, τ = 1, $\bar{E}(\vec{v})$ = 0.5. In the middle row, one original image is shown. On the right, an enlargement of the computed velocity vector, in the velocity space, at two different positions (marked with a cross on the original image) is shown. All the intersection points used to compute the vectors are also shown as small dots. On the bottom row are the computed mean (left) and variance (right) for all the image points. As can be noticed, most of the higher values of the mean and variance are at the image points corresponding to depth discontinuities or to occluding edges of the two blocks in the image. High values of the mean are also reported, in the lower left corner, within an area with uniform intensity.

8. Considering a 3 × 3 neighborhood around $\vec{x}_0$, a total of 3 × 8 = 24 equations is obtained.


Fig. 6. Computation of the optical flow by applying constraints from multiple points on the "moving" image sequence.

Fig. 7. Comparison between the optical flows computed with two different methods from the "moving" image sequence.

Fig. 8. Optical flow computed by applying constraints from multiple points on the "blocks" image sequence. The sequence has been obtained by extracting a 256 × 256 window from the original 512 × 512 images.

6 CONCLUSION

In this paper, the problem of combining multiple constraints to compute the optical flow field from an image sequence has been addressed.

One of the main aspects outlined in this paper is that the response of a given constraint strictly depends on the local distribution of the image intensity. Therefore, the choice of the constraints to be applied should depend on the local structure of the image brightness and not only on the confidence associated with the measurement. In fact, there are examples where the local image structure does not allow a given constraint to be applied at all, or the information obtained is completely wrong. These observations lead to the conclusion that, in order to compute the optical flow field from an image stream, the constraints to be applied to the image sequence should not be chosen only on the basis of the motion to be detected, but also considering the local image structure.

It is demonstrated, both analytically and with experiments, that the same equations applied to different brightness structures can give exact or wrong estimates.

The complex nature of the real world often makes the assumptions underlying the velocity computation fail. This is due to many phenomena, like occlusions, shadows, depth discontinuities, or even an excessive velocity of the objects. Two simple measurements, namely the mean and the standard deviation of the distribution of the intersections among the constraint lines, are devised to detect and possibly threshold the flow vectors corresponding to image points where some of the assumptions may be violated.

By applying different measurements, other properties could be detected, eventually determining which assumptions are violated. This analysis could be usefully applied to better understand the nature of the scene for successive vision tasks.

APPENDIX

In the literature, different criteria have been proposed to define the confidence of velocity measurements [11]. Considering first order differential techniques, it has been pointed out that the magnitude of the spatial gradient is somehow related (or proportional) to the accuracy in velocity estimation. On the other hand, the determinant of the Hessian matrix of the image intensity is the most reliable measure to define the confidence of velocity estimates in second order differential techniques.

In this approach, the accuracy of a velocity estimate is related to the confidence of partial measurements obtained from equation pairs, namely single straight line intersections in the velocity space. The determinant of each matrix M_i in (4) is directly proportional to the difference in the direction of the two straight lines, where $a_1 = \tan\theta_1$ and $a_2 = \tan\theta_2$ are the angular coefficients of the two straight lines and $\Delta\theta = \theta_1 - \theta_2$. The value of det M is maximum when the straight lines intersect at a right angle ($\Delta\theta = \pm\frac{\pi}{2}$) and minimum when the straight lines are parallel. The value of the determinant also corresponds to the product of the eigenvalues. Even though each eigenvalue is proportional to the reliability of the corresponding constraint equation, the determinant has the advantage of being easier to compute, and it is also directly related to the combination of the two equations. Therefore, the value of det M is a good measure for the confidence of the computed velocity vector.
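A small numerical illustration of this point (mine; it assumes the two constraint rows are normalized to unit length) shows that the determinant of a 2 × 2 constraint pair grows with the angle between the two constraint lines and vanishes when they are parallel.

```python
import numpy as np

def pair_determinant(theta1, theta2):
    """Determinant of the 2x2 system whose rows are unit normals of two
    constraint lines oriented at theta1 and theta2 in (u, v) space."""
    M = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    return np.linalg.det(M)   # equals sin(theta2 - theta1) for unit rows

for dtheta in np.deg2rad([0, 15, 45, 90]):
    print(f"angle {np.rad2deg(dtheta):5.1f} deg -> det = {pair_determinant(0.0, dtheta):.3f}")
```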

ACKNOWLEDGMENTS

This work has been partially funded by the Esprit project VAP, by grants of the Italian National Research Council and by the SMART European network of excellence.

REFERENCES

111 M. Otte and H.H. Nagel, “Estimation of Optical Flow Based on Higher Order Spatiotemporal Derivatives in Interlaced and Non- Interlaced Image Sequences,” Artificial Intelligence, vol. 78, pp. 5- 43,1995.

D. Ben-Tzvi, A. Del Bimbo, and P. Nesi, ”Optical Flow from Con- straint Lines Parametrization,” Pattern Recognition, vol. 26, no. 10, pp. 1,549-1,561, 1993.

V. Markandey and B.E. Flinchbaugh, ”Multispectral Constraints for Optical Flow Computation,” Proc. Third IEEE Int’l Conf. Com- puter Vision, Osaka, Japan, 1990, pp. 3841.

M. Campani and A. Verri, ”Computing Optical Flow from an Overconstrained System of Linear Algebraic Equations,” Proc. Third IEEE lnt’l Conf. Computer Vision, Osaka, Japan, 1990, pp. 22- 26.

P. Nesi, ”Variational Approach to Optical Flow Estimation Man- aging Discontinuities,” Image and Vision Computing, vol. 8, no. 4, pp. 419439,1993.

J. Lai, J. Gauch, and J. Crisman, ”Using Color to Compute Optical Flow,” Proc. SPIE, vol. 2,056, pp. 186-194,1993.

B.G. Schunck, ”Image Flow Segmentation and Estimation by Constraint Line Clustering,” 1EEE Trans. Pattern Analysis and Ma- chine Intelligence, vol. ll, no. 10, pp. 1,010-1,027, Oct. 1989. R.J. Woodham, ”Multiple Light Source Optical Flow,” Proc. Third IEEE Int’l Conf. Computer Vision, Osaka, Japan, 1990, pp. 4246. 0. Tretiak and L. Pastor, ”Velocity Estimation from Image Se- quences with Second Order Differential Operators,” Proc. Seventh IEEE Int’l Conf. Pattern Recognition, 1984, pp, 16-19.

1101 M. Tistarelli and G. Sandini, “Dynamic Aspects in Active Vision,” CVGIP, Special Issue on Purposive and Qualitative Active Vision, Y . Aloimonos, ed., vol. 56, no. 1, pp. 108-129, July 1992.

[111 J.L. Barron, D.J. Fleet, and S.S. Beauchemin, ”Performance of Op- tical Flow Techniques,” Int’l ]. Computer Vision, vol. 12, pp. 43-77, 1994.

J. Weber and J. Malik, “Robust Computation of Optical Flow in a Multi-Scale Differential Framework,” Int’l ]. Computer Vision, [131 C. Schnorr, “Computation of Discontinuous Optical Flow by Do- main Decomposition and Shape Optimization,” I d l ]. Computer Vision, vol. 8, no. 2, pp. 153-165,1992.

[14] S. Uras, F. Girosi, A. Verri, and V. Torre, ”A Computational Ap- proach to Motion Perception,” Biological Cybernetics, vol. 60, [15] H.H. Nagel, ”On the Estimation of Optical Flow: Relations Be- tween Different Approaches and Some New Results,” Artificial In- telligence, vol. 33, pp. 299-324,1987.

1161 M. Tistarelli, ”Multiple Constraints for Optical Flow,” Proc. Third European Conf. Computer Vision,

1.0. Eklundh, ed., Stockholm,

Sweden, May 2-6,1994, pp. 61-70.

[17] T.S. Denney and J.L. Prince, ”On Optimal Brightness Functions for Optical Flow,” Proc. 1992 IEEE lnt’l Conf. Acoustics, Speech and Signal Processing, Baltimore, Md., 1992, pp. 257-260.

[18] C.Q. Davis, Z.Z. Karu, and D.M. Freeman, ”Equivalence of Sub- pixel Motion Estimators Based on Optical Flow and Block Matching,” Proc. IEEE Int’l Symp. Computer Vision, Coral Gables, Fla., Nov. 21-23,1995, pp. 7-12.

[19] D.J. Fleet and A.D. Jepson, ”Computation of Component Image Velocity from Local Phase Information,” Int’l J. Computer Vision, vol. 5, pp. 77-104,1990.

[20] V. Torre and T. Poggio, ”On Edge Detection,” I E E E Trans. Pattern Analysis and Machine Intelligence, vol. 8, pp. 147-163, Feb. 1986. [21] H.H. Nagel, G. Socher, H. Kollnig, and M. Otte, “Motion Bound-

ary Detection in Image Sequences by Local Stochastic Tests,” Proc. Third European Conf. Computer Vision, J.O. Eklundh, ed., Stock- holm, Sweden, May 2-6,1994, pp. 305-316.

[22] R.C. Tryon and D.E. Bailey, Cluster Analysis. New York McGraw- Hill, 1970. [2] [3] [4] [5] [6] 171 181 [9] 1121 vol. 14, pp. 67-81,1995. pp. 79-87,1988.
