
EYE MOVEMENT TRACKING METHOD IN A STUDY OF VISUAL PERCEPTION OF AMBIGUOUS IMAGES


Academic year: 2021


LITHUANIAN UNIVERSITY OF HEALTH SCIENCES

FACULTY OF MEDICINE

INSTITUTE OF BIOLOGICAL SYSTEMS AND GENETICS

VILIUS DRANSEIKA

EYE MOVEMENT TRACKING METHOD IN A STUDY OF

VISUAL PERCEPTION OF AMBIGUOUS IMAGES

Final Master Thesis

Thesis supervisor

Assoc. Prof. Dr. Arūnas Bielevičius


CONTENTS

SUMMARY
SANTRAUKA
ACKNOWLEDGEMENT
CONFLICT OF INTERESTS
APPROVAL FROM THE ETHICS COMMITTEE
1. INTRODUCTION
2. AIM AND OBJECTIVES
3. LITERATURE REVIEW
3.1. EYE MOVEMENTS
3.2. IMAGE EXAMINATION
3.3. NEURAL IMAGE PROCESSING
4. RESEARCH METHODOLOGY AND METHODS
4.1. SETUP
4.2. DATA COLLECTION
4.3. DATA PROCESSING
5. RESULTS AND THEIR DISCUSSION
5.1. SACCADE ONSET TIME
5.2. TEXT READING TIME
5.3. TIME TO PERCEIVE AN IMAGE
5.4. HEATMAPS
6. CONCLUSIONS
REFERENCES


SUMMARY

Name: Vilius Dranseika.

Title: Eye movement tracking method in a study of visual perception of ambiguous images.

Aim: Analysis of ambiguous image observation via psychophysical eye tracking experiments.

Objectives: 1. To prepare the experimental methodology and perform pilot studies with ambiguous images. 2. To examine and compare saccade onset, text reading and perception times for ambiguous vs. normal images. 3. To demonstrate and analyze attention areas (heatmaps) of ambiguous image observation.

Participants: 11 healthy volunteers aged 21–25 (5 women, 6 men), who had no knowledge about the study prior to the experiment.

Methods: Monocular observation from a 1.1 m distance was used in the experiment. 8 images (1 control and 7 ambiguous) were presented one after another on a screen while participants’ eye movements were recorded. Gaze positions were plotted as x, y coordinates over time, which allowed the extraction of the following times: saccade onset time, text reading duration and perception duration. An ANOVA was used to look for statistically significant differences among these times for the ambiguous images. A t-test was then used to check for statistically significant differences between the control and ambiguous images. Areas of highest visual attention were illustrated using heatmaps.

Results: Data from 7 subjects for 8 images was analyzed. The mean saccade onset time was 0.3407 s. The mean reading time was 0.6187 s. The average perception time was 2.362 s for the control vs. 4.048 s for the ambiguous images. All images had 2 areas that attracted the most attention. In one group (3 images) these locations could clearly be attributed to specific percepts, whilst in the other group (4 images) both percepts could be established by looking at the same location.

Conclusions: 1. Eye movement recording was implemented in a psychophysical experiment involving text reading and the perception of ambiguous vs. control images. 2. There was no significant difference in saccade onset times or text reading times between simple and ambiguous images (P > 0.05). 3. Perception time was longer for ambiguous images than for the control: 4.048 s vs. 2.362 s (P < 0.05). 4. Each image contained 2 attention areas; however, for a subset of images it was impossible to determine which rival meaning was perceived by focusing on a specific area. 5. The longer perception times, as well as the inability to determine the exact perceived meaning from gaze position, indicate that higher cognitive factors may also play a role in the perception of ambiguous images.


SANTRAUKA

Autorius: Vilius Dranseika

Pavadinimas: Akių judesių sekimo metodas dviprasmių vaizdų suvokimo tyrime.

Tikslas: Žmogaus akių judesių, užregistruotų stebint dviprasmius vaizdus psichofizikinių eksperimentų metu, analizė.

Uždaviniai: 1. Paruošti įrangą/metodiką ir atlikti pilotinius bandymus su dviprasmiais vaizdais. 2. Ištirti ir palyginti sakadų pradžios, skaitymo ir suvokimo laikus dviprasmiams ir paprastiems paveikslėliams. 3. Pavaizduoti ir išanalizuoti dėmesio koncentracijos sritis (spalvinių grafikų pavidalu).

Tyrimo dalyviai: 11 sveikų 21–25 metų savanorių (5 moterys, 6 vyrai), kurie prieš eksperimentus nežinojo apie tyrimo pobūdį.

Metodika: Stebėjimai buvo atlikti viena akimi 1.1 m atstumu. 8 paveikslėliai (1 kontrolinis ir 7 dviprasmiai) pasirodydavo nuosekliai vienas po kito kompiuterio ekrane, ir jų stebėjimo metu buvo registruojami dalyvių akių judesiai. Žvilgsnio pozicijos buvo pavaizduotos kaip x, y koordinatės laike. Tai leido išskirti sakados pradžios, teksto skaitymo ir paveikslėlio suvokimo laikus. Naudojant ANOVA analizę buvo ieškoma statistiškai reikšmingų skirtumų tarp dviprasmių vaizdų, o t-testu palyginti kontrolinis ir dviprasmiai vaizdai. Taip pat buvo iliustruotos daugiausiai vaizdinio dėmesio susilaukusios paveikslėlių dalys.

Rezultatai: Buvo išanalizuoti 7 tiriamųjų duomenys 8 paveikslėliams. Vidutinis sakadų pradžios laikas buvo 0.3407 s. Vidutinis skaitymo laikas buvo 0.6187 s. Vidutinis suvokimo laikas paprastiems ir dviprasmiams vaizdams buvo atitinkamai 2.362 s ir 4.048 s. Visi paveikslėliai turėjo dvi dėmesį pritraukusias sritis. Vienoje dviprasmių vaizdų grupėje (3 paveikslėliai) buvo galima aiškiai susieti stebimą sritį ir suvokiamą reikšmę, tačiau kitoje grupėje (4 paveikslėliai) abi reikšmės galėjo būti matomos stebint tą pačią sritį.

Išvados: 1. Buvo sukurta tinkama metodika duomenų rinkimui ir analizei akių judesių stebėjimui dviprasmių vaizdų tyrime. 2. Nebuvo rasta statistiškai reikšmingų skirtumų tarp sakadų pradžios ir teksto skaitymo laikų paprastuose ir dviprasmiuose vaizduose (P > 0.05). 3. Dviprasmių vaizdų suvokimo laikas buvo ilgesnis nei kontrolės: 4.048 s vs. 2.362 s (P < 0.05). 4. Visi paveikslėliai turėjo po 2 dėmesio zonas, tačiau dalyje paveikslėlių nebuvo galima aiškiai susieti suvokiamos reikšmės ir stebimos zonos. 5. Ilgesnis suvokimo laikas bei negalėjimas nuspėti suvokiamos reikšmės iš stebimos zonos leidžia manyti, kad aukštesni kognityviniai faktoriai taip pat dalyvauja dviprasmių vaizdų suvokime.


ACKNOWLEDGEMENT

I would like to sincerely thank my supervisor, Assoc. Prof. Dr. Arūnas Bielevičius, for his guidance and support in writing this thesis.

CONFLICT OF INTERESTS

There were no conflicts of interest to declare related to this thesis.

APPROVAL FROM THE ETHICS COMMITTEE

According to Dr. E. Peičius, the head of the Kaunas Regional Biomedical Research Ethics Committee, permission from the Ethics Committee is not necessary for this work (28 February 2018).


1. INTRODUCTION

Eye tracking is a process that records and measures where a person is looking at any given time. It involves the analysis of eye movements as well as gaze position, and is therefore a good estimate of where a person is directing their attention [1]. Usually, the image we look at matches the information we perceive. However, sometimes an image may be unclear or have several possible interpretations. Ambiguous images are used to model such cases. They are still images that have 2 or more possible meanings, both of which can be seen without a change in stimulus and interchanged with some degree of voluntary control [2–4]. An example is Rubin’s Vase, a picture that can be seen as a vase or as 2 faces (Fig. 1).

Simple or unambiguous images are processed differently from ambiguous images; even different brain zones are activated [6]. Most research on ambiguous images involves brain activity detection [2] or influence studies in which one percept is forced over the other [7]. However, very few studies try to determine what happens to a person’s eyes while viewing an ambiguous image, and those that exist are mostly of a technical nature [8].

In this thesis we investigate how a person looks at ambiguous images using an eye tracker. In this way we hope to gain insight into various gaze tracking parameters and how they differ from those for simple images. Understanding at least some eye movements would not only help to grasp the underlying physiological principles, but could also provide knowledge for future technological advances. To the best of our knowledge, this is the first study of its kind in Lithuania.


2. AIM AND OBJECTIVES

Aim.

Analysis of ambiguous image observation via psychophysical eye tracking experiments.

Objectives:

1. To prepare the experimental methodology and perform pilot studies with ambiguous images.

2. To examine and compare saccade onset, text reading and perception times for ambiguous vs. normal images.

3. To demonstrate and analyze attention areas (heatmaps) of ambiguous image observation.


3. LITERATURE REVIEW

3.1. EYE MOVEMENTS

When exploring a scene, humans collect information from a broad field of vision, ranging up to 180 degrees or more [9]. Yet our fovea – the central part of the retina with the highest resolution – covers only 2 degrees of the visual field. The other parts of the retina also contribute to vision, but acuity decreases toward the periphery. Hence, in order to explore a whole image in great detail it is necessary to move the eyes [10].

There are 2 types of eye movements when looking at still images: saccades (the movements themselves) and fixations [9, 11]. Saccades are quick ballistic movements, reaching speeds of hundreds of degrees per second, that happen every few hundred milliseconds. Once initiated they are hard to suppress [12]. Their purpose is not to obtain visual information but to direct the eye to a point of interest between fixations. In fact, the eyes temporarily stop conveying visual signals during saccades, as these would only produce a blurred perceived image [10, 13]. A related process worth mentioning is pursuit movement: slow continuous rotations of the eyes that keep track of moving targets [12, 14].
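The saccade/fixation distinction described above is often operationalized as a simple velocity threshold. The sketch below is a minimal illustration of that idea; the 30°/s threshold and the synthetic trace are assumptions for demonstration, not values or data from this study:

```python
import numpy as np

def classify_samples(t, x, y, velocity_threshold=30.0):
    """Label each gaze sample as saccade (True) or fixation (False)
    using an angular-velocity threshold in degrees per second.

    t: timestamps in seconds; x, y: gaze positions in degrees.
    """
    dt = np.diff(t)
    # Euclidean gaze displacement between consecutive samples (degrees)
    dist = np.hypot(np.diff(x), np.diff(y))
    velocity = dist / dt  # deg/s
    is_saccade = velocity > velocity_threshold
    # Pad so the label array matches the sample count
    return np.concatenate([[False], is_saccade])

# Synthetic trace sampled at 10 ms: a fixation, one 10-degree jump, a fixation
n = 40
t = np.arange(n) * 0.01
x = np.where(np.arange(n) < 20, 0.0, 10.0)  # jump at sample 20
y = np.zeros(n)
labels = classify_samples(t, x, y)
```

With this trace only the sample at the jump is labeled a saccade; everything else counts as fixation.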

Visual information is acquired during fixations – instances when the gaze is fixed on an object. However, despite the name, the eyes are never truly still: tiny movements are always present. Studies have even shown that eliminating those movements causes perceived stationary images to vanish [9, 13, 15–18].

There are 3 types of movement that happen during fixations, namely tremors, drifts and microsaccades:

1. Tremors are aperiodic oscillations of the eye with a frequency up to 100 Hz. Their amplitudes are about the size of a photoreceptor in the fovea, which is beyond the resolution limit of most eye trackers, thus making investigating them complicated. Tremors are believed to be noise caused by the firing motor units of the eye muscles as their frequencies are supposedly too high to be an efficient stimulus for vision [9, 12, 15, 19].

2. Drifts are slow curly movements of the eye with an amplitude of around ten photoreceptors. They presumably help in fine detail perception. It is believed that they are a continuation of tremors and therefore might be motor noise of firing ocular muscles. However, it is also possible that along with tremors they might have a central origin, as a reduction in their quantity has been noticed in patients with brainstem lesions [15, 17, 19, 20].

3. Microsaccades are believed to be the most important type of fixational eye movements. They are involuntary binocular jerks of less than 1° that are initiated by small retinal slips, as shown by Engbert and Mergenthaler [17, 21]. Their role is supposedly to prevent image fading caused by neural adaptation: perceived images disappear if microsaccades are suppressed [22]. Older studies questioned the importance of microsaccades in high-precision tasks: it was believed that microsaccades become suppressed in activities like shooting or threading a needle, and yet the perceived image does not fade [23]. However, recent research showed that they are not only present but also help the visual system in perceiving spatial detail and in spatial attention [24, 25]. In addition, they have been shown to serve two important functions in a high-acuity task like reading: they correct the gaze if a saccade lands in the wrong position, and they allow additional information about the surroundings to be gathered before the next saccade is executed [26].

3.2. IMAGE EXAMINATION

A person cannot process a whole image in detail at once; the information load would be too great for the mind’s limited resources. Therefore, the brain prioritizes: at any given moment only a part of an image is being looked at and thus processed [10, 27]. The other parts of the image are mostly ignored. Simons et al. demonstrated this by showing that big changes to an image will be missed by a person who is not directly looking at the changing part – for instance, during visual disturbances like saccades and blinks [28], or if the changes are very subtle [29].

The way a person looks at a stable image is also a complex process. During the first few fixations one gets the gist of the image [10]. It has been shown that it takes as little as 13–40 ms for people to get the general idea behind a scene [30, 31]. At the same time, the mind maps the regions of interest of the image and starts examining it piece by piece. These regions of interest are influenced by bottom-up and top-down factors [13].

Bottom-up factors stem from the picture itself. They include elements like color, edges, high-contrast areas, intensity, texture, etc. – in other words, prominent (salient) parts of an image that draw our attention. They are believed to be processed at early visual levels (hence the name bottom-up), creating a saliency map [13, 32]. Since saliency maps are based on physical characteristics, they can be simulated, which in turn can be used to predict where a person will look and direct his or her attention. There is a positive correlation between saliency and the likelihood of fixations, and fixations get longer if an image is more cluttered. However, models based on saliency alone often do not match actual data from eye tracking experiments; therefore, there are more factors in play [10, 13, 27, 33]. Top-down factors, on the other hand, are subjective influences like emotions, expectations, goals and knowledge [27, 32].
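As a rough illustration of how a saliency map can be simulated from physical characteristics alone, the sketch below computes a center-surround intensity contrast as a difference of Gaussians. The sigma values and the toy image are arbitrary assumptions; full models (such as the Itti-Koch architecture) combine many more feature channels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_saliency(intensity, sigma_center=2, sigma_surround=8):
    """Crude bottom-up saliency: center-surround contrast via a
    difference of Gaussians on the intensity channel, scaled to [0, 1]."""
    center = gaussian_filter(intensity.astype(float), sigma_center)
    surround = gaussian_filter(intensity.astype(float), sigma_surround)
    sal = np.abs(center - surround)
    return sal / sal.max() if sal.max() > 0 else sal

# A uniform image with one bright square: the square stands out as salient
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = toy_saliency(img)
```

The resulting map is high around the bright square and near zero in the uniform background, mimicking how salient regions attract fixations.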

3.3. NEURAL IMAGE PROCESSING

A visual stimulus is processed in a complicated, multistage manner; it takes around 150 ms to recognize a stimulus [31, 34]. Brain zones have a hierarchical organization in which each zone processes a different feature of the signal. The primary visual cortex (V1), for instance, is responsible for processing contours and creating saliency maps [13]. V2 analyses surfaces, whereas V4 works with color and shape [35]. These levels help to identify an object; perception happens when higher levels integrate top-down factors [36]. Still, all levels are highly interrelated, as even the low ones (V1) can be modified to work in a task-specific way [37].

With ambiguous images, understanding the processing and the brain zones involved becomes complicated, because perceptual switches happen even though the stimulus does not change. Activity in higher levels (frontoparietal and temporal cortices) has been associated with percept changes [6]. However, it is now believed that no specific zones are responsible for perceiving ambiguous images and that the entire cortex might be involved [38].


4. RESEARCH METHODOLOGY AND METHODS

4.1. SETUP

The experiments were done using a ViewPoint PC-60 (2.8.4.536) (© Arrington Research, Inc., USA) eye tracker (Fig. 2) in accordance with the manufacturer’s instructions. It is a head-fixed system; therefore, the head of a participant was held in a stable position with a chinrest. The right eye was recorded with the infrared camera of the eye tracker, while the left eye was covered with a dark sheet of paper. Proper camera placement as well as good pupil visibility was ensured prior to the experiments. The images were presented on an LE46B652 46-inch Full HD (1920×1080) screen (Samsung, South Korea) from a distance of 1.1 m and were changed with a press of a computer keyboard key. The experiments were performed in a dark room to avoid distractions of attention and gaze fixations. The gathered information was sent to the software provided by the manufacturer. Parameters like x, y coordinates (with 0.5° accuracy), fixations, and pupil height and width were recorded with a timestamp resolution on the order of 0.0000025 s, i.e., 2.5×10⁻⁶ s or 2.5 µs.

4.2. DATA COLLECTION

The participants were 11 healthy volunteers aged 21–25 (5 women, 6 men). Prior to the experiments, they had no knowledge of the study other than that it involved eye tracking.

Before the experiment, the participants were told that they would be shown 8 different images: 1 simple image and 7 ambiguous images. Each picture would contain a text with hints about what to look for in the picture. They were asked to read the text, then to focus on one and then the other possible interpretation, in no specific order. Once they had found all the listed interpretations of an image, they could change the picture with a push of a computer keyboard key. In addition, the participants were instructed not to move their head out of the head mount during the experiment. A calibration of the eye tracker was performed prior to each measurement, followed by the 8 images shown on the screen. The images used in the experiment, in the order they were presented, can be found in Annex 1. The 1st (not ambiguous) image was used as a control.

4.3. DATA PROCESSING

The data from the experiments was collected in video and text formats. First, the videos were analyzed with the manufacturer’s software. This allowed unsuitable recordings to be identified and eliminated, e.g. those affected by partially unsuccessful calibration. In addition, gaze parameters like saccades and fixations could be manually inspected for any given time. An example is given in Figure 3, where green circles represent fixations (their size corresponds to fixation duration) and red lines represent saccades.

Due to the vastness of the data, a graphical representation was needed to facilitate the extraction of useful information. A Python (Copyright © 2001-2018 Python Software Foundation) script to make a 3D graph was written by the author to plot x, y coordinates against time, producing one graph for each picture seen by each person individually. This allowed gaze position changes over time to be seen without having to play and stop video recordings. The graphs were used to identify different regions representing the following times:
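The author's original script is not reproduced here, but a minimal sketch of such a 3D gaze-trace graph, assuming hypothetical pixel coordinates and matplotlib as the plotting library, could look like this:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is required
import matplotlib.pyplot as plt

def plot_gaze_trace(t, x, y, out_path="gaze_trace.png"):
    """Plot gaze x, y coordinates against time as a single 3D trajectory,
    one graph per picture per participant."""
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.plot(t, x, y)
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("x (px)")
    ax.set_zlabel("y (px)")
    fig.savefig(out_path)
    plt.close(fig)
    return out_path

# Hypothetical trace: gaze wandering around the screen for 5 seconds
t = np.linspace(0, 5, 500)
x = 100 + 50 * np.sin(t)
y = 200 + 30 * np.cos(t)
path = plot_gaze_trace(t, x, y)
```

Plateaus in the x and y curves correspond to fixations, and abrupt steps to saccades, which is what makes the distinct regions (onset, reading, perception) visually separable.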

1. saccade onset time (the time before the 1st eye movement for a picture);

2. text reading time and the overall time a subject looked at the image (Figure 4);

3. the time to process an image, calculated by subtracting the reading time from the overall duration.

Averaged values (n=7) as well as standard deviations were calculated and used for the study. An analysis of variance (ANOVA) was used to look for statistically significant differences among the ambiguous images. Finally, a t-test was performed to see whether a significant difference could be found when comparing a subject’s control time and the average time for an ambiguous image.
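This two-step analysis can be sketched with SciPy as below. The timing values are hypothetical stand-ins (the study's per-subject data is not reproduced here); only the structure of the test sequence matches the text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-subject perception times (s) for 3 ambiguous images
img_a = rng.normal(4.0, 0.8, 7)
img_b = rng.normal(4.1, 0.8, 7)
img_c = rng.normal(3.9, 0.8, 7)

# Step 1: one-way ANOVA across the ambiguous images
f_stat, p_anova = stats.f_oneway(img_a, img_b, img_c)

# Step 2: t-test of control times vs. pooled ambiguous times
control = rng.normal(2.4, 0.5, 7)
ambiguous = np.concatenate([img_a, img_b, img_c])
t_stat, p_ttest = stats.ttest_ind(control, ambiguous)
```

A non-significant ANOVA justifies pooling the ambiguous images before comparing them against the control with the t-test.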

Figure 4. 3D graph example with 3 identified zones: saccade onset time, text reading time and overall viewing time

A visual representation (heatmap) showing which parts of each image attracted the most attention was created using the EyeMMV toolbox on GNU Octave 4.2.1 (Copyright (C) 2017 John W. Eaton and others) open source software. A color scale was assigned to image regions based on how much time was spent looking at specific coordinates (grey being the least and red the most viewed pixel coordinates, Fig. 8). The resulting color maps were laid over the images to achieve the final heatmaps [39].
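The EyeMMV algorithm itself is not reproduced here; the sketch below only mimics the general idea (accumulate gaze dwell time per pixel, blur, normalize, then overlay), with the screen size, blur width and gaze samples all being illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(xs, ys, width, height, sigma=20):
    """Build a dwell-time heatmap from gaze coordinates: count samples
    per pixel, then blur so hotspots form smooth colored regions."""
    heat = np.zeros((height, width))
    for x, y in zip(xs, ys):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < width and 0 <= yi < height:
            heat[yi, xi] += 1
    heat = gaussian_filter(heat, sigma)
    return heat / heat.max() if heat.max() > 0 else heat

# Hypothetical gaze samples clustered around two attention areas
rng = np.random.default_rng(1)
xs = np.concatenate([rng.normal(150, 10, 300), rng.normal(450, 10, 300)])
ys = np.concatenate([rng.normal(200, 10, 300), rng.normal(250, 10, 300)])
heat = gaze_heatmap(xs, ys, 640, 480)
# Overlay idea: plt.imshow(image); plt.imshow(heat, alpha=0.5, cmap="hot")
```

The normalized map peaks at the two simulated attention areas, which is the kind of two-hotspot structure reported in the results.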

5. RESULTS AND THEIR DISCUSSION

Data from 7 subjects for 8 images (1 control image and 7 ambiguous images) was analyzed. The rest of the results were discarded because those subjects were not able to maintain head position, which interfered with the calibration done prior to the experiment and hence gave inaccurate results. Also, in some instances participants could not find both possible interpretations in the pictures.

5.1. SACCADE ONSET TIME

The mean saccade onset times for each image can be found in Figure 5. An ANOVA single-factor analysis showed that there was no significant difference among subjects’ saccade onset times (p = 0.1961, p > 0.05). In addition, the t-test showed that there was no significant difference between looking at the control vs. an ambiguous image (p = 0.094, p > 0.05). The overall mean saccade onset time was 0.3407 s. According to Pratt and Trottier, saccadic reaction times can be as fast as < 200 ms if a person proactively looks for a change in an image; however, they become longer with factors like luminance or a conscious intention to look away from a changing element [40]. Healthy young individuals may have reaction times of up to around 600 ms, and these increase further with age or in the presence of eye disease [41]. Therefore, there seems to be no difference between ambiguous and simple images at the first, or saccade onset, stage.


5.2. TEXT READING TIME

The mean text reading times for all images are given in Figure 6. There was no significant difference in text reading times among the ambiguous images (p = 0.4900, p > 0.05). Also, no significant difference was found when comparing the control image and the test images (p = 0.7244, p > 0.05). The mean reading time was 0.6187 s. The texts contained 1–4 words, or 10–23 characters. Typically, a saccade happens every 7–9 characters, although this can vary from one character up to a short sentence, and the fixations between saccades can be 60–500 ms long [42]. This suggests that participants made a few fixations while reading the hints for the images. Although this is longer than the saccade onset time, it is shorter than the time spent looking at the image itself.

Figure 5. Mean saccade onset times


5.3. TIME TO PERCEIVE AN IMAGE

Figure 7 shows the mean times it took participants to perceive the images. The time to perceive an image was calculated by subtracting the reading time from the overall time spent looking at the image. An ANOVA showed no significant difference among the times to perceive the ambiguous images (p = 0.2491, p > 0.05). However, the t-test showed a significant difference between the times to perceive ambiguous images vs. the control (p = 0.0322, p < 0.05). The average time to perceive the control was 2.362 s vs. 4.048 s for the test images. One could argue that, since ambiguous images have two or more possible interpretations, they consequently require more fixations, hence the difference in time. This hypothesis can be tested by examining heatmaps.

Figure 6. Mean text reading times

Figure 7. Mean times to perceive an image


5.4. HEATMAPS

Heatmaps for each image, including the control, can be seen in Figure 8. The images had 3–4 zones that attracted the most attention. In some instances hotspots for text are not visible: the algorithm ignored them due to their relatively short viewing duration. Leaving out the areas with text, each picture has 2 hotspots.

The images can be divided into 2 groups by the localization of attention areas (hotspots). The first group (B, C, D) contains hotspots that can be clearly attributed to one meaning. The second group (images E, F, G, H), on the other hand, contains hotspots that sit on the border between the different percepts and cannot be easily attributed to one interpretation.

Studies have shown that both cases are possible. People can look at a specific area to perceive only one interpretation; in some studies parts of ambiguous images are even made more prominent to make one perception more likely than the other (also known as ‘disambiguation’) [7]. However, while the latter is still true, it has been noticed that positions where both percepts are simultaneously possible also exist [8]. Therefore, it seems that both fixations and higher cognitive factors are important for the perception of ambiguous images [2, 6, 7, 43]. This involvement of higher cognition might also be responsible for the difference in perception time.


(Figure 8: heatmap panels A–H)


6. CONCLUSIONS

1. Eye movement recording was implemented in a psychophysical experiment involving text reading and the perception of ambiguous vs. control images.

2. There was no significant difference in saccade onset times or text reading times between simple and ambiguous images (P > 0.05).

3. Perception time was longer for ambiguous images than for the control: 4.048 s vs. 2.362 s (P < 0.05).

4. Each image contained 2 attention areas; however, for a subset of images it was impossible to determine which rival meaning was perceived by focusing on a specific area.

5. The longer perception times, as well as the inability to determine the exact perceived meaning from gaze position, indicate that higher cognitive factors may also play a role in the perception of ambiguous images.


REFERENCES

[1] Hansen DW, Ji Q. In the eye of the beholder: A survey of models for eyes and gaze. IEEE Trans Pattern Anal Mach Intell 2010; 32(3): 478–500

[2] Filevich E, Becker M, Wu Y-H, Kühn S. Seeing Double: Exploring the Phenomenology of Self-Reported Absence of Rivalry in Bistable Pictures. Front Hum Neurosci 2017; 11: 301

[3] Pearson J, Brascamp J. Sensory memory for ambiguous vision. Trends Cogn Sci (Regul Ed ) 2008; 12(9): 334–41

[4] van Ee R, van Dam LCJ, Brouwer GJ. Voluntary control and the dynamics of perceptual bi-stability. Vision Res 2005; 45(1): 41–55

[5] Xu H, Matsumoto D, Kanazawa K, Takeno J. Using a Conscious System to Construct a Model of the Rubin's Vase Phenomenon. Procedia Computer Science 2016; 88: 27–32

[6] Wang M, Arteaga D, He BJ. Brain mechanisms for simple perception and bistable perception. Proc Natl Acad Sci U S A 2013; 110(35): E3350-9

[7] Intaitė M, Šoliūnas A, Gurčinienė O, Rukšėnas O. Effect of bias on the perception of two simultaneously presented ambiguous figures. Psichologija. 2013 Jan 1;47(47):91-101.

[8] Roy AK, Mahadevappa M, Guha R, Mukherjee J, Akhtar MN. A Novel Technique to develop Cognitive Models for Ambiguous Image Identification using Eye Tracker. IEEE Trans. Affective Comput. 2017: 1

[9] Rucci M, Poletti M. Control and Functions of Fixational Eye Movements. Annu Rev Vis Sci 2015; 1: 499–518

[10] Castelhano MS, Rayner K. Eye movements during reading, visual search, and scene perception: An overview. Cognitive and cultural influences on eye movements 2008; 2175: 3–33.

[11] Strahs RD. Initial Eye Fixations and Eye Movements when Viewing Artistic Reproductions throughout Time. 2015

[12] Krauzlis RJ, Goffart L, Hafed ZM. Neuronal control of fixation and fixational eye movements: Assessing the preclinical data. Philos Trans R Soc Lond B, Biol Sci 2017; 372(1718): 8–18

[13] Kowler E. Eye movements: The past 25 years. Vision Res 2011; 51(13): 1457–83

[14] Popa L, Selejan O, Scott A, Mureşanu DF, Balea M, Rafila A. Reading beyond the glance: Eye tracking in neurosciences. Neurol Sci 2015; 36(5): 683–8

[15] Rucci M, McGraw PV, Krauzlis RJ. Fixational eye movements and perception. Vision Res 2016; 118: 1–4

[16] Rucci M, Iovin R, Poletti M, Santini F. Miniature eye movements enhance fine spatial detail. Nature 2007; 447(7146): 851–4

[17] Martinez-Conde S, Macknik SL, Troncoso XG, Hubel DH. Microsaccades: A neurophysiological analysis. Trends Neurosci 2009; 32(9): 463–75

[18] Martinez-Conde S, Otero-Millan J, Macknik SL. The impact of microsaccades on vision: Towards a unified theory of saccadic function. Nat Rev Neurosci 2013; 14(2): 83–96

[19] Martinez-Conde S, Macknik SL, Hubel DH. The role of fixational eye movements in visual perception. Nat Rev Neurosci 2004; 5(3): 229–40

[20] Ko H-K, Snodderly DM, Poletti M. Eye movements between saccades: Measuring ocular drift and tremor. Vision Res 2016; 122: 93–104

[21] Engbert R, Mergenthaler K. Microsaccades are triggered by low retinal image slip. Proc Natl Acad Sci U S A 2006; 103(18): 7192–7

[22] Martinez-Conde S, Macknik SL, Troncoso XG, Dyar TA. Microsaccades counteract visual fading during fixation. Neuron 2006; 49(2): 297–305

[23] Kowler E, Steinman RM. Small saccades serve no useful purpose: Reply to a letter by R. W. Ditchburn. Vision Res 1980; 20(3): 273–6

[24] Ko H-K, Poletti M, Rucci M. Microsaccades precisely relocate gaze in a high visual acuity task. Nat Neurosci 2010; 13(12): 1549–53

[25] Yuval-Greenberg S, Merriam EP, Heeger DJ. Spontaneous microsaccades reflect shifts in covert attention. J Neurosci 2014; 34(41): 13693–700


[26] Bowers NR, Poletti M. Microsaccades during reading. PLoS ONE 2017; 12(9): e0185180

[27] Frintrop S, Rome E, Christensen HI. Computational visual attention systems and their cognitive foundations. ACM Trans. Appl. Percept. 2010; 7(1): 1–39

[28] Simons DJ, Levin DT. Change blindness. Trends Cogn Sci (Regul Ed ) 1997; 1(7): 261–7

[29] Simons DJ, Franconeri SL, Reimer RL. Change blindness in the absence of a visual disruption. Perception 2000; 29(10): 1143–54

[30] Castelhano MS, Henderson JM. Initial scene representations facilitate eye movement guidance in visual search. J Exp Psychol Hum Percept Perform 2007; 33(4): 753–63

[31] Potter MC, Wyble B, Hagmann CE, McCourt ES. Detecting meaning in RSVP at 13 ms per picture. Atten Percept Psychophys 2014; 76(2): 270–9

[32] Ramos Gameiro R, Kaspar K, König SU, Nordholt S, König P. Exploration and Exploitation in Natural Viewing Behavior. Sci Rep 2017; 7(1): 2311

[33] Judd T, Ehinger K, Durand F, Torralba A. Learning to predict where humans look. In: 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September – 2 October 2009. Piscataway, NJ: IEEE; 2009.

[34] Thorpe S, Fize D, Marlot C. Speed of processing in the human visual system. Nature 1996; 381(6582): 520–2

[35] Arcaro MJ, Kastner S. Topographic organization of areas V3 and V4 and its relation to supra-areal organization of the primate visual system. Vis Neurosci 2015; 32: E014

[36] Tong F. Primary visual cortex and visual awareness. Nat Rev Neurosci 2003; 4(3): 219–29

[37] Hayhoe M, Ballard D. Eye movements in natural behavior. Trends Cogn Sci (Regul Ed ) 2005; 9(4): 188–94

[38] Kornmeier J, Bach M. Ambiguous figures - what happens in the brain when perception changes but not the stimulus. Front Hum Neurosci 2012; 6: 51

[39] Krassanakis V, Filippakopoulou V, Nakos B. EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification. Journal of Eye Movement Research 2014; 7(1)

[40] Pratt J, Trottier L. Pro-saccades and anti-saccades to onset and offset targets. Vision Res 2005; 45(6): 765–74

[41] Mazumdar D, Pel JJM, Panday M, et al. Comparison of saccadic reaction time between normal and glaucoma using an eye movement perimeter. Indian J Ophthalmol 2014; 62(1): 55–9

[42] Liversedge SP, Findlay JM. Saccadic eye movements and cognition. Trends Cogn Sci (Regul Ed ) 2000; 4(1): 6–14

[43] Wimmer MC, Doherty MJ. Investigating children's eye-movements: Cause or effect of reversing ambiguous figures? In: Proceedings of the Annual Meeting of the Cognitive Science Society 2007; 29(29)


ANNEX

A: Control; B: Old People; C: Legs/Hands; D: Apple/Face; E: Vase/Face; F: Men/Woman; G: Donkey/Seal; H: Musician/Face

Figure 8. Images used in the experiment in order: control (A) and ambiguous (B–H)


ANNEX LIST OF REFERENCES

[1] Mickey mouse black and white [retrieved 2018-04-25] Available from: http://worldartsme.com/mickey-mouse-black-and-white-clipart.html#gal_post_55771_mickey-mouse-black-and-white-clipart-1.jpg

[2] "Forever Always" by Octavio Ocampo [retrieved 2018-04-25] Available from: http://brainden.com/images/old-couple.jpg

[3] Hand and legs [retrieved 2018-04-25] Available from: http://www.moillusions.com/wp-content/uploads/2016/04/ac46ca15b4302604dbee7e77f68f463f.jpg

[4] Mak J. Steve [retrieved 2018-04-25] Available from: http://simpledesktops.com/browse/desktops/2011/oct/06/steve/

[5] Xu H, Matsumoto D, Kanazawa K, Takeno J. Using a Conscious System to Construct a Model of the Rubin's Vase Phenomenon. Procedia Computer Science 2016; 88: 27–32

[6] Balcetis E, Lassiter GD, editors. Social psychology of visual perception. Psychology Press; 2010 May 31.

[7] Churchland PS, Churchland PM. Neural worlds and real worlds. Nature Reviews Neuroscience. 2002 Nov;3(11):903.

[8] Bulatov A, Bertulis A. Neurofiziologiniai regimojo suvokimo pradmenys. Kaunas, Kauno Medicinos Universiteto leidykla. 2008.
