L. Itti, Quantitative Modeling of Perceptual Salience at Human Eye Position, Visual Cognition, Vol. 14, No. 4-8, pp. 959-984, Aug-Dec 2006. [2004 impact factor: 1.588] (Cited by 140)
Abstract: We investigate the extent to which a simple model of bottom-up attention and salience may be embedded within a broader computational framework and compared with human eye movement data. In this study, we focus on quantifying whether increased realism of the simulation framework significantly affects quantitative measures of how well the model predicts where humans direct their gaze in video clips. To this end, we compare three variants of the model, tested with 15 video clips of natural scenes that were also shown to three human observers. We measure model-predicted salience at the locations gazed at by the human observers, compared to random locations. The first variant simply processes the raw video clips; the second adds a gaze-contingent foveation filter; and the third further attempts to realistically simulate dynamic human vision by embedding the video frames within a larger background and shifting them to eye position. Our main finding is that increasing simulation realism very significantly improves the predictive ability of the model. This study hence suggests that better emulating how a visual stimulus is actually captured by a constantly rotating retina during active vision has a significant impact on quantitative comparisons between model and human behavior.
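The central measurement described in the abstract, comparing model-predicted salience at human gaze positions against salience at random control locations, can be illustrated with a short Python sketch. This is not the paper's actual pipeline; it assumes one saliency map and one gaze sample per frame, and the function names, patch size, and synthetic data below are hypothetical placeholders.

# Minimal sketch (assumptions as stated above, not the published method):
# sample model salience at human gaze points versus random locations.
import numpy as np

def salience_at(saliency_map, xy, patch=5):
    # Mean salience in a small patch centered on (x, y), clipped to the map.
    h, w = saliency_map.shape
    x, y = int(round(xy[0])), int(round(xy[1]))
    r = patch // 2
    x0, x1 = max(0, x - r), min(w, x + r + 1)
    y0, y1 = max(0, y - r), min(h, y + r + 1)
    return float(saliency_map[y0:y1, x0:x1].mean())

def human_vs_random(saliency_maps, gaze_xy, rng):
    # For each frame, record salience at the human gaze point and at one
    # uniformly random control location in the same map.
    human, control = [], []
    for smap, xy in zip(saliency_maps, gaze_xy):
        h, w = smap.shape
        human.append(salience_at(smap, xy))
        control.append(salience_at(smap, (rng.uniform(0, w - 1), rng.uniform(0, h - 1))))
    return np.array(human), np.array(control)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins: 100 frames of 120x160 saliency maps and gaze samples.
    maps = [rng.random((120, 160)) for _ in range(100)]
    gaze = [(rng.uniform(0, 159), rng.uniform(0, 119)) for _ in range(100)]
    h, c = human_vs_random(maps, gaze, rng)
    print(f"mean salience at gaze: {h.mean():.3f}  at random: {c.mean():.3f}")
    print(f"ratio (gaze / random): {h.mean() / c.mean():.2f}")

In a sketch like this, a gaze/random ratio near 1 would indicate chance-level prediction, while a ratio above 1 would indicate that the model assigns higher salience to gazed locations than to random ones.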
Keywords: Visual attention; eye movements; saliency; bottom-up
Themes: Computational Modeling, Model of Bottom-Up Saliency-Based Visual Attention, Human Eye-Tracking Research