Abstract




L. Itti, Quantifying the Contribution of Low-Level Saliency to Human Eye Movements in Dynamic Scenes, Visual Cognition, Vol. 12, No. 6, pp. 1093-1123, Aug 2005. [2003 impact factor: 1.588] (Cited by 280)

Abstract: We investigated the contribution of low-level saliency to human eye movements in complex dynamic scenes. Eye movements were recorded while naive observers viewed a heterogeneous collection of 50 video clips (46,489 frames; 4-6 subjects per clip), yielding 11,916 saccades of amplitude 2 deg or more. A model of bottom-up visual attention computed instantaneous saliency at each saccade's future endpoint location, at the instant the saccade started. Median model-predicted saliency at saccade targets was 45 percent of the maximum saliency, a factor of 2.03 greater than expected by chance and statistically significant. Motion and temporal change were stronger predictors of human saccades than color, intensity, or orientation features, and the best predictor was the sum of all features. There was no significant correlation between model-predicted saliency and fixation duration. A majority of saccades were directed to a minority of locations reliably marked as salient by the model, suggesting that bottom-up saliency may provide a set of candidate saccade target locations, with the final choice of which location to fixate determined more strongly by top-down influences.
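The chance comparison described above can be sketched as follows. This is a hypothetical illustration only, not the paper's actual analysis code: the function name, the uniform-random baseline, and the normalization by maximum saliency are assumptions made for the sake of the example.

```python
import random
from statistics import median

def chance_ratio(saliency_map, endpoints, n_random=10000, seed=0):
    """Sample normalized saliency at human saccade endpoints and at
    uniformly random locations, and return (median human-target saliency
    as a fraction of the map maximum, human-to-chance median ratio).

    saliency_map: 2D list of non-negative saliency values (rows of columns).
    endpoints: list of (x, y) saccade endpoint coordinates.
    Hypothetical sketch of the style of analysis, not the published code.
    """
    rng = random.Random(seed)
    h, w = len(saliency_map), len(saliency_map[0])
    max_sal = max(max(row) for row in saliency_map)
    # Saliency at the locations humans actually saccaded to.
    human = [saliency_map[y][x] / max_sal for (x, y) in endpoints]
    # Chance baseline: saliency at uniformly random pixel locations.
    rand = [saliency_map[rng.randrange(h)][rng.randrange(w)] / max_sal
            for _ in range(n_random)]
    return median(human), median(human) / median(rand)
```

A ratio well above 1 would indicate that human saccade targets are more salient, by the model's measure, than randomly chosen locations.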

Keywords: Visual attention; eye movements; saliency; bottom-up; top-down

Themes: Model of Bottom-Up Saliency-Based Visual Attention, Computational Modeling, Model of Top-Down Attentional Modulation, Human Eye-Tracking Research


Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.