
A. Borji, D. N. Sihite, L. Itti, What/Where to Look Next? Modeling Top-down Visual Attention in Complex Interactive Environments, IEEE Transactions on Systems, Man, and Cybernetics, Part A - Systems and Humans, Vol. 44, No. 5, pp. 523-538, May 2014. [2013 Impact Factor: 2.183] (Cited by 67)

Abstract: Several visual attention models have been proposed for describing eye movements over simple stimuli and tasks such as free viewing or visual search. Yet to date, there exists no computational framework that can reliably mimic human gaze behavior in more complex environments and tasks such as urban driving. Additionally, benchmark datasets, scoring techniques, and top-down model architectures are not yet well understood. In this study, we describe new task-dependent approaches for modeling top-down overt visual attention based on graphical models for probabilistic inference and reasoning. We describe a Dynamic Bayesian Network (DBN) that infers probability distributions over attended objects and spatial locations directly from observed data. Probabilistic inference in our model is performed over object-related functions which are fed from manual annotations of objects in video scenes or by state-of-the-art object detection/recognition algorithms. Evaluating over approximately 3 hours (approximately 315,000 eye fixations and 12,600 saccades) of observers playing 3 video games (time-scheduling, driving, and flight combat), we show that our approach is significantly more predictive of eye fixations compared to: (1) simpler classifier-based models also developed here that map a signature of a scene (multi-modal information from gist, bottom-up saliency, physical actions, and events) to eye positions, (2) 14 state-of-the-art bottom-up saliency models, and (3) brute-force algorithms such as mean eye position. Our results show that the proposed model is more effective in employing and reasoning over spatio-temporal visual data compared with the state-of-the-art.
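The core mechanism the abstract describes, a DBN that updates a probability distribution over attended objects as new frames arrive, can be illustrated with a standard forward-filtering step. This is a minimal sketch under assumed transition and evidence values, not the paper's implementation; the object labels, probabilities, and `forward_step` helper are all hypothetical.

```python
def forward_step(prior, transition, likelihood):
    """One DBN filtering update:
    P(X_t | e_1:t) is proportional to P(e_t | X_t) * sum_x P(X_t | x) * P(x | e_1:t-1).
    prior:      P(x | evidence so far), one entry per candidate object
    transition: transition[i][j] = P(attend object j now | attended object i before)
    likelihood: P(current frame evidence | attending each object)
    """
    n = len(prior)
    # Predict: propagate the previous belief through the transition model.
    predicted = [sum(transition[i][j] * prior[i] for i in range(n)) for j in range(n)]
    # Update: weight by the current-frame evidence and renormalize.
    posterior = [likelihood[j] * predicted[j] for j in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Hypothetical 3-object driving scene: ["car", "traffic light", "pedestrian"].
prior = [1 / 3, 1 / 3, 1 / 3]
transition = [[0.8, 0.1, 0.1],   # attention tends to persist on the same object
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]]
likelihood = [0.2, 0.7, 0.1]     # assumed detector/annotation evidence this frame
posterior = forward_step(prior, transition, likelihood)
# With a uniform prior and symmetric transitions, the posterior simply
# tracks the evidence: [0.2, 0.7, 0.1] -> "traffic light" is most likely attended.
```

In the paper's setting, the evidence at each time step comes from manual object annotations or object detector outputs, and the inferred distribution is read out as a prediction of where the player will fixate next.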

Themes: Computational Modeling, Model of Top-Down Attentional Modulation


Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.
This page generated by bibTOhtml on Fri Jan 26 09:25:23 PST 2018