A. Borji, D. N. Sihite, L. Itti, Computational Modeling of Top-down Visual Attention in Interactive Environments, In: Proc. British Machine Vision Conference (BMVC 2011), pp. 85.1-85.12, Sep 2011. [2011 acceptance rate: 31.8%] (Cited by 31)
Abstract: Modeling how visual saliency guides the deployment of attention over visual scenes has attracted much interest recently, among both computer vision and experimental/computational researchers, since visual attention is a key function of both machine and biological vision systems. Research efforts in computer vision have mostly focused on modeling bottom-up saliency. Strong influences on attention and eye movements, however, come from instantaneous task demands. Here, we propose models of top-down visual guidance that take task influences into account. The new models estimate the state of a human subject performing a task (here, playing video games) and map that state to an eye position. Factors influencing the state come from scene gist, physical actions, events, and bottom-up saliency. The proposed models fall into two categories. In the first category, we use classical discriminative classifiers, including Regression, kNN, and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperform 15 competing bottom-up and top-down attention models in predicting future eye fixations.
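The first category of models maps a per-frame state vector to a gaze position with off-the-shelf discriminative learners such as kNN. A minimal, hypothetical sketch of that idea (not the paper's actual implementation) is a nearest-neighbor regressor over synthetic state features standing in for gist, action, event, and saliency cues:

```python
import math

def knn_predict_gaze(train_feats, train_gaze, query, k=3):
    """Predict an eye position as the mean gaze of the k nearest
    training states (Euclidean distance in feature space).
    The feature vectors here are placeholders for the per-frame
    state cues named in the abstract (gist, actions, events,
    bottom-up saliency); dimensions and values are illustrative."""
    dists = sorted(
        (math.dist(f, query), g) for f, g in zip(train_feats, train_gaze)
    )
    nearest = [g for _, g in dists[:k]]
    x = sum(g[0] for g in nearest) / k
    y = sum(g[1] for g in nearest) / k
    return (x, y)

# Toy data: 4 frames with 3-dim state features and known fixations (px).
feats = [(0.1, 0.2, 0.0), (0.9, 0.8, 1.0), (0.2, 0.1, 0.0), (0.8, 0.9, 1.0)]
gaze = [(100, 120), (500, 400), (110, 130), (520, 410)]
print(knn_predict_gaze(feats, gaze, (0.15, 0.15, 0.0), k=2))  # → (105.0, 125.0)
```

The query state resembles the first and third training frames, so the prediction averages their fixations; the paper's SVM and Bayesian-network variants replace this mapping while keeping the same state-to-gaze formulation.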
Themes: Model of Bottom-Up Saliency-Based Visual Attention, Model of Top-Down Attentional Modulation, Computational Modeling, Human Psychophysics
Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.