D. J. Berg, L. Itti, Modeling bottom-up and top-down guidance of eye movements in humans and monkeys, European Conference on Eye Movements (ECEM), Southampton, England, Aug 2009.
Abstract: Active visual processing of complex natural environments requires animals to combine, in a highly dynamic and adaptive manner, sensory signals that originate from the environment (bottom-up) with behavioral goals and priorities dictated by the task at hand (top-down). Together, bottom-up and top-down influences serve the many tasks that require us to direct attention to the most "relevant" entities in our visual environment. While much experimental progress has been made in investigating how humans and other primates may operate such goal-based attentional selection, very little is understood about the general mathematical principles and neuro-computational architectures that subserve the observed behavior. I will describe recent computational work that attacks the problem of developing models of visual attentional selection and eye movement programming which are more flexible and can be strongly modulated by the task at hand. I will support the proposed architectures by comparing their predictions to behavioral recordings from humans and monkeys. I will show examples of applications of these models to real-world vision challenges, using complex stimuli from television programs and modern immersive video games.
Themes: Model of Bottom-Up Saliency-Based Visual Attention, Model of Top-Down Attentional Modulation, Computational Modeling
Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.