L. Itti, Modeling bottom-up and top-down visual attention in humans and monkeys, Harvard Medical School weekly seminar, Cambridge, MA, Jan 2009.
Abstract: Visual processing of complex natural environments requires animals to combine, in a highly dynamic and adaptive manner, sensory signals originating from the environment (bottom-up) with behavioral goals and priorities dictated by the task at hand (top-down). These bottom-up and top-down influences combine to serve the many tasks that require us to direct attention to the most "relevant" entities in our visual environment. While much experimental progress has been made in investigating how humans and other primates carry out such goal-based attentional selection, very little is understood about the general mathematical principles and neuro-computational architectures that subserve the observed behavior. I will describe recent computational work that attacks the problem of developing models of visual attentional selection that are more flexible and can be strongly modulated by the task at hand. I will support the proposed architectures by comparing their predictions to behavioral recordings from humans and monkeys. Finally, I will show examples of applications of these models to real-world vision challenges, using complex stimuli from television programs and modern immersive video games.
Themes: Model of Bottom-Up Saliency-Based Visual Attention, Model of Top-Down Attentional Modulation, Computational Modeling
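The first theme above, bottom-up saliency-based visual attention, can be illustrated with a minimal single-channel sketch: center-surround differences computed at several scales and summed into a master saliency map. This is a hedged toy, not the iLab implementation — it uses only the intensity channel, box blurs as a stand-in for a Gaussian pyramid, and names (`intensity_saliency`, `box_blur`) invented here for illustration:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur of odd width k (crude stand-in for a Gaussian pyramid level)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # blur along rows, then along columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def intensity_saliency(img, surround_scales=(7, 15)):
    """Toy bottom-up saliency map: absolute center-surround differences
    at several surround scales, summed and normalized (intensity only)."""
    sal = np.zeros_like(img, dtype=float)
    center = box_blur(img, 3)
    for k in surround_scales:
        surround = box_blur(img, k)
        sal += np.abs(center - surround)
    if sal.max() > 0:
        sal /= sal.max()  # normalize master map to [0, 1]
    return sal
```

In the full model, analogous maps for color opponency and orientation would be normalized and combined, and a winner-take-all stage would select the most salient location; top-down task influence then modulates the weighting of those channels.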
Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.
This page generated by bibTOhtml on Wed Feb 15 12:13:56 PST 2017