L. Elazary, L. Itti, A Bayesian model for efficient visual search and recognition, Vision Research, Vol. 50, No. 14, pp. 1338-1352, Jun 2010. [2008 impact factor: 2.051] (Cited by 147)
Abstract: Humans employ interacting bottom-up and top-down processes to significantly speed up search and recognition of particular targets. We describe a new model of attention guidance for efficient and scalable first-stage search and recognition with many objects (tested on 117,174 images of 1147 objects, plus 40 satellite images). Recognition performance is on par with or better than SIFT and HMAX, while being, respectively, 1500 and 279 times faster. The model is also used for top-down guided search, finding a desired object in a 5x5 search array within four attempts, and improving performance for finding houses in satellite images.
Themes: Model of Bottom-Up Saliency-Based Visual Attention, Model of Top-Down Attentional Modulation, Human Eye-Tracking Research
Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.