Abstract
V. Navalpakkam, L. Itti, Sharing Resources: Buy Attention, Get Recognition, In: Proc. International Workshop on Attention and Performance in Computer Vision (WAPCV'03), Graz, Austria, Jul 2003. (Cited by 39)

Abstract: Inspired by nature's policy of sharing resources, we have enhanced our attention model with minimal extra hardware to enable the twin powers of object detection and recognition. With just the elementary information available at the preattentive stage, in the form of low-level feature maps tuned to color, intensity, and orientation, our model learns representations of objects in diverse, complex backgrounds. The representation starts with simple vectors of low-level feature values computed at one location centered on a given view of the object. We then recursively combine views to form instances, in turn combined into simple objects, composite objects, and so on, taking into account feature values and their variance. Given any new scene, our model uses the learnt representation of the target object to perform top-down biasing on the attention system, so as to render the object more salient by enhancing those features which are characteristic of it. Experimental results indicate that our enhanced model detects targets 5-20 times faster with this biasing than when no feature is enhanced. Our model also recognizes a wide variety of objects, ranging from simple geometrical shapes to complex objects such as soda cans and handicap signs, under noisy conditions, with few false negatives and false positives. The good performance of our lightweight model suggests that the human visual system may indeed share resources extensively, and that attention and object recognition may be so intimately related that if we buy attention, we might get the other for free!
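The biasing idea in the abstract can be illustrated with a minimal sketch: learn the mean and variance of low-level feature values at the target's location across training views, turn features that are strong and reliable (high mean, low variance) into larger top-down gains, and use those gains to weight the feature maps of a new scene. All function names and the gain formula below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def learn_target(feature_samples):
    # feature_samples: (n_views, n_features) array of low-level feature
    # values sampled at the target's location in each training view.
    mean = feature_samples.mean(axis=0)
    var = feature_samples.var(axis=0)
    return mean, var

def biasing_gains(mean, var, eps=1e-6):
    # Illustrative gain rule: features with high mean response and low
    # variance across views are most characteristic of the target, so
    # they receive the largest top-down gains (normalized to sum to 1).
    g = mean / (var + eps)
    return g / g.sum()

def biased_saliency(feature_maps, gains):
    # feature_maps: (n_features, H, W) preattentive maps for a new scene.
    # The biased saliency map is their gain-weighted sum; the target
    # should pop out at the location where its characteristic features fire.
    return np.tensordot(gains, feature_maps, axes=1)
```

Attending to the maximum of the biased saliency map then finds candidate target locations far faster than an unbiased (uniform-gain) search, which is the 5-20x speedup the abstract reports.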

Themes: Computational Modeling, Model of Bottom-Up Saliency-Based Visual Attention, Model of Top-Down Attentional Modulation, Computer Vision, Scene Understanding


Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.
This page generated by bibTOhtml on Wed Feb 15 12:13:56 PST 2017