C. Siagian, L. Itti, Biologically Inspired Mobile Robot Vision Localization, IEEE Transactions on Robotics, Vol. 25, No. 4, pp. 861-873, July 2009. [2008 impact factor: 2.656] (Cited by 213)
Abstract: We present a robot localization system using biologically inspired vision. Our system models two extensively studied human visual capabilities: 1) extracting the 'gist' of a scene to produce a coarse localization hypothesis and 2) refining it by locating salient landmark points in the scene. Gist is computed here as a holistic statistical signature of the image, thereby yielding abstract scene classification and layout. Saliency is computed as a measure of interest at every image location, which efficiently directs the time-consuming landmark-identification process toward the most likely candidate locations in the image. The gist features and salient regions are then further processed using a Monte Carlo localization algorithm to allow the robot to estimate its position. We test the system in three different outdoor environments - building complex (38.4 m x 54.86 m area, 13,966 testing images), vegetation-filled park (82.3 m x 109.73 m area, 26,397 testing images), and open-field park (137.16 m x 178.31 m area, 34,711 testing images) - each with its own challenges. The system is able to localize, on average, within 0.98, 2.63, and 3.46 m, respectively, even with multiple kidnapped-robot instances.
Themes: Computational Modeling, Model of Bottom-Up Saliency-Based Visual Attention, Scene Understanding, Beobots
Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.