C.-K. Chang, C. Siagian, L. Itti, Mobile Robot Vision Navigation & Localization Using Gist and Saliency, In: Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4147-4154, Oct 2010. [2010 acceptance rate: 58.2%] (Cited by 56)
Abstract: We present a vision-based navigation and localization system built on two biologically inspired scene-understanding models derived from studies of human visual capabilities: (1) the Gist model, which captures the holistic characteristics and layout of an image, and (2) the Saliency model, which emulates primate visual attention to identify conspicuous regions in the image. The localization system uses the gist features and salient regions to localize the robot accurately, while the navigation system uses the salient regions for visual feedback control, steering the robot's heading toward a user-provided goal location. We tested the system on our robot, Beobot2.0, in an indoor environment with a route length of 36.67m (10,890 video frames) and an outdoor environment with a route length of 138.27m (28,971 frames). On average, the robot drove within 3.68cm (indoor) and 8.78cm (outdoor) of the center of the lane.
Note: Both first authors contributed equally
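The saliency model mentioned in the abstract can be illustrated with a minimal center-surround sketch. This is a toy single-channel approximation, not the paper's actual pipeline: the real Itti-Koch-style model combines multi-scale color, intensity, and orientation channels, whereas the helper names (`box_blur`, `saliency_map`) and the blur kernel sizes here are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k):
    # Separable-free box blur of size k x k, a crude stand-in for
    # one level of a Gaussian pyramid (illustrative simplification).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def saliency_map(img):
    # Center-surround difference: a fine-scale response minus a
    # coarse-scale response highlights locally conspicuous regions.
    center = box_blur(img, 3)
    surround = box_blur(img, 9)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-9)  # normalize to [0, 1]

# A single bright dot on a dark background is maximally salient.
img = np.zeros((32, 32))
img[16, 16] = 1.0
sal = saliency_map(img)
```

In this toy example the dot at (16, 16) attains the maximum normalized saliency, while uniform regions far from it score near zero.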
Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.