C. Siagian, C.-K. Chang, L. Itti, Autonomous Mobile Robot Localization and Navigation Using Hierarchical Map Representation Primarily Guided by Vision, Journal of Field Robotics, Vol. 31, No. 3, pp. 408-440, May/Jun 2014. [2012 impact factor: 2.152] (Cited by 26)
Abstract: While impressive recent progress has been achieved with autonomous vehicles both indoors and on streets, autonomous localization and navigation in less constrained and more dynamic environments, such as outdoor pedestrian and bicycle-friendly sites, remains a challenging problem. We describe a new approach that uses several visual perception modules (place recognition, landmark recognition, and road lane detection), supplemented by proximity cues from a planar laser range finder for obstacle avoidance. At the core of our system is a new hybrid topological/grid-occupancy map that integrates the outputs from all perceptual modules despite their different latencies and timescales. Our approach achieves real-time performance by letting fast but shallow processing modules update the map's state while slower but more discriminating modules are still computing. We validated our system using a ground vehicle that autonomously traversed three outdoor routes, each 400 m or longer, several times on a university campus. The routes featured different road types, environmental hazards, moving pedestrians, and service vehicles. In total, the robot logged over 10 km of successful recorded experiments, driving within a median of 1.37 m laterally of the road center and localizing within a median of 0.97 m longitudinally of its true location along the route.
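To make the asynchronous-fusion idea in the abstract concrete, below is a minimal Python sketch; it is not the authors' implementation. The MapState class, the update rates, and the random stand-in "measurements" are all invented for illustration. A fast thread (standing in for lane detection) updates a fine-grained lateral estimate at a high rate, while a slow thread (standing in for place recognition) folds in a coarse topological correction whenever its delayed result arrives, so the shared map stays usable between slow updates.

    import threading
    import time
    import random
    from dataclasses import dataclass

    @dataclass
    class MapState:
        """Toy hybrid map state: a topological segment plus a lateral grid offset."""
        segment: int = 0          # coarse topological node along the route (hypothetical)
        lateral_m: float = 0.0    # fine-grained lateral offset within the road, meters

    state = MapState()
    lock = threading.Lock()

    def fast_lane_detector():
        """Fast, shallow module: refreshes the lateral estimate at ~20 Hz."""
        while True:
            measurement = random.gauss(0.0, 0.3)   # stand-in for a lane-center offset
            with lock:
                # Low-latency update applied immediately to the shared map.
                state.lateral_m = 0.8 * state.lateral_m + 0.2 * measurement
            time.sleep(0.05)

    def slow_place_recognizer():
        """Slow, discriminating module: corrects the topological estimate with latency."""
        while True:
            time.sleep(2.0)                        # simulate heavy computation
            recognized = random.randint(0, 9)      # stand-in for a recognized place
            with lock:
                # High-latency result folded in when it arrives; the fast module
                # kept the map state current in the meantime.
                state.segment = recognized

    if __name__ == "__main__":
        for target in (fast_lane_detector, slow_place_recognizer):
            threading.Thread(target=target, daemon=True).start()
        for _ in range(5):
            time.sleep(1.0)
            with lock:
                print(f"segment={state.segment} lateral={state.lateral_m:+.2f} m")

In the actual system a delayed result would presumably be registered against the map state at its capture time rather than simply overwriting the current estimate; the sketch only illustrates how modules with different latencies can safely share one map state.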
Note: The first two authors contributed equally.
Themes: Computational Modeling, Model of Bottom-Up Saliency-Based Visual Attention, Scene Understanding, Beobots