Abstract




C. Siagian, C.-K. Chang, R. C. Voorhies, L. Itti, Beobot 2.0: Cluster Architecture for Mobile Robotics, Journal of Field Robotics, Vol. 28, No. 2, pp. 278-302, March/April 2011. [2010 2-year impact factor: 3.580] (Cited by 16)

Abstract: With the recent proliferation of robust but computationally demanding robotic algorithms, there is now a need for a mobile robot platform equipped with powerful computing facilities. In this paper, we present the design and implementation of Beobot 2.0: an affordable research-level mobile robot equipped with a cluster of sixteen 2.2 GHz processing cores. Beobot 2.0 uses compact Computer on Module (COM) processors with modest power requirements, thus accommodating various robot design constraints while still satisfying the requirement for computationally intensive algorithms. In the paper, we discuss issues involved in utilizing multiple COM Express modules on a mobile platform, such as inter-processor communication, power consumption, cooling, and protection from shocks, vibrations, and other environmental hazards such as dust and moisture. We have applied Beobot 2.0 to the following computationally demanding tasks: laser-based robot navigation, SIFT object recognition, finding objects in a cluttered scene using visual saliency, and vision-based localization, wherein the robot has to identify landmarks from a large database of images in a timely manner. For the last task, we tested the localization system in three large-scale outdoor environments, which provide 3583, 6006, and 8823 test frames, respectively. The localization errors for the three environments were 4.12 ft, 7.81 ft, and 13.40 ft, respectively. The per-frame processing times were 421.45 ms, 794.31 ms, and 884.74 ms, respectively, representing speedup factors of 2.80, 3.00, and 3.58 when compared to a single dual-core computer performing localization.
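
Aside: the abstract reports per-frame speedups from spreading the landmark search over the cluster's cores. The short sketch below only illustrates the general idea of fanning one query frame's database search out to worker processes and reducing to the best match; it is not the authors' implementation, and the descriptor representation, dot-product matcher, worker count, and use of Python's multiprocessing module are assumptions made here purely for illustration.

    # Illustrative sketch only: parallel landmark matching for one query frame.
    from multiprocessing import Pool
    import numpy as np

    NUM_WORKERS = 16  # e.g., one worker per processing core in the cluster

    def match_shard(args):
        """Score a query descriptor against one shard of the landmark database.

        Descriptors here are plain vectors scored by dot product; a real system
        would use SIFT keypoint matching or a comparable landmark matcher.
        """
        query, shard = args
        best_idx, best_score = -1, float("-inf")
        for idx, landmark in shard:
            score = float(np.dot(query, landmark))
            if score > best_score:
                best_idx, best_score = idx, score
        return best_idx, best_score

    def localize(query, landmark_db):
        """Find the best-matching landmark by splitting the database across workers."""
        indexed = list(enumerate(landmark_db))
        shards = [indexed[i::NUM_WORKERS] for i in range(NUM_WORKERS)]
        with Pool(NUM_WORKERS) as pool:
            results = pool.map(match_shard, [(query, s) for s in shards])
        # Reduce step: keep the single best (index, score) pair across all shards.
        return max(results, key=lambda r: r[1])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        db = rng.normal(size=(1000, 128))                 # toy "landmark database"
        query = db[42] + rng.normal(scale=0.01, size=128)  # noisy copy of entry 42
        print(localize(query, db))                        # should report index 42

Because each database shard is scored independently and only a small (index, score) result is returned, the work partitions cleanly across cores, which is the kind of map-and-reduce structure that yields the near-linear speedups reported above.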

Themes: Computational Modeling, Model of Bottom-Up Saliency-Based Visual Attention, Scene Understanding, Beobots

