Abstract




C.-K. Chang, C. Siagian, L. Itti, Hardware and software computing architecture for robotics applications of neuroscience-inspired vision and navigation algorithms, In: Proc. Vision Science Society Annual Meeting (VSS10), May 2010.

Abstract: Biologically-inspired vision algorithms have thus far not been widely applied to real-time robotics because of their intensive computation requirements. We present a biologically-inspired visual navigation and localization system implemented in real time using a cloud computing framework. We create a visual computation architecture on a compact wheelchair-based mobile platform. Our work involves a new design of both cluster computer hardware and software for real-time vision. The vision hardware consists of two custom-built carrier boards that host eight computer modules (16 processor cores total) connected to a camera. For all the nodes to communicate with each other, we use the ICE (Internet Communications Engine) middleware, which allows us to share images and other intermediate information such as saliency maps (Itti and Koch 2001) and scene 'gist' features (Siagian and Itti 2007). The gist features, which coarsely encode the layout of the scene, are used to quickly identify the general whereabouts of the robot in a map, while the more accurate but time-consuming salient landmark recognition is used to pinpoint its location down to coordinates. Here we extend the system to also navigate its environment (indoors and outdoors) using these same features. That is, the robot has to identify the direction of the road, use it to compute movement commands, and perform visual feedback control to ensure safe driving over time. We use four of the eight computers for localization (the salient landmark recognition system), while the remaining four compute the navigation strategy. As a result, the overall system performs all of these computing tasks simultaneously, in real time, at 10 frames per second. In short, with the new design and implementation of this highly capable vision platform, we are able to run computationally complex biologically-inspired vision algorithms on a mobile robot.
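
As a rough illustration of the inter-node communication, the sketch below shows how one cluster node might expose a frame-receiving service over ZeroC's Ice middleware (assuming the Ice 3.7 Python binding). The Robot.ImageSink Slice interface, the imageSink identity, and the TCP endpoint are hypothetical stand-ins for illustration, not the actual iLab interfaces.

    # Hypothetical Slice definition (ImageShare.ice), compiled with slice2py:
    #
    #   module Robot {
    #       sequence<byte> Pixels;
    #       interface ImageSink {
    #           void pushFrame(Pixels data, int width, int height);
    #       };
    #   };

    import sys
    import Ice
    import Robot  # stub module generated by: slice2py ImageShare.ice

    class ImageSinkI(Robot.ImageSink):
        """Servant receiving frames pushed by peer nodes in the cluster."""
        def pushFrame(self, data, width, height, current=None):
            print("got %dx%d frame (%d bytes)" % (width, height, len(data)))

    def main():
        with Ice.initialize(sys.argv) as communicator:
            # Listen on TCP port 10000 for frames from the camera node.
            adapter = communicator.createObjectAdapterWithEndpoints(
                "ImageAdapter", "tcp -p 10000")
            adapter.add(ImageSinkI(), Ice.stringToIdentity("imageSink"))
            adapter.activate()
            communicator.waitForShutdown()

    if __name__ == "__main__":
        main()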
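
The two-stage localization logic (coarse gist matching followed by salient-landmark refinement) could be prototyped along the lines below. The data layout (per-segment gist templates, a landmark database of descriptors with map coordinates) and the match_fn parameter are assumptions for illustration, not the paper's actual data structures.

    import numpy as np

    def coarse_localize(gist, segment_gists):
        """Stage 1: compare the frame's gist vector with stored per-segment
        templates; the nearest one gives the robot's general whereabouts."""
        dists = [np.linalg.norm(gist - g) for g in segment_gists]
        return int(np.argmin(dists))

    def refine_location(frame_landmarks, landmark_db, match_fn):
        """Stage 2: match salient landmarks from the frame against the
        database of the hypothesized segment to pin down (x, y) coordinates.
        match_fn scores a (landmark, descriptor) pair; higher is better."""
        best_xy, best_score = None, float("-inf")
        for lm in frame_landmarks:
            for entry in landmark_db:
                score = match_fn(lm, entry["descriptor"])
                if score > best_score:
                    best_xy, best_score = entry["location"], score
        return best_xy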
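
On the navigation side, the visual feedback control could in its simplest form be a proportional controller that steers toward the estimated road direction; the gain, the turn-rate clamp, and the command interface below are illustrative assumptions, not values from the paper.

    def steering_command(road_heading_deg, robot_heading_deg,
                         k_p=0.5, max_turn_deg_s=30.0):
        """Proportional visual-feedback steering: turn toward the estimated
        road direction, clamping the turn rate for safe driving. The gain
        k_p and the limit are illustrative, not taken from the paper."""
        error = road_heading_deg - robot_heading_deg
        # Wrap the heading error into [-180, 180) degrees.
        error = (error + 180.0) % 360.0 - 180.0
        turn_rate = max(-max_turn_deg_s, min(max_turn_deg_s, k_p * error))
        return turn_rate  # degrees/second, sent to the wheelchair base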

Themes: Model of Bottom-Up Saliency-Based Visual Attention, Model of Top-Down Attentional Modulation, Computational Modeling, Human Eye-Tracking Research

 
