Abstract




J. Ng, R. Hirata, T. N. Mundhenk, E. Pichon, A. Tsui, T. Ventrice, P. Williams, L. Itti, Towards Visually-Guided Neuromorphic Robots: Beobots, In: Proc. 9th Joint Symposium on Neural Computation (JSNC'02), Pasadena, California, May 2002. (Cited by 3)

Abstract: Despite the advancements made in the fields of AI and robotics, robots today remain vastly inferior to animals in terms of mental agility. The main reason for this is that robots do not possess the neural capabilities of an animal brain. Neural algorithms adapt well to diverse environments, whereas robot AI is usually limited to a test lab setting. To resolve this disparity, an intuitive solution is to emulate the neural functions present in animal brains. However, neural algorithms require vast amounts of computational power, in particular those that involve real-time vision. Many robots run on power-saving embedded processors and do not have many CPU cycles to spare. We are developing a high-performance visually-guided robotics platform with enough processing speed to run neural algorithms. This 'Beobot' platform consists of a high-performance radio-controlled truck chassis (the robot) carrying an x86-based supercomputer (the Beowulf cluster). The computing cluster consists of two compact dual-CPU motherboards linked together by a gigabit Ethernet connection. Powering the computer are four Pentium-III (Coppermine) 1 GHz processors (two per motherboard) along with 768 MB of memory per motherboard. Two FireWire cameras provide the Beobot's vision. A CompactFlash card serves as a makeshift hard drive, with enough space to store a thin UNIX-variant kernel and iLab's vision software. The vision software itself consists of several general-purpose neural algorithms. Most prominent of these is iLab's saliency-based visual attention system, which enables the Beobot to direct its attention toward the most salient locations and objects in a visual scene. In addition, we have developed prototype algorithms that allow the Beobot to parse scene layouts and perform object recognition. A primitive action/memory AI system allows it to implement simple visually-guided behavior. Finally, the component-oriented nature of the vision software allows new neural modules to be added in the future. The potential advantage of the Beobot comes from its use of x86-based hardware and a UNIX-based C++ development environment. Nearly all of the Beobot's parts are inexpensive, off-the-shelf components, which makes broken parts easy to replace. Furthermore, the expandability of PC hardware allows devices to be plugged into the Beobot for additional functionality. All of these traits make the Beobot potentially easy to replicate, allowing for wider adoption once the prototype is complete.
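
To illustrate the kind of computation the saliency-based attention system performs, the following is a minimal, self-contained C++ sketch of a single-channel center-surround contrast map with a winner-take-all readout. The image size, kernel radii, and function names (localMean, saliencyMap) are illustrative assumptions only; the actual iLab software operates on multiple feature channels (intensity, color, orientation) across a multi-scale pyramid and uses a different API.

// Minimal sketch of saliency-style center-surround contrast on a grayscale
// image, loosely inspired by the bottom-up attention model described above.
// Single intensity channel and fixed radii are simplifying assumptions.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Box-filter mean of a square neighborhood of the given radius around (x, y),
// clipped at the image border.
static double localMean(const std::vector<double>& img, int w, int h,
                        int x, int y, int radius) {
  double sum = 0.0;
  int count = 0;
  for (int dy = -radius; dy <= radius; ++dy)
    for (int dx = -radius; dx <= radius; ++dx) {
      int xx = x + dx, yy = y + dy;
      if (xx < 0 || yy < 0 || xx >= w || yy >= h) continue;
      sum += img[yy * w + xx];
      ++count;
    }
  return count > 0 ? sum / count : 0.0;
}

// Center-surround contrast: |mean(center) - mean(surround)| at every pixel.
std::vector<double> saliencyMap(const std::vector<double>& img, int w, int h,
                                int centerRadius, int surroundRadius) {
  std::vector<double> sal(img.size(), 0.0);
  for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
      double c = localMean(img, w, h, x, y, centerRadius);
      double s = localMean(img, w, h, x, y, surroundRadius);
      sal[y * w + x] = std::abs(c - s);  // high where center differs from surround
    }
  return sal;
}

int main() {
  // Toy 32x32 dark image with one bright patch; the most salient location
  // should land on that patch.
  const int w = 32, h = 32;
  std::vector<double> img(w * h, 0.1);
  for (int y = 12; y < 16; ++y)
    for (int x = 20; x < 24; ++x) img[y * w + x] = 0.9;

  std::vector<double> sal = saliencyMap(img, w, h, /*centerRadius=*/1,
                                        /*surroundRadius=*/5);

  // Winner-take-all readout: report the most salient pixel (the attended location).
  std::size_t best = 0;
  for (std::size_t i = 1; i < sal.size(); ++i)
    if (sal[i] > sal[best]) best = i;
  std::cout << "Most salient location: (" << best % w << ", " << best / w << ")\n";
  return 0;
}

In the full model, several such feature maps are normalized and summed into one saliency map, and the winner-take-all stage is followed by inhibition of return so that attention can shift to the next most salient location.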

Themes: Computational Modeling, Model of Bottom-Up Saliency-Based Visual Attention, Computer Vision, Beobots

 
