Towards Visually-Guided Neuromorphic Robots
Beobots in Action

We are developing a number of neuromorphic, visually-guided behaviors for the Beobot; please see our publications page for details. One important aspect of our research is learning to navigate. To this end, we feed decoded signals from the robot's radio-control receiver into the sound-card input of one of its motherboards. This lets the robot measure a teaching signal sent by the human operators assisting it during learning (a rough sketch of how such a signal might be decoded is given below).

Testing under human control

Here we tested the mechanical stability of the robot under human radio control.
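The exact encoding of the teaching signal is not described on this page. As a minimal, purely illustrative sketch, assuming the radio-control receiver's channel is wired into the line-in and carries a standard RC PWM pulse train (one 1.0-2.0 ms high pulse per ~20 ms frame), decoding could look like the following; none of these names or parameters come from the Beobot code base:

```python
# Hypothetical sketch: decoding an RC "teaching" channel captured through a
# sound-card line-in, assuming standard RC PWM (1.0-2.0 ms pulse per frame).
# Sample rate and thresholds are illustrative assumptions only.
import numpy as np

SAMPLE_RATE = 44100  # assumed sound-card sampling rate (Hz)

def decode_teaching_signal(samples, threshold=0.5):
    """Map the width of the high pulse in one PWM frame to a command in [-1, 1]."""
    high = samples > threshold                       # binarize the audio buffer
    edges = np.diff(high.astype(int))                # +1 = rising edge, -1 = falling edge
    rises = np.flatnonzero(edges == 1)
    falls = np.flatnonzero(edges == -1)
    if len(rises) == 0 or len(falls) == 0:
        return None                                  # no pulse found in this buffer
    fall = falls[falls > rises[0]]
    if len(fall) == 0:
        return None
    width_ms = (fall[0] - rises[0]) / SAMPLE_RATE * 1000.0
    # standard RC convention: 1.0 ms = full left, 1.5 ms = center, 2.0 ms = full right
    return float(np.clip((width_ms - 1.5) / 0.5, -1.0, 1.0))

if __name__ == "__main__":
    # synthesize one 20 ms frame containing a 1.75 ms pulse (half right)
    frame = np.zeros(int(0.020 * SAMPLE_RATE))
    frame[100:100 + int(0.00175 * SAMPLE_RATE)] = 1.0
    print(decode_teaching_signal(frame))             # ~0.5
```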
Learning to navigate

Christopher Ackerman developed a nifty piece of software by which the robot learns to navigate based on a global analysis of the gist of the scene it perceives through its video camera.
The interesting aspect of this approach is that it makes no assumption about the contents or structure of the images seen by the Beobot. Instead, a generic analysis of each frame is performed (using a Fourier transform), converting it into a 40-number 'signature'. A two-layer backprop network then learns the appropriate action to take for a given signature. As an example, the Beobot can also be trained to navigate along a different path and under very different lighting conditions (dusk).
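The precise gist computation and network layout are not spelled out here. As a rough, self-contained sketch of the idea, with the 5x8 spectral binning, hidden-layer size, and learning rate as illustrative assumptions rather than the actual Beobot parameters, one could write:

```python
# Hypothetical sketch of the gist-signature idea: each frame is reduced to a
# 40-number Fourier "signature" and a small two-layer backprop network maps
# signatures to steering commands. Parameters are assumptions, not Beobot's.
import numpy as np

def gist_signature(frame, grid=(5, 8)):
    """Average the log magnitude spectrum of a grayscale frame over a 5x8 grid -> 40 numbers."""
    spectrum = np.log1p(np.fft.fftshift(np.abs(np.fft.fft2(frame))))
    rows = np.array_split(spectrum, grid[0], axis=0)
    sig = [cell.mean() for row in rows for cell in np.array_split(row, grid[1], axis=1)]
    return np.asarray(sig)  # shape (40,)

class TwoLayerNet:
    """Minimal two-layer network trained with plain backpropagation."""
    def __init__(self, n_in=40, n_hidden=20, n_out=1, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1)       # hidden layer
        return np.tanh(self.h @ self.W2)    # steering command in [-1, 1]

    def train_step(self, x, target):
        y = self.forward(x)
        err = y - target
        d_out = err * (1 - y ** 2)                        # backprop through output tanh
        d_hid = (d_out @ self.W2.T) * (1 - self.h ** 2)   # backprop through hidden tanh
        self.W2 -= self.lr * np.outer(self.h, d_out)
        self.W1 -= self.lr * np.outer(x, d_hid)
        return float((err ** 2).mean())

if __name__ == "__main__":
    net = TwoLayerNet()
    frame = np.random.default_rng(1).random((120, 160))   # stand-in camera frame
    sig = gist_signature(frame)
    for _ in range(100):
        loss = net.train_step(sig, np.array([0.3]))       # teaching signal: steer slightly right
    print(round(loss, 4))
```

In this sketch the teaching signal decoded from the radio control plays the role of the training target, so the same signatures can be relearned for a new path or different lighting simply by driving the robot under human control again.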
Copyright © 2005 by the University of Southern California, iLab and The Beobot Team. Last updated Thursday, 02-Sep-2010 10:05:32 PDT