We present a vision-based navigation and localization system built on two
biologically inspired scene-understanding models derived from studies of
human visual capabilities:
(1) a gist model, which captures the holistic characteristics and layout of an image, and
(2) a saliency model, which emulates the visual attention of primates to identify
conspicuous regions in the image.
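To make the gist computation concrete: in the underlying approach (Siagian & Itti, 2007), each of the saliency model's center-surround feature maps is averaged over a fixed 4x4 grid, and the per-cell means from all maps are concatenated into the gist vector. The C++ sketch below illustrates this grid averaging for a single map; the function name and the plain-vector map representation are our own illustration, not the toolkit's API.

    // Illustrative sketch only: average one feature map over a 4x4 grid,
    // producing 16 gist values. The real system repeats this for every
    // center-surround feature map and concatenates the results.
    #include <cstddef>
    #include <vector>

    std::vector<float> gridAverages(const std::vector<float>& map,
                                    std::size_t width, std::size_t height,
                                    std::size_t grid = 4)
    {
      std::vector<float> sums(grid * grid, 0.0f);
      std::vector<std::size_t> counts(grid * grid, 0);
      for (std::size_t y = 0; y < height; ++y)
        for (std::size_t x = 0; x < width; ++x)
        {
          // Map pixel (x, y) to its grid cell.
          const std::size_t cell = (y * grid / height) * grid + (x * grid / width);
          sums[cell] += map[y * width + x];  // accumulate pixel values per cell
          ++counts[cell];
        }
      for (std::size_t i = 0; i < sums.size(); ++i)
        if (counts[i] > 0) sums[i] /= counts[i];  // cell mean
      return sums;
    }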
The localization system uses the gist features and salient regions to accurately localize the robot, while the navigation system uses the salient regions in a visual feedback controller that directs the robot's heading toward a user-provided goal location.
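As a hedged illustration of the visual feedback idea, the sketch below steers the robot so that a tracked salient region drifts back toward the image column where it was observed on the taught route. The proportional control law, the gain kP, and all names here are assumptions for illustration, not the system's actual controller.

    // Illustrative proportional steering from a salient-region match.
    // Positive output = turn right; output is clamped to [-1, 1].
    #include <algorithm>

    double headingCommand(int regionX,     // current image column of the region
                          int targetX,     // column recorded during teaching
                          int imageWidth,  // image width in pixels
                          double kP = 1.0) // proportional gain (assumed)
    {
      // Normalize the horizontal error to roughly [-1, 1].
      const double err = static_cast<double>(regionX - targetX) / (imageWidth / 2.0);
      return std::clamp(kP * err, -1.0, 1.0);
    }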
We tested the system on our robot, Beobot2.0, on an indoor route of 36.67m (10,890 video frames) and an outdoor route of 138.27m (28,971 frames). On average, the robot drove within 3.68cm of the center of the lane indoors and within 8.78cm outdoors.
The code is integrated into the iLab Neuromorphic Vision C++ Toolkit. To gain access to the code, please follow the download instructions there.
The code you want is in saliency/src/Robot/Beobot2/Navigation/GistSal_Navigation/GistSal_Navigation.C
To compile the code: make bin/app-GistSal_Navigation
To run the code, execute the compiled binary from the saliency folder: bin/app-GistSal_Navigation