This system uses the salient regions recognized by the localization system to direct the robot's heading.

This page mostly covers tools and implementation details that are specific to our lab (both hardware and software) and are not included in the papers.

To Do

improvements:

  • KAI & CHRISTIAN: change the corner database to use salient regions.
  • CHRISTIAN: Local Navigation Map
  • KAI: encoder-based robot control. For example:

A (trans, rot) velocity command of (1.0, 0.0) means the same number of ticks on the left and right wheels (no need to worry about slippage).

  A (trans, rot) velocity command of (0.0, 1.0) means the same number of ticks on the left and right wheels, but in opposite directions (no need to worry about slippage). A rough sketch of this conversion is given below.
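As a rough illustration of the idea (not the actual Beobot 2.0 motor controller), the C++ sketch below converts a (trans, rot) command into per-wheel encoder tick targets for a differential drive, interpreting the pair as a displacement in meters and radians; the wheel base, wheel radius, and ticks-per-revolution constants are made-up placeholders.

#include <cmath>
#include <cstdio>

// Hypothetical robot constants; replace with the real Beobot 2.0 values.
const double WHEEL_BASE_M   = 0.50;    // distance between the two wheels
const double WHEEL_RADIUS_M = 0.10;
const double TICKS_PER_REV  = 2000.0;
const double PI             = 3.14159265358979;

// Convert a (trans, rot) command (meters, radians) into
// left/right encoder tick targets for a differential drive.
void commandToTicks(double trans, double rot, long& leftTicks, long& rightTicks)
{
  // Arc length each wheel must travel:
  //   pure translation -> both wheels move the same distance
  //   pure rotation    -> wheels move the same distance in opposite directions
  const double leftDist  = trans - rot * WHEEL_BASE_M / 2.0;
  const double rightDist = trans + rot * WHEEL_BASE_M / 2.0;

  const double ticksPerMeter = TICKS_PER_REV / (2.0 * PI * WHEEL_RADIUS_M);
  leftTicks  = std::lround(leftDist  * ticksPerMeter);
  rightTicks = std::lround(rightDist * ticksPerMeter);
}

int main()
{
  long l, r;
  commandToTicks(1.0, 0.0, l, r);  // same ticks on both wheels
  std::printf("translate: L=%ld R=%ld\n", l, r);
  commandToTicks(0.0, 1.0, l, r);  // same magnitude, opposite signs
  std::printf("rotate:    L=%ld R=%ld\n", l, r);
  return 0;
}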

IEEE-TRobotics data to take:

  • better ground truth than encoder integration (possibly GPS-based).
  • add trajectory.
  • sites: one indoor site with many branches and decision making at the corners.
  • add results: success or failure (whether the robot scraped anything), number of recoveries, travel time, robot speed.
  • check the remaining references in the Google Docs document.

Environment Information

The steps to create the environment information (map and landmark database) are the following:

  • divide the training images into segments.
  • run the GSnav programs (bin/beobot-GSnav-master, bin/beobot-GSnav, bin/beobot-GSnav-dorsal) to create a landmark database and gist features for each frame. In three separate shells, ssh to bx4 (or wherever the data is, usually the machine the camera is currently connected to) and run the commands below in parallel. Note that this has to be done for each session of each segment. Run GSnav and GSnav-dorsal first, wait 5 seconds, then run GSnav-master:

$ bin/beobot-GSnav --ip-port=9791

$ bin/beobot-GSnav-dorsal --ip-port=9792
$ bin/beobot-GSnav-master
    --in=../data/logs/DATE_OF_FRAMES_TAKEN/image_000000000#.ppm
    --input-frames=START-END@1Hz train ../data/SITE/SITE.env
    #segment
    current_segment_number
    --beowulf-slaves=bx4:9791,bx4:9792
    --ip-port=9790
    SITE_current_segment_number

Make sure the image size is 160×120 (a quick header check is sketched below).
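A minimal C++ sketch for that check, assuming plain PPM frames and ignoring comment lines in the header (this is not part of the toolkit):

#include <cstdio>
#include <fstream>
#include <string>

// Read the PPM header and verify the frame is 160x120.
// Note: PPM comment lines ("# ...") in the header are not handled here.
bool isExpectedSize(const std::string& filename, int expW = 160, int expH = 120)
{
  std::ifstream f(filename.c_str());
  std::string magic;
  int w = 0, h = 0;
  f >> magic >> w >> h;   // "P6" (binary) or "P3" (ASCII), then width and height
  return f.good() && (magic == "P6" || magic == "P3") && w == expW && h == expH;
}

int main(int argc, char** argv)
{
  for (int i = 1; i < argc; ++i)
    if (!isExpectedSize(argv[i]))
      std::printf("%s is not a 160x120 PPM frame\n", argv[i]);
  return 0;
}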

  • train the segment estimator using bin/train-FFN (feed-forward neural network), as explained below.
  • create a topological map by hand. We need to measure the distance of each edge and record the location of each node (an illustrative sketch of the information the map stores follows this list).
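The C++ sketch below only illustrates the kind of information the hand-made map needs to hold; the field names, layout, and example values are assumptions for illustration, not the actual .tmap file format.

#include <string>
#include <vector>

// Illustrative structures only; the real .tmap format may differ.
struct MapNode
{
  int id;
  double x, y;        // measured location of the node (e.g., an intersection), in meters
  std::string label;  // optional human-readable name
};

struct MapEdge
{
  int nodeA, nodeB;   // ids of the two end nodes
  double distance;    // measured length of the edge, in meters
  int segment;        // segment number associated with this edge
};

struct TopologicalMap
{
  std::vector<MapNode> nodes;
  std::vector<MapEdge> edges;
};

int main()
{
  TopologicalMap map;
  MapNode n0 = { 0, 0.0,  0.0, "lobby corner"   };   // dummy example nodes
  MapNode n1 = { 1, 0.0, 12.5, "end of hallway" };
  MapEdge e  = { 0, 1, 12.5, 0 };                    // 12.5 m hallway between them
  map.nodes.push_back(n0);
  map.nodes.push_back(n1);
  map.edges.push_back(e);
  return 0;
}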

Gist Segment Estimation Training Process

The steps are as follows:

  • Calculate the dimension reduction matrix:
    • Open Matlab (you may need to move SITE_sessions.txt and the gist files to a computer with Matlab).
    • Make sure FastICA is installed.
    • In the Matlab command line, type x = loadData('GistDirectory','SITE_sessions.txt');
    • Type fasticag to load the FastICA GUI.
    • Load data: type the variable name x.
    • Use Reduce dim. to reduce the dimension to 80 (as in the PAMI 2007 paper).
    • Click Do ICA.
    • Save the results with the _SITE suffix.
    • Click Quit to close the FastICA GUI.
    • In the Matlab command line, type d = saveFile('GistDirectory/SITE.evec', transpose(W_SITE)); to save the dimension reduction matrix.
    • Move the SITE.evec file back to the SITE/gistTrain folder on the robot.
  • Train the neural networks:
    • Set up SITE_GIST_train.txt.
    • Set up SITE_gistList.txt.
    • Run bin/train-FFN SITE_GIST_train.txt.
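To make the data flow concrete, here is a rough C++ sketch of how the trained pieces would be used at run time: the raw gist vector is projected through the SITE.evec dimension-reduction matrix into the 80-dimensional space, and the reduced vector is fed to the feed-forward network, whose largest output gives the estimated segment. The sizes, the dummy ffnForward() stand-in, and the matrix layout are illustrative assumptions, not the toolkit's actual API.

#include <algorithm>
#include <cstdio>
#include <vector>

typedef std::vector<double> Vec;
typedef std::vector<Vec>    Mat;

// Project a raw gist vector through the dimension-reduction matrix
// (the saved SITE.evec: one row per reduced dimension, e.g. 80 rows).
Vec reduceGist(const Vec& raw, const Mat& evec)
{
  Vec out(evec.size(), 0.0);
  for (size_t r = 0; r < evec.size(); ++r)
    for (size_t c = 0; c < raw.size(); ++c)
      out[r] += evec[r][c] * raw[c];
  return out;
}

// Stand-in for the trained feed-forward network: one output per segment.
// In the real system this would be the network trained by bin/train-FFN.
Vec ffnForward(const Vec& reducedGist)
{
  return Vec(9, 0.0);  // dummy outputs; a real network would propagate weights
}

int main()
{
  // Dummy data: the real raw gist size depends on the feature extraction.
  const size_t rawDim = 544, reducedDim = 80;
  Vec raw(rawDim, 0.1);
  Mat evec(reducedDim, Vec(rawDim, 0.01));

  Vec reduced = reduceGist(raw, evec);
  Vec out     = ffnForward(reduced);

  // The segment estimate is the index of the largest network output.
  const int segment =
      static_cast<int>(std::max_element(out.begin(), out.end()) - out.begin());
  std::printf("estimated segment: %d\n", segment);
  return 0;
}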

In the end we have the following files in the environment SITE folder (for example, ACB, AnFpark, and FDFpark for the PAMI 2007 paper):

  • SITE.env environment file
  • SITE_GIST_train.txt gist training sessions files
  • SITE_sessions.txt landmark training sessions files
  • SITE.tmap topological map file
  • sub-folder SITE/gist: *.gist files
  • sub-folder SITE/gistTrain: all the gist (segment estimation) training files
  • sub-folder SITE/frames: all the input *.png files. This could be put
  • sub-folder SITE/LandmarkDB: *.lmk landmark files and *.png salient region image files (for display purposes)

Back to Software System

