Robolocust metrics log analyzer for Bayesian TTI predictions.
#include <iostream>
Functions:
    int main()
Robolocust metrics log analyzer for Bayesian TTI predictions.
This file defines the main function for a multithreaded analysis program that loads all the Robolocust metrics logs associated with the Bayesian time-to-impact prediction experiments and processes these logs to produce the desired results files for each set of related experiments.
Here's the background for this program: the Robolocust project aims to use locusts for robot navigation. Specifically, the locust sports a visual interneuron known as the Lobula Giant Movement Detector (LGMD) that spikes preferentially in response to objects moving toward the animal on collisional trajectories. Robolocust's goal is to use an array of locusts, each looking in a different direction. As the robot moves, we expect to receive greater spiking activity from the locusts looking in the direction in which obstacles are approaching (or being approached) and use this information to veer the robot away.
Before we mount actual locusts on a robot, we would like to first simulate this LGMD-based navigation. Toward that end, we use a laser range finder (LRF) mounted on an iRobot Create driven by a quad-core mini-ITX computer. A computational model of the LGMD developed by Gabbiani, et al. takes the LRF distance readings and the Create's odometry as input and produces artificial LGMD spikes based on the time-to-impact of approaching objects. We simulate multiple virtual locusts by using different angular portions of the LRF's field of view. To simulate reality a little better, we inject Gaussian noise into the artificial spikes.
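The "different angular portions of the LRF's field of view" idea can be sketched as follows. The beam count and the even split among locusts are illustrative assumptions, not the lobot controller's actual configuration:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: divide an LRF scan into equal angular slices, one
// per virtual locust. Reading counts and slice boundaries are assumed.
struct Slice { std::size_t begin, end; }; // half-open range of LRF indices

// Return the block of LRF reading indices "seen" by virtual locust k when
// n_readings beam readings are split evenly among n_locusts.
inline Slice locust_slice(std::size_t k, std::size_t n_locusts,
                          std::size_t n_readings)
{
   const std::size_t per = n_readings / n_locusts;
   const std::size_t b = k * per;
   const std::size_t e = (k + 1 == n_locusts) ? n_readings : b + per;
   return {b, e};
}

// Closest obstacle distance within one locust's slice of the scan; this
// is the quantity the LGMD model would react to for that locust.
inline float min_distance(const std::vector<float>& scan, Slice s)
{
   return *std::min_element(scan.begin() + s.begin, scan.begin() + s.end);
}
```

Each virtual locust's spike generation then only consults the readings inside its own slice.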
We have devised three different LGMD-based obstacle avoidance algorithms:
1. EMD: pairs of adjacent LGMDs are fed into Reichardt motion detectors to determine the dominant direction of spiking activity and steer the robot away from that direction;
2. VFF: a spike rate threshold is used to "convert" each virtual locust's spike into an attractive or repulsive virtual force; all the force vectors are combined to produce the final steering vector;
3. TTI: each locust's spike rate is fed into a Bayesian state estimator that computes the time-to-impact given a spike rate; these TTI estimates are then used to determine distances to approaching objects, thereby effecting the LGMD array's use as a kind of range sensor; the distances are compared against a threshold to produce attractive and repulsive forces, with the sum of the force field vectors determining the final steering direction.
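The force-field step shared by the VFF and TTI algorithms can be sketched as below. The threshold semantics, force magnitudes and per-locust viewing angles are illustrative assumptions, not lobot's actual formulas:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hedged sketch of the force-field combination: each virtual locust
// yields an estimated distance d_i along its viewing direction theta_i;
// distances below a threshold contribute a repulsive force pointing away
// from that direction, larger ones an attractive force toward it. The
// magnitudes used here are assumptions.
struct Vec2 { float x, y; };

inline Vec2 steering_vector(const std::vector<float>& distances,
                            const std::vector<float>& angles, // radians
                            float threshold)
{
   Vec2 sum{0, 0};
   for (std::size_t i = 0; i < distances.size(); ++i)
   {
      // Repulsive forces grow as the obstacle gets closer; attractive
      // forces have fixed unit magnitude toward open directions.
      const float m = (distances[i] < threshold)
                    ? -(threshold - distances[i]) / threshold
                    : 1.0f;
      sum.x += m * std::cos(angles[i]);
      sum.y += m * std::sin(angles[i]);
   }
   return sum; // final steering direction = atan2(sum.y, sum.x)
}
```

Summing the individual force vectors this way yields the final steering direction described in items 2 and 3 above.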
As mentioned above, the TTI algorithm uses a Bayesian state estimator that predicts the time-to-impact given an LGMD spike rate. The Bayesian time-to-impact prediction experiments are designed to evaluate this TTI prediction model. Here's how these experiments were conducted:
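A minimal grid-based sketch of such an estimator is shown below. The Gaussian likelihood around an assumed expected-spike-rate curve is a stand-in for whatever sensor model the actual controller uses:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal sketch of a grid-based Bayesian estimator over discretized
// time-to-impact values. The likelihood model is an assumption.
struct TTIEstimator {
   std::vector<float> tti;    // bin centers, seconds
   std::vector<float> belief; // P(TTI = tti[i]); sums to 1

   // Bayes correction step: multiply the prior belief by the likelihood
   // of the observed spike rate under each TTI hypothesis, normalize.
   void update(float observed_rate,
               float (*expected_rate)(float tti), float sigma)
   {
      float total = 0;
      for (std::size_t i = 0; i < belief.size(); ++i)
      {
         const float d = observed_rate - expected_rate(tti[i]);
         belief[i] *= std::exp(-d * d / (2 * sigma * sigma));
         total += belief[i];
      }
      for (float& b : belief) b /= total; // renormalize posterior
   }

   // Point prediction: expected TTI under the posterior; the peak bin's
   // probability can serve as a confidence level.
   float predict() const
   {
      float e = 0;
      for (std::size_t i = 0; i < belief.size(); ++i)
         e += tti[i] * belief[i];
      return e;
   }
};
```

With the TTI estimate in hand, multiplying by the robot's speed recovers an approximate distance to the approaching object, which is what lets the LGMD array act as a range sensor.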
The robot was driven straight ahead towards a wall starting at a point 2.5 meters away. A single virtual locust looking straight ahead was used to generate LGMD spikes. The robot was configured to stop just short of hitting the wall.
As it moved toward the wall, the robot's controller would record the current speed, LGMD spike rate, actual time-to-impact, predicted TTI, actual distance to wall, predicted distance and the prediction's confidence level to a log file.
We varied the robot's speed, the amount of noise in the artificially generated LGMD spikes and the delta value for the spike generation model. The delta value is a parameter that controls when the peak spike rate is achieved w.r.t. the point at which a collision takes place, i.e., when time-to-impact is zero. To illustrate, let us say we use a delta of 1.0 seconds; this means that the spike generation model will produce a peak when the time-to-impact is 1.0 seconds, i.e., when the approaching wall is 1 second away from the robot. Similarly, when delta is 0.5, the LGMD peak will be achieved when the robot is half a second away from colliding with the wall; at delta = 2, the peak will be at 2 seconds from collision; so on and so forth.
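The role of delta and of the injected noise can be illustrated with the toy spike-rate model below. The bell-curve shape, peak rate and width are assumptions for illustration only; they are not the actual form of the Gabbiani LGMD model:

```cpp
#include <cmath>
#include <random>

// Illustrative stand-in for the spike generation model: the rate peaks
// when time-to-impact equals delta and falls off on either side. The
// bell-curve shape, peak rate and width are assumed values.
inline float lgmd_rate(float tti, float delta,
                       float peak_rate = 400.0f, float width = 0.5f)
{
   const float d = tti - delta;
   return peak_rate * std::exp(-d * d / (2 * width * width));
}

// Inject zero-mean Gaussian noise of the given standard deviation,
// clamping so the rate never goes negative.
inline float noisy_rate(float rate, float noise_stdev, std::mt19937& rng)
{
   std::normal_distribution<float> noise(0.0f, noise_stdev);
   const float r = rate + noise(rng);
   return r < 0.0f ? 0.0f : r;
}
```

With delta = 1.0, this toy model's rate is maximal at tti = 1.0 and lower at tti = 0.5 or tti = 2.0, mirroring the behavior described above.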
The TTI prediction experiments were run with the following parameters (the values below are those appearing in the data directory hierarchy shown further down):

- LGMD spike noise levels: 000, 025, 050 and 100 (4 levels);
- robot speeds: 0.1, 0.2, 0.3 and 0.4 (4 speeds);
- delta values: 0.25 through 2.00 seconds in steps of 0.25 (8 values).
For each noise level, robot speed and delta value, we ran the robot 10 times. Thus, if we consider each set of 10 such individual runs to be one dataset, we have a total of 4 noise levels times 4 speeds times 8 delta values = 128 datasets.
If "bay" is the root data directory for the Bayesian TTI experiments, then each dataset's log files are stored in a subdirectory of "bay" in a hierarchy as depicted below:
    bay
     |
     +-- 000
     |    |
     |    +-- 0.1
     |    |    |
     |    |    +-- 0.25
     |    |    +-- 0.50
     |    |    +-- 0.75
     |    |    +-- 1.00
     |    |    +-- 1.25
     |    |    +-- 1.50
     |    |    +-- 1.75
     |    |    +-- 2.00
     |    |
     |    +-- 0.2
     |    |    :
     |    |    :
     |    |
     |    +-- 0.3
     |    |    :
     |    |
     |    +-- 0.4
     |         :
     |         :
     +-- 025
     |    :
     |    :
     +-- 050
     |    :
     |    :
     +-- 100
          :
          :
NOTE: The above directory hierarchy is not fixed by the lobot controller. It is simply how we configured the controller to work while running these experiments and collecting the data.
The objective of this program is to load an entire dataset from each of the above directories and then write out a results file whose format is shown below:
    TTI    LGMD Spike Rate    Predicted TTI    Confidence Level
    ---    ---------------    -------------    ----------------
           mean stdev         mean stdev       mean stdev
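The per-dataset statistics behind each "mean stdev" pair can be sketched as below; the pooling of a dataset's ten runs into one sample vector per logged quantity is an assumption about how the aggregation is organized:

```cpp
#include <cmath>
#include <vector>

// Sketch: given all samples of one logged quantity (e.g. predicted TTI)
// pooled from a dataset's ten runs, compute the mean and standard
// deviation reported in the corresponding results-file columns.
struct Stats { float mean, stdev; };

inline Stats summarize(const std::vector<float>& samples)
{
   float sum = 0;
   for (float x : samples) sum += x;
   const float mean = sum / samples.size();

   float ss = 0;
   for (float x : samples) ss += (x - mean) * (x - mean);
   // Sample standard deviation (n - 1 in the denominator).
   const float stdev = std::sqrt(ss / (samples.size() - 1));
   return {mean, stdev};
}
```

Running summarize() once per logged quantity yields one row of the results file.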
Since each dataset's results can be computed independently of every other dataset's, we use multiple threads to process several datasets in parallel. Given the Bayesian TTI prediction experiments' data root directory, this program finds all the subdirectories containing log files and then launches as many threads as there are CPUs available to walk through this directory list and perform the necessary log file parsing.
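The parallelization scheme just described can be sketched with standard C++ threads; process_dataset() below is a hypothetical stand-in for the actual log parsing and results-file writing:

```cpp
#include <atomic>
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

// Sketch: the dataset directories go into a shared list, and one worker
// thread per available CPU repeatedly claims the next unprocessed
// directory via an atomic counter until the list is exhausted.
template <typename Fn>
void process_all(const std::vector<std::string>& dirs, Fn process_dataset)
{
   std::atomic<std::size_t> next{0};
   auto worker = [&] {
      // fetch_add hands each directory index to exactly one thread.
      for (std::size_t i; (i = next.fetch_add(1)) < dirs.size(); )
         process_dataset(dirs[i]);
   };

   unsigned n = std::thread::hardware_concurrency();
   if (n == 0) n = 1; // hardware_concurrency() may return 0
   std::vector<std::thread> pool;
   for (unsigned t = 0; t < n; ++t)
      pool.emplace_back(worker);
   for (auto& th : pool)
      th.join();
}
```

Because datasets are independent, no locking beyond the atomic work-claiming counter is needed as long as each thread writes only its own dataset's results file.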
Definition in file LobayMain.C.