/**
   \file  Robots/LoBot/control/LoRenderResults.H
   \brief An offline behaviour for rendering the results of the
   trajectory experiments.

   The Robolocust project aims to use locusts for robot navigation.
   Specifically, the locust sports a visual interneuron known as the
   Lobula Giant Movement Detector (LGMD) that spikes preferentially in
   response to objects moving toward the animal on collisional
   trajectories. Robolocust's goal is to use an array of locusts, each
   looking in a different direction. As the robot moves, we expect to
   receive greater spiking activity from the locusts looking in the
   direction in which obstacles are approaching (or being approached) and
   use this information to veer the robot away.

   Before we mount actual locusts on a robot, we would like to first
   simulate this LGMD-based navigation. Toward that end, we use a laser
   range finder (LRF) mounted on an iRobot Create driven by a quad-core
   mini-ITX computer. A computational model of the LGMD developed by
   Gabbiani, et al. takes the LRF distance readings and the Create's
   odometry as input and produces artificial LGMD spikes based on the
   time-to-impact of approaching objects. We simulate multiple virtual
   locusts by using different angular portions of the LRF's field of
   view. To simulate reality a little better, we inject Gaussian noise
   into the artificial spikes.
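   As an illustration of the idea of virtual locusts and noise
   injection, here is a minimal sketch. It is not the actual Robolocust
   code; the function name, the sector-to-spike-rate conversion and all
   constants below are made-up assumptions used purely for illustration
   (the real spike rates come from the time-to-impact based LGMD model,
   not from raw distances).

   \code
   #include <algorithm>
   #include <random>
   #include <vector>

   // Hypothetical sketch: divide the LRF's field of view into angular
   // sectors, one per virtual locust, derive a crude "spike rate" for
   // each sector from the nearest obstacle in that sector, and corrupt
   // it with zero-mean Gaussian noise.
   std::vector<float>
   virtual_locust_spikes(const std::vector<float>& lrf_distances,
                         int num_locusts, float noise_sigma)
   {
      static std::mt19937 rng(std::random_device{}()) ;
      std::normal_distribution<float> noise(0.0f, noise_sigma) ;

      const int per_locust =
         static_cast<int>(lrf_distances.size())/num_locusts ;

      std::vector<float> spikes(num_locusts) ;
      for (int i = 0; i < num_locusts; ++i)
      {
         // nearest obstacle within this locust's angular sector
         float nearest = *std::min_element(
            lrf_distances.begin() +  i      * per_locust,
            lrf_distances.begin() + (i + 1) * per_locust) ;

         // closer obstacles ==> higher artificial spike rate (a crude
         // stand-in for the time-to-impact based LGMD model)
         float rate = 1000.0f/std::max(nearest, 1.0f) ;
         spikes[i]  = std::max(0.0f, rate + noise(rng)) ;
      }
      return spikes ;
   }
   \endcode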

   We have devised three different LGMD-based obstacle avoidance
   algorithms:

      1. EMD: pairs of adjacent LGMDs are fed into Reichardt motion
              detectors to determine the dominant direction of spiking
              activity and steer the robot away from that direction;

      2. VFF: a spike rate threshold is used to "convert" each virtual
              locust's spike rate into an attractive or repulsive virtual
              force; all the force vectors are combined to produce the
              final steering vector (see the sketch after this list);

      3. TTI: each locust's spike rate is fed into a Bayesian state
              estimator that computes the time-to-impact given a spike
              rate; these TTI estimates are then used to determine
              distances to approaching objects, thereby allowing the
              LGMD array to be used as a kind of range sensor; the
              distances are compared against a threshold to produce
              attractive and repulsive forces, with the sum of the force
              field vectors determining the final steering direction.
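   To make the VFF combination step concrete, here is a minimal sketch
   (again, not the actual Robolocust implementation; the function name,
   parameters and the fixed unit force magnitude are assumptions made
   for illustration):

   \code
   #include <cmath>
   #include <vector>

   // Hypothetical sketch: each virtual locust looks along a known
   // direction. A spike rate above the threshold contributes a unit
   // repulsive force pointing away from that direction; a rate below it
   // contributes a unit attractive force pointing along it. The vector
   // sum of all these forces gives the final steering direction.
   float vff_steering_direction(const std::vector<float>& spike_rates,
                                const std::vector<float>& locust_dirs, // radians
                                float spike_threshold)
   {
      float fx = 0, fy = 0 ;
      for (unsigned int i = 0; i < spike_rates.size(); ++i)
      {
         float sign = (spike_rates[i] > spike_threshold) ? -1.0f : +1.0f ;
         fx += sign * std::cos(locust_dirs[i]) ;
         fy += sign * std::sin(locust_dirs[i]) ;
      }
      return std::atan2(fy, fx) ; // direction of the net virtual force
   }
   \endcode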

   We also implemented a very simple algorithm that just steers the robot
   towards the direction of least spiking activity. Although it
   functioned reasonably well as an obstacle avoidance technique, it was
   found to be quite unsuitable for navigation tasks. Therefore, we did
   not pursue formal tests for this algorithm, focusing instead on the
   three algorithms mentioned above.

   To evaluate the relative merits of the above algorithms, we designed a
   slalom course in an approximately 12'x6' enclosure. One end of this
   obstacle course was designated the start and the other end the goal.
   The robot's task was to drive autonomously from start to goal,
   keeping track of itself using Monte Carlo Localization. As it drove,
   it would collect trajectory and other pertinent information in a
   metrics log.

   For each algorithm, we used four noise profiles: no noise, and
   Gaussian noise injected into the LGMD spikes at 25Hz, 50Hz, and
   100Hz. For each noise profile, we conducted 25 individual runs. We
   refer to an individual run from start to goal as an "experiment" and
   a set of 25 experiments as a "dataset."

   The lobot program, properly configured, was used to produce metrics
   log files for the individual experiments. The lomet program was used
   to parse these metrics logs (metlogs) and produce information
   regarding the robot's average-case behaviour associated with a
   dataset. These files are referred to as results files.

   The lobot::RenderResults class defined here implements an offline
   behaviour to read metlog and results files and visualize them on the
   map of the slalom enclosure. This behaviour is not meant to control
   the robot; rather, it is supposed to visualize the robot's trajectory
   from start to finish and plot the points where its emergency stop and
   extrication behaviours were active and those points where the robot
   bumped into things. As mentioned above, the point lists for the
   trajectory, the emergency stop and extrication behaviours, and the
   bump locations come from either metlog files produced by lobot or
   results files output by lomet.

   The ultimate objective behind this visualization is to be able to
   collect screenshots that can then be presented as data in papers,
   theses, etc. We could have written Perl/Python scripts to process the
   metlog and results files and generate pstricks code for inclusion in
   LaTeX documents. However, since the lobot program already implements
   map visualization and screen capturing, it is easier to bootstrap off
   of that functionality to produce JPG/PNG images that can then be
   included by LaTeX.
*/

// //////////////////////////////////////////////////////////////////// //
// The iLab Neuromorphic Vision C++ Toolkit - Copyright (C) 2000-2005   //
// by the University of Southern California (USC) and the iLab at USC.  //
// See http://iLab.usc.edu for information about this project.          //
// //////////////////////////////////////////////////////////////////// //
// Major portions of the iLab Neuromorphic Vision Toolkit are protected //
// under the U.S. patent ``Computation of Intrinsic Perceptual Saliency //
// in Visual Environments, and Applications'' by Christof Koch and      //
// Laurent Itti, California Institute of Technology, 2001 (patent       //
// pending; application number 09/912,225 filed July 23, 2001; see      //
// http://pair.uspto.gov/cgi-bin/final/home.pl for current status).     //
// //////////////////////////////////////////////////////////////////// //
// This file is part of the iLab Neuromorphic Vision C++ Toolkit.       //
//                                                                      //
// The iLab Neuromorphic Vision C++ Toolkit is free software; you can   //
// redistribute it and/or modify it under the terms of the GNU General  //
// Public License as published by the Free Software Foundation; either  //
// version 2 of the License, or (at your option) any later version.     //
//                                                                      //
// The iLab Neuromorphic Vision C++ Toolkit is distributed in the hope  //
// that it will be useful, but WITHOUT ANY WARRANTY; without even the   //
// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR      //
// PURPOSE.  See the GNU General Public License for more details.       //
//                                                                      //
// You should have received a copy of the GNU General Public License    //
// along with the iLab Neuromorphic Vision C++ Toolkit; if not, write   //
// to the Free Software Foundation, Inc., 59 Temple Place, Suite 330,   //
// Boston, MA 02111-1307 USA.                                           //
// //////////////////////////////////////////////////////////////////// //
//
// Primary maintainer for this file: mviswana usc edu
// $HeadURL: svn://isvn.usc.edu/software/invt/trunk/saliency/src/Robots/LoBot/control/LoRenderResults.H $
// $Id: LoRenderResults.H 13982 2010-09-19 09:45:34Z mviswana $
//

#ifndef LOBOT_RENDER_RESULTS_BEHAVIOUR_DOT_H
#define LOBOT_RENDER_RESULTS_BEHAVIOUR_DOT_H

//------------------------------ HEADERS --------------------------------

// lobot headers
#include "Robots/LoBot/control/LoBehavior.H"
#include "Robots/LoBot/metlog/LoPointList.H"
#include "Robots/LoBot/misc/factory.hh"

// Standard C++ headers
#include <istream>
#include <vector>
#include <string>

//----------------------------- NAMESPACE -------------------------------

namespace lobot {

//------------------------- CLASS DEFINITION ----------------------------

// Forward declaration

/**
   \class lobot::RenderResults
   \brief An offline behaviour for rendering the robot's trajectory from
   start to finish and the locations where its emergency stop and
   extrication behaviours were active.

   This class implements an offline behaviour that reads the obstacle map
   and goal location used to test different LGMD-based avoidance
   algorithms in a local navigation task; it also reads, from either a
   metlog or results file, the list of points comprising the robot's
   trajectory, the locations where its emergency stop and extrication
   behaviours were activated, and the locations where it bumped into
   things. It then renders all of these things in the Robolocust UI.

   This behaviour is meant to be used to collect screenshots for
   inclusion in papers, etc., illustrating the performance of the
   LGMD-based obstacle avoidance algorithms implemented by Robolocust. It
   does not, and is not meant to, control the robot; rather, it
   implements an offline rendering task to be used in conjunction with
   certain data processing and analysis efforts.
*/
class RenderResults : public Behavior {
   // Prevent copy and assignment
   RenderResults(const RenderResults&) ;
   RenderResults& operator=(const RenderResults&) ;

   // Handy type to have around in a derived class
   typedef Behavior base ;

   // Boilerplate code to make the generic factory design pattern work
   friend  class subfactory<RenderResults, base> ;
   typedef register_factory<RenderResults, base> my_factory ;
   static  my_factory register_me ;

   /// We assume that the lomet program was used to store results for
   /// different datasets in subdirectories of a "root" data directory.
   /// When this behaviour starts up, it will create a list of all these
   /// results files and allow the user to walk through the list with the
   /// '+' or '-' keys, starting, of course, at the first name in the
   /// list. As each name in the list is visited, the behaviour will load
   /// and visualize the corresponding results file.
   ///
   /// This data member holds the list of results file names as described
   /// above.
   //@{
   typedef std::vector<std::string> ResultsList ;
   ResultsList m_results_files ;
   //@}

   /// This iterator points to the current entry in the results files
   /// list, i.e., the file currently being visualized.
   ResultsList::const_iterator m_results ;

   /// A helper struct for holding a goal's bounds (in map coordinates).
   struct Goal {
      float left, right, bottom, top ;

      /// Helper to read a goal from an input stream.
      friend std::istream& operator>>(std::istream& is, Goal& g) {
         return is >> g.left >> g.right >> g.bottom >> g.top ;
      }
   } ;

   /// This data member holds the goal specified for the slalom obstacle
   /// course. The render_results behaviour reads this information from
   /// the goal behaviour's section of the config file.
   ///
   /// NOTE: Although the goal behaviour supports a list of goals, only
   /// one was actually used in the trajectory recording experiments.
   /// Accordingly, this behaviour only bothers with the first goal from
   /// the entire list (which should, anyway, contain only one entry).
   Goal m_goal ;

   /// Every time the render_results behaviour loads a results file, it
   /// will store the different point lists of interest in these data
   /// members.
   PointList m_traj, m_stop, m_extr, m_lgmd, m_bump ;

   /// This behaviour supports a slideshow mode, wherein it periodically
   /// switches to the next file in its list of metlogs and/or results
   /// files to be visualized. In this mode, the user may want to
   /// temporarily pause the slideshow when a results file is being
   /// visualized after a long string of metlogs in order to be able to
   /// study the average-case behaviour at greater length.
   ///
   /// This flag keeps track of the type of file currently being
   /// visualized. It is true when the file is a dataset results file and
   /// false when it is a metlog.
   bool m_results_file_being_visualized ;

   /// A private constructor because behaviours are instantiated with an
   /// object factory and not directly by clients.
   RenderResults() ;

   /// When the behaviour starts up, we will read the map file, the start
   /// and end points and the various point lists from the results file.
   void pre_run() ;

   /// These functions load the specified file that is to be visualized.
   //@{
   void load(const std::string& file_name) ;
   void load_metlog (const std::string& file_name) ;
   void load_results(const std::string& file_name) ;
   //@}

   /// This behaviour's action isn't much: it simply implements a
   /// slideshow feature that allows users to sit back and let the
   /// Robolocust application automatically display each metlog or
   /// results file one-by-one.
   void action() ;

   /// This function responds to the '+', '-' and 's' keys to load the
   /// next or previous results file and to take a screenshot of the
   /// current one.
   void keypress(unsigned char key) ;

   /// These functions walk through the results files list.
   //@{
   void prev() ;
   void next() ;
   //@}

   /// This function returns the name of the current results file being
   /// visualized.
   std::string curr() ;

   /// These methods render the start point, the goal, the robot's
   /// trajectory and locations where the emergency stop and extrication
   /// behaviours were active.
   //@{
   static void render_results(unsigned long client_data) ;
   void render_goal() const ;
   void render_traj() const ;
   //@}

   /// Clear all the point lists.
   void clear_point_lists() ;

   /// Clean-up.
   ~RenderResults() ;
} ;
00295 
00296 //-----------------------------------------------------------------------
00297 
00298 } // end of namespace encapsulating this file's definitions
00299 
00300 #endif
00301 
00302 /* So things look consistent in everyone's emacs... */
00303 /* Local Variables: */
00304 /* indent-tabs-mode: nil */
00305 /* End: */