/**
   \file  Robots/LoBot/control/LoCalibrateLET.H
   \brief A do-nothing behaviour used to compute the probability values
   for lgmd_extricate_tti's sensor model.

   This file defines a class that computes probability values for the
   lgmd_extricate_tti behaviour's sensor model. The lgmd_extricate_tti
   behaviour works by applying recursive Bayesian updates to a
   probability distribution containing P(tti|lgmd) values. For the
   Bayesian update to work, the behaviour needs a table of P(lgmd|tti)
   values, the so-called sensor model.

   Now, given a time-to-impact, the Gabbiani LGMD model yields the
   corresponding spike rate. Thus, to figure out P(lgmd|tti) values, all
   we need to do is pick random times-to-impact and apply the Gabbiani
   model to get the corresponding LGMD spike rates. We then discretize
   both these quantities to get the correct bin in the sensor model's
   probability table and simply increment that bin.
*/

// //////////////////////////////////////////////////////////////////// //
// The iLab Neuromorphic Vision C++ Toolkit - Copyright (C) 2000-2005 //
// by the University of Southern California (USC) and the iLab at USC. //
// See http://iLab.usc.edu for information about this project. //
// //////////////////////////////////////////////////////////////////// //
// Major portions of the iLab Neuromorphic Vision Toolkit are protected //
// under the U.S. patent ``Computation of Intrinsic Perceptual Saliency //
// in Visual Environments, and Applications'' by Christof Koch and //
// Laurent Itti, California Institute of Technology, 2001 (patent //
// pending; application number 09/912,225 filed July 23, 2001; see //
// http://pair.uspto.gov/cgi-bin/final/home.pl for current status). //
// //////////////////////////////////////////////////////////////////// //
// This file is part of the iLab Neuromorphic Vision C++ Toolkit. //
// //
// The iLab Neuromorphic Vision C++ Toolkit is free software; you can //
// redistribute it and/or modify it under the terms of the GNU General //
// Public License as published by the Free Software Foundation; either //
// version 2 of the License, or (at your option) any later version. //
// //
// The iLab Neuromorphic Vision C++ Toolkit is distributed in the hope //
// that it will be useful, but WITHOUT ANY WARRANTY; without even the //
// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR //
// PURPOSE. See the GNU General Public License for more details. //
// //
// You should have received a copy of the GNU General Public License //
// along with the iLab Neuromorphic Vision C++ Toolkit; if not, write //
// to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, //
// Boston, MA 02111-1307 USA. //
// //////////////////////////////////////////////////////////////////// //
//
// Primary maintainer for this file: mviswana usc edu
// $HeadURL: svn://isvn.usc.edu/software/invt/trunk/saliency/src/Robots/LoBot/control/LoCalibrateLET.H $
// $Id: LoCalibrateLET.H 13732 2010-07-29 14:16:35Z mviswana $
//

#ifndef LOBOT_CALIBRATE_LET_DOT_H
#define LOBOT_CALIBRATE_LET_DOT_H

//------------------------------ HEADERS --------------------------------

// lobot headers
#include "Robots/LoBot/control/LoBehavior.H"
#include "Robots/LoBot/misc/factory.hh"
#include "Robots/LoBot/misc/singleton.hh"

// Standard C++ headers
#include <string>

//----------------------------- NAMESPACE -------------------------------

namespace lobot {

//------------------------- CLASS DEFINITION ----------------------------

/**
   \class lobot::CalibrateLET

   \brief A do-nothing behaviour for calibrating lgmd_extricate_tti.

   This class implements a behaviour that produces the table of
   probability values required by lgmd_extricate_tti's sensor model. This
   behaviour does not actually control lobot or even use the robot's
   sensors. It is meant to be a purely computational, off-line task.

   The idea here is to run the Gabbiani "forward" model for LGMD spike
   generation for all the time-to-impact discretizations. We then find
   the correct bin in the P(lgmd|tti) table and increment it. In the end,
   right before quitting, this table is normalized to yield a bunch of
   probabilities (viz., values between zero and one). A sketch of this
   procedure appears at the end of this comment.

   The lgmd_extricate_tti behaviour then uses these P(lgmd|tti) values to
   compute P(tti|lgmd) by applying recursive Bayesian updates to a
   probability distribution over a discretized TTI space.

   NOTE: The table of P(lgmd|tti) values, viz., the sensor model,
   contains causal information, i.e., it specifies the likelihood of a
   measurement given the state that causes it. The goal of Bayesian state
   estimation is to go in the opposite direction: given a particular
   measurement, what is the most likely state we are in?

   Usually, causal information is easier to obtain. One way is by direct
   experimentation and sampling: for example, driving the robot about in
   a known environment and examining the correlation between its states
   and the corresponding sensor readings.

   To go this route in lobot's case, we would simply drive the robot
   around, using the laser range finder and encoders to compute
   time-to-impact. The locust module can be used to generate LGMD spikes.
   Then, all we'd have to do is discretize the TTI and LGMD values and
   increment the correct bin.

   Unfortunately, in practice, the above approach yields very bad results
   due to two factors. Firstly, the computed TTI is almost always
   concentrated at the upper end of the TTI discretization's range
   because other behaviours slow the robot down, stop it and perform
   extrication. Secondly, the LGMD activity only becomes significant very
   near impact, peaking briefly and then decaying rapidly; for the most
   part, the LGMD remains silent. Thus, sampling it frequently under
   normal driving conditions results in a concentration at the lower end
   of the LGMD spectrum.

   The alternative to direct experimentation and sampling for obtaining
   causal information is to apply a computational model. In situations
   where the mathematics and physics of the underlying process are
   well-understood, this might actually be a better approach.

   In lobot's case, thanks to Gabbiani et al., we already have a neat
   model relating LGMD spikes to TTI. Thus, there really is no need to
   drive the robot around and collect statistics. Instead, we simply
   apply the model and obtain the necessary information.
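
   To make the above concrete, here is a rough, self-contained sketch of
   the kind of computation this behaviour performs. It is illustrative
   only: the bin counts, value ranges and the gabbiani_spike_rate()
   helper below are hypothetical stand-ins and do not reflect the actual
   implementation or its configuration settings.

   \code
   #include <vector>
   #include <cmath>
   #include <cstdlib>

   // Hypothetical Gabbiani-style "forward" model: given a time-to-impact
   // (seconds), return an LGMD-like spike rate (Hz). The published model
   // multiplies the stimulus's angular velocity by a negative exponential
   // of its angular size; this placeholder merely has a similar
   // rise-and-fall shape.
   static double gabbiani_spike_rate(double tti)
   {
      return 800 * std::exp(-tti) * (1 - std::exp(-tti)) ;
   }

   int main()
   {
      const int    T = 20, L = 16 ;           // TTI and LGMD bin counts
      const double max_tti = 10, max_lgmd = 200 ;

      // table[t][l] accumulates counts for P(lgmd = l | tti = t)
      std::vector<std::vector<double> > table(T, std::vector<double>(L, 0)) ;

      for (int i = 0; i < 100000; ++i)
      {
         double tti  = max_tti * std::rand()/(RAND_MAX + 1.0) ; // random TTI
         double lgmd = gabbiani_spike_rate(tti) ;               // forward model

         int t = static_cast<int>(tti/max_tti   * T) ; if (t >= T) t = T - 1 ;
         int l = static_cast<int>(lgmd/max_lgmd * L) ; if (l >= L) l = L - 1 ;
         ++table[t][l] ;   // increment the bin "pointed" to by [tti, lgmd]
      }

      // Normalize each TTI row so that its bins sum to one, i.e., so that
      // table[t] becomes the probability distribution P(lgmd|tti = t).
      for (int t = 0; t < T; ++t)
      {
         double sum = 0 ;
         for (int l = 0; l < L; ++l)
            sum += table[t][l] ;
         if (sum > 0)
            for (int l = 0; l < L; ++l)
               table[t][l] /= sum ;
      }
   }
   \endcode

   The actual behaviour additionally spreads each increment over the
   bins neighbouring the target bin using a Gaussian weight (see the
   Params inner class below) so that no bin ends up with a zero or
   near-zero probability.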
*/
class CalibrateLET : public Behavior {
   // Prevent copy and assignment
   CalibrateLET(const CalibrateLET&) ;
   CalibrateLET& operator=(const CalibrateLET&) ;

   // Handy type to have around in a derived class
   typedef Behavior base ;

   // Boilerplate code to make the generic factory design pattern work
   friend class subfactory<CalibrateLET, base> ;
   typedef register_factory<CalibrateLET, base> my_factory ;
   static my_factory register_me ;

   /// A private constructor because behaviours are instantiated with an
   /// object factory and not directly by clients.
   CalibrateLET() ;

   /// This method initializes the sensor model.
   void pre_run() ;

   /// This is an interactive behaviour designed to respond to keypress
   /// events. Thus, the action method remains empty.
   void action() ;

   /// When the user presses the 'k', '+', 'u' or 'i' keys, the
   /// calibrator will increment the standard deviation of the Gaussian
   /// used to weight the sensor model updates. The 'j', '-' or 'd' keys
   /// will decrement it.
   ///
   /// These keys allow users to experiment with different settings to
   /// achieve the desired sensor model.
   void keypress(unsigned char key) ;

   /// Visualization to help with development and debugging.
   void render_me() ;

   /// OpenGL initialization and clean-up.
   //@{
   void gl_init() ;
   void gl_cleanup() ;
   //@}

   /// Clean-up.
   ~CalibrateLET() ;

   /// This inner class encapsulates various parameters that can be used
   /// to tweak different aspects of the calibrate_lgmd_extricate_tti
   /// behaviour.
   class Params : public singleton<Params> {
      /// This behaviour is an interactive one that responds to keypress
      /// events and updates the sensor models containing the causal
      /// probabilities for the Bayesian time-to-impact estimation.
      /// Robolocust uses two sensor models: one for the LOOMING phase of
      /// the LGMD signal and the other for the BLANKING phase.
      ///
      /// To prevent pathological situations, we refrain from putting
      /// zeros (or extremely low) probabilities in the sensor models by
      /// applying a Gaussian weighting formula to update bins
      /// neighbouring the one "pointed" to by various [TTI, LGMD] pairs.
      /// The parameter used to effect this weighting is a standard
      /// deviation for the Gaussian.
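      ///
      /// As a rough illustration (with hypothetical names, not the
      /// actual implementation), the weighting works along these lines:
      /// instead of incrementing just the target bin, every bin in the
      /// target's neighbourhood receives a share that falls off with its
      /// distance from the target, so that no bin is left at zero.
      ///
      /// \code
      /// #include <cmath>
      /// #include <vector>
      ///
      /// // Add a Gaussian-weighted increment centred on bin "target" to
      /// // one row of the sensor model; sigma (assumed > 0) is the
      /// // spread adjusted by the keys described below.
      /// void weighted_update(std::vector<double>& row, int target, double sigma)
      /// {
      ///    for (int i = 0; i < static_cast<int>(row.size()); ++i)
      ///    {
      ///       double d = i - target ;
      ///       row[i] += std::exp(-d*d/(2*sigma*sigma)) ;
      ///    }
      /// }
      /// \endcode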
      ///
      /// When this behaviour starts up, it will set the "input focus" to
      /// the LOOMING sensor model. Thus, when the user presses the 'k',
      /// '+', 'u' or 'i' keys, the calibrator will increment the
      /// standard deviation of the Gaussian used to weight the LOOMING
      /// sensor model updates (see above). The 'j', '-' or 'd' keys will
      /// decrement it.
      ///
      /// When the user presses the TAB key, the behaviour will set the
      /// "input focus" to the BLANKING sensor model. The "iku+" and
      /// "dj-" keys will then affect the BLANKING sensor model's
      /// Gaussian. Pressing TAB again will switch back to the LOOMING
      /// sensor model; and so on.
      ///
      /// This setting specifies the amount by which the behaviour should
      /// increment/decrement the sigma value associated with the
      /// above-mentioned Gaussian weighting formula.
      float m_dsigma ;

      /// Private constructor because this is a singleton.
      Params() ;

      // Boilerplate code to make generic singleton design pattern work
      friend class singleton<Params> ;

   public:
      /// Accessing the various parameters.
      //@{
      static float dsigma() {return instance().m_dsigma ;}
      //@}

      /// Clean-up.
      ~Params() ;
   } ;
} ;

//-----------------------------------------------------------------------

} // end of namespace encapsulating this file's definitions

#endif

/* So things look consistent in everyone's emacs... */
/* Local Variables: */
/* indent-tabs-mode: nil */
/* End: */