beobot_2.0_system [2013/12/05 15:21] (current)
siagian
It consists of high level vision algorithms that try to solve problems in vision localization, navigation, object recognition, and Human-Robot Interaction (HRI).
  
All of the Beobot 2.0 code is freely available in our [[http://ilab.usc.edu/toolkit/|Vision Toolkit]], in particular in the **src/Robots/Beobot2.0/** folder.
In the toolkit we also provide other software, such as the microcontroller code that runs the robot.
  
  
Note that the first section (//Deadlines//) is an ongoing internal message board. The public section starts at the //Current Research// section.
  
===== Deadlines =====
  
Previously accomplished software deadlines can be found [[Beobot_2.0/Past_Software_Deadlines|here]].
  
  
^   ^ Tasks                                          ^ Date           ^
|1. | IEEE AR 2013 road recognition comparison paper | Dec 31, 2013   |
|2. | IROS 2014: BeoRoadFinder: vision & tilted LRF  | Feb 1, 2014    |
|3. | Implement Object search system                 | Mar 1, 2014    |
|4. | RSS 2014: Crowd navigation & understanding     | May 1, 2014    |
|5. | Implement Human-Robot Interaction system       | August 1, 2014 |
  
  
  
  
  
  
The specific tasks that we are focusing on are:
  
  * [[http://ilab.usc.edu/siagian/Research/RobotVisionLocalization/RobotVisionLocalization.html|Biologically inspired Vision Localization system]]
  * [[http://ilab.usc.edu/siagian/Research/RobotVisionNavigation/RobotVisionNavigation.html|vision navigation system]] using salient regions. [[Beobot_2.0/Software_System/GistSal_Localization_Navigation|work notes]].
  * [[http://ilab.usc.edu/siagian/Research/RobotVisionNavigation/VisualRoadRecognition.html|road or lane following/recognition and navigation system]]
  * recognizing people and other target objects
  * approaching and following people and other target objects
  * real time human pose recognition and tracking that leads to better mobile [[Beobot_2.0/Software_System/Human_Robot_Interaction|Human Robot Interaction]]
  
  
==== Hierarchical Representation of the Robot's Environment ====
  
At the center of our mobile robotic system is a hierarchical representation of the robot's environment.
We have a two level map: a global map for localization (how to recognize one's own location) and a local map for navigation (how to move about one's current environment, regardless of whether we know our exact location).
  
The **global map** (illustrated by the left image) is a graph-based augmented topological map, which is compact and scales to localizing in large environments.
On the other hand, for navigation, we utilize a traditional ego-centric occupancy grid map as the **local map** (on the right), which details the dangers in the robot's immediate surroundings. Here the robot is denoted by a circle with an arrow indicating its heading.
  
  
{{:globaltopologicalmap.jpg?400|}}{{:localnavigationgridmap.jpg?400|}}

It would be inefficient to use a grid map for global localization: it is too large to maintain for large scale environments, yet adds little information that is not already in a topological map.
We do not need to memorize every square foot of every hallway in the environment; we just need to know which one we are on.
By using a local map that is never committed to long-term storage (it is robot-centric and is updated as the robot moves), we obtain an overall mobile robot representation that is both compact (for scalability) and detailed (for accuracy).
  
  
===== Navigation =====
  
We use a road recognition system to navigate.
  
{{youtube>U5TFW-o7WJA?large}}
  
===== Software Tools, Operating Systems Issues =====
  
The software tools related discussions can be found [[Beobot_2.0/Software_Tools|here]].
They include firmware level issues such as low level computer communication.
  
  
  
Back to [[index|Beobot 2.0]]