{{youtube>nCqww9tXIJo?960x540|VisualAid}}
  
The VisualAid project was launched to develop a fully integrated visual aid device that allows visually impaired users to go to a location and find an object of interest.
The system utilizes a number of state-of-the-art techniques such as SLAM (Simultaneous Localization and Mapping) and visual object detection.
The device is designed to complement existing tools for navigation, such as seeing-eye dogs and white canes.
  
The system incorporates a smartphone, a head-mounted camera, a computer, and vibro-tactile and bone-conduction headphones.
  
Specifically, the user is able to choose from a number of tasks ("go somewhere", "find the Cheerios") through the smartphone user interface.
For our project, we focus first on the supermarket setting.
The head-mounted camera then captures the outside world and passes the images to the software. The vision algorithms recognize the user's whereabouts and create a path to a goal location.
Once the user arrives at the specified destination, the system pinpoints the item(s) of interest so that the user can grab them.
  
The system is composed of a few sub-systems: the [[visualaid/User_Interface |user interface]], [[visualaid/navigation |navigation]], and [[Beobot_2.0/object_recognition|object recognition and localization]] components.
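To make the interaction between these components concrete, below is a minimal sketch of the overall control flow in Python. Every class and method name here (VisualAid, get_task, next_heading, etc.) is a hypothetical placeholder, not the project's actual API:

<code python>
# Hypothetical sketch of how the sub-systems interact; all names are placeholders.

class VisualAid:
    def __init__(self, ui, camera, navigator, object_finder, feedback):
        self.ui = ui                        # smartphone user interface
        self.camera = camera                # head-mounted camera
        self.navigator = navigator          # SLAM-based localization and path planning
        self.object_finder = object_finder  # visual object detection
        self.feedback = feedback            # vibro-tactile / bone-conduction cues

    def run(self):
        task = self.ui.get_task()  # e.g. kind="find", target="Cheerios"

        # Phase 1: guide the user to the goal location.
        while not self.navigator.at_goal():
            frame = self.camera.grab_frame()
            pose = self.navigator.localize(frame)    # where is the user?
            cue = self.navigator.next_heading(pose)  # which way to walk?
            self.feedback.emit(cue)

        # Phase 2: pinpoint the requested item so the user can grab it.
        while True:
            frame = self.camera.grab_frame()
            detection = self.object_finder.detect(frame, task.target)
            if detection is None:
                self.feedback.emit("scan")  # ask the user to pan the camera
                continue
            self.feedback.emit(detection.direction)  # e.g. "left", "center", "right"
            if detection.within_reach:
                self.feedback.emit("grab")
                break
</code>

The two phases mirror the description above: navigation to the goal location first, then object recognition and grab guidance.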
  
  
====Team====
  
^   ^ Name                                                          ^ Task                                 ^
|1. | [[http://ilab.usc.edu/siagian/ | Christian Siagian]]          | Overall system architecture          |
|2. | [[http://ilab.usc.edu/| Nii Mante]]                           | Navigation                           |
|3. | [[http://ilab.usc.edu/ | Kavi Thakkor]]                       | Object Recognition                   |
|4. | [[http://ilab.usc.edu/ | Jack Eagan]]                         | User Interface                       |
|5. | [[http://ilab.usc.edu/ | James Weiland]]                      |                                      |
|6. | [[http://ilab.usc.edu/ | Laurent Itti]]                       |                                      |
  
Internal notes can be found [[https://docs.google.com/document/d/19l5841BuzokBSV1E-LHNqk2u7V5MHGZ94Zi-DnLhGwo/edit | here]].
  
  
====User's Manual====
The user's manual can be found [[visualaid/Users_Manual |here]].
  
  
====System Overview====
  
{{:tatrc_full_nav_rec_large.jpg?1200|}}

The following describes the different subsystems of the visual aid device:
  
===User Interface===
The [[visualaid/User_Interface |user interface]] is used to obtain requests from the user.
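For illustration, a request coming out of the interface might be represented as a small structure like the following; TaskRequest and its fields are hypothetical, not the project's actual data format:

<code python>
# Hypothetical representation of a user request; field names are placeholders.
from dataclasses import dataclass

@dataclass
class TaskRequest:
    kind: str    # "go" (navigate somewhere) or "find" (locate an object)
    target: str  # e.g. "checkout counter" or "Cheerios"

# Examples matching the tasks described above:
go_request = TaskRequest(kind="go", target="cereal aisle")
find_request = TaskRequest(kind="find", target="Cheerios")
</code>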
  
===Navigation===
The [[visualaid/navigation |navigation]] system recognizes the user's whereabouts and creates a path to the goal location.
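As an illustration of how such a path can be turned into user feedback, here is a small sketch that converts the user's estimated pose and the next waypoint into a left/straight/right cue; the function and its 20-degree threshold are assumptions, not the actual navigation code:

<code python>
# Hypothetical heading-cue computation; not the actual navigation code.
import math

def heading_cue(pose_x, pose_y, pose_theta, wp_x, wp_y):
    """Map the angle to the next waypoint onto a simple left/straight/right cue,
    suitable for vibro-tactile or audio feedback."""
    # Angle from the user's position to the next waypoint, in world coordinates.
    bearing = math.atan2(wp_y - pose_y, wp_x - pose_x)
    # Error relative to the user's current facing direction, wrapped to [-pi, pi].
    error = math.atan2(math.sin(bearing - pose_theta), math.cos(bearing - pose_theta))
    if error > math.radians(20):
        return "left"
    if error < -math.radians(20):
        return "right"
    return "straight"

# Example: user at (0, 0) facing +x, next waypoint at (1, 1) -> "left".
print(heading_cue(0.0, 0.0, 0.0, 1.0, 1.0))
</code>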
  
  
===Object Recognition And Localization===
  
The [[Beobot_2.0/object_recognition|object recognition and localization]] component is used to detect the requested object in the user's field of view, and to guide the user to reach and grab the object.
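As a rough illustration, guidance toward a detected object can be derived from where its bounding box sits in the camera frame; the following sketch, including the grab_cue function and its tolerance, is a hypothetical simplification rather than the actual method:

<code python>
# Hypothetical grab-guidance cue from a detected bounding box; names are placeholders.

def grab_cue(box, image_width, image_height, center_tolerance=0.1):
    """box = (x, y, w, h) in pixels. Returns a spoken/tactile cue that steers
    the user's hand (or gaze) toward the detected object."""
    cx = box[0] + box[2] / 2.0  # bounding-box center, x
    cy = box[1] + box[3] / 2.0  # bounding-box center, y
    dx = (cx - image_width / 2.0) / image_width    # normalized horizontal offset
    dy = (cy - image_height / 2.0) / image_height  # normalized vertical offset

    if abs(dx) > center_tolerance:
        return "right" if dx > 0 else "left"
    if abs(dy) > center_tolerance:
        return "down" if dy > 0 else "up"
    return "grab"  # object is centered in the field of view

# Example: a box near the right edge of a 640x480 frame -> "right".
print(grab_cue((500, 200, 80, 80), 640, 480))
</code>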
  
====Links (Related Projects)====
  * [[http://grozi.calit2.net| Grozi]]: a handheld system that uses scene text recognition to identify supermarket aisles; it also provides object recognition to guide the user to the object.
  * [[http://openglass.us| OpenGlass]]: a Google Glass-based system that utilizes crowdsourcing to recognize objects or gain other information from pictures.
  * [[http://fluid.media.mit.edu/projects/eyering | EyeRing]]: a finger-worn visual assistant that provides voice feedback describing features such as the color of objects, currency values, and price tags.
  
