{{youtube>nCqww9tXIJo?960x540|Beobot 2.0}}
  
The VisualAid project was launched to develop a fully integrated visual aid device that allows visually impaired users to go to a location and find an object of interest.
The system utilizes a number of state-of-the-art techniques such as SLAM (Simultaneous Localization and Mapping) and visual object detection.
The device is designed to complement existing tools for navigation, such as seeing eye dogs and white canes.
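
The page does not spell out the SLAM component, so the snippet below is only a rough illustration of the localization half of that problem: one motion update and one measurement update of a particle filter over the user's position along a single aisle. The one-dimensional state, landmark positions, and noise parameters are all invented for this example and are not the project's actual implementation.

<code python>
# Illustrative particle-filter localization step; all models and numbers
# here are assumptions for the sketch, not the project's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Hypotheses about the user's position, in meters along a 20 m aisle.
particles = rng.uniform(0.0, 20.0, size=500)

def motion_update(particles, step, noise=0.1):
    """Shift every hypothesis by the odometry step, plus Gaussian noise."""
    return particles + step + rng.normal(0.0, noise, size=particles.shape)

def measurement_update(particles, observed_dist, landmarks, sigma=0.5):
    """Reweight hypotheses by how well each one predicts the observed
    distance to the nearest known landmark, then resample."""
    predicted = np.abs(particles[:, None] - landmarks).min(axis=1)
    weights = np.exp(-0.5 * ((predicted - observed_dist) / sigma) ** 2)
    weights /= weights.sum()
    keep = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[keep]

landmarks = np.array([2.0, 7.5, 13.0])   # e.g., positions of recognized shelf signs
particles = motion_update(particles, step=0.8)        # user walked ~0.8 m
particles = measurement_update(particles, 1.2, landmarks)
print(f"estimated position: {particles.mean():.2f} m")
</code>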

The system incorporates a smartphone, a head-mounted camera, a computer, and vibro-tactile and bone-conduction headphones.

Specifically, the user chooses from a number of tasks ("go somewhere", "find the Cheerios") through the smartphone user interface.
For our project, we focus on the supermarket setting first.
The head-mounted camera captures the outside world and passes the images to the software. The vision algorithms then recognize the user's whereabouts and compute a path to the goal location.
Once the user arrives at the specified destination, the system pinpoints the item(s) of interest so that the user can grab them.
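
The walkthrough above maps naturally onto a simple sense-plan-act loop. As a purely illustrative sketch (not the project's actual software), the snippet below shows how such a loop could be wired together; every name in it (localize, plan_path, detect_object, the task strings) is a hypothetical stand-in.

<code python>
# Hypothetical skeleton of the task loop described above: the user picks a
# task, frames flow from the camera to vision, and vision output drives
# the audio guidance. Each stage is a dummy stand-in.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float          # meters, map frame (assumed representation)
    y: float
    heading: float    # radians

def localize(frame) -> Pose:
    """Stand-in for the SLAM/place-recognition step."""
    return Pose(3.0, 1.5, 0.0)  # dummy estimate

def plan_path(pose: Pose, goal: str) -> list[str]:
    """Stand-in for the path planner; returns turn-by-turn cues."""
    return ["walk forward 5 m", "turn left", "aisle 4 is on your right"]

def detect_object(frame, query: str):
    """Stand-in for the object detector; returns a direction cue or None."""
    return "upper shelf, slightly left"

def run_task(task: str, query: str, camera, speak):
    for frame in camera:                      # head-mounted camera stream
        pose = localize(frame)                # where is the user?
        if task == "go somewhere":
            for cue in plan_path(pose, query):
                speak(cue)                    # bone-conduction audio feedback
            break
        elif task == "find object":
            hit = detect_object(frame, query)
            if hit:
                speak(f"{query}: {hit}")      # pinpoint so the user can grab it
                break

# Toy usage: one fake frame, print instead of audio output.
run_task("find object", "Cheerios", camera=[object()], speak=print)
</code>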
  
The system comprises a few sub-systems: the [[visualaid/User_Interface |user interface]], [[visualaid/navigation |navigation]], and [[Beobot_2.0/object_recognition|object recognition and localization]] components.
|5. | [[http://ilab.usc.edu/ | Laurent Itti]] |  |
  
Internal notes can be found [[https://docs.google.com/document/d/19l5841BuzokBSV1E-LHNqk2u7V5MHGZ94Zi-DnLhGwo/edit | here]].
  
  
  
====System overview====

{{:tatrc_full_nav_rec_large.jpg?1200|}}
  
The following describes the different subsystems in the visual aid device:
  
====Links (Related Projects)====
  * [[http://grozi.calit2.net/ | Grozi]]: a handheld scene-text recognition system that recognizes supermarket aisles; it also includes object recognition to guide the user to the object.
  * [[http://openglass.us | OpenGlass]]: a Google Glass-based system that utilizes crowdsourcing to recognize objects or gain other information from pictures.
  * [[http://fluid.media.mit.edu/projects/eyering | EyeRing]]: a finger-worn visual assistant that provides voice feedback describing features such as object colors, currency values, and price tags.
  
