{{youtube>nCqww9tXIJo?960x540|Beobot 2.0}}
  
The VisualAid project was launched to develop a fully integrated visual aid device that allows visually impaired users to travel to a location and find an object of interest.
The system utilizes a number of state-of-the-art techniques, such as SLAM (Simultaneous Localization and Mapping) and visual object detection.
The device is designed to complement existing tools for navigation, such as seeing-eye dogs and white canes.
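As a hint of what the localization component involves, below is a minimal Monte Carlo localization sketch, a simplified stand-in for full SLAM. The corridor map, landmark positions, motion step, and sensor reading are all hypothetical placeholders, not values from the actual system.

<code python>
import math
import random

# Hypothetical 1-D corridor map: positions (in meters) of aisle markers
# that the camera could detect. Purely illustrative.
LANDMARKS = [2.0, 5.0, 9.0]

def motion_update(particles, step, noise=0.1):
    """Move each particle by the commanded step plus Gaussian noise."""
    return [p + step + random.gauss(0.0, noise) for p in particles]

def likelihood(particle, measured_dist, sigma=0.3):
    """Gaussian sensor model: how well a particle explains the
    measured distance to the nearest landmark."""
    expected = min(abs(particle - lm) for lm in LANDMARKS)
    return math.exp(-((measured_dist - expected) ** 2) / (2 * sigma ** 2))

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

# One localization cycle: predict from odometry, weight by the camera
# measurement, then resample.
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
particles = motion_update(particles, step=0.5)
weights = [likelihood(p, measured_dist=1.2) for p in particles]
particles = resample(particles, weights)
print("estimated position: %.2f m" % (sum(particles) / len(particles)))
</code>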
  
The system incorporates a smartphone, a head-mounted camera, a computer, and vibro-tactile and bone-conduction headphones.
  
Specifically, a user is able to choose from a number of tasks ("go somewhere", "find the Cheerios") through the smartphone user interface.
For our project, we start by focusing on the supermarket setting.
The head-mounted camera then sees the outside world and passes its view to the software. The vision algorithms recognize the user's whereabouts and create a path to a goal location.
Once the user arrives at the specified destination, the system pinpoints the item(s) of importance so that the user can grab them.
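To make the flow concrete, here is a minimal sketch of how such a task loop might be dispatched; every name in it (Task, handle, the feedback cues) is a hypothetical placeholder rather than the project's actual code.

<code python>
from dataclasses import dataclass

@dataclass
class Task:
    kind: str    # "navigate" or "find_object", as picked in the smartphone UI
    target: str  # a goal aisle number or an object label such as "Cheerios"

def handle(task, position, shelf):
    """Dispatch one user task and return the cue that would be sent to
    the vibro-tactile / bone-conduction feedback stage."""
    if task.kind == "navigate":
        goal = int(task.target)  # planner stand-in: steer toward the goal aisle
        if position < goal:
            return "vibrate: right"
        if position > goal:
            return "vibrate: left"
        return "say: you have arrived"
    if task.kind == "find_object":
        # Detector stand-in: look the label up in the current shelf view.
        if task.target in shelf:
            return "say: %s is at slot %d" % (task.target, shelf.index(task.target))
        return "say: %s not in view" % task.target
    raise ValueError("unknown task kind: %s" % task.kind)

# The two example tasks from the text.
print(handle(Task("navigate", "5"), position=3, shelf=[]))
print(handle(Task("find_object", "Cheerios"), position=5,
             shelf=["Oatmeal", "Cheerios", "Granola"]))
</code>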
  
  
|5. | [[http://ilab.usc.edu/ | Laurent Itti]]                       |                                      |
  
Internal notes can be found [[https://docs.google.com/document/d/19l5841BuzokBSV1E-LHNqk2u7V5MHGZ94Zi-DnLhGwo/edit | here]].
  
  
====System overview====
  
{{:tatrc_full_nav_rec_large.jpg?1200|}}
  
The following describes the different subsystems in the visual aid device:
  
====Links (Related Projects)====
  * [[http://grozi.calit2.net/ | Grozi]]: a handheld scene-text recognition system that recognizes supermarket aisle signs; it also performs object recognition to guide the user to the object.
  * [[http://openglass.us | OpenGlass]]: a Google Glass-based system that utilizes crowdsourcing to recognize objects or gain other information from pictures.
  * [[http://fluid.media.mit.edu/projects/eyering | EyeRing]]: a finger-worn visual assistant that provides voice feedback describing features such as object color, currency values, and price tags.
