
The VisualAid project aims to allow visually impaired users to augment the use of existing navigation tools, such as seeing eye dogs and white canes, and to improve their ability to find items of importance.

The system incorporates a smartphone, a head-mounted camera, a computer, and vibro-tactile and bone-conduction headphones.

Specifically, the user chooses a task (“go somewhere”, “find the Cheerios”) through the smartphone user interface. The head-mounted camera captures the outside world and passes the visual input to the computer. Vision algorithms then parse that input and determine important features or paths in the real world. Finally, feedback algorithms take the output of the vision algorithms and communicate to the user how to complete the task they specified.
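As a rough illustration, the loop below shows how these stages could fit together. The smartphone, camera, vision, and feedback objects and their methods (current_task, grab_frame, process, render) are hypothetical placeholders for this sketch, not the project's actual code.

# A minimal sketch of the VisualAid processing loop described above.
# The smartphone, camera, vision, and feedback interfaces are
# hypothetical placeholders, not the project's actual classes.

import time

def run_visual_aid(smartphone, camera, vision, feedback):
    """Poll for a task from the phone UI, run vision on camera frames,
    and render the result as feedback to the user."""
    while True:
        task = smartphone.current_task()   # e.g. "go somewhere", "find the Cheerios"
        if task is None:
            time.sleep(0.1)                # no request yet; keep polling
            continue
        frame = camera.grab_frame()        # image from the head-mounted camera
        # Vision algorithms determine the important information for this task:
        # a walkable path for navigation, or the location of the requested object.
        result = vision.process(frame, task)
        # Feedback algorithms translate that result into vibro-tactile cues
        # and audio over the bone-conduction headphones.
        feedback.render(result, task)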

The system is composed of a few subsystems: the user interface, navigation, and object recognition and localization components.

People

Name - Task
1. Christian Siagian - Overall system architecture
2. Nii Mante - Navigation
3. Kavi Thakkor - Object Recognition
4. Jack Eagan - User Interface
5. James Weiland
6. Laurent Itti

User's Manual

The user's manual can be found here.

System overview

The following describes the different subsystems in the visual aid device:

User Interface

The user interface is used to obtain requests from the user.

Navigation

The navigation system is used to guide the user to their destination.
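As a hedged sketch of how such guidance might be produced, the function below converts the difference between a detected path heading and the user's current heading into a simple turn cue. The sign convention (positive error means turn left) and the 10 degree dead zone are assumptions for illustration, not values taken from the project.

# Illustrative sketch of turning a navigation heading error into a user cue.
# The dead zone and sign convention are assumptions, not the project's values.

def heading_cue(path_heading_deg, user_heading_deg, dead_zone_deg=10.0):
    """Return 'left', 'right', or 'straight' depending on how far the
    user's heading deviates from the heading of the detected path."""
    error = path_heading_deg - user_heading_deg
    # wrap the error into [-180, 180) so turns always take the short way around
    error = (error + 180.0) % 360.0 - 180.0
    if abs(error) <= dead_zone_deg:
        return "straight"
    return "left" if error > 0 else "right"

print(heading_cue(30.0, 5.0))   # -> "left"
print(heading_cue(0.0, 3.0))    # -> "straight"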

Object Recognition And Localization

The object recognition and localization component is used to detect the requested object in the user's field of view, and to guide the user to reach and grab the object.
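For illustration only, the sketch below shows one simple way a localization result could be turned into a cue for guiding the user's hand: compare the detected object's position with the center of the camera frame. The bounding-box format and thresholds are assumptions, not the project's actual method.

# Hypothetical sketch: convert a detected object's bounding box into a
# reach-and-grab cue. Image coordinates: x grows right, y grows down.

def grasp_cue(box, frame_width, frame_height, center_tol=0.1):
    """box = (x, y, w, h) of the detected object in the camera frame.
    Return a simple direction cue for the feedback subsystem."""
    cx = box[0] + box[2] / 2.0
    cy = box[1] + box[3] / 2.0
    # offsets of the object from the image center, as a fraction of frame size
    dx = (cx - frame_width / 2.0) / frame_width
    dy = (cy - frame_height / 2.0) / frame_height
    if abs(dx) > center_tol:
        return "move right" if dx > 0 else "move left"
    if abs(dy) > center_tol:
        return "move down" if dy > 0 else "move up"
    return "reach forward"   # object is roughly centered in the view

print(grasp_cue((500, 200, 60, 60), 640, 480))  # -> "move right"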

