In our course “Blended Interaction”, Master’s students Michael Zöllner and Stephan Huber have been working on a very different approach to using the Microsoft Kinect. Since we liked their project so much and their helmet-mounted Kinect is such an eye-catcher (check out the video!), we asked them to write about it for our blog. Here is what they wrote:
NAVI (Navigational Aids for the Visually Impaired) is a student project aiming at improving indoor navigation for visually impaired people by combining the Microsoft Kinect camera, a vibrotactile waist belt, and markers from the AR-Toolkit. While the white cane is a good navigation tool, it has certain drawbacks, such as its limited range and the fact that, in typical use, it only detects obstacles close to the ground. We wanted to augment the visually impaired person’s impression of a room or building by providing vibrotactile feedback that reproduces the room’s layout.
For this, our software maps depth information from the Kinect onto three pairs of Arduino LilyPad vibration motors located at the left, center, and right of the waist. These pairs of vibration motors are hot-glued into a fabric waist belt and connected to an Arduino 2009 board. To amplify the vibration, each motor was placed inside the cap of a plastic bottle. The Arduino in the waist belt is connected via USB to a laptop mounted on a custom backpack construction with holes for cables and ventilation.
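The post does not spell out how a depth frame becomes three vibration intensities. A minimal sketch of one plausible mapping (hypothetical function and parameter names, written in Python rather than the project’s C#): split the image into left/center/right thirds, take the nearest obstacle in each, and let closer obstacles vibrate more strongly.

```python
# Sketch only: map a Kinect depth frame to intensities for three
# vibration-motor pairs (left, center, right). Names and thresholds
# are illustrative; the actual NAVI software is written in C#/.NET.

def depth_to_vibration(depth_frame, max_range_mm=4000):
    """depth_frame: 2D list of depth values in millimetres (0 = no reading).
    Returns (left, center, right) intensities in 0..255, e.g. for PWM."""
    height = len(depth_frame)
    width = len(depth_frame[0])
    thirds = [(0, width // 3), (width // 3, 2 * width // 3),
              (2 * width // 3, width)]
    intensities = []
    for x0, x1 in thirds:
        # Nearest valid reading in this vertical slice of the image.
        readings = [depth_frame[y][x]
                    for y in range(height) for x in range(x0, x1)
                    if depth_frame[y][x] > 0]
        nearest = min(readings) if readings else max_range_mm
        # Closer obstacle -> stronger vibration, clamped to 0..255.
        strength = max(0.0, 1.0 - nearest / max_range_mm)
        intensities.append(int(round(strength * 255)))
    return tuple(intensities)
```

The resulting triple would then be sent over the serial connection to the Arduino, which drives the motor pairs.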
For point-to-point navigation, a guide dog is usually used. Such a dog, however, must be trained for specific routes, is expensive, and tires quickly. Some research projects use GPS to provide point-to-point navigation, but GPS does not work indoors.
We also wanted to utilize the RGB camera of the Kinect, so we put several AR-Toolkit markers on the walls and doors of our building, thereby modeling a route from one room to another. The markers are tracked continuously along the way, and our software provides synthesized auditory navigation instructions. These instructions vary with the person’s distance to the marker (which we get from the Kinect’s depth camera). For example, if you walk towards a door, the output will be “Door ahead in 3”, “2”, “1”, “pull the door”, where each part of the information depends on the distance to the marker on the door.
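The countdown behaviour described above can be sketched as a simple distance-to-instruction mapping. The thresholds and names below are hypothetical (the post does not give exact values), and the sketch is in Python rather than the project’s C#, where the chosen string would be handed to the speech synthesizer.

```python
# Sketch only: turn the distance to a tracked marker into a spoken
# instruction, in the style "Door ahead in 3", "2", "1", "pull the door".
# Thresholds are illustrative assumptions, not the project's actual values.

def navigation_instruction(marker_label, distance_m, action="pull the door"):
    if distance_m > 3.5:
        return None                      # marker too far away to announce
    if distance_m > 3.0:
        return f"{marker_label} ahead in 3"
    if distance_m > 2.0:
        return "2"
    if distance_m > 1.0:
        return "1"
    return action                        # close enough to act
```

In practice one would only speak when the returned instruction changes between frames, so the countdown is announced once per step rather than repeated continuously.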
The software is written in C#/.NET. We used the ManagedOpenNI wrapper (https://github.com/kobush/ManagedOpenNI) for the Kinect and the managed wrapper of ARToolkitPlus (http://code.google.com/p/comp134artd) for marker tracking. Voice synthesis is done with Microsoft’s Speech API (http://msdn.microsoft.com/en-us/speech/default). All input streams are glued together using Reactive Extensions for .NET (http://msdn.microsoft.com/en-us/devlabs/ee794896).