Comparing Auditory and Complementary Feedback Techniques in Virtual and Real World Environments for Visually Impaired People


One of the main focuses for the Realities Lab is how the emerging fields of Virtual and Augmented Reality can be used to assist people with disabilities. This is the first post in a series on the research conducted in this area.

Virtual Reality (VR) technologies can create rich, interactive worlds for users. Many popular applications use immersive 3D graphics to convey information about a person’s surroundings and how to move through them. However, this approach is not sufficient for the estimated 285 million people with visual impairments, who rely on their other senses in day-to-day life. We believe that VR programs can also construct explorable virtual environments through digital versions of other sensory feedback, such as voice or vibration. The goal of our study is to answer the question: can computer-generated versions of real-world auditory and tactile feedback provide useful navigation information to individuals with visual impairments?

Experiment

For this study, we examined two main aspects of movement: deciding which direction to face and determining the distance to a destination. Individuals with visual impairments often rely on ambient noise, vibrations, and feedback from tools such as walking canes to perform these two tasks as they navigate from place to place. Our intent was to determine whether artificial versions of these stimuli could be used as effectively as their physical counterparts.

Our test trials took place at the Georgia Institute of Technology campus in Atlanta, GA. Sixteen participants with visual impairments were selected to take part in the study. After being fitted with an HTC Vive headset, each individual went through seven different trials, consisting of the following conditions:

  • Direction Detection with Virtual Audio (DDVA)
  • Direction Detection with Virtual Audio and Haptics (DDVAH)
  • Direction Detection with Real Audio (DDRA)
  • Direction Detection with Real Audio and Haptics (DDRAH)
  • Navigation with Virtual Audio (NVA)
  • Navigation with Virtual Audio and Speech Reinforcement (NVASR)
  • Navigation with Real Audio (NRA)

During the Direction Detection tests, a tonal sound source (either in the virtual environment or in the real world) was placed at a random position around the participant. They were then asked to turn in place until they felt they were facing the origin of the sound. Once they confirmed this, the angular difference between their heading and the direction of the sound source was recorded, and the sound was moved to a new location.
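
To give a rough sense of the error metric for these trials, here is a minimal Python sketch of how the angular difference between a participant’s heading and a randomly placed sound source might be computed. The function name, angle convention, and example values are our own assumptions for illustration, not code from the study itself.

    import random

    def angular_error(heading_deg: float, source_deg: float) -> float:
        """Signed difference between the participant's heading and the bearing
        of the sound source, wrapped to the range [-180, 180)."""
        return (source_deg - heading_deg + 180.0) % 360.0 - 180.0

    # Example: the sound is placed at a random bearing, and the participant
    # stops turning while facing 200 degrees.
    source = random.uniform(0.0, 360.0)
    print(angular_error(200.0, source))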

The Navigation tests expanded on this by requiring the user to actually walk to where they believed the sound was emanating from. Again, after the participant confirmed they believed they had reached the sound’s origin, the actual remaining distance was recorded and the audio source was moved to another location (either physically or virtually).
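
The recorded error for these trials is simply the straight-line distance between where the participant stopped and the sound source’s position. The sketch below, again using hypothetical names and values rather than the study’s actual code, shows one way to compute it.

    import math

    def remaining_distance(participant_xy, source_xy):
        """Straight-line distance between where the participant stopped and
        the position of the sound source (same units as the coordinates)."""
        dx = source_xy[0] - participant_xy[0]
        dy = source_xy[1] - participant_xy[1]
        return math.hypot(dx, dy)

    # Example: the participant stops 0.4 m short of a source 3 m ahead.
    print(remaining_distance((0.0, 2.6), (0.0, 3.0)))  # ~0.4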

Results

We compared the trial results using the Wilcoxon signed-rank test. In addition to the error metrics described above for the Direction Detection and Navigation trials, we also measured the time taken to complete each task. Since moving from place to place must be done in a timely manner as well as accurately, we felt completion time was a valuable metric to study.
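
For readers unfamiliar with this paired, non-parametric test, the following Python sketch shows how such a comparison can be run with scipy.stats.wilcoxon. The per-participant completion times shown are fabricated placeholders for illustration only; they are not the study’s data.

    from scipy.stats import wilcoxon

    # Placeholder per-participant completion times (seconds) for two paired
    # conditions; illustrative values only, not results from the study.
    real_audio    = [12.1, 10.4, 15.3, 9.8, 11.0, 13.7, 10.9, 12.5]
    virtual_audio = [14.0, 11.2, 16.1, 10.5, 12.3, 14.9, 11.4, 13.8]

    # Paired, non-parametric comparison of the two conditions.
    statistic, p_value = wilcoxon(real_audio, virtual_audio)
    print(statistic, p_value)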

Initially, participants performed their tasks more quickly when relying on real-world audio. However, as they repeated the trials, the difference in completion time between real and virtual audio shrank. During the final tests of a session, completion speed differed between virtual and real audio cues only when the virtual cues were not reinforced by haptic signals.

The addition of haptic feedback to auditory direction guidance improved participants’ speed and effectiveness. This finding was illustrated in the quantitative results and further confirmed by commentary from the participants themselves, who reported feeling more confident when they could rely on vibrations as well as audio for navigation. In contrast, participants took longer to complete the Navigation with Virtual Audio scenarios when speech reinforcement was added. We speculate that the added complexity of interpreting verbal commands, compared to simple vibrations, led to the increase in completion time.

Conclusion

We believe the results of our study show the viability of non-visual navigation guidance techniques in virtual spaces. While participants generally made faster movement decisions when guided by real audio, they were ultimately able to navigate comparably well using virtual sound cues. Participants’ ability to navigate was further improved by reinforcing auditory cues with haptic ones. We find these results encouraging for current and future portable VR applications for individuals with visual impairments, as useful navigation information could be generated by digital accessories rather than relying on in-place, physical sources.