One of the main focuses of the Realities Lab is how the emerging fields of Virtual and Augmented Reality can be used to assist people with disabilities. This is the second post in a series on the research conducted in this area.
Abstract
The primary goal of this research is to investigate the possibilities of using audio feedback to support effective Human-Computer Interaction in Virtual Environments (VEs) without visual feedback for people with Visual Impairments (VI). Efforts have been made to apply virtual reality (VR) technology to training and educational applications for diverse population groups, such as children and stroke patients. These applications have been shown to increase motivation and to provide safer training environments and more training opportunities. However, they are all based on visual feedback. With head-related transfer functions (HRTFs), it is possible to design and develop considerably safer and more diverse training environments that might greatly benefit individuals with VI. To explore this, I ran three studies sequentially: 1) whether and how users could navigate with different types of 3D auditory feedback in the same VE; 2) whether users could effectively recognize the distance and direction of a virtual sound source in the VE; 3) whether participants with and without VI could recognize the positions and distinguish the movement directions of 3D sound sources in the VE.
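To make the HRTF-based cues concrete, here is a minimal sketch of the two binaural cues an HRTF encodes for localization: interaural time difference (ITD) and distance-based attenuation. This is an illustrative simplification, not the method used in the studies; it assumes Woodworth's spherical-head ITD approximation and simple inverse-distance gain, whereas a real system would convolve the source signal with measured HRTF filters.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air
HEAD_RADIUS = 0.0875    # m; an assumed average head radius

def spatial_cues(azimuth_deg, distance_m):
    """Approximate binaural cues for a source at the given azimuth
    (0 = straight ahead, +90 = to the right) and distance.

    Returns (itd_seconds, gain): the interaural time difference from
    Woodworth's formula, r/c * (theta + sin(theta)), and an
    inverse-distance gain relative to a 1 m reference.
    """
    theta = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))
    gain = 1.0 / max(distance_m, 0.1)  # clamp to avoid division blow-up
    return itd, gain
```

A source directly ahead yields zero ITD, while a source to one side yields a sub-millisecond delay between the ears; together with the distance gain, these cues give listeners direction and range even without visual feedback.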
The results demonstrated the feasibility of designing effective audio-based Human-Computer Interaction methods and provided insight into how the participants with VI experienced the scenarios differently from the participants without VI. This research therefore contributes new knowledge about how visually impaired people interact with computer interfaces, which can be used to derive guidelines for designing effective VEs for rehabilitation and exercise.