
Explore Virtual Environments Using a Mobile Mixed Reality Cane Without Visual Feedback

Zhenchao Xia, Stony Brook University

Week1 – Working update:

This week, after meeting with my mentor about the overall structure and future development direction of the project, I realized that I needed to add a new mode to the original project: a learning mode that uses a laser pointer in VR to announce location and physical information when it interacts with other objects. Since the purpose of our project is to help O&M trainers train people who are blind, we need a dedicated tutorial section. This week, I started creating a new tutorial scene that covers both the new learning mode and the original part of the project. In the scene, objects are generated at different locations in the room to guide the user through the different modes.

Week2 – Working update:

This week, I built a scenario that will be used as the user tutorial. In this scenario, the model representing the user is placed inside an irregularly shaped room model. The user runs the AR/VR program on a phone mounted on a selfie stick, using it as the exploration tool, i.e., the virtual cane. The user follows the generated waypoints, explores the structure of the room, and finds the exit. Along the way, the user learns how to use the cane, how the cane gives feedback when it interacts with objects, and how waypoint guidance works.
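A minimal Unity C# sketch of what this waypoint guidance could look like is below; the component and field names are illustrative, and the audio cue stands in for whatever feedback the project actually plays:

```csharp
using UnityEngine;

// Minimal sketch of waypoint-based tutorial guidance (names are illustrative).
public class WaypointGuide : MonoBehaviour
{
    public Transform user;              // the user's avatar in the scene
    public Transform[] waypoints;       // ordered waypoints leading to the exit
    public AudioSource reachedCue;      // audio feedback when a waypoint is reached
    public float reachRadius = 0.5f;    // how close the user must get, in meters

    private int current = 0;

    void Update()
    {
        if (current >= waypoints.Length) return;   // tutorial finished

        // Compare horizontal distance only, so height differences are ignored.
        Vector3 a = user.position;               a.y = 0f;
        Vector3 b = waypoints[current].position; b.y = 0f;

        if (Vector3.Distance(a, b) < reachRadius)
        {
            reachedCue.Play();   // confirm the waypoint was reached
            current++;           // guide the user toward the next waypoint
        }
    }
}
```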

Week3 – Working update:

This week, I created a simple prototype based on the confirmed development requirements. In this scene, I replaced the human model from the actual project with a small cube. The laser beam shoots forward from the middle of the cube, and when the body rotates, the laser beam rotates with it. When the pointer hits an object, the object's specific information is announced. In the coming week, after completing the basic functions of the laser beam, I will load it into the different scenes of the project for testing.
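The core of such a prototype is a forward raycast from the user model. Here is a rough Unity C# sketch under that assumption; Announce is a placeholder for the project's actual text-to-speech call:

```csharp
using UnityEngine;

// Sketch of the laser-pointer prototype: a ray is cast forward from the user
// model, and the first object it hits is announced.
public class LaserPointer : MonoBehaviour
{
    public float maxDistance = 20f;
    public LineRenderer beam;        // optional visual for sighted observers

    private Collider lastHit;

    void Update()
    {
        Vector3 origin = transform.position;
        Vector3 dir = transform.forward;   // rotates with the body

        if (Physics.Raycast(origin, dir, out RaycastHit hit, maxDistance))
        {
            if (beam != null)
            {
                beam.SetPosition(0, origin);
                beam.SetPosition(1, hit.point);
            }
            if (hit.collider != lastHit)   // only announce newly hit objects
            {
                lastHit = hit.collider;
                Announce($"{hit.collider.name}, {hit.distance:F1} meters ahead");
            }
        }
        else
        {
            lastHit = null;
        }
    }

    void Announce(string message)
    {
        // Placeholder: route the message to the project's text-to-speech plugin.
        Debug.Log(message);
    }
}
```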

Week4 – Working update:

This week, I combined the laser pointer with the original user model and created a gesture menu that opens and closes based on the detected movement of the user's gesture. The laser pointer can interact with any object in the scene and gives voice feedback with detailed item attributes and spatial location information. Taking the direction the person is facing as 0 degrees, when the iPhone mounted on the cane is raised to 45 degrees, i.e., diagonally above the person, the gesture menu opens. In the gesture menu, users can switch between cane mode and laser pointer mode, skip, return to, or re-read voice messages, and so on.

(Gesture Menu)

(Laser Pointer)
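A simple way to detect the "raised to 45 degrees" gesture is to check the elevation of the cane's forward direction. The sketch below assumes the cane transform is driven by the phone's orientation; the threshold, tolerance, and menu object are illustrative:

```csharp
using UnityEngine;

// Sketch of the pitch-based trigger: with the user's facing direction taken as
// 0 degrees, raising the cane (the mounted iPhone) to roughly 45 degrees opens
// the gesture menu.
public class GestureMenuTrigger : MonoBehaviour
{
    public Transform cane;             // driven by the phone's orientation
    public GameObject gestureMenu;
    public float openAngle = 45f;      // target elevation in degrees
    public float tolerance = 10f;      // how far off the angle may be

    void Update()
    {
        // Elevation of the cane's forward direction above the horizontal, in degrees.
        float pitch = Mathf.Asin(Mathf.Clamp(cane.forward.y, -1f, 1f)) * Mathf.Rad2Deg;

        bool raised = Mathf.Abs(pitch - openAngle) < tolerance;
        gestureMenu.SetActive(raised);
    }
}
```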

Week5 – Working update:

This week, I added all of the existing functions to the gesture menu, through which the user can switch to any provided function at any time, including cane mode, laser pointer mode, hints, replay, and so on. Since the contents of the gesture menu may change between scenarios, I created a base class for the menu that contains all of the basic menu-related functions. In the future, a special menu only needs a script that inherits from the base class; the menu can then be customized by overriding the relevant functions.
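A possible shape for that base class is sketched below; the shared open, close, and option-switching logic lives in the base class, and a scene-specific menu only overrides Execute. The class and method names are my own, and Speak is a placeholder for the text-to-speech call:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of a gesture-menu base class: shared logic here, scene-specific
// behavior supplied by subclasses that override Execute().
public abstract class GestureMenuBase : MonoBehaviour
{
    protected List<string> options = new List<string>();
    protected int current = 0;

    public bool IsOpen { get; private set; }

    public virtual void Open()
    {
        if (options.Count == 0) return;
        IsOpen = true;
        Speak(options[current]);          // read out the highlighted option
    }

    public virtual void Close()
    {
        if (!IsOpen) return;
        IsOpen = false;
        Execute(options[current]);        // closing runs the highlighted option
    }

    public void NextOption()
    {
        if (options.Count == 0) return;
        current = (current + 1) % options.Count;
        Speak(options[current]);
    }

    // Scene-specific menus override this to run the chosen function
    // (cane mode, laser pointer mode, hint, replay, ...).
    protected abstract void Execute(string option);

    protected virtual void Speak(string text)
    {
        Debug.Log(text);   // placeholder for the text-to-speech call
    }
}
```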

Week6 – Working update:

This week, I made a tutorial for laser pointer mode, in which the user is trained to open the gesture menu with a special gesture, toggle through the options, confirm the selected function, and find targets with complex properties by switching between laser pointer mode and cane mode. Through user testing, I found that overly complex gestures are not reliably recognized by the app, which made it difficult for users to open the gesture menu. So I changed the way the user interacts with the device: when the pitch of the user's cane is between 270 and 360 degrees, the gesture menu opens. While the menu is held open, the current option automatically advances to the next item every two seconds. When the user closes the menu, the current option is executed.
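As a rough sketch of this revised interaction, the component below holds the menu open while the cane's pitch (in Unity's 0–360 degree Euler convention) stays between 270 and 360, advances the highlighted option every two seconds, and executes it when the cane is lowered. It builds on the hypothetical GestureMenuBase from the Week5 sketch:

```csharp
using UnityEngine;

// Sketch of the pitch-held, auto-cycling gesture menu.
public class AutoCycleMenu : MonoBehaviour
{
    public Transform cane;
    public GestureMenuBase menu;        // base class from the Week5 sketch
    public float cycleInterval = 2f;    // seconds between automatic option switches

    private float timer = 0f;

    void Update()
    {
        float pitch = cane.eulerAngles.x;          // Unity reports 0-360 degrees
        bool holdOpen = pitch >= 270f && pitch < 360f;

        if (holdOpen && !menu.IsOpen)
        {
            menu.Open();
            timer = 0f;
        }
        else if (holdOpen && menu.IsOpen)
        {
            timer += Time.deltaTime;
            if (timer >= cycleInterval)            // advance every two seconds
            {
                menu.NextOption();
                timer = 0f;
            }
        }
        else if (!holdOpen && menu.IsOpen)
        {
            menu.Close();                          // closing executes the current option
        }
    }
}
```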

Week7 – Working update:

This week, I worked with my mentor and colleagues to design an experiment to test the app, including the flow of the experiment, the data-collection process, and the evaluation of the results. To better analyze the data, we decided to upload the important data collected during the experiment, including the user's position, rotation, head movement, etc., to a Firebase database. I am now implementing reading that data back from Firebase in Unity and, based on the stored data, driving the "user" model to repeat the actions of the real user, so that we can replay an experiment at any time, obtain more specific and accurate experimental data, and analyze the user's movement trajectory.
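The logging side could look roughly like the sketch below, which assumes the Firebase Realtime Database Unity SDK is installed and initialized; the sample structure, session key, and 10 Hz sampling rate are my own choices, not the project's actual format:

```csharp
using Firebase.Database;
using UnityEngine;

// One recorded frame of the experiment (fields are illustrative).
[System.Serializable]
public class FrameSample
{
    public float time;
    public Vector3 position;
    public Vector3 bodyRotation;
    public Vector3 caneRotation;
    public Vector3 headRotation;
}

// Sketch of uploading tracked data to a Firebase Realtime Database.
public class ExperimentLogger : MonoBehaviour
{
    public Transform body, cane, head;
    public string sessionId = "session_001";   // illustrative key
    public float sampleInterval = 0.1f;        // 10 Hz; every frame would be too chatty

    private DatabaseReference sessionRef;
    private float nextSample;

    void Start()
    {
        sessionRef = FirebaseDatabase.DefaultInstance
            .RootReference.Child("sessions").Child(sessionId);
    }

    void Update()
    {
        if (Time.time < nextSample) return;
        nextSample = Time.time + sampleInterval;

        var sample = new FrameSample
        {
            time = Time.time,
            position = body.position,
            bodyRotation = body.eulerAngles,
            caneRotation = cane.eulerAngles,
            headRotation = head.eulerAngles
        };

        // Push() creates a unique child key; the sample is stored as JSON.
        sessionRef.Push().SetRawJsonValueAsync(JsonUtility.ToJson(sample));
    }
}
```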

Week8 – Working update:

This week, I finished the data collection and replay functions, which record the position and rotation of the user's body, the rotation of the cane, and the user's head. I also designed an informal test to verify the benefit of my two new features, the laser pointer and the gesture menu. After receiving instructions for the two new features, users switch from the cane to the laser pointer and use it to explore the virtual room and build a mental map of the room's layout. Once finished, they reconstruct the mental map on paper, and we evaluate the result by comparing their drawings with the actual layout of the virtual room. Due to the limited time, however, the experiment was not well defined: lacking strategies for exploring a complex virtual room, users' data were not as reliable as expected. In the future, I will try to improve the design of the experiment.
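For the replay side, one simple approach is to step the recorded samples through time and apply them to the "user" model. The sketch below assumes the samples have already been downloaded into a list (for example, from the Firebase session in the Week7 sketch) and reuses the hypothetical FrameSample type:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of replaying a recorded session so a trial can be re-watched
// and the user's trajectory inspected.
public class SessionReplayer : MonoBehaviour
{
    public Transform body, cane, head;
    public List<FrameSample> samples = new List<FrameSample>();  // loaded beforehand

    private int index = 0;
    private float startTime;

    void OnEnable()
    {
        startTime = Time.time;
        index = 0;
    }

    void Update()
    {
        if (samples.Count == 0 || index >= samples.Count) return;

        float t = Time.time - startTime;

        // Apply every sample whose timestamp has passed since replay started.
        while (index < samples.Count && samples[index].time <= t)
        {
            FrameSample s = samples[index];
            body.position    = s.position;
            body.eulerAngles = s.bodyRotation;
            cane.eulerAngles = s.caneRotation;
            head.eulerAngles = s.headRotation;
            index++;
        }
    }
}
```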

Final report submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Zhenchao Xia, Oyewole Oyekoya, and Hao Tang. 2022. Effective Gesture-Based User Interfaces on Mobile Mixed Reality. In Symposium on Spatial User Interaction (SUI '22), December 1–2, 2022, Online, CA, USA. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3565970.3568189 – pdf
