Enhancing Virtual Exploration for Blind and Low Vision Users: In-Place Virtual Exploration with Mixed Reality Cane
Hong Zhao, CUNY Borough of Manhattan Community College
Mentors: Hao Tang and Oyewole Oyekoya
Week 1:
The day before the VR-REU program officially kicked off, Dr. Wole organized a bowling icebreaker event. It was really fun! Everyone was very enthusiastic, and I was happy to meet the rest of the cohort. On Monday, we completed an REU pre-program survey, took a quick tour of Hunter College, and met all the REU mentors on Zoom. My work officially started as I began discussing my project direction with Dr. Tang and getting a preliminary understanding of the current project code. Wednesday was the first class on VR, AR, and mixed reality, which primarily covered the basic principles of VR, how it works, and its history. For the rest of the week, I reviewed related literature and had another discussion with Dr. Tang to finalize my research direction and complete my project proposal. Finally, on Friday, we got an introduction to ParaView and presented our project proposals.
Week 2:
This week, Dr. Wole introduced us to the writing tool Overleaf. He also demonstrated how to use Tableau for optimizing data visualization.
I completed the CITI training and successfully developed the initial version of the MR Cane control system. Here is how it works (a minimal code sketch follows the list):
- When the user long-presses the screen for two seconds, the virtual character begins to move.
- The direction of the character’s movement is determined by the current orientation of the headset.
- The mobile phone acts as a white cane. Users can swing their phones left or right to control the movement of the virtual cane.
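Roughly, the control loop looks like the following Unity C# sketch. This is a minimal illustration rather than the project's actual code; the component and field names (`MRCaneLocomotion`, `headset`, `walkSpeed`) are placeholders, and it assumes a `CharacterController`-based avatar.

```csharp
using UnityEngine;

// Minimal sketch of the MR Cane movement scheme (illustrative names).
public class MRCaneLocomotion : MonoBehaviour
{
    public Transform headset;            // pose source whose facing sets the walk direction
    public CharacterController character;
    public float walkSpeed = 1.0f;       // metres per second (placeholder value)
    public float longPressSeconds = 2f;  // hold time before walking starts

    private float pressTimer;
    private bool walking;

    void Update()
    {
        // Long-press the screen for two seconds to begin moving;
        // releasing the finger stops movement (an assumption of this sketch).
        if (Input.touchCount > 0 && Input.GetTouch(0).phase != TouchPhase.Ended)
        {
            pressTimer += Time.deltaTime;
            if (pressTimer >= longPressSeconds) walking = true;
        }
        else
        {
            pressTimer = 0f;
            walking = false;
        }

        // The walk direction follows the headset's current orientation (yaw only).
        if (walking)
        {
            Vector3 forward = headset.forward;
            forward.y = 0f;
            character.Move(forward.normalized * walkSpeed * Time.deltaTime);
        }
    }
}
```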
Initially, I used the AR Session’s camera to capture the phone’s rotation data. However, this method proved to be imprecise when the phone was parallel to the ground, leading to suboptimal performance. To address this, I switched to using the phone’s gyroscope to obtain the rotation angles. This approach has significantly improved the test results.
Here are some key points about using the gyroscope:
- When the device is in its default orientation, the X-axis points horizontally to the right, the Y-axis points vertically upward, and the Z-axis points outward from the front of the screen, so the region behind the screen corresponds to negative Z values.
- As the device moves or rotates, these coordinate axes remain fixed relative to the phone.
In our tests, this gyroscope-based approach has noticeably improved the accuracy and responsiveness of the MR Cane control system.
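For illustration, reading the attitude in Unity might look like the sketch below. The quaternion fix-up is the usual conversion from the gyroscope's right-handed frame to Unity's left-handed one; the final offset rotation is an assumption that would depend on how the phone is actually held as a cane.

```csharp
using UnityEngine;

// Sketch: drive the virtual cane from the phone's gyroscope attitude.
public class GyroCaneRotation : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true;   // the gyroscope is disabled by default
    }

    void Update()
    {
        // Input.gyro.attitude is right-handed; negate z and w for Unity's frame.
        Quaternion q = Input.gyro.attitude;
        Quaternion deviceRotation = new Quaternion(q.x, q.y, -q.z, -q.w);

        // Offset so the phone's resting pose maps onto the cane's resting pose
        // (the 90-degree value here is a placeholder, not the tuned offset).
        transform.rotation = Quaternion.Euler(90f, 0f, 0f) * deviceRotation;
    }
}
```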
Week 3:
This week, Dr. Wole led a seminar covering immersion, presence, and reality demos, and also introduced us to the tools in VMD.
I developed a new method for controlling movement in place, inspired by the Meta Quest 3 controller. By using a mobile phone as the control device, users can simulate forward movement by swinging the phone up and down. It’s a more natural and intuitive way to navigate the virtual space, making the experience feel even more immersive.
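A rough sketch of how such a swing could be detected from the gyroscope's angular velocity is below; the threshold, cooldown, and step length are made-up values that would need tuning on a real device.

```csharp
using UnityEngine;

// Sketch: a fast pitch rotation (phone swung downward) triggers one step forward.
public class SwingStepController : MonoBehaviour
{
    public CharacterController character;
    public Transform headset;              // walk direction still follows the headset
    public float swingThreshold = 3.0f;    // rad/s around the device x-axis (placeholder)
    public float stepLength = 0.5f;        // metres per detected swing (placeholder)
    public float cooldown = 0.4f;          // seconds; avoids double-counting one swing

    private float lastStepTime;

    void Start() { Input.gyro.enabled = true; }

    void Update()
    {
        // A large angular velocity around the x-axis reads as an up/down swing.
        float pitchRate = Input.gyro.rotationRateUnbiased.x;
        if (Mathf.Abs(pitchRate) > swingThreshold && Time.time - lastStepTime > cooldown)
        {
            lastStepTime = Time.time;
            Vector3 forward = headset.forward;
            forward.y = 0f;
            character.Move(forward.normalized * stepLength);
        }
    }
}
```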
Additionally, I completed the development of the footstep sound management module. This module plays corresponding footstep sounds based on the material of the ground the character is stepping on. The technical details involve creating the character’s walk animation and adding a callback function to play sounds at keyframes. An animator is used to control the transitions between standing and moving states of the character.
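A minimal version of this pattern might look like the following; the tag and clip names are hypothetical, and the component is assumed to sit on the same GameObject as the Animator so the Animation Event can reach it.

```csharp
using UnityEngine;

// Sketch of the footstep sound module. An Animation Event placed on the walk
// cycle's foot-contact keyframes calls OnFootstep(); the ground material is
// inferred from the collider's tag (tag names here are hypothetical).
[RequireComponent(typeof(AudioSource))]
public class FootstepSounds : MonoBehaviour
{
    public AudioClip woodStep;
    public AudioClip carpetStep;
    public AudioClip defaultStep;

    private AudioSource source;

    void Awake() { source = GetComponent<AudioSource>(); }

    // Invoked from the walk animation's keyframes via an Animation Event.
    public void OnFootstep()
    {
        RaycastHit hit;
        // Probe the ground just below the character's feet.
        if (Physics.Raycast(transform.position + Vector3.up * 0.1f, Vector3.down, out hit, 1f))
        {
            switch (hit.collider.tag)
            {
                case "Wood":   source.PlayOneShot(woodStep);   break;
                case "Carpet": source.PlayOneShot(carpetStep); break;
                default:       source.PlayOneShot(defaultStep); break;
            }
        }
    }
}
```

The standing/moving transition itself can then hang off a single Animator bool parameter, e.g. `animator.SetBool("Walking", walking)`.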
I also added a settings menu to allow users to switch between different movement control modes.
Week 4:
Our coursework this week was very rich. Dr. Wole covered topics such as 3D tracking, scanning and animation, interaction and input devices, and an introduction to GPUs. These topics were very interesting and have been very helpful for my project.
This week, I mainly completed a module called Layout Learning. This module is designed to help visually impaired individuals build a mental map before exploring virtual environments. Specifically, the user holds a finger on the screen and moves it back and forth to explore the layout of a virtual room; when the finger touches a wall, table, chair, or other object, the system provides vibration feedback and plays the corresponding object name. When the finger moves outside the room, the system announces "Out of Room." Some of these improvements came directly from user feedback during our user studies.
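As a sketch, this interaction can be built from a screen-to-scene raycast; `NamedObject` is a hypothetical label component (it would go in its own file in Unity) holding each object's recorded name.

```csharp
using UnityEngine;

// Sketch of the Layout Learning interaction: while the finger drags across the
// screen, a ray from a top-down map camera probes the room model; touching a
// labeled object fires a vibration pulse and plays its recorded name.
public class LayoutExplorer : MonoBehaviour
{
    public Camera mapCamera;          // top-down camera over the virtual room
    public AudioSource voice;
    public AudioClip outOfRoomClip;   // the "Out of Room" announcement

    private Collider lastHit;

    void Update()
    {
        if (Input.touchCount == 0) { lastHit = null; return; }

        Ray ray = mapCamera.ScreenPointToRay(Input.GetTouch(0).position);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            if (hit.collider != lastHit)    // announce each object once per pass
            {
                lastHit = hit.collider;
                Handheld.Vibrate();         // vibration feedback on contact
                var named = hit.collider.GetComponent<NamedObject>();
                if (named != null) voice.PlayOneShot(named.nameClip);
            }
        }
        else if (lastHit != null)
        {
            lastHit = null;
            voice.PlayOneShot(outOfRoomClip);   // finger has left the room bounds
        }
    }
}

// Hypothetical label component attached to walls, tables, chairs, etc.
public class NamedObject : MonoBehaviour
{
    public AudioClip nameClip;   // e.g. a recording of the word "table"
}
```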
Because I had been focused on development, I had fallen a bit behind on my research paper. So, on Wednesday and Thursday, I caught up and completed the introduction and methodology sections.
On Friday, we had our midterm presentations, and everyone performed very well. I also completed my presentation smoothly.
Week 5:
This week, Dr. Wole lectured on Graphics Processing Units (GPUs) and Immersive Audio. I learned a lot from the class. The presentation introduced many technical points that help us improve rendering efficiency.
In terms of development, I focused on enhancing the tutorial module of my app. This module guides users on how to use their phone as a white cane, with instructions on turning and on moving forward using the different gestures (long press and downward swipe). The most exciting part is the use of 3D audio and sound playback speed to help users navigate toward their targets: the 3D audio cues indicate whether the target is to the left or right, and the playback speed increases as the user faces the target more directly.
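A minimal sketch of that guidance logic is below: the beacon's `AudioSource` is fully spatialized so left/right placement is audible, and its pitch (which also scales playback speed in Unity) rises as the user faces the target more directly. The pitch range is a placeholder.

```csharp
using UnityEngine;

// Sketch: an audio beacon that plays faster as the user turns toward it.
[RequireComponent(typeof(AudioSource))]
public class AudioBeacon : MonoBehaviour
{
    public Transform listenerHead;   // the user's head/camera transform

    private AudioSource beacon;

    void Start()
    {
        beacon = GetComponent<AudioSource>();
        beacon.spatialBlend = 1f;    // 1 = fully 3D, so panning tells left from right
        beacon.loop = true;
        beacon.Play();
    }

    void Update()
    {
        // 1 when facing the beacon head-on, 0 at 90 degrees or behind.
        Vector3 toTarget = (transform.position - listenerHead.position).normalized;
        float facing = Mathf.Clamp01(Vector3.Dot(listenerHead.forward, toTarget));

        // Raising pitch also speeds up playback, giving the "faster when
        // on target" cue described above (range is a placeholder).
        beacon.pitch = Mathf.Lerp(1f, 2f, facing);
    }
}
```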
Additionally, I made significant progress in setting up the Apple developer account and successfully uploaded the app to TestFlight. This will make it easier for our testers to download and use the app, providing us with valuable feedback. I also designed a survey and created it using Google Forms to gather detailed user feedback on their experience with the app.
Week 6:
This week I focused on designing all the survey questions. The goal was to verify whether my Layout Learning module can help BLV (Blind and Low Vision) users build a mental map, and to assess its effectiveness. I also aimed to test the operability of the two in-place movement control methods, long press and swing down, and asked users to choose the one they preferred.
Additionally, we invited a few high school students and a BLV user to the Fiterman Hall lab for testing. The non-BLV testers performed the app tests with their eyes closed to simulate the BLV experience. After the tests, they completed the survey questionnaire.
This Saturday, the REU cohort attended a lunch cruise. We had a great time and took many beautiful photos. Moreover, Dr. Wole assigned each REU member two research papers from SIGGRAPH to review and evaluate.
Week 7:
On Monday, we shared the progress of our respective projects. Then, we reviewed the papers from the SIGGRAPH Asia conference. Those papers truly provided me with a lot of inspiration. On Thursday, we went to the CUNY Advanced Science Research Center, where we experienced some very cool interactive visual technologies and visited various scientific labs. I also participated in Cason’s project presentation, which was really creative.
This Friday, Dr. Wole explained some statistical analysis strategies and provided reasonable analysis suggestions for each person’s survey. I learned many new analysis methods and look forward to applying them to my project. This week, I also participated in application testing and surveys conducted by other REU students. I invited high school workshop students to test our application with their eyes closed and complete surveys. Their feedback was very helpful to us.
Additionally, I tried to invite BLV (blind and low vision) users from a blind school to test our application. Unfortunately, when I arrived at the blind school, they were in class and unable to participate in the testing. But I will continue to look for opportunities to get them involved.
Week 8:
I can’t believe it’s already the last week of my REU project. At the beginning of this week, I started delving into analyzing the user data we collected. I used Tableau to create some excellent charts for our presentation. I began wrapping up the remaining sections of our paper, adding results, discussion, and future work. I also prepared the slides for the final presentation on Thursday.
Wednesday was so much fun! Our REU group participated in the teacher motion capture session. Professors gave short talks while wearing motion capture equipment, and we listened to their presentations and evaluated their performance.
Thursday was a significant day: we gave our final presentations! We summarized our research projects and presented them to our fellow REU participants, mentors, and some students from Iowa State University. It was a bit nerve-wracking but also very meaningful.
On Friday, we spent the entire day at Hunter College. We finished our paper together and had a wonderful day. We also attended presentations by students from Iowa State University. On the way home, I started brainstorming ideas for the poster and planned to submit both the poster and the paper to ACM ISS 2024 by Saturday.
I had a fantastic time with everyone in the REU program. I am very grateful for this experience and the friendships I have made. I wish all the REU participants good luck in their future endeavors. Thank you all for embarking on this journey with me.
Arab Data Bodies Project
Lamya Serhir, CUNY Baruch College
Project: Arab Data Bodies Project
Mentors: Laila Shereen Sakr and Oyewole Oyekoya
Week 1:
The first week primarily consisted of meeting the other students, some of the mentors, and Professor Wole, as well as writing the project proposal. I read up on the research another student did last year for Arab Data Bodies to see how I could build on his work. Last year, he used the archive housing all the data, known as R-Shief, to analyze the frequency of tweets, the languages used, and general sentiment. A UML diagram of attributes such as user, language, URL, tweet ID, and hashtag facilitated this analysis by organizing the data points. Ultimately, he used the sentiment output from the tweets to animate the facial features of the avatar.
I would like to focus on making avatars of prominent protestors who were in Tahrir Square, the center of political demonstrations in Egypt. Professor Wole recommended creating the scene such that elements of it could be reused at any site of major protests, such as Alexandria and Suez. To do so, we can create crowds of people chanting and holding up signs during the protest.
The next steps are for me to get comfortable using Unity: in addition to beginner tutorials, there is a tutorial on crowd simulation that would be useful for my project. Another consideration is whether data from the R-Shief archive will be beneficial, and if so, what kind of data that would be. I was thinking of basing the avatars on the most shared or viewed images and videos taken from the protests at Tahrir Square, but there are plenty of visuals available on the internet that I could use as well.
Week 2:
This week, I focused on researching previous work done regarding VR documentaries. I found evidence about what components of VR increase the user’s sense of connectedness and how immersive documentaries create more positive attitudes towards human rights as opposed to written mediums. There is also research about the importance of social media in catalyzing the Arab Spring that I plan on using for background.
This week, I’d like to meet with my mentor to narrow down what aspects of the protests I should focus on. I plan on completing a crowd simulation that I can use to replicate a protest and finding assets within Unity that would be applicable to my project. Additionally, I’ll continue to search for relevant literature as the project progresses.
Week 3:
Professor Sakr's team pivoted from creating a VR documentary to a video game. I learned more about the concept and inspiration artwork behind the game, and will model my simulation after the Middle sovereign. In the world of the Arab Data Bodies video game, there are five sovereigns, each represented by a color. The Middle sovereign is represented by gold, and its theme is royalty and status. I have the necessary components to make avatars move in a crowd-like fashion, so the next step is creating the environment in addition to the avatars.
Week 4:
I began creating the environment for the crowd simulation to take place in (as depicted in the photos below). After consulting with the team and Professor Wole, we reached a consensus that it would be best to focus on avatars for the remainder of the project. The next step is to create avatars using generators like MetaHuman, and perhaps existing avatars from open-source websites. There are three types of avatars I plan on creating: one with a human head, another with a robotic head, and a third with a half-human, half-robotic head.
Virtual Reality and Structural Racism Project
Ari Riggins, Princeton University
Project: Virtual Reality and Structural Racism Project
Mentors: Courtney Cogburn and Oyewole Oyekoya
Week 1:
This week, after meeting with Dr. Wole to discuss the specifics of the project and brainstorm ideas and research questions to explore, I began writing my project proposal. The proposal discusses the goals and methodology for the project.
This project aims to create an effective virtual-reality-based visualization that brings to light the disparities of structural racism within housing. The visualization will be based on data from different cities within the United States. We will use property value data, along with the racial demographics of each area, as input; the data will be represented as a three-dimensional street or residential area with houses of changing dimensions, where each house's dimensions are proportional to its value over time and color displays the racial component.
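To make the mapping concrete, here is a minimal sketch of how one house glyph could encode the two variables; the field names, value range, and colors are placeholders rather than the project's actual design.

```csharp
using UnityEngine;

// Sketch: one house glyph whose height encodes property value and whose
// color encodes the area's racial demographics (all ranges illustrative).
public class HouseGlyph : MonoBehaviour
{
    public float propertyValue;            // e.g. dollars, from census/Zillow data
    public float maxValue = 1_000_000f;    // value mapped to the tallest house
    [Range(0f, 1f)]
    public float demographicShare;         // e.g. share of Black residents, 0..1

    public Renderer houseRenderer;
    public Color lowColor = Color.white;
    public Color highColor = Color.blue;

    public void Apply()
    {
        // Height proportional to value; the footprint stays unchanged.
        float v = Mathf.Clamp01(propertyValue / maxValue);
        transform.localScale = new Vector3(1f, Mathf.Lerp(0.2f, 1f, v), 1f);

        // Color interpolated along the demographic axis.
        houseRenderer.material.color = Color.Lerp(lowColor, highColor, demographicShare);
    }
}
```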
In addition to the project proposal, this week I also downloaded the program Unity and began getting used to it and thinking about how it could work for the project.
Week 2:
My goals for this week were mainly to learn how to use Unity to build the project and to research and summarize background material on the topic. I downloaded the Unity ARKit package and began following some tutorials to learn how to use it. So far, I have managed to make an iOS AR application that uses the phone camera to display the world with a digital cube added.
After discussion with Dr. Wole, the project idea evolved a bit: the residential area will be displayed as an augmented reality visualization that can be viewed through a device as if resting on a flat surface such as a table or the ground. The next step, which I am currently working on in Unity, is surface detection so that the visualization can align with these surfaces.
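A minimal sketch of that placement step using Unity's AR Foundation (which wraps ARKit on iOS) is below; it assumes an `ARRaycastManager` is present in the scene, and the prefab name is a placeholder.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: raycast a screen tap against detected planes and place the
// visualization where the ray lands, aligned to the surface.
public class PlacementController : MonoBehaviour
{
    public ARRaycastManager raycastManager;
    public GameObject visualizationPrefab;   // the residential-area model

    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Vector2 touch = Input.GetTouch(0).position;

        // Test against the planes ARKit has detected (tables, floors, ...).
        if (raycastManager.Raycast(touch, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose pose = hits[0].pose;        // nearest plane hit
            Instantiate(visualizationPrefab, pose.position, pose.rotation);
        }
    }
}
```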
In terms of research, I found several relevant sources investigating structural racism within housing. I came across the University of Minnesota’s Mapping Prejudice project which hosts an interactive map of covenants in Minnesota where there were restrictions on the race of property owners and tenants. This project provides a view of one method of visualization for data on racial discrimination within housing.
Week 3:
This week was spent focusing mostly on the data. I met with Dr. Cogburn and Dr. Wole, and we discussed a more specific view of the visualization. Dr. Cogburn brought up a report by the Brookings Institution that investigates the devaluation of Black homes and neighborhoods; this report will serve as the jumping-off point for the project's data as well as a reference for discussion of the topic.
The data used in the report comes from the American Community Survey, performed by the US Census Bureau, and from Zillow. It will be necessary to find similar census data for this project. We decided that, for now, the project should focus on one geographic area as a case study of the overall inequality. The city I am planning to focus on is Rochester, New York; it was represented in the Brookings report and was shown to have a large disparity in the valuation of Black and White homes.
Week 4:
This week in Unity I continued working with ARKit to detect surfaces and display the visualization on them. We discussed the data after running into a roadblock where we did not have access to all of the information we wanted: the Brookings report did not provide the names of the specific towns and areas we found to be comparable, so we could not find data on them individually. However, we are able to use the reported data by changing our visualization a bit. Instead of being on a timeline, the houses will be on a sliding scale by the factor of race.
I also gave my midterm presentation this week which helped me solidify my background research for the project, as well as explain it in a clear manner.
Week 5:
This week I was mostly working in Unity. I found a free house asset that works for the project and used ARKit to place it on any detected plane. I also worked on getting a United States map to serve as the basis of the visualization on the plane. We decided to use multiple locations from the Brookings report as case studies, so I am still working on the script that changes the house size in accordance with this data. Now that I have the pieces working, I need to arrange the scene and scale everything, as well as create some instructions for use.
I have also been working on my paper and am currently thinking about the methodology section.
Week 6:
This week, in terms of writing the paper, I made a short draft of my abstract and began working on the methods section. I worked in Unity to get the house asset into AR and to write a script that adds the growing animation shown in the video below. I added an input to the house that dictates the disparity to be displayed through the amount of growth. I am also looking into changing the color of the house and having it fade from one color to another. When meeting with my mentors, they suggested that I try some different approaches to the overall visualization, such as adding avatars to depict the neighborhood demographics and changing the color of the house to green or another monetary representation to depict the change in value.
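As a sketch of how the growth animation and the color fade could be driven together, the coroutine below advances both on the same parameter (which would also keep them in sync), given a normalized disparity input; the names and duration are illustrative.

```csharp
using System.Collections;
using UnityEngine;

// Sketch: one coroutine drives both the house's height change and the
// color fade so the two effects stay synchronized.
public class HouseGrowth : MonoBehaviour
{
    public Renderer houseRenderer;
    public Color startColor = Color.red;
    public Color endColor = Color.green;
    public float duration = 3f;            // seconds (placeholder)

    // disparity in 0..1 controls how much extra height the house gains.
    public IEnumerator Grow(float disparity)
    {
        Vector3 startScale = transform.localScale;
        Vector3 endScale = startScale + Vector3.up * disparity;

        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            float k = t / duration;
            transform.localScale = Vector3.Lerp(startScale, endScale, k);
            houseRenderer.material.color = Color.Lerp(startColor, endColor, k);
            yield return null;
        }
    }
}
```

A caller would kick it off with something like `StartCoroutine(houseGrowth.Grow(0.4f))`.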
Week 7:
This week, I have been working to get my demo finished. I fixed the shrinking issue with the house and added the color change to the roof, though I still have to sync these two processes. In a meeting with my mentors, we decided that I should focus on completing this one scene instead of working on two, due to the limited time left. We also discussed the background of the scene and things I could add to make it feel more like a neighborhood, as well as labeling and how I could make clear what data the visualization is actually conveying. At this point, my work will be finishing this demo and wrapping up everything we discussed, and what I've worked on, into a presentation.
Week 8:
In the final week, I focused mainly on preparing for my presentation and finishing up every aspect of the project, including the paper. In terms of my visualization, I had the case-study visualization of one house changing in size and color, but at my mentor meeting we discussed the significance of the color and other possibilities. I ended up making two other versions of the visualization using different colormaps representing the racial makeup of the communities.