
Author Archives: VR-REU Student

Richard Yeung

Week 1

Since my project is building off an existing one, my main goal is to understand what has already been made and what its limitations and capabilities are. This meant looking into the source code so I could understand what features exist, testing the project to see how it runs, and building it so I know there aren't any unforeseen complications. I was able to talk to the person who wrote most of it, and he helped explain a lot of the more complex code.

My goal for next week is to create a rough draft of my project. I discussed this with my professor, and we may have found a way to create the in-place exploration for the visually impaired user; we just need to write and test it.

Week 2

The goal of this project is to create an app that allows visually impaired (VI) users to move through a virtual environment without having to move themselves. This lets users explore an environment without being limited by the physical space available. And since the environment is virtual, VI users can explore at their own pace and get acquainted with new environments without the struggle of having to feel around while other people are nearby. This gives VI users some feeling of security once they have a mental map of their environment.

Since I am building off existing work, the progress has been pretty quick so far. I am almost finished with a demo for this project so that it can be tested. As of now, there are three premade virtual spaces that users can explore. The avatar responds to user inputs, which control its movement. The avatar has two states: unmoving and moving.

Unmoving: Users can rotate their phones, which rotates the avatar. Positional movement of the phone does not affect the avatar.

Moving: Users press on their phone screen to move forward in whatever direction they are facing. While moving, rotating the phone does not change the direction of the avatar, allowing VI users to sweep their cane while moving forward (a sketch of this two-state logic follows below).
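
Here is a minimal sketch of that two-state logic as a Unity script. The gyroscope calls, the walking speed, and the quaternion conversion are illustrative choices, not the project's actual code, and the AirPods head tracking is left out.

```csharp
using UnityEngine;

// Minimal sketch of the two control states described above (illustrative only).
public class CaneAvatarController : MonoBehaviour
{
    public Transform avatar;
    public float walkSpeed = 1.0f;   // metres per second while the screen is pressed

    Quaternion lockedFacing;          // facing captured when movement starts
    bool moving;

    void Start()
    {
        Input.gyro.enabled = true;    // required before reading the attitude
    }

    void Update()
    {
        // Phone attitude converted into Unity's left-handed frame (common pattern).
        Quaternion q = Input.gyro.attitude;
        Quaternion phone = new Quaternion(q.x, q.y, -q.z, -q.w);
        Quaternion facing = Quaternion.Euler(0f, phone.eulerAngles.y, 0f);

        bool pressing = Input.touchCount > 0;
        if (pressing && !moving)
        {
            // Entering the moving state: lock the walking direction so cane
            // sweeps (phone rotation) do not change the path.
            lockedFacing = facing;
            moving = true;
        }
        else if (!pressing)
        {
            moving = false;
        }

        if (moving)
            avatar.position += lockedFacing * Vector3.forward * walkSpeed * Time.deltaTime;
        else
            avatar.rotation = facing;   // unmoving: the avatar turns with the phone
    }
}
```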

At this point, only one thing needs to be done: connecting AirPods to my device. This project needs AirPods that capture the user's head movement; this is how we know how to rotate the avatar's head. The issue is that, for whatever reason, my device cannot connect properly to these AirPods. More specifically, the AirPods can connect, but the head-movement feature cannot be used. Switching phones or updating Xcode and iOS did not solve this, so right now I am looking for solutions.

Week 3

There was a technical problem. As stated in the previous week, for some reason my iPhone 11 could not properly connect to the AirPods. I tested them on an iPhone 14, and the AirPods connected and worked with the app. This led me to believe it might be a software limitation, so I did some research and found nothing. Then I tried to reverse engineer the borrowed GitHub code that connects Unity to the AirPods through their API. While researching, I was given an iPhone 7 to test on, and it worked. So I realized the problem probably had something to do with my iPhone 11, particularly its settings. I decided to do a factory reset, and this solved the problem.

Once I got the AirPods to work properly and tested the app, I just needed a VI user to try it. This Friday, I was able to do just that. For about 20 minutes, the user tested the demo. The demo has two objectives: to test the controls and to test how well the user can visualize the room. There were three rooms with furniture set up in different places. The user tested all three rooms while I observed her performance. In the end, she seemed to become accustomed to one of the three rooms. I was able to ask for her feedback and suggestions for improvements, and in the following week I am working to implement some of them.

Week 4

This was a slow week. Most of the time was spent discussing the best approach. We want the user to have no trouble using the application, but with as much immersion as possible, since we believe that helps with building a mental map of an area. One issue is turning. Based on feedback, we considered allowing users to turn the avatar without actually turning their own body, possibly by adding a button. With this feature, users could explore the virtual environment while sitting down, or without looking odd when using the app out in public. However, it would cut into the immersion we are trying to develop, not to mention that turning an avatar with a button does not feel the same as turning one's body. We are still trying to figure this out, but for now we are sticking with the user having to turn their body, while adding auditory feedback to tell users which direction they are facing.

Aside from that, I managed to implement one feature: footstep audio. This was a bit of an issue, as most tutorials online do not consider whether the avatar is walking into a wall. I had to do some looking around and testing, and eventually got to where I am. The current version generates footsteps with delays. Capturing the avatar's position every frame, I calculate the difference in position. If the movement is normal, the footsteps play at normal speed. If there is only a small change, there is a noticeable delay between footsteps. And if there is barely any change, there are no footsteps at all. For some reason, even when walking into a wall, there is still some positional difference, so the change must pass a threshold before any footstep audio is played.
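
To illustrate the idea, here is a minimal sketch of that footstep logic in a Unity script: measure how far the avatar actually moved each frame, skip footsteps below a threshold, and stretch the delay when movement is slow. The threshold and interval values are illustrative, not the project's exact numbers.

```csharp
using UnityEngine;

// Minimal sketch of the footstep logic described above (illustrative values).
[RequireComponent(typeof(AudioSource))]
public class FootstepAudio : MonoBehaviour
{
    public AudioClip footstep;
    public float normalSpeed = 1.5f;    // walking speed (m/s) that gives normal-paced steps
    public float minSpeed = 0.05f;      // below this the avatar is treated as not moving
    public float baseInterval = 0.5f;   // seconds between steps at normal speed

    Vector3 lastPosition;
    float timer;
    AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        lastPosition = transform.position;
    }

    void Update()
    {
        // Per-frame displacement converted to an approximate speed.
        float delta = Vector3.Distance(transform.position, lastPosition);
        lastPosition = transform.position;
        float speed = delta / Mathf.Max(Time.deltaTime, 1e-5f);

        // Even against a wall the avatar reports a little positional change,
        // so anything under the threshold produces no footsteps at all.
        if (speed < minSpeed) { timer = 0f; return; }

        // Slower movement -> noticeably longer gap between footsteps.
        float interval = baseInterval * Mathf.Clamp(normalSpeed / speed, 1f, 3f);
        timer += Time.deltaTime;
        if (timer >= interval)
        {
            source.PlayOneShot(footstep);
            timer = 0f;
        }
    }
}
```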

Next week, I plan on changing how the phone is held when interacting with the app, adding a voice to tell users which direction they are facing, fixing how users walk backwards, and controlling the phone's vibrations.

Week 5

I implemented most of what I wanted. I added two different modes of control: the original and a swipe mode.

For the original control, the user presses on the screen to move and tilts their phone upwards to move backwards. The tilting part turned out to be difficult to implement because of how Unity calculates Euler angles. I might do away with the tilt feature since, according to the tester, tilting does not feel appropriate for this. Instead, I will combine this mode with features from the swipe mode.
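
For reference, a minimal sketch of the tilt check is below. The awkward part is that Unity reports eulerAngles in the 0-360 range, so the pitch has to be wrapped back into a signed range before comparing it to a threshold. The gyro-to-Unity conversion, the 30-degree threshold, and the sign convention are illustrative choices, not the project's actual values.

```csharp
using UnityEngine;

// Minimal sketch of the "tilt the phone up to walk backwards" check.
public class TiltToReverse : MonoBehaviour
{
    public float tiltThreshold = 30f;   // upward tilt (degrees) that triggers reverse

    void Start()
    {
        Input.gyro.enabled = true;
    }

    public bool IsTiltedUp()
    {
        // Device attitude converted into Unity's left-handed frame.
        Quaternion q = Input.gyro.attitude;
        Quaternion attitude = new Quaternion(q.x, q.y, -q.z, -q.w);

        // eulerAngles.x is 0..360; map it to -180..180 so "tilted up" becomes
        // one signed comparison instead of two awkward range checks.
        float pitch = attitude.eulerAngles.x;
        if (pitch > 180f) pitch -= 360f;

        return pitch < -tiltThreshold;   // sign depends on how the phone is held
    }
}
```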

The swipe mode has four ways to move the user: swiping up moves the avatar forward, swiping down moves it backwards, and swiping left or right turns the user in that direction. Swiping up and down works great. The issue is the left and right swipes. The way they are implemented, a left or right swipe turns both the body and the head. This is necessary because the head strictly follows the AirPods, so I need to work around that and make it look as if the AirPods themselves turned. This should have worked, but for some reason, sometimes when the user swipes left or right, the head turns longer than the body. I have no idea what is causing this or how to fix it, but that is my objective for next week.
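
A minimal sketch of the swipe handling is shown below, using Unity's touch API. The step sizes, the discrete forward/backward step, and the way the head is rotated together with the body are illustrative choices based on my description above, not the project's exact code.

```csharp
using UnityEngine;

// Minimal sketch of the swipe controls described above (illustrative values).
public class SwipeControls : MonoBehaviour
{
    public Transform avatarBody;
    public Transform avatarHead;          // normally driven by the AirPods tracker
    public float moveDistance = 0.5f;     // metres per vertical swipe
    public float turnStep = 90f;          // degrees per horizontal swipe
    public float minSwipe = 50f;          // pixels before a touch counts as a swipe

    Vector2 startPos;

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);

        if (touch.phase == TouchPhase.Began)
        {
            startPos = touch.position;
        }
        else if (touch.phase == TouchPhase.Ended)
        {
            Vector2 delta = touch.position - startPos;
            if (delta.magnitude < minSwipe) return;

            if (Mathf.Abs(delta.y) > Mathf.Abs(delta.x))
            {
                // Vertical swipe: step forward or backward along the facing direction.
                float sign = Mathf.Sign(delta.y);
                avatarBody.position += avatarBody.forward * sign * moveDistance;
            }
            else
            {
                // Horizontal swipe: turn body and head together so the
                // head-tracking offset stays consistent.
                float sign = Mathf.Sign(delta.x);
                avatarBody.Rotate(0f, sign * turnStep, 0f);
                avatarHead.Rotate(0f, sign * turnStep, 0f, Space.World);
            }
        }
    }
}
```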

I also plan to implement another means of control: buttons. Since swiping might be difficult for some users, especially those who are not used to technology or have problems with their hands, I can add four buttons within reach of the user's thumb. I'm still thinking this through, but more options are better.

Another problem is that the user still seems to get stuck on walls. I'm just going to increase the collider range for now, but a better method needs to be thought up.

Week 6

I more or less have everything set up. There is still a problem where the in-game cane seems to drift away from the avatar. This needs more testing, but it appears the camera is not picking up a distinct background, which makes it assume the phone is moving, so the in-game cane moves. Again, I'm not sure, but it needs more testing.

Everything else is pretty much done. Participants will hold the phone like a cane and explore a room for x minutes. At the end, they will be asked to visualize the room (for example, by drawing it). The main focus is testing spatial memorization, so the material of each object is not that important; just knowing that something is there is enough. Participants will test two different versions of movement: one where the user turns with their body, and another where the user turns with a swipe/button. This will test whether turning with the body or turning with a button/swipe affects spatial awareness. They will test three different rooms with increasing difficulty (i.e., more furniture). I'm not sure whether we want to change the room size/shape or keep it the same, but so far it is the same room. Participants will be graded on how many pieces of furniture they can correctly position in their drawing.

Week 7

I managed to finish and fix most of the problematic parts of my program, so it should not break or produce some weird bug or error in the middle of testing. I also met with the participant to get some feedback, and they seem to like it. So now it's a matter of testing it with more participants. In the meantime, I will work on my paper, think of the questions, and add small features that I think may be useful.

Week 8

Since I am continuing to work on this project, I am adding more features to make testing easier, such as randomizing the room size and randomizing object placement. This will allow for more variety in testing and make sure that participants don't get used to the same room size.
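
A minimal sketch of how such randomization could work is below; the size ranges, furniture count, and spacing check are illustrative values rather than the project's final settings.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of room and furniture randomization (illustrative values).
public class RoomRandomizer : MonoBehaviour
{
    public GameObject[] furniturePrefabs;
    public Vector2 roomWidthRange = new Vector2(4f, 8f);   // metres
    public Vector2 roomDepthRange = new Vector2(4f, 8f);
    public int furnitureCount = 5;
    public float minSpacing = 1.0f;   // keep pieces from overlapping

    readonly List<Vector3> placed = new List<Vector3>();

    public void Generate()
    {
        placed.Clear();
        float width = Random.Range(roomWidthRange.x, roomWidthRange.y);
        float depth = Random.Range(roomDepthRange.x, roomDepthRange.y);

        for (int i = 0; i < furnitureCount; i++)
        {
            // Retry a few times if the candidate spot is too close to an existing piece.
            for (int attempt = 0; attempt < 20; attempt++)
            {
                Vector3 pos = new Vector3(
                    Random.Range(-width / 2f, width / 2f),
                    0f,
                    Random.Range(-depth / 2f, depth / 2f));

                if (IsFarEnough(pos))
                {
                    var prefab = furniturePrefabs[Random.Range(0, furniturePrefabs.Length)];
                    Instantiate(prefab, pos, Quaternion.Euler(0f, Random.Range(0f, 360f), 0f));
                    placed.Add(pos);
                    break;
                }
            }
        }
    }

    bool IsFarEnough(Vector3 candidate)
    {
        foreach (var p in placed)
            if (Vector3.Distance(p, candidate) < minSpacing) return false;
        return true;
    }
}
```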

Final Paper:
Richard Yeung, Oyewole Oyekoya, and Hao Tang. 2023. In-Place Virtual Exploration Using a Virtual Cane: An Initial Study. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 45–49. https://doi.org/10.1145/3626485.3626539 – pdf

Diego Rivera: Neural Network models in Virtual Reality

Diego Rivera, Iona College

Project: Neural Network in Virtual Reality through Unity 3D

Mentors: Lie Xie, Tian Cai, Wole Oyekoya

Week 1:

For week one, research on transformers was done, along with reading about how to implement Google Cardboard in Unity and getting it working. Research on how PyTorch works was also done this week. Next week, a Unity scene will be developed so the neural network models can be implemented in it. More research and development will be done this week to have a presentable prototype next week.

Week 2:

This week, development of the Unity scene started, and a majority of the visual work is finished. I was able to use Unity's new Input System, which allows an XR controller and a regular console controller to move the object, making it easier to support other platforms. The transformer model is still in progress; more debugging is needed.

The video above showcases the placement model and how the controls work with a controller. An XR controller is added in the game, but testing must be done to see if it works and is calibrated correctly. The box is shown to rotate on its x and y axes, and to decrease and increase in size as well. Next steps are adding UI elements, continuing debugging, and creating a functional Transformer model in PyTorch.
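
As a rough illustration of how the new Input System can drive both a gamepad and an XR controller from the same script, here is a minimal sketch that rotates and scales an object. The action bindings, rotation speed, and scale rate are illustrative; the project's actual input setup may be configured differently.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Minimal sketch: rotate the box with the right stick/thumbstick and grow or
// shrink it with the triggers (illustrative bindings and speeds).
public class ModelManipulator : MonoBehaviour
{
    public float rotateSpeed = 90f;   // degrees per second
    public float scaleRate = 0.5f;    // fraction of size change per second

    InputAction rotate;   // Vector2 from gamepad stick or XR thumbstick
    InputAction grow;     // float, 0..1
    InputAction shrink;   // float, 0..1

    void OnEnable()
    {
        rotate = new InputAction("Rotate", binding: "<Gamepad>/rightStick");
        rotate.AddBinding("<XRController>{RightHand}/thumbstick");
        grow = new InputAction("Grow", binding: "<Gamepad>/rightTrigger");
        grow.AddBinding("<XRController>{RightHand}/trigger");
        shrink = new InputAction("Shrink", binding: "<Gamepad>/leftTrigger");
        shrink.AddBinding("<XRController>{LeftHand}/trigger");

        rotate.Enable();
        grow.Enable();
        shrink.Enable();
    }

    void OnDisable()
    {
        rotate.Disable();
        grow.Disable();
        shrink.Disable();
    }

    void Update()
    {
        // Horizontal input spins the box around Y, vertical input around X.
        Vector2 r = rotate.ReadValue<Vector2>();
        transform.Rotate(Vector3.up, r.x * rotateSpeed * Time.deltaTime, Space.World);
        transform.Rotate(Vector3.right, r.y * rotateSpeed * Time.deltaTime, Space.Self);

        // Triggers scale the box up or down.
        float s = grow.ReadValue<float>() - shrink.ReadValue<float>();
        transform.localScale *= 1f + s * scaleRate * Time.deltaTime;
    }
}
```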

 

Week 3:

I was able to obtain a Quest and test out the game; however, there are many bugs and errors I need to fix, which is the main objective to get the project working and running. Next week the bugs should be fixed, and the project should run and work properly.

 

Week 4:

Debugging was finished and the CNN scene was developed. For the development of the Transformer scene, an ONNX file was created and is ready to use in Unity for a similar experience to the CNN scene. Audio is set and the controllers are interactive; a clipping issue was found in the CNN scene, but that will be fixed later, as developing the Transformer scene is next and should be the priority. A downside of the CNN scene, and possibly the Transformer scene, is the need to be in Link mode: the standalone application will not work because File Explorer is needed to get the models to work.

Images: basic load of the CNN model; CNN model with inputs and outputs.

The images above show the model before and after receiving weights and inputs.

 

Week 5:

Development has started on the Transformer scene using the ONNX file; some issues and bugs were encountered. Once the scene is implemented, quality assurance will begin.

 

Week 6:

Developing a working Transformer model was a success: using the ONNX file allows for a simple interactive Transformer model. However, I am unable to display a visual model of the Transformer like the CNN visualizer. The ONNX file could be converted into a JSON file, but the code used in the CNN scene is not compatible with it; as a result, a visual interactive scene was created using the ONNX file directly. The scene allows the user to drag a photo and place it on a black square, which takes in the data and lets the user run the model and get an output.
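
For context, here is a minimal sketch of how an ONNX model can be run inside Unity on the dropped image. It assumes the Barracuda package is the inference runtime (the post does not name it) and a single-channel image input; the class and field names are illustrative.

```csharp
using Unity.Barracuda;
using UnityEngine;

// Minimal sketch of running an exported ONNX model in Unity via Barracuda.
public class OnnxClassifier : MonoBehaviour
{
    public NNModel modelAsset;   // the imported .onnx file
    IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);
    }

    // Runs the model on the dropped image and returns the index of the highest score.
    public int Classify(Texture2D image)
    {
        using (var input = new Tensor(image, 1))   // single channel assumed
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput();   // owned by the worker; not disposed here

            int best = 0;
            for (int i = 1; i < output.length; i++)
                if (output[i] > output[best]) best = i;
            return best;
        }
    }

    void OnDestroy()
    {
        worker?.Dispose();
    }
}
```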

  

Further QA will be done, along with adding more information about the models.

 

Week 7:

The final touches on the Transformer scene have been made; a small demo shows how the model runs. Not included in the video: a scroll bar with information about the Transformer model was created to give more background on the model and the project.

 

Week 8:

Development is finished; the presentation was today, July 29th, 2022. I learned a lot in this REU and understand more about machine learning and deep learning. No further updates were made this week, just preparation for the presentation and finishing writing the report.

Final report submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Diego Rivera. 2022. Visualizing Machine Learning in 3D. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565688 – pdf
