Utilizing VMD to Visualize and Analyze Two PDB Files of FNDC1
Asmita Deb
Week 1
This week we started by bowling and getting to know everyone we would be working with for the next 8 weeks. It was a great team-bonding experience, and by Monday, when we stepped into Hunter College, everything went smoothly! We went up to a conference room, met with everyone's mentors in the REU, and then took a tour of the college itself. We came back up, were told our tasks for the week, and then left. The next few days were dedicated to creating a proposal for our project that would satisfy our mentor, Dr. Wole, and ourselves. On Wednesday we submitted and discussed our drafted proposals and then finalized them to be submitted on Overleaf. Throughout this we also watched many lecture videos that introduced us to VR/AR and the research tools we would be using. The project I plan to focus on is the use of ParaView, a scientific visualization application, to analyze and visualize a protein called FNDC1, whose overexpression is associated with multiple cancers.
Week 2
This week I worked on my literature review. I found about 4-5 sources on the protein FNDC1, as well as on using visualization software and how it could benefit research on understudied proteins. This was the first time I used Overleaf, but it was easy to get the hang of. I also played around with ParaView all week and downloaded both of my files. I've been editing them, but I will also try VMD next week, just to see the differences. I also figured out what type of user study I want to conduct and what I specifically want to write about in my paper. Throughout the week I also enjoyed good food, fun activities, and bonding with the other interns!
Week 3
This week was more focused on the methodology and moving forward in our projects. I loaded both of my PDB files of the two different structures into VMD, but many of the analysis tools in VMD require some sort of trajectory or movement file. To get this, I had to download a separate program called GROMACS, a molecular dynamics package used to generate trajectories. I didn't have much previous experience with Homebrew, CMake, or terminal commands, but through this experience I learned a lot. I got my movement files and will now load them onto my VMD protein frames; a sketch of a typical workflow is shown below. I have also been working on my research paper on Overleaf and watching the class videos.
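For context, producing a trajectory from a PDB structure with GROMACS usually chains several command-line steps. The sketch below drives one typical minimal pipeline from Python via subprocess; the file names, force field, water model, and .mdp settings are placeholders and not necessarily the exact options used in this project.

```python
import subprocess

def run(cmd):
    """Run one GROMACS command and stop if it fails."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Typical minimal pipeline: structure -> topology -> box -> solvate -> preprocess -> run.
# All file names and parameters below are placeholders.
run(["gmx", "pdb2gmx", "-f", "fndc1.pdb", "-o", "processed.gro",
     "-water", "spce", "-ff", "amber99"])
run(["gmx", "editconf", "-f", "processed.gro", "-o", "boxed.gro",
     "-c", "-d", "1.0", "-bt", "cubic"])
run(["gmx", "solvate", "-cp", "boxed.gro", "-cs", "spc216.gro",
     "-o", "solvated.gro", "-p", "topol.top"])
run(["gmx", "grompp", "-f", "md.mdp", "-c", "solvated.gro",
     "-p", "topol.top", "-o", "md.tpr"])
run(["gmx", "mdrun", "-deffnm", "md"])  # writes trajectory files (e.g. .trr/.xtc) that VMD can load
```

The resulting trajectory can then be loaded on top of the PDB structure in VMD to animate the protein frames.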
Week 4
This week I worked on my midterm presentation, which we would be showing to our peers and also the other mentors. The presentation included an introduction to our project, why we are conducting this research, our literature review (papers and works related to our project that have significance for our research), our methodology, future work, and a video that served as a demo of what we've been doing so far. It was a very stress-free presentation and was used for feedback and help on our projects. In the next week I hope to gather all my data so that I can start working on my survey and send it out by the beginning of Week 6, and then start collecting survey data to include in my paper.
Week 5
This week I noted down all of the data that I have accumulated between the two PDB files. I have also started narrowing down the five questions I want to ask in my survey for each protein model, which means I will have 10 questions total. I have started writing down the comparisons and similarities between the two files in a way that is presentable in a research paper. I have been updating Overleaf as well to keep up with the progress I'm making. My hope is that by the beginning of Week 7 I will send out the survey and continue to edit my paper while I await the responses. This would mean that in Weeks 7 and 8 I am just adding my survey data and finishing up my paper (fingers crossed). It was also the 4th of July this week, so it was an odd week, but very fun!
Week 6
This week, I asked my friend to act as my guinea pig for my first and very preliminary survey. The feedback I got was to ask fewer quantitative questions and focus more on comfort with the software, since not everyone has a biology background. It is easier to ask whether they were able to use the video in my survey to answer the questions than to judge whether they answered correctly. I have been updating Overleaf and my blog posts (still trying to figure out the images). I am going to send out the survey this week and start analyzing my data as it comes in. For fun this week, my best friend from home came to visit and we had a great time exploring the city, since it was her first time!
Week 7
Wow! We are hitting the last week of this internship and I feel so many emotions. I feel happy, scared, and excited for the future. This week I focused on finalizing the data and information I want to include in my paper and what I really want to write about as the conclusion of my research. I also finished my survey this week, sending it to Dr. Wole for a quick run-through and then sending it out to everyone and anyone to get as many participants as I could. As I wait for more results, I will start to finish up my paper in Overleaf and discuss the results I've gained so far, as there seems to be a trend. I've attached an image below of the trend I'm seeing. For fun, my best friend from college visited this weekend and we had a really great time, along with some of my co-interns, as we celebrated our last weekend in NYC!
Week 8
This was the last week of our internship! Very bittersweet, but also very exciting. I finished up my paper and prepared to submit to ISS after figuring out all of the logistics. There is also a mandatory poster to submit, so I will be working on that. A lot of my time was also spent analyzing my data and figuring out how I want to present it in my paper. We also had our final presentations, which we did with Iowa State. We heard how their REU went and about the research they conducted, which was very informative and creative! I will also be adding all my work to the GitHub and Dropbox! My time here in the city doing such amazing and creative research is something I will never forget. I truly had a really great experience and I would recommend it to anyone else!
Richard Yeung
Week 1
Since my project builds off an existing one, my main goal is to understand what has already been made and what its limitations and capabilities are. This meant looking into the source code so I could understand what features exist, testing the project to see how it runs, and building it so I know there aren't any unforeseen complications. I was able to talk to the person who wrote most of it, and he helped explain a lot of the more complex code.
My goal for next week is to create a rough draft of my project. I discussed this with my professor, and we may have found a way to create the in-place exploration for visually impaired users; we just need to write and test it.
Week 2
The goal of this project is to create an app that allows visually impaired (VI) users to move through a virtual environment without having to move themselves. This lets users explore an environment without the restriction of not having enough physical space. And since this is a virtual environment, VI users can explore at their own pace and get acquainted with new environments without the struggle of having to feel around while other people are nearby. This will give VI users some feeling of security once they have a mental map of their environment.
Since I am building off existing work, progress has been pretty quick so far. I have almost finished a demo for this project so that it can be tested. As of now, there are three premade virtual spaces that users can explore. The avatar responds to user inputs, which control its movement. There are two states of the avatar, unmoving and moving (a rough sketch of this logic follows the list below):
Unmoving: Users can rotate their phones, which rotates the avatar. Positional movement of the phone does not affect the avatar.
Moving: Users press on their phone screen, which moves the avatar forward in whatever direction the user is facing. While moving, users can rotate their phones without changing the avatar's direction, allowing VI users to sweep their cane while moving forward.
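A minimal sketch of that two-state control logic, written as plain Python rather than the project's actual Unity/C# code; the class name, speed, and update loop are all illustrative:

```python
import math

class AvatarController:
    """Illustrative two-state control: 'unmoving' follows the phone's rotation,
    'moving' walks forward along the heading locked in when the press started."""

    SPEED = 1.0  # meters per second (placeholder value)

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0         # radians; where the avatar faces
        self.locked_heading = 0.0  # heading frozen while moving
        self.moving = False

    def update(self, phone_yaw, screen_pressed, dt):
        if screen_pressed:
            if not self.moving:
                # Entering the moving state: freeze the walking direction so
                # rotating the phone (sweeping the cane) no longer steers the avatar.
                self.locked_heading = self.heading
                self.moving = True
            self.x += math.cos(self.locked_heading) * self.SPEED * dt
            self.y += math.sin(self.locked_heading) * self.SPEED * dt
        else:
            # Unmoving state: the avatar simply mirrors the phone's rotation.
            self.moving = False
            self.heading = phone_yaw
```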
At this point, only one thing needs to be done: connecting AirPods to my device. For this project, special AirPods that capture the user's head movement need to be used; this is how we know how to rotate the avatar's head. The issue is that, for whatever reason, my device cannot connect properly to these AirPods; more specifically, the AirPods can connect, but the head-movement feature cannot be used. Switching phones and updating Xcode and iOS did not solve this, so right now I am looking for solutions.
Week 3
There was a technical problem. As stated in the previous week, for some reason my iPhone 11 could not properly connect to the AirPods. I tested it on an iPhone 14, and the AirPods connected and worked with the app. This led me to believe that it might be some software limitation, so I did some research and found nothing. Then I tried to reverse engineer the borrowed GitHub code that allowed the connection between Unity and the AirPods using its API. While researching, I was given an iPhone 7 to test on. It worked. So then I realized it probably had something to do with my iPhone 11, particularly its settings. I decided to do a factory reset, and this solved the problem.
Once I got the AirPods working properly and tested the app, I just needed a VI user to test it. This Friday, I was able to do just that. For about 20 minutes, the user tested the demo. The demo had two objectives: to test the controls and to test how well the user is able to visualize the room. In the demo, there were three rooms with furniture set up in different places. The user tested all three rooms as I observed her performance. In the end, she seemed to become accustomed to one of the three rooms. I was able to ask for her feedback and suggestions for improvements, and in the following week I will work on implementing some of them.
Week 4
This was a slow week. Most of the time was spent discussing the best approach. We want the user to have no trouble using the application, but with as much immersion as possible, since we believe that will help with creating a mental map of an area. One issue is turning. Based on the feedback, we considered allowing users to turn the avatar without actually turning their own body, possibly by adding a button to do so. With this feature, users could explore their virtual environment while sitting down, or without looking like a loony when using the app out in public. However, this would cut into the immersion we are trying to develop, not to mention that turning an avatar with a button does not feel the same as turning one's body. We are still trying to figure this out, but as of now we are sticking with the user having to turn their body, and we are adding auditory feedback to tell users which direction they are facing.
Aside from that, I managed to implement one feature: audio footsteps. This was a bit of an issue, as most tutorials online do not consider whether the avatar is walking into a wall. As such, I had to do some looking around and testing, and eventually got to where I am. The current version generates footsteps with varying delays. Capturing the avatar's position every frame, I calculate the difference in position. If the change is normal, it plays footsteps at normal speed; if the change is small, there is a noticeable delay between footsteps; and if there is barely any change, no footsteps play at all. For some reason, even when walking into a wall, there is still some positional difference, so the change must pass a threshold before any footstep audio is played. A rough sketch of this logic is shown below.
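Here is a small sketch of that footstep-pacing idea in plain Python (the project itself is in Unity/C#); the thresholds and intervals are made-up placeholder values:

```python
import math

# Placeholder tuning values; the real ones would be adjusted by hand in Unity.
STILL_THRESHOLD = 0.002   # per-frame displacement below this: treat as standing still
SLOW_THRESHOLD = 0.010    # below this: still stepping, but with a longer delay
NORMAL_INTERVAL = 0.5     # seconds between footsteps at normal walking speed
SLOW_INTERVAL = 1.0       # seconds between footsteps when barely moving

def footstep_interval(prev_pos, curr_pos):
    """Return the delay until the next footstep sound, or None for silence."""
    dx = curr_pos[0] - prev_pos[0]
    dz = curr_pos[1] - prev_pos[1]
    displacement = math.hypot(dx, dz)
    # Pushing against a wall still produces a tiny positional change, so anything
    # under the threshold counts as not moving at all.
    if displacement < STILL_THRESHOLD:
        return None
    if displacement < SLOW_THRESHOLD:
        return SLOW_INTERVAL
    return NORMAL_INTERVAL
```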
Next week, I plan on changing how the phone is held when interacting with the app, adding a voice to tell users which direction they are facing, fixing how users walk backwards, and controlling the phone vibrations.
Week 5
I implemented most of what I wanted. I added two different modes of control: the original and a swipe-based one.
For the original control, the user presses on the screen to move and tilts their phone upwards to move backwards. The tilting part turned out to be difficult to implement due to how Unity calculates Euler angles. I might do away with the tilt feature since, according to the tester, tilting does not feel appropriate for it. Instead, I will combine this with features from the swipe control.
The swipe control has four ways to move the user: swipe up moves the avatar forward, swipe down moves the avatar backwards, and swipe left or right turns the user in that direction. Swiping up and down works great. The issue is left and right. The way it's implemented, a left or right swipe turns both the body and the head. This is necessary because the head strictly follows the AirPods, so I need to work around that and make it seem like the AirPods are moving. This should have worked, but for some reason, sometimes when the user swipes left or right, the head turns longer than the body. I have no idea what is causing this or how to fix it, but this is my objective for next week.
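For reference, classifying a swipe into those four commands can be sketched like this in plain Python (the actual project does this in Unity/C#; the pixel threshold is a made-up value):

```python
def classify_swipe(dx, dy, min_distance=50.0):
    """Map a touch delta (in pixels, y up) to a movement command, or None if the swipe is too short."""
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return None
    if abs(dy) >= abs(dx):
        return "move_forward" if dy > 0 else "move_backward"
    return "turn_right" if dx > 0 else "turn_left"
```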
I also plan to implement another means of control: buttons. Since swiping might be difficult for some users, especially those who are not used to technology or have problems with their hands, I can just add four buttons within range of the user's thumb. I'm still thinking this through, but more options are better.
Another problem is that the user still seems to get stuck on walls. I'm just going to increase the collider range, but a better method needs to be thought up.
Week 6
I more or less have everything set up. There is still a problem with my program where the in-game cane seems to drift away from the avatar. This needs some testing, but it seems like the problem has to do with the camera not picking up a distinct background, which makes it assume the phone is moving, and therefore the in-game cane moves. Again, I'm not sure, but it needs more testing.
Everything else is pretty much done. The participants will hold the phone like a cane and explore a room for x amount of minutes. At the end, they will be asked to visualize the room (such as by drawing it). The main focus is testing spatial memorization, so the material of the objects is not that important; just knowing that something is there is enough. Participants will test two different versions of movement: one where the user needs to turn with their body, and another where the user turns with a swipe/button. This will test whether turning with their body or turning with a button/swipe affects their spatial awareness. They will test three different rooms with increasing difficulty (i.e., more furniture). I'm not sure if we want to change the room size/shape or keep it the same, but so far it's the same room. Participants will be graded on how many pieces of furniture they can correctly position in their drawing.
Week 7
I managed to finish and fix most of the problematic parts of my program, so it should not break or produce some weird bug/error in the middle of testing. I also met with the participant to get some feedback, and they seemed to like it. So now it's a matter of testing it with more participants. In the meantime, I will work on my paper, think of the questions, and add small features that I think may be useful.
Week 8
Since I am continuing to work on this project, I am adding more features to make testing easier, such as randomizing the room size and randomizing object placement. This will allow for more variety in testing and make sure that participants don't get used to the same room size. A rough sketch of the randomization idea is shown below.
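A small, hypothetical sketch of that randomization in plain Python (the real version would live in Unity/C# and also need to check for overlaps and colliders); all sizes and counts are placeholders:

```python
import random

def generate_room(min_size=4.0, max_size=10.0, n_furniture=5, margin=0.5):
    """Pick a random room size, then scatter furniture inside it, keeping a margin from the walls."""
    width = random.uniform(min_size, max_size)
    depth = random.uniform(min_size, max_size)
    furniture = [
        (random.uniform(margin, width - margin),
         random.uniform(margin, depth - margin))
        for _ in range(n_furniture)
    ]
    return {"width": width, "depth": depth, "furniture": furniture}

print(generate_room())
```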
Final Paper:
Richard Yeung, Oyewole Oyekoya, and Hao Tang. 2023. In-Place Virtual Exploration Using a Virtual Cane: An Initial Study. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 45–49. https://doi.org/10.1145/3626485.3626539 – pdf
Diego Rivera: Neural Network models in Virtual Reality
Diego Rivera, Iona College
Project: Neural Network in Virtual Reality through Unity 3D
Mentors: Lie Xie, Tian Cai, Wole Oyekoya
Week 1:
For week one, research on transformers was done, along with reading on how to implement Google Cardboard in Unity and get it working. Research on how PyTorch works was also done this week. Next week, a Unity scene will be developed so that the neural network models can be implemented in the scene. More research and development will be done this week to have a presentable prototype next week.
Week 2:
This week, development of the Unity scene was started, and a majority of the visual aspect was finished. I was able to use Unity's new Input System, which allows an XR controller and a regular console controller to move the object, making it easy to implement on other platforms. Creating the transformer model is still in progress; more debugging is needed.
The video above showcases the placement model and how the controls work with a controller. XR controller support is added in the game, but testing must be done to see whether it works and is calibrated correctly. The box is shown rotating on its x and y axes, and decreasing and increasing in size as well. Next, UI elements will be added, debugging will continue, and a functional transformer model will be created in PyTorch.
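To give an idea of what that PyTorch transformer could look like, here is a minimal sketch of a transformer-based image classifier; the architecture, sizes, and hyperparameters are illustrative assumptions, not the project's actual model:

```python
import torch
import torch.nn as nn

class TinyImageTransformer(nn.Module):
    """Sketch: split a 28x28 image into 7x7 patches, encode them with a
    transformer encoder, and classify with a linear head."""

    def __init__(self, patch=7, dim=64, n_heads=4, n_layers=2, n_classes=10):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(patch * patch, dim)            # one patch -> one token
        self.pos = nn.Parameter(torch.zeros(1, (28 // patch) ** 2, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                                     # x: (B, 1, 28, 28)
        b = x.shape[0]
        # Cut the image into non-overlapping patches and flatten each one.
        patches = x.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        patches = patches.contiguous().view(b, -1, self.patch * self.patch)
        tokens = self.embed(patches) + self.pos
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))                 # average-pool the tokens

model = TinyImageTransformer()
print(model(torch.randn(4, 1, 28, 28)).shape)                 # torch.Size([4, 10])
```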
Week 3:
I was able to obtain a Quest and test out the game; however, there are many bugs and errors I need to fix, which is the main objective to get the project working and running. Next week the bugs should be fixed, and the project should run and work properly.
Week 4:
Debugging was finished and the CNN scene was developed. For the development of the transformer scene, an ONNX file was created and is ready to use in Unity for an experience similar to the CNN scene. Audio is set and the controllers are interactive; a clipping issue was found in the CNN scene, but that will be fixed later, as developing the transformer scene is next and should be the priority. A downside of the CNN scene, and possibly the transformer scene, is the need to be in Link mode: the standalone application will not work because File Explorer is needed to get the models to work.
The images above show the model before and after receiving weights and inputs.
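For context, exporting a trained PyTorch model to an ONNX file that Unity can consume typically looks something like the sketch below; the stand-in model, input shape, and file name are placeholders, not the project's actual export script:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the trained network; in the actual project
# this would be the trained CNN or transformer model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

dummy_input = torch.randn(1, 1, 28, 28)        # example input shape
torch.onnx.export(
    model, dummy_input,
    "model.onnx",                               # placeholder file name
    input_names=["image"], output_names=["logits"],
    opset_version=11,
)
```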
Week 5:
Development of the transformer scene using the ONNX file has started; some issues were encountered and bugs appeared. Once the scene is implemented, quality assurance will begin.
Week 6:
Developing a working transformer model was a success: using the ONNX file allows for a simple interactive transformer model. However, I am unable to display a visual model of the transformer like the CNN visualizer. The ONNX file could be converted into a JSON file, but the code used in the CNN scene is not compatible with it; as a result, a visual interactive scene was created using the ONNX file directly. The scene allows the user to drag a photo and place it on a black square, which takes in the data and lets the user run the model and get an output.
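Outside of Unity, that "drop in an image, run the model, read the output" step can be illustrated with onnxruntime in Python (the scene itself runs the model inside Unity; the file name and random image below are placeholders):

```python
import numpy as np
import onnxruntime as ort

# Load the exported model and run a single image through it.
session = ort.InferenceSession("model.onnx")              # placeholder file name
input_name = session.get_inputs()[0].name

image = np.random.rand(1, 1, 28, 28).astype(np.float32)   # stand-in for the dropped photo
(logits,) = session.run(None, {input_name: image})
print("predicted class:", int(np.argmax(logits)))
```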
Further QA will be done, along with adding more information about the models.
Week 7:
The final touches on the transformer scene have been made, and a small demo shows how the model runs. Not included in the video: a scroll bar with information about the transformer model was created to give more background on the model and the project.
Week 8:
Development is finished; the presentation was today, July 29th, 2022. I learned a lot in this REU and now understand more about machine learning and deep learning. No further updates were made this week, just preparation for the presentation and finishing writing the report.
Final report submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Diego Rivera. 2022. Visualizing Machine Learning in 3D. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565688 – pdf