
Category Archives: VR-REU 2023

Immersive Remote Telepresence and Self-Avatar Project

Sonia Birate, University of Virginia
Oyewole Oyekoya, CUNY Hunter College

Week One

This week, in addition to peer bonding, a Hunter College tour, and an introduction to Paraview, I concentrated primarily on finalizing my research proposal. Dr. Wole and I were able to narrow down a project that will explore the possibility, feasibility, and effects of controlling avatars in Virtual Reality using actual facial expressions and eye movements from individuals. The goal is to investigate the realism and believability of avatars, particularly when another individual’s facial expressions are mapped onto that avatar. By properly mapping facial expressions and eye movements onto the avatars, we seek to aid the creation of a more realistic and captivating VR experience that closely mirrors real-life interactions. After mapping the facial expressions, does the avatar retain its believability, especially to individuals familiar with the person being represented? Overall, we were able to discuss the game plan, which mostly comprises utilizing the Reallusion software as well as a possible user study.

Week Two

I dedicated my efforts to acquainting myself with Reallusion, with a particular focus on exploring its headshot feature, as depicted below. While attempting to recreate an avatar character, I encountered some challenges in capturing every detail accurately, especially when it came to the eyes. Nonetheless, I considered this endeavor as a preliminary software test, so I remain unfazed by the outcome. Concurrently, I commenced working on my abstract and literature review, successfully locating ten relevant sources to incorporate into the research paper’s related work section. Additionally, Trinity and I went to see the new Spiderman movie for fun, and we both really enjoyed it.

Remaking a character in Avatar through the headshot feature.
 
 
Week Three
This week has proven to be quite eventful. I created a remarkable avatar resembling myself thanks to the headshot plugin found in Character Creator. However, perfecting its resemblance required careful adjustments and a significant amount of time. It dawned on me that even the most subtle nuances, like a delicate play of shadows on one’s face, can profoundly influence the avatar’s resemblance, and I am currently considering reworking my avatar to achieve a truly accurate depiction. Additionally, I swiftly immersed myself in the LIVE App, effortlessly mapping a range of expressions onto my avatar. This immersive experience has provided me with a comprehensive understanding of my project, fostering a sense of both growth and satisfaction. I also worked on my methodology. For next week, I am hoping to start getting a few facial expressions from different individuals mapped onto my avatar.
 
 
 
Week Four 
Dr. Wole, Trinity (a fellow summer researcher), and I each had our expressions mapped onto my avatar. To test whether people could distinguish between the three avatars, we conducted a small demonstration during the midterm presentation. It was an intriguing experience because most individuals had difficulty discerning the dissimilarities. Interestingly, while performing the facial mapping, I observed that Trinity’s facial expressions appeared more natural, despite her being considered the unfaithful representation. I successfully captured the seven universal expressions (neutral, happy, sad, surprise, anger, disgust, fear) from both the volunteers and myself, which were then mapped onto my avatar. In the upcoming week, I intend to replicate and enhance the research demonstrations by utilizing better pictures and videos. Additionally, I plan to create a Google Form that should be operational by Friday.
 
 
 
The image below shows Dr. Wole, Trinity, and me mapping our expressions using the LiveFace application on my iPhone. I was avatar A, Trinity was avatar B, and Dr. Wole was avatar C (this is our sad expression).
Week Five
This week, I re-recorded the avatar and individual videos to replicate and improve on the study demonstrations. I also worked on creating a survey draft. Overall, we opted to re-record the videos because the previous individual videos captured on my iPhone had a wireframe, which Dr. Wole didn’t prefer. As a result, next week I will re-record the avatar videos as well as the individual iPhone videos and finalize my survey so I can send it to participants and collect the user-study portion of the research.
 
Week Six

Over the course of this week, my primary focus was on enhancing the quality of the videos required for the survey. I was faced with a significant undertaking that revolved around the meticulous re-editing of a substantial number of videos, 42 in total. This number was evenly split: half of the videos consisted of recorded avatar clips, while the other half comprised individual clips captured using iPhones. To ensure a seamless user experience, I segmented these videos into shorter, more digestible clips spanning approximately 3 to 4 seconds each. These clips were subsequently uploaded to YouTube, which provided a convenient platform for effortless integration into the survey. This approach aimed to streamline the process and enable survey respondents to conveniently view and respond to the video content. Subsequently, a survey draft was created incorporating all 42 clips, utilizing a forced-choice answer method and prompting users to match each individual’s facial expressions with the corresponding avatar. We intend to send out the survey to individuals next week.

Week Seven

During this week, I completed the design of my survey and distributed it to my REU cohort, mentors, and other potential participants. As of Sunday, I have received 20 responses, all of which are valid and can be used for analysis. I dedicated time to working on the user study section using Overleaf. Moving forward, my next steps involve initiating the data cleaning and analysis phase, along with defining the types of data and their respective categories. I am currently in the process of determining which tests I will employ for the analysis. Additionally, I aim to promptly finalize the results and analysis section on Overleaf.

Week Eight

I completed the results and analysis part of my paper and was able to obtain a graph that displayed the survey results. I developed a PowerPoint presentation to display my results, which I shared with the team. I am on schedule to submit the paper to SIGGRAPH Asia. Below is the chart generated from the results of my survey. Overall, the unfaithful representations were correctly matched slightly more consistently than the faithful representation.

 

Overall, I loved this summer so much and doing research at Hunter. I would do this all over again if I could. <3

2023 VR-REU students dinner

Final Paper:
Birate Sonia, Trinity Suma, Kwame Agyemang, and Oyewole Oyekoya. 2023. Mapping and Recognition of Facial Expressions on Another Person’s Look-Alike Avatars. In SIGGRAPH Asia 2023 Technical Communications (SA Technical Communications ’23), December 12–15, 2023, Sydney, NSW, Australia. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3610543.3626159 – pdf

Arab Data Bodies Project

Lamya Serhir, CUNY Baruch College

Project: Arab Data Bodies Project

Mentors: Laila Shereen Sakr and Oyewole Oyekoya

Week 1:

The first week primarily consisted of meeting the other students, some of the mentors, and Professor Wole, in addition to writing the project proposal. I read up on the research another student did last year for Arab Data Bodies to see how I could build on his work. Last year, he used the archive housing all the data, known as R-Shief, to analyze the frequency of tweets, language used, and general sentiment. A UML diagram of attributes like user, language, URL, tweet ID, and hashtag facilitated this analysis by organizing the data points. Ultimately, he used the sentiment output from such tweets to animate facial features of the avatar.

I would like to focus on making avatars of prominent protestors who were in Tahrir Square, the center of political demonstrations in Egypt. Professor Wole recommended creating the scene such that elements of it could be used in any site of major protests, such as Alexandria and Suez. To do so, we can create crowds of people chanting and holding up signs during the protest.

The next steps are for me to get comfortable using Unity: in addition to beginner tutorials, there is a tutorial on crowd simulation that would be useful in my project. Another consideration is whether data from R-Shief archive will be beneficial, and if so, what kind of data that would be. I was thinking of basing the avatars on the most shared or viewed images or videos taken from the protests at Tahrir Square, but there are plenty of visuals available on the internet that I could use as well. 

Week 2:

This week, I focused on researching previous work done regarding VR documentaries. I found evidence about what components of VR increase the user’s sense of connectedness and how immersive documentaries create more positive attitudes towards human rights as opposed to written mediums. There is also research about the importance of social media in catalyzing the Arab Spring that I plan on using for background.

This week, I’d like to meet with my mentor to narrow down what aspects of the protests I should focus on. I plan on completing a crowd simulation that I can use to replicate a protest and finding assets within Unity that would be applicable to my project. Additionally, I’ll continue to search for relevant literature as the project progresses.

Week 3:

Professor Sakr’s team pivoted from creating a VR documentary to a video game. I learned more about the concept and inspiration artwork behind the video game, and will model my simulation after the Middle sovereign. In the world of the Arab Data Bodies video game, there are five sovereigns, each represented by a color. The Middle sovereign is represented by gold, and its theme is royalty and status. I have the necessary components to make avatars move in a crowd-like fashion, so the next step is creating the environment in addition to the avatars.

Week 4:

I began creating the environment for the crowd simulation to take place (as depicted in the photos below). After consulting with the team and Professor Wole, we reached a consensus that it would be best to focus on avatars for the remainder of the project. The next step is to create avatars using generators like Metahuman and perhaps existing avatars from open-source websites. There are three types of avatars I plan on creating: one with a human head, another with a robotic head, and a third with a half-human and half-robotic head.

Week 5: 
This week, I familiarized myself with Blender so I could create the avatars for my user study. I experimented with different techniques, such as editing and sculpting to reach a desired output. I pulled aspects of the avatars such as the head and body in addition to accessories from SketchFab, which has 3D animations that can be downloaded and imported to Blender.
Week 6:
After discussing with Professor Wole, it was concluded that my project would be a developmental project since a user study would not be applicable. Therefore, I’ll be focused on writing for the following weeks and including some studies about storytelling in video games. Over the weekend, I will also import the avatars to the scene in Unity I created and animate them.
Week 7:
I spent the week revisiting my related works since my paper will be related to storytelling using VR in documentaries and video games. Most studies focus on one or the other; additionally, I’ll have to include the role of social media in the Arab Spring. I tried to import the blend files to Unity, but ran into some issues transferring the textures. Moving forward, I’m focused on wrapping up the results and finalizing my paper.
Week 8:
Although this week was hectic, it was great to hear about other people’s projects and results in addition to taking their user surveys. I learned a lot over the course of this REU about my interests, strengths and weaknesses. At times, I felt like I wasn’t headed in the right direction or got frustrated when I wasn’t making as much progress as I would have liked. But I made sure to communicate my concerns with Professor Wole throughout and pivot my concept when it was clearly not working out. Although I didn’t expect to be working on a video game going into this project and found difficulty creating a thesis from my work since it was developmental and not as technical as the other projects, I think I handled the circumstances as best I could and am happy I stepped outside my comfort zone. I enjoyed the process of research and would have liked collecting and analyzing data. I hope to take on more opportunities that will allow me to do both. I also enjoyed meeting and getting to know the other students and the professor, in addition to members of the Arab Data Bodies team, all of whom were very kind, resourceful and intelligent.

Immersive Remote Telepresence and Self-Avatar Project

Trinity Suma, Columbia University 

Oyewole Oyekoya, CUNY Hunter College

Week 1

I first met my REU cohort the Monday after I arrived in NYC, bonding over bumper-less bowling at Frames Bowling Lounge.  Our initial meeting was refreshing; I was excited to work with all of them and make new friends.  On Tuesday, my real work picked up after a quick tour of Hunter and a meet-and-greet with all the REU mentors.  I began discussing directions for my project with Dr. Wole and outlining a project proposal.  Wednesday was the first session of VR, AR, and Mixed Reality, a class taught by Dr. Wole that I and the rest of my cohort are auditing.  For the rest of the week, I finalized my project proposal, defining my project’s uniqueness and conducting a preliminary literature review.  We wrapped up the week learning how to use Paraview and presenting our proposals.

Week 2

My work picked up this week as I began to familiarize myself with Reallusion to design the avatars for my study.  My project is ideally going to follow a bystander intervention scenario set in a pub/bar environment.  Below is my idealized script, but I will likely cut out some dialogue for simplicity. 

Study dialogue illustrating a bystander intervention scenario at a bar.

My scenario has five characters:

  • Person A: the one bullying B
  • Person B: the one being bullied by A
  • Person C: another bystander
  • Person D: bar owner 
  • User

Below are also preliminary avatar designs for persons C and A, respectively.  I am not designing an avatar for the user since it is ideally in the first person.  I am also considering not designing one for person D for simplicity.  Only person B will be made from a headshot, and it will resemble someone the user knows.  This week, I also began working on my paper, beginning with the introduction and literature review.  Next, I want to continue creating my avatars and animate/record the audio.

 
 
Work was not all I did this week, however!  Sonia and I watched the new Across the Spiderverse movie together before visiting the NYPL for the Performing Arts to get some work done.  I also attended the CUNY SciCom Symposium at the ASRC with my peers where we listened to various research talks and learned more about presenting our research.
 
Week 3

Progress was slower this week.  I redesigned my avatars for persons A and C and also designed an avatar for person B.  Person B is modeled after myself (see below).  I’ve decided that, for simplicity, I will not design a character for person D.  I began working with some audio recordings as well.  I debated using Audacity, AudioDirector, and Voxal to edit my audio but I chose Audacity since I am most familiar with it.  I began importing my characters into iClone as well to sync their audio. 

The overall direction of my project has changed since last week.  Dr. Wole and I discussed and decided that we are going to focus on how pitch and speed affect users’ perceptions and choices in a bystander scenario.  This will allow creators to gauge how avatars’ voices influence users’ experiential fidelity. 

The week ended with a bang at Webster Hall where I saw CRAVITY, one of my favorite musical artists.  Later that weekend, I saw Hadestown with my uncle for Father’s Day.

Week 4

Welcome to week 4!  I can’t believe I am already halfway through this experience.  This week I finished animating my avatars on iClone with audio recordings of both my voice and my brother’s voice.  There has been more discussion about the direction of my project, but in the meantime, I worked on creating pitch variations for my audio.  Each clip has been pitched both up and down by 10%.  I chose 10% since it seemed like a good baseline to start; the clips did not sound incredibly unrealistic, but the difference was still noticeable.  Below is a sample line from the aggressor.  The first clip is the unedited recording, the second clip is the pitched-up recording, and the third clip is the pitched-down recording. 

We have decided not to abandon the bystander scenario I wrote.  Instead, it will be used as the medium to convey the altered audio.  The scenario will be presented in a survey.  The study participant will watch the scenario play out by reading the narration and watching video clips of the animated avatars.  In some cases, the participant will be presented with multiple variations of the same clips (this procedure is subject to change) in which they will have to rank the clips based on their level of aggression or assertiveness, depending on the character.  This study will allow future developers to gauge how to record and modify their audio to best convey their desired tones. 

Week 5

My progress was slower this week as we finalized the focus of my project.  After much discussion, we are going to study how various combinations of over-exaggerated, under-exaggerated, and average facial expressions and tones affect survey participants’ perceptions of aggressiveness and assertiveness (depending on the character being evaluated).  A diagram of each combination is shown below.  Nevertheless, this week I worked with Sonia and Dr. Wole to record the lines of the aggressor and bystander in my scenario with their lookalike avatars.  To maintain the lookalike avatar concept, we have decided not to use the avatars I designed from the neutral base, nor the audio my brother recorded.

In addition to work, I had a lot of spare time to myself, which was very healing.  I visited the MET and Guggenheim for free and met up with a friend from home.  On Thursday, the REU cohort attended a lunch cruise where we had great views of the Freedom Tower, Brooklyn Bridge, and the Statue of Liberty. 

Week 6

I had less work to do this week, but I expect it to pick up very soon.  I focused on editing all the videos of the lookalike avatars I had filmed with Sonia and Dr. Wole.  Sonia played the bystander while Dr. Wole played the aggressor; each of them filmed a variation where they underexaggerated and overexaggerated their words and facial expressions, in addition to a neutral version.  From there, I exchanged the audio on each video to create 9 different variations of their words.  See the diagram above.  Here is one of the videos.  Once my videos were approved and we decided on a survey design, I created my survey in Qualtrics and am preparing to send it out early next week or sooner.

Luckily, I was able to take advantage of the holiday weekend and joined my family in Atlantic City, NJ.  Later in the week I also went to see TWICE at Metlife Stadium. 

Week 7

This week, I finalized my survey design and sent it out to my REU cohort, the mentors, and other potential participants.  As of Friday afternoon, I have 22 responses, but not all of them are usable since they are incomplete.  I am beginning the data cleaning and analysis stages.  Given my data type and how they are categorized, I am still figuring out what tests I will use.  Dr. Wole and I have discussed non-parametric Friedman tests and two-way repeated measures ANOVA tests.  Hopefully, it will be finalized this weekend.  I have also been researching new papers that are applicable to the emotional recognition aspect of my study to include in my introduction and literature review.  

This week, my cohort also visited the ASRC again to tour the Illumination Space and the facility itself.  We also tested Sabrina’s AR app which was very fun!  I had enough time that day to visit Columbia and use one of the libraries to get some work done, which was very nice.  This weekend, I am going to Atlantic City again for my grandma’s birthday as well as taking a class at the Peloton studio near Hudson Yards. 

Week 8

Happy last week!  Thank you for following me throughout the last 8 weeks and reading my blog posts!  Over the weekend, I finally updated the introduction and literature review sections of my paper as I mentioned last week.  This week was one of my busiest as I balanced packing up my room to move out with finishing some preliminary data analysis to include in my final presentation.  Since we have yet to analyze the statistical significance data we ran, I looked at the mean and median responses for each question type.  Our results are following our original hypotheses; you can find the data in the slideshow below.  On Friday, I ground out my results and discussion sections for my paper and finished packing to go home Saturday.  I have had an amazing time this summer and will miss all of my cohort members!!

Presentation: VR REU – Final Presentation 2

Final Paper:
Trinity Suma, Birate Sonia, Kwame Agyemang Baffour, and Oyewole Oyekoya. 2023. The Effects of Avatar Voice and Facial Expression Intensity on Emotional Recognition and User Perception. In SIGGRAPH Asia 2023 Technical Communications (SA Technical Communications ’23), December 12–15, 2023, Sydney, NSW, Australia. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3610543.3626158 – pdf

Visualization of Point Mutations in Fibronectin type-III domain-containing protein 3 in Prostate Cancer

Samantha Vos, Virginia Wesleyan University

Week 1

 

This week I developed my project proposal with Dr. Olorunseun Ogunwobi. We decided to develop a visualization of the cell lineage plasticity process using either VMD or Paraview. To do this, I will find either the same or extremely similar miRNA, mRNA, and oncogenes that Dr. Ogunwobi suggested I use for the research. I began to search for these components in databanks that had files compatible with VMD or Paraview. So far it has been difficult to find compatible files for these molecules. PDB is a great databank for proteins, but since I’m dealing with DNA, I need to find a databank that provides compatible files for genetic components. I’ve been reading Dr. Ogunwobi’s previous publications to learn more about this phenomenon’s purpose and process. This project will be focused on combining this specific biological process with the coding to produce a visualization that can be used to educate people.

We had class this week, where we learned about the definitions, history, and functionality of VR, AR, and MR. We also had a self-paced lab where we learned more about Paraview. There are also lectures on VMD, so I will utilize both of these next week. I have begun to input proteins into VMD and Paraview so I can learn to upload files and manipulate molecules in this software.

These are pictures of the protein 4LSD, a protein I used to test the functionality of Paraview and VMD.

 

Week 2

This week I found a compatible file for FNDC3, which will be the target of my microRNA. I learned how to use LaTeX and Overleaf to write my research paper, and added my literature review to the bibliography. I also completed the required CITI certification for Responsible Conduct of Research (RCR) and HSR for Undergraduate Students. On Friday I attended the CUNYSciCom: Communicating Your Science Symposium to learn about how to teach others about your research, for both general and scientific audiences. I found this experience to be extremely useful, as presenting research can itself be difficult without the added pressure of making sure your audience understands it. I will definitely be using some of the tactics used by the presenters in my future research presentations and posters. This week I still had some trouble finding files that are compatible with VMD, so I will be asking Dr. Ogunwobi and Dr. Wole for advice as to how to find these files. When I do find them, I will be using them to help educate others about miRNA and how it can be used as a cancer treatment.

Week 3

This week I focused on finding compatible files for mRNA. As far as I know, there is a very specific file type that can hold DNA and RNA and be uploaded into VMD. However, I have absolutely scoured the internet and have yet to find compatible files. Due to this complication, my project has been adjusted. I will be taking the FNDC3 protein and mutating it by changing different amino acids in different positions to change the protein’s functionality or structure. I will be using mutations that have been found in prostate cells and that have a correlation with cancer proliferation. I will then be comparing the mutations to their original protein and demonstrating how the mutations affect the prostate cells. I have already found 9 mutations that lead to cancer in prostate cells, and next week I will be mutating the original FNDC3 protein in VMD with the amino acid adjustments.

 

Week 4

This week I focused on trying to find the binding or active site of FNDC3 so that I can make mutations to its sequence. The 9 mutations that I had previously found were not compatible with the VMD software, so I will be working on finding new mutations in FNDC3. I created a mutation in VMD and labeled the mutated amino acid for my midterm presentation on Friday. Everyone’s presentations looked amazing and very well developed. It was fascinating to learn about other people’s projects and how they have overcome obstacles they’ve encountered. I am working on developing an interactive video with my mutations so that the viewer can move and examine the mutations. I will be looking for more mutations and watching lots of videos next week to learn how to do this.

Week 5

This week I found 8 mutations that work with my protein. I applied these mutations using the Mutate Residue feature in VMD and saved all of their visualization states. I also made visualization states of the originals with the same position highlighted so that it would be easier to compare them to the mutations. I also figured out how to have multiple visualization states in one window, so now I can create my larger, final visualization state to use for my interactive video. Next week I will be working on my interactive video and sending it to the other participants in the program to test its efficiency and capabilities. I will also be working on creating a survey for them to fill out so I can get some feedback on how to better my video and perfect it before the end of the program. I have learned so much throughout this program and I am excited to keep learning throughout these final weeks.

Week 6

This week I experienced some difficulty in recording my video. I struggled with trying to figure out how to create an interactive video, and I learned that I do not have the proper computer for ray tracing, so I cannot create an interactive video on my computer. I worked on my paper this week to make sure it was designed exactly how I pictured it, made small changes, and found more literature sources for my paper. I decided to work on styling my representations of my mutations to make sure that when I did make my presentation, it looked perfect. Here are some pictures of my mutations. Next week I will be recording a video and sending it out for data collection.

 

Week 7

I recorded my video successfully, which was an absolute relief. I now have to collect data and analyze it. I will be working on my survey for my presentation and finishing up my paper. I have faced many challenges during this project, and I am happy that I can finally say I have completed it. I am proud of my work, and I am excited to have other people test it. I used many different features of VMD to create my visualization, like Mutate Residue, the sequence viewer, representations, and atom labeling. I enjoyed working toward this project’s completion, and now I will be working on a PowerPoint presentation to sum up my project at the VR-REU Symposium 2023. I am very excited to present my work!

Week 8

This week I finished collecting data from my survey and analyzed it, finally adding it to my paper. I also presented my work at the VR-REU Symposium 2023, and I really enjoyed teaching others about my project and the intricacies of protein structure. My fellow students all had wonderful presentations, and I was truly impressed by all of the work they had done. I think it was a successful presentation, and afterwards we got together and had lunch as a final group celebration of our work. On Friday, we came together one last time to finish our papers, submit them, and put all of our data files into GitHub. I submitted my paper to the ISS conference, and I really hope it gets accepted. I felt happy about completing the program, but also a small sense of sadness at its ending. I truly had a great experience, and I learned a lot about coding and the computer science world. I believe this experience has helped me grow, and I will never forget my classmates, Dr. Ogunwobi, or Dr. Wole.

Final Paper:
Samantha Vos, Oyewole Oyekoya, and Olorunseun Ogunwobi. 2023. Visualization of Point Mutations in Fibronectin Type-III Domain-Containing Protein 3 in Prostate Cancer. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 10–13. https://doi.org/10.1145/3626485.3626531 – pdf

Habin Park: The Community Game Development Toolkit

Project: The Community Game Development Toolkit – Creating easy-to-use tools in Unity to help students and artists tell their story and show their artwork in a game-like format.

Mentor:  Daniel Lichtman

About Me: I’m a CUNY BA student majoring in Game Design and Entrepreneurship at Hunter College.

Week 1: Project Proposal and Toolkit Exploration

This week, I had the time to explore and delve into the toolkit developed by Daniel Lichtman. This toolkit was specifically designed to aid in various projects. To gain a deeper understanding of its functionalities, I decided to open and explore an example project using the toolkit.

 

Utilizing the tools provided by Daniel Lichtman’s toolkit, I created my own custom scene with a custom HDRI and some art assets.

Using these tools I learned how they were supposed to be used and the current process for adding them to a Unity project and making a new presentation with them.

Additionally, I worked on crafting a comprehensive project proposal that outlined the details of my research paper. This involved carefully articulating the scope, objectives, and methodology of my study, ensuring that the proposal provided a clear roadmap for my research endeavors. It also included a timeline so that I could properly plan out and schedule my progress.

Lastly, I made progress in completing the CITI certification process for research protocols.

Week 2: Literature Review and SciComs Symposium

During the second week of REU, I did some extensive paper reading as I started compiling my related works and literature review. This crucial step allowed me to gain a comprehensive understanding of the existing research related to my project. To further advance my project, I took the initiative to set up a VR Unity project and installed the necessary software to begin developing a VR locomotion system in Unity, utilizing an Oculus Quest 2.

To streamline my work and ensure efficient collaboration, I transferred all my current project components, including the project proposal, abstract, and relevant sources, to Overleaf from Google Docs. With this step, I started writing my research paper, which I eagerly began. The initial section I tackled was the related works, which encompassed a section on VR art design, VR toolkits, and previous papers highlighting the comparison between VR and 2D mediums of communication.

Additionally, I had the privilege of attending the CUNY Student SciComs Symposium, an amazing event where student scientists presented their research to two distinct audiences: their peers and the general public. These short presentations not only included contextual descriptions of their work but also incorporated visual aids to facilitate comprehension. Engaging in a lively Q&A session with the audience further enriched the experience.

Among these presentations, the one that stood out the most to me was on malaria. The researcher shed light on how the malaria parasite infects and causes harm to the liver, which currently lacks effective cures. To address this pressing issue, the scientist is trying to replicate liver damage in mice, paving the way for testing potential cures. This presentation exemplified the innovative approaches being pursued to tackle real-world challenges, leaving a lasting impression on me.

Week 3:

During this third week, I have been engaged in conducting research on VR technology. One significant milestone I achieved was writing the methodology for my upcoming VR research paper.

Additionally, I started work on a VR-compatible Unity project where I created a prototype layout for an art project.

Central to this project is the implementation of a VR locomotion system, which aims to enhance user immersion and interaction within the virtual environment. Furthermore, as part of this project, I learned how to directly connect an Oculus Quest 2 headset to the Unity project.

Lastly, I started learning how to use VMD to visualize molecules and atoms.

Week 4:

This week I worked on the methodology section of my paper, implementing some of the feedback from my mentor. In addition, I created a demo level and populated it with art to showcase what a potential art project would look like. It was a great way to bring my project to life within the virtual world. By carefully selecting and placing the art assets, I aimed to create an immersive experience that would captivate the viewer’s imagination. I first found some free assets on the Unity Asset Store to create a simple courtyard and environment. I then found some copyright-free art to place in the level to resemble what an artist would potentially do in their work.
My level looks like the following.

To further progress the toolkit, I integrated the Oculus Quest 2 headset. I successfully connected the headset and got the Unity project to work seamlessly inside it. Now, the viewer could step into the virtual environment and feel as though they were physically present within the level.

Week 5:

During the fifth week, my main focus was on developing and adding to the website that contains all the necessary instructions for the VR project toolkit. This involved detailing the setup process and providing step-by-step guidance for implementing the toolkit with screenshots and text. Additionally, I compiled a list of settings for Professor Lichtman to incorporate into the toolkit, aiming to eliminate some of the setup steps.

In order to ensure the toolkit’s effectiveness, I sought assistance from artists whom I am acquainted with. I requested their participation in testing the toolkit for the study, and fortunately, three of them agreed to help out.

Aside from the project work, we had an enjoyable experience as a team during this week. We embarked on a delightful river lunch cruise, which granted us the opportunity to admire the captivating Manhattan shoreline and the iconic Statue of Liberty. The lunch was delightful, and it provided a pleasant setting for us to get better acquainted with one another and learn about everyone’s well-being.

Week 6:
During this week, we had the July 4th holiday, which caused a slight delay in sending out the instructions to our case study testers. However, on July 5th, I promptly distributed the instructions and provided a comprehensive explanation to the testers regarding the purpose of the study and the specific objectives of the toolkit being tested.

In addition to providing instructions, I created a survey for the testers. This survey was designed to gather both quantitative and qualitative data, allowing us to gain a better understanding of the testers’ experiences. To ensure the best responses, I included open-ended questions, giving the testers the opportunity to provide detailed feedback.

Lastly, as we decided to focus on a single study instead of two, I dedicated some time to rewriting the research paper that I had previously prepared. This adjustment allowed me to revise the paper for the new study approach, ensuring the coherence and accuracy of the paper.

Week 7:
During the seventh week, my main focus was to provide assistance and guidance to the users/testers of the toolkit as they carried out their testing. I dedicated my time to addressing any questions they had and ensuring a smooth testing process for them.

In addition to supporting the testers, I devoted some time to organizing the structure of my paper. Specifically, I outlined the sections for the User Study, Results and Analysis, Discussion, and Conclusion. This preparation gave me a clear way to present my findings.

As I awaited the availability of the necessary data, I also began working on writing the User Study section of the paper. Since I did not have the data required for Results and Analysis at this point, I focused on writing the details of the User Study itself especially on how it was conducted.

To ensure the effective presentation of the data, I consulted with Dr. Oyewole. Together, we discussed utilizing a tabular format to convey the findings, considering the small number of participants involved in the study. This approach would help provide a concise and organized presentation of the data.

Towards the end of the week, I received the data I needed from the three testers. With this information, I am now ready to proceed with completing the remaining sections of the paper in this final week.

 

Week 8:
In the final week of the project, I gave a presentation of my project and what I have been working on this summer. This presentation was attended by mentors, fellow participants, and invited guests. It provided an excellent opportunity to showcase my hard work and the outcomes of the study.

During this week, I dedicated considerable effort to finalize the research paper. I completed the remaining sections, including the User Study, Results and Analysis, Discussion, and Conclusion. Additionally, to enhance the paper’s clarity and visual appeal, I incorporated relevant images to support the presented data.

Furthermore, I successfully completed the submission process for the ACM ISS conference. Hopefully they will accept this paper since that would be a great achievement for me, and would be something I would be quite proud of.

As part of the final steps, I uploaded the developed toolkit to GitHub for Dr. Oyewole and other participants to look at.

In conclusion, the final week was marked by significant progress as well as a successful presentation, which included the completion of the research paper and conference submission. This culmination of efforts reflects the dedication and hard work put forth during the entire duration of the VR REU program. I am grateful to Dr. Oyewole, my mentor Professor Daniel Lichtman, and all the other participants for an amazing experience.

Final Paper:
Habin Park, Daniel Lichtman, and Oyewole Oyekoya. 2023. Exploring Virtual Reality Game Development as an Interactive Art Medium: A Case Study with the Community Game Development Toolkit. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 5–9. https://doi.org/10.1145/3626485.3626530 – pdf

 

 

Project Theme: Nutritional Education

Richard Erem – University of Connecticut – Professor Margrethe Horlyck-Romanovsky

Week One:

Our VR-REU 2023 program commenced on Tuesday, May 30th, 2023, following a delightful bowling session on the prior day that served as an icebreaker for our team. After acquiring our temporary IDs at Hunter College, we enjoyed a comprehensive building tour before proceeding to our designated learning space.

Our first interaction with project mentors happened over a Zoom call where introductions were exchanged, and a flurry of questions engaged both parties. Queries revolved around our expectations from the program and changes the mentors were planning for the year.

As the week progressed, we delved into an insightful introductory lesson on virtual, augmented, and mixed reality. We explored their distinctions, their evolution, and their present-day applications showcased in several projects. In addition, we discovered tools like Unity3D, Blender, and Mixamo, with resources provided to maximize our command over these innovative tools.

On Friday, we familiarized ourselves with ParaView, a robust open-source application for visualizing and analyzing large data sets. This was primarily facilitated through a self-guided lab.

After wrapping up the lab, we reviewed project proposals from each participant. Subsequently, I had a consultation with my mentor, Professor Margrethe Horlyck-Romanovsky, about my proposal. Generously, she offered several constructive revisions and assisted in refining my research direction. She encouraged me to study food deserts in Brooklyn and the Bronx, either virtually (via a service like Google Maps) or physically. This approach is anticipated to enhance my comprehension of the challenges people confront in pursuit of healthy eating habits, thereby enriching the authenticity of my simulation. She further equipped me with additional resources like a map of high-need areas and critical data on New York City’s FRESH (Food Retail Expansion to Support Health) program.

Overall, I am very eager to initiate the production of my project and look forward to the upcoming week with great anticipation!

 

Week Two:

Going into Week Two, I was determined to master the basics of Unity3D, so I played with the application for several hours each day, experimenting with various 3D models until I settled on a nice supermarket with several car prefabs. I made sure my project wasn’t too graphically intensive so I wouldn’t have to deal with a slew of optimization problems later on. I then utilized the spline package known as Curvy Splines 8 (which I was familiar with, as I had used it back in high school experimenting for fun, albeit very briefly). I used several splines to make the cars drive around the map (so that the game feels lively before you enter the supermarket), but I struggled greatly with making the turn animations look smooth and realistic. I plan on fixing this in Week 3, and after I do, I plan on working on the next scene (entering the supermarket). My current project status can be viewed below:

As far as research goes, I dedicated an extensive amount of time to researching my project topic in order to garner enough credible information to construct a proper literature review. Instead of relying too heavily on Google Scholar, I opted to utilize my own university’s library resources, as my mentor advised me that it would give me more substantiated data. I converted my literature review and my project proposal into Overleaf to follow the guidelines and format that Professor Wole wanted us to use. I quite liked it, as it made my work look very professional and neat.

 

In class, Professor Wole taught us the basics of writing a research paper as well as explaining the hardware and software components that go into most VR headsets. On Wednesday we learned about immersive visual and interactive displays, which are essentially digitally generated environments that engage users’ senses with lifelike visuals and responsive controls. We additionally got to view a bunch of VR projects, which ended up being really cool and really funny, as some of them included hilarious comedy.

On Friday, we did not have class but instead attended CUNYSciCom, a science-based symposium where several PhD-level students presented their research in front of an audience. The program’s ultimate goal was to build better communication skills for STEM students, and they even had cash prizes for the best presentations, with $500 being the highest possible award. I took several notes during the presentations, asked questions, and even played with some playdough-like material in an attempt to create a model with an underlying deep message about science in general (it ended up poorly!). From learning about MAKI (Malaria-associated Acute Kidney Injury) to DNA G-Quadruplexes to even human-elephant conflict, I thoroughly enjoyed my experience at the symposium and I hope to attend one again in the near future.

It was a very packed week full of hardcore research, game development, and learning, and I am once again very excited for the upcoming week to see what my future holds!

 

Week Three:

(This is going to be significantly longer than my previous posts so bear with me!)

As I came into Week Three, I continued to get the hang of Unity3D’s various features, in particular its animation system. But before that, let’s talk about the cars. I managed to create a spline that makes the cars move around corners in a very smooth manner; however, the cars would randomly flip over and glitch out, and considering I still needed to add Box Colliders for each of the cars (so that they are unable to drive through each other), I decided to scrap the idea for now. As a last desperate attempt, I tried to bake a NavMesh onto the road looping around the market; however, a few problems arose. It didn’t cover the entire width of the road (I know I can adjust that, but the whole thing wasn’t worth it anyway) and it actually caused a huge performance drop in Unity3D, so I gave up on it for now. It’s a small detail in the game anyway, so I can work on it after the more important stuff is completed.

 

Speaking of more important stuff, I began to work on my player model. I found a temporary one off the Unity Asset Store (although when the game is completed, ideally you’ll be using your own avatar that you’ve scanned in) to use as a placeholder and then utilized Cinemachine to set up a third-person view of him. Cinemachine is basically just a suite of camera tools for Unity which makes it easier to control the behavior of cameras in games. I adjusted the camera’s position to a place where third-person games usually have it, then set it to follow and look at my player model. By the way, I know the majority of VR games are in first person in order to capture that perfect feeling of immersion, and you may be wondering why I am working in third person. The reason is that ideally my game will adjust the player model based on what they consume in the game, and I want the user to be able to view that easily with the press of a button. So I’ll incorporate a way to seamlessly switch between first and third person with a key that isn’t commonly used for other functions (for example, I obviously wouldn’t have the key be W or something).

 

Next came animations. Using the website Mixamo (a platform that offers 3D computer graphics technology for animating 3D characters), as Professor Wole had recommended, I installed some basic locomotion movements such as forward, backward, left, and right, although I initially just had forward, left, and right. Here’s how I did it.

After I had dragged and dropped them onto my assets folder in Unity, I clicked Window then Animation then Animator. I created an Idle state and then slapped on an installed Idle animation. Then I created two float parameters, vertical and horizontal. The idea is that their values will change based on the input of your keys, which would in turn trigger specific animations based on the conditions of the transitions between states. I created a Walk FWD blend tree (blends animations based on set parameters), added 3 motions (initially anyway), and mirrored the first one. First motion was to walk forward while turning right. I mirrored this one for the left turn so I didn’t have to use an extra animation (meaning the third motion was the same as the first motion). I set the first motion to a threshold of -1 and the third motion to a threshold of 1, so that holding the A key will make the turn longer (and fluid!) until you reach the threshold of -1 and vice versa for the third motion and the D key. You have to uncheck ‘Automate Thresholds’ in order to be able to do this by the way.

Then, I went back to my base layer and created a transition from the Walk FWD blend tree to the Idle state with the condition vertical less than 0.1. This essentially means that if you’re currently walking forward and you let go of the W key (a.k.a. lower your vertical value, since W is tied to it), your state will transition to Idle, which indicates that you’ve stopped moving. The reverse logic (vertical greater than 0.1) was used for the transition going from Idle to Walk FWD. This is all super simple stuff (not for me though, since I had to learn it and then do it), but the more complicated your locomotion is (maybe including jumps, left/right strafes, wall-running, etc.), the more complex these general tasks in Animator will be. I forgot to mention, I unchecked “Has exit time” for my transitions so that there was no delay between pressing a key and having the animation trigger.
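For anyone curious what the scripting side of this looks like, below is a minimal sketch of the kind of script that feeds keyboard input into those two Animator parameters. The parameter names (“vertical” and “horizontal”) match what I described above, but the class name, damping value, and component setup are illustrative assumptions rather than my exact project code.

using UnityEngine;

// Minimal sketch: drives the "vertical" and "horizontal" Animator float
// parameters from the standard input axes (W/S and A/D by default).
// The damping value (0.1f) is an assumption chosen to keep blend tree
// transitions fluid.
[RequireComponent(typeof(Animator))]
public class LocomotionAnimatorDriver : MonoBehaviour
{
    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        // W/S map to the Vertical axis, A/D map to the Horizontal axis (-1..1).
        float vertical = Input.GetAxis("Vertical");
        float horizontal = Input.GetAxis("Horizontal");

        // The damped overload smooths the parameter changes over time.
        animator.SetFloat("vertical", vertical, 0.1f, Time.deltaTime);
        animator.SetFloat("horizontal", horizontal, 0.1f, Time.deltaTime);
    }
}

Attaching something like this to the player model (alongside the Animator) is what ties the key input to the blend tree thresholds described above.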

 

Please view the gif below to see my player model in action:

 

Gif of Player Movement

 

A closer look  at the temporary Player Model:

 

Player Model JPG

 

In regards to what we learned in class throughout the week, we began by delving deep into important concepts like Immersion, Presence, and Reality on Monday. These terms are crucial to understanding how users experience VR environments. I now understand the idea of Immersion as a measurable quantity, that is, the extent of sensory information and the consistency of system responses. Presence, the subjective response to a VR system, was distinguished as a psychological state of “being there” in a virtual environment. The overall discussion offered intriguing insights into the fidelity of a VR system in reproducing real-world experiences.

 

On Wednesday, we shifted our focus to the technical aspects, diving into 3D tracking, scanning, and animation. We explored how tracking systems capture movements of the head and body to transform them into virtual actions. The class also detailed how 3D scanning can generate digital replicas of real-world objects, and how these digital models can be animated to create dynamic VR experiences.

The demo seminars  provided practical applications of these various concepts. The Animation demo on Monday introduced us to various animation techniques and their uses in creating engaging VR content. The 3D Input seminar on Wednesday demonstrated different input methods used in VR and how they influence user experiences.

 

Finally, our Friday was dedicated to a self-paced visualization lab where we worked with Scientific Visualization using VMD. This session allowed us to install the VMD application, download sample data sets, and follow a lab manual to complete various tasks. This hands-on experience was incredibly beneficial, enabling us to get familiar with the program and better understand the practical aspects of VR in scientific visualization.

 

It’s been an intensive but rewarding week (as you can see by the high word count) of deepening our knowledge and skills in Virtual Reality. My goals for next week are to add more complex animations to my player model and work on a Scene 2 when the player enters the supermarket and is greeted with options. We’re almost halfway done with the program and I am very excited for what is to come!

 

Week Four:

Week Four poses the biggest challenge to my project to date: Life. Seriously. I got some sort of food poisoning from McDonald’s Grimace meal, which took me out for the majority of Friday and the weekend. My laptop stopped working, so I had to buy a brand new one, and a very expensive one at that. I had to soft-restart my project and use my predecessor’s project as a template, only to find out that the version I was sent was incomplete, and I spent hours wondering why it was not working. It has been a miserable week to say the least, and this post is going to be quite short as a result. But hey, at least we got a day off. And at least I got to play around with the Oculus Quest 2 VR headset that was thankfully provided to me by Professor Wole. I must remain positive if I am to complete this project.

I also managed to implement the third person / first person camera switcher logic into my project!

using UnityEngine;
public class CameraSwitcher : MonoBehaviour
{
    public Camera firstPersonCamera;
    public Camera thirdPersonCamera;
    public KeyCode switchKey = KeyCode.V; // Set to ‘V’ key.
    private void Update()
    {
        if (Input.GetKeyDown(switchKey))
        {
            SwitchCamera();
        }
    }
    private void SwitchCamera()
    {
        // If first person camera is currently enabled, disable it and enable the third person camera.
        if (firstPersonCamera.enabled)
        {
            firstPersonCamera.enabled = false;
            thirdPersonCamera.enabled = true;
        }
        // If third person camera is currently enabled, disable it and enable the first person camera.
        else if (thirdPersonCamera.enabled)
        {
            thirdPersonCamera.enabled = false;
            firstPersonCamera.enabled = true;
        }
    }
}

This script, named “CameraSwitcher”, is used to toggle between two cameras in a Unity game: a first-person camera and a third-person camera. The switching is triggered by pressing a specified key, which is set to ‘V’ by default in this script.

In every frame of the game (in the Update method), the script checks if the switch key has been pressed. If it has, the script calls the SwitchCamera method.

The SwitchCamera method checks which camera is currently active. If the first-person camera is enabled, it disables the first-person camera and enables the third-person camera. Conversely, if the third-person camera is enabled, it disables the third-person camera and enables the first-person camera. This allows for toggling back and forth between the two views when the switch key is pressed (V).

 

As far as what we learned this week, Professor Wole taught us about Interaction and Input Devices. These are essentially the hardware and software tools used to perceive, interpret, and respond to user commands within the VR environment. These devices allow the user to interact with the virtual world, control actions, manipulate objects, and navigate through space. They can also provide haptic feedback to improve the sense of immersion. We also learned about rest frames: the reference frame from which all other movements and interactions are measured or evaluated. It’s essentially the default, stationary position in the virtual environment. We saw some cool demos ranging from realistic and nonrealistic hands to the First Hand technique (hands-on interaction).

Friday was the midterm presentations, but as I was suffering from food poisoning, I decided to get some sleep that day instead of heading into class. While this week has been rough for me, I hope I can do better in the upcoming weeks, and I plan on remaining optimistic!

 

Week Five:

I’m happy to say that this week went a lot smoother than the previous week. For starters, I don’t feel as sick. Also, my mother graciously purchased a new charger for my old laptop (thank you mom, love you!), as that was the problem with it, so I was able to return the new laptop and use the money to purchase a new desktop PC (my first one, and I built it myself!). It is pretty high-end, as it uses an AMD Ryzen 7800X3D for the CPU and an AMD Radeon RX 7900 XTX for the GPU. While it is obviously amazing for gaming, it is also AMAZING for 3D work like Blender and Unity in general. Compared to my laptop, instead of loading my Unity world in 3 to 4 minutes, it loads it up in about 20 seconds. I haven’t experienced any lag either, even with a bunch of mesh colliders on all my game objects. I’ve gotten a nice productivity boost from the speed of this PC!

 

For starters, I've added Oculus VR integration into my project. This was a little tricky, and I ended up using XR Plug-in Management as opposed to the older Oculus Integration package. Then, by setting up the Locomotion System, Continuous Turn, and Continuous Move components on my rig (XR Origin), I was able to achieve movement in my VR game! Movement in VR is essentially just very smooth teleportation, but it works pretty well. I also added footsteps to the base game; however, I've struggled to get them working in the VR version, so that's something I'll deal with either in Week 6 or early Week 7; it isn't super important. I also set up the VR controllers in game, which will hopefully be used for hand tracking soon (right now, only head tracking works).

 

So, to be clear, my goals for Week 6 are to implement proper hand tracking, possibly get footsteps working in the VR environment, and add an option for the player to select "Enter" when approaching the main door so the scene smoothly transitions to the next one. I also want to have my assets set up for my second scene and include a mirror somewhere near checkout. The logic for item prices and the like can be done early in Week 7, and project testing can be carried out that week as well (or early in Week 8). I also want to figure out a way to switch between first-person and third-person view in the VR environment, which shouldn't be too difficult; however, this isn't as important to complete as the previous items.

 

As far as what we learned in class this week, Professor Wole taught us about interactive 3D graphics and showed us examples involving lighting, cameras, materials, shaders, and textures. I learned that points, lines, and polygons are the basic geometric primitives used to build more complex 3D shapes in computer graphics. The graphics pipeline is the sequence of steps that a graphics system follows to render 3D objects onto a 2D screen. In relation to that, the OpenGL graphics pipeline is a specific implementation of the graphics pipeline, allowing hardware-accelerated rendering of 2D and 3D vector graphics. As for Vulkan, it is a high-performance, cross-platform graphics and compute API that offers greater control over the GPU and lower CPU usage.

We briefly went over GLUT, which is short for OpenGL Utility Toolkit. GLUT provides functions for creating windows and handling input in OpenGL programs. I learned that, in terms of cameras, rendering 3D is just like taking a photograph, over and over, many times a second! The camera defines the viewer's position and view direction, which determines what is visible on the screen, while the lights simulate the interaction of light with objects in order to create a sense of realism and depth. For colors and materials, color defines the base appearance of an object while materials determine how the object interacts with light. Finally, we talked about textures, which are essentially just images or patterns applied to 3D models to give them a more realistic appearance by adding detail such as wood grain or skin.

I've had a much better week than Week 4 now that I have recovered, and I'm excited to see what I can accomplish in the upcoming weeks!

 

Week Six:

 

I made decent progress this week! I began adding hand tracking support, including actual 3D hand models, which I figured would be more immersive than the default Oculus hand models. I followed that up by adding raycast beams (lasers) from the player's hands, which allowed them to interact with objects from afar, such as clicking buttons. I added a second scene, which depicts the interior of the supermarket. Then I added a transition button to the door of the exterior supermarket so the user can click it and the view fades into Scene 2, the interior. I then spent forever trying to fix the lighting of the interior, as the prefabs I was using came from an older version of Unity. I fixed it somewhat using the Generate Lighting button in the lighting settings, which took a whopping three to four hours to finish rendering, so I let it run overnight.
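For the door transition, a minimal sketch of the fade-then-load approach looks something like this (the scene name, the full-screen fade Image, and the duration are assumptions, not my exact code):

using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

// Hypothetical sketch: fades a full-screen UI Image to black, then loads the interior scene.
public class SceneFadeLoader : MonoBehaviour
{
    public Image fadeImage;                          // full-screen black Image, starts fully transparent
    public string sceneName = "SupermarketInterior"; // assumed scene name (must be in Build Settings)
    public float fadeDuration = 1f;

    // Hook this up to the door button's OnClick / select event.
    public void BeginTransition()
    {
        StartCoroutine(FadeAndLoad());
    }

    private IEnumerator FadeAndLoad()
    {
        float t = 0f;
        while (t < fadeDuration)
        {
            t += Time.deltaTime;
            Color c = fadeImage.color;
            c.a = Mathf.Clamp01(t / fadeDuration); // ramp alpha from 0 to 1
            fadeImage.color = c;
            yield return null;
        }
        SceneManager.LoadScene(sceneName);
    }
}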

 

I then included buttons on my food items. When the user clicks a button, the item is added as text to the panel on the left side of their screen. I assigned a reasonable nutritional value to each item (only four so far; I began with steak), along with costs based on the prices I see at my local Target. An example is below.

 

I wrote several C# scripts, including ShoppingCart.cs, FoodButton.cs, CartUiManager.cs, and CheckoutButton.cs. They’re still a work in progress, but I’ll briefly explain each one.

 

ShoppingCart.cs: This script sets up a virtual shopping cart for a user, capable of storing items represented as ‘FoodButton’ objects; it also updates a visual interface (through the CartUIManager) whenever a new item is added to the cart, and can calculate the total nutritional value of all items in the cart.

FoodButton.cs: This script represents an item of food that the user can interact with (possibly in a VR or AR environment, as it uses XRBaseInteractable). Each ‘FoodButton’ has its own name, nutritional value, price, and knows about the shopping cart to which it can be added. It also sets up listeners to handle when it’s selected or deselected in the interface, making sure to add itself to the cart when selected.

CartUIManager.cs: This script manages the visual interface of the shopping cart. It displays the name and price of each item in the cart, and calculates the total price of all items. The UI is updated every time an item is added to the cart (via the UpdateCartUI method).

CheckoutButton.cs: This is the code for the checkout button that a user can interact with. Like the FoodButton, it sets up listeners to handle being selected or deselected. When selected, it calculates the total nutritional value of all items in the shopping cart, and updates the UI to show this information to the user.
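To make the relationships a bit more concrete, here is a stripped-down sketch of how ShoppingCart.cs and CartUIManager.cs could fit together. The real FoodButton derives from XRBaseInteractable as described above; it is simplified to a plain component here, and the field names are assumptions rather than my exact code.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

// Simplified stand-in for FoodButton (the real one is an XRBaseInteractable).
public class FoodButton : MonoBehaviour
{
    public string itemName;
    public float nutritionalValue; // e.g. calories
    public float price;
}

// Updates the left-side panel whenever the cart changes.
public class CartUIManager : MonoBehaviour
{
    public Text cartText; // UI text element on the side panel

    public void UpdateCartUI(List<FoodButton> items)
    {
        float totalPrice = 0f;
        string lines = "";
        foreach (FoodButton item in items)
        {
            lines += $"{item.itemName} - ${item.price:F2}\n";
            totalPrice += item.price;
        }
        cartText.text = lines + $"Total: ${totalPrice:F2}";
    }
}

// Stores added items, refreshes the UI, and totals the nutrition for checkout.
public class ShoppingCart : MonoBehaviour
{
    public CartUIManager uiManager;
    private readonly List<FoodButton> items = new List<FoodButton>();

    public void AddItem(FoodButton item)
    {
        items.Add(item);
        uiManager.UpdateCartUI(items);
    }

    public float GetTotalNutrition()
    {
        float total = 0f;
        foreach (FoodButton item in items) total += item.nutritionalValue;
        return total;
    }
}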

 

They all work together in unison to hopefully create an immersive and engaging VR shopping experience for my simulation. Next week I plan on adding Scene 3, a 'one year fast-forward' black transition scene, and a new script that switches the default player model to different ones based on the nutritional value of the food you've checked out. This will be the trickiest part yet, but I'll see what I can manage.

 

Now for what we studied during class this week. We learned about GPUs and Immersive audio (including a demo on audio) on Monday and Immersive Telepresence and Networking along with Perception, VR Sickness and Latency on Wednesday.

 

A Graphics Processing Unit (GPU) is a piece of hardware designed to quickly create images for display on screens, making them essential for tasks like gaming and video rendering. GPUs are used in a variety of applications, from creating the graphics in video games, to speeding up artificial intelligence processes and scientific computations. There are two types of GPUs: integrated GPUs, which are built into the central processing unit (CPU) of a computer, and discrete GPUs, which are separate hardware components. GPUs have evolved from simple machines designed for 2D image acceleration to powerful devices capable of rendering complex 3D images. The main difference between a CPU and a GPU is their purpose: a CPU is designed to quickly perform a wide variety of tasks one after the other (low latency), while a GPU is designed to perform many similar tasks at the same time (high throughput), making it excellent for creating images or performing calculations that can be run in parallel. General Purpose GPUs (GPGPUs) are GPUs that are used to perform computations that were traditionally handled by the CPU, expanding the scope of tasks that can benefit from a GPU’s parallel processing capabilities. In computing, latency refers to the time it takes to complete a single task, while throughput refers to the number of tasks that can be completed in a given amount of time.

 

The auditory threshold relates to the range of sound frequencies humans can hear, typically from 20 to 22,000 Hz, with the frequencies most important for speech falling between 2,000 and 4,000 Hz, while ultrasound refers to frequencies above 20,000 Hz, which are inaudible to humans but can be heard by some animals. I actually didn't know this exactly, so it was cool to learn something new!

 

In the context of virtual reality (VR), telepresence refers to technology that allows you to feel as though you’re physically present in a different location or virtual environment through immersive sensory experiences. Related to this, Sensory Input refers to the information that your VR system receives from sensors, like your headset or handheld controllers, which capture your movements and translate them into the VR environment. Within VR, mobility refers to your ability to move around and interact within the virtual environment, which can range from stationary experiences to full room-scale movement. Audio-Visual Output describes the sound (audio) and images (visual) that the VR system produces to create an immersive virtual environment, typically delivered through headphones and a VR headset.

In VR terms, manipulation means interacting with or changing the virtual environment, typically through gestures or controller inputs, like grabbing objects or pressing virtual buttons.

Beaming is a term used in VR to describe the act of virtually transporting or projecting oneself into a different location, effectively simulating being physically present in that environment.

 

I hope for next week I am able to wrap up my simulation and get some test results. Only time will tell!

 

Week Seven:

 

This week has undoubtedly been my most productive week so far! I'll try to keep this short though. Here are the changes I've made to my game. For starters, I removed the initial scene where you walk into the supermarket because it was pointless, had a lot of aliasing issues (texture flickering), and slightly impacted performance. So now the game starts inside the supermarket. I added a player model from Mixamo to my VR rig, then used animation rigging to give the user head tracking and hand tracking for a more immersive experience. I wrote a bunch of new scripts, namely CartUIManager.cs, CheckoutButton.cs, DetailPanelController.cs, FoodButton.cs, ItemButton.cs, ScaleAdjustment.cs, and ShoppingCart.cs. I'll briefly explain what each one does.

CartUIManager.cs manages the user interface for the shopping cart, updating the display to show the items in the cart, their quantities, and the total price and calories.

CheckoutButton.cs is attached to the checkout button in the game; when clicked, it calculates the total calories and price (including NYC's tax rate) of the items in the cart, updates the UI, and triggers a scale adjustment based on the total calories.

DetailPanelController.cs controls the detail panel that displays the nutritional information of a food or drink item when the player hovers over it.

FoodButton.cs is attached to each food item button in the game, storing the nutritional information and price of the food item and adding the item to the shopping cart when clicked.

ItemButton.cs is an abstract script that serves as a base for the FoodButton and DrinkButton scripts, defining common properties and methods such as the item name, calories, price, and the method to add the item to the cart.

ScaleAdjustment.cs adjusts the player's avatar based on the total calories of the items in the shopping cart when the checkout button is clicked.

Finally, ShoppingCart.cs represents the shopping cart, storing the items added to the cart along with their quantities, and providing methods to add items to the cart, calculate the total price and calories, and clear the cart.
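For illustration, here is a rough sketch of the kind of logic ScaleAdjustment.cs might use; the baseline value, field names, and clamping below are assumptions rather than the exact implementation.

using UnityEngine;

// Hypothetical sketch: widen or slim the avatar based on how far the checked-out
// calories sit from an assumed daily baseline, clamped so it never looks absurd.
public class ScaleAdjustment : MonoBehaviour
{
    public Transform avatarRoot;            // the player model under the XR rig
    public float baselineCalories = 2600f;  // assumed daily baseline
    public float maxScaleChange = 0.3f;     // cap the change at +/-30%

    public void ApplyScaleForCalories(float totalCalories)
    {
        float surplusRatio = (totalCalories - baselineCalories) / baselineCalories;
        float change = Mathf.Clamp(surplusRatio, -maxScaleChange, maxScaleChange);

        // Scale width and depth, leave height alone.
        Vector3 s = avatarRoot.localScale;
        avatarRoot.localScale = new Vector3(s.x * (1f + change), s.y, s.z * (1f + change));
    }
}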

These seven scripts make up the bulk of my in-game functionality. After this, though, I spent 5-6 hours at night using the Target app to look up typical supermarket food items, then inserting them as buttons on each shelf along with their nutrition facts. I added a Tracked Device Graphic Raycaster to each of the buttons so that they can be detected in VR by the ray-cast beams that come from the player's hands. Then I added an Event Trigger that uses the DetailPanelController script so that whenever a player hovers their beam over an item button (Pointer Enter and Exit BaseEventData), it shows the nutritional facts of that item, which disappear once the beam moves off of it. I then attached the FoodButton or DrinkButton scripts to the various items, which is where I entered all the nutritional facts I got from the Target app. I constructed a basic mirror along with security cameras that follow the player and allow the player to see their virtual avatar in the game in real time. Then I made a panel, placed it high above the player, and wrote an introductory text along with an explanation of the game controls. The text is seen below in the image.

Of course that’s way too difficult to read here so here you go:

 

Welcome to the DietDigital Game Simulation!

In here, you'll be simulating the experience of shopping in a VR supermarket environment. You'll also be able to check out your food items and see immediate changes to your physique once you do so via the mirror or the security cameras.

Note that the items you select here are realistically what you would eat in a single day, and once you press the checkout button, the changes to your body reflect (or at least attempt to reflect) what would happen if you stuck to that diet for a year.

The controls are simple: Hover over an item button to reveal its nutritional facts and use the right trigger to add it to your cart.

Happy shopping!

 

I added some background audio to the store which just sounds like an average bustling supermarket environment for extra immersion.

And that wraps up essentially everything major that I added to my game! As for what we did in class this week, I unfortunately missed class due to unrelated reasons (and responsibilities), but the students basically just tested each other's applications and completed surveys about them afterwards. We even went on a field trip on Thursday, an ASRC / Illumination Space field trip, and got a tour of the building with its different facilities and whatnot (along with playing a student's application game), which was super awesome to experience!

 

Overall, great week for progress! I’ll be doing my data collection next week, writing my research paper along with demoing my project to an audience at Hunter College to wrap up the final week of the program. Thanks for reading!

 

Week Eight:

 

This has probably been my favorite week of the program and I'm disappointed that it's coming to an end soon! I became much friendlier with the students I was working with, and we went out to have fun multiple times this week, which was amazing (although it is very much a shame that this had to happen in the last week of the program). In terms of my project, I ran it with 7 of my fellow REU students and they gave me their feedback on it, which I used to construct and publish my research paper. It got high immersion and decent presence scores, but the most common complaint was motion sickness (which I had no real control over, to be honest; it's VR). I did notice that the women who ran my simulation tended to lose more weight than the men who did. I deduced that it was because the program was tailored towards men in general, as the base calorie count was set to 2,600. To fix this issue, I created an in-world-space main menu where you could customize your age, gender, activity level, and body composition size (two buttons, increase or decrease), and then based on what you chose, your base caloric needs would change; active males, for example, need more calories than sedentary females. I added two mirrors to this main menu, then added a Start button that, when clicked, makes the menu disappear so you can walk forward and play the game like normal (except I added a few more food items as per the feedback I received). Here's what this all looked like (not the best UI design, but this simulation is all about functionality haha):
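As an aside on the calorie logic: the exact formula isn't the point, but conceptually the menu adjusts a base caloric target along these lines (every number, name, and adjustment below is an illustrative assumption, not my actual script):

using UnityEngine;

// Illustrative sketch only: turns the main-menu selections into a base caloric target.
public class CalorieProfile : MonoBehaviour
{
    public bool isMale = true;
    public int age = 25;
    public int activityLevel = 1;        // 0 = sedentary, 1 = moderately active, 2 = active
    public int bodyCompositionSteps = 0; // each increase/decrease button click nudges this

    public float GetBaseCalories()
    {
        float baseCalories = isMale ? 2600f : 2100f; // assumed male/female baselines

        // Older users generally need slightly fewer calories.
        if (age >= 50) baseCalories -= 200f;

        // Assumed activity multipliers.
        float[] activityFactor = { 0.9f, 1.0f, 1.15f };
        baseCalories *= activityFactor[Mathf.Clamp(activityLevel, 0, 2)];

        // Each body-composition step nudges the target a little.
        baseCalories += bodyCompositionSteps * 100f;

        return baseCalories;
    }
}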

In other news, I presented my demo this Thursday from a conference room at Hunter College over Zoom. It was supposed to be at the symposium, I believe (or something similar), but due to room scheduling conflicts, it was changed to this instead. It was definitely a good thing, however, as I didn't have to present in person in front of several people I didn't know personally; it was relegated to just a Zoom meeting. I was the last person to present, so I just chilled and watched everyone's presentations before me. Everyone had something super interesting to share, and I enjoyed them much more than I initially expected.

Prior to Thursday, we attended a Zoom meeting for the SPIRE-EIT REU presentations, where the Iowa State students presented their REU projects just like we did the next day. It was pretty time-consuming but overall fairly interesting, as I am a man of science myself, of course, and simply love to learn new things!

I also met with my mentor this week to go over the changes I had made to my simulation and to better prepare myself for my research paper submission. Pretty standard stuff. I also attended a Zoom meeting that went over how to apply to grad school, how to prepare for it, and the Dos and Don’ts of doing so.

I departed from New York City on Saturday in a very sad mood, but it was fun while it lasted! For future REU students reading this blog, just know that it zooms by faster than expected, so make sure you're working hard, having fun, absorbing a bunch of information, and connecting with as many people as possible!

 

I’ll probably update this blog if my paper somehow manages to get accepted but if not that’s quite alright.

Thank you for joining me on this journey reader and I hope you have a blessed life!!

Final Paper:
Richard Chinedu Erem, Oyewole Oyekoya, and Margrethe Horlyck-Romanovsky. 2023. Effects of Varying Avatar Sizes on Food Choices in Virtual Environments. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 24–26. https://doi.org/10.1145/3626485.3626534 – pdf

VR as a Learning Tool for Students with Disabilities – Summer 2023

Filip Trzcinka – Hunter College

Mentor: Daniel Chan

Week 1:

Before meeting with my mentor Daniel to narrow down what kind of project I would be working on, I decided to get ahead on my work and start on the Literature Review portion of my research paper. I wasn't sure if I would be focusing on a physical, mental, or learning disability for the project, so I began to research and read about those topics to see which direction I would prefer to take. Upon furthering my knowledge, I found myself more focused on papers that described learning and cognitive disabilities. I came up with two main research proposal ideas that I brought up to Dan when we met, and asked for his advice and for any feedback he might have. Upon conversing and pitching my ideas, he allowed me to choose which of the ideas I'd prefer to work on. After some thought, I decided my research project would focus on the creation of a driving simulation for student drivers who have Attention-Deficit/Hyperactivity Disorder (ADHD). We planned to meet again next week after I had thoroughly researched the problems people with ADHD face when engaging in a learning activity, and what possible methods or features I should include in my simulation to aid in their learning experience. I also plan to begin mapping out and creating the simulated setting in Unity so I can get ahead on the creation portion of the project. When we pitched our proposals to the group on Friday, Dr. Wole mentioned that he could try to connect me with someone who has had experience with making a driving simulator so that I could potentially build upon their work rather than start from scratch. Nevertheless, I will still check whether Unity has some ready-made assets I could use should the challenge of making a driving sim from scratch arise.

 

Week 2:

This week I focused on the Literature Review section of my paper. I found a paper that described research using a VR driving simulation to see if such a tool can help improve the driving skills of people with Autism Spectrum Disorder. I decided to use this paper as groundwork for how I'd like to develop my idea. Though their simulation was very basic, one test group's simulation used audio feedback to remind the drivers of important rules of the road, like staying under the speed limit, staying in their lane, etc. I considered what other features could be implemented to help those using the simulation learn. I knew I would focus on ADHD, so I read through papers where a VR driving simulation was tested with people with ADHD as a test pool, but could not find any research that used an enhanced simulation rather than a basic one. Some used tools like eye trackers, but no real software implementations to benefit the learning experience of the user. I then looked through papers on teaching techniques used to help people with ADHD learn, with an emphasis on keeping their attention. After discussing with Dr. Wole, I went back and found papers that tested ways to keep the attention of all drivers, not just people with ADHD. With what I've read so far, I created a list of features I'm hoping to implement into the driving simulation I create. I finished the week by writing my Related Works section and posting it on Overleaf with proper citations.

 

Week 3

With the Related Works draft completed, it was time to start the development of my driving simulation. Using the Unity game engine, I was able to import a low-polygon city asset to use as the environment. I made some edits to it: making intersection lines clearer, adding box colliders for buildings so the user wouldn't just phase through them, and adding traffic lights at every intersection. Since the traffic light assets were only for show and had no real functionality, I had to add point lights for the red, green, and yellow lights, and I tried to write some scripts to allow the lights to change on a timer, but unfortunately I made little progress with making that work. I will have to continue next week. I got a car asset that already includes a steering wheel object (so I would not have to create my own) and imported it into the scene. I wanted a steering wheel object as I eventually hope to have the user actually grab the wheel to steer. For the car, I removed the animations that came with the asset, added a camera to simulate the first-person view, then got to work on my scripts to allow the car to accelerate, reverse, and steer using the WASD keys (temporary inputs so I can test everything out for the moment), as well as a hand brake on the space bar. I had to take time to adjust the float values of the car's rigidbody, as well as the values for acceleration, as my car drove pretty slowly at the start. It still drives slowly, but that could suit driving in a city environment. After running through the planned features with Dan, I began my work on the Methodology section of my paper, as well as taking another hack at the traffic light scripts.
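For reference, the timer-based light cycling I was aiming for can be sketched with a simple coroutine like the one below (the light references and durations are assumptions, not my actual script):

using System.Collections;
using UnityEngine;

// Minimal sketch of a timer-driven traffic light: cycles Green -> Yellow -> Red
// by enabling and disabling the three point lights.
public class TrafficLightCycle : MonoBehaviour
{
    public Light redLight;
    public Light yellowLight;
    public Light greenLight;
    public float greenTime = 10f;
    public float yellowTime = 3f;
    public float redTime = 10f;

    private void Start()
    {
        StartCoroutine(Cycle());
    }

    private IEnumerator Cycle()
    {
        while (true)
        {
            SetLights(green: true, yellow: false, red: false);
            yield return new WaitForSeconds(greenTime);

            SetLights(green: false, yellow: true, red: false);
            yield return new WaitForSeconds(yellowTime);

            SetLights(green: false, yellow: false, red: true);
            yield return new WaitForSeconds(redTime);
        }
    }

    private void SetLights(bool green, bool yellow, bool red)
    {
        greenLight.enabled = green;
        yellowLight.enabled = yellow;
        redLight.enabled = red;
    }
}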

                                     
 
 
Week 4
 
Got my traffic light system working! Though the method I used to get it working is very unconventional, unless it breaks something else I will not be touching it anymore. After getting that completed, I worked on fixing the speed of my car object during acceleration, as it was extremely slow last week. The main work done this week, however, was the implementation of my visual cue features. The first is a ring that appears over a traffic light to help grab the attention of the user. This occurs when the user reaches a certain proximity trigger for that specific traffic light, so you don't have too many rings active at once, as that would be counterproductive to its purpose. It took some time to get that working, and unfortunately this feature exists only for certain traffic lights in my scene, so I need to go through the lights without the feature and add it in. The second feature is a lane alert. When the car object moves through a lane line, a red translucent bar appears to signal to the user that they need to stay in their lane. With those features implemented, I was able to present a decent product for the midterm presentations that occurred this Friday. At the end of the week I finished up the draft of the Methodology section, and I eagerly await the notes Dr. Wole has for me, as I am not sure if I wrote it in a conventional way. Next week I will try to deploy the game onto the Oculus Quest headset, change the input device from the WASD keys to the actual controllers, and begin implementing the audio cue features that I have planned.

 

Week 5

Unfortunately this week was not as productive as I had hoped. Through the process of trying to deploy to the Meta Quest 2 headset, many issues occurred. First there was the issue of unsuccessful builds. Many console errors that made no sense to me would pop up and cause the build process to exit unexpectedly. As is typical with computer science, this was solved by googling said errors and working through solutions others have posted online, allowing me to progress one step forward with a successful build. However, the error of running the simulation on the headset was the next and more difficult hurdle to overcome. When the Unity game tried to run on the headset, an infinite loading screen would appear with the Unity logo jittering in front of you. I had a colleague in the program who did have a successful deployment of his game try to help, but the same problem kept happening. Together we tried to deploy an empty scene, but still no success. I got permission to factory reset the headset and set it up as if it were my own; however, through this I was unable to verify my account as a developer due to a problem Meta has had for over a year where they have issues sending the SMS confirmation code for account verification. Eventually I brought the headset in to have it checked and set up by Kwame, who was able to get a previous Unity game to deploy on the headset. With this light at the end of the tunnel giving us hope, we tried to deploy the empty scene, which worked! And yet our final roadblock of the week appeared: my work for the game would still not deploy. The same infinite loading screen issue appeared. As is typical for roadblocks, I will now have to take a few steps back in order to progress forward. I will need to rebuild what I originally made in the empty scene that I know works. This will have to be done incrementally, as I need to ensure that any progress made can still deploy to the headset, rather than rebuild it all in one go and encounter the same issue. On a more positive note, this week I was able to implement another feature I planned to include: when the car object enters the lane collider trigger, a sound cue loops at the same time the visual cue appears. This uses both sight and sound to draw the driver's attention to their mistake. I also worked on editing the Methodology section of my paper to polish it up and include more specific and important information relevant to the proof-of-concept paper. Week 6 will definitely require me to go the extra mile with my work as I am currently behind everyone else, with three weeks left to go in the program, yet oftentimes diamonds are formed under pressure.
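Conceptually, the lane cue is just a trigger collider toggling a visual bar and a looping sound; here is a rough sketch (the "Player" tag check and field names are assumptions):

using UnityEngine;

// Sketch: when the car enters the lane's trigger collider, show the red bar and loop
// a warning sound until the car leaves. The collider on this object must be a trigger.
[RequireComponent(typeof(Collider))]
public class LaneDepartureCue : MonoBehaviour
{
    public GameObject warningBar;    // translucent red bar, disabled by default
    public AudioSource warningAudio; // warning sound, not set to play on awake

    private void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        warningBar.SetActive(true);
        warningAudio.loop = true;
        warningAudio.Play();
    }

    private void OnTriggerExit(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        warningBar.SetActive(false);
        warningAudio.Stop();
    }
}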

 

Week 6

This will be a short blog post, as throughout this entire week I have just been working to recreate everything I had in the empty scene that was able to deploy to the headset. Since I wasn't too sure what exactly caused problems for deployment originally, any time I added something new to the scene, I would deploy to check if it worked. This, alongside the fact that what you see in Unity on your laptop is different from what you see with the Quest 2 headset on, led to a very repetitive process: add something or make a change, build and run to the headset, check to see how it looks, notice something is a bit off so go and change it, build and run to the headset, rinse and repeat. Though tedious, I was able to get almost everything I had earlier deployed and working. The only thing that still needs to be completed is the actual player input manager so the car can accelerate/decelerate/steer/brake through the player's button presses. My suspicion is that last week's roadblock was most likely due to a mistake I made handling player input, so I am a tad nervous that I will make a mistake again and cause it to break, especially since I'm not familiar with how Unity deals with Quest 2 controller inputs, whereas with a keyboard it's just: Input.GetKey("w"). In the meantime, I implemented my final feature idea: when the user is not looking forward for two seconds, an audio cue is played until their FOV is once again focused on the road. With just the player input left to go, I'm excited to start player testing next week and to finish the Data Collection and Analysis portion of my paper.
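For the controller input itself, reading the Quest 2 controllers through Unity's XR InputDevices API looks roughly like the sketch below, in contrast to Input.GetKey("w"); the mapping choices (trigger for throttle, thumbstick for steering) are assumptions:

using UnityEngine;
using UnityEngine.XR;

// Sketch: polls the right-hand controller each frame and logs the values that
// a car controller could consume.
public class QuestDriveInput : MonoBehaviour
{
    private InputDevice rightHand;

    private void Update()
    {
        if (!rightHand.isValid)
            rightHand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);

        // Right trigger (0..1) could stand in for the accelerator.
        if (rightHand.TryGetFeatureValue(CommonUsages.trigger, out float throttle))
            Debug.Log($"Throttle: {throttle:F2}");

        // Thumbstick X axis could stand in for steering.
        if (rightHand.TryGetFeatureValue(CommonUsages.primary2DAxis, out Vector2 stick))
            Debug.Log($"Steering: {stick.x:F2}");
    }
}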

 

Week 7

I was able to complete the button inputs for the game on Monday this week. It's not exactly what I had planned, but since we're stretched for time it will just have to do. That same evening I created my Google Form questionnaire, then had my older brother and my father test out the game for two and a half minutes each. They filled out the questionnaire, as well as giving me more feedback face to face that I took down in my notes. Tuesday I had another person I know test out the game and complete the questionnaire, and Wednesday I had anyone in the program who was willing to test it try out the game, making sure I maintained consistency in how I managed this user study. That led to a total of eleven people for my control group. That same day I had one person who I knew was professionally diagnosed with ADHD also test the game and fill out the questionnaire, completing my actual testing. Thursday night I completed the "Testing" section of my research paper, and this Friday and weekend I will continue to work on the "Data Results and Analysis" and "Conclusion" sections.

 

Week 8

What a week. As the REU approached its end, everyone in the program scrambled to get to the finish line. I for one spent Monday and Tuesday this week getting my paper's draft finished. I sent a draft to my mentor Dan, who gave it back with some extremely helpful notes. This allowed me not only to fix my paper, but also to know what I needed to prep for our symposium that happened on Thursday, and let me tell you, that presentation was stressful for me. I had not done a presentation in over five years and was severely out of practice. Thankfully Dan and a couple of friends from the REU helped me prep. I still stuttered and stumbled my way through it, but received many interesting questions about my project that made me think more in depth about it. Friday was spent finalizing my paper while also helping my colleagues with theirs. It was definitely a bittersweet ending to an amazing program experience.

Final Paper:
Filip Trzcinka, Oyewole Oyekoya, and Daniel Chan. 2023. Students with Attention-Deficit/Hyperactivity Disorder and Utilizing Virtual Reality to Improve Driving Skills. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/3626485.3626529 – pdf

Richard Yeung

Week 1

Since my project is building off an existing one, my main goal is to understand what has already been made, and what its limitations and capabilities are. This meant looking into the source code so I could understand what features had been built, testing the project to see how it runs, and building it so I knew there weren't any unforeseen complications. I was able to talk to the person who wrote most of it, and he helped explain a lot of the more complex code.

My goal for next week is to create a rough draft of my project. I discussed this with my professor, and we may have found a way to create the in-place exploration for the visually impaired user; we just need to write and test it.

Week 2

The goal of this project is to create an app that allows visually impaired (VI) users to move through a virtual environment without having to move themselves. This lets users explore an environment without the restriction of not having enough physical space. And since this is a virtual environment, VI users can explore at their own pace and get acquainted with new environments without the struggle of having to feel around while other people are around. This will give VI users some feeling of security once they have a mental map of their environment.

Since I am building off existing work, progress has been pretty quick so far. I am almost finished with a demo for this project so that it can be tested. As of now, there are three premade virtual spaces that users can explore. The avatar responds to user inputs, which control its movement. There are two states of the avatar: unmoving and moving.

Unmoving: Users can rotate their phones, which rotates the avatar. Positional movement of the phone will not affect the avatar.

Moving: Users press on their phone screen, which moves the avatar forward in whatever direction the user is facing. While moving, users can rotate their phones without changing the direction of the avatar, allowing VI users to sweep their cane while moving forward. (A rough sketch of this two-state logic is shown below.)
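Here is a simplified sketch of that two-state control using only the phone's gyroscope; the real app also uses AirPods head tracking and proper axis remapping, so treat the names and logic below as assumptions:

using UnityEngine;

// Sketch of the two-state control: while not pressing the screen, phone rotation rotates
// the avatar; while pressing, the avatar walks forward along the heading it had when the
// press started, so cane sweeps don't change the walking direction.
public class InPlaceAvatarController : MonoBehaviour
{
    public Transform avatar;
    public float walkSpeed = 1.0f;

    private bool isMoving;
    private Quaternion lockedHeading;

    private void Start()
    {
        Input.gyro.enabled = true; // the gyroscope is off by default on mobile
    }

    private void Update()
    {
        // Phone attitude from the gyroscope (simplified; a real app would remap axes).
        Quaternion phoneRotation = Input.gyro.attitude;

        if (Input.touchCount > 0)
        {
            if (!isMoving)
            {
                isMoving = true;
                lockedHeading = avatar.rotation; // freeze direction at the moment of pressing
            }
            avatar.position += lockedHeading * Vector3.forward * walkSpeed * Time.deltaTime;
        }
        else
        {
            isMoving = false;
            avatar.rotation = phoneRotation; // unmoving: phone rotation drives the avatar
        }
    }
}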

At this point, only one thing needs to be done, which is connecting AirPods to my device. For this project, special AirPods need to be used that capture the user's head movement. This is how we know how to rotate the avatar's head. The issue is that, for whatever reason, my device cannot connect properly to these AirPods; more specifically, the AirPods can connect, but the head movement feature cannot be used. Switching phones or updating Xcode and iOS did not solve this. So right now, I am looking for solutions.

Week 3

There was a technical problem. As stated in the previous week, for some reason my iPhone 11 could not properly connect to the AirPods. I tested it on an iPhone 14, and the AirPods connected and worked in the app. This led me to believe that it might be some software limitation, so I did some research and found nothing. Then I tried to reverse engineer the borrowed GitHub code that allowed the connection between Unity and the AirPods using its API. While researching, I was given an iPhone 7 to test it on. It worked. So then I realized it was probably something to do with my iPhone 11, particularly its settings. I decided to do a factory reset, and this solved the problem.

Once I got the AirPods to work properly and tested the app, I just needed a VI user to test it. This Friday, I was able to do just that. For about 20 minutes, the user tested the demo. For the demo, there are two objectives: to test the controls and to test how well the user is able to visualize the room. In the demo, there were three rooms with furniture set up in different places. The user tested all three rooms as I observed their performance. In the end, she seemed to become accustomed to one out of the three rooms. I was able to ask for her feedback and for improvements that would help. In the following week, I am working to implement some of them.

Week 4

This was a slow week. Most of the time was spent discussing the best approach. We want the user to have no trouble using the application, but with as much immersion as possible, since we believe that will help with creating a mental map of an area. One issue is turning. Based on feedback, we considered allowing users to turn the avatar without actually turning their own body, possibly by adding a button to do so. By adding this feature, users could explore their virtual environment while sitting down, or without looking like a loony when using the app out in public. However, this would cut into the immersion we are trying to develop, not to mention that turning an avatar with a button does not feel the same as turning one's body. We are still trying to figure this out, but as of now, we are sticking with the user having to turn their body, and we are adding auditory feedback to tell users which direction they are facing.

Aside from that, I managed to implement one feature, which is audio footsteps. This was a bit of an issue, as most tutorials online do not consider whether the avatar is walking into a wall. As such, I had to do some looking around and testing, and eventually got to where I am. The current version generates footsteps with delays. Capturing the avatar's position every frame, I calculate the difference in position. If it is normal, the footsteps play at normal speed. If there is a small change, there is a noticeable delay between footsteps. And if there is barely any change, then no footsteps at all. For some reason, even when walking into a wall, there is still some positional difference, so a threshold needs to be passed before any footstep audio is played.
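A rough sketch of that footstep logic (the thresholds and field names are assumptions and would need tuning):

using UnityEngine;

// Sketch: compare the avatar's position between frames and space out footstep sounds
// based on how far it actually moved, so pushing against a wall stays silent.
public class FootstepAudio : MonoBehaviour
{
    public AudioSource footstepSource;
    public float normalInterval = 0.5f;    // seconds between steps at normal speed
    public float slowInterval = 0.9f;      // used when movement is small
    public float minMovePerFrame = 0.002f; // below this, treat as not moving (e.g. a wall)
    public float normalMovePerFrame = 0.01f;

    private Vector3 lastPosition;
    private float timer;

    private void Start()
    {
        lastPosition = transform.position;
    }

    private void Update()
    {
        float delta = Vector3.Distance(transform.position, lastPosition);
        lastPosition = transform.position;
        timer += Time.deltaTime;

        if (delta < minMovePerFrame) return; // barely moving: no footsteps at all

        float interval = (delta >= normalMovePerFrame) ? normalInterval : slowInterval;
        if (timer >= interval)
        {
            footstepSource.Play();
            timer = 0f;
        }
    }
}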

Next week, I plan on changing how the phone is held when interacting with the app, adding a voice to tell users which direction they are facing, fixing how the user walks backwards, and controlling the phone vibrations.

Week 5

I implemented most of what I wanted. I added two different modes of control: the original and a swipe.

For the original control, the user presses on the screen to move and tilts their phone upwards to move backwards. The tilting part has been difficult to implement due to how Unity calculates Euler angles. I might do away with the tilt feature since, according to the tester, tilting does not feel appropriate for it. Instead, I will combine this with features from the swipe mode.

The swipe mode has four ways to move the user: swiping up moves the avatar forward, swiping down moves the avatar backwards, and swiping left or right turns the user in that direction. The upward and downward swipes work great. The issue is the left and right. The way it's implemented, a left or right swipe turns both the body and the head. This is necessary since the head strictly follows the AirPods, so I need to work around that and make it seem like the AirPods are moving. This should have worked, but for some reason, sometimes when the user swipes left or right, the head turns for longer than the body. I have no idea what is causing this or how to fix it, but this is my objective for next week.

I also plan to implement another means of control: using buttons. Since swiping might be difficult for some users, especially those who are not used to technology or have problems with their hands, I can just add four buttons that are in range of the user's thumb. I'm still thinking this through, but more options are better.

Another problem is that the user still seems to get stuck on the walls. I'm just going to increase the collider range, but a better method needs to be thought up.

Week 6

I more or less have everything set up. There is still a problem with my program where the in-game cane seems to drift away from the avatar. This needs some testing, but it seems like the problem has to do with the camera not picking up distinct backgrounds, which makes it assume the phone is moving, and therefore the in-game cane moves. Again, I'm not sure, but it needs more testing.

Everything else is pretty much done. The participants will hold the phone like a cane and explore a room for x amount of minutes. At the end, they will be asked to visualize the room (such as by drawing it). The main focus will be testing spatial memorization, so the material of an object is not that important; just knowing that something is there is enough. They will be testing two different versions of movement: one where the user needs to turn with their body, and another where the user turns with a swipe/button. This will test whether turning with the body or turning with a button/swipe affects spatial awareness. They will be testing three different rooms with increasing difficulty (i.e., more furniture). I am not sure if we want to change the room size/shape or keep it the same, but so far it's the same room. They will be graded on how many pieces of furniture they can correctly position in their drawing.

Week 7

I managed to finish and fix most of the problematic parts of my program, so it should not break or cause some weird bug/error in the middle of testing. I also met with the participant to get some feedback, and they seem to like it. So now, it's a matter of testing it with more participants. In the meantime, I will work on my paper, think of the questions, and add small features that I think may be useful.

Week 8

Since I am continuing to work on this project, I am adding more features to make testing easier, such as randomizing the room size and randomizing object placement. This will allow for more variety in testing and make sure that participants don't get used to the same room size.

Final Paper:
Richard Yeung, Oyewole Oyekoya, and Hao Tang. 2023. In-Place Virtual Exploration Using a Virtual Cane: An Initial Study. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 45–49. https://doi.org/10.1145/3626485.3626539 – pdf

Exploring Perceptions of Structural Racism in Housing Valuation Through 3D Visualizations

Lisa Haye, CUNY John Jay College (Economics, B.S.)

Mentors: Courtney Cogburn and Oyewole Oyekoya

Week 1

The 2023 VR-REU commenced at the Framers Bowling Lounge as a Memorial Day icebreaker, where we all introduced ourselves to one another, as well as to Professor Wole’s research team. The following day, we convened at Hunter College where we were introduced to some of the program mentors, and I began reviewing the work of my 2022 predecessor to think about how I could either expand or pivot last year’s work towards a new direction. Professor Wole also began his lecture on VR, AR, and MR, and we were introduced to the history of the field, as well as its applications across various disciplines. 

I met with both Professor Wole, and my research mentor, Professor Courtney Cogburn, to discuss the potential framework of my project. I began exploring both Unity Terrain and potential city and house asset packages in the Unity Asset Store, as these applications will be key to constructing my visualization models for the project. I also began looking at publications centered around both structural racism and how the issue has been visualized in the past. 

We ended this week with Professor Wole introducing us to Paraview, a scientific visualization program, for our self-paced lab session. I submitted my project proposal, and began to draft a schedule towards curating a literature review of my topic, as well as experimenting with Unity Terrain.

Week 2

This week, Professor Wole taught us the preliminary tenets of writing a scientific research paper and introduced us to Overleaf to compose our writing. Professor Wole also held VR lectures on immersive visual and interactive displays, along with 3D geometry. 

Meanwhile, this week my time was split between conducting a literature review to create a bibliography, finding databases that correlate with the data visualization aspect of this project, and familiarizing myself with Unity with a test model of different functions that are key to my 3D models. I identified three potential data points for my models (housing valuation, climate, and the access to green space), as well as two neighborhoods within the Bronx to serve as case studies to highlight disparities based on that data. Professor Wole also introduced me to MapBox for Unity, a location data and maps platform that could be integrated into Unity for precise map development; I am considering using a mixture of MapBox and Unity Terrain as my methodology for the project moving forward.

The week ended with all of us attending the CUNY SciCom's "Communicating Your Science" Symposium at CUNY's Advanced Science Research Center, where we listened to various CUNY graduate students talk about their research with general and peer audience presentations. It was exciting listening to people from disciplines ranging from mathematics to biology to physics come together to talk about their work in a way that was fun, educational, and most importantly, accessible to audiences who may not be familiar with concepts such as the sonification of star rotations, DNA G-quadruplexes, and properties of shapes!

Here is a screenshot of my test model on Unity from earlier this week:

 
Week 3
 

This week, Professor Wole gave lectures on immersion, presence, and reality, as well as 3D tracking, scanning, and animation. We all had an engaging conversation about the uncanny valley, the theory that humans experience revulsion when they observe a character that is close to human in appearance but slightly off. Professor Wole's scientific visualization lab this week centered on Visual Molecular Dynamics (VMD), a 3D molecular visualization program.

Here is a screenshot of the ubiquitin protein molecule, visualized in CPK style and color set to ResID. I am not too sure what those acronyms mean, but I am interested in finding out:

 

As for my project, my time was split between creating a first draft of the abstract, introduction, and related works sections and experimenting with MapBox. I think my methodology is going to shift towards a more MapBox-intensive procedure, creating custom map styles in MapBox Studio and then deploying them to Unity3D. Thus, I spent a lot of time getting a crash course on MapBox's functions; I created a demo map of Riverdale, one of the Bronx neighborhoods featured in my project, to get a taste of how these models would look in Unity. I actually ran into quite a few errors; most importantly, my map object did not play in game mode and did not appear in the hierarchy unless I manually moved it there, and I wonder if modeling the map would be easier with the 2017 version of Unity (the version most compatible with current MapBox software). Nonetheless, I hope to work these errors out with Professor Wole soon. Meanwhile, here is my demo model of a Bronx neighborhood:

Next week, I hope to begin the formal construction of my models!

Week 4

Roadblock, roadblock, roadblock – my computer refused to open a project with Unity's 2017 editor so I couldn't test if that resolved the problems, my maps continued to refuse to display unless I manually enabled their previews separately, they could not be displayed side by side, and it was difficult to display them properly in the game scene; I honestly became dejected. I began considering whether or not my project had to pivot back to manually visualizing data with Unity Terrain and assets from the Asset Store, and Professor Wole's PhD student, Kwame Agyemang, and I tried to find any 3D models of New York City that could be imported into Unity, in case a pivot was necessary. Nonetheless, I compiled data from Zillow and the New York City Environment and Health Data Portal to be used for housing valuation, climate, and greenspace data; the former was extracted using a Google Chrome extension called Zillow Data Explorer and then opened as a Google Sheet, and the latter was manually compiled into a Google Sheet on my Drive.

My breakthrough occurred just on Friday, when Professor Wole hosted our midterm presentations for our status updates; after I disclosed my setback, a fellow REU student revealed they actually had prior experience using Mapbox! Thanks to Richard Yeung, the problem was resolved – if Mapbox is being used with a recent version of Unity (in this case, I am using Unity Editor version 2021.3.19f1), you must download 'AR Foundation [current version is 4.2.8]' and 'AR Core XR [4.2.8]' from the Unity Package Manager, and when importing the Mapbox SDK into Unity, do not import 'Google AR Core', 'Mapbox AR', or 'Unity AR Interface'. With that, I was able to have my map display properly, and my use of Mapbox for this project can now continue. It was very nice seeing how everyone's projects are coming together, and my talk with Professor Wole helped me consider how I will fulfill my research question while also keeping in mind Professor Cogburn's reminder to consider my audience when thinking about representing data effectively. Because of this week's setback, I am a bit pressed for time in terms of creating my models and writing my methodology for my paper, so this weekend requires me to make up for lost time; nonetheless, as I create my models, I am going to consider how I want to construct a user study for this project.

Obstacles are bound to happen in research, but it is important to keep your mind open to changes in your project, and to ask your network (and your network's network) for help; you never know who can help until you ask. Here is a test model of my two Bronx neighborhoods actually displaying side by side!

 
Week 5
 

With the resolution of my Mapbox problems, I spent this week really honing in on the details of my models, both in terms of what data was being visualized and how I want to represent the information on Unity. My housing valuation model, which I originally presumed would be my easiest model to complete, took some thinking as I considered how I wanted to represent redlining and what data point I would be expressing; I decided to focus on highlighting a sample of property values of single-family homes currently on sale as of June 2023 that are above the median value for the Bronx ($442,754, according to Zillow) and condominiums in both neighborhoods. I am still experimenting with how the climate model could be visually represented, and greenspace is going to highlight the environment of both neighborhoods.

I spent some time working on the methodology section of my paper, and lessons this week included Professor Wole’s lecture on interactive 3D graphics, as well as an introduction to Tableau for our lab work. Professor Wole generously took the REU participants on a cruise from Pier 61 for lunch, and we all ate food and chatted on the water as we sailed by downtown Manhattan, Brooklyn, and the Statue of Liberty.

Next week, I hope to complete my models and finish up my writing for the methodology. I haven’t worked on the details for the user study of my project, so I hope to speak to my mentors regarding its structure.

Week 6

This week, I was able to complete a model for housing valuation, climate, and the environment, but I could not find a way to visualize climate and the environment in a 3D format so the research is solely going to focus on visualizing housing valuation. Professor Wole, Professor Cogburn, and I discussed the various potential dimensions and codes that could be used to visualize the existing data in different ways, and now that I’m scrapping climate and the environment, I will be focusing on as many ways to visualize housing valuation as I can, while reframing my paper, and reframing the script for my user study. Future work could consider visualizing various forms of structural racism either separately or concurrently within various neighborhoods. 

With what I’ve learned technically through visualizing the housing valuation data, portions of the current model I have will translate into the various models I have to create, such as a baseline model to be used as a comparison, as well as a color dimension of the redlined versus non-redlined community. I also have to consider focusing solely on representing single-family homes or condominiums in my target neighborhood; finding literature on either type of housing structure will guide my visualization selection. Here is a screenshot of my experimenting with various design choices for the housing valuation model as of late:

 
Week 7

This week I’ve spent the majority of my time working on as many housing valuation models as I can, and talked with my mentors about what questions are going to be relevant towards answering our research question in the user study. I struggled a bit with organizing my time this week, but having conversations with Professor Wole and Professor Cogburn helped ground my expectations and steer my project to the final leg of the marathon. 

The cohort returned to CUNY's Advanced Science Research Center (ASRC) for the IlluminationSpace tour, where we all interacted with models and systems related to the core science fields that the ASRC specializes in (nanoscience, structural biology, environmental science, photonics, and neuroscience), and it was a really fun way to expose us to the objectives of these fields and how they overlap with one another. Sabrina took advantage of our tour of the ASRC's facilities by having us demo the application she created for the REU, and she also sat us down to listen to her experiences with academia; I admired her openness, especially since many of her comments on academia resonate with my own experiences.

Professor Wole also managed to host three program officers from the National Science Foundation’s Graduate Research Fellowship Program to come speak to us about the program’s purpose, its eligibility requirements, and opened the floor for questions. Professor Wole made it clear throughout the program that part of his objectives for the REU is to encourage us to consider graduate school, and introducing us to a fellowship dedicated to funding our graduate studies and research interests (which could potentially be a barrier for students who are considered low-income, and therefore may make them skeptical towards going to graduate school) was really honorable of him to do.

We ended the week with Professor Wole talking to us about the importance of statistical analysis in research, and he gave us a crash course on ANOVA. With the symposium next Thursday, I have a lot of work ahead of me, and I’m excited to see what everyone has accomplished!

Week 8

The final week began with me finally (finally) completing my user study on Google Forms; users were given context on structural racism and redlining, as well as the procedure, and then had the option of giving their demographic information anonymously before they were exposed to two questions regarding seventeen versions of my models. Users were tasked with answering two questions to measure their perceptions of the models, and the final section asked users to rank their preferences in terms of structural racism visualization. As of today, I have received 29 responses, so for a survey that has been live for three days, that's pretty good! I will most likely keep my survey live closer to the deadline of one of the conferences I am applying to in August, just in case I can squeeze in more data for the poster. I also met the cohort for dinner downtown, which was a nice break from working on papers and data analysis.

Professor Wole connected with Iowa State University's SPIRE-EIT 2023 program this week, and we met with SPIRE-EIT's PI, Professor Stephen Gilbert, and his students and learned about the three projects they are working on, which was really cool. I also met with Professor Wole to discuss how to statistically analyze my data, and to discuss a rather interesting comment I received in the feedback section of my survey; the comment reminded me of the kind of controversy a project like mine elicits, but also just the nature of research in general – criticism will occur, but I plan to address that comment in my discussion. Professor Wole helped me take in the criticism by talking about his own teacher evaluation experiences, which made me feel a lot better. On Thursday, our own VR-REU symposium was hosted at Hunter, and several of the mentors, loved ones, and the SPIRE-EIT program appeared virtually to listen to our work! Below is the title slide for my presentation, and here is a link to my slides: VR-REU 2023 Symposium

 

This program has been such a tremendous experience to be a part of and so a series of thanks are in order: I want to thank each of the REU participants I met for giving me camaraderie, knowledge, and overall just a fun experience, I think they were an amazing set of people to be grouped with. I want to thank Kwame and Richard Yeung for helping me when my project hit roadblocks, and I want to thank my loved ones for supporting this journey by pushing me to apply to this program, listening to me talk about Unity, roadblocks, and random facts about Riverdale and Soundview, as well as sending out and completing my survey. I want to thank Professor Cogburn for her mentorship and guidance, especially as a Black woman in academia, and most importantly, I want to thank Professor Wole; he was an amazing PI, an insightful professor, and a great mentor, and I want to thank him for giving me a great introduction to research, academia, and for overall taking a shot on an economics major like me. 

After today, I'll still be working on my paper and poster, and whether I get published or not, I am grateful for the valuable tools this program has given me. I know my work in research is only getting started.

Final Paper:
Lisa Haye, Courtney D. Cogburn, and Oyewole Oyekoya. 2023. Exploring Perceptions of Structural Racism in Housing Valuation through 3D Visualizations. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 19–23. https://doi.org/10.1145/3626485.3626533 (Best Poster Award) – pdf

STEM Education on Structural Biology through an Immersive Learning Environment

Sabrina Chow, Cornell University

Week 1: Introduction and Project Proposal 🎳

The first few days in NYC for the REU started with an introduction to the rest of the cohort and the facilities. I went bowling with Dr. Wole and the other REU students, which was a lot of fun and very competitive. The next day, we all gathered at Hunter College and toured the building. We met some of the mentors, including my own: Kendra Krueger. The day after was the start of the class "CSCI49383 – VR, AR, Mixed Reality," where I learned about the core principles of VR, how it works, and its history. After class, I worked with some of the other students to brainstorm for our proposals over poke and boba.

Later, I met with Kendra to write up the details of my project proposal. Kendra is the STEM Outreach and Education Manager at the Advanced Science Research Center (ASRC), and from our conversation, I can tell that she is truly an educator at heart. I’m really excited to work on this project, which will enhance the learning experience for K-12 students visiting the Illumination space in the ASRC. Kendra gave me two different paths to go down, but ultimately, I have decided to focus on structural biology instead of neuroscience. It’s a subject I’m more comfortable with and I think I can create a good STEM education project about it. Finally on Friday, I met with the rest of the cohort, where we got an introduction to Paraview and presented our project proposals.

A snapshot of the Paraview tutorial we went through.

Week 2: Working at the Advanced Science Research Center 🧪

I started the week by getting set up at the ASRC and being introduced to the other high school and undergraduate researchers working there over the summer. I got to talk more with Kendra about my project and briefly met Eta Isiorho, a researcher at the ASRC whose expertise in structural biology and crystallization I will be relying on. Then, I attended a lab safety training session over Zoom so that I'd be able to enter Eta's lab. I also used the time to complete CITI training since the world was on fire and it wasn't safe to go outside (see photo below).

Smoky air outside the dorm, caused by smoke blowing down from the Canadian wildfires. The AQI was almost 300.

Towards the end of the week, I attended the SciComms conference at the ASRC with the rest of the VR-REU cohort. The format was that each presenter gave an informal presentation of their research followed by a more formal, scientific version. It was really interesting to hear about the wide variety of projects going on around us, and I think attending will really help prepare me for our symposium at the end of these 8 weeks.

Part of the science/research art project at the SciComms conference. The question was, “What about research inspires you?” For me, it’s my love for animals and therefore, biology.

For my project, I continued to compile sources and take notes for my literature review. I was hoping to create a mock-up for the project, but after meeting with Kendra and Eta, I think I will need to readjust my project to fit both of their expectations.

Week 3: Making Progress 📱

This week, I started to get into the meat of the project. Since this is a large project, I knew I had to break it down into smaller pieces. First, I made a mockup of what I wanted my application to look like using Figma (see below).

This is my general idea for the application that I’m developing. Students will be able to use their devices to see molecules and more through AR.

Second, I began to work with Xcode to create the real app. This took a little bit longer than I was expecting since I am still getting used to Xcode and Swift again, but I have the general layout.

The first look at my application in the Xcode storyboard.

Looking forward, I will need to work on the functionality of the application. That will be the most difficult part of the project, but I've found many YouTube tutorials that will help me understand how Apple's RealityKit works, so I am hopeful. Another issue I've been considering is how I will share my application: if I go through the official Apple App Store, I will need to submit the app for review and prepare it with the proper certificates, etc.

Outside of my project, I also met a couple more times with Eta. She showed me the crystallization lab at the ASRC and taught me more about the software she uses. I’m hoping to use some of that software to create videos of the molecules. In addition, I attended the CUNY Graduate Sciences Information Session and learned more about the process of applying to grad school. Finally, towards the end of the week, Dr. Wole taught us about VMD.

Picture of the VMD interface with overlaid structures.
Picture of the VMD interface showing a molecule with a selected functional group.

Week 4: Application Framework 🛠️

For this week, I created the structural framework of my application in Xcode. I finished the storyboard for the application and began to make ViewControllers. The vast majority of the week was spent on implementing the Collection tab. In hindsight, I think there are still ways I could have made the code more efficient. For example, I made three separate UICollectionViews instead of just using the built-in sections of a single collection view. Switching over would require adding custom section handling, though, so I will most likely not change this unless I have spare time at the end of the project. A rough sketch of that sectioned alternative follows the screenshot below.

The implemented version of the collections tab for my application.
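For my own reference, here is a minimal sketch of what the built-in sections approach could look like. The names (MoleculeCell, the placeholder data, and so on) are hypothetical and not necessarily what my app uses; this is just the general pattern of one UICollectionView driving multiple sections.

import UIKit

// Sketch of a single UICollectionView with multiple sections,
// instead of three separate collection views.
class SectionedCollectionViewController: UIViewController, UICollectionViewDataSource {
    // Placeholder data: one inner array per section.
    let sections: [[String]] = [
        ["Molecule 1", "Molecule 2"],
        ["Molecule 3"],
        ["Molecule 4", "Molecule 5"]
    ]

    // Connected in the storyboard.
    @IBOutlet weak var collectionView: UICollectionView!

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.dataSource = self
    }

    func numberOfSections(in collectionView: UICollectionView) -> Int {
        sections.count
    }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        sections[section].count
    }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        // "MoleculeCell" is a hypothetical reuse identifier registered in the storyboard.
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "MoleculeCell", for: indexPath)
        // Configure the cell with sections[indexPath.section][indexPath.item] here.
        return cell
    }
}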

I also worked on implementing the pop-up page that shows up when a molecule is selected from the Collection tab. Each molecule will have more detailed information about what the image is showing and why it is relevant (in general and to the ASRC's scientists).

This is the pop-up tab that shows more details about a selected molecule from the Collection tab.
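Continuing the sketch from above, presenting that kind of pop-up from the Collection tab can look roughly like this; DetailViewController, moleculeName, and the storyboard identifier are hypothetical names rather than the ones in my actual project.

import UIKit

// Hypothetical detail screen shown when a molecule is tapped.
class DetailViewController: UIViewController {
    var moleculeName: String = ""   // set by the presenting controller
    // Labels and images describing the molecule would be configured in viewDidLoad().
}

extension SectionedCollectionViewController: UICollectionViewDelegate {
    // (collectionView.delegate = self would also need to be set in viewDidLoad.)
    func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        guard let detail = storyboard.instantiateViewController(withIdentifier: "DetailViewController")
                as? DetailViewController else { return }
        detail.moleculeName = sections[indexPath.section][indexPath.item]
        detail.modalPresentationStyle = .pageSheet   // presents as a card-style pop-up
        present(detail, animated: true)
    }
}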

The only things left to do regarding these parts are:

  • The actual game part. Users will need to unlock the molecules through the AR camera, which means the molecules should not be clickable until the user has scanned a particular code.
  • The molecules. The image files used so far were random examples taken directly from the RCSB PDB. I will need to find relevant molecules and their images, hopefully from Eta.
  • The descriptions. I will need to write the different blurbs and have Kendra look over them. My goal for the little descriptions is that they will be informative without too much scientific jargon.

I think for this upcoming week, I will reach out to Eta and Kendra about getting files. Other than that, I will be focusing on implementing the AR part because I suspect that will be the most difficult. Once I have the files, I will also need to convert them from .pdb/.xyz/etc. to a 3D-compatible format. Fingers crossed!

Week 5: Plateau-ing 🥲

This week, I started out by trying to figure out how to convert between file formats. Most protein files are saved as .PDB (the older format) or .mmCIF (the newer one). First, I needed to change from those formats to .OBJ, a standard 3D file format. VMD and PyMOL are both supposed to have native converter tools, but I found that when I tried to convert files using these two programs, the resulting files were almost or completely empty. Eventually, I found that Chimera works the best to convert the .PDB/.mmCIF files to .OBJ. Second, I would have to go from .OBJ to .USDZ, the 3D file format based on Pixar's USD that Apple uses. ChimeraX, the newest version of the application, was the best for creating a .OBJ compatible with Apple's RealityConverter tool, which takes 3D files and converts them to .USDZ. The final file did not have color, which is definitely not ideal, but I think I will deal with that later.

A snapshot of RealityConverter taking in a .OBJ file and creating this .USDZ file.
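Once a molecule makes it through that pipeline and the .usdz is added to the Xcode project, loading it with RealityKit is the easy part. Here is a minimal sketch; the file name "molecule" and the scale factor are hypothetical placeholders.

import RealityKit

// Load a converted .usdz model from the app bundle.
func loadMolecule() -> ModelEntity? {
    do {
        let entity = try ModelEntity.loadModel(named: "molecule")   // molecule.usdz in the bundle
        entity.scale = [0.01, 0.01, 0.01]   // PDB-derived meshes tend to be large; scale down for AR
        return entity
    } catch {
        print("Could not load molecule model: \(error)")
        return nil
    }
}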

Next, I worked on implementing the actual game functions. This required setting up 'communication' between the different ViewControllers. I tried many different methods, but I found that the best approach was to put the functions that change the items inside the Collection class and then reference that Collection class from each ViewController that needed those functions.
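This is roughly the shape of the idea, sketched here with a shared instance so every tab sees the same data; the class and property names are hypothetical and simplified compared to my actual code.

import Foundation

// A single collectable item (a molecule, an instrument, a scientist spotlight, ...).
class Item {
    let id: String
    let name: String
    var isUnlocked = false
    init(id: String, name: String) {
        self.id = id
        self.name = name
    }
}

// The shared model that the Collection tab and the AR tab both talk to.
class Collection {
    static let shared = Collection()          // one instance referenced by every ViewController
    private(set) var items: [Item] = []

    func load(_ newItems: [Item]) {
        items = newItems
    }

    func unlockItem(withID id: String) {
        if let item = items.first(where: { $0.id == id }) {
            item.isUnlocked = true   // Item is a class, so this updates the same object the Collection tab shows
        }
    }
}

Any ViewController can then call Collection.shared.unlockItem(withID:) when a code is scanned and reload its views the next time it appears.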

Finally, I've been trying to learn and use the basics of RealityKit in the app. Specifically, I want to use the image tracking feature. I need to track multiple images, and each image should show a specific model. I have an idea of how to do it, but I have not been able to test it. Also, I still need the actual files that I will use in the application.
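The rough idea I have in mind looks something like the sketch below; it is untested, and the resource group name "QRCodes", the image name "qr1", and the model file are all hypothetical placeholders.

import ARKit
import RealityKit

// Run image tracking and attach a model to one of the reference images.
func startImageTracking(on arView: ARView) {
    let config = ARImageTrackingConfiguration()
    config.trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "QRCodes", bundle: nil) ?? []
    config.maximumNumberOfTrackedImages = 4
    arView.session.run(config)

    // An anchor tied to a specific reference image; each code gets its own anchor and model.
    let anchor = AnchorEntity(.image(group: "QRCodes", name: "qr1"))
    if let model = try? ModelEntity.loadModel(named: "molecule") {
        anchor.addChild(model)
    }
    arView.scene.addAnchor(anchor)
}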

Week 6: Everything is Looking Up 🥳

I began the week knowing that I would need to get the AR function implemented. My goal is to do user testing next week, and I can't do that with an app that doesn't have AR, since the whole point of this program is to use XR in an innovative way. I was starting to panic as the deadline quickly approached.

As a result, I worked on setting up the AR. I finally began to test on my iPhone instead of the built-in simulator on my laptop. On the first tab of my application, there is an ARView. My first issue was that this ARView was just showing up as a black screen. Eventually I got the camera to work by setting up permissions in the app's .plist (property list) file, specifically the camera usage description key (NSCameraUsageDescription).

The first tab of my application with the working camera.

My application uses images to track where the model should be placed. Therefore, using my phone camera allowed me to actually see the model from different perspectives. I made a couple of sample scenes in Apple's RealityComposer and then imported the project into my Xcode project. In RealityComposer, I was able to display the model by scanning the image (as seen below), so I assumed that it would work in Xcode. It did not.

This was the sample molecule model in RealityComposer using my phone camera. The model is sitting on top of the QR code.

I ran into a complete roadblock with the AR in the middle of the week. I found that my Scene was loading correctly but was not connecting to the Anchor. I honestly think I spent 12 hours searching for a solution. I kept adding and testing code over and over. What was the solution, you might ask? Deleting everything except the code that loads the models in from the scene… Sometimes the simplest answer is the solution. As a result, each model correctly showed up when its respective image appeared. (I'm updating this blog in the middle of the week because I need to share that I succeeded 🥹.)

Using ARView to scan the QR codes and place the models from RealityComposer on their respective QR code.
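For anyone curious, the "simplest answer" boils down to something like the snippet below. When a RealityComposer project (say Experience.rcproject with a scene called Molecule; both names are hypothetical here) is added to Xcode, Xcode generates a loader for each scene, and the loaded scene already carries its image anchor, so appending it to the ARView is enough.

import RealityKit

// Load the Xcode-generated RealityComposer scene and let its built-in image anchor do the work.
func loadComposerScene(into arView: ARView) {
    do {
        let scene = try Experience.loadMolecule()   // "Experience" and "Molecule" are placeholder names
        arView.scene.anchors.append(scene)
    } catch {
        print("Failed to load RealityComposer scene: \(error)")
    }
}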

Then, I worked on the ‘unlocking’ feature. This required reworking my Item class and learning about how Substrings work in Swift. Thankfully, it was not nearly as difficult as the AR stuff. I also spent some time downloading QR codes for the image tracking. Finally, I worked on downloading files from the Protein Data Bank and writing descriptions for them.
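The Substring part ended up being the simplest piece. Assuming each reference image's name ends with its item number (e.g. "qr07", a made-up convention for this sketch), the unlocking step can look like this, reusing the shared Collection idea from Week 5:

// Called when the AR session recognizes one of the reference images.
func handleDetectedImage(named imageName: String) {
    // suffix(_:) returns a Substring (a view into imageName); wrap it in String to store it.
    let itemID = String(imageName.suffix(2))       // "qr07" -> "07"
    Collection.shared.unlockItem(withID: itemID)
    // The Collection tab reloads its collection view the next time it appears.
}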

Week 7: Final Stretch 🏃🏻‍♀️

This week consisted of fixing a lot of small things before doing user testing. For example, one issue I had was that I needed to load the arrays of my Item class before ‘capturing’ a picture of the Item’s model in AR. My solution was to switch the positions of the Collection and AR tabs. This meant the Collection tab would load first. I also think logically it makes more sense for the Collection to be first as an introduction to the application.

Another major part was fixing the QR codes I was using. I originally generated the QR codes online using QRFY. Then I used a photo editor to add a blank space in the middle with the QR code's number. The issue I ran into was that the QR codes were too similar: Apple's AR system kept mistaking them for one another, resulting in tons of overlapping models. At first, I thought the issue was that I was testing on a screen; however, I printed them out and they were still glitching. I then spent a couple of hours in the photo editor adjusting the QR codes by hand until Apple's tools accepted them as different enough.

I also learned how to export the files with color. Instead of the .OBJ file format, I started using the .GLB/.GLTF format. The command in ChimeraX looks like: “save filename.glb textureColors true”.

I then collected all of the files that I would need. I decided on three major subjects to cover: 1) protein structure, 2) x-ray crystallography, and 3) scientists at the ASRC. All of the protein molecules were downloaded from the RCSB Protein Data Bank. I wanted to use 3D models for the x-ray crystallography process, but I realized I did not have enough time to make them myself, so I used images from a presentation that Eta had sent me. For the scientist spotlights, I went through the ASRC's faculty page and read a ton of papers until I found two more faculty members (in addition to Eta) who work with protein structures. Once I had all the images and converted files, I put them into my RealityComposer project. Then, I wrote captions for each one.

What my Reality Composer project looked like with the correct models and QR codes.

Finally, on Thursday of this week, I did user testing. It was a bit nerve-wracking, especially because I learned that I could not download the application on everyone’s phones. Apparently Apple only lets you download onto three devices, and for some reason, I could only download it onto my phone and one other person’s phone. I even tried to sign up for the paid developer program ($100 fee…) but it would take 2-3 days to get approved and it was the morning of user testing. Ultimately, I decided to split up the group into two and have everyone share the two phones.

The testing itself went pretty well! I was pleasantly surprised by how invested everyone was in finding all of the QR codes. Everyone was also quite impressed by the AR. The models stay still on the QR code, so moving around in real life allows the user to see a model from different perspectives.

Another part of the Thursday tour was my speech. I was invited to give a 30-minute talk to my cohort about something related to research. My topic of choice was “Surviving Academia 101.” This is something I feel pretty strongly about, since I am still figuring out my own path through academia. To be honest, it certainly was not the most well-rehearsed speech, but I think (and hope) that my passion for the subject made up for it. I talked about my experience with wanting to do research but not feeling like I belonged.

A photo of me giving a speech to my cohort about research.

Week 8: Saying Goodbye 🥲

Over the weekend, I started to write my final paper on Overleaf. I had already set up the general structure and some of the sections. Thanks, earlier me! The main thing that I did was updating my methods section and starting to look at the results. Although I did not get as much user data as I would have liked, I definitely had enough to consider my work a preliminary study.

On Monday, I went in to help test some of the other students’ projects. It was really impressive seeing what everyone else had accomplished in just 8 weeks! Dr. Wole also helped me with some questions I had about how to visualize my data. Later that night, all of us students met up for dinner. It was so much fun.

2023 VR-REU students dinner

For the rest of the week, I spent most of my time grinding out the paper and preparing my slides for the symposium. I honestly did not have too much trouble with using LaTeX. My real issue was finding the right words to summarize everything I did. There were parts where I wanted to overshare about the process (specifically to complain about all the problems I had run into). There were also parts where I had no idea what to write. Still, by writing a couple sentences and then switching when I ran into a mental roadblock, I began to make significant progress. Writing my paper while also working on the presentation helped a lot as well, since I could just take the information from the paper and simplify it for the presentation. Before long, the slide deck for my presentation was done.

My presentation on what I had spent the last 8 weeks doing. It was 10 minutes long with another 2 minutes for questions.

Thursday morning was the VR-REU Symposium. One by one, we presented our projects, talking about how our projects took shape, what challenges we had faced, and our results. Even though I’d seen everyone’s projects by that point, it was quite interesting to hear about how they addressed issues with their projects.

Finally, Friday arrived. Our last day! It’s hard to believe that time passed by that quickly. I went to our usual classroom in Hunter and finished up my paper. I submitted it to the ACM Interactive Surfaces and Spaces poster session. Then, we said our goodbyes.

For my last night in NYC, I went for a quick walk through Central Park and reflected. I’m so grateful that I got to be a participant in this REU. I’ve learned so much from Dr. Wole, Kendra, and my fellow students. I challenged myself with a project that was all my own, and I am very proud of how it turned out. Wishing everyone else the best with their future endeavors! I know you all will do amazing things!

Final Paper:
Sabrina Chow, Kendra Krueger, and Oyewole Oyekoya. 2023. IOS Augmented Reality Application for Immersive Structural Biology Education. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 14–18. https://doi.org/10.1145/3626485.3626532 – pdf
