VR as a sensory stimulation tool for adolescents with ASD and anxiety
Amaya Keys, Howard University
Week One:
As a pleasant start to the program, I met the other students I'd be spending the next eight weeks with over a fun night of bowling. On Monday morning, we briefly met with our mentors via Zoom following an in-depth tour of Hunter College's facilities. My mentor, Mr. Daniel Chan, recommended that we keep the scope of the project in mind and not overextend ourselves with a wildly complex project for only 8 weeks. For me specifically, he suggested that I pick one specific disability, do thorough research on it, and from there decide how a VR/AR application may help. I met with him one-on-one twice during the week to receive guidance on all of the jumbled thoughts racing through my mind. After bouncing from idea to idea, I finally landed on the development of a virtual multi-sensory stimulation room for adolescents with ASD who experience anxiety. To close out the week, we completed a short lab on ParaView, decided on a conference that we would submit our research to at the end of the summer, briefly reviewed our updated project proposals, and then finally got a sneak peek into the VR lab. Now that my project is solidified, I hope to dive into experimenting with Unity, as I know I have a steep learning curve ahead of me.
Week Two:
To start off the week, I finished the remaining modules of my CITI training so that I could focus solely on writing my literature review. I also received one of Dr. Wole's VR Meta Quest headsets to take home and begin experimenting with. I attempted to set it up with my computer and phone on my own, as well as deploy a Unity project to it, but experienced many challenges. Later in the week, Kwame assisted me with the set-up process, and we ran an escape room demo to test its functioning. Following successful set-up, I worked to enable hand tracking on the headset and test another demo but ran into further difficulties that I am still working to resolve.
I spoke with my mentor at our scheduled Tuesday meeting time, and we just ensured that I was feeling confident and secure in the direction my project was heading. Throughout the week I continued to take notes on various studies and worked to weed out the ones I felt were unnecessary to include in my paper. The first draft of my literature review was unnecessarily long and contained way too many sources that weren’t directly related to my research, but by Thursday night, I had finalized a version that I was happy with. I sent it to my mentor for feedback and while he had a few comments about potential changes, he was overall pretty satisfied with how it looked. On Friday, we conducted a brief lab on Tableau and updated Dr. Wole on the status of our projects.
Next steps for me include solidifying my methodology and making headway on the development of my sensory room. I intend to search through the Unity Asset Store and TurboSquid to hopefully find some ready-made elements for my project. Before anything else, though, I definitely want to fix the errors with the hand tracking functionality, as this will be a significant element of my application.
Week Three:
Now that the introduction and related works sections are out of the way, I used this week to focus on the development of my application in Unity. I got off to a pretty slow start; I worked through figuring out what packages I needed to import and how to enable hand tracking. My next goal was to create 3D bubbles that the user can pop as they rise from the floor. I looked through the Unity Asset Store and TurboSquid, but couldn't find any pre-made model that would fit my needs. I followed a tutorial to create a basic bubble GameObject and wrote a script for the bubbles to multiply and continuously respawn. I am still trying to equip them with a poke interaction so that they "pop" when touched. I then created a light panel on one wall of my room and am working to make each circle change colors when tapped. Now that I have a little more experience with navigating and troubleshooting in Unity, I hope to be more productive with development next week.
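For anyone curious, the spawning/popping script boils down to something like the sketch below. The class and field names are simplified placeholders, and the "pop" here is a plain trigger collision standing in for the Meta poke interaction I'm still debugging (the bubble prefab is assumed to have a trigger collider and a kinematic Rigidbody so that touch events fire).

```csharp
using UnityEngine;

// Sketch: spawns bubble prefabs near the floor and lets each one rise
// until it is either touched ("popped") or floats too high.
public class BubbleSpawner : MonoBehaviour
{
    public GameObject bubblePrefab;    // sphere with a trigger collider + kinematic Rigidbody
    public int maxBubbles = 10;
    public float spawnInterval = 1.5f;
    public float riseSpeed = 0.3f;

    private float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer >= spawnInterval && transform.childCount < maxBubbles)
        {
            timer = 0f;
            Vector3 offset = new Vector3(Random.Range(-1f, 1f), 0f, Random.Range(-1f, 1f));
            GameObject bubble = Instantiate(bubblePrefab, transform.position + offset,
                                            Quaternion.identity, transform);
            bubble.AddComponent<Bubble>().riseSpeed = riseSpeed;
        }
    }
}

public class Bubble : MonoBehaviour
{
    public float riseSpeed = 0.3f;

    void Update()
    {
        // Drift upward; despawn once the bubble floats out of reach.
        transform.position += Vector3.up * riseSpeed * Time.deltaTime;
        if (transform.position.y > 3f) Destroy(gameObject);
    }

    void OnTriggerEnter(Collider other)
    {
        // Stand-in for the poke interaction: pop on any touch.
        Destroy(gameObject);
    }
}
```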
Week Four:
This week I focused on making as much technical progress in Unity as possible while also thinking about my methodology to prepare for our midterm presentations on Friday. I decided that the five elements I wanted in the environment are a light panel, poppable bubbles, 3D object play, glowing interactive stars, and an alphabet/number board. By attaching a script to the Interactable Unity Event Wrapper from the Meta Interaction SDK, I was able to enable random color changes of the buttons on the light panel when touched. From there, it became so much easier to add interactive components to other objects in the scene now that I had that baseline. I also decided to make the light panel resemble an LED hexagon panel commonly used by individuals with Autism. I then modified a map and pins from one of the Meta Interaction SDK samples and imported models from Sketchfab and CGTrader to recreate a magnetic alphabet board. The user can drag letters and numbers from the tray and place them freely on the board. The third element of focus this week was importing 3D shapes and enabling the necessary components to allow the user to scale, rotate, position, and throw them in their hands. I now have a bubble popping animation clip but was still unsuccessful in getting them to pop when poked, which will be a focus for next week. Another issue I’m running into is that the game starts in a new camera angle every time I run it, and I am not sure why. I have worked around it for testing purposes, but it is something I will definitely have to resolve later so that there is no confusion when the participants put on the headset. As a relieving end to the stressful week, midterm presentations went well, and it was interesting hearing the updates on everyone’s projects. It’s refreshing to know we’re all dealing with different challenges and are working through them at around the same pace.
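Going back to the light panel for a moment, the color-change hookup amounts to something like the sketch below: a public method picks a random color, and that method gets wired up in the Inspector to the event wrapper's selection event. The class name and setup are placeholders rather than my exact script.

```csharp
using UnityEngine;

// Sketch of one light-panel button: the Interactable Unity Event Wrapper's
// "When Select" event is pointed at ChangeToRandomColor() in the Inspector.
public class LightPanelButton : MonoBehaviour
{
    private Renderer panelRenderer;

    void Awake()
    {
        panelRenderer = GetComponent<Renderer>();
    }

    // Called by the event wrapper whenever the button is poked/selected.
    public void ChangeToRandomColor()
    {
        panelRenderer.material.color = new Color(Random.value, Random.value, Random.value);
    }
}
```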
Week Five:
This week was my most productive in terms of technical work. While I unfortunately was not able to get the bubble animation working in Unity, I did enable a simple popping functionality where the bubbles are destroyed on touch. The final activity that I created was a series of interactive stars in the sky that have paths drawn between them when "activated" or lit. I then imported a counter surface for my activities to be displayed on and created a simple dome in Blender as the exterior. The new models caused some technical problems with the existing objects, so a lot of my time was spent debugging those issues.
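The constellation logic is roughly the sketch below; the names are placeholders, and in my scene the ActivateStar call would be wired to whatever touch/select event lights the star.

```csharp
using UnityEngine;

// Sketch: each star flips to "lit" when activated, and a LineRenderer
// draws the path between the stars that have been lit so far (in order).
[RequireComponent(typeof(LineRenderer))]
public class Constellation : MonoBehaviour
{
    public Transform[] stars;   // star objects in the order the path should connect them

    private bool[] lit;
    private LineRenderer line;

    void Start()
    {
        lit = new bool[stars.Length];
        line = GetComponent<LineRenderer>();
        line.positionCount = 0;
    }

    // Called from a touch/select event with the index of the star that was lit.
    public void ActivateStar(int index)
    {
        lit[index] = true;
        RedrawPath();
    }

    private void RedrawPath()
    {
        // Draw segments up to the last consecutively lit star.
        int count = 0;
        while (count < lit.Length && lit[count]) count++;

        line.positionCount = count;
        for (int i = 0; i < count; i++)
            line.SetPosition(i, stars[i].position);
    }
}
```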
I also worked on completing the documents required for IRB approval. However, by the end of the week, no prospective participants had reached out to me, so Dr. Wole and I discussed whether it was even worth going through the IRB process at all. He brought up the idea of possibly designing a work-in-progress research paper instead of going through with a user study. This would give me the opportunity to either turn this into a long-term project or allow another researcher to pick it up.
Week Six:
On Tuesday, Dr. Wole, Mr. Chan, and I had a discussion about the direction of my project and decided it would be best to follow through with the work-in-progress paper and put the IRB process on hold for now. With that being said, I don't have much to write about this week, as I just focused on making tweaks to my virtual environment and editing my experimental procedure. The room now has a welcome console with options for the user to control audio, as well as text instructions beside each activity. I still need to record a short demo of each activity in action to be played above the instructions for those who may want a visual. I also added an additional constellation for users to interact with. I am having many technical difficulties with the letter/number board and may need to start brainstorming some alternative methods for how it will function. I would like the user to be able to drag letters from the tray and place them on the board, which will then "lock" the object to the board surface as if it were magnetic. This is proving to be quite difficult, so instead they may have to just touch the letter/number they'd like and it will appear on the board. I'd like to have everything finished by early next week so I can receive feedback on the environment from others in my cohort. It will not be a traditional user study, but it is still feedback that can ultimately be included in my paper.
Week Seven:
This week I modified my experimental design and found an anxiety scale that can be used to quantify my results. The Glasgow Anxiety Scale is a 27-question instrument that measures anxiety in people with intellectual disabilities. I modified and shortened it to 11 questions to make it relevant for my study. After discussing my options with my mentor and Dr. Wole, we decided it would be best to conduct a pilot study for now, include the preliminary results in my paper, and write about the time constraint being a limitation that prevented a full-scale study. I finalized my pre- and post-intervention survey and prepared to immerse my peers within the environment. On Friday, we briefly went over statistical analysis with Dr. Wole, and I was advised to perform a Friedman test (the non-parametric counterpart of a repeated measures ANOVA) for the survey ratings and a one-way repeated measures ANOVA for the anxiety scale. While I wasn't familiar with using SPSS Statistics or performing any kind of statistical analysis for that matter, I was confident I could figure it out with the help of the links and tutorials Dr. Wole posted, since I had pretty simple data. After the session, I ran the study with 7 of my peers who were able to stay back and help me. Even by this point, I still wasn't completely satisfied with how the room looked and functioned, but with only a week left I had no choice but to move forward. I really hope that this is a project I am able to continue working on next semester so I can finally explore the benefits for the target group!
Week Eight:
I can't believe it is the final week! This week honestly felt like a blur; one day I was gathering my stats and writing my results, and the next I was giving my presentation. I finalized the methodology and results sections to send to my mentor early in the week and then began drafting my discussion and conclusion. On Wednesday, we participated in the teacher motion capture session to help Dr. Hayes and Dr. Wole's research study. I thought it was a really cool project and found many of the presentations to be quite interesting. My favorites were Dr. Hayes' talk on XR for education and Mr. Lichtman's talk on Inclusive Game Design. After a long day, I came back and worked alongside a couple of my peers to finish up my slides for Thursday afternoon's final presentation. On the day of presentations, I was very nervous and didn't feel as prepared as I had for midterms, just because of how fast everything seemed to be moving these past 2 weeks. I still feel like it went well, despite a few mess-ups, and was relieved to have it over with. Lots of my family and friends joined the Zoom to watch and support me, and I was immensely grateful for that. The Iowa State students also seemed very interested in our projects and asked some great questions. I appreciated them sticking with us through all the technical difficulties and construction noises. The following day, we listened to their presentations after our morning paper writing session. I was impressed with all that they were able to develop in Unity in a group setting with such a short turnaround time. We wrapped up by going over any final conference submission details, and with that, the summer was officially over! I had such a great time with my cohort and can say they definitely made my first research experience a memorable one! I hope all our papers get accepted to our respective conferences and that I can see everyone's talks and poster presentations when the time comes.
Project Theme: Nutritional Education
Richard Erem – University of Connecticut – Professor Margrethe Horlyck-Romanovsky
Week One:
Our VR-REU 2023 program commenced on Tuesday, May 30th, 2023, following a delightful bowling session on the prior day that served as an icebreaker for our team. After acquiring our temporary IDs at Hunter College, we enjoyed a comprehensive building tour before proceeding to our designated learning space.
Our first interaction with project mentors happened over a Zoom call where introductions were exchanged, and a flurry of questions engaged both parties. Queries revolved around our expectations from the program and changes the mentors were planning for the year.
As the week progressed, we delved into an insightful introductory lesson on virtual, augmented, and mixed reality. We explored their distinctions, their evolution, and their present-day applications showcased in several projects. In addition, we discovered tools like Unity3D, Blender, and Mixamo, with resources provided to maximize our command over these innovative tools.
On Friday, we familiarized ourselves with ParaView, a robust open-source application for visualizing and analyzing large data sets. This was primarily facilitated through a self-guided lab.
After wrapping up the lab, we reviewed project proposals from each participant. Subsequently, I had a consultation with my mentor, Professor Margrethe Horlyck-Romanovsky, about my proposal. Generously, she offered several constructive revisions and assisted in refining my research direction. She encouraged me to study food deserts in Brooklyn and the Bronx, either virtually (via a service like Google Maps) or physically. This approach is anticipated to enhance my comprehension of the challenges people confront in pursuit of healthy eating habits, thereby enriching the authenticity of my simulation. She further equipped me with additional resources like a map of high-need areas and critical data on New York City’s FRESH (Food Retail Expansion to Support Health) program.
Overall, I am very eager to initiate the production of my project and look forward to the upcoming week with great anticipation!
Week Two:
Going into Week Two, I was determined to master the basics of Unity3D, so I played with the application for several hours each day, experimenting with various 3D models until I settled on a nice supermarket with several car prefabs. I made sure my project wasn't too graphically intensive so I wouldn't have to deal with a slew of optimization problems later on. I then utilized the spline package known as Curvy Splines 8 (which I was familiar with, as I had briefly used it back in high school for fun). I used several splines to make the cars drive around the map (so that the game feels lively before you enter the supermarket), but I struggled greatly with making the turn animations look smooth and realistic. I plan on fixing this in Week 3, and after I do, I plan on working on the next scene (entering the supermarket). My current project status can be viewed below:
As far as research goes, I dedicated an extensive amount of time to researching my project topic in order to gather enough credible information to construct a proper literature review. Instead of relying too heavily on Google Scholar, I opted to utilize my own university's library resources, as my mentor advised me that it would give me more substantiated data. I converted my literature review and my project proposal into Overleaf to follow the guidelines and format that Professor Wole wanted us to use. I quite liked it, as it made my work look very professional and neat.
In class, Professor Wole taught us the basics of writing a research paper as well as explaining the hardware and software components that go into most VR headsets. On Wednesday we learned about immersive visual and interactive displays, which are essentially digitally generated environments that engage users' senses with lifelike visuals and responsive controls. We additionally got to view a bunch of VR projects, which ended up being really cool and really funny, as some of them included hilarious comedy.
On Friday, we did not have class but instead attended CUNYSciCom, a science communication symposium where several PhD-level students presented their research in front of an audience. The program's ultimate goal was to build better communication skills for STEM students, and there were even cash prizes for the best presentations, with $500 being the highest possible award. I took several notes during the presentations, asked questions, and even played with some playdough-like material in an attempt to create a model with an underlying deep message about science in general (it ended up poorly!). From learning about MAKI (Malaria-associated Acute Kidney Injury) to DNA G-Quadruplexes to even human-elephant conflict, I thoroughly enjoyed my experience at the symposium and I hope to attend one again in the near future.
It was a very packed week full of hardcore research, game development, and learning, and I am once again very excited for the upcoming week to see what my future holds!
Week Three:
(This is going to be significantly longer than my previous posts so bear with me!)
As I came into Week Three, I continued to get the hang of Unity3D's various features, in particular its animation system. But before that, let's talk about the cars. I managed to create a spline that makes the cars move around corners in a very smooth manner; however, the cars would randomly flip over and glitch out, and considering I still needed to add Box Colliders to each of the cars (so that they are unable to drive through each other), I decided to scrap the idea for now. As a last desperate attempt, I tried to bake a NavMesh onto the road looping around the market, but a few problems arose. It didn't cover the entire width of the road (I know, I can adjust that, but the whole thing wasn't worth it anyway) and it actually caused a huge performance drop in Unity3D, so I gave up on it for now. It's a small detail in the game anyway, so I can work on it after the more important stuff is completed.
Speaking of more important stuff, I began to work on my player model. I found a temporary one off the Unity Asset Store (although when the game is completed, ideally you'll be using your own avatar that you've scanned in) to use as a placeholder and then utilized Cinemachine to set up a third-person view of him. Cinemachine is basically just a suite of camera tools for Unity which makes it easier to control the behavior of cameras in games. I adjusted the camera's position to a place where third-person games usually have it, then set it to follow and look at my player model. By the way, I know the majority of VR games are in first person in order to capture that perfect feeling of immersion, and you may be wondering why I am working in third person. The reason is that ideally my game will adjust the player model based on what they consume in the game, and I want the user to be able to view that easily with the press of a button. So I'll incorporate a way to seamlessly switch between first and third person with a key that isn't commonly used for other functions (for example, I obviously wouldn't have the key be W or something).
Next came animations. Using the website Mixamo (a platform that offers 3D computer graphics technology for animating 3D characters), as Professor Wole had recommended, I installed some basic locomotion animations for forward, backward, left, and right movement, although I initially just had forward, left, and right. Here's how I did it.
After I had dragged and dropped them onto my assets folder in Unity, I clicked Window then Animation then Animator. I created an Idle state and then slapped on an installed Idle animation. Then I created two float parameters, vertical and horizontal. The idea is that their values will change based on the input of your keys, which would in turn trigger specific animations based on the conditions of the transitions between states. I created a Walk FWD blend tree (blends animations based on set parameters), added 3 motions (initially anyway), and mirrored the first one. First motion was to walk forward while turning right. I mirrored this one for the left turn so I didn’t have to use an extra animation (meaning the third motion was the same as the first motion). I set the first motion to a threshold of -1 and the third motion to a threshold of 1, so that holding the A key will make the turn longer (and fluid!) until you reach the threshold of -1 and vice versa for the third motion and the D key. You have to uncheck ‘Automate Thresholds’ in order to be able to do this by the way.
Then, I went back to my base layer and created a transition from the Walk FWD blend tree to the Idle state with the condition vertical less than 0.1. This essentially means that if you're currently walking forward and you let go of the W key (a.k.a. lower your vertical value, since W is tied to it), your state will transition to Idle, which indicates that you've stopped moving. The reverse logic (vertical greater than 0.1) was used for the transition going from Idle to Walk FWD. This is all super simple stuff (not for me though, since I had to learn it and then do it), but the more complicated your locomotion is (maybe including jumps, left/right strafes, wall-running, etc.), the more complex these general tasks in Animator will be. I forgot to mention, I unchecked "Has Exit Time" for my transitions so that there was no delay between pressing a key and having the animation trigger.
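For completeness, the piece that feeds those two float parameters from the keyboard is a small driver script along these lines; this is a sketch that assumes the Animator parameters are named "horizontal" and "vertical" exactly as described above.

```csharp
using UnityEngine;

// Sketch: read the standard input axes every frame and feed them into the
// Animator's "horizontal" and "vertical" parameters. The damping argument
// smooths the values so the blend tree doesn't snap between thresholds.
[RequireComponent(typeof(Animator))]
public class LocomotionInput : MonoBehaviour
{
    public float damping = 0.1f;

    private Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // W/S drive "Vertical", A/D drive "Horizontal" (old Input Manager axes).
        animator.SetFloat("vertical", Input.GetAxis("Vertical"), damping, Time.deltaTime);
        animator.SetFloat("horizontal", Input.GetAxis("Horizontal"), damping, Time.deltaTime);
    }
}
```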
Please view the gif below to see my player model in action:
A closer look at the temporary Player Model:
In regards to what we learned in class throughout the week, we began by delving deep into important concepts like Immersion, Presence, and Reality on Monday. These terms are crucial to understanding how users experience VR environments. I now understand immersion as a measurable quantity, defined by the extent of sensory information and the consistency of system responses. Presence, the subjective response to a VR system, was distinguished as a psychological state of "being there" in a virtual environment. The overall discussion offered intriguing insights into the fidelity of a VR system in reproducing real-world experiences.
On Wednesday, we shifted our focus to the technical aspects, diving into 3D tracking, scanning, and animation. We explored how tracking systems capture movements of the head and body to transform them into virtual actions. The class also detailed how 3D scanning can generate digital replicas of real-world objects, and how these digital models can be animated to create dynamic VR experiences.
The demo seminars provided practical applications of these various concepts. The Animation demo on Monday introduced us to various animation techniques and their uses in creating engaging VR content. The 3D Input seminar on Wednesday demonstrated different input methods used in VR and how they influence user experiences.
Finally, our Friday was dedicated to a self-paced visualization lab where we worked with Scientific Visualization using VMD. This session allowed us to install the VMD application, download sample data sets, and follow a lab manual to complete various tasks. This hands-on experience was incredibly beneficial, enabling us to get familiar with the program and better understand the practical aspects of VR in scientific visualization.
It’s been an intensive but rewarding week (as you can see by the high word count) of deepening our knowledge and skills in Virtual Reality. My goals for next week are to add more complex animations to my player model and work on a Scene 2 when the player enters the supermarket and is greeted with options. We’re almost halfway done with the program and I am very excited for what is to come!
Week Four:
Week Four posed the biggest challenge to my project to date: life. Seriously. I got some sort of food poisoning from McDonald's Grimace meal, which took me out for the majority of Friday and the weekend. My laptop stopped working, so I had to buy a brand new one, and a very expensive one at that. I had to soft-restart my project and use my predecessor's project as a template, only to find out that the version I was sent was incomplete, and I spent hours wondering why it was not working. It has been a miserable week to say the least, and this post is going to be quite short as a result. But hey, at least we got a day off. And at least I got to play around with the Oculus Quest 2 VR headset that was thankfully provided to me by Professor Wole. I must remain positive if I am to complete this project.
I also managed to implement the third person / first person camera switcher logic into my project!
This script, named “CameraSwitcher”, is used to toggle between two cameras in a Unity game: a first-person camera and a third-person camera. The switching is triggered by pressing a specified key, which is set to ‘V’ by default in this script.
In every frame of the game (in the Update method), the script checks if the switch key has been pressed. If it has, the script calls the SwitchCamera method.
The SwitchCamera method checks which camera is currently active. If the first-person camera is enabled, it disables the first-person camera and enables the third-person camera. Conversely, if the third-person camera is enabled, it disables the third-person camera and enables the first-person camera. This allows for toggling back and forth between the two views when the switch key is pressed (V).
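In other words, the logic boils down to something like the sketch below (both camera references are assigned in the Inspector; my actual file may differ slightly in detail):

```csharp
using UnityEngine;

// CameraSwitcher: toggles between the first-person and third-person
// cameras whenever the switch key (V by default) is pressed.
public class CameraSwitcher : MonoBehaviour
{
    public Camera firstPersonCamera;
    public Camera thirdPersonCamera;
    public KeyCode switchKey = KeyCode.V;

    void Update()
    {
        if (Input.GetKeyDown(switchKey))
        {
            SwitchCamera();
        }
    }

    void SwitchCamera()
    {
        if (firstPersonCamera.enabled)
        {
            // Currently in first person: hand control to the third-person camera.
            firstPersonCamera.enabled = false;
            thirdPersonCamera.enabled = true;
        }
        else
        {
            // Currently in third person: switch back to first person.
            thirdPersonCamera.enabled = false;
            firstPersonCamera.enabled = true;
        }
    }
}
```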
As far as what we learned this week, Professor Wole taught us about Interaction and Input Devices. These are essentially the hardware and software tools used to perceive, interpret, and respond to user commands within the VR environment. These devices allow the user to interact with the virtual world, control actions, manipulate objects, and navigate through space. They can also provide haptic feedback to improve the sense of immersion. We also learned about rest frames: a rest frame is the reference frame from which all other movements and interactions are measured or evaluated. It's essentially the default, stationary position in the virtual environment. We saw some cool demos ranging from realistic and non-realistic hands to the First Hand technique (hands-on interaction).
Friday was the midterm presentations, but as I was suffering from food poisoning, I decided to get some sleep that day instead of heading into class. While this week has been rough for me, I hope I can do better in the upcoming weeks, and I plan on remaining optimistic!
Week Five:
I'm happy to say that this week went a lot smoother than my previous week. For starters, I don't feel as sick. Also, my mother graciously purchased a new charger for my old laptop (thank you mom, love you!), as that turned out to be the problem with it, so I was able to return the new laptop and use the money to purchase a new desktop PC (my first one, and I built it myself!). It is pretty high-end, as it uses an AMD Ryzen 7800X3D for the CPU and an AMD Radeon RX 7900 XTX for the GPU. While it is obviously amazing for gaming, it is also AMAZING for 3D work like Blender and Unity in general. Compared to my laptop, instead of loading my Unity world in 3 to 4 minutes, it loads it up in about 20 seconds. I haven't experienced any lag either, even with a bunch of mesh colliders on all my game objects. I've gotten a nice productivity boost from the speed of this PC!
For starters, I've added Oculus VR integration into my project. This was a little tricky, and I ended up using XR Plug-in Management as opposed to the older Oculus Integration package. Then, by setting up the Locomotion System, Continuous Turn, and Walk components on my rig (XR Origin), I was able to achieve movement in my VR game! Smooth movement in VR is really just very smooth teleportation, but it works pretty well. I also added footsteps to the base game; however, I've struggled with getting them to work in the VR version, so that's something I'll deal with either in Week 6 or early Week 7; it isn't super important. I also set up the VR controllers in game, which will hopefully be used for hand tracking soon (right now, only head tracking works).
So, to be clear, my goals for Week 6 are to implement proper hand tracking and possibly footsteps that work in the VR environment, but I also want to add an option for the player, when approaching the main door, to select "Enter" and have the scene smoothly transition to the next one. I also want to have my assets set up for my second scene and include a mirror somewhere near checkout. The logic for item prices and whatnot can be done early in Week 7, and then project testing can be carried out that week as well (or early in Week 8). I also want to figure out a way to switch between first-person and third-person view in the VR environment, which shouldn't be too difficult; however, this isn't as important to complete as the previous stuff.
As far as what we learned in class this week, Professor Wole taught us about interactive 3D graphics and showed us examples involving lighting, cameras, materials, shaders, and textures. For points, lines, and polygons, I learned that they are the basic geometric primitives used to build more complex 3D shapes in computer graphics. The graphics pipeline is the sequence of steps that a graphics system follows to render 3D objects onto a 2D screen. In relation to that, the OpenGL graphics pipeline is a specific implementation of the graphics pipeline, allowing hardware-accelerated rendering of 3D and 2D vector graphics. As for the Vulkan graphics pipeline, Vulkan is basically just a high-performance, cross-platform graphics and compute API, which offers greater control over the GPU and lower CPU usage.
We briefly went over GLUT, which is short for OpenGL Utility Toolkit. GLUT provides functions for creating windows and handling input in OpenGL programs. I learned that in terms of cameras, 3D is just like taking a photograph, just over and over and over and a lot at a time! The camera defines the viewer’s position and view direction, which determines what is visible on the screen, while the lights simulate the interaction of light with objects in order to create a sense of realism and depth. For colors and materials, color defines the base appearance of an object while materials determine how the object interacts with lights. Finally, we talked about textures, which are essentially just images or patterns applied to 3D models to give them a more realistic appearance by adding detail such as wood grain or skin or something.
I've had a much better week than Week 4 now that I have recovered, and I'm excited to see what I can accomplish in the upcoming weeks!
Week Six:
I made decent progress this week! I began adding hand tracking support, including actual 3D hand models. I figured they would be more immersive than the default Oculus hand models. I followed that up by adding raycast beams (lasers) from the player's hands, which allow them to interact with objects from afar, such as clicking buttons. I added a second scene which depicts the interior of the supermarket. Then I added a transition button to the door of the exterior supermarket so the user can click it and the view will fade into Scene 2, the interior. I then spent forever trying to fix the lighting of the interior, as the prefabs I was using came from an older version of Unity. I fixed it somewhat using the Generate Lighting button in the lighting settings, which took a whopping three to four hours to finish rendering, so I let that work overnight.
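The door-to-interior transition amounts to something like the following sketch; the scene name, the full-screen fade Image, and the method names are illustrative stand-ins rather than my exact implementation.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

// Sketch: fade a full-screen black Image to opaque, then load the
// interior scene. LoadInterior() is the method the door button calls.
public class DoorTransition : MonoBehaviour
{
    public Image fadeOverlay;                                  // full-screen black UI Image, alpha 0 at start
    public string interiorSceneName = "SupermarketInterior";   // illustrative scene name
    public float fadeDuration = 1f;

    public void LoadInterior()
    {
        StartCoroutine(FadeAndLoad());
    }

    private IEnumerator FadeAndLoad()
    {
        float t = 0f;
        Color c = fadeOverlay.color;
        while (t < fadeDuration)
        {
            t += Time.deltaTime;
            c.a = Mathf.Clamp01(t / fadeDuration);   // ramp alpha from 0 to 1
            fadeOverlay.color = c;
            yield return null;
        }
        SceneManager.LoadScene(interiorSceneName);
    }
}
```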
I then added buttons to my food items. When the user clicks a button, the item is added as text to the panel on the left side of their screen. I assigned a reasonable nutritional value to each item (only four so far; I began with steak) along with a cost based on the prices I see at my local Target. An example is below.
I wrote several C# scripts, including ShoppingCart.cs, FoodButton.cs, CartUiManager.cs, and CheckoutButton.cs. They’re still a work in progress, but I’ll briefly explain each one.
ShoppingCart.cs: This script sets up a virtual shopping cart for a user, capable of storing items represented as ‘FoodButton’ objects; it also updates a visual interface (through the CartUIManager) whenever a new item is added to the cart, and can calculate the total nutritional value of all items in the cart.
FoodButton.cs: This script represents an item of food that the user can interact with (possibly in a VR or AR environment, as it uses XRBaseInteractable). Each ‘FoodButton’ has its own name, nutritional value, price, and knows about the shopping cart to which it can be added. It also sets up listeners to handle when it’s selected or deselected in the interface, making sure to add itself to the cart when selected.
CartUIManager.cs: This script manages the visual interface of the shopping cart. It displays the name and price of each item in the cart, and calculates the total price of all items. The UI is updated every time an item is added to the cart (via the UpdateCartUI method).
CheckoutButton.cs: This is the code for the checkout button that a user can interact with. Like the FoodButton, it sets up listeners to handle being selected or deselected. When selected, it calculates the total nutritional value of all items in the shopping cart, and updates the UI to show this information to the user.
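To give a rough idea of how these pieces fit together, here is a condensed, simplified sketch. It assumes the XR Interaction Toolkit and uses stand-in fields; the real scripts are still evolving, so treat this as an outline rather than the actual code.

```csharp
using System.Collections.Generic;
using System.Text;
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.Interaction.Toolkit;

// Holds the items the user has selected and notifies the UI when the cart changes.
public class ShoppingCart : MonoBehaviour
{
    public CartUIManager uiManager;
    private readonly List<FoodButton> items = new List<FoodButton>();

    public void AddItem(FoodButton item)
    {
        items.Add(item);
        uiManager.UpdateCartUI(items);   // refresh the side-panel text
    }

    public float TotalNutritionalValue()
    {
        float total = 0f;
        foreach (FoodButton item in items) total += item.nutritionalValue;
        return total;
    }
}

// An interactable food item that adds itself to the cart when selected.
public class FoodButton : XRBaseInteractable
{
    public string itemName;
    public float nutritionalValue;
    public float price;
    public ShoppingCart cart;

    protected override void OnSelectEntered(SelectEnterEventArgs args)
    {
        base.OnSelectEntered(args);
        cart.AddItem(this);
    }
}

// Renders the cart contents and running total into a UI Text element.
public class CartUIManager : MonoBehaviour
{
    public Text cartText;

    public void UpdateCartUI(List<FoodButton> items)
    {
        var sb = new StringBuilder();
        float total = 0f;
        foreach (FoodButton item in items)
        {
            sb.AppendLine($"{item.itemName}  ${item.price:0.00}");
            total += item.price;
        }
        sb.AppendLine($"Total: ${total:0.00}");
        cartText.text = sb.ToString();
    }
}
```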
They all work together in unison to hopefully create an immersive and engaging VR shopping experience for my simulation. Next week I plan on adding Scene 3, a 'one year fast-forward' black transition scene, and a new script that switches the default player model to different ones based on the nutritional value of the food you've checked out. This will be the trickiest part yet, but I'll see what I can manage.
Now for what we studied during class this week. We learned about GPUs and Immersive audio (including a demo on audio) on Monday and Immersive Telepresence and Networking along with Perception, VR Sickness and Latency on Wednesday.
A Graphics Processing Unit (GPU) is a piece of hardware designed to quickly create images for display on screens, making them essential for tasks like gaming and video rendering. GPUs are used in a variety of applications, from creating the graphics in video games, to speeding up artificial intelligence processes and scientific computations. There are two types of GPUs: integrated GPUs, which are built into the central processing unit (CPU) of a computer, and discrete GPUs, which are separate hardware components. GPUs have evolved from simple machines designed for 2D image acceleration to powerful devices capable of rendering complex 3D images. The main difference between a CPU and a GPU is their purpose: a CPU is designed to quickly perform a wide variety of tasks one after the other (low latency), while a GPU is designed to perform many similar tasks at the same time (high throughput), making it excellent for creating images or performing calculations that can be run in parallel. General Purpose GPUs (GPGPUs) are GPUs that are used to perform computations that were traditionally handled by the CPU, expanding the scope of tasks that can benefit from a GPU’s parallel processing capabilities. In computing, latency refers to the time it takes to complete a single task, while throughput refers to the number of tasks that can be completed in a given amount of time.
The auditory threshold refers to the range of sound frequencies humans can hear, typically from 20 to 22,000 Hz, with speech frequencies specifically falling between 2,000 and 4,000 Hz, while ultrasound refers to frequencies above 20,000 Hz, which are inaudible to humans but can be heard by some animals. I actually didn't know this exactly, so it was cool to learn something new!
In the context of virtual reality (VR), telepresence refers to technology that allows you to feel as though you’re physically present in a different location or virtual environment through immersive sensory experiences. Related to this, Sensory Input refers to the information that your VR system receives from sensors, like your headset or handheld controllers, which capture your movements and translate them into the VR environment. Within VR, mobility refers to your ability to move around and interact within the virtual environment, which can range from stationary experiences to full room-scale movement. Audio-Visual Output describes the sound (audio) and images (visual) that the VR system produces to create an immersive virtual environment, typically delivered through headphones and a VR headset.
In VR terms, manipulation means interacting with or changing the virtual environment, typically through gestures or controller inputs, like grabbing objects or pressing virtual buttons.
Beaming is a term used in VR to describe the act of virtually transporting or projecting oneself into a different location, effectively simulating being physically present in that environment.
I hope for next week I am able to wrap up my simulation and get some test results. Only time will tell!
Week Seven:
This has undoubtedly been my most productive week so far! I'll try and keep this short though. Here are the changes I've made to my game. For starters, I removed the initial scene where you walk into the supermarket because it is pointless, has a lot of anti-aliasing issues (texture flickering), and marginally impacted performance. So now the game starts inside the supermarket. I added a player model from Mixamo to my VR rig and then, using animation rigging, made it possible for the user to have head tracking and hand tracking for a more immersive experience. I wrote a bunch of new scripts, namely CartUIManager.cs, CheckoutButton.cs, DetailPanelController.cs, FoodButton.cs, ItemButton.cs, ScaleAdjustment.cs, and ShoppingCart.cs. I'll briefly explain what each one does.
CartUIManager.cs: Manages the user interface for the shopping cart, updating the display to show the items in the cart, their quantities, and the total price and calories.
CheckoutButton.cs: Attached to the checkout button in the game; when clicked, it calculates the total calories and price (including NYC's tax rate) of the items in the cart, updates the UI, and triggers a scale adjustment based on the total calories.
DetailPanelController.cs: Controls the detail panel that displays the nutritional information of a food or drink item when the player hovers over it.
FoodButton.cs: Attached to each food item button in the game, storing the nutritional information and price of the food item and adding the item to the shopping cart when clicked.
ItemButton.cs: An abstract script that serves as a base for the FoodButton and DrinkButton scripts, defining common properties and methods such as the item name, calories, price, and the method to add the item to the cart.
ScaleAdjustment.cs: Adjusts the player's avatar based on the total calories of the items in the shopping cart when the checkout button is clicked.
ShoppingCart.cs: Represents the shopping cart, storing the items added to the cart along with their quantities, and providing methods to add items to the cart, calculate the total price and calories, and clear the cart.
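As an illustration of the scale-adjustment idea, a simplified sketch is below; the baseline calories, clamp range, and scaling factor are illustrative numbers rather than the exact values my script uses.

```csharp
using UnityEngine;

// Sketch of ScaleAdjustment: map the total calories at checkout to a
// body-width scale factor and apply it to the avatar.
public class ScaleAdjustment : MonoBehaviour
{
    public Transform avatarBody;             // the player model to resize
    public float baselineCalories = 2600f;   // rough maintenance intake (illustrative)
    public float scalePerSurplusCalorie = 0.0002f;

    private Vector3 originalScale;

    void Awake()
    {
        originalScale = avatarBody.localScale;
    }

    // Called by the checkout button with the cart's calorie total.
    public void ApplyScale(float totalCalories)
    {
        float surplus = totalCalories - baselineCalories;
        float widthScale = Mathf.Clamp(1f + surplus * scalePerSurplusCalorie, 0.8f, 1.5f);

        // Widen (or slim) the avatar on X/Z relative to its original scale,
        // keeping the height unchanged.
        avatarBody.localScale = new Vector3(originalScale.x * widthScale,
                                            originalScale.y,
                                            originalScale.z * widthScale);
    }
}
```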
These seven scripts make up the bulk of my in-game functionality. After this, I spent 5-6 hours at night using the Target app to look up various typical supermarket food items, then inserted them as buttons on each shelf of the store along with their nutrition facts. I added a Tracked Device Graphic Raycaster to each of the buttons so that they can be detected in VR by the raycast beams that come from the player's hands. Then I added an Event Trigger that uses the DetailPanelController script so that whenever a player hovers their beam over an item button (Pointer Enter and Exit BaseEventData), it shows the nutritional facts of that item, which disappear once the beam comes off of it. Then I attached the FoodButton or DrinkButton scripts to the various items, which is where I entered all the nutritional facts I got from the Target app. I constructed a basic mirror along with security cameras, which follow the player and allow them to see their virtual avatar in the game in real time. Then I made a panel, placed it high above the player, and wrote an introductory text along with an explanation of the game controls. The text is seen below in the image.
Of course that’s way too difficult to read here so here you go:
Welcome to the DietDigital Game Simulation!
In here, you'll be simulating the experience of shopping in a VR supermarket environment. You'll also be able to check out your food items and see immediate changes to your physique once you do so via the mirror or security cameras.
Note that the items you select here are realistically what you would eat in a single day, and once you press the checkout button, the changes to your body reflect (or at least attempt to) what would happen if you stuck to that diet for a year.
The controls are simple: Hover over an item button to reveal its nutritional facts and use the right trigger to add it to your cart.
Happy shopping!
I added some background audio to the store which just sounds like an average bustling supermarket environment for extra immersion.
And that wraps up essentially everything major that I added to my game! Going into what we did in class this week: I unfortunately missed class due to unrelated reasons (and responsibilities), but the students basically tested each other's applications and completed surveys about them afterwards. We even went on a field trip on Thursday, an ASRC / Illumination Space field trip, and got a tour of the building with its different facilities and whatnot (along with playing a student's application), which was super awesome to experience!
Overall, great week for progress! I’ll be doing my data collection next week, writing my research paper along with demoing my project to an audience at Hunter College to wrap up the final week of the program. Thanks for reading!
Week Eight:
This has probably been my favorite week of the program and I'm disappointed that it's coming to an end soon! I became much more friendly with the students I was working with, and we went out to have fun multiple times this week, which was amazing (although it is very much a shame that this had to happen in the last week of the program). In terms of my project, I ran it with 7 of my fellow REU students, and they gave me their feedback on it, which I used to construct and publish my research paper. I got high immersion and decent presence scores, but the most common complaint was motion sickness (which I had no real control over, to be honest; it's VR). I did notice that the women who ran my simulation tended to lose more weight than the men who did so. I deduced that it was because the program was tailored towards men in general, as the base caloric value was set to 2,600. To fix this issue, I created an in-world-space main menu where you can customize your age, gender, activity level, and body composition (two buttons, increase or decrease), and then, based on what you choose, your base caloric needs change. Active males, for example, need more calories than sedentary females. I added two mirrors to this main menu, then added a Start button that, when clicked, makes the menu disappear so you can walk forward and play the game like normal (except I added a few more food items as per the feedback I received). Here's what this all looked like (not the best UI design, but this simulation is all about functionality haha):
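For context on how such a baseline could be computed, here is a hypothetical sketch using the Mifflin-St Jeor equation and standard activity multipliers; my in-game menu actually works through increase/decrease buttons on preset values, so treat this purely as an illustration of the idea.

```csharp
// Hypothetical helper: estimate daily caloric needs from the main-menu
// choices using the Mifflin-St Jeor equation. Numbers are illustrative,
// not the exact values used in the simulation.
public static class CalorieCalculator
{
    public static float DailyCalories(bool isMale, float ageYears, float weightKg,
                                      float heightCm, float activityMultiplier)
    {
        // Resting metabolic rate (Mifflin-St Jeor).
        float bmr = 10f * weightKg + 6.25f * heightCm - 5f * ageYears + (isMale ? 5f : -161f);

        // Sedentary ~1.2, moderately active ~1.55, very active ~1.9.
        return bmr * activityMultiplier;
    }
}
```

For example, a sedentary female would land well below the old 2,600 default, which is exactly the mismatch the new menu is meant to correct.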
In other news, I presented my demo at Hunter College in a conference room over Zoom this Thursday. It was supposed to be at the symposium, I believe (or something similar), but due to room scheduling conflicts, it was moved here instead. That was definitely a good thing, however, as I didn't have to present in person in front of several people I didn't know personally; it was relegated to just a Zoom meeting. I was the last person to present, so I just relaxed and watched everyone's presentations before me. Everyone had something super interesting to share, and I enjoyed them much more than I initially expected.
Prior to Thursday, we attended a Zoom meeting for the SPIRE-EIT REU presentations, where the Iowa State students presented their REU projects just like we did the next day. It was pretty time-consuming but overall fairly interesting, as I am a man of science myself, of course, and simply love to learn new things!
I also met with my mentor this week to go over the changes I had made to my simulation and to better prepare myself for my research paper submission. Pretty standard stuff. I also attended a Zoom meeting that went over how to apply to grad school, how to prepare for it, and the Dos and Don’ts of doing so.
I departed from New York City on Saturday in a very sad mood, but it was fun while it lasted! For future REU students reading this blog, just know that it zooms by faster than expected, so make sure you're working hard, having fun, absorbing a bunch of information, and connecting with as many people as possible!
I’ll probably update this blog if my paper somehow manages to get accepted but if not that’s quite alright.
Thank you for joining me on this journey reader and I hope you have a blessed life!!
Final Paper:
Richard Chinedu Erem, Oyewole Oyekoya, and Margrethe Horlyck-Romanovsky. 2023. Effects of Varying Avatar Sizes on Food Choices in Virtual Environments. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 24–26. https://doi.org/10.1145/3626485.3626534 – pdf
Virtual Reality and Public Health Project: Nutrition Education – Professor Margrethe Horlyck-Romanovsky
Talia Attar, Cornell University
Week One:
We kicked off the VR-REU 2022 program on Monday and convened in the Hunter College Computer Science Department, where Professor Wole, the other participants, and I finally got to meet and introduce ourselves to each other. Via a meet-and-greet style meeting, the other participants and I had the honor of hearing the program mentors explain a bit about their work and their vision for integrating virtual reality into what they do. In Professor Wole's VR/AR/MR summer class this week, we learned about hardware and software, 3D geometry, and the basics of writing a research paper using LaTeX – a very useful introduction to a key component of doing research. Finally, as a group, we rounded out the week with an introduction to ParaView, using disk data to explore the breadth of ParaView's capabilities.
In regards to my research project, I met with my mentor, Professor Margrethe Horlyck-Romanovsky, and created a concrete concept for the project. After telling me about her research and the information gaps we currently face in generating a complete understanding of how people interact with their food systems, my mentor and I discussed how Virtual Reality could be used to study this gap. We formalized the necessary features of the Virtual Reality application and planned what the related study may look like. Heading into next week, I am excited to dive deeper into learning Unity and building out my project!
Week Two:
I entered Week Two excited to kick my project development into high gear. With the help of Professor Margrethe and Dr. Wole, I was able to enhance the specifications for my virtual reality simulation and create a more detailed vision. I began implementing the simulation, a process that was slow at first as I familiarized myself with the XR features of Unity. However, as the week progressed, I grew more comfortable with this type of development and made headway on the first scene in my project – a city block.
In addition to working on the simulation, I also spent a significant amount of time considering aspects of the study itself. Professor Margrethe, Dr. Wole, and I discussed details from recruiting participants to analyzing produced results, allowing the study to come into clearer view. I was also fortunate to receive valuable and detailed advice around literature reviewing and other aspects of research papers from Professor Margrethe.
The REU members and I ended the week as a group, and Dr. Wole taught us about using Tableau for data visualization. The sample dashboard I created through his tutorial can be found here.
Week Three:
Week three marked an exciting point in the program as I was able to begin deploying my simulation to the Meta Quest 2 VR headset. This was the first time I had gotten to wear a VR headset outside of the demo last week, and it was informative to be able to explore a variety of simulations for an extended period of time. The highlight was certainly successfully building my Unity project directly on to the headset. In regards to the simulation itself, I began a different approach to creating my 3D scene compared to last week in an attempt to enhance the level of detail present. I also began the interactive level of the project by coding the XR rig to follow a fixed, controlled path around the simulation.
In addition to work on my personal study, I joined the other participants in learning a new visualization tool: VMD.
Week Four:
This week saw the largest progress in my virtual reality development process to date. With the help of some carefully selected asset packages from the Unity store, I was finally able to get over the hump of world building and begin implementing more of the user interactions. I successfully completed a draft of the first layer of the world: the city-level view with three food sources. The user is taken on a fixed-path walk around the block, with the freedom to move their head to look around. At the end of this walk, a pop-up appears for the user to select where they would like to enter with their laser pointer, and then they are taken on another fixed-path walk to the food vendor of their choosing. Upon arriving, the following scene – the interior of the store – loads. Developing the interactive UI for this selection step was the largest technical challenge I have faced to date, as the Unity UI support was developed for a 2D setting. However, with the help of many (many) YouTube videos and other online resources, I was able to use the Oculus Integration package to adapt the UI features effectively to virtual reality.
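The fixed-path walk itself can be approximated by a simple waypoint follower like the sketch below; the field names and the end-of-path event are placeholders, and the actual rig setup differs in detail.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch: move the XR rig through an ordered list of waypoints at a
// constant speed, leaving head rotation free so the user can look around.
// When the last waypoint is reached, fire an event (e.g. show the
// vendor-selection pop-up).
public class FixedPathWalker : MonoBehaviour
{
    public Transform[] waypoints;            // path around the city block
    public float speed = 1.2f;
    public UnityEvent onPathComplete = new UnityEvent();

    private int currentIndex;

    void Update()
    {
        if (currentIndex >= waypoints.Length) return;

        Transform target = waypoints[currentIndex];
        transform.position = Vector3.MoveTowards(transform.position, target.position,
                                                 speed * Time.deltaTime);

        if (Vector3.Distance(transform.position, target.position) < 0.05f)
        {
            currentIndex++;
            if (currentIndex >= waypoints.Length)
                onPathComplete.Invoke();     // end of the walk: hand off to the selection UI
        }
    }
}
```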
Next week will entail continuing the development flow to build out the next layer of the simulation.
Week Five:
During Week Five, I picked up right where I left off in my last blog post: implementing the “interior” layer of the simulation. This entailed crafting three new scenes and mini “worlds” to represent the green grocer, the supermarket, and the fast food restaurant. Professor Margrethe and I discussed the appropriate foods and information to present in each food source and ended up with a carefully crafted list of what is included. The two main tasks I faced in development were figuring out how to appropriately represent the relevant foods and constructing a logical and clear interface for the user to interact with the food options to simulate a shopping experience. The latter task was challenging in terms of both design and actual implementation, but I ended the week with a solid vision and corresponding code to do so. In Week Six, I will be finishing applying the interactive layer throughout all three food sources and generally cleaning up any loose ends within the simulation.
The other program participants and I ended the week with a fun field trip to the CUNY Advanced Science Research Center and got to see applications of virtual reality as well as many other interesting and complex ongoing research projects!
Week Six:
Week Six entailed the final push of development on the simulation. One main addition from this week was the creation of a text file log that records statistics about the user's interactions. This will be incredibly useful for gathering detailed results about user behavior within the simulation. Another important development from this week was that many new food items were added as possible options to expand the breadth of choices and potential purchases the user might make. Finally, I added components to provide direction and explanation to the user to enhance ease of use. With these exciting developments, finally running the study with participants using the simulation next week feels promising!
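The log itself is conceptually simple; a sketch of the idea is below, with the file name and event format as illustrative placeholders rather than the exact implementation. Each interaction handler then just calls Log() with a short description of what happened (an item viewed, an item purchased, a scene change, and so on).

```csharp
using System;
using System.IO;
using UnityEngine;

// Sketch: append timestamped interaction events to a text file on the headset.
public class InteractionLogger : MonoBehaviour
{
    private string logPath;

    void Awake()
    {
        // persistentDataPath is a writable location on the Quest 2.
        logPath = Path.Combine(Application.persistentDataPath, "interaction_log.txt");
    }

    public void Log(string eventDescription)
    {
        string line = $"{DateTime.Now:HH:mm:ss.fff}\t{eventDescription}";
        File.AppendAllText(logPath, line + Environment.NewLine);
    }
}
```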
The images below are screenshots taken directly from deployment of the simulation on the Oculus Quest 2. They show the user purchasing interface in two of the food businesses.
Week Seven:
This week was very exciting because I finally ran the study using the simulation! The week began with final preparations that included constructing the survey for people to fill out after the VR experience and addressing any lingering bugs in the simulation. Throughout the week, I was able to recruit 12 participants and administer the VR simulation and survey to each. It was an incredibly rewarding experience to see the outcome of my Unity development process put to use.
I concluded the week by beginning to analyze the results and writing them up for the final research paper. Looking forward to next week, the final week of the program, I will be finishing the relevant results and writing my paper, as well as preparing for the final presentation!
Week Eight:
This week marked the final week of the REU program. I spent the bulk of the week completing the short paper to submit to the VRST 2022 conference taking place in Tsukuba, Japan this fall. A large portion of this process was analyzing the results of the study. The simulation and study yielded data around a variety of different factors, such as the decision outcomes of the simulation and the usability score measured from a system usability questionnaire component of the survey. I combined different aspects of the data to generate several key findings around behavioral and decision-making patterns in the simulation. However, the most critical part of this preliminary study was that, mainly supported by the high usability and presence scores, virtual reality shows promise as a tool for studying individual food consumer behavior in a multilevel food environment, and the study findings warrant further research into this application.
The program concluded with a wonderful day of presentations, and I was fortunate to hear about the work done by my fellow REU participants throughout the summer.
Thank you to Dr. Wole for facilitating this program and to my mentor, Dr. Margrethe Horlyck-Romanovsky, for her endless support throughout this process.
Final Report was submitted and accepted as a 2-pages paper (poster presentation) at VRST 2022:
Talia Attar, Oyewole Oyekoya, and Margrethe F. Horlyck-Romanovsky. 2022. Using Virtual Reality Food Environments to Study Individual Food Consumer Behavior in an Urban Food Environment. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565685 – pdf