
VR as a sensory stimulation tool for adolescents with ASD and anxiety

Amaya Keys, Howard University

Week One:

As a pleasant start to the program, I met the other students I’d be spending the next eight weeks with over a fun night of bowling. On Monday morning, we briefly met with our mentors via Zoom following an in-depth tour of Hunter College’s facilities. My mentor, Mr. Daniel Chan, recommended that we keep the scope of the project in mind and not overextend ourselves with a wildly complex project for only eight weeks. For my project specifically, he suggested that I pick one disability, research it thoroughly, and from there decide how a VR/AR application might help. I met with him one-on-one twice during the week to receive guidance on all of the jumbled thoughts racing through my mind. After bouncing from idea to idea, I finally landed on the development of a virtual multi-sensory stimulation room for adolescents with ASD who experience anxiety. To close out the week, we completed a short lab on ParaView, decided on a conference that we would submit our research to at the end of the summer, briefly reviewed our updated project proposals, and finally got a sneak peek into the VR lab. Now that my project is solidified, I hope to dive into experimenting with Unity, as I know I have a steep learning curve ahead of me.

Week Two:

To start off the week, I finished the remaining modules of my CITI training so that I could focus solely on writing my literature review. I also received one of Dr. Wole’s Meta Quest VR headsets to take home and begin experimenting with. I attempted to connect it to my computer and phone on my own and to deploy a Unity build, but experienced many challenges. Later in the week, Kwame assisted me with the setup process, and we ran an escape room demo to confirm it was working. Following successful setup, I worked to enable hand tracking on the headset and test another demo but ran into further difficulties that I am still working to resolve.

I spoke with my mentor at our scheduled Tuesday meeting time, and we just ensured that I was feeling confident and secure in the direction my project was heading. Throughout the week I continued to take notes on various studies and worked to weed out the ones I felt were unnecessary to include in my paper. The first draft of my literature review was unnecessarily long and contained way too many sources that weren’t directly related to my research, but by Thursday night, I had finalized a version that I was happy with. I sent it to my mentor for feedback and while he had a few comments about potential changes, he was overall pretty satisfied with how it looked. On Friday, we conducted a brief lab on Tableau and updated Dr. Wole on the status of our projects.

Next steps for me include solidifying my methodology and making headway on the development of my sensory room. I intend to search through the Unity Asset Store and TurboSquid to hopefully find some ready-made elements for my project. Before anything else though, I definitely want to fix the errors with the hand tracking functionality, as this will be a significant element of my application.

Week Three:

Now that the introduction and related works sections are out of the way, I used this week to focus on the development of my application in Unity. I got off to a pretty slow start; I worked through figuring out what packages I needed to import and how to enable hand tracking. My next goal was to create 3D bubbles that the user can pop as they rise from the floor. I looked through the Unity Asset Store and TurboSquid but couldn’t find any pre-made model that would fit my needs. I followed a tutorial to create a basic bubble GameObject and wrote a script for the bubbles to multiply and continuously respawn. I am still trying to equip them with a poke interaction so that they “pop” when touched. I then created a light panel on one wall of my room and am working to make each circle change colors when tapped. Now that I have a little more experience with navigating and troubleshooting in Unity, I hope to be more productive with development next week.
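For anyone curious what the respawn logic looks like, here is a minimal Unity C# sketch of the approach described above. All class, field, and tag names here are my own placeholders, not the project's actual script:

```csharp
using UnityEngine;

// Keeps a target number of bubbles alive, spawning new ones near the floor
// as children of this object so childCount doubles as the live count.
public class BubbleSpawner : MonoBehaviour
{
    public GameObject bubblePrefab;   // assumed: assigned in the Inspector
    public int targetCount = 10;
    public float spawnRadius = 2f;

    void Update()
    {
        // Instantiate parents immediately, so this loop terminates.
        while (transform.childCount < targetCount)
        {
            Vector3 pos = transform.position + new Vector3(
                Random.Range(-spawnRadius, spawnRadius), 0f,
                Random.Range(-spawnRadius, spawnRadius));
            Instantiate(bubblePrefab, pos, Quaternion.identity, transform);
        }
    }
}

// Attached to the bubble prefab: rises slowly; Pop() can be wired to a
// poke interaction event (or called from a trigger) to destroy the bubble.
public class Bubble : MonoBehaviour
{
    public float riseSpeed = 0.5f;

    void Update() => transform.Translate(Vector3.up * riseSpeed * Time.deltaTime);

    public void Pop() => Destroy(gameObject);
}
```

This is only a sketch; in practice each class would live in its own file, and the poke detection itself would come from the hand-tracking interaction components rather than this script.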

Week Four: 

This week I focused on making as much technical progress in Unity as possible while also refining my methodology to prepare for our midterm presentations on Friday. I decided that the five elements I want in the environment are a light panel, poppable bubbles, 3D object play, glowing interactive stars, and an alphabet/number board.

By attaching a script to the Interactable Unity Event Wrapper from the Meta Interaction SDK, I was able to make the buttons on the light panel change to random colors when touched. With that baseline in place, it became much easier to add interactive components to other objects in the scene. I also decided to make the light panel resemble an LED hexagon panel commonly used by individuals with autism. I then modified a map and pins from one of the Meta Interaction SDK samples and imported models from Sketchfab and CGTrader to recreate a magnetic alphabet board; the user can drag letters and numbers from the tray and place them freely on the board. The third element of focus this week was importing 3D shapes and enabling the components needed for the user to scale, rotate, position, and throw them with their hands.

I now have a bubble-popping animation clip but was still unsuccessful in getting the bubbles to pop when poked, which will be a focus for next week. Another issue I’m running into is that the game starts at a different camera angle every time I run it, and I am not sure why. I have worked around it for testing purposes, but I will definitely have to resolve it later so that there is no confusion when participants put on the headset.

As a relieving end to a stressful week, midterm presentations went well, and it was interesting to hear the updates on everyone’s projects. It’s refreshing to know we’re all dealing with different challenges and are working through them at around the same pace.
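The event-wrapper approach works by exposing a public method and wiring it to the wrapper's select event in the Inspector. A rough sketch of the color-change script, with my own hypothetical names:

```csharp
using UnityEngine;

// Wire SetRandomColor() to the Interactable Unity Event Wrapper's
// "When Select" UnityEvent in the Inspector, so a poke on the button
// triggers a random color change on its renderer.
public class LightPanelButton : MonoBehaviour
{
    private Renderer rend;

    void Awake() => rend = GetComponent<Renderer>();

    public void SetRandomColor()
    {
        // Random hue with full saturation and value keeps the panel bright.
        rend.material.color = Random.ColorHSV(0f, 1f, 1f, 1f, 1f, 1f);
    }
}
```

Note that `rend.material` instantiates a per-button material copy at runtime, which is what lets each hexagon hold its own color independently.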


Week Five: 

This week was my most productive on the technical side. While I unfortunately was not able to get the bubble animation working in Unity, I did enable a simple popping behavior where the bubbles are destroyed on touch. The final activity I created was a series of interactive stars in the sky that have paths drawn between them when “activated,” or lit. I then imported a counter surface for my activities to be displayed on and created a simple dome in Blender as the exterior. The new models caused some technical problems with the existing objects, so much of my time was spent debugging those issues.
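One simple way to draw the paths between lit stars is a LineRenderer that gains a point each time a star activates. This is a sketch under my own naming, not the project's actual script:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Collects stars in the order they are lit and extends a LineRenderer
// path through them, so constellations "connect the dots" as you play.
public class ConstellationPath : MonoBehaviour
{
    public LineRenderer line;                 // assumed: assigned in the Inspector
    private readonly List<Transform> lit = new List<Transform>();

    // Call this from each star's touch/poke event when it lights up.
    public void ActivateStar(Transform star)
    {
        if (lit.Contains(star)) return;       // ignore repeat touches
        lit.Add(star);
        line.positionCount = lit.Count;
        line.SetPosition(lit.Count - 1, star.position);
    }
}
```

A `positionCount` of 1 draws nothing, so the path only appears once a second star is lit, which matches the "paths drawn between them when activated" behavior described above.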

I also worked on completing the documents required for IRB approval. However, at the end of the week, no prospective participants had reached out, so Dr. Wole and I discussed whether it was even worth going through the IRB process at all. He suggested possibly writing a work-in-progress research paper instead of running a user study. This would give me the opportunity to either turn this into a long-term project or allow another researcher to pick it up.

Week Six:

On Tuesday, Dr. Wole, Mr. Chan, and I discussed the direction of my project and decided it would be best to follow through with the work-in-progress paper and put the IRB process on hold for now. With that decided, I don’t have much to write about this week, as I focused on making tweaks to my virtual environment and editing my experimental procedure. The room now has a welcome console with options for the user to control audio input, as well as text instructions beside each activity. I still need to record a short demo of each activity in action to be played above the instructions for those who may want a visual. I also added an additional constellation for users to interact with.

I am having many technical difficulties with the letter/number board and may need to start brainstorming alternative methods for how it will function. I would like the user to be able to drag letters from the tray and place them on the board, which would then “lock” the object to the board surface as if it were magnetic. This is proving quite difficult, so instead the user may simply touch the letter or number they’d like and have it appear on the board. I’d like to have everything finished by early next week so I can receive feedback on the environment from others in my cohort. It will not be a traditional user study, but it is still feedback that can ultimately be included in my paper.
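For the "magnetic" lock behavior, one possible fallback is a trigger volume on the board that snaps an entering letter flush to the surface and freezes its physics. This is only a sketch of that idea, with hypothetical names and an assumed board orientation:

```csharp
using UnityEngine;

// Placed on the board (with a trigger collider): when a letter enters the
// volume, snap it flush to the board plane and freeze its Rigidbody so it
// stays put, as if held magnetically.
public class MagneticBoard : MonoBehaviour
{
    public float surfaceOffset = 0.01f;    // small gap to avoid z-fighting

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Letter")) return;   // assumed tag on letter objects

        // Project the letter onto the board plane in local space,
        // assuming the board's front face points along its local -Z axis.
        Vector3 local = transform.InverseTransformPoint(other.transform.position);
        local.z = -surfaceOffset;
        other.transform.position = transform.TransformPoint(local);
        other.transform.rotation = transform.rotation;

        var rb = other.attachedRigidbody;
        if (rb != null) rb.isKinematic = true;     // "lock" the letter in place
    }
}
```

In a real scene this would also need to release the lock when the user grabs the letter again (for example, setting `isKinematic` back to false from the grab event), which I have left out of the sketch.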
