
Immersive Content for Interdisciplinary STEM Education

Cason Allen, Florida A & M University

Week 1: 

This REU opened with a social event where Dr. Wole took our cohort bowling. It was fun, as I had the opportunity to meet the other students in the program. The following day was the first official day of the REU, where we toured Hunter College and were introduced to some of the mentors. Later in the week I met my mentor, Kendra Krueger, and we started formulating ideas for the project I would be working on this summer. With my project relating to how the Advanced Science Research Center (ASRC) engages in STEM education and outreach, it was nice to see how field trips at the CUNY Graduate Center are conducted and how they aim to teach students in the short time they have. This set the stage for my project proposal, and I spent much of the rest of the week researching different ways to approach the project and brainstorming methods to carry it out. In addition, we ran through the first couple of lectures in Dr. Wole’s Introduction to Virtual Reality class and were given an introduction to the software ParaView. Seeing how much we have done in such a short amount of time, I am excited about what the next seven weeks of this REU have in store.

 

Week 2: 

This week, I met with Kendra to further discuss my project’s expectations. Since this project is primarily a visualization tool for various instruments within the ASRC, it needs to be accessible to the general public. Therefore, I aim to create a desktop visualization program and, if time allows, a VR version. For this tool, I have started finding objects to build the virtual object library I will need and have scratched the surface of what can be done in Unity. Unfortunately, I could not explore as much as I wanted to because I got sick mid-week, so moving forward I will have to catch up and stay on track. In addition, I have been working on my literature review, adding a couple more sources on augmented reality in STEM education and assessing student learning outcomes. I continued my paper by incorporating this material in Overleaf, and moving forward I will start my methodology and meet with a researcher at the ASRC to see how some of the instruments operate. If I cannot find digital models for those instruments, I may have to recreate them in SolidWorks.

 

Week 3: 

This week I ran into some complications: licensing issues with SolidWorks and the learning curve of Unity. I was able to create a sample scene for a user to navigate through, which sets up my workflow for the final project, since I will need to go into a lab at the ASRC and replicate its setup. Focusing on user interaction with stand-in objects, I have been trying to figure out a method for a pop-up screen to appear and show information on the selected object. I have also been slowly working on my methodology, as well as adding sample models to my virtual library to plug into the final virtual environment.

Week 4:

This week consisted of quite a bit of development for the lab walkthrough game. Learning how to script player actions, raycasting, and interactions was challenging, as I ran into countless issues, but it was nevertheless entertaining (a simplified sketch of the kind of interaction script I have been working on is included below). Much more will need to be done, including expanding on character interactions, creating a more game-like approach to the project, and eventually deploying the project as some type of web application, but I am confident much will be completed given how this week has gone. In addition, we reached the milestone of completing our methodology sections and giving our midterm presentations. For my presentation, I was able to show a demo of what I had done for the game. Next up is completing the user study section of the paper and creating the surveys to assess the participants of this study.

Image of the demo where the user is able to interact with a laser diode
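For anyone curious, here is a minimal sketch of the raycast interaction approach described above. The class names, the E key binding, and the 3-meter range are illustrative placeholders rather than the exact code in my project, and in Unity each MonoBehaviour would live in its own file.

using UnityEngine;

// Placeholder component attached to each interactable instrument model.
public class Inspectable : MonoBehaviour
{
    public void OnInspected()
    {
        Debug.Log("Inspected: " + gameObject.name);
    }
}

// Casts a ray from the camera when the player presses E and notifies
// the Inspectable object being looked at, if it is within range.
public class PlayerInteractor : MonoBehaviour
{
    public Camera playerCamera;       // assigned in the Inspector
    public float interactRange = 3f;  // illustrative interaction distance

    void Update()
    {
        if (!Input.GetKeyDown(KeyCode.E)) return;

        Ray ray = new Ray(playerCamera.transform.position, playerCamera.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, interactRange))
        {
            Inspectable target = hit.collider.GetComponent<Inspectable>();
            if (target != null)
            {
                target.OnInspected();
            }
        }
    }
}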

Week 5:

This week, I was fortunately able to get a fair amount of technical work done. With object interactions working, the next step is to perfect the pop-up informational slides that appear upon interaction. At least one object has been added for each of the disciplines of Environmental Science, Structural Biology, Photonics, and Neuroscience, with the remaining discipline being Nanoscience. In the process of adding the objects, I decided to change the environment the user is in to distinguish the disciplines from one another by assigning each room a specific color that is easily associated with its discipline. In addition, I added sound effects for walking, inspecting objects, and leaving inspection mode (a rough sketch of the audio hookup is included below), and I am in the process of remodeling an instrument from the ASRC instrument database to put into the environment. Overall, I have been trying to make the program feel more fun and game-like while developing everything. Moving forward, I will have to personalize the pop-ups for each object, finish modeling the nanofabrication device, and add an incentive for the user to learn each of the facts.

Image of the new environment, showcasing the difference in lighting and some of the instruments in the Structural Biology space.
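As a rough illustration of the audio hookup mentioned above, the sketch below loops a footstep clip while the player is moving and plays one-shot clips when entering or leaving inspection mode. The field names and the use of two AudioSources are assumptions for the example, not my exact setup.

using UnityEngine;

// Loops footsteps while the player moves and plays one-shot cues when
// entering or leaving inspection mode.
[RequireComponent(typeof(AudioSource))]
public class PlayerAudioCues : MonoBehaviour
{
    public AudioSource footstepSource;  // looping AudioSource holding the footstep clip
    public AudioClip inspectOpenClip;   // placeholder clip for entering inspection
    public AudioClip inspectCloseClip;  // placeholder clip for leaving inspection

    AudioSource oneShotSource;

    void Awake()
    {
        oneShotSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Treat any WASD/arrow input as "walking" for the footstep loop.
        bool moving = Mathf.Abs(Input.GetAxis("Horizontal")) > 0.1f
                   || Mathf.Abs(Input.GetAxis("Vertical")) > 0.1f;

        if (moving && !footstepSource.isPlaying) footstepSource.Play();
        else if (!moving && footstepSource.isPlaying) footstepSource.Stop();
    }

    // Called by the interaction code when inspection mode is toggled.
    public void OnEnterInspection() { oneShotSource.PlayOneShot(inspectOpenClip); }
    public void OnExitInspection()  { oneShotSource.PlayOneShot(inspectCloseClip); }
}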

Week 6:

This week, I completed the educational game and the Google Form to assess participants. The main hurdle, and the part that took the most time to figure out, was the pop-up window that gives the user information on the selected instrument: the first step was learning how to customize the player UI, then programming the panel to turn on when the player interacts with an object. Following that, the actual informational page needed to be displayed instead of a black screen, and finally the informational page needed to be specific to each item (a sketch of this per-object approach is included below). Miraculously, I solved this toward the beginning of the week, leaving the rest of the week for web deployment and creating the survey. The web deployment took some time to understand, as it took many trials to get the game to load and render correctly, but I now have a fully working link to the game through my GitHub page and a survey to assess participants who use the game for educational purposes.

Image of the game in use with the informational pop-up
Image of the game before clicking on an object to show the informational pop-up
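Here is a simplified sketch of the per-object pop-up idea: each instrument carries its own title and description, and a single shared UI panel is filled in and shown whenever that instrument is inspected. The class and field names are placeholders, not the exact ones in my project, and each class would sit in its own file in Unity.

using UnityEngine;
using UnityEngine.UI;

// Attached to each instrument model; holds the text shown in its pop-up.
public class InstrumentInfo : MonoBehaviour
{
    public string title;
    [TextArea] public string description;
}

// One shared pop-up panel that is filled with the selected instrument's info.
public class InfoPanelController : MonoBehaviour
{
    public GameObject panelRoot;   // the pop-up panel, disabled by default
    public Text titleText;         // UI Text elements inside the panel
    public Text descriptionText;

    public void Show(InstrumentInfo info)
    {
        titleText.text = info.title;
        descriptionText.text = info.description;
        panelRoot.SetActive(true);
    }

    public void Hide()
    {
        panelRoot.SetActive(false);
    }
}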

Week 7:

After changing my survey slightly, I sent the link to my mentor to be reviewed and approved. Now that development was complete, I was able to start beta testing. On Tuesday, I went to the ASRC to give a presentation to high school students on a field trip, describing my journey from high school to doing research at Hunter College. Following that, the students explored the Illumination Space, where they became the first group to test the game I created. After giving them the survey, I received great feedback on their satisfaction with the game and on what could be added or removed to make it more fun and usable. Later in the week, on Thursday, the VR-REU cohort visited the ASRC as a group. During the visit, they also tested the game, gave feedback, and completed the survey, helping me with my data collection and analysis. Afterward, we went on a tour of the facility's labs. This week has been slightly calmer than most of the previous weeks, but as the program comes to a close, I know the final week will be loaded with work.


Week 8:

This week has been a stressful wrap-up to the program but enjoyable regardless. Between data analysis, writing the results, discussion, and conclusion, and revising the parts of my paper that had remnants of my alternate project concept, I spent most of the week in my room staring at my computer. On Tuesday, I visited the ASRC one final time to meet with Kendra and discuss what should go into the final research paper, the data analysis, and the presentation. On Wednesday, the entire cohort was at Hunter College from 10 am to 5 pm as participants in a motion capture study, in which different professors and teachers gave a quick presentation while wearing a motion capture suit. Afterward, we were given a survey to assess each instructor based on their body movement and how well we were able to connect with them. Approaching the end of the program, on Thursday we all gave our final presentations to our mentors, the Iowa State group, and the other audience members present. Finally, on Friday, we gathered to discuss the conferences we will attend and held a paper-writing session to touch up or finish whatever needed to be completed for submission to our respective conferences. Following that, we watched the Iowa State project presentations and completed the post-REU survey. Overall, I had fun during this REU, and it pushed me to learn Unity, something I never thought I would learn. I had the opportunity to meet great people from different areas and to live in New York for a short time, and I am grateful.

VR-REU Cason Allen Final Presentation

VR as a Learning Tool for Students with Disabilities – Summer 2023

Filip Trzcinka – Hunter College

Mentor: Daniel Chan

Week 1:

Before meeting with my mentor Daniel to narrow down what kind of project I would be working on, I decided to get ahead on my work and start on the Literature Review portion of my research paper. I wasn’t sure whether I would be focusing on a physical, mental, or learning disability for the project, so I began to research and read about those topics to see which direction I would prefer to take. As I read further, I found myself more focused on papers that described learning and cognitive disabilities. I came up with two main research proposal ideas that I brought to Dan when we met, and I asked for his advice and any feedback he might have. After conversing and pitching my ideas, he let me choose which idea I preferred to work on. After some thought, I decided my research project would focus on creating a driving simulation for student drivers who have Attention-Deficit/Hyperactivity Disorder (ADHD). We planned to meet again next week, after I had thoroughly researched the problems people with ADHD face when engaging in a learning activity and what methods or features I should include in my simulation to aid their learning experience. I also plan to begin mapping out and creating the simulated setting in Unity so I can get ahead on the development portion of the project. When we pitched our proposals to the group on Friday, Dr. Wole mentioned that he could try to connect me with someone who has experience making a driving simulator, so that I could potentially build upon their work rather than start from scratch. Nevertheless, I will still check whether Unity has assets already available that I could use if I do end up building the driving sim from scratch.

 

Week 2:

This week I focused on the Literature Review section of my paper. I found a paper describing research that used a VR driving simulation to see whether such a tool can help improve the driving skills of people with Autism Spectrum Disorder. I decided to use this paper as groundwork for how I would like to develop my idea. Though their simulation was very basic, one test group used a version of the simulation with audio feedback to remind drivers of important rules of the road, like staying under the speed limit and staying in their lane. I considered what other features could be implemented to help users of the simulation learn. Since I knew I would focus on ADHD, I read through papers where a VR driving simulation was tested with people with ADHD as the test pool, but I could not find any research that used an enhanced simulation rather than a basic one. Some used tools like eye trackers, but no real software features designed to benefit the user's learning experience. I then looked through papers on teaching techniques used to help people with ADHD learn, with an emphasis on keeping their attention. After discussing this with Dr. Wole, I went back and found papers that tested ways to keep the attention of all drivers, not just people with ADHD. From what I have read so far, I created a list of features I hope to implement in the driving simulation I create. I finished the week by writing my Related Works section and posting it on Overleaf with proper citations.

 

Week 3

With the Related Works draft completed, it was time to start developing my driving simulation. Using the Unity game engine, I imported a low-polygon city asset to use as the environment. I made some edits to it: making the intersection lines clearer, adding box colliders to buildings so the user would not simply phase through them, and adding traffic lights at every intersection. Since the traffic light assets were only for show and had no real functionality, I had to add point lights for the red, yellow, and green bulbs, and I tried to write scripts to make the lights change on a timer, but unfortunately I made little progress getting that to work; I will have to continue next week (a sketch of the kind of timed cycle I am aiming for is included below). I also got a car asset that already includes a steering wheel object (so I would not have to create my own) and imported it into the scene. I wanted a steering wheel object because I eventually hope to have the user actually grab the wheel to steer. For the car, I removed the animations attached to the asset, added a camera to simulate a first-person view, then got to work on scripts to let the car accelerate, reverse, and steer using the WASD keys (temporary inputs so I can test everything for now), with a hand brake on the space bar. I had to take time to adjust the float values of the car's rigidbody, as well as the acceleration values, since the car drove quite slowly at first. It still drives slowly, but that may be a benefit for driving in a city environment. After running through the planned features with Dan, I began work on the Methodology section of my paper, as well as taking another crack at the traffic light scripts.
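The behaviour I am trying to get working is roughly a timed cycle like the sketch below. The durations, light references, and class name are placeholders I am using for illustration, not the actual script from my project.

using System.Collections;
using UnityEngine;

// Cycles a traffic light by enabling one point light at a time and
// waiting a fixed number of seconds before switching to the next.
public class TrafficLightCycle : MonoBehaviour
{
    public Light redLight;
    public Light yellowLight;
    public Light greenLight;

    public float greenSeconds = 8f;   // placeholder durations
    public float yellowSeconds = 2f;
    public float redSeconds = 8f;

    void Start()
    {
        StartCoroutine(Cycle());
    }

    IEnumerator Cycle()
    {
        while (true)
        {
            SetLights(green: true, yellow: false, red: false);
            yield return new WaitForSeconds(greenSeconds);

            SetLights(green: false, yellow: true, red: false);
            yield return new WaitForSeconds(yellowSeconds);

            SetLights(green: false, yellow: false, red: true);
            yield return new WaitForSeconds(redSeconds);
        }
    }

    void SetLights(bool green, bool yellow, bool red)
    {
        greenLight.enabled = green;
        yellowLight.enabled = yellow;
        redLight.enabled = red;
    }
}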

Week 4
 
Got my traffic light system working! Though the method I used to get it to work is very unconventional, unless it breaks something else I will not be touching it anymore. After getting that completed, I worked on fixing the speed of my car object during acceleration, as it was extremely slow last week. The main work this week, however, was implementing my visual cue features. The first is a ring that appears over a traffic light to help grab the user's attention. This happens when the user reaches a proximity trigger near that specific traffic light, so there are never too many rings active at once, which would be counterproductive to the purpose (a sketch of this trigger idea is included below). It took some time to get working, and unfortunately the feature currently exists only for certain traffic lights in my scene, so I still need to go through the remaining lights and add it in. The second feature is a lane alert: when the car object moves through a lane line, a red translucent bar appears to signal to the user that they need to stay in their lane. With those features implemented, I was able to present a decent product for the midterm presentations that took place this Friday. At the end of the week I finished the draft of the Methodology section, and I eagerly await the notes Dr. Wole has for me, as I am not sure I wrote it in a conventional way. Next week I will try to deploy the game onto the Oculus Quest headset, change the input device from the WASD keys to the actual controllers, and begin implementing the audio cue features I have planned.
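A minimal sketch of the proximity-trigger idea mentioned above: a trigger collider placed near the intersection turns the attention ring above that traffic light on when the car enters and off when it leaves. The "Player" tag and object names are assumptions for the example.

using UnityEngine;

// Placed on a trigger collider near an intersection; toggles the attention
// ring above the associated traffic light when the car enters or exits.
// Assumes the car object is tagged "Player" and has a Rigidbody so that
// trigger events fire.
public class TrafficLightProximityCue : MonoBehaviour
{
    public GameObject attentionRing;  // the ring mesh above this traffic light

    void Start()
    {
        attentionRing.SetActive(false);
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) attentionRing.SetActive(true);
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) attentionRing.SetActive(false);
    }
}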

 

Week 5

Unfortunately, this week was not as productive as I had hoped. Trying to deploy to the Meta Quest 2 headset brought many issues. First there was the issue of unsuccessful builds: console errors that made no sense to me would pop up and cause the build process to exit unexpectedly. As is typical in computer science, this was solved by googling the errors and working through solutions others had posted online, which let me take a single step forward with a successful build. However, running the simulation on the headset was the next and more difficult hurdle. When the Unity game tried to run on the headset, an infinite loading screen would appear with the Unity logo jittering in front of you. A colleague in the program who had successfully deployed his own game tried to help, but the same problem kept happening. Together we tried to deploy an empty scene, but still had no success. I got permission to factory reset the headset and set it up as if it were my own, but through this I was unable to verify my account as a developer due to a problem Meta has had for over a year with sending the SMS confirmation code for account verification. Eventually I brought the headset in to be checked and set up by Kwame, who was able to get a previous Unity game to deploy on it. With this light at the end of the tunnel giving us hope, we tried to deploy the empty scene, which worked! And yet the final roadblock of the week appeared: my game would still not deploy, showing the same infinite loading screen. As is typical of roadblocks, I will now have to take a few steps back in order to move forward. I will need to rebuild what I originally made inside the empty scene that I know works. This will have to be done incrementally, so I can ensure that each addition still deploys to the headset, rather than rebuilding everything in one go and hitting the same issue. On a more positive note, this week I implemented another planned feature: when the car object enters the lane collider trigger, a sound cue loops at the same time the visual cue appears (a rough sketch is included below). This uses both sight and sound to direct the driver's attention to their mistake. I also edited the Methodology section of my paper to polish it and include more specific information relevant to the proof-of-concept paper. Week 6 will definitely require me to go the extra mile, as I am currently behind everyone else with three weeks left in the program, yet oftentimes diamonds are formed under pressure.
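Here is a rough sketch of the combined lane cue: while the car overlaps the lane-line trigger, the red translucent bar is shown and a warning sound loops, and both stop once the car returns to its lane. The names and the single looping AudioSource are illustrative assumptions, not my exact implementation.

using UnityEngine;

// Placed on the lane-line trigger collider; shows the warning bar and
// loops a warning sound while the car (tagged "Player") overlaps it.
[RequireComponent(typeof(AudioSource))]
public class LaneDepartureCue : MonoBehaviour
{
    public GameObject warningBar;   // the translucent red bar shown to the driver

    AudioSource warningAudio;       // looping warning clip

    void Awake()
    {
        warningAudio = GetComponent<AudioSource>();
        warningAudio.loop = true;
        warningBar.SetActive(false);
    }

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        warningBar.SetActive(true);
        warningAudio.Play();
    }

    void OnTriggerExit(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        warningBar.SetActive(false);
        warningAudio.Stop();
    }
}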

 

Week 6

This will be a short blog post, as throughout this entire week I have just been working to recreate everything I had inside the empty scene that was able to deploy to the headset. Since I was not sure what exactly caused the original deployment problems, any time I added something new to the scene I would deploy to check that it still worked. This, alongside the fact that what you see in Unity on your laptop differs from what you see with the Quest 2 headset on, led to a very repetitive process: add something or make a change, build and run to the headset, check how it looks, notice something is a bit off, change it, build and run again, rinse and repeat. Though tedious, I was able to get almost everything I had earlier deployed and working. The only thing that still needs to be completed is the player input handling, so the car can accelerate, decelerate, steer, and brake through the player's button presses. My suspicion is that last week's roadblock was caused by a mistake in handling player input, so I am a tad nervous that I will make a mistake again and break the build, especially since I am not familiar with how Unity handles Quest 2 controller inputs; with a keyboard it is just Input.GetKey("w"). In the meantime, however, I implemented my final feature idea: when the user is not looking forward for two seconds, an audio cue plays until their view is focused on the road again (sketched below). With just the player input left to go, I am excited to start player testing next week and to finish the Data Collection and Analysis portion of my paper.
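A minimal sketch of that look-ahead cue, assuming the headset camera's forward direction can be compared against a reference transform facing down the road; the 45-degree threshold and the object names are placeholders, not values from my project.

using UnityEngine;

// Loops an alert sound if the headset's forward direction drifts more than
// maxAngle away from the road direction for longer than graceSeconds,
// and stops the sound once the driver looks back at the road.
[RequireComponent(typeof(AudioSource))]
public class LookAheadReminder : MonoBehaviour
{
    public Transform headCamera;     // the VR camera (e.g. centre eye anchor)
    public Transform roadForward;    // reference transform facing down the road
    public float maxAngle = 45f;     // placeholder threshold in degrees
    public float graceSeconds = 2f;  // how long the driver may look away

    AudioSource alertAudio;
    float lookAwayTimer;

    void Awake()
    {
        alertAudio = GetComponent<AudioSource>();
        alertAudio.loop = true;
    }

    void Update()
    {
        float angle = Vector3.Angle(headCamera.forward, roadForward.forward);

        if (angle > maxAngle)
        {
            lookAwayTimer += Time.deltaTime;
            if (lookAwayTimer >= graceSeconds && !alertAudio.isPlaying)
                alertAudio.Play();
        }
        else
        {
            lookAwayTimer = 0f;
            if (alertAudio.isPlaying) alertAudio.Stop();
        }
    }
}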

 

Week 7

I was able to complete the button inputs for the game on Monday this week. It is not exactly what I had planned, but since we are stretched for time it will have to do. That same evening I created my Google Form questionnaire, then had my older brother and my father test the game for two and a half minutes each. They filled out the questionnaire and also gave me face-to-face feedback that I took down in my notes. On Tuesday I had another person I know test the game and complete the questionnaire, and on Wednesday I had anyone in the program who was willing try out the game, making sure I maintained consistency in how I managed the user study. That led to a total of eleven people in my control group. That same day, one person whom I knew had been professionally diagnosed with ADHD also tested the game and filled out the questionnaire, completing my actual testing. On Thursday night I finished the "Testing" section of my research paper, and this Friday and over the weekend I will continue working on the "Data Results and Analysis" and "Conclusion" sections.

 

Week 8

What a week. As the REU approached its end, everyone in the program scrambled to get to the finish line. I, for one, spent Monday and Tuesday getting my paper's draft finished. I sent the draft to my mentor Dan, who gave it back with some extremely helpful notes. This allowed me not only to fix my paper but also to know what I needed to prepare for our symposium on Thursday, and let me tell you, that presentation was stressful for me. I had not given a presentation in over five years and was severely out of practice. Thankfully, Dan and a couple of friends from the REU helped me prep. I still stuttered and stumbled my way through it, but I received many interesting questions about my project that made me think about it in more depth. Friday was spent finalizing my paper while also helping my colleagues with theirs. It was definitely a bittersweet ending to an amazing program experience.

Final Paper:
Filip Trzcinka, Oyewole Oyekoya, and Daniel Chan. 2023. Students with Attention-Deficit/Hyperactivity Disorder and Utilizing Virtual Reality to Improve Driving Skills. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/3626485.3626529 – pdf

Immersive Remote Telepresence and Self-Avatar Project

Aisha Frampton-Clerk, CUNY Queensborough Community College

Week 1:

I began testing some faces in the Reallusion software, first trying my own and then my boyfriend's to see how it handled different lighting in the original headshot images and different features like facial hair. My first task next week will be to work on styling: downloading hair packages and learning to manipulate them. I think these tests have given me a better idea of the scope of the software and some interesting results to analyse, which will help shape the direction of my study.

This first encounter with Reallusion has helped me understand which qualities of a headshot make the best self-avatars. When I start working with photographic headshots next week, I will be sure to consider the lighting and angle of the image.


Week 2:

This week I focused on finding celebrity source images to style in the Reallusion software. I first had to look for websites that provide royalty-free and copyright-free images. I found that others had recommended Flickr, so I chose celebrities that had a range of images available there. The image selection process took longer than I expected: as I tested images in the Reallusion software, I found that the image quality had to be very high and the angle of the face had to be right. Images where a celebrity was smiling or had hair over their face were difficult for the software to decipher. After finding the right pictures, I put them into the Reallusion software to begin styling, using the Smart Hair content pack to create hairstyles that match each celebrity's aesthetic so they are as easy to identify as possible. I am trying to work out how I can make more custom hairstyles and clothing packs so the characters are as recognizable as possible.

Next I will be looking at how I can animate these characters, specifically facial expressions and speech.

Week 3:

I have been importing my characters into iClone 7. I recorded a short voice memo and uploaded it to iClone 7. While the lip movement was automatically aligned to the words, I had to tweak it so it fit the speech better. This included making adjustments to the facial expressions, like moving the eyebrows to match cadence and tone changes in the voice recording. As seen in the Face Key tab, selected polygons can be moved and matched to different sections of the speech to direct facial movement.

Next I am going to look for recordings of each celebrity talking. I want ones that have video, not just audio, so I can look closely at their facial movements and model them. I also want to begin working with larger expressive movements over the whole body.

Week 4:

This week I have continued making characters and working on making them as realistic as possible. I have had some issues working with images of Black celebrities: often the software cannot pick up highlights on the face when it selects the color for the rest of the body. To work around this, I have been selecting skin tones by hand to try to get a more accurate representation. Finding Black hair textures has also been difficult, as they don't come with the program. I have found that in some cases layering different hair pieces from the Smart Hair content pack gives a thicker effect. I have also had to change some of the celebrities I chose, as they did not have enough images for me to work with. I have to test several pictures before I find one that gives an avatar that looks like the celebrity, but now that I have the right images, styling has become much easier.

Here is a before and after of Will Smith with a better original headshot and styling.

Week 5:

I have been watching tutorials on how to add facial expressions and emotions to Reallusion characters in Mixamo. Having previously worked with facial expressions exclusively in iClone 7, I am excited to see how the software differs. I also want to work with the camera plug-in function.

I am also working on creating non-celebrity headshots that will not be familiar to subjects. With these, styling is much easier, as I have more control over the original images.

I have also been looking for more papers similar to my topic to use as a basis for my paper. Reading these papers in greater depth has given me a lot of ideas about the features that contribute to realism and how those features can be investigated. So, while this has been beneficial for understanding how to construct a research paper, I have also gained a better idea of what makes virtual reality feel real.

Week 6 :

I have been looking for the best way to add facial animation to the characters. Live motion capture offers the most customisation, as it can copy any expression you make; however, it requires much more adjustment than face puppeting. The smile is often creepy and unnatural, as the upper lip area cannot be selected and altered on its own. Luckily, both are easy to pick up and work with, so I will be able to record audio and use the AccuLips function to automate the lip movement.

Week 7:

I have been putting together videos of two avatars with audio, ready for the questionnaire. I made four variations of each avatar: first a stationary image of the character, then a video with audio and lip movement, next a video that adds facial expressions, and finally a video with full-body movement. All videos have the same audio accompaniment so as not to distract from the avatar.

https://youtu.be/njViZf5UKXY

I have also finished my survey, which asks participants questions about the videos. I will collect the results over the next weekend.

Week 8:

This week I analysed the responses to my study and added them to my paper. I completed the survey with 25 responses. I found that eye tracking had a huge effect on realism, as the second, lip-movement avatar was consistently ranked the least realistic and most unsettling. I was able to draw some interesting conclusions about the importance of movement when creating virtual characters, and I added figures that illustrate this to my paper and presentation.

Final report was submitted and accepted as a 4-page short paper at VRST 2022:
Aisha Frampton-Clerk and Oyewole Oyekoya. 2022. Investigating the Perceived Realism of the Other User’s Look-Alike Avatars. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3562939.3565636 – pdf

