
Author Archives: VR-REU Student

Gamification of Food Selection and Nutrition Education in VR

Student: Caroline Klein, Vassar College

Mentor: Margrethe Horlyck-Romanovsky, Brooklyn College

 

Week 1

After a hectic move-in and learning to use the subway on the first day, I was happy to meet Dr. Wole and all my wonderful cohort members at bowling. Monday was mostly settling in to Hunter and learning about the structure of the program, but later in the week we dove into discussing research conferences and VR, including a brief introduction to ParaView. I also met with my mentor twice to discuss research ideas and develop my project. I spent most of the week researching, brainstorming, and writing my proposal in my room, using the previous REU publications as reference. I also dedicated a significant amount of time to completing my CITI certification to ensure proper research practices.

I am currently planning to develop a virtual buffet environment and design an experiment to see how incorporating gamified elements like nutrient-based point incentives influences people’s food selections in the simulation. It was challenging to gauge how much development I could accomplish in 4-5 weeks since I had never worked with Unity or VR before, but I am open to adjusting my project as needed once I am familiar with the software and have a better sense of what I can accomplish. Although things were a bit unclear at first, I’m excited for the following weeks as I take the next steps with my project and begin learning Unity.

 

Week 2

I spent most of the first half of the week setting up and connecting the Meta Quest headset and learning how to deploy a game from Unity. Kwame was very helpful in this area and spent time during Wednesday’s class helping me and Amaya get a handle on the basics. I followed some introductory Unity tutorials and played around with the VR development environment on my own as well. For the second half of the week, I completed my literature review and drafted the Introduction, Related Work, and References sections of my research paper. I also completed my CITI certification and tried out Tableau in class on Friday. On my own time, I was excited to attend a women in computing event at Bank of America and get a New York Public Library card after finally getting our student IDs.

Next week, my goal is to jump into development and make significant progress on the VR game because the headset logistics took longer than I expected this week and I have not started implementing the actual buffet simulation yet. I aim to finish the game in the next 3 weeks so I have enough time to collect data, analyze results, and finalize the paper. I will also start planning the specifics of the buffet options and gathering information about their nutritional value using the USDA database as my mentor suggested.

 

Week 3

It was challenging to resolve runtime errors on my own and piece together tutorials to develop the features I wanted, but I feel like I made real progress in Unity this week as I began building the VR simulation. I modeled a simple version of the buffet setting and implemented movement controls, hand animations, grabbing and ray interactors, text boxes, and an overlaid point bar image that updates when a food item is selected. These components will form the foundation of my game. I also conferred with my mentor and explored the available assets on the Unity Asset Store to plan the food options, and started writing the methodology section of my research paper. Most of my week was spent on technical development at home, but we also learned more about VR concepts and technologies like immersion, presence, and VMD in class.

 

Image of test simulation

 

One setback I had this week was that, while I was working on preventing food items from falling through the floor, something went wrong that made the program very glitchy, and I couldn’t figure out how to fix it. Luckily, I had backed up an early version of the project to GitHub, so I was able to retrieve that and redo my work. This was a lifesaver, and I will definitely continue to use version control on GitHub as I implement the rest of the program. However, I still find Unity’s errors confusing and sometimes don’t know what is causing them. In the coming week, I hope to expand my Unity project into a working prototype of the game with a limited selection of options, before expanding it to accommodate all food options and nutrient points in the final version.

 

Week 4

I started off the week with some technical progress as I gathered assets from the Unity Asset Store and arranged them in a 3D scene to make the buffet setting for participants to interact with. One issue I ran into was that we had planned to have several salad options as part of the food selection, but strangely there were no salad prefabs in the asset store. Luckily, I was able to find a lettuce leaf prefab, which I used in combination with a bowl, tomato slices, and dumplings (hidden among the leaves to look like chicken) to make the desired salads. Now that the virtual setting is more expansive, I am starting to experience motion sickness when testing the game since I am spending longer in it and moving around more, but it hasn’t been too bad.

 

Point system activated      Buffet setting

 

After meeting with my mentor on Tuesday, we decided to amend the methodology and drop the two-group design because of the difficulty of validating differences between groups and the time cost of recruiting many participants, since each one has to complete the VR simulation individually. I quickly updated the research paper to reflect this, but plan to rework each section more fully to reflect the change. I’m having trouble pinning down what type of results and conclusions I want to draw from the experiment now that I’m not comparing food choices with and without the gamified intervention, but I hope to meet with my mentor soon to develop this further.

I continued game implementation throughout the week, adding more point bars and the food list to the overlay. I spent Thursday afternoon trying to integrate a SQLite database, but kept running into errors and was ultimately unable to add it. I’ve found debugging in Unity very frustrating, since it reports build errors but gives no indication in the log of where they are coming from. This means building every few minutes to check for errors, but each build takes about 30 seconds to run and you still don’t know exactly what is wrong.

I also prepared a slideshow and demo videos for our midterm presentations on Friday. In this presentation, I briefly went over my proposed idea, literature review, methodology, and technical progress, showcasing the virtual scene and interaction mechanisms. It was also helpful to hear how everyone else’s projects were developing. I finished up Friday by working on the verbal instructions and Google Forms to give participants before and after the experiment. Next week I hope to address some of the limitations I identified this week and finish the technical implementation so the game is ready for user trials.

 

Week 5

This week was heavy in technical implementation as I worked on incorporating feedback from the midterm presentation and finishing up the game. Technical accomplishments include:

  • Restructuring the point system code to make it more organized and easier to scale by associating each food option with a FoodItem object (with attributes for mass, nutrition values, etc.) that I can pass into the add and remove functions (see the sketch after this list).
  • Adding ray interactor functionality so that when you hover over the different point bars you can see health information about each nutrient.
  • Adding the ability for players to change their food selections and remove items from their plate. This was recommended by my mentor to make participants feel empowered to make informed decisions after seeing the effects of their choices.
  • Expanding the existing functionality and details I had prototyped with the burger selection demo to all point bars and food items, including accurate point assignments based on nutrition label data.
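
To make the first bullet concrete, here is a minimal sketch of the idea in Python. The actual project is implemented in Unity C#, and the class names, nutrient keys, and point values below are placeholders rather than the real code.

```python
# Conceptual sketch (Python, not the project's actual Unity C# code) of the
# restructured point system: each food option is a FoodItem whose nutrient
# points are added to or removed from the running point-bar totals.
from dataclasses import dataclass, field


@dataclass
class FoodItem:
    name: str
    mass_g: float
    nutrients: dict  # placeholder nutrient keys, e.g. {"fiber": 3.0, "vitamin_e": 1.5}


@dataclass
class PointSystem:
    totals: dict = field(default_factory=dict)

    def add(self, item: FoodItem) -> None:
        # Add a selected food's nutrient points to the point bars.
        for nutrient, points in item.nutrients.items():
            self.totals[nutrient] = self.totals.get(nutrient, 0.0) + points

    def remove(self, item: FoodItem) -> None:
        # Subtract an item's points when the player puts it back.
        for nutrient, points in item.nutrients.items():
            self.totals[nutrient] = self.totals.get(nutrient, 0.0) - points


bars = PointSystem()
salad = FoodItem("salad", mass_g=150.0, nutrients={"fiber": 4.0, "vitamin_e": 2.0})
bars.add(salad)
bars.remove(salad)
print(bars.totals)  # both totals return to 0.0 after removal
```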

I also decided to replace the sushi roll option with a teriyaki salmon and broccoli dish in order to have a dish rich in vitamin E, since that is one of the nutrients I am highlighting. In addition to development in Unity, I worked on planning the pre- and post-surveys for the experiment. My mentor recommended using an established NEMS-P Food Environment survey for reference, so I plan to incorporate some of those questions and add similarly structured questions about specific nutrients for the knowledge quiz. She also recommended adding an indicator for the recommended daily amount of each nutrient and nutrition labels for each food to the game, so I will look into incorporating these components, too. On campus this week we continued learning about VR topics like GPUs, audio, telepresence, and VR sickness, and played around with Gephi for graph/network visualizations.

 

Week 6

After seeing more of NYC and connecting with the other REU participants on a lunch cruise over the weekend, I jumped into preparing for the research trials. Since I have human subjects testing the game, most of the week was spent preparing for the IRB application. I finished all the required documentation, such as consent forms, and got a faculty member from my home institution to sign off on the Reliance Agreement. I also finalized the Google Form questionnaires and had them approved by my mentor. On the technical side, I added some features to enhance the game experience, such as sounds and an end screen. Adding the background music was simple, but I struggled for a little longer with debugging event-triggered sound effects for food selections.

 

End screen       Hover nutrient information panels

 

I also spent some time reviewing SIGGRAPH Asia XR submissions, which exposed me to some cool research being done. At this point, the VR game and pre- and post-study questionnaire forms are finalized, so I am mostly waiting on IRB approval to start running the study with participants.

 

Week 7

I started the week with some initial usability testing and incorporating user feedback into the game (including fixing typos!). One of the main complaints was motion sickness, so I slowed down the movement controls to help with that. After a final review of the questionnaire, I began the official user study and was able to collect data from 10 participants. Some motion sickness was still reported, but it was much less and did not interfere with the experience. I updated my research paper in Overleaf to reflect the current state of the project.

At Hunter this week, we started off by sharing our progress on our projects and going over our SIGGRAPH Asia conference paper reviews from last week. On Thursday, we took a group trip to the CUNY Advanced Science Research Center, where we got to try some cool interactive visual technologies and tour the various science labs. We ended the week discussing strategies for statistical analysis and our plan for final presentations. Throughout the week, I also participated in all the other REU students’ studies, which involved filling out surveys and testing their applications.

I feel like I still have a lot to do in the last week with data analysis and finishing the research paper, but I’m glad I have the technical implementation and user trials done, and I’m excited to put it all together.

 

Week 8

At the beginning of the week, I analyzed my user response data, identifying the appropriate statistical tests based on which assumptions were met and following Laerd tutorials to process the data. I used IBM SPSS software to evaluate the significance of the increase in knowledge scores and perceived nutrient importance. Although I found a mean increase in knowledge quiz scores, it wasn’t statistically significant according to the Wilcoxon signed-rank test. However, there was a statistically significant increase in the reported consideration of dietary fiber, lutein and zeaxanthin, and vitamin E after the intervention, though these were based on subjective self-reported scores. I also calculated summary statistics to assess changes in awareness of nutrients and to identify which aspects of the game experience were effective. I updated the results section of my paper to reflect my statistical analysis and wrote the discussion and conclusions sections.
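
The test itself is straightforward to reproduce outside of SPSS; below is a small sketch in Python using SciPy on paired pre/post knowledge quiz scores. The numbers are placeholder values for illustration, not the actual study data.

```python
# Wilcoxon signed-rank test on paired pre/post scores (placeholder data).
# The analysis described above was done in IBM SPSS; this is the same test
# expressed with scipy.stats for illustration only.
from scipy.stats import wilcoxon

pre_scores = [4, 5, 3, 6, 5, 4, 7, 5, 6, 4, 5, 6]    # quiz scores before the VR session
post_scores = [5, 6, 4, 6, 7, 5, 7, 6, 7, 5, 6, 6]   # quiz scores after the VR session

stat, p_value = wilcoxon(post_scores, pre_scores)     # paired, non-parametric comparison
print(f"W = {stat}, p = {p_value:.3f}")               # p < 0.05 would indicate significance
```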

By Wednesday, I had a finished draft of my paper, but I had to revise it because two more people participated in my study, bringing the total to 12 participants. Fortunately, most general results and conclusions remained the same, but I had to spend a few hours redoing the statistical analysis and updating the paper accordingly. The REU group also participated in a Motion Capture event on Wednesday, where different teachers gave short presentations while wearing motion capture technology and we evaluated their performance. It was a long day, but it was exciting participating in an external research study and fun listening to the various talks. At this point, I was in the process of getting feedback from my mentors and revising the paper.

We had our final presentations on Thursday, which consisted of a 10-minute slideshow summarizing our research projects. We presented to our REU peers, mentors, and some Iowa State University REU students. In return, we listened to their research presentations on Friday. We spent the rest of Friday at Hunter finalizing our papers. I created my one-page poster, reviewed submission guidelines, and clarified some details with Dr. Wole. We also completed the end-of-program survey. It was a fun day, as we all worked in the same room, talked over lunch, and helped each other navigate the submission process.

 

Final presentations Zoom

 

I’m not fully done with my project at this point because I haven’t submitted yet (almost no one has), but I feel like I am in a good place. My mentor is in Europe right now and has limited availability, but I want to give her a chance to read it over one last time before I submit. I will be submitting a poster to the ACM ISS 2024 poster track, which is not due until August 15th, but I hope to send it in early next week. Overall, the Hunter REU program was a very enjoyable experience that I was happy to be a part of. Even if our research didn’t feel the most impactful, I definitely feel more confident with the process of writing and publishing a research paper, which I knew essentially nothing about at the beginning of the program. Beyond that, I loved all my fellow REU participants and couldn’t have asked for a better cohort. I wish everyone the best of luck in their future endeavors, and hope this blog convinces someone to apply for the REU.

Immersive Remote Telepresence and Self-Avatar Project

Sonia Birate, University of Virginia
Oyewole Oyekoya, CUNY Hunter College

Week One

This week, in addition to peer bonding, a Hunter College tour, and an introduction to ParaView, I concentrated primarily on finalizing my research proposal. Dr. Wole and I were able to narrow down a project that will explore the possibility, feasibility, and effects of controlling avatars in Virtual Reality using actual facial expressions and eye movements from individuals. The goal is to investigate the realism and believability of avatars, particularly when another individual’s facial expressions are mapped onto that avatar. By properly mapping facial expressions and eye movements onto the avatars, we seek to aid the creation of a more realistic and captivating VR experience that closely mirrors real-life interactions. After mapping the facial expressions, does the avatar retain its believability, especially to individuals familiar with the person being represented? Overall, we were able to discuss the game plan, which mostly comprises utilizing the software Reallusion as well as a possible user study.

Week Two

I dedicated my efforts to acquainting myself with Reallusion, with a particular focus on exploring its headshot feature, as depicted below. While attempting to recreate an avatar character, I encountered some challenges in capturing every detail accurately, especially when it came to the eyes. Nonetheless, I considered this endeavor a preliminary software test, so I remain unfazed by the outcome. Concurrently, I commenced working on my abstract and literature review, successfully locating ten relevant sources to incorporate into the research paper’s related work section. Additionally, Trinity and I went to see the new Spiderman movie for fun, and we both really enjoyed it.

Remaking an avatar character through the headshot feature.
 
 
Week Three
This week has proven to be quite eventful. I created a remarkable avatar resembling myself thanks to the headshot plugin found in Character Creator. However, perfecting its resemblance required careful adjustments and a significant amount of time. It dawned on me that even the most subtle nuances, like a delicate play of shadows on one’s face, can profoundly influence the avatar’s resemblance, and I am still considering reworking my avatar to achieve a more accurate depiction. Additionally, I immersed myself in the LIVE App, mapping a range of expressions onto my avatar. This immersive experience has given me a more comprehensive understanding of my project, fostering a sense of both growth and satisfaction. I also worked on my methodology. For next week, I am hoping to start mapping facial expressions from different individuals onto my avatar.
 
 
 
Week Four 
Dr. Wole, Trinity (a fellow summer researcher), and I each had our expressions mapped onto my avatar. To test whether people could distinguish between the three versions, we conducted a small demonstration during the midterm presentation. It was an intriguing experience because most individuals had difficulty discerning the dissimilarities. Interestingly, while performing the facial mapping, I observed that Trinity’s facial expressions appeared more natural, despite her being considered the unfaithful representation. I successfully captured the seven universal expressions (neutral, happy, sad, surprise, anger, disgust, fear) from both the volunteers and myself, which were then mapped onto my avatar. In the upcoming week, I intend to replicate and enhance the research demonstrations by utilizing better pictures and videos. Additionally, I plan to create a Google Form that should be operational by Friday.
 
 
 
The image below shows Dr. Wole, Trinity, and me mapping our expressions using the LiveFace application on my iPhone. I was avatar A, Trinity avatar B, and Dr. Wole avatar C (shown here with our sad expressions).
Week Five
This week, I re-recorded the avatar and individual videos to replicate and improve on the study demonstrations, and worked on creating a survey draft. We opted to re-record the videos once more because the previous individual videos captured on my iPhone had a wireframe, which Dr. Wole didn’t prefer. As a result, next week I will be re-recording the avatar videos as well as the individual iPhone videos and finalizing my survey so I can send it to participants and collect data for the user-study portion of the research.
 
Week Six

Over the course of this week, my primary focus was on enhancing the quality of the videos required for the survey. This involved meticulously re-editing a substantial number of videos, 42 in total. That number was evenly split: half of the videos were recorded avatar clips, while the other half were individual clips captured using iPhones. To ensure a seamless user experience, I segmented these videos into shorter, more digestible clips of approximately 3 to 4 seconds each. These clips were then uploaded to YouTube, which provided a convenient platform for integrating them into the survey and let respondents view and respond to the video content easily. I then created a survey draft incorporating all 42 clips, using a forced-choice answer format that prompts users to match each individual with the avatar showing their facial expressions. We intend to send out the survey next week.

Week Seven

During this week, I completed the design of my survey and distributed it to my REU cohort, mentors, and other potential participants. As of Sunday, I have received 20 responses, all of which are valid and can be used for analysis. I dedicated time to working on the user study section using Overleaf. Moving forward, my next steps involve initiating the data cleaning and analysis phase, along with defining the types of data and their respective categories. I am currently in the process of determining which tests I will employ for the analysis. Additionally, I aim to promptly finalize the results and analysis section on Overleaf.

Week Eight

I completed the results and analysis part of my paper and was able to obtain a graph that displayed the survey results. I also developed a PowerPoint presentation to display my results, which I shared with the team. Overall, I am on schedule to submit to SIGGRAPH Asia. Below is the chart generated from my survey results. Overall, the unfaithful representations were matched correctly somewhat more consistently than the faithful representation.

 

Overall, I loved this summer so much and doing research at Hunter. I would do this all over again if I could. <3

2023 VR-REU students dinner

Final Paper:
Birate Sonia, Trinity Suma, Kwame Agyemang, and Oyewole Oyekoya. 2023. Mapping and Recognition of Facial Expressions on Another Person’s Look-Alike Avatars. In SIGGRAPH Asia 2023 Technical Communications (SA Technical Communications ’23), December 12–15, 2023, Sydney, NSW, Australia. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3610543.3626159 – pdf

Sonifying the Microbiome: 365 days in 360°

Deborah Rudin, University of Minnesota Twin Cities

Project: Sonifying the Microbiome: 365 days in 360°

Mentor: Andrew Demirjian

About Me: I’m a Computer Science and Theater Arts double major at the University of Minnesota Twin Cities. My hobbies include reading, dancing, creating art, listening to music, and cooking. I currently work as an Audio/Media Technician for my University’s Theater department’s Sound Shop.

Week 1:

After meeting with my mentor and deciding what direction we wanted to take our project, I settled into learning how to use Max MSP. Max MSP is a visual programming language for music and multimedia design, and it’s what I’ll be using for the majority of the project. Going through the various tutorials and experimenting with the patches gave me a basic understanding which I will build upon as the summer progresses. I also took a look at the data we’re working with and found the minimum and maximum of each feature in order to develop a range of values. Each feature corresponds to a taxonomic group found in the infants’ microbiomes; we’re only working with the top ten taxonomic groups, as observed in the study the data originally came from. I also noted the periods between measurements for each infant to get a sense of the intervals, especially whether they were consistent across the board. Later on, we may use these intervals to affect the duration of notes or other aspects of the sonification. I also wrote up a project proposal and submitted it to Dr. Wole. At the end of the week, I attended the CUNY SciComs Symposium, which was highly interesting.
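
As a rough illustration of that first data pass, here is a minimal sketch in Python with pandas, assuming the measurements sit in a CSV with one column per taxonomic-group feature. The file name and column names are hypothetical, and the actual work feeds into the Max MSP workflow rather than Python.

```python
# Compute the per-feature minimum and maximum to establish the value ranges
# that the sonification can later map onto pitch or other parameters.
# File name and ID column names below are hypothetical.
import pandas as pd

df = pd.read_csv("infant_microbiome.csv")                               # hypothetical data file
feature_cols = df.columns.drop(["infant_id", "day"], errors="ignore")  # drop hypothetical ID columns

ranges = df[feature_cols].agg(["min", "max"]).T                         # one row per feature
print(ranges)
```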

On the more extracurricular side of things, I’ve started exploring the city with my cohort. So far we’ve gone to the Highline and Chelsea Market!

Below:  Data on the Measurements and Features of the Infants

Week 2:

This week, my mentor and I started diving more into using Max MSP for our project. I first separated our data so that it was per infant, and transposed it so as to easily acquire the data per feature in Max MSP. Then, I worked on creating a patch which would go through the data on an infant and output the data on each feature separately. A patch is essentially a visual program in which you ‘patch’ different objects to the inputs and outputs of others; in that respect, the language resembles wiring up audio systems in real life. After that, I used the patch to make a basic sonification of the values of the first feature for the first baby. This worked successfully, although not quite in a way pleasing to the ear, so I then worked on learning how to put the features through various synths. My mentor and I figured out how to generate notes and chords in Max MSP, and then I worked on creating a basic generative music piece by randomizing the steps between the notes. We also decided to create a room in Mozilla Hubs to showcase our work in addition to the in-person black box setting for our presentation.

As a cohort, my peers and I have been learning how to use various visualization tools as well. So far we have worked with both ParaView and Tableau. We also attended a Zoom dissertation presentation on the benefits of using VR/MR for Chronic Pain Self-Management.

Week 3:

As a cohort, we worked with VMD and on our research papers in Overleaf. I wrote a draft of my abstract and introduction, and will continue working on my paper throughout the course of the summer. In my project, I worked on forming chords from my data, using the values to generate the notes and then putting them through MIDI synths. I played around with creating two chords, one through a basic piano synth and one through a pizzicato string synth. However, it doesn’t yet run completely correctly, so I still have much to develop with that patch. I also researched the different taxonomical families we have data on in order to figure out how they’ll be used in our sonification. I’m currently playing with the idea of using Gram-negative and Gram-positive bacteria for different chords or features of the sonification process. We are also considering using certain features for velocity, others for pitch, and perhaps some for frequency filtering. At the moment there are a lot of different possibilities that we can work with, and as such we’re considering all of them.

On the extracurricular side of things, I went to the Museum of Modern Art (MoMA) and had afternoon tea at a lovely cafe I found.

Week 4:

This week, I heavily focused on my research paper. I got my introduction and related works section done, which leaves me to start writing up my methodology next. We now have research paper writing sessions on Zoom on Tuesdays. As I researched the related works, I found the information fascinating! Pythagoras’ Harmony of the Spheres is something I had never heard of before, and I definitely want to learn more about it. Coding-wise, I fixed up the patch and got it running, with Gram-positive bacteria making one chord and Gram-negative bacteria making another. However, the sound isn’t exactly what I want yet, so my mentor and I are definitely going to play around with which features do what. I also started working in Mozilla Hubs, feeling out how it works. Once I’ve arranged the visual aids the way I want, I’ll work on figuring out how the audio zoning works.

As a cohort, we worked on integrating R and Python with Tableau. On my own, I attended a concert at Palladium Times Square!

Week 5:

At the start of the second half of the program, much of this week’s focus was on developing a first model in Mozilla Hubs to see how everything works. I arranged objects and gave them corresponding mp3 files in order to figure out how the audio zoning functions. We’ve created a system wherein each feature corresponds to a note in a scale, which is then pitch modulated according to the data values. Each baby then has a different synth assigned to it, creating distinction between the babies while keeping patterns identifiable. We’re still finalizing our scale and what instruments to use. With this setup, we’re sending everything through Ableton Live, which directly reads in the MIDI notes from Max MSP and lets us convert them into the mp3 files we need. I’ve continued working on my research paper, now focusing on my methodology.
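
For readers unfamiliar with Max MSP, here is a minimal Python sketch of the mapping idea described above, assuming each feature is assigned a base note in a scale and its data value shifts the pitch in whole octaves. The scale, MIDI numbers, and ranges are placeholders, not the project’s actual patch.

```python
# Sketch of mapping a feature's data value onto a pitch-modulated scale note.
# The real mapping is built as a Max MSP patch feeding MIDI into Ableton Live.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # placeholder scale (MIDI notes C4 to C5)

def value_to_midi(value, vmin, vmax, base_note):
    """Shift a feature's base note by octaves according to where the value
    falls within its observed [vmin, vmax] range."""
    if vmax == vmin:
        return base_note
    norm = (value - vmin) / (vmax - vmin)   # normalize to 0.0-1.0
    octave_shift = round(norm * 2) - 1      # -1, 0, or +1 octave
    return base_note + 12 * octave_shift

# Example: the feature assigned the 4th scale note, with a mid-range value.
print(value_to_midi(0.45, 0.0, 1.0, C_MAJOR[3]))  # prints 65 (no octave shift)
```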

This week we also toured the ASRC, or Advanced Science Research Center, which was fascinating. I was most interested in their Neuroscience and Photonics research.

Week 6:

This week was spent putting together the first draft of the final project. I arranged the Grogu baby objects in a circle, each with their corresponding audio files in Mozilla Hubs. First, we used a test audio to make sure the objects were at least relatively in sync with each other. After that, we used the sonifications we’d created. We used string synths to create a sound more pleasing to the ear, and modified the durations of the notes to correspond with the time between measurements: shorter at the beginning and getting longer towards the end. Next, we want to try different spatializations within Mozilla Hubs. For my paper, I finished my abstract and methodology, and now just need to write my results and conclusion.

On my own, I went to Spyscape, which is an interactive spy museum.

Below: Baby and Sound objects in Mozilla Hubs

Week 7:

This week has pretty much been crunch time. As a cohort, we’ve started to work on testing each other’s programs and running user studies. I set up the new spatialization for the sonification, and I much prefer this version. I set it in a geodesic dome preset scene, which allows a circular setup with much more room between the babies as well as in the middle. This lets one listen to them all at once or go around the edge to focus on one or several at a time. The volume on each audio is also adjustable, so if one wishes to hear only one baby, they can turn the volume all the way down on the others. Some small stumbling blocks my mentor and I dealt with were originally having a bunch of duplicate audios instead of separate unique ones, and making sure the duration of the notes matched their periods of measurement. Luckily, these were easily resolved, and we were able to go on with our final implementation. After I put everything into Mozilla Hubs, I also labeled each baby so our observations would be correct and unambiguous. Unfortunately, Mozilla Hubs does not have a labeling system, so I resorted to using the Pen object to create a drawing and then turn it into a pinnable 3D object.

On a less work intense side, we took a group trip to the Bronx Zoo to do the Treetop Adventure Zipline. That was a lot of fun, even in the boiling heat. I also tried a NYC restaurant week restaurant – which was incredible – as well as a magical afternoon tea at The Cauldron.

Below: Baby objects in the Geodesic Dome with the Audio Debugger showing the Audio Zoning in Mozilla Hubs


Week 8:

As the last week of the program, this week was focused on getting our papers done and submitted. Our abstracts were due Monday, with the papers themselves needing to be submitted by Friday. For me, this meant getting participants to experience the sonification and then fill out a response form so data could be collected. This data collection allowed me to analyze my results and write the results section of my paper. Once that was done, I was able to write my conclusion and finish off the paper. Before I turned it in, I checked it over with my mentor so that I could add anything he thought was necessary. That done, I was able to submit my short paper! After that, I worked on developing my slides for my Friday presentation. On Friday, all of us presented our projects in a presentation session. We each had about 25 minutes to present, and the session was hybrid, both in person and on Zoom. It also included a session Dr. Wole set up in which he presented a Zoom recording of us REU participants discussing computing, STEM, and VR/AR/MR; we recorded that discussion on Tuesday so it could be presented easily. Breakfast and lunch were both provided during the presentation session, which was a very nice addition to our last day of the program.

Outside of working on finishing up my project, I saw Phantom of the Opera on Broadway, which was incredible. I really enjoyed working in NYC this summer, and I’m so glad I had the chance to participate in this REU.

Final Report
