VR-REU 2024

Or Butbul


Week 1


After landing in a sunny New York and unpacking my bags, I headed down to the lobby to meet the group of people who would be living in the same residence as me. We all went bowling with the rest of the group and had a lot of fun! The next day was spent touring the college, meeting the professors, and starting to talk about proposals. Initially I was drawn to a project focusing on motion capture, but after reading through the bulk of the articles on it, I decided to shift my focus toward graphics. My proposal was accepted after the class we had on Wednesday, and I spent Thursday refining it. Friday was spent learning ParaView and exploring the city.


Week 2

This week focused on completing the preliminary material. I have been working on the CITI training materials as well as the literature review for the beginnings of my paper. I have been having an issue with my base university that has been stopping me from connecting to the internet here and limiting what I can do while we meet. I hope to resolve the issue before the beginning of next week, and to set up a remote desktop so I can access a faster computer to render my virtual humans. Outside of the project, I have been trying many new restaurants and getting to know my cohort. Talking about each other's projects has given us clarity about our ideas and goals.


Week 3

I managed to create the virtual humans last weekend. The main goal of this week was to connect to my home computer remotely so I could work in Unreal and access my MetaHumans. I connected to my home computer on Wednesday and downloaded the mesh of my first avatar. This weekend I plan to get the mesh of my other avatar and the textures of both avatars. Choosing Blender over Unreal was a big decision for me: Unreal Engine has its own rendering pipeline better suited to the realism of the MetaHumans; however, Blender gives me more information on render time and memory usage, which is vital to the project. The goal for next week is to get all the renderings of the avatars so the survey can be prepared. Outside of work this week, I had some fun opportunities to meet my group for pizza and shopping at Chelsea Market. I also met some old friends who live in the city now.
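Back on the technical side: since Blender's render-time reporting was a deciding factor, here is a minimal sketch of one way to log per-frame render times from Blender's Python API (scene setup assumed; Blender also reports peak memory in the render window):

```python
import time

import bpy

_start = {"t": 0.0}

def on_render_pre(scene, *args):
    # Fires just before each frame is rendered.
    _start["t"] = time.perf_counter()

def on_render_post(scene, *args):
    # Fires after each frame finishes; print the wall-clock render time.
    print(f"Frame {scene.frame_current}: "
          f"{time.perf_counter() - _start['t']:.2f}s")

bpy.app.handlers.render_pre.append(on_render_pre)
bpy.app.handlers.render_post.append(on_render_post)
```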


Week 4

This week focused on preparing the material we needed for our midterm presentations. I spent the bulk of my week refining my literature review and creating a more detailed methodology for the project itself. It was difficult to connect to my home computer this week, so I did not have much time to work in Unreal, but I did have some work I could do without it. Namely, I could prepare animation data so my avatars could move in the survey and survey takers would have a better reference for the avatar's level of realism. Next week I plan to get the renderings of my avatars at their different levels of detail.

This is an example MetaHuman with an idle animation found in MetaHuman Creator, UE's web-based platform for avatar creation.
Outside of work, I went to a festival over the weekend in Brooklyn and I was able to see many amazing singers and hear a lot of great music.
Week 5
This week had many difficulties for me in capturing the renderings of my virtual human. Capturing animation from MetaHuman Creator was not possible, so a custom idle animation had to be recorded using LiveLinkFace. Importing that animation into Unreal and getting it to work with the avatars was a long ordeal as well. I was able to change the level of detail of my avatar in the scene and modify its texture maps, which will be helpful for the study. Unreal also provides the render time of each frame and the total render time, which is helpful for the project. Lastly, I had a big issue getting a specific render engine to work with my MetaHumans. I wanted to use Unreal Engine's path tracing, but that engine has very little MetaHuman support, and renderings could not be achieved at a sufficient level of detail. I have decided to use the detailed lit rendering mode to replicate a lower-performance system's render engine while still maintaining a sufficient level of detail overall.
My survey questions have been prepared; I plan to render the avatars as soon as I can, import those videos into the survey, and hopefully finish the survey before the start of next week.
Week 6
This was a great week to get work done! All the MetaHumans were animated and rendered. The image sequences were rendered out as PNG sequences and then assembled into MP4 videos in Blender. I then tried to import the videos into Qualtrics to create the survey, but the videos were too large. I used Giphy to downscale them and turn them into GIFs, which worked with the survey. The surveys should be completed over the weekend, and distribution will likely happen at the beginning of next week.
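As an aside, the same PNG-to-MP4 assembly can also be done outside Blender with ffmpeg; a minimal sketch with hypothetical paths and an assumed 30 fps frame rate:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "30",               # assumed frame rate
    "-i", "renders/frame_%04d.png",   # hypothetical numbered PNG sequence
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",            # widest player compatibility
    "avatar_render.mp4",              # hypothetical output name
], check=True)
```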
Week 7
This week was pretty simple, as the majority of it was spent waiting to collect data from survey takers. I released the survey on Wednesday and had collected enough data by Saturday. Monday was spent refining the paper and fixing errors, while Wednesday onwards was spent planning the data analysis.
Week 8
The final week should not be underestimated! My colleagues and I spent a great deal of time on our papers this week, making sure they were as polished as possible. Some of the week was spent on data analysis, where I decided I would use ANOVA for my comparison data, and simple means and standard deviations for my other data. All sections of the paper were edited as we prepared to submit our work. On the last day, we had the privilege of a call with Iowa State University about their XR research. It was great to hear what excellent work they are doing. After countless hours and constant revising, I am happy to say that my paper is finished! I'm sad to see this experience come to a close, but I am so happy I got to participate!
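As a footnote on the analysis step above, the ANOVA comparison boils down to something like this SciPy sketch (made-up ratings, not my survey data):

```python
from scipy.stats import f_oneway

# Hypothetical realism ratings for three levels of detail.
lod_high = [6, 7, 7, 5, 6]
lod_mid = [5, 5, 6, 4, 5]
lod_low = [3, 4, 3, 2, 4]

# One-way ANOVA: do the group means differ significantly?
f_stat, p_value = f_oneway(lod_high, lod_mid, lod_low)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```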

Immersive Remote Telepresence and Self-Avatar Project

Trinity Suma, Columbia University 

Oyewole Oyekoya, CUNY Hunter College

Week 1

I first met my REU cohort the Monday after I arrived in NYC, bonding over bumper-less bowling at Frames Bowling Lounge.  Our initial meeting was refreshing; I was excited to work with all of them and make new friends.  On Tuesday, my real work picked up after a quick tour of Hunter and a meet-and-greet with all the REU mentors.  I began discussing directions for my project with Dr. Wole and outlining a project proposal.  Wednesday was the first session of VR, AR, and Mixed Reality, a class taught by Dr. Wole that the rest of my cohort and I are auditing.  For the rest of the week, I finalized my project proposal, defining my project's uniqueness and conducting a preliminary literature review.  We wrapped up the week learning how to use ParaView and presenting our proposals.

Week 2

My work picked up this week as I began to familiarize myself with Reallusion to design the avatars for my study.  My project is ideally going to follow a bystander intervention scenario set in a pub/bar environment.  Below is my idealized script, but I will likely cut out some dialogue for simplicity. 

Study dialogue illustrating a bystander intervention scenario at a bar.

My scenario has five characters:

  • Person A: the one bullying B
  • Person B: the one being bullied by A
  • Person C: another bystander
  • Person D: bar owner 
  • User

Below are also preliminary avatar designs for persons C and A, respectively.  I am not designing an avatar for the user since the scenario is ideally experienced in the first person.  I am also considering not designing one for person D for simplicity.  Only person B will be made from a headshot, and it will resemble someone the user knows.  This week, I also began working on my paper, beginning with the introduction and literature review.  Next, I want to continue creating my avatars and animate/record the audio.

Work was not all I did this week, however!  Sonia and I watched the new Across the Spider-Verse movie together before visiting the NYPL for the Performing Arts to get some work done.  I also attended the CUNY SciCom Symposium at the ASRC with my peers, where we listened to various research talks and learned more about presenting our research.
Week 3

Progress was slower this week.  I redesigned my avatars for persons A and C and also designed an avatar for person B.  Person B is modeled after myself (see below).  I've decided that, for simplicity, I will not design a character for person D.  I began working with some audio recordings as well.  I debated using Audacity, AudioDirector, and Voxal to edit my audio, but I chose Audacity since I am most familiar with it.  I also began importing my characters into iClone to sync their audio.

The overall direction of my project has changed since last week.  Dr. Wole and I discussed and decided that we are going to focus on how pitch and speed affect users’ perceptions and choices in a bystander scenario.  This will allow creators to gauge how avatars’ voices influence users’ experiential fidelity. 

The week ended with a bang at Webster Hall where I saw CRAVITY, one of my favorite musical artists.  Later that weekend, I saw Hadestown with my uncle for Father’s Day.

Week 4

Welcome to week 4!  I can’t believe I am already halfway through this experience.  This week I finished animating my avatars on iClone with audio recordings of both my voice and my brother’s voice.  There has been more discussion about the direction of my project, but in the meantime, I worked on creating pitch variations for my audio.  Each clip has been pitched both up and down by 10%.  I chose 10% since it seemed like a good baseline to start; the clips did not sound incredibly unrealistic, but the difference was still noticeable.  Below is a sample line from the aggressor.  The first clip is the unedited recording, the second clip is the pitched-up recording, and the third clip is the pitched-down recording. 
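I made the edits in Audacity, but for reference, the same ±10% shift can be sketched in Python with librosa (hypothetical file names; a 10% frequency change is roughly 1.65 semitones):

```python
import math

import librosa
import soundfile as sf

# Load the original recording (hypothetical file name).
y, sr = librosa.load("aggressor_line.wav", sr=None)

# A 10% frequency change in semitones: 12 * log2(1.10) ~= 1.65.
steps = 12 * math.log2(1.10)

up = librosa.effects.pitch_shift(y, sr=sr, n_steps=steps)
down = librosa.effects.pitch_shift(y, sr=sr, n_steps=-steps)

sf.write("aggressor_up10.wav", up, sr)
sf.write("aggressor_down10.wav", down, sr)
```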

We have decided not to abandon the bystander scenario I wrote.  Instead, it will be used as the medium to convey the altered audio.  The scenario will be presented in a survey.  The study participant will watch the scenario play out by reading the narration and watching video clips of the animated avatars.  In some cases, the participant will be presented with multiple variations of the same clips (this procedure is subject to change) in which they will have to rank the clips based on their level of aggression or assertiveness, depending on the character.  This study will allow future developers to gauge how to record and modify their audio to best convey their desired tones. 

Week 5

My progress was slower this week as we finalized the focus of my project.  After much discussion, we are going to study how various combinations of over-exaggerated, under-exaggerated, and average facial expressions and tones affect survey participants' perceptions of aggressiveness and assertiveness (depending on the character being evaluated).  A diagram of each combination is shown below.  Nevertheless, this week I worked with Sonia and Dr. Wole to record the lines of the aggressor and bystander in my scenario with their lookalike avatars.  To maintain the lookalike avatar concept, we have decided not to use the avatars I designed from the neutral base, nor the audio my brother recorded.

In addition to work, I had a lot of spare time to myself, which was very healing.  I visited the MET and Guggenheim for free and met up with a friend from home.  On Thursday, the REU cohort attended a lunch cruise where we had great views of the Freedom Tower, Brooklyn Bridge, and the Statue of Liberty. 

Week 6

I had less work to do this week, but I expect it to pick up very soon.  I focused on editing all the videos of the lookalike avatars I had filmed with Sonia and Dr. Wole.  Sonia played the bystander while Dr. Wole played the aggressor; each of them filmed a variation where they underexaggerated and overexaggerated their words and facial expressions, in addition to a neutral version.  From there, I exchanged the audio on each video to create 9 different variations of their words (see the diagram above and the sketch below).  Here is one of the videos.  Once my videos were approved and we decided on a survey design, I created my survey in Qualtrics and am preparing to send it out early next week or sooner.
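For the curious, the audio-swapping step looks roughly like this moviepy sketch (1.x API; the file names are hypothetical), crossing the three facial-expression videos with the three vocal-tone recordings to get the 9 combinations:

```python
from moviepy.editor import AudioFileClip, VideoFileClip

levels = ["under", "neutral", "over"]

for face in levels:            # facial-expression variation in the video
    video = VideoFileClip(f"bystander_{face}.mp4")
    for voice in levels:       # vocal-tone variation in the audio
        audio = AudioFileClip(f"bystander_{voice}.wav")
        # Replace the video's audio track with the chosen recording.
        video.set_audio(audio).write_videofile(
            f"bystander_face-{face}_voice-{voice}.mp4"
        )
```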

Luckily, I was able to take advantage of the holiday weekend and join my family in Atlantic City, NJ.  Later in the week I also went to see TWICE at MetLife Stadium.

Week 7

This week, I finalized my survey design and sent it out to my REU cohort, the mentors, and other potential participants.  As of Friday afternoon, I have 22 responses, but not all of them are usable since some are incomplete.  I am beginning the data cleaning and analysis stages.  Given my data types and how they are categorized, I am still figuring out which tests I will use.  Dr. Wole and I have discussed non-parametric Friedman tests and two-way repeated measures ANOVA tests.  Hopefully, it will be finalized this weekend.  I have also been researching new papers applicable to the emotional recognition aspect of my study to include in my introduction and literature review.
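For a concrete picture, here is a minimal sketch of the Friedman test in SciPy with made-up ratings (each list holds one condition's ratings from the same set of participants):

```python
from scipy.stats import friedmanchisquare

# Hypothetical ratings from the same 6 participants per condition.
under = [2, 3, 2, 1, 2, 3]
neutral = [3, 4, 4, 3, 3, 4]
over = [5, 5, 4, 5, 4, 5]

# Non-parametric repeated-measures comparison across the conditions.
stat, p_value = friedmanchisquare(under, neutral, over)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```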

This week, my cohort also visited the ASRC again to tour the Illumination Space and the facility itself.  We also tested Sabrina’s AR app which was very fun!  I had enough time that day to visit Columbia and use one of the libraries to get some work done, which was very nice.  This weekend, I am going to Atlantic City again for my grandma’s birthday as well as taking a class at the Peloton studio near Hudson Yards. 

Week 8

Happy last week!  Thank you for following me throughout the last 8 weeks and reading my blog posts!  Over the weekend, I finally updated the introduction and literature review sections of my paper as I mentioned last week.  This week was one of my busiest as I balanced packing up my room to move out with finishing some preliminary data analysis to include in my final presentation.  Since we had yet to run the statistical significance tests, I looked at the mean and median responses for each question type (a sketch of that step is below).  Our results are following our original hypotheses; you can find the data in the slideshow below.  On Friday, I ground out my results and discussion sections for my paper and finished packing to go home Saturday.  I have had an amazing time this summer and will miss all of my cohort members!!
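That descriptive pass amounts to something like this pandas sketch (hypothetical column names, assuming a cleaned Qualtrics export):

```python
import pandas as pd

# Hypothetical cleaned export: one row per response, with a question_type
# label and a numeric response column.
df = pd.read_csv("survey_responses_clean.csv")
summary = df.groupby("question_type")["response"].agg(["mean", "median"])
print(summary)
```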

Presentation: VR REU – Final Presentation 2

Final Paper:
Trinity Suma, Birate Sonia, Kwame Agyemang Baffour, and Oyewole Oyekoya. 2023. The Effects of Avatar Voice and Facial Expression Intensity on Emotional Recognition and User Perception. In SIGGRAPH Asia 2023 Technical Communications (SA Technical Communications '23), December 12–15, 2023, Sydney, NSW, Australia. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3610543.3626158

Arab data bodies – Arab futurism meets Data Feminism

Mustapha Bouchaqour, CUNY New York City College of Technology

Week 1: Getting to know my team and the project’s goal.

I have joined Professor Laila in working on this project. The project is framed as a story reflecting what has happened in Arab countries since the Arab Spring uprisings began in 2011. The story is rooted in 2011 but unfolds in an Arab-futurist world where the history of the 21st century is one in which data and artificial intelligence have created "data bodies" (DB). A hundred years from now, individuality is created out of data, and human and non-human subjectivities can be born solely from data.

The idea, then, is to develop a game. The game uses real data from early 21st-century uprisings and social movements – taking the 2011 Arab Uprising as ground zero – to create human and non-human agents called data bodies. This week's goal was to make sense of the data collected, get to know the team I am working with, and start on the blueprint we need as the foundation for developing the game.

Week 2: Analyzing data using NLP and a first basic design in Unity 3D

My group is still working on developing a blueprint that will serve as the foundation for the game. The final product I am trying to deliver is centered on two concepts: the game challenges power, and the data provided is categorized into emotional, experiential, and historical data (Arab Uprising 2011). Bridging the gap between analyzing the data and implementing the game in Unity 3D is what I am working on right now. I am in the process of analyzing data gathered between 2011 and 2013. I will use natural language processing (NLP) and design the basic animations needed for the first stage.

Week 3: Deep dive into data

The dataset is held in a MySQL database, split across the following tables:

  • random key
  • session Tweet
  • User
  • Tweet
  • Tweet Test
  • Tweet URL
  • URL
  • Tweet Hashtag
  • Hashtag
  • Language
  • Session
  • Source

Based on the UML, there are 3 independent tables: Language, Session, and Source. They have no direct connection in the UML view; however, I believe there are intersections occurring across all the tables in the database, and the way the data was collected may explain this view. The remaining tables have interesting intersections. The Tweet table has around 6 connections; in other words, it is connected to 6 tables: random key, session tweet, user, tweet test, tweet hashtag, and tweet URL. Here are some fields related to the tweet table:

The ‘tweet’ table glues everything together. It has the following columns:

  • twitter_id # I believe this twitter_id is also valid for the Twitter API, but I never tested to see if it was
  • text
  • geo # the geo data is PHP serialized GeoJSON point data (I believe lon lat), use a PHP deserializer to read it
  • source
  • from_user_id
  • to_user_id
  • lang_code
  • created_at

The ‘user’ table has the following:

  • user_id
  • username
  • profile_image_url # many of these are now broken, but some can be fixed by just modifying the hostname to whatever Twitter is using now

The ‘hashtag’ table has the following:

  • hashtag_name
  • Definition # these definitions were curated by Laila directly
  • Related_Country
  • Started_Collecting
  • Stopped_Collecting
  • hashtag_id

The ‘url’ table has the following:

  • url_id
  • url

You can look up a tweet's user info by INNER JOINing the tweet table with the user table on the tweet table's from_user_id column.

Because tweets and hashtags, and also tweets and URLs, have many-to-many relationships, they are associated by INNER JOINing on these association tables (see the sketch after this list):

  • tweetHashtag
  • tweetUrl
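Here is a sketch of those two joins from Python (the connection details and the association-table column names are assumptions; the rest follows the schema above):

```python
import mysql.connector  # mysql-connector-python package

# Hypothetical connection parameters.
conn = mysql.connector.connect(
    host="localhost", user="reu", password="...", database="rshief"
)
cur = conn.cursor()

# A tweet's user info: join tweet to user on from_user_id.
cur.execute("""
    SELECT t.text, u.username
    FROM tweet AS t
    INNER JOIN user AS u ON u.user_id = t.from_user_id
    LIMIT 10
""")
for text, username in cur.fetchall():
    print(username, text)

# Many-to-many: tweets for one hashtag via the tweetHashtag association
# table (its column names are assumptions based on the schema notes).
cur.execute("""
    SELECT t.text
    FROM tweet AS t
    INNER JOIN tweetHashtag AS th ON th.tweet_id = t.twitter_id
    INNER JOIN hashtag AS h ON h.hashtag_id = th.hashtag_id
    WHERE h.hashtag_name = %s
""", ("syria",))
print(cur.fetchall()[:3])
```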

In addition to this, an NLP model was developed to analyze the data and prepare the pipeline needed for Unity 3D.

A simple UML model was built to check the relationships between the tables.
Week 4: Storytelling using a dataset from R-Shief.

My team's ultimate goal is to create a virtual reality experience that projects the story behind the data. This is a story set in the future that locates the 2011 Arab Uprisings as the birth of the digital activism we witnessed grow globally throughout the twenty-first century – from Tunis to Cairo to Occupy Wall Street, from 15M and 12M in Spain to the Umbrella Revolution in Hong Kong, and more. The player enters a public mass gathering brimming with the energy of social change and solidarity. The player has from sunrise to sunrise to interact with "data bodies."

However, given the short time I have and the deadline for a solid final product, my mentor Professor Laila guided me to work on the following:

1 – Develop a homology visualization using the tweet data from August 2011 (#Syria)

2 – Distribute the tweet data over several characters so we can see how the data maps to emotional motions, including but not limited to: Anger, Dance, Protest, Read, etc.

Week 5: Creating and visualizing network data with Gephi.

I got access to the R-Shief server and used the "tweet" table. First, a nodes file was created by extracting all the user_ids from the tweet table. We assigned each user_id a specific reference or ID, producing a nodes file containing "Id" and "Label" columns. An edges file was created from the relationships between user_ids within the "tweet" table; the table contains two fields that capture this relationship, "from_user_id" and "to_user_id". The edges file then contains several fields, including the languages.
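A sketch of this export with pandas (assuming the tweet table has been dumped to a hypothetical tweets.csv):

```python
import pandas as pd

# Hypothetical dump of the MySQL "tweet" table; columns include
# from_user_id, to_user_id, and lang_code.
tweets = pd.read_csv("tweets.csv")

# Nodes: every user id appearing on either end of a tweet.
ids = pd.concat([tweets["from_user_id"], tweets["to_user_id"]]).dropna().unique()
nodes = pd.DataFrame({"Id": ids, "Label": ids})
nodes.to_csv("nodes.csv", index=False)

# Edges: one row per from -> to pair, keeping the language field.
edges = (
    tweets.dropna(subset=["to_user_id"])
    .rename(columns={"from_user_id": "Source", "to_user_id": "Target"})
    [["Source", "Target", "lang_code"]]
)
edges.to_csv("edges.csv", index=False)
```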

Note: The data used still meets the same criteria:

  • Tweet contains "Syria"
  • Date range: August 2011

An example of the network data looks like this:

  • Each circle represents a node, which is a user ID
  • Edges are the connections between nodes
  • Edge colors represent the language linked to the tweet

Sentiment analysis using the same data from the tweet table:

Comment: The last graph is much better, allowing us to actually see some dips and trends in sentiment over time. Now all that is left is to project these changes in sentiment onto the avatars we create in Unity 3D.
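For reference, a minimal sketch of a daily sentiment trend in Python (VADER here is a stand-in rather than necessarily the model used, and the input file is hypothetical):

```python
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()  # needs nltk.download("vader_lexicon") once

# Hypothetical dump of the tweet table with text and created_at columns.
tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])
tweets["compound"] = tweets["text"].astype(str).map(
    lambda t: sia.polarity_scores(t)["compound"]
)

# Mean compound score per day: the dips and trends mentioned above.
daily = tweets.set_index("created_at")["compound"].resample("D").mean()
print(daily)
```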

Week 6: Continuing the research paper and exploring ML-Agents in Unity 3D

This week my work focused entirely on Unity. I found many resources on how to implement ML models in Unity 3D. My goal is to distribute the sentiment clusters over the characters I have. In addition, I worked on wrapping up the abstract needed for the research paper.

Week 7: Finished the abstract; continuing the research paper and ML-Agents in Unity 3D

I finished the research paper's abstract along with the introduction, continued figuring out how to implement ML-Agents in Unity 3D, and wrapped up the demo.

Started writing up the final presentation.

Week 8: Deadline for a great experience

Over the journey of these 8 weeks, I've learned a lot in this REU and gotten to work outside of my comfort zone. During this week, I focused on preparing the presentation and wrapping up the research papers.

Final Report
