
Immersive Remote Telepresence and Self-Avatar Project

Sonia Birate, University of Virginia
Oyewole Oyekoya, CUNY Hunter College

Week One

This week, in addition to peer bonding, a Hunter College tour, and an introduction to ParaView, I concentrated primarily on finalizing my research proposal. Dr. Wole and I narrowed down a project that will explore the feasibility and effects of controlling avatars in Virtual Reality using real facial expressions and eye movements from individuals. The goal is to investigate the realism and believability of avatars, particularly when another individual’s facial expressions are mapped onto that avatar. By properly mapping facial expressions and eye movements onto the avatars, we seek to help create a more realistic and captivating VR experience that closely mirrors real-life interactions. After the facial expressions are mapped, does the avatar retain its believability, especially to individuals familiar with the person being represented? Overall, we discussed the game plan, which mostly comprises using the Reallusion software as well as a possible user study.

Week Two

I dedicated my efforts to acquainting myself with Reallusion, with a particular focus on its Headshot feature, as depicted below. While attempting to recreate an avatar character, I encountered some challenges in capturing every detail accurately, especially the eyes. Nonetheless, I considered this a preliminary software test, so I remain unfazed by the outcome. Concurrently, I commenced work on my abstract and literature review, successfully locating ten relevant sources to incorporate into the research paper’s related work section. Additionally, Trinity and I went to see the new Spider-Man movie for fun, and we both really enjoyed it.

Remaking an avatar character through the Headshot feature.
 
 
Week Three

This week has proven to be quite eventful. I created a remarkable avatar resembling myself using the Headshot plugin in Character Creator. However, perfecting its resemblance required careful adjustments and a significant amount of time. It dawned on me that even the most subtle nuances, like a delicate play of shadows on one’s face, can profoundly influence the avatar’s resemblance, and I am still considering reworking my avatar to achieve a more accurate depiction. Additionally, I immersed myself in the LIVE Face app, mapping a range of expressions onto my avatar. This hands-on experience has given me a more comprehensive understanding of my project, fostering a sense of both growth and satisfaction. I also worked on my methodology. Next week, I hope to start mapping facial expressions from different individuals onto my avatar.
 
 
 
Week Four

This week, Dr. Wole, Trinity (a fellow summer researcher), and I had our expressions mapped onto my avatar. To test whether people could distinguish between the three versions, we conducted a small demonstration during the midterm presentation. It was an intriguing experience because most individuals had difficulty discerning the differences. Interestingly, while performing the facial mapping, I observed that Trinity’s facial expressions appeared more natural, despite hers being considered the unfaithful representation. I successfully captured seven expressions (neutral, happy, sad, surprise, anger, disgust, fear) from both the volunteers and myself, which were then mapped onto my avatar. In the upcoming week, I intend to replicate and enhance the research demonstrations using better pictures and videos. Additionally, I plan to create a Google Form that should be operational by Friday.
 
 
 
The image below shows Dr. Wole, Trinity, and me mapping our expressions using the LIVE Face application on my iPhone. I was avatar A, Trinity avatar B, and Dr. Wole avatar C (shown here with our sad expressions).

Week Five

This week, I re-recorded the avatar and individual videos to replicate and improve on the study demonstrations, and I also worked on a survey draft. We opted to re-record the videos once more because the previous individual videos captured on my iPhone showed a wireframe overlay, which Dr. Wole did not prefer. As a result, next week I will re-record the avatar videos as well as the individual iPhone videos, and finalize my survey so I can send it to participants and collect data for the user-study portion of the research.
 
Week Six

Over the course of this week, my primary focus was on enhancing the quality of the videos required for the survey. The main undertaking was re-editing a substantial number of videos, 42 in total, evenly split between recorded avatar clips and individual clips captured on iPhones. To ensure a seamless user experience, I segmented these videos into shorter, more digestible clips of approximately 3 to 4 seconds each. The clips were then uploaded to YouTube, which made it easy to embed them in the survey so respondents could conveniently view and respond to the video content; a rough sketch of how this kind of trimming could be scripted appears below. Subsequently, I created a survey draft incorporating all 42 clips, using a forced-choice format that prompts respondents to match each individual to the avatar displaying their facial expressions. We intend to send the survey out to participants next week.
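For illustration only, here is a minimal Python sketch of how clips like these could be trimmed with ffmpeg (assuming ffmpeg is installed; the file names, start times, and durations are hypothetical, not the actual editing workflow I used):

# Minimal sketch: cutting short clips out of a longer recording with ffmpeg.
# Hypothetical file names and timestamps; assumes ffmpeg is on the PATH.
import subprocess

def trim_clip(source, start, duration, output):
    """Cut a duration-second clip from source, starting at start seconds."""
    subprocess.run(
        ["ffmpeg", "-y",            # overwrite the output file if it exists
         "-ss", str(start),         # seek to the clip's start time
         "-i", source,
         "-t", str(duration),       # keep only this many seconds
         "-an",                     # drop audio; only the face matters here
         output],
        check=True,
    )

# e.g. one ~4-second clip per expression, recorded 10 seconds apart
expressions = ["neutral", "happy", "sad", "surprise", "anger", "disgust", "fear"]
for i, name in enumerate(expressions):
    trim_clip("avatar_A_session.mp4", start=i * 10, duration=4,
              output=f"avatar_A_{name}.mp4")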

Week Seven

During this week, I completed the design of my survey and distributed it to my REU cohort, mentors, and other potential participants. As of Sunday, I have received 20 responses, all of which are valid and usable for analysis. I also dedicated time to the user study section in Overleaf. Moving forward, my next steps involve data cleaning and analysis, along with defining the types of data and their respective categories. I am still determining which statistical tests to employ for the analysis; one candidate is sketched below. Additionally, I aim to promptly finalize the results and analysis section in Overleaf.
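Since I have not settled on the tests yet, the following is only an illustrative sketch: with forced-choice matching among three avatars, one natural candidate is a binomial test of the correct-match count against the 1/3 chance rate (using SciPy; the counts below are hypothetical placeholders, not my data):

# Sketch: does the correct-match rate on one clip beat chance?
# Hypothetical counts; assumes a 3-option forced choice, so chance = 1/3.
from scipy.stats import binomtest

n_respondents = 20   # responses received so far
n_correct = 13       # hypothetical number of correct matches on one clip
chance = 1 / 3       # three avatars to choose from

result = binomtest(n_correct, n_respondents, chance, alternative="greater")
print(f"correct-match rate = {n_correct / n_respondents:.2f}, p = {result.pvalue:.3f}")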

Week Eight

I completed the results and analysis section of my paper and obtained a graph displaying the survey results. I also developed a PowerPoint presentation of my results, which I shared with the team. Overall, I am on schedule to submit to SIGGRAPH Asia. The chart below, generated from my survey responses, shows that the unfaithful representations were matched correctly somewhat more consistently than the faithful representation.
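As an illustration of how a chart like this could be drawn with matplotlib (the accuracy values here are hypothetical placeholders, not my actual survey numbers; per Week Four, avatar A is the faithful representation):

# Sketch: bar chart of correct-match rates per avatar.
# Hypothetical proportions only, chosen to mirror the trend described above.
import matplotlib.pyplot as plt

avatars = ["Avatar A (faithful)", "Avatar B (unfaithful)", "Avatar C (unfaithful)"]
accuracy = [0.55, 0.65, 0.62]  # hypothetical proportions of correct matches

plt.bar(avatars, accuracy, color=["tab:blue", "tab:orange", "tab:orange"])
plt.ylabel("Proportion of correct matches")
plt.ylim(0, 1)
plt.title("Expression-matching accuracy by avatar")
plt.tight_layout()
plt.savefig("match_accuracy.png", dpi=150)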

 

Overall, I loved this summer and doing research at Hunter so much. I would do it all over again if I could. <3

2023 VR-REU students’ dinner

Final Paper:
Sonia Birate, Trinity Suma, Kwame Agyemang, and Oyewole Oyekoya. 2023. Mapping and Recognition of Facial Expressions on Another Person’s Look-Alike Avatars. In SIGGRAPH Asia 2023 Technical Communications (SA Technical Communications ’23), December 12–15, 2023, Sydney, NSW, Australia. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3610543.3626159 (pdf)
