Immersive Remote Telepresence and Self-Avatar Project

Aisha Frampton-Clerk, CUNY Queensborough Community College

Week 1:

I began testing some faces in the Reallusion software, first trying my own and then my boyfriend's, to see how it handled different lighting in the original headshot images and different features like facial hair. My first task next week will be to work on styling: downloading hair packages and learning to manipulate them. These tests have given me a better idea of the scope of the software and produced some interesting results to analyse, which will help shape the future of my study.

This first encounter with Reallusion has helped me understand what kind of headshot makes the best self-avatar. When I start working with photographic headshots next week, I will be sure to consider the lighting and angle of each image.

Week 2:

This week I focused on finding celebrity source images to style in the Reallusion software. I first had to look for websites that provide royalty-free/copyright-free images. Others had recommended Flickr, so I chose celebrities with a good range of images available there. The image selection process took longer than I expected: as I tested images in the Reallusion software, I found that the quality of the images had to be very high, and the angle of the face mattered. Images with a celebrity smiling or with hair over their face were difficult for the software to decipher. After finding the right pictures, I put them into the Reallusion software to begin styling, using the Smart Hair content pack to create hairstyles that match each celebrity's aesthetic so they are as easy to identify as possible. I am trying to work out how I can make more custom hairstyle and clothing packs so the characters are as recognizable as possible.

Next I will be looking at how I can animate these characters, specifically facial expressions and speech.

Week 3:

I have been importing my characters into iClone 7. I recorded a short voice memo and uploaded it to iClone 7. While the software generated automated lip movement aligned to the words, I had to tweak it so it fit the speech better. This included adjusting the facial expressions, like moving the eyebrows to match cadence and tone changes in the voice recording. As seen in the Face Key tab, selected polygons can be moved and matched to different sections of the speech to direct facial movement.

Next I am going to look for recordings of the celebrities talking. I am going to look for ones that have video, not just audio, so I can look closely at their facial movements and model them. I also want to begin working with larger, expressive movements over the whole body.

Week 4:

This week I have been continuing to make characters and working on making them as realistic as possible. I have had some issues working with images of Black celebrities: often the software cannot pick up highlights on the face when it is selecting the color for the rest of the body. To work around this I have been selecting skin tones by hand to get a more accurate representation. Finding Black hair textures has also been difficult, as they don't come with the program; I have found that in some cases layering different hair pieces from the Smart Hair content pack gives a thicker effect. I have also had to change some of the celebrities I chose, as they did not have enough images for me to work with. I have to test several pictures before I find one that gives an avatar that looks like the celebrity, but now that I have the right images, styling has become much easier.

Here is a before and after of Will Smith with a better original headshot and styling:

Week 5:

I have been watching tutorials on how to add facial expressions and emotions to Reallusion characters in Mixamo. Having previously worked with facial expressions exclusively in iClone 7, I am excited to see how the software differs. I also want to work with the camera plug-in function.

I am also working on creating non-celebrity headshots that will not be familiar to subjects. With these, styling is much easier, as I have more control over the original images.

I have also been looking for more papers similar to my topic to use as a basis for my own. Reading these papers in greater depth has given me a lot of ideas about the features that contribute to realism and how those features can be investigated. So while it has been beneficial for understanding how to construct a research paper, it has also given me a better idea of what makes virtual reality feel real.

Week 6:

I have been looking for the best way to add facial animation to characters. Live face motion offers the most customisation, as it can copy any expression you make; however, it requires much more adjustment than face puppeting. The smile is often creepy and unnatural because the upper lip area cannot be selected and altered on its own. Luckily, both are easy to pick up and work with, so I will be able to record audio and use the AccuLips function to automate the lip movement.

Week 7:

I have been putting together videos of two avatars with audio, ready for the questionnaire. I made four variations of each avatar: first a stationary image of the character, then a video with audio and lip movement, next a video that also includes facial expressions, and finally a video with full body movement. All videos have the same audio accompaniment so as not to distract from the avatar.

https://youtu.be/njViZf5UKXY

I have also finished my survey, which asks participants questions about the videos. I will collect the results over the next weekend.

Week 8:

This week I analysed the responses to my study and added the results to my paper. I completed the survey with 25 responses. I found that eye tracking had a huge effect on realism, as the second avatar (lip movement only) was consistently ranked the least realistic and most unsettling. I was able to draw some interesting conclusions about the importance of movement when creating virtual characters, and I added figures that illustrate this to my paper and presentation.
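To give a sense of the kind of summary involved (this is a minimal sketch, not my actual analysis script; the CSV file and column names are hypothetical stand-ins for the real survey export), a few lines of Python/pandas can compare mean realism ratings across the four avatar conditions:

```python
# Minimal sketch: compare realism ratings across the four avatar
# conditions. "survey_responses.csv" and the column names are
# hypothetical stand-ins for the real survey export.
import pandas as pd

conditions = ["static_image", "lip_movement", "facial_expressions", "full_body"]

# One row per participant, one realism rating per condition.
df = pd.read_csv("survey_responses.csv")

# Mean and standard deviation per condition, sorted from most to
# least realistic on average.
summary = df[conditions].agg(["mean", "std"]).T
print(summary.sort_values("mean", ascending=False))
```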

Final report was submitted and accepted as a 4-page short paper at VRST 2022:
Aisha Frampton-Clerk and Oyewole Oyekoya. 2022. Investigating the Perceived Realism of the Other User's Look-Alike Avatars. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST '22), November 29 - December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3562939.3565636
