
Category Archives: VR-REU 2022

Arab data bodies – Arab futurism meets Data Feminism

Mustapha Bouchaqour, CUNY New York City College of Technology

Week 1: Getting to know my team and the project’s goal.

I have joined Professor Laila in working on this project. The project is framed as a story that reflects what has happened in Arab countries since the Arab Spring uprisings began in 2011, which the story takes as its ground zero. It is set in an Arab futurist world in which the history of the 21st century is one where data and artificial intelligence have created “data bodies” (DB). A hundred years from now, individuality is created out of data. Human and non-human subjectivities can be born solely from data.

The idea, then, is to develop a game. This game uses real data from early 21st-century uprising social movements – activating the 2011 Arab Uprising as ground zero – to create human and non-human agents called data bodies. This week's goal was to make sense of the data collected and to get to know the team I am working with, along with the blueprint we should design as the foundation for developing the game.

Week 2: Analyzing data using NLP, with a first basic design in Unity 3D

My group is still working on developing a blueprint that will serve as the basic foundation for the game. The unique final product I am trying to deliver is centered on two ideas: the game challenges power, and the data provided is categorized into emotional, experiential, and historical data (the 2011 Arab uprising). Bridging the gap between analyzing the data and implementing the game in Unity 3D is what I am working on right now. I am in the process of analyzing data gathered between 2011 and 2013. I will be using natural language processing (NLP) and designing the basic animation needed for the first stage.

Week 3: Deep dive into data

The dataset is held in a MySQL database. The data is split across several tables, as follows:

  • random key
  • session Tweet
  • User
  • Tweet
  • Tweet Test
  • Tweet URL
  • URL
  • Tweet Hashtag
  • Hashtag
  • Language
  • Session
  • Source

Based on the UML diagram, there are three independent tables: Language, Session, and Source. They have no direct connections in the UML view, although I believe there are some intersections across all the tables in the database; the way the data was collected may explain this. The remaining tables intersect in more interesting ways. The Tweet table has around six connections; in other words, it is connected to six tables: random key, session tweet, user, tweet test, tweet hashtag, and tweet URL.

The ‘tweet’ table glues everything together. It has the following columns:

  • twitter_id # I believe this twitter_id is also valid for the Twitter API, but I never tested to see if it was

  • text
  • geo # the geo data is PHP-serialized GeoJSON point data (I believe lon/lat); use a PHP deserializer to read it (see the sketch after this list)
  • source
  • from_user_id
  • to_user_id
  • lang_code
  • created_at
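
As a quick illustration, here is a minimal Python sketch of decoding that geo column outside PHP, assuming the third-party phpserialize package; the sample payload and the lon/lat ordering are made up for illustration rather than taken from the actual database.

```python
# Minimal sketch: decode a PHP-serialized GeoJSON point from the `geo` column.
# Assumes the third-party `phpserialize` package (pip install phpserialize);
# the sample payload below is illustrative, not real data from the database.
import phpserialize

sample_geo = (
    b'a:2:{s:4:"type";s:5:"Point";'
    b's:11:"coordinates";a:2:{i:0;d:36.7;i:1;d:37.1;}}'
)

point = phpserialize.loads(sample_geo, decode_strings=True)
# PHP arrays come back as dicts keyed by index, so rebuild the coordinate list.
coords = [point["coordinates"][i] for i in sorted(point["coordinates"])]
print(point["type"], coords)  # e.g. Point [36.7, 37.1] (order assumed lon, lat)
```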

The ‘user’ table has the following:

  • user_id
  • username
  • profile_image_url # many of these are now broken, but some can be fixed by just modifying the hostname to whatever Twitter is using now

The ‘hashtag’ table has the following:

  • hashtag_name
  • Definition # these definitions were curated by Laila directly
  • Related_Country
  • Started_Collecting
  • Stopped_Collecting
  • hashtag_id

The ‘url’ table has the following:

  • url_id
  • url

You can look up a tweet's user info by INNER JOINing the tweet table with the user table on the from_user_id column of the tweet table.

Because tweets and hashtags, and also tweets and URLs, have many-to-many relationships, they are associated by INNER JOINing on these association tables (a query sketch follows this list):

  • tweetHashtag
  • tweetUrl
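
As a rough sketch of these joins, here is a minimal Python example using mysql-connector-python. The connection details are placeholders, and the join-key column names on the association table are my assumptions about the schema rather than confirmed names.

```python
# Minimal sketch of the joins described above, using mysql-connector-python.
# Connection details are placeholders; the association-table column names
# (tweet_id, hashtag_id) are assumptions and may differ in the real schema.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="reu_user", password="...", database="rshief"
)
cur = conn.cursor()

# Tweet -> user info via from_user_id.
cur.execute(
    """
    SELECT t.twitter_id, t.text, u.username
    FROM tweet AS t
    INNER JOIN user AS u ON u.user_id = t.from_user_id
    LIMIT 10
    """
)
for twitter_id, text, username in cur.fetchall():
    print(twitter_id, username, text[:60])

# Tweet -> hashtags via the tweetHashtag association table (many-to-many).
cur.execute(
    """
    SELECT t.twitter_id, h.hashtag_name
    FROM tweet AS t
    INNER JOIN tweetHashtag AS th ON th.tweet_id = t.twitter_id
    INNER JOIN hashtag AS h ON h.hashtag_id = th.hashtag_id
    LIMIT 10
    """
)
for twitter_id, hashtag_name in cur.fetchall():
    print(twitter_id, hashtag_name)

cur.close()
conn.close()
```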

In addition to this, an NLP model was developed to analyze the data and prepare the pipeline needed for Unity 3D.

A simple UML model was built to check the relationships between the tables.

Week 4: Storytelling using the dataset from R-Shief.

My team's ultimate goal is to create a virtual reality experience that projects the story behind the data. This is a story set in the future that locates the 2011 Arab Uprisings as the birth of the digital activism we witnessed grow globally throughout the twenty-first century—from Tunis to Cairo to Occupy Wall Street, from 5M and 12M in Spain to the Umbrella Revolution in Hong Kong, and more. The player enters a public mass gathering brimming with the energy of social change and solidarity. The player has from sunrise to sunrise to interact with “data bodies.”

However, given the short time I have and the deadline for coming up with a solid final product, I was guided by my mentor, Professor Laila, to work on the following:

1 – Develop a homology visualization using the tweet data from August 2011 – #Syria

2 – Distribute the tweet data over several characters so we can see how the data translates into emotional motions, including but not limited to: anger, dance, protest, read, etc.

Week 5: Creating and visualizing network data with Gephi.

I got access to the R-Shief server and used the “tweet” table. First, a nodes file was created by extracting all the user_ids from the tweet table. We assigned each user_id a specific reference or Id and produced a nodes file containing “Id” and “Label” columns. An edges file was created by checking the relationships between user_ids within the “tweet” table; the table contains two fields that capture this relationship, “from_user_id” and “to_user_id”. The edges file also contains several other fields, including the language. A minimal extraction sketch follows the note below.

Note: The data used still meets the same criteria:

  • Tweet contains “Syria”
  • Time period: August 2011
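
Below is a minimal Python sketch of how the nodes and edges CSV files for Gephi could be extracted from the tweet table using pandas and SQLAlchemy; the connection string, the text/date filter, and the exact column handling are illustrative assumptions, not the exact queries run on the R-Shief server.

```python
# Minimal sketch: build Gephi nodes/edges CSV files from the tweet table.
# The connection string and the text/date filter are assumptions for
# illustration, not the exact query used on the server.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("mysql+mysqlconnector://reu_user:password@localhost/rshief")

query = text(
    """
    SELECT from_user_id, to_user_id, lang_code
    FROM tweet
    WHERE text LIKE :pattern
      AND created_at BETWEEN :start AND :end
      AND to_user_id IS NOT NULL
    """
)
tweets = pd.read_sql(
    query, engine,
    params={"pattern": "%Syria%", "start": "2011-08-01", "end": "2011-08-31"},
)

# Nodes: every user id that appears as a source or target, with Gephi Id/Label columns.
user_ids = pd.unique(tweets[["from_user_id", "to_user_id"]].values.ravel())
nodes = pd.DataFrame({"Id": user_ids, "Label": user_ids})
nodes.to_csv("nodes.csv", index=False)

# Edges: from_user_id -> to_user_id, keeping the language as an edge attribute.
edges = tweets.rename(
    columns={"from_user_id": "Source", "to_user_id": "Target", "lang_code": "Language"}
)
edges.to_csv("edges.csv", index=False)
```

The two CSV files can then be imported into Gephi's Data Laboratory as the nodes and edges tables.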

An example of the network data looks like this:

  • Each circle represents a node, which is a user id
  • Edges are the connections between nodes
  • Edge colors represent the language linked to the tweet

Sentiment analysis using the same data from the tweet table:

Comments:

The last graph is much better, allowing us to actually see some dips and trends in sentiment over time. Now all that is left to do is project these changes in sentiment onto the avatars we create using Unity 3D.
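
As an illustration of how a sentiment-over-time series like this could be computed, here is a minimal Python sketch using NLTK's VADER analyzer on a small, made-up set of tweets; the column names, the daily averaging, and the sample text are assumptions about the pipeline, not the project's actual code or data.

```python
# Minimal sketch: daily average sentiment for a set of tweets.
# Uses NLTK's VADER analyzer; the dataframe columns ('created_at', 'text'),
# the sample rows, and the daily resampling are illustrative assumptions.
import pandas as pd
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Illustrative stand-in for tweets pulled from the database.
tweets = pd.DataFrame(
    {
        "created_at": ["2011-08-01 10:00", "2011-08-01 18:30", "2011-08-02 09:15"],
        "text": [
            "Protesters gather peacefully in the square.",
            "Reports of violence against civilians tonight.",
            "Solidarity messages pouring in from everywhere.",
        ],
    }
)
tweets["created_at"] = pd.to_datetime(tweets["created_at"])
tweets["compound"] = tweets["text"].apply(lambda t: sia.polarity_scores(t)["compound"])

# Resample to one average sentiment value per day, ready to plot over time.
daily = tweets.set_index("created_at")["compound"].resample("D").mean()
print(daily)
```

A daily series like this is the kind of signal that could then be mapped onto avatar states in Unity 3D.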

Week 6: Kept working on the research paper and went over ML-Agents in Unity 3D

Basically, this week my entire focus was on Unity. I found many resources on how to implement ML models in Unity 3D. My goal is to distribute the sentiment clusters over the characters I have. In addition, I worked on wrapping up the abstract needed for the research paper.

Week 7: Finished the abstract and kept working on the research paper and ML-Agents in Unity 3D

I finished the research paper abstract along with the introduction. I am figuring out how to implement ML-Agents in Unity 3D and wrapping up the demo.

Started writing up the final presentation.

Week 8:  Deadline for a great Experience

Over the journey of these 8 weeks, I've learned a lot in this REU and gotten to work outside of my comfort zone. During this week, I focused on preparing the presentation and wrapping up the research paper.

Final Report

Immersive 3D Biological Visualization of Proteins and microRNA Using VMD

Olubusayo Oluwagbamila, Rutgers University New Brunswick

Program mentor: Oyewole Oyekoya, Ph.D.     

Project mentor: Olorunseun Ogunwobi, M.D Ph.D.               

Progress Report

Week 1: June 6 – June 10 

This week, I met with Dr. Ogunwobi, studied his work, and drafted my research proposal. I read papers discussing the presence of single-nucleotide polymorphisms (SNPs) on the 8q24 chromosome, the encoding of six miRNAs on the PVT1 locus, as well as the underexpression of miRNA-1205 in prostate cancer. We decided that my role in this project will be to visualize his research findings, and, along with Dr. Oyekoya, concluded that only certain datasets can be visualized on VMD and Paraview.

On Friday, I physically attended the CUNYSciCom Symposium at the CUNY Graduate Center. There, I watched CUNY grad students make two presentations on their research: one for scientists and the other for non-scientists. I was especially intrigued by how most of the presenters were able to simplify their work for audiences from non-scientific backgrounds without watering it down. They employed analogies to form some connection with their audience and linked that connection with their research. This method will come in handy for me, so I took some (mental) notes. I also watched tutorials on Paraview and VMD, and made attempts at visualizing substances on them.

My goal for next week is to collect data from Dr. Ogunwobi and figure out which datasets can be visualized with my tools. In the meantime, I will continue learning visualization on VMD and Paraview.

 

Week 2: June 13 – June 17

On Monday, I sat in on Dr. Ogunwobi’s weekly lab meeting at the Belfer Research Building. I listened to a few of his undergraduate and graduate students present their progress on the project they were working on. Following that, I was introduced to Fayola, program coordinator at the Hunter College Center for Cancer Health Disparities Research (CCHDR). She was hospitable, giving me a tour of the floor and showing me the different labs and lab equipment used in their research.

The data I needed was under the care of one of Dr. Ogunwobi's Ph.D. students who had recently graduated, and there had to be some coordination between her and the current lab students. Because of that, I was unable to access any data this week. I did, however, keep working on VMD and learned some cool tricks. Using the lipase 2w22 as a model, I practiced generating a Protein Structure File (PSF) from a Protein Data Bank (PDB) file. I also learnt how to add mutations to a protein, as well as how to modify the graphical representation of a protein by coloring or drawing method.

During the week, I virtually attended some interesting VR-related presentations. One was a seminar on the Role of Self-Administered VR for On-Demand Chronic Pain Treatment, and the other was a dissertation defense of a Ph.D. nursing candidate. Both presentations contained research on the effects of VR usage on pain, and both research findings demonstrated the positive physical and emotional results VR usage had on patients. This brought to mind the increasing technological advances happening globally, how much the world has changed over the years, and how much the world will change years from now. I find that fascinating, but also ominous. Maybe I watched too many Black Mirror episodes.

My goals for next week are to collect the data from Dr. Ogunwobi’s lab, continue learning VMD, and study other microRNA visualization projects.

 

Week 3: June 20 – June 24

This week, I got access to the data needed for this project. There were a lot of files available (over 6,000!), so I spent a good amount of time sifting through the data and figuring out which ones would be needed for my project. I was able to select files with compatible file types, but I did face some difficulties. I was unable to open these files on either VMD or Paraview, and only got an error message when I tried. My guess is that the problem lies either with the files I have or with my knowledge of VMD/Paraview. Next week, I will test both hypotheses by going back to the lab to further examine these files, while also watching more tutorials on VMD and Paraview.

I also got to work on my research paper this week – I currently have my background/introduction, bibliography and a portion of my methods section complete. I faced some challenges transferring this to the template on Overleaf, and so another goal for next week would be to watch tutorials on using Overleaf.

 

Week 4: June 27 – July 1

This week, I spoke to Dr. Wole about the issues I had last week, and he suggested I find similar files from public databases. From the Protein Data Bank, I was able to find four proteins (or their look-alikes) associated with my project: Aurora Kinase A, FRYL, Human Neuron-Specific Enolase-2 and Notch Homolog 2 N-Terminal-Like Protein A & B. I visualized them on VMD, mutated them, and compared the mutated structures with the originals. I was hoping to be able to visualize microRNA-1205, or at least the PVT1 locus on chromosome 8q24. Unfortunately, because these molecules are non-protein, and because the Protein Data Bank only contains information about proteins, I could not visualize them. I searched the web for other open-source databases and other visualization software. I found a Nucleic Acid Database by Rutgers University (shoutout) and an RNA visualization software called RNA Artist. I could not find any microRNA file on the NAD. I tried downloading other files (RNA, DNA) from the NAD and opening them up with RNA Artist, but I kept getting error messages. Next week, I will look more into this.
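
For reference, downloading entries from the Protein Data Bank can also be scripted. Below is a minimal Python sketch that fetches a PDB file by ID from the RCSB download URL so the file can then be opened in VMD; the ID used is just a placeholder example, not necessarily one of the structures from this project.

```python
# Minimal sketch: download a PDB entry from RCSB so it can be opened in VMD.
# The ID below is a placeholder example; replace it with the entry of interest.
import urllib.request

pdb_id = "1MQ4"  # placeholder ID, not necessarily one of this project's proteins
url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
out_path = f"{pdb_id}.pdb"

with urllib.request.urlopen(url) as response, open(out_path, "wb") as fh:
    fh.write(response.read())

print(f"Saved {out_path}; open it in VMD with: vmd {out_path}")
```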

Since this week marked the end of the first half of this program, my peers and I each made our mid-term presentations on Friday. I had fun putting my PowerPoint slides together and breaking down the context of my project. Dr. Ogunwobi and I are the only ones in the entire program from a Biology/Genetics background, so I enjoyed the challenge of explaining gene expression to Computer Science, Engineering and Art professionals. I also found the other projects my peers are working on interesting, and I loved how much progress we have all made on our individual projects. I am looking forward to making more strides in the second half of this program.

 

Week 5: July 4 – July 8

The beginning of this week was a holiday, so I spent the first couple of days exploring the city. I also spent some time exploring the Nucleic Acid Database and the RNA Artist software. Unfortunately, I couldn't find anything that would be useful for my project, at least not in the next four weeks. They do seem to be interesting visualization tools, however, so I will keep them in mind for future projects. This week, I also got a chance to use some VMD extensions. I used the movie maker extension to make both a single-frame movie and a trajectory movie. I did some more digging, and I found a paper that talked about using other VMD extensions for RNA visualization, such as the NetworkView extension/plug-in. I hope to explore this next week.

 

Week 6: July 11 – July 15

This week, I checked out the NetworkView extension and, unfortunately, it doesn't have the features I would have liked for my project. Aside from that, I made more VMD videos using different graphic representations. Here's a close-up video of the Aurora Kinase A protein, and another one of its mutant. I liked this representation because it shows the missing bonds and molecules in the mutant structure. Also, it's a visually appealing representation, which would be really interesting to view in VR.

 

 

Week 7: July 18 – July 22

This week, I worked on visualizing the other proteins associated with my project, similar to the videos I made of Aurora Kinase A. So far, I am done with the visualization aspect of my project, and the next step is to convert these into VR-compatible formats.

 

Week 8: July 25 – July 29

This was the final stretch of the program. In this week, I concluded my project by refining the visuals I had created and checking out the VR experience. Due to the technical obscurity of VMD, I was unable to directly export my visuals with the VMD VR extension. I was, however, able to work around this and display my visuals through Google Cardboard headsets. While this format was less immersive than I had hoped, it did involve virtual reality. I also completed my research paper and submitted it for publication. On the last day of the program, I had the opportunity to present my work spanning all eight weeks of the program. Here are some 2D versions of the 3D visuals I presented:


My project had its challenges, but it was overall a fulfilling introduction to 3D biological visualization. I am grateful to Dr. Wole for organizing and enabling this project, and to my mentor, Dr. Ogunwobi, for trusting me with his work. VMD appears to be very useful software with numerous applications. I hope to further explore it independently over the next four weeks.

Final Report

Immersive Remote Telepresence and Self-Avatar Project

Aisha Frampton-Clerk, CUNY Queensborough Community College

Week 1:

I began testing some faces in the Reallusion software, first trying my own and then my boyfriend's to see how it handled different lighting in the original headshot images and different features like facial hair. My first task next week will be to work on styling: downloading hair packages and learning to manipulate them. I think these tests have given me a better idea of the scope of the software and gave me some interesting results to analyse and help shape the future of my study.

This first encounter with Reallusion has helped me to understand the qualities of headshot that make the best self-avatars. When I start working with photographic headshots next week I will be sure to consider the lighting and angle of the image.

 

 

 

Week 2:

This week I focused on finding celebrity source images to style in the Reallusion software. I first had to look for websites that provide royalty-free/copyright-free images. I found that others had recommended Flickr, so I chose celebrities that had a range of images in its database. The image selection process took longer than I thought. As I tested images in the Reallusion software, I found that the quality of the images had to be very high, as did the angle of the face. Images with a celebrity smiling or hair over their face were difficult for the software to decipher. After finding the right pictures, I put them into the Reallusion software to begin styling. I used the Smart Hair content pack to create hairstyles that match each celebrity's aesthetic so they are as easy to identify as possible. I am trying to work out how I can make more custom hairstyles and clothing packs so the characters are as recognizable as possible.

Next I will be looking at how I can animate these characters, specifically facial expressions and speech.

Week 3:

I have been importing my characters into iClone 7. I recorded a short voice memo and uploaded it to iClone 7. While there was automated lip movement and alignment to the words, I had to tweak it so it fit better with the words. This included making adjustments to the facial expressions, like moving eyebrows to match cadence and tone changes in the voice recording. As seen in the Face Key tab, selected polygons can be moved and matched to different sections of the speech to direct face movement.

Next I am going to look for recordings of the celebrity talking. I am going to look for ones that have video, not just audio, so I can look closely at their facial movements to model them. I also want to begin working with larger expressive movements over the whole body.

Week 4:

This week I have been continuing to make characters and working on making them as realistic as possible. I have had some issues working with images of Black celebrities. Often the software cannot pick up highlights on the face when it is selecting the color for the rest of the body. To work around this I have been selecting skin tones by hand to try and get a more accurate representation. Finding Black hair textures has also been difficult as they don't come with the program. I have found in some cases that layering different hair pieces in the Smart Hair content pack gives a thicker effect. I have also had to change some of the celebrities I chose as they did not have enough images for me to work with. I have to test several pictures before I find one that gives an avatar that looks like the celebrity, but now that I have the right images it has made styling much easier.

Here is a before and after of Will Smith with a better original headshot and styling.

Week 5:

I have been watching tutorials on how to apply facial expressions/emotions to Reallusion characters in Mixamo. Having previously worked with facial expressions exclusively in iClone 7, I am excited to see how the software differs. I want to work with the camera plug-in function as well.

I am also working on creating non-celebrity headshots that will not be familiar to subjects. With these, styling is much easier as I have more control over the original images.

I have also been looking for more papers that are similar to my topic to use as a basis for my paper. Reading these papers in further depth has given me a lot of ideas about the features that contribute to realism and how these features can be investigated. So while it has been beneficial for understanding how to construct a research paper, I have also gained a better idea of what makes virtual reality feel real.

Week 6:

I have been looking for the best way to add facial animation to characters. Live motion has the most customisation as it can copy any expression you make. However, it requires much more adjustment than face puppeting. The smile is often creepy and unnatural as the upper lip area cannot be selected and altered on its own. Luckily, however, they are both easy to pick up and work with, so I will be able to record audio which will use the AccuLips function to automate the lip movement.

Week 7:

I have been putting together videos of two avatars with audio, ready for the questionnaire. I made four variations of the avatar: first, a stationary image of the character; then a video with audio and lip movement; next, a video including facial expressions; and finally, a video with full-body movement. All videos have the same audio accompaniment so as not to distract from the avatar.

https://youtu.be/njViZf5UKXY

I have also finished my survey, which asks participants some questions about the videos. I will collect the results over the next weekend.

Week 8:

This week I was analysing the responses to my study and adding these to my paper. I completed the survey with 25 responses. I found that eye tracking had a huge effect on realism, as the second (lip movement) avatar was consistently ranked the least realistic and most unsettling. I was able to draw some interesting conclusions about the importance of movement when creating virtual characters. I added figures that illustrate this to my paper and presentation.

Final report was submitted and accepted as a 4-page short paper at VRST 2022:
Aisha Frampton-Clerk and Oyewole Oyekoya. 2022. Investigating the Perceived Realism of the Other User's Look-Alike Avatars. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST '22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3562939.3565636 – pdf

VR as a Learning Tool for Students with Disabilities

Nairoby Pena, Cornell University

Week 1: 

Dan and I met a couple of days before the REU began and we decided that we should start from a broad standpoint and begin to narrow in as time progressed. My first “assignment” was to research physical disabilities, learning disabilities, and mental health (emotional) issues, and the impediments that students face in higher education when they have these disabilities. During this week’s meeting, we became increasingly aware that with just 8 weeks it is unfortunately not possible to have a productive project if we take the angle of aiding mental health issues with VR. We decided that the next “assignment” will be to look for two core classes that each student must take at Hunter and Cornell (four in total) and look for the issues or impediments that a student from each of the disability groups may face in these classes (with a heavy focus on physical and learning disabilities). I will be starting on this assignment to wrap up the week as well as beginning to explore Unity, and starting to think about developing the documentation since I found one or two papers that could be a part of that. By next week we should be working on a concrete methodology for creating or enhancing VR as a tool for the impediments that students with disabilities face.

Week 2: 

This week I presented to Dan about core classes at both Cornell and Hunter which all students have to take, and I talked about the impediments that students with disabilities might face in taking those classes. For example, Cornell has a swimming requirement in order to graduate, and we talked about the obvious point that not every person is physically able to jump in the pool and swim. In addition, at Hunter, there is an English course that every student is required to take which involves heavy writing and reading, which students with dysgraphia or dyslexia respectively would probably struggle with. There is also an astronomy class that students can choose to take as a part of their core curriculum, and unfortunately, a disability might impede a person from taking the course because of the lab component, in which 9 out of 13 experiments are self-guided on a computer. Dan brought to my attention that every lab has very high tables that are made for people to be standing at, and if we think about someone who uses a wheelchair, this could be difficult to navigate as they would have to be looking up at their peers, making it harder to collaborate. Dan and I also spoke about the research paper, where he guided me in understanding what I could mention in my abstract. We noticed that we still do not have a clear methodology, but we did talk about the fact that VR is heavily used right now in virtual museum visits and how this is a huge help for people with many different types of disabilities, so this might be where the project is headed at this point.

I began to look into Unity and attempted to learn about all of its components that had to do with VR. I built my own VR space and then realized that I didn’t have a headset to try what I created; I also looked into building a microgame. 

Week 3: 

This week I focused on a deep dive into dyslexia, what it is and how it affects students. I looked at what impediments students might face when they have this disability and have to take English 120 at Hunter. I also researched how VR could or could not help students with dyslexia. Most research concludes that more scientific research is needed to prove that VR can help; however, theoretically, it can, and there are VR programs that exist for students with dyslexia. After meeting with Dan, we decided that we should not focus on dyslexia because it would be difficult to come up with a project to help in minimal time. Since there has to be a visualization aspect for the outcome of the project, I received clarity from Dr. Wole and decided that I will be creating a VR “game” or program where autistic students can engage in social modeling. This can include modules on greeting people, engaging in conversation, and how to conduct oneself in various social environments. Since it is week 3 already, I will have to work rigorously to develop this. So far I have gone into Unity and used the assets to build the virtual environment. I will be working to develop a user interface for at least three different social situations and hope to test at least one of them by the end of next week.

We had a paper writing zoom session this week where I was able to get a draft of my abstract and begin my introduction. I will have to go in and edit this as my focus has shifted from disabilities in general to developmental disabilities (more specifically autism).

Week 4: 

Last week's idea about social modeling has been modified so that it can get done in the amount of time that is left. I will now be using a museum asset in Unity and putting a human-like avatar in the virtual space. The avatar will guide the person wearing the VR headset through the museum, and what the avatar does will play on a loop. (Thank you to Dr. Wole for the idea). Applying prior research to this idea, I hope to implement the general outline for social skills teaching packages. This means that the program will allow for repetition of the target task (visiting a museum), verbal explanation of the social skill, practice of the skill in realistic settings (in virtual reality), and role-play of target behaviors (practicing the behaviors in the virtual reality program without the guided audio).

This week I worked heavily on my paper. I was able to get through most of the introduction, related works, and some of the methodology. I will continue to add to it as the project moves along. Finally, I developed a slides presentation for the midterm check-in.

Week 5: 

This week I was able to get the museum asset in Unity. I further developed the idea of social modeling through a recollection of my experience working at the Intrepid Museum with the Access program. One of the events that the program provided was early morning openings for students with disabilities, with a large portion of the population having autism. I decided that I should try to use that as the model in the museum asset. Below is a screen recording of what I have so far. I was able to edit the presentation from what the original asset had to what I wanted it to say. I hope to add audio of my voice so that everything isn't reading-based, and I will also be trying to make it possible for the user to look through the museum itself; after this is done I will deploy it to the headset and make edits as needed.

I worked on the citations in my paper and revised it as my ideas developed. I am starting to think about what my results section will look like. We all visited the CUNY ASRC and got to take a tour while learning about their initiatives.

Week 6:

This week I had technical difficulties with adding audio to my visualization portion. Kwame and I spent hours trying to get the code right so that certain audio clips would play at each slide; however, we could not get it to work. I also looked for existing scripts that might help, but they did not work either. I created a backup plan to have the audio play on a loop in its entirety, though this would cause people to hear different audio portions at different points in the experience.

I sent my mentor Dan my paper for feedback and began to work on refining it, and next week I hope to begin to finalize it. In addition, Dan gave me the idea of highlighting some of the other existing VR programs in my final presentation so that people understand the extent to which they can be helpful for students with disabilities.

Next week I will be testing my visual aspect from Unity and making any fixes as well as focusing on the paper.

Week 7: 

I got to participate in Talia’s user study this week; it was cool to see how a fellow participant’s visualization aspect came out. Since my project does not have a user study, I focused on what I would write in my discussion for my paper. I met with Dan and he was able to give me more feedback on the paper so that I could continue to work on it. As for the technical difficulties, it seems that I will have to play the looped audio in its entirety due to the diminishing amount of time that is available. Dan and I spoke about the final presentation and what I should include which led me to begin to work on my final slides.

We got to visit the Bronx Zoo on Wednesday where we zip-lined, that was fun! We ended the week with a meeting speaking about how the final week would look and how to prepare.

Week 8: 

I spent the last week submitting my abstract, finishing my paper, and practicing my final presentation. I was able to participate in Kwame’s user study. For the final touch to my visualization portion, I put an animated avatar into the museum to make it seem as though there was a tour guide.

Overall it has been a great learning experience participating in this REU. I am grateful for all that I have learned and all of the obstacles that I was able to surpass in developing this project.

Final Paper

Amelia Roth: The Community Game Development Toolkit

Amelia Roth, Gustavus Adolphus College

Project: The Community Game Development Toolkit–Developing accessible tools for students and artists to tell their story using creative game design

Mentor:  Daniel Lichtman

About Me: I’m majoring in Math and Computer Science at Gustavus Adolphus College in Saint Peter, Minnesota.

Week 1: I began the week by working on several tutorials in Unity to better learn the application and how I can use it to improve the accessibility of the Community Game Development Toolkit (CGDT). When meeting with my mentor earlier this week, we decided that improving the accessibility and functionality of the CGDT was our main goal during these 8 weeks. We would like to shift the creation of visual art and stories from Unity to an interactive in-game experience. Over the next few weeks, I plan to work on the in-game editor.

 

Week 2: This week I was able to use my new Unity skills to start using the toolkit and understanding the scripting behind it. I created a GitHub account and soon, I’ll have access to the code for the CGDT! I met with my mentor at the beginning of this week and we created a to-do list for me over the next couple of weeks. To create an in-game editor, my first steps will be working on selecting objects, moving objects, and most importantly, making sure that any in-game changes will be saved once the user leaves play mode in Unity.

Here’s a bit of art I made in the CGDT, check it out!

 

Week 3: I spent this week working on the code for the CGDT and uploading it to the repository on GitHub so that my additions are documented in the CGDT. I met with my mentor twice this week to get help on the coding needed, and we got quite a bit done! As of now, while in play mode, a user can select an object (which highlights to show it has been selected), move that object in a circle around them, towards and away from them, and up and down. They can also make the object smaller and bigger. Since this took less time than expected, I can move onto the next step, which would be allowing the user to save changes in play mode instead of just in editor mode. I also think it would be helpful to add functions that allow the player to rotate objects on the object's axes, so I'll ask my mentor if he thinks we have time to add this in.

I also began writing my paper this week. The REU held a co-writing session on Tuesday that we plan to keep holding indefinitely, so that we students have time set aside specifically to work on the papers and ask questions in real time if we need help. I've been looking for previous research that relates to my project somehow, and one very interesting thing I found is called the Verb Collective. With the Verb Collective, different verbs such as “to scatter”, “to drop”, and “to spell” have functions attached to each of them, which in turn can call other verbs and their functions. I think it's related to my project in the way that the CGDT is meant to be a storytelling tool. Both the Verb Collective and the Community Game Development Toolkit are interested in exploring VR as a way to see the world with a new perspective.

Week 4: This week hasn't had as many satisfying results as last week, but I'm in the middle of working on several things that should hopefully be done next week. One of the things I'm currently working on is movie textures. Right now, the CGDT has an automatic importer for textures that turns images into usable sprites, but no such script for movie textures. I have learned how to do it manually, though, and if you look closely at the cube in the image below, you'll see it has a video attached of Grand Central Terminal!

Dan and I are also working on saving the changes we make in play mode so that users' hard work doesn't go to waste! Some of the necessary functions are a bit over my head, so Dan is lending a helping hand. I've also started working on some documentation for the CGDT, so that it's easy to find a tutorial for exactly what you're trying to learn how to use.

The writing for the paper is going well, I’ve got a solid related works section and a good start on my introduction. It is also our midterm REU presentation tomorrow, and I’m excited to share the work I’ve done on the CGDT with my fellow REU students!

 

Week 5: As with all things, this project has its ups and downs in terms of how much I get done in a week. This week was one of the slower ones. Saving and loading automatically is turning out to be trickier than expected, and building the CGDT project I have on Unity to my Quest 2 is turning out a whole slew of errors so far. But, the work I’ve done this week has progressed my understanding of these problems, and I feel confident that I can finish them up in week 6. I also created some new documentation for the CGDT this week on downloading and installing Unity, and how to use assets that were originally IRL art in the virtual setting.

In the next week, besides finishing up the lingering tasks of week 5, I plan to adapt the code I've written for moving, rotating and scaling objects so that they can be controlled in VR through joysticks instead of on a keyboard on the computer. Some of the original code of the CGDT might have to be adapted as well, such as Player Movement, which is also done with the keyboard currently. Looking further ahead, once I feel the CGDT has all the implements I'd like it to, I'll test the usability of these functions in a small study. Once all these pieces are in place, I'll be able to finish my paper!

Week 6: Week 6 had one of the most rewarding experiences of this REU so far: figuring out how to make automatic saving and loading work! It was very exciting to leave play mode, enter play mode again, and see the changes I had previously made saved. Even if I restart Unity and reopen the project, the changes remain. In my opinion, this is the most important aspect I’ve added to the CGDT. Without automatic saving and loading, the tools for moving, rotating and scaling objects aren’t very useful. Ideally, I’d love to add an inventory in play mode, so that dragging and dropping objects from the project window isn’t necessary, and a way to delete objects in play mode as well. Both of those things are definitely possible in the time I have left, but finding a way to save those changes as well might end up being beyond the scope of the project.

Week 6 also brought another change of plans. I've been trying to build the CGDT to my Quest 2 so I can work with it on a headset. Unfortunately, I'm still getting a lot of errors. It may have something to do with the new scripts I've added to the CGDT. However, Dr. Wole and I agreed that working on deployment issues, especially when they've taken up a lot of time already, is probably not the best way to spend my remaining weeks of the REU. Although seeing the CGDT on a headset would have been very cool, I actually think working more on the desktop version is truer to the mission of the CGDT. The CGDT is meant to be accessible to students, artists, and non-game developers in general, and a lot more people own computers than VR headsets. At this point, though, I've learned to never say never, so who knows what Week 7 will bring!

Week 7: Success building to the headset! There were a few scripts in the CGDT that were editor-specific and therefore causing the building problems. I was able to remove those scripts and once I did, my scene built to the Quest 2. However, it doesn’t have any of the new capabilities that the desktop version of the CGDT has, so I’ve spent the last couple days figuring out what needs to change for the headset version. I created a new prefab for the CGDT that is VR-specific so that it relies on an OVRCameraRig instead of a Camera. Once this prefab is added, the user is able to fly around in the scene they’ve created, moving forward in the direction they’re facing, and rotating if they wish. I’d also like to move objects with raycasting, same as I did for the desktop version, so I’ve added the raycasting laser, although it isn’t able to grab anything yet.

There was some other great stuff I did this week for the program. I participated in a user study for another student's project, and the whole program went ziplining at the Bronx Zoo together, which was really fun! The deadline for the paper is also coming up quickly, so I've been polishing my abstract and working on the implementation section of my paper. I was recommended a few applications to use to draw some illustrations of what my functions do, so I'll be adding those illustrations into my paper in our final week.

Week 8: The final week! Everything I worked on this week was related to polishing my paper and creating my presentation for the final day of the REU, today. I've learned a lot during this REU, both in terms of programming tools and skills like writing and presenting. I think my presentation went well, and I look forward to putting the finishing touches on my paper today. I wish I could have gotten more done on the VR version of the CGDT, but as this is an 8-week program, I'm really happy with everything I was able to accomplish. Thanks to all of my mentors for making this such a great experience!

Final Report was submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Amelia Roth and Daniel Lichtman. 2022. The Community Game Development Toolkit. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565661 – pdf

Explore Virtual Environments Using a Mobile Mixed Reality Cane Without Visual Feedback

Zhenchao Xia, Stony Brook University

Week 1 – Working update:

This week, after meeting with my mentor on the overall structure and future development direction of the project, I realized that I needed to add a new model to the original project, namely a learning model that uses a laser pen in VR to broadcast location and physical information when interacting with other objects. Since the purpose of our project is to help O&M (orientation and mobility) trainers train blind people, we needed to add a very specific tutorial introduction section. This week, I started creating a new tutorial scene for the new learning model and the original part of the project. In the scene, different objects will be generated in different locations of the room to guide the user through the different modes.

Week 2 – Working update:

This week, I built a scenario that will be used as a user tutorial. In this scenario, the model representing the user is placed inside an irregular room model. The user needs to run the AR and VR programs on the phone and mount the phone on a selfie stick to use it as an exploration tool, like a cane. The user will follow the generated waypoints, exploring the entire structure of the room and finding the exit. During the process, the user will learn how to use the cane, the feedback given when the cane interacts with objects, and the waypoint guidance.

Week 3 – Working update:

This week, I created a simple prototype according to the confirmed development requirements. In this scene, I replaced the human model from the actual project with a small square model. The laser beam shoots forward from the middle of the small square, and when the human body rotates, the laser beam also rotates. When the pointer interacts with an object, the specific information of the object is broadcast. In the following week, after completing the basic functions of the laser beam, I will load it into different scenes of the project for testing.

Week 4 – Working update:

This week, I combined the laser pointer with the original user model and created a gesture menu that turns on/off based on the detected movement of the user's gesture. The laser pointer can interact with any object in the scene and give detailed item attributes and voice-prompt feedback about spatial location information. Taking the direction the person is facing as 0 degrees, when the iPhone mounted on the cane is raised to 45 degrees, diagonally above the person, the gesture menu opens. In the gesture menu, users can switch between cane mode and laser pointer mode, skip/return/re-read voice messages, etc.

(Gesture Menu)

(Laser Pointer)

Week 5 – Working update:

This week, I added all the existing functions to the gesture menu, through which the user can switch to any provided function at any time, including cane mode, laser pointer mode, hint, replay, etc. Considering that the content of the gesture menu may change in different scenarios, I created a base class for the menu, which contains all the basic functions related to it. In the future, we only need to create a script that inherits from the base class for special menus; the menu can then be customized by overriding the special functions.

Week 6 – Working update:

This week, I made a tutorial for laser pointer mode, in which the user is trained on how to open the gesture menu with a special gesture, toggle the current option, confirm the use of the current function, and find targets with complex properties by switching between laser pointer mode and cane mode. Through user testing, I found that overly complex gestures are not easily recognized by the app, and it is difficult for users to open the gesture menu. So I changed the way the user interacts with the device: when the pitch of the user's cane is between 270 and 360 degrees, the gesture menu opens. While the menu is held open, the current option automatically switches to the next item every two seconds. When the user closes the menu, the current option is executed.

Week 7 – Working update:

This week, I worked with my mentor and colleagues to design an experiment to test the app, including the flow of the experiment, the process of collecting data, and the evaluation process for the results. In order to better analyze the data, we decided to upload the important data collected in the experiment, including the user's position, rotation, head movement, etc., to a database called Firebase. Now I am implementing reading the data back from the Firebase database in Unity so that, driven by the stored data, the "user" model moves according to the actions of the real user. This lets us reproduce the experiment at any time, obtain more specific and accurate experimental data, and analyze the user's movement trajectory.

Week 8 – Working update:

This week, I successfully finished the data collection and replay functions, which allow us to get the position and rotation of the user's body, the rotation of the cane, and the user's head. I also designed an informal test to verify the positive effect of my two new features, the laser pointer and the gesture menu. After receiving the instructions for the two new features, users need to switch from the cane to the laser pointer and use the laser pointer to explore the virtual room and build a mental map of the room's layout. Once they finish, they need to reconstruct the mental map on paper. We get the result by comparing the drawings with the actual layout of the virtual room. But due to the limited time, the experiment is not well defined; because of the lack of strategies for exploring the complex virtual room, the users' data are not reliable or as expected. In the future, I will try to improve the design of the experiments.

Final report submitted and accepted as a 2-page paper (poster presentation) at SUI 2022:
Zhenchao Xia, Oyewole Oyekoya, and Hao Tang. 2022. Effective Gesture-Based User Interfaces on Mobile Mixed Reality. In Symposium on Spatial User Interaction (SUI '22), December 1–2, 2022, Online, CA, USA. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3565970.3568189 – pdf

Virtual Reality and Public Health Project: Nutrition Education – Professor Margrethe Horlyck-Romanovsky

Talia Attar, Cornell University

Week One:

We kicked off the VR-REU 2022 program on Monday and convened in the Hunter College Computer Science Department, where Professor Wole, the other participants, and I finally got to meet and introduce ourselves to each other. Via a meet-and-greet style meeting, the other participants and I had the honor of hearing the program mentors explain a bit about their work and their vision for integrating Virtual Reality into what they do. In Professor Wole's VR/AR/MR summer class this week, we learned about hardware and software, 3D geometry, and the basics of writing a research paper using LaTeX – a very useful introduction to a key component of doing research. Finally, as a group, we rounded out the week with an introduction to Paraview, using disk data to explore the breadth of Paraview's capabilities.

In regards to my research project, I met with my mentor, Professor Margrethe Horlyck-Romanovsky, and created a concrete concept for the project. After telling me about her research and the information gaps we currently face in generating a complete understanding of how people interact with their food systems, my mentor and I discussed how Virtual Reality could be used to study this gap. We formalized the necessary features of the Virtual Reality application and planned what the related study may look like. Heading into next week, I am excited to dive deeper into learning Unity and building out my project!

Week Two:

I entered Week Two excited to kick my project development into high gear. With the help of Professor Margrethe and Dr. Wole, I was able to enhance the specifications for my Virtual Reality simulation and create a more detailed vision. I began implementing the simulation, a process that was slow at first as I familiarized myself with the XR features of Unity. However, as the week progressed, I grew more comfortable with this type of development and made headway on the first scene in my project – a city block.

In addition to working on the simulation, I also spent a significant amount of time considering aspects of the study itself. Professor Margrethe, Dr. Wole, and I discussed details from recruiting participants to analyzing produced results, allowing the study to come into clearer view. I was also fortunate to receive valuable and detailed advice around literature reviewing and other aspects of research papers from Professor Margrethe.

The REU members and I ended the week as a group, and Dr. Wole taught us about using Tableau for data visualization. The sample dashboard I created through his tutorial can be found here.

Week Three:

Week three marked an exciting point in the program as I was able to begin deploying my simulation to the Meta Quest 2 VR headset. This was the first time I had gotten to wear a VR headset outside of the demo last week, and it was informative to be able to explore a variety of simulations for an extended period of time. The highlight was certainly successfully building my Unity project directly on to the headset. In regards to the simulation itself, I began a different approach to creating my 3D scene compared to last week in an attempt to enhance the level of detail present. I also began the interactive level of the project by coding the XR rig to follow a fixed, controlled path around the simulation.

In addition to work on my personal study, I joined the other participants in learning a new visualization tool: VMD.

Week Four:

This week I saw the largest progress in my Virtual Reality development process to date. With the help of some carefully selected asset packages from the Unity store, I was finally able to get over the hump of world building and begin implementing more of the user interactions. I successfully completed a draft of the first layer of the world: the city-level view with three food sources. The user is taken on a fixed path walk around the block, with the freedom to move their head to look around. At the end of this walk, a pop-up appears for the user to select where they would like to enter with their laser pointer, and then they are taken on another fixed path walk to the food vendor of their choosing. Upon arriving, the following scene – interior of the store – loads. Developing the interactive UI for this selection step of the process was the largest technical challenge I faced to date, as the Unity UI support was developed for a 2D setting. However, with the help of many (many) YouTube videos and other online resources, I was able to use the Oculus Integration package to adapt the UI features effectively to Virtual Reality. 

                                     

Next week will entail continuing the development flow to build out the next layer of the simulation.

Week Five:

During Week Five, I picked up right where I left off in my last blog post: implementing the “interior” layer of the simulation. This entailed crafting three new scenes and mini “worlds” to represent the green grocer, the supermarket, and the fast food restaurant. Professor Margrethe and I discussed the appropriate foods and information to present in each food source and ended up with a carefully crafted list of what is included. The two main tasks I faced in development were figuring out how to appropriately represent the relevant foods and constructing a logical and clear interface for the user to interact with the food options to simulate a shopping experience. The latter task was challenging in terms of both design and actual implementation, but I ended the week with a solid vision and corresponding code to do so. In Week Six, I will be finishing applying the interactive layer throughout all three food sources and generally cleaning up any loose ends within the simulation. 

The other program participants and I ended the week with a fun field trip to the CUNY Advanced Science Research Center and got to see applications of virtual reality as well as many other interesting and complex ongoing research projects!

Week Six:

Week Six entailed the final push of development of the simulation. One main addition from this week was the creation of a text file log that records statistics about the user's interactions. This will be incredibly useful in gathering detailed results about user behavior within the simulation. Another important development from this week was that many new food items were added as possible options to expand the breadth of choices and potential purchases the user might make. Finally, I added components to provide direction and explanation to the user to enhance ease of use. With these exciting developments, finally running the study with participants using the simulation next week feels promising!

The images below are screenshots taken directly from deployment of the simulation on the Oculus Quest 2. They show the user purchasing interface in two of the food businesses.

            

 

Week Seven:

This week was very exciting because I finally ran the study using the simulation! The week began with final preparations for running the study, which included constructing the survey for people to fill out after the VR experience and addressing any lingering bugs in the simulation. Throughout the week, I was able to recruit 12 participants and administer the VR simulation and survey to each. It was an incredibly rewarding experience to see the outcome of my Unity development process put to use.

I concluded the week by beginning to analyze the results and starting to write them up for the final research paper. Looking forward to next week, the final week of the program, I will be finishing constructing the relevant results and writing my paper, as well as preparing for the final presentation!

Week Eight:

This week marked the final week of the REU program. I spent the bulk of the week completing the short paper to submit to the VRST 2022 conference taking place in Tsukuba, Japan this fall. A large portion of this process was analyzing the results of the study. The simulation and study yielded data around a variety of different factors, such as the decision outcomes of the simulation and the usability score measured from a system usability questionnaire component of the survey. I combined different aspects of the data to generate several key findings around behavioral and decision-making patterns in the simulation. However, the most critical part of this preliminary study was that, mainly supported by the high usability and presence scores, virtual reality shows promise as a tool for studying individual food consumer behavior in a multilevel food environment, and the study findings warrant further research into this application.

The program concluded with a wonderful day of presentations, and I was fortunate to hear about the work done by my fellow REU participants throughout the summer.

Thank you to Dr. Wole for facilitating this program and to my mentor, Dr. Margrethe Horlyck-Romanovsky, for her endless support throughout this process.

Final Report was submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Talia Attar, Oyewole Oyekoya, and Margrethe F. Horlyck-Romanovsky. 2022. Using Virtual Reality Food Environments to Study Individual Food Consumer Behavior in an Urban Food Environment. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565685 – pdf

Virtual Reality and Structural Racism Project

Ari Riggins, Princeton University

Project: Virtual Reality and Structural Racism Project

Mentors:  Courtney Cogburn and Oyewole Oyekoya

Week 1:

This week, after meeting with Dr. Wole to discuss the specifics of the project and brainstorming ideas and research questions to explore, I began writing my project proposal. The proposal discusses the goals and methodology for the project.

This project aims to create an effective virtual reality based visualization that sheds light on the disparities created by structural racism in housing. The visualization will be based on data from different cities within the United States. We will use property value data as well as the racial demographics of the areas as input; this data will be represented as a three-dimensional street or residential area with houses of changing dimensions, where each house's size is proportional to its value over time and color conveys the racial component.

In addition to the project proposal, this week I also downloaded the program Unity and began getting used to it and thinking about how it could work for the project.

Week 2:

My goals for this week were mainly to learn how to use Unity to build the project and to do and summarize some background research on the topic. I downloaded the Unity ARKit plugin and began following some tutorials to learn how to use it. So far, I have managed to make an iOS AR application that uses the phone camera to display the world with an added digital cube.

Below: the digital cube viewed through the phone camera

After discussion with Dr. Wole, the project idea evolved a bit: the residential area will be displayed as an augmented reality visualization that can be viewed through a device as if resting on top of a flat surface such as a table or the ground. The next step, which I am currently working on in Unity, is surface detection so that the visualization can align with these surfaces.
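As an illustrative sketch (using Unity's AR Foundation, which wraps ARKit on iOS; names such as visualizationPrefab are placeholders, and the original Unity ARKit plugin's API differs), placing content on a detected plane can look like this:

```csharp
// Sketch: raycast a screen touch against detected planes and place the visualization.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlacementController : MonoBehaviour
{
    public GameObject visualizationPrefab;   // The residential-area visualization.
    public ARRaycastManager raycastManager;  // Scene also needs an ARPlaneManager.

    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast from the screen touch against detected plane surfaces.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose hitPose = hits[0].pose;
            Instantiate(visualizationPrefab, hitPose.position, hitPose.rotation);
        }
    }
}
```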

In terms of research, I found several relevant sources investigating structural racism within housing. I came across the University of Minnesota’s Mapping Prejudice project, which hosts an interactive map of racially restrictive covenants in Minnesota that limited who could own or rent property. This project provides one example of how data on racial discrimination in housing can be visualized.

Week 3:

This week was spent focusing mostly on the data. I met with Dr. Cogburn and Dr. Wole, and we discussed a more specific view of the visualization. Dr. Cogburn brought up a Brookings Institution report that investigates the devaluation of Black homes and neighborhoods; this report will serve as the jumping-off point for the data of this project as well as a reference for discussion of the topic.

The data used in the report comes from the American Community Survey performed by the US Census Bureau and from Zillow. It will be necessary to find similar census data for this project. We decided that, for now, the project should focus on one geographic area as a case study of the overall inequality. The city I am planning to focus on is Rochester, New York; it was represented in the Brookings report and was shown to have a large disparity in the valuation of Black and White homes.

Week 4:

This week in Unity, I continued working with the ARKit to detect surfaces and display the visualization on them. We discussed the data after running into a roadblock where we did not have access to all of the information we wanted. The Brookings report had not provided the names of the specific towns and areas we found to be comparable, so we could not find data on them individually. However, we are able to use the reported data by changing our visualization a bit: instead of being on a timeline, the houses will be on a sliding scale by the factor of race.

I also gave my midterm presentation this week which helped me solidify my background research for the project, as well as explain it in a clear manner.

Week 5:

This week I was mostly working in Unity. I found a free house asset that works for the project, and I used the ARKit to place it on any detected plane. I also worked on getting a United States map to serve as the basis of the visualization on the plane. We decided to use multiple locations from the Brookings report as case studies, so I am still working on the script that changes the house size in accordance with this data. Now that I have the pieces working, I need to arrange the scene and scale everything, as well as create some instructions for use.

I have also been working on my paper and am currently thinking about the methodology section.

Week 6:

In terms of writing the paper this week, I made a short draft of my abstract and began working on the methods section. I worked in Unity to get the house asset into AR and to write a script that adds the growing animation in the video below. I added an input to the house that dictates the disparity to be displayed through the amount of growth of the house. I also looked into changing the color of the house and having it fade from one color to another. When meeting with my mentors, they suggested that I try some different approaches to the overall visualization, such as adding avatars to depict the neighborhood demographics and changing the color of the house to green or another monetary representation to depict the change in value.
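A simplified sketch of what such a growth-and-fade script could look like (the disparity input, colors, and timing below are placeholders, not the project's actual values):

```csharp
// Illustrative sketch: grow a house and fade its color over a fixed duration.
using UnityEngine;

public class HouseGrowth : MonoBehaviour
{
    [Range(0f, 1f)] public float disparity = 0.5f;  // Drives how much the house grows.
    public float duration = 5f;                     // Seconds for the full animation.
    public Color startColor = Color.white;
    public Color endColor = Color.green;            // e.g. a "monetary" color.
    public Renderer houseRenderer;                  // e.g. the roof's renderer.

    private Vector3 baseScale;
    private float elapsed;

    void Start() { baseScale = transform.localScale; }

    void Update()
    {
        elapsed += Time.deltaTime;
        float t = Mathf.Clamp01(elapsed / duration);

        // Grow from the base scale up to (1 + disparity) times the base scale,
        // and fade the color in sync so both changes finish together.
        transform.localScale = baseScale * Mathf.Lerp(1f, 1f + disparity, t);
        if (houseRenderer != null)
            houseRenderer.material.color = Color.Lerp(startColor, endColor, t);
    }
}
```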

Week 7:

This week, I have been working to get my demo finished. I fixed the shrinking issue with the house and added the color change to the roof, though I still have to sync these two processes. In a meeting with my mentors, we decided that I should focus on completing this one scene instead of working on two, due to the limited time left. We also discussed the background of the scene and things I could add to make it feel more like a neighborhood, as well as labeling and how I could make clear what data the visualization is actually conveying. At this point, my remaining work is finishing this demo and wrapping up everything we discussed and what I've worked on into a presentation.

Week 8:

In the final week, I was mainly focused on preparing for my presentation and finishing up every aspect of the project. I also worked to finish the paper alongside my presentation. In terms of my visualization, I had the case study visualization of one house changing in size and color, but at my mentor meeting we discussed the significance of the color and other possibilities. I ended up making two other versions of the visualization using different colormaps representing the racial make-up of the communities.

Final Report

Diego Rivera: Neural Network models in Virtual Reality

Diego Rivera, Iona College

Project: Neural Network in Virtual Reality through Unity 3D

Mentors: Lie Xie, Tian Cai, Wole Oyekoya

Week 1:

During week one, I researched transformers and read about how to implement Google Cardboard in Unity and get it working. I also researched how PyTorch works. Next week, a Unity scene will be developed so that the neural network models can be implemented in it, and more research and development will be done to have a presentable prototype.

Week 2:

This week, development on the Unity scene was started, and the majority of the visual aspect was finished. I was able to use Unity's new Input System, which allows both an XR controller and a regular console controller to move the object, making it easy to support other platforms. Creating the transformer model is still in progress; more debugging is needed.

The video above showcases the placement model and how the controls work with a controller. An XR controller has been added to the game, but testing must be done to see whether it works and is calibrated correctly. The box is shown rotating on its x and y axes as well as decreasing and increasing in size. Next, UI elements will be added, debugging will continue, and a functional Transformer model will be created in PyTorch.
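For illustration, a stripped-down version of how the Input System can drive both a gamepad and an XR thumbstick through the same actions might look like this (the binding paths and speeds are assumptions, not the project's actual settings):

```csharp
// Sketch: one set of Input System actions bound to both gamepad and XR controls.
using UnityEngine;
using UnityEngine.InputSystem;

public class ModelController : MonoBehaviour
{
    public float rotateSpeed = 90f;  // Degrees per second.
    public float scaleSpeed = 0.5f;  // Scale units per second.

    private InputAction rotateAction;
    private InputAction scaleAction;

    void OnEnable()
    {
        // One action, multiple bindings: the same code path serves both devices.
        rotateAction = new InputAction("Rotate", InputActionType.Value);
        rotateAction.AddBinding("<Gamepad>/leftStick");
        rotateAction.AddBinding("<XRController>{RightHand}/thumbstick");
        rotateAction.Enable();

        scaleAction = new InputAction("Scale", InputActionType.Value);
        scaleAction.AddBinding("<Gamepad>/rightStick/y");
        scaleAction.AddBinding("<XRController>{LeftHand}/thumbstick/y");
        scaleAction.Enable();
    }

    void OnDisable() { rotateAction.Disable(); scaleAction.Disable(); }

    void Update()
    {
        Vector2 rotate = rotateAction.ReadValue<Vector2>();
        float scale = scaleAction.ReadValue<float>();

        // One stick rotates the box around its x and y axes; the other scales it.
        transform.Rotate(rotate.y * rotateSpeed * Time.deltaTime,
                         rotate.x * rotateSpeed * Time.deltaTime, 0f, Space.World);
        transform.localScale += Vector3.one * scale * scaleSpeed * Time.deltaTime;
    }
}
```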

 

Week 3:

I was able to obtain a Quest and test out the game; however, there are many bugs and errors I need to fix, which is the main objective to get the project working and running. Next week the bugs should be fixed, and the project should run and work properly.

 

Week 4:

Debugging was finished and the CNN scene was developed. For the development of the Transformer scene, an ONNX file was created and is ready to use in Unity for a similar experience to the CNN scene. Audio is set and the controllers are interactive; a clipping issue was found in the CNN scene, but it will be fixed later, as developing the Transformer scene is next and should be the priority. A downside of the CNN scene, and possibly the Transformer scene, is the need to run in Link mode: the standalone application will not work because File Explorer is needed to load the models.
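One common way to run an ONNX model inside Unity is the Barracuda package; the sketch below assumes the ONNX file has been imported as an NNModel asset and only illustrates that approach, not necessarily how these scenes load their models:

```csharp
// Sketch: load an imported ONNX model with Barracuda and create an inference worker.
using Unity.Barracuda;
using UnityEngine;

public class OnnxModelRunner : MonoBehaviour
{
    public NNModel modelAsset;   // Drag the imported ONNX asset here in the Inspector.
    private IWorker worker;

    void Start()
    {
        Model runtimeModel = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, runtimeModel);
    }

    void OnDestroy() { worker?.Dispose(); }
}
```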

Images: a basic load of the CNN model, and the CNN model with inputs and outputs.

The images above show the model before and after receiving weights and inputs.

 

Week 5:

Development of the Transformer scene using the ONNX file has started; some issues were encountered and bugs appeared. Once the scene is implemented, quality assurance will begin.

 

Week 6:

Developing a working Transformer model was a success: using the ONNX file allows for a simple interactive Transformer model. However, I am unable to display a visual model of the Transformer like the CNN visualizer. The ONNX file could be converted into a JSON file, but the code used in the CNN scene is not compatible with it; as a result, a visual interactive scene was created using the ONNX file instead. The scene allows the user to drag a photo and place it on a black square, which takes in the data and lets the user run the model and get an output.
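A rough sketch of that photo-to-output step with Barracuda (the input layout and class count depend on the exported model, so treat this only as an illustration):

```csharp
// Sketch: turn a dropped photo into a tensor, run the model, return the top class.
using Unity.Barracuda;
using UnityEngine;

public class ImageClassifier : MonoBehaviour
{
    public NNModel modelAsset;
    private IWorker worker;

    void Start()
    {
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto,
                                            ModelLoader.Load(modelAsset));
    }

    // Called once the user has placed a photo on the input square.
    public int Classify(Texture2D photo)
    {
        // Barracuda can build a tensor directly from a texture (3 channels here).
        using (var input = new Tensor(photo, 3))
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput();   // Owned by the worker; no Dispose needed.

            // Pick the class with the highest score.
            int best = 0;
            for (int i = 1; i < output.length; i++)
                if (output[i] > output[best]) best = i;
            return best;
        }
    }

    void OnDestroy() { worker?.Dispose(); }
}
```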

  

Further QA will be done, along with adding more information about the models.

 

Week 7:

The final touches on the Transformer scene have been made, and a small demo shows how the model runs. Not shown in the video, a scroll bar with information about the Transformer model was created to give more background on the model and the project.

 

Week 8:

Development is finished, and the presentation was today, July 29th, 2022. I learned a lot in this REU and now understand more about machine learning and deep learning. No further updates were made this week, just preparation for the presentation and finishing the written report.

Final Report submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Diego Rivera. 2022. Visualizing Machine Learning in 3D. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565688 – pdf

Sonifying the Microbiome: 365 days in 360

Deborah Rudin, University of Minnesota Twin Cities

Project: Sonifying the Microbiome: 365 days in 360°

Mentors:  Andrew Demirjian

About Me: I’m a Computer Science and Theater Arts double major at the University of Minnesota Twin Cities. My hobbies include reading, dancing, creating art, listening to music, and cooking. I currently work as an Audio/Media Technician for my University’s Theater department’s Sound Shop.

Week 1:

After meeting with my mentor and deciding what direction we wanted to take our project, I settled into learning how to use Max MSP. Max MSP is a visual programming language for music and multimedia design, and it’s what I’ll be using for the majority of the project. Going through the various tutorials and experimenting with the patches gave me a basic understanding which I will build upon as the summer progresses. I also took a look at the data we’re working with, and found the minimum and maximum of each feature in order to develop a range of values. These features refer to their respective taxonomic groups as found in the microbiomes of the infants. We’re only working with the top ten taxonomic groups, as observed in the study which the data was originally from. Then, I noted the periods between measurements from each infant in order for us to create an understanding of the intervals – especially to see if they held some form of consistency across the board. Later on, we may use these as a way to affect the duration of notes or other aspects of the sonification. I also wrote up a project proposal, and submitted it to Dr. Wole. At the end of the week, I attended the CUNY SciComs Symposium, which was highly interesting.

On the more extracurricular side of things, I’ve started exploring the city with my cohort. So far we’ve gone to the Highline and Chelsea Market!

Below:  Data on the Measurements and Features of the Infants

Week 2:

This week, my mentor and I started diving more into using Max MSP for our project. I first separated our data so that it was per infant, and transposed it so as to easily acquire the data per feature in Max MSP. Then, I worked on creating a patch which would go through the data on an infant and output the data on each feature separately. A patch is essentially a visual program in which you ‘patch’ different objects to the inputs and outputs of others. This language is similar to that of audio systems in real life in that aspect. After that, I worked on using said patch to make a basic sonification of the values of the first feature on the first baby. This worked successfully, although not quite in a way pleasing to the ear. Thus, I then worked on learning how to put the features through various synths. My mentor and I figured out how to generate notes and chords in Max MSP, and then I worked on creating a basic generative music piece by randomizing the steps between the notes. We also decided to create a room in Mozilla Hubs to showcase our work in addition to the in-person black box setting for our presentation.

As a cohort, my peers and I have been learning how to use various visualization tools as well. So far we have worked with both ParaView and Tableau. We also attended a dissertation presentation over Zoom on the benefits of using VR/MR for Chronic Pain Self-Management.

Week 3:

As a cohort, we worked with VMD and on our research papers in Overleaf. I wrote a draft of my abstract and introduction, and I will continue working on my paper throughout the course of the summer. In my project, I worked on forming chords from my data, using the values to generate the notes and then putting them through MIDI synths. I played around with creating two chords, one through a basic piano and one through a pizzicato string synth. However, it doesn't yet run completely correctly, so I still have much to develop with that patch. I also researched the different taxonomic families we have data on in order to figure out how they should be used in our sonification. I'm currently playing with the idea of using Gram-negative and Gram-positive bacteria for different chords or features of the sonification process. We are also considering using certain features for velocity, others for pitch, and perhaps some for frequency filtering. At the moment there are a lot of different possibilities that we can work with, and as such we're considering all of them.

On the extracurricular side of things, I went to the Museum of Modern Art (MoMA) and had afternoon tea at a lovely cafe I found.

Week 4:

This week, I heavily focused on my research paper. I got my introduction and related works section done, which leaves me to start working on writing up my methodology next. We now have research paper writing sessions on Zoom on Tuesdays. As I researched the related works, I found the information fascinating! Pythagoras’ Harmony of the Spheres is something which I had never heard of before — I definitely want to learn more about it. Coding-wise, I fixed up the patch and got it running, with gram-positive bacteria making one chord and gram-negative bacteria making another chord. However, the sound isn’t exactly what I want yet, so my mentor and I are definitely going to play around with what features do what. I also started working in Mozilla Hubs, feeling out how it works. Once I’ve got a placement of visual aids I desire, then I’ll work on figuring out how the audio zoning works.

As a cohort, we worked on integrating R and Python with Tableau. On my own, I attended a concert at Palladium Times Square!

Week 5:

As the start of the second half of the program, much of this week's focus was on developing a first model in Mozilla Hubs to see how everything works. I arranged objects and gave them corresponding mp3 files in order to figure out how the audio zoning functions. We've created a system wherein each feature corresponds to a note in a scale, which is then pitch modulated according to the data values. Each baby then has a different synth corresponding to it, creating distinction while keeping patterns identifiable. We're still finalizing our scale and what instruments to use. With this setup, we're sending everything through Ableton Live, which directly reads in the MIDI notes from Max MSP and lets us convert them into the mp3 files we need. I've continued working on my research paper, now focusing on my methodology.
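The actual implementation is a Max MSP patch feeding Ableton Live, but as a language-agnostic illustration of the mapping idea (each feature picks a note in a scale, and the data value bends its pitch), the logic is roughly the following; the scale and ranges here are placeholders:

```csharp
// Illustrative sketch only; the project's real mapping lives in a Max MSP patch.
public static class SonificationMapping
{
    // A C major scale over one octave, as MIDI note numbers.
    private static readonly int[] Scale = { 60, 62, 64, 65, 67, 69, 71, 72 };

    // featureIndex picks the base note; value (normalized 0-1 from the feature's
    // min/max) bends the pitch up to +/- 1 semitone around that note.
    public static float ToMidiPitch(int featureIndex, double value)
    {
        int baseNote = Scale[featureIndex % Scale.Length];
        double bend = (value - 0.5) * 2.0;   // Maps 0..1 to -1..+1 semitones.
        return (float)(baseNote + bend);     // Fractional MIDI pitch.
    }
}
```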

This week we also toured the ASRC, or Advanced Science Research Center, which was fascinating. I was most interested in their Neuroscience and Photonics research.

Week 6:

This week was spent putting together the first draft of the final project. I arranged the Grogu baby objects in a circle in Mozilla Hubs, each with their corresponding audio files. First, we used a test audio to make sure the objects were at least relatively in sync with each other. After that, we used the sonifications we had created. We used string synths to create a sound more pleasing to the ear, and modified the durations of the notes to correspond with the time between measurements: shorter at the beginning and getting longer towards the end. Next, we want to try different spatializations within Mozilla Hubs. For my paper, I finished my abstract and methodology, and now just need to write my results and conclusion.

On my own, I went to Spyscape, which is an interactive spy museum.

Below: Baby and Sound objects in Mozilla Hubs

Week 7:

This week has pretty much been crunch time. As a cohort, we've started to work on testing each others' programs and running user studies. I set up the new spatialization for the sonification, and I much prefer this new version. I set it in a geodesic dome preset scene, which allows a circular setup and much more room between the babies as well as in the middle. This allows one to listen to them all at once or go around the edge to focus on one or several at a time. The volume of each audio is also adjustable, so if one wishes to hear only one baby, they can turn the volume all the way down on the others. Some small stumbling blocks my mentor and I dealt with were originally having a bunch of duplicate audios instead of separate unique ones, and making sure the duration of the notes matched their periods of measurement. Luckily, these were easily resolved, and we were able to go on with our final implementation. After I put everything into Mozilla Hubs, I also labeled each baby so that our observations would be correct and valid without room for confusion. Unfortunately, Mozilla Hubs does not have a labeling system, so I resorted to using the Pen object to create a drawing and then turn it into a pinnable 3D object.

On a less work intense side, we took a group trip to the Bronx Zoo to do the Treetop Adventure Zipline. That was a lot of fun, even in the boiling heat. I also tried a NYC restaurant week restaurant – which was incredible – as well as a magical afternoon tea at The Cauldron.

Below: Baby objects in the Geodesic Dome with the Audio Debugger showing the Audio Zoning in Mozilla Hubs

Baby objects in the geodesic dome with the Audio Debugger showing the Audio Zoning

Week 8:

As the last week of the program, this week was focused on getting our papers done and submitted. Our abstracts were due Monday, with the papers themselves needing to be submitted by Friday. For me, this meant getting participants to experience the sonification and then fill out a response form so that data could be collected. This data collection allowed me to analyze my results and write the results section of my paper. Once that was done, I was able to write my conclusion and finish off the paper. Before I turned it in, I checked it over with my mentor so that I could add anything he thought was necessary. That done, I was able to submit my short paper! After that, I worked on developing my slides for my Friday presentation. On Friday, all of us presented our projects in a presentation session; we each had about 25 minutes, and the session was hybrid, in person and on Zoom. Also included was a session Dr. Wole set up in which he presented a Zoom recording of us REU participants discussing computing, STEM, and VR/AR/MR; we recorded it on Tuesday to create an easy method of presentation. Breakfast and lunch were both provided during the presentation session, which was a very nice addition to our last day of the program.

Outside of working on finishing up my project, I saw Phantom of the Opera on Broadway, which was incredible. I really enjoyed working in NYC this summer, and I’m so glad I had the chance to participate in this REU.

Final Report
