Virtual Reality and Structural Racism Project

Ari Riggins, Princeton University

Project: Virtual Reality and Structural Racism Project

Mentors:  Courtney Cogburn and Oyewole Oyekoya

Week 1:

This week, after meeting with Dr. Wole to discuss the specifics of the project and to brainstorm ideas and research questions to explore, I began writing my project proposal, which lays out the goals and methodology for the project.

This project aims to create an effective virtual reality based visualization that brings to light the disparities of structural racism within housing. The visualization will be based on data from different cities within the United States. We will use property value data along with the racial demographics of each area as the input. This data will be represented as a three-dimensional street or residential area with houses of changing dimensions: the size of each house will be proportional to its value over time, with color displaying the racial component.
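
As a rough illustration of this mapping, here is a minimal Python sketch of the intended logic (the actual visualization is built in Unity, so this is only a stand-in; the function names, the linear scaling, and the two-color blend are illustrative assumptions rather than project code):

```python
# Minimal sketch of the value-to-geometry mapping described above.
# Assumptions (not from the project code): linear scaling of house size
# by property value, and a simple two-color blend for racial composition.

def house_scale(value, min_value, max_value, min_scale=0.5, max_scale=2.0):
    """Map a property value onto a house scale factor."""
    t = (value - min_value) / (max_value - min_value)
    return min_scale + t * (max_scale - min_scale)

def house_color(percent_black):
    """Blend between two illustrative colors by racial composition (0-100%)."""
    t = percent_black / 100.0
    # (R, G, B): blue-ish at 0%, orange-ish at 100% -- purely illustrative
    return (0.2 + 0.6 * t, 0.4, 0.8 - 0.6 * t)

# Example: a home worth $150k on a street where values range $100k-$300k,
# in a neighborhood that is 75% Black.
print(house_scale(150_000, 100_000, 300_000))  # ~0.875
print(house_color(75))
```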

In addition to the project proposal, this week I also downloaded the program Unity and began getting used to it and thinking about how it could work for the project.

Week 2:

My goals for this week were mainly to learn how to use Unity to build the project and to do some background research on the topic and summarize it. I downloaded the Unity ARKit and began following some tutorials to learn how to use it. So far, I have managed to make an iOS AR application which uses the phone camera to display the world with an added digital cube.

Below: AR cube displayed in the camera view

After a discussion with Dr. Wole, the project idea evolved a bit: the residential area will now be displayed as an augmented reality visualization that can be viewed through a device as if resting on top of a flat surface such as a table or the ground. The next step, which I am currently working on in Unity, is surface detection so that the visualization can align with these surfaces.

In terms of research, I found several relevant sources investigating structural racism within housing. I came across the University of Minnesota’s Mapping Prejudice project, which hosts an interactive map of covenants in Minnesota that restricted the race of property owners and tenants. This project provides one example of how data on racial discrimination within housing can be visualized.

Week 3:

This week was spent focusing mostly on the data. I met with Dr. Cogburn and Dr. Wole, and we discussed a more specific view of the visualization. Dr. Cogburn pointed me to a Brookings Institution report which investigates the devaluation of Black homes and neighborhoods; this report will serve as the jumping-off point for the data of this project as well as a reference for discussion of the topic.

The data used in the report comes from the American Community Survey performed by the US Census Bureau and from Zillow. It will be necessary to find similar data from the census for this project. We decided that, for now, the project should focus on one geographic area as a case study of the overall inequality. The city I am planning to focus on is Rochester, New York; it was represented in the Brookings report and was shown to have a large disparity in the valuation of Black and White homes.

Week 4:

This week in Unity, I continued working with the ARKit to detect surfaces and display the visualization on them. We discussed the data after running into a roadblock: we did not have access to all of the information we wanted. The Brookings report had not provided the names of the specific towns and areas that we found to be comparable, so we could not find data on them individually. However, we are able to use the reported data by changing our visualization a bit. Instead of being on a timeline, the houses will be on a sliding scale by the factor of race.

I also gave my midterm presentation this week which helped me solidify my background research for the project, as well as explain it in a clear manner.

Week 5:

This week I was mostly working in Unity. I found a free house asset that works for the project, and I used the ARKit to place it on any detected plane. I also worked on getting a United States map to serve as the basis of the visualization on the plane. We decided to use multiple locations from the Brookings report as case studies, so I am now writing the script which changes the house size in accordance with this data. Now that I have the pieces working, I need to arrange the scene and scale everything, as well as create some instructions for use.

I have also been working on my paper and am currently thinking about the methodology section.

Week 6:

This week, in terms of writing the paper, I made a short draft of my abstract and began working on the methods section. I worked in Unity to get the house asset into AR and to write a script that adds the growing animation in the video below. I added an input to the house which dictates the disparity to be displayed through the amount of growth of the house. I also looked into changing the color of the house and having it fade from one color to another. When meeting with my mentors, they suggested that I try some different approaches to the overall visualization, such as adding avatars to depict the neighborhood demographics of the house and changing the color of the house to green or some other monetary representation to depict the change in value.
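
Since the actual growing/fading script is a Unity C# component, here is only a small Python sketch of the interpolation idea behind it: grow the house from its devalued size to its full size while fading between two colors in lockstep. The disparity input, color choices, and function name are assumptions for illustration.

```python
# Sketch of the interpolation behind the growing/fading animation.
# The real script is written in C# inside Unity; this only illustrates the idea.
# The 'disparity' input (e.g. 0.23 for a 23% devaluation) is an assumption
# about how the data would be passed in.

def animate_step(t, disparity, base_scale=1.0,
                 start_color=(1.0, 1.0, 1.0), end_color=(0.0, 0.8, 0.0)):
    """Return (scale, color) at normalized animation time t in [0, 1]."""
    # Grow from the devalued size up to the full (undevalued) size.
    start_scale = base_scale * (1.0 - disparity)
    scale = start_scale + t * (base_scale - start_scale)
    # Fade linearly from one color to the other in lockstep with the growth.
    color = tuple(s + t * (e - s) for s, e in zip(start_color, end_color))
    return scale, color

for t in (0.0, 0.5, 1.0):
    print(animate_step(t, disparity=0.23))
```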

Week 7:

This week, I have been working to get my demo finished. I fixed my shrinking issue with the house and I added the color change to the roof, though I still have to sync these two processes. In my meeting with my mentors, we decided that I should focus on completing this one scene instead of working on two, due to the limited time left. We also discussed the background of the scene and things I could add to make it feel more like a neighborhood, as well as labeling and how I could make clear what data the visualization is actually conveying. At this point, my remaining work is to finish this demo and wrap up everything we discussed, along with what I have worked on, into a presentation.

Week 8:

In the final week, I was mainly focused on preparing for my presentation and finishing up every aspect of the project. I also worked to finish the paper along with my presentation. In terms of my visualization, I had the case study visualization of one house changing in size and color, but at my mentor meeting we discussed the significance of the color and other possibilities. I ended up making two other versions of the visualization using different colormaps representing the racial make-up of the communities.

Final Report

Diego Rivera: Neural Network models in Virtual Reality

Diego Rivera, Iona College

Project: Neural Network in Virtual Reality through Unity 3D

Mentors: Lie Xie, Tian Cai, Wole Oyekoya

Week 1:

For week one, I researched transformers and read up on how to implement Google Cardboard in Unity and get it working. I also researched how PyTorch works this week. Next week, a Unity scene will be developed so that the neural network models can be implemented in the scene. More research and development will be done in order to have a presentable prototype next week.

Week 2:

This week, development of the Unity scene was started, and the majority of the visual aspect is finished. I was able to use Unity’s new Input System, which allows an XR controller and a regular console controller to move the object, making it easy to implement on other platforms. The transformer model is still in production; more debugging is needed.
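
Because the transformer model was still in progress, here is a minimal, generic PyTorch sketch of the kind of encoder-based classifier being debugged; the dimensions, layer counts, token layout, and class names are assumptions for illustration and not the project's actual architecture:

```python
import torch
import torch.nn as nn

# Minimal, generic transformer classifier (illustrative only; the project's
# actual architecture, sizes, and input format may differ).
class TinyTransformerClassifier(nn.Module):
    def __init__(self, seq_len=49, in_dim=16, d_model=64, n_heads=4,
                 n_layers=2, n_classes=10):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)          # project tokens/patches
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)        # classification head

    def forward(self, x):                 # x: (batch, seq_len, in_dim)
        h = self.encoder(self.embed(x))   # (batch, seq_len, d_model)
        return self.head(h.mean(dim=1))   # pool over the sequence, then classify

model = TinyTransformerClassifier()
dummy = torch.randn(1, 49, 16)             # one fake "image" split into 49 tokens
print(model(dummy).shape)                  # torch.Size([1, 10])
```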

The video above showcases the placement model and how the controls work with a controller. An XR controller has been added to the game, but testing must be done to see whether it works and is calibrated correctly. The box is shown rotating on its x and y axes and decreasing and increasing in size. Next, UI elements will be added, and I will continue debugging and creating a functional transformer model in PyTorch.

 

Week 3:

I was able to obtain a Quest and test out the game; however, there are many bugs and errors I need to fix, which is the main objective for getting the project working and running. Next week the bugs should be fixed, and the project should run and work properly.

 

Week 4:

Debugging was finished and the CNN scene was developed. For the development of the transformer scene, an ONNX file has been created and is ready to use in Unity for an experience similar to the CNN scene. Audio is set and the controllers are interactive; a clipping issue was found in the CNN scene, but that will be fixed later, as developing the transformer scene is next and should be the priority. A downside of the CNN scene, and possibly the transformer scene, is the need to be in Link mode: the standalone application will not work, because File Explorer is needed in order to get the models to work.
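
The post doesn't show how the ONNX file was produced, but assuming the model was trained in PyTorch, one common route is torch.onnx.export. A hedged sketch with a placeholder model and file names:

```python
import torch
import torch.nn as nn

# Illustrative only: export a (placeholder) PyTorch model to ONNX so Unity can
# load it. The real model, input shape, and file name are part of the project
# and not shown here, so everything below is a stand-in.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # placeholder model
model.eval()

dummy_input = torch.randn(1, 1, 28, 28)                        # placeholder shape
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,
)
print("wrote model.onnx")
```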

Below: Basic load of the CNN model; the CNN model with inputs and outputs

The images above show the model before and after receiving weights and inputs.

 

Week 5:

This week, development of the transformer scene using the ONNX file has started; some issues were encountered and bugs appeared. Once the scene is implemented, quality assurance will begin.

 

Week 6:

Developing a working transformer model was a success: using the ONNX file allows for a simple interactive transformer model. However, I am unable to display a visual model of the transformer like the CNN visualizer. The ONNX file could be transformed into a JSON file, but the code used in the CNN scene is not compatible with it; as a result, a visual interactive scene was created using the ONNX file directly. The scene allows the user to drag a photo and place it on a black square, which takes in the data and allows the user to run the model and get an output.
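
The ONNX-to-JSON conversion isn't shown; one plausible way to do it, since ONNX models are protobuf messages, is sketched below (file names are placeholders, and this may differ from the tool actually used):

```python
import json
import onnx
from google.protobuf.json_format import MessageToJson

# ONNX models are protobuf messages, so they can be dumped to JSON directly.
# This is just one plausible way the ONNX-to-JSON step could be done;
# "model.onnx" / "model.json" are placeholder file names.
model = onnx.load("model.onnx")
as_json = MessageToJson(model)

with open("model.json", "w") as f:
    f.write(as_json)

# Quick peek at the graph structure the JSON exposes.
graph = json.loads(as_json)["graph"]
print(len(graph["node"]), "nodes in the graph")
```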

  

Further QA will be done, along with adding more information about the models.

 

Week 7:

The final touches on the transformer scene have been made, and a small demo shows how the model runs. Not included in the video is a scroll bar with information about the transformer model, created to give more background on the model and the project.

 

Week 8:

Development is finished, and the presentation was today, July 29th, 2022. I learned a lot in this REU and now understand more about machine learning and deep learning. No further updates were made this week, just preparation for the presentation and finishing the written report.

Final Report submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Diego Rivera. 2022. Visualizing Machine Learning in 3D. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565688 – pdf

Sonifying the Microbiome: 365 days in 360

Deborah Rudin, University of Minnesota Twin Cities

Project: Sonifying the Microbiome: 365 days in 360°

Mentors:  Andrew Demirjian

About Me: I’m a Computer Science and Theater Arts double major at the University of Minnesota Twin Cities. My hobbies include reading, dancing, creating art, listening to music, and cooking. I currently work as an Audio/Media Technician for my University’s Theater department’s Sound Shop.

Week 1:

After meeting with my mentor and deciding what direction we wanted to take our project, I settled into learning how to use Max MSP. Max MSP is a visual programming language for music and multimedia design, and it’s what I’ll be using for the majority of the project. Going through the various tutorials and experimenting with the patches gave me a basic understanding which I will build upon as the summer progresses. I also took a look at the data we’re working with, and found the minimum and maximum of each feature in order to develop a range of values. These features refer to their respective taxonomic groups as found in the microbiomes of the infants. We’re only working with the top ten taxonomic groups, as observed in the study which the data was originally from. Then, I noted the periods between measurements from each infant in order for us to create an understanding of the intervals – especially to see if they held some form of consistency across the board. Later on, we may use these as a way to affect the duration of notes or other aspects of the sonification. I also wrote up a project proposal, and submitted it to Dr. Wole. At the end of the week, I attended the CUNY SciComs Symposium, which was highly interesting.
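
A small Python sketch of this first data pass (per-feature minimum/maximum plus the intervals between measurements); the file name and column names such as infant_id and day are assumptions about how the data is laid out, not the actual dataset schema:

```python
import pandas as pd

# Sketch of the Week 1 data pass: per-feature ranges and measurement intervals.
# The file name and column names ("infant_id", "day", plus one column per
# taxonomic group) are assumptions about how the data is organized.
df = pd.read_csv("microbiome.csv")

feature_cols = [c for c in df.columns if c not in ("infant_id", "day")]

# Minimum and maximum of each feature, to establish the value ranges.
ranges = df[feature_cols].agg(["min", "max"])
print(ranges)

# Days between consecutive measurements for each infant.
intervals = (
    df.sort_values(["infant_id", "day"])
      .groupby("infant_id")["day"]
      .diff()
      .dropna()
)
print(intervals.describe())
```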

On the more extracurricular side of things, I’ve started exploring the city with my cohort. So far we’ve gone to the Highline and Chelsea Market!

Below:  Data on the Measurements and Features of the Infants

Week 2:

This week, my mentor and I started diving more into using Max MSP for our project. I first separated our data so that it was per infant, and transposed it so as to easily acquire the data per feature in Max MSP. Then, I worked on creating a patch which would go through the data on an infant and output the data on each feature separately. A patch is essentially a visual program in which you ‘patch’ different objects to the inputs and outputs of others. This language is similar to that of audio systems in real life in that aspect. After that, I worked on using said patch to make a basic sonification of the values of the first feature on the first baby. This worked successfully, although not quite in a way pleasing to the ear. Thus, I then worked on learning how to put the features through various synths. My mentor and I figured out how to generate notes and chords in Max MSP, and then I worked on creating a basic generative music piece by randomizing the steps between the notes. We also decided to create a room in Mozilla Hubs to showcase our work in addition to the in-person black box setting for our presentation.
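
A short sketch of that per-infant split and transpose, again with assumed file and column names, so that each output file has one row per feature for Max MSP to step through:

```python
import pandas as pd

# Sketch of the per-infant split and transpose described above, so Max MSP can
# read one row per feature. File and column names are illustrative assumptions.
df = pd.read_csv("microbiome.csv")

for infant_id, group in df.groupby("infant_id"):
    per_feature = (
        group.sort_values("day")
             .set_index("day")
             .drop(columns=["infant_id"])
             .T                 # rows become features, columns become time points
    )
    per_feature.to_csv(f"infant_{infant_id}.csv")
```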

As a cohort, my peers and I have been learning how to use various visualization tools as well. We have so far worked with both ParaView and Tableau. We also attended a Zoom dissertation on the benefits of using VR/MR for Chronic Pain Self-Management.

Week 3:

As a cohort, we worked with VMD and worked on our research papers in Overleaf. I wrote a draft of my abstract and introduction, and will continue working on my paper throughout the course of the summer. In my project, I worked on forming chords from my data, using the values to generate the notes and then putting them through MIDI synths. I played around with creating two chords, one on a basic piano and one through a pizzicato string synth. However, it doesn’t yet run completely correctly, so I still have much to develop with that patch. I also researched the different taxonomic families which we have data on in order to figure out how they’ll be used in our sonification. I’m currently playing with the idea of using Gram-negative and Gram-positive bacteria for different chords or features of the sonification process. We are also considering using certain features for velocity, others for pitch, and perhaps some for frequency filtering. At the moment there are a lot of different possibilities that we can work with, and as such we’re considering all of them.

On the extracurricular side of things, I went to the Museum of Modern Art (MoMA) and had afternoon tea at a lovely cafe I found.

Week 4:

This week, I heavily focused on my research paper. I got my introduction and related works section done, which leaves me to start working on writing up my methodology next. We now have research paper writing sessions on Zoom on Tuesdays. As I researched the related works, I found the information fascinating! Pythagoras’ Harmony of the Spheres is something which I had never heard of before — I definitely want to learn more about it. Coding-wise, I fixed up the patch and got it running, with gram-positive bacteria making one chord and gram-negative bacteria making another chord. However, the sound isn’t exactly what I want yet, so my mentor and I are definitely going to play around with what features do what. I also started working in Mozilla Hubs, feeling out how it works. Once I’ve got a placement of visual aids I desire, then I’ll work on figuring out how the audio zoning works.

As a cohort, we worked on integrating R and Python with Tableau. On my own, I attended a concert at Palladium Times Square!

Week 5:

As the start of the second half of the program, much of this week’s focus was on developing a first model in Mozilla Hubs to see how everything works. I arranged objects and gave them corresponding mp3 files in order to figure out how the audio zoning functions. We’ve created a system wherein each feature corresponds to a note in a scale, which is then pitch modulated according to the data values. Then, each baby has a different synth corresponding to it to create distinction while also making patterns identifiable and viable. We’re still finalizing our scale and what instruments to use. With this setup, we’re sending everything through Ableton Live, which directly reads in the MIDI notes from Max MSP and lets us convert them into the mp3 files we need. I’ve continued working on my research paper, now focusing on my methodology.
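
The sonification itself lives in Max MSP and Ableton Live, but the note mapping can be sketched in Python: each feature gets a scale degree, and the data value nudges the pitch around it. The C-major scale, value range, and bend amount below are assumptions for illustration:

```python
# Sketch of the value-to-pitch mapping (the real patch lives in Max MSP and is
# routed through Ableton Live; the C-major scale and ranges here are assumptions).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72, 74, 76]   # MIDI notes, one per feature

def pitch_for(feature_index, value, min_value, max_value, bend_range=2):
    """Base note for the feature, pitch-modulated by where the value falls."""
    t = (value - min_value) / (max_value - min_value)
    return C_MAJOR[feature_index] + bend_range * (t - 0.5)   # +/- 1 semitone

# Feature 3 of some infant, with a reading of 0.42 on a 0-1 abundance scale.
print(pitch_for(3, 0.42, 0.0, 1.0))   # 64.84 (fractional MIDI pitch)
```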

This week we also toured the ASRC, or Advanced Science Research Center, which was fascinating. I was most interested in their Neuroscience and Photonics research.

Week 6:

This week was spent putting together the first draft of the final project. I arranged the Grogu baby objects in a circle, each with their corresponding audio files in Mozilla Hubs. First, we used a test audio to make sure the objects were at least relatively in sync with each other. After that, we used the sonifications which we’d created. We used string synths to create a sound more pleasing to the ear, and modified the durations of the notes to correspond with the time between measurements: shorter at the beginning and getting longer towards the end. Next, we want to try different spatializations within Mozilla Hubs. For my paper, I finished my abstract and methodology, and now just need to write my results and conclusion.

On my own, I went to Spyscape, which is an interactive spy museum.

Below: Baby and Sound objects in Mozilla Hubs

Week 7:

This week has pretty much been crunch time. As a cohort, we’ve started to work on testing each others’ programs and running user studies. I set up the new spatialization for the sonification, and I much prefer this new version. I set it in a geodesic dome preset scene, which allows a circular setup and much more room between the babies as well as in the middle. This allows one to listen to them all at once or go around the edge to focus on one or several at a time. The volume on each audio is also adjustable, so if one wishes to only hear one baby, they could turn down the volume all the way on the others. Some small stumbling blocks my mentor and I dealt with were originally having a bunch of duplicate audios instead of separate unique ones, and making sure the duration of the notes matched their periods of measurement. Luckily, these were easily resolved, and we were able to go on with our final implementation. After I put everything into Mozilla Hubs, I also labeled each baby so that our observations would be correct and valid without room for confusion. Unfortunately, Mozilla Hubs does not have a labeling system, so I resorted to using the Pen object to create a drawing and then turn it into a pinnable 3D object.

On a less work-intense note, we took a group trip to the Bronx Zoo to do the Treetop Adventure Zipline. That was a lot of fun, even in the boiling heat. I also tried a NYC Restaurant Week restaurant – which was incredible – as well as a magical afternoon tea at The Cauldron.

Below: Baby objects in the Geodesic Dome with the Audio Debugger showing the Audio Zoning in Mozilla Hubs

Baby objects in the geodesic dome with the Audio Debugger showing the Audio Zoning

Week 8:

As the last week of the program, this week was focused on getting our papers done and submitted. Our abstracts were due Monday, with the papers themselves needing to be submitted by Friday. For me, this meant getting participants to experience the sonification and then fill out a response form so that data could be collected. This data collection allowed me to analyze my results and write the results section of my paper. Once that was done, I was able to write my conclusion and finish off the paper. Before I turned it in, I checked it over with my mentor so that I could add anything he thought was necessary. That done, I was able to submit my short paper! After that, I worked on developing my slides for my Friday presentation. On Friday, all of us presented our projects in a presentation session. We each had about 25 minutes to present, and the session was hybrid, both in person and on Zoom. Also included was a Zoom recording Dr. Wole set up of us REU participants discussing computing, STEM, and VR/AR/MR; we recorded that session on Tuesday so it could be presented easily. Breakfast and lunch were both provided during the presentation session, which was a very nice addition to our last day of the program.

Outside of working on finishing up my project, I saw Phantom of the Opera on Broadway, which was incredible. I really enjoyed working in NYC this summer, and I’m so glad I had the chance to participate in this REU.

Final Report

VR-REU 2022: New Research Experience Program to Broaden Participation in Computing

Ten students will be attending the VR-REU research program at City University of New York (CUNY), Hunter College from June 6 – July 29, 2022.

Aisha Frampton-Clerk
CUNY Queensborough Community College

Amelia Roth
Gustavus Adolphus College

Ari Riggins
Princeton University

Deborah Rudin
University of Minnesota Twin Cities

Diego Rivera
Iona College

Mustapha Bouchaqour
CUNY New York City College of Technology

Nairoby Pena
Cornell University

Olubusayo Oluwagbamila
Rutgers University New Brunswick

Talia Attar
Cornell University

Zhenchao Xia
Stony Brook University

New York, NY — June 3, 2022 — The VR-REU program is a Research Experience for Undergraduates (REU) program sponsored by the National Science Foundation (NSF) that enables undergraduate students to undertake multidisciplinary research projects in Immersive 3D Visualization and VR/AR/MR (Virtual, Augmented and Mixed Reality) at Hunter College. The intended impact is to use the creative potential of Immersive 3D technology to attract and broaden participation of women and underrepresented minorities in STEM and computing fields. Participants will also be provided training in immersive 3D visualization tools and technologies. This experience will include excursions and social events.

“I believe that virtual, augmented and mixed reality technology helps to unleash the creativity of students and increases their interest in technology,” Wole Oyekoya, Associate Professor of Computer Science at Hunter College, said. “This program also provides the opportunity to be involved in building the metaverse with diversity, equity and inclusion in mind”.

The objective of the program is to inspire participating students to consider STEM as a career path and pursue STEM careers at the graduate level, specifically targeting the participation of women and underrepresented groups. The program identified research mentors with immersive visualization needs and paired each student with a research mentor. Research mentors include faculty members not just from Computer Science but also from diverse fields including film and media, arts, health, journalism, social sciences and biological sciences. They come from eight CUNY colleges, Columbia University, and the University of California, Santa Barbara.

Students will participate in mentored research projects in data driven research areas, including scientific visualization and visual analytics. Some of the planned research projects include: VR for aiding students with learning disabilities, MicroRNA (Ribonucleic Acid) as a regulator for cell lineage plasticity, immersive remote telepresence, sonifying the microbiome, and visualization of deep learning model architecture.

In addition to the exposure to cutting edge research, the students selected to participate will receive a travel stipend for one round trip to and from New York City, a housing allowance, a weekly stipend to cover living expenses and access to research faculty and VR/AR/MR resources to help facilitate their success in the program.

Two of our students (Aisha Frampton-Clerk and Diego Rivera) were also selected to receive a $10,000 fellowship through a new program, Last Mile Fellowship to Broaden Computing-Related REU Participation, that aims to improve diversity in computing-related research.

The VR-REU program is sponsored by NSF Grant No. 2050532 and is directed by the PI, Dr Wole Oyekoya. This funding supports Hunter College and CUNY in its mission to continue to provide higher education to a largely disadvantaged population that includes women and underrepresented minorities, immigrants or the children of immigrants and first-generation college students. This year’s program supports a diverse student cohort that is 70% female, 30% male, 40% African American, 20% Hispanic, 60% White, and 10% Asian.

Participants will present the results of their research on July 29, 2022 at Hunter College. Please contact the PI for more information.

Visualization of the food consumed during COVID

Visualization of the food consumed during COVID by Paul Grant, CUNY Hunter College

Frontiers in VR Paper

Our Paper, “Exploring First-Person Perspectives in Designing a Role-Playing VR Simulation for Bullying Prevention: A Focus Group Study” has been accepted in the Frontiers in Virtual Reality Journal.

ACM SUI Paper

Our paper, “Usability Evaluation of Behind the Screen Interaction” has been accepted at the ACM Symposium on Spatial User Interaction, SUI 2021.

PhD Student Position

I am seeking a PhD student to join my research lab in Fall 2022. More information available here.

NSF REU Award

Prof. Wole has been awarded a National Science Foundation (NSF) award.

Title: REU Site: Research Experience for Undergraduates in Immersive 3D Visualization
Abstract: This Research Experiences for Undergraduates (REU) Site award funds a new site to enable ten undergraduate students each year to undertake Immersive 3D Visualization research projects at Hunter College, City University of New York (CUNY). The intended impact is to use the creative potential of Immersive 3D technology to attract and broaden participation of women and underrepresented minorities in STEM and computing fields.

ACM ICVARS Paper

Our paper, “A Comparative Study of Smartphone, Desktop, and CAVE systems for Visualizing a Math Simulation” has been accepted at the ACM International Conference on Virtual and Augmented Reality Simulations, ICVARS 2021.

