Exploring Perceptions of Structural Racism in Housing Valuation Through 3D Visualizations

Lisa Haye, CUNY John Jay College (Economics, B.S.)

Mentors: Courtney Cogburn and Oyewole Oyekoya

Week 1

The 2023 VR-REU commenced at the Framers Bowling Lounge as a Memorial Day icebreaker, where we all introduced ourselves to one another, as well as to Professor Wole’s research team. The following day, we convened at Hunter College where we were introduced to some of the program mentors, and I began reviewing the work of my 2022 predecessor to think about how I could either expand or pivot last year’s work towards a new direction. Professor Wole also began his lecture on VR, AR, and MR, and we were introduced to the history of the field, as well as its applications across various disciplines. 

I met with both Professor Wole, and my research mentor, Professor Courtney Cogburn, to discuss the potential framework of my project. I began exploring both Unity Terrain and potential city and house asset packages in the Unity Asset Store, as these applications will be key to constructing my visualization models for the project. I also began looking at publications centered around both structural racism and how the issue has been visualized in the past. 

We ended this week with Professor Wole introducing us to Paraview, a scientific visualization program, for our self-paced lab session. I submitted my project proposal, and began to draft a schedule towards curating a literature review of my topic, as well as experimenting with Unity Terrain.

Week 2

This week, Professor Wole taught us the preliminary tenets of writing a scientific research paper and introduced us to Overleaf to compose our writing. Professor Wole also held VR lectures on immersive visual and interactive displays, along with 3D geometry. 

Meanwhile, this week my time was split between conducting a literature review to create a bibliography, finding databases that correlate with the data visualization aspect of this project, and familiarizing myself with Unity by building a test model of functions key to my 3D models. I identified three potential data points for my models (housing valuation, climate, and access to green space), as well as two neighborhoods within the Bronx to serve as case studies highlighting disparities based on that data. Professor Wole also introduced me to Mapbox for Unity, a location data and maps platform that can be integrated into Unity for precise map development; I am considering using a mixture of Mapbox and Unity Terrain as my methodology for the project moving forward.

The week ended with all of us attending the CUNY SciCom's "Communicating Your Science" Symposium at CUNY's Advanced Science Research Center, where we listened to various CUNY graduate students present their research to general and peer audiences. It was exciting to hear researchers from disciplines ranging from mathematics to biology to physics come together to talk about their work in a way that was fun, educational, and, most importantly, accessible to audiences who may not be familiar with concepts such as the sonification of star rotations, DNA G-quadruplexes, and the properties of shapes!

Here is a screenshot of my test model on Unity from earlier this week:

 
Week 3
 

This week, Professor Wole gave lectures on immersion, presence, and reality, as well as 3D tracking, scanning, and animation. We all had an engaging conversation on the uncanny valley, the theory that humans experience revulsion when observing a character that looks nearly, but not quite, human. Professor Wole's scientific visualization lab this week centered on Visual Molecular Dynamics (VMD), a molecular 3D visualization program.

Here is a screenshot of the ubiquitin protein molecule, visualized in CPK style and color set to ResID. I am not too sure what those acronyms mean, but I am interested in finding out:

 

As for my project, my time was split between drafting the abstract, introduction, and related works sections and experimenting with Mapbox. I think my methodology is going to shift toward a more Mapbox-intensive procedure: creating custom map styles in Mapbox Studio and then deploying them to Unity3D. Thus, I spent a lot of time getting a crash course in Mapbox's functions; I created a demo map of Riverdale, one of the Bronx neighborhoods featured in my project, to get a sense of how these models would look in Unity. I ran into quite a few errors, most importantly that my map object did not play in game mode and did not appear in the hierarchy unless I manually moved it there, and I wonder whether modeling the map would be easier with the 2017 version of Unity (the version most compatible with the current Mapbox software). Nonetheless, I hope to work these errors out with Professor Wole soon. Meanwhile, here is my demo model of a Bronx neighborhood:

Next week, I hope to begin the formal construction of my models!

Week 4

Roadblock, roadblock, roadblock. My computer refused to open a project with Unity's 2017 editor, so I couldn't test whether that resolved the problems; my maps continued to refuse to display unless I manually enabled their previews separately; they could not be represented side by side; and it was difficult to display them properly in the game scene. I honestly became dejected. I began considering whether my project had to pivot back to manually visualizing data with Unity Terrain and assets from the Asset Store, and Professor Wole's PhD student, Kwame Agyemang, and I tried to find any 3D models of New York City that could be imported into Unity in case a pivot was necessary. Nonetheless, I compiled data from Zillow and the New York City Environment and Health Data Portal to be used for housing valuation, climate, and greenspace data; the former was extracted using a Google Chrome extension called Zillow Data Explorer and then opened as a Google Sheet, and the latter was manually compiled into a Google Sheet on my drive.

My breakthrough came on Friday, when Professor Wole hosted our midterm presentations for our status updates; after I disclosed my setback, a fellow REU student revealed that they actually had prior experience using Mapbox! Thanks to Richard Yeung, the problem was resolved: if Mapbox is being used with a recent version of Unity (in this case, I am using Unity Editor version 2021.3.19f1), you must download 'AR Foundation [current version is 4.2.8]' and 'AR Core XR [4.2.8]' from the Unity Package Manager, and when importing the Mapbox SDK into Unity, do not import 'Google AR Core', 'Mapbox AR', or 'Unity AR Interface'. With that, my map displays properly and my use of Mapbox for this project can continue. It was very nice seeing how everyone's projects are coming together, and my talk with Professor Wole helped me think through how I will answer my research question while keeping in mind Professor Cogburn's reminder to consider my audience when representing data effectively. Because of this week's setback, I am a bit pressed for time in creating my models and writing the methodology for my paper, so this weekend requires me to make up for lost time; nonetheless, as I create my models, I am going to think about how I want to construct the user study for this project.

Obstacles are bound to happen in research, but it is important to stay open to changes in a project and to ask your network, and your network's network, for help; you never know who can help until you ask. Here is a test model of my two Bronx neighborhoods finally displaying side by side!

 
Week 5
 

With my Mapbox problems resolved, I spent this week honing the details of my models, both in terms of what data is being visualized and how I want to represent the information in Unity. My housing valuation model, which I had assumed would be the easiest to complete, took some thought as I considered how to represent redlining and which data point to express; I decided to focus on highlighting a sample of single-family homes on sale as of June 2023 with property values above the median for the Bronx ($442,754, according to Zillow), along with condominiums, in both neighborhoods. I am still experimenting with how the climate model could be represented visually, and the greenspace model is going to highlight the environment of both neighborhoods.
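
As a rough illustration of that selection step, here is a sketch of how the filter could be applied to the Zillow export in Python; the column names and values are assumptions, since the actual Zillow Data Explorer sheet may be laid out differently.

import pandas as pd

BRONX_MEDIAN = 442_754  # median home value for the Bronx (Zillow, June 2023)
NEIGHBORHOODS = ["Riverdale", "Soundview"]

# Load the sheet exported by the Zillow Data Explorer extension.
# The column names below (price, home_type, neighborhood) are placeholders.
listings = pd.read_csv("zillow_export.csv")

in_scope = listings["neighborhood"].isin(NEIGHBORHOODS)
single_family = listings[in_scope & (listings["home_type"] == "SINGLE_FAMILY") & (listings["price"] > BRONX_MEDIAN)]
condos = listings[in_scope & (listings["home_type"] == "CONDO")]

sample = pd.concat([single_family, condos])
sample.to_csv("housing_valuation_sample.csv", index=False)
print(f"{len(sample)} listings selected for the valuation models")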

I spent some time working on the methodology section of my paper, and lessons this week included Professor Wole’s lecture on interactive 3D graphics, as well as an introduction to Tableau for our lab work. Professor Wole generously took the REU participants on a cruise from Pier 61 for lunch, and we all ate food and chatted on the water as we sailed by downtown Manhattan, Brooklyn, and the Statue of Liberty.

Next week, I hope to complete my models and finish up my writing for the methodology. I haven’t worked on the details for the user study of my project, so I hope to speak to my mentors regarding its structure.

Week 6

This week, I completed models for housing valuation, climate, and the environment, but I could not find a way to visualize climate and the environment in a 3D format, so the research is going to focus solely on visualizing housing valuation. Professor Wole, Professor Cogburn, and I discussed the various dimensions and codes that could be used to visualize the existing data in different ways, and now that I'm scrapping climate and the environment, I will be focusing on as many ways to visualize housing valuation as I can while reframing my paper and the script for my user study. Future work could consider visualizing various forms of structural racism either separately or concurrently within various neighborhoods.

Technically, much of what I learned while visualizing the housing valuation data will translate into the various models I have to create, such as a baseline model to be used as a comparison, as well as a color dimension distinguishing the redlined community from the non-redlined one. I also have to decide whether to focus solely on single-family homes or on condominiums in my target neighborhoods; the literature on either type of housing structure will guide my visualization selection. Here is a screenshot of my recent experimentation with various design choices for the housing valuation model:

 
Week 7

This week I spent the majority of my time working on as many housing valuation models as I could, and talked with my mentors about which questions will be relevant to answering our research question in the user study. I struggled a bit with organizing my time this week, but conversations with Professor Wole and Professor Cogburn helped ground my expectations and steer my project into the final leg of the marathon.

The cohort returned to CUNY's Advanced Science Research Center (ASRC) for the IlluminationSpace tour, where we all interacted with models and systems related to the core science fields that the ASRC specializes in (nanoscience, structural biology, environmental science, photonics, and neuroscience); it was a really fun way to expose us to the objectives of these fields and how they overlap with one another. Sabrina took advantage of our tour of the ASRC's facilities to have us demo the application she created for the REU, and she also sat us down to share her experiences with academia; I admired her openness, especially since many of her comments on academia resonate with my own experiences.

Professor Wole also managed to host three program officers from the National Science Foundation's Graduate Research Fellowship Program, who spoke to us about the program's purpose and eligibility requirements and opened the floor for questions. Professor Wole made it clear throughout the program that part of his objective for the REU is to encourage us to consider graduate school, and introducing us to a fellowship dedicated to funding graduate studies and research interests (costs that can be a barrier for low-income students and make them skeptical about going to graduate school) was really honorable of him.

We ended the week with Professor Wole talking to us about the importance of statistical analysis in research, and he gave us a crash course on ANOVA. With the symposium next Thursday, I have a lot of work ahead of me, and I’m excited to see what everyone has accomplished!
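
To make the ANOVA crash course concrete for myself, here is a tiny one-way ANOVA sketch in Python with SciPy; the ratings below are made-up numbers, not data from my study.

from scipy import stats

# Hypothetical ratings of three visualization variants from different participants.
baseline = [4, 5, 3, 4, 4]
color_coded = [5, 5, 4, 5, 4]
extruded = [3, 4, 3, 3, 2]

# One-way ANOVA tests whether the group means differ more than chance would suggest.
f_stat, p_value = stats.f_oneway(baseline, color_coded, extruded)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p below 0.05 would hint the variants are rated differently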

Week 8

The final week began with me finally (finally) completing my user study on Google Forms; users were given context on structural racism and redlining, along with the procedure, and then had the option of giving their demographic information anonymously before being shown two questions about each of the seventeen versions of my models. Those two questions measured their perceptions of the models, and the final section asked users to rank their preferences for structural racism visualization. As of today, I have received 29 responses, which is pretty good for a survey that has been live for three days! I will most likely keep the survey live until closer to the deadline of one of the conferences I am applying to in August, just in case I can squeeze in more data for the poster. I also met the cohort for dinner downtown, which was a nice break from working on papers and data analysis.

Professor Wole connected with Iowa State University's SPIRE-EIT 2023 program this week, and we met with SPIRE-EIT's PI, Professor Stephen Gilbert, and his students and learned about the three projects they are working on, which was really cool. I also met with Professor Wole to discuss how to statistically analyze my data and to discuss a rather interesting comment I received in the feedback section of my survey; the comment reminded me of the kind of controversy a project like mine elicits, but also of the nature of research in general: criticism will occur, and I plan to address that comment in my discussion. Professor Wole helped me take in the criticism by talking about his own teacher evaluation experiences, which made me feel a lot better. On Thursday, our own VR-REU symposium was hosted at Hunter, and several of the mentors, loved ones, and the SPIRE-EIT program appeared virtually to listen to our work! Below is the title slide for my presentation, and here is a link to my slides: VR-REU 2023 Symposium

 

This program has been such a tremendous experience to be a part of, so a series of thanks are in order. I want to thank each of the REU participants I met for the camaraderie, knowledge, and, overall, a fun experience; they were an amazing set of people to be grouped with. I want to thank Kwame and Richard Yeung for helping me when my project hit roadblocks, and I want to thank my loved ones for supporting this journey by pushing me to apply to this program, listening to me talk about Unity, roadblocks, and random facts about Riverdale and Soundview, and sending out and completing my survey. I want to thank Professor Cogburn for her mentorship and guidance, especially as a Black woman in academia. Most importantly, I want to thank Professor Wole: he was an amazing PI, an insightful professor, and a great mentor, and I thank him for giving me a great introduction to research and academia, and for taking a shot on an economics major like me.

After today, I’ll still be working on my paper and poster, and whether I get published or not, I am grateful for the valuable tools this program has given me, and I know my work towards research is only getting started. 

Final Paper:
Lisa Haye, Courtney D. Cogburn, and Oyewole Oyekoya. 2023. Exploring Perceptions of Structural Racism in Housing Valuation through 3D Visualizations. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 19–23. https://doi.org/10.1145/3626485.3626533 (Best Poster Award) – pdf

STEM Education on Structural Biology through an Immersive Learning Environment

Sabrina Chow, Cornell University

Week 1: Introduction and Project Proposal 🎳

The first few days in NYC for the REU started with an introduction to the rest of the cohort and the facilities. I went bowling with Dr. Wole and the other REU students, which was a lot of fun and very competitive. The next day, we all gathered at Hunter College and toured the building. We met some of the mentors, including my own: Kendra Krueger. The day after was the start of the class “CSCI49383 – VR, AR, Mixed Reality,” where I learned about the ideal principles of VR, how it works, and its history. After class, I worked with some of the other students to brainstorm for our proposals over poke and boba.

Later, I met with Kendra to write up the details of my project proposal. Kendra is the STEM Outreach and Education Manager at the Advanced Science Research Center (ASRC), and from our conversation, I can tell that she is truly an educator at heart. I’m really excited to work on this project, which will enhance the learning experience for K-12 students visiting the Illumination space in the ASRC. Kendra gave me two different paths to go down, but ultimately, I have decided to focus on structural biology instead of neuroscience. It’s a subject I’m more comfortable with and I think I can create a good STEM education project about it. Finally on Friday, I met with the rest of the cohort, where we got an introduction to Paraview and presented our project proposals.

A snapshot of the Paraview tutorial we went through.

Week 2: Working at the Advanced Science Research Center 🧪

I started the week with getting set up at the ASRC and introduced to the other high school/undergraduate researchers working there over the summer. I got to talk more with Kendra about my project and briefly met Eta Isiorho, a researcher at the ASRC whose expertise in structural biology and crystallization I will be relying on. Then, I attended a lab safety training session over Zoom so that I’ll be able to enter Eta’s lab. I also used the time to complete CITI training since the world was on fire and it wasn’t safe to go outside. (see photo below)

Smoky air outside the dorm from wind blowing down the smoke from the Canadian wildfires. The AQI was almost 300.

Towards the end of the week, I attended the SciComms conference at the ASRC with the rest of the VR-REU cohort. The format was that each presenter gave an informal presentation of their research followed by a formal, more scientific version. It was really interesting to hear about the wide variety of projects going on around us, and I think attending will really help prepare me for our symposium at the end of these 8 weeks.

Part of the science/research art project at the SciComms conference. The question was, "What about research inspires you?" For me, it's my love for animals and therefore, biology.

For my project, I continued to compile sources and take notes for my literature review. I was hoping to create a mock-up for the project, but after meeting with Kendra and Eta, I think I will need to readjust my project to fit both of their expectations.

Week 3: Making Progress 📱

This week, I started to get into the meat of the project. Since this is a large project, I knew I had to break it down into smaller pieces. First, I made a mockup of what I wanted my application to look like using Figma (see below).

This is my general idea for the application that I’m developing. Students will be able to use their devices to see molecules and more through AR.

Second, I began to work with Xcode to create the real app. This took a little bit longer than I was expecting since I am still getting used to Xcode and Swift again, but I have the general layout.

Layout of app in Xcode.
The first look at my application in the Xcode storyboard.

Looking forward, I will need to work on the functionality of the application. That will be the most difficult part of the project, but I've found many YouTube tutorials that will help me understand how Apple's RealityKit works, so I am hopeful. Another issue that I've been considering is how I will share my application: if I go through the official Apple App Store, I will need to submit the app for review and prepare it with the proper certificates, etc.

Outside of my project, I also met a couple more times with Eta. She showed me the crystallization lab at the ASRC and taught me more about the software she uses. I’m hoping to use some of that software to create videos of the molecules. In addition, I attended the CUNY Graduate Sciences Information Session and learned more about the process of applying to grad school. Finally, towards the end of the week, Dr. Wole taught us about VMD.

Picture of the VMD interface with overlaid structures. Picture of the VMD interface with a molecule and a selected functional group.

Week 4: Application Framework 🛠️

For this week, I created the structural framework of my application in Xcode. I finished the storyboard for the application and began to make ViewControllers. The vast majority of the week was spent on implementing the collection tab. In hindsight, I think there are still ways that I could have made the code more efficient. For example, I made three separate UICollectionViews instead of using a single collection view with built-in sections. The sections approach would require adding custom section handling, though, so I will most likely not change this unless I have spare time at the end of the project.

The implemented version of the collections tab for my application.

I also worked on implementing the pop up page that shows up when a molecule is selected from the Collection tab. Each molecule will have more detailed information about what the image is showing and why it is relevant (in general and to the ASRC’s scientists).

This is the pop up tab that shows more details about a selected molecule from the Collection tab.

The only things left to do for these parts are:

  • The actual game part. Users will need to unlock the molecules through the AR camera. That means that they should not be able to be clicked on until after the user has scanned a particular code.
  • The molecules. The image files used were random examples taken directly from RCSB PDB. I will need to find relevant molecules and their images, hopefully from Eta.
  • The descriptions. I will need to write the different blurbs and have Kendra look over them. My goal for the little descriptions is that they will be informational without having too much scientific jargon.

I think for this upcoming week, I will reach out to Eta and Kendra about getting files. Other than that, I will be focusing on implementing the AR part because I suspect that will be the most difficult. Once I have the files, I will also need to convert them from .pdb/.xyz/etc. to a 3D compatible format. Fingers crossed!

Week 5: Plateau-ing 🥲

This week, I started out by figuring out how to convert between file formats. Most protein files are saved as .PDB (old) or .mmCIF (new). First, I needed to change from those formats to .OBJ, a standard 3D file format. VMD and PyMol are both supposed to have native converter tools, but when I tried to convert files with these two programs, the resulting files were almost or completely empty. Eventually, I found that the Chimera program works best for converting .PDB/.mmCIF files to .OBJ. Second, I had to go from .OBJ to .USDZ, the 3D file format based on Pixar's USD that Apple uses. The newest version of the application, ChimeraX, was the best for creating a .OBJ compatible with Apple's RealityConverter tool, which takes 3D files and converts them to .USDZ. The final file did not have color, which is definitely not ideal, but I think I will deal with that later.

A snapshot of RealityConverter taking in a .OBJ file and creating this .USDZ file.

Next, I worked on implementing the actual game functions. This required setting up 'communication' between the different ViewControllers. I tried many different methods, but I found that the best way was to put the functions that change items inside the Collection class and to create instances of the Collection class in the ViewControllers that need to use those functions.

Finally, I've been trying to learn and use the basics of RealityKit in the app. Specifically, I want to use the image tracking feature. I need to track multiple images, and each image should show a specific model. I have an idea of how to do it, but I have not been able to test it. Also, I still need the actual files that I will use in the application.

Week 6: Everything is Looking Up 🥳

I began the week knowing that I would need to get the AR function implemented. My goal is to do user testing next week, and I can't do that with an app that doesn't have AR, since the whole point of this program is to use XR in an innovative way. I was starting to panic as the deadline quickly approached.

As a result, I worked on setting up the AR. I finally began to test on my iPhone, instead of the built-in simulator on my laptop. On the first tab of my application, there is an ARView. My first issue was that this ARView was just showing up as a black screen. Eventually I got it to work with the camera by setting up permissions in the app’s .PLIST file (property list).

Camera on.
The first tab of my application with the working camera.

My application uses images to track where the model should be placed. Therefore, using my phone camera allowed me to actually see the model from different perspectives. I made a couple of sample scenes in Apple's RealityComposer and then imported the project into my Xcode project. In RealityComposer, I was able to display the model by scanning the image (as seen below), so I assumed that it would work in Xcode. It did not.

Molecule model in RealityComposer.
This was the sample molecule model in RealityComposer using my phone camera. The model is sitting on top of the QR code.

I ran into a complete roadblock with the AR in the middle of the week. I found that my Scene was loading correctly but was not connecting to the Anchor. I honestly think I spent 12 hours searching for a solution. I kept adding and testing code over and over. What was the solution, you might ask? Deleting everything but the code that loads the models onto the screen… Sometimes the simplest answer is the solution. As a result, the models correctly showed up when their respective images appeared. (I'm updating this blog in the middle of the week because I need to share that I succeeded 🥹.)

Working AR models on iPhone screen.
Using ARView to scan the QR codes and place the models from RealityComposer on their respective QR code.

Then, I worked on the ‘unlocking’ feature. This required reworking my Item class and learning about how Substrings work in Swift. Thankfully, it was not nearly as difficult as the AR stuff. I also spent some time downloading QR codes for the image tracking. Finally, I worked on downloading files from the Protein Data Bank and writing descriptions for them.

Week 7: Final Stretch 🏃🏻‍♀️

This week consisted of fixing a lot of small things before doing user testing. For example, one issue I had was that I needed to load the arrays of my Item class before ‘capturing’ a picture of the Item’s model in AR. My solution was to switch the positions of the Collection and AR tabs. This meant the Collection tab would load first. I also think logically it makes more sense for the Collection to be first as an introduction to the application.

Another major part was fixing the QR codes I was using. I originally generated the QR codes online using QRFY. Then I used a photo editor to add a blank space in the middle with the QR code's number. The issue I ran into was that the QR codes were too similar: Apple's AR system kept mixing them up with each other, resulting in tons of overlapping models. At first, I thought the issue was that I was testing on a screen; however, I printed them out and they were still glitching. I then spent a couple of hours in the photo editor adjusting the QR codes by hand until Apple approved that they were different enough.

I also learned how to export the files with color. Instead of using the .OBJ file format, I started using the .GLB/.GLTF format. The command in ChimeraX looks like: "save filename.glb textureColors true".

I then collected all of the files that I would need. I decided on three major subjects to talk about: 1) protein structure, 2) x-ray crystallography, and 3) scientists at the ASRC. All of the protein molecules were downloaded from the RCSB Protein Data Bank. I wanted to use 3D models for the x-ray crystallography process, but I realized I did not have enough time to make them myself, so I used images from a presentation that Eta had sent me. For the scientist spotlights, I went through the ASRC's faculty page and read a ton of papers until I found two more faculty members (in addition to Eta) who work with protein files. Once I had all the images and converted the files, I put them into my RealityComposer project. Then, I wrote captions for each one.

What my Reality Composer project looked like with the correct models and QR codes.

Finally, on Thursday of this week, I did user testing. It was a bit nerve-wracking, especially because I learned that I could not download the application on everyone’s phones. Apparently Apple only lets you download onto three devices, and for some reason, I could only download it onto my phone and one other person’s phone. I even tried to sign up for the paid developer program ($100 fee…) but it would take 2-3 days to get approved and it was the morning of user testing. Ultimately, I decided to split up the group into two and have everyone share the two phones.

The testing itself went pretty well! I was pleasantly surprised by how invested everyone was in finding all of the QR codes. Everyone was also quite impressed by the AR. The models stay still on the QR code, so moving around in real life allows the user to see a model from different perspectives.

Another part of the Thursday tour was my speech. I was invited to give a 30 min speech to my cohort about something related to research. My topic of choice was “Surviving Academia 101.” This is something I feel pretty strongly about since I am still figuring out my path through academia. To be honest, it certainly was not the most well-rehearsed speech, but I think (and hope) that my passion about the subject made up for it. I talked about my experience with wanting to do research but not feeling like I belonged.

Sabrina Chow giving a speech.
A photo of me giving a speech to my cohort about research.

Week 8: Saying Goodbye 🥲

Over the weekend, I started to write my final paper on Overleaf. I had already set up the general structure and some of the sections. Thanks, earlier me! The main thing that I did was updating my methods section and starting to look at the results. Although I did not get as much user data as I would have liked, I definitely had enough to consider my work a preliminary study.

On Monday, I went in to help test some of the other students’ projects. It was really impressive seeing what everyone else had accomplished in just 8 weeks! Dr. Wole also helped me with some questions I had about how to visualize my data. Later that night, all of us students met up for dinner. It was so much fun.

2023 VR-REU students dinner

For the rest of the week, I spent most of my time grinding out the paper and preparing my slides for the symposium. I honestly did not have too much trouble with using LaTeX. My real issue was finding the right words to summarize everything I did. There were parts where I wanted to overshare about the process (specifically to complain about all the problems I had run into). There were also parts where I had no idea what to write. Still, by writing a couple sentences and then switching when I ran into a mental roadblock, I began to make significant progress. Writing my paper while also working on the presentation helped a lot as well, since I could just take the information from the paper and simplify it for the presentation. Before long, the slide deck for my presentation was done.

Symposium presentation by Sabrina Chow
My presentation on what I had spent the last 8 weeks doing. It was 10 minutes long with another 2 minutes for questions.

Thursday morning was the VR-REU Symposium. One by one, we presented our projects, talking about how our projects took shape, what challenges we had faced, and our results. Even though I’d seen everyone’s projects by that point, it was quite interesting to hear about how they addressed issues with their projects.

Finally, Friday arrived. Our last day! It’s hard to believe that time passed by that quickly. I went to our usual classroom in Hunter and finished up my paper. I submitted it to the ACM Interactive Surfaces and Spaces poster session. Then, we said our goodbyes.

For my last night in NYC, I went for a quick walk through Central Park and reflected. I’m so grateful that I got to be a participant in this REU. I’ve learned so much from Dr. Wole, Kendra, and my fellow students. I challenged myself with a project that was all my own, and I am very proud of how it turned out. Wishing everyone else the best with their future endeavors! I know you all will do amazing things!

Final Paper:
Sabrina Chow, Kendra Krueger, and Oyewole Oyekoya. 2023. IOS Augmented Reality Application for Immersive Structural Biology Education. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 14–18. https://doi.org/10.1145/3626485.3626532 – pdf

Arab data bodies – Arab futurism meets Data Feminism

Mustapha Bouchaqour, CUNY New York City College of Technology

Week 1: Getting to know my team and the project’s goal.

I have joined Professor Laila in working on this project. The idea of the project is best seen as a story that reflects what has happened in Arab countries since the Arab Spring uprisings began in 2011. The story takes off from 2011 and is set in an Arab futurist world where the history of the 21st century is one in which data and artificial intelligence have created "data bodies" (DB). A hundred years from now, individuality is created out of data. Human and non-human subjectivities can be born solely from data.

The idea, then, is to develop a game. The game uses real data from early-21st-century uprisings and social movements, activating the 2011 Arab Uprising as ground zero, to create human and non-human agents called data bodies. This week's goal was to make sense of the data collected, to get to know the team I am working with, and to start on the blueprint we need to design as the foundation for developing the game.

Week 2: Analyzing data using NLP and a first basic design in Unity 3D

My group is still working on developing a blueprint that will serve as the basic foundation for the game. However, the final product I am trying to deliver is centered on two concepts: the game challenges power, and the data provided is categorized into emotional, experiential, and historical data (the 2011 Arab uprising). The gap between analyzing the data and implementing the game in Unity 3D is where I am working right now. I am in the process of analyzing data that was gathered between 2011 and 2013; I will be using natural language processing (NLP) and designing the basic animation needed for the first stage.

Week 3: Deep dive into data

The dataset is held in a MySQL database. The data is split between a few different tables. The tables are as follows:

  • random key
  • session Tweet
  • User
  • Tweet
  • Tweet Test
  • Tweet URL
  • URL
  • Tweet Hashtag
  • Hashtag
  • Language
  • Session
  • Source

Based on the UML, there are three independent tables: Language, Session, and Source. They have no direct connections under the UML approach; however, I believe there are some intersections occurring among all the tables in the database, and the way the data was collected may explain this view. The rest of the tables have more interesting intersections. The Tweet table has around six connections; in other words, it is connected to six tables: random key, session tweet, user, tweet test, tweet hashtag, and tweet URL. Here are the fields related to the tweet table:

The ‘tweet’ table glues everything together. It has the following columns:

  • twitter_id # I believe this twitter_id is also valid for the Twitter API, but I never tested to see if it was

  • text

  • geo # the geo data is PHP serialized GeoJSON point data (I believe lon lat), use a PHP deserializer to read it
  • source
  • from_user_id
  • to_user_id
  • lang_code
  • created_at

The ‘user’ table has the following:

  • user_id
  • username
  • profile_image_url # many of these are now broken, but some can be fixed by just modifying the hostname to whatever Twitter is using now

The ‘hashtag’ table has the following:

  • hashtag_name
  • Definition # these definitions were curated by Laila directly
  • Related_Country
  • Started_Collecting
  • Stopped_Collecting
  • hashtag_id

The ‘url’ table has the following:

  • url_id
  • url

You can look up a tweet's user info by INNER JOINing the tweet table with the user table on the from_user_id column of the tweet table.

Because tweets and hashtags, and also tweets and URLs, have a many-to-many relationship, they are associated by INNER JOINing on these association tables (see the example query after this list):

  • tweetHashtag
  • tweetUrl
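
To make those joins concrete, here is a sketch of the kind of query that can be run from Python. The connection string is a placeholder, and I am assuming the tweetHashtag association table carries a twitter_id and a hashtag_id column, since I did not list its exact fields above.

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection details; the real R-Shief MySQL credentials differ.
engine = create_engine("mysql+pymysql://user:password@localhost/rshief")

# Join each tweet to its author and its hashtags.
# Assumption: tweetHashtag has (twitter_id, hashtag_id) columns linking the two tables.
query = """
SELECT t.twitter_id,
       t.text,
       t.created_at,
       t.lang_code,
       u.username,
       h.hashtag_name
FROM tweet AS t
INNER JOIN user AS u ON t.from_user_id = u.user_id
INNER JOIN tweetHashtag AS th ON th.twitter_id = t.twitter_id
INNER JOIN hashtag AS h ON h.hashtag_id = th.hashtag_id
WHERE t.created_at BETWEEN '2011-08-01' AND '2011-08-31'
"""

tweets = pd.read_sql(query, engine)
print(tweets.head())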

In addition to this, an NLP model was developed to analyze the data and prepare the pipeline needed for Unity 3D.

A simple UML model was built to check the relationships between the tables.

Week 4: Storytelling using the dataset from R-Shief.

My team's ultimate goal is to create a virtual reality experience that projects the story behind the data. This is a story set in the future that locates the 2011 Arab Uprisings as the birth of the digital activism we witnessed grow globally throughout the twenty-first century: from Tunis to Cairo to Occupy Wall Street, from 5M and 12M in Spain to the Umbrella Revolution in Hong Kong, and more. The player enters a public mass gathering brimming with the energy of social change and solidarity. The player has from sunrise to sunrise to interact with "data bodies."

However, given the short time I have and the deadline for coming up with a solid final product, I was guided by my mentor, Professor Laila, to work on the following:

1 – Develop a homology visualization using the tweet data from August 2011 (#Syria)

2 – Distribute the tweet data over several characters, where we can see how the data is translated into emotional motions, including but not limited to: Anger, Dance, Protest, Read, etc.

Week 5: Creating and visualizing network data with Gephi.

I got access to the R-Shief server and used the 'tweet' table. First, a nodes file was created by extracting all the user_ids from the tweet table; we assigned each user_id a specific reference or Id and came up with a nodes file containing 'Id' and 'label' columns. An edges file was created by checking the relationships between user_ids within the 'tweet' table, which contains two fields that capture this relationship: 'from_user_id' and 'to_user_id'. The edges file then contains several fields, including the languages. (A minimal sketch of this extraction step follows the note below.)

Note: the data used still meets the same criteria:

  • Tweets contain "Syria"
  • Time period: August 2011
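
Here is a minimal sketch of that extraction in Python, assuming the filtered tweets have already been pulled from the 'tweet' table into a CSV with from_user_id, to_user_id, and lang_code columns; Gephi's spreadsheet importer expects Id/Label columns for nodes and Source/Target columns for edges.

import pandas as pd

# Tweets already filtered to August 2011 and text containing "Syria",
# exported from the 'tweet' table with these columns (assumption).
tweets = pd.read_csv("syria_august_2011_tweets.csv")

# Nodes: every user id that appears as a sender or receiver.
user_ids = pd.unique(pd.concat([tweets["from_user_id"], tweets["to_user_id"]]).dropna())
nodes = pd.DataFrame({"Id": user_ids, "Label": user_ids})
nodes.to_csv("nodes.csv", index=False)

# Edges: one row per tweet that mentions another user; colored later in Gephi by language.
edges = (
    tweets.dropna(subset=["to_user_id"])
    .rename(columns={"from_user_id": "Source", "to_user_id": "Target", "lang_code": "Language"})
    [["Source", "Target", "Language"]]
)
edges.to_csv("edges.csv", index=False)

print(f"{len(nodes)} nodes and {len(edges)} edges written for Gephi")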

An example of network data will look like this:

  • Each circle represents a node, which is a user id
  • Edges are the connections between nodes
  • Edge colors represent the language linked to the tweet

Sentiment analysis using the same data from the tweet table:

Comments:

The last graph is much better, allowing us to actually see some dips and trends in sentiment over time. Now all that is left to do is to project these changes in sentiment onto the avatars we create using Unity3D.
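
Since the sentiment step is not spelled out above, here is a hedged sketch of one way to run it; it uses NLTK's VADER analyzer as a stand-in for the actual NLP pipeline and assumes the same filtered CSV of tweets with text and created_at columns.

import pandas as pd
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER lexicon

# Same filtered tweets as before (assumption: text and created_at columns exist).
tweets = pd.read_csv("syria_august_2011_tweets.csv", parse_dates=["created_at"])

sia = SentimentIntensityAnalyzer()
tweets["sentiment"] = tweets["text"].astype(str).map(lambda t: sia.polarity_scores(t)["compound"])

# Average compound score per day; the dips and spikes are what get mapped onto the avatars' motions.
daily = tweets.set_index("created_at")["sentiment"].resample("D").mean()
print(daily)

# A simple binning that could drive animation states in Unity3D (thresholds are arbitrary).
tweets["motion"] = pd.cut(
    tweets["sentiment"],
    bins=[-1.0, -0.05, 0.05, 1.0],
    labels=["Anger", "Read", "Dance"],
)
print(tweets["motion"].value_counts())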

Week 6: Keep working on the research paper and going over ML-Agents in Unity 3D

Basically, this week my work focused entirely on Unity. I found many resources on how to implement ML models in Unity3D. My goal is to distribute the sentiment clusters over the characters I have. In addition, I worked on wrapping up the abstract needed for the research paper.

Week 7: Finished the abstract; kept working on the research paper and ML-Agents in Unity 3D

I finished the research paper abstract along with the introduction. I am figuring out how to implement ML-Agents in Unity 3D and wrapping up the demo.

Started writing up the final presentation.

Week 8:  Deadline for a great Experience

Over the journey of these 8 weeks, I've learned a lot in this REU and gotten to work outside of my comfort zone. During this week, I focused on preparing the presentation and wrapping up the research paper.

Final Report

Immersive 3D Biological Visualization of Proteins and microRNA Using VMD

Olubusayo Oluwagbamila, Rutgers University New Brunswick

Program mentor: Oyewole Oyekoya, Ph.D.     

Project mentor: Olorunseun Ogunwobi, M.D Ph.D.               

Progress Report

Week 1: June 6 – June 10 

This week, I met with Dr. Ogunwobi, studied his work, and drafted my research proposal. I read papers discussing the presence of single-nucleotide polymorphisms (SNPs) in the 8q24 chromosomal region, the encoding of six miRNAs at the PVT1 locus, and the underexpression of miRNA-1205 in prostate cancer. We decided that my role in this project will be to visualize his research findings, and, along with Dr. Oyekoya, concluded that only certain datasets can be visualized in VMD and Paraview.

On Friday, I physically attended the CUNYSciCom Symposium at the CUNY Graduate Center. There, I watched CUNY grad students make two presentations on their research: one for scientists and the other for non-scientists. I was especially  intrigued by how most of the presenters were able to simplify their work for audiences from non-scientific backgrounds, without watering it down. They employed analogies to form some connection with their audience, and linked that connection with their research. This method will come in handy for me, so I took some (mental) notes down.  I also watched tutorials on Paraview and VMD, and made attempts at visualizing substances on them.

My goal for next week is to collect data from Dr. Ogunwobi and figure out which datasets can be visualized with my tools. In the meantime, I will continue learning visualization on VMD and Paraview.

 

Week 2: June 13 – June 17

On Monday, I sat in on Dr. Ogunwobi’s weekly lab meeting at the Belfer Research Building. I listened to a few of his undergraduate and graduate students present their progress on the project they were working on. Following that, I was introduced to Fayola, program coordinator at the Hunter College Center for Cancer Health Disparities Research (CCHDR). She was hospitable, giving me a tour of the floor and showing me the different labs and lab equipment used in their research.

The data I needed was under the care of one of Dr. Ogunwobi's Ph.D. students who had recently graduated, and there had to be some coordination between her and the current lab students. Because of that, I was unable to access any data this week. I did, however, keep working on VMD and learned some cool tricks. Using the lipase 2W22 as a model, I practiced generating a Protein Structure File (PSF) from a Protein Data Bank (PDB) file. I also learned how to add mutations to a protein and how to modify a protein's graphical representation by coloring or drawing method.

During the week, I virtually attended some interesting VR-related presentations. One was a seminar on the Role of Self-Administered VR for On-Demand Chronic Pain Treatment, and the other was a dissertation defense of a Ph.D. nursing candidate. Both presentations contained research on the effects of VR usage on pain, and both research findings demonstrated the positive physical and emotional results VR usage had on patients. This brought to mind the increasing technological advances happening globally, how much the world has changed over the years, and how much the world will change years from now. I find that fascinating, but also ominous. Maybe I watched too many Black Mirror episodes.

My goals for next week are to collect the data from Dr. Ogunwobi’s lab, continue learning VMD, and study other microRNA visualization projects.

 

Week 3: June 20 – June 24

This week, I got access to the data needed for this project. There were a lot of files available (over 6,000!), so I spent a good amount of time sifting through the data and figuring out which files would be needed for my project. I was able to select files with compatible file types, but I did face some difficulties: I was unable to open these files in either VMD or Paraview, and only got an error message when I tried. My guess is that the problem lies either with the files I have or with my knowledge of VMD/Paraview. Next week, I will test both hypotheses by going back to the lab to further examine these files, while also watching more tutorials on VMD and Paraview.

I also got to work on my research paper this week; I currently have my background/introduction, bibliography, and a portion of my methods section complete. I faced some challenges transferring this to the template on Overleaf, so another goal for next week is to watch tutorials on using Overleaf.

 

Week 4: June 27 – July 1

This week, I spoke to Dr. Wole about the issues I had last week, and he suggested I find similar files from public databases. From the Protein Data Bank, I was able to find four proteins (or their look-alikes) associated with my project: Aurora Kinase A, FRYL, Human Neuron-Specific Enolase-2, and Notch Homolog 2 N-Terminal-Like Protein A & B. I visualized them in VMD, mutated them, and compared the mutated structures with the originals. I was hoping to visualize microRNA-1205, or at least the PVT1 locus on chromosome 8q24. Unfortunately, because these molecules are non-protein, and because the Protein Data Bank primarily contains protein structures, I could not visualize them. I searched the web for other open-source databases and other visualization software. I found a Nucleic Acid Database by Rutgers University (shoutout) and an RNA visualization software called RNA Artist. I could not find any microRNA file on the NAD. I tried downloading other files (RNA, DNA) from the NAD and opening them with RNA Artist, but I kept getting error messages. Next week, I will look more into this.
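
For reference, this is roughly how structures can be pulled down in bulk from the RCSB download service before loading them into VMD; the ID list below only includes the lipase 2W22 I practiced on earlier, so the entries for the four project proteins would need to be added.

import urllib.request
from pathlib import Path

# 2W22 is the lipase used for practice earlier; append the PDB IDs of the
# project proteins (Aurora Kinase A, FRYL, etc.) once they are chosen.
PDB_IDS = ["2W22"]

out_dir = Path("structures")
out_dir.mkdir(exist_ok=True)

for pdb_id in PDB_IDS:
    url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
    dest = out_dir / f"{pdb_id}.pdb"
    urllib.request.urlretrieve(url, dest)  # fetch the entry for loading into VMD
    print(f"Downloaded {pdb_id} to {dest}")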

Since this week marked the end of the first half of the program, my peers and I each gave our mid-term presentations on Friday. I had fun putting my PowerPoint slides together and breaking down the context of my project. Dr. Ogunwobi and I are the only ones in the entire program from a Biology/Genetics background, so I enjoyed the challenge of explaining gene expression to Computer Science, Engineering, and Art professionals. I also found the other projects my peers are working on interesting, and I loved how much progress we have all made on our individual projects. I am looking forward to making more strides in the second half of this program.

 

Week 5: July 4 – July 8

The beginning of this week was a holiday, so I spent the first couple of days exploring the city. I also spent some time exploring the Nucleic Acid Database and the RNA Artist software. Unfortunately, I couldn't find anything that would be useful for my project, at least not in the next four weeks. They do seem to be interesting visualization tools, however, so I will keep them in mind for future projects. This week, I also got a chance to use some VMD extensions: I used the Movie Maker extension to make both a single-frame movie and a trajectory movie. I did some more digging and found a paper that talked about using other VMD extensions for RNA visualization, such as the NetworkView extension/plug-in. I hope to explore this next week.

 

Week 6: July 11 – July 15

This week, I checked out the NetworkView extension and, unfortunately, it doesn't have the features I would have liked for my project. Aside from that, I made more VMD videos using different graphical representations. Here's a close-up video of the Aurora Kinase A protein, and another one of its mutant. I liked this representation because it shows the missing bonds and molecules in the mutant structure. Also, it's a visually appealing representation, which would be really interesting to view in VR.

 

 

Week 7: July 18 – July 22

 This week, I worked on visualizing the other proteins associated with my project, similar to the videos I made on Aurora Kinase A. So far, I am done with the visualization aspect of my project, and the next step is to convert these into VR compatible formats.

 

Week 8: July 25 – July 29

This was the final stretch of the program. This week, I concluded my project by refining the visuals I had created and checking out the VR experience. Due to the technical obscurity of VMD, I was unable to directly export my visuals with the VMD VR extension. I was, however, able to work around this and display my visuals through Google Cardboard headsets. While this format was less immersive than I had hoped, it did involve virtual reality. I also completed my research paper and submitted it for publication. On the last day of the program, I had the opportunity to present my work spanning all eight weeks. Here are some 2D versions of the 3D visuals I presented:


My project had its challenges, but it was overall a fulfilling introduction to 3D biological visualization. I am grateful to Dr. Wole for organizing and enabling this project, and to my mentor, Dr. Ogunwobi, for trusting me with his work. VMD appears to be a very useful piece of software with numerous applications. I hope to further explore it independently over the next four weeks.

Final Report

Immersive Remote Telepresence and Self-Avatar Project

Aisha Frampton-Clerk, CUNY Queensborough Community College

Week 1:

I began testing some faces in the Reallusion software, first trying my own and then my boyfriend's, to see how it handled different lighting in the original headshot images and different features like facial hair. My first task next week will be to work on styling: downloading hair packages and learning to manipulate them. I think these tests have given me a better idea of the scope of the software and produced some interesting results to analyse and help shape the future of my study.

This first encounter with Reallusion has helped me understand the qualities of a headshot that make the best self-avatars. When I start working with photographic headshots next week, I will be sure to consider the lighting and angle of the image.

 

 

 

Week 2:

This week I focused on finding celebrity source images to style in the Reallusion software. I first had to look for websites that provide royalty-free/copyright-free images. I found that others had recommended Flickr, so I chose celebrities that had a range of images in its database. The image selection process took longer than I thought: as I tested images in the Reallusion software, I found that the quality of the images had to be very high, as did the angle of the face, and images with a celebrity smiling or with hair over their face were difficult for the software to decipher. After finding the right pictures, I put them into the Reallusion software to begin styling, using the Smart Hair content pack to create hairstyles that match each celebrity's aesthetic so they are as easy to identify as possible. I am trying to work out how I can make more custom hairstyles and clothing packs so the characters are as recognizable as possible.

Next, I will be looking at how I can animate these characters, specifically facial expressions and speech.

Week 3:

I have been importing my characters into iClone 7. I recorded a short voice memo and uploaded it to iClone 7. While there was automated lip movement and alignment to the words, I had to tweak it so it fit better with the recording. This included making adjustments to the facial expressions, like moving eyebrows to match cadence and tone changes in the voice recording. As seen in the Face Key tab, selected polygons can be moved and matched to different sections of the speech to direct face movement.

Next, I am going to look for recordings of the celebrity talking. I am going to look for ones that have video, not just audio, so I can look closely at their facial movements to model them. I also want to begin working with larger expressive movements over the whole body.

Week 4 :

This week I have been continuing to make characters and working on making them as realistic as possible. I have had some issues working with images of Black celebrities: often the software cannot pick up highlights on the face when it is selecting the color for the rest of the body. To work around this, I have been selecting skin tones by hand to try to get a more accurate representation. Finding Black hair textures has also been difficult, as they don't come with the program; I have found that, in some cases, layering different hair pieces from the Smart Hair content pack gives a thicker effect. I have also had to change some of the celebrities I chose, as they did not have enough images for me to work with. I have to test several pictures before I find one that gives an avatar that looks like the celebrity, but now that I have the right images, styling has been much easier.

Here is a before and after of Will Smith with a better original headshot and styling.

Week 5:

I have been watching tutorials on how to apply facial expressions/emotions to Reallusion characters in Mixamo. Having previously worked with facial expressions exclusively in iClone 7, I am excited to see how the software differs. I want to work with the camera plug-in function as well.

I am also working on creating non-celebrity headshots, so that these avatars will not be familiar to subjects. With these, styling is much easier, as I have more control over the original images.

I have also been looking for more papers similar to my topic to use as a basis for my paper. Reading these papers in further depth has given me a lot of ideas about the features that contribute to realism and how those features can be investigated. So, while it has been beneficial for understanding how to construct a research paper, it has also given me a better idea of what makes virtual reality feel real.

Week 6 :

I have been looking for the best way to add facial animation to characters. The live motion option has the most customisation, as it can copy any expression you make; however, it requires much more adjustment than the face puppeting. The smile is often creepy and unnatural, as the upper lip area cannot be selected and altered on its own. Luckily, both are easy to pick up and work with, so I will be able to record audio and use the AccuLips function to automate the lip movement.

Week 7:

I have been putting together videos of two avatars with audio, ready for the questionnaire. I made four variations of the avatar: first, a stationary image of the character; then a video with audio and lip movement; next, a video including facial expressions; and finally, a video with full-body movement. All videos have the same audio accompaniment so as not to distract from the avatar.

https://youtu.be/njViZf5UKXY

I have also finished my survey, which asks participants some questions about the videos. I will collect the results over the next weekend.

Week 8:

This week I was analysing the responses to my study and adding them to my paper. I completed the survey with 25 responses. I found that eye tracking had a huge effect on realism, as the second variation (lip movement) was consistently ranked the least realistic and most unsettling. I was able to make some interesting conclusions about the importance of movement when creating virtual characters. I added figures that illustrate this to my paper and presentation.

Final report was submitted and accepted as a 4-page short paper at VRST 2022:
Aisha Frampton-Clerk and Oyewole Oyekoya. 2022. Investigating the Perceived Realism of the Other User's Look-Alike Avatars. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST '22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3562939.3565636 – pdf

VR as a Learning Tool for Students with Disabilities

Nairoby Pena, Cornell University

Week 1: 

Dan and I met a couple of days before the REU began and we decided that we should start from a broad standpoint and begin to narrow in as time progressed. My first “assignment” was to research physical disabilities, learning disabilities, and mental health (emotional) issues, and the impediments that students face in higher education when they have these disabilities. During this week’s meeting, we became increasingly aware that with just 8 weeks it is unfortunately not possible to have a productive project if we take the angle of aiding mental health issues with VR. We decided that the next “assignment” will be to look for two core classes that each student must take at Hunter and Cornell (four in total) and look for the issues or impediments that a student from each of the disability groups may face in these classes (with a heavy focus on physical and learning disabilities). I will be starting on this assignment to wrap up the week as well as beginning to explore Unity, and starting to think about developing the documentation since I found one or two papers that could be a part of that. By next week we should be working on a concrete methodology for creating or enhancing VR as a tool for the impediments that students with disabilities face.

Week 2: 

This week I presented to Dan the core classes at both Cornell and Hunter that all students have to take, and I talked about the impediments that students with disabilities might face in those classes. For example, Cornell has a swimming requirement in order to graduate, and we talked about the obvious point that not every person is physically able to jump in the pool and swim. In addition, at Hunter there is an English course that every student is required to take, which involves heavy writing and reading that students with dysgraphia or dyslexia, respectively, would probably struggle with. There is also an astronomy class that students can choose to take as part of their core curriculum, and unfortunately a disability might impede a person from taking the course because of the lab component, in which 9 out of 13 experiments are self-guided on a computer. Dan brought to my attention that every lab has very high tables made for people to stand at; for someone who uses a wheelchair, this could be difficult to navigate, as they would have to be looking up at their peers, making it harder to collaborate. Dan and I also spoke about the research paper, and he guided me in understanding what I could mention in my abstract. We noticed that we still do not have a clear methodology, but we did talk about the fact that VR is heavily used right now for virtual museum visits and how this is a huge help for people with many different types of disabilities, so this might be where the project is headed at this point.

I began to look into Unity and attempted to learn about all of its components that had to do with VR. I built my own VR space and then realized that I didn’t have a headset to try what I created; I also looked into building a microgame. 

Week 3: 

This week I focused on a deep dive into dyslexia: what it is and how it affects students. I looked at what impediments students might face when they have this disability and have to take English 120 at Hunter. I also researched how VR could or could not help students with dyslexia. Most research concludes that more scientific evidence is needed to prove that VR can help; however, theoretically it can, and there are VR programs that exist for students with dyslexia. After meeting with Dan, we decided that we should not focus on dyslexia because it would be difficult to come up with a helpful project in the minimal time available. Since there has to be a visualization aspect for the outcome of the project, I received clarity from Dr. Wole and decided that I will be creating a VR "game" or program where autistic students can engage in social modeling. This can include modules on greeting people, engaging in conversation, and how to conduct oneself in various social environments. Since it is already week 3, I will have to work rigorously to develop this. So far I have gone into Unity and used the assets to build the virtual environment. I will be working to develop a user interface for at least three different social situations and hope to test at least one of them by the end of next week.

We had a paper writing zoom session this week where I was able to get a draft of my abstract and begin my introduction. I will have to go in and edit this as my focus has shifted from disabilities in general to developmental disabilities (more specifically autism).

Week 4: 

Last week's idea about social modeling has been modified so that it can get done in the amount of time that is left. I will now be using a museum asset in Unity and putting a human-like avatar in the virtual space. The avatar will guide the person wearing the VR headset through the museum, and what the avatar does will play on a loop. (Thank you to Dr. Wole for the idea.) Applying prior research to this idea, I hope to implement the general outline for social skills teaching packages: the program will allow for repetition of the target task (visiting a museum), verbal explanation of the social skill, practice of the skill in realistic settings (in virtual reality), and role-play of target behaviors (practicing the behaviors in the virtual reality program without the guided audio).

This week I worked heavily on my paper. I was able to get through most of the introduction, related works, and some of the methodology. I will continue to add to it as the project moves along. Finally, I developed a slides presentation for the midterm check-in.

Week 5: 

This week I was able to get the museum asset into Unity. I further developed the idea of social modeling by recalling my experience working at the Intrepid Museum with the Access program. One of the events that the program provided was early-morning openings for students with disabilities, with a large portion of that population having autism. I decided that I should try to use that as the model in the museum asset. Below is a screen recording of what I have so far. I was able to edit the presentation from what the original asset had to what I wanted it to say. I hope to add audio of my voice so that everything isn't reading-based, and I will also be trying to make it possible for the user to look through the museum itself; after this is done I will deploy it to the headset and make edits as needed.

I worked on the citations in my paper and revised it as my ideas developed. I am starting to think about what my results section will look like. We all visited the CUNY ASRC and got to take a tour while learning about their initiatives.

Week 6:

This week I had technical difficulties with adding audio to my visualization. Kwame and I spent hours trying to get the code right so that certain audio clips would play at each slide, but we could not get it to work. I also looked for existing scripts that might help, but they did not work either. I created a backup plan to have the audio play on a loop in its entirety, though this would cause people to hear different audio portions at different points in the experience.
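For reference, here is a minimal sketch of the kind of Unity script we were aiming for; the class and field names are placeholders of my own, not our actual project code.

using UnityEngine;

// Hypothetical sketch: one narration clip per presentation slide.
public class SlideNarration : MonoBehaviour
{
    public AudioClip[] slideClips;   // assign one clip per slide in the Inspector
    private AudioSource source;

    void Awake()
    {
        source = gameObject.AddComponent<AudioSource>();
        source.playOnAwake = false;
    }

    // Call this from whatever script advances the slides.
    public void PlayClipForSlide(int slideIndex)
    {
        if (slideIndex < 0 || slideIndex >= slideClips.Length) return;
        source.Stop();
        source.clip = slideClips[slideIndex];
        source.Play();
    }
}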

I sent my mentor Dan my paper for feedback and began to work on refining it and next week I hope to begin to finalize it. In addition, Dan gave me the idea to share some of the other existing VR programs that exist to explain in my final presentation so that people understand the extent to which they can be helpful for students with disabilities.

Next week I will be testing my visual aspect from Unity and making any fixes as well as focusing on the paper.

Week 7: 

I got to participate in Talia’s user study this week; it was cool to see how a fellow participant’s visualization aspect came out. Since my project does not have a user study, I focused on what I would write in my discussion for my paper. I met with Dan and he was able to give me more feedback on the paper so that I could continue to work on it. As for the technical difficulties, it seems that I will have to play the looped audio in its entirety due to the diminishing amount of time that is available. Dan and I spoke about the final presentation and what I should include which led me to begin to work on my final slides.

We got to visit the Bronx Zoo on Wednesday, where we zip-lined; that was fun! We ended the week with a meeting about how the final week would look and how to prepare.

Week 8: 

I spent the last week submitting my abstract, finishing my paper, and practicing my final presentation. I was able to participate in Kwame’s user study. For the final touch to my visualization portion, I put an animated avatar into the museum to make it seem as though there was a tour guide.

Overall it has been a great learning experience participating in this REU. I am grateful for all that I have learned and all of the obstacles that I was able to surpass in developing this project.

Final Paper

Amelia Roth: The Community Game Development Toolkit

Amelia Roth, Gustavus Adolphus College

Project: The Community Game Development Toolkit–Developing accessible tools for students and artists to tell their story using creative game design

Mentor:  Daniel Lichtman

About Me: I’m majoring in Math and Computer Science at Gustavus Adolphus College in Saint Peter, Minnesota.

Week 1: I began the week by working on several tutorials in Unity to better learn the application and how I can use it to improve the accessibility of the Community Game Development Toolkit (CGDT). When I met with my mentor earlier this week, we decided that improving the accessibility and functionality of the CGDT is our main goal during these 8 weeks. We would like to shift the creation of visual art and stories from the Unity editor to an interactive in-game experience. Over the next few weeks, I plan to work on the in-game editor.

 

Week 2: This week I was able to use my new Unity skills to start using the toolkit and understanding the scripting behind it. I created a GitHub account, and soon I'll have access to the code for the CGDT! I met with my mentor at the beginning of this week, and we created a to-do list for me for the next couple of weeks. To create an in-game editor, my first steps will be working on selecting objects, moving objects, and, most importantly, making sure that any in-game changes are saved once the user leaves play mode in Unity.

Here’s a bit of art I made in the CGDT, check it out!

 

Week 3: I spent this week working on the code for the CGDT and uploading it to the repository on GitHub so that my additions are documented. I met with my mentor twice this week to get help on the coding needed, and we got quite a bit done! As of now, while in play mode, a user can select an object (which highlights to show it has been selected), move that object in a circle around them, toward and away from them, and up and down. They can also make the object smaller and bigger. Since this took less time than expected, I can move on to the next step, which is allowing the user to save changes made in play mode instead of just in editor mode. I also think it would be helpful to add functions that allow the player to rotate objects on the object's own axes, so I'll ask my mentor if he thinks we have time to add this in.
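As a rough illustration of this kind of play-mode editing (a simplified sketch of my own, not the actual CGDT scripts), a Unity component along these lines can handle selection, orbiting, and scaling:

using UnityEngine;

// Minimal sketch: click to select an object, then scale it with +/- and
// orbit it around the player (this transform) with A/D.
public class PlayModeEditor : MonoBehaviour
{
    private Transform selected;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
                selected = hit.transform;
        }
        if (selected == null) return;

        // Scale the selected object up or down.
        if (Input.GetKey(KeyCode.Equals)) selected.localScale *= 1.01f;
        if (Input.GetKey(KeyCode.Minus))  selected.localScale *= 0.99f;

        // Move the selected object in a circle around the player.
        if (Input.GetKey(KeyCode.A)) selected.RotateAround(transform.position, Vector3.up,  30f * Time.deltaTime);
        if (Input.GetKey(KeyCode.D)) selected.RotateAround(transform.position, Vector3.up, -30f * Time.deltaTime);
    }
}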

I also began writing my paper this week. The REU held a co-writing session on Tuesday that we plan to continue indefinitely, so that we students have time set aside specifically to work on our papers and ask questions in real time if we need help. I've been looking for previous research that relates to my project, and one very interesting thing I found is called the Verb Collective. With the Verb Collective, different verbs such as "to scatter", "to drop", and "to spell" have functions attached to them, which in turn can call other verbs and their functions. I think it relates to my project in that the CGDT is also meant to be a storytelling tool. Both the Verb Collective and the Community Game Development Toolkit are interested in exploring VR as a way to see the world from a new perspective.

Week 4: This week hasn't had as many satisfying results as last week, but I'm in the middle of working on several things that should hopefully be done next week. One of the things I'm currently working on is movie textures. Right now, the CGDT has an automatic importer that turns images into usable sprites, but no such script exists for movie textures. I have learned how to set them up manually, though, and if you look closely at the cube in the image below, you'll see it has a video of Grand Central Terminal attached!
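For anyone curious, here is a hedged sketch of attaching a video to a renderer at runtime with Unity's built-in VideoPlayer; the class name and setup are illustrative, not the importer script the CGDT would eventually need.

using UnityEngine;
using UnityEngine.Video;

// Sketch: play a video clip on this object's material (e.g. a cube).
public class MovieTextureSketch : MonoBehaviour
{
    public VideoClip clip;   // e.g. the Grand Central Terminal footage

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.renderMode = VideoRenderMode.MaterialOverride;
        player.targetMaterialRenderer = GetComponent<Renderer>();
        player.targetMaterialProperty = "_MainTex";
        player.isLooping = true;
        player.Play();
    }
}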

Dan and I are also working on saving the changes made in play mode so that users' hard work doesn't go to waste! Some of the necessary functions are a bit over my head, so Dan is lending a helping hand. I've also started working on some documentation for the CGDT, so that it's easy to find a tutorial for exactly what you're trying to learn how to use.

The writing for the paper is going well: I've got a solid related works section and a good start on my introduction. Our midterm REU presentation is also tomorrow, and I'm excited to share the work I've done on the CGDT with my fellow REU students!

 

Week 5: As with all things, this project has its ups and downs in terms of how much I get done in a week, and this week was one of the slower ones. Saving and loading automatically is turning out to be trickier than expected, and building my CGDT project in Unity to my Quest 2 has turned up a whole slew of errors so far. But the work I've done this week has improved my understanding of these problems, and I feel confident that I can finish them up in week 6. I also created some new documentation for the CGDT this week on downloading and installing Unity and on how to use assets that were originally IRL art in the virtual setting.

In the next week, besides finishing up the lingering tasks of week 5, I plan to adapt the code I've written for moving, rotating, and scaling objects so that they can be controlled in VR through joysticks instead of on a computer keyboard. Some of the original code of the CGDT might have to be adapted as well, such as Player Movement, which is also currently done with the keyboard. Looking further ahead, once I feel the CGDT has all the features I'd like it to have, I'll test the usability of these functions in a small study. Once all these pieces are in place, I'll be able to finish my paper!

Week 6: Week 6 had one of the most rewarding experiences of this REU so far: figuring out how to make automatic saving and loading work! It was very exciting to leave play mode, enter play mode again, and see the changes I had previously made saved. Even if I restart Unity and reopen the project, the changes remain. In my opinion, this is the most important aspect I’ve added to the CGDT. Without automatic saving and loading, the tools for moving, rotating and scaling objects aren’t very useful. Ideally, I’d love to add an inventory in play mode, so that dragging and dropping objects from the project window isn’t necessary, and a way to delete objects in play mode as well. Both of those things are definitely possible in the time I have left, but finding a way to save those changes as well might end up being beyond the scope of the project.
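A much-simplified sketch of the general approach (my own illustration, not the CGDT's actual implementation): serialize the editable objects' transforms to a JSON file in persistentDataPath when play mode ends, and restore them the next time play mode starts.

using System.Collections.Generic;
using System.IO;
using UnityEngine;

[System.Serializable]
public class SavedObject { public string name; public Vector3 pos, scale; public Quaternion rot; }

[System.Serializable]
public class SaveData { public List<SavedObject> objects = new List<SavedObject>(); }

public class SceneSaver : MonoBehaviour
{
    public Transform[] editableObjects;
    string SavePath => Path.Combine(Application.persistentDataPath, "scene_save.json");

    void Start()
    {
        // Restore any previously saved transforms.
        if (!File.Exists(SavePath)) return;
        var data = JsonUtility.FromJson<SaveData>(File.ReadAllText(SavePath));
        foreach (var saved in data.objects)
            foreach (var t in editableObjects)
                if (t.name == saved.name)
                { t.position = saved.pos; t.rotation = saved.rot; t.localScale = saved.scale; }
    }

    // In the editor this also fires when play mode is exited.
    void OnApplicationQuit() => Save();

    public void Save()
    {
        var data = new SaveData();
        foreach (var t in editableObjects)
            data.objects.Add(new SavedObject { name = t.name, pos = t.position, rot = t.rotation, scale = t.localScale });
        File.WriteAllText(SavePath, JsonUtility.ToJson(data));
    }
}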

Week 6 also brought another change of plans. I've been trying to build the CGDT to my Quest 2 so I can work with it on a headset. Unfortunately, I'm still getting a lot of errors. It may have something to do with the new scripts I've added to the CGDT. However, Dr. Wole and I agreed that working on deployment issues, especially when they've already taken up a lot of time, is probably not the best way to spend my remaining weeks of the REU. Although seeing the CGDT on a headset would have been very cool, I actually think working more on the desktop version is truer to the mission of the CGDT. The CGDT is meant to be accessible for students, artists, and non-game developers in general, and a lot more people own computers than VR headsets. At this point, though, I've learned to never say never, so who knows what Week 7 will bring!

Week 7: Success building to the headset! There were a few scripts in the CGDT that were editor-specific and therefore causing the building problems. I was able to remove those scripts and once I did, my scene built to the Quest 2. However, it doesn’t have any of the new capabilities that the desktop version of the CGDT has, so I’ve spent the last couple days figuring out what needs to change for the headset version. I created a new prefab for the CGDT that is VR-specific so that it relies on an OVRCameraRig instead of a Camera. Once this prefab is added, the user is able to fly around in the scene they’ve created, moving forward in the direction they’re facing, and rotating if they wish. I’d also like to move objects with raycasting, same as I did for the desktop version, so I’ve added the raycasting laser, although it isn’t able to grab anything yet.
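As an illustration of that flying locomotion, a minimal script might look like the following, assuming the Oculus Integration package's OVRInput and an OVRCameraRig in the scene; this is a sketch, not the exact CGDT code.

using UnityEngine;

// Fly the rig in the direction the headset is facing, driven by the left thumbstick.
public class FlyMovement : MonoBehaviour
{
    public Transform centerEyeAnchor;   // the camera anchor inside the OVRCameraRig
    public float speed = 2f;

    void Update()
    {
        Vector2 stick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);
        Vector3 move = centerEyeAnchor.forward * stick.y + centerEyeAnchor.right * stick.x;
        transform.position += move * speed * Time.deltaTime;
    }
}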

There was some other great stuff this week for the program. I participated in a user study for another student's project, and the whole program went ziplining at the Bronx Zoo together, which was really fun! The deadline for the paper is also coming up quickly, so I've been polishing my abstract and working on the implementation section. I was recommended a few applications for drawing illustrations of what my functions do, so I'll be adding those illustrations to my paper in our final week.

Week 8: The final week! Everything I worked on this week was related to polishing my paper and creating my presentation for the final day of the REU, today. I've learned a lot during this REU, both in terms of programming tools and skills like writing and presenting. I think my presentation went well, and I look forward to putting the finishing touches on my paper today. I wish I could have gotten more done on the VR version of the CGDT, but as this is an 8-week program, I'm really happy with everything I was able to accomplish. Thanks to all of my mentors for making this such a great experience!

Final Report was submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Amelia Roth and Daniel Lichtman. 2022. The Community Game Development Toolkit. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565661 – pdf

Explore Virtual Environments Using a Mobile Mixed Reality Cane Without Visual Feedback

Zhenchao Xia, Stony Brook University

Week1 – Working update:

This week, after meeting with my mentor about the overall structure and future development direction of the project, I realized that I needed to add a new mode to the original project: a learning mode that uses a laser pointer in VR to announce location and physical information when it interacts with other objects. Since the purpose of our project is to help O&M (orientation and mobility) trainers train blind people, we needed to add a very specific tutorial introduction section. This week, I started creating a new tutorial scene for the new learning mode and the original part of the project. In the scene, different objects will be generated in different locations of the room to guide the user through the different modes.

Week2 – Working update:

This week, I built a scenario that will be used as a user tutorial. In this scenario, the model representing the user is placed inside an irregular room model. The user runs the AR/VR program on a phone and mounts the phone on a selfie stick to use it as an exploration tool, a virtual cane. The user follows the generated waypoints, exploring the entire structure of the room and finding the exit. During the process, the user learns how to use the cane, the feedback given when the cane interacts with objects, and the guidance provided by the waypoints.

Week3 – Working update:

This week, I created a simple prototype according to the confirmed development requirements. In this scene, I replaced the human model from the actual project with a small square model. The laser beam shoots forward from the middle of the square, and when the body rotates, the laser beam rotates with it. When the pointer interacts with an object, the specific information about that object is announced. In the following week, after completing the basic functions of the laser beam, I will load it into different scenes of the project for testing.
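A simplified sketch of the laser pointer idea; the component and method names here are illustrative, not the project's actual code, and the announcement is only logged rather than spoken.

using UnityEngine;

// Objects carry a short description that the pointer can announce.
public class ObjectInfo : MonoBehaviour
{
    public string description;   // e.g. "wooden chair"
}

// Cast a ray forward from the "cane tip" and announce whatever it hits.
public class LaserPointer : MonoBehaviour
{
    public float maxDistance = 10f;

    void Update()
    {
        if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit, maxDistance))
        {
            var info = hit.collider.GetComponent<ObjectInfo>();
            if (info != null)
                Announce($"{info.description}, {hit.distance:F1} meters away");
        }
    }

    void Announce(string message)
    {
        // Placeholder: the real app would route this to text-to-speech.
        Debug.Log(message);
    }
}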

Week4 – Working update:

This week, I combined the laser pointer with the original user model and created a gesture menu that turns on and off based on the detected movement of the user's gesture. The laser pointer can interact with any object in the scene and give detailed item attributes along with voice feedback about its spatial location. Taking the direction the person is facing as 0 degrees, when the iPhone mounted on the cane is raised to 45 degrees, diagonally above the person, the gesture menu opens. In the gesture menu, users can switch between cane mode and laser pointer mode, skip/return/re-read voice messages, etc.

(Gesture Menu)

(Laser Pointer)

Week5 – Working update:

This week, I added all the existing functions to the gesture menu, through which the user can switch to any provided function at any time, including cane mode, laser pointer mode, hint, replay, etc. Considering that the content of the gesture menu may change in different scenarios, I created a base class for the menu, which contains all the basic functions related to it. In the future, we only need to create a script that inherits from the base class for special menus; the menu can then be customized by overriding those functions.
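A minimal sketch of that base-class pattern (the names and option list are illustrative, not the project's actual code):

using UnityEngine;

// Base class holding the shared menu behaviour.
public abstract class GestureMenuBase : MonoBehaviour
{
    protected int currentOption;
    protected string[] options = { "Cane Mode", "Laser Pointer Mode", "Hint", "Replay" };

    public virtual void Open()  { currentOption = 0; }
    public virtual void Next()  { currentOption = (currentOption + 1) % options.Length; }
    public virtual void Close() { Execute(options[currentOption]); }

    // Scene-specific menus override this to customize what each option does.
    protected abstract void Execute(string option);
}

// Example specialization for a tutorial scene.
public class TutorialMenu : GestureMenuBase
{
    protected override void Execute(string option)
    {
        Debug.Log("Tutorial menu selected: " + option);
    }
}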

Week6 – Working update:

This week, I made a tutorial for laser pointer mode, in which the user is trained on how to open the gesture menu with a special gesture, toggle the current option, confirm the use of the current function, and find targets with complex properties by switching between laser pointer mode and cane mode. Through user testing, I found that overly complex gestures are not easily recognized by the app, making it difficult for users to open the gesture menu. So I changed the way the user interacts with the device: when the pitch of the user's cane is between 270 and 360 degrees, the gesture menu opens. While the menu stays open, the current option automatically switches to the next item every two seconds; when the user closes the menu, the current option is executed.
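Building on the base-class sketch above, the pitch-triggered, auto-cycling behaviour could look roughly like this (again an illustration, not the app's actual code):

using System.Collections;
using UnityEngine;

// While the cane's pitch stays in the "raised" range, cycle the menu options
// every two seconds; when the cane is lowered, execute the current option.
public class GestureMenuController : MonoBehaviour
{
    public GestureMenuBase menu;   // from the earlier base-class sketch
    public Transform cane;         // the phone/cane transform
    private Coroutine cycling;

    void Update()
    {
        float pitch = cane.eulerAngles.x;           // 0-360 degrees in Unity
        bool raised = pitch >= 270f && pitch < 360f;

        if (raised && cycling == null)
        {
            menu.Open();
            cycling = StartCoroutine(CycleOptions());
        }
        else if (!raised && cycling != null)
        {
            StopCoroutine(cycling);
            cycling = null;
            menu.Close();                           // executes the current option
        }
    }

    IEnumerator CycleOptions()
    {
        while (true)
        {
            yield return new WaitForSeconds(2f);
            menu.Next();
        }
    }
}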

Week7 – Working update:

This week, I worked with my mentor and colleagues to design an experiment to test the app, including the flow of the experiment, the process of collecting data, and the evaluation of the results. In order to better analyze the data, we decided to upload the important data collected in the experiment, including the user's position, rotation, head movement, etc., to a Firebase database. Now I am implementing reading that data back from Firebase into Unity and, based on the records, having the "user" model move according to the actions of the real user, so that we can replay the experiment at any time, obtain more specific and accurate experimental data, and analyze the user's movement trajectory.
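A stripped-down sketch of the replay side of this (the Firebase download is omitted, and the types and field names are illustrative):

using UnityEngine;

[System.Serializable]
public class MotionSample
{
    public float time;
    public Vector3 position;
    public Quaternion bodyRotation;
    public Quaternion headRotation;
}

// Step a "user" model through recorded samples at their original timestamps.
public class ReplayPlayer : MonoBehaviour
{
    public Transform body;
    public Transform head;
    public MotionSample[] samples;   // loaded from the database before playback
    private int index;
    private float startTime;

    void OnEnable() { index = 0; startTime = Time.time; }

    void Update()
    {
        if (samples == null || index >= samples.Length) return;
        float t = Time.time - startTime;
        while (index < samples.Length && samples[index].time <= t)
        {
            body.SetPositionAndRotation(samples[index].position, samples[index].bodyRotation);
            head.localRotation = samples[index].headRotation;
            index++;
        }
    }
}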

Week8 – Working update:

This week, I successfully finished the data collection and replay functions, which let us capture the position and rotation of the user's body, the rotation of the cane, and the user's head movement. I also designed an informal test to verify the positive effect of my two new features, the laser pointer and the gesture menu. After receiving instructions for the two new features, users need to switch from the cane to the laser pointer and use it to explore the virtual room and build a mental map of the room's layout. Once they finish, they need to reconstruct that mental map on paper. We obtain the result by comparing the drawings with the actual layout of the virtual room. Due to the limited time, however, the experiment is not well defined; because users lacked strategies for exploring the complex virtual room, their data are not as reliable as expected. In the future, I will try to improve the design of the experiments.

Final report was submitted and accepted as a 2-page paper (poster presentation) at SUI 2022:
Zhenchao Xia, Oyewole Oyekoya, and Hao Tang. 2022. Effective Gesture-Based User Interfaces on Mobile Mixed Reality. In Symposium on Spatial User Interaction (SUI '22), December 1–2, 2022, Online, CA, USA. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3565970.3568189 – pdf

Virtual Reality and Public Health Project: Nutrition Education – Professor Margrethe Horlyck-Romanovsky

Talia Attar, Cornell University

Week One:

We kicked off the VR-REU 2022 program on Monday and convened in the Hunter College Computer Science Department, where Professor Wole, the other participants, and I finally got to meet and introduce ourselves to each other. In a meet-and-greet style meeting, the other participants and I had the honor of hearing the program mentors explain a bit about their work and their vision for integrating virtual reality into what they do. In Professor Wole's VR/AR/MR summer class this week, we learned about hardware and software, 3D geometry, and the basics of writing a research paper using LaTeX – a very useful introduction to a key component of doing research. Finally, as a group, we rounded out the week with an introduction to Paraview, using disk data to explore the breadth of its capabilities.

In regards to my research project, I met with my mentor, Professor Margrethe Horlyck-Romanovsky, and created a concrete concept for the project. After telling me about her research and the information gaps we currently face in generating a complete understanding of how people interact with their food systems, my mentor and I discussed how Virtual Reality could be used to study this gap. We formalized the necessary features of the Virtual Reality application and planned what the related study may look like. Heading into next week, I am excited to dive deeper into learning Unity and building out my project!

Week Two:

I entered Week Two excited to kick my project development into high gear. With the help of Professor Margrethe and Dr. Wole, I was able to enhance the specifications for my virtual reality simulation and create a more detailed vision. I began implementing the simulation, a process that was slow at first as I familiarized myself with the XR features of Unity. However, as the week progressed, I grew more comfortable with this type of development and made headway on the first scene in my project – a city block.

In addition to working on the simulation, I also spent a significant amount of time considering aspects of the study itself. Professor Margrethe, Dr. Wole, and I discussed details from recruiting participants to analyzing produced results, allowing the study to come into clearer view. I was also fortunate to receive valuable and detailed advice around literature reviewing and other aspects of research papers from Professor Margrethe.

The REU members and I ended the week as a group, and Dr. Wole taught us about using Tableau for data visualization. The sample dashboard I created through his tutorial can be found here.

Week Three:

Week three marked an exciting point in the program as I was able to begin deploying my simulation to the Meta Quest 2 VR headset. This was the first time I had gotten to wear a VR headset outside of the demo last week, and it was informative to be able to explore a variety of simulations for an extended period of time. The highlight was certainly successfully building my Unity project directly on to the headset. In regards to the simulation itself, I began a different approach to creating my 3D scene compared to last week in an attempt to enhance the level of detail present. I also began the interactive level of the project by coding the XR rig to follow a fixed, controlled path around the simulation.

In addition to work on my personal study, I joined the other participants in learning a new visualization tool: VMD.

Week Four:

This week I saw the largest progress in my Virtual Reality development process to date. With the help of some carefully selected asset packages from the Unity store, I was finally able to get over the hump of world building and begin implementing more of the user interactions. I successfully completed a draft of the first layer of the world: the city-level view with three food sources. The user is taken on a fixed path walk around the block, with the freedom to move their head to look around. At the end of this walk, a pop-up appears for the user to select where they would like to enter with their laser pointer, and then they are taken on another fixed path walk to the food vendor of their choosing. Upon arriving, the following scene – interior of the store – loads. Developing the interactive UI for this selection step of the process was the largest technical challenge I faced to date, as the Unity UI support was developed for a 2D setting. However, with the help of many (many) YouTube videos and other online resources, I was able to use the Oculus Integration package to adapt the UI features effectively to Virtual Reality. 
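As a simplified sketch of the selection logic (scene and button names here are placeholders, not my actual project code), each button on the world-space pop-up simply loads the corresponding interior scene:

using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

// Wire each pop-up button to the interior scene for the chosen food source.
// The named scenes are assumed to be included in the build settings.
public class FoodSourceSelector : MonoBehaviour
{
    public Button greenGrocerButton;
    public Button supermarketButton;
    public Button fastFoodButton;

    void Start()
    {
        greenGrocerButton.onClick.AddListener(() => SceneManager.LoadScene("GreenGrocer"));
        supermarketButton.onClick.AddListener(() => SceneManager.LoadScene("Supermarket"));
        fastFoodButton.onClick.AddListener(() => SceneManager.LoadScene("FastFood"));
    }
}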

                                     

Next week will entail continuing the development flow to build out the next layer of the simulation.

Week Five:

During Week Five, I picked up right where I left off in my last blog post: implementing the “interior” layer of the simulation. This entailed crafting three new scenes and mini “worlds” to represent the green grocer, the supermarket, and the fast food restaurant. Professor Margrethe and I discussed the appropriate foods and information to present in each food source and ended up with a carefully crafted list of what is included. The two main tasks I faced in development were figuring out how to appropriately represent the relevant foods and constructing a logical and clear interface for the user to interact with the food options to simulate a shopping experience. The latter task was challenging in terms of both design and actual implementation, but I ended the week with a solid vision and corresponding code to do so. In Week Six, I will be finishing applying the interactive layer throughout all three food sources and generally cleaning up any loose ends within the simulation. 

The other program participants and I ended the week with a fun field trip to the CUNY Advanced Science Research Center and got to see applications of virtual reality as well as many other interesting and complex ongoing research projects!

Week Six:

Week Six entailed the final push of development on the simulation. One main addition this week was the creation of a text file log that records statistics about the user's interactions. This will be incredibly useful for gathering detailed results about user behavior within the simulation. Another important development from this week was that many new food items were added as possible options to expand the breadth of choices and potential purchases the user might make. Finally, I added components to provide direction and explanation to the user to enhance ease of use. With these exciting developments, running the study with participants next week feels promising!
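A minimal sketch of what such a log component can look like (the file name and format are illustrative, not the exact ones used in the study):

using System.IO;
using UnityEngine;

// Append one timestamped line per user action to a text file on the headset.
public class InteractionLogger : MonoBehaviour
{
    private string path;

    void Awake()
    {
        path = Path.Combine(Application.persistentDataPath, "interaction_log.txt");
    }

    public void Log(string eventDescription)
    {
        File.AppendAllText(path, $"{Time.time:F2}\t{eventDescription}\n");
    }
}

// Example use from a purchasing script:
//   logger.Log("Added apple to basket at Green Grocer");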

The images below are screenshots taken directly from deployment of the simulation on the Oculus Quest 2. They show the user purchasing interface in two of the food businesses.

            

 

Week Seven:

This week was very exciting because I finally ran the study using the simulation! The week began with final preparations that included constructing the survey for people to fill out after the VR experience and addressing any lingering bugs in the simulation. Throughout the week, I was able to recruit 12 participants and administer the VR simulation and survey to each. It was an incredibly rewarding experience to see the outcome of my Unity development process put to use.

I concluded the week by beginning to analyze the results and writing them up for the final research paper. Looking forward to next week, the final week of the program, I will be finalizing the relevant results, writing my paper, and preparing for the final presentation!

Week Eight:

This week marked the final week of the REU program. I spent the bulk of the week completing the short paper to submit to the VRST 2022 conference taking place in Tsukuba, Japan this fall. A large portion of this process was analyzing the results of the study. The simulation and study yielded data around a variety of different factors, such as the decision outcomes of the simulation and the usability score measured from a system usability questionnaire component of the survey. I combined different aspects of the data to generate several key findings around behavioral and decision-making patterns in the simulation. However, the most critical part of this preliminary study was that, mainly supported by the high usability and presence scores, virtual reality shows promise as a tool for studying individual food consumer behavior in a multilevel food environment, and the study findings warrant further research into this application.

The program concluded with a wonderful day of presentations, and I was fortunate to hear about the work done by my fellow REU participants throughout the summer.

Thank you to Dr. Wole for facilitating this program and to my mentor, Dr. Margrethe Horlyck-Romanovsky, for her endless support throughout this process.

Final Report was submitted and accepted as a 2-page paper (poster presentation) at VRST 2022:
Talia Attar, Oyewole Oyekoya, and Margrethe F. Horlyck-Romanovsky. 2022. Using Virtual Reality Food Environments to Study Individual Food Consumer Behavior in an Urban Food Environment. In 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22), November 29-December 1, 2022, Tsukuba, Japan. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3562939.3565685 – pdf

Virtual Reality and Structural Racism Project

Ari Riggins, Princeton University

Project: Virtual Reality and Structural Racism Project

Mentors:  Courtney Cogburn and Oyewole Oyekoya

Week 1:

This week after meeting with Dr. Wole to discuss the specifics of the project and brainstorming ideas and research questions to be explored, I began writing my project proposal. This proposal discusses the goals and methodology for the project. 

This project aims to create an effective virtual reality based visualization that brings to light the disparities of structural racism within housing. The visualization will be based on data from different cities within the United States. We will use property value data as well as the racial demographics of the areas as input; this data will be represented as a three-dimensional street or residential area with houses of changing dimensions, where the dimensions of each house are proportional to its value over time and color displays the racial component.

In addition to the project proposal, this week I also downloaded the program Unity and began getting used to it and thinking about how it could work for the project.

Week 2:

My goals for this week were mainly to learn how to use Unity to build the project and to do some background research on the topic and summarize it. I downloaded the Unity ARKit and began following some tutorials to learn how to use it. So far, I have managed to make an iOS AR application which uses the phone camera to display the world with an added digital cube.

cube in window

After discussion with Dr. Wole, the project idea evolved a bit to involve displaying the residential area as an augmented reality visualization where it can be viewed through a device as resting on top of a flat surface such as a table or the ground. The next step that I am currently working on in Unity is surface detection so that the visualization can align with these surfaces.

In terms of research, I found several relevant sources investigating structural racism within housing. I came across the University of Minnesota’s Mapping Prejudice project which hosts an interactive map of covenants in Minnesota where there were restrictions on the race of property owners and tenants. This project provides a view of one method of visualization for data on racial discrimination within housing.

Week 3:

This week was spent focusing mostly on the data. I met with Dr. Cogburn and Dr. Wole, and we discussed a more specific view of the visualization. Dr. Cogburn brought up a report by the Brookings Institution which investigates the devaluation of Black homes and neighborhoods; this report will serve as the jumping-off point for the data of this project as well as a reference for discussion of the topic.

The data used in the report comes from the American Community Survey performed by the US Census Bureau and from Zillow. It will be necessary to find similar data from the census for this project. We decided that, for now, the project should focus on one geographic area as a case study of the overall inequality. The city I am planning to focus on is Rochester, New York; it was represented in the Brookings report and was shown to have a large disparity in the valuation of Black and White homes.

Week 4:

This week in Unity I continued working with the ARKit to detect surfaces and display the visualization on them. We discussed the data after running into a roadblock where we did not have access to all of the information we wanted. The Brookings report had not provided the names of the specific towns and areas it found to be comparable, so we cannot find data on them individually. However, we are able to use the reported data by changing our visualization a bit: instead of being on a timeline, the houses will be on a sliding scale by the factor of race.

I also gave my midterm presentation this week which helped me solidify my background research for the project, as well as explain it in a clear manner.

Week 5:

This week I was mostly working in Unity. I found a free house asset that works for the project, and I used the ARKit to place it on any detected plane. I also worked on getting a United States map to serve as the basis of the visualization on the plane. We decided to use multiple locations from the Brookings report as case studies, so I am now writing the script that changes the house size in accordance with this data. Now that I have the pieces working, I need to arrange the scene and scale everything, as well as create some instructions for use.

I have also been working on my paper and am currently thinking about the methodology section.

Week 6:

This week, in terms of writing the paper, I made a short draft of my abstract and began working on the methods section. I worked in Unity to get the house asset into AR and to write a script that adds the growing animation shown in the video below. I added an input to the house which dictates the disparity to be displayed through the amount of growth of the house. I am also looking into changing the color of the house and having it fade from one color to another. When I met with my mentors, they suggested that I try some different approaches to the overall visualization, such as adding avatars to depict the neighborhood demographics of the house and changing the color of the house to green or some other monetary representation to depict the change in value.
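A rough sketch of the kind of script involved (my own illustration, not the project's actual code): the house's scale and roof color are driven by the same normalized time so the two effects stay in sync.

using UnityEngine;

// Grow the house in proportion to a "disparity" input while fading the roof's color.
public class HouseGrowth : MonoBehaviour
{
    public Renderer roof;
    [Range(0f, 1f)] public float disparity = 0.5f;   // drives how much the house grows
    public float duration = 5f;
    public Color startColor = Color.white;
    public Color endColor = Color.green;

    private Vector3 baseScale;
    private float elapsed;

    void Start() { baseScale = transform.localScale; }

    void Update()
    {
        elapsed += Time.deltaTime;
        float t = Mathf.Clamp01(elapsed / duration);
        transform.localScale = baseScale * (1f + disparity * t);       // size tracks the value disparity
        roof.material.color = Color.Lerp(startColor, endColor, t);     // roof color fades in sync
    }
}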

Week 7:

This week, I have been working to get my demo finished. I fixed the shrinking issue with the house and added the color change to the roof, though I still have to sync these two processes. In a meeting with my mentors, we decided that I should focus on completing this one scene instead of working on two, due to the limited time left. We also discussed the background of the scene and things I could add to make it feel more like a neighborhood, as well as labeling and how I could make clear what data the visualization is actually conveying. At this point, my work will be finishing this demo and wrapping up all we discussed and what I've worked on into a presentation.

Week 8:

In the final week, I was mainly focused on preparing for my presentation and finishing up every aspect of the project. I also worked to finish the paper along with my presentation. In terms of my visualization, I had the case study visualization of one house changing in size and color, but at my mentor meeting we discussed the significance of the color and other possibilities. I ended up making two other versions of the visualization using different colormaps representing the racial make-up of the communities.

Final Report
