Amateur Confidence in Creativity with the Community Game Development Toolkit
Lance Cheng (he/him), University of Massachusetts Amherst
Week 1: Mon 06/03 – Sun 06/09
Hi! A little bit about myself: I’m Lance, I use he/him pronouns, and I’m a native New Yorker. I just finished my first year at UMass, where I study data science, CS, public interest technology, and comparative literature. Besides academics, I also love volunteering as a notetaker, working as a TA and at UMass’s queer resource center, learning languages, and playing guitar.
This summer, I’ll be working on the Community Game Development Toolkit with Professor Daniel Lichtman. The Toolkit is a set of tools for the Unity game engine that allows you to make collage scenes, and it was particularly developed so people without technical game development skills could still create games – otherwise, we’d miss out on so many of their unique perspectives! I hope this blog can be useful to future applicants to the REU who want to see what the experience is like or future students who work with Dan and want something to reference.
I spent most of the latter half of the week doing some literature review and coming up with different experimental designs, with the goal of the experiments being to determine if the Toolkit’s features help people feel more creative and in touch with themselves. It was great meeting Dr. Wole (who organizes this REU), the mentors, and the other interns so far, and I’m excited to work more with all of them in the coming weeks! I’m also excited to bring together the artistic and quantitative aspects of computation and figure out how to design something that maximizes creative possibilities.
Week 2: Mon 06/10 – Sun 06/16
Second week completed! The biggest event of this week was finalizing the basis of the experiment I’ll be running. Dan wanted to see how the Toolkit could help diverse communities tell stories about themselves, and to make that benchmark a little more measurable, I’ve decided to investigate if the Toolkit’s collage-style approach makes people more confident in their creativity compared to other tools. Most of my time was spent brainstorming experiment structures, doing even more literature review, and drafting the introduction and related works sections of my paper. This was also my first time using LaTeX, which was much easier than I thought it would be, thankfully.
Something else that’s been really helpful: reaching out to Dan’s former interns! Two people have worked with Dan before me (Amelia Roth and Habin Park, both of whose publications are linked on this REU home page), and both of them are lovely people who gave thoughtful advice when I discussed some of the problems I was running into. It sounds obvious, but to anyone in the future, please do reach out to past REU cohorts; it made me feel much less isolated to know they encountered the same issues and published successfully despite that.
Week 3: Mon 06/17 – Sun 06/23
Come on and slam and welcome to the (game) jam. I’ve been reflecting on itch.io’s visual novel and narrative game community, which is largely made up of amateurs who want to tell stories about themselves – exactly the goal of the Toolkit. Some of my favorite itch creators include graeme borland, Angela He, and Nicky Case. I’m also happy to announce that the first few trials (i.e., people I will be making into my Unity guinea pigs) will be run next Tuesday! They’ll be asked to take a “before” survey, perform a creative task using Unity and the Toolkit, and then take an “after” survey.
Week 4: Mon 06/24 – Sun 06/30
I’ve finished drafting the first few sections of the paper (Abstract, Introduction, Related Works, and Methods). For the next couple weeks before I have enough data to analyze, most of my work will consist of brute-forcing my way into finding experiment subjects.
Week 5: Mon 07/01 – Sun 07/07
This week, I performed my first trial! The one subject I’ve worked with picked up Unity and the Toolkit a lot faster than I thought people would, and this person was on the less technically inclined side, so it can only be up from here. Plus, now I have at least a couple images of that subject’s creation for the paper. I have thirteen (!!!!) more subjects lined up, as well as some candidates I need to get in touch with, so I’m feeling a lot more optimistic about my sample size and confidence levels! In a perfect world, I’d like to have somewhere in the ballpark of 25 subjects, but honestly, fourteen isn’t too bad.
Week 6: Mon 07/08 – Sun 07/14
Thirteen subjects turned out not to be quite the correct number; because of a lack of access to a computer that can run Unity (i.e., without exploding…), some participants have had to drop out. However, I’m working with my advisor to source more from his past students and other communities he’s worked with, so hopefully there’ll be an uptick soon. Regardless of whether I can source more participants, I held five more trials this week. It doesn’t sound like a ton, but all of them are one-on-one and have run about forty minutes each so far, so they take a lot of energy. At least a few more trials in the following week will bring me up to ten participants total.
While the 25-subject ballpark was not reached by any means, I feel decently satisfied knowing that it is at least an improvement on sample sizes in past Toolkit studies. And isn’t that what research is all about?
Week 7: Mon 07/15 – Sun 07/21
I managed to get one last participant in the ring, so the total sample size is eleven participants. I also have 22 total respondents to the “before” survey (11 of whom did not complete the experimental task), after asking people to complete it just so I could have a broader range of data. I’ve also begun looking at the data. The quantitative data doesn’t seem to show much statistical significance, but the qualitative data in participants’ written responses is super interesting and provides more specific insight into what features would be helpful. In particular, something like Scratch’s sprite creation canvas might see a lot of use.
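For anyone curious what checking significance on paired before/after survey scores can look like, here’s a minimal sketch of a paired t-test using only the standard library. The Likert scores below are made up for illustration – they are not my actual study data:

```python
# Paired t-test sketch on hypothetical before/after creative-confidence
# scores (1-5 Likert). Not the study's real data or analysis code.
import math
import statistics

before = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 3]  # hypothetical "before" scores
after  = [4, 4, 3, 5, 4, 4, 3, 3, 4, 3, 4]  # hypothetical "after" scores

diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)          # sample standard deviation of differences
t = mean_d / (sd_d / math.sqrt(n))      # paired t statistic, df = n - 1

print(f"mean difference = {mean_d:.3f}, t({n - 1}) = {t:.3f}")
```

With a real dataset you’d compare the t statistic against a t distribution with n − 1 degrees of freedom (or use a non-parametric alternative like the Wilcoxon signed-rank test, since Likert data is ordinal).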
Week 8: Mon 07/22 – Fri 07/26
The final week! I was happy to have spent more time with my fellow interns this week, and my final presentation to the Iowa State SPIRE REU students went pretty well too. Some closing thoughts:
- As implied above, I wish I’d made more time to bond with the other interns.
- On a similar note, it would have been nice to connect more with the graduate research assistants and previous REU participants as well.
- I underestimated just how confusing LaTeX can be… “Why is my table there?” has become my mantra.
- People find Unity a lot more intuitive once you help them realize it’s not really that scary; its interface just isn’t the most approachable. And even less creatively inclined individuals can find fun in it!
While the REU is wrapping up, the paper itself hasn’t quite wrapped up yet. I’ll be continuing to work on it in the coming week for submission to ISS on August 15th, so communication with Dr. Wole and my mentor about technicalities is still ongoing (not to mention LaTeX confusion, because why actually is my table there?).
All in all, it’s been a pretty excellent summer. I really appreciated the opportunity to perform meaningful research into creativity and digital media, to connect with a wide range of students in computer science and other STEM fields, and to grow my appreciation for interdisciplinary study! You can find below some of the scenes created by participants in Unity – they really are something.
In order: “First(ish) Steps,” Arthur Murray; “Mommy’s Little Helper,” John Cheng; “Summertime Backyard Memories,” Mia Ikeda.
Visualization of Point Mutations in Fibronectin type-III domain-containing protein 3 in Prostate Cancer
Samantha Vos, Virginia Wesleyan University
Week 1
This week I developed my project proposal with Dr. Olorunseum Ogunwobi. We decided to develop a visualization of the cell lineage plasticity process using either VMD or Paraview. To do this, I will find either the same or extremely similar miRNA, mRNA, and oncogenes to those Dr. Ogunwobi suggested I use for the research. I began to search for these components in databanks that have files compatible with VMD or Paraview. So far it has been difficult to find these molecules in compatible files. The PDB is a great databank for proteins, but since I’m dealing with DNA, I need to find a databank that provides compatible files for genetic components. I’ve been reading Dr. Ogunwobi’s previous publications to learn more about this phenomenon’s purpose and process. This project will be focused on combining this specific biological process with coding to produce a visualization that can be used to educate people.
We had class this week, where we learned about the definitions, history, and functionality of VR, AR, and MR. We also had a self-paced lab where we learned more about Paraview. There are also lectures on VMD, so I will utilize both of these next week. I have begun to input proteins into VMD and Paraview so I can learn to upload files and manipulate molecules in this software.
These are pictures of the protein 4LSD, a protein I used to test the functionality of Paraview and VMD.
Week 2
This week I found a compatible file for FNDC3, which will be the target of my microRNA. I learned how to use LaTeX and Overleaf to write my research paper, and added my literature review to the bibliography. I also completed the required CITI certification for Responsible Conduct of Research (RCR) and HSR for Undergraduate Students. On Friday I attended the CUNYSciCom: Communicating Your Science Symposium to learn how to teach others about your research, for both general and scientific audiences. I found this experience to be extremely useful, as presenting research can itself be difficult without the added pressure of making sure your audience understands it. I will definitely be using some of the tactics used by the presenters in my future research presentations and posters. This week I still had some trouble finding files that are compatible with VMD, so I will be asking Dr. Ogunwobi and Dr. Wole for advice on how to find these files. When I do find them, I will be using them to help educate others about miRNA and how it can be used as a cancer treatment.
Week 3
This week I focused on finding compatible files for mRNA. As far as I know, there is a very specific file type that can hold DNA and RNA and upload them onto VMD. However, I have absolutely scoured the internet and have yet to find files that were compatible. Due to this complication, my project has been adjusted. I will be taking the FNDC3 protein and mutating it by changing different amino acids in different positions to change the protein’s functionality or structure. I will be using mutations that have been found in prostate cells and that have a correlation with cancer proliferation. I will then be comparing the mutations to their original protein and demonstrating how the mutations affect the prostate cells. I have already found 9 mutations that lead to cancer in prostate cells, and next week I will be mutating the original FNDC3 protein in VMD with the amino acid adjustments.
Week 4
This week I focused on trying to find the binding or active site of FNDC3 so that I can make mutations to its sequence. The 9 mutations that I had previously found were not compatible with the VMD software, so I will be working on finding new mutations in FNDC3. I created a mutation in VMD and labeled the mutated amino acid for my midterm presentation on Friday. Everyone’s presentations looked amazing and very well developed. It was fascinating to learn about other people’s projects and how they have overcome obstacles they’ve encountered. I am working on developing an interactive video with my mutations so that the viewer can move and examine the mutations. I will be looking for more mutations and watching lots of videos next week to learn how to do this.
Week 5
This week I found 8 mutations that work with my protein. I applied these mutations using the mutate residue feature in VMD and saved all of their visualization states. I also made visualization states of the originals with the same position highlighted so that it would be easier to compare them to the mutations. I also figured out how to have multiple visualization states in one window, so now I can create my larger, final visualization state to use for my interactive video. Next week I will be working on my interactive video and sending it to the other participants in the program to test its efficiency and capabilities. I will also be working on creating a survey for them to fill out so I can get feedback on how to better my video and perfect it before the end of the program. I have learned so much throughout this program and I am excited to keep learning throughout these final weeks.
Week 6
This week I experienced some difficulty in recording my video. I struggled to figure out how to create an interactive video, and I learned that I do not have the proper computer for ray tracing, so I cannot create an interactive video on my computer. I worked on my paper this week to make sure it was designed exactly how I pictured it, made small changes, and found more sources for my literature review. I decided to work on styling the representations of my mutations to make sure that when I did make my presentation, it looked perfect. Here are some pictures of my mutations. Next week I will be recording a video and sending it out for data collection.
Week 7
I recorded my video successfully, which was an absolute relief. I now have to collect data and analyze it. I will be working on my survey for my presentation and finishing up my paper. I have faced many challenges during this project, and I am happy that I can finally say I have completed it. I am proud of my work, and I am excited to have other people test it. I used many different features of VMD to create my visualization, like mutate residue, sequence viewer, representations, and atom labeling. I enjoyed working on this project’s completion, and now I will be working on a PowerPoint presentation to sum up my project at the VR-REU Symposium 2023. I am very excited to present my work!
Week 8
This week I finished collecting data from my survey and analyzed it, finally adding it to my paper. I also presented my work at the VR-REU Symposium 2023, and I really enjoyed teaching others about my project and the intricacies of protein structure. My fellow students all had wonderful presentations, and I was truly impressed by all of the work they had done. I think it was a successful presentation, and afterwards we got together and had lunch as a final group celebration of our work. On Friday, we came together one last time to finish our papers, submit them, and put all of our data files on GitHub. I submitted my paper to the ISS conference, and I really hope it gets accepted. I felt happy about completing the program, but also a small sense of sadness at its ending. I truly had a great experience, and I learned a lot about coding and the computer science world. I believe this experience has helped me grow, and I will never forget my classmates, Dr. Ogunwobi, or Dr. Wole.
Final Paper:
Samantha Vos, Oyewole Oyekoya, and Olorunseun Ogunwobi. 2023. Visualization of Point Mutations in Fibronectin Type-III Domain-Containing Protein 3 in Prostate Cancer. In Companion Proceedings of the 2023 Conference on Interactive Surfaces and Spaces (ISS Companion ’23). Association for Computing Machinery, New York, NY, USA, 10–13. https://doi.org/10.1145/3626485.3626531 – pdf
Explore Virtual Environments Using a Mobile Mixed Reality Cane Without Visual Feedback
Zhenchao Xia, Stony Brook University
Week 1 – Working update:
This week, after meeting with my mentor about the overall structure and future development direction of the project, I realized that I needed to add a new mode to the original project: a learning mode that uses a laser pointer in VR to announce location and physical information when it interacts with other objects. Since the purpose of our project is to help O&M (orientation and mobility) trainers train blind people, we needed to add a very specific tutorial introduction section. This week, I started creating a new tutorial scene for the new learning mode and the original part of the project. In the scene, different objects will be generated in different locations of the room to guide the user through the different modes.
Week 2 – Working update:
This week, I built a scenario that will be used as a user tutorial. In this scenario, the model representing the user is placed inside an irregular room model. The user runs the AR and VR programs on a phone and mounts the phone on a selfie stick to use it as an exploration tool – a virtual cane. The user follows the generated waypoints, exploring the entire structure of the room and finding the exit. During the process, the user learns how to use the cane, the feedback given when the cane interacts with objects, and the waypoint guidance.
Week 3 – Working update:
This week, I created a simple prototype according to the confirmed development requirements. In this scene, I replaced the human model in the actual project with a small square model. The laser beam shoots forward from the middle of the square, and when the body rotates, the laser beam rotates with it. When the pointer interacts with an object, the object’s specific information is broadcast. In the following week, after completing the laser beam’s basic functions, I will load it into different scenes of the project for testing.
Week 4 – Working update:
This week, I combined the laser pointer with the original user model and created a gesture menu that turns on/off based on the detected movement of the user’s gesture.
The laser pointer can interact with any object in the scene, giving detailed item attributes and spoken feedback about its spatial location.
Taking the direction the person is facing as 0 degrees, when the iPhone mounted on the cane is raised to 45 degrees – diagonally above the person – the gesture menu opens. In the gesture menu, users can switch between cane mode and laser pointer mode, skip/return/re-read voice messages, etc.
(Gesture Menu)
(Laser Pointer)
Week 5 – Working update:
This week, I added all the existing functions to the gesture menu, through which the user can switch to any provided function at any time, including cane mode, laser pointer mode, hint, replay, etc. Considering that the content of the gesture menu may change in different scenarios, I created a base class for the menu, which contains all the basic functions related to the menu. In the future, we only need to create a script that inherits from the base class for a special menu; the menu can then be customized by overriding specific functions.
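The base-class idea can be sketched roughly like this – in Python for brevity rather than the project’s actual Unity C#, and with all names being my own hypothetical illustration:

```python
# Sketch of the menu base-class design: shared behavior lives in the base
# class; a scenario-specific menu only overrides what differs.
class GestureMenu:
    """Base class holding the behavior shared by every menu."""

    def __init__(self, options):
        self.options = list(options)
        self.index = 0

    def toggle(self):
        # Advance to the next option, wrapping around at the end.
        self.index = (self.index + 1) % len(self.options)

    def current(self):
        return self.options[self.index]

    def confirm(self):
        # Subclasses override this to execute the selected option.
        raise NotImplementedError


class TutorialMenu(GestureMenu):
    """A specialized menu that customizes confirm() for one scenario."""

    def confirm(self):
        return f"running {self.current()}"


menu = TutorialMenu(["cane mode", "laser pointer mode", "hint", "replay"])
menu.toggle()            # move from the first option to the second
print(menu.confirm())
```

The payoff of this pattern is that adding a new scenario’s menu means writing only one small subclass instead of duplicating the toggle/selection logic.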
Week 6 – Working update:
This week, I made a tutorial for laser pointer mode, in which the user is trained on how to open the gesture menu with a special gesture, toggle the current option, and confirm the use of the current function, as well as how to find targets with complex properties by switching between laser pointer mode and cane mode. Through user testing, I found that overly complex gestures are not easily recognized by the app, and it was difficult for users to open the gesture menu. So I changed the way the user interacts with the device: when the pitch of the user’s cane is between 270 and 360 degrees, the gesture menu opens. While the menu stays open, the current option automatically switches to the next item every two seconds. When the user closes the menu, the current option is executed.
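The revised interaction logic can be sketched like this – in Python rather than the project’s Unity C#, with the class and option names being my hypothetical illustration:

```python
# Sketch of the pitch-based gesture menu: opening when the cane's pitch is
# in the 270-360 degree range, auto-cycling options every two seconds while
# open, and executing the current option when the menu closes.
class PitchMenu:
    OPTIONS = ["cane mode", "laser pointer mode", "hint", "replay"]

    def __init__(self):
        self.is_open = False
        self.index = 0
        self.elapsed = 0.0

    def update(self, pitch_degrees, dt):
        """Called once per frame with the cane's pitch and the frame time."""
        in_range = 270 <= pitch_degrees <= 360
        if in_range and not self.is_open:
            # Raising the cane into range opens the menu at the first option.
            self.is_open, self.index, self.elapsed = True, 0, 0.0
        elif in_range and self.is_open:
            self.elapsed += dt
            while self.elapsed >= 2.0:   # auto-advance every two seconds
                self.elapsed -= 2.0
                self.index = (self.index + 1) % len(self.OPTIONS)
        elif not in_range and self.is_open:
            # Lowering the cane closes the menu and executes the selection.
            self.is_open = False
            return self.OPTIONS[self.index]
        return None


menu = PitchMenu()
menu.update(300, 0.0)            # cane raised into range: menu opens
menu.update(300, 2.0)            # two seconds pass: option auto-advances
result = menu.update(90, 0.0)    # cane lowered: menu closes, option returned
print(result)
```

A time-based auto-cycle like this trades speed for reliability: the user only needs one coarse gesture (raise, wait, lower) instead of a fine-grained gesture the recognizer might miss.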
Week 7 – Working update:
This week, I worked with my mentor and colleagues to design an experiment to test the app, including the flow of the experiment, the process of collecting data, and the evaluation process for the results. In order to better analyze the data, we decided to upload the important data collected in the experiment – including the user’s position, rotation, head movement, etc. – to a Firebase database. Now I am implementing reading the data from the Firebase database in Unity so that the “user” model moves according to the recorded actions of the real user. This way, we can reproduce the experiment at any time, get more specific and accurate experimental data, and analyze the user’s movement trajectory.
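The core of the replay idea – turning a timestamped log of positions back into smooth motion – can be sketched like this (Python for illustration only; the log format and function name are my hypothetical stand-ins, not the actual Unity/Firebase code):

```python
# Sketch of replaying logged motion: linearly interpolate between the two
# recorded samples that bracket the requested playback time.
def replay_position(samples, t):
    """samples: list of (timestamp, (x, y, z)) sorted by timestamp."""
    if t <= samples[0][0]:
        return samples[0][1]          # before the log starts: clamp
    if t >= samples[-1][0]:
        return samples[-1][1]         # after the log ends: clamp
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # interpolation factor in [0, 1]
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))


# Hypothetical log: the user walks forward, then turns and walks sideways.
log = [(0.0, (0.0, 0.0, 0.0)), (1.0, (2.0, 0.0, 0.0)), (2.0, (2.0, 0.0, 2.0))]
print(replay_position(log, 0.5))   # halfway between the first two samples
```

Rotations would need the same treatment but with quaternion interpolation (slerp) rather than straight-line lerp, since Euler angles wrap around.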
Week 8 – Working update:
This week, I successfully finished the data collection and replay function, which allows us to capture the position and rotation of users’ bodies, the rotation of the cane, and users’ head movements. I also designed an informal test to verify the positive effect of my two new features, the laser pointer and the gesture menu. After receiving instructions for the two new features, users need to switch from the cane to the laser pointer and use the laser pointer to explore the virtual room and build a mental map of the room’s layout. Once they finish, they need to reconstruct that mental map on paper. We get the result by comparing their drawings with the actual layout of the virtual room. Due to the limited time, however, the experiment is not well defined; because users lacked strategies for exploring the complex virtual room, their data is not as reliable as expected. In the future, I will try to improve the design of the experiments.
Final report submitted and accepted as a 2-page paper (poster presentation) in VRST 2022:
Zhenchao Xia, Oyewole Oyekoya, and Hao Tang. 2022. Effective Gesture-Based User Interfaces on Mobile Mixed Reality. In Symposium on Spatial User Interaction (SUI ’22), December 1–2, 2022, Online, CA, USA. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3565970.3568189 – pdf