Uncertainty Visualizations of Hurricane Forecasts

Visualization techniques are widely used by the U.S. National Hurricane Center to help viewers understand hurricane forecasts and their underlying uncertainty. The track forecast cone is the representation almost universally adopted by the general public, the news media, and government officials. However, recent research has shown experimentally that it has limitations that lead to misconceptions about the uncertainty it conveys. Most importantly, the area covered by the cone tends to be misinterpreted as the region that will be affected by the hurricane. In addition, the cone summarizes forecasts for the next three days in a single representation, making it difficult for viewers to accurately determine crucial time-specific information. To address these limitations, this research proposes novel alternative visualizations. The work began with a technique that generates and smoothly interpolates robust statistics from ensembles of hurricane predictions, yielding visualizations that inherently include spatial uncertainty by displaying three levels of positional storm-strike risk at a specific point in time. To address the misinterpretation of the area covered by the cone, the research then developed time-specific visualizations that depict spatial information using a sampling technique that selects a representative subset from an ensemble of points; these also allow depiction of important storm characteristics such as size and intensity. A collaborative cognitive study indicates that these visualizations support more accurate interpretation than the track forecast cone because they enhance viewers' ability to understand the predictions.
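One way to select a representative subset from an ensemble of predicted storm positions is greedy farthest-point sampling, which spreads the chosen points across the ensemble's spatial extent. The sketch below is illustrative only (the data layout and function name are assumptions, not the project's actual code):

```python
import math
import random

def farthest_point_sample(points, k, seed=0):
    """Select k representative points from an ensemble by greedy
    farthest-point sampling: start from a random point, then repeatedly
    add the point farthest from the already-chosen subset."""
    rng = random.Random(seed)
    chosen = [rng.choice(points)]
    while len(chosen) < k:
        # A candidate's score is its distance to the nearest chosen point.
        def min_dist(p):
            return min(math.dist(p, c) for c in chosen)
        chosen.append(max(points, key=min_dist))
    return chosen

# Toy ensemble of predicted storm positions (lon, lat).
rng = random.Random(42)
ensemble = [(-80 + rng.random(), 25 + rng.random()) for _ in range(200)]
subset = farthest_point_sample(ensemble, 10)
print(len(subset))
```

A subset chosen this way covers the outer spread of the ensemble as well as its center, which suits a display of graded strike-risk regions better than uniform random thinning would.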

Touch Interfaces for Teaching STEM

The goal of this project is to create a touch-interface design, built with Unity3D and the Gestureworks API, that is suitable for STEM education research and may be applicable to chemistry students of all ages.

LIDAR Scan of Clemson University

LIDAR, which stands for Light Detection and Ranging, is a remote sensing method that uses pulsed laser light to measure distances to the Earth's surface, e.g. in aerial scans. Using the KeckCAVES Vrui toolkit, we displayed a LIDAR scan of Clemson University in the Oculus Rift.

EU FP7 Beaming Project

The EU Beaming project aims to give people a real sense of physically being in a remote location with other people, and vice versa, without actually travelling. When we interact with others, we pay the most attention to the face: it conveys eye gaze, head movement, expressions, and gestures, and serves as a crucial channel of communication. We therefore propose the use of spherical displays to represent a telepresent visitor's head at the remote location.

Attention Model

This research models automatic attention behaviour using a saliency model that generates plausible targets for combined gaze and head motions. The attention model was integrated into an OpenSG application and the open-source Second Life client. Studies in both systems demonstrated an attention model that is not only believable and realistic but also adaptable to varying tasks, without any prior knowledge of the virtual scene.
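The core loop of such a model can be sketched as picking the most salient object while suppressing recently attended ones (inhibition of return). The scores, object names, and decay factor below are invented for illustration and are not taken from the project's implementation:

```python
def pick_gaze_target(saliency, inhibited, decay=0.5):
    """Pick the next gaze/head target from per-object saliency scores,
    down-weighting recently attended objects (inhibition of return)."""
    scores = {obj: s * (decay if obj in inhibited else 1.0)
              for obj, s in saliency.items()}
    return max(scores, key=scores.get)

# Toy scene: saliency scores for three objects.
saliency = {"face_a": 0.9, "screen": 0.7, "door": 0.2}
history = set()
for _ in range(3):
    target = pick_gaze_target(saliency, history)
    history.add(target)
    print(target)
```

Because inhibition only decays scores rather than removing objects, attention eventually returns to highly salient targets, which is what makes the behaviour read as natural rather than a fixed scan pattern.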


EU Presenccia Project

Group collaboration across three heterogeneous systems: a CAVE, XIM, and the real world

Presenccia in Second Life

EPSRC Eye Catching Project

The project aimed to support eye gaze as a key interaction resource in collaboration. The virtual characters are controlled by human subjects in a CAVE-like facility (an immersive virtual environment). Low-latency, precise head tracking allows distortion-free, lag-free movement through the virtual environment. Interaction with and navigation through the environment are achieved via a hand-held tracked unit, and the characters' eyes are controlled by head-mounted eye trackers.

PhD Research

Title “Eye Tracking: A Perceptual Interface for Content Based Image Retrieval”

Visual search experiments were devised to explore the feasibility of a gaze-driven search mechanism. An eye-tracking image retrieval interface, combined with pre-computed similarity measures, yielded significantly better performance than random selection using the same similarity information. Gaze parameters were then explored to determine novel methods of inferring users' intentions from their gaze data.
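A gaze-driven retrieval step of this kind can be sketched as: treat the most-fixated image as the best guess at the user's target, then surface unseen images ranked by a precomputed similarity measure. The image names, fixation durations, and similarity values below are hypothetical, not data from the thesis:

```python
def next_candidates(fixation_ms, similarity, shown, top_n=3):
    """Infer intent from gaze: the most-fixated image is treated as
    closest to the target, and unseen images most similar to it
    (per a precomputed similarity table) are displayed next."""
    best = max(fixation_ms, key=fixation_ms.get)
    ranked = sorted((img for img in similarity[best] if img not in shown),
                    key=lambda img: similarity[best][img], reverse=True)
    return ranked[:top_n]

# Toy session: per-image fixation durations and similarity scores.
fixations = {"img1": 120, "img2": 640, "img3": 200}
sim = {"img1": {}, "img3": {},
       "img2": {"img1": 0.95, "img4": 0.9, "img5": 0.6, "img6": 0.4}}
shown = {"img1", "img2", "img3"}
print(next_candidates(fixations, sim, shown))  # ['img4', 'img5', 'img6']
```

Iterating this step lets the displayed set converge toward the target without any explicit query, which is the premise the visual search experiments were testing against random selection.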

Hunter College
City University of New York
HN-1001T
695 Park Ave
New York, NY 10065

Telephone: +1 (212) 396-6837
Email: oo700 at hunter dot cuny dot edu

Follow me:
wolexvr on Twitter
wole-oyekoya-5b753610 on LinkedIn
woleucl's YouTube channel
oyekoya's GitHub profile
Google Scholar