I'm Vivian, a PhD Student at Carnegie Mellon University.

I'm a third-year PhD student in the Robotics Institute, doing research with the Future Interfaces Group, advised by Prof. Chris Harrison. My research builds on my background in embedded systems, sensing, and computer vision, and currently focuses on haptics and interaction.

I am a Swartz Entrepreneurial Fellow and an NSF GRFP Honorable Mention recipient, and I have received two Best Paper Awards at premier venues in human-computer interaction.

In my free time you can find me taking photos and chasing plastic!

Get in touch: vhshen@cmu.edu

CV/Resume

Selected Research

V Shen, T Rae-Grant, J Mullenbach, C Harrison, C Shultz

UIST 2023

🎖️ Best Demo Jury's Honorable Mention, People's Choice Honorable Mention 

Fluid Reality: High-Resolution, Untethered Haptic Gloves using Electroosmotic Pump Arrays

Virtual and augmented reality headsets are making significant progress in audio-visual immersion and consumer adoption. However, their haptic immersion remains low, due in part to the limitations of the vibrotactile actuators that dominate the AR/VR market. In this work, we present a new approach to create high-resolution shape-changing fingerpad arrays with 20 haptic pixels/cm². Unlike prior pneumatic approaches, our actuators are low-profile (5 mm thick), low-power (approximately 10 mW/pixel), and entirely self-contained, with no tubing or wires running to external infrastructure. We show how multiple actuator arrays can be built into a five-finger, 160-actuator haptic glove that is untethered, lightweight (207 g, including all drive electronics and battery), and has the potential to reach consumer price points at volume production. We describe the results from a technical performance evaluation and a suite of eight user studies, quantifying the diverse capabilities of our system. These include recognition of object properties such as complex contact geometry, texture, and compliance, as well as expressive spatiotemporal effects.
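
For intuition on the actuation principle, electroosmotic pumps are classically modeled with the Helmholtz–Smoluchowski relation, u = εζE/μ. The back-of-envelope sketch below evaluates that relation in Python; every parameter value (drive voltage, channel length, zeta potential, working fluid) is an illustrative assumption, not a figure from the paper.

```python
# Back-of-envelope electroosmotic slip velocity via the classical
# Helmholtz-Smoluchowski relation: u = (eps * zeta * E) / mu.
# All parameter values below are illustrative assumptions, not from the paper.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 80.0       # relative permittivity of an aqueous working fluid (assumed)
zeta = 0.05        # zeta potential magnitude, V (assumed ~50 mV)
mu = 1.0e-3        # dynamic viscosity, Pa*s (water-like, assumed)
V = 250.0          # drive voltage across the pump, V (assumed)
L = 1.0e-3         # pumping channel length, m (assumed)

E = V / L                              # electric field, V/m
u = (eps_r * EPS0 * zeta * E) / mu     # slip velocity, m/s
print(f"electroosmotic slip velocity ~ {u * 1e3:.1f} mm/s")
```

With these assumed numbers the fluid moves at millimeters per second under a DC field, which gives a feel for why such pumps can drive millimeter-scale pixel displacements quickly and at low power.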

V Shen, C Shultz, C Harrison

CHI 2022

🏆 Best Paper Award

Mouth Haptics in VR using a Headset Ultrasound Phased Array

Today’s consumer virtual reality systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is a close second in tactile sensitivity to the fingertips. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers, which can render haptic effects onto the mouth. Importantly, all components are integrated into the VR headset, meaning the user does not need to wear an additional accessory or place any external infrastructure in their room. Our haptic sensations can be felt on the lips, teeth, and tongue, which can be incorporated into new and interesting VR experiences. 
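
The focusing principle behind such arrays is standard delay-and-sum beamforming: each transducer is driven with a phase advance that cancels its extra path length to the focal point, so all emissions arrive in phase there. The sketch below illustrates this with an assumed 8×8, 40 kHz array; it is a generic textbook construction, not the authors' hardware or firmware.

```python
import numpy as np

# Illustrative delay-and-sum focusing for a flat ultrasonic phased array.
# Geometry, frequency, and focal point are assumed values, not the paper's.

F = 40_000.0   # drive frequency, Hz (40 kHz is typical for airborne ultrasound)
C = 343.0      # speed of sound in air, m/s
WAVELEN = C / F

# 8 x 8 grid of transducers on a 10 mm pitch, centered at the origin (assumed).
pitch = 0.010
ix = (np.arange(8) - 3.5) * pitch
xs, ys = np.meshgrid(ix, ix)
elements = np.stack([xs.ravel(), ys.ravel(), np.zeros(64)], axis=1)

def focus_phases(target):
    """Per-element phase offsets (radians) that bring all emissions
    into phase at `target`, a 3D point in meters."""
    d = np.linalg.norm(elements - np.asarray(target), axis=1)  # path lengths
    # Advance each element by its propagation phase so waves align at the focus.
    return (2 * np.pi * (d / WAVELEN)) % (2 * np.pi)

# Example: focal point 50 mm in front of the array center (e.g., near the lips).
phases = focus_phases([0.0, 0.0, 0.05])
print(phases.reshape(8, 8).round(2))
```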

K Ahuja, V Shen, C Fang, N Riopelle, A Kong, C Harrison

CHI 2022

ControllerPose: Inside-Out Body Capture with VR Controller Cameras

We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation, and wireless communication already exist. Because the hands operate in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. We developed a series of demo applications illustrating the potential of our approach, including more leg-centric interactions such as balancing games and kicking soccer balls.
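
As a sketch of the kind of plumbing such a system could build on, the snippet below runs an off-the-shelf 2D body-pose estimator (MediaPipe, used here purely as a stand-in) on frames from a camera. The camera index and model choice are assumptions, and this is illustrative only, not the ControllerPose pipeline.

```python
import cv2
import mediapipe as mp

# Minimal sketch: run an off-the-shelf 2D body-pose estimator on frames from a
# controller-mounted camera. Stand-in components, not the paper's pipeline.

cap = cv2.VideoCapture(0)  # assumed: controller camera exposed as a video device
pose = mp.solutions.pose.Pose(model_complexity=0)  # lightweight model

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # Normalized (x, y) landmarks; a full system would re-project these
        # into the headset's coordinate frame to drive an avatar.
        lm = results.pose_landmarks.landmark
        print(f"left ankle at ({lm[27].x:.2f}, {lm[27].y:.2f})")
cap.release()
```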

V Shen, J Spann, C Harrison

SUI 2021

🏆 Best Paper Award

FarOut Touch: Extending the Range of ad hoc Touch Sensing with Depth Cameras

The ability to co-opt everyday surfaces for touch interactivity has been an area of HCI research for several decades. In the past, advances in depth sensors and computer vision led to step-function improvements in ad hoc touch tracking. However, progress has slowed in recent years. We surveyed the literature and found that the very best ad hoc touch sensing systems can operate at ranges up to around 1.5 m. This limited range means that sensors must be carefully positioned in an environment to make specific surfaces interactive. Furthermore, the size of the interactive area is more table-scale than room-scale. In this research, we set ourselves the goal of doubling the sensing range of the current state-of-the-art system.
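
For context, the canonical recipe for ad hoc touch sensing with a depth camera is per-pixel background subtraction: model the empty surface's depth, then declare a touch wherever something hovers within a fingertip's thickness of that surface. A minimal sketch follows, with assumed thresholds and synthetic data; real systems tune these per sensor and add blob tracking on top.

```python
import numpy as np

# Classic depth-camera touch sensing via background subtraction.
# Thresholds are illustrative assumptions; real systems tune them per sensor.

TOUCH_MIN_MM = 3    # closer than this is likely surface/sensor noise
TOUCH_MAX_MM = 12   # farther than this is a hover, not a touch

def build_background(depth_frames):
    """Per-pixel surface depth (mm) from frames of the empty surface;
    the median suppresses sensor noise and dropouts."""
    return np.median(np.stack(depth_frames), axis=0)

def touch_mask(depth_mm, background_mm):
    """Pixels where something sits a fingertip's thickness above the surface."""
    height = background_mm - depth_mm   # positive = closer to camera than surface
    return (height > TOUCH_MIN_MM) & (height < TOUCH_MAX_MM)

# Example with synthetic data: a flat surface 1500 mm away, one 'finger' blob.
bg = build_background([np.full((240, 320), 1500.0) for _ in range(30)])
frame = bg.copy()
frame[100:110, 150:158] -= 8.0   # finger pad ~8 mm above the surface
print(touch_mask(frame, bg).sum(), "candidate touch pixels")
```

Depth noise grows with range on commodity sensors, which is exactly why these fixed millimeter thresholds stop working beyond about 1.5 m and why extending the range is nontrivial.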
