V Shen, C Harrison, C Shultz
TOCHI 2023
Presented at CHI 2024
Expressive, Scalable, Mid-air Haptics with Synthetic Jets
Non-contact, mid-air haptic devices have been used for a wide variety of experiences, including those in extended reality, public displays, and the medical and automotive domains. In this work, we explore synthetic jets as a promising and under-explored mid-air haptic feedback method. We show how synthetic jets can scale from compact, low-power devices all the way to large, long-range, and steerable ones. We built seven functional prototypes targeting different application domains to illustrate the broad applicability of our approach. These example devices can render complex haptic effects that vary in both time and space. We quantify the physical performance of our designs using spatial pressure and wind flow measurements, and validate their compelling effect on users with stimuli recognition and qualitative studies.
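As a rough, hypothetical illustration of how a time-varying effect might be rendered on a single jet, the Python sketch below generates an amplitude-modulated drive signal for the actuator's diaphragm. The carrier frequency, modulation rate, and sample rate are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch: amplitude-modulated drive signal for one synthetic
# jet actuator (a diaphragm oscillating behind an orifice plate).
import numpy as np

SAMPLE_RATE = 48_000   # audio-rate DAC driving the diaphragm (assumed)
CARRIER_HZ = 60        # oscillation near cavity resonance (assumed)
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

# Carrier: each diaphragm cycle expels a vortex ring through the orifice.
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)

# Envelope: slow 2 Hz pulsing; perceived jet strength tracks this amplitude.
envelope = 0.5 * (1 + np.sin(2 * np.pi * 2.0 * t))

drive = envelope * carrier   # signal sent to the power amplifier / DAC
```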
V Shen, T Rae-Grant, J Mullenbach,
C Harrison, C Shultz
UIST 2023
🎖️ Best Demo: Jury's Honorable Mention and People's Choice Honorable Mention
Fluid Reality: High-Resolution, Untethered Haptic Gloves using Electroosmotic Pump Arrays
Virtual and augmented reality headsets are making significant progress in audio-visual immersion and consumer adoption. However, their haptic immersion remains low, due in part to the limitations of the vibrotactile actuators that dominate the AR/VR market. In this work, we present a new approach to creating high-resolution, shape-changing fingerpad arrays with 20 haptic pixels/cm². Unlike prior pneumatic approaches, our actuators are low-profile (5 mm thick), low-power (approximately 10 mW/pixel), and entirely self-contained, with no tubing or wires running to external infrastructure. We show how multiple actuator arrays can be built into a five-finger, 160-actuator haptic glove that is untethered, lightweight (207 g, including all drive electronics and battery), and has the potential to reach consumer price points at production volume. We describe the results of a technical performance evaluation and a suite of eight user studies quantifying the diverse capabilities of our system, including recognition of object properties such as complex contact geometry, texture, and compliance, as well as expressive spatiotemporal effects.
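A minimal sketch of how a contact edge might be rasterized onto one fingerpad's pixel array is shown below. Only the 160-actuator, five-finger counts come from the abstract; the 4×8 per-finger grid layout and the `glove.set_frame()` driver call are assumptions for illustration.

```python
# Hypothetical sketch: rendering a straight object edge onto one
# fingerpad's electroosmotic pump array (4x8 = 32 pixels/finger,
# 160 pixels across five fingers).
import numpy as np

ROWS, COLS = 4, 8   # assumed layout of one fingerpad's array

def edge_frame(angle_deg: float) -> np.ndarray:
    """Raise pixels on one side of a line through the pad center,
    approximating the feel of a straight edge pressed into the skin."""
    ys, xs = np.mgrid[0:ROWS, 0:COLS]
    theta = np.deg2rad(angle_deg)
    # Signed distance of each pixel from the edge line.
    d = (xs - COLS / 2) * np.cos(theta) + (ys - ROWS / 2) * np.sin(theta)
    return (d > 0).astype(np.uint8)   # 1 = pump fluid in (raised), 0 = flat

frame = edge_frame(30.0)
# glove.set_frame(finger="index", frame=frame)  # hypothetical driver call
```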
V Shen, C Harrison
ICMI 2022
Pull Gestures with Coordinated Graphics on Dual-Screen Devices
A new class of dual-touchscreen device is beginning to emerge, constructed either as two screens hinged together or as a single display that can fold. The interactive experience on these devices is simply that of two 2D touchscreens, with little to no synergy between the interactive areas. In this work, we consider how this unique, emerging form factor creates an interesting 3D niche, in which out-of-plane interactions above one screen can be supported with coordinated graphics on the other, orthogonal screen. Following insights from an elicitation study, we focus on "pull gestures", a multimodal interaction combining on-screen touch input with in-air movement. These naturally complement traditional multitouch gestures such as tap and pinch, and are an intriguing and useful way to take advantage of the unique geometry of dual-screen devices.
V Shen, C Shultz, C Harrison
CHI 2022
🏆 Best Paper Award
Mouth Haptics in VR using a Headset Ultrasound Phased Array
Today’s consumer virtual reality systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is a close second to the fingertips in tactile sensitivity. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers that can render haptic effects onto the mouth. Importantly, all components are integrated into the VR headset, meaning the user does not need to wear an additional accessory or place any external infrastructure in their room. Our haptic sensations can be felt on the lips, teeth, and tongue, enabling new and interesting VR experiences.
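The phase-only focusing math behind such a beamforming array is compact; the sketch below computes per-transducer phase offsets so all waves arrive in phase at a focal point. The linear strip geometry, element pitch, and 40 kHz carrier are typical of ultrasonic haptics and are assumptions here, not specifics of the paper.

```python
# Hypothetical sketch: phase offsets that focus an ultrasonic transducer
# array at a point in front of the headset.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # common ultrasonic haptics carrier (assumed)
WAVELENGTH = SPEED_OF_SOUND / FREQ

# Transducer centers on a flat strip (e.g., along the headset's lower edge).
PITCH = 0.0105           # ~10.5 mm element spacing (assumed)
xs = (np.arange(16) - 7.5) * PITCH
transducers = np.stack([xs, np.zeros(16), np.zeros(16)], axis=1)

def focus_phases(focal_point: np.ndarray) -> np.ndarray:
    """Phase each element so its wave arrives at the focus in phase:
    phi_i = -2*pi * d_i / wavelength, wrapped to [0, 2*pi)."""
    d = np.linalg.norm(transducers - focal_point, axis=1)
    return (-2 * np.pi * d / WAVELENGTH) % (2 * np.pi)

# Focus ~5 cm in front of the array center (roughly at the lips).
phases = focus_phases(np.array([0.0, 0.0, 0.05]))
```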
K Ahuja, V Shen, C Fang, N Riopelle,
A Kong, C Harrison
CHI 2022
ControllerPose: Inside-Out Body Capture with VR Controller Cameras
We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation, and wireless communication already exist. Because the hands operate in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. We developed a series of demo applications illustrating the potential of our approach, including more leg-centric interactions such as balancing games and kicking soccer balls.
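A minimal sketch of the geometric step this design enables: because the VR runtime already tracks each controller's 6-DoF pose, body keypoints estimated in the controller camera's frame can be lifted into world coordinates with a single rigid transform. The pose-estimation model itself is stubbed out, and all values below are illustrative assumptions.

```python
# Hypothetical sketch: lifting a camera-frame body keypoint into the
# world frame using the tracked controller pose (R, t).
import numpy as np

def camera_to_world(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Rigid transform of a 3D keypoint from the controller camera's
    frame to the play-space frame, given the tracked camera pose."""
    return R @ p_cam + t

# Example: a knee keypoint 0.8 m in front of the controller camera.
p_knee_cam = np.array([0.0, -0.4, 0.8])
R = np.eye(3)                    # controller orientation (from the VR runtime)
t = np.array([0.1, 1.2, 0.0])    # controller position in the play space
p_knee_world = camera_to_world(p_knee_cam, R, t)
```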
V Shen, J Spann, C Harrison
SUI 2021
🏆 Best Paper Award
FarOut Touch: Extending the Range of ad hoc Touch Sensing with Depth Cameras
The ability to co-opt everyday surfaces for touch interactivity has been an area of HCI research for several decades. In the past, advances in depth sensors and computer vision led to step-function improvements in ad hoc touch tracking. However, progress has slowed in recent years. We surveyed the literature and found that the very best ad hoc touch sensing systems operate at ranges of up to around 1.5 m. This limited range means that sensors must be carefully positioned in an environment to make specific surfaces interactive, and the resulting interactive area is more table-scale than room-scale. In this research, we set ourselves the goal of doubling the sensing range of the current state-of-the-art system.
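For context, the classic depth-camera touch pipeline this line of work builds on can be sketched in a few lines: model the empty surface's per-pixel depth, then flag pixels hovering within a thin band above it as contact. The band thresholds below are illustrative assumptions, not the paper's calibrated values; depth noise grows with range, which is precisely what limits these systems' reach.

```python
# Hypothetical sketch: depth-threshold touch detection against a
# background model of the empty surface.
import numpy as np

def build_surface_model(depth_frames: list[np.ndarray]) -> np.ndarray:
    """Per-pixel median depth (mm) of the empty scene, i.e. the surface."""
    return np.median(np.stack(depth_frames), axis=0)

def touch_mask(depth: np.ndarray, surface: np.ndarray,
               near_mm: float = 3.0, far_mm: float = 15.0) -> np.ndarray:
    """Pixels sitting 3-15 mm above the surface are treated as finger
    contact; thinner bands reject noise but miss light touches."""
    height = surface - depth            # mm above the surface
    return (height > near_mm) & (height < far_mm)
```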