Using sensory adaptation and virtual environments to make the world more inclusive.
Today, we can augment our senses in ways that let us inhabit digital spaces, independent of the physical laws that limit us in reality. This allows people to achieve more than they otherwise could, and it helps level the playing field between users with different physical, social, or environmental limitations. This is an exciting topic: by working on it, we can better bridge between people and make the world more inclusive.


Some highlighted publications:
V. Ranganeni, M. Sinclair, E. Ofek, A. Miller, J. Campbell, A. Kolobov & E. Cutrell,
Exploring Levels of Control for a Navigation Assistant for Blind Travelers, Human-Robot Interaction (HRI ’23).
Only a small percentage of people with vision impairments use traditional mobility aids such as canes or guide dogs. Various assistive technologies have been proposed to address the limitations of conventional mobility aids. These devices often give either the user or the device the majority of the control. In this work, we explore how varying levels of control affect the users’ sense of agency, trust in the device, confidence, and successful navigation. We present Glide, a novel mobility aid with two control modes: Glide-directed and User-directed. We employ Glide in a study (N=9) in which blind or low-vision participants used both modes to navigate through an indoor environment. Overall, participants found that Glide was easy to use and learn. Most participants trusted Glide despite its current limitations, and their confidence and performance increased as they continued to use Glide. Users’ control mode preference varied in different situations; no single mode “won” in all situations.
This work led to the formation of a company: Glidance.io


K. Ahuja, E. Ofek, M. Gonzalez-Franco, C. Holz and A. Wilson,
CoolMoves: User Motion Accentuation in Virtual Reality, ACM IMWUT ’21, 1–23. Video
Current Virtual Reality (VR) systems are bereft of stylization and embellishment of the user’s motion, concepts that have been well explored in animation for games and movies. We present CoolMoves, a system for expressive and accentuated full-body motion synthesis of a user’s virtual avatar in real time, from the limited input cues afforded by current consumer-grade VR systems, specifically headset and hand positions. We make use of existing motion capture databases as a template motion repository to draw from. We match similar spatio-temporal motions present in the database and then interpolate between them using a weighted distance metric. Joint prediction probability is then used to temporally smooth the synthesized motion, using human motion dynamics as a prior. This allows our system to work well even with very sparse motion databases (e.g., with only 3-5 motions per action). We validate our system with four experiments: a technical evaluation of our quantitative pose reconstruction and three additional user studies to evaluate the motion quality, embodiment and agency.
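To make the matching-and-blending step concrete, here is a minimal Python sketch of inverse-distance pose blending over a small template database. The function names, the single-frame feature layout, and the exponential smoothing used as a stand-in for the paper’s joint-prediction-probability prior are all illustrative assumptions, not the authors’ implementation.

    import numpy as np

    def blend_poses(query, db_feats, db_poses, k=3, eps=1e-6):
        """Blend the k template motions closest to the sparse input cues.

        query    : (d,) features from headset and hand positions (hypothetical layout)
        db_feats : (n, d) matching features for each template motion frame
        db_poses : (n, j) full-body poses to synthesize from
        """
        dists = np.linalg.norm(db_feats - query, axis=1)  # distance to every template
        nearest = np.argsort(dists)[:k]                   # k best matches
        w = 1.0 / (dists[nearest] + eps)                  # closer templates weigh more
        w /= w.sum()
        return w @ db_poses[nearest]                      # weighted pose interpolation

    def smooth(prev_pose, pose, alpha=0.7):
        # Exponential smoothing as a simple stand-in for the paper's
        # probability-weighted temporal smoothing with a motion-dynamics prior.
        return alpha * pose + (1.0 - alpha) * prev_pose

Because the weighting degrades gracefully toward the single nearest template, a scheme along these lines can still operate on very sparse databases, matching the 3-5 motions-per-action setting the abstract mentions.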


D. Jain, S. Junuzovic, E. Ofek, M. Sinclair, J. R. Porter, C. Yoon, S. Machanavajhala, M. Ringel Morris. A Taxonomy of Sounds in Virtual Reality. DIS 2021. Best Paper
Virtual reality (VR) leverages human sight, hearing, and touch senses to convey virtual experiences. For d/Deaf and hard-of-hearing (DHH) people, information conveyed through sound may not be accessible. To help with the future design of accessible VR sound representations for DHH users, this paper contributes a consistent language and structure for representing sounds in VR. Using two studies, we report on the design and evaluation of a novel taxonomy for VR sounds. Study 1 included interviews with 10 VR sound designers to develop our taxonomy along two dimensions: sound source and intent. To evaluate this taxonomy, we conducted another study (Study 2) where eight HCI researchers used our taxonomy to document sounds in 33 VR apps. We found that our taxonomy was able to successfully categorize nearly all sounds (265/267) in these apps. We also uncovered additional insights for designing accessible visual and haptic-based sound substitutes for DHH users.
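As a rough illustration of how a two-dimensional labeling like this could be encoded when documenting an app’s sounds, here is a small Python sketch; the category values are placeholders for illustration, not the taxonomy’s actual labels.

    from dataclasses import dataclass
    from enum import Enum

    class Source(Enum):          # placeholder values, not the paper's labels
        SPEECH = "speech"
        OBJECT = "object"
        AMBIENT = "ambient"

    class Intent(Enum):          # placeholder values, not the paper's labels
        INFORM = "inform"
        ALERT = "alert"
        IMMERSE = "immerse"

    @dataclass
    class VRSound:
        name: str
        source: Source           # what produces the sound
        intent: Intent           # why the designer included it

    # Documenting an app then reduces to tagging each sound once:
    doorbell = VRSound("doorbell", Source.OBJECT, Intent.ALERT)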

More accessibility-in-MR publications:

M. Yamagami, S. Junuzovic, M. Gonzalez-Franco, E. Ofek, E. Cutrell, J.R. Porter, A.D. Wilson and M.E. Mott,
Two-In-One: A Design Space for Mapping Unimanual Input into Bimanual Interactions in VR for Users with Limited Movement, ACM Trans. on Accessible Computing.


D. Jain (UW), S. Junuzovic, E. Ofek, M. Sinclair, J.R. Porter, C. Yoon, S. Machanavajhala and M. Ringel Morris, Towards Sound Accessibility in Virtual Reality, ACM ICMI 2021. Video
B. Cohn, A. Maselli, E. Ofek, and M. Gonzalez-Franco, SnapMove: Movement Projection Mapping in Virtual Reality, IEEE AIVR 2020.

A. F. Siu, M. Sinclair, R. Kovacs, C. Holz, E. Ofek & E. Cutrell, Virtual Reality Without Vision: A Haptic and Auditory White Cane to Navigate Complex Virtual Worlds.
CHI 2020 Honorable Mention paper
Video, Project, MSR Blog, CHI 2020 Talk by Alexa F. Siu


Y. Zhao (Cornell Tech & MSR), E. Cutrell, C. Holz, M. Ringel Morris, E. Ofek, and A. Wilson, SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision. CHI 2019
Paper, Video, MSR Blog, A Demo 2019, Scientific American: “Virtual Reality Has an Accessibility Problem”


D. Schneider, A. Otte, T. Gesslein, P. Gagel, B. Kuth, M. S. Damlakhi, O. Dietz, E. Ofek, M. Pahud, P. O. Kristensson, J. Muller and J. Grubert,
ReconViguRation: Reconfiguring Physical Keyboards in Virtual Reality. IEEE TVCG, Vol. 25, Issue 11, Nov. 2019. Video


T. Nguyen, M. Thaoduyen, S. Iqbal & E. Ofek, The Known Stranger: Supporting Conversations between Strangers with Personalized Topic Suggestion, CHI 2015