Prof. Eyal Ofek

Chair of Computer Science

(HCI, Mixed Reality, Computer Vision)

Work e-mail: [email protected]

Personal e-mail: [email protected]

LinkedIn, Facebook, YouTube, Google Scholar

Curriculum vitae (academic)

About me

2024 Chair of Computer Science

Computer Science Dept.

University of Birmingham, UK

My research is in HCI, in particular sensing and environment understanding, Mixed Reality, and the use of technology to make collaboration easier and more inclusive.

2023 Computer Vision Specialist

Leading vision and AI development.

Data Blanket – Real-Time AI-Driven Firefighting

Wildfires result in the loss of thousands of lives, millions of acres, hundreds of billions of dollars in damage, and over 5% of global emissions every year. Doing better should be everybody’s business. We made it ours by building a comprehensive system that empowers firefighters with new tactical tools and information and ensures every part of the firefighting effort performs better.

2011-2023 Principal Researcher & Research Manager

I was a Principal Researcher in the EPIC (Extended Perception, Interaction, and Cognition) team at Microsoft Research. My research focused on Human-Computer Interaction (HCI), sensing, and Mixed Reality (MR) displays that enable users to reach their full potential in productivity, creativity, and collaboration. I have been granted more than 110 patents, published over 90 academic papers (with more than 14,000 citations), and was named a Senior Member of the Association for Computing Machinery (ACM).

In addition to publications and technology transfer to products, I have released multiple tools and open-source libraries, such as the RoomAlive Toolkit, used around the world for multi-projector systems; SeeingVR, which makes VR more usable for people with low vision; the Microsoft Rocketbox avatars and the MoveBox and HeadBox toolkits, which democratize avatar animation; and RemoteLab, for distributed user studies.

Besides my work at Microsoft Research, I served on multiple conference committees and was the paper chair of ACM SIGSPATIAL 2011. I was the Specialty Chief Editor of Frontiers in Virtual Reality for the area of Haptics and a member of the editorial board of IEEE Computer Graphics and Applications (CG&A).

I formed and led a new MR research group at Microsoft Research’s Extreme Computing lab. I envision MR applications as woven into the fabric of our lives, rather than as PC and mobile apps limited to a specific device’s screen. Such applications have to be smart enough to understand the user’s changing physical and social context, and flexible enough to adapt accordingly. We developed systems such as FLARE (Fast Layout for AR Experiences), which was used by the HoloLens team and inspired the Unity MARS product, and the Triton 3D audio simulation, which was used by Microsoft games such as Gears of War 4 and is the basis of Microsoft’s Project Acoustics. IllumiRoom, a collaboration with the Redmond lab, was presented at the CES 2013 keynote.

2005-2011 Research Manager

I formed the Bing Maps & Mobile Research Lab (Virtual Earth Research). We generated many novel results, impacting the product while producing world-class computer vision and graphics research. Among our results are an influential text-detection technology, used by the Bing Mobile app and incorporated into OpenCV; the world’s first street-side imagery service; a street-level geometry and texture reconstruction pipeline; novel texture compression used by Virtual Earth 3D; and more.

In addition to publications in leading computer vision and graphics venues, our work was presented at TED 2010 and covered in the New York Times.

Automatic geopositioning of Flickr’s images
3D reconstruction of streets

1996-2001 CTO (Software)

At a startup company, I oversaw software and algorithm R&D for the world’s first time-of-flight video camera (the ZCam). The camera enabled applications such as TV depth keying and scene reconstruction, and its technology was the basis for the depth cameras used by the Microsoft HoloLens and the Magic Leap HMD.

Early real-time color and depth TV
Usage of ZCam in live broadcast – KPIX CA
Demo of ZCam – Alias Wavefront Research

1995-1996 Founder, Interactive Innovation

I developed a novel game engine that rendered global illumination effects in real time. The work was presented at SIGGRAPH 1996 and appeared in graphics textbooks.

1992-1996 Ph.D. in Computer Vision and Graphics,

The Hebrew University.

I received my M.Sc. in Computer Vision in 1992 and a triple-major B.A. (Computer Science, Physics, and Mathematics) in 1987.

1985-1986 Founder, Bazbosoft

I developed the award-winning and highly popular Amiga photo-editing and drawing program Photon-Paint.

Selected industrial projects

For more information, please see my LinkedIn profile.                  

Last edited: July 2023.

Selected Talks

AWE 2013: Gesture and Interactive Technologies
Behind the Scenes with Microsoft: VR in the wild
Haptics in AR and VR – Frontiers in VR
Using Virtual Reality To Help People With Disabilities – NPR
Inside AR and VR – Microsoft Research Blog
Future of Haptics in VR

News

  • Oct. 24 – Two papers, ‘Avatar Pilot’ and ‘VR Transformer’, presented at ISMAR 2024.
  • Sep. 24 – Joined the University of Birmingham as Chair of Computer Science.
  • Apr. 24 – The paper “Big or Small” won an Honourable Mention at CHI 2024.
  • Mar. 24 – Co-editing a special issue on haptics in the metaverse for the IEEE Transactions on Haptics.
  • Jan. 24 – Two papers accepted to CHI 2024.
  • Jul. 23 – Our paper “Beyond Audio” won Best Paper at DIS 2023, CMU, Pittsburgh, Pennsylvania.
  • May 23 – Joined Data Blanket, a startup working on AI-based firefighting.
  • Apr. 23 – “Embodying Physics-Aware Avatars in Virtual Reality” received a Best Paper Honorable Mention at CHI 2023.
  • Feb. 23 – “AdHocProx: Sensing Mobile, Ad-Hoc Collaborative Device Formations using Dual Ultra-Wideband Radios” accepted to CHI 2023.

Research Interests

Adaptive Mixed Reality (MR) & AI

I see Mixed Reality as a revolution that goes beyond display technology. While traditional software is developed, tested, and used on standard devices, MR applications use the user’s environment as their platform. This requires such applications to be aware of the user’s unique context: the physical environment, social interactions, and other running applications. The rise of machine learning and large language models is an exciting opportunity to incorporate more localized and global knowledge into this process.

I look at new ways to design and implement such applications, and at their effect on our work and social interactions.
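
To make this concrete, here is a minimal sketch of the kind of context-aware decision such an application must make continuously. It is purely illustrative: the types, the scoring heuristic, and the conversation signal are my own assumptions for the example, not an API of FLARE, MARS, or any shipped system.

```typescript
// Hypothetical sketch: all types and heuristics below are illustrative.

type Surface = { id: string; area: number; distanceM: number; occupiedByPerson: boolean };

interface UserContext {
  isInConversation: boolean;   // e.g., inferred from audio/social sensing (assumed signal)
  surfaces: Surface[];         // planes recovered by environment understanding
}

// Score a candidate surface for placing a virtual panel: prefer nearby,
// large, unoccupied surfaces; shrink digital presence during a conversation.
function scoreSurface(s: Surface, ctx: UserContext): number {
  if (s.occupiedByPerson) return -Infinity;          // never occlude people
  const social = ctx.isInConversation ? 0.5 : 1.0;
  return social * (s.area / (1 + s.distanceM));
}

function placePanel(ctx: UserContext): Surface | undefined {
  return ctx.surfaces
    .filter(s => scoreSurface(s, ctx) > 0)
    .sort((a, b) => scoreSurface(b, ctx) - scoreSurface(a, ctx))[0];
}

// Example: the app re-evaluates placement whenever the sensed context changes.
const ctx: UserContext = {
  isInConversation: false,
  surfaces: [
    { id: "wall-1", area: 2.0, distanceM: 1.5, occupiedByPerson: false },
    { id: "desk-1", area: 0.6, distanceM: 0.7, occupiedByPerson: false },
  ],
};
console.log(placePanel(ctx)?.id); // "wall-1"
```

The point of the sketch is the shape of the problem: placement is not a compile-time layout decision but a function that must be re-evaluated whenever the physical or social context changes.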

Sensing, Computer Vision, and Privacy

Another aspect of the technology revolution is the proliferation of sensors, which lets applications better fit the user’s context and intent. Sensors enable richer interaction with devices as part of a holistic digital environment around the user, focused on the user’s tasks and bridging digital space and physical objects.

A principle I consider significant in designing sensing is that it should enable new capabilities while maintaining the user’s privacy. New sensors let us plan which part of the space is measured and use the minimal data granularity the task requires.
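
As one illustration of this principle, the sketch below reduces a raw depth frame to a single task-level bit (room occupancy). Everything here is hypothetical: the frame format, the threshold, and the 2% change heuristic are assumptions chosen for the example, not a real sensor API.

```typescript
// Hypothetical sketch: report only the minimal granularity the task needs
// (room-level occupancy) instead of the privacy-sensitive raw sensor frame.

type DepthFrame = Float32Array; // raw per-pixel depth in meters (assumed format)

// The raw frame never leaves this function; only a coarse,
// task-level signal is exposed to applications.
function occupancySignal(frame: DepthFrame, baseline: DepthFrame, thresholdM = 0.3): boolean {
  let changed = 0;
  for (let i = 0; i < frame.length; i++) {
    if (Math.abs(frame[i] - baseline[i]) > thresholdM) changed++;
  }
  // Report a single bit ("someone is present"), not where or who.
  return changed > frame.length * 0.02;
}

// Example with synthetic data: an empty room vs. a person-sized change.
const baseline = new Float32Array(1000).fill(3.0);
const live = Float32Array.from(baseline);
for (let i = 0; i < 50; i++) live[i] = 1.2; // 5% of pixels now show a closer object

console.log(occupancySignal(baseline, baseline)); // false: nothing changed
console.log(occupancySignal(live, baseline));     // true: presence, nothing more
```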

Accessibility and inclusion in MR

Today we can augment our senses in a way that lets us inhabit a digital space, independent of the physical laws that limit us in reality. This enables people to do more than they could achieve physically, and it can level the playing field between users with different physical, social, or environmental limitations.

Haptics

Our experience of the real world is not limited to vision and audio. The limited rendering of touch reduces the realism of today’s MR and the effectiveness of our hands when working in that space.

I have done extensive research on rendering haptics in concert with other senses: designing novel handheld haptic controllers that advanced the state of the art in active haptic rendering, and using scene understanding and the manipulation of hand-eye coordination to recruit the physical environment around the user for haptic rendering.
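
One example of such hand-eye-coordination manipulation is haptic retargeting through body warping: the rendered hand is gradually offset so that reaching a virtual target coincides with the real hand arriving at a physical proxy object. The sketch below is a minimal, assumed formulation of that idea, not code from any published system.

```typescript
// Hypothetical sketch of body warping; names and values are illustrative.

type Vec3 = { x: number; y: number; z: number };

const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });

// progress in [0,1]: 0 at reach start, 1 when the hand arrives at the proxy.
// The offset is applied gradually so the user does not notice the divergence.
function warpedHand(realHand: Vec3, physicalProxy: Vec3, virtualTarget: Vec3, progress: number): Vec3 {
  const offset = sub(virtualTarget, physicalProxy); // where vision must diverge from touch
  return add(realHand, scale(offset, Math.min(Math.max(progress, 0), 1)));
}

// Example: the virtual target sits 0.2 m to the left of the physical cube.
const proxy = { x: 0.0, y: 0.8, z: 0.5 };
const target = { x: -0.2, y: 0.8, z: 0.5 };
console.log(warpedHand({ x: 0.1, y: 0.9, z: 0.3 }, proxy, target, 0.5));
// -> hand rendered 0.1 m left of its real position, halfway through the reach
```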

Avatars

In Virtual Reality and spatial computing simulations, avatars often represent humans.

I have worked on issues such as creating avatars, controlling avatars using natural motions, decoupling avatars’ motions from users’ motions for accessibility and productivity, and understanding how users perceive them.

Academic Service

  • VRST 2023 PC Member
  • ISMAR 2023 PC Member
  • ACM UIST 2023 PC Member
  • Frontiers in Virtual Reality Specialty Chief Editor – Haptics (2020-202)
  • IEEE Computer Graphics & Applications (CG&A) Member of the Editorial Board
  • ACM SIGSPATIAL 2011 Conference Paper Chair
  • ACM SIGSPATIAL PC Member
  • ACM Conference on Human Factors in Computing Systems (CHI) PC Member
  • IEEE Computer Vision & Pattern Recognition (CVPR) PC Member
  • Pacific Graphics PC Member
  • ACM International Conference on Interactive Surfaces and Spaces (ISS) PC Member
  • ACM Multimedia Systems Conference (MMSys) PC Member
  • Microsoft Research Ph.D. Fellowship area chair
  • Microsoft Research Ada Lovelace Fellowship area chair
  • Visiting Professor, School of Computer Science, Interdisciplinary Center (IDC), Herzliya, Israel, 2002

Awards

  • Best Paper: Honorable Mention, CHI 2024
  • Best Paper, DIS 2023
  • Best Paper: Honorable Mention, CHI 2023
  • Senior Member of the ACM, 2022
  • Best Paper, DIS 2021
  • Best Paper: Honorable Mention, CHI 2020
  • Best Paper: Honorable Mention, IEEE VR 2020
  • Demo: Honorable Mention, UIST 2019
  • Best Paper, ISMAR 2019
  • Best Paper: Honorable Mention, CHI 2018
  • Golden Mouse Award – Best Video Showcase, CHI 2016
  • Best Paper, CSCW 2016
  • Golden Mouse Award – Best Video Showcase, CHI 2013
  • Best Paper, CHI 2013
  • Best Paper, UIST 2009
  • Microsoft Star Developer, Microsoft Bing Maps, 2006
  • Charles Clore Scholarship, 1992
  • Talpiot program member, 1984

120+ Granted patents

Lists by year: 2009-2022.

Invited Talks

6Sight, Monterey, 2007