
Prophesee is the inventor of the world’s most advanced neuromorphic vision systems.

Composed of a patented Event-based Vision sensor featuring intelligent, independent pixels and an extensive AI library, the Prophesee Metavision® system unlocks next-level, ultra-low-power, bio-inspired eye tracking.

From gaze tracking for real-world ChatGPT interaction to foveated rendering, driver monitoring systems, varioptics headsets or even presbyopia correction, Metavision® sensors and algorithms enable, for the first time, eye tracking that is both ultra-fast and low-energy in a consumer-friendly form factor.

  • <2 mW sensing power consumption / 16 µW standby mode
  • >1 kHz sampling rate
  • 5x lower data rate vs. image-based sensors
  • >120 dB dynamic range / lower illumination requirements
  • <1 ms end-to-end latency
  • <1° detection accuracy

APPLICATIONS

GAZE TRACKING

Zinn Labs uses Prophesee’s GenX320 sensor to deliver a high-refresh-rate, low-latency gaze-tracking solution, pushing the boundaries of responsiveness and realism of head-worn display devices.

The low compute footprint of Zinn Labs’ 3D gaze estimation gives it the flexibility to support ultra low-power modes for use in smart wearables that look like normal eyewear.

Kevin Boyle, CEO of Zinn Labs, explains, “Zinn Labs’ event-based gaze-tracking reduces bandwidth by two orders of magnitude compared to video-based solutions, allowing it to scale to previously impractical applications and form factors.”

DRIVER MONITORING SYSTEM

Real-Time Face & Eye Tracking and Blink Detection using Event Cameras

The paper introduces a novel method using a convolutional recurrent neural network to detect and track faces and eyes for DMS. It also highlights how event cameras can better capture the unique temporal signature of eye blinks, providing insights into driver fatigue.

KILOHERTZ EYE TRACKING

Event-Based Kilohertz Eye Tracking using Coded Differential Lighting – 2022

Sampling Rate: Test results show the VGA-based system operates at a 1 kHz sampling rate, with accurate corneal glint detection even at high eye movement velocities up to 1,000°/s.

Detection Accuracy: The system achieves sub-pixel accuracy in glint detection, with an error of less than 0.5 pixels even at high rotational velocities.

Noise Rejection: The system remains robust against external noise, maintaining sub-pixel accuracy even in challenging conditions like background light flicker.

Event cameras are a good fit for eye tracking sensors in AR/VR headsets, since they fulfil key requirements on power and latency.

By pulsing the glint stimuli in binary patterns in the 1-2 kHz range, we are able to achieve a sampling time of 1 ms for glint updates.

The result is a low-power, sub-pixel accurate corneal glint detector which robustly provides updates at kHz rates. 
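To illustrate the coded-lighting idea at a high level (a simplified sketch, not the paper's implementation; the pulse code and matching rule are assumptions), a detector can keep the recent event polarities seen at each pixel and flag pixels whose polarity sequence reproduces the known LED modulation code:

    from collections import defaultdict, deque

    CODE = (1, -1, 1, 1, -1)   # assumed binary pulse pattern driven on the glint LEDs
    history = defaultdict(lambda: deque(maxlen=len(CODE)))

    def process_event(x, y, polarity):
        """Record the latest polarity seen at pixel (x, y) and flag a candidate glint
        when the recent polarity sequence matches the known illumination code."""
        buf = history[(x, y)]
        buf.append(1 if polarity > 0 else -1)
        if len(buf) == len(CODE) and tuple(buf) == CODE:
            return (x, y)       # candidate corneal glint location
        return None

Because the full code sequence is only reproduced at glints synchronized with the pulsed LEDs, ordinary scene motion and background flicker rarely match it, which is what gives this style of detection its robustness to external noise.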

EVS ALGORITHMS INTEGRABILITY

Evaluating Image-Based Face and Eye Tracking with Event Cameras

This paper showcases the viability of integrating conventional algorithms with event-based data by converting the event stream into a frame format.

The study achieved a mean Average Precision (mAP) score of 0.91 using models like GR-YOLO and YOLOv8, demonstrating robust performance across various real-world datasets, even under challenging lighting conditions.
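As a rough sketch of that conversion step (array shapes and field names are assumptions, not the paper's exact pipeline), events falling within a time window can be binned into a two-channel count image that a frame-based detector such as YOLOv8 can consume:

    import numpy as np

    def events_to_frame(events, height, width):
        """Bin events (a NumPy structured array with fields x, y, p, t) into a
        2-channel image: channel 0 counts positive events, channel 1 negative ones."""
        frame = np.zeros((2, height, width), dtype=np.float32)
        pos = events["p"] > 0
        np.add.at(frame[0], (events["y"][pos], events["x"][pos]), 1.0)
        np.add.at(frame[1], (events["y"][~pos], events["x"][~pos]), 1.0)
        return frame

The resulting image tensors can then be fed to conventional, frame-trained detection models with little or no architectural change.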

FOVEATED RENDERING

The human eye never sees the whole scene in high definition.

It can only perceive HD through its fovea which covers just 3 degrees of the field of view.

~90% of the rendering effort is currently wasted producing HD detail in parts of the scene that the human eye does not perceive in detail.

But, foveated rendering is not an easy feat.

The eye is driven by some of the fastest-moving muscles in the body, contracting in less than 10 ms. The eye’s angular speed regularly reaches extremes of around 700°/s.

Event-based vision’s speed and energy efficiency make optimal foveated rendering achievable for the first time, capturing the finest, most fleeting movements of the eye at rates of up to 1 kHz.

Q&A

What data do I get from the sensor, exactly?

The sensor outputs a continuous stream of events, each consisting of:

  • X and Y coordinates, indicating the location of the activated pixel in the sensor array
  • The polarity, indicating whether the event corresponds to a positive (dark to light) or negative (light to dark) contrast change
  • A timestamp “t”, precisely encoding when the event was generated, with microsecond resolution
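As an illustration of that structure (the dtype and field names below follow a common convention and are not necessarily the sensor's wire format), a decoded stream can be held in a NumPy structured array:

    import numpy as np

    # One record per event: pixel coordinates, polarity and microsecond timestamp.
    EVENT_DTYPE = np.dtype([("x", np.uint16), ("y", np.uint16),
                            ("p", np.int8), ("t", np.int64)])

    # Three example events: two positive contrast changes, then one negative.
    events = np.array([(120, 45, 1, 1_000), (121, 45, 1, 1_007), (120, 46, -1, 1_012)],
                      dtype=EVENT_DTYPE)

    # Events arrive in timestamp order, so selecting a time window is a simple mask.
    window = events[(events["t"] >= 1_000) & (events["t"] < 1_010)]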

For more information, check our pages on Event-based concepts and on event streaming and decoding.


How can you be “blur-free”?

Image blur is mostly caused by movement of the camera or the subject during exposure. This can happen when the shutter speed is too slow or if movements are too fast.

With an event-based sensor, there is no exposure, but rather a continuous flow of “events” triggered by each pixel independently whenever an illumination change is detected. Hence there is no blur.

Are you compatible with active/passive eye-tracking illumination?

Yes. The sensor perceives changes continuously rather than through exposure times, which allows it to perceive light pulses as well as motion under constant illumination.

What is the frame rate?

There is no frame rate: our Metavision sensor is neither a global-shutter nor a rolling-shutter device; it is shutter-free.

This represents a new machine vision category, enabled by a patented sensor design that embeds processing intelligence in each pixel, allowing every pixel to activate itself independently when a change is detected.

As soon as an event is generated, it is sent to the system continuously, pixel by pixel, rather than at a fixed pace.
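One way to picture this frame-free flow (a generic sketch with an assumed resolution and callback name, not the Metavision SDK API) is to update application state once per event as it arrives:

    import numpy as np

    HEIGHT, WIDTH = 320, 320   # assumed sensor resolution

    # A simple "time surface": per pixel, the timestamp of the most recent event.
    last_event_time = np.zeros((HEIGHT, WIDTH), dtype=np.int64)

    def on_event(x, y, polarity, t):
        """Called once per event, as soon as it is produced; there is no frame boundary."""
        last_event_time[y, x] = t
        # Downstream logic (e.g. pupil or glint tracking) can read this state
        # at whatever rate the application needs.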

How can the dynamic range be so high?

The pixels of our event-based sensor contain photoreceptors that detect changes of illumination on a logarithmic scale. Hence each pixel automatically adapts to both low and high light intensities and does not saturate the way a classical frame-based sensor would.
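A simplified way to picture this (an idealized per-pixel model with an assumed contrast threshold, not the actual circuit): a pixel fires an event whenever its log-intensity has moved by more than a fixed contrast threshold since its last event.

    import math

    C = 0.2   # contrast threshold in log-intensity units (assumed value)

    def update_pixel(log_ref, intensity):
        """Idealized pixel model: fire +1/-1 when log-intensity moves by more than C,
        resetting the reference level; otherwise stay silent."""
        log_now = math.log(intensity)
        if log_now - log_ref >= C:
            return +1, log_now
        if log_ref - log_now >= C:
            return -1, log_now
        return 0, log_ref

Because the threshold is relative, the same setting covers dim and bright scenes: a step from 10 to 13 lux and a step from 1,000 to 1,300 lux both change log-intensity by about 0.26 and both trigger an event.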

I have existing image-based datasets; can I use them to train event-based models?

Yes, you can leverage our “Video to Event Simulator”. This is a Python script that allows you to transform frame-based images or videos into event-based counterparts. Those event-based files can then be used to train event-based models.
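The core idea behind such a simulator can be sketched as follows (a minimal illustration, not Prophesee's tool; the threshold is assumed and timestamps are not interpolated between frames as a real simulator would do): compare each video frame with a per-pixel log-intensity reference and emit an event for every contrast-threshold crossing.

    import numpy as np

    C = 0.2   # contrast threshold in log-intensity units (assumed value)

    def frames_to_events(frames, timestamps):
        """Convert a grayscale video (sequence of float arrays) into (x, y, p, t)
        events by thresholding per-pixel log-intensity changes between frames."""
        log_ref = np.log(frames[0] + 1e-6)
        events = []
        for frame, t in zip(frames[1:], timestamps[1:]):
            log_now = np.log(frame + 1e-6)
            diff = log_now - log_ref
            for polarity, mask in ((+1, diff >= C), (-1, diff <= -C)):
                ys, xs = np.nonzero(mask)
                events.extend((int(x), int(y), polarity, t) for x, y in zip(xs, ys))
                log_ref[mask] = log_now[mask]   # reset reference where events fired
        return events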

Contact us for access to even more advanced simulator tools.

FIND OUT WHAT PROPHESEE METAVISION® TECHNOLOGIES CAN BRING TO YOUR XR PROJECTS
