Welcome to the Prophesee Research Library, where academic innovation meets the world’s most advanced event-based vision technologies.

We have brought together groundbreaking research from scholars who are pushing the boundaries with Prophesee event-based vision technologies, to inspire collaboration and drive new breakthroughs in the academic community.

Introducing the Prophesee Research Library, the largest curated collection of academic papers leveraging Prophesee event-based vision.

Together, let’s reveal the invisible and shape the future of Computer Vision.


Temporal-Mapping Photography for Event Cameras

In this paper, we realize, for the first time, event-to-dense-intensity-image conversion using a stationary event camera in static scenes. Unlike traditional methods that rely mainly on event integration, the proposed Event-Based Temporal Mapping Photography (EvTemMap) instead measures the time at which each pixel emits its events.
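
To make the temporal-mapping idea concrete, below is a minimal NumPy sketch, assuming events arrive as (x, y, t) arrays from a stationary camera whose incident light is ramped up so that brighter pixels trigger earlier. The per-pixel first-event timestamp is pushed through a calibrated, monotonically decreasing time-to-intensity curve. The function names and the toy calibration are illustrative assumptions, not the authors' code.

    import numpy as np

    def temporal_map(x, y, t, height, width, time_to_intensity):
        """Convert per-pixel first-event timestamps to a dense intensity image."""
        first_t = np.full((height, width), np.inf)
        np.minimum.at(first_t, (y, x), t)   # earliest event time at each pixel
        image = np.zeros((height, width))
        fired = np.isfinite(first_t)
        # Darker pixels cross the contrast threshold later, so the calibrated
        # mapping from timestamp to intensity is monotonically decreasing.
        image[fired] = time_to_intensity(first_t[fired])
        return image

    # Toy calibration: later timestamps map linearly to darker values.
    # image = temporal_map(x, y, t, 480, 640,
    #                      lambda ts: 255.0 * (1.0 - ts / ts.max()))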

Event Cameras in Automotive Sensing: A Review

This article explores the applications, benefits, and challenges of event cameras in two critical domains within the automotive industry. The review also highlights relevant datasets and methodologies, enabling researchers to make informed decisions tailored to their specific vehicular technology and to place their work in the broader context of event-camera (EC) sensing.

Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras

This work proposes Noise2Image, a method that leverages the illuminance-dependent noise characteristics of event cameras to recover the static parts of a scene, which are otherwise invisible to them. The results show that Noise2Image robustly recovers intensity images solely from noise events, providing a novel approach for capturing static scenes with event cameras without additional hardware.
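
As a rough illustration of this idea (not the paper's implementation), one can histogram noise events per pixel over a static recording and convert the resulting rate to intensity through a fitted monotonic curve; rate_to_intensity below is a hypothetical calibrated mapping.

    import numpy as np

    def noise_rate_image(x, y, height, width, duration_s):
        """Per-pixel noise-event rate (events/s) over a static recording."""
        counts = np.zeros((height, width))
        np.add.at(counts, (y, x), 1.0)      # accumulate events at each pixel
        return counts / duration_s

    # rate_to_intensity would be fitted on calibration data (uniform patches
    # at known illuminance); a monotonic mapping is assumed here.
    # intensity = rate_to_intensity(noise_rate_image(x, y, 480, 640, 10.0))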

Object Detection with Spiking Neural Networks on Automotive Event Data

In this work, we propose to train spiking neural networks (SNNs) directly on data coming from event cameras to design fast and efficient automotive embedded applications. SNNs are more biologically realistic neural networks in which neurons communicate through discrete, asynchronous spikes, a naturally energy-efficient and hardware-friendly operating mode.
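
For readers new to SNNs, the sketch below shows one generic leaky integrate-and-fire (LIF) timestep in NumPy. It illustrates the discrete, asynchronous spiking described above; it is not the specific neuron model or training scheme used in the paper.

    import numpy as np

    def lif_step(v, input_current, tau=0.9, v_th=1.0):
        """One leaky integrate-and-fire timestep for a layer of neurons."""
        v = tau * v + input_current           # leaky integration of input
        spikes = (v >= v_th).astype(v.dtype)  # spike on threshold crossing
        v = v * (1.0 - spikes)                # hard reset for spiking neurons
        return v, spikes

    # Toy usage: four neurons driven by random event-derived input.
    v = np.zeros(4)
    for _ in range(10):
        v, spikes = lif_step(v, np.random.rand(4))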



FEATURED PAPERS

FEATURED IN NATURE

LOW-LATENCY AUTOMOTIVE VISION WITH EVENT CAMERAS

University of Zurich

Advanced driver assistance systems using RGB cameras face a bandwidth–latency trade-off. Event cameras, which measure intensity changes asynchronously, offer high temporal resolution and sparsity, reducing these requirements. However, existing event-camera-based algorithms either lack accuracy or sacrifice efficiency. This paper proposes a hybrid object detector that combines event- and frame-based data, leveraging the advantages of both modalities to achieve efficient, high-rate object detections with reduced latency. Pairing a 20 fps RGB camera with an event camera matches the latency of a 5,000 fps camera at the bandwidth of a 45 fps camera without compromising accuracy, paving the way for efficient and robust perception in edge-case scenarios.
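
The gist of the hybrid scheme can be sketched as follows; detect_frame and update_with_events are hypothetical stand-ins for the frame and event branches, and the loop only shows how low-rate frame detections are refreshed by high-rate event updates between frames, not the paper's architecture.

    def hybrid_detections(frames, event_slices, detect_frame, update_with_events):
        """Interleave low-rate frame detections with event-driven updates."""
        detections = []
        for frame, events_until_next in zip(frames, event_slices):
            dets = detect_frame(frame)          # accurate but low-rate (20 fps)
            detections.append(dets)
            for chunk in events_until_next:     # sparse, high-rate updates
                dets = update_with_events(dets, chunk)
                detections.append(dets)
        return detections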

HONORABLE MENTION AT CVPR 2024

EVENTPS: REAL-TIME PHOTOMETRIC STEREO USING AN EVENT CAMERA

Peking University, Shanghai Jiao Tong University, The University of Tokyo, National Institute of Informatics

This paper introduces EventPS, a novel approach to real-time photometric stereo using an event camera. Capitalizing on the exceptional temporal resolution, dynamic range, and low bandwidth characteristics of event cameras, EventPS estimates surface normals solely from radiance changes, significantly enhancing data efficiency. EventPS seamlessly integrates with both optimization-based and deep-learning-based photometric stereo techniques to offer a robust solution for non-Lambertian surfaces. Extensive experiments validate the effectiveness and efficiency of EventPS compared to frame-based counterparts. Our algorithm runs at over 30 fps in real-world scenarios, unleashing the potential of EventPS in time-sensitive and high-speed downstream applications.
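
The optimization-based flavor can be illustrated with a small null-space solve: under a Lambertian model I_i = albedo * (l_i . n), the albedo cancels in radiance ratios, so each event, which encodes a log-radiance change between two light directions, yields one linear constraint on the normal. The sketch below is an illustration under those assumptions, not the authors' implementation.

    import numpy as np

    def normal_from_ratios(light_pairs, log_ratios):
        """Recover a unit normal from radiance-ratio constraints.

        For lights (l1, l2) and measured r = log(I2 / I1), the Lambertian
        model gives (l2 - exp(r) * l1) . n = 0, one row per event.
        """
        rows = [l2 - np.exp(r) * l1
                for (l1, l2), r in zip(light_pairs, log_ratios)]
        A = np.stack(rows)                  # every row is orthogonal to n
        _, _, vt = np.linalg.svd(A)
        n = vt[-1]                          # null-space direction of A
        return n if n[2] > 0 else -n        # orient the normal toward the camera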

TOP AUTOMOTIVE BENCHMARK


DSEC: A STEREO EVENT CAMERA DATASET FOR DRIVING SCENARIOS

University of Zurich, ETH Zurich

Autonomous driving has advanced significantly with corporate funding, yet it still struggles in challenging illumination conditions such as night, sunrise, and sunset, where standard cameras are pushed to their limits by low light and high dynamic range. To address these challenges, this paper introduces DSEC, a new dataset recorded in such demanding illumination conditions. It provides a rich set of sensory data from a wide-baseline stereo setup of two color frame cameras and two high-resolution monochrome event cameras, along with lidar and RTK GPS measurements, both hardware-synchronized with all camera data. DSEC is notable for its high-resolution event cameras, which excel in temporal resolution and dynamic range. Comprising 53 sequences in varied lighting, the dataset supplies ground-truth disparity for developing and evaluating event-based stereo algorithms, and it is the first high-resolution, large-scale stereo dataset with event cameras.
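
A typical first step with DSEC is reading a sequence's raw events from its HDF5 files. The snippet below assumes the dataset's published layout (datasets events/x, events/y, events/t, events/p plus a t_offset scalar) and a hypothetical sequence path; verify the field names against the DSEC documentation before relying on them.

    import h5py
    import numpy as np

    # Hypothetical path to one DSEC sequence's left event-camera recording.
    with h5py.File("interlaken_00_c/events/left/events.h5", "r") as f:
        x = np.asarray(f["events/x"])
        y = np.asarray(f["events/y"])
        t = np.asarray(f["events/t"]) + f["t_offset"][()]  # microseconds
        p = np.asarray(f["events/p"])                      # polarity, 0 or 1

    print(f"{t.size} events spanning {(t[-1] - t[0]) * 1e-6:.1f} s")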

INVENTORS AROUND THE WORLD

Aug 2024