Welcome to the Prophesee Research Library, where academic innovation meets the world’s most advanced event-based vision technologies.
We have brought together groundbreaking research from scholars who are pushing the boundaries with Prophesee Event-based Vision technologies to inspire collaboration and drive forward new breakthroughs in the academic community.
Introducing the Prophesee Research Library, the largest curated collection of academic papers leveraging Prophesee event-based vision.
Together, let’s reveal the invisible and shape the future of Computer Vision.
Temporal-Mapping Photography for Event Cameras
In this paper, we realize, for the first time, event-to-dense-intensity-image conversion using a stationary event camera in static scenes. Unlike traditional methods that rely mainly on event integration, the proposed Event-Based Temporal Mapping Photography (EvTemMap) measures the emission time of events at each pixel.
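To make the temporal-mapping idea concrete, here is a minimal sketch (ours, not the authors' EvTemMap pipeline): assuming a linearly increasing exposure or transmittance ramp, brighter pixels trigger their first event earlier, so per-pixel first-event timestamps can be inverted into a relative intensity image. Function and parameter names are illustrative.

```python
import numpy as np

def temporal_map_to_intensity(first_event_t, t_max):
    """Map per-pixel first-event timestamps to relative intensity.

    Illustrative sketch: assumes a linear exposure/transmittance ramp,
    so brighter pixels fire their first event earlier. Pixels that never
    fire (NaN) are treated as darkest.
    """
    t = np.where(np.isnan(first_event_t), t_max, first_event_t)
    # Earlier trigger time -> higher intensity; normalize to [0, 1].
    return 1.0 - np.clip(t / t_max, 0.0, 1.0)

# Usage: H x W array of first-event times (seconds), ramp length 1 s.
times = np.random.uniform(0.0, 1.0, size=(480, 640))
img = temporal_map_to_intensity(times, t_max=1.0)
```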
Efficient Gesture Recognition on Spiking Convolutional Networks Through Sensor Fusion of Event-Based and Depth Data
This work proposes a Spiking Convolutional Neural Network, processing event- and depth data for gesture recognition. The network is simulated using the open-source neuromorphic computing framework LAVA for offline training and evaluation on an embedded system.
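As a rough illustration of the sensor-fusion input (not the paper's LAVA network or its actual encoding), the sketch below stacks a polarity-weighted event histogram and a normalized depth frame into a two-channel tensor that a convolutional SNN could consume.

```python
import numpy as np

def fuse_event_depth(events, depth, shape=(128, 128)):
    """Build a two-channel input tensor from events and a depth frame.

    Illustrative only. 'events' is an (N, 4) array of (x, y, t, polarity).
    """
    ev_hist = np.zeros(shape, dtype=np.float32)
    # Accumulate +1 for ON events, -1 for OFF events at each pixel.
    np.add.at(ev_hist,
              (events[:, 1].astype(int), events[:, 0].astype(int)),
              np.where(events[:, 3] > 0, 1.0, -1.0))
    # Normalize depth to [0, 1] so both channels share a similar scale.
    depth_n = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return np.stack([ev_hist, depth_n], axis=0)  # (2, H, W)

events = np.array([[10, 20, 0.001, 1], [11, 20, 0.002, 0]])
depth = np.random.rand(128, 128).astype(np.float32)
x = fuse_event_depth(events, depth)  # shape (2, 128, 128)
```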
Event Cameras in Automotive Sensing: A Review
This article explores the applications, benefits, and challenges of event cameras in two critical domains within the automotive industry. The review also highlights relevant datasets and methodologies, enabling researchers to make informed decisions tailored to their specific vehicular-technology applications and to place their work in the broader context of event-camera (EC) sensing.
Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras
This work proposes a method, called Noise2Image, that leverages the illuminance-dependent noise characteristics of event cameras to recover the static parts of a scene, which are otherwise invisible to them. The results show that Noise2Image can robustly recover intensity images solely from noise events, providing a novel approach for capturing static scenes with event cameras without additional hardware.
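The core intuition can be sketched in a few lines: count noise events per pixel over a window and push the counts through a monotonic mapping. The power-law mapping below is a placeholder for the paper's learned illuminance-to-noise-rate model.

```python
import numpy as np

def noise_counts_to_intensity(noise_events, shape, gamma=0.5):
    """Recover a relative intensity image from noise-event counts.

    Sketch under assumptions: count events per pixel over a fixed window,
    then apply a monotonic (here power-law) mapping standing in for the
    paper's learned noise-rate model. 'noise_events' is (N, 2) as (x, y).
    """
    counts = np.zeros(shape, dtype=np.float32)
    np.add.at(counts,
              (noise_events[:, 1].astype(int), noise_events[:, 0].astype(int)),
              1.0)
    counts /= counts.max() + 1e-8   # normalize to [0, 1]
    return counts ** gamma
```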
Object Detection with Spiking Neural Networks on Automotive Event Data
In this work, we propose to train spiking neural networks (SNNs) directly on data coming from event cameras to design fast and efficient automotive embedded applications. SNNs are more biologically realistic neural networks in which neurons communicate using discrete, asynchronous spikes, a naturally energy-efficient and hardware-friendly operating mode.
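For readers new to SNNs, a textbook leaky integrate-and-fire neuron (generic, not the paper's exact model) shows what "discrete and asynchronous spikes" means in practice: the membrane potential integrates input, leaks, and emits a sparse binary spike train.

```python
import numpy as np

def lif_neuron(inputs, tau=20.0, v_thresh=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron, the basic unit behind SNNs.

    The membrane potential leaks toward zero, integrates input current,
    and emits a discrete spike (then resets) when it crosses threshold.
    """
    v, spikes = 0.0, []
    for i in inputs:
        v += dt * (-v / tau + i)   # leak + integrate
        fired = v >= v_thresh
        spikes.append(int(fired))
        if fired:
            v = 0.0                # reset after spike
    return spikes

print(lif_neuron(np.full(50, 0.08)))  # sparse, discrete spike train
```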
Enhancing Visual Place Recognition via Fast and Slow Adaptive Biasing in Event Cameras
This paper introduces feedback control algorithms that automatically tune the bias parameters through two interacting methods: 1) an immediate, on-the-fly fast adaptation of the refractory period, which sets the minimum interval between consecutive events, and 2) a slower adaptation of the pixel bandwidth and event thresholds if the event rate remains outside the specified bounds even after repeated refractory-period changes.
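A single iteration of such a two-speed controller might look like the sketch below; the bounds, step sizes, and the escalation rule are placeholders, not the paper's tuned values.

```python
def adapt_biases(event_rate, refractory_us, bounds=(1e5, 1e6),
                 step=1.5, refr_limits=(10, 10_000)):
    """One iteration of a two-speed bias controller (illustrative sketch).

    Loosely follows the fast/slow scheme: first nudge the refractory
    period (fast); only flag a slow bandwidth/threshold adaptation if
    the rate stays out of bounds at the refractory limits.
    """
    lo, hi = bounds
    if event_rate > hi:
        refractory_us = min(refractory_us * step, refr_limits[1])
    elif event_rate < lo:
        refractory_us = max(refractory_us / step, refr_limits[0])
    slow_adapt_needed = (
        (event_rate > hi and refractory_us >= refr_limits[1]) or
        (event_rate < lo and refractory_us <= refr_limits[0])
    )
    return refractory_us, slow_adapt_needed
```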
FEATURED PAPERS
LOW-LATENCY AUTOMOTIVE VISION WITH EVENT CAMERAS
University of Zurich
Advanced driver assistance systems using RGB cameras face a bandwidth–latency trade-off. Event cameras, which measure intensity changes asynchronously, offer high temporal resolution and sparsity, reducing these requirements. However, event-camera-based algorithms either lack accuracy or sacrifice efficiency. This paper proposes a hybrid object detector combining event- and frame-based data, leveraging the advantages of both modalities to achieve efficient, high-rate object detections with reduced latency. Pairing a 20 fps RGB camera with an event camera matches the latency of a 5,000 fps camera at the bandwidth of a 45 fps camera while maintaining accuracy. This method enables efficient and robust perception in edge-case scenarios.
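The hybrid architecture can be summarized as a simple loop (our reading of the idea, not the authors' code): a full detector runs on each sparse RGB frame, and an event-driven module refreshes the detections between frames. All callables below are hypothetical placeholders.

```python
def hybrid_detect(frames_20fps, event_stream, frame_detector, event_updater):
    """Skeleton of a hybrid frame + event detection loop (illustrative).

    'frames_20fps' yields RGB frames; 'event_stream' yields, per frame,
    the event chunks arriving before the next frame. 'frame_detector'
    and 'event_updater' are placeholder callables.
    """
    for frame, events_until_next in zip(frames_20fps, event_stream):
        detections = frame_detector(frame)   # accurate, but only at 20 Hz
        yield detections
        for chunk in events_until_next:      # high-rate, low-latency updates
            detections = event_updater(detections, chunk)
            yield detections
```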
EVENTPS: REAL-TIME PHOTOMETRIC STEREO USING AN EVENT CAMERA
Peking University, Shanghai Jiao Tong University, The University of Tokyo, National Institute of Informatics
This paper introduces EventPS, a novel approach to real-time photometric stereo using an event camera. Capitalizing on the exceptional temporal resolution, dynamic range, and low-bandwidth characteristics of event cameras, EventPS estimates surface normals solely from radiance changes, significantly enhancing data efficiency. EventPS integrates seamlessly with both optimization-based and deep-learning-based photometric stereo techniques to offer a robust solution for non-Lambertian surfaces. Extensive experiments validate the effectiveness and efficiency of EventPS compared to frame-based counterparts. The algorithm runs at over 30 fps in real-world scenarios, unleashing the potential of EventPS in time-sensitive and high-speed downstream applications.
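EventPS builds on the classic Lambertian least-squares formulation, substituting event-derived radiance changes for absolute intensities. The sketch below shows only that textbook core, not the paper's event-specific solver.

```python
import numpy as np

def surface_normals(L, I):
    """Classic Lambertian photometric stereo via least squares.

    I (K x P): per-pixel radiance measurements under K light directions
    L (K x 3). Solves I = L @ n for the albedo-scaled normal at each
    pixel, then normalizes to unit length.
    """
    n, *_ = np.linalg.lstsq(L, I, rcond=None)        # (3, P)
    norm = np.linalg.norm(n, axis=0, keepdims=True)
    return n / np.maximum(norm, 1e-8)
```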
DSEC: A STEREO EVENT CAMERA DATASET FOR DRIVING SCENARIOS
University of Zurich, ETH Zurich
Autonomous driving has advanced significantly with corporate funding, yet it still struggles in challenging illumination conditions such as night, sunrise, and sunset, where standard cameras are pushed to their limits in low light and high dynamic range. To address these challenges, this paper introduces DSEC, a new dataset captured under such demanding illumination conditions. It provides rich sensory data from a wide-baseline stereo setup of two color frame cameras and two high-resolution monochrome event cameras, along with lidar and RTK GPS measurements, all hardware-synchronized with the camera data. DSEC is notable for its high-resolution event cameras, which excel in temporal resolution and dynamic range. Comprising 53 sequences in varied lighting, the dataset provides ground-truth disparity for developing and evaluating event-based stereo algorithms. It is the first high-resolution, large-scale stereo dataset with event cameras.
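Getting started with DSEC typically means reading its HDF5 event files. The sketch below assumes the 'events/x|y|t|p' layout of the public release and a hypothetical sequence path; verify the exact paths and keys at https://dsec.ifi.uzh.ch before relying on them.

```python
import h5py
import numpy as np

# Hedged sketch: path and key names assume the public DSEC release layout.
with h5py.File("interlaken_00_c/events/left/events.h5", "r") as f:
    ev = f["events"]
    x = np.asarray(ev["x"][:1_000_000])
    y = np.asarray(ev["y"][:1_000_000])
    t = np.asarray(ev["t"][:1_000_000])   # timestamps in microseconds
    p = np.asarray(ev["p"][:1_000_000])   # polarity {0, 1}

print(f"{len(t)} events spanning {(t[-1] - t[0]) / 1e6:.3f} s")
```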