Welcome to the Prophesee Research Library, where academic innovation meets the world’s most advanced event-based vision technologies.
We have brought together groundbreaking research from scholars pushing the boundaries with Prophesee event-based vision technologies, to inspire collaboration and drive new breakthroughs across the academic community.
Introducing the Prophesee Research Library, the largest curated collection of academic papers leveraging Prophesee event-based vision.
Together, let’s reveal the invisible and shape the future of Computer Vision.
MMDVS-LF: Multi-Modal Dynamic Vision Sensor and Eye-Tracking Dataset for Line Following
This paper proposes a novel, computationally efficient regularizer to mitigate event collapse in the CMax (Contrast Maximization) framework. From a theoretical point of view, the regularizer is designed on geometric principles of motion-field deformation, measuring the rate of change of area along point trajectories.
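For readers unfamiliar with the CMax framework referenced above: it estimates motion by warping events along candidate point trajectories and maximizing the sharpness (e.g. variance) of the resulting image of warped events. The sketch below is a minimal, hypothetical illustration of that base objective only, not the paper's code, and it does not include the proposed regularizer; the `contrast` helper and the synthetic moving-edge data are assumptions for demonstration.

```python
import numpy as np

def contrast(events, v, shape=(64, 64), t_ref=0.0):
    """CMax objective: variance of the image of warped events.

    events: (N, 3) array of (x, y, t); v: candidate 2-D velocity (px/s).
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # Warp each event back to the reference time along the candidate flow.
    xw = np.round(x - v[0] * (t - t_ref)).astype(int)
    yw = np.round(y - v[1] * (t - t_ref)).astype(int)
    # Keep only events that land inside the image bounds.
    ok = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
    img = np.zeros(shape)
    np.add.at(img, (yw[ok], xw[ok]), 1.0)  # accumulate event counts
    return img.var()

# Synthetic vertical edge translating at v = (5, 0) px/s.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 2000)
x = 20.0 + 5.0 * t + rng.normal(0.0, 0.2, 2000)
y = rng.uniform(0.0, 64.0, 2000)
events = np.stack([x, y, t], axis=1)

# Warping with the true velocity sharpens the image more than no warp.
sharp = contrast(events, np.array([5.0, 0.0]))
blurred = contrast(events, np.array([0.0, 0.0]))
```

Event collapse occurs when the optimizer finds degenerate warps that pile all events onto a few pixels, inflating this objective; the paper's area-based regularizer penalizes exactly that kind of trajectory deformation.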
eTraM: Event-based Traffic Monitoring Dataset
Event cameras offer high temporal resolution and efficiency but remain underutilized in static traffic monitoring. We present eTraM, a first-of-its-kind event-based dataset with 10 hours of traffic data, 2M annotations, and eight participant classes. Evaluated with RVT, RED, and YOLOv8, eTraM highlights the potential of event cameras for real-world applications.
SEVD: Synthetic Event-based Vision Dataset for Ego and Fixed Traffic Perception
This paper evaluates the dataset using state-of-the-art event-based (RED, RVT) and frame-based (YOLOv8) methods for traffic participant detection tasks and provides baseline benchmarks for assessment. Additionally, the authors conduct experiments to assess the synthetic event-based dataset’s generalization capabilities.
Object Detection using Event Camera: A MoE Heat Conduction based Detector and A New Benchmark Dataset
This paper introduces a novel MoE (Mixture of Experts) heat conduction-based object detection algorithm that effectively balances accuracy and computational efficiency. A stem network first embeds the event data, which is then processed by the proposed MoE-HCO blocks.
x-RAGE: eXtended Reality – Action & Gesture Events Dataset
This paper presents the first event-camera based egocentric gesture dataset for enabling neuromorphic, low-power solutions for XR-centric gesture recognition.
Event Stream based Sign Language Translation: A High-Definition Benchmark Dataset and A New Algorithm
Traditional SLT based on visible-light video is easily affected by factors such as lighting, rapid hand movements, and privacy concerns; this paper instead proposes the use of high-definition event streams for SLT, effectively mitigating these issues.