RESEARCH PAPERS
Astrometric Calibration and Source Characterisation of the Latest Generation Neuromorphic Event-based Cameras for Space Imaging
In this paper, the traditional techniques of conventional astronomy are reconsidered to properly utilise the event-based camera for space imaging and space situational awareness.
Motion Segmentation for Neuromorphic Aerial Surveillance
This paper addresses these challenges by introducing a novel motion segmentation method that leverages self-supervised vision transformers on both event data and optical flow information. Our approach eliminates the need for human annotations and reduces dependency on scene-specific parameters.
CoSEC: A Coaxial Stereo Event Camera Dataset for Autonomous Driving
This paper introduces hybrid coaxial event-frame devices to build a multimodal system and proposes the Coaxial Stereo Event Camera (CoSEC) dataset for autonomous driving. The multimodal system first uses a microcontroller to achieve time synchronization and then spatially calibrates the different sensors, performing intra- and inter-calibration of the stereo coaxial devices.
Ev-Layout: A Large-scale Event-based Multi-modal Dataset for Indoor Layout Estimation and Tracking
This paper presents Ev-Layout, a novel large-scale event-based multi-modal dataset designed for indoor layout estimation and tracking. A key contribution of Ev-Layout is a hybrid data collection platform (with a head-mounted display and VR interface) that integrates both RGB and bio-inspired event cameras to capture indoor layouts in motion.
RGBE-Gaze: A Large-Scale Event-Based Multimodal Dataset for High Frequency Remote Gaze Tracking
This paper presents a dataset annotated with characteristics such as head pose, gaze direction, and pupil size. It also introduces a hybrid frame-event gaze estimation method designed specifically for the collected dataset, and performs extensive evaluations of benchmark methods under various gaze-related factors.
Synthetic Lunar Terrain: A Multimodal Open Dataset for Training and Evaluating Neuromorphic Vision Algorithms
Synthetic Lunar Terrain (SLT) is an open dataset collected from an analogue test site for lunar missions, featuring synthetic craters in a high-contrast lighting setup. It includes several side-by-side captures from event-based and conventional RGB cameras, supplemented with a high-resolution 3D laser scan for depth estimation.
HUE Dataset: High-Resolution Event and Frame Sequences for Low-Light Vision
Low-light environments pose significant challenges for image enhancement methods. To address these challenges, this work introduces the HUE dataset, a comprehensive collection of high-resolution event and frame sequences captured in diverse and challenging low-light conditions.
M2P2: A Multi-Modal Passive Perception Dataset for Off-Road Mobility in Extreme Low-Light Conditions
eCARLA-scenes: A synthetically generated dataset for event-based optical flow prediction
This paper addresses the lack of such datasets by introducing eWiz, a comprehensive library for processing event-based data. It includes tools for data loading, augmentation, visualization, encoding, and training-data generation, along with loss functions and performance metrics.
MouseSIS: A Frames-and-Events Dataset for Space-Time Instance Segmentation of Mice
This paper proposes a novel, computationally efficient regularizer to mitigate event collapse in the CMax framework. From a theoretical point of view, the regularizer is designed based on geometric principles of motion field deformation (measuring area rate of change along point trajectories).
M3ED: Multi-Robot, Multi-Sensor, Multi-Environment Event Dataset
This paper presents M3ED, the first multi-sensor event camera dataset focused on high-speed dynamic motions in robotics applications. M3ED provides high-quality synchronized and labeled data from multiple platforms, including ground vehicles, legged robots, and aerial robots, operating in challenging conditions such as driving along off-road trails, navigating through dense forests, and performing aggressive flight maneuvers.
N-ROD: a Neuromorphic Dataset for Synthetic-to-Real Domain Adaptation
This paper analyzes the synth-to-real domain shift in event data, i.e., the gap arising between simulated events obtained from synthetic renderings and those captured with a real camera on real images.
Real-time event simulation with frame-based cameras
This work proposes simulation methods that improve the performance of event simulation by two orders of magnitude (making them real-time capable) while remaining competitive in the quality assessment.
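At its core, frame-based event simulation uses a per-pixel contrast-threshold model: an event fires each time the log intensity at a pixel drifts by a fixed contrast C from its value at the last event. A minimal NumPy sketch of that model (the function name, the threshold value, and the omission of per-event timestamp interpolation and noise are all simplifications, not the paper's method):

```python
import numpy as np

def events_from_frames(next_frame, ref_log, C=0.2):
    """Emit (x, y, polarity) events wherever the log-intensity change
    since the last event crosses the contrast threshold C.
    ref_log holds each pixel's log intensity at its last event and is
    updated in place for the next frame."""
    log_next = np.log(next_frame.astype(np.float64) + 1e-3)
    diff = log_next - ref_log
    n_crossings = np.floor(np.abs(diff) / C).astype(int)
    events = []
    for y, x in zip(*np.nonzero(n_crossings)):
        pol = 1 if diff[y, x] > 0 else -1
        events.extend((x, y, pol) for _ in range(n_crossings[y, x]))
        # move the per-pixel reference by the number of fired thresholds
        ref_log[y, x] += pol * n_crossings[y, x] * C
    return events, ref_log
```

Called frame by frame, this produces the event stream a real sensor would emit between consecutive images; the speedups reported in the paper come from optimising exactly this inner loop.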
Object Tracking with an Event Camera
A Monocular Event-Camera Motion Capture System
Omnidirectional Event-based Vision: Theory and Applications
In recent years, the use of event cameras has surged in computer vision and robotics, and these sensors are driving a growing number of research projects on topics such as autonomous vehicles.
Object Detection Method with Spiking Neural Network Based on DT-LIF Neuron and SSD
This paper proposes an object detection method combining a spiking neural network (SNN) based on the Dynamic Threshold Leaky Integrate-and-Fire (DT-LIF) neuron with the Single Shot multibox Detector (SSD). First, a DT-LIF neuron is designed that dynamically adjusts its threshold according to the cumulative membrane potential, driving spike activity in the deep network and improving inference speed.
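To make the dynamic-threshold idea concrete, here is an illustrative LIF neuron whose firing threshold adapts with the cumulative membrane potential; the specific adaptation rule (lowering the threshold as potential accumulates, to encourage earlier spiking) and all constants are assumptions for illustration, not taken from the paper:

```python
class DTLIFNeuron:
    """Illustrative Dynamic-Threshold LIF neuron: the firing threshold
    is lowered as membrane potential accumulates, so the neuron spikes
    earlier than a fixed-threshold LIF. Rule and constants are assumed."""

    def __init__(self, v_th0=1.0, decay=0.9, alpha=0.15, v_th_min=0.2):
        self.v = 0.0            # membrane potential
        self.v_acc = 0.0        # cumulative membrane potential
        self.v_th0 = v_th0      # base firing threshold
        self.decay = decay      # leak factor per step
        self.alpha = alpha      # threshold adaptation rate
        self.v_th_min = v_th_min

    def step(self, x):
        self.v = self.decay * self.v + x
        self.v_acc += self.v
        # dynamic threshold: drops with accumulated potential, floored
        v_th = max(self.v_th_min, self.v_th0 - self.alpha * self.v_acc)
        if self.v >= v_th:
            self.v = 0.0        # hard reset after a spike
            return 1
        return 0
```

With a constant sub-threshold input, this neuron fires a step earlier than the same model with `alpha=0` (a plain LIF), which is the kind of latency gain the abstract attributes to DT-LIF.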
Asynchronous Optimisation for Event-based Visual Odometry
This paper focuses on event-based visual odometry (VO). While existing event-driven VO pipelines have adopted continuous-time representations to asynchronously process event data, they either assume a known map, restrict the camera to planar trajectories, or integrate other sensors into the system. Towards map-free event-only monocular VO in SE(3), we propose an asynchronous structure-from-motion optimisation back-end.
Target-free Extrinsic Calibration of Event-LiDAR Dyad using Edge Correspondences
This paper proposes a novel method to calibrate the extrinsic parameters between a dyad of an event camera and a LiDAR without the need for a calibration board or other equipment. Our approach takes advantage of the fact that when an event camera is in motion, changes in reflectivity and geometric edges in the environment trigger numerous events, which can also be captured by LiDAR.
Tracking and Depth Estimation with an Event-based Stereo Rig on Board an Autonomous Vehicle
In this article, we propose a prototype event-based stereo pipeline for 3D reconstruction and tracking of a moving camera. The 3D reconstruction module relies on DSI ("disparity space image") fusion, while the tracking module uses time surfaces as anisotropic distance fields to estimate the camera pose.
High-definition event frame generation using SoC FPGA devices
A New Stereo Fisheye Event Camera for Fast Drone Detection and Tracking
This paper presents a new compact vision sensor consisting of two fisheye event cameras mounted back-to-back, offering a full 360-degree view of the surrounding environment. We describe the optical design and projection model of the novel stereo camera, called SFERA, as well as its practical calibration using the incoming stream of events.
Ultra-High-Frequency Harmony: mmWave Radar and Event Camera Orchestrate Accurate Drone Landing
This paper replaces the traditional frame camera with an event camera, a novel sensor whose sampling frequency harmonizes with the mmWave radar in the ground platform setup, and introduces mmE-Loc, a high-precision, low-latency ground localization system designed for drone landings.
MMDVS-LF: Multi-Modal Dynamic Vision Sensor and Eye-Tracking Dataset for Line Following
eTraM: Event-based Traffic Monitoring Dataset
Event cameras offer high temporal resolution and efficiency but remain underutilized in static traffic monitoring. We present eTraM, a first-of-its-kind event-based dataset with 10 hours of traffic data, 2M annotations, and eight participant classes. Evaluated with RVT, RED, and YOLOv8, eTraM highlights the potential of event cameras for real-world applications.
SEVD: Synthetic Event-based Vision Dataset for Ego and Fixed Traffic Perception
This paper evaluates the dataset using state-of-the-art event-based (RED, RVT) and frame-based (YOLOv8) methods for traffic participant detection and provides baseline benchmarks for assessment. Additionally, the authors conduct experiments to assess the synthetic event-based dataset's generalization capabilities.
Object Detection using Event Camera: A MoE Heat Conduction based Detector and A New Benchmark Dataset
This paper introduces a novel MoE (Mixture of Experts) heat conduction-based object detection algorithm that effectively balances accuracy and computational efficiency. Initially, we employ a stem network for event data embedding, followed by processing through our innovative MoE-HCO blocks.
x-RAGE: eXtended Reality – Action & Gesture Events Dataset
This paper presents the first event-camera based egocentric gesture dataset for enabling neuromorphic, low-power solutions for XR-centric gesture recognition.
Event Stream based Sign Language Translation: A High-Definition Benchmark Dataset and A New Algorithm
Unlike traditional SLT based on visible light videos, which is easily affected by factors such as lighting, rapid hand movements, and privacy breaches, this paper proposes the use of high-definition Event streams for SLT, effectively mitigating the aforementioned issues.
Event Stream-based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline
This paper proposes a novel hierarchical knowledge distillation framework that can fully utilize multi-modal / multi-view information during training to facilitate knowledge transfer, enabling us to achieve high-speed and low-latency visual tracking during testing by using only event signals.
A Benchmark Dataset for Event-Guided Human Pose Estimation and Tracking in Extreme Conditions
This paper introduces a new hybrid dataset encompassing both RGB and event data for human pose estimation and tracking in two extreme scenarios: low-light and motion blur environments.
EventSTR: A Benchmark Dataset and Baselines for Event Stream based Scene Text Recognition
This paper proposes to recognize the scene text using bio-inspired event cameras by collecting and annotating a large-scale benchmark dataset, termed EventSTR. It contains 9,928 high-definition (1280 × 720) event samples and involves both Chinese and English characters.
Falcon ODIN: an event-based camera payload
This paper describes the mission design and objectives of Falcon ODIN, along with ground-based testing of all four cameras. Falcon ODIN contains two event-based cameras (EBCs) and two traditional framing cameras, along with mirrors mounted on azimuth-elevation rotation stages that allow the EBCs' field of regard to move.
Person Re-Identification without Identification via Event anonymization
This paper proposes an end-to-end network architecture jointly optimized for the twofold objective of preserving privacy and performing a downstream task such as person re-identification (ReID).
Ultra-Efficient On-Device Object Detection on AI-Integrated Smart Glasses with TinyissimoYOLO
This paper illustrates the design and implementation of tiny machine-learning algorithms exploiting novel low-power processors to enable prolonged continuous operation in smart glasses.
Fast 3D reconstruction via event-based structured light with spatio-temporal coding
This work builds an event-based SL system that consists of a laser point projector and an event camera, and devises a spatial-temporal coding strategy that realizes depth encoding in dual domains through a single shot.
Asynchronous Blob Tracker for Event Cameras
Heart Rate Detection Using an Event Camera
This paper proposes to harness the subtle motion of the skin surface caused by the pulsatile flow of blood in the wrist region. It investigates whether an event camera can be used for continuous noninvasive monitoring of heart rate (HR). Event camera video data from 25 participants, spanning varying age groups and skin colours, was collected and analysed.
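One generic way to turn such recordings into a heart-rate estimate is to count events per time window over the wrist region and pick the dominant spectral peak within the physiological band. The sketch below shows that baseline (a hypothetical pipeline for illustration, not the paper's method; band limits of 0.7-3 Hz, i.e. 42-180 BPM, are assumed):

```python
import numpy as np

def heart_rate_from_event_counts(counts, fs, lo=0.7, hi=3.0):
    """Estimate heart rate in BPM from a per-window event-count signal
    sampled at fs Hz: remove the mean, take the FFT, and return the
    dominant frequency in the [lo, hi] Hz band converted to BPM."""
    x = np.asarray(counts, dtype=float)
    x = x - x.mean()                              # drop the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mag = np.abs(np.fft.rfft(x))
    band = (freqs >= lo) & (freqs <= hi)          # physiological band
    f_peak = freqs[band][np.argmax(mag[band])]
    return 60.0 * f_peak
```

For example, an event-count signal pulsing at 1.2 Hz yields an estimate of 72 BPM; real pipelines would add band-pass filtering and motion rejection on top of this.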
SPADES: A Realistic Spacecraft Pose Estimation Dataset using Event Sensing
This paper proposes an effective data filtering method to improve the quality of training data, thus enhancing model performance. Additionally, it introduces an image-based event representation that outperforms existing representations.
Event Camera and LiDAR based Human Tracking for Adverse Lighting Conditions in Subterranean Environments
This paper proposes a novel LiDAR and event camera fusion modality for subterranean (SubT) environments, enabling fast and precise object and human detection in a wide variety of adverse lighting conditions, such as low or no light, high-contrast zones, and the presence of blinding light sources.