RESEARCH PAPERS
A 5-Point Minimal Solver for Event Camera Relative Motion Estimation
This paper introduces a novel minimal 5-point solver that jointly estimates line parameters and linear camera velocity projections, which can be fused into a single, averaged linear velocity when considering multiple lines.
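As a rough illustration of the fusion step, the sketch below recovers a single linear velocity from per-line scalar projections by least squares; the function name and the assumption that each line contributes a projection of the velocity onto a known unit direction are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_velocity_projections(directions, projections):
    """Fuse per-line velocity projections into one linear velocity.

    directions:  (N, 3) unit vectors d_i, one per observed line (assumed)
    projections: (N,)   scalars p_i = d_i . v from the minimal solver
    Solves D v = p in the least-squares sense; with consistent inputs this
    reduces to the averaging described in the summary above.
    """
    D = np.asarray(directions, dtype=float)
    p = np.asarray(projections, dtype=float)
    v, *_ = np.linalg.lstsq(D, p, rcond=None)
    return v

# Toy usage: three lines observing the same ground-truth velocity.
v_true = np.array([0.2, -0.1, 1.0])
D = np.eye(3)
print(fuse_velocity_projections(D, D @ v_true))  # ~[0.2, -0.1, 1.0]
```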
YCB-Ev 1.1: Event-vision dataset for 6DoF object pose estimation
This work introduces the YCB-Ev dataset, which contains synchronized RGB-D frames and event data, enabling the evaluation of 6DoF object pose estimation algorithms across these modalities. The dataset provides ground-truth 6DoF object poses for the same 21 YCB objects used in the YCB-Video (YCB-V) dataset, allowing cross-dataset evaluation of algorithm performance.
MoveEnet: Online High-Frequency Human Pose Estimation With an Event Camera
This paper proposes MoveEnet, a human pose estimation system that takes events from a camera as input and estimates the 2D pose of the human agent in the scene. The final system can be attached to any event camera, regardless of resolution.
EvTTC: An Event Camera Dataset for Time-to-Collision Estimation
To explore the potential of event cameras in challenging high-relative-speed scenarios, this paper proposes EvTTC, the first multi-sensor dataset focusing on time-to-collision (TTC) tasks. EvTTC consists of data collected using standard cameras and event cameras, covering various potential collision scenarios in daily driving and involving multiple collision objects.
Spatio-temporal Transformers for Action Unit Classification with Event Cameras
This paper proposes a novel spatio-temporal Vision Transformer model that uses Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) to enhance the accuracy of Action Unit classification from event streams.
Event-based vision in magneto-optic Kerr effect microscopy
This paper explores the use of event cameras as an add-on to traditional MOKE microscopy to enhance time resolution for observing magnetic domains. Event cameras improve temporal resolution to 1 µs, enabling real-time monitoring and post-processing of fast magnetic dynamics. A proof-of-concept feedback control experiment demonstrated a latency of just 25 ms, highlighting the potential for dynamic material research. Limitations of current event cameras in this application are also discussed.
Learned Event-based Visual Perception for Improved Space Object Detection
This paper presents a hybrid image- and event-based architecture for detecting dim space objects in geosynchronous orbit using dynamic vision sensing. By combining conventional feature extractors with point-cloud ones such as PointNet, the approach enhances detection performance in scenes with high background activity. An event-based imaging simulator is also developed for model training and sensor parameter optimization, demonstrating improved recall for dim objects in challenging conditions.
Dataset collection from a SubT environment
This paper introduces a dataset from a subterranean (SubT) environment, captured with state-of-the-art sensors like RGB, RGB-D, event-based, and thermal cameras, along with 2D/3D lidars, IMUs, and UWB positioning systems. Synchronized raw data is provided in ROS message format, enabling evaluations of navigation, localization, and mapping algorithms.
EVIMO2: An Event Camera Dataset for Motion Segmentation, Optical Flow, Structure from Motion, and Visual Inertial Odometry in Indoor Scenes with Monocular or Stereo Algorithms
In this paper, a new event camera dataset, EVIMO2, is introduced that improves on the popular EVIMO dataset by providing more data, from better cameras, in more complex scenarios. As with its predecessor, EVIMO2 provides labels in the form of per-pixel ground truth depth and segmentation as well as camera and object poses.
Event-Based Motion Capture System for Online Multi-Quadrotor Localization and Tracking
This paper presents the implementation details and experimental validation of a relatively low-cost motion capture system for multi-quadrotor motion planning using an event camera. The real-time multi-quadrotor detection and tracking tasks are performed using the You-Only-Look-Once (YOLOv5) deep learning network and a k-dimensional (k-d) tree, respectively.
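The summary names a k-d tree for the tracking step; the sketch below shows one plausible way to associate fresh detections with existing tracks using scipy's k-d tree. The distance threshold and the greedy matching policy are assumptions, not details from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def associate_detections(track_positions, detections, max_dist=30.0):
    """Greedy nearest-neighbour association of detections to tracks.

    track_positions: (T, 2) last known pixel positions of tracked quadrotors
    detections:      (D, 2) centroids from the detector (e.g. YOLOv5 boxes)
    Returns (track_idx, detection_idx) pairs within max_dist pixels.
    """
    tree = cKDTree(np.asarray(track_positions, dtype=float))
    dists, idxs = tree.query(np.asarray(detections, dtype=float),
                             distance_upper_bound=max_dist)
    matches, used = [], set()
    for det_idx, (d, t_idx) in enumerate(zip(dists, idxs)):
        if np.isfinite(d) and t_idx not in used:  # inf => nothing in range
            matches.append((int(t_idx), det_idx))
            used.add(t_idx)
    return matches
```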
Event Visualization and Trajectory Tracking of the Load Carried by Rotary Crane
This paper concerns research on the motion of a load carried by a rotary crane. For this purpose, a laboratory crane model was designed in SolidWorks, and numerical simulations were performed using its Motion module. The laboratory model is a scaled equivalent of the real Liebherr LTM 1020 crane.
High-fidelity Event-Radiance Recovery via Transient Event Frequency
This paper proposes to use event cameras with bio-inspired silicon sensors, which are sensitive to radiance changes, to recover precise radiance values. It reveals that, under active lighting conditions, the transient frequency at which events are triggered linearly reflects the radiance value.
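Since the paper reports a linear relation between transient event frequency and radiance, a plausible recovery step is a per-pixel frequency measurement followed by a calibrated linear map, as sketched below; the calibration procedure and function names are assumptions.

```python
import numpy as np

def calibrate_frequency_to_radiance(freqs, radiances):
    """Fit the linear map radiance ~= a * frequency + b from calibration pairs."""
    a, b = np.polyfit(np.asarray(freqs, float), np.asarray(radiances, float), 1)
    return a, b

def recover_radiance(event_timestamps_us, a, b):
    """Estimate radiance at one pixel from its transient event frequency.

    event_timestamps_us: sorted event timestamps at the pixel (needs >= 2).
    Frequency is taken as events per second over the observed span.
    """
    t = np.asarray(event_timestamps_us, dtype=float)
    freq = (len(t) - 1) / ((t[-1] - t[0]) * 1e-6)
    return a * freq + b
```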
Performance of spiking neural networks on event data for embedded automotive applications
This paper aims to increase the performance of spiking neural networks for event data processing, in order to design intelligent automotive algorithms that are fast and energy-efficient.
Stereo Hybrid Event-Frame (SHEF) Cameras for 3D Perception
This research introduces a novel approach to stereo hybrid event-frame disparity estimation, leveraging the complementary strengths of event and frame-based cameras. By combining these modalities, significant improvements in depth estimation accuracy were achieved, enabling more robust and reliable 3D perception systems.
Optical flow estimation using the Fisher–Rao metric
In this paper, local histograms are normalised to produce probability distributions. Once these distributions are obtained, the optical flow is estimated using powerful methods from probability theory, in particular methods based on the Fisher–Rao metric.
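For discrete distributions obtained from normalised histograms, the Fisher–Rao geodesic distance has the closed form d(p, q) = 2 arccos(Σ √(pᵢ qᵢ)); the sketch below computes it, independently of how the paper then uses the metric for flow estimation.

```python
import numpy as np

def fisher_rao_distance(p, q, eps=1e-12):
    """Fisher-Rao geodesic distance between two discrete distributions.

    Histograms are normalised to sum to one; the square-root embedding
    maps the probability simplex onto a sphere, where the geodesic is
        d(p, q) = 2 * arccos( sum_i sqrt(p_i * q_i) ).
    """
    p = np.asarray(p, float); p = p / (p.sum() + eps)
    q = np.asarray(q, float); q = q / (q.sum() + eps)
    bc = np.clip(np.sqrt(p * q).sum(), 0.0, 1.0)  # Bhattacharyya coefficient
    return 2.0 * np.arccos(bc)

# Two local histograms that mostly agree are close in the Fisher-Rao sense.
print(fisher_rao_distance([4, 3, 1], [5, 2, 1]))
```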
Event Camera Based Real-Time Detection and Tracking of Indoor Ground Robots
This paper presents a real-time method to detect and track multiple mobile ground robots using event cameras. The method uses density-based spatial clustering of applications with noise (DBSCAN) to detect the robots and a single k-dimensional (k-d) tree to accurately keep track of them as they move in an indoor arena.
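A minimal sketch of the detection step as described, clustering a slice of event coordinates with scikit-learn's DBSCAN and returning cluster centroids as robot detections; the parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_robots(events_xy, eps_px=5.0, min_events=20):
    """Cluster accumulated event coordinates into candidate robot detections.

    events_xy: (N, 2) pixel coordinates of events from a short time slice.
    Returns the centroid of each dense cluster; noise events (label -1)
    are discarded, which is what makes DBSCAN attractive for event data.
    """
    events_xy = np.asarray(events_xy, dtype=float)
    labels = DBSCAN(eps=eps_px, min_samples=min_events).fit_predict(events_xy)
    return [events_xy[labels == k].mean(axis=0)
            for k in set(labels) if k != -1]
```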
Sparse-E2VID: A Sparse Convolutional Model for Event-Based Video Reconstruction Trained With Real Event Noise
To address the issue of dense processing, this paper introduces Sparse-E2VID, an architecture that processes data in sparse format. With Sparse-E2VID, the inference time is reduced to 55 ms (at 720 × 1280 resolution), which is 30% faster than FireNet. Additionally, Sparse-E2VID reduces the computational cost by 98% compared to FireNet+, while also improving image quality.
Power equipment vibration visualization using intelligent sensing method based on event-sensing principle
This work addresses power equipment vibration visualization using an intelligent sensing method based on the event-sensing principle. Vibration measurements can be used to evaluate the operating status of power equipment and are widely applied in equipment quality inspection and fault identification.
Temporal-Mapping Photography for Event Cameras
In this paper, for the first time, we realize the conversion of events to a dense intensity image using a stationary event camera in static scenes. Different from traditional methods that mainly rely on event integration, the proposed Event-Based Temporal Mapping Photography (EvTemMap) measures the emission time of events at each pixel.
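A rough sketch of the temporal-mapping idea, assuming scene brightness is ramped (for example by an attenuating filter) so that brighter pixels trigger their first event earlier; the inverted min-max normalisation stands in for the paper's calibrated time-to-intensity transform, which is not reproduced here.

```python
import numpy as np

def temporal_map(events, height, width):
    """Per-pixel time of the first emitted event.

    events: iterable of (t_us, x, y, polarity) tuples (layout assumed),
    recorded while scene brightness is ramped so that brighter pixels
    cross the contrast threshold, and thus fire, earlier.
    """
    t_first = np.full((height, width), np.inf)
    for t, x, y, _ in events:
        if t < t_first[y, x]:
            t_first[y, x] = t
    return t_first

def time_to_intensity(t_first):
    """Map emission times to a dense intensity image (earlier = brighter)."""
    finite = np.isfinite(t_first)
    t = np.where(finite, t_first, t_first[finite].max())  # fill silent pixels
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    return 1.0 - t
```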
Efficient Gesture Recognition on Spiking Convolutional Networks Through Sensor Fusion of Event-Based and Depth Data
This work proposes a Spiking Convolutional Neural Network, processing event- and depth data for gesture recognition. The network is simulated using the open-source neuromorphic computing framework LAVA for offline training and evaluation on an embedded system.
Event Cameras in Automotive Sensing: A Review
This article explores the applications, benefits, and challenges of event cameras in critical domains within the automotive industry. The review also highlights relevant datasets and methodologies, enabling researchers to make informed decisions tailored to their specific vehicular technology and to place their work in the broader context of event camera (EC) sensing.
Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras
This work proposes a method, called Noise2Image, that leverages the illuminance-dependent noise characteristics of event cameras to recover the static parts of a scene, which are otherwise invisible to them. The results show that Noise2Image can robustly recover intensity images solely from noise events, providing a novel approach for capturing static scenes with event cameras, without additional hardware.
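One way to picture the recovery is a per-pixel noise-event rate pushed through a calibrated, monotonic rate-to-intensity curve; the sketch below uses simple interpolation as a stand-in for the paper's learned mapping, and the event tuple layout is an assumption.

```python
import numpy as np

def noise_event_rates(events, height, width, duration_s):
    """Per-pixel noise-event rate from a recording of a static scene.

    events: iterable of (t, x, y, polarity) tuples (layout assumed).
    """
    counts = np.zeros((height, width))
    for _, x, y, _ in events:
        counts[y, x] += 1
    return counts / duration_s

def rates_to_intensity(rates, calib_rates, calib_intensities):
    """Invert an illuminance-dependent noise model via a calibrated lookup.

    calib_rates must be increasing; together with calib_intensities it
    describes a monotonic rate-to-intensity curve measured offline, our
    stand-in for the paper's learned mapping.
    """
    return np.interp(rates, calib_rates, calib_intensities)
```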
Object Detection with Spiking Neural Networks on Automotive Event Data
In this work, we propose to train spiking neural networks (SNNs) directly on data coming from event cameras to design fast and efficient automotive embedded applications. SNNs are more biologically realistic neural networks in which neurons communicate using discrete and asynchronous spikes, a naturally energy-efficient and hardware-friendly operating mode.
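To make this operating mode concrete, the sketch below steps a layer of leaky integrate-and-fire neurons, the standard SNN building block: inputs arrive as binary spikes, potentials leak, and neurons fire and reset on crossing a threshold. It is a generic illustration, not the paper's architecture.

```python
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One discrete-time step of a layer of leaky integrate-and-fire neurons.

    v:         (M,) membrane potentials
    spikes_in: (N,) binary input spikes for this time step
    weights:   (M, N) synaptic weights
    Neurons integrate weighted input spikes, leak, fire on crossing the
    threshold, and reset; between spikes no communication takes place.
    """
    v = leak * v + weights @ spikes_in
    spikes_out = (v >= threshold).astype(float)
    v = np.where(spikes_out > 0, 0.0, v)  # reset neurons that fired
    return v, spikes_out
```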
Enhancing Visual Place Recognition via Fast and Slow Adaptive Biasing in Event Cameras
This paper introduces feedback control algorithms that automatically tune the bias parameters through two interacting methods: 1) an immediate, on-the-fly fast adaptation of the refractory period, which sets the minimum interval between consecutive events, and 2) a slower adaptation that takes over if the event rate still exceeds the specified bounds after the refractory period has been changed repeatedly.
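A minimal sketch of the fast loop as described: a bang-bang controller that scales the refractory period whenever the measured event rate leaves a target band. Bounds, gain, and limits are illustrative, not the paper's values.

```python
def adapt_refractory_period(event_rate, refractory_us,
                            low=1e5, high=1e6,
                            step=1.5, min_us=1, max_us=10000):
    """One iteration of a fast bang-bang controller on the refractory period.

    If the measured event rate (events/s) leaves the [low, high] band, the
    refractory period is scaled to throttle or release the pixel array.
    """
    if event_rate > high:
        refractory_us = min(refractory_us * step, max_us)
    elif event_rate < low:
        refractory_us = max(refractory_us / step, min_us)
    return refractory_us
```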
Event-based Motion Segmentation with Spatio-Temporal Graph Cuts
We develop a method to identify independently moving objects acquired with an event-based camera, i.e., to solve the event-based motion segmentation problem. We cast the problem as an energy minimization one involving the fitting of multiple motion models. We jointly solve two subproblems, namely event cluster assignment (labeling) and motion model fitting.
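The alternation between the two subproblems can be pictured with a drastically simplified stand-in, shown below, where each motion model is a constant 2D translation and labeling is a per-event nearest-model assignment; the paper's actual motion models and spatio-temporal graph-cut labeling are not reproduced.

```python
import numpy as np

def segment_motions(event_flow, n_models=2, iters=10, seed=0):
    """Alternate event-to-model assignment and motion-model refitting.

    event_flow: (N, 2) per-event motion estimates (a toy substitute for
    the paper's event representation); each model is a 2D translation.
    """
    event_flow = np.asarray(event_flow, dtype=float)
    rng = np.random.default_rng(seed)
    models = event_flow[rng.choice(len(event_flow), n_models, replace=False)]
    for _ in range(iters):
        # 1) Labeling: assign each event to the best-fitting motion model.
        residuals = np.linalg.norm(event_flow[:, None] - models[None], axis=2)
        labels = residuals.argmin(axis=1)
        # 2) Fitting: refit each model to the events assigned to it.
        for k in range(n_models):
            if np.any(labels == k):
                models[k] = event_flow[labels == k].mean(axis=0)
    return labels, models
```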
Vista 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles
Here, we present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles. Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras, enabling the rapid generation of novel viewpoints in simulation and thereby enriching the data available for policy learning with corner cases that are difficult to capture in the physical world.
Event-Based Non-rigid Reconstruction of Low-Rank Parametrized Deformations from Contours
Visual reconstruction of fast non-rigid object deformations over time is a challenge for conventional frame-based cameras. In recent years, event cameras have gained significant attention due to their bio-inspired properties, such as high temporal resolution and high dynamic range. In this paper, we propose a novel approach for reconstructing such deformations using event measurements.
MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty
Achieving level-5 driving automation in autonomous vehicles necessitates a robust semantic visual perception system capable of parsing data from different sensors across diverse conditions. However, existing semantic perception datasets often lack important non-camera modalities typically used in autonomous vehicles, or they do not exploit such modalities to aid and improve semantic annotations in challenging conditions. To address this, this work introduces MUSES, the MUlti-SEnsor Semantic perception dataset for driving in adverse conditions under increased uncertainty.
SGE: Structured Light System Based on Gray Code with an Event Camera
This paper presents SGE, a structured light system that pairs time-multiplexed Gray-code patterns with an event camera for depth acquisition.
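The paper's event-based pipeline is not reproduced here, but any Gray-code structured-light decoder ends with the standard Gray-to-binary conversion sketched below; the per-pattern bit planes are assumed already recovered (for instance, thresholded from the event response to each projected pattern).

```python
import numpy as np

def gray_bits_to_index(bits):
    """Decode Gray-code bit planes into per-pixel stripe indices.

    bits: (B, H, W) array of 0/1 planes, most significant bit first.
    Standard Gray-to-binary: b_0 = g_0, b_i = b_{i-1} XOR g_i.
    """
    bits = np.asarray(bits, dtype=np.uint8)
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, len(bits)):
        binary[i] = binary[i - 1] ^ bits[i]
    # Pack the decoded bit planes into integer stripe indices.
    weights = 1 << np.arange(len(bits) - 1, -1, -1, dtype=np.int64)
    return np.tensordot(weights, binary.astype(np.int64), axes=1)
```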
EE3P: Event-based Estimation of Periodic Phenomena Properties
The paper introduces a novel method for measuring properties of periodic phenomena with an event camera, a device asynchronously reporting brightness changes at independently operating pixels. The approach assumes that for fast periodic phenomena, in any spatial window where it occurs, a very similar set of events is generated at the time difference corresponding to the frequency of the motion.
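A coarse 1D stand-in for the correlation idea, assuming events from one spatial window are reduced to a binned count signal whose autocorrelation peaks one period apart; the bin size and peak search are illustrative, not the paper's 2D event-window correlation.

```python
import numpy as np

def estimate_period(event_ts_us, bin_us=100, max_lag_bins=500):
    """Estimate the period of a repeating phenomenon from event timestamps.

    Bins event counts from one spatial window over time and looks for the
    autocorrelation peak: similar event bursts recur one period apart.
    """
    t = np.asarray(event_ts_us, dtype=float)
    bins = ((t - t.min()) // bin_us).astype(int)
    signal = np.bincount(bins).astype(float)
    signal -= signal.mean()
    lags = np.arange(1, min(max_lag_bins, len(signal) - 1))
    ac = [np.dot(signal[:-lag], signal[lag:]) for lag in lags]
    best = lags[int(np.argmax(ac))]
    return best * bin_us  # period in microseconds
```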
Recent Event Camera Innovations: A Survey
This paper presents a comprehensive survey of event cameras, tracing their evolution over time. It introduces the fundamental principles of event cameras, compares them with traditional frame cameras, and highlights their unique characteristics and operational differences. The survey covers various event camera models from leading manufacturers, key technological milestones, and influential research contributions.
Learning Visual Motion Segmentation using Event Surfaces
We evaluate our method on the state-of-the-art event-based motion segmentation dataset EV-IMO and compare it to a frame-based method proposed by the dataset's authors. Our ablation studies show that increasing the event slice width improves accuracy, and they examine how subsampling and edge configurations affect network performance.
Pushing the Limits of Asynchronous Graph-based Object Detection with Event Cameras
In this work, we break through the scaling limits of asynchronous graph-based models by introducing several architectural choices that allow us to scale the depth and complexity of such models while maintaining low computation. On object detection tasks, our smallest model shows up to 3.7 times lower computation, while outperforming state-of-the-art asynchronous methods by 7.4 mAP.
Real-Time Optical Flow for Vehicular Perception with Low- and High-Resolution Event Cameras
Our model outperforms feed-forward event-based architectures by a large margin. Moreover, our method does not require any reconstruction of intensity images from events, showing that training directly from raw events is possible, more efficient, and more accurate than passing through an intermediate intensity image.
Event Guided Depth Sensing
This paper proposes an event-guided depth sensing approach in which an event camera steers where depth measurements are acquired, concentrating sampling on scene regions with motion instead of sensing the scene uniformly.
Stereo Event-based Particle Tracking Velocimetry for 3D Fluid Flow Reconstruction
First, we track particles inside the two event sequences in order to estimate their 2D velocity in the two sequences of images. A stereo-matching step is then performed to retrieve their 3D positions. These intermediate outputs are incorporated into an optimization framework that also includes physically plausible regularizers, in order to retrieve the 3D velocity field.
Fast Image Reconstruction with an Event Camera
Previous works rely on hand-crafted spatial and temporal smoothing techniques to reconstruct images from events. We propose a novel neural network architecture for video reconstruction from events that is smaller (38k vs. 10M parameters) and faster (10 ms vs. 30 ms) than the state of the art, with minimal impact on performance.
TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset
We provide ground truth poses from a motion capture system at 120 Hz during the beginning and end of each sequence, which can be used for trajectory evaluation. TUM-VIE includes challenging sequences where state-of-the-art visual SLAM algorithms either fail or result in large drift.
Event-based Visual Odometry on Non-Holonomic Ground Vehicles
As demonstrated on both simulated and real data, our algorithm achieves accurate and robust estimates of the vehicle’s instantaneous rotational velocity, and thus results that are comparable to the delta rotations obtained by frame-based sensors under normal conditions. We furthermore significantly outperform the more traditional alternatives in challenging illumination scenarios.
Real-Time 6-DoF Pose Estimation by an Event-Based Camera Using Active LED Markers
This paper proposes a simple but effective event-based pose estimation system using active LED markers (ALM) for fast and accurate pose estimation. The proposed algorithm is able to operate in real time with a latency below 0.5 ms while maintaining output rates of 3 kHz.