RESEARCH PAPERS
Learning Parallax for Stereo Event-based Motion Deblurring
This work proposes a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet), to recover high-quality images directly from misaligned inputs consisting of a single blurry image and the concurrent event streams.
Neural Image Re-Exposure
This work aims at re-exposing the captured photo in post-processing, providing a more flexible way to address issues within a unified framework. Specifically, it proposes a neural-network-based image re-exposure framework.
Neuromorphic Computing and Sensing in Space
The term “neuromorphic” refers to systems that closely resemble the architecture and dynamics of biological neural networks. From brain-inspired computer chips to sensory devices mimicking human vision and olfaction, neuromorphic computing aims to achieve efficiency levels comparable to biological organisms.
Real-time 6-DoF pose estimation by an event-based camera using active LED markers
This paper proposes a simple but effective event-based pose estimation system using active LED markers (ALM) for fast and accurate pose estimation. The proposed algorithm is able to operate in real time with a latency below 0.5 ms while maintaining output rates of 3 kHz.
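A toy sketch of the core idea behind active-LED-marker identification, under the simplifying (and hypothetical) assumption that each marker blinks at a distinct frequency and each blink edge yields a single event; the decoding here is illustrative, not the paper's actual algorithm:

```python
import numpy as np

def identify_marker(timestamps, marker_freqs_hz):
    """Guess which blinking LED produced a burst of events at one image
    location, from the median inter-event interval. `timestamps` is a
    sorted 1-D array of event times in seconds; `marker_freqs_hz` maps
    marker IDs to blink frequencies. (Illustrative only.)"""
    if len(timestamps) < 3:
        return None
    # One ON and one OFF event per blink period -> interval ~ 1/(2f).
    interval = np.median(np.diff(timestamps))
    est_freq = 1.0 / (2.0 * interval)
    # Pick the closest known marker frequency.
    return min(marker_freqs_hz, key=lambda m: abs(marker_freqs_hz[m] - est_freq))

# Events spaced 0.5 ms apart correspond to a 1 kHz blink.
print(identify_marker(np.arange(0, 0.01, 1 / 2000.0), {"A": 1000.0, "B": 500.0}))
```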
The development of a Hardware-in-the-Loop test setup for event-based vision of near-space space objects
This paper proposes to develop a Hardware-in-the-Loop imaging setup that enables experiments with an event-based and a frame-based camera under simulated space conditions. The generated datasets were used to compare event-based and frame-based feature detection and tracking algorithms for visual navigation.
Collision detection for UAVs using Event Cameras
This paper explores the use of event cameras for collision detection in unmanned aerial vehicles (UAVs). Traditional cameras have been widely used in UAVs for obstacle avoidance and navigation, but they suffer from high latency and low dynamic range. Event cameras, on the other hand, capture only the changes in the scene and can operate at high speeds with low latency.
Demystifying event-based camera latency: sensor speed dependence on pixel biasing, light, and spatial activity
This report explores how various mechanisms affect the response time of event-based cameras (EBCs), which are unconventional electro-optical IR vision sensors sensitive only to changing light. Because their operation is essentially “frameless,” their response time does not depend on a frame rate or readout time, but rather on the number of activated pixels, the magnitude of background light, local fabrication defects, and the analog configuration of the pixel.
On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing
This paper presents a methodology and a software pipeline for generating event-based vision datasets from optimal landing trajectories during the approach of a target body. It constructs sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility (PANGU) at different viewpoints along a set of optimal descent trajectories obtained by varying the boundary conditions.
Time-resolved velocity profile measurement using event-based imaging
This paper presents the implementation of time-resolved velocity profile measurement using event-based vision (EBV), employing an event camera in place of a high-speed camera.
To change or not to change: Exploring the potential of event-based detectors for wavefront sensing
This paper presents the modelling and preliminary experimental results of a Shack-Hartmann tip-tilt wavefront sensor equipped with an event-based detector, demonstrating its ability to estimate spot displacement with remarkable speed and sensitivity in low-light conditions.
Widefield Diamond Quantum Sensing with Neuromorphic Vision Sensors
Multi-Event-Camera Depth Estimation and Outlier Rejection by Refocused Events Fusion
Are High-Resolution Event Cameras Really Needed?
Neuromorphic Imaging with Super-Resolution
This paper introduces the first self-supervised neuromorphic super-resolution prototype. It adapts to each input source from any low-resolution camera and estimates an optimal high-resolution counterpart at any scale, without side knowledge or prior training.
Neuromorphic Drone Detection: an Event-RGB Multimodal Approach
Interpolation-Based Event Visual Data Filtering Algorithms
This paper proposes a method for filtering event data that removes approximately 99% of noise while preserving the majority of the valid signal. Four algorithms based on a matrix of infinite impulse response (IIR) filters are proposed.
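A minimal sketch of one way an IIR-filter map can gate event noise, assuming events arrive as (x, y, t) tuples sorted by time; an event is kept only if the exponentially decayed activity in its 3x3 neighbourhood exceeds a threshold. Parameter names and values are illustrative, not the paper's:

```python
import numpy as np

def iir_filter_events(events, width, height, tau=0.01, thresh=0.3):
    """First-order IIR (exponential-decay) activity map per pixel.
    Isolated noise events rarely have active neighbours, so they are
    dropped; events in dense regions pass through."""
    activity = np.zeros((height, width))
    last_t = 0.0
    kept = []
    for x, y, t in events:
        # Decay the whole map (kept for clarity; real code decays lazily).
        activity *= np.exp(-(t - last_t) / tau)
        last_t = t
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        if activity[y0:y1, x0:x1].sum() > thresh:
            kept.append((x, y, t))
        activity[y, x] += 1.0  # register the new event
    return kept
```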
Event-Based Shape from Polarization with Spiking Neural Networks
This paper investigates event-based shape from polarization using Spiking Neural Networks (SNNs), introducing the Single-Timestep and Multi-Timestep Spiking UNets for effective and efficient surface normal estimation.
Low-Complexity Lossless Coding of Asynchronous Event Sequences for Low-Power Chip Integration
This paper introduces a groundbreaking low-complexity lossless compression method for encoding asynchronous event sequences, designed for efficient memory usage and low-power hardware integration.
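As a hedged illustration of what low-complexity lossless coding of an event stream can look like (not the paper's actual scheme), the sketch below delta-encodes timestamps and packs the deltas as variable-length integers, since small deltas dominate in event streams:

```python
def varint_encode(n):
    """Encode a non-negative integer as LEB128-style bytes."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_timestamps(ts_us):
    """Delta-encode a sorted list of microsecond timestamps.
    Most deltas fit in a single byte, giving cheap lossless compression."""
    out = bytearray()
    prev = 0
    for t in ts_us:
        out += varint_encode(t - prev)
        prev = t
    return bytes(out)

print(len(encode_timestamps([0, 5, 12, 300, 305])))  # 5 deltas -> 6 bytes
```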
Event-Based Shape From Polarization
This paper tackles the speed-resolution trade-off using event cameras. Event cameras are efficient high-speed vision sensors that asynchronously measure changes in brightness intensity with microsecond resolution.
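For readers new to the sensor model behind this description: a pixel fires an event whenever its log-intensity moves by a contrast threshold C from the last reference level. A minimal single-pixel simulation sketch (C and the sampling are illustrative):

```python
import numpy as np

def generate_events(log_I, times, C=0.2):
    """Emit (t, polarity) events for one pixel whose log-intensity
    trace `log_I` is sampled at `times`: an event fires each time the
    signal moves by the contrast threshold C from the last reference."""
    events = []
    ref = log_I[0]
    for t, v in zip(times[1:], log_I[1:]):
        while abs(v - ref) >= C:  # several thresholds may be crossed at once
            pol = 1 if v > ref else -1
            ref += pol * C        # move the reference by one threshold
            events.append((t, pol))
    return events

t = np.linspace(0, 1, 1000)
print(len(generate_events(np.sin(2 * np.pi * 5 * t), t)))  # ~100 events
```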
Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event Cameras
Evaluating Image-Based Face and Eye Tracking with Event Cameras
This paper showcases the viability of integrating conventional algorithms with event-based data, transformed into a frame format while preserving the unique benefits of event cameras.
A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework
This paper proposes a novel, computationally efficient regularizer to mitigate event collapse in the CMax framework. From a theoretical point of view, the regularizer is designed based on geometric principles of motion field deformation (measuring area rate of change along point trajectories).
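For context, a minimal sketch of the CMax objective the regularizer plugs into: warp events along a candidate motion, accumulate them into an image of warped events (IWE), and score its contrast. Variable names are illustrative; the paper's area-based regularizer would be subtracted from this score to penalize event collapse:

```python
import numpy as np

def cmax_contrast(xs, ys, ts, flow, shape):
    """Contrast (variance) of the image of warped events (IWE).
    Events at (x, y, t) are warped back to t=0 along a single
    candidate optic flow (vx, vy) in pixels/second."""
    vx, vy = flow
    wx = np.round(xs - vx * ts).astype(int)
    wy = np.round(ys - vy * ts).astype(int)
    iwe = np.zeros(shape)
    ok = (wx >= 0) & (wx < shape[1]) & (wy >= 0) & (wy < shape[0])
    np.add.at(iwe, (wy[ok], wx[ok]), 1.0)  # accumulate event counts
    return iwe.var()  # sharp (well-aligned) IWEs have high variance

# A search would maximize this over `flow`, minus a collapse regularizer.
```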
Pedestrian Detection with High-Resolution Event Camera
This paper compares two deep-learning methods of processing event data for the task of pedestrian detection: a representation in the form of video frames processed with convolutional neural networks, and asynchronous sparse convolutional neural networks.
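A minimal sketch of one common frame-based representation of event data (not necessarily the paper's exact one): events in a time window are accumulated into a two-channel, per-polarity count image that a conventional CNN can consume:

```python
import numpy as np

def events_to_frame(xs, ys, ps, shape):
    """Accumulate events into a (2, H, W) count image:
    channel 0 counts positive events, channel 1 negative ones."""
    frame = np.zeros((2, *shape))
    np.add.at(frame[0], (ys[ps > 0], xs[ps > 0]), 1.0)
    np.add.at(frame[1], (ys[ps < 0], xs[ps < 0]), 1.0)
    return frame
```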
A 5-Point Minimal Solver for Event Camera Relative Motion Estimation
This paper introduces a novel minimal 5-point solver that jointly estimates line parameters and linear camera velocity projections, which can be fused into a single, averaged linear velocity when considering multiple lines.
YCB-Ev 1.1: Event-vision dataset for 6DoF object pose estimation
This work introduces the YCB-Ev dataset, which contains synchronized RGB-D frames and event data that enables evaluating 6DoF object pose estimation algorithms using these modalities. This dataset provides ground truth 6DoF object poses for the same 21 YCB objects that were used in the YCB-Video (YCB-V) dataset, allowing for cross-dataset algorithm performance evaluation.
MoveEnet: Online High-Frequency Human Pose Estimation With an Event Camera
This paper proposes a Human Pose Estimation system, MoveEnet, that can take events as input from a camera and estimate the 2D pose of the human agent in the scene. The final system can be attached to any event camera, regardless of resolution.
EvTTC: An Event Camera Dataset for Time-to-Collision Estimation
To explore the potential of event cameras in challenging high-relative-speed cases, this paper proposes EvTTC, the first multi-sensor dataset focusing on TTC tasks under such scenarios. EvTTC consists of data collected using standard cameras and event cameras, covering various potential collision scenarios in daily driving and involving multiple collision objects.
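For reference, time-to-collision is simply range over closing speed; a tiny sketch with the range rate taken as a finite difference of two range samples (illustrative only):

```python
def time_to_collision(d_now, d_prev, dt):
    """TTC = d / (-d_dot); infinite if the object is not approaching."""
    closing_speed = (d_prev - d_now) / dt
    return float("inf") if closing_speed <= 0 else d_now / closing_speed

print(time_to_collision(9.0, 10.0, 0.1))  # closing at 10 m/s from 9 m -> 0.9 s
```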
Spatio-temporal Transformers for Action Unit Classification with Event Cameras
This paper proposes a novel spatio-temporal Vision Transformer model that uses Shifted Patch Tokenization (SPT) and locality Self-Attention (LSA) to enhance the accuracy of Action Unit classification from event streams.
Event-based vision in magneto-optic Kerr effect microscopy
This paper explores the use of event cameras as an add-on to traditional MOKE microscopy to enhance time resolution for observing magnetic domains. Event cameras improve temporal resolution to 1 µs, enabling real-time monitoring and post-processing of fast magnetic dynamics. A proof-of-concept feedback control experiment demonstrated a latency of just 25 ms, highlighting the potential for dynamic material research. Limitations of current event cameras in this application are also discussed.
Learned Event-based Visual Perception for Improved Space Object Detection
This paper presents a hybrid image- and event-based architecture for detecting dim space objects in geosynchronous orbit using dynamic vision sensing. Combining conventional and point-cloud feature extractors like PointNet, the approach enhances detection performance in high-background activity scenes. An event-based imaging simulator is also developed for model training and sensor parameter optimization, demonstrating improved recall for dim objects in challenging conditions.
Dataset collection from a SubT environment
This paper introduces a dataset from a subterranean (SubT) environment, captured with state-of-the-art sensors like RGB, RGB-D, event-based, and thermal cameras, along with 2D/3D lidars, IMUs, and UWB positioning systems. Synchronized raw data is provided in ROS message format, enabling evaluations of navigation, localization, and mapping algorithms.
EVIMO2: An Event Camera Dataset for Motion Segmentation, Optical Flow, Structure from Motion, and Visual Inertial Odometry in Indoor Scenes with Monocular or Stereo Algorithms
In this paper, a new event camera dataset, EVIMO2, is introduced that improves on the popular EVIMO dataset by providing more data, from better cameras, in more complex scenarios. As with its predecessor, EVIMO2 provides labels in the form of per-pixel ground truth depth and segmentation as well as camera and object poses.
Event-Based Motion Capture System for Online Multi-Quadrotor Localization and Tracking
This paper presents the implementation details and experimental validation of a relatively low-cost motion capture system for multi-quadrotor motion planning using an event camera. The real-time, multi-quadrotor detection and tracking tasks are performed using a deep learning network You-Only-Look-Once (YOLOv5) and a k-dimensional (k-d) tree, respectively.
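A hedged sketch of the association step described here: detections from the current frame are matched to existing tracks with a k-d tree nearest-neighbour query (scipy). The detector itself (YOLOv5) is mocked as a list of (x, y) centroids, and the distance gate is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def associate(track_positions, detections, max_dist=20.0):
    """Match each detection to the nearest existing track via a k-d tree.
    Returns a list of (detection_index, track_index or None)."""
    tree = cKDTree(track_positions)
    dists, idxs = tree.query(detections)  # nearest track per detection
    return [(i, int(j)) if d <= max_dist else (i, None)
            for i, (d, j) in enumerate(zip(dists, idxs))]

tracks = np.array([[100.0, 50.0], [300.0, 200.0]])
dets = np.array([[104.0, 52.0], [500.0, 500.0]])
print(associate(tracks, dets))  # [(0, 0), (1, None)]
```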
Event Visualization and Trajectory Tracking of the Load Carried by Rotary Crane
This paper concerns research on the motion of a load carried by a rotary crane. For this purpose, a laboratory crane model was designed in SolidWorks, and numerical simulations were performed using the Motion module. The developed laboratory model is a scaled equivalent of the real Liebherr LTM 1020 crane.
High-fidelity Event-Radiance Recovery via Transient Event Frequency
This paper proposes to use event cameras with bio-inspired silicon sensors, which are sensitive to radiance changes, to recover precise radiance values. It reveals that, under active lighting conditions, the transient frequency of event triggering linearly reflects the radiance value.
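A minimal sketch of the stated linear relation, assuming per-pixel event timestamps under active lighting; the gain and offset calibration constants are hypothetical placeholders:

```python
import numpy as np

def radiance_from_events(timestamps, gain=1.0, offset=0.0):
    """Estimate radiance at one pixel from its transient event frequency,
    using the linear relation observed in the paper; `gain`/`offset`
    would come from a calibration step."""
    if len(timestamps) < 2:
        return None
    freq = (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])
    return gain * freq + offset

print(radiance_from_events(np.linspace(0.0, 1.0, 101)))  # 100 events/s -> 100.0
```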
Performance of spiking neural networks on event data for embedded automotive applications
This paper aims to increase the performance of spiking neural networks for event data processing, in order to design intelligent automotive algorithms that are fast and energy-efficient.
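For readers unfamiliar with the building block of such networks, a minimal leaky integrate-and-fire (LIF) neuron update; the decay constant and threshold here are illustrative:

```python
import numpy as np

def lif_step(v, input_current, decay=0.9, threshold=1.0):
    """One discrete LIF update: leak, integrate, spike, reset.
    `v` is the membrane-potential vector of a layer."""
    v = decay * v + input_current      # leaky integration
    spikes = (v >= threshold).astype(float)
    v = v * (1.0 - spikes)             # reset neurons that fired
    return v, spikes

v, total = np.zeros(4), np.zeros(4)
for _ in range(10):
    v, s = lif_step(v, np.array([0.0, 0.1, 0.3, 0.6]))
    total += s
print(total)  # stronger inputs -> more spikes
```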
Stereo Hybrid Event-Frame (SHEF) Cameras for 3D Perception
This research introduces a novel approach to Stereo Hybrid Event-Frame Disparity Estimation, leveraging the unique strengths of both event and frame-based cameras. By combining these modalities, significant improvements in depth estimation accuracy were achieved, enabling more robust and reliable 3D perception systems.
Optical flow estimation using the Fisher–Rao metric
In this paper, local histograms are normalised to produce probability distributions. Once these distributions are obtained, the optical flow is estimated using powerful methods from probability theory, in particular methods based on the Fisher–Rao metric.
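For two discrete probability distributions p and q (such as the normalised local histograms described here), the Fisher–Rao distance has the closed form 2 * arccos(sum_i sqrt(p_i * q_i)); a short sketch:

```python
import numpy as np

def fisher_rao_distance(p, q):
    """Fisher-Rao distance between two discrete distributions:
    2 * arccos of the Bhattacharyya coefficient."""
    bc = np.sum(np.sqrt(p * q))
    return 2.0 * np.arccos(np.clip(bc, 0.0, 1.0))

h1 = np.array([4.0, 1.0, 1.0]); h2 = np.array([1.0, 1.0, 4.0])
p, q = h1 / h1.sum(), h2 / h2.sum()   # normalise local histograms
print(fisher_rao_distance(p, q))      # 0 iff identical, at most pi
```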
Event Camera Based Real-Time Detection and Tracking of Indoor Ground Robots
This paper presents a real-time method to detect and track multiple mobile ground robots using event cameras. The method uses density-based spatial clustering of applications with noise (DBSCAN) to detect the robots and a single k-dimensional (k − d) tree to accurately keep track of them as they move in an indoor arena.
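A minimal sketch of the detection step as described, assuming events in a short window are given as (x, y) points; DBSCAN from scikit-learn clusters them, and each cluster centroid becomes one robot detection. Parameters are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_robots(event_xy, eps=5.0, min_samples=20):
    """Cluster event coordinates; each dense cluster is one robot.
    Noise events receive label -1 and are ignored."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(event_xy)
    return [event_xy[labels == k].mean(axis=0)   # cluster centroid
            for k in set(labels) if k != -1]

rng = np.random.default_rng(0)
blob = rng.normal([50, 50], 1.5, size=(100, 2))   # one "robot"
noise = rng.uniform(0, 200, size=(10, 2))         # scattered noise events
print(detect_robots(np.vstack([blob, noise])))    # ~[50, 50]
```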
Sparse-E2VID: A Sparse Convolutional Model for Event-Based Video Reconstruction Trained With Real Event Noise
To address the issue of dense processing, this paper introduces Sparse-E2VID, an architecture that processes data in sparse format. With Sparse-E2VID, the inference time is reduced to 55 ms (at 720 × 1280 resolution), which is 30% faster than FireNet. Additionally, Sparse-E2VID reduces the computational cost by 98% compared to FireNet+, while also improving image quality.