RESEARCH PAPERS

Temporal-Mapping Photography for Event Cameras

In this paper, for the first time, we realize event-to-dense-intensity-image conversion using a stationary event camera in static scenes. Unlike traditional methods that rely mainly on event integration, the proposed Event-Based Temporal Mapping Photography (EvTemMap) measures the time at which each pixel emits an event.
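
A minimal sketch of the temporal-mapping idea, assuming a known monotonic transmittance ramp (e.g., a gradually opening attenuator) so that brighter pixels cross the event threshold earlier; the linear inverse mapping and all names below are illustrative placeholders for the calibrated camera response used in the paper.

    import numpy as np

    def temporal_map_to_intensity(first_event_time, t_start, t_end):
        """Map per-pixel first-event timestamps to a dense intensity image.

        Assumes constant scene radiance and a transmittance ramp from
        t_start to t_end: bright pixels fire early, dark pixels late.
        Pixels that never fire (np.inf) are mapped to zero.
        """
        t = np.asarray(first_event_time, dtype=np.float64)
        intensity = np.zeros_like(t)
        fired = np.isfinite(t)
        # Hypothetical linear model: earlier event -> higher intensity.
        intensity[fired] = (t_end - t[fired]) / (t_end - t_start)
        return np.clip(intensity, 0.0, 1.0)

    # Toy 2x2 "sensor"; np.inf marks a pixel that emitted no event.
    times = np.array([[0.1, 0.9], [0.5, np.inf]])
    print(temporal_map_to_intensity(times, t_start=0.0, t_end=1.0))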

Event Cameras in Automotive Sensing: A Review

This article explores the applications, benefits, and challenges of event cameras in these two critical domains within the automotive industry. The review also highlights relevant datasets and methodologies, enabling researchers to make informed decisions tailored to their specific vehicular technology and to place their work in the broader context of event-camera (EC) sensing.

Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras

This work proposes a method, called Noise2Image, to leverage the illuminance-dependent noise characteristics to recover the static parts of a scene, which are otherwise invisible to event cameras. The results show that Noise2Image can robustly recover intensity images solely from noise events, providing a novel approach for capturing static scenes in event cameras, without additional hardware.
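
A toy sketch of the noise-to-intensity idea under the abstract's premise that the per-pixel noise-event rate varies monotonically with illuminance; the log curve below is a stand-in for the paper's learned mapping, and the function and parameter names are assumptions.

    import numpy as np

    def noise_events_to_image(xs, ys, shape, duration, rate_to_intensity=None):
        """Estimate a static-scene intensity image from noise events alone.

        xs, ys: pixel coordinates of noise events; shape: (H, W) sensor size.
        The placeholder log response stands in for a learned mapping.
        """
        counts = np.zeros(shape, dtype=np.float64)
        np.add.at(counts, (ys, xs), 1.0)
        rate = counts / duration                  # noise events / s / pixel
        if rate_to_intensity is None:
            rate_to_intensity = np.log1p          # hypothetical response curve
        img = rate_to_intensity(rate)
        return img / (img.max() + 1e-9)

    # Toy usage: 100 noise events scattered over a 4x4 sensor in 1 second.
    rng = np.random.default_rng(0)
    xs, ys = rng.integers(0, 4, 100), rng.integers(0, 4, 100)
    print(noise_events_to_image(xs, ys, (4, 4), duration=1.0))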

Object Detection with Spiking Neural Networks on Automotive Event Data

In this work, we propose to train spiking neural networks (SNNs) directly on data coming from event cameras to design fast and efficient automotive embedded applications. Indeed, SNNs are more biologically realistic neural networks in which neurons communicate using discrete, asynchronous spikes, a naturally energy-efficient and hardware-friendly operating mode.
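
For readers unfamiliar with this operating mode, a leaky integrate-and-fire (LIF) neuron, the standard building block of such SNNs, can be simulated in a few lines; this is the generic textbook model, not the paper's specific architecture.

    def lif_neuron(inputs, tau=0.9, threshold=1.0):
        """Leaky integrate-and-fire neuron: the membrane potential leaks,
        integrates the input current, and emits a binary spike (then
        resets) whenever it crosses the threshold."""
        v, spikes = 0.0, []
        for i in inputs:
            v = tau * v + i          # leak + integrate
            if v >= threshold:       # fire
                spikes.append(1)
                v = 0.0              # hard reset
            else:
                spikes.append(0)
        return spikes

    print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.9, 0.6]))  # -> [0, 0, 1, 0, 0, 1]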

Enhancing Visual Place Recognition via Fast and Slow Adaptive Biasing in Event Cameras

This paper introduces feedback control algorithms that automatically tune the bias parameters through two interacting methods: 1) an immediate, on-the-fly fast adaptation of the refractory period, which sets the minimum interval between consecutive events, and 2) a slower adaptation of the remaining bias parameters, triggered if the event rate still exceeds the specified bounds after the refractory period has been changed repeatedly.
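
A schematic sketch of such a fast/slow feedback loop, with made-up bounds, step sizes, and bias names; the paper's actual controller and the camera's bias API will differ.

    def adapt_biases(event_rate, refractory_us, threshold,
                     lo=1e5, hi=1e6, fast_step=1.5, slow_step=1.1,
                     refr_min=1.0, refr_max=10000.0):
        """One iteration of a fast/slow bias feedback loop (illustrative).

        Fast path: rescale the refractory period to push the event rate
        back into [lo, hi]. Slow path: if the refractory period has
        saturated and the rate is still out of bounds, adjust the event
        threshold instead.
        """
        if lo <= event_rate <= hi:
            return refractory_us, threshold
        if event_rate > hi:
            if refractory_us < refr_max:          # fast adaptation
                refractory_us = min(refractory_us * fast_step, refr_max)
            else:                                 # slow adaptation
                threshold *= slow_step            # make pixels less sensitive
        else:
            if refractory_us > refr_min:
                refractory_us = max(refractory_us / fast_step, refr_min)
            else:
                threshold /= slow_step
        return refractory_us, threshold

    # Toy usage: a rate far above the upper bound raises the refractory period.
    print(adapt_biases(5e6, refractory_us=100.0, threshold=1.0))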

Event-based Motion Segmentation with Spatio-Temporal Graph Cuts

We develop a method to identify independently moving objects acquired with an event-based camera, i.e., to solve the event-based motion segmentation problem. We cast it as an energy minimization problem involving the fitting of multiple motion models. We jointly solve two subproblems, namely event cluster assignment (labeling) and motion model fitting.
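
A toy version of the alternating structure described here, with constant image-plane velocities standing in for the motion models and a greedy loop standing in for the spatio-temporal graph-cut energy minimization; all names are illustrative.

    import numpy as np

    def segment_events(events, velocities, n_iters=10):
        """Alternate event labeling and motion-model fitting (toy version).

        events: (N, 3) array of (x, y, t); velocities: K initial (vx, vy).
        An event joins the model whose warp to t=0 best collapses it onto
        that model's cluster center; each model is then refit by least
        squares (x ~ x0 + v*t) on its assigned events.
        """
        xy, t = events[:, :2], events[:, 2]
        V = np.asarray(velocities, dtype=np.float64)                # (K, 2)
        K = len(V)
        warped = xy[:, None, :] - t[:, None, None] * V[None, :, :]  # (N, K, 2)
        centers = warped.mean(axis=0)                               # coarse init
        for _ in range(n_iters):
            # Labeling step.
            labels = np.linalg.norm(warped - centers[None], axis=2).argmin(1)
            # Model-fitting step: per-cluster least-squares velocity.
            for k in range(K):
                m = labels == k
                if m.sum() > 1:
                    dt = t[m] - t[m].mean()
                    V[k] = (dt[:, None] * (xy[m] - xy[m].mean(0))).sum(0) \
                           / max((dt ** 2).sum(), 1e-9)
            warped = xy[:, None, :] - t[:, None, None] * V[None, :, :]
            centers = np.stack([warped[labels == k, k].mean(0)
                                if (labels == k).any() else warped[:, k].mean(0)
                                for k in range(K)])
        return labels, V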

Vista 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles

Here, we present VISTA, an open-source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles. Using high-fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras, enabling the rapid generation of novel viewpoints in simulation and thereby enriching the data available for policy learning with corner cases that are difficult to capture in the physical world.

Event-Based Non-rigid Reconstruction of Low-Rank Parametrized Deformations from Contours

Visual reconstruction of fast non-rigid object deformations over time is a challenge for conventional frame-based cameras. In recent years, event cameras have gained significant attention due to their bio-inspired properties, such as high temporal resolution and high dynamic range. In this paper, we propose a novel approach for reconstructing such deformations using event measurements.

MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty

Achieving level-5 driving automation in autonomous vehicles necessitates a robust semantic visual perception system capable of parsing data from different sensors across diverse conditions. However, existing semantic perception datasets often lack important non-camera modalities typically used in autonomous vehicles, or they do not exploit such modalities to aid and improve semantic annotations in challenging conditions. To address this, the authors introduce MUSES, the MUlti-SEnsor Semantic perception dataset for driving in adverse conditions under increased uncertainty.

SGE: Structured Light System Based on Gray Code with an Event Camera

EE3P: Event-based Estimation of Periodic Phenomena Properties

The paper introduces a novel method for measuring properties of periodic phenomena with an event camera, a device asynchronously reporting brightness changes at independently operating pixels. The approach assumes that for fast periodic phenomena, in any spatial window where it occurs, a very similar set of events is generated at the time difference corresponding to the frequency of the motion.
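
A simplified stand-in for the described correlation idea: bin the events of one spatial window into a rate signal, autocorrelate it, and read the period off the strongest non-zero lag. EE3P correlates event sets directly, so treat this as an illustration of the principle, not the paper's algorithm.

    import numpy as np

    def estimate_frequency(t, bin_s=2e-4, min_lag_s=2e-3):
        """Estimate the dominant frequency (Hz) of a periodic phenomenon
        from event timestamps t (seconds) within one spatial window, via
        autocorrelation of the binned event rate."""
        t = np.sort(np.asarray(t, dtype=np.float64))
        n_bins = max(int(np.ceil((t[-1] - t[0]) / bin_s)), 2)
        width = (t[-1] - t[0]) / n_bins
        hist, _ = np.histogram(t, bins=n_bins)
        sig = hist - hist.mean()
        ac = np.correlate(sig, sig, mode="full")[n_bins - 1:]
        min_lag = max(int(min_lag_s / width), 1)   # skip the zero-lag peak
        lag = min_lag + int(np.argmax(ac[min_lag:]))
        return 1.0 / (lag * width)

    # Toy usage: bursts of events repeating at 50 Hz.
    rng = np.random.default_rng(0)
    bursts = np.arange(0.0, 1.0, 0.02)
    ts = np.concatenate([b + rng.uniform(0, 1e-3, 20) for b in bursts])
    print(round(estimate_frequency(ts)))  # ~50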

Recent Event Camera Innovations: A Survey

This paper presents a comprehensive survey of event cameras, tracing their evolution over time. It introduces the fundamental principles of event cameras, compares them with traditional frame cameras, and highlights their unique characteristics and operational differences. The survey covers various event camera models from leading manufacturers, key technological milestones, and influential research contributions.

Learning Visual Motion Segmentation using Event Surfaces

We evaluate our method on the state-of-the-art event-based motion segmentation dataset, EV-IMO, and compare against a frame-based method proposed by its authors. Our ablation studies show that increasing the event slice width improves accuracy, and reveal how subsampling and edge configurations affect network performance.

Pushing the Limits of Asynchronous Graph-based Object Detection with Event Cameras

In this work, we break this glass ceiling by introducing several architecture choices which allow us to scale the depth and complexity of such models while maintaining low computation. On object detection tasks, our smallest model shows up to 3.7 times lower computation, while outperforming state-of-the-art asynchronous methods by 7.4 mAP.

Event Guided Depth Sensing

Our model outperforms feed-forward event-based architectures by a large margin. Moreover, our method does not require any reconstruction of intensity images from events, showing that training directly from raw events is possible, more efficient, and more accurate than passing through an intermediate intensity image.

Stereo Event-based Particle Tracking Velocimetry for 3D Fluid Flow Reconstruction

First, we track particles inside the two event sequences in order to estimate their 2D velocity in the two sequences of images. A stereo-matching step is then performed to retrieve their 3D positions. These intermediate outputs are incorporated into an optimization framework that also includes physically plausible regularizers, in order to retrieve the 3D velocity field.
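
A minimal sketch of the middle of this pipeline, assuming the 2D tracks are already matched across the two cameras: linear (DLT) triangulation recovers 3D positions, and finite differences give a raw velocity estimate that the paper instead feeds into its regularized optimization. Projection matrices and names below are illustrative.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """DLT triangulation of one particle seen in two cameras.
        P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
        A = np.stack([x1[0] * P1[2] - P1[0],
                      x1[1] * P1[2] - P1[1],
                      x2[0] * P2[2] - P2[0],
                      x2[1] * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    def track_to_3d_velocity(P1, P2, track1, track2, dt):
        """Stereo-match a 2D particle track from each camera into 3D
        positions, then take finite differences as the 3D velocity.
        track1, track2: (T, 2) matched pixel tracks; dt: time step (s)."""
        X = np.array([triangulate(P1, P2, a, b)
                      for a, b in zip(track1, track2)])
        return np.diff(X, axis=0) / dt

    # Toy usage: two rectified cameras with a unit baseline along x.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    track1 = np.array([[0.00, 0.0], [0.02, 0.0]])
    track2 = np.array([[-0.20, 0.0], [-0.18, 0.0]])
    print(track_to_3d_velocity(P1, P2, track1, track2, dt=0.01))  # ~(10, 0, 0)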

Fast Image Reconstruction with an Event Camera

Previous works rely on hand-crafted spatial and temporal smoothing techniques to reconstruct images from events. We propose a novel neural network architecture for video reconstruction from events that is smaller (38k vs. 10M parameters) and faster (10ms vs. 30ms) than the state of the art, with minimal impact on performance.

TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset

We provide ground truth poses from a motion capture system at 120 Hz during the beginning and end of each sequence, which can be used for trajectory evaluation. TUM-VIE includes challenging sequences where state-of-the-art visual SLAM algorithms either fail or result in large drift.

Event-based Visual Odometry on Non-Holonomic Ground Vehicles

As demonstrated on both simulated and real data, our algorithm achieves accurate and robust estimates of the vehicle’s instantaneous rotational velocity, and thus results that are comparable to the delta rotations obtained by frame-based sensors under normal conditions. We furthermore significantly outperform the more traditional alternatives in challenging illumination scenarios.

Table tennis ball spin estimation with an event camera

Event cameras do not suffer as much from motion blur, thanks to their high temporal resolution. Moreover, the sparse nature of the event stream solves communication bandwidth limitations many frame cameras face. To the best of our knowledge, we present the first method for table tennis spin estimation using an event camera. We use ordinal time surfaces to track the ball and then isolate the events generated by the logo on the ball.
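
A small sketch of an ordinal time surface, the representation mentioned for tracking the ball: keep the most recent event timestamp per pixel, then rank-transform the timestamps so the surface is invariant to absolute time and event rate. The windowing and normalization details here are assumptions.

    import numpy as np

    def ordinal_time_surface(xs, ys, ts, shape):
        """Build an ordinal time surface from events (xs, ys, ts) on a
        sensor of the given (H, W) shape. Pixels without events stay 0;
        pixels with events get normalized ranks (small = old, 1 = newest)."""
        surface = np.full(shape, -np.inf)
        for x, y, t in zip(xs, ys, ts):
            surface[y, x] = max(surface[y, x], t)   # latest timestamp wins
        valid = np.isfinite(surface)
        ranks = np.zeros(shape)
        # Rank-transform only pixels that received events.
        order = surface[valid].argsort().argsort() + 1
        ranks[valid] = order / valid.sum()
        return ranks

    # Toy usage on a 2x2 sensor.
    xs, ys = np.array([0, 1, 1]), np.array([0, 0, 1])
    ts = np.array([0.1, 0.3, 0.2])
    print(ordinal_time_surface(xs, ys, ts, (2, 2)))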

TimeReplayer: Unlocking the Potential of Event Cameras for Video Interpolation

The pioneering work Time Lens introduced event cameras to video interpolation by designing optical devices to collect a large amount of paired training data of high-speed frames and events, which is too costly to scale. To fully unlock the potential of event cameras, this paper proposes a novel TimeReplayer algorithm to interpolate videos captured by commodity cameras with events.

Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks

We conduct benchmark experiments for the existing methods in some representative research directions, i.e., image reconstruction, deblurring, and object recognition, to identify critical insights and problems. Finally, we discuss the remaining challenges and provide new perspectives to inspire further research.

Real-Time Face & Eye Tracking and Blink Detection using Event Cameras

This paper proposes a novel method to simultaneously detect and track faces and eyes for driver monitoring. A unique, fully convolutional recurrent neural network architecture is presented. To train this network, a synthetic event-based dataset, called Neuromorphic HELEN, is simulated with accurate bounding box annotations.

Tracking-Assisted Object Detection with Event Cameras

Lastly, we propose a spatio-temporal feature aggregation module to enrich the latent features and a consistency loss to increase the robustness of the overall pipeline. We conduct comprehensive experiments to verify that our method retains still objects while discarding truly occluded ones.

Detection and Tracking With Event Based Sensors

The MSMO algorithm uses the velocity of each event to build a scene-wide average and filter out dissimilar events. This work studies the velocity values of the events and explains why an average-based velocity filter is ultimately insufficient for lightweight MSMO detection and tracking of objects with an EBS camera.
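
The average-based velocity filter under study can be sketched as follows, assuming per-event image-plane velocities are already available (e.g., from an optical-flow estimate); the two-sigma threshold rule is illustrative.

    import numpy as np

    def velocity_filter(velocities, keep_sigma=2.0):
        """Estimate the scene's mean event velocity, then keep only events
        whose velocity is similar to it. velocities: (N, 2) per-event
        image-plane velocities; returns a boolean keep mask."""
        v = np.asarray(velocities, dtype=np.float64)
        mean = v.mean(axis=0)
        dist = np.linalg.norm(v - mean, axis=1)
        # Keep events within keep_sigma standard deviations of the mean.
        return dist <= keep_sigma * (dist.std() + 1e-9)

    # Toy usage: one outlier velocity among consistent ones.
    vels = np.array([[1.0, 0.0], [1.1, 0.1], [0.9, -0.1], [8.0, 5.0]])
    print(velocity_filter(vels))  # -> [ True  True  True False]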

Multi-Bracket High Dynamic Range Imaging with Event Cameras

In this paper, we propose the first multi-bracket HDR pipeline combining a standard camera with an event camera. Our results show better overall robustness when using events, with improvements in PSNR of up to 5 dB on synthetic data and up to 0.7 dB on real-world data.

EvUnroll: Neuromorphic Event Based Rolling Shutter Image Correction

We further propose datasets captured by a high-speed camera and an RS-Event hybrid camera system for training and testing our network. Experimental results on both public and proposed datasets show a systematic performance improvement compared to state-of-the-art methods.

A Point-image fusion network for event-based frame interpolation

Temporal information in event streams plays a critical role in event-based video frame interpolation as it provides temporal context cues complementary to images. Most previous event-based methods first transform the unstructured event data to structured data formats through voxelisation, and then employ advanced CNNs to extract temporal information.

eWand: A calibration framework for wide baseline event-based camera systems

To overcome calibration limitations, we propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern. Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.
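
One plausible way to pick out blinking LEDs in an event stream, exploiting the fact that a blinking light makes a pixel's event polarity alternate continually while a passing edge fires mostly one polarity; this detector is an assumption for illustration, not eWand's published pipeline.

    import numpy as np

    def blinking_pixel_mask(xs, ys, ps, shape, min_flips=10):
        """Return a boolean mask of pixels whose event polarity flipped at
        least min_flips times, a simple cue for blinking-LED markers.
        xs, ys: event coordinates; ps: polarities in {0, 1}; shape: (H, W)."""
        flips = np.zeros(shape, dtype=np.int64)
        last = np.full(shape, -1, dtype=np.int64)   # -1 = no event seen yet
        for x, y, p in zip(xs, ys, ps):
            if last[y, x] != -1 and last[y, x] != p:
                flips[y, x] += 1                     # polarity alternated
            last[y, x] = p
        return flips >= min_flips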

Concept Study for Dynamic Vision Sensor Based Insect Monitoring

In this concept study, the processing steps required for this are discussed and suggestions for suitable processing methods are given. On the basis of a small dataset, a clustering and filtering-based labeling approach is proposed, which is a promising option for the preparation of larger DVS insect monitoring datasets.
