RESEARCH PAPERS
Table tennis ball spin estimation with an event camera
Event cameras do not suffer as much from motion blur, thanks to their high temporal resolution. Moreover, the sparse nature of the event stream solves communication bandwidth limitations many frame cameras face. To the best of our knowledge, we present the first method for table tennis spin estimation using an event camera. We use ordinal time surfaces to track the ball and then isolate the events generated by the logo on the ball.
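The full spin pipeline is not reproduced here, but the underlying representation is easy to sketch: a time surface stores, per pixel, the timestamp of the most recent event, and an ordinal variant replaces raw timestamps with their rank, making the surface invariant to the absolute event rate. A minimal sketch (the function name and event layout are our assumptions, not the paper's code):

```python
import numpy as np

def ordinal_time_surface(events, width, height):
    """Build a time surface, then replace timestamps by their rank.

    events: iterable of (x, y, t) tuples, assumed sorted by time.
    Returns an array in [0, 1], where larger means more recent.
    """
    ts = np.full((height, width), -np.inf)
    for x, y, t in events:
        ts[y, x] = t  # keep only the most recent timestamp per pixel

    # Ordinal step: rank active pixels by recency instead of keeping
    # raw timestamps, which makes the surface event-rate invariant.
    active = np.isfinite(ts)
    ranks = np.zeros_like(ts)
    stamps = ts[active]
    if stamps.size:
        order = np.argsort(stamps)                # oldest -> newest
        r = np.empty(stamps.size)
        r[order] = np.arange(1, stamps.size + 1)  # ranks 1..N
        ranks[active] = r / stamps.size           # normalise to (0, 1]
    return ranks

# toy usage: three events, two on the same pixel
surface = ordinal_time_surface([(3, 2, 0.01), (4, 2, 0.02), (3, 2, 0.03)],
                               width=8, height=6)
```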
TimeReplayer: Unlocking the Potential of Event Cameras for Video Interpolation
The pioneering work Time Lens introduced event cameras to video interpolation by designing optical devices to collect a large amount of paired training data of high-speed frames and events, which is too costly to scale. To fully unlock the potential of event cameras, this paper proposes a novel TimeReplayer algorithm to interpolate videos captured by commodity cameras with events.
Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks
We conduct benchmark experiments for the existing methods in some representative research directions, i.e., image reconstruction, deblurring, and object recognition, to identify some critical insights and problems. Finally, we discuss the open challenges and provide new perspectives to inspire further research.
Real-Time Face & Eye Tracking and Blink Detection using Event Cameras
This paper proposes a novel method to simultaneously detect and track faces and eyes for driver monitoring. A unique, fully convolutional recurrent neural network architecture is presented. To train this network, a synthetic event-based dataset is simulated with accurate bounding box annotations, called Neuromorphic HELEN.
Tracking-Assisted Object Detection with Event Cameras
Lastly, we propose a spatio-temporal feature aggregation module to enrich the latent features and a consistency loss to increase the robustness of the overall pipeline. We conduct comprehensive experiments verifying that our method retains still objects while discarding genuinely occluded ones.
Detection and Tracking With Event Based Sensors
The MSMO algorithm uses the velocities of each event to compute a scene-wide average and filter out dissimilar events. This work presents a study of the event velocity values and explains why an average-based velocity filter is ultimately insufficient for lightweight MSMO detection and tracking of objects with an EBS camera.
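The filter the paper analyses is simple to state: keep events whose estimated speed is close to the scene-wide average. A sketch, assuming per-event velocity estimates are already available (the names and tolerance parameter are ours):

```python
import numpy as np

def velocity_average_filter(velocities, tolerance=0.5):
    """Keep events whose speed is close to the scene-average speed.

    velocities: (N, 2) array of per-event (vx, vy) estimates.
    tolerance: allowed relative deviation from the mean speed.
    Returns a boolean mask of events to keep.
    """
    speeds = np.linalg.norm(velocities, axis=1)
    mean_speed = speeds.mean()
    # An event passes if its speed lies within `tolerance` of the mean.
    # As the paper argues, a single global average cannot separate
    # several objects moving at different speeds, hence its limits.
    return np.abs(speeds - mean_speed) <= tolerance * mean_speed
```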
Study and design of an energy efficient perception module combining event-based image sensors and spiking neural network with 3D integration technologies
The work explores bio-inspired approaches for tasks where frame-based methods already succeed but show robustness flaws, since classical frame-based imagers cannot intrinsically provide both high speed and high dynamic range.
Inceptive Event Time-Surfaces for Object Classification using Neuromorphic Cameras
This paper presents a novel fusion of low-level dimensionality-reduction approaches into an effective approach for high-level object classification in neuromorphic camera data, called Inceptive Event Time-Surfaces (IETS).
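As a rough illustration of the "inceptive event" idea, one can keep only the first event of each per-pixel burst and drop the trailing ones. A sketch under that reading (the gap threshold and event layout are our assumptions, not the paper's definition):

```python
def inceptive_events(events, gap=0.005):
    """Keep only 'inceptive' events: the first event of each burst.

    events: list of (x, y, t, p) tuples sorted by t.
    gap: an event counts as inceptive if no same-pixel, same-polarity
         event occurred within `gap` seconds before it.
    """
    last_seen = {}   # (x, y, p) -> timestamp of the previous event
    kept = []
    for x, y, t, p in events:
        key = (x, y, p)
        if key not in last_seen or t - last_seen[key] > gap:
            kept.append((x, y, t, p))   # starts a new burst
        last_seen[key] = t
    return kept
```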
Flow Cytometry With Event-Based Vision and Spiking Neuromorphic Hardware
We demonstrate, for the first time, a spiking neural network running on neuromorphic hardware for a fully event-based flow cytometry pipeline with 98.45% testing accuracy. We open up new possibilities for online and on-chip learning in flow cytometry applications.
Multi-Bracket High Dynamic Range Imaging with Event Cameras
In this paper, we propose the first multi-bracket HDR pipeline combining a standard camera with an event camera. Our results show better overall robustness when using events, with PSNR improvements of up to 5 dB on synthetic data and up to 0.7 dB on real-world data.
EvUnroll: Neuromorphic Event Based Rolling Shutter Image Correction
We further propose datasets captured by a high-speed camera and an RS-Event hybrid camera system for training and testing our network. Experimental results on both public and proposed datasets show a systematic performance improvement compared to state-of-the-art methods.
Smart Visual Beacons with Asynchronous Optical Communications using Event Cameras
The proposed method achieves up to 4 kbps in an indoor environment and lossless transmission over a distance of 100 meters, at a transmission rate of 500 bps, in full sunlight, demonstrating the potential of the technology in an outdoor environment.
Exploring space situational awareness using neuromorphic event-based cameras
Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging.
Event-based Image Reconstruction Linear Inverse Problem with Deep Regularization using Optical Flow
In this work we show, for the first time, how tackling the combined problem of motion and brightness estimation leads us to formulate event-based image reconstruction as a linear inverse problem that can be solved without training an image reconstruction RNN.
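The structure the abstract refers to can be made concrete with a generic regularised linear solve. The paper pairs the linear event-generation and flow constraints with a deep regulariser; plain Tikhonov regularisation below is only a stand-in to show the linear-inverse form:

```python
import numpy as np

def solve_linear_inverse(A, b, lam=0.1):
    """Closed-form solution of min_x ||A x - b||^2 + lam ||x||^2.

    A encodes the linear measurement model, b the observations.
    A learned regulariser would replace the lam * I term.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```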
A Point-image fusion network for event-based frame interpolation
Temporal information in event streams plays a critical role in event-based video frame interpolation as it provides temporal context cues complementary to images. Most previous event-based methods first transform the unstructured event data to structured data formats through voxelisation, and then employ advanced CNNs to extract temporal information.
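Voxelisation, as referenced here, usually means accumulating polarities into a fixed number of temporal bins, bilinearly splatting each event across its two nearest bins. A common sketch of that conversion (not the paper's exact code):

```python
import numpy as np

def events_to_voxel_grid(xs, ys, ts, ps, num_bins, width, height):
    """Accumulate event polarities into a (num_bins, H, W) voxel grid.

    xs, ys: integer pixel coordinates; ts: timestamps; ps: polarities
    in {-1, +1}; all 1-D numpy arrays of equal length, sorted by ts.
    """
    grid = np.zeros((num_bins, height, width))
    # normalised time of each event in [0, num_bins - 1]
    t = (ts - ts[0]) / max(ts[-1] - ts[0], 1e-9) * (num_bins - 1)
    lo = np.floor(t).astype(int)
    frac = t - lo
    hi = np.minimum(lo + 1, num_bins - 1)
    np.add.at(grid, (lo, ys, xs), ps * (1.0 - frac))  # nearer bin
    np.add.at(grid, (hi, ys, xs), ps * frac)          # farther bin
    return grid
```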
eWand: A calibration framework for wide baseline event-based camera systems
To overcome calibration limitations, we propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern. Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.
Event-Driven Imaging in Turbid Media: A Confluence of Optoelectronics and Neuromorphic Computation
In this paper, a new optical-computational method is introduced to unveil images of targets whose visibility is severely obscured by light scattering in dense, turbid media.
Concept Study for Dynamic Vision Sensor Based Insect Monitoring
In this concept study, the processing steps required for this are discussed and suggestions for suitable processing methods are given. On the basis of a small dataset, a clustering and filtering-based labeling approach is proposed, which is a promising option for the preparation of larger DVS insect monitoring datasets.
EventLFM: Event Camera integrated Fourier Light Field Microscopy for Ultrafast 3D imaging
We introduce EventLFM, a straightforward and cost-effective system that overcomes these challenges by integrating an event camera with Fourier light field microscopy (LFM), a state-of-the-art single-shot 3D wide-field imaging technique. We further develop a simple and robust event-driven LFM reconstruction algorithm that can reliably reconstruct 3D dynamics from the unique spatiotemporal measurements captured by EventLFM.
Memory-Efficient Fixed-Length Representation of Synchronous Event Frames for Very-Low-Power Chip Integration
The experimental evaluation on a public dataset demonstrates that the proposed fixed-length coding framework provides at least twice the compression ratio of the raw EF representation, and performance close to variable-length video coding standards and state-of-the-art variable-length image codecs, for lossless compression of ternary EFs generated at frequencies below 1 kHz.
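The paper's codec is not reproduced here, but a classic fixed-length scheme for ternary symbols conveys the idea: five trits fit in one byte since 3^5 = 243 <= 256, i.e. 1.6 bits per symbol versus 8 for a byte-per-pixel raw frame. A sketch (this packing choice is our illustration, not the paper's codec):

```python
import numpy as np

def pack_ternary(frame):
    """Pack a ternary event frame {-1, 0, +1} at 5 symbols per byte."""
    trits = (frame.ravel() + 1).astype(np.uint8)        # map to {0,1,2}
    pad = (-trits.size) % 5                             # pad to multiple of 5
    trits = np.concatenate([trits, np.zeros(pad, np.uint8)])
    groups = trits.reshape(-1, 5).astype(np.uint16)
    weights = np.array([81, 27, 9, 3, 1], dtype=np.uint16)  # base-3 digits
    return (groups * weights).sum(axis=1).astype(np.uint8)  # max 242 < 256
```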
Event-based Background-Oriented Schlieren
This paper presents a novel technique for perceiving air convection using events and frames, providing the first theoretical analysis that connects event data and schlieren. We formulate the task as a variational optimization problem, combining the linearized event generation model with a physically-motivated parameterization that estimates the temporal derivative of the air density.
On-orbit optical detection of lethal non-trackable debris
Resident space objects in the size range of 0.1 mm–3 cm are not currently trackable but have enough kinetic energy to have lethal consequences for spacecraft. The assessment of small orbital debris, potentially posing a risk to most space missions, requires the combination of a large sensor area and large time coverage.
G2N2: Lightweight event stream classification with GRU graph neural networks
We benchmark our model against other event-graph and convolutional neural network based approaches on the challenging DVS-Lip dataset (spoken word classification). We find that our method not only outperforms state-of-the-art approaches at similar model sizes, but also reduces the number of computation operations per second by 81% relative to the convolutional models.
Live Demonstration: Integrating Event Based Hand Tracking Into TouchFree Interactions
To explore the potential of event cameras, Ultraleap have developed a prototype stereo camera using two Prophesee IMX636ES sensors. To go from event data to hand positions, the events are first aggregated into event frames, which are then consumed by a hand-tracking model that outputs 28 joint positions for each hand with respect to the camera.
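Aggregating events into frames, as described, is typically a per-window count per pixel. A minimal sketch of such a front end (the windowing scheme and names are our assumptions, not Ultraleap's pipeline):

```python
import numpy as np

def aggregate_event_frames(xs, ys, ts, window, width, height):
    """Slice an event stream into fixed-duration event frames by
    counting events per pixel within each time window.

    xs, ys: integer pixel coordinates; ts: sorted timestamps;
    window: frame duration, in the same unit as ts.
    """
    bins = ((ts - ts[0]) // window).astype(int)
    frames = []
    for i in range(bins[-1] + 1):
        frame = np.zeros((height, width), dtype=np.uint16)
        sel = bins == i
        np.add.at(frame, (ys[sel], xs[sel]), 1)  # per-pixel event count
        frames.append(frame)
    return frames
```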
X-Maps: Direct Depth Lookup for Event-Based Structured Light Systems
We present a new approach to direct depth estimation for Spatial Augmented Reality (SAR) applications using event cameras. These dynamic vision sensors are a great fit to be paired with laser projectors for depth estimation in a structured light approach. Our key contributions involve a conversion of the projector time map into a rectified X-map, capturing x-axis correspondences for incoming events and enabling direct disparity lookup without any additional search.
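The construction of the X-map is the paper's contribution; the payoff, a direct lookup instead of a correspondence search, can be sketched assuming such a map is already given (the array layout and timing model are our assumptions):

```python
def disparity_from_xmap(x_map, event_x, event_y, event_t, t_frame):
    """Direct disparity lookup in the spirit of an X-map.

    x_map: (H, T) array mapping (rectified row, time bin within one
           projector sweep) -> projector x coordinate; assumed given.
    t_frame: duration of one projector sweep.
    """
    n_bins = x_map.shape[1]
    t_bin = int((event_t % t_frame) / t_frame * n_bins)
    x_proj = x_map[event_y, t_bin]
    return x_proj - event_x   # disparity, with no search required
```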
Monocular Event-Based Vision for Obstacle Avoidance with Quadrotor
We present the first events-only static-obstacle avoidance method for a quadrotor with just an onboard, monocular event camera. By leveraging depth prediction as an intermediate step in our learning framework, we can pre-train a reactive obstacle avoidance events-to-control policy in simulation, and then fine-tune the perception component with limited events-depth real-world data to achieve dodging in indoor and outdoor settings.
Event-Based Motion Magnification
In this work, we propose a dual-camera system consisting of an event camera and a conventional RGB camera for video motion magnification, providing temporally-dense information from the event stream and spatially-dense data from the RGB images. This innovative combination enables a broad and cost-effective amplification of high-frequency motions.
Cell detection with convolutional spiking neural network for neuromorphic cytometry
Our previous work demonstrated the early development of neuromorphic imaging cytometry, evaluating its feasibility in addressing the limitations of conventional frame-based imaging systems in data redundancy, fluorescence sensitivity, and throughput. Herein, we adopted a convolutional spiking neural network (SNN) combined with the YOLOv3 model (SNN-YOLO) to perform cell classification and detection on label-free samples under neuromorphic vision.
Event-Based RGB Sensing With Structured Light
We introduce a method to detect full RGB events using a monochrome EC aided by a structured light projector. We combine the benefits of ECs and projection-based techniques and allow depth and color detection of static or moving objects with a commercial TI LightCrafter 4500 projector and a monocular monochrome EC, paving the way for frameless RGB-D sensing applications.
Event-Based Video Frame Interpolation With Cross-Modal Asymmetric Bidirectional Motion Fields
We propose a novel event-based VFI framework with cross-modal asymmetric bidirectional motion field estimation. Our EIF-BiOFNet utilizes each valuable characteristic of the events and images for direct estimation of inter-frame motion fields without any approximation methods. We develop an interactive attention-based frame synthesis network to efficiently leverage the complementary warping-based and synthesis-based features.
Neuromorphic Event-Based Facial Expression Recognition
Recently, event cameras have shown large applicability in several computer vision fields especially concerning tasks that require high temporal resolution. In this work, we investigate the usage of such kind of data for emotion recognition by presenting NEFER, a dataset for Neuromorphic Event-based Facial Expression Recognition.
Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion
In this work, we propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras under various real-world conditions, especially from sparse and noisy events generated under non-uniform motion.
Event Stream-based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline
In this paper, we propose a novel hierarchical knowledge distillation framework that can fully utilize multi-modal / multi-view information during training to facilitate knowledge transfer, enabling us to achieve high-speed and low-latency visual tracking during testing by using only event signals.
Event-based micro vibration measurement using phase correlation template matching with event filter optimization
This study proposes an event filter-based phase correlation template matching (EF-PCTM) method, including its optimal design procedure, to measure micro-vibrations using an event camera. The event filter is designed using an infinite impulse response filter and a genetic algorithm, and a cost function is defined to improve the performance of EF-PCTM.
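The IIR filter design and its genetic-algorithm optimisation are the paper's contribution; the phase-correlation step itself is standard and can be sketched as follows (a generic implementation on event frames, not the paper's):

```python
import numpy as np

def phase_correlation_shift(template, frame):
    """Estimate the (dy, dx) translation between two event frames:
    the normalised cross-power spectrum of the pair peaks at the
    shift that aligns them."""
    F1, F2 = np.fft.fft2(template), np.fft.fft2(frame)
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:  # wrap large shifts into negative offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```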
Faces in Event Streams (FES): An Annotated Face Dataset for Event Cameras
The Faces in Event Streams dataset contains 689 minutes of recorded event streams and 1.6 million faces annotated with bounding boxes and five-point facial landmarks. This paper presents the dataset and corresponding models for detecting faces and facial landmarks directly from event stream data.
An Asynchronous Linear Filter Architecture for Hybrid Event-Frame Cameras
In this paper, we present an asynchronous linear filter architecture, fusing event and frame camera data, for HDR video reconstruction and spatial convolution that exploits the advantages of both sensor modalities. The key idea is the introduction of a state that directly encodes the integrated or convolved image information and that is updated asynchronously as each event or each frame arrives from the camera.
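In the spirit of that idea, a per-pixel complementary filter integrates event contrast steps immediately and decays the state toward the most recent frame between updates. A sketch only (the gain and contrast values are our assumptions, and this is not the paper's exact filter):

```python
import numpy as np

class AsyncComplementaryFilter:
    """Per-pixel state fusing events and frames: events add log-
    intensity steps instantly; the state decays toward the latest
    frame at rate `alpha`."""

    def __init__(self, height, width, contrast=0.1, alpha=50.0):
        self.state = np.zeros((height, width))   # log-intensity estimate
        self.frame = np.zeros((height, width))   # latest frame (log)
        self.last_t = np.zeros((height, width))  # last update per pixel
        self.c, self.alpha = contrast, alpha

    def _decay(self, ys, xs, t):
        # exponential decay of the state toward the frame since last update
        dt = t - self.last_t[ys, xs]
        w = np.exp(-self.alpha * dt)
        self.state[ys, xs] = (w * self.state[ys, xs]
                              + (1 - w) * self.frame[ys, xs])
        self.last_t[ys, xs] = t

    def on_event(self, x, y, t, polarity):
        self._decay(y, x, t)
        self.state[y, x] += self.c * polarity  # asynchronous step update

    def on_frame(self, log_frame, t):
        ys, xs = np.indices(self.frame.shape).reshape(2, -1)
        self._decay(ys, xs, t)     # bring all pixels up to time t
        self.frame = log_frame     # then swap in the new frame
```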
Stereo Event-based Visual-Inertial Odometry
We show that our proposed pipeline provides improved accuracy over the result of the state-of-the-art visual odometry for stereo event-based cameras, while running in real-time on a standard CPU (low-resolution cameras). To the best of our knowledge, this is the first published visual-inertial odometry for stereo event-based cameras.
Unsupervised Video Deraining with An Event Camera
In this paper, we propose a novel approach by integrating a bio-inspired event camera into the unsupervised video deraining pipeline, which enables us to capture high temporal resolution information and model complex rain characteristics. Specifically, we first design an end-to-end learning-based network consisting of two modules, the asymmetric separation module and the cross-modal fusion module.
Neuromorphic cytometry: implementation on cell counting and size estimation
Our work has achieved highly consistent outputs with a widely adopted flow cytometer (CytoFLEX) in detecting microparticles. Moreover, the capacity of an event-based photosensor in registering fluorescent signals was evaluated by recording 6 µm Fluorescein isothiocyanate-marked particles in different lighting conditions, revealing superior performance compared to a standard photosensor.
SEpi-3D: soft epipolar 3D shape measurement with an event camera for multipath elimination
In this paper, we propose the soft epipolar 3D (SEpi-3D) method to eliminate multipath in temporal space with an event camera and a laser projector. Specifically, we align the projector and event camera rows onto the same epipolar plane with stereo rectification, and we capture event flow synchronized with the projector frame to construct a mapping between event timestamps and projector pixels.
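Under a uniform-scan assumption, the timestamp-to-projector-pixel mapping the abstract mentions reduces to a phase computation within the projector sweep. A sketch (the timing model is our assumption, not the paper's calibration):

```python
def projector_column(event_t, sweep_start, sweep_period, n_columns):
    """Map an event timestamp to the projector column lit at that
    instant, assuming columns are scanned at a uniform rate within
    each projector frame."""
    phase = ((event_t - sweep_start) % sweep_period) / sweep_period
    return int(phase * n_columns)
```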