RESEARCH PAPERS

Concept Study for Dynamic Vision Sensor Based Insect Monitoring

In this concept study, the processing steps required for DVS-based insect monitoring are discussed, and suitable processing methods are suggested. On the basis of a small dataset, a clustering- and filtering-based labeling approach is proposed as a promising option for preparing larger DVS insect monitoring datasets.
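A concrete way to picture such clustering- and filtering-based labeling (a minimal sketch, not the paper's pipeline; the event layout, DBSCAN parameters, and size threshold are assumptions) is to cluster events in space-time and discard clusters too small to be insect tracks:

```python
# Illustrative sketch only: cluster DVS events in (x, y, t) to propose
# insect-track labels, then filter out noise and tiny clusters.
import numpy as np
from sklearn.cluster import DBSCAN

def propose_labels(events, t_scale=1e-3, eps=5.0, min_samples=20, min_cluster=50):
    """events: (N, 3) array of (x, y, t), t in microseconds (assumed layout)."""
    pts = np.column_stack([events[:, 0],
                           events[:, 1],
                           events[:, 2] * t_scale])  # bring time to pixel-like units
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    # Filtering step: drop DBSCAN noise (-1) and clusters too small to be a track.
    keep = {c for c in set(labels) - {-1} if np.sum(labels == c) >= min_cluster}
    return np.where(np.isin(labels, list(keep)), labels, -1)
```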

EventLFM: Event Camera integrated Fourier Light Field Microscopy for Ultrafast 3D imaging

We introduce EventLFM, a straightforward and cost-effective system that overcomes these challenges by integrating an event camera with Fourier light field microscopy (LFM), a state-of-the-art single-shot 3D wide-field imaging technique. We further develop a simple and robust event-driven LFM reconstruction algorithm that can reliably reconstruct 3D dynamics from the unique spatiotemporal measurements captured by EventLFM.

Memory-Efficient Fixed-Length Representation of Synchronous Event Frames for Very-Low-Power Chip Integration

The experimental evaluation on a public dataset demonstrates that the proposed fixed-length coding framework provides at least twice the compression ratio of the raw EF representation. Its performance is close to that of variable-length video coding standards and variable-length state-of-the-art image codecs for the lossless compression of ternary EFs generated at frequencies below 1 kHz.
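For intuition, a fixed-length representation of a ternary EF can be as simple as base-3 packing: five pixels from {-1, 0, +1} fit in one byte (3^5 = 243 <= 256), so every frame compresses to the same byte count. This is only an illustrative sketch, not the paper's codec:

```python
# Toy fixed-length coder for ternary event frames (not the paper's method).
import numpy as np

def pack_ternary(frame):
    """frame: int array with values in {-1, 0, 1}; returns fixed-length bytes."""
    digits = (frame.ravel() + 1).astype(np.uint8)          # map to {0, 1, 2}
    digits = np.pad(digits, (0, (-len(digits)) % 5))       # pad to a multiple of 5
    groups = digits.reshape(-1, 5)
    weights = np.array([81, 27, 9, 3, 1])                  # 3**4 .. 3**0
    return (groups * weights).sum(axis=1).astype(np.uint8)  # max value 242 < 256

def unpack_ternary(codes, size):
    """Inverse of pack_ternary; size is the original pixel count."""
    digits = np.stack([(codes // w) % 3 for w in (81, 27, 9, 3, 1)], axis=1)
    return digits.ravel()[:size].astype(np.int8) - 1
```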

Event-based Background-Oriented Schlieren

This paper presents a novel technique for perceiving air convection using events and frames, providing the first theoretical analysis that connects event data and schlieren. We formulate the task as a variational optimization problem, combining the linearized event generation model with a physically motivated parameterization that estimates the temporal derivative of the air density.
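The linearized event generation model referred to above is the standard one from the event-vision literature; in rough notation (generic symbols, not necessarily the paper's):

```latex
% An event of polarity p at pixel x fires when the log-intensity change
% reaches the contrast threshold C:
\Delta L(\mathbf{x}, t) = L(\mathbf{x}, t) - L(\mathbf{x}, t - \Delta t) = p\,C
% Linearization via brightness constancy ties that change to image motion
% (optic flow v), which the variational problem then inverts:
\Delta L(\mathbf{x}, t) \approx -\nabla L(\mathbf{x}, t) \cdot \mathbf{v}(\mathbf{x})\,\Delta t
```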

On-orbit optical detection of lethal non-trackable debris

Resident space objects in the 0.1 mm–3 cm size range are not currently trackable, yet they carry enough kinetic energy to be lethal to spacecraft. Assessing small orbital debris, which potentially poses a risk to most space missions, requires combining a large sensor area with long time coverage.

G2N2: Lightweight event stream classification with GRU graph neural networks

We benchmark our model against other event-graph and convolutional neural network based approaches on the challenging DVS-Lip dataset (spoken-word classification). Not only does our method outperform state-of-the-art approaches of similar model size, but, relative to the convolutional models, it also reduces the number of calculation operations per second by 81%.

Live Demonstration: Integrating Event Based Hand Tracking Into TouchFree Interactions

To explore the potential of event cameras, Ultraleap has developed a prototype stereo camera using two Prophesee IMX636ES sensors. To go from event data to hand positions, the events are aggregated into event frames, which are then consumed by a hand-tracking model that outputs 28 joint positions for each hand with respect to the camera.
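The aggregation step can be pictured as simple polarity accumulation over a time window (a hedged sketch; the binning interval and polarity handling are assumptions, not Ultraleap's implementation):

```python
# Accumulate signed event polarities into a dense event frame.
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """events: structured array with fields x, y, t, p (p in {-1, +1})."""
    m = (events["t"] >= t_start) & (events["t"] < t_end)
    frame = np.zeros((height, width), dtype=np.int16)
    # Sum polarities per pixel over the time window.
    np.add.at(frame, (events["y"][m], events["x"][m]), events["p"][m])
    return frame
```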

X-Maps: Direct Depth Lookup for Event-Based Structured Light Systems

We present a new approach to direct depth estimation for Spatial Augmented Reality (SAR) applications using event cameras. These dynamic vision sensors are a great fit for pairing with laser projectors for depth estimation in a structured light approach. Our key contribution is the conversion of the projector time map into a rectified X-map that captures x-axis correspondences for incoming events, enabling direct disparity lookup without any additional search.
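Conceptually, the lookup works like this (an illustrative sketch with assumed names such as x_map, scan_period, and n_bins, not the authors' code): the event timestamp, taken modulo the projector scan period, indexes a column of the rectified X-map, so disparity is a table read instead of a search.

```python
# Direct disparity lookup from a rectified X-map (conceptual sketch).
def depth_from_event(x_map, x_c, y, t, t0, scan_period, n_bins,
                     focal_px, baseline_m):
    # Normalize the event timestamp into a bin of the projector scan period.
    t_bin = int(((t - t0) % scan_period) / scan_period * n_bins)
    x_p = x_map[y, t_bin]                      # projector column from the X-map
    disparity = x_c - x_p
    return focal_px * baseline_m / disparity   # pinhole stereo depth
```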

Monocular Event-Based Vision for Obstacle Avoidance with Quadrotor

We present the first events-only static-obstacle avoidance method for a quadrotor with just an onboard, monocular event camera. By leveraging depth prediction as an intermediate step in our learning framework, we can pre-train a reactive obstacle avoidance events-to-control policy in simulation, and then fine-tune the perception component with limited events-depth real-world data to achieve dodging in indoor and outdoor settings.

Event-Based Motion Magnification

In this work, we propose a dual-camera system consisting of an event camera and a conventional RGB camera for video motion magnification, providing temporally-dense information from the event stream and spatially-dense data from the RGB images. This innovative combination enables a broad and cost-effective amplification of high-frequency motions.

Cell detection with convolutional spiking neural network for neuromorphic cytometry

Our previous work demonstrated the early development of neuromorphic imaging cytometry, evaluating its feasibility in overcoming the limitations of conventional frame-based imaging systems in data redundancy, fluorescence sensitivity, and throughput. Herein, we adopted a convolutional spiking neural network (SNN) combined with the YOLOv3 model (SNN-YOLO) to perform cell classification and detection on label-free samples under neuromorphic vision.

Event-Based RGB Sensing With Structured Light

We introduce a method to detect full RGB events using a monochrome event camera (EC) aided by a structured light projector. Combining the benefits of ECs and projection-based techniques, our approach allows depth and color detection of static or moving objects with a commercial TI LightCrafter 4500 projector and a monocular monochrome EC, paving the way for frameless RGB-D sensing applications.

Event-Based Video Frame Interpolation With Cross-Modal Asymmetric Bidirectional Motion Fields

We propose a novel event-based VFI framework with cross-modal asymmetric bidirectional motion field estimation. Our EIF-BiOFNet exploits the distinct characteristics of events and images to estimate inter-frame motion fields directly, without any approximation methods. We also develop an interactive attention-based frame synthesis network that efficiently leverages complementary warping-based and synthesis-based features.

Neuromorphic Event-Based Facial Expression Recognition

Event cameras have recently shown broad applicability in several computer vision fields, especially in tasks that require high temporal resolution. In this work, we investigate the use of such data for emotion recognition by presenting NEFER, a dataset for Neuromorphic Event-based Facial Expression Recognition.

Faces in Event Streams (FES): An Annotated Face Dataset for Event Cameras

The Faces in Event Streams dataset contains 689 minutes of recorded event streams and 1.6 million annotated faces with bounding boxes and five-point facial landmarks. This paper presents the dataset and corresponding models for detecting faces and facial landmarks directly from event stream data.

An Asynchronous Linear Filter Architecture for Hybrid Event-Frame Cameras

In this paper, we present an asynchronous linear filter architecture, fusing event and frame camera data, for HDR video reconstruction and spatial convolution that exploits the advantages of both sensor modalities. The key idea is the introduction of a state that directly encodes the integrated or convolved image information and that is updated asynchronously as each event or each frame arrives from the camera.
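A per-pixel toy version of such an asynchronously updated state (a complementary-filter-style sketch under assumed constants, not the paper's exact filter) might look like:

```python
# Per-pixel log-intensity state: decays toward the latest frame measurement
# and is bumped by each incoming event, all in continuous time.
import math

class AsyncPixelState:
    def __init__(self, contrast=0.1, gain=2.0):
        self.L = 0.0         # log-intensity state
        self.L_frame = 0.0   # latest frame measurement (log scale)
        self.t = 0.0
        self.contrast = contrast  # assumed per-event log-intensity step
        self.gain = gain          # assumed crossover rate toward frame data

    def _decay(self, t):
        alpha = math.exp(-self.gain * (t - self.t))
        self.L = alpha * self.L + (1.0 - alpha) * self.L_frame
        self.t = t

    def on_event(self, t, polarity):       # polarity in {-1, +1}
        self._decay(t)
        self.L += polarity * self.contrast

    def on_frame(self, t, log_intensity):
        self._decay(t)
        self.L_frame = log_intensity
```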

Stereo Event-based Visual-Inertial Odometry

We show that our proposed pipeline improves on the accuracy of the state-of-the-art visual odometry for stereo event-based cameras while running in real time on a standard CPU for low-resolution cameras. To the best of our knowledge, this is the first published visual-inertial odometry for stereo event-based cameras.

Unsupervised Video Deraining with An Event Camera

In this paper, we propose a novel approach by integrating a bio-inspired event camera into the unsupervised video deraining pipeline, which enables us to capture high temporal resolution information and model complex rain characteristics. Specifically, we first design an end-to-end learning-based network consisting of two modules, the asymmetric separation module and the cross-modal fusion module.

Neuromorphic cytometry: implementation on cell counting and size estimation

Our work has achieved highly consistent outputs with a widely adopted flow cytometer (CytoFLEX) in detecting microparticles. Moreover, the capacity of an event-based photosensor to register fluorescent signals was evaluated by recording 6 µm fluorescein isothiocyanate-marked particles under different lighting conditions, revealing superior performance compared to a standard photosensor.

SEpi-3D: soft epipolar 3D shape measurement with an event camera for multipath elimination

In this paper, we propose the soft epipolar 3D (SEpi-3D) method to eliminate multipath in the temporal domain using an event camera and a laser projector. Specifically, we align the projector and event camera rows onto the same epipolar plane with stereo rectification, and we capture event flow synchronized with the projector frame to construct a mapping between event timestamps and projector pixels.
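Under the simplifying assumption of a constant column sweep rate within each projector frame, such a timestamp-to-pixel mapping reduces to a phase computation (parameter names here are illustrative, not the paper's):

```python
# Map an event timestamp to the projector column being illuminated,
# assuming a uniform column sweep over each projector frame.
def projector_column(t_event, t_frame_start, frame_period, n_columns):
    phase = ((t_event - t_frame_start) % frame_period) / frame_period
    return int(phase * n_columns)
```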

Automotive Object Detection via Learning Sparse Events by Spiking Neurons

This paper explores the unique membrane potential dynamics of SNNs and their ability to modulate sparse events. We introduce an innovative spike-triggered adaptive threshold mechanism designed for stable training. Building on these insights, we present a specialized spiking feature pyramid network (SpikeFPN) optimized for automotive event-based object detection.
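A minimal discrete-time picture of a spike-triggered adaptive threshold (the constants and soft-reset choice are assumptions for illustration, not SpikeFPN's exact neuron model):

```python
# Leaky integrate-and-fire neuron whose threshold jumps on each spike
# and then decays back toward its resting value.
import numpy as np

def lif_adaptive(inputs, tau_mem=0.9, tau_th=0.95, th0=1.0, th_jump=0.5):
    v, th = 0.0, th0
    spikes = []
    for x in inputs:                 # one input value per time step
        v = tau_mem * v + x          # leaky membrane integration
        s = 1.0 if v >= th else 0.0
        v -= s * th                  # soft reset on spike
        # Spike-triggered adaptation: raise the threshold on a spike,
        # then let it relax toward th0.
        th = th0 + tau_th * (th - th0) + s * th_jump
        spikes.append(s)
    return np.array(spikes)
```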

Event-based Motion-Robust Accurate Shape Estimation for Mixed Reflectance Scenes

In this paper, we present a novel event-based structured light system that enables fast 3D imaging of mixed reflectance scenes with high accuracy. On the captured events, we use epipolar constraints that intrinsically enable decomposing the measured reflections into diffuse, two-bounce specular, and other multi-bounce reflections.

Multi-Robot, Multi-Sensor, Multi-Environment Event Dataset

We present M3ED, the first multi-sensor event camera dataset focused on high-speed dynamic motions in robotics applications. M3ED provides high-quality synchronized and labeled data from multiple platforms, including ground vehicles, legged robots, and aerial robots, operating in challenging conditions such as driving along off-road trails, navigating through dense forests, and executing aggressive flight maneuvers.

Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic Vision Sensors

This paper presents F-UAV-D (Fast Unmanned Aerial Vehicle Detector), an embedded system that enables fast-moving drone detection. In particular, we propose a setup that exploits a DVS as an alternative to RGB cameras in a real-time, low-power configuration. Our approach leverages the high dynamic range (HDR) and background suppression of the DVS.

Stereo-Event-Camera-Technique for Insect Monitoring

To investigate the causes of declining insect populations, a monitoring system is needed that automatically records insect activity and additional environmental factors over an extended period of time. For this reason, we use a sensor-based method with two event cameras. In this paper, we describe the system, the view volume that can be recorded with it, and a database used for insect detection.

Deep Event Visual Odometry

Deep Event Visual Odometry (DEVO) sparsely tracks selected event patches over time. A key component of DEVO is a novel deep patch selection mechanism tailored to event data. We significantly decrease the pose tracking error on seven real-world benchmarks by up to 97% compared to event-only methods, and we often surpass or come close to stereo and inertial methods.

Neuromorphic Wireless Device-Edge Co-Inference via the Directed Information Bottleneck

The proposed system is designed using a transmitter-centric information-theoretic criterion that targets a reduction of the communication overhead, while retaining the most relevant information for the end-to-end semantic task of interest. Numerical results on standard data sets validate the proposed architecture, and a preliminary testbed realization is reported.

An Asynchronous Kalman Filter for Hybrid Event Cameras

We present an asynchronous Kalman filter that fuses event data with conventional frames for high dynamic range (HDR) video reconstruction. The filter explicitly models the uncertainty of both sensing modalities and updates the per-pixel state whenever an event or a frame arrives, combining the high temporal resolution and dynamic range of events with the absolute intensity information of frames.
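As a rough per-pixel illustration of Kalman-style event-frame fusion (the noise constants are assumptions; this is not the paper's exact filter):

```python
# Toy scalar Kalman-style fusion per pixel: events drive the prediction of
# log intensity (growing the variance), frames act as noisy absolute
# measurements that pull the state back.
class PixelKF:
    def __init__(self, q_event=1e-3, r_frame=1e-2, contrast=0.1):
        self.x, self.p = 0.0, 1.0       # state (log intensity) and variance
        self.q, self.r = q_event, r_frame
        self.c = contrast

    def predict_event(self, polarity):  # polarity in {-1, +1}
        self.x += polarity * self.c     # integrate the event
        self.p += self.q                # prediction uncertainty grows

    def update_frame(self, z):          # z: log intensity from the frame
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
```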

ESL: Event-Based Structured Light

We present ESL, an event-based structured light system that pairs an event camera with a laser point projector for high-speed, high-accuracy depth sensing. Exploiting the event camera's fine temporal resolution, depth estimation is cast as maximizing the spatio-temporal consistency between the event stream and the known projected pattern, yielding robust 3D reconstruction.

Event-Driven Visual-Tactile Sensing and Learning for Robots

Our neuromorphic fingertip tactile sensor, NeuTouch, scales well with the number of taxels thanks to its event-based nature. Likewise, our Visual-Tactile Spiking Neural Network (VT-SNN) enables fast perception when coupled with event sensors. We evaluate our visual-tactile system (using the NeuTouch and Prophesee event camera) on two robot tasks.

How To Calibrate Your Event Camera

We propose a generic event camera calibration framework using image reconstruction. Instead of relying on blinking patterns or external screens, we show that neural network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
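The recipe can be sketched as: reconstruct grayscale frames from events with a network (e.g., an E2VID-style model, not shown here), then run standard checkerboard calibration on the reconstructions. The board size and square length below are assumptions:

```python
# Standard OpenCV checkerboard calibration applied to frames reconstructed
# from events (the reconstruction step itself is omitted).
import cv2
import numpy as np

def calibrate_from_reconstructions(frames, board=(9, 6), square=0.025):
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in frames:                  # 8-bit grayscale reconstructions
        ok, corners = cv2.findChessboardCorners(img, board)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, frames[0].shape[::-1], None, None)
    return K, dist                      # intrinsics and distortion coefficients
```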

Event-Based Kilohertz Eye Tracking Using Coded Differential Lighting

Event cameras operate at low power (~5 mW) and respond to changes in the scene with a latency on the order of microseconds. These properties make event cameras an exciting candidate for eye-tracking sensors on mobile platforms such as AR/VR headsets, since these systems have hard real-time and power constraints.

A Large Scale Event-based Detection Dataset for Automotive

In this study, Prophesee introduces the first very large detection dataset for event cameras. The dataset comprises more than 39 hours of automotive recordings acquired with a 304×240 GEN1 sensor. It contains open roads and very diverse driving scenarios, ranging from urban and highway driving to suburban and countryside scenes.

Learning to Detect Objects with a 1 Megapixel Event Camera

Our model outperforms feed-forward event-based architectures by a large margin. Moreover, our method does not require any reconstruction of intensity images from events, showing that training directly from raw events is possible, more efficient, and more accurate than passing through an intermediate intensity image.
