LEARNING TO DETECT OBJECTS WITH A 1 MEGAPIXEL EVENT CAMERA

PROPHESEE, NNAISENSE


Etienne Perot, Pierre de Tournemire, Davide Nitti, Jonathan Masci, Amos Sironi

ABSTRACT

Event cameras encode visual information with high temporal precision, low data rate, and high dynamic range. Thanks to these characteristics, event cameras are particularly suited to scenarios with fast motion, challenging lighting conditions, and low-latency requirements. However, due to the novelty of the field, the performance of event-based systems on many vision tasks is still lower than that of conventional frame-based solutions. The main reasons for this performance gap are: the lower spatial resolution of event sensors compared to frame cameras; the lack of large-scale training datasets; and the absence of well-established deep learning architectures for event-based processing. In this paper, we address all these problems in the context of an event-based object detection task. First, we publicly release the first high-resolution large-scale dataset for event-based object detection. The dataset contains more than 14 hours of recordings from a 1 megapixel event camera in automotive scenarios, together with 25M bounding boxes of cars, pedestrians, and two-wheelers, labeled at high frequency. Second, we introduce a novel recurrent architecture for event-based detection and a temporal consistency loss for better-behaved training. The ability to compactly represent the sequence of events in the internal memory of the model is essential for achieving high accuracy. Our model outperforms feed-forward event-based architectures by a large margin. Moreover, our method does not require any reconstruction of intensity images from events, showing that training directly on raw events is possible, and both more efficient and more accurate than passing through an intermediate intensity image. Experiments on the dataset introduced in this work, for which both events and gray-level images are available, show performance on par with that of highly tuned and extensively studied frame-based detectors.

Source: NeurIPS Proceedings
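The abstract mentions training detectors directly on raw events. To make that concrete, here is a minimal sketch of one common way to feed events to a convolutional network: accumulating (x, y, t, polarity) events into a dense, time-binned histogram tensor. This is illustrative only; the function name and the exact representation are assumptions, not necessarily the paper's preprocessing.

```python
import numpy as np

def events_to_histogram(x, y, t, p, height, width, num_bins):
    """Bin events into a (num_bins, 2, H, W) tensor: time bins x polarity.

    x, y: integer pixel coordinates; t: monotonically increasing
    timestamps; p: polarity in {0, 1}. Illustrative only -- the paper
    may use a different input representation.
    """
    vol = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    if t.size == 0:
        return vol
    # Map each timestamp to a bin index in [0, num_bins - 1].
    t_norm = (t - t[0]) / max(float(t[-1] - t[0]), 1e-9) * num_bins
    b = np.clip(t_norm.astype(np.int64), 0, num_bins - 1)
    # Accumulate event counts per (bin, polarity, pixel).
    np.add.at(vol, (b, p.astype(np.int64), y, x), 1.0)
    return vol
```

The recurrent architecture the abstract refers to keeps a compact internal memory of past events. Below is a minimal sketch of that idea, assuming a ConvLSTM-style memory between a convolutional encoder and a dense box/class head; all class names, layer sizes, anchor counts, and class counts are placeholders rather than the paper's actual architecture. The temporal consistency term at the end is likewise only one plausible instantiation (a penalty on abrupt changes between consecutive predictions); the paper's actual loss may differ in form.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: a convolutional recurrence that lets the
    detector integrate sparse event information over time."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class RecurrentDetector(nn.Module):
    """Sketch of a recurrent event-based detector: conv encoder ->
    ConvLSTM memory -> dense box/class head. Sizes are placeholders,
    not the architecture from the paper."""

    def __init__(self, in_ch=2, hid_ch=64, num_anchors=4, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(  # downsamples H and W by 4
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.memory = ConvLSTMCell(64, hid_ch)
        self.head = nn.Conv2d(hid_ch, num_anchors * (4 + num_classes), 1)

    def forward(self, event_tensors):
        # event_tensors: (T, B, C, H, W), with H and W divisible by 4;
        # C=2 matches the two polarity channels of the histogram above.
        T, B, _, H, W = event_tensors.shape
        h = event_tensors.new_zeros(B, self.memory.hid_ch, H // 4, W // 4)
        state = (h, torch.zeros_like(h))
        outputs = []
        for step in range(T):
            feat = self.encoder(event_tensors[step])
            feat, state = self.memory(feat, state)
            outputs.append(self.head(feat))
        return torch.stack(outputs)  # per-time-step detection maps

def temporal_consistency_penalty(preds):
    """One plausible consistency term: penalize abrupt changes between
    consecutive per-time-step predictions (preds: (T, B, C, H, W))."""
    return (preds[1:] - preds[:-1]).pow(2).mean()
```

In a training loop of this shape, the per-time-step detection maps would be matched against the high-frequency box labels, with the consistency penalty added to the detection loss as a regularizer.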

