MOTION SEGMENTATION FOR NEUROMORPHIC AERIAL SURVEILLANCE

WESTERN SYDNEY UNIVERSITY

Sami Arja, Alexandre Marcireau, Saeed Afshar, Bharath Ramesh, Gregory Cohen

ABSTRACT

Aerial surveillance demands rapid and precise detection of moving objects in dynamic environments. Event cameras, which draw inspiration from biological vision systems, are a promising alternative to frame-based sensors thanks to their exceptional temporal resolution, high dynamic range, and low power consumption. Unlike traditional frame-based sensors, which capture redundant information at fixed intervals, event cameras asynchronously record pixel-level brightness changes, providing a continuous and efficient data stream well suited to fast motion segmentation. However, existing event-based motion segmentation methods often require per-scene parameter tuning or rely on manual labelling, which hinders their scalability and practical deployment. In this paper, we address these challenges by introducing a novel motion segmentation method that applies self-supervised vision transformers to both event data and optical flow information. Our approach eliminates the need for human annotations and reduces dependency on scene-specific parameters. We evaluate the method using an EVK4-HD Prophesee event camera mounted on a highly dynamic aerial platform in urban settings.

Source: Arxiv.org
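The abstract's description of the event-camera data model (each pixel asynchronously emits a timestamped brightness-change event rather than contributing to fixed-rate frames) can be illustrated with a short sketch. The structured dtype, field names, and the accumulation step below are illustrative assumptions, not the paper's implementation; the 1280x720 resolution matches the Prophesee EVK4-HD sensor mentioned in the abstract.

```python
import numpy as np

# Illustrative event record: one entry per asynchronous brightness change.
EVENT_DTYPE = np.dtype([
    ("t", np.uint64),   # timestamp in microseconds
    ("x", np.uint16),   # pixel column
    ("y", np.uint16),   # pixel row
    ("p", np.int8),     # polarity: +1 brightness increase, -1 decrease
])

def accumulate(events: np.ndarray, width: int = 1280, height: int = 720) -> np.ndarray:
    """Accumulate a slice of the event stream into a signed 2D image.

    Converting a temporal window of events into a dense array like this is a
    common preprocessing step before applying frame-based models (such as
    vision transformers) to event data.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    # Scatter-add each event's polarity at its pixel location.
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame

# Example: three synthetic events at two pixels.
events = np.array([(10, 5, 3, 1), (20, 5, 3, 1), (30, 7, 2, -1)], dtype=EVENT_DTYPE)
img = accumulate(events)
# img[3, 5] == 2 (two positive events), img[2, 7] == -1 (one negative event)
```

`np.add.at` is used instead of plain fancy-index assignment so that repeated events at the same pixel are summed rather than overwritten.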
