Evaluating Image-Based Face and Eye Tracking with Event Cameras
DUBLIN CITY UNIVERSITY, UNIVERSITY OF GALWAY, FotoNation-Tobii, Insight SFI Research Centre for Data Analytics
Khadija Iddrisu, Waseem Shariff, Noel E. O'Connor, Joseph Lemley, Suzanne Little
ABSTRACT
Event cameras, also known as neuromorphic sensors, capture changes in local light intensity at the pixel level, producing asynchronously generated data termed "events". This distinct data format mitigates common issues observed in conventional cameras, such as under-sampling when capturing fast-moving objects, thereby preserving critical information that might otherwise be lost. However, leveraging event data often requires specialized, handcrafted representations that account for its unique attributes and integrate seamlessly with conventional Convolutional Neural Networks (CNNs). In this study, we evaluate event-based face and eye tracking. The core objective is to demonstrate the viability of applying conventional algorithms to event data transformed into a frame format, while preserving the distinctive benefits of event cameras. To validate our approach, we constructed a frame-based event dataset by simulating events between RGB frames derived from the publicly available Helen Dataset. We assess its utility for face and eye detection tasks using GR-YOLO, a pioneering technique derived from YOLOv3, and compare the results against models trained on the same dataset with YOLOv8. Subsequently, the trained models were tested on real event streams from several generations of Prophesee's event cameras and further evaluated on the Faces in Event Stream (FES) benchmark dataset.
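As a rough illustration of the event-to-frame conversion the abstract refers to, the minimal sketch below accumulates one fixed time window of events into a grey-scale frame. The field names (t, x, y, p) follow the layout commonly used in Prophesee recordings; the window length and the colour mapping are assumptions for illustration, not the exact pipeline used in this work.

    import numpy as np

    def events_to_frame(events, height, width, window_us=10_000):
        """Accumulate one time window of events into a 2D grey-scale frame.

        `events` is assumed to be a structured array with fields
        "t" (timestamp in microseconds), "x", "y", and "p" (polarity),
        sorted by timestamp. Field names are illustrative.
        """
        # Pixels with no events stay at mid-grey, so static background
        # is distinguishable from positive and negative intensity changes.
        frame = np.full((height, width), 128, dtype=np.uint8)
        t0 = events["t"][0]
        win = events[events["t"] < t0 + window_us]
        # Positive-polarity events brighten the pixel; negative ones darken it.
        frame[win["y"], win["x"]] = np.where(win["p"] > 0, 255, 0)
        return frame

Frames produced this way can be fed directly to frame-based detectors such as the YOLO variants discussed above.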