Sensor-Augmented Egocentric-Video Captioning with Dynamic Modal Attention

Bibliographic Details
Title: Sensor-Augmented Egocentric-Video Captioning with Dynamic Modal Attention
Authors: Nakamura, Katsuyuki, Ohashi, Hiroki, Okada, Mitsuhiro
Publication Year: 2021
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Abstract: Automatically describing video, or video captioning, has been widely studied in the multimedia field. This paper proposes a new task, sensor-augmented egocentric-video captioning; a dataset constructed for it, called MMAC Captions; and a method for the task that effectively utilizes multi-modal data from video and motion sensors, i.e., inertial measurement units (IMUs). While conventional video captioning struggles to describe human activities in detail because of the limited view of a fixed camera, egocentric vision has greater potential for generating finer-grained descriptions of human activities thanks to its much closer viewpoint. In addition, we utilize wearable-sensor data as auxiliary information to mitigate problems inherent in egocentric vision: motion blur, self-occlusion, and out-of-camera-range activities. We propose a method that effectively combines the sensor data with the video data via an attention mechanism that dynamically determines, based on contextual information, which modality requires more attention. Experiments on the MMAC Captions dataset show that using sensor data as supplementary information to the egocentric-video data is beneficial and that the proposed method outperforms strong baselines, demonstrating its effectiveness.
Comment: Accepted to ACM Multimedia (ACMMM) 2021
Document Type: Working Paper
DOI: 10.1145/3474085.3475557
Access URL: http://arxiv.org/abs/2109.02955
Accession Number: edsarx.2109.02955
Database: arXiv
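
Note (illustrative): the abstract describes an attention mechanism that dynamically decides how much weight to give the video and IMU modalities given the current context. Below is a minimal PyTorch sketch of such modality-level gating, written as an assumption-based illustration rather than the authors' implementation; the class name DynamicModalAttention, the projection layers, and all dimensions are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicModalAttention(nn.Module):
    """Illustrative modality-level attention (not the paper's code):
    scores each modality against a context vector (e.g., the caption
    decoder's hidden state) and returns their convex combination."""

    def __init__(self, video_dim: int, imu_dim: int, ctx_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Project each modality and the context into a shared space
        # (layer choices and hidden_dim are assumptions).
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.imu_proj = nn.Linear(imu_dim, hidden_dim)
        self.ctx_proj = nn.Linear(ctx_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, video_feat, imu_feat, context):
        # video_feat: (B, video_dim), imu_feat: (B, imu_dim), context: (B, ctx_dim)
        v = torch.tanh(self.video_proj(video_feat) + self.ctx_proj(context))
        s = torch.tanh(self.imu_proj(imu_feat) + self.ctx_proj(context))
        # One scalar score per modality, normalized across modalities.
        scores = torch.cat([self.score(v), self.score(s)], dim=1)  # (B, 2)
        weights = F.softmax(scores, dim=1)                         # (B, 2)
        fused = weights[:, :1] * v + weights[:, 1:] * s            # (B, hidden_dim)
        return fused, weights

# Example usage with made-up feature sizes:
# dma = DynamicModalAttention(video_dim=2048, imu_dim=128, ctx_dim=512)
# fused, w = dma(torch.randn(4, 2048), torch.randn(4, 128), torch.randn(4, 512))

In such a setup, the fused vector would feed the caption decoder at each step, letting the model lean on IMU features when the video is blurred or the activity falls outside the camera's view.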