Bibliographic Details
Title: Grounding Partially-Defined Events in Multimodal Data
Authors: Sanders, Kate; Kriz, Reno; Etter, David; Recknor, Hannah; Martin, Alexander; Carpenter, Cameron; Lin, Jingyang; Van Durme, Benjamin
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language; Computer Science - Computer Vision and Pattern Recognition
More Details: How are we able to learn about complex current events just from short snippets of video? While natural language enables straightforward ways to represent under-specified, partially observable events, visual data does not facilitate analogous methods and, consequently, introduces unique challenges in event understanding. With the growing prevalence of vision-capable AI agents, these systems must be able to model events from collections of unstructured video data. To tackle robust event modeling in multimodal settings, we introduce a multimodal formulation for partially-defined events and cast the extraction of these events as a three-stage span retrieval task. We propose a corresponding benchmark for this task, MultiVENT-G, that consists of 14.5 hours of densely annotated current event videos and 1,168 text documents, containing 22.8K labeled event-centric entities. We propose a collection of LLM-driven approaches to the task of multimodal event analysis and evaluate them on MultiVENT-G. Results illustrate the challenges that abstract event understanding poses and demonstrate promise in event-centric video-language systems. Comment: Preprint; 9 pages; 2024 EMNLP Findings
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2410.05267
Accession Number: edsarx.2410.05267
Database: arXiv