Wed-3-12-10 CAPTION ALIGNMENT FOR LOW RESOURCE AUDIO-VISUAL DATA

Vighnesh Reddy Konda (Indian Institute of Technology Bombay), Mayur Warialani (Indian Institute of Technology Bombay), Rakesh Prasanth Achari (Indian Institute of Technology Bombay), Varad Bhatnagar (Indian Institute of Technology Bombay), Jayaprakash Akula (Indian Institute of Technology Bombay), Preethi Jyothi (Indian Institute of Technology Bombay), Ganesh Ramakrishnan (Department of Computer Science and Engineering, Indian Institute of Technology Bombay), Gholamreza Haffari (Monash University) and Pankaj Singh (Indian Institute of Technology Bombay)
Abstract: Understanding videos via captioning has gained a lot of traction recently. While captions are provided alongside videos, information about where a caption aligns within the video is missing, even though such alignments could be particularly useful for indexing and retrieval. Existing work on learning to infer alignments has mostly exploited visual features and ignored the audio signal; video understanding applications often underestimate the importance of the audio modality. We focus on how to make effective use of the audio modality for temporal localization of captions within videos. We release a new audio-visual dataset with captions time-aligned by (i) carefully listening to the audio while watching the video, and (ii) watching only the video. Our dataset is audio-rich and contains captions in two languages, English and Marathi (a low-resource language). We further propose an attention-driven multimodal model that effectively utilizes both audio and video for temporal localization. We then investigate (i) the effect of audio in both data preparation and model design, and (ii) effective pretraining strategies (AudioSet, ASR bottleneck features, PASE, etc.) for the low-resource setting that help extract rich audio representations.
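The abstract names an attention-driven multimodal model for temporal localization but does not describe its architecture. The PyTorch sketch below is purely illustrative of one plausible design, in which a caption embedding attends over fused audio and visual features to produce per-time-step alignment scores; the module name, feature dimensions, and fusion scheme are assumptions for illustration, not the authors' actual model.

# Illustrative sketch only (not the paper's model): caption-conditioned attention
# over fused audio-visual features for temporal localization of a caption.
# Assumes precomputed frame-level video features, segment-level audio features
# (e.g., PASE or ASR-bottleneck vectors), and a caption embedding; all names
# and dimensions below are hypothetical.
import torch
import torch.nn as nn


class CaptionTemporalAligner(nn.Module):
    """Scores each time step of a video for how well it matches a caption."""

    def __init__(self, video_dim=2048, audio_dim=256, caption_dim=300, hidden=512):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.caption_proj = nn.Linear(caption_dim, hidden)
        # The caption embedding is the query; fused audio-visual features
        # serve as keys and values.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)

    def forward(self, video_feats, audio_feats, caption_emb):
        # video_feats: (B, T, video_dim); audio_feats: (B, T, audio_dim)
        # caption_emb: (B, caption_dim)
        fused = self.video_proj(video_feats) + self.audio_proj(audio_feats)  # (B, T, H)
        query = self.caption_proj(caption_emb).unsqueeze(1)                  # (B, 1, H)
        # attn_weights (B, 1, T) form a soft temporal localization of the caption.
        _, attn_weights = self.attn(query, fused, fused)
        return attn_weights.squeeze(1)                                       # (B, T)


if __name__ == "__main__":
    model = CaptionTemporalAligner()
    scores = model(torch.randn(2, 50, 2048), torch.randn(2, 50, 256), torch.randn(2, 300))
    print(scores.shape)  # torch.Size([2, 50]); each row sums to 1 over time steps

A hard alignment could then be read off as the time step (or contiguous span) with the highest attention mass, though how the paper converts attention into alignments is not specified in the abstract.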