Mon-3-4-7 An Acoustic Segment Model Based Segment Unit Selection Approach to Acoustic Scene Classification with Partial Utterances

Hu Hu (Georgia Institute of Technology), Sabato Marco Siniscalchi (University of Enna Kore), Yannan Wang (Tencent Technology (Shenzhen) Co., Ltd), Bai Xue (Institute of Software, Chinese Academy of Sciences), Jun Du (University of Science and Technology of China) and Chin-Hui Lee (Georgia Institute of Technology)
Abstract: In this paper, we propose a sub-utterance unit selection framework to remove acoustic segments in audio recordings that carry little information for acoustic scene classification (ASC). Our approach is built upon a universal set of acoustic segment units covering the overall acoustic scene space. First, those units are modeled with acoustic segment models (ASMs) used to tokenize acoustic scene utterances into sequences of acoustic segment units. Next, paralleling the idea of stop words in information retrieval, stop ASMs are automatically detected. Finally, acoustic segments associated with the stop ASMs are blocked, because of their low indexing power in retrieving most acoustic scenes. In contrast to building scene models with whole utterances, the ASM-removed sub-utterances are then used as inputs to the AlexNet-L back-end for final classification. On the DCASE 2018 dataset, scene classification accuracy increases from 68%, with whole utterances, to 72.1%, with segment selection. This represents a competitive accuracy without any data augmentation or ensemble strategies. Moreover, our approach compares favourably to AlexNet-L with a conventional attention mechanism.
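The stop-ASM idea above parallels stop-word removal in information retrieval: units that occur in nearly every tokenized utterance have little discriminative (indexing) power and can be dropped before classification. The paper does not publish its exact selection criterion, so the sketch below is a minimal, hypothetical illustration using a document-frequency threshold; the function names, the threshold value, and the toy token sequences are all assumptions, not the authors' implementation.

```python
from collections import Counter

def find_stop_units(tokenized_utts, df_threshold=0.9):
    """Flag ASM units whose document frequency meets df_threshold.

    A unit appearing in almost every utterance carries little indexing
    power for distinguishing scenes, analogous to a stop word in IR.
    (Illustrative criterion only; the paper's detection rule may differ.)
    """
    n = len(tokenized_utts)
    df = Counter()
    for seq in tokenized_utts:
        for unit in set(seq):          # count each unit once per utterance
            df[unit] += 1
    return {u for u, c in df.items() if c / n >= df_threshold}

def remove_stop_segments(seq, stop_units):
    # Keep only segments whose ASM label is not a stop unit;
    # the surviving sub-utterance is what feeds the classifier back-end.
    return [u for u in seq if u not in stop_units]

# Toy example: unit "a" appears in every utterance and is flagged.
utts = [["a", "b", "c"], ["a", "b", "d"], ["a", "e", "f"]]
stop = find_stop_units(utts)
```

In this toy example only "a" reaches 100% document frequency, so `remove_stop_segments(["a", "b", "a", "c"], stop)` yields `["b", "c"]`.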