Mon-2-8-8 Intra-Utterance Similarity Preserving Knowledge Distillation for Audio Tagging

Chun-Chieh Chang (Johns Hopkins University), Chieh-Chi Kao (Amazon.com), Ming Sun (Amazon.com) and Chao Wang (Amazon.com)
Abstract: Knowledge Distillation (KD) is a popular area of research for reducing the size of large models while still maintaining good performance. The outputs of larger teacher models are used to guide the training of smaller student models. Given the repetitive nature of acoustic events, we propose to leverage this information to regulate the KD training for Audio Tagging. This novel KD method, “Intra-Utterance Similarity Preserving KD” (IUSP), shows promising results for the audio tagging task. It is motivated by the previously published KD method “Similarity Preserving KD” (SP). However, instead of preserving the pairwise similarities between inputs within a mini-batch, our method preserves the pairwise similarities between the frames of a single input utterance. Our proposed KD method, IUSP, shows consistent improvements over SP across student models of different sizes on the DCASE 2019 Task 5 dataset for audio tagging. Relative to SP’s improvement over the baseline, IUSP’s improvement in micro AUPRC over the baseline is 27.1% to 122.4% larger.
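The abstract describes the core idea: match the pairwise frame-to-frame similarity structure of the teacher and the student within each utterance, rather than the input-to-input similarities within a mini-batch as in SP. Below is a minimal sketch of how such an intra-utterance similarity term could be computed, assuming PyTorch, frame-level feature tensors of shape (num_frames, feat_dim), row-normalized Gram matrices, and an MSE matching loss analogous to SP. The function names and the weighting factor lambda_iusp are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def frame_similarity(features: torch.Tensor) -> torch.Tensor:
    """Pairwise similarity between the frames of a single utterance.

    features: (num_frames, feat_dim) frame-level activations.
    Returns a (num_frames, num_frames) row-normalized similarity matrix.
    """
    sim = features @ features.t()        # Gram matrix over frames
    return F.normalize(sim, p=2, dim=1)  # row-wise L2 normalization


def iusp_loss(student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
    """Distillation term: match student and teacher intra-utterance similarity maps.

    The two feature tensors may have different feature dimensions, but must
    share the same number of frames so the similarity matrices align.
    """
    g_student = frame_similarity(student_feats)
    g_teacher = frame_similarity(teacher_feats)
    return F.mse_loss(g_student, g_teacher)


# Illustrative usage: add the similarity term to the ordinary tagging loss.
# lambda_iusp is a hypothetical weighting hyperparameter.
# total_loss = tagging_loss + lambda_iusp * iusp_loss(student_frames, teacher_frames)
```

In this sketch the similarity matrix is num_frames x num_frames for both networks, so teacher and student feature dimensions need not match; only the frame count must agree, mirroring how SP compares Gram matrices over the batch dimension.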