Mon-2-11-2 Semantic Mask for Transformer based End-to-End Speech Recognition

Chengyi Wang (Nankai University), Yu Wu (Microsoft Research Asia), Yujiao Du (Alibaba Corporation), Jinyu Li (Microsoft), Shujie Liu (Microsoft Research Asia, Beijing), Liang Lu (Microsoft), Shuo Ren (Beihang University), Guoli Ye (Microsoft), Sheng Zhao (Microsoft) and Ming Zhou (Microsoft Research Asia)
Abstract: Attention-based encoder-decoder models have achieved impressive results for both automatic speech recognition (ASR) and text-to-speech (TTS) tasks. This approach exploits the memorization capacity of neural networks to learn the mapping from the input sequence to the output sequence from scratch, without assuming prior knowledge such as alignments. However, such models are prone to overfitting, especially when the amount of training data is limited. Inspired by SpecAugment and BERT, in this paper we propose a semantic mask based regularization for training this kind of end-to-end (E2E) model. The idea is to mask the input features corresponding to a particular output token, e.g., a word or a word-piece, in order to encourage the model to fill in the token based on contextual information. While the approach is applicable to the encoder-decoder framework with any type of neural network architecture, we study the transformer-based model for ASR in this work. We perform experiments on the Librispeech 960h and TedLium2 data sets, and achieve state-of-the-art performance in the scope of attention-based E2E models.
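The abstract's core operation is masking the acoustic frames that align to a randomly chosen output token, rather than masking random time spans as SpecAugment does. A minimal NumPy sketch of that idea follows; it assumes token-to-frame alignments are available (e.g., from a forced aligner), and the mask_prob parameter and mean-fill strategy are illustrative choices, not details stated in the abstract.

```python
import numpy as np

def semantic_mask(features, alignments, mask_prob=0.15, rng=None):
    """Mask the acoustic frames aligned to randomly selected output tokens.

    features:   (T, D) array, e.g. log-mel filterbank frames.
    alignments: list of (start_frame, end_frame) spans, one per token
                (assumed to come from a forced aligner).
    mask_prob:  fraction of tokens whose frames are masked (hypothetical value).
    """
    rng = rng or np.random.default_rng()
    masked = features.copy()
    fill = features.mean(axis=0)  # illustrative fill value: utterance-level mean
    for start, end in alignments:
        if rng.random() < mask_prob:
            masked[start:end] = fill  # hide the whole token's acoustic evidence
    return masked
```

During training, the masked features would replace the originals, so the decoder must predict the hidden token from surrounding context, analogous to BERT's masked-token objective.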