Thu-3-10-5 Self-and-Mixed Attention Decoder with Deep Acoustic Structure for Transformer-based LVCSR

Xinyuan Zhou (Shanghai Normal University), Grandee Lee (National University of Singapore), Emre Yilmaz (National University of Singapore), Yanhua Long (Shanghai Normal University), Jiaen Liang (Unisound AI Technology Co., Ltd.) and Haizhou Li (National University of Singapore)
Abstract: The Transformer has shown impressive performance in automatic speech recognition. It uses an encoder-decoder structure with self-attention to learn the relationship between the high-level representation of the source inputs and the embedding of the target outputs. In this paper, we propose a novel decoder structure that features a self-and-mixed attention decoder (SMAD) with a deep acoustic structure (DAS) to improve the acoustic representation of Transformer-based LVCSR. Specifically, we introduce a self-attention mechanism to learn a multi-layer deep acoustic structure for multiple levels of acoustic abstraction. We also design a mixed attention mechanism that simultaneously learns the alignment between different levels of acoustic abstraction and the corresponding linguistic information in a shared embedding space. The ASR experiments on Aishell-1 show that the proposed structure achieves CERs of 4.8% on the dev set and 5.1% on the test set, which are, to the best of our knowledge, the best results reported on this task.
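To make the decoder design concrete, here is a minimal sketch of what one SMAD layer could look like, based only on the abstract's description and not on the authors' released code. All names (`SMADLayer`, the sub-layer layout, dimensions) are illustrative assumptions: a self-attention sub-layer refines the acoustic stream (one level of the deep acoustic structure; stacking layers deepens it), and a mixed attention sub-layer lets the target-token stream attend over the concatenation of acoustic and linguistic states in one shared embedding space.

```python
# Hypothetical sketch of a self-and-mixed attention decoder (SMAD) layer,
# reconstructed from the abstract's description; NOT the authors' implementation.
import torch
import torch.nn as nn

class SMADLayer(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Self-attention over the acoustic stream: one level of the
        # deep acoustic structure (DAS); stacked layers give multiple
        # levels of acoustic abstraction.
        self.acoustic_self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Mixed attention: queries come from the text stream, while keys
        # and values are the concatenated acoustic + text states, so
        # acoustic-linguistic alignment is learned in a shared space.
        self.mixed_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm_a = nn.LayerNorm(d_model)
        self.norm_m = nn.LayerNorm(d_model)
        self.norm_f = nn.LayerNorm(d_model)

    def forward(self, acoustic: torch.Tensor, text: torch.Tensor):
        # acoustic: (B, T_a, d_model) encoder output; text: (B, T_t, d_model) token embeddings.
        a, _ = self.acoustic_self_attn(acoustic, acoustic, acoustic)
        acoustic = self.norm_a(acoustic + a)           # deeper acoustic abstraction
        mixed = torch.cat([acoustic, text], dim=1)     # shared embedding space
        t, _ = self.mixed_attn(text, mixed, mixed)     # joint acoustic-linguistic attention
        text = self.norm_m(text + t)
        text = self.norm_f(text + self.ffn(text))
        return acoustic, text
```

Under this reading, each decoder layer updates both streams, so the acoustic representation keeps deepening alongside the linguistic one rather than being fixed by the encoder; details such as masking, head counts, and normalization placement would follow the paper itself.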