Wed-1-5-9 High Performance Sequence-to-Sequence Model for Streaming Speech Recognition

Thai Son Nguyen (Karlsruhe Institute of Technology), Ngoc Quan Pham (Karlsruhe Institute of Technology), Sebastian Stüker (Karlsruhe Institute of Technology) and Alex Waibel (Karlsruhe Institute of Technology)
Abstract: Recently, sequence-to-sequence models have started to achieve state-of-the-art performance on standard speech recognition tasks when processing audio data in batch mode, i.e., when the complete audio data is available at the start of processing. However, when it comes to performing run-on recognition on an input stream of audio data while producing recognition results in real-time and with low word-based latency, these models face several challenges. Many of their components, e.g., the attention mechanism or the bidirectional LSTM (BLSTM), require the whole audio sequence to be decoded to be available before processing can start. In this paper, we propose several techniques to mitigate these problems. We introduce an additional loss function controlling the uncertainty of the attention mechanism, a modified beam search that identifies partial, stable hypotheses, ways of working with BLSTMs in the encoder, and the use of chunked BLSTMs. Our experiments show that, with the right combination of these techniques, it is possible to perform run-on speech recognition with low word-based latency without sacrificing word error rate performance.
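To make the chunked-BLSTM idea mentioned in the abstract more concrete, the following is a minimal sketch, not the authors' implementation: the encoder splits the incoming feature stream into fixed-size chunks and runs a BLSTM within each chunk, so the backward direction never needs context beyond the chunk boundary and latency stays bounded. PyTorch is assumed, and the class name, chunk size, and feature dimensions are illustrative.

```python
# Minimal chunked-BLSTM encoder sketch (assumed PyTorch; names and sizes are illustrative).
import torch
import torch.nn as nn


class ChunkedBLSTMEncoder(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, chunk_size: int):
        super().__init__()
        self.chunk_size = chunk_size
        self.blstm = nn.LSTM(input_size, hidden_size,
                             batch_first=True, bidirectional=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, input_size) acoustic features
        outputs = []
        for chunk in torch.split(feats, self.chunk_size, dim=1):
            # The backward LSTM only sees frames inside this chunk,
            # so the right-context (and thus the latency) is bounded
            # by chunk_size frames.
            out, _ = self.blstm(chunk)
            outputs.append(out)
        return torch.cat(outputs, dim=1)  # (batch, time, 2 * hidden_size)


if __name__ == "__main__":
    enc = ChunkedBLSTMEncoder(input_size=40, hidden_size=256, chunk_size=50)
    x = torch.randn(2, 120, 40)   # 2 utterances, 120 frames of 40-dim features
    print(enc(x).shape)           # torch.Size([2, 120, 512])
```

A streaming variant could additionally carry the forward-direction hidden state across chunk boundaries so that only the backward direction is restricted; the sketch above keeps chunks fully independent for simplicity.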