Thu-3-5-3 Multilingual Speech Recognition with Self-Attention Structured Parameterization

Yun Zhu (Google), Parisa Haghani (Google), Anshuman Tripathi (Google), Bhuvana Ramabhadran (Google), Brian Farris (Google), Hainan Xu (Google), Han Lu (Google), Hasim Sak (Google), Isabel Leal (Google), Neeraj Gaur (Google), Pedro Moreno (Google) and Qian Zhang (Google)
Abstract: Multilingual automatic speech recognition systems can transcribe utterances from different languages. These systems are attractive from several perspectives: they can provide quality improvements, especially for lower-resource languages, and they simplify the training and deployment procedure. End-to-end speech recognition has further simplified multilingual modeling, as a single model has to be trained and deployed instead of the several components of a classical system. In this paper, we investigate a streamable end-to-end multilingual system based on the Transformer Transducer. We propose several techniques for adapting the self-attention architecture based on the language ID. We analyze the trade-offs of each method with regard to quality gains and the number of additional parameters introduced. We conduct experiments on a real-world task consisting of five languages. Our experimental results demonstrate an 8% to 20% relative gain over the baseline multilingual model.
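The abstract does not spell out how the self-attention layers are conditioned on the language ID, so the following is only a minimal sketch of one plausible variant: a shared attention layer wrapped with small per-language residual adapters. The class and parameter names (LanguageAdaptedSelfAttention, adapter_dim, lang_id) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class LanguageAdaptedSelfAttention(nn.Module):
    """Illustrative sketch (not the paper's exact parameterization):
    a shared self-attention layer preceded by a small language-specific
    bottleneck adapter selected by the language ID."""

    def __init__(self, d_model: int, num_heads: int,
                 num_languages: int, adapter_dim: int = 64):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # One lightweight adapter per language; only this part grows with
        # the number of languages, the attention weights stay shared.
        self.adapters = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, adapter_dim),
                          nn.ReLU(),
                          nn.Linear(adapter_dim, d_model))
            for _ in range(num_languages)
        ])

    def forward(self, x: torch.Tensor, lang_id: int) -> torch.Tensor:
        # x: (batch, time, d_model); lang_id selects the adapter to apply.
        x = x + self.adapters[lang_id](x)          # language-specific residual
        out, _ = self.attn(x, x, x, need_weights=False)
        return out


# Usage: two utterances, 50 frames, 256-dim features, language index 3 of 5.
layer = LanguageAdaptedSelfAttention(d_model=256, num_heads=4, num_languages=5)
y = layer(torch.randn(2, 50, 256), lang_id=3)
print(y.shape)  # torch.Size([2, 50, 256])
```

This kind of conditioning illustrates the trade-off the abstract mentions: the adapters add a small, per-language parameter budget on top of the shared multilingual model.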