Cross/Multi-Lingual and Code-Switched Speech Recognition

Mon-3-1-4 Multi-Encoder-Decoder Transformer for Code-Switching Speech Recognition

Xinyuan Zhou(Shanghai Normal University), Emre Yilmaz(National University of Singapore), Yanhua Long(Shanghai Normal University), Yijie Li(Unisound AI Technology Co., Ltd.) and Haizhou Li(National University of Singapore)
Abstract: Code-switching (CS) occurs when a speaker alternates between words of two or more languages within a single sentence or across sentences. Automatic speech recognition (ASR) of CS speech has to deal with two or more languages at the same time. In this study, we propose a Transformer-based architecture with two symmetric language-specific encoders to capture the individual language attributes, which improves the acoustic representation of each language. These representations are combined using a language-specific multi-head attention mechanism in the decoder module. Each encoder and its corresponding attention module in the decoder are pre-trained on a large monolingual corpus to alleviate the impact of limited CS training data. We call such a network a multi-encoder-decoder (MED) architecture. Experiments on the SEAME corpus show that the proposed MED architecture achieves 10.2% and 10.8% relative error rate reductions on the CS evaluation sets with Mandarin and English as the matrix language, respectively.
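The core idea of the abstract, two language-specific encoders whose outputs are attended to separately in the decoder and then combined, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the linear "encoders", the averaging of the two context vectors, and all dimensions are simplifying assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    # Scaled dot-product attention (single head).
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)
    return softmax(scores) @ values

rng = np.random.default_rng(0)
d_model, T = 8, 5  # toy model width and number of acoustic frames

# Stand-ins for the two language-specific encoders (hypothetical
# linear maps; the paper uses full Transformer encoder stacks).
W_man = rng.standard_normal((d_model, d_model))
W_eng = rng.standard_normal((d_model, d_model))
x = rng.standard_normal((T, d_model))   # shared acoustic input
h_man = np.tanh(x @ W_man)              # Mandarin encoder output
h_eng = np.tanh(x @ W_eng)              # English encoder output

# One decoder step: the query attends to each encoder's output via
# its own attention module; the contexts are then combined (here,
# averaged -- an assumed combination rule, not the paper's).
q = rng.standard_normal((1, d_model))
c_man = attention(q, h_man, h_man)
c_eng = attention(q, h_eng, h_eng)
context = (c_man + c_eng) / 2
print(context.shape)  # (1, 8)
```

The sketch shows why pre-training each branch on monolingual data is natural: each encoder and its attention path is a self-contained module that can be trained independently before the combined network sees code-switched speech.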