Tue-1-8-8 DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition and Adaptation

Yi-Chen Chen (National Taiwan University), Jui-Yang Hsu (National Taiwan University), Cheng-Kuang Lee (NVIDIA) and Hung-yi Lee (National Taiwan University (NTU))
Abstract: In previous works, only the parameter weights of ASR models are optimized under a fixed-topology architecture. However, the design of successful model architectures has always relied on human experience and intuition, and many hyperparameters related to the architecture must be tuned manually. Therefore, in this paper we propose an ASR approach with efficient gradient-based architecture search, DARTS-ASR. To examine the generalizability of DARTS-ASR, we apply our approach not only to many languages for monolingual ASR, but also to a multilingual ASR setting. Following previous works, we conducted experiments on a multilingual dataset, IARPA BABEL. The experimental results show that our approach outperformed the baseline fixed-topology architecture with 10.2% and 10.0% relative character error rate reductions under the monolingual and multilingual ASR settings, respectively. Furthermore, we analyze the architectures searched by DARTS-ASR.
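The gradient-based search referenced in the abstract follows the general DARTS idea: each candidate operation on an edge of the network is blended with softmax-weighted architecture parameters, so the architecture choice itself can be optimized by gradient descent alongside the model weights. The sketch below illustrates only this generic mixed-operation mechanism; the candidate operation set, tensor shapes, and names are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical candidate operations for one edge of the searched cell
# (the paper's exact operation space may differ).
def make_candidate_ops(channels):
    return nn.ModuleList([
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.Conv2d(channels, channels, kernel_size=5, padding=2),
        nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
        nn.Identity(),
    ])

class MixedOp(nn.Module):
    """DARTS-style mixed operation: a softmax-weighted sum over all
    candidate operations, making the architecture choice differentiable."""
    def __init__(self, channels):
        super().__init__()
        self.ops = make_candidate_ops(channels)
        # Architecture parameters (alpha), learned jointly with the weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Example usage with assumed (batch, channels, freq, time) acoustic features.
x = torch.randn(2, 16, 40, 100)
mixed = MixedOp(channels=16)
print(mixed(x).shape)  # same spatial shape as the input
```

After the search phase, the operation with the largest architecture weight on each edge is typically kept to form the final discrete architecture.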