Prithvi Raj Reddy Gudepu (Samsung R&D Institute, Bangalore), Gowtham Prudhvi Vadisetti (Samsung R&D Institute, Bangalore), Abhishek Niranjan (Samsung R&D Institute, Bangalore), Kinnera Saranu (Samsung R&D Institute, Bangalore), Raghava Sarma (Samsung R&D Institute, Bangalore), Mahaboob Ali Basha Shaik (Voice Intelligence, Samsung R&D Institute, Bangalore) and Periyasamy Paramasivam (Samsung)
Automatic speech recognition (ASR) systems are known to perform poorly on whispered speech. One of the primary reasons is the lack of large annotated whisper corpora. To address this challenge, we propose data augmentation with a synthetic whisper corpus generated from normal speech using a Cycle-Consistent Generative Adversarial Network (CycleGAN). We train the CycleGAN model on a limited corpus of parallel whispered and normal speech, aligned using Dynamic Time Warping (DTW). The model learns a frame-wise mapping from feature vectors of normal speech to those of whispered speech. We then augment ASR training with the generated synthetic whisper corpus. In this paper, we validate our proposed approach using state-of-the-art end-to-end (E2E) and hybrid ASR systems trained on the publicly available Librispeech and wTIMIT corpora and an internally recorded far-field corpus. We achieve a 23% relative reduction in word error rate (WER) over the baseline on whisper test sets. In addition, we also achieve WER reductions on the Librispeech and far-field test sets.
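The DTW alignment step mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic textbook DTW over 1-D "feature" sequences (real ASR front ends would align multi-dimensional frame vectors, e.g. MFCCs, with a vector distance). The function name `dtw_align` and the toy inputs are illustrative assumptions.

```python
def dtw_align(x, y, dist=lambda a, b: abs(a - b)):
    """Dynamic Time Warping: return (total alignment cost,
    list of (i, j) index pairs forming the optimal warping path).

    x, y : sequences of frame features (scalars here for simplicity);
           a real system would pass per-frame vectors and a vector distance.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    # Accumulated-cost matrix with an infinite border so the
    # recurrence below needs no special cases at the edges.
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # step in x only
                              D[i][j - 1],      # step in y only
                              D[i - 1][j - 1])  # step in both (match)
    # Backtrack from (n, m) to recover the frame-to-frame alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    path.reverse()
    return D[n][m], path


# Toy example: y repeats one "frame" of x; DTW absorbs the
# timing difference, yielding zero cost for identical content.
cost, path = dtw_align([1, 2, 3], [1, 2, 2, 3])
# cost == 0.0, path == [(0, 0), (1, 1), (1, 2), (2, 3)]
```

Aligned pairs like these would supply the frame-level (normal, whisper) feature correspondences on which the CycleGAN mapping is trained.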