Wed-3-7-2 On Loss Functions and Recurrency Training for GAN-based Speech Enhancement Systems

Zhuohuang Zhang (Indiana University Bloomington), Chengyun Deng (Didi Chuxing), Yi Shen (Indiana University Bloomington), Donald S. Williamson (Indiana University Bloomington), Yongtao Sha (Didi Chuxing), Yi Zhang (Didi Chuxing), Hui Song (Didi Chuxing) and Xiangang Li (Didi Chuxing)
Abstract: Recent work has shown that it is feasible to use generative adversarial networks (GANs) for speech enhancement; however, these approaches have not been compared to state-of-the-art (SOTA) non-GAN-based approaches. Additionally, many loss functions have been proposed for GAN-based approaches, but they have not been adequately compared. In this study, we propose novel convolutional recurrent GAN (CRGAN) architectures for speech enhancement. Multiple loss functions are adopted to enable direct comparisons to other GAN-based systems. The benefits of including recurrent layers are also explored. Our results show that the proposed CRGAN model outperforms SOTA GAN-based models using the same loss functions, and that it also outperforms non-GAN-based systems, indicating the benefits of using a GAN for speech enhancement. Overall, the CRGAN model that combines an objective-metric loss function with the mean squared error (MSE) performs best across many evaluation metrics.
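The abstract's best-performing configuration combines an objective-metric loss with an MSE term. The snippet below is a minimal sketch of that general idea (assuming a MetricGAN-style setup, not the authors' released code): a discriminator predicts a normalized objective metric score for the enhanced speech, and the generator is trained to push that prediction toward its maximum while also minimizing spectral MSE. The function name `generator_loss` and the weight `alpha` are hypothetical.

```python
# Illustrative sketch only; names and weighting are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def generator_loss(d_score_enhanced: torch.Tensor,
                   enhanced_mag: torch.Tensor,
                   clean_mag: torch.Tensor,
                   alpha: float = 1.0) -> torch.Tensor:
    """Combined objective-metric + MSE generator loss.

    d_score_enhanced: discriminator's predicted metric score in [0, 1]
                      for the enhanced speech, shape (batch, 1).
    enhanced_mag / clean_mag: spectral magnitudes, shape (batch, T, F).
    alpha: weight on the MSE term (hypothetical hyperparameter).
    """
    # Metric (adversarial) term: drive the predicted metric toward its maximum.
    metric_term = F.mse_loss(d_score_enhanced,
                             torch.ones_like(d_score_enhanced))
    # Spectral MSE term between enhanced and clean magnitudes.
    mse_term = F.mse_loss(enhanced_mag, clean_mag)
    return metric_term + alpha * mse_term
```

In this kind of formulation, the metric term steers the generator toward outputs the discriminator scores highly on the chosen objective metric, while the MSE term regularizes the enhanced spectrum toward the clean target.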