Thu-SS-1-6-8 A Convolutional Deep Markov Model for Unsupervised Speech Representation Learning

Sameer Khurana (Massachusetts Institute of Technology), Antoine Laurent (Le Mans), Wei-Ning Hsu (Massachusetts Institute of Technology), Jan Chorowski (University of Wroclaw), Adrian Lancucki (University of Wroclaw), Ricard Marxer (Université de Toulon, LIS CNRS UMR 7020) and James Glass (Massachusetts Institute of Technology)
Abstract: Probabilistic Latent Variable Models (LVMs) provide an alternative to self-supervised learning approaches for linguistic representation learning from speech. LVMs admit an intuitive probabilistic interpretation in which the latent structure shapes the information extracted from the signal. Even though LVMs have recently seen renewed interest due to the introduction of Variational Autoencoders (VAEs), their use for speech representation learning remains largely unexplored. In this work, we propose the Convolutional Deep Markov Model (ConvDMM), a Gaussian state-space model with non-linear emission and transition functions modeled by deep neural networks. This unsupervised model is trained using black box variational inference, with a deep convolutional neural network serving as the inference network for structured variational approximation. When trained on a large-scale speech dataset (LibriSpeech), ConvDMM produces features that significantly outperform multiple self-supervised feature extraction methods on linear phone classification and recognition on the Wall Street Journal dataset. Furthermore, we find that ConvDMM complements self-supervised methods such as Wav2Vec and PASE, improving on the results achieved with any of the methods alone. Lastly, we find that ConvDMM features enable learning better phone recognizers than any other features in an extremely low-resource regime with few labeled training examples.
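The generative structure described in the abstract (a Gaussian state-space model whose transition p(z_t | z_{t-1}) and emission p(x_t | z_t) means are produced by neural networks) can be illustrated with a minimal ancestral-sampling sketch. This is a hypothetical toy implementation, not the paper's code: the dimensions, one-hidden-layer MLPs, and fixed noise scales are illustrative assumptions, and the inference network and variational training are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not taken from the paper).
Z_DIM, X_DIM, H_DIM, T = 4, 8, 16, 20

def mlp_params(d_in, d_out):
    """Random weights for a one-hidden-layer MLP (toy stand-in for a deep net)."""
    return (rng.standard_normal((d_in, H_DIM)) / np.sqrt(d_in),
            rng.standard_normal((H_DIM, d_out)) / np.sqrt(H_DIM))

def mlp(params, x):
    W1, W2 = params
    return np.tanh(x @ W1) @ W2

# Non-linear transition p(z_t | z_{t-1}) and emission p(x_t | z_t):
# each a Gaussian whose mean is computed by a neural network.
trans_net = mlp_params(Z_DIM, Z_DIM)
emit_net = mlp_params(Z_DIM, X_DIM)

def sample_trajectory(num_steps):
    """Ancestral sampling from the Markovian generative model."""
    z = np.zeros(Z_DIM)  # initial latent state
    zs, xs = [], []
    for _ in range(num_steps):
        # z_t ~ N(f(z_{t-1}), sigma^2 I), with a fixed illustrative sigma
        z = mlp(trans_net, z) + 0.1 * rng.standard_normal(Z_DIM)
        # x_t ~ N(g(z_t), sigma^2 I)
        x = mlp(emit_net, z) + 0.1 * rng.standard_normal(X_DIM)
        zs.append(z)
        xs.append(x)
    return np.stack(zs), np.stack(xs)

zs, xs = sample_trajectory(T)
print(zs.shape, xs.shape)  # (20, 4) (20, 8)
```

In the actual model, training replaces this sampling loop with black box variational inference, where a convolutional inference network maps the observed speech features to a structured approximate posterior over the latent sequence.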