Speech Synthesis: Text Processing, Data and Evaluation

Tue-1-7-8 An unsupervised method to select a speaker subset from large multi-speaker speech synthesis datasets

Pilar Oplustil (University of Edinburgh), Jennifer Williams (University of Edinburgh), Joanna Rownicka (University of Edinburgh) and Simon King (University of Edinburgh)
Abstract: Large multi-speaker datasets for TTS typically contain diverse speakers, recording conditions, styles, and data quality. Although one might generally presume that more data is better, in this paper we show that a model trained on a carefully chosen subset of speakers from LibriTTS produces significantly better-quality synthetic speech than a model trained on a larger set. We propose an unsupervised methodology to find this subset by clustering per-speaker acoustic representations.
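The core idea — clustering per-speaker acoustic representations and training on a selected cluster — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the synthetic 32-dimensional "speaker embeddings", the choice of k = 2, and the rule of keeping the largest cluster are all assumptions for demonstration.

```python
import numpy as np

def kmeans(X, init_centers, iters=50):
    """Plain Lloyd's k-means: assign points to nearest center, recompute means."""
    centers = init_centers.copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each speaker embedding to its nearest center (Euclidean).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned embeddings.
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical per-speaker embeddings: 100 speakers, 32-dim vectors, drawn
# from two well-separated synthetic "recording condition" groups.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0.0, 0.5, (60, 32)),
                 rng.normal(3.0, 0.5, (40, 32))])

# Initialize with one embedding from each synthetic group for determinism.
labels = kmeans(emb, emb[[0, -1]])

# Keep the speakers in the largest cluster as the training subset
# (one possible selection rule; the paper's actual criterion may differ).
subset = np.flatnonzero(labels == np.bincount(labels).argmax())
print(len(subset))  # → 60
```

In practice the representations would come from a speaker encoder or acoustic features computed over each speaker's recordings, and the retained cluster would be chosen by a quality criterion rather than by size alone.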