Thu-1-11-8 End-to-end text-to-speech synthesis with unaligned multiple language units based on attention

Masashi Aso (University of Tokyo), Shinnosuke Takamichi (University of Tokyo) and Hiroshi Saruwatari (The University of Tokyo)
Abstract: This paper presents the use of unaligned multiple language units for end-to-end text-to-speech (TTS). End-to-end TTS is a promising technology in that it does not require intermediate representations such as prosodic contexts. However, it can cause mispronunciation and unnatural prosody. To alleviate this problem, previous methods have used multiple language units, e.g., phonemes and characters, but required the units to be hard-aligned. In this paper, we propose a multi-input attention structure that simultaneously accepts multiple language units without alignments among them. We consider using not only traditional phonemes and characters but also subwords tokenized in a language-independent manner. We also propose a progressive training strategy to deal with the unaligned multiple language units. The experimental results demonstrated that our model and training strategy improve speech quality.
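
To make the idea concrete, below is a minimal PyTorch sketch of one way such a multi-input attention could be arranged: the decoder state attends over each language-unit stream (e.g., phonemes, characters, subwords) independently, and the per-stream context vectors are concatenated and projected into a single fused context. The class and parameter names and the concatenation-based fusion are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of a multi-input attention over unaligned unit streams.
# Fusion by concatenation is an assumption, not the paper's exact mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiInputAttention(nn.Module):
    """Additive attention applied per unit stream, contexts fused by concatenation."""

    def __init__(self, enc_dim, dec_dim, attn_dim, num_streams=3):
        super().__init__()
        self.query_proj = nn.ModuleList(nn.Linear(dec_dim, attn_dim) for _ in range(num_streams))
        self.key_proj = nn.ModuleList(nn.Linear(enc_dim, attn_dim) for _ in range(num_streams))
        self.score = nn.ModuleList(nn.Linear(attn_dim, 1) for _ in range(num_streams))
        self.fuse = nn.Linear(num_streams * enc_dim, enc_dim)

    def forward(self, dec_state, encoder_outputs):
        # dec_state: (batch, dec_dim)
        # encoder_outputs: list of (batch, T_i, enc_dim), one per unit stream;
        # the streams may have different lengths and need no alignment among them.
        contexts = []
        for i, enc in enumerate(encoder_outputs):
            q = self.query_proj[i](dec_state).unsqueeze(1)         # (batch, 1, attn_dim)
            k = self.key_proj[i](enc)                              # (batch, T_i, attn_dim)
            e = self.score[i](torch.tanh(q + k)).squeeze(-1)       # (batch, T_i)
            a = F.softmax(e, dim=-1)                               # attention weights for stream i
            contexts.append(torch.bmm(a.unsqueeze(1), enc).squeeze(1))  # (batch, enc_dim)
        return self.fuse(torch.cat(contexts, dim=-1))              # fused context vector
```

The fused context would then condition the decoder at each output step, so each stream contributes its own soft alignment without any cross-stream alignment being imposed.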