Wed-1-8-4 S2IGAN: Speech-to-Image Generation via Adversarial Learning

Xinsheng Wang (Xi'an Jiaotong University), Tingting Qiao (Zhejiang University), Jihua Zhu (Xi'an Jiaotong University), Alan Hanjalic (Delft University of Technology) and Odette Scharenborg (Multimedia Computing, Delft University of Technology)
Abstract: An estimated half of the world's languages have no written form, making it impossible for these languages to benefit from existing text-based technologies. In this paper, a speech-to-image generation (S2IG) framework is proposed that translates speech descriptions into photo-realistic images without using any text information, thus allowing unwritten languages to potentially benefit from this technology. The proposed S2IG framework, named S2IGAN, consists of a speech embedding network (SEN) and a relation-supervised densely-stacked generative model (RDG). SEN learns the speech embedding under the supervision of the corresponding visual information. Conditioned on the speech embedding produced by SEN, the RDG synthesizes images that are semantically consistent with the corresponding speech descriptions. Extensive experiments on two public benchmark datasets, CUB and Oxford-102, demonstrate the effectiveness of S2IGAN in synthesizing high-quality, semantically consistent images from the speech signal, yielding good performance and a solid baseline for the S2IG task.
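The abstract describes a two-stage conditional pipeline: a speech embedding network maps a spoken description to an utterance-level embedding, and a stacked generative model synthesizes an image conditioned on that embedding. The sketch below is a minimal, hypothetical illustration of such a speech-conditioned generator; the module names (`SpeechEncoder`, `ConditionalGenerator`), layer choices, and dimensions are assumptions for illustration only and are not the authors' SEN/RDG implementation.

```python
# Minimal sketch of a speech-conditioned image generation pipeline, assuming
# log-mel input features and a 64x64 RGB output. All sizes are illustrative.
import torch
import torch.nn as nn


class SpeechEncoder(nn.Module):
    """Maps a sequence of acoustic features to a single utterance-level embedding."""

    def __init__(self, feat_dim=40, embed_dim=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, embed_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, speech):               # speech: (B, T, feat_dim)
        out, _ = self.rnn(speech)            # (B, T, 2 * embed_dim)
        pooled = out.mean(dim=1)             # temporal average pooling
        return self.proj(pooled)             # (B, embed_dim)


class ConditionalGenerator(nn.Module):
    """Synthesizes a 64x64 RGB image from the speech embedding and a noise vector."""

    def __init__(self, embed_dim=256, noise_dim=100):
        super().__init__()
        self.fc = nn.Linear(embed_dim + noise_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),    # 32 -> 64
        )

    def forward(self, speech_emb, noise):
        x = self.fc(torch.cat([speech_emb, noise], dim=1))
        x = x.view(-1, 128, 8, 8)
        return self.deconv(x)                # (B, 3, 64, 64)


if __name__ == "__main__":
    encoder, generator = SpeechEncoder(), ConditionalGenerator()
    speech = torch.randn(4, 200, 40)         # batch of 4 utterances, 200 frames each
    images = generator(encoder(speech), torch.randn(4, 100))
    print(images.shape)                      # torch.Size([4, 3, 64, 64])
```

In the paper's full setup, adversarial and relation-supervision losses train such a generator against discriminators, and the densely-stacked design refines the image at increasing resolutions; those components are omitted here for brevity.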