Khyathi Raghavi Chandu (Carnegie Mellon University) and Alan W Black (Carnegie Mellon University)
Code-Switching (CS) is a prevalent phenomenon in bilingual and multilingual communities, especially on digital and social media platforms. A major problem in this domain is the dearth of substantial corpora to train large-scale neural models. Generating vast amounts of quality synthetic text assists several downstream tasks that rely heavily on language modeling, such as speech recognition and text-to-speech synthesis. We present a novel vantage point that treats CS as style variations between the two participating languages. Our approach needs no external dense annotations such as lexical language IDs; it relies on easily obtainable monolingual corpora without any parallel alignment and on a limited set of naturally CS sentences. We propose a two-stage generative adversarial training approach in which the first stage generates competitive negative examples for CS and the second stage generates more realistic CS sentences. We present experiments on the following language pairs: Spanish-English, Mandarin-English, Hindi-English, and Arabic-French. We show that, through the dual-stage training process, the trends in metrics for generated CS move closer to those of real CS data in these language pairs. We believe this viewpoint of CS as style variations opens new perspectives for modeling various tasks on CS text.