Thu-1-9-2 Conditional Response Augmentation for Dialogue using Knowledge Distillation

Myeongho Jeong(Yonsei University), Seungtaek Choi(Yonsei University), Hojae Han(Yonsei University), Kyungho Kim(Yonsei University) and Seung-won Hwang(Yonsei University)
Abstract: This paper studies the dialogue response selection task. As state-of-the-art approaches are neural models requiring large training sets, data augmentation is essential to overcome the sparsity of observational annotation, where only one observed response is annotated as gold. In this paper, we propose counterfactual augmentation, which considers whether an unobserved utterance would ``counterfactually" replace the labelled response for the given context, and augments the training data only if that is the case. We empirically show that our pipeline improves BERT-based models on two different response selection tasks without incurring annotation overheads.
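The sketch below illustrates the general shape of such a conditional augmentation step, not the authors' exact method: a teacher matching model scores whether an unobserved utterance could replace the gold response for the given context, and the pair is added with the teacher's score as a soft (distilled) label only if it passes a threshold. The function names, the threshold, and the token-overlap stand-in for the teacher are illustrative assumptions; the paper's pipeline uses trained BERT-based models.

```python
# Illustrative sketch only: a trained BERT-based teacher would replace
# the token-overlap stand-in used here.

def teacher_score(context: str, response: str) -> float:
    """Dummy stand-in for a teacher matching model: Jaccard overlap
    between context and response tokens."""
    c, r = set(context.lower().split()), set(response.lower().split())
    return len(c & r) / max(len(c | r), 1)

def augment(context, gold_response, candidates, threshold=0.5):
    """Keep the observed gold pair; add an unobserved candidate only when
    the teacher judges it could counterfactually replace the gold response
    for this context, keeping the teacher score as a soft label."""
    examples = [(context, gold_response, 1.0)]
    for cand in candidates:
        score = teacher_score(context, cand)
        if score >= threshold:                        # conditional augmentation
            examples.append((context, cand, score))   # distilled soft label
    return examples

if __name__ == "__main__":
    ctx = "do you want to grab coffee tomorrow morning"
    gold = "sure, coffee tomorrow morning sounds great"
    pool = ["coffee tomorrow works for me", "my cat is asleep"]
    print(augment(ctx, gold, pool))
```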