Zhenhao He (South China University of Technology), Jiachun Wang (South China University of Technology), and Jian Chen (South China University of Technology)
Abstract:
Recent advances in neural sequence-to-sequence models have led to promising results for end-to-end task-oriented dialog generation. Such frameworks enable a decoder to retrieve knowledge from the dialog history and the knowledge base during generation. However, these models usually rely on learned word embeddings as entity representations, which makes it difficult to handle rare and unknown entities. In this work, we propose a novel enhanced entity representation (EER) that is simultaneously context-sensitive and structure-aware. Our proposed method improves both the decoder's ability to fetch relevant knowledge and the effectiveness of incorporating grounding knowledge into dialog generation. Experimental results on two publicly available dialog datasets show that our model outperforms state-of-the-art data-driven task-oriented dialog models. Moreover, we conduct an Out-of-Vocabulary (OOV) test to demonstrate the superiority of EER in handling the common OOV problem.