Mon-2-9-9 End-to-End Spoken Language Understanding Without Full Transcripts

Hong-Kwang Kuo (IBM T. J. Watson Research Center), Zoltán Tüske (IBM Research), Samuel Thomas (IBM Research AI), Yinghui Huang (IBM), Kartik Audhkhasi (IBM Research), Brian Kingsbury (IBM Research), Gakuto Kurata (IBM Research), Zvi Kons (IBM Haifa Research Lab), Ron Hoory (IBM Haifa Research Lab) and Luis Lastras (IBM Research AI)
Abstract: An essential component of spoken language understanding (SLU) is slot filling: representing the meaning of a spoken utterance using semantic entity labels. In this paper, we develop end-to-end (E2E) spoken language understanding systems that directly convert speech input to semantic entities and investigate whether these E2E SLU models can be trained solely on semantic entity annotations, without word-for-word transcripts. Training such models is very useful because it can drastically reduce the cost of data collection. We created two types of such speech-to-entities models, a CTC model and an attention-based encoder-decoder model, by adapting models originally trained for speech recognition. Because our experiments involve speech input, these systems need to correctly recognize both the entity label and the words representing the entity value. In our speech-to-entities experiments on the ATIS corpus, both the CTC and attention models showed an impressive ability to skip non-entity words: there was little degradation when training on just entities versus full transcripts. We also explored the scenario where the entities appear in an order not necessarily related to their spoken order in the utterance. With its ability to reorder, the attention model did remarkably well, with only about 2% degradation in speech-to-bag-of-entities F1 score.
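To make the training setup concrete, the following is a minimal sketch of how entity-only training targets might be derived from slot-annotated ATIS-style utterances, dropping all non-entity words as the abstract describes. The BIO tag format, slot names, and the `entity_targets` helper are illustrative assumptions, not the authors' actual data pipeline.

```python
def entity_targets(words, slot_tags):
    """Keep only words inside entities, each prefixed by its slot label.

    words:     list of tokens, e.g. ["fly", "from", "boston", "to", "denver"]
    slot_tags: BIO tags aligned to words, e.g. ["O", "O", "B-fromloc",
               "O", "B-toloc"]
    Returns a target token sequence containing entity labels and values
    only, with all non-entity words skipped.
    """
    target = []
    for word, tag in zip(words, slot_tags):
        if tag == "O":
            continue                # non-entity word: skipped entirely
        if tag.startswith("B-"):
            target.append(tag)      # emit the entity label once per entity
        target.append(word)         # emit the word carrying the entity value
    return target


words = "i want to fly from boston to denver".split()
tags = ["O", "O", "O", "O", "O", "B-fromloc", "O", "B-toloc"]
print(entity_targets(words, tags))
# ['B-fromloc', 'boston', 'B-toloc', 'denver']
```

A CTC or attention-based ASR model can then be adapted to emit such label-plus-value sequences directly from speech; in the reordered-entities scenario the abstract mentions, the target sequence would simply be permuted, which only the attention model can accommodate.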