Thu-3-8-4 Efficient MDI Adaptation for N-gram Language Models

Ruizhe Huang (CLSP, Johns Hopkins University), Ashish Arora (Johns Hopkins University), Ke Li (Johns Hopkins University), Dan Povey (Johns Hopkins University) and Sanjeev Khudanpur (Johns Hopkins University)
Abstract: This paper presents an efficient algorithm for n-gram language model adaptation under the minimum discrimination information (MDI) principle, where an out-of-domain language model is adapted to satisfy the marginal probability constraints of the in-domain data. The main challenge for MDI language model adaptation is its computational complexity. By exploiting the backoff structure of n-gram models together with the hierarchical training method originally proposed for maximum entropy (ME) language models, we show that MDI adaptation can be computed in time linear in the size of the inputs per iteration. The complexity matches that of ME models, even though MDI is more general than ME. This makes MDI adaptation practical for large corpora and vocabularies. Experimental results confirm the scalability of our algorithm on very large datasets; MDI adaptation yields slightly worse perplexity but better word error rates than simple linear interpolation.
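For readers unfamiliar with the setup, the following is a minimal sketch of the standard MDI adaptation problem; the notation (out-of-domain model p_B, constraint features f_i, target marginals K_i) is illustrative and not taken verbatim from the paper.

```latex
% Hedged sketch of MDI adaptation (illustrative notation):
% find the model p closest in KL divergence to the out-of-domain
% model p_B, subject to in-domain marginal constraints.
\begin{aligned}
  p^{*} &= \arg\min_{p}\; D(p \,\|\, p_B)
         = \arg\min_{p} \sum_{h,w} p(h,w)\,
           \log \frac{p(w \mid h)}{p_B(w \mid h)} \\
  \text{s.t.}\quad
        & \sum_{h,w} p(h,w)\, f_i(h,w) = K_i
          \quad \text{for each constraint } i, \\
  \text{solution:}\quad
        & p(w \mid h)
          = \frac{p_B(w \mid h)\,
              \exp\!\big(\textstyle\sum_i \lambda_i f_i(h,w)\big)}
            {Z_{\lambda}(h)}.
\end{aligned}
```

The adapted model is thus an exponential tilting of p_B, and ME is recovered as the special case where p_B is uniform; the per-iteration cost is dominated by the normalizers Z_lambda(h), which the paper's algorithm computes efficiently via the n-gram backoff structure.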