Wed-1-2-5 Low Latency Speech Recognition using End-to-End Prefetching

Shuo-Yiin Chang (Google), Bo Li (Google), David Rybach (Google), Yanzhang He (Google), Wei Li (Google), Tara Sainath (Google) and Trevor Strohman (Google)
Abstract: Latency is a crucial metric for streaming speech recognition systems. In this paper, we reduce latency by fetching responses early based on partial recognition results, a technique we refer to as prefetching. Specifically, prefetching works by submitting partial recognition results for subsequent processing before the recognition result is finalized. If the partial result matches the final recognition result, the early-fetched response can be delivered to the user instantly. Prefetching can be triggered multiple times for a single query, but this leads to multiple rounds of downstream processing and increases computation costs. It is hence desirable to fetch the result sooner while limiting the number of prefetches. We investigate a series of prefetching decision models, including decoder-silence-based prefetching, acoustic-silence-based prefetching, and end-to-end prefetching. We demonstrate that the proposed prefetching mechanism reduces latency by 200 ms for a system consisting of a streaming first-pass model using a recurrent neural network transducer and a non-streaming second-pass rescoring model using Listen, Attend and Spell. We observe that end-to-end prefetching provides the best trade-off between cost and latency and is 100 ms faster than silence-based prefetching at a fixed prefetch rate.
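The cost/latency trade-off described in the abstract can be illustrated with a minimal simulation. The sketch below is purely illustrative and is not the paper's implementation: `run_prefetching` and the toy `ends_like_query` rule are assumptions standing in for the learned decoder-silence, acoustic-silence, and end-to-end decision models. It shows the two quantities the paper balances: the number of prefetch triggers (downstream cost) and whether the last prefetched partial matches the final result (a latency win).

```python
# Illustrative sketch only; all names are hypothetical, not from the paper.
def run_prefetching(partials, final, should_prefetch):
    """Simulate prefetching over a stream of partial hypotheses.

    partials:        partial recognition results, in arrival order
    final:           the finalized recognition result
    should_prefetch: decision function(partial) -> bool, standing in for
                     the paper's silence-based or end-to-end models
    Returns (num_prefetches, hit): num_prefetches counts downstream
    processing rounds; hit is True when the most recent prefetched
    partial equals the final result, so the early-fetched response
    can be served instantly.
    """
    prefetched = None
    num_prefetches = 0
    for partial in partials:
        if should_prefetch(partial) and partial != prefetched:
            prefetched = partial      # submit for downstream processing
            num_prefetches += 1       # each trigger costs one extra round
    return num_prefetches, prefetched == final


# Toy decision rule (an assumption): prefetch once the partial looks
# like a complete utterance.
def ends_like_query(partial):
    return partial.endswith(".") or partial.endswith("?")


n, hit = run_prefetching(
    ["what is", "what is the weather", "what is the weather today."],
    "what is the weather today.",
    ends_like_query,
)
# One prefetch was triggered and it matched the final result.
```

A more aggressive `should_prefetch` fires earlier and more often, lowering latency but raising `num_prefetches`; the paper's end-to-end model is reported to give the best position on that curve.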