Hirofumi Inaguma (Kyoto University), Masato Mimura (Kyoto University), and Tatsuya Kawahara (Kyoto University)
We investigate monotonic multihead attention (MMA), extending hard monotonic attention to Transformer-based automatic speech recognition (ASR) for online streaming applications.
For streaming inference, all monotonic attention (MA) heads must learn proper alignments, because the next token cannot be generated until every head detects the corresponding token boundary.
However, we found that not all MA heads learn alignments with a naïve implementation.
To encourage every head to learn alignments properly, we propose HeadDrop regularization, which stochastically masks out a subset of heads during training.
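As a rough illustration of the idea, the following sketch applies head-level dropout to a set of per-head context vectors; the function name, dropout probability, and exact masking semantics are assumptions for illustration, not the paper's implementation:

```python
import random

def headdrop(head_outputs, p_drop=0.5, training=True, rng=random):
    """Hypothetical HeadDrop sketch: during training, zero out each
    MA head's output independently with probability p_drop, so no head
    can rely on the others and each must learn usable alignments.

    head_outputs: list of per-head context vectors (lists of floats).
    """
    if not training:
        # At inference time all heads are kept, as in standard dropout.
        return head_outputs
    masked = []
    for h in head_outputs:
        if rng.random() < p_drop:
            masked.append([0.0] * len(h))  # drop this head entirely
        else:
            masked.append(h)               # keep this head unchanged
    return masked
```

With `p_drop=0.0` every head survives, and with `p_drop=1.0` every head is zeroed; real implementations would typically also rescale the surviving heads, which this sketch omits for brevity.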
Furthermore, we propose pruning redundant heads to improve consensus among the remaining heads on boundary detection and to prevent the delayed token generation such heads cause.
We also extend chunkwise attention on each MA head to its multihead counterpart.
Finally, we propose head-synchronous beam search decoding to guarantee stable streaming inference.
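To make the synchronization idea concrete, here is a minimal sketch of one decoding step under an assumed parameterization: if some heads have detected a boundary but others lag beyond a fixed wait threshold, the lagging heads are forced to fire so the decoder never stalls. The function name, `wait` parameter, and forcing rule are illustrative assumptions, not the paper's exact procedure:

```python
def sync_boundary(fired, current_frame, first_fire_frame, wait=8):
    """Hypothetical head-synchronous step: `fired[i]` says whether MA
    head i has detected the next token boundary; `first_fire_frame` is
    the encoder frame at which the earliest head fired (None if no head
    has fired yet). Once the fastest head has waited `wait` frames,
    force every remaining head to fire so the token can be emitted.
    """
    if first_fire_frame is None:
        return fired  # no head has fired; keep waiting
    if current_frame - first_fire_frame >= wait:
        return [True] * len(fired)  # force lagging heads to fire
    return fired
```

A token would then be emitted only once all entries are `True`, which bounds how long a single slow head can delay streaming output.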