Albert Zeyer (Human Language Technology and Pattern Recognition Group (Chair of Computer Science 6), Computer Science Department, RWTH Aachen University), André Merboldt (RWTH Aachen University), Ralf Schlüter (Chair of Computer Science 6, RWTH Aachen University) and Hermann Ney (RWTH Aachen University)
Abstract:
The RNN transducer is a promising end-to-end model candidate. We compare the original training criterion, i.e. the full marginalization over all alignments, to the commonly used maximum approximation, which simplifies, improves and speeds up our training. We also generalize from the original neural network model and study more powerful models, made possible by the maximum approximation. We further generalize the output label topology to cover RNN-T, RNA and CTC. We perform several studies across all these aspects, including a study on the effect of external alignments. We find that the transducer model generalizes much better on longer sequences than the attention model. Our final transducer model outperforms our attention model on Switchboard 300h by over 6% relative in WER.
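The contrast between the full-sum criterion and the maximum approximation mentioned above can be sketched in log space: the full sum marginalizes over all alignment paths via log-sum-exp, while the maximum approximation keeps only the single best path. The path scores below are hypothetical illustrative values, not from the paper.

```python
import math

# Hypothetical log-probabilities log p(a, y | x) of three alignment
# paths a for one (input, output-label-sequence) pair.
log_path_scores = [-2.3, -1.1, -4.0]

def logsumexp(xs):
    # Numerically stable log(sum(exp(x))) for the full-sum criterion.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Full marginalization over all alignments: log sum_a p(a, y | x)
full_sum = logsumexp(log_path_scores)

# Maximum approximation: log max_a p(a, y | x)
max_approx = max(log_path_scores)

# The maximum approximation is always a lower bound on the full sum.
assert max_approx <= full_sum
```

In practice the sum (or max) runs over the full alignment lattice via dynamic programming (forward algorithm resp. Viterbi), not an explicit path enumeration; the max variant is what makes the more powerful models in the paper tractable.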