Shuwen Deng (Friedrich-Alexander University Erlangen), Wolfgang Mack (International Audio Laboratories Erlangen), and Emanuël Habets (International Audio Laboratories Erlangen)
The reverberation time, T60, is an important acoustic parameter in speech and acoustic signal processing. Often, the T60 is unknown, and blind estimation from a single-channel measurement is required. State-of-the-art T60 estimation is achieved by a convolutional neural network (CNN) that maps a feature representation of the speech to the T60. The temporal input length of the CNN is fixed. Time-varying scenarios, e.g., robot audition, require continuous T60 estimation in an online fashion, which is computationally expensive with a CNN. We propose to use a convolutional recurrent neural network (CRNN) for blind T60 estimation, as it combines the parametric efficiency of CNNs with the online estimation capability of recurrent neural networks and, in contrast to CNNs, can process time sequences of variable length. We evaluated the proposed CRNN on the Acoustic Characterization of Environments Challenge dataset for different input lengths. Our proposed method outperforms the state-of-the-art CNN approach even for shorter inputs, at the cost of more trainable parameters.
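To illustrate the general idea of a CRNN for online, variable-length estimation, the following is a minimal NumPy sketch: a convolutional front-end over a feature sequence feeds a GRU whose hidden state is read out to a scalar T60 estimate per frame. All layer sizes, weights, and names here are hypothetical stand-ins, not the authors' trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

F, C, K, H = 32, 8, 5, 16  # feature bins, conv channels, kernel size, GRU size (all illustrative)

# Random stand-ins for trained parameters.
Wc = rng.standard_normal((K, F, C)) * 0.1
Wz, Uz = rng.standard_normal((C, H)) * 0.1, rng.standard_normal((H, H)) * 0.1
Wr, Ur = rng.standard_normal((C, H)) * 0.1, rng.standard_normal((H, H)) * 0.1
Wh, Uh = rng.standard_normal((C, H)) * 0.1, rng.standard_normal((H, H)) * 0.1
w_out = rng.standard_normal(H) * 0.1

def conv_frontend(x):
    """Valid 1-D convolution over time with ReLU: (T, F) -> (T - K + 1, C)."""
    T = x.shape[0]
    y = np.stack([np.tensordot(x[t:t + K], Wc, axes=([0, 1], [0, 1]))
                  for t in range(T - K + 1)])
    return np.maximum(y, 0.0)

def crnn_t60(x):
    """Return one T60 estimate per frame; works for any input length >= K."""
    h = np.zeros(H)
    estimates = []
    for v in conv_frontend(x):
        z = sigmoid(v @ Wz + h @ Uz)  # GRU update gate
        r = sigmoid(v @ Wr + h @ Ur)  # GRU reset gate
        h = (1 - z) * h + z * np.tanh(v @ Wh + (r * h) @ Uh)
        estimates.append(h @ w_out)   # linear readout to a scalar T60
    return np.array(estimates)

# Variable-length inputs: the same network processes both in an online fashion.
short, long_seq = rng.standard_normal((20, F)), rng.standard_normal((100, F))
print(crnn_t60(short).shape, crnn_t60(long_seq).shape)  # (16,) (96,)
```

Because the recurrent state carries the temporal context, each new frame updates the estimate in constant time, whereas a fixed-input-length CNN must reprocess a full window at every step.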