Wed-3-8-5 Gated Recurrent Fusion of Spatial and Spectral Features for Multi-channel Speech Separation with Deep Embedding Representations

Cunhang Fan(Institute of Automation, Chinese Academy of Sciences), Jianhua Tao(National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences), Bin Liu(Institute of Automation, Chinese Academy of Sciences), Jiangyan Yi(Institute of Automation, Chinese Academy of Sciences) and Zhengqi Wen(Institute of Automation, Chinese Academy of Sciences)
Abstract: Multi-channel deep clustering (MDC) has achieved good performance for speech separation. However, MDC only uses the spatial features as additional input and does not fuse them with the spectral features effectively, so it is difficult to learn the mutual relationship between spatial and spectral features. Besides, the training objective of MDC is defined on the embedding vectors rather than on the real separated sources, which may limit separation performance. In this work, we treat spatial and spectral features as two different modalities. We propose the gated recurrent fusion (GRF) method to adaptively select and fuse the relevant information from spectral and spatial features by making use of gate and memory modules. In addition, to address the training objective problem of MDC, the real separated sources are used as the training objectives. Specifically, we apply the deep clustering network to extract deep embedding features. Instead of using unsupervised K-means clustering to estimate binary masks, another supervised network is utilized to learn soft masks from these deep embedding features. Our experiments are conducted on a spatialized reverberant version of the WSJ0-2mix dataset. Experimental results show that the proposed method outperforms the MDC baseline and even the oracle ideal binary mask.
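To make the fusion idea in the abstract concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a gated recurrent fusion step over per-frame spectral and spatial features, using a GRU-style gate and a running memory state. The module and argument names (GatedRecurrentFusion, spectral_dim, spatial_dim, fused_dim) and the feature dimensions in the usage example are assumptions; the paper's exact gate/memory formulation may differ.

```python
# Hedged sketch of gated recurrent fusion of two feature modalities.
# Assumed names and shapes; not the authors' exact architecture.
import torch
import torch.nn as nn


class GatedRecurrentFusion(nn.Module):
    """Fuse per-frame spectral and spatial features with a GRU-style gate
    and a running memory state, so the network can adaptively weight the
    two modalities at each time step."""

    def __init__(self, spectral_dim, spatial_dim, fused_dim):
        super().__init__()
        in_dim = spectral_dim + spatial_dim
        self.update_gate = nn.Linear(in_dim + fused_dim, fused_dim)
        self.reset_gate = nn.Linear(in_dim + fused_dim, fused_dim)
        self.candidate = nn.Linear(in_dim + fused_dim, fused_dim)

    def forward(self, spectral, spatial):
        # spectral: (batch, time, spectral_dim); spatial: (batch, time, spatial_dim)
        x = torch.cat([spectral, spatial], dim=-1)
        batch, time, _ = x.shape
        h = x.new_zeros(batch, self.update_gate.out_features)  # fused memory state
        outputs = []
        for t in range(time):
            xt = x[:, t]
            z = torch.sigmoid(self.update_gate(torch.cat([xt, h], dim=-1)))   # update gate
            r = torch.sigmoid(self.reset_gate(torch.cat([xt, h], dim=-1)))    # reset gate
            h_tilde = torch.tanh(self.candidate(torch.cat([xt, r * h], dim=-1)))
            h = (1 - z) * h + z * h_tilde  # gated update selects relevant information
            outputs.append(h)
        return torch.stack(outputs, dim=1)  # (batch, time, fused_dim)


if __name__ == "__main__":
    # Example dimensions are placeholders (e.g. log-magnitude STFT frames and
    # inter-channel phase-difference features), not the paper's settings.
    grf = GatedRecurrentFusion(spectral_dim=129, spatial_dim=36, fused_dim=128)
    spec = torch.randn(2, 50, 129)
    spat = torch.randn(2, 50, 36)
    fused = grf(spec, spat)
    print(fused.shape)  # torch.Size([2, 50, 128])
```

In a full system along the lines described above, the fused features would feed a deep clustering network to produce embedding vectors, and a further supervised network would map those embeddings to soft masks trained against the real separated sources.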