Hyewon Han (Yonsei University), Soo-Whan Chung (Yonsei University), and Hong-Goo Kang (Yonsei University)
Many approaches derive information about a single speaker's identity from speech by learning to recognize consistent characteristics of its acoustic parameters.
However, determining identity information is challenging when multiple speakers are active concurrently in a given signal.
In this paper, we propose a novel deep speaker representation strategy that can reliably extract multiple speaker identities from overlapped speech.
We design a network that can extract a high-level embedding that contains information about each speaker's identity from a given mixture.
Unlike conventional approaches that need reference acoustic features for training, our proposed algorithm only requires the speaker identity labels of the overlapped speech segments.
We demonstrate the effectiveness and usefulness of our algorithm on a speaker verification task and in a speech separation system conditioned on target speaker embeddings obtained through the proposed method.
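To make the training setup concrete, the sketch below illustrates one way such a system could be wired up: an encoder maps a mixture to one embedding per concurrent speaker, and the only supervision is a multi-hot vector of the speaker identities present in the overlapped segment (via binary cross-entropy), with no per-speaker reference acoustic features. All dimensions, the linear "encoder", and the head/pooling choices are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions; the abstract does not specify an architecture).
T, F = 200, 40   # frames x feature bins of the input mixture
D = 64           # speaker-embedding size
N_SPK = 10       # speakers in the training set
K = 2            # concurrent speakers per mixture

# Hypothetical parameters: a linear "encoder" with K output heads standing in
# for the deep network, plus a shared speaker-classification layer.
W_heads = rng.standard_normal((K, F, D)) * 0.01
W_cls = rng.standard_normal((D, N_SPK)) * 0.01

def extract_embeddings(mixture):
    """Mean-pool the mixture over time, then project to K speaker embeddings."""
    pooled = mixture.mean(axis=0)                              # (F,)
    embs = np.stack([pooled @ W_heads[k] for k in range(K)])   # (K, D)
    # L2-normalize each embedding, as is common for speaker representations.
    return embs / np.linalg.norm(embs, axis=1, keepdims=True)

def multilabel_identity_loss(embs, active):
    """Binary cross-entropy against the multi-hot speaker labels of the
    overlapped segment -- the only supervision the abstract calls for."""
    logits = (embs @ W_cls).max(axis=0)          # (N_SPK,) pooled over heads
    probs = 1.0 / (1.0 + np.exp(-logits))
    probs = np.clip(probs, 1e-7, 1.0 - 1e-7)     # numerical safety for log()
    return -np.mean(active * np.log(probs) + (1 - active) * np.log(1 - probs))

# Usage on a random "mixture" labeled with two active speaker identities.
mixture = rng.standard_normal((T, F))
embs = extract_embeddings(mixture)
labels = np.zeros(N_SPK)
labels[[2, 7]] = 1.0
loss = multilabel_identity_loss(embs, labels)
```

In a real system, gradients of this loss would update the encoder so that each head captures one speaker's identity; the resulting embeddings can then condition a downstream separator or feed a verification back-end.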