Tue-1-2-10 Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification

Xu Li (The Chinese University of Hong Kong), Na Li (Tencent), Jinghua Zhong (The Chinese University of Hong Kong), Xixin Wu (University of Cambridge), Xunying Liu (The Chinese University of Hong Kong), Dan Su (Tencent AI Lab, Shenzhen), Dong Yu (Tencent AI Lab) and Helen Meng (The Chinese University of Hong Kong)
Abstract: Recently, adversarial attacks on automatic speaker verification (ASV) systems have attracted widespread attention, as they pose severe threats to ASV systems. However, methods to defend against such attacks are limited. Existing approaches mainly focus on retraining ASV systems with adversarial data augmentation. Moreover, countermeasure robustness against different attack settings is insufficiently investigated. Orthogonal to prior approaches, this work proposes to defend ASV systems against adversarial attacks with a separate detection network, rather than augmenting adversarial data into ASV training. A VGG-like binary classification detector is introduced and demonstrated to be effective at detecting adversarial samples. To investigate detector robustness in a realistic defense scenario where unseen attack settings may exist, we analyze the impact of various kinds of unseen attack settings and observe that the detector is robust against unseen substitute ASV systems (6.27% EER_det degradation in the worst case), but has weak robustness against unseen perturbation methods (50.37% EER_det degradation in the worst case). The weak robustness against unseen perturbation methods indicates a direction for developing stronger countermeasures.
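The abstract describes the detector only as a "VGG-like" binary classifier; the sketch below is one plausible instantiation rather than the authors' exact model. The input feature shape (spectrogram-like patches), channel widths, and layer counts are all assumptions for illustration.

```python
# Illustrative sketch only: the paper specifies a "VGG-like" binary
# classifier but not its exact configuration. Layer sizes, input feature
# shape (e.g., log-mel spectrogram patches), and channel widths below
# are assumptions, not the authors' published architecture.
import torch
import torch.nn as nn


class VGGLikeDetector(nn.Module):
    """Binary classifier: genuine (class 0) vs. adversarial (class 1)."""

    def __init__(self, in_channels: int = 1):
        super().__init__()

        def block(c_in: int, c_out: int) -> nn.Sequential:
            # Two stacked 3x3 convolutions followed by 2x2 max pooling,
            # mirroring the characteristic VGG pattern.
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )

        self.features = nn.Sequential(
            block(in_channels, 32),
            block(32, 64),
            block(64, 128),
        )
        # Global average pooling keeps the classification head independent
        # of the exact time-frequency dimensions of the input features.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(128, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time), e.g., log-mel spectrograms
        h = self.pool(self.features(x)).flatten(1)
        return self.classifier(h)  # logits for {genuine, adversarial}


# Usage: score a batch of 64 spectrogram patches (80 mel bins x 200 frames;
# these dimensions are likewise assumed for the example).
detector = VGGLikeDetector()
logits = detector(torch.randn(64, 1, 80, 200))
probs = logits.softmax(dim=-1)  # probs[:, 1] = P(adversarial)
```

Training such a detector as a separate network, rather than retraining the ASV system on adversarial data, is the paper's central design choice: the ASV model stays untouched, and the detector can be evaluated independently against seen and unseen attack settings.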