Detecting synthetic speech using long term magnitude and phase information

Bibliographic Details
Published in: 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP)
Main Authors: Tian, Xiaohai, Du, Steven, Xiao, Xiong, Xu, Haihua, Chng, Eng Siong, Li, Haizhou
Other Authors: 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP), Temasek Laboratories, NTU-UBC Research Centre of Excellence in Active Living for the Elderly, School of Computer Science and Engineering
Format: Conference Object
Language: English
Published: 2015
Online Access: http://hdl.handle.net/10220/47055
https://doi.org/10.1109/ChinaSIP.2015.7230476
Description
Summary: Synthetic speech refers to speech signals generated by text-to-speech (TTS) and voice conversion (VC) techniques. Such signals pose a threat to speaker verification (SV) systems, as an attacker may use TTS or VC to synthesize a speaker's voice to spoof the SV system. To address this challenge, we study the detection of synthetic speech using long-term magnitude and phase information of speech. As most TTS and VC techniques rely on vocoders for speech analysis and synthesis, we focus on differentiating vocoder-generated speech from natural speech. The log magnitude spectrum and two phase-based features, the instantaneous frequency derivative and the modified group delay, were studied in this work. We conducted experiments on the CMU-ARCTIC database using various speech features and a neural network classifier. During training, synthetic speech detection is formulated as a two-class classification problem and the neural network is trained to differentiate synthetic speech from natural speech. During testing, the posterior scores generated by the neural network are used to detect synthetic speech. The synthetic speech used in training and testing is generated by different types of vocoders and VC methods. Experimental results show that long-term information of up to 0.3 s is important for synthetic speech detection. In addition, the high-dimensional log magnitude spectrum features significantly outperform the low-dimensional MFCC features, showing that retaining detailed spectral information is important for detecting synthetic speech. Furthermore, the two phase-based features are found to perform well and to be complementary to the log magnitude spectrum features. The fusion of these features produces an equal error rate (EER) of 0.09%.
Funding: NRF (National Research Foundation, Singapore)
Version: Accepted version
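
The summary only names the features and classifier, so the following Python/NumPy sketch is an illustration rather than the authors' pipeline: it shows one common way to compute a per-frame log magnitude spectrum and a modified group delay spectrum, and to stack roughly 0.3 s of frame context before classification. The frame length, hop, FFT size, and MGD parameters (alpha, gamma, lifter) are assumed values, not taken from the paper, and the instantaneous frequency derivative feature is omitted for brevity.

import numpy as np


def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping, Hamming-windowed frames
    (25 ms frames with a 10 ms hop at 16 kHz -- assumed values)."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    win = np.hamming(frame_len)
    return np.stack([x[i * hop:i * hop + frame_len] * win
                     for i in range(n_frames)])


def log_magnitude_spectrum(frames, n_fft=512):
    """High-dimensional log magnitude spectrum, one vector per frame."""
    return np.log(np.abs(np.fft.rfft(frames, n=n_fft, axis=1)) + 1e-10)


def modified_group_delay(frames, n_fft=512, alpha=0.4, gamma=0.9, lifter=30):
    """Modified group delay spectrum in its common textbook form."""
    n = np.arange(frames.shape[1])
    X = np.fft.rfft(frames, n=n_fft, axis=1)           # DFT of x[n]
    Y = np.fft.rfft(frames * n, n=n_fft, axis=1)       # DFT of n*x[n]

    # Cepstrally smoothed magnitude spectrum for the denominator.
    cep = np.fft.irfft(np.log(np.abs(X) + 1e-10), axis=1)
    cep[:, lifter:-lifter] = 0.0                       # keep low quefrencies
    smooth = np.abs(np.exp(np.fft.rfft(cep, axis=1)))

    tau = (X.real * Y.real + X.imag * Y.imag) / (smooth ** (2 * gamma) + 1e-10)
    return np.sign(tau) * np.abs(tau) ** alpha


def stack_context(feats, context=15):
    """Stack +/- `context` neighbouring frames so a classifier sees
    long-term information (31 frames * 10 ms hop ~= 0.3 s)."""
    padded = np.pad(feats, ((context, context), (0, 0)), mode='edge')
    return np.hstack([padded[i:i + len(feats)]
                      for i in range(2 * context + 1)])


if __name__ == "__main__":
    x = np.random.randn(16000)             # stand-in for 1 s of 16 kHz speech
    frames = frame_signal(x)
    feats = np.hstack([log_magnitude_spectrum(frames),
                       modified_group_delay(frames)])
    print(stack_context(feats).shape)      # -> (n_frames, 31 * feature_dim)

In a full system along the lines described above, each stacked feature vector would be fed to a two-class neural network, and the posterior score for the synthetic class would be thresholded to make the detection decision.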