Yilin Pan (University of Sheffield), Bahman Mirheidari (Department of Computer Science, University of Sheffield), Markus Reuber (Academic Neurology Unit, Royal Hallamshire Hospital), Annalena Venneri (Sheffield Institute for Translational Neuroscience, University of Sheffield), Daniel Blackburn (Sheffield Institute for Translational Neuroscience, University of Sheffield) and Heidi Christensen (University of Sheffield)
Speech- and language-based automatic dementia detection is of interest because it is non-invasive, low-cost and potentially able to improve diagnostic accuracy. The collected data are mostly audio recordings of spoken language, which can be used directly for acoustic-based analysis; to extract linguistic information, an automatic speech recognition (ASR) system is used to generate transcriptions. However, extracting reliable acoustic features is difficult when the acoustic quality of the data is poor, as is the case with DementiaBank, the largest open-source dataset for Alzheimer’s disease classification. In this paper, we explore how to improve the robustness of acoustic feature extraction by using time-alignment information and confidence scores from the ASR system to identify audio segments of good quality. In addition, we design rhythm-inspired features and combine them with the acoustic features. By classifying the combined features with a bidirectional-LSTM attention network, the F-measure improves from 62.15% to 70.75% when only the high-quality segments are used. We then apply the same approach to our previously proposed hierarchical network using linguistic features, improving the F-measure from 74.37% to 77.25%. By combining the acoustic and linguistic systems, a state-of-the-art F-measure of 78.34% is achieved on the DementiaBank task.
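The core idea of selecting good-quality audio via ASR output can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's exact pipeline: the segment representation (start, end, word, confidence), the confidence threshold, and the minimum-duration check are all assumptions made for the example.

```python
# Hypothetical sketch: keep only ASR-aligned segments whose confidence
# score (and duration) suggest the underlying audio is of good quality,
# so that acoustic features are extracted from reliable regions only.
# The threshold values below are illustrative, not from the paper.

def filter_segments(asr_output, min_conf=0.8, min_dur=0.2):
    """Return time-aligned segments passing confidence and duration checks.

    asr_output: iterable of (start_sec, end_sec, word, confidence) tuples.
    """
    kept = []
    for start, end, word, conf in asr_output:
        if conf >= min_conf and (end - start) >= min_dur:
            kept.append((start, end, word, conf))
    return kept

# Toy ASR output for a short utterance (values invented for illustration)
segments = [
    (0.00, 0.45, "the", 0.95),
    (0.45, 0.60, "uh", 0.40),     # low confidence: likely noisy audio
    (0.60, 1.10, "cookie", 0.91),
    (1.10, 1.18, "jar", 0.88),    # too short for stable acoustic features
]
good = filter_segments(segments)  # keeps "the" and "cookie" only
```

Acoustic features would then be computed over the retained spans only, which is the mechanism behind the reported improvement from 62.15% to 70.75% F-measure.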