Lubna Alhinti (University of Sheffield), Stuart Cunningham (University of Sheffield) and Heidi Christensen (University of Sheffield)
Effective communication relies on the comprehension of both verbal and nonverbal information. People with dysarthria may lose their ability to produce intelligible and audible speech sounds, which in time may affect their ability to convey emotions, as these are mostly expressed through nonverbal signals. Recent research shows some promise in automatically recognising the verbal part of dysarthric speech; however, this is the first study to investigate the ability to automatically recognise the nonverbal part. A parallel database of dysarthric and typical emotional speech is collected, and approaches to discriminating between emotions using models trained on either dysarthric (speaker-dependent, matched) or typical (speaker-independent, unmatched) speech are investigated for four speakers with dysarthria caused by cerebral palsy and Parkinson’s disease. Promising results are achieved in both scenarios using SVM classifiers, opening new doors to improved, more expressive voice input communication aids.
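The classification setup described above could be sketched as follows. This is a minimal illustration only, not the authors' pipeline: the acoustic features, emotion labels, and data are hypothetical stand-ins (synthetic vectors for four emotion classes), with an RBF-kernel SVM as a common baseline choice.

```python
# Hypothetical sketch of an SVM emotion classifier of the kind described in
# the abstract. The real work uses acoustic features from recorded dysarthric
# and typical speech; here synthetic per-utterance feature vectors stand in.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
emotions = ["angry", "happy", "neutral", "sad"]  # assumed label set

# Synthetic stand-in for per-utterance acoustic features (e.g. prosodic stats):
# 50 utterances per emotion, 12-dimensional feature vectors.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(50, 12)) for i in range(4)])
y = np.repeat(emotions, 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# RBF-kernel SVM with feature standardisation, a typical baseline setup.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In the matched (speaker-dependent) scenario described in the abstract, such a model would be trained and evaluated on the same dysarthric speaker's data; in the unmatched (speaker-independent) scenario, it would be trained on typical speakers and tested on dysarthric speech.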