Hira Dhamyal (Carnegie Mellon University), Shahan Ali Memon (Language Technologies Institute, Carnegie Mellon University), Bhiksha Raj (Carnegie Mellon University), and Rita Singh (Carnegie Mellon University)
Can vocal emotions be emulated? This question has been a recurrent concern of the speech community and has been vigorously investigated, fueled further by its link to the validity of acted-emotion databases. Much of the research on speech and vocal emotion has relied on acted-emotion databases as valid proxies for natural emotions. To build models that generalize to natural settings, it is crucial to work with valid prototypes: ones that can reliably be assumed to represent natural emotions. More concretely, emulated emotions must be studied against natural emotions in terms of their physiological and psychological concomitants. In this paper, we present a systematic, at-scale study of the differences between natural and acted vocal emotions. We use a self-attention-based emotion classification model to probe the phonetic bases of emotion by discovering the phonemes most attended to for each emotion class. We then compare the importance and distribution of these attended phonemes across the acted and natural classes. Our tests show significant differences in the manner and choice of phonemes between acted and natural speech, suggesting moderate to low validity and value in using acted-speech databases for emotion classification tasks.
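To make the attention-based phoneme analysis concrete, the following is a minimal sketch, not the paper's actual code, of how per-phoneme attention mass could be aggregated once a trained model has produced frame-level attention weights and a frame-to-phoneme alignment is available. The function name `phoneme_attention`, the toy alignment, and the toy weights are all hypothetical illustrations.

```python
import numpy as np

def phoneme_attention(attn_weights, frame_phonemes):
    """Aggregate attention mass per phoneme for one utterance.

    attn_weights: array of shape (T,), attention over T frames (sums to 1).
    frame_phonemes: length-T sequence of phoneme labels, one per frame,
        e.g. from a forced alignment.
    Returns a dict mapping phoneme -> total attention mass received.
    """
    scores = {}
    for w, ph in zip(attn_weights, frame_phonemes):
        scores[ph] = scores.get(ph, 0.0) + float(w)
    return scores

# Toy example: 6 frames, attention concentrated on the vowel /aa/.
attn = np.array([0.05, 0.10, 0.40, 0.30, 0.10, 0.05])
phones = ["s", "s", "aa", "aa", "t", "t"]
print(phoneme_attention(attn, phones))
```

Accumulating such per-utterance scores separately over acted and natural utterances, then comparing the resulting phoneme distributions (e.g. with a significance test), is one plausible way to realize the comparison the abstract describes.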