Thu-3-6-5 Automatic Glottis Detection and Segmentation in Stroboscopic Videos Using Convolutional Networks

Divya Degala (Indian Institute of Science), Achuth Rao M V (Indian Institute of Science), Rahul Krishnamurthy (Manipal Academy of Higher Education), Pebbili Gopikishore (All India Institute of Speech and Hearing), Veeramani Priyadharshini (All India Institute of Speech and Hearing), Prakash T K (All India Institute of Speech and Hearing) and Prasanta Ghosh (Assistant Professor, EE, IISc)
Abstract: Laryngeal videostroboscopy is widely used for the analysis of glottal vibration patterns, and this analysis plays a crucial role in the diagnosis of voice disorders. It is essential to study these patterns using automatic glottis segmentation methods to avoid subjectivity in diagnosis. Glottis detection is an essential step before glottis segmentation. This paper considers the problem of automatic glottis segmentation using U-Net based deep convolutional networks. For accurate glottis detection, we train a fully convolutional network with a large number of glottal and non-glottal images. For glottis segmentation, we consider U-Net with three different weight initialization schemes: 1) random weight initialization (RI), 2) detection network weight initialization (DNI) and 3) detection network initialization with the encoder frozen (DNIFr), using two different architectures: 1) U-Net without skip connections (UWSC) and 2) U-Net with skip connections (USC). Experiments with data from 22 subjects reveal that the performance of the glottis segmentation network can be improved by initializing its weights with those of the glottis detection network. Among all schemes, when DNI is used, the USC yields an average localization accuracy of 81.3% and a Dice score of 0.73, which are better than those of the baseline approach by 15.87% and 0.07 (absolute), respectively.
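The following is a minimal PyTorch sketch of the two transfer-based initialization schemes named in the abstract (DNI: copy the trained detection-network encoder weights into the segmentation network; DNIFr: copy them and keep the encoder frozen), shown for the skip-connection variant (USC). The layer sizes, module names and the shared-encoder structure are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class Encoder(nn.Module):
    """Encoder assumed to be shared by the detection and segmentation networks."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList(
            [conv_block(1, 16), conv_block(16, 32), conv_block(32, 64)])
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skips = []
        for blk in self.blocks:
            x = blk(x)
            skips.append(x)   # keep pre-pooling features for skip connections
            x = self.pool(x)
        return x, skips


class DetectionNet(nn.Module):
    """Fully convolutional glottal / non-glottal image classifier."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))

    def forward(self, x):
        feat, _ = self.encoder(x)
        return self.head(feat)


class UNetSeg(nn.Module):
    """U-Net with skip connections (USC variant) predicting a glottis mask."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(64, 64, 2, stride=2),
            nn.ConvTranspose2d(32, 32, 2, stride=2),
            nn.ConvTranspose2d(16, 16, 2, stride=2)])
        self.dec = nn.ModuleList(
            [conv_block(128, 32), conv_block(64, 16), conv_block(32, 16)])
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        x, skips = self.encoder(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))  # upsample + skip concat
        return torch.sigmoid(self.out(x))


# DNI: initialize the segmentation encoder with the (already trained) detection encoder.
det = DetectionNet()
seg = UNetSeg()
seg.encoder.load_state_dict(det.encoder.state_dict())

# DNIFr: same transfer, but keep the copied encoder weights frozen during training.
for p in seg.encoder.parameters():
    p.requires_grad = False

mask = seg(torch.randn(1, 1, 64, 64))  # e.g. one 64x64 grayscale frame
print(mask.shape)                      # torch.Size([1, 1, 64, 64])
```

Under these assumptions, the UWSC variant would differ only in dropping the concatenation with the stored encoder features, and RI corresponds to simply omitting the `load_state_dict` transfer.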