Similarity learning for CNN-based ASL alphabet recognition (Chapter in Scopus)

abstract

  • © 2021 The authors and IOS Press. All rights reserved. Sign language is an important means of communication within the deaf community, used primarily by people with hearing or speech impairments. In addition, sign language offers a direct form of Human-Computer Interaction (HCI), similar to voice commands; therefore, the purpose of this study is to investigate and develop a system for American Sign Language (ASL) alphabet recognition using convolutional neural networks. Our proposal is based on semantic similarity learning with a Siamese Convolutional Neural Network, which reduces the intra-class variation and inter-class similarity among sign images in a Euclidean embedding space. The results of the Siamese architecture applied to the ASL alphabet dataset outperform previous works found in the literature. Using t-SNE visualization of these results, we demonstrate that our hypothesis is correct: ASL recognition improves when the similarity among encodings of images belonging to the same class is increased and the similarity among encodings of different classes is reduced.
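
The chapter's exact architecture, loss, and hyperparameters are not reproduced on this page; the following is a minimal sketch of the general technique the abstract describes (a Siamese convolutional network trained with a contrastive loss over Euclidean distances between embeddings). All layer sizes, image dimensions, the margin value, and the class names are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a Siamese CNN with a contrastive loss, assuming 28x28
# grayscale ASL letter images. Layer sizes and the margin are illustrative
# assumptions, not the chapter's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Shared CNN encoder mapping a sign image to a vector embedding."""
    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, embedding_dim)

    def forward(self, x):
        x = self.features(x)
        return self.fc(x.flatten(start_dim=1))

class SiameseNet(nn.Module):
    """Applies the same encoder (shared weights) to both images of a pair."""
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder

    def forward(self, x1, x2):
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    """Pull embeddings of the same letter together; push embeddings of
    different letters apart by at least `margin` in Euclidean space."""
    dist = F.pairwise_distance(z1, z2)
    pos = same_class * dist.pow(2)
    neg = (1 - same_class) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

# Toy usage: one batch of random image pairs with pair labels (1 = same letter).
model = SiameseNet(EmbeddingNet())
x1, x2 = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
same_class = torch.randint(0, 2, (8,)).float()
z1, z2 = model(x1, x2)
loss = contrastive_loss(z1, z2, same_class)
loss.backward()
```

Training with this kind of pairwise objective shapes the embedding space so that letters of the same class cluster together while different classes separate, which is what the abstract's t-SNE visualization of the encodings would be expected to show.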

publication date

  • September 8, 2021