Evaluating the Impact of Deep Learning Model Architecture on Sign Language Recognition Accuracy in Low-Resource Context

Authors

  • Tebatso Moape University of South Africa
  • Absolom Muzambi University of South Africa
  • Bester Chimbo University of South Africa

DOI:

https://doi.org/10.33022/ijcs.v14i1.4493

Keywords:

Sign Language Recognition (SLR), Deep Learning Models, Transformer Models, Low-Resource Dataset Environments

Abstract

Deep learning models are well known for their reliance on large training datasets to achieve optimal performance on specific tasks. These models have revolutionized the field of machine learning, achieving high accuracy in image classification tasks, and as a result have been applied to sign language recognition. However, they often underperform in low-resource contexts. Given the country-specific nature of sign languages, this study examines the effectiveness and performance of Convolutional Neural Networks (CNN), Artificial Neural Networks (ANN), a hybrid model (CNN + Recurrent Neural Network (RNN)), and VGG16 deep learning architectures in recognizing South African Sign Language (SASL) under a data-constrained context. The models were trained and evaluated using a dataset of 12,420 training images representing the 26 static SASL alphabet signs, and 4,050 validation images. The paper's primary objective is to determine the optimal methods and settings for improving sign recognition models in low-resource contexts. The performance of the models was evaluated across multiple image dimensions, with each model trained for 60 epochs, to analyze each model's adaptability and efficiency under varying computational parameters. The experiments showed that the ANN and CNN models consistently achieved high accuracy with lower computational requirements, making them well suited for low-resource contexts.
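As a rough illustration of the computational argument in the abstract, the sketch below compares trainable-parameter counts for a small fully connected ANN and a small CNN as the input image dimension varies. All layer configurations here (hidden size, filter counts, kernel sizes) are assumptions chosen for the sketch, not the architectures used in the paper; the point is only that a compact CNN's parameter count grows more slowly with image size than a flattened-input dense network's.

```python
# Hypothetical parameter-count comparison: the layer sizes below are
# illustrative assumptions, not the paper's actual architectures.

NUM_CLASSES = 26  # static SASL alphabet signs, as in the study


def ann_params(side: int, channels: int = 3, hidden: int = 128) -> int:
    """Flattened input -> one hidden layer -> 26-way softmax."""
    inputs = side * side * channels
    return (inputs * hidden + hidden) + (hidden * NUM_CLASSES + NUM_CLASSES)


def cnn_params(side: int, channels: int = 3) -> int:
    """Two 3x3 conv layers (32 and 64 filters), each followed by 2x2
    pooling, then a 26-way softmax on the flattened feature map."""
    conv1 = 32 * (3 * 3 * channels + 1)          # weights + biases
    conv2 = 64 * (3 * 3 * 32 + 1)
    feat = (side // 4) * (side // 4) * 64        # spatial size halved twice
    return conv1 + conv2 + (feat * NUM_CLASSES + NUM_CLASSES)


for side in (32, 64, 128):
    print(f"{side}x{side}: ANN={ann_params(side):,}  CNN={cnn_params(side):,}")
```

Under these assumptions the dense network's parameter count scales linearly with the number of input pixels, while the CNN's convolutional layers are fixed-size regardless of image dimension, which is one common reason small CNNs remain tractable at larger input resolutions in data- and compute-constrained settings.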

Published

07-02-2025