
We propose a novel approach to American Sign Language (ASL) recognition that combines Google MediaPipe's real-time hand and body landmark tracking with a lightweight deep learning model trained on the MS-ASL dataset. We call this model the “Key Frame MLP”; it extracts key-frame features from the sequence of hand and pose landmarks, enabling efficient recognition without requiring raw RGB input. Evaluated on the MS-ASL 1000 dataset, our approach achieves 61% top-1 average per-class accuracy, demonstrating strong performance relative to its simplicity. The entire system is optimized for real-time operation and low computational cost, making it suitable for deployment on edge devices. These results highlight the effectiveness of combining modular MLPs with fast landmark-based inputs for scalable sign language recognition.
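To make the pipeline concrete, the sketch below shows one plausible realization of the landmark-to-MLP flow described above: MediaPipe Holistic produces hand and pose landmarks per frame, a fixed number of key frames is flattened into a single feature vector, and a small MLP classifies the sign. All names and hyperparameters (KeyFrameMLP, NUM_KEY_FRAMES, hidden size, padding of missing detections) are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch: MediaPipe Holistic landmarks -> flattened key-frame
# features -> lightweight MLP classifier. Constants below are assumptions.
import mediapipe as mp
import numpy as np
import torch
import torch.nn as nn

NUM_KEY_FRAMES = 8                     # assumed number of key frames per clip
LANDMARKS_PER_FRAME = 21 + 21 + 33     # left hand + right hand + pose
FEATS_PER_FRAME = LANDMARKS_PER_FRAME * 3   # (x, y, z) per landmark
NUM_CLASSES = 1000                     # MS-ASL 1000

def extract_frame_features(image_rgb, holistic):
    """Run MediaPipe Holistic on one RGB frame and flatten its landmarks."""
    results = holistic.process(image_rgb)
    feats = []
    for lm_set, count in ((results.left_hand_landmarks, 21),
                          (results.right_hand_landmarks, 21),
                          (results.pose_landmarks, 33)):
        if lm_set is None:
            feats.extend([0.0] * count * 3)   # pad missing detections
        else:
            for lm in lm_set.landmark:
                feats.extend([lm.x, lm.y, lm.z])
    return np.asarray(feats, dtype=np.float32)

class KeyFrameMLP(nn.Module):
    """Lightweight MLP over concatenated key-frame landmark features."""
    def __init__(self, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEY_FRAMES * FEATS_PER_FRAME, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, NUM_CLASSES),
        )

    def forward(self, x):   # x: (batch, NUM_KEY_FRAMES * FEATS_PER_FRAME)
        return self.net(x)

# Usage: sample NUM_KEY_FRAMES frames from a clip, stack their landmark
# features, and classify the sign with a single forward pass.
holistic = mp.solutions.holistic.Holistic(static_image_mode=False)
model = KeyFrameMLP()
dummy_clip = torch.randn(1, NUM_KEY_FRAMES * FEATS_PER_FRAME)
logits = model(dummy_clip)   # (1, NUM_CLASSES) class scores
```

Because the MLP sees only landmark coordinates rather than RGB frames, both feature extraction and classification stay cheap enough for real-time use on edge devices, which is the design point the abstract emphasizes.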
Key Frame MLP, ASL, sign language recognition, neural networks, MediaPipe, computer vision, MS-ASL
