
In a world where effective communication is fundamental, individuals who are Deaf and Dumb (D&D) face unique challenges because their primary mode of communication is sign language. Although interpreters play an invaluable role, their limited availability creates communication barriers for D&D individuals. This study explores whether the field of Human-Computer Interaction (HCI) can offer a solution. The primary objective is to support D&D individuals with computer applications that act as mediators, bridging the communication gap between them and the wider hearing population. To enable independent communication, we propose an automated system that detects specific Bangla Sign Language (BdSL) words, addressing a critical gap in the sign language detection and recognition literature. Our approach applies deep learning and transfer learning to convert webcam-captured hand gestures into textual representations in real time. The model's development and assessment rest upon 992 images created by the authors, categorized into ten distinct classes representing various BdSL words. Our findings show that the DenseNet201 and ResNet50-V2 models achieve promising training and testing accuracies of 99% and 93%, respectively.

DOI: 10.28991/ESJ-2023-07-06-019
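The transfer-learning setup described above can be sketched as follows. This is a minimal illustration, not the authors' actual code: it assumes a DenseNet201 backbone pretrained on ImageNet with a new ten-class softmax head for the BdSL word classes; the input size, dropout rate, and optimizer are assumptions.

```python
# Hypothetical sketch: DenseNet201 transfer learning for 10 BdSL word classes.
import tensorflow as tf

NUM_CLASSES = 10        # ten BdSL word classes, per the abstract
IMG_SIZE = (224, 224)   # DenseNet201's default input resolution

# Pretrained backbone; weights="imagenet" downloads the pretrained weights.
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze pretrained features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

In a real-time pipeline, each webcam frame would be resized to `IMG_SIZE`, preprocessed, and passed through `model.predict` to obtain the predicted word label.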
Keywords: Sign Language Recognition; Bangla Sign Language (BdSL); Deep Learning; Transfer Learning; Convolutional Neural Network (CNN); DenseNet201; ResNet50-V2; MobileNet-V2; Gesture Recognition; Human-Computer Interaction; Image Processing.
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 7 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 10% |
