Deep Learning-Based Continuous Sign Language Recognition
DOI: https://doi.org/10.18196/jrc.v6i3.25881

Keywords: Computer Vision, Sign Language, Real-Time Recognition, Deep Learning, 2DCNN, YOLO Optimization

Abstract
This study focuses on the development of a continuous sign language recognition system based on deep neural network models. A new Kazakh Sign Language (QazSL) dataset is created, deep learning models for continuous QazSL recognition are developed, their accuracy and robustness under different environmental conditions are analyzed, and an optimized model algorithm to improve the sign recognition process is proposed. The main goal is to improve gesture recognition accuracy, account for gesture variability and environmental conditions, and promote the development of adaptive technologies for low-resource languages. This paper proposes a QazSL recognition system using a YOLOv8n model and an optimized 2DCNN model to improve accessibility for the hearing impaired. The optimized 2DCNN method combines optimal data preprocessing techniques with a new training architecture, followed by model training and testing with precision, recall, and accuracy metrics. The proposed systems were trained on the open-source K-RSL dataset, recorded by 5 signers, and the newly created QazSL dataset, recorded by 7 signers. The test accuracies of gesture recognition are 98.12% for YOLOv8n and 98.57% for 2DCNN, indicating the robustness and capability of the models for real-time application. Certain issues, such as background variation and gesture consistency, were found to affect recognition under different conditions. This research contributes to the development of AI-based assistive technology to facilitate social inclusion and access to communication for deaf and hard-of-hearing people. By addressing the challenges identified in gesture recognition, this study paves the way for more reliable interactions between users and technology. Future work will focus on further optimizing the model to enhance its performance in varied environments and to expand its applicability across different languages and sign systems.
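The abstract names precision, recall, and accuracy as the evaluation metrics for the trained models. As an illustrative sketch only — the gesture labels and predictions below are hypothetical, not drawn from the QazSL or K-RSL datasets — these metrics can be computed from a model's per-sample predictions using their standard definitions (macro-averaged over classes):

```python
# Hypothetical evaluation helper for a gesture classifier.
# Labels and data are illustrative, not taken from the paper's datasets.
from collections import Counter

def evaluate(y_true, y_pred):
    """Return (accuracy, macro precision, macro recall) over the label set."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    correct = 0
    for t, p in zip(y_true, y_pred):
        if t == p:
            correct += 1
            tp[t] += 1          # true positive for the predicted class
        else:
            fp[p] += 1          # false positive for the predicted class
            fn[t] += 1          # false negative for the true class
    accuracy = correct / len(y_true)
    precisions, recalls = [], []
    for c in labels:
        p_denom = tp[c] + fp[c]
        r_denom = tp[c] + fn[c]
        precisions.append(tp[c] / p_denom if p_denom else 0.0)
        recalls.append(tp[c] / r_denom if r_denom else 0.0)
    return accuracy, sum(precisions) / len(labels), sum(recalls) / len(labels)

# Made-up sign labels for demonstration
y_true = ["hello", "thanks", "hello", "yes", "thanks", "yes"]
y_pred = ["hello", "thanks", "yes",   "yes", "thanks", "yes"]
acc, prec, rec = evaluate(y_true, y_pred)
```

In practice a library routine (e.g., scikit-learn's metric functions) would typically be used instead of a hand-rolled helper; the sketch only makes the metric definitions behind the reported 98.12% and 98.57% figures concrete.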
License
Copyright (c) 2025 Lazzat Zholshiyeva, Tamara Zhukabayeva, Azamat Serek, Ramazan Duisenbek, Meruert Berdieva, Nurshapagat Shapay

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
This journal is based on the work at https://journal.umy.ac.id/index.php/jrc under license from Creative Commons Attribution-ShareAlike 4.0 International License. You are free to:
- Share – copy and redistribute the material in any medium or format.
- Adapt – remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms, which include the following:
- Attribution. You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- ShareAlike. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
- No additional restrictions. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
JRC is licensed under a Creative Commons Attribution-ShareAlike (CC BY-SA) 4.0 International License.