Design of QazSL Sign Language Recognition System for Physically Impaired Individuals

Authors

  • Lazzat Zholshiyeva, L.N. Gumilyov Eurasian National University
  • Tamara Zhukabayeva, L.N. Gumilyov Eurasian National University
  • Dilaram Baumuratova, Astana International University
  • Azamat Serek, Kazakh-British Technical University (KBTU), https://orcid.org/0000-0001-7096-6765

DOI:

https://doi.org/10.18196/jrc.v6i1.23879

Keywords:

Sign Language Recognition, Kazakh Sign Language, Machine Learning, Deep Learning, Physically Impaired Individuals

Abstract

Automating real-time sign language translation with deep learning and machine learning techniques can greatly enhance communication between the deaf community and the wider public. This research investigates how these technologies can change the way individuals with speech impairments communicate. Despite recent advances, building accurate models that recognize both static and dynamic gestures remains challenging because variations in gesture speed and duration degrade model performance. We introduce a hybrid approach that combines machine learning and deep learning methods for sign language recognition and present a new model for recognizing Kazakh Sign Language (QazSL), employing five algorithm families: Support Vector Machine (SVM), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural Networks (CNNs, namely VGG19 and ResNet-50), and YOLOv5. The models were trained on a QazSL dataset of more than 4,400 images. Among the evaluated models, GRU attained the highest accuracy at 100%, followed by SVM and YOLOv5 at 99.98%, VGG19 at 98.87% on dynamic dactyls, LSTM at 85%, and ResNet-50 at 78.61%. These findings illustrate the comparative effectiveness of each method for real-time gesture recognition and offer insights for improving sign language recognition systems, pointing to possible gains in accessibility and communication for people with hearing impairments.
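Since no source code accompanies this page, the following is a minimal illustrative sketch (not the authors' implementation) of the kind of GRU sequence classifier compared in the abstract, assuming gesture clips are encoded as fixed-length sequences of hand-landmark features. The sequence length, feature size, class count, and layer widths are assumed values chosen for illustration only.

```python
# Minimal sketch of a GRU gesture classifier over hand-landmark sequences.
# Shapes and the 42-class output are assumptions, not values from the paper.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES, N_CLASSES = 30, 63, 42  # e.g. 30 frames x (21 landmarks * 3 coords)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.GRU(128, return_sequences=True),  # frame-level temporal features
    tf.keras.layers.GRU(64),                           # clip-level summary vector
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Smoke test on random data standing in for real QazSL landmark sequences.
x = np.random.rand(8, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
```

An LSTM baseline of the kind reported above could be obtained by swapping the recurrent layers, while an SVM baseline would instead operate on a single flattened landmark vector per static gesture.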

References

M. Moradi, D. D. Kannan, S. Asadianfam, H. Kolivand, and O. Aldhaibani, “A Review of Sign Language Systems,” 2023 16th International Conference on Developments in eSystems Engineering (DeSE), pp. 200–205, Dec. 2023, doi: 10.1109/dese60595.2023.10468964.

B. A. Dabwan et al., “Hand Gesture Classification for Individuals with Disabilities Using the DenseNet121 Model,” 2024 International Conference on Advancements in Power, Communication and Intelligent Systems (APCI), pp. 1–5, Jun. 2024, doi: 10.1109/apci61480.2024.10616504.

Ş. Takır, B. Bilen, and D. Arslan, “Sentiment Analysis in Turkish Sign Language Through Facial Expressions and Hand Gestures,” 2024 32nd Signal Processing and Communications Applications Conference (SIU), pp. 1–4, May 2024, doi: 10.1109/siu61531.2024.10601084.

T. N. Fitria, “The use of sign language as a media for delivering information on national television news broadcasts,” ELP (Journal of English Language Pedagogy), vol. 9, no. 1, pp. 118–131, Jan. 2024, doi: 10.36665/elp.v9i1.764.

R. Rastgoo, K. Kiani, S. Escalera, V. Athitsos, and M. Sabokrou, “A survey on recent advances in Sign Language Production,” Expert Systems with Applications, vol. 243, p. 122846, Jun. 2024, doi: 10.1016/j.eswa.2023.122846.

K. Emmorey, “Ten Things You Should Know About Sign Languages,” Current Directions in Psychological Science, vol. 32, no. 5, pp. 387–394, May 2023, doi: 10.1177/09637214231173071.

N. K. Caselli, K. Emmorey, and A. M. Cohen-Goldberg, “The signed mental lexicon: Effects of phonological neighbourhood density, iconicity, and childhood language experience,” Journal of Memory and Language, vol. 121, p. 104282, Dec. 2021, doi: 10.1016/j.jml.2021.104282.

F. Fitriyani, L. Q. Ainii, R. Jannah, and S. Maryam, “Analysis of Sign Language Skills in Improving Communication and Learning for Deaf Children,” Continuous Education: Journal of Science and Research, vol. 5, no. 1, pp. 30–39, Feb. 2024, doi: 10.51178/ce.v5i1.1757.

“Uslyshte nas” [“Hear us”], Kazakhstanskaya Pravda, 2023.

U.N. Office of the High Commissioner for Human Rights, “Experts from the Committee on the Rights of Persons with Disabilities commend Kazakhstan for its commitment,” OHCHR, Mar. 2024. [Online]. Available: https://www.ohchr.org/en/news/2024/03/experts-committee-rights-persons-disabilities-commend-kazakhstan-its-commitment.

N. S. Alsharif, T. Clifford, A. Alhebshi, S. N. Rowland, and S. J. Bailey, “Effects of Dietary Nitrate Supplementation on Performance during Single and Repeated Bouts of Short-Duration High-Intensity Exercise: A Systematic Review and Meta-Analysis of Randomised Controlled Trials,” Antioxidants, vol. 12, no. 6, p. 1194, May 2023, doi: 10.3390/antiox12061194.

R. Rastgoo, K. Kiani, and S. Escalera, “Sign Language Recognition: A Deep Survey,” Expert Systems with Applications, vol. 164, p. 113794, Feb. 2021, doi: 10.1016/j.eswa.2020.113794.

The Concept of Development of Inclusive Education in Kazakhstan. [Online]. Available: https://legalacts.egov.kz/application/downloadconceptfile?id=2506747 (accessed Aug. 2, 2022).

H. Zhou, D. Wang, Y. Yu, and Z. Zhang, “Research Progress of Human-Computer Interaction Technology Based on Gesture Recognition,” Electronics, vol. 12, no. 13, p. 2805, Jun. 2023, doi: 10.3390/electronics12132805.

O. M. Herbert, D. Pérez-Granados, M. A. O. Ruiz, R. Cadena Martínez, C. A. G. Gutiérrez, and M. A. Z. Antuñano, “Static and Dynamic Hand Gestures: A Review of Techniques of Virtual Reality Manipulation,” Sensors, vol. 24, no. 12, p. 3760, Jun. 2024, doi: 10.3390/s24123760.

H. Ali, D. Jirak, and S. Wermter, “Snapture—a Novel Neural Architecture for Combined Static and Dynamic Hand Gesture Recognition,” Cognitive Computation, vol. 15, no. 6, pp. 2014–2033, Jul. 2023, doi: 10.1007/s12559-023-10174-z.

A. M. Buttar, U. Ahmad, A. H. Gumaei, A. Assiri, M. A. Akbar, and B. F. Alkhamees, “Deep Learning in Sign Language Recognition: A Hybrid Approach for the Recognition of Static and Dynamic Signs,” Mathematics, vol. 11, no. 17, p. 3729, Aug. 2023, doi: 10.3390/math11173729.

H. Mohyuddin, S. K. R. Moosavi, M. H. Zafar, and F. Sanfilippo, “A comprehensive framework for hand gesture recognition using hybrid-metaheuristic algorithms and deep learning models,” Array, vol. 19, p. 100317, Sep. 2023, doi: 10.1016/j.array.2023.100317.

S. Das, Md. S. Imtiaz, N. H. Neom, N. Siddique, and H. Wang, “A hybrid approach for Bangla sign language recognition using deep transfer learning model with random forest classifier,” Expert Systems with Applications, vol. 213, p. 118914, Mar. 2023, doi: 10.1016/j.eswa.2022.118914.

L. Zholshiyeva, T. Zhukabayeva, S. Turaev, M. Berdiyeva, and D. Jambulova, “Hand Gesture Recognition Methods and Applications: A Literature Survey,” The 7th International Conference on Engineering & MIS (ICEMIS), pp. 1–8, Oct. 2021, doi: 10.1145/3492547.3492578.

S. F. Ahmed et al., “Deep learning modelling techniques: current progress, applications, advantages, and challenges,” Artificial Intelligence Review, vol. 56, no. 11, pp. 13521–13617, Apr. 2023, doi: 10.1007/s10462-023-10466-8.

T. Tao, Y. Zhao, T. Liu, and J. Zhu, “Sign Language Recognition: A Comprehensive Review of Traditional and Deep Learning Approaches, Datasets, and Challenges,” IEEE Access, vol. 12, pp. 75034–75060, 2024, doi: 10.1109/access.2024.3398806.

A. Osman Hashi, S. Zaiton Mohd Hashim, and A. Bte Asamah, “A Systematic Review of Hand Gesture Recognition: An Update From 2018 to 2024,” IEEE Access, vol. 12, pp. 143599–143626, 2024, doi: 10.1109/access.2024.3421992.

X. Zhao, L. Wang, Y. Zhang, X. Han, M. Deveci, and M. Parmar, “A review of convolutional neural networks in computer vision,” Artificial Intelligence Review, vol. 57, no. 4, Mar. 2024, doi: 10.1007/s10462-024-10721-6.

Q. M. Areeb, Maryam, M. Nadeem, R. Alroobaea, and F. Anwer, “Helping Hearing-Impaired in Emergency Situations: A Deep Learning-Based Approach,” IEEE Access, vol. 10, pp. 8502–8517, 2022, doi: 10.1109/access.2022.3142918.

N. R and G. Titus, “Hybrid Deep Learning Models for Hand Gesture Recognition with EMG Signals,” 2024 International Conference on Advances in Modern Age Technologies for Health and Engineering Science (AMATHE), pp. 1–6, May 2024, doi: 10.1109/amathe61652.2024.10582166.

N. Ashrafi, Y. Liu, X. Xu, Y. Wang, Z. Zhao, and M. Pishgar, “Deep learning model utilization for mortality prediction in mechanically ventilated ICU patients,” Informatics in Medicine Unlocked, vol. 49, p. 101562, 2024, doi: 10.1016/j.imu.2024.101562.

A. B. Kydyrbekova and Y. Karymsakova, “The Potential of Using Interactive Storytelling in a Mixed Reality Environment in Teaching Kazakh Sign Language,” Iasaýı ýnıversıtetiniń habarshysy (Bulletin of Akhmet Yassawi University), vol. 132, no. 2, pp. 373–385, Jun. 2024, doi: 10.47526/2024-2/2664-0686.69.

H. ZainEldin et al., “Silent no more: a comprehensive review of artificial intelligence, deep learning, and machine learning in facilitating deaf and mute communication,” Artificial Intelligence Review, vol. 57, no. 7, Jun. 2024, doi: 10.1007/s10462-024-10816-0.

N. Amangeldy, S. Kudubayeva, A. Kassymova, A. Karipzhanova, B. Razakhova, and S. Kuralov, “Sign Language Recognition Method Based on Palm Definition Model and Multiple Classification,” Sensors, vol. 22, no. 17, p. 6621, Sep. 2022, doi: 10.3390/s22176621.

A. A. Rahim, “Kazakh Sign Language Recognition By Using Machine Learning Methods,” Vestnik AUES [Bulletin of the Almaty University of Power Engineering and Telecommunications], vol. 2, no. 65, 2024.

Y. A. Gomaa, “Deciphering linguistic and cultural hurdles in English-Arabic media translation: Insights from the BBC online news articles,” Cadernos de Tradução, vol. 44, no. 1, pp. 1–21, Mar. 2024, doi: 10.5007/2175-7968.2024.e94510.

B. Ren, M. Liu, R. Ding, and H. Liu, “A Survey on 3D Skeleton- Based Action Recognition Using Learning Method,” Cyborg and Bionic Systems, vol. 5, Jan. 2024, doi: 10.34133/cbsystems.0100.

G. Halvardsson, J. Peterson, C. Soto-Valero, and B. Baudry, “Interpretation of Swedish Sign Language Using Convolutional Neural Networks and Transfer Learning,” SN Computer Science, vol. 2, no. 3, Apr. 2021, doi: 10.1007/s42979-021-00612-w.

R. Amer Kadhim and M. Khamees, “A Real-Time American Sign Language Recognition System using Convolutional Neural Network for Real Datasets,” TEM Journal, pp. 937–943, Aug. 2020, doi: 10.18421/tem93-14.

A. Kasapbaşi, A. E. A. Elbushra, O. Al-Hardanee, and A. Yilmaz, “DeepASLR: A CNN based human-computer interface for American Sign Language recognition for hearing-impaired individuals,” Computer Methods and Programs in Biomedicine Update, vol. 2, p. 100048, 2022, doi: 10.1016/j.cmpbup.2021.100048.

B. Kareem Murad and A. H. Hassin Alasadi, “Advancements and Challenges in Hand Gesture Recognition: A Comprehensive Review,” Iraqi Journal for Electrical and Electronic Engineering, vol. 20, no. 2, pp. 154–164, Jul. 2024, doi: 10.37917/ijeee.20.2.13.

M. Iman, H. R. Arabnia, and K. Rasheed, “A Review of Deep Transfer Learning and Recent Advancements,” Technologies, vol. 11, no. 2, p. 40, Mar. 2023, doi: 10.3390/technologies11020040.

H. Brock, F. Law, K. Nakadai, and Y. Nagashima, “Learning Three-dimensional Skeleton Data from Sign Language Video,” ACM Transactions on Intelligent Systems and Technology, vol. 11, no. 3, pp. 1–24, Apr. 2020, doi: 10.1145/3377552.

V. Kimmelman, A. Imashev, M. Mukushev, and A. Sandygulova, “Eyebrow position in grammatical and emotional expressions in Kazakh-Russian Sign Language: A quantitative study,” Plos One, vol. 15, no. 6, p. e0233731, Jun. 2020, doi: 10.1371/journal.pone.0233731.

J. Lo Bianco, “Ideologies of sign language and their repercussions in language policy determinations,” Language & Communication, vol. 75, pp. 83–93, Nov. 2020, doi: 10.1016/j.langcom.2020.09.002.

D. Saparova and A. Kanagatova, “The prefigurative culture of Margaret Mead or the eternal problem of generation gap,” Eurasian Journal of Religious Studies, vol. 16, no. 4, pp. 34–41, 2018, doi: 10.26577/ejrs-2018-4-186.

A. Imashev, M. Mukushev, V. Kimmelman, and A. Sandygulova, “A Dataset for Linguistic Understanding, Visual Evaluation, and Recognition of Sign Languages: The K-RSL,” Proceedings of the 24th Conference on Computational Natural Language Learning, pp. 631–640, 2020, doi: 10.18653/v1/2020.conll-1.51.

Y. Obi, K. S. Claudio, V. M. Budiman, S. Achmad, and A. Kurniawan, “Sign language recognition system for communicating to people with disabilities,” Procedia Computer Science, vol. 216, pp. 13–20, 2023, doi: 10.1016/j.procs.2022.12.106.

M. Mukushev, A. Ubingazhibov, A. Kydyrbekova, A. Imashev, V. Kimmelman, and A. Sandygulova, “FluentSigners-50: A signer independent benchmark dataset for sign language processing,” Plos One, vol. 17, no. 9, p. e0273649, Sep. 2022, doi: 10.1371/journal.pone.0273649.

A. Kasapbaşi and H. Canbolat, “Prediction of Turkish Sign Language Alphabets Utilizing Deep Learning Method,” 2023 5th International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1–7, Jun. 2023, doi: 10.1109/hora58378.2023.10156705.

C. Kenshimov, Z. Buribayev, Y. Amirgaliyev, A. Ataniyazova, and A. Aitimov, “Sign language dactyl recognition based on machine learning algorithms,” Eastern-European Journal of Enterprise Technologies, vol. 4, no. 2(112), pp. 58–72, Aug. 2021, doi: 10.15587/1729-4061.2021.239253.

A. Kuznetsova and V. Kimmelman, “Testing MediaPipe Holistic for Linguistic Analysis of Nonmanual Markers in Sign Languages,” arXiv preprint, 2024.

G. Sánchez-Brizuela, A. Cisnal, E. de la Fuente-López, J.-C. Fraile, and J. Pérez-Turiel, “Lightweight real-time hand segmentation leveraging MediaPipe landmark detection,” Virtual Reality, vol. 27, no. 4, pp. 3125–3132, Sep. 2023, doi: 10.1007/s10055-023-00858-0.

L. Zholshiyeva, T. Zhukabayeva, Sh. Turaev, M. Berdieva, and R. Sengirbayeva, “Real-time Kazakh sign language recognition using Mediapipe and SVM,” News of the Academy of Sciences of the Republic of Kazakhstan, Series Physics and Information Technology, vol. 1, no. 345, pp. 82–93, Mar. 2023, doi: 10.32014/2023.2518-1726.17.

S. Katoch, V. Singh, and U. S. Tiwary, “Indian Sign Language recognition system using SURF with SVM and CNN,” Array, vol. 14, p. 100141, Jul. 2022, doi: 10.1016/j.array.2022.100141.

S. Renjith and R. Manazhy, “Sign language: a systematic review on classification and recognition,” Multimedia Tools and Applications, vol. 83, no. 31, pp. 77077–77127, Feb. 2024, doi: 10.1007/s11042-024-18583-4.

R. Saravanan and S. Veluchamy, “Sign Language Classification With MediaPipe Hand Landmarks,” 2023 International Conference on Energy, Materials and Communication Engineering (ICEMCE), pp. 1–6, Dec. 2023, doi: 10.1109/icemce57940.2023.10434034.

Y. Amirgaliyev, A. Ataniyazova, Z. Buribayev, M. Zhassuzak, B. Urmashev, and L. Cherikbayeva, “Application of neural networks ensemble method for the Kazakh sign language recognition,” Bulletin of Electrical Engineering and Informatics, vol. 13, no. 5, pp. 3275–3287, Oct. 2024, doi: 10.11591/eei.v13i5.7803.

S. Mukhanov, R. Uskenbayeva, Y. I. Cho, D. Kabyl, N. Les, and M. Amangeldi, “Gesture recognition of machine learning and convolutional neural network methods for Kazakh sign language,” Scientific Journal of Astana IT University, pp. 85–100, Sep. 2023, doi: 10.37943/15lpcu4095.

L. Zholshiyeva, T. Zhukabayeva, S. Turaev, M. Berdiyeva, and S. R. Kuanysbaevna, “A Real-Time Approach to Recognition of Kazakh Sign Language,” 2022 International Conference on Smart Information Systems and Technologies (SIST), pp. 1–6, Apr. 2022, doi: 10.1109/sist54437.2022.9945799.

H. Goddard and L. Shamir, “SVMnet: Non-Parametric Image Classification Based on Convolutional Ensembles of Support Vector Machines for Small Training Sets,” IEEE Access, vol. 10, pp. 24029–24038, 2022, doi: 10.1109/access.2022.3154405.

T. H. Noor et al., “Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model,” Sensors, vol. 24, no. 11, p. 3683, Jun. 2024, doi: 10.3390/s24113683.

R. K. Pathan, M. Biswas, S. Yasmin, M. U. Khandaker, M. Salman, and A. A. F. Youssef, “Sign language recognition using the fusion of image and hand landmarks through multi-headed convolutional neural network,” Scientific Reports, vol. 13, no. 1, Oct. 2023, doi: 10.1038/s41598-023-43852-x.

Y. Ma, T. Xu, and K. Kim, “Two-Stream Mixed Convolutional Neural Network for American Sign Language Recognition,” Sensors, vol. 22, no. 16, p. 5959, Aug. 2022, doi: 10.3390/s22165959.

N. F. Attia, M. T. F. S. Ahmed, and M. A. M. Alshewimy, “Efficient deep learning models based on tension techniques for sign language recognition,” Intelligent Systems with Applications, vol. 20, p. 200284, Nov. 2023, doi: 10.1016/j.iswa.2023.200284.

H. Sun, L. Wang, H. Liu, and Y. Sun, “Hyperspectral Image Classification with the Orthogonal Self-Attention ResNet and Two-Step Support Vector Machine,” Remote Sensing, vol. 16, no. 6, p. 1010, Mar. 2024, doi: 10.3390/rs16061010.

G. Makridis, P. Mavrepis, and D. Kyriazis, “A deep learning approach using natural language processing and time-series forecasting towards enhanced food safety,” Machine Learning, vol. 112, no. 4, pp. 1287–1313, Mar. 2022, doi: 10.1007/s10994-022-06151-6.

L. Zholshiyeva, T. Zhukabayeva, Sh. Turaev, and M. Berdieva, “Kazakh sign language recognition based on CNN,” News of the Academy of Sciences of the Republic of Kazakhstan, Series Physics and Information Technology, vol. 3, no. 347, pp. 76–87, 2023.

L. Zholshiyeva, T. Zhukabayeva, Sh. Turaev, M. Berdieva, and B. Khu Ven-Tsen, “Development of an intellectual system for recognizing Kazakh dactyl gestures based on LSTM and GRU models,” News of the Academy of Sciences of the Republic of Kazakhstan, Series Physics and Information Technology, vol. 2, no. 346, pp. 141–153, 2023, doi: 10.32014/2023.2518-1726.190.

O. M. Sincan and H. Y. Keles, “AUTSL: A Large Scale Multi-Modal Turkish Sign Language Dataset and Baseline Methods,” IEEE Access, vol. 8, pp. 181340–181355, 2020, doi: 10.1109/access.2020.3028072.

B. Saini, D. Venkatesh, N. Chaudhari, T. Shelake, S. Gite, and B. Pradhan, “A comparative analysis of Indian sign language recognition using deep learning models,” Forum for Linguistic Studies, vol. 5, no. 1, p. 197, Jul. 2023, doi: 10.18063/fls.v5i1.1617.

M. Alaftekin, I. Pacal, and K. Cicek, “Real-time sign language recognition based on YOLO algorithm,” Neural Computing and Applications, vol. 36, no. 14, pp. 7609–7624, Feb. 2024, doi: 10.1007/s00521-024-09503-6.

M. Rivera-Acosta, J. M. Ruiz-Varela, S. Ortega-Cisneros, J. Rivera, R. Parra-Michel, and P. Mejia-Alvarez, “Spelling Correction Real-Time American Sign Language Alphabet Translation System Based on YOLO Network and LSTM,” Electronics, vol. 10, no. 9, p. 1035, Apr. 2021, doi: 10.3390/electronics10091035.

I. G. A. Poornima, G. S. Priya, C. A. Yogaraja, R. Venkatesh, and P. Shalini, “Hand and Sign Recognition of Alphabets Using YOLOv5,” SN Computer Science, vol. 5, no. 3, Mar. 2024, doi: 10.1007/s42979-024-02628-4.

B. Hemachandran, C. Pavan Rakesh Reddy, and D. Harsha Vardhan Reddy, “Comparative Study of Classification Algorithms in Sign Language Recognition,” 2022 13th International Conference on Computing Communication and Networking Technologies (ICCCNT), vol. 1045, pp. 1–5, Oct. 2022, doi: 10.1109/icccnt54827.2022.9984428.

S. Alyami, H. Luqman, and M. Hammoudeh, “Reviewing 25 years of continuous sign language recognition research: Advances, challenges, and prospects,” Information Processing & Management, vol. 61, no. 5, p. 103774, Sep. 2024, doi: 10.1016/j.ipm.2024.103774.

D. Nurgazina, S. Kudubayeva, and A. Ismailov, “Scientific Aspects Of Modern Approaches To Machine Translation For Sign Language,” Scientific Journal of Astana IT University, pp. 41–54, Jun. 2024, doi: 10.37943/18dqxx2356.

A. Núñez-Marcos, O. Perez-de-Viñaspre, and G. Labaka, “A survey on Sign Language machine translation,” Expert Systems with Applications, vol. 213, p. 118993, Mar. 2023.

R. Zuo and B. Mak, “Improving Continuous Sign Language Recognition with Consistency Constraints and Signer Removal,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 20, no. 6, pp. 1–25, Mar. 2024.

F. Wei and Y. Chen, “Improving Continuous Sign Language Recognition with Cross-Lingual Signs,” 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 23555–23564, Oct. 2023, doi: 10.1109/iccv51070.2023.02158.

W. Xue, Z. Kang, L. Guo, S. Yang, T. Yuan, and S. Chen, “Continuous Sign Language Recognition for Hearing-Impaired Consumer Communication via Self-Guidance Network,” IEEE Transactions on Consumer Electronics, vol. 70, no. 1, pp. 535–542, Feb. 2024, doi: 10.1109/tce.2023.3342163.

Published

2025-01-11

Section

Articles