Enhancing Humanoid Robot Soccer Ball Tracking, Goal Alignment, and Robot Avoidance Using YOLO-NAS
DOI:
https://doi.org/10.18196/jrc.v5i3.21839

Keywords:
Humanoid Robot Soccer, YOLO, Object Detection, ML Deployment, Ball Tracking

Abstract
This research aims to enhance the ball tracking, goal alignment, and robot avoidance tasks of humanoid robot soccer using YOLO-NAS. The study followed a three-stage approach: model training, code integration, and testing, in which YOLO-NAS was compared against YOLOv8 and YOLOv7. We measured the mAP (mean Average Precision) and detection speed of each model, and employed descriptive statistics and the Friedman test to interpret the results. In the ball tracking task, YOLO-NAS achieved a success rate of 53.3%, compared with 68.3% for YOLOv7. In the goal alignment task, YOLO-NAS achieved the highest success rate at 91.7%. In the robot avoidance task, YOLO-NAS, like YOLOv8, passed every trial with a 100% success rate. These findings suggest that YOLO-NAS performs exceptionally well in the goal alignment task but does not excel in the other two humanoid robot soccer tasks.
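As a hedged illustration of the code-integration stage, the following Python sketch loads a YOLO-NAS model for inference with the super-gradients library, which distributes YOLO-NAS; the image path is a placeholder, and the paper's actual integration code may differ.

    # Minimal inference sketch with the super-gradients library, which
    # distributes YOLO-NAS. This loads COCO-pretrained weights; the paper's
    # robots would instead use a model fine-tuned on ball/goal/robot classes.
    from super_gradients.training import models

    model = models.get("yolo_nas_s", pretrained_weights="coco")

    # Run detection on a single camera frame ("frame.jpg" is a placeholder
    # path) and keep boxes above a 0.5 confidence threshold.
    predictions = model.predict("frame.jpg", conf=0.5)
    predictions.show()  # visualize the detected bounding boxes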
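The abstract reports mAP for each model; as a reminder, the standard definition of the metric (not a formula quoted from the paper) is:

    \mathrm{mAP} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{AP}_i,
    \qquad
    \mathrm{AP}_i = \int_0^1 p_i(r)\, dr

where N is the number of object classes and p_i(r) is the precision of class i at recall r, so each AP_i is the area under that class's precision-recall curve.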
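The Friedman test mentioned in the abstract is a non-parametric test for comparing three or more matched samples. Below is a minimal Python sketch using SciPy; the per-trial scores are illustrative placeholders, not the paper's measurements, and serve only to show the procedure.

    # Sketch of a Friedman test comparing the three detectors across
    # matched trials. The scores below are hypothetical placeholders.
    from scipy.stats import friedmanchisquare

    # Hypothetical scores for the same six trials under each model.
    yolo_nas = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92]
    yolov8   = [0.85, 0.87, 0.84, 0.88, 0.86, 0.85]
    yolov7   = [0.80, 0.83, 0.79, 0.82, 0.81, 0.80]

    stat, p = friedmanchisquare(yolo_nas, yolov8, yolov7)
    print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
    # p < 0.05 suggests at least one model differs significantly;
    # post-hoc pairwise tests would then identify which.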
License
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
This journal is based on the work at https://journal.umy.ac.id/index.php/jrc under a Creative Commons Attribution-ShareAlike 4.0 International License. You are free to:
- Share – copy and redistribute the material in any medium or format.
- Adapt – remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms, which include the following:
- Attribution. You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- ShareAlike. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
- No additional restrictions. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
JRC is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA).