Object Detection for Wheeled Mobile Robots Using Deep Learning: A Systematic Review
Keywords: Mobile Robots, Object Detection, VOSviewer, Review Paper

Abstract
The integration of deep learning into object detection has significantly enhanced the capabilities of wheeled mobile robots, making them more efficient and intelligent in navigating complex environments. These technologies enable more accurate pattern recognition, adaptability to diverse environmental conditions, and improved autonomous decision-making. This study explores the evolution and current trends in object detection for wheeled mobile robots, with a specific focus on deep learning as the foundational driver of system advancements. Its objectives include analyzing the contribution of deep learning to object detection accuracy, system efficiency, and the robots' adaptability to dynamic environments. The methodology is a Systematic Literature Review (SLR) comprising several key steps: formulating research questions, identifying relevant research sources, collecting data with specific search keywords, screening the gathered data, and analyzing the findings to answer the research questions. Data are sourced exclusively from the Scopus digital database, covering publications from 2019 to 2024. The collected records, exported in RIS format, are analyzed and visualized using VOSviewer. The outcomes of this research include insights into the recent growth of research publications, the identification of key trends in object detection methodologies for wheeled mobile robots, the exploration of interconnections between critical concepts in the field, and a mapping of the knowledge network based on relevant keywords. Special emphasis is placed on the pivotal role of deep learning in driving object detection advancements, including gains in accuracy and system efficiency.
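The keyword-mapping step of the methodology above can be sketched in code. VOSviewer builds its keyword map from how often pairs of keywords co-occur in the same record; the snippet below is a minimal, illustrative reconstruction of that statistic from an RIS export, not the tool's actual implementation. The `sample` RIS text and both function names are hypothetical; real Scopus exports carry the same `KW  - ` (keyword) and `ER  -` (end of record) tags.

```python
from collections import Counter
from itertools import combinations

def parse_ris_keywords(ris_text):
    """Extract one keyword list per record from RIS-formatted text.
    RIS records end with the 'ER  -' tag; keywords use 'KW  - '."""
    records, current = [], []
    for line in ris_text.splitlines():
        if line.startswith("KW  - "):
            current.append(line[6:].strip().lower())
        elif line.startswith("ER  -"):
            if current:
                records.append(current)
            current = []
    return records

def cooccurrence_counts(records):
    """Count how often each keyword pair appears in the same record --
    the co-occurrence weight VOSviewer uses for links in its keyword map."""
    pairs = Counter()
    for kws in records:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical two-record RIS excerpt for illustration.
sample = """TY  - JOUR
KW  - mobile robots
KW  - object detection
KW  - deep learning
ER  -
TY  - JOUR
KW  - object detection
KW  - deep learning
ER  -
"""
print(cooccurrence_counts(parse_ris_keywords(sample)).most_common(1))
# [(('deep learning', 'object detection'), 2)]
```

In VOSviewer itself these pair counts become link strengths, and keywords below a minimum-occurrence threshold are dropped before clustering.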
Copyright (c) 2025 Hapsari Peni Agustin Tjahyaningtijas

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.