Q-RCR: A Modular Framework for Collision-Free Multi-Package Transfer on Four-Wheeled Omnidirectional Conveyor Systems
DOI: https://doi.org/10.18196/jrc.v6i4.26050
Keywords: Q-Learning, Adaptive Path Planning, Collision Avoidance, Omnidirectional Cellular Conveyor, Simultaneous Transfer, Logistics Optimization, Flexible Manufacturing
Abstract
Modern logistics systems increasingly require high flexibility to handle simultaneous package transfers in compact, dynamic environments without collisions. Improper handling of multi-package transfers in omnidirectional conveyor systems can lead to deadlocks, congestion, or delivery delays, particularly in grid-based environments where routing complexity grows with package variability and layout density. This research addresses these challenges by introducing Q-RCR, a modular Q-Learning-based framework with Rule-Based Conflict Resolution (RCR) for intelligent path planning and collision handling in Four-Wheeled Omnidirectional Cellular Conveyor (FOCC) systems. The main contribution is the decoupling of path learning from collision handling, which enables independent agent training while reducing computational burden and improving convergence in multi-agent scenarios. The proposed Q-RCR framework integrates Q-Learning for route optimization with a rule-based conflict resolution module that applies four adaptive strategies: Sequential Transfer, Insert Path, Reroute, and Hybrid. The method is implemented in a grid-based FOCC environment that supports eight-directional movement and handles various package sizes. Experiments were conducted in four scenarios, with grid dimensions ranging from 8×11 to 12×12 and up to four simultaneous packages. Results show that Q-RCR consistently outperforms Double Q-Learning, RRT, and A* in delivery time, path smoothness, and the number of activated cells. The Hybrid mode proved most effective at handling frequent collisions and maintaining operational flow continuity. The proposed framework demonstrates strong adaptability, scalability, and responsiveness, offering a practical, intelligent solution for real-time multi-package coordination in flexible manufacturing and warehouse automation environments.
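The decoupling described in the abstract — Q-Learning for route optimization, a separate rule-based module for conflicts — can be illustrated with a minimal sketch. This is not the authors' implementation: the grid size, reward values, and the `train_q`, `greedy_path`, and `sequential_transfer` helpers are illustrative assumptions, with only the Sequential Transfer strategy sketched (hold a lower-priority package until its route no longer conflicts).

```python
import random

# The eight movement directions of a grid-based omnidirectional conveyor cell.
MOVES = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def train_q(rows, cols, goal, episodes=3000, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-Learning for routing a single package toward `goal`."""
    Q = {((r, c), a): 0.0
         for r in range(rows) for c in range(cols) for a in range(8)}
    for _ in range(episodes):
        s = (random.randrange(rows), random.randrange(cols))
        for _ in range(4 * rows * cols):          # cap on episode length
            if s == goal:
                break
            a = (random.randrange(8) if random.random() < eps
                 else max(range(8), key=lambda x: Q[(s, x)]))
            nr, nc = s[0] + MOVES[a][0], s[1] + MOVES[a][1]
            if not (0 <= nr < rows and 0 <= nc < cols):
                nr, nc = s                        # wall bump: stay in place
            s2 = (nr, nc)
            reward = 100.0 if s2 == goal else -1.0
            best_next = max(Q[(s2, b)] for b in range(8))
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

def greedy_path(Q, start, goal, rows, cols, limit=200):
    """Extract a route by following the learned greedy policy."""
    path, s = [start], start
    while s != goal and len(path) < limit:
        a = max(range(8), key=lambda x: Q[(s, x)])
        s = (min(max(s[0] + MOVES[a][0], 0), rows - 1),
             min(max(s[1] + MOVES[a][1], 0), cols - 1))
        path.append(s)
    return path

def collides(p, q):
    """Two schedules conflict if they occupy the same cell at the same step."""
    return any(a == b for a, b in zip(p, q))

def sequential_transfer(paths):
    """Toy Sequential Transfer rule: hold each package at its start cell
    until its route no longer conflicts with higher-priority routes."""
    scheduled = []
    for p in paths:
        cand = p
        for delay in range(50):                   # bounded wait as a safeguard
            cand = [p[0]] * delay + p
            if not any(collides(cand, q) for q in scheduled):
                break
        scheduled.append(cand)
    return scheduled
```

A Hybrid-style resolver in this spirit would combine such waiting with rerouting, e.g., re-planning on a copy of the grid with the conflicting cells temporarily treated as blocked.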
License
Copyright (c) 2025 Syamsiar Kautsar, Aulia Siti Aisjah, Syamsul Arifin, Mat Syai'in

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
This journal is based on the work at https://journal.umy.ac.id/index.php/jrc under license from Creative Commons Attribution-ShareAlike 4.0 International License. You are free to:
- Share – copy and redistribute the material in any medium or format.
- Adapt – remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms, which include the following:
- Attribution. You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- ShareAlike. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
- No additional restrictions. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
JRC is licensed under a Creative Commons Attribution-ShareAlike (CC BY-SA) 4.0 International License.