Enhance Deep Reinforcement Learning with Denoising Autoencoder for Self-Driving Mobile Robot

Gilang Nugraha Putu Pratama, Indra Hidayatulloh, Herman Dwi Surjono, Totok Sukardiyono

Abstract


Over the past few years, self-driving mobile robots have captured the interest of researchers, prompting exploration of their implementation from many angles. They have the potential to revolutionize transportation by mitigating human error and reducing traffic accidents. Deploying a self-driving mobile robot involves several steps, including algorithm design, simulation, and real-world application. This paper presents a simulation of DonkeyCar on the Mini Monaco track, employing Soft Actor-Critic (SAC) combined with a denoising autoencoder. The work is limited to simulation, serving as a proof of concept for subsequent research on hardware implementation. The simulation verifies that relying solely on SAC is not sufficient for policy convergence: it yields a mean episode length of only 28.82 steps and a mean episode reward of 0.7815, and training ended after 3557 steps without the car completing a single lap. By integrating the denoising autoencoder, policy convergence is achieved, enabling DonkeyCar to track the lane of the circuit reliably. The denoising autoencoder plays an important role in accelerating the convergence of transfer learning. Notably, the mean reward per episode reached 2380.4387, with an average episode length of 771.71 steps over a total of 114357 steps, and DonkeyCar completed several laps. These results affirm the effectiveness of SAC with a denoising autoencoder in enhancing the performance of self-driving mobile robots.
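
To make the pipeline concrete, the sketch below (not the authors' released code) shows one plausible wiring of the two components with gym-donkeycar and Stable-Baselines3: a convolutional denoising autoencoder is pretrained to reconstruct clean camera frames from noise-corrupted copies, its frozen encoder then maps each observation to a low-dimensional latent vector, and SAC is trained on that latent state. The class and function names (DenoisingAutoencoder, train_dae, LatentObservation), the environment id, the image crop, the network sizes, and the hyperparameters are all assumptions for illustration, written against the classic OpenAI Gym API that gym-donkeycar and Stable-Baselines3 1.x share.

    # Minimal sketch, under the assumptions stated above: a DAE compresses
    # camera frames into a latent state, and SAC is trained on that state.
    import gym
    import gym_donkeycar  # noqa: F401 -- importing registers the donkey-* env ids
    import numpy as np
    import torch
    import torch.nn as nn
    from stable_baselines3 import SAC

    LATENT_DIM = 32  # assumed latent size

    class DenoisingAutoencoder(nn.Module):
        """DAE trained to reconstruct clean frames from noise-corrupted copies."""
        def __init__(self, latent_dim=LATENT_DIM):
            super().__init__()
            # Input assumed to be a 3x80x160 frame (sky cropped from the
            # simulator's default 120x160 camera image).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x40x80
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 64x20x40
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 128x10x20
                nn.Flatten(),
                nn.Linear(128 * 10 * 20, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128 * 10 * 20), nn.ReLU(),
                nn.Unflatten(1, (128, 10, 20)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

    def train_dae(dae, frames, epochs=20, noise_std=0.1):
        """frames: float tensor (N, 3, 80, 160) in [0, 1], e.g. collected by
        driving the simulator manually or with a random policy."""
        opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
        loader = torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(frames), batch_size=64, shuffle=True)
        for _ in range(epochs):
            for (clean,) in loader:
                # Corrupt the input, then penalize reconstruction of the clean frame.
                noisy = (clean + noise_std * torch.randn_like(clean)).clamp(0, 1)
                loss = nn.functional.mse_loss(dae.decoder(dae.encoder(noisy)), clean)
                opt.zero_grad()
                loss.backward()
                opt.step()

    class LatentObservation(gym.ObservationWrapper):
        """Replaces raw camera images with the frozen DAE's latent code, so SAC
        learns a policy over a compact, denoised state instead of raw pixels."""
        def __init__(self, env, dae):
            super().__init__(env)
            self.dae = dae.eval()
            self.observation_space = gym.spaces.Box(
                -np.inf, np.inf, shape=(LATENT_DIM,), dtype=np.float32)

        def observation(self, obs):
            frame = obs[40:, :, :]  # crop the sky; assumes a 120x160x3 image
            x = torch.from_numpy(frame.copy()).float().permute(2, 0, 1) / 255.0
            with torch.no_grad():
                z = self.dae.encoder(x.unsqueeze(0))
            return z.squeeze(0).numpy().astype(np.float32)

    if __name__ == "__main__":
        dae = DenoisingAutoencoder()
        # train_dae(dae, frames) would run here on pre-collected frames.
        env = LatentObservation(gym.make("donkey-minimonaco-track-v0"), dae)
        model = SAC("MlpPolicy", env, verbose=1)
        model.learn(total_timesteps=120_000)  # same order as the ~114k steps reported

Freezing the pretrained encoder during reinforcement learning is the design choice commonly credited with accelerating convergence in setups like this: SAC then optimizes a small multilayer-perceptron policy over a 32-dimensional state rather than having to learn a convolutional network from a sparse driving reward.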

Keywords


Self-Driving Mobile Robot; Deep Reinforcement Learning; DonkeyCar Simulation; Soft Actor-Critic; Denoising Autoencoder.


DOI: https://doi.org/10.18196/jrc.v5i3.21713



Copyright (c) 2024 Gilang Nugraha Putu Pratama, Indra Hidayatulloh, Herman Dwi Surjono, Totok Sukardiyono

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

 


Journal of Robotics and Control (JRC)

P-ISSN: 2715-5056 || E-ISSN: 2715-5072
Organized by Peneliti Teknologi Teknik Indonesia
Published by Universitas Muhammadiyah Yogyakarta, in collaboration with Peneliti Teknologi Teknik Indonesia and the Department of Electrical Engineering, Indonesia
Website: http://journal.umy.ac.id/index.php/jrc
Email: jrcofumy@gmail.com

