Oil Palm USB (Unstripped Bunch) Detector Trained on Synthetic Images Generated by PGGAN

Wahyu Sapto Aji, Kamarul Hawari bin Ghazali, Son Ali Akbar

Abstract


Identifying Unstripped Bunches (USB) is a pivotal challenge in palm oil production, as undetected USB reduces mill efficiency. Existing manual detection methods have proven time-consuming and prone to inaccuracies. We therefore propose a computer vision solution based on Faster R-CNN (Region-based Convolutional Neural Network), a robust object detection algorithm, complemented by a Progressive Growing Generative Adversarial Network (PGGAN) for synthetic image generation. Because authentic USB images are scarce, which can hinder Faster R-CNN training, PGGAN is used to generate synthetic images of Empty Fruit Bunches (EFB) and USB. Our approach pairs these synthetic images with authentic ones to train the Faster R-CNN, with a VGG16 feature extractor serving as the architectural backbone. In our experiments, a USB detector trained solely on authentic images achieved an accuracy of 77.1%, while training solely on synthetic images yielded a slightly lower accuracy of 75.3%. Combining authentic and synthetic images in a balanced 1:1 ratio raised accuracy to 87.9%, a 10.1% improvement. These results underscore the potential of synthetic data augmentation for refining detection systems: by combining authentic and synthetic data, we reach a level of USB detection accuracy that was previously unattainable with authentic images alone. This contribution holds significant implications for the industry and motivates further exploration of advanced data synthesis techniques and detection model refinement.
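The 1:1 pairing of authentic and PGGAN-generated training images described in the abstract can be sketched as below. This is a minimal illustration, not the authors' code: the function name, file-name patterns, and seeding are hypothetical, and the actual pipeline would pass the mixed file list to a Faster R-CNN training loop.

```python
import random

def mix_datasets(authentic, synthetic, ratio=1.0, seed=42):
    """Combine authentic and synthetic samples at the given
    synthetic:authentic ratio (ratio=1.0 reproduces the paper's
    balanced 1:1 mix). Inputs are lists of training samples
    (e.g. image paths); the result is shuffled reproducibly."""
    # Cap the number of synthetic samples at ratio * |authentic|.
    n_synthetic = min(len(synthetic), int(len(authentic) * ratio))
    mixed = list(authentic) + list(synthetic[:n_synthetic])
    # Shuffle with a fixed seed so training runs are repeatable.
    rng = random.Random(seed)
    rng.shuffle(mixed)
    return mixed

# Hypothetical example: 100 authentic USB/EFB images plus a pool of
# 200 PGGAN-generated ones, mixed 1:1 as in the reported experiment.
authentic = [f"real_{i}.jpg" for i in range(100)]
synthetic = [f"pggan_{i}.jpg" for i in range(200)]
train_set = mix_datasets(authentic, synthetic, ratio=1.0)
print(len(train_set))  # 200
```

Varying `ratio` would let one probe other real-to-synthetic mixes around the 1:1 setting that gave the best reported accuracy.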


Keywords


PGGAN; USB; Detector; Faster R-CNN; Synthetic Image.


DOI: https://doi.org/10.18196/jrc.v4i5.19499


Copyright (c) 2023 Wahyu Sapto Aji, Kamarul Hawari bin Ghazali, Son Ali Akbar

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


Journal of Robotics and Control (JRC)

P-ISSN: 2715-5056 || E-ISSN: 2715-5072
Organized by Peneliti Teknologi Teknik Indonesia
Published by Universitas Muhammadiyah Yogyakarta in collaboration with Peneliti Teknologi Teknik Indonesia, Indonesia and the Department of Electrical Engineering
Website: http://journal.umy.ac.id/index.php/jrc
Email: jrcofumy@gmail.com