Enhancing Diabetic Retinopathy Classification in Fundus Images using CNN Architectures and Oversampling Technique
DOI: https://doi.org/10.18196/jrc.v6i1.25331
Keywords: Diabetic Retinopathy, CNN Architectures, SMOTE, Class Imbalance, Classification System
Abstract
Diabetic Retinopathy (DR) is a severe complication of diabetes mellitus that damages the retinal blood vessels and is a leading cause of blindness among individuals of working age. The global increase in diabetes prevalence calls for an effective DR classification system for early detection. This study develops a DR classification system using several CNN architectures, namely EfficientNet-B4, ResNet-50, DenseNet-201, Xception, and Inception-ResNet-v2, combined with the SMOTE oversampling technique to address class imbalance. The dataset used is APTOS 2019, which has an imbalanced class distribution. Two scenarios were tested: the first without data balancing and the second with SMOTE applied. In the first scenario, Xception achieved the highest accuracy at 80.61%, but model performance remained limited due to the dominance of the majority class. Applying SMOTE in the second scenario significantly improved model accuracy, with EfficientNet-B4 achieving the highest accuracy of 97.78%. Precision and recall also increased markedly in the second scenario, demonstrating SMOTE's effectiveness in improving the models' ability to detect minority classes and reduce prediction errors. DenseNet-201 achieved the highest precision at 99.28%, while Inception-ResNet-v2 recorded the highest recall at 98.57%. Overall, this study shows that SMOTE effectively addresses class imbalance in the fundus dataset and significantly improves CNN model performance. However, this improvement comes at a higher computational cost: applying SMOTE significantly increased the training time per iteration for all tested CNN architectures.
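To illustrate the pipeline described in the abstract, the minimal Python sketch below shows how SMOTE (via imbalanced-learn) can be applied to flattened fundus-image arrays to balance the classes before fine-tuning a pre-trained EfficientNet-B4 with Keras. This is an illustrative sketch under assumed settings; the placeholder data, image size, and hyperparameters are not taken from the paper, and it is not the authors' released implementation.

```python
# Sketch (not the authors' code): class balancing with SMOTE, then transfer
# learning with EfficientNet-B4. All sizes and hyperparameters are assumptions.
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE

NUM_CLASSES = 5   # APTOS 2019 DR grades: 0 (No DR) .. 4 (Proliferative DR)
IMG_SIZE = 128    # assumed working resolution for this illustration

# Placeholder for a preprocessed APTOS 2019 training split (in practice: load
# and resize the fundus images; Keras EfficientNet rescales [0, 255] inputs itself).
X = np.random.randint(0, 256, size=(300, IMG_SIZE, IMG_SIZE, 3)).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=300)

# SMOTE expects a 2-D feature matrix, so flatten each image into one row,
# synthesize minority-class samples, then reshape back into image tensors.
X_flat = X.reshape(len(X), -1)
X_bal_flat, y_bal = SMOTE(random_state=42).fit_resample(X_flat, y)
X_bal = X_bal_flat.reshape(-1, IMG_SIZE, IMG_SIZE, 3)
print("class counts after SMOTE:", np.bincount(y_bal))

# Transfer learning: ImageNet-pretrained EfficientNet-B4 backbone (frozen)
# with a small classification head for the five DR grades.
backbone = tf.keras.applications.EfficientNetB4(
    include_top=False, weights="imagenet", input_shape=(IMG_SIZE, IMG_SIZE, 3))
backbone.trainable = False
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_bal, y_bal, epochs=3, batch_size=16, validation_split=0.1)
```

On a held-out test split, per-class precision and recall of the kind reported above can then be computed with, for example, scikit-learn's classification_report. Note that synthesizing minority samples enlarges the training set, which is consistent with the longer per-iteration training times reported in the abstract.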
License
Copyright (c) 2025 Yuri Pamungkas, Evi Triandini, Wawan Yunanto, Yamin Thwe

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
This journal is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, based on the work at https://journal.umy.ac.id/index.php/jrc. You are free to:
- Share – copy and redistribute the material in any medium or format.
- Adapt – remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms, which include the following:
- Attribution. You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- ShareAlike. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
- No additional restrictions. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
JRC is licensed under a Creative Commons Attribution-ShareAlike (CC BY-SA) 4.0 International License.