Image Denoising Using Generative Adversarial Network by Recursive Residual Group
DOI: https://doi.org/10.18196/jrc.v6i2.24302

Keywords: Generative Adversarial Network, Image Denoising, Cardiac Magnetic Resonance Imaging, Recursive Residual, Deep Learning

Abstract
Cardiac magnetic resonance (CMR) imaging is a vital tool for noninvasively assessing heart shape and function, offering exceptional spatial and temporal resolution alongside superior soft-tissue contrast. However, CMR images often suffer from noise and artifacts caused by cardiac and respiratory motion or patient movement, which degrade diagnostic accuracy. Real-time noise suppression can mitigate these issues, but at a high computational and financial cost. This paper introduces a complete medical-image denoising framework built on a new Denoising Generative Adversarial Network (D-GAN). The D-GAN architecture combines a recursive residual group-based generator incorporating a Selective Kernel Feature Fusion (SKFF) mechanism with a PatchGAN-inspired discriminator designed to improve adversarial training dynamics and texture modeling for medical images. The SKFF mechanism lets the generator aggregate both local and global context from the hierarchical features it develops, improving feature refinement and the denoising of cardiac MRI images. The proposed method outperforms competing approaches in both PSNR and SSIM: for noise levels of 0.3, 0.2, and 0.1, D-GAN achieves SSIM scores of 0.837, 0.911, and 0.971 and PSNR scores of 29.48 dB, 32.58 dB, and 37.85 dB, respectively.
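As a rough illustration of the quantities involved, the sketch below (plain NumPy, not the paper's code) computes the PSNR metric reported above and a toy branch-fusion step; the simplified per-channel softmax in `skff_fuse` is an assumption standing in for SKFF's learned fully connected fusion layers.

```python
import numpy as np

def psnr(reference: np.ndarray, denoised: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a clean reference and a denoised image."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def skff_fuse(branches):
    """Toy selective-kernel fusion: combine parallel feature branches of shape
    (C, H, W) with a softmax attention over branches, computed per channel.
    The real SKFF derives this attention from learned fully connected layers
    applied to a global-average-pooled descriptor of the summed branches."""
    scores = np.stack([b.mean(axis=(1, 2)) for b in branches])        # (num_branches, C)
    att = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)  # softmax over branches
    return sum(a[:, None, None] * b for a, b in zip(att, branches))   # weighted sum, (C, H, W)

# Illustration: PSNR drops as Gaussian noise is added to a synthetic image.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)
print(f"PSNR of noisy image: {psnr(clean, noisy):.2f} dB")  # roughly 20 dB at sigma = 0.1
```

At a noise standard deviation of 0.1 on a unit-range image, PSNR lands near 20 dB, which is why the reported 37.85 dB after denoising at that noise level indicates a substantial improvement.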
License
Copyright (c) 2025 Maysaa Abd Alkareem Naser, Abbas Al-Asadi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.