Noise-Reduced 3D Organ Modeling from CT Images Using Median Filtering for Anatomical Preservation in Medical 3D Printing

Authors

  • Phichitphon Chotikunnan Rangsit University https://orcid.org/0000-0002-6617-6805
  • Rawiphon Chotikunnan Rangsit University
  • Tasawan Puttasakul Rangsit University
  • Wanida Khotakham Rangsit University
  • Pariwat Imura Rangsit University
  • Jaroonrut Prinyakupt Rangsit University
  • Nuntachai Thongpance Rangsit University
  • Anuchart Srisiriwat Pathumwan Institute of Technology

DOI:

https://doi.org/10.18196/jrc.v6i4.26665

Keywords:

Median Filtering, Medical Image Processing, STL File Generation, DICOM

Abstract

This study presents a systematic approach to improving the reconstruction of three-dimensional anatomical models from CT imaging data. The main difficulty addressed is preserving internal bone features during denoising, which is essential for producing clinically relevant models. A nonlinear filtering strategy was implemented, using a 3×3 median filter together with manual refinement to eliminate salt-and-pepper noise while preserving anatomical information. The study contributes a reproducible image-processing pipeline that improves structural clarity and enables material-efficient 3D printing while maintaining internal bone integrity. A publicly available dataset of 813 anonymized chest CT scans (512×512 pixels, 16-bit grayscale) from Zenodo was employed. Preprocessing included grayscale normalization, brightness adjustment, and the application of median filters with kernel sizes from 3×3 to 9×9, followed by artifact removal in FlashPrint software before STL conversion. The 3×3 median filter achieved the best balance between noise reduction and anatomical clarity, outperforming mean filtering and larger kernels in maintaining edge detail. Although statistical evaluation was not conducted, visual analysis confirmed anatomical fidelity, and the filtered models achieved an 18.07 percent decrease in print time and a 17.88 percent reduction in filament consumption. The approach proved effective in practice for generating high-quality anatomical models. Future work will incorporate automated segmentation and more sophisticated denoising methods to broaden applicability in surgical simulation, clinical education, and personalized healthcare planning.
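The core preprocessing steps described in the abstract (grayscale normalization of a 16-bit CT slice followed by square-kernel median filtering) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the function name `denoise_slice` and its defaults are hypothetical, and SciPy's `median_filter` stands in for whatever median-filter implementation the study used.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_slice(slice_16bit: np.ndarray, kernel: int = 3) -> np.ndarray:
    """Normalize a 16-bit grayscale CT slice to 8-bit, then suppress
    salt-and-pepper noise with a square median filter (e.g. 3x3 .. 9x9)."""
    lo, hi = int(slice_16bit.min()), int(slice_16bit.max())
    # Min-max normalization to the 0..255 display range.
    norm = ((slice_16bit.astype(np.float64) - lo) / max(hi - lo, 1) * 255.0)
    norm = norm.astype(np.uint8)
    # A kernel x kernel median filter replaces each pixel with the median
    # of its neighborhood, removing isolated impulse noise while keeping
    # edges sharper than a mean filter of the same size would.
    return median_filter(norm, size=kernel)
```

Consistent with the study's finding, a small 3×3 kernel removes isolated impulse noise while disturbing edges least; larger kernels (7×7, 9×9) smooth more aggressively and start to erode fine bone detail.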

Author Biographies

Phichitphon Chotikunnan, Rangsit University

Assoc. Prof. Acting Sub LT. Phichitphon Chotikunnan is a Lecturer of the Biomedical Engineering Program at the College of Biomedical Engineering, Rangsit University. He has expertise in robotics, embedded systems, fuzzy logic control, and iterative learning control. He holds a Doctor of Engineering degree in Electrical and Information Engineering and a Master of Engineering in Electrical Engineering, both from King Mongkut's University of Technology Thonburi. He also has a Bachelor of Engineering in Mechatronics Engineering from Pathumwan Institute of Technology.

He has published in international journals and conferences, and he has been involved in various research projects. His work experiences include positions as a Teaching Assistant, Control and Instrumentation Engineer, R&D Embedded Applications, Lecturer, and R&D Consultant. He has also participated in numerous training programs and workshops, and he has received several awards for his research excellence.

Rawiphon Chotikunnan, Rangsit University

He is a Lecturer in the Biomedical Engineering Program at the College of Biomedical Engineering, Rangsit University. He holds a Master of Engineering in Biomedical Engineering from Rangsit University and a Bachelor of Information Technology in Interactive Design and Game Development from Dhurakij Pundit University. His research interests include interactive media, medical image processing, robotics, and control systems.

Tasawan Puttasakul, Rangsit University

She obtained her Bachelor of Science in Physics-Electronics with Second Class Honors from Naresuan University, Thailand, her Master of Engineering in Biomedical Electronics Engineering, and her Doctor of Philosophy in Biomedical Engineering from King Mongkut’s Institute of Technology Ladkrabang, Thailand. Since 2009, she has held the position of lecturer in the College of Biomedical Engineering, Rangsit University. Her pedagogical and research interests encompass biomedical signal processing, biomedical image processing, and biosensors. She has authored many publications in international journals and conferences, concentrating on medical sensors, robotic control, and biomedical image processing.

Wanida Khotakham, Rangsit University

She obtained her Bachelor of Engineering in Automation Engineering from King Mongkut's University of Technology Thonburi, Thailand, and her Master of Science in Data Science from Newcastle University, UK. She is currently a lecturer in the College of Biomedical Engineering at Rangsit University, where she teaches courses on software design, health information technology, data analytics, and automation engineering.

Pariwat Imura, Rangsit University

He serves as a Lecturer in the Biomedical Engineering Program at the College of Biomedical Engineering, Rangsit University. He holds a Master of Engineering in Biomedical Engineering from Rangsit University and a Bachelor of Science in Computer Science from Rajamangala University of Technology Lanna. His research interests span medical imaging systems, computer communication networks and database management, smart medical systems, big data analytics in medicine, medical artificial intelligence, and embedded systems.

Jaroonrut Prinyakupt, Rangsit University

She earned her Bachelor of Engineering in Electrical Engineering from Prince of Songkla University, Thailand; her Master of Science in Biomedical Instrumentation from Mahidol University, Thailand; and her Doctor of Philosophy in Electrical Engineering from Chulalongkorn University, Thailand. She presently holds the position of Assistant Professor in the College of Biomedical Engineering at Rangsit University, concentrating on teaching and research in biomedical instrumentation, electronic systems, image processing, and programming.

Nuntachai Thongpance, Rangsit University

He currently holds the position of Associate Professor and Dean of the College of Biomedical Engineering at Rangsit University. He established undergraduate and graduate courses in medical instrumentation and biomedical engineering at Rangsit University. Nuntachai earned his Master of Engineering in nuclear technology from Chulalongkorn University in 1987 and his Bachelor of Science in physics with second-class honors from Prince of Songkla University in 1984. His research interests encompass medical devices, biomedical engineering, and healthcare management engineering.

Anuchart Srisiriwat, Pathumwan Institute of Technology

He holds the position of Associate Professor in Electrical Engineering at Pathumwan Institute of Technology, Thailand. He obtained his Ph.D. in Electrical Engineering from King Mongkut's University of Technology North Bangkok. His research interests encompass fuel cell systems, renewable energy, control systems, and engineering education. He has extensive experience in academic administration and has received many national awards and royal honors for his services to education and society.

References

R. T. Sadia, J. Chen, and J. Zhang, “CT image denoising methods for image quality improvement and radiation dose reduction,” Journal of Applied Clinical Medical Physics, vol. 25, no. 2, p. e14270, 2024, doi: 10.1002/acm2.14270.

F. Fan et al., “Quadratic autoencoder (Q-AE) for low-dose CT denoising,” IEEE Transactions on Medical Imaging, vol. 39, no. 6, pp. 2035-2050, 2019, doi: 10.1109/TMI.2019.2963248.

R. Muwardi, M. Yunita, H. Ghifarsyam, and H. Juliyanto, “Optimize Image Processing Algorithm on ARM Cortex-A72 and A53,” Jurnal Ilmiah Teknik Elektro Komputer Dan Informatika, vol. 8, no. 3, pp. 399-409, 2022, doi: 10.26555/jiteki.v8i3.24457.

D. Sharma and N. Agrawal, “Development of Modified CNN Algorithm for Agriculture Product: A Research Review,” Jurnal Ilmiah Teknik Elektro Komputer Dan Informatika, vol. 8, no. 1, pp. 167-174, 2022, doi: 10.26555/jiteki.v8i1.23722.

R. Bello and C. Oluigbo, “Deep Learning-Based SOLO Architecture for Re-Identification of Single Persons by Locations,” Jurnal Ilmiah Teknik Elektro Komputer Dan Informatika, vol. 8, no. 4, pp. 599-609, 2022, doi: 10.26555/jiteki.v8i4.25059.

B. Suprapto, A. Wahyudin, H. Hikmarika, and S. Dwijayanti, “The Detection System of Helipad for Unmanned Aerial Vehicle Landing Using YOLO Algorithm,” Jurnal Ilmiah Teknik Elektro Komputer Dan Informatika, vol. 7, no. 2, pp. 193-206, 2021, doi: 10.26555/jiteki.v7i2.20684.

R. Bello, C. Oluigbo, and O. Moradeyo, “Motorcycling-Net: A Segmentation Approach for Detecting Motorcycling Near Misses,” Jurnal Ilmiah Teknik Elektro Komputer Dan Informatika, vol. 9, no. 1, pp. 96-106, 2023, doi: 10.26555/jiteki.v9i1.25614.

A. Kamilaris and F. X. Prenafeta-Boldú, “A review of the use of convolutional neural networks in agriculture,” The Journal of Agricultural Science, vol. 156, no. 3, pp. 312-322, Mar. 2018, doi: 10.1017/S0021859618000436.

R. H. Abiyev and M. K. S. Ma’aitaH, “Deep convolutional neural networks for chest diseases detection,” Journal of Healthcare Engineering, vol. 2018, pp. 1-16, Nov. 2018, doi: 10.1155/2018/4168538.

F. Schwendicke, T. Golla, M. Dreher, and J. Krois, “Convolutional neural networks for dental image diagnostics: A scoping review,” Journal of Dentistry, vol. 91, p. 103226, Mar. 2019, doi: 10.1016/j.jdent.2019.103226.

G. N. Nguyen, N. H. Le Viet, M. Elhoseny, K. Shankar, B. B. Gupta, and A. A. Abd El-Latif, “Secure blockchain enabled Cyber-physical systems in healthcare using deep belief network with ResNet model,” Journal of parallel and distributed computing, vol. 153, pp. 150-160, 2021, doi: 10.1016/j.jpdc.2021.03.011.

C. D. Vo, D. A. Dang, and P. H. Le, “Development of Multi-Robotic Arm System for Sorting System Using Computer Vision,” Journal of Robotics and Control (JRC), vol. 3, no. 5, pp. 690-698, 2022, doi: 10.18196/jrc.v3i5.15661.

A. N. Hidayah, S. A. Radzi, N. A. Razak, W. H. M. Saad, Y. C. Wong, and A. A. Naja, “Disease Detection of Solanaceous Crops Using Deep Learning for Robot Vision,” Journal of Robotics and Control (JRC), vol. 3, no. 6, pp. 790-799, 2022, doi: 10.18196/jrc.v3i6.15948.

Z. Zou, K. Chen, Z. Shi, Y. Guo, and J. Ye, "Object Detection in 20 Years: A Survey," in Proceedings of the IEEE, vol. 111, no. 3, pp. 257-276, March 2023, doi: 10.1109/JPROC.2023.3238524.

P. Rosyady and R. Sumiharto, “Highway Visual Tracking System using Thresholding and Hough Transform,” Jurnal Ilmiah Teknik Elektro Komputer Dan Informatika, vol. 4, no. 2, pp. 93-99, 2019, doi: 10.26555/jiteki.v4i2.12016.

T. Agrawal and P. Choudhary, “Segmentation and classification on chest radiography: a systematic survey,” The Visual Computer, vol. 39, no. 3, pp. 875-913, 2023, doi: 10.1007/s00371-021-02352-7.

S. Saha Roy, S. Roy, P. Mukherjee, and A. Halder Roy, “An automated liver tumour segmentation and classification model by deep learning based approaches,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 11, no. 3, pp. 638-650, 2023, doi: 10.1080/21681163.2022.2099300.

M. Xu, K. Huang, and X. Qi, “A Regional-Attentive Multi-Task Learning Framework for Breast Ultrasound Image Segmentation and Classification,” IEEE Access, vol. 11, pp. 5377-5392, 2023, doi: 10.1109/ACCESS.2023.3236693.

A. Iqbal and M. Sharif, “BTS-ST: Swin transformer network for segmentation and classification of multimodality breast cancer images,” Knowledge-Based Systems, vol. 267, p. 110393, 2023, doi: 10.1016/j.knosys.2023.110393.

M. K. Hasan, M. A. Ahamad, C. H. Yap, and G. Yang, “A survey, review, and future trends of skin lesion segmentation and classification,” Computers in Biology and Medicine, p. 106624, 2023, doi: 10.1016/j.compbiomed.2023.106624.

T. Dang and H. Tran, “A Secured, Multilevel Face Recognition based on Head Pose Estimation, MTCNN and FaceNet,” Journal of Robotics and Control (JRC), vol. 4, no. 4, pp. 431-437, 2023, doi: 10.18196/jrc.v4i4.18780.

M. I. Rusydi, A. Novira, T. Nakagome, J. Muguro, R. Nakajima, W. Njeri, et al., “Autonomous Movement Control of Coaxial Mobile Robot based on Aspect Ratio of Human Face for Public Relation Activity Using Stereo Thermal Camera,” Journal of Robotics and Control (JRC), vol. 3, no. 3, pp. 361-373, 2022, doi: 10.18196/jrc.v3i3.14750.

X. Zheng, Q. Lei, R. Yao, Y. Gong, and Q. Yin, “Image segmentation based on adaptive K-means algorithm,” EURASIP Journal on Image and Video Processing, vol. 2018, no. 1, pp. 1-10, 2018, doi: 10.1186/s13640-018-0309-3.

R. Srikanth and K. Bikshalu, “Multilevel thresholding image segmentation based on energy curve with harmony Search Algorithm,” Ain Shams Engineering Journal, vol. 12, no. 1, pp. 1-20, 2021, doi: 10.1016/j.asej.2020.09.003.

G. Aletti, A. Benfenati, and G. Naldi, “A semiautomatic multi-label color image segmentation coupling Dirichlet problem and colour distances,” Journal of Imaging, vol. 7, no. 10, p. 208, 2021, doi: 10.3390/jimaging7100208.

C. Suguna and S. P. Balamurugan, “Computer Aided Diagnosis for Cervical Cancer Screening using Monarch Butterfly Optimization with Deep Learning Model,” in 2023 5th International Conference on Smart Systems and Inventive Technology (ICSSIT), pp. 1059-1064, 2023, doi: 10.1109/ICSSIT55814.2023.10060959.

G. Sun, X. Jia, and T. Geng, “Plant diseases recognition based on image processing technology,” Journal of Electrical and Computer Engineering, vol. 2018, 2018, doi: 10.1155/2018/6070129.

A. Mohan and S. Poobal, “Crack detection using image processing: A critical review and analysis,” Alexandria Engineering Journal, vol. 57, no. 2, pp. 787-798, 2018, doi: 10.1016/j.aej.2017.01.020.

B. Li and Y. He, “An improved ResNet based on the adjustable shortcut connections,” IEEE Access, vol. 6, pp. 18967-18974, 2018, doi: 10.1109/ACCESS.2018.2814605.

D. Marmanis, K. Schindler, J. D. Wegner, S. Galliani, M. Datcu, and U. Stilla, “Classification with an edge: Improving semantic image segmentation with boundary detection,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 135, pp. 158-172, 2018, doi: 10.48550/arXiv.1612.01337.

J. Duan, T. Shi, H. Zhou, J. Xuan, and S. Wang, “A novel ResNet-based model structure and its applications in machine health monitoring,” Journal of Vibration and Control, vol. 27, no. 9-10, pp. 1036-1050, 2021, doi: 10.1177/107754632093650.

H. Yu, H. Sun, J. Tao, C. Qin, D. Xiao, Y. Jin, and C. Liu, “A multi-stage data augmentation and AD-ResNet-based method for EPB utilization factor prediction,” Automation in Construction, vol. 147, p. 104734, 2023, doi: 10.1016/j.autcon.2022.104734.

S. Ayyachamy, V. Alex, M. Khened, and G. Krishnamurthi, “Medical image retrieval using Resnet-18,” Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications, vol. 10954, pp. 233-241, 2019, doi: 10.1117/12.2515588.

M. Gao, D. Qi, H. Mu, and J. Chen, “A transfer residual neural network based on ResNet-34 for detection of wood knot defects,” Forests, vol. 12, no. 2, p. 212, 2021, doi: 10.3390/f12020212.

L. Wen, X. Li, and L. Gao, “A transfer convolutional neural network for fault diagnosis based on ResNet-50,” Neural Computing and Applications, vol. 32, pp. 6111-6124, 2020, doi: 10.1007/s00521-019-04097-w.

B. Yu, L. Yang, and F. Chen, “Semantic segmentation for high spatial resolution remote sensing images based on convolution neural network and pyramid pooling module,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 9, pp. 3252-3261, Sep. 2018, doi: 10.1109/JSTARS.2018.2860989.

F. Tabassum, M. I. Islam, R. T. Khan, and M. R. Amin, “Human face recognition with combination of DWT and machine learning,” Journal of King Saud University-Computer and Information Sciences, vol. 34, no. 3, pp. 546-556, 2022, doi: 10.1016/j.jksuci.2020.02.002.

M. Sajjad, M. Nasir, K. Muhammad, S. Khan, Z. Jan, A. K. Sangaiah, et al., “Raspberry Pi assisted face recognition framework for enhanced law-enforcement services in smart cities,” Future Generation Computer Systems, vol. 108, pp. 995-1007, 2020, doi: 10.1016/j.future.2017.11.013.

L. Li, X. Mu, S. Li, and H. Peng, “A review of face recognition technology,” IEEE Access, vol. 8, pp. 139110-139120, 2020, doi: 10.1109/ACCESS.2020.3011028.

H. Ling, J. Wu, L. Wu, J. Huang, J. Chen, and P. Li, “Self residual attention network for deep face recognition,” IEEE Access, vol. 7, pp. 55159-55168, 2019, doi: 10.1109/ACCESS.2019.2913205.

A. Jha, “Classroom attendance system using facial recognition system,” The International Journal of Mathematics, Science, Technology, and Management, vol. 2, no. 3, pp. 4-7, 2007, doi: 10.1051/itmconf/20203202001.

R. I. Bendjillali, M. Beladgham, K. Merit, and A. Taleb-Ahmed, “Illumination-robust face recognition based on deep convolutional neural networks architectures,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 18, no. 2, pp. 1015-1027, 2020, doi: 10.11591/ijeecs.v18.i2.pp1015-1027.

M. Z. Khan, S. Harous, S. U. Hassan, M. U. G. Khan, R. Iqbal, and S. Mumtaz, “Deep unified model for face recognition based on convolution neural network and edge computing,” IEEE Access, vol. 7, pp. 72622-72633, 2019, doi: 10.1109/ACCESS.2019.2918275.

A. Alzu’bi, F. Albalas, T. Al-Hadhrami, L. B. Younis, and A. Bashayreh, “Masked face recognition using deep learning: A review,” Electronics, vol. 10, no. 21, p. 2666, 2021, doi: 10.3390/electronics10212666.

Y. Tang, M. Chen, C. Wang, L. Luo, J. Li, G. Lian, and X. Zou, “Recognition and localization methods for vision-based fruit picking robots: A review,” Frontiers in Plant Science, vol. 11, p. 510, 2020, doi: 10.3389/fpls.2020.00510.

H. A. Williams, M. H. Jones, M. Nejati, M. J. Seabright, J. Bell, N. D. Penhall, et al., “Robotic kiwifruit harvesting using machine vision, convolutional neural networks, and robotic arms,” Biosystems Engineering, vol. 181, pp. 140-156, 2019, doi: 10.1016/j.biosystemseng.2019.03.007.

S. D. Kumar, S. Esakkirajan, S. Bama, and B. Keerthiveena, “A microcontroller based machine vision approach for tomato grading and sorting using SVM classifier,” Microprocessors and Microsystems, vol. 76, p. 103090, 2020, doi: 10.1016/j.micpro.2020.103090.

M. Halstead, C. McCool, S. Denman, T. Perez, and C. Fookes, “Fruit quantity and ripeness estimation using a robotic vision system,” IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 2995-3002, 2018, doi: 10.1109/LRA.2018.2849514.

S. Wan and S. Goudos, “Faster R-CNN for multi-class fruit detection using a robotic vision system,” Computer Networks, vol. 168, p. 107036, 2020, doi: 10.1016/j.comnet.2019.107036.

M. T. Habib, A. Majumder, A. Z. M. Jakaria, M. Akter, M. S. Uddin, and F. Ahmed, “Machine vision based papaya disease recognition,” Journal of King Saud University-Computer and Information Sciences, vol. 32, no. 3, pp. 300-309, 2020, doi: 10.1016/j.jksuci.2018.06.006.

J. J. Zhuang, S. M. Luo, C. J. Hou, Y. Tang, Y. He, and X. Y. Xue, “Detection of orchard citrus fruits using a monocular machine vision-based method for automatic fruit picking applications,” Computers and Electronics in Agriculture, vol. 152, pp. 64-73, 2018, doi: 10.1016/j.compag.2018.07.004.

A. Gongal, M. Karkee, and S. Amatya, “Apple fruit size estimation using a 3D machine vision system,” Information Processing in Agriculture, vol. 5, no. 4, pp. 498-503, 2018, doi: 10.1016/j.inpa.2018.06.002.

Z. Wang, J. Underwood, and K. B. Walsh, “Machine vision assessment of mango orchard flowering,” Computers and Electronics in Agriculture, vol. 151, pp. 501-511, 2018, doi: 10.1016/j.compag.2018.06.040.

X. Zhao, P. Sun, Z. Xu, H. Min, and H. Yu, “Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications,” IEEE Sensors Journal, vol. 20, no. 9, pp. 4901-4913, 2020, doi: 10.1109/JSEN.2020.2966034.

W. Zhao, W. Ma, L. Jiao, P. Chen, S. Yang, and B. Hou, “Multi-Scale Image Block-Level F-CNN for Remote Sensing Images Object Detection,” IEEE Access, vol. 7, pp. 43607-43621, 2019, doi: 10.1109/ACCESS.2019.2908016.

T. Zhou, D. P. Fan, M. M. Cheng, J. Shen, and L. Shao, “RGB-D Salient Object Detection: A Survey,” Computational Visual Media, vol. 7, pp. 37-69, 2021, doi: 10.48550/arXiv.2008.00230.

M. Shirpour, N. Khairdoost, M. Bauer, and S. Beauchemin, “Traffic Object Detection and Recognition Based on the Attentional Visual Field of Drivers,” IEEE Transactions on Intelligent Vehicles, 2021, doi: 10.1109/TIV.2021.3133849.

S. S. A. Zaidi, M. S. Ansari, A. Aslam, N. Kanwal, M. Asghar and B. Lee, “A Survey of Modern Deep Learning Based Object Detection Models,” Digital Signal Processing, vol. 115, p. 103514, 2022, doi: 10.48550/arXiv.2104.11892.

A. Kuznetsova et al., “The Open Images Dataset V4: Unified Image Classification, Object Detection, and Visual Relationship Detection at Scale,” International Journal of Computer Vision, vol. 128, no. 7, pp. 1956-1981, 2020, doi: 10.48550/arXiv.1811.00982.

X. Chen, H. Li, Q. Wu, K. N. Ngan, and L. Xu, “High-Quality R-CNN Object Detection Using Multi-Path Detection Calibration Network,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 2, pp. 715-727, 2021, doi: 10.1109/TCSVT.2020.2987465.

S. Zhang, Y. Wu, C. Men, and X. Li, “Tiny YOLO Optimization Oriented Bus Passenger Object Detection,” Chinese Journal of Electronics, vol. 29, no. 1, pp. 132-138, 2020, doi: 10.1049/cje.2019.11.002.

A. Suhail, M. Jayabalan, and V. Thiruchelvam, “Convolutional Neural Network Based Object Detection: A Review,” Journal of Critical Reviews, vol. 7, no. 11, pp. 786-792, 2020, doi: 10.48550/arXiv.1905.01614.

M. D. Hossain and D. Chen, “Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 150, pp. 115-134, 2019, doi: 10.1016/j.isprsjprs.2019.02.009.

H. Costa, G. M. Foody, and D. S. Boyd, “Supervised methods of image segmentation accuracy assessment in land cover mapping,” Remote Sensing of Environment, vol. 205, pp. 338-351, 2018, doi: 10.1016/j.rse.2017.11.024.

Y. Xiao, L. Daniel, and M. Gashinova, “Image segmentation and region classification in automotive high-resolution radar imagery,” IEEE Sensors Journal, vol. 21, no. 5, pp. 6698-6711, 2020, doi: 10.1109/JSEN.2020.3043586.

M. Z. Alom, C. Yakopcic, M. Hasan, T. M. Taha, and V. K. Asari, “Recurrent residual U-Net for medical image segmentation,” Journal of Medical Imaging, vol. 6, no. 1, pp. 014006-014006, 2019, doi: 10.1117/1.JMI.6.1.014006.

N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, “U-net and its variants for medical image segmentation: A review of theory and applications,” IEEE Access, vol. 9, pp. 82031-82057, 2021, doi: 10.1109/ACCESS.2021.3086020.

M. H. Hesamian, W. Jia, X. He, and P. Kennedy, “Deep learning techniques for medical image segmentation: achievements and challenges,” Journal of digital imaging, vol. 32, pp. 582-596, 2019, doi: 10.1007/s10278-019-00227-x.

A. Sinha and J. Dolz, “Multi-scale self-guided attention for medical image segmentation,” IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 1, pp. 121-130, 2021, doi: 10.48550/arXiv.1906.02849.

F. Shi et al., “Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19,” IEEE Reviews in Biomedical Engineering, vol. 14, pp. 4-15, 2021, doi: 10.1109/RBME.2020.2987975.

F. Munawar, S. Azmat, T. Iqbal, C. Grönlund, and H. Ali, “Segmentation of lungs in chest X-ray image using generative adversarial networks,” IEEE Access, vol. 8, pp. 153535-153545, 2020, doi: 10.1109/ACCESS.2020.3017915.

S. Wang, D. M. Yang, R. Rong, X. Zhan, and G. Xiao, “Pathology image analysis using segmentation deep learning algorithms,” The American Journal of Pathology, vol. 189, no. 9, pp. 1686-1698, 2019, doi: 10.1016/j.ajpath.2019.05.007.

S. M. Anwar, M. Majid, A. Qayyum, M. Awais, M. Alnowami, and M. K. Khan, “Medical image analysis using convolutional neural networks: a review,” Journal of Medical Systems, vol. 42, pp. 1-13, 2018, doi: 10.1007/s10916-018-1088-1.

Y. Lu, W. Ma, X. Dong, M. Brown, T. Lu, and W. Gan, “Differentiate Xp11.2 Translocation Renal Cell Carcinoma from Computed Tomography Images and Clinical Data with ResNet-18 CNN and XGBoost,” CMES - Computer Modeling in Engineering & Sciences, vol. 136, no. 1, 2023, doi: 10.32604/cmes.2023.024909.

V. Narayan, P. K. Mall, A. Alkhayyat, K. Abhishek, S. Kumar, and P. Pandey, “[Retracted] Enhance‐Net: An Approach to Boost the Performance of Deep Learning Model Based on Real‐Time Medical Images,” Journal of Sensors, vol. 2023, no. 1, p. 8276738, 2023, doi: 10.1155/2023/8276738.

S. A. El-Feshawy, W. Saad, M. Shokair, and M. Dessouky, “IoT framework for brain tumor detection based on optimized modified ResNet 18 (OMRES),” The Journal of Supercomputing, vol. 79, no. 1, pp. 1081-1110, 2023, doi: 10.1007/s11227-022-04678-y.

P. Chotikunnan, T. Puttasakul, R. Chotikunnan, B. Panomruttanarug, M. Sangworasil, and A. Srisiriwat, “Evaluation of single and dual image object detection through image segmentation using ResNet18 in robotic vision applications,” Journal of Robotics and Control (JRC), vol. 4, no. 3, pp. 263-277, 2023, doi: 10.18196/jrc.v4i3.17932.

R. BME, Anonymized CT DICOM Dataset Compilation for Research Purposes. Zenodo, 2016, doi: 10.5281/zenodo.152448.

Published

2025-06-19

How to Cite

[1]
P. Chotikunnan, “Noise-Reduced 3D Organ Modeling from CT Images Using Median Filtering for Anatomical Preservation in Medical 3D Printing”, J Robot Control (JRC), vol. 6, no. 4, pp. 1600–1611, Jun. 2025.

Issue

Section

Articles
