Real-Time Prohibited Item Detection in X-ray Security Screening via Adaptive Multi-scale Feature Fusion and Lightweight Dynamic Convolutions
DOI: https://doi.org/10.18196/jrc.v6i4.27030

Keywords: Prohibited Item Detection, Real-Time X-Ray Screening, Adaptive Feature Fusion, Lightweight Dynamic Convolutions, Deployment-Efficient Vision Systems

Abstract
Prohibited item detection in X-ray security screening is challenging because concealed objects vary widely in shape, size, and material. In this paper, we propose a novel end-to-end framework that integrates adaptive multi-scale convolution blocks (AMC blocks) and an adaptive lightweight convolution module (ALCM) to address these challenges with high accuracy and efficiency. The AMC block leverages parallel convolutional paths with varying kernel sizes and dilation rates, capturing both fine-grained and large-scale features. This multi-scale strategy ensures that small items such as wires and larger objects such as bags or metallic weapons are detected equally well. Building on the multi-stage features extracted by the AMC blocks, we introduce the ALCM to refine and fuse feature maps at different pyramid levels. The ALCM employs a dynamic weight generator (DWG), which adaptively assigns importance to multiple convolutional kernels based on local content, followed by multi-scale depthwise convolutions (MSDC), a lightweight mechanism that enriches features across scales using parallel convolutions with different receptive fields. This approach enhances spatial context while keeping the parameter overhead minimal. Experimental results on two large-scale public X-ray datasets, OPIXray and HiXray, demonstrate that our method achieves state-of-the-art performance while maintaining real-time inference speed. Specifically, our model achieves 91.2% mAP@0.5 and 78.4% mAP@0.5:0.95 on OPIXray, and 87.3% mAP@0.5 and 73.5% mAP@0.5:0.95 on HiXray, outperforming strong baselines including YOLOv9 and Faster R-CNN. Despite this competitive accuracy, the model remains efficient at 92.0 GFLOPs and 42 FPS. Furthermore, we examine the generalizability of our system across varied X-ray imaging settings and discuss failure cases such as false negatives in cluttered scenes. These findings highlight the practical applicability of our approach for deployment at real-world security checkpoints, striking a strong balance between detection accuracy and computational efficiency.
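Since only the abstract is available here, the following is a minimal PyTorch sketch of the two modules as described above. The class names AMCBlock and ALCM mirror the paper's terminology, but every concrete choice (kernel sizes, dilation rates, branch count, channel split, softmax gating, and the residual connection) is an illustrative assumption rather than the authors' exact design.

```python
# Hedged sketch of the AMC block and ALCM from the abstract; all hyperparameters
# below are illustrative assumptions, not the published configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AMCBlock(nn.Module):
    """Adaptive multi-scale convolution block: parallel conv paths with
    different kernel sizes and dilation rates, fused by a 1x1 projection."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, 3, padding=1, dilation=1),  # fine detail
            nn.Conv2d(in_ch, branch_ch, 3, padding=2, dilation=2),  # mid-range context
            nn.Conv2d(in_ch, branch_ch, 3, padding=4, dilation=4),  # large-scale context
            nn.Conv2d(in_ch, branch_ch, 5, padding=2, dilation=1),  # wider local kernel
        ])
        self.fuse = nn.Conv2d(branch_ch * 4, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        # Concatenate all scale branches, then mix channels with a 1x1 conv.
        y = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        return F.relu(self.bn(self.fuse(y)))


class ALCM(nn.Module):
    """Adaptive lightweight convolution module: a dynamic weight generator (DWG)
    scores K parallel multi-scale depthwise convolutions (MSDC) per input,
    then mixes their outputs with the predicted weights."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        k = len(kernel_sizes)
        # DWG: global context -> per-branch importance scores.
        self.dwg = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, k, 1),
        )
        # MSDC: depthwise convs with different receptive fields (cheap in params).
        self.msdc = nn.ModuleList([
            nn.Conv2d(channels, channels, ks, padding=ks // 2, groups=channels)
            for ks in kernel_sizes
        ])
        self.proj = nn.Conv2d(channels, channels, 1)  # pointwise channel mixing

    def forward(self, x):
        w = torch.softmax(self.dwg(x), dim=1)  # (B, K, 1, 1) branch weights
        y = sum(w[:, i:i + 1] * conv(x) for i, conv in enumerate(self.msdc))
        return self.proj(y) + x  # residual refinement (assumed)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)  # one pyramid-level feature map
    out = ALCM(64)(AMCBlock(64, 64)(feat))
    print(out.shape)  # torch.Size([1, 64, 80, 80])
```

Under these assumptions, each depthwise branch adds only about C·k² parameters (versus C²·k² for a standard convolution), which is consistent with the abstract's claim that the MSDC keeps parameter overhead minimal.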
Copyright (c) 2025 Hoanh Nguyen, Chi Kien Ha

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.