Improving YOLO Object Detection Performance on Single-Board Computer using Virtual Machine
DOI: https://doi.org/10.18196/eist.v5i1.22486

Keywords: deep learning, single-board computer, virtual machine, edge computing, optimization

Abstract
Single-board computers have gained popularity over the past decade, driven largely by advances in deep learning. Deep learning involves computational workloads that are beyond the capabilities of ordinary microcontrollers, necessitating the use of single-board computers. However, single-board computers are primarily designed to operate efficiently in low-power environments, so optimization is crucial for running deep learning algorithms effectively on them. In this work, we explore the impact of using the DeepStream framework to run deep learning algorithms, specifically the YOLO algorithm, on NVIDIA Jetson single-board computers. The DeepStream framework can be executed in virtual machines, notably Docker, to improve the performance and portability of the model. Additionally, deploying the Docker virtual machine from a removable disk can further enhance portability and even increase the algorithm's speed. Our benchmarks indicate that real-time streaming of the YOLO algorithm can run up to 8.5 times faster when deployed from a Docker virtual machine.
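As a concrete illustration of the deployment approach summarized above, the sketch below shows one way to launch NVIDIA's DeepStream container on a Jetson board from Python and run the reference deepstream-app pipeline inside it. This is a minimal sketch under stated assumptions, not the authors' exact setup: the image tag, config path, and volume mount are hypothetical and must be matched to the JetPack/DeepStream release actually installed on the device.

    # Minimal sketch: start NVIDIA's DeepStream container on a Jetson board
    # with the GPU runtime exposed, then run the reference deepstream-app
    # against a YOLO pipeline configuration.
    # ASSUMPTIONS: the image tag, host config directory, and config file name
    # below are illustrative only and must be adapted to the local setup.
    import subprocess

    IMAGE = "nvcr.io/nvidia/deepstream-l4t:6.0-samples"   # assumed tag
    CONFIG = "/workspace/configs/deepstream_yolo.txt"     # hypothetical config file

    subprocess.run(
        [
            "docker", "run", "--rm",
            "--runtime", "nvidia",                          # expose the Jetson GPU to the container
            "-v", "/home/user/configs:/workspace/configs",  # assumed host config directory
            IMAGE,
            "deepstream-app", "-c", CONFIG,                 # run the DeepStream reference app
        ],
        check=True,  # raise if the container exits with a non-zero status
    )

The same command can be issued directly from a shell; wrapping it in Python simply makes it easy to script benchmark runs such as the ones reported above.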
Copyright
Authors should be aware that by submitting an article to this journal, the article's copyright is fully transferred to the Journal of Emerging Information Science and Technology. Authors may submit the manuscript to another journal, or intentionally withdraw it, only if both parties (the Journal of Emerging Information Science and Technology and the authors) have agreed on the matter. Once the manuscript has been published, authors may use their published article under the journal's copyright.
All authors are required to sign a license transfer agreement when they submit a manuscript to the Journal of Emerging Information Science and Technology. By signing the agreement, authors attribute copyright to the journal, which protects the intellectual material on their behalf. Authors remain free to share, copy, and redistribute the material in any medium and in any circumstances, provided appropriate credit is given, so that the work reaches a wide readership.
License
Articles published in the Journal of Emerging Information Science and Technology are licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license. You are free to:
- Share — copy and redistribute the material in any medium or format.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
This license is acceptable for Free Cultural Works. The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.