Mobility Aid for the Visually Impaired Using Machine Learning and Spatial Audio
DOI: https://doi.org/10.18196/jrc.v6i2.25245

Keywords: Assistive Technology, Blind People, Time-of-Flight Camera, K-Means, Image Recognition, Concurrent Programming

Abstract
Assistive technology is crucial in enhancing the quality of life for individuals with disabilities, including the visually impaired, yet many mobility aids still lack advanced features such as real-time machine learning-based object detection and spatial audio for environmental awareness. This research contributes to more intelligent and adaptable assistive technology for visually impaired individuals by presenting a head-mounted mobility aid that integrates a time-of-flight camera, a web camera, and a touch sensor with K-Means clustering, Convolutional Neural Networks (CNNs), and concurrent programming on a Raspberry Pi 4B to detect and classify surrounding obstacles and objects. The system converts obstacle data into spatial audio, allowing users to perceive their surroundings through sound direction and intensity. Object recognition is activated via a touch sensor and reports each object's distance and direction relative to the user through an audio description. The concurrent programming implementation improves execution time by 50.22% compared to an Infinite Loop Design (ILD), enhancing real-time responsiveness. The system nevertheless has limitations: object recognition is restricted to 80 predefined categories, the detection range is 4 meters, accuracy decreases under high-intensity sunlight, and external noise can interfere with spatial audio perception. Overall, the device demonstrates that machine learning-based assistive technology can support the mobility of blind users in a flexible, wearable form.
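To make the pipeline described in the abstract concrete, the following minimal Python sketch (not the authors' implementation) illustrates one way such a system could be organized under stated assumptions: a producer thread reads time-of-flight point clouds, a consumer thread clusters them with K-Means into obstacle centroids within the 4-meter range, and each centroid is mapped to a left/right pan and loudness for a spatial audio cue. The helpers read_tof_frame and play_cue are hypothetical placeholders.

# Illustrative sketch only (not the authors' code). Assumes a ToF frame arrives as
# an (N, 3) NumPy array of x, y, z coordinates in metres; read_tof_frame and
# play_cue are hypothetical helpers standing in for the camera and audio back ends.
import queue
import threading

import numpy as np
from sklearn.cluster import KMeans

# Bounded queue so the producer waits instead of piling up stale frames.
frame_queue = queue.Queue(maxsize=2)

def capture_loop(read_tof_frame):
    """Producer thread: push ToF point clouds onto the queue."""
    while True:
        points = read_tof_frame()              # hypothetical camera helper
        if points is not None and len(points):
            frame_queue.put(points)

def sonify_loop(play_cue, k=3, max_range=4.0):
    """Consumer thread: cluster points into k obstacles and emit audio cues."""
    while True:
        points = frame_queue.get()
        near = points[points[:, 2] < max_range]   # keep points within the 4 m range
        if len(near) < k:
            continue
        centers = KMeans(n_clusters=k, n_init=10).fit(near).cluster_centers_
        for x, _, z in centers:
            azimuth = np.arctan2(x, z)                        # +right / -left, radians
            pan = float(np.clip(azimuth / (np.pi / 2), -1, 1))
            gain = float(np.clip(1.0 - z / max_range, 0, 1))  # closer obstacle -> louder cue
            play_cue(pan=pan, gain=gain)                      # hypothetical spatial audio helper

def start(read_tof_frame, play_cue):
    """Wire the two loops onto daemon threads, mirroring the concurrent pipeline."""
    threading.Thread(target=capture_loop, args=(read_tof_frame,), daemon=True).start()
    threading.Thread(target=sonify_loop, args=(play_cue,), daemon=True).start()

Decoupling frame capture from clustering and audio rendering in separate threads is one plausible reading of the concurrent design that the paper reports as a 50.22% execution-time improvement over a sequential infinite-loop design; the actual task decomposition and audio rendering in the published system may differ.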
License
Copyright (c) 2025 Wahyudi Wahyudi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
This journal is based on the work at https://journal.umy.ac.id/index.php/jrc under a Creative Commons Attribution-ShareAlike 4.0 International License. You are free to:
- Share – copy and redistribute the material in any medium or format.
- Adapt – remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms, which include the following:
- Attribution. You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- ShareAlike. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
- No additional restrictions. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
JRC is licensed under a Creative Commons Attribution-ShareAlike (CC BY-SA) 4.0 International License.