A Recurrent Deep Architecture for Enhancing Indoor Camera Localization Using Motion Blur Elimination

Muhammad S. Alam, Farhan B. Mohamed, Ali Selamat, AKM B. Hossain

Abstract


Rapid growth and technological improvements in computer vision have enabled indoor camera localization. Accurate camera localization in indoor environments remains challenging because of several compounding problems, and motion blur is one of the most significant: it degrades image quality and disrupts feature matching, introducing large errors into the estimated camera pose. For many robotic applications, localization accuracy still needs to improve. In this study, we propose a recurrent neural network (RNN) approach that improves indoor camera localization through motion blur reduction. Motion blur in an image is detected by analyzing its frequency spectrum: blur suppresses high-frequency content, and examining the orientation of the dominant low-frequency components yields an estimate of the blur's direction and extent. Wiener filtering deconvolution then removes the blur and recovers a clean approximation of the original image. The deblurring performance is evaluated by comparing the original and blurred images using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Finally, the camera pose is estimated from the deblurred images or videos using a recurrent neural architecture. The average camera pose error obtained with our approach is (0.16 m, 5.61°), compared with (19 m, 6.25°) and (0.27 m, 9.39°) for two recent methods, Deep Attention and CGAPoseNet, respectively. The proposed approach therefore improves on current results, so applications of indoor camera localization, such as mobile robots and guide robots, can operate more accurately.
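The deblurring and evaluation stage described above (spectrum-based blur detection, Wiener deconvolution, PSNR/SSIM scoring) can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation: the linear-blur kernel, the noise-to-signal ratio `K`, and the spectral `cutoff` are assumed values chosen for the demo.

```python
import numpy as np

def motion_blur_psf(length=9, angle_deg=30.0, size=15):
    """Linear motion-blur point-spread function (a normalized line segment)."""
    psf = np.zeros((size, size))
    c, a = size // 2, np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, 4 * size):
        y, x = int(round(c + t * np.sin(a))), int(round(c + t * np.cos(a)))
        if 0 <= y < size and 0 <= x < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def psf_to_otf(psf, shape):
    """Pad the PSF to image size and center it at the origin to avoid a shift."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def high_freq_ratio(img, cutoff=0.25):
    """Share of spectral energy above `cutoff` of the Nyquist radius.
    Motion blur suppresses high frequencies, so blurred images score lower."""
    h, w = img.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt(((yy - h // 2) / (h / 2)) ** 2 + ((xx - w // 2) / (w / 2)) ** 2)
    return spec[r > cutoff].sum() / spec.sum()

def wiener_deconvolve(blurred, psf, K=1e-3):
    """Frequency-domain Wiener filter; K is the noise-to-signal power ratio."""
    H = psf_to_otf(psf, blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

def psnr(a, b, peak=1.0):
    return 10.0 * np.log10(peak ** 2 / np.mean((a - b) ** 2))

def ssim_global(a, b, peak=1.0):
    """Single-window SSIM over the whole image (simplified, no sliding window)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    ma, mb = a.mean(), b.mean()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma ** 2 + mb ** 2 + c1) * (a.var() + b.var() + c2))

# Demo on a synthetic sharp image: blur it, detect the blur, restore it, score it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
psf = motion_blur_psf()
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * psf_to_otf(psf, sharp.shape)))
restored = wiener_deconvolve(blurred, psf)
print(f"PSNR blurred {psnr(blurred, sharp):.1f} dB -> restored {psnr(restored, sharp):.1f} dB")
```

In the paper's full pipeline, the deblurred frames would then be fed to the recurrent pose-regression network; this sketch covers only the preprocessing and evaluation stage.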

Keywords


Camera Pose Estimation; Indoor Camera Localization; Indoor Robot Navigation; Motion Blur; RNN; SLAM.


References


M. Sewtz, X. Luo, J. Landgraf, T. Bodenmüller and R. Triebel, “Robust Approaches for Localization on Multi-camera Systems in Dynamic Environments,” 2021 7th International Conference on Automation, Robotics and Applications (ICARA), pp. 211-215, 2021, doi: 10.1109/ICARA51699.2021.9376475.

M. S. Alam, F. B. Mohamed, A. Selamat and A. B. Hossain, “A Review of Recurrent Neural Network Based Camera Localization for Indoor Environments,” in IEEE Access, vol. 11, pp. 43985-44009, 2023, doi: 10.1109/ACCESS.2023.3272479.

A. Raza, L. Lolic, S. Akhter and M. Liut, “Comparing and Evaluating Indoor Positioning Techniques,” 2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1-8, 2021, doi: 10.1109/IPIN51156.2021.9662632.

J. Zhang and H. Mao, “WKNN indoor positioning method based on spatial feature partition and basketball motion capture,” Alexandria Engineering Journal, vol. 61, no. 1, pp. 125–134, 2022, doi: 10.1016/j.aej.2021.04.078.

C. E. A. Bundak, M. A. Abd Rahman, M. K. A. Karim, and N. H. Osman, “Fuzzy rank cluster top-k Euclidean distance and triangle based algorithm for magnetic field indoor positioning system,” Alexandria Engineering Journal, vol. 61, no. 5, pp. 3645–3655, 2022, doi: 10.1016/j.aej.2021.08.073.

R. Brylka, U. Schwanecke and B. Bierwirth, “Camera Based Barcode Localization and Decoding in Real-World Applications,” 2020 International Conference on Omni-layer Intelligent Systems (COINS), pp. 1-8, 2020, doi: 10.1109/COINS49042.2020.9191416.

J. Guo, R. Ni and Y. Zhao, “DeblurSLAM: A Novel Visual SLAM System Robust in Blurring Scene,” 2021 IEEE 7th International Conference on Virtual Reality (ICVR), pp. 62-68, 2021, doi: 10.1109/ICVR51878.2021.9483818.

H. Yu, H. Zhu, and F. Huang, “Visual simultaneous localization and mapping (SLAM) based on blurred image detection,” Journal of Intelligent & Robotic Systems, vol. 103, no. 1, 2021, doi: 10.1007/s10846-021-01456-5.

P. Wozniak and B. Kwolek, “Deep Embeddings-based Place Recognition Robust to Motion Blur,” 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pp. 1771-1779, 2021, doi: 10.1109/ICCVW54120.2021.00203.

E. Brachmann and C. Rother, “Visual Camera Re-Localization From RGB and RGB-D Images Using DSAC,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 9, pp. 5847-5865, 2022, doi: 10.1109/TPAMI.2021.3070754.

H. Yao, R. W. Stidham, Z. Gao, J. Gryak, and K. Najarian, “Motion-based camera localization system in colonoscopy videos,” Medical Image Analysis, vol. 73, 2021, doi: 10.1016/j.media.2021.102180.

S. Jia, L. Ma, S. Yang and D. Qin, “A Novel Visual Indoor Positioning Method With Efficient Image Deblurring,” in IEEE Transactions on Mobile Computing, vol. 22, no. 7, pp. 3757-3773, 2023, doi: 10.1109/TMC.2022.3143502.

B. Han, Y. Lin, Y. Dong, H. Wang, T. Zhang and C. Liang, “Camera Attributes Control for Visual Odometry With Motion Blur Awareness,” in IEEE/ASME Transactions on Mechatronics, vol. 28, no. 4, pp. 2225-2235, 2023, doi: 10.1109/TMECH.2023.3234316.

H. Li, Z. Zhang, T. Jiang, P. Luo, H. Feng, and Z. Xu, “Real-world deep local motion deblurring,” in Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1314–1322, 2023, doi: 10.48550/arXiv.2204.08179.

P. -E. Sarlin et al., “Back to the Feature: Learning Robust Camera Localization from Pixels to Pose,” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3246-3256, 2021, doi: 10.1109/CVPR46437.2021.00326.

I. Arrouch, N. S. Ahmad, P. Goh, and J. M. Saleh, “Close proximity timeto-collision prediction for autonomous robot navigation: An exponential gpr approach,” Alexandria Engineering Journal, vol. 61, no. 12, pp. 11171–11183, 2022, doi: 10.1016/j.aej.2022.04.041.

T. Xie et al., “A Deep Feature Aggregation Network for Accurate Indoor Camera Localization,” in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 3687-3694, 2022, doi: 10.1109/LRA.2022.3146946.

Q. Li et al., “Structure-guided camera localization for indoor environments,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 202, pp. 219–229, 2023, doi: 10.1016/j.isprsjprs.2023.05.034.

H. Son, J. Lee, J. Lee, S. Cho, and S. Lee, “Recurrent video deblurring with blur-invariant motion estimation and pixel volumes,” ACM Transactions on Graphics, vol. 40, no. 5, pp. 1–18, 2021, doi: 10.1145/3453720.

G. Carbajal, P. Vitoria, M. Delbracio, P. Musé, and J. Lezama, “Non-uniform blur kernel estimation via adaptive basis decomposition,” arXiv preprint, 2021, doi: 10.48550/arXiv.2102.01026.

S. S. Carita, and R. B. Hadiprakso, “Double Face Masks Detection Using Region-Based Convolutional Neural Network,” in Jurnal Ilmiah Teknik Elektro Komputer dan Informatika (JITEKI), vol. 9, no. 4, pp. 904–911, 2023, doi: 10.26555/jiteki.v9i4.23902.

D. Rozumnyi, M. R. Oswald, V. Ferrari and M. Pollefeys, “Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos,” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15969-15978, 2022, doi: 10.1109/CVPR52688.2022.01552.

X. Ge, J. Tan and L. Zhang, “Blind Image Deblurring Using a Non-Linear Channel Prior Based on Dark and Bright Channels,” in IEEE Transactions on Image Processing, vol. 30, pp. 6970-6984, 2021, doi: 10.1109/TIP.2021.3101154.

J. F. Schmid, S. F. Simon, R. Radhakrishnan, S. Frintrop and R. Mester, “HD Ground - A Database for Ground Texture Based Localization,” 2022 International Conference on Robotics and Automation (ICRA), pp. 7628-7634, 2022, doi: 10.1109/ICRA46639.2022.9811977.

V. Gampala, M. S. Kumar, C. Sushama, and E. F. I. Raj, “Deep learning based image processing approaches for image deblurring,” Materials Today: Proceedings, vol. 10, 2020, doi: 10.1016/j.matpr.2020.11.076.

W. Yang, X. Zhang, H. Ma and G. Zhang, “Laser Beams-Based Localization Methods for Boom-Type Roadheader Using Underground Camera Non-Uniform Blur Model,” in IEEE Access, vol. 8, pp. 190327-190341, 2020, doi: 10.1109/ACCESS.2020.3032368.

S. Wang, M. Jiu, L. Chen, S. Li, and M. Xu, “A deep encoder-decoder based primal-dual proximal network for image restoration,” in Fifteenth International Conference on Graphics and Image Processing, pp. 312–322, 2024, doi: 10.1117/12.3021256.

Y. Xu, Y. Zhu, Y. Quan, and H. Ji, “Attentive deep network for blind motion deblurring on dynamic scenes,” Computer Vision and Image Understanding, vol. 205, 2021, doi: 10.1016/j.cviu.2021.103169.

J. Yu, L. Guo, C. Xiao, and Z. Chang, “Edge-Based Blur Kernel Estimation Using Sparse Representation and Self-similarity,” in Image and Graphics: 11th International Conference, pp. 179–205, 2021, doi: 10.1007/978-3-030-87358-5_15.

M. Chang, C. Yang, H. Feng, Z. Xu and Q. Li, “Beyond Camera Motion Blur Removing: How to Handle Outliers in Deblurring,” in IEEE Transactions on Computational Imaging, vol. 7, pp. 463-474, 2021, doi: 10.1109/TCI.2021.3076886.

N. Varghese, A. N. Rajagopalan and Z. A. Ansari, “Real-time Large-motion Deblurring for Gimbal-based imaging systems,” in IEEE Journal of Selected Topics in Signal Processing, doi: 10.1109/JSTSP.2024.3386056.

C. Zhu et al., “Deep recurrent neural network with multi-scale bidirectional propagation for video deblurring,” in Proceedings of the AAAI conference on artificial intelligence, vol. 36, no. 3, pp. 3598–3607, 2022, doi: 10.1609/aaai.v36i3.20272.

W. Z. Shao et al., “DeblurGAN+: Revisiting blind motion deblurring using conditional adversarial networks,” Signal Processing, vol. 168, 2020, doi: 10.1016/j.sigpro.2019.107338.

S. Zhang, A. Zhen, and R. L. Stevenson, “Deep motion blur removal using noisy/blurry image pairs,” Journal of Electronic Imaging, vol. 30, no. 3, pp. 033022–033022, 2021, doi: 10.1117/1.JEI.30.3.033022.

Q. Zhu, M. Zhou, N. Zheng, C. Li, J. Huang and F. Zhao, “Exploring Temporal Frequency Spectrum in Deep Video Deblurring,” 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 12394-12403, 2023, doi: 10.1109/ICCV51070.2023.01142.

W. Niu, K. Zhang, W. Luo and Y. Zhong, “Blind Motion Deblurring Super-Resolution: When Dynamic Spatio-Temporal Learning Meets Static Image Understanding,” in IEEE Transactions on Image Processing, vol. 30, pp. 7101-7111, 2021, doi: 10.1109/TIP.2021.3101402.

X. Hu et al., “Pyramid Architecture Search for Real-Time Image Deblurring,” 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 4278-4287, 2021, doi: 10.1109/ICCV48922.2021.00426.

M. Tian, Q. Nie and H. Shen, “3D Scene Geometry-Aware Constraint for Camera Localization with Deep Learning,” 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 4211-4217, 2020, doi: 10.1109/ICRA40945.2020.9196940.

D. Rozumnyi, M. R. Oswald, V. Ferrari, and M. Pollefeys, “Shape from blur: Recovering textured 3d shape and motion of fast moving objects,” Advances in Neural Information Processing Systems, vol. 34, pp. 29972–29983, 2021, doi: 10.48550/arXiv.2106.0876.

K. Purohit, S. Vasu, M. P. Rao, and A. N. Rajagopalan, “Multiplanar geometry and latent image recovery from a single motion-blurred image,” Machine Vision and Applications, vol. 33, no. 10, 2022, doi: 10.1007/s00138-021-01254-x.

S. Klenk, L. Koestler, D. Scaramuzza and D. Cremers, “E-NeRF: Neural Radiance Fields From a Moving Event Camera,” in IEEE Robotics and Automation Letters, vol. 8, no. 3, pp. 1587-1594, 2023, doi: 10.1109/LRA.2023.3240646.

D. Park, D. U. Kang, J. Kim, and S. Y. Chun, “Multitemporal recurrent neural networks for progressive non-uniform single image deblurring with incremental temporal training,” in Computer Vision–ECCV 2020: 16th European Conference, vol. 12351, pp. 327–343, 2020, doi: 10.1007/978-3-030-58539-6_20.

J. Pan, H. Bai and J. Tang, “Cascaded Deep Video Deblurring Using Temporal Sharpness Prior,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3040-3048, 2020, doi: 10.1109/CVPR42600.2020.00311.

R. M. Yuliza, M. Rhozaly, M. Y. Lenni, and G. E. Yehezkiel, “Fast Human Recognition System on Real-Time Camera,” Jurnal Ilmiah Teknik Elektro Komputer dan Informatika (JITEKI), vol. 9, no. 4, pp. 895–903, 2023, doi: 10.26555/jiteki.v9i4.27009.

J. Yu et al., “CNN-based Monocular Decentralized SLAM on embedded FPGA,” 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 66-73, 2020, doi: 10.1109/IPDPSW50202.2020.00019.

S. Majchrowska et al., “Deep learning-based waste detection in natural and urban environments,” Waste Management, vol. 138, pp. 274–284, 2022, doi: 10.1016/j.wasman.2021.12.001.

A. Kendall, M. Grimes and R. Cipolla, “PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization,” 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2938-2946, 2015, doi: 10.1109/ICCV.2015.336.

Z. Xiao, C. Chen, S. Yang, and W. Wei, “EffLoc: Lightweight Vision Transformer for Efficient 6-DOF Camera Relocalization,” arXiv preprint, 2024, doi: 10.48550/arXiv.2402.13537.

M. Bui et al., “6d camera relocalization in ambiguous scenes via continuous multimodal inference,” in Computer Vision–ECCV 2020: 16th European Conference, vol. 12363, pp. 139–157, 2020, doi: 10.1007/978-3-030-58523-5_9.

Y. Deng, S. Hui, R. Meng, S. Zhou, and J. Wang, “Hourglass attention network for image inpainting,” in European conference on computer vision, pp. 483–501, 2022, doi: 10.1007/978-3-031-19797-0_28.

K. Liu, Q. Li, and G. Qiu, “PoseGAN: A pose-to-image translation framework for camera localization,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 166, pp. 308–315, 2020, doi: 10.1016/j.isprsjprs.2020.06.010.

F. Ott, T. Feigl, C. Loffler and C. Mutschler, “ViPR: Visual-Odometry-aided Pose Regression for 6DoF Camera Localization,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 187-198, 2020, doi: 10.1109/CVPRW50498.2020.00029.

A. A. C. Ponce and J. M. Carranza, “Convolutional neural networks for geo-localisation with a single aerial image,” Journal of Real-Time Image Processing, vol. 19, no. 3, pp. 565–575, 2022, doi: 10.1007/s11554-022-01207-1.

C. Wang et al., “DymSLAM: 4D Dynamic Scene Reconstruction Based on Geometrical Motion Segmentation,” in IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 550-557, 2021, doi: 10.1109/LRA.2020.3045647.

Y. Cho, S. Eum, J. Im, Z. Ali, H. -G. Choo and U. Park, “Deep Photo-Geometric Loss for Relative Camera Pose Estimation,” in IEEE Access, vol. 11, pp. 130319-130328, 2023, doi: 10.1109/ACCESS.2023.3325661.

M. Li, J. Qin, D. Li, R. Chen, X. Liao, and B. Guo, “VNLSTM-PoseNet: A novel deep ConvNet for real-time 6-DOF camera relocalization in urban streets,” Geo-Spatial Information Science, vol. 24, no. 3, pp. 422–437, 2021, doi: 10.1080/10095020.2021.1960779.

R. Clark, S. Wang, A. Markham, N. Trigoni and H. Wen, “VidLoc: A Deep Spatio-Temporal Model for 6-DoF Video-Clip Relocalization,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2652-2660, 2017, doi: 10.1109/CVPR.2017.284.

F. Xue, X. Wu, S. Cai and J. Wang, “Learning Multi-View Camera Relocalization With Graph Neural Networks,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11372-11381, 2020, doi: 10.1109/CVPR42600.2020.01139.

X. Huang et al., “Realtime grasping strategies using event camera,” Journal of Intelligent Manufacturing, vol. 33, no. 2, pp. 593–615, 2022, doi: 10.1007/s10845-021-01887-9.

J. Xiao, L. Li, C. Wang, Z. -J. Zha and Q. Huang, “Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment,” 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11194-11203, 2022, doi: 10.1109/CVPR52688.2022.01092.

J. Kim, D. Kim, S. Lee, and S. Chi, “Hybrid DNN training using both synthetic and real construction images to overcome training data shortage,” Automation in Construction, vol. 149, 2023, doi: 10.1016/j.autcon.2023.104771.

Y. Cai, L. Ge, J. Cai, N. M. Thalmann and J. Yuan, “3D Hand Pose Estimation Using Synthetic Data and Weakly Labeled RGB Images,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 11, pp. 3739-3753, 2021, doi: 10.1109/TPAMI.2020.2993627.

Y. Wang, B. Xiao, A. Bouferguene, M. Al-Hussein, and H. Li, “Content-Based Image Retrieval for Construction Site Images: Leveraging Deep Learning–Based Object Detection,” Journal of Computing in Civil Engineering, vol. 37, no. 6, 2023, doi: 10.1061/JCCEE5.CPENG-5473.

M. Lyu, X. Guo, K. Zhang, and L. Zhang, “A Visual Indoor Localization Method Based on Efficient Image Retrieval,” Journal of Computer and Communications, vol. 12, no. 2, pp. 47–66, 2024. doi: 10.4236/jcc.2024.122004.

D. Acharya, C. J. Tatli, and K. Khoshelham, “Synthetic-real image domain adaptation for indoor camera pose regression using a 3D model,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 202, pp. 405–421, 2023, doi: 10.1016/j.isprsjprs.2023.06.013.

D. Acharya, S. Singha Roy, K. Khoshelham, S. Winter, “A recurrent deep network for estimating the pose of real indoor images from synthetic image sequences,” Sensors, vol. 20, no. 19, 2020, doi: 10.3390/s20195492.

N. Li and H. Ai, “EfiLoc: large-scale visual indoor localization with efficient correlation between sparse features and 3d points,” The Visual Computer, vol. 38, pp. 2091–2106, 2022.

M. S. Alam, F. B. Mohamed, and A. K. M. B. Hossain, “Self-Localization of Guide Robots Through Image Classification,” Baghdad Science Journal, vol. 21, no. 2(SI), 2024, doi: 10.21123/bsj.2024.9648.

Fahmizal et al., “Path Planning for Mobile Robots on Dynamic Environmental Obstacles Using PSO Optimization,” in Jurnal Ilmiah Teknik Elektro Komputer dan Informatika (JITEKI), vol. 10, no. 1, pp. 166-172, 2024, doi: 10.26555/jiteki.v10i1.28513.

Y. Jin, L. Yu, G. Li, and S. Fei, “A 6-DOFs event-based camera relocalization system by CNN-LSTM and image denoising,” Expert Systems with Applications, vol. 170, 2021, doi: 10.1016/j.eswa.2020.114535.

M. S. Alam, A. K. M. B. Hossain, and F. B. Mohamed, “Performance Evaluation of Recurrent Neural Networks Applied to Indoor Camera Localization,” International Journal of Emerging Technology and Advanced Engineering, vol. 12, no. 8, 2022, doi: 10.46338/ijetae0822_15.

H. Yang, X. Su, S. Chen, W. Zhu, and C. Ju, “Efficient learning-based blur removal method based on sparse optimization for image restoration,” PLoS One, vol. 15, no. 3, 2020, doi: 10.1371/journal.pone.0230619.

J. Dong, S. Roth, and B. Schiele, “Deep wiener deconvolution: Wiener meets deep learning for image deblurring,” Advances in Neural Information Processing Systems, vol. 33, pp. 1048–1059, 2020, doi: 10.48550/arXiv.2103.09962.

K. -H. Liu, C. -H. Yeh, J. -W. Chung and C. -Y. Chang, “A Motion Deblur Method Based on Multi-Scale High Frequency Residual Image Learning,” in IEEE Access, vol. 8, pp. 66025-66036, 2020, doi: 10.1109/ACCESS.2020.2985220.

W. Zhou et al., “Improved estimation of motion blur parameters for restoration from a single image,” PLoS One, vol. 15, no. 9, 2020, doi: 10.1371/journal.pone.0238259.

J. S. Oh, H. Lee, and W. Hwang, “Motion blur treatment utilizing deep learning for time-resolved particle image velocimetry,” Experiments in Fluids, vol. 62, no. 234, pp. 1–16, 2021, doi: 10.1007/s00348-021-03330-4.

Y. Xiang, H. Zhou, C. Li, F. Sun, Z. Li, and Y. Xie, “Application of Deep Learning in Blind Motion Deblurring: Current Status and Future Prospects,” arXiv preprint, pp. 1–29, 2024, doi: 10.48550/arXiv.2401.05055.

Y. Huihui, L. Daoliang, and C. Yingyi, “A state-of-the-art review of image motion deblurring techniques in precision agriculture,” Heliyon, vol. 9, no. 6, 2023, doi: 10.1016/j.heliyon.2023.e17332.

J. Park, S. Nah, and K. M. Lee, “Recurrence-in-recurrence networks for video deblurring,” arXiv preprint, pp. 1–12, 2022, doi: 10.48550/arXiv.2203.06418.

J. Zhao et al., “Do RNN and LSTM have long memory?,” in International Conference on Machine Learning, pp. 11365–11375, 2020, doi: 10.48550/arXiv.2006.03860.

J. W. Rae, A. Potapenko, S. M. Jayakumar, and T. P. Lillicrap, “Compressive transformers for long-range sequence modelling,” arXiv preprint, pp. 1–19, 2019, doi: 10.48550/arXiv.1911.05507.

Z. Zhong, Y. Gao, Y. Zheng, and B. Zheng, “Efficient spatio-temporal recurrent neural network for video deblurring,” in Computer Vision–ECCV 2020: 16th European Conference, pp. 191–207, 2020, doi: 10.1007/978-3-030-58539-6_12.

J. Shotton, B. Glocker, C. Zach, S. Izadi, A. Criminisi and A. Fitzgibbon, “Scene Coordinate Regression Forests for Camera Relocalization in RGB-D Images,” 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2930-2937, 2013, doi: 10.1109/CVPR.2013.377.

M. Dubenova, A. Zderadickova, O. Kafka, T. Pajdla, and M. Polic, “D-InLoc++: Indoor Localization in Dynamic Environments,” Pattern Recognition, pp. 246–261, 2022, doi: 10.1007/978-3-031-16788-1_16.

N. Radwan, A. Valada and W. Burgard, “VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry,” in IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 4407-4414, 2018, doi: 10.1109/LRA.2018.2869640.

S. Imambi, K. B. Prakash, and G. R. Kanagachidambaresan, “PyTorch,” Programming with TensorFlow: Solution for Edge Computing Applications, pp. 87–104, 2021.

D. Yi, J. Ahn, and S. Ji, “An effective optimization method for machine learning based on ADAM,” Applied Sciences, vol. 10, no. 3, 2020, doi: 10.3390/app10031073.

A. Hagemann, M. Knorr and C. Stiller, “Deep geometry-aware camera self-calibration from video,” 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3415-3425, 2023, doi: 10.1109/ICCV51070.2023.00318.

B. Wang, C. Chen, C. X. Lu, P. Zhao, N. Trigoni, A. Markham, “Atloc: Attention guided camera localization,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 6, pp. 10393–10401, 2020, doi: 10.1609/aaai.v34i06.6608.

L. Xu, T. Guan, Y. Luo, Y. Wang, Z. Chen, and W. Liu, “EpiLoc: Deep Camera Localization Under Epipolar Constraint,” Transactions on Internet & Information Systems, vol. 16, no. 6, 2022, doi: 10.3837/tiis.2022.06.014.

A. Pepe and J. Lasenby, “CGA-PoseNet: Camera pose regression via a 1D-Up approach to conformal geometric algebra,” arXiv preprint, pp. 1–13, 2023, doi: 10.48550/arXiv.2302.05211.

A. Abozeid, A. I. Taloba, R. M. Abd El-Aziz, A. F. Alwaghid, M. Salem, and A. Elhadad, “An Efficient Indoor Localization Based on Deep Attention Learning Model,” Computer Systems Science and Engineering, vol. 46, no. 2, pp. 2637–2650, 2023, doi: 10.32604/csse.2023.037761.




DOI: https://doi.org/10.18196/jrc.v5i4.21930



Copyright (c) 2024 Muhammad S. Alam, Farhan B. Mohamed, Ali Selamat, AKM B. Hossain

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.



Journal of Robotics and Control (JRC)

P-ISSN: 2715-5056 || E-ISSN: 2715-5072
Organized by Peneliti Teknologi Teknik Indonesia
Published by Universitas Muhammadiyah Yogyakarta in collaboration with Peneliti Teknologi Teknik Indonesia, Indonesia and the Department of Electrical Engineering
Website: http://journal.umy.ac.id/index.php/jrc
Email: jrcofumy@gmail.com

