Voice Recognition Security Reliability Analysis Using Deep Learning Convolutional Neural Network Algorithm

Wahyu Ibrahim, Henry Candra, Haris Isyanto

Abstract


This study analyzes the reliability of voice recognition security using a deep learning convolutional neural network (CNN) algorithm. The CNN algorithm offers advantages in security, speed, and accuracy, and it can handle user identification over large amounts of data. The measured voice input consists of ten users' voices, processed with 6000, 12000, and 15000 sound-file iterations. Feature extraction is then performed on the voice signals to recognize speech and retain the information that is most needed. The iterated sound-file data are subsequently used to train and register each user's voice, producing a trained model. Performance is measured with a confusion matrix, which compares the actual values with the values predicted by the CNN algorithm. The best accuracy, 96.87%, is obtained at 15000 sound-file iterations, compared with 96.30% at 12000 iterations and 95.77% at 6000 iterations. The CNN performance data show that 15000 sound-file iterations produce the highest accuracy. Voice recognition security helps provide strong protection and maintains the privacy of a person's identity.
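As a rough illustration of the pipeline summarized above (feature extraction from sound files, CNN training over registered users, and confusion-matrix evaluation), the sketch below uses MFCC features with librosa, a small Keras CNN, and scikit-learn metrics. This is a minimal sketch, not the authors' implementation: the feature size (13 MFCCs x 100 frames), network architecture, hyperparameters, and the `train_files`/`test_files` lists are illustrative assumptions, since the abstract does not specify them.

```python
# Minimal sketch of a CNN speaker-identification pipeline with confusion-matrix
# evaluation. NOT the paper's exact configuration; shapes and layers are assumed.
import numpy as np
import librosa
from tensorflow.keras import layers, models
from sklearn.metrics import confusion_matrix, accuracy_score

N_MFCC, N_FRAMES, N_SPEAKERS = 13, 100, 10  # assumed feature size and class count


def extract_mfcc(path, sr=16000):
    """Load a sound file and return a fixed-size MFCC matrix with a channel axis."""
    signal, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=N_MFCC)
    if mfcc.shape[1] < N_FRAMES:  # pad short clips, truncate long ones
        mfcc = np.pad(mfcc, ((0, 0), (0, N_FRAMES - mfcc.shape[1])))
    return mfcc[:, :N_FRAMES, np.newaxis]


def build_cnn():
    """Small 2-D CNN over MFCC 'images', one softmax output per registered speaker."""
    model = models.Sequential([
        layers.Input(shape=(N_MFCC, N_FRAMES, 1)),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(N_SPEAKERS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Hypothetical usage: train_files / test_files are lists of (wav_path, speaker_id).
# X_train = np.stack([extract_mfcc(p) for p, _ in train_files])
# y_train = np.array([s for _, s in train_files])
# model = build_cnn()
# model.fit(X_train, y_train, epochs=30, batch_size=32, validation_split=0.1)
# y_pred = np.argmax(model.predict(X_test), axis=1)
# print(confusion_matrix(y_test, y_pred))   # actual vs. predicted speaker labels
# print(accuracy_score(y_test, y_pred))     # overall recognition accuracy
```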

Keywords


voice recognition; convolutional neural network; confusion matrix; accuracy




DOI: https://doi.org/10.18196/jet.v6i1.14281



Copyright (c) 2022 Journal of Electrical Technology UMY


 

Office Address:

Journal of Electrical Technology UMY

Department of Electrical Engineering, Universitas Muhammadiyah Yogyakarta

Jl. Brawijaya, Kasihan, Bantul, Daerah Istimewa Yogyakarta

Phone/Fax: +62274-387656 / +62274-387646

E-mail: jet@umy.university

Creative Commons License
Journal of Electrical Technology UMY is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.