Computer Science & Engineering: An International Journal (CSEIJ), Vol. 3, No. 4, August 2013
DOI: 10.5121/cseij.2013.3403
AN EFFICIENT SPEECH RECOGNITION SYSTEM
Suma Swamy1 and K.V. Ramakrishnan
1 Research Scholar, Department of Electronics and Communication Engineering, Anna University, Chennai
suma_swamy@yahoo.com, ramradhain@yahoo.com
ABSTRACT
This paper describes the development of an efficient speech recognition system using different
techniques such as Mel Frequency Cepstrum Coefficients (MFCC), Vector Quantization (VQ) and
Hidden Markov Model (HMM).
This paper explains how speaker recognition followed by speech recognition is used to recognize
speech faster, more efficiently and more accurately. MFCC is used to extract the characteristics of the
input speech signal with respect to a particular word uttered by a particular speaker. An HMM is then
applied to the quantized feature vectors to identify the word by evaluating the maximum log
likelihood value for the spoken word.
KEYWORDS
MFCC, VQ, HMM, log likelihood, DISTMIN.
1. INTRODUCTION
The idea of human-machine interaction led to research in speech recognition. Automatic speech
recognition is the process, and the related technology, of converting a speech signal into a sequence
of words or other linguistic units by means of an algorithm implemented as a computer program.
Speech understanding systems are presently capable of handling speech input for vocabularies of
thousands of words in operational environments. A speech signal conveys two important types of
information: (a) the speech content and (b) the speaker identity. Speech recognisers aim to extract
the lexical information from the speech signal independently of the speaker by reducing inter-speaker
variability, whereas speaker recognition is concerned with extracting the identity of the person. [3]
Speaker identification allows uttered speech to be used to verify the speaker's identity and to control
access to secure services. Speech recognition also offers greater freedom to employ the physically
handicapped in applications such as manufacturing processes, medicine and telephone networks.
Figure 1(a) shows a speech recognition system without speaker identification, and Figure 1(b) shows
how speaker identification followed by speech recognition improves efficiency. With this approach,
the database is divided into smaller partitions (SP1 to SPn), one per speaker, so the speech
recognition rate improves for the corresponding speaker.
Figure 1(a). Speech recognition system without speaker identification: Input Speech → Feature
Extraction (MFCC) → Codebook Generation (VQLBG) → Speech Recognition (HMM, against the
database) → Recognized Speech
Figure 1(b). Speaker identification followed by speech recognition: Input Speech → Feature
Extraction (MFCC) → Codebook Generation (VQLBG) → Feature Matching (DISTMIN) against
per-speaker databases SP1 to SPn → Identified/Recognized Speaker → Speech Recognition (HMM)
→ Recognized Speech
This paper focuses on the implementation of speaker identification and enhancement of speech
recognition using Hidden Markov Model (HMM) techniques. [1], [4]
2. HISTORY OF SPEECH RECOGNITION
Speech recognition research has been ongoing for more than 80 years. Over that period there
have been at least four generations of approaches, and a fifth generation is being formulated based
on current research themes. A complete history of speech recognition is beyond the scope of this
paper.
By 2001, computer speech recognition had reached 80% accuracy, and no further progress was
reported until 2010. Speech recognition technology development then began to edge back into the
forefront with one major event: the arrival of the “Google Voice Search app for the iPhone”. In
2010, Google added “personalized recognition” to Voice Search on Android phones, so that the
software could record users’ voice searches and produce a more accurate speech model. The
company also added Voice Search to its Chrome Browser in mid-2011. Like Google’s Voice
Search, Siri relies on cloud-based processing. It draws on its knowledge about the speaker to
generate a contextual reply and responds to voice input. [2]
Parallel processing methods that combine HMMs with acoustic-phonetic approaches to detect and
correct linguistic irregularities are used to increase the reliability of recognition decisions and to
improve robustness for speech recognition in noisy environments.
3. PROPOSED MODEL
The proposed system consists of two modules, as shown in Figure 1(b):
• Speaker Identification
• Speech Recognition
3.1 Speaker Identification
Feature extraction derives, from the voice signal, data that is unique to each speaker. The Mel
Frequency Cepstral Coefficient (MFCC) technique is often used to create this acoustic fingerprint of
the sound files. MFCCs are based on the known variation of the human ear's critical bandwidths with
frequency: filters spaced linearly at low frequencies and logarithmically at high frequencies are used
to capture the perceptually important characteristics of speech. [6], [7], [8]
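The paper's implementation is in MATLAB; purely as an illustration of the idea above, the following Python/NumPy sketch computes mel-filter-bank energies and cepstral coefficients for a single windowed frame. The function names and parameter defaults (num_filters, num_ceps) are our own assumptions, not taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale mapping: roughly linear at low frequencies, logarithmic at high ones.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(num_filters, nfft, sample_rate):
    # Triangular filters spaced evenly on the mel scale between 0 Hz and the Nyquist frequency.
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0), num_filters + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((num_filters, nfft // 2 + 1))
    for i in range(1, num_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc_frame(frame, sample_rate, num_filters=26, num_ceps=13):
    # Windowed frame -> power spectrum -> mel filter energies -> log -> DCT-II (cepstrum).
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    energies = mel_filterbank(num_filters, len(frame), sample_rate) @ spectrum
    log_energies = np.log(energies + 1e-10)
    n = np.arange(num_filters)
    dct_basis = np.cos(np.pi * np.outer(np.arange(num_ceps), 2 * n + 1) / (2 * num_filters))
    return dct_basis @ log_energies
```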
The extracted features are then vector quantized using a Vector Quantization algorithm. Vector
Quantization (VQ) is applied in both the training and testing phases. It provides an extremely
efficient representation of the spectral information in the speech signal by mapping vectors from a
large vector space to a finite number of regions in that space, called clusters. [6], [8]
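The codebook generation step labelled VQLBG in Figure 1 is commonly realized with the LBG binary-splitting algorithm. The sketch below is an illustrative version under assumed parameters (codebook_size, eps, iters), not the authors' MATLAB routine.

```python
import numpy as np

def lbg_codebook(features, codebook_size=16, eps=0.01, iters=20):
    """Binary-splitting (LBG) vector quantization.

    features: array of shape (num_frames, num_coeffs), e.g. MFCC vectors of one speaker.
    Returns a codebook of shape (codebook_size, num_coeffs); codebook_size is assumed
    to be a power of two.
    """
    codebook = features.mean(axis=0, keepdims=True)      # start with a single centroid
    while codebook.shape[0] < codebook_size:
        # Split every centroid into a slightly perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            # Assign each feature vector to its nearest centroid (Euclidean distance).
            dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
            nearest = dists.argmin(axis=1)
            # Move each centroid to the mean of the vectors assigned to it.
            for j in range(codebook.shape[0]):
                members = features[nearest == j]
                if len(members) > 0:
                    codebook[j] = members.mean(axis=0)
    return codebook
```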
After feature extraction, feature matching identifies the unknown speaker by comparing the
extracted features with those stored in the database using the minimum-distance (DISTMIN) criterion.
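A minimal sketch of that minimum-distance decision, assuming one codebook per enrolled speaker (built as above): score the test utterance against every codebook and pick the speaker with the smallest average distortion. Names are illustrative only.

```python
import numpy as np

def average_distortion(features, codebook):
    # Mean distance from each test vector to its nearest codeword in one speaker's codebook.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def identify_speaker(test_features, speaker_codebooks):
    """speaker_codebooks: dict mapping speaker name -> codebook.
    Returns the speaker whose codebook gives the minimum average distortion (DISTMIN)."""
    scores = {name: average_distortion(test_features, cb)
              for name, cb in speaker_codebooks.items()}
    return min(scores, key=scores.get)
```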
3.2 Speech Recognition System
Hidden Markov models are statistical models in which one characterizes the statistical properties of
a signal under the assumption that the signal can be treated as a parametric random process whose
parameters can be estimated in a precise and well-defined manner. In order to implement an isolated
word recognition system using HMMs, the following steps must be taken:
(1) For each word in the vocabulary, a Markov model is built with parameters that optimize the
likelihood of the observations of that word.
(2) For an uttered word, the likelihood is evaluated under each word model, and the maximum
likelihood model identifies the word. [5], [9], [10], [11]
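In practice, step (2) is usually carried out by running the forward algorithm of each trained word model over the observation sequence and selecting the word with the largest log likelihood. The following sketch illustrates this scoring for a discrete HMM with assumed parameter names (pi, A, B); it is an illustration of the standard procedure, not the paper's code.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM.

    obs: sequence of observation symbols (e.g. VQ codebook indices), length T.
    pi : initial state probabilities, shape (N,).
    A  : state transition matrix, shape (N, N).
    B  : observation matrix, shape (N, M), i.e. P(symbol | state).
    Returns log P(obs | model).
    """
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    log_likelihood = np.log(scale)
    alpha /= scale
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]    # forward recursion
        scale = alpha.sum()
        log_likelihood += np.log(scale)       # accumulate log of scaling factors
        alpha /= scale
    return log_likelihood

def recognize_word(obs, word_models):
    # word_models: dict mapping word -> (pi, A, B) trained on utterances of that word.
    # The recognized word is the model with the maximum log likelihood, as in step (2).
    return max(word_models, key=lambda w: forward_log_likelihood(obs, *word_models[w]))
```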
4. IMPLEMENTATION
The major modules used are as follows:
• MFCC (Mel-scaled Frequency Cepstral Coefficients)
  • Mel-spaced filter bank
• VQ (Vector Quantization)
• HMM (Hidden Markov Model)
  • Discrete-HMM observation matrix
  • Forward-Backward algorithm
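As a hypothetical illustration of how these modules connect, each MFCC frame can be replaced by the index of its nearest codeword so that an utterance becomes the sequence of discrete observation symbols consumed by the HMM's observation matrix. The variable names below are assumptions, reusing the sketch functions introduced earlier.

```python
import numpy as np

def to_symbols(features, codebook):
    # Map each feature vector to the index of its nearest codeword; these indices
    # are the discrete observation symbols scored against the HMM observation matrix B.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Hypothetical end-to-end flow for one utterance (names assumed, not from the paper):
#   feats   = np.array([mfcc_frame(f, 8000) for f in frames])  # feature extraction
#   symbols = to_symbols(feats, codebook)                      # codebook-based quantization
#   score   = forward_log_likelihood(symbols, pi, A, B)        # per-word HMM score
```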
Figure 2. A new speaker's voice is added to the database by entering a name and a recording duration
Figure 2 shows a screenshot of a new speaker being added to the database. Figure 3 shows a
screenshot of the speech being recognized.
Figure 3. Screenshot demonstrating the speech recognition system
5. EXPERIMENTATION RESULTS
In the speaker identification phase, four speakers (two male and two female) are asked to speak
each word from the given list ten times. The speakers are then asked to utter the same words in a
random order, and the recognition results are noted. The percentage recognition of each speaker for
these words is given in Table 1, and the corresponding efficiency chart is shown in Figure 4. The
overall efficiency of the speaker identification system is 95%.
In the speech recognition phase, the experiment is repeated ten times for each of the above words.
The resulting recognition percentages and the corresponding efficiency chart are shown in Table 2
and Figure 5, respectively. The overall efficiency of the speech recognition system is 98%.
Table 1. Speaker identification results

Word      | Female Speaker 1 | Female Speaker 2 | Male Speaker 3 | Male Speaker 4
----------|------------------|------------------|----------------|---------------
Computer  | 90%              | 100%             | 100%           | 90%
Read      | 100%             | 100%             | 100%           | 100%
Mobile    | 90%              | 100%             | 90%            | 90%
Man       | 100%             | 70%              | 100%           | 100%
Robo      | 80%              | 100%             | 100%           | 100%
Average % | 92%              | 94%              | 98%            | 96%
Figure 4. Efficiency chart for the speaker identification system
Table 2. Speech recognition results

Word      | Recognition %
----------|--------------
Computer  | 99%
Read      | 100%
Mobile    | 96%
Man       | 100%
Robo      | 95%
Average % | 98%
Figure 5. Efficiency chart for the speech recognition system
6. CONCLUSION
In the speaker identification phase, the MFCC and Distance Minimum (DISTMIN) techniques have
been used; together they provide an efficient speaker identification system. The speech recognition
phase uses the HMM algorithm. The speaker recognition module is found to improve the speech
recognition scores. All of the techniques mentioned above have been implemented in MATLAB. The
combination of MFCC and the Distance Minimum algorithm gives the best performance and accurate
results in most cases, with an overall efficiency of 95%. The study also shows that the HMM
algorithm is able to identify the most commonly used isolated words; as a result, the speech
recognition system achieves 98% efficiency.
ACKNOWLEDGEMENTS
We acknowledge Visvesvaraya Technological University, Belgaum and Anna University,
Chennai for the encouragement and permission to publish this paper. We would like to thank the
Principal of Sir MVIT, Dr. M.S. Indira, for her support. Our special thanks to Prof. Dilip K. Sen,
HOD of CSE, for his valuable suggestions from time to time.
REFERENCES
[1] Ronald M. Baecker, "Readings in Human-Computer Interaction: Toward the Year 2000", 1995.
[2] Melanie Pinola, "Speech Recognition Through the Decades: How We Ended Up With Siri", PCWorld.
[3] Ganesh Tiwari, "Text Prompted Remote Speaker Authentication: Joint Speech and Speaker Recognition/Verification System".
[4] Ravi Sankar, Tanmoy Islam and Srikanth Mangayyagari, "Robust Speech/Speaker Recognition Systems".
[5] Bassam A. Q. Al-Qatab and Raja N. Aninon, "Arabic Speech Recognition Using Hidden Markov Model Toolkit (HTK)", IEEE Information Technology (ITSim), 2010, pp. 557-562.
[6] Ahsanul Kabir and Sheikh Mohammad Masudul Ahsan, "Vector Quantization in Text Dependent Automatic Speaker Recognition Using Mel-Frequency Cepstrum Coefficient", 6th WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, Cairo, Egypt, Dec 29-31, 2007, pp. 352-355.
[7] Lindasalwa Muda, Mumtaj Begam and Elamvazuthi, "Voice Recognition Algorithms Using Mel Frequency Cepstral Coefficient (MFCC) and DTW Techniques", Journal of Computing, Vol. 2, Issue 3, March 2010.
[8] Mahdi Shaneh and Azizollah Taheri, "Voice Command Recognition System Based on MFCC and VQ Algorithms", World Academy of Science, Engineering and Technology, 2009.
[9] Remzi Serdar Kurcan, "Isolated Word Recognition from In-Ear Microphone Data Using Hidden Markov Models (HMM)", Master's Thesis, 2006.
[10] Nikolai Shokhirev, "Hidden Markov Models", 2010.
[11] L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Proceedings of the IEEE, Vol. 77, Issue 2, Feb 1989.
[12] Suma Swamy, Manasa S, Mani Sharma, Nithya A.S, Roopa K.S and K.V. Ramakrishnan, "An Improved Speech Recognition System", LNICST, Springer, 2013.
AUTHORS
1. Suma Swamy obtained her B.E (Electronics Engineering) in 1990 from Shivaji
University, Kolhapur, Maharashtra, and M.Tech (Electronics and Communication
Engineering) in 2005 from Visvesvaraya Technological University, Belgaum,
Karnataka. She is working as Associate Professor, Department of CSE, Sir M.
Visvesvaraya Institute of Technology, Bengaluru, India. She is Research Scholar in
the department of ECE, Anna University, Chennai, India. Her areas of interest are
Speech Recognition, Database Management Systems and Design of Algorithms.
2. Dr. K.V. Ramakrishnan obtained his M.Sc. (Electronics) from Poona University in 1961 and his
Ph.D. (Electronics) from Toulouse (France) in 1972. He worked as a Scientist in CEERI from 1962
to 1999 at different places. He was a Consultant for M/s. Servo Electronics, Delhi in 1999. He was
Director for Research and Development and HOD (ECE/TE/MCA) at Sir M. Visvesvaraya Institute
of Technology, Bengaluru, India from 1999 to 2002. He was HOD (ECE) at New Horizon College
of Engineering, Bangalore from 2002 to 2003. He was HOD (CSE/ISE) at Sir M. Visvesvaraya
Institute of Technology, Bengaluru, India from 2003 to 2006. He was Dean and Professor (ECE) at
CMR Institute of Technology, Bangalore from 2006 to 2009, and also officiated as Principal during
2007. He is a Supervisor at Anna University, Chennai, India. His areas of research are Speech
Processing and Embedded Systems.