
Arabic language learning assistance based on automatic speech recognition system

Mohamed Belgacem 1,2, Ayoub Maatallaoui 4, Mounir Zrigui 2,3
1 LIDILEM, University of Stendhal Grenoble 3, France
2 UTIC Laboratory, Monastir, Tunisia
3 Faculty of Sciences of Monastir, Tunisia
4 LIG Laboratory, GETALP, Grenoble, France

Abstract - In this work we present the results of a research phase conducted to establish a new system providing Arabic speech recognition with satisfactory performance and speaker independence. This system will be integrated into a Computer Assisted Language Learning (CALL) platform. This work describes the development of that system, which acquires the signal emitted by the speaker, assesses the learner's pronunciation, and proposes remedies to teach him the correct pronunciation. It is based on a new speaker-independent automatic recognizer of Arabic speech built by deploying the SPHINX open source tools from Carnegie Mellon University (CMU) [1].

Keywords: Computer Assisted Language Learning (CALL): the Arabic case, Computer Assisted Instruction (CAI): the Arabic case, automatic Arabic speech recognition, Arabic corpus, acoustic model, pronunciation dictionary.

1 Introduction

The system presented here is essentially a method for speaker-independent automatic recognition of Arabic speech. This method is specially designed to help with oral language training, in this case for Arabic, and can automatically report the learner's pronunciation mistakes on previously listed words of a lexicon constituting a "lesson". Tests on a large corpus representative of the phonetic difficulties and tonic accent of Arabic show the qualities and limitations of the method.

2 Problematic

One of the contributions of modern linguistics is the highlighting of the importance of the oral aspect of any language. However, acquiring a correct pronunciation, a good accent and a mastery of oral expression is often difficult in a foreign language. The introduction of vocal techniques in language teaching has been an attempt in this direction. The computer can also assist, and over the past ten years we have seen the projects interested in Computer-Assisted Learning of the oral skill multiply, mostly in the U.S. but also in Japan and Europe.

Among the systems addressed to language students, we can mention:
- PLATO (Programmed Logic for Automatic Teaching Operations), developed at the University of Illinois (USA) for French, Spanish, German, Russian, Hebrew, Latin and even Esperanto [2].
- KANDA, in Tokyo, for English, German, French, Spanish and Chinese (quoted by J. C. Simon [3]).
- The OPE project, at the University of Paris VII, for English [4].
- The work of R. Nelles and M. Sennekamp [5] at the University of Freiburg (Germany), on French.
- MIRTO, developed at the LIDILEM laboratory in Grenoble. MIRTO allows the almost immediate creation of educational activities by exploiting the possibilities of NLP tools, and can generate activities for learners of several languages [6].

These achievements certainly have an important educational value, but they are limited by the fact that they apply only to the written side of languages. It therefore seemed interesting to bring the assistance of automatic speech recognition to Computer Assisted Learning, giving the student control over his oral expression.

To our knowledge, little research has been done in this direction. We note the work of B. J. Nordmann [7] at the University of Illinois, who proposed a project limited to on-screen simulation of the pronunciation differences between teacher and student, with the goal of bringing the learner's pronunciation as close as possible to the proposed model.

On their side, D. W. Kalikow and J. A. Swets [8] present a system of phonetic correction developed at Bolt Beranek and Newman Inc., Cambridge (USA) for foreign language learning. It was only an automatic pronunciation instructor exploiting the capabilities of data storage. Note also the API system (Automated Pronunciation Instruction), developed under the ARPA project, initially for teaching French to Spanish speakers and then, in its second version, for the therapy of hearing-impaired children. The TT system (Teaching Training) designed in Japan [9] is also a computer-assisted voice therapy unit.
In France, recent research has been undertaken at CNET by J. Lebras [10], who implemented an algorithm detecting certain pronunciation errors of an English native speaker learning French (especially aspiration errors on voiceless plosive consonants and diphthongization faults). In the Arab world, there have been several initial attempts to address this problem. They began with applications teaching Arabic speakers the correct recitation of the Holy Quran, a task that is almost similar to learning a foreign oral language (in this case Arabic): whereas a wide variety of pronunciations can normally be accepted from speakers, the Holy Quran must be recited in one and the same way, that is, in classical Arabic.

Among the few studies interested in Arabic, we can cite the following works, so that we can show later how our system differs from them in several ways:
- El-Kasasy [11] developed a system for learning Holy Quran recitation. This system is based on segmenting the signal into syllabic units. Each segment of the test syllable is compared to the reference; the system then accepts or rejects the segment and the syllable, but it does not give detailed comments on the error.
- Omar proposed in [12] a learner pronunciation identification system based on hidden Markov models (HMM). In this work, he grouped the different kinds of acceptable pronunciations of Arabic phonemes and then compared them with the speaker's pronunciation to decide whether it can be accepted or not. This system has two steps: first, the input pronunciation is segmented into phonemes; at this stage, substitution, insertion and deletion errors with respect to the phonemes of the target word are detected. Secondly, these units are examined by the HMM.

This hierarchical system has increased complexity, and its performance proved to be more limited than that of purely statistical approaches such as HMM-based systems.

It therefore seemed interesting to us to bring robust automatic speech recognition to computer-assisted foreign language learning. In this paper we present a system resulting from several years of research efforts, which we hope to use to teach Arabic pronunciation to non-native speakers. We show how our system can be used for the task of spoken foreign language learning, in this case Arabic. We will also show how it can evaluate the learner's level and how it can return feedback messages helping the learner locate the mispronounced phonemes; all of this is based on a new robust system for automatic Arabic speech recognition.

3 Automatic Arabic speech recognition

Although the Arab world has an estimated 250 million speakers, there has been little research on Arabic speech recognition compared to other languages of similar importance. Due to the lack of speech corpora and pronunciation dictionaries, the majority of automatic Arabic speech recognition work has focused on adapting recognition systems designed for other languages such as English, i.e. letting the system identify Arabic words as if they were English ones; such systems rely on rules for converting English phonemes to Arabic graphemes.

In this section we present our Arabic speech recognition system based on Sphinx [1] and propose an automatic toolkit that can be applied to educational oral language learning applications. Three corpora are fully developed in this work: a training corpus of about 7 hours, a test corpus of about 1.5 hours, and a third corpus containing the learners' recordings, of about 3 hours. The method adopted for the corpora development is inspired from [13] and [14]. By deploying the three corpora mentioned above, our automatic transcription tools, our Arabic phonetiser and our pronunciation dictionary of about 23,578 words, the SPHINX decoder is trained to develop three acoustic models, one for each corpus. The training is based on Hidden Markov Models (HMM). We consider the corpus used in our system large enough to validate our approach. Sphinx had never been used before in this way for speaker-independent automatic recognition of Arabic speech.

3.1 State of the art

Developing an Arabic speech recognition system is a multi-disciplinary effort, requiring the integration of Arabic phonetics, speech processing techniques and Arabic natural language processing.

Automatic Arabic speech recognition has recently been approached by a number of researchers. Satori et al. [15] used the Sphinx speech recognition tools for Arabic. They managed to build a system for the recognition of isolated Arabic digits (1, 2, ..., 9). The data were recorded from 6 speakers, and they reached a recognition accuracy of 86.66% on isolated digits.

Hiyassat [16], in his 2007 thesis, developed a tool to generate an Arabic pronunciation dictionary; the dictionaries are generated from a small MSA speech corpus consisting of numbers and a small vocabulary. Kirchhoff et al. [17] worked on the recognition of spoken Arabic and studied the differences between colloquial and formal Arabic in speech recognition.

3.2 Recognition steps
From a speech signal, the first treatment is to extract the characteristic parameters. These parameters are the input of the acoustic module, or acoustic-phonetic decoder. This acoustic-phonetic decoding in turn can produce one or more phonetic hypotheses, usually associated with a probability, for each segment (a window or a frame) of the speech signal.

This hypothesis generator is often modeled by local statistical models of elementary units of speech, such as the phoneme. To train the acoustic models, we learn models of the acoustic units from our tagged corpus.

The hypothesis generator interacts with a lexical module that forces the acoustic-phonetic decoder to recognize only the words represented in that module. The models are represented by a phonetic pronunciation dictionary (phonetic dictionary) or by probabilistic automata able to associate a probability with each possible pronunciation of a word.
To recognize what is being said, we begin by looking, through the models of acoustic units, for the unit that is supposed to have been produced, and we then construct, from the lattice of acoustic units and a statistical language model, the most likely sequence of words. Before presenting these modules separately, we give the Bayesian equation applied to the problem of automatic speech recognition. The input of an ASR system is a sampled audio signal, analyzed to extract a sequence of acoustic observations $X$. In a statistical modeling of speech decoding, the search for the sequence of spoken words $W$ is based on a MAP (maximum a posteriori) criterion:

$$\hat{W} = \arg\max_{W} P(W \mid X) \qquad (1)$$

By applying Bayes' theorem, the equation becomes:

$$\hat{W} = \arg\max_{W} \frac{P(X \mid W)\, P(W)}{P(X)} \qquad (2)$$

However, $P(X)$ does not depend on a particular value of $W$ and can be removed from the calculation of the argmax:

$$\hat{W} = \arg\max_{W} P(X \mid W)\, P(W) \qquad (3)$$

where the term $P(W)$ is estimated by the language model and $P(X \mid W)$ is the probability given by the acoustic models. This type of approach allows the acoustic and the linguistic information to be integrated in the same decision process (Figure 1).
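As a minimal illustration of equation (3), not of the actual Sphinx decoder, the following Python sketch scores a handful of candidate word sequences by combining an assumed acoustic log-likelihood log P(X|W) with an assumed language-model log-probability log P(W) and keeps the argmax; the candidate sequences and their scores are invented for the example.

```python
import math

# Hypothetical decoder scores: each candidate word sequence W comes with
# an acoustic log-likelihood log P(X|W) and a language-model log-prob log P(W).
# The values below are made up purely to illustrate the MAP criterion.
candidates = {
    ("marhaban",):         {"log_p_x_given_w": -120.4, "log_p_w": math.log(0.20)},
    ("marhaban", "bikum"): {"log_p_x_given_w": -118.9, "log_p_w": math.log(0.05)},
    ("madrasa",):          {"log_p_x_given_w": -131.7, "log_p_w": math.log(0.10)},
}

def map_decode(cands):
    """Return argmax_W [log P(X|W) + log P(W)], i.e. equation (3) in the log domain."""
    return max(cands, key=lambda w: cands[w]["log_p_x_given_w"] + cands[w]["log_p_w"])

best = map_decode(candidates)
print("Most likely word sequence:", " ".join(best))
```

In a real decoder the acoustic term comes from the HMM acoustic models and the prior from the n-gram language model, and the maximization is carried out over a lattice rather than an explicit list.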
Fig. 1. General principle of our system of automatic speech recognition (multi-dialect corpus).

3.3 Overview

For our automatic Arabic speech recognition system, we have chosen to build our own corpus covering several classes of dialects, and to add the orthographic transcriptions needed by the forced alignment module. This corpus consists of all the data used in the evaluation campaign, with the exclusion of textual data, and was developed within the Oreillodule project [18]. Three types of resources are provided in the corpus. On the one hand, the resources used in conventional automatic speech recognition: acoustic resources (corpus of spoken text) and textual resources (newspapers or approximate transcripts of official proceedings). On the other hand, an original resource of untranscribed speech is proposed: this speech corpus, without associated transcript but available in large quantity, is intended to explore the possibility of unsupervised learning. It is composed of radio data from different stations (Radio Tunisia International, Radio Algeria International) and of TV channels: Aljazeera TV [19] and many other Arab channels [15].

Table 1. Statistics of our corpus: learning and test.

Corpus     Dialects   Duration    Male   Female
Learning   9          7 hours     235    235
Test       9          1.5 hours   45     45
Total      9          8.5 hours   280    280

3.4 Pronunciation Dictionaries

The pronunciation dictionary provides the link between the sequences of acoustic units and the words represented in the language model. While corpora of text and speech can be collected, the pronunciation dictionary is usually not directly available. Although a manually created pronunciation dictionary gives good performance, the task is very cumbersome and requires extensive knowledge of the language. The literature suggests approaches that can automatically generate the pronunciation dictionary. The simple and fully automatic approach using phonemes as the modeling unit has been well validated in many works. However, for the Arabic language, we used a new approach to generate our pronunciation dictionary automatically. The first step is based on the phoneme, each Arabic phoneme being taken as a modeling unit. The second step is to build manually a small dictionary, as shown in the following table. Finally, in the third step, an Arabic phonetiser is built automatically, based on our manually written pronunciation dictionary and an Arabic language model.
Table 2. Sample from the pronunciation dictionary.

Arabic word     Phonetic transcription
أمس             E AE M S
الدائرة         EL D AE: E IH R AE H
الجنائية        EL J IH N AE: E IH Y AE H
الرابعة         EL R AE: B IH AI AE H
بالمحكمة        B IH EL M AE K AE M AE H
الابتدائية      EL E IH B T IH D AE: E IH Y AE H
بتونس           B IH T UW N IH S
النظر           EL N AE DH2 AE R AE
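As an illustration of the dictionary-building idea (the paper's actual phonetiser is not published with the article), the sketch below derives pronunciations from a small hand-written seed lexicon plus a naive letter-to-phone fallback; the letter-to-phone map and the seed entries are assumptions made for the example, not the authors' rules.

```python
# Hypothetical sketch of a seed-lexicon + fallback grapheme-to-phoneme step,
# in the spirit of the three-step procedure described above. The mappings
# below are illustrative assumptions, not the phonetiser used in the paper.

SEED_LEXICON = {             # step 2: small manually written dictionary
    "أمس":   ["E", "AE", "M", "S"],
    "بتونس": ["B", "IH", "T", "UW", "N", "IH", "S"],
}

LETTER_TO_PHONE = {          # naive fallback map (assumed, deliberately incomplete)
    "ا": "AE:", "ب": "B", "ت": "T", "س": "S", "م": "M",
    "ن": "N", "و": "UW", "ر": "R", "ل": "L", "أ": "E",
}

def phonetise(word: str) -> list[str]:
    """Return a phone sequence: exact seed entry if available, else letter map."""
    if word in SEED_LEXICON:
        return SEED_LEXICON[word]
    return [LETTER_TO_PHONE[ch] for ch in word if ch in LETTER_TO_PHONE]

def build_dictionary(vocabulary: list[str]) -> dict[str, str]:
    """Produce 'word  PH1 PH2 ...' entries in the style of a Sphinx .dic file."""
    return {w: " ".join(phonetise(w)) for w in vocabulary}

if __name__ == "__main__":
    for word, phones in build_dictionary(["أمس", "بتونس", "برتقال"]).items():
        print(word, phones)
```

A production phonetiser would of course handle diacritics, gemination and context-dependent rules; the point here is only the seed-then-generalize structure of the third step.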
3.5 Evaluation of automated tools

To analyse the large-scale variations related to dialect and speaker in automatic recognition of Arabic speech, we evaluate the contribution of two automatic tools based on Sphinx: acoustic-phonetic decoding and forced alignment.

3.6 Acoustic-phonetic decoding

This step produces the exact transcript generated from the speech signal, that is, the transcript of what the speaker is supposed to have said.

From a speech signal, the first treatment is to extract the parameter vectors. These parameters are the input of the acoustic module, or acoustic-phonetic decoder, which in turn can produce one or more phonetic hypotheses, usually associated with a probability, for each segment (a window or a frame) of the speech signal.

3.7 Principle of forced alignment

The second treatment is to achieve, for each sentence in the corpus, a forced alignment between the sentence and the corresponding speech signal. The eventual aim is to compare the results of our acoustic-phonetic decoding with the results of the forced alignment in order to extract phonetic confusions. Before starting the forced alignment, our pronunciation dictionary is needed. This task aligns the speech signal of each class with its corresponding orthographic transcription in our corpus, so as to obtain a segmentation of the corpus into phonemes constrained by the transcript. The purpose of this experiment is to compare the forced alignment with the output of the acoustic-phonetic decoding. Following the forced alignment procedure, we obtain a corpus segmented into phonemes with timestamps: each line of the resulting transcription contains the start time, the end time, and the position of the phoneme in the expected phoneme frame sequence.
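To make the comparison concrete, here is a small sketch that reads alignment lines of the general shape described above (start time, end time, phoneme) and counts confusions against a DAP hypothesis; the exact file format, field order and the sample data are assumptions for illustration, not the Sphinx aligner's actual output.

```python
# Hypothetical comparison of forced-alignment segments with a DAP hypothesis.
# The "start end phone" line format and the sample strings are assumed for
# illustration only; real aligner output may differ.
from collections import Counter

forced_alignment = """\
0.00 0.12 SH
0.12 0.30 AE:
0.30 0.41 R
0.41 0.52 AI
0.52 0.63 IH
"""

dap_hypothesis = ["S", "AE:", "R", "IH", "AI"]   # decoder output for the same span

def parse_alignment(text: str):
    """Return the phoneme labels of an alignment, one per 'start end phone' line."""
    return [line.split()[2] for line in text.strip().splitlines()]

def confusion_counts(reference, hypothesis):
    """Count reference -> hypothesis substitutions, position by position."""
    return Counter((r, h) for r, h in zip(reference, hypothesis) if r != h)

if __name__ == "__main__":
    ref = parse_alignment(forced_alignment)
    for (r, h), n in confusion_counts(ref, dap_hypothesis).items():
        print(f"{r} confused with {h}: {n} time(s)")
```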
there are not many errors in the DAP. We classified all the
phonemes generated by the DAP in two classes: consonant
3.7 Principle of forced alignment and vowel, and each class is divided into subclasses.
The second treatment is to achieve for each sentence in We have compiled the sounds into six classes of phonemes:
the corpus forced alignment between the sentence and the long vowel / short vowel gemination, words containing
corresponding speech signal. The eventual aim is to compare unfamiliar sounds, sounds that exist in other languages,
the results of our acoustic-phonetic decoding results of the Hamza middle and final emphatic Letters, Sounds
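For readers who want to reproduce a comparable front end outside of SphinxTrain, the following sketch computes 13 MFCCs with their first and second derivatives using librosa; the window and hop settings are common defaults assumed here, since the paper does not specify the exact front-end configuration.

```python
# Illustrative 39-dimensional MFCC front end (13 static + delta + delta-delta),
# matching the feature description above in spirit; the 25 ms window and
# 10 ms hop are assumed defaults, not values taken from the paper.
import numpy as np
import librosa

def extract_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(
        y=y, sr=sr, n_mfcc=13,
        n_fft=int(0.025 * sr), hop_length=int(0.010 * sr),
    )
    delta1 = librosa.feature.delta(mfcc)            # first derivatives
    delta2 = librosa.feature.delta(mfcc, order=2)   # second derivatives
    return np.vstack([mfcc, delta1, delta2]).T      # shape: (frames, 39)

if __name__ == "__main__":
    feats = extract_features("example.wav")
    print(feats.shape)
```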
3.9 Results and Discussion

Phoneme grouping. Using the acoustic-phonetic decoding (DAP), we obtain a phonetic transcription of the corpus from the speech signal without using the orthographic transcription (without any knowledge of the lexicon and with no language model). From this phonetic transcription, we computed statistics on the percentage of each phoneme in our corpus [20].

Analysing phonemes by class is simpler than analysing them as individual phonemes, and the DAP does not make many errors at that level. We classified all the phonemes generated by the DAP into two classes, consonants and vowels, each divided into subclasses. We have grouped the sounds into six classes of phonemes: long vowel / short vowel and gemination; words containing unfamiliar sounds; sounds that exist in other languages; middle and final hamza; emphatic letters; and unproblematic sounds. These groups reflect the manner of production of the sounds and do not take their place of articulation into account.
Table 3. Arabic phoneme grouping.

LSVG    WUS    SEOL    MFH    EL    US
(for each class, characteristic Arabic letters and example words)

We explain here the abbreviations used in this table (Table 3):
LSVG: long or short vowel and gemination
WUS: words with unfamiliar sounds
SEOL: sounds that exist in other languages
MFH: middle and final hamza
EL: emphatic letters
US: unproblematic sounds
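A compact way to use this grouping downstream (for the per-class learner statistics shown later) is a simple lookup table from each phone label to its class; the phone inventory below is an assumed subset built from the phones that appear in Figures 4-6, not the paper's full list, and the class assignments outside those figures are guesses for illustration.

```python
# Illustrative mapping from (assumed) phone labels to the six classes of
# Table 3, so that recognition errors can be aggregated per class as in
# the learner statistics reported below.
from collections import Counter

PHONE_CLASS = {
    "AE": "LSVG", "AE:": "LSVG", "UH": "LSVG", "UW": "LSVG", "IH": "LSVG",
    "HH": "WUS", "AI": "WUS",                   # sounds unfamiliar to the learner
    "R": "SEOL",                                # sounds existing in other languages
    "E": "MFH",                                 # middle/final hamza (assumed label)
    "Q": "EL", "SS": "EL", "DD": "EL", "TT": "EL",   # emphatic letters
    "B": "US", "M": "US", "S": "US",            # unproblematic sounds (assumed)
}

def errors_per_class(mispronounced_phones):
    """Aggregate a learner's mispronounced phones by phoneme class."""
    return Counter(PHONE_CLASS.get(p, "US") for p in mispronounced_phones)

if __name__ == "__main__":
    print(errors_per_class(["SS", "AI", "AE:", "Q"]))
```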
Fig. 2. Rates of automatic recognition of Arabic speech: isolated words.

The previous figure shows the efficiency of our recognition system, tested on three Arabic speakers. The results of our system are very satisfactory, with a recognition rate of over 97% on isolated Arabic words. This result is the best compared to all the work done so far in this area.

Fig. 3. Rates of automatic recognition of Arabic speech: large vocabulary ("The effectiveness"; per-speaker rates between 65% and 80% for Locuteur 1, 2 and 3).

Recognition problems may appear with a large vocabulary, depending on the conditions under which the test signal is recorded. If the word is pronounced more or less close to the microphone, the recognition rates can vary widely, despite the normalization of the signal intended to prevent this phenomenon.

However, if the user always pronounces the word at the same distance and with the same intensity, the recognition rates are very satisfactory, and this allows the system to reach a rate of automatic speech recognition for a large Arabic vocabulary never reached before.

4 CALL Applications

Computer-assisted learning has attracted considerable attention in recent years. Many research efforts have been made to improve such systems, particularly in the field of foreign language teaching.

In the second part of this article, we describe our system and its results for computer-assisted learning of spoken Arabic. This work was developed to teach Arabic pronunciation to people speaking a foreign language such as French. The application uses our speech recognition system to detect errors in the user's pronunciation.

4.1 Design and testing of our system

This system consists of a mosaic of sub-programs managed by a main program that allows users to interact, first with the teacher during the design phase of the programmed instruction, and then with the student during the lesson itself. The dialogue between the machine and the operator (teacher or student) is typically carried out via keyboard, screen and microphone.

The teacher's role: the teacher calls up the lesson and then:
- selects the words to study, based on pronunciation problems adapted to the grade level: level 1 (A1), level 2 (A2) or level 3 (B1);
- makes sure that the system correctly recognizes the words he has chosen for the learners.

The role of the learner: the learner works as follows:
- the student pronounces the words chosen by the teacher.

The role of the system: the system can output the results as follows:
- If the pronunciation was incorrect, the system returns the word after underlining the place of the faulty pronunciation; for example, for the word شارع, if the learner produced S AE: R IH AI instead of SH AE: R AI IH.
- If the spoken word is too far from the model proposed by the teacher, especially if this fault was not provided for, only an error message is returned.
- In the latter case the learner may be asked:
  * either to go directly to the hearing of the next word and continue the work,
  * or to repeat the word immediately, a certain number of times, if the teacher wishes.

All these rules are designed to make our system a very simple application that allows a genuine dialogue with the student, even in the absence of the teacher.
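A minimal sketch of how such feedback could be produced, assuming the recognizer returns a phone sequence for the learner's utterance: a standard edit-distance alignment between the reference phones and the recognized phones marks substituted, inserted or deleted positions, which the interface can then underline. The routine below is a generic dynamic-programming implementation written for this illustration, not the paper's actual module.

```python
# Generic phone-level alignment used to locate faulty positions; illustrative
# only, under the assumption that both the reference and the learner
# pronunciations are available as phone sequences.

def align(ref, hyp):
    """Levenshtein alignment; returns a list of (op, ref_phone, hyp_phone)."""
    n, m = len(ref), len(hyp)
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dist[i][0] = i
    for j in range(1, m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dist[i][j] == dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            op = "ok" if ref[i - 1] == hyp[j - 1] else "substitution"
            ops.append((op, ref[i - 1], hyp[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
            ops.append(("deletion", ref[i - 1], "-")); i -= 1
        else:
            ops.append(("insertion", "-", hyp[j - 1])); j -= 1
    return list(reversed(ops))

if __name__ == "__main__":
    reference = ["SH", "AE:", "R", "AI", "IH"]   # teacher model for شارع
    learner   = ["S",  "AE:", "R", "IH", "AI"]   # recognized learner phones
    for op, r, h in align(reference, learner):
        marker = "" if op == "ok" else "   <-- " + op
        print(f"{r:>4} {h:>4}{marker}")
```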
4.2 The ALO process: testing and results

This part corresponds to the testing of our system. The application was tested to obtain quantitative information on its validity and, in particular, on its ability to provide statistics on a learner or on the class (level). Systematic tests were run on a large Arabic corpus (on the order of 352 words selected by a linguist, whose theme is everyday-life communication in Arabic: introducing oneself, family, food, clothing, orientation in space and time, etc.):
- unproblematic sounds: 52 words;
- emphatic letters (ق ص ض ط ظ): 60 words;
- middle and final hamza: 60 words;
- sounds existing in other languages (ر, ...): 60 words;
- words with unfamiliar sounds: 60 words;
- long vowel or short vowel and gemination: 60 words.

This system was tested by 13 French students from the University Stendhal Grenoble, France, after the training "Learning foreign languages: the case of Arabic". The following figures show the statistics of the level of each student for the Arabic phoneme classes: long or short vowel and gemination, emphatic letters, and unfamiliar sounds.

Fig. 4. Result of the system for each student on the phoneme class "long or short vowel and gemination" (per-student rates for the phones AE, AE:, UH, UW and IH).

Fig. 5. Result of the system for each student on the phoneme class "unknown sounds of Arabic" (per-student rates for the phones HH and AI).

Fig. 6. Result of the system for each student on the phoneme class "Arabic emphatic letters" (per-student rates for the phones Q, SS, DD and TT).

The results and statistics of our system for learning foreign spoken languages (the case of Arabic) are very satisfactory. The previous figures clearly show the level of each learner with respect to his difficulties in pronouncing each class of phonemes. These statistics are very helpful to the teacher, who can automatically detect the pronunciation errors of each learner.

5 Conclusion and Outlook

In this paper, we presented our system, a platform for learning foreign spoken languages (the case of Arabic) based on standard Arabic automatic speech recognition.

Our system differs in several respects from the few other works on standard Arabic and its use in computer-assisted foreign language learning (the case of spoken Arabic): it incorporates an acoustic model of Arabic speech based on Hidden Markov Models (HMM) that gives results in the form of phonetic structures, while other systems lack this and assume that the input signal is already phonetically labeled and organized (the case of El-Kasasy [11]). Our system also includes a language model to validate the acoustic analysis obtained. Several perspectives are open to our work; we can cite, among others:
In terms of modeling, a multitude of extensions can be undertaken to expand the coverage of the linguistic phenomena treated (enlarging our training corpus, our language model, our pronunciation dictionary, etc.).

In terms of implementation, we propose to implement the other modules of the platform (oral learning of Arabic sentences, a greater diversity of exercises for the learner, extension of our platform to the learning of other languages, etc.).
6 References

[1] CMU Sphinx: http://cmusphinx.sourceforge.net.

[2] Sherwood B.: "Man-Machine Studies", University of Illinois, USA, 1980.

[3] Simon J. C.: "L'éducation et l'informatisation de la société", report to the President of the Republic, France, 1981.

[4] Bestougeff H.: State doctorate thesis, University of Paris VII, 1970.

[5] Nelles R.: Thesis, University of Freiburg, 1977.

[6] Antoniadis G.: "Du TAL et son apport aux systèmes d'apprentissage des langues", Habilitation à Diriger des Recherches, Université Stendhal Grenoble 3, France, 2008.

[7] Nordmann B.: "A comparative study of some visual speech displays", contract report, University of Illinois, USA, 1981.

[8] Kalikow D.: Proc. of the IEEE Fall Electronics Conference, USA, 1991.

[9] Harada K.: "Annual bulletin of the Research Institute of Logopedics and Phonetics", 1991.

[10] Lebras J.: Thesis, University of Rennes II, France, 1981.

[11] El-Kasasy M.: "An Automatic Speech Verification System", thesis, Cairo University, Faculty of Engineering, Department of Electronics and Communications, Egypt, 1992.

[12] Mourad Mars, Georges Antoniadis, Mounir Zrigui: Statistical Part Of Speech Tagger for Arabic Language. IC-AI 2010: 894-899.

[13] Mohsen Maraoui, Georges Antoniadis, Mounir Zrigui: "SALA: CALL System for Arabic Based on NLP Tools". IC-AI 2009: 168-172.

[14] Mohsen Maraoui, Georges Antoniadis, Mounir Zrigui: "CALL System for Arabic Based on Natural Language Processing Tools". IICAI 2009: 2249-2258.

[15] Mourad Mars, Georges Antoniadis, Mounir Zrigui: "Nouvelles ressources et nouvelles pratiques pédagogiques avec les outils TAL", ISDM 32, No. 571, April 2008.

[16] Aymen Trigui, Mohsen Maraoui, Mounir Zrigui: The Gemination Effect on Consonant and Vowel Duration in Standard Arabic Speech. SNPD 2010: 102-105.

[17] Mohamed Belgacem, Mounir Zrigui: Automatic Identification System of Arabic Dialects. IPCV 2010: 740-749.

[18] Tahar Saidane, Mounir Zrigui, Mohamed Ben Ahmed: Arabic Speech Synthesis Using a Concatenation of Polyphones: The Results. Canadian Conference on AI 2005: 406-411.

[19] Mounir Zrigui, Mbarki Chahad, Anis Zouaghi, Mohsen Maraoui: A Framework of Indexation and Document Video Retrieval based on the Conceptual Graphs. CIT 18(3), 2010.

[20] Rami Ayadi, Mohsen Maraoui, Mounir Zrigui: Intertextual distance for Arabic texts classification. ICITST 2009: 1-6.
