
Bangla Food Review Sentimental Analysis using Machine Learning
Mohd. Istiaq Hossain Junaid, Faisal Hossain, Udyan Saha Upal, Anjana Tameem, Abul Kashim & Ahmed Fahmin
Department of Electrical and Computer Engineering
North South University
Dhaka, Bangladesh
[email protected], [email protected], [email protected], [email protected],
[email protected] & [email protected]
2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC) | 978-1-6654-8303-2/22/$31.00 ©2022 IEEE | DOI: 10.1109/CCWC54503.2022.9720761

Abstract— In this modern age, people depend on the internet, and many prefer to order food online through food apps rather than visit a restaurant. They post a great many reviews about the food online. In this project, we aim to build a machine learning model that analyzes the sentiment of those reviews. Internet use in Bangladesh is growing day by day, so we decided to build the model for the Bangla language. We found no existing Bangla food-review dataset that we could use, so we collected more than one thousand Bangla food reviews from online platforms such as Foodpanda, HungryNaki, Shohoz Food, and Pathao Food, and labeled them ourselves. After the necessary preprocessing, we extracted various features from the cleaned data and used them to train and test machine learning and deep learning models. A Long Short-Term Memory (LSTM) deep learning model gave the best accuracy, 90.89%, using word2sequence for feature extraction. Our research contribution will help the food industry use this model to understand the sentiment of Bangla food reviews.

Keywords— Bangla language processing, Sentiment analysis, Customer reviews, Natural Language Processing (NLP), Recurrent Neural Network (RNN), Deep Learning (DL), Long short-term memory (LSTM), Gated recurrent unit (GRU).

I. INTRODUCTION

Sentiment analysis, also known as opinion mining, is a Natural Language Processing (NLP) method that identifies and extracts the emotional tone of text data. It helps businesses gather insightful information from unstructured text. With the spread of technology and the internet, text reviews posted on online platforms, especially for restaurants reached through food delivery services, have become commonplace. Such reviews reflect customers' opinions of a particular product: by expressing their own opinions or sentiments, users rate restaurants and their food services, and the reviews indicate whether a restaurant is of sufficient or poor quality. For this reason, text reviews can serve as the source material for sentiment analysis of a restaurant.

In recent years, the number of internet users in Bangladesh has grown rapidly. Online food delivery systems invite users to share feedback on their food services, and as these platforms grow, the enormous volume of text reviews can be analyzed to determine users' sentiment toward particular food items. Restaurant owners and food delivery platforms can draw insightful information from this sentiment analysis. Proper analysis of customer feedback is now critical to an organization's success: if a restaurant owner cannot recognize consumers' issues through their feedback, it becomes harder for the management to understand the company's difficulties. Sentiment analysis systems can review user feedback proactively as the market changes and improve the restaurant's business outlook. By measuring users' opinions, one can identify the market position of a particular restaurant, and a model built on the collected reviews and their target labels, positive or negative, can categorize new user reviews [1].

According to BTRC, Bangladesh had approximately 120.95 million internet users as of June 2021. Users typically post reviews in Bangla, English, or phonetic Bangla, which is popular among smartphone users. Examining these reviews manually is impractical: there is no precise guideline for posting them, and a large number of comments are made every day. An automated system that can identify the polarity of reviews would therefore be beneficial. The features of a user's emotional expression, whether positive or negative, can be found in Bangla or, more particularly, phonetic Bangla. Our study collected over 1000 reviews from online food delivery platforms such as Foodpanda, Shohoz Food, Pathao Food, and HungryNaki.

Our objective is to create a machine learning system that automatically detects the polarity of reviews and displays the ratio of negative to positive reviews using sentiment analysis via NLP. We preprocessed the data with punctuation removal, stopword removal, removal of other useless symbols, tokenization, and stemming. To transform the text into feature vectors (vectorization), we used count vectors, TF-IDF vectors, and N-grams to test machine learning and deep learning techniques. Finally, to classify positive and negative reviews, we used the Random Forest classifier, Linear SVM, Multinomial Naïve Bayes, Decision Tree classifier, Logistic Regression, LSTM, GRU, and RNN. We also dealt with phonetic reviews such as "Khabar ta besh valo chilo," which is "খাবারটা বেশ ভাল ছিল" in actual Bengali; we performed these translations manually.

In summary, in our study we collected over 1000 Bengali food reviews and annotated them manually, and the result will be a public dataset for future research. The reviews were collected manually from online food delivery platforms. We implemented deep learning techniques to obtain higher accuracy than machine learning techniques alone.

Fig. 1 Infrastructure of our experiment

II. LITERATURE REVIEW

Natural language processing (NLP) is a major preprocessing task in any script-based research work in the machine learning discipline. Many scholars and researchers have done sentiment-based analysis and opinion mining in which different emotions are categorized and various machine learning models are applied after other NLP techniques. Yet the essential part of any text preprocessing depends entirely on the language, which is not universal. While there are many works and NLP tools for the English language, the scarcity of annotated Bangla datasets and the lack of proper NLP tools for Bangla text preprocessing are the main restrictions on further research in this discipline.

Sentiment analysis for different contextual datasets has been experimented with on English datasets. Hossain et al. [2] worked on a restaurant review dataset of about 1k reviews collected from the PRIYO review website. They manually annotated the reviews as positive or negative, POS-tagged the dataset for feature extraction along with a TF-IDF vectorizer, and experimented with four machine learning models (MNB, SVM, KNN, and LR), achieving a best accuracy of 77% on their validation set. Similar experimentation was done by Asiful et al. [3], who collected a dataset from a Facebook page named FOODBANK; however, their dataset was unbalanced, consisting of 500 positive and only 200 negative reviews. As feature extraction is an essential part of any machine learning research, different approaches were taken by NLP researchers. Hasan et al. [4] took 5000 restaurant reviews, extracted four different features (BOW, TF-IDF, Skip-Gram, and CBOW), and experimented with three machine learning models, showing that Skip-Gram gave the best accuracy among all the models. Bhuiyan et al. [5] implemented deep learning models, compared CNN, CNN with an attention mechanism, and LSTM on an English dataset, and achieved 98.4% accuracy.

Most of these research works are not adequately utilized for sentiment analysis of Bangla reviews. One such study was conducted by Sharif et al. [6], who collected English reviews from Facebook pages and groups, manually annotated and classified them as positive or negative, and then translated the reviews from English to Bangla. Their best model, MNB, achieved 80.48% accuracy on the validation set. In similar research on a translated dataset of online shopping reviews, Rahman et al. [7] achieved 78% accuracy with SVM and 83% with a CNN model. However, sentiment analysis depends heavily on language context and structure, and machine translation lacks human nuance. For this reason, researchers are now more focused on collecting native Bangla-language data for machine learning. For instance, Shafin et al. [8] worked on opinion mining of online shopping product reviews, collecting 1020 reviews and annotating them as positive or negative; using a TF-IDF vectorizer for model features, SVM acquired the best testing accuracy of 88.81%. Though this was product-review opinion mining, more focused work on restaurant and food reviews has been done recently. Hossain et al. [9] created a dataset from the FoodPanda and Shohoz Food websites by manually annotating 500 positive and 500 negative reviews and experimented with a combined CNN-LSTM model; although they achieved high training accuracy, the validation accuracy was stuck at 75.01%. Haque et al. [10] experimented with different feature extraction and model implementation techniques to compare restaurant opinion mining. They collected 1500 reviews from Facebook, YouTube videos, and blogs and annotated them as bad, good, or excellent. Their experiment compared different N-gram and vectorizing features with SVM, DT, and LR models, and they found that a TF-IDF vectorizer with bigrams acquired the best accuracy, 75.58%, with SVM.

Our background research concludes that there is a scarcity of rich annotated Bangla datasets for food and restaurant reviews. This paper therefore aims to construct a dataset for opinion mining of food and restaurant reviews, experiment with different features, and build machine learning and deep learning models for our custom dataset.

III. METHODOLOGY

A. Dataset Creation

The dataset is the most crucial part of any machine learning research project. However, there is a lack of a Bangla corpus for sentiment analysis of food reviews. As a result, we collected, processed, and manually annotated our own food review dataset to conduct this research. We gathered nearly 1100 reviews from the Foodpanda website, Pathao Food, HungryNaki, and Shohoz Food, which allow customers to comment on their food quality. We then hand-labeled each review as positive or negative and removed duplicate reviews from the dataset. We kept 520 positive and 520 negative reviews to create a balanced dataset, so our final dataset contained a total of 1040 food reviews, each falling into one of two categories: positive or negative. Fig. 2 shows the resulting class distribution.

Fig. 2 Pie chart of collected data (negative vs. positive reviews)

For training and testing purposes, we split the dataset at a ratio of 80:20. We also ensured that there are equal numbers of positive and negative reviews in the training and testing sets for a balanced training and testing outcome.
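As an illustration of this balanced 80:20 split, the following minimal sketch (our own, not from the paper; it assumes the labeled reviews are held in Python lists and uses scikit-learn, one plausible tool for this step) keeps the positive/negative ratio equal in both partitions:

```python
from sklearn.model_selection import train_test_split

# Toy stand-in for the 1040 labeled reviews (520 positive, 520 negative);
# in the real experiment these are the collected Bangla review texts.
reviews = ["খাবারটা বেশ ভাল ছিল"] * 520 + ["খাবার ভালো ছিল না"] * 520
labels = [1] * 520 + [0] * 520

# 80:20 split; stratify=labels keeps an equal number of positive and
# negative reviews in both the training and the testing portion.
X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.20, stratify=labels, random_state=42
)

print(len(X_train), len(X_test))  # 832 and 208 reviews
```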
B. Data Preprocessing

Raw textual representations of food reviews are insufficient for machine learning. We need to apply Bangla text preprocessing techniques to our dataset to prepare the text for model building. We begin by removing unnecessary punctuation, emoticons, pictorial icons, random English words, and English letters from the review text to isolate only the Bangla text. Second, we tokenize each review sentence into words and remove Bangla stop words using the BNLP toolkit. Furthermore, we use Bangla stemming techniques to extract the base word from each word token. Finally, all stemmed words are recombined into sentences for further feature extraction.
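The snippet below sketches this cleaning pipeline under our own assumptions: it keeps only characters in the Bengali Unicode block, tokenizes on whitespace, and drops words from a tiny placeholder stop-word set. The actual study uses the BNLP toolkit's stop-word list and a Bangla stemmer; the `stem` function here is only a stand-in.

```python
import re

# Tiny placeholder stop-word set, for illustration only; the study uses the
# Bangla stop-word list shipped with the BNLP toolkit.
BANGLA_STOPWORDS = {"এবং", "ছিল", "খুব"}

def clean_review(text: str) -> list:
    """Keep only Bangla characters, tokenize on whitespace, drop stop words."""
    # Strip punctuation, emoticons, English letters/digits and other symbols by
    # keeping only the Bengali Unicode block (U+0980-U+09FF) plus whitespace.
    text = re.sub(r"[^\u0980-\u09FF\s]", " ", text)
    tokens = text.split()
    return [t for t in tokens if t not in BANGLA_STOPWORDS]

def stem(token: str) -> str:
    """Stand-in for the Bangla stemmer used in the study (not a real stemmer)."""
    return token  # a real stemmer would strip inflectional suffixes here

cleaned = " ".join(stem(t) for t in clean_review("খাবারটা বেশ ভাল ছিল!! 10/10 :)"))
print(cleaned)  # -> "খাবারটা বেশ ভাল" (stop word "ছিল" removed)
```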
C. Feature Extraction

Different features were extracted from the review text to feed the classifier models. As previously mentioned, we used both traditional machine learning techniques and deep learning models to classify our food review dataset. Neither kind of model can work directly with raw text: to analyze and predict sentiment, they need numeric representations of the properties and features of the text and sentences. We therefore used different feature representations with different classifiers to experiment and obtain better results. We describe those features briefly in this section.

Count Vectorizer: A count vectorizer represents text data in matrix form by computing the frequency of each word in each document. All unique words in the dataset are represented in a sparse matrix, and each word is assigned an index whose entry holds the frequency of that word in the document. The resulting vector representation is then used as a feature. We extracted count vectors from our dataset and fed them into various models.

GloVe vector: GloVe is an unsupervised learning algorithm for obtaining vector representations of words. It leverages both the global and local statistics of a corpus to define a principled loss function.

Word2Sequence: This converts the tokens of the corpus into sequences of integers, where each word is replaced by its index. For example, the tokens ["I", "will"] and ["I", "go"] might be converted into [1, 2] and [1, 4].

TF-IDF: Term Frequency-Inverse Document Frequency is a well-known and essential feature extractor used in nearly all NLP studies. The term frequency of a word in a document is simply the proportion of times it appears in that document, while the inverse document frequency measures how informative the word is across all documents: it compensates for some words being used more often than others and establishes the significance of a word throughout the corpus. We used unigrams and bigrams for TF-IDF vectorization, so pairs of contiguous words were also used as feature input. N-grams help convey sentence properties better by capturing context between words.
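A minimal sketch of the count and TF-IDF features described above, assuming scikit-learn's vectorizers (a common choice, though the paper does not name its implementation); the three-review corpus is a placeholder:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Placeholder cleaned reviews; the real input is the preprocessed corpus.
corpus = ["খাবার ভাল", "খাবার ভালো না", "দারুণ স্বাদ"]

# Bag-of-words count features (sparse document-term matrix).
count_vec = CountVectorizer()
X_counts = count_vec.fit_transform(corpus)

# TF-IDF features over unigrams and bigrams, as described above.
tfidf_vec = TfidfVectorizer(ngram_range=(1, 2))
X_tfidf = tfidf_vec.fit_transform(corpus)

print(X_counts.shape, X_tfidf.shape)
```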
D. Classifiers

After extracting features from our dataset, we fed the data into various machine learning and deep learning models. For traditional machine learning we used the Random Forest classifier, Linear Support Vector Machine, Multinomial Naive Bayes, Decision Tree, and Logistic Regression, trained on count vectorizer and TF-IDF features with unigrams and bigrams. For deep learning we used LSTM, GRU, and RNN, with word sequences and GloVe embeddings as features. The LSTM with the word-sequence features yielded the best testing accuracy of all the models.
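The following sketch shows how the five traditional classifiers could be trained and scored on TF-IDF unigram+bigram features. It is illustrative only: the toy reviews, pipeline layout, and default hyperparameters are our assumptions, not details reported in the paper.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score

# Placeholder data; in the experiment these are the 832/208 train/test reviews.
X_train = ["খাবার ভাল ছিল", "খাবার ভালো না", "দারুণ স্বাদ", "বাজে সার্ভিস"]
y_train = [1, 0, 1, 0]
X_test, y_test = ["খাবার দারুণ"], [1]

classifiers = {
    "Random Forest": RandomForestClassifier(),
    "Linear SVM": LinearSVC(),
    "Multinomial NB": MultinomialNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

for name, clf in classifiers.items():
    # Each model is trained on TF-IDF unigram+bigram features of the cleaned text.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```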

E. Model Implementation

Fig. 3 Architecture of LSTM (fine-tuned)

First, we cleaned the data as described above and extracted features from the cleaned data, which were then fed into various machine learning and deep learning models. We have 1040 reviews in total, 520 negative and 520 positive, split 80:20 for training and testing. For the machine learning models, we converted the cleaned data into count vectors, N-gram vectors, and TF-IDF vectors as features, trained the models on 80% of the data, and tested them on the remaining 20%.
For the deep learning models, we converted the cleaned data into word-to-sequence and GloVe vector features. To feed the GloVe vectors into the LSTM, RNN, and GRU, all input sequences must have the same length, so we used a padding function that fills short sequences with zeros. We set the maximum vocabulary size to 10000 words and used a pre-trained Bangla GloVe vector for word embedding, from which we created an embedding matrix. We trained the deep learning models on 80% of the data with the Adam optimizer at a 0.0001 learning rate, accuracy as the metric, binary cross-entropy as the loss function, and 100 epochs, and obtained good results from the LSTM.
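A hedged sketch of the sequence padding and GloVe embedding-matrix construction described above, using the Keras preprocessing utilities. The maximum sequence length, embedding dimension, file name of the pre-trained Bangla GloVe file, and the frozen embedding layer are our assumptions; the paper only states the 10000-word vocabulary cap.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding

MAX_WORDS = 10000            # vocabulary cap stated in the paper
MAX_LEN, EMB_DIM = 100, 100  # sequence length and embedding size: our assumptions

texts = ["খাবার ভাল ছিল", "খাবার ভালো না"]   # placeholder cleaned reviews

tokenizer = Tokenizer(num_words=MAX_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
# Pad every review to the same length with zeros, as described above.
padded = pad_sequences(sequences, maxlen=MAX_LEN, padding="post")

# Load pre-trained Bangla GloVe vectors; "bn_glove.txt" is a placeholder path,
# not the actual file used by the authors.
embeddings = {}
with open("bn_glove.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        embeddings[parts[0]] = np.asarray(parts[1:], dtype="float32")

# Embedding matrix: one GloVe row per word index kept by the tokenizer.
vocab_size = min(MAX_WORDS, len(tokenizer.word_index) + 1)
embedding_matrix = np.zeros((vocab_size, EMB_DIM))
for word, i in tokenizer.word_index.items():
    if i < vocab_size and word in embeddings:
        embedding_matrix[i] = embeddings[word]

embedding_layer = Embedding(
    input_dim=vocab_size, output_dim=EMB_DIM,
    embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
    trainable=False,  # keep the pre-trained vectors frozen (our assumption)
)
```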
To feed the word-to-sequence features into the deep learning models, we first made the training and test sequences equal in length. The vocabulary size of the dataset is 1817. We again used 80% of the data for training and fed it into the various deep learning models, with the optimizer set to Adam at a 0.0001 learning rate, accuracy as the metric, binary cross-entropy as the loss function, and 100 epochs with a batch size of 16. After training the various models, we obtained good accuracy from the LSTM. Fig. 3 shows the architecture of the fine-tuned LSTM, which achieved an accuracy of more than 90%.
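The sketch below assembles a small LSTM classifier with the training settings the paper reports (Adam at a 0.0001 learning rate, binary cross-entropy, accuracy metric, 100 epochs, batch size 16). The embedding and LSTM layer sizes, maximum sequence length, and toy input data are our assumptions; Fig. 3 shows the authors' actual fine-tuned architecture.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

VOCAB_SIZE, MAX_LEN = 1817, 100  # vocabulary size from the paper; MAX_LEN is our assumption

# Toy input standing in for the padded training word sequences.
texts = ["খাবার ভাল ছিল", "খাবার ভালো না", "দারুণ স্বাদ", "বাজে সার্ভিস"]
labels = np.array([1, 0, 1, 0])
tok = Tokenizer(num_words=VOCAB_SIZE)
tok.fit_on_texts(texts)
X = pad_sequences(tok.texts_to_sequences(texts), maxlen=MAX_LEN)

# Layer sizes are illustrative; the paper specifies only the optimizer,
# learning rate, loss, metric, number of epochs, and batch size.
model = Sequential([
    Embedding(input_dim=VOCAB_SIZE, output_dim=64),
    LSTM(64),
    Dropout(0.3),
    Dense(1, activation="sigmoid"),  # binary positive/negative output
])
model.compile(optimizer=Adam(learning_rate=0.0001),
              loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(X, labels, epochs=100, batch_size=16, validation_split=0.25)
```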

Table 1 Comparative analysis of related models

| No. | Authors | Review Topic | Dataset Size | Best Model | Best Accuracy |
| --- | --- | --- | --- | --- | --- |
| 1 | O. Sharif et al. [6] | Online restaurant | 1000 | Multinomial Naive Bayes | 80.48% |
| 2 | M. A. Rahman et al. [7] | Restaurant | 2053 | CNN | 83% |
| 3 | E. Hossain et al. [2] | Restaurant | 8435 | — | 91.37% |
| 4 | R. A. Laksono et al. [11] | TripAdvisor | 337 | Naive Bayes | 72.04% |
| 5 | N. Hossain et al. [9] | Restaurant | 1000 | CNN-LSTM | 94.22% |
| 6 | M. H. Rahman et al. [12] | Book | 6281 | CNN-LSTM | 97.22% |
| 7 | R. R. Chowdhury et al. [13] | Movie | 4000 | SVM | 88.90% |
| 8 | Ours | Food | 1040 | LSTM | 90.86% |

Table 1 compares our work with that of others.

IV. RESULT & ANALYSIS

We have analyzed various deep learning and machine learning models. For the machine learning models we used precision, recall, F1 score, and accuracy for evaluation; for the deep learning models we used accuracy, along with loss curves to assess model performance. Training loss and validation loss together give a clear picture of overfitting and underfitting.

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Here TP means True Positive, TN means True Negative, FP means False Positive, and FN means False Negative. We evaluated all models with this accuracy measure.

A. Machine Learning Model Evaluation

Table 2 shows the results we achieved. We converted the cleaned text into TF-IDF vectors, count vectors, and N-gram vectors to feed the machine learning models, but the results were not good. The best machine learning result was 75% accuracy from logistic regression using the TF-IDF vectorizer for feature extraction; its precision, recall, and F1 score are also better than those of the other models. With the other feature extractions, such as N-grams and count vectors, we obtained at best 73.55% accuracy from Multinomial Naive Bayes using N-gram vectors, which is unsatisfactory.

Table 2. Average evaluation metrics for ML models (per class 0 / class 1)

| Feature Extraction | Model | Precision (0 / 1) | Recall (0 / 1) | F1 Score (0 / 1) | Accuracy |
| --- | --- | --- | --- | --- | --- |
| Count Vector | Random Forest Classifier | 0.75 / 0.74 | 0.74 / 0.75 | 0.74 / 0.75 | 74.52% |
| Count Vector | Linear SVM | 0.72 / 0.66 | 0.62 / 0.76 | 0.66 / 0.71 | 68.75% |
| Count Vector | Multinomial Naive Bayes | 0.66 / 0.74 | 0.80 / 0.59 | 0.71 / 0.66 | 69.23% |
| Count Vector | Decision Tree Classifier | 0.69 / 0.71 | 0.72 / 0.67 | 0.70 / 0.69 | 69.71% |
| Count Vector | Logistic Regression | 0.72 / 0.70 | 0.69 / 0.73 | 0.71 / 0.72 | 71.15% |
| N-Gram Vector | Random Forest Classifier | 0.73 / 0.73 | 0.73 / 0.73 | 0.73 / 0.73 | 73.08% |
| N-Gram Vector | Linear SVM | 0.73 / 0.73 | 0.73 / 0.73 | 0.73 / 0.73 | 73.08% |
| N-Gram Vector | Multinomial Naive Bayes | 0.70 / 0.79 | 0.83 / 0.64 | 0.76 / 0.71 | 73.55% |
| N-Gram Vector | Decision Tree Classifier | 0.72 / 0.68 | 0.65 / 0.74 | 0.68 / 0.71 | 69.71% |
| N-Gram Vector | Logistic Regression | 0.72 / 0.74 | 0.75 / 0.70 | 0.73 / 0.72 | 72.59% |
| TF-IDF | Random Forest Classifier | 0.75 / 0.69 | 0.64 / 0.79 | 0.69 / 0.75 | 72.0% |
| TF-IDF | Linear SVM | 0.74 / 0.69 | 0.64 / 0.78 | 0.69 / 0.73 | 71.0% |
| TF-IDF | Logistic Regression | 0.77 / 0.74 | 0.73 / 0.78 | 0.75 / 0.76 | 75.0% |
| TF-IDF | Decision Tree Classifier | 0.73 / 0.69 | 0.66 / 0.75 | 0.69 / 0.72 | 71.0% |
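For reference, per-class precision, recall, F1, and accuracy values of the kind reported in Table 2 can be produced with scikit-learn as in the sketch below; the label vectors are placeholders standing in for the held-out test reviews.

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Placeholder labels/predictions standing in for the 208 held-out test reviews.
y_test = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

# Per-class precision, recall and F1 (classes 0 and 1), as in Table 2.
print(classification_report(y_test, y_pred, digits=2))

# Accuracy = (TP + TN) / (TP + TN + FP + FN), taken from the confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("accuracy:", (tp + tn) / (tp + tn + fp + fn), accuracy_score(y_test, y_pred))
```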

B. Deep Learning Model Evaluation

Since the machine learning models did not give good results, we tried some deep learning models. We converted the cleaned text data into word-to-sequence and GloVe vector features to feed the models and used Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) networks to train on our dataset.

Table 3. Average evaluation metrics for DL models

| Feature Extraction | Model | Accuracy |
| --- | --- | --- |
| Word2Sequence | LSTM | 90.86% |
| Word2Sequence | GRU | 89.42% |
| Word2Sequence | RNN | 87.01% |
| Glove Vector | LSTM | 87.5% |
| Glove Vector | GRU | 80.28% |
| Glove Vector | RNN | 87.5% |

From Table 3, we can see that with word-to-sequence feature extraction the fine-tuned LSTM gives the best result, with an accuracy of 90.86%. LSTM also performed well with GloVe features, reaching 87.5%. Mohd Istiaq Hossain Junaid et al. [14] achieved more than 98% accuracy from GRU for binary text classification using word2sequence feature extraction; the difference is that their dataset has a larger vocabulary per sentence than ours. We conclude that the deep learning models outperform the machine learning models here, and that among the deep learning models LSTM performs best with word2sequence feature extraction. Feature extraction clearly plays a vital role in our experiment and helps the models increase their accuracy.

Fig. 4 shows the ROC curves of the deep learning models using word2sequence feature extraction. The highest AUC (Area Under the Curve) score is 0.94, obtained by GRU, which indicates that the model can distinguish well between bad and good reviews.

Fig. 4: ROC curves for DL models (Feature: Word2Sequence)
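A ROC curve like those in Fig. 4 can be plotted from the predicted positive-class probabilities as sketched below (our own illustration with made-up scores; in the experiment the probabilities would come from the trained deep learning models on the test sequences):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# Placeholder ground truth and predicted positive-class probabilities; in the
# experiment these come from model.predict() on the padded test sequences.
y_test = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])

fpr, tpr, _ = roc_curve(y_test, y_prob)
roc_auc = auc(fpr, tpr)                    # Area Under the Curve (AUC) score

plt.plot(fpr, tpr, label=f"LSTM (AUC = {roc_auc:.2f})")
plt.plot([0, 1], [0, 1], linestyle="--")   # chance line
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
```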

Fig. 5: Loss curves for DL models (Feature: Word2Sequence)

From Fig. 5, we can see that the deep learning models used in our experiments are overfitting. The loss curves show that the training loss decreases continually with each epoch, while the validation loss decreases up to a certain point and then begins to increase. This indicates that the models become overfitted on the small dataset: they have learned the training data too well, including its noise.

Fig. 6: Loss curves for DL models (Feature: Glove Vector)

From Fig. 6, we see that the deep learning models have likewise overfitted when using the GloVe features, again learning the training data along with its noise.
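Loss curves such as those in Fig. 5 and Fig. 6 are typically drawn from the history object returned by Keras' `model.fit` when a validation split is supplied; the helper below is a minimal sketch of that plotting step, not code from the paper.

```python
import matplotlib.pyplot as plt

def plot_loss(history):
    """Plot training vs. validation loss from a Keras History object."""
    # "loss" and "val_loss" are the standard Keras history keys when a
    # validation split or validation set is passed to model.fit().
    plt.plot(history.history["loss"], label="training loss")
    plt.plot(history.history["val_loss"], label="validation loss")
    plt.xlabel("Epoch")
    plt.ylabel("Binary cross-entropy loss")
    plt.legend()
    plt.show()

# A widening gap (training loss still falling while validation loss rises)
# is the overfitting pattern discussed above.
```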


V. CONCLUSION & FUTURE WORK

Bangla food review sentiment analysis is a relatively new research area. Since internet users in Bangladesh are increasing daily and people are moving more of their activities from offline to online, this topic is important for the future. We created our own dataset, preprocessed it, and trained various machine learning and deep learning models on it, obtaining the best result from LSTM with word2sequence feature extraction.

For future work, we can extend the dataset and classify reviews into more than two categories, such as neutral, worst, and best. Since the pre-trained model is quite large, we also plan to use knowledge distillation to obtain a smaller model with good accuracy, so that anyone can use the model on any device. Our contribution will have a significant impact on the food industry; this work will benefit the food industries as well as customers.

VI. REFERENCES

[1] M. Hu and B. Liu, "Mining and Summarizing Customer Reviews," in Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 2004.
[2] E. Hossain, O. Sharif, M. M. Hoque, and I. H. Sarker, "SentiLSTM: A Deep Learning Approach for Sentiment Analysis of Restaurant Reviews," in Hybrid Intelligent Systems (HIS 2020), Advances in Intelligent Systems and Computing.
[3] S. M. A. Huda, M. M. Shoikot, M. A. Hossain, and I. J. Ila, "An Effective Machine Learning Approach for Sentiment Analysis on Popular Restaurant Reviews in Bangladesh," in 1st International Conference on Artificial Intelligence and Data Sciences (AiDAS), Malaysia, 2019.
[4] T. Hasan, A. Matin, M. Kamruzzaman, S. Islam, and Md. O. Faruq Goni, "A Comparative Analysis of Feature Extraction Methods for Human Opinion Grouping Using Several Machine Learning Techniques," in 2020 IEEE International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE), Bhubaneswar, India, Dec. 2020.
[5] Md. R. Bhuiyan, M. H. Mahedi, N. Hossain, Z. N. Tumpa, and S. A. Hossain, "An Attention Based Approach for Sentiment Analysis of Food Review Dataset," in 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 2020.
[6] O. Sharif, M. M. Hoque, and E. Hossain, "Sentiment Analysis of Bengali Texts on Online Restaurant Reviews Using Multinomial Naïve Bayes," in 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 2019.
[7] Md. A. Rahman and E. Kumar Dey, "Aspect Extraction from Bangla Reviews Using Convolutional Neural Network," in Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Kitakyushu, Japan, 2018.

[8] M. A. Shafin, Md. M. Hasan, Md. R. Alam, M. A. Mithu, A. U. Nur, and Md. O. Faruk, "Product Review Sentiment Analysis by Using NLP and Machine Learning in Bangla Language," in International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 2020.
[9] N. Hossain, Md. R. Bhuiyan, Z. N. Tumpa, and S. A. Hossain, "Sentiment Analysis of Restaurant Reviews using Combined CNN-LSTM," in 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 2020.
[10] F. Haque, Md. M. H. Manik, and M. M. A. Hashem, "Opinion Mining from Bangla and Phonetic Bangla Reviews Using Vectorization Methods," in 4th International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, Dec. 2019.
[11] R. A. Laksono, K. R. Sungkono, R. Sarno, and C. S. Wahyuni, "Sentiment Analysis of Restaurant Customer Reviews on TripAdvisor using Naïve Bayes," in 12th International Conference on Information & Communication Technology and System (ICTS), 2019.
[12] M. H. Rahman, M. S. Islam, M. M. U. Jowel, M. M. Hasan, and M. S. Latif, "Classification of Book Review Sentiment in Bangla Language Using NLP, Machine Learning and LSTM," in 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), 2021.
[13] R. R. Chowdhury, M. Shahadat Hossain, S. Hossain, and K. Andersson, "Analyzing Sentiment of Movie Reviews in Bangla by Applying Machine Learning Techniques," in 2019 International Conference on Bangla Speech and Language Processing (ICBSLP), 2019.
[14] M. I. Hossain Junaid, F. Hossain, and R. M. Rahman, "Bangla Hate Speech Detection in Videos Using Machine Learning," in 2021 IEEE 12th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 2021, pp. 0347-0351, doi: 10.1109/UEMCON53757.2021.9666550.

