Quantum-Enhanced Support Vector Machine for Sentiment Classification
ABSTRACT Quantum computers offer potential computational advantages such as speeding up complex computations, parallelism through superposition, and the ability to handle large data sets. At the same time, the field of natural language processing (NLP) is rapidly attracting researchers and engineers who build ever larger models. The use of quantum technology in NLP tasks, especially sentiment classification, therefore has clear potential for development. In this research, we investigate the best technique to represent sentiment sentences so that sentiment can be analyzed using the Quantum-Enhanced Support Vector Machine (QE-SVM) algorithm. Investigations were carried out using circuit parameter optimization methods and data transformation. The pipeline of the proposed method consists of sentence-to-circuit conversion, circuit parameter training, state vector formation, and finally the training and testing processes. We obtained the best classification results, an accuracy of 93.33%, using the SPSA optimization method and PCA-transformed data. These results also outperform the baseline SVM method.
QE-SVM has previously been applied to structured (numeric) datasets, using feature map transformations and adjustments to the rotation factor [7].

The nature of the complex vectors that QE-SVM can handle aligns with the nature of subjective sentences in sentiment classification data. Therefore, using QE-SVM in the sentiment classification task is a potential research area. As one of the tasks in NLP, handling sentiment classification in a quantum environment is carried out using the Quantum NLP (QNLP) methodology [8]. This methodology uses a compositional language structure in the form of grammar and semantics constructed in a quantum way.

The main problem of this research is to find the best data representation for quantum NLP to represent the sentiment in a sentence, a task called sentiment classification. Moreover, several optimizers such as SPSA and ANN are explored in order to improve the classification performance. Finally, we also extend sentiment classification from the classical SVM method to the Quantum-Enhanced SVM (QE-SVM) method.

Our previous work [9] formulated a quantum representation for the sentiment classification task. We used a state vector representation and particular negation handling with the Not-box operation. However, the dimensions of the vector representation are large, and the prediction results left room for improvement (81.67% accuracy). The challenge that needs to be solved is building a proper quantum representation of subjective sentences that can be computed quickly and precisely using the QE-SVM learning algorithm.

In this paper, the focus is on exploring how to use quantum natural language processing (QNLP) to represent the sentiment of a sentence. The aim is to come up with an effective and efficient quantum representation of subjective sentences that can be used for quantum sentiment classification. We modified an existing experimental QNLP pipeline (described in [10]) to better suit our needs, particularly during the optimization stage. The methodology involves converting sentences into circuits, training circuit parameters, and reading state vectors, followed by techniques for transforming the state vector data to work with the QE-SVM classifier.

In summary, this paper has two main contributions. The first is developing an effective and efficient quantum representation of subjective sentences. We suggest using the X-gate quantum operation to represent negative sentences in a quantum circuit. In addition, we propose two alternative data transformation methods, double angles and PCA, to make the data compatible with the QE-SVM classifier. The second contribution is being the first to apply QE-SVM to natural language processing tasks, specifically sentiment classification. We demonstrate that using QE-SVM with the appropriate representation leads to better predictive performance than SVM. Moreover, compared to the previous work [9], the proposed method raises the accuracy to 93.33% using SPSA circuit parameter training and PCA with n = 14 data transformation. This work leads to a potential basis for developing the QSVM module/library on Qiskit so that this method is easily adopted by many parties, and opens a potential path for using a quantum kernel in NLP using quantum computing.

The remainder of this paper is organized as follows. Section II covers sentiment analysis in quantum computing, the optimization method, the quantum kernel, and SVM in brief. Section III explains our proposed QE-SVM method. Section IV comprises the findings of our experiments as well as some discussions. The final section brings the paper to some conclusions.

II. RELATED WORKS
1) SENTIMENT ANALYSIS AND QUANTUM NLP
Sentiment analysis, one of the most developed fields of NLP, has been widely researched because of its significant use. One potential approach is to use Quantum Machine Learning. Several methods that try to imitate quantum mechanisms include [11], which examined sentiment analysis on Twitter data using a quantum-inspired representation model. This method uses quantum mechanisms to model semantic and sentiment information through a series of projectors in a probabilistic space. The method was later developed into a quantum-like multimodal network (QMN), which combines quantum theory with long short-term memory (LSTM) networks for multimodal sentiment analysis on conversations [12]. Quantum algorithms in Variational Quantum Classifiers (VQC) can also be used to solve sentiment analysis problems; the work in [13] carried this out using EfficientSU2 and RealAmplitudes, built-in libraries from the Qiskit quantum computer simulator. Although similar, this method outperforms the classification results of classical ML models.

One of the critical steps in the QNLP methodology is circuit parameter training after changing sentences into circuits. This learning process is carried out using a learning/optimization algorithm. One widely used method is the Simultaneous Perturbation Stochastic Approximation (SPSA) [14]. An essential feature of SPSA is its gradient approximation, which requires only two measurements of the objective function regardless of the dimension of the optimization problem. This feature significantly reduces optimization costs, especially in problems with many variables to optimize. Moreover, this method often outperforms other optimization methods, especially in variational quantum algorithms [15].
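To make this two-measurement property concrete, the following minimal sketch (our illustration, not the authors' implementation) performs one SPSA update step; the toy objective f, the perturbation size c, and the learning rate are placeholder choices.

```python
import numpy as np

def spsa_gradient(f, theta, c, rng):
    """Two-sided SPSA estimate: only two evaluations of f,
    regardless of the dimension of theta."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    y_plus, y_minus = f(theta + c * delta), f(theta - c * delta)
    return (y_plus - y_minus) / (2.0 * c * delta)

# One update step on a toy objective (stand-in for the circuit loss)
rng = np.random.default_rng(0)
f = lambda t: float(np.sum(t ** 2))
theta = np.array([1.0, -2.0, 0.5])
theta = theta - 0.1 * spsa_gradient(f, theta, c=0.1, rng=rng)
print(theta)
```

Because the two evaluations perturb all parameters simultaneously, the cost per step stays constant as the number of circuit parameters grows.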
2) QUANTUM KERNEL AND OPTIMIZATION
To carry out the classification process, kernel methods are widely used in machine learning, with the Support Vector Machine (SVM) as the most well-known traditional learning method [16]. Combining the advantages of SVM with quantum computing, the authors of [6] proposed the concept of a quantum variational classifier that is run using a quantum variational circuit. They also proposed a quantum kernel estimator, which optimizes the SVM classifier by estimating the kernel function. The latter method forms the basis of the QE-SVM approach used in this work.
The quantum kernel method utilizes a quantum feature space. Recalling that quantum states exist in Hilbert space [17], one can calculate the inner product between two quantum states. Theoretically, this can be achieved directly on a quantum circuit; the inner product between the state $\psi_1$ represented by a set of unitaries $U_1$ and the state $\psi_2$ represented by a set of unitaries $U_2$ can be calculated by applying the unitaries $U_2^{\dagger} U_1$ and observing the resulting state [18]. Alternatively, one can measure each state $\psi_1$ and $\psi_2$ and calculate the inner product classically. In both cases, the value of the inner product is used for further interpretation. Most commonly in machine learning, it is used to find the support vectors of a support vector classifier [6]. The motivation for using quantum kernels is that quantum feature maps are more difficult to calculate classically while potentially partitioning the data/input space in a more distinguishable manner [7].
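As a hedged illustration of the classical route, the sketch below builds two toy circuits standing in for $U_1$ and $U_2$ and computes their overlap with Qiskit's Statevector utilities; the circuits themselves are arbitrary stand-ins, not the feature maps used in the paper.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Toy circuits standing in for the unitaries U1 and U2
qc1 = QuantumCircuit(2)
qc1.ry(0.4, 0)
qc1.cx(0, 1)
qc2 = QuantumCircuit(2)
qc2.ry(1.1, 0)
qc2.cx(0, 1)

psi1 = Statevector.from_instruction(qc1)  # |psi1> = U1|00>
psi2 = Statevector.from_instruction(qc2)  # |psi2> = U2|00>

# Classical inner product <psi2|psi1>; its squared magnitude
# is the kernel entry consumed by a support vector classifier
overlap = psi2.inner(psi1)
print(abs(overlap) ** 2)
```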
The previously developed QSVM concept was then carried to the application level by [7], under the Noisy Intermediate-Scale Quantum (NISQ) assumption. In their work, the authors use quantum states built by quantum feature maps from structured data. Subsequently, the vectors are handled by the quantum kernel to carry out the classification. The datasets used are three standard UCI datasets, namely wine, breast cancer, and handwritten digits, as well as two artificial numeric datasets.
One of the essential stages before the quantum kernel is the data transformation that produces feature maps. This process can be done using special functions or rules, such as Principal Component Analysis (PCA) or double angles, or automatically using a learning algorithm. The last category was developed by [19], using a genetic algorithm to minimize circuit parameters. This approach was tested on structured data, namely the Parkinson's dataset, IoT irrigation, and drug classification.

III. PROPOSED METHOD
In this work, we design a sentiment classifier based on a quantum feature map. Figure 1 illustrates the fundamental difference between classical feature maps and quantum feature maps. Basically, the classical feature space is formed by classical values, where the data points are represented by their original features before any kernel is applied. The quantum feature space, on the other hand, is formed by quantum states. Thus, the quantum feature map is formed by quantum circuits, as depicted in Figure 1.

To classify sentiment using a quantum representation, we use an experimental QNLP pipeline similar to the one used in [10]. The pipeline involves converting sentences into circuits, optimizing them, and using the resulting circuits to classify sentiment using the QE-SVM method. We made some modifications to the pipeline, particularly during the optimization stage. The general pipeline stages used in this study are: (1) generating circuits from sentences, (2) training circuit parameters, (3) extracting state vectors from the circuits as sentence embeddings, and (4) using these embeddings to train the QE-SVM classifier and predict the sentiment of each sentence. This process is illustrated in Figure 2.

The sentiment classification task used in this study involved the restaurant sentiment dataset and required binary sentiment classification (positive or negative). In the circuit representation, each sentence type $s$ was mapped to one qubit. The conversion process from sentences to circuits was adapted from previous studies, e.g., [8], [20], and [21], with some modifications, including negation handling with the X-gate motivated by the findings of the Not-box experiment in our prior work [9]. The training process for circuit parameters was conducted using SPSA. Each stage is explained further below.
A. DATA TRANSFORMATION
First, the data is transformed from high-dimensional state vector data into a lower-dimensional form. This is done in order to train the QE-SVM in a reasonable time. For this experiment, the data is transformed into 14 columns. The value 14 is chosen because the original state vector data is obtained from reading a quantum circuit with 7 qubits, where each qubit state has a |0⟩ and a |1⟩ component (i.e., in superposition).

This paper investigates two data transformation methods: the double angles method, and PCA with 14 principal components.
1) DOUBLE ANGLES
For a state vector composed of n qubits with states $[\alpha_0, \beta_0], \ldots, [\alpha_n, \beta_n]$, each state can be described by two angles $\theta_1$ and $\theta_2$. Given that qubit states are normalized, $\alpha^2 + \beta^2 = 1$, we can calculate $\theta_1$ as

$$\tan(\theta_1) = \frac{\beta}{\alpha} \tag{1}$$

$$\theta_1 = \tan^{-1}\left(\frac{\beta}{\alpha}\right) \tag{2}$$

On the other hand, because $\alpha$ and $\beta$ are complex values, we can find the angle between them in complex space using the cosine rule. Therefore, we can calculate $\theta_2$ as

$$\alpha \cdot \beta = |\alpha||\beta| \cos(\theta_2) \tag{3}$$

$$\theta_2 = \cos^{-1}\left(\frac{\alpha \cdot \beta}{|\alpha||\beta|}\right) \tag{4}$$

These two angles are chosen because they describe the magnitude and the similarity of each component, respectively. Furthermore, this decorrelates a majority of the high-dimensional state vector data.
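A minimal sketch of this transformation is shown below. Because $\alpha$ and $\beta$ are complex, we assume magnitudes enter Eq. (2) and the real part of the complex dot product enters Eq. (3); the per-qubit $(\alpha, \beta)$ input format is also our assumption.

```python
import numpy as np

def double_angles(qubit_states):
    """Map each single-qubit state [alpha, beta] to (theta1, theta2).

    theta1 (Eq. 2) captures the relative magnitude of beta vs. alpha;
    theta2 (Eq. 4) captures the angle between alpha and beta in the
    complex plane via the cosine rule. Magnitudes in Eq. (2) and the
    real dot product Re(conj(alpha)*beta) in Eq. (3) are assumptions.
    """
    features = []
    for alpha, beta in qubit_states:
        theta1 = np.arctan2(abs(beta), abs(alpha))            # Eq. (2)
        dot = (np.conj(alpha) * beta).real                    # Eq. (3), left side
        theta2 = np.arccos(dot / (abs(alpha) * abs(beta)))    # Eq. (4)
        features.extend([theta1, theta2])
    return np.array(features)  # 2 angles per qubit -> 14 values for 7 qubits

# Example: two qubits in superposition
print(double_angles([(1 / np.sqrt(2), 1j / np.sqrt(2)), (0.8, 0.6)]))
```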
2) PCA (n=14)
Principal Component Analysis is used to obtain the principal components and project the input data onto lower dimensions. PCA takes the input data and projects it onto a set of orthogonal vectors, which describe a p-dimensional ellipsoid fitted onto the reference dataset. The coordinates are ordered such that the components have descending variance (the projection of the data with the greatest variance is known as the first principal component and lies on the first coordinate, and so on).

PCA takes a data matrix X with n records and p fields (assuming each column has been preprocessed such that its mean is zero) and transforms it by a set of l weight vectors, each of dimension p, onto a target vector space known as the principal component scores, such that the set of scores t of a data entry has the maximum possible variance of X. Note that the weight vectors w are normalized and that the cardinality l of the set of weight vectors is less than p, so that the resulting transformation of X yields data with reduced dimensionality as follows [26]:

$$w^{(k)} = (w_1, \ldots, w_p)^{(k)} \tag{5}$$

$$t_k^{(i)} = x^{(i)} \cdot w^{(k)} \tag{6}$$
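A sketch of this step with scikit-learn, using randomly generated stand-in data in place of the real state vectors (the 170/50 split mirrors the dataset sizes described later); note that sklearn's PCA centers each column internally, matching the zero-mean assumption above.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical state-vector features: 220 sentences x 128 amplitudes (7 qubits)
rng = np.random.default_rng(0)
X = rng.normal(size=(220, 128))

# Fit the 14 principal components on the training split only, then
# project both splits with the same weight vectors w^(k) (Eqs. 5-6)
pca = PCA(n_components=14)
X_train = pca.fit_transform(X[:170])   # scores t_k^(i) = x^(i) . w^(k)
X_test = pca.transform(X[170:])
print(X_train.shape, X_test.shape)     # (170, 14) (50, 14)
```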
B. FEATURE MAP SELECTION
The experiments presented in this paper focus on three Pauli feature maps: Pauli Y, Pauli YY, and Pauli Y YY. The decision is inspired by the work in [7], which uses the Pauli Y, Pauli YY, Pauli Y YY and Pauli Z, Pauli ZZ, Pauli Z ZZ feature maps. Preliminary experimentation showed that the Y and Z counterparts yielded the same results, which is explained by the SVM only reading the real values of the quantum kernel output. Therefore, for brevity, the methods listed will cover the Y counterparts of those three feature maps.

A feature map with Pauli Y rotation gates takes input data x and encodes it onto a quantum circuit by the following transformation. The general form can be written as [7]:

$$U_{\phi_Y}(x) = \exp\left(i \sum_{S} \phi_S(x) \prod_{j \in S} \sigma_Y^{(j)}\right) \tag{7}$$

The above gate encodes the transformation matrix $\sum_{S} \phi_S(x) \prod_{j \in S} \sigma_Y^{(j)}$ as a set of Pauli Y rotations with input $\phi_S(x)$, where S denotes the connectivity between a subset of qubits in the quantum circuit, and $\phi_S(x)$ is $x_0$ when only a single qubit is concerned and $\prod_{j \in S} (\pi - x_j)$ otherwise.

1) PAULI Y FEATURE MAP
The Pauli Y feature map is a simple feature map with a P gate between a π/2 X-rotation gate and its inverse. The result is a Y-rotation gate with angle x, which may be repeated multiple times. There is no entanglement in this feature map.

FIGURE 3. Pauli Y feature map circuit.

2) PAULI YY FEATURE MAP
The Pauli YY feature map is a second-order Pauli Y evolution circuit with Pauli Y and Pauli YY components. In the YY feature map, binary entanglement is introduced between all pairs of qubits in the circuit, with its input parameter corresponding to the index of the qubit pair permutation. As with the Pauli Y feature map, this Pauli YY circuit may be repeated multiple times.

3) PAULI Y YY FEATURE MAP
The Pauli Y YY feature map is a Pauli Y feature map followed by a Pauli YY feature map. This feature map starts without entanglement, then has linear entanglement introduced by the second-order Pauli Y evolution circuit component. The Pauli Y YY circuit may be repeated multiple times.
4) PAULI Y Y YY FEATURE MAP
Following the construction pattern of the Pauli Y YY feature map, the Pauli Y Y YY feature map is a Pauli Y feature map, followed by another Pauli Y feature map, followed by a Pauli YY feature map. That is, it prepends an additional Pauli Y encoding circuit to the Pauli Y YY feature map. The Pauli Y Y YY circuit may be repeated multiple times.
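The sketch below constructs candidate circuits with Qiskit's built-in PauliFeatureMap. How the paper's four names map onto the paulis argument is our assumption, not something stated by the authors.

```python
from qiskit.circuit.library import PauliFeatureMap

n_features = 14  # one qubit per transformed feature

# One plausible mapping of the paper's feature-map names onto Qiskit's
# PauliFeatureMap (the exact correspondence is an assumption on our part)
feature_maps = {
    "Pauli Y":      PauliFeatureMap(n_features, reps=1, paulis=["Y"]),
    "Pauli YY":     PauliFeatureMap(n_features, reps=1, paulis=["YY"],
                                    entanglement="linear"),
    "Pauli Y YY":   PauliFeatureMap(n_features, reps=1, paulis=["Y", "YY"],
                                    entanglement="linear"),
    "Pauli Y Y YY": PauliFeatureMap(n_features, reps=1, paulis=["Y", "Y", "YY"],
                                    entanglement="linear"),
}
for name, fm in feature_maps.items():
    print(name, "-> circuit depth", fm.decompose().depth())
```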
C. ROTATION FACTOR APPLICATION
To handle overfitting, a rotation factor is applied to the rotation gate parameter angles $\phi_S(x)$, such that the values are multiplied by a scaling factor $\alpha$, modifying the feature map transformation into the following equation [7]:

$$U_{\phi}(x) = \exp\left(i \sum_{S} \alpha\, \phi_S(x) \prod_{j \in S} \sigma_j\right), \quad \sigma_j \in \{X, Y, Z\} \tag{8}$$

The values chosen in this paper range from 0.5 to 2.0 with an increment of 0.1, as well as several other interesting values (0.75, 1.25, and 1.75).
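If the feature maps are built with Qiskit's PauliFeatureMap, the rotation factor of Eq. (8) plausibly corresponds to its alpha argument, which scales every encoded angle; the following is a sketch under that assumption.

```python
from qiskit.circuit.library import PauliFeatureMap

# Sweep a few of the rotation factors mentioned in the text; whether the
# paper applied the factor exactly as PauliFeatureMap's `alpha` does is
# our assumption.
for factor in (0.5, 0.75, 0.9, 1.25, 1.75, 2.0):
    fm = PauliFeatureMap(feature_dimension=14, reps=1,
                         paulis=["Y", "YY"], entanglement="linear",
                         alpha=factor)
    print(f"rotation factor {factor}: {fm.num_parameters} input parameters")
```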
1) QUANTUM KERNEL PREPARATION
This step prepares a Qiskit Quantum Instance from a Qiskit backend, then instantiates a Quantum Kernel with the chosen Pauli feature map. The Quantum Instance is a Qiskit object that contains a Qiskit Backend, as well as the configuration for circuit transpilation and execution. It is used to run the Quantum Kernel when called by the SVC during later steps. The Quantum Kernel is a Qiskit object that packages a quantum kernel function: it transforms two sets of n-dimensional data, say x and y, onto higher-dimensional data (typically of dimension $2^n$) through a quantum feature map that takes x as its input parameters, and calculates the dot product between them. The dot-product result in matrix form can then be used in common machine-learning techniques:

$$K(x, y) = \langle f(x), f(y) \rangle \tag{9}$$
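A sketch of this preparation step, written against the qiskit-machine-learning API of roughly the paper's era (QuantumInstance and QuantumKernel; newer releases replace these with FidelityQuantumKernel). The 16 shots match the experimental setting reported below; the demo data is a random stand-in.

```python
import numpy as np
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit.circuit.library import PauliFeatureMap
from qiskit_machine_learning.kernels import QuantumKernel

# Feature map and backend configuration bundled into a Quantum Instance
feature_map = PauliFeatureMap(feature_dimension=14, reps=1,
                              paulis=["Y", "YY"], entanglement="linear")
quantum_instance = QuantumInstance(Aer.get_backend("qasm_simulator"),
                                   shots=16)  # 16 shots, as in the experiments
quantum_kernel = QuantumKernel(feature_map=feature_map,
                               quantum_instance=quantum_instance)

# K(x, y) = <f(x), f(y)> (Eq. 9), evaluated as a Gram matrix
X_demo = np.random.default_rng(0).uniform(0, np.pi, size=(4, 14))
gram = quantum_kernel.evaluate(x_vec=X_demo)  # 4 x 4 kernel matrix
print(gram.shape)
```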
Algorithm 3 QE-SVM
Input: DataFrame trainData, DataFrame testData, arrayInteger trainLabels, arrayInteger testLabels, integer nQubits, dictionary trainingConfig.
Output: arrayInteger trainPredictions, arrayInteger testPredictions, float trainAccuracy, float testAccuracy.
begin
1) Apply rotation factor by ApplyRotationFactor(trainData, testData)
2) Create feature map by FeatureMap(Pauli, nQubits, Repetitions, Entanglement)
3) Run quantum simulation and kernel by QuantumSimulationAndKernel(nShots, FeatureMap), stored in adhocKernel
4) Initialize SVM from the QESVM: qesvmClassifier(adhocKernel, scaledTrainData)
5) Call training prediction by qesvmClassifier.predict(scaledTrainData) → trainPredictions
6) Call testing prediction by qesvmClassifier.predict(scaledTestData) → testPredictions
return (trainAccuracy, testAccuracy)
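A runnable Python sketch of the same steps, assuming the quantum_kernel object from the previous snippet and already-transformed data; the helper name qesvm and its signature follow Algorithm 3 loosely and are ours.

```python
import numpy as np
from sklearn.svm import SVC

def qesvm(quantum_kernel, train_data, train_labels, test_data, test_labels):
    """QE-SVM sketch following Algorithm 3: quantum kernel + classical SVC."""
    # Steps 3-4: evaluate the quantum kernel on the training data and
    # fit an SVM on the precomputed Gram matrix
    k_train = quantum_kernel.evaluate(x_vec=train_data)
    clf = SVC(kernel="precomputed").fit(k_train, train_labels)
    # Step 5: training prediction
    train_pred = clf.predict(k_train)
    # Step 6: testing prediction needs the (test x train) kernel block
    k_test = quantum_kernel.evaluate(x_vec=test_data, y_vec=train_data)
    test_pred = clf.predict(k_test)
    return (float(np.mean(train_pred == train_labels)),
            float(np.mean(test_pred == test_labels)))
```

With a precomputed kernel, the SVC never sees the raw features: the train kernel is an (n_train × n_train) Gram matrix, and prediction uses the (n_test × n_train) block against the training set.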
resentation are shown (Table 1). Based on the experimental
results, it can be seen that the combination of the SPSA opti-
mization method with the PCA transformation method, the
2) To study the impact of ANN architecture for circuit Pauli Y Y YY feature map, and a rotation factor of 0.9 gives
parameter training on the prediction result the best accuracy results of 93.33%.
3) To study the effect of rotation factor on the prediction
result B. ANN LAYER EXPERIMENTS
4) To perform prediction comparison with SVM baseline The ANN layer experiment was carried out to determine the
First, the experimental hardware used is a Linux OS with effect of the number of ANN layers on the sentiment clas-
8 vCPUs and 52 GB of RAM with VM type n1-highmem-8. sification results. The experimental parameters used are the
The hardware is the same as our preliminary work in [9]. The data transformation method, feature map, and rotation factor
dataset used in the experiment is a collection of simple sub- for ANN architectures: ANN 1 and ANN 2. The number of
jective sentences in the restaurant domain. These sentences layers of the two architectures is 3 and 5, respectively. Based
are generated from 29 vocabularies, consisting of positive and on the experimental results in Table 3 and Figure 6, ANN 1
negative sentences, with each sentence having a length of 4-5 gives better accuracy than ANN 2. The best configuration is
TABLE 6. Comparison of QE-SVM prediction results with three variations of the circuit parameter training method.

However, the combination of ANN with PCA or double angles is still not optimal in handling positive and negative sentences.

In the second evaluation (Table 5), we compared sentences that could be handled by our method (QE-SVM) with those handled by the baseline method (SVM). The following are the three models used and their parameter configurations:
1) Baseline 1: circuit parameter training method ANN 1 - representation double angles - classifier SVM
2) Baseline 2: circuit parameter training method SPSA - representation PCA - classifier SVM
3) Our method: circuit parameter training method SPSA - representation PCA - classifier QE-SVM

TABLE 7. Comparison of the predicted results of our method and the baselines.
We found some cases when comparing the three models (Table 7).
1) The prediction is correct in our method but wrong in baseline 1 or baseline 2 (cases a1 and a2). Observation of the prediction results shows that baseline 1 more often errs with a false positive than with a false negative. From the test set prediction results, baseline 1 has a minimal tendency to predict positively compared to our method. As additional information, the false positive rate of baseline 1 is 34.62%, while our method's false positive rate is 10.71% (baseline 1 accuracy is 80%, and our method's accuracy is 93.33%). In the case of baseline 2, there is no particular tendency to predict negative or positive, with 31 predicted negatives and 29 predicted positives. Therefore, it cannot be concluded that the model tends to predict positively or negatively, and prediction errors occur due to shortcomings of the model in other aspects. By transforming the state vector, QE-SVM provides more accurate prediction results for both positive and negative sentences. Moreover, classical SVM is unsuitable for transformed data (double angles/PCA) and performs better before transformation, because the untransformed data is more descriptive and no information is lost. Classical SVM can handle the high-dimensional state-vector data because it does not need to simulate a quantum kernel; the dot product between two high-dimensional vectors is only O(n). However, its overall performance is still below QE-SVM.
2) The prediction is wrong in our method but correct in baseline 1 or baseline 2 (cases b1 and b2). The case where our method is wrong and baseline 1 or baseline 2 is correct occurs when the label is negative and our best model incorrectly predicts it as positive. It is conjectured that baseline 1 happens to be accurate, given its slight tendency to predict negatively. In addition, our method has a wrong prediction on a positive sentence (the 9th sentence). This is presumably due to the similarity of the sentence with one of the sentences in the train set.
Lastly, the comparison among several QE-SVM configurations in terms of accuracy with respect to the epoch is depicted in Figure 9. As shown in the figure, at the higher epochs the combination of PCA and SPSA yields the highest accuracy for both training and testing. This is due to the state vector information being well represented and optimized by PCA and SPSA. On the other hand, double angles data (DA) may eliminate some information. For comparison, PCA (n=14) approximates the distribution of the 128-dimensional data with 14 values, whereas the double angles represent 7-dimensional data with only 2 values (the angle and the amplitude).
VI. CONCLUSION
This paper described a study on the implementation of QE-SVM on an NLP task: sentiment classification. A subjective sentence, which contains a sentiment value, was analyzed by transforming it into a quantum representation that can be used as input to the quantum kernel. The experimental results proved that the combination of the sentence-to-circuit steps, the SPSA optimization method, and the PCA data transformation method on QE-SVM provided the best sentiment classification results of 93.33% accuracy, an increase of 16.6% compared to the baseline SVM. This approach worked for both positive and negative subjective sentences. Our work leads to a potential path for using quantum kernels for NLP using quantum computing.

In future research, we suggest further development by using Variational Quantum Algorithms and by implementing them on a quantum computer.

ACKNOWLEDGMENT
An earlier version of this paper was presented at the 2022 IEEE International Conference on Quantum Computing and Engineering (QCE), Broomfield, CO, USA [DOI: 10.1109/QCE53715.2022.00025].
REFERENCES
[1] B. Liu, Sentiment Analysis: Mining Opinions, Sentiments, and Emotions. Cambridge, U.K.: Cambridge Univ. Press, 2020.
[2] V. Vyas and V. Uma, "Approaches to sentiment analysis on product reviews," in Sentiment Analysis and Knowledge Discovery in Contemporary Business. Hershey, PA, USA: IGI Global, 2019, pp. 15–30.
[3] S. Unankard, X. Li, M. Sharaf, J. Zhong, and X. Li, "Predicting elections from social networks based on sub-event detection and sentiment analysis," in Proc. 15th Int. Conf. Web Inf. Syst. Eng. (WISE), Thessaloniki, Greece. Cham, Switzerland: Springer, Oct. 2014, pp. 1–16.
[4] W. Duan, Q. Cao, Y. Yu, and S. Levy, "Mining online user-generated content: Using sentiment analysis technique to study hotel service quality," in Proc. 46th Hawaii Int. Conf. Syst. Sci., Jan. 2013, pp. 3119–3128.
[5] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, "Quantum machine learning," Nature, vol. 549, no. 7671, pp. 195–202, 2017.
[6] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, "Supervised learning with quantum-enhanced feature spaces," Nature, vol. 567, no. 7747, pp. 209–212, Mar. 2019.
[7] J.-E. Park, B. Quanz, S. Wood, H. Higgins, and R. Harishankar, "Practical application improvement to quantum SVM: Theory to practice," 2020, arXiv:2012.07725.
[8] W. Zeng and B. Coecke, "Quantum algorithms for compositional natural language processing," 2016, arXiv:1608.01406.
[9] F. Z. Ruskanda, M. R. Abiwardani, M. A. Al Bari, K. A. Bagaspati, R. Mulyawan, I. Syafalni, and H. T. Larasati, "Quantum representation for sentiment classification," in Proc. IEEE Int. Conf. Quantum Comput. Eng. (QCE), Sep. 2022, pp. 67–78.
[10] D. Kartsaklis, I. Fan, R. Yeung, A. Pearson, R. Lorenz, A. Toumi, G. de Felice, K. Meichanetzidis, S. Clark, and B. Coecke, "Lambeq: An efficient high-level Python library for quantum NLP," 2021, arXiv:2110.04236.
[11] Y. Zhang, D. Song, P. Zhang, X. Li, and P. Wang, "A quantum-inspired sentiment representation model for Twitter sentiment analysis," Appl. Intell., vol. 49, no. 8, pp. 3093–3108, 2019.
[12] Y. Zhang, D. Song, X. Li, P. Zhang, P. Wang, L. Rong, G. Yu, and B. Wang, "A quantum-like multimodal network framework for modeling interaction dynamics in multiparty conversational sentiment analysis," Inf. Fusion, vol. 62, pp. 14–31, Oct. 2020.
[13] N. Joshi, P. Katyayan, and S. A. Ahmed, "Comparing classical ML models with quantum ML models with parametrized circuits for sentiment analysis task," J. Phys., Conf. Ser., vol. 1854, no. 1, Apr. 2021, Art. no. 012032.
[14] A. Liu, X. Deng, Z. Tong, Y. Luo, and B. Liu, "A simultaneous perturbation stochastic approximation enhanced teaching-learning based optimization," in Proc. IEEE Congr. Evol. Comput. (CEC), Jul. 2016, pp. 3186–3192.
[15] X. Bonet-Monroig, H. Wang, D. Vermetten, B. Senjean, C. Moussa, T. Bäck, V. Dunjko, and T. E. O'Brien, "Performance comparison of optimization methods on variational quantum algorithms," 2021, arXiv:2111.13454.
[16] C. Cortes and V. Vapnik, "Support-vector networks," Mach. Learn., vol. 20, no. 3, pp. 273–297, 1995.
[17] M. Schuld and N. Killoran, "Quantum machine learning in feature Hilbert spaces," Phys. Rev. Lett., vol. 122, no. 4, Feb. 2019, Art. no. 040504.
[18] M. Schuld, "Supervised quantum machine learning models are kernel methods," 2021, arXiv:2101.11020.
[19] S. Altares-López, A. Ribeiro, and J. J. García-Ripoll, "Automatic design of quantum feature maps," Quantum Sci. Technol., vol. 6, no. 4, Oct. 2021, Art. no. 045015.
[20] K. Meichanetzidis, A. Toumi, G. de Felice, and B. Coecke, "Grammar-aware sentence classification on quantum computers," 2020, arXiv:2012.03756.
[21] R. Lorenz, A. Pearson, K. Meichanetzidis, D. Kartsaklis, and B. Coecke, "QNLP in practice: Running compositional models of meaning on a quantum computer," 2021, arXiv:2102.12846.
[22] R. Yeung and D. Kartsaklis, "A CCG-based version of the DisCoCat framework," 2021, arXiv:2105.07720.
[23] M. F. Porter, "An algorithm for suffix stripping," Program, vol. 40, no. 3, pp. 211–218, Jul. 2006.
[24] B. Coecke, M. Sadrzadeh, and S. Clark, "Mathematical foundations for a compositional distributional model of meaning," 2010, arXiv:1003.4394.
[25] J. C. Spall, "An overview of the simultaneous perturbation method for efficient optimization," Johns Hopkins APL Tech. Dig., vol. 19, no. 4, pp. 482–492, 1998.
[26] I. T. Jolliffe, Principal Component Analysis. Aberdeen, U.K.: Univ. of Aberdeen, 2002.

FARISKA ZAKHRALATIVA RUSKANDA (Member, IEEE) received the B.S., M.S., and Ph.D. degrees from the School of Electrical Engineering and Informatics, Bandung Institute of Technology, Bandung, Indonesia. She is currently an Assistant Professor of natural language processing with the Informatics Research Group, Bandung Institute of Technology.