
Article (not peer-reviewed version)

Implementation and Performance Evaluation of Quantum Machine Learning Algorithms for Binary Classification

Surajudeen Shina Ajibosin and Deniz Cetinkaya *

Posted Date: 6 November 2024

doi: 10.20944/preprints202411.0364.v1

Keywords: quantum machine learning; binary classification; quantum algorithms


Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Article

Implementation and Performance Evaluation of Quantum Machine Learning Algorithms for Binary Classification

Surajudeen Shina Ajibosin and Deniz Cetinkaya *
Department of Computing & Informatics, Bournemouth University, UK; [email protected]
* Correspondence: [email protected]; +44-1202-961-241

Abstract: Quantum Machine Learning (QML) merges principles of Quantum Computing (QC) and
Machine Learning (ML), offering improved efficiency and potential quantum advantage in data-
driven tasks and solving complex problems. In binary classification, where the goal is to assign data
into one of two categories, QML uses quantum algorithms to process large datasets efficiently.
Quantum algorithms like Quantum Support Vector Machines (QSVM) and Quantum Neural
Networks (QNN) exploit quantum parallelism and entanglement to enhance performance over
classical methods. This research explores the use of QML algorithms for binary classification and
compares their performance with classical ML methods. This study focuses on two common QML
algorithms, Quantum Support Vector Classifier (QSVC) and QNN. We used the Qiskit software framework and conducted the experiments with three different datasets. Data preprocessing included dimensionality
reduction using Principal Component Analysis (PCA) and standardization using scalers. The results
showed that quantum algorithms demonstrated competitive performance against their classical
counterparts in terms of accuracy, while QSVC performed better than QNN. These findings suggest
that QML holds potential for improving computational efficiency in binary classification tasks. This
opens the way for more efficient and scalable solutions in complex classification challenges and
shows the complementary role of quantum computing.

Keywords: quantum machine learning; binary classification; quantum algorithms

1. Introduction
Quantum Computing (QC) exploits the principles of quantum mechanics to process information
to solve problems that are too complex for classical computers. Unlike classical bits, qubits possess
the unique ability to represent numerous possible combinations of 0 and 1 at the same time. This
simultaneous existence in multiple states is a phenomenon referred to as superposition. This property
enables the processing of information in a parallel and exponentially expanded manner compared to
classical computers. Although still in its early stages, quantum computing holds great promise for
solving problems that are currently infeasible with classical computers [1].
Machine Learning (ML) has been significantly advancing human capabilities and contributing
to societal progress across various fields by automating complex tasks, improving decision-making
processes, and providing personalized experiences [2]. ML uses algorithms and statistical models to
analyze and interpret complex data, assisting in planning, decision-support, predictive analysis,
intelligent automation and in many other activities across multiple domains including healthcare,
finance, education, transportation, defense, etc. [3–5].
ML is a rapidly evolving and active field that is continuously being expanded with new models
and approaches [6]. Training and deploying ML models can be computationally expensive and time-
consuming depending on the volume of data and the complexity of the methods. Research efforts to
address this challenge include improving ML algorithms and architectures to be more efficient and
scalable, and using hardware advancements such as Graphics Processing Units (GPUs) and Tensor

Processing Units (TPUs), as well as distributed machine learning, to accelerate model training and
inference [7].
QC, a multifaceted field based on quantum theory with various potential applications in software engineering, offers an alternative paradigm for reducing the time required to solve complex problems with improved performance [8,9]. Quantum Machine Learning (QML) brings the power of
ML and QC to solve challenges in various domains [10–12]. It uses principles of quantum mechanics
to improve computational efficiency and the performance metrics of the available ML algorithms. As
QC technology evolves, the synergy between quantum algorithms and ML is expected to drive
innovative solutions to previously unsolved computational problems.
QML offers a potential solution by utilizing quantum mechanical properties, such as superposition and entanglement, to process and analyze data more efficiently. However, the practical implementation of quantum algorithms and models is a relatively new research field, so many application areas still remain underexplored. In this study, we focus on the binary classification
problem in machine learning. Our main research question was about understanding if QML
algorithms improve the performance of trained ML models by capturing more complex patterns or
correlations in the datasets. We implemented a set of commonly used classical and quantum machine
learning algorithms for binary classification by using three different datasets and compared the
performances of the two computing paradigms.
This section presented an introduction to the problem domain and an overview of the study. The remainder of this manuscript is organized as follows: Section 2 presents background information and a review of the current literature regarding QC and QML. Section 3 explains the materials and
methods adopted for building and training the models for this study. Section 4 describes more details
about the experiments and presents the results. Section 5 includes the discussion and conclusions as
well as suggestions for future work.

2. Background Information and Literature Review


In this section, fundamental concepts and principles of QC and QML will be briefly explained
with a review of the literature for related work. Concepts such as superposition, entanglement and
quantum gates will be described. Related work about QML and specifically using quantum classifiers
for binary classification is presented.

2.1. Quantum Computation and Quantum Information


Unlike classical computation, in which the smallest unit of information is represented in bits (0s and 1s), quantum computation uses quantum bits, or qubits [13]. Quantum
computers are not simply faster or better versions of today’s classical computers, but instead they
represent a fundamentally new paradigm for processing information. While classical bits can take
the value 0 or 1, qubits possess the unique ability to represent and store various possible combinations
of 0 and 1 at the same time [14]. This ability to simultaneously be in multiple states is called
superposition. This property enables the processing of information in a parallel and exponentially
expanded manner compared to classical computers.
A quantum state is any possible state of a quantum mechanical system or quantum hardware.
There are numerous examples of quantum mechanical two-level systems in nature that potentially
could serve as qubits. For example, the electronic states of an ion or the electron spin of an atom
implanted in silicon. If the quantum system is based on a multi-level computational unit instead of the conventional two-level qubit, then each unit is called a qudit. Compared to qubits, qudits provide a
larger state space that can reduce the circuit complexity and enhance the algorithm efficiency [15].
In quantum computing, we can generate pairs of qubits that are entangled, which means that changing the state of one of the qubits will instantly change the state of the other one, even if they are separated by very long distances. Entanglement has two very special properties: it is inherently private and it allows maximal coordination.
Quantum computers are built using various hardware technologies; the main types are superconducting circuits, photonic networks, trapped ions, quantum dots, etc. Each type has its
advantages and disadvantages, such as greater entanglement or longer coherence times. Quantum
supremacy is the goal for a quantum computer that could perform calculations that are not possible
with classical computers or beyond the reach of even the most powerful supercomputer. In the future,
quantum computers may not rely on a single hardware technology, but instead could be based on
combining different technologies for greater effectiveness [16]. Table 1 presents a comparison of
classical vs. quantum computing according to various features.

Table 1. Classical vs. Quantum Computing.

Feature             | Quantum                                                  | Classical
Theory              | Quantum mechanics                                        | Classical physics
Computation         | Probabilistic                                            | Deterministic
Operations          | Linear algebra operations                                | Boolean algebra operations
Information storage | Qubits, qudits                                           | Bits
System state        | Continuous possible states in superposition              | Discrete number of possible states
Technology          | Superconducting loops, trapped ions, quantum dots, etc.  | Transistors
Applications        | Complex problems, optimisation, simulation               | General purpose
Error rate          | High                                                     | Low
Environment         | Ultra cold                                               | Room temperature
Computing power     | Exponential growth                                       | Linear growth
Processing          | QPU                                                      | CPU

A universal fault-tolerant quantum computer that can efficiently solve complex tasks such as integer factorization or unstructured database search requires millions of qubits with optimized coherence times and error rates [18]. Practical realization of such devices could take a long time; however, Noisy Intermediate-Scale Quantum (NISQ) computation has been achieved and near-term devices are presently available for use. For example, IBM's Quantum Lab, Google Quantum AI, Microsoft's Azure Quantum and Amazon's Braket quantum services offer cloud-based solutions for implementing quantum algorithms and are being used for real-world applications [19]. There are other companies, such as D-Wave, Rigetti Computing, IonQ, Intel, Quantinuum, Xanadu, etc., which are developing quantum computers and technologies as well [20]. These cumulative efforts will likely propel QC towards the Intermediate-Scale Quantum era [21], and perhaps the Fault-Tolerant Quantum era [22].

2.2. QC Implementation Models


We use Dirac notation to represent the quantum states as it is widely used in quantum
mechanics. A quantum state of a system can be represented by a column vector whose components
are probability amplitudes for different states in which the system might be found when measured,
i.e. in correspondence with the classical states of that system. The probability amplitudes are complex
numbers, and the sum of the absolute values squared of the probability amplitudes is equal to 1 [14].
We assume that a qubit represents an abstraction of the fundamental unit of information without regard to its physical realization. It might be a single physical qubit or a logical qubit that consists of multiple physical qubits (e.g., 10-100). Qubits can take the value |0⟩ or |1⟩, or be in a state other than |0⟩ and |1⟩. A particular quantum state |𝜓⟩ can be represented as follows:
|𝜓⟩ = 𝛼|0⟩ + 𝛽|1⟩ (1)

where 𝛼, 𝛽 ∈ ℂ and |𝛼|² + |𝛽|² = 1 (2)
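As a simple numerical illustration of Equations (1) and (2), the following sketch (with arbitrary example amplitudes, not values taken from the study) represents a qubit state as a two-component complex vector and checks the normalization condition:

```python
# Minimal numerical illustration of Equations (1)-(2); the amplitudes below
# are arbitrary example values, not taken from the paper.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)     # example probability amplitudes
psi = np.array([alpha, beta])                     # |psi> = alpha|0> + beta|1>

p0, p1 = np.abs(alpha) ** 2, np.abs(beta) ** 2    # measurement probabilities for |0> and |1>
print(p0, p1)                                     # 0.5 0.5
print(np.isclose(np.linalg.norm(psi), 1.0))       # normalization: |alpha|^2 + |beta|^2 = 1
```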


In the literature, there are various models to represent and examine quantum systems; the most common ones, which have been successfully implemented, are the quantum gate-circuit model and adiabatic quantum computing [23].
Quantum gates are similar to classical gates, and quantum circuits can be designed by using existing sets of quantum gates that operate on a constant number of qubits [24]. This approach is commonly used and universal for QC, as gates are linked to each other like in classical circuits and each gate performs a unitary operation [14].
Symbols are often used to denote the gates during the design of quantum circuits. For example, the quantum analogue of the classical NOT gate is the X-gate, also known as the Pauli-X gate. Note that not all classical gates have a direct quantum analogue. Multi-qubit gates act on two or more qubits
simultaneously and enable the creation and manipulation of quantum states. Simulations can be
executed by using single or two qubit gates on a universal quantum computer.
A qubit state can be illustrated with the Bloch sphere [14]. |𝜓⟩ points from the origin to a point on the surface of the unit sphere, and its direction is specified by the polar angle θ and the azimuthal angle φ. Commonly used single qubit gates are the Pauli-X, -Y, and -Z gates, which correspond to rotations by 𝜋 radians about the x, y, and z axes of the Bloch sphere respectively, and the Hadamard gate, which is a 𝜋 rotation about the axis halfway between x and z and has the effect of putting a basis state into superposition. Once measured, the qubit will be in one of its computational basis states.
The most common two-qubit gate is the quantum controlled-NOT, or CNOT, gate, which flips the second qubit (the target qubit) if and only if the first qubit (the control qubit) is |1⟩. All unitary circuits can be decomposed into single qubit gates and CNOT gates. Because the two-qubit CNOT gate takes much more time to execute on real hardware than single qubit gates, circuit cost or complexity can be measured by the number of CNOT gates. There are also three-qubit gates, such as the Toffoli gate, also known as the CCNOT gate.
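As an illustrative sketch (not part of the original implementation, assuming Qiskit is installed), the following snippet combines the Hadamard and CNOT gates described above to prepare a two-qubit Bell state, showing superposition and entanglement in a few lines:

```python
# Illustrative Qiskit sketch: a Hadamard puts qubit 0 into superposition and a
# CNOT entangles it with qubit 1, producing the Bell state (|00> + |11>)/sqrt(2).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # Hadamard: |0> -> (|0> + |1>)/sqrt(2)
qc.cx(0, 1)   # CNOT: flips qubit 1 when qubit 0 is |1>

state = Statevector.from_instruction(qc)
print(state.probabilities())   # [0.5, 0.0, 0.0, 0.5] over |00>, |01>, |10>, |11>
```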
Adiabatic quantum computing is an alternative universal approach, which in terms of computational complexity is polynomially equivalent to gate-based quantum computing [25]. It is founded upon the quantum adiabatic theorem, which describes the evolution of the ground state of a quantum system as follows:

𝐻(𝑡) = 𝑠(𝑡)𝐻₀ + (1 − 𝑠(𝑡))𝐻ₚ (3)

A time-varying Hamiltonian evolves from an initial Hamiltonian 𝐻₀ to a final Hamiltonian 𝐻ₚ that encodes the problem to be solved. 𝑠(𝑡) represents an adiabatic evolution path, a function that decreases from 1 to 0 over some elapsed time 𝑡. If the time-evolution of the Hamiltonian is sufficiently slow, the state is likely to remain in the ground state [26]. Adiabatic quantum computing, and more specifically quantum annealing, are mainly used for optimization problems, while gate-based quantum computing can be used for a broader range of problems.

2.3. Quantum Algorithms and Quantum Data Encoding Methods


For a two-qubit system, all possibilities could be encoded into the state of the two qubits via superposition of the four basis states |00⟩, |01⟩, |10⟩ and |11⟩. A superposition of these four states requires four probability amplitudes to fully describe the quantum state. Multiple qubits in a quantum computer can be conceptually grouped together in a quantum register. Each qubit has an index within this register and can be addressed in qubit operations by using this qubit index.
Quantum parallelism is based on the ability of a quantum register to exist in a superposition of basis states. It is the possibility of performing a large number of operations in parallel without the need for extra resources. For example, while three classical bits can represent 2ⁿ = 2³ = 8 distinct cases only one at a time, all these cases can be represented by 3 qubits in a single quantum state simultaneously in superposition.
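The following short sketch (an illustration added here, assuming Qiskit is available) makes this concrete: applying a Hadamard gate to each of three qubits yields a single statevector with 2³ = 8 amplitudes.

```python
# Illustrative Qiskit sketch: Hadamard gates on three qubits prepare an equal
# superposition over all 2^3 = 8 basis states of the quantum register.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 3
qc = QuantumCircuit(n)
for q in range(n):
    qc.h(q)                        # put every qubit into superposition

state = Statevector.from_instruction(qc)
print(len(state.data))             # 8 amplitudes, one per basis state
print(state.probabilities())       # each of the 8 outcomes has probability 1/8
```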
Quantum interference is a phenomenon in quantum mechanics in which subatomic particles, while in a probabilistic superposition state, interact with and influence themselves and other particles. It can influence the probability of the outcomes when the quantum state is measured.
Quantum parallelism and quantum interference form the foundation of how a quantum
computer processes information simultaneously. Quantum computers have the potential to exceed
the performance of conventional computers for complex problems such as cryptography, chemistry,
pharmaceuticals, etc. Quantum advantage is the point at which quantum algorithms deliver a significant, practical benefit beyond what classical computers alone are capable of for computationally complex problems.
Quantum algorithms are usually described as a circuit model but they can be defined by using
other mathematical models too. They can use other techniques or algorithms as sub-parts of the
algorithm such as quantum phase estimation, quantum Fourier transform, amplitude amplification,
etc.
One of the important steps in QML is encoding classical data into quantum states suitable for
quantum computation [27]. There are several techniques for quantum data encoding such as basis
encoding, amplitude encoding, angle encoding, etc.
Basis encoding is a technique where classical data are encoded directly into computational basis
states of qubits. Each classical bit of data is mapped to a qubit, and the classical value is represented
by the qubit being in either the ∣0⟩ or ∣1⟩ state. It is also known as binary encoding. Basis encoding is
straightforward and directly maps classical binary data to quantum states, making it easy to
implement in quantum circuits for classical data in binary form.
Amplitude encoding involves encoding the data into the amplitudes of the quantum states. For a system with n qubits, amplitude encoding uses the 2ⁿ amplitudes to represent classical data.
However, preparing a quantum state with specific amplitudes is non-trivial and may require complex
quantum circuits or additional resources.
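A minimal sketch of amplitude encoding is given below (an illustrative example with made-up feature values, not the encoding used in this study); note how the generic state-preparation routine decomposes into a non-trivial circuit:

```python
# Illustrative sketch of amplitude encoding (not the encoding used in this
# study): a length-2^n classical vector is normalized and loaded into the
# amplitudes of an n-qubit state via a generic state-preparation routine.
import numpy as np
from qiskit import QuantumCircuit

data = np.array([0.2, 0.5, 0.1, 0.7])            # example classical feature vector
amplitudes = data / np.linalg.norm(data)         # amplitudes must be normalized

qc = QuantumCircuit(2)                           # 2 qubits provide 2^2 = 4 amplitudes
qc.initialize(amplitudes, [0, 1])                # generic state preparation
print(qc.decompose().depth())                    # the synthesized circuit is non-trivial
```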
Angle encoding also known as phase or rotation encoding is another technique that maps
classical data into phase angles of qubits. A qubit state can be represented with Bloch sphere
parameters using θ and φ.
|𝜓⟩ = cos(𝜃/2)|0⟩ + 𝑒^(𝑖𝜑) sin(𝜃/2)|1⟩ (4)
Angle encoding is straightforward to implement using basic quantum gates, making it accessible in quantum circuits; however, a qubit can only be used to encode a single data feature, making it more expensive to use for high-dimensional datasets unless a dimensionality reduction technique is used.
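The sketch below illustrates angle encoding in the spirit of Equation (4) (the feature values and the scaling of features to rotation angles are illustrative assumptions): each feature sets the rotation angle of one qubit.

```python
# Illustrative sketch of angle encoding: each scaled classical feature sets the
# Ry rotation angle of one qubit. Scaling features to [0, pi] is an assumption.
import numpy as np
from qiskit import QuantumCircuit

features = np.array([0.3, 0.8])                  # example features scaled to [0, 1]
qc = QuantumCircuit(len(features))
for i, x in enumerate(features):
    qc.ry(np.pi * x, i)                          # theta_i = pi * x_i
print(qc.draw(output="text"))
```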

2.4. Related Work


This section presents related work about using QML for classification problems. We grouped
the studies into three subsections based on the underlying model.

2.4.1. Studies Using Variational Quantum Classifier (VQC)


VQC is a promising QML model particularly for tackling classification problems. Built upon the
principles of variational quantum algorithms, VQCs use parameterized quantum circuits to perform
tasks in high-dimensional quantum spaces, potentially achieving a computational advantage over
classical classifiers in certain scenarios [28].
To improve the detection rate of heart failure, Munshi et al. (2024) proposed a QML-based
framework using a standard heart failure dataset consisting of different features related to
cardiovascular diseases [29]. VQC was one of the QML algorithms used for the research work. The
Principal Component Analysis (PCA) dimensionality reduction technique was used to reduce the dimension of the dataset. This study focused on a comparison of the QML methods rather than comparing QML
performance with classical ML algorithms on the same dataset. After a comparative analysis, they
concluded that QSVC outperformed the VQC in all the metrics for the given dataset.
Maheshwari et al. (2022) presented the application of VQC for binary classification [30]. The
three datasets used were: a synthetic dataset with randomly generated values between 0 and 1, a
publicly available sonar dataset consisting of mining data, and a proprietary diabetes dataset related
to diabetes with acute diseases and diabetes without acute disease. Feature importance technique
was used to reduce the dimensions of the datasets by dropping some features with least importance.
The study reported VQC accuracies of 75%, 71.4%, and 68.73% for the synthetic, sonar, and diabetes
datasets, respectively for basis data encoding. In contrast, amplitude data encoding-based VQC
achieved superior results with accuracies of 98.40%, 67.3%, and 74.50% on the same datasets. These
findings show the potential of quantum approaches in improving classification tasks, suggesting
promising directions for future research in QML.

2.4.2. Studies Using Quantum Support Vector Classifier (QSVC)


QSVC is an adaptation of the classical Support Vector Machine (SVM) within the framework of
quantum computing. Classical SVMs operate by mapping data into a high-dimensional feature space
where classes can be separated by an optimal hyperplane, but this mapping can become
computationally intensive as data dimensionality grows. QSVCs address this challenge by using
quantum feature maps and quantum kernels, potentially allowing for exponential speedup in certain
classification scenarios. As the field advances, understanding the capabilities and limitations of
QSVCs will be essential for evaluating their practical relevance within QML.
Kavitha and Kaulgud (2024) in a recent study employed various datasets to benchmark the
performance of QSVC, demonstrating their potential and highlighting the areas where further
research is needed [31]. A key factor in the study is the selection of suitable feature maps, which are
critical for encoding data into quantum states. Effective feature maps can enhance the classifier’s
ability to distinguish between different classes in the dataset. The results showed that by using the
right quantum feature maps, a quantum advantage was demonstrated for all three datasets compared
to the classical alternatives. Comparisons to other QML approaches could provide broader insight on
QSVC’s strengths.
Suzuki et al. (2024) explored the practical applicability of both QSVC and Quantum Support
Vector Regression (QSVR) by evaluating their performances on a variety of datasets [32]. QSVC was
evaluated on fraudulent credit-card transactions and image datasets while the regression
performance of QSVR was evaluated on financial and materials datasets. The experiments were
implemented both on a quantum circuit simulator and real quantum computer. The results show the
resilience of quantum kernels on noisy devices even when implemented on shallow circuits and
adaptability across different types of datasets and tasks. These findings demonstrate the potential of
QML to offer significant advantages over classical counterparts, particularly in handling noisy
environments which are prevalent in near-term quantum devices.
By adopting a QSVC-based approach built on quantum annealing principles, Yuan et al. (2023) were able to enhance the classification of flow separation scenarios [33]. The QSVC algorithm
demonstrated superior performance compared to classical SVM approaches. Specifically, it achieved
an 11.1% increase in accuracy for binary classification tasks. Additionally, the study extended its
investigation to multiclass classification, focusing on multiple angles of attack on aircraft wings. The
developed multiclass QSVC, utilizing a one-against-all strategy, showed a notable 17.9% accuracy
improvement over classical methods.

2.4.3. Studies Using Quantum Neural Networks (QNN)


QNNs represent a novel fusion of quantum computing and artificial neural networks where the
quantum parallelism and entanglement provide more efficient ways to process information [34]. A
QNN has an input, output, and a number of hidden layers. The smallest building block of a QNN is
the quantum perceptron, which is the quantum analogue of the perceptron used in ML [35]. Most QNNs are developed as feed-forward networks, i.e., they take input from one layer of qubits, evaluate this information and pass the output on to the next layer. However, other models such as Quantum
Recurrent Neural Networks (QRNN) have also been proposed in the literature contributing to an
emerging and rapidly developing field of research [36].
Several studies in the literature used the QNN approach to advance QML. For example, Simoes et al. (2023) performed an experimental analysis of the QNN algorithm by evaluating its performance on 5 different datasets using different combinations of quantum encoding techniques [37]. The study was able to demonstrate a quantum advantage, with the results showing that the QNN outperformed the classical neural network by 7%. In the hybrid approach used, the QNN was implemented as a variational quantum circuit while the optimizer was implemented on classical hardware.
Although it is a relatively new research field, the number of publications in this area has increased recently, and researchers have started to present systematic review studies to compare QML methods and explore their usage for specific problems [38–40]. Overall, existing literature on QML
highlights significant advancements in developing quantum-enhanced classification algorithms.
While practical implementations are limited by current quantum hardware constraints such as noise
and low coherence, the results are promising with better accuracy and efficiency.

3. Materials and Methods


This section presents the selected methods in this study as well as the datasets. We used Python
programming language with Jupyter Notebook and standard libraries pandas, numpy, matplotlib
and scikit-learn. The quantum computation processes were implemented by using the IBM Qiskit
software package which provides open access to quantum computing services for various QML
algorithms. We used a quantum simulator due to the limited usage time of the real quantum
computers and the fact that simulators provide a flexible and accessible alternative for exploring
quantum computation.
Current quantum processors, which are relatively small and noisy, lack the capacity to
disentangle and generalize quantum data independently. To be effective, these NISQ processors must
operate alongside classical co-processors [43]. Therefore, we employed a hybrid quantum-classical model that combines classical and quantum computing to exploit the strengths of both paradigms. The
basic workflow is shown in Figure 1. During the initialization step, building the models, analysis and
preprocessing of the datasets are done on a classical computer. The training and optimization of the
models are done both on the classical and quantum hardware for the classical and quantum
algorithms respectively. During the evaluation step, the results are analyzed on a classical computer.

Figure 1. Flowchart of the model implementation.



3.1. ML Approach and Models


The quantum gate-circuit model is adopted for this research work because it is more popular
and has the most easily accessible quantum hardware. The adaptation of classical ML algorithms in
designing the corresponding quantum analogues is an ongoing research area. While it makes the transition into QC easier by using existing methods, it also allows for direct comparison by providing a benchmark for measuring the performance and effectiveness of quantum adaptations.
We used the QSVC and QNN as they have both achieved a level of success in their design and
implementation as discussed in the previous section.
QSVC is an adaptation of the Support Vector Classifier (SVC), which is a subtype of SVM that works well with small and medium-sized datasets [41]. It is used for both linear and non-linear classification tasks; for non-linear classification tasks, it uses the kernel trick, which maps the input data into a higher-dimensional feature space. For comparison, the classical model is trained
and evaluated using the SVC where four different models are trained using the linear, poly, rbf and
sigmoid kernels for each dataset. The cross_validate() function is used for the training and evaluation
of the classical models.
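A minimal sketch of this classical baseline is shown below (the cross-validation settings and dataset loader are illustrative assumptions, not necessarily the study's exact configuration):

```python
# Illustrative sketch of the classical SVC baseline with four kernels,
# evaluated via cross_validate as described above.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)       # one of the three datasets used

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    scores = cross_validate(SVC(kernel=kernel), X, y, cv=5,
                            scoring=["accuracy", "f1", "precision", "recall", "roc_auc"])
    print(kernel, round(scores["test_accuracy"].mean(), 4))
```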
To adapt SVC into a quantum algorithm as QSVC, the quantum kernel function is represented
with a quantum Hilbert space using a quantum feature map [42]. A kernel function can be
represented as a matrix, and the kernel matrix K that represents the overlap of two quantum states is
defined as follows:

𝐾ᵢⱼ = |⟨𝜙(𝑥⃗ᵢ) | 𝜙(𝑥⃗ⱼ)⟩|² (5)

where a quantum feature map 𝜙(𝑥⃗) maps a classical feature vector 𝑥⃗ to a Hilbert space.
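The following sketch illustrates Equation (5) by computing a single kernel entry as the squared overlap of two feature-mapped statevectors (an illustration using the ZZFeatureMap adopted later in Section 3.4; it is not the study's exact code):

```python
# Illustrative computation of one kernel entry: K_ij = |<phi(x_i)|phi(x_j)>|^2,
# evaluated here by explicit statevector simulation of the feature map.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector

feature_map = ZZFeatureMap(feature_dimension=2, reps=2)

def kernel_entry(x_i, x_j):
    sv_i = Statevector.from_instruction(feature_map.assign_parameters(x_i))
    sv_j = Statevector.from_instruction(feature_map.assign_parameters(x_j))
    return np.abs(np.vdot(sv_i.data, sv_j.data)) ** 2   # squared overlap (fidelity)

print(kernel_entry([0.1, 0.4], [0.3, 0.2]))              # value in [0, 1]
print(kernel_entry([0.1, 0.4], [0.1, 0.4]))              # identical inputs give 1.0
```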
QNN uses parameterized quantum circuits to represent the layers. For comparison, the classical
model is trained and evaluated using the Multilayer Perceptron (MLP) from the scikit-learn library
which is a feedforward artificial neural network.
We used PCA for dimensionality reduction and data visualization. This method reduces the data
into its most critical features referred to as principal components [44]. These components are linear
combinations of the original variables that capture the maximum variance within the dataset.
Through this process, PCA offers an approximation of the original data matrix, relying on a reduced
number of principal components while preserving the most significant variance present in the data
[45].
We used a Quantum Feature Map (QFM) to map our classical datasets into quantum states. The quantum data estimation approach adopted by the two selected algorithms differs slightly. While we used the same quantum feature map for both, for QSVC we used the Quantum Kernel Estimation (QKE) method and for QNN we used the Parameterized Quantum Circuit (PQC) method. These methods are explained in Section 3.4.

3.2. Datasets
We used three health related datasets which are publicly available. This section briefly explains
the datasets and provides the links to the datasets.
The Breast Cancer dataset is a part of the Python scikit-learn package and also publicly available
via https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic. It is a multivariate
dataset with real feature characteristics. It contains 569 instances and 31 features. The features were
computed from digitized images of a fine needle aspirate of a breast mass. The dataset was loaded
via the scikit-learn library directly during implementation.
The Diabetes dataset can be found on the Kaggle data repository and is available via
https://www.kaggle.com/datasets/uciml/pima-indians-diabetes-database. This dataset comprises 768 instances and 9 features, with the prediction feature included. It contains information such as the number of pregnancies, the glucose level, blood pressure, skin thickness, insulin level in the blood stream, the body-mass index, age and diabetes pedigree function. There are no missing values or duplicated records in the dataset.

The Heart Disease dataset is also available at Kaggle data repository via
https://www.kaggle.com/datasets/andrewmvd/heart-failure-clinical-data. It includes heart failure
clinical data with 299 instances and 13 features.
Three different datasets are defined as D_BC, D_D and D_HD of order ℝ^(n×m), representing the Breast Cancer, Diabetes and Heart Disease datasets respectively, each containing a set of features {f_1, f_2, …, f_m} where m is the number of features and n is the number of instances of each dataset. After importing these datasets, some exploratory data analysis is conducted to gain more insights into the contents of the datasets. Appendix A includes the target class balance analysis and PCA visualization for the three datasets. The next section provides brief information about the data preprocessing step.

3.3. Data Preprocessing


The first step was identifying and isolating the target/class feature for prediction. Then, a split
of the datasets into training and testing datasets is done using either the cross_validate() or the
train_test_split() function with a splitting ratio of 80:20. This is a standard supervised ML procedure
where 80% of a dataset is used for training a model and the remaining 20% is used to evaluate the
performance of the trained model on an unseen dataset. The splitting ratio can be varied.

D_BC, D_D, D_HD : ℝ^(n×m) ⟶ train/test split (80:20) ⟶ training and test subsets
A standard scaler is applied to the datasets. This technique removes the mean of each feature in the datasets and scales them to unit variance. This process ensures that each feature contributes equally to the model, preventing any feature from dominating due to its scale. This has been shown to improve the trainability of ML models [46]. It also helps with handling outlier datapoints.

D_BC, D_D, D_HD : ℝ^(n×m) ⟶ StandardScaler ⟶ D_BC, D_D, D_HD : ℝ^(n×m) (standardized)
Next, the PCA technique is applied to reduce the dimensionality of the datasets without losing the information and patterns therein. The dimension of each dataset is reduced to a linear combination of two principal components. The dimensionality of a dataset impacts the speed and training efficiency of QML models. Besides, the limitations and operational constraints of quantum hardware are also a justification for this dimensionality reduction.

D_BC, D_D, D_HD : ℝ^(n×m) ⟶ PCA ⟶ D_BC, D_D, D_HD : ℝ^(n×2)
The final step was the application of the MinMaxScaler for the quantum model to restrict the
range of values in the dataset between 0 and 1.
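A compact sketch of this preprocessing pipeline is given below (the dataset loader, random seed and the choice to fit the MinMaxScaler on the training split are illustrative assumptions):

```python
# Illustrative sketch of the preprocessing pipeline in Section 3.3:
# standardization, PCA to two components, an 80:20 split, and MinMax scaling
# of the reduced features for the quantum models.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_breast_cancer(return_X_y=True)

X_std = StandardScaler().fit_transform(X)             # zero mean, unit variance per feature
X_pca = PCA(n_components=2).fit_transform(X_std)      # reduce to two principal components
X_train, X_test, y_train, y_test = train_test_split(
    X_pca, y, test_size=0.2, random_state=42)         # 80:20 split

scaler = MinMaxScaler(feature_range=(0, 1)).fit(X_train)   # range expected by angle encoding
X_train_q, X_test_q = scaler.transform(X_train), scaler.transform(X_test)
```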

3.4. QML Model Implementation


In this study, we adopted the ZZFeatureMap which was designed with the angle encoding
technique. It is assumed to be difficult to simulate classically [47]. It is available as one of the built-in
QFMs provided by IBM in the qiskit_machine_learning library.
Figure 2 shows the ZZFeatureMap circuit for the QSVC model with two feature dimensions (for the two principal components), two repetitions and linear entanglement. It has a depth of 10. The x[0] and x[1] parameters are the placeholders for the input features.

Figure 2. ZZFeatureMap with 2 qubits for the QSVC model.
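A minimal sketch of this feature map configuration is shown below (assuming the Qiskit circuit library; the exact printed depth may differ between Qiskit versions):

```python
# Illustrative construction of the feature map in Figure 2: two qubits,
# two repetitions, linear entanglement.
from qiskit.circuit.library import ZZFeatureMap

feature_map = ZZFeatureMap(feature_dimension=2, reps=2, entanglement="linear")
print(feature_map.decompose().draw(output="text"))   # gates parameterized by x[0], x[1]
print(feature_map.decompose().depth())
```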

We employed the QKE method for the QSVC model to optimize the quantum kernel's parameters, utilizing a trainable quantum kernel and an optimizer. Specifically, we used the quantum kernel alignment (QKA) method to accomplish this task [48]. In QKA, a quantum kernel that is parameterized to fit a dataset is iteratively adjusted, aiming for the largest possible margin in SVMs. A custom rotational layer is created and composed with the ZZFeatureMap as shown in Figure 3. A quantum kernel is then instantiated from the TrainableFidelityQuantumKernel class with the feature map passed as a parameter. In training the quantum kernel, the gradient-free SPSA (Simultaneous Perturbation Stochastic Approximation) optimizer is used [49], which is particularly efficient for noisy quantum circuits. The fit() method of the quantum kernel trainer (QKT) is then used to train the kernel with the dataset.

Figure 3. ZZFeatureMap with a custom rotational layer.
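The sketch below outlines this QKA workflow (class names follow the qiskit_machine_learning package, but exact import paths and signatures vary between versions; the rotational layer, optimizer settings and the small synthetic dataset are illustrative assumptions rather than the study's exact code):

```python
# Illustrative quantum kernel alignment (QKA) sketch for the QSVC model.
import numpy as np
from qiskit.circuit import ParameterVector, QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap
from qiskit_algorithms.optimizers import SPSA            # older releases: qiskit.algorithms.optimizers
from qiskit_machine_learning.algorithms import QSVC
from qiskit_machine_learning.kernels import TrainableFidelityQuantumKernel
from qiskit_machine_learning.kernels.algorithms import QuantumKernelTrainer

# Trainable rotational layer composed with the ZZFeatureMap (layer placement is illustrative)
theta = ParameterVector("theta", 2)
rotation_layer = QuantumCircuit(2)
for i in range(2):
    rotation_layer.ry(theta[i], i)
feature_map = rotation_layer.compose(
    ZZFeatureMap(feature_dimension=2, reps=2, entanglement="linear"))

# Quantum kernel with trainable parameters, aligned to the data via SPSA
qkernel = TrainableFidelityQuantumKernel(feature_map=feature_map, training_parameters=theta)
trainer = QuantumKernelTrainer(quantum_kernel=qkernel, loss="svc_loss",
                               optimizer=SPSA(maxiter=50), initial_point=[0.1, 0.1])

# Small synthetic stand-in for the preprocessed (MinMax-scaled, 2-feature) data
rng = np.random.default_rng(0)
X_train, y_train = rng.uniform(0, 1, (40, 2)), rng.integers(0, 2, 40)
X_test, y_test = rng.uniform(0, 1, (10, 2)), rng.integers(0, 2, 10)

result = trainer.fit(X_train, y_train)                   # optimizes the kernel parameters
qsvc = QSVC(quantum_kernel=result.quantum_kernel)        # SVC with the trained quantum kernel
qsvc.fit(X_train, y_train)
print(qsvc.score(X_test, y_test))
```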

Similar to the QSVC model, the ZZFeatureMap and an ansatz are configured for the QNN model as shown in Figures 4 and 5. A custom quantum circuit is created with a number of qubits equal to the feature dimension of our datasets. The ansatz was instantiated from the RealAmplitudes class. A QNN is created from the SamplerQNN class, with the configured circuit passed as a parameter. We used the PQC method for the QNN model. A quantum classifier from the NeuralNetworkClassifier class that integrates the COBYLA (Constrained Optimization By Linear Approximation) optimization algorithm is used for training the QNN.

Figure 4. Ansatz generated for the ZZFeatureMap.

Figure 5. Two-qubit quantum circuit configured with the ZZFeatureMap and ansatz.
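The following sketch outlines the QNN configuration described above (class names follow the qiskit_machine_learning package; import paths, the parity interpretation function, optimizer settings and the synthetic data are illustrative assumptions rather than the study's exact code):

```python
# Illustrative parameterized-quantum-circuit (PQC) QNN sketch.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import RealAmplitudes, ZZFeatureMap
from qiskit_algorithms.optimizers import COBYLA          # older releases: qiskit.algorithms.optimizers
from qiskit_machine_learning.algorithms.classifiers import NeuralNetworkClassifier
from qiskit_machine_learning.neural_networks import SamplerQNN

num_qubits = 2                                           # two principal components
feature_map = ZZFeatureMap(feature_dimension=num_qubits, reps=2, entanglement="linear")
ansatz = RealAmplitudes(num_qubits, reps=3)              # trainable ansatz layers

circuit = QuantumCircuit(num_qubits)
circuit.compose(feature_map, inplace=True)               # encode the input features
circuit.compose(ansatz, inplace=True)                    # trainable layers

qnn = SamplerQNN(circuit=circuit,
                 input_params=feature_map.parameters,
                 weight_params=ansatz.parameters,
                 interpret=lambda x: bin(x).count("1") % 2,   # parity maps bitstrings to 2 classes
                 output_shape=2)
classifier = NeuralNetworkClassifier(neural_network=qnn, optimizer=COBYLA(maxiter=100))

rng = np.random.default_rng(0)                           # small synthetic stand-in dataset
X = rng.uniform(0, 1, size=(40, num_qubits))
y = rng.integers(0, 2, size=40)
classifier.fit(X, y)
print(classifier.score(X, y))
```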

4. Results
This section presents the experimental results for both classical and quantum models for each dataset and compares their performance. The IBM quantum development environment and Qiskit software library were easy to use, and the available documentation was comprehensive [42]. We used the standard metrics Accuracy, F1-Score, Precision, Recall, and the area under the ROC curve (ROC-AUC). Accuracy measures the proportion of instances correctly classified, and it can be defined as below by using the confusion matrix values:
Accuracy = (TP + TN) / (TP + TN + FP + FN) (6)
Precision is the proportion of correctly predicted positive observations to the total predicted
positives (TP/(TP + FP)). Recall is the proportion of correctly labelled positive observations to all
actual positives (TP/(TP + FN)), also known as the true positive rate (TPR). F1-Score is the harmonic mean of precision and recall, providing a single metric that represents a model's overall class-wise accuracy.
ROC-AUC is a metric that measures the performance of a classification model by plotting the TPR
against the false positive rate (FPR) at various threshold settings.
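These metrics can be computed with scikit-learn as in the short sketch below (the inputs y_true, y_pred and y_score are illustrative placeholders for labels, predictions and decision scores):

```python
# Sketch of the evaluation metrics defined above, using scikit-learn.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

def report(y_true, y_pred, y_score):
    return {
        "accuracy": accuracy_score(y_true, y_pred),    # (TP + TN) / (TP + TN + FP + FN)
        "precision": precision_score(y_true, y_pred),  # TP / (TP + FP)
        "recall": recall_score(y_true, y_pred),        # TP / (TP + FN), the TPR
        "f1": f1_score(y_true, y_pred),                # harmonic mean of precision and recall
        "roc_auc": roc_auc_score(y_true, y_score),     # area under the TPR-vs-FPR curve
    }

print(report([0, 1, 1, 0], [0, 1, 0, 0], [0.2, 0.9, 0.4, 0.1]))
```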

4.1. Breast Cancer Dataset Performance Metrics


Figure 6 shows the breast cancer dataset results for the seven models trained for this study: classical SVC with four different kernels (linear, poly, rbf and sigmoid), MLP, and the two quantum algorithms, QSVC and QNN. In terms of accuracy, the SVC_Linear and MLP models performed the best with the highest accuracy of 95.26%, while QSVC had a competitive score of 93.86%. QNN performed the worst with the lowest accuracy of 77.19%. The other metrics were not significantly different.

Figure 6. Results for the Breast Cancer dataset.

4.2. Diabetes Dataset Performance Metrics


Figure 7 shows the diabetes dataset results for each model. Among the models, the MLP model performed the best with an accuracy score of 72.4%. SVC_Linear and SVC_RBF showed comparable performance with accuracies of 72.39% and 72.14% respectively.

Figure 7. Results for the Diabetes dataset.

On the other hand, SVC_Poly and QNN exhibited lower accuracy scores of 69.4% and 67.53%, respectively. The SVC_Sigmoid model had the lowest accuracy at 62.63%, suggesting it is the least suitable for this task among the models tested. QSVC performed relatively well with an accuracy of 70.78%, making it a competitive alternative in the quantum machine learning category. However, the F1 scores were higher for the quantum models, which indicates that the quantum models had good overall binary classification performance.

4.3. Heart Disease Dataset Performance Metrics


Figure 8 shows the heart disease dataset results for each model. For this dataset, the MLP model achieved the highest accuracy at 75.57%, closely followed by SVC_RBF, SVC_Linear and SVC_Poly. The QNN model demonstrated the lowest accuracy of 58.33%, indicating it was less useful in this application. QSVC exhibited moderate performance, with an accuracy of 63.33%, which is better than QNN but still lags behind the other models. However, the other performance metrics differed, specifically recall and F1 score: QSVC performed best on F1 score with a value of 77.49%.

Figure 8. Results for the Heart Disease dataset.

5. Discussion
This section discusses the interpretation of the results and the limitations of the study. Conclusions are summarized and future research directions are highlighted as well.

5.1. Discussion and Limitations


Across the three datasets, while the classical algorithms showed similar or slightly better performance, QSVC showed strength by outperforming some of the classical kernels on some metrics and performing much better than QNN. Several reasons could be responsible for the poorer performance of the quantum algorithms in this study. Firstly, the complexities of the datasets may not be suitable enough for the requirements of QC, i.e. the datasets are small with a limited number of instances, and the feature dimension after PCA was only two. We plan to continue the experiments with larger datasets as well as changing the parameters to better understand the quantum processes.
Secondly, the limitations of restricted access to quantum hardware and the use of a simulator could have an impact on the results. Access to real quantum hardware would have further enriched this study. We used the IBM quantum platform and Qiskit, but a more detailed analysis could be performed with other software packages and by comparing them, e.g. using Cirq by Google or Q# by Microsoft as an alternative quantum programming language.
Finally, IBM's ongoing updates to its quantum services and the Qiskit package made model implementation a bit challenging as a user. For example, IBM announced the new Qiskit 1.2 around August 2024, and some classes were deprecated, e.g. BaseSamplerV1 and BaseEstimatorV1. This required continuous updates to the code, and some QML functions did not even work with the updated version.
Due to the emerging nature of the quantum programming language constructs, backward compatibility has been somewhat poor. For example, V2 primitives are not supported in Qiskit ML package version 0.7.2, but the upcoming 0.8.0 version, which is expected to be released in November 2024, will fix that issue.
These results highlight the fact that quantum computing, however promising, may not be suitable for all datasets and should not be considered a direct replacement for the classical computing paradigm; rather, its role is complementary. In other words, it would be overkill to apply QC to simple tasks that can be accomplished by existing classical alternatives, but QC can be a promising solution when the layers of complexity increase, i.e. to tackle exponentially complex problems. We note that although quantum computing is more powerful than classical computing, for most of
the trivial operations and small datasets, classical ML can be enough. Considering the required
computing power and current challenges with quantum hardware, one may prefer to use hybrid or classical models if there are alternative classical solutions with comparable performance.

5.2. Conclusions and Future Work


In this paper, we presented a study on performance evaluation of QML algorithms for binary
classification and compared the results with classical counterparts by using three datasets. The main objective of this study was to demonstrate the practical implementation and applicability of QML algorithms and compare the performance results to their classical counterparts.
reviewed, and methods were explained. The quantum processes of building a quantum feature map,
quantum kernel estimation for QSVC and parameterized quantum circuit for QNN were
implemented. Evaluations of the models and comparative analysis of the performance metrics were
presented. Results showed that QML algorithms can improve the performance of trained ML models
but there is not a significant quantum advantage for small datasets.
Although QC promises to revolutionize many fields, significant technical and theoretical
challenges remain before practical and widespread use of quantum computers becomes a reality. QC
in the NISQ era is prone to errors and noise. Quantum noise refers to the unwanted disturbances that affect quantum systems and lead to errors in quantum computation [51]. Even small amounts of noise can lead to decoherence, causing qubits to lose their superposition and entanglement properties. Quantum noise poses a significant barrier to the development of large-scale and fault-tolerant quantum computers. Advances in quantum error correction, qubit stability, scalability, and quantum
algorithm development will be critical for the progress of this technology.
Using a classical computer or a supercomputer will potentially be the easiest and most economical solution for tackling ordinary problems; on the other hand, many complex mathematical problems and machine learning challenges can benefit from the exponential power and quantum advantage provided by QC and QML. As future work, we are planning to apply QML to other classification problems with medium-sized datasets. This emerging field holds promise for solving complex classification problems, particularly in cases involving high-dimensional data. Additionally, we would like to explore QML for multi-labelled data, as multinomial classification adds another layer of complexity.

Author Contributions: Conceptualization, S.S.A. and D.C.; methodology, S.S.A. and D.C.; software, S.S.A.; validation, S.S.A. and D.C.; research and analysis, S.S.A.; writing—original draft preparation, review and editing, S.S.A. and D.C.; supervision, D.C. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: This study was conducted according
to the ethical guidelines of Bournemouth University in the UK. This study did
not involve humans or animals.
Informed Consent Statement: Not applicable.
Data Availability Statement: Publicly available datasets were used in this
study as explained in the methods section.
Conflicts of Interest: The authors declare no conflicts of interest.

Appendix A
This appendix includes the target class balance analysis and PCA visualization for the three
datasets. Figure A1 is for the Breast Cancer dataset; Figure A2 is for the Diabetes dataset; and Figure
A3 is for the Heart Disease dataset.

Figure A1. Breast Cancer dataset (a) target class balance; (b) PCA visualization.

Figure A2. Diabetes dataset (a) target class balance; (b) PCA visualization.

Figure A3. Heart Disease dataset (a) target class balance; (b) PCA visualization.

Appendix B
An example code snippet for the data preprocessing step is given in Figure B1, and example execution results for the quantum QSVC model are given in Figure B2. The code is available upon request.

Figure B1. Code snippet for data preprocessing.

Figure B2. Code snippet for QSVC model execution.

References
1. Gill, S.S.; Kumar, A.; Singh, H.; Singh, M.; Kaur, K.; Usman, M.; Buyya, R. Quantum computing: A
taxonomy, systematic review and future directions. Software: Practice and Experience, 2022; vol 52 (1), pp. 66-
114. doi: http://dx.doi.org/10.1002/spe.3039
2. Wu, X.; Xiao, L.; Sun, Y.; Zhang, J.; Ma, T.; He, L. A survey of human-in-the-loop for machine learning.
Future Generations Computer Systems, 2022; vol. 135, pp. 364-381. doi:
http://dx.doi.org/10.1016/j.future.2022.05.014
3. Sahu, M.; Gupta, R.; Ambasta, R. K.; Kumar, P. Artificial intelligence and machine learning in precision
medicine: A paradigm shift in big data analysis. Progress in Molecular Biology and Translational Science, 2022;
vol. 190(1), pp. 57-100. doi: http://dx.doi.org/10.1016/bs.pmbts.2022.03.002
4. Oztas, B.; Cetinkaya, D.; Adedoyin, F.; Budka, M.; Aksu, G.; Dogan, H. Transaction monitoring in anti-
money laundering: A qualitative analysis and points of view from industry. Future Generation Computer
Systems, 2024; vol. 159, pp. 161-171. doi: https://doi.org/10.1016/j.future.2024.05.027

5. Sabeur, Z.; Bruno, A.; Johnstone, L.; Ferjani, M.; Benaouda, D.; Arbab-Zavar, B.; Cetinkaya, D.; Sallal, M.
Cyber-physical behaviour detection and understanding using artificial intelligence. In: 13th International
Conference on Applied Human Factors and Ergonomics (AHFE’22), Cognitive Computing and Internet of
Things, vol. 67, New York, USA, 24-28 July 2022.
6. Bertolini, M.; Mezzogori, D.; Neroni, M; Zammori, F. Machine learning for industrial applications: A
comprehensive literature review. Expert Systems with Applications, 2021; vol. 175, article 114820. doi:
http://dx.doi.org/10.1016/j.eswa.2021.114820
7. Verbraeken, J.; Wolting, M.; Katzy, J.; Kloppenburg, J.; Verbelen, T.; Rellermeyer, J. S. A survey on distributed
machine learning. ACM Computing Surveys, 2020; vol. 53(2), article 30. doi: https://doi.org/10.1145/3377454
8. Ali, S.; Yue, T.; Abreu, R. When software engineering meets quantum computing. Communications of the
ACM, 2022; vol. 65(4), pp. 84-88. doi: http://dx.doi.org/10.1145/3512340
9. Hassija, V.; Chamola, V.; Saxena, V.; Chanana, V.; Parashari, P.; Mumtaz, S.; Guizani, M. Present landscape
of quantum computing. IET Quantum Communication, 2020; vol. 1(2), pp. 42-48. doi:
http://dx.doi.org/10.1049/iet-qtc.2020.0027
10. Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature,
2017; vol. 549 (7671), pp. 195-202. doi: https://doi.org/10.1038/nature23474
11. Ramezani, S. B.; Sommers, A.; Manchukonda, H. K.; Rahimi, S.; Amirlatifi, A. Machine learning algorithms
in quantum computing: A survey. In: Proceedings of the International Joint Conference on Neural
Networks, Glasgow, UK, 19-24 July 2020.
12. Bayerstadler, A.; Becquin, G.; Binder, J.; Botter, T.; Ehm, H.; Ehmer, T.; Erdmann, M.; Gaus, N.; Harbach, P.;
Hess, M.; Klepsch, J.; Leib, M.; Luber, S.; Luckow, A.; Mansky, M.; Mauerer, W.; Neukart, F.; Niedermeier,
C.; Palackal, L.; Pfeiffer, R.; Polenz, C.; Sepulveda, J.; Sievers, T.; Standen, B.; Streif, M.; Strohm, T.; Utschig-
Utschig, C.; Volz, D.; Weiss, H.; Winter, F. Industry quantum computing applications. EPJ Quantum
Technology, 2021; vol. 8 (1). doi: http://dx.doi.org/10.1140/epjqt/s40507-021-00114-x
13. Hughes, C.; Isaacson, J.; Perry, A.; Sun, R. F.; Turner, J. Quantum Computing for the Quantum Curious. Springer
Nature, Switzerland, 2021.
14. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information, 10th ed., Cambridge University
Press, UK, 2010.
15. Yuchen, W.; Zixuan, H.; Barry, S.C., Sabre, K. Qudits and high-dimensional quantum computing. Frontiers
in Physics - Sec. Quantum Engineering and Technology, 2020; vol. 8. doi:
https://dx.doi.org/10.3389/fphy.2020.589504
16. Menon, S.G.; Glachman, N.; Pompili, M.; Dibos, A.; Bernien, H. An integrated atom array-nanophotonic
chip platform with background-free imaging. Nature Communications, 2024; vol. 15, article 6156.
https://doi.org/10.1038/s41467-024-50355-4
17. Grumbling, E.; Horowitz, M. (Editors). Quantum Computing: Progress and Prospects, The National Academies
of Sciences, Engineering and Medicine, The National Academies Press, Washington DC, USA, 2019.
18. Bharti, K.; Cervera-Lierta, A.; Kyaw, T. H.; Haug, T.; Alperin-Lea, S.; Anand, A.; Degroote, M.; Heimonen,
H.; Kottmann, J. S.; Menke, T.; Mok, W.-K.; Sim, S.; Kwek, L.-C.; Aspuru-Guzik, A. Noisy intermediate-scale
quantum (NISQ) algorithms. Reviews of Modern Physics., 2022; vol. 94, article 015004. doi:
https://dx.doi.org/10.1103/RevModPhys.94.015004
19. Bova, F.; Goldfarb, A.; Melko, R. G. Commercial applications of quantum computing. EPJ Quantum
Technology, 2021; vol. 8 (1). doi: http://dx.doi.org/10.1140/epjqt/s40507-021-00091-1
20. Smith, C. S. Forbes top 10 quantum computing companies making change – Dec 2023. Available online:
https://www.forbes.com/sites/technology/article/top-quantum-computing-companies/ (last accessed on 29
October 2024).
21. Elben, A.; Vermersch, B.; van Bijnen, R.; Kokail, C.; Brydges, T.; Maier, C.; Joshi, M. K.; Blatt, R.; Roos, C. F.;
Zoller, P. Cross-platform verification of intermediate scale quantum devices. Physical Review Letters, 2020;
vol. 124 (1). doi: http://dx.doi.org/10.1103/physrevlett.124.010504
22. Shor, P. W. Fault-tolerant quantum computation. In: Proceedings of the 37th Conference on Foundations of
Computer Science, IEEE Computer Society Press, 14-16 October 1996.
23. Nimbe, P.; Weyori, B. A.; Adekoya, A. F. Models in quantum computing: a systematic review. Quantum
Information Processing, 2021; vol. 20 (2). doi: http://dx.doi.org/10.1007/s11128-021-03021-3
24. DiVincenzo, D. P. Quantum gates and circuits. In: Proceedings of the Royal Society A: Mathematical,
physical, and engineering sciences, 1998; vol. 454 (1969), pp. 261-276. doi:
http://dx.doi.org/10.1098/rspa.1998.0159
25. McGeoch, C. Adiabatic Quantum Computation and Quantum Annealing: Theory and Practice, Part of the book
series: Synthesis Lectures on Quantum Computing (SLQC), Springer, 2014.
26. Albash, T.; Lidar, D. A. Adiabatic quantum computation. Reviews of Modern Physics, 2018; vol. 90 (1). doi:
http://dx.doi.org/10.1103/revmodphys.90.015002

27. Rath, M.; Date, H. Quantum data encoding: a comparative analysis of classical-to-quantum mapping
techniques and their impact on machine learning accuracy. EPJ Quantum Technology, 2024; vol. 11(72). doi:
https://doi.org/10.1140/epjqt/s40507-024-00285-3
28. Schuld, M.; Bocharov, A.; Svore, K.; Wiebe, N. Circuit-centric quantum classifiers. Physical Review A, 2020;
vol. 101(3), American Physical Society. doi: https://dx.doi.org/10.1103/PhysRevA.101.032308
29. Munshi, M.; Gupta, R.; Jadav, N. K.; Polkowski, Z.; Tanwar, S.; Alqahtani, F.; Said, W. Quantum machine
learning-based framework to detect heart failures in Healthcare 4.0. Software: Practice & Experience, 2024; vol.
54(2), pp. 168-185. doi: http://dx.doi.org/10.1002/spe.3264
30. Maheshwari, D.; Sierra-Sosa, D.; Garcia-Zapirain, B. Variational quantum classifier for binary classification:
Real vs synthetic dataset. IEEE Access, 2022; vol. 10, pp. 3705-3715. doi:
http://dx.doi.org/10.1109/access.2021.3139323
31. Kavitha, S. S.; Kaulgud, N. Quantum machine learning for support vector machine classification.
Evolutionary Intelligence, 2024; vol. 17 (2), pp. 819-828.
32. Suzuki, T.; Hasebe, T.; Miyazaki, T. Quantum support vector machines for classification and regression on
a trapped-ion quantum computer. Quantum Machine Intelligence, 2024; vol. 6(1). doi:
http://dx.doi.org/10.1007/s42484-024-00165-0
33. Yuan, X.-J.; Chen, Z.-Q.; Liu, Y.-D.; Xie, Z.; Liu, Y.-Z.; Jin, X.-M.; Wen, X.; Tang, H. Quantum support vector
machines for aerodynamic classification. Intelligent Computing, 2023; vol. 2. doi:
http://dx.doi.org/10.34133/icomputing.0057
34. Aly, M.; Fadaaq, S.; Warga, O. A.; Nasir, Q.; Talib, M. A. Experimental benchmarking of quantum machine
learning classifiers. In: Proceedings of the 6th International Conference on Signal Processing and
Information Security (ICSPIS), pp. 240-245, Dubai, United Arab Emirates, 2023. doi:
https://dx.doi.org/10.1109/ICSPIS60075.2023.10343811
35. Beer, K.; Bondarenko, D.; Farrelly, T.; Osborne, T.; Salzmann, R.; Scheiermann, D.; Wolf, R. Training deep
quantum neural networks. Nature Communications, 2020; vol. 11, article 808. doi:
https://doi.org/10.1038/s41467-020-14454-2
36. Li, Y.; Wang, Z.; Han, R.; Shi, S.; Li, J.; Shang, R.; Zheng, H.; Zhong, G.; Gu, Y. Quantum recurrent neural
networks for sequential learning. Neural Networks, 2023; vol. 166, pp. 148-161. doi:
https://doi.org/10.1016/j.neunet.2023.07.003
37. Simoes, R. D. M.; Huber, P.; Meier, N.; Smailov, N.; Fuchslin, R. M.; Stockinger, K. Experimental evaluation
of quantum machine learning algorithms. IEEE Access, 2023; vol. 11, pp. 6197-6208. doi:
http://dx.doi.org/10.1109/access.2023.3236409
38. Desai, U.; Kola, K. S.; Nikhitha, S.; Nithin, G.; Raj, G. P.; Karthik, G. Comparison of machine learning and
quantum machine learning for breast cancer detection. In: Proceedings of the International Conference on
Smart Systems for Applications in Electrical Sciences, Tumakuru, India, 3-4 May 2024. doi:
https://dx.doi.org/10.1109/ICSSES62373.2024.10561257
39. Reka, S. S.; Karthikeyan, H. L.; Shakil, A. J.; Venugopal, P.; Muniraj, M. Exploring quantum machine
learning for enhanced skin lesion classification: A comparative study of implementation methods. IEEE
Access, 2024; vol. 12, pp. 104568-104584. doi: https://dx.doi.org/10.1109/ACCESS.2024.3434681
40. Peral-García, D.; Cruz-Benito, J.; García-Peñalvo, F. J. Systematic literature review: Quantum machine
learning and its applications. Computer Science Review, 2024; vol. 51, article 100619. doi:
http://dx.doi.org/10.1016/j.cosrev.2024.100619
41. Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L; Lopez, A. A comprehensive survey on support
vector machine classification: Applications, challenges and trends. Neurocomputing, 2020; vol. 408, pp. 189-
215. doi: http://dx.doi.org/10.1016/j.neucom.2019.10.118
42. IBM Qiskit Machine Learning documentation, Available online: https://qiskit-community.github.io/qiskit-
machine-learning/ (last accessed on 30 October 2024).
43. Broughton, M.; Verdon, G.; McCourt, T.; Martinez, A.J.; Yoo, J.H.; Isakov, S.V.; Massey, P.; Halavati, R.; Niu,
M.Y.; Zlokapa, A.; Peters, E.; Lockwood, O.; Skolik, A.; Jerbi, S.; Dunjko, V.; Leib, M.; Streif, M.; Von Dollen,
D.; Chen, H.; Cao, S.; Wiersema, R.; Huang, H.-Y.; McClean, J.R.; Babbush, R.; Boixo, S.; Bacon, D.; Ho, A.K.;
Neven, H.; Mohseni, M. TensorFlow Quantum: A software framework for quantum machine learning, arXiv
2020. Available from: http://arxiv.org/abs/2003.02989
44. Hasan, B.M.S.; Abdulazeez, A.M. A review of principal component analysis algorithm for dimensionality
reduction. Journal of Soft Computing and Data Mining, 2021; vol. 2 (1), pp. 20-30. doi:
http://dx.doi.org/10.30880/jscdm.2021.02.01.003
45. Greenacre, M.; Groenen, P. J. F.; Hastie, T.; D’Enza, A. I.; Markos, A.; Tuzhilina, E. Principal component
analysis. Nature Reviews-methods primers, 2022; vol. 2 (1). doi: http://dx.doi.org/10.1038/s43586-022-00184-w
46. Ahsan, M.; Mahmud, M.; Saha, P.; Gupta, K.; Siddique, Z. Effect of data scaling methods on machine
learning algorithms and model performance. Technologies, 2021; vol. 9 (3), article 52. doi:
http://dx.doi.org/10.3390/technologies9030052

47. Havlíček, V.; Córcoles, A. D.; Temme, K.; Harrow, A. W.; Kandala, A.; Chow, J. M.; Gambetta, J. M.
Supervised learning with quantum-enhanced feature spaces. Nature, 2019; vol. 567 (7747), pp. 209–212. doi:
http://dx.doi.org/10.1038/s41586-019-0980-2
48. Vasques, X.; Paik, H.; Cif, L. Application of quantum machine learning using quantum kernel algorithms
on multiclass neuron M-type classification. Scientific Reports, 2023; vol. 13 (1). doi:
http://dx.doi.org/10.1038/s41598-023-38558-z
49. Bonet-Monroig, X.; Wang, H.; Vermetten, D.; Senjean, B.; Moussa, C.; Bäck, T.; Dunjko, V.; O’Brien, T. E.
Performance comparison of optimization methods on variational quantum algorithms. arXiv 2021.
Available from: http://arxiv.org/abs/2111.13454
50. Javadi-Abhari, A.; Treinish, M.; Krsulich, K.; Wood, C.J.; Lishman, J.; Gacon, J.; Martiel, S.; Nation, P.D.;
Bishop, L.S.; Cross, A.W.; Johnson, B.R.; Gambetta, J.M. Quantum computing with Qiskit. arXiv 2024.
Available from: http://dx.doi.org/10.48550/ARXIV.2405.08810
51. Ball, H.; Biercuk, M. J.; Carvalho, A. R. R.; Chen, J.; Hush, M.; De Castro, L. A.; Li, L.; Liebermann, P. J.;
Slatyer, H. J.; Edmunds, C.; Frey, V.; Hempel, C.; Milne, A. Software tools for quantum control: improving
quantum computer performance through noise and error suppression. Quantum Science and Technology,
2021; vol. 6 (4), article 044011. doi: http://dx.doi.org/10.1088/2058-9565/abdca6

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those
of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s)
disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or
products referred to in the content.
