Final Doc: Blind ECG Restoration by Operational Cycle-GAN
INTRODUCTION:
Holter or wearable ECG monitoring has been increasingly used to monitor heart activity for 12 to 48
hours or even longer periods. The extended period of recording time is beneficial for observing sporadic
cardiac arrhythmias which would not be possible to diagnose in a shorter time. Doctors recommend that
patients avoid sudden movements and high-impact workouts such as running while recording. Even if
patients avoid those movements, during their daily routine a motion-related slip of the sensor or other
interference can induce severe artifacts such as baseline wander, signal cuts, motion artifacts, diminished
QRS amplitude, and noise. Some typical examples of such corrupted ECG recordings
from the benchmark China Physiological Signal Challenge (CPSC-2020) dataset [1] are shown in Fig. 1.
As can be seen in the figure, the severity of such blended artifacts makes some of the ECG signals
undiagnosable by machines or even experienced doctors. Even though noise is just one of the artifact
types corrupting the ECG signal, numerous studies in the literature address denoising as the sole
problem, and many of them assume a certain type of noise (e.g., additive Gaussian) that is independent of
the signal. To date, several DSP methods from statistical filters or transform-domain denoising [2]–[5] to
recent denoising techniques by deep learning have been proposed for ECG denoising. Chiang et al. [6]
proposed a denoising autoencoder architecture using a fully convolutional network which can be applied
to reconstruct the clean data from its noisy version. A 13-layer autoencoder model was applied to MIT-
BIH Arrhythmia and Noise Stress datasets corrupted with additive Gaussian noise, yielding around 16%,
14%, and 11% SNR (dB) improvements corresponding to the input −1 dB, 3 dB, and 7 dB SNR values,
respectively. Hamad et al. [7] developed a deep learning autoencoder to denoise ECG signals from the
discrete wavelet transform coefficients of the ECG signal. The proposed system consists of two stages:
isolating the approximation sub-band and thresholding the detail sub-band coefficients, which are then used
as input to a 14-layer autoencoder to reconstruct a clean signal. They obtained a 6.26 dB SNR
improvement on the MIT-BIH Arrhythmia database corrupted with additive Gaussian noise. In [8], a deep
recurrent neural network (DRNN) model, a specific hybrid of a DRNN and a denoising autoencoder (AE), is
applied to the denoising of ECG signals. Both real and synthetic data are used to obtain improved performance. A
new ECG denoising framework based on the generative adversarial network (GAN) is proposed in [9].
For adversarial training of the generative model, both the clean and noisy ECG samples (additive
Gaussian noise) from the MIT-BIH Arrhythmia database are used. The improved performance of the
proposed system over the existing framework is demonstrated through testing over multiple noise
conditions for 5 and 10 dB SNR levels. It is straightforward to develop such supervised ML-based
denoising solutions when a clean ECG signal is corrupted by artificial (additive) noise with a fixed type
and variance, and then to cast this as a regression problem by using the noisy/clean signals as the input/output of
the network, which will eventually learn to suppress the noise. However, such denoising solutions
obviously will fail to restore any actual ECG signal corrupted with a blend of artifacts, as typical samples
shown in Fig. 1. Even only for the “denoising” purpose, assuming an additive and independent noise
model with a fixed noise variance is far from being realistic. As can be seen in the ECG segment at the
1st row in Fig. 1, the noise level may vary in a short time, and it may neither be additive nor independent
from the signal. Therefore, in this study, we address this problem as a blind restoration approach thus
avoiding any prior assumption over the artifact types and severities. Nor do we cast it as a supervised
regression problem, since in reality one cannot have the corrupted and the clean version of the same ECG
signal at the same time unless the artifacts are artificially created. That is why, for training, we use real corrupted
signals with any blend of artifacts, and the network should be able to restore the clean signal while
preserving the main characteristics of the ECG patterns. The proposed approach learns to perform
transformations between the “clean” (e.g., close to the clinical ECG quality) and the “corrupted” ECG
segments using 1D convolutional and operational Cycle-GANs. Since their first introduction in 2014, GANs
[19] and their variations have brought a new perspective to the machine learning community with their
superiority in different image synthesis problems. Cycle-Consistent Adversarial Networks (Cycle-GANs)
[20] are developed and used for image-to-image translation on unpaired datasets. To accomplish the
aforementioned objective, in this study, we first selected batches of clean and corrupted ECG segments
from the CPSC-2020 dataset. Then, we adapted the 1D version of Cycle-GANs that can learn to
transform the ECG signals (segments) from different batches as the baseline method. The Cycle-GANs
can preserve major “patterns” of the corrupted ECG segment transformed to the “other” category, the
clean segment. Therefore, the main ECG characteristics (e.g., the interval and timing of R-peaks, QRS
waveform of ECG beats, etc.) will still be preserved whilst the quality will be improved. To further boost
the restoration performance and reduce the complexity, operational Cycle-GANs are proposed in this
study. Derived from Generalized Operational Perceptrons [10]–[15], Operational Neural Networks
(ONNs) [16]–[18], and their new variants, Self-Organized Operational Neural Networks (Self-ONNs)
[21], [22], [29]–[31], are heterogeneous network models with a non-linear neuron model, which have shown
superior diversity and increased learning capabilities. Recently, Self-ONNs have been shown to outperform their predecessors,
CNNs, in many regression and classification tasks. To reflect this superiority in ECG restoration, the
convolutional layers/neurons of the native 1D Cycle-GANs are replaced by operational/generative
layers/neurons of the Self-ONNs. Once a 1D operational Cycle-GAN is trained over the batches, the
generator Self-ONN trained for the “corrupted” to “clean” ECG segment transformation can then be used
for the ECG restoration. The performance is evaluated over the CPSC-2020 dataset quantitatively by the
performance comparisons using the benchmark peak detectors, Pan and Tompkins [23] and Hamilton
[24], qualitatively (visually), and also by medical doctors for arrhythmia diagnosis. The novel and
significant contributions of this study can be listed as follows: 1) This is a pioneering study where ECG
restoration is addressed as a “blind” approach thus avoiding any prior assumption such as certain artifact
types and severities. 2) This is the first study where 1D Cycle-GANs are proposed in a biomedical signal
restoration application. To the best of our knowledge, this is actually the first study where 1D Cycle-
GANs have ever been used for a 1D signal processing application. 3) A novel GAN type, operational
GANs, are proposed in this study which outperform the conventional (convolutional) model even with a
reduced network complexity. 4) The proposed method has also been tested over the largest ECG
benchmark dataset, CPSC-2020, with more than one million beats. The peak-labeled dataset, our
results, and the source code are now publicly shared with the research community.
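The corrupted-to-clean and clean-to-corrupted mappings described above are trained with a Cycle-GAN objective. As a minimal illustration of its cycle-consistency term, here is a NumPy sketch; `G` and `F` are toy stand-in mappings (the actual generators are the trained 1D networks), and `lam` is the usual cycle-loss weight — both are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def cycle_consistency_loss(G, F, x_corrupted, y_clean, lam=10.0):
    """Cycle-consistency term of the Cycle-GAN objective:
    F(G(x)) should recover x, and G(F(y)) should recover y."""
    forward_cycle = np.mean(np.abs(F(G(x_corrupted)) - x_corrupted))
    backward_cycle = np.mean(np.abs(G(F(y_clean)) - y_clean))
    return lam * (forward_cycle + backward_cycle)

# Toy stand-ins for the two generators (corrupted->clean and clean->corrupted);
# F is the exact inverse of G here, so the cycle loss vanishes.
G = lambda s: 2.0 * s
F = lambda s: 0.5 * s

x = np.random.randn(4, 1000)  # a batch of "corrupted" 1D ECG segments
y = np.random.randn(4, 1000)  # a batch of "clean" segments
print(cycle_consistency_loss(G, F, x, y))  # 0.0
```

During training this term is added to the two adversarial losses; it is what forces the restored segment to keep the R-peak timing and QRS morphology of its corrupted input.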
ECG recordings often suffer from a set of artifacts with varying types, severities, and durations, and this
makes an accurate diagnosis by machines or medical doctors difficult and unreliable. Numerous studies
have proposed ECG denoising methods; however, these naturally fail to restore actual ECG signals corrupted
with such artifacts due to their simple and naive noise models. In this pilot study, we propose a novel
approach for blind ECG restoration using cycle-consistent generative adversarial networks (Cycle-GANs)
where the quality of the signal can be improved to a clinical level ECG regardless of the type and severity
of the artifacts corrupting the signal. Methods: To further boost the restoration performance, we
propose 1D operational Cycle-GANs with the generative neuron model. Results: The proposed approach
has been evaluated extensively using one of the largest benchmark ECG datasets from the China
Physiological Signal Challenge (CPSC-2020) with more than one million beats. Besides the quantitative
and qualitative evaluations, a group of cardiologists performed medical evaluations to validate the
quality and usability of the restored ECG, especially for an accurate arrhythmia diagnosis. Significance:
This pioneering study in ECG restoration shows that corrupted ECG signals can be restored to clinical-level quality.
2. LITERATURE SURVEY:
“An open-access long-term wearable ECG database for premature ventricular contractions
and supraventricular premature beat detection,”
Wearable electrocardiogram (ECG) devices can provide real-time, long-term, non-invasive and comfortable
ECG monitoring for premature beats (PB) assessment (typically presenting as premature ventricular
contractions (PVC) and supraventricular premature beat (SPB)), which may foreshadow stroke or sudden
cardiac death. However, the poor quality, introduced by the dry electrode in wearable ECG monitoring system,
leads to the inefficient recognition of the existing PB detection technologies. Although many methods can
achieve high recognition rate on current widely-used open-access clinical ECG databases, they still fail to
work properly on dynamic ECG signals. This study presents an open-access ECG database comprising 24-
hour wearable ECG recordings. The database is used for the 3rd China Physiological Signal Challenge (CPSC
2020), where participants are expected to recognize PVC and SPB from these recordings. All the approved
algorithms are evaluated by scoring standards and regulations defined in terms of PVC detection and SPB
detection, respectively.
In this paper, a nonlinear Bayesian filtering framework is proposed for the filtering of single
channel noisy electrocardiogram (ECG) recordings. The necessary dynamic models of the ECG
are based on a modified nonlinear dynamic model, previously suggested for the generation of a
highly realistic synthetic ECG. A modified version of this model is used in several Bayesian
filters, including the Extended Kalman Filter, Extended Kalman Smoother, and Unscented
Kalman Filter. An automatic parameter selection method is also introduced, to facilitate the
adaptation of the model parameters to a vast variety of ECGs. This approach is evaluated on
several normal ECGs, by artificially adding white and colored Gaussian noises to visually
inspected clean ECG recordings, and studying the SNR and morphology of the filter outputs. The
results of the study demonstrate superior results compared with conventional ECG denoising
approaches such as bandpass filtering, adaptive filtering, and wavelet denoising, over a wide
range of ECG SNRs. The method is also successfully evaluated on real nonstationary muscle
artifact. This method may therefore serve as an effective framework for the model-based filtering
of noisy ECG recordings.
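The Bayesian filtering idea above can be illustrated with the simplest member of that family: a scalar Kalman filter with a random-walk state model. This is only an illustrative sketch (the cited work uses a nonlinear ECG dynamic model with EKF/UKF variants); the variances `q` and `r` are arbitrary assumed values:

```python
import numpy as np

def kalman_filter_1d(z, q=1e-3, r=0.25):
    """Scalar Kalman filter with a random-walk state model:
    x_k = x_{k-1} + w_k (process var q), z_k = x_k + v_k (measurement var r)."""
    xhat = np.zeros_like(z)
    xhat[0], p = z[0], 1.0
    for k in range(1, len(z)):
        p_pred = p + q                                    # predict
        K = p_pred / (p_pred + r)                         # Kalman gain
        xhat[k] = xhat[k - 1] + K * (z[k] - xhat[k - 1])  # update
        p = (1.0 - K) * p_pred
    return xhat

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2.0 * np.pi * t)               # slowly varying "ECG-like" signal
noisy = clean + rng.normal(0.0, 0.5, t.size)  # additive Gaussian noise
est = kalman_filter_1d(noisy)
print(np.mean((est - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

The extended/unscented variants in the paper replace the linear state equation with the nonlinear synthetic-ECG dynamics, but the predict/update structure is the same.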
We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform
(MABWT), that can be applied to ECG signals in order to remove noise from them under a wide
range of variations for noise. By using the definition of bionic wavelet transform and adaptively
determining both the center frequency of each scale together with the -function, the problem of
desired signal decomposition is solved. Applying a new proposed thresholding rule works
successfully in denoising the ECG. Moreover by using the multiadaptation scheme, lowpass
noisy interference effects on the baseline of ECG will be removed as a direct task. The method
was extensively clinically tested with real and simulated ECG signals which showed high
performance of noise reduction, comparable to those of wavelet transform (WT). Quantitative
evaluation of the proposed algorithm shows that the average SNR improvement of MABWT is
1.82 dB more than the WT-based results, for the best case. Also the procedure has largely proved
advantageous over wavelet-based methods for baseline wandering cancellation, including both
DC components and baseline drifts.
The electrocardiogram (ECG) is widely used for the diagnosis of heart diseases. However, ECG
signals are easily contaminated by different noises. This paper presents efficient denoising and
compressed sensing (CS) schemes for ECG signals based on basis pursuit (BP). In the process of
signal denoising and reconstruction, the low-pass filtering method and alternating direction
method of multipliers (ADMM) optimization algorithm are used. This method introduces dual
variables, adds a secondary penalty term, and reduces constraint conditions through alternate
optimization to optimize the original variable and the dual variable at the same time. This
algorithm is able to remove both baseline wander and Gaussian white noise. The effectiveness of
the algorithm is validated through the records of the MIT-BIH arrhythmia database. The
simulations show that the proposed ADMM-based method performs better in ECG denoising.
Furthermore, this algorithm keeps the details of the ECG signal in reconstruction and achieves
higher signal-to-noise ratio (SNR) and smaller mean square error (MSE).
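The core sub-step of the ADMM iteration for such l1-regularized (basis-pursuit-style) denoising is the soft-thresholding proximal operator. Below is a minimal sketch that assumes an identity dictionary so that both sub-problems have closed-form updates; the cited method's low-pass filtering and full compressed-sensing reconstruction are not reproduced here:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the l1 norm -- the key ADMM sub-step."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def admm_l1_denoise(y, lam=0.1, rho=1.0, n_iter=100):
    """ADMM for min_x 0.5*||x - y||^2 + lam*||x||_1 with an identity
    dictionary, so both sub-problems have closed-form updates."""
    x = np.zeros_like(y); z = np.zeros_like(y); u = np.zeros_like(y)
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)  # quadratic x-update
        z = soft_threshold(x + u, lam / rho)   # l1 proximal z-update
        u = u + x - z                          # scaled dual ascent
    return z

y = np.array([2.0, -0.05, 1.0])
print(admm_l1_denoise(y))  # converges to soft_threshold(y, 0.1) = [1.9, 0., 0.9]
```

In the actual BP formulation the x-update involves the sensing matrix; the dual variable `u` and the quadratic penalty `rho` are exactly the "dual variables" and "secondary penalty term" mentioned in the abstract.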
“An adaptive filtering approach for electrocardiogram (ECG) signal noise reduction using
neural networks,”
Electrocardiogram (ECG) signals have been widely used in clinical studies to detect heart
diseases. However, ECG signals are often contaminated with noise such as baseline drift,
electrode motion artifacts, power-line interference, muscle contraction noise, etc. Conventional
methods for ECG noise removal do not yield satisfactory results due to the non-stationary nature
of the associated noise sources and their spectral overlap with desired ECG signals. In this paper,
an adaptive filtering approach based on discrete wavelet transform and artificial neural
network is proposed for ECG signal noise reduction. This new approach combines the multi-
resolution property of wavelet decomposition and the adaptive learning ability of artificial neural
networks, and fits well with ECG signal processing applications. Computer simulation results
demonstrate that this proposed approach can successfully remove a wide range of noise with
significant improvement on SNR (signal-to-noise ratio).
The electrocardiogram (ECG) is an efficient and noninvasive indicator for arrhythmia detection
and prevention. In real-world scenarios, ECG signals are prone to be contaminated with various
noises, which may lead to wrong interpretation. Therefore, significant attention has been paid on
denoising of ECG for accurate diagnosis and analysis. A denoising autoencoder (DAE) can be
applied to reconstruct the clean data from its noisy version. In this paper, a DAE using the fully
convolutional network (FCN) is proposed for ECG signal denoising. Meanwhile, the proposed
FCN-based DAE can perform compression with regard to the DAE architecture. The proposed
approach is applied to ECG signals from the MIT-BIH Arrhythmia database and the added noise
signals are obtained from the MIT-BIH Noise Stress Test database. The denoising performance
is evaluated using the root-mean-square error (RMSE), percentage-root-mean-square difference
(PRD), and improvement in signal-to-noise ratio (SNRimp). The results of the experiments
conducted on noisy ECG signals of different levels of input SNR show that the FCN acquires
better performance as compared to the deep fully connected neural network- and convolutional
neural network-based denoising models. Moreover, the proposed FCN-based DAE reduces the
size of the input ECG signals, where the compressed data is 32 times smaller than the original.
The results of the study demonstrate the superiority of FCN in denoising, with lower RMSE and
PRD, as well as higher SNRimp. According to the results, we believe that the proposed FCN-
based DAE has a good application prospect in clinical practice.
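The evaluation metrics named above (RMSE, PRD, and SNR improvement) have standard definitions that can be computed directly. A small sketch using their usual formulas:

```python
import numpy as np

def rmse(clean, denoised):
    """Root-mean-square error between the clean and denoised signals."""
    return np.sqrt(np.mean((clean - denoised) ** 2))

def prd(clean, denoised):
    """Percentage root-mean-square difference."""
    return 100.0 * np.sqrt(np.sum((clean - denoised) ** 2) / np.sum(clean ** 2))

def snr_db(clean, signal):
    """SNR of `signal` w.r.t. the clean reference, in dB."""
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - signal) ** 2))

def snr_imp(clean, noisy, denoised):
    """SNR improvement in dB: output SNR minus input SNR."""
    return snr_db(clean, denoised) - snr_db(clean, noisy)

clean = np.ones(4)
noisy = clean + 0.5 * np.array([1.0, -1.0, 1.0, -1.0])
denoised = clean + 0.1 * np.array([1.0, -1.0, 1.0, -1.0])
print(round(float(snr_imp(clean, noisy, denoised)), 2))  # 13.98, i.e. 10*log10(25)
```

Note that these metrics require the clean reference signal, which is why they only apply to the artificial-noise setting and not to the blind restoration problem addressed in this study.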
“ECG signal de-noising based on deep learning autoencoder and discrete wavelet
transform,”
The ECG is a very important tool for the diagnosis of heart disease; however, the signal suffers from
different types of noise, such as baseline wander (BW), muscle artifact (MA), and electrode motion (EM),
which can lead to wrong interpretation. In order to prevent or reduce the effect of these noises,
different approaches have been applied to enhance the ECG signal. In this paper, we propose a new
method for ECG signal de-noising based on a deep learning denoising autoencoder (DAE) and the
wavelet transform, named WT-DAE. The proposed system is constructed from two stages. In the first
stage, the wavelet transform is used to isolate the most significant coefficients of the signal (the
approximation sub-band) from the detail coefficients (the details sub-band). The detail coefficients are
fed to a newly proposed thresholding method, which evaluates the threshold value according to the
features of the ECG signal; this threshold is applied to the detail coefficients in order to remove the
noise contained in the high-frequency components, and the inverse wavelet transform is then used to
reconstruct the signal. Different wavelet filters and threshold functions are applied in this stage. The
second stage of de-noising is performed by the DAE, which is designed to reconstruct the de-noised
signal. The proposed DAE model is constructed from 14 convolutional, ReLU, and max-pooling layers
with different parameters. We train and test the model on the MIT-BIH ECG database, and the
performance of the proposed system is evaluated in terms of MSE, RMSE, PRD, and PSNR. The
experimental results are compared with other approaches and show that the proposed system is
superior for de-noising ECG signals.
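The first stage described above — splitting the signal into approximation and detail sub-bands, soft-thresholding the details, and inverting the transform — can be sketched with a single-level Haar wavelet. The Haar filter and the fixed threshold here are illustrative assumptions; the paper explores several wavelet filters and an adaptive threshold rule:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet decomposition (even-length x)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation sub-band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail sub-band
    return a, d

def haar_idwt(a, d):
    """Inverse of the one-level Haar decomposition."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def wt_denoise(x, thr):
    """Soft-threshold the detail sub-band, keep the approximation."""
    a, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    return haar_idwt(a, d)

x = np.sin(np.linspace(0.0, np.pi, 16))
print(np.allclose(wt_denoise(x, 0.0), x))  # True: zero threshold -> perfect reconstruction
```

In the full system, the output of this wavelet stage is then passed through the 14-layer DAE for the second de-noising stage.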
Traditional Artificial Neural Networks (ANNs) such as Multi-Layer Perceptrons (MLPs) and
Radial Basis Functions (RBFs) were designed to simulate biological neural networks; however,
they are based only loosely on biology and only provide a crude model. This in turn yields well-
known limitations and drawbacks on the performance and robustness. In this paper we shall
address them by introducing a novel feed-forward ANN model, Generalized Operational
Perceptrons (GOPs) that consist of neurons with distinct (non-)linear operators to achieve a
generalized model of the biological neurons and ultimately a superior diversity. We modified the
conventional back-propagation (BP) to train GOPs and furthermore, proposed Progressive
Operational Perceptrons (POPs) to achieve self-organized and depth-adaptive GOPs according to
the learning problem. The most crucial property of the POPs is their ability to simultaneously
search for the optimal operator set and train each layer individually. The final POP is, therefore,
formed layer by layer and this ability enables POPs with minimal network depth to attack the
most challenging learning problems that cannot be learned by conventional ANNs even with a
deeper and significantly complex configuration.
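A GOP neuron replaces the perceptron's multiply-and-sum with a selectable nodal operator, a pooling operator, and an activation. Below is a minimal sketch of a single neuron's forward pass; the operator names and the three-element library are hypothetical illustrations, not the exact operator set of [10]–[15]:

```python
import numpy as np

# A tiny, illustrative nodal-operator library; each operator Psi(w, x)
# replaces the plain multiplication of a conventional perceptron.
NODAL_OPS = {
    "mul": lambda w, x: w * x,             # conventional (linear) neuron
    "sin": lambda w, x: np.sin(w * x),
    "exp": lambda w, x: np.exp(w * x) - 1.0,
}

def gop_neuron(x, w, b, nodal="sin", pool=np.sum, act=np.tanh):
    """Forward pass of one GOP neuron: act(pool_i Psi(w_i, x_i) + b)."""
    return act(pool(NODAL_OPS[nodal](w, x)) + b)

x = np.array([0.2, -0.1, 0.4])
w = np.array([0.5, 1.0, -0.3])
# With "mul" and summation pooling, the GOP neuron reduces to an MLP neuron.
print(np.isclose(gop_neuron(x, w, 0.0, nodal="mul"), np.tanh(w @ x)))  # True
```

Training a GOP network therefore involves a search over this operator library in addition to learning the weights, which is the cost the Self-ONN's generative neuron later removes.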
3. SYSTEM ANALYSIS
Existing system:
The existing formulation introduces the main network characteristics of 1D Self-ONNs together with the
forward-propagation 1D nodal operations of a CNN, of an ONN with fixed (static) nodal operators, and of a
Self-ONN with generative neurons, which can realize an arbitrary nodal function (including standard types
such as linear and harmonic functions) for each kernel element of each connection. Obviously, the Self-ONN
has the potential to achieve greater operational diversity and flexibility, allowing any nodal operator
function to be formed without the use of an operator-set library or a prior search process to select the best
nodal operator.
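The generative neuron of a 1D Self-ONN can be sketched as a sum of convolutions of the input's powers, so that each kernel element effectively applies a Q-th order Maclaurin-style polynomial to its input. This is a simplified single-channel sketch under that assumption; bias handling and the convolution-vs-cross-correlation convention of a deep learning framework are glossed over:

```python
import numpy as np

def generative_neuron_1d(x, W, b=0.0):
    """Forward pass of a 1D generative (Self-ONN style) neuron.
    W has shape (Q, K): one length-K kernel per power of the input, so
    each kernel element applies a Q-th order polynomial nodal function."""
    y = np.zeros(x.size - W.shape[1] + 1) + b
    for q, w_q in enumerate(W, start=1):
        y += np.convolve(x ** q, w_q, mode="valid")  # q-th power branch
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(32)
W = rng.standard_normal((3, 5))  # Q = 3 powers, kernel size K = 5
y = generative_neuron_1d(x, W)

# With Q = 1 the generative neuron degenerates to an ordinary convolution.
print(np.allclose(generative_neuron_1d(x, W[:1]),
                  np.convolve(x, W[0], mode="valid")))  # True
```

Since all Q kernels are learned jointly by back-propagation, the nodal function is "generated" during training rather than picked from an operator library, which is the flexibility claimed above.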
DISADVANTAGE:
Lower restoration accuracy.
PROPOSED SYSTEM:
ECG recordings are used to detect heart diseases, but these recordings often contain noise that
prevents machine learning algorithms or experienced medical doctors from making an accurate
diagnosis, and no existing algorithm can remove such noise or reconstruct corrupted ECG signals. To
overcome this issue, the authors of this paper introduce an Operational Cycle-GAN (Generative
Adversarial Network) algorithm, which is trained on normal and noisy signals; the trained model can
then be applied to any corrupted or noisy ECG signal to detect and repair such noise by using the
features of normal ECG signals captured in the model. The GAN algorithm was originally designed to
generate synthetic or fake images from original images, so the authors extend it to generate new ECG
signals from corrupted signals.
ADVANTAGE:
Higher restoration accuracy.
Modules Information:
1) Load Operational Cycle-GAN Model: using this module we generate and load the GAN model.
2) Upload Noisy ECG Signal: using this module we upload a noisy ECG signal.
3) Predict Quality Signals: using this module the GAN detects and repairs the noisy parts and then
generates a restored signal. Noise in both signals is marked with a star symbol, and in the
predicted GAN output we can see the corrected positions of the star symbols.
4) Comparison Graph: using this module we plot the accuracy, precision, and other metric graphs.
3.3. PROCESS MODEL USED WITH JUSTIFICATION
[SDLC process diagram: Feasibility Study, Team Formation, Project Specification, Requirements Gathering, Preparation, Analysis & Design, Code, Unit Test, Integration & System Testing, Acceptance Test, Delivery/Installation, and Assessment, with Training shown as an umbrella activity.]
SDLC stands for Software Development Life Cycle, a standard used by the software industry to develop
good software.
Stages in SDLC:
Requirement Gathering
Analysis
Designing
Coding
Testing
Maintenance
These requirements are fully described in the primary deliverables for this stage: the
Requirements Document and the Requirements Traceability Matrix (RTM). The requirements
document contains complete descriptions of each requirement, including diagrams and
references to external documents as necessary. Note that detailed listings of database tables and
fields are not included in the requirements document.
The title of each requirement is also placed into the first version of the RTM, along with the title
of each goal from the project plan. The purpose of the RTM is to show that the product
components developed during each stage of the software development lifecycle are formally
connected to the components developed in prior stages.
In the requirements stage, the RTM consists of a list of high-level requirements, or goals, by title,
with a listing of associated requirements for each goal, listed by requirement title. In this
hierarchical listing, the RTM shows that each requirement developed during this stage is
formally linked to a specific product goal. In this format, each requirement can be traced to a
specific product goal, hence the term requirements traceability.
The outputs of the requirements definition stage include the requirements document, the RTM,
and an updated project plan.
Team Formation denotes the number of staff required to handle the project; in this case, the individual
modules are assigned as tasks to the employees working on the project.
Project Specifications describe the various possible inputs submitted to the server and the
corresponding outputs, along with the reports maintained by the administrator.
Analysis Stage:
The planning stage establishes a bird's eye view of the intended software product, and uses this
to establish the basic project structure, evaluate feasibility and risks associated with the project,
and describe appropriate management and technical approaches.
The most critical section of the project plan is a listing of high-level product requirements, also
referred to as goals. All of the software product requirements to be developed during the
requirements definition stage flow from one or more of these goals. The minimum information
for each goal consists of a title and textual description, although additional information and
references to external documents may be included. The outputs of the project planning stage are
the configuration management plan, the quality assurance plan, and the project plan and
schedule, with a detailed listing of scheduled activities for the upcoming Requirements stage,
and high-level estimates of effort for the later stages.
Designing Stage:
The design stage takes as its initial input the requirements identified in the approved
requirements document. For each requirement, a set of one or more design elements will be
produced as a result of interviews, workshops, and/or prototype efforts. Design elements
describe the desired software features in detail, and generally include functional hierarchy
diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudo
code, and a complete entity-relationship diagram with a full data dictionary. These design
elements are intended to describe the software in sufficient detail that skilled programmers may
develop the software with minimal additional input.
When the design document is finalized and accepted, the RTM is updated to show that each
design element is formally associated with a specific requirement. The outputs of the design
stage are the design document, an updated RTM, and an updated project plan.
The development stage takes as its primary input the design elements described in the approved
design document. For each design element, a set of one or more software artifacts will be
produced. Software artifacts include but are not limited to menus, dialogs, and data management
forms, data reporting formats, and specialized procedures and functions. Appropriate test cases
will be developed for each set of functionally related software artifacts, and an online help
system will be developed to guide users in their interactions with the software.
The RTM will be updated to show that each developed artifact is linked to a specific design
element, and that each developed artifact has one or more corresponding test case items. At this
point, the RTM is in its final configuration. The outputs of the development stage include a fully
functional set of software that satisfies the requirements and design elements previously
documented, an online help system that describes the operation of the software, an
implementation map that identifies the primary code entry points for all major system functions,
a test plan that describes the test cases to be used to validate the correctness and completeness of
the software, an updated RTM, and an updated project plan.
During the integration and test stage, the software artifacts, online help, and test data are
migrated from the development environment to a separate test environment. At this point, all test
cases are run to verify the correctness and completeness of the software. Successful execution of
the test suite confirms a robust and complete migration capability. During this stage, reference
data is finalized for production use and production users are identified and linked to their
appropriate roles. The final reference data (or links to reference data source files) and production
user list are compiled into the Production Initiation Plan.
The outputs of the integration and test stage include an integrated set of software, an online help
system, an implementation map, a production initiation plan that describes reference data and
production users, an acceptance plan which contains the final suite of test cases, and an updated
project plan.
During the installation and acceptance stage, the software artifacts, online help, and initial
production data are loaded onto the production server. At this point, all test cases are run to
verify the correctness and completeness of the software. Successful execution of the test suite is
a prerequisite to acceptance of the software by the customer.
After customer personnel have verified that the initial production data load is correct and the test
suite has been executed with satisfactory results, the customer formally accepts the delivery of
the software.
The primary outputs of the installation and acceptance stage include a production application, a
completed acceptance test suite, and a memorandum of customer acceptance of the software.
Finally, the PDR enters the last of the actual labor data into the project schedule and locks the project
as a permanent project record by archiving all software items, the implementation map, the source
code, and the documentation for future reference.
Maintenance:
The outer rectangle represents the maintenance of a project. The maintenance team starts with a
requirements study and an understanding of the documentation; later, employees are assigned work
and undergo training in their particular assigned category. This life cycle has no end; it continues on
like an umbrella (there is no end point to the umbrella's sticks).
• ECONOMIC FEASIBILITY
A system that can be developed technically, and that will be used if installed, must still be a good
investment for the organization. In the economic feasibility study, the development cost of creating the
system is evaluated against the ultimate benefit derived from the new system. Financial benefits must
equal or exceed the costs. The system is economically feasible: it does not require any additional
hardware or software. Since the interface for this system is developed using the existing resources and
technologies available at NIC, there is only nominal expenditure, and economic feasibility is certain.
• OPERATIONAL FEASIBILITY
Proposed projects are beneficial only if they can be turned into information systems that meet the
organization's operating requirements. The operational feasibility aspects of the project are an
important part of the project implementation. This system is targeted to be in accordance with the
above-mentioned issues. Beforehand, the management issues and user requirements were taken into
consideration, so there is no question of resistance from the users that could undermine the possible
application benefits. The well-planned design ensures the optimal utilization of the computer resources
and helps improve performance.
• TECHNICAL FEASIBILITY
Earlier, no system existed to cater to the needs of the ‘Secure Infrastructure Implementation System’.
The current system is technically feasible: it is a web-based user interface for the audit workflow at
NIC-CSD, and thus it provides easy access to the users. The database’s purpose is to create, establish,
and maintain a workflow among various entities in order to facilitate all concerned users in their
various capacities or roles. Permissions are granted to the users based on the roles specified. Therefore,
it provides the technical guarantee of accuracy, reliability, and security.
User Interface
The user interface of this system is a user-friendly Python graphical user interface (GUI).
Hardware Interfaces
The interaction between the user and the console is achieved through Python's capabilities.
Software Interfaces
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
SOFTWARE REQUIREMENTS:
Class diagram:-
The class diagram is the main building block of object-oriented modeling. It is used both
for general conceptual modeling of the application and for detailed modeling that
translates the models into programming code. Class diagrams can also be used for data
modeling. The classes in a class diagram represent the main objects and interactions in
the application, as well as the classes to be programmed. In the diagram, each class is
drawn as a box with three parts: the top part gives the class name, the middle part gives
the attributes, and the bottom part gives the methods or operations the class can undertake.
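In Python terms, the three compartments map directly onto a class definition. A small illustrative example (the `Account` class is hypothetical, used only to show the mapping):

```python
class Account:                        # top compartment: the class name
    def __init__(self, owner, balance=0.0):
        self.owner = owner            # middle compartment: attributes
        self.balance = balance

    def deposit(self, amount):        # bottom compartment: methods/operations
        self.balance += amount
        return self.balance

a = Account("alice")
print(a.deposit(5.0))  # 5.0
```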
Class diagram:
The modules and their members, as laid out in the diagram:
- main.py: attributes filename, model, accuracy; methods deleteDirectory(), loadModel(), uploadNoisyImage(), GanPredict(), predictSignals(), graph(), close()
- GAN_Arch_details.py: Upsample(), Downsample(), CycleGAN_Unet_Generator(), CycleGAN_Discriminator()
- plot_outputs.py: deleteDirectory(), predict()
- 1D_self_operational_cycleGan.py: configure_optimizers(), training_step(), training_epoch_end()
Use case diagram:-
A use case diagram at its simplest is a representation of a user's interaction with the system and
depicting the specifications of a use case. A use case diagram can portray the different types of
users of a system and the various ways that they interact with the system. This type of diagram is
typically used in conjunction with the textual use case and will often be accompanied by other
types of diagrams as well.
(Use case diagram: the actor is the user; the visible use cases include comparison graph and Exit.)
Sequence Diagram:
A sequence diagram is a kind of interaction diagram that shows how processes operate with
one another and in what order. It is a construct of a Message Sequence Chart. A sequence
diagram shows object interactions arranged in time sequence: it depicts the objects and
classes involved in the scenario and the sequence of messages exchanged between the
objects needed to carry out the functionality of the scenario. Sequence diagrams are
typically associated with use case realizations in the Logical View of the system under
development. Sequence diagrams are sometimes called event diagrams, event scenarios, or
timing diagrams.
(Sequence diagram: messages include comparison graph, Exit, and successfully exited.)
Collaboration diagram
(Numbered messages between the user and the system include: 3: Upload Noisy ECG signal;
6: successfully predicted quality signals; 7: comparison graph; 9: Exit; 10: successfully Exited.)
Component Diagram
In the Unified Modeling Language, a component diagram depicts how components are wired
together to form larger components and/or software systems. They are used to illustrate
the structure of arbitrarily complex systems.
Components are wired together by using an assembly connector to connect the required
interface of one component with the provided interface of another component. This
illustrates the service consumer / service provider relationship between the two components.
(Diagram elements: user; upload operational cycle-GANs model; Upload Noisy ECG signal;
predict quality signals; comparison graph; Exit.)
Deployment Diagram
A deployment diagram in the Unified Modeling Language models the physical deployment of
artifacts on nodes. To describe a web site, for example, a deployment diagram would show what
hardware components ("nodes") exist (e.g., a web server, an application server, and a database
server), what software components ("artifacts") run on each node (e.g., web application,
database), and how the different pieces are connected (e.g. JDBC, REST, RMI).
The nodes appear as boxes, and the artifacts allocated to each node appear as rectangles within
the boxes. Nodes may have sub nodes, which appear as nested boxes. A single node in a
deployment diagram may conceptually represent multiple physical nodes, such as a cluster of
database servers.
Deployment diagram:
(The deployment diagram's nodes mirror the same operations: upload operational model,
Upload Noisy ECG signal, predict quality signals, comparison graph, and Exit, with the
user as the actor.)
Activity diagram:
The activity diagram is another important diagram in UML used to describe the dynamic
aspects of the system. It is basically a flow chart representing the flow from one
activity to another, where an activity can be described as an operation of the system.
The control flow is drawn from one operation to the next; this flow can be sequential,
branched, or concurrent.
Activity diagram:
Data Flow Diagram:
Data flow diagrams illustrate how data is processed by a system in terms of inputs and
outputs, and can be used to provide a clear representation of any business function. The
technique starts with an overall picture of the business and continues by analyzing each
of the functional areas of interest. This analysis can be carried out at precisely the
level of detail required; the technique exploits a method called top-down expansion to
conduct the analysis in a targeted way.
As the name suggests, a Data Flow Diagram (DFD) is an illustration that explicates the
passage of information through a process. A DFD can be drawn using simple symbols, and
complicated processes can be automated by creating DFDs with easy-to-use, freely
downloadable diagramming tools. A DFD is a model for constructing and analyzing
information processes: it illustrates the flow of information in a process in terms of
its inputs and outputs, and can also be referred to as a Process Model. A DFD demonstrates
a business or technical process with the support of the external data stores involved, the
data flowing from one process to another, and the end results.
Dataflow Diagram:
5. IMPLEMENTATION
5.1 Python
Python is a general-purpose language. It has a wide range of applications, from web
development (Django, Bottle) and scientific and mathematical computing (Orange, SymPy,
NumPy) to desktop graphical user interfaces (Pygame, Panda3D). The syntax of the language
is clean, and code tends to be relatively short. It's fun to work in Python because it
allows you to think about the problem rather than focusing on the syntax.
History of Python:
Python was created by Guido van Rossum. Its design began in the late 1980s, and it was
first released in February 1991.
No, it wasn't named after a dangerous snake: van Rossum was a fan of the BBC comedy
series "Monty Python's Flying Circus", and the name "Python" was adopted from it.
Features of Python:
Python has a very simple and elegant syntax. It's much easier to read and write Python
programs than programs in languages such as C++, Java, or C#. Python makes programming
fun and allows you to focus on the solution rather than the syntax.
If you are a newbie, it's a great choice to start your journey with.
You can freely use and distribute Python, even for commercial use. Not only can you use
and distribute software written in it, you can even make changes to Python's source code.
Portability
You can move Python programs from one platform to another and run them without any
changes. Python runs seamlessly on almost all platforms, including Windows, Mac OS X, and Linux.
Python code can be combined with C/C++ code, which gives your application high
performance as well as scripting capabilities that other languages may not provide out of the box.
Unlike C/C++, you don't have to worry about daunting tasks like memory management, garbage
collection and so on.
Likewise, when you run Python code, it is automatically converted to a form your computer
understands; you don't need to worry about any lower-level operations.
Python has a number of standard libraries which make a programmer's life much easier,
since you don't have to write all the code yourself. For example, need to connect to a
MySQL database on a web server? You can use the MySQLdb library after import MySQLdb .
Standard libraries in Python are well tested and used by hundreds of people. So you can be sure
that it won't break your application.
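For instance, the sqlite3 module ships with Python itself, so a database can be used with no extra installation at all. A minimal sketch (sqlite3 is used here as a bundled stand-in for the third-party MySQLdb mentioned above):

```python
import sqlite3

# An in-memory database: no server process or external driver needed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # [('alice',)]
conn.close()
```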
Object-oriented
Everything in Python is an object. Object oriented programming (OOP) helps you solve a
complex problem intuitively.
With OOP, you are able to divide these complex problems into smaller sets by creating objects.
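A minimal sketch of that idea, using two hypothetical classes that split a small signal-processing task into separate objects:

```python
class Signal:
    """A thin wrapper around a list of samples."""
    def __init__(self, samples):
        self.samples = samples

    def mean(self):
        return sum(self.samples) / len(self.samples)

class Detrender:
    """Removes the mean (a crude baseline) from a Signal."""
    def apply(self, signal):
        m = signal.mean()
        return Signal([s - m for s in signal.samples])

clean = Detrender().apply(Signal([1.0, 2.0, 3.0]))
print(clean.samples)  # [-1.0, 0.0, 1.0]
```

Each object has one responsibility, so the two pieces can be tested and reused independently.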
Applications of Python:
Programming in Python is fun. It's easier to understand and write Python code. Why? The syntax
feels natural. Take this source code for an example:
a=2
b=3
sum = a + b
print(sum)
You don't need to declare the type of a variable in Python, and it's not necessary to add
a semicolon at the end of a statement. Python enforces good practices (such as proper
indentation). These small things can make learning much easier for beginners.
Python allows you to write programs having greater functionality with fewer lines of code.
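As a tiny illustration, summing the squares of the even numbers below ten takes a single expression:

```python
# Generator expression: filter, square, and sum in one line.
total = sum(n * n for n in range(10) if n % 2 == 0)
print(total)  # 0 + 4 + 16 + 36 + 64 = 120
```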
Here's a link to the source code of a Tic-tac-toe game with a graphical interface and a
smart computer opponent in less than 500 lines of code. This is just an example; you will
be amazed how much you can do with Python once you learn the basics.
Python has a large supporting community. There are numerous active forums online which can
be handy if you are stuck.
import os
import shutil
import numpy as np
from numpy import dot
from numpy.linalg import norm
import pandas as pd
import scipy.io as sio
import matplotlib.pyplot as plt
import torch
import torchvision.transforms.functional as TF
import pytorch_lightning as pl
from tkinter import *
from tkinter import filedialog
# The generator class is defined in GAN_Arch_details.py (see the class diagram)
from GAN_Arch_details import CycleGAN_Unet_Generator

main = Tk()
main.geometry("1300x1200")
def deleteDirectory():
    # Clear previously generated files from the Outputs folder
    for f in os.listdir('Outputs'):
        os.remove(os.path.join('Outputs', f))
def loadModel():
    global model
    text.delete('1.0', END)
    # Build the generator and restore the trained weights
    model = CycleGAN_Unet_Generator()
    checkpoint = torch.load("model/model_weights_16NQ3.pth")
    model.load_state_dict(checkpoint)
    model.eval()
def uploadNoisyImage():
    global filename
    text.delete('1.0', END)
    # (Reconstructed) let the user pick the noisy signal file; the rest of
    # this function's body was lost in the original listing
    filename = filedialog.askopenfilename()
def GanPredict(name):
    global accuracy
    accuracy = []
    # Load the GAN-restored outputs and the reference (real) signal
    gan_outputs = sio.loadmat("test_outputs/" + name + "_gan_outputs.mat")
    real_sig = sio.loadmat("test_outputs/" + name + "_real_sig.mat")
    gan_outputs = gan_outputs["gan_outputs"]
    real_sig = real_sig["real_sig"]
    # Annotated S (supraventricular) and V (ventricular) beat locations
    ab_beats = sio.loadmat("mats/R" + name + ".mat")
    S = ab_beats["S"]
    V = ab_beats["V"]
    S = pd.DataFrame(data=S)
    V = pd.DataFrame(data=V)
    # Trim the signals to a whole number of 4000-sample windows
    win_size = int(np.floor(len(gan_outputs) / 4000))
    gan_outputs = gan_outputs[:win_size * 4000]
    real_sig = real_sig[:win_size * 4000]
    gan_outputs1 = gan_outputs.reshape(win_size, 4000)
    real_sig1 = real_sig.reshape(win_size, 4000)
    # Binary masks marking the S and V beats per window
    S_arr = np.zeros(((win_size * 4000), 1))
    V_arr = np.zeros(((win_size * 4000), 1))
    S_arr[S.values] = 1
    V_arr[V.values] = 1
    S_arr = S_arr.reshape(win_size, 4000)
    V_arr = V_arr.reshape(win_size, 4000)
    for i in range(0, 4):
        gan_outputs = gan_outputs1[i, :]
        real_sig = real_sig1[i, :]
        V = V_arr[i, :]
        S = S_arr[i, :]
        # Cosine similarity between the restored and the reference window
        acc = dot(gan_outputs, real_sig) / (norm(gan_outputs) * norm(real_sig))
        accuracy.append(acc)
        if i == 0:
            # Plot the first window: reference signal on top, restored below.
            # (The tick arrays major_ticksx/minor_ticksx/major_ticksy/minor_ticksy
            # are defined in the original source; their definitions were lost in
            # this listing.)
            time_axis = np.arange(i * 4000, (i + 1) * 4000) / 400
            a = plt.figure()
            a.set_size_inches(12, 10)
            ax = plt.subplot(211)
            ax.set_xticks(major_ticksx)
            ax.set_xticks(minor_ticksx, minor=True)
            ax.set_yticks(major_ticksy)
            ax.set_yticks(minor_ticksy, minor=True)
            plt.plot(time_axis, real_sig, linewidth=0.7, color='k')
            plt.scatter(time_axis[real_sig * V != 0], real_sig[real_sig * V != 0],
                        c='#2ca02c', s=100, marker=(5, 1), alpha=0.5)
            ax.grid(which='minor', alpha=0.2, color='r')
            ax.grid(which='major', alpha=0.5, color='r')
            plt.ylabel('Amplitude', fontsize=13)
            ax2 = plt.subplot(212)  # second panel (line missing from the listing)
            ax2.set_xticks(major_ticksx)
            ax2.set_xticks(minor_ticksx, minor=True)
            ax2.set_yticks(major_ticksy)
            ax2.set_yticks(minor_ticksy, minor=True)
            plt.plot(time_axis, gan_outputs, linewidth=0.7, color='k')
            plt.scatter(time_axis[gan_outputs * V != 0], gan_outputs[gan_outputs * V != 0],
                        c='#2ca02c', s=100, marker=(5, 1), alpha=0.5)
            ax2.grid(which='minor', alpha=0.2, color='r')
            ax2.grid(which='major', alpha=0.5, color='r')
            plt.title("Operational Cycle-GAN", fontsize=15)
            plt.ylabel('Amplitude', fontsize=13)
    plt.tight_layout(pad=1.0)
    plt.show()
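The `acc` value computed inside `GanPredict` above is the cosine similarity between the restored window and the reference window. In isolation, the metric can be sketched as:

```python
import numpy as np
from numpy import dot
from numpy.linalg import norm

def cosine_similarity(a, b):
    # 1.0 means identical direction (perfect restoration up to scale),
    # 0.0 means the signals are orthogonal.
    return dot(a, b) / (norm(a) * norm(b))

x = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(x, x))        # ~1.0
print(cosine_similarity(x, 2.0 * x))  # scale-invariant: still ~1.0
```

Because the measure is scale-invariant, it rewards restoring the shape of the ECG segment rather than its absolute amplitude.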
def predictSignals():
    deleteDirectory()
    # Record name is the prefix before the first underscore, e.g. "1_noise.sig" -> "1"
    name = os.path.basename(filename).split("_")
    print(name[0])
    GanPredict(name[0])
def graph():
    global accuracy
    text.delete('1.0', END)
    # Report the four stored scores under these labels, as in the original
    text.insert(END, "Accuracy : " + str(accuracy[0]) + "\n")
    text.insert(END, "Precision : " + str(accuracy[1]) + "\n")
    text.insert(END, "Recall : " + str(accuracy[2]) + "\n")
    text.insert(END, "FSCORE : " + str(accuracy[3]) + "\n")
    text.update_idletasks()
    # Bar chart of the four metric values
    height = accuracy
    bars = ('Accuracy', 'Precision', 'Recall', 'FSCORE')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.xlabel("Metric Name")
    plt.ylabel("Metric Values")
    plt.show()
def close():
    main.destroy()

# --- GUI layout ---
# The widget constructor calls below are reconstructed: the original listing
# preserved only the .config()/.place() lines; button labels follow the
# screenshots section.
font = ('times', 16, 'bold')
title = Label(main, text='Blind ECG Restoration by Operational Cycle-GAN')
title.config(bg='HotPink4', fg='yellow2')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=0, y=5)

font1 = ('times', 13, 'bold')
modelButton = Button(main, text="Load Operational Cycle-GAN Model", command=loadModel, font=font1)
modelButton.place(x=50, y=100)
uploadButton = Button(main, text="Upload Noisy ECG Signal", command=uploadNoisyImage, font=font1)
uploadButton.place(x=470, y=100)
predictButton = Button(main, text="Predict Quality Signals", command=predictSignals, font=font1)
predictButton.place(x=790, y=100)
graphButton = Button(main, text="Comparison Graph", command=graph, font=font1)
graphButton.place(x=50, y=150)
closeButton = Button(main, text="Exit", command=close, font=font1)
closeButton.place(x=470, y=150)

text = Text(main, height=20, width=130)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=10, y=200)
text.config(font=font1)

main.config(bg='plum2')
main.mainloop()
6. TESTING:
Implementation is one of the most important phases of a project, and one in which care
must be taken, because all the effort undertaken during the project depends on it.
Implementation is the most crucial stage in achieving a successful system and in giving
users confidence that the new system is workable and effective. Each program was tested
individually at the time of development using sample data, and it was verified that the
programs link together in the way specified in the program specifications. The computer
system and its environment are tested to the satisfaction of the user.
Implementation
The implementation phase is less creative than system design. It is primarily concerned
with user training and file conversion. The system may require extensive user training,
and the initial parameters of the system may need to be modified as a result of
programming. A simple operating procedure is provided so that the user can understand the
different functions clearly and quickly. The different reports can be obtained on either
the inkjet or the dot matrix printer available at the disposal of the user. The proposed
system is very easy to implement. In general, implementation means the process of
converting a new or revised system design into an operational one.
Testing
Testing is the process in which test data is prepared and used to test the modules
individually, after which validation is given for the fields. Then system testing takes
place, which makes sure that all components of the system function properly as a unit.
The test data should be chosen such that it passes through all possible conditions.
Testing is the stage of implementation aimed at ensuring that the system works accurately
and efficiently before actual operation commences. The following is a description of the
testing strategies carried out during the testing period.
System Testing
Testing has become an integral part of any system or project, especially in the field of
information technology. The importance of testing, as a method of verifying whether one is
ready to move further and whether the system can withstand the rigors of a particular
situation, cannot be underplayed; that is why testing before deployment is so critical.
Before the software is given to the user, it must be tested to check whether it solves the
purpose for which it was developed. This testing involves various types through which one
can ensure the software is reliable. The program was tested logically, and the pattern of
execution of the program for a set of data was repeated. Thus the code was exhaustively
checked for all possible correct data, and the outcomes were checked as well.
Module Testing
To locate errors, each module is tested individually. This enables us to detect errors and
correct them without affecting any other modules. Whenever a program does not satisfy the
required function, it must be corrected to get the required result. Thus all the modules
are individually tested bottom-up, starting with the smallest and lowest modules and
proceeding to the next level. For example, the job classification module is tested
separately with different jobs and their approximate execution times, and the results of
the test are compared with results prepared manually. The comparison shows that the
proposed system works more efficiently than the existing system. In this system, the
resource classification and job scheduling modules are tested separately and their
corresponding results obtained, which reduces the process waiting time.
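The kind of per-module check described above can be sketched with Python's standard unittest library. Here the record-name parsing step that `predictSignals` performs (`os.path.basename(...).split("_")[0]`) is exercised in isolation; the helper name `record_name` is hypothetical:

```python
import os
import unittest

def record_name(path):
    # Mirrors the parsing inside predictSignals(): "1_noise.sig" -> "1"
    return os.path.basename(path).split("_")[0]

class RecordNameTest(unittest.TestCase):
    def test_strips_directory_and_suffix(self):
        self.assertEqual(record_name("uploads/1_noise.sig"), "1")

    def test_plain_filename(self):
        self.assertEqual(record_name("2_noise.sig"), "2")

# Run the tests; exit=False keeps the interpreter alive after the run
unittest.main(argv=["ignored"], exit=False)
```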
Integration Testing
After module testing, integration testing is applied. When linking the modules there is a
chance for errors to occur; these errors are caught and corrected by this testing. In this
system all modules were connected and tested, and the test results were correct. Thus the
mapping of jobs to resources is done correctly by the system.
Acceptance Testing
When the user finds no major problems with its accuracy, the system passes through a final
acceptance test. This test confirms that the system meets the original goals, objectives,
and requirements established during analysis. With acceptance testing resting on the
shoulders of users and management, the system is finally accepted and ready for operation.
| Test Case Id | Test Case Name | Test Case Desc. | Test Step | Expected | Actual | Status | Priority |
|---|---|---|---|---|---|---|---|
| 01 | Load operational Cycle-GAN model | Verify whether the operational Cycle-GAN model is loaded | If the model is not loaded we cannot do anything | We cannot do any further operations | We can do further operations | High | High |
| 02 | Upload Noisy ECG signal | Verify whether the ECG signal is uploaded | If the ECG signal is not uploaded | We cannot do any further operations | We can do further operations | High | High |
| 03 | Predict Quality Signals | Verify whether quality signals are predicted | If the quality prediction is not done | We cannot run the operation | We can run the operation | High | High |
| 04 | Comparison Graph | Verify whether the comparison graph is plotted | If the comparison graph is not plotted | We cannot run the operation | We can run the operation | High | High |
7. SCREENSHOTS:
In the above screen the model is loaded; now click the 'Upload Noisy ECG Signal' button
to upload a noisy signal and get the output below.
In the above screen the '1_noise.sig' signal file is selected and uploaded; click the
'Open' button to load the file and get the output below.
In the above screen the noisy ECG signal data is loaded; now click the 'Predict Quality
Signal' button to get the output below.
In the above screen, the first plot is the noisy ECG signal, where '*' marks the noisy
parts (sudden jumps or drops in the values), and the second is the GAN-restored ECG
signal. In the second plot, the positions marked '*' show accurate values without noise
(the sudden jumps have been corrected). If there is no noise in the input signal, no '*'
mark appears. Similarly, other signals can be uploaded and tested; now click the
'Comparison Graph' button to get the graph below.
In the above screen we can see the operational Cycle-GAN accuracy and the other metric
values, together with the graph. Another example is shown next.
In the above screen another signal file is uploaded; the output is shown below.
In the above screen there is no '*' mark, so the signal contains no noise values.
8. CONCLUSION:
The major problem of Holter and wearable ECG sensors is that the acquired ECG signal may severely be
corrupted by a blend of artifacts, and this makes it too difficult, if not infeasible, to diagnose any heart
abnormality by machines or humans. In this study, we propose a novel approach to restore the ECG signal
to a clinical level quality regardless of the type or severity of the artifacts. Therefore, we follow a
different path from the prior works, which approached this as a “denoising” problem for additive
(artificial) noise with a fixed type and power so that they could propose a supervised solution. Such
common regression-based solutions are not useful in practice and that is why this study addressed this
problem with a blind restoration approach without any prior assumption over the artifact types and
severity. As the baseline method, we proposed 1D Cycle-GANs, and to further boost the performance, we
proposed operational Cycle-GANs. Once Cycle-GANs are trained over the clean and corrupted batches,
the generator, GX2C, learns to transform the corrupted ECG segments to clean counterparts while
preserving the ECG characteristics. The optimized PyTorch code and the labeled CPSC-2020 dataset are
publicly shared in [32]. The quantitative, qualitative, and medical evaluations performed over an
extensive set of real Holter recordings demonstrate that the corrupted ECG can indeed be restored with a
desired (clinical) quality level, which in turn improves the efficiency and accuracy of ECG diagnosis by
machines and humans. In particular, the R-peak detection performances of the two landmark detectors
have been significantly improved over the restored signal. During the medical evaluation, the
cardiologists confirmed that the restored ECG signal is more useful for arrhythmia diagnosis 95.51% of
the time. They further note that the restoration has almost no side effects on the arrhythmia beats, i.e.,
neither causing an arrhythmic beat to turn to a normal beat nor transforming a normal beat into an
arrhythmic beat. Finally, besides the superior ECG quality achieved by the proposed restoration approach,
the visual evaluation further demonstrated that the hidden/undetected arrhythmia events can possibly be
diagnosed from the restored ECG. A similar conclusion can also be made on the significant peak
detection performance gain of arrhythmia beats achieved after the restoration. Among all proposed
restoration approaches by 1D Cycle-GANs, the novel operational Cycle-GANs have a superior restoration
performance and can even outperform a more complex counterpart with convolutional neurons. This is
not surprising considering the superiority of Self-ONNs in many challenging ML and CV tasks over the
(deep) CNN models [28]–[30]. Despite the elegant restoration performance, we note that very
occasionally some potential arrhythmia beats with very low amplitudes may not be distinguished from the
background noise, and hence not restored. Moreover, a few over-corrections were encountered, yielding
artificial beats. Such minority cases can be addressed by designing a cost function that incorporates the
class information (normal, S, and V type beats). Finally, the depth and complexity of the operational
Cycle-GANs can further be reduced while boosting the restoration performance by using the super neuron
model recently proposed in [31]. These will be the topics of our future research.
9. REFERENCES:
[1] Z. P. Cai et al., “An open-access long-term wearable ECG database for premature ventricular
contractions and supraventricular premature beat detection,” J. Med. Imag. Health Inform., vol. 10, pp.
2663–2667, 2020.
[2] R. Sameni et al., “A nonlinear Bayesian filtering framework for ECG denoising,” IEEE Trans.
Biomed. Eng., vol. 54, no. 12, pp. 2172–2185, Dec. 2007, doi: 10.1109/TBME.2007.897817.
[3] O. Sayadi and M. B. Shamsollahi, “Multiadaptive bionic wavelet transform: Application to ECG
denoising and baseline wandering reduction,” EURASIP J. Adv. Signal Process., vol. 2007, 2007, Art.
no. 41274, doi: 10.1155/2007/41274.
[4] R. Liu, M. Shu, and C. Chen, “ECG signal denoising and reconstruction based on basis pursuit,” Appl.
Sci., vol. 11, 2021, Art. no. 1591, doi: 10.3390/app11041591.
[5] S. Poungponsri and X.-H. Yu, “An adaptive filtering approach for electrocardiogram (ECG) signal
noise reduction using neural networks,” Neurocomputing, vol. 117, pp. 206–213, 2013, doi:
10.1016/j.neucom.2013.02.010.
[6] H. Chiang et al., “Noise reduction in ECG signals using fully convolutional denoising autoencoders,”
IEEE Access, vol. 7, pp. 60806–60813, 2019, doi: 10.1109/ACCESS.2019.2912036.
[7] H. Aqeel and J. Ammar, “ECG signal de-noising based on deep learning autoencoder and discrete
wavelet transform,” Int. J. Eng. Technol., vol. 9, pp. 415–423, 2020, doi: 10.14419/ijet.v9i2.30499.
[8] K. Antczak, “Deep recurrent neural networks for ECG signal denoising,” Jun. 2018. Accessed: Jan.
16, 2022. [Online]. Available: https://arxiv.org/abs/1807.11551
[9] P. Singh and G. Pradhan, “A new ECG denoising framework using generative adversarial network,”
IEEE/ACM Trans. Comput. Biol. Bioinf., vol. 18, no. 2, pp. 759–764, Mar/Apr. 2021.
[10] S. Kiranyaz et al., “Progressive operational perceptrons,” Neurocomputing, vol. 224, pp. 142–154,
Feb. 2017, doi: 10.1016/j.neucom.2016.10.044.
[11] D. T. Tran et al., “Heterogeneous multilayer generalized operational perceptron,” IEEE Trans.
Neural Netw. Learn. Syst., vol. 31, no. 3, pp. 710–724, Mar. 2020. doi: 10.1109/TNNLS.2019.2914082.
[12] D. T. Tran et al., “Progressive operational perceptron with memory,” Neurocomputing, vol. 379, pp.
172–181, 2019, doi: 10.1016/j.neucom.2019.10.079.
[13] D. T. Tran et al., “PyGOP: A python library for generalized operational perceptron,” Knowl.-Based
Syst., vol. 182, Oct. 2019, Art. no. 104801, doi: 10.1016/j.knosys.2019.06.009.
[14] S. Kiranyaz et al., “Generalized model of biological neural networks: Progressive operational
perceptrons,” in Proc. Int. Joint Conf. Neural Netw., 2017, pp. 2477–2485.
[15] D. T. Tran et al., “Knowledge transfer for face verification using heterogeneous generalized
operational perceptrons,” in Proc. IEEE Int. Conf. Image Process., Taipei, Taiwan, 2019, pp. 1168–1172.
[16] S. Kiranyaz et al., “Operational neural networks,” Neural Comput. Appl., vol. 32, no. 11, pp. 6645–
6668, Jun. 2020.
[17] S. Kiranyaz et al., “Exploiting heterogeneity in operational neural networks by synaptic plasticity,”
Neural Comput. Appl., vol. 33, pp. 7997–8015, Jan. 2021.
[18] J. Malik, S. Kiranyaz, and M. Gabbouj, “FastONN – Python based open-source GPU implementation
for operational neural networks,” Jun. 2020. Accessed: Jan. 16, 2022. [Online]. Available:
https://arxiv.org/abs/2006.02267
[19] I. Goodfellow et al., “Generative adversarial nets,” in Proc. Adv. Neural Inf. Process. Syst., 2014,
vol. 27, pp. 2672–2680.
[20] J.-Y. Zhu et al., “Unpaired image-to-image translation using cycle-consistent adversarial networks,”
in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 2223–2232.
[21] S. Kiranyaz et al., “Self-organized operational neural networks with generative neurons,” Neural
Netw., vol. 140, pp. 294–308, Aug. 2021, doi: 10.1016/j.neunet.2021.02.028.
[22] J. Malik, S. Kiranyaz, and M. Gabbouj, “Self-organized operational neural networks for severe
image restoration problems,” Neural Netw., vol. 135, pp. 201–211, Mar. 2021, doi:
10.1016/j.neunet.2020.12.014.
[23] J. Pan and W. J. Tompkins, “A real-time QRS detection algorithm,” IEEE Trans. Biomed. Eng., vol.
BME-32, no. 3, pp. 230–236, Mar. 1985, doi: 10.1109/TBME.1985.325532.
[24] P. S. Hamilton and W. J. Tompkins, “Quantitative investigation of QRS detection rules using the
MIT/BIH arrhythmia database,” IEEE Trans. Biomed. Eng., vol. BME-33, no. 12, pp. 1157–1165, Dec. 1986.
[25] G. van Rossum and F. L. Drake Jr., Python Reference Manual. Amsterdam, The Netherlands:
Centrum voor Wiskunde en Informatica, 1995.