Bioinformatics and
Medical Applications
Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106
Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])
Bioinformatics and
Medical Applications
Edited by
A. Suresh
S. Vimal
Y. Harold Robinson
Dhinesh Kumar Ramaswami
and
R. Udendhran
This edition first published 2022 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA
© 2022 Scrivener Publishing LLC
For more information about Scrivener publications please visit www.scrivenerpublishing.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
ISBN 978-1-119-79183-6
Set in size of 11pt and Minion Pro by Manila Typesetting Company, Makati, Philippines
10 9 8 7 6 5 4 3 2 1
Contents
Preface xv
1 Probabilistic Optimization of Machine Learning Algorithms
for Heart Disease Prediction 1
Jaspreet Kaur, Bharti Joshi and Rajashree Shedge
1.1 Introduction 2
1.1.1 Scope and Motivation 3
1.2 Literature Review 4
1.2.1 Comparative Analysis 5
1.2.2 Survey Analysis 5
1.3 Tools and Techniques 10
1.3.1 Description of Dataset 11
1.3.2 Machine Learning Algorithm 12
1.3.3 Decision Tree 14
1.3.4 Random Forest 15
1.3.5 Naive Bayes Algorithm 16
1.3.6 K Means Algorithm 18
1.3.7 Ensemble Method 18
1.3.7.1 Bagging 19
1.3.7.2 Boosting 19
1.3.7.3 Stacking 19
1.3.7.4 Majority Vote 19
1.4 Proposed Method 20
1.4.1 Experiment and Analysis 20
1.4.2 Method 22
1.5 Conclusion 25
References 26
The editors thank the contributors most profoundly for their time and
effort.
A. Suresh
S. Vimal
Y. Harold Robinson
Dhinesh Kumar Ramaswami
R. Udendhran
February 2022
1
Probabilistic Optimization of Machine Learning Algorithms for Heart Disease Prediction
Jaspreet Kaur*, Bharti Joshi and Rajashree Shedge
Department of Computer Engineering, Ramrao Adik Institute of Technology, Nerul, Navi Mumbai, India
Abstract
Big Data and Machine Learning have been effectively used in medical management, leading to cost reduction in treatment, prediction of epidemic outbreaks, avoidance of preventable diseases, and improvement in the quality of life.
Prediction begins with the machine learning patterns from several existing known datasets and then applies a similar model to an unknown dataset to check the result. In this chapter, we investigate Ensemble Learning, which overcomes the limitations of a single algorithm, such as bias and variance, by using a multitude of algorithms. The focus is not solely on increasing the accuracy of weak classification algorithms but also on implementing the algorithm on a medical dataset where it can be effectively used for analysis, prediction, and treatment. The results of the investigation indicate that ensemble techniques are effective in improving forecast accuracy and display acceptable performance in disease prediction. Additionally, we propose a procedure to further improve the accuracy after applying the ensemble method: we focus on the wrongly classified records, use probabilistic optimization to select pertinent columns by increasing their weight, and reclassify, which results in further improved accuracy. The accuracy achieved by our proposed method is thus quite competitive.
A. Suresh, S. Vimal, Y. Harold Robinson, Dhinesh Kumar Ramaswami and R. Udendhran (eds.)
Bioinformatics and Medical Applications: Big Data Using Deep Learning Algorithms, (1–28)
© 2022 Scrivener Publishing LLC
1.1 Introduction
Healthcare and biomedicine are increasingly using big data technologies for research and development. Mammoth amounts of clinical data have been generated and collected at an unparalleled scale and speed. Electronic health records (EHR) store large amounts of patient data. The quality of healthcare can be greatly improved by employing big data applications to identify trends and discover knowledge. Details generated in hospitals fall into the following categories.
Table 1.1 Comparative analysis of prediction techniques.

“Effective heart disease prediction using hybrid machine learning techniques” [6]
Problem: Improve precision in the forecast of cardiovascular illness.
Solution: Presented a method called the Hybrid Random Forest with Linear Model (HRFLM). It utilizes an ANN with back propagation, taking 13 clinical features as input.
Result: HRFLM turned out to be quite precise in the prediction of heart illness.

“A classification for patients with heart disease based on Hoeffding tree” [7]
Problem: Characterize information for patients with coronary sickness and assess the models used to foresee coronary disease patients.
Solution: The Hoeffding tree handles incrementally growing trees and can learn from a stream of big data, assuming that the sample distribution remains constant over time.
Result: Results exhibit an accuracy of around 85% and a processing error value of 14%.

“Heart Disease Detection Using Machine Learning Majority Voting Ensemble Method” [8]
Problem: Give more certainty and precision to the specialist's analysis, considering that the model is trained using real information from healthy and sick patients.
Solution: Data was divided in an 80:20 ratio for training and testing, and a combination of four algorithms (SGD, KNN, RF, and LR) was used with the majority voting method.
Result: A precision of 90% was achieved based on the hard voting ensemble model.

“Robust Heart Disease Prediction: A Novel Approach based on Significant Feature and Ensemble Learning Model” [9]
Problem: Coronary illness prediction with accessible clinical information is one of the major challenges for scientists.
Solution: Selected significant attributes by using correlation together with RF and stratified K-fold cross-validation.
Result: Achieved accuracy of 86.94%, which outperforms the 85% precision reported by the Hoeffding tree method.

“A Comprehensive Investigation and Comparison of Machine Learning Techniques in the Domain of Heart Disease” [10]
Problem: Compare the accuracy of different data mining classification schemes, employing ensemble machine learning techniques, for forecasting heart ailments.
Solution: Various classifiers, namely, DT, NB, MLP, KNN, SCRL, RBF, and SVM, have been employed.
Result: The SVM method using the boosting technique outperforms the other aforementioned methods.

“Increasing Diversity in Random Forests Using Naive Bayes” [11]
Problem: Improve the classification accuracy.
Solution: Put forward an enhanced variety of Random Forests constructed by pseudo-randomly picking certain attributes and incorporating Naive Bayes estimation into the training and segregation stage.
Result: The proposed method works more efficiently in comparison to other advanced ensemble methods.

“Improved Classification Techniques by Combining KNN and Random Forest with Naive Bayesian Classifier” [12]
Problem: Increase classification accuracy.
Solution: Utilized average class probabilities to combine Naive Bayes, KNN, and Random Forest.
Result: Naive Bayes combined with Random Forest turned out to be the ideal blend.

“Comparison of Machine Learning Models in Prediction of Cardiovascular Disease Using Health Record Data” [13]
Problem: Examination of ML models on forecasting cardiovascular illness utilizing patients' cardiovascular risk factors.
Solution: Used the Cross Industry Standard Process for Data Mining, and four algorithms, namely, RF, NB, LR, and KNN, were applied.
Result: Random Forest outperforms other models by achieving an accuracy of 73%, sensitivity of 65%, and specificity of 80%.

“Feature Analysis of Coronary Artery Heart Disease Data Sets” [14]
Problem: Combine results of the AI examination applied on various datasets centering on CAD.
Solution: Common features are compared and extracted from different datasets, and fast decision trees and a pruned C4.5 tree are applied to them.
Result: Precision on the collected dataset is around 80%.

“Cardio Vascular Disease Classification Ensemble Optimization Using Genetic Algorithm and Neural Network” [15]
Problem: To construct a detection system based on a fuzzy logic algorithm for feature extraction, making use of a neural network classifier for heart disease.
Solution: The dataset is categorized via the usage of fuzzy logic and a genetic algorithm, and, moreover, training is performed by a neural network using the extracted features.
Result: The accuracy is elevated up to 99.97% and the error rate is decreased to 0.987%.
Key ideas such as the data setup, data classification, data mining models,
and techniques are described below.
[Figure 1.1: Correlation heatmap of the dataset features (age, gender, height, weight, ap_hi, ap_lo, cholesterol, gluc, smoke, alco, active, cardio); the strongest correlations with the target cardio are age (0.24), cholesterol (0.22), and weight (0.18).]
Figures 1.2, 1.3, 1.4, and 1.5 display the distribution of some of the input values such as age, gender, presence of cardiovascular disease, and cholesterol type.
[Figures 1.2 to 1.5: Distribution plots of the input data: the distribution of age in days (roughly 10,000 to 24,000 days); the count of records with and without cardiovascular disease (cardio = 0: 35,014 records; cardio = 1: 34,977 records); the frequency of disease by cholesterol type (types 1 to 3); and the proportion of cardiovascular disease by gender (1 and 2).]
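As a hedged illustration of how a correlation heatmap and the distribution plots above might be produced, the following sketch uses pandas, matplotlib, and seaborn on the public Kaggle cardiovascular-disease file [3]. The file name cardio_train.csv, the semicolon separator, and the presence of an id column are assumptions about that dataset, not details taken from the chapter.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Assumed file name and format for the Kaggle "Cardiovascular Disease dataset".
df = pd.read_csv("cardio_train.csv", sep=";")

# Correlation heatmap over the features (age, gender, height, weight, ap_hi,
# ap_lo, cholesterol, gluc, smoke, alco, active, cardio).
corr = df.drop(columns=["id"], errors="ignore").corr()
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")
plt.title("Feature correlation heatmap")
plt.show()

# Distribution of age (given in days) and class balance of the target column.
df["age"].plot(kind="hist", bins=50, density=True, title="age in days")
plt.show()
print(df["cardio"].value_counts())   # roughly balanced, ~35,000 records per class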
Gini impurity is defined as

I_G = 1 - \sum_{j=1}^{c} p_j^2

Entropy is defined as

I_H = - \sum_{j=1}^{c} p_j \log_2 p_j

where p_j is the proportion of samples that belong to class j at a specific node and c is the number of classes.
Gini impurity and entropy are used as selection criteria for decision trees. Basically, they help us determine a good split point for the root and decision nodes of classification/regression trees. A decision tree splits on the feature that yields the highest information gain (IG) for the chosen criterion (Gini or entropy); information gain is based on the decrease in impurity after the dataset is split on an attribute.
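To make these criteria concrete, here is a small, self-contained sketch (not taken from the chapter) that computes Gini impurity, entropy, and the information gain of a candidate split from arrays of class labels.

import numpy as np

def gini(labels):
    # Gini impurity I_G = 1 - sum_j p_j^2 for the class labels at a node.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    # Entropy I_H = -sum_j p_j log2 p_j for the class labels at a node.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right, criterion=entropy):
    # Decrease in impurity after splitting `parent` into `left` and `right`.
    n = len(parent)
    weighted = (len(left) / n) * criterion(left) + (len(right) / n) * criterion(right)
    return criterion(parent) - weighted

# Example: a split that separates the two classes perfectly.
parent = np.array([0, 0, 0, 1, 1, 1, 1, 0])
left, right = np.array([0, 0, 0, 0]), np.array([1, 1, 1, 1])
print(information_gain(parent, left, right))          # 1.0 bit with entropy
print(information_gain(parent, left, right, gini))    # 0.5 with the Gini criterion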
A number of benefits, such as interpretability, make decision trees attractive in practice.
In a Random Forest, the k-th tree is grown using an independently sampled parameter vector \theta_k = (\theta_{k1}, \theta_{k2}, \ldots, \theta_{kp}).
[Figure: Random Forest architecture: the training set is used to build Decision Trees 1 through n, whose individual outputs are combined by voting (averaging) to produce the prediction for the test set.]
Naive Bayes is based on Bayes' theorem:

P(H|E) = \frac{P(E|H) \, P(H)}{P(E)}

Applied to classification with class variable y and feature vector X = (x_1, x_2, \ldots, x_n), this becomes

P(y|X) = \frac{P(X|y) \, P(y)}{P(X)}

where P(y|X) is the posterior probability of the class given the features, P(X|y) is the likelihood, P(y) is the prior probability of the class, and P(X) is the evidence.
Therefore, to find the category y with the highest probability, we use the following function:

y = \arg\max_y P(y) \prod_{i=1}^{n} P(x_i \mid y)
• Easy to execute.
• Requires a limited amount of training data to measure
parameters.
• High computational efficiency.
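As a hedged illustration of the rule above, the following sketch applies scikit-learn's GaussianNB, the variant whose ROC curve appears later in the chapter. The 70:30 split, the random seed, and the column names cardio and id are assumptions for this sketch, not values taken from the text.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Assumed file name and format for the Kaggle cardiovascular dataset.
df = pd.read_csv("cardio_train.csv", sep=";")
X = df.drop(columns=["cardio", "id"], errors="ignore")
y = df["cardio"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# GaussianNB estimates the prior P(y) and per-feature Gaussian likelihoods P(x_i | y),
# then predicts argmax_y P(y) * prod_i P(x_i | y).
nb = GaussianNB().fit(X_train, y_train)
print("Naive Bayes accuracy:", accuracy_score(y_test, nb.predict(X_test)))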
[Figure: Types of ensemble learning: sequential ensemble learning (boosting, e.g., AdaBoost, stochastic gradient boosting), parallel ensemble learning (bagging, e.g., Random Forest, bagged decision trees, Extra Trees), and stacking (e.g., voting).]
1.3.7.1 Bagging
Bagging, or bootstrap aggregation, assigns equal weight to each model in the ensemble. It trains each model of the ensemble separately on a random subset of the training data in order to promote variety among the models. Random Forest is a classical example of the bagging technique, where multiple random decision trees are combined to achieve high accuracy. Samples are generated in such a manner that they differ from each other, and sampling with replacement is permitted.
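A minimal sketch of bagging with scikit-learn follows, reusing X_train and y_train from the Naive Bayes snippet above. The hyperparameter values are illustrative only, and the estimator keyword assumes scikit-learn 1.2 or newer.

from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Generic bagging: each tree sees a bootstrap sample drawn with replacement.
# (scikit-learn >= 1.2 uses `estimator`; older releases call this `base_estimator`.)
bagged_trees = BaggingClassifier(
    estimator=DecisionTreeClassifier(),
    n_estimators=100,
    bootstrap=True,
    random_state=42)

# Random Forest: bagging plus random feature selection at each split.
forest = RandomForestClassifier(n_estimators=100, random_state=42)

for name, model in [("bagged trees", bagged_trees), ("random forest", forest)]:
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(name, "mean CV accuracy:", scores.mean())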
1.3.7.2 Boosting
The term “boosting” refers to a family of algorithms which convert weak learners into a strong learner. It is an ensemble technique for improving the predictions of a given learning algorithm. It trains weak learners sequentially, each attempting to correct its predecessor. There are three kinds of boosting in particular: AdaBoost, which assigns more weight to the incorrectly classified data passed on to the next model; Gradient Boosting, which uses the residual errors made by the previous predictor to fit the new predictor; and Extreme Gradient Boosting, which overcomes drawbacks of Gradient Boosting by using parallelization, distributed computing, out-of-core computing, and cache optimization.
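A short sketch of the sequential boosting variants named above, again reusing the X_train/X_test split from earlier; the number of estimators and learning rate are illustrative, and XGBoost is only mentioned in a comment since it is a separate package.

from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier

# AdaBoost re-weights incorrectly classified samples before fitting the next weak learner.
ada = AdaBoostClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# Gradient boosting fits each new tree to the residual errors of the ensemble so far.
gbm = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 random_state=42).fit(X_train, y_train)

print("AdaBoost test accuracy:", ada.score(X_test, y_test))
print("Gradient boosting test accuracy:", gbm.score(X_test, y_test))

# Extreme Gradient Boosting adds parallelization, out-of-core computation, and cache
# optimization; with the xgboost package one would use xgboost.XGBClassifier instead.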
1.3.7.3 Stacking
Stacking utilizes meta-learning algorithms to discover how best to combine the forecasts of two or more base algorithms. A stacking model is a two-level architecture with Level 0 models, referred to as base models, and a Level 1 model, referred to as the meta-model. The meta-model is fit on forecasts made by the base models on out-of-sample data. The outputs of the base models used as input to the meta-model may be real values in the case of regression and probability values in the case of classification. A standard method for setting up the meta-model's training dataset is k-fold cross-validation of the base models.
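A sketch of such a two-level model with scikit-learn's StackingClassifier, which uses k-fold cross-validation (cv=5 here) to build the out-of-sample predictions that train the Level 1 meta-model. The choice of base models and meta-model is illustrative, not the chapter's configuration.

from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Level 0 (base) models.
base_models = [
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(random_state=42)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
]

# Level 1 meta-model, trained on out-of-fold class probabilities of the base models.
stack = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",   # pass probabilities, as described for classification
    cv=5)                           # k-fold construction of the meta-training set

stack.fit(X_train, y_train)
print("Stacking test accuracy:", stack.score(X_test, y_test))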
From this analysis, we found the PKmeans method to be the most efficient. Though the serial combination with K-means achieves the best accuracy on training data, it is not feasible for real data where the target column is not present. Relying on any single algorithm to correctly classify all the records is not possible; hence, we use the more suitable ensemble method, which utilizes the wisdom of the crowd. It uses the majority-voting type of ensemble, which adds up the votes for crisp class labels from the different models and predicts the class with the most votes.
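A hedged sketch of hard majority voting over the supervised classifiers discussed in this chapter is given below. It is not the authors' exact ensemble: the chapter also includes K-means, which is omitted here because scikit-learn's VotingClassifier expects supervised estimators, so the resulting accuracy will differ from the reported figures.

from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Hard voting: each model casts one vote for a crisp class label,
# and the class with the most votes wins.
voter = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier(random_state=42)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
    ],
    voting="hard")

voter.fit(X_train, y_train)
print("Majority-voting test accuracy:", voter.score(X_test, y_test))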
Our goal is to achieve the best possible accuracy, surpassing the accuracy achieved by the individual methods. Figures 1.8 to 1.11 show the confusion matrices plotted for Naive Bayes, Random Forest, and Decision Tree individually, as well as their ROC curves.
[Figures 1.8 to 1.11: Confusion matrices for Naive Bayes (row 0: 9234, 1251; row 1: 7355, 3158), Random Forest (row 0: 7687, 2798; row 1: 3207, 7306), and Decision Tree (row 0: 6654, 3831; row 1: 3880, 6633), together with their ROC curves (AUC: GaussianNB 0.691, DecisionTreeClassifier 0.632, RandomForestClassifier 0.775).]
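Confusion matrices and ROC curves of this kind can be reproduced for any fitted classifier along the following lines. This is a sketch reusing the fitted nb model and the held-out split from the earlier snippets; the models fitted here are illustrative stand-ins, so the numbers will not match the chapter's figures exactly.

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, RocCurveDisplay, roc_auc_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

dt = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

fig, ax = plt.subplots()
for name, model in [("GaussianNB", nb), ("DecisionTreeClassifier", dt),
                    ("RandomForestClassifier", rf)]:
    # Confusion matrix of crisp predictions on the held-out set.
    ConfusionMatrixDisplay.from_estimator(model, X_test, y_test)
    # ROC curve on shared axes, plus the area under it.
    RocCurveDisplay.from_estimator(model, X_test, y_test, ax=ax, name=name)
    print(name, "AUC =", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
plt.show()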
1.4.2 Method
We observed that by applying the majority-voting ensemble method to the algorithms Decision Tree, Random Forest, Naive Bayes, and K-means, we could achieve an accuracy of 91.56%. To further improve the precision, we propose the following algorithm. The design of the proposed method is given in Figure 1.12.
[Figure 1.12: Design of the proposed method: the dataset is classified in parallel by K-means clustering, Naive Bayes, Decision Tree, and Random Forest (kmeansPred, NBpred, DTpred, RFpred); the individual outputs are combined by voting, the winner being the class value with the maximum count; the result is checked against the dataset and the accuracy is calculated.]
initialization
d ← dataset
a1 ← Naive_Bayes_output ← ApplyNaiveBayes(d)
a2 ← Decision_tree_output ← ApplyDecisionTree(d)
a3 ← Random_forest_output ← ApplyRandomForest(d)
a4 ← K_Means_output ← ApplyKmeans(d)
winner(0,1) ← Voting(a1, a2, a3, a4)
op ← winner_of_max_count(0,1)
if op ≠ desired_output then
    ci ← probability(0,1)   // probability calculation of each column with output 0 or 1
end
Find the mean square error (MSE) with respect to the training data and identify the lowest-MSE parameter. Calculate the Euclidean distance

\sqrt{(x_i - x_j)^2 + (z_i - z_j)^2}

The outcome of this refinement step is enhanced accuracy.
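The chapter does not give enough detail to reproduce the refinement step exactly, so the following is only a loose sketch of how the voting stage and the identification of wrongly classified records might look. It reuses nb, dt, rf, X_train, and y_train from the earlier snippets, maps K-means clusters to class labels by majority vote (an assumption about how the K-means output is used), and leaves the probability-based column re-weighting and reclassification as comments.

import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

# K-means gives cluster ids; map each cluster to the majority class label it contains.
km = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X_train)
clusters = km.labels_
cluster_to_label = {c: stats.mode(y_train.to_numpy()[clusters == c],
                                  keepdims=False).mode for c in (0, 1)}
km_pred = np.vectorize(cluster_to_label.get)(clusters)

# Column-wise predictions of the four models on the training records.
preds = np.column_stack([nb.predict(X_train), dt.predict(X_train),
                         rf.predict(X_train), km_pred])

# Majority vote: the class value with the maximum count per record
# (a 2-2 tie between labels 0 and 1 falls back to the smaller label).
vote = stats.mode(preds, axis=1, keepdims=False).mode

# The wrongly classified records are the focus of the refinement step.
wrong = np.where(vote != y_train.to_numpy())[0]
print("records the voted ensemble still misclassifies:", len(wrong))

# Outline of the remaining refinement described in the text: estimate, per column, the
# probability of each output class, increase the weight of the most pertinent columns,
# and reclassify the wrong records, e.g., by the Euclidean distance
# sqrt((x_i - x_j)^2 + (z_i - z_j)^2) to records with the lowest mean square error.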
1.5 Conclusion
An ensemble of classifiers is a collection of classification models whose individual forecasts are combined, by means of weighted or unweighted voting, to assign a classification label to each new pattern. There is no single best way of creating successful ensemble methods, and this is still being actively researched. Predicting heart disease has been a topic of interest for researchers for a long time. We therefore check the accuracy of heart disease prediction using an ensemble of classifiers. For our study, we
References
1. Heart Disease Facts Statistics, Centers for Disease Control and Prevention,
[Online], Available: https://www.cdc.gov/heartdisease/facts.htm. [Accessed:
27-Apr-2019].
2. Thenmozhi, K. and Deepika, P., Heart disease prediction using classification
with different decision tree techniques. Int. J. Eng. Res. Gen. Sci., 2, 6, 6–11,
2014.
3. Kaggle Dataset, Cardiovascular Disease dataset, Available: https://www.kaggle.com/sulianova/cardiovascular-disease-dataset.
4. Kannan, R. and Vasanthi, V., Machine learning algorithms with ROC curve
for predicting and diagnosing the heart disease, in: Soft Computing and
Medical Bioinformatics, pp. 63–72, Springer Singapore, Jun 2018.
5. Latha, C.B.C. and Jeeva, S.C., Improving the accuracy of prediction of heart
disease risk based on ensemble classification techniques. Inform. Med.
Unlocked, 16, 100203, 2019.
6. Mohan, S., Thirumalai, C., Srivastava, G., Effective heart disease prediction
using hybrid machine learning techniques. IEEE Access, 7, 81542–81554,
2019.
7. Thaiparnit, S., Kritsanasung, S., Chumuang, N., A classification for patients
with heart disease based on hoeffding tree, in: 2019 16th International Joint
Conference on Computer Science and Software Engineering (JCSSE), Jul 2019,
IEEE.
3 Amity International Business School, Amity University, Noida, U.P., India
4 Vikram University, Ujjain, M.P., India
5 B.Tech CSE Third Year, Department of CSE, ABES Engineering College, Ghaziabad, U.P., India
Abstract
In old age, cancer is a major cause of death; forty percent of cancers are found in people over the age of 65. Lung cancer is one of these potentially deadly cancers. Young-, middle-, and old-aged patients, men who are chronic smokers, and women who have never smoked can all be victims of the disease. Therefore, a classification of lung cancer based on the associated risk (high risk, low risk) is required.
The study was conducted using a lung cancer classification scheme that studies micrographs and classifies them with a deep neural network built with a machine learning (ML) framework. Tissue microscopy images are classified according to risk using deep convolutional neural networks. Deep convolutional neural networks are used for classification (image search) based on the primary image (for example, its displayed label) and on similarity.
After that, scene recognition is performed. These algorithms help to recognize faces, tumors, people, road signs, plastics, and other kinds of visual information. The effectiveness of convolutional networks in image detection is one of the primary reasons the field has adopted them so widely. Such deep learning is a major advance in computer vision (CV) that has important applications.
A. Suresh, S. Vimal, Y. Harold Robinson, Dhinesh Kumar Ramaswami and R. Udendhran (eds.)
Bioinformatics and Medical Applications: Big Data Using Deep Learning Algorithms, (29–46)
© 2022 Scrivener Publishing LLC
2.1 Introduction
NSCLC includes three types of cancer: squamous cell carcinoma, adenocarcinoma, and large cell carcinoma, all derived from lung tissue. Adenocarcinoma is a slow-growing cancer that first appears in the outer region of the lung; lung cancer is more common in smokers, but adenocarcinoma is also the most common type of lung cancer in nonsmokers. Squamous cell carcinoma is more common in the central part of the lung and occurs more often in smokers, while large cell carcinoma can be found anywhere in the lung tissue and grows faster than adenocarcinoma [9, 20].
According to Choi, H. and his team, lung cancer risk classification models based on gene expression are of great interest, in contrast to previous models based on individual prognostic genes.
They reported that a risk classification model was developed based on a novel, network-level representation of gene expression, built using multiple microarray datasets of lung adenocarcinoma, and that a gene co-expression network analysis was carried out to identify survival-related networks. Genes representing these networks were then used to develop deep-learning-based risk classification models. The model was validated on two test sets. Its output was strongly related to patient survival in both the training and test sets, and in multivariate analysis the model was associated with patient prognosis independently of other clinical and pathological features.
The researchers have shown how gene structure and expression can be useful in early detection of cancer, so that suitable steps can be taken to treat patients with a higher probability of saving lives [4].
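The chapter does not reproduce Choi et al.'s architecture, but the general idea of a deep, gene-expression-based risk classifier can be sketched as follows with scikit-learn. The synthetic data, input size, and network width are placeholders for illustration only, not values from the study.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Placeholder data standing in for network-level gene-expression features
# (e.g., one aggregated value per survival-related gene network); y = 1 is high risk.
X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                           random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)
scaler = StandardScaler().fit(X_tr)

# A small fully connected network serving as the risk classification model.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)

risk = model.predict_proba(scaler.transform(X_te))[:, 1]   # predicted risk score
print("validation AUC:", roc_auc_score(y_te, risk))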