Informatics, Networking and Intelligent Computing not only provides the state-of-the-art in informatics and networking technology, but also offers innovative ideas on present and future problems. The book will be invaluable to academics and professionals involved in informatics, networking and intelligent computing.
INFORMATICS, NETWORKING AND INTELLIGENT COMPUTING
PROCEEDINGS OF THE 2014 INTERNATIONAL CONFERENCE ON INFORMATICS,
NETWORKING AND INTELLIGENT COMPUTING (INIC 2014), 16–17 NOVEMBER 2014,
SHENZHEN, CHINA
Editor
Jiaxing Zhang
Wuhan University, Wuhan, Hubei, China
CRC Press/Balkema is an imprint of the Taylor & Francis Group, an informa business
© 2015 Taylor & Francis Group, London, UK
Typeset by MPS Limited, Chennai, India
All rights reserved. No part of this publication or the information contained herein may be reproduced,
stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, by
photocopying, recording or otherwise, without written prior permission from the publishers.
Although all care is taken to ensure integrity and the quality of this publication and the information
herein, no responsibility is assumed by the publishers nor the author for any damage to the property or
persons as a result of operation or use of this publication and/or the information contained herein.
Published by: CRC Press/Balkema
P.O. Box 11320, 2301 EH Leiden, The Netherlands
e-mail: [email protected]
www.crcpress.com – www.taylorandfrancis.com
ISBN: 978-1-138-02678-0 (Hardback)
ISBN: 978-1-315-73453-8 (Ebook PDF)
Table of contents
Preface IX
Organizing committee XI
Computational intelligence
Decomposition genetic algorithm for cellular network spatial optimization 3
M. Livschitz
A heating and cooling model for office buildings in Seattle 9
W.Q. Geng, Y. Fu & G.H. Wei
Multi-depth Deep Feature learning for face recognition 15
C.C. Zhang, X.F. Liang & T. Matsuyama
Research on camera calibration based on OpenCV 21
H.M. Nie
Designing fuzzy rule-based classifiers using a bee colony algorithm 25
I.A. Hodashinsky, R.V. Meshcheryakov & I.V. Gorbunov
City management based on Geospatial Business Intelligence (Geo-BI) 35
Y.L. Zhou & W.J. Qi
Research into the development mode of intelligent military logistics 39
F. Zhang, D.R. Ling & M. Wang
Incomplete big data imputation algorithm using optimized possibilistic c-means and
deep learning 43
H. Shen & E.S. Zhang
Human-machine interaction for an intelligent wheelchair, based on head poses 49
Y. Wang, N. Liu & Y. Luo
An optimization model in the design of a product process 55
T. Qi, S.P. Fang & C.Q. Liu
The analysis of access control model based on Single Sign-on in SOA environment 83
G.Z. Wang, B. Zhang, X.F. Fei, Y. Liu, H.R. Gui & H.R. Xiong
An Android malware detection method using Dalvik instructions 89
K. Zhang, Q.S. Jiang, W. Zhang & X.F. Liao
Identification of spoofing based on a nonlinear model of a radio frequency power amplifier 95
Y.M. Gan & M.H. Sun
Computational model for mixed ownership duopoly competition in the electricity sector
with managerial incentives 101
V. Kalashnikov-Jr., A. Beda & L. Palacios-Pargas
Information system designing for innovative development assessment of the efficiency
of the Association of Innovative Regions of Russia members 173
V.V. Spitsin, O.G. Berestneva, L.Y. Spitsina, A. Karasenko, D. Shashkov &
N.V. Shabaldina
Selected aspects of applying UWB (Ultra Wide Band) technology in transportation 177
M. Džunda, Z. Cséfalvay & N. Kotianová
Design of a wireless monitoring system for a Pleurotus eryngii cultivation environment 183
L. Zhao & X.J. Zhu
Research on Trellis Coded Modulation (TCM) in a wireless channel 189
X.M. Lu, F. Yang, Y. Song & J.T. He
Riemann waves and solitons in nonlinear Cosserat medium 193
V.I. Erofeev & A.O. Malkhanov
Research on the adaptability of SAR imaging algorithms for squint-looking 197
M.C. Yu
Improved factor analysis algorithm in factor spaces 201
H.D. Wang, Y. Shi, P.Z. Wang & H.T. Liu
Research on the efficacy evaluation algorithms of Earth observation satellite mission 207
H.F. Wang, Y.M. Liu & P. Wu
An image fusion algorithm based on NonsubSampled Contourlet Transform and Pulse
Coupled Neural Networks 211
G.Q. Chen, J. Duan, Z.Y. Geng & H. Cai
A cognitive global clock synchronization algorithm in Wireless Sensor Networks (WSNs) 215
B. Ahmad, S.W. Ma, L. Lin, J.J. Liu & C.F. Yang
A multi-drop distributed smart sensor network based on IEEE1451.3 219
H.W. Lu, L.H. Shang & M. Zhou
Solitary strain waves in the composite nonlinear elastic rod 225
N.I. Arkhipova & V.I. Erofeev
Semiconducting inverter generators with minimal losses 227
A.B. Daryenkov & V.I. Erofeev
Research into a virtual machine migration selection strategy 231
L. Sun & X.Y. Wu
An analysis of the influence of power converters on the operation of devices 235
A.I. Baykov, V.I. Erofeev & V.G. Titov
A study of sign adjustment of complete network under the second structural theorem 269
H.Z. Deng, J. Wu, Y.J. Tan & P. Abell
Sybil detection and analysis of micro-blog Sina 273
R.F. Liu, Y.J. Zhao & R.S. Shi
A kinematics analysis of actions of a straddled Jaeger salto on uneven bars performed by
Shang Chunsong 279
L. Zhong, J.H. Zhou & T. Ouyang
Preface
The 2014 International Conference on Informatics, Networking and Intelligent Computing (INIC 2014) was held in Shenzhen, China on November 16–17, 2014. The main purpose of this conference was to provide a common forum for experts and scholars of excellence in their domains from all over the world to present their latest and inspiring work in the areas of informatics, networking and intelligent computing.
Informatics helps people make full use of information technology and thereby improve the efficiency of their work. In recent years, networking technology has developed rapidly and is widely used in both daily life and industrial manufacturing, in fields such as games, education, entertainment, stocks and bonds, financial transactions, architectural design and communication. Any ambitious company or institution can hardly do without the latest high-tech products. At present, intelligent computing is one of the most important methods of intelligent science and a current topic in information technology; machine learning, data mining and intelligent control, for example, have become hot topics of current research. In general, informatics, networking and intelligent computing have become more and more essential to people's life and work.
INIC 2014 received a large number of papers, of which fifty-eight were finally accepted after review. These articles were divided into several sessions, such as computational intelligence, networking technology and engineering, systems and software engineering, information technology and engineering applications, and signal and data processing.
During the organization of the conference we received much help from many people and institutions. Firstly, we would like to thank the whole committee for their support and enthusiasm. Secondly, we would like to thank the authors for their careful writing. Lastly, we are also grateful to the organizers of the conference and the other people who have helped us for their kindness.
We hope that all the attendees of INIC 2014 enjoyed a scientific conference in Shenzhen, China, and that all our participants can exchange useful information and make amazing developments in informatics, networking and intelligent computing after this conference.
INIC 2014 Committee
Organizing committee
Honorary Chair
E.P. Purushothaman, University of Science and Technology, India
General Chair
Jun Yeh, Tallinn University of Technology, Estonia
Y.H. Chang, Chihlee Institute of Technology, Taiwan
Program Chair
Tim Chou, Advanced Science and Industry Research Center, Hong Kong
W. K. Jain, Indian Institute of Technology, India
Computational intelligence
Decomposition genetic algorithm for cellular network spatial optimization
M. Livschitz
TEOCO, Fairfax, VA, USA
ABSTRACT: Spatial optimal planning of cellular networks is a fundamental problem in network design. This paper suggests a new algorithmic approach for automatic cell planning and describes a multi-layer decomposition genetic algorithm, which significantly improves optimization convergence. Algorithm convergence is compared with single-layer genetic algorithms based on the cell planning of real cellular networks.
by a vector of 5 parameters. Each parameter p is confined within a boundary and can be assigned one of a set of discrete values.
The boundaries of permissible values could be different. Thus tilt changes could be assigned any values in a range with a predefined step (usually 1 deg), azimuths have values according to a higher step and with a limitation on the minimal change, while antenna types and heights are assigned values from a list of available patterns or heights.
The goal of cell planning F(K) is defined by combining different key performance indicators (KPIs) representing network coverage quality or capacity, K = {K1, . . . , Kk}, where each Kk represents a bad level of a KPI defined by a fuzzy threshold Tk [8].
All KPIs are dependent on the antenna configurations. Cell planning should improve the defined KPIs by changing antenna configurations according to the predefined constraints. In the case of budget cell-planning problems, a cost is assigned to each antenna change and an additional constraint on the total cost of network changes should be added.
This means that the optimal cluster cell configuration Po = {P1, . . . , PI} is defined by one value from $\prod_{i=1}^{I}\prod_{p=1}^{5} N_{ip}$ available combinations. In most cases the spatial cell planning problem is considered as NP-hard, which means that finding an optimal solution for it (within real networks) is not feasible in a reasonable running time. Thus, much of the work is done with genetic algorithms (GA) for the spatial cell planning problem [3, 6, 8, 9] and with different heuristic solutions for improving the GA convergence.
Common GA schemes suppose there is a quick manner of goal function calculation and population evaluation. For cellular network optimization this means that there is an evaluator allowing an estimate of network quality for multiple antenna spatial locations and configurations.
The network evaluator for wideband cellular networks should estimate network quality according to the planned traffic load. The best method for wideband network evaluation is based on Monte-Carlo techniques [8], which require simulation of uplinks and downlinks between all the simulated mobiles and all serving cells. The main KPIs depend on the ratio between the signals from all services and the interference from all antennas impacting the mobiles. Uplink and downlink power control and handovers between cells should also be taken into consideration by resolving the sequence of system linear equations. As a result, the evaluation could be quite expensive, especially for large network clusters with high antenna density. The complexity of the Monte-Carlo evaluator is at least O(I^2 log(I)). This makes improving the convergence of GA for cellular cell-planning problems in wideband networks highly important.
The nature of 3G, 4G and beyond networks requires reaching a higher isolation between remote cells from the second tier and further. On the other hand, close cells have overlapping coverage areas supporting continuous coverage and have a very high impact on each other. Impacts from further cells depend on site location, area topology, building height and other environmental properties. Some tall sites located on mountain tops or building roofs influence distances of 5–15 km, while other sites in urban areas could be almost invisible even at distances less than 1 km. Influences between cells depend on the spatial network configuration and can be changed or eliminated during cellular network optimization activity.
Manual planning of wide areas is usually performed by an interactive, iterative process that includes the following two phases:
• Planning of smaller clusters (local cluster optimization);
• Synchronization between clusters and composing a final solution.
Antennas in small compact clusters should be configured together, taking into consideration the strong influences between them. The synchronization phase should resolve two types of interactions between clusters: impacts through cells on borders, which are impacted by two or more clusters, and long links from further sectors. There are usually not too many further impacts between antennas from remote clusters. The process described above is repeated till the network KPIs reach predefined values. In case network planning fails to reach the required KPIs for a cluster, new additional sectors, sites and locations will be recommended and the iterative process will continue.
We believe that problem-oriented heuristics should be used for efficient optimization and improvement of the optimization algorithm. The manual optimization process scheme described above leads to the development of a two-level GA for improving the convergence of spatial cellular network optimizations.

2.2 Algorithm description

The main idea of the decomposition genetic algorithm is a reduction of the overall optimization problem to multiple smaller optimization sub-problems with subsequent composition of the results. Reduced optimization problems are resolved independently, bringing local improvement for sub-areas.
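To make the two-phase idea concrete, the following is a minimal sketch of a decomposition-style GA loop. The evaluator, the encoding of antenna parameters and the helper names (run_ga, evaluate_network, cells_by_subarea) are illustrative assumptions, not the authors' implementation.

import random
from typing import Callable, Dict, List

def run_ga(candidate_keys: List[str],
           evaluate: Callable[[Dict[str, int]], float],
           base: Dict[str, int],
           generations: int = 50,
           pop_size: int = 20) -> Dict[str, int]:
    """Tiny GA over discrete antenna parameters (e.g. tilt steps); lower cost is better."""
    def random_solution():
        return {k: random.randint(0, 10) for k in candidate_keys}

    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda s: evaluate({**base, **s}))
        parents = scored[: pop_size // 2]                  # selection
        children = []
        for _ in range(pop_size - len(parents)):           # crossover + mutation
            a, b = random.sample(parents, 2)
            child = {k: random.choice((a[k], b[k])) for k in candidate_keys}
            child[random.choice(candidate_keys)] = random.randint(0, 10)
            children.append(child)
        population = parents + children
    return min(population, key=lambda s: evaluate({**base, **s}))

def decomposition_ga(cells_by_subarea: Dict[str, List[str]],
                     evaluate_network: Callable[[Dict[str, int]], float]) -> Dict[str, int]:
    """Phase 1: optimize each sub-area with the rest frozen; Phase 2: short global refinement."""
    solution: Dict[str, int] = {c: 0 for cells in cells_by_subarea.values() for c in cells}
    for cells in cells_by_subarea.values():                # local cluster optimization
        solution.update(run_ga(cells, evaluate_network, base=solution))
    all_cells = list(solution)                             # composed solution as starting point
    solution.update(run_ga(all_cells, evaluate_network, base=solution, generations=10))
    return solution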
Figure 2. Cluster division into 4 sub-areas.
are adjusted accordingly. This is an additional advantage of DGA: wide network optimization might be done based on different criteria per area.
Figure 5. Optimization results for a large cluster.

3.2 Large cluster optimization

Figure 5 depicts the network improvement for a big cluster with about 700 cells in a dense urban area. In this case DGA is able to reach a more than three times higher improvement in a shorter time. The baseline GA was not able to reach a reasonable improvement in a feasible time for such large clusters. Modern cellular networks of 3G and 4G have higher density and may have thousands of cells, for which the DGA approach is critical. Cluster division into a few sub-areas and the first optimization phase allowed the second phase of DGA to start from a better initial point.

4 FUTURE RESEARCH

In the future we are going to investigate theoretical aspects of the DGA, research common properties of problems where improvement using DGA can be used and estimate possible improvements. This approach could be used for very general spatial problems, where local influences are stronger than global ones. Different algorithms of cluster division will also be compared.

5 CONCLUSIONS

REFERENCES

[1] E. Amaldi, A. Capone, and F. Malucelli. Optimizing base station siting in UMTS networks. In Proceedings of the IEEE Vehicular Technology Conference, 4, 2001, 2828–2832.
[2] D. Amzallag, J. Naor, and D. Raz. Cell planning of 4G cellular networks. In Proceedings of the 6th IEEE International Conference on 3G & Beyond (3G'2005), London, 2005, 501–506.
[3] D. Amzallag, M. Livschitz, J. Naor, and D. Raz. Cell planning of 4G cellular networks: Algorithmic techniques and results. In Proceedings of the 6th IEEE International Conference on 3G & Beyond (3G'2005), London, 501–506.
[4] F. Longoni, A. Länsisalmi, and A. Toskala. Radio access network architecture. In H. Holma and A. Toskala (editors), WCDMA for UMTS, John Wiley & Sons, third edition, 2004, 75–98.
[5] C. Lee and H. G. Kang. Cell planning with capacity expansion in mobile communications: A tabu search approach. IEEE Transactions on Vehicular Technology, 49, 2000, 1678–1691.
[6] K. Lieska, E. Laitinen, and J. Lähteenmäki. Radio coverage optimization with genetic algorithms. In Proceedings of the 9th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC'98), 1998, 318–321.
[7] M. Livschitz and D. Amzallag. High-resolution traffic map of a CDMA cellular network. In Proceedings of the 6th INFORMS Telecommunication Conference, US, March 2002, 62–64.
[8] M. J. Nawrocki. Understanding UMTS Radio Network Modelling, Planning and Automated Optimization. John Wiley & Sons, 2006.
[9] H. Lin, R. T. Juang, D. B. Lin, C. Y. Ke, and Y. Wang. Cell planning scheme for WCDMA systems using genetic algorithms and measured background noise floor. IEEE Proceedings on Communications, 151, 2004, 595–600.
A heating and cooling model for office buildings in Seattle
G.H. Wei
School of Management, Dalian Jiaotong University, Dalian, Liaoning, China
ABSTRACT: To specify a model of the heating and cooling systems inside office buildings, we focus on the details of different non-negligible factors of a standard office building in Seattle. As Seattle has distinctive climatic variations, it is a good case for discussion. Based on an analysis of the temperature profile of Seattle, as
well as the application of heating and AC systems, we are able to build a model, which can be applied to real,
modern buildings in Seattle. Also, we consider the heat radiation from human bodies and electronics (lighting,
computers, and other sources) in an office building. By incorporating all the factors in the research, our goal is
to model the temperature curve that best fits the actual situation inside a building. Our model can be used to
evaluate and improve AC systems, thus making office buildings more comfortable for office workers, as well as
reducing the consumption of energy. To reduce energy consumption, we find that the minimum heat output of the heating and AC system should be 10°F/h in order to keep the temperature inside the building constant within the comfort zone.
the heating and the AC System. The conduction of
heat through walls follows Newton’s Law of Cooling/
Heating, which indicates that the rate of change of the
temperature inside the building is directly proportional
to the difference in temperature inside and outside. The
proportional constant k represents how fast the heat
conduction proceeds. The other factors will directly
affect the temperature inside the building. Let T (t) be
the temperature inside the building, and the follow-
ing equation can be used to describe the change of the
temperature:
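The equation itself is not reproduced in this extraction. A plausible form consistent with the verbal description, where A(t) denotes the outside temperature, H(t) the heat input from people and appliances, and U(t) the heating/AC action (these symbol names are assumptions, not taken from the paper), is

\frac{dT}{dt} = k\,\bigl(A(t) - T(t)\bigr) + H(t) + U(t).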
by multiplying by the average height of each floor. Then we assume the room is filled with air, which is heated by human radiation and electrical appliances. Therefore, we have an equation in which Q(t) is the heat produced by humans and electrical appliances per hour, V denotes the volume of the room (which is also the volume of the air), ρ is the density of air, and c is the specific heat capacity of air.
Based on previous research, the working space in an office per person is about 185 square feet [6]. The normal height of an office is 12 to 15 feet, and in this model we will take 13 feet as the height of the room. So V = 185 ft² × 13 ft × (1 m)³/(3.2808 ft)³ ≈ 68.102 m³. Besides office areas, office buildings also have public areas such as lobbies, elevators, and meeting rooms, which are not constantly occupied by people. We assume that the ratio of office areas to public areas is approximately 1:1, and therefore the volume that needs to be kept warm for each person needs to be doubled, so V = 136.204 m³. ρ and c are constants with values ρ = 1275 g/m³ and c = 1.007 J/(g K). Plug in all the constants:
…as others in the offices, the heat output of an iMac is 126 to 463 BTU/h, based on data from its official website [4]. In this case we will take the average value as 250 BTU/h ≈ 73.27 W. Finally, we need to consider the heat produced by other electrical equipment. This equipment, such as printers, copiers, vacuums, etc., is normally not in constant use. We assume this equipment has a power of about 5 W. Adding all the heat generated by these appliances, we have Qe = 10 W + 73.27 W + 5 W = 88.27 W ≈ 317770 J/h. Combining the data above, it is easy to calculate the heat produced by human radiation and electrical appliances per hour.
Because the model needs to fit the temperature change over the whole day, the working time of office buildings needs to be considered. We assume that the working time of the building is from 8 am to 6 pm. The human body and the electrical appliances produce heat only during the working hours. Therefore, H(t) during the entire day is a segmented function:
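The unit bookkeeping implied above can be sketched as follows: a heat input Q in J/h raises the air temperature of the occupied volume at a rate Q/(ρVc) in K/h. The constants are the ones quoted in the text; the per-occupant metabolic heat is not given in this excerpt and is marked as an assumption.

RHO_AIR = 1275.0        # g/m^3, as quoted in the text
C_AIR = 1.007           # J/(g*K), as quoted in the text
V_PER_PERSON = 136.204  # m^3 per person (office + public share), as quoted

Q_APPLIANCES = 88.27 * 3600.0   # J/h, the 88.27 W per person from the text
Q_HUMAN = 100.0 * 3600.0        # J/h, assumed ~100 W metabolic heat (not from the paper)

def heating_rate(q_joules_per_hour: float) -> float:
    """Temperature rise rate in K/h produced by a heat input Q."""
    return q_joules_per_hour / (RHO_AIR * V_PER_PERSON * C_AIR)

if __name__ == "__main__":
    print(f"appliances only : {heating_rate(Q_APPLIANCES):.2f} K/h")
    print(f"with occupants  : {heating_rate(Q_APPLIANCES + Q_HUMAN):.2f} K/h")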
per unit area when the temperature is changed by 1
degree. A is the area of the material, which is the wall
or the roof. ρ represents the density of air. V is the
volume of air, which is assumed to be the volume of
the building. c is the specific heat capacity of the air.
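The formula itself is not reproduced here. A dimensionally consistent relation matching the quantities just defined (this is a reconstruction, not a quotation from the paper) is

k = \frac{U \cdot A}{\rho \, V \, c},

which gives k in units of 1/h when U is expressed in J/(h·m²·K), A in m², ρ in g/m³, V in m³ and c in J/(g·K).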
In the model, we assume an office building as
a cuboid with 40 floors, therefore the height is 13
ft./floor × 40 floor = 520 ft. = 158.5 m. And we also
assume the base of the building is square with an area
of 80 ft. × 80 ft. (24.38 m × 24.38 m). Then we can
calculate the area of walls and roof, as well as the
volume of the building:
A wall = 4 × 158.5 m × 24.38 m = 15456.92 m²
A roof = 24.38 m × 24.38 m = 594.3844 m²
V = 24.38 m × 24.38 m × 158.5 m = 94209.9274 m³
Figure 2. Temperature inside the building without insulation.
Since different materials have quite different values of U, which will have a significant impact on the conduction of heat, we will calculate two k values: ki when the materials of the walls and the roof contain an insulation layer, which will decrease the rate of heat conduction, and ku for the walls and roof without insulation. Also, the roof and walls have different values of U; therefore,
In the process of AC system operation, we use a segmented function to solve the general problem. We estimate the general temperature difference within one day according to the curve we plot. When T(t) is higher than the comfort zone which we set up, the AC system decreases the temperature; otherwise, it increases the temperature. The segmented function is set according to the different temperature variations.
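As a rough illustration of the segmented control just described: the comfort-zone bounds below are assumed values and the 10°F/h rate comes from the abstract; the exact breakpoints of the paper's segmented function are not reproduced in this excerpt.

def ac_heat_input(T_inside: float,
                  comfort_low: float = 68.0,   # assumed comfort zone in deg F
                  comfort_high: float = 74.0,
                  max_rate: float = 10.0) -> float:
    """Segmented (piecewise) AC/heating action in deg F per hour."""
    if T_inside > comfort_high:
        return -max_rate      # AC decreases the temperature
    if T_inside < comfort_low:
        return +max_rate      # heating increases the temperature
    return 0.0                # off inside the comfort zone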
6 CONCLUSIONS
Multi-depth Deep Feature learning for face recognition
C.C. Zhang, X.F. Liang & T. Matsuyama
ABSTRACT: Deep structure learning is a promising method for computer vision problems, such as face recognition. This paper proposes a Multi-depth Deep Feature (MDDF) learning method to learn abstract features which have the most information in the region of interest. It is unlike other deep structure learning methods that evenly partition the raw data/images into patches of the same size and then recursively learn the features by evenly aggregating these pieces of local information. MDDF performs an uneven partition according to the density of the discriminative power in the image and learns a data-driven deep structure that preserves more information at the region of interest. In cross-database experiments, MDDF shows a better performance than a canonical deep structure learning method (Deep PCA) on face recognition. Moreover, MDDF achieves an accuracy comparable with other well-known face recognition methods.
Keywords: Deep structure learning, Multi-depth Deep Feature, fine-to-coarse Quad-tree partition
therefore, preserves the corresponding local information. The criterion of the Quad-tree partition uses an LDA-motivated total variance, which ensures robust resistance to local noise and efficient computation. In our framework, the Quad-tree partition functions as a convolutional neural network, but builds up an incomplete hierarchical structure. Afterwards, the aggregation of these uneven features in the multi-depth structure functions as a recursive neural network, and outputs an abstract feature having more information at the region of interest. Experiments on four challenging databases show that MDDF gains an advantage in accuracy over Deep PCA. It also achieves accuracy comparable with other well-known methods using local or global features.

Figure 1. Illustration of the top-down Quad-tree partition and the bottom-up deep feature learning hierarchy. PCA + LDA extract top vectors from each block to describe local features. The green bounded node joins its children nodes in the original order to preserve the global information.

2 HIERARCHICAL QUAD-TREE PARTITION

Instead of dividing the face region into a uniform grid, the Quad-tree partitions the face region by means of local discriminative variance. A larger partition means the block has a lower feature density; by contrast, a smaller partition means the block has a higher feature density. To make the partition more robust to local noise, we consider the variance over all faces across the entire database. Motivated by the idea of LDA, which encodes discriminative information by maximizing the between-class scatter matrix Sb and minimizing the within-class scatter matrix Sw (see Eq. (1)), we define a template face T by Eq. (2) to represent the distribution of discriminative information for the database. Thus, the total variance of the entire database is the variance of the template, where µ is the mean image of all classes, µi is the mean image of class Xi, Ni is the number of samples in class Xi, and xk is the k-th sample of class Xi.
We perform a top-down, data-driven Quad-tree partition on T to partition it into smaller blocks recursively, according to a function doSplit(r), defined in Eq. (3). If the variance of a region r (starting from the entire T) is higher than a threshold (Tv · totalVar), then r is split into four sub-blocks of the same size. The partition carries on under the criteria defined by the certification function in Eq. (3). Eventually, we have an unevenly partitioned face and an incomplete hierarchical structure of the face (see Fig. 1). Usually, it is rather difficult to find the best deep structure using one Tv. We, therefore, give a set of thresholds in ascending order, and introduce fine-to-coarse face partitions (see Fig. 2). The leftmost partition is equivalent to the leaf nodes of the Deep PCA tree structure (when Tv = 0). The face image is split into fewer and bigger blocks when Tv is large, but into more and smaller blocks when Tv is small. Therefore, the fine-to-coarse partition provides an opportunity to explore the effectiveness of varied deep structures.

Figure 2. Fine-to-coarse Quad-tree partitions on the Yale 2 database.
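A minimal sketch of the variance-thresholded Quad-tree split described above. The template T and the threshold factor Tv follow the text; the function names, the stopping block size and the data layout are illustrative, not the authors' code.

import numpy as np

def quadtree_partition(template: np.ndarray, tv: float):
    """Recursively split the template face T into four equal sub-blocks
    whenever the block variance exceeds tv * total variance (cf. Eq. (3))."""
    total_var = template.var()
    leaves = []

    def do_split(block: np.ndarray, top: int, left: int):
        h, w = block.shape
        # Stop when the block is small or its variance is below the threshold.
        if h <= 4 or w <= 4 or block.var() <= tv * total_var:
            leaves.append((top, left, h, w))
            return
        h2, w2 = h // 2, w // 2
        do_split(block[:h2, :w2], top, left)
        do_split(block[:h2, w2:], top, left + w2)
        do_split(block[h2:, :w2], top + h2, left)
        do_split(block[h2:, w2:], top + h2, left + w2)

    do_split(template, 0, 0)
    return leaves  # list of (row, col, height, width) leaf blocks

# Example: a 32x32 template, matching the cropped face size used in Section 4.
# blocks = quadtree_partition(np.random.rand(32, 32), tv=0.5)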
With a set of Quad-tree partitions, we are able to learn features from a face. Deep feature learning produces a bottom-up hierarchy of features representing the face, in which the higher levels correspond to a shorter overall description of the face. It also encodes the correlation among the local patches. We create a hierarchy for a face based on the aforementioned Quad-tree partition, using PCA + LDA. Figure 1 shows that a face is partitioned into many blocks of varied sizes. Blocks without a green ring are the leaf nodes at different levels in the tree. These leaf nodes are used as the input for PCA + LDA, which selects the top ki vectors as a feature basis, where ki is less than the corresponding block size and i denotes the level index in the hierarchy. The smaller i is, the bigger ki becomes. Each block is projected onto the corresponding new basis, and the four reduced-dimensionality neighbouring blocks are then joined together back to their father node in their original order. This process is repeated, using the newly created layer as the data for the PCA + LDA and join process to create the next layer up, until reaching the root node.
As illustrated in Figure 1, the smallest blocks in the Quad-tree partition measure 4 × 4 at level 3. We apply PCA + LDA to these blocks, and extract the top 3 × 3 vectors from each to describe the local feature. The four neighbours are then joined, in the inverse order of the partition, back to the father node at level 2, where the original blocks have a size of 8 × 8. At level 2, if a block was not further decomposed, PCA + LDA extracts the top 6 × 6 vectors, which are the same size as the newly joined block from level 3. We recursively apply the PCA + LDA extraction and join the neighbouring blocks to the upper layer. Eventually, the procedure stops at the root node. The last PCA + LDA selects about 30 top features as a vector. Obviously, these features preserve not only the global information, thanks to the feature hierarchy, but also the local features from the blocks partitioned by our Quad-tree partition.
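A compact sketch of the bottom-up aggregation just described, simplified to a plain PCA projection per node (the paper uses PCA + LDA); block sizes, the number of retained vectors and the tree layout are illustrative.

import numpy as np
from typing import List

def pca_project(X: np.ndarray, k: int) -> np.ndarray:
    """Project rows of X (n_samples x dim) onto their top-k principal axes."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def aggregate_level(child_features: List[np.ndarray], k: int) -> np.ndarray:
    """Join four sibling blocks in their original order, then re-project,
    mimicking one bottom-up step of the deep feature hierarchy."""
    joined = np.concatenate(child_features, axis=1)   # n_samples x (4 * k_child)
    return pca_project(joined, k)

# Usage sketch: leaf blocks (flattened per training face) are reduced first,
# then recursively joined towards the root.
# leaves = [pca_project(block, k=9) for block in leaf_blocks]   # 3x3 -> 9 dims
# parent = aggregate_level(leaves[:4], k=36)                    # 6x6 -> 36 dims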
4 EXPERIMENTS AND ANALYSIS

To demonstrate the effectiveness of MDDF, four public and challenging databases were employed for evaluation: ORL [10], Extended Yale (Yale 2) [11], AR [12], and FERET [13]. Face images in these databases are taken under varying conditions, including head poses (ORL, AR), illumination changes (ORL, Yale 2, AR, and FERET), facial expressions (ORL, AR, FERET), and facial details (e.g. with glasses or not: ORL, AR, FERET). Face images were cropped to 32 × 32. The template images created on each database are shown in Figure 3. An example of a fine-to-coarse Quad-tree partition is shown in Figure 2. Our method generated 8–15 Quad-trees, depending on the database. These Quad-trees are indexed according to the threshold Tvi in ascending order.

Figure 3. Template images on four databases: (a) ORL, (b) Yale 2, (c) AR, (d) FERET.

[1] ORL database: 40 subjects, 10 samples per subject, with variations in facial expressions, open or closed eyes, glasses or no glasses, scale changes (up to about 10 percent), and head poses. 5 samples per subject are randomly selected as training data, while the remaining ones are used as test data.
[2] Extended Yale database (Yale 2): more than 20,000 single-light-source images of 38 subjects under 576 viewing conditions (9 poses in 64 illumination conditions). To evaluate the robustness of our method to illumination changes, 5 samples of the 64 illumination conditions are randomly selected as training data and the remaining 59 images as test data.
[3] AR database: 4000 colour face images of 126 people (70 men and 56 women), including frontal views of faces with different facial expressions, lighting conditions, and occlusions (sun glasses and scarves). As in [4], a subset with only illumination and expression changes, containing 50 male subjects and 50 female subjects, is chosen. For each subject, we randomly choose 5 samples for training and the remaining 9 images for testing.
[4] FERET database: 13539 images corresponding to 1565 subjects. Images differ in facial expressions, head position, lighting conditions, ethnicity, gender, and age. To evaluate the robustness of our method with regard to facial expressions, we worked with a subset of frontal faces labelled Fa and Fb, where Fa represents regular facial expressions and Fb alternative facial expressions. All Fa images are selected as training data, while the Fb images serve as test data.

To verify the feature performance, we compared the proposed MDDF with various methods using global features, local features, and canonical deep features, respectively. They are: (1) the conventional PCA + LDA method [2], which extracts a global feature vector from the whole face region; (2) MPCRC [4], which develops a multi-scale local patch-based method to alleviate the problem of sensitivity to patch size; (3) a 30-region method [3], which defines the 30 regions according to experimental experience; (4) Deep PCA [7], which integrates the discriminative information extracted from uniform local patches into a global feature vector. Table 1 shows the comparison results, and gives rise to five observations:

Table 1. Recognition accuracy of MDDF compared to four other reference methods.

Method       ORL     Yale 2   AR      FERET
PCA + LDA    92.80   90.76    86.81   83.51
MPCRC        91.50   92.80    88.60   73.64
30-Region    93.88   90.78    90.57   82.18
Deep PCA     92.33   91.11    84.67   85.18
MDDF         94.68   92.82    88.36   85.26

Observation (1): PCA + LDA extracts a global feature that focuses on the holistic information in the image. It could be regarded as a summary of faces, but it ignores details such as local variations. Thus, PCA + LDA has a rather average performance on the various databases, neither bad nor good. The local patch-based methods focus more on local variations, and perform better on the Yale 2, AR and FERET databases, which contain more local deformations caused by facial expressions, illumination
changes, and occlusions, etc. This implies the sig-
nificant advantage of the subregion-based features
over the holistic-based features in dealing with local
deformations.
Observation (2): the MPCRC method outperforms
PCA + LDA on Yale 2 and AR databases obviously
due to the robustness of the patches in local vari-
ations, but degrades significantly on the FERET
database where only a Single Sample Per Person
(SSPP) is collected as gallery data. Under the SSPP,
the MPCRC degenerates into the original patch-
based method PCRC which has no collaboration
with multiple patches in variant scales. Since the
performance of local patch-based methods is very
sensitive to patch size, they suffer from severe
degradation in performance under inappropriate
patch size. That is why many local patch-based
methods cooperate with global features to over-
come this problem. This motivates the development
of our multi-depth deep structure learning to associate local features with a global representation of the entire image structure.
Observation (3): the 30-Region method is composed of 30 subregions which have large overlaps with each other. All these subregions are empirically designed according to facial structures after the faces have been well registered. In particular, one 'subregion' actually is the entire facial region. We can think of it as a brute-force integration of global and local features. Therefore, it obtains a higher performance on the AR database, which is well registered and has only frontal faces. It also performs well on the ORL database because of the effectiveness of its subregions. However, the performance degrades on databases that are not well registered, such as Yale 2 and FERET. The reason might be that these sub-regions do not fit the data. This suggests that the design of the sub-regions must adapt to diverse data.
Observation (4): Deep PCA improves the performance of the conventional PCA + LDA method on the Yale 2 and FERET databases, which shows the effectiveness of deep structure learning for coping with local deformations. However, it performs worse on the AR database. The reason might be that Deep PCA assumes that the discriminative information is evenly distributed. It partitions the face into a uniform grid and evenly aggregates features from the lower level. This result indicates the importance of designing a data-driven deep structure learning method.
Observation (5): the proposed MDDF achieves the best performance in most experiments. The multi-depth deep feature learning is developed based on a fine-to-coarse Quad-tree partition. To explore how varied deep structures affect the effectiveness of deep features, we defined a set of thresholds for the Quad-tree partition. Figure 4 plots the recognition accuracy against the Quad-tree index (the indices correspond to the thresholds Tvi in ascending order). Please note that the leftmost dot indicates the accuracy of Deep PCA, because Tv = 0 and the Quad-tree then partitions the face into small patches of the same size. We can see that the varied deep structure features perform over quite a large range. The best performance is usually achieved at a certain partition, depending on the database, but in most cases it does not come from Deep PCA. This is because our Quad-tree-based partition is processed on the template image, which is obtained from Sb and Sw and can be deemed a summary of the database. This data-driven strategy makes our method adapt to the databases, and generates the most appropriate partition for deep structure learning automatically. Our method benefits from the data-driven, image-partition-based deep structure learning, and it can be widely applied to diverse databases, especially those with a large number of variations. It must be pointed out that MDDF performs worse than the 30-region method but better than the Deep PCA method on the AR database. We argue the reason is that MDDF learns a multi-depth structure feature from the template face of the database, whereas the sub-regions of the 30-region method are empirically designed for well-registered databases such as AR, but not for others.

Figure 4. Recognition accuracy against the Quad-tree index during a fine-to-coarse partition on four databases: (a) ORL, (b) Yale 2, (c) AR, and (d) FERET.

5 CONCLUSION

This paper proposes a novel deep structure learning method, Multi-depth Deep Feature (MDDF), for face recognition. It unevenly partitions face images according to the density of the discriminative power in the local regions, and learns a data-driven deep structure that preserves more information at the region of interest. The comparison with diverse methods using global features, local features, and the canonical deep structure
feature 'Deep PCA' shows the comparable performance of MDDF on four challenging databases. We can conclude that MDDF provides the most accurate description of image structure for recognition. In the current work, the dimension of the features extracted from bigger blocks, which are not further partitioned, is the same as the one aggregated from four smaller blocks at a lower level. In future research we will explore which feature dimensions of these bigger blocks would induce an optimal performance.

ACKNOWLEDGEMENT

This work is supported by the Japan Society for the Promotion of Science, Scientific Research KAKENHI Grant-in-Aid for Young Scientists (ID: 25730113).

REFERENCES

[1] M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, no. 1, 1991.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. PAMI, vol. 20, no. 7, pp. 711–720, 1997.
[3] L. Spreeuwers, "Fast and accurate 3D face recognition using registration to an intrinsic coordinate system and fusion of multiple region classifiers," IJCV, vol. 93, no. 3, pp. 389–414, 2011.
[4] P. Zhu, L. Zhang, Q. Hu, and S. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in ECCV, 2012.
[5] Y. Bengio and Y. LeCun, "Scaling learning algorithms towards AI," 2007.
[6] G. E. Hinton and S. Osindero, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, pp. 1527–1554, 2006.
[7] B. Mitchell and J. Sheppard, "Deep structure learning: beyond connectionist approaches," in IEEE ICMLA, 2012.
[8] F. Berisha, A. Johnston, and P. W. McOwan, "Identifying regions that carry the best information about global facial configurations," Journal of Vision, vol. 11, no. 10, pp. 1–8, 2010.
[9] U. Park, R. Jillela, and A. Ross, "Periocular biometrics in the visible spectrum," IEEE Trans. on Info. Forensics and Security, no. 6, pp. 96–106, 2011.
[10] F. S. Samaria and A. C. Harter, "Parameterisation of a stochastic model for human face identification," in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 138–142, 1994.
[11] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on PAMI, vol. 23, no. 6, pp. 643–660, 2001.
[12] A. M. Martinez and R. Benavente, "The AR face database," CVC Technical Report, 1998.
[13] P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, "The FERET evaluation methodology for face recognition algorithms," IEEE Transactions on PAMI, vol. 22, no. 10, pp. 1090–1104, 2000.
Research on camera calibration based on OpenCV
H.M. Nie
Heilongjiang University of Science and Technology, China
ABSTRACT: Based on the principle of camera calibration, this paper puts forward a camera calibration technique based on OpenCV, and completes the calibration with the help of the open-source computer vision library OpenCV on the VS 2008 development platform. The experiments prove that the OpenCV-based camera calibration procedure has the following advantages: accurate calibration results, high computational efficiency, and good cross-platform portability, so that it can be effectively applied in computer vision systems.
2 CALIBRATION PRINCIPLE
In Formula (10), R is a 3 × 3 rotation matrix and T is a 3 × 1 translation matrix.
cvSeqPush to save the sub-pixel coordinates into the coordinate sequence.
(6) Substitute the sub-pixel coordinates of the corner points and the world coordinate values of the corner points into cvCalibrateCamera2() to obtain the camera's internal and external parameters and the distortion parameters.
(7) Release the memory space allocated by the functions to prevent memory leaks.
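The steps above use the legacy OpenCV C API (cvFindChessboardCorners / cvCalibrateCamera2). A minimal equivalent sketch with the modern Python bindings is given below; the 9×6 chessboard size and the image file names are assumptions, not taken from the paper.

import glob
import cv2
import numpy as np

PATTERN = (9, 6)           # inner corners of the chessboard (assumed)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# World coordinates of the chessboard corners (Z = 0 plane, unit squares).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, gray = [], [], None
for fname in glob.glob("calib_*.jpg"):        # hypothetical image names
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    # Refine corner locations to sub-pixel accuracy (cf. step (5) above).
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners)

# Cf. step (6): intrinsic matrix, distortion coefficients, per-view extrinsics.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("camera matrix:\n", K)
print("distortion (k1, k2, p1, p2, k3):", dist.ravel())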
4 THE IMPLEMENTATION OF
PROGRAMMING
Table 1. Comparison of camera parameters.

Camera parameters   Imaging results   Matlab calibration results
fx                  677.761683        677.911558
fy                  699.952933        703.2464865
x0                  337.5338665       338.213853
y0                  285.015533        285.95167
k1                  −0.3013115        −0.298848
k2                  0.13680118        0.135938
p1                  0.000446          0.000386
p2                  −0.00133          −0.001355

…computer vision. In these studies, it is necessary to determine the geometrical relationship between the corresponding points in visual images and in the real world. The purpose of camera calibration is to establish the correspondence between 3D world coordinates and 2D image coordinates. The camera calibration procedure developed with OpenCV has the advantages of accurate calibration results, high calculating efficiency and good cross-platform portability, and can be applied effectively in the field of computer vision systems.
Designing fuzzy rule-based classifiers using a bee colony algorithm
I.A. Hodashinsky, R.V. Meshcheryakov & I.V. Gorbunov
ABSTRACT: This paper proposes a fuzzy approach which enables one to build intelligent decision-making
systems. We present new learning strategies to derive fuzzy classification rules from data. The training procedure
is based on a bee colony algorithm. We describe four components of the algorithm: initialization, the work of the scout bees, the generation of rule antecedents, and weight tuning. The performance of the fuzzy rule-based classifier, tuned by the given algorithms, is finally evaluated on well-known classification problems such as bupa, iris, glass, new thyroid, and wine. A comparison with algorithms such as Ant Miner, Core, Hider, Sgerd, and Target is made on random training and testing selections. We describe the
implementation of the fuzzy rule-based classifier to forecast the efficiency of the non-medical treatment of
patients rehabilitated in the Federal State Institution of Tomsk Scientific-Research Institute of Health-Resort
and Physiotherapy Federal Medical and Biological Agency of Russia.
Table 1. Numerical benchmark functions.

$f_1(x) = 0.5 + \dfrac{\sin^2\!\bigl(\sqrt{x_1^2 + x_2^2}\bigr) - 0.5}{\bigl(1 + 0.001(x_1^2 + x_2^2)\bigr)^2}$,  range $-100 \le x_i \le 100$,  minimum $f_1(0) = 0$.
$f_2(x) = \sum_{i=1}^{n} x_i^2$,  range $-100 \le x_i \le 100$,  minimum $f_2(0) = 0$.
$f_3(x) = \dfrac{1}{4000}\sum_{i=1}^{n}(x_i - 100)^2 - \prod_{i=1}^{n}\cos\!\Bigl(\dfrac{x_i - 100}{\sqrt{i}}\Bigr) + 1$,  range $-600 \le x_i \le 600$,  minimum $f_3(100) = 0$.
$f_4(x) = \sum_{i=1}^{n}\bigl(x_i^2 - 10\cos(2\pi x_i) + 10\bigr)$,  range $-5.12 \le x_i \le 5.12$,  minimum $f_4(0) = 0$.
$f_5(x) = \sum_{i=1}^{n-1}\bigl(100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\bigr)$,  range $-50 \le x_i \le 50$,  minimum $f_5(1) = 0$.
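For reference, the Table 1 benchmarks can be transcribed directly as code (f1 is the Schaffer function, f3 the Griewank function shifted by 100, f4 Rastrigin, f5 Rosenbrock); this is a runnable sketch, not part of the original paper.

import numpy as np

def f1(x):   # Schaffer, 2 parameters, minimum f1(0) = 0
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (np.sin(np.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2

def f2(x):   # Sphere, minimum f2(0) = 0
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def f3(x):   # Griewank shifted by 100, minimum f3(100) = 0
    x = np.asarray(x, dtype=float) - 100.0
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

def f4(x):   # Rastrigin, minimum f4(0) = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def f5(x):   # Rosenbrock, minimum f5(1) = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))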
the classification problem by employing hybrid approaches on the grounds of decision trees and evolutionary algorithms (Pulkkinen & Koivisto, 2008), and machine learning methods, Learning Classifier Systems in particular; the latter approach is based on reinforcement learning and genetic algorithms (Ishibuchi & Nojima, 2007).
An even more recent approach is that of Swarm Intelligence. The two most widely used swarm intelligence algorithms are Ant Colony Optimisation (ACO) (Dorigo, Maniezzo, & Colorni, 1996) and Particle Swarm Optimisation (PSO) (Kennedy & Ebenhart, 1995). In (Casillas et al., 2005), an approach to the fuzzy rule learning problem with ACO algorithms is proposed. This learning task is formulated as a combinatorial optimization problem, and the features related to ACO algorithms are introduced. In (Abadeh et al., 2008) an evolutionary algorithm to induce fuzzy classification rules is proposed, which uses an ant colony optimization based local searcher to improve the quality of the final fuzzy classification system. The proposed local search procedure is used in the structure of a Michigan-based evolutionary fuzzy system.
Another category for fuzzy rule-based classifier design is PSO. The article (Elragal, 2008) discusses a method for improving the accuracy of fuzzy rule-based classifiers by using particle swarm optimization. In this work, two different fuzzy classifiers are considered. The first classifier is based on the Mamdani fuzzy inference system while the second one is based on the Takagi-Sugeno fuzzy inference system. The parameters of the proposed fuzzy classifiers include the antecedent and consequent parameters and the structure of the fuzzy rules, optimized by using PSO.
In our study we implement a novel bee colony algorithm to identify the structure and parameters of the fuzzy rule-based classifier. There are many widely known algorithms based on the behaviour of honey bees in nature. These algorithms can be divided into two categories, corresponding to the behaviour of bees while gathering food and while mating. Our study is based on the Artificial Bee Colony (ABC) algorithm (Karaboga, 2005), which is a recent swarm intelligence based approach to solve nonlinear and complex optimization problems. It is as simple as PSO and differential evolution algorithms, and uses only common control parameters such as colony size and maximum cycle number.
In the studies (Karaboga & Basturk, 2008), (Karaboga & Akay, 2009), the performance of the ABC algorithm is compared with that of differential evolution (DE), PSO and Evolutionary Algorithms (EA) for multi-dimensional and multimodal numeric problems. Classical benchmark functions are presented in Table 1. In the experiments, f1(x) has 2 parameters, f2(x) has 5 parameters, and the f3(x), f4(x) and f5(x) functions have 50 parameters. Parameter ranges, formulations, and global optimum values of these functions are given in Table 1 (Karaboga & Basturk, 2008).
The means and the standard deviations of the function values obtained by the DE, PSO, EA, and ABC algorithms are given in Table 2 (Karaboga & Basturk, 2008). Simulation results show that the ABC algorithm performs better than the above mentioned algorithms and can be efficiently employed to solve multimodal engineering problems.
The ABC has a lot of advantages in memory, local search, the solution improvement mechanism, and so on, and it is able to achieve excellent performance on optimization problems (Karaboga & Akay, 2009), (Zhao et al., 2010). In recent years, the ABC algorithm has been successfully used to solve hard combinatorial optimization problems, including the traveling salesman problem (Li, Cheng, Tan & Niu, 2012), the quadratic knapsack problem (Pulikanti & Singh, 2009), and the leaf-constrained minimum spanning tree problem (Singh, 2009). In the article (Singh, 2009), by comparing the approach against genetic algorithms, ant-colony optimization algorithms, and tabu search, Singh reported that the ABC outperformed the other approaches in terms of best and average solution quality and computational time.
The ABC algorithm has been applied to various problem domains including the training of artificial
neural networks (Karaboga, Akay, & Ozturk, 2007), (Kumbhar & Krishnan, 2011), the design of a digital infinite impulse response filter (Karaboga, 2009), software testing (Suri & Snehlata, 2011), and the prediction of the tertiary structures of proteins (Benitez & Lopes, 2010). However, it has not yet been used to tune a fuzzy rule-based classifier.
The main motivation of this paper is to present a novel classifier design. The idea is to use the improved bee colony algorithms for achieving higher accuracy in real-life data classification.
The paper is organized as follows: Section 2 introduces the main theoretical aspects of fuzzy rule-based classification systems; Section 3 briefly describes the artificial bee colony technique; Section 4 describes a modified Bees algorithm and how it was used in this paper; and Section 5 shows test results on the well-known data sets and results in the forecasting of non-medical treatment efficiency. The final section offers the conclusions.

Table 2. The results (mean ± standard deviation) obtained by the DE, PSO, EA, and ABC algorithms.

            DE                 PSO                  EA                  ABC
f1(x) = 0   0 ± 0              0.00453 ± 0.0009     0 ± 0               0 ± 0
f2(x) = 0   0 ± 0              2.5113E-8 ± 0        0 ± 0               0 ± 0
f3(x) = 0   0 ± 0              1.5490 ± 0.067       0.00624 ± 0.00138   0 ± 0
f4(x) = 0   0 ± 0              13.1162 ± 1.4482     32.6679 ± 1.9402    0 ± 0
f5(x) = 0   35.3176 ± 0.2744   5142.45 ± 2929.47    79.818 ± 10.4477    0.1331 ± 0.2622

The fuzzy classification is described by a function $f$ with values in $[0,1]^m$, which assigns to the classified object a grade of membership in each class, calculated in the following way: $\beta_j(x) = \sum_{R_{ij}} \prod_{k=1}^{n} A_{ki}(x_k)\cdot CF_i$, $j = 1, 2, \ldots, m$.
The output of the classifier is the class defined in the following way: $class = c_{j^*}$, $j^* = \arg\max_{1 \le j \le m} \beta_j$.
The fuzzy classifier can be presented as a function $c = f(x, \theta, CF)$, where θ is the rule base containing rules of type (1).
Let us assume that the multitude of teaching data (the observation table) is given by $\{(x_p; c_p),\ p = 1, \ldots, z\}$, and let us define the following unit function:
then the computational criterion of classification system accuracy can be defined in the following way:
The problem of identification turns into the problem of maximum search for the given function in a multi-dimensional space whose coordinates correspond to the fuzzy system parameters. To optimize θ, it is advisable to use the bee colony algorithm, which is set to generate and change the rule base. To set CF, it is advisable to use the modified bee colony algorithm, which relies on the characteristics of the realization of the bee dance operation.
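A small sketch of the class-scoring and accuracy criterion described above. The membership functions, rule encoding and data layout are placeholders for illustration, not the authors' implementation.

import numpy as np

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Simple triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def beta(x, rules, m):
    """beta_j(x): for each class j, sum over its rules of the product of the
    antecedent memberships times the rule weight CF (cf. the formula above)."""
    scores = np.zeros(m)
    for rule in rules:   # rule = {"class": j, "cf": w, "terms": [(a, b, c), ...]}
        prod = 1.0
        for xk, (a, b, c) in zip(x, rule["terms"]):
            prod *= triangular(xk, a, b, c)
        scores[rule["class"]] += rule["cf"] * prod
    return scores

def classify(x, rules, m):
    return int(np.argmax(beta(x, rules, m)))

def accuracy(data, labels, rules, m):
    """E(theta, CF): share of correctly classified observations."""
    hits = sum(classify(x, rules, m) == c for x, c in zip(data, labels))
    return hits / len(labels)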
repeated cycles, C = 1, 2, . . . , MCN, of the search processes of the employed bees, the onlooker bees and the scout bees.
The main steps of the algorithm are given below (Akay & Karaboga, 2009), (Karaboga & Akay, 2009):
1: Initialize the population of solutions xij, i = 1, . . . , SN and j = 1, 2, . . . , D
2: Evaluate the population
3: C = 1
4: repeat {Employed Bees' Phase}
5: Produce new solutions vi for the employed bees by vij = xij + φij(xij − xkj), where k ∈ {1, 2, . . . , SN} and j = 1, 2, . . . , D are randomly chosen indices. Although k is determined randomly, it has to be different from i. φij is a random number in [−1, 1] which controls the production of neighbouring food sources around xij and represents the comparison of two food positions as seen by a bee.
6: Apply the greedy selection process between vi and xi {Onlooker Bees' Phase}
7: Calculate the probability values pi for the solutions xi by pi = fit_i / Σ_{n=1..SN} fit_n, where fit_i is the fitness value of solution i, which is proportional to the nectar amount of the food source in position i, and SN is the number of food sources, which is equal to the number of employed bees or onlooker bees.
8: Produce the new solutions vi for the onlookers from the solutions xi, selected depending on pi, and evaluate them
9: Apply the greedy selection process between vi and xi {Scout Bees' Phase}
10: Determine the abandoned solution for the scout, if it exists, and replace it with a new randomly produced solution xi
11: Memorize the best solution achieved so far
12: C = C + 1
13: until C = MCN
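A compact sketch of the ABC cycle listed above (employed, onlooker and scout phases) for a real-valued minimization problem; the parameter names follow the listing, while the implementation details are illustrative.

import numpy as np

def abc_minimize(f, bounds, SN=20, MCN=200, limit=20, rng=np.random.default_rng(0)):
    """Artificial Bee Colony sketch: SN food sources, MCN cycles, `limit`
    unsuccessful trials before a source is abandoned to a scout."""
    D = len(bounds)
    lo, hi = np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds])
    X = rng.uniform(lo, hi, size=(SN, D))          # food sources
    fx = np.array([f(x) for x in X])
    trials = np.zeros(SN, dtype=int)
    fit = lambda v: 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    def neighbour(i):
        k = rng.choice([j for j in range(SN) if j != i])
        j = rng.integers(D)
        v = X[i].copy()
        v[j] = np.clip(X[i, j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j]), lo[j], hi[j])
        return v

    def greedy(i, v):                              # greedy selection between vi and xi
        fv = f(v)
        if fv < fx[i]:
            X[i], fx[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(MCN):
        for i in range(SN):                        # employed bees
            greedy(i, neighbour(i))
        p = np.array([fit(v) for v in fx]); p /= p.sum()
        for i in rng.choice(SN, size=SN, p=p):     # onlooker bees
            greedy(i, neighbour(i))
        worst = int(np.argmax(trials))             # scout bee
        if trials[worst] > limit:
            X[worst] = rng.uniform(lo, hi)
            fx[worst], trials[worst] = f(X[worst]), 0
    best = int(np.argmin(fx))
    return X[best], fx[best]

# Usage: x_best, f_best = abc_minimize(lambda x: sum(xi**2 for xi in x),
#                                      bounds=[(-100, 100)] * 5)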
4.3 The base algorithm of the bee colony for generation of the fuzzy classifier rules

The given algorithm serves for the formation of the fuzzy classifier rule base and aims to obtain an initial rule base which is surely better than a random one. The algorithm joins two conceptions of decision making: scout bees, using the random search methodology, realise the above given algorithm of work and generate new solutions, while working bees, realising the idea of local search, set the antecedents (IF-parts) and consequents (THEN-parts) of the rules.
To make the algorithm less cumbersome, one can decide to shorten the number of improved rules on the ground of their utility value, defined by the incremental growth of the number of correctly classified training samples. This solution is the analogue of the bee "dance" in nature. After that, the best solution from the multitude of the best solutions of each stage within the given iteration is picked, and this very solution is added into the rule base.
In the current version the application field of the algorithm is limited to data sets which contain integer-valued and real-valued markers, excluding nominal markers.
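Before the listing itself, it helps to fix what the quantities of the form E(θ ∪ R, {1}) − ABase used below measure: how much a candidate rule raises the classification accuracy of the current base. The sketch below illustrates one possible reading of this utility, assuming that E returns the share of correctly classified training samples, that inference is winner-takes-all over weighted rule firing strengths, and that rules follow the dictionary layout of the scout-bee sketch above; none of these details is fixed by the recovered text.

def triangular(x, term):
    """Membership degree of x in a triangular term (a, b, c)."""
    a, b, c = term
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def firing_strength(rule, sample):
    return min(triangular(x, t) for x, t in zip(sample, rule["antecedent"]))

def classify(rules, sample):
    """Winner-takes-all inference: the class of the rule with the largest weighted firing strength."""
    best = max(rules, key=lambda r: r["weight"] * firing_strength(r, sample), default=None)
    return best["class"] if best else None

def accuracy(rules, samples, labels):          # plays the role of E(theta, .) in the listing below
    if not rules:
        return 0.0
    hits = sum(classify(rules, s) == y for s, y in zip(samples, labels))
    return hits / len(samples)

def utility(rule, base_rules, samples, labels):
    """Accuracy gain from adding `rule` to the current base: E(theta ∪ {R}) − E(theta)."""
    return accuracy(base_rules + [rule], samples, labels) - accuracy(base_rules, samples, labels)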
Input: s – the number of scout bees, o – the number of working bees, l – the number of the best rules under investigation, z – the number of rules generated by the algorithm, θ – the initial rule base obtained by the algorithm of initialisation of the fuzzy classifier rule base.
Output: θ* – the effective base of the fuzzy classifier rules.
Algorithm variables:
RScouts – the multitude of rules received by the scout bees;
BestScout – the best rule according to the value of utility from the multitude RScouts;
RBestScoutsWork – the multitude of o rules received by the working bees perfecting the rule BestScout;
REllite – the multitude of the l rules from RScouts closest to BestScout in terms of the result;
RElliteWorkj – the multitude of o rules received as a result of perfecting the rule REllitej by the working bees, 1 < j < l;
ABase – the initial classification accuracy at the current iteration;
ABestScout – the increase of the classifier precision at the cost of inclusion of BestScout into the rule base;
AScoutsi – the increase of the classifier precision at the cost of inclusion of RScoutsi into the rule base RBase, 1 < i < s;
ABestScoutWorki – the increase of the classifier precision at the cost of inclusion of RBestScoutWorki into the base RBase, 1 < i < o;
AElliteWorkji – the increase of the classifier precision at the cost of inclusion of RElliteWorkji into the base RBase, 1 < j < l, 1 < i < o;
CountAddRulle – the number of rules added into the rule base.
Algorithm:
1: θ* = θ
2: Calculate ABase = E(θ, {1})
3: Scout bees generate s rules RScoutsi, 1 < i < s, according to the algorithm of the scout bee work
4: For each i-th scout calculate: AScoutsi = E(θ ∪ RScoutsi, {1}) – ABase
5: Find the BestScout rule which satisfies the condition ABestScout = max(AScoutsi)
6: REllite = ∅
7: Form the multitude REllite in the following way: REllite contains those RScoutsi for which i satisfies the condition min1<i<s(ABestScout – AScoutsi) and RScoutsi ∉ RElliteWork, 1 < i < s
8: Each i-th working bee forms from BestScout the vector RBestScoutWorki, 1 < i < o, using the method of the local search
9: For each i-th working bee calculate: ABestScoutWorki = E(θ ∪ RBestScoutWorki, {1}) – ABase;
10: Each i-th working bee forms from REllitej the vector RElliteWorkji, 1 < j < l, 1 < i < o, using the method of the local search
11: For each j-th rule from REllite Do:
11.1: For each i-th working bee calculate: AElliteWorkji = E(θ ∪ RElliteWorkji, {1}) – ABase;
End do (j);
12: Place into the base θ* the rule corresponding to the condition max(ABestScout, ABestScoutWork, AElliteWork);
13: CountAddRulle := CountAddRulle + 1; if CountAddRulle = z then EXIT, otherwise go to step 2.
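Putting the steps of this listing together, the rule-base construction loop can be sketched as follows. The sketch reuses the hypothetical scout_bee_rule and utility helpers from the earlier sketches, draws the target class of each scout at random, and treats the working bees' local search as a small random perturbation of the term borders; these choices are assumptions, since the recovered text only refers to "the method of the local search".

import random

def local_search(rule, scale=0.05):
    """Working bee: slightly perturb the borders of every term (assumed form of the local search)."""
    new_terms = []
    for a, b, c in rule["antecedent"]:
        width = (c - a) or 1.0
        a2, b2, c2 = sorted(v + random.uniform(-scale, scale) * width for v in (a, b, c))
        new_terms.append((a2, b2, c2))
    return {"antecedent": new_terms, "class": rule["class"], "weight": rule["weight"]}

def build_rule_base(theta, mins, maxs, classes, samples, labels, s=20, o=10, l=3, z=10):
    """Add up to z rules to the initial base theta (sketch of the Section 4.3 loop)."""
    theta = list(theta)
    for _ in range(z):
        scouts = [scout_bee_rule(mins, maxs, random.choice(classes)) for _ in range(s)]  # step 3
        gains = [utility(r, theta, samples, labels) for r in scouts]                     # step 4
        order = sorted(range(s), key=lambda i: gains[i], reverse=True)
        candidates = [scouts[i] for i in order[:1 + l]]       # steps 5-7: BestScout and l elite scouts
        for rule in list(candidates):                         # steps 8-11: o working bees per candidate
            candidates.extend(local_search(rule) for _ in range(o))
        best = max(candidates, key=lambda r: utility(r, theta, samples, labels))
        theta.append(best)                                    # steps 12-13: keep the best candidate
    return theta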
4.4 The modified bee colony algorithm to optimize the weights

The above mentioned algorithm is used to optimize the CF vector. The algorithm is modified because there is no single understanding of how bees are attracted to the source of the nectar. Although it is known that this correspondence has a stochastic nature, it is connected with the quality of the source of the nectar. In the modified algorithm the enlistment is performed according to a genetic algorithm breeding method; to choose the solution, an annealing imitation method is used.
Input: the size of the hive U, the rule base θ, the number of iterations Iter, the required accuracy E, the initial temperature T0, the cooling index α, the percentage of scouts in the hive P_S, the parameter g, the form of the population formation algorithm Alg.
Output: CF – the optimal rule weighing coefficients of the fuzzy classifier.
Algorithm variables:
Iter – the number of the current iteration;
best – the best decision vector;
BS – random decision vectors of the scouts;
W – the block of decision vectors of the working bees;
NW – decision vectors formed on the ground of the whole block of working bees;
NB – decision vectors formed on the ground of the vector best;
F – the storage of all decision vectors;
Fj.co – the standardized estimated accuracy of the j-th vector of the storage F.
Algorithm:
1: Iter = 1, g = 5, |BS| = P_S·U/100%, T = T0, W = ∅, CF = ∅
2: Create random decision vectors BS for each of the scouts
3: Calculate the classification accuracy BSj.accuracy = E(θ, BSj)
4: Define the best decision best = argmax(BSk.accuracy, Wq.accuracy)
5: Run the cycle of j from 1 to |BS|
5.1: IF exp(−|BSj.accuracy − best.accuracy|/T) < rand·g THEN include the j-th decision into the working bees' set W
6: best is included into the working bees' set W
7: Form new decisions NW on the ground of W. Run the cycle of j from 1 to |W|:
7.1: NWj = Wj ± rand·(Wj − best)·g;
8: Calculate the accuracy NWj.accuracy = E(θ, NWj)
9: Form new decisions NB on the ground of best. Run the cycle of j from 1 to |W|:
9.1: NBj = best ± rand·(Wj − best)·g;
10: Calculate the accuracy NBj.accuracy = E(θ, NBj)
11: Calculate the normalized accuracy of the decisions for all foragers F = NW + NB + W
12: Fj.co := Fj.accuracy / Σi Fi.accuracy
13: Formation of W. It will include decisions from F:
   if Fj.co > 0.8, decision j is included 3 times;
   if Fj.co > 0.4, decision j is included 2 times;
   if Fj.co > 0.05, decision j is included 1 time.
14: Formation of a new population with the number equal to the hive size U, according to the given algorithm Alg
15: Formation of scouts: |BS| := |F| − |W|; if |BS| is more than P_S·U/100%, then |BS| := P_S·U/100%
16: If the number of iterations is exceeded or the required accuracy E is gained, then CF = Wk, k = argmax(Wi.accuracy), 1 < i < |W|, EXIT; otherwise Iter := Iter + 1, T := αT, move to step 2.

Figure 2. Wine data: a) accuracy vs. number of rules; b) accuracy vs. size of hive for training and test patterns.
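The recruitment test of step 5.1 and the candidate-generation rules of steps 7.1 and 9.1 can be sketched as follows. Decision vectors are taken to be vectors of rule weights CF clipped to [0, 1], eval_cf stands in for E(θ, ·), and the default parameter values, the placement of the cooling step and the clipping are assumptions made only for the sake of a runnable illustration.

import math
import random

def anneal_weights(eval_cf, n_rules, U=30, iters=100, T0=1.0, alpha=0.95, g=5.0, p_s=0.3):
    """Sketch of the modified bee colony of Section 4.4 for tuning the rule weights CF."""
    T = T0
    W = []                                                    # working-bee decision vectors
    best = [random.random() for _ in range(n_rules)]
    for _ in range(iters):
        n_scouts = max(1, int(p_s * U))                       # step 15: |BS| bounded by P_S*U/100%
        BS = [[random.random() for _ in range(n_rules)] for _ in range(n_scouts)]   # step 2
        best = max(BS + W + [best], key=eval_cf)              # step 4: best decision so far
        # step 5.1: annealing-style recruitment of scouts into the working-bee set W
        W = W + [v for v in BS
                 if math.exp(-abs(eval_cf(v) - eval_cf(best)) / T) < random.random() * g]
        W.append(best)                                        # step 6
        new = []
        for w in W:                                           # steps 7.1 and 9.1: NW and NB vectors
            new.append([wi + random.uniform(-1, 1) * (wi - bi) * g for wi, bi in zip(w, best)])
            new.append([bi + random.uniform(-1, 1) * (wi - bi) * g for wi, bi in zip(w, best)])
        F = [[min(max(x, 0.0), 1.0) for x in v] for v in new + W]   # step 11: foragers F = NW + NB + W
        total = sum(eval_cf(v) for v in F) or 1.0
        W = []
        for v in F:                                           # step 13: copies by normalized accuracy
            co = eval_cf(v) / total
            copies = 3 if co > 0.8 else 2 if co > 0.4 else 1 if co > 0.05 else 0
            W.extend([list(v)] * copies)
        W = sorted(W or [best], key=eval_cf, reverse=True)[:U]   # step 14: population of size U
        T *= alpha                                            # step 16: cooling, next iteration
    return max(W, key=eval_cf)                                # CF with the best accuracy

A caller would supply eval_cf as the training accuracy of the classifier θ under a given weight vector.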
5 BENCHMARK AND COMPARISON OF RESULTS

The proposed fuzzy classifier is tested on the benchmark classification problems Bupa, Iris, Glass, New thyroid, and Wine. They are publicly available on the KEEL-dataset repository web page (http://www.keel.es). Ten-fold cross-validation is employed to examine the generalization ability of the proposed approach to classifying the dataset. In the ten-fold cross-validation procedure, the datasets are separated into ten subsets of almost the same size. Nine subsets are used as training patterns for designing the fuzzy rule-based classifier. The remaining single subset is used as the test set to evaluate the constructed fuzzy classifier. This procedure is performed ten times, and the roles of the subsets are exchanged with each other so that every subset is used as the test set. The computer simulation implemented the previously mentioned procedure ten times and calculated the average accuracy, the corresponding standard deviation, and the average number of rules associated with the generated fuzzy rule-based classifiers.
We examine the relationship between the accuracy and the number of rules, and between the accuracy and the size of the hive. On the Wine dataset, the performance for a varying number of fuzzy rules and size of the hive is shown in Fig. 2; on the Glass dataset, in Fig. 3.
From the figures, we can see that the accuracy of classification on the learning patterns and the test patterns is further improved by increasing the number of rules for each class

Table 3. Average results of training accuracy (%) obtained. Data sets: Bupa, Iris, Glass, New thyroid, Wine.
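The ten-fold protocol just described can be reproduced with the short routine below; build_classifier and accuracy are placeholders for whichever training routine and accuracy measure are used, and the outer loop repeats the whole procedure to average the scores, as in the experiments reported here.

import random

def ten_fold_cv(samples, labels, build_classifier, accuracy, repeats=10, seed=0):
    """Mean and standard deviation of test accuracy over repeated ten-fold cross-validation (sketch)."""
    rng = random.Random(seed)
    scores = []
    for _ in range(repeats):
        idx = list(range(len(samples)))
        rng.shuffle(idx)
        folds = [idx[k::10] for k in range(10)]               # ten subsets of almost the same size
        for k in range(10):
            test = folds[k]
            train = [i for j, fold in enumerate(folds) if j != k for i in fold]
            model = build_classifier([samples[i] for i in train], [labels[i] for i in train])
            scores.append(accuracy(model, [samples[i] for i in test], [labels[i] for i in test]))
    mean = sum(scores) / len(scores)
    sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    return mean, sd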
algorithm of the complicated sample – Glass; the second place is taken by Bupa, the third by New thyroid and Wine, and the worst result is obtained on Iris. The bad result on the Iris sample is connected with the fact that our algorithms do not mark the important markers of the objects, unlike Hider and Sgerd, but treat all markers as if they were of the same importance.
The developed algorithms are applied to tuning the fuzzy classifier for forecasting the efficiency of non-pharmacological treatment. The forecasting is based on the analysis of retrospective data before and after the treatment of rehabilitated patients. The patients are prescribed one of five kinds of treatment. The empirical base for the forecasting is the medical data of patients rehabilitated in the Federal State Institution Tomsk Scientific-Research Institute of Health-Resort and Physiotherapy of the Federal Medical and Biological Agency of Russia.
The index of the functional stress of the organism is FNO = Index_AG / Index_LRP, where Index_AG is the adaptive hormone index: the ratio of the glucocorticoid concentration (GC) to insulin (IS) in blood serum; Index_LRP is the index of the lipid reserve for peroxidation and is calculated on the basis of the data totality.
The patient undergoes the medical test after the treatment course, and then the FNO index is calculated. An increase of the FNO index value in dynamics gives evidence of a strengthening of the functional stress of the organism, and a decrease gives evidence of normalization of the disturbed function.
FNO_koef = FNO_before / FNO_after serves as the prediction value. The value of this index gives evidence of the treatment effectiveness. If FNO_koef > 1, the patient has an improvement after passing the treatment course (class 1); otherwise, a notable improvement is not observed (class 2).
The fuzzy system permits giving predictions of the treatment performance for new incoming patients with one or another complex after training on the existing precedents (training set).
As input variables we selected, in the first experiment, KZ – glucocorticoid and IS – insulin; in the second one, KZ – glucocorticoid, TSH – thyroid-stimulating hormone, and TST – testosterone.
If the whole available sample is used for training, the problem of so-called 'overfitting' can arise. In other words, the classifiers will be tuned only on this sample and will not necessarily be effective when working with other data. To overcome this problem and to obtain a fuzzy system of high generalization ability, we used the method of cross-validation. For each of the five complexes, each table of observations is divided into training and testing selections in the ratio 80:20.
Before the application, our proposed fuzzy classifier is compared with two different methods: a Particle Swarm Optimization algorithm (PSO) and an Ant Colony Optimization algorithm (ACO) with singleton approximation.
The average results from 25 experiments with the same input parameters are compared with the algorithms that were used before and are given in Table 7.

Table 7. Classification results of training and test accuracy (%).

                          Pfc***               PSO                  ACO
ID*             KT**      Training   Test      Training   Test      Training   Test
KZ, IS
                1         90.95      75.33     72.24      71.43     78.53      72.72
                2         89.61      70.94     69.65      60.28     79.57      74.61
                3         91.48      81.43     68.89      66.43     76.04      74.20
                4         87.01      74.29     63.89      56.71     67.68      65.31
                5         85.59      70.00     59.50      56.11     67.74      61.80
KZ, TSH, TST
                1         81.25      69.5      67.45      59.30     75.32      65.95
                2         85.72      71.36     71.34      67.75     76.75      73.11
                3         82.5       67.16     70.15      62.13     78.17      65.51
                4         80         72.51     61.43      61.14     74.26      69.43
                5         80         62.57     60.82      57.33     71.38      61.48

*ID – Input Data
**KT – Kind of treatment
***Pfc – Proposed fuzzy classifier

According to Table 7, the proposed fuzzy classifier gives better results than ACO and PSO, except in one case. In this case, the result of the ACO is better on the testing selection of the second kind of treatment.
The trained fuzzy system is used to choose the most effective kind of treatment. For this purpose, a new patient's data is sent to the system input, and then the system gives back the foreseeable class of the patient's state change for each kind of treatment.

6 CONCLUSION

Transformation of experimental information into a fuzzy knowledge base can be a useful method of data processing in medicine, banking, management, and other areas where decision-makers prefer transparent and easily interpreted verbal rules rather than strict quantitative relationships.
A modified version of the ABC algorithm for the design of fuzzy rule-based classifiers has been introduced. Four components of the algorithm are considered: initialization, the work of the scout bee, rule antecedent generalisation, and scale setting.
The No Free Lunch Theorem shows that in the absence of assumptions we should not prefer any classification algorithm over another (Wolpert & Macready, 1997). The given comparisons of the developed algorithm with its analogues demonstrate that the proposed algorithms are feasible and effective in solving complex classification problems.
The algorithms are checked against real data with the implementation of the fuzzy classifier to forecast the efficiency of non-medical treatment.
The offered algorithms can be implemented to solve questions in the area of data mining.
This project is done with financial support from the Russian Foundation for Basic Research (project No.