Informatics, Networking and Intelligent Computing: Proceedings of the 2014 International Conference on Informatics, Networking and Intelligent Computing (INIC 2014), 16–17 November 2014, Shenzhen, China. 1st Edition. Jiaxing Zhang (Editor).

The document is a collection of proceedings from the 2014 International Conference on Informatics, Networking and Intelligent Computing held in Shenzhen, China. It includes contributions on various topics such as computational intelligence, networking technology, systems and software engineering, and signal processing, showcasing innovative ideas and state-of-the-art research in these fields. The editor of the proceedings is Jiaxing Zhang, and the publication is intended for academics and professionals in informatics and networking.

Informatics, Networking and Intelligent Computing

Editor: Jiaxing Zhang

Informatics, Networking and Intelligent Computing collects contributions to the 2014 International Conference on Informatics, Networking and Intelligent Computing (Shenzhen, China, 16–17 November 2014). The topics covered include:
- Computational intelligence
- Networking technology and engineering
- Systems and software engineering
- Information technology and engineering applications, and
- Signal and data processing

Informatics, Networking and Intelligent Computing not only provides the state-of-the-art in informatics and networking technology, but also offers innovative ideas for present and future problems. The book will be invaluable to academics and professionals involved in informatics, networking and intelligent computing.

an informa business
INFORMATICS, NETWORKING AND INTELLIGENT COMPUTING
PROCEEDINGS OF THE 2014 INTERNATIONAL CONFERENCE ON INFORMATICS,
NETWORKING AND INTELLIGENT COMPUTING (INIC 2014), 16–17 NOVEMBER 2014,
SHENZHEN, CHINA

Informatics, Networking and Intelligent Computing

Editor

Jiaxing Zhang
Wuhan University, Wuhan, Hubei, China
CRC Press/Balkema is an imprint of the Taylor & Francis Group, an informa business
© 2015 Taylor & Francis Group, London, UK
Typeset by MPS Limited, Chennai, India
All rights reserved. No part of this publication or the information contained herein may be reproduced,
stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, by
photocopying, recording or otherwise, without written prior permission from the publishers.
Although all care is taken to ensure integrity and the quality of this publication and the information
herein, no responsibility is assumed by the publishers nor the author for any damage to the property or
persons as a result of operation or use of this publication and/or the information contained herein.
Published by: CRC Press/Balkema
P.O. Box 11320, 2301 EH Leiden, The Netherlands
e-mail: [email protected]
www.crcpress.com – www.taylorandfrancis.com
ISBN: 978-1-138-02678-0 (Hardback)
ISBN: 978-1-315-73453-8 (Ebook PDF)
Informatics, Networking and Intelligent Computing – Zhang (Ed.)
© 2015 Taylor & Francis Group, London, ISBN: 978-1-138-02678-0

Table of contents

Preface IX
Organizing committee XI

Computational intelligence
Decomposition genetic algorithm for cellular network spatial optimization 3
M. Livschitz
A heating and cooling model for office buildings in Seattle 9
W.Q. Geng, Y. Fu & G.H. Wei
Multi-depth Deep Feature learning for face recognition 15
C.C. Zhang, X.F. Liang & T. Matsuyama
Research on camera calibration based on OpenCV 21
H.M. Nie
Designing fuzzy rule-based classifiers using a bee colony algorithm 25
I.A. Hodashinsky, R.V. Meshcheryakov & I.V. Gorbunov
City management based on Geospatial Business Intelligence (Geo-BI) 35
Y.L. Zhou & W.J. Qi
Research into the development mode of intelligent military logistics 39
F. Zhang, D.R. Ling & M. Wang
Incomplete big data imputation algorithm using optimized possibilistic c-means and
deep learning 43
H. Shen & E.S. Zhang
Human-machine interaction for an intelligent wheelchair, based on head poses 49
Y. Wang, N. Liu & Y. Luo
An optimization model in the design of a product process 55
T. Qi, S.P. Fang & C.Q. Liu

Networking technology and engineering


A new SIFT feature points restoration based on a watermarking scheme resilient to
geometrical attacks 61
O.J. Lou, S.H. Li, Z.X. Liu & S.T. Tang
Using CALL (Computer-assisted Language Learning) to achieve multidimensional
college English teaching 67
W. Liu
Reflections on multimedia teaching 71
W.G. Chang
Electromechanic installations vibration acceleration protection system 75
V.I. Erofeev, A.S. Plehov & D.U. Titov
The study of CBI theme-based teaching mode of college English from multiple intelligence
module perspective 79
W. Liu

The analysis of access control model based on Single Sign-on in SOA environment 83
G.Z. Wang, B. Zhang, X.F. Fei, Y. Liu, H.R. Gui & H.R. Xiong
An Android malware detection method using Dalvik instructions 89
K. Zhang, Q.S. Jiang, W. Zhang & X.F. Liao
Identification of spoofing based on a nonlinear model of a radio frequency power amplifier 95
Y.M. Gan & M.H. Sun
Computational model for mixed ownership duopoly competition in the electricity sector
with managerial incentives 101
V. Kalashnikov-Jr., A. Beda & L. Palacios-Pargas

Systems and software engineering


A software reliability testing theory and technology research 107
H.L. Sun, X.Z. Hou, K. Zh & H.F. Luo
Fingertips detection and tracking based on a Microsoft Kinect depth image 113
Z.X. Li, J. Liu, H.C. Wu & Z.M. Chen
A virtual dressing room approach based on Microsoft Kinect 117
J.F. Yao, L. Lysandra, L. Yang, B.R. Yang & Z.C. Huang
ASM (Active Shape Model) modeling of the human body and its application in virtual
fitting 123
X.Y. Xiong & X.J. Zhu
Building an orchestration architecture for cloud services: A case study of designing a platform
as a service (PaaS) runtime environment 127
P.C. Chen, Y.T. Huang, Y.C. Lee & C.C. Chu
Development of an MIPI (Mobile Industry Processor Interface) interface camera driver
based on WINCE (Windows Embedded Compact) 131
K. Xiao, L. Shan & Z.T. Li
Trends in the development of databases on statistics in the OECD, the EU and Russia 135
N. Chistyakova, V. Spitsin, J. Abushahmanova & N. Shabaldina
The effect of casting geometry on the thermal gradient in A201 aluminium alloy
plate castings 139
Y.S. Kuo & M.F. Lu
A research on multi-implementation game product-based learning for game development
specialty students 143
C. He
A network behaviour analyser: Automatic fingerprint extraction from functions of
mobile applications 147
P. Liu & C.Y. Wu

Information technology and engineering application


Design of dipole array antenna for a 2.4-GHz wireless local area network application 155
Y.Y. Lu & K.C. Liao
A Token-based Network Communication Library (TBNCL) in a private cloud storage system 159
Q. Wang, L. Li, Z.H. Guo, M. Lin & R. Pan
Analysis of phased array antenna’s vibration effects on the performance of shipborne MLS 163
H.S. Xie, P. Zhou, J.G. Wei, B.K. Luan & D. Wang
Application of PUS (Packet Utilization Standard) and XTCE (XML Telemetric and
Command Exchange) in satellite telemetry data exchange design and description 169
Y. Liu, J.Q. Li & Z.D. Li

Information system designing for innovative development assessment of the efficiency
of the Association of Innovative Regions of Russia members 173
V.V. Spitsin, O.G. Berestneva, L.Y. Spitsina, A. Karasenko, D. Shashkov &
N.V. Shabaldina
Selected aspects of applying UWB (Ultra Wide Band) technology in transportation 177
M. Džunda, Z. Cséfalvay & N. Kotianová
Design of a wireless monitoring system for a Pleurotus eryngii cultivation environment 183
L. Zhao & X.J. Zhu
Research on Trellis Coded Modulation (TCM) in a wireless channel 189
X.M. Lu, F. Yang, Y. Song & J.T. He
Riemann waves and solitons in nonlinear Cosserat medium 193
V.I. Erofeev & A.O. Malkhanov
Research on the adaptability of SAR imaging algorithms for squint-looking 197
M.C. Yu
Improved factor analysis algorithm in factor spaces 201
H.D. Wang, Y. Shi, P.Z. Wang & H.T. Liu
Research on the efficacy evaluation algorithms of Earth observation satellite mission 207
H.F. Wang, Y.M. Liu & P. Wu
An image fusion algorithm based on NonsubSampled Contourlet Transform and Pulse
Coupled Neural Networks 211
G.Q. Chen, J. Duan, Z.Y. Geng & H. Cai
A cognitive global clock synchronization algorithm in Wireless Sensor Networks (WSNs) 215
B. Ahmad, S.W. Ma, L. Lin, J.J. Liu & C.F. Yang
A multi-drop distributed smart sensor network based on IEEE1451.3 219
H.W. Lu, L.H. Shang & M. Zhou
Solitary strain waves in the composite nonlinear elastic rod 225
N.I. Arkhipova & V.I. Erofeev
Semiconducting inverter generators with minimal losses 227
A.B. Daryenkov & V.I. Erofeev
Research into a virtual machine migration selection strategy 231
L. Sun & X.Y. Wu
An analysis of the influence of power converters on the operation of devices 235
A.I. Baykov, V.I. Erofeev & V.G. Titov

Signal and data processing


The classification of insect sounds by image feature matching based on spectrogram analysis 241
A.Q. Jia, B.R. Min & C.Y. Wei
Research on business model innovation method based on TRIZ and DEA 247
X. Liu, J.W. Ding & X.Q. Ren
Analytical solution for fuzzy heat equation based on generalized Hukuhara differentiability 251
T. Allahviranloo, Z. Gouyandeh & A. Armand
Identification of space contact for a dynamics medium 257
V.S. Deeva, M.S. Slobodyan, G.A. Elgina, S.M. Slobodyan & V.B. Lapshin
Membership functions of fuzzy sets in the diagnosis of structures pathology 261
G.G. Kashevarova, M.N. Fursov & Y.L. Tonkov
Global stock market index analysis based on complex networks and a multiple
regression model 265
Z.L. Zhang & S.J. Qiao

A study of sign adjustment of complete network under the second structural theorem 269
H.Z. Deng, J. Wu, Y.J. Tan & P. Abell
Sybil detection and analysis of micro-blog Sina 273
R.F. Liu, Y.J. Zhao & R.S. Shi
A kinematics analysis of actions of a straddled Jaeger salto on uneven bars performed by
Shang Chunsong 279
L. Zhong, J.H. Zhou & T. Ouyang

Author index 283


Preface

The 2014 International Conference on Informatics, Networking and Intelligent Computing (INIC2014) will be held in Shenzhen, China on November 16–17, 2014. The main purpose of this conference is to provide a common forum for experts and scholars of excellence in their domains from all over the world to present their latest and inspiring work in the area of informatics, networking and intelligent computing.
Informatics helps people make full use of information technology to improve the efficiency of their work. In recent years, networking technology has experienced rapid development and is widely used in both daily life and industrial manufacturing, in fields such as games, education, entertainment, stocks and bonds, financial transactions, architectural design, and communication. Any company or institution with ambition cannot do without the latest high-tech products. At present, intelligent computing is one of the most important methods of intelligent science and is also a central topic of information technology; machine learning, data mining and intelligent control, for example, have become hot topics of current research. In general, informatics, networking and intelligent computing have become more and more essential to people's life and work.
INIC2014 received a large number of submissions, of which fifty-eight papers were finally accepted after review. These articles were divided into several sessions: computational intelligence, networking technology and engineering, systems and software engineering, information technology and engineering applications, and signal and data processing.
During the organization of the conference we received much help from many people and institutions. Firstly, we would like to express our thanks to the whole committee for their support and enthusiasm. Secondly, we would like to thank the authors for their careful writing. Lastly, we would also like to express our appreciation of the kindness of the organizers of the conference and the other people who have helped us.
We wish all the attendees at INIC2014 an enjoyable scientific conference in Shenzhen, China. We really hope that all our participants can exchange useful information and make amazing progress in informatics, networking and intelligent computing after this conference.
INIC2014 Committee


Organizing committee

Honor Chair
E.P. Purushothaman, University of Science and Technology, India

General Chair
Jun Yeh, Tallinn University of Technology, Estonia
Y.H. Chang, Chihlee Institute of Technology, Taiwan

Program Chair
Tim Chou, Advanced Science and Industry Research Center, Hong Kong
W. K. Jain, Indian Institute of Technology, India

International Scientific Committee


M. Subramanyam, Anna University, India
Urmila Shrawankar, G.H. Raisoni College of Engineering, India
M.S. Chen, Da-Yeh University, Taiwan
I. Saha, Jadavpur University, India
X. Lee, Hong Kong Polytechnic University, Hong Kong
Antonio J. Tallón-Ballesteros, University of Seville, Spain
J. Xu, Northeast Dianli University, China
Q.B. Zeng, Shenzhen University, China
C.X. Pan, Harbin Engineering University, China
L.P. Chen, Huazhong University of Science and Technology, China
K.S. Rajesh, Defence University College, India
M.M. Kim, Chonbuk National University, Korea
X. Ma, University of Science and Technology of China, China
L. Tian, Huaqiao University, China
M.V. Raghavendra, Adama Science & Technology University, Ethiopia
J. Ye, Hunan University of Technology, China
Q. Yang, University of Science and Technology Beijing, China
Z.Y. Jiang, University of Wollongong, Australia

Computational intelligence

Decomposition genetic algorithm for cellular network spatial optimization

M. Livschitz
TEOCO, Fairfax, VA, USA

ABSTRACT: Spatial optimal planning of cellular networks is a fundamental problem in network design. This
paper suggests a new algorithmic approach to automatic cell planning and describes a multi-layer decomposition
genetic algorithm which significantly improves optimization convergence. The algorithm's convergence is
compared with that of single-layer genetic algorithms, based on the cell planning of real cellular networks.

1 INTRODUCTION

The cellular technologies of 3G and 4G cellular networks use the same frequency for all users and are projected to provide a wide variety of new services, based on high-data-rate wireless channels. Existing 3G networks support 16.0 Mbps applications whilst 4G networks will support bit rates higher than 80 Mbps. The important aspect of these technologies is that these systems are most likely to be implemented supporting high frequencies above 2 GHz. Such high frequencies yield very quick signal degradation and strong diffraction from small obstacles, forcing the reduction of cell size. On the other hand, the support of high-speed data requirements results in high Signal-to-Noise Ratio (SNR) requirements. In order to decrease the amount of interference and increase the quality of coverage, spatial cell optimization and planning leads to higher antenna tilt requirements and an additional decrease of cell size, resulting in increased site densities, especially in urban areas.
New applications and services lead to large differences in traffic distributions and sector loads [7], making the cell planning process more complicated. Cell planning in modern cellular networks must have the capability of responding to traffic distribution changes, because this is the main way to improve network efficiency. This is true for the planning stage, when the network is planned for a particular load, as well as after network deployment and traffic growth.
Spatial network planning is one of the most important issues when building a new network or when expanding an existing network during traffic growth or changing traffic patterns. Such planning will be much more important and complicated for 4G networks. Producing a viable cell plan is a key ingredient in the ability of an operator to provide QoS and coverage in a cost-efficient manner.
Cell planning typically includes planning a network of base stations that provides a high level of coverage in the service area with respect to current and future traffic load requirements. Based on traffic demands for the area, propagation models, and various constraints and costs for base station locations, the cell plan should define a set of locations for base stations and base-station configurations, including sector numbers, directions, number of antennas per cell and antenna directions. During the planning stage, the most vital issue is usually site location definition; when expanding existing networks, the most critical issue is antenna configuration and direction. The design of most 3G and 4G networks includes the installation of antennas with remotely changeable electrical tilt, allowing for quick implementation of cell planning results and supporting quicker response times and shorter planning cycles.
Most 4G network deployments will be done by existing cellular operators who have 2G and 3G network infrastructure, above existing infrastructure. This condition will result in site locations being more or less predefined, based on the existing network site locations or among permitted shared locations, so that even during the planning stage the key cell planning issue is choosing antenna types and directions.
In this paper we suggest a new algorithmic approach based on genetic algorithms (GA) for resolving the cell-planning problem, and we illustrate the algorithm's efficiency by using some results of existing cellular network optimization projects.

2 DECOMPOSITION GENETIC ALGORITHM

2.1 Decomposition of cell planning

Let us consider the cell-planning or budget cell-planning problems [3] of a cellular network with a given set of base stations Ib = {1, . . . , I}. The base-station configuration is defined by a vector of parameters Pp representing the typical antenna type (pattern), the azimuth, tilt, and height, together with cell power definitions. If there is only one antenna per cell, the configuration of antenna i, Pi = {p1, . . . , p5}, is defined
by a vector of 5 parameters. Each parameter pp is confined within a boundary and can be assigned one of a set of discrete values.
The boundaries of permissible values could be different. Thus tilt changes could be assigned any values in a range with a predefined step (usually 1 deg), azimuths have values according to a higher step and with a limitation on minimal change, while antenna types and heights are assigned values from a list of available patterns or heights.
The goal of cell planning F(K) is defined by combining different key performance indicators (KPIs) representing network coverage quality or capacity, K = {K1, . . . , Kk}, where each Kk represents a bad level of a KPI defined by a fuzzy threshold Tk [8].
All KPIs are dependent on antenna configurations. Cell planning should improve the defined KPIs by changing antenna configurations according to the predefined constraints. In the case of budget cell-planning problems, a cost will be assigned to each antenna change and an additional constraint on the total cost of network changes should be added.
This means that the optimal cluster cell configuration Po = {P1, . . . , PI} is defined by one value from ∏_{i=1}^{I} ∏_{p=1}^{5} N_ip available combinations. In most cases the spatial cell planning problem is considered as NP-hard, which means that finding an optimal solution for it (within real networks) is not feasible in a reasonable running time. Thus, much of the work is done with genetic algorithms (GA) for the spatial cell planning problem [3, 6, 8, 9] and with different heuristic solutions for improving the GA convergence.
Common GA schemes suppose there is a quick manner of goal function calculation and population evaluation. For cellular network optimization this means that there is an evaluator, allowing for an estimate of network quality for multiple antenna spatial locations and configurations.
The network evaluator for wideband cellular networks should estimate network quality according to the planned traffic load. The best method for wideband network evaluation is based on Monte-Carlo techniques [8], which require simulation of uplinks and downlinks between all the simulated mobiles and all serving cells. The main KPIs depend on the ratio between signals from all services and the interference from all antennas impacting the mobiles. Uplink and downlink power control and handovers between cells should also be taken into consideration by resolving the sequence of system linear equations. As a result, the evaluation could be quite expensive, especially for large network clusters with high antenna density. The complexity of the Monte-Carlo evaluator is at least O(I^2 log(I)). This makes improving the convergence of GA for cellular cell-planning problems in wideband networks very important.
The nature of 3G, 4G and beyond networks requires reaching a higher isolation between remote cells from the second tier and further. On the other hand, close cells have overlapping coverage areas supporting continuous coverage and have a very high impact on each other. Impacts from further cells depend on site location, area topology, building height and other environmental properties. Some tall sites located on mountain tops or building roofs influence distances of 5–15 km, while other sites in urban areas could be almost invisible even at distances less than 1 km. Influences between cells depend on the spatial network configuration and can be changed or eliminated during cellular network optimization activity.
Manual planning of wide areas is usually performed by an interactive, iterative process that includes the following two phases:
• Planning of smaller clusters (local cluster optimization);
• Synchronization between clusters and composing a final solution.
Antennas in small compact clusters should be configured together, taking into consideration the strong influences between them.
The synchronization phase should resolve two types of interactions between clusters: impacts through cells on borders, which are impacted by two or more clusters, and long links from further sectors. There are usually not too many further impacts between antennas from remote clusters. The process described above is repeated till the network KPIs reach predefined values.
In case network planning should fail to reach the required KPIs for the cluster, new additional sectors, sites and locations will be recommended and the iterative process will continue.
We believe that problem-oriented heuristics should be used for the efficient optimization and improvement of the optimization algorithm. The manual optimization process scheme described above leads to the development of a two-level GA for improving the convergence of spatial cellular network optimizations.

2.2 Algorithm description

The main idea of the decomposition genetic algorithm is a reduction of the overall optimization problem to multiple smaller optimization sub-problems with subsequent composition of the results. The reduced optimization problems are resolved independently, bringing local improvement for sub-areas.
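To make the scale of this configuration space concrete, the product formula above can be evaluated for a hypothetical cluster. The per-parameter value counts below (tilt in 1-degree steps, coarser azimuth steps, a handful of patterns, heights and power settings) are illustrative assumptions, not values taken from the paper:

```python
import math

# Hypothetical value counts N_ip for the 5 parameters of one antenna:
# tilt (0-10 deg in 1-deg steps), azimuth (10-deg steps), antenna
# pattern, height, and cell power. Illustrative assumptions only.
values_per_parameter = [11, 36, 4, 3, 5]

def search_space_size(n_antennas, counts):
    """prod_{i=1..I} prod_{p=1..5} N_ip, assuming identical bounds per antenna."""
    return math.prod(counts) ** n_antennas

# A single antenna already has 23,760 configurations; a cluster the
# size of the 165-cell network discussed later is astronomically larger.
print(search_space_size(1, values_per_parameter))
print(len(str(search_space_size(165, values_per_parameter))), "digits")
```

Even with a fast evaluator, exhaustive search over a space of this size is hopeless, which is why heuristic GA search, and in turn the speed of its convergence, matters so much here.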
Figure 1. Decomposition Genetic Algorithm.

This allows an easy distribution of the calculation by running each sub-area optimization in parallel on separate computers. Local optimizations can be stopped after achieving an initial improvement, even before the GA has converged, because the second optimization phase will enable the solution to be finalized. Research on the definition of stop criteria for local optimizations and their impact on convergence results will be performed in the future. Figure 1 is a block diagram describing the decomposition genetic algorithm, built on an existing GA tool.
We consider the decomposition genetic algorithm (DGA) as an extension for any GA implementation, not concerning the GA itself. There is only one exception: the GA used by DGA should allow starting from some particular solution. This capability is usually available in GAs used for cell planning, and especially for budgeted cell planning, which optimize networks starting from existing configurations.

2.2.1 Subset number definition
As a first step the number of subsets (N), which will define the number of sub-areas used for local optimization, is chosen. The number of subsets is defined by the sizes of the small sub-areas or by taking into consideration connectivity between different network parts. As will be shown later, this parameter could significantly impact the convergence improvement which could be reached using the DGA. Too large a number will result in the division of clusters into too small areas, which could not be optimized separately due to the high impact from antennas outside the area, while too low a number will not allow the rapid optimization of sub-areas. For real cellular networks this number could be chosen considering the cluster division on the RNC, and is dependent on cluster size and the number of antennas (I) used for cell planning.

Figure 2. Cluster division into 4 sub-areas.

2.2.2 Sub-area division
The cellular network cluster is automatically divided into N sub-areas, so that the site set is divided into N subsets, Ib = ∪_{n=1}^{N} Sn, according to geographical criteria. Geographical areas should be compact areas including a few dozen cells (Figure 2).
Subsets can overlap but should not be subsets of other subsets: ∀n, k, Sn ⊄ Sk. A site set can be divided randomly, using technological criteria, or based on distance criteria. Automatic division could be done using various algorithms.
A fuzzy logic clustering algorithm is used as the basic division algorithm for DGA. Fuzzy logic clustering associates with each element a set of membership levels which indicate the strength of the association between that data element and a particular cluster. It is used for assigning elements to one or more spatial clusters. Fuzzy clustering creates N + 1 sub-areas, including N non-overlapped sets and one border area. The border area includes elements which have similar membership for two or more clusters. It overlaps with N or fewer clusters and is used for the second phase of optimization by DGA.

2.2.3 Sub-area goal function definitions
Optimization criteria per sub-area are defined at this step. The optimization criteria should generally be the same for a whole cluster, but different sub-areas might have different initial KPIs. The optimization criteria
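As a rough sketch of this division step, the fragment below runs a tiny fuzzy c-means over 2-D site coordinates and peels off a border set of sites whose top two membership levels are close. The clustering loop, the membership-gap threshold, and all names are illustrative assumptions, not the paper's actual implementation:

```python
import math
import random

random.seed(1)

def fuzzy_c_means(points, n_clusters, m=2.0, iters=30):
    """Tiny fuzzy c-means; returns memberships u[point][cluster]."""
    centroids = random.sample(points, n_clusters)
    u = []
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for p in points:
            d = [max(math.dist(p, c), 1e-9) for c in centroids]
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(n_clusters))
                      for j in range(n_clusters)])
        # Centroid update: c_j = sum_i u_ij^m x_i / sum_i u_ij^m
        centroids = []
        for j in range(n_clusters):
            w = [u[i][j] ** m for i in range(len(points))]
            centroids.append(tuple(
                sum(wi * p[ax] for wi, p in zip(w, points)) / sum(w)
                for ax in range(2)))
    return u

def divide_sites(sites, n_clusters, border_gap=0.2):
    """Split sites into N sub-areas plus one border set (N + 1 in total)."""
    u = fuzzy_c_means(sites, n_clusters)
    subareas = [[] for _ in range(n_clusters)]
    border = []
    for i, memb in enumerate(u):
        top, second = sorted(memb, reverse=True)[:2]
        if top - second < border_gap:   # similar membership in >= 2 clusters
            border.append(i)
        else:
            subareas[memb.index(top)].append(i)
    return subareas, border

# Two compact groups of sites with one site midway between them
sites = [(0, 0), (1, 0), (0, 1), (10, 0), (11, 0), (10, 1), (5, 0.5)]
subareas, border = divide_sites(sites, 2)
```

Every site lands in exactly one of the N sub-areas or in the border set, mirroring the N non-overlapped sets plus border area described above.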
are adjusted accordingly. This is an additional advan-
tage of DGA that wide network optimization might be
done based on different criteria per area.

2.2.4 GA sub-area optimizations


The following three steps should be performed for all
N sub-areas:
• Start from the initial network configuration, run
the GA for sub-area optimization according to the
defined criteria.
• Keep the optimized solution Pn = {Pj }, j ∈ Sn for
sub-area n. GA optimization will result in sub-area
antenna configuration.
• Sub-area optimizations are independent of each Figure 3. Optimization results for medium size cluster.
other and can be run in parallel.
Parallel calculation on multiple processes signifi-
cantly improves CPU utilization reduce overall time toolkits, supporting GA. The optimization tool uses
for the optimization. technological operators, which helps improve GA
2.2.5 Solution composing

Solution composing creates the network configuration for the whole cluster (Poin ), combining the optimization results for the sub-areas. An overlap area between the different networks is created based on the border area of the fuzzy clustering. It is used for the global optimization, synchronizing the different solutions. These initial network configurations already show significant improvements in most of the KPIs defined as optimization goals.

2.2.6 Second phase initialization

The initial population for the second phase is created based on the combined solutions of the sub-areas (Poin ). The initial population should include one or more composed solutions, while the other population members are built using mutation operators.

2.2.7 Second phase GA

The GA optimization is run for the whole cluster area, starting from the better initial population. This optimization should further improve the network, synchronizing the configurations of cells located on the borders between areas. It should also eliminate remote links between different areas.

3 ALGORITHM VALIDATION

In this section some results of decomposition GA use are presented. All results are based on the optimization of real 3G networks – two markets with different properties, presenting different cases of cellular optimization. One market represents the optimization of a suburban area with middle site density. The second is a typical high-density urban market, representing the properties of cellular networks in large towns with very high traffic and a high density of cellular sites.

All optimization runs were done using TEOCO's optimization tool and TEOCO's implementation of convergence, based on experience in radio access network optimization. All comparisons were run with the same GA configurations and show the goal function improvement as a function of the evaluation number. For the DGA, the goal function improvement of the second phase is calculated relative to the initial network quality. The plots depict the improvement over the baseline: TEOCO's optimization GA for the whole network, versus the goal function improvement of the second phase of the decomposition GA. As a result of running the first-phase optimizations for all sub-areas, decomposition GAs have some additional overhead. These optimization cycles for sub-areas are short, however, and evaluations for small sub-areas are much quicker than for wide areas, so this additional overhead is minor compared with an optimization run for the whole area and cannot change the overall picture.

3.1 Middle size cluster

Optimization results for a cluster with 165 cells are shown in Figure 3. This network covers a small town in a mountainous environment where the impacts between remote antennas are significant. Decomposition genetic algorithms show a slightly better goal function improvement compared with the baseline GA, but the convergence rate is about 4–5 times higher. The optimization results of DGA are affected by the number N of sub-areas and by their sizes, as used in the first phase.

Figure 4 depicts the dependence of the maximal improvement in the convergence rate that can be reached by DGA on the number N of sub-areas in the cluster division. Dividing the cluster into only a few sub-areas (N is small) does not bring the maximum convergence improvement, while dividing the cluster into too many areas (N is big) also results in a smaller improvement, because the sub-areas become smaller and the borders between sub-areas gain relatively higher weight. These results show that an optimal cluster division can significantly improve algorithm efficiency.
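The two-phase decomposition scheme described above (first-phase sub-area GAs, solution composing, and a second-phase whole-area GA) can be summarized in code. The following is only a minimal sketch, not TEOCO's implementation: the GA loop, the mutation step, and the goal function are toy stand-ins, and a round-robin split stands in for the fuzzy clustering used in the paper.

```python
import random

def goal_function(config):
    # Placeholder KPI: prefers cell settings near 0 (stand-in only).
    return -sum(v * v for v in config.values())

def mutate(config):
    # Perturb one cell parameter by a single step (e.g. tilt/power).
    cfg = dict(config)
    cell = random.choice(list(cfg))
    cfg[cell] += random.choice([-1, 1])
    return cfg

def run_ga(cells, seed_population=None, generations=50):
    """Toy hill-climbing GA: keep mutating the best-known solution."""
    population = seed_population or [{c: 0 for c in cells}]
    best = max(population, key=goal_function)
    for _ in range(generations):
        candidate = mutate(best)
        if goal_function(candidate) > goal_function(best):
            best = candidate
    return best

def decomposition_ga(cells, n_subareas):
    # Phase 1: divide the cluster into N sub-areas (fuzzy clustering
    # in the paper; a round-robin split here) and optimize each one
    # independently -- these short runs are cheap and parallelizable.
    subareas = [cells[i::n_subareas] for i in range(n_subareas)]
    partial = [run_ga(sub, generations=10) for sub in subareas]

    # Solution composing: merge the sub-area results into a single
    # whole-cluster configuration (Poin).
    composed = {c: v for part in partial for c, v in part.items()}

    # Phase 2: seed the whole-area GA with the composed solution so
    # border cells get synchronized from a better starting point.
    return run_ga(cells, seed_population=[composed], generations=40)

best = decomposition_ga(cells=list(range(20)), n_subareas=4)
```

Because the phase-1 runs are independent of one another, this structure is what makes DGA suitable for parallel execution on multi-CPU machines.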
Figure 4. Convergence rate for different divisions.

Figure 5. Optimization results for a large cluster.

3.2 Large cluster optimization

Figure 5 depicts the network improvement for a big cluster with about 700 cells in a dense urban area. In this case DGA is able to reach a more than three times higher improvement in a shorter time. The baseline GA was not able to reach a reasonable improvement in a feasible time for such large clusters. Modern 3G and 4G cellular networks have a higher density and may have thousands of cells, for which the DGA approach is critical. Cluster division into a few sub-areas and the first optimization phase allowed the second phase of DGA to start from a better initial point.

4 FUTURE RESEARCH

In the future we are going to investigate theoretical aspects of the DGA, research common properties of problems where improvement using DGA can be achieved, and estimate possible improvements. This approach could be used for very general spatial problems where local influences are stronger than global ones. Different algorithms of cluster division will also be compared.

5 CONCLUSIONS

Decomposition genetic algorithms show a very significant convergence improvement compared with the usual flat GA scheme. DGA is able to achieve a significantly better network improvement in a shorter time for huge spatial cell optimization and planning problems. DGA is also well suited for parallel calculation on multi-CPU computers.

REFERENCES

[1] E. Amaldi, A. Capone, and F. Malucelli. Optimizing base station siting in UMTS networks. In Proceedings of the IEEE Vehicular Technology Conference, 4, 2001, 2828–2832.
[2] D. Amzallag, J. Naor, and D. Raz. Cell planning of 4G cellular networks. In Proceedings of the 6th IEEE International Conference on 3G & Beyond (3G'2005), London, 2005, 501–506.
[3] D. Amzallag, M. Livschitz, J. Naor, and D. Raz. Cell planning of 4G cellular networks: Algorithmic techniques and results. In Proceedings of the 6th IEEE International Conference on 3G & Beyond (3G'2005), London, 2005, 501–506.
[4] F. Longoni, A. Länsisalmi, and A. Toskala. Radio access network architecture. In H. Holma and A. Toskala (editors), WCDMA for UMTS, John Wiley & Sons, Third edition, 2004, 75–98.
[5] C. Lee and H. G. Kang. Cell planning with capacity expansion in mobile communications: A tabu search approach. IEEE Transactions on Vehicular Technology, 49, 2000, 1678–1691.
[6] K. Lieska, E. Laitinen, and J. Lähteenmäki. Radio coverage optimization with genetic algorithms. In Proceedings of the 9th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC'98), 1998, 318–321.
[7] M. Livschitz and D. Amzallag. High-resolution traffic map of a CDMA cellular network. In Proceedings of the 6th INFORMS Telecommunication Conference, US, March 2002, 62–64.
[8] Maciej J. Nawrocki. Understanding UMTS Radio Network Modelling, Planning and Automated Optimization, John Wiley & Sons, 2006.
[9] H. Lin, R. T. Juang, D. B. Lin, C. Y. Ke, and Y. Wang. Cell planning scheme for WCDMA systems using genetic algorithms and measured background noise floor. IEE Proceedings – Communications, 151, 2004, 595–600.
Informatics, Networking and Intelligent Computing – Zhang (Ed.)
© 2015 Taylor & Francis Group, London, ISBN: 978-1-138-02678-0

A heating and cooling model for office buildings in Seattle

W.Q. Geng & Y. Fu


Department of Applied Computational Mathematical Science, University of Washington, Seattle, Washington, USA

G.H. Wei
School of Management, Dalian Jiaotong University, Dalian, Liaoning, China

ABSTRACT: To specify a model of heating and cooling systems inside office buildings, we focus on the details of the different non-ignorable factors of a standard office building in Seattle. As Seattle has distinctive climatic variations, it is a good case for discussion. Based on an analysis of the temperature profile of Seattle, as well as the application of heating and AC systems, we are able to build a model which can be applied to real, modern buildings in Seattle. We also consider the heat radiation from human bodies and electronics (lighting, computers, and other sources) in an office building. By incorporating all the factors in the research, our goal is to model the temperature curve that best fits the actual situation inside a building. Our model can be used to evaluate and improve AC systems, thus making office buildings more comfortable for office workers, as well as reducing the consumption of energy. For lower energy consumption, we find that the minimum output of the heating and AC system should be 10◦F/h in order to keep the temperature inside the building constant within the comfort zone.

1 BACKGROUND

Office buildings are the places where people spend most of their time besides their homes, because people work in the office building for 8–10 hours per day. A comfortable working environment can boost people's efficiency and happiness. People also tend to want to reduce energy consumption in a more flexible way.

Therefore, our model aims to combine all the factors related to thermal comfort in the building, and then to get rid of a traditional conditioner system, because it wastes a lot of energy. The goal is to design an automatic thermal system inside the building, and to figure out the hourly outdoor and indoor temperature curves, the radiation from human bodies and electrical devices in the office building, and the AC system.

2 PROBLEM DESCRIPTION

This paper presents a heat dynamic model for the calculation of the indoor and outdoor temperature, the heat flow, and the future energy needs for electrical heating in an office building.

We consider the following questions:

1. How does the outside temperature influence the temperature inside the building?
2. How does the temperature inside the building change with inner heat radiation, heating, and an AC system?
3. How do we modify the heating and the AC system to make the temperature in the building fit the comfort zone during working hours?
4. How do we minimize the use of energy while keeping a comfortable temperature, for example, with less running time?

3 SIMPLIFICATIONS AND ASSUMPTIONS

In our model, a lot of assumptions are made to simplify the model, while keeping these assumptions still reasonable enough for a credible result. Since every office building differs in various aspects, such as the thickness of walls, the area per person, and the AC systems, we take average or representative values to build our model, while assuming that these data fit the office building in our model. Also, in the outside temperature model, we took temperatures over a few days and assumed the temperature pattern during these days would fit the whole year. Besides the simplifications in the variables we study, there are a lot of factors that we did not incorporate in our model, for example, the heat escaping from open windows and doors. We assumed that these unconsidered factors were negligible. More detailed simplifications and assumptions are explained in the model section.

4 A MATHEMATICAL MODEL

4.1 An overall model of the temperature inside a building

The temperature inside a building is based on the heat conducted through the building's walls, combined with the heat produced inside the building, as well as
the heating and the AC system. The conduction of heat through walls follows Newton's Law of Cooling/Heating, which indicates that the rate of change of the temperature inside the building is directly proportional to the difference between the temperatures inside and outside. The proportionality constant k represents how fast the heat conduction proceeds. The other factors directly affect the temperature inside the building. Let T(t) be the temperature inside the building; the following equation describes the change of the temperature:

dT/dt = k[M(t) − T(t)] + H(t) + U(t)

where M(t) is the outside temperature, H(t) is the temperature change per hour due to heat produced inside the building, and U(t) is the contribution of the heating and AC system.

4.2 Temperature of Seattle

To build a model that shows the hourly temperature in Seattle, our goal is firstly to find the hourly temperature model that fits each day. Then we incorporate the daily model with the data that we collected throughout the year, namely the minimum, maximum, and average temperature of each month.

To generalize the model over one day, we collected temperature data [9] during four days, from the 14th to the 17th of February 2014, as plotted in the blue line in the figure below. The x-axis represents the 96 hours in total of the 4 days. The y-axis represents the temperature. Based on the plot, the temperatures over the four days act as an ever-repeating wave of peaks and valleys. Therefore, we assume that the temperature is close to a trigonometric function of time of the form:

M(t) = Mave + A sin(2π(t − φ)/24)

M(t) denotes the temperature of Seattle during the day. A is the amplitude of the sine curve. For each day, the upper limit of the amplitude is |Mmax − Mave|, while the lower limit of the amplitude is |Mmin − Mave|. φ is the phase shift of the curve. Based on this formula, we have the amplitude A in one cycle as a segmented function:

A = Mmax − Mave, for 0 ≤ (t − φ) mod 24 < 12
A = Mave − Mmin, for 12 ≤ (t − φ) mod 24 < 24

Also, when t = 6 + φ the temperature reaches its maximum, and when t = 18 + φ the temperature reaches its minimum. To find the phase shift that best fits the curve, MATLAB is used to determine the phase shift value with the highest correlation coefficient, which gives the result φ = 10.04. That indicates that the highest temperature happens around 4 pm and the lowest temperature happens at 4 am. The average minimum and maximum temperatures are determined separately for each day, which causes some discontinuities on the curve.

The second step of the model is to find Mmax, Mmin and Mave as functions of the time t, hourly, throughout a whole year in Seattle. Based on the monthly average, maximum, and minimum temperatures during 2003–2013 in Seattle [1], we plotted the data to show the general trend of the temperature. To fit the curve to a function of t, we used the curve fitting tools in MATLAB with five years of repeating data. Based on the shape of the plot, we chose the Fourier equation with two terms to fit the plot. The functions are of the format:

M(t) = a0 + a1 cos(ωt) + b1 sin(ωt) + a2 cos(2ωt) + b2 sin(2ωt)

To convert the time scale from months to hours, t is multiplied by 12/(365 × 24). According to the fitting data generated by MATLAB, ω = 0.5236 for all three curves. For Mave (t), a0 = 51.95, a1 = −12.4, b1 = −1.227, a2 = 1.163, b2 = 2.114. For Mmax (t), a0 = 59.9, a1 = −14.73, b1 = −0.7251, a2 = 1.264, b2 = 2.858. For Mmin (t), a0 = 44.01, a1 = −10.07, b1 = −1.738, a2 = 1.058, b2 = 1.371. The end of the curve shows some increase in order to fit the Fourier function. Here it shows some error compared with the real data plot, but we assume the fitted curve can represent the hourly averages of the maximum and minimum temperatures in Seattle.

Since we have the temperature curve within one day, as well as the Mmax (t), Mmin (t) and Mave (t) curves throughout a whole year, we can put the two functions together to generate the hourly temperature curve in Seattle within a year, as shown in Figure 1.

Figure 1. Hourly Seattle temperature in one year.

The combined function of the hourly temperature in Seattle will be used as the outside temperature in the heating and cooling model of the building.

4.3 Heat from human body radiation and electrical appliances

In this section, our goal is to find H(t), which denotes the change per hour in temperature due to the heat radiated by people and produced by lights, computers, and other electrical appliances.

Our model is built based on an office-like room that has a limited volume and is used by only one person. The H(t) we generate from this model can be applied to the whole building.

Firstly, the volume of the room is calculated from the average area per person in office buildings
multiplied by the average height of each floor. Then we assume the room is filled with air, which is heated by human radiation and electrical appliances. Therefore, we have:

H(t) = Q(t)/(V ρ c)

where Q(t) is the heat produced by humans and electrical appliances per hour. V denotes the volume of the room, which is also the volume of the air. ρ is the density of air, and c is the specific heat capacity of air.

Based on previous research, the working space in an office is about 185 square feet per person [6]. The normal height of an office is 12 to 15 feet, and in this model we take 13 feet as the height of the room. So V = 185 ft² × 13 ft × (1 m)³/(3.2808 ft)³ ≈ 68.102 m³. Besides office areas, office buildings also have public areas such as lobbies, elevators, and meeting rooms, which are not constantly occupied by people. We assume that the ratio of office areas to public areas is approximately 1:1, and therefore the volume that needs to be kept warm for each person is doubled, so V = 136.204 m³. ρ and c are constants with the values ρ = 1275 g/m³ and c = 1.007 J/(g K). Plugging in all the constants, V ρ c ≈ 174876 J/K.

The next step is to determine Q(t), which can be separated into two parts: the heat radiation from the human body, Qh, and the heat produced by electrical appliances, Qe. In the room model, only one person stays in the room. The heat radiation of a human body can be considered as black-body radiation [3], whose power can be determined by the Stefan–Boltzmann law:

Qh = σ A (T⁴ − T0⁴)

A is the area of the human skin, which is assumed here to be 2 m². σ is the Stefan–Boltzmann constant, including the value of the emissivity of human clothes or skin, which is 0.98 according to data from Infrared Services [7]. T is the surface temperature. Because of fabric clothes, the surface temperature at room temperature (T0 ≈ 20◦C = 293.15 K) is approximately 28◦C = 301.15 K [5]. Plugging in the constants, the power of one human body's radiation is Qh = 93.324 W = 335970 J/h.

The heat produced by electrical appliances comes mainly from lighting systems and computers. Since different types of lighting systems can produce significantly different amounts of heat, for example a filament lamp as opposed to an LED lamp, we make the assumption that the total heat produced by light is 10 W in a one-person room. Also, a large amount of heat comes from a computer. Assuming that the office uses iMac computers, which should produce similar amounts of heat as other computers in offices, the heat output of an iMac is 126 to 463 BTU/h, based on data from its official website [4]. In this case we take the average value of 250 BTU/h ≈ 73.27 W. Finally, we need to consider the heat produced by other electrical equipment. This equipment is normally not in constant use, such as printers, copiers, vacuums, etc. We assume this equipment has a power of about 5 W. Adding up all the heat generated by these appliances, we have Qe = 10 W + 73.27 W + 5 W = 88.27 W ≈ 317770 J/h.

Combining the data above, it is easy to calculate the temperature change per hour due to the heat produced by human radiation and electrical appliances: Q = Qh + Qe = 653740 J/h, so H = Q/(V ρ c) ≈ 3.74 K/h (≈ 6.7 ◦F/h).

Because the model needs to fit the temperature change over the whole day, the working time of office buildings needs to be considered. We assume that the working time of the building is from 8 am to 6 pm. The human body and the electrical appliances produce heat only during the working hours. Therefore, H(t) over the entire day is a segmented function: it equals Q/(V ρ c) during working hours and 0 otherwise.

4.4 Heat conduction through building roof and wall

The temperature inside the building is affected by the outside temperature mostly through conduction in the roof and walls. In our model, we assume that the conduction of heat through the wall and roof of the building is the only way that heat is exchanged between the inside of the building and the outside in Seattle. As indicated in the general model, the change in temperature due to conduction per unit time, C(t), follows Newton's Law of Cooling/Heating, which can be represented as:

C(t) = k[M(t) − T(t)]

where M(t) is the temperature outside and T(t) is the temperature inside the building. k is the constant representing how quickly the heat is conducted through the walls, which depends on the material and thickness of the walls. Let Qt (t) be the heat conducted through the wall per hour; we have [8]:

Qt (t) = U A [M(t) − T(t)]

Combining the two equations above, we can determine k:

k = U A/(V ρ c)

U is the heat transfer coefficient, which represents the heat conducted through certain materials per unit time
per unit area when the temperature differs by 1 degree. A is the area of the material, i.e. the wall or the roof. ρ represents the density of air. V is the volume of air, which is assumed to be the volume of the building. c is the specific heat capacity of the air.

In the model, we assume the office building is a cuboid with 40 floors; therefore the height is 13 ft/floor × 40 floors = 520 ft = 158.5 m. We also assume the base of the building is square, with an area of 80 ft × 80 ft (24.38 m × 24.38 m). Then we can calculate the area of the walls and roof, as well as the volume of the building:

A_wall = 4 × 158.5 m × 24.38 m = 15456.92 m²
A_roof = 24.38 m × 24.38 m = 594.3844 m²
V = 24.38 m × 24.38 m × 158.5 m = 94209.9274 m³

Since different materials have quite different values of U, which has a significant impact on the conduction of heat, we calculate two k values: ki when the walls and roof contain an insulation layer, which decreases the rate of heat conduction, and ku for walls and roof without insulation. Also, the roof and the walls have different values of U; therefore,

k = (U_wall A_wall + U_roof A_roof)/(V ρ c)

First, we calculate ku for the building without insulation by checking the chart of common heat transfer coefficients of some common building elements [8]. Then we do the same for the insulating materials [8].

Since ki is only about one-third of ku, it is obvious that the temperature change inside the building due to heat conduction through the walls and roof is much smaller when insulation is present. That is to say, the insulation materials keep the temperature inside the building steadier and less affected by the temperature outside. The following figures show the temperature inside the building compared with that outside, without insulation in Figure 2 and with insulation in Figure 3.

Figure 2. Temperature inside the building without insulation.

Figure 3. Temperature inside the building with insulation.

It is obvious that the range of the inside temperature is smaller with insulation. Therefore, the insulating materials make it easier to maintain the temperature inside the building within a comfortable range. In the following model, we assume the building has good insulation. So we use k = ki = 0.1892 h⁻¹ as our proportionality constant.

4.5 AC and heating system

Based on the previous modelling, H(t), the change of temperature due to heat generated inside the building by human radiation and electrical appliances, has a relatively high value that can eventually cause the temperature inside the building to become too high without any adjustment by the AC system. Also, the temperature in winter is still too low for a comfortable office environment.

The temperature difference within one day is nearly 20 degrees. Also, the temperature can be as low as about 40◦F in winter, and as high as about 105◦F in summer, which is extremely uncomfortable in an office. Therefore, AC and heating systems are used to adjust the temperature inside the building into the comfort zone for people, which is between 22◦C and 26◦C (71.5◦F to 78.8◦F) according to research [2].

In our model, we assume that the heating and the AC systems have a constant power, able to produce heat or to transfer the same amount of heat out of the building. In order to constrain the temperature within the comfort zone, when T(t) is higher than the comfort zone the AC is on to decrease the temperature. On the contrary, the heating is on when T(t)
is lower than the comfort zone. In this way, U becomes a segmented function:

U(t) = −m when T(t) is above the comfort zone, U(t) = m when T(t) is below the comfort zone, and U(t) = 0 otherwise,

where m is a positive constant that represents the rate of change of temperature per unit time due to the AC and heating system. By plotting the system for different values of m, a higher m value is able to better control the temperature. Figure 4 shows T(t) when m = 10.

Figure 4. Temperature inside and outside the office building when m = 10.

For lower energy consumption, we need to calculate the minimum m that can keep T(t) bounded within the comfort zone. From the figures above, we can see that when m = 10◦F/h, the curve of T(t) is steady within the comfort zone. Therefore, considering the conservation of energy as well as the comfort of the office, the heating and AC system for the office building needs to be able to make temperature changes of about 10◦F/h.

5 SOLUTION AND TECHNIQUES

When we build the model of the yearly temperature change in Seattle, we use MATLAB and Fourier equations. To fit the curve to a function of t, we use the curve-fitting tools in MATLAB on five years of collected temperature data. Based on the shape of the plot, we adopt the Fourier equation with two terms to fit the plot. Although some errors do exist in the figures we have plotted, compared to the actual data, we assume that they can be ignored, and that using the Fourier equation is a relatively accurate method of curve fitting.

When considering the exchange of heat between the inside and the outside of the building, we apply Newton's Law of Cooling/Heating, which indicates that the rate of change of the temperature inside the building is directly proportional to the difference between the temperatures inside and outside. Therefore, we are able to build a differential equation and solve the problem as an exponential model.

In the process of the AC system operation, we use a segmented function to solve the general problem. We estimate the general temperature difference within one day according to the curve we plot. When T(t) is higher than the comfort zone we set up, the AC system decreases the temperature; otherwise, it increases the temperature. The segmented function is set according to the different temperature variations.

6 CONCLUSIONS

In a word, our model mainly achieves our aim to evaluate and improve the AC system. Our model combines all the factors related to thermal comfort in the building, and avoids the disadvantages of traditional conditioner systems. We design an automatic thermal system inside the building, and figure out the hourly outdoor and indoor temperature curves, the radiation from the human body and from electrical devices in the office building, and the AC system's operation. In the process, we use various mathematical methods: curve-fitting tools in MATLAB, the Fourier equation, and simple mathematical methods for differential equations, applying Newton's law of heating and cooling, and setting segmented functions. We find that the minimum heating and AC system power needed to keep T(t) steady within the comfort zone is around 10◦F/h. Under these circumstances, we can consume less energy and achieve our primary goal.

REFERENCES

[1] Climate Wizard. Seattle average min and max temperature from 2003 to 2013. http://climatewizard.org/, 2014.
[2] W. Cui, G. Cao, J.H. Park, Q. Ouyang, and Y. Zhu. Influence of indoor air temperature on human thermal comfort, motivation and performance. Building and Environment, 68(3):114–122, 2013.
[3] James D. Hardy. The radiation of heat from the human body: III. The human skin as a black-body radiator. The Journal of Clinical Investigation, 13(4):615–620, July 1934.
[4] Apple Inc. iMac: Power consumption and thermal output. http://support.apple.com/kb/HT3559, 2013.
[5] Bin Lee. Theoretical Prediction and Measurement of the Fabric Surface Apparent Temperature in a Simulated Man/Fabric/Environment System. Melbourne: DSTO, 1999.
[6] Norm Miller. Estimating office space per worker. Technical report, Burnham-Moores Center for Real Estate, 2012.
[7] Infrared Services. Emissivity values for common materials. http://infrared-thermography.com/material-1.htm, 2007.
[8] The Engineering ToolBox. Heat loss through building elements due to transmission. http://www.engineeringtoolbox.com/heat-loss-transmission-d_748.html, 2014.
[9] Weather Spark. Feb 14 to Feb 17, 2014, hourly temperature recorded at Boeing Field/King County International Airport, Seattle, WA. http://weatherspark.com/#!dashboard;ws=29735, 2014.
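The pieces above assemble into the single differential equation dT/dt = k[M(t) − T(t)] + H(t) + U(t), which can be integrated numerically. The sketch below uses the paper's k = 0.1892 h⁻¹, the 71.5–78.8◦F comfort zone, working hours of 8 am–6 pm, and m = 10◦F/h; the outside-temperature sinusoid is a simplified stand-in for the fitted Seattle curves, and the internal-heat value of 6.73◦F/h is our own back-of-envelope figure derived from the paper's Qh + Qe and ρVc values, not a number stated by the authors.

```python
import math

K = 0.1892               # heat-conduction constant k, 1/h (insulated walls)
M_RATE = 10.0            # AC/heating rate m, deg F per hour
COMFORT = (71.5, 78.8)   # comfort zone, deg F

def outside_temp(t):
    # Stand-in daily sinusoid: mean 55 F, amplitude 10 F, peaking
    # near 4 pm (phase shift about 10 h, as fitted in Section 4.2).
    return 55.0 + 10.0 * math.sin(2 * math.pi * (t - 10.0) / 24.0)

def internal_heat(t):
    # H(t): people + lights + computers, 8 am - 6 pm only
    # (6.73 F/h is our estimate from (Qh + Qe) / (rho * V * c)).
    return 6.73 if 8 <= (t % 24) < 18 else 0.0

def ac_heating(T):
    # Segmented U(t): cool above the comfort zone, heat below it.
    if T > COMFORT[1]:
        return -M_RATE
    if T < COMFORT[0]:
        return M_RATE
    return 0.0

def simulate(hours=72, dt=0.1, T0=68.0):
    # Forward-Euler integration of dT/dt = k(M - T) + H + U.
    T, trace = T0, []
    for i in range(round(hours / dt)):
        t = i * dt
        T += (K * (outside_temp(t) - T) + internal_heat(t) + ac_heating(T)) * dt
        trace.append(T)
    return trace

trace = simulate()
```

With m = 10◦F/h the trajectory stays at or very near the comfort band after a short start-up transient, consistent with the paper's conclusion; reducing M_RATE below the combined conduction and internal-heat drift lets T(t) escape the band.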

Multi-depth Deep Feature learning for face recognition

C.C. Zhang, X.F. Liang & T. Matsuyama


Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan

ABSTRACT: Deep structure learning is a promising method for computer vision problems, such as face recognition. This paper proposes a Multi-depth Deep Feature (MDDF) learning method to learn abstract features which preserve the most information in the region of interest. It is unlike other deep structure learning methods, which evenly partition the raw data/images into patches of the same size and then recursively learn the features by evenly aggregating these pieces of local information. MDDF performs an uneven partition according to the density of the discriminative power in the image and learns a data-driven deep structure that preserves more information at the region of interest. In cross-database experiments, MDDF shows a better performance than a canonical deep structure learning method (Deep PCA) on face recognition. Moreover, MDDF achieves accuracy comparable with other well-known face recognition methods.

Keywords: Deep structure learning, Multi-depth Deep Feature, fine-to-coarse Quad-tree partition

1 INTRODUCTION

Appearance-based techniques have been extensively studied, and are acknowledged as one class of the most successful face recognition approaches. Principal Component Analysis (PCA) [1] and Linear Discriminant Analysis (LDA) [2] are two widely accepted representatives of this framework. However, pose and illumination changes, occlusion, and missing data greatly challenge these techniques in real-world scenarios, because they have a weak ability to cope with local facial variations. Multi-subregion fusion methods [3, 4] have been proposed as a solution for this problem. They divide the face into a set of disjoint/overlapped subregions, perform recognition on each subregion, and fuse the results. Experiments show that they provide considerable accuracy on well-registered databases. The major reason for this success, it is argued, is that the sub-regions preserve well the local information, which is more robust to variations of the face. However, the size and shape of the sub-regions are rather sensitive parameters that are often assigned by empirical experience or multi-scale values.

Recently, deep learning has become a promising method in computer vision, and has been discussed for face recognition. It attempts to replicate the mechanism of the human visual cortex. General deep learning is an unsupervised scheme, and can be regarded as a hierarchical structure consisting of a low level of original local information, and multiple higher levels of more abstract information that encode the correlations of the local ones in the level below. Since both the local regions and the overall structure of these units are involved, deep learning is able to extract both local and global information. Therefore, it reports a better performance on facial variations.

Most deep learning methods are inspired by Convolutional Networks [5] and Deep Belief Networks [6]. Due to the complexity of the connectionist networks in deep learning, many built-in biases affect their performance. An interesting study, Deep PCA [7], separated out other properties of existing deep architectures to build a simplified, non-connectionist framework. It demonstrated that the exploitation of deep structure did increase performance. However, the conclusion in [7] needs further discussion. Deep PCA partitioned the image into small patches of the same size, evenly extracted lower-dimensional features, evenly aggregated these features from each lower level, and finally formed an overall abstract feature. The improved performance came from a complete hierarchy that assumed the information was evenly distributed. We argue that features are mostly distributed unevenly in visual data. That is the reason the Region of Interest (ROI) must be cropped first in many applications. This also applies to face recognition. Both psychology [8] and biometrics [9] research shows that the periocular region offers advantages over other regions. This finding indicates that evenly partitioned patches do not reflect the actual feature distribution on faces.

In this paper, we address the above problem and propose a Multi-depth Deep Feature (MDDF) to learn abstract features. Unlike evenly partitioned images, a hierarchical Quad-tree is introduced to partition facial regions into uneven subregions following their varied discriminative powers. A small subregion has denser discriminative information than a bigger one. The feature learned from different subregions,
therefore, preserves the corresponding local infor-
mation. The criterion of Quad-tree partition uses
LDA-motivated total variance, which ensures a robust
resistance to local noise and an efficiency of computa-
tion. In our framework, Quad-tree partition functions
as a convolutional neural network, but builds up an
incomplete hierarchical structure. Afterwards, aggre-
gation of these uneven features in the multi-depth
structure functions as a recursive neural network, and
outputs an abstract feature having more information at
the region of interest. Experiments on four challeng-
ing databases show that MDDF gains an advantage on
accuracy over Deep PCA. It also achieves comparable
accuracy with other well-known methods, using local
or global features. Figure 1. Illustration of the top-down Quad-tree parti-
tion and the bottom-up deep feature learning hierarchy.
PCA + LDA extract top vectors from each block to describe
2 HIERARCHICAL QUAD-TREE PARTITION local features. The green bounded node joins its children
nodes in the original order to preserve the global information.
Instead of dividing the face region into a uniform
grid, Quad-tree partitions the face region by means of
local discriminative variance. Larger partition means
the block has a lower feature density. By contrast, the
smaller partition means the block has a higher fea-
ture density. To make the partition more robust to local
noises, we consider the variance on all faces across the entire database. Motivated by the idea of LDA, which encodes discriminative information by maximizing the between-class scatter matrix Sb and minimizing the within-class scatter matrix Sw (see Eq. (1)), we define a template face T by Eq. (2) to represent the distribution of discriminative information for the database. Thus, the total variance of the entire database is the variance of the template. In Eqs. (1) and (2), µ is the mean image of all classes, µi is the mean image of class Xi, Ni is the number of samples in class Xi, and xk is the k-th sample of class Xi.

We perform a top-down, data-driven Quad-tree partition on T to partition it into smaller blocks recursively, according to a function doSplit(r) defined in Eq. (3). If the variance of a region r (starting from the entire T) is higher than a threshold (Tv ∗ totalVar), then r is split into four sub-blocks of the same size. The partition carries on under the criterion defined by the certification function in Eq. (3). Eventually, we have an unevenly partitioned face and an incomplete hierarchical structure of the face (see Fig. 1). Usually, it is rather difficult to find the best deep structure using a single Tv. We therefore give a set of thresholds in ascending order, and introduce fine-to-coarse face partitions (see Fig. 2). The leftmost partition is equivalent to the leaf nodes of the Deep PCA tree structure (when Tv = 0). The face image is split into fewer and bigger blocks when Tv is large, but into more and smaller blocks when Tv is small. Therefore, the fine-to-coarse partition provides an opportunity to explore the effectiveness of varied deep structures.

Figure 2. Fine-to-coarse Quad-tree partitions on the Yale 2 database.

3 MULTI-DEPTH DEEP FEATURE LEARNING

With a set of Quad-tree partitions, we are able to learn features from a face. Deep feature learning produces a bottom-up hierarchy of features representing the face, in which the higher levels correspond to a shorter overall description of the face. It also encodes the correlations among the local patches. We create a hierarchy for a face, based on the aforementioned Quad-tree partition, using PCA + LDA. Figure 1 shows that a face is partitioned into many blocks of varied sizes. Blocks without a green ring are the leaf nodes at different levels in the tree. These leaf nodes are used as the input for PCA + LDA, which selects the top ki vectors as a feature basis, where ki is less than the corresponding block size and i denotes the level index in the hierarchy; the smaller i is, the bigger ki becomes. Each block is projected onto its corresponding new basis, and the four reduced-dimensionality neighbouring blocks
are then joined together back to their father node in
their original order. This process is repeated, using the
newly created layer as the data for the PCA + LDA and
join process to create the next layer up, until reaching
the root node.
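The doSplit(r) recursion of Section 2 can be sketched in C. This is a toy illustration under our own assumptions (a per-pixel variance map stands in for the template face T, and an absolute threshold stands in for Tv ∗ totalVar); it is not the authors' code:

```c
#include <assert.h>

#define N 8                       /* toy template side length */

static double vmap[N][N];         /* per-pixel variance of the template face */

/* total variance of the region (x, y, w, h) of the variance map */
static double region_var(int x, int y, int w, int h) {
    double s = 0.0;
    for (int i = y; i < y + h; i++)
        for (int j = x; j < x + w; j++)
            s += vmap[i][j];
    return s;
}

/* doSplit-style recursion: a region is split into four equal sub-blocks
 * while its variance exceeds the threshold and it is still divisible;
 * returns the number of leaf blocks in the resulting uneven partition. */
static int quadtree_leaves(int x, int y, int w, int h, double threshold) {
    if (w > 1 && h > 1 && region_var(x, y, w, h) > threshold) {
        int hw = w / 2, hh = h / 2;
        return quadtree_leaves(x,      y,      hw, hh, threshold)
             + quadtree_leaves(x + hw, y,      hw, hh, threshold)
             + quadtree_leaves(x,      y + hh, hw, hh, threshold)
             + quadtree_leaves(x + hw, y + hh, hw, hh, threshold);
    }
    return 1;   /* leaf: an uneven sub-block of the partition */
}
```

Raising the threshold reproduces the fine-to-coarse behaviour of Figure 2: a large threshold leaves the face as one block, while a small one keeps splitting high-variance regions into ever smaller sub-blocks.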
As illustrated in Figure 1, the smallest blocks in the Quad-tree partition measure 4 × 4 at level 3. We apply PCA + LDA to these blocks, and extract the top 3 × 3 vectors from each to describe the local feature. The four neighbours are then joined, in the inverse order of the partition, back to the father node at level 2, where the original blocks have a size of 8 × 8. At level 2, if a block was not further decomposed, PCA + LDA extracts the top 6 × 6 vectors, which are the same size as the newly joined block from level 3. We recursively apply the PCA + LDA extraction and join the neighbouring blocks into the upper layer. Eventually, the procedure stops at the root node. The last PCA + LDA selects about 30 top features as a vector. Obviously, these features preserve not only the global information, thanks to the feature hierarchy, but also the local features from the blocks partitioned by our Quad-tree partition.

Figure 3. Template images on four databases: (a) ORL, (b) Yale 2, (c) AR, (d) FERET.

Table 1. Recognition accuracy of MDDF compared to four other reference methods.

Method       ORL     Yale 2   AR      FERET
PCA + LDA    92.80   90.76    86.81   83.51
MPCRC        91.50   92.80    88.60   73.64
30-Region    93.88   90.78    90.57   82.18
Deep PCA     92.33   91.11    84.67   85.18
MDDF         94.68   92.82    88.36   85.26
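The dimension bookkeeping in this walk-through can be checked with a tiny sketch (our inferred pattern, not the paper's code): a leaf block of side s keeps the top (3s/4) × (3s/4) components, and a father node concatenates its four reduced children in their original order, so a level-2 parent receives exactly 4 × 9 = 36 = 6 × 6 values from level 3:

```c
#include <assert.h>

/* Sanity check of the level-wise dimension bookkeeping described above
 * (our inferred pattern, not the authors' code). */
static int kept_dims(int side) {
    int k = (3 * side) / 4;       /* 4 -> 3, 8 -> 6 */
    return k * k;                 /* components kept for a leaf block */
}

static int joined_dims(int child_dims) {
    return 4 * child_dims;        /* four children concatenated in order */
}
```

This reproduces the consistency noted in the text: the top 6 × 6 = 36 vectors kept for an undecomposed level-2 block match the size of the block joined up from four level-3 children.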
4 EXPERIMENTS AND ANALYSIS

To demonstrate the effectiveness of MDDF, four public and challenging databases were employed for evaluation: ORL [10], Extended Yale (Yale 2) [11], AR [12], and FERET [13]. Face images in these databases are taken under varying conditions, including head poses (ORL, AR), illumination changes (ORL, Yale 2, AR, and FERET), facial expressions (ORL, AR, FERET), and facial details (e.g. with glasses or not: ORL, AR, FERET). Face images were cropped to 32 × 32. The template images created on each database are shown in Figure 3. An example of a fine-to-coarse Quad-tree partition is shown in Figure 2. Our method generated 8–15 Quad-trees depending on the database. These Quad-trees are indexed according to the thresholds Tvi in ascending order.

[1] ORL database: 40 subjects, 10 samples per subject, with variances in facial expressions, open or closed eyes, glasses or no glasses, scale changes (up to about 10 percent), and head poses. 5 samples per subject are randomly selected as training data, while the remaining ones are used as test data.
[2] Extended Yale database (Yale 2): more than 20,000 single light source images of 38 subjects under 576 viewing conditions (9 poses in 64 illumination conditions). To evaluate the robustness of our method to illumination changes, 5 samples of the 64 illumination conditions are randomly selected as training data, and the remaining 59 images as test data.
[3] AR database: 4000 colour face images of 126 people (70 men and 56 women), including frontal views of faces with different facial expressions, lighting conditions, and occlusions (sun glasses and scarves). As in [4], a subset with only illumination and expression changes, containing 50 male subjects and 50 female subjects, is chosen. For each subject, we randomly choose 5 samples for training and the remaining 9 images for test.
[4] FERET database: 13539 images corresponding to 1565 subjects. Images differ in facial expressions, head position, lighting conditions, ethnicity, gender, and age. To evaluate the robustness of our method with regard to facial expressions, we worked with a subset of frontal faces labelled Fa and Fb, where Fa represents regular facial expressions and Fb alternative facial expressions. All Fa images are selected as training data, while the Fb images serve as test data.

To verify the feature performance, we compared the proposed MDDF with various methods using global features, local features, and canonical deep features, respectively. They are: (1) the conventional PCA + LDA method [2], which extracts a global feature vector from the whole face region; (2) MPCRC [4], which develops a multi-scale local patch-based method to alleviate the problem of sensitivity to patch size; (3) a 30-region method [3], which defines the 30 regions according to experimental experience; (4) the Deep PCA [7], which integrates the discriminative information extracted from uniform local patches into a global feature vector. Table 1 shows the comparison results, and gives five observations:

Observation (1): PCA + LDA extracts a global feature that focuses on the holistic information of the image. It could be regarded as a summary of faces, but it ignores details such as local variations. Thus, PCA + LDA has a rather average performance on the various databases, neither bad nor good. The local patch-based methods focus more on local variations, and perform better on the Yale 2, AR and FERET databases, which contain more local deformations caused by facial expressions, illumination

changes, and occlusions, etc. This implies the sig-
nificant advantage of the subregion-based features
over the holistic-based features in dealing with local
deformations.
Observation (2): the MPCRC method obviously outperforms PCA + LDA on the Yale 2 and AR databases, due to the robustness of the patches to local variations, but degrades significantly on the FERET
database where only a Single Sample Per Person
(SSPP) is collected as gallery data. Under the SSPP,
the MPCRC degenerates into the original patch-
based method PCRC which has no collaboration
with multiple patches in variant scales. Since the
performance of local patch-based methods is very
sensitive to patch size, they suffer from severe
degradation in performance under inappropriate
patch size. That is why many local patch-based
methods cooperate with global features to over-
come this problem. This motivates the development
of our multi-depth deep structure learning to asso-
ciate local features with a global representation of an entire image structure.
Observation (3): the 30-Region method is composed of 30 subregions which have large overlaps with each other. All these subregions are empirically designed according to facial structures after the faces have been well registered. In particular, one ‘subregion’ is actually the entire facial region, so we can think of the method as a brute-force integration of global and local features. It therefore obtains a higher performance on the AR database, which is well registered and contains only frontal features, and it also performs well on the ORL database because of the effectiveness of its subregions. However, the performance degrades on databases that are not well registered, such as Yale 2 and FERET. The reason might be that these sub-regions do not fit the data, which suggests that the design of the sub-regions must adapt to diverse data.
Observation (4): Deep PCA improves the performance of the conventional PCA + LDA method on the Yale 2 and FERET databases, which shows the effectiveness of deep structure learning for coping with local deformations. However, it performs worse on the AR database. The reason might be that Deep PCA assumes that the discriminative information is evenly distributed: it partitions the face into a uniform grid and evenly aggregates features from the lower level. This result indicates the importance of designing a data-driven deep structure learning method.
Observation (5): the proposed MDDF achieves the best performance in most experiments. The multi-depth deep feature learning is developed based on a fine-to-coarse Quad-tree partition. To explore how the varied deep structures affect the effectiveness of the deep features, we defined a set of thresholds for the Quad-tree partition. Figure 4 plots the recognition accuracy against the Quad-tree index (the indices correspond to the thresholds Tvi in ascending order). Please note that the leftmost dot indicates the accuracy of Deep PCA, because with Tv = 0 the Quad-tree partitions the face into small patches of the same size. We can see that the varied deep structure features perform over quite a large range. The best performance is usually achieved at a certain partition, depending on the database, but in most cases it does not come from the Deep PCA partition. This is because our Quad-tree-based partition is processed on the template image, which is obtained from Sb and Sw and can be deemed a summary of the database. This data-driven strategy makes our method adapt to databases, and generates the most appropriate partition for deep structure learning automatically. Our method benefits from the data-driven, image-partition-based deep structure learning, and it can be widely applied to diverse databases, especially those with a large number of variations. It must be pointed out that MDDF performs worse than the 30-region method but better than the Deep PCA method on the AR database. We argue that the reason is that MDDF learns a multi-depth structure feature from the template face of the database, whereas the sub-regions of the 30-region method are empirically designed for well-registered databases such as AR, but not for others.

Figure 4. Recognition accuracy against the Quad-tree index during a fine-to-coarse partition on four databases: (a) ORL, (b) Yale 2, (c) AR, and (d) FERET.

5 CONCLUSION

This paper proposes a novel deep structure learning method, Multi-depth Deep Feature (MDDF), for face recognition. It unevenly partitions face images according to the density of the discriminative power in the local regions, and learns a data-driven deep structure that preserves more information at the region of interest. The comparison with diverse methods using global features, local features, and the canonical deep structure

feature ‘Deep PCA’ shows the comparable performance of MDDF on four challenging databases. We can conclude that the MDDF has the most accurate description of image structure for recognition. In the current work, the dimension of the features extracted from bigger blocks, which are not further partitioned, is the same as that aggregated from four smaller blocks at a lower level. In future research we will explore which feature dimensions of these bigger blocks would induce optimal performance.

ACKNOWLEDGEMENT

This work is supported by the Japan Society for the Promotion of Science, Scientific Research KAKENHI Grant-in-Aid for Young Scientists (ID: 25730113).

REFERENCES

[1] M. Turk and A. Pentland, “Eigenfaces for recognition,” J. Cognitive Neuroscience, vol. 3, no. 1, 1991.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” IEEE Trans. PAMI, vol. 20, no. 7, pp. 711–720, 1997.
[3] L. Spreeuwers, “Fast and accurate 3D face recognition using registration to an intrinsic coordinate system and fusion of multiple region classifiers,” IJCV, vol. 93, no. 3, pp. 389–414, 2011.
[4] P. Zhu, L. Zhang, Q. Hu, and S. Shiu, “Multi-scale patch based collaborative representation for face recognition with margin distribution optimization,” in ECCV, 2012.
[5] Y. Bengio and Y. LeCun, “Scaling learning algorithms towards AI,” 2007.
[6] G. E. Hinton and S. Osindero, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, pp. 1527–1554, 2006.
[7] B. Mitchell and J. Sheppard, “Deep structure learning: beyond connectionist approaches,” in IEEE ICMLA, 2012.
[8] F. Berisha, A. Johnston, and P. W. McOwan, “Identifying regions that carry the best information about global facial configurations,” Journal of Vision, vol. 11, no. 10, pp. 1–8, 2010.
[9] U. Park, R. Jillela, and A. Ross, “Periocular biometrics in the visible spectrum,” IEEE Trans. on Info. Forensics and Security, no. 6, pp. 96–106, 2011.
[10] F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 138–142, 1994.
[11] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, “From few to many: Illumination cone models for face recognition under variable lighting and pose,” IEEE Transactions on PAMI, vol. 23, no. 6, pp. 643–660, 2001.
[12] A. M. Martinez and R. Benavente, “The AR face database,” CVC Technical Report, 1998.
[13] P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, “The FERET evaluation methodology for face recognition algorithms,” IEEE Transactions on PAMI, vol. 22, no. 10, pp. 1090–1104, 2000.

Informatics, Networking and Intelligent Computing – Zhang (Ed.)
© 2015 Taylor & Francis Group, London, ISBN: 978-1-138-02678-0

Research on camera calibration based on OpenCV

H.M. Nie
Heilongjiang University of Science and Technology, China

ABSTRACT: Based on the principle of camera calibration, this paper puts forward a camera calibration technique built on OpenCV and, with the help of the open-source computer vision library OpenCV, completes the calibration on the VS 2008 development platform. The experiment proves that the calibration procedure based on OpenCV has the following advantages: accurate calibration results, high computational efficiency, and good cross-platform portability, so it can be applied effectively in the field of computer vision systems.

Keywords: OpenCV; Camera calibration

1 INTRODUCTION

People usually complete the reconstruction of natural objects by imitating biological methods: using imaging systems to replace the visual system, and using computers to replace the human brain to complete the three-dimensional reconstruction. That is, two cameras at different viewpoints capture image information at the same time; three-dimensional coordinates are obtained through camera calibration and matrix operations, and this three-dimensional coordinate information is then synthesised to establish the object model in three-dimensional space, making it more convenient and accurate to observe and operate on the object in all directions. OpenCV is an open-source computer vision library developed by Intel. It consists of a series of C and C++ functions which implement many common computer vision and image processing algorithms, including object tracking, image processing, pattern recognition, motion analysis, structure analysis, camera calibration, and 3D reconstruction. The camera calibration module of OpenCV provides a good interface for users, supports the Windows and Linux platforms to improve development efficiency, and enhances the portability of programs, so it can be used in actual development projects.

2 CALIBRATION PRINCIPLE

2.1 The definition of the concepts

The image coordinate system: an image is formed by projecting objects onto a two-dimensional plane, on which a coordinate system is established; the coordinates of a target point in this system are expressed as (Xu, Yu).
The camera coordinate system: the camera itself constitutes an object image coordinate system; the coordinates of a target point in this system are expressed as (Xc, Yc, Zc).
The world coordinate system: we live in a three-dimensional world; the coordinates of a target point in this system are expressed as (Xw, Yw, Zw).

2.2 Linear camera model

In the case of no distortion, the camera imaging is assumed to be ideal pinhole imaging; the pinhole camera model is shown in Figure 1. In Figure 1, point 0, the centre of projection, is the coordinate origin, and point 01, the intersection of the optical axis and the image plane, is the centre point of the imaging plane. The coordinates (Xc, Yc, Zc) of point Q in Figure 1 are the coordinates of Q in the camera coordinate system, the coordinates (Xw, Yw, Zw) are the coordinates of Q in the world coordinate system, and the coordinates (Xu, Yu) of q are the coordinates of the point q in the image. In Figure 1, f is the distance from the centre of projection to the image plane, i.e. the focal distance.
According to the similar-triangle principle, we arrive at the following formulas:

In the real production process of cameras, the image centre 01 may not fall exactly on the optical axis. The optical axis is offset in the image plane by (X0, Y0), so X0

and Y0 are the offsets of 01. Therefore, Formulas (1) and (2) are changed into the following:

Also, in the practical production process of cameras, the pixel sensor cannot be manufactured as an exact square; it is usually rectangular. So we let the pixel sensor length and width be defined as x and y, and we get the following formulas:

Adding Formula (3) and Formula (5):

Adding Formula (4) and Formula (6):

Representing Formulas (7, 8) in homogeneous coordinates:

According to Formula (9), we can complete the conversion between the image coordinate system and the camera coordinate system. We also need to convert the camera coordinate system into the world coordinate system:

In Formula (10), R is a 3 × 3 rotation matrix and T is a 3 × 1 translation matrix.

Formulas (11, 12) give the camera external parameters.

Figure 1. Pinhole camera.

2.3 Nonlinear camera model

The amount of light gathered by the camera must be increased for fast imaging. The usual practice is to add a lens in front of the camera, which causes image distortion, so we need to correct the distortion introduced by the lens.
There are two forms of distortion caused by a lens: radial distortion and tangential distortion. Each point in the imaging sensor plane can be represented by (x, y) Cartesian coordinates. It can also be represented in polar coordinates (r, t), a vector representation where r is the vector length, t is the vector's horizontal angle, and the centre is located at the centre of the sensor. Radial distortion is the change of the vector endpoint along the length direction (the r direction). Tangential distortion is the change of the angle of the vector endpoint along the tangent direction (the t direction). Radial distortion vanishes when r equals 0, so we use the first few terms of the Taylor series expansion to describe it quantitatively. That is:

For the tangential distortion, we introduce two parameters p1 and p2 to describe it:

So the calibration of a camera is actually the measurement of the internal and external parameters and the distortion parameters of the camera.

3 THE CALIBRATION METHOD OF OPENCV

We apply a planar checkerboard calibration template in OpenCV camera calibration: by freely moving the camera or the template, planar calibration template images are grabbed from different angles to achieve the calibration of the camera. The checkerboard calibration plate has squares of 20 mm side length, with 9 rows and 13 columns and a total of 96 inner corners; using the chessboard as the template, images are grabbed from different angles and a least-squares calibration calculation is applied. Then multiple calibration images are put

into the same directory, ready for the calibration program to read. The calibration process is shown in Figure 2.

(1) to obtain the calibration image directory file list.
(2) to load the image through imread(); this function supports the commonly used file formats.
(3) to call the cvFindChessboardCorners() function to find the chessboard corners; it returns non-zero on success and zero on failure, and on success it returns the corner coordinates. Before calling this function, a true colour image must be converted to a grey-scale image; a grey-scale image does not need to be converted.
(4) to use cvCreateMemStorage() and cvCreateSeq() to create the sequence that stores the corner coordinates.
(5) to call cvFindCornerSubPix() to obtain the sub-pixel coordinates of the corners, and to use cvSeqPush to save the sub-pixel coordinates into the coordinate sequence.
(6) to substitute the sub-pixel coordinates of the corner points and the world coordinate values of the corner points into cvCalibrateCamera2(), and to get the camera internal and external parameters and the distortion parameters.
(7) to release the memory space allocated by the functions, to prevent memory leaks.

Figure 2. Calibration process.

4 THE IMPLEMENTATION OF PROGRAMMING

Programs were written, debugged and tested using VS 2008 on Windows XP. Due to space limitations, variable definitions and initializations are omitted. Here are the key codes:

CvSize CBoardSize = cvSize(rCount, cCount);  // the calibration board size
if ((srcimage = cvLoadImage(filename, 1)) == 0)  // load the image
    continue;  // failed to load; continue with the next picture
cvCvtColor(srcimage, grayimage, CV_BGR2GRAY);  // convert the colour image into a grey image
CvPoint2D32f* pCorners = (CvPoint2D32f*)malloc(rCount * cCount * sizeof(CvPoint2D32f));  // stores the detected corners
int iCount;  // number of detected corner points
result = cvFindChessboardCorners(grayimage, CBoardSize, pCorners, &iCount, CV_CALIB_CB_ADAPTIVE_THRESH);  // obtain the corner points
cvFindCornerSubPix(grayimage, pCorners, iCount, cvSize(11, 11), cvSize(-1, -1), cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));  // refine the corner coordinates to sub-pixel accuracy
for (int i = 0; i < iCount; i++)
{
    cvSeqPush(pSeq, &pCorners[i]);  // store each refined coordinate in the sequence
}
cvCalibrateCamera2(pObj, pIP, pPC, iS, pIc, pDn, pRn, pTn, 0);  // obtain the calibration data

pObj is the world coordinates of the corners.
pIP is the image coordinates of the corners.
pPC is the number of corner points in each image.
iS is the image size.
pIc is the camera intrinsic matrix.
pDn is the camera distortion coefficients.
pRn is the camera rotation vectors.
pTn is the camera translation vectors.

5 EXPERIMENTAL DATA

According to the calibration principle above, we have developed an experimental program, the program
results are saved as a text file. After several rounds of debugging, the program finally became stable and accurate, and successfully finds the corner points; calibrating a group of 20 pictures of 1280 * 1024 pixels takes 1.2 s, which satisfies the actual demand. At the same time, in order to verify the accuracy of the data, we applied Matlab to calibrate the same 20 images for comparison. The parameters calibrated by our program and the parameters calibrated by Matlab are shown in Table 1.

Table 1. Comparison of camera parameters.

Camera parameters   Imaging results   Matlab calibration results
fx                  677.761683        677.911558
fy                  699.952933        703.2464865
x0                  337.5338665       338.213853
y0                  285.015533        285.95167
k1                  −0.3013115        −0.298848
k2                  0.13680118        0.135938
p1                  0.000446          0.000386
p2                  −0.00133          −0.001355

*Note: Parameter k3 is ignored in Matlab, so it is not given in this paper.

6 CONCLUSION

The visual measurement and 3D reconstruction are important parts of current research on applications of computer vision. In these studies, it is necessary to determine the geometrical relationship between the corresponding points in visual images and in the real world. The purpose of camera calibration is to establish the correspondence between 3D world coordinates and 2D image coordinates. The camera calibration procedure developed with OpenCV has the advantages of accurate calibration results, high calculating efficiency, and good cross-platform portability, and can be applied effectively in the field of computer vision systems.

REFERENCES

[1] Cheng Jianpu, Xiang Huiyu. A Camera Calibration of Vision Measurement of Body Panel Based on OpenCV [J]. Mechanical Design and Manufacturing, 2010, 11: 198–200.
[2] Feng Saleisi. Digital Image Processing (Matlab Version) [M]. Beijing: Electronic Industry Press, 2005.
[3] Sun Jie, Zhu Shiqiang, Lai Xiaobo. A Method of Vision Based Navigation Camera and Effective Calibration [J]. Computer Engineering, 2010, 36(21): 212–213.
[4] Tian Yuan-yuan, Tan Qing-chang. Study of CCD laser-range finder based on static image [J]. Micro-Computer Information, 2007, 11(31): 96–98.
[5] Weixin Kang, Zhou Sheng. Trailing Vehicle Video Retrieval based on Kalman and OpenCV [R]. Proceedings of 2010 3rd International Conference on Future BioMedical Information Engineering (Volume 2), 2010.
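The nonlinear model of Section 2.3 can also be written out as a short, self-contained C sketch of the standard radial-plus-tangential distortion applied to a normalized image point. This is our illustration of the model (with k3 ignored, as in Table 1), not code from the paper:

```c
#include <assert.h>
#include <math.h>

/* Standard radial + tangential lens distortion model, with r^2 = x^2 + y^2
 * for a normalized image point (x, y); k3 is ignored as in Table 1. */
typedef struct { double k1, k2, p1, p2; } DistCoeffs;

static void distort_point(const DistCoeffs *d, double x, double y,
                          double *xd, double *yd) {
    double r2 = x * x + y * y;
    double radial = 1.0 + d->k1 * r2 + d->k2 * r2 * r2;  /* Taylor terms */
    *xd = x * radial + 2.0 * d->p1 * x * y + d->p2 * (r2 + 2.0 * x * x);
    *yd = y * radial + d->p1 * (r2 + 2.0 * y * y) + 2.0 * d->p2 * x * y;
}
```

With all coefficients zero the point is unchanged, and at r = 0 the distortion vanishes, matching the discussion in Section 2.3; the Table 1 coefficients can be plugged in directly.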

Designing fuzzy rule-based classifiers using a bee colony algorithm

I.A. Hodashinsky, R.V. Meshcheryakov & I.V. Gorbunov


Tomsk State University of Control Systems and Radioelectronics, Tomsk, Tomsk Oblast, Russia

ABSTRACT: This paper proposes a fuzzy approach which enables one to build intelligent decision-making systems. We present new learning strategies to derive fuzzy classification rules from data. The training procedure is based on a bee colony algorithm. We examine four components of the algorithm: initialization, the work of the scout bee, the generation of rule antecedents, and weight tuning. The performance of the fuzzy rule-based classifier tuned by the given algorithms is finally evaluated on well-known classification problems, such as bupa, iris, glass, new thyroid, and wine. A comparison with algorithms such as Ant Miner, Core, Hider, Sgerd, and Target is made on randomly selected training and testing sets. We describe the implementation of the fuzzy rule-based classifier to forecast the efficiency of the non-medical treatment of patients rehabilitated in the Federal State Institution of Tomsk Scientific-Research Institute of Health-Resort and Physiotherapy, Federal Medical and Biological Agency of Russia.

1 INTRODUCTION

Typical problems of decision-making theory are the selection of one or more of the best objects (options, alternatives), the ordering or ranking of objects based on their properties, and their classification by a set of categories (Petrovsky, 2007, Meyer & Roubens, 2005). Many real life decision-making problems can be treated as classification, which is a decision-making process that involves making choices. Many applications like pattern recognition, disease diagnosis, and credit scoring can be considered as classification problems. Classification consists of predicting the value of a (categorical) attribute (the class) based on the values of other attributes (the predicting attributes). The classifier can be considered as a mapper from the feature domain into the class labels domain. A set of different types of classifiers exists which use different approaches to perform this mapping, such as: 1) linear discriminant analysis; 2) Bayesian classifiers; 3) decision trees; 4) neural networks-based classifiers; 5) support vector machines; 6) fuzzy rule-based classifiers, etc. (Angelov & Zhou, 2008).
It would be helpful if we could design a classifier based on linguistically interpretable rules, because it expresses the behaviour of the system in a human-readable form. One of the approaches to solving this classification problem is to formulate a solution using a fuzzy rule-based classifier.
Fuzzy set theory, fuzzy logic, and fuzzy reasoning methods contribute to the development of alternative models for uncertainty that are of interest for building alternative approaches to classical decision theory (Blanning, 1996), (Radojević & Suknović, 2008), and (Ye, 2012). Fuzzy rule-based systems have successfully been proven in many real life applications where domain knowledge is imprecise or inexact. A common problem with the implementation of such systems, however, is the acquisition of the production rules on which decision-making is based.
Fuzzy rule-based classification systems built on IF-THEN rules find wide implementation in the solution of management problems, decision making, learning, adaptation, generalization, and reasoning. Fuzzy rules can deal with imprecise knowledge and the uncertainty of information and strengthen the knowledge representation power. The main advantage of such classifiers in comparison with black-box classifiers (neural networks) is their interpretability. Fuzzy rules represent knowledge acquired from empirical observations and experience. The use of descriptive linguistic variables in fuzzy rules provides the additional advantage of representing this knowledge in a form that is easy for humans to comprehend and validate.
In the past, fuzzy classifiers were created from fuzzy rules based on a priori knowledge and expert knowledge, but in many applications it is difficult to obtain fuzzy rules without a priori data. In recent years, the so-called data-driven approaches have become dominant in the fuzzy systems design area (Angelov & Lughofer, 2008), (Angelov & Zhou, 2008), and (Lavygina & Hodashinsky, 2011).
The generally accepted problem of fuzzy systems tuning is rule-base formation. For generating fuzzy classification rules, genetic and evolutionary algorithms are proposed in (Ishibuchi & Nojima, 1999), (Ho, Chen, Ho, & Chen, 2004), (Mansoori, Zolghadri, & Katebi, 2008), and (Chang & Lilly, 2004). Some studies have attempted to solve

Table 1. Numerical benchmark functions.

Function                                                                          Ranges               Minimum value
f1(x) = 0.5 + (sin²(√(x1² + x2²)) − 0.5) / (1 + 0.001(x1² + x2²))²                −100 ≤ xi ≤ 100      f1(0) = 0
f2(x) = Σ_{i=1}^{n} xi²                                                           −100 ≤ xi ≤ 100      f2(0) = 0
f3(x) = (1/4000) Σ_{i=1}^{n} (xi − 100)² − Π_{i=1}^{n} cos((xi − 100)/√i) + 1     −600 ≤ xi ≤ 600      f3(100) = 0
f4(x) = Σ_{i=1}^{n} (xi² − 10cos(2πxi) + 10)                                      −5.12 ≤ xi ≤ 5.12    f4(0) = 0
f5(x) = Σ_{i=1}^{n−1} (100(x_{i+1} − xi²)² + (xi − 1)²)                           −50 ≤ xi ≤ 50        f5(1) = 0
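For illustration, the Table 1 benchmarks can be sketched in Python as follows. This is my own minimal reconstruction from the table (assuming the usual square root inside f1 and the shifted Griewank form of f3), not code from the cited studies:

```python
import math

def schaffer(x):      # f1, 2 parameters, f1(0) = 0
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2

def sphere(x):        # f2, f2(0) = 0
    return sum(xi ** 2 for xi in x)

def griewank(x):      # f3, shifted by 100, f3(100) = 0
    s = sum((xi - 100) ** 2 for xi in x) / 4000
    p = math.prod(math.cos((xi - 100) / math.sqrt(i + 1))
                  for i, xi in enumerate(x))
    return s - p + 1

def rastrigin(x):     # f4, f4(0) = 0
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def rosenbrock(x):    # f5, f5(1) = 0
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```

Each function attains its global minimum of 0 at the point listed in the table, which makes them convenient sanity checks for any optimiser.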

Some studies have attempted to solve the classification problem by employing hybrid approaches on the grounds of decision trees and evolutionary algorithms (Pulkkinen & Koivisto, 2008), and machine learning methods – Learning Classifier Systems in particular, an approach based on reward-based training and genetic algorithms (Ishibuchi & Nojima, 2007).

An even more recent approach is that of Swarm Intelligence. The two most widely used swarm intelligence algorithms are Ant Colony Optimisation (ACO) (Dorigo, Maniezzo, & Colorni, 1996) and Particle Swarm Optimisation (PSO) (Kennedy & Eberhart, 1995). In (Casillas et al., 2005), an approach to the fuzzy rule learning problem using ACO algorithms is proposed: the learning task is formulated as a combinatorial optimization problem, and the features related to ACO algorithms are introduced. In (Abadeh et al., 2008) an evolutionary algorithm for inducing fuzzy classification rules is proposed which uses an ant colony optimization based local searcher to improve the quality of the final fuzzy classification system; the proposed local search procedure is used in the structure of a Michigan-based evolutionary fuzzy system.

Another category of fuzzy rule-based classifier design is PSO. The article (Elragal, 2008) discusses a method for improving the accuracy of fuzzy rule-based classifiers by using particle swarm optimization. In this work, two different fuzzy classifiers are considered: the first is based on the Mamdani fuzzy inference system, while the second is based on the Takagi-Sugeno fuzzy inference system. The parameters of the proposed fuzzy classifiers – antecedent and consequent parameters, and the structure of the fuzzy rules – are optimized by using PSO.

In our study we implement a novel bee colony algorithm to identify the structure and parameters of the fuzzy rule-based classifier. There are many widely known algorithms based on the behaviour of honey bees in nature. These algorithms can be divided into two categories, corresponding to the behaviour of bees while gathering food and while mating. Our study is based on the Artificial Bee Colony (ABC) algorithm (Karaboga, 2005), which is a recent swarm intelligence based approach to solving nonlinear and complex optimization problems. It is as simple as PSO and differential evolution algorithms, and uses only common control parameters such as colony size and maximum cycle number.

In the studies (Karaboga & Basturk, 2008) and (Karaboga & Akay, 2009), the performance of the ABC algorithm is compared with that of differential evolution (DE), PSO, and Evolutionary Algorithms (EA) on multidimensional and multimodal numeric problems. Classical benchmark functions are presented in Table 1. In the experiments, f1(x) has 2 parameters, f2(x) has 5 parameters, and the functions f3(x), f4(x), and f5(x) have 50 parameters. Parameter ranges, formulations, and global optimum values of these functions are given in Table 1 (Karaboga & Basturk, 2008).

The means and standard deviations of the function values obtained by the DE, PSO, EA, and ABC algorithms are given in Table 2 (Karaboga & Basturk, 2008). Simulation results show that the ABC algorithm performs better than the above mentioned algorithms and can be efficiently employed to solve multimodal engineering problems.

The ABC has many advantages in memory, local search, its solution improvement mechanism, and so on, and it is able to achieve excellent performance on optimization problems (Karaboga & Akay, 2009), (Zhao et al., 2010). In recent years, the ABC algorithm has been successfully used to solve hard combinatorial optimization problems, including the traveling salesman problem (Li, Cheng, Tan & Niu, 2012), the quadratic knapsack problem (Pulikanti & Singh, 2009), and the leaf-constrained minimum spanning tree problem (Singh, 2009). In the article (Singh, 2009), by comparing the approach against genetic algorithms, ant colony optimization algorithms, and tabu search, Singh reported that the ABC outperformed the other approaches in terms of best and average solution quality and computational time.

Table 2. The results obtained by the DE, PSO, EA, and ABC algorithms (mean ± standard deviation).

Function     DE                 PSO                  EA                  ABC
f1(x) = 0    0 ± 0              0.00453 ± 0.0009     0 ± 0               0 ± 0
f2(x) = 0    0 ± 0              2.5113E-8 ± 0        0 ± 0               0 ± 0
f3(x) = 0    0 ± 0              1.5490 ± 0.067       0.00624 ± 0.00138   0 ± 0
f4(x) = 0    0 ± 0              13.1162 ± 1.4482     32.6679 ± 1.9402    0 ± 0
f5(x) = 0    35.3176 ± 0.2744   5142.45 ± 2929.47    79.818 ± 10.4477    0.1331 ± 0.2622

The ABC algorithm has been applied to various problem domains, including the training of artificial neural networks (Karaboga, Akay, & Ozturk, 2007), (Kumbhar & Krishnan, 2011), the design of a digital infinite impulse response filter (Karaboga, 2009), software testing (Suri & Snehlata, 2011), and the prediction of the tertiary structures of proteins (Benitez & Lopes, 2010). However, it has not yet been used to tune a fuzzy rule-based classifier.

The main motivation of this paper is to present a novel classifier design. The idea is to use the improved bee colony algorithms to achieve higher accuracy in real-life data classification.

The paper is organized as follows: Section 2 introduces the main theoretical aspects of fuzzy rule-based classification systems; Section 3 briefly describes the artificial bee colony technique; Section 4 describes a modified Bees algorithm and how it was used in this paper; and Section 5 presents test results on well-known data sets and results in the forecasting of non-medical treatment efficiency. The final section offers the conclusions.

2 FUZZY RULE-BASED CLASSIFIER

Fuzzy classification is a decision process based on fuzzy logic. Fuzzy rule-based classifiers consist of classification rules. Fuzzy rules are a collection of linguistic statements that describe how a fuzzy inference system should make a decision regarding the classification of an input (Fernandez & Herrera, 2012). This classifier determines a mapping from a given input to an output representing the class:

Rij: IF x1 = A1i AND … AND xk = Aki AND … AND xn = Ani THEN class = cj, w = CFi    (1)

where x = (x1, x2, x3, …, xn) is the vector of decision variables (attributes or features); Aki is the fuzzy term which characterizes the k-th feature in the i-th rule (i ∈ [1, R]), R is the number of rules; cj is the identifier of the j-th class, j ∈ [1, m]; CFi is the rule weighting coefficient, or degree of belief of the i-th rule, CFi ∈ [0, 1].

The task of traditional classification can be described by a function whose output is f(x) = (c1, c2, …, cm) ∈ {0, 1}^m, where ci = 1 and cj = 0 (j ∈ [1, m], j ≠ i), and the object described by vector x belongs to class ci.

The fuzzy classification is described by a function with output in [0, 1]^m, which refers the classified object to each class with a definite grade of membership, calculated in the following way:

βj(x) = Σ_{Rij} Π_{k=1}^{n} Aki(xk) · CFi,    j = 1, 2, …, m.

The classifier output is the class defined in the following way: class = cj*, j* = arg max_{1≤j≤m} βj.

The fuzzy classifier can be presented as a function c = f(x, θ, CF), where θ is the rule base containing rules of type (1).

Let us assume that the set of training data (the observation table) is given by {(xp; cp), p = 1, …, z}, and let us define the following unit function:

δ(p) = 1, if f(xp, θ, CF) = cp; δ(p) = 0 otherwise;

then the criterion of classification system accuracy can be defined in the following way:

E(θ, CF) = (1/z) Σ_{p=1}^{z} δ(p).

The problem of identification turns into the problem of searching for the maximum of the given function in a multidimensional space whose coordinates correspond to the fuzzy system parameters. To optimize θ, it is advisable to use the bee colony algorithm, which is set to generate and change the rule base. To set CF, it is advisable to use the modified bee colony algorithm, which relies on the characteristics of the realization of the bee dance operation.

3 THE ARTIFICIAL BEE COLONY ALGORITHM

Artificial Bee Colony is a novel optimization algorithm inspired by the natural behaviour of honey bees in their search for the best food sources. This technique was proposed by Karaboga (2005) and further improved by Karaboga & Basturk (2008).

The ABC algorithm works with a colony of artificial bees. The colony consists of three groups of bees: employed bees (workers), onlookers, and scouts. The position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees or onlooker bees is equal to the number of solutions in the population. At the first step, the ABC generates a randomly distributed initial population of SN solutions (food source positions), where SN denotes the size of the population. Each solution xi (i = 1, 2, …, SN) is a D-dimensional vector, where D is the number of optimization parameters.
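Returning to the classifier of Section 2, the membership grade βj, the winner-takes-all class assignment, and the accuracy criterion E can be sketched as follows. This is a minimal illustration under assumed triangular terms and a toy two-rule base, not the authors' implementation:

```python
def tri(a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# A rule of type (1): one term per feature, a class label, and a weight CF.
rules = [
    {"terms": [tri(0, 1, 2), tri(0, 1, 2)], "cls": 0, "cf": 1.0},
    {"terms": [tri(1, 2, 3), tri(1, 2, 3)], "cls": 1, "cf": 1.0},
]

def beta(x, j):
    """Grade of x in class j: sum over class-j rules of the product of
    term memberships, weighted by the rule's CF."""
    total = 0.0
    for r in rules:
        if r["cls"] == j:
            prod = r["cf"]
            for mu, xk in zip(r["terms"], x):
                prod *= mu(xk)
            total += prod
    return total

def classify(x, m=2):
    """class = c_{j*}, j* = arg max_j beta_j(x)."""
    return max(range(m), key=lambda j: beta(x, j))

def accuracy(data):
    """Criterion E: fraction of observations classified correctly."""
    return sum(classify(x) == c for x, c in data) / len(data)

print(classify([0.9, 1.1]))  # -> 0
```

The observation pairs (xp; cp) plug directly into `accuracy`, which is the quantity the bee colony algorithms of Section 4 maximize.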

After initialization, the population of positions (solutions) is subjected to repeated cycles, C = 1, 2, …, MCN, of the search processes of the employed bees, the onlooker bees, and the scout bees.

The main steps of the algorithm are given below (Akay & Karaboga, 2009), (Karaboga & Akay, 2009):

1: Initialize the population of solutions xij, i = 1, …, SN and j = 1, 2, …, D
2: Evaluate the population
3: C = 1
4: repeat
{Employed Bees' Phase}
5: Produce new solutions vi for the employed bees by vij = xij + φij(xij − xkj), where k ∈ {1, 2, …, SN} and j = 1, 2, …, D are randomly chosen indices. Although k is determined randomly, it has to be different from i. φij is a random number in [−1, 1] which controls the production of neighbouring food sources around xij and represents the comparison of two food positions as seen by a bee.
6: Apply the greedy selection process between vi and xi
{Onlooker Bees' Phase}
7: Calculate the probability values pi for the solutions xi by

pi = fiti / Σ_{n=1}^{SN} fitn,

where fiti is the fitness value of solution i, which is proportional to the nectar amount of the food source in position i, and SN is the number of food sources, which is equal to the number of employed bees or onlooker bees.
8: Produce the new solutions vi for the onlookers from the solutions xi, selected depending on pi, and evaluate them
9: Apply the greedy selection process between vi and xi
{Scout Bees' Phase}
10: Determine the abandoned solution for the scout, if it exists, and replace it with a new randomly produced solution xi by

xij = min_j + rand[0, 1] · (max_j − min_j)

11: Memorize the best solution achieved so far
12: C = C + 1
13: until C = MCN

4 THE ARTIFICIAL BEE COLONY ALGORITHM

The design process of a fuzzy rule-based system from a given input-output data set can be presented as a structure- and parameter-optimization problem (Evsukoff et al., 2009).

Several algorithms take part in tuning the fuzzy classifier: the algorithm of rule-base initialisation on the ground of a training data set, the base algorithm of rule generation (which includes the algorithm of the scout bee work), and the modified algorithm of rule weighting coefficient setting. A step-by-step description of the above mentioned algorithms is given below.

4.1 The algorithm of the rule base initialisation on the ground of the training data set

This algorithm is aimed at forming a primary rule base which has one rule for each class. In addition, the algorithm eliminates those areas of change of the input variables which are not covered by any terms.

Input: The number of classes m, the observation table {xpi}, Type – the type of membership function.
Output: The primary rule base θ of the classifier.
Algorithm:
1: θ = {∅}
2: For each class q Do:
2.1: For each feature i Do:
2.1.1: Search for min classqi = min_p(xpi)
2.1.2: Search for max classqi = max_p(xpi)
2.1.3: Create the term Aiq of type Type, laid over the interval [min classqi, max classqi]
End do (i);
2.2: Create a rule of type (1) with class = cq, w = 1, whose antecedent contains Aiq for each xi
2.3: θ := θ ∪ {Rq}
End do (q);
3: For each feature i Do:
3.1: Check for areas of the feature's range that are not covered by any term
3.2: IF uncovered places are found THEN:
3.2.1: The closest term to the left moves its right border according to the size of the empty distance
3.2.2: The closest term to the right moves its left border according to the size of the empty distance
End IF;
End do (i).

Figure 1a) represents the result of the initialisation algorithm after step 2. Three classes, set over the feature xi by triangular membership functions, are considered. The range of the input feature xi is [mini, maxi]. Not the whole range is covered by terms: the areas [max class3i, min class1i] and [max class1i, maxi] are not covered. Figure 1b) represents the outcome of the algorithm for the feature xi.

4.2 The work of the scout bee algorithm

In this algorithm scout bees create one fuzzy rule for the chosen class. The rule generated by the algorithm contains fuzzy terms with a previously defined form of membership function. In actual fact, the algorithm realizes a methodology of random search for the fuzzy rule.

Input: The number of features n, mini – the minimum value of feature i, maxi – the maximum value of feature i, Type – the type of the membership function, q – the current class.
Output: The rule R of the fuzzy classifier.
Algorithm:
1: For each feature i Do:
1.1: Set the term Aiz at random, corresponding to the type of the membership function Type, in such a way that the left border of Aiz > mini and the right border of Aiz < maxi.
End do (i);
2: Create a rule of type (1) with class = cq, w = 1, whose antecedent contains Aiz for each xi.

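For reference, the canonical ABC main loop of Section 3 (steps 1-13) can be sketched on the sphere function f2. The scout trigger `limit` is the usual ABC control parameter and an assumption here, since the text does not fix how an abandoned source is detected:

```python
import random

def abc_minimize(f, dim, lo, hi, SN=20, MCN=200, limit=40, seed=1):
    rng = random.Random(seed)
    new = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    xs = [new() for _ in range(SN)]          # step 1: food sources
    trials = [0] * SN
    fit = lambda x: 1.0 / (1.0 + f(x))       # fitness for minimization
    best = min(xs, key=f)                    # step 11's memory

    def neighbour(i):                        # v_ij = x_ij + phi*(x_ij - x_kj)
        k = rng.choice([j for j in range(SN) if j != i])
        j = rng.randrange(dim)
        v = xs[i][:]
        v[j] += rng.uniform(-1, 1) * (xs[i][j] - xs[k][j])
        return v

    def greedy(i, v):                        # steps 6 and 9
        if fit(v) > fit(xs[i]):
            xs[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(MCN):                     # steps 4-13
        for i in range(SN):                  # employed bees' phase
            greedy(i, neighbour(i))
        total = sum(fit(x) for x in xs)
        probs = [fit(x) / total for x in xs]         # step 7
        for _ in range(SN):                  # onlooker bees' phase
            i = rng.choices(range(SN), weights=probs)[0]
            greedy(i, neighbour(i))
        i = max(range(SN), key=trials.__getitem__)
        if trials[i] > limit:                # scout bees' phase, step 10
            xs[i], trials[i] = new(), 0
        best = min(xs + [best], key=f)       # step 11
    return best

best = abc_minimize(lambda x: sum(v * v for v in x), dim=5, lo=-100, hi=100)
```

On the 5-parameter sphere this converges close to the global minimum within the default budget, mirroring the Table 2 behaviour of ABC on f2.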
Figure 1. Demonstration of the work of the initialisation algorithm.

4.3 The base algorithm of the bee colony for generation of the fuzzy classifier rules

The given algorithm serves for the formation of the fuzzy classifier rule base and aims to obtain an initial rule base which is surely better than random completion.

The algorithm joins two conceptions of decision making: scout bees, using a random search methodology, realise the above given algorithm of work, generating new solutions, while working bees, realising the idea of local search, set the antecedents (IF-parts) and consequents (THEN-parts) of the rules.

To make the algorithm less cumbersome, one can decide to shorten the number of improved rules on the ground of their utility value, defined by the incremental growth rate of the number of correctly classified training samples. The given solution is the analogue of the bee "dance" in nature. After that, the best solution from the multitude of best solutions at each stage within the limits of the given iteration is picked, and this very solution is added into the rule base.

In the current version, the application field of the algorithm is limited to data sets which contain integer-valued and real-valued features, excluding nominal features.

Input: s – the number of scout bees, o – the number of working bees, l – the number of best rules under investigation, z – the number of rules generated by the algorithm, θ – the initial rule base according to the algorithm of initialisation of the fuzzy classifier rule base.
Output: θ* – the effective base of the fuzzy classifier rules.
Algorithm variables:
RScouts – the multitude of rules received by scout bees;
BestScout – the best rule, according to the value of utility, from the multitude RScouts;
RBestScoutWork – the multitude of o rules received by working bees by perfecting the rule BestScout;
REllite – the multitude of the l rules closest to BestScout in terms of the result, from the multitude RScouts;
RElliteWorkj – the multitude of o rules received as a result of perfecting the rule REllitej by working bees, 1 ≤ j ≤ l;
ABase – the initial classification accuracy at the current iteration;
ABestScout – the increase of the classifier precision at the cost of inclusion of BestScout into the rule base;
AScoutsi – the increase of the classifier precision at the cost of inclusion of RScoutsi into the rule base, 1 ≤ i ≤ s;
ABestScoutWorki – the increase of the classifier precision at the cost of inclusion of RBestScoutWorki into the base, 1 ≤ i ≤ o;
AElliteWorkji – the increase of the classifier precision at the cost of inclusion of RElliteWorkji into the base, 1 ≤ j ≤ l, 1 ≤ i ≤ o;
CountAddRule – the number of rules added into the rule base.
Algorithm:
1: θ* = θ
2: Calculate ABase = E(θ, {1})
3: Scout bees generate s rules RScoutsi, 1 ≤ i ≤ s, according to the algorithm of scout bee work
4: For each scout i calculate: AScoutsi = E(θ ∪ RScoutsi, {1}) − ABase
5: Find the BestScout rule which satisfies the condition ABestScout = max(AScouts)
6: REllite = ∅
7: Form the multitude of REllite rules in the following way: REllite includes RScoutsi, where i satisfies the condition min_{1≤i≤s}(ABestScout − AScoutsi) and RScoutsi ∉ REllite
8: Each working bee i forms from BestScout the vector RBestScoutWorki, 1 ≤ i ≤ o, using the method of local search
9: For each working bee i calculate: ABestScoutWorki = E(θ ∪ RBestScoutWorki, {1}) − ABase;
10: Each working bee i forms from REllitej the vector RElliteWorkji, 1 ≤ j ≤ l, 1 ≤ i ≤ o, using the method of local search
11: For each rule j from REllite Do:
11.1: For each working bee i calculate: AElliteWorkji = E(θ ∪ RElliteWorkji, {1}) − ABase;
End do (j);
12: Place into the base θ* the rule corresponding to the condition max(ABestScout, ABestScoutWork, AElliteWork);
13: CountAddRule := CountAddRule + 1; if CountAddRule = z then EXIT, otherwise go to step 2.
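The scout/worker loop above can be sketched as follows. The rule encoding, `accuracy`, `random_rule`, and `perturb` are toy placeholders rather than the paper's actual operators, and the elite-rule branch (steps 6-7 and 10-11) is omitted for brevity:

```python
import random

def generate_rules(base, accuracy, random_rule, perturb,
                   s=10, o=5, z=3, seed=0):
    rng = random.Random(seed)
    base = list(base)
    for _ in range(z):                                    # add z rules in total
        a0 = accuracy(base)                               # step 2: ABase
        scouts = [random_rule(rng) for _ in range(s)]     # step 3
        gains = [accuracy(base + [r]) - a0 for r in scouts]       # step 4
        best = scouts[max(range(s), key=gains.__getitem__)]       # step 5
        candidates = [best] + [perturb(best, rng) for _ in range(o)]  # steps 8-9
        winner = max(candidates, key=lambda r: accuracy(base + [r]))
        base.append(winner)                               # step 12
    return base

# Toy instantiation: a "rule" is a number, and the base is more accurate
# the closer its best rule lies to 0.5.
def acc(base):
    return 0.0 if not base else max(0.0, 1 - min(abs(r - 0.5) for r in base))

out = generate_rules([], acc,
                     random_rule=lambda rng: rng.uniform(0, 1),
                     perturb=lambda r, rng: r + rng.uniform(-0.1, 0.1))
```

In the paper's setting, `accuracy` would be E(θ, {1}) over the training data and a "rule" a full antecedent/consequent of type (1).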

4.4 The modified bee colony algorithm to optimize the weights

The above mentioned algorithm is used to optimize the CF vector. The algorithm is modified because there is no single understanding of how bees are attracted to the source of the nectar; although it is known that this attraction has a stochastic nature, it is connected with the quality of the source of nectar. In the modified algorithm, the enlistment is performed according to a genetic-algorithm breeding method, and to choose the solution a simulated annealing method is used.

Input: The size of the hive U, the rule base θ, the number of iterations Iter, the required accuracy E, the initial temperature T0, the cooling index α, the percentage of scouts in the hive P_S, the parameter g, the form of the population formation algorithm Alg.
Output: CF – the optimal rule weighting coefficients of the fuzzy classifier.
Algorithm variables:
Iter – the number of the current iteration;
best – the best decision vector;
BS – random decision vectors (scouts);
W – the block of decision vectors of the working bees;
NW – decision vectors formed on the ground of the whole block of working bees;
NB – decision vectors formed on the ground of the vector best;
F – the storage of all decision vectors;
Fj.co – the normalized estimated accuracy for vector j of the storage F.
Algorithm:
1: Iter = 1, g = 5, |BS| = P_S·U/100%, T = T0, W = ∅, CF = ∅
2: Create random decision vectors BS, one for each scout
3: Calculate the classification accuracy BSj.accuracy = E(θ, BSj)
4: Define the best decision best = argmax(BSk.accuracy, Wq.accuracy)
5: Run the cycle of j from 1 to |BS|:
5.1: IF exp(−|BSj.accuracy − best.accuracy|/T) < rand·g THEN include decision j into the working bees' block W
6: best is included into the working bees' block W
7: Form new decisions NW on the ground of W. Run the cycle of j from 1 to |W|:
7.1: NWj = Wj ± rand·(Wj − best)·g;
8: Calculate the accuracy NWj.accuracy = E(θ, NWj)
9: Form new decisions NB on the ground of best. Run the cycle of j from 1 to |W|:
9.1: NBj := best ± rand·(Wj − best)·g;
10: Calculate the accuracy NBj.accuracy = E(θ, NBj)
11: Calculate the decisions' normalized accuracy for all foragers F = NW + NB + W
12: Fj.co := Fj.accuracy / Σi Fi.accuracy
13: Form W. It will include decisions from F according to Fj.co:
  Fj.co > 0.8 – decision j is included 3 times;
  Fj.co > 0.4 – decision j is included 2 times;
  Fj.co > 0.05 – decision j is included 1 time.
14: Form a new population whose number equals the hive size U, according to the given algorithm Alg
15: Form the scouts: |BS| := |F| − |W|; if |BS| is more than P_S·U/100%, then |BS| := P_S·U/100%
16: If the number of iterations is exceeded or the required accuracy E is gained, then CF = Wk, k = argmax(Wi.accuracy), 1 ≤ i ≤ |W|, EXIT; otherwise Iter := Iter + 1, T := αT, and move to step 2.

5 BENCHMARK AND COMPARISON OF RESULTS

Figure 2. Wine data: a) accuracy vs. number of rules; b) accuracy vs. size of hive for training and test patterns.

The proposed fuzzy classifier is tested with the benchmark classification problems Bupa, Iris, Glass, New thyroid, and Wine, which are publicly available in the KEEL-dataset repository (http://www.keel.es). Ten-fold cross-validation is employed to examine the generalization ability of the proposed approach. In the ten-fold cross-validation procedure, the datasets are separated into ten subsets of almost the same size. Nine subsets are used as training patterns for designing the fuzzy rule-based classifier; the remaining subset is used as the test set to evaluate the constructed fuzzy classifier. This procedure is performed ten times, with the roles of the subsets exchanged so that every subset is used as the test set. The computer simulation implemented the previously mentioned procedure ten times and calculated the average accuracy, the corresponding standard deviation, and the average number of rules associated with the generated fuzzy rule-based classifiers.

We examine the relationship between the accuracy and the number of rules, and between the accuracy and the size of the hive. The performance for a varying number of fuzzy rules and hive sizes is shown in Fig. 2 for the Wine dataset and in Fig. 3 for the Glass dataset.

From the figures, we can see that the accuracy of classification on training patterns and test patterns is further improved by increasing the number of rules generated for each class.
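Two ingredients of the Section 4.4 weight optimiser can be sketched in isolation: the annealing-style acceptance of scout decisions (step 5.1) and the normalised-accuracy multiplicity selection (steps 12-13). The thresholds 0.8/0.4/0.05 come from the text; everything else here is illustrative:

```python
import math
import random

def accept_scouts(scout_accs, best_acc, T, g=5.0, rng=None):
    """Step 5.1: a scout joins the worker block with a probability
    driven by its accuracy gap to the best decision and temperature T."""
    rng = rng or random.Random(1)
    return [a for a in scout_accs
            if math.exp(-abs(a - best_acc) / T) < rng.random() * g]

def multiplicity_select(foragers):
    """Steps 12-13: each decision (solution, accuracy) re-enters the
    worker block 3, 2, or 1 times by its normalised accuracy share Fj.co."""
    total = sum(a for _, a in foragers)
    block = []
    for sol, a in foragers:
        co = a / total
        copies = 3 if co > 0.8 else 2 if co > 0.4 else 1 if co > 0.05 else 0
        block += [sol] * copies
    return block
```

Copying high-share decisions several times biases the next worker block toward good weight vectors, while the temperature T, cooled by α each iteration, gradually tightens the acceptance of weak scouts.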

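The ten-fold cross-validation protocol described above can be sketched as follows (my own minimal version): shuffle, split into ten near-equal folds, train on nine and test on the held-out fold, rotating so each fold serves as the test set exactly once:

```python
import random

def ten_fold_splits(n, seed=0):
    """Yield (train_indices, test_indices) for each of the ten folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::10] for k in range(10)]      # ten near-equal subsets
    for k in range(10):
        test = folds[k]
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        yield train, test

# e.g. the Wine set has 178 instances
splits = list(ten_fold_splits(178))
```

Averaging the test accuracy over the ten splits (and, as in the paper, repeating the whole procedure ten times) yields the means and standard deviations reported in Tables 3-6.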
Figure 3. Glass data: a) accuracy vs. number of rules; b) accuracy vs. size of hive for training and test patterns.

In addition, the figures show that a hive size of 40 is a good compromise between the accuracy and the compactness of the spread of the results.

The following five algorithms are used as reference algorithms (Alcala-Fdez et al., 2011):

Ant-Miner (AM) – an ant colony system based on a heuristic function using the entropy measure for each attribute-value (Parpinelli et al., 2002);
CO-Evolutionary Rule Extractor (CORE) – a co-evolutionary method which employs as a fitness measure a combination of the true positive rate and the false positive rate (Tan, Yu, & Ang, 2006);
Hierarchical Decision Rules (HIDER) – a method which iteratively creates rules that cover randomly selected examples of the training set (Aguilar-Ruiz, Riquelme, & Toro, 2003);
Steady-State Genetic Algorithm for Extracting Fuzzy Classification Rules from Data (SGERD) – a steady-state GA which generates a pre-specified number of rules per class following a GCCL approach (Mansoori et al., 2008);
Tree Analysis with Randomly Generated and Evolved Trees (TARGET) – a genetic algorithm in which each chromosome represents a complete decision tree (Gray & Fan, 2008).

Tables 3–6 compare the results obtained herein with those of the five reference algorithms (Alcala-Fdez et al., 2011).

Table 3. Average results of training accuracy (%) obtained.

Data set   Bupa    Iris    Glass   New thyroid   Wine
Pfc*       79.55   98.59   71.92   98.66         98.25
AM         80.38   97.26   81.48   99.17         99.69
CORE       61.93   95.48   54.26   92.66         99.06
HIDER      73.37   97.48   90.09   95.97         97.19
SGERD      59.13   97.33   53.84   90.23         91.76
TARGET     68.86   93.50   45.07   88.05         85.19

*Proposed fuzzy classifier

Table 4. Standard deviations of training accuracy (%) obtained.

Data set   Bupa   Iris   Glass   New thyroid   Wine
Pfc*       1.72   0.42   2.17    0.61          0.71
AM         3.25   0.74   6.59    0.58          0.58
CORE       0.89   1.42   1.90    1.19          0.42
HIDER      2.70   0.36   1.64    0.83          0.98
SGERD      0.68   0.36   2.96    0.87          1.31
TARGET     0.89   2.42   0.90    2.19          1.58

*Proposed fuzzy classifier

Table 5. Average results of test accuracy (%) obtained.

Data set   Bupa    Iris    Glass   New thyroid   Wine
Pfc*       63.06   88.00   65.38   90.69         85.30
AM         57.25   96.00   53.74   90.76         92.06
CORE       61.97   92.67   45.74   90.76         94.87
HIDER      65.83   96.67   64.35   90.28         82.61
SGERD      57.89   96.67   48.33   88.44         87.09
TARGET     65.97   92.93   44.11   86.79         82.24

*Proposed fuzzy classifier

Table 6. Standard deviations of test accuracy (%) obtained.

Data set   Bupa    Iris   Glass   New thyroid   Wine
Pfc*       10.34   4.22   6.89    3.84          5.80
AM         7.71    3.27   12.92   6.85          6.37
CORE       4.77    4.67   9.36    5.00          4.79
HIDER      10.04   3.33   12.20   7.30          6.25
SGERD      3.41    3.33   5.37    6.83          6.57
TARGET     1.41    4.33   5.37    5.83          7.57

*Proposed fuzzy classifier

It is obvious from Tables 3–4 that on the training samples the proposed fuzzy classifier ranks first (Iris), second (Bupa, New thyroid), and third (Glass) among the given algorithms. This points to the good associative quality of the algorithm. The decision depends on the training sample; however, if one takes only one sample, the algorithm gives two or three variants of the rule base which differ in quality because of the random search conception included in it, as is witnessed by a root-mean-square error value.

According to Tables 5–6, we can estimate the prognostic properties of the solutions provided by the algorithms.

On the most complicated sample, Glass, the best decision is reached by our algorithm; second place is taken on Bupa, third on New thyroid and Wine, and easily the worst result is obtained on Iris. The bad result on the Iris sample is connected with the fact that our algorithms do not mark out the important features of objects, unlike HIDER and SGERD, but treat all features as if they were of the same importance.

The developed algorithms are implemented for tuning the fuzzy classifier for non-pharmacological treatment efficiency forecasting. The forecasting is based on the analysis of retrospective data before and after the treatment of rehabilitated patients. The patients are prescribed five kinds of treatment. The empirical base for forecasting is the medical data of patients rehabilitated in the Federal State Institution of Tomsk Scientific-Research Institute of Health-Resort and Physiotherapy, Federal Medical and Biological Agency of Russia.

The index of the functional stress of the organism is FNO = Index_AG/Index_LRP, where Index_AG is the adaptive hormone index – the ratio of glucocorticoid concentration (GC) to insulin (IS) in blood serum – and Index_LRP is the index of the lipid reserve for peroxidation, calculated on the basis of the data totality.

The patient undergoes a medical test after the treatment course, and then the FNO index is calculated. An increase of the FNO index value in dynamics gives evidence of a strengthening degree of functional stress of the organism, and a decrease gives evidence of normalization of the disturbed function.

FNO_koef = FNObefore/FNOafter serves as the prediction value. The value of this index gives evidence of treatment effectiveness. If FNO_koef > 1, the patient has improved after passing the treatment course (class 1); otherwise, notable improvement is not observed (class 2).

The fuzzy system permits predictions of treatment performance for new incoming patients with one or another complex, after training on the existing precedents (the training sample).

As input variables we selected, in the first experiment, KZ – glucocorticoid and IS – insulin; in the second one, KZ – glucocorticoid, TSH – thyroid-stimulating hormone, and TST – testosterone.

If the whole available sample is used for training, the problem of so-called 'overfitting' can arise: the classifiers will be tuned only to this sample and will not necessarily work effectively with other data. To overcome this problem and to obtain a fuzzy system of high generalization ability, we used the method of cross-validation. For each of the five complexes, each table of observations is divided into training and testing selections in the ratio 80:20.

For comparison alongside our proposed fuzzy classifier, two different methods are used: a Particle Swarm Optimization (PSO) algorithm and an Ant Colony Optimization (ACO) algorithm with singleton approximation.

The average results from 25 experiments with the same input parameters are compared with the algorithms that were used before and are given in Table 7.

Table 7. Classification results of training and test accuracy (%).

                Pfc***             PSO                ACO
ID*    KT**     Training   Test    Training   Test    Training   Test
KZ, IS
       1        90.95      75.33   72.24      71.43   78.53      72.72
       2        89.61      70.94   69.65      60.28   79.57      74.61
       3        91.48      81.43   68.89      66.43   76.04      74.20
       4        87.01      74.29   63.89      56.71   67.68      65.31
       5        85.59      70.00   59.50      56.11   67.74      61.80
KZ, TSH, TST
       1        81.25      69.50   67.45      59.30   75.32      65.95
       2        85.72      71.36   71.34      67.75   76.75      73.11
       3        82.50      67.16   70.15      62.13   78.17      65.51
       4        80.00      72.51   61.43      61.14   74.26      69.43
       5        80.00      62.57   60.82      57.33   71.38      61.48

*ID – Input Data
**KT – Kind of Treatment
***Pfc – Proposed fuzzy classifier

According to Table 7, the proposed fuzzy classifier gives better results than ACO and PSO, except in one case, in which the result of ACO is better on the testing selection of the second kind of treatment.

The trained fuzzy system is used to choose the most effective kind of treatment. For this purpose, a new patient's data is sent to the system input, and the system returns the predicted class of the patient's state change for each kind of treatment.

6 CONCLUSION

Transformation of experimental information into a fuzzy knowledge base can be a useful method of data processing in medicine, banking, management, and other areas where decision-makers prefer transparent and easily interpreted verbal rules to strict quantitative relationships.

A modified version of the ABC algorithm for the design of fuzzy rule-based classifiers has been introduced. Four components of the algorithm are observed: initialization, the work of the scout bee, rule antecedent generalisation, and the setting of the weights.

The No Free Lunch Theorem shows that in the absence of assumptions we should not prefer any classification algorithm over another (Wolpert & Macready, 1997). The given comparisons of the developed algorithm with its analogues demonstrate that the proposed algorithms are feasible and effective in solving complex classification problems.

The algorithms are checked against real data with the implementation of the fuzzy classifier to forecast the efficiency of non-medical treatment.

The offered algorithms can be implemented to solve problems in the area of data mining.

This project is done with financial support from the Russian Foundation for Basic Research (project No.
return: even there the invitation penetrates and by itself easily and
surely finds its way back—most easily, indeed, when it brings the
fugitive along to him that issued the invitation. Come hither, come
hither all ye, also thou, and thou, and thou, too, thou loneliest of all
fugitives!
Thus the invitation goes forth and remains standing, wheresoever
there is a parting of the ways, in order to call out. Ah, just as the
trumpet call of the soldiers is directed to the four quarters of the
globe, likewise does this invitation sound wherever there is a
meeting of roads; with no uncertain sound—for who would then
come?—but with the certitude of eternity.
It stands by the parting of the ways where worldly and earthly
sufferings have set down their crosses, and calls out: Come hither,
all ye poor and wretched ones, ye who in poverty must slave in
order to assure yourselves, not of a care-free, but of a toilsome,
future; ah, bitter contradiction, to have to slave for—assuring one's
self of that under which one groans, of that which one flees! Ye
despised and overlooked ones, about whose existence no one, aye,
no one is concerned, not so much even as about some domestic
animal which is of greater value! Ye sick, and halt, and blind, and
deaf, and crippled, come hither!—Ye bed-ridden, aye, come hither,
ye too; for the invitation makes bold to invite even the bed-ridden—
to come! Ye lepers; for the invitation breaks down all differences in
order to unite all, it wishes to make good the hardship caused by the
difference in men, the difference which seats one as a ruler over
millions, in possession of all gifts of fortune, and drives another one
out into the wilderness—and why? (ah, the cruelty of it!) because
(ah, the cruel human inference!) because he is wretched,
indescribably wretched. Why then? Because he stands in need of
help, or at any rate, of compassion. And why, then? Because human
compassion is a wretched thing which is cruel when there is the
greatest need of being compassionate, and compassionate only
when, at bottom, it is not true compassion! Ye sick of heart, ye who
only through your anguish learned to know that a man's heart and
an animal's heart are two different things, and what it means to be
sick at heart—what it means when the physician may be right in
declaring one sound of heart and yet heart-sick; ye whom
faithlessness deceived and whom human sympathy—for the
sympathy of man is rarely late in coming—whom human sympathy
made a target for mockery; all ye wronged and aggrieved and ill-
used; all ye noble ones who, as any and everybody will be able to
tell you, deservedly reap the reward of ingratitude (for why were ye
simple enough to be noble, why foolish enough to be kindly, and
disinterested, and faithful)—all ye victims of cunning, of deceit, of
backbiting, of envy, whom baseness chose as its victim and
cowardice left in the lurch, whether now ye be sacrificed in remote
and lonely places, after having crept away in order to die, or
whether ye be trampled underfoot in the thronging crowds where no
one asks what rights ye have, and no one, what wrongs ye suffer,
and no one, where ye smart or how ye smart, whilst the crowd with
brute force tramples you into the dust—come ye hither!
The invitation stands at the parting of the ways, where death parts
death and life. Come hither all ye that sorrow and ye that vainly
labor! For indeed there is rest in the grave; but to sit by a grave, or
to stand by a grave, or to visit a grave, all that is far from lying in
the grave; and to read to one's self again and again one's own
words which one knows by heart, the epitaph which one devised
one's self and understands best, namely, who it is that lies buried
here, all that is not the same as to lie buried one's self. In the grave
there is rest, but by the grave there is no rest; for it is said: so far
and no farther, and so you may as well go home again. But however
often, whether in your thoughts or in fact, you return to that grave—
you will never get any farther, you will not get away from the spot,
and this is very trying and is by no means rest. Come ye hither,
therefore: here is the way by which one may go farther, here is rest
by the grave, rest from the sorrow over loss, or rest in the sorrow of
loss—through him who everlastingly re-unites those that are parted,
and more firmly than nature unites parents with their children, and
children with their parents—for, alas! they were parted; and more
closely than the minister unites husband and wife—for, alas! their
separation did come to pass; and more indissolubly than the bond of
friendship unites friend with friend—for, alas! it was broken.
Separation penetrated everywhere and brought with it sorrow and
unrest; but here is rest!—Come hither also ye who had your abodes
assigned to you among the graves, ye who are considered dead to
human society, but neither missed nor mourned—not buried and yet
dead; that is, belonging neither to life nor to death; ye, alas! to
whom human society cruelly closed its doors and for whom no grave
has as yet opened itself in pity—come hither, ye also, here is rest,
and here is life!
The invitation stands at the parting of the ways, where the road of
sin turns away from the inclosure of innocence—ah, come hither, ye
are so close to him; but a single step in the opposite direction, and
ye are infinitely far from him. Very possibly ye do not yet stand in
need of rest, nor grasp fully what that means; but still follow the
invitation, so that he who invites may save you from a predicament
out of which it is so difficult and dangerous to be saved; and so that,
being saved, ye may stay with him who is the Savior of all, likewise
of innocence. For even if it were possible that innocence be found
somewhere, and altogether pure: why should not innocence also
need a savior to keep it safe from evil?—The invitation stands at the
parting of the ways, where the road of sin turns away to enter more
deeply into sin. Come hither all ye who have strayed and have been
lost, whatever may have been your error and sin: whether one more
pardonable in the sight of man and nevertheless perhaps more
frightful, or one more terrible in the sight of man and yet,
perchance, more pardonable; whether it be one which became
known here on earth or one which, though hidden, yet is known in
heaven—and even if ye found pardon here on earth without finding
rest in your souls, or found no pardon because ye did not seek it, or
because ye sought it in vain: ah, return and come hither, here is
rest!
The invitation stands at the parting of the ways, where the road of
sin turns away for the last time and to the eye is lost in perdition.
Ah, return, return, and come hither! Do not shrink from the
difficulties of the retreat, however great; do not fear the irksome
way of conversion, however laboriously it may lead to salvation;
whereas sin with winged speed and growing pace leads forward or—
downward, so easily, so indescribably easy—as easily, in fact, as
when a horse, altogether freed from having to pull, cannot even with
all his might stop the vehicle which pushes him into the abyss. Do
not despair over each relapse which the God of patience has
patience enough to pardon, and which a sinner should surely have
patience enough to humble himself under. Nay, fear nothing and
despair not: he that sayeth "come hither," he is with you on the way,
from him come help and pardon on that way of conversion which
leads to him; and with him is rest.
Come hither all, all ye—with him is rest; and he will raise no
difficulties, he does but one thing: he opens his arms. He will not
first ask you, you sufferer—as righteous men, alas, are accustomed
to, even when willing to help—"Are you not perhaps yourself the
cause of your misfortune, have you nothing with which to reproach
yourself?" It is so easy to fall into this very human error, and from
appearances to judge a man's success or failure: for instance, if a
man is a cripple, or deformed, or has an unprepossessing
appearance, to infer that therefore he is a bad man; or, when a man
is unfortunate enough to suffer reverses so as to be ruined or so as
to go down in the world, to infer that therefore he is a vicious man.
Ah, and this is such an exquisitely cruel pleasure, this being
conscious of one's own righteousness as against the sufferer—
explaining his afflictions as God's punishment, so that one does not
even—dare to help him; or asking him that question which
condemns him and flatters our own righteousness, before helping
him. But he will not ask you thus, will not in such cruel fashion be
your benefactor. And if you are yourself conscious of your sin he will
not ask about it, will not break still further the bent reed, but raise
you up, if you will but join him. He will not point you out by way of
contrast, and place you outside of himself, so that your sin will stand
out as still more terrible, but he will grant you a hiding place within
him; and hidden within him your sins will be hidden. For he is the
friend of sinners. Let him but behold a sinner, and he not only stands
still, opening his arms and saying "come hither," nay, but he stands
—and waits, as did the father of the prodigal son; or he does not
merely remain standing and waiting, but goes out to search, as the
shepherd went forth to search for the strayed sheep, or as the
woman went to search for the lost piece of silver. He goes—nay, he
has gone, but an infinitely longer way than any shepherd or any
woman, for did he not go the infinitely long way from being God to
becoming man, which he did to seek sinners?

III

COME HITHER UNTO ME ALL YE THAT LABOR AND ARE HEAVY LADEN, AND I WILL GIVE YOU REST.
"Come hither!" For he supposes that they that labor and are heavy
laden feel their burden and their labor, and that they stand there
now, perplexed and sighing—one casting about with his eyes to
discover whether there is help in sight anywhere; another with his
eyes fixed on the ground, because he can see no consolation; and a
third with his eyes staring heavenward, as though help was bound to
come from heaven—but all seeking. Therefore he sayeth: "come
hither!" But he invites not him who has ceased to seek and to
sorrow.—"Come hither!" For he who invites knows that it is a mark
of true suffering, if one walks alone and broods in silent
disconsolateness, without courage to confide in any one, and with
even less self-confidence to dare to hope for help. Alas, not only he
whom we read about was possessed of a dumb devil.[6] No suffering
which does not first of all render the sufferer dumb is of much
significance, no more than the love which does not render one
silent; for those sufferers who run on about their afflictions neither
labor nor are heavy laden. Behold, therefore the inviter will not wait
till they that labor and are heavy laden come to him, but calls them
lovingly; for all his willingness to help might, perhaps, be of no avail
if he did not say these words and thereby take the first step; for in
the call of these words: "come hither unto me!" he comes himself to
them. Ah, human compassion—sometimes, perhaps, it is indeed
praiseworthy self-restraint, sometimes, perhaps, even true
compassion, which may cause you to refrain from questioning him
whom you suppose to be brooding over a hidden affliction; but also,
how often indeed is this compassion but worldly wisdom which does
not care to know too much! Ah, human compassion—how often was
it not pure curiosity, and not compassion, which prompted you to
venture into the secret of one afflicted; and how burdensome it was
—almost like a punishment of your curiosity—when he accepted your
invitation and came to you! But he who sayeth these redeeming
words "Come hither!" he is not deceiving himself in saying these
words, nor will he deceive you when you come to him in order to
find rest by throwing your burden on him. He follows the promptings
of his heart in saying these words, and his heart follows his words; if
you then follow these words, they will follow you back again to his
heart. This follows as a matter of course—ah, will you not follow the
invitation?—"Come hither!" For he supposes that they that labor and
are heavy laden are so worn out and overtaxed, and so near
swooning that they have forgotten, as though in a stupor, that there
is such a thing as consolation. Alas, or he knows for sure that there
is no consolation and no help unless it is sought from him; and
therefore must he call out to them "Come hither!"
"Come hither!" For is it not so that every society has some symbol or
token which is worn by those who belong to it? When a young girl is
adorned in a certain manner one knows that she is going to the
dance: Come hither all ye that labor and are heavy laden—come
hither! You need not carry an external and visible badge; come but
with your head anointed and your face washed, if only you labor in
your heart and are heavy laden.
"Come hither!" Ah, do not stand still and consider; nay, consider,
consider that with every moment you stand still after having heard
the invitation you will hear the call more faintly and thus withdraw
from it, even though you are standing still.—"Come hither!" Ah,
however weary and faint you be from work, or from the long, long
and yet hitherto fruitless search for help and salvation, and even
though you may feel as if you could not take one more step, and not
wait one more moment, without dropping to the ground: ah, but this
one step and here is rest!—"Come hither!" But if, alas, there be one
who is so wretched that he cannot come?—Ah, a sigh is sufficient;
your mere sighing for him is also to come hither.

THE PAUSE

COME HITHER UNTO ME ALL YE THAT LABOR AND ARE HEAVY LADEN, AND I SHALL GIVE YOU REST.
Pause now! But what is there to give pause? That which in the same
instant makes all undergo an absolute change—so that, instead of
seeing an immense throng of them that labor and are heavy laden
following the invitation, you will in the end behold the very opposite,
that is, an immense throng of men who flee back shudderingly,
scrambling to get away, trampling all down before them; so that, if
one were to infer the sense of what had been said from the result it
produced, one would have to infer that the words had been "procul
o procul este profani," rather than "come hither"—that gives pause
which is infinitely more important and infinitely more decisive: THE
PERSON OF HIM WHO INVITES. Not in the sense that he is not the
man to do what he has said, or not God, to keep what he has
promised; no, in a very different sense.
Pause is given by the fact that he who invites is, and insists on
being, the definite historic person he was 1800 years ago, and that
he as this definite person, and living under the conditions then
obtaining, spoke these words of invitation.—He is not, and does not
wish to be, one about whom one may simply know something from
history (i.e. world history, history proper, as against Sacred History);
for from history one cannot "learn" anything about him, the simple
reason being that nothing can be "known" about him.—He does not
wish to be judged in a human way, from the results of his life; that
is, he is and wishes to be, a rock of offense and the object of faith.
To judge him after the consequences of his life is a blasphemy, for
being God, his life, and the very fact that he was then living and
really did live, is infinitely more important than all the consequences
of it in history.

A. Who spoke these words of invitation?

He that invites. Who is he? Jesus Christ. Which Jesus Christ? He that
sits in glory on the right side of his Father? No. From his seat of
glory he spoke not a single word. Therefore it is Jesus Christ in his
lowliness, and in the condition of lowliness, who spoke these words.
Is then Jesus Christ not the same? Yes, verily, he is today, and was
yesterday, and 1800 years ago, the same who abased himself,
assuming the form of a servant—the Jesus Christ who spake these
words of invitation. It is also he who hath said that he would return
again in glory. In his return in glory he is, again, the same Jesus
Christ; but this has not yet come to pass.
Is he then not in glory now? Assuredly, that the Christian believes.
But it was in his lowly condition that he spoke these words; he did
not speak them from his glory. And about his return in glory nothing
can be known, for this can in the strictest sense be a matter of belief
only. But a believer one cannot become except by having gone to
him in his lowly condition—to him, the rock of offense and the object
of faith. In other shape he does not exist, for only thus did he exist.
That he will return in glory is indeed expected, but can be expected
and believed only by him who believes, and has believed, in him as
he was here on earth.
Jesus Christ is, then, the same; yet lived he 1800 years ago in
debasement, and is transfigured only at his return. As yet he has not
returned; therefore he is still the one in lowly guise about whom we
believe that he will return in glory. Whatever he said and taught,
every word he spoke, becomes eo ipso untrue if we give it the
appearance of having been spoken by Christ in his glory. Nay, he is
silent. It is the lowly Christ who speaks. The space of time between
(i.e. between his debasement and his return in glory) which is at
present about 1800 years, and will possibly become many times
1800—this space of time, or else what this space of time tries to
make of Christ, the worldly information about him furnished by world
history or church history, as to who Christ was, as to who it was who
really spoke these words—all this does not concern us, is neither
here nor there, but only serves to corrupt our conception of him,
and thereby renders untrue these words of invitation.
It is untruthful of me to impute to a person words which he never
used. But it is likewise untruthful, and the words he used likewise
become untruthful, or it becomes untrue that he used them, if I
assign to him a nature essentially unlike the one he had when he did
use them. Essentially unlike; for an untruth concerning this or the
other trifling circumstance will not make it untrue that "he" said
them. And therefore, if it please God to walk on earth in such strict
incognito as only one all-powerful can assume, in guise impenetrable
to all men; if it please him—and why he does it, for what purpose,
that he knows best himself; but whatever the reason and the
purpose, it is certain that the incognito is of essential significance—I
say, if it please God to walk on earth in the guise of a servant and,
to judge from his appearance, exactly like any other man; if it please
him to teach men in this guise—if, now, any one repeats his very
words, but gives the saying the appearance that it was God that
spoke these words: then it is untruthful; for it is untrue that he said
these words.

B. Can one from history[7] learn to know anything about Christ?

No. And why not? Because one cannot "know" anything at all about
"Christ"; for he is the paradox, the object of faith, and exists only for
faith. But all historic information is communication of "knowledge."
Therefore one cannot learn anything about Christ from history. For
whether now one learn little or much about him, it will not represent
what he was in reality. Hence one learns something else about him
than what is strictly true, and therefore learns nothing about him, or
gets to know something wrong about him; that is, one is deceived.
History makes Christ look different from what he looked in truth, and
thus one learns much from history about—Christ? No, not about
Christ; because about him nothing can be "known," he can only be
believed.

C. Can one prove from history that Christ was God?

Let me first ask another question: is any more absurd contradiction thinkable than wishing to prove (no matter, for the present, whether
one wishes to do so from history, or from whatever else in the wide
world one wishes to prove it) that a certain person is God? To
maintain that a certain person is God—that is, professes to be God—
is indeed a stumbling block in the purest sense. But what is the
nature of a stumbling block? It is an assertion which is at variance
with all (human) reason. Now think of proving that! But to prove
something is to render it reasonable and real. Is it possible, then, to
render reasonable and real what is at variance with all reason?
Scarcely; unless one wishes to contradict one's self. One can prove
only that it is at variance with all reason. The proofs for the divinity
of Christ given in Scripture, such as the miracles and his resurrection
from the grave exist, too, only for faith; that is, they are no "proofs,"
for they are not meant to prove that all this agrees with reason but,
on the contrary, are meant to prove that it is at variance with reason
and therefore a matter of faith.
First, then, let us take up the proofs from history. "Is it not 1800
years ago now that Christ lived, is not his name proclaimed and
reverenced throughout the world, has not his teaching (Christianity)
changed the aspect of the world, having victoriously affected all
affairs: has then history not sufficiently, or more than sufficiently,
made good its claim as to who he was, and that he was—God?" No,
indeed, history has by no means sufficiently, or more than
sufficiently, made good its claim, and in fact history cannot
accomplish this in all eternity. However, as to the first part of the
statement, it is true enough that his name is proclaimed throughout
the world—as to whether it is reverenced, that I do not presume to
decide. Also, it is true enough that Christianity has transformed the
aspect of the world, having victoriously affected all affairs, so
victoriously indeed, that everybody now claims to be a Christian.
But what does this prove? It proves, at most, that Jesus Christ was a
great man, the greatest, perhaps, who ever lived. But that he was
God—stop now, that conclusion shall with God's help fall to the
ground.
Now, if one intends to introduce this conclusion by assuming that
Jesus Christ was a man, and then considers the 1800 years of
history (i.e. the consequences of his life), one may indeed conclude
with a constantly rising superlative: he was great, greater, the
greatest, extraordinarily and astonishingly the greatest man who
ever lived. If one begins, on the other hand, with the assumption (of
faith) that he was God, one has by so doing stricken out and cancelled the 1800 years as not making the slightest difference, one
way or the other, because the certainty of faith is on an infinitely
higher plane. And one course or the other one must take; but we
shall arrive at sensible conclusions only if we take the latter.
If one takes the former course one will find it impossible—unless by
committing the logical error of passing over into a different category
—one will find it impossible in the conclusion suddenly to arrive at
the new category "God"; that is, one cannot make the consequence,
or consequences, of—a man's life suddenly prove at a certain point
in the argument that this man was God. If such a procedure were
correct one ought to be able to answer satisfactorily a question like
this: what must the consequence be, how great the effects, how
many centuries must elapse, in order to infer from the consequences
of a man's life—for such was the assumption—that he was God; or
whether it is really the case that in the year 300 Christ had not yet
been entirely proved to be God, though certainly the most
extraordinarily, astonishingly, greatest man who had ever lived, but
that a few more centuries would be necessary to prove that he was
God. In that case we would be obliged to infer that people in the
fourth century did not look upon Christ as God, and still less they
who lived in the first century; whereas the certainty that he was God
would grow with every century. Also, that in our century this
certainty would be greater than it had ever been, a certainty in
comparison with which the first centuries hardly so much as
glimpsed his divinity. You may answer this question or not, it does
not matter.
In general, is it at all possible by the consideration of the gradually
unfolding consequences of something to arrive at a conclusion
different in quality from what we started with? Is it not sheer
insanity (providing man is sane) to let one's judgment become so
altogether confused as to land in the wrong category? And if one
begins with such a mistake, then how will one be able, at any
subsequent point, to infer from the consequences of something, that
one has to deal with an altogether different, in fact, infinitely
different, category? A foot-print certainly is the consequence of
some creature having made it. Now I may mistake the track for that
of, let us say, a bird; whereas by nearer inspection, and by following
it for some distance, I may make sure that it was made by some
other animal. Very good; but there was no infinite difference in
quality between my first assumption and my later conclusion. But
can I, on further consideration and following the track still further,
arrive at the conclusion: therefore it was a spirit—a spirit that leaves
no tracks? Precisely the same holds true of the argument that from
the consequences of a human life—for that was the assumption—we
may infer that therefore it was God.
Is God then so like man, is there so little difference between the two
that, while in possession of my right senses, I may begin with the
assumption that Christ was human? And, for that matter, has not
Christ himself affirmed that he was God? On the other hand, if God
and man resemble each other so closely, and are related to each
other to such a degree—that is, essentially belong to the same
category of beings, then the conclusion "therefore he was God" is
nevertheless just humbug, because if that is all there is to being
God, then God does not exist at all. But if God does exist and,
therefore, belongs to a category infinitely different from man, why,
then neither I nor any one else can start with the assumption that
Christ was human and end with the conclusion that therefore he was
God. Any one with a bit of logical sense will easily recognize that the
whole question about the consequences of Christ's life on earth is
incommensurable with the decision that he is God. In fact, this
decision is to be made on an altogether different plane: man must
decide for himself whether he will believe Christ to be what he
himself affirmed he was, that is, God, or whether he will not believe
so.
What has been said—mind you, providing one will take the time to
understand it—is sufficient to make a logical mind stop drawing any
inferences from the consequences of Christ's life: that therefore he
was God. But faith in its own right protests against every attempt to
approach Jesus Christ by the help of historical information about the
consequences of his life. Faith contends that this whole attempt is—
blasphemous. Faith contends that the only proof left unimpaired by
unbelief when it did away with all the other proofs of the truth of
Christianity, the proof which—indeed, this is complicated business—I
say, which unbelief invented in order to prove the truth of
Christianity—the proof about which so excessively much ado has
been made in Christendom, the proof of 1800 years: as to this, faith
contends that it is—blasphemy.
With regard to a man it is true that the consequences of his life are
more important than his life. If one, then, in order to find out who
Christ was, and in order to find out by some inference, considers the
consequences of his life: why, then one changes him into a man by
this very act—a man who, like other men, is to pass his examination
in history, and history is in this case as mediocre an examiner as any
half-baked teacher in Latin.
But strange! By the help of history, that is, by considering the
consequences of his life, one wishes to arrive at the conclusion that
therefore, therefore he was God; and faith makes the exactly
opposite contention that he who even begins with this syllogism is
guilty of blasphemy. Nor does the blasphemy consist in assuming
hypothetically that Christ was a man. No, the blasphemy consists in
the thought which lies at the bottom of the whole business, the
thought without which one would never start it, and of whose
validity one is fully and firmly assured that it will hold also with
regard to Christ—the thought that the consequences of his life are
more important than his life; in other words, that he is a man. The
hypothesis is: let us assume that Christ was a man; but at the
bottom of this hypothesis, which is not blasphemy as yet, there lies
the assumption that, the consequences of a man's life being more
important than his life, this will hold true also of Christ. Unless this is
assumed one must admit that one's whole argument is absurd, must
admit it before beginning—so why begin at all? But once it is
assumed, and the argument is started, we have the blasphemy. And
the more one becomes absorbed in the consequences of Christ's life,
with the aim of being able to make sure whether or no he was God,
the more blasphemous is one's conduct; and it remains blasphemous
so long as this consideration is persisted in.
Curious coincidence: one tries to make it appear that, providing one
but thoroughly considers the consequences of Christ's life, this
"therefore" will surely be arrived at—and faith condemns the very
beginning of this attempt as blasphemy, and hence the continuance
in it as a worse blasphemy.
"History," says faith, "has nothing to do with Christ." With regard to
him we have only Sacred History (which is different in kind from
general history), Sacred History which tells of his life and career
when in debasement, and tells also that he affirmed himself to be
God. He is the paradox which history never will be able to digest or
convert into a general syllogism. He is in his debasement the same
as he is in his exaltation—but the 1800 years, or let it be 18,000
years, have nothing whatsoever to do with this. The brilliant
consequences in the history of the world which are sufficient,
almost, to convince even a professor of history that he was God,
these brilliant consequences surely do not represent his return in
glory! Forsooth, in that case it were imagined rather meanly! The
same thing over again: Christ is thought to be a man whose return
in glory can be, and can become, nothing else than the
consequences of his life in history—whereas Christ's return in glory is
something absolutely different and a matter of faith. He abased
himself and was swathed in rags—he will return in glory; but the
brilliant consequences in history, especially when examined a little
more closely, are too shabby a glory—at any rate a glory of an
altogether incongruous nature, of which faith therefore never
speaks, when speaking about his glory. History is a very respectable
science indeed, only it must not become so conceited as to take
upon itself what the Father will do, and clothe Christ in his glory,
dressing him up with the brilliant garments of the consequences of
his life, as if that constituted his return. That he was God in his
debasement and that he will return in glory, all this is far beyond the
comprehension of history; nor can all this be got from history,
excepting by an incomparable lack of logic, and however
incomparable one's view of history may be otherwise.
How strange, then, that one ever wished to use history in order to
prove Christ divine.
D. Are the consequences of Christ's life more important than
his life?

No, by no means, but rather the opposite; for else Christ were but a
man.
There is really nothing remarkable in a man having lived. There have
certainly lived millions upon millions of men. If the fact is
remarkable, there must have been something remarkable in a man's
life. In other words, there is nothing remarkable in his having lived,
but his life was remarkable for this or that. The remarkable thing
may, among other matters, also be what he accomplished; that is,
the consequences of his life.
But that God lived here on earth in human form, that is infinitely
remarkable. No matter if his life had had no consequences at all—it
remains equally remarkable, infinitely remarkable, infinitely more
remarkable than all possible consequences. Just try to introduce that
which is remarkable as something secondary and you will
straightway see the absurdity of doing so: now, if you please,
whatever remarkable is there in God's life having had remarkable
consequences? To speak in this fashion is merely twaddling.
No, that God lived here on earth, that is what is infinitely
remarkable, that which is remarkable in itself. Assuming that Christ's
life had had no consequences whatsoever—if any one then
undertook to say that therefore his life was not remarkable it would
be blasphemy. For it would be remarkable all the same; and if a
secondary remarkable characteristic had to be introduced it would
consist in the remarkable fact that his life had no consequences. But
if one should say that Christ's life was remarkable because of its
consequences, then this again were a blasphemy; for it is his life
which in itself is the remarkable thing.
There is nothing very remarkable in a man's having lived, but it is
infinitely remarkable that God has lived. God alone can lay so much
emphasis on himself that the fact of his having lived becomes
infinitely more important than all the consequences which may flow
therefrom and which then become a matter of history.

E. A comparison between Christ and a man who in his life endured
the same treatment by his times as Christ endured.

Let us imagine a man, one of the exalted spirits, one who was
wronged by his times, but whom history later reinstated in his rights
by proving by the consequences of his life who he was. I do not
deny, by the way, that all this business of proving from the
consequences is a course well suited to "a world which ever wishes
to be deceived." For he who was contemporary with him and did not
understand who he was, he really only imagines that he understands
when he has got to know it by help of the consequences of the
noble one's life. Still, I do not wish to insist on this point, for with
regard to a man it certainly holds true that the consequences of his
life are more important than the fact of his having lived.
Let us imagine one of these exalted spirits. He lives among his
contemporaries without being understood, his significance is not
recognized—he is misunderstood, and then mocked, persecuted, and
finally put to death like a common evil-doer. But the consequences
of his life make it plain who he was; history which keeps a record of
these consequences re-instates him in his rightful position, and now
he is named in one century after another as the great and the noble
spirit, and the circumstances of his debasement are almost
completely forgotten. It was blindness on the part of his
contemporaries which prevented them from comprehending his true
nature, and wickedness which made them mock him and deride him,
and finally put him to death. But be no more concerned about this;
for only after his death did he really become what he was, through
the consequences of his life which, after all, are by far more
important than his life.
Now is it not possible that the same holds true with regard to Christ?
It was blindness and wickedness on the part of those times[8]—but
be no more concerned about this, history has now re-instated him,
from history we know now who Jesus Christ was, and thus justice is
done him.
Ah, wicked thoughtlessness which thus interprets Sacred History like
profane history, which makes Christ a man! But can one, then, learn
anything from history about Jesus? (cf. B) No, nothing. Jesus Christ
is the object of faith—one either believes in him or is offended by
him; for "to know" means precisely that such knowledge does not
pertain to him. History can therefore, to be sure, give one
knowledge in abundance; but "knowledge" annihilates Jesus Christ.
Again—ah, the impious thoughtlessness!—for one to presume to say
about Christ's abasement: "Let us be concerned no more about his
abasement." Surely, Christ's abasement was not something which
merely happened to him—even if it was the sin of that generation to
crucify him; was surely not something that simply happened to him
and, perhaps, would not have happened to him in better times.
Christ himself wished to be abased and lowly. His abasement (that
is, his walking on earth in humble guise, though being God) is
therefore a condition of his own making, something he wished to be
knotted together, a dialectic knot which no one shall presume to
untie, and which no one will untie, for that matter, until he himself
shall untie it when returning in his glory.
His case is, therefore, not the same as that of a man who, through
the injustice inflicted on him by his times, was not allowed to be
himself or to be valued at his worth, while history revealed who he
was; for Christ himself wished to be abased—it is precisely this
condition which he desired. Therefore, let history not trouble itself to
do him justice, and let us not in impious thoughtlessness
presumptuously imagine that we as a matter of course know who he
was. For that no one knows; and he who believes it must become
contemporaneous with him in his abasement. When God chooses to
let himself be born in lowliness, when he who holds all possibilities in
his hand assumes the form of a humble servant, when he fares
about defenseless, letting people do with him what they list: he
surely knows what he does and why he does it; for it is at all events
he who has power over men, and not men who have power over
him—so let not history be so impertinent as to wish to reveal who he
was.
Lastly—ah the blasphemy!—if one should presume to say that the
persecution which Christ suffered expresses something accidental! If
a man is persecuted by his generation it does not follow that he has
the right to say that this would happen to him in every age. Insofar
there is reason in what posterity says about letting bygones be
bygones. But it is different with Christ! It is not he who by letting
himself be born, and by appearing in Palestine, is being examined by
history; but it is he who examines, his life is the examination, not
only of that generation, but of mankind. Woe unto the generation
that would presumptuously dare to say: "let bygones be bygones,
and forget what he suffered, for history has now revealed who he
was and has done justice by him."
If one assumes that history is really able to do this, then the
abasement of Christ bears an accidental relation to him; that is to
say, he thereby is made a man, an extraordinary man to whom this
happened through the wickedness of that generation—a fate which
he was far from wishing to suffer, for he would gladly (as is human)
have become a great man; whereas Christ voluntarily chose to be
the lowly one and, although it was his purpose to save the world,
wished also to give expression to what the "truth" suffered then, and
must suffer in every generation. But if this is his strongest desire,
and if he will show himself in his glory only at his return, and if he
has not returned as yet; and if no generation may be without
repentance, but on the contrary every generation must consider
itself a partner in the guilt of that generation: then woe to him who
presumes to deprive him of his lowliness, or to cause what he
suffered to be forgotten, and to clothe him in the fabled human
glory of the historic consequences of his life, which is neither here
nor there.

F. The Misfortune of Christendom


But precisely this is the misfortune, and has been the misfortune, in
Christendom that Christ is neither the one nor the other—neither the
one he was when living on earth, nor he who will return in glory, but
rather one about whom we have learned to know something in an
inadmissible way from history—that he was somebody or other of
great account. In an inadmissible and unlawful way we have learned
to know him; whereas to believe in him is the only permissible mode
of approach. Men have mutually confirmed one another in the
opinion that the sum total of information about him is available if
they but consider the result of his life and the following 1800 years,
i.e. the consequences. Gradually, as this became accepted as the
truth, all pith and strength was distilled out of Christianity; the
paradox was relaxed, one became a Christian without noticing it,
without noticing in the least the possibility of being offended by him.
One took over Christ's teachings, turned them inside out and
smoothed them down—he himself guaranteeing them, of course, the
man whose life had had such immense consequences in history! All
became plain as day—very naturally, since Christianity in this fashion
became heathendom.
There is in Christendom an incessant twaddling on Sundays about
the glorious and invaluable truths of Christianity, its mild consolation.
But it is indeed evident that Christ lived 1800 years ago; for the rock
of offense and object of faith has become a most charming fairy-
story character, a kind of divine good old man.[9] People have not
the remotest idea of what it means to be offended by him, and still
less, what it means to worship. The qualities for which Christ is
magnified are precisely those which would have most enraged one,
if one had been contemporaneous with him; whereas now one feels
altogether secure, placing implicit confidence in the result and,
relying altogether on the verdict of history that he was the great
man, concludes therefore that it is correct to do so. That is to say, it
is the correct, and the noble, and the exalted, and the true, thing—if
it is he who does it; which is to say, again, that one does not in any
deeper sense take the pains to understand what it is he does, and
that one tries even less, to the best of one's ability and with the help
of God, to be like him in acting rightly and nobly, and in an exalted
manner, and truthfully. For, not really fathoming it in any deeper
sense, one may, in the exigency of a contemporaneous situation,
judge him in exactly the opposite way. One is satisfied with admiring
and extolling and is, perhaps, as was said of a translator who
rendered his original word for word and therefore without making
sense, "too conscientious,"—one is, perhaps, also too cowardly and
too weak to wish to understand his real meaning.
Christendom has done away with Christianity, without being aware
of it. Therefore, if anything is to be done about it, the attempt must
be made to re-introduce Christianity.

II

He who invites is, then, Jesus Christ in his abasement, it is he who
spoke these words of invitation. It is not from his glory that they are
spoken. If that were the case, then Christianity were heathendom
and the name of Christ taken in vain, and for this reason it cannot
be so. But if it were the case that he who is enthroned in glory had
said these words: Come hither—as though it were so altogether easy
a matter to be clasped in the arms of glory—well, what wonder,
then, if crowds of men ran to him! But they who thus throng to him
merely go on a wild goose chase, imagining they know who Christ is.
But that no one knows; and in order to believe in him one has to
begin with his abasement.
He who invites and speaks these words, that is, he whose words
they are—whereas the same words if spoken by some one else are,
as we have seen, an historic falsification—he is the same lowly Jesus
Christ, the humble man, born of a despised maiden, whose father is
a carpenter, related to other simple folk of the very lowest class, the
lowly man who at the same time (which, to be sure, is like oil
poured on the fire) affirms himself to be God.
It is the lowly Jesus Christ who spoke these words. And no word of
Christ, not a single one, have you permission to appropriate to
yourself, you have not the least share in him, are not in any way of
his company, if you have not become his contemporary in lowliness
in such fashion that you have become aware, precisely like his
contemporaries, of his warning: "Blessed is he whosoever shall not
be offended in me.[10]" You have no right to accept Christ's words,
and then lie him away; you have no right to accept Christ's words,
and then in a fantastic manner, and with the aid of history, utterly
change the nature of Christ; for the chatter of history about him is
literally not worth a fig.
It is Jesus Christ in his lowliness who is the speaker. It is historically
true that he said these words; but so soon as one makes a change
in his historic status, it is false to say that these words were spoken
by him.
This poor and lowly man, then, with twelve poor fellows as his
disciples, all from the lowest class of society, for some time an object
of curiosity, but later on in company only with sinners, publicans,
lepers, and madmen; for one risked honor, life, and property, or at
any rate (and that we know for sure) exclusion from the synagogue,
by even letting one's self be helped by him—come hither now, all ye
that labor and are heavy laden! Ah, my friend, even if you were deaf
and blind and lame and leprous, if you, which has never been seen
or heard before, united all human miseries in your misery—and if he
wished to help you by a miracle: it is possible that (as is human) you
would fear more than all your sufferings the punishment which was
set on accepting aid from him, the punishment of being cast out
from the society of other men, of being ridiculed and mocked, day
after day, and perhaps of losing your life. It is human (and it is
characteristic of being human) were you to think as follows: "no,
thank you, in that case I prefer to remain deaf and blind and lame
and leprous, rather than accept aid under such conditions."
"Come hither, come hither, all, ye that labor and are heavy laden, ah,
come hither," lo! he invites you and opens his arms. Ah, when a
gentlemanly man clad in a silken gown says this in a pleasant,
harmonious voice so that the words pleasantly resound in the
handsome vaulted church, a man in silk who radiates honor and
respect on all who listen to him; ah, when a king in purple and
velvet says this, with the Christmas tree in the background on which
are hanging all the splendid gifts he intends to distribute, why, then
of course there is some meaning in these words! But whatever
meaning you may attach to them, so much is sure that it is not
Christianity, but the exact opposite, something as diametrically
opposed to Christianity as may well be; for remember who it is that
invites!
And now judge for yourself—for that you have a right to do;
whereas men really do not have a right to do what is so often done,
viz. to deceive themselves. That a man of such appearance, a man
whose company every one shuns who has the least bit of sense in
his head, or the least bit to lose in the world, that he—well, this is
the absurdest and maddest thing of all, one hardly knows whether
to laugh or to weep about it—that he—indeed, that is the very last
word one would expect to issue from his mouth; for if he had said:
"Come hither and help me," or: "Leave me alone," or: "Spare me,"
or proudly: "I despise you all," we could understand that perfectly—
but that such a man says: "Come hither to me!" why, I declare, that
looks inviting indeed! And still further: "All ye that labor and are
heavy laden"—as though such folk were not burdened enough with
troubles, as though they now, to cap all, should be exposed to the
consequences of associating with him. And then, finally: "I shall give
you rest." What's that?—he help them? Ah, I am sure even the most
good-natured joker who was contemporary with him would have to
say: "Surely, that was the thing he should have undertaken last of all
—to wish to help others, being in that condition himself! Why, it is
about the same as if a beggar were to inform the police that he had
been robbed. For it is a contradiction that one who has nothing, and
has had nothing, informs us that he has been robbed; and likewise,
to wish to help others when one's self needs help most." Indeed it
is, humanly speaking, the most harebrained contradiction, that he
who literally "hath not where to lay his head," that he about whom it
was spoken truly, in a human sense, "Behold the man!"—that he
should say: "Come hither unto me all ye that suffer—I shall help!"
Now examine yourself—for that you have a right to do. You have a
right to examine yourself, but you really do not have a right to let
yourself without self-examination be deluded by "the others" into the
belief, or to delude yourself into the belief, that you are a Christian—
therefore examine yourself: supposing you were contemporary with
him! True enough he—alas! he affirmed himself to be God! But many
another madman has made that claim—and his times gave it as their
opinion that he uttered blasphemy. Why, was not that precisely the
reason why a punishment was threatened for allowing one's self to
be aided by him? It was the godly care for their souls entertained by
the existing order and by public opinion, lest any one should be led
astray: it was this godly care that led them to persecute him in this
fashion. Therefore, before any one resolves to be helped by him, let
him consider that he must not only expect the antagonism of men,
but—consider it well!—even if you could bear the consequences of
that step—but consider well, that the punishment meted out by men
is supposed to be God's punishment of him, "the blasphemer"—of
him who invites!
Come hither now all ye that labor and are heavy laden!
How now? Surely this is nothing to run after—some little pause is
given, which is most fittingly used to go around about by way of
another street. And even if you should not thus sneak out in some
way—always providing you feel yourself to be contemporary with
him—or sneak into being some kind of Christian by belonging to
Christendom: yet there will be a tremendous pause given, the pause
which is the very condition that faith may arise: you are given pause
by the possibility of being offended in him.
But in order to make it entirely clear, and bring it home to our minds,
that the pause is given by him who invites, that it is he who gives us
pause and renders it by no means an easy, but a peculiarly difficult,
matter to follow his invitation, because one has no right to accept it
without accepting also him who invites—in order to make this