Recognition of Mixture Control Chart Patterns Based on Fusion Feature Reduction and Fireworks Algorithm-Optimized MSVM
https://doi.org/10.1007/s10044-018-0748-6
THEORETICAL ADVANCES
Received: 14 January 2018 / Accepted: 23 August 2018 / Published online: 27 August 2018
© Springer-Verlag London Ltd., part of Springer Nature 2018
Abstract
Unnatural control chart patterns (CCPs) can be associated with quality problems in the production process, so it is critical to detect and identify these patterns effectively from process data. Various machine learning techniques for CCPs recognition have been studied, but most of them address only the basic unnatural patterns, whereas practical production process data may combine two or more basic patterns simultaneously. This paper proposes a mixture CCPs recognition method based on fusion feature reduction (FFR) and a fireworks algorithm-optimized multiclass support vector machine (MSVM). The FFR algorithm consists of three main sub-networks: statistical and shape feature extraction, feature fusion and kernel principal component analysis (KPCA) dimensionality reduction, which together make the features more effective. In the MSVM classifier, the kernel function parameters play a very significant role in mixture CCPs recognition accuracy; therefore, the fireworks algorithm is used to select the two-dimensional parameters of the classifier. The results of the proposed algorithm are benchmarked against the popular genetic algorithm and particle swarm optimization methods. Simulation results demonstrate that the proposed method achieves higher recognition accuracy and significantly reduces the running time.
Keywords Control chart patterns recognition · Multiclass support vector machines · Fusion feature reduction · Fireworks
algorithm · Parameters optimization
* Min Zhang, [email protected]
Yi Yuan, [email protected]
Ruiqi Wang, [email protected]
Wenming Cheng, [email protected]
1 School of Mechanical Engineering, Southwest Jiaotong University, Chengdu 610031, China

1 Introduction

Control chart, as a primary tool in statistical process control, is utilized to monitor the production process in complex manufacturing and service industries. It is built on the statistical hypothesis test principle, recording and monitoring the production process through the fluctuations of the main quality characteristics. Control chart patterns (CCPs) recognition is beneficial for exploring the causes of abnormal patterns and developing a corresponding treatment program. According to the studies over the past two decades, control charts mainly include six basic patterns. Figure 1 shows the six basic CCPs [1].

The previous studies mostly focused on recognition of the basic abnormal control chart patterns [2–4]. However, practical production process data are often expressed as a mixture pattern, which can be the combination of two or more basic patterns occurring simultaneously. This leads to serious performance degradation and makes recognition difficult.

In recent decades, related research has been conducted to recognize mixture patterns [5–8]. Lu et al. [5] use independent component analysis and the support vector machine (SVM) to recognize mixture CCPs. Guh and Tannock [6] propose a back-propagation neural network method to implement mixture CCPs recognition. Xie et al. [7] describe a novel hybrid methodology to identify concurrent CCPs based on singular spectrum analysis and SVM. Yang et al. [8] integrate extreme-point symmetric mode decomposition and extreme learning machines to identify typical mixture CCPs. These methods offer several ways to improve the ability to recognize mixture CCPs, such as data preprocessing, feature extraction and the classification algorithm.
Fig. 1 Six basic control chart patterns: a Normal (NOR), b Cyclic (CYC), c Increasing Trend (IT), d Decreasing Trend (DT), e Upward Shift (US) and f Downward Shift (DS)
However, these studies do not adequately address the problem of detecting basic CCPs or mixture CCPs.

Ensuring the effectiveness of process data is problematic in practice. The goal of data preprocessing and feature extraction is to improve the ability of mixture CCPs recognition. Most studies show that feature extraction is an effective way to improve mixture CCPs recognition, and related researchers have proposed various methods to extract information such as statistical features, shape features and so on [9, 10]. Zhang and Cheng [9] integrate shape and statistical features with principal component analysis (PCA) to reduce the feature dimensions, and various other methods have been proposed to extract features, such as Fisher discriminant analysis [11], cost-optimal Bayesian control charts [12], wavelet transform decomposition [13] and kernel principal component analysis (KPCA) [14]. The KPCA method has been widely used to solve nonlinear problems [15], and it has been applied successfully to mixture CCPs recognition and process monitoring [16]. A major limitation of KPCA is that the model is built from the training data and is therefore time invariant [17], whereas most practical industrial production processes are time varying. To overcome this difficulty, a fusion feature reduction (FFR) method is proposed in this paper, which combines the original features with the statistical and shape features and then uses KPCA to reduce the feature dimension and the computational complexity.

Machine learning (ML) algorithms, which specialize in the computer simulation of human learning behavior to acquire new knowledge or skills, have been widely implemented for the efficient classification of CCPs in recent years. Artificial neural networks (ANNs) use a multilayer perceptron with back-propagation training to recognize CCPs and have been the most frequently used method [18–20]. However, ANNs still have several weaknesses, such as the need for extensive training data, poor generalization ability, over-fitting of the model and easily getting trapped in a local extremum [9]. SVM has been widely utilized in recognizing CCPs for its excellent performance in practical applications [21–23]. Unlike neural network methods, SVM has excellent generalization capability for small samples because it implements the principle of structural risk minimization, which does not depend on the number of features. For the mixture CCPs recognition problem, a multiclass support vector machine (MSVM) classifier built by combining several binary classifiers or by considering all classes together can be used. However, the kernel function and parameter values of the MSVM model play a very significant role in CCPs recognition and should be optimized for the proposed model.

Numerous studies have applied optimization techniques to MSVM kernel function parameter selection. The genetic algorithm (GA) and particle swarm optimization (PSO) methods are widely used; however, the results show a relatively high percentage of errors and suffer from the premature convergence problem [13, 24, 25]. The fireworks algorithm (FWA), which is capable of solving nonlinear and complex numerical computation with high accuracy, is implemented here to optimize the
SVM kernel function parameters. Numerous further studies on solving practical optimization problems with the FWA can be found in [26–30]. Based on the feature extraction and optimized classifier methods mentioned above, the proposed FFR_MSVM_FWA scheme should be able to achieve higher recognition accuracy and significantly reduce the running time in mixture CCPs recognition.

In this paper, an FFR_MSVM_FWA methodology is proposed by integrating the FFR feature extraction method and an MSVM whose parameters are optimized by the fireworks algorithm. The FFR stage generates the process data by the Monte Carlo method and obtains the statistical and shape features of the six basic and four mixture CCPs; it combines the original features with the statistical and shape features and then uses KPCA to reduce the feature dimension and the computational complexity. Compared with traditional feature extraction methods, the FFR does not cause any data redundancy or loss of effective features. Furthermore, the proposed MSVM_FWA classifier uses a novel optimization technique, the fireworks algorithm, to select its parameters. With a proper balance between the exploration and exploitation processes, the FWA finds a better solution for the mixture CCPs recognition problem. Numerical experiments against the popular GA and PSO methods show that the FWA method can achieve higher recognition accuracy and significantly reduce the running time for identification of mixture CCPs.

Production process data are mostly expressed as a mixture pattern in the real world. The mixture CCPs are a combination of any two or three of the above-mentioned six basic CCPs. Because the principles of the increasing/decreasing trend and of the upward/downward shift are similar, we select one of each, namely the increasing trend, the downward shift and the cyclic pattern, to generate the mixture CCPs in this paper.

Ideally, data should be collected from the real industrial production process. However, since a large amount of data is needed for CCPs recognition, the Monte Carlo simulation method is used to generate the required sets for training and testing in this study. The mathematical expression for CCPs generation is given in Eq. (1), and the different parameters are given in Table 1.

    X(t) = μ + x(t) + d(t),  t = 1, 2, …, T   (1)

where X(t) is the sample value at time t, t is the sampling time, and μ is the process mean, fixed at 0; x(t) = r × σ is the common-cause variation at time t, where r follows a normal distribution with zero mean (between −1 and 1) and the standard deviation σ is fixed at 1 in this study; d(t) is the abnormal value at time t and is set to 0 for the normal pattern. Forty data points of the observation window are used as inputs, and 200 sample series are generated for every pattern. Figures 1 and 2 show the six basic and the four mixture CCPs, respectively.

A traditional CCPs recognition system is described in Fig. 3; it mainly includes three modules: the process data module, the classification method module and the recognized patterns module. In this paper, a fusion feature reduction method is used to preprocess the data to get efficient features, and the multiclass support vector machine with fireworks algorithm optimization is then used as the classification method to realize CCPs recognition.

3.1 Statistical and shape features

Statistical and shape features are efficient for obtaining useful information for CCPs recognition. In this study, we use a combined set of eight statistical features and five shape features to reflect the CCPs. Let x = (x1, x2, …, xm) be a time-domain m-dimensional sample data vector collected
Fig. 2 Four mixture CCPs: a Mix1-cyclic + trend (CT), b Mix2-cyclic + shift (CS), c Mix3-trend + shift (TS) and d Mix4-cyclic + trend + shift (CTS)
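For concreteness, the following sketch (not the authors' code) generates the six basic and four mixture patterns according to Eq. (1), with a mixture formed by summing the disturbance terms d(t) of its basic components. All disturbance magnitudes (trend slope, shift size, cycle amplitude and period) are illustrative assumptions, since Table 1 is not reproduced in this excerpt.

```python
import numpy as np

T = 40              # length of the observation window used in the paper
MU, SIGMA = 0.0, 1.0

def disturbance(kind, t):
    """d(t) of one basic pattern; all magnitudes are assumed (Table 1 is not shown here)."""
    if kind == "NOR":
        return np.zeros_like(t)
    if kind == "CYC":
        return 1.5 * np.sin(2 * np.pi * t / 8.0)   # assumed amplitude 1.5, period 8
    if kind == "IT":
        return 0.1 * t                             # assumed trend slope +/-0.1
    if kind == "DT":
        return -0.1 * t
    if kind == "US":
        return 2.0 * (t >= 20)                     # assumed shift of +/-2 sigma at point 20
    if kind == "DS":
        return -2.0 * (t >= 20)
    raise ValueError(kind)

def generate(components, rng):
    """Eq. (1): X(t) = mu + x(t) + d(t); a mixture simply sums the d(t) of its components."""
    t = np.arange(1, T + 1, dtype=float)
    x = rng.normal(0.0, SIGMA, size=T)             # common-cause variation x(t)
    d = sum(disturbance(k, t) for k in components)
    return MU + x + d

rng = np.random.default_rng(0)
classes = {                                        # six basic + four mixture classes
    "NOR": ["NOR"], "CYC": ["CYC"], "IT": ["IT"], "DT": ["DT"], "US": ["US"], "DS": ["DS"],
    "CT": ["CYC", "IT"], "CS": ["CYC", "DS"], "TS": ["IT", "DS"], "CTS": ["CYC", "IT", "DS"],
}
data = {name: np.stack([generate(parts, rng) for _ in range(200)])   # 200 series per class
        for name, parts in classes.items()}
```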
Fig. 4 Box plots of the eight statistical features (panels include 'Mean' and 'SD') for classes 1–10
7. Negative cusum (Cm−):

    Ci− = max[0, (x̄ − xi) + Ci−1−]   (8)

   where Cm− is computed recursively by setting C0− = 0, for 1 ≤ i ≤ m.

8. Average autocorrelation (R̄xx):

    R̄xx = (1/m) Σk=0..m−1 Rxx(k)   (9)

   where Rxx(k) = Σi=1..m−k (xi − x̄)(xi+k − x̄) / Σi=1..m (xi − x̄)², for k = 0, 1, …, m − 1.

The shape features, including slope, N1, N2, APML and APLS, are defined as follows:

(1) Slope (β1): the slope of the least-square line fitted to the pattern,

    β1 = Σi=1..m (ti − t̄)(xi − x̄) / Σi=1..m (ti − t̄)²   (10)

   where t̄ = Σi=1..m ti / m and β0 = x̄ − β1 t̄.

(2) N1: the number of mean-line crossings. For i = 1, 2, …, m − 1, if (xi − x̄)(xi+1 − x̄) < 0 then n1i = 1, else n1i = 0, and N1 = Σi=1..m−1 n1i.   (11)

(3) N2: the number of least-square-line crossings. For i = 1, 2, …, m − 1, if (xi − x̃i)(xi+1 − x̃i+1) < 0 then n2i = 1, else n2i = 0, where x̃i = β1 ti + β0, and N2 = Σi=1..m−1 n2i.   (12)

(4) APML: the area between the pattern and its mean line,

    APML = Cm+ + Cm−   (13)

(5) APLS: the area between the pattern and its least-square line,

    APLS = Σi=1..m |xi − x̃i|   (14)
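As an illustration (not the authors' implementation), the sketch below computes the features defined above from a single m-point window. The positive cusum Cm+ needed for APML in Eq. (13) is obtained by the obvious counterpart of Eq. (8); the remaining statistical features of the 13-feature set, whose definitions are not reproduced in this excerpt, are omitted.

```python
import numpy as np

def ccp_features(x):
    """Features from Eqs. (8)-(14) for one m-point window x (illustrative sketch only)."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    t = np.arange(1, m + 1, dtype=float)
    xbar, tbar = x.mean(), t.mean()

    # Least-square line x~_i = beta1*t_i + beta0 (slope beta1, Eq. 10)
    beta1 = np.sum((t - tbar) * (x - xbar)) / np.sum((t - tbar) ** 2)
    beta0 = xbar - beta1 * tbar
    x_ls = beta1 * t + beta0

    # Positive and negative cusums (Eq. 8 and its positive counterpart), starting from 0
    c_pos = c_neg = 0.0
    for xi in x:
        c_pos = max(0.0, (xi - xbar) + c_pos)
        c_neg = max(0.0, (xbar - xi) + c_neg)

    # Average autocorrelation (Eq. 9)
    denom = np.sum((x - xbar) ** 2)
    r = [np.sum((x[: m - k] - xbar) * (x[k:] - xbar)) / denom for k in range(m)]
    r_bar = float(np.mean(r))

    n1 = int(np.sum((x[:-1] - xbar) * (x[1:] - xbar) < 0))           # mean-line crossings (Eq. 11)
    n2 = int(np.sum((x[:-1] - x_ls[:-1]) * (x[1:] - x_ls[1:]) < 0))  # LS-line crossings (Eq. 12)
    apml = c_pos + c_neg                                             # Eq. 13
    apls = float(np.sum(np.abs(x - x_ls)))                           # Eq. 14
    return {"slope": beta1, "N1": n1, "N2": n2, "APML": apml, "APLS": apls,
            "Cm_minus": c_neg, "R_bar": r_bar}
```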
Fig. 5 Box plots of the five shape features for different classes
Fig. 7 Movement of firework in search space. a Initialization. b Spark and amplitude evaluation. c Specific spark evaluation. d Relocation using
distance evaluation. e Fireworks identified
Step 4: Specific spark evaluation. To improve the diversity of the fireworks, a Gaussian distribution is applied to randomly selected fireworks in the specific spark evaluation:

    g = Gaussian(1, 1)   (25)

    x̂jk = xjk · g   (26)

This process indicates the optimal location of a firework by identifying the strength of its sparks; two of the five fireworks undergoing specific spark evaluation are shown in Fig. 7c.

Step 5: Identify the global location. The Euclidean distance is used to find the best firework. The location with the smallest fitness value is always selected, and the remaining n − 1 sparks are determined by the Euclidean distance between each spark and the others, as shown in Fig. 7d. The distance between fireworks is calculated as follows:

    W(xi) = Σj∈K d(xi, xj) = Σj∈K ||xi − xj||   (27)

where K is the set of current locations of the individual sparks. The new positions of the fireworks after the first iteration are represented in Fig. 7e. The corresponding selection probability of each spark is defined as

    Q(xi) = W(xi) / Σj∈K W(xj)   (28)

For nonlinear problems, the FWA relies on two important mechanisms: (1) the spark evaluations in the first stage create the necessary randomness through specific spark evaluation, and (2) the Gaussian distribution helps avoid premature convergence.

5 The proposed CCPs recognition method

In this method, the FFR algorithm is used to obtain effective features and is composed of three main sub-networks: the statistical and shape features sub-network, the mixed (fused) features sub-network and the KPCA sub-network. The FWA is applied to select the optimal two-dimensional parameters of the MSVM classifier with the shortest running time. The overall structure is depicted in Fig. 8.
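To make the parameter-selection step concrete, the following heavily simplified sketch uses the FWA mechanics outlined above (explosion sparks, Gaussian sparks as in Eqs. (25)–(26), and distance-based selection as in Eqs. (27)–(28)) to search the RBF-SVM parameters (C, γ). It is not the authors' implementation: the spark-number and amplitude formulas, the bounds handling and the small population/iteration counts are simplified assumptions (the paper's settings appear in Table 2), and scikit-learn is assumed for the classifier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

BOUNDS = np.array([[0.1, 100.0],    # C range, as in Sect. 6
                   [0.01, 10.0]])   # gamma range

def fitness(pos, X, y):
    """Cross-validated recognition accuracy (to be maximized) of an RBF-kernel MSVM."""
    clf = SVC(C=pos[0], gamma=pos[1], kernel="rbf", decision_function_shape="ovo")
    return cross_val_score(clf, X, y, cv=3).mean()

def clip(pos):
    return np.clip(pos, BOUNDS[:, 0], BOUNDS[:, 1])

def fwa_search(X, y, n_fireworks=5, n_sparks=8, n_gauss=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    fireworks = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(n_fireworks, 2))
    for _ in range(iters):
        sparks = [fireworks]
        # Explosion sparks: random displacements around each firework
        # (a fixed amplitude is used here instead of the full amplitude formula)
        amp = 0.1 * (BOUNDS[:, 1] - BOUNDS[:, 0])
        for fw in fireworks:
            sparks.append(clip(fw + rng.uniform(-amp, amp, size=(n_sparks, 2))))
        # Gaussian (specific) sparks on randomly chosen fireworks, cf. Eqs. (25)-(26)
        for fw in fireworks[rng.choice(len(fireworks), n_gauss, replace=False)]:
            sparks.append(clip(fw * rng.normal(1.0, 1.0, size=(1, 2))))
        pop = np.vstack(sparks)
        fit = np.array([fitness(p, X, y) for p in pop])
        # Selection: always keep the best location, then favour sparks far from the
        # others, cf. the distance W(x_i) in Eq. (27) and probability Q(x_i) in Eq. (28)
        best = pop[fit.argmax()]
        dist = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=2).sum(axis=1)
        rest = pop[rng.choice(len(pop), n_fireworks - 1, p=dist / dist.sum(), replace=False)]
        fireworks = np.vstack([best, rest])
    return best, fitness(best, X, y)
```

In this sketch, fwa_search(X_train, y_train) returns an estimated (C, γ) pair together with its cross-validation accuracy; the paper instead evaluates the fitness as the recognition accuracy of the training scheme described in Sect. 6.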
The purpose of this model is to recognize mixture CCPs effectively and automatically. It mainly consists of three modules in series: fusion feature reduction, FWA parameter optimization and fault classification. Figure 9 shows the detailed schematic diagram.

In the feature extraction module, the FFR method is used to obtain efficient feature data: it combines the original features with the statistical and shape features and then uses KPCA to reduce the feature dimension and the computational complexity. Every control chart pattern has different attributes, and the statistical and shape features can be utilized to distinguish the CCPs, but it is difficult for them to represent the complete information of the control chart. In order to retain the control chart information more completely, the original process data and the statistical and shape features are fused.
[Schematic: Fusion Features → KPCA → FWA optimization (calculate fitness function, spark and amplitude evaluation, update the best recognition accuracy, update iterations until stop) → optimal parameters → faults classification (classify SVM testing set, classify abnormal multiple fault patterns, diagnose assignable causes)]
However, the feature dimension increases sharply after the feature fusion, resulting in redundant features and increased computational complexity, so the KPCA method is used to reduce the feature dimension.

In the FWA parameter optimization module, the MSVM classifier is applied to recognize the basic and mixture CCPs. However, the parameters of the MSVM, namely the optimal kernel function parameter value and the best penalty parameter, should be optimized to obtain satisfactory recognition performance. The fireworks algorithm is applied to select the two-dimensional optimum values of the MSVM in this module.

In the fault classification module, the trained MSVM model with the optimal parameters is utilized to classify the testing samples and obtain the classification results.

6 Performance analyses

We evaluate the performance of the proposed FFR_MSVM_FWA method. The samples have been generated by the Monte Carlo simulation described above and consist of six basic and four mixture patterns with 200 samples of each class. For this study, we choose about 70% of the data as the training set and the rest as the testing set. The testing recognition accuracy rate is utilized to evaluate the performance of CCPs recognition, and several performance measures are used to verify the effectiveness of the proposed FFR_MSVM_FWA method. The experiments in this section were run on a Dell computer with an Intel Core i3-4150 and 4 GB RAM.

The parameters of the MSVM play an important role in the performance of CCPs recognition. The RBF kernel function is chosen, and the parameters C and γ vary in the fixed ranges [0.1, 100] and [0.01, 10], respectively. The related parameters of the FWA optimization method are shown in Table 2.

Table 2 Parameters in FWA

  Parameter name                   Value
  Maximum number of iterations     800
  Fireworks population size        20
  Radius of explosion              30
  Max number of exploded sparks    40
  Min number of exploded sparks    1
  Gaussian sparks                  5
  The number of exploded sparks    50
  Problem dimension                2

6.1 Performance evaluation for the proposed FFR method

Feature extraction plays a very important part in mixture pattern recognition, and its quality is key to the speed and effectiveness of the MSVM classifier. To investigate the effectiveness of the proposed FFR method, we compare it with two other methods: the first simply uses the process data (PD) as features, i.e., the 40-dimensional time-domain sample data; the second uses the 13 statistical and shape features (SSF). The MSVM with the fireworks algorithm is applied to recognize the mixture CCPs, and the parameters of the FWA optimization algorithm are listed in Table 2.

The recognition accuracy rate (RAA) and the run time are utilized as the criteria for evaluating the performance of the three feature extraction methods. The performance relies heavily on the features: improper selection of the feature extraction method results in a low recognition accuracy rate, whereas a reliable feature extraction algorithm allows the globally optimal solution to be reached in any run. The results of the FWA-optimized MSVM using the three feature extraction methods are shown in Table 3; they are obtained as the average of ten independent runs in this study.

We present the optimal parameters, the related RAA (min, mean, max), the standard deviation and the run time. As can be seen from Table 3, the optimal parameters (C, γ) vary with the feature extraction method. The proposed MSVM with the FFR model has better recognition performance for the mixture CCPs, with a recognition accuracy reaching 99.67%. Moreover, the standard deviation is significantly reduced using the FFR feature extraction method, which is therefore more stable than the other methods. At the same time, the FFR method finishes in an acceptable run time, although SSF is faster because it uses fewer features. Comparing the three feature extraction methods, it is obvious that the proposed FFR method significantly improves the recognition rate of mixture CCPs.

Table 3 Recognition accuracy rate of the three feature extraction methods

  Feature extraction method               Best (C, γ)      Accuracy (%)              Standard     Run time (s)
                                                           Min      Mean     Max     deviation
  Process data (PD)                       (9.04, 8.74)     95.83    96.35    97.00   0.41         101.17
  Statistical and shape features (SSF)    (29.95, 25.05)   92.83    94.57    96.33   1.15         80.66
  Fusion feature reduction (FFR)          (14.07, 12.57)   99.00    99.23    99.67   0.25         90.38
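For reference, the following is a minimal sketch of the FFR pipeline evaluated above, assuming scikit-learn building blocks (the paper does not state its implementation): the raw 40-point window is fused with the statistical and shape features, KPCA reduces the fused vector, and an RBF-kernel MSVM is trained on a 70/30 split. The helper names, the KPCA kernel, the number of retained components and the (C, γ) pair are illustrative; (C, γ) is taken from the FFR row of Table 3 purely as an example.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse(windows, feature_fn):
    """Fusion step: [raw 40-dim window | statistical and shape features]."""
    feats = np.array([list(feature_fn(w).values()) for w in windows])
    return np.hstack([windows, feats])

def build_ffr_msvm(n_components=10, C=14.07, gamma=12.57):
    # n_components and the KPCA kernel are assumptions; (C, gamma) from Table 3 (FFR row).
    return make_pipeline(
        StandardScaler(),
        KernelPCA(n_components=n_components, kernel="rbf"),
        SVC(C=C, gamma=gamma, kernel="rbf", decision_function_shape="ovo"),
    )

# Example usage with the generated windows and the feature sketch shown earlier:
# X = fuse(all_windows, ccp_features)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)
# model = build_ffr_msvm().fit(X_tr, y_tr)
# print("testing recognition accuracy:", model.score(X_te, y_te))
```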
Table 4 Comparison of the performance of three parameter optimization methods

  Method      Feature extraction method   Recognition accuracy rate (%)   Run time (s)
  MSVM_GA     PD                          93.83                           852.70
              SSF                         94.00                           246.00
              FFR                         97.50                           749.80
  MSVM_PSO    PD                          97.67                           770.93
              SSF                         93.67                           399.60
              FFR                         98.67                           1068.65
  MSVM_FWA    PD                          96.50                           93.50
              SSF                         96.33                           86.05
              FFR                         99.67                           89.82

Table 5 Optimal values of classifier parameters for different training samples

  Training set (%)   Training set no.   Testing set no.   Prediction accuracy (%)
                     (each pattern)     (each pattern)    PD      SSF     FFR
  20%                40                 160               95.31   92.75   99.00
  30%                60                 140               95.50   92.00   99.29
  40%                80                 120               95.17   93.50   98.83
  50%                100                100               96.20   94.30   99.20
  60%                120                80                97.13   95.00   99.38
  70%                140                60                97.17   96.33   99.50
  80%                160                40                95.25   96.75   99.75
  90%                180                20                96.50   92.00   99.00
of parameters for the MSVM classifier so that the recognition rate and run time of CCPs are much improved. Third, our study also shows that the proposed method can deliver satisfying prediction results even with relatively small-sized training samples.

Acknowledgements This work is financially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 51675450 and the Fundamental Research Funds for the Central Universities under Grant No. 2682016CX031.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.

References

1. Montgomery DC (2001) Introduction to statistical quality control. Wiley, New York
2. Ranaee V, Ebrahimzadeh A, Ghaderi R (2010) Application of the PSO-SVM model for recognition of control chart patterns. ISA Trans 49(4):577–586
3. Shao YE, Chiu CC (2016) Applying emerging soft computing approaches to control chart pattern recognition for an SPC–EPC process. Neurocomputing 201:19–28
4. Gauri SK, Chakraborty S (2006) Feature-based recognition of control chart patterns. Comput Ind Eng 51(4):726–742
5. Lu CJ, Shao YE, Li PH (2011) Mixture control chart patterns recognition using independent component analysis and support vector machine. Neurocomputing 74(11):1904–1908
6. Guh RS, Tannock JDT (1999) Recognition of control chart concurrent patterns using a neural network approach. Int J Prod Res 37(8):1743–1765
7. Xie L, Gu N, Li D et al (2013) Concurrent control chart patterns recognition with singular spectrum analysis and support vector machine. Comput Ind Eng 64(1):280–289
8. Yang WA, Zhou W, Liao W et al (2015) Identification and quantification of concurrent control chart patterns using extreme-point symmetric mode decomposition and extreme learning machines. Neurocomputing 147(1):260–270
9. Zhang M, Cheng W (2015) Recognition of mixture control chart pattern using multiclass support vector machine and genetic algorithm based on statistical and shape features. Math Probl Eng 5:1–10
10. Gauri SK, Chakraborty S (2009) Recognition of control chart patterns using improved selection of features. Comput Ind Eng 56(4):1577–1588
11. He QP, Qin SJ (2005) A new fault diagnosis method using fault directions in Fisher discriminant analysis. AIChE J 51(2):555–571
12. Tian Y, Du W, Makis V (2017) Improved cost-optimal Bayesian control chart based auto-correlated chemical process monitoring. Chem Eng Res Des 123:63–75
13. Du S, Huang D, Lv J (2013) Recognition of concurrent control chart patterns using wavelet transform decomposition and multiclass support vector machines. Comput Ind Eng 66(4):683–695
14. Huang J, Yan X (2016) Related and independent variable fault detection based on KPCA and SVDD. J Process Control 39:88–99
15. Kallas M, Mourot G, Maquin D et al (2014) Diagnosis of nonlinear systems using kernel principal component analysis. In: European workshop on advanced control and diagnosis
16. Fazai R, Taouali O, Harkat MF et al (2016) A new fault detection method for nonlinear process monitoring. Int J Adv Manuf Technol 87:3425–3436
17. Elaissi I, Jaffel I, Taouali O et al (2013) Online prediction model based on the SVD-KPCA method. ISA Trans 52(1):96–104
18. Gutierrez HDLT, Pham DT (2016) Estimation and generation of training patterns for control chart pattern recognition. Comput Ind Eng 95:72–82
19. Guh R, Shiue Y (2005) On-line identification of control chart patterns using self-organizing approaches. Int J Prod Res 43(6):1225–1254
20. Wang CH, Kuo W (2007) Identification of control chart patterns using wavelet filtering and robust fuzzy clustering. J Intell Manuf 18(3):343–350
21. Khormali A, Addeh J (2016) A novel approach for recognition of control chart patterns: type-2 fuzzy clustering optimized support vector machine. ISA Trans 63:256–264
22. Ranaee V, Ebrahimzadeh A (2011) Control chart pattern recognition using a novel hybrid intelligent method. Appl Soft Comput 11(2):2676–2686
23. Zhang YD, Wu L (2012) Classification of fruits using computer vision and a multiclass support vector machine. Sensors 12(9):12489–12505
24. Alba E, Garcia-Nieto J, Jourdan L et al (2007) Gene selection in cancer classification using PSO/SVM and GA/SVM hybrid algorithms. In: IEEE congress on evolutionary computation, CEC 2007. IEEE, pp 284–290
25. Wei JX, Zhang RSH, Yu ZX et al (2017) A BPSO-SVM algorithm based on memory renewal and enhanced mutation mechanisms for feature selection. Appl Soft Comput 58:176–192
26. Tan Y, Zhu Y (2010) Fireworks algorithm for optimization. In: Tan Y, Shi Y, Tan KC (eds) Advances in swarm intelligence. ICSI 2010. Lecture notes in computer science, vol 6145. Springer, Berlin
27. Sangeetha K, Babu TS, Rajasekar N (2016) Fireworks algorithm-based maximum power point tracking for uniform irradiation as well as under partial shading condition. In: Artificial intelligence and evolutionary computations in engineering systems. Springer, India
28. Reddy KS, Panwar LK, Kumar R et al (2016) Binary fireworks algorithm for profit based unit commitment (PBUC) problem. Int J Electr Power Energy Syst 83:270–282
29. Zhang Q, Liu H, Dai C (2016) Fireworks explosion optimization algorithm for parameter identification of PV model. In: IEEE international power electronics and motion control conference. IEEE, pp 1587–1591
30. Goswami D, Chakraborty S (2015) Parametric optimization of ultrasonic machining process using gravitational search and fireworks algorithms. Ain Shams Eng J 6(1):315–331
31. Babu TS, Ram JP, Sangeetha K et al (2016) Parameter extraction of two diode solar PV model using fireworks algorithm. Sol Energy 140:265–276