
Fuzzy

Mathematics
Edited by
Etienne E. Kerre and John Mordeson
Printed Edition of the Special Issue Published in Mathematics

www.mdpi.com/journal/mathematics
Fuzzy Mathematics

Special Issue Editors


Etienne E. Kerre
John Mordeson

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade


Special Issue Editors
Etienne E. Kerre
Ghent University
Belgium

John Mordeson
Creighton University
USA

Editorial Office
MDPI
St. Alban-Anlage 66
Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal
Mathematics (ISSN 2227-7390) from 2017 to 2018 (available at: https://www.mdpi.com/journal/mathematics/special_issues/Fuzzy_Mathematics).

For citation purposes, cite each article independently as indicated on the article page online and as
indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. Journal Name Year, Article Number,
Page Range.

ISBN 978-3-03897-322-5 (Pbk)


ISBN 978-3-03897-323-2 (PDF)

Articles in this volume are Open Access and distributed under the Creative Commons Attribution
(CC BY) license, which allows users to download, copy and build upon published articles even for
commercial purposes, as long as the author and publisher are properly credited, which ensures
maximum dissemination and a wider impact of our publications. The book taken as a whole is

© 2018 MDPI, Basel, Switzerland, distributed under the terms and conditions of the Creative
Commons license CC BY-NC-ND (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Contents

About the Special Issue Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

Preface to ”Fuzzy Mathematics” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Erich Peter Klement and Radko Mesiar


L-Fuzzy Sets and Isomorphic Lattices: Are All the “New” Results Really New? †
Reprinted from: Mathematics 2018, 6, 146, doi: 10.3390/math6090146 . . . . . . . . . . . . . . . . 1

Krassimir Atanassov
On the Most Extended Modal Operator of First Type over Interval-Valued Intuitionistic Fuzzy
Sets
Reprinted from: Mathematics 2018, 6, 123, doi: 10.3390/math6070123 . . . . . . . . . . . . . . . . 25

Young Bae Jun, Seok-Zun Song and Seon Jeong Kim


N-Hyper Sets
Reprinted from: Mathematics 2018, 6, 87, doi: 10.3390/math6060087 . . . . . . . . . . . . . . . . . 35

Muhammad Akram and Gulfam Shahzadi


Hypergraphs in m-Polar Fuzzy Environment
Reprinted from: Mathematics 2018, 6, 28, doi: 10.3390/math6020028 . . . . . . . . . . . . . . . . . 47

Noor Rehman, Choonkil Park, Syed Inayat Ali Shah and Abbas Ali
On Generalized Roughness in LA-Semigroups
Reprinted from: Mathematics 2018, 6, 112, doi: 10.3390/math6070112 . . . . . . . . . . . . . . . . 65

Hsien-Chung Wu
Fuzzy Semi-Metric Spaces
Reprinted from: Mathematics 2018, 6, 106, doi: 10.3390/math6070106 . . . . . . . . . . . . . . . . 73

E. Mohammadzadeh and R. A. Borzooei
Nilpotent Fuzzy Subgroups
Reprinted from: Mathematics 2018, 6, 27, doi: 10.3390/math6020027 . . . . . . . . . . . . . . . . . 92

Florentin Smarandache, Mehmet Şahin and Abdullah Kargın


Neutrosophic Triplet G-Module
Reprinted from: Mathematics 2018, 6, 53, doi: 10.3390/math6040053 . . . . . . . . . . . . . . . . . 104

Pannawit Khamrot and Manoj Siripitukdet


Some Types of Subsemigroups Characterized in Terms of Inequalities of Generalized Bipolar
Fuzzy Subsemigroups
Reprinted from: Mathematics 2017, 5, 71, doi: 10.3390/math5040071 . . . . . . . . . . . . . . . . . 113

Seok-Zun Song, Seon Jeong Kim and Young Bae Jun


Hyperfuzzy Ideals in BCK/BCI-Algebras †
Reprinted from: Mathematics 2017, 5, 81, doi: 10.3390/math5040081 . . . . . . . . . . . . . . . . . 127

Young Bae Jun, Seok-Zun Song and Seon Jeong Kim


Length-Fuzzy Subalgebras in BCK/BCI-Algebras
Reprinted from: Mathematics 2018, 6, 11, doi: 10.3390/math6010011 . . . . . . . . . . . . . . . . . 141

Young Bae Jun, Florentin Smarandache, Seok-Zun Song and Hashem Bordbar
Neutrosophic Permeable Values and Energetic Subsets with Applications in
BCK/BCI-Algebras
Reprinted from: Mathematics 2018, 6, 74, doi: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

Harish Garg and Jaspreet Kaur


A Novel (R, S)-Norm Entropy Measure of Intuitionistic Fuzzy Sets and Its Applications in
Multi-Attribute Decision-Making
Reprinted from: Mathematics 2018, 6, 92, doi: 10.3390/math6060092 . . . . . . . . . . . . . . . . . 169

Dheeraj Kumar Joshi, Ismat Beg and Sanjay Kumar


Hesitant Probabilistic Fuzzy Linguistic Sets with Applications in Multi-Criteria Group Decision
Making Problems
Reprinted from: Mathematics 2018, 6, 47, doi: 10.3390/math6040047 . . . . . . . . . . . . . . . . . 188

Irina Georgescu
The Effect of Prudence on the Optimal Allocation in Possibilistic and Mixed Models
Reprinted from: Mathematics 2018, 6, 133, doi: 10.3390/math6080133 . . . . . . . . . . . . . . . . 208

Rudolf Seising
The Emergence of Fuzzy Sets in the Decade of the Perceptron—Lotfi A. Zadeh’s and Frank
Rosenblatt’s Research Work on Pattern Classification
Reprinted from: Mathematics 2018, 6, 110, doi: 10.3390/math6070110 . . . . . . . . . . . . . . . . 227

Mohamadtaghi Rahimi, Pranesh Kumar and Gholamhossein Yari


Credibility Measure for Intuitionistic Fuzzy Variables
Reprinted from: Mathematics 2018, 6, 50, doi: 10.3390/math6040050 . . . . . . . . . . . . . . . . . 247

Musavarah Sarwar and Muhammad Akram


Certain Algorithms for Modeling Uncertain Data Using Fuzzy Tensor Product Bézier Surfaces
Reprinted from: Mathematics 2018, 6, 42, doi: 10.3390/math6030042 . . . . . . . . . . . . . . . . . 254

Lubna Inearat and Naji Qatanani


Numerical Methods for Solving Fuzzy Linear Systems
Reprinted from: Mathematics 2018, 6, 19, doi: 10.3390/math6020019 . . . . . . . . . . . . . . . . . 266

About the Special Issue Editors
Etienne E. Kerre was born in Zele, Belgium on 8 May 1945. He obtained his M. Sc. degree in
Mathematics in 1967 and his Ph.D. in Mathematics in 1970 from Ghent University. He was a
lector from 1984 and a full professor at Ghent University from 1991 until his retirement in 2010.
He is a referee for more than 80 international scientific journals, and also a member
of the editorial boards of many international journals and conferences on fuzzy set theory. He has
been an honorary chairman at various international conferences. In 1976, he founded the Fuzziness
and Uncertainty Modeling Research Unit (FUM). Since then, his research has been focused on the
modeling of fuzziness and uncertainty, and has resulted in a great number of contributions in
fuzzy set theory and its various generalizations. The theories of fuzzy relational calculus and of
fuzzy mathematical structures, in particular, owe a great deal to him. Over the years, he has also been
the promotor of 30 Ph.D. theses on fuzzy set theory. His current research interests include fuzzy and
intuitionistic fuzzy relations, fuzzy topology, and fuzzy image processing. He has authored or
co-authored 25 books and more than 500 papers appearing in international refereed journals and
proceedings.

John Mordeson is Professor Emeritus of Mathematics at Creighton University. He received his B.S.,
M.S., and Ph.D. from Iowa State University. He is a Member of Phi Kappa Phi. He is President of the
Society for Mathematics of Uncertainty. He has published 15 books and over two hundred journal
articles. He is on the editorial boards of numerous journals. He has served as an external examiner
of Ph.D. candidates from India, South Africa, Bulgaria, and Pakistan. He has refereed for numerous
journals and granting agencies. He is particularly interested in applying mathematics of uncertainty
to combat the problems of human trafficking, modern slavery, and illegal immigration.

Preface to ”Fuzzy Mathematics”
This Special Issue on fuzzy mathematics is dedicated to Lotfi A. Zadeh. In his 1965 seminal paper
entitled “Fuzzy Sets”, Zadeh extended Cantor’s binary set theory to a gradual model by introducing
degrees of belonging and relationship. Very soon, the extension was applied to almost all domains of
contemporary mathematics, giving birth to new disciplines such as fuzzy topology, fuzzy arithmetic,
fuzzy algebraic structures, fuzzy differential calculus, fuzzy geometry, fuzzy relational calculus,
fuzzy databases, and fuzzy decision making. In the beginning, mostly direct fuzzifications of the
classical mathematical domains were launched, obtained by simply replacing Cantor's set-theoretic
operations with Zadeh's max-min extensions. The 1980s were characterized by an extension of the possible
fuzzifications due to the discovery of triangular norms and conorms. Starting in the 1990s, more
profound analysis was performed by studying the axiomatization of fuzzy structures and searching
for links between the different models to represent imprecise and uncertain information. It was our
aim to have this Special Issue comprise a healthy mix of excellent state-of-the-art papers as well as
brand-new material that can serve as a starting point for newcomers in the field to further develop
this wonderful domain of fuzzy mathematics.
This Special Issue starts with a cornerstone paper that should be read by all working in the field
of fuzzy mathematics. Using lattice isomorphisms, it shows that the results of many of the variations
and extensions of fuzzy set theory can be obtained immediately from the results of set theory itself.
The paper is extremely valuable to reviewers in the field. This paper is followed by one which gives
the definition of the most extended modal operator of the first type over interval-valued intuitionistic fuzzy sets, and
presents some of its basic properties. A new function called a negative-valued function is presented
and applied to various structures. Results concerning N-hyper sets and hypergraphs in an m-polar fuzzy
setting are also presented.
In the next paper, it is shown that the lower approximation of a subset of an LA-semigroup
need not be an LA-subsemigroup/ideal of an LA-semigroup under a set-valued homomorphism. A
generalization of a bipolar fuzzy subsemigroup is given, and any regular semigroup is characterized
in terms of generalized BF-semigroups. The T1-spaces induced by a fuzzy semi-metric space
endowed with the special kind of triangle inequality are investigated. The limits in fuzzy semi-metric
spaces are also studied. The consistency of limit concepts in the induced topologies is shown.
Nilpotent fuzzy subgroups and neutrosophic triplet G-modules are also studied. Next we have a
series of papers on BCK/BCI algebras. The notions of hyper fuzzy sets in BCK/BCI-algebras are
introduced, and characterizations of hyper fuzzy ideals are established. The length-fuzzy set is
introduced and applied to BCK/BCI algebras. Neutrosophic permeable values and energetic subsets
with applications to BCK/BCI algebras are presented.
The following three papers concern decision-making issues. An information measure for
measuring the degree of fuzziness in an intuitionistic fuzzy set is introduced. An illustrative example
related to a linguistic variable is given to illustrate it. The notion of attaching occurrence
probabilities to hesitant fuzzy linguistic elements is introduced and studied.

The Special Issue ends with some applications of fuzzy mathematics. Several portfolio choice
models are studied. A possibilistic model in which the return of the risky asset is a fuzzy number, and four
models in which the background risk appears in addition to the investment risk are presented. The
interwoven historical developments of the two mathematical theories by Zadeh and Rosenblatt which
opened up into pattern classification and fuzzy clustering are presented. Credibility for intuitionistic
fuzzy sets is presented. Expected values, entropy, and general formulae for the central moments are
introduced and studied. Algorithms for modeling uncertain data using fuzzy tensor product surfaces
are presented. In particular, fuzzification and defuzzification processes are applied to obtain crisp
Bézier curves and surfaces from fuzzy data. Three numerical methods for solving fuzzy linear systems are
presented, namely Jacobi, Gauss–Seidel, and successive over-relaxation.

Etienne E. Kerre, John Mordeson


Special Issue Editors

mathematics
Article
L-Fuzzy Sets and Isomorphic Lattices: Are All the
“New” Results Really New? †
Erich Peter Klement 1,* and Radko Mesiar 2
1 Department of Knowledge-Based Mathematical Systems, Johannes Kepler University, 4040 Linz, Austria
2 Department of Mathematics and Descriptive Geometry, Faculty of Civil Engineering,
Slovak University of Technology, 810 05 Bratislava, Slovakia; [email protected]
* Correspondence: [email protected]; Tel.: +43-650-2468290
† These authors contributed equally to this work.

Received: 23 June 2018; Accepted: 20 August 2018; Published: 23 August 2018

Abstract: We review several generalizations of the concept of fuzzy sets with two- or three-dimensional
lattices of truth values and study their relationship. It turns out that, in the two-dimensional case, several
of the lattices of truth values considered here are pairwise isomorphic, and so are the corresponding
families of fuzzy sets. Therefore, each result for one of these types of fuzzy sets can be directly rewritten
for each (isomorphic) type of fuzzy set. Finally we also discuss some questionable notations, in particular,
those of “intuitionistic” and “Pythagorean” fuzzy sets.

Keywords: fuzzy set; interval-valued fuzzy set; “intuitionistic” fuzzy set; “Pythagorean” fuzzy set;
isomorphic lattices; truth values

1. Introduction
In the paper “Fuzzy sets” [1] L. A. Zadeh suggested the unit interval [0, 1] (which we shall denote
by I throughout the paper) as set of truth values for fuzzy sets, in a generalization of Boolean logic and
Cantorian set theory where the two-element Boolean algebra {0, 1} is used.
Soon after, a further generalization was proposed by J. Goguen [2]: to replace the unit interval I
by an abstract set L (in most cases a lattice), noticing that the key feature of the unit interval in this
context is its lattice structure. In yet another generalization L. A. Zadeh [3,4] introduced fuzzy sets of
type 2 where the value of the membership function is a fuzzy subset of I.
Since then, many more variants and generalizations of the original concept in [1] were presented,
most of them being either L-fuzzy sets, type-n fuzzy sets or both. In a recent and extensive
“historical account”, H. Bustince et al. ([5], Table 1) list a total of 21 variants of fuzzy sets and study
their relationships.
In this paper, we will deal with the concepts of (generalizations of) fuzzy sets where the set of
truth values is either one-dimensional (the unit interval I), two-dimensional (e.g., a suitable subset of
the unit square I × I) or three-dimensional (a subset of the unit cube I³).
The one-dimensional case (where the set of truth values equals I) is exactly the case of fuzzy sets
in the sense of [1].
Concerning the two-dimensional case, we mainly consider the following subsets of the unit square
I × I:

L∗ = {(x1, x2) ∈ I × I | x1 + x2 ≤ 1},
L2(I) = {(x1, x2) ∈ I × I | 0 ≤ x1 ≤ x2 ≤ 1},
P∗ = {(x1, x2) ∈ I × I | x1² + x2² ≤ 1},

Mathematics 2018, 6, 146; doi:10.3390/math6090146. www.mdpi.com/journal/mathematics



and the related set of all closed subintervals of the unit interval I:

I(I) = {[x1, x2] ⊆ I | 0 ≤ x1 ≤ x2 ≤ 1}.
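Read as conditions on a pair of reals, these four carriers are easy to state in code. The following plain-Python sketch (function names are ours, purely illustrative) tests membership of a pair in each carrier:

```python
# Membership predicates for the two-dimensional carriers of truth values
# defined above (function names are illustrative, not from the paper).

def in_L_star(x1, x2):
    # L* : pairs of the unit square with x1 + x2 <= 1
    return 0.0 <= x1 <= 1.0 and 0.0 <= x2 <= 1.0 and x1 + x2 <= 1.0

def in_L2_I(x1, x2):
    # L2(I) : ordered pairs 0 <= x1 <= x2 <= 1
    return 0.0 <= x1 <= x2 <= 1.0

def in_P_star(x1, x2):
    # P* : pairs of the unit square with x1^2 + x2^2 <= 1
    return 0.0 <= x1 <= 1.0 and 0.0 <= x2 <= 1.0 and x1 * x1 + x2 * x2 <= 1.0

def in_I_I(x1, x2):
    # I(I) : closed subintervals [x1, x2] of the unit interval
    return 0.0 <= x1 <= x2 <= 1.0
```

Note that L∗ is contained in P∗: for x1, x2 ∈ I we have x1² ≤ x1 and x2² ≤ x2, so x1 + x2 ≤ 1 implies x1² + x2² ≤ 1. For instance, (0.8, 0.4) lies in P∗ but not in L∗.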

Equipped with suitable orders, these lattices of truth values give rise to several generalizations
of fuzzy sets known from the literature: L∗ -fuzzy sets, “intuitionistic” fuzzy sets [6,7], grey sets [8,9],
vague sets [10], 2-valued sets [11], interval-valued fuzzy sets [4,12–14], and “Pythagorean” fuzzy
sets [15].
In the three-dimensional case, the following subsets of the unit cube I³ will play a major role:

D∗ = {(x1, x2, x3) ∈ I³ | x1 + x2 + x3 ≤ 1},
L3(I) = {(x1, x2, x3) ∈ I³ | 0 ≤ x1 ≤ x2 ≤ x3 ≤ 1}.

Equipped with suitable orders, these lattices of truth values lead to the concepts of 3-valued
sets [11] and picture fuzzy sets [16].
While it is not surprising that lattices of truth values of higher dimension correspond to more
complex types of fuzzy sets, it is remarkable that in the two-dimensional case the lattices with the
carriers L∗ , L2 (I), P∗ , and I(I) are mutually isomorphic, i.e., the families of fuzzy sets with these truth
values have the same lattice-based properties. This implies that mathematical results for one type
of fuzzy sets can be carried over in a straightforward way to the other (isomorphic) types. This also
suggests that, in a mathematical sense, often only one of these lattices of truth values (and only one of
the corresponding types of fuzzy sets) is really needed.
Note that if some algebraic structures are isomorphic, then it is meaningful to consider all of them
only if they have different meanings and interpretations.
This is, e.g., the case for the arithmetic mean (on [−∞, ∞[) and for the geometric mean (on [0, ∞[).
On the other hand, concerning results dealing with such isomorphic structures, it is enough to prove
them once and then to transfer them to the other isomorphic structures simply using the appropriate
isomorphisms. For example, in the case of the arithmetic and geometric means mentioned here, the
additivity of the arithmetic mean is equivalent to the multiplicativity of the geometric mean.
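The transfer between the two means can be made concrete: conjugating the arithmetic mean by the isomorphism exp (with inverse log) yields exactly the geometric mean. A minimal Python sketch of this "prove once, transfer by isomorphism" principle (our own illustration):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def transported_mean(ys):
    # exp o (arithmetic mean) o (log, ..., log): the arithmetic mean
    # carried over to (0, inf) along the isomorphism phi = exp.
    return math.exp(arithmetic_mean([math.log(y) for y in ys]))

def geometric_mean(ys):
    return math.prod(ys) ** (1.0 / len(ys))

# transported_mean and geometric_mean agree on any positive inputs,
# e.g. on [1, 2, 8] both give 2**(4/3).
```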
Another example is given by pairs (a, b) of real numbers, which can be interpreted as points in the real
plane, as (planar) vectors, as complex numbers, and (if a ≤ b) as closed sub-intervals of the real line.
Most algebraic operations for these objects are defined for the representing pairs of real numbers; in
the case of the addition, the exact same formula is used.
We only mention that in the case of three-dimensional sets of truth values, the corresponding
lattices (and the families of fuzzy sets based on them) are not isomorphic, which means that they have
substantially different properties.
The paper is organized as follows. In Section 2, we discuss the sets of truth values for Cantorian
(or crisp) sets and for fuzzy sets and present the essential notions of abstract lattice theory, including
the crucial concept of isomorphic lattices. In Section 3, we review the two- and three-dimensional
sets of truth values mentioned above and study the isomorphisms between them and between the
corresponding families of fuzzy sets. Finally, in Section 4, we discuss some further consequences of
lattice isomorphisms as well as some questionable notations appearing in the literature, in particular
“intuitionistic” fuzzy sets and “Pythagorean” fuzzy sets.

2. Preliminaries
Let us start with collecting some of the basic and important prerequisites from set theory, fuzzy
set theory, and some generalizations thereof.


2.1. Truth Values and Bounded Lattices


The set of truth values in Cantorian set theory [17,18] (and in the underlying Boolean logic [19,20])
is the Boolean algebra {0, 1}, which we will denote by 2 in this paper. Given a universe of discourse,
i.e., a non-empty set X, each Cantorian (or crisp) subset A of X can be identified with its indicator function
1 A : X → 2, defined by 1 A ( x ) = 1 if and only if x ∈ A.
In L. A. Zadeh’s seminal paper on fuzzy sets [1] (compare also the work of K. Menger [21–23] and
D. Klaua [24,25]), the unit interval [0, 1] was proposed as set of truth values, thus providing a natural
extension of the Boolean case. As usual, a fuzzy subset A of the universe of discourse X is described by
its membership function μ A : X → I, and μ A ( x ) is interpreted as the degree of membership of the object x
in the fuzzy set A. The standard order reversing involution (or double negation) NI : I → I is given by
NI ( x ) = 1 − x.
For the rest of this paper, we will reserve the shortcut I for the unit interval [0, 1] of the real line R.
On each subset of the real line, the order ≤ will denote the standard linear order inherited from R.
In a further generalization, J. Goguen [2] suggested to use the elements of an abstract set L as
truth values and to describe an L-fuzzy subset A of X by means of its membership function μ A : X → L,
where μ A ( x ) stands for the degree of membership of the object x in the L-fuzzy set A.
Several important examples for L were discussed in [2], such as complete lattices or complete
lattice-ordered semigroups. There is an extensive literature on L-fuzzy sets dealing with various aspects
of algebra, analysis, category theory, topology, and stochastics (see, e.g., [26–44]). For a more recent
overview of these and other types and generalizations of fuzzy sets see [5].
In most of these papers the authors work with a lattice ( L, ≤ L ), i.e., a non-empty, partially ordered
set ( L, ≤ L ) such that each finite subset of L has a meet (or greatest lower bound) and a join (or least upper
bound) in L. If each arbitrary subset of L has a meet and a join then the lattice is called complete, and if
there exist a bottom (or smallest) element 0 L and a top (or greatest) element 1 L in L, then the lattice is
called bounded.
For notions and results in the theory of general lattices we refer to the book of G. Birkhoff [45].
There is an equivalent, purely algebraic approach to lattices without referring to a partial order:
if ∧ L : L × L → L and ∨ L : L × L → L are two commutative, associative operations on a set L that
satisfy the two absorption laws, i.e., for all x, y ∈ L we have x ∧ L ( x ∨ L y) = x and x ∨ L ( x ∧ L y) = x,
and if we define the binary relation ≤ L on L by x ≤ L y if and only if x ∧ L y = x (which is equivalent
to saying that x ≤ L y if and only if x ∨ L y = y), then ≤ L is a partial order on L and ( L, ≤ L ) is a lattice
such that, for each set { x, y} ⊆ L, the elements x ∧ L y and x ∨ L y coincide with the meet and the join,
respectively, of the set { x, y} with respect to the order ≤ L .
Clearly, the lattices (2, ≤) and (I, ≤) already mentioned are examples of complete bounded
lattices: 2-fuzzy sets are exactly crisp sets, I-fuzzy sets are the fuzzy sets in the sense of [1].
If (L1, ≤L1), (L2, ≤L2), . . . , (Ln, ≤Ln) are lattices and ∏ Li = L1 × L2 × · · · × Ln is the Cartesian
product of the underlying sets, then also

(∏ Li, ≤comp)    (1)

is a lattice, the so-called product lattice of (L1, ≤L1), (L2, ≤L2), . . . , (Ln, ≤Ln), where ≤comp is the
componentwise partial order on the Cartesian product ∏ Li given by

(x1, x2, . . . , xn) ≤comp (y1, y2, . . . , yn)    (2)

⟺ x1 ≤L1 y1 AND x2 ≤L2 y2 AND . . . AND xn ≤Ln yn.

The componentwise partial order is not the only partial order that can be defined on ∏ Li.
An alternative is, for example, the lexicographical partial order ≤lexi given by (x1, x2, . . . , xn) ≤lexi
(y1, y2, . . . , yn) if and only if ((x1, x2, . . . , xn) = (y1, y2, . . . , yn) or (x1, x2, . . . , xn) <lexi (y1, y2, . . . , yn)),
where the strict inequality (x1, x2, . . . , xn) <lexi (y1, y2, . . . , yn) holds if and only if there is an i0 ∈
{1, 2, . . . , n} such that xi = yi for each i ∈ {1, 2, . . . , i0 − 1} and xi0 <Li0 yi0.
Obviously, whenever (L1, ≤L1), (L2, ≤L2), . . . , (Ln, ≤Ln) are lattices then also

(∏ Li, ≤lexi)

is a lattice. Moreover, if each of the partial orders ≤ L1 , ≤ L2 , . . . , ≤ Ln is linear, then ≤lexi is also a linear
order. Note that this is not the case for ≤comp whenever n > 1 and at least two of the sets L1 , L2 , . . . , Ln
contain two or more elements. To take the simplest example: the lattice (2 × 2, ≤lexi ) is a chain, i.e.,
(0, 0) <lexi (0, 1) <lexi (1, 0) <lexi (1, 1), but in the product lattice (2 × 2, ≤comp ) the elements (0, 1)
and (1, 0) are incomparable with respect to ≤comp .
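The contrast between ≤comp and ≤lexi on 2 × 2 can be checked directly. The sketch below (our own, with the two truth values encoded as the Python integers 0 and 1) implements both orders:

```python
# Componentwise vs. lexicographic comparison of n-tuples (illustrative).

def leq_comp(x, y):
    # x <=comp y iff every coordinate of x is <= the matching one of y.
    return all(a <= b for a, b in zip(x, y))

def leq_lexi(x, y):
    # For tuples of numbers, Python's built-in <= is the lexicographic order.
    return x <= y

# In (2 x 2, <=comp) the elements (0, 1) and (1, 0) are incomparable:
incomparable = not leq_comp((0, 1), (1, 0)) and not leq_comp((1, 0), (0, 1))
# In (2 x 2, <=lexi) the four elements form a chain:
# (0, 0) < (0, 1) < (1, 0) < (1, 1).
```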
We only mention that also the product of infinitely many lattices may be a lattice. As an example,
if ( L, ≤ L ) is a lattice and X a non-empty set, then the set L X of all functions from X to L, equipped
with the componentwise partial order ≤comp , is again a lattice. Recall that, for functions f , g : X → L,
the componentwise partial order ≤comp is defined by f ≤comp g if and only if f ( x ) ≤ L g( x ) for all
x ∈ X. If no confusion is possible, we shall simply write f ≤ L g rather than f ≤comp g.

2.2. Isomorphic Lattices: Some General Consequences


For two partially ordered sets ( L1 , ≤ L1 ) and ( L2 , ≤ L2 ), a function ϕ : L1 → L2 is called an order
homomorphism if it preserves the monotonicity, i.e., if x ≤ L1 y implies ϕ( x ) ≤ L2 ϕ(y).
If ( L1 , ≤ L1 ) and ( L2 , ≤ L2 ) are two lattices then a function ϕ : L1 → L2 is called a lattice
homomorphism if it preserves finite meets and joins, i.e., if for all x, y ∈ L1

ϕ ( x ∧ L1 y ) = ϕ ( x ) ∧ L2 ϕ ( y ) and ϕ ( x ∨ L1 y ) = ϕ ( x ) ∨ L2 ϕ ( y ). (3)

Each lattice homomorphism is an order homomorphism, but the converse is not true in general.
A lattice homomorphism ϕ : L1 → L2 is called an embedding if it is injective, an epimorphism if it is
surjective, and an isomorphism if it is bijective, i.e., if it is both an embedding and an epimorphism.
If a function ϕ : L1 → L2 is an embedding from a lattice ( L1 , ≤ L1 ) into a lattice ( L2 , ≤ L2 ) then the
set { ϕ( x ) | x ∈ L1 } (equipped with the partial order inherited from ( L2 , ≤ L2 )) forms a sublattice of
( L2 , ≤ L2 ) which is isomorphic to ( L1 , ≤ L1 ). If ( L1 , ≤ L1 ) is bounded or complete, so is this sublattice of
( L2 , ≤ L2 ). Conversely, if ( L1 , ≤ L1 ) is a sublattice of ( L2 , ≤ L2 ) then ( L1 , ≤ L1 ) trivially can be embedded
into ( L2 , ≤ L2 ) (the identity function id L1 : L1 → L2 provides an embedding).
The word “isomorphic” is derived from the composition of the two Greek words “isōs” (meaning
similar, equal, corresponding) and “morphē” (meaning shape, structure), so it means having the same shape
or the same structure.
If two lattices ( L1 , ≤ L1 ) and ( L2 , ≤ L2 ) are isomorphic this means that they have the same
mathematical structure in the sense that there is a bijective function ϕ : L1 → L2 that preserves
the order as well as finite meets and joins, compare (3).
However, being isomorphic does not necessarily mean to be identical, for example (not in the
lattice framework), consider the arithmetic mean on [−∞, ∞[ and the geometric mean on [0, ∞[ [46,47]
which are isomorphic aggregation functions on Rn , but they have some different properties and they
are used for different purposes.
If ( L1 , ≤ L1 ) and ( L2 , ≤ L2 ) are isomorphic and if ( L1 , ≤ L1 ) has additional order theoretical
properties, these properties automatically carry over to the lattice ( L2 , ≤ L2 ).
For instance, if the lattice ( L1 , ≤ L1 ) is complete so is ( L2 , ≤ L2 ). Or, if the lattice ( L1 , ≤ L1 ) is
bounded (with bottom element 0 L1 and top element 1 L1 ) then also ( L2 , ≤ L2 ) is bounded, and the
bottom and top elements of ( L2 , ≤ L2 ) are obtained via 0 L2 = ϕ(0 L1 ) and 1 L2 = ϕ(1 L1 ).
Moreover, it is well-known that corresponding constructs over isomorphic structures are again
isomorphic. Here are some particularly interesting cases:


Remark 1. Suppose that ( L1 , ≤ L1 ) and ( L2 , ≤ L2 ) are isomorphic lattices and that ϕ : L1 → L2 is a lattice
isomophism between ( L1 , ≤ L1 ) and ( L2 , ≤ L2 ).

(i) If f : L1 → L1 is a function then the composite function ϕ ◦ f ◦ ϕ−1 : L2 → L2 has the same order
theoretical properties as f .
 
(ii) If F : L1 × L1 → L1 is a binary operation on L1 and if we define (ϕ⁻¹, ϕ⁻¹) : L2 × L2 → L1 × L1 by
(ϕ⁻¹, ϕ⁻¹)((x, y)) = (ϕ⁻¹(x), ϕ⁻¹(y)), then the function ϕ ◦ F ◦ (ϕ⁻¹, ϕ⁻¹) : L2 × L2 → L2 is a
binary operation on L2 with the same order theoretical properties as F.
(iii) If A1 : (L1)ⁿ → L1 is an n-ary operation on L1 then, as a straightforward generalization, the composite
function ϕ ◦ A1 ◦ (ϕ⁻¹, ϕ⁻¹, . . . , ϕ⁻¹) : (L2)ⁿ → L2 given by

(ϕ ◦ A1 ◦ (ϕ⁻¹, ϕ⁻¹, . . . , ϕ⁻¹))(x1, x2, . . . , xn) = ϕ(A1(ϕ⁻¹(x1), ϕ⁻¹(x2), . . . , ϕ⁻¹(xn))),

is an n-ary operation on L2 with the same order theoretical properties as A1.
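Item (i) can be illustrated with a toy order isomorphism (our own choice, not from the paper): ϕ(x) = x + 1 maps ([0, 1], ≤) onto ([1, 2], ≤), and conjugating a monotone function f by ϕ yields a monotone function on [1, 2]:

```python
# Conjugating a function by an order isomorphism (sketch of Remark 1(i)).
# phi : [0, 1] -> [1, 2] is a toy isomorphism; f(x) = x*x is monotone on [0, 1].

def phi(x):
    return x + 1.0

def phi_inv(y):
    return y - 1.0

def f(x):
    return x * x

def conjugate(g):
    # Returns phi o g o phi^-1 : [1, 2] -> [1, 2].
    return lambda y: phi(g(phi_inv(y)))

f2 = conjugate(f)
# f2 inherits the order theoretical properties of f: it is monotone on [1, 2].
ys = [1.0 + i / 10.0 for i in range(11)]
is_monotone = all(f2(a) <= f2(b) for a, b in zip(ys, ys[1:]))
```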

As a consequence of Remark 1, many structures used in fuzzy set theory can be carried over to
any isomorphic lattice, for example, order reversing involutions or residua [45], which are used in
BL-logics [48–62]. The same is true for many connectives (mostly on the unit interval I but also on
more general and more abstract structures (see, e.g., [63,64])) for many-valued logics such as triangular
norms and conorms (t-norms and t-conorms for short), going back to K. Menger [65] and B. Schweizer
and A. Sklar [66–68] (see also [69–73]), uninorms [74], and nullnorms [75]. Another example are
aggregation functions which have been extensively studied on the unit interval I in, e.g., [46,47,76–78].

Example 1. Let ( L1 , ≤ L1 ) and ( L2 , ≤ L2 ) be isomorphic bounded lattices, suppose that ϕ : L1 → L2 is a lattice


isomorphism between ( L1 , ≤ L1 ) and ( L2 , ≤ L2 ), and denote the bottom and top elements of ( L1 , ≤ L1 ) by 0 L1 and
1 L1 , respectively.

(i) Let NL1 : L1 → L1 be an order reversing involution (or double negation) on L1, i.e., x ≤L1 y implies
NL1(y) ≤L1 NL1(x), and NL1 ◦ NL1 = idL1. Then the function ϕ ◦ NL1 ◦ ϕ⁻¹ is an order reversing
involution on L2, and the complemented lattice (L2, ≤L2, ϕ ◦ NL1 ◦ ϕ⁻¹) is isomorphic to (L1, ≤L1, NL1).
(ii) Let (L1, ≤L1, ∗1, e1, →1, ←1) be a residuated lattice, i.e., (L1, ∗1) is a (not necessarily commutative)
monoid with neutral element e1, and for the residua →1, ←1 : L1 × L1 → L1 we have that for all
x, y, z ∈ L1 the assertion (x ∗1 y) ≤L1 z is equivalent to both y ≤L1 (x →1 z) and x ≤L1 (z ←1 y). Then

(L2, ≤L2, ϕ ◦ ∗1 ◦ (ϕ⁻¹, ϕ⁻¹), ϕ(e1), ϕ ◦ →1 ◦ (ϕ⁻¹, ϕ⁻¹), ϕ ◦ ←1 ◦ (ϕ⁻¹, ϕ⁻¹))

is an isomorphic residuated lattice.


(iii) Let T1 : L1 × L1 → L1 be a triangular norm on L1, i.e., T1 is an associative, commutative order
homomorphism with neutral element 1L1. Then the function ϕ ◦ T1 ◦ (ϕ⁻¹, ϕ⁻¹) is a triangular norm
on L2.
(iv) Let S1 : L1 × L1 → L1 be a triangular conorm on L1, i.e., S1 is an associative, commutative order
homomorphism with neutral element 0L1. Then the function ϕ ◦ S1 ◦ (ϕ⁻¹, ϕ⁻¹) is a triangular conorm
on L2.
(v) Let U1 : L1 × L1 → L1 be a uninorm on L1, i.e., U1 is an associative, commutative order homomorphism
with neutral element e ∈ L1 such that 0L1 <L1 e <L1 1L1. Then the function ϕ ◦ U1 ◦ (ϕ⁻¹, ϕ⁻¹) is a
uninorm on L2 with neutral element ϕ(e).
(vi) Let V1 : L1 × L1 → L1 be a nullnorm on L1, i.e., V1 is an associative, commutative order homomorphism
such that there is an a ∈ L1 with 0L1 <L1 a <L1 1L1 such that for all x ≤L1 a we have V1((x, 0L1)) = x,
and for all x ≥L1 a we have V1((x, 1L1)) = x. Then the function ϕ ◦ V1 ◦ (ϕ⁻¹, ϕ⁻¹) is a nullnorm
on L2.


(vii) Let A1 : ( L1 )n → L1 be an n-ary aggregation function on L1 , i.e., A1 is an order homomorphism which satisfies A1 (0 L1 , 0 L1 , . . . , 0 L1 ) = 0 L1 and A1 (1 L1 , 1 L1 , . . . , 1 L1 ) = 1 L1 . Then the function ϕ ◦ A1 ◦ (ϕ−1 , ϕ−1 , . . . , ϕ−1 ) is an n-ary aggregation function on L2 .
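For illustration, the transport of structure in item (iii) can be written out in a few lines of Python. This is an informal sketch only: the concrete order isomorphism ϕ(x) = x² of (I, ≤) onto itself, the Łukasiewicz t-norm as the operation being transported, and all function names are our own choices.

```python
def transport(op, phi, phi_inv):
    """Transport a binary operation along an order isomorphism phi,
    as in Example 1(iii): op2 = phi o op o (phi_inv, phi_inv)."""
    return lambda a, b: phi(op(phi_inv(a), phi_inv(b)))

def t_lukasiewicz(a, b):
    """Lukasiewicz t-norm on the unit interval I."""
    return max(a + b - 1.0, 0.0)

# phi(x) = x**2 is an order isomorphism of (I, <=) onto itself
phi = lambda x: x * x
phi_inv = lambda x: x ** 0.5

T2 = transport(t_lukasiewicz, phi, phi_inv)

# T2 is again a t-norm on I, with neutral element phi(1) = 1
assert T2(0.25, 1.0) == 0.25              # neutrality of 1 (dyadic inputs: exact floats)
assert T2(0.25, 0.25) == 0.0              # sqrt(0.25) + sqrt(0.25) - 1 = 0
assert T2(0.25, 0.49) <= min(0.25, 0.49)  # every t-norm lies below min
```

The same conjugation pattern works verbatim for the conorms, uninorms, and nullnorms of items (iv)-(vi).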

3. Some Generalizations of Truth Values and Fuzzy Sets


In this section we first review the lattices of truth values for crisp sets and for fuzzy sets as
introduced in [1], followed by a detailed description of various generalizations thereof by means of
sets of truth values of dimension two and higher.

3.1. The Classical Cases: Crisp and Fuzzy Sets


Now we shall consider different lattices of types of truth values and, for a fixed non-empty
universe of discourse X, the corresponding classes of (fuzzy) subsets of X.
Recall that if the set of truth values is the classical Boolean algebra {0, 1} (denoted in this paper
simply by 2), then the corresponding set of all crisp (or Cantorian) subsets of X will be denoted by P ( X )
(called the power set of X). Each crisp subset A of X can be identified with its characteristic function
1 A : X → 2, which is defined by 1 A ( x ) = 1 if and only if x ∈ A. There are exactly two constant
characteristic functions: 1∅ : X → 2 maps every x ∈ X to 0, and 1 X : X → 2 maps every x ∈ X to 1.
Obviously, we have A ⊆ B if and only if 1 A ≤ 1 B , i.e., 1 A ( x ) ≤ 1 B ( x ) for all x ∈ X,
and (P ( X ), ⊆) is a complete bounded lattice with bottom element ∅ and top element X, i.e., (P ( X ), ⊆)
is isomorphic to the product lattice (2 X , ≤), where 2 X is the set of all functions from X to 2, and ≤ is
the componentwise standard order.
Switching to the unit interval (denoted by I) as set of truth values in the sense of [1], the set of all
fuzzy subsets of X will be denoted by F ( X ). As usual, each fuzzy subset A ∈ F ( X ) is characterized
by its membership function μ A : X → I, where μ A ( x ) ∈ I describes the degree of membership of the object
x ∈ X in the fuzzy set A.
For fuzzy sets A, B ∈ F ( X ) we have A ⊆ B if and only if μ A ≤ μ B , i.e., μ A ( x ) ≤ μ B ( x ) for all
x ∈ X. Therefore, (F ( X ), ⊆) is a complete bounded lattice with bottom element ∅ and top element X,
i.e., (F ( X ), ⊆) is isomorphic to (IX , ≤), where IX is the set of all functions from X to I.
Only for the sake of completeness we mention that the bottom and top elements in
F (X) are also denoted by ∅ and X, and they correspond to the membership functions
μ∅ = 1∅ and μX = 1X , respectively.
The lattice (P ( X ), ⊆) of crisp subsets of X can be embedded into the lattice (F ( X ), ⊆) of fuzzy
sets of X: the function embP ( X ) : P ( X ) → F ( X ) given by μembP (X) ( A) = 1 A , i.e., the membership
function of embP ( X ) ( A) is just the characteristic function of A, provides a natural embedding.
The membership function μ A′ : X → I of the complement A′ of a fuzzy set A ∈ F ( X ) is given by μ A′ ( x ) = NI (μ A ( x )) = 1 − μ A ( x ).
For a fuzzy set A ∈ F ( X ) and α ∈ I, the α-cut (or α-level set) of A is defined as the crisp set
[ A]α ∈ P ( X ) given by
[ A ] α = { x ∈ X | μ A ( x ) ≥ α }.
The 1-cut [ A]1 = { x ∈ X | μ A ( x ) = 1} of a fuzzy set A ∈ F ( X ) is often called the kernel of A,
and the crisp set { x ∈ X | μ A ( x ) > 0} usually is called the support of the fuzzy set A.
The family ([ A]α )α∈I of α-cuts of a fuzzy subset A of X carries the same information as the
membership function μ A : X → I in the sense that it is possible to reconstruct the membership function
μ A from the family of α-cuts of A: for all x ∈ X we have [27,79]

μ A ( x ) = sup { min( α, 1[ A]α ( x ) ) | α ∈ I }.
We only mention that this reconstruction is no longer possible if the unit interval I is replaced by some lattice L which is not a chain.
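On a finite universe the reconstruction formula can be checked directly. The sketch below is illustrative only; the dictionary representation of a fuzzy set and the helper names are ours.

```python
def alpha_cut(mu, alpha):
    """Crisp alpha-cut [A]_alpha = {x in X : mu_A(x) >= alpha}."""
    return {x for x, m in mu.items() if m >= alpha}

def reconstruct(levels, cuts):
    """mu_A(x) = sup { min(alpha, 1_{[A]_alpha}(x)) : alpha in I }; on a finite
    universe the supremum over the attained levels suffices and is exact."""
    return lambda x: max((a for a in levels if x in cuts[a]), default=0.0)

mu_A = {"a": 0.3, "b": 1.0, "c": 0.7, "d": 0.0}   # a fuzzy subset of X = {a, b, c, d}
levels = sorted(set(mu_A.values()))
cuts = {a: alpha_cut(mu_A, a) for a in levels}

mu = reconstruct(levels, cuts)
assert all(mu(x) == mu_A[x] for x in mu_A)        # membership fully recovered
assert alpha_cut(mu_A, 1.0) == {"b"}              # the kernel of A
assert {x for x, m in mu_A.items() if m > 0} == {"a", "b", "c"}   # the support of A
```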


3.2. Generalizations: The Two-Dimensional Case


A simple example of a two-dimensional lattice is (I × I, ≤comp ) as defined by (1) and (2), i.e.,
the unit square of the real plane R2 . In [63], triangular norms on this lattice (and on other product
lattices) were studied. The standard order reversing involution NI×I : I × I → I × I in (I × I, ≤comp ) is
given by
NI×I (( x, y)) = (1 − y, 1 − x ). (4)

This product lattice was considered in several expert systems [80–82]. There, the first coordinate
was interpreted as a degree of positive information (measure of belief ), and the second coordinate
as a degree of negative information (measure of disbelief ). Note that though several operations for
this structure were considered in the literature (for a nice overview see [83]), a deeper algebraic
investigation is still missing in this case.
To the best of our knowledge, K. T. Atanassov [6,7,84] (compare [85,86]) was the first to consider
both the degree of membership and the degree of non-membership when using and studying the bounded
lattice (L∗ , ≤L∗ ) of truth values given by (5) and (6). Unfortunately, he called the corresponding
L∗ -fuzzy sets “intuitionistic” fuzzy sets because of the lack of the law of excluded middle (for a critical
discussion of this terminology see Section 4.2):

L∗ = {( x1 , x2 ) ∈ I × I | x1 + x2 ≤ 1}, (5)
( x1 , x2 ) ≤L∗ (y1 , y2 ) ⇐⇒ x1 ≤ y1 AND x2 ≥ y2 . (6)

Obviously, (L∗ , ≤L∗ ) is a complete bounded lattice: 0L∗ = (0, 1) and 1L∗ = (1, 0) are the bottom
and top elements of (L∗ , ≤L∗ ), respectively, and the meet ∧L∗ and the join ∨L∗ in (L∗ , ≤L∗ ) are given by

( x1 , x2 ) ∧L∗ (y1 , y2 ) = (min( x1 , y1 ), max( x2 , y2 )),


( x1 , x2 ) ∨L∗ (y1 , y2 ) = (max( x1 , y1 ), min( x2 , y2 )).
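For illustration, the order, meet, and join on L∗ can be written down directly; the following is a minimal sketch with our own function names.

```python
def in_Lstar(p):
    """Membership test for L* = {(x1, x2) in I x I : x1 + x2 <= 1}."""
    return 0 <= p[0] and 0 <= p[1] and p[0] + p[1] <= 1

def leq_Lstar(p, q):
    """(x1, x2) <=_{L*} (y1, y2)  iff  x1 <= y1 and x2 >= y2."""
    return p[0] <= q[0] and p[1] >= q[1]

def meet_Lstar(p, q):
    return (min(p[0], q[0]), max(p[1], q[1]))

def join_Lstar(p, q):
    return (max(p[0], q[0]), min(p[1], q[1]))

p, q = (0.2, 0.3), (0.4, 0.5)              # an incomparable pair in (L*, <=_{L*})
assert in_Lstar(p) and in_Lstar(q)
assert not leq_Lstar(p, q) and not leq_Lstar(q, p)
assert meet_Lstar(p, q) == (0.2, 0.5)
assert join_Lstar(p, q) == (0.4, 0.3)
assert leq_Lstar(meet_Lstar(p, q), p) and leq_Lstar(p, join_Lstar(p, q))
```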

Moreover, (I, ≤) can be embedded in a natural way into (L∗ , ≤L∗ ): the function embI : I → L∗
given by embI ( x ) = ( x, 1 − x ) is an embedding. Observe that there are also other embeddings of (I, ≤)
into (L∗ , ≤L∗ ), e.g., ϕ : I → L∗ given by ϕ( x ) = ( x, 0).
Note that the order ≤L∗ is not linear. However, it is possible to construct refinements of ≤L∗ which
are linear [87].
Mirroring the set L∗ about the axis passing through the points (0, 0.5) and (1, 0.5) of the unit
square I × I one immediately sees that there is some other lattice which is isomorphic to (L∗ , ≤L∗ ).
Both lattices are visualized in Figure 1.

Proposition 1. The complete bounded lattice (L∗ , ≤L∗ ) is isomorphic to the upper left triangle L2 (I) in I × I
(with vertexes (0, 0), (0, 1) and (1, 1)), i.e.,

L2 (I) = {( x1 , x2 ) ∈ I × I | 0 ≤ x1 ≤ x2 ≤ 1}, (7)

equipped with the componentwise partial order ≤comp , whose bottom and top elements are 0 L2 (I) = (0, 0)
and 1 L2 (I) = (1, 1), respectively. A canonical isomorphism between the lattices (L∗ , ≤L∗ ) and ( L2 (I), ≤comp ) is provided by the function ϕ_{L∗}^{L2(I)} : L∗ → L2 (I) defined by ϕ_{L∗}^{L2(I)} (( x1 , x2 )) = ( x1 , 1 − x2 ).
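The isomorphism of Proposition 1 is easy to test exhaustively on a finite grid. The sketch below is illustrative; the dyadic grid (chosen so that floating-point arithmetic is exact) and the names are ours.

```python
from itertools import product

def phi(p):
    """Canonical isomorphism L* -> L2(I): (x1, x2) |-> (x1, 1 - x2)."""
    return (p[0], 1 - p[1])

def leq_Lstar(p, q):
    return p[0] <= q[0] and p[1] >= q[1]

def leq_comp(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

grid = [i / 8 for i in range(9)]                 # dyadic grid: exact floats
Lstar = [(a, b) for a in grid for b in grid if a + b <= 1]

for p, q in product(Lstar, repeat=2):
    # phi maps into L2(I) and both preserves and reflects the order
    u, v = phi(p), phi(q)
    assert u[0] <= u[1]                          # phi(p) lies in L2(I)
    assert leq_Lstar(p, q) == leq_comp(u, v)
```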

It is readily seen that ( L2 (I), ≤comp ) is a sublattice of the product lattice (I × I, ≤comp ), and the
standard order reversing involution NL2 (I) : L2 (I) → L2 (I) is given by

NL2 (I) (( x, y)) = (1 − y, 1 − x ) (8)


(compare (4)). On the other hand, the lattice (L∗ , ≤L∗ ) is not a sublattice of (I × I, ≤comp ), but it can be
embedded into (I × I, ≤comp ) using, e.g., the lattice monomorphism (as visualized in Figure 2)

id_{L2(I)} ◦ ϕ_{L∗}^{L2(I)} : L∗ −→ L2 (I).

Several other lattices “look” different when compared with (L∗ , ≤L∗ ) or seem to address a different
context, but in fact they carry the same structural information as (L∗ , ≤L∗ ).
Well-known examples of this phenomenon are the lattices (I(I), ≤I(I) ), providing the basis
of interval-valued (or grey) fuzzy sets [4,8,9,12–14], and ( P∗ , ≤L∗ ), giving rise to the so-called
“Pythagorean” fuzzy sets [15,88,89], both turning out to be isomorphic to the lattice (L∗ , ≤L∗ ).
The following statements can be verified by simply checking the required properties.

Figure 1. The lattices (L∗ , ≤L∗ ) (left) and ( L2 (I), ≤comp ) (right). Note the difference between the two orders: we have ( x1 , x2 ) ≤L∗ (y1 , y2 ), (z1 , z2 ) ≤L∗ (y1 , y2 ), and (u1 , u2 ) ≤comp (w1 , w2 ), but ( x1 , x2 ) and (z1 , z2 ) are not comparable in (L∗ , ≤L∗ ), and (v1 , v2 ) is neither comparable to (u1 , u2 ) nor to (w1 , w2 ) with respect to ≤comp . Also, a hint for the construction of the order reversing involutions NL∗ and NL2 (I) as reflections through the appropriate diagonal (dashed line) of I × I is given.

Proposition 2. The complete bounded lattice (L∗ , ≤L∗ ) is isomorphic to the following two lattices:

(i) to the lattice (I(I), ≤I(I) ) of all closed subintervals of the unit interval I, given by

I(I) = {[ x1 , x2 ] ⊆ I | 0 ≤ x1 ≤ x2 ≤ 1}, (9)


[ x1 , x2 ] ≤I(I) [y1 , y2 ] ⇐⇒ x1 ≤ y1 AND x2 ≤ y2 , (10)

with bottom and top elements 0I(I) = [0, 0] and 1I(I) = [1, 1], respectively; a canonical example of an isomorphism between (L∗ , ≤L∗ ) and (I(I), ≤I(I) ) is provided by the function ϕ_{L∗}^{I(I)} : L∗ → I(I) defined by ϕ_{L∗}^{I(I)} (( x1 , x2 )) = [ x1 , 1 − x2 ];
(ii) to the lattice ( P∗ , ≤L∗ ) of all points in the intersection of the unit square I × I and the unit disk with
center (0, 0), i.e.,
P∗ = {( x1 , x2 ) ∈ I × I | x12 + x22 ≤ 1}; (11)

a canonical example of a lattice isomorphism from ( P∗ , ≤L∗ ) to (L∗ , ≤L∗ ) is provided by the function ϕ_{P∗}^{L∗} : P∗ → L∗ defined by ϕ_{P∗}^{L∗} (( x1 , x2 )) = ( x1 ², x2 ² ).


Example 2. Let us start with the standard order reversing involution NL2 (I) on ( L2 (I), ≤comp ) given by (8).
The fact that ( L2 (I), ≤comp ) is isomorphic to each of the lattices (L∗ , ≤L∗ ), ( P∗ , ≤L∗ ), and (I(I), ≤I(I) )
(see Propositions 1 and 2) and Example 1(i) allow us to construct order reversing involutions NL∗ : L∗ → L∗ , NP∗ : P∗ → P∗ , and NI(I) : I(I) → I(I) on the lattices (L∗ , ≤L∗ ), ( P∗ , ≤L∗ ), and (I(I), ≤I(I) ); they are given by

NL∗ (( x1 , x2 )) = NP∗ (( x1 , x2 )) = ( x2 , x1 ), NI(I) ([ x1 , x2 ]) = [1 − x2 , 1 − x1 ] .
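These involutions arise from NL2 (I) by conjugation with the respective isomorphisms, which can be checked directly; the sketch below uses our own names and dyadic values so that floating-point arithmetic is exact.

```python
def phi(p):       # canonical isomorphism L* -> L2(I) from Proposition 1
    return (p[0], 1 - p[1])

def phi_inv(p):   # its inverse, L2(I) -> L*
    return (p[0], 1 - p[1])

def N_L2(p):      # standard involution on L2(I), formula (8)
    return (1 - p[1], 1 - p[0])

def N_Lstar(p):   # transported involution on L*
    return (p[1], p[0])

p = (0.25, 0.5)
assert phi_inv(N_L2(phi(p))) == N_Lstar(p)    # conjugation yields N_Lstar
assert N_Lstar(N_Lstar(p)) == p               # N_Lstar is an involution

def N_II(iv):                                 # involution on intervals I(I)
    a, b = iv
    return (1 - b, 1 - a)

assert N_II(N_II((0.25, 0.75))) == (0.25, 0.75)
```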

Figure 2. The lattices (L∗ , ≤L∗ ) (left), ( L2 (I), ≤comp ) (center), and (I × I, ≤comp ) (right). The mirror symmetry between L∗ and L2 (I) shows that (L∗ , ≤L∗ ) and ( L2 (I), ≤comp ) are isomorphic, and ( L2 (I), ≤comp ) is a sublattice of (I × I, ≤comp ).

Given a universe of discourse X, i.e., a non-empty set X, and fixing a bounded lattice ( L, ≤ L ),
we obtain a special type of L-fuzzy subsets of X in the sense of [2] and, on the other hand, a particular
case of type-2 fuzzy sets (also proposed by L. A. Zadeh [3,4]; see [90,91] for some algebraic aspects of
truth values for type-2 fuzzy sets).

An L∗ -fuzzy subset A of X is characterized by its membership function μ_A^{L∗} : X → L∗ , where the bounded lattice (L∗ , ≤L∗ ) is given by (5) and (6). The bottom and top elements of (L∗ , ≤L∗ ) are 0L∗ = (0, 1) and 1L∗ = (1, 0), respectively.


Over the years, different names for fuzzy sets based on the lattices that are isomorphic to (L∗ , ≤L∗ )
according to Propositions 1 and 2 were used in the literature: in the mid-seventies I(I)-fuzzy sets were
called interval-valued in [4,12–14], in the eighties first the name “intuitionistic” fuzzy sets was used for
L∗ -fuzzy sets in [6,7] (compare also [84–86]) and then grey sets in [8,9], and even later vague sets in [10]
(see also [10,92]). More recently, the name “Pythagorean” fuzzy sets was introduced for P∗ -fuzzy sets
in [15,88,89].
As a function μ_A^{L∗} : X → L∗ ⊆ I × I, the membership function μ_A^{L∗} has two components μ A , νA : X → I such that for each x ∈ X we have μ_A^{L∗} ( x ) = (μ A ( x ), νA ( x )) and μ A ( x ) + νA ( x ) ≤ 1.
Both μ A : X → I and νA : X → I can be seen as membership functions of fuzzy subsets of X,
say A+ , A− ∈ F ( X ), respectively, i.e., for each x ∈ X we have

μ A + ( x ) = μ A ( x ), μ A − ( x ) = νA ( x ), and μ A+ ( x ) + μ A− ( x ) ≤ 1. (12)

The value μ A+ ( x ) is usually called the degree of membership of the object x in the L∗ -fuzzy set A,
while μ A− ( x ) is said to be the degree of non-membership of the object x in the L∗ -fuzzy set A.
Denoting the set of all L∗ -fuzzy subsets of X by FL∗ ( X ) and keeping the notations from (12), for each A ∈ FL∗ ( X ) and its membership function μ_A^{L∗} : X → L∗ ⊆ I × I we may write

μ_A^{L∗} = (μ A , νA ) = (μ A+ , μ A− ).

As a consequence of (12), for the fuzzy sets A+ and A− we have A+ ⊆ ( A− )′ . In other words, we can identify each L∗ -fuzzy subset A ∈ FL∗ ( X ) with a pair of fuzzy sets ( A+ , A− ) with A+ ⊆ ( A− )′ , i.e.,

FL∗ ( X ) = {( A+ , A− ) ∈ F ( X ) × F ( X ) | A+ ⊆ ( A− )′ },


and for two L∗ -fuzzy subsets A = ( A+ , A− ) and B = ( B+ , B− ) of X the assertion A ⊆L∗ B is equivalent
to A+ ⊆ B+ and B− ⊆ A− . The complement of an L∗ -fuzzy subset A = ( A+ , A− ) is the L∗ -fuzzy set A′ = ( A− , A+ ).
Then (FL∗ ( X ), ⊆L∗ ) is a complete bounded lattice with bottom element ∅ = (∅, X ) and top
element X = ( X, ∅), and the lattice (FL∗ ( X ), ⊆L∗ ) of L∗ -fuzzy sets is isomorphic to (L∗ X , ≤L∗ ).
Clearly, (F ( X ), ⊆) can be embedded into (FL∗ ( X ), ⊆L∗ ): a natural embedding is provided by the function embF ( X ) : F ( X ) → FL∗ ( X ) defined by embF ( X ) ( A) = ( A, A′ ).
An interval-valued fuzzy subset A of the universe X (introduced independently in [4,12–14]; some authors called them grey sets [8,9]) is characterized by its membership function μ A : X → I(I), where (I(I), ≤I(I) ) is the bounded lattice of all closed subintervals of the unit interval I given by (9) and (10). The bottom and top elements of (I(I), ≤I(I) ) are then 0I(I) = [0, 0] and 1I(I) = [1, 1], respectively.
A “Pythagorean” fuzzy subset A of the universe X (first considered in [15,88,89]) is characterized
by its membership function μ A : X → P∗ , where the bounded lattice ( P∗ , ≤L∗ ) is given by (11) and (6).
The bottom and top elements of ( P∗ , ≤L∗ ) are the same as in (L∗ , ≤L∗ ), i.e., we have 0 P∗ = (0, 1)
and 1 P∗ = (1, 0).
From Propositions 1 and 2 we know that the four bounded lattices (L∗ , ≤L∗ ), (I(I), ≤I(I) ),
( P∗ , ≤L∗ ), and ( L2 (I), ≤comp ) are isomorphic to each other. As an immediate consequence we obtain
the following result.

Proposition 3. Let X be a universe of discourse. Then we have:

(i) The product lattices ((L∗ ) X , ≤L∗ ), ((I(I)) X , ≤I(I) ), (( P∗ ) X , ≤L∗ ), and (( L2 (I)) X , ≤comp ) are
isomorphic to each other.
(ii) The lattices of all L∗ -fuzzy subsets of X, of all “intuitionistic” fuzzy subsets of X, of all interval-valued
fuzzy subsets of X, of all “Pythagorean” fuzzy subsets of X, and of all L2 (I)-fuzzy subsets of X are
isomorphic to each other.

This means that, mathematically speaking, all the function spaces mentioned in Proposition 3(i)
and all the “different” classes of fuzzy subsets of X referred to in Proposition 3(ii) share an identical
(lattice) structure. Any differences between them only come from the names used for individual objects,
and from the interpretation or meaning of these objects. In other words, since any mathematical result
for one of these lattices immediately can be carried over to all isomorphic lattices, in most cases there
is no need to use different names for them.

3.3. Generalizations to Higher Dimensions


As a straightforward generalization of the product lattice (I × I, ≤comp ), for each n ∈ N the
n-dimensional unit cube (In , ≤comp ), i.e., the n-dimensional product of the lattice (I, ≤), can be defined
by means of (1) and (2).
The so-called “neutrosophic” sets introduced by F. Smarandache [93] (see also [94–97]) are based on the bounded lattices (I3 , ≤I3 ) and (I3 , ≤^{I3} ), where the orders ≤I3 and ≤^{I3} on the unit cube I3 are defined by

( x1 , x2 , x3 ) ≤I3 ( y1 , y2 , y3 ) ⇐⇒ x1 ≤ y1 AND x2 ≤ y2 AND x3 ≥ y3 , (13)
( x1 , x2 , x3 ) ≤^{I3} ( y1 , y2 , y3 ) ⇐⇒ x1 ≤ y1 AND x2 ≥ y2 AND x3 ≥ y3 . (14)

Observe that ≤I3 is a variant of the order ≤comp : it is defined componentwise, but in the third component the order is reversed. The top element of (I3 , ≤I3 ) is (1, 1, 0), and (0, 0, 1) is its bottom element. Analogous assertions are true for the lattice (I3 , ≤^{I3} ).
Clearly, the three lattices (I3 , ≤comp ), (I3 , ≤I3 ), and (I3 , ≤^{I3} ) are mutually isomorphic: the functions ϕ, ψ : I3 → I3 given by ϕ(( x1 , x2 , x3 )) = ( x1 , x2 , 1 − x3 ) and ψ(( x1 , x2 , x3 )) = ( x1 , 1 − x2 , x3 ) are canonical isomorphisms between (I3 , ≤comp ) and (I3 , ≤I3 ), on the one hand, and between (I3 , ≤I3 ) and (I3 , ≤^{I3} ), on the other hand.


For each n ∈ N the n-fuzzy sets introduced by B. Bedregal et al. in [11] (see also [98,99]) are based
on the bounded lattice ( Ln (I), ≤comp ), where the set Ln (I) is a straightforward generalization of L2 (I)
defined in (7):
Ln (I) = {( x1 , x2 , . . . , xn ) ∈ In | x1 ≤ x2 ≤ · · · ≤ xn }. (15)

The order ≤comp on Ln (I) coincides with the restriction of the componentwise order ≤comp
on In to Ln (I), implying that ( Ln (I), ≤comp ) is a sublattice of the product lattice (In , ≤comp ). As a
consequence, we also have the standard order reversing involution NLn (I) : Ln (I) → Ln (I) which
is defined coordinatewise, i.e., NLn (I) (( x1 , x2 , . . . , xn )) = (1 − xn , . . . , 1 − x2 , 1 − x1 ) (compare (8)).
Considering, for n > 3, lattices which are isomorphic to ( Ln (I), ≤comp ), further generalizations of “neutrosophic” sets can be introduced.
B. C. Cuong and V. Kreinovich [16] proposed the concept of so-called picture fuzzy sets which are
based on the set D∗ ⊆ I3 of truth values given by

D∗ = {( x1 , x2 , x3 ) ∈ I3 | x1 + x2 + x3 ≤ 1}. (16)

The motivation for the set D∗ came from a simple voting scenario where each voter can act in one
of the four following ways: to vote for the nominated candidate (the proportion of these voters being
equal to x1 ), to vote against the candidate (described by x2 ), to have no preference and to abstain so
this vote will not be counted (described by x3 ), or to be absent (described by 1 − x1 − x2 − x3 ).
In the original proposal [16] the set D∗ was equipped with the partial order ≤I3 given by (13),
as inherited from the lattice (I3 , ≤I3 ). Since {( x1 , x3 ) ∈ I × I | ( x1 , 0, x3 ) ∈ D∗ } = L∗ and in view of (6), we may also write ( x1 , x2 , x3 ) ≤I3 (y1 , y2 , y3 ) if and only if ( x1 , x3 ) ≤L∗ (y1 , y3 ) and x2 ≤ y2 . However, (D∗ , ≤I3 ) is not a lattice, but only a meet-semilattice with bottom element 0D∗ = (0, 0, 1); indeed, the set {(1, 0, 0), (0, 1, 0)} has no join in D∗ with respect to ≤I3 (to be more precise, the semilattice (D∗ , ≤I3 ) has infinitely many pairwise incomparable maximal elements of the form ( a, 1 − a, 0) with a ∈ I).
Therefore, (without modifications) it is impossible [100] to introduce logical operations such as
t-norms or t-conorms [69] and, in general, aggregation functions [46] on (D∗ , ≤I3 ).
As a consequence, the order ≤I3 on D∗ was replaced by the following partial order ≤D∗ on D∗
(compare [16,100–102]) which is a refinement of ≤I3 :

( x1 , x2 , x3 ) ≤D∗ ( y1 , y2 , y3 ) (17)
⇐⇒ ( x1 , x3 ) <L∗ (y1 , y3 ) OR ( ( x1 , x3 ) = ( y1 , y3 ) AND x2 ≤ y2 ).

Note that the order ≤D∗ can be seen as a kind of lexicographical order related to two orders: to the
order ≤L∗ on L∗ and to the standard order ≤ on I.
It is easy to see that (D∗ , ≤D∗ ) is a bounded lattice with bottom element 0D∗ = (0, 0, 1) and
top element 1D∗ = (1, 0, 0). This allows aggregation functions (as studied on the unit interval I
in, e.g., [46,47,76–78]) to be introduced on (D∗ , ≤D∗ ). Observe also that the lattice (D∗ , ≤D∗ ) was
considered in recent applications of picture fuzzy sets [103,104].
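The order (17) is straightforward to implement; the sketch below (with our own naming) also shows the lexicographic tie-breaking and that (1, 0, 0) and (0, 0, 1) act as top and bottom elements.

```python
def lt_Lstar(p, q):
    """Strict order <_{L*} on pairs (x1, x3)."""
    return p[0] <= q[0] and p[1] >= q[1] and p != q

def leq_Dstar(x, y):
    """The order (17) on D*: compare (x1, x3) in L*, break ties with x2."""
    px, py = (x[0], x[2]), (y[0], y[2])
    return lt_Lstar(px, py) or (px == py and x[1] <= y[1])

# (0.2, 0.5, 0) <=_{D*} (0.3, 0, 0): the first and third coordinates decide
assert leq_Dstar((0.2, 0.5, 0.0), (0.3, 0.0, 0.0))
# a tie in (x1, x3) is broken by the middle coordinate
assert leq_Dstar((0.2, 0.1, 0.3), (0.2, 0.4, 0.3))
assert not leq_Dstar((0.2, 0.4, 0.3), (0.2, 0.1, 0.3))
# (0, 0, 1) and (1, 0, 0) are the bottom and top elements
assert leq_Dstar((0.0, 0.0, 1.0), (0.2, 0.5, 0.3))
assert leq_Dstar((0.2, 0.5, 0.3), (1.0, 0.0, 0.0))
```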


We only recall [105] that the join ∨≤D∗ and the meet ∧≤D∗ in the lattice (D∗ , ≤D∗ ) are given by

( x1 , x2 , x3 ) ∨≤D∗ ( y1 , y2 , y3 ) =
  ( x1 , x2 , x3 ) if ( x1 , x2 , x3 ) ≥D∗ (y1 , y2 , y3 ),
  ( y1 , y2 , y3 ) if ( x1 , x2 , x3 ) ≤D∗ (y1 , y2 , y3 ),
  (max( x1 , y1 ), 0, min( x3 , y3 )) otherwise,

( x1 , x2 , x3 ) ∧≤D∗ ( y1 , y2 , y3 ) =
  ( x1 , x2 , x3 ) if ( x1 , x2 , x3 ) ≤D∗ (y1 , y2 , y3 ),
  ( y1 , y2 , y3 ) if ( x1 , x2 , x3 ) ≥D∗ (y1 , y2 , y3 ),
  (min( x1 , y1 ), 1 − min( x1 , y1 ) − max( x3 , y3 ), max( x3 , y3 )) otherwise,

and the standard order reversing involution ND∗ : D∗ → D∗ by ND∗ (( x1 , x2 , x3 )) = ( x3 , x2 , x1 ).
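A direct implementation of these formulas (a sketch; the names and dyadic test values, chosen for exact float arithmetic, are ours) confirms that the "otherwise" branches indeed produce upper and lower bounds for incomparable arguments.

```python
def lt_Lstar(p, q):
    return p[0] <= q[0] and p[1] >= q[1] and p != q

def leq_Dstar(x, y):
    """The order (17) on D*."""
    px, py = (x[0], x[2]), (y[0], y[2])
    return lt_Lstar(px, py) or (px == py and x[1] <= y[1])

def join_Dstar(x, y):
    if leq_Dstar(y, x):
        return x
    if leq_Dstar(x, y):
        return y
    return (max(x[0], y[0]), 0.0, min(x[2], y[2]))

def meet_Dstar(x, y):
    if leq_Dstar(x, y):
        return x
    if leq_Dstar(y, x):
        return y
    m1, m3 = min(x[0], y[0]), max(x[2], y[2])
    return (m1, 1 - m1 - m3, m3)

a, b = (0.25, 0.0, 0.125), (0.5, 0.0, 0.375)   # incomparable w.r.t. <=_{D*}
assert not leq_Dstar(a, b) and not leq_Dstar(b, a)
j, m = join_Dstar(a, b), meet_Dstar(a, b)
assert j == (0.5, 0.0, 0.125)
assert m == (0.25, 0.375, 0.375)
assert leq_Dstar(a, j) and leq_Dstar(b, j)      # j is an upper bound
assert leq_Dstar(m, a) and leq_Dstar(m, b)      # m is a lower bound
```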


From the definition of ≤D∗ in (17) it is obvious that (L∗ , ≤L∗ ) can be embedded in a natural way
into (D∗ , ≤D∗ ); an example of an embedding is given by

embL∗ : L∗ −→ D∗ (18)
( x1 , x2 ) −→ ( x1 , 0, x2 ).

Let us now have a look at the relationship between the lattice (D∗ , ≤D∗ ) and the lattice
( L3 (I), ≤comp ) given by (2) and (15). It is not difficult to see that the function ψ : D∗ → L3 (I) given
by ψ(( x1 , x2 , x3 )) = ( x1 , x1 + x2 , 1 − x3 ) is a bijection, its inverse ψ−1 : L3 (I) → D∗ being given by
ψ−1 (( x1 , x2 , x3 )) = ( x1 , x2 − x1 , 1 − x3 ).
Observe that the bijection ψ is not order preserving: we have (0.2, 0.5, 0) ≤D∗ (0.3, 0, 0), but ψ((0.2, 0.5, 0)) = (0.2, 0.7, 1) and ψ((0.3, 0, 0)) = (0.3, 0.3, 1) are incomparable with respect to ≤comp .
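This can be verified numerically; the sketch below uses our own names and dyadic coordinates so that the floating-point computations are exact.

```python
def psi(x):
    """The bijection psi: D* -> L3(I), (x1, x2, x3) |-> (x1, x1 + x2, 1 - x3)."""
    return (x[0], x[0] + x[1], 1 - x[2])

def psi_inv(y):
    """Its inverse, (x1, x2, x3) |-> (x1, x2 - x1, 1 - x3)."""
    return (y[0], y[1] - y[0], 1 - y[2])

def leq_comp(p, q):
    return all(pi <= qi for pi, qi in zip(p, q))

x, y = (0.25, 0.5, 0.0), (0.5, 0.0, 0.0)      # x <=_{D*} y by (17)
assert psi_inv(psi(x)) == x and psi_inv(psi(y)) == y

u, v = psi(x), psi(y)                          # (0.25, 0.75, 1.0) and (0.5, 0.5, 1.0)
assert not leq_comp(u, v) and not leq_comp(v, u)   # the images are incomparable
```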
From ([105], Propositions 1 and 2) we have the following result:

Proposition 4. The lattices ( L3 (I), ≤comp ) and (D∗ , ≤D∗ ) are not isomorphic. However, we have

(i) The lattice ( L3 (I), ≤comp ) is isomorphic to the lattice (D∗ , ≤D3∗ ) with top element (1, 0, 0) and bottom
element (0, 0, 1), where the order ≤D3∗ is given by

( x1 , x2 , x3 ) ≤D3∗ (y1 , y2 , y3 ) ⇐⇒ x1 ≤ y1 AND x1 + x2 ≤ y1 + y2 AND x3 ≥ y3 .

(ii) The lattice (D∗ , ≤D∗ ) is isomorphic to the lattice ( L3 (I), ≤ L3 (I) ) with top element (1, 1, 1) and bottom
element (0, 0, 0), where the order ≤ L3 (I) is given by

( x1 , x2 , x3 ) ≤ L3 (I) (y1 , y2 , y3 )
⇐⇒ ( x1 , x3 ) <comp (y1 , y3 ) OR (( x1 , x3 ) = (y1 , y3 ) AND x2 − x1 ≤ y2 − y1 ).

In summary, if a universe of discourse X is fixed, then a picture fuzzy subset A of X is based on the bounded lattice (D∗ , ≤D∗ ) defined in (16) and (17). It is characterized by its membership function μ_A^{D∗} : X → D∗ [16,100,106–109], where μ_A^{D∗} ( x ) = (μ A1 ( x ), μ A2 ( x ), μ A3 ( x )) ∈ D∗ for some functions μ A1 , μ A2 , μ A3 : X → I.


Clearly, the function μ A1 : X → I can be interpreted as the membership function of the fuzzy set
A1 ∈ F ( X ) and, analogously, μ A2 : X → I and μ A3 : X → I as membership functions of the fuzzy sets
A2 and A3 , respectively. In other words, for each picture fuzzy set A we may write A = ( A1 , A2 , A3 ).
In this context, the values μ A1 ( x ), μ A2 ( x ) and μ A3 ( x ) are called the degree of positive membership,
the degree of neutral membership, and the degree of negative membership of the object x in the picture
fuzzy set A, respectively. The value 1 − (μ A1 ( x ) + μ A2 ( x ) + μ A3 ( x )) ∈ I is called the degree of refusal
membership of the object x in A.
If X is a fixed universe of discourse, then we denote the set of all picture fuzzy subsets of X by
FD∗ ( X ). Obviously, for two picture fuzzy sets A, B ∈ FD∗ ( X ) the assertion A ⊆D∗ B is equivalent to (μ A1 , μ A2 , μ A3 ) ≤D∗ (μ B1 , μ B2 , μ B3 ), i.e., (μ A1 ( x ), μ A2 ( x ), μ A3 ( x )) ≤D∗ (μ B1 ( x ), μ B2 ( x ), μ B3 ( x )) for all x ∈ X, and the membership function of the complement A′ of a picture fuzzy set A ∈ FD∗ ( X ) with membership function μ_A^{D∗} = (μ A1 , μ A2 , μ A3 ) is given by μ_{A′}^{D∗} = (μ A3 , μ A2 , μ A1 ).
This means that (FD∗ ( X ), ⊆D∗ ) is a bounded lattice with bottom element ∅ = (∅, ∅, X ) and top element X = ( X, ∅, ∅), and it is isomorphic to the product lattice ((D∗ ) X , ≤comp ) of all functions from X to D∗ (clearly, μ_A^{D∗} ≤comp μ_B^{D∗} means here μ_A^{D∗} ( x ) ≤D∗ μ_B^{D∗} ( x ) for all x ∈ X).

As a consequence, the lattice (FL∗ ( X ), ⊆L∗ ) of all L∗ -fuzzy subsets of X can be embedded into the lattice (FD∗ ( X ), ⊆D∗ ) of all picture fuzzy subsets of X via

embFL∗ ( X ) : FL∗ ( X ) −→ FD∗ ( X )


( A+ , A− ) −→ ( A+ , ∅, A− ),

and, using the embedding embL∗ : L∗ → D∗ defined in (18), the product lattice ((L∗ ) X , ≤comp ) can be
embedded into the product lattice ((D∗ ) X , ≤comp ).
We recognize a chain of subsets of X of increasing generality and complexity: crisp sets P ( X ),
fuzzy sets F ( X ), L∗ -fuzzy sets FL∗ ( X ), and picture fuzzy sets FD∗ ( X ). This corresponds to the
increasing complexity and dimensionality of the lattices of truth values (2, ≤), (I, ≤), (L∗ , ≤L∗ ),
and (D∗ , ≤D∗ ). The commutative diagram in Figure 3 visualizes the relationship between these
types of (fuzzy) sets and their respective membership functions, and also of the corresponding lattices
of truth values.
The content of this subsection also makes clear that the situation in the case of three-dimensional
sets of truth values is much more complex than for the two-dimensional truth values considered before.
In Proposition 3, we have seen that several classes of fuzzy sets with two-dimensional sets of
truth values are isomorphic to each other, while, in the case of three-dimensional truth values, we have
given a number of lattices of truth values that are not isomorphic to each other.
Obviously, continuing in the series of generalizations from I over L∗ to D∗ , for any arity n ∈ N
one can define a carrier

D∗n = { ( x1 , . . . , xn ) ∈ In | ∑_{i=1}^{n} xi ≤ 1 }

and equip it with some order  such that (D∗n , ) is a bounded lattice with top element (1, 0, ..., 0) and
bottom element (0, ..., 0, 1). The problematic question is whether such a generalization is meaningful
and can be used to model some real problem.


If each arrow denotes an embedding, an epimorphism, or an isomorphism, as appropriate, and if the homomorphisms are defined by

embI ( x ) = ( x, 1 − x ), conL∗ ((α1 , α2 )) = (α1 · 1 X , α2 · 1 X ),


embL∗ (( x1 , x2 )) = ( x1 , 0, x2 ), conD∗ ((α1 , α2 , α3 )) = (α1 · 1 X , α2 · 1 X , α3 · 1 X ),
embIX ( f ) = ( f , 1 − f ), emb(L∗ )X ((μ A+ , μ A− )) = (μ A+ , 1∅ , μ A− ),
embF ( X ) ( A) = ( A, A′ ), embFL∗ ( X ) (( A+ , A− )) = ( A+ , ∅, A− ),
π α ( f ) = f ( α ), memI ( A) = μ A ,
ind( A) = 1 A , memL∗ ( A) = (memI ( A+ ), memI ( A− )),
con(α) = α · 1 X , memD∗ ( A) = (memI ( A+ ), memI ( A(n) ), memI ( A− )),

then we obtain the following commutative diagram:

2 −→ I −→ L∗ −→ D∗ (via id2 , embI , embL∗ )
2X −→ IX −→ (L∗ ) X −→ (D∗ ) X (via id2X , embIX , emb(L∗ )X )
P ( X ) −→ F ( X ) −→ FL∗ ( X ) −→ FD∗ ( X ) (via idP ( X ) , embF ( X ) , embFL∗ ( X ) )

with the vertical maps πα (resp. π x ) and con (resp. conL∗ , conD∗ ) connecting each set of truth values with the corresponding function space, and ind, memI , memL∗ , and memD∗ connecting P ( X ), F ( X ), FL∗ ( X ), and FD∗ ( X ) with the respective function spaces.
L

Figure 3. Crisp sets, fuzzy sets, L∗ -fuzzy sets, and picture fuzzy sets, and the corresponding sets of
truth values.

4. Discussion: Isomorphisms and Questionable Notations


In this section, we first mention some further consequences of isomorphic lattices for the
construction of logical and other connectives, and then we argue why, in our opinion, notations
like “intuitionistic” fuzzy sets and “Pythagorean” fuzzy sets are questionable and why it would be
better to avoid them.

4.1. Isomorphic Lattices: More Consequences


From Propositions 1 and 2, we know that the bounded lattice ( L2 (I), ≤comp ) is isomorphic to each
of the lattices (L∗ , ≤L∗ ), (I(I), ≤I(I) ), and ( P∗ , ≤L∗ ).
Many results for and constructions of operations on the lattices (L∗ , ≤L∗ ), (I(I), ≤I(I) ),
and ( P∗ , ≤L∗ ), and, subsequently, for L∗ -fuzzy sets (“intuitionistic” fuzzy sets), interval-valued fuzzy
sets, and “Pythagorean” fuzzy sets are a consequence of a rather general result for operations on
the lattice ( L2 (I), ≤comp ) and, because of the isomorphisms given in Propositions 1 and 2, they
automatically can be carried over to the isomorphic lattices (L∗ , ≤L∗ ), (I(I), ≤I(I) ), and ( P∗ , ≤L∗ ).
The following result makes use of the fact that ( L2 (I), ≤comp ) is a sublattice of the product
lattice (I × I, ≤comp ) and is based on [63]. It can be verified in a straightforward way by checking the
required properties:


Proposition 5. Let F1 , F2 : I × I → I be two functions such that F1 ≤ F2 , i.e., for each ( x1 , x2 ) ∈ I × I we


have F1 (( x1 , x2 )) ≤ F2 (( x1 , x2 )), and consider the function F : L2 (I) × L2 (I) → L2 (I) given by

F (( x1 , x2 ), (y1 , y2 )) = ( F1 (( x1 , y1 )), F2 (( x2 , y2 ))).

Then we have:

(i) if F1 and F2 are two binary aggregation functions then the function F is a binary aggregation function
on L2 (I);
(ii) if F1 and F2 are two triangular norms then the function F is a triangular norm on L2 (I);
(iii) if F1 and F2 are two triangular conorms then the function F is a triangular conorm on L2 (I);
(iv) if F1 and F2 are two uninorms then the function F is a uninorm on L2 (I);
(v) if F1 and F2 are two nullnorms then the function F is a nullnorm on L2 (I).
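The construction of Proposition 5 is easy to instantiate. The sketch below (names ours; dyadic test values chosen for exact float arithmetic) pairs the Łukasiewicz t-norm with the minimum, which satisfies the required condition F1 ≤ F2 pointwise.

```python
def t_min(a, b):
    return min(a, b)

def t_luk(a, b):
    """Lukasiewicz t-norm; t_luk <= t_min pointwise, as Proposition 5 requires."""
    return max(a + b - 1.0, 0.0)

def pair_op(F1, F2):
    """F((x1, x2), (y1, y2)) = (F1(x1, y1), F2(x2, y2)) as in Proposition 5."""
    return lambda p, q: (F1(p[0], q[0]), F2(p[1], q[1]))

T = pair_op(t_luk, t_min)

p, q = (0.25, 0.75), (0.5, 0.875)            # elements of L2(I): x1 <= x2
r = T(p, q)
assert r == (0.0, 0.75) and r[0] <= r[1]     # the result lies in L2(I) again
assert T(p, (1.0, 1.0)) == p                 # (1, 1) is the neutral element
assert T(p, q) == T(q, p)                    # commutativity is inherited
```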

Not all t-(co)norms, uninorms and nullnorms on the lattice ( L2 (I), ≤comp ) can be obtained by means of Proposition 5, as the following example shows (see [110], Theorem 5):

Example 3. Let T : I × I → I be a t-norm on the unit interval I. Then, for each α ∈ I \ {1}, the function Tα : L2 (I) × L2 (I) → L2 (I) defined by

Tα (( x1 , x2 ), (y1 , y2 )) = ( T ( x1 , y1 ), max( T (α, T ( x2 , y2 )), T ( x1 , y2 ), T ( x2 , y1 )))

is a t-norm on L2 (I) which cannot be obtained by applying Proposition 5.

The characterization of such connectives on ( L2 (I), ≤comp ) is an interesting problem that has been investigated in several papers (e.g., in [110–120]). Again, each of these results is automatically valid for connectives on the isomorphic lattices (L∗ , ≤L∗ ), (I(I), ≤I(I) ), and ( P∗ , ≤L∗ ).
The result of Proposition 5(i) can be carried over to the n-dimensional case in a straightforward way:

Corollary 1. Let A1 , A2 : In → I be two n-ary aggregation functions such that A1 ≤ A2 , i.e., for
each ( x1 , x2 , . . . , xn ) ∈ In we have A1 (( x1 , x2 , . . . , xn )) ≤ A2 (( x1 , x2 , . . . , xn )). Then also the function
A : ( L2 (I))n → L2 (I) given by

A(( x1 , y1 ), ( x2 , y2 ), . . . , ( xn , yn )) = (A1 ( x1 , x2 , . . . , xn ), A2 (y1 , y2 , . . . , yn ))

is an n-ary aggregation function on L2 (I).
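Corollary 1 can likewise be sketched in a few lines (illustrative only; the arithmetic mean and the maximum, which satisfy mean ≤ max pointwise, and all names are our own choices).

```python
def pair_agg(A1, A2):
    """A((x1, y1), ..., (xn, yn)) = (A1(x1, ..., xn), A2(y1, ..., yn)),
    as in Corollary 1; requires A1 <= A2 pointwise."""
    def A(*pairs):
        xs, ys = zip(*pairs)
        return (A1(xs), A2(ys))
    return A

mean = lambda xs: sum(xs) / len(xs)     # an n-ary aggregation function
A = pair_agg(mean, max)                 # mean <= max pointwise

r = A((0.0, 0.5), (0.5, 0.5), (1.0, 1.0))
assert r == (0.5, 1.0) and r[0] <= r[1]          # the result lies in L2(I)
assert A((0.0, 0.0), (0.0, 0.0)) == (0.0, 0.0)   # boundary conditions
assert A((1.0, 1.0), (1.0, 1.0)) == (1.0, 1.0)
```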

4.2. The Case of “Intuitionistic” Fuzzy Sets


As already mentioned, L∗ -fuzzy sets have been called “intuitionistic” fuzzy sets in [6,7,84] and in
a number of other papers (e.g., in [86,92,116,117,121–147]). In ([7], p. 87) K. T. Atanassov points out

[. . . ] the logical law of the excluded middle is not valid, similarly to the case in intuitionistic
mathematics. Herein emerges the name of that set. [. . . ]

Looking at Zadeh’s first paper on fuzzy sets [1] one readily sees that the elements of F ( X ) also
violate the law of the excluded middle if the unit interval I is equipped with the standard order
reversing involution and if the t-norm min and the t-conorm max are used to model intersection and
union of elements of F ( X ), respectively. In other words, the violation of the law of the excluded
middle is no specific feature of the L∗ -fuzzy sets.
A short look at the history of mathematics and logic at the beginning of the 20th century
shows that the philosophy of intuitionism goes back to the work of the Dutch mathematician L. E. J.
Brouwer who suggested and discussed (for the first time 1912 in his inaugural address at the
University of Amsterdam [148]) a foundation of mathematics independent of the law of excluded


middle (see also [149–157]), a proposal eventually leading to a major controversy with the German
mathematician D. Hilbert [158–160] (compare also [161]).
There are only a few papers (most remarkably, those by G. Takeuti and S. Titani [162,163]) where the
original concept of intuitionistic logic was properly extended to the fuzzy case (see also [164–168])—here
the use of the term “intuitionistic” fuzzy set is fully justified (see [169]).
As a consequence, the use of the name “intuitionistic” fuzzy sets in [6,7,84] and in a number
of other papers in the same spirit has been criticized (mainly in [169–172]—compare Atanassov’s
reply [173] where he defended his original naming) because of its lack of relationship with the original
concept of intuitionism and intuitionistic logic.
Here are the main arguments against using the term “intuitionistic” fuzzy sets in the context of
L∗ -fuzzy sets, as given in [169]:

• the mere fact that the law of the excluded middle is violated in the case of L∗ -fuzzy sets does not
justify calling them “intuitionistic” (also the fuzzy sets in the sense of [1] do not satisfy the law of
the excluded middle, in general); moreover (see [53,170,174,175]), the use of an order reversing
involution for L∗ -fuzzy sets contradicts intuitionistic logic [176]:

[. . . ] the connectives of IFS theory violate properties of intuitionistic logic by validating
the double negation (involution) axiom [. . . ], which is not valid in intuitionistic logic.
(Recall that axioms of intuitionistic logic extended by the axiom of double negation
imply classical logic, and thus imply excluded middle [. . . ])

• intuitionistic logic has a close relationship to constructivism:

[. . . ] the philosophical ideas behind intuitionism in general, and intuitionistic
mathematics and intuitionistic logic in particular have a strong tendency toward
constructivist points of view. There are no relationship between these ideas and the
basic intuitive ideas of IFS theory [. . . ]

The redundancy of the names “intuitionistic” fuzzy sets, “L∗ -fuzzy sets” and “interval-valued
fuzzy sets” is also mentioned by J. Gutiérrez García and S. E. Rodabaugh in the abstract of [172]:
. . . (1) the term “intuitionistic” in these contexts is historically inappropriate given the standard
mathematical usage of “intuitionistic”; and (2), at every level of existence—powerset level, topological
fibre level, categorical level—interval-valued sets, [. . . ], and “intuitionistic” fuzzy sets [. . . ] are
redundant . . .

Also in a more recent paper by H. Bustince et al. ([5], p. 189) one can find an extensive discussion
of the “terminological problem with the name intuitionistic”, and the correctness of the notion chosen
in [162,163] is explicitly acknowledged.
To summarize, the name “intuitionistic” in the context of L∗ -fuzzy sets is not compatible with the
meaning of this term in the history of mathematics, and it would be better to avoid it.
Instead, because of the isomorphism between the lattice (L∗ , ≤L∗ ) and the lattice (I(I), ≤I(I) ) of
all closed subintervals of the unit interval I, it is only a matter of personal taste and of the meaning
given to the corresponding fuzzy sets to use one of the terms “L∗ -fuzzy sets” or “interval-valued
fuzzy sets”.
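The isomorphism behind this interchangeability is simply (x, y) ↦ [x, 1 − y]. A minimal sketch follows; since the defining formulas for L∗ and its order are not repeated here, the standard definitions from the literature are assumed (L∗ = {(x, y) ∈ [0, 1]² : x + y ≤ 1} with (x₁, y₁) ≤L∗ (x₂, y₂) iff x₁ ≤ x₂ and y₁ ≥ y₂), and the function names are ours:

```python
# Sketch of the order isomorphism between the lattice L* and the lattice
# I(I) of closed subintervals of the unit interval I = [0, 1].
# Assumed (standard) definitions:
#   L* = {(x, y) in [0,1]^2 : x + y <= 1}, (x1,y1) <= (x2,y2) iff x1 <= x2, y1 >= y2;
#   [a1,b1] <= [a2,b2] iff a1 <= a2 and b1 <= b2 (componentwise on endpoints).

def to_interval(pair):
    """Map (x, y) in L* to the closed interval [x, 1 - y]."""
    x, y = pair
    assert 0.0 <= x and 0.0 <= y and x + y <= 1.0, "not an element of L*"
    return (x, 1.0 - y)

def from_interval(interval):
    """Inverse map: [a, b] goes back to (a, 1 - b) in L*."""
    a, b = interval
    assert 0.0 <= a <= b <= 1.0, "not a closed subinterval of [0, 1]"
    return (a, 1.0 - b)

def leq_lstar(p, q):
    return p[0] <= q[0] and p[1] >= q[1]

def leq_interval(i, j):
    return i[0] <= j[0] and i[1] <= j[1]

p, q = (0.2, 0.5), (0.4, 0.1)
assert from_interval(to_interval(p)) == p          # bijection (exact here)
assert leq_lstar(p, q) == leq_interval(to_interval(p), to_interval(q))
```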

4.3. The Case of “Pythagorean” Fuzzy Sets


From Propositions 1 and 2 we know that the lattice ( P∗ , ≤L∗ ) given by (6) and (11) is isomorphic
to each of the lattices (L∗ , ≤L∗ ), ( L2 (I), ≤comp ), and (I(I), ≤I(I) ).
Recently, in [15,88,89] the term “Pythagorean” fuzzy set was coined and used, which turns out to
be a special case of an L-fuzzy set in the sense of [2], to be precise, an L-fuzzy set with P∗ as lattice of
truth values.


No justification for the choice of the adjective “Pythagorean” in this context was offered. One may
only guess that it alludes to the fact that, in the definition of the set P∗ in (11), a sum of two squares
occurs, indicating some similarity with the famous formula a² + b² = c² for right triangles—usually
attributed to the Greek philosopher and mathematician Pythagoras, who lived in the sixth century B.C.
The mutual isomorphism between the lattices ( P∗ , ≤L∗ ), (L∗ , ≤L∗ ), ( L2 (I), ≤comp ),
and (I(I), ≤I(I) ) implies that the families of L-fuzzy sets based on these lattices of truth values as
well as the families of their corresponding membership functions are also isomorphic, i.e., have the
same mathematical structure, as pointed out in Proposition 3. The identity of “Pythagorean” and
“intuitionistic” fuzzy sets was also noted in ([5], Corollary 8.1).
Therefore, each mathematical result for L∗ -fuzzy sets, interval-valued fuzzy sets, “intuitionistic”
fuzzy sets, etc., can be immediately translated into a result for “Pythagorean” fuzzy sets, and vice versa.
In other words, the term “Pythagorean” fuzzy sets is not only a fantasy name with no meaning
whatsoever, it is also useless, superfluous and even misleading, because it gives the impression of
investigating something new, while isomorphic concepts have already been studied for many years.
Therefore, the name “Pythagorean” fuzzy sets should be completely avoided.
Instead, because of the pairwise isomorphism between the lattices ( P∗ , ≤L∗ ), (L∗ , ≤L∗ ) and the
lattice (I(I), ≤I(I) ) of all closed subintervals of the unit interval I, it is only a matter of personal taste
to use one of the synonymous terms “L∗ -fuzzy sets” or “interval-valued fuzzy sets”—in any case,
this can be done without any problem.
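Presumably the isomorphism behind this identification is coordinate-wise squaring; this is the natural candidate (the precise formulas (6) and (11) are stated earlier in the paper, so this is an assumption of the sketch). Since t ↦ t² is strictly increasing on [0, 1], the map (x, y) ↦ (x², y²) sends P∗ = {(x, y) ∈ [0, 1]² : x² + y² ≤ 1} bijectively onto L∗ and preserves the order in both directions:

```python
import math

# Sketch of the (assumed) isomorphism between the "Pythagorean" lattice
# P* = {(x, y) in [0,1]^2 : x^2 + y^2 <= 1} and the lattice
# L* = {(x, y) in [0,1]^2 : x + y <= 1}, both carrying the order
# (x1, y1) <= (x2, y2)  iff  x1 <= x2 and y1 >= y2.

def pstar_to_lstar(pair):
    """phi(x, y) = (x^2, y^2): squaring is strictly increasing on [0, 1]."""
    x, y = pair
    assert x * x + y * y <= 1.0 + 1e-12, "not an element of P*"
    return (x * x, y * y)

def lstar_to_pstar(pair):
    """Inverse map: coordinate-wise square root."""
    x, y = pair
    assert x + y <= 1.0 + 1e-12, "not an element of L*"
    return (math.sqrt(x), math.sqrt(y))

def leq(p, q):
    """The common order on both lattices."""
    return p[0] <= q[0] and p[1] >= q[1]

p, q = (0.6, 0.8), (0.8, 0.6)          # both lie in P*
# phi preserves and reflects the order ...
assert leq(p, q) == leq(pstar_to_lstar(p), pstar_to_lstar(q))
# ... and is a bijection (round trip up to floating-point rounding):
rt = lstar_to_pstar(pstar_to_lstar(p))
assert all(abs(a - b) < 1e-12 for a, b in zip(rt, p))
```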

5. Concluding Remarks
As already mentioned, in the case of isomorphic lattices, any result known for one lattice can be
rewritten in a straightforward way for each isomorphic lattice.
As a typical situation, recall that (L∗ , ≤L∗ ) and ( L2 (I), ≤comp ) are isomorphic lattices. Then, for
each aggregation function A : In → I, the function A(2) : ( L2 (I))n → L2 (I) given by

A(2) (( x1 , y1 ), ( x2 , y2 ), . . . , ( xn , yn )) = (A( x1 , x2 , . . . , xn ), A(y1 , y2 , . . . , yn ))

is an aggregation function on L2 (I) (called representable in [85,110–120]), and any properties of A are
inherited by A(2) . For example, if A is a t-norm, t-conorm, uninorm, or nullnorm, so is A(2) . If A is an
averaging (conjunctive, disjunctive) aggregation function [46], so is A(2) , etc.
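The componentwise construction of A(2) is short enough to state as code. A minimal sketch with A = min (a t-norm) as the illustrating aggregation function; the helper name `lift` is ours:

```python
# Componentwise lifting of an aggregation function A on [0, 1] to the product
# lattice L^2(I) = [0, 1]^2 with the componentwise order, following the
# displayed formula: A(2)((x1,y1),...,(xn,yn)) = (A(x1,...,xn), A(y1,...,yn)).

def lift(A):
    """Return the representable aggregation function A(2) on L^2(I)."""
    def A2(*pairs):
        xs = [p[0] for p in pairs]
        ys = [p[1] for p in pairs]
        return (A(*xs), A(*ys))
    return A2

TM2 = lift(min)                      # lift the t-norm min
assert TM2((0.3, 0.8), (0.5, 0.4)) == (0.3, 0.4)
# Properties of A carry over: (1, 1) remains the neutral element of TM2.
assert TM2((0.3, 0.8), (1.0, 1.0)) == (0.3, 0.8)
```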
Due to the isomorphism between the lattices (L∗ , ≤L∗ ) and ( L2 (I), ≤comp ) (see Proposition 1),
one can easily, for each aggregation function A : In → I, define the corresponding aggregation function
A∗ : (L∗ )n → L∗ by

A∗ (( x1 , y1 ), . . . , ( xn , yn )) = (A( x1 , . . . , xn ), 1 − A(1 − y1 , . . . , 1 − yn )).

In doing so, it is superfluous to give long and tedious proofs that, whenever A is a t-norm
(t-conorm, uninorm, nullnorm) on I, then A∗ is a t-norm (t-conorm, uninorm, nullnorm) on L∗ .
Similarly, for any averaging aggregation function A [46] (e.g., a weighted quasi-arithmetic
mean based on an additive generator of some continuous Archimedean t-norm, such as the Einstein
t-norm [177]), evidently A∗ is also an averaging (thus idempotent) aggregation function on L∗ .
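The displayed formula for A∗ can be sketched the same way; again A = min serves as the illustrating aggregation function, and the helper name `to_lstar` is ours:

```python
# The corresponding construction on L*: the first coordinates are aggregated
# directly, the second coordinates through the standard negation x -> 1 - x,
# following the displayed formula:
#   A*((x1,y1),...,(xn,yn)) = (A(x1,...,xn), 1 - A(1-y1,...,1-yn)).

def to_lstar(A):
    def A_star(*pairs):
        xs = [p[0] for p in pairs]
        ys = [p[1] for p in pairs]
        return (A(*xs), 1.0 - A(*[1.0 - y for y in ys]))
    return A_star

TM_star = to_lstar(min)              # lift the t-norm min to L*
p, q = (0.25, 0.5), (0.5, 0.25)      # elements of L*: x + y <= 1
assert TM_star(p, q) == (0.25, 0.5)
# (1, 0) remains the neutral element on L*:
assert TM_star(p, (1.0, 0.0)) == p
```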
In the same way, one can easily re-define aggregation functions on the “Pythagorean” lattice
( P∗ , ≤L∗ ), and again there is no need to prove their properties (they are automatically inherited from
the original aggregation function A acting on I), as was done in, e.g., [178].
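The general mechanism at work here is transport of structure: conjugating an aggregation function on L∗ with an isomorphism onto the Pythagorean lattice yields an aggregation function on P∗ whose order-theoretic properties are inherited for free. A sketch, assuming coordinate-wise squaring as the isomorphism (function names are ours):

```python
import math

# Transport an aggregation function A* on L* to the "Pythagorean" lattice P*
# along the isomorphism phi(x, y) = (x^2, y^2) (assumed here; squaring each
# coordinate is strictly increasing on [0, 1], so phi is an order isomorphism).
# The transported function phi^{-1} o A* o phi inherits every order-theoretic
# property of A* -- no separate proofs are needed.

def transport(A_star):
    def A_pythagorean(*pairs):
        images = [(x * x, y * y) for (x, y) in pairs]      # apply phi
        u, v = A_star(*images)                             # aggregate in L*
        return (math.sqrt(u), math.sqrt(v))                # apply phi^{-1}
    return A_pythagorean

def meet_lstar(*pairs):
    """Lattice meet on L*: smallest membership, largest non-membership."""
    return (min(p[0] for p in pairs), max(p[1] for p in pairs))

meet_pstar = transport(meet_lstar)
r = meet_pstar((0.6, 0.8), (0.8, 0.6))
# The transported meet agrees with the direct meet on P*:
assert all(abs(a - b) < 1e-9 for a, b in zip(r, (0.6, 0.8)))
```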
Finally, let us stress that we are not against reasonable generalizations of fuzzy sets in the sense
of [1], in particular if they prove to be useful in certain applications.
However, as one of the referees for this paper noted, “the crucial point is: not to introduce the
same under different name” and “not to re-prove the same [. . . ] facts”. Therefore we have underlined
that it is superfluous to (re-)prove “new” results for isomorphic lattices when the corresponding results
are already known for at least one of the (already existing) isomorphic lattices. Also, we will continue


to argue against “new” fantasy names for known mathematical objects and against the (ab)use of
established (historical) mathematical notions in an improper context.

Author Contributions: These authors contributed equally to this work.


Acknowledgments: We gratefully acknowledge the support by the “Technologie-Transfer-Förderung” of the
Upper Austrian Government (Wi-2014-200710/13-Kx/Kai). The second author was also supported by the Slovak
grant APVV-14-0013. Finally, we would like to thank the four anonymous referees for their valuable comments.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
2. Goguen, J.A. L-fuzzy sets. J. Math. Anal. Appl. 1967, 18, 145–174. [CrossRef]
3. Zadeh, L.A. Quantitative fuzzy semantics. Inf. Sci. 1971, 3, 159–176. [CrossRef]
4. Zadeh, L.A. The concept of a linguistic variable and its applications to approximate reasoning. Part I.
Inform. Sci. 1975, 8, 199–251. [CrossRef]
5. Bustince, H.; Barrenechea, E.; Pagola, M.; Fernandez, J.; Xu, Z.; Bedregal, B.; Montero, J.; Hagras, H.;
Herrera, F.; De Baets, B. A historical account of types of fuzzy sets and their relationships. IEEE Trans.
Fuzzy Syst. 2016, 24, 179–194. [CrossRef]
6. Atanassov, K. Intuitionistic fuzzy sets. In VII ITKR’s Session, Sofia, June 1983; Sgurev, V., Ed.; Central Science
and Technology Library, Bulgarian Academy of Sciences: Sofia, Bulgaria, 1984.
7. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [CrossRef]
8. Deng, J.L. Introduction to grey system theory. J. Grey Syst. 1989, 1, 1–24.
9. Deng, J.L. Grey information space. J. Grey Syst. 1989, 1, 103–117.
10. Gau, W.-L.; Buehrer, D.J. Vague sets. IEEE Trans. Syst. Man Cybern. 1993, 23, 610–614. [CrossRef]
11. Bedregal, B.; Beliakov, G.; Bustince, H.; Calvo, T.; Mesiar, R.; Paternain, D. A class of fuzzy multisets with a
fixed number of memberships. Inf. Sci. 2012, 189, 1–17. [CrossRef]
12. Jahn, K.-U. Intervall-wertige Mengen. Math. Nachr. 1975, 68, 115–132. [CrossRef]
13. Sambuc, R. Fonctions ϕ-Floues: Application à L’aide au Diagnostic en Pathologie Thyroidienne. Ph.D. Thesis,
Université Aix-Marseille II, Faculté de Médecine, Marseille, France, 1975. (In French)
14. Grattan-Guinness, I. Fuzzy membership mapped onto intervals and many-valued quantities. Zeitschrift für
Mathematische Logik und Grundlagen der Mathematik 1976, 22, 149–160. [CrossRef]
15. Yager, R.R.; Abbasov, A.M. Pythagorean membership grades, complex numbers, and decision making. Int. J.
Intell. Syst. 2013, 28, 436–452. [CrossRef]
16. Cuong, B.C.; Kreinovich, V. Picture fuzzy sets—A new concept for computational intelligence problems.
In Proceedings of the Third World Congress on Information and Communication Technologies (WICT 2013),
Hanoi, Vietnam, 15–18 December 2013; pp. 1–6.
17. Cantor, G. Beiträge zur Begründung der transfiniten Mengenlehre. Art. I. Math. Ann. 1895, 46, 481–512.
(In German) [CrossRef]
18. Hausdorff, F. Grundzüge der Mengenlehre; Veit und Comp.: Leipzig, Germany, 1914. (In German)
19. Boole, G. The Mathematical Analysis of Logic, Being an Essay Towards a Calculus of Deductive Reasoning;
Macmillan, Barclay, & Macmillan: Cambridge, UK, 1847.
20. Boole, G. An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and
Probabilities; Walton: London, UK, 1854.
21. Menger, K. Probabilistic theories of relations. Proc. Natl. Acad. Sci. USA 1951, 37, 178–180. [CrossRef]
[PubMed]
22. Menger, K. Probabilistic geometry. Proc. Natl. Acad. Sci. USA 1951, 37, 226–229. [CrossRef] [PubMed]
23. Menger, K. Ensembles flous et fonctions aléatoires. C. R. Acad. Sci. Paris Sér. A 1951, 232, 2001–2003. (In French)
24. Klaua, D. Über einen Ansatz zur mehrwertigen Mengenlehre. Monatsb. Deutsch. Akad. Wiss. 1965, 7,
859–867. (In German)
25. Klaua, D. Einbettung der klassischen Mengenlehre in die mehrwertige. Monatsb. Deutsch. Akad. Wiss. 1967,
9, 258–272. (In German)
26. De Luca, A.; Termini, S. Entropy of L-fuzzy sets. Inf. Control 1974, 24, 55–73. [CrossRef]


27. Negoita, C.V.; Ralescu, D.A. Applications of Fuzzy Sets to Systems Analysis; John Wiley & Sons:
New York, NY, USA, 1975.
28. Negoita, C.V.; Ralescu, D.A. L-fuzzy sets and L-flou sets. Elektronische Informationsverarbeitung und Kybernetik
1976, 12, 599–605.
29. Höhle, U. Representation theorems for L-fuzzy quantities. Fuzzy Sets Syst. 1981, 5, 83–107. [CrossRef]
30. Sarkar, M. On L-fuzzy topological spaces. J. Math. Anal. Appl. 1981, 84, 431–442. [CrossRef]
31. Höhle, U. Probabilistic topologies induced by L-fuzzy uniformities. Manuscr. Math. 1982, 38, 289–323.
[CrossRef]
32. Rodabaugh, S.E. Connectivity and the L-fuzzy unit interval. Rocky Mt. J. Math. 1982, 12, 113–121. [CrossRef]
33. Rodabaugh, S.E. Fuzzy addition in the L-fuzzy real line. Fuzzy Sets Syst. 1982, 8, 39–51. [CrossRef]
34. Cerruti, U. Completion of L-fuzzy relations. J. Math. Anal. Appl. 1983, 94, 312–327. [CrossRef]
35. Sugeno, M.; Sasaki, M. L-fuzzy category. Fuzzy Sets Syst. 1983, 11, 43–64. [CrossRef]
36. Klein, A.J. Generalizing the L-fuzzy unit interval. Fuzzy Sets Syst. 1984, 12, 271–279. [CrossRef]
37. Kubiak, T. L-fuzzy normal spaces and Tietze Extension Theorem. J. Math. Anal. Appl. 1987, 125, 141–153.
[CrossRef]
38. Flüshöh, W.; Höhle, U. L-fuzzy contiguity relations and L-fuzzy closure operators in the case of completely
distributive, complete lattices L. Math. Nachr. 1990, 145, 119–134. [CrossRef]
39. Kudri, S.R.T. Compactness in L-fuzzy topological spaces. Fuzzy Sets Syst. 1994, 67, 329–336. [CrossRef]
40. Kudri, S.R.T.; Warner, M.W. L-fuzzy local compactness. Fuzzy Sets Syst. 1994, 67, 337–345. [CrossRef]
41. Kubiak, T. On L-Tychonoff spaces. Fuzzy Sets Syst. 1995, 73, 25–53. [CrossRef]
42. Ovchinnikov, S. On the image of an L-fuzzy group. Fuzzy Sets Syst. 1998, 94, 129–131. [CrossRef]
43. Kubiak, T.; Zhang, D. On the L-fuzzy Brouwer fixed point theorem. Fuzzy Sets Syst. 1999, 105, 287–292.
[CrossRef]
44. Jäger, G. A category of L-fuzzy convergence spaces. Quaest. Math. 2001, 24, 501–517. [CrossRef]
45. Birkhoff, G. Lattice Theory; American Mathematical Society: Providence, RI, USA, 1973.
46. Grabisch, M.; Marichal, J.-L.; Mesiar, R.; Pap, E. Aggregation Functions; Cambridge University Press:
Cambridge, UK, 2009.
47. Grabisch, M.; Marichal, J.-L.; Mesiar, R.; Pap, E. Aggregation functions: Means. Inf. Sci. 2011, 181, 1–22.
[CrossRef]
48. Pavelka, J. On fuzzy logic. II. Enriched residuated lattices and semantics of propositional calculi. Z. Math.
Log. Grundl. Math. 1979, 25, 119–134. [CrossRef]
49. Höhle, U. Commutative, residuated l-monoids. In Non-Classical Logics and Their Applications to Fuzzy
Subsets. A Handbook of the Mathematical Foundations of Fuzzy Set Theory; Höhle, U., Klement, E.P., Eds.;
Kluwer Academic Publishers: Dordrecht, The Netherlands, 1995; Chapter IV, pp. 53–106.
50. Hájek, P. Basic fuzzy logic and BL-algebras. Soft Comput. 1998, 2, 124–128. [CrossRef]
51. Hájek, P. Metamathematics of Fuzzy Logic; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1998.
52. Turunen, E. BL-algebras of basic fuzzy logic. Mathw. Soft Comput. 1999, 6, 49–61.
53. Esteva, F.; Godo, L.; Hájek, P.; Navara, M. Residuated fuzzy logics with an involutive negation.
Arch. Math. Log. 2000, 39, 103–124. [CrossRef]
54. Jenei, S. New family of triangular norms via contrapositive symmetrization of residuated implications.
Fuzzy Sets Syst. 2000, 110, 157–174. [CrossRef]
55. Jenei, S.; Kerre, E.E. Convergence of residuated operators and connective stability of non-classical logics.
Fuzzy Sets Syst. 2000, 114, 411–415. [CrossRef]
56. Esteva, F.; Godo, L. Monoidal t-norm based logic: Towards a logic for left-continuous t-norms. Fuzzy Sets Syst.
2001, 124, 271–288. [CrossRef]
57. Hájek, P. Observations on the monoidal t-norm logic. Fuzzy Sets Syst. 2002, 132, 107–112. [CrossRef]
58. Esteva, F.; Godo, L.; Montagna, F. Axiomatization of any residuated fuzzy logic defined by a continuous
t-norm. In Proceedings of the Congress of the International Fuzzy Systems Association (IFSA),
Istanbul, Turkey, 30 June–2 July 2003; pp. 172–179.
59. Mesiar, R.; Mesiarová, A. Residual implications and left-continuous t-norms which are ordinal sums of
semigroups. Fuzzy Sets Syst. 2004, 143, 47–57. [CrossRef]
60. Montagna, F. On the predicate logics of continuous t-norm BL-algebras. Arch. Math. Log. 2005, 44, 97–114.
[CrossRef]


61. Van Gasse, B.; Cornelis, C.; Deschrijver, G.; Kerre, E.E. A characterization of interval-valued residuated
lattices. Int. J. Approx. Reason. 2008, 49, 478–487. [CrossRef]
62. Van Gasse, B.; Cornelis, C.; Deschrijver, G.; Kerre, E.E. Triangle algebras: A formal logic approach to
interval-valued residuated lattices. Fuzzy Sets Syst. 2008, 159, 1042–1060. [CrossRef]
63. De Baets, B.; Mesiar, R. Triangular norms on product lattices. Fuzzy Sets Syst. 1999, 104, 61–75. [CrossRef]
64. Saminger-Platz, S.; Klement, E.P.; Mesiar, R. On extensions of triangular norms on bounded lattices.
Indag. Math. 2008, 19, 135–150. [CrossRef]
65. Menger, K. Statistical metrics. Proc. Natl. Acad. Sci. USA 1942, 28, 535–537. [CrossRef]
66. Schweizer, B.; Sklar, A. Espaces métriques aléatoires. C. R. Acad. Sci. Paris Sér. A 1958, 247, 2092–2094.
(In French)
67. Schweizer, B.; Sklar, A. Statistical metric spaces. Pac. J. Math. 1960, 10, 313–334. [CrossRef]
68. Schweizer, B.; Sklar, A. Probabilistic Metric Spaces; North-Holland: New York, NY, USA, 1983.
69. Klement, E.P.; Mesiar, R.; Pap, E. Triangular Norms; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000.
70. Klement, E.P.; Mesiar, R.; Pap, E. Triangular norms. Position paper I: Basic analytical and algebraic properties.
Fuzzy Sets Syst. 2004, 143, 5–26. [CrossRef]
71. Klement, E.P.; Mesiar, R.; Pap, E. Triangular norms. Position paper II: General constructions and
parameterized families. Fuzzy Sets Syst. 2004, 145, 411–438. [CrossRef]
72. Klement, E.P.; Mesiar, R.; Pap, E. Triangular norms. Position paper III: Continuous t-norms. Fuzzy Sets Syst.
2004, 145, 439–454. [CrossRef]
73. Alsina, C.; Frank, M.J.; Schweizer, B. Associative Functions: Triangular Norms and Copulas; World Scientific:
Singapore, 2006.
74. Yager, R.R.; Rybalov, A. Uninorm aggregation operators. Fuzzy Sets Syst. 1996, 80, 111–120. [CrossRef]
75. Calvo, T.; De Baets, B.; Fodor, J. The functional equations of Frank and Alsina for uninorms and nullnorms.
Fuzzy Sets Syst. 2001, 120, 385–394. [CrossRef]
76. Calvo, T.; Mayor, G.; Mesiar, R. (Eds.) Aggregation Operators. New Trends and Applications; Physica-Verlag:
Heidelberg, Germany, 2002.
77. Beliakov, G.; Pradera, A.; Calvo, T. Aggregation Functions: A Guide for Practitioners; Springer: Berlin/Heidelberg,
Germany, 2007.
78. Grabisch, M.; Marichal, J.-L.; Mesiar, R.; Pap, E. Aggregation functions: Construction methods, conjunctive,
disjunctive and mixed classes. Inf. Sci. 2011, 181, 23–43. [CrossRef]
79. Negoita, C.V.; Ralescu, D.A. Representation theorems for fuzzy concepts. Kybernetes 1975, 4, 169–174.
[CrossRef]
80. Shortliffe, E.H.; Buchanan, B.G. A model of inexact reasoning in medicine. Math. Biosci. 1975, 23, 351–379.
[CrossRef]
81. Shortliffe, E.H. Computer Based Medical Consultation—‘MYCIN’; Elsevier: New York, NY, USA, 1976.
82. Jackson, P. Introduction to Expert Systems; Addison-Wesley: Wokingham, UK, 1986.
83. Hájek, P.; Havránek, T.; Jiroušek, R. Uncertain Information Processing in Expert Systems; CRC Press:
Boca Raton, FL, USA, 1992.
84. Atanassov, K.T. Intuitionistic Fuzzy Sets; Physica-Verlag: Heidelberg, Germany, 1999.
85. Deschrijver, G.; Kerre, E.E. On the relationship between some extensions of fuzzy set theory. Fuzzy Sets Syst.
2003, 133, 227–235. [CrossRef]
86. Wang, G.-J.; He, Y.-Y. Intuitionistic fuzzy sets and L-fuzzy sets. Fuzzy Sets Syst. 2000, 110, 271–274. [CrossRef]
87. De Miguel, L.; Bustince, H.; Fernandez, J.; Induráin, E.; Kolesárová, A.; Mesiar, R. Construction of admissible
linear orders for interval-valued Atanassov intuitionistic fuzzy sets with an application to decision making.
Inf. Fusion 2016, 27, 189–197. [CrossRef]
88. Dick, S.; Yager, R.R.; Yazdanbakhsh, O. On Pythagorean and complex fuzzy set operations. IEEE Trans.
Fuzzy Syst. 2016, 24, 1009–1021. [CrossRef]
89. Yager, R.R. Pythagorean membership grades in multi-criteria decision making. IEEE Trans. Fuzzy Syst. 2014,
22, 958–965. [CrossRef]
90. Harding, J.; Walker, C.; Walker, E. The variety generated by the truth value algebra of type-2 fuzzy sets.
Fuzzy Sets Syst. 2010, 161, 735–749. [CrossRef]
91. Walker, C.; Walker, E. The algebra of fuzzy truth values. Fuzzy Sets Syst. 2005, 149, 309–347. [CrossRef]
92. Bustince, H.; Burillo, P. Vague sets are intuitionistic fuzzy sets. Fuzzy Sets Syst. 1996, 79, 403–405. [CrossRef]


93. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic: Analytic Synthesis & Synthetic Analysis;
American Research Press: Rehoboth, NM, USA, 1998.
94. Smarandache, F. A unifying field in logics: Neutrosophic logic. Multiple-Valued Log. 2002, 8, 385–482.
95. Smarandache, F. Definition of neutrosophic logic—A generalization of the intuitionistic fuzzy logic.
In Proceedings of the 3rd Conference of the European Society for Fuzzy Logic and Technology,
Zittau, Germany, 10–12 September 2003; pp. 141–146.
96. Smarandache, F. Neutrosophic set—A generalization of the intuitionistic fuzzy set. In Proceedings of the
2006 IEEE International Conference on Granular Computing, Atlanta, GA, USA, 12–12 May 2006; pp. 38–42.
97. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single valued neutrosophic sets. In Multispace
& Multistructure. Neutrosophic Transdisciplinarity (100 Collected Papers of Sciences); Smarandache, F., Ed.;
North-European Scientific Publishers: Hanko, Finland, 2010; Volume IV, pp. 410–413.
98. Bedregal, B.; Mezzomo, I.; Reiser, R.H.S. n-Dimensional Fuzzy Negations. 2017. Available online: arXiv.org/
pdf/1707.08617v1 (accessed on 26 July 2017).
99. Bedregal, B.; Beliakov, G.; Bustince, H.; Calvo, T.; Fernández, J.; Mesiar, R. A characterization theorem for
t-representable n-dimensional triangular norms. In Eurofuse 2011. Workshop on Fuzzy Methods for Knowledge-Based
Systems; Melo-Pinto, P., Couto, P., Serôdio, C., Fodor, J., De Baets, B., Eds.; Springer: Berlin/Heidelberg, Germany,
2012; pp. 103–112.
100. Cuong, B.C.; Kreinovich, V.; Ngan, R.T. A classification of representable t-norm operators for picture
fuzzy sets. In Proceedings of the Eighth International Conference on Knowledge and Systems Engineering
(KSE 2016), Hanoi, Vietnam, 6–8 October 2016; pp. 19–24.
101. Cuong, B.C.; Ngan, R.T.; Ngoc, L.C. Some algebraic properties of picture fuzzy t-norms and picture fuzzy
t-conorms on standard neutrosophic sets. arXiv 2017, arXiv:1701.0144
102. Son, L.H.; Viet, P.V.; Hai, P.V. Picture inference system: A new fuzzy inference system on picture fuzzy set.
Appl. Intell. 2017, 46, 652–669. [CrossRef]
103. Bo, C.; Zhang, X. New operations of picture fuzzy relations and fuzzy comprehensive evaluation. Symmetry
2017, 9, 268. [CrossRef]
104. Wang, C.; Zhou, X.; Tu, H.; Tao, S. Some geometric aggregation operators based on picture fuzzy sets and
their application in multiple attribute decision making. Ital. J. Pure Appl. Math. 2017, 37, 477–492.
105. Klement, E.P.; Mesiar, R.; Stupňanová, A. Picture fuzzy sets and 3-fuzzy sets. In Proceedings of the 2018 IEEE
International Conference on Fuzzy Systems (FUZZ-IEEE), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 476–482.
106. Cuong, B.C. Picture fuzzy sets. J. Comput. Sci. Cybern. 2014, 30, 409–420.
107. Cuong, B.C.; Hai, P.V. Some fuzzy logic operators for picture fuzzy sets. In Proceedings of the Seventh
International Conference on Knowledge and Systems Engineering (KSE 2015), Ho Chi Minh City, Vietnam,
8–10 October 2015; pp. 132–137.
108. Cuong, B.C.; Ngan, R.T.; Hai, B.D. An involutive picture fuzzy negation on picture fuzzy sets and some
De Morgan triples. In Proceedings of the Seventh International Conference on Knowledge and Systems
Engineering (KSE 2015), Ho Chi Minh City, Vietnam, 8–10 October 2015; pp. 126–131.
109. Thong, P.H.; Son, L.H. Picture fuzzy clustering: A new computational intelligence method. Soft Comput.
2016, 20, 3549–3562. [CrossRef]
110. Deschrijver, G.; Kerre, E.E. Classes of intuitionistic fuzzy t-norms satisfying the residuation principle. Int. J.
Uncertain. Fuzziness Knowl.-Based Syst. 2003, 11, 691–709. [CrossRef]
111. Deschrijver, G. The Archimedean property for t-norms in interval-valued fuzzy set theory. Fuzzy Sets Syst.
2006, 157, 2311–2327. [CrossRef]
112. Deschrijver, G. Arithmetic operators in interval-valued fuzzy set theory. Inf. Sci. 2007, 177, 2906–2924.
[CrossRef]
113. Deschrijver, G. A representation of t-norms in interval-valued L-fuzzy set theory. Fuzzy Sets Syst. 2008, 159,
1597–1618. [CrossRef]
114. Deschrijver, G. Characterizations of (weakly) Archimedean t-norms in interval-valued fuzzy set theory.
Fuzzy Sets Syst. 2009, 160, 778–801. [CrossRef]
115. Deschrijver, G.; Cornelis, C. Representability in interval-valued fuzzy set theory. Int. J. Uncertain. Fuzziness
Knowl.-Based Syst. 2007, 15, 345–361. [CrossRef]
116. Deschrijver, G.; Cornelis, C.; Kerre, E.E. On the representation of intuitionistic fuzzy t-norms and t-conorms.
IEEE Trans. Fuzzy Syst. 2004, 12, 45–61. [CrossRef]


117. Deschrijver, G.; Kerre, E.E. On the composition of intuitionistic fuzzy relations. Fuzzy Sets Syst. 2003, 136,
333–361. [CrossRef]
118. Deschrijver, G.; Kerre, E.E. Uninorms in L∗ -fuzzy set theory. Fuzzy Sets Syst. 2004, 148, 243–262. [CrossRef]
119. Deschrijver, G.; Kerre, E.E. Implicators based on binary aggregation operators in interval-valued fuzzy set
theory. Fuzzy Sets Syst. 2005, 153, 229–248. [CrossRef]
120. Deschrijver, G.; Kerre, E.E. Triangular norms and related operators in L∗ -fuzzy set theory. In Logical, Algebraic,
Analytic, and Probabilistic Aspects of Triangular Norms; Klement, E.P., Mesiar, R., Eds.; Elsevier: Amsterdam,
The Netherlands, 2005; Chapter 8, pp. 231–259.
121. Abbas, S.E. Intuitionistic supra fuzzy topological spaces. Chaos Solitons Fractals 2004, 21, 1205–1214.
[CrossRef]
122. Atanassov, K.; Gargov, G. Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349.
[CrossRef]
123. Atanassov, K.; Gargov, G. Elements of intuitionistic fuzzy logic. Part I. Fuzzy Sets Syst. 1998, 95, 39–52.
[CrossRef]
124. Atanassov, K.T. More on intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 33, 37–45. [CrossRef]
125. Atanassov, K.T. Remarks on the intuitionistic fuzzy sets. Fuzzy Sets Syst. 1992, 51, 117–118. [CrossRef]
126. Atanassov, K.T. New operations defined over the intuitionistic fuzzy sets. Fuzzy Sets Syst. 1994, 61, 137–142.
[CrossRef]
127. Atanassov, K.T. Operators over interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1994, 64, 159–174.
[CrossRef]
128. Atanassov, K.T. Remarks on the intuitionistic fuzzy sets—III. Fuzzy Sets Syst. 1995, 75, 401–402. [CrossRef]
129. Atanassov, K.T. An equality between intuitionistic fuzzy sets. Fuzzy Sets Syst. 1996, 79, 257–258. [CrossRef]
130. Atanassov, K.T. Remark on the intuitionistic fuzzy logics. Fuzzy Sets Syst. 1998, 95, 127–129. [CrossRef]
131. Atanassov, K.T. Two theorems for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 110, 267–269. [CrossRef]
132. Atanassova, L.C. Remark on the cardinality of the intuitionistic fuzzy sets. Fuzzy Sets Syst. 1995, 75, 399–400.
[CrossRef]
133. Ban, A.I.; Gal, S.G. Decomposable measures and information measures for intuitionistic fuzzy sets.
Fuzzy Sets Syst. 2001, 123, 103–117. [CrossRef]
134. Burillo, P.; Bustince, H. Construction theorems for intuitionistic fuzzy sets. Fuzzy Sets Syst. 1996, 84, 271–281.
[CrossRef]
135. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Sets Syst.
1996, 78, 305–316. [CrossRef]
136. Bustince, H. Construction of intuitionistic fuzzy relations with predetermined properties. Fuzzy Sets Syst.
2000, 109, 379–403. [CrossRef]
137. Bustince, H.; Burillo, P. Structures on intuitionistic fuzzy relations. Fuzzy Sets Syst. 1996, 78, 293–300.
[CrossRef]
138. Çoker, D. An introduction to intuitionistic fuzzy topological spaces. Fuzzy Sets Syst. 1997, 88, 81–89.
[CrossRef]
139. Çoker, D.; Demirci, M. An introduction to intuitionistic fuzzy topological spaces in Šostak’s sense. Busefal
1996, 67, 67–76.
140. De, S.K.; Biswas, R.; Roy, A.R. Some operations on intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 114,
477–484. [CrossRef]
141. De, S.K.; Biswas, R.; Roy, A.R. An application of intuitionistic fuzzy sets in medical diagnosis. Fuzzy Sets Syst.
2001, 117, 209–213. [CrossRef]
142. Demirci, M. Axiomatic theory of intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 110, 253–266. [CrossRef]
143. Lee, S.J.; Lee, E.P. The category of intuitionistic fuzzy topological spaces. Bull. Korean Math. Soc. 2000, 37,
63–76.
144. Mondal, T.K.; Samanta, S.K. On intuitionistic gradation of openness. Fuzzy Sets Syst. 2002, 131, 323–336.
[CrossRef]
145. Samanta, S.K.; Mondal, T.K. Intuitionistic gradation of openness: Intuitionistic fuzzy topology. Busefal 1997,
73, 8–17.
146. Szmidt, E.; Kacprzyk, J. Distances between intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 114, 505–518.
[CrossRef]


147. Szmidt, E.; Kacprzyk, J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2001, 118, 467–477. [CrossRef]
148. Brouwer, L.E.J. Intuitionism and formalism. Bull. Am. Math. Soc. 1913, 20, 81–96. [CrossRef]
149. Brouwer, L.E.J. Intuitionistische verzamelingsleer. Amst. Ak. Versl. 1921, 29, 797–802. (In Dutch)
150. Brouwer, L.E.J. Intuitionistische splitsing van mathematische grondbegrippen. Amst. Ak. Versl. 1923, 32,
877–880. (In Dutch)
151. Brouwer, L.E.J. Über die Bedeutung des Satzes vom ausgeschlossenen Dritten in der Mathematik,
insbesondere in der Funktionentheorie. J. Reine Angew. Math. 1925, 154, 1–7. (In German)
152. Brouwer, L.E.J. Zur Begründung der intuitionistischen Mathematik. I. Math. Ann. 1925, 93, 244–257. (In German)
[CrossRef]
153. Brouwer, L.E.J. Zur Begründung der intuitionistischen Mathematik. II. Math. Ann. 1926, 95, 453–472. (In German)
[CrossRef]
154. Brouwer, L.E.J. Zur Begründung der intuitionistischen Mathematik. III. Math. Ann. 1927, 96, 451–488. (In German)
[CrossRef]
155. Brouwer, L.E.J. Intuitionistische Betrachtungen über den Formalismus. Sitz. Preuß. Akad. Wiss. Phys.
Math. Kl. 1928, 48–52. (In German)
156. Brouwer, L.E.J. On the significance of the principle of excluded middle in mathematics, especially in function
theory. With two Addenda and corrigenda. In From Frege to Gödel. A Source Book in Mathematical Logic,
1879–1931; van Heijenoort, J., Ed.; Harvard University Press: Cambridge, MA, USA, 1967; pp. 334–345.
157. Van Heijenoort, J. From Frege to Gödel. A Source Book in Mathematical Logic, 1879–1931; Harvard University
Press: Cambridge, MA, USA, 1967.
158. Hilbert, D. Die Grundlagen der Mathematik. Vortrag, gehalten auf Einladung des Mathematischen Seminars
im Juli 1927 in Hamburg. Abh. Math. Semin. Univ. Hamb. 1928, 6, 65–85. (In German) [CrossRef]
159. Hilbert, D.; Bernays, P. Grundlagen der Mathematik. I; Springer: Berlin/Heidelberg, Germany, 1934. (In German)
160. Hilbert, D. The foundations of mathematics. In From Frege to Gödel. A Source Book in Mathematical Logic,
1879–1931; van Heijenoort, J., Ed.; Harvard University Press: Cambridge, MA, USA, 1967; pp. 464–480.
161. Kolmogorov, A.N. On the principle of excluded middle. In From Frege to Gödel. A Source Book in Mathematical
Logic, 1879–1931; van Heijenoort, J., Ed.; Harvard University Press: Cambridge, MA, USA, 1967; pp. 414–437.
162. Takeuti, G.; Titani, S. Intuitionistic fuzzy logic and intuitionistic fuzzy set theory. J. Symb. Log. 1984, 49,
851–866. [CrossRef]
163. Takeuti, G.; Titani, S. Globalization of intuitionistic set theory. Ann. Pure Appl. Log. 1987, 33, 195–211.
[CrossRef]
164. Baaz, M.; Fermüller, C.G. Intuitionistic counterparts of finitely-valued logics. In Proceedings of the 26th
International Symposium on Multiple-Valued Logic, Santiago de Compostela, Spain, 19–31 January 1996;
pp. 136–141.
165. Ciabattoni, A. A proof-theoretical investigation of global intuitionistic (fuzzy) logic. Arch. Math. Log. 2005,
44, 435–457. [CrossRef]
166. Gottwald, S. Universes of fuzzy sets and axiomatizations of fuzzy set theory. I. Model-based and axiomatic
approaches. Stud. Log. 2006, 82, 211–244. [CrossRef]
167. Gottwald, S. Universes of fuzzy sets and axiomatizations of fuzzy set theory. II. Category theoretic
approaches. Stud. Log. 2006, 84, 23–50. [CrossRef]
168. Hájek, P.; Cintula, P. On theories and models in fuzzy predicate logics. J. Symb. Log. 2006, 71, 863–880.
[CrossRef]
169. Dubois, D.; Gottwald, S.; Hajek, P.; Kacprzyk, J.; Prade, H. Terminological difficulties in fuzzy set theory—The
case of “Intuitionistic Fuzzy Sets”. Fuzzy Sets Syst. 2005, 156, 485–491. [CrossRef]
170. Cattaneo, G.; Ciucci, D. Generalized negations and intuitionistic fuzzy sets—A criticism to a widely used
terminology. In Proceedings of the 3rd Conference of the European Society for Fuzzy Logic and Technology,
Zittau, Germany, 10–12 September 2003; pp. 147–152.
171. Grzegorzewski, P.; Mrówka, E. Some notes on (Atanassov’s) intuitionistic fuzzy sets. Fuzzy Sets Syst. 2005,
156, 492–495. [CrossRef]
172. Gutiérrez García, J.; Rodabaugh, S.E. Order-theoretic, topological, categorical redundancies of
interval-valued sets, grey sets, vague sets, interval-valued “intuitionistic” sets, “intuitionistic” fuzzy sets
and topologies. Fuzzy Sets Syst. 2005, 156, 445–484. [CrossRef]


173. Atanassov, K. Answer to D. Dubois, S. Gottwald, P. Hajek, J. Kacprzyk and H. Prade’s paper “Terminological
difficulties in fuzzy set theory—The case of “Intuitionistic Fuzzy Sets””. Fuzzy Sets Syst. 2005, 156, 496–499.
[CrossRef]
174. Butnariu, D.; Klement, E.P.; Mesiar, R.; Navara, M. Sufficient triangular norms in many-valued logics with
standard negation. Arch. Math. Log. 2005, 44, 829–849. [CrossRef]
175. Cintula, P.; Klement, E.P.; Mesiar, R.; Navara, M. Residuated logics based on strict triangular norms with an
involutive negation. Math. Log. Quart. 2006, 52, 269–282. [CrossRef]
176. Kleene, S.C. Introduction to Metamathematics; North-Holland: Amsterdam, The Netherlands, 1952.
177. Xia, M.; Xu, Z.; Zhu, B. Some issues on intuitionistic fuzzy aggregation operators based on Archimedean
t-conorm and t-norm. Knowl.-Based Syst. 2012, 31, 78–88. [CrossRef]
178. Rahman, K.; Abdullah, S.; Ahmed, R.; Ullah, M. Pythagorean fuzzy Einstein weighted geometric aggregation
operator and their application to multiple attribute group decision making. J. Intell. Fuzzy Syst. 2017, 33,
635–647. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
On the Most Extended Modal Operator of First Type
over Interval-Valued Intuitionistic Fuzzy Sets
Krassimir Atanassov 1,2
1 Department of Bioinformatics and Mathematical Modelling, Institute of Biophysics and Biomedical
Engineering, Bulgarian Academy of Sciences, 105 Acad. G. Bonchev Str., 1113 Sofia, Bulgaria; [email protected]
2 Intelligent Systems Laboratory, Prof. Asen Zlatarov University, 8010 Bourgas, Bulgaria

Received: 30 May 2018; Accepted: 4 July 2018; Published: 13 July 2018

Abstract: The definition of the most extended modal operator of first type over interval-valued
intuitionistic fuzzy sets is given, and some of its basic properties are studied.

Keywords: interval-valued intuitionistic fuzzy set; intuitionistic fuzzy set; modal operator

1. Introduction
Intuitionistic fuzzy sets (IFSs; see [1–5]) were introduced in 1983 as an extension of the fuzzy
sets defined by Lotfi Zadeh (4.2.1921–6.9.2017) in [6]. In recent years, the IFSs have also been
extended: intuitionistic L -fuzzy sets [7], IFSs of second [8] and nth [9–12] types, temporal IFSs [4,5,13],
multidimensional IFSs [5,14], and others. Interval-valued intuitionistic fuzzy sets (IVIFSs) are the most
thoroughly described extension of IFSs. They appeared in 1988, when Georgi Gargov (7.4.1947–9.11.1996)
and the author read Gorzalczany's paper [15] on interval-valued fuzzy sets (IVFSs). The idea of the IVIFS
was announced in [16,17] and extended in [4,18], where it is proved that IFSs and IVIFSs are equipollent
generalizations of the notion of the fuzzy set.
Many relations, operations, and operators (more than those over IFSs) are defined over IVIFSs.
Here, similarly to the IFS case, the standard modal operators □ and ♦ have analogues, but their
extensions—the intuitionistic fuzzy extended modal operators of the first type—already have two
different forms. In the IFS case, there is an operator that includes all other extended modal
operators as partial cases. In the present paper, we construct a similar operator for the case of
IVIFSs and study its properties.

2. Preliminaries
Let us have a fixed universe E and its subset A. The set

A = {⟨x, M_A(x), N_A(x)⟩ | x ∈ E},

where M_A(x) ⊂ [0, 1] and N_A(x) ⊂ [0, 1] are closed intervals such that for all x ∈ E:

sup M_A(x) + sup N_A(x) ≤ 1, (1)

is called an IVIFS, and the functions M_A : E → P([0, 1]) and N_A : E → P([0, 1]) represent the set of degrees of
membership (validity, etc.) and the set of degrees of non-membership (non-validity, etc.) of the element x ∈ E to a
fixed set A ⊆ E, where P(Z) = {Y | Y ⊆ Z} for an arbitrary set Z.
Obviously, both intervals have the representation:

M A ( x ) = [inf M A ( x ), sup M A ( x )],

Mathematics 2018, 6, 123; doi:10.3390/math6070123 www.mdpi.com/journal/mathematics



NA ( x ) = [inf NA ( x ), sup NA ( x )].

Therefore, when

inf M A ( x ) = sup M A ( x ) = μ A ( x ) and inf NA ( x ) = sup NA ( x ) = νA ( x ),

the IVIFS A is transformed to an IFS.


We must mention that in [19,20] the second geometrical interpretation of the IFSs is given
(see Figure 1).
IVIFSs have geometrical interpretations similar to, but more complex than, those of the IFSs.
For example, the analogue of the geometrical interpretation from Figure 1 is shown in Figure 2.
Obviously, each IVFS A can be represented by an IVIFS as

A = {⟨x, M_A(x), N_A(x)⟩ | x ∈ E}
  = {⟨x, M_A(x), [1 − sup M_A(x), 1 − inf M_A(x)]⟩ | x ∈ E}.

[Figure: the triangle with vertices (0, 0), (1, 0) and (0, 1); the element x ∈ E is represented by the point with coordinates (μ_A(x), ν_A(x)).]

Figure 1. The second geometrical interpretation of an intuitionistic fuzzy set (IFS).

[Figure: the same triangle; the element x ∈ E is represented by a small rectangle with horizontal side [inf M_A, sup M_A] and vertical side [inf N_A, sup N_A], with the points 1 − sup M_A, 1 − inf M_A, 1 − sup N_A and 1 − inf N_A marked on the axes.]

Figure 2. The second geometrical interpretation of an interval-valued intuitionistic fuzzy set (IVIFS).

26
Mathematics 2018, 6, 123

The geometrical interpretation of the IVFS A is shown in Figure 3. It has the form of a segment
lying on the triangle's hypotenuse.

[Figure: the triangle with the IVFS element represented by a segment lying on the hypotenuse, between the points with first coordinates inf M_A and sup M_A.]

Figure 3. The second geometrical interpretation of an IVFS.

Modal-type operators are defined similarly to those defined for IFSs, but here they have two
forms: shorter and longer. The shorter form is:

□A = {⟨x, M_A(x), [inf N_A(x), 1 − sup M_A(x)]⟩ | x ∈ E},

♦A = {⟨x, [inf M_A(x), 1 − sup N_A(x)], N_A(x)⟩ | x ∈ E},

D_α(A) = {⟨x, [inf M_A(x), sup M_A(x) + α(1 − sup M_A(x) − sup N_A(x))],
[inf N_A(x), sup N_A(x) + (1 − α)(1 − sup M_A(x) − sup N_A(x))]⟩ | x ∈ E},

F_{α,β}(A) = {⟨x, [inf M_A(x), sup M_A(x) + α(1 − sup M_A(x) − sup N_A(x))],
[inf N_A(x), sup N_A(x) + β(1 − sup M_A(x) − sup N_A(x))]⟩ | x ∈ E}, for α + β ≤ 1,

G_{α,β}(A) = {⟨x, [α inf M_A(x), α sup M_A(x)], [β inf N_A(x), β sup N_A(x)]⟩ | x ∈ E},

H_{α,β}(A) = {⟨x, [α inf M_A(x), α sup M_A(x)],
[inf N_A(x), sup N_A(x) + β(1 − sup M_A(x) − sup N_A(x))]⟩ | x ∈ E},

H*_{α,β}(A) = {⟨x, [α inf M_A(x), α sup M_A(x)],
[inf N_A(x), sup N_A(x) + β(1 − α sup M_A(x) − sup N_A(x))]⟩ | x ∈ E},

J_{α,β}(A) = {⟨x, [inf M_A(x), sup M_A(x) + α(1 − sup M_A(x) − sup N_A(x))],
[β inf N_A(x), β sup N_A(x)]⟩ | x ∈ E},

J*_{α,β}(A) = {⟨x, [inf M_A(x), sup M_A(x) + α(1 − sup M_A(x) − β sup N_A(x))],
[β inf N_A(x), β sup N_A(x)]⟩ | x ∈ E},

where α, β ∈ [0, 1].


Obviously, as in the case of IFSs, the operator D_α is an extension of the intuitionistic fuzzy forms
of the (standard) modal logic operators □ and ♦, and it is a partial case of F_{α,β}.
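As an illustration of the shorter form, F_{α,β} and its partial case D_α can be sketched on a single element; the interval encoding and all names here are my assumptions.

```python
def F(alpha, beta, M, N):
    """Shorter-form F_{alpha,beta} on one element: extends sup M by alpha
    and sup N by beta times the undetermined part pi = 1 - sup M - sup N."""
    assert alpha + beta <= 1
    (m_inf, m_sup), (n_inf, n_sup) = M, N
    pi = 1 - m_sup - n_sup
    return (m_inf, m_sup + alpha * pi), (n_inf, n_sup + beta * pi)

def D(alpha, M, N):
    """D_alpha is the partial case F_{alpha, 1 - alpha}."""
    return F(alpha, 1 - alpha, M, N)

M, N = (0.25, 0.5), (0.0, 0.25)        # pi = 1 - 0.5 - 0.25 = 0.25
print(D(0.5, M, N))  # ((0.25, 0.625), (0.0, 0.375))
```

For D_α the two new suprema always sum to 1 (here 0.625 + 0.375), since the whole undetermined part is distributed between the two intervals.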


The longer form of these operators (the operators □, ♦, and D do not have two forms—only the one
above) is (see [4]):

F_{α,β,γ,δ}(A) = {⟨x, [inf M_A(x) + α(1 − sup M_A(x) − sup N_A(x)),
sup M_A(x) + β(1 − sup M_A(x) − sup N_A(x))],
[inf N_A(x) + γ(1 − sup M_A(x) − sup N_A(x)),
sup N_A(x) + δ(1 − sup M_A(x) − sup N_A(x))]⟩ | x ∈ E}, where β + δ ≤ 1,

G_{α,β,γ,δ}(A) = {⟨x, [α inf M_A(x), β sup M_A(x)],
[γ inf N_A(x), δ sup N_A(x)]⟩ | x ∈ E},

H_{α,β,γ,δ}(A) = {⟨x, [α inf M_A(x), β sup M_A(x)],
[inf N_A(x) + γ(1 − sup M_A(x) − sup N_A(x)),
sup N_A(x) + δ(1 − sup M_A(x) − sup N_A(x))]⟩ | x ∈ E},

H*_{α,β,γ,δ}(A) = {⟨x, [α inf M_A(x), β sup M_A(x)],
[inf N_A(x) + γ(1 − β sup M_A(x) − sup N_A(x)),
sup N_A(x) + δ(1 − β sup M_A(x) − sup N_A(x))]⟩ | x ∈ E},

J_{α,β,γ,δ}(A) = {⟨x, [inf M_A(x) + α(1 − sup M_A(x) − sup N_A(x)),
sup M_A(x) + β(1 − sup M_A(x) − sup N_A(x))],
[γ inf N_A(x), δ sup N_A(x)]⟩ | x ∈ E},

J*_{α,β,γ,δ}(A) = {⟨x, [inf M_A(x) + α(1 − sup M_A(x) − δ sup N_A(x)),
sup M_A(x) + β(1 − sup M_A(x) − δ sup N_A(x))],
[γ inf N_A(x), δ sup N_A(x)]⟩ | x ∈ E},

where α, β, γ, δ ∈ [0, 1] are such that α ≤ β and γ ≤ δ.


Figure 4 shows to which region of the triangle the element x ∈ E (represented by the small
rectangular region in the triangle) will be transformed by the operators F, G, ..., irrespective of whether
they have two or four indices.

[Figure: the triangle partitioned into regions labelled G, J, J*, F, H, H*, showing the region onto which each operator can move an element.]

Figure 4. Region of transformation by the application of the operators.


3. Operator X

Now, we introduce the new operator

X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(A)
= {⟨x, [a1 inf M_A(x) + b1(1 − inf M_A(x) − c1 inf N_A(x)),
a2 sup M_A(x) + b2(1 − sup M_A(x) − c2 sup N_A(x))],
[d1 inf N_A(x) + e1(1 − f1 inf M_A(x) − inf N_A(x)),
d2 sup N_A(x) + e2(1 − f2 sup M_A(x) − sup N_A(x))]⟩ | x ∈ E},

where a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 ∈ [0, 1], the following three conditions are valid for i = 1, 2:

ai + ei − ei fi ≤ 1, (2)

bi + di − bi ci ≤ 1, (3)

bi + ei ≤ 1, (4)

and

a1 ≤ a2, b1 ≤ b2, c1 ≤ c2, d1 ≤ d2, e1 ≤ e2, f1 ≤ f2. (5)

Theorem 1. For every IVIFS A and for every a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 ∈ [0, 1] that satisfy
(2)–(5), X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(A) is an IVIFS.

Proof. Let a1 , b1 , c1 , d1 , e1 , f 1 , a2 , b2 , c2 , d2 , e2 , f 2 ∈ [0, 1] satisfy (2)–(5) and let A be a fixed IVIFS. Then,
from (5) it follows that

a1 inf M A ( x ) + b1 (1 − inf M A ( x ) − c1 inf NA ( x ))

≤ a2 sup M A ( x ) + b2 (1 − sup M A ( x ) − c2 sup NA ( x ))


and
d1 inf NA ( x ) + e1 (1 − f 1 inf M A ( x ) − inf NA ( x ))

≤ d2 sup NA ( x ) + e2 (1 − f 2 sup M A ( x ) − sup NA ( x )).


Now, from (5) it is clear that it will be enough to check that

X = a2 sup M A ( x ) + b2 (1 − sup M A ( x ) − c2 sup NA ( x ))

+d2 sup NA ( x ) + e2 (1 − f 2 sup M A ( x ) − sup NA ( x ))


= ( a2 − b2 − e2 f 2 ) sup M A ( x ) + (d2 − e2 − b2 c2 ) sup NA ( x ) + b2 + e2 ≤ 1.
In fact, from (2),
a2 − b2 − e2 f 2 ≤ 1 − b2 − e2

and from (3):


d2 − e2 − b2 c2 ≤ 1 − b2 − e2 .

Then, from (1),

X ≤ (1 − b2 − e2 )(sup M A ( x ) + sup NA ( x )) + b2 + e2


≤ 1 − b2 − e2 + b2 + e2 = 1.
Finally, when sup M_A(x) = sup N_A(x) = 0, from (4),

X = b2(1 − 0 − 0) + e2(1 − 0 − 0) = b2 + e2 ≤ 1.

Therefore, the definition of the IVIFS is correct.
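A small numerical sketch of the operator X and of conditions (2)-(5); the 12-tuple ordering, the interval encoding, and the sample values are my assumptions. The sample parameters make the non-membership interval become [inf N_A(x), 1 − sup M_A(x)], which is the effect of the operator □ from the shorter form.

```python
def X_op(p, M, N):
    """Applies X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2} to one element;
    p is the 12-tuple (a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2)."""
    a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 = p
    (mi, ms), (ni, ns) = M, N
    return ((a1*mi + b1*(1 - mi - c1*ni), a2*ms + b2*(1 - ms - c2*ns)),
            (d1*ni + e1*(1 - f1*mi - ni), d2*ns + e2*(1 - f2*ms - ns)))

def conditions(p):
    """Conditions (2)-(4) for each parameter row, plus condition (5)."""
    a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 = p
    rows = ((a1, b1, c1, d1, e1, f1), (a2, b2, c2, d2, e2, f2))
    rows_ok = all(a + e - e*f <= 1 and b + d - b*c <= 1 and b + e <= 1
                  for a, b, c, d, e, f in rows)
    return rows_ok and all(u <= v for u, v in zip(rows[0], rows[1]))

p_box = (1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1)
M, N = (0.25, 0.5), (0.0, 0.25)
print(conditions(p_box), X_op(p_box, M, N))  # True ((0.25, 0.5), (0.0, 0.5))
```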

All of the operators described above can be represented by the operator
X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2} at suitably chosen values of its parameters. These representations
are the following:

□A = X_{1,0,r1,1,0,s1; 1,0,r2,1,1,1}(A),

♦A = X_{1,0,r1,1,0,s1; 1,1,1,1,0,s2}(A),

D_α(A) = X_{1,0,r1,1,0,s1; 1,α,1,1,1−α,1}(A),

F_{α,β}(A) = X_{1,0,r1,1,0,s1; 1,α,1,1,β,1}(A),

G_{α,β}(A) = X_{α,0,r1,β,0,s1; α,0,r2,β,0,s2}(A),

H_{α,β}(A) = X_{α,0,r1,1,0,s1; α,0,r2,1,β,1}(A),

H*_{α,β}(A) = X_{α,0,r1,1,0,s1; α,0,r2,1,β,α}(A),

J_{α,β}(A) = X_{1,0,r1,β,0,s1; 1,α,1,β,0,s2}(A),

J*_{α,β}(A) = X_{1,0,r1,β,0,s1; 1,α,β,β,0,s2}(A),

F_{α,β,γ,δ}(A) = X_{1,α,1,1,γ,1; 1,β,1,1,δ,1}(A),

G_{α,β,γ,δ}(A) = X_{α,0,r1,γ,0,s1; β,0,r2,δ,0,s2}(A),

H_{α,β,γ,δ}(A) = X_{α,0,r1,1,γ,1; β,0,r2,1,δ,1}(A),

H*_{α,β,γ,δ}(A) = X_{α,0,r1,1,γ,1; β,0,r2,1,δ,β}(A),

J_{α,β,γ,δ}(A) = X_{1,α,1,γ,0,s1; 1,β,1,δ,0,s2}(A),

J*_{α,β,γ,δ}(A) = X_{1,α,δ,γ,0,s1; 1,β,δ,δ,0,s2}(A),

where r1, r2, s1, s2 are arbitrary real numbers in the interval [0, 1].


Three of the operations defined over two IVIFSs A and B are the following:

¬A = {⟨x, N_A(x), M_A(x)⟩ | x ∈ E},

A ∩ B = {⟨x, [min(inf M_A(x), inf M_B(x)), min(sup M_A(x), sup M_B(x))],
[max(inf N_A(x), inf N_B(x)), max(sup N_A(x), sup N_B(x))]⟩ | x ∈ E},

A ∪ B = {⟨x, [max(inf M_A(x), inf M_B(x)), max(sup M_A(x), sup M_B(x))],
[min(inf N_A(x), inf N_B(x)), min(sup N_A(x), sup N_B(x))]⟩ | x ∈ E}.


For any two IVIFSs A and B, the following relations hold:

A ⊂ B iff (∀x ∈ E)(inf M_A(x) ≤ inf M_B(x), inf N_A(x) ≥ inf N_B(x),
sup M_A(x) ≤ sup M_B(x) and sup N_A(x) ≥ sup N_B(x)),
A ⊃ B iff B ⊂ A,
A = B iff A ⊂ B and B ⊂ A.
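These operations and the inclusion relation admit a direct elementwise sketch (the dictionary encoding and all names are my assumptions):

```python
def neg(S):        # the negation: swap the membership and non-membership intervals
    return {x: (N, M) for x, (M, N) in S.items()}

def inter(A, B):   # A ∩ B: elementwise min on M, max on N
    return {x: (tuple(map(min, A[x][0], B[x][0])),
                tuple(map(max, A[x][1], B[x][1]))) for x in A}

def union(A, B):   # A ∪ B: elementwise max on M, min on N
    return {x: (tuple(map(max, A[x][0], B[x][0])),
                tuple(map(min, A[x][1], B[x][1]))) for x in A}

def subset(A, B):  # A ⊂ B
    return all(A[x][0][0] <= B[x][0][0] and A[x][0][1] <= B[x][0][1]
               and A[x][1][0] >= B[x][1][0] and A[x][1][1] >= B[x][1][1]
               for x in A)

A = {"x": ((0.2, 0.4), (0.1, 0.5))}
B = {"x": ((0.3, 0.5), (0.2, 0.4))}
print(subset(inter(A, B), union(A, B)))  # True
```

As expected, the intersection is always included in the union, and ¬ is an involution.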

Theorem 2. For every two IVIFSs A and B and for every a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 ∈ [0, 1] that
satisfy (2)–(5):

(a) ¬X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(¬A) = X_{d1,e1,f1,a1,b1,c1; d2,e2,f2,a2,b2,c2}(A),

(b) X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(A ∩ B)
⊂ X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(A) ∩ X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(B),

(c) X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(A ∪ B)
⊃ X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(A) ∪ X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(B).

Proof. (c) Let a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 ∈ [0, 1] satisfy (2)–(5), and let A and B be fixed
IVIFSs. First, we obtain:

Y = X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}(A ∪ B)

= X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}({⟨x, [max(inf M_A(x), inf M_B(x)),
max(sup M_A(x), sup M_B(x))],
[min(inf N_A(x), inf N_B(x)), min(sup N_A(x), sup N_B(x))]⟩ | x ∈ E})

= {⟨x, [a1 max(inf M_A(x), inf M_B(x)) + b1(1 − max(inf M_A(x), inf M_B(x))
− c1 min(inf N_A(x), inf N_B(x))), a2 max(sup M_A(x), sup M_B(x))
+ b2(1 − max(sup M_A(x), sup M_B(x)) − c2 min(sup N_A(x), sup N_B(x)))],
[d1 min(inf N_A(x), inf N_B(x)) + e1(1 − f1 max(inf M_A(x), inf M_B(x))
− min(inf N_A(x), inf N_B(x))), d2 min(sup N_A(x), sup N_B(x))
+ e2(1 − f2 max(sup M_A(x), sup M_B(x)) − min(sup N_A(x), sup N_B(x)))]⟩ | x ∈ E}.
Second, we calculate:

Z = X a1 b1 c1 d1 e1 f1
 ( A) ∪ X a1 b1 c1 d1 e1 f1
 ( B)
a2 b2 c2 d2 e2 f2 a2 b2 c2 d2 e2 f2

= { x, [ a1 inf M A ( x ) + b1 (1 − inf M A ( x ) − c1 inf NA ( x )),


a2 sup M A ( x ) + b2 (1 − sup M A ( x ) − c2 sup NA ( x ))],

[d1 inf NA ( x ) + e1 (1 − f 1 inf M A ( x ) − inf NA ( x )),


d2 sup NA ( x ) + e2 (1 − f 2 sup M A ( x ) − sup NA ( x ))]| x ∈ E}


∪{ x, [ a1 inf MB ( x ) + b1 (1 − inf MB ( x ) − c1 inf NB ( x )),


a2 sup MB ( x ) + b2 (1 − sup MB ( x ) − c2 sup NB ( x ))],

[d1 inf NB ( x ) + e1 (1 − f 1 inf MB ( x ) − inf NB ( x )),


d2 sup NB ( x ) + e2 (1 − f 2 sup MB ( x ) − sup NB ( x ))]| x ∈ E}

= { x, [max( a1 inf M A ( x ) + b1 (1 − inf M A ( x ) − c1 inf NA ( x )),


a1 inf MB ( x ) + b1 (1 − inf MB ( x ) − c1 inf NB ( x ))),

max( a2 sup M A ( x ) + b2 (1 − sup M A ( x ) − c2 sup NA ( x )),

a2 sup MB ( x ) + b2 (1 − sup MB ( x ) − c2 sup NB ( x )))],

[min(d1 inf NA ( x ) + e1 (1 − f 1 inf M A ( x ) − inf NA ( x )),


d1 inf NB ( x ) + e1 (1 − f 1 inf MB ( x ) − inf NB ( x ))),

min(d2 sup NA ( x ) + e2 (1 − f 2 sup M A ( x ) − sup NA ( x )),

d2 sup NB ( x ) + e2 (1 − f 2 sup MB ( x ) − sup NB ( x )))] | x ∈ E}.

Let
P = a1 max(inf M A ( x ), inf MB ( x )) + b1 (1 − max(inf M A ( x ), inf MB ( x ))

−c1 min(inf NA ( x ), inf NB ( x ))) − max( a1 inf M A ( x ) + b1 (1 − inf M A ( x ) − c1 inf NA ( x )),


a1 inf MB ( x ) + b1 (1 − inf MB ( x ) − c1 inf NB ( x )))

= a1 max(inf M A ( x ), inf MB ( x )) + b1 − b1 max(inf M A ( x ), inf MB ( x ))


−b1 c1 min(inf NA ( x ), inf NB ( x ))) − max(( a1 − b1 ) inf M A ( x ) + b1 − b1 c1 inf NA ( x ),
( a1 − b1 ) inf MB ( x ) + b1 − b1 c1 inf NB ( x ))

= a1 max(inf M A ( x ), inf MB ( x )) − b1 max(inf M A ( x ), inf MB ( x ))


−b1 c1 min(inf NA ( x ), inf NB ( x )) − max(( a1 − b1 ) inf M A ( x ) − b1 c1 inf NA ( x ),
( a1 − b1 ) inf MB ( x ) − b1 c1 inf NB ( x )).
Let inf M A ( x ) ≥ inf MB ( x ). Then

P = ( a1 − b1 ) inf M A ( x ) − b1 c1 min(inf NA ( x ), inf NB ( x )) − max(( a1 − b1 ) inf M A ( x )

−b1 c1 inf NA ( x ), ( a1 − b1 ) inf MB ( x ) − b1 c1 inf NB ( x )).


Let ( a1 − b1 ) inf M A ( x ) − b1 c1 inf NA ( x ) ≥ ( a1 − b1 ) inf MB ( x ) − b1 c1 inf NB ( x ). Then

P = ( a1 − b1 ) inf M A ( x ) − b1 c1 min(inf NA ( x ), inf NB ( x )) − ( a1 − b1 ) inf M A ( x )

+b1 c1 inf NA ( x )
= b1 c1 inf NA ( x ) − b1 c1 min(inf NA ( x ), inf NB ( x )) ≥ 0.
If (a1 − b1) inf M_A(x) − b1 c1 inf N_A(x) < (a1 − b1) inf M_B(x) − b1 c1 inf N_B(x), then

P = (a1 − b1) inf M_A(x) − b1 c1 min(inf N_A(x), inf N_B(x)) − (a1 − b1) inf M_B(x)
+ b1 c1 inf N_B(x)
= b1 c1 inf N_B(x) − b1 c1 min(inf N_A(x), inf N_B(x)) ≥ 0.
Therefore, the inf M-component of the IVIFS Y is greater than or equal to the inf M-component
of the IVIFS Z. In the same manner, it can be checked that the same inequality is valid for the
sup M-components of these IVIFSs. On the other hand, we can check that the inf N-
and sup N-components of the IVIFS Y are, respectively, less than or equal to the inf N- and
sup N-components of the IVIFS Z. Therefore, the inclusion in (c) is valid.

4. Conclusions
In the near future, the author plans to study some other properties of the new operator
X_{a1,b1,c1,d1,e1,f1; a2,b2,c2,d2,e2,f2}.

In [21], it is shown that the IFSs are a suitable tool for the evaluation of data mining processes and
objects. In the near future, we plan to discuss the possibilities of using IVIFSs as a similar tool.
Funding: This research was funded by the Bulgarian National Science Fund under Grant Ref. No.
DN-02-10/2016.

Conflicts of Interest: The author declares no conflict of interest.

References
1. Atanassov, K. Intuitionistic Fuzzy Sets. 1983, VII ITKR Session. Deposed in Centr. Sci.-Techn. Library of the
Bulg. Acad. of Sci., 1697/84. Available online: http://www.biomed.bas.bg/bioautomation/2016/vol_20.s1/
files/20.s1_03.pdf (accessed on 13 July 2018)
2. Atanassov, K. Intuitionistic fuzzy sets. Int. J. Bioautom. 2016, 20, S1–S6.
3. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [CrossRef]
4. Atanassov, K. Intuitionistic Fuzzy Sets; Springer: Heidelberg, Germany, 1999.
5. Atanassov, K. On Intuitionistic Fuzzy Sets Theory; Springer: Berlin, Germany, 2012.
6. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
7. Atanassov, K.; Stoeva, S. Intuitionistic L-fuzzy sets. In Cybernetics and Systems Research 2; Trappl, R., Ed.;
Elsevier Sci. Publ.: Amsterdam, The Netherlands, 1984; pp. 539–540.
8. Atanassov, K. A second type of intuitionistic fuzzy sets. BUSEFAL 1993, 56, 66–70.
9. Atanassov, K.; Szmidt, E.; Kacprzyk, J.; Vassilev, P. On intuitionistic fuzzy pairs of n-th type. Issues Intuit.
Fuzzy Sets Gen. Nets 2017, 13, 136–142.
10. Atanassov, K.T.; Vassilev, P. On the Intuitionistic Fuzzy Sets of n-th Type. In Advances in Data Analysis
with Computational Intelligence Methods; Studies in Computational Intelligence; Gawęda, A., Kacprzyk, J.,
Rutkowski, L., Yen, G., Eds.; Springer: Cham, Switzerland, 2018; Volume 738, pp. 265–274.
11. Parvathi, R.; Vassilev, P.; Atanassov, K. A note on the bijective correspondence between intuitionistic fuzzy
sets and intuitionistic fuzzy sets of p-th type. In New Developments in Fuzzy Sets, Intuitionistic Fuzzy Sets,
Generalized Nets and Related Topics; SRI PAS IBS PAN: Warsaw, Poland, 2012; Volume 1, pp. 143–147.
12. Vassilev, P.; Parvathi, R.; Atanassov, K. Note on intuitionistic fuzzy sets of p-th type. Issues Intuit. Fuzzy Sets
Gen. Nets 2008, 6, 43–50.
13. Atanassov, K. Temporal Intuitionistic Fuzzy Sets; Comptes Rendus de l’Academie bulgare des Sciences;
Bulgarian Academy of Sciences: Sofia, Bulgaria, 1991; pp. 5–7.
14. Atanassov, K.; Szmidt, E.; Kacprzyk, J. On intuitionistic fuzzy multi-dimensional sets. Issues Intuit. Fuzzy
Sets Gen. Nets 2008, 7, 1–6.
15. Gorzalczany, M. An interval-valued fuzzy inference method—Some basic properties. Fuzzy Sets Syst.
1989, 31, 243–251. [CrossRef]
16. Atanassov, K. Review and New Results on Intuitionistic Fuzzy Sets. 1988, IM-MFAIS-1-88,
Mathematical Foundations of Artificial Intelligence Seminar. Available online: http://www.biomed.bas.bg/
bioautomation/2016/vol_20.s1/files/20.s1_03.pdf (accessed on 13 July 2018 )
17. Atanassov, K. Review and new results on intuitionistic fuzzy sets. Int. J. Bioautom. 2016, 20, S7–S16.


18. Atanassov, K.; Gargov, G. Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349.
[CrossRef]
19. Atanassov, K. Geometrical Interpretation of the Elements of the Intuitionistic Fuzzy Objects. 1989,
IM-MFAIS-1-89, Mathematical Foundations of Artificial Intelligence Seminar. Available online: http:
//biomed.bas.bg/bioautomation/2016/vol_20.s1/files/20.s1_05.pdf (accessed on 13 July 2018)
20. Atanassov, K. Geometrical interpretation of the elements of the intuitionistic fuzzy objects. Int. J. Bioautom.
2016, 20, S27–S42.
21. Atanassov, K. Intuitionistic fuzzy logics as tools for evaluation of Data Mining processes. Knowl.-Based Syst.
2015, 80, 122–130. [CrossRef]

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
N -Hyper Sets
Young Bae Jun 1 , Seok-Zun Song 2, * and Seon Jeong Kim 3
1 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea;
[email protected]
2 Department of Mathematics, Jeju National University, Jeju 63243, Korea
3 Department of Mathematics, Natural Science of College, Gyeongsang National University, Jinju 52828,
Korea; [email protected]
* Correspondence: [email protected]

Received: 21 April 2018; Accepted: 21 May 2018; Published: 23 May 2018

Abstract: To deal with uncertainties, fuzzy set theory, introduced by Zadeh, can be considered one of
the suitable mathematical tools. As a mathematical tool to deal with negative information, Jun et al.
introduced a new function, called a negative-valued function, and constructed N-structures in 2009.
Since then, N-structures have been applied to algebraic structures, soft sets, etc. Using N-structures,
the notions of (extended) N-hyper sets and N-substructures of types 1, 2, 3 and 4 are introduced,
and several related properties are investigated in this research paper.

Keywords: N -structure; (extended) N -hyper set; N -substructure of types 1, 2, 3, 4

MSC: 06F35; 03G25; 08A72

1. Introduction
Most mathematical tools for computing, formal modeling and reasoning are crisp, deterministic
and precise in character. However, several problems in economics, environment, engineering,
social science, medical science, etc. do not always involve crisp data in real life. Consequently,
we cannot successfully use classical methods because of the various types of uncertainty present
in the problem. To deal with these uncertainties, fuzzy set theory can be considered as one of the
mathematical tools (see [1]). A (crisp) set A in a universe X can be defined in the form of its
characteristic function μ A : X → {0, 1} yielding the value 1 for elements belonging to the set A
and the value 0 for elements excluded from the set A. Thus far, most generalizations of the
crisp set have been conducted on the unit interval [0, 1], and they are consistent with the asymmetry
observation. In other words, the generalization of the crisp set to fuzzy sets relied on spreading the
positive information that fits the crisp point {1} into the interval [0, 1]. Because no negative meaning of
information is suggested, we now feel a need to deal with negative information. To do so, we also feel
a need to supply mathematical tools. To attain such an object, Jun et al. [2] introduced a new function,
which is called a negative-valued function, and constructed N-structures. Since then, N-structures have
been applied to rings (see [3]), BCH-algebras (see [4]), and (ordered) semigroups (see [5–8]). The combination of
soft sets and N -structures is dealt with in [9,10] and [11]. The purpose of this paper is to introduce the
notions of (extended) N -hyper sets, N -substructures of type 1, 2, 3 and 4, and to investigate several
related properties. In our consecutive research in future, we will try to study several applications based
on N -structures, for example, another type of algebra, soft and rough set theory, decision-making
problems, etc. In particular, we will study complex dynamics through N -structures based on the
paper [12].

Mathematics 2018, 6, 87; doi:10.3390/math6060087 www.mdpi.com/journal/mathematics



2. Preliminaries
Denote by F(X, [−1, 0]) the collection of all functions from a set X to [−1, 0]. We say that an
element of F(X, [−1, 0]) is a negative-valued function from X to [−1, 0] (briefly, an N-function on X). By an
N-structure, we mean an ordered pair (X, ρ) of X and an N-function ρ on X (see [2]).
For any family {ai | i ∈ Λ} of real numbers, we define

⋁{ai | i ∈ Λ} := max{ai | i ∈ Λ} if Λ is finite, and sup{ai | i ∈ Λ} otherwise;

⋀{ai | i ∈ Λ} := min{ai | i ∈ Λ} if Λ is finite, and inf{ai | i ∈ Λ} otherwise.

Given a subset A of [−1, 0], we define

⋄(A) = ⋁{a | a ∈ A} − ⋀{a | a ∈ A}.

3. (Extended) N-Hyper Sets

Definition 1. Let X be an initial universe set. By an N-hyper set over X, we mean a mapping μ : X →
P*([−1, 0]), where P*([−1, 0]) is the collection of all nonempty subsets of [−1, 0].

Given an N-hyper set μ : X → P*([−1, 0]) over X, we consider two N-structures (X, μ∧), (X, μ∨)
and a fuzzy structure (X, μ⋄) in which

μ∧ : X → [−1, 0], x ↦ ⋀{a | a ∈ μ(x)}, (1)

μ∨ : X → [−1, 0], x ↦ ⋁{a | a ∈ μ(x)}, (2)

μ⋄ : X → [0, 1], x ↦ ⋄(μ(x)). (3)

It is clear that μ⋄(x) = μ∨(x) − μ∧(x) for all x ∈ X.

Example 1. Let X = { a, b, c, d} and define an N -hyper set μ : X → P ∗ ([−1, 0]) over X by Table 1.

Table 1. N -hyper set.

X a b c d
μ [−0.5, 0] (−0.6, −0.3) [−0.4, −0.2) (−1, −0.8]

Then, μ generates two N -structures ( X, μ∧ ) and ( X, μ∨ ), and a fuzzy structure ( X, μ ) as Table 2.

Table 2. N-structures (X, μ∧), (X, μ∨) and (X, μ⋄).

X    a     b     c     d
μ∧  −0.5  −0.6  −0.4  −1
μ∨   0    −0.3  −0.2  −0.8
μ⋄   0.5   0.3   0.2   0.2
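The passage from μ to the two N-structures and the derived fuzzy structure can be sketched by storing each μ(x) via its endpoints; open/closed interval distinctions do not affect infima and suprema, and the encoding and names are my assumptions.

```python
# mu from Example 1, each mu(x) stored as its (inf, sup) endpoints.
mu = {"a": (-0.5, 0.0), "b": (-0.6, -0.3), "c": (-0.4, -0.2), "d": (-1.0, -0.8)}

mu_inf  = {x: lo for x, (lo, hi) in mu.items()}       # the N-structure from infima
mu_sup  = {x: hi for x, (lo, hi) in mu.items()}       # the N-structure from suprema
mu_span = {x: hi - lo for x, (lo, hi) in mu.items()}  # the fuzzy structure: sup - inf

print(mu_inf["d"], mu_sup["b"], mu_span["a"])  # -1.0 -0.3 0.5
```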

Definition 2. Given an N -structure ( X, ϕ) over X, define a map

ϕe : P ∗ ( X ) → P ∗ ([−1, 0]), A → { ϕ( a) | a ∈ A}, (4)

where P ∗ ( X ) is the set of all nonempty subsets of X. We call ϕe the extended N -hyper set over X.


Example 2. Let X = { a, b, c, d} be an initial universe set and let ( X, ϕ) be an N -structure over X given by
Table 3.

Table 3. N -structure ( X, ϕ).

X a b c d
ϕ −0.5 −0.3 −0.4 −0.8

Then, the extended N -hyper set ϕe over X is described as Table 4.

Table 4. The extended N -hyper set ϕe over X.

A ∈ P ∗ (X ) ϕe ( A) A ∈ P ∗ (X ) ϕe ( A)
{ a} {−0.5} {b} {−0.3}
{c} {−0.4} {d} {−0.8}
{ a, b} {−0.5, −0.3} { a, c} {−0.5, −0.4}
{ a, d} {−0.5, −0.8} { a, b, c} {−0.5, −0.4, −0.3}
{ a, b, d} {−0.5, −0.3, −0.8} { a, b, c, d} {−0.5, −0.4, −0.3, −0.8}
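The extended N-hyper set of Definition 2 can be sketched directly on the data of Example 2 (the function and variable names are mine):

```python
phi = {"a": -0.5, "b": -0.3, "c": -0.4, "d": -0.8}   # the N-structure of Example 2

def phi_e(A):
    """The extended N-hyper set: phi_e(A) = {phi(a) | a in A}
    for a nonempty subset A of X."""
    assert A, "A must be a nonempty subset of X"
    return {phi[a] for a in A}

print(sorted(phi_e({"a", "b", "c"})))  # [-0.5, -0.4, -0.3]
```

The printed value matches the row for {a, b, c} in Table 4.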

Definition 3. Let X be an initial universe set with a binary operation ∗. An N-structure (X, ϕ) over X
is called:

• an N-substructure of (X, ∗) with type 1 (briefly, N1-substructure of (X, ∗)) if it satisfies:

(∀x, y ∈ X)(ϕ(x ∗ y) ≤ ⋁{ϕ(x), ϕ(y)}), (5)

• an N-substructure of (X, ∗) with type 2 (briefly, N2-substructure of (X, ∗)) if it satisfies:

(∀x, y ∈ X)(ϕ(x ∗ y) ≥ ⋀{ϕ(x), ϕ(y)}), (6)

• an N-substructure of (X, ∗) with type 3 (briefly, N3-substructure of (X, ∗)) if it satisfies:

(∀x, y ∈ X)(ϕ(x ∗ y) ≥ ⋁{ϕ(x), ϕ(y)}), (7)

• an N-substructure of (X, ∗) with type 4 (briefly, N4-substructure of (X, ∗)) if it satisfies:

(∀x, y ∈ X)(ϕ(x ∗ y) ≤ ⋀{ϕ(x), ϕ(y)}). (8)

It is clear that every N4-substructure of (X, ∗) is an N1-substructure of (X, ∗), and every
N3-substructure of (X, ∗) is an N2-substructure of (X, ∗).
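The four conditions can be tested by brute force on a finite part of (X, ∗); the classifier below is entirely my sketch, and on a small range of integers it reproduces the conclusion of Example 3 (2) below.

```python
def substructure_types(X, op, phi):
    """Returns the list of types (among 1-4) that the N-function phi
    satisfies on all pairs from the finite iterable X."""
    types = {1: True, 2: True, 3: True, 4: True}
    for x in X:
        for y in X:
            v = phi(op(x, y))
            meet, join = min(phi(x), phi(y)), max(phi(x), phi(y))
            types[1] &= v <= join   # condition (5)
            types[2] &= v >= meet   # condition (6)
            types[3] &= v >= join   # condition (7)
            types[4] &= v <= meet   # condition (8)
    return sorted(k for k, ok in types.items() if ok)

# Example 3 (2): phi(x) = -1/(1 + |x|) with x * y = -(|x| + |y|).
op = lambda x, y: -(abs(x) + abs(y))
print(substructure_types(range(-5, 6), op, lambda x: -1/(1 + abs(x))))  # [2, 3]
```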

Example 3. Let X be the set of all integers and let ∗ be a binary operation on X defined by

(∀x, y ∈ X)(x ∗ y = −(|x| + |y|)).

(1) Define an N-structure (X, ϕ) over X by

ϕ : X → [−1, 0], x ↦ −1 + 1/e^{|x|}.

Then, ϕ(0) = 0, lim_{|x|→∞} ϕ(x) = −1 and

ϕ(x ∗ y) = −1 + 1/e^{|x|+|y|} ≤ ⋀{−1 + 1/e^{|x|}, −1 + 1/e^{|y|}} = ⋀{ϕ(x), ϕ(y)}

for all x, y ∈ X. Therefore, (X, ϕ) is an N4-substructure of (X, ∗), and hence it is also an N1-substructure
of (X, ∗).
(2) Let (X, ϕ) be an N-structure over X in which ϕ is given by

ϕ : X → [−1, 0], x ↦ −1/(1 + |x|).

Then,

ϕ(x ∗ y) = ϕ(−(|x| + |y|)) = −1/(1 + |−(|x| + |y|)|)
= −1/(1 + |x| + |y|) ≥ ⋁{−1/(1 + |x|), −1/(1 + |y|)}
= ⋁{ϕ(x), ϕ(y)}

for all x, y ∈ X. Therefore, (X, ϕ) is an N3-substructure of (X, ∗), and hence it is also an N2-substructure
of (X, ∗).

For any initial universe set X with binary operations, let H( X ) denote the set of all ( X, ∗) where ∗
is a binary operation on X, that is,

H( X ) := {( X, ∗) | ∗ is a binary operation on X } .

We consider the following subsets of H( X ):

N1 ( ϕ) := {( X, ∗) ∈ H( X ) | ϕ is an N1 -substructure of ( X, ∗)},
N2 ( ϕ) := {( X, ∗) ∈ H( X ) | ϕ is an N2 -substructure of ( X, ∗)},
N3 ( ϕ) := {( X, ∗) ∈ H( X ) | ϕ is an N3 -substructure of ( X, ∗)},
N4 ( ϕ) := {( X, ∗) ∈ H( X ) | ϕ is an N4 -substructure of ( X, ∗)}.

Theorem 1. Given an N-structure (X, ϕ) over an initial universe set X, if (X, ∗) ∈ N1(ϕ), then
(P*(X), ∗) ∈ N1(ϕe∧).

Proof. If (X, ∗) ∈ N1(ϕ), then ϕ is an N1-substructure of (X, ∗), that is, Equation (5) is valid.
Let A, B ∈ P*(X). Then,

ϕe∧(A ∗ B) = ⋀{ϕ(a ∗ b) | a ∈ A, b ∈ B}. (9)

Note that

(∀ε > 0)(∃a0 ∈ A)(ϕ(a0) < ⋀{ϕ(a) | a ∈ A} + ε)

and

(∀ε > 0)(∃b0 ∈ B)(ϕ(b0) < ⋀{ϕ(b) | b ∈ B} + ε).

It follows that

⋀{ϕ(a ∗ b) | a ∈ A, b ∈ B} ≤ ϕ(a0 ∗ b0) ≤ ⋁{ϕ(a0), ϕ(b0)}
≤ ⋁{⋀{ϕ(a) | a ∈ A} + ε, ⋀{ϕ(b) | b ∈ B} + ε}
= ⋁{ϕe∧(A) + ε, ϕe∧(B) + ε}
= ⋁{ϕe∧(A), ϕe∧(B)} + ε.

Since ε is arbitrary, it follows that

ϕe∧(A ∗ B) ≤ ⋁{ϕe∧(A), ϕe∧(B)}.

Therefore, (P*(X), ∗) ∈ N1(ϕe∧).

Theorem 2. Given an N-structure (X, ϕ) over an initial universe set X, if (X, ∗) ∈ N2(ϕ),
then (P*(X), ∗) ∈ N2(ϕe∨).

Proof. If (X, ∗) ∈ N2(ϕ), then ϕ is an N2-substructure of (X, ∗), that is, Equation (6) is valid.
Let A, B ∈ P*(X). Then,

ϕe∨(A ∗ B) = ⋁{ϕ(a ∗ b) | a ∈ A, b ∈ B}. (10)

Let ε be any positive number. Then, there exist a0 ∈ A and b0 ∈ B such that

ϕ(a0) > ⋁{ϕ(a) | a ∈ A} − ε,

ϕ(b0) > ⋁{ϕ(b) | b ∈ B} − ε,

respectively. It follows that

⋁{ϕ(a ∗ b) | a ∈ A, b ∈ B} ≥ ϕ(a0 ∗ b0) ≥ ⋀{ϕ(a0), ϕ(b0)}
≥ ⋀{⋁{ϕ(a) | a ∈ A} − ε, ⋁{ϕ(b) | b ∈ B} − ε}
= ⋀{ϕe∨(A) − ε, ϕe∨(B) − ε}
= ⋀{ϕe∨(A), ϕe∨(B)} − ε,

which shows that ϕe∨(A ∗ B) ≥ ⋀{ϕe∨(A), ϕe∨(B)}. Therefore, (P*(X), ∗) ∈ N2(ϕe∨).

Definition 4. Given N-hyper sets μ and λ over an initial universe set X, we define the hyper-union (∪̃),
hyper-intersection (∩̃), hyper-complement (c) and hyper-difference (\) as follows:

μ ∪̃ λ : X → P*([−1, 0]), x ↦ μ(x) ∪ λ(x),
μ ∩̃ λ : X → P*([−1, 0]), x ↦ μ(x) ∩ λ(x),
μ \ λ : X → P*([−1, 0]), x ↦ μ(x) \ λ(x),
μ^c : X → P*([−1, 0]), x ↦ [−1, 0] \ μ(x) = {t ∈ [−1, 0] | t ∉ μ(x)}.

Proposition 1. If μ and λ are N-hyper sets over an initial universe set X, then

(∀x ∈ X)((μ ∪˜ λ)ℓ(x) ≥ ⋁{μℓ(x), λℓ(x)}), (11)


and

(∀x ∈ X)((μ ∩˜ λ)ℓ(x) ≤ ⋀{μℓ(x), λℓ(x)}). (12)

Proof. Let x ∈ X. Then,

(μ ∪˜ λ)∨(x) = ⋁(μ(x) ∪ λ(x)) ≥ ⋁μ(x) (and ⋁λ(x))

and

(μ ∪˜ λ)∧(x) = ⋀(μ(x) ∪ λ(x)) ≤ ⋀μ(x) (and ⋀λ(x)).

It follows that

(μ ∪˜ λ)∨(x) ≥ ⋁{⋁μ(x), ⋁λ(x)}

and

(μ ∪˜ λ)∧(x) ≤ ⋀{⋀μ(x), ⋀λ(x)}.

Note that ⋁{a, b} + ⋁{c, d} ≥ ⋁{a + c, b + d} for all a, b, c, d ∈ [−1, 0]. Hence,

(μ ∪˜ λ)ℓ(x) = (μ ∪˜ λ)∨(x) − (μ ∪˜ λ)∧(x)
≥ ⋁{⋁μ(x), ⋁λ(x)} − ⋀{⋀μ(x), ⋀λ(x)}
= ⋁{⋁μ(x), ⋁λ(x)} + ⋁{−⋀μ(x), −⋀λ(x)}
≥ ⋁{⋁μ(x) − ⋀μ(x), ⋁λ(x) − ⋀λ(x)}
= ⋁{μℓ(x), λℓ(x)},

and so Equation (11) is valid. For any x ∈ X, we have

(μ ∩˜ λ)∨(x) = ⋁(μ(x) ∩ λ(x)) ≤ ⋁μ(x) (and ⋁λ(x))

and

(μ ∩˜ λ)∧(x) = ⋀(μ(x) ∩ λ(x)) ≥ ⋀μ(x) (and ⋀λ(x)),

which imply that

(μ ∩˜ λ)∨(x) ≤ ⋀{⋁μ(x), ⋁λ(x)}

and

(μ ∩˜ λ)∧(x) ≥ ⋁{⋀μ(x), ⋀λ(x)}.

Since ⋀{a, b} + ⋀{c, d} ≤ ⋀{a + c, b + d} for all a, b, c, d ∈ [−1, 0], we have


(μ ∩˜ λ)ℓ(x) = (μ ∩˜ λ)∨(x) − (μ ∩˜ λ)∧(x)
≤ ⋀{⋁μ(x), ⋁λ(x)} − ⋁{⋀μ(x), ⋀λ(x)}
= ⋀{⋁μ(x), ⋁λ(x)} + ⋀{−⋀μ(x), −⋀λ(x)}
≤ ⋀{⋁μ(x) − ⋀μ(x), ⋁λ(x) − ⋀λ(x)}
= ⋀{μℓ(x), λℓ(x)}.

This completes the proof.
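The two elementary inequalities used in this proof, ⋁{a, b} + ⋁{c, d} ≥ ⋁{a + c, b + d} and ⋀{a, b} + ⋀{c, d} ≤ ⋀{a + c, b + d}, can themselves be checked numerically. The random sampling below is only an illustrative sketch, not part of the paper.

```python
import random

random.seed(0)

# For a, b, c, d in [-1, 0]:
#   max(a, b) + max(c, d) >= max(a + c, b + d)   (used for Equation (11))
#   min(a, b) + min(c, d) <= min(a + c, b + d)   (used for Equation (12))
for _ in range(10000):
    a, b, c, d = (random.uniform(-1, 0) for _ in range(4))
    assert max(a, b) + max(c, d) >= max(a + c, b + d) - 1e-12
    assert min(a, b) + min(c, d) <= min(a + c, b + d) + 1e-12
print("both inequalities hold on 10000 random samples")
```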

Proposition 2. If μ is an N-hyper set over an initial universe set X, then

(∀x ∈ X)((μ ∪˜ μc)ℓ(x) ≥ ⋁{μℓ(x), (μc)ℓ(x)}). (13)

Proof. Note that

(μ ∪˜ μc)(x) = μ(x) ∪ μc(x) = μ(x) ∪ ([−1, 0] \ μ(x)) = [−1, 0]

for all x ∈ X. It follows that

(μ ∪˜ μc)ℓ(x) = (μ ∪˜ μc)∨(x) − (μ ∪˜ μc)∧(x) = 1 ≥ ⋁{μℓ(x), (μc)ℓ(x)}

for all x ∈ X.

Proposition 3. If μ and λ are N-hyper sets over an initial universe set X, then

(∀x ∈ X)((μ \ λ)ℓ(x) ≤ μℓ(x)). (14)

Proof. Note that (μ \ λ)(x) = μ(x) \ λ(x) ⊆ μ(x) for all x ∈ X. Hence,

(μ \ λ)∨(x) ≤ μ∨(x) and (μ \ λ)∧(x) ≥ μ∧(x).

It follows that

(μ \ λ)ℓ(x) = (μ \ λ)∨(x) − (μ \ λ)∧(x) ≤ μ∨(x) − μ∧(x) = μℓ(x),

proving the proposition.

Given N-hyper sets μ and λ over an initial universe set X, we define

μξ : X → [−1, 0], x → μ∧(x) − μ∨(x), (15)

μ ∨˜ λ : X → P∗([−1, 0]), x → {⋁{a, b} ∈ [−1, 0] | a ∈ μ(x), b ∈ λ(x)}, (16)

μ ∧˜ λ : X → P∗([−1, 0]), x → {⋀{a, b} ∈ [−1, 0] | a ∈ μ(x), b ∈ λ(x)}. (17)

Example 4. Let μ and λ be N -hyper sets over X = { a, b, c, d} defined by Table 5.


Table 5. N -hyper sets μ and λ.

X a b c d
μ [−0.5, 0] {−1, −0.6} [−0.4, −0.2) (−1, −0.8]
λ [−0.6, −0.3] {−1, −0.8, −0.5} [−0.5, −0.3) (−0.9, −0.7]

Then, μξ is given in Table 6,

Table 6. N -function ( X, μξ ).

X a b c d
μ∧ −0.5 −1 −0.4 −1
μ∨ 0 −0.6 −0.2 −0.8
μξ −0.5 −0.4 −0.2 −0.2

and

(μ ∨˜ λ)(b) = {⋁{−1, −1}, ⋁{−1, −0.8}, ⋁{−1, −0.5}, ⋁{−0.6, −1}, ⋁{−0.6, −0.8}, ⋁{−0.6, −0.5}}
= {−1, −0.8, −0.6, −0.5},

(μ ∧˜ λ)(b) = {⋀{−1, −1}, ⋀{−1, −0.8}, ⋀{−1, −0.5}, ⋀{−0.6, −1}, ⋀{−0.6, −0.8}, ⋀{−0.6, −0.5}}
= {−1, −0.8, −0.6}.

Thus, (μ ∨˜ λ)∨(b) = −0.5, (μ ∧˜ λ)∨(b) = −0.6 and (μ ∨˜ λ)∧(b) = −1 = (μ ∧˜ λ)∧(b).
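The computations in Example 4 can be reproduced directly from the values of μ(b) and λ(b) in Table 5; the code below is a small illustrative check.

```python
# N-hyper set values at b, taken from Table 5.
mu_b = [-1, -0.6]
lam_b = [-1, -0.8, -0.5]

# (mu v~ lam)(b) collects sup{a, b'} over all pairs;
# (mu ^~ lam)(b) collects inf{a, b'} over all pairs.
vee = {max(a, b) for a in mu_b for b in lam_b}
wedge = {min(a, b) for a in mu_b for b in lam_b}

assert vee == {-1, -0.8, -0.6, -0.5}
assert wedge == {-1, -0.8, -0.6}

# Sup and inf of the resulting sets, as computed in the example.
assert max(vee) == -0.5 and max(wedge) == -0.6
assert min(vee) == -1 and min(wedge) == -1
print("Example 4 verified")
```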

Proposition 4. Let X be an initial universe set with a binary operation ∗. If μ and λ are N-hyper sets over X, then

(∀x ∈ X)((μ ∨˜ λ)∨(x) = ⋁{μ∨(x), λ∨(x)}) (18)

and

(∀x ∈ X)((μ ∨˜ λ)∧(x) = ⋁{μ∧(x), λ∧(x)}). (19)

Proof. For any x ∈ X, let α := μ∨(x) and β := λ∨(x). Then,

(μ ∨˜ λ)∨(x) = ⋁(μ ∨˜ λ)(x)
= ⋁{⋁{a, b} ∈ [−1, 0] | a ∈ μ(x), b ∈ λ(x)}
= ⋁{⋁{⋁{α, b} | b ∈ λ(x)}, ⋁{⋁{a, b} | a ∈ μ(x), b ∈ λ(x)}, ⋁{⋁{a, β} | a ∈ μ(x)}, ⋁{α, β}}
= ⋁{α, β} = ⋁{μ∨(x), λ∨(x)}.

Thus, Equation (18) is valid. Similarly, we can prove Equation (19).

Similarly, we have the following property.


Proposition 5. Let X be an initial universe set with a binary operation ∗. If μ and λ are N-hyper sets over X, then

(∀x ∈ X)((μ ∧˜ λ)∨(x) = ⋀{μ∨(x), λ∨(x)}) (20)

and

(∀x ∈ X)((μ ∧˜ λ)∧(x) = ⋀{μ∧(x), λ∧(x)}). (21)
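Propositions 4 and 5 can be tested on randomly generated finite subsets of [−1, 0]; the sketch below uses hypothetical sample data drawn from a small grid.

```python
import random

random.seed(1)

def rand_subset():
    # non-empty finite subset of [-1, 0], on a 0.05 grid
    grid = [round(-1 + 0.05 * k, 2) for k in range(21)]
    return set(random.sample(grid, random.randint(1, 8)))

for _ in range(500):
    mu_x, lam_x = rand_subset(), rand_subset()
    vee = {max(a, b) for a in mu_x for b in lam_x}    # (mu v~ lam)(x)
    wedge = {min(a, b) for a in mu_x for b in lam_x}  # (mu ^~ lam)(x)
    # Proposition 4: sup and inf of the v~ combination
    assert max(vee) == max(max(mu_x), max(lam_x))     # Equation (18)
    assert min(vee) == max(min(mu_x), min(lam_x))     # Equation (19)
    # Proposition 5: sup and inf of the ^~ combination
    assert max(wedge) == min(max(mu_x), max(lam_x))   # Equation (20)
    assert min(wedge) == min(min(mu_x), min(lam_x))   # Equation (21)
print("Propositions 4 and 5 verified on 500 random pairs")
```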

Definition 5. Let X be an initial universe set with a binary operation ∗. An N-hyper set μ : X → P∗([−1, 0]) is called an N-hyper subset of (X, ∗) with type (i, j) for i, j ∈ {1, 2, 3, 4} (briefly, an N(i,j)-substructure of (X, ∗)) if (X, μ∨) is an Ni-substructure of (X, ∗) and (X, μ∧) is an Nj-substructure of (X, ∗).

Given an N -hyper set μ : X → P ∗ ([−1, 0]), we consider the set

N(i,j) (μ) := {( X, ∗) ∈ H( X ) | μ is an N(i,j) -substructure of ( X, ∗)}

for i, j ∈ {1, 2, 3, 4}.

Theorem 3. Let X be an initial universe set with a binary operation ∗. For any N -hyper set μ : X →
P ∗ ([−1, 0]), we have

( X, ∗) ∈ N(3,4) (μ) ⇒ ( X, ∗) ∈ N4 (μξ ). (22)

Proof. Let (X, ∗) ∈ N(3,4)(μ). Then, (X, μ∨) is an N3-substructure of (X, ∗) and (X, μ∧) is an N4-substructure of (X, ∗), that is,

μ∨(x ∗ y) ≥ ⋁{μ∨(x), μ∨(y)}

and

μ∧(x ∗ y) ≤ ⋀{μ∧(x), μ∧(y)}

for all x, y ∈ X. It follows that

μξ(x ∗ y) = μ∧(x ∗ y) − μ∨(x ∗ y) ≤ μ∧(x) − μ∨(x) = μξ(x).

Similarly, we get μξ(x ∗ y) ≤ μξ(y). Hence, μξ(x ∗ y) ≤ ⋀{μξ(x), μξ(y)}, and so (X, ∗) ∈ N4(μξ).

Corollary 1. Let X be an initial universe set with a binary operation ∗. For any N -hyper set μ : X →
P ∗ ([−1, 0]), we have

( X, ∗) ∈ N(3,4) (μ) ⇒ ( X, ∗) ∈ N1 (μξ ).

Theorem 4. Let X be an initial universe set with a binary operation ∗. For any N -hyper set μ : X →
P ∗ ([−1, 0]), we have

( X, ∗) ∈ N(4,3) (μ) ⇒ ( X, ∗) ∈ N3 (μξ ). (23)

Proof. It is similar to the proof of Theorem 3.


Corollary 2. Let X be an initial universe set with a binary operation ∗. For any N -hyper set μ : X →
P ∗ ([−1, 0]), we have

( X, ∗) ∈ N(4,3) (μ) ⇒ ( X, ∗) ∈ N2 (μξ ).

Theorem 5. Let X be an initial universe set with a binary operation ∗. For any N -hyper set μ : X →
P ∗ ([−1, 0]), we have

( X, ∗) ∈ N(1,3) (μ) ⇒ ( X, ∗) ∈ N3 (μξ ). (24)

Proof. Let (X, ∗) ∈ N(1,3)(μ). Then, (X, μ∨) is an N1-substructure of (X, ∗) and (X, μ∧) is an N3-substructure of (X, ∗), that is,

μ∨(x ∗ y) ≤ ⋁{μ∨(x), μ∨(y)} (25)

and

μ∧(x ∗ y) ≥ ⋁{μ∧(x), μ∧(y)}

for all x, y ∈ X. Equation (25) implies that

μ∨(x ∗ y) ≤ μ∨(x) or μ∨(x ∗ y) ≤ μ∨(y).

If μ∨(x ∗ y) ≤ μ∨(x), then

μξ(x ∗ y) = μ∧(x ∗ y) − μ∨(x ∗ y) ≥ μ∧(x) − μ∨(x) = μξ(x).

If μ∨(x ∗ y) ≤ μ∨(y), then

μξ(x ∗ y) = μ∧(x ∗ y) − μ∨(x ∗ y) ≥ μ∧(y) − μ∨(y) = μξ(y).

It follows that μξ(x ∗ y) ≥ ⋁{μξ(x), μξ(y)}, and so (X, ∗) ∈ N3(μξ).

Corollary 3. Let X be an initial universe set with a binary operation ∗. For any N -hyper set μ : X →
P ∗ ([−1, 0]), we have

( X, ∗) ∈ N(1,3) (μ) ⇒ ( X, ∗) ∈ N2 (μξ ). (26)

Theorem 6. Let X be an initial universe set with a binary operation ∗. For any N -hyper set μ : X →
P ∗ ([−1, 0]), we have

( X, ∗) ∈ N(3,1) (μ) ⇒ ( X, ∗) ∈ N1 (μξ ). (27)

Proof. It is similar to the proof of Theorem 5.

Theorem 7. Let X be an initial universe set with a binary operation ∗. For any N -hyper set μ : X →
P ∗ ([−1, 0]), we have

( X, ∗) ∈ N(2,4) (μ) ⇒ ( X, ∗) ∈ N1 (μξ ). (28)


Proof. Let (X, ∗) ∈ N(2,4)(μ). Then, (X, μ∨) is an N2-substructure of (X, ∗) and (X, μ∧) is an N4-substructure of (X, ∗), that is,

μ∨(x ∗ y) ≥ ⋀{μ∨(x), μ∨(y)} (29)

and

μ∧(x ∗ y) ≤ ⋀{μ∧(x), μ∧(y)}

for all x, y ∈ X. Then, μ∨(x ∗ y) ≥ μ∨(x) or μ∨(x ∗ y) ≥ μ∨(y) by Equation (29). If μ∨(x ∗ y) ≥ μ∨(x), then

μξ(x ∗ y) = μ∧(x ∗ y) − μ∨(x ∗ y) ≤ μ∧(x) − μ∨(x) = μξ(x).

If μ∨(x ∗ y) ≥ μ∨(y), then

μξ(x ∗ y) = μ∧(x ∗ y) − μ∨(x ∗ y) ≤ μ∧(y) − μ∨(y) = μξ(y).

It follows that μξ(x ∗ y) ≤ ⋁{μξ(x), μξ(y)}, that is, (X, ∗) ∈ N1(μξ).

Theorem 8. Let X be an initial universe set with a binary operation ∗. For any N-hyper set μ : X → P∗([−1, 0]), if (X, ∗) ∈ N(4,2)(μ), then

(∀x, y ∈ X)(μℓ(x ∗ y) ≤ ⋁{μℓ(x), μℓ(y)}). (30)

Proof. If (X, ∗) ∈ N(4,2)(μ), then (X, μ∨) is an N4-substructure of (X, ∗) and (X, μ∧) is an N2-substructure of (X, ∗), that is,

μ∨(x ∗ y) ≤ ⋀{μ∨(x), μ∨(y)}

and

μ∧(x ∗ y) ≥ ⋀{μ∧(x), μ∧(y)} (31)

for all x, y ∈ X. Then, μ∧(x ∗ y) ≥ μ∧(x) or μ∧(x ∗ y) ≥ μ∧(y) by Equation (31). If μ∧(x ∗ y) ≥ μ∧(x), then

μℓ(x ∗ y) = μ∨(x ∗ y) − μ∧(x ∗ y) ≤ μ∨(x) − μ∧(x) = μℓ(x).

If μ∧(x ∗ y) ≥ μ∧(y), then

μℓ(x ∗ y) = μ∨(x ∗ y) − μ∧(x ∗ y) ≤ μ∨(y) − μ∧(y) = μℓ(y).

It follows that μℓ(x ∗ y) ≤ ⋁{μℓ(x), μℓ(y)} for all x, y ∈ X.
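Theorem 8 can be illustrated on a small hypothetical structure in which the hypotheses are easy to arrange: take x ∗ y = min{x, y} on X = {0, 1, 2}, and let μ(x) be finite sets whose infima and suprema both increase with x, so that (X, μ∨) is an N4-substructure and (X, μ∧) is an N2-substructure. The data below are assumptions chosen for illustration only.

```python
# Hypothetical N-hyper set on X = {0, 1, 2}: mu(x) given as finite sets.
mu = {0: {-0.9, -0.8}, 1: {-0.7, -0.5}, 2: {-0.4, -0.1}}
X = list(mu)

def star(x, y):  # hypothetical binary operation
    return min(x, y)

sup = {x: max(mu[x]) for x in X}          # mu_vee(x)
inf = {x: min(mu[x]) for x in X}          # mu_wedge(x)
length = {x: sup[x] - inf[x] for x in X}  # length: mu_vee(x) - mu_wedge(x)

for x in X:
    for y in X:
        z = star(x, y)
        # hypotheses: (X, mu_vee) is an N4- and (X, mu_wedge) an N2-substructure
        assert sup[z] <= min(sup[x], sup[y])
        assert inf[z] >= min(inf[x], inf[y])
        # conclusion, Equation (30): length(x*y) <= max{length(x), length(y)}
        assert length[z] <= max(length[x], length[y])
print("N(4,2) hypotheses and Equation (30) hold on all pairs")
```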

4. Conclusions
Fuzzy set theory was considered by Zadeh as one of the mathematical tools to deal with uncertainty. Because fuzzy set theory could not deal with negative information, Jun et al. introduced a new function, called a negative-valued function, and constructed N-structures in 2009 as a mathematical tool to deal with negative information. Since then, N-structures have been applied to algebraic structures, soft sets, etc. Using N-structures, in this article, we have studied the notions of (extended) N-hyper sets and N-substructures of types 1, 2, 3 and 4, and have investigated several related properties.


Author Contributions: All authors contributed equally and significantly to the study and preparation of the
article. They have read and approved the final manuscript.
Acknowledgments: The authors thank the anonymous reviewers for their valuable comments and suggestions.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
2. Jun, Y.B.; Lee, K.J.; Song, S.Z. N -ideals of BCK/BCI-algebras. J. Chungcheong Math. Soc. 2009, 22, 417–437.
3. Ceven, Y. N -ideals of rings. Int. J. Algebra 2012, 6, 1227–1232.
4. Jun, Y.B.; Öztürk, M.A.; Roh, E.H. N -structures applied to closed ideals in BCH-algebras. Int. J. Math.
Math. Sci. 2010, 943565. [CrossRef]
5. Khan, A.; Jun, Y.B.; Shabir, M. N -fuzzy quasi-ideals in ordered semigroups. Quasigroups Relat. Syst. 2009, 17,
237–252.
6. Khan, A.; Jun, Y.B.; Shabir, M. N -fuzzy ideals in ordered semigroups. Int. J. Math. Math. Sci. 2009, 814861.
[CrossRef]
7. Khan, A.; Jun, Y.B.; Shabir, M. N -fuzzy filters in ordered semigroups. Fuzzy Syst. Math. 2010, 24, 1–5.
8. Khan, A.; Jun, Y.B.; Shabir, M. N -fuzzy bi-ideals in ordered semigroups. J. Fuzzy Math. 2011, 19, 747–762.
9. Jun, Y.B.; Alshehri, N.O.; Lee, K.J. Soft set theory and N -structures applied to BCH-algebras. J. Comput.
Anal. Appl. 2014, 16, 869–886.
10. Jun, Y.B.; Lee, K.J.; Kang, M.S. Ideal theory in BCK/BCI-algebras based on soft sets and N -structures.
Discret. Dyn. Nat. Soc. 2012, 910450. [CrossRef]
11. Jun, Y.B.; Song, S.Z.; Lee, K.J. The combination of soft sets and N -structures with applications. J. Appl. Math.
2013, 420312. [CrossRef]
12. Bucolo, M.; Fortuna, L.; la Rosa, M. Complex dynamics through fuzzy chains. IEEE Trans. Fuzzy Syst. 2004,
12, 289–295. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
Hypergraphs in m-Polar Fuzzy Environment
Muhammad Akram 1, * and Gulfam Shahzadi 1
1 Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan;
[email protected]
* Correspondence: [email protected]

Received: 7 February 2018; Accepted: 16 February 2018; Published: 20 February 2018

Abstract: Fuzzy graph theory is a conceptual framework to study and analyze units that are intensely or frequently connected in a network. It is used to study the mathematical structures of pairwise relations among objects. An m-polar fuzzy (mF, for short) set is a useful notion in practice, used by researchers for modeling real-world problems that sometimes involve multi-agent, multi-attribute, multi-object, multi-index and multi-polar information. In this paper, we apply the concept of mF sets to hypergraphs, and present the notions of regular mF hypergraphs and totally regular mF hypergraphs. We describe certain properties of regular mF hypergraphs and totally regular mF hypergraphs. We discuss novel applications of mF hypergraphs in decision-making problems. We also develop efficient algorithms to solve decision-making problems.

Keywords: regular mF hypergraph; totally regular mF hypergraph; decision-making; algorithm; time complexity

1. Introduction
Graph theory has interesting applications in different fields of real-life problems to deal with pairwise relations among objects. However, this representation fails when more than two objects do or do not satisfy a certain common property. In several real-world applications, relationships among objects are more intricate. Therefore, we take into account the use of hypergraphs to represent these complex relationships. In the case of multiarity relations, hypergraphs are a generalization of graphs, in which a hyperedge may join more than two vertices.
have many applications in different fields including biological science, computer science, declustering
problems and discrete mathematics.
In 1994, Zhang [1] proposed the concept of a bipolar fuzzy set as a generalization of a fuzzy set [2]. In many problems, bipolar information is used; for instance, common efforts and competition, or common characteristics and conflicting characteristics, constitute two-sided knowledge. Chen et al. [3] introduced the concept of an m-polar fuzzy (mF, for short) set as a generalization of a bipolar fuzzy set, and it was shown that 2-polar and bipolar fuzzy sets are cryptomorphic mathematical notions. The framework of this theory is that "multipolar information" (unlike bipolar information, which gives two-valued logic) arises because information about the natural world frequently comes from n factors (n ≥ 2). For example, 'Pakistan is a good country'. The truth value of this statement may not be a real number in [0, 1]. Being a good country may have several properties: good in agriculture, good in political awareness, good in regaining macroeconomic stability, etc. Each component may be a real number in [0, 1]. If n is the number of such components under consideration, then the truth value of the fuzzy statement is an n-tuple of real numbers in [0, 1], that is, an element of [0, 1]n. The perception of fuzzy graphs based on Zadeh's fuzzy relations [4] was introduced by Kauffmann [5]. Rosenfeld [6] described the structure of fuzzy graphs. Later, some remarks on fuzzy graphs were given by Bhattacharya [7]. Several concepts on fuzzy graphs were introduced by Mordeson and Nair [8]. In 2011, Akram introduced the notion of bipolar fuzzy graphs in [9]. Li et al. [10] considered different algebraic operations on mF graphs.

Mathematics 2018, 6, 28; doi:10.3390/math6020028 www.mdpi.com/journal/mathematics



In 1977, Kauffmann [5] proposed fuzzy hypergraphs. Chen [11] studied interval-valued fuzzy hypergraphs. Generalization and redefinition of fuzzy hypergraphs were explained by Lee-Kwang and Keon-Myung [12]. Parvathi et al. [13] introduced the concept of intuitionistic fuzzy hypergraphs. Samanta and Pal [14] dealt with bipolar fuzzy hypergraphs. Later on, Akram et al. [15] considered certain properties of bipolar fuzzy hypergraphs. Bipolar neutrosophic hypergraphs with applications were presented by Akram and Luqman [16]. Sometimes information is multipolar; that is, a communication channel may have various signal strengths compared to the others due to various reasons, including atmosphere, device distribution, mutual interference of satellites, etc. The accidental mixing of various chemical substances can cause toxic gases, fire or explosions of different degrees. All of these are components of multipolar knowledge which are fuzzy in nature. This idea motivated researchers to study mF hypergraphs [17]. Akram and Sarwar [18] considered transversals of mF hypergraphs with applications. In this research paper, we introduce the idea of regular and totally regular mF hypergraphs and investigate some of their properties. We discuss new applications of mF hypergraphs in decision-making problems. We develop efficient algorithms to solve decision-making problems and compute the time complexity of the algorithms. For other notations, terminologies and applications not mentioned in the paper, the readers are referred to [19–31].
In this paper, we will use the notations defined in Table 1.

Table 1. Notations.

Symbol           Definition
H∗ = (A∗, B∗)    Crisp hypergraph
H = (A, B)       mF hypergraph
HD = (A∗, B∗)    Dual mF hypergraph
N(x)             Open neighbourhood degree of a vertex in H
N[x]             Closed neighbourhood degree of a vertex in H
γ(x1, x2)        Adjacent level of two vertices
σ(T1, T2)        Adjacent level of two hyperedges

2. Notions of mF Hypergraph
Definition 1. An mF set on a non-empty crisp set X is a function A : X → [0, 1]m . The degree of each element
x ∈ X is denoted by A( x ) = ( P1 oA( x ), P2 oA( x ), ..., Pm oA( x )), where Pi oA : [0, 1]m → [0, 1] is the i-th
projection mapping [3].
Note that [0, 1]m (m-th-power of [0, 1]) is considered as a poset with the point-wise order ≤, where m is
an arbitrary ordinal number (we make an appointment that m = {n|n < m} when m > 0), ≤ is defined by
x < y ⇔ pi ( x ) ≤ pi (y) for each i ∈ m ( x, y ∈ [0, 1]m ), and Pi : [0, 1]m → [0, 1] is the i-th projection mapping
(i ∈ m). 1 = (1, 1, ..., 1) is the greatest value and 0 = (0, 0, ..., 0) is the smallest value in [0, 1]m .

Definition 2. Let A be an mF subset of a non-empty set X. An mF relation on A is an mF subset B of X × X defined by the mapping B : X × X → [0, 1]m such that for all x, y ∈ X

Pi oB( xy) ≤ inf{ Pi oA( x ), Pi oA(y)}

1 ≤ i ≤ m, where Pi oA( x ) denotes the i-th degree of membership of a vertex x and Pi oB( xy) denotes the i-th
degree of membership of the edge xy.

Definition 3. An mF graph is a pair G = ( A, B), where A : X → [0, 1]m is an mF set in X and B : X × X →


[0, 1]m is an mF relation on X such that

Pi oB( xy) ≤ inf{ Pi oA( x ), Pi oA(y)}


1 ≤ i ≤ m, for all x, y ∈ X and Pi oB( xy) = 0 for all xy ∈ X × X − E for all i = 1, 2, ..., m. A is called the
mF vertex set of G and B is called the mF edge set of G, respectively [3].

Definition 4. An mF hypergraph on a non-empty set X is a pair H = (A, B) [17], where A = {ζ1, ζ2, ζ3, ..., ζr} is a family of mF subsets on X and B is an mF relation on the mF subsets ζj such that

1. B(Ej) = B({x1, x2, ..., xs}) ≤ inf{ζj(x1), ζj(x2), ..., ζj(xs)}, for all x1, x2, ..., xs ∈ X,
2. ⋃k supp(ζk) = X, for all ζk ∈ A.

Example 1. Let A = {ζ 1 , ζ 2 , ζ 3 , ζ 4 , ζ 5 } be a family of 4-polar fuzzy subsets on X = { a, b, c, d, e, f , g} given



in Table 2. Let B be a 4-polar fuzzy relation on ζ j s, 1 ≤ j ≤ 5, given as, B({ a, c, e}) = (0.2, 0.4, 0.1, 0.3),
B({b, d, f }) = (0.2, 0.1, 0.1, 0.1), B({ a, b}) = (0.3, 0.1, 0.1, 0.6), B({e, f }) = (0.2, 0.4, 0.3, 0.2),
B({b, e, g}) = (0.2, 0.1, 0.2, 0.4). Thus, the 4-polar fuzzy hypergraph is shown in Figure 1.

Table 2. 4-polar fuzzy subsets.

x∈X ζ1 ζ2 ζ3 ζ4 ζ5
a (0.3,0.4,0.5,0.6) (0,0,0,0) (0.3,0.4,0.5,0.6) (0,0,0,0) (0,0,0,0)
b (0,0,0,0) (0.4,0.1,0.1,0.6) (0.4,0.1,0.1,0.6) (0,0,0,0) (0.4,0.1,0.1,0.6)
c (0.3,0.5,0.1,0.3) (0,0,0,0) (0,0,0,0) (0,0,0,0) (0,0,0,0)
d (0,0,0,0) (0.4,0.2,0.5,0.1) (0,0,0,0) (0,0,0,0) (0,0,0,0)
e (0.2,0.4,0.6,0.8) (0,0,0,0) (0,0,0,0) (0.2,0.4,0.6,0.8) (0.2,0.4,0.6,0.8)
f (0,0,0,0) (0.2,0.5,0.3,0.2) (0,0,0,0) (0.2,0.5,0.3,0.2) (0,0,0,0)
g (0,0,0,0) (0,0,0,0) (0,0,0,0) (0,0,0,0) (0.3,0.5,0.1,0.4)

Figure 1. 4-Polar fuzzy hypergraph.
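The defining conditions of an mF hypergraph can be checked mechanically. The sketch below encodes part of Example 1 (Table 2) and verifies, for the hyperedge {a, c, e}, that B is bounded by the componentwise infimum of the memberships, and that the supports of ζ1, ..., ζ5 cover X; it is an illustrative check, not part of the paper.

```python
# Part of Example 1: 4-polar memberships on the hyperedge {a, c, e} (from zeta_1).
zeta1 = {"a": (0.3, 0.4, 0.5, 0.6), "c": (0.3, 0.5, 0.1, 0.3), "e": (0.2, 0.4, 0.6, 0.8)}
B_ace = (0.2, 0.4, 0.1, 0.3)

# Condition 1 of Definition 4: B(E) <= inf of the memberships, componentwise.
inf_ace = tuple(min(v[i] for v in zeta1.values()) for i in range(4))
assert inf_ace == (0.2, 0.4, 0.1, 0.3)
assert all(b <= m for b, m in zip(B_ace, inf_ace))

# Condition 2: the supports of zeta_1, ..., zeta_5 cover X = {a, ..., g}.
supports = [{"a", "c", "e"}, {"b", "d", "f"}, {"a", "b"}, {"e", "f"}, {"b", "e", "g"}]
assert set().union(*supports) == set("abcdefg")
print("Definition 4 checked for the hyperedge {a, c, e} and the cover condition")
```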

Example 2. Consider a 5-polar fuzzy hypergraph with vertex set {a, b, c, d, e, f, g} whose degrees of membership
are given in Table 3 and three hyperedges {a,b,c}, {b,d,e}, {b,f,g} such that B({ a, b, c}) = (0.2, 0.1, 0.3, 0.1, 0.2),
B({b, d, e}) = (0.1, 0.2, 0.3, 0.4, 0.2), B({b, f , g}) = (0.2, 0.2, 0.3, 0.3, 0.2). Hence, the 5-polar fuzzy
hypergraph is shown in Figure 2.

Table 3. 5-polar fuzzy subsets.

x∈X ζ1 ζ2 ζ3
a (0.2,0.1,0.3,0.1,0.3) (0,0,0,0,0) (0,0,0,0,0)
b (0.2,0.3,0.5,0.6,0.2) (0.2,0.3,0.5,0.6,0.2) (0.2,0.3,0.5,0.6,0.2)
c (0.3,0.2,0.4,0.5,0.2) (0,0,0,0,0) (0,0,0,0,0)
d (0,0,0,0,0) (0.6,0.2,0.2,0.3,0.3) (0,0,0,0,0)
e (0,0,0,0,0) (0.4,0.5,0.6,0.7,0.3) (0,0,0,0,0)
f (0,0,0,0,0) (0,0,0,0,0) (0.1,0.2,0.3,0.4,0.4)
g (0,0,0,0,0) (0,0,0,0,0) (0.2,0.4,0.6,0.8,0.4)


Figure 2. 5-Polar fuzzy hypergraph.

Definition 5. Let H = (A, B) be an mF hypergraph on a non-empty set X [17]. The dual mF hypergraph of H, denoted by HD = (A∗, B∗), is defined as

1. A∗ = B is the mF set of vertices of HD.
2. If |X| = n, then B∗ is an mF set on the family of hyperedges {X1, X2, ..., Xn} such that Xi = {Ej | xi ∈ Ej, Ej is a hyperedge of H}, i.e., Xi is the mF set of those hyperedges which share the common vertex xi, and B∗(Xi) = inf{Ej | xi ∈ Ej}.

Example 3. Consider the example of a 3-polar fuzzy hypergraph H = ( A, B) given in Figure 3, where
X = { x1 , x2 , x3 , x4 , x5 , x6 } and E = { E1 , E2 , E3 , E4 }. The dual 3-polar fuzzy hypergraph is shown in Figure 4
with dashed lines with vertex set E = { E1 , E2 , E3 , E4 } and set of hyperedges { X1 , X2 , X3 , X4 , X5 , X6 } such
that X1 = X3 .

Figure 3. 3-Polar fuzzy hypergraph.


Figure 4. Dual 3-polar fuzzy hypergraph.

Definition 6. The open neighbourhood of a vertex x in an mF hypergraph is the set of adjacent vertices of x, excluding that vertex, and it is denoted by N(x).

Example 4. Consider the 3-polar fuzzy hypergraph H = ( A, B), where A = {ζ 1 , ζ 2 , ζ 3 , ζ 4 } is a family of


3-polar fuzzy subsets on X = { a, b, c, d, e} and B is a 3-polar fuzzy relation on the 3-polar fuzzy subsets
ζ i , where ζ 1 = {( a, 0.3, 0.4, 0.5), (b, 0.2, 0.4, 0.6)}, ζ 2 = {(c, 0.2, 0.1, 0.4), (d, 0.5, 0.1, 0.1), (e, 0.2, 0.3, 0.1)},
ζ3 = {(b, 0.1, 0.2, 0.4), (c, 0.4, 0.5, 0.6)}, ζ4 = {(a, 0.1, 0.3, 0.2), (d, 0.3, 0.4, 0.4)}. In this example, the open neighbourhood of the vertex a is {b, d}, as shown in Figure 5.

Figure 5. 3-Polar fuzzy hypergraph.

Definition 7. The closed neighbourhood of a vertex x in an mF hypergraph is the set of adjacent vertices of x, including that vertex, and it is denoted by N[x].

Example 5. Consider the 3-polar fuzzy hypergraph H = ( A, B), where A = {ζ 1 , ζ 2 , ζ 3 , ζ 4 } is a family of


3-polar fuzzy subsets on X = { a, b, c, d, e} and B is a 3-polar fuzzy relation on the 3-polar fuzzy subsets
ζ j , where ζ 1 = {( a, 0.3, 0.4, 0.5), (b, 0.2, 0.4, 0.6)}, ζ 2 = {(c, 0.2, 0.1, 0.4), (d, 0.5, 0.1, 0.1), (e, 0.2, 0.3, 0.1)},


ζ3 = {(b, 0.1, 0.2, 0.4), (c, 0.4, 0.5, 0.6)}, ζ4 = {(a, 0.1, 0.3, 0.2), (d, 0.3, 0.4, 0.4)}. In this example, the closed neighbourhood of the vertex a is {a, b, d}, as shown in Figure 5.

Definition 8. Let H = ( A, B) be an mF hypergraph on crisp hypergraph H ∗ = ( A∗ , B∗ ). If all vertices in A


have the same open neighbourhood degree n, then H is called n-regular mF hypergraph.

Definition 9. The open neighbourhood degree of a vertex x in H is denoted by deg(x) and defined by deg(x) = (deg(1)(x), deg(2)(x), deg(3)(x), . . . , deg(m)(x)), where

deg(1)(x) = Σy∈N(x) P1 ◦ ζj(y),
deg(2)(x) = Σy∈N(x) P2 ◦ ζj(y),
deg(3)(x) = Σy∈N(x) P3 ◦ ζj(y),
⋮
deg(m)(x) = Σy∈N(x) Pm ◦ ζj(y).

Example 6. Consider the 3-polar fuzzy hypergraph H = ( A, B), where A = {ζ 1 , ζ 2 , ζ 3 , ζ 4 } is a family of


3-polar fuzzy subsets on X = { a, b, c, d, e} and B is a 3-polar fuzzy relation on the 3-polar fuzzy subsets
ζ j , where ζ 1 = {( a, 0.3, 0.4, 0.5), (b, 0.2, 0.4, 0.6)}, ζ 2 = {(c, 0.2, 0.1, 0.4), (d, 0.5, 0.1, 0.1), (e, 0.2, 0.3, 0.1)},
ζ 3 = {(b, 0.1, 0.2, 0.4), (c, 0.4, 0.5, 0.6)}, ζ 4 = {( a, 0.1, 0.3, 0.2), (d, 0.3, 0.4, 0.4)}. The open neighbourhood
degree of a vertex a is deg( a) = (0.5, 0.8, 1).

Definition 10. Let H = ( A, B) be an mF hypergraph on crisp hypergraph H ∗ = ( A∗ , B∗ ). If all vertices in A


have the same closed neighbourhood degree m, then H is called m-totally regular mF hypergraph.

Definition 11. The closed neighbourhood degree of a vertex x in H is denoted by deg[x] and defined by deg[x] = (deg(1)[x], deg(2)[x], deg(3)[x], . . . , deg(m)[x]), where

deg(1)[x] = deg(1)(x) + ∧j P1 ◦ ζj(x),
deg(2)[x] = deg(2)(x) + ∧j P2 ◦ ζj(x),
deg(3)[x] = deg(3)(x) + ∧j P3 ◦ ζj(x),
⋮
deg(m)[x] = deg(m)(x) + ∧j Pm ◦ ζj(x).

Example 7. Consider the 3-polar fuzzy hypergraph H = ( A, B), where A = {ζ 1 , ζ 2 , ζ 3 , ζ 4 } is a family of


3-polar fuzzy subsets on X = { a, b, c, d, e} and B is a 3-polar fuzzy relation on the 3-polar fuzzy subsets
ζ j , where ζ 1 = {( a, 0.3, 0.4, 0.5), (b, 0.2, 0.4, 0.6)}, ζ 2 = {(c, 0.2, 0.1, 0.4), (d, 0.5, 0.1, 0.1), (e, 0.2, 0.3, 0.1)},
ζ 3 = {(b, 0.1, 0.2, 0.4), (c, 0.4, 0.5, 0.6)}, ζ 4 = {( a, 0.1, 0.3, 0.2), (d, 0.3, 0.4, 0.4)}. The closed neighbourhood
degree of a vertex a is deg[ a] = (0.6, 1.1, 1.2).
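The degree computations in Examples 6 and 7 can be reproduced from the hyperedge data; the code below is a small illustrative sketch of Definitions 9 and 11.

```python
# 3-polar fuzzy hyperedges from Examples 6 and 7: vertex -> membership per edge.
edges = [
    {"a": (0.3, 0.4, 0.5), "b": (0.2, 0.4, 0.6)},                        # zeta_1
    {"c": (0.2, 0.1, 0.4), "d": (0.5, 0.1, 0.1), "e": (0.2, 0.3, 0.1)},  # zeta_2
    {"b": (0.1, 0.2, 0.4), "c": (0.4, 0.5, 0.6)},                        # zeta_3
    {"a": (0.1, 0.3, 0.2), "d": (0.3, 0.4, 0.4)},                        # zeta_4
]

def open_degree(x):
    # Definition 9: sum the memberships of the neighbours of x in each shared edge.
    deg = [0.0, 0.0, 0.0]
    for e in edges:
        if x in e:
            for y, val in e.items():
                if y != x:
                    deg = [d + v for d, v in zip(deg, val)]
    return tuple(round(d, 10) for d in deg)

def closed_degree(x):
    # Definition 11: open degree plus the infimum of x's own memberships.
    own = [e[x] for e in edges if x in e]
    inf_own = tuple(min(v[i] for v in own) for i in range(3))
    return tuple(round(d + m, 10) for d, m in zip(open_degree(x), inf_own))

assert open_degree("a") == (0.5, 0.8, 1.0)    # Example 6
assert closed_degree("a") == (0.6, 1.1, 1.2)  # Example 7
print("deg(a) and deg[a] match Examples 6 and 7")
```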

Example 8. Consider the 3-polar fuzzy hypergraph H = ( A, B), where A = {ζ 1 , ζ 2 , ζ 3 } is a family of 3-polar
fuzzy subsets on X = { a, b, c, d, e} and B is a 3-polar fuzzy relation on the 3-polar fuzzy subsets ζ j , where

ζ1 = {(a, 0.5, 0.4, 0.1), (b, 0.3, 0.4, 0.1), (c, 0.4, 0.4, 0.3)},

ζ 2 = {( a, 0.3, 0.1, 0.1), (d, 0.2, 0.3, 0.2), (e, 0.4, 0.6, 0.1)},


ζ 3 = {(b, 0.3, 0.4, 0.3), (d, 0.4, 0.3, 0.4), (e, 0.4, 0.3, 0.1)}.

By routine calculations, we can show that the above 3-polar fuzzy hypergraph is neither regular nor totally regular.

Example 9. Consider the 4-polar fuzzy hypergraph H = ( A, B); define X = { a, b, c, d, e, f , g, h, i } and


A = {ζ 1 , ζ 2 , ζ 3 , ζ 4 , ζ 5 , ζ 6 }, where

ζ 1 = {( a, 0.4, 0.4, 0.4, 0.4), (b, 0.4, 0.4, 0.4, 0.4), (c, 0.4, 0.4, 0.4, 0.4)},

ζ 2 = {(d, 0.4, 0.4, 0.4, 0.4), (e, 0.4, 0.4, 0.4, 0.4), ( f , 0.4, 0.4, 0.4, 0.4)},

ζ 3 = {( g, 0.4, 0.4, 0.4, 0.4), (h, 0.4, 0.4, 0.4, 0.4), (i, 0.4, 0.4, 0.4, 0.4)},

ζ 4 = {( a, 0.4, 0.4, 0.4, 0.4), (d, 0.4, 0.4, 0.4, 0.4), ( g, 0.4, 0.4, 0.4, 0.4)},

ζ 5 = {(b, 0.4, 0.4, 0.4, 0.4), (e, 0.4, 0.4, 0.4, 0.4), (h, 0.4, 0.4, 0.4, 0.4)},

ζ 6 = {(c, 0.4, 0.4, 0.4, 0.4), ( f , 0.4, 0.4, 0.4, 0.4), (i, 0.4, 0.4, 0.4, 0.4)}.

By routine calculations, we see that the 4-polar fuzzy hypergraph as shown in Figure 6 is both regular and
totally regular.

Figure 6. 4-Polar regular and totally regular fuzzy hypergraph.

Remark 1. 1. For an mF hypergraph H = (A, B) to be both regular and totally regular, the number of vertices in each hyperedge Bj must be the same. If |Bj| = k for every j, then H is said to be k-uniform.
2. Each vertex lies in exactly the same number of hyperedges.

Definition 12. Let H = (A, B) be a regular mF hypergraph. The order of a regular mF hypergraph H is

O(H) = (Σx∈X ∧j P1 ◦ ζj(x), Σx∈X ∧j P2 ◦ ζj(x), · · · , Σx∈X ∧j Pm ◦ ζj(x)).

The size of a regular mF hypergraph is S(H) = ∑j S(Bj), where

S(Bj) = (Σx∈Bj P1 ◦ ζj(x), Σx∈Bj P2 ◦ ζj(x), · · · , Σx∈Bj Pm ◦ ζj(x)).

Example 10. Consider the 4-polar fuzzy hypergraph H = ( A, B); define X = { a, b, c, d, e, f , g, h, i } and
A = {ζ 1 , ζ 2 , ζ 3 , ζ 4 , ζ 5 , ζ 6 }, where

ζ 1 = {( a, 0.4, 0.4, 0.4, 0.4), (b, 0.4, 0.4, 0.4, 0.4), (c, 0.4, 0.4, 0.4, 0.4)},

ζ 2 = {(d, 0.4, 0.4, 0.4, 0.4), (e, 0.4, 0.4, 0.4, 0.4), ( f , 0.4, 0.4, 0.4, 0.4)},

ζ 3 = {( g, 0.4, 0.4, 0.4, 0.4), (h, 0.4, 0.4, 0.4, 0.4), (i, 0.4, 0.4, 0.4, 0.4)},

ζ 4 = {( a, 0.4, 0.4, 0.4, 0.4), (d, 0.4, 0.4, 0.4, 0.4), ( g, 0.4, 0.4, 0.4, 0.4)},


ζ 5 = {(b, 0.4, 0.4, 0.4, 0.4), (e, 0.4, 0.4, 0.4, 0.4), (h, 0.4, 0.4, 0.4, 0.4)},

ζ 6 = {(c, 0.4, 0.4, 0.4, 0.4), ( f , 0.4, 0.4, 0.4, 0.4), (i, 0.4, 0.4, 0.4, 0.4)}.

The order of H is O(H) = (3.6, 3.6, 3.6, 3.6), and its size is S(H) = (7.2, 7.2, 7.2, 7.2).

We state the following propositions without proof.

Proposition 1. The size of an n-regular mF hypergraph is S(H) = nk/2, where |X| = k.

Proposition 2. If H is both an n-regular and m-totally regular mF hypergraph, then O(H) = k(m − n), where |X| = k.

Proposition 3. If H is an m-totally regular mF hypergraph, then 2S(H) + O(H) = mk, where |X| = k.
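Propositions 1–3 can be checked against the regular and totally regular 4-polar fuzzy hypergraph of Examples 9 and 10, where every membership value is 0.4; since all four coordinates behave identically, the illustrative script below works per coordinate.

```python
# Examples 9/10: X is a 3x3 grid of vertices; the six hyperedges are its rows and columns.
X = list("abcdefghi")
edges = [set("abc"), set("def"), set("ghi"), set("adg"), set("beh"), set("cfi")]
value = 0.4  # every membership value, in every coordinate

k = len(X)

# Every vertex has 4 distinct neighbours, so per coordinate:
assert all(len(set().union(*(e for e in edges if x in e)) - {x}) == 4 for x in X)
n = 4 * value  # open neighbourhood degree
m = n + value  # closed neighbourhood degree

# Order and size (per coordinate), as in Example 10.
O = sum(value for x in X)               # O(H) = 3.6
S = sum(len(e) * value for e in edges)  # S(H) = 7.2
assert round(O, 10) == 3.6 and round(S, 10) == 7.2

assert abs(S - n * k / 2) < 1e-9      # Proposition 1: S(H) = nk/2
assert abs(O - k * (m - n)) < 1e-9    # Proposition 2: O(H) = k(m - n)
assert abs(2 * S + O - m * k) < 1e-9  # Proposition 3: 2S(H) + O(H) = mk
print("Propositions 1-3 verified for Example 10")
```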

Theorem 1. Let H = ( A, B) be an mF hypergraph of a crisp hypergraph H ∗ . Then A : X −→ [0, 1]m is a


constant function if and only if the following are equivalent:
(a) H is a regular mF hypergraph,
(b) H is a totally regular mF hypergraph.

Proof. Suppose that A : X −→ [0, 1]m, where A = {ζ1, ζ2, ..., ζr}, is a constant function, that is, Pi oζj(x) = ci for all x ∈ ζj, 1 ≤ i ≤ m, 1 ≤ j ≤ r.
(a) ⇒ (b): Suppose that H is an n-regular mF hypergraph. Then deg(x) = ni for all x ∈ ζj. By Definition 11, deg[x] = ni + ci for all x ∈ ζj. Hence, H is a totally regular mF hypergraph.
(b) ⇒ (a): Suppose that H is an m-totally regular mF hypergraph. Then deg[x] = ki for all x ∈ ζj, 1 ≤ j ≤ r:

⇒ deg(x) + ∧j Pi oζj(x) = ki for all x ∈ ζj,
⇒ deg(x) + ci = ki for all x ∈ ζj,
⇒ deg(x) = ki − ci for all x ∈ ζj.

Thus, H is a regular mF hypergraph. Hence, (a) and (b) are equivalent.
Conversely, suppose that (a) and (b) are equivalent, i.e., H is regular if and only if H is totally regular. On the contrary, suppose that A is not constant, that is, Pi oζj(x) ≠ Pi oζj(y) for some x and y in A. Let H = (A, B) be an n-regular mF hypergraph; then

deg(x) = ni for all x ∈ ζj(x).

Consider,

deg[x] = deg(x) + ∧j Pi oζj(x) = ni + ∧j Pi oζj(x),
deg[y] = deg(y) + ∧j Pi oζj(y) = ni + ∧j Pi oζj(y).

Since Pi oζj(x) and Pi oζj(y) are not equal for some x and y in X, deg[x] and deg[y] are not equal; thus, H is not a totally regular m-polar fuzzy hypergraph, which is a contradiction to our assumption.
Next, let H be a totally regular mF hypergraph; then deg[x] = deg[y].


That is,

deg(x) + ∧j Pi oζj(x) = deg(y) + ∧j Pi oζj(y),
deg(x) − deg(y) = ∧j Pi oζj(y) − ∧j Pi oζj(x).

Since the right-hand side of the above equation is nonzero, the left-hand side is also nonzero. Thus, deg(x) and deg(y) are not equal, so H is not a regular mF hypergraph, which is again a contradiction to our assumption. Hence, A must be constant, and this completes the proof.

Theorem 2. If an mF hypergraph is both regular and totally regular, then A : X −→ [0, 1]m is a constant function.

Proof. Let H be a regular and totally regular mF hypergraph. Then

deg(x) = ni for all x ∈ ζj(x),

and

deg[x] = ki for all x ∈ ζj(x),

⇔ deg(x) + ∧j Pi oζj(x) = ki, for all x ∈ ζj(x),
⇔ ni + ∧j Pi oζj(x) = ki, for all x ∈ ζj(x),
⇔ ∧j Pi oζj(x) = ki − ni, for all x ∈ ζj(x),
⇔ Pi oζj(x) = ki − ni, for all x ∈ ζj(x).

Hence, A : X −→ [0, 1]m is a constant function.

Remark 2. The converse of Theorem 1 may not be true, in general. Consider a 3-polar fuzzy hypergraph
H = ( A, B), define X = { a, b, c, d, e},

ζ 1 = {( a, 0.2, 0, 2, 0.2), (b, 0.2, 0.2, 0.2), (c, 0.2, 0.2, 0.2)},

ζ 2 = {( a, 0.2, 0, 2, 0.2), (d, 0.2, 0.2, 0.2)},

ζ 3 = {(b, 0.2, 0.2, 0.2), (e, 0.2, 0.2, 0.2)},

ζ 4 = {(c, 0.2, 0.2, 0.2), (e, 0.2, 0.2, 0.2)}.

Then A = {ζ 1 , ζ 2 , ζ 3 , ζ 4 } is a constant function, but deg( a) = (0.6, 0.6, 0.6) ≠
(0.4, 0.4, 0.4) = deg(e). Also deg[ a] = (0.8, 0.8, 0.8) ≠ (0.6, 0.6, 0.6) = deg[e]. So H is neither a regular nor a
totally regular mF hypergraph.
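These degree values can be checked mechanically. The sketch below is our own plain-Python encoding of the remark's hypergraph (helper names are ours); it assumes, as the proofs above use, that the open neighborhood degree deg(x) sums the memberships of the neighbors of x componentwise and that the closed degree deg[x] adds the minimum hyperedge membership at x, here 0.2 everywhere.

```python
# Hyperedges of the 3-polar fuzzy hypergraph of Remark 2; every vertex
# carries the constant membership (0.2, 0.2, 0.2).
edges = [{"a", "b", "c"}, {"a", "d"}, {"b", "e"}, {"c", "e"}]
mu = {v: (0.2, 0.2, 0.2) for v in "abcde"}

def neighbors(x):
    """Vertices sharing at least one hyperedge with x."""
    return {y for e in edges if x in e for y in e} - {x}

def open_degree(x):
    # Componentwise sum of the memberships of the neighbors of x.
    return tuple(round(sum(mu[y][i] for y in neighbors(x)), 10) for i in range(3))

def closed_degree(x):
    # Open degree plus the (minimum) membership of x itself.
    d = open_degree(x)
    return tuple(round(d[i] + mu[x][i], 10) for i in range(3))

print(open_degree("a"), open_degree("e"))      # (0.6, 0.6, 0.6) (0.4, 0.4, 0.4)
print(closed_degree("a"), closed_degree("e"))  # (0.8, 0.8, 0.8) (0.6, 0.6, 0.6)
```

Since deg(a) differs from deg(e) and deg[a] from deg[e], the hypergraph is neither regular nor totally regular even though all memberships are equal.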

Definition 13. An mF hypergraph H = ( A, B) is called complete if for every x ∈ X, N ( x ) = { y | y ∈ X − { x }},
that is, N ( x ) contains all the remaining vertices of X except x.

Example 11. Consider a 3-polar fuzzy hypergraph H = ( A, B) as shown in Figure 7, where


X = { a, b, c, d} and A = {ζ 1 , ζ 2 , ζ 3 }, where ζ 1 = {( a, 0.3, 0.4, 0.6), (c, 0.3, 0.4, 0.6)}, ζ 2 =
{( a, 0.3, 0.4, 0.6), (b, 0.3, 0.4, 0.6), (d, 0.3, 0.4, 0.6)}, ζ 3 = {(b, 0.3, 0.4, 0.6), (c, 0.3, 0.4, 0.6), (d, 0.3, 0.4, 0.6)}.
Then N ( a) = {b, c, d}, N (b) = { a, c, d}, N (c) = { a, b, d}.


Figure 7. 3-Polar fuzzy hypergraph.
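Completeness in the sense of Definition 13 is easy to test by brute force on a finite vertex set. The following sketch (our own helpers; memberships omitted, only the vertex sets of the hyperedges of Example 11 are used) confirms that every vertex of Figure 7 neighbors all the others:

```python
def neighbors(x, edges):
    """Vertices sharing at least one hyperedge with x."""
    return {y for e in edges if x in e for y in e} - {x}

def is_complete(X, edges):
    # Definition 13: N(x) must equal X - {x} for every vertex x.
    return all(neighbors(x, edges) == X - {x} for x in X)

# Vertex sets of the hyperedges of Example 11.
X = {"a", "b", "c", "d"}
edges = [{"a", "c"}, {"a", "b", "d"}, {"b", "c", "d"}]
print(is_complete(X, edges))  # True
```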

Remark 3. For a complete mF hypergraph, the cardinality of N ( x ) is the same for every vertex.

Theorem 3. Every complete mF hypergraph is a totally regular mF hypergraph.

Proof. Since the given mF hypergraph H is complete, each vertex lies in exactly the same number of
hyperedges and each vertex has the same closed neighborhood degree, say m. That is, deg[ x1 ] = deg[ x2 ]
for all x1 , x2 ∈ X. Hence H is m-totally regular.

3. Applications to Decision-Making Problems


Analysis of human nature and its culture has been entangled with the assessment of social
networks for many years. Such networks are refined by designating one or more relations on the set of
individuals; the relations can be drawn from effective relationships, facets of management, and a wide
range of other means. For super-dyadic relationships between the nodes, network models represented by
simple graphs are not sufficient. Hyperedges occur naturally in co-citation, e-mail, co-authorship, web log
and social networks, etc. Representing these models as hypergraphs maintains the super-dyadic relationships.

3.1. Super-Dyadic Managements in Marketing Channels


In marketing channels, dyadic correspondence organization has been a basic implementation.
Marketing researchers and managers have realized that their common engagement in marketing
channels is a central key to successful marketing and to yielding benefits for the company. mF
hypergraphs consist of marketing managers as vertices, and hyperedges show their dyadic
communication involving their parallel thoughts, objectives, plans, and proposals. The more powerful
the close relations in the research, the more beneficial they are for the marketing strategies and the
production of an organization. A 3-polar fuzzy network model showing the dyadic communications
among the marketing managers of an organization is given in Figure 8.


Figure 8. Super-dyadic managements in marketing channels.

The membership degrees of each person symbolize the percentage of its dyadic behaviour towards
the other people of the same dyad group. The adjacent level between any pair of vertices illustrates
how proficient their dyadic relationship is. The adjacent levels are given in Table 4.

Table 4. Adjacent levels.

Dyad pairs Adjacent level Dyad pairs Adjacent level


γ(Kadeen, Kashif) (0.2,0.3,0.3) γ(Kaarim, Kaazhim) (0.2,0.3,0.3)
γ(Kadeen, Kaamil) (0.2,0.3,0.3) γ(Kaarim, Kaab) (0.1,0.2,0.3)
γ(Kadeen, Kaarim) (0.2,0.3,0.3) γ(Kaarim, Kadar) (0.2,0.3,0.3)
γ(Kadeen, Kaazhim) (0.2,0.3,0.3) γ(Kaab, Kadar) (0.1,0.2,0.3)
γ(Kashif, Kaamil) (0.2,0.3,0.4) γ(Kaab, Kabeer) (0.1,0.1,0.3)
γ(Kashif, Kaab) (0.1,0.2,0.3) γ(Kadar, Kabaark) (0.1,0.3,0.2)
γ(Kashif, Kabeer) (0.1,0.1,0.3) γ(Kaazhim, Kabeer) (0.1,0.1,0.3)
γ(Kaamil, Kadar) (0.2,0.2,0.3) γ(Kaazhim, Kabaark) (0.1,0.3,0.2)
γ(Kaamil, Kabaark) (0.1,0.3,0.2) γ(Kabeer, Kabaark) (0.1,0.1,0.2)

It can be seen that the most capable dyadic pair is (Kashif, Kaamil). 3-polar fuzzy hyperedges
are taken as different digital marketing strategies adopted by the different dyadic groups of the
same organization. The vital goal of this model is to determine the most potent dyad of digital
marketing techniques. Six different groups are formed by the marketing managers, and the digital
marketing strategies adopted by these six groups are represented by hyperedges, i.e., the 3-polar fuzzy
hyperedges { T1 , T2 , T3 , T4 , T5 , T6 } show the following strategies {Product pricing, Product planning,
Environment analysis and marketing research, Brand name, Build the relationships, Promotions},
respectively. The exclusive effects of membership degrees of each marketing strategy towards the
achievements of an organization are given in Table 5.


Table 5. Effects of marketing strategies.

Marketing Strategy | Profitable Growth | Instruction Manual for Company Success | Create Longevity of the Business

Product pricing 0.1 0.2 0.3
Product planning 0.2 0.3 0.3
Environment analysis and marketing research 0.1 0.2 0.2
Brand name 0.1 0.3 0.3
Build the relationships 0.1 0.3 0.2
Promotions 0.2 0.3 0.3

Effective dyads of marketing strategies enhance the performance of an organization and help discover
better techniques to be adopted. The adjacency of all dyadic communication managements is given in
Table 6.

Table 6. Adjacency of all dyadic communication managements.

Dyadic strategies Effects


σ(Product pricing, Product planning) (0.1,0.2,0.3)
σ(Product pricing, Environment analysis and marketing research) (0.1,0.2,0.2)
σ(Product pricing, Brand name) (0.1,0.2,0.3)
σ(Product pricing, Build the relationships) (0.1,0.2,0.2)
σ(Product pricing, Promotions) (0.1,0.2,0.3)
σ(Product planning, Environment analysis and marketing research) (0.1,0.2,0.2)
σ(Product planning, Brand name) (0.1,0.3,0.3)
σ(Product planning, Build the relationships) (0.1,0.3,0.2)
σ(Product planning, Promotions) (0.2,0.3,0.3)
σ(Environment analysis and marketing research, Brand name) (0.1,0.2,0.2)
σ(Environment analysis and marketing research, Build the relationships) (0.1,0.2,0.2)
σ(Environment analysis and marketing research, Promotions) (0.1,0.2,0.2)
σ(Brand name, Build the relationships) (0.1,0.3,0.2)
σ(Brand name, Promotions) (0.1,0.3,0.3)
σ(Build the relationships, Promotions) (0.1,0.3,0.2)

The most dominant and capable marketing strategies adopted mutually are Product planning
and Promotions. Thus, to increase the efficiency of an organization, dyadic managements should make
powerful plans for products and use promotion skills to attract customers to purchase their
products. The membership degree of this dyad is (0.2, 0.3, 0.3), which shows that the amalgamated
effect of this dyad will increase the profitable growth of an organization up to 20%, the instruction manual
for company success up to 30%, and the longevity of the business up to 30%. Thus, to promote
the performance of an organization, super-dyad marketing communications are more energetic.
The method of determining the most effective dyads is explained in the following algorithm.
Algorithm 1
1. Input: The membership values A( xi ) of all nodes (marketing managers) x1 , x2 , ..., xn .
2. Input: The membership values B( Ti ) of all hyperedges T1 , T2 , ..., Tr .
3. Find the adjacent level between nodes xi and x j as,
4. do i from 1 → n − 1
5. do j from i + 1 → n
6. do k from 1 → r
7. if xi , x j ∈ Ek then
8. γ( xi , x j ) = maxk inf{ A( xi ), A( x j )}.
9. end if
10. end do
11. end do
12. end do


13. Find the best capable dyadic pair as maxi,j γ( xi , x j ).


14. do i from 1 → r − 1
15. do j from i + 1 → r
16. do k from 1 → r
17. if xk ∈ Ti ∩ Tj then
18. σ ( Ti , Tj ) = maxk inf{ B( Ti ), B( Tj )}.
19. end if
20. end do
21. end do
22. end do
23. Find the best effective super dyad management as maxi,j σ( Ti , Tj ).

Description of Algorithm 1: Lines 1 and 2 pass the input of an m-polar fuzzy set A on n vertices
x1 , x2 , . . . , xn and an m-polar fuzzy relation B on r edges T1 , T2 , ..., Tr . Lines 3 to 12 calculate the adjacent
level between each pair of nodes, and line 13 selects the most capable dyadic pair. The outer do loop
on line 4 initializes with i = 1, i.e., the loop runs for the first iteration. For any
ith iteration of the do loop on line 4, the do loop on line 5 runs n − i times and the do loop on line 6 runs
r times. If there exists a hyperedge Ek containing xi and x j , then line 8 is executed; otherwise the if
conditional terminates. This process continues, incrementing i for the next iteration and
maintaining the loop throughout the algorithm. For i = n − 1, the loop has calculated the adjacent
level for every pair of distinct vertices and terminates successfully at line 12. Similarly, the loops on
lines 14, 15 and 16 run and terminate successfully.
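Algorithm 1 is short enough to transcribe directly. The sketch below is our own Python rendering, not code from the paper; `inf` of two membership tuples is taken componentwise, a pair receives an adjacent level only when the two vertices share a hyperedge, and the toy memberships are hypothetical (the actual values of Figure 8 are not tabulated in the text).

```python
from itertools import combinations

def inf(p, q):
    """Componentwise minimum of two m-polar membership tuples."""
    return tuple(min(a, b) for a, b in zip(p, q))

def adjacent_levels(A, edges):
    """Lines 3-12: gamma(x, y) = inf{A(x), A(y)} for every pair x, y
    lying together in at least one hyperedge."""
    return {(x, y): inf(A[x], A[y])
            for x, y in combinations(A, 2)
            if any(x in e and y in e for e in edges)}

def best_pair(gamma):
    # Line 13: pick the dyad with the componentwise largest level
    # (ties broken by the sum of the components -- our own choice).
    return max(gamma, key=lambda p: (gamma[p], sum(gamma[p])))

# Hypothetical 3-polar data.
A = {"x1": (0.2, 0.3, 0.4), "x2": (0.2, 0.3, 0.3), "x3": (0.5, 0.6, 0.6)}
edges = [{"x1", "x2"}, {"x1", "x3"}]
levels = adjacent_levels(A, edges)
print(best_pair(levels))  # ('x1', 'x3')
```

The second stage of the algorithm (super-dyads of hyperedges via their shared vertices) has exactly the same shape with B and the Ti in place of A and the vertices.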

3.2. m-Polar Fuzzy Hypergraphs in Work Allotment Problem


In customer care centers, availability of employees plays a vital role in solving customer problems.
Such a department should ensure that the system has been managed carefully to overcome practical
difficulties. A lot of customers visit such centers to find a solution of their problems. In this part, focus
is given to alteration of duties for the employees taking leave. The problem is that employees are
taking leave without proper intimation and alteration. We now show the importance of m-polar fuzzy
hypergraphs for the allocation of duties to avoid any difficulties.
Consider the example of a customer care center consisting of 30 employees, and assume that six
workers are required to be present at their duties. We represent the employees as vertices, and the
degree of membership of each employee represents the work load, the percentage of available time, and
the number of workers who are also aware of the employee's work type. The ranges of values for present
time and for the workers knowing the type of work are given in Tables 7 and 8.

Table 7. Range of membership values of available time.

Time Membership value


5h 0.40
6h 0.50
8h 0.70
10 h 0.90

Table 8. Workers knowing the work type.

Workers Membership value


3 0.40
4 0.60
5 0.80
6 0.90


The degree of membership of each edge represents the common work load, percentage of available
time and number of workers who are also aware of the employee’s work type. This phenomenon can
be represented by a 3-polar fuzzy graph as shown in Figure 9.

Figure 9. 3-Polar fuzzy graph.

Using Algorithm 2, the strength of allocation and alteration of duties among employees is given
in Table 9.

Table 9. Alteration of duties.

Workers A ( ai , a j ) S ( ai , a j )
a1 , a2 (0.7,0.8,0.8) 0.77
a1 , a3 (0.7,0.9,0.8) 0.80
a2 , a3 (0.5,0.7,0.7) 0.63
a3 , a4 (0.7,0.6,0.8) 0.70
a3 , a5 (0.7,0.9,0.8) 0.80
a4 , a5 (0.9,0.9,0.9) 0.90
a5 , a6 (0.7,0.8,0.8) 0.77
a5 , a1 (0.5,0.6,0.7) 0.60
a1 , a6 (0.6,0.8,0.5) 0.63

Column 3 in Table 9 shows the percentage of alteration of duties. For example, in case of leave,
duties of a1 can be given to a3 and similarly for other employees.
The method for the calculation of alteration of duties is given in Algorithm 2.
Algorithm 2
1. Input: The n number of employees a1 , a2 , . . . , an .
2. Input: The number of edges E1 , E2 , . . . , Er .
3. Input: The incidence matrix Bij , where 1 ≤ i ≤ n, 1 ≤ j ≤ r.
4. Input the membership values of edges ξ 1 , ξ 2 , . . . , ξ r
5. do i from 1 → n
6. do j from 1 → n
7. do k from 1 → r
8. if ai , a j ∈ Ek then
9. do t from 1 → m
10. Pt ◦ A( ai , a j ) = | Pt ◦ Bik − Pt ◦ Bjk | + Pt ◦ ξ k
11. end do


12. end if
13. end do
14. end do
15. end do
16. do i from 1 → n
17. do j from 1 → n
18. if A( ai , a j ) > 0 then
19. S ( ai , a j ) = [ P1 ◦ A( ai , a j ) + P2 ◦ A( ai , a j ) + . . . + Pm ◦ A( ai , a j )] / m
20. end if
21. end do
22. end do

Description of Algorithm 2: Lines 1, 2, 3 and 4 pass the input of the membership values of the vertices,
the hyperedges and the m-polar fuzzy incidence matrix Bij . The nested loops on lines 5 to 15 calculate
the tth, 1 ≤ t ≤ m, strength of allocation and alteration of duties between each pair of employees.
The nested loops on lines 16 to 22 calculate the overall strength of allocation and alteration of duties between
each pair of employees. The net time complexity of the algorithm is O(n²rm).
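The core of Algorithm 2 fits in a few lines of Python. The sketch below is our own rendering (names and the toy incidence data are hypothetical, since the matrix behind Figure 9 is not reproduced in the text); for a pair of employees sharing edge Ek, the tth strength component is |Pt ◦ Bik − Pt ◦ Bjk| + Pt ◦ ξk, and S(ai, aj) averages the m components.

```python
def alteration_strengths(B, xi, edges, m=3):
    """Lines 5-22 of Algorithm 2.  B[i][k] is the membership tuple of
    employee i on edge k; xi[k] is the membership tuple of edge k."""
    n, S = len(B), {}
    for i in range(n):
        for j in range(i + 1, n):          # each unordered pair once
            for k, e in enumerate(edges):
                if i in e and j in e:
                    comp = tuple(abs(B[i][k][t] - B[j][k][t]) + xi[k][t]
                                 for t in range(m))
                    S[(i, j)] = round(sum(comp) / m, 2)
    return S

# Hypothetical data: two employees on one shared edge.
edges = [{0, 1}]
xi = [(0.5, 0.6, 0.7)]
B = [[(0.2, 0.2, 0.2)], [(0.1, 0.3, 0.2)]]
print(alteration_strengths(B, xi, edges))  # {(0, 1): 0.67}
```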

3.3. Availability of Books in Library


A library in a college is a collection of sources of information and similar resources, made
accessible to the student community for reference and examination preparation. A student preparing
for a given examination will use the knowledge sources such as

1. Prescribed textbooks (A)


2. Reference books in syllabus (B)
3. Other books from library (C)
4. Knowledgeable study materials (D)
5. E-gadgets and internet (E)

It is important to consider the maximum availability of the sources which students mostly use.
This phenomenon can be discussed using m-polar fuzzy hypergraphs. We now calculate the importance
of each source in the student community.
Consider the example of five library resources { A, B, C, D, E} in a college. We represent these
sources as vertices in a 3-polar fuzzy hypergraph. The degree of membership of each vertex represents
the percentage of students using a particular source for exam preparation, percentage of faculty
members using the sources and number of sources available. The degree of membership of each edge
represents the common percentage. The 3-polar fuzzy hypergraph is shown in Figure 10.


Figure 10. 3-Polar fuzzy hypergraph.

Using Algorithm 3, the strength of each library source is given in Table 10.

Table 10. Library sources.

Sources si T ( si ) S ( si )
A (1.7,1.7,1.4) 1.60
B (1.6,1.6,1.1) 1.43
E (1.6,1.6,1.0) 1.40
C (0.9,1.2,1.0) 1.03
D (0.8,1.2,1.0) 1.0

Column 3 in Table 10 shows that sources A and B are mostly used by students and faculty.
Therefore, these should be available in maximum number. There is also a need to confirm the
availability of source E to students and faculty.
The method for the calculation of percentage importance of the sources is given in Algorithm 3
whose net time complexity is O(nrm).
Algorithm 3
1. Input: The n number of sources s1 , s2 , . . . , sn .
2. Input: The number of edges E1 , E2 , . . . , Er .
3. Input: The incidence matrix Bij , where 1 ≤ i ≤ n, 1 ≤ j ≤ r.
4. Input: The membership values of edges ξ 1 , ξ 2 , . . . , ξ r
5. do i from 1 → n
6. A ( si ) = 1
7. C ( si ) = 1
8. do k from 1 → r
9. if si ∈ Ek then
10. A(si ) = max{ A(si ), ξ k }
11. C (si ) = min{C (si ), Bik }
12. end if
13. end do
14. T ( si ) = C ( si ) + A ( si )
15. end do


16. do i from 1 → n
17. if T (si ) > 0 then
18. S ( si ) = [ P1 ◦ T (si ) + P2 ◦ T (si ) + . . . + Pm ◦ T (si )] / m
19. end if
20. end do

Description of Algorithm 3: Lines 1, 2, 3 and 4 pass the input of the membership values of the vertices,
the hyperedges and the m-polar fuzzy incidence matrix Bij . The loop on lines 5 to 15 calculates the
degree of usage and availability of the library sources. The loop on lines 16 to 20 calculates the
strength of each library source.
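Algorithm 3 can likewise be sketched in Python. This is our own rendering with hypothetical data (the incidence values behind Figure 10 are not tabulated in the text); note that we initialize the max-accumulator A(si) at 0 rather than 1, since a max-accumulator started at 1 could never be changed by line 10.

```python
def source_strengths(edges, xi, B, m=3):
    """For each source s: A(s) is the componentwise maximum edge
    membership over the edges containing s, C(s) the componentwise
    minimum incidence value, T(s) = C(s) + A(s), and the final strength
    averages the m components of T(s)."""
    strengths = {}
    for s in B:
        A = tuple(max((xi[k][t] for k, e in enumerate(edges) if s in e),
                      default=0.0) for t in range(m))
        C = tuple(min((B[s][k][t] for k, e in enumerate(edges) if s in e),
                      default=1.0) for t in range(m))
        strengths[s] = round(sum(A[t] + C[t] for t in range(m)) / m, 2)
    return strengths

# Hypothetical data: two sources on one common edge.
edges = [{"A", "B"}]
xi = [(0.6, 0.6, 0.4)]
B = {"A": [(0.9, 0.9, 0.8)], "B": [(0.7, 0.7, 0.7)]}
print(source_strengths(edges, xi, B))  # {'A': 1.4, 'B': 1.23}
```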

4. Conclusions
Hypergraphs are generalizations of graphs. Many problems which cannot be handled by graphs
can be solved using hypergraphs. mF graph theory has numerous applications in various fields of
science and technology, including artificial intelligence, operations research and decision making.
An mF hypergraph constitutes a generalization of the notion of an mF graph. mF hypergraphs
play an important role in discussing multipolar uncertainty among several individuals. In this
research article, we have presented certain concepts of regular mF hypergraphs and applications
of mF hypergraphs in decision-making problems. We aim to generalize our notions to (1) mF soft
hypergraphs, (2) soft rough mF hypergraphs, (3) soft rough hypergraphs, and (4) intuitionistic fuzzy
rough hypergraphs.
Author Contributions: Muhammad Akram and Gulfam Shahzadi conceived and designed the experiments;
Muhammad Akram performed the experiments; Muhammad Akram and Gulfam Shahzadi analyzed the data;
Gulfam Shahzadi wrote the paper.
Conflicts of Interest: The authors declare no conflict of interest regarding the publication of this research article.

References
1. Zhang, W.R. Bipolar fuzzy sets and relations: A computational framework for cognitive modeling and
multiagent decision analysis. In Proceedings of the Fuzzy Information Processing Society Biannual
Conference, Industrial Fuzzy Control and Intelligent Systems Conference and the NASA Joint Technology
Workshop on Neural Networks and Fuzzy Logic, San Antonio, TX, USA, 18–21 December 1994; pp. 305–309,
doi: 10.1109/IJCF.1994.375115.
2. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
3. Chen, J.; Li, S.; Ma, S.; Wang, X. m-Polar fuzzy sets: An extension of bipolar fuzzy sets. Sci. World J. 2014, 8.
4. Zadeh, L.A. Similarity relations and fuzzy orderings. Inf. Sci. 1971, 3, 177–200.
5. Kaufmann, A. Introduction à la Théorie des Sous-Ensembles Flous à l'Usage des Ingénieurs (Fuzzy Sets Theory); Masson:
Paris, France, 1973.
6. Rosenfeld, A. Fuzzy Graphs, Fuzzy Sets and Their Applications; Academic Press: New York, NY, USA, 1975;
pp. 77–95.
7. Bhattacharya, P. Remark on fuzzy graphs. Pattern Recognit. Lett. 1987, 6, 297–302.
8. Mordeson, J.N.; Nair, P.S. Fuzzy graphs and fuzzy hypergraphs. In Studies in Fuzziness and Soft Computing;
Springer-Verlag: Berlin Heidelberg, Germany, 2000.
9. Akram, M. Bipolar fuzzy graphs. Inf. Sci. 2011, 181, 5548–5564.
10. Li, S.; Yang, X.; Li, H.; Ma, M. Operations and decompositions of m-polar fuzzy graphs. Basic Sci. J. Text. Univ.
2017, 30, 149–162.
11. Chen, S.M. Interval-valued fuzzy hypergraph and fuzzy partition. IEEE Trans. Syst. Man Cybern. B Cybern.
1997, 27, 725–733.
12. Lee-Kwang, H.; Lee, K.M. Fuzzy hypergraph and fuzzy partition. IEEE Trans. Syst. Man Cybern. 1995, 25,
196–201.
13. Parvathi, R.; Thilagavathi, S.; Karunambigai, M.G. Intuitionistic fuzzy hypergraphs. Cybern. Inf. Technol. 2009,
9, 46–48.

63
Mathematics 2018, 6, 28

14. Samanta, S.; Pal, M. Bipolar fuzzy hypergraphs. Int. J. Fuzzy Log. Intell. Syst. 2012, 2, 17–28.
15. Akram, M.; Dudek, W.A.; Sarwar, S. Properties of bipolar fuzzy hypergraphs. Ital. J. Pure Appl. Math. 2013, 31,
141–160.
16. Akram, M.; Luqman, A. Bipolar neutrosophic hypergraphs with applications. J. Intell. Fuzzy Syst. 2017, 33,
1699–1713, doi:10.3233/JIFS-162207.
17. Akram, M.; Sarwar, M. Novel application of m-polar fuzzy hypergraphs. J. Intell. Fuzzy Syst. 2017, 32,
2747–2762.
18. Akram, M.; Sarwar, M. Transversals of m-polar fuzzy hypergraphs with applications. J. Intell. Fuzzy Syst.
2017, 33, 351–364.
19. Akram, M.; Adeel, M. mF labeling graphs with application. Math. Comput. Sci. 2016, 10, 387–402.
20. Akram, M.; Waseem, N. Certain metrics in m-polar fuzzy graphs. N. Math. Nat. Comput. 2016, 12, 135–155.
21. Akram, M.; Younas, H.R. Certain types of irregular m-polar fuzzy graphs. J. Appl. Math. Comput. 2017, 53,
365–382.
22. Akram, M.; Sarwar, M. Novel applications of m-polar fuzzy competition graphs in decision support system.
Neural Comput. Appl. 2017, 1–21, doi:10.1007/s00521-017-2894-y.
23. Chen, S.M. A fuzzy reasoning approach for rule-based systems based on fuzzy logics. IEEE Trans. Syst. Man
Cybern. B Cybern. 1996, 26, 769–778.
24. Chen, S.M.; Lin, T.E.; Lee, L.W. Group decision making using incomplete fuzzy preference relations based on
the additive consistency and the order consistency. Inf. Sci. 2014, 259, 1–15.
25. Chen, S.M.; Lee, S.H.; Lee, C.H. A new method for generating fuzzy rules from numerical data for handling
classification problems. Appl. Artif. Intell. 2001, 15, 645–664.
26. Horng, Y.J.; Chen, S.M.; Chang, Y.C.; Lee, C.H. A new method for fuzzy information retrieval based on fuzzy
hierarchical clustering and fuzzy inference techniques. IEEE Trans. Fuzzy Syst. 2005, 13, 216–228.
27. Narayanamoorthy, S.; Tamilselvi, A.; Karthick, P.; Kalyani, S.; Maheswari, S. Regular and totally regular fuzzy
hypergraphs. Appl. Math. Sci. 2014, 8, 1933–1940.
28. Radhamani, C.; Radhika, C. Isomorphism on fuzzy hypergraphs. IOSR J. Math. 2012, 2, 24–31.
29. Sarwar, M.; Akram, M. Certain algorithms for computing strength of competition in bipolar fuzzy graphs.
Int. J. Uncertain. Fuzz. 2017, 25, 877–896.
30. Sarwar, M.; Akram, M. Representation of graphs using m-polar fuzzy environment. Ital. J. Pure Appl. Math.
2017, 38, 291–312.
31. Sarwar, M.; Akram, M. Novel applications of m-polar fuzzy concept lattice. N. Math. Nat. Comput. 2017, 13,
261–287.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
On Generalized Roughness in LA-Semigroups
Noor Rehman 1 , Choonkil Park 2,∗ , Syed Inayat Ali Shah 3 and Abbas Ali 1
1 Department of Mathematics and Statistics, Riphah International University, Hajj Complex I-14,
Islamabad 44000, Pakistan; [email protected] (N.R.); [email protected] (A.A.)
2 Department of Mathematics, Research Institute of Natural Sciences, Hanyang University, Seoul 04763, Korea
3 Department of Mathematics, Islamia College University, Peshawar 25120, Pakistan; [email protected]
* Correspondence: [email protected]; Tel.: +82-2-2220-0892; Fax: +82-2-2281-0019

Received: 30 May 2018; Accepted: 25 June 2018; Published: 27 June 2018

Abstract: The generalized roughness in LA-semigroups is introduced, and several properties of lower
and upper approximations are discussed. We provide examples to show that the lower approximation
of a subset of an LA-semigroup may not be an LA-subsemigroup/ideal of LA-semigroup under a set
valued homomorphism.

Keywords: roughness; generalized roughness; LA-semigroup

MSC: 08A72; 54A40; 03B52; 20N25

1. Introduction
The algebraic structure of a left almost semigroup, abbreviated as an LA-semigroup, was
introduced by Naseerudin and Kazim in [1]. Later, Mushtaq and others investigated the structure
of LA-semigroups and added some important results related to LA-semigroups (see [2–7]).
LA-semigroups are also called AG-groupoids. Ideal theory, which was introduced in [8], plays a
basic role in the study of LA-semigroups. Pawlak was the first to discuss rough sets with the help of
equivalence relations among the elements of a set, which is a key point in discussing uncertainty [9].
There are at least two methods for the development of rough set theory, the constructive and axiomatic
approaches. In constructive methods, lower and upper approximations are constructed from the
primitive notions, such as equivalence relations on a universe and neighborhood system. In rough sets,
equivalence classes play an important role in the construction of both lower and upper approximations
(see [10]). But sometimes in algebraic structures, as is the case in LA-semigroups, finding equivalence
relations is too difficult. Many authors have worked on this to initiate rough sets without equivalence
relations. Couso and Dubois in [11] initiated the generalized rough set, or T-rough set, with the help of
a set valued mapping. It is more general than the Pawlak rough set.
In this paper, we initiate the study of generalized roughness in LA-semigroups and of generalized
rough sets applied in the crisp form of LA-semigroups. Approximations of LA-subsemigroups and
approximations of ideals in LA-semigroups are given.

2. Preliminaries
A groupoid (S, ∗) is called an LA-semigroup if it satisfies the left invertive law

( a ∗ b) ∗ c = (c ∗ b) ∗ a for all a, b, c ∈ S.

Throughout the paper, S and R will denote LA-semigroups unless stated otherwise. Let S be an
LA-semigroup and A be a subset of S. Then A is called an LA-subsemigroup of S if A2 ⊆ A, that is,
ab ∈ A for all a, b ∈ A. A subset A of S is called left ideal (or right ideal) of S if SA ⊆ A (or AS ⊆ A).

Mathematics 2018, 6, 112; doi:10.3390/math6070112 www.mdpi.com/journal/mathematics



An LA-subsemigroup A of S is called a bi-ideal of S if ( AS) A ⊆ A. An LA-subsemigroup A of S is
called an interior ideal of S if (SA)S ⊆ A. An element a of S is called idempotent if a2 = a. If every
element of S is idempotent, then S is called idempotent.
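Since the left invertive law is the defining identity, a finite multiplication table can be verified by brute force. The helper below is our own sketch; as data we use the table of the LA-semigroup S of Example 3 below.

```python
from itertools import product

def is_la_semigroup(m):
    """Check the left invertive law (a*b)*c == (c*b)*a for every triple
    of a finite groupoid given as a dict-of-dicts multiplication table."""
    return all(m[m[a][b]][c] == m[m[c][b]][a]
               for a, b, c in product(m, repeat=3))

# Multiplication table of the LA-semigroup S of Example 3.
S = {"a": {"a": "b", "b": "c", "c": "b"},
     "b": {"a": "c", "b": "c", "c": "c"},
     "c": {"a": "c", "b": "c", "c": "c"}}
print(is_la_semigroup(S))  # True
# S is not associative: (a*a)*b = c but a*(a*b) = b.
print(S[S["a"]["a"]]["b"], S["a"][S["a"]["b"]])
```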

3. Rough Sets
In this section, we study Pawlak roughness and generalized roughness in LA-semigroups.

3.1. Pawlak Approximations in LA-Semigroups


The concept of a rough set was introduced by Pawlak in [9]. According to Pawlak, rough set
theory is based on the approximation of a set by a pair of sets, called the lower approximation and the upper
approximation of that set. Let U be a nonempty finite set with an equivalence relation R. We say
(U, R) is the approximation space. If A ⊆ U can be written as the union of some classes obtained from
R, then A is called definable; otherwise, it is not definable. The approximations of A are
as follows:

R̲ ( A) = { x ∈ U : [ x ] R ⊆ A} ,

R̄ ( A) = { x ∈ U : [ x ] R ∩ A ≠ ∅} .

The pair ( R̲ ( A) , R̄ ( A)) is a rough set, where R̲ ( A) ≠ R̄ ( A) .
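With an equivalence relation encoded by its partition into classes, both approximations are one-liners. The sketch below is our own illustration on a toy universe, not an example from the paper:

```python
def pawlak_approximations(partition, A):
    """Lower and upper Pawlak approximations of A, where the equivalence
    relation R is given by its classes (a partition of the universe)."""
    lower = {x for block in partition if block <= A for x in block}
    upper = {x for block in partition if block & A for x in block}
    return lower, upper

# Toy universe U = {1, 2, 3, 4} with classes {1, 2} and {3, 4}.
lower, upper = pawlak_approximations([{1, 2}, {3, 4}], {1, 2, 3})
print(lower, upper)  # {1, 2} {1, 2, 3, 4}
```

Here the set {1, 2, 3} is not definable, since its lower and upper approximations differ.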

Definition 1. [5] Let ρ be an equivalence relation on S. Then ρ is called a congruence relation on S if

( a, b) ∈ ρ implies that ( ay, by) ∈ ρ and (ya, yb) ∈ ρ for all a, b, y ∈ S.

Definition 2. [8] Let ρ be a congruence relation on S. Then the approximation of S is defined by
ρ ( A) = ( ρ̲ ( A) , ρ̄ ( A)) for every A ∈ P (S) , where P (S) is the power set of S, and

ρ̲ ( A) = { x ∈ S : [ x ]ρ ⊆ A }

and

ρ̄ ( A) = { x ∈ S : [ x ]ρ ∩ A ≠ ∅ } .

3.2. Generalized Roughness or T-Roughness in LA-Semigroups


A generalized rough set is the generalization of Pawlak’s rough set. In this case, we use set valued
mappings instead of congruence classes.

Definition 3. [11] Let X and Y be two nonempty sets and B ⊆ Y. Let T : X → P (Y ) be a set valued (SV)
mapping, where P (Y ) denotes the set of all nonempty subsets of Y. The upper approximation T̄ ( B) and the
lower approximation T̲ ( B) of B with respect to T are defined by

T̄ ( B) = { x ∈ X | T ( x ) ∩ B ≠ ∅}

and

T̲ ( B) = { x ∈ X | T ( x ) ⊆ B} .
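Definition 3 translates directly into code. The sketch below uses our own helper names and, as data, the set valued homomorphism of Example 1 below, T(a) = T(c) = {a, b, c} and T(b) = {b, c}:

```python
def upper_approx(T, B):
    """Upper approximation: all x such that T(x) meets B."""
    return {x for x in T if T[x] & B}

def lower_approx(T, B):
    """Lower approximation: all x such that T(x) is contained in B."""
    return {x for x in T if T[x] <= B}

# The set valued homomorphism of Example 1.
T = {"a": {"a", "b", "c"}, "b": {"b", "c"}, "c": {"a", "b", "c"}}
print(sorted(upper_approx(T, {"a"})))       # ['a', 'c']
print(sorted(lower_approx(T, {"b", "c"})))  # ['b']
```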

Definition 4. [12] Let X and Y be two nonempty sets and B ⊆ Y. Let T : X → P (Y ) be an SV mapping,
where P (Y ) denotes the set of all nonempty subsets of Y. Then ( T̲ ( B) , T̄ ( B)) is called a T-rough set.

Definition 5. Let R and S be two LA-semigroups and T : R → P (S) be an SV mapping. Then T is called an
SV homomorphism if T ( a) T (b) ⊆ T ( ab) for all a, b ∈ R.


Example 1. Let R = { a, b, c} with the following multiplication table:

· a b c
a a a a
b c c c
c a a c

Then R is an LA-semigroup. Define an SV mapping T : R → P ( R) by T ( a) = T (c) = { a, b, c} and


T (b) = {b, c} . Then clearly T is an SV homomorphism.

Example 2. Let S = { a, b, c, d, e} with the following multiplication table:

· a b c d e
a e b a b c
b b b b b b
c c b e b a
d b b b b b
e a b c b e

Then S is an LA-semigroup. Define an SV mapping T : S → P (S) by T ( a) = T (b) = T (c) = T (e) =


{ a, b, c, d, e} and T (d) = {b, d} . Clearly T is an SV homomorphism.

Definition 6. Let R and S be two LA-semigroups and T : R → P (S) be an SV mapping. Then T is called a
strong set valued (SSV) homomorphism if T ( a) T (b) = T ( ab) for all a, b ∈ R.

Example 3. Let R = { a, b, c} with the following multiplication table:

· a b c
a a a a
b c c c
c a a c

Then R is an LA-semigroup. Let S = { a, b, c} with the following multiplication table:

· a b c
a b c b
b c c c
c c c c

Then S is an LA-semigroup. Define an SV mapping T : R → P (S) by T ( a) = T (c) = {c} and


T (b) = {b, c} . Then T is an SSV homomorphism.

Proposition 1. Let T : R → P (S) be an SV homomorphism. If ∅ ≠ A, B ⊆ S, then T̄ ( A) T̄ ( B) ⊆ T̄ ( AB) .

Proof. Let x ∈ T̄ ( A) T̄ ( B) . Then x = ab, where a ∈ T̄ ( A) and b ∈ T̄ ( B). Then T ( a) ∩ A ≠ ∅ and
T (b) ∩ B ≠ ∅. Therefore, there exist y, z ∈ S such that y ∈ T ( a) ∩ A and z ∈ T (b) ∩ B, which implies
that y ∈ T ( a), y ∈ A, z ∈ T (b), and z ∈ B. It follows that yz ∈ T ( a) T (b) ⊆ T ( ab) and yz ∈ AB. Thus,
yz ∈ T ( ab) ∩ AB, so T ( ab) ∩ AB ≠ ∅. It follows that ab ∈ T̄ ( AB) . Hence, x ∈ T̄ ( AB); therefore,
T̄ ( A) T̄ ( B) ⊆ T̄ ( AB) .

The following example shows that equality in Proposition 1 may not hold.

Example 4. Consider the LA-semigroup R of Example 1.


Define an SV mapping T : R → P ( R) by T ( a) = T (c) = { a, b, c} and T (b) = {b, c}. Then T is an
SV homomorphism. Let A = { a, b} and B = {b} . Then T̄ ( A) = { a, b, c} and T̄ ( B) = { a, b, c}. Therefore,
T̄ ( A) T̄ ( B) = { a, b, c} { a, b, c} = { a, c}, and AB = { a, b} {b} = { a, c} . Thus, T̄ ( AB) = { a, b, c} . Hence,
T̄ ( AB) ⊄ T̄ ( A) T̄ ( B) .

Proposition 2. Let T : R → P (S) be an SSV homomorphism. If ∅ ≠ A, B ⊆ S, then
T̲ ( A) T̲ ( B) ⊆ T̲ ( AB) .

Proof. Let x ∈ T̲ ( A) T̲ ( B) . Then x = ab, where a ∈ T̲ ( A) and b ∈ T̲ ( B). Therefore, T ( a) ⊆ A and
T (b) ⊆ B. Thus, T ( a) T (b) ⊆ AB. Therefore, T ( ab) ⊆ AB, which implies ab ∈ T̲ ( AB) . It follows
that x ∈ T̲ ( AB). Hence T̲ ( A) T̲ ( B) ⊆ T̲ ( AB) .

The following example shows that equality in Proposition 2 may not hold.

Example 5. Consider the LA-semigroups R and S of Example 3. Define an SV mapping T : R → P (S)
by T ( a) = T (c) = {c} and T (b) = {b, c} . Then T is an SSV homomorphism. Let A = { a, c} and
B = {b, c} ⊆ S. Then T̲ ( A) = { a, c} and T̲ ( B) = { a, b, c}. Thus, T̲ ( A) T̲ ( B) = { a, c} { a, b, c} = { a, c},
and AB = { a, c} {b, c} = {b, c} . Thus, T̲ ( AB) = { a, b, c} . Hence, T̲ ( AB) ⊄ T̲ ( A) T̲ ( B) .

The fact that the groupoids considered are LA-semigroups is important in Propositions 3 and 4
and in the examples.

Proposition 3. Let T : R → P (S) be an SV homomorphism. If H is an LA-subsemigroup of S, then T̄ ( H ) is
an LA-subsemigroup of R.

Proof. Let x, y ∈ T̄ ( H ) . Then T ( x ) ∩ H ≠ ∅ and T (y) ∩ H ≠ ∅. Thus, there exist a, b ∈ S such
that a ∈ T ( x ) ∩ H and b ∈ T (y) ∩ H. Thus, a ∈ T ( x ) , a ∈ H and b ∈ T (y) , b ∈ H. Therefore,
ab ∈ T ( x ) T (y) ⊆ T ( xy) and ab ∈ H. Hence, ab ∈ T ( xy) ∩ H, and T ( xy) ∩ H ≠ ∅. Therefore,
xy ∈ T̄ ( H ) . Hence, T̄ ( H ) is an LA-subsemigroup of R.

Proposition 4. Let T : R → P (S) be an SSV homomorphism. If H is an LA-subsemigroup of S, then T̲ ( H )
is an LA-subsemigroup of R.

Proof. Let x, y ∈ T̲ ( H ) . Then T ( x ) ⊆ H and T (y) ⊆ H. Therefore, T ( x ) T (y) ⊆ HH = H 2 . Thus,
T ( xy) ⊆ H 2 ⊆ H, which implies xy ∈ T̲ ( H ) . Hence, T̲ ( H ) is an LA-subsemigroup
of R.

The following example shows that, in the case of an SV homomorphism, the lower approximation T̲ ( A)
may not be an LA-subsemigroup.

Example 6. Consider the LA-semigroup S of Example 3.


Define an SV mapping T : S → P (S) by T (b) = T (c) = { a, b, c} and T ( a) = {b, c} . Then T is an
SV homomorphism. Let A = {b, c} ⊆ S. Then A is an LA-subsemigroup of S, and T̲ ( A) = { a} . It follows
that T̲ ( A) T̲ ( A) = { a} { a} = {b} ⊄ T̲ ( A) . Hence, T̲ ( A) is not an LA-subsemigroup of S.

Proposition 5. Let T : R → P (S) be an SV homomorphism. If A is a left ideal of S, then T̄ ( A) is a left ideal
of R.

Proof. Let x and r be elements of T̄ ( A) and R, respectively. Then T ( x ) ∩ A ≠ ∅, so there exists a ∈ S
such that a ∈ T ( x ) ∩ A. Thus, a ∈ T ( x ) and a ∈ A. Since r ∈ R, there exists a y ∈ S such that y ∈ T (r ).
Hence, ya ∈ T (r ) a ⊆ SA ⊆ A. Thus, ya ∈ A and ya ∈ T (r ) T ( x ) ⊆ T (rx ) . Hence, ya ∈ T (rx ) ∩ A.
It follows that T (rx ) ∩ A ≠ ∅. Therefore, rx ∈ T̄ ( A) . Therefore, T̄ ( A) is a left ideal of R.


Corollary 1. Let T : R → P (S) be an SV homomorphism. If A is a right ideal of S, then T̄ ( A) is a right ideal
of R.

Corollary 2. Let T : R → P (S) be an SV homomorphism. If A is an ideal of S, then T̄ ( A) is an ideal of R.

Proposition 6. Let T : R → P (S) be an SSV homomorphism. If A is a left ideal of S, then T̲ ( A) is a left
ideal of R.

Proof. Let x ∈ T̲ ( A) and r ∈ R. Then T ( x ) ⊆ A. Since r ∈ R, T (r ) ⊆ S. Thus, T (r ) T ( x ) ⊆ SA ⊆ A,
and T (rx ) ⊆ A. It follows that rx ∈ T̲ ( A) . Hence, T̲ ( A) is a left ideal of R.

The following example shows that, in the case of an SV homomorphism, T ( A) may not be a
left ideal.

Example 7. Consider the LA-semigroup S of Example 2.


Define an SV mapping T : S → P (S) by T ( a) = T (b) = T (c) = T (e) = { a, b, c, d, e} and
T (d) = {b, d} . Clearly T is an SV homomorphism. Let A = {b, d} be a subset of S. Then A is a left ideal of S,
and T ( A) = {d} . Hence, ST ( A) = {b} ⊄ T ( A) . Therefore, T ( A) is not a left ideal of S.

Corollary 3. Let T : R → P (S) be an SSV homomorphism. If A is a right ideal of S, then T ( A) is a right


ideal of R.

Corollary 4. Let T : R → P (S) be an SSV homomorphism. If A is an ideal of S, then T ( A) is an ideal of R.

Proposition 7. Let R and S be two idempotent LA-semigroups and T : R → P (S) be an SV homomorphism.


If A, B are ideals of S, then
T ( A) ∩ T ( B) = T ( AB) .

Proof. Since AB ⊆ AS ⊆ A, AB ⊆ A. Thus, T ( AB) ⊆ T ( A), and AB ⊆ SB ⊆ B. It follows that


AB ⊆ B. Thus, T ( AB) ⊆ T ( B) . Hence, T ( AB) ⊆ T ( A) ∩ T ( B) .
Let c ∈ T ( A) ∩ T ( B) . Then c ∈ T ( A) and c ∈ T ( B). Thus, T (c) ∩ A ≠ ∅, and T (c) ∩ B ≠ ∅,
so there exist x, y ∈ S such that x ∈ T (c) ∩ A and y ∈ T (c) ∩ B. It follows that x ∈ T (c) , x ∈ A,
and y ∈ T (c) , y ∈ B. Thus, xy ∈ T (c) T (c) ⊆ T (cc) = T (c), and x ∈ A and y ∈ B. Hence, xy ∈ AB,
so xy ∈ T (c) ∩ AB. Thus, T (c) ∩ AB ≠ ∅. Hence, c ∈ T ( AB) . Thus, T ( A) ∩ T ( B) ⊆ T ( AB) . Therefore,

T ( A) ∩ T ( B) = T ( AB) ,

as desired.

Proposition 8. Let R and S be two idempotent LA-semigroups and T : R → P (S) be an SSV homomorphism.
If A and B are ideals of S, then
T ( A) ∩ T ( B) = T ( AB) .

Proof. Let AB ⊆ AS ⊆ A. Then AB ⊆ A. Therefore, T ( AB) ⊆ T ( A), and AB ⊆ SB ⊆ B. Hence,


T ( AB) ⊆ T ( B). Therefore,
T ( AB) ⊆ T ( A) ∩ T ( B) .

Let c ∈ T ( A) ∩ T ( B) . Then c ∈ T ( A) and c ∈ T ( B) . Hence, T (c) ⊆ A and T (c) ⊆ B, so


T (c) T (c) ⊆ AB. Thus, T (cc) ⊆ AB. Thus, T (c) ⊆ AB. Hence, c ∈ T ( AB) . This implies that
T ( A) ∩ T ( B) ⊆ T ( AB) . Therefore,

T ( A) ∩ T ( B) = T ( AB) ,


as desired.
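Proposition 8 can be spot-checked on a toy instance of our own construction (not from the paper): the semilattice ({0, 1, 2}, min) is a commutative idempotent semigroup, hence an idempotent LA-semigroup, and the identity set-valued mapping T ( x ) = { x } is a strong set-valued homomorphism, since T ( x ) T (y) = {min( x, y)} = T ( xy). Under this T, the lower approximation of any subset is the subset itself, so the proposition reduces to A ∩ B = AB for ideals A, B:

```python
def set_product(mul, A, B):
    # elementwise product AB = {ab : a in A, b in B}
    return {mul(a, b) for a in A for b in B}

mul = min                          # ({0,1,2}, min): commutative and idempotent
ideals = [{0}, {0, 1}, {0, 1, 2}]  # the downward-closed subsets are exactly the ideals here

# With T(x) = {x}, the lower approximation of A equals A, so
# Proposition 8 specializes to: A ∩ B = AB for all ideals A, B.
for A in ideals:
    for B in ideals:
        assert A & B == set_product(mul, A, B)
print("A ∩ B = AB holds for all pairs of ideals")
```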

Proposition 9. Let T : R → P (S) be an SV homomorphism. If A is a bi-ideal of S, then T ( A) is a bi-ideal


of R.

Proof. Let x, y ∈ T ( A) and r ∈ R. Then T ( x ) ∩ A ≠ ∅ and T (y) ∩ A ≠ ∅. Hence, there exist a, b ∈ S
such that a ∈ T ( x ) ∩ A and b ∈ T (y) ∩ A, so a ∈ T ( x ) , a ∈ A, and b ∈ T (y) , b ∈ A. Since r ∈ R,
there is a c ∈ S such that c ∈ T (r ) . Now, ( ac) b ∈ ( T ( x ) T (r )) T (y) ⊆ T ( xr ) T (y) ⊆ T (( xr ) y) .
Thus, ( ac) b ∈ T (( xr ) y) and ( ac) b ∈ A, so ( ac) b ∈ T (( xr ) y) ∩ A. Hence, T (( xr ) y) ∩ A ≠ ∅.
Thus, ( xr ) y ∈ T ( A) . Therefore, T ( A) is a bi-ideal of R.

Proposition 10. Let T : R → P (S) be an SSV homomorphism. If A is a bi-ideal of S, then T ( A) is a bi-ideal


of R.

Proof. Let x, y ∈ T ( A) and r ∈ R. Then T ( x ) ⊆ A and T (y) ⊆ A. Since r ∈ R, T (r ) ⊆ S. Now,


T (( xr ) y) = T ( xr ) T (y) = ( T ( x ) T (r )) T (y) ⊆ ( AS) A ⊆ A. Therefore, T (( xr ) y) ⊆ A. Thus,
( xr ) y ∈ T ( A) . Hence, T ( A) is a bi-ideal of R.

The following example shows that, in the case of an SV homomorphism, T ( A) may not be
a bi-ideal.

Example 8. Consider the LA-semigroup S of Example 2.


Define an SV mapping T : S → P (S) by T ( a) = T (b) = T (c) = T (e) = { a, b, c, d, e} and
T (d) = {b} . Then T is an SV homomorphism. Let A = {b, d}. Then A is a bi-ideal of S, and T ( A) = {d} .
Now, ( T ( A) S) T ( A) = {b} ⊄ T ( A) . Hence, T ( A) is not a bi-ideal of S.

Proposition 11. Let T : R → P (S) be an SV homomorphism. If A is an interior ideal of S, then T ( A) is an


interior ideal of R.

Proof. Let r ∈ T ( A), and a, b ∈ R. Then T (r ) ∩ A ≠ ∅. Thus, there exists a c ∈ S such that
c ∈ T (r ) ∩ A. This implies that c ∈ T (r ) and c ∈ A. Since a, b ∈ R, there exist x, y ∈ S such that
x ∈ T ( a) and y ∈ T (b) . It follows that ( xc) y ∈ ( T ( a) T (r )) T (b) ⊆ T (( ar ) b), and ( xc) y ∈ A.
Therefore, ( xc) y ∈ T (( ar ) b) ∩ A. Thus, T (( ar ) b) ∩ A ≠ ∅, so ( ar ) b ∈ T ( A) . Hence, T ( A) is an
interior ideal of R.

Proposition 12. Let T : R → P (S) be an SSV homomorphism. If A is an interior ideal of S, then T ( A) is an


interior ideal of R.

Proof. Let r ∈ T ( A) and a, b ∈ R. Then T (r ) ⊆ A. Since a, b ∈ R, T ( a) ⊆ S, T (b) ⊆ S. It follows


that T (( ar ) b) = T ( ar ) T (b) = ( T ( a) T (r )) T (b) ⊆ (SA) S ⊆ A. Therefore, T (( ar ) b) ⊆ A. Thus,
( ar ) b ∈ T ( A) . Hence, T ( A) is an interior ideal of R.

Definition 7. A subset A of an LA-semigroup S is called a quasi-ideal of S if SA ∩ AS ⊆ A.

Proposition 13. Let T : R → P (S) be an SSV homomorphism. If A is a quasi-ideal of S, then T ( A) is a


quasi-ideal of R.

Proof. Let A be a quasi-ideal of S. We prove T ( AS ∩ SA) ⊆ T ( A) . Let x ∈ T ( AS ∩ SA) . Then


T ( x ) ⊆ AS ∩ SA ⊆ A. Therefore, T ( x ) ⊆ A. Therefore, x ∈ T ( A) . Thus, T ( AS ∩ SA) ⊆ T ( A) .
Hence, T ( A) is a quasi-ideal of R.


Proposition 14. Let T : R → P (S) be an SV homomorphism. If A is a quasi-ideal of S, then T ( A) is a


quasi-ideal of R.

Proof. Let A be a quasi-ideal of S. Then we have to show that T ( AS ∩ SA) ⊆ T ( A) . Let x ∈


T ( AS ∩ SA) . Then T ( x ) ∩ ( AS ∩ SA) ≠ ∅. Thus, there exists a y ∈ S such that y ∈ T ( x ) ∩ ( AS ∩ SA) .
This implies that y ∈ T ( x ) and y ∈ ( AS ∩ SA) ⊆ A, so y ∈ T ( x ) and y ∈ A. Thus, y ∈ T ( x ) ∩ A.
Therefore, x ∈ T ( A) . Hence, T ( AS ∩ SA) ⊆ T ( A) . Therefore, T ( A) is a quasi-ideal of R.

Definition 8. An ideal P of an LA-semigroup S with left identity e is said to be prime if AB ⊆ P implies either
A ⊆ P or B ⊆ P for all ideals A, B of S.

Proposition 15. Let T : R → P (S) be an SSV homomorphism. If A is a prime ideal of S, then T ( A) is a


prime ideal of R.

Proof. Since A is an ideal of S, by Corollary 2, T ( A) is an ideal of R. Let xy ∈ T ( A) . Then


T ( xy) ∩ A ≠ ∅. Thus, there exists a z ∈ S such that z ∈ T ( xy) ∩ A, so z ∈ T ( xy) = T ( x ) T (y),
and z ∈ A. Since z = ab ∈ T ( x ) T (y), ab ∈ A, and A is a prime ideal of S, a ∈ A or b ∈ A, which
implies that a ∈ T ( x ) and a ∈ A or that b ∈ T (y) and b ∈ A. Therefore, a ∈ T ( x ) ∩ A or b ∈ T (y) ∩ A.
Thus, T ( x ) ∩ A ≠ ∅ or T (y) ∩ A ≠ ∅. It follows that x ∈ T ( A) or y ∈ T ( A) . Hence, T ( A) is a prime
ideal of R.

Proposition 16. Let T : R → P (S) be an SSV homomorphism. If A is a prime ideal of S, then T ( A) is a


prime ideal of R.

Proof. Since A is an ideal of S, by Corollary 4, T ( A) is an ideal of R. Let xy ∈ T ( A) . Then T ( xy) ⊆ A.


Let z ∈ T ( xy) = T ( x ) T (y) , where z = ab ∈ T ( x ) T (y) . Then a ∈ T ( x ) , b ∈ T (y), and ab ∈ A. Since
A is a prime ideal of S, a ∈ A or b ∈ A. Thus, a ∈ T ( x ) ⊆ A or b ∈ T (y) ⊆ A. Thus, x ∈ T ( A) or
y ∈ T ( A) . Hence, T ( A) is a prime ideal of R.

Remark 1. The algebraic approach, in particular semigroup theory, can be applied in the area of
genetic algorithms and in evolutionary procedures for optimization and clustering (see [13]).

4. Conclusions
In this paper, we discussed generalized roughness in (crisp) LA-subsemigroups and ideals of
LA-semigroups with the help of set-valued and strong set-valued homomorphisms. We have provided
examples showing that the lower approximation of a subset of an LA-semigroup may fail to be an
LA-subsemigroup or an ideal of the LA-semigroup under a set-valued homomorphism.

Author Contributions: Conceptualization: N.R. and C.P.; Methodology: A.A.; Software: N.R.; Validation: A.A.
and C.P.; Formal Analysis: S.I.A.S.; Investigation: C.P.; Resources: N.R.; Data Curation: C.P.; Writing—Original
Draft Preparation: A.A.; Writing—Review & Editing: N.R.; Visualization: S.I.A.S.; Supervision: N.R.; Project
Administration: A.A.; Funding Acquisition: C.P.
Funding: This work was supported by Basic Science Research Program through the National
Research Foundation of Korea funded by the Ministry of Education, Science and Technology
(NRF-2017R1D1A1B04032937).

Conflicts of Interest: The authors declare that they have no competing interests.

References
1. Kazim, M.A.; Naseerudin, M. On almost-semigroup. Alig. Bull. Math. 1972, 2, 1–7.
2. Dudek, W.A.; Jun, Y.B. Rough subalgebras of some binary algebras connected with logics. Int. J. Math.
Math. Sci. 2005, 2005, 437–443. [CrossRef]


3. Dudek, W.A.; Jun, Y.B.; Kim, H.S. Rough set theory applied to BCI-algebras. Quasigroups Relat. Syst. 2002, 9,
45–54.
4. Mushtaq, Q. Abelian groups defined by LA-semigroups. Stud. Sci. Math. Hungar. 1983, 18, 427–428.
5. Mushtaq, Q.; Iqbal, M. Partial ordering and congruences on LA-semigroups. Ind. J. Pure Appl. Math. 1991,
22, 331–336.
6. Mushtaq, Q.; Khan, M. M-Systems in LA-semigroups. SEA Bull. Math. 2009, 33, 321–327.
7. Mushtaq, Q.; Yusuf, S.M. On LA-semigroups. Alig. Bull. Math. 1978, 8, 65–70.
8. Aslam, M.; Shabir, M.; Yaqoob, N. Roughness in left almost semigroups. J. Adv. Pure Math. 2011, 3, 70–88.
[CrossRef]
9. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
10. Wang, J.; Zhang, Q.; Abdel-Rahman, H.; Abdel-Monem, M.I. A rough set approach to feature selection based
on scatter search metaheuristic. J. Syst. Sci. Complex. 2014, 27, 157–168. [CrossRef]
11. Couso, I.; Dubois, D. Rough sets, coverings and incomplete information. Fund. Inf. 2001, XXI, 1001–1025.
12. Davvaz, B. A short note on algebraic T-rough sets. Inf. Sci. 2008, 178, 3247–3252. [CrossRef]
13. Fortuna, L.; Graziani, S.; Xibilia, M.G. Genetic algorithms and applications in system engineering: A survey.
Trans. Inst. Meas. Control 1993, 15, 143–156.

c 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Article
Fuzzy Semi-Metric Spaces
Hsien-Chung Wu
Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan;
[email protected]

Received: 27 May 2018; Accepted: 19 June 2018; Published: 22 June 2018

Abstract: The T1 -spaces induced by fuzzy semi-metric spaces endowed with a special kind
of triangle inequality are investigated in this paper. The limits in fuzzy semi-metric spaces are also
studied to demonstrate the consistency of limit concepts in the induced topologies.

Keywords: fuzzy semi-metric space; T1 -space; triangle inequality; triangular norm

1. Introduction
Given a universal set X, for any x, y ∈ X, let d̃( x, y) be a fuzzy subset of R+ with membership
function ξ_{d̃( x,y)} : R+ → [0, 1], where the value ξ_{d̃( x,y)} (t) is the membership degree that the
distance between x and y is equal to t. Kaleva and Seikkala [1] proposed the fuzzy metric space by
defining a function M∗ : X × X × [0, ∞) → [0, 1] as follows:

M∗ ( x, y, t) = ξ_{d̃( x,y)} (t). (1)

On the other hand, inspired by the Menger space that is a special kind of probabilistic metric space
(by referring to Schweizer and Sklar [2–4], Hadžić and Pap [5] and Chang et al. [6]), Kramosil and
Michalek [7] proposed another concept of fuzzy metric space.
Let X be a nonempty universal set, let ∗ be a t-norm, and let M be a mapping defined on
X × X × [0, ∞) into [0, 1]. The 3-tuple ( X, M, ∗) is called a fuzzy metric space if and only if the
following conditions are satisfied:

• for any x, y ∈ X, M( x, y, t) = 1 for all t > 0 if and only if x = y;


• M ( x, y, 0) = 0 for all x, y ∈ X;
• M ( x, y, t) = M(y, x, t) for all x, y ∈ X and t ≥ 0;
• M ( x, y, t) ∗ M (y, z, s) ≤ M ( x, z, t + s) for all x, y, z ∈ X and s, t ≥ 0 (the so-called
triangle inequality).

The mapping M in fuzzy metric space ( X, M, ∗) can be regarded as a membership function of a


fuzzy subset of X × X × [0, ∞). Sometimes, M is called a fuzzy metric of the space ( X, M, ∗). According
to the first and second conditions of fuzzy metric space, the value M( x, y, t) can be interpreted as
the membership degree that the distance between x and y is less than t. Therefore, the meanings of
M and M∗ defined in Equation (1) are different.
George and Veeramani [8,9] studied some properties of fuzzy metric spaces. Gregori and
Romaguera [10–12] also extended their research to study the properties of fuzzy metric spaces and
fuzzy quasi-metric spaces. In particular, Gregori and Romaguera [11] proposed the fuzzy quasi-metric
spaces in which the symmetric condition was not assumed. In this paper, we study the so-called fuzzy
semi-metric space without assuming the symmetric condition. The main difference is that four forms
of triangle inequalities that were not addressed in Gregori and Romaguera [11] are considered in
this paper. Another difference is that the t-norm in Gregori and Romaguera [11] was assumed to be
continuous. However, the assumption of continuity for t-norm is relaxed in this paper.

Mathematics 2018, 6, 106; doi:10.3390/math6070106 73 www.mdpi.com/journal/mathematics



The Hausdorff topology induced by the fuzzy metric space was studied in Wu [13], and the
concept of fuzzy semi-metric space was considered in Wu [14]. In this paper, we extend these works to study
the T1 -spaces induced by fuzzy semi-metric spaces that are endowed with a special kind of triangle
inequality. Roughly speaking, the fuzzy semi-metric space does not assume the symmetric condition
M ( x, y, t) = M(y, x, t). In this case, there are four kinds of triangle inequalities that can be considered,
which will be presented in Definition 2. We shall induce the T1 -spaces from the fuzzy semi-metric
space based on a special kind of triangle inequality, which will generalize the results obtained in
Wu [13]. On the other hand, since the symmetric condition is not satisfied in the fuzzy semi-metric
space, three kinds of limit concepts will also be considered in this paper. Furthermore, we shall prove
the consistency of limit concepts in the induced topologies.
This paper is organized as follows. In Section 2, the basic properties of t-norm are presented that
will be used for the further discussion. In Section 3, we propose the fuzzy semi-metric space that is
endowed with four kinds of triangle inequalities. In Section 4, we induce the T1 -space from a given
fuzzy semi-metric space endowed with a special kind of triangle inequality. In Section 5, three kinds of
limits in fuzzy semi-metric space will be considered. We also present the consistency of limit concepts
in the induced topologies.

2. Properties of t-Norm
We first recall the concept of triangular norm (i.e., t-norm). We consider the function ∗ :
[0, 1] × [0, 1] → [0, 1] from the product space [0, 1] × [0, 1] of unit intervals into the unit interval
[0, 1]. The function ∗ is called a t-norm if and only if the following conditions are satisfied:

• (boundary condition) a ∗ 1 = a;
• (commutativity) a ∗ b = b ∗ a;
• (increasing property) if b ≤ c, then a ∗ b ≤ a ∗ c;
• (associativity) ( a ∗ b) ∗ c = a ∗ (b ∗ c).
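The four axioms above can be checked numerically for the standard t-norms (minimum, product, Łukasiewicz). The grid-based sketch below is our own illustration, not part of the paper:

```python
t_min  = min
t_prod = lambda a, b: a * b
t_luk  = lambda a, b: max(0.0, a + b - 1.0)  # Lukasiewicz t-norm

grid = [i / 10 for i in range(11)]
for T in (t_min, t_prod, t_luk):
    for a in grid:
        assert abs(T(a, 1.0) - a) < 1e-9      # boundary condition: a * 1 = a
        assert T(0.0, a) == 0.0               # 0 * a = 0 follows from the axioms
        for b in grid:
            assert abs(T(a, b) - T(b, a)) < 1e-9  # commutativity
            for c in grid:
                if b <= c:
                    assert T(a, b) <= T(a, c)     # increasing property
                assert abs(T(T(a, b), c) - T(a, T(b, c))) < 1e-9  # associativity
print("all four t-norm axioms hold on the grid")
```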

From the third condition, it follows that, for any a ∈ [0, 1], we have 0 ∗ a ≤ 0 ∗ 1. From the first
condition, we also have 0 ∗ 1 = 0, which implies 0 ∗ a = 0. The following proposition from Wu [13]
will be useful for further study.

Proposition 1. By the commutativity of t-norm, if the t-norm is continuous with respect to the first component
(resp. second component), then it is also continuous with respect to the second component (resp. first component).
In other words, for any fixed a ∈ [0, 1], if the function f ( x ) = a ∗ x (resp. f ( x ) = x ∗ a) is continuous, then
the function g( x ) = x ∗ a (resp. g( x ) = a ∗ x) is continuous. Similarly, if the t-norm is left-continuous
(resp. right-continuous) with respect to the first or second component, then it is also left-continuous (resp.
right-continuous) with respect to each component.

We first provide some properties that will be used in the subsequent discussion.

Proposition 2. We have the following properties:

(i) Given any fixed a, b ∈ [0, 1], suppose that the t-norm ∗ is continuous at a and b with respect to the first or
second component. If { a_n }_{n=1}^∞ and { b_n }_{n=1}^∞ are two sequences in [0, 1] such that a_n → a and b_n → b as
n → ∞, then a_n ∗ b_n → a ∗ b as n → ∞.
(ii) Given any fixed a, b ∈ (0, 1], suppose that the t-norm ∗ is left-continuous at a and b with respect to the
first or second component. If { a_n }_{n=1}^∞ and { b_n }_{n=1}^∞ are two sequences in [0, 1] such that a_n → a− and
b_n → b− as n → ∞, then a_n ∗ b_n → a ∗ b as n → ∞.
(iii) Given any fixed a, b ∈ [0, 1), suppose that the t-norm ∗ is right-continuous at a and b with respect to the
first or second component. If { a_n }_{n=1}^∞ and { b_n }_{n=1}^∞ are two sequences in [0, 1] such that a_n → a+ and
b_n → b+ as n → ∞, then a_n ∗ b_n → a ∗ b as n → ∞.


Proof. To prove part (i), since a_n → a as n → ∞, there exist an increasing sequence { p_n }_{n=1}^∞ and
a decreasing sequence { q_n }_{n=1}^∞ such that p_n ↑ a and q_n ↓ a satisfying p_n ≤ a_n ≤ q_n . In addition,
there exist an increasing sequence { r_n }_{n=1}^∞ and a decreasing sequence { s_n }_{n=1}^∞ such that r_n ↑ b and
s_n ↓ b satisfying r_n ≤ b_n ≤ s_n . By Proposition 1, the t-norm is continuous with respect to
each component. Given any ε > 0, using the continuity of t-norm at b with respect to the second
component, there exists n0 ∈ N such that

a ∗ b − ε/2 < a ∗ r_{n0} and a ∗ s_{n0} < a ∗ b + ε/2. (2)
In addition, using the continuity of t-norm at a with respect to the first component, there exists
n1 ∈ N such that

a ∗ r_{n0} − ε/2 < p_{n1} ∗ r_{n0} and q_{n1} ∗ s_{n0} < a ∗ s_{n0} + ε/2. (3)
According to Equation (2) and using the increasing property of t-norm, for n ≥ n0 , we have

a ∗ b − ε/2 < a ∗ r_{n0} (4)
≤ a ∗ r_n ≤ a ∗ b_n ≤ a ∗ s_n ≤ a ∗ s_{n0} < a ∗ b + ε/2. (5)
In addition, according to Equation (3), for m ≥ n1 and n ≥ n0 , we have

a ∗ r_{n0} − ε/2 < p_{n1} ∗ r_{n0} ≤ p_m ∗ r_n ≤ a_m ∗ b_n ≤ q_m ∗ s_n ≤ q_{n1} ∗ s_{n0} < a ∗ s_{n0} + ε/2. (6)

By taking n2 = max{n0 , n1 }, from Equations (4) and (6), we obtain that n ≥ n2 implies

a ∗ b − ε < a ∗ r_{n0} − ε/2 (by Equation (4))
< a_n ∗ b_n < a ∗ s_{n0} + ε/2 (from Equation (6) by taking m = n)
< a ∗ b + ε (by Equation (5)),

which says that | a ∗ b − a_n ∗ b_n | < ε. This shows the desired convergence.


To prove part (ii), we note that there exist two increasing sequences { p_n }_{n=1}^∞ and { r_n }_{n=1}^∞ such
that p_n ↑ a and r_n ↑ b satisfying p_n ≤ a_n and r_n ≤ b_n . By Proposition 1, the t-norm is
left-continuous with respect to each component. Given any ε > 0, using the left-continuity of t-norm at b
with respect to the second component, there exists n0 ∈ N such that

a ∗ b − ε/2 < a ∗ r_{n0} .
In addition, using the left-continuity of t-norm at a with respect to the first component, there exists
n1 ∈ N such that

a ∗ r_{n0} − ε/2 < p_{n1} ∗ r_{n0} .
Using the increasing property of t-norm, for m ≥ n1 and n ≥ n0 , we have

a ∗ r_{n0} − ε/2 < p_{n1} ∗ r_{n0} ≤ p_m ∗ r_n ≤ a_m ∗ b_n .

Since a_n → a− and b_n → b−, we see that a_n ≤ a and b_n ≤ b for all n. By taking n2 = max{n0 , n1 },
for n ≥ n2 , we obtain

a ∗ b − ε < a ∗ r_{n0} − ε/2 < a_n ∗ b_n ≤ a ∗ b < a ∗ b + ε,


which says that | a ∗ b − a_n ∗ b_n | < ε. This shows the desired convergence. Part (iii) can be similarly
proved, and the proof is complete.
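As a quick numeric illustration of part (i) with the product t-norm (which is continuous in each component), one sequence approaching a from below and one approaching b from above still give a_n ∗ b_n → a ∗ b. This toy example is our own, not from the paper:

```python
# Product t-norm: continuous in each component, so Proposition 2(i) applies.
a, b = 0.6, 0.8
for n in (10, 100, 1000, 10000):
    a_n = a - 1.0 / n          # approaches a from below
    b_n = b + 1.0 / (2 * n)    # approaches b from above, staying within [0, 1]
    assert abs(a_n * b_n - a * b) < 3.0 / n
print("a_n * b_n converges to a * b")
```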

The associativity of t-norm says that the operation a_1 ∗ a_2 ∗ · · · ∗ a_p is well-defined for p ≥ 2.


The following proposition from Wu [13] will be useful for further study.

Proposition 3. Suppose that the t-norm ∗ is left-continuous at 1 with respect to the first or second component.
We have the following properties:

(i) For any a, b ∈ (0, 1) with a > b, there exists r ∈ (0, 1) such that a ∗ r ≥ b.
(ii) For any a ∈ (0, 1) and any n ∈ N with n ≥ 2, there exists r ∈ (0, 1) such that r ∗ r ∗ · · · ∗ r > a,
where the product is taken n times.
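For the product t-norm, part (ii) even admits an explicit witness (our own observation, offered only as an illustration): given a ∈ (0, 1) and n ≥ 2, take r = a^{1/(n+1)}; then the n-fold product equals a^{n/(n+1)} > a, because a < 1 and n/(n+1) < 1.

```python
def witness(a, n):
    # r whose n-fold product (under the product t-norm) exceeds a
    return a ** (1.0 / (n + 1))

for a in (0.1, 0.5, 0.9, 0.99):
    for n in (2, 5, 20):
        r = witness(a, n)
        assert 0.0 < r < 1.0
        assert r ** n > a     # r * r * ... * r (n times) > a
print("explicit witnesses verified")
```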

3. Fuzzy Semi-Metric Space


In the sequel, we shall define the concept of fuzzy semi-metric space without considering the
symmetric condition. Because of lacking symmetry, the concept of triangle inequality should be
carefully interpreted. Therefore, we propose four kinds of triangle inequalities.

Definition 1. Let X be a nonempty universal set, and let M be a mapping defined on X × X × [0, ∞) into
[0, 1]. Then, ( X, M) is called a fuzzy semi-metric space if and only if the following conditions are satisfied:

• for any x, y ∈ X, M( x, y, t) = 1 for all t > 0 if and only if x = y;


• M ( x, y, 0) = 0 for all x, y ∈ X with x ≠ y.

We say that M satisfies the symmetric condition if and only if M( x, y, t) = M (y, x, t) for all x, y ∈ X and
t > 0. We say that M satisfies the strongly symmetric condition if and only if M( x, y, t) = M(y, x, t) for all
x, y ∈ X and t ≥ 0.

We remark that the first condition says that M ( x, x, t) = 1 for all t > 0. However, the value
of M ( x, x, 0) is free. Recall that M( x, y, t) is interpreted as the membership degree that the
distance between x and y is less than t. Therefore, M( x, x, t) = 1 for all t > 0 means that
the distance between x and x is less than any t > 0, which is always true. The second condition says that
M ( x, y, 0) = 0 for x ≠ y, which can similarly be interpreted as saying that a distance less than 0 between
two distinct elements x and y is impossible.

Definition 2. Let X be a nonempty universal set, let ∗ be a t-norm, and let M be a mapping defined on
X × X × [0, ∞) into [0, 1].

• We say that M satisfies the ⋈-triangle inequality if and only if the following inequality is satisfied:

M( x, y, t) ∗ M (y, z, s) ≤ M ( x, z, t + s) for all x, y, z ∈ X and s, t > 0.

• We say that M satisfies the ▷-triangle inequality if and only if the following inequality is satisfied:

M ( x, y, t) ∗ M (z, y, s) ≤ M ( x, z, t + s) for all x, y, z ∈ X and s, t > 0.

• We say that M satisfies the ◁-triangle inequality if and only if the following inequality is satisfied:

M (y, x, t) ∗ M(y, z, s) ≤ M ( x, z, t + s) for all x, y, z ∈ X and s, t > 0.

• We say that M satisfies the ⋄-triangle inequality if and only if the following inequality is satisfied:

M (y, x, t) ∗ M(z, y, s) ≤ M ( x, z, t + s) for all x, y, z ∈ X and s, t > 0.

76
Mathematics 2018, 6, 106

We say that M satisfies the strong ◦-triangle inequality for ◦ ∈ {⋈, ▷, ◁, ⋄} when s, t > 0 is replaced by
s, t ≥ 0.

Remark 1. It is obvious that if the mapping M satisfies the symmetric condition, then the concepts of ⋈-triangle
inequality, ▷-triangle inequality, ◁-triangle inequality and ⋄-triangle inequality are all equivalent.

Example 1. Let X be a universal set, and let d : X × X → R+ satisfy the following conditions:

• d( x, y) ≥ 0 for any x, y ∈ X;
• d( x, y) = 0 if and only if x = y for any x, y ∈ X;
• d( x, y) + d(y, z) ≥ d( x, z) for any x, y, z ∈ X.

Note that we do not assume d( x, y) = d(y, x ). For example, let X = [0, 1]. We define

d( x, y) = y − x, if y ≥ x; d( x, y) = 1, otherwise.

Then, d( x, y) ≠ d(y, x ) and the above three conditions are satisfied. Now, we take t-norm ∗ as a ∗ b = ab
and define
M ( x, y, t) = t/(t + d( x, y)), if t > 0,
M ( x, y, t) = 1, if t = 0 and d( x, y) = 0 (equivalently, t = 0 and x = y),
M ( x, y, t) = 0, if t = 0 and d( x, y) > 0 (equivalently, t = 0 and x ≠ y).

It is clear that M ( x, y, t) ≠ M(y, x, t) for t > 0, since d( x, y) ≠ d(y, x ). We are going to claim
that ( X, M, ∗) is a fuzzy semi-metric space satisfying the ⋈-triangle inequality. For t > 0 and M( x, y, t) = 1,
we have t = t + d( x, y), which says that d( x, y) = 0, i.e., x = y. Next, we are going to check the ⋈-triangle
inequality. For s > 0 and t > 0, we first have

(1/t) d( x, y) + (1/s) d(y, z) ≥ (1/(s + t)) [d( x, y) + d(y, z)] ≥ (1/(t + s)) d( x, z).

Then, we obtain

M ( x, y, t) ∗ M (y, z, s) = [t/(t + d( x, y))] · [s/(s + d(y, z))] = ts/[ts + t d(y, z) + s d( x, y) + d( x, y) d(y, z)]
≤ ts/[ts + t d(y, z) + s d( x, y)] = 1/[1 + (1/s) d(y, z) + (1/t) d( x, y)]
≤ 1/[1 + (1/(t + s)) d( x, z)] = (t + s)/(t + s + d( x, z)) = M( x, z, t + s).

This shows that ( X, M, ∗) defined above is indeed a fuzzy semi-metric space satisfying the
⋈-triangle inequality.
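Example 1 can be checked numerically. The sketch below (our own illustration) samples the asymmetric distance on a grid and verifies the triangle inequality M( x, y, t) ∗ M(y, z, s) ≤ M( x, z, t + s) under the product t-norm, as well as the failure of symmetry:

```python
import itertools

def d(x, y):
    # the asymmetric distance of Example 1 on X = [0, 1]
    return y - x if y >= x else 1.0

def M(x, y, t):
    return t / (t + d(x, y)) if t > 0 else (1.0 if x == y else 0.0)

pts   = [i / 4 for i in range(5)]
times = [0.25, 0.5, 1.0, 2.0]
for x, y, z in itertools.product(pts, repeat=3):
    for t, s in itertools.product(times, repeat=2):
        # triangle inequality with product t-norm, up to float tolerance
        assert M(x, y, t) * M(y, z, s) <= M(x, z, t + s) + 1e-12

print(M(0.0, 0.5, 1.0), M(0.5, 0.0, 1.0))  # asymmetric: 2/3 versus 1/2
```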

Given a fuzzy semi-metric space ( X, M ), when we say that the mapping M satisfies some kinds
of (strong) triangle inequalities, it implicitly means that the t-norm is considered in ( X, M).

• Suppose that M satisfies the (strong) ▷-triangle inequality. Then,

M ( x, y, t) ∗ M(z, y, s) ≤ M( x, z, t + s) and M (z, y, t) ∗ M( x, y, s) ≤ M(z, x, t + s).

Since the t-norm is commutative, it follows that

M ( x, y, t) ∗ M (z, y, s) = M (z, y, s) ∗ M( x, y, t) ≤ min { M( x, z, t + s), M(z, x, t + s)} .


• Suppose that M satisfies the (strong) ◁-triangle inequality. Then, we similarly have

M (y, x, t) ∗ M (y, z, s) = M (y, z, s) ∗ M (y, x, t) ≤ min { M( x, z, t + s), M (z, x, t + s)} .

Definition 3. Let ( X, M ) be a fuzzy semi-metric space.

• We say that M is nondecreasing if and only if, given any fixed x, y ∈ X, M( x, y, t1 ) ≥ M( x, y, t2 )


for t1 > t2 > 0. We say that M is strongly nondecreasing if and only if, given any fixed x, y ∈ X,
M ( x, y, t1 ) ≥ M( x, y, t2 ) for t1 > t2 ≥ 0.
• We say that M is symmetrically nondecreasing if and only if, given any fixed x, y ∈ X, M( x, y, t1 ) ≥
M (y, x, t2 ) for t1 > t2 > 0. We say that M is symmetrically strongly nondecreasing if and only if,
given any fixed x, y ∈ X, M ( x, y, t1 ) ≥ M(y, x, t2 ) for t1 > t2 ≥ 0.

Proposition 4. Let ( X, M ) be a fuzzy semi-metric space. Then, we have the following properties:

(i) If M satisfies the ⋈-triangle inequality, then M is nondecreasing.


(ii) If M satisfies the ▷-triangle inequality or the ◁-triangle inequality, then M is both nondecreasing and
symmetrically nondecreasing.
(iii) If M satisfies the ⋄-triangle inequality, then M is symmetrically nondecreasing.

Proof. Given any fixed x, y ∈ X, for t1 > t2 > 0, we have the following inequalities.

• Suppose that M satisfies the ⋈-triangle inequality. Then,

M ( x, y, t1 ) ≥ M( x, y, t2 ) ∗ M(y, y, t1 − t2 ) = M( x, y, t2 ) ∗ 1 = M ( x, y, t2 ).

• Suppose that M satisfies the ▷-triangle inequality. Then,

M( x, y, t1 ) ≥ M ( x, y, t2 ) ∗ M (y, y, t1 − t2 ) = M( x, y, t2 ) ∗ 1 = M ( x, y, t2 )

and
M( x, y, t1 ) ≥ M ( x, x, t1 − t2 ) ∗ M(y, x, t2 ) = 1 ∗ M(y, x, t2 ) = M(y, x, t2 ).

• Suppose that M satisfies the ◁-triangle inequality. Then,

M ( x, y, t1 ) ≥ M ( x, x, t1 − t2 ) ∗ M( x, y, t2 ) = 1 ∗ M ( x, y, t2 ) = M( x, y, t2 )

and
M( x, y, t1 ) ≥ M(y, x, t2 ) ∗ M(y, y, t1 − t2 ) = M(y, x, t2 ) ∗ 1 = M(y, x, t2 ).

• Suppose that M satisfies the ⋄-triangle inequality. Then,

M ( x, y, t1 ) ≥ M( x, x, t1 − t2 ) ∗ M (y, x, t2 ) = 1 ∗ M(y, x, t2 ) = M (y, x, t2 ).

This completes the proof.

Definition 4. Let ( X, M ) be a fuzzy semi-metric space.

• We say that M is left-continuous with respect to the distance at t0 > 0 if and only if, for any fixed x, y ∈ X,
given any ε > 0, there exists δ > 0 such that 0 < t0 − t < δ implies | M( x, y, t) − M( x, y, t0 )| < ε;
that is, the mapping M ( x, y, ·) : (0, ∞) → [0, 1] is left-continuous at t0 . We say that M is left-continuous
with respect to the distance on (0, ∞) if and only if the mapping M( x, y, ·) is left-continuous on (0, ∞) for
any fixed x, y ∈ X.


• We say that M is right-continuous with respect to the distance at t0 ≥ 0 if and only if, for any fixed x, y ∈ X,
given any ε > 0, there exists δ > 0 such that 0 < t − t0 < δ implies | M ( x, y, t) − M ( x, y, t0 )| < ε; that is,
the mapping M ( x, y, ·) : [0, ∞) → [0, 1] is right-continuous at t0 . We say that M is right-continuous
with respect to the distance on [0, ∞) if and only if the mapping M ( x, y, ·) is right-continuous on [0, ∞) for
any fixed x, y ∈ X.
• We say that M is continuous with respect to the distance at t0 ≥ 0 if and only if, for any fixed x, y ∈ X,
given any ε > 0, there exists δ > 0 such that |t − t0 | < δ implies | M( x, y, t) − M ( x, y, t0 )| < ε; that is,
the mapping M ( x, y, ·) : [0, ∞) → [0, 1] is continuous at t0 . We say that M is continuous with respect to
the distance on [0, ∞) if and only if the mapping M( x, y, ·) is continuous on [0, ∞) for any fixed x, y ∈ X.
• We say that M is symmetrically left-continuous with respect to the distance at t0 > 0 if and only
if, for any fixed x, y ∈ X, given any ε > 0, there exists δ > 0 such that 0 < t0 − t < δ implies
| M( x, y, t) − M(y, x, t0 )| < ε. We say that M is symmetrically left-continuous with respect to the distance
on (0, ∞) if and only if it is symmetrically left-continuous with respect to the distance at each t > 0.
• We say that M is symmetrically right-continuous with respect to the distance at t0 ≥ 0 if and only
if, for any fixed x, y ∈ X, given any ε > 0, there exists δ > 0 such that 0 < t − t0 < δ implies
| M( x, y, t) − M(y, x, t0 )| < ε. We say that M is symmetrically right-continuous with respect to the
distance on [0, ∞) if and only if it is symmetrically right-continuous with respect to the distance at each
t ≥ 0.
• We say that M is symmetrically continuous with respect to the distance at t0 ≥ 0 if and only if, for any fixed
x, y ∈ X, given any ε > 0, there exists δ > 0 such that |t − t0 | < δ implies | M( x, y, t) − M(y, x, t0 )| < ε.
We say that M is symmetrically continuous with respect to the distance on [0, ∞) if and only if it is
symmetrically continuous with respect to the distance at each t ≥ 0.

Proposition 5. Let ( X, M) be a fuzzy semi-metric space such that the ◦-triangle inequality is satisfied for
◦ ∈ {▷, ◁, ⋄}. Then, we have the following properties:

(i) Suppose that M is left-continuous or symmetrically left-continuous with respect to the distance at t > 0.
Then M ( x, y, t) = M(y, x, t). In other words, if M is left-continuous or symmetrically left-continuous
with respect to the distance on (0, ∞), then M satisfies the symmetric condition.
(ii) Suppose that M is right-continuous or symmetrically right-continuous with respect to the distance at t ≥ 0.
Then M( x, y, t) = M(y, x, t). In other words, if M is right-continuous or symmetrically right-continuous
with respect to the distance on [0, ∞), then M satisfies the strongly symmetric condition.

Proof. To prove part (i), given any t > 0, there exists n_t ∈ N satisfying t − 1/n_t > 0. We consider the
following cases:

• Suppose that the ▷-triangle inequality is satisfied. Then,


       
M (y, x, t − 1/n_t ) = 1 ∗ M (y, x, t − 1/n_t ) = M ( x, x, 1/n_t ) ∗ M (y, x, t − 1/n_t ) ≤ M( x, y, t)

and
       
M ( x, y, t − 1/n_t ) = 1 ∗ M ( x, y, t − 1/n_t ) = M (y, y, 1/n_t ) ∗ M ( x, y, t − 1/n_t ) ≤ M(y, x, t).

Using the left-continuity of M, it follows that M(y, x, t) ≤ M( x, y, t) and M( x, y, t) ≤ M (y, x, t)


by taking nt → ∞. This shows that M ( x, y, t) = M(y, x, t) for all t > 0. On the other hand,
we also have
       
M ( x, y, t − 1/n_t ) = M ( x, y, t − 1/n_t ) ∗ 1 = M ( x, y, t − 1/n_t ) ∗ M (y, y, 1/n_t ) ≤ M( x, y, t)


and
       
M (y, x, t − 1/n_t ) = M (y, x, t − 1/n_t ) ∗ 1 = M (y, x, t − 1/n_t ) ∗ M ( x, x, 1/n_t ) ≤ M(y, x, t).

Using the symmetric left-continuity of M, it follows that M(y, x, t) ≤ M ( x, y, t) and M ( x, y, t) ≤


M (y, x, t) by taking nt → ∞. This shows that M( x, y, t) = M(y, x, t) for all t > 0.
• Suppose that the ◁-triangle inequality is satisfied. Then,
       
M (y, x, t − 1/n_t ) = M (y, x, t − 1/n_t ) ∗ 1 = M (y, x, t − 1/n_t ) ∗ M (y, y, 1/n_t ) ≤ M( x, y, t)

and
       
M ( x, y, t − 1/n_t ) = M ( x, y, t − 1/n_t ) ∗ 1 = M ( x, y, t − 1/n_t ) ∗ M ( x, x, 1/n_t ) ≤ M(y, x, t).

The left-continuity of M shows that M( x, y, t) = M(y, x, t) for all t > 0. We can similarly obtain
the desired result using the symmetric left-continuity of M.
• Suppose that the ⋄-triangle inequality is satisfied. Then, this is the same situation as the ▷-triangle
inequality.

To prove part (ii), given any t ≥ 0 and n ∈ N, we consider the following cases.

• Suppose that the ▷-triangle inequality is satisfied. Then,


         
M (y, x, t + 1/n ) = 1 ∗ M (y, x, t + 1/n ) = M ( x, x, 1/n ) ∗ M (y, x, t + 1/n ) ≤ M ( x, y, t + 2/n )

and
         
M ( x, y, t + 1/n ) = 1 ∗ M ( x, y, t + 1/n ) = M (y, y, 1/n ) ∗ M ( x, y, t + 1/n ) ≤ M (y, x, t + 2/n ).

The right-continuity of M shows that M( x, y, t) = M(y, x, t) for all t ≥ 0. We can similarly obtain
the desired result using the symmetric right-continuity of M.
• Suppose that the -triangle inequality is satisfied. Then,
         
M(y, x, t + 1/n) = M(y, x, t + 1/n) ∗ 1 = M(y, x, t + 1/n) ∗ M(y, y, 1/n) ≤ M(x, y, t + 2/n)

and
         
M(x, y, t + 1/n) = M(x, y, t + 1/n) ∗ 1 = M(x, y, t + 1/n) ∗ M(x, x, 1/n) ≤ M(y, x, t + 2/n).

The right-continuity of M shows that M( x, y, t) = M(y, x, t) for all t ≥ 0. We can similarly obtain
the desired result using the symmetric right-continuity of M.
• Suppose that the -triangle inequality is satisfied. Then, this is the same situation as the -triangle
inequality.

This completes the proof.

From Proposition 5, if M is left-continuous or symmetrically left-continuous with respect to the
distance on (0, ∞), or right-continuous or symmetrically right-continuous with respect to the
distance on (0, ∞], then we can just consider the -triangle inequality.
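For reference, the four types of triangle inequalities for a fuzzy semi-metric M and a t-norm ∗ can be recorded side by side. The symbols below follow the notation of the companion paper Wu [14] and are an assumption of this recap rather than a quotation of the present text:

```latex
% Four triangle inequalities, for all x, y, z in X and s, t > 0
% (symbols assumed from Wu [14]):
\begin{align*}
\bowtie\text{-triangle inequality:} &\quad M(x,y,t) * M(y,z,s) \le M(x,z,t+s),\\
\rhd\text{-triangle inequality:}    &\quad M(y,x,t) * M(y,z,s) \le M(x,z,t+s),\\
\lhd\text{-triangle inequality:}    &\quad M(x,y,t) * M(z,y,s) \le M(x,z,t+s),\\
\diamond\text{-triangle inequality:}&\quad M(y,x,t) * M(z,y,s) \le M(x,z,t+s).
\end{align*}
```

When M is symmetric, all four conditions collapse into the single classical triangle inequality.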


Proposition 6. Let ( X, M) be a fuzzy semi-metric space such that M is left-continuous or symmetrically
left-continuous with respect to the distance on (0, ∞), or right-continuous or symmetrically right-continuous
with respect to the distance on (0, ∞]. Suppose that M( x, x, 0) = 1 for any x ∈ X. Then, M satisfies the
◦-triangle inequality if and only if M satisfies the strong ◦-triangle inequality for ◦ ∈ {, , }.

Proof. We first note that the converse is obvious. Now, we assume that M satisfies the -triangle
inequality.

• Suppose that s = t = 0. If x ≠ y or y ≠ z, then M(y, x, 0) = 0 or M(y, z, 0) = 0, which implies

M (y, x, 0) ∗ M (y, z, 0) = 0 ≤ M ( x, z, 0).

If x = y = z, then M(y, x, 0) = 1 = M(y, z, 0) = M ( x, z, 0), which implies

M (y, x, 0) ∗ M (y, z, 0) = 1 ∗ 1 = 1 = M ( x, z, 0).

• Suppose that s > 0 and t = 0. If x ≠ y, then M(y, x, 0) = 0, which implies

M (y, x, t) ∗ M (y, z, s) = M(y, x, 0) ∗ M (y, z, s) = 0 ≤ M( x, z, t + s).

If x = y, then M(y, x, t) = M ( x, x, 0) = 1, which implies

M(y, x, t) ∗ M(y, z, s) = 1 ∗ M(y, z, s) = M(y, z, s) = M ( x, z, t + s).

• Suppose that s = 0 and t > 0. If y ≠ z, then M(y, z, 0) = 0, which implies

M (y, x, t) ∗ M (y, z, s) = M(y, x, t) ∗ M(y, z, 0) = 0 ≤ M( x, z, t + s).

If y = z, then M(y, z, s) = M(y, y, 0) = 1. Using Proposition 5, we have

M (y, x, t) ∗ M(y, z, s) = M (y, x, t) ∗ 1 = M(y, x, t) = M( x, y, t) = M( x, z, t + s).

We can similarly obtain the desired results for ◦ ∈ {, }. This completes the proof.

Proposition 7. Let ( X, M) be a fuzzy semi-metric space. Suppose that M satisfies the -triangle inequality,
and that M is left-continuous with respect to the distance at t > 0. Given any fixed x, y ∈ X,
if M( x, y, t) > 1 − r, then there exists t0 with 0 < t0 < t such that M( x, y, t0 ) > 1 − r.

Proof. Let ε = M( x, y, t) − (1 − r ) > 0. Using the left-continuity of M, there exists t0 with
0 < t0 < t such that | M( x, y, t) − M( x, y, t0 )| < ε. From part (i) of Proposition 4, we also have
0 ≤ M( x, y, t) − M( x, y, t0 ) < ε, which implies M( x, y, t0 ) > 1 − r. This completes the proof.

4. T1 -Spaces
Let ( X, M, ∗) be a fuzzy metric space, i.e., the symmetric condition is satisfied. Given t > 0 and
0 < r < 1, the (r, t)-ball of x is defined by

B( x, r, t) = {y ∈ X : M ( x, y, t) = M (y, x, t) > 1 − r }

by referring to Wu [13]. In this paper, since the symmetric condition is not satisfied, two different
concepts of open ball will be proposed below. Therefore, the T1 -spaces generated from these two
different open balls will generalize the results obtained in Wu [13].


Definition 5. Let ( X, M) be a fuzzy semi-metric space. Given t > 0 and 0 < r < 1, the (r, t)-balls centered
at x are denoted and defined by

B ( x, r, t) = {y ∈ X : M( x, y, t) > 1 − r }

and
B ( x, r, t) = {y ∈ X : M (y, x, t) > 1 − r } .

Let B  denote the family of all (r, t)-balls B ( x, r, t), and let B  denote the family of all (r, t)-balls
B ( x, r, t).

It is clear that if the symmetric condition for M is satisfied, then

B ( x, r, t) = B ( x, r, t).

In this case, we simply write B( x, r, t) to denote the (r, t)-balls centered at x, and write B to denote the
family of all (r, t)-balls B( x, r, t).
We also see that B ( x, r, t) ≠ ∅ and B ( x, r, t) ≠ ∅, since x ∈ B ( x, r, t) and x ∈ B ( x, r, t) by the
fact that M( x, x, t) = 1 for all t > 0. Since 0 < r < 1, it is obvious that if M( x, y, t) = 0, then y ∉ B ( x, r, t).
In other words, if y ∈ B ( x, r, t), then M( x, y, t) > 0. Similarly, if y ∈ B ( x, r, t), then M(y, x, t) > 0.
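Definition 5 can be made concrete with a small numerical sketch. Everything below (the three-point set, the asymmetric distance d, and the formula M(x, y, t) = t/(t + d(x, y))) is an illustrative assumption, not taken from the paper; it only serves to show that the two kinds of (r, t)-balls can genuinely differ when the symmetric condition fails.

```python
# Sketch: the two kinds of (r, t)-balls from Definition 5, computed for a toy
# membership function M(x, y, t) = t / (t + d(x, y)), where d is a made-up
# asymmetric "distance" (an illustrative assumption, not from the paper).
X = ["a", "b", "c"]

d = {("a", "a"): 0, ("b", "b"): 0, ("c", "c"): 0,
     ("a", "b"): 1, ("b", "a"): 4,
     ("a", "c"): 2, ("c", "a"): 2,
     ("b", "c"): 1, ("c", "b"): 3}

def M(x, y, t):
    """Membership value; equals 1 exactly when d(x, y) = 0 (for t > 0)."""
    return t / (t + d[(x, y)])

def ball_fwd(x, r, t):
    """{y : M(x, y, t) > 1 - r} -- the first kind of (r, t)-ball."""
    return {y for y in X if M(x, y, t) > 1 - r}

def ball_bwd(x, r, t):
    """{y : M(y, x, t) > 1 - r} -- the second kind of (r, t)-ball."""
    return {y for y in X if M(y, x, t) > 1 - r}

# The center always belongs to both balls, since M(x, x, t) = 1 > 1 - r.
print(ball_fwd("a", 0.4, 2.0))   # M(a,b,2) = 2/3 > 0.6, so b is inside
print(ball_bwd("a", 0.4, 2.0))   # M(b,a,2) = 1/3 <= 0.6, so b is not
```

Here b lies in the first ball around a but not in the second; this asymmetry is exactly why the two families of balls must be treated separately in the sequel.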

Proposition 8. Let ( X, M ) be a fuzzy semi-metric space.


(i) For each x ∈ X, we have x ∈ B ( x, r, t) ∈ B  and x ∈ B ( x, r, t) ∈ B  .
(ii) If x ≠ y, then there exist B ( x, r, t) and B ( x, r, t) such that y ∉ B ( x, r, t) and y ∉ B ( x, r, t).

Proof. Part (i) is obvious. To prove part (ii), since x ≠ y, there exists t0 > 0 such that M( x, y, t0 ) < 1.
There also exists r0 such that M( x, y, t0 ) < r0 < 1. Suppose that y ∈ B ( x, 1 − r0 , t0 ). Then, we have

M( x, y, t0 ) > r0 > M( x, y, t0 ).

This contradiction says that y ∉ B ( x, 1 − r0 , t0 ), and the proof is complete.

Proposition 9. Let ( X, M) be a fuzzy semi-metric space.


(i) Suppose that M satisfies the ◦-triangle for ◦ ∈ {, , }. Then, the following statements hold true:

• Given any B ( x, r, t) ∈ B  , there exists n ∈ N such that B ( x, 1/n, 1/n) ⊆ B ( x, r, t).


• Given any B ( x, r, t) ∈ B  , there exists n ∈ N such that B ( x, 1/n, 1/n) ⊆ B ( x, r, t).
(ii) Suppose that M satisfies the ◦-triangle for ◦ ∈ {, , }. Then, the following statements hold true:

• Given any B ( x, r, t) ∈ B  , there exists n ∈ N such that B ( x, 1/n, 1/n) ⊆ B ( x, r, t).


• Given any B ( x, r, t) ∈ B  , there exists n ∈ N such that B ( x, 1/n, 1/n) ⊆ B ( x, r, t).

Proof. To prove part (i), it suffices to prove the first case. We take n ∈ N such that 1/n ≤ min{r, t}.
Then, for y ∈ B ( x, 1/n, 1/n), using parts (i) and (ii) of Proposition 4, we have
 
M(x, y, t) ≥ M(x, y, 1/n) > 1 − 1/n ≥ 1 − r,

which says that y ∈ B ( x, r, t). Part (ii) can be similarly obtained by using parts (ii) and (iii) of
Proposition 4, and the following inequalities:
 
M(x, y, t) ≥ M(y, x, 1/n) > 1 − 1/n ≥ 1 − r.


This completes the proof.

Proposition 10. (Left-Continuity for M) Let ( X, M) be a fuzzy semi-metric space along with a t-norm ∗ such
that the following conditions are satisfied:

• M is left-continuous with respect to the distance on (0, ∞);


• the t-norm ∗ is left-continuous at 1 with respect to the first or second component.

Suppose that M satisfies the -triangle inequality. Then, we have the following inclusions:

(i) Given any y ∈ B ( x, r, t), there exists B (y, r̄, t̄) such that B (y, r̄, t̄) ⊆ B ( x, r, t).
(ii) Given any y ∈ B ( x, r, t), there exists B (y, r̄, t̄) such that B (y, r̄, t̄) ⊆ B ( x, r, t).

Proof. For y ∈ B ( x, r, t), we have M( x, y, t) > 1 − r. By part (i) of Proposition 7, there exists t0 with
0 < t0 < t such that M ( x, y, t0 ) > 1 − r. Let r0 = M ( x, y, t0 ). Then, we have r0 > 1 − r. There exists
s with 0 < s < 1 such that r0 > 1 − s > 1 − r. By part (i) of Proposition 3, there exists r1 with
0 < r1 < 1 such that r0 ∗ r1 ≥ 1 − s. Let r̄ = 1 − r1 and t̄ = t − t0 . Similarly, for y ∈ B ( x, r, t), we have
M (y, x, t) > 1 − r. In this case, let r0 = M (y, x, t0 ).
To prove part (i), for y ∈ B ( x, r, t) and z ∈ B (y, r̄, t̄), we have

M (y, z, t − t0 ) = M(y, z, t̄) > 1 − r̄ = r1 .

By the -triangle inequality, we also have

M( x, z, t) ≥ M( x, y, t0 ) ∗ M(y, z, t − t0 ) = r0 ∗ M(y, z, t − t0 ) ≥ r0 ∗ r1 ≥ 1 − s > 1 − r.

This shows that z ∈ B ( x, r, t). Therefore, we obtain the inclusion B (y, r̄, t̄) ⊆ B ( x, r, t).
To prove part (ii), for y ∈ B ( x, r, t) and z ∈ B (y, r̄, t̄), we have

M (z, y, t − t0 ) = M(z, y, t̄) > 1 − r̄ = r1 .

By the -triangle inequality, we also have

M (z, x, t) ≥ M(z, y, t − t0 ) ∗ M(y, x, t0 ) = M (z, y, t − t0 ) ∗ r0 ≥ r0 ∗ r1 ≥ 1 − s > 1 − r.

This shows that z ∈ B ( x, r, t). Therefore, we obtain the inclusion B (y, r̄, t̄) ⊆ B ( x, r, t).
This completes the proof.

According to Proposition 5, since M is assumed to be left-continuous with respect to the distance
on (0, ∞), it is not necessary to consider the ◦-triangle inequality for ◦ ∈ {, , } in Proposition 10.
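The abstract choice of r1 in the proof of Proposition 10 (via part (i) of Proposition 3) becomes explicit for a concrete t-norm. Assuming the product t-norm a ∗ b = ab (an illustrative choice; the paper only requires left-continuity at 1), any r1 between (1 − s)/r0 and 1 satisfies r0 ∗ r1 ≥ 1 − s, and one may simply take the midpoint:

```python
# Sketch: an explicit r1 for the product t-norm a * b = a*b (an illustrative
# t-norm; the paper only assumes left-continuity at 1 via Proposition 3).
def pick_r1(r0, s):
    assert 0 < 1 - s < r0 < 1, "need r0 > 1 - s"
    # midpoint between the exact threshold (1 - s)/r0 and 1
    return ((1 - s) / r0 + 1.0) / 2.0

r0, s = 0.9, 0.2
r1 = pick_r1(r0, s)
print(r0 * r1, 1 - s)   # r0 * r1 exceeds 1 - s with room to spare
```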

Proposition 11. (Symmetric Left-Continuity for M) Let ( X, M) be a fuzzy semi-metric space along with a
t-norm ∗ such that the following conditions are satisfied:

• M is symmetrically left-continuous with respect to the distance on (0, ∞);


• the t-norm ∗ is left-continuous at 1 with respect to the first or second component.

Suppose that M satisfies the -triangle inequality. Then, we have the following inclusions:

(i) Given any y ∈ B ( x, r, t), there exists B (y, r̄, t̄) such that B (y, r̄, t̄) ⊆ B ( x, r, t).
(ii) Given any y ∈ B ( x, r, t), there exists B (y, r̄, t̄) such that B (y, r̄, t̄) ⊆ B ( x, r, t).

Proof. For y ∈ B ( x, r, t), we have M( x, y, t) > 1 − r. By part (ii) of Proposition 7, there exists t0 with
0 < t0 < t such that M (y, x, t0 ) > 1 − r. Let r0 = M (y, x, t0 ). Then, we have r0 > 1 − r. There exists
s with 0 < s < 1 such that r0 > 1 − s > 1 − r. By part (i) of Proposition 3, there exists r1 with


0 < r1 < 1 such that r0 ∗ r1 ≥ 1 − s. Let r̄ = 1 − r1 and t̄ = t − t0 . Similarly, for y ∈ B ( x, r, t), we have
M (y, x, t) > 1 − r. In this case, let r0 = M ( x, y, t0 ).
To prove part (i), for y ∈ B ( x, r, t) and z ∈ B (y, r̄, t̄), we have

M (z, y, t − t0 ) = M(z, y, t̄) > 1 − r̄ = r1 .

By the -triangle inequality, we have

M(z, x, t) ≥ M(z, y, t − t0 ) ∗ M(y, x, t0 ) = M(z, y, t − t0 ) ∗ r0 ≥ r1 ∗ r0 ≥ 1 − s > 1 − r.

This shows that z ∈ B ( x, r, t). Therefore, we obtain the inclusion B (y, r̄, t̄) ⊆ B ( x, r, t).
To prove part (ii), for y ∈ B ( x, r, t) and z ∈ B (y, r̄, t̄), we have

M (y, z, t − t0 ) = M(y, z, t̄) > 1 − r̄ = r1 .

By the -triangle inequality, we have

M ( x, z, t) ≥ M( x, y, t0 ) ∗ M(y, z, t − t0 ) = r0 ∗ M(y, z, t − t0 ) ≥ r1 ∗ r0 ≥ 1 − s > 1 − r.

This shows that z ∈ B ( x, r, t). Therefore, we obtain the inclusion B (y, r̄, t̄) ⊆ B ( x, r, t).
This completes the proof.

According to Proposition 5, since M is assumed to be symmetrically left-continuous with respect
to the distance on (0, ∞), it is not necessary to consider the ◦-triangle inequality for ◦ ∈ {, , } in
Proposition 11.

Proposition 12. (Left-Continuity for M) Let ( X, M) be a fuzzy semi-metric space along with a t-norm ∗ such
that the following conditions are satisfied:

• M is left-continuous with respect to the distance on (0, ∞);


• the t-norm ∗ is left-continuous at 1 with respect to the first or second component.

Suppose that M satisfies the -triangle inequality. We have the following inclusions:

(i) If x ∈ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ), then there exists B ( x, r3 , t3 ) such that

B ( x, r3 , t3 ) ⊆ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ). (7)

(ii) If x ∈ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ), then there exists B ( x, r3 , t3 ) such that

B ( x, r3 , t3 ) ⊆ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ). (8)

Proof. Using part (i) of Proposition 10, there exist t̄1 , t̄2 , r̄1 , r̄2 such that

B ( x, r̄1 , t̄1 ) ⊆ B ( x1 , r1 , t1 ) and B ( x, r̄2 , t̄2 ) ⊆ B ( x2 , r2 , t2 ).

We take t3 = min{t̄1 , t̄2 } and r3 = min{r̄1 , r̄2 }. Then, for y ∈ B ( x, r3 , t3 ), using part (i) of
Proposition 4, we have
M( x, y, t̄1 ) ≥ M( x, y, t3 ) > 1 − r3 ≥ 1 − r̄1

and
M( x, y, t̄2 ) ≥ M( x, y, t3 ) > 1 − r3 ≥ 1 − r̄2 ,

which say that


y ∈ B ( x, r̄1 , t̄1 ) ∩ B ( x, r̄2 , t̄2 ) ⊆ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ).


Therefore, we obtain the inclusion of Equation (7). The second inclusion of Equation (8) can be
similarly obtained. This completes the proof.

Proposition 13. (Symmetric Left-Continuity for M) Let ( X, M) be a fuzzy semi-metric space along with a
t-norm ∗ such that the following conditions are satisfied:

• M is symmetrically left-continuous with respect to the distance on (0, ∞);


• the t-norm ∗ is left-continuous at 1 with respect to the first or second component.

Suppose that M satisfies the -triangle inequality. Then, we have the following inclusions:

(i) If x ∈ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ), then there exists B ( x, r3 , t3 ) such that

B ( x, r3 , t3 ) ⊆ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ). (9)

(ii) If x ∈ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ), then there exists B ( x, r3 , t3 ) such that

B ( x, r3 , t3 ) ⊆ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ). (10)

Proof. Using part (iv) of Proposition 11, there exist t̄1 , t̄2 , r̄1 , r̄2 such that

B ( x, r̄1 , t̄1 ) ⊆ B ( x1 , r1 , t1 ) and B ( x, r̄2 , t̄2 ) ⊆ B ( x2 , r2 , t2 ).

We take t3 = min{t̄1 , t̄2 } and r3 = min{r̄1 , r̄2 }. Then, for y ∈ B ( x, r3 , t3 ), using part (i) of
Proposition 4, we have
M(y, x, t̄1 ) ≥ M(y, x, t3 ) > 1 − r3 ≥ 1 − r̄1

and
M (y, x, t̄2 ) ≥ M(y, x, t3 ) > 1 − r3 ≥ 1 − r̄2 ,

which say that


y ∈ B ( x, r̄1 , t̄1 ) ∩ B ( x, r̄2 , t̄2 ) ⊆ B ( x1 , r1 , t1 ) ∩ B ( x2 , r2 , t2 ).

Therefore, we obtain the inclusion of Equation (9). The second inclusion of Equation (10) can be
similarly obtained. This completes the proof.

The following proposition does not assume the left-continuity or symmetric left-continuity for M.
Therefore, we can consider the different ◦-triangle inequalities for ◦ ∈ {, , , }.

Proposition 14. Let ( X, M) be a fuzzy semi-metric space along with a t-norm ∗ that is left-continuous at 1
with respect to the first or second component. Suppose that x ≠ y. We have the following properties.

(i) Suppose that M satisfies the -triangle inequality or the -triangle inequality. Then,

B ( x, r, t) ∩ B (y, r, t) = ∅ and B ( x, r, t) ∩ B (y, r, t) = ∅

for some r ∈ (0, 1) and t > 0.


(ii) Suppose that M satisfies the -triangle inequality. Then,

B ( x, r, t) ∩ B (y, r, t) = ∅

for some r ∈ (0, 1) and t > 0.


(iii) Suppose that M satisfies the -triangle inequality. Then,

B ( x, r, t) ∩ B (y, r, t) = ∅


for some r ∈ (0, 1) and t > 0.

Proof. Since x ≠ y, there exists t0 > 0 such that M( x, y, t0 ) < 1. There also exists r0 such that
M ( x, y, t0 ) < r0 < 1. By part (ii) of Proposition 3, there exists r̂ with 0 < r̂ < 1 such that r̂ ∗ r̂ > r0 .

• Suppose that M satisfies the -triangle inequality. We are going to prove that
   
B(x, 1 − r̂, t0/2) ∩ B(y, 1 − r̂, t0/2) = ∅

by contradiction. Suppose that


   
z ∈ B(x, 1 − r̂, t0/2) ∩ B(y, 1 − r̂, t0/2).

Since M satisfies the -triangle inequality, it follows that


   
M(x, y, t0) ≥ M(x, z, t0/2) ∗ M(y, z, t0/2) ≥ r̂ ∗ r̂ > r0 > M(x, y, t0),

which is a contradiction.
• Suppose that M satisfies the -triangle inequality. For

z ∈ B(x, 1 − r̂, t0/2) ∩ B(y, 1 − r̂, t0/2),

since M satisfies the -triangle inequality, it follows that

M(x, y, t0) ≥ M(z, x, t0/2) ∗ M(z, y, t0/2) ≥ r̂ ∗ r̂ > r0 > M(x, y, t0),

which is a contradiction.
• Suppose that M satisfies the -triangle inequality. For

z ∈ B(x, 1 − r̂, t0/2) ∩ B(y, 1 − r̂, t0/2),

since M satisfies the -triangle inequality, it follows that

M(x, y, t0) ≥ M(x, z, t0/2) ∗ M(z, y, t0/2) ≥ r̂ ∗ r̂ > r0 > M(x, y, t0),

which is a contradiction. On the other hand, for

z ∈ B(x, 1 − r̂, t0/2) ∩ B(y, 1 − r̂, t0/2),

since M satisfies the -triangle inequality, it follows that

M(y, x, t0) ≥ M(y, z, t0/2) ∗ M(z, x, t0/2) ≥ r̂ ∗ r̂ > r0 > M(x, y, t0),

which is a contradiction.
• Suppose that M satisfies the -triangle inequality. For

z ∈ B(x, 1 − r̂, t0/2) ∩ B(y, 1 − r̂, t0/2),

since M satisfies the -triangle inequality, it follows that

M(x, y, t0) ≥ M(z, x, t0/2) ∗ M(y, z, t0/2) ≥ r̂ ∗ r̂ > r0 > M(x, y, t0),

which is a contradiction. On the other hand, for

z ∈ B(x, 1 − r̂, t0/2) ∩ B(y, 1 − r̂, t0/2),

since M satisfies the -triangle inequality, it follows that

M(y, x, t0) ≥ M(z, y, t0/2) ∗ M(x, z, t0/2) ≥ r̂ ∗ r̂ > r0 > M(x, y, t0),

which is a contradiction.

This completes the proof.

Theorem 1. Let ( X, M ) be a fuzzy semi-metric space along with a t-norm ∗ that is left-continuous at 1 with
respect to the first or second component. Suppose that M is left-continuous or symmetrically left-continuous
with respect to the distance on (0, ∞), and that M satisfies the -triangle inequality.

(i) We define

τ = {O ⊆ X : x ∈ O if and only if there exist t > 0 and r ∈ (0, 1) such that B ( x, r, t) ⊆ O }.

Then, the family B  induces a T1 -space ( X, τ  ) such that B  is a base for the topology τ  , in which
O ∈ τ  if and only if, for each x ∈ O , there exist t > 0 and r ∈ (0, 1) such that B ( x, r, t) ⊆ O .
(ii) We define

τ = {O ⊆ X : x ∈ O if and only if there exist t > 0 and r ∈ (0, 1) such that B ( x, r, t) ⊆ O }.

Then, the family B  induces a T1 -space ( X, τ  ) such that B  is a base for the topology τ  , in which
O ∈ τ  if and only if, for each x ∈ O , there exist t > 0 and r ∈ (0, 1) such that B ( x, r, t) ⊆ O .

Moreover, the T1 -spaces ( X, τ  ) and ( X, τ  ) satisfy the first axiom of countability.

Proof. Using part (i) of Proposition 8, part (i) of Proposition 12 and part (i) of Proposition 13, we see
that τ  is a topology such that B  is a base for τ  . Part (ii) of Proposition 8 says that ( X, τ  ) is a
T1 -space. Part (i) of Proposition 9 says that there exist countable local bases at each x ∈ X for τ  and
τ  , respectively, which also says that τ  and τ  satisfy the first axiom of countability. We can similarly
obtain the desired results regarding the topology τ  . This completes the proof.

According to Proposition 5, since M is assumed to be left-continuous or symmetrically


left-continuous with respect to the distance on (0, ∞), it follows that the topologies obtained in
Wu [13] are still valid when we consider the ◦-triangle inequality for ◦ ∈ {, , }.

Proposition 15. Let ( X, M) be a fuzzy semi-metric space along with a t-norm ∗ that is left-continuous at 1
with respect to the first or second component. Suppose that M is left-continuous with respect to the distance
on (0, ∞), and that M satisfies the -triangle inequality. Then, regarding the T1 -spaces ( X, τ  ) and ( X, τ  ),
B ( x, r, t) is a τ  -open set and B ( x, r, t) is a τ  -open set.


Proof. Using part (i) of Proposition 10, we see that B ( x, r, t) is a τ  -open set and B ( x, r, t) is a τ  -open
set. This completes the proof.

5. Limits in Fuzzy Semi-Metric Space


Since the symmetric condition is not satisfied in the fuzzy semi-metric space, three kinds of limit
concepts will also be considered in this paper by referring to Wu [14]. In this section, we shall study
the consistency of limit concepts in the induced topologies, which was not addressed in Wu [13].
Let ( X, d) be a metric space. If the sequence { xn }∞n=1 in ( X, d ) converges to x, i.e., d( xn , x ) → 0
as n → ∞, then it is denoted by xn −d→ x as n → ∞. In this case, we also say that x is a d-limit of the
sequence { xn }∞n=1 .

Definition 6. Let ( X, M ) be a fuzzy semi-metric space, and let { xn }∞n=1 be a sequence in X.

• We write xn −M→ x as n → ∞ if and only if

lim n→∞ M( xn , x, t) = 1 for all t > 0.

In this case, we call x a M-limit of the sequence { xn }∞n=1 .
• We write xn −M→ x as n → ∞ if and only if

lim n→∞ M( x, xn , t) = 1 for all t > 0.

In this case, we call x a M-limit of the sequence { xn }∞n=1 .
• We write xn −M→ x as n → ∞ if and only if

lim n→∞ M( xn , x, t) = lim n→∞ M( x, xn , t) = 1 for all t > 0.

In this case, we call x a M-limit of the sequence { xn }∞n=1 .
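As a sanity check of Definition 6, the limit condition can be computed for the standard fuzzy metric M(x, y, t) = t/(t + |x − y|) induced by the usual metric on the reals (the classical example of George and Veeramani [8]); since this M is symmetric, the three limit notions coincide. The code is an illustrative sketch:

```python
# Sketch: Definition 6 for the standard fuzzy metric M(x, y, t) = t / (t + |x - y|).
# Here x_n = 1/n -> 0 in the usual sense, and for each fixed t > 0 the values
# M(x_n, 0, t) approach 1, so 0 is the M-limit of the sequence.
def M(x, y, t):
    return t / (t + abs(x - y))

x_n = [1.0 / n for n in range(1, 10001)]

for t in (0.1, 1.0, 10.0):
    tail = M(x_n[-1], 0.0, t)
    print(f"t = {t}: M(x_n, 0, t) at n = 10000 is {tail:.6f}")
```

Note that the convergence is pointwise in t: the smaller t is, the later M(x_n, 0, t) gets close to 1.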

Proposition 16. Let ( X, M) be a fuzzy semi-metric space along with a t-norm ∗ that is left-continuous at 1
with respect to the first or second component, and let { xn }∞n=1 be a sequence in X.

(i) Suppose that M satisfies the -triangle inequality or the -triangle inequality. Then, we have the following
properties:

M M
• If xn −→ x and xn −→ y, then x = y.
M M
• If xn −→ x and xn −→ y, then x = y.
M M
(ii) Suppose that M satisfies the -triangle inequality. If xn −→ x and xn −→ y, then x = y. In other words,
the M -limit is unique.
M M
(iii) Suppose that M satisfies the -triangle inequality. If xn −→ x and xn −→ y, then x = y. In other words,
the M -limit is unique.

Proof. To prove the first case of part (i), we first assume that M satisfies the -triangle inequality.
For any t > 0, using the left-continuity of t-norm at 1, we have
   
M(x, y, t) ≥ M(x, xn, t/2) ∗ M(xn, y, t/2) → 1 ∗ 1 = 1,


which says that x = y. To prove the second case of part (i), we have
   
M(y, x, t) ≥ M(y, xn, t/2) ∗ M(xn, x, t/2) → 1 ∗ 1 = 1,

which says that x = y. Now suppose that M satisfies the -triangle inequality. Then, we have
   
M(y, x, t) ≥ M(xn, y, t/2) ∗ M(x, xn, t/2) → 1 ∗ 1 = 1,

and    
M(x, y, t) ≥ M(xn, x, t/2) ∗ M(y, xn, t/2) → 1 ∗ 1 = 1.
Therefore, we can similarly obtain the desired result.
To prove part (ii), we have
   
M(x, y, t) ≥ M(xn, x, t/2) ∗ M(xn, y, t/2) → 1 ∗ 1 = 1,

which says that x = y. To prove part (iii), we have


   
M(x, y, t) ≥ M(x, xn, t/2) ∗ M(y, xn, t/2) → 1 ∗ 1 = 1,

which says that x = y. This completes the proof.

Let ( X, τ ) be a topological space. The statement that the sequence { xn }∞n=1 in X converges to x ∈ X
with respect to the topology τ is denoted by xn −τ→ x as n → ∞, where the limit is unique when τ is a
Hausdorff topology.

Remark 2. Let ( X, M) be a fuzzy semi-metric space along with a t-norm ∗ and be endowed with a topology
τ given in Theorem 1. Let { xn }∞n=1 be a sequence in X. Since B is a base for τ, it follows that xn −τ→ x
as n → ∞, if and only if, given any t > 0 and 0 < r < 1, there exists nr,t such that xn ∈ B ( x, r, t) for all
n ≥ nr,t . Since xn ∈ B ( x, r, t) means M( xn , x, t) > 1 − r, it says that xn −τ→ x as n → ∞, if and only if,
given any t > 0 and 0 < r < 1, there exists nr,t such that M( xn , x, t) > 1 − r for all n ≥ nr,t .

Proposition 17. Let ( X, M) be a fuzzy semi-metric space along with a t-norm ∗. Suppose that M is
left-continuous or symmetrically left-continuous with respect to the distance on (0, ∞), and that M satisfies the
strong -triangle inequality. Then, the following statements hold true:
(i) Let τ be the topology induced by ( X, M, ∗), and let { xn }∞n=1 be a sequence in X. Then, xn −τ→ x as
n → ∞ if and only if xn −M→ x as n → ∞.
(ii) Let τ be the topology induced by ( X, M, ∗), and let { xn }∞n=1 be a sequence in X. Then, xn −τ→ x as
n → ∞ if and only if xn −M→ x as n → ∞.

Proof. Under the assumptions, Theorem 1 says that we can induce two topologies τ and τ. It suffices
to prove part (i). Suppose that xn −τ→ x as n → ∞. Fix t > 0. Given any ε ∈ (0, 1), there
exists nε,t ∈ N such that xn ∈ B ( x, ε, t) for all n ≥ nε,t , which says that M( xn , x, t) > 1 − ε, i.e.,
0 ≤ 1 − M( xn , x, t) < ε for all n ≥ nε,t . Therefore, we obtain M( xn , x, t) → 1 as n → ∞. Conversely,
given any t > 0, if M( xn , x, t) → 1 as n → ∞, then, given any ε ∈ (0, 1), there exists nε,t ∈ N such
that 1 − M( xn , x, t) < ε, i.e., M( xn , x, t) > 1 − ε for all n ≥ nε,t , which says that xn ∈ B ( x, ε, t) for all
n ≥ nε,t . This shows that xn −τ→ x as n → ∞, and the proof is complete.


Let ( X, M) be a fuzzy semi-metric space. We consider the following sets

B̄ ( x, r, t) = {y ∈ X : M( x, y, t) ≥ 1 − r }

and
B̄ ( x, r, t) = {y ∈ X : M (y, x, t) ≥ 1 − r } .

If the symmetric condition is satisfied, then we simply write B̄( x, r, t). We are going to consider
the closeness of B̄ ( x, r, t) and B̄ ( x, r, t).

Proposition 18. Let ( X, M) be a fuzzy semi-metric space along with a t-norm ∗ that is left-continuous at 1
with respect to the first or second component. Suppose that M is continuous or symmetrically continuous with
respect to the distance on (0, ∞). If M satisfies the -triangle inequality, then B̄ ( x, r, t) and B̄ ( x, r, t) are
τ  -closed and τ  -closed, respectively. In other words, we have

τ  -cl( B̄ ( x, r, t)) = B̄ ( x, r, t) and τ  -cl( B̄ ( x, r, t)) = B̄ ( x, r, t).

Proof. Under the assumptions, Theorem 1 says that we can induce two topologies τ  and τ  satisfying
the first axiom of countability. To prove the first case, for y ∈ τ-cl( B̄ ( x, r, t)), since ( X, τ ) satisfies
the first axiom of countability, there exists a sequence {yn }∞n=1 in B̄ ( x, r, t ) such that yn −τ→ y as
n → ∞. We also have M( x, yn , t) ≥ 1 − r for all n. By Proposition 17, we have M(yn , y, t) → 1 as
n → ∞ for all t > 0. Given any ε > 0, the -triangle inequality says that

M( x, y, t + ε) ≥ M( x, yn , t) ∗ M(yn , y, ε) ≥ (1 − r ) ∗ M(yn , y, ε).

Since the t-norm ∗ is left-continuous at 1 with respect to each component by Remark 1, we obtain
 
M( x, y, t + ε) ≥ (1 − r ) ∗ lim n→∞ M(yn , y, ε) = (1 − r ) ∗ 1 = 1 − r.

By the right-continuity of M, we also have

M( x, y, t) = lim ε→0+ M( x, y, t + ε) ≥ 1 − r,

which says that y ∈ B̄ ( x, r, t).


To prove the second case, for y ∈ τ  -cl( B̄ ( x, r, t)), since ( X, τ  ) satisfies the first axiom of
 τ
countability, there exists a sequence {yn }∞
n=1 in B̄ ( x, r, t ) such that yn −→ y as n → ∞. We also have
M (yn , x, t) ≥ 1 − r for all n. By Proposition 17, we have M(y, yn , t) → 1 as n → ∞ for all t > 0.
Given any  > 0, the -triangle inequality says that

M (y, x, t + ) ≥ M (y, yn , ) ∗ M(yn , x, t) ≥ M(y, yn , ) ∗ (1 − r ).

Since the t-norm ∗ is left-continuous at 1 with respect to each component by Remark 1, we obtain
 
M(y, x, t + ε) ≥ lim n→∞ M(y, yn , ε) ∗ (1 − r ) = 1 ∗ (1 − r ) = 1 − r.

By the right-continuity of M, we also have

M(y, x, t) = lim ε→0+ M(y, x, t + ε) ≥ 1 − r,

which says that y ∈ B̄ ( x, r, t). This completes the proof.


6. Conclusions
In a fuzzy metric space, the triangle inequality plays an important role. In general, since the
symmetric condition need not be satisfied, the so-called fuzzy semi-metric space is proposed
in this paper. In this situation, four different types of triangle inequalities are proposed and studied.
The main purpose of this paper is to establish the T1 -spaces that are induced by the fuzzy semi-metric
spaces along with the special kind of triangle inequality.
On the other hand, the limit concepts in fuzzy semi-metric space are also proposed and studied in
this paper. Since the symmetric condition is not satisfied, three kinds of limits in fuzzy semi-metric
space are considered. The concepts of uniqueness for the limits are also studied. Finally, we present
the consistency of limit concepts in the induced T1 -spaces.
Funding: This research received no external funding.

Acknowledgments: The author would like to thank the reviewers for providing the useful suggestions that
improve the presentation of this paper.
Conflicts of Interest: The author declares no conflict of interest.

References
1. Kaleva, O.; Seikkala, S. On Fuzzy Metric Spaces. Fuzzy Sets Syst. 1984, 12, 215–229. [CrossRef]
2. Schweizer, B.; Sklar, A. Statistical Metric Spaces. Pac. J. Math. 1960, 10, 313–334. [CrossRef]
3. Schweizer, B.; Sklar, A.; Thorp, E. The Metrization of Statistical Metric Spaces. Pac. J. Math. 1960, 10, 673–675.
[CrossRef]
4. Schweizer, B.; Sklar, A. Triangle Inequalities in a Class of Statistical Metric Spaces. J. Lond. Math. Soc. 1963,
38, 401–406. [CrossRef]
5. Hadžić, O.; Pap, E. Fixed Point Theory in Probabilistic Metric Spaces; Kluwer Academic Publishers: Norwell,
MA, USA, 2001.
6. Chang, S.S.; Cho, Y.J.; Kang, S.M. Nonlinear Operator Theory in Probabilistic Metric Space; Nova Science
Publishers: New York, NY, USA, 2001.
7. Kramosil, I.; Michalek, J. Fuzzy Metric and Statistical Metric Spaces. Kybernetika 1975, 11, 336–344.
8. George, A.; Veeramani, P. On Some Results in Fuzzy Metric Spaces. Fuzzy Sets Syst. 1994, 64, 395–399.
[CrossRef]
9. George, A.; Veeramani, P. On Some Results of Analysis for Fuzzy Metric Spaces. Fuzzy Sets Syst. 1997, 90,
365–368. [CrossRef]
10. Gregori, V.; Romaguera, S. Some Properties of Fuzzy Metric Spaces. Fuzzy Sets Syst. 2002, 115, 399–404.
[CrossRef]
11. Gregori, V.; Romaguera, S. Fuzzy Quasi-Metric Spaces. Appl. Gen. Topol. 2004, 5, 129–136. [CrossRef]
12. Gregori, V.; Romaguera, S.; Sapena, A. A Note on Intuitionistic Fuzzy Metric Spaces. Chaos Solitons Fractals
2006, 28, 902–905. [CrossRef]
13. Wu, H.-C. Hausdorff Topology Induced by the Fuzzy Metric and the Fixed Point Theorems in Fuzzy Metric
Spaces. J. Korean Math. Soc. 2015, 52, 1287–1303. [CrossRef]
14. Wu, H.-C. Common Coincidence Points and Common Fixed Points in Fuzzy Semi-Metric Spaces. Mathematics
2018, 6, 29. [CrossRef]

c 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Nilpotent Fuzzy Subgroups
Elaheh Mohammadzadeh 1 , Rajab Ali Borzooei 2, *
1 Department of Mathematics, Payame Noor University, Tehran 19395-3697, Iran; [email protected]
2 Department of Mathematics, Shahid Beheshti University, G. C., Tehran 19395-3697, Iran
* Correspondence: [email protected]

Received: 18 December 2017; Accepted: 13 February 2018; Published: 19 February 2018

Abstract: In this paper, we introduce a new definition for nilpotent fuzzy subgroups, which is called
the good nilpotent fuzzy subgroup or briefly g-nilpotent fuzzy subgroup. In fact, we prove that this
definition is a good generalization of abstract nilpotent groups. For this, we show that a group G is
nilpotent if and only if any fuzzy subgroup of G is a g-nilpotent fuzzy subgroup of G. In particular,
we construct a nilpotent group via a g-nilpotent fuzzy subgroup. Finally, we characterize the elements
of any maximal normal abelian subgroup by using a g-nilpotent fuzzy subgroup.

Keywords: nilpotent group; nilpotent fuzzy subgroup; generalized nilpotent fuzzy subgroup

1. Introduction
Applying the concept of fuzzy sets of Zadeh [1] to group theory, Rosenfeld [2] introduced the notion
of a fuzzy subgroup as early as 1971. Within a few years, it caught the imagination of algebraists like
wildfire and there seems to be no end to its ramifications. With appropriate definitions in the fuzzy setting,
most of the elementary results of group theory have been superseded with a startling generalized effect
(see [3–5]). In [6] Dudek extended the concept of fuzzy sets to the set with one n-ary operation i.e., to the
set G with one operation on f : G −→ G, where n ≥ 2. Such defined groupoid will be denoted by ( G, f ).
Moreover, he introduced the notion of a fuzzy subgroupoid of an n-ary groupoid. Specially, he proved
that if every fuzzy subgroupoid μ defined on ( G, f ) has a finite image, then every descending chain of
subgroupoids of ( G, f ) terminates at a finite step. One of the important concepts in the study of groups is
the notion of nilpotency. In [7] Kim proposed the notion of a nilpotent fuzzy subgroup. There, he attached
to a fuzzy subgroup an ascending series of subgroups of the underlying group to define nilpotency of the
fuzzy subgroup. With this definition, the nilpotence of a group can be completely characterized by the
nilpotence of its fuzzy subgroups. Then, in [8] Guptaa and Sarmahas, defined the commutator of a pair
of fuzzy subsets of a group to generate the descending central chain of fuzzy subgroups of a given fuzzy
subgroup and they proposed a new definition of a nilpotent fuzzy subgroup through its descending
central chain. Specially, they proved that every Abelian (see [9]) fuzzy subgroup is nilpotent. There are
many natural generalizations of the notion of a normal subgroup. One of them is subnormal subgroup.
The new methods are important to guarantee some properties of the fuzzy sets; for example, see [10].
In [3] Kurdachenko et al. formulated this concept for fuzzy subgroups to prove that if every fuzzy
subgroup of γ is subnormal in γ with defect at most d, then γ is nilpotent ([3], Corollary 4.6). Finally
in [11,12] Borzooei et. al. defind the notions of Engel fuzzy subgroups (subpolygroups) and investigated
some related results. Now, in this paper we define the ascending series differently with Kim’s definition.
We then propose a definition of a nilpotent fuzzy subgroup through its ascending central series and call it
g-nilpotent fuzzy subgroups. Also, we show that each g-nilpotent fuzzy subgroup is nilpotent. Moreover,
we get the main results of nilpotent fuzzy subgroups with our definition. Basically this definition help
us with the fuzzification of much more properties of nilpotent groups. Furthermore, we prove that for

Mathematics 2018, 6, 27; doi:10.3390/math6020027 www.mdpi.com/journal/mathematics


a fuzzy subgroup μ of G, the set {x ∈ G | μ([x, y_1, ..., y_n]) = μ(e) for any y_1, ..., y_n ∈ G} is equal to the n-th
term of the ascending series, where [x, y_1] = x^{-1} y_1^{-1} x y_1 and [x, y_1, ..., y_n] = [[x, y_1, ..., y_{n-1}], y_n]. Therefore,
we have a complete analogue of the concept of nilpotency of an abstract group. In particular, we prove that a
finite maximal normal Abelian subgroup controls a g-nilpotent fuzzy subgroup and makes it finite.

2. Preliminaries
Let G be any group. Define the n-commutator [x,_n y], for any n ∈ N and x, y ∈ G,
by [x,_0 y] = x, [x,_1 y] = x^{-1} y^{-1} x y and [x,_n y] = [[x,_{n-1} y], y]; also, for any y_1, ..., y_n ∈ G, [x, y_1, ..., y_n] =
[[x, y_1, ..., y_{n-1}], y_n]. For any x, g ∈ G, we write x^g = g^{-1} x g and [x, y] = [x,_1 y].

Theorem 1. [13] Let G be a group and x, y, z ∈ G. Then

(1) [x, y] = [y, x]^{-1},
(2) [x.y, z] = [x, z]^y.[y, z] and [x, y.z] = [x, z].[x, y]^z,
(3) [x, y^{-1}] = ([x, y]^{y^{-1}})^{-1} and [x^{-1}, y] = ([x, y]^{x^{-1}})^{-1}.

Note that x^g = x.[x, g].
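These identities are formal consequences of the group axioms, so they can be checked mechanically in any concrete group. The following Python sketch (our own illustration, not part of the paper) brute-force verifies identity (1), the first halves of (2) and (3), and the note x^g = x.[x, g], over all elements of the symmetric group S_3 realized as permutation tuples.

```python
from itertools import permutations

# Brute-force check of the commutator identities in S3; mul(p, q) composes
# permutations (q applied first), inv inverts, comm(x, y) = x^-1 y^-1 x y.
def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)
def comm(x, y): return mul(mul(inv(x), inv(y)), mul(x, y))
def conj(x, g): return mul(mul(inv(g), x), g)        # x^g = g^-1 x g

G = list(permutations(range(3)))
for x in G:
    for y in G:
        assert comm(x, y) == inv(comm(y, x))                     # identity (1)
        assert conj(x, y) == mul(x, comm(x, y))                  # x^g = x.[x, g]
        assert comm(x, inv(y)) == inv(conj(comm(x, y), inv(y)))  # identity (3)
        for z in G:
            assert comm(mul(x, y), z) == mul(conj(comm(x, z), y), comm(y, z))  # (2)
print("all identities hold in S3")
```

Since the identities hold in every group, the same check succeeds for any finite group encoded this way.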

Definition 1. [13] Let X_1, X_2, ... be nonempty subsets of a group G. Define the commutator subgroup of X_1 and
X_2 by

[X_1, X_2] = ⟨[x_1, x_2] | x_1 ∈ X_1, x_2 ∈ X_2⟩.

More generally, define

[X_1, ..., X_n] = [[X_1, ..., X_{n-1}], X_n],

where n ≥ 2 and [X_1] = ⟨X_1⟩. Also recall that X_1^{X_2} = ⟨x_1^{x_2} | x_1 ∈ X_1, x_2 ∈ X_2⟩.

Definition 2. [1] A fuzzy subset μ of X is a function μ : X → [0, 1].

Also, for fuzzy subsets μ_1 and μ_2 of X, we say that μ_1 is smaller than μ_2, and write μ_1 ≤ μ_2, iff μ_1(x) ≤ μ_2(x) for all x ∈ X. The fuzzy subsets μ_1 ∨ μ_2 and μ_1 ∧ μ_2 are defined as follows:
(μ_1 ∨ μ_2)(x) = max{μ_1(x), μ_2(x)} and (μ_1 ∧ μ_2)(x) = min{μ_1(x), μ_2(x)}, for any x ∈ X.

Definition 3. [14] Let f be a function from X into Y, and μ be a fuzzy subset of X. Define the fuzzy subset f(μ)
of Y, for any y ∈ Y, by

(f(μ))(y) = ∨_{x ∈ f^{-1}(y)} μ(x) if f^{-1}(y) ≠ ∅, and (f(μ))(y) = 0 otherwise.
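In other words, f(μ) takes at each point y the supremum of μ over the preimage of y. A toy Python sketch (the carrier sets, the function f and the membership values are invented for illustration):

```python
# Definition 3 in code: (f(mu))(y) = sup of mu over f^{-1}(y), and 0 when the
# preimage is empty.
def image_fuzzy(mu, f, Y):
    out = {}
    for y in Y:
        pre = [x for x in mu if f[x] == y]
        out[y] = max(mu[x] for x in pre) if pre else 0
    return out

mu = {0: 0.9, 1: 0.4, 2: 0.7, 3: 0.1}       # fuzzy subset of X = {0, 1, 2, 3}
f = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}        # f : X -> Y = {'a', 'b', 'c'}
print(image_fuzzy(mu, f, ['a', 'b', 'c']))  # {'a': 0.9, 'b': 0.7, 'c': 0}
```

Here 'c' has empty preimage, so (f(μ))('c') = 0, matching the second branch of the definition.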

Definition 4. [2] Let μ be a fuzzy subset of a group G. Then μ is called a fuzzy subgroup of G if for any x, y ∈ G,
μ(xy) ≥ μ(x) ∧ μ(y) and μ(x^{-1}) ≥ μ(x). A fuzzy subgroup μ of G is called normal if μ(xy) = μ(yx) for any
x, y in G. It is easy to prove that a fuzzy subgroup μ is normal if and only if μ(x) = μ(y^{-1}xy) for any x, y ∈ G
(see [14]).

Theorem 2. [14] Let μ be a fuzzy subgroup of G. Then for any x, y ∈ G, μ(x) ≠ μ(y) implies
μ(xy) = μ(x) ∧ μ(y). Moreover, for a normal subgroup N of G, the fuzzy subset ξ of G/N defined by

ξ(xN) = ∨_{z ∈ xN} μ(z), for any x ∈ G,

is a fuzzy subgroup of G/N.
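The induced fuzzy subgroup ξ can be computed mechanically. The sketch below (our own illustration) builds ξ on D_3/⟨a⟩, encoding D_3 as pairs (i, j) = a^i b^j with ba = a^2 b, and using the membership function that appears in Example 1 below with hypothetical values t_0 = 0.8 and t_1 = 0.3.

```python
# Building xi(xN) = sup_{z in xN} mu(z) on D3/N for N = <a>, as in Theorem 2.
def mul(x, y):
    (i1, j1), (i2, j2) = x, y
    return ((i1 + (i2 if j1 == 0 else -i2)) % 3, (j1 + j2) % 2)

t0, t1 = 0.8, 0.3                                     # hypothetical values, t0 > t1
mu = {(i, j): (t0 if j == 0 else t1) for i in range(3) for j in range(2)}

N = [(i, 0) for i in range(3)]                        # the normal subgroup <a>
def coset(x): return frozenset(mul(x, n) for n in N)  # the coset xN
cosets = {coset(x) for x in mu}
xi = {C: max(mu[z] for z in C) for C in cosets}       # xi(xN) = sup_{z in xN} mu(z)
print(sorted(xi.values()))                            # [0.3, 0.8]
```

There are two cosets (the rotations ⟨a⟩ and the reflections), and ξ takes the value t_0 on ⟨a⟩ and t_1 on the other coset.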

Definition 5. [14] Let μ be a fuzzy subset of a semigroup G. Then Z(μ) is defined as follows:

Z (μ) = { x ∈ G | μ( xy) = μ(yx ) and μ( xyz) = μ(yxz), for any y, z ∈ G }

If Z (μ) = G, then μ is called a commutative fuzzy subset of G.

Note that if μ(xy) = μ(yx) for all y ∈ G, then μ(xyz) = μ(x(yz)) = μ((yz)x) = μ(yzx).

Theorem 3. [14] Let μ be a fuzzy subset of a semigroup G. If Z (μ) is nonempty, then Z (μ) is a subsemigroup of
G. Moreover, if G is a group, then Z (μ) is a normal subgroup of G.

We recall the notion of the ascending central series of a fuzzy subgroup and of a nilpotent fuzzy
subgroup of a group [14]. Let μ be a fuzzy subgroup of a group G and Z^0(μ) = {e}. Clearly {e} is a
normal subgroup of G. Let π_0 be the natural homomorphism of G onto G/Z^0(μ). It is clear that π_0 = I.
Suppose that Z^1(μ) = π_0^{-1}(Z(π_0(μ))). Since Z(π_0(μ)) is a normal subgroup of G/Z^0(μ), it is clear
that Z^1(μ) is a normal subgroup of G. Also we see that Z^1(μ) = Z(μ). Now let π_1 be the natural
homomorphism of G onto G/Z^1(μ) and Z^2(μ) = π_1^{-1}(Z(π_1(μ))). Since π_1(μ) is a fuzzy subgroup of G/Z^1(μ),
Z(π_1(μ)) is a normal subgroup of G/Z^1(μ), which implies that Z^2(μ) is a normal subgroup of G.
Similarly, suppose that Z^i(μ) has been defined and that Z^i(μ) is a normal subgroup of G, for i ∈ N ∪ {0}.
Let π_i be the natural homomorphism of G onto G/Z^i(μ) and Z^{i+1}(μ) = π_i^{-1}(Z(π_i(μ))). Then Z^{i+1}(μ)
is a normal subgroup of G. Since 1_{G/Z^i(μ)} ⊆ Z(π_i(μ)), we have π_i^{-1}(1_{G/Z^i(μ)}) ⊆ π_i^{-1}(Z(π_i(μ))). Therefore,
Ker(π_i) = Z^i(μ) ⊆ Z^{i+1}(μ), for i = 0, 1, ....

Definition 6. [14] Let μ be a fuzzy subgroup of a group G. The ascending central series of μ is defined to be the
ascending chain of normal subgroups of G:

Z^0(μ) ⊆ Z^1(μ) ⊆ Z^2(μ) ⊆ ....

Now the fuzzy subgroup μ of G is called nilpotent if there exists a nonnegative integer m such that
Z^m(μ) = G. The smallest such integer is called the class of μ.

Theorem 4. [14] Let μ be a fuzzy subgroup of a group G, i ∈ N and x ∈ G. If xyx^{-1}y^{-1} ∈ Z^{i-1}(μ) for any
y ∈ G, then x ∈ Z^i(μ). Moreover, if T = {x ∈ G | μ(xyx^{-1}y^{-1}) = μ(e), for any y ∈ G}, then T = Z(μ).

Let G be a group. We know that Z(G) is a normal subgroup of G. Let Z_2(G) be the inverse image
of Z(G/Z(G)) under the canonical projection G −→ G/Z(G). Then Z_2(G) is normal in G and contains Z(G).
Continue this process by defining inductively: Z_1(G) = Z(G), and Z_i(G) is the inverse image of Z(G/Z_{i-1}(G))
under the canonical projection G −→ G/Z_{i-1}(G), for any i ∈ N. Thus we obtain a sequence of normal
subgroups of G, called the ascending central series of G, that is, {e} = Z_0(G) ⊆ Z_1(G) ⊆ Z_2(G) ⊆ ....
The other definition is as follows [13]: Let G be a group and Z_0(G) = {e}. It is clear that {e} is a normal
subgroup of G. Put Z_1(G)/{e} = Z(G/{e}). Then Z_1(G) = Z(G) is a normal subgroup of G. Similarly, for
any integer n > 1, put Z_n(G)/Z_{n-1}(G) = Z(G/Z_{n-1}(G)). Then Z_n(G) is called the n-th center of the group G. Moreover,
{e} = Z_0(G) ⊆ Z_1(G) ⊆ Z_2(G) ⊆ ... is called the upper central series of G. These two definitions are
equivalent since, with π the canonical projection G −→ G/Z(G), π(Z_2(G)) = π(π^{-1}(Z(G/Z(G)))) = Z(G/Z(G)), and thus
Z_2(G)/Z(G) = Z(G/Z(G)). Similarly, we get the result for any n ∈ N.
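The iterative description Z_{i+1}(G)/Z_i(G) = Z(G/Z_i(G)) translates directly into a brute-force computation for small groups, since x Z_i(G) is central in G/Z_i(G) exactly when [x, g] ∈ Z_i(G) for all g. The Python sketch below (our own, not from the paper) computes the upper central series of the dihedral group D_4 of order 8.

```python
# Upper central series of D4, computed iteratively via
# Z_{i+1}(G) = { x : [x, g] in Z_i(G) for all g in G }.
def mul(x, y):
    (i1, j1), (i2, j2) = x, y
    return ((i1 + (i2 if j1 == 0 else -i2)) % 4, (j1 + j2) % 2)
def inv(x):
    i, j = x
    return ((-i) % 4, 0) if j == 0 else x
def comm(x, y): return mul(mul(inv(x), inv(y)), mul(x, y))   # [x, y]

G = [(i, j) for i in range(4) for j in range(2)]             # (i, j) = r^i s^j
e = (0, 0)
series = [{e}]                                               # Z_0(G) = {e}
while series[-1] != set(G):
    Zi = series[-1]
    series.append({x for x in G if all(comm(x, g) in Zi for g in G)})
print([sorted(Z) for Z in series])
# Z_1(D4) = {e, r^2} and Z_2(D4) = D4, so D4 is nilpotent of class 2
```

The loop terminates exactly because D_4 is nilpotent; for a non-nilpotent group such as D_3 the series would stall below G.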


Theorem 5. [13] Let G be a group and n ∈ N. Then

(i) x ∈ Z_n(G) if and only if [x, y_1, ..., y_n] = e for any y_i ∈ G, where 1 ≤ i ≤ n;
(ii) [Z_n(G), G] ⊆ Z_{n-1}(G);
(iii) the class of nilpotent groups is closed with respect to subgroups and homomorphic images.
Notation. From now on, in this paper we let G be a group.

3. Good Nilpotent Fuzzy Subgroups


One of the important concepts in the study of groups is the notion of nilpotency. It was introduced
for fuzzy subgroups, too (see [14]). Now, in this section, we give a new definition of nilpotent fuzzy
subgroups which is similar to the one in abstract group theory; it is a good generalization of the previous one.
With this nilpotency we obtain some new main results.

Let μ be a fuzzy subgroup of G. Put Z_0(μ) = {e}. Clearly Z_0(μ) ⊴ G. Let Z_1(μ) = {x ∈ G |
μ([x, y]) = μ(e), for any y ∈ G}. Now, using Theorem 4, Z_1(μ) = Z(μ) is a normal subgroup
of G. We define a subgroup Z_2(μ) of G such that Z_2(μ)/Z_1(μ) = Z(G/Z_1(μ)); since Z_1(μ) ⊴ G, we have Z_1(μ) ⊴ Z_2(μ).
We show that [Z_2(μ), G] ⊆ Z_1(μ). For this, let x ∈ Z_2(μ) and g ∈ G. Then xZ_1(μ) ∈ Z_2(μ)/Z_1(μ) = Z(G/Z_1(μ)),
which implies that [xZ_1(μ), gZ_1(μ)] = Z_1(μ) for any g ∈ G. Therefore [x, g] ∈ Z_1(μ). Hence
[Z_2(μ), G] ⊆ Z_1(μ). Therefore x^g = x[x, g] ∈ Z_2(μ). Thus Z_2(μ) ⊴ G. Similarly, for k ≥ 2, we define a
normal subgroup Z_k(μ) such that Z_k(μ)/Z_{k-1}(μ) = Z(G/Z_{k-1}(μ)). It is clear that Z_0(μ) ⊆ Z_1(μ) ⊆ Z_2(μ) ⊆ ....

Definition 7. A fuzzy subgroup μ of G is called a good nilpotent fuzzy subgroup of G, or briefly a g-nilpotent fuzzy
subgroup of G, if there exists a nonnegative integer n such that Z_n(μ) = G. The smallest such integer is called
the class of μ.

Example 1. Let D_3 = ⟨a, b; a^3 = b^2 = e, ba = a^2 b⟩ be the dihedral group with six elements and t_0, t_1 ∈ [0, 1]
such that t_0 > t_1. Define a fuzzy subgroup μ of D_3 as follows:

μ(x) = t_0 if x ∈ ⟨a⟩, and μ(x) = t_1 if x ∉ ⟨a⟩.

Then (D_3 \ ⟨a⟩)(D_3 \ ⟨a⟩) = ⟨a⟩, ⟨a⟩(D_3 \ ⟨a⟩) = D_3 \ ⟨a⟩, (D_3 \ ⟨a⟩)⟨a⟩ = D_3 \ ⟨a⟩ and ⟨a⟩⟨a⟩ = ⟨a⟩.
Now, we show that Z_1(μ) = D_3. If x ∈ ⟨a⟩ and y ∉ ⟨a⟩, then xy ∉ ⟨a⟩. Thus, by the above relations, we
have [x, y] = x^{-1}y^{-1}xy = (yx)^{-1}(xy) ∈ ⟨a⟩, which implies that μ([x, y]) = t_0 = μ(e). Similarly, for the cases
x ∉ ⟨a⟩ and y ∈ ⟨a⟩, or x, y ∈ ⟨a⟩, or x, y ∉ ⟨a⟩, we have μ([x, y]) = μ(e). Hence for any x, y ∈ D_3, μ([x, y]) = μ(e),
and so by Theorem 4, Z(μ) = D_3. Now, since Z_1(μ) = Z(μ), we get that μ is a g-nilpotent fuzzy subgroup.
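Example 1 can also be checked mechanically. The sketch below (our own illustration) encodes D_3 as pairs (i, j) = a^i b^j with multiplication induced by ba = a^2 b, picks hypothetical values t_0 = 0.8 > t_1 = 0.3, and verifies that μ([x, y]) = μ(e) for all x, y, so Z_1(μ) is all of D_3.

```python
# Verifying Example 1: mu([x, y]) = mu(e) for all x, y in D3, hence Z_1(mu) = D3.
def mul(x, y):
    (i1, j1), (i2, j2) = x, y
    return ((i1 + (i2 if j1 == 0 else -i2)) % 3, (j1 + j2) % 2)
def inv(x):
    i, j = x
    return ((-i) % 3, 0) if j == 0 else x
def comm(x, y): return mul(mul(inv(x), inv(y)), mul(x, y))   # [x, y]

t0, t1 = 0.8, 0.3                                  # hypothetical, t0 > t1
mu = {(i, j): (t0 if j == 0 else t1) for i in range(3) for j in range(2)}
G, e = list(mu), (0, 0)
Z1 = [x for x in G if all(mu[comm(x, y)] == mu[e] for y in G)]
print(len(Z1) == len(G))                           # True: Z_1(mu) = D3
```

Every commutator [x, y] lands in ⟨a⟩ (its reflection part cancels), so μ([x, y]) is always t_0 = μ(e), exactly as the example argues.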

In the following we see that, for n ∈ N, each normal subgroup Z_n(μ) in the series defined by
Z_{n+1}(μ)/Z_n(μ) = Z(G/Z_n(μ)) is equal to {x ∈ G | μ([x, y_1, ..., y_n]) = μ(e), for any y_1, y_2, ..., y_n ∈ G}.

Lemma 1. Let μ be a fuzzy subgroup of G. Then for k ∈ N,

Z_k(μ) = {x ∈ G | μ([x, y_1, ..., y_k]) = μ(e), for any y_1, y_2, ..., y_k ∈ G}.


Proof. We prove it by induction on k. If k = 1, then by the definition of Z_1(μ) we have Z_1(μ) = {x ∈ G |
μ([x, y]) = μ(e) for any y ∈ G}. Now let k = n + 1, and assume the result is true for k ≤ n. Then

x ∈ Z_{n+1}(μ) ⇐⇒ xZ_n(μ) ∈ Z_{n+1}(μ)/Z_n(μ) = Z(G/Z_n(μ))
⇐⇒ [xZ_n(μ), y_1 Z_n(μ)] = Z_n(μ), for any y_1 ∈ G
⇐⇒ [x, y_1] Z_n(μ) = Z_n(μ), for any y_1 ∈ G
⇐⇒ [x, y_1] ∈ Z_n(μ), for any y_1 ∈ G
⇐⇒ μ([[x, y_1], y_2, ..., y_{n+1}]) = μ(e), for any y_1, ..., y_{n+1} ∈ G,

where the last equivalence uses the induction hypothesis applied to [x, y_1]. This completes the proof.

Theorem 6. Any g-nilpotent fuzzy subgroup of G is a nilpotent fuzzy subgroup.

Proof. Let the fuzzy subgroup μ of G be g-nilpotent. Since Z_1(μ) = Z(μ) = Z^1(μ), the claim holds for class n = 1.
Now let Z_{n+1}(μ) = G. Then by Lemma 1, {x | μ([x, y_1, ..., y_{n+1}]) = μ(e) for any y_1, y_2, ..., y_{n+1} ∈ G} = G.
We should prove that Z^{n+1}(μ) = G. Let x ∈ G. Then μ([x, y_1, ..., y_{n+1}]) = μ(e), for any y_1, ..., y_{n+1} ∈ G.
Therefore, by Theorem 4, [x, y_1, ..., y_n] ∈ Z(μ). Consequently, by Theorem 4, [x, y_1, ..., y_{n-1}] ∈ Z^2(μ).
Similarly, applying Theorem 4 repeatedly, we have x ∈ Z^{n+1}(μ), and so Z^{n+1}(μ) = G. Therefore μ is a
nilpotent fuzzy subgroup of G.

Theorem 7. Let μ be a fuzzy subgroup of G. Then μ is commutative if and only if μ is a g-nilpotent fuzzy subgroup
of class 1.

Proof. (⇒) Let μ be commutative. Then Z (μ) = G. Since Z1 (μ) = Z (μ), then Z1 (μ) = G which implies
that μ is g-nilpotent of class 1.
(⇐) If μ is g-nilpotent of class 1, then Z1 (μ) = G. Hence Z1 (μ) = Z (μ) = G. Therefore, μ
is commutative.

Notation. If μ is a fuzzy subgroup of G, then Z_{k-1}(G/Z(μ)) means the (k − 1)-th center of G/Z(μ) ([15]).

Next we see that a g-nilpotent fuzzy subgroup of G induces a g-nilpotent fuzzy subgroup of G/Z(μ).
For this, we need the following two lemmas.
For this, we need the following two Lemmas.

Lemma 2. Let μ be a fuzzy subgroup of G. Then for any k ∈ N, Z_k(μ)/Z(μ) = Z_{k-1}(G/Z(μ)).

Proof. First we recall that, for i ∈ N, x ∈ Z_i(G) if and only if [x, y_1, ..., y_i] = e for any y_1, y_2, ..., y_i ∈ G
(see [13]). Hence

xZ(μ) ∈ Z_{k-1}(G/Z(μ)) ⇐⇒ [xZ(μ), y_1 Z(μ), ..., y_{k-1} Z(μ)] = Z(μ), for any y_1, ..., y_{k-1} ∈ G
⇐⇒ [x, y_1, ..., y_{k-1}] Z(μ) = Z(μ), for any y_1, ..., y_{k-1} ∈ G
⇐⇒ [x, y_1, ..., y_{k-1}] ∈ Z(μ), for any y_1, ..., y_{k-1} ∈ G
⇐⇒ μ([x, y_1, ..., y_{k-1}, y_k]) = μ(e), for any y_1, ..., y_k ∈ G (by Theorem 4)
⇐⇒ x ∈ Z_k(μ) (by Lemma 1)
⇐⇒ xZ(μ) ∈ Z_k(μ)/Z(μ).

Therefore Z_k(μ)/Z(μ) = Z_{k-1}(G/Z(μ)).

Lemma 3. Let μ be a fuzzy subgroup of G, H = G/Z(μ), μ̄ be a fuzzy subgroup of H, and N = Z(μ̄). If H is
nilpotent, then H/N is nilpotent, too.

Proof. Let H be nilpotent of class n, that is, Z_n(H) = H. Since H/N is a homomorphic image of H, by
Theorem 5 we get that H/N is nilpotent, of class at most n.

Theorem 8. Let μ be a fuzzy subgroup of G and μ̄ be a fuzzy subgroup of G/Z(μ). If μ is a g-nilpotent fuzzy subgroup
of class n, then μ̄ is a g-nilpotent fuzzy subgroup of class m, where m ≤ n.

Proof. Let μ be a g-nilpotent fuzzy subgroup of class n. Then Z_n(μ) = G. Now we show that there
exists m ≤ n such that Z_m(μ̄) = G/Z(μ). By Lemma 2, Z_n(μ) = G ⇐⇒ G/Z(μ) = Z_n(μ)/Z(μ) = Z_{n-1}(G/Z(μ)), and
similarly (putting m instead of n and μ̄ instead of μ),

Z_m(μ̄) = G/Z(μ) ⇐⇒ Z_{m-1}((G/Z(μ))/Z(μ̄)) = (G/Z(μ))/Z(μ̄).

Consequently, it is enough to show that if Z_{n-1}(G/Z(μ)) = G/Z(μ), then

Z_{m-1}((G/Z(μ))/Z(μ̄)) = (G/Z(μ))/Z(μ̄) for some m ≤ n.

This follows by Lemma 3 (putting H = G/Z(μ) in Lemma 3).

We now consider homomorphic images and homomorphic pre-images of g-nilpotent fuzzy subgroups.

Theorem 9. Let H be a group, f : G −→ H be an epimorphism and μ be a fuzzy subgroup of G. If μ is a
g-nilpotent fuzzy subgroup, then f(μ) is a g-nilpotent fuzzy subgroup.

Proof. First, we show that f(Z_i(μ)) ⊆ Z_i(f(μ)) for any i ∈ N. Let i ∈ N. Then x ∈ f(Z_i(μ)) implies that
x = f(u) for some u ∈ Z_i(μ). Since f is an epimorphism, for any y_1, ..., y_i ∈ H we get y_j = f(v_j) for
some v_j ∈ G, where 1 ≤ j ≤ i. Therefore [x, y_1, ..., y_i] = [f(u), f(v_1), ..., f(v_i)] = f([u, v_1, ..., v_i]), which implies that

(f(μ))([x, y_1, ..., y_i]) = ∨_{f(z) = [x, y_1, ..., y_i]} μ(z) = ∨_{f(z) = f([u, v_1, ..., v_i])} μ(z).

Now, since u ∈ Z_i(μ), by Lemma 1 we get μ([u, v_1, ..., v_i]) = μ(e_G). Therefore,

(f(μ))([x, y_1, ..., y_i]) = μ(e_G) = (f(μ))(e_H).

Hence by Lemma 1, x ∈ Z_i(f(μ)). Consequently, f(Z_i(μ)) ⊆ Z_i(f(μ)). Hence, if μ is g-nilpotent,
then there exists a nonnegative integer n such that Z_n(μ) = G, which implies that f(Z_n(μ)) = f(G).
Therefore Z_n(f(μ)) = H, which implies that f(μ) is g-nilpotent.

Theorem 10. Let H be a group, f : G −→ H be an epimorphism and ν be a fuzzy subgroup of H. Then ν is a
g-nilpotent fuzzy subgroup if and only if f^{-1}(ν) is a g-nilpotent fuzzy subgroup.


Proof. First, we show that Z_i(f^{-1}(ν)) = f^{-1}(Z_i(ν)) for any i ∈ N. Let i ∈ N. Then by Lemma 1,

x ∈ Z_i(f^{-1}(ν)) ⇐⇒ (f^{-1}(ν))([x, x_1, ..., x_i]) = (f^{-1}(ν))(e), for any x_1, x_2, ..., x_i ∈ G
⇐⇒ ν([f(x), f(x_1), ..., f(x_i)]) = ν(e_H), for any x_1, x_2, ..., x_i ∈ G
⇐⇒ f(x) ∈ Z_i(ν)
⇐⇒ x ∈ f^{-1}(Z_i(ν)).

Hence ν is g-nilpotent if and only if there exists a nonnegative integer n such that Z_n(ν) = H, if and
only if f^{-1}(Z_n(ν)) = f^{-1}(H) = G, if and only if Z_n(f^{-1}(ν)) = G, if and only if f^{-1}(ν) is g-nilpotent.

Proposition 1. Let μ and ν be two fuzzy subgroups of G such that μ ⊆ ν and μ(e) = ν(e). Then Z(μ) ⊆ Z(ν).

Proof. Let x ∈ Z(μ). Then μ([x, y]) = μ(e) for any y ∈ G. Since

ν(e) = μ(e) = μ([x, y]) ≤ ν([x, y]) ≤ ν(e),

we get ν([x, y]) = ν(e), and so x ∈ Z(ν). Therefore Z(μ) ⊆ Z(ν).

Lemma 4. Let μ be a fuzzy subgroup of G and i > 1. Then [x, y] ∈ Z_{i-1}(μ) for any y ∈ G if and only if
x ∈ Z_i(μ).

Proof. (=⇒) Let [x, y] ∈ Z_{i-1}(μ) for any y ∈ G. Then by Lemma 1, μ([[x, y], y_1, ..., y_{i-1}]) = μ(e) for any
y, y_1, ..., y_{i-1} ∈ G. Hence, again by Lemma 1, x ∈ Z_i(μ).
(⇐=) The proof is similar.

In the following we see a relation between nilpotency of a group and its fuzzy subgroups.

Theorem 11. G is nilpotent if and only if every fuzzy subgroup μ of G is a g-nilpotent fuzzy subgroup.

Proof. (=⇒) Let G be nilpotent of class n and μ be a fuzzy subgroup of G. Since Z_n(G) = G, it is enough
to prove that Z_i(G) ⊆ Z_i(μ) for any nonnegative integer i. For i = 0 or 1, the proof is clear. Let Z_i(G) ⊆ Z_i(μ)
for i ≥ 1 and x ∈ Z_{i+1}(G). Then for any y ∈ G, [x, y] ∈ Z_i(G) ⊆ Z_i(μ), and so by Lemma 4,
x ∈ Z_{i+1}(μ). Hence Z_{i+1}(G) ⊆ Z_{i+1}(μ) for any i ≥ 0, and this implies that Z_n(μ) = G. Therefore, μ is
g-nilpotent.
(⇐=) Let every fuzzy subgroup of G be g-nilpotent. Suppose that the fuzzy set μ on G is defined as follows:

μ(x) = 1 if x ∈ Z_0(G); μ(x) = 1/(i + 1) if x ∈ Z_i(G) − Z_{i-1}(G); μ(x) = 0 otherwise.

We show that Z_i(μ) ⊆ Z_i(G) for any nonnegative integer i. For i = 0, the result is immediate. If
i = 1 and x ∈ Z_1(μ), then μ([x, y]) = μ(e) = 1 for any y ∈ G. By the definition of μ, [x, y] ∈ Z_0(G) = {e},
and so x ∈ Z_1(G). Now let Z_{i-1}(μ) ⊆ Z_{i-1}(G) for i ≥ 2. Then by Lemma 4, x ∈ Z_i(μ) implies that
[x, y] ∈ Z_{i-1}(μ) ⊆ Z_{i-1}(G) for any y ∈ G. Hence, for any y, y_1, ..., y_{i-1} ∈ G, [x, y, y_1, ..., y_{i-1}] = e, which
implies that x ∈ Z_i(G). Thus, by induction on i, Z_i(μ) ⊆ Z_i(G) for any nonnegative integer i. Since also
Z_i(G) ⊆ Z_i(μ) for any nonnegative integer i, we get Z_i(μ) = Z_i(G). Now, by the hypothesis, there exists
n ∈ N such that G = Z_n(μ) = Z_n(G). Hence, G is nilpotent.
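The fuzzy subgroup constructed in this proof can be built explicitly for a small nilpotent group. The sketch below (our own illustration) takes G = D_4, computes its upper central series, defines μ(x) = 1/(i + 1) on Z_i(G) − Z_{i-1}(G) (and μ = 1 on Z_0(G)), and then checks, via the characterization of Lemma 1, that Z_1(μ) = Z_1(D_4) and Z_2(μ) = Z_2(D_4) = D_4.

```python
# The fuzzy subgroup of the proof of Theorem 11, for G = D4.
from fractions import Fraction

def mul(x, y):
    (i1, j1), (i2, j2) = x, y
    return ((i1 + (i2 if j1 == 0 else -i2)) % 4, (j1 + j2) % 2)
def inv(x):
    i, j = x
    return ((-i) % 4, 0) if j == 0 else x
def comm(x, y): return mul(mul(inv(x), inv(y)), mul(x, y))

G, e = [(i, j) for i in range(4) for j in range(2)], (0, 0)
series = [{e}]                                    # upper central series of D4
while series[-1] != set(G):
    series.append({x for x in G if all(comm(x, g) in series[-1] for g in G)})

def mu(x):                                        # 1 on Z_0, 1/(i+1) on Z_i - Z_{i-1}
    i = next(k for k, Z in enumerate(series) if x in Z)
    return Fraction(1) if i == 0 else Fraction(1, i + 1)

Z1mu = {x for x in G if all(mu(comm(x, y)) == mu(e) for y in G)}
Z2mu = {x for x in G
        if all(mu(comm(comm(x, y1), y2)) == mu(e) for y1 in G for y2 in G)}
print(Z1mu == series[1], Z2mu == series[2])       # True True
```

Here μ([x, ...]) = μ(e) = 1 forces the iterated commutator to be the identity, which is exactly how the proof pins Z_i(μ) down to Z_i(G).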


Theorem 12. Let the fuzzy subgroups μ_1 and μ_2 of G be g-nilpotent. Then the fuzzy set μ_1 × μ_2 of
G × G is a g-nilpotent fuzzy subgroup, too.

Proof. Let μ = μ_1 × μ_2. It is clear that μ is a fuzzy subgroup of G × G, so we show that μ is g-nilpotent.
It is enough to show that Z_n(μ_1 × μ_2) = G × G for some n ∈ N. There exist n_1, n_2 ∈ N such that
Z_{n_1}(μ_1) = G and Z_{n_2}(μ_2) = G. Let n = max{n_1, n_2} and (x, y) ∈ G × G. Then for any
x_1, ..., x_n, y_1, ..., y_n ∈ G, μ_1([x, x_1, ..., x_n]) = μ_1(e) and μ_2([y, y_1, ..., y_n]) = μ_2(e). Then

(μ_1 × μ_2)([(x, y), (x_1, y_1), ..., (x_n, y_n)]) = min{μ_1([x, x_1, ..., x_n]), μ_2([y, y_1, ..., y_n])} = (μ_1 × μ_2)(e, e).

Therefore, Z_n(μ_1 × μ_2) = G × G.

Definition 8. Let μ be a normal fuzzy subgroup of G. For any x, y ∈ G, define a binary relation ∼ on G as follows:

x ∼ y ⇐⇒ μ(xy^{-1}) = μ(e).

Lemma 5. The binary relation ∼ in Definition 8 is a congruence relation.

Proof. Reflexivity and symmetry are clear, so we prove transitivity. Let x ∼ y
and y ∼ z, for x, y, z ∈ G. Then μ(xy^{-1}) = μ(yz^{-1}) = μ(e). Since μ is a fuzzy subgroup of G,
μ(xz^{-1}) ≥ min{μ(xy^{-1}), μ(yz^{-1})} = μ(e). Hence μ(xz^{-1}) = μ(e), and so x ∼ z. Therefore ∼ is an
equivalence relation. Now let x ∼ y and z ∈ G. Then μ((xz)(yz)^{-1}) = μ(xy^{-1}) = μ(e), and so xz ∼ yz.
Since μ is normal, we get μ((zx)(zy)^{-1}) = μ((zy)^{-1}(zx)) = μ(y^{-1}x) = μ(xy^{-1}) = μ(e), and so zx ∼ zy.
Therefore, ∼ is a congruence relation on G.

Notation. For the congruence relation in Definition 8, the equivalence class containing x ∈ G is denoted by xμ,
and G/μ = {xμ | x ∈ G}. It is easy to prove that G/μ, with the operation (xμ).(yμ) = xyμ
for any xμ, yμ ∈ G/μ, is a group, where eμ is the unit of G/μ and (xμ)^{-1} = x^{-1}μ for any xμ ∈ G/μ.

Theorem 13. Let μ be a normal fuzzy subgroup of G. Then μ is a g-nilpotent fuzzy subgroup if and only if G/μ is a
nilpotent group.

Proof. (=⇒) Let μ be a g-nilpotent fuzzy subgroup of G. First we show that, for any n ∈ N and
x, x_1, ..., x_n ∈ G, [x, x_1, ..., x_n]μ = [xμ, x_1μ, ..., x_nμ]. For n = 1, we have

[x, x_1]μ = (x^{-1}μ).(x_1^{-1}μ).(xμ).(x_1μ) = [xμ, x_1μ].

Now assume that it is true for n − 1. By the induction hypothesis, we have

[x, x_1, ..., x_n]μ = ([x, x_1, ..., x_{n-1}]^{-1}μ).(x_n^{-1}μ).([x, x_1, ..., x_{n-1}]μ).(x_nμ)
= ([xμ, x_1μ, ..., x_{n-1}μ]^{-1}).(x_n^{-1}μ).([xμ, x_1μ, ..., x_{n-1}μ]).(x_nμ)
= [xμ, x_1μ, ..., x_nμ].

Therefore, if μ is a g-nilpotent fuzzy subgroup, then there exists n ∈ N with Z_n(μ) = G, which implies by
Lemma 1 that

{x ∈ G | μ([x, x_1, ..., x_n]) = μ(e) for any x_1, x_2, x_3, ..., x_n ∈ G} = G.    (I)


Also, μ(x) = μ(e) if and only if x ∼ e, if and only if xμ = eμ.    (II)

Thus, by (I) and (II), we have

G/μ = {xμ | x ∈ G} = {xμ | μ([x, x_1, ..., x_n]) = μ(e), ∀ x_1, x_2, x_3, ..., x_n ∈ G}
= {xμ | [xμ, x_1μ, ..., x_nμ] = eμ, ∀ x_1, x_2, x_3, ..., x_n ∈ G} = Z_n(G/μ).

Consequently, G/μ is a nilpotent group of class at most n.
(⇐=) If G/μ is a nilpotent group of class n, then

G/μ = Z_n(G/μ) = {xμ | [xμ, x_1μ, ..., x_nμ] = eμ, ∀ x_1, x_2, x_3, ..., x_n ∈ G}.

Thus for x ∈ G we have xμ ∈ G/μ, and therefore [xμ, x_1μ, ..., x_nμ] = eμ for any x_1, x_2, x_3, ..., x_n ∈ G, which
implies by (II) that μ([x, x_1, ..., x_n]) = μ(e). Thus, by Lemma 1, x ∈ Z_n(μ). Hence G = Z_n(μ), and so μ is
g-nilpotent.

Theorem 14. Let μ be a fuzzy subgroup of G and let μ* = {x | μ(x) = μ(e)} be a normal subgroup of G. If G/μ* is a
nilpotent group, then μ is a g-nilpotent fuzzy subgroup.

Proof. Let G/μ* be a nilpotent group and π : G −→ G/μ* be the natural epimorphism. Since

z ∈ π^{-1}(π(x)) ⇐⇒ π(z) = π(x) ⇐⇒ π(z^{-1}x) = e ⇐⇒ z^{-1}x ∈ Ker π = μ*
⇐⇒ μ(z^{-1}x) = μ(e) ⇐⇒ μ(z) = μ(x),

for any x ∈ G we have

π^{-1}(π(μ))(x) = π(μ)(π(x)) = ∨_{z ∈ π^{-1}(π(x))} μ(z) = ∨_{μ(z) = μ(x)} μ(z) = μ(x),

and so π^{-1}(π(μ)) = μ. Now, since G/μ* is a nilpotent group and π(μ) is a fuzzy subgroup of G/μ*, by
Theorem 11, π(μ) is g-nilpotent, and by Theorem 10, π^{-1}(π(μ)) = μ is g-nilpotent.

Example 2. In Example 1, μ(e) = t_0, and so μ* = {x | μ(x) = μ(e)} = ⟨a⟩. Thus μ* is a normal subgroup
of D_3. Also, D_3/μ* ≅ Z_2. Since Z_2 is Abelian, it is nilpotent, and so by Theorem 14, μ is a g-nilpotent
fuzzy subgroup.

Theorem 15. Let μ and ν be two fuzzy subgroups of G such that μ ⊆ ν and μ(e) = ν(e). If μ is a g-nilpotent
fuzzy subgroup of class m , then ν is a g-nilpotent fuzzy subgroup of class n, where n ≤ m.

Proof. Let μ and ν be two fuzzy subgroups of G with μ ⊆ ν and μ(e) = ν(e). First, we show that
Z_i(μ) ⊆ Z_i(ν) for any i ∈ N. By Proposition 1, the case i = 1 is clear. Let Z_i(μ) ⊆ Z_i(ν) for i ≥ 1 and
x ∈ Z_{i+1}(μ). Then by Lemma 4, [x, y] ∈ Z_i(μ) ⊆ Z_i(ν) for any y ∈ G. Thus, by Lemma 4, x ∈ Z_{i+1}(ν).
Hence Z_{i+1}(μ) ⊆ Z_{i+1}(ν). Now let μ be g-nilpotent of class m. Then G = Z_m(μ) ⊆ Z_m(ν) ⊆ G. Thus
G = Z_m(ν), which implies that ν is g-nilpotent of class at most m.

Definition 9. [4] Let μ be a fuzzy set of a set S. Then the lower level subset of μ is

μ_t = {x ∈ S | μ(x) ≤ t}, where t ∈ [0, 1].

Now the fuzzification of μ_t is the fuzzy set A_{μ_t} defined by

A_{μ_t}(x) = μ(x) if x ∈ μ_t, and A_{μ_t}(x) = 0 otherwise.

Clearly, A_{μ_t} ⊆ μ and (A_{μ_t})_t = μ_t.
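Definition 9 is straightforward to express in code; the carrier set and membership values below are invented for illustration.

```python
# Definition 9 in code: the lower level subset mu_t = { x : mu(x) <= t } and its
# fuzzification A_{mu_t}, which keeps mu on mu_t and is 0 elsewhere.
def lower_level(mu, t):
    return {x for x in mu if mu[x] <= t}

def fuzzification(mu, t):
    level = lower_level(mu, t)
    return {x: (mu[x] if x in level else 0) for x in mu}

mu = {'a': 0.2, 'b': 0.5, 'c': 0.9}
A = fuzzification(mu, 0.5)
print(sorted(lower_level(mu, 0.5)), A)   # ['a', 'b'] {'a': 0.2, 'b': 0.5, 'c': 0}
```

Note that A_{μ_t}(x) ≤ μ(x) holds at every point, which is the containment A_{μ_t} ⊆ μ stated above.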

Corollary 1. Let μ be a nilpotent fuzzy subgroup of G. Then A_{μ_t} is nilpotent, too.

Proof. Let μ be a nilpotent fuzzy subgroup of G. Since A_{μ_t} ⊆ μ, by Theorem 15, A_{μ_t} is nilpotent.

In the following we see that our definition of the terms Z_k(μ) is equivalent to an important relation,
which will be used in the main Lemma 7.

Lemma 6. Let μ be a fuzzy subgroup of G. For k ≥ 2, Z_k(μ)/Z_{k-1}(μ) = Z(G/Z_{k-1}(μ)) if and only if [Z_k(μ), G] ⊆ Z_{k-1}(μ).

Proof. (=⇒) Let Z_k(μ)/Z_{k-1}(μ) = Z(G/Z_{k-1}(μ)) for k ≥ 2, and let w ∈ [Z_k(μ), G]. Then there exist x ∈ Z_k(μ) and
g ∈ G such that w = [x, g]. Since

x ∈ Z_k(μ) =⇒ xZ_{k-1}(μ) ∈ Z_k(μ)/Z_{k-1}(μ) = Z(G/Z_{k-1}(μ))
=⇒ [xZ_{k-1}(μ), gZ_{k-1}(μ)] = Z_{k-1}(μ), for any g ∈ G
=⇒ [x, g]Z_{k-1}(μ) = Z_{k-1}(μ), for any g ∈ G
=⇒ [x, g] ∈ Z_{k-1}(μ),

we get w ∈ Z_{k-1}(μ).
(⇐=) Let [Z_k(μ), G] ⊆ Z_{k-1}(μ) for k ≥ 2, and let xZ_{k-1}(μ) ∈ Z_k(μ)/Z_{k-1}(μ). Hence x ∈ Z_k(μ). Since [Z_k(μ), G] ⊆
Z_{k-1}(μ), for any g ∈ G we have [x, g] ∈ Z_{k-1}(μ), which implies that [xZ_{k-1}(μ), gZ_{k-1}(μ)] = Z_{k-1}(μ),
and so xZ_{k-1}(μ) ∈ Z(G/Z_{k-1}(μ)). Hence Z_k(μ)/Z_{k-1}(μ) ⊆ Z(G/Z_{k-1}(μ)). Now, let xZ_{k-1}(μ) ∈ Z(G/Z_{k-1}(μ)). Then
for any g ∈ G we have [xZ_{k-1}(μ), gZ_{k-1}(μ)] = Z_{k-1}(μ), which implies that [x, g]Z_{k-1}(μ) = Z_{k-1}(μ),
and so [x, g] ∈ Z_{k-1}(μ). Now by Lemma 1, μ([x, g, y_1, y_2, ..., y_{k-1}]) = μ(e) for any g, y_1, y_2, ..., y_{k-1} ∈ G.
Hence x ∈ Z_k(μ), and this implies that xZ_{k-1}(μ) ∈ Z_k(μ)/Z_{k-1}(μ). So Z_k(μ)/Z_{k-1}(μ) ⊇ Z(G/Z_{k-1}(μ)). Therefore, Z_k(μ)/Z_{k-1}(μ) =
Z(G/Z_{k-1}(μ)).

Lemma 7. Let μ be a g-nilpotent fuzzy subgroup of G of class n ≥ 2, and let N be a nontrivial normal subgroup of G
(i.e., 1 ≠ N ⊴ G). Then N ∩ Z(μ) ≠ 1.

Proof. Since μ is g-nilpotent, there exists n ≥ 2 such that Z_n(μ) = G. Thus

1 = Z_0(μ) ⊆ Z_1(μ) ⊆ ... ⊆ Z_n(μ) = G.

Suppose, to the contrary, that N ∩ Z(μ) = 1. Since N ∩ Z_n(μ) = N ∩ G = N ≠ 1, there is j ∈ N such that N ∩ Z_j(μ) ≠ 1. Let i be the smallest
index such that N ∩ Z_i(μ) ≠ 1 (so N ∩ Z_{i-1}(μ) = 1). We claim that [N ∩ Z_i(μ), G] ⊆ N. For this,
let w ∈ [N ∩ Z_i(μ), G]. Then there exist x ∈ N ∩ Z_i(μ) and g ∈ G such that w = [x, g] = x^{-1}g^{-1}xg.
Since N ⊴ G, w = x^{-1}x^g ∈ N. Thus [N ∩ Z_i(μ), G] ⊆ N. Also, since x ∈ N ∩ Z_i(μ), by Lemma 6,
[x, g] ∈ [Z_i(μ), G] ⊆ Z_{i-1}(μ). Thus [N ∩ Z_i(μ), G] ⊆ Z_{i-1}(μ). Hence [N ∩ Z_i(μ), G] ⊆ N ∩ Z_{i-1}(μ) = 1.
Therefore N ∩ Z_i(μ) ≤ Z(G) ≤ Z(μ), and so N ∩ Z_i(μ) ≤ N ∩ Z(μ) = 1. Hence N ∩ Z_i(μ) = 1, which is
a contradiction. Consequently, N ∩ Z(μ) ≠ 1.


The following theorem shows that for a g-nilpotent fuzzy subgroup μ each minimal normal subgroup
of G is contained in Z (μ).

Theorem 16. Let μ be a g-nilpotent fuzzy subgroup of G of class n ≥ 2. If N is a minimal normal subgroup of G,
then N ≤ Z(μ).

Proof. Since N and Z(μ) are normal subgroups of G, we get N ∩ Z(μ) ⊴ G. Now, since N is a minimal
normal subgroup of G, N ∩ Z(μ) ≤ N, and by Lemma 7, N ∩ Z(μ) ≠ 1, we get N ∩ Z(μ) = N. Therefore
N ≤ Z(μ).

Theorem 17. Let μ be a g-nilpotent fuzzy subgroup of G and let A be a maximal normal Abelian subgroup of G.
If μ(x) = μ(e) for any x ∈ A and μ(x) ≠ μ(e) for any x ∈ G − A, then

A = C_G(A) = {x ∈ G | [x, a] = e, for any a ∈ A}.

Proof. First, we prove that C_G(A) ⊴ G. For this, let x ∈ C_G(A) and g ∈ G. Then for all a ∈ A we have
[x^g, a] = [x, a^{g^{-1}}]^g. Since A ⊴ G, a^{g^{-1}} ∈ A; hence x ∈ C_G(A) implies that [x^g, a] = [x, a^{g^{-1}}]^g =
e^g = e, and so x^g ∈ C_G(A). Thus C_G(A) ⊴ G. Suppose A ≠ C_G(A). Then 1 ≠ C_G(A)/A ⊴ G/A. Let μ̄ be
the fuzzy subgroup of G/A induced by μ as in Theorem 2. Then by Lemma 7, (C_G(A)/A) ∩ Z(μ̄) ≠ 1. So there exists A ≠ gA ∈ (C_G(A)/A) ∩ Z(μ̄).
Hence g ∈ C_G(A) and μ̄([gA, xA]) = μ̄(eA) for any x ∈ G. Thus by Theorem 2, ∨_{a ∈ A} μ([g, x]a) = μ(e).
Now, if for some a ∈ A, μ([g, x]) = μ(a), then by the definition of μ, [g, x] ∈ A; and if for every a ∈ A,
μ([g, x]) ≠ μ(a), then by Theorem 2, ∨_{a ∈ A} μ([g, x]a) = ∨_{a ∈ A}(μ([g, x]) ∧ μ(a)) = μ([g, x]), so μ([g, x]) = μ(e) and again [g, x] ∈ A.
Now let B = ⟨A, g⟩. Then A ⊴ B ⊴ G (B ⊴ G since A ⊴ G, and for x ∈ G we have g^x = g[g, x] ∈ B).
Moreover, since g ∈ C_G(A), B is Abelian. Therefore, B is a normal Abelian subgroup of G properly containing A, which is
a contradiction. Thus A = C_G(A).
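The conclusion A = C_G(A) can be checked directly in the running D_3 example, where A = ⟨a⟩ is a maximal normal Abelian subgroup and the Example 1 fuzzy subgroup satisfies the hypotheses; the Python sketch below (our own) computes the centralizer by brute force.

```python
# Checking the conclusion of Theorem 17 in D3 with A = <a>: C_G(A) = A.
def mul(x, y):
    (i1, j1), (i2, j2) = x, y
    return ((i1 + (i2 if j1 == 0 else -i2)) % 3, (j1 + j2) % 2)

G = [(i, j) for i in range(3) for j in range(2)]
A = [(i, 0) for i in range(3)]                    # the maximal normal Abelian subgroup <a>
C = [x for x in G if all(mul(x, a) == mul(a, x) for a in A)]
print(sorted(C) == sorted(A))                     # True
```

No reflection commutes with a, so the centralizer collapses to ⟨a⟩ itself, as the theorem predicts.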

Now we show that, under some conditions, every g-nilpotent fuzzy subgroup is finite.

Corollary 2. Let μ be a g-nilpotent fuzzy subgroup of G and let A be a finite maximal normal Abelian subgroup of G.
If μ(x) = μ(e) for any x ∈ A and μ(x) ≠ μ(e) for any x ∈ G − A, then μ is finite, too.

Proof. Since A ⊴ G, for g ∈ G and x ∈ A we have x^g ∈ A. Now let

θ : G −→ Aut(A), g ↦ θ_g, where θ_g : A −→ A, x ↦ x^g.

We prove that θ is a homomorphism (with maps composed left to right). Let g_1, g_2 ∈ G. Then for x ∈ A,

(θ(g_1 g_2))(x) = x^{g_1 g_2} = (x^{g_1})^{g_2} = (θ(g_2))((θ(g_1))(x)).

Now Ker(θ) = {g ∈ G | θ(g) = I}, in which I is the identity map. Thus g ∈ Ker(θ) if and only if
x^g = x for any x ∈ A, i.e., g ∈ C_G(A). Therefore, Ker(θ) = C_G(A).
By Theorem 17, A = C_G(A). Thus G/Ker(θ) = G/A is embedded in Aut(A). Now, since A, and hence Aut(A), is
finite, we get that G is finite, which implies that μ is finite.


4. Conclusions
Using the notion of a g-nilpotent fuzzy subgroup, we can investigate the fuzzification of nilpotent
groups. Moreover, since this definition parallels the one in group theory, it is much easier than before to study
the properties of nilpotent fuzzy groups. Furthermore, if we accept the definition of a g-nilpotent fuzzy
subgroup, then one can verify, as we have done in Theorem 16, that for a g-nilpotent fuzzy subgroup μ,
each minimal normal subgroup of G is contained in the center of μ. We hope that these results inspire
other papers on nilpotent fuzzy subgroups.
Author Contributions: Elaheh Mohammadzadeh and Rajab Ali Borzooei conceived and designed the paper structure.
Then Elaheh Mohammadzadeh performed the first research and Rajab Ali Borzooei completed the research.
Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353, doi:10.1016/S0019-9958(65)90241-X.
2. Rosenfeld, A. Fuzzy groups. J. Math. Anal. Appl. 1971, 35, 512–517, doi:10.1016/0022-247X(71)90199-5.
3. Kurdachenko, L.A.; Subbotin, I.Y.; Grin, K.O. On some properties of normal and subnormal fuzzy subgroups.
Southeast Asian Bull. Math. 2014, 38, 401–421.
4. Biswas, R. Fuzzy subgroups and anti fuzzy subgroups. Fuzzy Sets Syst. 1990, 35, 121–124,
doi:10.1016/0165-0114(90)90025-2.
5. Bhattacharya, P.; Mukherjee, N.P. Fuzzy relations and fuzzy groups. Inf. Sci. 1985, 36, 267–282,
doi:10.1016/0020-0255(85)90057-X.
6. Dudek, W.A. Fuzzification of n-ary groupoids. Quasigroups Related Syst. 2000, 7, 45–66.
7. Kim, J.G. Commutative fuzzy sets and nilpotent fuzzy groups. Inf. Sci. 1995, 83, 161–174,
doi:10.1016/0020-0255(94)00082-M.
8. Guptaa, K.C.; Sarma, B.K. Nilpotent fuzzy groups. Fuzzy Sets Syst. 1999, 101, 167–176,
doi:10.1016/S0165-0114(97)00067-5.
9. Ray, S. Generated and cyclic fuzzy groups. Inf. Sci. 1993, 69, 185–200, doi:10.1016/0020-0255(93)90119-7.
10. Bucolo, M.; Fazzino, S.; la Rosa, M.; Fortuna, L. Small-world networks of fuzzy chaotic oscillators. Chaos Solitons
Fractals 2003, 17, 557–565, doi:10.1016/S0960-0779(02)00398-3.
11. Ameri, R.; Borzooei, R.A.; Mohammadzadeh, E. Engel fuzzy subgroups. Ital. J. Pure Appl. Math. 2015, 34 ,
251–262.
12. Borzooei, R.A.; Mohammadzadeh, E.; Fotea, V. On Engel Fuzzy Subpolygroups. New Math. Nat. Comput. 2017,
13, 195–206, doi:10.1142/S1793005717500089.
13. Robinson, D.J.S. A Course in the Theory of Groups; Springer-Verlag: New York, NY, USA, 1980; pp. 121–158;
ISBN 978-1-4612-643-9.
14. Mordeson, J.N.; Bhutani, K.R.; Rosenfeld, A. Fuzzy Group Theory; Studies in Fuzziness and Soft Computing;
Springer-Verlag: Berlin/Heidelberg, Germany, 2005; pp. 61–89; ISBN 978-3-540-25072-2.
15. Hungerford, T.W. Algebra; Springer-Verlag: New York, NY, USA, 1974; pp. 23–69; ISBN 978-1-4612-6103-2.

c 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution (CC
BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
Neutrosophic Triplet G-Module
Florentin Smarandache 1 , Mehmet Şahin 2 and Abdullah Kargın 2, *
1 Department of Mathematics, University of Mexico, 705-Ave., Gallup, NM 87301, USA; [email protected]
2 Department of Mathematics, Gaziantep University, Gaziantep 27310, Turkey; [email protected]
* Correspondence: [email protected]

Received: 19 March 2018; Accepted: 2 April 2018; Published: 5 April 2018

Abstract: In this study, the neutrosophic triplet G-module is introduced and the properties of
neutrosophic triplet G-module are studied. Furthermore, reducible, irreducible, and completely
reducible neutrosophic triplet G-modules are defined, and relationships of these structures with
each other are examined. Also, it is shown that the neutrosophic triplet G-module is different from
the G-module.

Keywords: neutrosophic triplet G-module; neutrosophic triplet group; neutrosophic triplet vector space

1. Introduction
Neutrosophy is a branch of philosophy, first introduced by Smarandache in 1980. Neutrosophy [1]
is based on neutrosophic logic, probability, and sets. Neutrosophic logic is a generalized form of many
logics, such as fuzzy logic, which was introduced by Zadeh [2], and intuitionistic fuzzy logic, which
was introduced by Atanassov [3]. Furthermore, Bucolo et al. [4] studied complex dynamics through
fuzzy chains; Chen [5] introduced MAGDM based on intuitionistic 2-tuple linguistic information,
and Chen [6] obtained some q-Rung Orthopair fuzzy aggregation operators and their MAGDM. A fuzzy
set has a membership function; an intuitionistic fuzzy set has a membership function and a
non-membership function. Thus, they do not describe indeterminacy states. However, the neutrosophic set
has a membership function, an indeterminacy function, and a non-membership function. Also,
many researchers have studied the concept of neutrosophic theory in [7–12]. Recently, Olgun et al. [13]
studied the neutrosophic module; Şahin et al. [14] introduced Neutrosophic soft lattices; Şahin et al. [15]
studied the soft normed ring; Şahin et al. [16] introduced the centroid single-valued neutrosophic
triangular number and its applications; Şahin et al. [17] introduced the centroid points of transformed
single-valued neutrosophic number and its applications; Ji et al. [18] studied multi-valued neutrosophic
environments and their applications. Also, Smarandache et al. [19] studied neutrosophic triplet (NT)
theory and [20,21] neutrosophic triplet groups. A NT has a form <x, neut(x), anti(x)>, in which neut(x)
is neutral of “x” and anti(x) is opposite of “x”. Furthermore, neut(x) is different from the classical
unitary element. Also, the neutrosophic triplet group is different from the classical group. Recently,
Smarandache et al. [22] studied the NT field and [23] the NT ring; Şahin et al. [24] introduced the
NT metric space, the NT vector space, and the NT normed space; Şahin et al. [25] introduced the NT
inner product.
The concept of the G-module [26] was introduced by Curtis. G-modules are algebraic structures
constructed on groups and vector spaces. The concept of group representation was introduced by
Frobenius in the last two decades of the 19th century. Representation theory is an important
algebraic tool that makes abstract group elements more concrete.
important results could be proved only for representations over algebraically closed fields. The module
theoretic approach is better suited to deal with deeper results in representation theory. Moreover,
the module theoretic approach adds more elegance to the theory. In particular, the G-module structure
has been extensively used for the study of representations of finite groups. Also, the representation
Mathematics 2018, 6, 53; doi:10.3390/math6040053 www.mdpi.com/journal/mathematics

theory of groups describes all the ways in which a group G may be embedded in a linear group GL(V).
The G-module also holds an important place in the representation theory of groups. Recently
some researchers have been dealing with the G-module. For example, Fernandez [27] studied
fuzzy G-modules. Sinho and Dewangan [28] studied isomorphism theory for fuzzy submodules
of G-modules. Şahin et al. [29] studied soft G-modules. Sharma and Chopra [30] studied the injectivity
of intuitionistic fuzzy G-modules.
In this paper, we study neutrosophic triplet G-Modules in order to obtain a new algebraic
constructed on neutrosophic triplet groups and neutrosophic triplet vector spaces. Also we define the
reducible neutrosophic triplet G-module, the irreducible neutrosophic triplet G-module, and the
completely reducible neutrosophic triplet G-module. In this study, in Section 2, we give some
preliminary results for neutrosophic triplet sets, neutrosophic triplet groups, the neutrosophic triplet
field, the neutrosophic triplet vector space, and G-modules. In Section 3, we define the neutrosophic
triplet G-module, and we introduce some properties of a neutrosophic triplet G-module. We show
that the neutrosophic triplet G-module is different from the G-module, and we show that if certain
conditions are met, every neutrosophic triplet vector space or neutrosophic triplet group can be a
neutrosophic triplet G-module at the same time. Also, we introduce the neutrosophic triplet G-module
homomorphism and the direct sum of neutrosophic triplet vector space. In Section 4, we define
the reducible neutrosophic triplet G-module, the irreducible neutrosophic triplet G-module, and the
completely reducible neutrosophic triplet G-module, and we give some properties and theorems for
them. Furthermore, we examine the relationships of these structures with each other, and we give
some properties and theorems. In Section 5, we give some conclusions.
2. Preliminaries

Definition 1. Let N be a set together with a binary operation *. Then, N is called a neutrosophic triplet set if for
any a ∈ N there exists a neutral of “a” called neut(a) that is different from the classical algebraic unitary element
and an opposite of “a” called anti(a) with neut(a) and anti(a) belonging to N, such that [21]:

a*neut(a) = neut(a)* a = a,
and
a*anti(a) = anti(a)* a = neut(a).
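To see the definition in action, neutrosophic triplets can be found by brute force. The following sketch is an illustration added here, not part of the paper; it searches the commutative semigroup (Z10, · mod 10), a standard example in the neutrosophic triplet literature, and excludes the classical unit 1 as a neutral element, per the definition:

```python
def nt_triplets(n):
    """Brute-force all (a, neut(a), anti(a)) triplets in (Z_n, * mod n),
    excluding the classical unit 1 as a neutral element."""
    triplets = []
    for a in range(n):
        for neut in range(n):
            # neut(a) must satisfy a*neut = a and differ from the classical unit
            if neut == 1 % n or (a * neut) % n != a:
                continue
            for anti in range(n):
                # anti(a) must satisfy a*anti = neut(a)
                if (a * anti) % n == neut:
                    triplets.append((a, neut, anti))
    return triplets

print(nt_triplets(10))
```

For example, (4, 6, 4) is found: 4·6 = 24 ≡ 4 (mod 10) and 4·4 = 16 ≡ 6 (mod 10), while 3 admits no triplet since its only neutral would be the classical unit 1.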

Definition 2. Let (N,*) be a neutrosophic triplet set. Then, N is called a neutrosophic triplet group if the
following conditions are satisfied [21].

(1) (N,*) is well-defined, i.e., for any a, b ∈ N, one has a*b ∈ N.
(2) (N,*) is associative, i.e., (a*b)*c = a*(b*c) for all a, b, c ∈ N.

Theorem 1. Let (N,*) be a commutative neutrosophic triplet group with respect to *, and let a, b ∈ N be both cancellable [21]. Then:

(i) neut(a)*neut(b) = neut(a*b);
(ii) anti(a)*anti(b) = anti(a*b).

Definition 3. Let (NTF,*, #) be a neutrosophic triplet set together with two binary operations * and #. Then
(NTF,*, #) is called neutrosophic triplet field if the following conditions hold [22].

1. (NTF,*) is a commutative neutrosophic triplet group with respect to *.
2. (NTF, #) is a neutrosophic triplet group with respect to #.
3. a#(b*c) = (a#b)*(a#c) and (b*c)#a = (b#a)*(c#a) for all a, b, c ∈ NTF.

Theorem 2. Let (N,*) be a neutrosophic triplet group with respect to *. For (left or right) cancellable a ∈ N, one
has the following [24]:


(i) neut(neut(a)) = neut(a);
(ii) anti(neut(a)) = neut(a);
(iii) anti(anti(a)) = a;
(iv) neut(anti(a)) = neut(a).

Definition 4. Let (NTF, ∗1 , #1 ) be a neutrosophic triplet field, and let (NTV, ∗2 , #2 ) be a neutrosophic triplet
set together with binary operations “ ∗2 ” and “#2 ”. Then (NTV, ∗2 , #2 ) is called a neutrosophic triplet vector
space if the following conditions hold. For all u, v ∈ NTV, and for all k ∈ NTF, such that u∗2 v ∈ NTV and u
#2 k ∈ NTV [24];

(1) (u∗2 v) ∗2 t = u∗2 (v∗2 t); u, v, t ∈ NTV;
(2) u∗2 v = v∗2 u; u, v ∈ NTV;
(3) (v∗2 u) #2 k = (v#2 k) ∗2 (u#2 k); k ∈ NTF and u, v ∈ NTV;
(4) (k∗1 t) #2 u = (k#2 u) ∗2 (t#2 u); k, t ∈ NTF and u ∈ NTV;
(5) (k#1 t) #2 u = k#1 (t#2 u); k, t ∈ NTF and u ∈ NTV;
(6) There exists any k ∈ NTF such that u#2 neut(k) = neut(k) #2 u = u; u ∈ NTV.

Definition 5. Let G be a finite group. A vector space M over a field K is called a G-module if for every g ∈ G
and m ∈ M there exists a product (called the action of G on M) m.g ∈ M satisfying the following axioms [26]:

(i) m.1G = m, ∀ m ∈ M (1G being the identity element in G);
(ii) m.(g.h) = (m.g).h, ∀ m ∈ M; g, h ∈ G;
(iii) (k1 m1 + k2 m2 ).g = k1 (m1 .g) + k2 (m2 .g); k1 , k2 ∈ K; m1 , m2 ∈ M, and g ∈ G.
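As a quick numerical sanity check (an illustration added here, not from the paper), the three G-module axioms can be verified for the hypothetical example G = ({1, −1}, ·) acting on M = R² over K = R by componentwise scalar multiplication:

```python
import itertools

G = [1, -1]  # the two-element group ({1, -1}, ·), with identity 1

def act(m, g):
    """Action of G on R^2: m.g = (g*m1, g*m2)."""
    return (g * m[0], g * m[1])

m1, m2 = (0.5, -2.0), (3.0, 1.25)
k1, k2 = 2.0, -3.0
for g, h in itertools.product(G, repeat=2):
    assert act(m1, 1) == m1                          # axiom (i):  m.1_G = m
    assert act(m1, g * h) == act(act(m1, g), h)      # axiom (ii): m.(g.h) = (m.g).h
    lin = tuple(k1 * a + k2 * b for a, b in zip(m1, m2))
    rhs = tuple(k1 * a + k2 * b for a, b in zip(act(m1, g), act(m2, g)))
    assert act(lin, g) == rhs                        # axiom (iii): linearity of the action
print("G-module axioms hold for this example")
```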

Definition 6. Let M be a G-module. A vector subspace N of M is a G-submodule if N is also a G-module under the same action of G [26].
Definition 7. Let M and M∗ be G-modules. A mapping φ: M → M∗ is a G-module homomorphism if [26]:

(i) φ(k1 .m1 + k2 .m2 ) = k1 .φ(m1 ) + k2 .φ(m2 );
(ii) φ(m.g) = φ(m).g; k1 , k2 ∈ K; m, m1 , m2 ∈ M; g ∈ G.

Further, if φ is 1-1, then φ is an isomorphism. The G-modules M and M∗ are said to be isomorphic
if there exists an isomorphism φ of M onto M∗. Then we write M ≅ M∗.
Definition 8. Let M be a nonzero G-module. Then, M is irreducible if the only G-submodules of M are M and
{0}. Otherwise, M is reducible [26].
Definition 9. Let M1 , M2 , M3 , . . . , Mn be vector spaces over a field K [31]. Then, the set {m1 + m2 + . . . +
mn ; mi ∈ Mi } becomes a vector space over K under the operations

(m1 + m2 + . . . + mn ) + (m1′ + m2′ + . . . + mn′ ) = (m1 + m1′ ) + (m2 + m2′ ) + . . . + (mn + mn′ ) and

α(m1 + m2 + . . . + mn ) = αm1 + αm2 + . . . + αmn ; α ∈ K, mi , mi′ ∈ Mi .

It is then called the direct sum of the vector spaces M1 , M2 , M3 , . . . , Mn and is denoted by ⊕_{i=1}^{n} Mi .
Remark 1. The direct sum M = ⊕_{i=1}^{n} Mi of vector spaces Mi has the following properties [31]:

(i) Each element m ∈ M has a unique expression as the sum of elements of Mi .
(ii) The vector subspaces M1 , M2 , M3 , . . . , Mn of M are independent.
(iii) For each 1 ≤ j ≤ n, Mj ∩ (M1 + M2 + . . . + Mj−1 + Mj+1 + . . . + Mn ) = {0}.

Definition 10. A nonzero G-module M is completely reducible if for every G-submodule N of M there exists a
G-submodule N* of M such that M = N ⊕ N ∗ [26].


Proposition 1. A G-submodule of a completely reducible G-module is completely reducible [26].

3. Neutrosophic Triplet G-Module

Definition 11. Let (G, *) be a neutrosophic triplet group, let (NTV, ∗1 , #1 ) be a neutrosophic triplet vector space on
a neutrosophic triplet field (NTF, ∗2 , #2 ), and let g*m ∈ NTV for g ∈ G, m ∈ NTV. If the following conditions are
satisfied, then (NTV, ∗1 , #1 ) is called a neutrosophic triplet G-module.

(a) There exists g ∈ G such that m*neut(g) = neut(g)*m = m for every m ∈ NTV;
(b) m*(g*h) = (m*g)*h, ∀ m ∈ NTV; g, h ∈ G;
(c) (k1 #1 m1 ∗1 k2 #1 m2 )*g = k1 #1 (m1 *g) ∗1 k2 #1 (m2 *g), ∀ k1 , k2 ∈ NTF; m1 , m2 ∈ NTV; g ∈ G.

Corollary 1. Neutrosophic triplet G-modules are generally different from classical G-modules, since a classical
G-module has a single unit element, whereas the neutral element neut(g) in a neutrosophic triplet G-module
is different from the classical one. Also, neutrosophic triplet G-modules are different from fuzzy G-modules,
intuitionistic fuzzy G-modules, and soft G-modules, since the neutrosophic triplet set is a generalized form of the
fuzzy set, the intuitionistic fuzzy set, and the soft set.
Example 1. Let X be a nonempty set and let P(X) be the set of all subsets of X. From Definition 4, (P(X), ∪, ∩) is a
neutrosophic triplet vector space on the neutrosophic triplet field (P(X), ∪, ∩), in which
the neutrosophic triplets with respect to ∪ are given by neut(A) = A and anti(A) = B, such that A, B ∈ P(X); B ⊆ A;
and the neutrosophic triplets with respect to ∩ are given by neut(A) = A and anti(A) = B, such that A, B ∈ P(X); A ⊆ B.
Furthermore, (P(X), ∪) is a neutrosophic triplet group with respect to ∪, in which
neut(A) = A and anti(A) = B such that A, B ∈ P(X); B ⊆ A. We show that (P(X), ∪, ∩) satisfies the conditions of a
neutrosophic triplet G-module. From Definition 11:

(a) For every B ∈ P(X), taking A = B, we have B ∪ neut(A) = neut(A) ∪ B = B.
(b) It is clear that A ∪ (B ∪ C) = (A ∪ B) ∪ C, ∀ A, B, C ∈ P(X).
(c) It is clear that

((A1 ∩ B1 ) ∪ (A2 ∩ B2 )) ∪ C = ((A1 ∩ B1 ) ∪ C) ∪ ((A2 ∩ B2 ) ∪ C), ∀ A1 , A2 , B1 , B2 , C ∈ P(X).

Thus, (P(X), ∪, ∩) is a neutrosophic triplet G-module.
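The triplet structure on (P(X), ∪) can be checked mechanically. The following sketch (added purely for illustration, taking X = {1, 2}) confirms that neut(A) = A under union and that the candidates for anti(A) are exactly the subsets B ⊆ A, since A ∪ B = A forces B ⊆ A:

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

X = {1, 2}
P = powerset(X)
for A in P:
    assert A | A == A                              # neut(A) = A under union
    antis = [B for B in P if (A | B) == A]         # A ∪ anti(A) = neut(A) = A
    assert set(antis) == {B for B in P if B <= A}  # anti(A) ranges over subsets of A
print("triplet structure on (P(X), ∪) verified")
```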
Corollary 2. If G = NTV and * = ∗1 , then each neutrosophic triplet vector space (NTV, ∗1 , #1 ) is a neutrosophic
triplet G-module at the same time. Thus, if G = NTV and * = ∗1 , then every neutrosophic triplet vector space or
neutrosophic triplet group can be a neutrosophic triplet G-module at the same time. This does not hold for the
classical G-module.
Proof of Corollary 2. If G = NTV and * = ∗1 :

(a) There exists a g ∈ NTV such that m*neut(g) = neut(g)*m = m, ∀ m ∈ NTV;
(b) It is clear that m*(g*h) = (m*g)*h, as (NTV,*) is a neutrosophic triplet group; ∀ m, g, h ∈ NTV;
(c) It is clear that (k1 #1 m1 ∗1 k2 #1 m2 )*g = k1 #1 (m1 *g) ∗1 k2 #1 (m2 *g), since (NTV, ∗1 , #1 ) is a neutrosophic
triplet vector space; ∀ g ∈ NTV; k1 , k2 ∈ NTF; m1 , m2 ∈ NTV.

Definition 12. Let (NTV,∗1 , #1 ) be a neutrosophic triplet G-module. A neutrosophic triplet subvector space
(N, ∗1 , #1 ) of (NTV,∗1 , #1 ) is a neutrosophic triplet G-submodule if (N,∗1 , #1 ) is also a neutrosophic triplet
G-module.
Example 2. From Example 1, for N ⊆ X, (P(N), ∪, ∩) is a neutrosophic triplet subvector space of (P(X), ∪, ∩).
Also, (P(N), ∪, ∩) is a neutrosophic triplet G-module. Thus, (P(N), ∪, ∩) is a neutrosophic triplet G-submodule
of (P(X), ∪, ∩).


Example 3. Let (NTV, ∗1 , #1 ) be a neutrosophic triplet G-module. Then N = {x ∈ NTV : neut(x) = x} is a
neutrosophic triplet subvector space of (NTV, ∗1 , #1 ), and it is also a neutrosophic triplet G-submodule of
(NTV, ∗1 , #1 ).
Definition 13. Let (NTV,∗1 , #1 ) and (NTV ∗ ,∗3 , #3 ) be neutrosophic triplet G-modules on neutrosophic triplet
field (NTF,∗2 , #2 ) and (G, *) be a neutrosophic triplet group. A mapping φ: NTV→ NTV ∗ is a neutrosophic
triplet G-module homomorphism if

(i) φ(neut(m)) = neut(φ(m));
(ii) φ(anti(m)) = anti(φ(m));
(iii) φ((k1 #1 m1 ) ∗1 (k2 #1 m2 )) = (k1 #3 φ(m1 )) ∗3 (k2 #3 φ(m2 ));
(iv) φ(m*g) = φ(m)*g; ∀ k1 , k2 ∈ NTF; m, m1 , m2 ∈ NTV; g ∈ G.

Further, if φ is 1-1, then φ is an isomorphism. The neutrosophic triplet G-modules (NTV, ∗1 , #1 )
and (NTV∗, ∗3 , #3 ) are said to be isomorphic if there exists an isomorphism φ: NTV → NTV∗. Then,
we write NTV ≅ NTV∗.
Example 4. From Example 1, (P(X), ∪, ∩) is a neutrosophic triplet vector space on the neutrosophic triplet field
(P(X), ∪, ∩). Furthermore, (P(X), ∪, ∩) is a neutrosophic triplet G-module. We define a mapping φ: P(X) →
P(X) by φ(A) = neut(A). Now, we show that φ is a neutrosophic triplet G-module homomorphism.

(i) φ(neut(A)) = neut(neut(A)) = neut(φ(A)).
(ii) φ(anti(A)) = neut(anti(A)); from Theorem 2, neut(anti(A)) = neut(A). Also, anti(φ(A)) = anti(neut(A));
from Theorem 2, anti(neut(A)) = neut(A). Then φ(anti(A)) = anti(φ(A)).
(iii) φ((A1 ∩ B1 ) ∪ (A2 ∩ B2 )) = neut((A1 ∩ B1 ) ∪ (A2 ∩ B2 )); from Theorem 1, as neut(a)*neut(b) = neut(a*b),
neut((A1 ∩ B1 ) ∪ (A2 ∩ B2 )) = neut(A1 ∩ B1 ) ∪ neut(A2 ∩ B2 ) = (neut(A1 ) ∩ neut(B1 )) ∪ (neut(A2 ) ∩ neut(B2 )).
From Example 1, as neut(A) = A, (neut(A1 ) ∩ neut(B1 )) ∪ (neut(A2 ) ∩ neut(B2 )) = (A1 ∩ neut(B1 )) ∪
(A2 ∩ neut(B2 )) = (A1 ∩ φ(B1 )) ∪ (A2 ∩ φ(B2 )).
(iv) φ(A*B) = neut(A*B); from Theorem 1, as neut(a)*neut(b) = neut(a*b), neut(A*B) = neut(A)*neut(B).
From Example 1, as neut(A) = A, neut(A)*neut(B) = A*neut(B) = A*φ(B).

4. Reducible, Irreducible, and Completely Reducible Neutrosophic Triplet G-Modules

Definition 14. Let (NTV, ∗1 , #1 ) be a neutrosophic triplet G-module on a neutrosophic triplet field (NTF, ∗2 , #2 ).
Then, (NTV, ∗1 , #1 ) is an irreducible neutrosophic triplet G-module if the only neutrosophic triplet G-submodules
of (NTV, ∗1 , #1 ) are (NTV, ∗1 , #1 ) and {x : neut(x) = x}, x ∈ NTV. Otherwise, (NTV, ∗1 , #1 ) is a reducible
neutrosophic triplet G-module.
Example 5. From Example 2, for N = {1,2} ⊆ {1,2,3} = X, (P(N), ∪, ∩) is a neutrosophic triplet subvector space
of (P(X), ∪, ∩). Also, (P(N), ∪, ∩) is a neutrosophic triplet G-module. Thus, (P(N), ∪, ∩) is a neutrosophic
triplet G-submodule of (P(X), ∪, ∩). Also, from Definition 14, (P(X), ∪, ∩) is a reducible neutrosophic triplet
G-module.


Example 6. Let X = G = {1, 2} and let P(X) be the power set of X. Then, (P(X), *, ∩) is a neutrosophic triplet vector
space on the (P(X), *, ∩) neutrosophic triplet field, and (G, *) is a neutrosophic triplet group, in which

        ⎧ B\A,     if s(A) < s(B) ∧ B ⊃ A ∧ A ≠ B
        ⎪ A\B,     if s(B) < s(A) ∧ A ⊃ B ∧ B ≠ A
A ∗ B = ⎨ (A\B)′,  if s(A) > s(B) ∧ A ⊃ B ∧ B ≠ A
        ⎪ (B\A)′,  if s(B) > s(A) ∧ B ⊃ A ∧ A ≠ B
        ⎪ X,       if s(A) = s(B) ∧ A ≠ B
        ⎩ ∅,       if A = B

Here, s(A) denotes the cardinality of A, and A′ denotes the complement of A.


The neutrosophic triplets with respect to * are:
neut(∅) = ∅, anti(∅) = ∅; neut({1}) = {1, 2}, anti({1}) = {2}; neut({2}) = {1, 2}, anti({2}) = {1}; neut({1, 2}) = ∅,
anti({1, 2}) = {1, 2}.
The neutrosophic triplets with respect to ∩ are:
neut(A) = A and anti(A) = B, in which B ⊃ A.
Also, (P(X), *, ∩) is a neutrosophic triplet G-module. Here, the only neutrosophic triplet G-submodules of (P(X), *,
∩) are (P(X), *, ∩) and {neut(∅) = ∅}. Thus, (P(X), *, ∩) is an irreducible neutrosophic triplet G-module.
Definition 15. Let (NTV1 , ∗1 , #1 ), (NTV2 , ∗1 , #1 ), . . . , (NTVn , ∗1 , #1 ) be neutrosophic triplet vector spaces on
(NTF, ∗2 , #2 ). Then, the set {m1 + m2 + . . . + mn ; mi ∈ NTVi } becomes a neutrosophic triplet vector space on
(NTF, ∗2 , #2 ), such that

(m1 ∗1 m2 ∗1 . . . ∗1 mn ) ∗1 (m1′ ∗1 m2′ ∗1 . . . ∗1 mn′ ) = (m1 ∗1 m1′ ) ∗1 (m2 ∗1 m2′ ) ∗1 . . . ∗1 (mn ∗1 mn′ ) and

α #1 (m1 ∗1 m2 ∗1 . . . ∗1 mn ) = (α #1 m1 ) ∗1 (α #1 m2 ) ∗1 . . . ∗1 (α #1 mn ); α ∈ NTF, mi , mi′ ∈ NTVi .

It is called the direct sum of the neutrosophic triplet vector spaces NTV1 , NTV2 , NTV3 , . . . , NTVn
and is denoted by ⊕_{i=1}^{n} NTVi .
Remark 2. The direct sum NTV = ⊕_{i=1}^{n} NTVi of neutrosophic triplet vector spaces NTVi has the following
properties.
(i) Each element m ∈ NTV has a unique expression as the sum of elements of NTVi .
(ii) For each 1 ≤ j ≤ n, NTVj ∩ (NTV1 + NTV2 + . . . + NTVj−1 + NTVj+1 + . . . + NTVn ) = {x : neut(x) = x}.

Definition 16. Let (NTV, ∗1 , #1 ) be a neutrosophic triplet G-module on a neutrosophic triplet field (NTF, ∗2 , #2 ),
such that NTV ≠ {x : neut(x) = x}. Then, (NTV, ∗1 , #1 ) is a completely reducible neutrosophic triplet G-module
if for every neutrosophic triplet G-submodule (N1 , ∗1 , #1 ) of (NTV, ∗1 , #1 ) there exists a neutrosophic triplet
G-submodule (N2 , ∗1 , #1 ) of (NTV, ∗1 , #1 ), such that NTV = N1 ⊕ N2 .
Example 7. From Example 5, for N = {1, 2}, (P(N), ∪, ∩) is a neutrosophic triplet vector space on (P(N), ∪, ∩)
and a neutrosophic triplet G-module. Also, the neutrosophic triplet G-submodules of (P(N), ∪, ∩) are (P(N), ∪,
∩), (P(M), ∪, ∩), (P(K), ∪, ∩), and (P(T), ∪, ∩). Here, M = {1}, K = {2}, and T = ∅, in which P(M)⊕P(K) =
P(N), P(K)⊕P(M) = P(N), P(N)⊕P(T) = P(N), and P(T)⊕P(N) = P(N). Thus, (P(N), ∪, ∩) is a completely
reducible neutrosophic triplet G-module.
Theorem 3. A neutrosophic triplet G-submodule of a completely reducible neutrosophic triplet G-module is a
completely reducible neutrosophic triplet G-module.
Proof of Theorem 3. Let (NTV, ∗1 , #1 ) be a completely reducible neutrosophic triplet G-module on a neutrosophic
triplet field (NTF, ∗2 , #2 ). Assume that (N, ∗1 , #1 ) is a neutrosophic triplet G-submodule of (NTV, ∗1 , #1 ) and
(M, ∗1 , #1 ) is a neutrosophic triplet G-submodule of (N, ∗1 , #1 ). Then, (M, ∗1 , #1 ) is a neutrosophic triplet
G-submodule of (NTV, ∗1 , #1 ). There exists a neutrosophic triplet G-submodule (T, ∗1 , #1 ) such that NTV =
M⊕T, since (NTV, ∗1 , #1 ) is a completely reducible neutrosophic triplet G-module. Then, we take N′ = T ∩ N.
From Remark 2,

N′ ∩ M ⊂ M ∩ T = {x : neut(x) = x} (1)

Then, we take y ∈ N. If y ∈ N, then y ∈ NTV and y = m ∗1 t, in which m ∈ M; t ∈ T. Therefore, we obtain t ∈ N.
Thus,

t ∈ N′ = T ∩ N and y = m ∗1 t ∈ N′ ⊕ M (2)

From (1) and (2), we obtain N = N′ ⊕ M. Thus, (N, ∗1 , #1 ) is a completely reducible neutrosophic triplet G-module.
Theorem 4. Let (NTV, ∗1 , #1 ) be a completely reducible neutrosophic triplet G-module on a neutrosophic triplet
field (NTF, ∗2 , #2 ). Then, there exists an irreducible neutrosophic triplet G-submodule of (NTV, ∗1 , #1 ).
Proof of Theorem 4. Let (NTV, ∗1 , #1 ) be a completely reducible neutrosophic triplet G-module and (N, ∗1 ,
#1 ) be a neutrosophic triplet G-submodule of (NTV, ∗1 , #1 ). We take y = neut(y) ∈ N, and we take the collection
of neutrosophic triplet G-submodules of (N, ∗1 , #1 ) that do not contain the element y. This collection is not empty,
because {x : x = neut(x)} is a neutrosophic triplet G-submodule of (N, ∗1 , #1 ). By Zorn's Lemma, the
collection has a maximal element (M, ∗1 , #1 ). From Theorem 3, (N, ∗1 , #1 ) is a completely reducible neutrosophic
triplet G-module, and there exists a neutrosophic triplet G-submodule (N1 , ∗1 , #1 ) such that N = M⊕N1 .
We show that (N1 , ∗1 , #1 ) is an irreducible neutrosophic triplet G-submodule. Assume that (N1 , ∗1 , #1 ) is a
reducible neutrosophic triplet G-submodule. Then, there exist neutrosophic triplet G-submodules (K1 , ∗1 , #1 )
and (K2 , ∗1 , #1 ) of (N1 , ∗1 , #1 ) such that y ∈ N1 and, from Theorem 3, N1 = K1 ⊕ K2 ; hence, as N =
M⊕N1 , N = M⊕K1 ⊕ K2 . From Remark 2, (M ∗1 K1 ) ∩ K2 = {x : neut(x) = x} or (M ∗1 K2 ) ∩ K1 = {x : neut(x) = x}.
Then, y ∉ (M ∗1 K1 ) ∩ K2 or y ∉ (M ∗1 K2 ) ∩ K1 . Hence, y ∉ (M ∗1 K1 ) or y ∉ (M ∗1 K2 ). This is a contradiction.
Thus, (N1 , ∗1 , #1 ) is an irreducible neutrosophic triplet G-submodule.
Theorem 5. Let (NTV, ∗1 , #1 ) be a completely reducible neutrosophic triplet G-module. Then, (NTV, ∗1 , #1 ) is
a direct sum of irreducible neutrosophic triplet G-submodules of (NTV, ∗1 , #1 ).
Proof of Theorem 5. From Theorem 3, the neutrosophic triplet G-submodules (Ni , ∗1 , #1 ) (i = 1, 2, . . . , n) of
(NTV, ∗1 , #1 ) are completely reducible neutrosophic triplet G-modules, such that NTV = Ni−k ⊕ Nk (k = 1, 2,
. . . , i − 1). From Theorem 4, there exist irreducible neutrosophic triplet G-submodules (Mi , ∗1 , #1 ) of (Ni , ∗1 ,
#1 ). Also, from Theorem 3, the (Mi , ∗1 , #1 ) are completely reducible neutrosophic triplet G-modules, such that Ni =
Ni−k ⊕ Nk (k = 1, 2, . . . , i − 1). If these steps are followed, we obtain (NTV, ∗1 , #1 ) as a direct sum of
irreducible neutrosophic triplet G-submodules of (NTV, ∗1 , #1 ).

5. Conclusions
In this paper, we studied the neutrosophic triplet G-module. Furthermore, we showed that the
neutrosophic triplet G-module is different from the classical G-module. Also, we introduced the
reducible neutrosophic triplet G-module, the irreducible neutrosophic triplet G-module, and the
completely reducible neutrosophic triplet G-module. The neutrosophic triplet G-module has new
properties compared to the classical G-module. By using the neutrosophic triplet G-module, a theory of
representation of neutrosophic triplet groups can be defined. Thus, the usage areas of the neutrosophic
triplet structures will be expanded.

Author Contributions: Florentin Smarandache defined and studied neutrosophic triplet G-module, Abdullah
Kargın defined and studied reducible neutrosophic triplet G-module, irreducible neutrosophic triplet G-module,
and the completely reducible neutrosophic triplet G-module. Mehmet Şahin provided the examples and organized
the paper.
Conflicts of Interest: The authors declare no conflict of interest.


References
1. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set and Logic; Research Press: Rehoboth, MA,
USA, 1998.
2. Zadeh, A.L. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
3. Atanassov, T.K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [CrossRef]
4. Bucolo, M.; Fortuna, L.; Rosa, M.L. Complex dynamics through fuzzy chains. IEEE Trans. Fuzzy Syst. 2004,
12, 289–295. [CrossRef]
5. Chen, S.M. Multi-attribute group decision making based on intuitionistic 2-Tuple linguistic information.
Inf. Sci. 2018, 430–431, 599–619.
6. Chen, S.M. Some q–Rung Orthopair fuzzy aggregation operators and their applications to multiple-Attribute
decision making. Int. J. Intell. Syst. 2018, 33, 259–280.
7. Sahin, M.; Deli, I.; Ulucay, V. Similarity measure of bipolar neutrosophic sets and their application to multiple
criteria decision making. Neural Comput. Appl. 2016. [CrossRef]
8. Liu, C.; Luo, Y. Power aggregation operators of simplified neutrosophic sets and their use in multi-attribute
group decision making. IEEE/CAA J. Autom. Sin. 2017. [CrossRef]
9. Sahin, R.; Liu, P. Some approaches to multi-criteria decision making based on exponential operations of
simplified neutrosophic numbers. J. Intell. Fuzzy Syst. 2017, 32, 2083–2099. [CrossRef]
10. Liu, P.; Li, H. Multi attribute decision-making method based on some normal neutrosophic bonferroni mean
operators. Neural Comput. Appl. 2017, 28, 179–194. [CrossRef]
11. Broumi, S.; Bakali, A.; Talea, M.; Smarandache, F. Decision-Making Method Based on the Interval Valued
Neutrosophic Graph. In Proceedings of the IEEE Future Technologie Conference, San Francisco, CA, USA,
6–7 December 2016; pp. 44–50.
12. Liu, P. The aggregation operators based on Archimedean t-conorm and t-norm for the single valued
neutrosophic numbers and their application to Decision Making. Int. J. Fuzzy Syst. 2016, 18, 849–863.
[CrossRef]
13. Olgun, N.; Bal, M. Neutrosophic modules. Neutrosophic Oper. Res. 2017, 2, 181–192.
14. Şahin, M.; Uluçay, V.; Olgun, N.; Kilicman, A. On neutrosophic soft lattices. Afr. Mat. 2017, 28, 379–388.
15. Şahin, M.; Uluçay, V.; Olgun, N. Soft normed rings. Springerplus 2016, 5, 1–6.
16. Şahin, M.; Ecemiş, O.; Uluçay, V.; Kargın, A. Some new generalized aggregation operators based on centroid
single-valued triangular neutrosophic numbers and their applications in multi-attribute decision making.
Asian J. Math. Comput. Res. 2017, 16, 63–84.
17. Şahin, M.; Olgun, N.; Uluçay, V.; Kargın, A.; Smarandache, F. A new similarity measure based on falsity
value between single valued neutrosophic sets based on the centroid points of transformed single valued
neutrosophic numbers with applications to pattern recognition. Neutrosophic Sets Syst. 2017, 15, 31–48.
[CrossRef]
18. Ji, P.; Zang, H.; Wang, J. A projection-based TODIM method under multi-valued neutrosophic environments
and its application in personnel selection. Neural Comput. Appl. 2018, 29, 221–234. [CrossRef]
19. Smarandache, F.; Ali, M. Neutrosophic triplet as extension of matter plasma, unmatter plasma and antimatter
plasma. In Proceedings of the APS Gaseous Electronics Conference, Bochum, Germany, 10–14 October 2016.
[CrossRef]
20. Smarandache, F.; Ali, M. The Neutrosophic Triplet Group and its Application to Physics; Universidad National de
Quilmes, Department of Science and Technology, Bernal: Buenos Aires, Argentina, 2014.
21. Smarandache, F.; Ali, M. Neutrosophic triplet group. Neural Comput. Appl. 2016, 29, 595–601. [CrossRef]
22. Smarandache, F.; Ali, M. Neutrosophic Triplet Field Used in Physical Applications, (Log Number:
NWS17-2017-000061). In Proceedings of the 18th Annual Meeting of the APS Northwest Section, Pacific
University, Forest Grove, OR, USA, 1–3 June 2017.
23. Smarandache, F.; Ali, M. Neutrosophic Triplet Ring and its Applications, (Log Number: NWS17-2017-000062).
In Proceedings of the 18th Annual Meeting of the APS Northwest Section, Pacific University, Forest Grove,
OR, USA, 1–3 June 2017.
24. Şahin, M.; Kargın, A. Neutrosophic triplet normed space. Open Phys. 2017, 15, 697–704. [CrossRef]
25. Şahin, M.; Kargın, A. Neutrosophic triplet inner product space. Neutrosophic Oper. Res. 2017, 2, 193–215.


26. Curtis, C.W. Representation Theory of Finite Groups and Associative Algebras; American Mathematical Society:
Providence, RI, USA, 1962.
27. Fernandez, S. A Study of Fuzzy G-Modules. Ph.D. Thesis, Mahatma Gandhi University, Kerala, India,
April 2004.
28. Sinho, A.K.; Dewangan, K. Isomorphism Theory for Fuzzy Submodules of G–modules. Int. J. Eng. 2013, 3,
852–854.
29. Şahin, M.; Olgun, N.; Kargın, A.; Uluçay, V. Soft G-Module. In Proceedings of the Eighth International
Conference on Soft Computing, Computing with Words and Perceptions in System Analysis, Decision and
Control (ICSCCW-2015), Antalya, Turkey, 3–4 September 2015.
30. Sharma, P.K.; Chopra, S. Injectivity of intuitionistic fuzzy G-modules. Ann. Fuzzy Math. Inform. 2016, 12,
805–823.
31. Hoffman, K.; Kunze, R. Linear Algebra, 2nd ed.; Pearson: New York, NY, USA, 1990.
Article
Some Types of Subsemigroups Characterized in
Terms of Inequalities of Generalized Bipolar
Fuzzy Subsemigroups
Pannawit Khamrot 1 and Manoj Siripitukdet 1,2, *
1 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand;
[email protected]
2 Research Center for Academic Excellence in Mathematics, Faculty of Science, Naresuan University,
Phitsanulok 65000, Thailand
* Correspondence: [email protected]
Received: 14 November 2017; Accepted: 23 November 2017; Published: 27 November 2017
Abstract: In this paper, we introduce a generalization of a bipolar fuzzy (BF) subsemigroup, namely,
an (α1 , α2 ; β 1 , β 2 )-BF subsemigroup. The notions of (α1 , α2 ; β 1 , β 2 )-BF quasi (generalized bi-, bi-) ideals
are discussed. Some inequalities of (α1 , α2 ; β 1 , β 2 )-BF quasi (generalized bi-, bi-) ideals are obtained.
Furthermore, regular semigroups are characterized in terms of generalized BF semigroups.
Keywords: generalized bipolar fuzzy (BF) semigroup; (α1 , α2 ; β 1 , β 2 )-bipolar fuzzy subsemigroup;
fuzzy quasi(generalized bi-, bi-) ideal; regular semigroup
1. Introduction
Bipolarity separates positive and negative information: positive information represents what is
regarded as possible, while negative information represents what is regarded as impossible [1].
Bipolar information can support the evaluation of decisions. Sometimes, decisions are influenced not
only by positive decision criteria but also by negative decision criteria, for example, in environmental
and social impact assessment; when evaluating alternatives, the negative effects should be weighed in
order to select the optimal choice. Therefore, bipolar information affects the effectiveness and efficiency
of decision making. It is used in decision-making problems, organization problems, economic problems,
evaluation, risk management, environmental and social impact assessment, and so forth. Thus, the
concept of bipolar fuzzy (BF) sets is highly relevant in mathematics.
In 1965, Zadeh [2] introduced fuzzy set theory, which can be applied to many areas, such as
mathematics, statistics, computing, electrical instruments, industry, business, engineering, social
applications, and so forth. In 2003, Bucolo et al. [3] proposed small-world networks of fuzzy
chaotic oscillators. The fuzzy set was used to establish a mathematical method for dealing with
imprecise and uncertain environments. In 1971, Rosenfeld [4] applied fuzzy sets to group structures.
The fuzzy set was then used in the theory of semigroups: in 1979, Kuroki [5] initiated fuzzy semigroups
based on the notion of fuzzy ideals in semigroups and introduced some properties of fuzzy ideals
and fuzzy bi-ideals of semigroups. The fundamental concepts of BF sets were initiated by Zhang [6]
in 1994. He innovated the BF set as BF logic, which has been widely applied to solve many real-world
problems. In 2000, Lee [7] studied the notion of bipolar-valued fuzzy sets. Kim et al. [8] studied the
notions of BF subsemigroups and BF left (right, bi-) ideals, and provided some necessary and sufficient
conditions for a BF subsemigroup and BF left (right, bi-) ideals of semigroups.
In this paper, generalizations of BF semigroups are introduced. Definitions and properties of
(α1 , α2 ; β 1 , β 2 )-BF quasi (generalized bi-, bi-) ideals are obtained. Some inequalities of (α1 , α2 ; β 1 , β 2 )-BF

Mathematics 2017, 5, 71; doi:10.3390/math5040071 www.mdpi.com/journal/mathematics

quasi (generalized bi-, bi-) ideals are obtained. Finally, we characterize a regular semigroup in terms of
generalized BF semigroups.

2. Preliminaries
In this section, we give definitions and examples that are used in this paper. By a subsemigroup
of a semigroup S we mean a non-empty subset A of S such that A2 ⊆ A, and by a left (right) ideal
of S we mean a non-empty subset A of S such that SA ⊆ A( AS ⊆ A). By a two-sided ideal or
simply an ideal, we mean a non-empty subset of a semigroup S that is both a left and a right ideal
of S. A non-empty subset A of S is called an interior ideal of S if SAS ⊆ A, and a quasi-ideal of S if
AS ∩ SA ⊆ A. A subsemigroup A of S is called a bi-ideal of S if ASA ⊆ A. A non-empty subset A is
called a generalized bi-ideal of S if ASA ⊆ A [9].
By the definition of a left (right) ideal of a semigroup S, it is easy to see that every left (right) ideal
of S is a quasi-ideal of S.

Definition 1. A semigroup S is called regular if for all a ∈ S there exists x ∈ S such that a = axa.

Theorem 1. For a semigroup S, the following conditions are equivalent.

(1) S is regular.
(2) R ∩ L = RL for every right ideal R and every left ideal L of S.
(3) ASA = A for every quasi-ideal A of S.
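As a side note (not part of the paper), the regularity condition of Definition 1 below can be checked by brute force on a finite semigroup given by a multiplication rule; the following Python sketch uses multiplication modulo n as an illustrative example.

```python
# Sketch: brute-force regularity check for a finite semigroup.
# S is regular iff for every a in S there exists x in S with a = a*x*a.
def is_regular(elems, mul):
    elems = list(elems)
    return all(any(mul(mul(a, x), a) == a for x in elems) for a in elems)

# (Z6, multiplication mod 6) is regular, while (Z4, multiplication mod 4) is not:
# for a = 2, 2*x*2 = 4x ≡ 0 (mod 4), which never equals 2.
print(is_regular(range(6), lambda y, z: (y * z) % 6))   # True
print(is_regular(range(4), lambda y, z: (y * z) % 4))   # False
```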

Definition 2. Let X be a set; a fuzzy set (or fuzzy subset) f on X is a mapping f : X → [0, 1], where [0, 1] is
the usual interval of real numbers.

The symbols f ∧ g and f ∨ g will denote the following fuzzy sets on S:

( f ∧ g)( x ) = f ( x ) ∧ g( x )

( f ∨ g)( x ) = f ( x ) ∨ g( x )
for all x ∈ S.
A product of two fuzzy sets f and g is denoted by f ◦ g and is defined as

( f ◦ g)( x ) = ⋁_{x=yz} { f ( y ) ∧ g( z )} if x = yz for some y, z ∈ S, and ( f ◦ g)( x ) = 0 otherwise,

where ⋁ denotes the supremum over all factorizations x = yz.
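To make the supremum concrete, here is a small Python sketch (not from the paper) computing f ◦ g on the finite semigroup (Z4, multiplication mod 4); the membership values are arbitrary illustrative choices.

```python
# Brute-force product (f ∘ g)(x) = max over factorizations x = yz of min(f(y), g(z)),
# on the semigroup (Z4, multiplication mod 4). Membership values are arbitrary.
S = [0, 1, 2, 3]
mul = lambda y, z: (y * z) % 4

f = {0: 0.2, 1: 0.9, 2: 0.5, 3: 0.7}
g = {0: 0.4, 1: 0.6, 2: 0.8, 3: 0.1}

def product(f, g):
    """(f ∘ g)(x) = sup{ f(y) ∧ g(z) : x = yz }, or 0 if x has no factorization."""
    out = {}
    for x in S:
        vals = [min(f[y], g[z]) for y in S for z in S if mul(y, z) == x]
        out[x] = max(vals) if vals else 0
    return out

fg = product(f, g)
print(fg)  # {0: 0.5, 1: 0.6, 2: 0.8, 3: 0.6}
```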

Definition 3. Let S be a non-empty set. A BF set f on S is an object having the following form:

f := {( x, f p ( x ), f n ( x )) : x ∈ S}

where f p : S → [0, 1] and f n : S → [−1, 0].

Remark 1. For the sake of simplicity we use the symbol f = (S; f p , f n ) for the BF set
f = {( x, f p ( x ), f n ( x )) : x ∈ S}.

Definition 4. Given a BF set f = (S; f p , f n ), α ∈ [0, 1] and β ∈ [−1, 0], the sets

P( f ; α) := { x ∈ S| f p ( x ) ≥ α}

and
N ( f ; β) := { x ∈ S| f n ( x ) ≤ β}

114
Mathematics 2017, 5, 71

are called the positive α-cut and negative β-cut of f , respectively. The set C ( f ; (α, β)) := P( f ; α) ∩ N ( f ; β)
is called the bipolar (α, β)-cut of f .
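The cuts of Definition 4 are easy to compute for a finite BF set; the following Python sketch (set S and the membership values are illustrative assumptions) shows the positive α-cut, negative β-cut, and bipolar (α, β)-cut.

```python
# Sketch: cuts of a BF set f = (S; f_p, f_n) on a finite set S (Definition 4).
S = ['a', 'b', 'c', 'd']
f_p = {'a': 0.9, 'b': 0.4, 'c': 0.7, 'd': 0.1}     # f_p : S -> [0, 1]
f_n = {'a': -0.8, 'b': -0.2, 'c': -0.6, 'd': 0.0}  # f_n : S -> [-1, 0]

def positive_cut(alpha):       # P(f; alpha) = {x in S : f_p(x) >= alpha}
    return {x for x in S if f_p[x] >= alpha}

def negative_cut(beta):        # N(f; beta) = {x in S : f_n(x) <= beta}
    return {x for x in S if f_n[x] <= beta}

def bipolar_cut(alpha, beta):  # C(f; (alpha, beta)) = P(f; alpha) ∩ N(f; beta)
    return positive_cut(alpha) & negative_cut(beta)

print(positive_cut(0.5))          # {'a', 'c'}
print(negative_cut(-0.5))         # {'a', 'c'}
print(bipolar_cut(0.5, -0.5))     # {'a', 'c'}
```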

We give the generalization of a BF subsemigroup, which is defined by Kim et al. (2011).

Definition 5. A BF set f = (S; f p , f n ) on S is called a (α1 , α2 ; β 1 , β 2 )-BF subsemigroup on S, where


α1 , α2 ∈ [0, 1], β 1 , β 2 ∈ [−1, 0] if it satisfies the following conditions:

(1) f p ( xy) ∨ α1 ≥ f p ( x ) ∧ f p (y) ∧ α2


(2) f n ( xy) ∧ β 2 ≤ f n ( x ) ∨ f n (y) ∨ β 1

for all x, y ∈ S.

We note that every BF subsemigroup is a (0, 1; −1, 0)-BF subsemigroup.


The following examples show that f = (S; f p , f n ) is a (α1 , α2 ; β 1 , β 2 )-BF subsemigroup on S but
f = (S; f p , f n ) is not a BF subsemigroup on S.

Example 1. The set S = {2, 3, 4, ...} is a semigroup under the usual multiplication. Let f = (S; f p , f n ) be
a BF set on S defined as follows:

f p ( x ) := 1/( x + 2) and f n ( x ) := −1/( x + 2)

for all x ∈ S.
Let x, y ∈ S. Then

f p ( xy) = 1/( xy + 2) < 1/( x + 2) = f p ( x )

and

f p ( xy) = 1/( xy + 2) < 1/( y + 2) = f p ( y )

Thus, f p ( xy) < f p ( x ) ∧ f p ( y ). Therefore f = (S; f p , f n ) is not a BF subsemigroup on S.
Let α2 ∈ [0, 1], β 1 ∈ [−1, 0], α1 = 1/4 and β 2 = −1/4. Thus for all x, y ∈ S,

f p ( xy) ∨ 1/4 ≥ 1/( x + 2) ∧ 1/( y + 2) ≥ f p ( x ) ∧ f p ( y ) ∧ α2

and

f n ( xy) ∧ (−1/4) ≤ (−1/( x + 2)) ∨ (−1/( y + 2)) ≤ f n ( x ) ∨ f n ( y ) ∨ β 1

Hence f = (S; f p , f n ) is a (1/4, α2 ; β 1 , −1/4)-BF subsemigroup on S.
We note that f = (S; f p , f n ) is a (α1 , α2 ; β 1 , β 2 )-BF subsemigroup on S for all α1 ≥ 1/4 and β 2 ≤ −1/4.
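The inequalities of Example 1 can be spot-checked numerically; the Python sketch below samples finitely many x, y and a few values of α2 and β 1 (this illustrates, but of course does not prove, the claim).

```python
# Numeric spot-check of Example 1: f_p(x) = 1/(x+2), f_n(x) = -1/(x+2) on S = {2, 3, ...},
# with alpha1 = 1/4 and beta2 = -1/4; alpha2 and beta1 are sampled.
f_p = lambda x: 1 / (x + 2)
f_n = lambda x: -1 / (x + 2)
a1, b2 = 0.25, -0.25

ok = True
for x in range(2, 30):
    for y in range(2, 30):
        for a2, b1 in [(0.0, -1.0), (0.5, -0.5), (1.0, 0.0)]:
            # Definition 5(1): f_p(xy) ∨ alpha1 >= f_p(x) ∧ f_p(y) ∧ alpha2
            c1 = max(f_p(x * y), a1) >= min(f_p(x), f_p(y), a2)
            # Definition 5(2): f_n(xy) ∧ beta2 <= f_n(x) ∨ f_n(y) ∨ beta1
            c2 = min(f_n(x * y), b2) <= max(f_n(x), f_n(y), b1)
            ok = ok and c1 and c2
print(ok)  # True

# The plain BF subsemigroup condition fails, e.g. at x = y = 2:
print(f_p(4) < min(f_p(2), f_p(2)))  # True
```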

Definition 6. A BF set f = (S; f p , f n ) on S is called a (α1 , α2 ; β 1 , β 2 )-BF left (right) ideal on S, where
α1 , α2 ∈ [0, 1], and β 1 , β 2 ∈ [−1, 0] if it satisfies the following conditions:

(1) f p ( xy) ∨ α1 ≥ f p (y) ∧ α2 ( f p ( xy) ∨ α1 ≥ f p ( x ) ∧ α2 )


(2) f n ( xy) ∧ β 2 ≤ f n (y) ∨ β 1 ( f n ( xy) ∧ β 2 ≤ f n ( x ) ∨ β 1 )

for all x, y ∈ S.

A BF set f = (S; f p , f n ) on S is called a (α1 , α2 ; β1 , β2 )-BF ideal on S (α1 , α2 ∈ [0, 1], β1 , β2 ∈ [−1, 0])
if it is both a (α1 , α2 ; β1 , β2 )-BF left ideal and a (α1 , α2 ; β1 , β2 )-BF right ideal on S.
By Definition 6, every (α1 , α2 ; β 1 , β 2 )-BF ideal on a semigroup S is a (α1 , α2 ; β 1 , β 2 )-BF
subsemigroup on S.
We note that a (0, 1; −1, 0)-BF left (right) ideal is a BF left (right) ideal.


Definition 7. A (α1 , α2 ; β 1 , β 2 )-BF subsemigroup f = (S; f p , f n ) on a semigroup S is called
a (α1 , α2 ; β 1 , β 2 )-BF bi-ideal on S, where α1 , α2 ∈ [0, 1], and β 1 , β 2 ∈ [−1, 0] if it satisfies the following
conditions:

(1) f p ( xay) ∨ α1 ≥ f p ( x ) ∧ f p ( y ) ∧ α2

(2) f n ( xay) ∧ β 2 ≤ f n ( x ) ∨ f n ( y ) ∨ β 1

for all x, a, y ∈ S.

We note that every (α1, α2; β1, β2 )-BF bi-ideal on a semigroup is a (α1, α2; β1, β2 )-BF subsemigroup on
the semigroup.

3. Generalized Bi-Ideal and Quasi-Ideal


In this section, we introduce a product of BF sets and characterize a regular semigroup by
generalized BF subsemigroups.
We let f = (S; f p , f n ) and g = (S; g p , gn ) be two BF sets on a semigroup S and let
α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0]. We define two fuzzy sets f p ^(α1 ,α2 ) and f n _(β 1 ,β 2 ) on S as follows:

f p ^(α1 ,α2 ) ( x ) = ( f p ( x ) ∧ α1 ) ∨ α2

f n _(β 1 ,β 2 ) ( x ) = ( f n ( x ) ∨ β 2 ) ∧ β 1

for all x ∈ S.
We define two operations ∧^(α1 ,α2 ) and ∨_(β 1 ,β 2 ) on S as follows:

( f p ∧^(α1 ,α2 ) g p )( x ) = (( f p ∧ g p )( x ) ∧ α1 ) ∨ α2

( f n ∨_(β 1 ,β 2 ) gn )( x ) = (( f n ∨ gn )( x ) ∨ β 2 ) ∧ β 1

for all x ∈ S, and we define products f p ∘^(α1 ,α2 ) g p and f n ∘_(β 1 ,β 2 ) gn as follows:
For all x ∈ S,

( f p ∘^(α1 ,α2 ) g p )( x ) = (( f p ◦ g p )( x ) ∧ α1 ) ∨ α2

( f n ∘_(β 1 ,β 2 ) gn )( x ) = (( f n ◦ gn )( x ) ∨ β 2 ) ∧ β 1

where

( f p ◦ g p )( x ) = ⋁_{x=yz} { f p ( y ) ∧ g p ( z )} if x = yz for some y, z ∈ S, and ( f p ◦ g p )( x ) = 0 otherwise;

( f n ◦ gn )( x ) = ⋀_{x=yz} { f n ( y ) ∨ gn ( z )} if x = yz for some y, z ∈ S, and ( f n ◦ gn )( x ) = 0 otherwise.

(Here ⋁ denotes the supremum and ⋀ the infimum over all factorizations x = yz.) We set

f ◦ g := (S; f p ∘^(α1 ,α2 ) g p , f n ∘_(β 1 ,β 2 ) gn )

Then it is a BF set.
We note that

(1) f p ^(1,0) ( x ) = f p ( x );
(2) f n _(0,−1) ( x ) = f n ( x );
(3) f = (S; f p , f n ) = (S; f p ^(1,0) , f n _(0,−1) );
(4) ( f ◦ g) p = f p ∘^(α1 ,α2 ) g p and ( f ◦ g)n = f n ∘_(β 1 ,β 2 ) gn .
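The translated values f p ^(α1 ,α2 ) and the product ∘^(α1 ,α2 ) can be sketched in code; the Python below (not from the paper; semigroup and membership values are illustrative assumptions) implements the positive side on (Z3, multiplication mod 3).

```python
# Sketch of f_p^(a1,a2)(x) = (f_p(x) ∧ a1) ∨ a2 and the translated product
# (f_p ∘^(a1,a2) g_p)(x) = ((f_p ∘ g_p)(x) ∧ a1) ∨ a2 on (Z3, multiplication mod 3).
S = [0, 1, 2]
mul = lambda y, z: (y * z) % 3

f_p = {0: 0.3, 1: 0.9, 2: 0.6}
g_p = {0: 0.5, 1: 0.2, 2: 0.8}

def translate(h, a1, a2):
    """Clamp h between a2 (floor) and a1 (cap): (h(x) ∧ a1) ∨ a2."""
    return {x: max(min(h[x], a1), a2) for x in S}

def product(f, g):
    """(f ∘ g)(x) = max{ f(y) ∧ g(z) : x = yz }, or 0 if no factorization."""
    out = {}
    for x in S:
        vals = [min(f[y], g[z]) for y in S for z in S if mul(y, z) == x]
        out[x] = max(vals) if vals else 0
    return out

def translated_product(f, g, a1, a2):
    return translate(product(f, g), a1, a2)

print(translate(f_p, 0.7, 0.4))                # {0: 0.4, 1: 0.7, 2: 0.6}
print(translated_product(f_p, g_p, 0.7, 0.4))  # {0: 0.5, 1: 0.6, 2: 0.7}
```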

Definition 8. A BF set f = (S; f p , f n ) on S is called a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S, where


α1 , α2 ∈ [0, 1], and β 1 , β 2 ∈ [−1, 0] if it satisfies the following conditions:
(1) f p ( xay) ∨ α1 ≥ f p ( x ) ∧ f p (y) ∧ α2
(2) f n ( xay) ∧ β 2 ≤ f n ( x ) ∨ f n (y) ∨ β 1
for all x, y, a ∈ S.

Definition 9. A BF set f = (S; f p , f n ) on S is called a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S, where


α1 , α2 ∈ [0, 1], and β 1 , β 2 ∈ [−1, 0] if it satisfies the following conditions:
(1) f p ( x ) ∨ α1 ≥ ( f p ◦S p )( x ) ∧ (S p ◦ f p )( x ) ∧ α2
(2) f n ( x ) ∧ β 2 ≤ ( f n ◦Sn )( x ) ∨ (Sn ◦ f n )( x ) ∨ β 1
for all x ∈ S.

In the following theorem, we give a relation between a bipolar (α, β)-cut of f and a
(α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S.

Theorem 2. Let f = (S; f p , f n ) be a BF set on a semigroup S with Im( f p ) ⊆ Δ+ ⊆ [0, 1] and
Im( f n ) ⊆ Δ− ⊆ [−1, 0]. Then C ( f ; (α, β)) (≠ ∅) is a generalized bi-ideal of S for all α ∈ Δ+ and β ∈ Δ− if
and only if f is a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S for all α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0].

Proof. Let α1 , α2 ∈ [0, 1], β 1 , β 2 ∈ [−1, 0]. Suppose on the contrary that f is not a (α1 , α2 ; β 1 , β 2 )-BF
generalized bi-ideal on S. Then there exist x, y, a ∈ S such that

f p ( xay) ∨ α1 < f p ( x ) ∧ f p ( y ) ∧ α2 or f n ( xay) ∧ β 2 > f n ( x ) ∨ f n ( y ) ∨ β 1 (1)

Let α′ = f p ( x ) ∧ f p ( y ) and β′ = f n ( x ) ∨ f n ( y ). Then x, y ∈ C ( f ; (α′ , β′ )). By assumption, we have
xay ∈ C ( f ; (α′ , β′ )). By Equation (1), f p ( xay) ≤ f p ( xay) ∨ α1 < f p ( x ) ∧ f p ( y ) ∧ α2 ≤ f p ( x ) ∧ f p ( y ) =
α′ or f n ( xay) ≥ f n ( xay) ∧ β 2 > f n ( x ) ∨ f n ( y ) ∨ β 1 ≥ f n ( x ) ∨ f n ( y ) = β′. Thus, xay ∉ C ( f ; (α′ , β′ )).
This is a contradiction. Therefore f is a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S.
Conversely, let α ∈ Δ+ and β ∈ Δ− , and suppose that C ( f ; (α, β)) ≠ ∅. Let a ∈ S and
x, y ∈ C ( f ; (α, β)). Then f p ( x ) ≥ α, f p ( y ) ≥ α, f n ( x ) ≤ β and f n ( y ) ≤ β. By assumption, f is a
( f p ( xay), α; β, f n ( xay))-BF generalized bi-ideal on S, and thus f p ( xay) ∨ f p ( xay) ≥ f p ( x ) ∧ f p ( y ) ∧ α
and f n ( xay) ∧ f n ( xay) ≤ f n ( x ) ∨ f n ( y ) ∨ β. Then f p ( xay) ≥ f p ( x ) ∧ f p ( y ) ∧ α ≥ α ∧ α = α and
f n ( xay) ≤ f n ( x ) ∨ f n ( y ) ∨ β ≤ β ∨ β = β. Hence, xay ∈ C ( f ; (α, β)). Therefore C ( f ; (α, β)) is
a generalized bi-ideal of S.

Corollary 1. Let f = (S; f p , f n ) be a BF set on a semigroup. Then the following statements hold:
(1) f is a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S for all α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0] if and only
if C ( f ; (α, β)) (≠ ∅) is a generalized bi-ideal of S for all α ∈ Im( f p ) and β ∈ Im( f n );
(2) f is a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S for all α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0] if and only
if C ( f ; (α, β)) (≠ ∅) is a generalized bi-ideal of S for all α ∈ [0, 1] and β ∈ [−1, 0].

Proof. (1) Set Δ+ = Im( f p ) and Δ− = Im( f n ), and apply Theorem 2.
(2) Set Δ+ = [0, 1] and Δ− = [−1, 0], and apply Theorem 2.

Lemma 1. Every (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on a regular semigroup S is a (α1 , α2 ; β 1 , β 2 )-BF
bi-ideal on S.


Proof. Let S be a regular semigroup and f = (S; f p , f n ) be a (α1 , α2 ; β 1 , β 2 )-BF generalized


bi-ideal on S. Let a, b ∈ S; then there exists x ∈ S such that b = bxb. Thus we have
f p ( ab) ∨ α1 = f p ( a(bxb)) ∨ α1 = f p ( a(bx )b) ∨ α1 ≥ f p ( a) ∧ f p (b) ∧ α2 and f n ( ab) ∧ β 2 = f n ( a(bxb)) ∧
β 2 = f n ( a(bx )b) ∧ β 2 ≤ f n ( a) ∨ f n (b) ∨ β 1 . This shows that f is a (α1 , α2 ; β 1 , β 2 )-BF subsemigroup on
S, and thus f is a (α1 , α2 ; β 1 , β 2 )-BF bi-ideal on S.

Let S be a semigroup and ∅ ≠ I ⊆ S. A positive characteristic function and a negative
characteristic function are respectively defined by

C I p : S → [0, 1], x → C I p ( x ) := 1 if x ∈ I, and 0 if x ∉ I

and

C I n : S → [−1, 0], x → C I n ( x ) := −1 if x ∈ I, and 0 if x ∉ I

Remark 2.

(1) For the sake of simplicity, we use the symbol C I = (S; C I p , C I n ) for the BF set. That is,
C I = (S; C I p , C I n ) = (S; (C I ) p , (C I )n ). We call this a bipolar characteristic function.
(2) If I = S, then C S = (S; C S p , C S n ). In this case, we denote S = (S, S p , Sn ).
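The bipolar characteristic function of Remark 2 is straightforward to realize in code; the Python sketch below (S and I are arbitrary illustrative choices) also checks that the bipolar (1, −1)-cut recovers I.

```python
# Sketch: bipolar characteristic function C_I = (S; C_I^p, C_I^n) of a subset I of S.
S = {'a', 'b', 'c', 'd'}
I = {'a', 'c'}

C_I_p = {x: 1 if x in I else 0 for x in S}    # C_I^p : S -> [0, 1]
C_I_n = {x: -1 if x in I else 0 for x in S}   # C_I^n : S -> [-1, 0]

# The bipolar (1, -1)-cut C(C_I; (1, -1)) recovers I itself:
cut = {x for x in S if C_I_p[x] >= 1 and C_I_n[x] <= -1}
print(cut == I)  # True
```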

In the following theorem, some necessary and sufficient conditions of (α1 , α2 ; β 1 , β 2 )-BF
generalized bi-ideals are obtained.

Theorem 3. Let f = (S; f p , f n ) be a BF set on a semigroup S. Then the following statements are equivalent:

(1) f is a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S.
(2) f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p ≤ f p ^(α2 ,α1 ) and f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n ≥ f n _(β 2 ,β 1 ).

Proof. (⇒) Let a be any element of S. In the case for which ( f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p )( a) = 0, it is clear
that ( f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p )( a) ≤ f p ^(α2 ,α1 ) ( a). Otherwise, there exist x, y, r, s ∈ S such that a = xy
and x = rs. Because f is a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S, we have
f p ( rsy) ∨ α1 ≥ f p ( r ) ∧ f p ( y ) ∧ α2 and f n ( rsy) ∧ β 2 ≤ f n ( r ) ∨ f n ( y ) ∨ β 1 . Consider

( f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p )( a) = (( f p ◦ S p ◦ f p )( a) ∧ α2 ) ∨ α1
= ( ⋁_{a=xy} {( f p ◦ S p )( x ) ∧ f p ( y )} ∧ α2 ) ∨ α1
= ( ⋁_{a=xy} { ⋁_{x=rs} { f p ( r ) ∧ S p ( s )} ∧ f p ( y )} ∧ α2 ) ∨ α1
= ( ⋁_{a=xy} { ⋁_{x=rs} { f p ( r ) ∧ 1} ∧ f p ( y )} ∧ α2 ) ∨ α1
= ( ⋁_{a=rsy} { f p ( r ) ∧ f p ( y ) ∧ α2 } ∧ α2 ) ∨ α1
≤ ( ⋁_{a=rsy} { f p ( rsy) ∨ α1 } ∧ α2 ) ∨ α1
≤ (( f p ( a ) ∨ α1 ) ∧ α2 ) ∨ α1
= ( f p ( a ) ∧ α2 ) ∨ α1
= f p ^(α2 ,α1 ) ( a)

Hence f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p ≤ f p ^(α2 ,α1 ) . Similarly, we can show that
f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n ≥ f n _(β 2 ,β 1 ) .
(⇐) Conversely, let x, y, z ∈ S and set a = xyz. Then we have

f p ( xyz) ∨ α1 ≥ ( f p ( a ) ∧ α2 ) ∨ α1
= f p ^(α2 ,α1 ) ( a)
≥ ( f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p )( a)
= (( f p ◦ S p ◦ f p )( a) ∧ α2 ) ∨ α1
= ( ⋁_{a=bc} {( f p ◦ S p )( b ) ∧ f p ( c )} ∧ α2 ) ∨ α1
≥ (( f p ◦ S p )( xy) ∧ f p ( z ) ∧ α2 ) ∨ α1
= ( ⋁_{xy=rs} { f p ( r ) ∧ S p ( s )} ∧ f p ( z ) ∧ α2 ) ∨ α1
≥ ( f p ( x ) ∧ S p ( y ) ∧ f p ( z ) ∧ α2 ) ∨ α1
≥ ( f p ( x ) ∧ f p ( z ) ∧ α2 ) ∨ α1
≥ f p ( x ) ∧ f p ( z ) ∧ α2

Similarly, we can show that f n ( xyz) ∧ β 2 ≤ f n ( x ) ∨ f n ( z ) ∨ β 1 for all x, y, z ∈ S. Therefore f is
a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S for all α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0].

Theorem 4. Let f = (S; f p , f n ) be a BF set on a semigroup S. Then the following statements are equivalent:

(1) f is a (α1 , α2 ; β 1 , β 2 )-BF bi-ideal on S.
(2) f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p ≤ f p ^(α2 ,α1 ) and f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n ≥ f n _(β 2 ,β 1 ).

Proof. The proof is similar to the proof of Theorem 3.

In the following theorem, we give a relation between a bipolar (α, β)-cut of f and a
(α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S.

Theorem 5. Let f = (S; f p , f n ) be a BF set on a semigroup S with Im( f p ) ⊆ Δ+ ⊆ [0, 1] and
Im( f n ) ⊆ Δ− ⊆ [−1, 0]. Then C ( f ; (α, β)) (≠ ∅) is a quasi-ideal of S for all α ∈ Δ+ and β ∈ Δ− if and
only if f is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S for all α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0].

Proof. (⇒) Let α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0]. Suppose on the contrary that f is not
a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S. Then there exists x ∈ S such that

f p ( x ) ∨ α1 < ( f p ◦ S p )( x ) ∧ (S p ◦ f p )( x ) ∧ α2

or

f n ( x ) ∧ β 2 > ( f n ◦ Sn )( x ) ∨ (Sn ◦ f n )( x ) ∨ β 1 (2)

Case 1: f p ( x ) ∨ α1 < ( f p ◦ S p )( x ) ∧ (S p ◦ f p )( x ) ∧ α2 . Let α′ = ( f p ◦ S p )( x ) ∧ (S p ◦ f p )( x ). Then
α′ ≤ ( f p ◦ S p )( x ) and α′ ≤ (S p ◦ f p )( x ). This implies that there exist a, b, c, d ∈ S such that x = ab = cd. Then

α′ ≤ ( f p ◦ S p )( x ) = ⋁_{x=yz} { f p ( y ) ∧ S p ( z )} ≤ f p ( a ) ∧ S p ( b ) = f p ( a )

α′ ≤ (S p ◦ f p )( x ) = ⋁_{x=yz} {S p ( y ) ∧ f p ( z )} ≤ S p ( c ) ∧ f p ( d ) = f p ( d )

Let β′ = f n ( a ) ∨ f n ( d ). Then f n ( a ) ≤ β′ and f n ( d ) ≤ β′.
Thus a, d ∈ C ( f ; (α′ , β′ )), and so x = ab ∈ C ( f ; (α′ , β′ ))S and x = cd ∈ SC ( f ; (α′ , β′ )). It follows
that x ∈ C ( f ; (α′ , β′ ))S ∩ SC ( f ; (α′ , β′ )). By hypothesis, x ∈ C ( f ; (α′ , β′ )).
Case 2: f n ( x ) ∧ β 2 > ( f n ◦ Sn )( x ) ∨ (Sn ◦ f n )( x ) ∨ β 1 . Let β′ = ( f n ◦ Sn )( x ) ∨ (Sn ◦ f n )( x ).
Then β′ ≥ ( f n ◦ Sn )( x ) and β′ ≥ (Sn ◦ f n )( x ). This implies that there exist a′ , b′ , c′ , d′ ∈ S such that
x = a′ b′ = c′ d′ . Then

β′ ≥ ( f n ◦ Sn )( x ) = ⋀_{x=yz} { f n ( y ) ∨ Sn ( z )} ≥ f n ( a′ ) ∨ Sn ( b′ ) ≥ f n ( a′ )

β′ ≥ (Sn ◦ f n )( x ) = ⋀_{x=yz} {Sn ( y ) ∨ f n ( z )} ≥ Sn ( c′ ) ∨ f n ( d′ ) ≥ f n ( d′ )

Let α′ = f p ( a′ ) ∧ f p ( d′ ). Then f p ( a′ ) ≥ α′ and f p ( d′ ) ≥ α′ . Thus a′ , d′ ∈ C ( f ; (α′ , β′ )), and so
x = a′ b′ ∈ C ( f ; (α′ , β′ ))S and x = c′ d′ ∈ SC ( f ; (α′ , β′ )). It follows that
x ∈ C ( f ; (α′ , β′ ))S ∩ SC ( f ; (α′ , β′ )). By hypothesis, x ∈ C ( f ; (α′ , β′ )).
Therefore, in either case, x ∈ C ( f ; (α′ , β′ )). By Equation (2),

f p ( x ) ≤ f p ( x ) ∨ α1 < ( f p ◦ S p )( x ) ∧ (S p ◦ f p )( x ) ∧ α2 ≤ ( f p ◦ S p )( x ) ∧ (S p ◦ f p )( x ) = α′

or

f n ( x ) ≥ f n ( x ) ∧ β 2 > ( f n ◦ Sn )( x ) ∨ (Sn ◦ f n )( x ) ∨ β 1 ≥ ( f n ◦ Sn )( x ) ∨ (Sn ◦ f n )( x ) = β′

and it follows that x ∉ C ( f ; (α′ , β′ )). This is a contradiction. Therefore f is a (α1 , α2 ; β 1 , β 2 )-BF
quasi-ideal on S.
(⇐) Conversely, let α ∈ Δ+ and β ∈ Δ− , and suppose that C ( f ; (α, β)) ≠ ∅. Let x ∈ S be such
that x ∈ C ( f ; (α, β))S ∩ SC ( f ; (α, β)). Then x ∈ C ( f ; (α, β))S and x ∈ SC ( f ; (α, β)). Thus there exist
y ∈ C ( f ; (α, β)), z ∈ S and y′ ∈ S, z′ ∈ C ( f ; (α, β)) such that x = yz and x = y′ z′ .
By assumption, f is a ( f p ( x ), α; β, f n ( x ))-BF quasi-ideal on S, and thus

f p ( x ) = f p ( x ) ∨ f p ( x )
≥ ( f p ◦ S p )( x ) ∧ (S p ◦ f p )( x ) ∧ α
= ⋁_{x=ab} { f p ( a ) ∧ S p ( b )} ∧ ⋁_{x=ab} {S p ( a ) ∧ f p ( b )} ∧ α
= ⋁_{x=ab} { f p ( a ) ∧ 1} ∧ ⋁_{x=ab} {1 ∧ f p ( b )} ∧ α
= ⋁_{x=ab} { f p ( a )} ∧ ⋁_{x=ab} { f p ( b )} ∧ α
≥ f p ( y ) ∧ f p ( z′ ) ∧ α

Because y, z′ ∈ C ( f ; (α, β)), we have f p ( y ) ≥ α and f p ( z′ ) ≥ α. Then f p ( x ) ≥ α. Similarly, we can
show that f n ( x ) ≤ β. Hence, x ∈ C ( f ; (α, β)). Therefore C ( f ; (α, β)) is a quasi-ideal of S.

Corollary 2. Let f = (S; f p , f n ) be a BF set on a semigroup S. Then

(1) f is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S for all α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0] if and only if
C ( f ; (α, β)) (≠ ∅) is a quasi-ideal of S for all α ∈ Im( f p ) and β ∈ Im( f n );
(2) f is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S for all α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0] if and only if
C ( f ; (α, β)) (≠ ∅) is a quasi-ideal of S for all α ∈ [0, 1] and β ∈ [−1, 0].


Proof. (1) Set Δ+ = Im( f p ) and Δ− = Im( f n ), and apply Theorem 5.
(2) Set Δ+ = [0, 1] and Δ− = [−1, 0], and apply Theorem 5.

In the following theorem, we discuss a quasi-ideal of a semigroup S in terms of the bipolar


characteristic function being a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S.

Theorem 6. Let S be a semigroup. Then a non-empty subset I is a quasi-ideal of S if and only
if the bipolar characteristic function C I = (S; C I p , C I n ) is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S for all
α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0].

Proof. (⇒) Let I be a quasi-ideal of S and x ∈ S. Let α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0].
Case 1: x ∈ I. Then

C I p ( x ) ∨ α1 = 1 ≥ (C I p ◦ S p )( x ) ∧ (S p ◦ C I p )( x ) ∧ α2

C I n ( x ) ∧ β 2 = −1 ≤ (C I n ◦ Sn )( x ) ∨ (Sn ◦ C I n )( x ) ∨ β 1

Case 2: x ∉ I. Since I is a quasi-ideal, IS ∩ SI ⊆ I, so x ∉ IS or x ∉ SI. If x ∉ SI, then
(S p ◦ C I p )( x ) = 0 and (Sn ◦ C I n )( x ) = 0; the case x ∉ IS is analogous. Thus

C I p ( x ) ∨ α1 ≥ 0 = (C I p ◦ S p )( x ) ∧ (S p ◦ C I p )( x ) ∧ α2

C I n ( x ) ∧ β 2 ≤ 0 = (C I n ◦ Sn )( x ) ∨ (Sn ◦ C I n )( x ) ∨ β 1

Therefore C I = (S; C I p , C I n ) is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S.
(⇐) Conversely, let C I be a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S for all α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0].
Let a ∈ IS ∩ SI. Then there exist b, c ∈ S and x, y ∈ I such that a = xb = cy.
Then (C I ) p ( x ) = 1 = (C I ) p ( y ) and (C I )n ( x ) = −1 = (C I )n ( y ). Hence x, y ∈ C (C I ; (1, −1)), and
so a = xb ∈ C (C I ; (1, −1))S and a = cy ∈ SC (C I ; (1, −1)). It follows that
a ∈ C (C I ; (1, −1))S ∩ SC (C I ; (1, −1)). By Corollary 2, C (C I ; (1, −1)) is a quasi-ideal.
Thus a ∈ C (C I ; (1, −1)), and so C I p ( a ) ≥ 1. This implies that a ∈ I. Therefore I is a quasi-ideal
of S.

Theorem 7. Let S be a semigroup. Then I is a generalized bi-ideal of S if and only if the bipolar characteristic
function C I = (S; C I p , C I n ) is a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S for all α1 , α2 ∈ [0, 1] and
β 1 , β 2 ∈ [−1, 0].

Proof. (⇒) Let I be a generalized bi-ideal of S and x, y, a ∈ S. Let α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0].
Case 1: x, y ∈ I. Then xay ∈ I; thus

C I p ( xay) ∨ α1 = 1 ≥ C I p ( x ) ∧ C I p ( y ) ∧ α2

and

C I n ( xay) ∧ β 2 = −1 ≤ C I n ( x ) ∨ C I n ( y ) ∨ β 1

Case 2: x ∉ I or y ∉ I. Then

C I p ( xay) ∨ α1 ≥ 0 = C I p ( x ) ∧ C I p ( y ) ∧ α2

and

C I n ( xay) ∧ β 2 ≤ 0 = C I n ( x ) ∨ C I n ( y ) ∨ β 1

Therefore C I = (S; C I p , C I n ) is a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S.
(⇐) Conversely, let C I be a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S for all
α1 , α2 ∈ [0, 1] and β 1 , β 2 ∈ [−1, 0]. Let a ∈ S and x, y ∈ I. Then (C I ) p ( x ) = 1 = (C I ) p ( y ) and
(C I )n ( x ) = −1 = (C I )n ( y ). Hence, x, y ∈ C (C I ; (1, −1)). By Corollary 1, C (C I ; (1, −1)) is
a generalized bi-ideal. Thus xay ∈ C (C I ; (1, −1)), and so C I p ( xay) ≥ 1. This implies that xay ∈ I.
Therefore I is a generalized bi-ideal of S.


Theorem 8. Every (α1 , α2 ; β 1 , β 2 )-BF left (right) ideal on a semigroup S is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal
on S.

Proof. Let f = (S; f p , f n ) be a (α1 , α2 ; β 1 , β 2 )-BF left ideal on S and x ∈ S. Then

(S p ◦ f p )( x ) ∧ α2 = ( ⋁_{x=yz} {S p ( y ) ∧ f p ( z )}) ∧ α2
= ⋁_{x=yz} { f p ( z ) ∧ α2 }
≤ ⋁_{x=yz} { f p ( yz) ∨ α1 }
= f p ( x ) ∨ α1

Thus f p ( x ) ∨ α1 ≥ (S p ◦ f p )( x ) ∧ α2 ≥ ( f p ◦ S p )( x ) ∧ (S p ◦ f p )( x ) ∧ α2 . Similarly, we can show
that f n ( x ) ∧ β 2 ≤ ( f n ◦ Sn )( x ) ∨ (Sn ◦ f n )( x ) ∨ β 1 .
Therefore f is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S.

Lemma 2. Every (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on a semigroup S is a (α1 , α2 ; β 1 , β 2 )-BF bi-ideal on S.

Proof. Let f = (S; f p , f n ) be a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal on S and x, y, z ∈ S. Then

f p ( xy) ∨ α1 ≥ ( f p ◦ S p )( xy) ∧ (S p ◦ f p )( xy) ∧ α2
= ⋁_{xy=ab} { f p ( a ) ∧ S p ( b )} ∧ ⋁_{xy=rs} {S p ( r ) ∧ f p ( s )} ∧ α2
≥ f p ( x ) ∧ S p ( y ) ∧ S p ( x ) ∧ f p ( y ) ∧ α2
= f p ( x ) ∧ 1 ∧ 1 ∧ f p ( y ) ∧ α2
= f p ( x ) ∧ f p ( y ) ∧ α2

Hence, f p ( xy) ∨ α1 ≥ f p ( x ) ∧ f p ( y ) ∧ α2 . Additionally,

f p ( xyz) ∨ α1 ≥ ( f p ◦ S p )( xyz) ∧ (S p ◦ f p )( xyz) ∧ α2
= ⋁_{xyz=ab} { f p ( a ) ∧ S p ( b )} ∧ ⋁_{xyz=rs} {S p ( r ) ∧ f p ( s )} ∧ α2
≥ f p ( x ) ∧ S p ( yz) ∧ S p ( xy) ∧ f p ( z ) ∧ α2
= f p ( x ) ∧ 1 ∧ 1 ∧ f p ( z ) ∧ α2
= f p ( x ) ∧ f p ( z ) ∧ α2

Hence, f p ( xyz) ∨ α1 ≥ f p ( x ) ∧ f p ( z ) ∧ α2 . Similarly, we can show that f n ( xy) ∧ β 2 ≤ f n ( x ) ∨
f n ( y ) ∨ β 1 and f n ( xyz) ∧ β 2 ≤ f n ( x ) ∨ f n ( z ) ∨ β 1 . Therefore f is a (α1 , α2 ; β 1 , β 2 )-BF bi-ideal on S.

Lemma 3. Let A and B be non-empty subsets of a semigroup S. Then the following conditions hold:
(1) (C A ) p ∧^(α2 ,α1 ) (C B ) p = (C A∩ B ) p ^(α2 ,α1 ) .
(2) (C A )n ∨_(β 2 ,β 1 ) (C B )n = (C A∩ B )n _(β 2 ,β 1 ) .
(3) (C A ) p ∘^(α2 ,α1 ) (C B ) p = (C AB ) p ^(α2 ,α1 ) .
(4) (C A )n ∘_(β 2 ,β 1 ) (C B )n = (C AB )n _(β 2 ,β 1 ) .


Lemma 4. If f = (S; f p , f n ) is a (α1 , α2 ; β 1 , β 2 )-BF right ideal and g = (S; g p , gn ) is a (α1 , α2 ; β 1 , β 2 )-BF
left ideal on a semigroup S, then f p ∘^(α2 ,α1 ) g p ≤ f p ∧^(α2 ,α1 ) g p and f n ∘_(β 2 ,β 1 ) gn ≥ f n ∨_(β 2 ,β 1 ) gn .

Theorem 9. For a semigroup S, the following are equivalent.

(1) S is regular.
(2) f p ∧^(α2 ,α1 ) g p = f p ∘^(α2 ,α1 ) g p and f n ∨_(β 2 ,β 1 ) gn = f n ∘_(β 2 ,β 1 ) gn for every (α1 , α2 ; β 1 , β 2 )-BF right ideal
f = (S; f p , f n ) and every (α1 , α2 ; β 1 , β 2 )-BF left ideal g = (S; g p , gn ) on S.

Next, we characterize a regular semigroup by generalizations of BF subsemigroups.

Theorem 10. For a semigroup S, the following are equivalent.

(1) S is regular.
(2) f p ∧^(α2 ,α1 ) h p ∧^(α2 ,α1 ) g p ≤ f p ∘^(α2 ,α1 ) h p ∘^(α2 ,α1 ) g p and f n ∨_(β 2 ,β 1 ) hn ∨_(β 2 ,β 1 ) gn ≥ f n ∘_(β 2 ,β 1 ) hn ∘_(β 2 ,β 1 ) gn
for every (α1 , α2 ; β 1 , β 2 )-BF right ideal f = (S; f p , f n ), every (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal
h = (S; h p , hn ) and every (α1 , α2 ; β 1 , β 2 )-BF left ideal g = (S; g p , gn ) on S.
(3) f p ∧^(α2 ,α1 ) h p ∧^(α2 ,α1 ) g p ≤ f p ∘^(α2 ,α1 ) h p ∘^(α2 ,α1 ) g p and f n ∨_(β 2 ,β 1 ) hn ∨_(β 2 ,β 1 ) gn ≥ f n ∘_(β 2 ,β 1 ) hn ∘_(β 2 ,β 1 ) gn
for every (α1 , α2 ; β 1 , β 2 )-BF right ideal f = (S; f p , f n ), every (α1 , α2 ; β 1 , β 2 )-BF bi-ideal h = (S; h p , hn )
and every (α1 , α2 ; β 1 , β 2 )-BF left ideal g = (S; g p , gn ) on S.
(4) f p ∧^(α2 ,α1 ) h p ∧^(α2 ,α1 ) g p ≤ f p ∘^(α2 ,α1 ) h p ∘^(α2 ,α1 ) g p and f n ∨_(β 2 ,β 1 ) hn ∨_(β 2 ,β 1 ) gn ≥ f n ∘_(β 2 ,β 1 ) hn ∘_(β 2 ,β 1 ) gn
for every (α1 , α2 ; β 1 , β 2 )-BF right ideal f = (S; f p , f n ), every (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal h = (S; h p , hn )
and every (α1 , α2 ; β 1 , β 2 )-BF left ideal g = (S; g p , gn ) on S.

Proof. (1 ⇒ 2). Let f , h and g be a (α1 , α2 ; β 1 , β 2 )-BF right ideal, a (α1 , α2 ; β 1 , β 2 )-BF generalized
bi-ideal and a (α1 , α2 ; β 1 , β 2 )-BF left ideal on S, respectively. Let a ∈ S. Because S is regular, there exists
x ∈ S such that a = axa. Thus

( f p ∘^(α2 ,α1 ) h p ∘^(α2 ,α1 ) g p )( a) = (( f p ◦ h p ◦ g p )( a) ∧ α2 ) ∨ α1
= ( ⋁_{a=yz} { f p ( y ) ∧ (h p ◦ g p )( z ) ∧ α2 }) ∨ α1
≥ ( f p ( ax ) ∧ (h p ◦ g p )( a ) ∧ α2 ) ∨ α1
≥ ( f p ( ax ) ∨ α1 ) ∧ ((h p ◦ g p )( a ) ∨ α1 ) ∧ (α2 ∨ α1 )
≥ (( f p ( a ) ∧ α2 ) ∨ α1 ) ∧ ( ⋁_{a=rs} {(h p ( r ) ∧ g p ( s )) ∨ α1 }) ∧ (α2 ∨ α1 )
≥ (( f p ( a ) ∧ α2 ) ∨ α1 ) ∧ (h p ( a ) ∨ α1 ) ∧ ( g p ( xa ) ∨ α1 ) ∧ (α2 ∨ α1 )
≥ (( f p ( a ) ∧ α2 ) ∨ α1 ) ∧ (h p ( a ) ∨ α1 ) ∧ (( g p ( a ) ∧ α2 ) ∨ α1 ) ∧ (α2 ∨ α1 )
≥ ( f p ( a ) ∧ h p ( a ) ∧ g p ( a ) ∧ α2 ) ∨ α1
= (( f p ∧ h p ∧ g p )( a ) ∧ α2 ) ∨ α1
= ( f p ∧^(α2 ,α1 ) h p ∧^(α2 ,α1 ) g p )( a)

Similarly, we can show that f n ∨_(β 2 ,β 1 ) hn ∨_(β 2 ,β 1 ) gn ≥ f n ∘_(β 2 ,β 1 ) hn ∘_(β 2 ,β 1 ) gn .
(2 ⇒ 3 ⇒ 4). This is straightforward, because every (α1 , α2 ; β 1 , β 2 )-BF bi-ideal is a
(α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal and every (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal is a (α1 , α2 ; β 1 , β 2 )-BF
bi-ideal on S.


(4 ⇒ 1). Let f and g be any (α1 , α2 ; β 1 , β 2 )-BF right ideal and (α1 , α2 ; β 1 , β 2 )-BF left ideal on S,
respectively. Let a ∈ S. By Theorem 8, S = (S, S p , Sn ) is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal, and we have

( f p ∧^(α2 ,α1 ) g p )( a) = (( f p ∧ g p )( a) ∧ α2 ) ∨ α1
= (( f p ∧ S p ∧ g p )( a) ∧ α2 ) ∨ α1
= ( f p ∧^(α2 ,α1 ) S p ∧^(α2 ,α1 ) g p )( a)
≤ ( f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) g p )( a)
≤ ( f p ∘^(α2 ,α1 ) g p )( a)

Thus f p ∧^(α2 ,α1 ) g p ≤ f p ∘^(α2 ,α1 ) g p for every (α1 , α2 ; β 1 , β 2 )-BF right ideal f and every
(α1 , α2 ; β 1 , β 2 )-BF left ideal g on S. Similarly, we can show that f n ∘_(β 2 ,β 1 ) gn ≤ f n ∨_(β 2 ,β 1 ) gn . By Lemma 4,
f p ∘^(α2 ,α1 ) g p ≤ f p ∧^(α2 ,α1 ) g p and f n ∘_(β 2 ,β 1 ) gn ≥ f n ∨_(β 2 ,β 1 ) gn . Thus f p ∘^(α2 ,α1 ) g p = f p ∧^(α2 ,α1 ) g p
and f n ∘_(β 2 ,β 1 ) gn = f n ∨_(β 2 ,β 1 ) gn . Therefore by Theorem 9, S is regular.

Theorem 11. For a semigroup S, the following are equivalent.

(1) S is regular.
(2) f p ^(α2 ,α1 ) = f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p and f n _(β 2 ,β 1 ) = f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n for every
(α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal f = (S; f p , f n ) on S.
(3) f p ^(α2 ,α1 ) = f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p and f n _(β 2 ,β 1 ) = f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n for every
(α1 , α2 ; β 1 , β 2 )-BF bi-ideal f = (S; f p , f n ) on S.
(4) f p ^(α2 ,α1 ) = f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p and f n _(β 2 ,β 1 ) = f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n for every
(α1 , α2 ; β 1 , β 2 )-BF quasi-ideal f = (S; f p , f n ) on S.

Proof. (1 ⇒ 2). Let f be a (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal on S and a ∈ S. Because S is regular,
there exists x ∈ S such that a = axa. Hence we have

( f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p )( a) = (( f p ◦ S p ◦ f p )( a) ∧ α2 ) ∨ α1
= ( ⋁_{a=yz} {( f p ◦ S p )( y ) ∧ f p ( z )} ∧ α2 ) ∨ α1
≥ ((( f p ◦ S p )( ax ) ∧ f p ( a )) ∧ α2 ) ∨ α1
= (( ⋁_{ax=rs} { f p ( r ) ∧ S p ( s )}) ∧ f p ( a ) ∧ α2 ) ∨ α1
≥ ((( f p ( a ) ∧ S p ( x )) ∧ f p ( a )) ∧ α2 ) ∨ α1
= (( f p ( a ) ∧ 1 ∧ f p ( a )) ∧ α2 ) ∨ α1
= ( f p ( a ) ∧ α2 ) ∨ α1
= f p ^(α2 ,α1 ) ( a)

Thus f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p ≥ f p ^(α2 ,α1 ) . Similarly, we can show that
f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n ≤ f n _(β 2 ,β 1 ) . By Theorem 3, f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p ≤ f p ^(α2 ,α1 ) and
f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n ≥ f n _(β 2 ,β 1 ) . Therefore, f p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) f p = f p ^(α2 ,α1 ) and
f n ∘_(β 2 ,β 1 ) Sn ∘_(β 2 ,β 1 ) f n = f n _(β 2 ,β 1 ) .
(2 ⇒ 3 ⇒ 4). Obvious.
(4 ⇒ 1). Let Q be any quasi-ideal of S. By Theorem 6 and Lemma 3, we have

(C Q ) p ^(α2 ,α1 ) = (C Q ) p ∘^(α2 ,α1 ) S p ∘^(α2 ,α1 ) (C Q ) p = (C QSQ ) p ^(α2 ,α1 )

Thus, Q = QSQ. Therefore it follows from Theorem 1 that S is regular.

Theorem 12. For a semigroup S, the following are equivalent.

(1) S is regular.
(2) f p ∧^(α2 ,α1 ) g p ≤ f p ∘^(α2 ,α1 ) g p and f n ∨_(β 2 ,β 1 ) gn ≥ f n ∘_(β 2 ,β 1 ) gn for every (α1 , α2 ; β 1 , β 2 )-BF generalized
bi-ideal f = (S; f p , f n ) and every (α1 , α2 ; β 1 , β 2 )-BF left ideal g = (S; g p , gn ) on S.
(3) f p ∧^(α2 ,α1 ) g p ≤ f p ∘^(α2 ,α1 ) g p and f n ∨_(β 2 ,β 1 ) gn ≥ f n ∘_(β 2 ,β 1 ) gn for every (α1 , α2 ; β 1 , β 2 )-BF bi-ideal
f = (S; f p , f n ) and every (α1 , α2 ; β 1 , β 2 )-BF left ideal g = (S; g p , gn ) on S.
(4) f p ∧^(α2 ,α1 ) g p ≤ f p ∘^(α2 ,α1 ) g p and f n ∨_(β 2 ,β 1 ) gn ≥ f n ∘_(β 2 ,β 1 ) gn for every (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal
f = (S; f p , f n ) and every (α1 , α2 ; β 1 , β 2 )-BF left ideal g = (S; g p , gn ) on S.

Proof. (1 ⇒ 2). Let f and g be any (α1 , α2 ; β 1 , β 2 )-BF generalized bi-ideal and any (α1 , α2 ; β 1 , β 2 )-BF
left ideal on S, respectively. Let a ∈ S. Because S is regular, there exists x ∈ S such that a = axa.
Thus we have

( f p ∘^(α2 ,α1 ) g p )( a) = (( f p ◦ g p )( a) ∧ α2 ) ∨ α1
= ( ⋁_{a=yz} { f p ( y ) ∧ g p ( z )} ∧ α2 ) ∨ α1
≥ (( f p ( a ) ∧ g p ( xa )) ∧ α2 ) ∨ α1
≥ ( f p ( a ) ∨ α1 ) ∧ ( g p ( xa ) ∨ α1 ) ∧ (α2 ∨ α1 )
≥ ( f p ( a ) ∨ α1 ) ∧ (( g p ( a ) ∧ α2 ) ∨ α1 ) ∧ (α2 ∨ α1 )
= ( f p ( a ) ∧ g p ( a ) ∧ α2 ) ∨ α1
= (( f p ∧ g p )( a ) ∧ α2 ) ∨ α1
= ( f p ∧^(α2 ,α1 ) g p )( a)

Hence f p ∘^(α2 ,α1 ) g p ≥ f p ∧^(α2 ,α1 ) g p . Similarly, we can show that f n ∘_(β 2 ,β 1 ) gn ≤ f n ∨_(β 2 ,β 1 ) gn .
(2 ⇒ 3 ⇒ 4). Obvious.
(4 ⇒ 1). Let f and g be any (α1 , α2 ; β 1 , β 2 )-BF right ideal and (α1 , α2 ; β 1 , β 2 )-BF left ideal on
S, respectively. By Theorem 8, f is a (α1 , α2 ; β 1 , β 2 )-BF quasi-ideal. Thus f p ∘^(α2 ,α1 ) g p ≥ f p ∧^(α2 ,α1 ) g p
and f n ∘_(β 2 ,β 1 ) gn ≤ f n ∨_(β 2 ,β 1 ) gn . By Lemma 4, f p ∘^(α2 ,α1 ) g p ≤ f p ∧^(α2 ,α1 ) g p and
f n ∘_(β 2 ,β 1 ) gn ≥ f n ∨_(β 2 ,β 1 ) gn . Thus f p ∘^(α2 ,α1 ) g p = f p ∧^(α2 ,α1 ) g p and f n ∘_(β 2 ,β 1 ) gn = f n ∨_(β 2 ,β 1 ) gn .
Therefore by Theorem 9, S is regular.

4. Conclusions
In this paper, we propose generalizations of BF sets. In particular, we introduce several
concepts of generalized BF sets and study the relationship between such sets and semigroups. In other
words, we propose generalized BF subsemigroups. The results obtained in this paper are some
inequalities of (α1 , α2 ; β 1 , β 2 )-BF quasi- (generalized bi-, bi-) ideals and a characterization of a regular
semigroup in terms of generalized BF semigroups. BF sets are important because positive
and negative components are frequently found in daily life, for example, in organizations, economics,
performance, development, evaluation, risk management and decisions, and so forth. Therefore we
establish generalized BF sets on semigroups, which enriches the structure of the algebra. We hope
that the study of some types of subsemigroups characterized in terms of inequalities of generalized BF
subsemigroups is a useful mathematical tool.

Author Contributions: Both authors contributed equally to this manuscript.


Conflicts of Interest: The authors declare no conflict of interest.

9. Mordeson, J.N.; Malik, D.S.; Kuroki, N. Fuzzy Semigroups; Springer: Berlin, Germany, 2003.

c 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
Hyperfuzzy Ideals in BCK/BCI-Algebras †
Seok-Zun Song 1 , Seon Jeong Kim 2 and Young Bae Jun 3, *
1 Department of Mathematics, Jeju National University, Jeju 63243, Korea; [email protected]
2 Department of Mathematics, Natural Science of College, Gyeongsang National University,
Jinju 52828, Korea; [email protected]
3 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea
* Correspondence: [email protected]
† To the memory of Professor Lotfi A. Zadeh.

Received: 17 November 2017; Accepted: 10 December 2017; Published: 14 December 2017

Abstract: The notions of hyperfuzzy ideals in BCK/BCI-algebras are introduced, and related
properties are investigated. Characterizations of hyperfuzzy ideals are established. Relations between
hyperfuzzy ideals and hyperfuzzy subalgebras are discussed. Conditions for hyperfuzzy subalgebras
to be hyperfuzzy ideals are provided.

Keywords: hyperfuzzy set; hyperfuzzy subalgebra; hyperfuzzy ideal

1. Introduction
After Zadeh [1] introduced the fundamental concept of fuzzy sets, several generalizations of
fuzzy sets were developed. As a generalization of fuzzy sets and interval-valued fuzzy sets, Ghosh and
Samanta [2] introduced the notion of hyperfuzzy sets and applied it to group theory.
They defined hyperfuzzy (normal) subgroups and hyperfuzzy cosets, and investigated their properties.
The hyperfuzzy set has a subset of the interval [0, 1] as its image. Hence, it is a generalization of an
interval-valued fuzzy set. In mathematics, BCK and BCI-algebras are algebraic structures, introduced
by Imai, Iséki and Tanaka, that describe fragments of the propositional calculus involving implication
known as BCK and BCI logics (see [3–5]). Jun et al. [6] applied hyperfuzzy sets to BCK/BCI-algebras
by using the infimum and supremum of the image of hyperfuzzy sets. They introduced the notion
of k-fuzzy substructure for k ∈ {1, 2, 3, 4} and then they introduced the concepts of hyperfuzzy
substructures of several types by using k-fuzzy substructures, and investigated their basic properties.
They also introduced the notion of hyperfuzzy subalgebras of type (i, j) for i, j ∈ {1, 2, 3, 4}, and
discussed relations between hyperfuzzy substructure/subalgebra and its length. They investigated
the properties of hyperfuzzy subalgebras related to upper and lower level subsets.
The aim of this paper is to study BCK/BCI-algebraic structures based on hyperfuzzy structures.
So, the notions and results in this manuscript are a generalization of BCK/BCI-algebraic structures
based on fuzzy and interval-valued fuzzy structures. We introduce the notion of hyperfuzzy ideals in
BCK/BCI-algebras, and investigate several properties. We consider characterizations of hyperfuzzy
ideals, and discuss relations between hyperfuzzy subalgebras and hyperfuzzy ideals. We provide
conditions for hyperfuzzy subalgebras to be hyperfuzzy ideals.

2. Preliminaries
By a BCI-algebra (see [7,8]) we mean a system X := ( X, ∗, 0) in which the following axioms hold:

(I) (( x ∗ y) ∗ ( x ∗ z)) ∗ (z ∗ y) = 0,
(II) ( x ∗ ( x ∗ y)) ∗ y = 0,
(III) x ∗ x = 0,
(IV) x ∗ y = y ∗ x = 0 ⇒ x = y

Mathematics 2017, 5, 81; doi:10.3390/math5040081 127 www.mdpi.com/journal/mathematics



for all x, y, z ∈ X. If a BCI-algebra X satisfies 0 ∗ x = 0 for all x ∈ X, then we say that X is a BCK-algebra
(see [7,8]). We can define a partial ordering ≤ by

(∀ x, y ∈ X ) ( x ≤ y ⇐⇒ x ∗ y = 0).

In a BCK/BCI-algebra X, the following hold (see [7,8]):

(∀ x ∈ X ) ( x ∗ 0 = x ), (1)
(∀ x, y, z ∈ X ) (( x ∗ y) ∗ z = ( x ∗ z) ∗ y). (2)

A non-empty subset S of a BCK/BCI-algebra X is called a subalgebra of X (see [7,8]) if x ∗ y ∈ S


for all x, y ∈ S.
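For a finite table, axioms (I)–(IV) and the extra BCK condition 0 ∗ x = 0 can be checked mechanically. The sketch below is a Python illustration of ours (not from the text); it encodes ∗ as a nested list with star[x][y] = x ∗ y, and the two-element table at the end is our own example.

```python
# Check axioms (I)-(IV) of a BCI-algebra, and the extra BCK condition
# 0 * x = 0, on a finite Cayley table star, where star[x][y] = x * y.
def is_bci_algebra(X, star):
    s = lambda x, y: star[x][y]
    return (all(s(s(s(x, y), s(x, z)), s(z, y)) == 0          # axiom (I)
                for x in X for y in X for z in X)
            and all(s(s(x, s(x, y)), y) == 0                  # axiom (II)
                    for x in X for y in X)
            and all(s(x, x) == 0 for x in X)                  # axiom (III)
            and all(x == y for x in X for y in X              # axiom (IV)
                    if s(x, y) == 0 and s(y, x) == 0))

def is_bck_algebra(X, star):
    return is_bci_algebra(X, star) and all(star[0][x] == 0 for x in X)

# A two-element example: on {0, 1}, let x * y be "x and not y".
X = [0, 1]
star = [[0, 0],
        [1, 0]]
print(is_bck_algebra(X, star))  # expect: True
```

The same helpers apply verbatim to the Cayley tables appearing in the examples below.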
We refer the reader to the books [7,8] for further information regarding BCK/BCI-algebras.
By a fuzzy structure over a nonempty set X we mean an ordered pair ( X, ρ) of X and a fuzzy set ρ
on X.
Let X be a nonempty set. A mapping μ̃ : X → P̃ ([0, 1]) is called a hyperfuzzy set over X (see [2]),
where P̃ ([0, 1]) is the family of all nonempty subsets of [0, 1]. An ordered pair ( X, μ̃) is called a hyper
structure over X.
Given a hyper structure ( X, μ̃) over a nonempty set X, we consider two fuzzy structures ( X, μ̃inf )
and ( X, μ̃sup ) over X (see [6]) in which

μ̃inf : X → [0, 1], x → inf μ̃( x ),

μ̃sup : X → [0, 1], x → sup μ̃( x ).

Given a nonempty set X, let BK ( X ) and B I ( X ) denote the collection of all BCK-algebras and all
BCI-algebras, respectively. Also B( X ) := BK ( X ) ∪ B I ( X ).

Definition 1 ([6]). For any ( X, ∗, 0) ∈ B( X ), a fuzzy structure ( X, μ) over ( X, ∗, 0) is called a

• fuzzy subalgebra of ( X, ∗, 0) with type 1 (briefly, 1-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≥ min{μ( x ), μ(y)}) , (3)

• fuzzy subalgebra of ( X, ∗, 0) with type 2 (briefly, 2-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≤ min{μ( x ), μ(y)}) , (4)

• fuzzy subalgebra of ( X, ∗, 0) with type 3 (briefly, 3-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≥ max{μ( x ), μ(y)}) , (5)

• fuzzy subalgebra of ( X, ∗, 0) with type 4 (briefly, 4-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≤ max{μ( x ), μ(y)}) . (6)

It is clear that every 3-fuzzy subalgebra is a 1-fuzzy subalgebra and every 2-fuzzy subalgebra is a
4-fuzzy subalgebra.

Definition 2 ([6]). For any ( X, ∗, 0) ∈ B( X ) and i, j ∈ {1, 2, 3, 4}, a hyper structure ( X, μ̃) over ( X, ∗, 0) is
called an (i, j)-hyperfuzzy subalgebra of ( X, ∗, 0) if ( X, μ̃inf ) is an i-fuzzy subalgebra of ( X, ∗, 0) and ( X, μ̃sup )
is a j-fuzzy subalgebra of ( X, ∗, 0).


3. Hyperfuzzy Ideals
In what follows, let ( X, ∗, 0) ∈ B( X ) unless otherwise specified.

Definition 3. A fuzzy structure ( X, μ) over ( X, ∗, 0) is called a

• fuzzy ideal of ( X, ∗, 0) with type 1 (briefly, 1-fuzzy ideal of ( X, ∗, 0)) if

(∀ x ∈ X ) (μ(0) ≥ μ( x )) , (7)
(∀ x, y ∈ X ) (μ( x ) ≥ min{μ( x ∗ y), μ(y)}) , (8)

• fuzzy ideal of ( X, ∗, 0) with type 2 (briefly, 2-fuzzy ideal of ( X, ∗, 0)) if

(∀ x ∈ X ) (μ(0) ≤ μ( x )) , (9)
(∀ x, y ∈ X ) (μ( x ) ≤ min{μ( x ∗ y), μ(y)}) , (10)

• fuzzy ideal of ( X, ∗, 0) with type 3 (briefly, 3-fuzzy ideal of ( X, ∗, 0)) if it satisfies (7) and

(∀ x, y ∈ X ) (μ( x ) ≥ max{μ( x ∗ y), μ(y)}) , (11)

• fuzzy ideal of ( X, ∗, 0) with type 4 (briefly, 4-fuzzy ideal of ( X, ∗, 0)) if it satisfies (9) and

(∀ x, y ∈ X ) (μ( x ) ≤ max{μ( x ∗ y), μ(y)}) . (12)

It is clear that every 3-fuzzy ideal is a 1-fuzzy ideal and every 2-fuzzy ideal is a 4-fuzzy ideal.

Definition 4. For any i, j ∈ {1, 2, 3, 4}, a hyper structure ( X, μ̃) over ( X, ∗, 0) is called an (i, j)-hyperfuzzy
ideal of ( X, ∗, 0) if ( X, μ̃inf ) is an i-fuzzy ideal of ( X, ∗, 0) and ( X, μ̃sup ) is a j-fuzzy ideal of ( X, ∗, 0).

Example 1. Consider a BCK-algebra X = {0, 1, 2, 3, 4} with the binary operation ∗ which is given in Table 1
(see [8]).

Table 1. Cayley table for the binary operation “∗”.

∗ 0 1 2 3 4
0 0 0 0 0 0
1 1 0 1 0 0
2 2 2 0 0 0
3 3 3 3 0 0
4 4 3 4 1 0

(1) Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:

    μ̃ : X → P̃ ([0, 1]), x →  [0.5, 0.6)  if x = 0,
                               (0.4, 0.8]  if x = 1,
                               [0.3, 0.7]  if x = 2,
                               [0.2, 0.9]  if x ∈ {3, 4}.

It is routine to verify that ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0).


(2) Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:

    μ̃ : X → P̃ ([0, 1]), x →  [0.5, 0.9)  if x = 0,
                               (0.4, 0.9]  if x = 1,
                               [0.3, 0.5]  if x = 2,
                               [0.2, 0.4]  if x ∈ {3, 4}.

It is routine to verify that ( X, μ̃) is a (1, 1)-hyperfuzzy ideal of ( X, ∗, 0).
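The "routine verification" above can be mechanized. A Python sketch for part (1) follows (helper names are ours; the table and intervals are from Table 1 and the display above). Only the endpoints of each μ̃( x ) matter, since the infimum and supremum of an interval are its endpoints, whether or not they are attained.

```python
# Verify Example 1(1): (X, mu~) is a (1,4)-hyperfuzzy ideal.
X = [0, 1, 2, 3, 4]
table = [[0, 0, 0, 0, 0],      # Cayley table of Table 1: table[x][y] = x * y
         [1, 0, 1, 0, 0],
         [2, 2, 0, 0, 0],
         [3, 3, 3, 0, 0],
         [4, 3, 4, 1, 0]]
# Infima and suprema of mu~(x) in part (1).
inf_mu = {0: 0.5, 1: 0.4, 2: 0.3, 3: 0.2, 4: 0.2}
sup_mu = {0: 0.6, 1: 0.8, 2: 0.7, 3: 0.9, 4: 0.9}

def is_1_fuzzy_ideal(m):   # conditions (7) and (8)
    return all(m[0] >= m[x] for x in X) and \
           all(m[x] >= min(m[table[x][y]], m[y]) for x in X for y in X)

def is_4_fuzzy_ideal(m):   # conditions (9) and (12)
    return all(m[0] <= m[x] for x in X) and \
           all(m[x] <= max(m[table[x][y]], m[y]) for x in X for y in X)

print(is_1_fuzzy_ideal(inf_mu), is_4_fuzzy_ideal(sup_mu))  # expect: True True
```

Swapping in the endpoints of part (2) and a check of condition (8) for μ̃sup confirms the (1, 1) claim in the same way.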

Example 2. Consider a BCI-algebra X = {0, 1, 2, a, b} with the binary operation ∗ which is given in Table 2
(see [8]).

Table 2. Cayley table for the binary operation “∗”.

∗ 0 1 2 a b
0 0 0 0 a a
1 1 0 1 b a
2 2 2 0 a a
a a a a 0 0
b b a b 1 0

(1) Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:

    μ̃ : X → P̃ ([0, 1]), x →  [0.33, 0.75)  if x = 0,
                               (0.63, 0.75]  if x = 1,
                               [0.43, 0.70]  if x = 2,
                               [0.53, 0.65]  if x = a,
                               [0.63, 0.65]  if x = b.

By routine calculations, we know that ( X, μ̃) is a (4, 1)-hyperfuzzy ideal of ( X, ∗, 0).


(2) Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:

    μ̃ : X → P̃ ([0, 1]), x →  [0.33, 0.39)  if x = 0,
                               (0.63, 0.69]  if x = 1,
                               [0.43, 0.49]  if x = 2,
                               [0.53, 0.59]  if x = a,
                               [0.63, 0.69]  if x = b.

By routine calculations, we know that ( X, μ̃) is a (4, 4)-hyperfuzzy ideal of ( X, ∗, 0).

Proposition 1. Given a hyper structure ( X, μ̃) over ( X, ∗, 0), we have the following assertions.

(1) If ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0), then


 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) ≥ μ̃inf (y), μ̃sup ( x ) ≤ μ̃sup (y) . (13)

(2) If ( X, μ̃) is a (1, 1)-hyperfuzzy ideal of ( X, ∗, 0), then


 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) ≥ μ̃inf (y), μ̃sup ( x ) ≥ μ̃sup (y) . (14)

(3) If ( X, μ̃) is a (4, 1)-hyperfuzzy ideal of ( X, ∗, 0), then


 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) ≤ μ̃inf (y), μ̃sup ( x ) ≥ μ̃sup (y) . (15)


(4) If ( X, μ̃) is a (4, 4)-hyperfuzzy ideal of ( X, ∗, 0), then


 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) ≤ μ̃inf (y), μ̃sup ( x ) ≤ μ̃sup (y) . (16)

Proof. If ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0), then ( X, μ̃inf ) is a 1-fuzzy ideal of ( X, ∗, 0) and
( X, μ̃sup ) is a 4-fuzzy ideal of ( X, ∗, 0). Let x, y ∈ X be such that x ≤ y. Then x ∗ y = 0, and so

μ̃inf ( x ) ≥ min{μ̃inf ( x ∗ y), μ̃inf (y)} = min{μ̃inf (0), μ̃inf (y)} = μ̃inf (y)

by (8) and (7), and

μ̃sup ( x ) ≤ max{μ̃sup ( x ∗ y), μ̃sup (y)} = max{μ̃sup (0), μ̃sup (y)} = μ̃sup (y)

by (12) and (9). Similarly, we can prove (2), (3) and (4).

Proposition 2. Given a hyper structure ( X, μ̃) over ( X, ∗, 0), we have the following assertions.

(1) If ( X, μ̃) is an (i, j)-hyperfuzzy ideal of ( X, ∗, 0) for (i, j) ∈ {(2, 2), (2, 3), (3, 2), (3, 3)}, then
 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) = μ̃inf (0), μ̃sup ( x ) = μ̃sup (0) . (17)

(2) If ( X, μ̃) is either a (1, 2)-hyperfuzzy ideal or a (1, 3)-hyperfuzzy ideal of ( X, ∗, 0), then the following
assertion is valid.
 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) ≥ μ̃inf (y), μ̃sup ( x ) = μ̃sup (0) . (18)

(3) If ( X, μ̃) is either a (2, 1)-hyperfuzzy ideal or a (3, 1)-hyperfuzzy ideal of ( X, ∗, 0), then the following
assertion is valid.
 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) = μ̃inf (0), μ̃sup ( x ) ≥ μ̃sup (y) . (19)

(4) If ( X, μ̃) is either a (2, 4)-hyperfuzzy ideal or a (3, 4)-hyperfuzzy ideal of ( X, ∗, 0), then the following
assertion is valid.
 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) = μ̃inf (0), μ̃sup ( x ) ≤ μ̃sup (y) . (20)

(5) If ( X, μ̃) is either a (4, 2)-hyperfuzzy ideal or a (4, 3)-hyperfuzzy ideal of ( X, ∗, 0), then the following
assertion is valid.
 
(∀ x, y ∈ X ) x ≤ y ⇒ μ̃inf ( x ) ≤ μ̃inf (y), μ̃sup ( x ) = μ̃sup (0) . (21)

Proof. We prove (1) only; the others can be verified in a similar way. If ( X, μ̃) is a (2, 3)-hyperfuzzy
ideal of ( X, ∗, 0), then ( X, μ̃inf ) is a 2-fuzzy ideal of ( X, ∗, 0) and ( X, μ̃sup ) is a 3-fuzzy ideal of ( X, ∗, 0).
Let x, y ∈ X be such that x ≤ y. Then x ∗ y = 0, and thus

μ̃inf ( x ) ≤ min{μ̃inf ( x ∗ y), μ̃inf (y)} = min{μ̃inf (0), μ̃inf (y)} = μ̃inf (0),
μ̃sup ( x ) ≥ max{μ̃sup ( x ∗ y), μ̃sup (y)} = max{μ̃sup (0), μ̃sup (y)} = μ̃sup (0)

by (10), (9), (11) and (7). Since μ̃inf (0) ≤ μ̃inf ( x ) and μ̃sup (0) ≥ μ̃sup ( x ) for all x ∈ X by (9) and (7),
it follows that μ̃inf ( x ) = μ̃inf (0) and μ̃sup ( x ) = μ̃sup (0) whenever x ≤ y. Similarly, we can verify that
(17) is true for (i, j) ∈ {(2, 2), (3, 2), (3, 3)}.

Proposition 3. Given a hyper structure ( X, μ̃) over ( X, ∗, 0), we have the following assertions.


(1) If ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0), then


  
μ̃inf ( x ) ≥ min{μ̃inf (y), μ̃inf (z)}
(∀ x, y, z ∈ X ) x∗y ≤ z ⇒ . (22)
μ̃sup ( x ) ≤ max{μ̃sup (y), μ̃sup (z)}

(2) If ( X, μ̃) is a (1, 1)-hyperfuzzy ideal of ( X, ∗, 0), then


  
μ̃inf ( x ) ≥ min{μ̃inf (y), μ̃inf (z)}
(∀ x, y, z ∈ X ) x∗y ≤ z ⇒ . (23)
μ̃sup ( x ) ≥ min{μ̃sup (y), μ̃sup (z)}

(3) If ( X, μ̃) is a (4, 1)-hyperfuzzy ideal of ( X, ∗, 0), then


  
μ̃inf ( x ) ≤ max{μ̃inf (y), μ̃inf (z)}
(∀ x, y, z ∈ X ) x∗y ≤ z ⇒ . (24)
μ̃sup ( x ) ≥ min{μ̃sup (y), μ̃sup (z)}

(4) If ( X, μ̃) is a (4, 4)-hyperfuzzy ideal of ( X, ∗, 0), then


  
μ̃inf ( x ) ≤ max{μ̃inf (y), μ̃inf (z)}
(∀ x, y, z ∈ X ) x∗y ≤ z ⇒ . (25)
μ̃sup ( x ) ≤ max{μ̃sup (y), μ̃sup (z)}

Proof. Assume that ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0). Let x, y, z ∈ X be such that x ∗ y ≤ z.
Then ( x ∗ y) ∗ z = 0, and so

μ̃inf ( x ) ≥ min{μ̃inf ( x ∗ y), μ̃inf (y)}


≥ min{min{μ̃inf (( x ∗ y) ∗ z), μ̃inf (z)}, μ̃inf (y)}
= min{min{μ̃inf (0), μ̃inf (z)}, μ̃inf (y)}
= min{μ̃inf (y), μ̃inf (z)}

by (8) and (7), and

μ̃sup ( x ) ≤ max{μ̃sup ( x ∗ y), μ̃sup (y)}


≤ max{max{μ̃sup (( x ∗ y) ∗ z), μ̃sup (z)}, μ̃sup (y)}
= max{max{μ̃sup (0), μ̃sup (z)}, μ̃sup (y)}
= max{μ̃sup (y), μ̃sup (z)}

by (12) and (9). Similarly, we can check that (2), (3) and (4) hold.

Proposition 4. Given a hyper structure ( X, μ̃) over ( X, ∗, 0), we have the following assertions.

(1) If ( X, μ̃) is an (i, j)-hyperfuzzy ideal of ( X, ∗, 0) for (i, j) ∈ {(2, 2), (2, 3), (3, 2), (3, 3)}, then
 
(∀ x, y, z ∈ X ) x ∗ y ≤ z ⇒ μ̃inf ( x ) = μ̃inf (0), μ̃sup ( x ) = μ̃sup (0) . (26)

(2) If ( X, μ̃) is either a (1, 2)-hyperfuzzy ideal or a (1, 3)-hyperfuzzy ideal of ( X, ∗, 0), then the following
assertion is valid.
  
μ̃inf ( x ) ≥ min{μ̃inf (y), μ̃inf (z)}
(∀ x, y, z ∈ X ) x ∗ y ≤ z ⇒ . (27)
μ̃sup ( x ) = μ̃sup (0)


(3) If ( X, μ̃) is either a (2, 1)-hyperfuzzy ideal or a (3, 1)-hyperfuzzy ideal of ( X, ∗, 0), then the following
assertion is valid.
  
μ̃inf ( x ) = μ̃inf (0)
(∀ x, y, z ∈ X ) x ∗ y ≤ z ⇒ . (28)
μ̃sup ( x ) ≥ min{μ̃sup (y), μ̃sup (z)}

(4) If ( X, μ̃) is either a (2, 4)-hyperfuzzy ideal or a (3, 4)-hyperfuzzy ideal of ( X, ∗, 0), then the following
assertion is valid.
  
μ̃inf ( x ) = μ̃inf (0)
(∀ x, y, z ∈ X ) x ∗ y ≤ z ⇒ . (29)
μ̃sup ( x ) ≤ max{μ̃sup (y), μ̃sup (z)}

(5) If ( X, μ̃) is either a (4, 2)-hyperfuzzy ideal or a (4, 3)-hyperfuzzy ideal of ( X, ∗, 0), then the following
assertion is valid.
  
μ̃inf ( x ) ≤ max{μ̃inf (y), μ̃inf (z)}
(∀ x, y, z ∈ X ) x ∗ y ≤ z ⇒ . (30)
μ̃sup ( x ) = μ̃sup (0)

Proof. If ( X, μ̃) is a (2, 3)-hyperfuzzy ideal of ( X, ∗, 0), then ( X, μ̃inf ) is a 2-fuzzy ideal of ( X, ∗, 0) and
( X, μ̃sup ) is a 3-fuzzy ideal of ( X, ∗, 0). Let x, y, z ∈ X be such that x ∗ y ≤ z. Then ( x ∗ y) ∗ z = 0,
and so

μ̃inf ( x ) ≤ min{μ̃inf ( x ∗ y), μ̃inf (y)}


≤ min{min{μ̃inf (( x ∗ y) ∗ z), μ̃inf (z)}, μ̃inf (y)}
= min{min{μ̃inf (0), μ̃inf (z)}, μ̃inf (y)}
= min{μ̃inf (0), μ̃inf (y)}
= μ̃inf (0)

by (10) and (9), and

μ̃sup ( x ) ≥ max{μ̃sup ( x ∗ y), μ̃sup (y)}


≥ max{max{μ̃sup (( x ∗ y) ∗ z), μ̃sup (z)}, μ̃sup (y)}
= max{max{μ̃sup (0), μ̃sup (z)}, μ̃sup (y)}
= max{μ̃sup (0), μ̃sup (y)}
= μ̃sup (0)

by (11) and (7). Since μ̃inf (0) ≤ μ̃inf ( x ) and μ̃sup (0) ≥ μ̃sup ( x ) for all x ∈ X, it follows that
μ̃inf ( x ) = μ̃inf (0) and μ̃sup ( x ) = μ̃sup (0). Similarly, we can verify that (26) is true for (i, j) ∈
{(2, 2), (3, 2), (3, 3)}. Similarly, we can show that (2), (3), (4) and (5) are true.

Given a hyper structure ( X, μ̃) over X and α, β ∈ [0, 1], we consider the following sets:

U (μ̃inf ; α) := { x ∈ X | μ̃inf ( x ) ≥ α},


L(μ̃inf ; α) := { x ∈ X | μ̃inf ( x ) ≤ α},
U (μ̃sup ; β) := { x ∈ X | μ̃sup ( x ) ≥ β},
L(μ̃sup ; β) := { x ∈ X | μ̃sup ( x ) ≤ β}.

Theorem 1. (1) A hyper structure ( X, μ̃) over ( X, ∗, 0) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0) if and only if
the sets U (μ̃inf ; α) and L(μ̃sup ; β) are either empty or ideals of X for all α, β ∈ [0, 1].


(2) A hyper structure ( X, μ̃) over ( X, ∗, 0) is a (1, 1)-hyperfuzzy ideal of ( X, ∗, 0) if and only if the sets
U (μ̃inf ; α) and U (μ̃sup ; β) are either empty or ideals of X for all α, β ∈ [0, 1].
(3) A hyper structure ( X, μ̃) over ( X, ∗, 0) is a (4, 1)-hyperfuzzy ideal of ( X, ∗, 0) if and only if the sets
L(μ̃inf ; α) and U (μ̃sup ; β) are either empty or ideals of X for all α, β ∈ [0, 1].
(4) A hyper structure ( X, μ̃) over ( X, ∗, 0) is a (4, 4)-hyperfuzzy ideal of ( X, ∗, 0) if and only if the sets
L(μ̃inf ; α) and L(μ̃sup ; β) are either empty or ideals of X for all α, β ∈ [0, 1].

Proof. Assume that ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0) and U (μ̃inf ; α) = ∅ = L(μ̃sup ; β)
for α, β ∈ [0, 1]. Then there exist a ∈ U (μ̃inf ; α) and b ∈ L(μ̃sup ; β). Hence μ̃inf (0) ≥ μ̃inf ( a) ≥ α and
μ̃sup (0) ≤ μ̃sup (b) ≤ β, that is, 0 ∈ U (μ̃inf ; α) ∩ L(μ̃sup ; β). Let x, y ∈ X be such that x ∗ y ∈ U (μ̃inf ; α)
and y ∈ U (μ̃inf ; α). Then μ̃inf ( x ∗ y) ≥ α and μ̃inf (y) ≥ α. It follows that

μ̃inf ( x ) ≥ min{μ̃inf ( x ∗ y), μ̃inf (y)} ≥ α

and so that x ∈ U (μ̃inf ; α). Thus U (μ̃inf ; α) is an ideal of ( X, ∗, 0). Now let a, b ∈ X be such that
a ∗ b ∈ L(μ̃sup ; β) and b ∈ L(μ̃sup ; β). Then μ̃sup ( a ∗ b) ≤ β and μ̃sup (b) ≤ β, which imply that

μ̃sup ( a) ≤ max{μ̃sup ( a ∗ b), μ̃sup (b)} ≤ β.

Thus a ∈ L(μ̃sup ; β), and therefore L(μ̃sup ; β) is an ideal of ( X, ∗, 0).


Conversely, suppose that the sets U (μ̃inf ; α) and L(μ̃sup ; β) are either empty or ideals of X for all
α, β ∈ [0, 1]. For any x ∈ X, let μ̃inf ( x ) = α and μ̃sup ( x ) = β. Then x ∈ U (μ̃inf ; α) ∩ L(μ̃sup ; β), and
so U (μ̃inf ; α) and L(μ̃sup ; β) are nonempty. Hence U (μ̃inf ; α) and L(μ̃sup ; β) are ideals of ( X, ∗, 0), and
thus 0 ∈ U (μ̃inf ; α) ∩ L(μ̃sup ; β). It follows that μ̃inf (0) ≥ α = μ̃inf ( x ) and μ̃sup (0) ≤ β = μ̃sup ( x ) for
all x ∈ X. Assume that there exist a, b ∈ X such that

μ̃inf ( a) < min{μ̃inf ( a ∗ b), μ̃inf (b)}.

If we take γ := min{μ̃inf ( a ∗ b), μ̃inf (b)}, then γ ∈ [0, 1], a ∗ b ∈ U (μ̃inf ; γ) and b ∈ U (μ̃inf ; γ).
Since U (μ̃inf ; γ) is an ideal of X, we have a ∈ U (μ̃inf ; γ), that is, μ̃inf ( a) ≥ γ. This is a contradiction,
and so

μ̃inf ( x ) ≥ min{μ̃inf ( x ∗ y), μ̃inf (y)}

for all x, y ∈ X. Now, suppose that

μ̃sup ( x ) > max{μ̃sup ( x ∗ y), μ̃sup (y)}

for some x, y ∈ X, and take


 
β := (1/2) ( μ̃sup ( x ) + max{μ̃sup ( x ∗ y), μ̃sup (y)} ) .

Then x ∗ y ∈ L(μ̃sup ; β) and y ∈ L(μ̃sup ; β), which imply that x ∈ L(μ̃sup ; β) since L(μ̃sup ; β) is an
ideal of X. Hence μ̃sup ( x ) ≤ β, which is a contradiction, and so

μ̃sup ( x ) ≤ max{μ̃sup ( x ∗ y), μ̃sup (y)}

for all x, y ∈ X. Therefore ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0). Similarly, we can verify that
(2), (3), and (4) hold.
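Theorem 1(1) can be illustrated on the data of Example 1(1). In the Python sketch below (helper names are ours; table and values from Example 1), it suffices to test thresholds at the attained membership values, since the level sets U (μ̃inf ; α) and L(μ̃sup ; β) only change at those values.

```python
# Illustrate Theorem 1(1) on Example 1(1): every nonempty upper level set of
# inf_mu and lower level set of sup_mu is an ideal.
X = [0, 1, 2, 3, 4]
table = [[0, 0, 0, 0, 0],      # Cayley table of Table 1: table[x][y] = x * y
         [1, 0, 1, 0, 0],
         [2, 2, 0, 0, 0],
         [3, 3, 3, 0, 0],
         [4, 3, 4, 1, 0]]
inf_mu = {0: 0.5, 1: 0.4, 2: 0.3, 3: 0.2, 4: 0.2}
sup_mu = {0: 0.6, 1: 0.8, 2: 0.7, 3: 0.9, 4: 0.9}

def is_ideal(S):
    """S is an ideal iff 0 is in S, and x*y in S with y in S forces x in S."""
    return 0 in S and all(x in S for x in X for y in X
                          if table[x][y] in S and y in S)

ok = True
for t in sorted(set(inf_mu.values()) | set(sup_mu.values())):
    U = {x for x in X if inf_mu[x] >= t}   # U(inf_mu; t)
    L = {x for x in X if sup_mu[x] <= t}   # L(sup_mu; t)
    ok = ok and (not U or is_ideal(U)) and (not L or is_ideal(L))
print(ok)  # expect: True
```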

Theorem 2. If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (2, 3)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
U (μ̃inf ; α)c and L(μ̃sup ; β)c are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].


Proof. If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (2, 3)-hyperfuzzy ideal of ( X, ∗, 0), then ( X, μ̃inf ) is
a 2-fuzzy ideal of X and ( X, μ̃sup ) is a 3-fuzzy ideal of ( X, ∗, 0). Let α, β ∈ [0, 1] be such that U (μ̃inf ; α)c
and L(μ̃sup ; β)c are nonempty. Then there exist x, a ∈ X such that x ∈ U (μ̃inf ; α)c and a ∈ L(μ̃sup ; β)c .
Hence μ̃inf (0) ≤ μ̃inf ( x ) < α and μ̃sup (0) ≥ μ̃sup ( a) > β, which imply that 0 ∈ U (μ̃inf ; α)c ∩ L(μ̃sup ; β)c .
Let x, y ∈ X be such that x ∗ y ∈ U (μ̃inf ; α)c and y ∈ U (μ̃inf ; α)c . Then μ̃inf ( x ∗ y) < α and μ̃inf (y) < α.
It follows from (10) that

μ̃inf ( x ) ≤ min{μ̃inf ( x ∗ y), μ̃inf (y)} < α

and so that x ∈ U (μ̃inf ; α)c . Hence U (μ̃inf ; α)c is an ideal of ( X, ∗, 0). Now let a, b ∈ X be such
that a ∗ b ∈ L(μ̃sup ; β)c and b ∈ L(μ̃sup ; β)c . Then μ̃sup ( a ∗ b) > β and μ̃sup (b) > β, and it follows
from (11) that

μ̃sup ( a) ≥ max{μ̃sup ( a ∗ b), μ̃sup (b)} > β

Thus a ∈ L(μ̃sup ; β)c , and therefore L(μ̃sup ; β)c is an ideal of X.

The following example shows that the converse of Theorem 2 is not true, that is, there exists a
hyper structure ( X, μ̃) over ( X, ∗, 0) such that

(1) ( X, μ̃) is not a (2, 3)-hyperfuzzy ideal of ( X, ∗, 0),


(2) The nonempty sets U (μ̃inf ; α)c and L(μ̃sup ; β)c are ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].

Example 3. Consider a BCI-algebra X = {0, 1, a, b, c} with the binary operation ∗ which is given in Table 3
(see [8]).

Table 3. Cayley table for the binary operation “∗”.

∗ 0 1 a b c
0 0 0 a b c
1 1 0 a b c
a a a 0 c b
b b b c 0 a
c c c b a 0

Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:

    μ̃ : X → P̃ ([0, 1]), x →  [0.23, 0.85)  if x = 0,
                               (0.43, 0.83]  if x = 1,
                               [0.53, 0.73]  if x = a,
                               (0.63, 0.73]  if x = b,
                               [0.63, 0.75)  if x = c.

Then

    U (μ̃inf ; α)c =  ∅          if α ∈ [0, 0.23],
                     {0}         if α ∈ (0.23, 0.43],
                     {0, 1}      if α ∈ (0.43, 0.53],
                     {0, 1, a}   if α ∈ (0.53, 0.63],
                     X           if α ∈ (0.63, 1.0],


and

    L(μ̃sup ; β)c =  ∅          if β ∈ (0.85, 1.0],
                    {0}         if β ∈ (0.83, 0.85],
                    {0, 1}      if β ∈ (0.75, 0.83],
                    {0, 1, c}   if β ∈ (0.73, 0.75],
                    X           if β ∈ (0, 0.73].

Hence the nonempty sets U (μ̃inf ; α)c and L(μ̃sup ; β)c are ideals of ( X, ∗, 0) for all α, β ∈ [0, 1]. But ( X, μ̃)
is not a (2, 3)-hyperfuzzy ideal of ( X, ∗, 0) since

μ̃inf (c) = 0.63 > 0.53 = min{μ̃inf (c ∗ a), μ̃inf ( a)}

and/or

μ̃sup ( a) = 0.73 < 0.75 = max{μ̃sup ( a ∗ c), μ̃sup (c)}.
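The two claims of this example can be confirmed mechanically; a Python sketch follows (helper names are ours; the table and values are from Table 3 and the display above).

```python
# Example 3: condition (10) fails, yet every nonempty complement level set
# U(inf_mu; a)^c is an ideal — so the converse of Theorem 2 fails.
X = ["0", "1", "a", "b", "c"]
rows = {"0": "0 0 a b c", "1": "1 0 a b c", "a": "a a 0 c b",
        "b": "b b c 0 a", "c": "c c b a 0"}          # Cayley table of Table 3
star = {(x, y): rows[x].split()[i] for x in X for i, y in enumerate(X)}
inf_mu = {"0": 0.23, "1": 0.43, "a": 0.53, "b": 0.63, "c": 0.63}

def is_ideal(S):  # 0 in S, and x*y in S with y in S forces x in S
    return "0" in S and all(x in S for x in X for y in X
                            if star[x, y] in S and y in S)

# (X, inf_mu) violates condition (10) at x = c, y = a ...
fails_10 = inf_mu["c"] > min(inf_mu[star["c", "a"]], inf_mu["a"])
# ... yet every nonempty complement level set is an ideal.
complements_ok = True
for a in (0.3, 0.5, 0.6, 0.7):   # one threshold from each nonempty range
    S = {x for x in X if inf_mu[x] < a}
    if S and not is_ideal(S):
        complements_ok = False
print(fails_10, complements_ok)  # expect: True True
```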

By an argument similar to the proof of Theorem 2, we obtain the following theorem.

Theorem 3. (1) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (2, 2)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
U (μ̃inf ; α)c and U (μ̃sup ; β)c are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(2) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (3, 2)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
L(μ̃inf ; α)c and U (μ̃sup ; β)c are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(3) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (3, 3)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
L(μ̃inf ; α)c and L(μ̃sup ; β)c are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].

By arguments similar to the proofs of Theorems 1 and 2, we obtain the following theorem.

Theorem 4. (1) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (1, 2)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
U (μ̃inf ; α) and U (μ̃sup ; β)c are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(2) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (1, 3)-hyperfuzzy ideal of ( X, ∗, 0), then the sets U (μ̃inf ; α)
and L(μ̃sup ; β)c are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(3) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (2, 1)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
U (μ̃inf ; α)c and U (μ̃sup ; β) are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(4) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (3, 1)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
L(μ̃inf ; α)c and U (μ̃sup ; β) are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(5) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (2, 4)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
U (μ̃inf ; α)c and L(μ̃sup ; β) are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(6) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (3, 4)-hyperfuzzy ideal of ( X, ∗, 0), then the sets
L(μ̃inf ; α)c and L(μ̃sup ; β) are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(7) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (4, 2)-hyperfuzzy ideal of ( X, ∗, 0), then the sets L(μ̃inf ; α)
and U (μ̃sup ; β)c are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].
(8) If a hyper structure ( X, μ̃) over ( X, ∗, 0) is a (4, 3)-hyperfuzzy ideal of ( X, ∗, 0), then the sets L(μ̃inf ; α)
and L(μ̃sup ; β)c are either empty or ideals of ( X, ∗, 0) for all α, β ∈ [0, 1].

4. Relations between Hyperfuzzy Ideals and Hyperfuzzy Subalgebras


Theorem 5. Let ( X, ∗, 0) ∈ BK ( X ). For any i, j ∈ {1, 4}, every (i, j)-hyperfuzzy ideal is an (i, j)-hyperfuzzy
subalgebra.


Proof. Let ( X, ∗, 0) ∈ BK ( X ) and let ( X, μ̃) be a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0). Then ( X, μ̃inf ) is
a 1-fuzzy ideal of ( X, ∗, 0) and ( X, μ̃sup ) is a 4-fuzzy ideal of ( X, ∗, 0). Since x ∗ y ≤ x for all x, y ∈ X,
it follows from Proposition 1, (8) and (12) that

μ̃inf ( x ∗ y) ≥ μ̃inf ( x ) ≥ min{μ̃inf ( x ∗ y), μ̃inf (y)} ≥ min{μ̃inf ( x ), μ̃inf (y)}


μ̃sup ( x ∗ y) ≤ μ̃sup ( x ) ≤ max{μ̃sup ( x ∗ y), μ̃sup (y)} ≤ max{μ̃sup ( x ), μ̃sup (y)}

for all x, y ∈ X. Therefore ( X, μ̃) is a (1, 4)-hyperfuzzy subalgebra of ( X, ∗, 0). Similarly, we can prove
the result for (i, j) ∈ {(1, 1), (4, 1), (4, 4)}.

The converse of Theorem 5 is not true for (i, j) = (1, 4) as seen in the following example.

Example 4. Consider a BCK-algebra X = {0, a, b, c} with the binary operation ∗ which is given in Table 4
(see [8]).

Table 4. Cayley table for the binary operation “∗”.

∗ 0 a b c
0 0 0 0 0
a a 0 0 a
b b a 0 b
c c c c 0

Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:

    μ̃ : X → P̃ ([0, 1]), x →  [0.4, 0.7]               if x = 0,
                               (0.4, 0.7]               if x = a,
                               [0.2, 0.9]               if x = b,
                               [0.4, 0.5) ∪ (0.5, 0.7)  if x = c.

It is routine to verify that ( X, μ̃) is a (1, 4)-hyperfuzzy subalgebra of ( X, ∗, 0). But it is not a
(1, 4)-hyperfuzzy ideal of ( X, ∗, 0) since μ̃sup (b) = 0.9 > 0.7 = max{μ̃sup (b ∗ a), μ̃sup ( a)}.
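This separation, too, is easy to confirm mechanically; a Python sketch follows (helper names are ours; the data are from Table 4 and the display above).

```python
# Example 4: (X, mu~) satisfies the (1,4)-subalgebra conditions (3) and (6),
# but the 4-fuzzy-ideal condition (12) fails for sup_mu at x = b, y = a.
X = ["0", "a", "b", "c"]
rows = {"0": "0 0 0 0", "a": "a 0 0 a",
        "b": "b a 0 b", "c": "c c c 0"}              # Cayley table of Table 4
star = {(x, y): rows[x].split()[i] for x in X for i, y in enumerate(X)}
inf_mu = {"0": 0.4, "a": 0.4, "b": 0.2, "c": 0.4}
sup_mu = {"0": 0.7, "a": 0.7, "b": 0.9, "c": 0.7}

sub_ok = all(inf_mu[star[x, y]] >= min(inf_mu[x], inf_mu[y]) and
             sup_mu[star[x, y]] <= max(sup_mu[x], sup_mu[y])
             for x in X for y in X)                  # conditions (3) and (6)
ideal_fails = sup_mu["b"] > max(sup_mu[star["b", "a"]], sup_mu["a"])  # (12)
print(sub_ok, ideal_fails)  # expect: True True
```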

Example 5. Let X = {0, 1, 2, 3, 4} be the BCK-algebra in Example 1. Let ( X, μ̃) be a hyper structure over
( X, ∗, 0) in which μ̃ is given as follows:


⎪ [0.5, 0.9) if x = 0,



⎨ (0.4, 0.6) ∪ (0.6, 0.8] if x = 1,
μ̃ : X → P̃ ([0, 1]), x → [0.3, 0.5] if x = 2,


⎪ [0.2, 0.4) ∪ (0.5, 0.6]
⎪ if x = 3,


[0.2, 0.5] if x = 4.

Then ( X, μ̃) is a (1, 1)-hyperfuzzy subalgebra of ( X, ∗, 0). Since

μ̃sup (4) = 0.5 < 0.6 = min{μ̃sup (4 ∗ 3), μ̃sup (3)},

( X, μ̃sup ) is not a 1-fuzzy ideal of X. Hence ( X, μ̃) is not a (1, 1)-hyperfuzzy ideal of ( X, ∗, 0).

Example 6. Consider a BCK-algebra X = {0, a, b, c} with the binary operation ∗ which is given in Table 5
(see [8]).


Table 5. Cayley table for the binary operation “∗”.

∗ 0 a b c
0 0 0 0 0
a a 0 0 0
b b a 0 a
c c c c 0

(1) Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:

    μ̃ : X → P̃ ([0, 1]), x →  [0.3, 0.9]  if x = 0,
                               (0.5, 0.8]  if x = a,
                               [0.4, 0.5]  if x = b,
                               [0.6, 0.7)  if x = c.

Then ( X, μ̃) is a (4, 1)-hyperfuzzy subalgebra of ( X, ∗, 0). Since

μ̃inf ( a) = 0.5 > 0.4 = max{μ̃inf ( a ∗ b), μ̃inf (b)}

and/or

μ̃sup (b) = 0.5 < 0.8 = min{μ̃sup (b ∗ a), μ̃sup ( a)},

( X, μ̃inf ) is not a 4-fuzzy ideal of ( X, ∗, 0) and/or ( X, μ̃sup ) is not a 1-fuzzy ideal of ( X, ∗, 0). Therefore
( X, μ̃) is not a (4, 1)-hyperfuzzy ideal of ( X, ∗, 0).
(2) Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:

    μ̃ : X → P̃ ([0, 1]), x →  [0.3, 0.5]  if x = 0,
                               (0.5, 0.7]  if x = a,
                               [0.4, 0.7)  if x = b,
                               [0.6, 0.8]  if x = c.

Then ( X, μ̃) is a (4, 4)-hyperfuzzy subalgebra of ( X, ∗, 0) and ( X, μ̃sup ) is a 4-fuzzy ideal of ( X, ∗, 0).
But ( X, μ̃inf ) is not a 4-fuzzy ideal of ( X, ∗, 0) since

μ̃inf ( a) = 0.5 > 0.4 = max{μ̃inf ( a ∗ b), μ̃inf (b)}.

Hence ( X, μ̃) is not a (4, 4)-hyperfuzzy ideal of ( X, ∗, 0).

We provide conditions for a (1, 4)-hyperfuzzy subalgebra to be a (1, 4)-hyperfuzzy ideal.

Theorem 6. For any ( X, ∗, 0) ∈ BK ( X ), if a (1, 4)-hyperfuzzy subalgebra ( X, μ̃) of ( X, ∗, 0) satisfies the


condition (22), then ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0).

Proof. Let ( X, ∗, 0) ∈ BK ( X ) and let ( X, μ̃) be a (1, 4)-hyperfuzzy subalgebra of ( X, ∗, 0) that


satisfies (22). Since 0 ∗ x ≤ x for all x ∈ X, we have μ̃inf (0) ≥ μ̃inf ( x ) and μ̃sup (0) ≤ μ̃sup ( x ) for
all x ∈ X by (22). Since x ∗ ( x ∗ y) ≤ y for all x, y ∈ X, it follows from (22) that

μ̃inf ( x ) ≥ min{μ̃inf ( x ∗ y), μ̃inf (y)}


and

μ̃sup ( x ) ≤ max{μ̃sup ( x ∗ y), μ̃sup (y)}

for all x, y ∈ X. Hence ( X, μ̃) is a (1, 4)-hyperfuzzy ideal of ( X, ∗, 0).

By an argument similar to the proof of Theorem 6, we obtain the following theorem.

Theorem 7. For any ( X, ∗, 0) ∈ BK ( X ), we have the following assertions.


(1) If ( X, μ̃) is a (1, 1)-hyperfuzzy subalgebra of ( X, ∗, 0) which satisfies the condition (23), then ( X, μ̃) is a
(1, 1)-hyperfuzzy ideal of ( X, ∗, 0).
(2) If ( X, μ̃) is a (4, 1)-hyperfuzzy subalgebra of ( X, ∗, 0) which satisfies the condition (24), then ( X, μ̃) is a
(4, 1)-hyperfuzzy ideal of ( X, ∗, 0).
(3) If ( X, μ̃) is a (4, 4)-hyperfuzzy subalgebra of ( X, ∗, 0) which satisfies the condition (25), then ( X, μ̃) is a
(4, 4)-hyperfuzzy ideal of ( X, ∗, 0).

Theorem 8. For any ( X, ∗, 0) ∈ BK ( X ) and i, j ∈ {2, 3}, every (i, j)-hyperfuzzy ideal is an (i, j)-hyperfuzzy
subalgebra.

Proof. Let ( X, ∗, 0) ∈ BK ( X ) and let ( X, μ̃) be a (2, 3)-hyperfuzzy ideal of ( X, ∗, 0). Then ( X, μ̃inf ) is a
2-fuzzy ideal of ( X, ∗, 0) and ( X, μ̃sup ) is a 3-fuzzy ideal of ( X, ∗, 0). Since x ∗ y ≤ x for all x, y ∈ X,
we have

μ̃inf ( x ∗ y) = μ̃inf (0) ≤ min{μ̃inf ( x ), μ̃inf (y)}


μ̃sup ( x ∗ y) = μ̃sup (0) ≥ max{μ̃sup ( x ), μ̃sup (y)}

for all x, y ∈ X by Proposition 2, (9) and (7). Hence ( X, μ̃) is a (2, 3)-hyperfuzzy subalgebra of ( X, ∗, 0).
Similarly, we can prove it for (i, j) ∈ {(2, 2), (3, 2), (3, 3)}.

By arguments similar to the proofs of Theorems 5 and 8, we obtain the following theorem.

Theorem 9. For any ( X, ∗, 0) ∈ BK ( X ), every (i, j)-hyperfuzzy ideal is an (i, j)-hyperfuzzy subalgebra for
(i, j) ∈ {(1, 2), (1, 3), (2, 1), (2, 4), (3, 1), (3, 4), (4, 2), (4, 3)}.

5. Conclusions
In the paper [2], Ghosh and Samanta have introduced the concept of hyperfuzzy sets as a
generalization of fuzzy sets and interval-valued fuzzy sets, and have presented an application of
hyperfuzzy sets in group theory. Jun et al. [6] have applied the hyperfuzzy sets to BCK/BCI-algebras.
In this article, we have discussed ideal theory in BCK/BCI-algebras by using hyperfuzzy sets:
we have introduced the notion of hyperfuzzy ideals in BCK/BCI-algebras and have investigated
several of their properties. We have considered characterizations of hyperfuzzy ideals, and have discussed
relations between hyperfuzzy subalgebras and hyperfuzzy ideals. We have provided conditions for
hyperfuzzy subalgebras to be hyperfuzzy ideals. Recently, many kinds of fuzzy sets have been
applied to handle the uncertainties arising in various daily-life problems, in particular decision-making
problems (see [9–13]). In the future, we shall extend the proposed approach to decision-making
problems in the fields of fuzzy cluster analysis, uncertain programming and mathematical
programming [9]. Moreover, we will apply the notions
and results in this manuscript to related algebraic structures, for example, MV-algebras, BL-algebras,
MTL-algebras, EQ-algebras, effect algebras, and so on.

Acknowledgments: The authors wish to thank the anonymous reviewers for their valuable suggestions. The first
author, S. Z. Song, was supported by Basic Science Research Program through the National Research Foundation
of Korea (NRF) funded by the Ministry of Education (No. 2016R1D1A1B02006812).


Author Contributions: All authors contributed equally and significantly to the study and preparation of the
article. They have read and approved the final manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Zadeh, L.A. Fuzzy sets. Inf. Control. 1965, 8, 338–353.
2. Ghosh, J.; Samanta, T.K. Hyperfuzzy sets and hyperfuzzy group. Int. J. Adv. Sci. Technol. 2012, 41, 27–37.
3. Imai, Y.; Iséki, K. On axiom systems of propositional calculi. Proc. Jpn. Acad. Ser. A Math. Sci. 1966, 42,
19–21.
4. Iséki, K. An algebra related with a propositional calculus. Proc. Jpn. Acad. Ser. A Math. Sci. 1966, 42, 26–29.
5. Iséki, K.; Tanaka, S. An introduction to the theory of BCK-algebras. Math. Jpn. 1978, 23, 1–26.
6. Jun, Y.B.; Hur, K.; Lee, K.J. Hyperfuzzy subalgebras of BCK/BCI-algebras. Ann. Fuzzy Math. Inf. 2017,
in press.
7. Huang, Y.S. BCI-Algebra; Science Press: Beijing, China, 2006.
8. Meng, J.; Jun, Y.B. BCK-Algebras; Kyungmoon Sa Co.: Seoul, Korea, 1994.
9. Garg, H. A robust ranking method for intuitionistic multiplicative sets under crisp, interval environments
and its applications. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 1, 366–374.
10. Feng, F.; Jun, Y.B.; Liu, X.; Li, L. An adjustable approach to fuzzy soft set based decision making. J. Comput.
Appl. Math. 2010, 234, 10–20.
11. Xia, M.; Xu, Z. Hesitant fuzzy information aggregation in decision making. Int. J. Approx. Reason. 2011, 52,
395–407.
12. Tang, H. Decision making based on interval-valued intuitionistic fuzzy soft sets and its algorithm. J. Comput.
Anal. Appl. 2017, 23, 119–131.
13. Wei, G.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Hesitant bipolar fuzzy aggregation operators in multiple
attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 1119–1128.

c 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
Length-Fuzzy Subalgebras in BCK/BCI-Algebras
Young Bae Jun 1 , Seok-Zun Song 2, * and Seon Jeong Kim 3
1 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea;
[email protected]
2 Department of Mathematics, Jeju National University, Jeju 63243, Korea
3 Department of Mathematics, Natural Science of College, Gyeongsang National University,
Jinju 52828, Korea; [email protected]
* Correspondence: [email protected]

Received: 1 December 2017; Accepted: 5 January 2018; Published: 12 January 2018

Abstract: As a generalization of interval-valued fuzzy sets and fuzzy sets, the concept of hyperfuzzy
sets was introduced by Ghosh and Samanta in the paper [J. Ghosh and T.K. Samanta, Hyperfuzzy sets
and hyperfuzzy group, Int. J. Adv. Sci. Technol. 41 (2012), 27–37]. The aim of this manuscript is to
introduce the length-fuzzy set and apply it to BCK/BCI-algebras. The notion of length-fuzzy subalgebras
in BCK/BCI-algebras is introduced, and related properties are investigated. Characterizations of a
length-fuzzy subalgebra are discussed. Relations between length-fuzzy subalgebras and hyperfuzzy
subalgebras are established.

Keywords: hyperfuzzy set; hyperfuzzy subalgebra; length of hyperfuzzy set; length-fuzzy subalgebra

MSC: 06F35; 03G25; 03B52

1. Introduction
Fuzzy set theory was first introduced by Zadeh [1] and opened a new path of thinking for
mathematicians, physicists, chemists, engineers and many others due to its diverse applications in
various fields. Algebraic hyperstructure, which was introduced by the French mathematician Marty [2],
represents a natural extension of classical algebraic structures. Since then, many papers and several
books have been written in this area. Nowadays, hyperstructures have a lot of applications in several
domains of mathematics and computer science. In a classical algebraic structure, the composition of two
elements is an element, while in an algebraic hyperstructure, the composition of two elements is a set.
The study of fuzzy hyperstructures is an interesting research area of fuzzy sets. As a generalization of
fuzzy sets and interval-valued fuzzy sets, Ghosh and Samanta [3] introduced the notion of hyperfuzzy
sets, and applied it to group theory. Jun et al. [4] applied the hyperfuzzy sets to BCK/BCI-algebras,
and introduced the notion of k-fuzzy substructures for k ∈ {1, 2, 3, 4}. They introduced the concepts
of hyperfuzzy substructures of several types by using k-fuzzy substructures, and investigated their
basic properties. They also defined hyperfuzzy subalgebras of type (i, j) for i, j ∈ {1, 2, 3, 4}, and
discussed relations between the hyperfuzzy substructure/subalgebra and its length. They investigated
the properties of hyperfuzzy subalgebras related to upper- and lower-level subsets.
In this paper, we introduce the length-fuzzy subalgebra in BCK/BCI-algebras based on
hyperfuzzy structures, and investigate several properties.

2. Preliminaries
By a BCI-algebra we mean a system X := ( X, ∗, 0) ∈ K (τ ) in which the following axioms hold:

(I) (( x ∗ y) ∗ ( x ∗ z)) ∗ (z ∗ y) = 0,
(II) ( x ∗ ( x ∗ y)) ∗ y = 0,

Mathematics 2018, 6, 11; doi:10.3390/math6010011 141 www.mdpi.com/journal/mathematics



(III) x ∗ x = 0,
(IV) x ∗ y = y ∗ x = 0 ⇒ x = y,

for all x, y, z ∈ X. If a BCI-algebra X satisfies 0 ∗ x = 0 for all x ∈ X, then we say that X is a BCK-algebra.
We can define a partial ordering ≤ by

(∀ x, y ∈ X ) ( x ≤ y ⇐⇒ x ∗ y = 0).

In a BCK/BCI-algebra X, the following hold:

(∀ x ∈ X ) ( x ∗ 0 = x ), (1)
(∀ x, y, z ∈ X ) (( x ∗ y) ∗ z = ( x ∗ z) ∗ y). (2)

A non-empty subset S of a BCK/BCI-algebra X is called a subalgebra of X if x ∗ y ∈ S for all
x, y ∈ S.
We refer the reader to the books [5,6] for further information regarding BCK/BCI-algebras.
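Both axiom systems can be checked mechanically on a finite Cayley table. The sketch below (Python, not part of the original text; the table is the five-element BCK-algebra that reappears later in Example 1) is one way to do so:

```python
# Verify the BCI axioms (I)-(IV), and the extra BCK condition 0 * x = 0,
# on a finite Cayley table t, where t[x][y] encodes x * y.
def is_bci(t):
    elems = range(len(t))
    for x in elems:
        if t[x][x] != 0:                                   # (III) x * x = 0
            return False
        for y in elems:
            if t[t[x][t[x][y]]][y] != 0:                   # (II) (x*(x*y))*y = 0
                return False
            if x != y and t[x][y] == 0 and t[y][x] == 0:   # (IV) antisymmetry
                return False
            for z in elems:
                if t[t[t[x][y]][t[x][z]]][t[z][y]] != 0:   # (I)
                    return False
    return True

def is_bck(t):
    return is_bci(t) and all(t[0][x] == 0 for x in range(len(t)))

# Table 1 of Example 1 below (a BCK-algebra taken from [6]).
star = [
    [0, 0, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [2, 2, 0, 0, 0],
    [3, 3, 3, 0, 0],
    [4, 3, 4, 1, 0],
]
print(is_bck(star))  # True
```

Any finite table can be substituted for `star`; a BCI-table that violates 0 ∗ x = 0 will pass `is_bci` but fail `is_bck`.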
An ordered pair ( X, ρ) of a nonempty set X and a fuzzy set ρ on X is called a fuzzy structure
over X.
Let X be a nonempty set. A mapping μ̃ : X → P̃ ([0, 1]) is called a hyperfuzzy set over X (see [3]),
where P̃ ([0, 1]) is the family of all nonempty subsets of [0, 1]. An ordered pair ( X, μ̃) is called a hyper
structure over X.
Given a hyper structure ( X, μ̃) over a nonempty set X, we consider two fuzzy structures ( X, μ̃inf )
and ( X, μ̃sup ) over X in which

μ̃inf : X → [0, 1], x → inf{μ̃( x )},


μ̃sup : X → [0, 1], x → sup{μ̃( x )}.

Given a nonempty set X, let BK ( X ) and B I ( X ) denote the collection of all BCK-algebras and all
BCI-algebras, respectively. Also, B( X ) := BK ( X ) ∪ B I ( X ).

Definition 1. [4] For any ( X, ∗, 0) ∈ B( X ), a fuzzy structure ( X, μ) over ( X, ∗, 0) is called a

• fuzzy subalgebra of ( X, ∗, 0) with type 1 (briefly, 1-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≥ min{μ( x ), μ(y)}) , (3)

• fuzzy subalgebra of ( X, ∗, 0) with type 2 (briefly, 2-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≤ min{μ( x ), μ(y)}) , (4)

• fuzzy subalgebra of ( X, ∗, 0) with type 3 (briefly, 3-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≥ max{μ( x ), μ(y)}) , (5)

• fuzzy subalgebra of ( X, ∗, 0) with type 4 (briefly, 4-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≤ max{μ( x ), μ(y)}) . (6)

It is clear that every 3-fuzzy subalgebra is a 1-fuzzy subalgebra and every 2-fuzzy subalgebra is a
4-fuzzy subalgebra.
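Since conditions (3)–(6) differ only in the aggregator (min or max) and the direction of the inequality, a single routine can decide all four types on a finite fuzzy structure. A minimal sketch (Python; the two-element table and the membership values are invented for illustration):

```python
import operator

# Decide whether (X, mu) is a k-fuzzy subalgebra (Definition 1) of a finite
# algebra given by a Cayley table: min with >= for k = 1, min with <= for
# k = 2, max with >= for k = 3, max with <= for k = 4.
def is_k_fuzzy_subalgebra(table, mu, k):
    agg = min if k in (1, 2) else max
    cmp = operator.ge if k in (1, 3) else operator.le
    elems = range(len(table))
    return all(cmp(mu[table[x][y]], agg(mu[x], mu[y]))
               for x in elems for y in elems)

# A two-element BCK-algebra with 1 * 1 = 0, and invented membership values.
table = [[0, 0], [1, 0]]
mu = [0.9, 0.4]                      # mu(0) = 0.9, mu(1) = 0.4
print([k for k in (1, 2, 3, 4) if is_k_fuzzy_subalgebra(table, mu, k)])  # [1]
```

With these toy values only type 1 holds, which also illustrates that a 1-fuzzy subalgebra need not be a 3-fuzzy subalgebra.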


Definition 2. [4] For any ( X, ∗, 0) ∈ B( X ) and i, j ∈ {1, 2, 3, 4}, a hyper structure ( X, μ̃) over ( X, ∗, 0) is
called an (i, j)-hyperfuzzy subalgebra of ( X, ∗, 0) if ( X, μ̃inf ) is an i-fuzzy subalgebra of ( X, ∗, 0) and ( X, μ̃sup )
is a j-fuzzy subalgebra of ( X, ∗, 0).

3. Length-Fuzzy Subalgebras
In what follows, let ( X, ∗, 0) ∈ B( X ) unless otherwise specified.

Definition 3. [4] Given a hyper structure ( X, μ̃) over ( X, ∗, 0), we define

μ̃ : X → [0, 1], x → μ̃sup ( x ) − μ̃inf ( x ), (7)

which is called the length of μ̃.

Definition 4. A hyper structure ( X, μ̃) over ( X, ∗, 0) is called a length 1-fuzzy (resp. 2-fuzzy, 3-fuzzy and
4-fuzzy) subalgebra of ( X, ∗, 0) if μ̃ satisfies the condition (3) (resp. (4)–(6)).

Example 1. Consider a BCK-algebra X = {0, 1, 2, 3, 4} with the binary operation ∗ which is given in
Table 1 (see [6]).

Table 1. Cayley table for the binary operation “∗”.

∗ 0 1 2 3 4
0 0 0 0 0 0
1 1 0 1 0 0
2 2 2 0 0 0
3 3 3 3 0 0
4 4 3 4 1 0

Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:




μ̃ : X → P̃ ([0, 1]), x →
    [0.2, 0.4) ∪ [0.5, 0.8)  if x = 0,
    (0.5, 0.9]               if x = 1,
    [0.1, 0.3] ∪ (0.4, 0.6]  if x = 2,
    [0.6, 0.9]               if x = 3,
    [0.3, 0.5]               if x = 4.

Then, the length of μ̃ is given by Table 2.

Table 2. The length of ( X, μ̃).

X 0 1 2 3 4
μ̃ 0.6 0.4 0.5 0.3 0.2

It is routine to verify that ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0).
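The "routine" verification can be delegated to a short computation. In the sketch below (Python, ours, not the paper's), each value μ̃(x) is stored by its infimum and supremum only, since the length depends on nothing else:

```python
# Check Example 1: with length(x) = sup mu~(x) - inf mu~(x), the length
# function must satisfy length(x * y) >= min{length(x), length(y)}.
star = [
    [0, 0, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [2, 2, 0, 0, 0],
    [3, 3, 3, 0, 0],
    [4, 3, 4, 1, 0],
]
# (inf, sup) of mu~(x); only these endpoints matter for the length.
bounds = {0: (0.2, 0.8), 1: (0.5, 0.9), 2: (0.1, 0.6),
          3: (0.6, 0.9), 4: (0.3, 0.5)}
length = {x: round(hi - lo, 10) for x, (lo, hi) in bounds.items()}
print(length == {0: 0.6, 1: 0.4, 2: 0.5, 3: 0.3, 4: 0.2})   # Table 2: True
ok = all(length[star[x][y]] >= min(length[x], length[y])
         for x in range(5) for y in range(5))
print(ok)  # True: (X, mu~) is a length 1-fuzzy subalgebra
```

The `round(..., 10)` guards against binary floating-point residue (e.g. 0.8 − 0.2 is not exactly 0.6 in floating point).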

Proposition 1. If ( X, μ̃) is a length k-fuzzy subalgebra of ( X, ∗, 0) for k = 1, 3, then μ̃ (0) ≥ μ̃ ( x ) for all
x ∈ X.

Proof. Let ( X, μ̃) be a length 1-fuzzy subalgebra of ( X, ∗, 0). Then,

μ̃ (0) = μ̃ ( x ∗ x ) ≥ min{μ̃ ( x ), μ̃ ( x )} = μ̃ ( x ) (8)


for all x ∈ X. If ( X, μ̃) is a length 3-fuzzy subalgebra of ( X, ∗, 0), then

μ̃ (0) = μ̃ ( x ∗ x ) ≥ max{μ̃ ( x ), μ̃ ( x )} = μ̃ ( x ) (9)

for all x ∈ X.

Proposition 2. If ( X, μ̃) is a length k-fuzzy subalgebra of ( X, ∗, 0) for k = 2, 4, then μ̃ (0) ≤ μ̃ ( x ) for
all x ∈ X.

Proof. It is similar to the proof of Proposition 1.

Theorem 1. Given a subalgebra A of ( X, ∗, 0) and B1 , B2 ∈ P̃ ([0, 1]), let ( X, μ̃) be a hyper structure over
( X, ∗, 0) given by

μ̃ : X → P̃ ([0, 1]), x → B2 if x ∈ A, and x → B1 otherwise. (10)

If sup{ B1 } − inf{ B1 } ≤ sup{ B2 } − inf{ B2 }, then ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0).
Also, if sup{ B1 } − inf{ B1 } ≥ sup{ B2 } − inf{ B2 }, then ( X, μ̃) is a length 4-fuzzy subalgebra of ( X, ∗, 0).

Proof. If x ∈ A, then μ̃( x ) = B2 and so

μ̃ ( x ) = μ̃sup ( x ) − μ̃inf ( x ) = sup{μ̃( x )} − inf{μ̃( x )} = sup{ B2 } − inf{ B2 }.

If x ∈
/ A, then μ̃( x ) = B1 and so

μ̃ ( x ) = μ̃sup ( x ) − μ̃inf ( x ) = sup{μ̃( x )} − inf{μ̃( x )} = sup{ B1 } − inf{ B1 }.

Assume that sup{ B2 } − inf{ B2 } ≥ sup{ B1 } − inf{ B1 }. Let x, y ∈ X. If x, y ∈ A,
then x ∗ y ∈ A and so

μ̃ ( x ∗ y) = sup{ B2 } − inf{ B2 } = min{μ̃ ( x ), μ̃ (y)}.

If x, y ∈
/ A, then μ̃ ( x ∗ y) ≥ sup{ B1 } − inf{ B1 } = min{μ̃ ( x ), μ̃ (y)}. Suppose that x ∈ A and
y∈
/ A (or, x ∈
/ A and y ∈ A). Then,

μ̃ ( x ∗ y) ≥ sup B1 − inf B1 = min{μ̃ ( x ), μ̃ (y)}.

Therefore, ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0).


Assume that

sup{ B2 } − inf{ B2 } ≤ sup{ B1 } − inf{ B1 },

and so

μ̃ ( x ∗ y) = sup{ B2 } − inf{ B2 } = max{μ̃ ( x ), μ̃ (y)}

for all x, y ∈ A. If x ∈
/ A or y ∈ / A, then μ̃ ( x ∗ y) ≤ max{μ̃ ( x ), μ̃ (y)}. Hence, ( X, μ̃) is a length
4-fuzzy subalgebra of ( X, ∗, 0).

It is clear that every length 3-fuzzy subalgebra is a length 1-fuzzy subalgebra and every length
2-fuzzy subalgebra is a length 4-fuzzy subalgebra. However, the converse is not true, as seen in the
following example.


Example 2. Consider the BCK-algebra ( X, ∗, 0) in Example 1. Given a subalgebra A = {0, 1, 2} of ( X, ∗, 0),


let ( X, μ̃) be a hyper structure over ( X, ∗, 0) given by

μ̃ : X → P̃ ([0, 1]), x → {0.2n | n ∈ [0.2, 0.9)} if x ∈ A, and x → {0.2n | n ∈ (0.3, 0.7]} otherwise.

Then, ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0) by Theorem 1. Since

μ̃ (2) = μ̃sup (2) − μ̃inf (2)


= sup{0.2n | n ∈ [0.2, 0.9)} − inf{0.2n | n ∈ [0.2, 0.9)}
= 0.18 − 0.04 = 0.14

and

μ̃ (3 ∗ 2) = μ̃ (3) = μ̃sup (3) − μ̃inf (3)


= sup{0.2n | n ∈ (0.3, 0.7]} − inf{0.2n | n ∈ (0.3, 0.7]}
= 0.14 − 0.06 = 0.08,

we have μ̃ (3 ∗ 2) = 0.08 < 0.14 = max{0.08, 0.14} = max{μ̃ (3), μ̃ (2)}. Therefore, ( X, μ̃) is not a length
3-fuzzy subalgebra of ( X, ∗, 0).
Given a subalgebra A = {0, 1, 2, 3} of ( X, ∗, 0), let ( X, μ̃) be a hyper structure over ( X, ∗, 0) given by

μ̃ : X → P̃ ([0, 1]), x → (0.4, 0.7) if x ∈ A, and x → [0.3, 0.9) otherwise.

Then, ( X, μ̃) is a length 4-fuzzy subalgebra of ( X, ∗, 0) by Theorem 1. However, it is not a length 2-fuzzy
subalgebra of ( X, ∗, 0), since

μ̃ (4 ∗ 2) = μ̃ (4) = 0.6 > 0.3 = min{μ̃ (4), μ̃ (2)}.

Theorem 2. A hyper structure ( X, μ̃) over ( X, ∗, 0) is a length 1-fuzzy subalgebra of ( X, ∗, 0) if and only if
the set

U (μ̃; t) := { x ∈ X | μ̃ ( x ) ≥ t} (11)

is a subalgebra of ( X, ∗, 0) for all t ∈ [0, 1] with U (μ̃; t) = ∅.

Proof. Assume that ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0) and let t ∈ [0, 1] be such that
U (μ̃; t) is nonempty. If x, y ∈ U (μ̃; t), then μ̃ ( x ) ≥ t and μ̃ (y) ≥ t. It follows from (3) that

μ̃ ( x ∗ y) ≥ min{μ̃ ( x ), μ̃ (y)} ≥ t,

and so x ∗ y ∈ U (μ̃; t). Hence, U (μ̃; t) is a subalgebra of ( X, ∗, 0).


Conversely, suppose that U (μ̃; t) is a subalgebra of ( X, ∗, 0) for all t ∈ [0, 1] with U (μ̃; t) = ∅.
Assume that there exist a, b ∈ X such that

μ̃ ( a ∗ b) < min{μ̃ ( a), μ̃ (b)}.

If we take t := min{μ̃ ( a), μ̃ (b)}, then a, b ∈ U (μ̃; t) and so a ∗ b ∈ U (μ̃; t). Thus, μ̃ ( a ∗ b) ≥ t,
which is a contradiction. Hence,

μ̃ ( x ∗ y) ≥ min{μ̃ ( x ), μ̃ (y)}


for all x, y ∈ X. Therefore, ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0).

Corollary 1. If ( X, μ̃) is a length 3-fuzzy subalgebra of ( X, ∗, 0), then the set U (μ̃; t) is a subalgebra of
( X, ∗, 0) for all t ∈ [0, 1] with U (μ̃; t) = ∅.

The converse of Corollary 1 is not true, as seen in the following example.

Example 3. Consider a BCI-algebra X = {0, 1, 2, a, b} with the binary operation ∗, which is given in Table 3
(see [6]).

Table 3. Cayley table for the binary operation “∗”.

∗ 0 1 2 a b
0 0 0 0 a a
1 1 0 1 b a
2 2 2 0 a a
a a a a 0 0
b b a b 1 0

Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:




μ̃ : X → P̃ ([0, 1]), x →
    [0.3, 0.4) ∪ [0.6, 0.9)  if x = 0,
    (0.5, 0.7]               if x = 1,
    [0.1, 0.3] ∪ (0.5, 0.6]  if x = 2,
    [0.4, 0.7]               if x = a,
    (0.3, 0.5]               if x = b.

Then, the length of μ̃ is given by Table 4.

Table 4. The length of ( X, μ̃).

X 0 1 2 a b
μ̃ 0.6 0.2 0.5 0.3 0.2

Hence, we have


U (μ̃; t) =
    ∅          if t ∈ (0.6, 1],
    {0}        if t ∈ (0.5, 0.6],
    {0, 2}     if t ∈ (0.3, 0.5],
    {0, 2, a}  if t ∈ (0.2, 0.3],
    X          if t ∈ [0, 0.2],

and so U (μ̃; t) is a subalgebra of ( X, ∗, 0) for all t ∈ [0, 1] with U (μ̃; t) = ∅. Since

μ̃ (b ∗ 2) = μ̃ (b) = 0.2 < 0.5 = max{μ̃ (b), μ̃ (2)},

( X, μ̃) is not a length 3-fuzzy subalgebra of ( X, ∗, 0).
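Both parts of this example can be confirmed computationally: every nonempty upper level set of the length function in Table 4 is a subalgebra, while the length 3-fuzzy inequality fails at the pair (b, 2). A sketch (Python; the encoding a = 3, b = 4 of Table 3 is ours):

```python
# Example 3 with elements 0, 1, 2, a, b encoded as 0, 1, 2, 3, 4.
star = [
    [0, 0, 0, 3, 3],
    [1, 0, 1, 4, 3],
    [2, 2, 0, 3, 3],
    [3, 3, 3, 0, 0],
    [4, 3, 4, 1, 0],
]
length = [0.6, 0.2, 0.5, 0.3, 0.2]                 # Table 4
elems = range(5)

def is_subalgebra(S):                              # closure under *
    return all(star[x][y] in S for x in S for y in S)

# Every nonempty upper level set U(mu~; t) is a subalgebra; on a finite X
# it suffices to test the attained length values ...
levels_ok = all(is_subalgebra({x for x in elems if length[x] >= t})
                for t in set(length))
# ... yet the length 3-fuzzy condition fails at (b, 2), since b * 2 = b:
fails_type3 = length[star[4][2]] < max(length[4], length[2])
print(levels_ok, fails_type3)  # True True
```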

Theorem 3. A hyper structure ( X, μ̃) over ( X, ∗, 0) is a length 4-fuzzy subalgebra of ( X, ∗, 0) if and only if
the set

L (μ̃; t) := { x ∈ X | μ̃ ( x ) ≤ t} (12)


is a subalgebra of ( X, ∗, 0) for all t ∈ [0, 1] with L (μ̃; t) = ∅.

Proof. Suppose that ( X, μ̃) is a length 4-fuzzy subalgebra of ( X, ∗, 0) and L (μ̃; t) = ∅ for all t ∈ [0, 1].
Let x, y ∈ L (μ̃; t). Then, μ̃ ( x ) ≤ t and μ̃ (y) ≤ t, which implies from (6) that

μ̃ ( x ∗ y) ≤ max{μ̃ ( x ), μ̃ (y)} ≤ t.

Hence, x ∗ y ∈ L (μ̃; t), and so L (μ̃; t) is a subalgebra of ( X, ∗, 0).


Conversely, assume that L (μ̃; t) is a subalgebra of ( X, ∗, 0) for all t ∈ [0, 1] with L (μ̃; t) = ∅.
If there exist a, b ∈ X such that

μ̃ ( a ∗ b) > max{μ̃ ( a), μ̃ (b)},

then a, b ∈ L (μ̃; t) by taking t = max{μ̃ ( a), μ̃ (b)}. It follows that a ∗ b ∈ L (μ̃; t), and so μ̃ ( a ∗ b) ≤ t,
which is a contradiction. Hence,

μ̃ ( x ∗ y) ≤ max{μ̃ ( x ), μ̃ (y)}

for all x, y ∈ X, and therefore ( X, μ̃) is a length 4-fuzzy subalgebra of ( X, ∗, 0).

Corollary 2. If ( X, μ̃) is a length 2-fuzzy subalgebra of ( X, ∗, 0), then the set L (μ̃; t) is a subalgebra of
( X, ∗, 0) for all t ∈ [0, 1] with L (μ̃; t) = ∅.

The converse of Corollary 2 is not true, as seen in the following example.

Example 4. Consider the BCI-algebra X = {0, 1, 2, a, b} in Example 3 and let ( X, μ̃) be a hyper structure
over ( X, ∗, 0) in which μ̃ is given as follows:


μ̃ : X → P̃ ([0, 1]), x →
    [0.6, 0.8)               if x = 0,
    (0.3, 0.7]               if x = 1,
    [0.4, 0.6) ∪ (0.6, 0.7]  if x = 2,
    [0.1, 0.7]               if x = a,
    (0.2, 0.8]               if x = b.

Then, the length of μ̃ is given by Table 5.

Table 5. The length of ( X, μ̃).

X 0 1 2 a b
μ̃ 0.2 0.4 0.3 0.6 0.6

Hence, we have


L (μ̃; t) =
    X          if t ∈ [0.6, 1],
    {0, 1, 2}  if t ∈ [0.4, 0.6),
    {0, 2}     if t ∈ [0.3, 0.4),
    {0}        if t ∈ [0.2, 0.3),
    ∅          if t ∈ [0, 0.2),

and so L (μ̃; t) is a subalgebra of ( X, ∗, 0) for all t ∈ [0, 1] with L (μ̃; t) = ∅. However, ( X, μ̃) is not a length
2-fuzzy subalgebra of ( X, ∗, 0) since

μ̃ ( a ∗ 1) = 0.6 > 0.4 = min{μ̃ ( a), μ̃ (1)}.
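The same kind of check works for Example 4 with lower level sets: each nonempty L(μ̃; t) is a subalgebra, yet the length 2-fuzzy inequality fails at (a, 1). A sketch (Python; encoding a = 3, b = 4 as for Table 3):

```python
# Example 4 uses the BCI-algebra of Table 3, with elements 0, 1, 2, a, b
# encoded as 0, 1, 2, 3, 4.
star = [
    [0, 0, 0, 3, 3],
    [1, 0, 1, 4, 3],
    [2, 2, 0, 3, 3],
    [3, 3, 3, 0, 0],
    [4, 3, 4, 1, 0],
]
length = [0.2, 0.4, 0.3, 0.6, 0.6]                 # Table 5
elems = range(5)

def is_subalgebra(S):                              # closure under *
    return all(star[x][y] in S for x in S for y in S)

# Every nonempty lower level set L(mu~; t) is a subalgebra ...
levels_ok = all(is_subalgebra({x for x in elems if length[x] <= t})
                for t in set(length))
# ... yet the length 2-fuzzy condition fails at (a, 1), since a * 1 = a:
fails_type2 = length[star[3][1]] > min(length[3], length[1])
print(levels_ok, fails_type2)  # True True
```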


Theorem 4. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which ( X, μ̃inf ) satisfies the Condition (4).
If ( X, μ̃) is a (k, 1)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}, then it is a length 1-fuzzy subalgebra
of ( X, ∗, 0).

Proof. Assume that ( X, μ̃) is a (k, 1)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4} in which
( X, μ̃inf ) satisfies the Condition (4). Then, μ̃inf ( x ∗ y) ≤ μ̃inf ( x ) and μ̃inf ( x ∗ y) ≤ μ̃inf (y) for all x, y ∈ X,
and ( X, μ̃sup ) is a 1-fuzzy subalgebra of X. It follows from (3) that

μ̃ ( x ∗ y) = μ̃sup ( x ∗ y) − μ̃inf ( x ∗ y)


≥ min{μ̃sup ( x ), μ̃sup (y)} − μ̃inf ( x ∗ y)
= min{μ̃sup ( x ) − μ̃inf ( x ∗ y), μ̃sup (y) − μ̃inf ( x ∗ y)}
≥ min{μ̃sup ( x ) − μ̃inf ( x ), μ̃sup (y) − μ̃inf (y)}
= min{μ̃ ( x ), μ̃ (y)}

for all x, y ∈ X. Therefore ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0).

Corollary 3. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which ( X, μ̃inf ) satisfies the Condition (4).
If ( X, μ̃) is a (k, 3)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}, then it is a length 1-fuzzy subalgebra
of ( X, ∗, 0).

Corollary 4. For j ∈ {1, 3}, every (2, j)-hyperfuzzy subalgebra is a length 1-fuzzy subalgebra.

In general, any length 1-fuzzy subalgebra may not be a (k, 1)-hyperfuzzy subalgebra for
k ∈ {1, 2, 3, 4}, as seen in the following example.

Example 5. Consider a BCI-algebra X = {0, 1, a, b, c} with the binary operation ∗, which is given in Table 6
(see [6]).

Table 6. Cayley table for the binary operation “∗”.

∗ 0 1 a b c
0 0 0 a b c
1 1 0 a b c
a a a 0 c b
b b b c 0 a
c c c b a 0

Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃ is given as follows:




μ̃ : X → P̃ ([0, 1]), x →
    [0.1, 0.9)  if x = 0,
    (0.1, 0.8]  if x = 1,
    [0.4, 0.9]  if x = a,
    [0.3, 0.6)  if x ∈ {b, c}.

The length of μ̃ is given by Table 7 and it is routine to verify that ( X, μ̃) is a length 1-fuzzy subalgebra of
( X, ∗, 0).

Table 7. The length of ( X, μ̃).

X 0 1 a b c
μ̃ 0.8 0.7 0.5 0.3 0.3


However, it is not a (k, 1)-hyperfuzzy subalgebra of X since

μ̃inf ( a ∗ a) = μ̃inf (0) = 0.1 < 0.4 = min{μ̃inf ( a), μ̃inf ( a)},
μ̃inf (b ∗ c) = μ̃inf ( a) = 0.4 > 0.3 = min{μ̃inf (b), μ̃inf (c)},
μ̃inf (b ∗ b) = μ̃inf (0) = 0.1 < 0.3 = max{μ̃inf (b), μ̃inf (b)},
μ̃inf (b ∗ c) = μ̃inf ( a) = 0.4 > 0.3 = max{μ̃inf (b), μ̃inf (c)}.
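The claims of Example 5 — that (X, μ̃) is a length 1-fuzzy subalgebra while μ̃inf violates all four k-fuzzy conditions — can be confirmed mechanically. A sketch (Python; the encoding a = 2, b = 3, c = 4 of Table 6 is ours):

```python
# Example 5 with elements 0, 1, a, b, c encoded as 0, 1, 2, 3, 4.
star = [
    [0, 0, 2, 3, 4],
    [1, 0, 2, 3, 4],
    [2, 2, 0, 4, 3],
    [3, 3, 4, 0, 2],
    [4, 4, 3, 2, 0],
]
inf_ = [0.1, 0.1, 0.4, 0.3, 0.3]                   # mu~_inf of each element
length = [0.8, 0.7, 0.5, 0.3, 0.3]                 # Table 7
elems = range(5)
a, b, c = 2, 3, 4

# (X, mu~) is a length 1-fuzzy subalgebra ...
length_ok = all(length[star[x][y]] >= min(length[x], length[y])
                for x in elems for y in elems)
# ... but mu~_inf violates each of the four k-fuzzy conditions somewhere:
f1 = inf_[star[a][a]] < min(inf_[a], inf_[a])      # type 1 fails at (a, a)
f2 = inf_[star[b][c]] > min(inf_[b], inf_[c])      # type 2 fails at (b, c)
f3 = inf_[star[b][b]] < max(inf_[b], inf_[b])      # type 3 fails at (b, b)
f4 = inf_[star[b][c]] > max(inf_[b], inf_[c])      # type 4 fails at (b, c)
print(length_ok, f1, f2, f3, f4)  # True True True True True
```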

We provide a condition for a length 1-fuzzy subalgebra to be a (k, 1)-hyperfuzzy subalgebra for
k ∈ {1, 2, 3, 4}.

Theorem 5. If ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0) in which μ̃inf is constant on X, then it is a


(k, 1)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.

Proof. Assume that ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0) in which μ̃inf is constant on X.
It is clear that ( X, μ̃inf ) is a k-fuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}. Let μ̃inf ( x ) = k for all
x ∈ X. Then,

μ̃sup ( x ∗ y) = μ̃ ( x ∗ y) + μ̃inf ( x ∗ y)


= μ̃ ( x ∗ y) + k
≥ min{μ̃ ( x ), μ̃ (y)} + k
= min{μ̃ ( x ) + k, μ̃ (y) + k}
= min{μ̃sup ( x ), μ̃sup (y)}

for all x, y ∈ X. Thus, ( X, μ̃sup ) is a 1-fuzzy subalgebra of X. Therefore, ( X, μ̃) is a (k, 1)-hyperfuzzy
subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.

Corollary 5. If ( X, μ̃) is a length 3-fuzzy subalgebra of ( X, ∗, 0) in which μ̃inf is constant on X, then it is a


(k, 1)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.

Corollary 6. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃inf is constant on X. Then, ( X, μ̃) is a
(k, 1)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4} if and only if ( X, μ̃) is a length 1-fuzzy subalgebra
of ( X, ∗, 0).

Theorem 6. If ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0) in which μ̃sup is constant on X, then it is a


(4, k)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.

Proof. Let ( X, μ̃) be a length 1-fuzzy subalgebra of ( X, ∗, 0) in which μ̃sup is constant on X. Clearly,
( X, μ̃sup ) is a k-fuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}. Let μ̃sup ( x ) = t for all x ∈ X. Then,

μ̃inf ( x ∗ y) = μ̃sup ( x ∗ y) − μ̃ ( x ∗ y)


= t − μ̃ ( x ∗ y)
≤ t − min{μ̃ ( x ), μ̃ (y)}
= t + max{−μ̃ ( x ), −μ̃ (y)}
= max{t − μ̃ ( x ), t − μ̃ (y)}
= max{μ̃inf ( x ), μ̃inf (y)}

for all x, y ∈ X, and so ( X, μ̃inf ) is a 4-fuzzy subalgebra of ( X, ∗, 0). Therefore, ( X, μ̃) is a


(4, k)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.


Theorem 7. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which ( X, μ̃sup ) satisfies the Condition (5).
For any k ∈ {1, 2, 3, 4}, if ( X, μ̃) is a (4, k )-hyperfuzzy subalgebra of ( X, ∗, 0), then it is a length 1-fuzzy
subalgebra of ( X, ∗, 0).

Proof. Let ( X, μ̃) be a (4, k )-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4} in which ( X, μ̃sup )
satisfies the Condition (5). Then, μ̃sup ( x ∗ y) ≥ μ̃sup ( x ) and μ̃sup ( x ∗ y) ≥ μ̃sup (y) for all x, y ∈ X,
and ( X, μ̃inf ) is a 4-fuzzy subalgebra of ( X, ∗, 0). It follows from (6) that

μ̃ ( x ∗ y) = μ̃sup ( x ∗ y) − μ̃inf ( x ∗ y)


≥ μ̃sup ( x ∗ y) − max{μ̃inf ( x ), μ̃inf (y)}
= min{μ̃sup ( x ∗ y) − μ̃inf ( x ), μ̃sup ( x ∗ y) − μ̃inf (y)}
≥ min{μ̃sup ( x ) − μ̃inf ( x ), μ̃sup (y) − μ̃inf (y)}
= min{μ̃ ( x ), μ̃ (y)}

for all x, y ∈ X. Hence, ( X, μ̃) is a length 1-fuzzy subalgebra of ( X, ∗, 0).

Corollary 7. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which ( X, μ̃sup ) satisfies the Condition (5).
For any k ∈ {1, 2, 3, 4}, every (2, k)-hyperfuzzy subalgebra is a length 1-fuzzy subalgebra.

Theorem 8. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which ( X, μ̃inf ) satisfies the Condition (5).
If ( X, μ̃) is a (k, 4)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}, then it is a length 4-fuzzy subalgebra
of ( X, ∗, 0).

Proof. Assume that ( X, μ̃) is a (k, 4)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4} in which
( X, μ̃inf ) satisfies the Condition (5). Then, μ̃inf ( x ∗ y) ≥ μ̃inf ( x ) and μ̃inf ( x ∗ y) ≥ μ̃inf (y) for all x, y ∈ X,
and ( X, μ̃sup ) is a 4-fuzzy subalgebra of X. Hence,

μ̃ ( x ∗ y) = μ̃sup ( x ∗ y) − μ̃inf ( x ∗ y)


≤ max{μ̃sup ( x ), μ̃sup (y)} − μ̃inf ( x ∗ y)
= max{μ̃sup ( x ) − μ̃inf ( x ∗ y), μ̃sup (y) − μ̃inf ( x ∗ y)}
≤ max{μ̃sup ( x ) − μ̃inf ( x ), μ̃sup (y) − μ̃inf (y)}
= max{μ̃ ( x ), μ̃ (y)}

for all x, y ∈ X, and so ( X, μ̃) is a length 4-fuzzy subalgebra of ( X, ∗, 0).

Corollary 8. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which ( X, μ̃inf ) satisfies the Condition (5).
If ( X, μ̃) is a (k, 2)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}, then it is a length 4-fuzzy subalgebra
of ( X, ∗, 0).

Corollary 9. For j ∈ {2, 4}, every (3, j)-hyperfuzzy subalgebra is a length 4-fuzzy subalgebra.

Theorem 9. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃inf is constant. Then, every length 4-fuzzy
subalgebra is a (k, 4)-hyperfuzzy subalgebra for k ∈ {1, 2, 3, 4}.

Proof. Let ( X, μ̃) be a length 4-fuzzy subalgebra of ( X, ∗, 0) in which μ̃inf is constant. It is obvious that
( X, μ̃inf ) is a k-fuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}. Let μ̃inf ( x ) = t for all x ∈ X. Then,


μ̃sup ( x ∗ y) = μ̃ ( x ∗ y) + μ̃inf ( x ∗ y) = μ̃ ( x ∗ y) + t


≤ max{μ̃ ( x ), μ̃ (y)} + t
= max{μ̃ ( x ) + t, μ̃ (y) + t}
= max{μ̃ ( x ), μ̃ (y)}

for all x, y ∈ X, and hence ( X, μ̃sup ) is a 4-fuzzy subalgebra of ( X, ∗, 0). Therefore, ( X, μ̃) is a
(k, 4)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.

Corollary 10. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which μ̃inf is constant. Then, every length
2-fuzzy subalgebra is a (k, 4)-hyperfuzzy subalgebra for k ∈ {1, 2, 3, 4}.

Theorem 10. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which ( X, μ̃sup ) satisfies the Condition (4).
For every k ∈ {1, 2, 3, 4}, every (1, k )-hyperfuzzy subalgebra is a length 4-fuzzy subalgebra.

Proof. For every k ∈ {1, 2, 3, 4}, let ( X, μ̃) be a (1, k )-hyperfuzzy subalgebra of ( X, ∗, 0) in which
( X, μ̃sup ) satisfies the Condition (4). Then, μ̃sup ( x ∗ y) ≤ μ̃sup ( x ) and μ̃sup ( x ∗ y) ≤ μ̃sup (y) for all
x, y ∈ X. Since ( X, μ̃inf ) is a 1-fuzzy subalgebra of ( X, ∗, 0), we have

μ̃ ( x ∗ y) = μ̃sup ( x ∗ y) − μ̃inf ( x ∗ y)


≤ μ̃sup ( x ∗ y) − min{μ̃inf ( x ), μ̃inf (y)}
= max{μ̃sup ( x ∗ y) − μ̃inf ( x ), μ̃sup ( x ∗ y) − μ̃inf (y)}
≤ max{μ̃sup ( x ) − μ̃inf ( x ), μ̃sup (y) − μ̃inf (y)}
= max{μ̃ ( x ), μ̃ (y)}

for all x, y ∈ X. Thus, ( X, μ̃) is a length 4-fuzzy subalgebra of ( X, ∗, 0).

Corollary 11. Let ( X, μ̃) be a hyper structure over ( X, ∗, 0) in which ( X, μ̃sup ) satisfies the Condition (4).
For every k ∈ {1, 2, 3, 4}, every (3, k )-hyperfuzzy subalgebra is a length 4-fuzzy subalgebra.

Theorem 11. Let ( X, μ̃) be a length 4-fuzzy subalgebra of ( X, ∗, 0). If μ̃sup is constant on X, then ( X, μ̃) is a
(1, k)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.

Proof. Assume that μ̃sup is constant on X in a length 4-fuzzy subalgebra ( X, μ̃) of ( X, ∗, 0).
Obviously, ( X, μ̃sup ) is a k-fuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}. Let μ̃sup ( x ) = t for
all x ∈ X. Then,

μ̃inf ( x ∗ y) = μ̃sup ( x ∗ y) − μ̃ ( x ∗ y)


= t − μ̃ ( x ∗ y)
≥ t − max{μ̃ ( x ), μ̃ (y)}
= min{t − μ̃ ( x ), t − μ̃ (y)}
= min{μ̃inf ( x ), μ̃inf (y)}

for all x, y ∈ X, and so ( X, μ̃inf ) is a 1-fuzzy subalgebra of ( X, ∗, 0). Therefore, ( X, μ̃) is a
(1, k)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.

Corollary 12. Let ( X, μ̃) be a length 2-fuzzy subalgebra of ( X, ∗, 0). If μ̃sup is constant on X, then ( X, μ̃) is a
(1, k)-hyperfuzzy subalgebra of ( X, ∗, 0) for k ∈ {1, 2, 3, 4}.


4. Conclusions
In order to consider a generalization of fuzzy sets and interval-valued fuzzy sets, the notion of
hyperfuzzy sets was introduced by Ghosh and Samanta (see [3]). Jun et al. [4] and Song et al. [7]
have applied the hyperfuzzy sets to BCK/BCI-algebras. In this article, we have introduced the
concept of length-fuzzy sets based on hyperfuzzy sets, and have presented an application in
BCK/BCI-algebras. We have introduced the notion of length fuzzy subalgebras in BCK/BCI-algebras,
and have investigated related properties. We have discussed characterizations of a length fuzzy
subalgebra, and have established relations between length fuzzy subalgebras and hyperfuzzy
subalgebras. Recently, many kinds of fuzzy sets have been applied to handle the uncertainties
arising in various daily-life problems, in particular in solving decision-making problems
(see [8–12]). In the future, from a purely mathematical standpoint, we will apply the notions and
results in this manuscript to related algebraic structures, for example, MV-algebras, BL-algebras,
MTL-algebras, EQ-algebras, effect algebras and so on. From an applied standpoint, we shall extend
the proposed approach to decision-making problems in the fields of fuzzy cluster analysis,
uncertain programming, mathematical programming and so on.

Acknowledgments: The authors wish to thank the anonymous reviewers for their valuable suggestions. To the
memory of Lotfi A. Zadeh.
Author Contributions: All authors contributed equally and significantly to the study and preparation of the
article. They have read and approved the final manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
2. Marty, F. Sur une généralisation de la notion de groupe. In Proceedings of the 8th Congrès des Mathématiciens
Scandinaves, Stockholm, Sweden, 1934; pp. 45–49.
3. Ghosh, J.; Samanta, T.K. Hyperfuzzy sets and hyperfuzzy group. Int. J. Adv. Sci. Technol. 2012, 41, 27–37.
4. Jun, Y.B.; Hur, K.; Lee, K.J. Hyperfuzzy subalgebras of BCK/BCI-algebras. Ann. Fuzzy Math. Inform. 2017,
in press.
5. Huang, Y.S. BCI-Algebra; Science Press: Beijing, China, 2006.
6. Meng, J.; Jun, Y.B. BCK-Algebras; Kyungmoon Sa Co.: Seoul, Korea, 1994.
7. Song, S.Z.; Kim, S.J.; Jun, Y.B. Hyperfuzzy ideals in BCK/BCI-algebras. Mathematics 2017, 5, 81, doi:10.3390/
math5040081.
8. Garg, H. A robust ranking method for intuitionistic multiplicative sets under crisp, interval environments
and its applications. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 1, 366–374.
9. Feng, F.; Jun, Y.B.; Liu, X.; Li, L. An adjustable approach to fuzzy soft set based decision making. J. Comput.
Appl. Math. 2010, 234, 10–20.
10. Xia, M.; Xu, Z. Hesitant fuzzy information aggregation in decision making. Int. J. Approx. Reason. 2011,
52, 395–407.
11. Tang, H. Decision making based on interval-valued intuitionistic fuzzy soft sets and its algorithm. J. Comput.
Anal. Appl. 2017, 23, 119–131.
12. Wei, G.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Hesitant bipolar fuzzy aggregation operators in multiple attribute
decision making. J. Intell. Fuzzy Syst. 2017, 33, 1119–1128.

c 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
Neutrosophic Permeable Values and Energetic
Subsets with Applications in BCK/BCI-Algebras
Young Bae Jun 1, *, Florentin Smarandache 2 , Seok-Zun Song 3 and Hashem Bordbar 4
1 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea
2 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
3 Department of Mathematics, Jeju National University, Jeju 63243, Korea; [email protected]
4 Postdoctoral Research Fellow, Shahid Beheshti University, Tehran, District 1,
Daneshjou Boulevard 1983969411, Iran; [email protected]
* Correspondence: [email protected]

Received: 14 March 2018; Accepted: 2 May 2018; Published: 7 May 2018

Abstract: The concept of a (∈, ∈)-neutrosophic ideal is introduced, and its characterizations are
established. The notions of neutrosophic permeable values are introduced, and related properties are
investigated. Conditions for the neutrosophic level sets to be energetic, right stable, and right vanished
are discussed. Relations between neutrosophic permeable S- and I-values are considered.

Keywords: (∈, ∈)-neutrosophic subalgebra; (∈, ∈)-neutrosophic ideal; neutrosophic (anti-)permeable


S-value; neutrosophic (anti-)permeable I-value; S-energetic set; I-energetic set

MSC: 06F35; 03G25; 08A72

1. Introduction
The notion of neutrosophic set (NS) theory developed by Smarandache (see [1,2]) is a more general
platform that extends the concepts of classic and fuzzy sets, intuitionistic fuzzy sets, and interval-valued
(intuitionistic) fuzzy sets and that has been applied to various areas: pattern recognition, medical diagnosis,
decision-making problems, and so on (see [3–6]). Smarandache [2] mentioned that a cloud is a NS
because its borders are ambiguous and because each element (water drop) belongs with a neutrosophic
probability to the set (e.g., there are types of separated water drops around a compact mass of water
drops, such that we do not know how to consider them: in or out of the cloud). Additionally, we are
not sure where the cloud ends nor where it begins, and neither whether some elements are or are not
in the set. This is why the percentage of indeterminacy is required and the neutrosophic probability
(using subsets—not numbers—as components) should be used for better modeling: it is a more organic,
smooth, and particularly accurate estimation. Indeterminacy is the zone of ignorance of a proposition’s
value, between truth and falsehood.
Algebraic structures play an important role in mathematics with wide-ranging applications in
several disciplines such as coding theory, information sciences, computer sciences, control engineering,
theoretical physics, and so on. NS theory is also applied to several algebraic structures. In particular,
Jun et al. applied it to BCK/BCI-algebras (see [7–12]). Jun et al. [8] introduced the notions of energetic
subsets, right vanished subsets, right stable subsets, and (anti-)permeable values in BCK/BCI-algebras
and investigated relations between these sets.
In this paper, we introduce the notions of neutrosophic permeable S-values, neutrosophic
permeable I-values, (∈, ∈)-neutrosophic ideals, neutrosophic anti-permeable S-values,
and neutrosophic anti-permeable I-values, which are motivated by the idea of subalgebras
(i.e., S-values) and ideals (i.e., I-values), and investigate their properties. We consider characterizations

Mathematics 2018, 6, 74; doi:10.3390/math6050074 153 www.mdpi.com/journal/mathematics



of (∈, ∈)-neutrosophic ideals. We discuss conditions for the lower (upper) neutrosophic ∈Φ -subsets to
be S- and I-energetic. We provide conditions for a triple (α, β, γ) of numbers to be a neutrosophic
(anti-)permeable S- or I-value. We consider conditions for the upper (lower) neutrosophic ∈Φ -subsets to
be right stable (right vanished) subsets. We establish relations between neutrosophic (anti-)permeable
S- and I-values.

2. Preliminaries
An algebra ( X; ∗, 0) of type (2, 0) is called a BCI-algebra if it satisfies the following conditions:

(I) (∀ x, y, z ∈ X ) ((( x ∗ y) ∗ ( x ∗ z)) ∗ (z ∗ y) = 0);


(II) (∀ x, y ∈ X ) (( x ∗ ( x ∗ y)) ∗ y = 0);
(III) (∀ x ∈ X ) ( x ∗ x = 0);
(IV) (∀ x, y ∈ X ) ( x ∗ y = 0, y ∗ x = 0 ⇒ x = y).

If a BCI-algebra X satisfies the following identity:

(V) (∀ x ∈ X ) (0 ∗ x = 0),

then X is called a BCK-algebra. Any BCK/BCI-algebra X satisfies the following conditions:

(∀ x ∈ X ) ( x ∗ 0 = x ) , (1)
(∀ x, y, z ∈ X ) ( x ≤ y ⇒ x ∗ z ≤ y ∗ z, z ∗ y ≤ z ∗ x ) , (2)
(∀ x, y, z ∈ X ) (( x ∗ y) ∗ z = ( x ∗ z) ∗ y) , (3)
(∀ x, y, z ∈ X ) (( x ∗ z) ∗ (y ∗ z) ≤ x ∗ y) , (4)

where x ≤ y if and only if x ∗ y = 0. A nonempty subset S of a BCK/BCI-algebra X is called a


subalgebra of X if x ∗ y ∈ S for all x, y ∈ S. A subset I of a BCK/BCI-algebra X is called an ideal of X if
it satisfies the following:

0 ∈ I, (5)
(∀ x, y ∈ X ) ( x ∗ y ∈ I, y ∈ I → x ∈ I ) . (6)

We refer the reader to the books [13] and [14] for further information regarding
BCK/BCI-algebras.
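Since every algebra used later in the paper is finite and given by a Cayley table, axioms (I)–(V) and the ideal conditions (5) and (6) can be checked mechanically. The following Python sketch is illustrative only; the function names and the sample algebra (truncated subtraction on {0, 1, 2}, a standard BCK-algebra) are ours, not part of the original text.

```python
from itertools import product

def is_bck_algebra(X, star):
    """Check axioms (I)-(V) for a finite algebra (X; *, 0), with * as a dict."""
    s = lambda a, b: star[(a, b)]
    return (
        all(s(s(s(x, y), s(x, z)), s(z, y)) == 0 for x, y, z in product(X, repeat=3))  # (I)
        and all(s(s(x, s(x, y)), y) == 0 for x, y in product(X, repeat=2))             # (II)
        and all(s(x, x) == 0 for x in X)                                               # (III)
        and all(not (s(x, y) == 0 and s(y, x) == 0) or x == y
                for x, y in product(X, repeat=2))                                      # (IV)
        and all(s(0, x) == 0 for x in X)                                               # (V)
    )

def is_ideal(X, star, I):
    """Check conditions (5) and (6) for a subset I."""
    return 0 in I and all(not (star[(x, y)] in I and y in I) or x in I
                          for x in X for y in X)

# Sample algebra: truncated subtraction on {0, 1, 2}
X = [0, 1, 2]
star = {(x, y): max(x - y, 0) for x in X for y in X}
```

On this chain, {0} is an ideal while {0, 1} is not: 2 ∗ 1 = 1 ∈ {0, 1} and 1 ∈ {0, 1}, yet 2 ∉ {0, 1}.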
For any family { ai | i ∈ Λ} of real numbers, we define

⋁{ ai | i ∈ Λ} = sup{ ai | i ∈ Λ}

and

⋀{ ai | i ∈ Λ} = inf{ ai | i ∈ Λ}.

If Λ = {1, 2}, we also use a1 ∨ a2 and a1 ∧ a2 instead of ⋁{ ai | i ∈ {1, 2}} and ⋀{ ai | i ∈ {1, 2}},
respectively.
We let X be a nonempty set. A NS in X (see [1]) is a structure of the form

A := { x; A T ( x ), A I ( x ), A F ( x ) | x ∈ X },

where A T : X → [0, 1] is a truth membership function, A I : X → [0, 1] is an indeterminate membership


function, and A F : X → [0, 1] is a false membership function. For the sake of simplicity, we use the
symbol A = ( A T , A I , A F ) for the NS

A := { x; A T ( x ), A I ( x ), A F ( x ) | x ∈ X }.


A subset A of a BCK/BCI-algebra X is said to be S-energetic (see [8]) if it satisfies

(∀ x, y ∈ X ) ( x ∗ y ∈ A ⇒ { x, y} ∩ A ≠ ∅) . (7)

A subset A of a BCK/BCI-algebra X is said to be I-energetic (see [8]) if it satisfies

(∀ x, y ∈ X ) (y ∈ A ⇒ { x, y ∗ x } ∩ A ≠ ∅) . (8)

A subset A of a BCK/BCI-algebra X is said to be right vanished (see [8]) if it satisfies

(∀ x, y ∈ X ) ( x ∗ y ∈ A ⇒ x ∈ A) . (9)

A subset A of a BCK/BCI-algebra X is said to be right stable (see [8]) if A ∗ X := { a ∗ x | a ∈ A, x ∈ X } ⊆ A.

3. Neutrosophic Permeable Values


Given a NS A = ( A T , A I , A F ) in a set X, α, β ∈ (0, 1] and γ ∈ [0, 1), we consider the following sets:

UT∈ ( A; α) = { x ∈ X | A T ( x ) ≥ α},    UT∈ ( A; α)∗ = { x ∈ X | A T ( x ) > α},
U I∈ ( A; β) = { x ∈ X | A I ( x ) ≥ β},    U I∈ ( A; β)∗ = { x ∈ X | A I ( x ) > β},
UF∈ ( A; γ) = { x ∈ X | A F ( x ) ≤ γ},    UF∈ ( A; γ)∗ = { x ∈ X | A F ( x ) < γ},
L∈T ( A; α) = { x ∈ X | A T ( x ) ≤ α},    L∈T ( A; α)∗ = { x ∈ X | A T ( x ) < α},
L∈I ( A; β) = { x ∈ X | A I ( x ) ≤ β},    L∈I ( A; β)∗ = { x ∈ X | A I ( x ) < β},
L∈F ( A; γ) = { x ∈ X | A F ( x ) ≥ γ},    L∈F ( A; γ)∗ = { x ∈ X | A F ( x ) > γ}.

We say UT∈ ( A; α), U I∈ ( A; β), and UF∈ ( A; γ) are upper neutrosophic ∈Φ -subsets of X, and L∈T ( A; α), L∈I ( A; β), and L∈F ( A; γ) are lower neutrosophic ∈Φ -subsets of X, where Φ ∈ { T, I, F }. We say UT∈ ( A; α)∗ , U I∈ ( A; β)∗ , and UF∈ ( A; γ)∗ are strong upper neutrosophic ∈Φ -subsets of X, and L∈T ( A; α)∗ , L∈I ( A; β)∗ , and L∈F ( A; γ)∗ are strong lower neutrosophic ∈Φ -subsets of X, where Φ ∈ { T, I, F }.
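Each of these twelve level sets is a simple filter over the membership values; note that the T- and I-components cut from above while the F-component cuts from below, and the inequalities reverse between the upper and lower families. A sketch (the helper names and sample values are ours, purely illustrative):

```python
def upper(level, cut, strict=False, reverse=False):
    """Upper neutrosophic ∈-subset of a membership function given as a dict.

    For the T- and I-components the cut is value >= cut (reverse=False);
    for the F-component it is value <= cut (reverse=True).  strict=True
    gives the strong (starred) variant with > or <.
    """
    if reverse:
        keep = (lambda v: v < cut) if strict else (lambda v: v <= cut)
    else:
        keep = (lambda v: v > cut) if strict else (lambda v: v >= cut)
    return {x for x, v in level.items() if keep(v)}

def lower(level, cut, strict=False, reverse=False):
    """Lower neutrosophic ∈-subset: the inequalities of upper, flipped."""
    return upper(level, cut, strict=strict, reverse=not reverse)

# Hypothetical truth membership function on X = {0, ..., 4}
A_T = {0: 0.6, 1: 0.4, 2: 0.4, 3: 0.4, 4: 0.7}
```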

Definition 1 ([7]). A NS A = ( A T , A I , A F ) in a BCK/BCI-algebra X is called an (∈, ∈)-neutrosophic subalgebra of X if the following assertions are valid:

x ∈ UT∈ ( A; α x ), y ∈ UT∈ ( A; αy ) ⇒ x ∗ y ∈ UT∈ ( A; α x ∧ αy ),
x ∈ U I∈ ( A; β x ), y ∈ U I∈ ( A; β y ) ⇒ x ∗ y ∈ U I∈ ( A; β x ∧ β y ),    (10)
x ∈ UF∈ ( A; γx ), y ∈ UF∈ ( A; γy ) ⇒ x ∗ y ∈ UF∈ ( A; γx ∨ γy ),

for all x, y ∈ X, α x , αy , β x , β y ∈ (0, 1] and γx , γy ∈ [0, 1).

Lemma 1 ([7]). A NS A = ( A T , A I , A F ) in a BCK/BCI-algebra X is an (∈, ∈)-neutrosophic subalgebra of


X if and only if A = ( A T , A I , A F ) satisfies
(∀ x, y ∈ X ) ( A T ( x ∗ y) ≥ A T ( x ) ∧ A T (y),
               A I ( x ∗ y) ≥ A I ( x ) ∧ A I (y),
               A F ( x ∗ y) ≤ A F ( x ) ∨ A F (y) ).    (11)

Proposition 1. Every (∈, ∈)-neutrosophic subalgebra A = ( A T , A I , A F ) of a BCK/BCI-algebra X satisfies

(∀ x ∈ X ) ( A T (0) ≥ A T ( x ), A I (0) ≥ A I ( x ), A F (0) ≤ A F ( x )) . (12)


Proof. Straightforward.

Theorem 1. If A = ( A T , A I , A F ) is an (∈, ∈)-neutrosophic subalgebra of a BCK/BCI-algebra X, then the


lower neutrosophic ∈Φ -subsets of X are S-energetic subsets of X, where Φ ∈ { T, I, F }.

Proof. Let x, y ∈ X and α ∈ (0, 1] be such that x ∗ y ∈ L∈T ( A; α). Then

α ≥ A T ( x ∗ y ) ≥ A T ( x ) ∧ A T ( y ),

and thus A T ( x ) ≤ α or A T (y) ≤ α; that is, x ∈ L∈T ( A; α) or y ∈ L∈T ( A; α). Thus { x, y} ∩ L∈T ( A; α) ≠ ∅.
Therefore L∈T ( A; α) is an S-energetic subset of X. Similarly, we can verify that L∈I ( A; β) is an S-energetic
subset of X. We let x, y ∈ X and γ ∈ [0, 1) be such that x ∗ y ∈ L∈F ( A; γ). Then

γ ≤ A F ( x ∗ y ) ≤ A F ( x ) ∨ A F ( y ).

It follows that A F ( x ) ≥ γ or A F (y) ≥ γ; that is, x ∈ L∈F ( A; γ) or y ∈ L∈F ( A; γ). Hence { x, y} ∩ L∈F ( A; γ) ≠ ∅, and therefore L∈F ( A; γ) is an S-energetic subset of X.

Corollary 1. If A = ( A T , A I , A F ) is an (∈, ∈)-neutrosophic subalgebra of a BCK/BCI-algebra X, then the


strong lower neutrosophic ∈Φ -subsets of X are S-energetic subsets of X, where Φ ∈ { T, I, F }.

Proof. Straightforward.

The converse of Theorem 1 is not true, as seen in the following example.

Example 1. Consider a BCK-algebra X = {0, 1, 2, 3, 4} with the binary operation ∗ that is given in Table 1
(see [14]).

Table 1. Cayley table for the binary operation “∗”.

* 0 1 2 3 4
0 0 0 0 0 0
1 1 0 0 0 0
2 2 1 0 0 1
3 3 2 1 0 2
4 4 1 1 1 0

Let A = ( A T , A I , A F ) be a NS in X that is given in Table 2.

Table 2. Tabulation representation of A = ( A T , A I , A F ).

X AT ( x) A I ( x) A F ( x)
0 0.6 0.8 0.2
1 0.4 0.5 0.7
2 0.4 0.5 0.6
3 0.4 0.5 0.5
4 0.7 0.8 0.2

If α ∈ [0.4, 0.6), β ∈ [0.5, 0.8), and γ ∈ (0.2, 0.5], then L∈T ( A; α) = {1, 2, 3}, L∈I ( A; β) = {1, 2, 3},
and L∈F ( A; γ) = {1, 2, 3} are S-energetic subsets of X. Because

A T (4 ∗ 4) = A T (0) = 0.6 ≱ 0.7 = A T (4) ∧ A T (4)


and/or

A F (3 ∗ 2) = A F (1) = 0.7 ≰ 0.6 = A F (3) ∨ A F (2),

it follows from Lemma 1 that A = ( A T , A I , A F ) is not an (∈, ∈)-neutrosophic subalgebra of X.
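The claims of Example 1 can be replayed mechanically: the sketch below (illustrative; the variable names are ours) hard-codes Tables 1 and 2, recovers the level set {1, 2, 3}, confirms the S-energetic property of Equation (7), and exhibits the violation of Lemma 1 at x = y = 4.

```python
# Table 1: rows[x][y] = x * y; A_T from Table 2
rows = [
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [2, 1, 0, 0, 1],
    [3, 2, 1, 0, 2],
    [4, 1, 1, 1, 0],
]
star = {(x, y): rows[x][y] for x in range(5) for y in range(5)}
A_T = {0: 0.6, 1: 0.4, 2: 0.4, 3: 0.4, 4: 0.7}

# Lower level set L_T(A; 0.5) and the S-energetic check of Eq. (7)
L_T = {x for x, v in A_T.items() if v <= 0.5}
s_energetic = all(star[(x, y)] not in L_T or ({x, y} & L_T)
                  for x in range(5) for y in range(5))
# Lemma 1 fails at x = y = 4: A_T(4 * 4) = A_T(0) = 0.6 < 0.7 = A_T(4) ∧ A_T(4)
violated = A_T[star[(4, 4)]] < min(A_T[4], A_T[4])
```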

Definition 2. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. Then (α, β, γ) is called a neutrosophic permeable S-value for
A = ( A T , A I , A F ) if the following assertion is valid:
(∀ x, y ∈ X ) ( x ∗ y ∈ UT∈ ( A; α) ⇒ A T ( x ) ∨ A T (y) ≥ α,
               x ∗ y ∈ U I∈ ( A; β) ⇒ A I ( x ) ∨ A I (y) ≥ β,
               x ∗ y ∈ UF∈ ( A; γ) ⇒ A F ( x ) ∧ A F (y) ≤ γ ).    (13)

Example 2. Let X = {0, 1, 2, 3, 4} be a set with the binary operation ∗ that is given in Table 3.

Table 3. Cayley table for the binary operation “∗”.

* 0 1 2 3 4
0 0 0 0 0 0
1 1 0 1 1 0
2 2 2 0 2 0
3 3 3 3 0 3
4 4 4 4 4 0

Then (X, ∗, 0) is a BCK-algebra (see [14]). Let A = ( AT , AI , AF ) be a NS in X that is given in Table 4.

Table 4. Tabulation representation of A = ( A T , A I , A F ).

X AT ( x) A I ( x) A F ( x)
0 0.2 0.3 0.7
1 0.6 0.4 0.6
2 0.5 0.3 0.4
3 0.4 0.8 0.5
4 0.7 0.6 0.2

It is routine to verify that (α, β, γ) ∈ (0.2, 1] × (0.3, 1] × [0, 0.7) is a neutrosophic permeable S-value for
A = ( A T , A I , A F ).
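The "routine" verification in Example 2 is an exhaustive check of Equation (13) over all 25 pairs, using Tables 3 and 4; a sketch (function name ours, illustrative only):

```python
# Table 3: rows[x][y] = x * y; Table 4: membership values
rows = [
    [0, 0, 0, 0, 0],
    [1, 0, 1, 1, 0],
    [2, 2, 0, 2, 0],
    [3, 3, 3, 0, 3],
    [4, 4, 4, 4, 0],
]
star = {(x, y): rows[x][y] for x in range(5) for y in range(5)}
A_T = {0: 0.2, 1: 0.6, 2: 0.5, 3: 0.4, 4: 0.7}
A_I = {0: 0.3, 1: 0.4, 2: 0.3, 3: 0.8, 4: 0.6}
A_F = {0: 0.7, 1: 0.6, 2: 0.4, 3: 0.5, 4: 0.2}

def permeable_S(alpha, beta, gamma):
    """Eq. (13), checked for every pair (x, y)."""
    for x in range(5):
        for y in range(5):
            z = star[(x, y)]
            if A_T[z] >= alpha and max(A_T[x], A_T[y]) < alpha:
                return False
            if A_I[z] >= beta and max(A_I[x], A_I[y]) < beta:
                return False
            if A_F[z] <= gamma and min(A_F[x], A_F[y]) > gamma:
                return False
    return True
```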

Theorem 2. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If A = ( A T , A I , A F ) satisfies the following condition:
(∀ x, y ∈ X ) ( A T ( x ∗ y) ≤ A T ( x ) ∨ A T (y),
               A I ( x ∗ y) ≤ A I ( x ) ∨ A I (y),
               A F ( x ∗ y) ≥ A F ( x ) ∧ A F (y) ),    (14)

then (α, β, γ) is a neutrosophic permeable S-value for A = ( A T , A I , A F ).

Proof. Let x, y ∈ X be such that x ∗ y ∈ UT∈ ( A; α). Then

α ≤ A T ( x ∗ y ) ≤ A T ( x ) ∨ A T ( y ).


Similarly, if x ∗ y ∈ U I∈ ( A; β) for x, y ∈ X, then A I ( x ) ∨ A I (y) ≥ β. Now, let a, b ∈ X be such that


a ∗ b ∈ UF∈ ( A; γ). Then

γ ≥ A F ( a ∗ b ) ≥ A F ( a ) ∧ A F ( b ).

Therefore (α, β, γ) is a neutrosophic permeable S-value for A = ( A T , A I , A F ).

Theorem 3. Let A = ( A T , A I , A F ) be a NS in a BCK-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F , where Λ T ,


Λ I , and Λ F are subsets of [0, 1]. If A = ( A T , A I , A F ) satisfies the following conditions:

(∀ x ∈ X ) ( A T (0) ≤ A T ( x ), A I (0) ≤ A I ( x ), A F (0) ≥ A F ( x )) (15)

and
(∀ x, y ∈ X ) ( A T ( x ) ≤ A T ( x ∗ y) ∨ A T (y),
               A I ( x ) ≤ A I ( x ∗ y) ∨ A I (y),
               A F ( x ) ≥ A F ( x ∗ y) ∧ A F (y) ),    (16)

then (α, β, γ) is a neutrosophic permeable S-value for A = ( A T , A I , A F ).

Proof. Let x, y, a, b, u, v ∈ X be such that x ∗ y ∈ UT∈ ( A; α), a ∗ b ∈ UI∈ ( A; β), and u ∗ v ∈ UF∈ ( A; γ). Then

α ≤ A T ( x ∗ y) ≤ A T (( x ∗ y) ∗ x ) ∨ A T ( x )
= A T (( x ∗ x ) ∗ y) ∨ A T ( x ) = A T (0 ∗ y) ∨ A T ( x )
= A T (0) ∨ A T ( x ) = A T ( x ),
β ≤ A I ( a ∗ b) ≤ A I (( a ∗ b) ∗ a) ∨ A I ( a)
= A I (( a ∗ a) ∗ b) ∨ A I ( a) = A I (0 ∗ b) ∨ A I ( a)
= A I (0) ∨ A I ( a ) = A I ( a ),

and
γ ≥ A F (u ∗ v) ≥ A F ((u ∗ v) ∗ u) ∧ A F (u)
= A F ((u ∗ u) ∗ v) ∧ A F (u) = A F (0 ∗ v) ∧ A F (u)
= A F (0) ∧ A F ( u ) = A F ( u )

by Equations (3), (V), (15), and (16). It follows that

A T ( x ) ∨ A T (y) ≥ A T ( x ) ≥ α,
A I ( a) ∨ A I (b) ≥ A I ( a) ≥ β,
A F (u) ∧ A F (v) ≤ A F (u) ≤ γ.

Therefore (α, β, γ) is a neutrosophic permeable S-value for A = ( A T , A I , A F ).

Theorem 4. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If (α, β, γ) is a neutrosophic permeable S-value for A = ( A T , A I ,
A F ), then upper neutrosophic ∈Φ -subsets of X are S-energetic where Φ ∈ { T, I, F }.


Proof. Let x, y, a, b, u, v ∈ X be such that x ∗ y ∈ UT∈ ( A; α), a ∗ b ∈ U I∈ ( A; β), and u ∗ v ∈ UF∈ ( A; γ).
Using Equation (13), we have A T ( x ) ∨ A T (y) ≥ α, A I ( a) ∨ A I (b) ≥ β, and A F (u) ∧ A F (v) ≤ γ.
It follows that

A T ( x ) ≥ α or A T (y) ≥ α, that is, x ∈ UT∈ ( A; α) or y ∈ UT∈ ( A; α);


A I ( a) ≥ β or A I (b) ≥ β, that is, a ∈ U I∈ ( A; β) or b ∈ U I∈ ( A; β);

and
A F (u) ≤ γ or A F (v) ≤ γ, that is, u ∈ UF∈ ( A; γ) or v ∈ UF∈ ( A; γ).

Hence { x, y} ∩ UT∈ ( A; α) ≠ ∅, { a, b} ∩ U I∈ ( A; β) ≠ ∅, and {u, v} ∩ UF∈ ( A; γ) ≠ ∅.


Therefore UT∈ ( A; α), U I∈ ( A; β), and UF∈ ( A; γ) are S-energetic subsets of X.

Definition 3. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. Then (α, β, γ) is called a neutrosophic anti-permeable S-value for
A = ( A T , A I , A F ) if the following assertion is valid:
(∀ x, y ∈ X ) ( x ∗ y ∈ L∈T ( A; α) ⇒ A T ( x ) ∧ A T (y) ≤ α,
               x ∗ y ∈ L∈I ( A; β) ⇒ A I ( x ) ∧ A I (y) ≤ β,
               x ∗ y ∈ L∈F ( A; γ) ⇒ A F ( x ) ∨ A F (y) ≥ γ ).    (17)

Example 3. Let X = {0, 1, 2, 3, 4} be a set with the binary operation ∗ that is given in Table 5.

Table 5. Cayley table for the binary operation “∗”.

* 0 1 2 3 4
0 0 0 0 0 0
1 1 0 0 1 0
2 2 1 0 2 0
3 3 3 3 0 3
4 4 4 4 4 0

Then (X, ∗, 0) is a BCK-algebra (see [14]). Let A = ( AT , AI , AF ) be a NS in X that is given in Table 6.

Table 6. Tabulation representation of A = ( A T , A I , A F ).

X AT ( x) A I ( x) A F ( x)
0 0.7 0.6 0.4
1 0.4 0.5 0.6
2 0.4 0.5 0.6
3 0.5 0.2 0.7
4 0.3 0.3 0.9

It is routine to verify that (α, β, γ) ∈ (0.3, 1] × (0.2, 1] × [0, 0.9) is a neutrosophic anti-permeable S-value
for A = ( A T , A I , A F ).
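The same exhaustive strategy verifies Example 3 against Equation (17), with Tables 5 and 6 hard-coded; a sketch (names ours, illustrative only):

```python
# Table 5: rows[x][y] = x * y; Table 6: membership values
rows = [
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],
    [2, 1, 0, 2, 0],
    [3, 3, 3, 0, 3],
    [4, 4, 4, 4, 0],
]
star = {(x, y): rows[x][y] for x in range(5) for y in range(5)}
A_T = {0: 0.7, 1: 0.4, 2: 0.4, 3: 0.5, 4: 0.3}
A_I = {0: 0.6, 1: 0.5, 2: 0.5, 3: 0.2, 4: 0.3}
A_F = {0: 0.4, 1: 0.6, 2: 0.6, 3: 0.7, 4: 0.9}

def anti_permeable_S(alpha, beta, gamma):
    """Eq. (17), checked for every pair (x, y)."""
    for x in range(5):
        for y in range(5):
            z = star[(x, y)]
            if A_T[z] <= alpha and min(A_T[x], A_T[y]) > alpha:
                return False
            if A_I[z] <= beta and min(A_I[x], A_I[y]) > beta:
                return False
            if A_F[z] >= gamma and max(A_F[x], A_F[y]) < gamma:
                return False
    return True
```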

Theorem 5. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If A = ( A T , A I , A F ) is an (∈, ∈)-neutrosophic subalgebra of X,
then (α, β, γ) is a neutrosophic anti-permeable S-value for A = ( A T , A I , A F ).


Proof. Let x, y, a, b, u, v ∈ X be such that x ∗ y ∈ L∈T ( A; α), a ∗ b ∈ L∈I ( A; β), and u ∗ v ∈ L∈F ( A; γ).
Using Lemma 1, we have

A T ( x ) ∧ A T (y) ≤ A T ( x ∗ y) ≤ α,
A I ( a) ∧ A I (b) ≤ A I ( a ∗ b) ≤ β,
A F (u) ∨ A F (v) ≥ A F (u ∗ v) ≥ γ,

and thus (α, β, γ) is a neutrosophic anti-permeable S-value for A = ( A T , A I , A F ).

Theorem 6. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If (α, β, γ) is a neutrosophic anti-permeable S-value for A = ( A T ,
A I , A F ), then lower neutrosophic ∈Φ -subsets of X are S-energetic where Φ ∈ { T, I, F }.

Proof. Let x, y, a, b, u, v ∈ X be such that x ∗ y ∈ L∈T ( A; α), a ∗ b ∈ L∈I ( A; β), and u ∗ v ∈ L∈F ( A; γ).
Using Equation (17), we have A T ( x ) ∧ A T (y) ≤ α, A I ( a) ∧ A I (b) ≤ β, and A F (u) ∨ A F (v) ≥ γ,
which imply that

A T ( x ) ≤ α or A T (y) ≤ α, that is, x ∈ L∈T ( A; α) or y ∈ L∈T ( A; α);

A I ( a) ≤ β or A I (b) ≤ β, that is, a ∈ L∈I ( A; β) or b ∈ L∈I ( A; β);

and
A F (u) ≥ γ or A F (v) ≥ γ, that is, u ∈ L∈F ( A; γ) or v ∈ L∈F ( A; γ).

Hence { x, y} ∩ L∈T ( A; α) ≠ ∅, { a, b} ∩ L∈I ( A; β) ≠ ∅, and {u, v} ∩ L∈F ( A; γ) ≠ ∅.
Therefore L∈T ( A; α), L∈I ( A; β), and L∈F ( A; γ) are S-energetic subsets of X.

Definition 4. A NS A = ( A T , A I , A F ) in a BCK/BCI-algebra X is called an (∈, ∈)-neutrosophic ideal of X if the following assertions are valid:
(∀ x ∈ X ) ( x ∈ UT∈ ( A; α) ⇒ 0 ∈ UT∈ ( A; α),
            x ∈ U I∈ ( A; β) ⇒ 0 ∈ U I∈ ( A; β),
            x ∈ UF∈ ( A; γ) ⇒ 0 ∈ UF∈ ( A; γ) ),    (18)

(∀ x, y ∈ X ) ( x ∗ y ∈ UT∈ ( A; α x ), y ∈ UT∈ ( A; αy ) ⇒ x ∈ UT∈ ( A; α x ∧ αy ),
               x ∗ y ∈ U I∈ ( A; β x ), y ∈ U I∈ ( A; β y ) ⇒ x ∈ U I∈ ( A; β x ∧ β y ),
               x ∗ y ∈ UF∈ ( A; γx ), y ∈ UF∈ ( A; γy ) ⇒ x ∈ UF∈ ( A; γx ∨ γy ) ),    (19)

for all α, β, α x , αy , β x , β y ∈ (0, 1] and γ, γx , γy ∈ [0, 1).

Theorem 7. A NS A = ( A T , A I , A F ) in a BCK/BCI-algebra X is an (∈, ∈)-neutrosophic ideal of X if and


only if A = ( A T , A I , A F ) satisfies
(∀ x, y ∈ X ) ( A T (0) ≥ A T ( x ) ≥ A T ( x ∗ y) ∧ A T (y),
               A I (0) ≥ A I ( x ) ≥ A I ( x ∗ y) ∧ A I (y),
               A F (0) ≤ A F ( x ) ≤ A F ( x ∗ y) ∨ A F (y) ).    (20)

Proof. Assume that Equation (20) is valid, and let x ∈ UT∈ ( A; α), a ∈ U I∈ ( A; β), and u ∈ UF∈ ( A; γ)
for any x, a, u ∈ X, α, β ∈ (0, 1] and γ ∈ [0, 1). Then A T (0) ≥ A T ( x ) ≥ α, A I (0) ≥ A I ( a) ≥ β,
and A F (0) ≤ A F (u) ≤ γ. Hence 0 ∈ UT∈ ( A; α), 0 ∈ U I∈ ( A; β), and 0 ∈ UF∈ ( A; γ), and thus
Equation (18) is valid. Let x, y, a, b, u, v ∈ X be such that x ∗ y ∈ UT∈ ( A; α x ), y ∈ UT∈ ( A; αy ),
a ∗ b ∈ U I∈ ( A; β a ), b ∈ U I∈ ( A; β b ), u ∗ v ∈ UF∈ ( A; γu ), and v ∈ UF∈ ( A; γv ) for all α x , αy , β a , β b ∈ (0, 1]


and γu , γv ∈ [0, 1). Then A T ( x ∗ y) ≥ α x , A T (y) ≥ αy , A I ( a ∗ b) ≥ β a , A I (b) ≥ β b , A F (u ∗ v) ≤ γu ,


and A F (v) ≤ γv . It follows from Equation (20) that

A T ( x ) ≥ A T ( x ∗ y) ∧ A T (y) ≥ α x ∧ αy ,
A I ( a) ≥ A I ( a ∗ b) ∧ A I (b) ≥ β a ∧ β b ,
A F ( u ) ≤ A F ( u ∗ v ) ∨ A F ( v ) ≤ γu ∨ γv .

Hence x ∈ UT∈ ( A; α x ∧ αy ), a ∈ U I∈ ( A; β a ∧ β b ), and u ∈ UF∈ ( A; γu ∨ γv ). Therefore A = ( A T , A I ,


A F ) is an (∈, ∈)-neutrosophic ideal of X.
Conversely, let A = ( A T , A I , A F ) be an (∈, ∈)-neutrosophic ideal of X. If there exists x0 ∈ X
such that A T (0) < A T ( x0 ), then x0 ∈ UT∈ ( A; α) and 0 ∉ UT∈ ( A; α), where α = A T ( x0 ). This is a
contradiction, and thus A T (0) ≥ A T ( x ) for all x ∈ X. Assume that A T ( x0 ) < A T ( x0 ∗ y0 ) ∧ A T (y0 ) for
some x0 , y0 ∈ X. Taking α := A T ( x0 ∗ y0 ) ∧ A T (y0 ) implies that x0 ∗ y0 ∈ UT∈ ( A; α) and y0 ∈ UT∈ ( A; α);
but x0 ∉ UT∈ ( A; α). This is a contradiction, and thus A T ( x ) ≥ A T ( x ∗ y) ∧ A T (y) for all x, y ∈ X.
Similarly, we can verify that A I (0) ≥ A I ( x ) ≥ A I ( x ∗ y) ∧ A I (y) for all x, y ∈ X. Now, suppose
that A F (0) > A F ( a) for some a ∈ X. Then a ∈ UF∈ ( A; γ) and 0 ∉ UF∈ ( A; γ) by taking γ = A F ( a).
This is impossible, and thus A F (0) ≤ A F ( x ) for all x ∈ X. Suppose there exist a0 , b0 ∈ X such
that A F ( a0 ) > A F ( a0 ∗ b0 ) ∨ A F (b0 ), and take γ := A F ( a0 ∗ b0 ) ∨ A F (b0 ). Then a0 ∗ b0 ∈ UF∈ ( A; γ),
b0 ∈ UF∈ ( A; γ), and a0 ∉ UF∈ ( A; γ), which is a contradiction. Thus A F ( x ) ≤ A F ( x ∗ y) ∨ A F (y) for all
x, y ∈ X. Therefore A = ( A T , A I , A F ) satisfies Equation (20).
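Theorem 7 turns ideal-hood into the three pointwise inequalities of Equation (20), which is straightforward to test on a finite algebra. The sketch below uses a hypothetical NS on the truncated-subtraction chain {0, 1, 2}; all names and values are ours, not from the paper.

```python
# Hypothetical BCK-algebra: truncated subtraction on {0, 1, 2}
X = [0, 1, 2]
star = {(x, y): max(x - y, 0) for x in X for y in X}

def neutrosophic_ideal(A_T, A_I, A_F):
    """Eq. (20): A_T(0) >= A_T(x) >= A_T(x*y) ∧ A_T(y), dually for A_F."""
    for x in X:
        if not (A_T[0] >= A_T[x] and A_I[0] >= A_I[x] and A_F[0] <= A_F[x]):
            return False
        for y in X:
            z = star[(x, y)]
            if A_T[x] < min(A_T[z], A_T[y]) or A_I[x] < min(A_I[z], A_I[y]):
                return False
            if A_F[x] > max(A_F[z], A_F[y]):
                return False
    return True

# A hypothetical NS satisfying Eq. (20) ...
good = ({0: 0.9, 1: 0.5, 2: 0.5},
        {0: 0.8, 1: 0.4, 2: 0.4},
        {0: 0.1, 1: 0.6, 2: 0.6})
# ... and one violating it, since A_T(0) = 0.3 < 0.5 = A_T(1)
bad = ({0: 0.3, 1: 0.5, 2: 0.5},
       {0: 0.8, 1: 0.4, 2: 0.4},
       {0: 0.1, 1: 0.6, 2: 0.6})
```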

Lemma 2. Every (∈, ∈)-neutrosophic ideal A = ( A T , A I , A F ) of a BCK/BCI-algebra X satisfies

(∀ x, y ∈ X ) ( x ≤ y ⇒ A T ( x ) ≥ A T (y), A I ( x ) ≥ A I (y), A F ( x ) ≤ A F (y)) . (21)

Proof. Let x, y ∈ X be such that x ≤ y. Then x ∗ y = 0, and thus

A T ( x ) ≥ A T ( x ∗ y ) ∧ A T ( y ) = A T (0) ∧ A T ( y ) = A T ( y ),
A I ( x ) ≥ A I ( x ∗ y ) ∧ A I ( y ) = A I (0) ∧ A I ( y ) = A I ( y ),
A F ( x ) ≤ A F ( x ∗ y ) ∨ A F ( y ) = A F (0) ∨ A F ( y ) = A F ( y ),

by Equation (20). This completes the proof.

Theorem 8. A NS A = ( A T , A I , A F ) in a BCK-algebra X is an (∈, ∈)-neutrosophic ideal of X if and only if


A = ( A T , A I , A F ) satisfies
(∀ x, y, z ∈ X ) x ∗ y ≤ z ⇒ ( A T ( x ) ≥ A T (y) ∧ A T (z),
                               A I ( x ) ≥ A I (y) ∧ A I (z),
                               A F ( x ) ≤ A F (y) ∨ A F (z) ).    (22)

Proof. Let A = ( A T , A I , A F ) be an (∈, ∈)-neutrosophic ideal of X, and let x, y, z ∈ X be such that


x ∗ y ≤ z. Using Theorem 7 and Lemma 2, we have

A T ( x ) ≥ A T ( x ∗ y ) ∧ A T ( y ) ≥ A T ( y ) ∧ A T ( z ),
A I ( x ) ≥ A I ( x ∗ y ) ∧ A I ( y ) ≥ A I ( y ) ∧ A I ( z ),
A F ( x ) ≤ A F ( x ∗ y ) ∨ A F ( y ) ≤ A F ( y ) ∨ A F ( z ).


Conversely, assume that A = ( A T , A I , A F ) satisfies Equation (22). Because 0 ∗ x ≤ x for all x ∈ X,


it follows from Equation (22) that

A T (0) ≥ A T ( x ) ∧ A T ( x ) = A T ( x ),
A I (0) ≥ A I ( x ) ∧ A I ( x ) = A I ( x ),
A F (0) ≤ A F ( x ) ∨ A F ( x ) = A F ( x ),

for all x ∈ X. Because x ∗ ( x ∗ y) ≤ y for all x, y ∈ X, we have

A T ( x ) ≥ A T ( x ∗ y ) ∧ A T ( y ),
A I ( x ) ≥ A I ( x ∗ y ) ∧ A I ( y ),
A F ( x ) ≤ A F ( x ∗ y ) ∨ A F ( y ),

for all x, y ∈ X by Equation (22). It follows from Theorem 7 that A = ( A T , A I , A F ) is an


(∈, ∈)-neutrosophic ideal of X.

Theorem 9. If A = ( A T , A I , A F ) is an (∈, ∈)-neutrosophic ideal of a BCK/BCI-algebra X, then the lower


neutrosophic ∈Φ -subsets of X are I-energetic subsets of X where Φ ∈ { T, I, F }.

Proof. Let x, a, u ∈ X, α, β ∈ (0, 1], and γ ∈ [0, 1) be such that x ∈ L∈T ( A; α), a ∈ L∈I ( A; β), and u ∈ L∈F ( A; γ). Using Theorem 7, we have

α ≥ A T ( x ) ≥ A T ( x ∗ y ) ∧ A T ( y ),
β ≥ A I ( a ) ≥ A I ( a ∗ b ) ∧ A I ( b ),
γ ≤ A F ( u ) ≤ A F ( u ∗ v ) ∨ A F ( v ),

for all y, b, v ∈ X. It follows that

A T ( x ∗ y) ≤ α or A T (y) ≤ α, that is, x ∗ y ∈ L∈T ( A; α) or y ∈ L∈T ( A; α);

A I ( a ∗ b) ≤ β or A I (b) ≤ β, that is, a ∗ b ∈ L∈I ( A; β) or b ∈ L∈I ( A; β);

and

A F (u ∗ v) ≥ γ or A F (v) ≥ γ, that is, u ∗ v ∈ L∈F ( A; γ) or v ∈ L∈F ( A; γ).

Hence {y, x ∗ y} ∩ L∈T ( A; α), {b, a ∗ b} ∩ L∈I ( A; β), and {v, u ∗ v} ∩ L∈F ( A; γ) are nonempty,
and therefore L∈T ( A; α), L∈I ( A; β), and L∈F ( A; γ) are I-energetic subsets of X.

Corollary 2. If A = ( A T , A I , A F ) is an (∈, ∈)-neutrosophic ideal of a BCK/BCI-algebra X, then the strong


lower neutrosophic ∈Φ -subsets of X are I-energetic subsets of X where Φ ∈ { T, I, F }.

Proof. Straightforward.

Theorem 10. Let (α, β, γ) ∈ Λ T × Λ I × Λ F , where Λ T , Λ I , and Λ F are subsets of [0, 1]. If A = ( A T , A I ,
A F ) is an (∈, ∈)-neutrosophic ideal of a BCK-algebra X, then

(1) the (strong) upper neutrosophic ∈Φ -subsets of X are right stable where Φ ∈ { T, I, F };
(2) the (strong) lower neutrosophic ∈Φ -subsets of X are right vanished where Φ ∈ { T, I, F }.

Proof. (1) Let x ∈ X, a ∈ UT∈ ( A; α), b ∈ U I∈ ( A; β), and c ∈ UF∈ ( A; γ). Then A T ( a) ≥ α, A I (b) ≥ β,
and A F (c) ≤ γ. Because a ∗ x ≤ a, b ∗ x ≤ b, and c ∗ x ≤ c, it follows from Lemma 2 that A T ( a ∗
x ) ≥ A T ( a) ≥ α, A I (b ∗ x ) ≥ A I (b) ≥ β, and A F (c ∗ x ) ≤ A F (c) ≤ γ; that is, a ∗ x ∈ UT∈ ( A; α),


b ∗ x ∈ U I∈ ( A; β), and c ∗ x ∈ UF∈ ( A; γ). Hence the upper neutrosophic ∈Φ -subsets of X are right stable
where Φ ∈ { T, I, F }. Similarly, the strong upper neutrosophic ∈Φ -subsets of X are right stable where
Φ ∈ { T, I, F }.
(2) Assume that x ∗ y ∈ L∈T ( A; α), a ∗ b ∈ L∈I ( A; β), and c ∗ d ∈ L∈F ( A; γ) for any x, y, a, b, c, d ∈ X.
Then A T ( x ∗ y) ≤ α, A I ( a ∗ b) ≤ β, and A F (c ∗ d) ≥ γ. Because x ∗ y ≤ x, a ∗ b ≤ a,
and c ∗ d ≤ c, it follows from Lemma 2 that α ≥ A T ( x ∗ y) ≥ A T ( x ), β ≥ A I ( a ∗ b) ≥ A I ( a),
and γ ≤ A F (c ∗ d) ≤ A F (c); that is, x ∈ L∈T ( A; α), a ∈ L∈I ( A; β), and c ∈ L∈F ( A; γ). Therefore the lower
neutrosophic ∈Φ -subsets of X are right vanished where Φ ∈ { T, I, F }. In a similar way, we know that
the strong lower neutrosophic ∈Φ -subsets of X are right vanished where Φ ∈ { T, I, F }.

Definition 5. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. Then (α, β, γ) is called a neutrosophic permeable I-value for
A = ( A T , A I , A F ) if the following assertion is valid:
(∀ x, y ∈ X ) ( x ∈ UT∈ ( A; α) ⇒ A T ( x ∗ y) ∨ A T (y) ≥ α,
               x ∈ U I∈ ( A; β) ⇒ A I ( x ∗ y) ∨ A I (y) ≥ β,
               x ∈ UF∈ ( A; γ) ⇒ A F ( x ∗ y) ∧ A F (y) ≤ γ ).    (23)

Example 4. (1) In Example 2, (α, β, γ) is a neutrosophic permeable I-value for A = ( A T , A I , A F ).


(2) Consider a BCI-algebra X = {0, 1, a, b, c} with the binary operation ∗ that is given in Table 7 (see [14]).

Table 7. Cayley table for the binary operation “∗”.

* 0 1 a b c
0 0 0 a b c
1 1 0 a b c
a a a 0 c b
b b b c 0 a
c c c b a 0

Let A = ( A T , A I , A F ) be a NS in X that is given in Table 8.

Table 8. Tabulation representation of A = ( A T , A I , A F ).

X AT ( x) A I ( x) A F ( x)
0 0.33 0.38 0.77
1 0.44 0.48 0.66
a 0.55 0.68 0.44
b 0.66 0.58 0.44
c 0.66 0.68 0.55

It is routine to check that (α, β, γ) ∈ (0.33, 1] × (0.38, 1] × [0, 0.77) is a neutrosophic permeable I-value
for A = ( A T , A I , A F ).
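The check behind Example 4(2) is again exhaustive: for every x clearing a cut, the conclusion of Equation (23) must hold for every y. A sketch with Tables 7 and 8 hard-coded (elements a, b, c encoded as 2, 3, 4; names ours):

```python
# Table 7 with 0, 1, a, b, c encoded as 0..4; Table 8 likewise
rows = [
    [0, 0, 2, 3, 4],
    [1, 0, 2, 3, 4],
    [2, 2, 0, 4, 3],
    [3, 3, 4, 0, 2],
    [4, 4, 3, 2, 0],
]
star = {(x, y): rows[x][y] for x in range(5) for y in range(5)}
A_T = {0: 0.33, 1: 0.44, 2: 0.55, 3: 0.66, 4: 0.66}
A_I = {0: 0.38, 1: 0.48, 2: 0.68, 3: 0.58, 4: 0.68}
A_F = {0: 0.77, 1: 0.66, 2: 0.44, 3: 0.44, 4: 0.55}

def permeable_I(alpha, beta, gamma):
    """Eq. (23), checked for every pair (x, y)."""
    for x in range(5):
        for y in range(5):
            z = star[(x, y)]
            if A_T[x] >= alpha and max(A_T[z], A_T[y]) < alpha:
                return False
            if A_I[x] >= beta and max(A_I[z], A_I[y]) < beta:
                return False
            if A_F[x] <= gamma and min(A_F[z], A_F[y]) > gamma:
                return False
    return True
```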

Lemma 3. If a NS A = ( A T , A I , A F ) in a BCK/BCI-algebra X satisfies the condition of Equation (14), then

(∀ x ∈ X ) ( A T (0) ≤ A T ( x ), A I (0) ≤ A I ( x ), A F (0) ≥ A F ( x )) . (24)

Proof. Straightforward.

Theorem 11. If a NS A = ( A T , A I , A F ) in a BCK-algebra X satisfies the condition of Equation (14),


then every neutrosophic permeable I-value for A = ( A T , A I , A F ) is a neutrosophic permeable S-value for
A = ( A T , A I , A F ).


Proof. Let (α, β, γ) be a neutrosophic permeable I-value for A = ( A T , A I , A F ). Let x, y, a, b, u, v ∈ X


be such that x ∗ y ∈ UT∈ ( A; α), a ∗ b ∈ U I∈ ( A; β), and u ∗ v ∈ UF∈ ( A; γ). It follows from Equations (23),
(3), (III), and (V) and Lemma 3 that

α ≤ A T (( x ∗ y) ∗ x ) ∨ A T ( x ) = A T (( x ∗ x ) ∗ y) ∨ A T ( x )
= A T (0 ∗ y ) ∨ A T ( x ) = A T (0) ∨ A T ( x ) = A T ( x ),
β ≤ A I (( a ∗ b) ∗ a) ∨ A I ( a) = A I (( a ∗ a) ∗ b) ∨ A I ( a)
= A I (0 ∗ b ) ∨ A I ( a ) = A I (0) ∨ A I ( a ) = A I ( a ),

and
γ ≥ A F ((u ∗ v) ∗ u) ∧ A F (u) = A F ((u ∗ u) ∗ v) ∧ A F (u)
= A F (0 ∗ v ) ∧ A F ( u ) = A F (0) ∧ A F ( u ) = A F ( u ).

Hence A T ( x ) ∨ A T (y) ≥ AT ( x) ≥ α, A I ( a) ∨ A I (b) ≥ A I ( a) ≥ β,


and A F (u) ∧ A F (v) ≤ A F (u) ≤ γ. Therefore (α, β, γ) is a neutrosophic permeable S-value for
A = ( A T , A I , A F ).

Given a NS A = ( A T , A I , A F ) in a BCK/BCI-algebra X, an upper neutrosophic ∈Φ -subset of X may not be I-energetic, where Φ ∈ { T, I, F }, as seen in the following example.

Example 5. Consider a BCK-algebra X = {0, 1, 2, 3, 4} with the binary operation ∗ that is given in Table 9
(see [14]).

Table 9. Cayley table for the binary operation “∗”.

* 0 1 2 3 4
0 0 0 0 0 0
1 1 0 0 0 0
2 2 1 0 1 0
3 3 1 1 0 0
4 4 2 1 2 0

Let A = ( A T , A I , A F ) be a NS in X that is given in Table 10.

Table 10. Tabulation representation of A = ( A T , A I , A F ).

X AT ( x) A I ( x) A F ( x)
0 0.75 0.73 0.34
1 0.53 0.45 0.58
2 0.67 0.86 0.34
3 0.53 0.56 0.58
4 0.46 0.56 0.66

Then UT∈ ( A; 0.6) = {0, 2}, U I∈ ( A; 0.7) = {0, 2}, and UF∈ ( A; 0.4) = {0, 2}. Because 2 ∈ {0, 2} and
{1, 2 ∗ 1} ∩ {0, 2} = ∅, we know that {0, 2} is not an I-energetic subset of X.
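The failure in Example 5 reduces to the single witness pair (x, y) = (1, 2); the sketch below (names ours) hard-codes Table 9 and the T-component of Table 10 and checks Equation (8) directly.

```python
# Table 9: rows[x][y] = x * y; A_T from Table 10
rows = [
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [2, 1, 0, 1, 0],
    [3, 1, 1, 0, 0],
    [4, 2, 1, 2, 0],
]
star = {(x, y): rows[x][y] for x in range(5) for y in range(5)}
A_T = {0: 0.75, 1: 0.53, 2: 0.67, 3: 0.53, 4: 0.46}

# Upper level set U_T(A; 0.6) and the I-energetic test of Eq. (8)
U = {x for x, v in A_T.items() if v >= 0.6}
i_energetic = all(y not in U or ({x, star[(y, x)]} & U)
                  for x in range(5) for y in range(5))
# Witness: 2 ∈ U but {1, 2 * 1} ∩ U = ∅
witness = (2 in U) and not ({1, star[(2, 1)]} & U)
```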

We now provide conditions for the upper neutrosophic ∈Φ -subsets to be I-energetic where
Φ ∈ { T, I, F }.

Theorem 12. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If (α, β, γ) is a neutrosophic permeable I-value for A = ( A T , A I ,
A F ), then the upper neutrosophic ∈Φ -subsets of X are I-energetic subsets of X where Φ ∈ { T, I, F }.


Proof. Let x, a, u ∈ X and (α, β, γ) ∈ Λ T × Λ I × Λ F , where Λ T , Λ I , and Λ F are subsets of [0, 1] such
that x ∈ UT∈ ( A; α), a ∈ U I∈ ( A; β), and u ∈ UF∈ ( A; γ). Because (α, β, γ) is a neutrosophic permeable
I-value for A = ( A T , A I , A F ), it follows from Equation (23) that

A T ( x ∗ y) ∨ A T (y) ≥ α, A I ( a ∗ b) ∨ A I (b) ≥ β, and A F (u ∗ v) ∧ A F (v) ≤ γ

for all y, b, v ∈ X. Hence

A T ( x ∗ y) ≥ α or A T (y) ≥ α, that is, x ∗ y ∈ UT∈ ( A; α) or y ∈ UT∈ ( A; α);


A I ( a ∗ b) ≥ β or A I (b) ≥ β, that is, a ∗ b ∈ U I∈ ( A; β) or b ∈ U I∈ ( A; β);

and
A F (u ∗ v) ≤ γ or A F (v) ≤ γ, that is, u ∗ v ∈ UF∈ ( A; γ) or v ∈ UF∈ ( A; γ).

Hence {y, x ∗ y} ∩ UT∈ ( A; α), {b, a ∗ b} ∩ U I∈ ( A; β), and {v, u ∗ v} ∩ UF∈ ( A; γ) are nonempty,
and therefore the upper neutrosophic ∈Φ -subsets of X are I-energetic subsets of X where
Φ ∈ { T, I, F }.

Theorem 13. Let A = ( AT , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ ΛT × Λ I × Λ F ,


where ΛT , Λ I , and Λ F are subsets of [0, 1]. If A = ( AT , A I , A F ) satisfies the following condition:
(∀ x, y ∈ X ) ( A T ( x ) ≤ A T ( x ∗ y) ∨ A T (y),
               A I ( x ) ≤ A I ( x ∗ y) ∨ A I (y),
               A F ( x ) ≥ A F ( x ∗ y) ∧ A F (y) ),    (25)

then (α, β, γ) is a neutrosophic permeable I-value for A = ( A T , A I , A F ).

Proof. Let x, a, u ∈ X and (α, β, γ) ∈ Λ T × Λ I × Λ F , where Λ T , Λ I , and Λ F are subsets of [0, 1] such
that x ∈ UT∈ ( A; α), a ∈ U I∈ ( A; β), and u ∈ UF∈ ( A; γ). Using Equation (25), we obtain

α ≤ A T ( x ) ≤ A T ( x ∗ y ) ∨ A T ( y ),
β ≤ A I ( a ) ≤ A I ( a ∗ b ) ∨ A I ( b ),
γ ≥ A F ( u ) ≥ A F ( u ∗ v ) ∧ A F ( v ),

for all y, b, v ∈ X. Therefore (α, β, γ) is a neutrosophic permeable I-value for A = ( A T , A I , A F ).

Combining Theorems 12 and 13, we have the following corollary.

Corollary 3. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If A = ( A T , A I , A F ) satisfies the condition of Equation (25),
then the upper neutrosophic ∈Φ -subsets of X are I-energetic subsets of X where Φ ∈ { T, I, F }.

Definition 6. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. Then (α, β, γ) is called a neutrosophic anti-permeable I-value for
A = ( A T , A I , A F ) if the following assertion is valid:
(∀ x, y ∈ X ) ( x ∈ L∈T ( A; α) ⇒ A T ( x ∗ y) ∧ A T (y) ≤ α,
               x ∈ L∈I ( A; β) ⇒ A I ( x ∗ y) ∧ A I (y) ≤ β,
               x ∈ L∈F ( A; γ) ⇒ A F ( x ∗ y) ∨ A F (y) ≥ γ ).    (26)


Theorem 14. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If A = ( A T , A I , A F ) satisfies the condition of Equation (19),
then (α, β, γ) is a neutrosophic anti-permeable I-value for A = ( A T , A I , A F ).

Proof. Let x, a, u ∈ X be such that x ∈ L∈T ( A; α), a ∈ L∈I ( A; β), and u ∈ L∈F ( A; γ). Then

A T ( x ∗ y) ∧ A T (y) ≤ A T ( x ) ≤ α,
A I ( a ∗ b) ∧ A I (b) ≤ A I ( a) ≤ β,
A F (u ∗ v) ∨ A F (v) ≥ A F (u) ≥ γ,

for all y, b, v ∈ X by Equation (20). Hence (α, β, γ) is a neutrosophic anti-permeable I-value for
A = (AT , AI , AF ).

Theorem 15. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If (α, β, γ) is a neutrosophic anti-permeable I-value for A = ( A T ,
A I , A F ), then the lower neutrosophic ∈Φ -subsets of X are I-energetic where Φ ∈ { T, I, F }.

Proof. Let x ∈ L∈T ( A; α), a ∈ L∈I ( A; β), and u ∈ L∈F ( A; γ). Then A T ( x ∗ y) ∧ A T (y) ≤ α, A I ( a ∗ b) ∧
A I (b) ≤ β, and A F (u ∗ v) ∨ A F (v) ≥ γ for all y, b, v ∈ X by Equation (26). It follows that

A T ( x ∗ y) ≤ α or A T (y) ≤ α, that is, x ∗ y ∈ L∈T ( A; α) or y ∈ L∈T ( A; α);

A I ( a ∗ b) ≤ β or A I (b) ≤ β, that is, a ∗ b ∈ L∈I ( A; β) or b ∈ L∈I ( A; β);

and
A F (u ∗ v) ≥ γ or A F (v) ≥ γ, that is, u ∗ v ∈ L∈F ( A; γ) or v ∈ L∈F ( A; γ).

Hence {y, x ∗ y} ∩ L∈T ( A; α), {b, a ∗ b} ∩ L∈I ( A; β), and {v, u ∗ v} ∩ L∈F ( A; γ) are nonempty,
and therefore the lower neutrosophic ∈Φ -subsets of X are I-energetic where Φ ∈ { T, I, F }.

Combining Theorems 14 and 15, we obtain the following corollary.

Corollary 4. Let A = ( A T , A I , A F ) be a NS in a BCK/BCI-algebra X and (α, β, γ) ∈ Λ T × Λ I × Λ F ,


where Λ T , Λ I , and Λ F are subsets of [0, 1]. If A = ( A T , A I , A F ) satisfies the condition of Equation (19),
then the lower neutrosophic ∈Φ -subsets of X are I-energetic where Φ ∈ { T, I, F }.

Theorem 16. If A = ( A T , A I , A F ) is an (∈, ∈)-neutrosophic subalgebra of a BCK-algebra X, then every


neutrosophic anti-permeable I-value for A = ( A T , A I , A F ) is a neutrosophic anti-permeable S-value for
A = ( A T , A I , A F ).

Proof. Let (α, β, γ) be a neutrosophic anti-permeable I-value for A = ( A T , A I , A F ).


Let x, y, a, b, u, v ∈ X be such that x ∗ y ∈ L∈T ( A; α), a ∗ b ∈ L∈I ( A; β), and u ∗ v ∈ L∈F ( A; γ). It follows
from Equations (26), (3), (III), and (V) and Proposition 1 that

α ≥ A T (( x ∗ y) ∗ x ) ∧ A T ( x ) = A T (( x ∗ x ) ∗ y) ∧ A T ( x )
= A T (0 ∗ y ) ∧ A T ( x ) = A T (0) ∧ A T ( x ) = A T ( x ),
β ≥ A I (( a ∗ b) ∗ a) ∧ A I ( a) = A I (( a ∗ a) ∗ b) ∧ A I ( a)
= A I (0 ∗ b ) ∧ A I ( a ) = A I (0) ∧ A I ( a ) = A I ( a ),

and
γ ≤ A F ((u ∗ v) ∗ u) ∨ A F (u) = A F ((u ∗ u) ∗ v) ∨ A F (u)
= A F (0 ∗ v ) ∨ A F ( u ) = A F (0) ∨ A F ( u ) = A F ( u ).

Mathematics 2018, 6, 74

Hence $A_T(x) \wedge A_T(y) \le A_T(x) \le \alpha$, $A_I(a) \wedge A_I(b) \le A_I(a) \le \beta$, and $A_F(u) \vee A_F(v) \ge A_F(u) \ge \gamma$. Therefore, $(\alpha, \beta, \gamma)$ is a neutrosophic anti-permeable S-value for $A = (A_T, A_I, A_F)$.

4. Conclusions
Using the notions of subalgebras and ideals in BCK/BCI-algebras, Jun et al. [8] introduced the
notions of energetic subsets, right vanished subsets, right stable subsets, and (anti-)permeable values
in BCK/BCI-algebras, as well as investigated relations between these sets. As a more general platform
that extends the concepts of classic and fuzzy sets, intuitionistic fuzzy sets, and interval-valued
(intuitionistic) fuzzy sets, the notion of NS theory has been developed by Smarandache (see [1,2]) and
has been applied to various fields: pattern recognition, medical diagnosis, decision-making problems,
and so on (see [3–6]). In this article, we have introduced the notions of neutrosophic permeable S-values,
neutrosophic permeable I-values, (∈, ∈)-neutrosophic ideals, neutrosophic anti-permeable S-values,
and neutrosophic anti-permeable I-values, which are motivated by the idea of subalgebras (S-values)
and ideals (I-values), and have investigated their properties. We have considered characterizations
of (∈, ∈)-neutrosophic ideals and have discussed conditions for the lower (upper) neutrosophic
∈Φ -subsets to be S- and I-energetic. We have provided conditions for a triple (α, β, γ) of numbers to
be a neutrosophic (anti-)permeable S- or I-value, and have considered conditions for the upper (lower)
neutrosophic ∈Φ -subsets to be right stable (right vanished) subsets. We have established relations
between neutrosophic (anti-)permeable S- and I-values.

Author Contributions: Y.B.J. and S.-Z.S. initiated the main idea of this work and wrote the paper. F.S. and H.B.
found the examples and checked the contents. All authors conceived and designed the new
definitions and results and read and approved the final manuscript for submission.

Funding: This research received no external funding.

Acknowledgments: The authors wish to thank the anonymous reviewers for their valuable suggestions.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Smarandache, F. A Unifying Field in Logics: Neutrosophic Logic. Neutrosophy, Neutrosophic Set,
Neutrosophic Probability; American Research Press: Rehoboth, NM, USA, 1999.
2. Smarandache, F. Neutrosophic set-a generalization of the intuitionistic fuzzy set. Int. J. Pure Appl. Math.
2005, 24, 287–297.
3. Garg, H.; Nancy. Some new biparametric distance measures on single-valued neutrosophic sets with
applications to pattern recognition and medical diagnosis. Information 2017, 8, 126.
4. Garg, H.; Nancy. Non-linear programming method for multi-criteria decision making problems under
interval neutrosophic set environment. Appl. Intell. 2017, doi:10.1007/s10489-017-1070-5.
5. Garg, H.; Nancy. Linguistic single-valued neutrosophic prioritized aggregation operators and their
applications to multiple-attribute group decision-making. J. Ambient Intell. Humaniz. Comput. 2018,
doi:10.1007/s12652-018-0723-5.
6. Nancy; Garg, H. Novel single-valued neutrosophic aggregated operators under Frank norm operation and
its application to decision-making process. Int. J. Uncertain. Quantif. 2016, 6, 361–375.
7. Jun, Y.B. Neutrosophic subalgebras of several types in BCK/BCI-algebras. Ann. Fuzzy Math. Inform. 2017,
14, 75–86.
8. Jun, Y.B.; Ahn, S.S.; Roh, E.H. Energetic subsets and permeable values with applications in
BCK/BCI-algebras. Appl. Math. Sci. 2013, 7, 4425–4438.
9. Jun, Y.B.; Smarandache, F.; Bordbar, H. Neutrosophic N -structures applied to BCK/BCI-algebras.
Information 2017, 8, 128.
10. Jun, Y.B.; Smarandache, F.; Song, S.Z.; Khan, M. Neutrosophic positive implicative N -ideals in BCK-algebras.
Axioms 2018, 7, 3.


11. Öztürk, M.A.; Jun, Y.B. Neutrosophic ideals in BCK/BCI-algebras based on neutrosophic points. J. Int. Math.
Virtual Inst. 2018, 8, 1–17.
12. Song, S.Z.; Smarandache, F.; Jun, Y.B. Neutrosophic commutative N -ideals in BCK-algebras. Information
2017, 8, 130.
13. Huang, Y.S. BCI-Algebra; Science Press: Beijing, China, 2006.
14. Meng, J.; Jun, Y.B. BCK-Algebras; Kyungmoon Sa Co.: Seoul, Korea, 1994.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Article
A Novel (R, S)-Norm Entropy Measure of
Intuitionistic Fuzzy Sets and Its Applications in
Multi-Attribute Decision-Making
Harish Garg * and Jaspreet Kaur
School of Mathematics, Thapar Institute of Engineering & Technology, Deemed University,
Patiala 147004, Punjab, India; [email protected]
* Correspondence: [email protected]; Tel.: +91-86990-31147

Received: 16 May 2018; Accepted: 28 May 2018; Published: 30 May 2018

Abstract: The objective of this manuscript is to present a novel information measure for measuring
the degree of fuzziness in intuitionistic fuzzy sets (IFSs). To achieve it, we define an ( R, S)-norm-based
information measure called the entropy to measure the degree of fuzziness of the set. Then, we prove
that the proposed entropy measure is a valid measure and satisfies certain properties. An illustrative
example related to a linguistic variable is given to demonstrate it. Then, we utilize it to propose
two decision-making approaches to solve the multi-attribute decision-making (MADM) problem in
the IFS environment by considering the attribute weights as either partially known or completely
unknown. Finally, a practical example is provided to illustrate the decision-making process.
The results corresponding to different pairs of ( R, S) give different choices to the decision-maker to
assess their results.

Keywords: entropy measure; (R, S)-norm; multi attribute decision-making; information measures;
attribute weight; intuitionistic fuzzy sets

1. Introduction
Multi-attribute decision-making (MADM) problems are an important part of decision theory in
which we choose the best one from the set of finite alternatives based on the collective information.
Traditionally, it has been assumed that the information regarding accessing the alternatives is taken
in the form of real numbers. However, uncertainty and fuzziness are big issues in real-world
problems nowadays and can be found everywhere as in our discussion or the way we process
information. To deal with such a situation, the theory of fuzzy sets (FSs) [1] or extended fuzzy sets
such as an intuitionistic fuzzy set (IFS) [2] or interval-valued IFS (IVIFS) [3] are the most successful
ones, which characterize the attribute values in terms of membership degrees. During the last few
decades, researchers has been paying more attention to these theories and successfully applied them
to various situations in the decision-making process. The two important aspects of solving the MADM
problem are, first, to design an appropriate function that aggregates the different preferences of
the decision-makers into collective ones and, second, to design appropriate measures to rank the
alternatives. For the former part, an aggregation operator is an important part of the decision-making,
which usually takes the form of a mathematical function to aggregate all the individual input
data into a single one. Over the last decade, numerable attempts have been made by different
researchers in processing the information values using different aggregation operators under IFS and
IVIFS environments. For instance, Xu and Yager [4], Xu [5] presented some weighted averaging and
geometric aggregation operators to aggregate the different intuitionistic fuzzy numbers (IFNs). Garg [6]
and Garg [7] presented some interactive improved aggregation operators for IFNs using Einstein
norm operations. Wang and Wang [8] characterized the preference of the decision-makers in terms of

Mathematics 2018, 6, 92; doi:10.3390/math6060092 www.mdpi.com/journal/mathematics



interval-numbers, and then, an MADM was presented corresponding to it with completely unknown
weight vectors. Wei [9] presented some induced geometric aggregation operators with intuitionistic
fuzzy information. Arora and Garg [10] and Arora and Garg [11] presented some aggregation operators
by considering the different parameterization factors in the analysis in the intuitionistic fuzzy soft set
environment. Zhou and Xu [12] presented some extreme weighted averaging aggregation operators for
solving decision-making problems in terms of the optimism and pessimism points of view. Garg [13]
presented some improved geometric aggregation operators for IVIFS. A complete overview about the
aggregation operators in the IVIFSs was summarized by Xu and Guo in [14]. Jamkhaneh and Garg [15]
presented some new operations for the generalized IFSs and applied them to solve decision-making
problems. Garg and Singh [16] presented a new triangular interval Type-2 IFS and its corresponding
aggregation operators.
With regard to the information measure, the entropy measure is basically known as the measure
for information originating from the fundamental paper “The Mathematical theory of communication”
in 1948 by C. E. Shannon [17]. Information theory is one of the trusted areas for measuring the degree
of uncertainty in the data. However, classical information measures deal with information that is
precise in nature. In order to overcome this, Deluca and Termini [18] proposed a set of axioms for
fuzzy entropy. Later on, Szmidt and Kacprzyk [19] extended the axioms of Deluca and Termini [18]
to the IFS environment. Vlachos and Sergiadis [20] extended their measure to the IFS environment.
Burillo and Bustince [21] introduced the entropy of IFSs as a tool to measure the degree of intuitionism
associated with an IFS. Garg et al. [22] presented a generalized intuitionistic fuzzy entropy measure
of order α and degree β to solve decision-making problems. Wei et al. [23] presented an entropy
measure based on the trigonometric functions. Garg et al. [24] presented an entropy-based method for
solving decision-making problems. Zhang and Jiang [25] presented an intuitionistic fuzzy entropy by
generalizing the measure of Deluca and Termini [18]. Verma and Sharma [26] presented an exponential
order measure between IFSs.
In contrast to the entropy measures, the distance or similarity measures are also used by
researchers to measure the similarity between two IFSs. In that direction, Taneja [27] presented a theory
on the generalized information measures in the fuzzy environment. Boekee and Van der Lubbe [28]
presented the R-norm information measure. Hung and Yang [29] presented the similarity measures
between the two different IFSs based on the Hausdorff distance. Garg [30], Garg and Arora [31]
presented a series of distance and similarity measures in the different sets of the environment to
solve decision-making problems. Joshi and Kumar [32] presented an (R, S)-norm fuzzy information
measures to solve decision-making problems. Garg and Kumar [33,34] presented some similarity and
distance measures of IFSs by using the set pair analysis theory. Meanwhile, decision-making methods
based on some measures (such as distance, similarity degree, correlation coefficient and entropy) were
proposed to deal with fuzzy IF and interval-valued IF MADM problems [35–38].
In [39–43], emphasis was given by the researchers to the attribute weights during ranking of the
alternatives. It is quite obvious that the final ranking order of the alternatives highly depends on the
attribute weights, because the variation of weight values may result in a different final ranking order of
alternatives [39,44–47]. Now, based on the characteristics of the attribute weights, the decision-making
problem can be classified into three types: (a) the decision-making situation where the attribute weights
are completely known; (b) the decision-making situation where the attribute weights are completely
unknown; (c) the decision-making situation where the attribute weights are partially known. Thus,
based on these types, the attribute weights in MADM can be classified as subjective and objective
attribute weights based on the information acquisition approach. If the decision-maker gives weights
to the attributes, then such information is called subjective. The classical approaches to determine
the subjective attribute weights are the analytic hierarchy process (AHP) method [48] and the Delphi
method [49]. On the other hand, the objective attribute weights are determined by the decision-making
matrix, and one of the most important approaches is the Shannon entropy method [17], which
expresses the relative intensities of the attributes’ importance to signify the average intrinsic information


transmitted to the decision-maker. In the literature, several authors [39,44,50–52] have addressed
the MADM problem with subjective weight information. However, some researchers formulated
a nonlinear programming model to determine the attribute weights. For instance, Chen and Li [44]
presented an approach to assess the attribute weights by utilizing IF entropy in the IFS environment.
Garg [53] presented a generalized intuitionistic fuzzy entropy measure to determine the completely
unknown attribute weight to solve the decision-making problems. Although some researchers put some
efforts into determining the unknown attribute weights [45,46,54,55] under different environments,
still it remains an open problem.
Therefore, in an attempt to address such problems and motivated by the characteristics of
the IFSs to describe the uncertainties in the data, this paper addresses a new entropy measure to
quantify the degree of fuzziness of a set in the IFS environment. The aim of this entropy is to
determine the attribute weights under the characteristics of the attribute weights that they are either
partially known or completely unknown. For this, we propose a novel entropy measure named
the (R, S)-norm-based information measure, which makes the decision more flexible and reliable
corresponding to different values of the parameters R and S. Some of the desirable properties of the
proposed measures are investigated, and some of their correlations are derived. From the proposed
entropy measures, some of the existing measures are considered as a special case. Furthermore,
we propose two approaches for solving the MADM approach based on the proposed entropy measures
by considering the characteristics of the attribute weights being either partially known or completely
unknown. Two illustrative examples are considered to demonstrate the approach and compare the
results with some of the existing approaches’ results.
The rest of this paper is organized as follows. In Section 2, we present some basic concepts of IFSs
and the existing entropy measures. In Section 3, we propose a new (R, S)-norm-based information
measure in the IFS environment. Various desirable relations among the approaches are also investigated
in detail. Section 4 describes two approaches for solving the MADM problem with the condition that
attribute weights are either partially known or completely unknown. The developed approaches have
been illustrated with a numerical example. Finally, a concrete conclusion and discussion are presented
in Section 5.

2. Preliminaries
Some basic concepts related to IFSs and the aggregation operators are highlighted, over the
universal set X, in this section.

Definition 1. [2] An IFS $A$ defined in $X$ is an ordered pair given by:

$$A = \{\langle x, \zeta_A(x), \vartheta_A(x)\rangle \mid x \in X\} \quad (1)$$

where $\zeta_A, \vartheta_A : X \longrightarrow [0,1]$ represent, respectively, the membership and non-membership degrees of the element $x$ such that $\zeta_A, \vartheta_A \in [0,1]$ and $\zeta_A + \vartheta_A \le 1$ for all $x$. For convenience, this pair is denoted by $A = \langle \zeta_A, \vartheta_A \rangle$ and called an intuitionistic fuzzy number (IFN) [4,5].

Definition 2. [4,5] Let the family of all intuitionistic fuzzy sets over the universal set $X$ be denoted by $FS(X)$. For $A, B \in FS(X)$, some operations are defined as follows:

1. $A \subseteq B$ if $\zeta_A(x) \le \zeta_B(x)$ and $\vartheta_A(x) \ge \vartheta_B(x)$, for all $x \in X$;
2. $A \supseteq B$ if $\zeta_A(x) \ge \zeta_B(x)$ and $\vartheta_A(x) \le \vartheta_B(x)$, for all $x \in X$;
3. $A = B$ iff $\zeta_A(x) = \zeta_B(x)$ and $\vartheta_A(x) = \vartheta_B(x)$, for all $x \in X$;
4. $A \cup B = \{\langle x, \max(\zeta_A(x), \zeta_B(x)), \min(\vartheta_A(x), \vartheta_B(x))\rangle : x \in X\}$;
5. $A \cap B = \{\langle x, \min(\zeta_A(x), \zeta_B(x)), \max(\vartheta_A(x), \vartheta_B(x))\rangle : x \in X\}$;
6. $A^c = \{\langle x, \vartheta_A(x), \zeta_A(x)\rangle : x \in X\}$.
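These set operations translate directly into code. The following is a minimal Python sketch (the dictionary representation and all function names are our own illustrative choices), with each IFS stored as a map from elements to (membership, non-membership) pairs:

```python
def ifs_union(A, B):
    # item 4: pointwise max of memberships, min of non-memberships
    return {x: (max(A[x][0], B[x][0]), min(A[x][1], B[x][1])) for x in A}

def ifs_intersection(A, B):
    # item 5: pointwise min of memberships, max of non-memberships
    return {x: (min(A[x][0], B[x][0]), max(A[x][1], B[x][1])) for x in A}

def ifs_complement(A):
    # item 6: swap membership and non-membership
    return {x: (nu, mu) for x, (mu, nu) in A.items()}

def ifs_subset(A, B):
    # item 1: A is contained in B
    return all(A[x][0] <= B[x][0] and A[x][1] >= B[x][1] for x in A)

A = {'x1': (0.5, 0.3), 'x2': (0.2, 0.6)}
B = {'x1': (0.6, 0.2), 'x2': (0.4, 0.5)}
print(ifs_subset(A, B))                        # True
print(ifs_complement(ifs_complement(A)) == A)  # True
```

Note that when $A \subseteq B$, the union returns $B$ and the intersection returns $A$; that containment pattern is used repeatedly in the entropy theorems later in the paper.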


Definition 3. [19] An entropy $E : IFS(X) \longrightarrow \mathbb{R}^+$ on $IFS(X)$ is a real-valued functional satisfying the following four axioms for $A, B \in IFS(X)$:

(P1) $E(A) = 0$ if and only if $A$ is a crisp set, i.e., either $\zeta_A(x) = 1, \vartheta_A(x) = 0$ or $\zeta_A(x) = 0, \vartheta_A(x) = 1$ for all $x \in X$.
(P2) $E(A) = 1$ if and only if $\zeta_A(x) = \vartheta_A(x)$ for all $x \in X$.
(P3) $E(A) = E(A^c)$.
(P4) If $A \subseteq B$, that is, if $\zeta_A(x) \le \zeta_B(x)$ and $\vartheta_A(x) \ge \vartheta_B(x)$ for any $x \in X$, then $E(A) \le E(B)$.

Vlachos and Sergiadis [20] proposed the measure of intuitionistic fuzzy entropy in the IFS environment as follows:

$$E(A) = -\frac{1}{n \ln 2} \sum_{i=1}^{n} \Big[ \zeta_A(x_i) \ln \zeta_A(x_i) + \vartheta_A(x_i) \ln \vartheta_A(x_i) - (1 - \pi_A(x_i)) \ln(1 - \pi_A(x_i)) - \pi_A(x_i) \ln 2 \Big] \quad (2)$$

Zhang and Jiang [25] presented a measure of intuitionistic fuzzy entropy based on a generalization of the measure of Deluca and Termini [18] as:

$$E(A) = -\frac{1}{n} \sum_{i=1}^{n} \left[ \frac{\zeta_A(x_i) + 1 - \vartheta_A(x_i)}{2} \log\!\left( \frac{\zeta_A(x_i) + 1 - \vartheta_A(x_i)}{2} \right) + \frac{\vartheta_A(x_i) + 1 - \zeta_A(x_i)}{2} \log\!\left( \frac{\vartheta_A(x_i) + 1 - \zeta_A(x_i)}{2} \right) \right] \quad (3)$$

Verma and Sharma [26] proposed an exponential order entropy in the IFS environment as:

$$E(A) = \frac{1}{n(\sqrt{e} - 1)} \sum_{i=1}^{n} \left[ \frac{\zeta_A(x_i) + 1 - \vartheta_A(x_i)}{2}\, e^{\,1 - \frac{\zeta_A(x_i) + 1 - \vartheta_A(x_i)}{2}} + \frac{\vartheta_A(x_i) + 1 - \zeta_A(x_i)}{2}\, e^{\,1 - \frac{\vartheta_A(x_i) + 1 - \zeta_A(x_i)}{2}} - 1 \right] \quad (4)$$
Garg et al. [22] generalized the entropy measure to $E_\alpha^\beta(A)$ of order $\alpha$ and degree $\beta$ as:

$$E_\alpha^\beta(A) = \frac{2 - \beta}{n(2 - \beta - \alpha)} \sum_{i=1}^{n} \log \left[ \left( \zeta_A^{\frac{\alpha}{2-\beta}}(x_i) + \vartheta_A^{\frac{\alpha}{2-\beta}}(x_i) \right) \left( \zeta_A(x_i) + \vartheta_A(x_i) \right)^{1 - \frac{\alpha}{2-\beta}} + 2^{1 - \frac{\alpha}{2-\beta}} \left( 1 - \zeta_A(x_i) - \vartheta_A(x_i) \right) \right] \quad (5)$$

where $\log$ is taken to base two, $\alpha > 0$, $\beta \in [0,1]$, and $\alpha + \beta \neq 2$.
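As a concrete illustration of these axiom-based measures, the entropy of Equation (2) can be evaluated numerically. The short Python sketch below (function and variable names are ours; $0 \ln 0$ is taken as $0$ by the usual convention) reproduces the boundary behaviour required by axioms (P1) and (P2):

```python
import math

def entropy_vs(ifs):
    """Vlachos-Sergiadis entropy, Equation (2); ifs is a list of
    (membership, non-membership) pairs and pi = 1 - zeta - theta."""
    xlnx = lambda t: t * math.log(t) if t > 0 else 0.0
    total = 0.0
    for zeta, theta in ifs:
        pi = 1.0 - zeta - theta
        total += xlnx(zeta) + xlnx(theta) - xlnx(1.0 - pi) - pi * math.log(2)
    return -total / (len(ifs) * math.log(2))

print(entropy_vs([(1.0, 0.0)]))   # crisp set: entropy 0 (axiom P1)
print(entropy_vs([(0.5, 0.5)]))   # zeta = theta: entropy 1 (axiom P2)
```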

3. Proposed (R, S)-Norm Intuitionistic Fuzzy Information Measure

In this section, we define a new (R, S)-norm information measure, denoted by $H_R^S$, in the IFS environment. For it, let $\Omega$ be the collection of all IFSs.

Definition 4. For a collection of IFSs $A = \{\langle x, \zeta_A(x), \vartheta_A(x)\rangle \mid x \in X\}$, an information measure $H_R^S : \Omega^n \to \mathbb{R}$, $n \ge 2$, is defined as follows:

$$H_R^S(A) =
\begin{cases}
\dfrac{R \times S}{n(R-S)} \displaystyle\sum_{i=1}^{n} \left[ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} \right]; & \text{either } R > 1,\ 0 < S < 1 \text{ or } 0 < R < 1,\ S > 1; \\[2ex]
\dfrac{R}{n(R-1)} \displaystyle\sum_{i=1}^{n} \left[ 1 - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} \right]; & \text{when } S = 1,\ 0 < R < 1; \\[2ex]
\dfrac{S}{n(1-S)} \displaystyle\sum_{i=1}^{n} \left[ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} - 1 \right]; & \text{when } R = 1,\ 0 < S < 1; \\[2ex]
-\dfrac{1}{n} \displaystyle\sum_{i=1}^{n} \left[ \zeta_A(x_i) \log \zeta_A(x_i) + \vartheta_A(x_i) \log \vartheta_A(x_i) + \pi_A(x_i) \log \pi_A(x_i) \right]; & R = 1 = S.
\end{cases} \quad (6)$$

Theorem 1. The intuitionistic fuzzy entropy measure $H_R^S(A)$ defined in Equation (6) for IFSs is a valid measure, i.e., it satisfies the following properties:

(P1) $H_R^S(A) = 0$ if and only if $A$ is a crisp set, i.e., $\zeta_A(x_i) = 1, \vartheta_A(x_i) = 0$ or $\zeta_A(x_i) = 0, \vartheta_A(x_i) = 1$ for all $x_i \in X$.
(P2) $H_R^S(A) = 1$ if and only if $\zeta_A(x_i) = \vartheta_A(x_i)$ for all $x_i \in X$.
(P3) $H_R^S(A) \le H_R^S(B)$ if $A$ is crisper than $B$, i.e., if $\zeta_A(x_i) \le \zeta_B(x_i)$ and $\vartheta_A(x_i) \le \vartheta_B(x_i)$ for $\max\{\zeta_B(x_i), \vartheta_B(x_i)\} \le \frac{1}{3}$, and $\zeta_A(x_i) \ge \zeta_B(x_i)$ and $\vartheta_A(x_i) \ge \vartheta_B(x_i)$ for $\min\{\zeta_B(x_i), \vartheta_B(x_i)\} \ge \frac{1}{3}$, for all $x_i \in X$.
(P4) $H_R^S(A) = H_R^S(A^c)$ for all $A \in IFS(X)$.

Proof. To prove that the measure defined by Equation (6) is a valid information measure, we have to prove that it satisfies the four properties given in the definition of the intuitionistic fuzzy information measure.

1. Sharpness: In order to prove (P1), we need to show that $H_R^S(A) = 0$ if and only if $A$ is a crisp set, i.e., either $\zeta_A(x) = 1, \vartheta_A(x) = 0$ or $\zeta_A(x) = 0, \vartheta_A(x) = 1$ for all $x \in X$.
Firstly, we assume that $H_R^S(A) = 0$ for $R, S > 0$ and $R \neq S$. Therefore, from Equation (6), we have:

$$\frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left[ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} \right] = 0$$

$$\Rightarrow \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} = 0 \quad \text{for all } i = 1, 2, \ldots, n,$$

i.e.,

$$\left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} = \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} \quad \text{for all } i = 1, 2, \ldots, n.$$

Since $R, S > 0$ and $R \neq S$, the above equation is satisfied only if $\zeta_A(x_i) = 0, \vartheta_A(x_i) = 1$ or $\zeta_A(x_i) = 1, \vartheta_A(x_i) = 0$ for all $i = 1, 2, \ldots, n$.
Conversely, we assume that the set $A = (\zeta_A, \vartheta_A)$ is a crisp set, i.e., either $\zeta_A(x_i) = 0$ or $1$. Now, for $R, S > 0$ and $R \neq S$, we can obtain that:

$$\left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} = 0$$

for all $i = 1, 2, \ldots, n$, which gives $H_R^S(A) = 0$.
Hence, $H_R^S(A) = 0$ iff $A$ is a crisp set.
2. Maximality: We will find the maxima of the function $H_R^S(A)$; for this purpose, we differentiate Equation (6) with respect to $\zeta_A(x_i)$ and $\vartheta_A(x_i)$. We get:

$$\frac{\partial H_R^S(A)}{\partial \zeta_A(x_i)} = \frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left\{ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1-S}{S}} \left( \zeta_A^{S-1}(x_i) - \pi_A^{S-1}(x_i) \right) - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1-R}{R}} \left( \zeta_A^{R-1}(x_i) - \pi_A^{R-1}(x_i) \right) \right\} \quad (7)$$

and:

$$\frac{\partial H_R^S(A)}{\partial \vartheta_A(x_i)} = \frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left\{ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1-S}{S}} \left( \vartheta_A^{S-1}(x_i) - \pi_A^{S-1}(x_i) \right) - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1-R}{R}} \left( \vartheta_A^{R-1}(x_i) - \pi_A^{R-1}(x_i) \right) \right\} \quad (8)$$


In order to check the convexity of the function, we calculate its second-order derivatives as follows:

$$\frac{\partial^2 H_R^S(A)}{\partial^2 \zeta_A(x_i)} = \frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left\{ (1-S) \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1-2S}{S}} \left( \zeta_A^{S-1}(x_i) - \pi_A^{S-1}(x_i) \right)^2 + (S-1) \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1-S}{S}} \left( \zeta_A^{S-2}(x_i) + \pi_A^{S-2}(x_i) \right) - (1-R) \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1-2R}{R}} \left( \zeta_A^{R-1}(x_i) - \pi_A^{R-1}(x_i) \right)^2 - (R-1) \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1-R}{R}} \left( \zeta_A^{R-2}(x_i) + \pi_A^{R-2}(x_i) \right) \right\},$$

$$\frac{\partial^2 H_R^S(A)}{\partial^2 \vartheta_A(x_i)} = \frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left\{ (1-S) \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1-2S}{S}} \left( \vartheta_A^{S-1}(x_i) - \pi_A^{S-1}(x_i) \right)^2 + (S-1) \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1-S}{S}} \left( \vartheta_A^{S-2}(x_i) + \pi_A^{S-2}(x_i) \right) - (1-R) \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1-2R}{R}} \left( \vartheta_A^{R-1}(x_i) - \pi_A^{R-1}(x_i) \right)^2 - (R-1) \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1-R}{R}} \left( \vartheta_A^{R-2}(x_i) + \pi_A^{R-2}(x_i) \right) \right\},$$

and

$$\frac{\partial^2 H_R^S(A)}{\partial \vartheta_A(x_i) \partial \zeta_A(x_i)} = \frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left\{ (1-S) \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1-2S}{S}} \left( \vartheta_A^{S-1}(x_i) - \pi_A^{S-1}(x_i) \right) \left( \zeta_A^{S-1}(x_i) - \pi_A^{S-1}(x_i) \right) - (1-R) \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1-2R}{R}} \left( \vartheta_A^{R-1}(x_i) - \pi_A^{R-1}(x_i) \right) \left( \zeta_A^{R-1}(x_i) - \pi_A^{R-1}(x_i) \right) \right\}.$$

To find the maximum/minimum point, we set $\frac{\partial H_R^S(A)}{\partial \zeta_A(x_i)} = 0$ and $\frac{\partial H_R^S(A)}{\partial \vartheta_A(x_i)} = 0$, which gives $\zeta_A(x_i) = \vartheta_A(x_i) = \pi_A(x_i) = \frac{1}{3}$ for all $i$; this is called the critical point of the function $H_R^S$.

(a) When $R < 1, S > 1$, then at the critical point $\zeta_A(x_i) = \vartheta_A(x_i) = \pi_A(x_i) = \frac{1}{3}$, we compute that:

$$\frac{\partial^2 H_R^S(A)}{\partial^2 \zeta_A(x_i)} < 0 \quad \text{and} \quad \frac{\partial^2 H_R^S(A)}{\partial^2 \zeta_A(x_i)} \cdot \frac{\partial^2 H_R^S(A)}{\partial^2 \vartheta_A(x_i)} - \left( \frac{\partial^2 H_R^S(A)}{\partial \vartheta_A(x_i) \partial \zeta_A(x_i)} \right)^2 > 0.$$

Therefore, the Hessian matrix of $H_R^S(A)$ is negative semi-definite, and hence $H_R^S(A)$ is a concave function. As the critical point of $H_R^S$ is $\zeta_A = \vartheta_A = \frac{1}{3}$ and by the concavity, we get that $H_R^S(A)$ has a relative maximum value at $\zeta_A = \vartheta_A = \frac{1}{3}$.

(b) When $R > 1, S < 1$, then at the critical point, we can again easily obtain that:

$$\frac{\partial^2 H_R^S(A)}{\partial^2 \zeta_A(x_i)} < 0 \quad \text{and} \quad \frac{\partial^2 H_R^S(A)}{\partial^2 \zeta_A(x_i)} \cdot \frac{\partial^2 H_R^S(A)}{\partial^2 \vartheta_A(x_i)} - \left( \frac{\partial^2 H_R^S(A)}{\partial \vartheta_A(x_i) \partial \zeta_A(x_i)} \right)^2 > 0.$$

This proves that $H_R^S(A)$ is a concave function with its global maximum at $\zeta_A(x_i) = \vartheta_A(x_i) = \frac{1}{3}$.

Thus, for all $R, S > 0$ with $R < 1, S > 1$ or $R > 1, S < 1$, the global maximum value of $H_R^S(A)$ is attained at the point $\zeta_A(x_i) = \vartheta_A(x_i) = \frac{1}{3}$, i.e., $H_R^S(A)$ is maximum if and only if $A$ is the most fuzzy set.
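The location of this maximum can be corroborated numerically. A brief Python sketch (our own naming; only the $R > 1$, $0 < S < 1$ branch of Equation (6) is used, and the hesitation degree is clamped at zero to guard against floating-point round-off) grid-searches a single-element IFS and confirms that no feasible $(\zeta, \vartheta)$ beats the critical point $\zeta = \vartheta = \pi = \frac{1}{3}$:

```python
def entropy_rs(ifs, R, S):
    # (R, S)-norm entropy of Equation (6), R != S branch (sketch)
    total = 0.0
    for z, t in ifs:
        p = max(0.0, 1.0 - z - t)   # clamp pi against round-off
        total += ((z**S + t**S + p**S) ** (1 / S)
                  - (z**R + t**R + p**R) ** (1 / R))
    return R * S / (len(ifs) * (R - S)) * total

R, S = 2.0, 0.5
peak = entropy_rs([(1 / 3, 1 / 3)], R, S)     # value at the critical point
step = 0.05
grid = [entropy_rs([(i * step, j * step)], R, S)
        for i in range(21) for j in range(21) if i * step + j * step <= 1.0]
print(peak >= max(grid))                      # True: the grid never exceeds the peak
```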


3. Resolution: In order to prove that our proposed entropy function is monotonically increasing and monotonically decreasing with respect to $\zeta_A(x_i)$ and $\vartheta_A(x_i)$, respectively, for convenience let $\zeta_A(x_i) = x$, $\vartheta_A(x_i) = y$, and $\pi_A(x_i) = 1 - x - y$; then it is sufficient to prove that for $R, S > 0$, $R \neq S$, the entropy function:

$$f(x, y) = \frac{R \times S}{n(R-S)} \left[ \left( x^S + y^S + (1 - x - y)^S \right)^{\frac{1}{S}} - \left( x^R + y^R + (1 - x - y)^R \right)^{\frac{1}{R}} \right] \quad (9)$$

where $x, y \in [0, 1]$, is an increasing function with respect to $x$ and a decreasing function with respect to $y$.
Taking the partial derivatives of $f$ with respect to $x$ and $y$, respectively, we get:

$$\frac{\partial f}{\partial x} = \frac{R \times S}{n(R-S)} \left[ \left( x^S + y^S + (1 - x - y)^S \right)^{\frac{1-S}{S}} \left( x^{S-1} - (1 - x - y)^{S-1} \right) - \left( x^R + y^R + (1 - x - y)^R \right)^{\frac{1-R}{R}} \left( x^{R-1} - (1 - x - y)^{R-1} \right) \right] \quad (10)$$

and:

$$\frac{\partial f}{\partial y} = \frac{R \times S}{n(R-S)} \left[ \left( x^S + y^S + (1 - x - y)^S \right)^{\frac{1-S}{S}} \left( y^{S-1} - (1 - x - y)^{S-1} \right) - \left( x^R + y^R + (1 - x - y)^R \right)^{\frac{1-R}{R}} \left( y^{R-1} - (1 - x - y)^{R-1} \right) \right] \quad (11)$$

For the extreme point of $f$, we set $\frac{\partial f}{\partial x} = 0$ and $\frac{\partial f}{\partial y} = 0$, and get $x = y = \frac{1}{3}$.
Furthermore, $\frac{\partial f}{\partial x} \ge 0$ when $x \le y$ with $R, S > 0$, $R \neq S$, i.e., $f(x, y)$ is increasing for $x \le y$, and $\frac{\partial f}{\partial x} \le 0$, i.e., $f$ is decreasing with respect to $x$, when $x \ge y$. On the other hand, $\frac{\partial f}{\partial y} \ge 0$ and $\frac{\partial f}{\partial y} \le 0$ when $x \ge y$ and $x \le y$, respectively.
Further, since $H_R^S(A)$ is a concave function on the IFS $A$, if $\max\{\zeta_B(x), \vartheta_B(x)\} \le \frac{1}{3}$, then $\zeta_A(x_i) \le \zeta_B(x_i)$ and $\vartheta_A(x_i) \le \vartheta_B(x_i)$ imply that:

$$\zeta_A(x_i) \le \zeta_B(x_i) \le \frac{1}{3}; \quad \vartheta_A(x_i) \le \vartheta_B(x_i) \le \frac{1}{3}; \quad \pi_A(x_i) \ge \pi_B(x_i) \ge \frac{1}{3}.$$

Thus, we observe that $(\zeta_B(x_i), \vartheta_B(x_i), \pi_B(x_i))$ is closer to $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$ than $(\zeta_A(x_i), \vartheta_A(x_i), \pi_A(x_i))$. Hence, $H_R^S(A) \le H_R^S(B)$.
Similarly, if $\min\{\zeta_A(x_i), \vartheta_A(x_i)\} \ge \frac{1}{3}$, then we get $H_R^S(A) \le H_R^S(B)$.
4. Symmetry: By the definition of $H_R^S(A)$, we can easily obtain that $H_R^S(A^c) = H_R^S(A)$.

Hence $H_R^S(A)$ satisfies all the properties of the intuitionistic fuzzy information measure and is therefore a valid measure of intuitionistic fuzzy entropy.

Consider two IFSs $A$ and $B$ defined over $X = \{x_1, x_2, \ldots, x_n\}$. Take the disjoint partition of $X$ as:

$$X_1 = \{x_i \in X \mid A \subseteq B\} = \{x_i \in X \mid \zeta_A(x) \le \zeta_B(x);\ \vartheta_A(x) \ge \vartheta_B(x)\}$$

and:

$$X_2 = \{x_i \in X \mid A \supseteq B\} = \{x_i \in X \mid \zeta_A(x) \ge \zeta_B(x);\ \vartheta_A(x) \le \vartheta_B(x)\}.$$

Next, we define the joint and conditional entropies between the IFSs $A$ and $B$ as follows:


1. Joint entropy:

$$\begin{aligned}
H_R^S(A \cup B) &= \frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left[ \left( \zeta_{A \cup B}^S(x_i) + \vartheta_{A \cup B}^S(x_i) + (1 - \zeta_{A \cup B}(x_i) - \vartheta_{A \cup B}(x_i))^S \right)^{\frac{1}{S}} - \left( \zeta_{A \cup B}^R(x_i) + \vartheta_{A \cup B}^R(x_i) + (1 - \zeta_{A \cup B}(x_i) - \vartheta_{A \cup B}(x_i))^R \right)^{\frac{1}{R}} \right] \\
&= \frac{R \times S}{n(R-S)} \sum_{x_i \in X_1} \left[ \left( \zeta_B^S(x_i) + \vartheta_B^S(x_i) + (1 - \zeta_B(x_i) - \vartheta_B(x_i))^S \right)^{\frac{1}{S}} - \left( \zeta_B^R(x_i) + \vartheta_B^R(x_i) + (1 - \zeta_B(x_i) - \vartheta_B(x_i))^R \right)^{\frac{1}{R}} \right] \\
&\quad + \frac{R \times S}{n(R-S)} \sum_{x_i \in X_2} \left[ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + (1 - \zeta_A(x_i) - \vartheta_A(x_i))^S \right)^{\frac{1}{S}} - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + (1 - \zeta_A(x_i) - \vartheta_A(x_i))^R \right)^{\frac{1}{R}} \right]
\end{aligned}$$

2. Conditional entropy:

$$H_R^S(A \mid B) = \frac{R \times S}{n(R-S)} \sum_{x_i \in X_2} \left[ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} - \left( \zeta_B^S(x_i) + \vartheta_B^S(x_i) + \pi_B^S(x_i) \right)^{\frac{1}{S}} + \left( \zeta_B^R(x_i) + \vartheta_B^R(x_i) + \pi_B^R(x_i) \right)^{\frac{1}{R}} \right]$$

and:

$$H_R^S(B \mid A) = \frac{R \times S}{n(R-S)} \sum_{x_i \in X_1} \left[ \left( \zeta_B^S(x_i) + \vartheta_B^S(x_i) + \pi_B^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_B^R(x_i) + \vartheta_B^R(x_i) + \pi_B^R(x_i) \right)^{\frac{1}{R}} - \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} + \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} \right]$$
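These definitions can be exercised numerically. The Python sketch below (all names and example data are our own) computes $H_R^S(B \mid A)$ over $X_1$ and checks the chain-rule identity $H_R^S(A \cup B) = H_R^S(A) + H_R^S(B \mid A)$, which is proved in Theorem 4 below, for a pair with $A \subseteq B$ on every element:

```python
def terms(ifs, R, S):
    # per-element contributions to the (R, S)-norm entropy (sketch)
    out = []
    for z, t in ifs:
        p = 1.0 - z - t
        out.append((z**S + t**S + p**S) ** (1 / S)
                   - (z**R + t**R + p**R) ** (1 / R))
    return out

def H(ifs, R, S):
    return R * S / (len(ifs) * (R - S)) * sum(terms(ifs, R, S))

def H_B_given_A(B, A, R, S):
    # summed over X1 = {i : A_i contained in B_i}
    tb, ta = terms(B, R, S), terms(A, R, S)
    x1 = [i for i in range(len(A))
          if A[i][0] <= B[i][0] and A[i][1] >= B[i][1]]
    return R * S / (len(A) * (R - S)) * sum(tb[i] - ta[i] for i in x1)

A = [(0.2, 0.6), (0.3, 0.5)]
B = [(0.5, 0.3), (0.4, 0.4)]      # A contained in B element-wise, so A union B = B
R, S = 2.0, 0.5
print(abs(H(B, R, S) - (H(A, R, S) + H_B_given_A(B, A, R, S))) < 1e-12)  # True
```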

Theorem 2. Let $A$ and $B$ be two IFSs defined on the universal set $X = \{x_1, x_2, \ldots, x_n\}$, where $A = \{\langle x_i, \zeta_A(x_i), \vartheta_A(x_i)\rangle \mid x_i \in X\}$ and $B = \{\langle x_i, \zeta_B(x_i), \vartheta_B(x_i)\rangle \mid x_i \in X\}$, such that for each $x_i \in X$ either $A \subseteq B$ or $A \supseteq B$; then:

$$H_R^S(A \cup B) + H_R^S(A \cap B) = H_R^S(A) + H_R^S(B).$$

Proof. Let $X_1$ and $X_2$ be the two disjoint sets of $X$, where

$$X_1 = \{x \in X : A \subseteq B\}, \quad X_2 = \{x \in X : A \supseteq B\},$$

i.e., for $x_i \in X_1$ we have $\zeta_A(x_i) \le \zeta_B(x_i)$, $\vartheta_A(x_i) \ge \vartheta_B(x_i)$, and for $x_i \in X_2$, $\zeta_A(x_i) \ge \zeta_B(x_i)$, $\vartheta_A(x_i) \le \vartheta_B(x_i)$. Therefore,

$$\begin{aligned}
H_R^S(A \cup B) + H_R^S(A \cap B) &= \frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left[ \left( \zeta_{A \cup B}^S(x_i) + \vartheta_{A \cup B}^S(x_i) + (1 - \zeta_{A \cup B}(x_i) - \vartheta_{A \cup B}(x_i))^S \right)^{\frac{1}{S}} - \left( \zeta_{A \cup B}^R(x_i) + \vartheta_{A \cup B}^R(x_i) + (1 - \zeta_{A \cup B}(x_i) - \vartheta_{A \cup B}(x_i))^R \right)^{\frac{1}{R}} \right] \\
&\quad + \frac{R \times S}{n(R-S)} \sum_{i=1}^{n} \left[ \left( \zeta_{A \cap B}^S(x_i) + \vartheta_{A \cap B}^S(x_i) + (1 - \zeta_{A \cap B}(x_i) - \vartheta_{A \cap B}(x_i))^S \right)^{\frac{1}{S}} - \left( \zeta_{A \cap B}^R(x_i) + \vartheta_{A \cap B}^R(x_i) + (1 - \zeta_{A \cap B}(x_i) - \vartheta_{A \cap B}(x_i))^R \right)^{\frac{1}{R}} \right] \\
&= \frac{R \times S}{n(R-S)} \sum_{x_i \in X_1} \left[ \left( \zeta_B^S(x_i) + \vartheta_B^S(x_i) + \pi_B^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_B^R(x_i) + \vartheta_B^R(x_i) + \pi_B^R(x_i) \right)^{\frac{1}{R}} \right] + \frac{R \times S}{n(R-S)} \sum_{x_i \in X_2} \left[ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} \right] \\
&\quad + \frac{R \times S}{n(R-S)} \sum_{x_i \in X_1} \left[ \left( \zeta_A^S(x_i) + \vartheta_A^S(x_i) + \pi_A^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_A^R(x_i) + \vartheta_A^R(x_i) + \pi_A^R(x_i) \right)^{\frac{1}{R}} \right] + \frac{R \times S}{n(R-S)} \sum_{x_i \in X_2} \left[ \left( \zeta_B^S(x_i) + \vartheta_B^S(x_i) + \pi_B^S(x_i) \right)^{\frac{1}{S}} - \left( \zeta_B^R(x_i) + \vartheta_B^R(x_i) + \pi_B^R(x_i) \right)^{\frac{1}{R}} \right] \\
&= H_R^S(A) + H_R^S(B).
\end{aligned}$$

Theorem 3. The maximum and minimum values of the entropy $H_R^S(A)$ are independent of the parameters $R$ and $S$.

Proof. As concluded from the above theorem, the entropy is maximum if and only if $A$ is the most intuitionistic fuzzy set and minimum when $A$ is a crisp set. Therefore, it is enough to show that the value of $H_R^S(A)$ in these conditions is independent of $R$ and $S$. When $A$ is the most intuitionistic fuzzy set, i.e., $\zeta_A(x_i) = \vartheta_A(x_i)$ for all $x_i \in X$, then $H_R^S(A) = 1$, and when $A$ is a crisp set, i.e., either $\zeta_A(x_i) = 0, \vartheta_A(x_i) = 1$ or $\zeta_A(x_i) = 1, \vartheta_A(x_i) = 0$ for all $x_i \in X$, then $H_R^S(A) = 0$. Hence, in both cases, $H_R^S(A)$ is independent of the parameters $R$ and $S$.

Remark 1. From the proposed measure, it is observed that some of the existing measures can be obtained from it by assigning particular values to $R$ and $S$. For instance:

1. When $\pi_A(x_i) = 0$ for all $x_i \in X$, the proposed measure reduces to the entropy measure of Joshi and Kumar [32].
2. When $R \neq S$ and $S > 0$, the proposed measure reduces to the measure of Taneja [27].
3. When $R = 1$ and $R \neq S$, the measure is equivalent to the R-norm entropy presented by Boekee and Van der Lubbe [28].
4. When $R = S = 1$, the proposed measure is the well-known Shannon's entropy.
5. When $S = 1$ and $R \neq S$, the proposed measure becomes the measure of Bajaj et al. [37].

Theorem 4. Let $A$ and $B$ be two IFSs defined over the set $X$ such that either $A \subseteq B$ or $B \subseteq A$; then the following statements hold:

1. $H_R^S(A \cup B) = H_R^S(A) + H_R^S(B \mid A)$;
2. $H_R^S(A \cup B) = H_R^S(B) + H_R^S(A \mid B)$;
3. $H_R^S(A \cup B) = H_R^S(A) + H_R^S(B \mid A) = H_R^S(B) + H_R^S(A \mid B)$.

Proof. For two IFSs A and B and by using the definitions of joint, conditional and the proposed
entropy measures, we get:

1. Consider:

HRS(A∪B) − HRS(A) − HRS(B|A)

$$
\begin{aligned}
={}& \frac{R\times S}{n(R-S)}\sum_{i=1}^{n}\Big[\big(\zeta_{A\cup B}^{S}(x_i)+\vartheta_{A\cup B}^{S}(x_i)+(1-\zeta_{A\cup B}(x_i)-\vartheta_{A\cup B}(x_i))^{S}\big)^{\frac{1}{S}}-\big(\zeta_{A\cup B}^{R}(x_i)+\vartheta_{A\cup B}^{R}(x_i)+(1-\zeta_{A\cup B}(x_i)-\vartheta_{A\cup B}(x_i))^{R}\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{i=1}^{n}\Big[\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{x_i\in X_1}\Big[\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}-\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}+\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
={}& \frac{R\times S}{n(R-S)}\sum_{x_i\in X_1}\Big[\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&+\frac{R\times S}{n(R-S)}\sum_{x_i\in X_2}\Big[\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{x_i\in X_1}\Big[\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{x_i\in X_2}\Big[\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{x_i\in X_1}\Big[\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}-\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}+\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
={}& 0
\end{aligned}
$$

2. Consider:

HRS(A∪B) − HRS(B) − HRS(A|B)

$$
\begin{aligned}
={}& \frac{R\times S}{n(R-S)}\sum_{i=1}^{n}\Big[\big(\zeta_{A\cup B}^{S}(x_i)+\vartheta_{A\cup B}^{S}(x_i)+(1-\zeta_{A\cup B}(x_i)-\vartheta_{A\cup B}(x_i))^{S}\big)^{\frac{1}{S}}-\big(\zeta_{A\cup B}^{R}(x_i)+\vartheta_{A\cup B}^{R}(x_i)+(1-\zeta_{A\cup B}(x_i)-\vartheta_{A\cup B}(x_i))^{R}\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{i=1}^{n}\Big[\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{x_i\in X_2}\Big[\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}-\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}+\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
={}& \frac{R\times S}{n(R-S)}\sum_{x_i\in X_1}\Big[\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&+\frac{R\times S}{n(R-S)}\sum_{x_i\in X_2}\Big[\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{x_i\in X_1}\Big[\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{x_i\in X_2}\Big[\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
&-\frac{R\times S}{n(R-S)}\sum_{x_i\in X_2}\Big[\big(\zeta_{A}^{S}(x_i)+\vartheta_{A}^{S}(x_i)+\pi_{A}^{S}(x_i)\big)^{\frac{1}{S}}-\big(\zeta_{A}^{R}(x_i)+\vartheta_{A}^{R}(x_i)+\pi_{A}^{R}(x_i)\big)^{\frac{1}{R}}-\big(\zeta_{B}^{S}(x_i)+\vartheta_{B}^{S}(x_i)+\pi_{B}^{S}(x_i)\big)^{\frac{1}{S}}+\big(\zeta_{B}^{R}(x_i)+\vartheta_{B}^{R}(x_i)+\pi_{B}^{R}(x_i)\big)^{\frac{1}{R}}\Big]\\
={}& 0
\end{aligned}
$$

3. This can be deduced from Parts (1) and (2).

Before elaborating on the comparison between the proposed entropy function and other entropy functions, we state a definition [56] for an IFS A = {⟨x, ζA(x), ϑA(x)⟩ | x ∈ X} defined on the universal set X, which is as follows:

$$
A^{n}=\{\langle x,[\zeta_{A}(x)]^{n},\,1-[1-\vartheta_{A}(x)]^{n}\rangle \mid x\in X\} \tag{12}
$$


Definition 5. The concentration of an IFS A of the universe X is denoted by CON(A) and is defined by:

CON(A) = {⟨x, ζCON(A)(x), ϑCON(A)(x)⟩ | x ∈ X}

where ζCON(A)(x) = [ζA(x)]^2 and ϑCON(A)(x) = 1 − [1 − ϑA(x)]^2, i.e., the operation of the concentration of an IFS is defined by CON(A) = A^2.

Definition 6. The dilation of an IFS A of the universe X is denoted by DIL(A) and is defined by:

DIL(A) = {⟨x, ζDIL(A)(x), ϑDIL(A)(x)⟩ | x ∈ X}

where ζDIL(A)(x) = [ζA(x)]^{1/2} and ϑDIL(A)(x) = 1 − [1 − ϑA(x)]^{1/2}, i.e., the operation of the dilation of an IFS is defined by DIL(A) = A^{1/2}.

Example 1. Consider a universe of discourse X = {x1, x2, x3, x4, x5}, and let an IFS A, "LARGE", of X be defined by:

LARGE = {(x1, 0.1, 0.8), (x2, 0.3, 0.5), (x3, 0.5, 0.4), (x4, 0.9, 0), (x5, 1, 0)}

Using the operation defined in Equation (12), we generate the IFSs A^{1/2}, A^2, A^3 and A^4, which may be interpreted as follows:

A^{1/2} may be treated as "More or less LARGE"
A^2 may be treated as "very LARGE"
A^3 may be treated as "quite very LARGE"
A^4 may be treated as "very very LARGE"

and their corresponding sets are computed as:

A^{1/2} = {(x1, 0.3162, 0.5528), (x2, 0.5477, 0.2929), (x3, 0.7071, 0.2254), (x4, 0.9487, 0), (x5, 1, 0)}
A^2 = {(x1, 0.01, 0.96), (x2, 0.09, 0.75), (x3, 0.25, 0.64), (x4, 0.81, 0), (x5, 1, 0)}
A^3 = {(x1, 0.001, 0.9920), (x2, 0.0270, 0.8750), (x3, 0.1250, 0.7840), (x4, 0.7290, 0), (x5, 1, 0)}
A^4 = {(x1, 0.0001, 0.9984), (x2, 0.0081, 0.9375), (x3, 0.0625, 0.8704), (x4, 0.6561, 0), (x5, 1, 0)}
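The modifier operation of Equation (12) is easy to script. The sketch below (set and variable names are purely illustrative) reproduces the membership pairs listed above for A^{1/2} and A^2:

```python
# Sketch of the modifier operation A^n of Equation (12): each membership
# degree is raised to the power n, and each non-membership degree v is
# transformed as 1 - (1 - v)^n.  Values are rounded to 4 decimals to
# match the sets listed in Example 1.

def power(ifs, n):
    return {x: (round(z ** n, 4), round(1 - (1 - v) ** n, 4))
            for x, (z, v) in ifs.items()}

LARGE = {"x1": (0.1, 0.8), "x2": (0.3, 0.5), "x3": (0.5, 0.4),
         "x4": (0.9, 0.0), "x5": (1.0, 0.0)}

more_or_less_large = power(LARGE, 0.5)   # A^(1/2), i.e., the dilation DIL(A)
very_large = power(LARGE, 2)             # A^2, i.e., the concentration CON(A)
```

Note that `power(LARGE, 0.5)` and `power(LARGE, 2)` return exactly the pairs tabulated for A^{1/2} and A^2, which also makes CON(A) = A^2 and DIL(A) = A^{1/2} of Definitions 5 and 6 concrete.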

From the viewpoint of mathematical operations, the entropy values of the above-defined IFSs A^{1/2}, A, A^2, A^3 and A^4 are required to satisfy:

E(A^{1/2}) > E(A) > E(A^2) > E(A^3) > E(A^4)   (13)

Based on the dataset given above, we compute the entropy measure for these sets at different values of R and S. The results corresponding to these different pairs of values are summarized in Table 1, along with the results of the existing approaches. From these computed values, it is observed that the ranking order of the linguistic variables by the proposed entropy follows the pattern described in Equation (13) for some suitable pairs of (R, S), while the performance order pattern corresponding to [19,21,57] and [58] is E(A) > E(A^{1/2}) > E(A^2) > E(A^3) > E(A^4), which does not satisfy the requirement given in Equation (13). Hence, the proposed entropy measure is a good alternative and performs better than the existing measures. Furthermore, for different pairs of (R, S), a decision-maker has more choices to assess the alternatives from the viewpoint of structured linguistic variables.


Table 1. Entropy measure values corresponding to the existing approaches, as well as the proposed approach.

Entropy Measure           A^{1/2}    A        A^2      A^3      A^4
E_BB [21]                 0.0818   0.1000   0.0980   0.0934   0.0934
E_SK [19]                 0.3446   0.3740   0.1970   0.1309   0.1094
E_ZL [57]                 0.4156   0.4200   0.2380   0.1546   0.1217
E_HY [58]                 0.3416   0.3440   0.2610   0.1993   0.1613
E_ZJ [25]                 0.2851   0.3050   0.1042   0.0383   0.0161
E_{0.4}^{0.2} [22]        0.5995   0.5981   0.5335   0.4631   0.4039
HRS (proposed measure)
  R = 0.3, S = 2          2.3615   2.3589   1.8624   1.4312   1.1246
  R = 0.5, S = 2          0.8723   0.8783   0.6945   0.5392   0.4323
  R = 0.7, S = 2          0.5721   0.5769   0.4432   0.3390   0.2725
  R = 2.5, S = 0.3        2.2882   2.2858   1.8028   1.3851   1.0890
  R = 2.5, S = 0.5        0.8309   0.8368   0.6583   0.5104   0.4103
  R = 2.5, S = 0.7        0.5369   0.5415   0.4113   0.3138   0.2538
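The HRS rows of Table 1 can be reproduced with a short script. The sketch below evaluates the proposed (R, S)-norm entropy on the sets of Example 1 at R = 0.3, S = 2, taking the hesitation degree as π = 1 − ζ − ϑ (an assumption consistent with the entropy formula used throughout):

```python
# Sketch of the proposed (R, S)-norm entropy evaluated on the linguistic
# sets of Example 1.  Each IFS is a list of (membership, non-membership)
# pairs; the hesitation degree is pi = 1 - membership - non-membership.

def entropy_rs(ifs, R, S):
    n = len(ifs)
    total = 0.0
    for z, v in ifs:
        p = 1.0 - z - v  # hesitation degree pi
        total += (z**S + v**S + p**S) ** (1/S) - (z**R + v**R + p**R) ** (1/R)
    return R * S / (n * (R - S)) * total

A_half = [(0.3162, 0.5528), (0.5477, 0.2929), (0.7071, 0.2254), (0.9487, 0.0), (1.0, 0.0)]
A      = [(0.1, 0.8), (0.3, 0.5), (0.5, 0.4), (0.9, 0.0), (1.0, 0.0)]
A2     = [(0.01, 0.96), (0.09, 0.75), (0.25, 0.64), (0.81, 0.0), (1.0, 0.0)]
A3     = [(0.001, 0.992), (0.027, 0.875), (0.125, 0.784), (0.729, 0.0), (1.0, 0.0)]
A4     = [(0.0001, 0.9984), (0.0081, 0.9375), (0.0625, 0.8704), (0.6561, 0.0), (1.0, 0.0)]

values = [entropy_rs(s, R=0.3, S=2) for s in (A_half, A, A2, A3, A4)]
# values decrease along the chain of Equation (13):
# E(A^{1/2}) > E(A) > E(A^2) > E(A^3) > E(A^4)
```

A crisp singleton such as `[(1.0, 0.0)]` gives entropy 0, in line with the minimum-value case discussed after Theorem 3.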

4. MADM Problem Based on the Proposed Entropy Measure


In this section, we present a method for solving the MADM problem based on the proposed
entropy measure.

4.1. Approach I: When the Attribute Weight Is Completely Unknown


In this section, we present a decision-making approach for solving the multi-attribute decision-making problem in the intuitionistic fuzzy set environment. For this, consider a set of 'n' different alternatives, denoted by A1, A2, . . . , An, which are evaluated by a decision-maker under 'm' different attributes G1, G2, . . . , Gm. Assume that the decision-maker has evaluated these alternatives in the intuitionistic fuzzy environment and noted their rating values in the form of the IFNs αij = ⟨ζij, ϑij⟩, where ζij denotes the degree to which the alternative Ai satisfies the attribute Gj, while ϑij denotes the degree to which the alternative Ai dissatisfies Gj, such that ζij, ϑij ∈ [0, 1] and ζij + ϑij ≤ 1 for i = 1, 2, . . . , n and j = 1, 2, . . . , m. Further, assume that the weight vector ωj (j = 1, 2, . . . , m) of each attribute is completely unknown. Hence, based on the decision-maker's preferences αij, the collective values are summarized in the form of the decision matrix D as follows:

             G1            G2           ...        Gm
      A1   ⟨ζ11, ϑ11⟩   ⟨ζ12, ϑ12⟩     ...    ⟨ζ1m, ϑ1m⟩
D =   A2   ⟨ζ21, ϑ21⟩   ⟨ζ22, ϑ22⟩     ...    ⟨ζ2m, ϑ2m⟩        (14)
      ...      ...           ...       ...        ...
      An   ⟨ζn1, ϑn1⟩   ⟨ζn2, ϑn2⟩     ...    ⟨ζnm, ϑnm⟩

Then, the following steps of the proposed approach are summarized to find the best alternative(s).

Step 1: Normalize the rating values of the decision-maker, if required, by converting the rating values corresponding to the cost-type attributes into benefit type. For this, the following normalization formula is used:

rij = ⟨ζij, ϑij⟩ if Gj is a benefit-type attribute; rij = ⟨ϑij, ζij⟩ if Gj is a cost-type attribute   (15)

and hence, we obtain the normalized IF decision matrix R = (rij)n×m.


Step 2: Based on the matrix R, the information entropy of the attribute Gj (j = 1, 2, . . . , m) is computed as:

$$
(H_{R}^{S})_{j}=\frac{R\times S}{n(R-S)}\sum_{i=1}^{n}\Big[\big(\zeta_{ij}^{S}+\vartheta_{ij}^{S}+\pi_{ij}^{S}\big)^{\frac{1}{S}}-\big(\zeta_{ij}^{R}+\vartheta_{ij}^{R}+\pi_{ij}^{R}\big)^{\frac{1}{R}}\Big] \tag{16}
$$

where R, S > 0 and R ≠ S.


Step 3: Based on the entropy matrix HRS(αij) defined in Equation (16), the degree of divergence (dj) of the average intrinsic information provided by the correspondence on the attribute Gj can be defined as dj = 1 − κj, where κj = Σ_{i=1}^{n} HRS(αij), j = 1, 2, . . . , m. Here, the value of dj represents the inherent contrast intensity of the attribute Gj, and hence, based on this, the attribute weights ωj (j = 1, 2, . . . , m) are given as:

$$
\omega_{j}=\frac{d_{j}}{\sum_{j=1}^{m}d_{j}}=\frac{1-\kappa_{j}}{\sum_{j=1}^{m}(1-\kappa_{j})}=\frac{1-\kappa_{j}}{m-\sum_{j=1}^{m}\kappa_{j}} \tag{17}
$$

Step 4: Construct the weighted sum of each alternative by multiplying the score function of each criterion by its assigned weight as:

$$
Q(A_{i})=\sum_{j=1}^{m}\omega_{j}(\zeta_{ij}-\vartheta_{ij}); \quad i=1,2,\ldots,n \tag{18}
$$

Step 5: Rank all the alternatives Ai (i = 1, 2, . . . , n) according to the highest value of Q( Ai ) and, hence,
choose the best alternative.
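Steps 1 and 3 above can be sketched in a few lines of Python. In this sketch, the sample ratings, attribute types and κ values are illustrative only, not taken from the paper; Equation (15) swaps the membership pair for cost-type attributes, and Equation (17) converts column entropies κj into weights:

```python
# Sketch of Steps 1 and 3.  Equation (15): a cost-type rating (z, v) is
# normalized by swapping membership and non-membership.  Equation (17):
# weights are proportional to the divergences d_j = 1 - kappa_j.

def normalize(rating, attribute_type):
    z, v = rating
    return (z, v) if attribute_type == "benefit" else (v, z)

def attribute_weights(kappa):
    divergence = [1 - k for k in kappa]   # d_j = 1 - kappa_j
    total = sum(divergence)
    return [d / total for d in divergence]

# Illustrative data: one benefit-type rating kept, one cost-type rating swapped.
normalized = [normalize(r, t) for r, t in [((0.6, 0.3), "benefit"), ((0.2, 0.7), "cost")]]
weights = attribute_weights([0.6, 0.7, 0.5])   # sums to 1 by construction
```

By construction the weights are non-negative whenever all κj lie on the same side of 1, and they always sum to 1, as Equation (17) requires.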

The above-mentioned approach is illustrated with the following practical decision-making example:

Example 2. Consider a decision-making problem from the field of the recruitment sector. Assume that a pharmaceutical company wants to select a lab technician for a micro-bio laboratory. For this, the company has published a notification in a newspaper and considered four attributes required for technician selection, namely academic record (G1), personal interview evaluation (G2), experience (G3) and technical capability (G4). On the basis of the notification conditions, only five candidates, A1, A2, A3, A4 and A5, are interested and are selected as alternatives to be presented to the panel of experts for this post. The main objective of the company is to choose the best candidate among them for the task. In order to describe the ambiguity and uncertainties in the data, the preferences related to each alternative are represented in the IFS environment. The preferences of each alternative are represented in the form of IFNs as follows:

             G1           G2           G3           G4
      A1   ⟨0.7, 0.2⟩   ⟨0.5, 0.4⟩   ⟨0.6, 0.2⟩   ⟨0.6, 0.3⟩
      A2   ⟨0.7, 0.1⟩   ⟨0.5, 0.2⟩   ⟨0.7, 0.2⟩   ⟨0.4, 0.5⟩
D =   A3   ⟨0.6, 0.3⟩   ⟨0.5, 0.1⟩   ⟨0.5, 0.3⟩   ⟨0.6, 0.2⟩        (19)
      A4   ⟨0.8, 0.1⟩   ⟨0.6, 0.3⟩   ⟨0.3, 0.7⟩   ⟨0.6, 0.3⟩
      A5   ⟨0.6, 0.3⟩   ⟨0.4, 0.6⟩   ⟨0.7, 0.2⟩   ⟨0.5, 0.4⟩

Then, the steps of the proposed approach are followed to find the best alternative(s) as below:

Step 1: Since all the attributes are of the same type, there is no need for the normalization process.
Step 2: Without loss of generality, we take R = 0.3 and S = 2 and, hence, compute the entropy
measurement value for each attribute by using Equation (16). The results corresponding to it
are HRS ( G1 ) = 3.4064, HRS ( G2 ) = 3.372, HRS ( G3 ) = 3.2491 and HRS ( G4 ) = 3.7564.


Step 3: Based on these entropy values, the weight of each criterion is calculated as ω = (0.2459, 0.2425,
0.2298, 0.2817) T .
Step 4: The overall weighted score values of the alternative corresponding to R = 0.3, S = 2 and
ω = (0.2459, 0.2425, 0.2298, 0.2817) T obtained by using Equation (18) are Q( A1 ) = 0.3237,
Q( A2 ) = 0.3071, Q( A3 ) = 0.3294, Q( A4 ) = 0.2375 and Q( A5 ) = 0.1684.
Step 5: Since Q(A3) > Q(A1) > Q(A2) > Q(A4) > Q(A5), the ranking order of the alternatives is A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5. Thus, the best alternative is A3.
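The worked numbers of Example 2 can be reproduced with a compact script. The sketch below applies Approach I to the decision matrix of Equation (19) with R = 0.3 and S = 2, taking the hesitation degree as π = 1 − ζ − ϑ (an assumption consistent with Equation (16)):

```python
# Compact sketch of Approach I on the Example 2 data (Equation (19)).

R, S = 0.3, 2.0
D = [  # rows A1..A5, columns G1..G4, entries (membership, non-membership)
    [(0.7, 0.2), (0.5, 0.4), (0.6, 0.2), (0.6, 0.3)],
    [(0.7, 0.1), (0.5, 0.2), (0.7, 0.2), (0.4, 0.5)],
    [(0.6, 0.3), (0.5, 0.1), (0.5, 0.3), (0.6, 0.2)],
    [(0.8, 0.1), (0.6, 0.3), (0.3, 0.7), (0.6, 0.3)],
    [(0.6, 0.3), (0.4, 0.6), (0.7, 0.2), (0.5, 0.4)],
]
n, m = len(D), len(D[0])

def column_entropy(j):
    # Equation (16): (R, S)-norm entropy of attribute G_j over all alternatives
    total = 0.0
    for i in range(n):
        z, v = D[i][j]
        p = 1 - z - v  # hesitation degree pi
        total += (z**S + v**S + p**S) ** (1 / S) - (z**R + v**R + p**R) ** (1 / R)
    return R * S / (n * (R - S)) * total

kappa = [column_entropy(j) for j in range(m)]   # ~ (3.4064, 3.372, 3.2491, 3.7564)
d = [1 - k for k in kappa]                      # divergences of Step 3
w = [dj / sum(d) for dj in d]                   # ~ (0.2459, 0.2425, 0.2298, 0.2817)
Q = [sum(w[j] * (D[i][j][0] - D[i][j][1]) for j in range(m)) for i in range(n)]
best = max(range(n), key=lambda i: Q[i])        # index 2, i.e., alternative A3
```

Running this recovers the entropies, weights and score values quoted in Steps 2 to 4, and confirms A3 as the best alternative.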
However, in order to analyze the influence of the parameters R and S on the final ranking order
of the alternatives, the steps of the proposed approach are executed by varying the values of R from
0.1 to 1.0 and S from 1.0 to 5.0. The overall score values of each alternative along with the ranking
order are summarized in Table 2. From this analysis, we conclude that the decision-maker can plan to
choose the values of R and S and, hence, their respective alternatives according to his goal. Therefore,
the proposed measures give various choices to the decision-maker to reach the target.

Table 2. Effect of R and S on the overall score values Q(Ai) and the ranking order by using Approach I.

 S     R     Q(A1)    Q(A2)    Q(A3)    Q(A4)    Q(A5)    Ranking Order
1.2   0.1   0.3268   0.3084   0.3291   0.2429   0.1715   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.3   0.3241   0.3081   0.3292   0.2374   0.1690   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.5   0.3165   0.2894   0.3337   0.2368   0.1570   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.7   0.1688  -0.0988   0.4296   0.2506  -0.0879   A3 ≻ A4 ≻ A1 ≻ A5 ≻ A2
      0.9   0.3589   0.3992   0.3065   0.2328   0.2272   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5
1.5   0.1   0.3268   0.3084   0.3291   0.2429   0.1715   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.3   0.3239   0.3076   0.3293   0.2374   0.1688   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.5   0.3132   0.2811   0.3359   0.2371   0.1515   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.7   0.4139   0.5404   0.2712   0.2272   0.3185   A2 ≻ A1 ≻ A5 ≻ A3 ≻ A4
      0.9   0.3498   0.3741   0.3125   0.2334   0.2121   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5
2.0   0.1   0.3268   0.3084   0.3291   0.2429   0.1715   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.3   0.3237   0.3071   0.3294   0.2375   0.1684   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.5   0.3072   0.2666   0.3396   0.2381   0.1415   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.7   0.3660   0.4140   0.3022   0.2308   0.2393   A2 ≻ A1 ≻ A3 ≻ A5 ≻ A4
      0.9   0.3461   0.3631   0.3150   0.2331   0.2062   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5
2.5   0.1   0.3268   0.3084   0.3291   0.2429   0.1715   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.3   0.3235   0.3067   0.3295   0.2376   0.1681   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.5   0.3010   0.2517   0.3436   0.2396   0.1308   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.7   0.3578   0.3920   0.3074   0.2304   0.2261   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5
      0.9   0.3449   0.3591   0.3158   0.2322   0.2045   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5
3.0   0.1   0.3268   0.3084   0.3291   0.2429   0.1715   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.3   0.3234   0.3064   0.3296   0.2376   0.1678   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.5   0.2946   0.2368   0.3476   0.2417   0.1199   A3 ≻ A1 ≻ A4 ≻ A2 ≻ A5
      0.7   0.3545   0.3829   0.3095   0.2298   0.2209   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5
      0.9   0.3442   0.3570   0.3161   0.2314   0.2037   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5
5.0   0.1   0.3268   0.3084   0.3291   0.2429   0.1715   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.3   0.3231   0.3058   0.3298   0.2379   0.1674   A3 ≻ A1 ≻ A2 ≻ A4 ≻ A5
      0.5   0.2701   0.1778   0.3638   0.2520   0.0767   A3 ≻ A1 ≻ A4 ≻ A2 ≻ A5
      0.7   0.3496   0.3706   0.3123   0.2277   0.2137   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5
      0.9   0.3428   0.3532   0.3168   0.2293   0.2020   A2 ≻ A1 ≻ A3 ≻ A4 ≻ A5

4.2. Approach II: When the Attribute Weight Is Partially Known


In this section, we present an approach for solving the multi-attribute decision-making problem
in the IFS environment where the information about the attribute weight is partially known.
The description of the MADM problem is mentioned in Section 4.1.
Since decision-making during a real-life situation is highly complex due to a large number of
constraints, human thinking is inherently subjective, and the importance of the attribute weight


vector is incompletely known. In order to represent this incomplete information about the weights, the following relationships have been defined for i ≠ j:

1. A weak ranking: ωi ≥ ωj;
2. A strict ranking: ωi − ωj ≥ σi (σi > 0);
3. A ranking with multiples: ωi ≥ σi ωj (0 ≤ σi ≤ 1);
4. An interval form: λi ≤ ωi ≤ λi + δi (0 ≤ λi ≤ λi + δi ≤ 1);
5. A ranking of differences: ωi − ωj ≥ ωk − ωl (j ≠ k ≠ l).

The set of this known weight information is denoted by Δ in this paper.


Then, the proposed approach is summarized in the following steps to obtain the most desirable
alternative(s).

Step 1: Similar to Approach I.


Step 2: Similar to Approach I.
Step 3: The overall entropy of the alternative Ai (i = 1, 2, . . . , n) over the attributes Gj is given by:

$$
H(A_{i})=\sum_{j=1}^{m}H_{R}^{S}(\alpha_{ij})=\frac{R\times S}{n(R-S)}\sum_{j=1}^{m}\sum_{i=1}^{n}\Big[\big(\zeta_{ij}^{S}+\vartheta_{ij}^{S}+\pi_{ij}^{S}\big)^{\frac{1}{S}}-\big(\zeta_{ij}^{R}+\vartheta_{ij}^{R}+\pi_{ij}^{R}\big)^{\frac{1}{R}}\Big] \tag{20}
$$

where R, S > 0 and R ≠ S.


By considering the importance of each attribute in terms of the weight vector ω = (ω1, ω2, . . . , ωm)^T, we formulate a linear programming model to determine the weight vector as follows:

$$
\begin{aligned}
\min\; H &= \sum_{i=1}^{n}H(A_{i})=\sum_{i=1}^{n}\sum_{j=1}^{m}\omega_{j}H_{R}^{S}(\alpha_{ij})\\
&=\frac{R\times S}{n(R-S)}\sum_{j=1}^{m}\omega_{j}\sum_{i=1}^{n}\Big[\big(\zeta_{ij}^{S}+\vartheta_{ij}^{S}+\pi_{ij}^{S}\big)^{\frac{1}{S}}-\big(\zeta_{ij}^{R}+\vartheta_{ij}^{R}+\pi_{ij}^{R}\big)^{\frac{1}{R}}\Big]\\
\text{s.t.}\quad & \sum_{j=1}^{m}\omega_{j}=1,\quad \omega_{j}\geq 0,\quad \omega\in\Delta
\end{aligned}
$$

After solving this model, we get the optimal weight vector ω = (ω1 , ω2 , . . . , ωm ) T .
Step 4: Construct the weighted sum of each alternative by multiplying the score function of each criterion by its assigned weight as:

$$
Q(A_{i})=\sum_{j=1}^{m}\omega_{j}(\zeta_{ij}-\vartheta_{ij}); \quad i=1,2,\ldots,n \tag{21}
$$

Step 5: Rank all the alternatives Ai (i = 1, 2, . . . , n) according to the highest value of Q(Ai) and, hence, choose the best alternative.

To demonstrate the above-mentioned approach, a numerical example has been taken, which is
stated as below.

Example 3. Consider the MADM problem stated and described in Example 2, where the five alternatives A1, A2, . . . , A5 are assessed under the four attributes G1, G2, G3, G4 in the IFS environment. Here, we assume that the information about the attribute weights is partially known and is given by the decision-maker as Δ = {0.15 ≤ ω1 ≤ 0.45, 0.2 ≤ ω2 ≤ 0.5, 0.1 ≤ ω3 ≤ 0.3, 0.1 ≤ ω4 ≤ 0.2, ω1 ≥ ω4, Σ_{j=1}^{4} ωj = 1}. Then, based on the rating values given in Equation (19), the steps of Approach II are executed as follows:
Step 1: All the attributes are of the same type, so there is no need for normalization.
Step 2: Without loss of generality, we take R = 0.3 and S = 2 and, hence, compute the entropy measurement
value for each attribute by using Equation (20). The results corresponding to it are HRS ( G1 ) = 3.4064,
HRS ( G2 ) = 3.372, HRS ( G3 ) = 3.2491 and HRS ( G4 ) = 3.7564.
Step 3: Formulate the optimization model by utilizing the rating values and the partial weight information Δ = {0.15 ≤ ω1 ≤ 0.45, 0.2 ≤ ω2 ≤ 0.5, 0.1 ≤ ω3 ≤ 0.3, 0.1 ≤ ω4 ≤ 0.2, ω1 ≥ ω4, Σ_{j=1}^{4} ωj = 1} as:

min H = 3.4064ω1 + 3.372ω2 + 3.2491ω3 + 3.7564ω4


subject to 0.15 ≤ ω1 ≤ 0.45,
0.2 ≤ ω2 ≤ 0.5,
0.1 ≤ ω3 ≤ 0.3,
0.1 ≤ ω4 ≤ 0.2,
ω1 ≥ ω4 ,
and ω1 + ω2 + ω3 + ω4 = 1.

Hence, we solve the model with the help of MATLAB software and obtain the weight vector ω = (0.15, 0.45, 0.30, 0.10)^T.
Step 4: The overall weighted score values of the alternatives corresponding to R = 0.3, S = 2 and ω = (0.15, 0.45, 0.30, 0.10)^T, obtained by using Equation (21), are Q(A1) = 0.2700, Q(A2) = 0.3650, Q(A3) = 0.3250, Q(A4) = 0.1500 and Q(A5) = 0.1150.
Step 5: Since Q(A2) > Q(A3) > Q(A1) > Q(A4) > Q(A5), the ranking order of the alternatives is A2 ≻ A3 ≻ A1 ≻ A4 ≻ A5. Thus, the best alternative is A2.
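The weight-determination step of Example 3 is a small linear program; a coarse grid search (step 0.01) is enough to recover its solution and stands in here for the MATLAB solver used in the text. This is only a sketch of the optimization, not the authors' implementation:

```python
# Minimize H = 3.4064*w1 + 3.372*w2 + 3.2491*w3 + 3.7564*w4 over the
# partially known weight set Delta of Example 3, then score the
# alternatives with Equation (21).

step = 0.01
best_w, best_h = None, float("inf")
for i1 in range(15, 46):                 # 0.15 <= w1 <= 0.45
    for i2 in range(20, 51):             # 0.20 <= w2 <= 0.50
        for i3 in range(10, 31):         # 0.10 <= w3 <= 0.30
            w1, w2, w3 = i1 * step, i2 * step, i3 * step
            w4 = round(1 - w1 - w2 - w3, 10)
            if not (0.1 <= w4 <= 0.2 and w1 >= w4):
                continue                 # point lies outside Delta
            h = 3.4064 * w1 + 3.372 * w2 + 3.2491 * w3 + 3.7564 * w4
            if h < best_h:
                best_w, best_h = (w1, w2, w3, w4), h

# Equation (21) with the optimal weights; rows hold zeta - vartheta from (19)
score = [[0.5, 0.1, 0.4, 0.3], [0.6, 0.3, 0.5, -0.1], [0.3, 0.4, 0.2, 0.4],
         [0.7, 0.3, -0.4, 0.3], [0.3, -0.2, 0.5, 0.1]]
Q = [sum(wj * sj for wj, sj in zip(best_w, row)) for row in score]
```

The search recovers the weight vector (0.15, 0.45, 0.30, 0.10) reported in Step 3 and the score values of Step 4, with A2 again emerging as the best alternative.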

5. Conclusions
In this paper, we proposed an entropy measure based on the (R, S)-norm in the IFS environment. Since the uncertainties present in the data play a crucial role during the decision-making process, the proposed (R, S)-norm-based information measure quantifies the degree of fuzziness of a set while retaining its advantages. Various desirable relations, as well as some of its properties, were investigated in detail. It was observed that several existing measures are special cases of the proposed measure.
Furthermore, based on the different parametric values of R and S, the decision-maker(s) may have
different choices to make a decision according to his/her choice. In addition to these and to explore the
structural characteristics and functioning of the proposed measures, two decision-making approaches
were presented to solve the MADM problems in the IFS environment under the characteristics that
attribute weights are either partially known or completely unknown. The presented approaches were
illustrated with numerical examples. The major advantages of the proposed measure are that it gives
various choices to select the best alternatives, according to the decision-makers’ desired goals, and
hence, it makes the decision-makers more flexible and reliable. From the studies, it is concluded
that the proposed work provides a new and easy way to handle the uncertainty and vagueness in
the data and, hence, provides an alternative way to solve the decision-making problem in the IFS
environment. In the future, the result of this paper can be extended to some other uncertain and fuzzy
environments [59–62].
Author Contributions: Conceptualization, Methodology, Validation, H.G.; Formal Analysis, Investigation, H.G.,
J.K.; Writing-Original Draft Preparation, H.G.; Writing-Review & Editing, H.G.; Visualization, H.G.


Conflicts of Interest: The authors declare no conflict of interest.

References
1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
2. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [CrossRef]
3. Atanassov, K.; Gargov, G. Interval-valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349.
[CrossRef]
4. Xu, Z.S.; Yager, R.R. Some geometric aggregation operators based on intuitionistic fuzzy sets. Int. J. Gen. Syst.
2006, 35, 417–433. [CrossRef]
5. Xu, Z.S. Intuitionistic fuzzy aggregation operators. IEEE Trans. Fuzzy Syst. 2007, 15, 1179–1187.
6. Garg, H. Generalized intuitionistic fuzzy interactive geometric interaction operators using Einstein t-norm
and t-conorm and their application to decision-making. Comput. Ind. Eng. 2016, 101, 53–69. [CrossRef]
7. Garg, H. Novel intuitionistic fuzzy decision-making method based on an improved operation laws and its
application. Eng. Appl. Artif. Intell. 2017, 60, 164–174. [CrossRef]
8. Wang, W.; Wang, Z. An approach to multi-attribute interval-valued intuitionistic fuzzy decision-making
with incomplete weight information. In Proceedings of the 15th IEEE International Conference on Fuzzy
Systems and Knowledge Discovery, Jinan, China, 18–20 October 2008; Volume 3, pp. 346–350.
9. Wei, G. Some induced geometric aggregation operators with intuitionistic fuzzy information and their
application to group decision-making. Appl. Soft Comput. 2010, 10, 423–431. [CrossRef]
10. Arora, R.; Garg, H. Robust aggregation operators for multi-criteria decision-making with intuitionistic fuzzy
soft set environment. Sci. Iran. E 2018, 25, 931–942. [CrossRef]
11. Arora, R.; Garg, H. Prioritized averaging/geometric aggregation operators under the intuitionistic fuzzy
soft set environment. Sci. Iran. 2018, 25, 466–482. [CrossRef]
12. Zhou, W.; Xu, Z. Extreme intuitionistic fuzzy weighted aggregation operators and their applications in
optimism and pessimism decision-making processes. J. Intell. Fuzzy Syst. 2017, 32, 1129–1138. [CrossRef]
13. Garg, H. Some robust improved geometric aggregation operators under interval-valued intuitionistic fuzzy
environment for multi-criteria decision -making process. J. Ind. Manag. Optim. 2018, 14, 283–308. [CrossRef]
14. Xu, Z.; Gou, X. An overview of interval-valued intuitionistic fuzzy information aggregations and applications.
Granul. Comput. 2017, 2, 13–39. [CrossRef]
15. Jamkhaneh, E.B.; Garg, H. Some new operations over the generalized intuitionistic fuzzy sets and their
application to decision-making process. Granul. Comput. 2018, 3, 111–122. [CrossRef]
16. Garg, H.; Singh, S. A novel triangular interval type-2 intuitionistic fuzzy sets and their aggregation operators.
Iran. J. Fuzzy Syst. 2018. [CrossRef]
17. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [CrossRef]
18. Deluca, A.; Termini, S. A definition of Non-probabilistic entropy in setting of fuzzy set theory. Inf. Control
1971, 20, 301–312. [CrossRef]
19. Szmidt, E.; Kacprzyk, J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2001, 118, 467–477. [CrossRef]
20. Vlachos, I.K.; Sergiadis, G.D. Intuitionistic fuzzy information-application to pattern recognition. Pattern Recognit.
Lett. 2007, 28, 197–206. [CrossRef]
21. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Sets Syst.
1996, 78, 305–316. [CrossRef]
22. Garg, H.; Agarwal, N.; Tripathi, A. Generalized Intuitionistic Fuzzy Entropy Measure of Order α and Degree
β and its applications to Multi-criteria decision-making problem. Int. J. Fuzzy Syst. Appl. 2017, 6, 86–107.
[CrossRef]
23. Wei, C.P.; Gao, Z.H.; Guo, T.T. An intuitionistic fuzzy entropy measure based on the trigonometric function.
Control Decis. 2012, 27, 571–574.
24. Garg, H.; Agarwal, N.; Tripathi, A. Entropy based multi-criteria decision-making method under Fuzzy
Environment and Unknown Attribute Weights. Glob. J. Technol. Optim. 2015, 6, 13–20.
25. Zhang, Q.S.; Jiang, S.Y. A note on information entropy measure for vague sets. Inf. Sci. 2008, 178, 4184–4191.
[CrossRef]
26. Verma, R.; Sharma, B.D. Exponential entropy on intuitionistic fuzzy sets. Kybernetika 2013, 49, 114–127.


27. Taneja, I.J. On generalized information measures and their applications. In Advances in Electronics and
Electron Physics; Elsevier: New York, NY, USA, 1989; Volume 76, pp. 327–413.
28. Boekee, D.E.; Van der Lubbe, J.C. The R-norm information measure. Inf. Control 1980, 45, 136–155. [CrossRef]
29. Hung, W.L.; Yang, M.S. Similarity measures of intuitionistic fuzzy sets based on Hausdorff distance.
Pattern Recognit. Lett. 2004, 25, 1603–1611. [CrossRef]
30. Garg, H. Distance and similarity measure for intuitionistic multiplicative preference relation and its
application. Int. J. Uncertain. Quantif. 2017, 7, 117–133. [CrossRef]
31. Garg, H.; Arora, R. Distance and similarity measures for Dual hesistant fuzzy soft sets and their applications
in multi criteria decision-making problem. Int. J. Uncertain. Quantif. 2017, 7, 229–248. [CrossRef]
32. Joshi, R.; Kumar, S. An (R, S)-norm fuzzy information measure with its applications in multiple-attribute
decision-making. Comput. Appl. Math. 2017, 1–22. [CrossRef]
33. Garg, H.; Kumar, K. An advanced study on the similarity measures of intuitionistic fuzzy sets based on the set
pair analysis theory and their application in decision making. Soft Comput. 2018, 1–12. [CrossRef]
34. Garg, H.; Kumar, K. Distance measures for connection number sets based on set pair analysis and its
applications to decision-making process. Appl. Intell. 2018, 1–14. [CrossRef]
35. Garg, H.; Nancy. On single-valued neutrosophic entropy of order α. Neutrosophic Sets Syst. 2016, 14, 21–28.
36. Selvachandran, G.; Garg, H.; Alaroud, M.H.S.; Salleh, A.R. Similarity Measure of Complex Vague Soft Sets
and Its Application to Pattern Recognition. Int. J. Fuzzy Syst. 2018, 1–14. [CrossRef]
37. Bajaj, R.K.; Kumar, T.; Gupta, N. R-norm intuitionistic fuzzy information measures and its computational
applications. In Eco-friendly Computing and Communication Systems; Springer: Berlin, Germany, 2012;
pp. 372–380.
38. Garg, H.; Kumar, K. Improved possibility degree method for ranking intuitionistic fuzzy numbers and their
application in multiattribute decision-making. Granul. Comput. 2018, 1–11. [CrossRef]
39. Mei, Y.; Ye, J.; Zeng, Z. Entropy-weighted ANP fuzzy comprehensive evaluation of interim product
production schemes in one-of-a-kind production. Comput. Ind. Eng. 2016, 100, 144–152. [CrossRef]
40. Chen, S.M.; Chang, C.H. A novel similarity measure between Atanassov’s intuitionistic fuzzy sets based on
transformation techniques with applications to pattern recognition. Inf. Sci. 2015, 291, 96–114. [CrossRef]
41. Garg, H. Hesitant Pythagorean fuzzy sets and their aggregation operators in multiple attribute
decision-making. Int. J. Uncertain. Quantif. 2018, 8, 267–289. [CrossRef]
42. Chen, S.M.; Cheng, S.H.; Chiou, C.H. Fuzzy multiattribute group decision-making based on intuitionistic
fuzzy sets and evidential reasoning methodology. Inf. Fusion 2016, 27, 215–227. [CrossRef]
43. Kaur, G.; Garg, H. Multi-Attribute Decision-Making Based on Bonferroni Mean Operators under Cubic
Intuitionistic Fuzzy Set Environment. Entropy 2018, 20, 65. [CrossRef]
44. Chen, T.Y.; Li, C.H. Determining objective weights with intuitionistic fuzzy entropy measures: A comparative
analysis. Inf. Sci. 2010, 180, 4207–4222. [CrossRef]
45. Li, D.F. TOPSIS- based nonlinear-programming methodology for multiattribute decision-making with
interval-valued intuitionistic fuzzy sets. IEEE Trans. Fuzzy Syst. 2010, 18, 299–311. [CrossRef]
46. Garg, H.; Arora, R. A nonlinear-programming methodology for multi-attribute decision-making problem with
interval-valued intuitionistic fuzzy soft sets information. Appl. Intell. 2017, 1–16. [CrossRef]
47. Garg, H.; Nancy. Non-linear programming method for multi-criteria decision-making problems under
interval neutrosophic set environment. Appl. Intell. 2017, 1–15. [CrossRef]
48. Saaty, T.L. Axiomatic foundation of the analytic hierarchy process. Manag. Sci. 1986, 32, 841–845. [CrossRef]
49. Hwang, C.L.; Lin, M.J. Group Decision Making under Multiple Criteria: Methods and Applications; Springer:
Berlin, Germany, 1987.
50. Arora, R.; Garg, H. A robust correlation coefficient measure of dual hesistant fuzzy soft sets and their
application in decision-making. Eng. Appl. Artif. Intell. 2018, 72, 80–92. [CrossRef]
51. Garg, H.; Kumar, K. Some aggregation operators for linguistic intuitionistic fuzzy set and its application to
group decision-making process using the set pair analysis. Arab. J. Sci. Eng. 2018, 43, 3213–3227. [CrossRef]
52. Abdullah, L.; Najib, L. A new preference scale mcdm method based on interval-valued intuitionistic fuzzy
sets and the analytic hierarchy process. Soft Comput. 2016, 20, 511–523. [CrossRef]
53. Garg, H. Generalized intuitionistic fuzzy entropy-based approach for solving multi-attribute decision-making
problems with unknown attribute weights. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2017, 1–11. [CrossRef]


54. Xia, M.; Xu, Z. Entropy/cross entropy-based group decision-making under intuitionistic fuzzy environment.
Inf. Fusion 2012, 13, 31–47. [CrossRef]
55. Garg, H.; Nancy. Linguistic single-valued neutrosophic prioritized aggregation operators and their
applications to multiple-attribute group decision-making. J. Ambient Intell. Humaniz. Comput. 2018, 1–23.
[CrossRef]
56. De, S.K.; Biswas, R.; Roy, A.R. Some operations on intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 117, 477–484.
[CrossRef]
57. Zeng, W.; Li, H. Relationship between similarity measure and entropy of interval-valued fuzzy sets.
Fuzzy Sets Syst. 2006, 157, 1477–1484. [CrossRef]
58. Hung, W.L.; Yang, M.S. Fuzzy Entropy on intuitionistic fuzzy sets. Int. J. Intell. Syst. 2006, 21, 443–451.
[CrossRef]
59. Garg, H. Some methods for strategic decision-making problems with immediate probabilities in Pythagorean
fuzzy environment. Int. J. Intell. Syst. 2018, 33, 687–712. [CrossRef]
60. Garg, H. Linguistic Pythagorean fuzzy sets and its applications in multiattribute decision-making process.
Int. J. Intell. Syst. 2018, 33, 1234–1263. [CrossRef]
61. Garg, H. Generalized interaction aggregation operators in intuitionistic fuzzy multiplicative preference
environment and their application to multicriteria decision-making. Appl. Intell. 2017, 1–17. [CrossRef]
62. Garg, H.; Arora, R. Generalized and Group-based Generalized intuitionistic fuzzy soft sets with applications
in decision-making. Appl. Intell. 2018, 48, 343–356. [CrossRef]

c 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

187
mathematics
Article
Hesitant Probabilistic Fuzzy Linguistic Sets with
Applications in Multi-Criteria Group Decision
Making Problems
Dheeraj Kumar Joshi 1 , Ismat Beg 2, * and Sanjay Kumar 1
1 Department of Mathematics, Statistics and Computer Science, G. B. Pant University of Agriculture and
Technology, Pantnagar, Uttarakhand 263145, India; [email protected] (D.K.J.);
[email protected] (S.K.)
2 Centre for Mathematics and Statistical Sciences, Lahore School of Economics, Lahore 53200, Pakistan
* Correspondence: [email protected]

Received: 3 February 2018; Accepted: 23 March 2018; Published: 26 March 2018

Abstract: Uncertainties due to randomness and fuzziness comprehensively exist in control and decision support systems. In the present study, we introduce the notion of the occurrence probability of possible values into the hesitant fuzzy linguistic element (HFLE) and define the hesitant probabilistic fuzzy linguistic set (HPFLS) for ill-structured and complex decision making problems. HPFLS provides a single framework where both stochastic and non-stochastic uncertainties can be efficiently handled along with hesitation. We also propose expected mean, variance, score and accuracy functions and basic operations for HPFLS. Weighted and ordered weighted aggregation operators for HPFLS are also defined in the present study for application to multi-criteria group decision making (MCGDM) problems. We propose an MCGDM method with HPFL information, which is illustrated by an example. A real case study is also undertaken to rank State Bank of India, InfoTech Enterprises, I.T.C., H.D.F.C. Bank, Tata Steel, Tata Motors and Bajaj Finance using real data. The proposed HPFLS-based MCGDM method is also compared with two HFL-based decision making methods.

Keywords: hesitant fuzzy set; hesitant probabilistic fuzzy linguistic set; score and accuracy function;
multi-criteria group decision making; aggregation operator

1. Introduction
Uncertainties in decision making problems arise from randomness, fuzziness, or both, and can be classified into stochastic and non-stochastic uncertainty [1]. Stochastic uncertainties in any system may be well captured by probabilistic modeling [2,3]. Although several theories have been proposed in the literature to deal with non-stochastic uncertainties, among them fuzzy set theory [4,5] is the most extensively researched and has been successfully applied in decision making [6–10]. Mardani et al. [11] provide an extensive survey of the various fuzzy aggregation operators proposed in the last thirty years. Type-2 fuzzy sets [5], interval-valued fuzzy sets (IVFS) [4], intuitionistic fuzzy sets (IFS) [12], interval-valued intuitionistic fuzzy sets (IVIFS) [13], Pythagorean fuzzy sets [14] and neutrosophic sets [15] are a few other extensions of fuzzy sets used in MCGDM problems to include non-stochastic uncertainty and hesitation.
Often decision makers (DMs) in multi-criteria group decision making (MCGDM) problems are not in favor of the same assessment on decision criteria and provide different assessment information on each criterion. The difficulty of agreeing on a common assessment is not because of a margin of error or some possible distribution, as in the case of IFS and type-2 fuzzy sets. To address this issue in MCGDM problems, Torra and Narukawa [16] and Torra [17] introduced the hesitant fuzzy set (HFS) and applied it

Mathematics 2018, 6, 47; doi:10.3390/math6040047; www.mdpi.com/journal/mathematics



in MCGDM problems [18,19]. Various extensions of HFS, e.g., the triangular hesitant fuzzy set (THFS), generalized hesitant fuzzy set (GHFS), interval-valued hesitant fuzzy set (IVHFS), dual hesitant fuzzy set (DHFS), interval-valued intuitionistic hesitant fuzzy set (IVIHFS) and hesitant Pythagorean fuzzy set, have been used in decision making problems [20–28] that consider decision hesitancy and prioritization among decision criteria, for instance in fuzzy group decision making methods for evaluating complex emergency response in sustainable development. Recently, Garg and Arora [29] proposed a distance and similarity measures-based MCDM method using the dual hesitant fuzzy soft set.
Qualitative and quantitative analysis of decision criteria with hesitant and uncertain information has always been an important issue for researchers in MCGDM problems. The limited knowledge of decision makers (DMs), the nature of the considered alternatives and the unpredictability of events are the main constraints on obtaining sufficient and accurate information about decision preferences and decision criteria. Many criteria that are difficult to analyze quantitatively can be analyzed using linguistic variables [5]. Linguistic variables improve the consistency and flexibility of traditional decision making methods [30], and hence many researchers [31–45] have proposed the use of linguistic variables in decision making problems. Kobina et al. [46] proposed a few probabilistic linguistic aggregation operators for decision making problems. Garg and Kumar [47], Liu et al. [48] and Garg [49] proposed various aggregation operators and prioritized aggregation operators for linguistic IFS and linguistic neutrosophic sets and applied them to MCGDM problems. Lin et al. [50] integrated linguistic term sets with HFS to define the hesitant fuzzy linguistic set (HFLS), which includes hesitancy and inconsistencies among DMs in the assessment of an alternative with respect to a certain criterion. Ren et al. [51] and Joshi and Kumar [52] proposed TOPSIS-based MCGDM methods using hesitant fuzzy linguistic and IVIHFL information. Recently, a few researchers [53–55] have proposed generalized single-valued neutrosophic hesitant fuzzy prioritized aggregation operators and linguistic distribution-based decision making methods using hesitant fuzzy linguistic assessments.
Probabilistic and fuzzy approach-based MCGDM methods process either stochastic or non-stochastic uncertainty only; a major limitation is their inability to handle both types of uncertainty simultaneously. The comprehensive co-occurrence of stochastic and non-stochastic uncertainty in real-life problems has attracted researchers to incorporate probability theory into fuzzy logic. The idea of integrating fuzzy set theory with probability theory was initiated by Liang and Song [56] and Meghdadi and Akbarzadeh [1]. In 2005, Liu and Li [57] defined the probabilistic fuzzy set (PFS) to handle both stochastic and non-stochastic uncertainties in a single framework. To handle the simultaneous occurrence of both stochastic and non-stochastic uncertainties with hesitation, Xu and Zhou [58] introduced the probabilistic hesitant fuzzy set (PHFS). PHFS permits more than one membership degree of an element, each with a different probability. Recently, many applications of PHFS have been found in MCGDM problems [58–65].
In all earlier HFL-based decision making methods, the probabilities of occurrence of the elements are assumed to be equal. The assumption of equal probabilities in HFL is too hard for DMs to follow in real-life decision making problems due to their hesitation. For example, a decision maker provides the hesitant fuzzy linguistic element (HFLE) {s2 , < 0.4, 0.5, 0.6 >} to evaluate the safety level of a vehicle. He or she thinks that the safety levels associated with 0.6 and 0.4 are the most and least suitable, respectively. However, he or she contradicts his or her own decision by assigning equal probability to each of 0.4, 0.5 and 0.6. Hence, the HFLE {s2 , < 0.4, 0.5, 0.6 >} with equal probabilities cannot represent the DM's accurate assessment of the decision criteria. Given this limitation of the present form of HFLS, we introduce the notion of the hesitant probabilistic fuzzy linguistic set (HPFLS). This new class of set handles both uncertainties, caused by randomness and fuzziness, in an environment of hesitation within a single framework.
In the present study, we propose HPFLS with expected mean, variance, score and accuracy functions and a few operations on its elements. We also develop novel hesitant probabilistic fuzzy
linguistic weighted averaging (HPFLWA), hesitant probabilistic fuzzy linguistic weighted geometric
(HPFLWG), hesitant probabilistic fuzzy linguistic ordered weighted averaging (HPFLOWA) and
hesitant probabilistic fuzzy linguistic ordered weighted geometric (HPFLOWG) aggregation operators


to aggregate HPFL information. An MCGDM method with HPFL information is proposed. The methodology of the proposed MCGDM method is illustrated by a numerical example and is also applied to a real case study to rank organizations.

2. Preliminaries
In this section, we briefly review fundamental concepts and definitions of hesitant fuzzy set,
linguistic variables, hesitant fuzzy linguistic set and hesitant probabilistic fuzzy set.

Definition 1. ([16,17]) Let X be a reference set. An HFS A on X is defined in terms of a function I_A(x) that returns a subset of [0, 1]. Mathematically, it is symbolized by the following expression:

A = {< x, I A ( x ) > | x ∈ X } (1)

where I_A(x) is a hesitant fuzzy element (HFE): a set of different values lying in [0, 1].

Definition 2. ([32]) Let S = {si | i = 1, 2, . . . , t} be a finite discrete LTS, where si represents a possible value for a linguistic variable and satisfies the following characteristics:

1. The set is ordered: si > sj if i > j
2. Max(si, sj) = si if i ≥ j
3. Min(si, sj) = sj if i ≥ j
Xu [66] extended the finite discrete LTS S = {si | i = 1, 2, . . . , t} to the continuous LTS S = {sθ | s0 ≤ sθ ≤ st, θ ∈ [0, t]} in order to preserve all the provided information. A linguistic term is called original if sθ ∈ S; otherwise it is called virtual.

Definition 3. ([50]) Let X be the reference set and sθ ∈ S. A hesitant fuzzy linguistic set A in X is a mathematical object of the following form:

A = {< x, sθ ( x ), h A ( x ) > | x ∈ X } (2)

Here h_A(x) is a set of a finite number of possible values in [0, 1] and denotes the possible membership degrees with which x belongs to sθ(x).

Definition 4. ([58]) Let X be the reference set. An HPFS H_P on X is a mathematical object of the following form:

H_P = {< x, h(γi | pi) > | x ∈ X}   (3)

Here h(γi | pi) is a set of elements γi | pi expressing the hesitant fuzzy information with probabilities in the set H_P, where 0 ≤ γi ≤ 1 (i = 1, 2, . . . , #h; #h is the number of possible elements in h(γi | pi)) and the pi ∈ [0, 1] are the corresponding probabilities with the condition ∑_{i=1}^{#h} pi = 1.

3. Hesitant Probabilistic Fuzzy Linguistic Set (HPFLS) and Hesitant Probabilistic Fuzzy
Linguistic Element (HPFLE)
Qualitative and quantitative analysis of decision criteria with hesitant information has always been an important issue for researchers in MCGDM problems. The earlier classes of fuzzy sets (hesitant fuzzy sets [16,17], hesitant fuzzy linguistic sets [50] and probabilistic hesitant fuzzy sets [58]) are not capable of dealing with fuzziness, hesitancy and uncertainty both qualitatively and quantitatively. Keeping in mind the limitations of HFLEs, and in order to fully describe the information provided by DMs, our aim is to propose a new class of set called HPFLS. This set can easily describe stochastic and non-stochastic uncertainties with hesitant information using both qualitative and quantitative terms. In this section,


we also develop the expected mean, variance, score and accuracy functions of HPFLEs, along with a comparison method. Some basic operations on HPFLEs are also defined in this section.

Definition 5. Let X and S be the reference set and the linguistic term set, respectively. An HPFLS H_PL on X is a mathematical object of the following form:
HPL = {< x, h PL ( p x ) > | x ∈ X } (4)

Here h_PL(p_x) = <s_θ((x)|pk), h(γi | pi) | γi, pi>, with s_θ(x) ∈ S, and h_PL(p_x) is a set of elements denoting the hesitant fuzzy linguistic information with probabilities in the set H_PL, where 0 ≤ γi ≤ 1, i = 1, 2, . . . , #h_PL, #h_PL is the number of possible elements in h_PL(p_x), pi ∈ [0, 1] is the hesitant probability of γi and ∑_{i=1}^{#h_PL} pi = 1. We call h_PL(p_x) an HPFLE, and H_PL is the set of all HPFLEs.

As an illustration of Definition 5, we assume two HPFLEs h_PL(p_x) = [(s1 |1), {0.2|0.3, 0.4|0.2, 0.5|0.5}] and h_PL(p_y) = [(s3 |1), {0.4|0.2, 0.5|0.4, 0.2|0.4}] on the reference set X = {x, y}. The object H_PL = [< x, (s1 |1), {0.2|0.3, 0.4|0.2, 0.5|0.5} >, < y, (s3 |1), {0.4|0.2, 0.5|0.4, 0.2|0.4} >] represents an HPFLS.
It is important to note that if the probabilities of the possible values in an HPFLE are equal, i.e., p1 = p2 = . . . = p#h, then the HPFLE reduces to an HFLE.

3.1. Some Basic Operations on Hesitant Probabilistic Fuzzy Linguistic Element (HPFLEs)
Based on the operational rules of hesitant fuzzy linguistic sets [50] and hesitant probabilistic fuzzy sets [61], we propose the following operational laws for h1_PL(p_x) = <s_θ((x)|p), h(γj | pj) | γj, pj> and h2_PL(p_y) = <s_θ((y)|p), h(γk | pk) | γk, pk>:

(1) (h1_PL)^λ = <s_{θ^λ}((x)|p), h(γj^λ | pj) | γj, pj> for some λ > 0
(2) λ(h1_PL) = <s_{λθ}((x)|p), h((1 − (1 − γj)^λ) | pj) | γj, pj> for some λ > 0
(3) h1_PL ⊕ h2_PL = <s_{θ(x)+θ(y)} | p, h((γj + γk − γj γk) | pj pk)>
(4) h1_PL ⊗ h2_PL = <s_{θ(x)θ(y)} | p, h((γj γk) | pj pk)>
(5) h1_PL ∪ h2_PL = <s_{θ(x)∨θ(y)} | p, h((γj ∨ γk) | (pj ∨ pk)/∑ Max(pj, pk))>
(6) h1_PL ∩ h2_PL = <s_{θ(x)∧θ(y)} | p, h((γj ∧ γk) | (pj ∧ pk)/∑ Min(pj, pk))>

Using the definitions of ⊕ and ⊗, it can easily be proved that h1_PL ⊕ h2_PL and h1_PL ⊗ h2_PL are commutative. In order to show that (h1_PL)^λ, λ(h1_PL), h1_PL ⊕ h2_PL, h1_PL ⊗ h2_PL, h1_PL ∪ h2_PL and h1_PL ∩ h2_PL are again HPFLEs, we assume that h1_PL(p_x) = [(s2 |1), {0.2|0.3, 0.4|0.2, 0.5|0.5}] and h2_PL(p_y) = [(s3 |1), {0.4|0.2, 0.5|0.4, 0.2|0.4}] are two HPFLEs on the reference set X = {x, y} and perform the operational laws (taking λ = 2) as follows:

(h1_PL)^λ = [(s4 |1), {0.04|0.3, 0.16|0.2, 0.25|0.5}]
λ(h1_PL) = [(s4 |1), {0.36|0.3, 0.64|0.2, 0.75|0.5}]
h1PL ⊕ h2PL = [(s5 |1), {0.54|0.06, 0.58|0.12, 0.28|0.12, 0.76|0.04, 0.82|0.08, 0.52|0.08, 0.8|0.1, 0.8|0.2, 0.5|0.2}]
h1PL ⊗ h2PL = [(s5 |1), {0.08|0.06, 0.1|0.12, 0.04|0.12, 0.16|0.04, 0.2|0.08, 0.08|0.08, 0.2|0.1, 0.25|0.2, 0.1|0.2}]
h1PL ∪ h2PL = [(s5 |1), {0.4|0.08, 0.5|0.11, 0.2|0.11, 0.4|0.06, 0.5|0.11, 0.4|0.11, 0.5|0.14, 0.5|0.14, 0.5|0.14}]
h1PL ∩ h2PL = [(s5 |1), {0.2|0.08, 0.2|0.13, 0.2|0.13, 0.4|0.08, 0.4|0.08, 0.2|0.08, 0.4|0.08, 0.5|0.17, 0.2|0.17}]
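The membership part of h1_PL ⊗ h2_PL above follows mechanically from law (4): each pair (γj γk | pj pk) comes from the Cartesian product of the two sets of value–probability pairs. A minimal Python sketch (the linguistic part is omitted, and the function name is ours, not the paper's):

```python
from itertools import product

def otimes_membership(h1, h2):
    # Law (4): membership part of h1_PL (x) h2_PL as pairs (gamma_j*gamma_k, p_j*p_k)
    return [(g1 * g2, p1 * p2) for (g1, p1), (g2, p2) in product(h1, h2)]

h1 = [(0.2, 0.3), (0.4, 0.2), (0.5, 0.5)]  # {0.2|0.3, 0.4|0.2, 0.5|0.5}
h2 = [(0.4, 0.2), (0.5, 0.4), (0.2, 0.4)]  # {0.4|0.2, 0.5|0.4, 0.2|0.4}

pairs = otimes_membership(h1, h2)
# nine pairs: 0.08|0.06, 0.1|0.12, 0.04|0.12, 0.16|0.04, 0.2|0.08,
# 0.08|0.08, 0.2|0.1, 0.25|0.2, 0.1|0.2 (the product probabilities again sum to 1)
```

The result reproduces the membership part of h1_PL ⊗ h2_PL listed above; the same Cartesian-product pattern applies to the other binary laws.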

3.2. Score and Accuracy Function for Hesitant Probabilistic Fuzzy Linguistic Element (HPFLE)
Comparison is indispensable if we intend to apply HPFLEs in decision making and optimization problems. Hence, we define the expected mean, variance, score and accuracy functions of an HPFLE in this subsection as follows:


Definition 6. The expected mean E(h_PL(p_x)) and variance V(h_PL(p_x)) of an HPFLE h_PL(p_x) = <s_θ((x)|pk), h(γi | pi) | γi, pi> are defined as follows:

E(h_PL(p_x)) = (∑_{i=1}^{#h} γi pi) / #h   (5)

V(h_PL(p_x)) = ∑_{i=1}^{#h} (γi − E(h_PL(p_x)))^2 pi   (6)

Definition 7. The score function S(h_PL(p_x)) and accuracy function A(h_PL(p_x)) of an HPFLE h_PL(p_x) = <s_θ((x)|pk), h(γi | pi) | γi, pi> are defined as follows:

S(h_PL(p_x)) = E(h_PL(p_x)) (s_θ(x)(pk))   (7)

A(h_PL(p_x)) = V(h_PL(p_x)) (s_θ(x)(pk))   (8)

Using the score and accuracy functions, two HPFLEs h_PL(p_x) and h_PL(p_y) can be compared as follows:

(1) If S(h_PL(p_x)) > S(h_PL(p_y)), then h_PL(p_x) > h_PL(p_y)
(2) If S(h_PL(p_x)) < S(h_PL(p_y)), then h_PL(p_x) < h_PL(p_y)
(3) If S(h_PL(p_x)) = S(h_PL(p_y)):
(a) If A(h_PL(p_x)) > A(h_PL(p_y)), then h_PL(p_x) > h_PL(p_y)
(b) If A(h_PL(p_x)) < A(h_PL(p_y)), then h_PL(p_x) < h_PL(p_y)
(c) If A(h_PL(p_x)) = A(h_PL(p_y)), then h_PL(p_x) = h_PL(p_y)

As an illustration of Definitions 6 and 7, we compare the two HPFLEs h_PL(p_x) = [(s2 |1), {0.2|0.3, 0.4|0.2, 0.5|0.5}] and h_PL(p_y) = [(s3 |1), {0.4|0.2, 0.5|0.4, 0.2|0.4}] using the score and accuracy functions as follows:

E(h_PL(p_x)) = (0.2 · 0.3 + 0.4 · 0.2 + 0.5 · 0.5)/3 = 0.13
E(h_PL(p_y)) = (0.4 · 0.2 + 0.5 · 0.4 + 0.2 · 0.4)/3 = 0.12
V(h_PL(p_x)) = ((0.2 − 0.13)^2 · 0.3 + (0.4 − 0.13)^2 · 0.2 + (0.5 − 0.13)^2 · 0.5)/3 = 0.0279
V(h_PL(p_y)) = ((0.4 − 0.12)^2 · 0.2 + (0.5 − 0.12)^2 · 0.4 + (0.2 − 0.12)^2 · 0.4)/3 = 0.0252
S(h_PL(p_x)) = s_{(2·1)·0.13} = s_0.26
S(h_PL(p_y)) = s_{(3·1)·0.12} = s_0.36
A(h_PL(p_x)) = s_{(2·1)·0.0279} = s_0.0558
A(h_PL(p_y)) = s_{(3·1)·0.0252} = s_0.0756

Since S(h_PL(p_y)) > S(h_PL(p_x)), by comparison rule (1) we have h_PL(p_y) > h_PL(p_x).


Different HPFLEs may have different numbers of PFNs. To make them equal in number, we extend the shorter HPFLEs until they have the same number of PFNs; this extension can be carried out according to the DM's risk behavior.
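The comparison above can be reproduced in a few lines following Eqs. (5) and (7); this is a sketch in Python with our own function names, not code from the paper:

```python
def expected_mean(h):
    # Eq. (5): E = (sum of gamma_i * p_i) / #h
    return sum(g * p for g, p in h) / len(h)

def score_subscript(theta, p_k, h):
    # Eq. (7): subscript of the linguistic score value s_{theta * p_k * E}
    return theta * p_k * expected_mean(h)

h_px = [(0.2, 0.3), (0.4, 0.2), (0.5, 0.5)]  # membership part of h_PL(p_x), linguistic part (s2|1)
h_py = [(0.4, 0.2), (0.5, 0.4), (0.2, 0.4)]  # membership part of h_PL(p_y), linguistic part (s3|1)

s_px = score_subscript(2, 1.0, h_px)  # ≈ 0.26, i.e. s_0.26
s_py = score_subscript(3, 1.0, h_py)  # ≈ 0.36, i.e. s_0.36
```

By comparison rule (1), the larger score subscript (0.36) ranks h_PL(p_y) first.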

4. Aggregation Operators for Hesitant Probabilistic Fuzzy Linguistic Set (HPFLS)


In group decision making problems, an imperative task is to aggregate the assessment information obtained from DMs about the alternatives against each criterion. Various aggregation operators for HFLS [18,50] and HPFS [58–63,67] have been developed in the past few decades. As we propose HPFLS for MCGDM problems, we also develop a few aggregation operators to aggregate information in the form of HPFLEs. In this section, we define the HPFL weighted and HPFL ordered weighted operators.


4.1. Hesitant Probabilistic Linguistic Fuzzy Weighted Aggregation Operators


Let HPL i = h PL ( p x ) = sθ i (( x )| p), h(γi | pi )|γi , pi , (i = 1, 2, . . . , n) be collection of
HPFLEs. Hesitant probabilistic fuzzy linguistic weighted averaging (HPFLWA) operator and hesitant
probabilistic fuzzy linguistic weighted geometric (HPFLWG) operator are defined as follows:

Definition 8. The HPFLWA operator is a mapping H_PL^n → H_PL such that

HPFLWA(H1, H2, . . . , Hn) = ⊕_{i=1}^{n} (ωi Hi)
= < s_{∑_{i=1}^{n} ωi θi}((x)|p), ∪_{γ1∈H1, γ2∈H2, ..., γn∈Hn} { (1 − ∏_{i=1}^{n} (1 − γi)^{ωi}) | p1 p2 . . . pn } >   (9)

Definition 9. The HPFLWG operator is a mapping H_PL^n → H_PL such that

HPFLWG(H1, H2, . . . , Hn) = ⊗_{i=1}^{n} (Hi)^{ωi}
= < s_{∑_{i=1}^{n} θi^{ωi}}((x)|p), ∪_{γ1∈H1, γ2∈H2, ..., γn∈Hn} { ∏_{i=1}^{n} (γi)^{ωi} | p1 p2 . . . pn } >   (10)

where ω = (ω1, ω2, . . . , ωn) is the weight vector of Hi (i = 1, 2, . . . , n) with ωi ∈ [0, 1] and ∑_{i=1}^{n} ωi = 1, and pi is the probability of γi in the HPFLE Hi (i = 1, 2, . . . , n). In particular, if ω = (1/n, 1/n, . . . , 1/n)^T, then the HPFLWA and HPFLWG operators reduce to the following hesitant probabilistic fuzzy linguistic averaging (HPFLA) operator and hesitant probabilistic fuzzy linguistic geometric (HPFLG) operator, respectively:
HPFLA(H1, H2, . . . , Hn) = ⊕_{i=1}^{n} ((1/n) Hi)
= < s_{∑_{i=1}^{n} (1/n) θi}((x)|p), ∪_{γ1∈H1, ..., γn∈Hn} { (1 − ∏_{i=1}^{n} (1 − γi)^{1/n}) | p1 p2 . . . pn } >   (11)

HPFLG(H1, H2, . . . , Hn) = ⊗_{i=1}^{n} (Hi)^{1/n}
= < s_{∑_{i=1}^{n} θi^{1/n}}((x)|p), ∪_{γ1∈H1, ..., γn∈Hn} { ∏_{i=1}^{n} (γi)^{1/n} | p1 p2 . . . pn } >   (12)
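Eq. (9) can be exercised directly; the sketch below (ours, not the authors' code) aggregates with equal weights the three h11 entries that appear in Tables 1–3 of the illustrative example in Section 6:

```python
from itertools import product

def hpflwa(hpfles, weights):
    # Eq. (9): HPFLWA over HPFLEs given as (theta, [(gamma, p), ...])
    theta = sum(w * t for w, (t, _) in zip(weights, hpfles))
    pairs = []
    for combo in product(*(h for _, h in hpfles)):
        keep = 1.0  # running product of (1 - gamma_i)^omega_i
        prob = 1.0  # running product of the probabilities
        for w, (g, p) in zip(weights, combo):
            keep *= (1.0 - g) ** w
            prob *= p
        pairs.append((1.0 - keep, prob))
    return theta, pairs

h11_d1 = (1, [(0.4, 0.3), (0.5, 0.7)])  # D1: (s1, 0.4|0.3, 0.5|0.7)
h11_d2 = (1, [(0.4, 1.0)])              # D2: (s1, 0.4|1.0)
h11_d3 = (2, [(0.2, 0.4), (0.4, 0.6)])  # D3: (s2, 0.2|0.4, 0.4|0.6)

theta, pairs = hpflwa([h11_d1, h11_d2, h11_d3], [1/3, 1/3, 1/3])
# theta ≈ 1.33 (s_1.3); pairs ≈ 0.34|0.12, 0.4|0.18, 0.38|0.28, 0.44|0.42
```

With equal weights this gives s_1.3 and the pairs {0.34|0.12, 0.4|0.18, 0.38|0.28, 0.44|0.42}, matching H11 as computed in Step 2 of Section 6.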

Lemma 1. ([17]) Let αi > 0, ωi > 0, i = 1, 2, . . . , n and ∑_{i=1}^{n} ωi = 1. Then ∏_{i=1}^{n} αi^{ωi} ≤ ∑_{i=1}^{n} ωi αi, and equality holds if and only if α1 = α2 = . . . = αn.

Theorem 1. Let Hi = h_PL(p_x) = {s_θi((x)|p), h(γi | pi) | γi, pi} (i = 1, 2, . . . , n) be a collection of HPFLEs and let ω = (ω1, ω2, . . . , ωn) be the weight vector of Hi (i = 1, 2, . . . , n) with ωi ∈ [0, 1] and ∑_{i=1}^{n} ωi = 1. Then

HPFLWG(H1, H2, . . . , Hn) ≤ HPFLWA(H1, H2, . . . , Hn)
HPFLG(H1, H2, . . . , Hn) ≤ HPFLA(H1, H2, . . . , Hn)

Proof. Using Lemma 1, we have the following inequality for any γi ∈ Hi (i = 1, 2, . . . , n):

∏_{i=1}^{n} (γi)^{ωi} ≤ ∑_{i=1}^{n} ωi γi = 1 − ∑_{i=1}^{n} ωi (1 − γi) ≤ 1 − ∏_{i=1}^{n} (1 − γi)^{ωi}


Thus, we obtain the following inequalities:

∪_{γ1∈H1, ..., γn∈Hn} { ∏_{i=1}^{n} (γi)^{ωi} | p1 p2 . . . pn } ≤ ∪_{γ1∈H1, ..., γn∈Hn} { (1 − ∏_{i=1}^{n} (1 − γi)^{ωi}) | p1 p2 . . . pn }

< s_{∑_{i=1}^{n} θi^{ωi}}((x)|p), ∪ { ∏_{i=1}^{n} (γi)^{ωi} | p1 p2 . . . pn } > ≤ < s_{∑_{i=1}^{n} ωi θi}((x)|p), ∪ { (1 − ∏_{i=1}^{n} (1 − γi)^{ωi}) | p1 p2 . . . pn } >

Using the definition of the score function S(h_PL(p_x)) = ((∑_{i=1}^{#h} γi pi)/#h)(s_θ(x)(pk)), we have HPFLWG(H1, H2, . . . , Hn) ≤ HPFLWA(H1, H2, . . . , Hn). Similarly, it can be proved that HPFLG(H1, H2, . . . , Hn) ≤ HPFLA(H1, H2, . . . , Hn). □

4.2. Hesitant Probabilistic Fuzzy Linguistic Ordered Weighted Aggregation Operators


Xu and Zhou [58] defined ordered weighted averaging and geometric aggregation operators
to aggregate hesitant probabilistic fuzzy information for MCGDM problems. In this sub section we
propose hesitant probabilistic fuzzy linguistic ordered weighted averaging (HPFLOWA) operator and
hesitant probabilistic fuzzy linguistic ordered weighted geometric (HPFLOWG) operators.
i
Let HPL = h PL ( p x ) = {sθ i (( x )| p), h(γi | pi )|γi , pi >}(i = 1, 2, . . . , n) be collection of HPFLEs,
n
ω = (ω1 , ω2 , . . . , ωn ) is weight vector of with ωi ∈ [0, 1] and ∑ ωi = 1. Let pi is the probability of
i=1
γi in the HPFLEs Hi (i = 1, 2, . . . , n), γσ(i) ith be the largest of Hi , pσ(i) is the probability of γσ(i) , and
ωσ(i) be the largest of ω. We develop the following two ordered weighted aggregation operators:

Definition 10. The HPFLOWA operator is a mapping H_PL^n → H_PL such that

HPFLOWA(H1, H2, . . . , Hn) = ⊕_{i=1}^{n} (ωi Hσ(i))
= < s_{∑_{i=1}^{n} ωi θσ(i)}((x)|p), ∪_{γσ(1)∈Hσ(1), ..., γσ(n)∈Hσ(n)} { (1 − ∏_{i=1}^{n} (1 − γσ(i))^{ωi}) | pσ(1) pσ(2) . . . pσ(n) } >   (13)

Definition 11. The HPFLOWG operator is a mapping H_PL^n → H_PL such that

HPFLOWG(H1, H2, . . . , Hn) = ⊗_{i=1}^{n} (Hσ(i))^{ωi}
= < s_{∑_{i=1}^{n} θσ(i)^{ωi}}((x)|p), ∪_{γσ(1)∈Hσ(1), ..., γσ(n)∈Hσ(n)} { ∏_{i=1}^{n} (γσ(i))^{ωi} | pσ(1) pσ(2) . . . pσ(n) } >   (14)

Similar to Theorem 1, the above ordered weighted operators have the relationship below:

HPFLOWG ( H1 , H2 , . . . , Hn ) ≤ HPFLOWA( H1 , H2 , . . . , Hn )

4.3. Properties of Proposed Weighted and Ordered Weighted Aggregation Operators


The following are a few properties of the proposed weighted and ordered weighted aggregation operators that follow immediately from their definitions.
Property 1. (Monotonicity). Let (H1, H2, . . . , Hn) and (H1′, H2′, . . . , Hn′) be two collections of HPFLEs. If Hi ≤ Hi′ for all i = 1, 2, . . . , n, then

HPFLWA(H1, H2, . . . , Hn) ≤ HPFLWA(H1′, H2′, . . . , Hn′)
HPFLWG(H1, H2, . . . , Hn) ≤ HPFLWG(H1′, H2′, . . . , Hn′)
HPFLOWA(H1, H2, . . . , Hn) ≤ HPFLOWA(H1′, H2′, . . . , Hn′)
HPFLOWG(H1, H2, . . . , Hn) ≤ HPFLOWG(H1′, H2′, . . . , Hn′)


Property 2. (Idempotency). If Hi = H for all i = 1, 2, . . . , n, then

HPFLWA(H1, H2, . . . , Hn) = HPFLWG(H1, H2, . . . , Hn) = HPFLOWA(H1, H2, . . . , Hn) = HPFLOWG(H1, H2, . . . , Hn) = H

Property 3. (Boundedness). All aggregation operators lie between the max and min operators:

min ( H1 , H2 , . . . , Hn ) ≤ HPFLWA( H1 , H2 , . . . , Hn ) ≤ max ( H1 , H2 , . . . , Hn )


min ( H1 , H2 , . . . , Hn ) ≤ HPFLWG( H1 , H2 , . . . , Hn ) ≤ max ( H1 , H2 , . . . , Hn )
min ( H1 , H2 , . . . , Hn ) ≤ HPFLOWA( H1 , H2 , . . . , Hn ) ≤ max ( H1 , H2 , . . . , Hn )
min ( H1 , H2 , . . . , Hn ) ≤ HPFLOWG( H1 , H2 , . . . , Hn ) ≤ max ( H1 , H2 , . . . , Hn )

5. Application of Hesitant Probabilistic Fuzzy Linguistic Set to Multi-Criteria Group Decision Making (MCGDM)
In this section, we propose an MCGDM method with hesitant probabilistic fuzzy linguistic information. Let {A1, A2, . . . , Am} be the set of alternatives to be ranked by a group of DMs {D1, D2, . . . , Dk} against the criteria {C1, C2, . . . , Cn}. Let w = (w1, w2, . . . , wn)^T be the weight vector of the criteria with 0 ≤ wj ≤ 1 and ∑_{j=1}^{n} wj = 1. H^k = (Hij^k)_{m×n} is the HPFL decision matrix, where Hij^k = {s_θ((x)|p), h(γt | pt) | γt, pt} (t = 1, 2, . . . , T) denotes the HPFLE given when alternative Ai is evaluated by the kth DM under criterion Cj. If two or more decision makers provide the same value, then that value appears only once in the decision matrix. The algorithm of the proposed HPFLS-based MCGDM method comprises the following steps:
Step 1: Construct the HPFL decision matrices H^k = (Hij^k)_{m×n} (i = 1, 2, . . . , m; j = 1, 2, . . . , n) according to the preference information provided by the DMs about the alternative Ai under the criterion Cj, denoted by the HPFLE H = h_PL(p_x) = {s_θ((x)|p), h(γt | pt) | γt, pt} (t = 1, 2, . . . , T).
Step 2: Use the proposed aggregation operators (HPFLWA and HPFLWG) given in Section 4 to aggregate the individual hesitant probabilistic fuzzy linguistic decision matrix information provided by each decision maker into a single HPFL decision matrix H = (Hij)_{m×n} (i = 1, 2, . . . , m; j = 1, 2, . . . , n).
Step 3: Calculate the overall criteria value Hi of each alternative Ai (i = 1, 2, . . . , m) by applying the HPFLWA or HPFLWG aggregation operator:

Hi = HPFLWA(Ci1, Ci2, . . . , Cin)  (i = 1, 2, . . . , m)
or
Hi = HPFLWG(Ci1, Ci2, . . . , Cin)  (i = 1, 2, . . . , m)

Step 4: Use the score or accuracy functions to calculate the score values S(h_PL(p_x)) and accuracy values A(h_PL(p_x)) of the aggregated hesitant probabilistic fuzzy linguistic preference values Hi (i = 1, 2, . . . , m).
Step 5: Rank all the alternatives Ai (i = 1, 2, . . . , m) in accordance with S(hi_PL(p_x)) or A(hi_PL(p_x)) (i = 1, 2, . . . , m).
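Steps 4 and 5 reduce to scoring each aggregated value by Eq. (7) and sorting. A minimal sketch with hypothetical aggregated values (the data below are illustrative only, not taken from the case study):

```python
def score_subscript(theta, p_k, h):
    # Eq. (7): subscript of s_{theta * p_k * E(h)}
    return theta * p_k * sum(g * p for g, p in h) / len(h)

# hypothetical overall values H_i = (theta, p_k, membership pairs)
overall = {
    "A1": (2, 1.0, [(0.2, 0.3), (0.4, 0.2), (0.5, 0.5)]),
    "A2": (3, 1.0, [(0.4, 0.2), (0.5, 0.4), (0.2, 0.4)]),
    "A3": (1, 1.0, [(0.6, 1.0)]),
}

ranking = sorted(overall, key=lambda a: score_subscript(*overall[a]), reverse=True)
# ranking: ['A3', 'A2', 'A1']
```

Ties in the score would be broken by the accuracy function, following comparison rule (3) of Section 3.2.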

6. Illustrative Example
An example is presented in this section to illustrate the implementation of the proposed MCGDM method with HPFL information. Further, a real case study is carried out to rank organizations using the proposed MCGDM method. We also compare the proposed method with the existing HFL-based MCGDM methods of Lin et al. [50] and Zhou et al. [68].


Example. Suppose that a group of three decision makers (D1, D2, D3) intends to rank four alternatives (A1, A2, A3, A4) on the basis of three criteria (C1, C2, C3). All DMs are considered equally important and equal weights are assigned to them. Each DM provides evaluation information on each alternative under each criterion in the form of HPFLEs with the following LTS:

S = {s0 = extremely poor, s1 = very poor, s2 = poor, s3 = fair, s4 = good, s5 = very good, s6 = extremely good}

Step 1: The HPFL decision matrices are constructed according to the preference information provided by DMs D1, D2 and D3 about the alternatives Ai (i = 1, 2, 3, 4) under the criteria Cj (j = 1, 2, 3). Tables 1–3 present the HPFL evaluation matrices provided by D1, D2 and D3, respectively.

Table 1. Hesitant probabilistic fuzzy linguistic (HPFL) decision matrix H^1 provided by D1.

C1 C2 C3
A1 {(s1 , 0.4|0.3, 0.5|0.7)} {(s2 , 0.5|0.2, 0.6|0.8)} {(s1 , 0.4|1.0)}
A2 {(s3 , 0.3|0.5, 0.4|0.5,)} {(s4 , 0.4|0.4, 0.5|0.6)} {(s2 , 0.2|0.6, 0.5|0.4)}
A3 {(s5 , 0.1|0.5, 0.5|0.5)} {(s4 , 0.3|0.3, 0.5|0.7)} {(s1 , 0.1|0.4, 0.2|0.6)}
A4 {(s2 , 0.4|0.6, 0.5|0.4)} {(s1 , 0.2|1.0)} {(s3 , 0.2|0.6, 0.5|0.4)}

Table 2. HPFL decision matrix H^2 provided by D2.

C1 C2 C3
A1 {(s1 , 0.4|1.0)} {(s2 , 0.2|0.4, 0.4|0.6)} {(s1 , 0.8|1.0)}
A2 {(s4 , 0.2|0.5, 0.4|0.5)} {(s5 , 0.4|0.4, 0.5|0.6)} {(s2 , 0.1|0.4, 0.5|0.6)}
A3 {(s2 , 0.2|0.5, 0.5|0.5)} {(s4 , 0.3|0.6, 0.4|0.4)} {(s2 , 0.1|1.0)}
A4 {(s3 , 0.3|0.5, 0.5|0.5)} {(s4 , 0.5|1.0)} {(s3 , 0.2|0.6, 0.5|0.4)}

Table 3. HPFL decision matrix H^3 provided by D3.

C1 C2 C3
A1 {(s2 , 0.2|0.4, 0.4|0.6)} {(s4 , 0.2|1.0)} {(s4 , 0.5|1.0)}
A2 {(s4 , 0.3|1.0)} {(s5 , 0.3|0.4, 0.4|0.6)} {(s3 , 0.3|0.6, 0.5|0.4)}
A3 {(s2 , 0.3|0.5, 0.4|0.5)} {(s2 , 0.2|0.5, 0.4|0.5)} {(s4 , 0.5|1.0)}
A4 {(s2 , 0.4|0.6, 0.5|0.4)} {(s4 , 0.5|1.0)} {(s4 , 0.3|0.5, 0.5|0.5)}

Step 2: Aggregate H^1, H^2 and H^3 into a single HPFL decision matrix H = (Hij)_{4×3} (i = 1, 2, . . . , 4; j = 1, 2, 3) using the HPFLWA and HPFLWG operators.


The following sample computation shows the aggregation of the HPFLEs h11^1, h11^2, h11^3 into a single H11 using the proposed HPFLWA and HPFLWG operators.

H11 = HPFLWA(h11^1, h11^2, h11^3) = HPFLWA([{(s1, 0.4|0.3, 0.5|0.7)}, {(s1, 0.4|1.0)}, {(s2, 0.2|0.4, 0.4|0.6)}])

With equal weights 1/3, the linguistic part is s_{(1+1+2)/3} = s1.3 and each membership value takes the form 1 − ((1 − γ1)(1 − γ2)(1 − γ3))^{1/3} with probability p1 p2 p3:

1 − ((1 − 0.4)(1 − 0.4)(1 − 0.2))^{1/3} = 0.34 with probability 0.3 · 1 · 0.4 = 0.12
1 − ((1 − 0.4)(1 − 0.4)(1 − 0.4))^{1/3} = 0.40 with probability 0.3 · 1 · 0.6 = 0.18
1 − ((1 − 0.5)(1 − 0.4)(1 − 0.2))^{1/3} = 0.38 with probability 0.7 · 1 · 0.4 = 0.28
1 − ((1 − 0.5)(1 − 0.4)(1 − 0.4))^{1/3} = 0.44 with probability 0.7 · 1 · 0.6 = 0.42

H11 = [{(s1.3, 0.34|0.12, 0.4|0.18, 0.38|0.28, 0.44|0.42)}]

H11 = HPFLWG(h11^1, h11^2, h11^3) = HPFLWG([{(s1, 0.4|0.3, 0.5|0.7)}, {(s1, 0.4|1.0)}, {(s2, 0.2|0.4, 0.4|0.6)}])

Here the linguistic part is s_{1^{1/3}+1^{1/3}+2^{1/3}} = s3.3 and each membership value takes the form (γ1 γ2 γ3)^{1/3} with probability p1 p2 p3:

(0.4 · 0.4 · 0.2)^{1/3} = 0.32 with probability 0.3 · 1 · 0.4 = 0.12
(0.4 · 0.4 · 0.4)^{1/3} = 0.40 with probability 0.3 · 1 · 0.6 = 0.18
(0.5 · 0.4 · 0.2)^{1/3} = 0.34 with probability 0.7 · 1 · 0.4 = 0.28
(0.5 · 0.4 · 0.4)^{1/3} = 0.43 with probability 0.7 · 1 · 0.6 = 0.42

H11 = [{(s3.3, 0.32|0.12, 0.4|0.18, 0.34|0.28, 0.43|0.42)}]

Similarly, the other HPFLEs of the HPFL decision matrices (Tables 1–3) are aggregated into the single HPFL decision matrix using the HPFLWA and HPFLWG operators, as shown in Tables 4 and 5.

Table 4. Aggregated hesitant probabilistic fuzzy linguistic element (HPFLE) group decision matrix
using hesitant probabilistic fuzzy linguistic weighted averaging (HPFLWA) operator.

C1 C2 C3
{(s1.3 , 0.34|0.12, 0.4|0.18, {(s2.7 , 0.32|0.08, 0.38|0.12,
A1 {(s2 , 0.61|1.0)}
0.38|0.28, 0.44|0.42)} 0.37|0.32, 0.42|0.48)}
{(s4.7 , 0.37|0.08, 0.41|0.12, {(s2.3 , 0.2|0.144, 0.35|0.22,
{(s3.7 , 0.27|0.25, 0.34|0.25,
A2 0.4|0.08, 0.44|0.18, 0.41|0.096 0.29|0.144, 0.42|0.144, 0.32|0.096
0.3|0.25, 0.37|0.25)}
0.44|0.144, 0.47|0.216)} 0.39|0.064, 0.44|0.144, 0.5|0.096)}
{(s3 , 0.2|0.125, 0.32|0.13, {(s3.3 , 0.27|0.09, 0.3|0.06,
A3 0.24|0.125, 0.35|0.125, 0.38|0.125 0.34|0.09, 0.37|0.06, 0.35|0.21 {(s2.3 , 0.26|0.4, 0.29|0.6)}
0.44|0.125, 0.47|0.125)} 0.41|0.21, 0.38|0.14, 0.44|0.14)}
{(s2.3 , 0.37|0.18, 0.44|0.18, {(s3.3 , 0.23|0.18, 0.35|0.12,
A4 0.41|0.18, 0.47|0.12, 0.41|0.12 {(s3 , 0.42|1.0)} 0.32|0.18, 0.42|0.12, 0.35|0.12
0.44|0.08, 0.47|0.12, 0.5|0.08)} 0.42|0.12, 0.44|0.08, 0.5|0.08)}
Table 5. Aggregated HPFLE group decision matrix using hesitant probabilistic fuzzy linguistic
weighted geometric (HPFLWG) operator.
A1:  C1 = {(s3.3, 0.32|0.12, 0.4|0.2, 0.3|0.3, 0.4|0.38)};  C2 = {(s4.1, 0.27|0.08, 0.3|0.1, 0.3|0.3, 0.36|0.5)};  C3 = {(s3.6, 0.54|1.0)}
A2:  C1 = {(s5, 0.36|0.08, 0.4|0.12, 0.4|0.22, 0.4|0.14, 0.42|0.1, 0.5|0.22)};  C2 = {(s4.6, 0.26|0.25, 0.3|0.25, 0.3|0.3, 0.4|0.2)};  C3 = {(s4, 0.18|0.14, 0.3|0.22, 0.2|0.14, 0.4|0.1, 0.2|0.1, 0.3|0.06)}
A3:  C1 = {(s4.2, 0.18|0.13, 0.2|0.13, 0.3|0.15, 0.3|0.13, 0.42|0.2, 0.5|0.23)};  C2 = {(s4.4, 0.26|0.09, 0.3|0.06, 0.34|0.09, 0.37|0.06, 0.35|0.21, 0.41|0.21, 0.38|0.14, 0.44|0.14)};  C3 = {(s3.8, 0.17|0.4, 0.2|0.6)}
A4:  C1 = {(s4, 0.36|0.18, 0.43|0.24, 0.39|0.18, 0.46|0.12, 0.42|0.08, 0.46|0.12, 0.5|0.08)};  C2 = {(s4.2, 0.37|1.0)};  C3 = {(s4.5, 0.23|0.18, 0.31|0.16, 0.27|0.28, 0.42|0.18, 0.42|0.12, 0.5|0.08)}
Mathematics 2018, 6, 47
Step 3: The aggregated assessment of each alternative Ai (i = 1, 2, 3, 4) over all criteria is
calculated using the HPFLWA and HPFLWG aggregation operators with criteria weights
w1 = 0.4, w2 = 0.3, w3 = 0.3 as follows:
H1 = HPFLWA(C11, C12, C13)
   = HPFLWA[(s1.3, 0.34|0.12, 0.4|0.18, 0.38|0.28, 0.44|0.42), (s2.7, 0.32|0.08, 0.38|0.12, 0.37|0.32, 0.42|0.48), (s2, 0.61|1.0)]
   = (s1.93, 0.43|0.01, 0.44|0.014, 0.44|0.038, 0.45|0.058, 0.45|0.014, 0.46|0.022, 0.46|0.058, 0.47|0.086,
      0.44|0.022, 0.45|0.034, 0.45|0.09, 0.47|0.134, 0.46|0.034, 0.47|0.05, 0.47|0.134, 0.49|0.202)

H1 = HPFLWG(C11, C12, C13)
   = HPFLWG[(s3.3, 0.32|0.12, 0.4|0.2, 0.3|0.3, 0.4|0.38), (s4.1, 0.27|0.08, 0.3|0.1, 0.3|0.3, 0.36|0.5), (s3.6, 0.54|1.0)]
   = (s1.93, 0.4|0.01, 0.42|0.014, 0.41|0.038, 0.43|0.058, 0.42|0.014, 0.45|0.022, 0.44|0.058, 0.46|0.086,
      0.41|0.022, 0.44|0.034, 0.43|0.09, 0.45|0.134, 0.44|0.034, 0.46|0.05, 0.46|0.134, 0.48|0.202)
Similarly, the other elements of the HPFL decision matrices (Tables 4 and 5) are aggregated into the
overall HPFL decision matrices using the HPFLWA and HPFLWG operators, as shown in Tables 6 and 7.
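The criteria-level HPFLWA step can be sketched the same way. The form below (a weighted average of the linguistic indices, the probabilistic sum 1 - prod(1 - mu)^w for the memberships, and a product of the probabilities) is reconstructed from the worked H1 computation rather than quoted from the paper; it reproduces the index s1.93 and the leading pair 0.43|0.01 of Table 6 with weights (0.4, 0.3, 0.3), which are the weights the printed index implies, since 0.4(1.3) + 0.3(2.7) + 0.3(2) = 1.93.

```python
import math
from itertools import product

def hpflwa(hpfles, weights):
    """HPFLWA over criteria, as reconstructed from the worked H1 example:
    weighted-average linguistic index, probabilistic sum of memberships,
    product of occurrence probabilities."""
    s_index = sum(i * w for (i, _), w in zip(hpfles, weights))
    pairs = []
    for combo in product(*(hfe for _, hfe in hpfles)):
        mu = 1 - math.prod((1 - m) ** w for (m, _), w in zip(combo, weights))
        p = math.prod(q for _, q in combo)
        pairs.append((round(mu, 2), round(p, 2)))
    return s_index, pairs

# Row A1 of Table 4, encoded as (index, [(membership, probability), ...])
c11 = (1.3, [(0.34, 0.12), (0.4, 0.18), (0.38, 0.28), (0.44, 0.42)])
c12 = (2.7, [(0.32, 0.08), (0.38, 0.12), (0.37, 0.32), (0.42, 0.48)])
c13 = (2.0, [(0.61, 1.0)])
s, pairs = hpflwa([c11, c12, c13], [0.4, 0.3, 0.3])
# s == 1.93 and pairs[0] == (0.43, 0.01), matching the first entry of Table 6
```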
Table 6. Collective HPFLE group decision matrix using HPFLWA operator.
{(s1.93 , 0.43|0.01, 0.44|0.014, 0.44|0.038, 0.45|0.058, 0.45|0.014, 0.46|0.022, 0.46|0.058, 0.47|0.086, 0.44|0.022,
A1
0.45|0.034, 0.45|0.09, 0.47|0.134, 0.46|0.034, 0.47|0.05, 0.47|0.134, 0.49|0.202)}
{(s3.57 , 0.282|0.003, 0.321|0.004, 0.307|0.003, 0.345|0.006, 0.295|0.003, 0.332|0.005, 0.332|0.008, 0.323|0.004,
0.36|0.006, 0.347|0.004, 0.383|0.01, 0.335|0.005, 0.37|0.008, 0.37|0.012, 0.306|0.003, 0.344|0.004, 0.33|0.003,
0.367|0.006, 0.318|0.003, 0.354|0.005, 0.354|0.008, 0.345|0.003, 0.381|0.004, 0.368|0.003, 0.403|0.006, 0.357|0.003,
A2 0.391|0.005, 0.391|0.008, 0.315|0.002, 0.353|0.003, 0.339|0.002, 0.375|0.004, 0.327|0.002, 0.363|0.003, 0.363|0.005,
0.338|0.001, 0.374|0.002, 0.361|0.001, 0.396|0.003, 0.35|0.002, 0.384|0.002, 0.384|0.003, 0.354|0.003, 0.39|0.004,
0.377|0.003, 0.411|0.006, 0.366|0.003, 0.399|0.005, 0.399|0.008, 0.375|0.002, 0.41|0.003, 0.397|0.002, 0.431|0.004,
0.387|0.002, 0.419|0.003, 0.419|0.005)}
{(s2.9 , 0.241|0.005, 0.287|0.005, 0.256|0.005, 0.301|0.005, 0.312|0.005, 0.341|0.005, 0.354|0.005, 0.252|0.003,
0.298|0.003, 0.268|0.003, 0.312|0.003, 0.323|0.003, 0.351|0.003, 0.364|0.003, 0.262|0.005, 0.307|0.005, 0.277|0.005,
0.321|0.005, 0.332|0.005, 0.359|0.005, 0.372|0.005, 0.274|0.003, 0.318|0.003, 0.288|0.003, 0.332|0.003, 0.342|0.003,
0.369|0.003, 0.382|0.003, 0.266|0.011,0.31|0.011, 0.281|0.011, 0.324|0.011, 0.335|0.011, 0.362|0.011, 0.375|0.011,
0.287|0.011, 0.33|0.011, 0.301|0.011, 0.344|0.011, 0.354|0.011, 0.381|0.011, 0.393|0.011, 0.277|0.007, 0.321|0.007,
0.292|0.007, 0.335|0.007, 0.345|0.007, 0.372|0.007, 0.385|0.007, 0.298|0.007, 0.34|0.007, 0.312|0.007, 0.354|0.007,
A3 0.364|0.007, 0.39|0.007, 0.402|0.007, 0.241|0.007, 0.287|0.007,0.256|0.007, 0.301|0.007, 0.312|0.007, 0.341|0.007,
0.354|0.007, 0.252|0.005, 0.298|0.005, 0.268|0.005, 0.312|0.005, 0.323|0.005, 0.351|0.005, 0.364|0.005, 0.262|0.007,
0.307|0.007, 0.277|0.007, 0.321|0.007, 0.332|0.007, 0.359|0.007, 0.372|0.007, 0.274|0.005, 0.318|0.005, 0.288|0.005,
0.332|0.005, 0.342|0.005, 0.369|0.005, 0.382|0.005, 0.266|0.016, 0.31|0.016, 0.281|0.016, 0.324|0.016, 0.335|0.016,
0.362|0.016, 0.375|0.016, 0.287|0.016, 0.33|0.016, 0.301|0.016, 0.344|0.016, 0.354|0.016, 0.381|0.016, 0.393|0.016,
0.277|0.011, 0.321|0.011, 0.292|0.011, 0.335|0.011, 0.345|0.011, 0.372|0.011, 0.385|0.011, 0.298|0.011, 0.34|0.011,
0.312|0.011, 0.354|0.011, 0.364|0.011, 0.39|0.011, 0.402|0.011)}
{(s2.83 , 0.346|0.032, 0.375|0.032, 0.362|0.032, 0.39|0.022, 0.362|0.022, 0.377|0.014, 0.39|0.022, 0.405|0.014,
0.376|0.022, 0.404|0.022, 0.391|0.022, 0.418|0.014, 0.391|0.014, 0.406|0.01, 0.418|0.014, 0.432|0.01, 0.368|0.032,
0.396|0.032, 0.383|0.032, 0.41|0.022, 0.383|0.022, 0.398|0.014, 0.41|0.022, 0.424|0.014, 0.397|0.022, 0.423|0.022,
0.411|0.022, 0.437|0.014, 0.411|0.014, 0.426|0.01, 0.437|0.014, 0.451|0.01, 0.376|0.022, 0.404|0.022, 0.391|0.022,
A4
0.418|0.014, 0.391|0.014, 0.406|0.01, 0.418|0.014, 0.432|0.01, 0.397|0.022,0.423|0.022, 0.411|0.022, 0.437|0.014,
0.411|0.014, 0.426|0.01, 0.437|0.014, 0.451|0.01,0.405|0.014, 0.431|0.014, 0.419|0.014, 0.445|0.01, 0.419|0.01,
0.433|0.006, 0.445|0.01, 0.458|0.006, 0.425|0.014, 0.45|0.014, 0.438|0.014, 0.463|0.01, 0.438|0.01, 0.452|0.006,
0.463|0.01, 0.476|0.006)}
Table 7. Collective HPFLE group decision matrix using HPFLWG operator.
{(s1.93 , 0.4|0.01, 0.42|0.014, 0.41|0.038, 0.43|0.058, 0.42|0.014, 0.45|0.022, 0.44|0.058, 0.46|0.086, 0.41|0.022,
A1
0.44|0.034, 0.43|0.09, 0.45|0.134, 0.44|0.034, 0.46|0.05, 0.46|0.134, 0.48|0.202)}
{(s3.57 , 0.27|0.003, 0.31|0.004, 0.29|0.003, 0.32|0.006, 0.28|0.003, 0.31|0.005, 0.31|0.008, 0.32|0.004, 0.36|0.006,
0.34|0.004, 0.38|0.01, 0.33|0.005, 0.37|0.008, 0.36|0.012, 0.3|0.003, 0.34|0.004, 0.33|0.003, 0.36|0.006, 0.31|0.003,
0.35|0.005, 0.34|0.008, 0.34|0.003, 0.38|0.004, 0.36|0.003, 0.4|0.006, 0.35|0.003, 0.39|0.005, 0.38|0.008, 0.31|0.002,
A2
0.35|0.003, 0.34|0.002, 0.37|0.004, 0.32|0.002, 0.36|0.003, 0.35|0.005, 0.33|0.001, 0.37|0.002, 0.36|0.001, 0.39|0.003,
0.34|0.002, 0.38|0.002, 0.37|0.003, 0.34|0.003, 0.39|0.004, 0.37|0.003, 0.41|0.006, 0.35|0.003, 0.39|0.005, 0.39|0.008,
0.36|0.002, 0.4|0.003, 0.38|0.002, 0.42|0.004, 0.37|0.002, 0.41|0.003, 0.4|0.005)}
{(s2.9 , 0.24|0.005, 0.29|0.005, 0.26|0.005, 0.3|0.005, 0.31|0.005, 0.32|0.005, 0.33|0.005, 0.25|0.003, 0.3|0.003,
0.27|0.003, 0.31|0.003, 0.32|0.003, 0.34|0.003, 0.35|0.003, 0.25|0.005, 0.3|0.005, 0.27|0.005, 0.32|0.005, 0.33|0.005,
0.35|0.005, 0.36|0.005, 0.26|0.003, 0.31|0.003, 0.28|0.003, 0.33|0.003, 0.34|0.003, 0.36|0.003, 0.37|0.003,
0.26|0.011,0.31|0.011, 0.28|0.011, 0.32|0.011, 0.33|0.011, 0.35|0.011, 0.36|0.011, 0.27|0.011, 0.32|0.011, 0.29|0.011,
0.34|0.011, 0.35|0.011, 0.37|0.011, 0.38|0.011, 0.26|0.007, 0.32|0.007, 0.28|0.007, 0.33|0.007, 0.34|0.007, 0.36|0.007,
0.37|0.007, 0.28|0.007, 0.33|0.007, 0.3|0.007, 0.34|0.007, 0.35|0.007, 0.37|0.007, 0.38|0.007, 0.24|0.007,
A3
0.29|0.007,0.26|0.007, 0.3|0.007, 0.31|0.007, 0.32|0.007, 0.33|0.007, 0.25|0.005, 0.3|0.005, 0.27|0.005, 0.31|0.005,
0.32|0.005, 0.34|0.005, 0.35|0.005, 0.25|0.007, 0.3|0.007, 0.27|0.007, 0.32|0.007, 0.33|0.007, 0.35|0.007, 0.36|0.007,
0.26|0.005, 0.31|0.005, 0.28|0.005, 0.33|0.005, 0.34|0.005, 0.36|0.005, 0.37|0.005, 0.26|0.016, 0.31|0.016, 0.28|0.016,
0.32|0.016, 0.33|0.016, 0.35|0.016, 0.36|0.016, 0.27|0.016, 0.32|0.016, 0.29|0.016, 0.34|0.016, 0.35|0.016, 0.37|0.016,
0.38|0.016, 0.26|0.011, 0.32|0.011, 0.28|0.011, 0.33|0.011, 0.34|0.011, 0.36|0.011, 0.37|0.011, 0.28|0.011, 0.33|0.011,
0.3|0.011, 0.34|0.011, 0.35|0.011, 0.37|0.011, 0.38|0.011)}
{(s2.83 , 0.33|0.032, 0.36|0.032, 0.35|0.032, 0.37|0.022, 0.35|0.022, 0.36|0.014, 0.37|0.022, 0.38|0.014, 0.37|0.022,
0.4|0.022, 0.39|0.022, 0.41|0.014, 0.39|0.014, 0.4|0.01, 0.41|0.014, 0.42|0.01, 0.36|0.032, 0.39|0.032, 0.38|0.032,
0.4|0.022, 0.38|0.022, 0.39|0.014, 0.4|0.022, 0.41|0.014, 0.4|0.022, 0.42|0.022, 0.41|0.022, 0.44|0.014, 0.41|0.014,
A4 0.43|0.01, 0.44|0.014, 0.45|0.01, 0.37|0.022, 0.4|0.022, 0.39|0.022, 0.41|0.014, 0.39|0.014, 0.4|0.01, 0.41|0.014,
0.42|0.01, 0.4|0.022,0.42|0.022, 0.41|0.022, 0.44|0.014, 0.41|0.014, 0.43|0.01, 0.44|0.014, 0.45|0.01, 0.4|0.014,
0.43|0.014, 0.42|0.014, 0.44|0.01, 0.42|0.01, 0.43|0.006, 0.44|0.01, 0.46|0.006, 0.42|0.014, 0.45|0.014,0.43|0.014,
0.46|0.01, 0.43|0.01, 0.45|0.006, 0.46|0.01, 0.47|0.006)}
Step 4: The score values S(hiPL(px)) (i = 1, 2, 3, 4) of the alternatives Ai (i = 1, 2, 3, 4) are
calculated and shown in Table 8.
Table 8. Score values for the alternatives using HPFLWA and HPFLWG operators.

Score          HPFLWA      HPFLWG
S(h1PL(px))    S0.05652    S0.05409
S(h2PL(px))    S0.00559    S0.00545
S(h3PL(px))    S0.00745    S0.00723
S(h4PL(px))    S0.01901    S0.01874
Step 5: Finally, the alternatives Ai (i = 1, 2, 3, 4) are ranked in accordance with the score values
S(hiPL(px)), as shown in Table 9.
Table 9. Ranking of alternatives using proposed HPFLWA and HPFLWG operators.

Method                   Ranking              Best/Worst
Using HPFLWA operator    A1 > A4 > A3 > A2    A1/A2
Using HPFLWG operator    A1 > A4 > A3 > A2    A1/A2
Table 9 confirms that, with both the proposed HPFLWA and HPFLWG operators, the best and worst
alternatives are A1 and A2, respectively.
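Steps 4 and 5 amount to sorting the alternatives by the score subscripts reported in Table 8; a minimal sketch:

```python
def rank(scores):
    """Order alternatives by decreasing score value (Step 5)."""
    return sorted(scores, key=scores.get, reverse=True)

# Subscripts of the score values in Table 8
scores_wa = {"A1": 0.05652, "A2": 0.00559, "A3": 0.00745, "A4": 0.01901}
scores_wg = {"A1": 0.05409, "A2": 0.00545, "A3": 0.00723, "A4": 0.01874}

print(rank(scores_wa))  # ['A1', 'A4', 'A3', 'A2'], as in Table 9
```

Both operators yield the same ordering, which is why Table 9 reports identical rankings.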
6.1. A Real Case Study
A real case study is undertaken to rank seven organizations, namely State Bank of India (A1), InfoTech
Enterprises (A2), ITC (A3), H.D.F.C. Bank (A4), Tata Steel (A5), Tata Motors (A6) and Bajaj Finance (A7),
on the basis of their performance against the following four criteria.
1. Earnings per share (EPS) of the company (C1)
2. Face value (C2)
3. Book value (C3)
4. P/C ratio (put-call ratio) of the company (C4)
In this real case study, C1, C2, and C3 are benefit criteria, while C4 is a cost criterion. Real data for
each alternative against each criterion are retrieved from http://www.moneycontrol.com for the period
20.7.2017 to 27.7.2017. Table 10 shows their average values.
Table 10. Average of the actual numerical values of the criteria.
C1 C2 C3 C4
A1 13.15 1.00 196.53 19.27
A2 61.18 5.00 296.12 14.98
A3 8.54 1.00 37.31 30.52
A4 59.07 2.00 347.59 28.50
A5 22.25 2.00 237.82 5.98
A6 35.47 1.12 511.31 7.95
A7 36.64 2.00 174.60 45.39
To construct the hesitant fuzzy decision matrix (Table 11), we use the method proposed by Bisht and
Kumar [69] and fuzzify the data of Table 10 using triangular and Gaussian membership functions.
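The fuzzification step can be illustrated with generic triangular and Gaussian membership functions. The parameters below are hypothetical placeholders, since the fitted parameters of the Bisht and Kumar method [69] are not reproduced in this excerpt, so the sketch does not regenerate Table 11 exactly.

```python
import math

def triangular(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def gaussian(x, mean, sigma):
    """Gaussian membership centred at mean with spread sigma."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# EPS of A2 from Table 10, evaluated with hypothetical parameters;
# the two degrees for one datum form a hesitant element as in Table 11
eps = 61.18
hesitant_pair = (round(triangular(eps, 8.54, 61.18, 70.0), 4),
                 round(gaussian(eps, 59.07, 20.0), 4))
```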
Table 11. Hesitant fuzzy decision matrix.
C1 C2 C3 C4
A1 0.3784, 0.3029 0.6065, 0.50 0.7545, 0.6247 0.997, 0.9614
A2 0.9676, 0.8718 0.6065, 0.50 0.8964, 0.7662 0.696, 0.5743
A3 0.1534, 0.0318 0.6065, 0.50 0.1368, 0.0027 0.778, 0.6457
A4 0.9997, 0.9959 0.6065, 0.50 0.8122, 0.6775 0.7748, 0.6429
A5 0.949, 0.8382 0.6065, 0.50 0.8655, 0.7312 0.2278, 0.14
A6 0.7445, 0.6159 0.7491, 0.62 0.9843, 0.9111 0.512, 0.4214
A7 0.8197, 0.6847 0.6065, 0.50 0.933, 0.8138 0.3055, 0.23
Probabilities are associated with the elements of the hesitant fuzzy decision matrix (Table 11) to
convert it into the probabilistic hesitant fuzzy decision matrix I = [IPij = (μij | pij)]m×n. The probabilities
associated with the first row of the hesitant fuzzy decision matrix (Table 11) are computed as follows:
p11^1 = 0.3784/(0.3784 + 0.3029) = 0.5554,   p11^2 = 0.3029/(0.3784 + 0.3029) = 0.4446

p12^1 = 0.6065/(0.6065 + 0.5) = 0.5481,   p12^2 = 0.5/(0.6065 + 0.5) = 0.4519

p14^1 = 0.9614/(0.9614 + 0.997) = 0.9285,   p14^2 = 0.997/(0.9614 + 0.997) = 0.0715
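For the cells shown above, the probabilities are the hesitant memberships normalised by their sum within the cell; a minimal sketch of that normalisation (the cost criterion C4 is printed with a different split, so only the first, benefit-criterion computations are reproduced here):

```python
def attach_probabilities(memberships):
    """Turn a hesitant cell into a probabilistic hesitant cell by
    normalising the memberships into occurrence probabilities."""
    total = sum(memberships)
    return [(m, round(m / total, 4)) for m in memberships]

row = attach_probabilities([0.3784, 0.3029])
# [(0.3784, 0.5554), (0.3029, 0.4446)], the A1/C1 cell of Table 12
```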
Similarly, all elements of the hesitant fuzzy decision matrix are associated with probabilities, and the
probabilistic hesitant fuzzy decision matrix (Table 12) is obtained. Table 13 shows the corresponding
hesitant probabilistic fuzzy linguistic decision matrix.
Table 12. Probabilistic hesitant fuzzy decision matrix.
C1 C2 C3 C4
{(0.3784|0.5554), {(0.6065|0.5481), {(0.7545|0.5471), {(0.997|0.0715),
A1
(0.3029|0.4446)} (0.5|0.4519)} (0.6247|0.4529)} (0.9614|0.9285)}
{(0.9676|0.5261), {(0.6065|0.5481), {(0.8964|0.5392), {(0.696|0.4166),
A2
(0.8718|0.4739)} (0.5|0.4519)} (0.7662|0.4608)} (0.5743|0.5834)}
{(0.1534|0.8283), {(0.6065|0.5481), {(0.1368|0.9806), {(0.778|0.3852),
A3
(0.0318|0.1717)} (0.5|0.4519)} (0.0027|0.0194)} (0.6457|0.6148)}
{(0.9997|0.501), {(0.6065|0.5481), {(0.8122|0.5452), {(0.7748|0.3867),
A4
(0.9959|0.499)} (0.5|0.4519)} (0.6775|0.4548)} (0.6429|0.6133)}
{(0.949|0.531), {(0.6065|0.5481), {(0.8655|0.542), {(0.2278|0.4731),
A5
(0.8382|0.469)} (0.5|0.4519)} (0.7312|0.458)} (0.14|0.5269)}
{(0.7445|0.5473), {(0.7492|0.5472), {(0.9843|0.5193), {(0.512|0.4575),
A6
(0.6159|0.4527)} (0.62|0.4528)} (0.9111|0.4807)} (0.4214|0.5425)}
{(0.8197|0.5449), {(0.6065|0.5481), {(0.933|0.5341), {(0.3055|0.4742),
A7
(0.6847|0.4551)} (0.5|0.4519)} (0.8138|0.4659)} (0.23|0.5258)}
Table 13. Hesitant probabilistic fuzzy linguistic decision matrix.
C1 C2 C3 C4
{s1 , (0.3784|0.5554), {s1 , (0.6065|0.5481), {s2 , (0.7545|0.5471), {s3 , (0.997|0.0715),
A1
(0.3029|0.4446)} (0.5|0.4519)} (0.6247|0.4529)} (0.9614|0.9285)}
{s3 , (0.9676|0.5261), {s1 , (0.6065|0.5481), {s2 , (0.8964|0.5392), {s2 , (0.696|0.4166),
A2
(0.8718|0.4739)} (0.5|0.4519)} (0.7662|0.4608)} (0.5743|0.5834)}
{s0 , (0.1534|0.8283), {s1 , (0.6065|0.5481), {s0 , (0.1368|0.9806), {s2 , (0.778|0.3852),
A3
(0.0318|0.1717)} (0.5|0.4519)} (0.0027|0.0194)} (0.6457|0.6148)}
{s3 , (0.9997|0.501), {s1 , (0.6065|0.5481), {s2 , (0.8122|0.5452), {s2 , (0.7748|0.3867),
A4
(0.9959|0.499)} (0.5|0.4519)} (0.6775|0.4548)} (0.6429|0.6133)}
{s3 , (0.949|0.531), {s1 , (0.6065|0.5481), {s3 , (0.8655|0.542), {s0 , (0.2278|0.4731),
A5
(0.8382|0.469)} (0.5|0.4519)} (0.7312|0.458)} (0.14|0.5269)}
{s2 , (0.7445|0.5473), {s2 , (0.7492|0.5472), {s3 , (0.9843|0.5193), {s1 , (0.512|0.4575),
A6
(0.6159|0.4527)} (0.62|0.4528)} (0.9111|0.4807)} (0.4214|0.5425)}
{s2 , (0.8197|0.5449), {s1 , (0.6065|0.5481), {s3 , (0.933|0.5341), {s0 , (0.3055|0.4742),
A7
(0.6847|0.4551)} (0.5|0.4519)} (0.8138|0.4659)} (0.23|0.5258)}
Step 2: The assessment of each alternative Ai (i = 1, 2, ..., 7) against each criterion Cj (j = 1, 2, 3, 4)
is aggregated using the HPFLWA aggregation operator (Equation (9)) as follows:
H1 = HPFLWA(C11, C12, C13, C14)
   = HPFLWA[{s1, (0.3784|0.5554), (0.3029|0.4446)}, {s1, (0.6065|0.5481), (0.5|0.4519)},
            {s2, (0.7545|0.5471), (0.6247|0.4529)}, {s3, (0.997|0.0715), (0.9614|0.9285)}]
   = {s1.75, (0.884|0.012), (0.871|0.01), (0.877|0.1055), (0.8811|0.0095), (0.8678|0.0079),
      (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446),
      (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
      (0.3029|0.4446)}
In the aggregation of the assessments of the alternatives, all criteria are assigned an equal weight
of 0.25. Similarly, the other elements of the HPFL decision matrix (Table 13) are aggregated, and the
collective HPFL decision matrix (Table 14) is obtained.
Table 14. Collective hesitant probabilistic fuzzy linguistic decision matrix.
A1 {s1.75, (0.884|0.012), (0.871|0.01), (0.877|0.1055), (0.8811|0.0095), (0.8678|0.0079), (0.3029|0.4446),
(0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446),
(0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446)}
{s2 , (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
A2 (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
(0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446)}
{s0.75 , (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
A3 (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
(0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446)}
{s2 , (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
A4 (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
(0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446)}
{s1.75 , (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
A5 (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
(0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446)}
{s2 , (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
A6 (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446),(0.3784|0.5554),
(0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446)}
{s1.5 , (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
A7 (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554),
(0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446), (0.3784|0.5554), (0.3029|0.4446)}
Step 3: The score values S(hiPL(px)) (i = 1, 2, ..., 7) of the alternatives Ai (i = 1, 2, ..., 7) are
calculated using Equation (7) and are shown as follows:
S(h1PL(px)) = S0.0902,  S(h2PL(px)) = S0.092,  S(h3PL(px)) = S0.019,
S(h4PL(px)) = S0.101,  S(h5PL(px)) = S0.076,  S(h6PL(px)) = S0.094,
S(h7PL(px)) = S0.063
Step 4: Finally, the alternatives Ai (i = 1, 2, ..., 7) are ranked as A4 > A6 > A2 > A1 > A5 >
A7 > A3 in accordance with the score values S(hiPL(px)).
6.2. Comparative Analysis
In this section, we compare the proposed HPFL-based MCGDM method with existing HFL-based
methods. We apply the proposed method to two different problems, adapted from Zhou et al. [67] and
Lin et al. [50], and compare the ranking results. In order to apply the proposed HPFL-based MCGDM
method to the examples taken from Lin et al. [50] and Zhou et al. [67], we have considered the
probability of each element of the HFL decision matrices as unity.
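The unit-probability lifting used for both comparisons can be written directly; the tuple layout (linguistic index, list of memberships) is an assumed encoding of the HFLEs in Tables 15 and 17:

```python
def hfl_to_hpfl(hfle):
    """Attach probability 1.0 to every membership of a hesitant fuzzy
    linguistic element, turning HFL data into HPFL data."""
    index, memberships = hfle
    return index, [(m, 1.0) for m in memberships]

# First cell of Table 15: <s5, (0.3, 0.5)>
cell = hfl_to_hpfl((5, [0.3, 0.5]))
# (5, [(0.3, 1.0), (0.5, 1.0)])
```

After this lifting, the proposed HPFL operators reduce to ordinary HFL aggregation, which is why the rankings in Tables 16 and 18 can be compared directly.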
6.2.1. Comparison 1
In Comparison 1, the proposed HPFL-based MCGDM method is applied to the following HFL
decision matrix (Table 15) of the problem taken from Lin et al. [50].
Table 15. Hesitant fuzzy linguistic decision matrix ([50]).
G1 G2 G3 G4
A1 <s5 , (0.3, 0.5)> <s3 , (0.6, 0.7, 0.8)> <s2 , (0.7, 0.8)> <s4 , (0.8, 0.9)>
A2 <s2 , (0.3, 0.4, 0.5)> <s5 , (0.6, 0.9)> <s3 , (0.6, 0.7)> <s5 , (0.4, 0.5)>
A3 <s6 , (0.4, 0.6)> <s2 , (0.7, 0.8)> <s5 , (0.3, 0.5, 0.7)> <s3 , (0.6, 0.7)>
A4 <s5 , (0.7, 0.9)> <s1 , (0.3, 0.4)> <s7 , (0.5, 0.7)> <s2 , (0.3, 0.5)>
A5 <s4 , (0.2, 0.3)> <s2 , (0.6, 0.7)> <s4 , (0.5, 0.6)> <s2 , (0.7, 0.8, 0.9)>
Table 16 shows the ranking results of the alternatives obtained using the proposed HPFL-based
method and the existing HFL-based MCDM method of Lin et al. [50].
Table 16. Comparison of ranking of alternatives.

Method             Ranking                   Best Alternative/Worst Alternative
Proposed           A4 > A1 > A3 > A2 > A5    A4/A5
Lin et al. [50]    A4 > A3 > A1 > A2 > A5    A4/A5
On applying the proposed MCGDM method to the ranking problem adapted from Lin et al. [50],
A4 and A5 are again ranked as the best and the worst alternatives, respectively.
6.2.2. Comparison 2
In Comparison 2, the proposed HPFL-based MCGDM method is applied to the following HFL
decision matrix (Table 17) of the problem taken from Zhou et al. [67].
Table 17. The special linguistic hesitant fuzzy decision matrix ([67]).
C1 C2 C3 C4
X1 <s5 , (0.3, 0.4)> <s6 , (0.2, 0.4)> <s5 , (0.5, 0.7)> <s4 , (0.4)>
X2 <s3 , (0.5, 0.6)> <s5 , (0.3, 0.5)> <s4 , (0.7)> <s5 , (0.4, 0.6)>
X3 <s4 , (0.4, 0.6)> <s5 , (0.4)> <s7 , (0.7, 0.8)> <s3 , (0.6)>
X4 <s3 , (0.3, 0.4, 0.5)> <s4 , (0.6)> <s3 , (0.4, 0.7)> <s3 , (0.8)>
X5 <s6 , (0.5, 0.7)> <s6 , (0.5, 0.6)> <s4 , (0.6, 0.8)> <s5 , (0.7)>
Table 18 shows the ranking results of the alternatives obtained using the proposed HPFL-based
method and the existing HFL-based MCDM method of Zhou et al. [67].
Table 18. Comparison of ranking of alternatives.

Method             Ranking                   Best Alternative/Worst Alternative
Proposed           X5 > X3 > X1 > X2 > X4    X5/X4
Zhou et al. [67]   X5 > X3 > X1 > X2 > X4    X5/X4
On applying the proposed MCGDM method to the ranking problem adapted from Zhou et al. [67],
X5 and X4 are again ranked as the best and the worst alternatives, respectively.
As no change is found in the ranking results of the alternatives in either comparison, this confirms
that the proposed HPFL-based MCGDM method is also suitable for HFL information.
7. Conclusions
Uncertainties due to randomness and fuzziness can occur in a system simultaneously. In certain
decision-making problems, DMs prefer to analyze the alternatives against the decision criteria
qualitatively, using linguistic terms. In this paper, we have proposed the hesitant probabilistic fuzzy
linguistic set (HPFLS) to integrate hesitant fuzzy linguistic information with probability theory.
A prominent characteristic of the HPFLS is that it associates occurrence probabilities with HFLEs,
which makes it more effective than the HFLS. We have investigated the expected mean, variance, score
and accuracy functions, and basic operations for HPFLEs. We have also defined the HPFLWA, HPFLWG,
HPFLOWA and HPFLOWG aggregation operators to aggregate hesitant probabilistic fuzzy linguistic
information. A novel MCGDM method using the HPFLWA, HPFLWG, HPFLOWA and HPFLOWG
operators is also proposed in the present study. An advantage of the proposed HPFLS-based MCGDM
method is that it associates probabilities with HFLEs, which makes it competent to handle both
stochastic and non-stochastic uncertainties
with hesitant information using both qualitative and quantitative terms. Another advantage of the
proposed MCGDM method is that it allows DMs to use their intuitive ability to judge alternatives
against criteria using probabilities. It is also important to note that the proposed method can be used
with HFL information if DMs associate equal probabilities with the HFLEs. The methodology of the
proposed HPFL-based MCGDM method is illustrated by an example, and a real case study to rank
organizations is also undertaken in the present work.
Even though the proposed HPFL-based MCGDM method accommodates both stochastic and
non-stochastic uncertainties along with hesitation, determining the probabilities of membership grades
in a linguistic fuzzy set is very difficult in real-life decision-making problems. The proposed
HPFL-based MCGDM method will be effective when either the DMs are experts in their field or a
pre-defined probability distribution function is available, so that appropriate probabilities can be
assigned. Applications of the proposed HPFLS with Pythagorean membership grades can also be seen
as a direction for future research in decision-making problems, as an enhancement of the methods
proposed by Garg [49].

Author Contributions: Dheeraj Kumar Joshi and Sanjay Kumar defined HPFLS and studied its properties.
They together developed MCGDM method using HPFL information. Ismat Beg contributed in verifying the proof
of Theorem 1 and the properties of aggregation operators. All authors equally contributed in the research paper.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Meghdadi, A.H.; Akbarzadeh-T, M.R. Probabilistic fuzzy logic and probabilistic fuzzy systems.
In Proceedings of the 10th IEEE International Conference on Fuzzy Systems, Melbourne, Australia,
2–5 December 2001; Volume 3, pp. 1127–1130.
2. Valavanis, K.P.; Saridis, G.N. Probabilistic modeling of intelligent robotic systems. IEEE Trans. Robot. Autom.
1991, 7, 164–171. [CrossRef]
3. Pidre, J.C.; Carrillo, C.J.; Lorenzo, A.E.F. Probabilistic model for mechanical power fluctuations in
asynchronous wind parks. IEEE Trans. Power Syst. 2003, 18, 761–768. [CrossRef]
4. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
5. Zadeh, L.A. Fuzzy logic and approximate reasoning. Synthese 1975, 30, 407–428. [CrossRef]
6. Lee, L.W.; Chen, S.M. Fuzzy decision making and fuzzy group decision making based on likelihood-based
comparison relations of hesitant fuzzy linguistic term sets. J. Intell. Fuzzy Syst. 2015, 29, 1119–1137. [CrossRef]
7. Wang, H.; Xu, Z. Admissible orders of typical hesitant fuzzy elements and their application in ordered
information fusion in multi-criteria decision making. Inf. Fusion 2016, 29, 98–104. [CrossRef]
8. Liu, J.; Chen, H.; Zhou, L.; Tao, Z. Generalized linguistic ordered weighted hybrid logarithm averaging
operators and applications to group decision making. Int. J. Uncertain. Fuzz. Knowl.-Based Syst. 2015, 23,
421–442. [CrossRef]
9. Liu, J.; Chen, H.; Xu, Q.; Zhou, L.; Tao, Z. Generalized ordered modular averaging operator and its application
to group decision making. Fuzzy Sets Syst. 2016, 299, 1–25. [CrossRef]
10. Yoon, K.P.; Hwang, C.L. Multiple Attribute Decision Making: An Introduction; Sage Publications:
New York, NY, USA, 1995; Volume 104.
11. Mardani, A.; Nilashi, M.; Zavadskas, E.K.; Awang, S.R.; Zare, H.; Jamal, N.M. Decision making methods
based on fuzzy aggregation operators: Three decades of review from 1986 to 2018. Int. J. Inf. Technol. Decis.
Mak. 2017. [CrossRef]
12. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [CrossRef]
13. Atanassov, K.T.; Gargov, G. Interval-valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349.
[CrossRef]
14. Yager, R.R. Pythagorean membership grades in multicriteria decision making. IEEE Trans. Fuzzy Syst. 2014,
22, 958–965. [CrossRef]
15. Majumdar, P. Neutrosophic Sets and Its Applications to Decision Making. In Computational Intelligence for
Big Data Analysis. Adaptation, Learning, and Optimization; Acharjya, D., Dehuri, S., Sanyal, S., Eds.; Springer:
Cham, Switzerland, 2015; Volume 19.
16. Torra, V.; Narukawa, Y. On hesitant fuzzy sets and decision. In Proceedings of the 18th IEEE International
Conference on Fuzzy Systems, Jeju Island, Korea, 20–24 August 2009; pp. 1378–1382.
17. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539. [CrossRef]
18. Xia, M.; Xu, Z. Hesitant fuzzy information aggregation in decision making. Int. J. Approx. Reason. 2011, 52,
395–407. [CrossRef]
19. Farhadinia, B.; Xu, Z. Distance and aggregation-based methodologies for hesitant fuzzy decision making.
Cogn. Comput. 2017, 9, 81–94. [CrossRef]
20. Qian, G.; Wang, H.; Feng, X. Generalized hesitant fuzzy sets and their application in decision support system.
Knowl.-Based Syst. 2013, 37, 357–365. [CrossRef]
21. Peng, J.J.; Wang, J.Q.; Wang, J.; Yang, L.J.; Chen, X.H. An extension of ELECTRE to multi-criteria
decision-making problems with multi-hesitant fuzzy sets. Inf. Sci. 2015, 307, 113–126. [CrossRef]
22. Chen, S.W.; Cai, L.N. Interval-valued hesitant fuzzy sets. Fuzzy Syst. Math. 2013, 6, 38–44.
23. Yu, D. Triangular hesitant fuzzy set and its application to teaching quality evaluation. J. Inf. Comput. Sci.
2013, 10, 1925–1934. [CrossRef]
24. Zhu, B.; Xu, Z.; Xia, M. Dual hesitant fuzzy sets. J. Appl. Math. 2012, 2012, 879629. [CrossRef]
25. Zhang, Z. Interval-valued intuitionistic hesitant fuzzy aggregation operators and their application in group
decision-making. J. Appl. Math. 2013, 2013, 670285. [CrossRef]
26. Joshi, D.; Kumar, S. Interval-valued intuitionistic hesitant fuzzy Choquet integral based TOPSIS method for
multi-criteria group decision making. Eur. J. Oper. Res. 2016, 248, 183–191. [CrossRef]
27. Garg, H. Hesitant Pythagorean fuzzy sets and their aggregation operators in multiple attribute decision
making. Int. J. Uncertain. Quantif. 2018. [CrossRef]
28. Qi, X.-W.; Zhang, J.-L.; Zhao, S.-P.; Liang, C.-Y. Tackling complex emergency response solutions evaluation
problems in sustainable development by fuzzy group decision making approaches with considering decision
hesitancy and prioritization among assessing criteria. Int. J. Environ. Res. Public Health 2017, 14, 1165.
[CrossRef] [PubMed]
29. Garg, H.; Arora, R. Distance and similarity measures for dual hesitant fuzzy soft sets and their applications
in multicriteria decision making problem. Int. J. Uncertain. Quantif. 2017, 7, 229–248. [CrossRef]
30. Martínez, L.; Ruan, D.; Herrera, F.; Wang, P.P. Linguistic decision making: Tools and applications. Inf. Sci.
2009, 179, 2297–2298. [CrossRef]
31. Xu, Z. An interactive procedure for linguistic multiple attribute decision making with incomplete weight
information. Fuzzy Optim. Decis. Mak. 2007, 6, 17–27. [CrossRef]
32. Herrera, F.; Martínez, L. A 2-tuple fuzzy linguistic representation model for computing with words.
IEEE Trans. Fuzzy Syst. 2000, 8, 746–752.
33. Herrera, F.; Herrera-Viedma, E.; Alonso, S.; Chiclana, F. Computing with words and decision making.
Fuzzy Optim. Decis. Mak. 2009, 8, 323–324. [CrossRef]
34. Lan, J.; Sun, Q.; Chen, Q.; Wang, Q. Group decision making based on induced uncertain linguistic OWA
operators. Decis. Support Syst. 2013, 55, 296–303. [CrossRef]
35. Beg, I.; Rashid, T. TOPSIS for hesitant fuzzy linguistic term sets. Int. J. Intell. Syst. 2013, 28, 1162–1171.
[CrossRef]
36. Rodríguez, R.M.; Martınez, L.; Herrera, F. A group decision making model dealing with comparative
linguistic expressions based on hesitant fuzzy linguistic term sets. Inf. Sci. 2013, 241, 28–42.
37. Yuen, K.K.F. Combining compound linguistic ordinal scale and cognitive pairwise comparison in the rectified
fuzzy TOPSIS method for group decision making. Fuzzy Optim. Decis. Mak. 2014, 13, 105–130. [CrossRef]
38. Zhang, Z.; Wu, C. Hesitant fuzzy linguistic aggregation operators and their applications to multiple attribute
group decision making. J. Intell. Fuzzy Syst. 2014, 26, 2185–2202.
39. Beg, I.; Rashid, T. Group decision making using comparative linguistic expression based on hesitant
intuitionistic fuzzy sets. Appl. Appl. Math. Int. J. 2015, 10, 1082–1092.
40. Wang, J.Q.; Wang, D.D.; Zhang, H.Y.; Chen, X.H. Multi-criteria group decision making method based on
interval 2-tuple linguistic information and Choquet integral aggregation operators. Soft Comput. 2015, 19,
389–405. [CrossRef]
41. Merigó, J.M.; Palacios-Marqués, D.; Zeng, S. Subjective and objective information in linguistic multi-criteria
group decision making. Eur. J. Oper. Res. 2016, 248, 522–531. [CrossRef]
42. Beg, I.; Rashid, T. Hesitant 2-tuple linguistic information in multiple attributes group decision making.
J. Intell. Fuzzy Syst. 2016, 30, 109–116. [CrossRef]
43. Zhou, W.; Xu, Z. Generalized asymmetric linguistic term set and its application to qualitative decision
making involving risk appetites. Eur. J. Oper. Res. 2016, 254, 610–621. [CrossRef]
44. De Maio, C.; Fenza, G.; Loia, V.; Orciuoli, F. Linguistic fuzzy consensus model for collaborative development
of fuzzy cognitive maps: A case study in software development risks. Fuzzy Optim. Decis. Mak. 2017, in press.
[CrossRef]
45. Gao, J.; Xu, Z.; Liao, H. A dynamic reference point method for emergency response under hesitant
probabilistic fuzzy environment. Int. J. Fuzzy Syst. 2017, 19, 1261–1278. [CrossRef]
46. Kobina, A.; Liang, D.; He, X. Probabilistic linguistic power aggregation operators for multi-criteria group
decision making. Symmetry 2017, 9, 320. [CrossRef]
47. Garg, H.; Kumar, K. Some aggregation operators for linguistic intuitionistic fuzzy set and its application to
group decision-making process using the set pair analysis. Arbian J. Sci. Eng. 2017. [CrossRef]
48. Liu, P.; Mahmood, T.; Khan, Q. Multi-attribute decision-making based on prioritized aggregation operator
under hesitant intuitionistic fuzzy linguistic environment. Symmetry 2017, 9, 270. [CrossRef]
49. Garg, H. Linguistic Pythagorean fuzzy sets and its applications in multi attribute decision making process.
Int. J. Intell. Syst. 2018. [CrossRef]
50. Lin, R.; Zhao, X.; Wei, G. Models for selecting an ERP system with hesitant fuzzy linguistic information.
J. Intell. Fuzzy Syst. 2014, 26, 2155–2165.
51. Ren, F.; Kong, M.; Pei, Z. A new hesitant fuzzy linguistic topsis method for group multi-criteria linguistic
decision making. Symmetry 2017, 9, 289. [CrossRef]
52. Joshi, D.; Kumar, S. Trapezium cloud TOPSIS method with interval-valued intuitionistic hesitant fuzzy
linguistic information. Granul. Comput. 2017. [CrossRef]
53. Wu, Y.; Li, C.-C.; Chen, X.; Dong, Y. Group decision making based on linguistic distribution and hesitant
assessment: Maximizing the support degree with an accuracy constraint. Inf. Fusion 2018, 41, 151–160.
[CrossRef]
54. Wang, R.; Li, Y. Generalized single-valued neutrosophic hesitant fuzzy prioritized aggregation operators
and their applications to multiple criteria decision-making. Information 2018, 9, 10. [CrossRef]
55. Garg, H.; Nancy. Linguistic single-valued neutrosophic prioritized aggregation operators and their
applications to multiple-attribute group decision-making. J. Ambient Intell. Humaniz. Comput. 2018.
[CrossRef]
56. Liang, P.; Song, F. What does a probabilistic interpretation of fuzzy sets mean? IEEE Trans. Fuzzy Syst. 1996,
4, 200–205. [CrossRef]
57. Liu, Z.; Li, H.X. A probabilistic fuzzy logic system for modeling and control. IEEE Trans. Fuzzy Syst. 2005, 13,
848–859.
58. Xu, Z.; Zhou, W. Consensus building with a group of decision makers under the hesitant probabilistic fuzzy
environment. Fuzzy Optim. Decis. Mak. 2017, 16, 481–503. [CrossRef]
59. Hao, Z.; Xu, Z.; Zhao, H.; Su, Z. Probabilistic dual hesitant fuzzy set and its application in risk evaluation.
Knowl.-Based Syst. 2017, 127, 16–28. [CrossRef]
60. Zhou, W.; Xu, Z. Group consistency and group decision making under uncertain probabilistic hesitant fuzzy
preference environment. Inf. Sci. 2017. [CrossRef]
61. Zhou, W.; Xu, Z. Expected hesitant VaR for tail decision making under probabilistic hesitant fuzzy
environment. Appl. Soft Comput. 2017, 60, 297–311. [CrossRef]
62. Ding, J.; Xu, Z.; Zhao, N. An interactive approach to probabilistic hesitant fuzzy multi-attribute group
decision making with incomplete weight information. J. Intell. Fuzzy Syst. 2017, 32, 2523–2536. [CrossRef]
63. Li, J.; Wang, J.Q. Multi-criteria outranking methods with hesitant probabilistic fuzzy sets. Cogn. Comput.
2017, 9, 611–625. [CrossRef]
64. Zhang, S.; Xu, Z.; He, Y. Operations and integrations of probabilistic hesitant fuzzy information in decision
making. Inf. Fusion 2017, 38, 1–11. [CrossRef]
65. Wang, Z.-X.; Li, J. Correlation coefficients of probabilistic hesitant fuzzy elements and their applications to
evaluation of the alternatives. Symmetry 2017, 9, 259. [CrossRef]
66. Xu, Z. A method based on linguistic aggregation operators for group decision making with linguistic
preference relations. Inf. Sci. 2004, 166, 19–30. [CrossRef]


67. Gou, X.; Liao, H.; Xu, Z.; Herrera, F. Double hierarchy hesitant fuzzy linguistic term set and MULTIMOORA
method: A case of study to evaluate the implementation status of haze controlling measures. Inf. Fusion
2017, 38, 22–34. [CrossRef]
68. Zhou, H.; Wang, J.Q.; Zhang, H.Y.; Chen, X.H. Linguistic hesitant fuzzy multi-criteria decision-making
method based on evidential reasoning. Int. J. Syst. Sci. 2016, 47, 314–327. [CrossRef]
69. Bisht, K.; Kumar, S. Fuzzy time series forecasting method based on hesitant fuzzy sets. Expert Syst. Appl.
2016, 64, 557–568. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Article
The Effect of Prudence on the Optimal Allocation in
Possibilistic and Mixed Models
Irina Georgescu
Academy of Economic Studies, Department of Economic Cybernetics, Piata Romana No 6 R 70167,
Oficiul Postal 22, 010374 Bucharest, Romania; [email protected]

Received: 30 May 2018; Accepted: 24 July 2018; Published: 2 August 2018

Abstract: In this paper, several portfolio choice models are studied: a purely possibilistic model
in which the return of the risky asset is a fuzzy number, and four models in which a background risk
appears in addition to the investment risk. In these four models, the risk is a bidimensional vector
whose components are random variables or fuzzy numbers. Approximate formulas for the optimal
allocation are obtained for all models, expressed in terms of some probabilistic or possibilistic
moments, depending on the indicators of the investor's preferences (risk aversion, prudence).

Keywords: prudence; optimal allocation; possibilistic moments

1. Introduction
The standard portfolio choice problem [1–3] considers the determination of the optimal proportion
of the wealth an agent invests in a risk-free asset and in a risky asset. The study of this probabilistic
model is usually done in the classical expected utility theory. The optimal allocation of a risky asset
appears as the solution of a maximization problem. By Taylor approximations, several forms of the
solution have been found, depending on different moments of the return of the risky asset, as well as
on some indicators of the investor's risk preferences. In the form of the solution from [4] Chapter 2
or [5] Chapter 5, the mean value, the variance, and the Arrow–Pratt index of the investor’s utility
function appear. The approach from [6–8] led to forms of the approximate solution which depend on
the first three moments, the Arrow–Pratt index ru , and the prudence index Pu [9]. The solution found
in [10] is expressed according to the first four moments and the indicators of risk aversion, prudence,
and temperance of the utility function. Another form of the solution in which the first four moments
appear can be found in [11].
All the above models are probabilistic, the risk is represented by random variables, and the
attitude of the agent towards risk is expressed by notions and properties which use the probabilistic
indicators (expected value, variance, covariance, moments, etc.). The probabilistic modeling does not
cover all uncertainty situations in which risk appears (e.g., when the information is not extracted from
a sufficiently large volume of data). Possibility theory, initiated by Zadeh in [12] can model different
situations: "while probability theory offers a quantitative model for randomness and indecisiveness, possibility
theory offers a qualitative model of incomplete knowledge" ([13], p. 277).
In possibility theory, risk is modeled by the notion of possibilistic distribution [14–16].
Fuzzy numbers are the most important class of possibilistic distributions [17]. They generalize real
numbers, and by Zadeh’s extension principle [12], the operations with real numbers can be extended to
operations with fuzzy numbers. So, the set of fuzzy numbers is endowed with a rich algebraic structure,
very close to the set of real numbers, and their possibilistic indicators (possibilistic expected value,
possibilistic variance, possibilistic moments, etc.) have important mathematical properties [14–19].
Fuzzy numbers are also capable of modelling a large scope of risk situations ([14–16,20–25]). For this,
most studies on possibilistic risk have been done in the framework offered by fuzzy numbers,

Mathematics 2018, 6, 133; doi:10.3390/math6080133 www.mdpi.com/journal/mathematics

although there exist approaches on possibilistic risk in contexts larger than that offered by fuzzy
numbers. For example, in [26] there is a treatment of risk aversion in an abstract framework including
fuzzy numbers, random fuzzy numbers, type-2 fuzzy sets, random type-2 fuzzy sets, etc.
In this paper, several portfolio choice models are studied: a purely possibilistic model, in which
the return of the risky asset is represented by a fuzzy number [14,15], and four more models, in which a
probabilistic or possibilistic background risk appears. In the formulation of the maximization problem
for the first model, the possibilistic expected utility from [16], definition 4.2.7, is used. In the case
of the other four models, the notion of bidimensional possibilistic expected utility ([16], p. 60) or
the bidimensional mixed expected utility ([16], p. 79) is used. The approximate solutions of these
models are expressed by the probabilistic or possibilistic moments associated with a random variable or a fuzzy
number ([14,15,24,25]), and by the indicators of the investor's risk preferences.
In the first part of Section 2 the definitions of possibilistic expected utility (cf. [16]) and possibilistic
indicators of a fuzzy number (expected value, variance, moments) are presented. The second part
of the section contains the definition of a mixed expected utility associated with a mixed vector,
a bidimensional utility function, and a weighting function ([16]).
Section 3 is concerned with the possibilistic standard portfolio-choice model, whose construction
is inspired by the probabilistic model of [10]. The return of the risky asset is here a fuzzy number,
while in [10] it is a random variable. The total utility function of the model is written as a possibilistic
expected value. The maximization problem of the model and the first-order conditions are formulated,
from which its optimal solution is determined.
Section 4 is dedicated to the optimal asset allocation in the framework of the possibilistic portfolio
model defined in the previous section. Using a second-order Taylor approximation, a formula for
the approximate calculation of the maximization problem solution is found. In the component of
the formula appear the first three possibilistic moments, the Arrow–Pratt index, and the prudence
indices of the investor’s utility function. The general formula is particularized for triangular fuzzy
numbers and HARA (hyperbolic absolute risk aversion) and CRRA (constant relative risk aversion)
utility functions.
In Section 5, four models are defined in which the background risk appears in addition to the
investment risk. In these models, risk is represented by a bidimensional vector whose components
are either random variables or fuzzy numbers. The agent will have a unidimensional utility function,
but the total utility function will be:

• a bidimensional probabilistic expected utility, when both components are random variables;
• a bidimensional possibilistic expected utility ([16], p. 60), when both components are fuzzy numbers;
• a mixed expected utility ([16], p. 79), when a component is a random variable, and the other is a
fuzzy number.

Section 6 is dedicated to the determination of an approximate calculation formula for the solution
of the optimization problems of the four models with background risk from the previous section.
We will study in detail only the model in which the investment risk is a fuzzy number and the
background risk is a random variable. For the other three cases, only the approximate calculation
formulas of the solutions are presented. The proofs are presented in Appendix A.

2. Preliminaries
In this section we recall some notions and results on the possibilistic expected utility, mixed expected
utility (cf. [16]), and some possibilistic indicators associated with fuzzy numbers (cf. [14,18,19,24,25,27]).
For the definition and arithmetical properties of the fuzzy numbers, we refer to [14–16].


2.1. Possibilistic Expected Utility


The classic risk theory is usually developed in the framework of expected utility (EU). The main
concept of EU theory is the probabilistic expected utility E(u( X )) associated with a utility function u
(representing the agent) and a random variable X (representing the risk).
In case of a possibilistic risk EU theory, the agent will be represented by a utility function u, and the
risk by a fuzzy number A. Besides these, we will consider a weighting function f . The level-sets [ A]γ ,
γ ∈ [0, 1] mean a gradualism of risk. By the appearance of f in the definition of possibilistic expected
utility and the possibilistic indicators, a weighting of this gradualism is done (by [14], p. 27, “different
weighting functions can give different importances to level-sets of possibility distributions”).
Thus, we fix a mathematical context consisting of:

• a utility function u of class C 2 ,


• a fuzzy number A whose level sets are [A]^γ = [a₁(γ), a₂(γ)], γ ∈ [0, 1],
• a weighting function f : [0, 1] → R ( f is a non-negative and increasing function that satisfies ∫₀¹ f(γ) dγ = 1).

The possibilistic expected utility associated with the triple (u, A, f) is

E_f(u(A)) = (1/2) ∫₀¹ [u(a₁(γ)) + u(a₂(γ))] f(γ) dγ. (1)

In the interpretation from [14], p. 27, the possibilistic expected utility can be viewed as the result of
the following process: on each γ-level set [ A]γ = [ a1 (γ), a2 (γ)], one considers the uniform distribution.
Then, E f (u( A)) is defined as the f -weighted average of the probabilistic expected values of these
uniform distributions.
The following possibilistic indicators associated with a fuzzy number A and a weighting function
f are particular cases of (1) .

• Possibilistic expected value [18,19]:

E_f(A) = (1/2) ∫₀¹ [a₁(γ) + a₂(γ)] f(γ) dγ, (2)

(u is the identity function of R).


• Possibilistic variance [18,27]:

Var_f(A) = (1/2) ∫₀¹ [(a₁(γ) − E_f(A))² + (a₂(γ) − E_f(A))²] f(γ) dγ, (3)

(for u(x) = (x − E_f(A))², x ∈ R).


• The n-th order possibilistic moment [24,25]:

M(Aⁿ) = (1/2) ∫₀¹ [(a₁(γ))ⁿ + (a₂(γ))ⁿ] f(γ) dγ. (4)

Proposition 1. Let g : R → R, h : R → R be two utility functions, a, b ∈ R and u = ag + bh.


Then, E f (u( A)) = aE f ( g( A)) + bE f (h( A)).

Corollary 1. E f ( a + bh( A)) = a + bE f (h( A)).
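The indicators (2)–(4) are easy to check numerically. The following sketch is not part of the paper: the triangular fuzzy number, the weighting function f(γ) = 2γ, and all numeric values are illustrative assumptions. It approximates E_f(u(B)) by a midpoint rule over the level parameter γ:

```python
def level_sets(b, alpha, beta, gamma):
    # [B]^gamma = [b - (1 - gamma)*alpha, b + (1 - gamma)*beta]
    return b - (1 - gamma) * alpha, b + (1 - gamma) * beta

def possibilistic_expectation(u, b, alpha, beta, f=lambda g: 2 * g, n=20000):
    # E_f(u(B)) = (1/2) * integral_0^1 [u(b1(g)) + u(b2(g))] f(g) dg,
    # approximated by a midpoint rule with n subintervals
    total = 0.0
    for i in range(n):
        g = (i + 0.5) / n
        b1, b2 = level_sets(b, alpha, beta, g)
        total += 0.5 * (u(b1) + u(b2)) * f(g) / n
    return total

b, alpha, beta = 0.05, 0.02, 0.04          # illustrative triangular number
Ef = possibilistic_expectation(lambda x: x, b, alpha, beta)
Var = possibilistic_expectation(lambda x: (x - Ef) ** 2, b, alpha, beta)
M3 = possibilistic_expectation(lambda x: (x - Ef) ** 3, b, alpha, beta)
```

For f(γ) = 2γ, the computed values can be compared with the closed-form expressions of [25], Lemma 2.1, used later in Example 1.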

2.2. Mixed Expected Utility


In the financial-economic world, as in the social world, there may be complex situations of
uncertainty with multiple risk parameters. In the papers on probabilistic risk, such phenomena


are conceptualized by the notion of a random vector (all risk parameters are random variables).
However, there can be situations considered as “hybrid”, in which some parameters are random
variables and others are fuzzy numbers. This leads to the notion of the mixed vector, which, together with a
multidimensional utility function and a weighting function, are the basic entities of the mixed EU theory.
In order to treat a risk problem within a mixed EU theory, it is necessary to have a concept of
expected utility.
Since two risk parameters appear in the portfolio choice model with background risk from the
paper, we will present the definition of mixed expected utility in the bidimensional case.
A bidimensional mixed vector has the form ( A, X ), where A is a fuzzy number and X is a random
variable. We will denote by M ( X ) the expected value of X. If g : R → R is a continuous function,
then M ( g( X )) is the probabilistic expected utility of X with respect to g.
Let u : R2 → R be a bidimensional utility function of class C 2 , ( A, X ) a mixed vector,
and f : [0, 1] → R a weighting function. Assume that the level sets of the fuzzy number A are
[ A]γ = [ a1 (γ), a2 (γ)], γ ∈ [0, 1]. For any γ ∈ [0, 1], we consider the probabilistic expected values
M (u( ai (γ), X )), i = 1, 2.
The mixed expected utility associated with the triple (u, (A, X), f) is:

E_f(u(A, X)) = (1/2) ∫₀¹ [M(u(a₁(γ), X)) + M(u(a₂(γ), X))] f(γ) dγ. (5)

In the definition of E_f(u(A, X)), we distinguish the following steps:

• In the first step, the possibilistic risk is parametrized by the decomposition of A in its level sets
[ a1 (γ), a2 (γ)], γ ∈ [0, 1].
• In the second step, for each level γ one considers the parametrized probabilistic utilities
M (u( a1 (γ), X )) and M(u( a2 (γ), X )).
• In the third step, the mixed expected utility E_f(u(A, X)) is obtained as the f-weighted average of the family of means ((1/2)[M(u(a₁(γ), X)) + M(u(a₂(γ), X))])_{γ∈[0,1]}.

Remark 1. If a ∈ R then E f (u( a, X )) = M(u( a, X )).

Proposition 2. Let g, h be two bidimensional utility functions, a, b ∈ R and u = ag + bh.


Then, E f (u( A, X )) = aE f ( g( A, X )) + bE f (h( A, X )).

Propositions 1 and 2 express the linearity of possibilistic expected value and mixed expected
utility with respect to the utility functions which appear in the definitions of these two operators.

Corollary 2. If A is a fuzzy number and Z is a random variable, then E f ( AZ ) = M( Z ) E f ( A) and


E f ( A2 Z ) = M ( Z ) E f ( A2 ).
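As a numerical illustration of definition (5) and of Corollary 2 (an assumed setup, not from the paper: A a triangular fuzzy number, Z normal, f(γ) = 2γ), the inner expectations M(u(aᵢ(γ), Z)) can be estimated by Monte Carlo and the outer γ-integral by a midpoint rule:

```python
import random

def mixed_expected_utility(u, a1, a2, z_samples, f=lambda g: 2 * g, n=200):
    # E_f(u(A, X)) ~ (1/2) * sum over gamma-midpoints of
    #   [mean_z u(a1(g), z) + mean_z u(a2(g), z)] * f(g) / n
    total = 0.0
    for i in range(n):
        g = (i + 0.5) / n
        m1 = sum(u(a1(g), z) for z in z_samples) / len(z_samples)
        m2 = sum(u(a2(g), z) for z in z_samples) / len(z_samples)
        total += 0.5 * (m1 + m2) * f(g) / n
    return total

rng = random.Random(0)
z_samples = [rng.gauss(0.1, 0.2) for _ in range(4000)]  # Z ~ N(0.1, 0.2^2)
b, alpha, beta = 0.05, 0.02, 0.04                       # A = (b, alpha, beta)
a1 = lambda g: b - (1 - g) * alpha                      # level-set endpoints
a2 = lambda g: b + (1 - g) * beta

lhs = mixed_expected_utility(lambda x, z: x * z, a1, a2, z_samples)
mz = sum(z_samples) / len(z_samples)                    # empirical M(Z)
ef_a = b + (beta - alpha) / 6                           # E_f(A) for f(g)=2g
```

Since u(x, z) = xz is linear in z, the estimate reproduces Corollary 2, E_f(AZ) = M(Z)E_f(A), up to quadrature error.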

3. Possibilistic Standard Model


In this section we will present a possibilistic portfolio choice model in which the return of the
risky asset is a fuzzy number. Investing an initial wealth between a risk-free asset (bonds) and a risky
asset (stocks), an agent seeks to determine the allocation of money to the risky asset such that their
winnings are maximal.
In defining the total utility of the model, we will use the possibilistic expected utility introduced
in the previous section.
We consider an agent (characterized by a utility function u of class C 2 , increasing and concave)
which invests a wealth w0 in a risk-free asset and in a risky asset. The agent invests the amount α in
a risky asset and w0 − α in a risk-free asset. Let r be the return of the risk-free asset and x a value of


the return of the risky asset. We denote by w = w0 (1 + r ) the future wealth of the risk-free strategy.
The portfolio value (w0 − α, α) will be (according to [4], pp. 65–66):

(w0 − α)(1 + r ) + α(1 + x ) = w + α( x − r ). (6)

The probabilistic investment model from [4] Chapter 4 or [5] Chapter 5 starts from the hypothesis
that the return of the risky asset is a random variable X0 . Then, x is a value of X0 and (6) leads to the
following maximization problem:

max_α M[u(w + α(X₀ − r))]. (7)

By denoting X = X0 − r the excess return, the model (7) becomes:

max_α M[u(w + αX)]. (8)

If we make the assumption that the return of the risky asset is a fuzzy number B0 , then x will be a
value of B0 . To describe the possibilistic model resulting from such a hypothesis, we fix a weighting
function f : [0, 1] → R. The expression (6) suggests to us the following optimization problem:

max_α E_f[u(w + α(B₀ − r))]. (9)

By denoting with B = B₀ − r the excess return, the problem (9) becomes:

max_α E_f[u(w + αB)]. (10)

There is a similarity between the optimization problem (8) and the optimization problem (10).
Between the two optimization problems, there are two fundamental differences:

• In (8) there is a probabilistic risk X, and in (10) there is a possibilistic risk B.


• Problem (8) is formulated in terms of a probabilistic expected utility operator M(u(.)), while (10)
is formulated using the possibilistic expected utility operator E f (u(.)).

Assume that the level sets of the fuzzy number B are [B]^γ = [b₁(γ), b₂(γ)], γ ∈ [0, 1].
According to (1), the total utility function of the model (10) will have the following form:

V(α) = E_f[u(w + αB)] = (1/2) ∫₀¹ [u(w + αb₁(γ)) + u(w + αb₂(γ))] f(γ) dγ.

Deriving twice, one obtains:

V″(α) = (1/2) ∫₀¹ [b₁²(γ) u″(w + αb₁(γ)) + b₂²(γ) u″(w + αb₂(γ))] f(γ) dγ.

Since u″ ≤ 0, it follows that V″(α) ≤ 0, thus V is concave.


We assume everywhere in this paper that the portfolio risk is small; thus, analogously with [5]
(Section 5.2), we can take the possibilistic excess return B as B = kμ + A, where μ > 0 and A is a fuzzy
number with E_f(A) = 0. Of course, E_f(B) = kμ in that case. The total utility V(α) will be written:

V(α) = E_f[u(w + α(kμ + A))]. (11)

Assuming that the level sets of A are [A]^γ = [a₁(γ), a₂(γ)], γ ∈ [0, 1], the expression (11) becomes:

V(α) = (1/2) ∫₀¹ [u(w + α(kμ + a₁(γ))) + u(w + α(kμ + a₂(γ)))] f(γ) dγ.


By deriving, one obtains:

V′(α) = (1/2) ∫₀¹ [(kμ + a₁(γ)) u′(w + α(kμ + a₁(γ))) + (kμ + a₂(γ)) u′(w + α(kμ + a₂(γ)))] f(γ) dγ,

which can be written

V′(α) = E_f[(kμ + A) u′(w + α(kμ + A))]. (12)

Let α(k) be the solution of the maximization problem max_α V(α), with V(α) written in the form (12). Then, the first-order condition V′(α(k)) = 0 will be written:

E_f[(kμ + A) u′(w + α(k)(kμ + A))] = 0. (13)

As in [5] (Section 5.2), we assume that α(0) = 0.


Everywhere in this paper, we will keep the notations and hypotheses from above.

4. The Effect of Prudence on the Optimal Allocation


The main result of this section is a formula for the approximate calculation of the solution α(k )
of Equation (13). In the formula will appear the indicators of absolute risk aversion and prudence,
marking how these influence the optimal investment level α(k ) in the risky asset.
We will consider the second-order Taylor approximation of α(k) around k = 0:

α(k) ≈ α(0) + kα′(0) + (1/2)k²α″(0) = kα′(0) + (1/2)k²α″(0). (14)

For the approximate calculation of α(k), we will determine the approximate values of α′(0) and
α″(0). Note that the calculation of the approximate values of α′(0) and α″(0) follows a line analogous
to the one used in [10] in the analysis of the probabilistic model. In the proof of the approximate
calculation formulas of α′(0) and α″(0), we will use the properties of the possibilistic expected utility
from Section 2.1. Before this, we recall the Arrow–Pratt index r_u(w) and prudence index P_u(w)
associated with the utility function u:

r_u(w) = −u″(w)/u′(w);  P_u(w) = −u‴(w)/u″(w). (15)
Proposition 3. α′(0) ≈ (1/r_u(w)) · μ/E_f(A²).

Proposition 4. α″(0) ≈ (P_u(w)/(r_u(w))²) · (E_f(A³)/(E_f(A²))³) · μ².

We recall from Section 3 that A = B − E f ( B). The following result gives us an approximate
expression of α(k):

Theorem 1. α(k) ≈ (1/r_u(w)) · E_f(B)/Var_f(B) + (1/2) · (P_u(w)/(r_u(w))²) · E_f[(B − E_f(B))³]/(Var_f(B))³ · (E_f(B))².

Remark 2. The previous theorem gives us an approximate solution of the maximization problem max_α V(α)
with respect to the indices of absolute risk aversion and prudence r_u(w), P_u(w), and the first three possibilistic
moments E_f(B), Var_f(B), and E_f[(B − E_f(B))³].
This result can be seen as a possibilistic version of the formula (A.6) of [10], which gives us the optimal
allocation of investment in the context of a probabilistic portfolio choice model.
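Theorem 1 translates directly into a few lines of code. The sketch below is illustrative only: it assumes a CARA utility u(w) = −e^(−aw), for which r_u(w) = P_u(w) = a (constants), and the moment values are made-up numbers, not the paper's:

```python
def optimal_allocation(ru, pu, ef_b, var_b, m3_b):
    # Theorem 1: mean-variance term plus a prudence (third-moment) correction
    term1 = ef_b / (ru * var_b)
    term2 = 0.5 * (pu / ru ** 2) * (m3_b / var_b ** 3) * ef_b ** 2
    return term1 + term2

# CARA utility u(w) = -exp(-a*w): r_u(w) = P_u(w) = a for every w
a = 2.0
alloc = optimal_allocation(ru=a, pu=a, ef_b=0.03, var_b=0.006, m3_b=0.0)
```

With a zero third moment, only the mean-variance term E_f(B)/(r_u(w) Var_f(B)) survives; a positive third moment raises the allocation of a prudent investor.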


Example 1. We consider the triangular fuzzy number B = (b, α, β) defined by:

B(x) = 1 − (b − x)/α  if b − α ≤ x ≤ b,
B(x) = 1 − (x − b)/β  if b ≤ x ≤ b + β,
B(x) = 0  otherwise.

The level sets of B are [B]^γ = [b₁(γ), b₂(γ)], where b₁(γ) = b − (1 − γ)α and b₂(γ) = b + (1 − γ)β,
for γ ∈ [0, 1]. We assume that the weighting function f has the form f(γ) = 2γ, for γ ∈ [0, 1]. Then,
by [25], Lemma 2.1:

E_f(B) = b + (β − α)/6;  Var_f(B) = (α² + β² + αβ)/18,

E_f[(B − E_f(B))³] = ∫₀¹ γ[(b₁(γ) − E_f(B))³ + (b₂(γ) − E_f(B))³] dγ = 19(β³ − α³)/1080 + αβ(β − α)/72.
By replacing these indicators in the formula of Theorem 1, we obtain

α(k) ≈ (1/r_u(w)) · (b + (β − α)/6)/((α² + β² + αβ)/18)
+ (1/2) · (P_u(w)/(r_u(w))²) · (19(β³ − α³)/1080 + αβ(β − α)/72)/((α² + β² + αβ)/18)³ · (b + (β − α)/6)².

Assume that the utility function u is HARA-type (see [5], Section 3.6):

u(w) = ζ(η + w/γ)^(1−γ), for η + w/γ > 0.

Then, according to [5] (Section 3.6):

r_u(w) = (η + w/γ)^(−1);  P_u(w) = ((γ + 1)/γ)(η + w/γ)^(−1),

1/r_u(w) = η + w/γ  and  P_u(w)/(r_u(w))² = ((γ + 1)/γ)(η + w/γ)^(−1)/(η + w/γ)^(−2) = ((γ + 1)/γ)(η + w/γ).

Replacing in the approximate calculation formula of α(k), it follows:

α(k) ≈ (η + w/γ) · (b + (β − α)/6)/((α² + β² + αβ)/18)
+ (1/2) · ((γ + 1)/γ)(η + w/γ) · (19(β³ − α³)/1080 + αβ(β − α)/72)/((α² + β² + αβ)/18)³ · (b + (β − α)/6)².

If B = (b, α) is a symmetric triangular fuzzy number (α = β), then the third central moment vanishes and the approximate solution α(k) takes a very simple form:

α(k) ≈ (6b/α²)(η + w/γ).
Following [5] (Section 3.6), we consider the CRRA-type utility function:

u(w) = w^(1−γ)/(1 − γ) if γ ≠ 1,  u(w) = ln(w) if γ = 1.


For γ ≠ 1, we have r_u(w) = γ/w and P_u(w) = (γ + 1)/w. A simple calculation leads to the following
form of the solution:

α(k) ≈ (w/γ) · (b + (β − α)/6)/((α² + β² + αβ)/18)
+ (1/2) · (w(γ + 1)/γ²) · (19(β³ − α³)/1080 + αβ(β − α)/72)/((α² + β² + αβ)/18)³ · (b + (β − α)/6)².
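The closed-form triangular moments and the HARA allocation of Example 1 can be sketched as follows (parameter values are illustrative assumptions, not from the paper):

```python
def triangular_moments(b, alpha, beta):
    # closed-form moments for B = (b, alpha, beta) with f(gamma) = 2*gamma
    ef = b + (beta - alpha) / 6
    var = (alpha ** 2 + beta ** 2 + alpha * beta) / 18
    m3 = 19 * (beta ** 3 - alpha ** 3) / 1080 + alpha * beta * (beta - alpha) / 72
    return ef, var, m3

def hara_allocation(b, alpha, beta, eta, gamma, w):
    ef, var, m3 = triangular_moments(b, alpha, beta)
    inv_ru = eta + w / gamma                      # 1/r_u(w) for HARA
    pu_over_ru2 = (gamma + 1) / gamma * inv_ru    # P_u(w)/r_u(w)^2
    return inv_ru * ef / var + 0.5 * pu_over_ru2 * m3 / var ** 3 * ef ** 2

# symmetric case alpha = beta: the third moment vanishes
sym = hara_allocation(b=0.04, alpha=0.03, beta=0.03, eta=1.0, gamma=2.0, w=10.0)
```

For α = β the third moment is zero, so the allocation reduces to the mean-variance term, in agreement with the symmetric case discussed above.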

5. Models with Background Risk


In the two standard portfolio choice problems (8) and (10), a single risk parameter appears: in (8)
the risk is represented by the random variable X, and in (10) by the fuzzy number B. In both cases,
we will call it investment risk. More complex situations may exist in which other risk parameters may
appear in addition to the investment risk. This supplementary risk is called background risk (see [4,5]).
For simplicity, in this paper we will study investment models with a single background risk
parameter. In the interpretation from [4], this background risk is associated with labor income.
Therefore, the considered portfolio choice problems will have two types of risk: investment risk and
background risk. Each can be random variables or fuzzy numbers, according to the following table.
The models corresponding to the four cases in Table 1 are obtained by adding in (7) and (9)
the background risk as a random variable or a fuzzy number. For each problem we will have an
approximate solution expressed in terms of probabilistic or possibilistic indicators, the Arrow–Pratt index, and the prudence index.

Table 1. Models with background risk.

Investment Risk Background Risk


1 probabilistic probabilistic
2 possibilistic possibilistic
3 possibilistic probabilistic
4 probabilistic possibilistic

Case 1. Besides the return of the risky asset X0 , we will have a probabilistic background risk
represented by a random variable Z. Starting from the standard model (7), the following optimization
problem is obtained by adding the background risk Z:

max_α M[u(w + α(X₀ − r) + Z)]. (16)

Case 2. Besides the return of the possibilistic risky asset B₀, a possibilistic background risk
represented by a fuzzy number C appears. In the standard model (9) the fuzzy number C is added and
the following optimization problem is obtained:

max_α E_f[u(w + α(B₀ − r) + C)]. (17)

Case 3. Besides the investment risk B₀, a probabilistic background risk represented by a random
variable Z appears. The optimization problem is obtained by adding the random variable Z in (9):

max_α E_f[u(w + α(B₀ − r) + Z)]. (18)

Case 4. Besides the investment risk X₀ of (7), the possibilistic background risk represented by a
fuzzy number C appears:

max_α E_f[u(w + α(X₀ − r) + C)]. (19)

Problem (17) is formulated in terms of a bidimensional possibilistic expected utility (see [16], p. 60),
and (18) and (19) use the mixed expected utility defined in Section 2.


By denoting with X = X₀ − r and B = B₀ − r the probabilistic excess return and the possibilistic
excess return, respectively, the optimization problems (16)–(19) become

max_α M[u(w + αX + Z)], (20)

max_α E_f[u(w + αB + C)], (21)

max_α E_f[u(w + αB + Z)], (22)

max_α E_f[u(w + αX + C)]. (23)

In the following section we will study model 3 in detail, proving an approximate calculation
formula of the solution of the optimization problem (18). The proof of the approximate solutions of the
other three optimization problems is done similarly.

6. Approximate Solutions of Portfolio Choice Model with Background Risk


In this section we will prove the approximate calculation formulas for the solutions of the
optimization problems (20)–(23). These formulas will emphasize how risk aversion and the agent’s
prudence influence the optimal proportions invested in the risky asset in the case of the four portfolio
choice models with background risk. We will study in detail only the mixed model (22), in which,
besides this possibilistic risk, a probabilistic background risk may appear, modeled by a random
variable Z. This mixed model comes from the possibilistic standard model by adding Z in the
composition of the total utility function. More precisely, the total utility function W (α) will be:

W (α) = E f [u(w + α(kμ + A) + Z )], (24)

where the other components of the model have the same meaning as in Section 3.
Assume that the level sets of A are [A]^γ = [a₁(γ), a₂(γ)], γ ∈ [0, 1]. By definition (5) of the mixed
expected utility, formula (24) can be written as:

W(α) = (1/2) ∫₀¹ [M(u(w + α(kμ + a₁(γ)) + Z)) + M(u(w + α(kμ + a₂(γ)) + Z))] f(γ) dγ.

One computes the first derivative of W(α):

W′(α) = (1/2) ∫₀¹ (kμ + a₁(γ)) M(u′(w + α(kμ + a₁(γ)) + Z)) f(γ) dγ + (1/2) ∫₀¹ (kμ + a₂(γ)) M(u′(w + α(kμ + a₂(γ)) + Z)) f(γ) dγ.

W′(α) can be written as:

W′(α) = E_f[(kμ + A) u′(w + α(kμ + A) + Z)]. (25)

By deriving one more time, we obtain:

W″(α) = E_f[(kμ + A)² u″(w + α(kμ + A) + Z)].

Since u″ ≤ 0, it follows that W″(α) ≤ 0, and thus W is concave. Then, the solution β(k) of the
optimization problem max_α W(α) will be given by W′(β(k)) = 0. By (25),

E_f[(kμ + A) u′(w + β(k)(kμ + A) + Z)] = 0. (26)


In this case we will also make the natural hypothesis β(0) = 0.
To compute an approximate value of β(k), we will write the second-order Taylor approximation
of β(k) around k = 0:

β(k) ≈ β(0) + kβ′(0) + (1/2)k²β″(0) = kβ′(0) + (1/2)k²β″(0). (27)

We propose to find approximate values of β′(0) and β″(0).

Proposition 5. β′(0) ≈ (μ/E_f(A²)) · (1/r_u(w) − M(Z)).

Proposition 6. β″(0) ≈ P_u(w)(β′(0))² · E_f[(B − E_f(B))³]/(Var_f(B)[1 − M(Z)P_u(w)]).

Theorem 2.

β(k) ≈ (E_f(B)/Var_f(B)) · [1/r_u(w) − M(Z)] + (1/2) P_u(w) [1/r_u(w) − M(Z)]² · (E_f(B))² E_f[(B − E_f(B))³]/((Var_f(B))³ [1 − M(Z) P_u(w)]).

Remark 3. In the approximate expression of β(k) from the previous theorem appear the Arrow–Pratt index and
the prudence index of the utility function u, the possibilistic indicators E_f(B), Var_f(B), and E_f[(B − E_f(B))³], and the
expected value M(Z) of the background risk.
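A direct transcription of Theorems 1 and 2 (with illustrative moments and indices, not the paper's numbers) makes the role of the background risk visible: setting M(Z) = 0 collapses Theorem 2 to Theorem 1.

```python
def alpha_k(ru, pu, ef_b, var_b, m3_b):
    # Theorem 1 (no background risk)
    return ef_b / (ru * var_b) + 0.5 * (pu / ru ** 2) * m3_b / var_b ** 3 * ef_b ** 2

def beta_k(ru, pu, mz, ef_b, var_b, m3_b):
    # Theorem 2 (probabilistic background risk with mean mz = M(Z))
    lin = ef_b / var_b * (1 / ru - mz)
    corr = (0.5 * pu * (1 / ru - mz) ** 2
            * ef_b ** 2 * m3_b / (var_b ** 3 * (1 - mz * pu)))
    return lin + corr

args = dict(ru=2.0, pu=3.0, ef_b=0.03, var_b=0.006, m3_b=2e-5)
no_bg = beta_k(mz=0.0, **args)     # should equal the Theorem 1 allocation
with_bg = beta_k(mz=0.1, **args)   # background risk shifts the allocation
```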

Example 2. We consider that the investment risk is represented by a triangular fuzzy number B = (b, α, β) and
the background risk by a random variable Z with the normal distribution N(m, σ²). We will consider a
HARA-type utility:

u(w) = ζ(η + w/γ)^(1−γ), for η + w/γ > 0.

Using the computations from Example 1 and taking into account that M(Z) = m, one reaches the following
form of the approximate solution:

β(k) ≈ (η + w/γ − m) · (b + (β − α)/6)/((α² + β² + αβ)/18)
+ ((γ + 1)/(2γ)) · (η + w/γ − m)²/((η + w/γ)(1 − m((γ + 1)/γ)(η + w/γ)^(−1))) · (b + (β − α)/6)² · (19(β³ − α³)/1080 + αβ(β − α)/72)/((α² + β² + αβ)/18)³.

Let us assume that the utility function u is of CRRA-type: u(w) = w^(1−γ)/(1 − γ) if γ ≠ 1 and u(w) = ln(w)
if γ = 1.
For γ ≠ 1 we have r_u(w) = γ/w and P_u(w) = (γ + 1)/w, from where it follows:

β(k) ≈ (w/γ − m) · (b + (β − α)/6)/((α² + β² + αβ)/18)
+ ((γ + 1)(w/γ − m)²/(2w(1 − m(γ + 1)/w))) · (b + (β − α)/6)² · (19(β³ − α³)/1080 + αβ(β − α)/72)/((α² + β² + αβ)/18)³.

For γ = 1:

β(k) ≈ (w − m) · (b + (β − α)/6)/((α² + β² + αβ)/18)
+ ((w − m)²/(w(1 − 2m/w))) · (b + (β − α)/6)² · (19(β³ − α³)/1080 + αβ(β − α)/72)/((α² + β² + αβ)/18)³.

We will state without proof some results on the approximate solutions of the other three models
with background risk. For the optimization problems (20) and (23), we will assume that X = kμ + Y,
with μ > 0 and E(Y) = 0 (according to the model of [5], Section 5.2), and for (21), we will take
B = kμ + A, with μ > 0 and E_f(A) = 0.

Theorem 3. An approximate solution β₁(k) for the optimization problem (20) is

β₁(k) ≈ (M(X)/Var(X)) · [1/r_u(w) − M(Z)] + (1/2) P_u(w) [1/r_u(w) − M(Z)]² · (M(X))² M[(X − M(X))³]/((Var(X))³ [1 − M(Z) P_u(w)]).

Example 3. The formula from Theorem 3 may take different forms, depending on the distributions of the
random variables X and Z. If X has the normal distribution N(m, σ²), then M(X) = m, Var(X) = σ² and
M[(X − M(X))³] = 0, thus

β₁(k) ≈ (m/σ²) · [1/r_u(w) − M(Z)].

Assume that the utility function u is of HARA-type:

u(w) = ζ(η + w/γ)^(1−γ), for η + w/γ > 0,

and that Z has the distribution N(0, 1); we obtain:

β₁(k) ≈ (m/σ²) · 1/r_u(w) = (m/σ²)(η + w/γ).
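Theorem 3 can be sketched in the same way (hypothetical numbers, not from the paper). For a normal X the third central moment is zero, so the prudence correction drops out, as in Example 3:

```python
def beta1(mx, varx, m3x, ru, pu, mz):
    # Theorem 3 for the fully probabilistic model (20)
    lin = mx / varx * (1 / ru - mz)
    corr = (0.5 * pu * (1 / ru - mz) ** 2
            * mx ** 2 * m3x / (varx ** 3 * (1 - mz * pu)))
    return lin + corr

m, sigma = 0.04, 0.2                 # X ~ N(m, sigma^2): third moment is 0
val = beta1(mx=m, varx=sigma ** 2, m3x=0.0, ru=0.5, pu=0.7, mz=0.1)
```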

The form of β₁(k) from the previous theorem extends the approximate calculation formula of the
solution of the probabilistic model (8) (see [6,7]). Its proof follows steps similar to those in the
formula of β(k) from Theorem 2, but uses the probabilistic techniques from [6,7].

Theorem 4. An approximate solution β 2 (k ) of the optimization problem (21) is

$$\beta_2(k) \approx \frac{E_f(B)}{Var_f(B)}\Big[\frac{1}{r_u(w)} - E_f(C)\Big] + \frac{1}{2}P_u(w)\Big[\frac{1}{r_u(w)} - E_f(C)\Big]^2\,\frac{E_f^2(B)\,E_f[(B - E_f(B))^3]}{Var_f^3(B)\,[1 - E_f(C)P_u(w)]}.$$

Example 4. We assume that:

• B is a triangular fuzzy number B = (b, α, β) and C is a symmetric triangular fuzzy number C = (c, δ);
• the utility function u is of HARA-type: $u(w) = \zeta\big(\eta + \frac{w}{\gamma}\big)^{1-\gamma}$ for $\eta + \frac{w}{\gamma} > 0$;
• the weighting function f has the form f(t) = 2t for t ∈ [0, 1].

By taking into account the calculations from Examples 1, 2, and the fact that E f (C ) = c, the approximate
solution β 2 (k ) becomes:
$$\beta_2(k) \approx \Big(\eta + \frac{w}{\gamma} - c\Big)\,\frac{b + \frac{\beta-\alpha}{6}}{\frac{\alpha^2+\beta^2+\alpha\beta}{18}} + \frac{\gamma+1}{2\gamma\big(\eta+\frac{w}{\gamma}\big)}\Big(\eta + \frac{w}{\gamma} - c\Big)^2\, \frac{\frac{19(\beta^3-\alpha^3)}{1080} + \frac{\alpha\beta(\beta-\alpha)}{72}}{\big(\frac{\alpha^2+\beta^2+\alpha\beta}{18}\big)^3}\cdot \frac{1}{1 - c\,\frac{\gamma+1}{\gamma\left(\eta+\frac{w}{\gamma}\right)}}.$$

Theorem 5. An approximate solution β 3 (k ) of the optimization problem (23) is:

$$\beta_3(k) \approx \frac{M(X)}{Var(X)}\Big[\frac{1}{r_u(w)} - E_f(C)\Big] + \frac{1}{2}P_u(w)\Big[\frac{1}{r_u(w)} - E_f(C)\Big]^2\,\frac{M^2(X)\,M[(X - M(X))^3]}{Var^3(X)\,[1 - E_f(C)P_u(w)]}.$$

Example 5. We consider the following hypotheses:


• X has the normal distribution N(m, σ) and C is the triangular fuzzy number C = (c, δ, ε);
• the utility function u is of HARA-type: $u(w) = \zeta\big(\eta + \frac{w}{\gamma}\big)^{1-\gamma}$ for $\eta + \frac{w}{\gamma} > 0$;
• the weighting function f has the form f(t) = 2t for t ∈ [0, 1].

Then, M(X) = m, Var(X) = σ², M[(X − M(X))³] = 0, and $E_f(C) = c + \frac{\varepsilon - \delta}{6}$.
The form of β₃(k) follows:

$$\beta_3(k) \approx \frac{m}{\sigma^2}\Big[\frac{1}{r_u(w)} - E_f(C)\Big] = \frac{m}{\sigma^2}\Big[\eta + \frac{w}{\gamma} - c - \frac{\varepsilon - \delta}{6}\Big].$$
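Under these hypotheses the solution is plain arithmetic. A quick numeric reading, assuming an asymmetric triangular C = (c, δ, ε) with $E_f(C) = c + \frac{\varepsilon-\delta}{6}$ and HARA utility with $1/r_u(w) = \eta + w/\gamma$ (the sample values below are my own, purely illustrative):

```python
def beta3(m, sigma, eta, w, gamma, c, delta, eps):
    """Approximate solution of Example 5: X ~ N(m, sigma), HARA utility with
    1/r_u(w) = eta + w/gamma, and C = (c, delta, eps) weighted by f(t) = 2t,
    so that E_f(C) = c + (eps - delta) / 6."""
    return (m / sigma**2) * (eta + w / gamma - c - (eps - delta) / 6)

# Illustrative parameter values (my own choice, not from the paper).
val = beta3(m=0.05, sigma=0.2, eta=0.0, w=100.0, gamma=2.0,
            c=0.01, delta=0.02, eps=0.02)
# For a symmetric C (delta == eps), E_f(C) reduces to c:
assert abs(val - (0.05 / 0.04) * (50.0 - 0.01)) < 1e-9
```

The symmetric case delta == eps makes the possibilistic correction (ε − δ)/6 vanish, recovering the same shape as Example 3 with M(Z) replaced by c.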

Author Contributions: The contribution belongs entirely to the author.

Funding: This research received no external funding.

Conflicts of Interest: The author declares no conflict of interest.

Appendix A
Proof of Corollary 2. We take u( x, z) = xz, and applying (5), we have
$$E_f(AZ) = \frac{1}{2}\int_0^1 [M(a_1(\gamma)Z) + M(a_2(\gamma)Z)]\,f(\gamma)d\gamma = \frac{1}{2}\int_0^1 [a_1(\gamma)M(Z) + a_2(\gamma)M(Z)]\,f(\gamma)d\gamma = M(Z)E_f(A).$$

Taking u(x, z) = x²z, we obtain

$$E_f(A^2 Z) = \frac{1}{2}\int_0^1 [M(a_1^2(\gamma)Z) + M(a_2^2(\gamma)Z)]\,f(\gamma)d\gamma = \frac{1}{2}\int_0^1 [a_1^2(\gamma)M(Z) + a_2^2(\gamma)M(Z)]\,f(\gamma)d\gamma = M(Z)E_f(A^2).$$

Proof of Proposition 3. We consider the Taylor approximation:

$$u'(w + \alpha(k\mu + x)) \approx u'(w) + \alpha(k\mu + x)u''(w).$$

Then, by (11) and Proposition 1,

$$V'(\alpha) \approx E_f[(k\mu + A)(u'(w) + u''(w)\alpha(k\mu + A))] = u'(w)(k\mu + E_f(A)) + \alpha u''(w)E_f[(k\mu + A)^2].$$

The equation V'(α(k)) = 0 becomes

$$u'(w)(k\mu + E_f(A)) + \alpha(k)u''(w)E_f[(k\mu + A)^2] \approx 0.$$

We derive it with respect to k:

$$u'(w)\mu + u''(w)\big(\alpha'(k)E_f[(k\mu + A)^2] + 2\alpha(k)\mu E_f(k\mu + A)\big) \approx 0.$$

In this equality we make k = 0. Taking into account that α(0) = 0, it follows

$$u'(w)\mu + u''(w)\alpha'(0)E_f(A^2) \approx 0,$$

from where we determine α'(0):

$$\alpha'(0) \approx -\frac{\mu}{E_f(A^2)}\,\frac{u'(w)}{u''(w)} = \frac{\mu}{E_f(A^2)}\,\frac{1}{r_u(w)}.$$

Proof of Proposition 4. To determine the approximate value of α''(0) we start with the following
Taylor approximation:

$$u'(w + \alpha(k\mu + x)) \approx u'(w) + \alpha(k\mu + x)u''(w) + \frac{\alpha^2}{2}(k\mu + x)^2 u'''(w),$$

from which it follows:

$$(k\mu + x)u'(w + \alpha(k\mu + x)) \approx u'(w)(k\mu + x) + u''(w)\alpha(k\mu + x)^2 + \frac{u'''(w)}{2}\alpha^2(k\mu + x)^3.$$

Then, by (11) and the linearity of the operator $E_f(\cdot)$,

$$V'(\alpha) = E_f[(k\mu + A)u'(w + \alpha(k\mu + A))] \approx u'(w)E_f(k\mu + A) + u''(w)\alpha E_f[(k\mu + A)^2] + \frac{u'''(w)}{2}\alpha^2 E_f[(k\mu + A)^3].$$

Using this approximation for α = α(k), the equation V'(α(k)) = 0 becomes

$$u'(w)(k\mu + E_f(A)) + u''(w)\alpha(k)E_f[(k\mu + A)^2] + \frac{u'''(w)}{2}(\alpha(k))^2 E_f[(k\mu + A)^3] \approx 0.$$

Deriving with respect to k one obtains:

$$\mu u'(w) + u''(w)\big[\alpha'(k)E_f((k\mu + A)^2) + 2\mu\alpha(k)E_f(k\mu + A)\big] + \frac{u'''(w)}{2}\big[2\alpha(k)\alpha'(k)E_f((k\mu + A)^3) + 3(\alpha(k))^2\mu E_f((k\mu + A)^2)\big] \approx 0.$$

We derive one more time with respect to k:

$$u''(w)\big[\alpha''(k)E_f((k\mu + A)^2) + 2\mu\alpha'(k)E_f(k\mu + A) + 2\mu\alpha'(k)E_f(k\mu + A) + 2\mu^2\alpha(k)\big] + \frac{u'''(w)}{2}\big[2(\alpha'(k))^2 E_f((k\mu + A)^3) + 2\alpha(k)\alpha''(k)E_f((k\mu + A)^3) + 6\mu\alpha(k)\alpha'(k)E_f((k\mu + A)^2) + 6\mu\alpha(k)\alpha'(k)E_f((k\mu + A)^2) + 6\mu^2(\alpha(k))^2 E_f(k\mu + A)\big] \approx 0.$$

In the previous relation, we take k = 0:

$$u''(w)\big[\alpha''(0)E_f(A^2) + 2\mu\alpha'(0)E_f(A) + 2\mu\alpha'(0)E_f(A) + 2\mu^2\alpha(0)\big] + \frac{u'''(w)}{2}\big[2(\alpha'(0))^2 E_f(A^3) + 2\alpha(0)\alpha''(0)E_f(A^3) + 6\mu\alpha(0)\alpha'(0)E_f(A^2) + 6\mu\alpha(0)\alpha'(0)E_f(A^2) + 6\mu^2(\alpha(0))^2 E_f(A)\big] \approx 0.$$

Taking into account that α(0) = 0 and E_f(A) = 0, one obtains

$$u''(w)\alpha''(0)E_f(A^2) + u'''(w)(\alpha'(0))^2 E_f(A^3) \approx 0,$$

from where we get α''(0):

$$\alpha''(0) \approx -\frac{u'''(w)}{u''(w)}\,\frac{E_f(A^3)}{E_f(A^2)}\,(\alpha'(0))^2.$$

By replacing α'(0) with the expression from Proposition 3 and taking into account (15), it follows:

$$\alpha''(0) = \frac{P_u(w)E_f(A^3)}{(r_u(w))^2(E_f(A^2))^3}\,\mu^2.$$

Proof of Theorem 1. By replacing in (14) the approximate values of α'(0) and α''(0) given by
Propositions 3 and 4 and taking into account that E_f(B) = kμ, one obtains:

$$\alpha(k) \approx k\alpha'(0) + \frac{1}{2}k^2\alpha''(0) = \frac{k\mu}{E_f(A^2)}\,\frac{1}{r_u(w)} + \frac{1}{2}(k\mu)^2\,\frac{P_u(w)E_f(A^3)}{(r_u(w))^2(E_f(A^2))^3} = \frac{E_f(B)}{E_f(A^2)}\,\frac{1}{r_u(w)} + \frac{1}{2}(E_f(B))^2\,\frac{P_u(w)E_f(A^3)}{(r_u(w))^2(E_f(A^2))^3}.$$

However, E_f(A²) = E_f[(B − E_f(B))²] = Var_f(B). Then,

$$\alpha(k) \approx \frac{1}{r_u(w)}\,\frac{E_f(B)}{Var_f(B)} + \frac{1}{2}(E_f(B))^2\,\frac{P_u(w)E_f[(B - E_f(B))^3]}{(r_u(w))^2(Var_f(B))^3}.$$

Proof of Proposition 5. We consider the Taylor approximation:

$$u'(w + \alpha(k\mu + x) + z) \approx u'(w) + (\alpha(k\mu + x) + z)u''(w).$$

Then,

$$(k\mu + x)u'(w + \alpha(k\mu + x) + z) \approx u'(w)(k\mu + x) + u''(w)\alpha(k\mu + x)^2 + u''(w)z(k\mu + x).$$

From this relation, from (25) and the linearity of mixed expected utility, it follows:

$$W'(\alpha) \approx u'(w)(k\mu + E_f(A)) + u''(w)\alpha E_f[(k\mu + A)^2] + u''(w)E_f[(k\mu + A)Z].$$

Then, the equation W'(β(k)) = 0 will be written

$$u'(w)(k\mu + E_f(A)) + u''(w)\beta(k)E_f[(k\mu + A)^2] + u''(w)E_f[(k\mu + A)Z] \approx 0.$$

By deriving with respect to k one obtains:

$$u'(w)\mu + u''(w)\big(\beta'(k)E_f[(k\mu + A)^2] + 2\beta(k)\mu E_f(k\mu + A)\big) + u''(w)\mu M(Z) \approx 0.$$

For k = 0, it follows

$$u'(w)\mu + u''(w)\mu M(Z) + u''(w)\beta'(0)E_f(A^2) \approx 0,$$

from where β'(0) is obtained:

$$\beta'(0) \approx -\frac{(u'(w) + u''(w)M(Z))\mu}{u''(w)E_f(A^2)} = \frac{\mu}{E_f(A^2)}\Big(\frac{1}{r_u(w)} - M(Z)\Big).$$

Proof of Proposition 6. We consider the Taylor approximation

$$u'(w + \alpha(k\mu + x) + z) \approx u'(w) + u''(w)[\alpha(k\mu + x) + z] + \frac{1}{2}u'''(w)[\alpha(k\mu + x) + z]^2,$$

from where it follows

$$(k\mu + x)u'(w + \alpha(k\mu + x) + z) \approx u'(w)(k\mu + x) + u''(w)(k\mu + x)[\alpha(k\mu + x) + z] + \frac{1}{2}u'''(w)(k\mu + x)[\alpha(k\mu + x) + z]^2.$$

By (25), the previous relation and the linearity of mixed expected utility, we will have

$$W'(\alpha) \approx u'(w)(k\mu + E_f(A)) + u''(w)E_f[(k\mu + A)(\alpha(k\mu + A) + Z)] + \frac{1}{2}u'''(w)E_f[(k\mu + A)(\alpha(k\mu + A) + Z)^2].$$

Then, from W'(β(k)) = 0, we will deduce:

$$u'(w)(k\mu + E_f(A)) + u''(w)E_f[(k\mu + A)(\beta(k)(k\mu + A) + Z)] + \frac{1}{2}u'''(w)E_f[(k\mu + A)(\beta(k)(k\mu + A) + Z)^2] \approx 0.$$

If we denote

$$g(k) = E_f[(k\mu + A)(\beta(k)(k\mu + A) + Z)], \tag{A1}$$

$$h(k) = E_f[(k\mu + A)(\beta(k)(k\mu + A) + Z)^2], \tag{A2}$$

then the previous relation can be written

$$u'(w)(k\mu + E_f(A)) + u''(w)g(k) + \frac{1}{2}u'''(w)h(k) \approx 0.$$

Deriving twice with respect to k, we obtain:

$$u''(w)g''(k) + \frac{1}{2}u'''(w)h''(k) \approx 0. \tag{A3}$$

We set k = 0 in (A3):

$$u''(w)g''(0) + \frac{1}{2}u'''(w)h''(0) \approx 0. \tag{A4}$$

The computation of g''(0). We notice that

$$g(k) = \beta(k)E_f[(k\mu + A)^2] + E_f[(k\mu + A)Z].$$

By denoting $g_1(k) = \beta(k)E_f[(k\mu + A)^2]$ and $g_2(k) = E_f[(k\mu + A)Z]$, we will have g(k) = g₁(k) + g₂(k). One easily sees that g₂''(k) = 0, thus g''(k) = g₁''(k) + g₂''(k) = g₁''(k). We derive g₁(k):

$$g_1'(k) = \beta'(k)E_f[(k\mu + A)^2] + 2\mu\beta(k)E_f(k\mu + A) = \beta'(k)E_f[(k\mu + A)^2] + 2\mu^2 k\beta(k),$$

since $E_f(k\mu + A) = k\mu + E_f(A) = k\mu$. We derive one more time:

$$g_1''(k) = \beta''(k)E_f[(k\mu + A)^2] + 2\mu\beta'(k)E_f(k\mu + A) + 2\mu^2[\beta(k) + k\beta'(k)].$$

Setting k = 0 in the previous relation and taking into account that β(0) = E_f(A) = 0, it follows

$$g''(0) = \beta''(0)E_f(A^2). \tag{A5}$$

The computation of h''(0). We write h(k) as

$$h(k) = \beta^2(k)E_f[(k\mu + A)^3] + 2\beta(k)E_f[(k\mu + A)^2 Z] + E_f[(k\mu + A)Z^2].$$

We denote

$$h_1(k) = \beta^2(k)E_f[(k\mu + A)^3],$$
$$h_2(k) = \beta(k)E_f[(k\mu + A)^2 Z],$$
$$h_3(k) = E_f[(k\mu + A)Z^2].$$

Then, h(k) = h₁(k) + h₂(k) + h₃(k). One notices that h₃''(0) = 0, thus

$$h''(0) = h_1''(0) + 2h_2''(0). \tag{A6}$$

We first compute h₂''(0). One can easily notice that

$$h_2''(k) = \beta''(k)E_f[(k\mu + A)^2 Z] + 2\beta'(k)\frac{d}{dk}E_f[(k\mu + A)^2 Z] + \beta(k)\frac{d^2}{dk^2}E_f[(k\mu + A)^2 Z].$$

Taking into account that

$$\frac{d}{dk}E_f[(k\mu + A)^2 Z] = 2\mu E_f[(k\mu + A)Z],$$

and β(0) = 0, we deduce

$$h_2''(0) = \beta''(0)E_f(A^2 Z) + 4\mu\beta'(0)E_f(AZ). \tag{A7}$$

We will compute h₁''(0). We derive h₁(k) twice:

$$\frac{d^2}{dk^2}h_1(k) = \frac{d^2}{dk^2}(\beta^2(k))\,E_f[(k\mu + A)^3] + 2\frac{d}{dk}(\beta^2(k))\,\frac{d}{dk}E_f[(k\mu + A)^3] + \beta^2(k)\frac{d^2}{dk^2}E_f[(k\mu + A)^3].$$

We compute the following derivatives from the last sum:

$$\frac{d}{dk}(\beta^2(k)) = 2\beta(k)\beta'(k),$$
$$\frac{d^2}{dk^2}(\beta^2(k)) = 2[\beta''(k)\beta(k) + (\beta'(k))^2],$$
$$\frac{d}{dk}E_f[(k\mu + A)^3] = 3\mu E_f[(k\mu + A)^2].$$

Then, taking into account β(0) = 0:

$$h_1''(0) = 2[\beta''(0)\beta(0) + (\beta'(0))^2]E_f(A^3) + 12\mu\beta(0)\beta'(0)E_f(A^2) + \beta^2(0)\frac{d^2}{dk^2}E_f[(k\mu + A)^3]\Big|_{k=0} = 2(\beta'(0))^2 E_f(A^3). \tag{A8}$$

By (A6)–(A8):

$$h''(0) = h_1''(0) + 2h_2''(0) = 2(\beta'(0))^2 E_f(A^3) + 2\big(\beta''(0)E_f(A^2 Z) + 4\mu\beta'(0)E_f(AZ)\big). \tag{A9}$$

Replacing in (A4) the values of g''(0) and h''(0) given by (A5) and (A9):

$$u''(w)\beta''(0)E_f(A^2) + \frac{1}{2}u'''(w)\big[2(\beta'(0))^2 E_f(A^3) + 2\big(\beta''(0)E_f(A^2 Z) + 4\mu\beta'(0)E_f(AZ)\big)\big] \approx 0,$$

from where

$$\beta''(0)\big[u''(w)E_f(A^2) + u'''(w)E_f(A^2 Z)\big] \approx -\beta'(0)u'''(w)\big[\beta'(0)E_f(A^3) + 4\mu E_f(AZ)\big].$$

The approximate value of β''(0) follows:

$$\beta''(0) \approx -u'''(w)\beta'(0)\,\frac{\beta'(0)E_f(A^3) + 4\mu E_f(AZ)}{u''(w)E_f(A^2) + u'''(w)E_f(A^2 Z)}.$$

According to Corollary 2, the expression above which approximates β''(0) can be written:

$$\beta''(0) \approx -u'''(w)\beta'(0)\,\frac{\beta'(0)E_f(A^3) + 4\mu M(Z)E_f(A)}{u''(w)E_f(A^2) + M(Z)u'''(w)E_f(A^2)} = -\frac{u'''(w)(\beta'(0))^2 E_f(A^3)}{E_f(A^2)\big[u''(w) + M(Z)u'''(w)\big]},$$

since E_f(A) = 0. If we replace A with B − E_f(B), one obtains

$$\beta''(0) \approx -\frac{(\beta'(0))^2 u'''(w)E_f[(B - E_f(B))^3]}{Var_f(B)\big[u''(w) + M(Z)u'''(w)\big]} = \frac{P_u(w)(\beta'(0))^2 E_f[(B - E_f(B))^3]}{Var_f(B)\big[1 - M(Z)P_u(w)\big]}.$$

Proof of Theorem 2. The approximation formula of β'(0) from Proposition 5 can be written:

$$\beta'(0) \approx \frac{\mu}{Var_f(B)}\Big[\frac{1}{r_u(w)} - M(Z)\Big]. \tag{A10}$$

According to (27), (A10), and Proposition 6,

$$\beta(k) \approx k\beta'(0) + \frac{1}{2}k^2\beta''(0) = \frac{\mu k}{Var_f(B)}\Big[\frac{1}{r_u(w)} - M(Z)\Big] + \frac{1}{2}P_u(w)\,\frac{(k\mu)^2 E_f[(B - E_f(B))^3]}{Var_f^3(B)\big[1 - M(Z)P_u(w)\big]}\Big[\frac{1}{r_u(w)} - M(Z)\Big]^2.$$

Since μk = E_f(B), it follows

$$\beta(k) \approx \frac{E_f(B)}{Var_f(B)}\Big[\frac{1}{r_u(w)} - M(Z)\Big] + \frac{1}{2}P_u(w)\Big[\frac{1}{r_u(w)} - M(Z)\Big]^2\,\frac{E_f^2(B)\,E_f[(B - E_f(B))^3]}{Var_f^3(B)\big[1 - M(Z)P_u(w)\big]}.$$

References
1. Arrow, K.J. Essays in the Theory of Risk Bearing; North-Holland Publishing Company: Amsterdam,
The Netherlands, 1970.
2. Brandt, M. Portfolio choice problems. In Handbook of Financial Econometrics: Tools and Techniques; Ait-Sahalia, Y.,
Hansen, L.P., Eds.; North-Holland Publishing Company: Amsterdam, The Netherlands, 2009; Volume 1.
3. Pratt, J.W. Risk Aversion in the Small and in the Large. Econometrica 1964, 32, 122–136. [CrossRef]
4. Eeckhoudt, L.; Gollier, C.; Schlesinger, H. Economic and Financial Decisions under Risk; Princeton University Press:
Princeton, NJ, USA, 2005.
5. Gollier, C. The Economics of Risk and Time; MIT Press: Cambridge, MA, USA, 2004.
6. Athayde, G.; Flores, R. Finding a Maximum Skewness Portfolio: A General Solution to Three-Moments
Portfolio Choice. J. Econ. Dyn. Control 2004, 28, 1335–1352. [CrossRef]
7. Garlappi, L.; Skoulakis, G. Taylor Series Approximations to Expected Utility and Optimal Portfolio Choice.
Math. Financ. Econ. 2011, 5, 121–156. [CrossRef]
8. Zakamulin, V.; Koekebakker, S. Portfolio Performance Evaluation with Generalized Sharpe Ratios:
Beyond the Mean and Variance. J. Bank. Financ. 2009, 33, 1242–1254. [CrossRef]
9. Kimball, M.S. Precautionary saving in the small and in the large. Econometrica 1990, 58, 53–73. [CrossRef]
10. Ñiguez, T.M.; Paya, I.; Peel, D. Pure Higher-Order Effects in the Portfolio Choice Model. Financ. Res. Lett.
2016, 19, 255–260. [CrossRef]
11. Le Courtois, O. On Prudence, Temperance, and Monoperiodic Portfolio Optimization. In Proceedings of the
Risk and Choice: A Conference in Honor of Louis Eeckhoudt, Toulouse, France, 12–13 July 2012.
12. Zadeh, L.A. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1978, 1, 3–28. [CrossRef]
13. Dubois, D.; Foulloy, L.; Mauris, G.; Prade, H. Probability–possibility transformations, triangular fuzzy sets
and probabilistic inequalities. Reliab. Comput. 2004, 10, 273–297. [CrossRef]
14. Carlsson, C.; Fullér, R. Possibility for Decision; Springer: Berlin, Germany, 2011.
15. Dubois, D.; Prade, H. Possibility Theory; Plenum Press: New York, NY, USA, 1988.
16. Georgescu, I. Possibility Theory and the Risk; Springer: Berlin, Germany, 2012.
17. Dubois, D.; Prade, H. Fuzzy Sets and Systems: Theory and Applications; Academic Press: New York, NY, USA, 1980.
18. Carlsson, C.; Fullér, R. On possibilistic mean value and variance of fuzzy numbers. Fuzzy Sets Syst. 2001,
122, 315–326. [CrossRef]
19. Fullér, R.; Majlender, P. On weighted possibilistic mean and variance of fuzzy numbers. Fuzzy Sets Syst.
2003, 136, 363–374.


20. Lucia-Casademunt, A.M.; Georgescu, I. Optimal saving and prudence in a possibilistic framework. In Distributed
Computing and Artificial Intelligence; Springer: Cham, Switzerland, 2013; Volume 217, pp. 61–68.
21. Collan, M.; Fedrizzi, M.; Luukka, P. Possibilistic risk aversion in group decisions: Theory with application
in the insurance of giga-investments valued through the fuzzy pay-off method. Soft Comput. 2017, 21,
4375–4386. [CrossRef]
22. Majlender, P. A Normative Approach to Possibility Theory and Decision Support. Ph.D. Thesis, Turku Centre
for Computer Science, Turku, Finland, 2004.
23. Mezei, J. A Quantitative View on Fuzzy Numbers. Ph.D. Thesis, Turku Centre for Computer Science,
Turku, Finland, 2011.
24. Thavaneswaran, A.; Thiagarajahb, K.; Appadoo, S.S. Fuzzy coefficient volatility (FCV) models with
applications. Math. Comput. Model. 2007, 45, 777–786. [CrossRef]
25. Thavaneswaran, A.; Appadoo, S.S.; Paseka, A. Weighted possibilistic moments of fuzzy numbers with
applications to GARCH modeling and option pricing. Math. Comput. Model. 2009, 49, 352–368. [CrossRef]
26. Kaluszka, M.; Kreszowiec, M. On risk aversion under fuzzy random data. Fuzzy Sets Syst. 2017, 328, 35–53.
[CrossRef]
27. Zhang, W.G.; Wang, Y.L. A Comparative Analysis of Possibilistic Variances and Covariances of Fuzzy
Numbers. Fundam. Inform. 2008, 79, 257–263.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
mathematics
Article
The Emergence of Fuzzy Sets in the Decade of the
Perceptron—Lotfi A. Zadeh’s and Frank Rosenblatt’s
Research Work on Pattern Classification
Rudolf Seising
The Research Institute for the History of Science and Technology, Deutsches Museum, 80538 Munich, Germany;
[email protected]; Tel.: +49-(0)-89-2179-298

Received: 25 May 2018; Accepted: 19 June 2018; Published: 26 June 2018

Abstract: In the 1950s, the mathematically oriented electrical engineer, Lotfi A. Zadeh, investigated
system theory, and in the mid-1960s, he established the theory of Fuzzy sets and systems
based on the mathematical theorem of linear separability and the pattern classification problem.
Contemporaneously, the psychologist, Frank Rosenblatt, developed the theory of the perceptron
as a pattern recognition machine based on the starting research in so-called artificial intelligence,
and especially in research on artificial neural networks, until the book of Marvin L. Minsky and
Seymour Papert disrupted this research program. In the 1980s, the Parallel Distributed Processing
research group requickened the artificial neural network technology. In this paper, we present the
interwoven historical developments of the two mathematical theories which opened up into fuzzy
pattern classification and fuzzy clustering.

Keywords: pattern classification; fuzzy sets; perceptron; artificial neural networks; Lotfi A. Zadeh;
Frank Rosenblatt

1. Introduction
“Man’s pattern recognition process—that is, his ability to select, classify, and abstract
significant information from the sea of sensory information in which he is immersed—is
a vital part of his intelligent behavior.”
Charles Rosen [1] (p. 38)

In the 1960s, capabilities for classification, discrimination, and recognition of patterns were
demands concerning systems deserving of the label “intelligent”. Back then, and from a mathematical
point of view, patterns were sets of points in a mathematical space; however, by and by, they received
the meaning of datasets from the computer science perspective.
Under the concept of a pattern, objects of reality are usually represented by pixels; frequency
patterns that represent a linguistic sign or a sound can also be characterized as patterns. “At the lowest
level, general pattern recognition reduces to pattern classification, which consists of techniques to
separate groups of objects, sounds, odors, events, or properties into classes, based on measurements
made on the entities being classified". So said the artificial intelligence (AI) pioneer, Charles Rosen,
in the introduction of an article in Science in 1967; in the summary, he claimed: "This function,
pattern recognition, has become a major focus of research by scientists working in the field of artificial
intelligence" [1] (pp. 38, 43).
The first AI product that was supposed to solve the classification of patterns, such as handwritten
characters, was an artificial neuronal network simulation system named perceptron. Its designer was
Frank Rosenblatt, a research psychologist at the Cornell Aeronautical Laboratory in Buffalo, New York.

Mathematics 2018, 6, 110; doi:10.3390/math6070110 227 www.mdpi.com/journal/mathematics



The historical link between pattern discrimination or classification and fuzzy sets documents
a RAND report entitled “Abstraction and Pattern Classification”, written in 1964 by Lotfi A. Zadeh,
a Berkeley professor of electrical engineering. In this report, he introduced the concept of fuzzy sets
for the first time [2]. (The text was written by Zadeh. However, he was not employed at RAND
Corporation; Richard Bellman and Robert Kalaba worked at RAND, and therefore, the report appeared
under the authorship and order: Bellman, Kalaba, Zadeh; later, the text appeared in the Journal of
Mathematical Analysis and Application [3].)
“Pattern recognition, together with learning” was an essential feature of computers going
“Steps Toward Artificial Intelligence” in the 1960s as Marvin Lee Minsky postulated already at the
beginning of this decade [4] (p. 8). On the 23rd of June of the same year, after four years of simulation
experiments, Rosenblatt and his team of engineers and psychologists at the Cornell Aeronautical
Laboratory demonstrated to the public their experimental pattern recognition machine, the “Mark I
perceptron”.
Another historical link connects pattern recognition or classification with the concept of linear
separability when Minsky and Seymour Papert showed in their book, “Perceptrons: an introduction to
computational geometry” published in 1969, that Rosenblatt’s perceptron was only capable of learning
linearly separable patterns. In logical terms, this means that a single-layer perceptron cannot learn the
logical connective XOR of propositional logic.
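This limitation is easy to reproduce with a few lines of code: the perceptron learning rule converges on any linearly separable truth table such as AND, but keeps making errors forever on XOR. A minimal sketch of a Rosenblatt-style single-layer unit with bias (an illustration, not the Mark I's architecture):

```python
def train_perceptron(samples, epochs=100):
    """Rosenblatt-style learning rule on 2-input boolean samples.
    Returns (w1, w2, bias, converged)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            if err:                    # error-driven weight update
                errors += 1
                w1 += err * x1
                w2 += err * x2
                b += err
        if errors == 0:                # a full error-free pass: converged
            return w1, w2, b, True
    return w1, w2, b, False

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
assert train_perceptron(AND)[3] is True    # linearly separable: converges
assert train_perceptron(XOR)[3] is False   # not separable: never converges
```

Because no line can put (0,1) and (1,0) on one side and (0,0), (1,1) on the other, an error-free pass over the XOR table is impossible, whatever the number of epochs.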
In addition, a historical link combines Zadeh’s research work on optimal systems and the
mathematical concept of linear separability, which is important to understand the development
from system theory to fuzzy system theory.
We refer to the years from 1957 to 1969 as the decade of the perceptron. It was amidst these years,
and it was owing to the research on pattern recognition during the decade of the perceptron, that fuzzy
sets appeared as a new “mathematics of fuzzy or cloudy quantities” [5] (p. 857).
This survey documents the history of Zadeh’s mathematical research work in electrical
engineering and computer science in the 1960s. It shows the intertwined system of research in
various areas, among them, mathematics, engineering and psychology. Zadeh’s mathematically
oriented thinking brought him to fundamental research in logics and statistics, and the wide spectrum
of his interests in engineering sciences acquainted him with research on artificial neural networks and
natural brains as well.

2. Pattern Separation
Today, algorithms in machine learning and statistics solve the problem of pattern classification,
i.e., of separating points in a set. More specifically, and in the case of Euclidean geometry,
they determine sets of points to be linearly separable. In the case of only two dimensions in the
plane, linear separability of two sets A and B means that there exists at least one line in the plane with
all elements of A on one side of the line and all elements of B on the other side.
For n-dimensional Euclidean spaces, this generalizes if the word “line” is replaced by
“hyperplane”: A and B are linearly separable if there exists at least one hyperplane with all elements of
A on one side of the hyperplane and all elements of B on the other side.
Let us consider the case n = 2 (see Figure 1): Two subsets A ⊆ R2, B ⊆ R2 are linearly separable if
there exist n + 1 = 3 real numbers w1, w2, w3 such that for all a = (a1, a2) ∈ A, b = (b1, b2) ∈ B it holds

w1 a1 + w2 a2 ≤ w3 ≤ w1 b1 + w2 b2 .

The points x = (x1 , x2 ) with w1 x1 + w2 x2 = w3 build the separating line.
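Checking whether a candidate triple (w1, w2, w3) separates two finite point sets is then a matter of verifying the displayed inequality pointwise. A minimal sketch (names are mine):

```python
def separates(w1, w2, w3, A, B):
    """True iff w1*a1 + w2*a2 <= w3 <= w1*b1 + w2*b2 for all a in A, b in B,
    i.e., the line w1*x1 + w2*x2 = w3 separates A from B."""
    return (all(w1 * a1 + w2 * a2 <= w3 for (a1, a2) in A)
            and all(w1 * b1 + w2 * b2 >= w3 for (b1, b2) in B))

A = [(0, 0), (1, 0)]          # points below the line x2 = 1/2
B = [(0, 1), (1, 1)]          # points above it
assert separates(0, 1, 0.5, A, B)        # the line x2 = 1/2 separates A and B
assert not separates(1, 0, 0.5, A, B)    # the line x1 = 1/2 does not
```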


Figure 1. The points (0,1) and (1,0) are not linearly separable.

In 1936, the Polish mathematician, Meier (Maks) Eidelheit (1910–1943), published an article where
he proved the later so-called Eidelheit separation theorem concerning the possibility of separating
convex sets in normed vector spaces (or local-convex spaces) by linear functionals [6].
One of the researchers who checked the separation theorem for applications in electrical
engineering was Lotfi Aliasker Zadeh. He was born in Baku, Azerbaidjan; he studied electrical
engineering at the University of Tehran, Iran, and he graduated with a BSc degree in 1942.
The following year, he emigrated to the United States (US) via Cairo, Egypt. He landed in Philadelphia,
and then worked for the International Electronic Laboratories in New York. In 1944, he went to Boston
to continue his studies at the Massachusetts Institute for Technology (MIT). In 1946, Zadeh was awarded
a Master’s of Science degree at MIT, and then he changed to Columbia University in New York, where
he earned his Doctor of Philosophy (PhD) degree in 1950 for his thesis in the area of continuous analog
systems [7]. After being appointed assistant professor, he was searching for new research topics. Both
information theory and digital technology interested him, and he turned his attention to digital systems.
(Zadeh, in an interview with the author on 8 September 1999, in Zittau, at the margin of the 7th Zittau
Fuzzy Colloquium at the University Zittau/Görlitz, said he "was very much influenced by Shannon's
talk that he gave in New York in 1946 in which he described his information theory.") Zadeh began
delivering lectures on automata theory, and in 1949, he organized and moderated a discussion meeting
on digital computers at Columbia University, in which Claude E. Shannon, Edmund Berkeley, and
Francis J. Murray took part. (It was probably the first public debate on this subject ever, as suggested
by Zadeh in an interview with the author on 15 June 2001, University of California, Berkeley.)
In the second half of the 1950s, Zadeh (Figure 2) became one of the pioneers of system theory,
and among his interests, was the problem of evaluating the performance of systems like electrical
circuits and networks with respect to their input and their output. His question was whether such
systems could be “identified” by experimental means. His thoughts “On the Identification Problem”
appeared in the December 1956 edition of “IRE Transactions on Circuit Theory” of the Institute of
Radio Engineers [8]. For Zadeh, a system should be identified given (1) a system as a black box B
whose input–output relationship is not known a priori, (2) the input space of B, which is the set of all
time functions on which the operations with B are defined, and (3) a black box class A that contains B,
which is known a priori. Based on the observed response behavior of B for various inputs, an element
of A should be determined that is equivalent to B inasmuch as its responses to all time functions in the
input space of B are identical to those of B. In a certain sense, one can claim to have “identified” B by
means of this known element of A.
Of course, this “system identification” can turn out to be arbitrarily difficult to achieve.
Only insofar as information about black box B is available can black box set A be determined. If B has
a “normal” initial state in which it returns to the same value after every input, such as the resting state
of a linear system, then the problem is not complicated. If this condition is not fulfilled, however, then
B’s response behavior depends on a “not normal” initial state, and the attempt to solve the problem
gets out of hand very quickly.
All different approaches to solving the problem that was proposed up to that point were of
theoretical interest, but they were not very helpful in practice and, on top of that, many of the
suggested solutions did not even work when the "black box set" of possible solutions was very limited.
In the course of the article, Zadeh only looks at very specific nonlinear systems, which are relatively
easy to identify by observation as sinus waves with different amplitudes. The identification problem
remained unsolved for Zadeh.
In 1956, Zadeh took a half-year sabbatical at the Institute for Advanced Study (IAS) in Princeton,
which was, for him, the "Mecca for mathematicians", as disclosed by Zadeh in an interview with the
author on 16 June 2001, University of California, Berkeley. It inspired him very quickly, and he
took back to New York many very positive and lasting impressions. As a “mathematical oriented
engineer”—he characterized himself that way in one of my interviews on 26 July 2000, University of
California, Berkeley—he now started analyzing concepts in system theory from a mathematical point
of view, and one of these concepts was optimality.
In his editorial to the March 1958 issue of the “IRE Transactions on Information Theory”, Zadeh
wrote, “Today we tend, perhaps, to make a fetish of optimality. If a system is not ‘best’ in one sense or
another, we do not feel satisfied. Indeed, we are not apt to place too much confidence in a system that
is, in effect, optimal by definition”. In this editorial, he criticized scalar-valued performance criteria of
systems because “when we choose a criterion of performance, we generally disregard a number of
important factors. Moreover, we oversimplify the problem by employing a scalar loss function” [9].
Hence, he suggested that vector-valued loss functions might be more suitable in some cases.

Figure 2. Lotfi A. Zadeh, undated photo, approximately 1950s, photo credit: Fuzzy archive
Rudolf Seising.

3. Optimality and Noninferiority


In September 1963, Zadeh continued the mentioned criticism in a correspondence to the
“IEEE Transactions on Automatic Control” of the Institute of Electrical and Electronics Engineers [9].
He emphasized, “one of the most serious weaknesses of the current theories of optimal control is
that they are predicated on the assumption that the performance of a system can be measured by
a single number”. Therefore, he sketched the usual reasoning with scalar-valued performance criteria
of systems as follows: If ∑ is a set of systems and if P(S) is the real-valued performance index of
a system S, then a system S0 is called optimal in ∑ if P(S0) ≥ P(S) for all S ∈ ∑. Thereafter,
he criticized that method: "The trouble with this concept of optimality is that, in general, there is
more than one consideration that enters into the assessment of performance of S, and in most cases,
these considerations cannot be subsumed under a single scalar-valued criterion. In such cases, a system
S may be superior to a system S′ in some respects and inferior to S′ in others, and the class of systems
∑ is not completely ordered” [9] (p. 59).
For that reason, Zadeh demanded the distinction between the concepts of “optimality” and
“noninferiority”. To define what these concepts mean, he considered the “constraint set” C ⊆ ∑ that is
defined by the constraints imposed on system S, and a partial ordering ≥ on ∑ by associating with
each system S in ∑ the following three disjoint subsets of ∑:

(1) ∑ > (S), the subset of all systems which are “superior” to S.
(2) ∑ ≤ (S), the subset of all systems, which are inferior or equal (“inferior”) to S.
(3) ∑ ~(S), the subset of all systems, which are not comparable with S.

That followed Zadeh’s definition of the system’s property of “noninferiority”:

Definition 1. A system S0 in C is noninferior in C if the intersection of C and ∑> (S0 ) is empty: C ∩ ∑> (S0 ) = Ø.

Therefore, there is no system in C, which is better than S0 .


The system’s property of optimality he defined, as follows:

Definition 2. A system S0 in C is optimal in C if C is contained in ∑ ≤ (S0): C ⊆ ∑ ≤ (S0).

Therefore, every system in C is inferior to S0 or equal to S0 .


These definitions show that an optimal system S0 is necessarily “noninferior”, but not all
noninferior systems are optimal.
Zadeh considered the partial ordering of the set of systems, ∑, by a vector-valued performance
criterion. Let system S be characterized by the vector x = (x1 , ..., xn ), whose real-valued
components represent, say, the values of n adjustable parameters of S, and let C be a subset of
n-dimensional Euclidean space Rn . Furthermore, let the performance of S be measured by an m vector
p(x) = [p1(x), ..., pm(x)], where pi(x), i = 1, ..., m, is a given real-valued function of x. Then S ≥ S′ if and
only if p(x) ≥ p(x′). That is, pi(x) ≥ pi(x′), i = 1, ..., m.
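On a finite constraint set, Zadeh's noninferior systems are what is today called the Pareto-nondominated subset, and they can be computed by direct pairwise comparison of performance vectors. A small sketch under the convention that larger pi is better (names are mine):

```python
def dominates(p, q):
    """p dominates q: p_i >= q_i for all i, with strict > for at least one i."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def noninferior(C, perf):
    """Points of C not strictly improved upon by any other point of C."""
    P = {x: perf(x) for x in C}
    return [x for x in C if not any(dominates(P[y], P[x]) for y in C if y != x)]

# Two performance criteria p(x) = (p1, p2); C is a finite constraint set.
C = [(0, 3), (1, 2), (2, 2), (1, 0)]
front = noninferior(C, lambda x: x)   # identity: each point is its own p-vector
assert set(front) == {(0, 3), (2, 2)}
```

Here (1, 2) is dominated by (2, 2) and (1, 0) by (1, 2), while the two surviving points are incomparable with each other, which is exactly the partial (not complete) ordering Zadeh describes.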
Figure 3 illustrates “the case where ∑> (S) or, equivalently, ∑> (x) is a fixed cone with a vertex at
x, and the constraint set C is a closed bounded subset of Rn ” [9] (p. 59).

Figure 3. Illustration of the significance of C and ∑> (x) [9].


If $p_i(x) = a_1^i x_1 + ... + a_n^i x_n$, where $a^i = (a_1^i, ..., a_n^i)$ is the gradient of $p_i(x)$, $a^i = \operatorname{grad} p_i(x)$ (a constant
vector), then ∑>(x) is the polar cone of the cone spanned by the $a^i$.
cannot occur in the interior of the set C. If C is a convex set, then the set of all noninferior points on the
boundary of C is the set Γ of all points x0 , through which hyperplanes separating the set C and the set
∑> (x0 ) can be passed. Figure 4 shows the set Γ heavy-lined on the boundary of C.

Figure 4. The set of noninferior points on the boundary of C [9].

In this example, ∑> (x0 ) and C are convex sets, and for convex sets, the separation theorem says
that there exists a hyperplane, which separates them.

4. Rosenblatt’s Perceptron
Among other researchers who studied the separability of data points was Frank Rosenblatt,
a research psychologist at the Cornell Aeronautical Laboratory in Buffalo, New York. Rosenblatt was
born in New Rochelle, New York on 11 July 1928. In 1957, at the Fifteenth International Congress of
Psychology held in Brussels, he suggested a “theory of statistical separability” to interpret receiving
and recognizing patterns in natural and artificial systems.
In 1943, Warren McCulloch and Walter Pitts published the first model of neurons that was
later called “artificial” or “McCulloch–Pitts neuron”. In their article, “A logical calculus of the ideas
immanent in nervous activity”, they “realized” the entire logical calculus of propositions by “neuron
nets”, and they arrived at the following assumptions [10] (p. 116):

1. The activity of the neurons is an “all-or-none” process.


2. A certain fixed number of synapses must be excited within the period of latent addition in order
to excite a neuron at any time, and this number is independent of previous activity and position
on the neuron.
3. The only significant delay within the nervous system is synaptic delay.
4. The activity of any inhibitory synapse prevents without exception the excitation of the neuron at
that time.

5. The structure of the net does not change with time.
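Assumptions 1–4 describe an all-or-none threshold gate with absolute inhibition; the following is a minimal hypothetical rendering of such a unit, not code from McCulloch and Pitts:

```python
# Sketch of a McCulloch-Pitts threshold unit (illustrative rendering).
def mcp_neuron(excitatory, inhibitory, threshold):
    """All-or-none unit: fires (1) iff no inhibitory input is active
    and the number of active excitatory synapses reaches the threshold."""
    if any(inhibitory):          # assumption 4: absolute inhibition
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logical AND and OR realized by fixed thresholds (assumptions 1 and 2):
AND = lambda a, b: mcp_neuron([a, b], [], threshold=2)
OR = lambda a, b: mcp_neuron([a, b], [], threshold=1)
print(AND(1, 1), AND(1, 0), OR(1, 0), OR(0, 0))   # 1 0 1 0
```

With suitable thresholds, such units realize the propositional connectives, which is the sense in which the "neuron nets" realize a logical calculus.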


In 1949, based on neurophysiological experiments, the Canadian psychologist, Donald Olding
Hebb, proposed the later so-called “Hebb learning rule”, i.e., a time-dependent principle of behavior
of nerve cells: “When an axon of cell A is near enough to excite cell B, and repeatedly or persistently


takes part in firing it, some growth process or metabolic change takes place in one or both cells so that
A’s efficiency, as one of the cells firing B, is increased” [11] (p. 62).
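Hebb's principle is commonly formalized as a weight increment proportional to the product of pre- and postsynaptic activity; the following one-liner is a modern textbook rendering (the learning rate η is an assumption), not Hebb's own formulation:

```python
# Modern rendering of the Hebb learning rule (illustrative, not Hebb's text).
def hebb_update(w, pre, post, eta=0.1):
    """Strengthen w when cell A (pre) repeatedly takes part in firing B (post)."""
    return w + eta * pre * post

w = 0.5
for _ in range(3):           # repeated co-activation increases A's "efficiency"
    w = hebb_update(w, pre=1, post=1)
print(round(w, 2))   # 0.8
```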
In the same year, the Austrian economist, Friedrich August von Hayek, published “The Sensory
Order” [12], in which he outlined general principles of psychology. Especially, he proposed to apply
probability theory instead of symbolic logic to model the behavior of neural networks which achieve
reliable performance even when they are imperfect by nature as opposed to deterministic machines.
Rosenblatt’s theory was in the tradition of Hebb’s and Hayek’s thoughts. The approach of
statistical separability distinguishes his model from former brain models. Rosenblatt “was particularly
struck by the fact that all of the mathematically precise, logical models which had been proposed to date
were systems in which the phenomenon of distributed memory, or ‘equipotentiality’ which seemed
so characteristic of all biological systems, was either totally absent, or present only as a nonessential
artefact, due to postulated ‘repetitions’ of an otherwise self-contained functional network, which
by itself, would be logically sufficient to perform the functions of memory and recall” [13] (p. iii).
Therefore, Rosenblatt chose a “model in terms of probability theory rather than symbolic logic” [14]
(p. 388).
In his “Probabilistic Model for Visual Perception”, as his talk was entitled [15], he characterized
perception as a classification process, and in his first project report that appeared in the following
year, he wrote, “Elements of stimulation which occur most commonly together are assigned to the
same classes in a ‘sensory order’. The organization of sensory classes (colors, sounds, textures, etc.)
thus comes to reflect the organization of the physical environment from which the sensations originate”
([13], p. 8). To verify his theory, Rosenblatt promised the audience a working electronic model in
the near future, and for a start, he presented simulations running on the Weather Bureau’s IBM 704.
He fed the computer with “two cards, one with squares marked on the left side and the other with
squares on the right side”. The program differentiated between left and right after “reading” through
about 50 punched cards. “It then started registering a ‘Q’ for the left squares and ‘O’ for the right
squares” [13] (p. 8).
Rosenblatt illustrated the organization of the perceptron via such comparisons with a biological
brain, as shown in Figure 5. These illustrations compare the natural brain’s connections from the retina
to the visual area with a perceptron that connects each sensory point of the “retina” to one or more
randomly selected “A-units” in the association system. The A-units transduce the stimuli, and they
increase in value when activated (represented by the red points in Figure 5).
Their responses arrive at “R-units”, which are binary devices (i.e., “on” or “off”, and “neutral” in
the absence of any signal because the system will not deliver any output), as Figure 6 shows for a very
simple perceptron. The association system has two parts: the upper source set tends to activate the
response R = 1, and the lower one the response R = 0. From the responses, a feedback
to the source set is generated, and these signals multiply the activity rate of the A-unit that receives
them. Thus, the activity of the R-units shows the response to stimuli as a square or circle, as presented
in the environment. “At the outset, when a perceptron is first exposed to stimuli, the responses which
occur will be random, and no meaning can be assigned to them. As time goes on, however, changes
occurring in the association systems cause individual responses to become more and more specific to
such particular, well-differentiated classes of forms as squares, triangles, clouds, trees, or people” [16]
(p. 3).
Rosenblatt attached importance to the following “fundamental feature of the perceptron”:
“When an A-unit of the perceptron has been active, there is a persistent after-effect which serves
the function of a ‘memory trace’. The assumed characteristic of this memory trace is a simple one:
whenever a cell is active, it gains in ‘strength’ so that its output signals (in response to a fixed stimulus)
become stronger, or gain in frequency or probability” [16] (p. 3).


Figure 5. Organization of a biological brain and a perceptron [16] (p. 2), the picture was modified for
better readability).

Figure 6. Detailed organization of a single perceptron [16] (p. 3).


Rosenblatt presented the results of experiments in which a perceptron had to learn to discriminate
between a circle and a square with 100, 200, and 500 A-units in each source set of the association system.
In Figure 7, the broken curves indicate the probability that the correct response is given when identical
stimuli of a test figure were shown during the training period. Rosenblatt called this “the perceptron’s
capacity to recollect”. The solid curves show the probability that the appropriate response for any
member of the stimulus class picked at random will be given. Rosenblatt called this “the perceptron’s
capacity to generalize” [16] (p. 10). Figure 7 shows that both probabilities (capacities) converge in the
end to the same limit. “Thus”, concluded Rosenblatt, “in the limit it makes no difference whether the
perceptron has seen the particular stimulus before or not; it does equally well in either case” [16] (p. 4).
Clearly, probability theory is necessary to interpret the experimental results gathered with his
perceptron simulation system. “As the number of association units in the perceptron is increased,
the probabilities of correct performance approach unity”, Rosenblatt claimed, and with reference to
Figure 7, he continued, “it is clear that with an amazingly small number of units—in contrast with the
human brain’s 10¹⁰ nerve cells—the perceptron is capable of highly sophisticated activity” [16] (p. 4).

Figure 7. Learning curves for three typical perceptrons [16] (p. 6).

In 1960, on the 23rd of June, a “Perceptron Demonstration”, sponsored by the ONR (Office of
Naval Research) and the Directorate of Intelligence and Electronic Warfare, Rome Air Development
Center, took place at the Cornell Aeronautical Laboratory. After a period of successful simulation
experiments, Rosenblatt and his staff had created the experimental machine Mark I perceptron.
Four hundred photocells formed an imaginary retina in this perceptron—a simulation of the
retinal tissue of a biological eye—and over 500 other neuronlike units were linked with these photocells
by the principle of contingency, so they could supply them with impulses that came from stimuli in the
imaginary retina. The actual perceptron was formed by a third layer of artificial neurons, the processing
or response layer. The units in this layer formed a pattern associator.


In this classic perceptron, cells can be differentiated into three layers. Staying with the analogy of
biological vision, the “input layer” with its (photo) cells or “stimulus units” (S cells) corresponds to the
retinal tissue, and the middle “association layer” consists of so-called association units (A cells), which
are wired with permanent but randomly selected weights to S cells via randomly linked contacts.
Each A cell can, therefore, receive a determined input from the S layer. In this way, the input pattern
of the S layer is distributed to the A layer. The mapping of the input pattern from the S layer onto
a pattern in the A layer is considered “pre-processing”. The “output layer”, which actually makes up
the perceptron, and which is, thus, also called the “perceptron layer”, contains the pattern-processing
response units (R cells), which are linked to the A cells. R and A cells are McCulloch–Pitts neurons,
but their synapses are variable and are adapted appropriately according to the Hebb rule. When the
sensors detect a pattern, a group of neurons is activated, which prompts another neuron group to
classify the pattern, i.e., to determine the pattern set to which said pattern belongs.
A pattern is a point x = (x1 , x2 , ..., xn ) in an n-dimensional vector space, and so, it has n components.
Let us consider again just the case n = 2; a pattern x = (x1 , x2 ) then belongs to one of L
“pattern classes”, depending on the particular use case. The perceptron “learned” these
memberships of individual patterns beforehand on the basis of known classification examples it was
provided. After an appropriate training phase, it was then “shown” a new pattern, which it placed in
the proper classes based on what it already “learned”. For a classification like this, each unit, r, of the
perceptron calculated a binary output value, yr , from the input pattern, x, according to the following
equation: yr = θ (wr1 x1 + wr2 x2 ).
The weightings, wr1 and wr2 , were adapted by the unit, r, during the “training phase”, in which the
perceptron was given classification examples, i.e., pattern vectors with an indication of their respective
pattern class, Cs , such that an output value, yr = 1, occurred only if the input pattern, x, originated in
its class, Cr . If an element, r, delivered the incorrect output value, yr , then its coefficients, wr1 and wr2 ,
were modified according to the following formulae:

Δwr1 = εr · (δrs − yr ) · x1 and Δwr2 = εr · (δrs − yr ) · x2 .

In doing so, the postsynaptic activity, yr , used in the Hebb rule is replaced by the difference
between the correct output value, δrs , and the actual output value, yr . These mathematical conditions
for the perceptron were not difficult. Patterns are represented as vectors, and the similarity and
disparity of these patterns can be represented if the vector space is normed; the dissimilarity of
two patterns, v1 and v2 , can then be represented as the distance between these vectors, as in the
following definition: d(v1 , v2 ) = ||v2 − v1 ||.
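The training rule above can be run end-to-end. The sketch below is a minimal rendering of a single response unit with the update Δwr = εr (δrs − yr ) x; the bias term, learning rate, and two-dimensional sample data are assumptions for illustration, not from Rosenblatt's experiments:

```python
# Minimal sketch of one perceptron response unit trained with the
# error-correction rule; bias term and data are illustrative assumptions.
def theta(s):                       # Heaviside step used for y_r
    return 1 if s >= 0 else 0

def train_unit(samples, eps=0.5, epochs=20):
    """samples: list of ((x1, x2), delta) with delta in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), delta in samples:
            y = theta(w1 * x1 + w2 * x2 + b)
            w1 += eps * (delta - y) * x1    # Δw_r1 = ε_r (δ_rs − y_r) x_1
            w2 += eps * (delta - y) * x2    # Δw_r2 = ε_r (δ_rs − y_r) x_2
            b += eps * (delta - y)          # bias update (an added assumption)
    return w1, w2, b

# linearly separable "left vs. right" patterns
data = [((0.0, 1.0), 0), ((0.2, 0.8), 0), ((1.0, 0.1), 1), ((0.9, 0.3), 1)]
w1, w2, b = train_unit(data)
print([theta(w1 * x1 + w2 * x2 + b) for (x1, x2), _ in data])   # [0, 0, 1, 1]
```

Because the two classes are linearly separable, the error-correction rule settles on weights that classify every training pattern correctly, which is exactly the situation the convergence theorem of the next section addresses.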

5. Perceptron Convergence
In 1955, the mathematician Henry David Block came to Cornell where he started in the Department
of Mathematics; however, in 1957, he changed to the Department of Theoretical and Applied Mechanics.
He collaborated with Rosenblatt, and derived mathematical statements analyzing the perceptron’s
behavior. Concerning the convergence theorem, the mathematician, Jim Bezdek, said in an interview with the author:
“To begin, I note that Dave Block proved the first Perceptron convergence theorem, I think with Nilsson
at Stanford, and maybe Novikoff, in about 1962, you can look this up” [17] (p. 5).
Block published his proof in “a survey of the work to date” in 1962 [18]. Nils John Nilsson was
Stanford’s first Kumagai Professor of Engineering in Computer Science, and Albert Boris J. Novikoff
earned his PhD from Stanford. In 1958, he became a research mathematician at the Stanford Research
Institute (SRI). He presented a convergence proof for perceptrons at the Symposium on Mathematical
Theory of Automata at the Polytechnic Institute of Brooklyn (24–26 April 1962) [19]. Other versions of
the algorithm were published by the Russian control theorists, Mark Aronovich Aizerman, Emmanuel
M. Braverman, and Lev I. Rozonoér, at the Institute of Control Sciences of the Russian Academy of
Sciences, Moscow [20–22].


In 1965, Nilsson wrote the book “Learning Machines: Foundations of Trainable Pattern-Classifying
Systems”, in which he also described in detail the perceptron’s error correction and learning procedure.
He also proved that for separable sets, A and B, in the n-dimensional Euclidean space, the relaxation
(hill-climbing/gradient) algorithm will converge to a solution in finitely many iterations.
Judah Ben Rosen, an electrical engineer, who was head of the applied mathematics department
in the Shell Development Company (1954–1962) came as a visiting professor to Stanford’s computer
science department (1962–1964). In 1963, he wrote a technical report entitled “Pattern Separation by
Convex Programming” [23], which he later published as a journal article [24].
Coming from the already mentioned separation theorem, he showed “that the pattern separation
problem can be formulated and solved as a convex programming problem, i.e., the minimization of
a convex function subject to linear constraints” [24] (p. 123). For the n-dimensional case, he proceeded
as follows: A number l of point sets in an n-dimensional Euclidean space is to be separated by
an appropriate number of hyperplanes. The mi points in the ith set (where i = 1, ..., l) are denoted by
n-dimensional vectors, pij , j = 1, ..., mi. Then, the following matrix describes the points in the ith set:

Pi = [pi1 , pi2 , ..., pimi ].

In the simplest case, at which the Rosenblatt perceptron failed, two point sets, P1 and P2 , are to be
separated. Rosen provides this definition:

Definition 3. The point sets, P1 and P2 , are linearly separable if their convex hulls do not intersect. (The convex
hull of a set is the set of all convex combinations of its points. In other words, given points p1 , ..., pn from P,
conv(P) is the set of all sums λ1 p1 + λ2 p2 + ... + λn pn with real coefficients λi ≥ 0 and λ1 + ... + λn = 1.)

An equivalent statement is the following:


“The point sets, P1 and P2 , are linearly separable if and only if a hyperplane
H = H(z, α) = {p | p′ z = α} exists such that P1 and P2 lie on opposite sides of H.” (p′ refers to the
transpose of p.)
The orientation of the hyperplane H is, thus, specified by the n-dimensional unit vector, z, and its
distance from the origin is determined by a scalar, α. The linear separation of P1 and P2 was, therefore,
equivalent to demonstrating the existence of a solution to the following system of strict inequalities.
(Here || || denotes the Euclidean norm, and ei is the mi -dimensional vector with all components equal to 1):

p1j ′ z > α, j = 1, ..., m1 ,
p2j ′ z < α, j = 1, ..., m2 , ||z|| = 1,

or, equivalently, in matrix form,

P1 ′ z > α e1 ,
P2 ′ z < α e2 , ||z|| = 1.

Rosen came to the conclusion “that the pattern separation problem can be formulated and solved
as a convex programming problem, i.e., the minimization of a convex function subject to linear
constraints” [24] (p. 1). He considered the two linearly separable sets, P1 and P2 . The Euclidean distance, δ, between
these two sets is then indicated by the maximum value of γ, for which z and α exist such that

P1 ′ z ≥ (α + 1/2 γ) e1
P2 ′ z ≤ (α − 1/2 γ) e2 , ||z|| = 1.

The task is, therefore, to determine the value of the distance, δ, between the sets, P1 and P2 ,
formulated as the nonlinear programming problem that can find a maximum, γ, for which the above
inequalities are true. Rosen was able to reformulate it into a convex quadratic programming problem
that has exactly one solution when the points, P1 and P2 , are linearly separable. To do so, he introduced


a vector, x, and a scalar, β, for which the following applies: γ = 2/||x||, α = β/||x||, and z = x/||x||.
Maximizing γ is, thus, equivalent to minimizing the convex function, ||x||² :

σ = min x,β { (1/4) ||x||² | P1 ′ x ≥ (β + 1) e1 , P2 ′ x ≤ (β − 1) e2 }.

After introducing the (n + 1)-dimensional vectors, y = (x, β)′ and qij = (pij , −1)′ , and the (n + 1)
× mi matrices, Qi = [qi1 , qi2 , ..., qimi ], Rosen could use the standard form of convex quadratic
programming, and formulate the following theorem of linear separability:

Theorem 1. The point sets, P1 and P2 , are linearly separable if and only if the convex quadratic programming
problem

σ = min y { (1/4) ∑i=1,...,n yi ² | Q1 ′ y ≥ e1 , Q2 ′ y ≤ e2 }

has a solution. If P1 and P2 are linearly separable, then the distance, δ, between them is given by δ = 1/√σ,
and a unique vector, y0 = (x0 , β 0 )′ , achieves the minimum, σ. The separating hyperplane is given by

H(x0 , β 0 ) = {p | p′ x0 = β 0 }.
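Rosen's criterion can be checked numerically on a one-dimensional toy example; the data and the crude grid search below are illustrative assumptions standing in for a proper quadratic-programming solver:

```python
# Toy check of Rosen's formulation in one dimension: P1 = {3}, P2 = {1}.
# The constraints 3x >= beta + 1 and 1*x <= beta - 1 admit some beta
# iff 3x - 1 >= x + 1, i.e., x >= 1, so sigma = min x^2/4 = 1/4 and the
# distance delta = 1/sqrt(sigma) = 2, matching |3 - 1|.
import math

best = None
x = 0.0
while x <= 3.0:
    if 3 * x - 1 >= x + 1:          # a feasible beta exists for this x
        sigma = 0.25 * x * x
        if best is None or sigma < best:
            best = sigma
    x += 0.001
delta = 1 / math.sqrt(best)
print(round(delta, 2))   # 2.0 = Euclidean distance between {3} and {1}
```

The grid search recovers δ = 1/√σ, illustrating the theorem's relation between the minimal σ and the distance between the two sets.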

6. Fuzzy Pattern Classification


In the middle of the 1960s, Zadeh also got back to the topics of pattern classification and linear
separability of sets. In the summer of 1964, he and Richard E. Bellman, his close friend at the
RAND Corporation, planned on doing some research together. Before that, there was the trip to
Dayton, Ohio, where he was invited to talk on pattern recognition at the Wright-Patterson Air Force
Base. Here, within a short space of time, he developed his little theory of “gradual membership”
into an appropriately modified set theory: “Essentially the whole thing, let’s walk this way, it didn’t
take me more than two, three, four weeks, it was not long”, said Zadeh (in an interview with the
author on 19 June 2001, UC Berkeley). When he finally met with Bellman in Santa Monica, he had
already worked out the entire theoretical basis for his theory of fuzzy sets: “His immediate reaction
was highly encouraging and he has been my strong supporter and a source of inspiration ever since”,
wrote Zadeh (in “Autobiographical Note 1”, an undated two-page typewritten manuscript written
after 1978).
Zadeh introduced the conceptual framework of the mathematical theory of fuzzy sets in four
early papers. Most well-known is the journal article “Fuzzy Sets” [25]; however, in the same year,
the conference paper “Fuzzy Sets and Systems” appeared in a proceedings volume [26], in 1966,
“Shadows of Fuzzy Sets” was published in Russia [27], and the journal article “Abstraction and Pattern
Classification” appeared in print [4]. The latter has three official authors, Bellman, Kalaba, and Zadeh,
but it was written by Zadeh; moreover, the text of this article is the same as the text of a RAND
memorandum of October 1964 [3]. By the way, preprints of “Fuzzy Sets” and “Shadows of Fuzzy
Sets” [28] had already appeared as “reports” of the Electronic Research Laboratory, University of California,
Berkeley, in 1964 and 1965 ([29], Zadeh 1965c).
Fuzzy sets “do not constitute classes or sets in the usual mathematical sense of these terms”.
They are “imprecisely defined ‘classes’”, which “play an important role in human thinking, particularly
in the domains of pattern recognition, communication of information, and abstraction”, Zadeh wrote
in his seminal paper [25] (p. 338). A “fuzzy set” is “a class in which there may be a continuous infinity
of grades of membership, with the grade of membership of an object x in a fuzzy set A represented by


a number μA (x) in the interval [0, 1]” [26] (p. 29). He defined fuzzy sets, empty fuzzy sets, equal fuzzy
sets, the complement, and the containment of a fuzzy set. He also defined the union and intersection
of fuzzy sets as the fuzzy sets that have membership functions that are the maximum or minimum,
respectively, of their membership values. He proved that the distributivity laws and De Morgan’s laws
are valid for fuzzy sets with these definitions of union and intersection. In addition, he defined other
ways of forming combinations of fuzzy sets and relating them to one another, such as, the “algebraic
sum”, the “absolute difference”, and the “convex combination” of fuzzy sets.
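Zadeh's pointwise definitions are easily checked on a small finite universe; the membership values below are illustrative:

```python
# Zadeh's operations on membership grades, checked on a finite universe
# (illustrative membership values).
U = ["a", "b", "c"]
A = {"a": 0.2, "b": 0.7, "c": 1.0}
B = {"a": 0.5, "b": 0.4, "c": 0.9}

union = {x: max(A[x], B[x]) for x in U}            # membership of A ∪ B
intersection = {x: min(A[x], B[x]) for x in U}     # membership of A ∩ B
complement = lambda S: {x: 1 - S[x] for x in U}

# De Morgan: complement of the union equals intersection of complements
lhs = complement(union)
rhs = {x: min(complement(A)[x], complement(B)[x]) for x in U}
print(lhs == rhs)   # True
```

The same pointwise check works for the distributivity laws, since max and min distribute over each other at every point of the universe.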
Concerning pattern classification, Zadeh wrote that these “two basic operations: abstraction and
generalization appear under various guises in most of the schemes employed for classifying patterns
into a finite number of categories” [3] (p. 1). He completed his argument as follows: “Although
abstraction and generalization can be defined in terms of operations on sets of patterns, a more natural
as well as more general framework for dealing with these concepts can be constructed around the
notion of a ‘fuzzy’ set—a notion which extends the concept of membership in a set to situations in
which there are many, possibly a continuum of, grades of membership” [3] (p. 1).
After a discussion of two definitions of “convexity” for fuzzy sets and the definition of “bounded”
fuzzy sets, he defined “strictly” and “strongly convex” fuzzy sets. Finally, he proved the separation
theorem for bounded convex fuzzy sets, which was relevant to the solution of the problem of
pattern discrimination and classification that he perhaps presented at the Wright-Patterson Air Force
Base (neither a manuscript nor any other sources exist; Zadeh did not want to either confirm or
rule out this detail in the interviews with the author). At any rate, in his first text on fuzzy sets,
he claimed that the concepts and ideas of fuzzy sets “have a bearing on the problem of pattern
classification” [2] or [3] (p. 1). “For example, suppose that we are concerned with devising a test
for differentiating between handwritten letters, O and D. One approach to this problem would be
to give a set of handwritten letters, and to indicate their grades of membership in the fuzzy sets,
O and D. On performing abstraction on these samples, one obtains the estimates, μ̂O and μ̂D , of μO and
μD , respectively. Then, given a letter, x, which is not one of the given samples, one can calculate its
grades of membership in O and D, and, if O and D have no overlap, classify x in O or D” [26] (p. 30)
(see Figure 8).

Figure 8. Illustration of Zadeh’s view on pattern classification: the sign x (as Zadeh wrote it)
belongs with membership value, μO (x), to the “class” of Os and with membership value, μD (x), to the
“class” of Ds.
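The classification step Zadeh describes amounts to assigning x to the class with the larger estimated grade; the "roundness" feature and the membership functions below are hypothetical stand-ins for the estimates obtained by abstraction on the samples:

```python
# Sketch of classification by comparing estimated membership grades
# (hypothetical feature and membership functions).
def classify(mu_O, mu_D, x):
    grades = {"O": mu_O(x), "D": mu_D(x)}
    return max(grades, key=grades.get)   # pick the class with larger grade

# toy feature: roundness of the handwritten letter, in [0, 1]
mu_O = lambda roundness: roundness           # more round -> more O-like
mu_D = lambda roundness: 1.0 - roundness     # flat side  -> more D-like
print(classify(mu_O, mu_D, 0.8))   # 'O'
print(classify(mu_O, mu_D, 0.3))   # 'D'
```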

In his studies about optimality in signal discrimination and pattern classification, he was forced
to resort to a heuristic rule to find an estimation of a function f (x) with the only means of judging the


“goodness” of the estimate yielded by such a rule lying in experimentation [3] (p. 3). In the quoted
article, Zadeh regarded a pattern as a point in a universe of discourse, Ω, and f (x) as the membership
function of a category of patterns that is a (possibly fuzzy) set in Ω.
With reference to Rosen’s article, Zadeh stated and proved an “extension of the separation
theorem to convex fuzzy sets” in his seminal paper, of course, without requiring that the convex fuzzy
sets A and B be disjoint, “since the condition of disjointness is much too restrictive in the case of fuzzy
sets” [25] (p. 351). A hyperplane, H, in an Euclidean space, En , is defined by an equation, h(x) = 0,
then, h(x) ≥ 0 is true for all points x ∈ En on one side of H, and h(x) ≤ 0 is true for all points, x ∈ En ,
on the other side of H. If a fuzzy set, A, is on the one side of H, and fuzzy set, B, is on its other side,
their membership functions, fA (x) and fB (x), and a number, KH , dependent on H, fulfil the following
inequalities:
fA (x) ≤ KH on the one side and fB (x) ≤ KH on the other.

Zadeh defined MH , the infimum of all KH , and DH = 1 − MH , the “degree of separation” of A and
B by H. To find the highest possible degree of separation, we have to look for a member in the family
of all possible hypersurfaces that realizes this highest degree. In the case of hyperplanes, H, in En , Zadeh
defined the infimum of all MH by
M = InfH MH ,

and the “degree of separation of A and B” by the relationship,

D = 1 − M.

Thereupon Zadeh presented his extension of the “separation theorem” for convex fuzzy sets:

Theorem 2. Let A and B be bounded convex fuzzy sets in En , with maximal grades, MA and MB , respectively,
[MA = Supx fA (x), MB = Supx fB (x)]. Let M be the maximal grade for the intersection, A ∩ B (M = Supx Min
[fA (x), fB (x)]). Then, D = 1 − M [25] (p. 352) (see Figure 9).
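Theorem 2 can be illustrated numerically: with triangular membership functions (illustrative choices, not Zadeh's example), M is approximated by maximizing min(fA , fB ) over a grid in E¹:

```python
# Degree of separation D = 1 - M, with M = sup_x min(f_A(x), f_B(x)),
# approximated on a grid (illustrative triangular membership functions).
def tri(x, left, peak, right):
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

f_A = lambda x: tri(x, 0.0, 1.0, 2.0)
f_B = lambda x: tri(x, 1.0, 2.0, 3.0)

grid = [i * 0.01 for i in range(301)]               # covers [0, 3]
M = max(min(f_A(x), f_B(x)) for x in grid)          # maximal grade of A ∩ B
print(round(1 - M, 2))   # D = 0.5: the overlap peaks at x = 1.5
```

Here the two fuzzy sets overlap with maximal intersection grade M = 0.5, so their degree of separation is D = 1 − M = 0.5, in line with the theorem.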

Figure 9. Illustration of the separation theorem for fuzzy sets in E1 [25].

In 1962, the electrical engineer Chin-Liang Chang came from Taiwan to the US, and in 1964,
to UC Berkeley to pursue his PhD under the supervision of Zadeh. In his thesis “Fuzzy Sets and
Pattern Recognition” (See: http://www.eecs.berkeley.edu/Pubs/Dissertations/Faculty/zadeh.html),
he extended the perceptron convergence theorem to fuzzy sets, presented an algorithm for finding
a separating hyperplane, and proved its convergence in finitely many iterations under a certain condition.
A manuscript by Zadeh and Chang entitled “An Application of Fuzzy Sets in Pattern Recognition”


with a date of 19 December 1966 (see Figure 10) never appeared published in a journal, but it became
part of Chang’s PhD thesis.

Figure 10. Unpublished manuscript (excerpt) of Zadeh and Chang, 1966. (Fuzzy Archive, Rudolf Seising).

Rosenblatt heralded the perceptron as a universal machine in his publications, e.g., “For the
first time, we have a machine which is capable of having original ideas. ... As a concept, it would
seem that the perceptron has established, beyond doubt, the feasibility and principle of nonhuman
systems which may embody human cognitive functions ... The future of information processing
devices which operate on statistical, rather than logical, principles seems to be clearly indicated” [14].
“For the first time we have a machine which is capable of having original ideas”, he said in “The New
Scientist”. “As an analogue of the biological brain the perceptron . . . seems to come closer to meeting
the requirements of a functional explanation of the nervous system than any system previously
proposed” [30] (p. 1392), he continued. To the New York Times, he said, “in principle it would be
possible to build brains that could reproduce themselves on an assembly line and which would be
conscious of their existence” [31] (p. 25).
The euphoria came to an abrupt halt in 1969, however, when Marvin Minsky and Seymour Papert
completed their study of perceptron networks, and published their findings in a book [32]. The results
of the mathematical analysis to which they had subjected Rosenblatt’s perceptron were devastating:
“Artificial neuronal networks like those in Rosenblatt’s perceptron are not able to overcome many
different problems! For example, it could not discern whether the pattern presented to it represented
a single object or a number of intertwined but unrelated objects. The perceptron could not even
determine whether the number of pattern components was odd or even. Yet this should have been
a simple classification task that was known as a ‘parity problem’. What we showed came down
to the fact that a Perceptron cannot put things together that are visually nonlocal”, Minsky said to
Bernstein [33].
Specifically, in their analysis they argued, firstly, that the computation of the XOR had to be
done with multiple layers of perceptrons, and, secondly, that the learning algorithm that Rosenblatt
proposed did not work for multiple layers.
The so-called “XOR”, the either–or operator of propositional logic presents a special case of the
parity problem that, thus, cannot be solved by Rosenblatt’s perceptron. Therefore, the logical calculus
realized by this type of neuronal networks was incomplete.
The truth table (Table 1) of the logical functor, XOR, allocates the truth value “0” to the truth
values of the two statements, x1 and x2 , when their truth values agree, and the truth value “1” when
they have different truth values.


Table 1. Truth table of the logical operator XOR.

x1 x2 x1 XOR x2
0 0 0
0 1 1
1 0 1
1 1 0

x1 and x2 are components of a vector of the intermediate layer of a perceptron, so they can be
interpreted, for example, as the coding of a perception by the retina layer. So, y = x1 XOR x2 is the
truth value of the output neuron, which is calculated according to the truth table. The activity of x1
and x2 determines this value. It is a special case of the parity problem in this respect. For an even
number, i.e., when both neurons are active or both are inactive, the output is 0, while for an odd
number, where just one neuron is active, the value is 1.
To illustrate this, the four possible combinations of 0 and 1 are entered into a rectangular coordinate
system of x1 and x2 , and marked with the associated output values. In order to see that, in principle,
a perceptron cannot learn to provide the output values demanded by XOR, the sum of the weighted
input values is calculated by w1 x1 + w2 x2 .
The activity of the output depends on whether this sum is larger or smaller than the threshold
value; setting the sum equal to the threshold yields a line in the (x1 , x2 ) plane:
Θ = w1 x1 + w2 x2 , which results in: x2 = −(w1 /w2 ) x1 + Θ/w2 .
This is the equation of a straight line in which, on one side, the sum of the weighted input
values is greater than the threshold value (w1 x1 + w2 x2 > Θ) and the neuron is, thus, active (fires);
however, on the other side, the sum of the weighted input values is smaller than the threshold value
(w1 x1 + w2 x2 < Θ), and the neuron is, thus, not active (does not fire).
However, the attempt to find precisely those values for the weights, w1 and w2 , where the
associated line separates the odd number with (0, 1) and (1, 0) from the even number with (0, 0),
and (1, 1) must fail (see Figure 10). The proof is very easy to demonstrate by considering all four cases:
x1 = 0, x2 = 1: y should be 1 → w1 · 0 + w2 · 1 ≥ Θ → neuron is active!
x1 = 1, x2 = 0: y should be 1 → w1 · 1 + w2 · 0 ≥ Θ → neuron is active!
x1 = 0, x2 = 0: y should be 0 → w1 · 0 + w2 · 0 < Θ → neuron is inactive!
x1 = 1, x2 = 1: y should be 0 → w1 · 1 + w2 · 1 < Θ → neuron is inactive!
Adding the first two inequalities results in w1 + w2 ≥ 2Θ.
The fourth inequality gives w1 + w2 < Θ, so Θ > w1 + w2 ≥ 2Θ, and therefore Θ > 2Θ.
This holds only where Θ < 0, which contradicts the third inequality, w1 · 0 + w2 · 0 < Θ, i.e., Θ > 0. Q.E.D.
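The impossibility can also be corroborated by brute force; the grid of candidate weights below is an illustrative assumption (the analytic argument covers all real weights):

```python
# Brute-force check: no single threshold unit y = [w1*x1 + w2*x2 >= theta]
# reproduces XOR (weight grid is illustrative; failure holds for all reals).
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def realizes_xor(w1, w2, theta):
    return all((w1 * x1 + w2 * x2 >= theta) == bool(y)
               for (x1, x2), y in XOR.items())

grid = [i / 4 for i in range(-8, 9)]          # weights in [-2, 2], step 0.25
found = any(realizes_xor(w1, w2, t)
            for w1, w2, t in itertools.product(grid, repeat=3))
print(found)   # False: no line separates (0,1),(1,0) from (0,0),(1,1)
```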
The limits of the Rosenblatt perceptron were, thus, demonstrated, and they were very narrow,
for it was not even able to classify linearly separable patterns. In their book, Minsky and Papert
estimated that more than 100 groups of researchers were working on perceptron networks or similar
systems all over the world at that time. In their paper “Adaptive Switching Circuits”, Bernard Widrow
and Marcian Edward Hoff publicized the linear adaptive neuron model, ADALINE, an adaptive
system that was quick and precise thanks to a more advanced learning process which today is known
as the “Delta rule” [34]. In his 1958 paper “Die Lernmatrix”, German physicist, Karl Steinbuch,
introduced a simple technical realization of associative memories, the predecessor of today’s neuronal
associative memories [35]. In 1959, the paper “Pandemonium” by Oliver Selfridge was published
in which dynamic, interactive mechanisms were described that used filtering operations to classify
images by means of “significant criteria, e.g., four corners to identify a square”. He expected to develop
a system that will also recognize “other kinds of features, such as curvature, juxtaposition of singular
points, that is, their relative bearings and distances and so forth” [36] (p. 93), [37]. As early as
1955, Wilfred Kenelm Taylor, in the Department of Anatomy of London’s University College, had aimed to
construct neural analogs to study theories of learning [38].

Mathematics 2018, 6, 110

However, the publication of Minsky and Papert’s book disrupted research in artificial neural
networks for more than a decade. Because of their fundamental criticism, many of these projects were
shelved or at least modified in the years leading up to 1970. In the 15 years that followed, almost no
research grants were approved for projects in the area of artificial neuronal networks, especially not by
the US Defense Department's DARPA (Defense Advanced Research Projects Agency). The pattern
recognition and learning networks faltered on elementary questions of logic in which their competitor,
the digital computer, proved itself immensely powerful.

7. Outlook
The disruption of artificial neural networks research later became known as the “AI winter”,
but artificial neural networks were not killed by Minsky and Papert. In 1988, Seymour Papert did
wonder whether this was actually their plan: “Did Minsky and I try to kill connectionism, and how
do we feel about its resurrection? Something more complex than a plea is needed. Yes there was
some hostility in the energy behind the research reported in Perceptrons, and there is some degree of
annoyance at the way the new movement has developed; part of our drive came, as we quite plainly
acknowledged in our book, from the fact that funding and research energy were being dissipated on
what still appear to me (since the story of new, powerful network mechanisms is seriously exaggerated)
to be misleading attempts to use connectionist methods in practical applications. But most of the
motivation for Perceptrons came from more fundamental concerns many of which cut cleanly across
the division between networkers and programmers” [39] (p. 346).
Independent of artificial neural networks, fuzzy pattern classification became popular in the 1960s.
Looking to the emerging field of biomedical engineering, the Argentinian mathematician Enrique
Ruspini, a graduate student at the University of Buenos Aires, came “across, however, early literature
on numerical taxonomy (then focused on “Biological Systematics”, which was mainly concerned with
classification of biological species), a field where its seminal paper by Sokal (Robert Reuven Sokal was
an Austrian-American biostatistician and entomologist) and Sneath (Peter Henry Andrews Sneath
was a British microbiologist. He began working on numerical methods for classifying bacteria in
the late 1950s) had been published in 1963” [40,41]. In the interview, he continued, “It is interesting
to note that the field was so young that, at that point, there were not even accepted translations to
Spanish of words such as ‘pattern’ or ‘clustering’. After trying to understand and formulate the nature
of the problem (I am a mathematician after all!), it was clear to me that the stated goal of clustering
procedures (‘classify similar objects into the same class and different objects into different classes’)
could not be attained within the framework of classical set theory. By sheer accident I walked one day
into the small library of the Department of Mathematics at the School of Science. Perusing through
the new-arrivals rack I found the 1965 issue of Information and Control with Lotfi Zadeh’s seminal
paper [25]. It was clear to me and my colleagues that this was a much better framework to consider
and rigorously pose fuzzy clustering problems. Drawing also from results in the field of operations
research I was soon able to pose the clustering problem in terms of finding the optimal solution of
a continuous variable system with well-defined performance criteria and constraints.” [17] (p. 2f)
In the quoted double-interview, Jim Bezdek also looked back: “So, when I got to Cornell in
1969, the same year that the Minsky and Papert book came out, he [Henry David Block] and others
(including his best friend, Bernie Widrow, I might add), were in a funk about the apparent death of the
ANNs (artificial neural networks). Dave wanted to continue in this field, but funding agencies were
reluctant to forge ahead with NNs in the face of the damning indictment (which in hindsight
was pretty ridiculous) by Minsky and Papert. About 1970, Richard Duda sent Dave a draft of his book
with Peter Hart, the now and forever famous ‘Duda and Hart’ book on Pattern Classification and Scene
Analysis, published in 1973 [42,43]. Duda asked Dave to review it. Dave threw it in Joe Dunn’s inbox,
and from there it made its way to mine. So I read it—cover to cover—trying to find corrections, etc.
whilst simultaneously learning the material, and that’s how I entered the field of pattern recognition”.
Bezdek included his best Dave Block story: “In maybe 1971, Dave and I went over to the Cornell


Neurobiology Lab in Triphammer Woods, where we met a young enterprising neuroscientist named
Howard Moraff, who later moved to the NSF, where he is (I think still today). Howard was hooking
up various people to EEG sensor nodes on their scalps—16 sites at that time—and trying to see if there
was any information to be gleaned from the signals. We spent the day watching him, talking to him, etc.
Dave was non-committal to Howard about the promise of this enterprise, but as we left the building,
Dave turned to me and said ‘Maybe there is some information in the signals Jim, but we are about
50 years too early’”. Then, he commented this: “I have told this story many times since then (43 years
ago now), and I always end it by saying this: ‘And if Dave could see the signals today, given our
current technology, what do you think he would say now? He would say «Jim, we are about 50 years
too soon»’. So, the bottom line for me in 1971 was: don’t do NNS, but clustering and classifier design
with OTHER paradigms is ok. As it turned out, however, I was out of the frying pan of NNs, and into
the fire of Fuzzy Sets, which was in effect a (very) rapid descent into the Maelstrom of probabilistic
discontent” [17] (p. 5f). (NSF: National Science Foundation, EEG: electroencephalography)
Beginning in 1981, the psychologists James L. McClelland and David E. Rumelhart applied artificial
neural networks to explain cognitive phenomena (spoken and visual word recognition). In 1986,
this research group published the two volumes of the book “Parallel Distributed Processing:
Explorations in the Microstructure of Cognition” [44]. Already in 1982, John J. Hopfield, a physicist and
professor at Princeton and later at Caltech, had published the paper “Neural networks and physical systems
with emergent collective computational abilities” [45] on his invention of an associative neural network
(now more commonly known as the “Hopfield network”), i.e., a feedback network with a single layer
that serves as both input and output, in which each binary McCulloch–Pitts neuron is
linked to every other neuron except itself. McClelland’s research group could show that perceptrons with
more than one layer can realize the logical calculus; multilayer perceptrons were the beginning of the
new direction in AI: parallel distributed processing.
In the mid-1980s, traditional AI ran into its limitations, and with “more powerful hardware”
(e.g., parallel architectures) and “new advances made in neural modelling learning methods”
(e.g., feedforward neural networks with more than one layer, i.e., multilayer perceptrons), artificial
neural modeling awakened new interest in science, industry, and government.
In Japan, this resulted in the Sixth Generation Computing Project that started in 1986 [46], in Europe
the following year, the interdisciplinary project “Basic Research in Adaptive Intelligence and
Neurocomputing” (BRAIN) of the European Economic Community [47], and in the US, the DARPA
Neural Network Study (1987–1988) [48].
Today, among other algorithms, e.g., decision trees and random forests, artificial neural networks
are enormously successful in data mining, machine learning, and knowledge discovery in databases.

Funding: This research received no external funding.


Acknowledgments: I would like to thank James C. Bezdek, Enrique H. Ruspini, and Chin-Liang Chang for giving
interviews in the year 2014 and helpful discussions. I am very thankful to Lotfi A. Zadeh, who sadly passed away
in September 2017, for many interviews and discussions in almost 20 years of historical research work. He has
encouraged and supported me with constant interest and assistance, and he gave me the opportunity to collect
a “digital fuzzy archive” with historical sources.
Conflicts of Interest: The author declares no conflict of interest.

References
1. Rosen, C.A. Pattern Classification by Adaptive Machines. Science 1967, 156, 38–44. [CrossRef] [PubMed]
2. Bellman, R.E.; Kalaba, R.; Zadeh, L.A. Abstraction and Pattern Classification. Memorandum RM-4307-PR;
The RAND Corporation: Santa Monica, CA, USA, 1964.
3. Bellman, R.E.; Kalaba, R.; Zadeh, L.A. Abstraction and Pattern Classification. J. Math. Anal. Appl. 1966, 13,
1–7. [CrossRef]
4. Minsky, M.L. Steps toward Artificial Intelligence. Proc. IRE 1961, 49, 8–30. [CrossRef]
5. Zadeh, L.A. From Circuit Theory to System Theory. Proc. IRE 1962, 50, 856–865. [CrossRef]


6. Eidelheit, M. Zur Theorie der konvexen Mengen in linearen normierten Räumen. Studia Mathematica 1936, 6,
104–111. [CrossRef]
7. Zadeh, L.A. On the Identification Problem. IRE Trans. Circuit Theory 1956, 3, 277–281. [CrossRef]
8. Zadeh, L.A. What is optimal? IRE Trans. Inf. Theory 1958, 1.
9. Zadeh, L.A. Optimality and Non-Scalar-Valued Performance Criteria. IEEE Trans. Autom. Control 1963, 8,
59–60. [CrossRef]
10. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys.
1943, 5, 115–133. [CrossRef]
11. Hebb, D.O. The Organization of Behavior: A Neuropsychological Theory; Wiley and Sons: New York,
NY, USA, 1949.
12. Hayek, F.A. The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology; University of Chicago
Press: Chicago, IL, USA, 1952.
13. Rosenblatt, F. The Perceptron. A Theory of Statistical Separability in Cognitive Systems, (Project PARA);
Report No. VG-1196-G-1; Cornell Aeronautical Laboratory: New York, NY, USA, 1958.
14. Rosenblatt, F. The Perceptron. A Probabilistic Model for Information Storage and Organization in the Brain.
Psychol. Rev. 1958, 65, 386–408. [CrossRef] [PubMed]
15. Rosenblatt, F. A Probabilistic Model for Visual Perception. Acta Psychol. 1959, 15, 296–297. [CrossRef]
16. Rosenblatt, F. The Design of an Intelligent Automaton. Res. Trends 1958, VI, 1–7.
17. Seising, R. On the History of Fuzzy Clustering: An Interview with Jim Bezdek and Enrique Ruspini.
IEEE Syst. Man Cybern. Mag. 2015, 1, 20–48. [CrossRef]
18. Block, H.D. The Perceptron: A Model for Brain Functioning I. Rev. Mod. Phys. 1962, 34, 123–135. [CrossRef]
19. Novikoff, A. On Convergence Proofs for Perceptrons. In Proceedings of the Symposium on Mathematical Theory
of Automata; Polytechnic Institute of Brooklyn: Brooklyn, NY, USA, 1962; Volume XII, pp. 615–622.
20. Aizerman, M.A.; Braverman, E.M.; Rozonoer, L.I. Theoretical Foundations of the Potential Function Method
in Pattern Recognition Learning. Autom. Remote Control 1964, 25, 821–837.
21. Aizerman, M.A.; Braverman, E.M.; Rozonoer, L.I. The Method of Potential Function for the Problem of
Restoring the Characteristic of a Function Converter from Randomly Observed Points. Autom. Remote Control
1964, 25, 1546–1556.
22. Aizerman, M.A.; Braverman, E.M.; Rozonoer, L.I. The Probability Problem of Pattern Recognition Learning
and the Method of Potential Functions. Autom. Remote Control 1964, 25, 1175–1190.
23. Rosen, J.B. Pattern Separation by Convex Programming; Technical Report No. 30; Applied Mathematics and
Statistics Laboratories, Stanford University: Stanford, CA, USA, 1963.
24. Rosen, J.B. Pattern Separation by Convex Programming. J. Math. Anal. Appl. 1965, 10, 123–134. [CrossRef]
25. Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–353. [CrossRef]
26. Zadeh, L.A. Fuzzy Sets and Systems. In System Theory; Microwave Research Institute Symposia Series XV;
Fox, J., Ed.; Polytechnic Press: Brooklyn, NY, USA, 1965; pp. 29–37.
27. Zadeh, L.A. Shadows of Fuzzy Sets. Problemy Peredachi Informatsii (Problems of Information Transmission:
A Publication of the Academy of Sciences of the USSR); Akademija Nauk SSSR: Moscow, Russia; The Faraday
Press: New York, NY, USA, 1966; Volume 2.
28. Zadeh, L.A. Shadows of Fuzzy Sets. In Notes of System Theory; Report No. 65-14; Electronic Research
Laboratory, University of California Berkeley: Berkeley, CA, USA, 1965; Volume VII, pp. 165–170.
29. Zadeh, L.A. Fuzzy Sets; ERL Report No. 64-44; University of California at Berkeley: Berkeley, CA, USA, 1964.
30. Rival. The New Yorker. 6 December 1958, p. 44. Available online: https://www.newyorker.com/magazine/
1958/12/06/rival-2 (accessed on 21 June 2018).
31. New Navy Device learns by doing. Psychologist Shows Embryo of Computer Designed to Read and Crow
Wise. New York Times, 7 July 1958; 25.
32. Minsky, M.L.; Papert, S. Perceptrons; MIT Press: Cambridge, MA, USA, 1969.
33. Bernstein, J. Profiles: A.I. (Marvin Minsky). The New Yorker, 14 December 1981; 50–126.
34. Widrow, B.; Hoff, M.E. Adaptive Switching Circuits. In IRE WESCON Convention Record; IRE: New York,
NY, USA, 1960; Volume 4, pp. 96–104.
35. Steinbuch, K. Die Lernmatrix. Kybernetik; Springer: Berlin, Germany, 1961; Volume 1.
36. Selfridge, O.G. Pattern Recognition and Modern Computers. In Proceedings of the AFIPS ‘55 Western Joint
Computer Conference, Los Angeles, CA, USA, 1–3 March 1955; ACM: New York, NY, USA, 1955; pp. 91–93.


37. Selfridge, O.G. Pandemonium: A Paradigm for Learning. In Mechanisation of Thought Processes: Proceedings of
a Symposium Held at the National Physical Laboratory on 24th, 25th and 27th November 1958; National Physical
Laboratory, Ed.; Her Majesty’s Stationery Office: London, UK, 1959; Volume I, pp. 511–526.
38. Taylor, W.K. Electrical Simulation of Some Nervous System Functional Activities. Inf. Theory 1956, 3, 314–328.
39. Papert, S.A. One AI or many? In The Philosophy of Mind; Beakley, B., Ludlow, P., Eds.; MIT Press: Cambridge,
MA, USA, 1992.
40. Sokal, R.R.; Sneath, P.H.A. Principles of Numerical Taxonomy; Freeman: San Francisco, CA, USA, 1963.
41. Sneath, P.H.A.; Sokal, R.R. Numerical Taxonomy: The Principles and Practice of Numerical Classification;
Freeman: San Francisco, CA, USA, 1973.
42. Duda, R.O.; Hart, P.E. Pattern Classification and Scene Analysis; Wiley: Hoboken, NJ, USA, 1973.
43. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; Wiley: Hoboken, NJ, USA, 2000.
44. Rumelhart, D.E.; McClelland, J.L.; The PDP Research Group. Parallel Distributed Processing. Explorations in the
Microstructure of Cognition; MIT Press: Cambridge, MA, USA, 1986.
45. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities.
Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [CrossRef] [PubMed]
46. Gaines, B. Sixth generation computing. A conspectus of the Japanese proposals. Newsletter 1986, 95, 39–44.
47. Roman, P. The launching of BRAIN in Europe. In European Science Notes; US Office of Naval Research: London,
UK, 1987; Volume 41.
48. DARPA. Neural Network Study; AFCEA International Press: Washington, DC, USA, 1988.

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
Credibility Measure for Intuitionistic Fuzzy Variables
Mohamadtaghi Rahimi 1, *, Pranesh Kumar 1 and Gholamhossein Yari 2
1 Department of Mathematics and Statistics, University of Northern British Columbia, Prince George,
BC V2N 4Z9, Canada; [email protected]
2 Department of Mathematics, Iran University of Science and Technology, 16846-13114 Tehran, Iran;
[email protected]
* Correspondence: [email protected]; Tel.: +1-250-960-6756

Received: 7 March 2018; Accepted: 26 March 2018; Published: 2 April 2018

Abstract: Credibility measures in vague environments are to quantify the approximate chance of
occurrence of fuzzy events. This paper presents a novel definition about credibility for intuitionistic
fuzzy variables. We axiomatize this credibility measure and, to clarify it, give some examples. Based on
the notion of these concepts, we provide expected values, entropy, and general formulae for the
central moments and discuss them through examples.

Keywords: credibility measure; intuitionistic fuzzy variables; expected value; entropy

1. Introduction
Fuzzy set theory, proposed by Zadeh [1], and intuitionistic fuzzy set theory, proposed by Atanassov [2,3],
have greatly influenced the modeling of human judgment and reduced uncertainty in available information.
Entropy, an important tool for measuring the degree of uncertainty and the core of
information theory, was first introduced by Shannon [4], and Zadeh [5] was the first to
define entropy for fuzzy sets by introducing weighted Shannon entropy. However, the synthesis
of entropy and fuzzy set theory was first given by De Luca and Termini [6] using the Shannon
function. They replaced the variable in the Shannon function with the membership degrees of the elements.
Later, this definition was extended by Szmidt and Kacprzyk [7] by introducing the intuitionistic fuzzy
sets. The fuzzy and intuitionistic fuzzy entropy are now often employed in various scientific studies.
For example, Huang et al. [8] used fuzzy two dimensional entropy to develop a novel approach for the
automatic recognition of red Fuji apples in natural scenes, Yari et al. [9,10] employed it in option pricing
and portfolio optimization, Song et al. [11] used fuzzy logics in psychology while studying children’s
emotions, and Farnoosh et al. [12] proposed a method for image processing based on intuitionistic
fuzzy entropy. Additionally, many researchers have recently conceptualized fuzzy entropy from
different aspects. Some of them can be found in [13–16].
In 2002, using credibility, a new formula was presented by Liu and Liu [17] for the expected values of
fuzzy variables. With these notions, a new environment was created in the fuzzy area, both in its pure
and applied branches. Decision-making, portfolio optimization, pricing models, and supply chain
problems are some of the areas which have used these conceptions. Now, in this paper, we define a
new concept of credibility measure for intuitionistic fuzzy sets to be used in all of the mentioned areas.
After Liu and Liu [17], new concepts and properties of fuzzy credibility functions were proposed
by some researchers. For example, a sufficient and necessary condition was given by Li and Liu [18]
for credibility measures and, in 2008, an entropy measure was defined by Li and Liu [19] for discrete
and continuous fuzzy variables, based on the credibility distributions. Therefore, in the rest of the
paper, several additional concepts of fuzzy credibility functions are presented as a basis of developing
the credibility measure.

Mathematics 2018, 6, 50; doi:10.3390/math6040050



In this paper, following the introduction in Section 1 and introducing some concepts and
knowledge about credibility and fuzzy entropy measures in Section 2, based on the measure defined
by Liu and Liu [17], we present a novel definition about credibility for intuitionistic fuzzy variables in
Section 3. Section 3 also presents central moments and entropy formulation. All of these definitions
are followed by their corresponding examples. We finally discuss and conclude in Section 4.

2. Preliminaries
Suppose that A is a fuzzy subset of the universe of discourse, U. Then the possibility and necessity
measures are defined as (Zadeh, [20]):

Pos{X is A} := Sup_{u∈A} π_X(u) ∈ [0, 1],

Nec{X is A} := 1 − Sup_{u∈A^c} π_X(u),

where π_X(u) is the possibility distribution function of Π_X, a possibility distribution associated with
the variable X taking values in U.
For a fuzzy variable, ξ, with membership function μ, the credibility inversion theorem or, in other
words, the credibility of ξ ∈ β ⊂ R employed by Liu and Liu [17] is:

Cr{ξ ∈ β} = (1/2)(Pos{ξ ∈ β} + Nec{ξ ∈ β}) = (1/2)(Sup_{x∈β} μ(x) + 1 − Sup_{x∈β^c} μ(x)) (1)
Later, this formula was extended by Mandal et al. [21] to its general form as:

Cr{ξ ∈ β} = ρPos{ξ ∈ β} + (1 − ρ)Nec{ξ ∈ β}, 0≤ρ≤1 (2)
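To make Equations (1) and (2) concrete, the following sketch evaluates Cr{ξ ≥ r} for a triangular fuzzy variable; the membership function and the parameters (a, b, c) = (0, 1, 2) are illustrative assumptions, not values from the cited papers:

```python
def tri_mu(x, a, b, c):
    # Triangular membership function with peak 1 at b.
    if a <= x <= b:
        return (x - a) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def cr_geq(r, a, b, c, rho=0.5):
    # Cr{xi >= r} = rho*Pos + (1 - rho)*Nec (Eq. (2)); rho = 1/2 recovers Eq. (1).
    pos = 1.0 if r <= b else (tri_mu(r, a, b, c) if r <= c else 0.0)        # sup mu on [r, oo)
    sup_below = 0.0 if r <= a else (tri_mu(r, a, b, c) if r <= b else 1.0)  # sup mu on (-oo, r)
    return rho * pos + (1 - rho) * (1 - sup_below)

print(cr_geq(1.0, 0, 1, 2))   # 0.5: self-duality at the peak
print(cr_geq(-1.0, 0, 1, 2))  # 1.0: the event is certain
print(cr_geq(3.0, 0, 1, 2))   # 0.0: the event is impossible
```

Note how ρ = 1/2 yields Cr{ξ ≥ b} = 0.5 at the peak, the self-dual behavior that distinguishes credibility from possibility and necessity alone.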

Liu and Liu [17] also defined the expected value of a fuzzy variable by the credibility function as:

E[ξ] = ∫₀^{+∞} Cr{ξ ≥ r} dr − ∫_{−∞}^{0} Cr{ξ ≤ r} dr, (3)

Later, Li and Liu [19] formulated a definition of entropy based on the notion of credibility for
continuous distributions:
H(ξ) = ∫_{−∞}^{+∞} S(Cr{ξ = x}) dx, (4)

where the integral reduces to a sum for discrete distributions and S(t) = −t ln t − (1 − t) ln(1 − t),
0 ≤ t ≤ 1.
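Equations (3) and (4) can be checked numerically. The self-contained sketch below is illustrative (the triangular variable and all parameters are assumptions); it integrates both quantities and compares them with the closed forms known from credibility theory for a normalized triangular fuzzy variable (a, b, c), namely E[ξ] = (a + 2b + c)/4 and H(ξ) = (c − a)/2:

```python
import math

def tri_mu(x, a, b, c):
    # Triangular membership function with peak 1 at b.
    if a <= x <= b:
        return (x - a) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def cr_geq(r, a, b, c):
    # Credibility Cr{xi >= r} = (Pos + Nec)/2 for the triangular variable.
    pos = 1.0 if r <= b else (tri_mu(r, a, b, c) if r <= c else 0.0)
    sup_below = 0.0 if r <= a else (tri_mu(r, a, b, c) if r <= b else 1.0)
    return 0.5 * (pos + (1.0 - sup_below))

def expected_value(a, b, c, n=20000):
    # Eq. (3) by midpoint rule; Cr is constant outside [min(a,0), max(c,0)].
    lo, hi = min(a, 0.0), max(c, 0.0)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        r = lo + (i + 0.5) * h
        # Cr{xi <= r} = 1 - Cr{xi >= r} almost everywhere for a continuous variable.
        total += (cr_geq(r, a, b, c) if r > 0 else -(1.0 - cr_geq(r, a, b, c))) * h
    return total

def entropy(a, b, c, n=20000):
    # Eq. (4) with Cr{xi = x} = mu(x)/2 for a normalized continuous fuzzy variable.
    def S(t):
        return 0.0 if t <= 0.0 or t >= 1.0 else -t * math.log(t) - (1 - t) * math.log(1 - t)
    h = (c - a) / n
    return sum(S(0.5 * tri_mu(a + (i + 0.5) * h, a, b, c)) * h for i in range(n))

print(round(expected_value(0, 1, 2), 3))  # ≈ 1.0 = (0 + 2·1 + 2)/4
print(round(entropy(0, 1, 2), 3))         # ≈ 1.0 = (2 − 0)/2
```

The numerical values agree with the closed forms to within the discretization error of the midpoint rule.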
Further, De Luca and Termini [6] for the first time debated the fuzzy entropy measure (formulated
in the following manner) which was later extended for intuitionistic fuzzy entropy measures by Szmidt
and Kacprzyk [7].
Let H be a real-valued function H : F(X) → [0, 1]. H is an entropy measure of fuzzy sets if it satisfies
the four axiomatic requirements:

FS1: H(Ã) = 0 iff Ã is a crisp set, i.e., μ_Ã(xi) = 0 or 1 ∀ xi ∈ X.
FS2: H(Ã) = 1 iff μ_Ã(xi) = 0.5 ∀ xi ∈ X.
FS3: H(Ã) ≤ H(B̃) if Ã is less fuzzy than B̃ ∀ xi ∈ X.
FS4: H(Ã) = H(Ã^c), where Ã^c is the complement of Ã.

Here, H(Ã) = ∑_{i=1}^{n} S(μ_Ã(xi)) or H(Ã) = ∫_{−∞}^{+∞} S(μ_Ã(x)) dx, for discrete and continuous
distributions, respectively.


3. Credibility Measures in Intuitionistic Fuzzy Environment

Definition 1. Determinacy of an intuitionistic fuzzy set


Let A be an intuitionistic fuzzy subset of the universe of discourse, U; and f: A→B be a function that
changes the intuitionistic fuzzy elements u ∈ A to fuzzy elements v ∈ B. Then, the determinacy measure is
defined as follows:
Det{X is B} := Sup_{v∈B} π_X(u) ∈ [0, 1],

where v ∈ B is a fuzzy number with γ and 1 − γ as the degrees of membership and non-membership, respectively;
γ is the degree of non-membership of the corresponding value of u ∈ A, and π_X(u) is the possibility distribution
function of Π_X.

3.1. Axioms of a Possibility-Determinacy Space


The quadruplet (Θ, P(Θ), Pos, Det) is called a possibility-determinacy space of an intuitionistic
fuzzy variable if:


i) Pos{Θ} = 1,
ii) Pos{∅} = 0,
iii) 0 ≤ Pos+{A} = Pos{A} + Det{A} ≤ 1, and 0 ≤ Pos−{A} = Pos{A} ≤ 1, for A in P(Θ), (5)
iv) Pos+{∪i Ai} = supi {Pos{Ai} + Det{Ai}}, and Pos−{∪i Ai} = supi {Pos{Ai}},

where Θ is a nonempty set, P(Θ) the power set of Θ, Pos a possibility distribution from 2^U to [0, 1],
and Det the determinacy (Definition 1). It is easy to check that the above axioms reduce to the possibility
space axioms when Det{A} = 0; that is, when we have a fuzzy variable.
Possibility and necessity in the intuitionistic fuzzy environment will be denoted by duals
(Pos+ , Pos− ) and (Nec+ , Nec− ). These expressions represent the maximum and the minimum of the
possibility and necessity, respectively.

3.2. Necessity Measure of an Intuitionistic Fuzzy Set


Following the concepts of the triangular fuzzy numbers by Dubois and Prade [22], let A be an
intuitionistic fuzzy variable on a possibility-determinacy space (Θ, P (Θ), Pos, Det). Then, the necessity
measure of A is defined as follows:

Nec+{A} = 1 − Pos+{A^c}, Nec−{A} = 1 − Pos−{A^c}, ϕ = Sup_{x∈ℝ} μ(x). (6)

Example 1. Let ξ be a triangular intuitionistic fuzzy number with the following membership and non-membership
functions, μ and γ:
μ(x) = ((x − a)/(b − a)) ϕ  for a ≤ x < b;  ϕ  for x = b;  ((c − x)/(c − b)) ϕ  for b < x ≤ c;  0 otherwise,

γ(x) = (b − x + ω(x − a))/(b − a)  for a ≤ x < b;  ω  for x = b;  (x − b + ω(c − x))/(c − b)  for b < x ≤ c;  0 otherwise,

where 0 ≤ ω + ϕ ≤ 1.


Then, we have Pos and Det given as:




Pos−{X ≥ x0} = Sup_{x≥x0} μ(x) = ϕ  for a < x0 ≤ b;  ((c − x0)/(c − b)) ϕ  for b < x0 ≤ c;  0 otherwise,

Pos+{X ≥ x0} = 1  for a ≤ x0 ≤ c;  0 otherwise.

3.3. Credibility Measure in Intuitionistic Fuzzy Environment


Based on the Pos (Pos+ and Pos− ) and Nec (Nec+ and Nec− ) measures, the credibility measure
in intuitionistic fuzzy environment is given as:

Cr−{A} = ρ Pos−(A) + (1 − ρ) Nec−(A), 0 ≤ ρ ≤ 1,
i.e., Cr−{ξ ∈ B} = ρ Sup_{x∈B} μ(x) + (1 − ρ)(1 − Sup_{x∈B^c} μ(x)). (7)

detCr−{A} = ρ(Pos+{A} − Pos−{A}) + (1 − ρ)(Nec−{A} − Nec+{A}),
i.e., detCr−{ξ ∈ B} = ρ Sup_{x∈B} γ(x) + (1 − ρ)(1 − Sup_{x∈B^c} γ(x)). (8)
Here, Cr−{A} and detCr−{A} are, respectively, the fixed part and the determinacy of the
credibility measure, and ξ is an intuitionistic fuzzy variable with membership function μ and
non-membership function γ.
We can see that Cr satisfies the following conditions:
(i) Cr− {∅} = detCr− {∅} = 0,
(ii) Cr− {R} = 1,
(iii) for A ⊂ B, Cr−{A} ≤ Cr−{B} and detCr−{A} ≤ detCr−{B}, for any A, B ∈ 2^ℝ.
Thus, similar to the credibility measure defined in Liu and Liu [17] and Equation (3.2) in
Mandal et al. [21], Cr− is an intuitionistic fuzzy measure on (ℝ, 2^ℝ).

Example 2. Following Example 1, the credibility for a standard triangular intuitionistic fuzzy number ξ is
as follows:

Cr−{ξ ≥ x0} = ρϕ + (1 − ρ)(1 − ((x0 − a)/(b − a)) ϕ)  for a ≤ x0 < b;
              ρ ((c − x0)/(c − b)) ϕ + (1 − ρ)(1 − ϕ)  for b < x0 ≤ c;
              0 otherwise.

detCr−{ξ ≥ x0} = ρ(1 − ϕ) + (1 − ρ)(1 − ((x0 − a)/(b − a)) ϕ)  for a ≤ x0 < b;
                 ρ(1 − ((c − x0)/(c − b)) ϕ) + (1 − ρ)(1 − ϕ)  for b < x0 ≤ c;
                 0 otherwise.
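The closed form of Example 2 can be cross-checked against the defining Equations (6) and (7). In the following sketch the parameter values a = 0, b = 1, c = 2, ϕ = 0.6, and ρ = 0.5 are arbitrary illustrative choices, not values from the paper:

```python
PHI, A, B, C, RHO = 0.6, 0.0, 1.0, 2.0, 0.5

def mu(x):
    # Membership function of the triangular intuitionistic fuzzy number (Example 1).
    if A <= x <= B:
        return (x - A) / (B - A) * PHI
    if B < x <= C:
        return (C - x) / (C - B) * PHI
    return 0.0

def cr_minus_geq(x0):
    # Eq. (7): Cr-{xi >= x0} = rho*Pos-{xi >= x0} + (1 - rho)*Nec-{xi >= x0},
    # with Pos-{xi >= x0} = sup_{x >= x0} mu(x) and Nec- obtained via Eq. (6).
    pos = PHI if x0 <= B else (mu(x0) if x0 <= C else 0.0)
    sup_below = 0.0 if x0 <= A else (mu(x0) if x0 <= B else PHI)
    return RHO * pos + (1 - RHO) * (1 - sup_below)

def cr_minus_geq_closed(x0):
    # Closed form stated in Example 2 (x0 = b assigned to the second branch).
    if A <= x0 < B:
        return RHO * PHI + (1 - RHO) * (1 - (x0 - A) / (B - A) * PHI)
    if B <= x0 <= C:
        return RHO * (C - x0) / (C - B) * PHI + (1 - RHO) * (1 - PHI)
    return 0.0

for x0 in (0.25, 0.5, 1.0, 1.5, 1.9):
    assert abs(cr_minus_geq(x0) - cr_minus_geq_closed(x0)) < 1e-12
print("Example 2 closed form agrees with Eqs. (6)-(7) on [a, c]")
```

For instance, at x0 = 0.5 both expressions give 0.65, and at the peak x0 = b both give 0.5.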

Lemma 1. Let ξ be an intuitionistic fuzzy variable taking values in ℝ. If there exists an interval B
such that Cr−{ξ ∈ B} = ϕ or detCr−{ξ ∈ B} = ϕ + ω, then for every interval α with α ∩ B = ∅,
we have Cr−{ξ ∈ α} = 0, where ϕ and ω are, respectively, the supremum values of the membership and
non-membership functions.


Proof of Lemma 1. From Equations (7) and (8), the maximum value for Cr− and detCr− occur when
Supx∈Bc μ(x) = 0 and Supx∈Bc γ(x) = 0, respectively. Then, since α ⊂ Bc , we have Supx∈α μ(x) =
Supx∈α γ(x) = 0. Therefore, Cr− {ξ ∈ α} = detCr− {ξ ∈ α} = 0.

Credibility is a value between 0 and ϕ for the possibility function and increases to ϕ + ω when
determinacy is involved. This lemma shows that when an interval contains the highest credibility
value, no other disjoint interval can carry positive credibility; therefore, in the following
definitions, especially for entropy, we can ignore intervals having no positive possibility and
determinacy values.
To check this lemma for discrete fuzzy variables, see Li and Liu [19].

Definition 2. For the expected value and central moments of an intuitionistic fuzzy variable based on Cr− and
detCr−, the general forms for the nth moments of a real-valued continuous intuitionistic fuzzy variable about a
value c are introduced as E−[(ξ − c)^n] and detE−[(ξ − c)^n], where:
E−[ξ] = ∫₀^{+∞} Cr−{ξ ≥ r} dr − ∫_{−∞}^{0} Cr−{ξ ≤ r} dr,
detE−[ξ] = ∫₀^{+∞} Cr+{ξ ≥ r} dr − ∫_{−∞}^{0} Cr+{ξ ≤ r} dr. (9)

Here E− is the fixed value which is similar to the expected value in fuzzy variables, whereas
detE− measures the determinacy of an expected value. The expected value does not exist and is not
defined if the right-hand side of Equation (9) is ∞ − ∞.
For the central moments such as variance, skewness and kurtosis, similar to Liu and Liu [17], and
based on the defined credibility measures in Section 3.3, for each intuitionistic fuzzy variable ξ with
finite expected value, we have:
CM−[ξ, n] = E−[(ξ − E−(ξ))^n], detCM−[ξ, n] = detE−[(ξ − detE−(ξ))^n],

where CM− [ξ, n] and detCM− [ξ, n] for n = 2, 3, and 4, respectively, represent the variance, skewness,
and kurtosis.
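The first integral of Equation (9) can be evaluated numerically for the running triangular example; the parameters a = 0, b = 1, c = 2, ϕ = 0.6, and ρ = 0.5 are illustrative assumptions, and Cr−{ξ ≥ r} is taken from the closed form of Example 2 (zero outside [a, c]). For this symmetric number, E−[ξ] comes out as the peak b:

```python
PHI, A, B, C, RHO = 0.6, 0.0, 1.0, 2.0, 0.5

def cr_minus_geq(x0):
    # Closed form of Cr-{xi >= x0} from Example 2 (zero outside [a, c]).
    if A <= x0 < B:
        return RHO * PHI + (1 - RHO) * (1 - (x0 - A) / (B - A) * PHI)
    if B <= x0 <= C:
        return RHO * (C - x0) / (C - B) * PHI + (1 - RHO) * (1 - PHI)
    return 0.0

def expected_minus(n=20000):
    # E-[xi] from Eq. (9). With a = 0 the second integral vanishes, and the
    # first runs over (0, c] only, since Cr-{xi >= r} = 0 beyond c.
    h = C / n
    return sum(cr_minus_geq((i + 0.5) * h) * h for i in range(n))

print(round(expected_minus(), 3))  # ≈ 1.0, i.e., the peak b of the symmetric number
```

A short calculation confirms this: the integral over [0, 1) contributes 1/2 + ϕ/4 and the integral over [1, 2] contributes 1/2 − ϕ/4 + ϕ/2 · (1/2)·2... more simply, the two pieces sum to 1 for every ϕ, so E−[ξ] = b = 1 regardless of the chosen ϕ.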
Note: In this new Definition 2, the expected value for membership degrees is isolated from the
expected value for non-membership degrees wherein both are calculated from the credibility functions.
2 3
It means that we have a dual E− [ξ], detE− [ξ] which denotes the expected values, separately. A
linear combination of the elements of this dual can be used as the score function, if one wants to
compare some intuitionistic fuzzy variables.

Definition 3. Entropy for intuitionistic fuzzy variables


Similar to the entropy measure based on credibility functions, entropy is formulated for intuitionistic
fuzzy variables. In this definition, we again have two components, the fixed and the determinacy entropies:

H(ξ) = ∫_{x∈β} [S(Cr−{ξ = x}) + S(detCr−{ξ = x})] dx,

where according to Lemma 1, β is the smallest interval containing the positive possibilities.


Example 3. Let ξ be a triangular intuitionistic fuzzy variable with the membership and non-membership
functions introduced in Example 1. Then, the entropy is:

H(ξ) = ((c − a)/2)(2ϕ + ω).

If ξ is a trapezoidal intuitionistic fuzzy variable (a, b, c, d), then:

H(ξ) = ((d − a)/2 + (ln 2 − 1/2)(c − b))(2ϕ + ω).

4. Discussion and Conclusions


In this paper, we defined the notion of credibility in the intuitionistic fuzzy environment as an
extension of credibility for fuzzy variables, which had not been described before; by this conception,
we have created a new setting analogous to the one established for fuzzy variables in [17]. Based on
these conceptions, we presented novel definitions of the expected value, entropy, and a general formula
for the central moments of intuitionistic fuzzy variables. At each step, all the definitions and axioms
in the paper are accompanied by illustrative examples.

Acknowledgments: Authors sincerely thank reviewers for their valuable comments and suggestions which have
led to the present form of the paper. Mohamadtaghi Rahimi thanks University of Northern British Columbia for
awarding the post-doctoral fellowship and acknowledges the partial funding support from the NSERC Discovery
Development Grant of Prof. Pranesh Kumar.
Author Contributions: All the authors contributed equally in this work. They read and approved the
final manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
2. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [CrossRef]
3. Atanassov, K.T. More on intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 33, 37–45. [CrossRef]
4. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [CrossRef]
5. Zadeh, L.A. Probability measures of fuzzy events. J. Math. Anal. Appl. 1968, 23, 421–427. [CrossRef]
6. De Luca, A.; Termini, S. A definition of a non-probabilistic entropy in the setting of fuzzy sets theory.
Inf. Control 1972, 20, 301–312. [CrossRef]
7. Szmidt, E.; Kacprzyk, J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2001, 118, 467–477. [CrossRef]
8. Huang, L.; He, D.; Yang, S.X. Segmentation on Ripe Fuji Apple with Fuzzy 2D Entropy based on 2D
histogram and GA Optimization. Intell. Autom. Soft Comput. 2013, 19, 239–251. [CrossRef]
9. Rahimi, M.; Kumar, P.; Yari, G. Portfolio Selection Using Ant Colony Algorithm And Entropy Optimization.
Pak. J. Stat. 2017, 33, 441–448.
10. Yari, G.; Rahimi, M.; Moomivand, B.; Kumar, P. Credibility Based Fuzzy Entropy Measure. Aust. J. Math.
Anal. Appl. 2016, 13, 1–7.
11. Song, H.S.; Rhee, H.K.; Kim, J.H.; Lee, J.H. Reading Children’s Emotions based on the Fuzzy Inference and
Theories of Chromotherapy. Information 2016, 19, 735–742.
12. Farnoosh, R.; Rahimi, M.; Kumar, P. Removing noise in a digital image using a new entropy method based
on intuitionistic fuzzy sets. In Proceedings of the 2016 IEEE International Conference on Fuzzy Systems
(FUZZ-IEEE), Vancouver, BC, Canada, 24–29 July 2016.
13. Markechová, D. Kullback-Leibler Divergence and Mutual Information of Experiments in the Fuzzy Case.
Axioms 2017, 6, 5. [CrossRef]
14. Markechová, D.; Riečan, B. Logical Entropy of Fuzzy Dynamical Systems. Entropy 2016, 18, 157. [CrossRef]

Mathematics 2018, 6, 50

15. Markechová, D.; Riečan, B. Entropy of Fuzzy Partitions and Entropy of Fuzzy Dynamical Systems. Entropy
2016, 18, 19. [CrossRef]
16. Yari, G.; Rahimi, M.; Kumar, P. Multi-period Multi-criteria (MPMC) Valuation of American Options Based
on Entropy Optimization Principles. Iran. J. Sci. Technol. Trans. A Sci. 2017, 41, 81–86. [CrossRef]
17. Liu, B.; Liu, Y.K. Expected value of fuzzy variable and fuzzy expected value models. IEEE Trans. Fuzzy Syst.
2002, 10, 445–450.
18. Li, X.; Liu, B. A sufficient and necessary condition for credibility measures. Int. J. Uncertain. Fuzz.
Knowl.-Based Syst. 2006, 14, 527–535. [CrossRef]
19. Li, P.; Liu, B. Entropy of Credibility Distributions for Fuzzy Variables. IEEE Trans. Fuzzy Syst. 2008, 16,
123–129.
20. Zadeh, L.A. Fuzzy Sets as the basis for a theory of possibility. Fuzzy Sets Syst. 1978, 1, 3–28. [CrossRef]
21. Mandal, S.; Maitya, K.; Mondal, S.; Maiti, M. Optimal production inventory policy for defective items with
fuzzy time period. Appl. Math. Model. 2010, 34, 810–822. [CrossRef]
22. Dubois, D.; Prade, H. Fuzzy Sets and Systems: Theory and Applications; Academic Press:
New York, NY, USA, 1980.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
Certain Algorithms for Modeling Uncertain Data
Using Fuzzy Tensor Product Bézier Surfaces
Musavarah Sarwar and Muhammad Akram *
Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan;
[email protected]
* Correspondence: [email protected]; Tel.: +92-42-99231241

Received: 31 January 2018; Accepted: 7 March 2018; Published: 9 March 2018

Abstract: Real data and measures are usually uncertain and cannot be satisfactorily described by
accurate real numbers. The imprecision and vagueness should be modeled and represented in
data using the concept of fuzzy numbers. Fuzzy splines are proposed as an integrated approach
to uncertainty in mathematical interpolation models. In the context of surface modeling, fuzzy
tensor product Bézier surfaces are suitable for representing and simplifying both crisp and imprecise
surface data with fuzzy numbers. The framework of this research paper is concerned with various
properties of fuzzy tensor product surface patches by means of fuzzy numbers including fuzzy
parametric curves, affine invariance, fuzzy tangents, convex hull and fuzzy iso-parametric curves.
The fuzzification and defuzzification processes are applied to obtain the crisp Bézier curves and
surfaces from fuzzy data points. The degree elevation and de Casteljau’s algorithms for fuzzy Bézier
curves and fuzzy tensor product Bézier surfaces are studied in detail with numerical examples.

Keywords: fuzzy tensor product Bézier surface; fuzzy parametric curves; fuzzy iso-parametric curves;
degree elevation algorithm; De Casteljau’s algorithm

1. Introduction
Data points are usually collected using physical objects to capture their geometric entity and
representation in a digital framework, i.e., CAGD and CAD systems. Information is collected by using
particular devices such as scanning tools. However, the recorded data do not significantly describe
error-free data. This is due to the fact that the errors are produced by limitations of the devices,
human errors and environmental factors, etc. Generally, these sorts of data which have uncertain
characteristics cannot be used directly to create digitized models. In order to make uncertain data
valuable for analysis and modeling, this kind of data have to be characterized in a different approach
to handle uncertainties of the measurements.
In curve designing and geometric modeling, control points play a major role in the process of
controlling the shape of curves and surfaces. The issue of uncertain shape of surfaces and curves
can be handled by using left, crisp, right control points through fuzzy numbers called fuzzy control
points [1].
Natural spline, B-spline and Bernstein Bézier functions can be used to produce geometric models
with data points [2–4]. The surfaces and curves produced with these functions are the standard
approaches to represent a set of given data points. Tensor product Bézier surfaces, also known as
Bernstein Bézier surfaces, can be determined by a collection of vertices called control points, which
are joined in a sequence to form a closed or open control grid. The shape of the surface changes with
the control grid in a smooth fashion. However, there is a major problem in shape designing due to
uncertainty, imprecision and vagueness of the real data. The designers and experts are unable to
choose an appropriate set of control points due to errors and uncertainties. One of the methods used
to handle vagueness and uncertainty issues is the theory of fuzzy sets introduced in [5].

Mathematics 2018, 6, 42; doi:10.3390/math6030042 www.mdpi.com/journal/mathematics



The problem of interpolation was first proposed by Zadeh in [5], stating that if for each of r + 1
distinct real numbers y_0, y_1, . . . , y_r a fuzzy value is given instead of a crisp value, is it possible to
construct a smooth curve to fit this fuzzy data of r + 1 points? To solve Zadeh's proposed problem,
the Lagrange interpolation polynomial for fuzzy data was first investigated by Lowen [6]. The problem
of interpolating fuzzy data using fuzzy splines was also considered by Kaleva [7]. By using spline
functions of odd degree, the interpolation of fuzzy data was considered in [8] with complete splines,
in [9] with natural splines, and in [10] with fuzzy splines. The concept of a fuzzy tensor product Bézier
surface was introduced in [1]. The construction of the fuzzy B-spline model, modeling of uncertain
data based on B-spline model curve are discussed in [11–13].
In this research paper, we study various properties of fuzzy tensor product surfaces by means
of fuzzy numbers, including fuzzy parametric curves, affine invariance, fuzzy tangents, the convex hull
property and fuzzy iso-parametric curves. We also develop de Casteljau's and degree elevation
algorithms for fuzzy Bézier curves and fuzzy tensor product surfaces with numerical examples. We
apply the process of fuzzification to obtain the fuzzy interval of fuzzy data points in which the crisp
solution exists. This is followed by the defuzzification process to construct crisp Bézier curves and
surfaces, which focuses on the defuzzification of fuzzy data points.
We used standard definitions and terminologies in this paper. For other notations, terminologies
and applications not mentioned in the paper, the readers are referred to [14–21].

Definition 1 ([5,20]). A fuzzy set λ on a non-empty universe Y is a mapping λ : Y → [0, 1]. A fuzzy relation
on Y is a fuzzy subset ν in Y × Y.

Definition 2 ([21]). A triangular fuzzy number is a fuzzy set on R, denoted by the symbol A = (δ, β, γ),
δ < β < γ, δ, β, γ ∈ R, with membership function defined as

λ_A(y) = (y − δ)/(β − δ) for y ∈ [δ, β];   (γ − y)/(γ − β) for y ∈ [β, γ];   0 otherwise.

The α-cut operation, 0 < α ≤ 1, of a triangular fuzzy number is defined as A_α = [(β − δ)α + δ,
−(γ − β)α + γ]. For any two triangular fuzzy numbers A = (δ_1, β_1, γ_1) and B = (δ_2, β_2, γ_2), the sum
A + B = (δ_1 + δ_2, β_1 + β_2, γ_1 + γ_2) is a triangular fuzzy number with membership function defined as
μ_{A+B}(z) = max_{z = x + y} min{μ_A(x), μ_B(y)}. The multiplication of A = (δ, β, γ) by a scalar ω ≠ 0 is a triangular
fuzzy number ωA whose membership function is μ_{ωA}(z) = max_{y : ωy = z} μ_A(y).
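These operations translate directly into code. Below is a minimal Python sketch of a triangular fuzzy number with the α-cut, addition and positive-scalar multiplication defined above (the class and method names are ours, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Triangular:
    """Triangular fuzzy number A = (delta, beta, gamma) with delta < beta < gamma."""
    delta: float   # left endpoint
    beta: float    # peak (membership value 1)
    gamma: float   # right endpoint

    def alpha_cut(self, alpha):
        # A_alpha = [(beta - delta)*alpha + delta, -(gamma - beta)*alpha + gamma]
        return ((self.beta - self.delta) * alpha + self.delta,
                -(self.gamma - self.beta) * alpha + self.gamma)

    def __add__(self, other):
        # componentwise sum of two triangular fuzzy numbers
        return Triangular(self.delta + other.delta,
                          self.beta + other.beta,
                          self.gamma + other.gamma)

    def scale(self, w):
        # multiplication by a scalar w > 0; for w < 0 the endpoints would swap
        return Triangular(w * self.delta, w * self.beta, w * self.gamma)
```

For A = (1, 2, 4), the 0.5-cut returned is [1.5, 3], exactly as the α-cut formula above gives.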

Definition 3 ([1]). Let Y be a space and P be a subset of r + 1 control points in Y. P is said to be a
collection of fuzzy control points in Y if there exists μ_P : P → [0, 1] such that μ_P(p_k) = 1, in which
P = {(p_k, μ_P(p_k)) | p_k ∈ Y}. Therefore,

μ_P(p_k) = 0 if p_k ∉ Y;   c ∈ (0, 1) if p_k partially belongs to Y;   1 if p_k ∈ Y,

with μ_P(p_k) = (μ̲_P(p_k), μ_P(p_k), μ̄_P(p_k)), where μ̲_P(p_k) and μ̄_P(p_k) are the left-grade and right-grade
membership values. Fuzzy control points can be written as p_k = (p̲_k, p_k, p̄_k), where p̲_k, p_k and p̄_k
are the left fuzzy control points, crisp control points and right fuzzy control points, respectively.


Definition 4 ([1]). Consider a collection of r + 1 distinct fuzzy control points p*_k, 0 ≤ k ≤ r; then a fuzzy
Bernstein Bézier (B.B) curve is defined as

P*(u) = ∑_{k=0}^{r} B_k^r(u) p*_k

where B_k^r(u) = C(r, k) u^k (1 − u)^{r−k} is the kth Bernstein polynomial of degree r.

Definition 5 ([1]). Consider a collection of (r + 1) × (q + 1) fuzzy control points p_{k,j}, 0 ≤ k ≤ r, 0 ≤ j ≤ q;
then a fuzzy Bézier surface is defined as

P(u, v) = ∑_{k=0}^{r} ∑_{j=0}^{q} B_k^r(u) B_j^q(v) p_{k,j},   u, v ∈ [0, 1].
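Since a fuzzy Bézier surface is evaluated componentwise (the left, crisp and right control grids each define an ordinary tensor product surface), Definition 5 can be sketched directly with the Bernstein basis. This is our own illustrative code, not from the paper:

```python
from math import comb

def bernstein(n, k, t):
    # B_k^n(t) = C(n, k) * t^k * (1 - t)^(n - k)
    return comb(n, k) * t**k * (1 - t)**(n - k)

def bezier_surface_point(grid, u, v):
    """Evaluate a tensor product Bezier surface at (u, v).

    grid is an (r+1) x (q+1) list of control points (coordinate tuples);
    for a fuzzy surface, call this once each for the left, crisp and right grids.
    """
    r, q = len(grid) - 1, len(grid[0]) - 1
    dim = len(grid[0][0])
    out = [0.0] * dim
    for k in range(r + 1):
        for j in range(q + 1):
            w = bernstein(r, k, u) * bernstein(q, j, v)
            for d in range(dim):
                out[d] += w * grid[k][j][d]
    return tuple(out)
```

At (u, v) = (0, 0) this returns the corner control point of the grid, in line with the interpolation property discussed in Section 2.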

2. Fuzzy Tensor Product Bézier Surfaces

Consider a fuzzy B.B curve,

P*(u) = ∑_{k=0}^{r} B_k^r(u) p*_k   (1)

If we define two operators on fuzzy control points, the shift operator E p*_k = p*_{k+1} and the identity
operator I p*_k = p*_k, then Equation (1) can be written as P*(u) = [uE + (1 − u)I]^r p*_0, u ∈ [0, 1]. This is
called the symbolic representation of the fuzzy B.B curve. For u ∈ [0, 1], a fuzzy straight line can be
defined as

L(u) = (1 − u) p_0 + u p_1

where p_k = (p̲_k, p_k, p̄_k) are fuzzy control points. Consider two fuzzy B.B polynomials

P*(u) = ∑_{k=0}^{r} B_k^r(u) b*_k,   P*(v) = ∑_{j=0}^{q} B_j^q(v) a*_j

where b*_k, 0 ≤ k ≤ r, and a*_j, 0 ≤ j ≤ q, are fuzzy control points. The fuzzy tensor product surface or fuzzy
Bernstein Bézier (B.B) surface can be generated from P*(u) and P*(v) as

P(u, v) = ∑_{k=0}^{r} ∑_{j=0}^{q} B_k^r(u) B_j^q(v) p_{k,j},   u, v ∈ [0, 1]

where p_{k,j} = (p̲_{k,j}, p_{k,j}, p̄_{k,j}) are fuzzy control points. For any fuzzy B.B surface, r and q are the degrees
of the corresponding fuzzy B.B curves. We can say that P(u, v) is a fuzzy B.B surface of degree r × q.
If r = q = 3, the fuzzy B.B surface is known as a fuzzy cubic by cubic patch. Likewise, the case r = q = 2 is
called a fuzzy quadratic by quadratic patch. Also, the (r + 1) × (q + 1) fuzzy control points are organized
into r + 1 rows and q + 1 columns. A fuzzy B.B surface of degree 2 × 2, with the fuzzy control points in
Table 1, is shown in Figure 1. The fuzzy control points joined by dashed lines form the fuzzy control
grid of the fuzzy surface. Each column and row of fuzzy control points defines a fuzzy B.B curve.
The fuzzy B.B curve defined by the fuzzy control points p_{k,j}, 0 ≤ j ≤ q, is called the kth fuzzy u-curve,
and the fuzzy B.B curve defined by p_{k,j}, 0 ≤ k ≤ r, is the jth fuzzy v-curve. Consequently, there are r + 1
fuzzy u-curves and q + 1 fuzzy v-curves. The fuzzy u-curves of Figure 1 are
shown in Figure 2: the fuzzy u-curve with the 0th row of fuzzy control points is shown with red lines,
the 1st row of fuzzy control points is shown in blue and the 2nd row of fuzzy control points is shown
in green.


Table 1. Fuzzy control points.

 
p_{k,j}   p̲_{k,j} (left)   p_{k,j} (crisp)   p̄_{k,j} (right)
p0,0 (0.5, 4, −0.5) (1, 4, 0) (1.5, 4, 0.5)
p0,1 (2.5, −0.5, 0.5) (3, 4, 1) (3.5, 4, 1.5)
p0,2 (4.5, 0, −0.5) (5, 4, 0) (5.5, 4, 0.5)
p1,0 (−0.5, 2, −0.5) (0, 2, 0) (0.5, 2, 0.5)
p1,1 (3, 1.5, 1.5) (3, 2, 1) (3, 2.5, 1.5)
p1,2 (4.5, 2.5, 0.5) (5, 2, 1) (5.5, 1.5, 1.5)
p2,0 (0.5, 0, −0.5) (1, 0, 0) (1.5, 0, 0.5)
p2,1 (2.5, −0.5, 0.5) (3, 0, 1) (3.5, 0.5, 1.5)
p2,2 (4.5, 0, −0.5) (5, 0, 0) (5.5, 0, 0.5)

Figure 1. Fuzzy quadratic by quadratic patch.

Figure 2. Fuzzy u-curves.

In a fuzzy B.B surface P(u, v), B_k^r(u) and B_j^q(v) are basis functions of degree r and q, respectively.
There are four fuzzy boundary curves of P(u, v):

P(u, 0) = ∑_{k=0}^{r} B_k^r(u) p_{k,0},   P(u, 1) = ∑_{k=0}^{r} B_k^r(u) p_{k,q},   u ∈ [0, 1]
P(0, v) = ∑_{j=0}^{q} B_j^q(v) p_{0,j},   P(1, v) = ∑_{j=0}^{q} B_j^q(v) p_{r,j},   v ∈ [0, 1].


We now present some properties of fuzzy Bézier surfaces.

1. As P(0, 0) = p_{0,0}, P(1, 0) = p_{r,0}, P(0, 1) = p_{0,q} and P(1, 1) = p_{r,q}, P(u, v) interpolates the four
corner fuzzy control points.
2. B_k^r(u) and B_j^q(v) are crisp basis functions for all 0 ≤ k ≤ r, 0 ≤ j ≤ q, u, v ∈ [0, 1]; therefore, they
are non-negative and ∑_{k=0}^{r} ∑_{j=0}^{q} B_k^r(u) B_j^q(v) = 1.
3. Let f : X̃ → Ỹ be an affine transformation, where X̃ and Ỹ are sets of triangular fuzzy numbers
and f(y) = By + b̂, where the elements of B are triangular fuzzy numbers and b̂ is a 2 × 1 vector
of triangular fuzzy numbers. The fuzzy B.B surface satisfies the affine invariance property:

f(P(u, v)) = B(∑_{k=0}^{r} ∑_{j=0}^{q} B_k^r(u) B_j^q(v) p_{k,j}) + b̂
           = ∑_{k=0}^{r} ∑_{j=0}^{q} B_k^r(u) B_j^q(v) B(p_{k,j}) + ∑_{k=0}^{r} ∑_{j=0}^{q} B_k^r(u) B_j^q(v) b̂
           = ∑_{k=0}^{r} ∑_{j=0}^{q} B_k^r(u) B_j^q(v) f(p_{k,j}),

where the second equality uses the partition of unity from property 2.

4. As P(u, v) is a linear combination of the fuzzy control points with non-negative coefficients whose
sum is one, the fuzzy B.B surface lies in the convex hull defined by the fuzzy control mesh.
5. As P*'(u) = r[uE + (1 − u)I]^{r−1}(p*_1 − p*_0), the fuzzy tangents at the end points of a fuzzy
Bézier curve can be drawn using a pair of fuzzy control points: for u = 0, P*'(0) = r(p*_1 − p*_0), and
for u = 1, P*'(1) = r(p*_r − p*_{r−1}).
6. At every point of a fuzzy Bézier surface, we have two fuzzy tangent directions, ∂P(u, v)/∂u and ∂P(u, v)/∂v.

For any fuzzy B.B surface, if we fix one parameter, say u = a, then P(u, v) becomes

P(a, v) = ∑_{k=0}^{r} ∑_{j=0}^{q} B_k^r(a) B_j^q(v) p_{k,j} = ∑_{j=0}^{q} B_j^q(v) P*_j(a)

where P*_j(a) = ∑_{k=0}^{r} B_k^r(a) p_{k,j}. P(a, v) is known as a fuzzy u iso-parametric curve. A fuzzy iso-parametric
curve on any fuzzy B.B surface can be obtained by fixing one parameter as a constant. A fuzzy B.B surface
can be considered a family of fuzzy iso-parametric curves, and these fuzzy iso-parametric curves
can be studied in terms of fuzzy control curves. Figure 2 represents the fuzzy u iso-parametric curves
of Figure 1. For any value of a, the P*_j(a) define the fuzzy control point positions for the fuzzy iso-parametric
curve P(a, v).
Clearly, four of the fuzzy iso-parametric curves are the fuzzy boundary curves P(u, 1), P(u, 0), P(1, v)
and P(0, v). These fuzzy boundary curves are defined by rows and columns of fuzzy control points
and are fuzzy control curves. For example, in Figure 1, P(0, v), shown in red, is a fuzzy B.B curve
of degree 2 with fuzzy control points p_{0,0}, p_{0,1}, p_{0,2}. Similarly, P(u, 1), shown in green, is a fuzzy B.B
curve with fuzzy control points p_{0,2}, p_{1,2}, p_{2,2}.
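The affine invariance of property 3 is easy to check numerically on the crisp component: mapping a curve point gives the same result as evaluating the curve of the mapped control points. The affine map below is a hypothetical example of ours, not from the paper:

```python
from math import comb

def bezier_point(pts, u):
    # Bernstein-form evaluation of a planar Bezier curve
    r = len(pts) - 1
    b = [comb(r, k) * u**k * (1 - u)**(r - k) for k in range(r + 1)]
    return (sum(w * p[0] for w, p in zip(b, pts)),
            sum(w * p[1] for w, p in zip(b, pts)))

def affine(p):
    # hypothetical affine map f(p) = Bp + b with B = [[2, 0], [0, 1]], b = (1, -1)
    return (2 * p[0] + 1, p[1] - 1)

pts = [(0, 0), (3, 5), (6, 0)]                     # an arbitrary crisp quadratic
lhs = affine(bezier_point(pts, 0.3))               # map the curve point
rhs = bezier_point([affine(p) for p in pts], 0.3)  # curve of mapped controls
# lhs and rhs agree (up to floating point), illustrating property 3
```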
We now describe and design de Casteljau's algorithm to find any point on a fuzzy B.B curve.


Algorithm 1. De Casteljau's algorithm for fuzzy B.B curves

Consider a fuzzy B.B curve of degree r,

P*(u) = ∑_{k=0}^{r} B_k^r(u) p*_k.

Using the symbolic representation, P*(u) can also be expressed as

P*(u) = ∑_{k=0}^{r−1} B_k^{r−1}(u)[(1 − u) p*_k + u p*_{k+1}] = ∑_{k=0}^{r−1} B_k^{r−1}(u) p*_k^(1)(u).   (2)

Denote p*_k^(0)(u) = p*_k and p*_k^(i)(u) = (1 − u) p*_k^(i−1)(u) + u p*_{k+1}^(i−1)(u), 1 ≤ i ≤ r,
0 ≤ k ≤ r − i. Equation (2) can then be expressed as P*(u) = ∑_{k=0}^{r−i} B_k^{r−i}(u) p*_k^(i)(u), where
p*_k^(i)(u) = (p̲_k^(i)(u), p_k^(i)(u), p̄_k^(i)(u)) and

p̲_k^(i)(u) = (1 − u) p̲_k^(i−1)(u) + u p̲_{k+1}^(i−1)(u)
p_k^(i)(u) = (1 − u) p_k^(i−1)(u) + u p_{k+1}^(i−1)(u)
p̄_k^(i)(u) = (1 − u) p̄_k^(i−1)(u) + u p̄_{k+1}^(i−1)(u).

For i = r, P*(u) = p*_0^(r)(u) = (p̲_0^(r)(u), p_0^(r)(u), p̄_0^(r)(u)).

Example 1. Consider the fuzzy cubic B.B curve, given in Figure 3, having the fuzzy control points shown in
Table 2. We now find P*(1/2) using de Casteljau's algorithm. Clearly, P*(1/2) = (p̲_0^(3)(1/2), p_0^(3)(1/2), p̄_0^(3)(1/2)), where

p̲_0^(3)(1/2) = (1/2) p̲_0^(2)(1/2) + (1/2) p̲_1^(2)(1/2)
             = (1/2)((1/2) p̲_0^(1)(1/2) + (1/2) p̲_1^(1)(1/2)) + (1/2)((1/2) p̲_1^(1)(1/2) + (1/2) p̲_2^(1)(1/2))
             = (1/4)((1/2) p̲_0 + (1/2) p̲_1) + (1/2)((1/2) p̲_1 + (1/2) p̲_2) + (1/4)((1/2) p̲_2 + (1/2) p̲_3)
             = (1/8) p̲_0 + (3/8) p̲_1 + (3/8) p̲_2 + (1/8) p̲_3 = (1.4, 0).

Similarly, p_0^(3)(1/2) = (1.5, 0) and p̄_0^(3)(1/2) = (1.6, 0); therefore, P*(1/2) = ((1.4, 0), (1.5, 0), (1.6, 0)).

Table 2. Fuzzy control points.

 
p_k   p̲_k (left)   p_k (crisp)   p̄_k (right)
p0 (−0.1, 0) (0, 0) (0.1, 0)
p1 (0.9, 2) (1, 2) (1.1, 2)
p2 (1.9, −2) (2, −2) (2.1, −2)
p3 (2.9, 0) (3, 0) (3.1, 0)
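Algorithm 1 amounts to running the ordinary de Casteljau recursion on the left, crisp and right control points separately. The sketch below (our own code, using the Table 2 data) reproduces the value found in Example 1:

```python
def de_casteljau(pts, u):
    """Repeated linear interpolation of a list of 2D points at parameter u."""
    while len(pts) > 1:
        pts = [tuple((1 - u) * a + u * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# left, crisp and right control points of Table 2, processed separately
left  = [(-0.1, 0), (0.9, 2), (1.9, -2), (2.9, 0)]
crisp = [(0.0, 0), (1.0, 2), (2.0, -2), (3.0, 0)]
right = [(0.1, 0), (1.1, 2), (2.1, -2), (3.1, 0)]
point = tuple(de_casteljau(c, 0.5) for c in (left, crisp, right))
# point is, up to rounding, ((1.4, 0), (1.5, 0), (1.6, 0)) as in Example 1
```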


Algorithm 2. De Casteljau's algorithm for fuzzy B.B surfaces

Algorithm 1 can be extended to fuzzy B.B surfaces. De Casteljau's algorithm can be applied several
times to find P(u, v) for a particular (u, v). It is based on fuzzy iso-parametric curves. Consider the
equation of a fuzzy Bézier surface,

P(u, v) = ∑_{k=0}^{r} B_k^r(u) (∑_{j=0}^{q} B_j^q(v) p_{k,j}) = ∑_{k=0}^{r} B_k^r(u) l_k(v).

This clearly shows that P(u, v) can be calculated from the r + 1 fuzzy control points l_0(v), l_1(v), . . . , l_r(v).
The procedure can be illustrated as:

1. l_0(v) is a fuzzy control point on the fuzzy iso-parametric curve defined by the row of fuzzy control
points p_{0,0}, p_{0,1}, . . . , p_{0,q}. Therefore, for the fuzzy iso-parametric curve on the first row, Algorithm 1
can be applied to compute l_0(v). Repeat this process for all other fuzzy iso-parametric curves.
2. After r + 1 applications of de Casteljau's algorithm, we obtain l_0(v), l_1(v), . . . , l_r(v).
3. At the end, apply de Casteljau's algorithm to the r + 1 fuzzy control points l_0(v), l_1(v), . . . , l_r(v) with
the given u to compute P(u, v).

Example 2. In this example, we illustrate the process of Algorithm 2 for the fuzzy quadratic by quadratic surface
shown in Figure 1. We now calculate the value of P(u, v) for u = v = 1/2.

Step 1: For k = 0, we compute l_0(1/2) = (l̲_0(1/2), l_0(1/2), l̄_0(1/2)). The fuzzy control points on the first row are p_{0,0},
p_{0,1}, p_{0,2}. Applying Algorithm 1,

l̲_0(1/2) = p̲_{0,0}^(2)(1/2) = (1/2) p̲_{0,0}^(1)(1/2) + (1/2) p̲_{0,1}^(1)(1/2) = (1/4) p̲_{0,0} + (1/2) p̲_{0,1} + (1/4) p̲_{0,2} = (2.5, 0.75, 0)

Similarly, l_0(1/2) = (3, 4, 0.5) and l̄_0(1/2) = (3.5, 4.25, 1).
Step 2: Applying Algorithm 1 on all fuzzy iso-parametric curves, we obtain the three fuzzy control points shown
in Table 3.
Step 3: Applying de Casteljau's algorithm once more with u = 1/2, we obtain the following expression:

P(1/2, 1/2) = (1/4) l_0(1/2) + (1/2) l_1(1/2) + (1/4) l_2(1/2) = ((2.5, 0.8125, 0), (2.875, 2, 0.625), (3.25, 2.1875, 1.125)).

Table 3. Fuzzy control points.

Values on the Fuzzy Iso-Parametric Curves for v = 1/2
l_0(1/2)   ((2.5, 0.75, 0), (3, 4, 0.5), (3.5, 4.25, 1))
l_1(1/2)   ((2.5, 1.375, 0), (2.75, 2, 0.75), (3, 2.125, 1.25))
l_2(1/2)   ((2.5, −0.25, 0), (3, 0, 0.5), (3.5, 0.25, 1))
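The crisp (middle) component of Example 2 can be verified independently: run de Casteljau along each row at v = 1/2 to obtain the l_k, then once more in u. A short sketch of ours, using the crisp control points of Table 1:

```python
def de_casteljau(pts, t):
    # repeated linear interpolation at parameter t
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def surface_point(grid, u, v):
    # the rows give l_0(v), ..., l_r(v); then evaluate that column of points at u
    return de_casteljau([de_casteljau(row, v) for row in grid], u)

crisp = [[(1, 4, 0), (3, 4, 1), (5, 4, 0)],   # crisp points of Table 1, row k = 0
         [(0, 2, 0), (3, 2, 1), (5, 2, 1)],   # row k = 1
         [(1, 0, 0), (3, 0, 1), (5, 0, 0)]]   # row k = 2
p = surface_point(crisp, 0.5, 0.5)
# p == (2.875, 2.0, 0.625), the middle triple of P(1/2, 1/2) in Example 2
```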

Upon defining the fuzzy Bézier surface model, the next step is the defuzzification process.
This procedure can be applied to obtain the result as a single value. For defuzzification, we
use the α-cut operation on the fuzzy control points in the definition of the fuzzy Bézier surface. This is called
the fuzzification process, and is defined as follows.
Fuzzification process [11]:
If {p_{k,j} | 0 ≤ k ≤ r, 0 ≤ j ≤ q} is the set of fuzzy control points, then p_{k,j}^α is the α-cut of p_{k,j}
and is defined in Equation (3):


 
p_{k,j}^α = (p̲_{k,j}^α, p_{k,j}, p̄_{k,j}^α)
         = ([(p_{k,j} − p̲_{k,j})α + p̲_{k,j}], p_{k,j}, [−(p̄_{k,j} − p_{k,j})α + p̄_{k,j}])   (3)

After fuzzification, the next procedure is the defuzzification of the fuzzy control points to obtain the
crisp solution, which is described below.
Defuzzification process [11]:
The defuzzification of a fuzzy control point p_{k,j}^α is a crisp control point (p_{k,j})_α, calculated in
Equation (4):

(p_{k,j})_α = (1/3){p̲_{k,j}^α + p_{k,j} + p̄_{k,j}^α}   (4)

The fuzzification and defuzzification processes are illustrated in Figures 4 and 5. The fuzzification
process is applied by means of the 0.5-cut operation, and a crisp Bézier surface is obtained by applying the
defuzzification process.
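Equations (3) and (4) act on each coordinate of a fuzzy control point independently. A minimal sketch (the function names are ours), assuming triangular fuzzy coordinates as in Definition 2:

```python
def fuzzify(left, crisp, right, alpha):
    # alpha-cut of a triangular fuzzy coordinate, as in Equation (3)
    lo = (crisp - left) * alpha + left
    hi = -(right - crisp) * alpha + right
    return lo, crisp, hi

def defuzzify(lo, crisp, hi):
    # crisp value as the mean of the alpha-cut triple, as in Equation (4)
    return (lo + crisp + hi) / 3
```

For a coordinate with left value 1, crisp value 2 and right value 4, the 0.5-cut is (1.5, 2, 3), which defuzzifies to 6.5/3 ≈ 2.17.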

Figure 3. Fuzzy cubic B.B curve.

Figure 4. Fuzzification of Figure 1.


Figure 5. Defuzzification of Figure 4.

Degree Elevation for a Fuzzy B.B Curve


Numerous applications that involve more than one fuzzy B.B curve require all the fuzzy curves
to have the same degree. Additionally, higher degree fuzzy B.B curves take longer to process,
but provide more flexibility for designing shapes. The key point is to change the degree of a fuzzy B.B
curve without changing its shape. This process is called degree elevation. We now explain the process of
degree raising for a fuzzy B.B curve.
Consider a fuzzy B.B curve of degree r having r + 1 fuzzy control points,

P*(u) = ∑_{k=0}^{r} B_k^r(u) p*_k.   (5)

To increase the degree of the fuzzy B.B curve to r + 1, r + 2 fuzzy control points H*_k, 0 ≤ k ≤ r + 1, are
required. As the fuzzy curve passes through p*_0 and p*_r, the new set of fuzzy control points
must include p*_0 and p*_r. Multiplying Equation (5) by (1 − u) + u = 1 and collecting terms, Equation (5)
can be written as

P*(u) = ∑_{k=0}^{r+1} B_k^{r+1}(u) H*_k   (6)

where H*_0 = p*_0, H*_{r+1} = p*_r and

H*_k = (k/(r+1)) p*_{k−1} + (1 − k/(r+1)) p*_k,   1 ≤ k ≤ r.

Each edge of the fuzzy control polygon contains a new fuzzy control point. More precisely, the edge p_{k−1} p_k
contains H_k in the ratio (1 − k/(r+1)) : (k/(r+1)). In de Casteljau's algorithm, a fuzzy line segment is
divided in the ratio t : (1 − t); here, by contrast, the ratio is not constant but varies with the index k.
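Degree elevation can be checked numerically: elevating the control polygon must leave the curve itself unchanged. Below is our sketch of the formula above, applied to one crisp component (the same code runs on the left and right control points; the sample points happen to be the crisp controls of Example 3 below):

```python
from math import comb

def bezier_point(pts, u):
    # Bernstein-form evaluation of a Bezier curve
    r = len(pts) - 1
    b = [comb(r, k) * u**k * (1 - u)**(r - k) for k in range(r + 1)]
    return tuple(sum(w * p[d] for w, p in zip(b, pts)) for d in range(len(pts[0])))

def elevate(pts):
    """Degree elevation: r+1 control points -> r+2, same curve.

    H_0 = p_0, H_{r+1} = p_r, and
    H_k = (k/(r+1)) p_{k-1} + (1 - k/(r+1)) p_k for 1 <= k <= r.
    """
    r = len(pts) - 1
    out = [pts[0]]
    for k in range(1, r + 1):
        t = k / (r + 1)
        out.append(tuple(t * a + (1 - t) * b for a, b in zip(pts[k - 1], pts[k])))
    out.append(pts[-1])
    return out

quad = [(0.5, 0), (3.5, 5), (6.5, 0)]   # a crisp quadratic control polygon
cubic = elevate(quad)                   # 4 control points, identical curve
```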

Example 3. Consider a fuzzy quadratic B.B curve having the fuzzy control points shown in Table 4. The fuzzy
quadratic B.B curve is shown in Figure 6.
By applying the degree elevation algorithm, the fuzzy cubic B.B curve obtained from Figure 6 is shown in Figure 7.
Table 4. Fuzzy control points.

 
p_k   p̲_k (left)   p_k (crisp)   p̄_k (right)
p0 (0, 0) (0.5, 0) (1, 0)
p1 (3, 5) (3.5, 5) (4, 5)
p2 (6, 0) (6.5, 0) (7, 0)


Figure 6. Fuzzy quadratic B.B curve.

Figure 7. Fuzzy cubic B.B curve.

Fuzzy Rational Bézier Surface Patch

A fuzzy rational Bézier curve (FRB) [13] is defined as

R*(u) = (∑_{k=0}^{r} w_k B_k^r(u) p*_k) / (∑_{k=0}^{r} w_k B_k^r(u)) = ∑_{k=0}^{r} [w_k B_k^r(u) / ∑_{i=0}^{r} w_i B_i^r(u)] p*_k = ∑_{k=0}^{r} R_k^r(u) p*_k

where w_k = (w̲_k, w_k, w̄_k) are fuzzy weights. Fuzzy rational Bézier curves have several benefits over
simple fuzzy Bézier curves: they provide greater control over the shape of the fuzzy curves. In addition, a 2D
FRB curve can be represented as the projection of a 3D fuzzy Bézier curve,

R*(u) = ∏(P*(u)),   P*(u) = (P*_x(u), P*_y(u), P*_w(u)) = ∑_{k=0}^{r} B_k^r(u) P_k

where P̲_k = (w̲_k x̲_k, w̲_k y̲_k, w̲_k), P_k = (w_k x_k, w_k y_k, w_k), P̄_k = (w̄_k x̄_k, w̄_k ȳ_k, w̄_k), and the operator ∏ is
defined as ∏(x, y, w) = (x/w, y/w).
The degree elevation and de Casteljau algorithms for fuzzy Bézier curves can be extended to FRB
curves. For this, transform the FRB curve into a 3D fuzzy Bézier curve as discussed above. Next, apply
the algorithms to the 3D fuzzy Bézier curve. Finally, convert the 3D fuzzy Bézier curve back to a 2D fuzzy curve
by applying the projection operator ∏. The resulting fuzzy control points carry the fuzzy
weights of the given FRB curve.
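The homogeneous-lift evaluation can be sketched as below (our own code; the quarter-circle weights used in the usage note are a classical crisp example, not from the paper). With all weights equal, the rational curve reduces to the ordinary Bézier curve:

```python
from math import comb

def rational_bezier(pts, weights, u):
    """Evaluate a rational Bezier curve via the lift (w*x, w*y, w) and projection."""
    r = len(pts) - 1
    hx = hy = hw = 0.0
    for k, ((x, y), w) in enumerate(zip(pts, weights)):
        b = comb(r, k) * u**k * (1 - u)**(r - k)   # Bernstein weight
        hx += b * w * x
        hy += b * w * y
        hw += b * w
    return (hx / hw, hy / hw)   # the projection operator (x/w, y/w)
```

With control points (1, 0), (1, 1), (0, 1) and weights (1, √2/2, 1) — a standard crisp example — every curve point lies exactly on the unit circle.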


3. Conclusions
Fuzzy splines are the most useful mathematical and graphical tools to reduce uncertainty in
curve and surface modeling. In this research paper, various properties of fuzzy tensor product surface
patches are studied using fuzzy numbers including fuzzy parametric curves, affine invariance, fuzzy
tangents, convex hull and fuzzy iso-parametric curves. The degree elevation and de Casteljau’s
algorithms for fuzzy Bézier curves, fuzzy tensor product Bézier surfaces and FRB curves are presented.
The proposed techniques are useful to visualize uncertain and vague measures via surface modeling.
The process of fuzzification is applied to obtain the fuzzy interval of fuzzy data points in which the
crisp solution exists. It is then followed by the defuzzification process to construct crisp Bézier curves
and surfaces, which focuses on the defuzzification of fuzzy data points. Finally, to check the
effectiveness of Bézier surfaces, this process is applied to numerical examples. We aim to extend the
theory of fuzzy splines to find its applications in geometric modeling, representing fuzzy data points
using fuzzy numbers and fuzzy spline approximation problems.

Author Contributions: Musavarah Sarwar and Muhammad Akram conceived of the presented idea. Musavarah
Sarwar developed the theory and performed the computations. Muhammad Akram verified the analytical
methods.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Wahab, A.F.; Ali, J.M.; Majid, A.A. Fuzzy Geometric Modeling. In Proceedings of the IEEE 2009
Sixth International Conference on Computer Graphics, Imaging and Visualization, Tianjin, China,
11–14 August 2009; pp. 276–280.
2. Farin, G. Curves and Surfaces for CAGD: A Practical Guide, 5th ed.; Morgan Kaufmann:
Burlington, MA, USA, 2002.
3. Rogers, D.F. An Introduction to NURBS: With Historical Perspective, 1st ed.; Morgan Kaufmann: Burlington,
MA, USA, 2000.
4. Yamaguchi, F. Curves and Surfaces in Computer Aided Geometric Design; Springer Science & Business Media:
Berlin/Heidelberg, Germany, 1988.
5. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
6. Lowen, R. A Fuzzy Lagrange interpolation Theorem. Fuzzy Sets Syst. 1990, 34, 33–38.
7. Kaleva, O. Interpolation of fuzzy data. Fuzzy Sets Syst. 1994, 61, 63–70.
8. Abbasbandy, S. Interpolation of fuzzy data by complete splines. J. Appl. Math. Comput. 2001, 8, 587–594.
9. Abbasbandy, S.; Babolian, E. Interpolation of fuzzy data by natural splines. J. Appl. Math. Comput. 1998, 5,
457–463.
10. Abbasbandy, S.; Ezzati, R.; Behforooz, H. Interpolation of fuzzy data by using fuzzy splines. Int. J. Uncertain.
Fuzziness Knowl. Based Syst. 2008, 16, 107–115.
11. Zakaria, R.; Wahab, A.F.; Gobithaasan, R.U. Fuzzy B-Spline surface modeling. J. Appl. Math. 2014, 2014, 285045.
12. Zakaria, R.; Wahab, A.B. Fuzzy B-spline modeling of uncertainty data. Appl. Math. Sci. 2012, 6, 6971–6991.
13. Wahab, A.F.; Zakaria, R.; Ali, J.M. Fuzzy interpolation rational bezier curve. In Proceedings of the IEEE 2010
Seventh International Conference on Computer Graphics, Imaging and Visualization (CGIV), Sydney, NSW,
Australia, 7–10 August 2010; pp. 63–67.
14. Anile, A.M.; Falcidieno, B.; Gallo, G.; Spagnuolo, M.; Spinello, S. Modeling uncertain data with fuzzy
B-splines. Fuzzy Sets Syst. 2000, 113, 397–410.
15. Behforooz, H.; Ezzati, R.; Abbasbandy, S. Interpolation of fuzzy data by using E(3) cubic splines. Int. J. Pure
Appl. Math. 2010, 60, 383–392.
16. Dubois, D.; Prade, H. Operations on fuzzy numbers. Int. J. Syst. Sci. 1978, 9, 613–626.
17. Fortuna, L.; Muscato, G. A roll stabilization system for a monohull ship: modeling, identification, and
adaptive control. IEEE Trans. Control Syst. Technol. 1996, 4, 18–28.
18. Sarwar, M.; Akram, M. An algorithm for computing certain metrics in intuitionistic fuzzy graphs. J. Intell.
Fuzzy Syst. 2016, 30, 2405–2416.


19. Sarwar, M.; Akram, M. Certain algorithms for computing strength of competition in bipolar fuzzy graphs.
Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2017, 25, 877–896.
20. Zadeh, L.A. Similarity relations and fuzzy orderings. Inf. Sci. 1971, 3, 177–200.
21. Chang, S.S.L.; Zadeh, L.A. On fuzzy mapping and control. IEEE Trans. Syst. Man Cybern. 1972, 2, 30–34.

c 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

mathematics
Article
Numerical Methods for Solving Fuzzy Linear Systems
Lubna Inearat and Naji Qatanani *
Department of Mathematics, An–Najah National University, Nablus, P.O. Box 7, Palestine;
[email protected]
* Correspondence: [email protected]

Received: 21 November 2017; Accepted: 29 January 2018; Published: 1 February 2018

Abstract: In this article, three numerical iterative schemes, namely: Jacobi, Gauss–Seidel and
Successive over-relaxation (SOR) have been proposed to solve a fuzzy system of linear equations
(FSLEs). The convergence properties of these iterative schemes have been discussed. To display the
validity of these iterative schemes, an illustrative example with known exact solution is considered.
Numerical results show that the SOR iterative method with ω = 1.3 provides more efficient results in
comparison with other iterative techniques.

Keywords: fuzzy system of linear equations (FSLEs); iterative schemes; strong and weak
solutions; Hausdorff distance

1. Introduction
The subject of Fuzzy System of Linear Equations (FSLEs) with a crisp real coefficient matrix and
with a vector of fuzzy triangular numbers on the right-hand side arises in many branches of science
and technology such as economics, statistics, telecommunications, image processing, physics and even
social sciences. In 1965, Zadeh [1] introduced and investigated the concept of fuzzy numbers, which can
be used to generalize crisp mathematical concepts to fuzzy sets.
There is a vast literature on the investigation of solutions for fuzzy linear systems. Early work in
the literature deals with linear equation systems whose coefficient matrix is crisp and the right hand
vector is fuzzy. That is known as FSLEs and was first proposed by Friedman et al. [2]. For computing a
solution, they used the embedding method and replaced the original fuzzy n × n linear system by a
2n × 2n crisp linear system. Later, several authors studied FSLEs. Allahviranloo [3,4] used the Jacobi,
Gauss–Seidel and Successive over-relaxation (SOR) iterative techniques to solve FSLEs. Dehghan and
Hashemi [5] investigated the existence of a solution provided that the coefficient matrix is strictly
diagonally dominant matrix with positive diagonal entries and then applied several iterative methods
for solving FSLEs. Ezzati [6] developed a new method for solving FSLEs by using embedding method
and replaced an n × n FSLE by two n × n crisp linear systems. Furthermore, Muzzioli et al. [7]
discussed FSLEs in the form of A1 x + b1 = A2 x + b2 with A1 , A2 being square matrices of fuzzy
coefficients and b1 , b2 fuzzy number vectors. Abbasbandy and Jafarian [8] proposed the steepest
descent method for solving FSLEs. Ineirat [9] investigated the numerical handling of the fuzzy linear
system of equations (FSLEs) and fully fuzzy linear system of equations (FFSLEs).
Generally, FSLEs are handled under two main headings: square (n × n) and nonsquare (m × n)
forms. Most of the works in the literature deal with the square form. For example, Asady et al. [10]
extended the model of Friedman for an n × n fuzzy linear system to solve a general m × n rectangular
fuzzy linear system, where the coefficient matrix is crisp and the right-hand side column is a
fuzzy number vector. They replaced the original m × n fuzzy linear system by a 2m × 2n crisp linear
system. Moreover, they investigated the conditions for the existence of a fuzzy solution.
Fuzzy elements of this system can be taken as triangular, trapezoidal or generalized fuzzy
numbers in general or parametric form. While triangular fuzzy numbers are widely used in earlier

Mathematics 2018, 6, 19; doi:10.3390/math6020019 www.mdpi.com/journal/mathematics



works, trapezoidal fuzzy numbers were neglected for a long time. Besides, there exist many works
using the parametric and level cut representations of fuzzy numbers.
The paper is organized as follows: In Section 2, a fuzzy linear system of equations is introduced.
In Section 3, we present the Jacobi, Gauss–Seidel and SOR iterative methods for solving FSLEs with
convergence theorems. The proposed algorithms are implemented using a numerical example with
known exact solutions in Section 4. Conclusions are drawn in Section 5.

2. Fuzzy Linear System

Definition 1. In Reference [11]: An arbitrary fuzzy number in parametric form is represented by an ordered
pair of functions (v̲(r), v̄(r)), 0 ≤ r ≤ 1, which satisfy the following requirements:

(1) v̲(r) is a bounded left-continuous non-decreasing function over [0, 1].
(2) v̄(r) is a bounded left-continuous non-increasing function over [0, 1].
(3) v̲(r) ≤ v̄(r), 0 ≤ r ≤ 1.

Definition 2. In Reference [12]: For arbitrary fuzzy numbers u and v, the quantity

  D(u, v) = sup_{0≤r≤1} max{ |u(r) − v(r)| , |ū(r) − v̄(r)| }

is called the Hausdorff distance between u and v.
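Numerically, D(u, v) can be approximated by sampling r on a uniform grid. The sketch below does exactly that; the triangular fuzzy numbers u and v in parametric form are hypothetical examples, not taken from the paper:

```python
# Approximate the Hausdorff distance D(u, v) of Definition 2 by sampling
# the parametric forms (lower(r), upper(r)) on a grid of r values in [0, 1].

def hausdorff_distance(u, v, steps=1000):
    """u and v map r in [0, 1] to an (lower, upper) endpoint pair."""
    d = 0.0
    for k in range(steps + 1):
        r = k / steps
        (ul, uu), (vl, vu) = u(r), v(r)
        d = max(d, abs(ul - vl), abs(uu - vu))
    return d

# Hypothetical triangular fuzzy numbers in parametric form.
u = lambda r: (1 + r, 3 - r)   # peak at 2
v = lambda r: (2 + r, 4 - r)   # peak at 3, i.e. u shifted by 1

print(hausdorff_distance(u, v))  # → 1.0
```

Since v is u shifted by 1, both endpoint functions differ by exactly 1 for every r, so the supremum is 1.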

Definition 3. In Reference [13]: The n × n linear system

  a11 x1 + a12 x2 + … + a1n xn = b1 ,
  a21 x1 + a22 x2 + … + a2n xn = b2 ,
  ⋮
  an1 x1 + an2 x2 + … + ann xn = bn ,     (1)

where the coefficient matrix A = (aij), 1 ≤ i, j ≤ n, is a crisp n × n matrix and each
bi ∈ E¹, 1 ≤ i ≤ n, is a fuzzy number, is called a fuzzy system of linear equations (FSLEs).

Definition 4. In Reference [13]: A fuzzy number vector X = (x1, x2, …, xn)ᵗ given by
xi = (xi(r), x̄i(r)), 1 ≤ i ≤ n, 0 ≤ r ≤ 1, is called (in parametric form) a solution of the FSLEs (1) if

  ∑_{aij≥0} aij xj + ∑_{aij<0} aij x̄j = bi ,
  ∑_{aij≥0} aij x̄j + ∑_{aij<0} aij xj = b̄i ,     1 ≤ i ≤ n.     (2)

Following Friedman [2] we introduce the notation below:

  X = (x1, x2, …, xn, −x̄1, −x̄2, …, −x̄n)ᵗ,
  b = (b1, b2, …, bn, −b̄1, −b̄2, …, −b̄n)ᵗ,
  S = (sij), 1 ≤ i, j ≤ 2n, where the sij are determined as follows:

  aij ≥ 0 ⇒ sij = aij ,  si+n,j+n = aij ,
  aij < 0 ⇒ si,j+n = −aij ,  si+n,j = −aij ,     (3)


and any sij which is not determined by Equation (3) is zero. Using matrix notation, we have

SX = b (4)

The structure of S implies that sij ≥ 0 and thus

  S = ⎛ B  C ⎞     (5)
      ⎝ C  B ⎠

where B contains the positive elements of A, C contains the absolute values of the negative elements of
A, and A = B − C. An example in the work of Friedman [2] shows that the matrix S may be singular
even if A is nonsingular.
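The construction of S in Equations (3)–(5) can be sketched directly in a few lines (plain Python, nested lists; the 2 × 2 matrix A below is a hypothetical example):

```python
# Build the 2n x 2n embedding S = [[B, C], [C, B]] of Equation (5)
# from a crisp n x n coefficient matrix A, following Equation (3):
# B holds the positive entries of A, C the absolute values of the negatives.

def build_S(A):
    n = len(A)
    B = [[a if a > 0 else 0 for a in row] for row in A]   # positive parts
    C = [[-a if a < 0 else 0 for a in row] for row in A]  # |negative parts|
    top = [B[i] + C[i] for i in range(n)]     # block rows [B | C]
    bottom = [C[i] + B[i] for i in range(n)]  # block rows [C | B]
    return top + bottom

A = [[1, -2],
     [3, 4]]
S = build_S(A)
# S = [[1, 0, 0, 2],
#      [3, 4, 0, 0],
#      [0, 2, 1, 0],
#      [0, 0, 3, 4]]
```

Note that any entry of S not assigned by Equation (3) stays zero, exactly as in the text.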

Theorem 1. In Reference [2]: The matrix S is nonsingular if and only if the matrices A = B − C and
B + C are both nonsingular.

Proof. By subtracting the jth column of S from its (n + j)th column, for 1 ≤ j ≤ n, we obtain

  S = ⎛ B  C ⎞  →  ⎛ B  C − B ⎞ = S1 .
      ⎝ C  B ⎠     ⎝ C  B − C ⎠

Next, adding the (n + i)th row of S1 to its ith row, for 1 ≤ i ≤ n, we obtain

  S1 = ⎛ B  C − B ⎞  →  ⎛ B + C  0     ⎞ = S2 .
       ⎝ C  B − C ⎠     ⎝ C      B − C ⎠

Clearly, |S| = |S1| = |S2| = |B + C||B − C| = |B + C||A|.

Therefore |S| ≠ 0 if and only if |A| ≠ 0 and |B + C| ≠ 0. This concludes the proof. □

Corollary 1. In Reference [2]: If a crisp linear system does not have a unique solution, the associated
fuzzy linear system does not have one either.
Definition 5. In Reference [14]: If X = (x1, x2, …, xn, −x̄1, −x̄2, …, −x̄n)ᵀ is a solution of
system (4) and the inequalities xi ≤ x̄i hold for each 1 ≤ i ≤ n, then X is called a strong solution
of the system (4).

Definition 6. In Reference [14]: If X = (x1, x2, …, xn, −x̄1, −x̄2, …, −x̄n)ᵀ is a solution of
system (4) and for some i ∈ {1, …, n} the inequality xi > x̄i holds, then X is called a weak solution
of the system (4).
 
Theorem 2. In Reference [14]: Let S = [ B C ; C B ] be a nonsingular matrix. Then the system (4) has a
strong solution if and only if (B + C)⁻¹(b − b̄) ≤ 0.

Theorem 3. In Reference [14]: The FSLEs (1) has a unique strong solution if and only if the following
conditions hold:

(1) The matrices A = B − C and B + C are both invertible.
(2) (B + C)⁻¹(b − b̄) ≤ 0.


3. Iterative Schemes
In this section we will present the following iterative schemes for solving FSLEs.

3.1. The Jacobi and Gauss–Seidel Iterative Schemes


An iterative technique for solving an n × n linear system AX = b involves a process of converting
the system AX = b into an equivalent system X = TX + C. After selecting an initial approximation
X 0 , a sequence { X k } is generated by computing

  Xᵏ = TXᵏ⁻¹ + C ,   k ≥ 1.

Definition 7. In Reference [4]: A square matrix A is called diagonally dominant if

  |ajj| ≥ ∑_{i=1, i≠j}^{n} |aij| ,   j = 1, 2, …, n,

and strictly diagonally dominant if |ajj| > ∑_{i=1, i≠j}^{n} |aij| for j = 1, 2, …, n.
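The column-wise condition of Definition 7 can be checked mechanically. The sketch below does so; the 6 × 6 matrix used for the check is the coefficient matrix of Example 1 from Section 4:

```python
# Check strict diagonal dominance in the column-wise sense of Definition 7:
# |a_jj| > sum of |a_ij| over i != j, for every column j.

def strictly_diagonally_dominant(M):
    n = len(M)
    return all(abs(M[j][j]) > sum(abs(M[i][j]) for i in range(n) if i != j)
               for j in range(n))

A = [[ 9,  2, -1,  1,  1, -2],
     [-1, 10,  2,  1, -1, -1],
     [ 1,  3,  9, -1,  1,  2],
     [ 2, -1,  1, 10, -2,  3],
     [ 1,  1, -1,  2,  7, -1],
     [ 3,  2,  1,  1, -1, 10]]   # coefficient matrix of Example 1 (Section 4)

print(strictly_diagonally_dominant(A))  # → True
```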
Next, we are going to present the following theorems.
Theorem 4. In Reference [3]: Let the matrix A in Equation (1) be strictly diagonally dominant. Then both
the Jacobi and the Gauss–Seidel iterative techniques converge to A⁻¹Y for any X⁰.

Theorem 5. In Reference [3]: The matrix A in Equation (1) is strictly diagonally dominant if and only if
matrix S is strictly diagonally dominant.

Proof. For more details see [3].


From [3], without loss of generality, suppose that sii > 0 for all i = 1, 2, …, 2n.
Let S = D + L + U, where

  D = ⎛ D1  0  ⎞ ,   L = ⎛ L1  0  ⎞ ,   U = ⎛ U1  S2 ⎞ ,
      ⎝ 0   D1 ⎠         ⎝ S2  L1 ⎠         ⎝ 0   U1 ⎠

(D1)ii = sii > 0, i = 1, 2, …, n, and set S1 = D1 + L1 + U1. In the Jacobi method, from the
structure of SX = Y we have
  ⎛ D1  0  ⎞⎛ X ⎞   ⎛ L1 + U1  S2      ⎞⎛ X ⎞   ⎛ Y ⎞
  ⎝ 0   D1 ⎠⎝ X̄ ⎠ + ⎝ S2       L1 + U1 ⎠⎝ X̄ ⎠ = ⎝ Ȳ ⎠

then

  X = D1⁻¹Y − D1⁻¹(L1 + U1)X − D1⁻¹S2X̄,
  X̄ = D1⁻¹Ȳ − D1⁻¹(L1 + U1)X̄ − D1⁻¹S2X.     (6)
Thus, the Jacobi iterative technique will be

  Xᵏ⁺¹ = D1⁻¹Y − D1⁻¹(L1 + U1)Xᵏ − D1⁻¹S2X̄ᵏ,
  X̄ᵏ⁺¹ = D1⁻¹Ȳ − D1⁻¹(L1 + U1)X̄ᵏ − D1⁻¹S2Xᵏ,     k = 0, 1, …     (7)
 
The elements of Xᵏ⁺¹ = (Xᵏ⁺¹, X̄ᵏ⁺¹)ᵗ are

  xiᵏ⁺¹(r) = (1/si,i) [ yi(r) − ∑_{j=1, j≠i}^{n} si,j xjᵏ(r) − ∑_{j=1}^{n} si,n+j x̄jᵏ(r) ],

  x̄iᵏ⁺¹(r) = (1/si,i) [ ȳi(r) − ∑_{j=1, j≠i}^{n} si,j x̄jᵏ(r) − ∑_{j=1}^{n} si,n+j xjᵏ(r) ],

  k = 0, 1, 2, …,  i = 1, 2, …, n.

269
Mathematics 2018, 6, 19

The matrix form of the Jacobi iterative technique is Xᵏ⁺¹ = PXᵏ + C, where

  P = ⎛ −D1⁻¹(L1 + U1)  −D1⁻¹S2         ⎞ ,   C = ⎛ D1⁻¹Y ⎞ ,   X = ⎛ X ⎞ .
      ⎝ −D1⁻¹S2         −D1⁻¹(L1 + U1) ⎠         ⎝ D1⁻¹Ȳ ⎠         ⎝ X̄ ⎠

For the Gauss–Seidel method, we have

  ⎛ D1 + L1  0       ⎞⎛ X ⎞   ⎛ U1  S2 ⎞⎛ X ⎞   ⎛ Y ⎞
  ⎝ S2       D1 + L1 ⎠⎝ X̄ ⎠ + ⎝ 0   U1 ⎠⎝ X̄ ⎠ = ⎝ Ȳ ⎠     (8)

then

  X = (D1 + L1)⁻¹Y − (D1 + L1)⁻¹U1X − (D1 + L1)⁻¹S2X̄,
  X̄ = (D1 + L1)⁻¹Ȳ − (D1 + L1)⁻¹U1X̄ − (D1 + L1)⁻¹S2X.     (9)
Thus, the Gauss–Seidel iterative technique becomes

  Xᵏ⁺¹ = (D1 + L1)⁻¹Y − (D1 + L1)⁻¹U1Xᵏ − (D1 + L1)⁻¹S2X̄ᵏ,
  X̄ᵏ⁺¹ = (D1 + L1)⁻¹Ȳ − (D1 + L1)⁻¹U1X̄ᵏ − (D1 + L1)⁻¹S2Xᵏ,     k = 0, 1, …     (10)
 
So the elements of Xᵏ⁺¹ = (Xᵏ⁺¹, X̄ᵏ⁺¹)ᵗ are

  xiᵏ⁺¹(r) = (1/si,i) [ yi(r) − ∑_{j=1}^{i−1} si,j xjᵏ⁺¹(r) − ∑_{j=i+1}^{n} si,j xjᵏ(r) − ∑_{j=1}^{n} si,n+j x̄jᵏ(r) ],

  x̄iᵏ⁺¹(r) = (1/si,i) [ ȳi(r) − ∑_{j=1}^{i−1} si,j x̄jᵏ⁺¹(r) − ∑_{j=i+1}^{n} si,j x̄jᵏ(r) − ∑_{j=1}^{n} si,n+j xjᵏ(r) ],

  k = 0, 1, 2, …,  i = 1, 2, …, n.

This results in the matrix form of the Gauss–Seidel iterative technique Xᵏ⁺¹ = PXᵏ + C, where

  P = ⎛ −(D1 + L1)⁻¹U1  −(D1 + L1)⁻¹S2 ⎞ ,   C = ⎛ (D1 + L1)⁻¹Y ⎞ ,   X = ⎛ X ⎞ .
      ⎝ −(D1 + L1)⁻¹S2  −(D1 + L1)⁻¹U1 ⎠         ⎝ (D1 + L1)⁻¹Ȳ ⎠         ⎝ X̄ ⎠


From Theorems 4 and 5, both the Jacobi and Gauss–Seidel iterative schemes converge to the unique
solution X = S⁻¹Y for any X⁰, where X ∈ R²ⁿ and X, X̄ ∈ Eⁿ. For a given tolerance ε > 0, the
iteration stops when

  ‖Xᵏ⁺¹ − Xᵏ‖ / ‖Xᵏ⁺¹‖ < ε   and   ‖X̄ᵏ⁺¹ − X̄ᵏ‖ / ‖X̄ᵏ⁺¹‖ < ε ,   k = 0, 1, …
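The componentwise Jacobi update and the relative stopping rule above can be sketched for a generic square system. This is a minimal illustration on a hypothetical crisp 2 × 2 system, not the paper's notation; in the fuzzy case the same routine is applied to the stacked 2n × 2n system SX = Y:

```python
# Jacobi iteration for a square system S X = Y, with a relative
# stopping rule of the form ||X^{k+1} - X^k|| / ||X^{k+1}|| < eps.

def jacobi(S, Y, eps=1e-8, max_iter=1000):
    m = len(S)
    X = [0.0] * m                               # initial approximation X^0
    for _ in range(max_iter):
        Xn = [(Y[i] - sum(S[i][j] * X[j] for j in range(m) if j != i)) / S[i][i]
              for i in range(m)]
        num = max(abs(Xn[i] - X[i]) for i in range(m))
        den = max(abs(v) for v in Xn) or 1.0    # guard against a zero iterate
        X = Xn
        if num / den < eps:
            break
    return X

# Hypothetical strictly diagonally dominant 2 x 2 test system;
# the exact solution is (1/11, 7/11).
X = jacobi([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

Strict diagonal dominance of the test matrix guarantees convergence here, in line with Theorem 4.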

3.2. Successive over-Relaxation (SOR) Iterative Method


In this section we turn to a modification of the Gauss–Seidel iteration known as the SOR
iterative method. Multiplying system (8) by D⁻¹ gives

  ⎛ I + D1⁻¹L1  0           ⎞⎛ X ⎞   ⎛ D1⁻¹U1  D1⁻¹S2 ⎞⎛ X ⎞   ⎛ D1⁻¹Y ⎞
  ⎝ D1⁻¹S2      I + D1⁻¹L1 ⎠⎝ X̄ ⎠ + ⎝ 0       D1⁻¹U1 ⎠⎝ X̄ ⎠ = ⎝ D1⁻¹Ȳ ⎠     (11)

Let Ũ1 = D1⁻¹U1, L̃1 = D1⁻¹L1 and S̃2 = D1⁻¹S2; then

  ⎛ I + L̃1  0       ⎞⎛ X ⎞   ⎛ Ũ1  S̃2 ⎞⎛ X ⎞   ⎛ D1⁻¹Y ⎞
  ⎝ S̃2      I + L̃1 ⎠⎝ X̄ ⎠ + ⎝ 0   Ũ1 ⎠⎝ X̄ ⎠ = ⎝ D1⁻¹Ȳ ⎠     (12)

Hence

  (I + L̃1)X = D1⁻¹Y − Ũ1X − S̃2X̄ ,   (I + L̃1)X̄ = D1⁻¹Ȳ − Ũ1X̄ − S̃2X.     (13)

Multiplying (13) by a parameter ω and adding (1 − ω)X and (1 − ω)X̄, respectively, to both sides gives

  (I + ωL̃1)X = ωD1⁻¹Y + [(1 − ω)I − ωŨ1]X − ωS̃2X̄,
  (I + ωL̃1)X̄ = ωD1⁻¹Ȳ + [(1 − ω)I − ωŨ1]X̄ − ωS̃2X.     (14)

If ω = 1, then clearly X is just the Gauss–Seidel solution (13). The SOR iterative method thus
takes the form:

  Xᵏ⁺¹ = (I + ωL̃1)⁻¹ωD1⁻¹Y + (I + ωL̃1)⁻¹[(1 − ω)I − ωŨ1]Xᵏ − (I + ωL̃1)⁻¹ωS̃2X̄ᵏ,
  X̄ᵏ⁺¹ = (I + ωL̃1)⁻¹ωD1⁻¹Ȳ + (I + ωL̃1)⁻¹[(1 − ω)I − ωŨ1]X̄ᵏ − (I + ωL̃1)⁻¹ωS̃2Xᵏ,     k = 0, 1, …     (15)

Consequently, the matrix form of the SOR iterative method is Xᵏ⁺¹ = PXᵏ + C, where

  P = ⎛ (I + ωL̃1)⁻¹[(1 − ω)I − ωŨ1]  −(I + ωL̃1)⁻¹ωS̃2            ⎞ ,
      ⎝ −(I + ωL̃1)⁻¹ωS̃2             (I + ωL̃1)⁻¹[(1 − ω)I − ωŨ1] ⎠

  C = ⎛ (I + ωL̃1)⁻¹ωD1⁻¹Y ⎞ .
      ⎝ (I + ωL̃1)⁻¹ωD1⁻¹Ȳ ⎠

For 0 < ω < 1 the method is called the successive under-relaxation method, which can be used to
achieve convergence for systems that are not convergent by the Gauss–Seidel method.
For ω > 1 the method is called the SOR method, which can be used to accelerate the convergence of
linear systems that are already convergent by the Gauss–Seidel method.
Theorem 6. In Reference [4]: If S is a positive definite matrix and 0 < ω < 2 then the SOR method converges
for any choice of initial approximate vector X 0 .
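A compact sketch of a standard SOR sweep for a generic square system follows (ω = 1 reduces it to Gauss–Seidel); the 2 × 2 test system is hypothetical, not the paper's example:

```python
# SOR iteration for a square system S X = Y: each sweep overwrites X
# in place (Gauss-Seidel ordering) and relaxes with parameter w.

def sor(S, Y, w=1.3, eps=1e-8, max_iter=1000):
    m = len(S)
    X = [0.0] * m
    for _ in range(max_iter):
        diff = 0.0
        for i in range(m):
            sigma = sum(S[i][j] * X[j] for j in range(m) if j != i)
            x_new = (1 - w) * X[i] + w * (Y[i] - sigma) / S[i][i]
            diff = max(diff, abs(x_new - X[i]))
            X[i] = x_new
        if diff < eps * max(max(abs(v) for v in X), 1e-30):
            break
    return X

# Hypothetical symmetric positive definite 2 x 2 test system;
# the exact solution is (1/11, 7/11).
X = sor([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], w=1.1)
```

The test matrix is positive definite and 0 < ω < 2, so convergence is guaranteed by Theorem 6.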

4. Numerical Example and Results


To demonstrate the efficiency and accuracy of the proposed iterative techniques, we consider the
following numerical example with known exact solution.

Example 1. Consider the 6 × 6 non-symmetric fuzzy linear system

9x1 + 2x2 − x3 + x4 + x5 − 2x6 = (−53 + 8r, −25 − 20r )


− x1 + 10x2 + 2x3 + x4 − x5 − x6 = (−13 + 9r, 18 − 22r )
x1 + 3x2 + 9x3 − x4 + x5 + 2x6 = (18 + 17r, 73 − 38r )
(16)
2x1 − x2 + x3 + 10x4 − 2x5 + 3x6 = (31 + 16r, 61 − 14r )
x1 + x2 − x3 + 2x4 + 7x5 − x6 = (34 + 8r, 58 − 16r )
3x1 + 2x2 + x3 + x4 − x5 + 10x6 = (51 + 26r, 99 − 22r )


The extended 12 × 12 matrix is

S =
⎡  9   2   0   1   1   0   0   0  −1   0   0  −2 ⎤
⎢  0  10   2   1   0   0  −1   0   0   0  −1  −1 ⎥
⎢  1   3   9   0   1   2   0   0   0  −1   0   0 ⎥
⎢  2   0   1  10   0   3   0  −1   0   0  −2   0 ⎥
⎢  1   1   0   2   7   0   0   0  −1   0   0  −1 ⎥
⎢  3   2   1   1   0  10   0   0   0   0  −1   0 ⎥
⎢  0   0  −1   0   0  −2   9   2   0   1   1   0 ⎥
⎢ −1   0   0   0  −1  −1   0  10   2   1   0   0 ⎥
⎢  0   0   0  −1   0   0   1   3   9   0   1   2 ⎥
⎢  0  −1   0   0  −2   0   2   0   1  10   0   3 ⎥
⎢  0   0  −1   0   0  −1   1   1   0   2   7   0 ⎥
⎣  0   0   0   0  −1   0   3   2   1   1   0  10 ⎦

X = S⁻¹Y, where

S⁻¹ ≈
⎡  0.1136 −0.0220  0.0041 −0.0050 −0.0148 −0.0020 −0.0088 −0.0046  0.0100  0.0011 −0.0050  0.0167 ⎤
⎢  0.0007  0.1034 −0.0206 −0.0117  0.0020  0.0096  0.0073 −0.0065  0.0010 −0.0057  0.0116  0.0122 ⎥
⎢ −0.0041 −0.0267  0.1184  0.0080 −0.0130 −0.0266  0.0023  0.0044 −0.0035  0.0133 −0.0045 −0.0081 ⎥
⎢ −0.0126  0.0090 −0.0075  0.1023  0.0022 −0.0258 −0.0006  0.0081 −0.0023 −0.0071  0.0272  0.0012 ⎥
⎢ −0.0130 −0.0136  0.0037 −0.0253  0.1450  0.0042 −0.0049 −0.0064  0.0150  0.0028 −0.0100  0.0067 ⎥
⎢ −0.0330 −0.0130 −0.0067 −0.0069  0.0041  0.1046  0.0002  0.0001 −0.0023 −0.0023  0.0114 −0.0063 ⎥
⎢ −0.0088 −0.0046  0.0100  0.0011 −0.0050  0.0167  0.1136 −0.0220  0.0041 −0.0050 −0.0148 −0.0020 ⎥
⎢  0.0073 −0.0065  0.0010 −0.0057  0.0116  0.0122  0.0007  0.1034 −0.0206 −0.0117  0.0020  0.0096 ⎥
⎢  0.0023  0.0044 −0.0035  0.0133 −0.0045 −0.0081 −0.0041 −0.0267  0.1184  0.0080 −0.0130 −0.0266 ⎥
⎢ −0.0006  0.0081 −0.0023 −0.0071  0.0272  0.0012 −0.0126  0.0090 −0.0075  0.1023  0.0022 −0.0258 ⎥
⎢ −0.0049 −0.0064  0.0150  0.0028 −0.0100  0.0067 −0.0130 −0.0136  0.0037 −0.0253  0.1450  0.0042 ⎥
⎣  0.0002  0.0001 −0.0023 −0.0023  0.0114 −0.0063 −0.0330 −0.0130 −0.0067 −0.0069  0.0041  0.1046 ⎦

and Y = (−53 + 8r, −13 + 9r, 18 + 17r, 31 + 16r, 34 + 8r, 51 + 26r, −25 − 20r, 18 − 22r, 73 − 38r, 61 − 14r, 58 − 16r, 99 − 22r)ᵗ.

The exact solution is

  x1 = (x1(r), x̄1(r)) = (−4.12 + 0.12r, −2.88 − 1.12r),
  x2 = (x2(r), x̄2(r)) = (−0.25 + 0.25r, 1.25 − 1.25r),
  x3 = (x3(r), x̄3(r)) = (0.78 + 1.22r, 5.22 − 3.22r),
  x4 = (x4(r), x̄4(r)) = (3.6 + 0.4r, 4.4 − 0.4r),
  x5 = (x5(r), x̄5(r)) = (6.66 + 0.34r, 8.34 − 1.34r),
  x6 = (x6(r), x̄6(r)) = (6.78 + 2.22r, 10.22 − 1.22r).

The exact and approximate solutions using the Jacobi, Gauss–Seidel and SOR iterative schemes
are shown in Figures 1–3, respectively. With ε = 10⁻³, the Hausdorff distance of the solutions is
0.4091 × 10⁻³ for the Jacobi method, 0.4335 × 10⁻⁴ for the Gauss–Seidel method, and 5.5611 × 10⁻⁴
for the SOR method with ω = 1.3.
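As a quick consistency check (a sketch, not part of the paper's own computation): at r = 1 the lower and upper right-hand sides of Example 1 coincide, so the fuzzy system reduces to the crisp system Ax = b with solution (−4, 0, 2, 4, 7, 9), matching the exact solution above at r = 1. Plain Gauss–Seidel sweeps reproduce it:

```python
# Verify the r = 1 cut of Example 1: there the lower and upper right-hand
# sides coincide, so the fuzzy system reduces to the crisp system A x = b.

A = [[ 9,  2, -1,  1,  1, -2],
     [-1, 10,  2,  1, -1, -1],
     [ 1,  3,  9, -1,  1,  2],
     [ 2, -1,  1, 10, -2,  3],
     [ 1,  1, -1,  2,  7, -1],
     [ 3,  2,  1,  1, -1, 10]]
b = [-45.0, -4.0, 35.0, 47.0, 42.0, 77.0]   # right-hand sides of (16) at r = 1

x = [0.0] * 6
for _ in range(100):                         # Gauss-Seidel sweeps
    for i in range(6):
        sigma = sum(A[i][j] * x[j] for j in range(6) if j != i)
        x[i] = (b[i] - sigma) / A[i][i]
# x converges to (-4, 0, 2, 4, 7, 9) up to rounding
```

A is strictly diagonally dominant, so the sweeps converge to machine precision well within 100 iterations.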







Figure 1. The Hausdorff distance of solutions with ε = 10⁻³ in the Jacobi method is 0.4091 × 10⁻³.






Figure 2. The Hausdorff distance of solutions with ε = 10⁻³ in the Gauss–Seidel method is 0.4335 × 10⁻⁴.










Figure 3. The Hausdorff distance of solutions with ε = 10⁻³ in the successive over-relaxation (SOR)
method with ω = 1.3 is 5.5611 × 10⁻⁴.


5. Conclusions
In this article the Jacobi, Gauss–Seidel and SOR iterative methods have been used to solve
FSLEs in which the coefficient matrix entries are crisp numbers, the right-hand side column is an
arbitrary fuzzy vector and the unknowns are fuzzy numbers. The numerical results are in close
agreement with the analytical ones. Moreover, Figures 1–3, containing the Hausdorff distances of the
solutions, show that the SOR iterative method is more efficient in comparison with the other
iterative techniques.

Author Contributions: Lubna Inearat and Naji Qatanani conceived and designed the experiments, both
performed the experiments. Lubna Inearat and Naji Qatanani analyzed the data, both contributed
reagents/materials/analysis tools and wrote the paper.
Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
2. Friedman, M.; Ming, M.; Kandel, A. Fuzzy linear systems. Fuzzy Sets Syst. 1998, 96, 201–209. [CrossRef]
3. Allahviraloo, T. Numerical methods for fuzzy system of linear equations. Appl. Math. Comput. 2004, 155,
493–502.
4. Allahviraloo, T. Successive over relaxation iterative method for fuzzy system of linear equations.
Appl. Math. Comput. 2005, 62, 189–196.
5. Dehghan, M.; Hashemi, B. Iterative solution of fuzzy linear systems. Appl. Math. Comput. 2006, 175, 645–674.
[CrossRef]
6. Ezzati, R. Solving fuzzy linear systems. Soft Comput. 2011, 15, 193–197. [CrossRef]
7. Muzzioli, S.; Reynaerts, H. Fuzzy linear systems of the form A1 x + b1 = A2 x + b2 . Fuzzy Sets Syst. 2006,
157, 939–951.
8. Abbasbandy, S.; Jafarian, A. Steepest descent method for system of fuzzy linear equations.
Appl. Math. Comput. 2006, 175, 823–833. [CrossRef]
9. Ineirat, L. Numerical Methods for Solving Fuzzy System of Linear Equations. Master’s Thesis, An-Najah
National University, Nablus, Palestine, 2017.
10. Asady, B.; Abbasbandy, S.; Alavi, M. Fuzzy general linear systems. Appl. Math. Comput. 2005, 169, 34–40.
[CrossRef]
11. Senthilkumar, P.; Rajendran, G. An algorithmic approach to solve fuzzy linear systems. J. Inf. Comput. Sci.
2011, 8, 503–510.
12. Bede, B. Product type operations between fuzzy numbers and their applications in geology.
Acta Polytech. Hung. 2006, 3, 123–139.
13. Abbasbandy, S.; Alavi, M. A method for solving fuzzy linear systems. Iran. J. Fuzzy Syst. 2005, 2, 37–43.
14. Amrahov, S.; Askerzade, I. Strong solutions of the fuzzy linear systems. CMES-Comput. Model. Eng. Sci.
2011, 76, 207–216.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
