MFCS
(A0504193)
UNIT-I
Mathematical Logic
Syllabus
Mathematical Logic: Statements and notations, Connectives, Well-formed formulas, Truth
Tables, tautology, converse, inverse and contrapositive ,equivalence, implication, Normal forms.
• Discrete mathematics is the study of discrete structures: mathematical models dealing
with discrete objects and the relationships between them.
In the world of computers, all information is stored in bits, units of information that
can take the value of either 0 or 1. It's not like in nature, where something can take all the
values in between 0 and 1 as well. Instead, everything is binary.
Since bits are the building blocks of everything that happens in computer software,
everything becomes discrete. For instance, the hard drive on the laptop I'm using right
now can store 1 845 074 329 600 bits of information.
The study of algorithms is also firmly in the discrete world. An algorithm is a step-by-
step list of instructions to the computer, and it's what makes computer programs possible.
When determining how much time an algorithm needs to run, you count the number of
operations it needs to perform. Notice the word count. Again, discrete mathematics.
• In continuous mathematics (the opposite of discrete), the calculation would go like this:
∫ from 0 to 5 of x dx = [x²/2] from 0 to 5 = 5²/2 − 0 = 12.5
• In discrete mathematics, the equivalent calculation would go like this:
∑ = 0 + 1 + 2 + 3 + 4 = 10
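The two calculations can be checked directly in a few lines of Python (the variable names below are ours, purely for illustration):

```python
# Discrete: count whole steps by summing the integers 0 through 4.
discrete_total = sum(range(5))     # 0 + 1 + 2 + 3 + 4

# Continuous: the integral of x from 0 to 5 evaluates x^2/2 at 5.
continuous_total = 5 ** 2 / 2

print(discrete_total, continuous_total)  # 10 12.5
```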
Applications of discrete mathematics include:
• 1) Computer networks
• 2) Programming languages
• 3) Finite state automata and compilers
• 4) Databases
Symbolic logic is used in framing algorithms and their verification and in automatic theorem
proving. Set theory, graph theory, trees, etc. are used in the storage and retrieval of information
(data structures). Algorithms and their complexity studies also use tools from discrete mathematics.
Formal languages, automata theory, Turing machines, etc. are themselves part of discrete
mathematics, and so is recursive function theory. Undecidability of many problems is
established using Turing machines, which are the mathematical model for studying the theoretical
limitations of computation. Lattices and Boolean algebra are used in computer science as well
as in communications and networking.
Mathematical Logic
Logic
It is the study of the principles and methods that distinguishes between valid and invalid
argument.
1.Proposition Logic
Proposition or Statement
A proposition or statement can be defined as a declarative sentence to which we can assign one
and only one of the truth values, true or false, but not both.
These two values, true and false, are denoted by the symbols T and F respectively. Sometimes
they are also denoted by the symbols 1 and 0.
Examples:
1) The capital of India is New Delhi — true
2) 2*3 = 5 — false
These are propositions (or statements) because they are either true or false.
Types of Propositions
1)Atomic proposition
2)Compound Proposition
1) Atomic proposition
An atomic proposition is a single declarative sentence that cannot be broken down into simpler propositions.
• Examples:
• 1) The capital of India is New Delhi
• 2) 2*3 = 5
2) Compound Proposition
• Two or more atomic propositions can be combined to form a compound proposition with the
help of connectives. A compound proposition is also called a propositional function.
Notations
Logical Connectives
The words, phrases, or symbols which are used to make a compound proposition from two
or more atomic propositions are called logical connectives, or simply connectives.
There are five basic connectives, called negation, conjunction, disjunction, conditional,
and biconditional.
The negation of a statement is generally formed by writing the word 'not' at a proper
place in the statement (proposition) or by prefixing the statement with the phrase 'It is not
the case that'. If p denotes a statement, then the negation of p is written as ~p and read as
'not p'. If the truth value of p is T, then the truth value of ~p is F. Also, if the truth value of p is F,
then the truth value of ~p is T.
Ex: ~P
P ~P
T F
F T
Disjunction (OR)
P Q PVQ
F F F
F T T
T F T
T T T
Conjunction (AND(^))
P Q P∧Q
F F F
F T F
T F F
T T T
Ex: P: It is raining today. Q: It is cold today. P ∧ Q: It is raining today and it is cold today.
If P and Q are any two statements (or propositions), then the statement P → Q, which is
read as 'If P, then Q', is called a conditional statement (or proposition) or implication, and the
connective → is the conditional connective.
P Q P→ Q
F F T
F T T
T F F
T T T
Biconditional (↔): If P and Q are any two statements, the statement P ↔ Q, read as 'P if and
only if Q', is true exactly when P and Q have the same truth value.
P Q P↔Q
T T T
T F F
F T F
F F T
Tautology: A statement formula which is true regardless of the truth values of the
statements which replace the variables in it is called a universally valid formula, a
logical truth, or a tautology. Equivalently, a proposition is a tautology if its truth value is T
for every assignment of truth values to its components.
Example: Show that the following formula is a tautology:
a)(PVQ) V ~P
Solution
P Q ~P PVQ (PVQ)V~P
F F T F T
F T T T T
T F F T T
T T F T T
In the above table, (PVQ) V ~P takes the truth value T in every row, so it is a tautology.
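The truth-table test for a tautology is mechanical, so it is easy to automate. The sketch below (the helper name `is_tautology` is our own) enumerates all 2^n assignments exactly as the table above does:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """A formula is a tautology iff it is true under all 2**num_vars assignments."""
    return all(formula(*values)
               for values in product([False, True], repeat=num_vars))

# (P V Q) V ~P from the table above: true in every row.
print(is_tautology(lambda p, q: (p or q) or (not p), 2))   # True
# P ^ Q is not a tautology: it fails whenever either variable is false.
print(is_tautology(lambda p, q: p and q, 2))               # False
```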
Implication:
• If P and Q are any two propositions then P->Q or If P then Q
• P->Q is proposition whose truth value is false only when P is true and Q is false.
• P->Q
• Here P is antecedent and Q is Consequent.
P Q P→Q
F F T
F T T
T F F
T T T
Example:
• P: Today is Sunday
• Q: It is a holiday
• P → Q: If today is Sunday, then it is a holiday.
A well-formed formula (wff) of predicate calculus is obtained by using the following rules.
1. An atomic formula is a wff.
2. If A is a wff, then ¬A is also a wff.
3. If A and B are wffs, then (A ∨ B), (A ∧ B), (A → B) and (A ↔ B) are wffs.
4. If A is a wff and x is any variable, then (∀x)A and (∃x)A are wffs.
5. Only those formulas obtained by using (1) to (4) are wffs.
Since we will be concerned only with wffs, we shall use the term formulas for wffs. We shall
follow the same conventions regarding the use of parentheses as was done in the case of statement
formulas.
For example, "The capital of Virginia is Richmond." is a specific proposition. Hence it is a wff
by Rule 1.
Let B be a predicate name representing "being blue" and let x be a variable. Then B(x) is an
atomic formula meaning "x is blue". Thus it is a wff by Rule 1 above. By applying Rule 4 to
B(x), ∀x B(x) is a wff and so is ∃x B(x). Then by applying Rule 3 to them, ∀x B(x) → ∃x B(x) is
seen to be a wff. Similarly, if R is a predicate name representing "being round", then R(x) is an
atomic formula; hence it is a wff. By applying Rule 3 to B(x) and R(x), a wff B(x) → R(x) is
obtained. In this manner, larger and more complex wffs can be constructed following the rules
given above. Note, however, that strings that cannot be constructed by using those rules are not
wffs. For example, ∀x B(x)R(x) and B(∃x) are NOT wffs, nor are B(R(x)) and B(∃x R(x)).
More examples: To express the fact that Tom is taller than John, we can use the atomic formula
taller(Tom, John), which is a wff. This wff can also be part of some compound statement such
as taller(Tom, John) ∧ ¬taller(John, Tom), which is also a wff.
If x is a variable representing people in the world, then taller(x, Tom), ∀x taller(x, Tom),
∃x taller(x, Tom), and ∃x ∀y taller(x, y) are all wffs, among others. However, taller(∃x, John) and
taller(Tom Mary, Jim), for example, are NOT wffs.
Logical Equivalence
Two formulas A and B are said to be equivalent to each other if and only if A ↔ B
is a tautology. If A ↔ B is a tautology, we write A ⇔ B, which is read as 'A is
equivalent to B'.
A ↔ B is a tautology if and only if the truth tables of A and B are the same. The equivalence
relation is symmetric and transitive.
(or)
If P and Q are two propositional functions, P is equivalent to Q if and only if they have the
same truth table.
Ex: P→Q ⇔ ~P ∨ Q.
Method I. Truth Table Method: One method to determine whether any two statement formulas
are equivalent is to construct their truth tables.
P Q P→Q ¬P ¬P∨Q
T T T F T
T F F F F
F T T T T
F F T T T
In the above table, P → Q and ¬P∨Q have the same truth values in every row, so P → Q ⇔ ¬P∨Q.
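Method I can likewise be delegated to a short program: two formulas are equivalent exactly when their truth tables agree row by row. A sketch (the helper name `equivalent` is ours):

```python
from itertools import product

def equivalent(f, g, num_vars):
    """Two formulas are equivalent iff they agree on every truth assignment."""
    return all(f(*v) == g(*v)
               for v in product([False, True], repeat=num_vars))

# P -> Q written two different ways: as ~(P ^ ~Q) and as ~P V Q.
p_implies_q = lambda p, q: not (p and not q)
not_p_or_q = lambda p, q: (not p) or q
print(equivalent(p_implies_q, not_p_or_q, 2))  # True
```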
Equivalence Formulas:
1. Idempotent laws:
(a) P ∨ P ⇔ P (b) P ∧ P ⇔ P
2. Associative laws:
(a) (P ∨ Q) ∨ R ⇔ P ∨ (Q ∨ R) (b) (P ∧ Q) ∧ R ⇔ P ∧ (Q ∧ R)
3. Commutative laws:
(a) P ∨ Q ⇔ Q ∨ P (b) P ∧ Q ⇔ Q ∧ P
4. Distributive laws:
P∨(Q∧R)⇔(P∨Q)∧(P∨R) P∧(Q∨R)⇔(P∧Q)∨(P∧R)
6. Complement laws:
(a) P ∨ ¬P ⇔ T (b) P ∧ ¬P ⇔ F
7. Absorption laws:
(a) P ∨ (P ∧ Q) ⇔ P (b) P ∧ (P ∨ Q) ⇔ P
8. De Morgan's laws:
(a) ¬(P ∨ Q) ⇔ ¬P ∧ ¬Q (b) ¬(P ∧ Q) ⇔ ¬P ∨ ¬Q
Tautological Implications.
A statement formula A is said to tautologically imply a statement formula B, written A ⇒ B, if
and only if A → B is a tautology.
Clearly A ⇒ B guarantees that B has the truth value T whenever A has the truth value T. One can
determine whether A ⇒ B by constructing the truth tables of A and B in the same manner as was
done in the determination of A ⇔ B.
Example: Show that (P → Q) ⇒ (¬Q → ¬P).
P Q ¬P ¬Q P→Q ¬Q→¬P (P→Q)→(¬Q→¬P)
T T F F T T T
T F F T F F T
F T T F T T T
F F T T T T T
Since all the entries in the last column are true, (P → Q) → (¬Q → ¬P ) is a
tautology.
So that (P → Q) ⇒ (¬Q → ¬P ).
In order to show any of the given implications, it is sufficient to show that an assignment of the
truth value T to the antecedent of the corresponding conditional leads to the truth value T for the
consequent. This procedure guarantees that the conditional becomes a tautology, thereby proving
the implication.
Example: Show that ¬Q ∧ (P → Q) ⇒ ¬P.
Solution: Assume that the antecedent ¬Q ∧ (P → Q) has the truth value T. Then both ¬Q and
P → Q have the truth value T, which means that Q has the truth value F while P → Q has the truth
value T. Hence P must have the truth value F. Therefore the consequent ¬P must have the truth
value T. Thus ¬Q∧(P→Q) ⇒ ¬P.
Another method to show A ⇒ B is to assume that the consequent B has the truth value F and
then show that this assumption leads to A having the truth value F. Then A → B must have the
truth value T.
Example: Show that ¬(P → Q) ⇒ P.
Solution: Assume that the consequent P has the truth value F. When P has the value F, P → Q has
the value T, and then ¬(P → Q) has the value F. Hence ¬(P → Q) → P has the truth value T,
so that ¬(P→Q) ⇒ P.
Normal Forms
If a given statement formula A(p1, p2, ...pn) involves n atomic variables, we have 2n possible
combinations of truth values of statements replacing the variables.
The formula A is a tautology if A has the truth value T for all possible assignments of the truth
values to the variables p1, p2, ...pn and A is called a contradiction if A has the truth value F for all
possible assignments of the truth values of the n variables. A is said to be satisable if A has the
truth value T for atleast one combination of truth values assigned to p1, p2, ...pn.
The construction of a truth table involves a finite number of steps, but the construction may not be
practical. We therefore reduce the given statement formula to a normal form and find whether the
given statement formula is a tautology, a contradiction, or at least satisfiable. It will be
convenient to use the word "product" in place of conjunction and "sum" in place of disjunction.
A product of the variables and their negations in a formula is called an elementary product.
Similarly, a sum of the variables and their negations in a formula is called an elementary sum.
Let P and Q be any atomic variables. Then P, ~P∧Q, ¬Q∧P∧~P, and Q∧~P are some examples of
elementary products. On the other hand, P, ~P∨Q, ¬Q∨P∨~P, and Q∨~P are some examples of
elementary sums.
Types of Normal forms
1) Disjunctive Normal forms
2) Conjunctive Normal forms.
Disjunctive Normal Form (DNF)
A formula which is equivalent to a given formula and which consists of a sum of elementary
products is called a disjunctive normal form of the given formula.
Example: Obtain a disjunctive normal form of
(a) P ∧ (P → Q)
P ∧ (P → Q) ⇔ P ∧ (¬P ∨ Q)
⇔ (P∧¬P)∨(P∧Q)
Conjunctive Normal Form (CNF)
A formula which is equivalent to a given formula and which consists of a product of elementary
sums is called a conjunctive normal form of the given formula. The method for obtaining the
conjunctive normal form of a given formula is similar to the one given for the disjunctive normal
form. Again, the conjunctive normal form is not unique.
(b) Obtain a conjunctive normal form of ¬(P ∨ Q) ↔ (P ∧ Q)
¬(P∨Q)↔(P∧Q) ⇔ (¬(P∨Q)→(P∧Q))∧((P∧Q)→¬(P∨Q))
⇔ ((P∨Q)∨(P∧Q))∧(¬(P∧Q)∨¬(P∨Q))
⇔ [(P∨Q∨P)∧(P∨Q∨Q)]∧[(¬P∨¬Q)∨(¬P∧¬Q)]
⇔ (P∨Q∨P)∧(P∨Q∨Q)∧(¬P∨¬Q∨¬P)∧(¬P∨¬Q∨¬Q)
Principal Disjunctive Normal Form
In this section, we will discuss the concept of principal disjunctive normal form (PDNF).
Minterm: For a given number of variables, a minterm is a conjunction in which each
statement variable or its negation, but not both, appears exactly once.
Let P and Q be two statement variables. Then there are 2² = 4 minterms, given by
P ∧ Q, P ∧ ¬Q, ¬P ∧ Q, and ¬P ∧ ¬Q.
The truth tables of these minterms are:
P Q P∧Q P∧¬Q ¬P∧Q ¬P∧¬Q
T T T F F F
T F F T F F
F T F F T F
F F F F F T
No two minterms are equivalent, and each minterm has the truth value T for exactly one
combination of the truth values of the variables P and Q.
PDNF
For a given formula, an equivalent formula consisting of disjunctions of minterms only is known
as its principal disjunctive normal form (also called the sum-of-products canonical form). To
obtain the PDNF of a given formula from its truth table:
(i). Construct the truth table of the given formula.
(ii). For every truth value T in the truth table of the given formula, select the minterm which
also has the value T for the same combination of the truth values of P and Q.
(iii). The disjunction of these minterms will then be equivalent to the given formula.
Example: Obtain the PDNF of P → Q.
P Q P→Q Minterm
T T T P∧Q
T F F P∧¬Q
F T T ¬P∧Q
F F T ¬P∧¬Q
∴ P → Q ⇔ (P∧Q) ∨ (¬P∧Q) ∨ (¬P∧¬Q)
The truth-table method applied to (P∧Q)∨(¬P∧R)∨(Q∧R) (worked algebraically as example (b)
below) gives:
P Q R Minterm P∧Q ¬P∧R Q∧R Formula
T T T P∧Q∧R T F T T
T T F P∧Q∧¬R T F F T
T F T P∧¬Q∧R F F F F
T F F P∧¬Q∧¬R F F F F
F T T ¬P∧Q∧R F T T T
F T F ¬P∧Q∧¬R F F F F
F F T ¬P∧¬Q∧R F T F T
F F F ¬P∧¬Q∧¬R F F F F
The principal disjunctive normal form of a given formula can also be constructed without a truth
table, as follows:
(1). First, the conditionals (→) and biconditionals (↔), if any, are replaced using the
equivalences involving only ∧, ∨ and ¬.
(2). Next, negations are applied to the variables by De Morgan's laws, followed by the
application of distributive laws.
(3). Any elementary product which is a contradiction is dropped. Minterms are obtained in the
disjunctions by introducing the missing factors. Identical minterms appearing in the disjunctions
are deleted.
Example: Obtain the principal disjunctive normal forms of
(a) ¬P ∨ Q
(b) (P ∧ Q) ∨ (¬P ∧ R) ∨ (Q ∧ R).
(a) ¬P∨ Q
¬P∨Q ⇔ (¬P∧T)∨(Q∧T)
⇔ (¬P∧(Q∨¬Q))∨(Q∧(P∨¬P)) [∵P∨¬P⇔T]
⇔ (¬P∧Q)∨(¬P∧¬Q)∨(Q∧P)∨(Q∧¬P)
⇔ (¬P∧Q)∨(¬P∧¬Q)∨(P∧Q) [∵ Q∧¬P ⇔ ¬P∧Q and P∨P ⇔ P]
(b) (P ∧ Q) ∨ (¬P ∧ R) ∨ (Q ∧ R)
(P ∧ Q) ∨ (¬P ∧ R) ∨ (Q ∧ R) ⇔ (P∧Q∧T)∨(¬P∧R∧T)∨(Q∧R∧T)
⇔ (P∧Q∧(R∨¬R))∨(¬P∧R∧(Q∨¬Q))∨(Q∧R∧(P∨¬P))
⇔ (P∧Q∧R)∨(P∧Q∧¬R)∨(¬P∧R∧Q) ∨ (¬P∧R∧¬Q) ∨
(Q∧R∧P)∨(Q∧R∧¬P)
⇔ (P∧Q∧R)∨(P∧Q∧¬R)∨(¬P∧Q∧R)∨(¬P∧¬Q∧R)
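The truth-table route to a PDNF can be sketched in code: collect the minterm of every row on which the formula is true. The helper below is illustrative (the names are ours); applied to example (b), it reproduces the four minterms derived above.

```python
from itertools import product

def pdnf_minterms(formula, names):
    """Return the minterms of the rows on which the formula is true."""
    terms = []
    for values in product([True, False], repeat=len(names)):
        if formula(*values):
            # In a minterm, a variable appears negated where it is false.
            terms.append(" ∧ ".join(n if v else "¬" + n
                                    for n, v in zip(names, values)))
    return terms

f = lambda p, q, r: (p and q) or ((not p) and r) or (q and r)
for term in pdnf_minterms(f, ["P", "Q", "R"]):
    print(term)
```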
Principal Conjunctive Normal Form
The dual of a minterm is called a Maxterm. For a given number of variables, the maxterm
consists of disjunctions in which each variable or its negation, but not both, appears only once.
Each of the maxterm has the truth value F for exactly one combination of the truth values of the
variables. Now we define the principal conjunctive normal form.
For a given formula, an equivalent formula consisting of conjunctions of the max-terms only is
known as its principle conjunctive normal form. This normal form is also called the product-of-
sums canonical form.The method for obtaining the PCNF for a given formula is similar to the
one described previously for PDNF.
Example: Obtain the principal conjunctive normal form of (¬P→R)∧(Q↔P).
(¬P→R)∧(Q↔P) ⇔ [¬(¬P)∨R] ∧ [(Q→P)∧(P→Q)]
⇔ (P∨R) ∧ [(¬Q∨P)∧(¬P∨Q)]
⇔ (P∨R∨F) ∧ (¬Q∨P∨F) ∧ (¬P∨Q∨F)
⇔ [(P∨R)∨(Q∧¬Q)] ∧ [(¬Q∨P)∨(R∧¬R)] ∧ [(¬P∨Q)∨(R∧¬R)]
⇔ (P∨R∨Q)∧(P∨R∨¬Q)∧(P∨¬Q∨R)∧(P∨¬Q∨¬R)∧(¬P∨Q∨R)∧(¬P∨Q∨¬R)
Note that (P∨R∨¬Q) and (P∨¬Q∨R) are the same maxterm; deleting the repetition leaves five
distinct maxterms in the PCNF.
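Dually, the PCNF collects one maxterm per row on which the formula is false. A small sketch (the helper name is ours) confirms that (¬P→R)∧(Q↔P) is false on exactly five of the eight rows, matching the distinct maxterms in the product above once the repeated clause is deleted:

```python
from itertools import product

def pcnf_maxterms(formula, names):
    """Return the maxterms of the rows on which the formula is false."""
    terms = []
    for values in product([True, False], repeat=len(names)):
        if not formula(*values):
            # In a maxterm, a variable appears negated where it is true.
            terms.append(" ∨ ".join("¬" + n if v else n
                                    for n, v in zip(names, values)))
    return terms

# (¬P → R) ∧ (Q ↔ P): ¬P → R is P ∨ R, and Q ↔ P is q == p.
f = lambda p, q, r: (p or r) and (q == p)
maxterms = pcnf_maxterms(f, ["P", "Q", "R"])
print(len(maxterms))            # 5
print("P ∨ Q ∨ R" in maxterms)  # True
```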
Unit-II
Syllabus:
Predicates: Rules of inference, Consistency, Predicate calculus: Free and bounded variable,
Quantifiers: Universal Quantifiers, Existential Quantifiers.
Rules of inference:
E1 ¬¬P ⇔ P
E2 P ∧ Q ⇔ Q ∧ P } Commutative laws
E3 P ∨ Q ⇔ Q ∨ P
E4 (P ∧ Q) ∧ R ⇔ P ∧ (Q ∧ R) } Associative laws
E5 (P ∨ Q) ∨ R ⇔ P ∨ (Q ∨ R)
E6 P ∧ (Q ∨ R) ⇔ (P ∧ Q) ∨ (P ∧ R) } Distributive laws
E7 P ∨ (Q ∧ R) ⇔ (P ∨ Q) ∧ (P ∨ R)
E8 ¬(P ∧ Q) ⇔ ¬P ∨ ¬Q
E9 ¬(P ∨ Q) ⇔ ¬P ∧ ¬Q } De Morgan's laws
E10 P ∨ P ⇔ P
E11 P ∧ P ⇔ P
E12 R ∨ (P ∧ ¬P) ⇔ R
E13 R ∧ (P ∨ ¬P) ⇔ R
E14 R ∨ (P ∨ ¬P) ⇔ T
E15 R ∧ (P ∧ ¬P) ⇔ F
E16 P → Q ⇔ ¬P ∨ Q
E17 ¬(P → Q) ⇔ P ∧ ¬Q
E18 P → Q ⇔ ¬Q → ¬P
E19 P → (Q → R) ⇔ (P ∧ Q) → R
E20 ¬(P ↔ Q) ⇔ P ↔ ¬Q
E21 P ↔ Q ⇔ (P → Q) ∧ (Q → P)
E22 (P ↔ Q) ⇔ (P ∧ Q) ∨ (¬P ∧ ¬Q)
Rule CP: If we can derive S from R and a set of premises, then we can derive R → S from the
set of premises alone.
Note. 1. Rule CP follows from the equivalence E19, which states that
(P ∧ R) → S ⇔ P → (R → S).
2. Let P denote the conjunction of the set of premises and let R be any formula.
The above equivalence states that if R is included as an additional premise and
S is derived from P ∧ R, then R → S can be derived from the premises P alone.
3. Rule CP is also called the deduction theorem and is generally used if the
conclusion is of the form R → S. In such cases, R is taken as an additional
premise and S is derived from the given premises and R.
Example: Show that ¬P ∨ Q, ¬Q ∨ R, R → S ⇒ P → S.
{1} (1) ¬P ∨ Q P
{2} (2) P P, assumed premise
{1, 2} (3) Q T, (1), (2) and I11
{4} (4) ¬Q ∨ R P
{1, 2, 4} (5) R T, (3), (4) and I11
{6} (6) R → S P
{1, 2, 4, 6} (7) S T, (5), (6) and I11
{1, 4, 6} (8) P → S CP
Example 7. "If there was a ball game, then traveling was difficult. If they arrived on time,
then traveling was not difficult. They arrived on time. Therefore, there was no ball game."
Show that these statements constitute a valid argument.
Solution: Let P: there was a ball game; Q: traveling was difficult; R: they arrived on time.
{1} (1) P → Q P
{2} (2) R → ¬Q P
{3} (3) R P
{2, 3} (4) ¬Q T, (2), (3), and I11
{1, 2, 3} (5) ¬P T, (1), (4) and I12
Consistency of premises:
Consistency
A set of formulas H1, H2, …, Hm is said to be consistent if their conjunction has the truth
value T for some assignment of the truth values to the atomic variables appearing in
H1, H2, …, Hm.
Inconsistency
If for every assignment of the truth values to the atomic variables, at least one of the
formulas H1, H2, …, Hm is false, so that their conjunction is identically false, then the formulas
H1, H2, …, Hm are called inconsistent.
Solution.
We introduce ¬¬(P ∧ Q) as an additional premise and show that this additional premise
leads to a contradiction.
{1} (1) ¬¬(P∧Q) P, assumed premise
{1} (2) P∧Q T, (1) and E1
{1} (3) P T, (2) and I1
Example: Show that the premises P → Q, Q → S, R → ¬S and P ∧ R are inconsistent.
Solution.
P: Jack misses many classes.
Q: Jack fails high school.
R: Jack reads a lot of books.
S: Jack is uneducated.
The premises are P→Q, Q→S, R→¬S and P∧R.
{1} (1) P→Q P
{2} (2) Q→S P
{1, 2} (3) P→S T, (1), (2) and I13
{4} (4) R→¬S P
{4} (5) S→¬R T, (4), and E18
{1, 2, 4} (6) P→¬R T, (3), (5) and I13
{1, 2, 4} (7) ¬P∨¬R T, (6) and E16
{1, 2, 4} (8) ¬(P∧R) T, (7) and E8
{9} (9) P∧R P
{1, 2, 4, 9} (10) (P∧R) ∧ ¬(P∧R) T, (8), (9) and I9
Thus the premises lead to a contradiction, so they are inconsistent.
The rules above can be summed up in the following table. The "Tautology" column shows how
to interpret the notation of a given rule.
Rule of inference — Tautology
Addition: P ⇒ (P ∨ Q)
Simplification: (P ∧ Q) ⇒ P
Conjunction: P, Q ⇒ (P ∧ Q)
Modus ponens: (P ∧ (P → Q)) ⇒ Q
Modus tollens: (¬Q ∧ (P → Q)) ⇒ ¬P
Hypothetical syllogism: ((P → Q) ∧ (Q → R)) ⇒ (P → R)
Disjunctive syllogism: ((P ∨ Q) ∧ ¬P) ⇒ Q
Resolution: ((P ∨ Q) ∧ (¬P ∨ R)) ⇒ (Q ∨ R)
Proof by contradiction:
The "Proof by Contradiction" is also known as reductio ad absurdum, Latin for "reduction to
the absurd". The method has two steps:
1. Assume the negation of the proposition you are trying to prove.
2. Based on that assumption, reach two conclusions that contradict each other.
This is based on a classical formal logic construction known as Modus Tollens: if P implies Q
and Q is false, then P is false. In this case, Q is a proposition of the form (R and not R), which is
always false. P is the negation of the fact that we are trying to prove, and if the negation is not
true then the original proposition must have been true. If computers are not "not stupid" then
they are stupid. (I hear that "stupid computer!" phrase a lot around here.)
Example:
Let's prove that there is no largest prime number (this is the idea of Euclid's original
proof). Prime numbers are integers with no exact integer divisors except 1 and themselves.
Step 1. Assume, to the contrary, that there is a largest prime number; call it P.
Step 2. Let N be one more than the product of all the primes up to and including P.
Step 3. Dividing N by any of those primes leaves remainder 1, so no prime ≤ P divides N.
Step 4. N is not prime, since N > P and P is the largest prime by assumption.
Step 5. N is prime, since by Step 3 it has no prime factor smaller than itself.
We have reached a contradiction (N is not prime by Step 4, and N is prime by Step 5), and
therefore our original assumption that there is a largest prime must be false.
Note: The conclusion in Step 5 makes implicit use of one other important theorem: The
Fundamental Theorem of Arithmetic: Every integer can be uniquely represented as the product
of primes. So if N had a composite (i.e. non-prime) factor, that factor would itself have prime
factors which would also be factors of N.
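The construction in the proof can be played out numerically. The sketch below multiplies the first six primes and adds 1; the result, 30031, is not divisible by any of them, and its smallest prime factor (59) is a prime not on the list:

```python
def smallest_prime_factor(n):
    """Trial division; the smallest divisor > 1 of n is necessarily prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no smaller divisor found: n itself is prime

primes = [2, 3, 5, 7, 11, 13]
n = 1
for p in primes:
    n *= p
n += 1  # one more than the product, as in the proof

assert all(n % p == 1 for p in primes)  # none of the listed primes divides n
print(n, smallest_prime_factor(n))      # 30031 59
```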
Predicate logic:
Let's now turn to a rather important topic: the distinction between free variables and bound
variables. Consider, for instance, a formula of the shape P(x) → ∀x ∃y Q(x, y).
The first occurrence of x is free, whereas the second and third occurrences of x are bound,
namely by the first occurrence of a quantifier, ∀. The first and second occurrences of the
variable y are also bound, namely by the second occurrence of a quantifier, ∃.
Informally, the concept of a bound variable can be explained as follows. Recall that
quantifications are generally of the form ∀x A or ∃x A, where x may be any variable.
Generally, all occurrences of this variable within the quantification are bound.
Quantifiers
The variable of predicates is quantified by quantifiers. There are two types of quantifier in
predicate logic − Universal Quantifier and Existential Quantifier.
Universal Quantifier
A universal quantifier states that the statements within its scope are true for every value of the
specific variable. It is denoted by the symbol ∀.
∀x P(x) is read as: for all values of x, P(x) is true.
Existential Quantifier
An existential quantifier states that the statements within its scope are true for some values of the
specific variable. It is denoted by the symbol ∃.
∃x P(x) is read as: for some value of x, P(x) is true.
Example − "Some people are dishonest" can be transformed into the propositional form
∃x P(x), where P(x) is the predicate denoting "x is dishonest" and the universe of discourse is
some people.
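Over a finite universe of discourse, the two quantifiers correspond directly to Python's built-ins `all` (for ∀) and `any` (for ∃); the predicates below are illustrative stand-ins of our own:

```python
universe = range(1, 6)          # a small finite universe: {1, 2, 3, 4, 5}

P = lambda x: x > 0             # P(x): "x is positive"
Q = lambda x: x % 2 == 0        # Q(x): "x is even"

forall_P = all(P(x) for x in universe)   # ∀x P(x): every element is positive
exists_Q = any(Q(x) for x in universe)   # ∃x Q(x): some element is even
forall_Q = all(Q(x) for x in universe)   # ∀x Q(x): fails, since 1 is odd

print(forall_P, exists_Q, forall_Q)      # True True False
```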
UNIT-III
Relations
Syllabus
Relations: Relations, Properties of binary Relations, Types of relations: equivalence,
compatibility and partial ordering relations, Hasse diagram. Lattices and its properties.
Introduction
The elements of a set may be related to one another. For example, in the set of natural
numbers there is the 'less than' relation between the elements. The elements of one set may also
be related to the elements of another set.
Binary Relation
A binary relation between two sets A and B is a rule R which decides, for any elements, whether
a is in relation R to b. If so, we write a R b. If a is not in relation R to b, then we write a R̸ b.
We can also consider a R b as the ordered pair (a, b), in which case we can define a binary
relation from A to B as a subset of A × B. This subset is denoted by the relation R.
For example, the relation of father to his child is F = {(a, b) / a is the father of b}.
In this relation F, the first member is the name of the father and the second is the name of the
child.
The definition of relation permits any set of ordered pairs to define a relation.
The domain D of a binary relation S is the set of all first elements of the ordered pairs in the
relation, i.e., D(S) = {a / ∃b for which (a, b) ∈ S}. The range R of a binary relation S is the set
of all second elements of the ordered pairs in the relation, i.e., R(S) = {b / ∃a for which (a, b) ∈ S}.
For example, for the relation S = {(1, 2), (3, a), (b, a), (b, Joe)},
D(S) = {1, 3, b} and
R(S) = {2, a, Joe}
Let X and Y be any two sets. A subset of the Cartesian product X × Y defines a relation, say C.
For any such relation C, we have D(C) ⊆ X and R(C) ⊆ Y, and the relation C is said to be from X
to Y. If Y = X, then C is said to be a relation from X to X. In such a case, C is called a relation in
X. Thus any relation in X is a subset of X × X. The set X × X is called the universal relation in X,
while the empty set, which is also a subset of X × X, is called the void relation in X.
For example, let L denote the relation "less than or equal to" and D denote the relation
"divides", where x D y means "x divides y". Both L and D are defined on the set {1, 2, 3, 4}.
L = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 3), (2, 4), (3, 3), (3, 4), (4, 4)}
D = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}
L ∩ D = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)} = D
Reflexive: A relation R in a set X is reflexive if x R x for every x ∈ X.
Symmetric: A relation R in a set X is symmetric if, for x, y ∈ X, whenever x R y then y R x.
Transitive: A relation R in a set X is transitive if, for x, y, z ∈ X, whenever x R y and y R z then
x R z.
The relations <, ≤, >, ≥ and = are transitive in the set of real numbers.
The relations ⊆, ⊂, ⊇, ⊃ and equality are also transitive in the family of sets.
The relation of similarity in the set of triangles in a plane is transitive.
Equivalence Relation: A relation R in a set X is an equivalence relation if it is reflexive,
symmetric and transitive.
For example
Problem 1: Let us consider the set T of triangles in a plane, and define a relation R in T as
R = {(a, b) / a, b ∈ T and a is similar to b}.
We have to show that the relation R is an equivalence relation.
Solution:
Every triangle is similar to itself, so a R a for all a ∈ T; R is reflexive.
If the triangle a is similar to the triangle b, then the triangle b is similar to the triangle a, so
a R b ⇒ b R a; R is symmetric.
If a is similar to b and b is similar to c, then a is similar to c, so R is transitive.
Hence R is an equivalence relation.
Problem 2: Let Z be the set of integers and R = {(a, b) / a, b ∈ Z and a − b is divisible by 3}.
For transitivity: for any a, b, c ∈ Z, if a R b and b R c, then a − b is divisible by 3 and
b − c is divisible by 3, so that (a − b) + (b − c) is also divisible by 3;
hence a − c is also divisible by 3. Thus R is transitive.
Hence R is an equivalence relation.
Problem 3: Let Z be the set of all integers and let m be a fixed integer. Two integers a and b are
said to be congruent modulo m if and only if m divides a − b, in which case we write a ≡ b (mod
m). This relation is called the relation of congruence modulo m, and we can show that it is an
equivalence relation.
Solution:
Let a R b, so that m divides a − b. Then
m divides b − a
b ≡ a (mod m)
b R a
that is, R is symmetric.
Equivalence Classes:
Let R be an equivalence relation on a set A. For any a ∈ A, the equivalence class generated by a
is the set of all elements b ∈ A such that a R b, and it is denoted by [a]. It is also called the
R-equivalence class of a ∈ A.
i.e., [a] = {b ∈ A / b R a}
Let Z be the set of integers and R be the relation called "congruence modulo 3",
defined by R = {(x, y) / x ∈ Z ∧ y ∈ Z ∧ (x − y) is divisible by 3}.
Then the equivalence classes are
[0] = {…, -6, -3, 0, 3, 6, …}
[1] = {…, -5, -2, 1, 4, 7, …}
[2] = {…, -4, -1, 2, 5, 8, …}
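The three classes can be recovered mechanically by grouping a finite sample of integers on their remainder modulo 3 (Python's % always yields a remainder in {0, 1, 2} here, even for negative numbers):

```python
# Group a finite sample of Z into classes under congruence modulo 3.
classes = {}
for x in range(-6, 9):
    classes.setdefault(x % 3, []).append(x)

print(classes[0])  # [-6, -3, 0, 3, 6]
print(classes[1])  # [-5, -2, 1, 4, 7]
print(classes[2])  # [-4, -1, 2, 5, 8]
```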
Converse of a binary relation:
Definition
Let R be a relation from X to Y. The relation Ř from Y to X is called the converse of R,
where the ordered pairs of Ř are obtained by interchanging the members in each of the ordered
pairs of R. This means, for x ∈ X and y ∈ Y, that x R y ⇔ y Ř x.
Thus the relation Ř given by Ř = {(y, x) / (x, y) ∈ R} is called the converse of R.
Example:
Let R = {(1, 2), (3, 4), (2, 2)}
Then Ř = {(2, 1), (4, 3), (2, 2)}
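The converse can be computed with a one-line set comprehension (the helper name is our own):

```python
def converse(relation):
    """Swap the components of every ordered pair in the relation."""
    return {(b, a) for (a, b) in relation}

R = {(1, 2), (3, 4), (2, 2)}
print(converse(R) == {(2, 1), (4, 3), (2, 2)})  # True
```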
Definition
A binary relation R in a set P is called a partial order relation, or a partial ordering in
P, if R is reflexive, antisymmetric, and transitive, i.e.,
aRa for all a ∈ P
aRb and bRa ⇒ a = b
aRb and bRc ⇒ aRc
A set P together with a partial ordering R is called a partially ordered set, or poset. The relation R
is often denoted by the symbol ≤, which is different from the usual "less than or equal to" symbol.
Thus, if ≤ is a partial order in P, then the ordered pair (P, ≤) is called a poset.
Example: Show that the relation "greater than or equal to" is a partial ordering on the set of
integers.
Solution: Let Z be the set of all integers and let R be the relation ≥.
(i). Since a ≥ a for every integer a, the relation ≥ is reflexive.
(ii). Let a and b be any two integers such that aRb and bRa. Then a ≥ b and b ≥ a
⇒ a = b.
∴ The relation ≥ is antisymmetric.
(iii). Let a, b and c be any three integers such that aRb and bRc. Then a ≥ b and b ≥ c
⇒ a ≥ c.
∴ The relation ≥ is transitive.
Since the relation ≥ is reflexive, antisymmetric and transitive, ≥ is a partial ordering on the set of
integers. Therefore, (Z, ≥) is a poset.
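On a finite set, all three partial-order properties can be checked by brute force. The sketch below (the helper name is ours) confirms that ≥ is a partial order on a finite slice of Z, while ≠ is not:

```python
from itertools import product

def is_partial_order(rel, elements):
    """Check reflexivity, antisymmetry and transitivity exhaustively."""
    reflexive = all(rel(a, a) for a in elements)
    antisymmetric = all(a == b
                        for a, b in product(elements, repeat=2)
                        if rel(a, b) and rel(b, a))
    transitive = all(rel(a, c)
                     for a, b, c in product(elements, repeat=3)
                     if rel(a, b) and rel(b, c))
    return reflexive and antisymmetric and transitive

elements = range(-3, 4)
print(is_partial_order(lambda a, b: a >= b, elements))  # True
print(is_partial_order(lambda a, b: a != b, elements))  # False: not reflexive
```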
Hasse Diagram:
A Hasse diagram is a digraph for a poset which does not have the loops and the arcs implied by
transitivity.
Example 10: For the relation {<a, a>, <a, b>, <a, c>, <b, b>, <b, c>, <c, c>} on the set {a, b, c},
the Hasse diagram has the arcs {<a, b>, <b, c>} as shown below.
Ex: Let A be a given finite set and ρ(A) its power set. Let ⊆ be the subset relation on the elements
of ρ(A). Draw the Hasse diagram of (ρ(A), ⊆) for A = {a, b, c}.
Functions:
Introduction
A function is a special type of relation. It may be considered as a relation in which each
element of the domain belongs to only one ordered pair in the relation. Thus a function f from A
to B is a subset of A × B having the property that for each a ∈ A, there is one and only one b ∈ B
such that (a, b) ∈ f.
Definition
Let A and B be any two sets. A relation f from A to B is called a function if for every a ∈ A
there is a unique b ∈ B such that (a, b) ∈ f.
Note that the definition of function requires that a relation must satisfy two additional
conditions in order to qualify as a function.
The first condition is that every a ∈ A must be related to some b ∈ B, i.e., the domain of f must
be A and not merely a subset of A. The second requirement of uniqueness can be expressed as
(a, b) ∈ f ∧ (a, c) ∈ f ⇒ b = c.
Intuitively, a function from a set A to a set B is a rule which assigns to every element of A a
unique element of B. If a ∈ A, then the unique element of B assigned to a under f is denoted by
f(a). The usual notation for a function f from A to B is f: A → B defined by a → f(a), where
a ∈ A; f(a) is called the image of a under f, and a is called a pre-image of f(a).
Definition: A mapping f: A → B is called a constant mapping if, for all a ∈ A, f(a) = b,
a fixed element of B.
For example, f: Z → Z given by f(x) = 0 for all x ∈ Z is a constant mapping.
Definition
A mapping f: A → A is called the identity mapping of A if f(a) = a
for all a ∈ A. Usually it is denoted by IA or simply I.
Composition of functions:
If f: A → B and g: B → C are two functions, then the composition of functions f and g, denoted
by g ∘ f, is the function g ∘ f: A → C given by
g ∘ f = {(a, c) / a ∈ A ∧ c ∈ C ∧ ∃b ∈ B such that f(a) = b ∧ g(b) = c}
and (g ∘ f)(a) = g(f(a)).
Example 2: Let f(x) = x + 2, g(x) = x − 2 and h(x) = 3x for x ∈ R, where R is the set of
real numbers.
Then f ∘ f = {(x, x+4) / x ∈ R}
f ∘ g = {(x, x) / x ∈ R}
g ∘ f = {(x, x) / x ∈ R}
g ∘ g = {(x, x−4) / x ∈ R}
h ∘ g = {(x, 3x−6) / x ∈ R}
h ∘ f = {(x, 3x+6) / x ∈ R}
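The six compositions in Example 2 can be spot-checked numerically (`compose` is a helper of our own; note that g ∘ f applies f first):

```python
def compose(g, f):
    """Return g o f, the function x -> g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 2
g = lambda x: x - 2
h = lambda x: 3 * x

x = 10
assert compose(f, f)(x) == x + 4       # f o f
assert compose(f, g)(x) == x           # f o g
assert compose(g, f)(x) == x           # g o f
assert compose(g, g)(x) == x - 4       # g o g
assert compose(h, g)(x) == 3 * x - 6   # h o g
assert compose(h, f)(x) == 3 * x + 6   # h o f
print("all six compositions agree at x =", x)
```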
Inverse functions:
Let f: A → B be a one-to-one and onto mapping. Then its inverse, denoted by f⁻¹, is given by
f⁻¹ = {(b, a) / (a, b) ∈ f}. Clearly f⁻¹: B → A is one-to-one and onto.
Theorem: Let f: X → Y and g: Y → Z be two one-to-one and onto functions. Then g ∘ f is also a
one-to-one and onto function.
Proof
Let f: X → Y and g: Y → Z be two one-to-one and onto functions, and let x1, x2 ∈ X with
(g ∘ f)(x1) = (g ∘ f)(x2)
⇒ g(f(x1)) = g(f(x2))
⇒ f(x1) = f(x2), since g is one-to-one
⇒ x1 = x2, since f is one-to-one,
so that g ∘ f is one-to-one.
Theorem: (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹
(i.e., the inverse of a composite function can be expressed in terms of the
composition of the inverses in the reverse order.)
Proof.
f: A → B is one-to-one and onto.
g: B → C is one-to-one and onto.
g ∘ f: A → C is also one-to-one and onto.
⇒ (g ∘ f)⁻¹: C → A is one-to-one and onto.
Let a ∈ A. Then there exists an element b ∈ B such that f(a) = b ⇒ a = f⁻¹(b).
Now b ∈ B ⇒ there exists an element c ∈ C such that g(b) = c ⇒ b = g⁻¹(c).
Then (g ∘ f)(a) = g[f(a)] = g(b) = c ⇒ a = (g ∘ f)⁻¹(c). …….(1)
(f⁻¹ ∘ g⁻¹)(c) = f⁻¹(g⁻¹(c)) = f⁻¹(b) = a ⇒ a = (f⁻¹ ∘ g⁻¹)(c). ….(2)
Combining (1) and (2), we have
(g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹.
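The theorem can be verified concretely on small finite bijections, representing each function as a dict (the particular sets here are a toy choice of our own):

```python
f = {1: "a", 2: "b", 3: "c"}       # f: A -> B, one-to-one and onto
g = {"a": 10, "b": 20, "c": 30}    # g: B -> C, one-to-one and onto

gof = {x: g[f[x]] for x in f}                       # g o f : A -> C
gof_inv = {c: x for x, c in gof.items()}            # (g o f)^-1
f_inv = {b: x for x, b in f.items()}                # f^-1
g_inv = {c: b for b, c in g.items()}                # g^-1
finv_o_ginv = {c: f_inv[g_inv[c]] for c in g_inv}   # f^-1 o g^-1

print(gof_inv == finv_o_ginv)  # True
```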
Introduction:
A lattice is a partially ordered set (L, ≤) in which every pair of elements a, b ∈ L has a greatest
lower bound and a least upper bound.
The GLB of a subset {a, b} ⊆ L will be denoted by a ∗ b and the LUB by a ⊕ b.
Usually, for any pair a, b ∈ L, GLB{a, b} = a ∗ b is called the meet or product, and
LUB{a, b} = a ⊕ b is called the join or sum, of a and b.
Example 1: Consider a non-empty set S and let P(S) be its power set. The relation ⊆,
"contained in", is a partial ordering on P(S). For any two subsets A, B ∈ P(S),
GLB{A, B} and LUB{A, B} are evidently A ∩ B and A ∪ B respectively.
Example 2: Let I+ be the set of positive integers, and let D denote the relation of "division"
in I+, such that for any a, b ∈ I+, a D b iff a divides b. Then (I+, D) is a lattice in which
the join of a and b is given by the least common multiple (LCM) of a and b, that is,
a ⊕ b = LCM of a and b, and the meet of a and b, that is, a ∗ b, is the greatest common divisor
(GCD) of a and b.
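Example 2 is easy to experiment with: in (I+, D) the meet is the gcd and the join is the lcm, and lattice identities such as absorption can be checked on sample values:

```python
from math import gcd

def lcm(a, b):
    """Least common multiple via the identity a*b = gcd(a,b) * lcm(a,b)."""
    return a * b // gcd(a, b)

a, b = 12, 18
print(gcd(a, b), lcm(a, b))  # meet = 6, join = 36

# Absorption laws: a * (a (+) b) = a and a (+) (a * b) = a
assert gcd(a, lcm(a, b)) == a
assert lcm(a, gcd(a, b)) == a
```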
Two lattices can have the same diagram. For example, if S = {1, 2, 3} then (ρ(S), ⊆) and
(S30, D), where S30 is the set of all divisors of 30, have the same diagram, viz. fig(1), but the
nodes are differently labeled.
We observe that for any partial ordering relation ≤ on a set S, the
converse relation ≥ is also a partial ordering relation on S. If (S, ≤) is a lattice
with meet a ∗ b and join a ⊕ b, then (S, ≥) is also a lattice with meet
a ⊕ b and join a ∗ b, i.e., the GLB and LUB get interchanged. Thus we have
the principle of duality of lattices as follows.
Any statement about lattices involving the operations ∧ and ∨ and the relations ≤
and ≥ remains true if ∧, ∨, ≥ and ≤ are replaced by ∨, ∧, ≤ and ≥ respectively.
The operations ∧ and ∨ are called duals of each other, as are the relations ≤ and ≥.
Also, the lattices (L, ≤) and (L, ≥) are called the duals of each other.
Properties of lattices:
Let (L, ≤) be a lattice with the binary operations ∗ and ⊕. Then for any a, b, c ∈ L,
a ∗ b = b ∗ a , a ⊕ b = b ⊕ a (Commutative)
(a ∗ b) ∗ c = a ∗ (b ∗ c) , (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c) (Associative)
a ∗ (a ⊕ b) = a , a ⊕ (a ∗ b) = a (Absorption)
Proof of the first absorption law: for any a ∈ L, a ≤ a and a ≤ LUB{a, b} = a ⊕ b, so
a ≤ GLB{a, a ⊕ b} = a ∗ (a ⊕ b). On the other hand, GLB{a, a ⊕ b} ≤ a, i.e.,
a ∗ (a ⊕ b) ≤ a. Hence a ∗ (a ⊕ b) = a.
Theorem 1
Let (L, ≤) be a lattice in which * and ⊕ denote the operations of meet and join
respectively. For any a, b ∈ L,
a ≤ b ⇔ a * b = a ⇔ a ⊕ b = b
Proof
Let us assume a * b = a.
Now (a * b) ⊕ b = a ⊕ b.
We know that, by the absorption law, (a * b) ⊕ b = b,
so that a ⊕ b = b. Therefore a * b = a ⇒ a ⊕ b = b. (5)
Similarly, we can prove a ⊕ b = b ⇒ a * b = a. (6)
From (5) and (6), we get
a * b = a ⇔ a ⊕ b = b.
Hence the theorem.
Unit-IV:
Algebraic Structures
Syllabus:
Algebraic structures: Algebraic systems with examples and general properties, semi groups and
monoids, groups & its types, Introduction to homomorphism and Isomorphism (Proof of
theorems are not required)
Algebraic systems
N = {1, 2, 3, 4, ...} = the set of all natural numbers.
Binary Operation: An operation * is said to be a binary operation (closed operation) on a
non-empty set A if a * b ∈ A for all a, b ∈ A (closure property).
Ex: The set N is closed with respect to addition and multiplication but
not w.r.t subtraction and division.
Algebraic System: A set A with one or more binary(closed) operations defined on it is called
an algebraic system.
Properties
Associativity: The operation * is associative on A if (a * b) * c = a * (b * c) for all a, b, c in A.
Identity: For an algebraic system (A, *), an element 'e' in A is said to be an identity element of A if
a * e = e * a = a for all a ∈ A.
Note: For an algebraic system (A, *), the identity element, if it exists, is unique.
Inverse: Let (A, *) be an algebraic system with identity 'e'. Let a be an element of A. An
element b is said to be an inverse of a
if a * b = b * a = e.
Semigroups
An algebraic system (A, *) is said to be a semigroup if the following conditions are satisfied.
1) * is a closed operation on A.
2) * is an associative operation on A.
Monoid
An algebraic system (A, *) is said to be a monoid if the following conditions are satisfied.
1) * is a closed operation in A.
2) * is an associative operation in A.
3) There is an identity in A.
Examples
Ex1. (N, ·) is a monoid: we know that the product of two natural numbers is again a natural
number (closure), multiplication is associative, and 1 ∈ N is the identity.
Ex2. Let S be the set of all strings and denote 'concatenation of strings' by +. For any s1, s2 ∈ S,
s1 + s2 ∈ S (closure).
Associativity: Concatenation of strings is associative. Hence (S, +) is a semigroup.
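The string example can be checked directly. The class below is a small illustrative sketch (names are mine, not from the text); it also notes that the empty string is an identity, so (S, +) is in fact a monoid:

```java
public class StringMonoid {
    // Concatenation as the binary operation on the set of strings.
    static String op(String s1, String s2) { return s1 + s2; }

    public static void main(String[] args) {
        String a = "ab", b = "cd", c = "ef";
        // Associativity: (a + b) + c equals a + (b + c)
        System.out.println(op(op(a, b), c).equals(op(a, op(b, c))));
        // The empty string is an identity, so (S, +) is actually a monoid
        System.out.println(op(a, "").equals(a) && op("", a).equals(a));
    }
}
```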
3.2 Groups
Group: An algebraic system (G, *) is said to be a group if the following conditions are satisfied.
1) * is a closed operation.
2) * is an associative operation.
3) There is an identity in G.
4) Every element of G has an inverse in G.
Abelian group (commutative group): A group (G, *) is said to be abelian if
a * b = b * a ∀ a, b ∈ G.
Properties
Order of a group : The number of elements in a group is called order of the group.
Finite group: If the order of a group G is finite, then G is called a finite group.
Ex1. Show that the set Z of all integers is an abelian group with respect to addition.
Solution:
1. Closure property: The sum of two integers is again an integer, so a + b ∈ Z for all a, b ∈ Z.
2. Associativity: Addition of integers is associative.
3. Identity: We have 0 ∈ Z and a + 0 = a for all a ∈ Z.
∴ The identity element exists, and '0' is the identity element.
4. Inverse: For each a ∈ Z, we have –a ∈ Z and
a + (–a) = 0.
∴ Every element of Z has an additive inverse.
5. Commutativity: a + b = b + a for all a, b ∈ Z.
Hence (Z, +) is an abelian group.
Ex2. Show that the set of all non-zero real numbers is a group with respect to multiplication.
Solution:
1. Closure property: We know that the product of two non-zero real numbers is again a non-zero real
number.
2. Associativity: Multiplication of real numbers is associative.
3. Identity: 1 is the identity element, since a · 1 = a for every non-zero real number a.
4. Inverse: For each non-zero real number a, 1/a is a non-zero real number and a · (1/a) = 1.
Hence the set of all non-zero real numbers is a group under multiplication; since multiplication
is commutative, it is an abelian group.
Note: Show that the set of all real numbers R is not a group with respect to multiplication.
Solution: We have 0 ∈ R, and 0 has no multiplicative inverse (there is no b ∈ R with 0 · b = 1).
Hence (R, ·) is not a group.
Example: Let S be a finite set, and let F(S) be the collection of all functions f: S → S under the
operation of composition of functions. Show that F(S) is a monoid.
Solution:
Closure: For any f1, f2 ∈ F(S), the composite f1 ∘ f2 is again a function from S to S, so f1 ∘
f2 ∈ F(S).
Associativity: Composition of functions is associative.
Identity: The identity function on S belongs to F(S) and is the identity for composition.
∴ F(S) is a monoid.
Note: F(S) is not a group, because the inverse of a non-bijective function on S does not exist.
Ex. Show that the set A of all positive rational numbers forms an abelian group
under the composition * defined by
a * b = (ab)/2.
1. Closure property: The product of two positive rational numbers is again a positive
rational number, so a * b = (ab)/2 ∈ A.
2. Associativity: (a * b) * c = (abc)/4 = a * (b * c).
3. Identity: If e is the identity, then a * e = (ae)/2 = a gives e = 2 ∈ A.
4. Inverse: For a ∈ A, solving a * b = 2 gives (ab)/2 = 2,
=> b = (4 / a) ∈ A.
5. Commutativity: a * b = (ab)/2 = (ba)/2 = b * a.
∴ (A, *) is an abelian group.
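The group axioms for this composition can be spot-checked numerically. The sketch below models A with doubles and uses a floating-point tolerance (class and method names are illustrative, not from the text):

```java
public class HalfProductGroup {
    // The composition a * b = ab/2 on the positive rationals, modelled with doubles.
    static double op(double a, double b) { return a * b / 2.0; }
    static final double E = 2.0;                         // identity: a * 2 = a
    static double inverse(double a) { return 4.0 / a; }  // a * (4/a) = 2 = e

    public static void main(String[] args) {
        double a = 3.0, b = 5.0, c = 7.0;
        System.out.println(Math.abs(op(op(a, b), c) - op(a, op(b, c))) < 1e-9); // associative
        System.out.println(Math.abs(op(a, E) - a) < 1e-9);                      // identity
        System.out.println(Math.abs(op(a, inverse(a)) - E) < 1e-9);             // inverse
    }
}
```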
Finite groups
Ex. Show that G = {1, –1} is an abelian group under multiplication. The composition table of G is:
* 1 -1
1 1 -1
-1 -1 1
1. Closure property: Since all the entries of the composition table are elements of the
given set, the set G is closed under multiplication.
2. Associativity: The elements of G are real numbers, and we know that multiplication of real
numbers is associative.
3. Identity: 1 is the identity element.
4. Inverse: From the composition table, we see that the inverse elements of 1 and –1 are 1 and –1 respectively.
5. Commutativity: The corresponding rows and columns of the table are identical.
Therefore the binary operation · is commutative. Hence (G, ·) is an abelian group.
Ex. Show that G = {1, ω, ω²}, the set of cube roots of unity, is an abelian group under multiplication. The composition table of G is:
* 1 ω ω²
1 1 ω ω²
ω ω ω² 1
ω² ω² 1 ω
1. Closure property: Since all the entries of the composition table are elements of the
given set, the set G is closed under multiplication.
2. Associativity: Multiplication of complex numbers is associative.
3. Identity: 1 is the identity element.
4. Inverse: From the composition table, we see that the inverse elements of 1, ω, ω² are 1, ω², ω respectively.
5. Commutativity: The corresponding rows and columns of the table are identical.
Therefore the binary operation · is commutative. Hence G is an abelian group.
Modulo systems
Addition modulo m (+m): For integers a and b,
a +m b = a + b, if a + b < m
       = the remainder of (a + b) on division by m, if a + b ≥ m.
Multiplication modulo p (*p): a *p b = ab, if ab < p
       = the remainder of ab on division by p, if ab ≥ p.
Ex. 3 *5 4 = 2, 5 *5 4 = 0, 2 *5 2 = 4.
Ex. Show that G = {0, 1, 2, 3, 4, 5} is an abelian group under addition modulo 6 (+6). The composition table of G is:
+6 0 1 2 3 4 5
0 0 1 2 3 4 5
1 1 2 3 4 5 0
2 2 3 4 5 0 1
3 3 4 5 0 1 2
4 4 5 0 1 2 3
5 5 0 1 2 3 4
1. Closure property: Since all the entries of the composition table are the elements of the given
set, the set G is closed under +6 .
2. Associativity: The operation +6 is associative; for example, (2 +6 3) +6 4 = 5 +6 4 = 3 and
2 +6 ( 3 +6 4 ) = 2 +6 1 = 3.
3. Identity: Here, the first row of the table coincides with the top row. The element
heading that row, i.e., 0, is the identity element.
4. Inverse: From the composition table, we see that the inverse elements of 0, 1, 2, 3, 4, 5 are 0, 5,
4, 3, 2, 1 respectively.
5.Commutativity: The corresponding rows and columns of the table are identical. Therefore the binary
operation +6 is commutative.
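A small sketch of (G, +6) in code (class and method names are illustrative, not from the text), reproducing the table's associativity example and inverses:

```java
public class Z6Group {
    // Addition modulo 6 on G = {0, 1, 2, 3, 4, 5}.
    static final int M = 6;
    static int add(int a, int b) { return (a + b) % M; }  // a +6 b
    static int inverse(int a) { return (M - a) % M; }      // additive inverse mod 6

    public static void main(String[] args) {
        System.out.println(add(2, add(3, 4))); // 2 +6 (3 +6 4) = 2 +6 1 = 3
        System.out.println(add(add(2, 3), 4)); // (2 +6 3) +6 4 = 5 +6 4 = 3
        System.out.println(inverse(1));        // 5, matching the table
    }
}
```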
Ex. Show that G = {1, 2, 3, 4, 5, 6} is an abelian group under multiplication modulo 7 (*7). The composition table of G is:
*7 1 2 3 4 5 6
1 1 2 3 4 5 6
2 2 4 6 1 3 5
3 3 6 2 5 1 4
4 4 1 5 2 6 3
5 5 3 1 6 4 2
6 6 5 4 3 2 1
1. Closure property: Since all the entries of the composition table are the elements of the given
set, the set G is closed under *7 .
2. Associativity: The operation *7 is associative; for example, (2 *7 3) *7 4 = 6 *7 4 = 3 and
2 *7 ( 3 *7 4 ) = 2 *7 5 = 3.
3. Identity: Here, the first row of the table coincides with the top row. The element
heading that row, i.e., 1, is the identity element.
4. Inverse: From the composition table, we see that the inverse elements of 1, 2, 3, 4, 5, 6 are 1, 4,
5, 2, 3, 6 respectively.
5. Commutativity: The corresponding rows and columns of the table are identical. Therefore the
binary operation *7 is commutative.
Note: In a group of even order there will be at least one element (other than the identity element) which is its
own inverse.
Order of an element: Let (G, *) be a group and let 'a' be an element of G. The smallest positive integer n such that aⁿ = e is called the
order of 'a'. If no such number exists, then the order is infinite.
a) The order of every element of a finite group is finite and is a divisor of the order of the group.
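Orders of elements can be computed by repeated composition. The sketch below does this for the group (Zm, +m) (class and method names are illustrative); note how every order divides the order of the group, as stated in (a):

```java
public class ElementOrder {
    // Order of an element a in (Z_m, +m): smallest positive n with n·a ≡ 0 (mod m).
    static int order(int a, int m) {
        int x = a % m, n = 1;
        while (x != 0) { x = (x + a) % m; n++; }
        return n;
    }

    public static void main(String[] args) {
        // In (Z6, +6): order of 2 is 3, order of 1 is 6 — both divide |G| = 6
        System.out.println(order(2, 6) + " " + order(1, 6));
    }
}
```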
3.3 Subgroups
A non-empty subset H of a group (G, *) is said to be a subgroup of G
if (H, *) is itself a group.
Note: For any group (G, *), ({e}, *) and (G, *) are trivial subgroups.
Ex: For the multiplicative group G = {1, –1, i, –i},
H1 = {1, –1} is a subgroup of G, and
H2 = {1} is a trivial subgroup of G.
Theorem: A non-empty subset H of a group (G, *) is a subgroup of G iff
i) a * b ∈ H ∀ a, b ∈ H, and
ii) a⁻¹ ∈ H ∀ a ∈ H.
Homomorphism: Let (G, *) and (G1, ⊕) be two groups. A mapping f : G → G1 is said to be a homomorphism if
f(a * b) = f(a) ⊕ f(b) ∀ a, b ∈ G.
Isomorphism: If a homomorphism f : G → G1 is a bijection, then f is called an isomorphism
between G and G1.
Then we write G ≅ G1.
Example: Let R be the group of all real numbers under addition and R+ be the group of all positive real
numbers under multiplication. Show that the mapping f : R → R+ defined by
f(x) = 2^x for all x ∈ R is an isomorphism.
Solution: First, let us show that f is a homomorphism. Let
a, b ∈ R.
Now, f(a + b) = 2^(a + b)
= 2^a · 2^b
= f(a) · f(b)
∴ f is a homomorphism.
Next, let us prove that f is a bijection.
Let f(a) = f(b). Then 2^a = 2^b
=> a = b
∴ f is one-to-one.
Next, take any c ∈ R+.
Then log2 c ∈ R and f (log2 c) = 2^(log2 c) = c.
⇒ Every element in R+ has a pre-image in R, i.e.,
f is onto.
∴ f is a bijection.
Hence, f is an isomorphism.
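The homomorphism property of f(x) = 2^x can be spot-checked numerically (class name is illustrative; floating-point comparison uses a tolerance):

```java
public class ExpIsomorphism {
    // f : (R, +) -> (R+, ·), f(x) = 2^x. Homomorphism: f(a + b) = f(a)·f(b).
    static double f(double x) { return Math.pow(2.0, x); }

    public static void main(String[] args) {
        double a = 1.5, b = 2.25;
        System.out.println(Math.abs(f(a + b) - f(a) * f(b)) < 1e-9);
        // Onto: any c in R+ has the pre-image log2(c)
        double c = 10.0;
        System.out.println(Math.abs(f(Math.log(c) / Math.log(2.0)) - c) < 1e-9);
    }
}
```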
Ex. Let R be the group of all real numbers under addition and R+ be the group of all positive real
numbers under multiplication. Show that the mapping f : R+ → R defined by f(x) = log10 x
for all x ∈ R+ is an isomorphism.
Solution: First, let us show that f is a homomorphism. Let
a, b ∈ R+.
Now, f(a·b) = log10 (a·b)
= log10 a + log10 b
= f(a) + f(b)
∴ f is a homomorphism.
Next, let f(a) = f(b). Then log10 a = log10 b
=> a = b
∴ f is one-to-one.
Next, take any c ∈ R.
Then 10^c ∈ R+ and f (10^c) = log10 10^c = c.
⇒ Every element in R has a pre-image in
R+, i.e., f is onto.
∴ f is a bijection.
Hence, f is an isomorphism.
UNIT-V
Graph Theory
Syllabus
Graph Theory: Representation of Graph, DFS, BFS, Spanning Trees, planar Graphs.
Representation of Graphs:
A graph G with m nodes can be represented by its adjacency matrix A: the m × m matrix whose (i, j) entry is 1 if there is an edge from node i to node j and 0 otherwise.
Drawbacks
1. It may be difficult to insert and delete nodes in G.
2. If the number of edges is O(m) or O(m log2 m), then the matrix A will be sparse, hence a
great deal of space will be wasted.
A graph G consists of two things:
1. A set V of elements called nodes (or points or vertices).
2. A set E of edges such that each edge e in E is identified with a unique
(unordered) pair [u, v] of nodes in V.
Paths:
1. Simple path: a path in which all nodes are distinct.
2. Cycle: a closed path in which no node is repeated other than the first and last.
Complete Graph
A graph G is said to be complete if every node u in G is adjacent to every other node v in G.
Tree
A connected graph T without any cycles is called a tree graph or free tree or, simply, a tree.
1. Multiple edges: Distinct edges e and e' are called multiple edges if they connect the same
endpoints, that is, if e = [u, v] and e' = [u, v].
2. Loops: An edge e is called a loop if it has identical endpoints, that is, if e = [u, u].
3. Finite Graph:A multigraph M is said to be finite if it has a finite number of nodes and
a finite number of edges.
Directed Graphs
A directed graph G, also called a digraph, is the same as a multigraph except that each
edge e in G is assigned a direction, or in other words, each edge e is identified with an ordered
pair (u, v) of nodes in G.
Indegree: The indegree of a node or vertex v is the number of edges for which v is the head.
Example
Indegree of 1 = 1
Indegree of 2 = 2
Outdegree: The outdegree of a node or vertex v is the number of edges for which v is the tail.
Example
Outdegree of 1 =1
Outdegree of 2 =2
Simple Directed Graph
A directed graph G is said to be simple if G has no parallel edges. A simple graph G may
have loops, but it cannot have more than one loop at a given node.
Graph Traversal
The breadth first search (BFS) and the depth first search (DFS) are the two algorithms used for
traversing and searching a node in a graph. They can also be used to find out whether a node is
reachable from a given node or not.
The aim of DFS algorithm is to traverse the graph in such a way that it tries to go far from the
root node. Stack is used in the implementation of the depth first search. Let‘s see how depth first
search works with respect to the following graph:
As stated before, in DFS, nodes are visited by going through the depth of the tree from the
starting node. If we do the depth first traversal of the above graph and print the visited node, it
will be ―A B E F C D‖. DFS visits the root node and then its children nodes until it reaches the
end node, i.e. E and F nodes, then moves up to the parent nodes.
Algorithmic Steps
1. Push the starting node onto the stack and mark it as visited.
2. While the stack is not empty, look at the node on top of the stack.
3. If that node has an unvisited adjacent node (child), mark the child as visited and push it onto the stack; otherwise, pop the node off the stack.
4. The traversal is complete when the stack becomes empty.
Based upon the above steps, the following Java code shows the implementation of the
DFS algorithm:
// Fragment of the original listing; getUnvisitedChildNode() and clearNodes() are assumed helpers.
while (!s.isEmpty()) {
    Node child = getUnvisitedChildNode(s.peek());
    if (child != null) {
        child.visited = true;
        s.push(child);
    } else {
        s.pop();
    }
}
// Clear visited property of nodes
clearNodes();
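Since only a fragment of the original listing survives, here is a self-contained sketch of the same iterative, stack-based DFS (class, method and node names are mine, not from the original code). It reproduces the traversal order A B E F C D described above:

```java
import java.util.*;

public class DepthFirstSearch {
    // Iterative DFS with an explicit stack over an adjacency-list graph.
    // Neighbours are pushed in reverse order so they are visited in list order.
    static List<String> dfs(Map<String, List<String>> adj, String start) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            String node = stack.pop();
            if (visited.add(node)) {               // skip nodes already visited
                order.add(node);
                List<String> nbrs = adj.getOrDefault(node, List.of());
                for (int i = nbrs.size() - 1; i >= 0; i--) stack.push(nbrs.get(i));
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> adj = Map.of(
            "A", List.of("B", "C", "D"),
            "B", List.of("E", "F"),
            "C", List.of(), "D", List.of(), "E", List.of(), "F", List.of());
        System.out.println(dfs(adj, "A")); // [A, B, E, F, C, D]
    }
}
```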
This is a very different approach for traversing the graph nodes. The aim of BFS algorithm is to
traverse the graph as close as possible to the root node. Queue is used in the implementation of
the breadth first search. Let‘s see how BFS traversal works with respect to the following graph:
If we do the breadth first traversal of the above graph and print the visited node as the output, it
will print the following output. ―A B C D E F‖. The BFS visits the nodes level by level, so it will
start with level 0 which is the root node, and then it moves to the next levels which are B, C and
D, then the last levels which are E and F.
Algorithmic Steps
1. Add the starting node to the queue and mark it as visited.
2. Remove the node at the front of the queue.
3. Add each unvisited neighbour of that node to the rear of the queue, marking it as visited.
4. Repeat steps 2 and 3 until the queue is empty.
Based upon the above steps, the following Java code shows the implementation of the
BFS algorithm:
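As the original listing is not present, here is a self-contained sketch of a queue-based BFS (class and method names are mine). It reproduces the level-by-level order A B C D E F described above:

```java
import java.util.*;

public class BreadthFirstSearch {
    // BFS with a FIFO queue: visits nodes level by level from the start node.
    static List<String> bfs(Map<String, List<String>> adj, String start) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(start);
        visited.add(start);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            order.add(node);
            for (String nbr : adj.getOrDefault(node, List.of()))
                if (visited.add(nbr)) queue.add(nbr);   // enqueue unvisited neighbours
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> adj = Map.of(
            "A", List.of("B", "C", "D"),
            "B", List.of("E", "F"),
            "C", List.of(), "D", List.of(), "E", List.of(), "F", List.of());
        System.out.println(bfs(adj, "A")); // [A, B, C, D, E, F]
    }
}
```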
Spanning Trees:
A spanning tree T of a connected, undirected graph G is a selection of edges of G that form a tree spanning every vertex. That is,
every vertex lies in the tree, but no cycles (or loops) are formed. On the other hand, every bridge
of G must belong to T.
A spanning tree of a connected graph G can also be defined as a maximal set of edges of G that
contains no cycle, or as a minimal set of edges that connect all vertices.
Example:
Spanning forests
A spanning forest is a type of subgraph that generalises the concept of a spanning tree.
However, there are two definitions in common use. One is that a spanning forest is a subgraph
that consists of a spanning tree in each connected component of a graph. (Equivalently, it is a
maximal cycle-free subgraph.) This definition is common in computer science and optimisation.
It is also the definition used when discussing minimum spanning forests, the generalization to
disconnected graphs of minimum spanning trees. Another definition, common in graph theory, is
that a spanning forest is any subgraph that is both a forest (contains no cycles) and spanning
(includes every vertex).
The number t(G) of spanning trees of a connected graph is an important invariant. In some cases,
it is easy to calculate t(G) directly. For example, if G is itself a tree, then t(G) = 1, while if G is the cycle graph Cn with n
vertices, then t(G) = n. For any graph G, the number t(G) can be calculated using Kirchhoff's
matrix-tree theorem.
Cayley's formula is a formula for the number of spanning trees in the complete graph Kn with n
vertices. The formula states that t(Kn) = n^(n−2). Another way of stating Cayley's formula is that
there are exactly n^(n−2) labelled trees with n vertices. Cayley's formula can be proved using
Kirchhoff's matrix-tree theorem or via the Prüfer code.
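Kirchhoff's matrix-tree theorem can be checked against Cayley's formula for small n: build the reduced Laplacian of Kn (delete one row and column of L = D − A) and take its determinant. An illustrative sketch (class and method names are mine):

```java
public class MatrixTree {
    // Matrix-tree theorem: t(G) equals any cofactor of the Laplacian L = D - A.
    // For K_n, the reduced Laplacian is (n-1)x(n-1) with diagonal n-1, off-diagonal -1.
    static long spanningTreesOfKn(int n) {
        double[][] m = new double[n - 1][n - 1];
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - 1; j++)
                m[i][j] = (i == j) ? n - 1 : -1;
        return Math.round(determinant(m));
    }

    // Determinant by Gaussian elimination with partial pivoting.
    static double determinant(double[][] m) {
        int n = m.length;
        double det = 1;
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(m[r][col]) > Math.abs(m[pivot][col])) pivot = r;
            if (m[pivot][col] == 0) return 0;
            if (pivot != col) { double[] t = m[pivot]; m[pivot] = m[col]; m[col] = t; det = -det; }
            det *= m[col][col];
            for (int r = col + 1; r < n; r++) {
                double f = m[r][col] / m[col][col];
                for (int c = col; c < n; c++) m[r][c] -= f * m[col][c];
            }
        }
        return det;
    }

    public static void main(String[] args) {
        System.out.println(spanningTreesOfKn(4)); // 16 = 4^(4-2), agreeing with Cayley
    }
}
```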
If G is the complete bipartite graph Kp,q, then t(G) = p^(q−1) · q^(p−1); a similar closed-form expression exists for the n-dimensional hypercube graph.
If G is a multigraph and e is an edge of G, then the number t(G) of spanning trees of G satisfies
the deletion-contraction recurrence t(G)=t(G-e)+t(G/e), where G-e is the multigraph obtained
by deleting e and G/e is the contraction of G by e, where multiple edges arising from this
contraction are not deleted.
A spanning tree chosen randomly from among all the spanning trees with equal probability is
called a uniform spanning tree (UST). This model has been extensively researched in probability
and mathematical physics.
Algorithms
The classic spanning tree algorithm, depth-first search (DFS), is due to Robert Tarjan. Another
important algorithm is based on breadth-first search (BFS).
Planar Graphs:
In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be
drawn on the plane in such a way that its edges intersect only at their endpoints.
A planar graph already drawn in the plane without edge intersections is called a plane graph or
planar embedding of the graph. A plane graph can be defined as a planar graph with a
mapping from every node to a point in 2D space, and from every edge to a plane curve, such that
the extreme points of each curve are the points mapped from its end nodes, and all curves are
disjoint except on their extreme points. Plane graphs can be encoded by combinatorial maps.
It is easily seen that a graph that can be drawn on the plane can be drawn on the sphere as well,
and vice versa.
The equivalence class of topologically equivalent drawings on the sphere is called a planar
map. Although a plane graph has an external or unbounded face, none of the faces of a planar
map have a particular status.
Applications
Telecommunications – e.g. spanning trees
Vehicle routing – e.g. planning routes on roads without underpasses
VLSI – e.g. laying out circuits on computer chip.
The puzzle game Planarity requires the player to "untangle" a planar graph so that none
of its edges intersect.
Example graphs
Planar: the butterfly graph; the complete graph K4.
Nonplanar: the complete graph K5; the complete bipartite graph K3,3.
UNIT-VI
Graph Theory and Applications
Graphs are among the most ubiquitous models of both natural and human-made structures. They
can be used to model many types of relations and process dynamics in physical, biological and
social systems. Many problems of practical interest can be represented by graphs.
In computer science, graphs are used to represent networks of communication, data organization,
computational devices, the flow of computation, etc. One practical example: The link structure of
a website could be represented by a directed graph. The vertices are the web pages available at
the website and a directed edge from page A to page B exists if and only if A contains a link to B.
A similar approach can be taken to problems in travel, biology, computer chip design, and many
other fields. The development of algorithms to handle graphs is therefore of major interest in
computer science. There, the transformation of graphs is often formalized and represented by
graph rewrite systems. They are either directly used or properties of the rewrite systems (e.g.
confluence) are studied. Complementary to graph transformation systems focussing on rule-
based in-memory manipulation of graphs are graph databases geared towards transaction-safe,
persistent storing and querying of graph-structured data.
Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since
natural language often lends itself well to discrete structure. Traditionally, syntax and
compositional semantics follow tree-based structures, whose expressive power lies in the
Principle of Compositionality, modeled in a hierarchical graph. Within lexical semantics,
especially as applied to computers, modeling word meaning is easier when a given word is
understood in terms of related words; semantic networks are therefore important in
computational linguistics. Still other methods in phonology (e.g. Optimality Theory, which uses
lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are
common in the analysis of language as a graph. Indeed, the usefulness of this area of
mathematics to linguistics has borne organizations such as TextGraphs, as well as various 'Net'
projects, such as WordNet, VerbNet, and others.
Graph theory is also used to study molecules in chemistry and physics. In condensed matter
physics, the three dimensional structure of complicated simulated atomic structures can be
studied quantitatively by gathering statistics on graph-theoretic properties related to the topology
of the atoms. For example, Franzblau's shortest-path (SP) rings. In chemistry a graph makes a
natural model for a molecule, where vertices represent atoms and edges bonds. This approach is
especially used in computer processing of molecular structures, ranging from chemical editors to
database searching. In statistical physics, graphs can represent local connections between
interacting parts of a system, as well as the dynamics of a physical process on such systems.
Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige
or to explore diffusion mechanisms, notably through the use of social network analysis
software.Likewise, graph theory is useful in biology and conservation efforts where a vertex can
represent regions where certain species exist (or habitats) and the edges represent migration
paths, or movement between the regions. This information is important when looking at breeding
patterns or tracking the spread of disease, parasites or how changes to the movement can affect
other species.
In mathematics, graphs are useful in geometry and certain parts of topology, e.g. Knot Theory.
Algebraic graph theory has close links with group theory.
A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with
weights, or weighted graphs, are used to represent structures in which pairwise connections have
some numerical values. For example if a graph represents a road network, the weights could
represent the length of each road.
Graph Isomorphism:
Suppose there is a one-to-one correspondence f between the vertices of two graphs G1 and G2 that preserves adjacency. Then we say that the function f is an isomorphism and that the two graphs G1 and G2
are isomorphic. So two graphs G1 and G2 are isomorphic if there is a one-to-one correspondence
between vertices of G1 and those of G2 with the property that if two vertices of G1 are adjacent
then so are their images in G2. If two graphs are isomorphic, then as far as we are concerned they
are the same graph, though the locations of the vertices may be different.
Example:
The two graphs shown below are isomorphic, despite their different looking drawings.
ƒ(a) = 1
ƒ(b) = 6
ƒ(c) = 8
ƒ(d) = 3
ƒ(g) = 5
ƒ(h) = 2
ƒ(i) = 4
ƒ(j) = 7
Subgraphs:
A subgraph of a graph G is a graph whose vertex set is a subset of that of G, and whose
adjacency relation is a subset of that of G restricted to this subset. In the other direction, a
supergraph of a graph G is a graph of which G is a subgraph. We say a graph G contains
another graph H if some subgraph of G is H or is isomorphic to H.
A subgraph H is a spanning subgraph, or factor, of a graph G if it has the same vertex set as G.
We say H spans G.
A subgraph H of a graph G is said to be induced if, for any pair of vertices x and y of H, xy is an
edge of H if and only if xy is an edge of G. In other words, H is an induced subgraph of G if it
has all the edges that appear in G over the same vertex set. If the vertex set of H is the subset S of
V(G), then H can be written as G[S] and is said to be induced by S.
A universal graph in a class K of graphs is a simple graph in which every element in K can be
embedded as a subgraph.
K5, a complete graph. If a subgraph looks like this, the vertices in that subgraph form a clique of
size 5.
Multi graphs:
Multigraphs might be used to model the possible flight connections offered by an airline. In this
case the multigraph would be a directed graph with pairs of directed parallel edges connecting
cities to show that it is possible to fly both to and from these locations.
A multigraph with multiple edges (red) and a loop (blue). Not all authors allow multigraphs to
have loops.
Euler circuits:
In graph theory, an Eulerian trail is a trail in a graph which visits every edge exactly once.
Similarly, an Eulerian circuit is an Eulerian trail which starts and ends on the same vertex. They
were first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg
problem in 1736. Mathematically the problem can be stated like this:
Given the graph on the right, is it possible to construct a path (or a cycle, i.e. a path starting and
ending on the same vertex) which visits each edge exactly once?
Euler proved that a necessary condition for the existence of Eulerian circuits is that all vertices in
the graph have an even degree, and stated without proof that connected graphs with all vertices
of even degree have an Eulerian circuit. The first complete proof of this latter claim was
published in 1873 by Carl Hierholzer.
The term Eulerian graph has two common meanings in graph theory. One meaning is a graph
with an Eulerian circuit, and the other is a graph with every vertex of even degree. These
definitions coincide for connected graphs.
For the existence of Eulerian trails it is necessary that no more than two vertices have an odd
degree; this means the Königsberg graph is not Eulerian. If there are no vertices of odd degree,
all Eulerian trails are circuits. If there are exactly two vertices of odd degree, all Eulerian trails
start at one of them and end at the other. Sometimes a graph that has an Eulerian trail but not an
Eulerian circuit is called semi-Eulerian.
An Eulerian trail or Euler walk in an undirected graph is a walk that uses each
edge exactly once. If such a walk exists, the graph is called traversable or semi-Eulerian.
An Eulerian cycle, Eulerian circuit or Euler tour in an undirected graph is a cycle that uses
each edge exactly once. If such a cycle exists, the graph is called unicursal. While graphs with such a cycle
are Eulerian graphs, not every Eulerian graph in the even-degree sense possesses an Eulerian cycle, since the graph must also be connected.
For directed graphs path has to be replaced with directed path and cycle with directed cycle.
The definition and properties of Eulerian trails, cycles and graphs are valid for multigraphs as
well.
Every vertex of this graph has an even degree, therefore this is an Eulerian graph. Following the
edges in alphabetical order gives an Eulerian circuit/cycle.
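The degree conditions above are easy to test programmatically: count the odd-degree vertices. In the sketch below (class and method names are mine), the Königsberg multigraph is encoded as 4 land masses joined by 7 bridges; it has 4 odd-degree vertices, so no Eulerian trail exists:

```java
public class EulerCheck {
    // A connected graph has an Eulerian circuit iff every vertex has even degree,
    // and an Eulerian trail iff it has exactly 0 or 2 vertices of odd degree.
    static int oddDegreeCount(int n, int[][] edges) {
        int[] deg = new int[n];
        for (int[] e : edges) { deg[e[0]]++; deg[e[1]]++; }
        int odd = 0;
        for (int d : deg) if (d % 2 != 0) odd++;
        return odd;
    }

    public static void main(String[] args) {
        // Cycle C4 (0-1-2-3-0): all degrees even, so an Eulerian circuit exists
        int[][] c4 = {{0,1},{1,2},{2,3},{3,0}};
        System.out.println(oddDegreeCount(4, c4)); // 0
        // Königsberg: 4 land masses, 7 bridges -> 4 odd-degree vertices, no trail
        int[][] konigsberg = {{0,1},{0,1},{0,2},{0,2},{0,3},{1,3},{2,3}};
        System.out.println(oddDegreeCount(4, konigsberg)); // 4
    }
}
```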
Hamiltonian graphs:
In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in
an undirected graph which visits each vertex exactly once. A Hamiltonian cycle (or
Hamiltonian circuit) is a cycle in an undirected graph which visits each vertex exactly once and
also returns to the starting vertex. Determining whether such paths and cycles exist in graphs is
the Hamiltonian path problem which is NP-complete.
Hamiltonian paths and cycles are named after William Rowan Hamilton who invented the
Icosian game, now also known as Hamilton's puzzle, which involves finding a Hamiltonian cycle
in the edge graph of the dodecahedron. Hamilton solved this problem using the Icosian Calculus,
an algebraic structure based on roots of unity with many similarities to the quaternions (also
invented by Hamilton). This solution does not generalize to arbitrary graphs.
A Hamiltonian path or traceable path is a path that visits each vertex exactly once. A graph that
contains a Hamiltonian path is called a traceable graph. A graph is Hamilton-connected if for
every pair of vertices there is a Hamiltonian path between the two vertices.
A Hamiltonian cycle, Hamiltonian circuit, vertex tour or graph cycle is a cycle that visits each
vertex exactly once (except the vertex which is both the start and end, and so is visited twice). A
graph that contains a Hamiltonian cycle is called a Hamiltonian graph.
Similar notions may be defined for directed graphs, where each edge (arc) of a path or cycle can
only be traced in a single direction (i.e., the vertices are connected with arrows and the edges
traced "tail-to-head").
Examples
a complete graph with more than two vertices is Hamiltonian
every cycle graph is Hamiltonian
every tournament has an odd number of Hamiltonian paths
every platonic solid, considered as a graph, is Hamiltonian
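Since the Hamiltonian cycle problem is NP-complete, no efficient general algorithm is known, but small graphs can be checked by exhaustive backtracking. An illustrative sketch (names are mine, not a standard algorithm from the text):

```java
public class HamiltonianCheck {
    // Brute-force backtracking test for a Hamiltonian cycle (exponential time,
    // reflecting NP-completeness; fine for tiny graphs).
    static boolean hasHamiltonianCycle(boolean[][] adj) {
        boolean[] used = new boolean[adj.length];
        used[0] = true;                       // fix vertex 0 as the start
        return extend(adj, used, 0, 1);
    }

    static boolean extend(boolean[][] adj, boolean[] used, int last, int count) {
        int n = adj.length;
        if (count == n) return adj[last][0];  // all visited: must close the cycle
        for (int v = 1; v < n; v++) {
            if (!used[v] && adj[last][v]) {
                used[v] = true;
                if (extend(adj, used, v, count + 1)) return true;
                used[v] = false;              // backtrack
            }
        }
        return false;
    }

    static boolean[][] complete(int n) {      // K_n adjacency matrix
        boolean[][] a = new boolean[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) a[i][j] = i != j;
        return a;
    }

    public static void main(String[] args) {
        System.out.println(hasHamiltonianCycle(complete(5))); // K5 is Hamiltonian
        // The path 0-1-2-3 has no Hamiltonian cycle
        boolean[][] path = new boolean[4][4];
        path[0][1] = path[1][0] = path[1][2] = path[2][1] = path[2][3] = path[3][2] = true;
        System.out.println(hasHamiltonianCycle(path)); // false
    }
}
```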
Chromatic Numbers:
In graph theory, graph coloring is a special case of graph labeling; it is an assignment of labels
traditionally called "colors" to elements of a graph subject to certain constraints. In its simplest
form, it is a way of coloring the vertices of a graph such that no two adjacent vertices share the
same color; this is called a vertex coloring. Similarly, an edge coloring assigns a color to each
edge so that no two adjacent edges share the same color, and a face coloring of a planar graph
assigns a color to each face or region so that no two faces that share a boundary have the same
color.
Vertex coloring is the starting point of the subject, and other coloring problems can be
transformed into a vertex version. For example, an edge coloring of a graph is just a vertex
coloring of its line graph, and a face coloring of a planar graph is just a vertex coloring of its
planar dual. However, non-vertex coloring problems are often stated and studied as is. That is
partly for perspective, and partly because some problems are best studied in non-vertex form, as
for instance is edge coloring.
The convention of using colors originates from coloring the countries of a map, where each face
is literally colored. This was generalized to coloring the faces of a graph embedded in the plane.
By planar duality it became coloring the vertices, and in this form it generalizes to all graphs. In
mathematical and computer representations it is typical to use the first few positive or
nonnegative integers as the "colors". In general one can use any finite set as the "color set". The
nature of the coloring problem depends on the number of colors but not on what they are.
Graph coloring enjoys many practical applications as well as theoretical challenges. Beside the
classical types of problems, different limitations can also be set on the graph, or on the way a
color is assigned, or even on the color itself. It has even reached popularity with the general
public in the form of the popular number puzzle Sudoku. Graph coloring is still a very active
field of research.
A proper vertex coloring of the Petersen graph with 3 colors, the minimum number possible.
Vertex coloring
When used without any qualification, a coloring of a graph is almost always a proper vertex
coloring, namely a labelling of the graph‘s vertices with colors such that no two vertices sharing
the same edge have the same color. Since a vertex with a loop could never be properly colored, it
is understood that graphs in this context are loopless.
The terminology of using colors for vertex labels goes back to map coloring. Labels like red and
blue are only used when the number of colors is small, and normally it is understood that the
labels are drawn from the integers {1,2,3,...}.
A coloring using at most k colors is called a (proper) k-coloring. The smallest number of colors
needed to color a graph G is called its chromatic number, χ(G). A graph that can be assigned a
(proper) k-coloring is k-colorable, and it is k-chromatic if its chromatic number is exactly k. A
subset of vertices assigned to the same color is called a color class, every such class forms an
independent set. Thus, a k-coloring is the same as a partition of the vertex set into k independent
sets, and the terms k-partite and k-colorable have the same meaning.
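A simple heuristic for vertex coloring is the greedy algorithm: scan the vertices in order and give each the smallest color not used by an already-colored neighbour. It uses at most Δ+1 colors (Δ being the maximum degree), which is only an upper bound on χ(G), since computing χ(G) exactly is hard in general. An illustrative sketch (names are mine):

```java
import java.util.*;

public class GreedyColoring {
    // Greedy proper vertex coloring over an adjacency-list graph.
    static int[] color(List<List<Integer>> adj) {
        int n = adj.size();
        int[] color = new int[n];
        Arrays.fill(color, -1);
        for (int v = 0; v < n; v++) {
            boolean[] taken = new boolean[n];
            for (int u : adj.get(v))
                if (color[u] >= 0) taken[color[u]] = true;  // colors used by neighbours
            int c = 0;
            while (taken[c]) c++;                            // smallest free color
            color[v] = c;
        }
        return color;
    }

    public static void main(String[] args) {
        // C4 cycle 0-1-2-3-0 is an even cycle, hence 2-colorable
        List<List<Integer>> c4 = List.of(
            List.of(1, 3), List.of(0, 2), List.of(1, 3), List.of(0, 2));
        System.out.println(Arrays.toString(color(c4))); // [0, 1, 0, 1]
    }
}
```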
Examples: the chromatic number of a star graph is 2; the chromatic number of a wheel graph is 3 when its rim cycle is even and 4 when the rim cycle is odd.