
UNIT-3

Message Authentication Requirements


Data is prone to various attacks, and message authentication addresses several of them.
The threat arises when the receiver has no assurance about the originator of a
message. Message authentication can be achieved using cryptographic methods which
further make use of keys.

Authentication Requirements:

 Revelation: It means releasing the content of the message to someone who does not
have an appropriate cryptographic key.
 Analysis of Traffic: Determination of the pattern of traffic through the duration of
connection and frequency of connections between different parties.
 Deception: Adding out of context messages from a fraudulent source into a
communication network. This will lead to mistrust between the parties communicating
and may also cause loss of critical data.
 Modification in the Content: Changing the content of a message. This includes
inserting new information or deleting/changing the existing one.
 Modification in the sequence: Changing the order of messages between parties. This
includes insertion, deletion, and reordering of messages.
 Modification in the Timings: This includes replay and delay of messages sent between
different parties. This way session tracking is also disrupted.
 Source Refusal: When the source denies being the originator of a message.
 Destination refusal: When the receiver of the message denies the reception.

Message Authentication Functions :

All message authentication and digital signature mechanisms are based on two functionality
levels:
 Lower level: At this level, there is a need for a function that produces an authenticator,
which is the value that will further help in the authentication of a message.
 Higher-level: The lower level function is used here in order to help receivers verify the
authenticity of messages.
These message authentication functions are divided into three classes:
 Message encryption: While sending data over the internet, there is always a risk of a
man-in-the-middle (MITM) attack. A possible solution for this is to use message
encryption. In message encryption, the data is first converted to ciphertext and only then
sent. Message encryption can be done in two ways:
 Symmetric Encryption: Say we have to send a message M from a source P to a
destination Q. This message M can be encrypted using a secret key K that both P and Q
share. Without this key K, no other person can recover the plaintext from the ciphertext,
which maintains confidentiality. Further, Q can be sure that P sent the message, because
apart from Q itself, P is the only party that possesses the key K and could have produced
the ciphertext. This maintains authenticity. At a very basic level, symmetric encryption
works as in the sketch below.
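A minimal sketch of this idea in Python, assuming the third-party cryptography package is available. Fernet bundles a symmetric cipher with an integrity check, so it illustrates both confidentiality and implicit authenticity; the key handling and message are placeholders for illustration only.

from cryptography.fernet import Fernet

# P and Q share this secret key K ahead of time (out of band).
key = Fernet.generate_key()

# P encrypts the message M with the shared key.
cipher = Fernet(key)
ciphertext = cipher.encrypt(b"transfer 100 units to account 42")

# Q decrypts with the same key; only a holder of K could have produced a
# ciphertext that decrypts successfully, which gives Q confidence that
# the message came from P.
plaintext = cipher.decrypt(ciphertext)
print(plaintext)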

Message authentication code:-

A message authentication code (MAC) is a cryptographic checksum applied to


a message in network communication to guarantee its integrity
and authenticity. A MAC ensures the transmitted message originated with the
stated sender and was not modified during transmission, either accidentally or
intentionally. A MAC is sometimes referred to as a tag because of the way it is
added to the message it is verifying.
MAC for message verification
Symmetric key cryptographic techniques are used to generate MACs for individual
messages. The process requires a standard MAC algorithm that takes two inputs: the original
message and a secret key known only to the message originator and its intended recipient.
The following figure provides an overview of how a sender generates a MAC and how it is
verified by the receiver.
MAC-based message verification requires both the sender and receiver to follow specific
steps to ensure the message's credibility:

1. The sender and receiver share a secret symmetric key.

2. The sender runs a standard algorithm to create the MAC. As input, the algorithm takes the
original message and the secret key.

3. The algorithm combines the message and secret key and, from this content, generates a
fixed-length checksum that is used to create the MAC.

4. The sender appends the MAC to the message and transmits them both to the receiver.

5. When the receiver receives the message and MAC, it runs the MAC algorithm using the
transmitted message and shared secret key as input.

6. The algorithm combines the message and secret key and, from this content, generates a
fixed-length checksum that is used to create its own MAC.

7. The receiver compares the sender's MAC against its own MAC. If they match, the receiver
accepts the message. If the two MACs do not match, the receiver rejects the message.

When the two MACs match, the receiver knows the message came from the legitimate
sender and was not altered when transmitted between the sender and the receiver. If the
sender and receiver are not using the same secret key or if the message content is different
between the sender and receiver, the MAC values will not match and the receiver rejects the
message.
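The steps above can be sketched with Python's standard hmac and hashlib modules; the key and message values here are placeholders chosen purely for illustration.

import hmac, hashlib

secret_key = b"shared-secret-key"          # step 1: shared symmetric key
message = b"pay 500 to vendor 17"

# Sender (steps 2-4): compute the MAC over the message and append it.
sender_mac = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Receiver (steps 5-7): recompute the MAC from the received message and compare.
receiver_mac = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
if hmac.compare_digest(sender_mac, receiver_mac):
    print("MACs match: accept the message")
else:
    print("MACs differ: reject the message")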

Although a MAC ensures authenticity and integrity, it does not protect the message data
itself. That is not the purpose of a MAC. For data protection, the message needs to
be encrypted in a separate process. In addition, a MAC does not
offer nonrepudiation capabilities like a digital signature, which provides a record of a
document's origin. With a MAC, there is no way to prove who created the original message.

Hash Functions and Types of Hash functions

What is a Hash Function?


A hash function is a function that takes an input (or ‘message’) and returns a fixed-size
string of bytes. The output, typically a number, is called the hash code or hash value. The
main purpose of a hash function is to efficiently map data of arbitrary size to fixed-size
values, which are often used as indexes in hash tables.
Key Properties of Hash Functions
 Deterministic: A hash function must consistently produce the same output for the same
input.
 Fixed Output Size: The output of a hash function should have a fixed size, regardless of
the size of the input.
 Efficiency: The hash function should be able to process input quickly.
 Uniformity: The hash function should distribute the hash values uniformly across the
output space to avoid clustering.
 Pre-image Resistance: It should be computationally infeasible to reverse the hash
function, i.e., to find the original input given a hash value.
 Collision Resistance: It should be difficult to find two different inputs that produce the
same hash value.
 Avalanche Effect: A small change in the input should produce a significantly different
hash value (see the sketch after this list).
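A short Python illustration of the deterministic, fixed-output-size and avalanche properties, using the standard hashlib module; the example inputs are arbitrary.

import hashlib

# Deterministic: the same input always yields the same 256-bit digest.
print(hashlib.sha256(b"hello").hexdigest())
print(hashlib.sha256(b"hello").hexdigest())   # identical output

# Avalanche effect: a one-character change produces a very different digest.
print(hashlib.sha256(b"hellp").hexdigest())

# Fixed output size: 32 bytes (256 bits) regardless of input length.
print(hashlib.sha256(b"a" * 1_000_000).digest_size)   # 32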
Applications of Hash Functions
 Hash Tables: The most common use of hash functions in DSA is in hash tables, which
provide an efficient way to store and retrieve data.
 Data Integrity: Hash functions are used to ensure the integrity of data by generating
checksums.
 Cryptography: In cryptographic applications, hash functions are used to create secure
hash algorithms like SHA-256.
 Data Structures: Hash functions are utilized in various data structures such as Bloom
filters and hash sets.
Types of Hash Functions
There are many hash functions that use numeric or alphanumeric keys. This section discusses
the following hash functions:
1. Division Method.
2. Multiplication Method
3. Mid-Square Method
4. Folding Method
5. Cryptographic Hash Functions
6. Universal Hashing
7. Perfect Hashing
1. Division Method
The division method involves dividing the key by a prime number and using the remainder
as the hash value.

h(k) = k mod m

Where k is the key and m is a prime number.

Advantages:

 Simple to implement.

 Works well when m is a prime number.

Disadvantages:

 Poor distribution if m is not chosen wisely.

2. Multiplication Method
In the multiplication method, a constant A (0 < A < 1) is used to multiply the key. The
fractional part of the product is then multiplied by m to get the hash value.

h(k) = ⌊m(kA mod 1)⌋

Where ⌊ ⌋ denotes the floor function.

Advantages:
 Less sensitive to the choice of m.

Disadvantages:
 More complex than the division method.
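A minimal Python sketch of both methods. The helper names and example keys are hypothetical; the constant A follows the common (sqrt(5) - 1)/2 suggestion.

import math

def division_hash(k: int, m: int) -> int:
    # h(k) = k mod m, with m ideally prime
    return k % m

def multiplication_hash(k: int, m: int, A: float = (math.sqrt(5) - 1) / 2) -> int:
    # h(k) = floor(m * (k*A mod 1))
    return math.floor(m * ((k * A) % 1))

print(division_hash(1276, 11))        # remainder after dividing by the prime 11
print(multiplication_hash(1276, 16))  # works even when m is a power of two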
3. Mid-Square Method
In the mid-square method, the key is squared, and the middle digits of the result are taken
as the hash value.
Steps:
1. Square the key.
2. Extract the middle digits of the squared value.
Advantages:
 Produces a good distribution of hash values.
Disadvantages:
 May require more computational effort.
4. Folding Method
The folding method involves dividing the key into equal parts, summing the parts, and then
taking the modulo with respect to m.

Steps:
1. Divide the key into parts.

2. Sum the parts.

3. Take the modulo m of the sum.

Advantages:
 Simple and easy to implement.
Disadvantages:
 Depends on the choice of partitioning scheme.
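A minimal Python sketch of the mid-square and folding methods; the helper names, digit counts and example keys are hypothetical.

def mid_square_hash(k: int, digits: int = 2) -> int:
    squared = str(k * k)
    mid = len(squared) // 2
    # take `digits` middle digits of k^2 as the hash value
    return int(squared[mid - digits // 2 : mid + (digits + 1) // 2])

def folding_hash(k: int, part_size: int, m: int) -> int:
    s = str(k)
    # split the key into parts, sum them, then reduce mod m
    parts = [int(s[i:i + part_size]) for i in range(0, len(s), part_size)]
    return sum(parts) % m

print(mid_square_hash(4567))            # middle digits of 4567^2 = 20857489
print(folding_hash(123456789, 3, 100))  # (123 + 456 + 789) mod 100 = 68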
5. Cryptographic Hash Functions
Cryptographic hash functions are designed to be secure and are used in cryptography.
Examples include MD5, SHA-1, and SHA-256.
Characteristics:
 Pre-image resistance.
 Second pre-image resistance.
 Collision resistance.
Advantages:
 High security.
Disadvantages:
 Computationally intensive.
6. Universal Hashing
Universal hashing uses a family of hash functions to minimize the chance of collision for
any given set of inputs.

h(k) = ((a⋅k + b) mod p) mod m

Where a and b are randomly chosen constants, p is a prime number greater than m,
and k is the key.
Advantages:
 Reduces the probability of collisions.
Disadvantages:
 Requires more computation and storage.
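A minimal Python sketch of a universal hash family of the form above; the table size, prime and key values are illustrative choices.

import random

m = 16                   # table size
p = 2_147_483_647        # a prime larger than m and the keys (the Mersenne prime 2^31 - 1)
a = random.randint(1, p - 1)
b = random.randint(0, p - 1)

def universal_hash(k: int) -> int:
    # h(k) = ((a*k + b) mod p) mod m
    return ((a * k + b) % p) % m

print(universal_hash(1276), universal_hash(1277))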
7. Perfect Hashing
Perfect hashing aims to create a collision-free hash function for a static set of keys. It
guarantees that no two keys will hash to the same value.
Types:
 Minimal Perfect Hashing: Ensures that the range of the hash function is equal to the
number of keys.
 Non-minimal Perfect Hashing: The range may be larger than the number of keys.
Advantages:
 No collisions.
Disadvantages:
 Complex to construct.

SHA-256 Algorithm:
What is Hashing?

Hashing is the process of scrambling raw information to the extent that it cannot be reproduced in its
original form. It takes a piece of information and passes it through a function that performs mathematical
operations on the plaintext. This function is called the hash function, and the output is called the hash
value/digest.

There are two primary applications of hashing (both are sketched below):

 Password Hashes: Most website servers convert user passwords into a hash value before storing them
on the server. During login, the hash re-calculated from the entered password is compared to the one
stored in the database for validation.

 Integrity Verification: When a file is uploaded to a website, its hash is shared along with it. When a user
downloads the file, they can recalculate the hash and compare it to establish data integrity.
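A short Python illustration of both applications, using the standard hashlib module; the password, salt handling and file contents are placeholders, not a production design.

import hashlib, os

# Password hashing: store a salted, slow hash rather than the password itself.
password = b"correct horse battery staple"
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# At login, recompute with the same salt and compare.
attempt = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
print(attempt == stored)   # True

# Integrity verification: hash the downloaded data and compare with the published digest.
data = b"...file contents..."
print(hashlib.sha256(data).hexdigest())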

What is the SHA-256 Algorithm?

SHA 256 is a part of the SHA 2 family of algorithms, where SHA stands for Secure Hash Algorithm.
Published in 2001, it was a joint effort between the NSA and NIST to introduce a successor to the SHA 1
family, which was slowly losing strength against brute force attacks.

The significance of the 256 in the name stands for the final hash digest value, i.e. irrespective of the size
of plaintext/cleartext, the hash value will always be 256 bits.
What are the Characteristics of the SHA-256 Algorithm?

Some of the standout features of the SHA algorithm are as follows:

 Message Length: The length of the cleartext should be less than 2^64 bits. Keeping the message within
this bound helps keep the digest as random as possible.

 Digest Length: The length of the hash digest should be 256 bits in SHA 256 algorithm, 512 bits in SHA-512,
and so on. Bigger digests usually suggest significantly more calculations at the cost of speed and space.

 Irreversible: By design, all hash functions such as SHA-256 are irreversible. You should not be able to
recover the plaintext from the digest, and hashing the digest again does not reveal the original value.

Now that you got a fair idea about the technical requirements for SHA, you can get into its complete
procedure, in the next section.

Steps in SHA-256 Algorithm

You can divide the complete process into five different segments, as mentioned below:

Padding Bits

It adds some extra bits to the message, such that the length is exactly 64 bits short of a multiple of 512.
During the addition, the first bit should be one, and the rest of it should be filled with zeroes.
Padding Length

You now append 64 bits of data to make the final padded message a multiple of 512. These 64
bits encode the length of the original cleartext, measured before padding.
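The padding-bits and padding-length steps can be sketched directly in Python; the function below is an illustrative helper, not part of any library.

def sha256_pad(message: bytes) -> bytes:
    bit_length = len(message) * 8
    padded = message + b"\x80"              # a single 1 bit, then zeroes
    while (len(padded) * 8) % 512 != 448:   # stop 64 bits short of a multiple of 512
        padded += b"\x00"
    return padded + bit_length.to_bytes(8, "big")  # 64-bit big-endian length

padded = sha256_pad(b"abc")
print(len(padded) * 8)          # 512: the padded message fills whole 512-bit blocks
print((len(padded) * 8) % 512)  # 0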

Initialising the Buffers:

You need to initialize eight buffers (a through h) with the default constant values defined by the
standard; these buffers are used in the rounds that follow.
Compression Functions

The entire message gets broken down into multiple blocks of 512 bits each. Each block is put through 64
rounds of operation, with the output of each block serving as the input for the following block.

While the value of K[i] in all those rounds is pre-initialized, W[i] is another input that is calculated
individually for each block, depending on the number of iterations being processed at the moment.

Output

With each iteration, the final output of the block serves as the input for the next block. The entire cycle
keeps repeating until you reach the last 512-bit block, and you then consider its output the final hash
digest. This digest will be of the length 256-bit, as per the name of this algorithm.
Applications of SHA algorithm

The SHA algorithm is used in many places, some of which are as
follows:

 Digital Signature Verification: Digital signatures follow asymmetric encryption methodology to verify the
authenticity of a document/file. Hash algorithms like SHA 256 go a long way in ensuring the verification of the
signature.

 Password Hashing: As discussed above, websites store user passwords in a hashed format for two benefits. It
helps foster a sense of privacy, and it lessens the load on the central database since all the digests are of
similar size.

 SSL Handshake: The SSL handshake is a crucial segment of web browsing sessions, and it’s done using
SHA functions. It consists of the web browser and the web server agreeing on encryption keys and hashing
authentication to prepare a secure connection.

 Integrity Checks: As discussed above, file integrity verification uses variants like the SHA-256 algorithm
and the MD5 algorithm. It helps maintain the full functionality of files and makes sure they were not
altered in transit.

SHA (Secure Hash Algorithm):


In recent years, the most widely used hash function has been the Secure Hash Algorithm (SHA).

Introduction:

The Secure Hash Algorithm is a family of cryptographic hash functions developed by the NIST

(National Institute of Standards & Technology).

SHA is based on the MD4 algorithm and its design closely models MD5.
SHA-1 is specified in RFC 3174.

Purpose: Authentication, not encryption.

SHA-1 produces a hash value of 160 bits. In 2002, NIST produced a revised version of the

standard, FIPS 180-2, that defined three new versions of SHA, with hash value lengths of 256,

384, and 512 bits, known as SHA-256, SHA-384, and SHA-512, respectively.

SHA-1 logic:

The algorithm takes a message with a maximum length of less than 2^64 bits.

The output produced is a 160-bit message digest.

The input is processed in 512-bit blocks.

Algorithm processing steps:

Step1: Append Padding Bits

Step 2: Append Length

Step 3: Initialize MD Buffer

Step 4: Process Message in 512-bit (16-Word) Blocks

Step 5: Output

Step-1: Appending Padding Bits: The original message is "padded" (extended) so that its length

(in bits) is congruent to 448, modulo 512. The padding rules are:
The original message is always padded with one bit "1" first.

Then zero or more bits "0" are padded to bring the length of the message up to 64 bits fewer than

a multiple of 512.

Step-2: append length: a block of 64 bits is appended to the message. This block is treated as

unsigned 64-bit integers (most significant byte first) and contains the length of the original

message.

Step-3: Initialize MD buffer: 160-bit buffer is used to hold intermediate and final results of the

hash function. This buffer can be represented as five 32-bit registers (A, B, C, D, E).

Step 4: Process message in 512-bit (16-word) blocks. The heart of the algorithm is a compression

module that consists of 80 rounds; it takes each 512-bit block together with the current 160-bit

buffer value as input and updates the buffer.
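The digest lengths quoted above can be checked with Python's standard hashlib module.

import hashlib

msg = b"abc"
for name in ("sha1", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, msg).digest()
    print(name, len(digest) * 8, "bits")   # 160, 256, 384 and 512 bits respectively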
HMAC:
In recent years, there has been increased interest in developing a MAC derived from a

cryptographic hash function, because they generally execute faster than symmetric block ciphers,

and because code for cryptographic hash functions is widely available.

A hash function such as SHA was not designed for use as a MAC and cannot be used directly for

that purpose because it does not rely on a secret key. There have been a number of proposals for
the incorporation of a secret key into an existing hash algorithm, originally by just pre-pending a

key to the message. Problems were found with these earlier, simpler proposals, but they resulted

in the development of HMAC.

HMAC Design Objectives:

• To use, without modifications, available hash functions. In particular, to use hash functions that

perform well in software and for which code is freely and widely available.

• To allow for easy replaceability of the embedded hash function in case faster or more secure hash

functions are found or required.

• To preserve the original performance of the hash function without incurring a significant

degradation.

• To use and handle keys in a simple way.

• To have a well understood cryptographic analysis of the strength of the authentication mechanism

based on reasonable assumptions about the embedded hash function.

HMAC Algorithm:
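The HMAC construction is HMAC(K, M) = H((K ⊕ opad) || H((K ⊕ ipad) || M)), where ipad and opad are fixed padding bytes and K is padded to the hash block size. A sketch for SHA-256, checked against Python's standard hmac module; the key and message are placeholders.

import hashlib, hmac

def hmac_sha256(key: bytes, message: bytes) -> bytes:
    block_size = 64                                   # SHA-256 block size in bytes
    if len(key) > block_size:                         # long keys are hashed first
        key = hashlib.sha256(key).digest()
    key = key.ljust(block_size, b"\x00")              # then padded with zeroes
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + message).digest()   # H((K ^ ipad) || M)
    return hashlib.sha256(opad + inner).digest()      # H((K ^ opad) || inner)

key, msg = b"secret", b"attack at dawn"
print(hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest())   # True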
Cipher-Based Message Authentication Code (CMAC):
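CMAC derives the authentication tag from a block cipher (typically AES) rather than a hash function. A minimal sketch, assuming the third-party cryptography package is installed; the key and message are placeholders.

from cryptography.hazmat.primitives import cmac
from cryptography.hazmat.primitives.ciphers import algorithms

key = b"0123456789abcdef"              # 128-bit AES key (placeholder)
c = cmac.CMAC(algorithms.AES(key))
c.update(b"message to authenticate")
tag = c.finalize()                     # the CMAC tag

# Verification: recompute over the received message; verify() raises on mismatch.
v = cmac.CMAC(algorithms.AES(key))
v.update(b"message to authenticate")
v.verify(tag)
print("CMAC verified")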
DIGITAL SIGNATURES

A digital signature is an authentication mechanism that enables the creator of a message to


attach a code that acts as a signature. Typically, the signature is formed by taking the hash of the
message and encrypting that hash with the creator’s private key. The signature guarantees the
source and integrity of the message.

The Digital Signature Standard (DSS) is a NIST standard that uses the Secure Hash Algorithm (SHA).

Properties
Message authentication protects two parties who exchange messages from any third party.

However, it does not protect the two parties against each other. Several forms of dispute
between the two are possible.

DIGITAL SIGNATURE STANDARD

The Digital Signature Standard (DSS) makes use of the Secure Hash Algorithm (SHA) described above and
presents a digital signature technique, the Digital Signature Algorithm (DSA).

This latest version incorporates digital signature algorithms based on RSA and on elliptic curve
cryptography. In this section, we discuss the original DSS algorithm. The DSS uses an algorithm
that is designed to provide only the digital signature function.

Unlike RSA, it cannot be used for encryption or key exchange. Nevertheless, it is a public-key
technique.
In the RSA approach, the message to be signed is input to a hash function that produces a secure
hash code of fixed length. This hash code is then encrypted using the sender's private key to form
the signature. Both the message and the signature are then transmitted. The recipient takes the
message and produces a hash code.

The recipient also decrypts the signature using the sender's public key. If the calculated hash code
matches the decrypted signature, the signature is accepted as valid. Because only the sender
knows the private key, only the sender could have produced a valid signature.
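A minimal sketch of the RSA sign-and-verify flow described above, assuming the third-party cryptography package; the key size, padding choice and message are illustrative.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

message = b"signed agreement"

# Sender: hash the message and sign the hash with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Recipient: recompute the hash and check it against the signature using the
# sender's public key (verify() raises InvalidSignature on failure).
private_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("signature valid")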

Digital Signature Algorithm

The DSA is based on the difficulty of computing discrete logarithms and is based on schemes
originally presented by El Gamal and Schnorr. The DSA signature scheme has advantages, being
both smaller (320 bits vs. 1024 bits) and faster than RSA. Unlike RSA, it cannot be used for encryption or
key exchange. Nevertheless, it is a public-key technique.

DSA typically uses a common set of global parameters (p, q, g) for a community of clients, as
shown. A 160-bit prime number q is chosen. Next, a prime number p is selected with a length
between 512 and 1024 bits such that q divides (p – 1). Finally, g is chosen to be of the form
h^((p–1)/q) mod p, where h is an integer between 1 and (p – 1) with the restriction that g must be greater
than 1. Thus, the global public key components of DSA have the same form as in the Schnorr
signature scheme.
Signing and Verifying

The structure of the algorithm, as revealed here is quite interesting. Note that the test at the end
is on the value r, which does not depend on the message at all. Instead, r is a function of k and the
three global public-key components. The multiplicative inverse of k (mod q) is passed to a
function that also has as inputs the message hash code and the user's private key. The structure
of this function is such that the receiver can recover r using the incoming message and signature,
the public key of the user, and the global public key.
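A minimal sketch of DSA signing and verification, assuming the third-party cryptography package; the library generates the global parameters (p, q, g) and the per-message value k internally.

from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.primitives import hashes

private_key = dsa.generate_private_key(key_size=2048)
public_key = private_key.public_key()

message = b"message to be signed"
signature = private_key.sign(message, hashes.SHA256())   # encodes the (r, s) pair

# Anyone holding the public key and the global parameters can verify;
# verify() raises InvalidSignature if the check fails.
public_key.verify(signature, message, hashes.SHA256())
print("DSA signature valid")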
Knapsack Encryption Algorithm
The Knapsack Encryption Algorithm, also known as the Merkle-Hellman
Knapsack cryptosystem, was developed by Ralph Merkle and Martin Hellman in
1978. This groundbreaking algorithm emerged during the early days of public key
cryptography and quickly gained popularity as an innovative method for secure
communication. At the time, it was regarded as a major advancement in
cryptography due to its asymmetric-key nature – a technique that requires two
separate keys for encryption and decryption.

How it works?

The Knapsack Encryption Algorithm is an asymmetric-key cryptosystem that


requires two different keys for communication: a public key and a private key. The
process of encryption involves the conversion of the message (plaintext) into an
unreadable form using the public key, while decryption is done using the
corresponding private key to retrieve the original plaintext.

The main concept behind the algorithm is to transform the message into a series
of bits, which are then combined with a sequence generated from super-increasing
integers. This produces an encrypted value that can only be deciphered by someone
who can reverse these calculations, which is practical only with possession of the
private key.

One advantage of Knapsack Encryption is its ability to perform quick


computations compared to other encryption methods like RSA without
compromising data security. However, one disadvantage is its vulnerability: the scheme has
fallen out of favor as encryption standards have evolved over
time.
Ex1: Encrypt the messages 0100, 1011, 1010, 0101.

S = {1, 2, 4, 9} (super-increasing, private), r = 15, q = 17, m = 4

P = message

H = hard knapsack (public)

Solution:
hi = wi * r mod q (for encryption)
1* 15 mod 17=15

2* 15 mod 17=13

4* 15 mod 17=9

9* 15 mod 17=16

H= {15, 13, 9, 16} hard Knapsack

0100 * 15, 13, 9, 16= 13

1011 * 15, 13, 9, 16= 40

1010 * 15, 13, 9, 16= 24

0101 * 15, 13, 9, 16= 29

Encryption messages= {13, 40, 24,29}

Decrypt the messages

c′ = c * r⁻¹ mod q (for decryption)

r⁻¹ = 15⁻¹ mod 17 = 8

13 * 8 mod 17 = 2 {0100}={1,2,4,9}

40 * 8 mod 17 = 14 {1011}={1,2,4,9}

24 * 8 mod 17 = 5 {1010}={1,2,4,9}

29 * 8 mod 17 = 11 {0101}={1,2,4,9}
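The worked example can be reproduced with a short Python sketch; the greedy subset-sum loop is the standard way to decode against a super-increasing knapsack, and the helper names are illustrative.

S = [1, 2, 4, 9]          # super-increasing (private) knapsack
r, q = 15, 17             # multiplier and modulus (private)
H = [(w * r) % q for w in S]          # public hard knapsack -> [15, 13, 9, 16]

def encrypt(bits: str) -> int:
    return sum(h for bit, h in zip(bits, H) if bit == "1")

def decrypt(c: int) -> str:
    r_inv = pow(r, -1, q)             # modular inverse of r (= 8 here)
    target = (c * r_inv) % q
    bits = ""
    for w in reversed(S):             # greedy subset-sum over the easy knapsack
        if target >= w:
            bits = "1" + bits
            target -= w
        else:
            bits = "0" + bits
    return bits

for m in ["0100", "1011", "1010", "0101"]:
    c = encrypt(m)
    print(m, "->", c, "->", decrypt(c))   # ciphertexts 13, 40, 24, 29 and back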
Comparison to other Encryption Algorithms

Knapsack encryption algorithm, being one of the earliest public key


cryptosystems, offers some unique features compared to other encryption
algorithms. Here's a comparison table to give you a clear understanding of
how knapsack encryption stands against other popular encryption methods

Encryption Algorithm | Key Type | Security | Speed | Applications
Knapsack Encryption (Merkle-Hellman) | Asymmetric | Strong in its time, but now considered vulnerable due to the LLL algorithm | Slower than symmetric algorithms | Limited due to security concerns; historical interest
RSA | Asymmetric | Secure for large key sizes and proper implementation | Slower compared to symmetric algorithms | Wide range of applications, including SSL/TLS, email encryption, and digital signatures
DES | Symmetric | Weak due to small key size and susceptibility to brute force attacks | Faster than asymmetric algorithms, but slower than alternatives like AES | Historical interest, largely replaced by AES and other secure algorithms

This table shows that while the knapsack encryption algorithm was
revolutionary in its time, it has been surpassed by other encryption
methods such as RSA and AES in terms of security, speed, and
application. Nonetheless, understanding knapsack encryption remains
essential for those interested in the history and development of
cryptography.

Authentication Applications
• will consider authentication functions

• developed to support application-level authentication & digital


signatures

• will consider Kerberos – a private-key authentication service

• then X.509 directory authentication service

Kerberos
• trusted key server system from MIT
• provides centralised private-key third-party
authentication in a distributed network
– allows users access to services distributed
through network
– without needing to trust all workstations
– rather all trust a central authentication server
– Efficiency
• two versions in use: 4 & 5
Kerberos Requirements
• first published report identified its requirements
as:
– security
– reliability
– transparency
– scalability
• implemented using an authentication protocol
based on Needham-Schroeder
• A pure private-key scheme

A 3-step improvement leading to Kerberos V4


• A simple authentication dialogue
– Has to enter password for each server
– Plaintext transmission of password
• AS+TGS model
– Enter the password once for multiple services
– Difficulty in choosing lifetime
• V4 model
– Use private session keys
– Can also verify server
– AS is the KDC for (C, TGS)
– TGS is the KDC for (C, V)
– a basic third-party authentication scheme
– have an Authentication Server (AS)
– users initially negotiate with AS to identify self
– AS provides an authentication credential (ticket granting
ticket, TGT)
– have a Ticket Granting Server (TGS)
– users subsequently request access to other services from
TGS on the basis of the user's TGT


• a Kerberos environment consists of:

– a Kerberos server

– a number of clients, all registered with server

– application servers, sharing keys with server


• this is termed a realm

– typically a single administrative domain

• Inter-realm authentication possible

– Mutual trust required

Kerberos Version 5

• developed in the mid-1990s
• provides improvements over v4
– addresses environmental shortcomings

• encryption alg, network protocol, byte order, ticket lifetime,


authentication forwarding, interrealm auth

– and technical deficiencies

• double encryption, non-std mode of use,


subsession keys

• specified as Internet standard RFC 1510

X.509 Authentication Service


• Part of CCITT X.500 directory service
standards
– distributed servers maintaining some info
database
• defines framework for authentication
services
– Directory may store public-key certificates
– With public key of user
– signed by certification authority
• Also defines authentication protocols
• uses public-key crypto & digital signatures
– Algorithms not standardised, but RSA
recommended
– Used in various contexts, e.g email security,
IP security, web security

• X.509 certificates are issued by a Certification Authority (CA),
containing:
– version (1, 2, or 3)
– serial number (unique within CA) identifying
certificate
– signature algorithm identifier
– issuer X.500 name (CA)
– period of validity (from - to dates)
– subject X.500 name (name of owner)
– subject public-key info (algorithm, parameters, key)
– issuer unique identifier (v2+) , in case of name reuse
– subject unique identifier (v2+) , in case of name
reuse
– extension fields (v3)
– signature (of hash of all fields in certificate,
encrypted by the private key of the CA)
• notation CA<<A>> denotes certificate for A signed by
CA
• any user with access to CA can get any certificate
from it
• only the CA can modify a certificate
• because they cannot be forged, certificates can be
placed in a public directory
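A short sketch of reading these fields from a certificate, assuming the third-party cryptography package; the file name is only a placeholder.

from cryptography import x509

# Load a PEM-encoded certificate, e.g. one exported from a browser.
with open("example_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.version)                  # v1, v2 or v3
print(cert.serial_number)            # unique within the issuing CA
print(cert.issuer)                   # X.500 name of the CA
print(cert.subject)                  # X.500 name of the owner
print(cert.not_valid_before, cert.not_valid_after)   # period of validity
print(cert.signature_hash_algorithm.name)            # signature algorithm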
Authentication procedures:

• X.509 includes three alternative authentication


procedures:
• Assumes each already knows the certified public
key of the other
• One-Way Authentication
• Two-Way Authentication
• Three-Way Authentication
• all use public-key signatures
One-Way Authentication
• 1 message ( A->B) used to establish

– the identity of A and that message is from A

– message was intended for B

– integrity & originality of message


• message must include timestamp, nonce, B's identity and is signed by A

Two-Way Authentication
• 2 messages (A->B, B->A) which also establishes in
addition:
– The identity of B and that reply is from B
– That reply is intended for A
– Integrity & originality of reply
• reply includes original nonce from A, also timestamp
and nonce from B
Three-Way Authentication
• 3 messages (A->B, B->A, A->B) which enables above
authentication without synchronized clocks
• has reply from A back to B containing signed copy of
nonce from B
• means that timestamps need not be checked or
relied upon
Public Key Infrastructure(PKI)
Key Management
 Cryptographic keys are nothing but special pieces of data. Key
management refers to the secure administration of
cryptographic keys.
 Key management deals with the entire key lifecycle.
 There are two
specific requirements of key management for public key
cryptography.
o Secrecy of private keys. Throughout the key lifecycle,
secret keys must remain secret from all parties except
those who are owner and are authorized to use them.
o Assurance of public keys. In public key cryptography,
the public keys are in open domain and seen as public
pieces of data. By default there are no assurances of
whether a public key is correct, with whom it can be
associated, or what it can be used for. Thus key
management of public keys needs to focus much more
explicitly on assurance of purpose of public keys.
 The most crucial requirement of ‘assurance of public key’ can
be achieved through the public-key infrastructure (PKI), a key
management system for supporting public-key cryptography.

Public Key Infrastructure (PKI)


PKI provides assurance of public key. It provides the identification of
public keys and their distribution. The anatomy of a PKI comprises the
following components.

 Public Key Certificate commonly referred to as ‘digital


certificate’.
 Private Key tokens.
 Certification Authority.
 Registration Authority.
 Certificate Management System.
Digital Certificate

For analogy, a certificate can be considered as the ID card issued to


the person. People use ID cards such as a driver's license, passport
to prove their identity. A digital certificate does the same basic thing
in the electronic world, but with one difference.

Digital Certificates are not only issued to people; they can be
issued to computers, software packages, or anything else that needs
to prove its identity in the electronic world.

 Digital certificates are based on the ITU standard X.509 which


defines a standard certificate format for public key certificates
and certification validation. Hence digital certificates are
sometimes also referred to as X.509 certificates.
 The public key pertaining to the user client is stored in the digital
certificate by the Certification Authority (CA) along with other
relevant information such as client information, expiration date,
usage, issuer, etc.
 CA digitally signs this entire information and includes digital
signature in the certificate.
 Anyone who needs assurance about the public key and
associated information of a client carries out the signature
validation process using the CA’s public key. Successful validation
assures that the public key given in the certificate belongs to
the person whose details are given in the certificate.

The process of obtaining Digital Certificate by a person/entity is


depicted in the following illustration.
As shown in the illustration, the CA accepts the application from a
client to certify his public key. The CA, after duly verifying identity of
client, issues a digital certificate to that client.

Certifying Authority (CA)

As discussed above, the CA issues certificates to clients and assists

other users to verify the certificate. The CA takes responsibility for
correctly identifying the client asking for a certificate
to be issued, ensures that the information contained within the
certificate is correct, and digitally signs it.

Key Functions of CA

The key functions of a CA are as follows −

 Generating key pairs − The CA may generate a key pair


independently or jointly with the client.
 Issuing digital certificates − The CA could be thought of as
the PKI equivalent of a passport agency − the CA issues a
certificate after client provides the credentials to confirm his
identity. The CA then signs the certificate to prevent
modification of the details contained in the certificate.
 Publishing Certificates − The CA needs to publish certificates
so that users can find them. There are two ways of achieving
this. One is to publish certificates in the equivalent of an
electronic telephone directory. The other is to send your
certificate out to those people you think might need it by one
means or another.
 Verifying Certificates − The CA makes its public key available
in the environment to assist verification of its signature on clients’
digital certificates.
 Revocation of Certificates − At times, the CA revokes a
certificate it issued due to some reason, such as compromise of the
private key by the user or loss of trust in the client. After
revocation, the CA maintains a list of all revoked certificates that is
available to the environment.

Classes of Certificates

There are four typical classes of certificate −

 Class 1 − These certificates can be easily acquired by


supplying an email address.
 Class 2 − These certificates require additional personal
information to be supplied.
 Class 3 − These certificates can only be purchased after
checks have been made about the requestor’s identity.
 Class 4 − They may be used by governments and financial
organizations needing very high levels of trust.

Registration Authority (RA)

CA may use a third-party Registration Authority (RA) to perform the


necessary checks on the person or company requesting the
certificate to confirm their identity. The RA may appear to the client
as a CA, but they do not actually sign the certificate that is issued.

Certificate Management System (CMS)

It is the management system through which certificates are


published, temporarily or permanently suspended, renewed, or
revoked. Certificate management systems do not normally delete
certificates because it may be necessary to prove their status at a
point in time, perhaps for legal reasons. A CA along with associated
RA runs certificate management systems to be able to track their
responsibilities and liabilities.
Private Key Tokens

While the public key of a client is stored on the certificate, the

associated secret private key can be stored on the key owner’s
computer. This method is generally not adopted: if an attacker gains
access to the computer, he can easily gain access to the private key. For
this reason, a private key is stored on a secure removable storage
token, access to which is protected through a password.

Different vendors often use different and sometimes proprietary


storage formats for storing keys. For example, Entrust uses the
proprietary .epf format, while Verisign, GlobalSign, and Baltimore use
the standard .p12 format.

Hierarchy of CA
Certificate authority (CA) hierarchies are reflected in certificate
chains. A certificate chain traces a path of certificates from a branch
in the hierarchy to the root of the hierarchy.

The following illustration shows a CA hierarchy with a certificate


chain leading from an entity certificate through two subordinate CA
certificates (CA6 and CA3) to the CA certificate for the root CA.
Verifying a certificate chain is the process of ensuring that a specific
certificate chain is valid, correctly signed, and trustworthy. The
following procedure verifies a certificate chain, beginning with the
certificate that is presented for authentication.

Biometric authentication
Biometric authentication is defined as a security
measure that matches the biometric features of a user
looking to access a device or a system. Access to the
system is granted only when the parameters match
those stored in the database for that particular user.

Types of biometric
authentication
1. Fingerprint scanners

Fingerprint scanners — the most common form of biometric


authentication method — scan the swirls and ridges unique to
every person’s fingertips. Current technological advances
have resulted in scanners that go beyond fingerprint ridges to
scan for vascular patterns. This has helped bring down false
positives that occasionally occur with consumer-grade
biometric options found on smartphones. Fingerprint scanners
continue to remain the most accessible and popular.

2. Facial recognition

Like the fingerprint scanner, facial recognition


technology scans a face based on approved and stored
parameters and measurements. These parameters are
collectively called face prints. Access is granted only when a
large number of them are satisfied. Despite the inconsistency
in matching faces to parameters from different angles or
distinguishing between similar or related people, facial
recognition is included in several smart devices.

3. Voice recognition
This version of scanning technologies focuses on vocal
characteristics to distinguish one person from another. A voice is
captured to a database, and several data points are recorded as
parameters for a voiceprint. Vocal recognition technologies focus
more on mouth and throat shape formation and sound qualities than
merely listening to a voice. This helps reduce the chances of
misreading a voice imitation attempt.

4. Eye scanners

Eye scanners include retina and iris scanners. A retina scanner


projects a bright light at an eye to highlight blood vessel patterns
that a scanner can read. These readings are compared to the
information saved in the database. Iris scanners evaluate unique
patterns in the colored ring of the pupil. Both scanner forms are ideal
for hands-free verification. However, they can be unreliable if a
person wears contact lenses or spectacles.

Benefits of Biometric Authentication for Enterprise Security

In a world that is increasingly going online, protecting


personal data and ensuring confidential transactions is a
constant challenge. Passwords are fast becoming obsolete,
and biometric authentication is increasingly becoming critical
to ensuring safety. The global biometric system market size
is projected to reach $65.3 billion by 2024, according
to MarketsandMarkets. Here’s a closer look at the benefits


biometric authentication offers to enterprise security.
