Cns 3
Authentication Requirements:
Revelation: It means releasing the content of the message to someone who does not
have an appropriate cryptographic key.
Analysis of Traffic: Determination of traffic patterns, such as the duration of
connections and the frequency of connections between different parties.
Deception: Adding out of context messages from a fraudulent source into a
communication network. This will lead to mistrust between the parties communicating
and may also cause loss of critical data.
Modification in the Content: Changing the content of a message. This includes
inserting new information or deleting/changing the existing one.
Modification in the sequence: Changing the order of messages between parties. This
includes insertion, deletion, and reordering of messages.
Modification in the Timings: This includes replay and delay of messages sent between
different parties. This way session tracking is also disrupted.
Source Refusal: When the source denies being the originator of a message.
Destination refusal: When the receiver of the message denies the reception.
All message authentication and digital signature mechanisms are based on two functionality
levels:
Lower level: At this level, there is a need for a function that produces an authenticator,
which is the value that will further help in the authentication of a message.
Higher-level: The lower level function is used here in order to help receivers verify the
authenticity of messages.
These message authentication functions are divided into three classes:
Message encryption: While sending data over the internet, there is always a risk of a
Man in the middle(MITM) attack. A possible solution for this is to use message
encryption. In message encryption, the data is first converted to ciphertext and only then
transmitted. Message encryption can be done in two ways:
Symmetric Encryption: Say we have to send the message M from a source P to
destination Q. This message M can be encrypted using a secret key K that both P and Q
share. Without this key K, no other person can get the plain text from the ciphertext. This
maintains confidentiality. Further, Q can be sure that P has sent the message, because
other than Q, P is the only party who possesses the key K; thus any ciphertext that
decrypts correctly under K can only have come from P. This maintains authenticity. At a
very basic level, symmetric encryption is C = E(K, M) at the sender and M = D(K, C) at
the receiver.
Message Authentication Code (MAC): A MAC is a fixed-length checksum computed over a
message using a shared secret key. The MAC process works as follows:
1. The sender and receiver agree on a shared secret key.
2. The sender runs a standard algorithm to create the MAC. As input, the algorithm takes the
original message and the secret key.
3. The algorithm combines the message and secret key and, from this content, generates a
fixed-length checksum that is used to create the MAC.
4. The sender appends the MAC to the message and transmits them both to the receiver.
5. When the receiver receives the message and MAC, it runs the MAC algorithm using the
transmitted message and shared secret key as input.
6. The algorithm combines the message and secret key and, from this content, generates a
fixed-length checksum that is used to create its own MAC.
7. The receiver compares the sender's MAC against its own MAC. If they match, the receiver
accepts the message. If the two MACs do not match, the receiver rejects the message.
When the two MACs match, the receiver knows the message came from the legitimate
sender and was not altered when transmitted between the sender and the receiver. If the
sender and receiver are not using the same secret key or if the message content is different
between the sender and receiver, the MAC values will not match and the receiver rejects the
message.
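The generate-and-verify flow above can be sketched in Python. This is a deliberately simplified construction (the key and message values are made up for illustration), hashing key||message directly; real systems use HMAC instead, since a bare hash over key||message is vulnerable to length-extension attacks:

```python
import hashlib
import hmac  # used here only for constant-time comparison

SECRET_KEY = b"shared-secret"  # hypothetical key known to sender and receiver

def make_mac(message: bytes, key: bytes) -> bytes:
    # Combine the message and secret key, then take a fixed-length checksum.
    return hashlib.sha256(key + message).digest()

# Sender: compute the MAC and transmit (message, mac)
message = b"transfer 100 to account 42"
mac = make_mac(message, SECRET_KEY)

# Receiver: recompute the MAC over the received message and compare
def accept(received_msg: bytes, received_mac: bytes) -> bool:
    return hmac.compare_digest(received_mac, make_mac(received_msg, SECRET_KEY))
```

If the message is altered in transit, or the receiver uses a different key, the recomputed MAC will not match and the message is rejected.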
Although a MAC ensures authenticity and integrity, it does not protect the message data
itself. That is not the purpose of a MAC. For data protection, the message needs to
be encrypted in a separate process. In addition, a MAC does not
offer nonrepudiation capabilities like a digital signature, which provides a record of a
document's origin. With a MAC, there is no way to prove who created the original message.
2. Multiplication Method
In the multiplication method, a constant A (0 < A < 1) is used to multiply the key. The
fractional part of the product is then multiplied by m to get the hash value.
h(k) = ⌊m (kA mod 1)⌋
Where ⌊ ⌋ denotes the floor function.
Advantages:
The value of m is not critical.
Disadvantages:
More complex than the division method.
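A short sketch of the multiplication method; the table size m and Knuth's suggested constant A = (√5 − 1)/2 are illustrative choices, not requirements:

```python
import math

m = 1024                     # table size (illustrative choice)
A = (math.sqrt(5) - 1) / 2   # Knuth's suggested constant, 0 < A < 1

def mult_hash(k: int) -> int:
    # Take the fractional part of k*A, scale by m, and floor the result.
    return math.floor(m * ((k * A) % 1))
```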
3. Mid-Square Method
In the mid-square method, the key is squared, and the middle digits of the result are taken
as the hash value.
Steps:
1. Square the key.
2. Extract the middle digits of the squared value.
Advantages:
Produces a good distribution of hash values.
Disadvantages:
May require more computational effort.
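The two steps above can be sketched as follows; the number of middle digits r is an illustrative parameter:

```python
def mid_square_hash(k: int, r: int = 3) -> int:
    # 1. Square the key.
    s = str(k * k)
    # 2. Extract r middle digits of the squared value.
    start = max(0, (len(s) - r) // 2)
    return int(s[start:start + r])
```

For example, 1234 squared is 1522756, and its three middle digits give a hash value of 227.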
4. Folding Method
Steps:
1. Divide the key into parts.
2. Add the parts together and take the result modulo the table size.
Advantages:
Simple and easy to implement.
Disadvantages:
Depends on the choice of partitioning scheme.
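The folding method can be sketched as follows; the part size and table size are illustrative choices of partitioning scheme:

```python
def folding_hash(k: int, part_size: int = 2, m: int = 100) -> int:
    # 1. Divide the key's digits into fixed-size parts.
    s = str(k)
    parts = [int(s[i:i + part_size]) for i in range(0, len(s), part_size)]
    # 2. Add the parts together and reduce modulo the table size.
    return sum(parts) % m
```

For example, key 123456 folds into 12 + 34 + 56 = 102, giving hash value 2 for a table of size 100.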
5. Cryptographic Hash Functions
Cryptographic hash functions are designed to be secure and are used in cryptography.
Examples include MD5, SHA-1, and SHA-256.
Characteristics:
Pre-image resistance.
Second pre-image resistance.
Collision resistance.
Advantages:
High security.
Disadvantages:
Computationally intensive.
6. Universal Hashing
Universal hashing uses a family of hash functions to minimize the chance of collision for
any given set of inputs.
h(k) = ((a·k + b) mod p) mod m
Where a and b are randomly chosen constants, p is a prime number greater than m,
and k is the key.
Advantages:
Reduces the probability of collisions.
Disadvantages:
Requires more computation and storage.
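The family above can be sketched as follows; the prime p and table size m are illustrative choices, and a fresh (a, b) pair is drawn each time a hash function is picked from the family:

```python
import random

p = 2**31 - 1   # a prime larger than the table size m
m = 1024        # number of buckets (illustrative choice)

def make_universal_hash():
    # a and b are chosen at random once per hash function drawn from the family.
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda k: ((a * k + b) % p) % m
```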
7. Perfect Hashing
Perfect hashing aims to create a collision-free hash function for a static set of keys. It
guarantees that no two keys will hash to the same value.
Types:
Minimal Perfect Hashing: Ensures that the range of the hash function is equal to the
number of keys.
Non-minimal Perfect Hashing: The range may be larger than the number of keys.
Advantages:
No collisions.
Disadvantages:
Complex to construct.
SHA-256 Algorithm:
SHA 256 is a part of the SHA 2 family of algorithms, where SHA stands for Secure Hash Algorithm.
Published in 2001, it was a joint effort between the NSA and NIST to introduce a successor to the SHA 1
family, which was slowly losing strength against brute force attacks.
The 256 in the name refers to the length of the final hash digest: irrespective of the size of the
plaintext/cleartext, the hash value will always be 256 bits.
Some of the
standout features of the SHA algorithm are as follows:
Message Length: The length of the cleartext should be less than 2^64 bits. This limit keeps the
digest as random as possible.
Digest Length: The length of the hash digest should be 256 bits in SHA 256 algorithm, 512 bits in
SHA-512, and so on. Bigger digests usually suggest significantly more calculations at the cost of
speed and space.
Irreversible: By design, all hash functions such as SHA-256 are irreversible: given the digest, you
should not be able to recover the original plaintext.
What is Hashing?
Hashing is the process of scrambling raw information to the extent that it cannot be reproduced in its
original form. It takes a piece of information and passes it through a function that performs mathematical
operations on the plaintext. This function is called the hash function, and the output is called the hash
value/digest.
Password Hashes: Most website servers convert user passwords into a hash value before storing them
on the server. During login, the hash value re-calculated from the entered password is compared to the one
stored in the database for validation.
Integrity Verification: When a file is uploaded to a website, its hash is shared along with it. When a user
downloads the file, they can recalculate the hash and compare it to establish data integrity.
Now that you got a fair idea about the technical requirements for SHA, you can get into its complete
procedure, in the next section.
You can divide the complete process into five different segments, as mentioned below:
Padding Bits
It adds some extra bits to the message, such that the length is exactly 64 bits short of a multiple of 512.
During the addition, the first bit should be one, and the rest of it should be filled with zeroes.
Padding Length
You can now add 64 bits of data to make the final plaintext a multiple of 512. These 64
bits encode the length of the original cleartext (before padding) as an unsigned integer.
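The two padding steps above can be sketched together in Python, assuming a byte-aligned input message:

```python
def sha256_pad(msg: bytes) -> bytes:
    bit_length = len(msg) * 8
    padded = msg + b"\x80"                              # a single '1' bit, then zeroes
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)  # pad to 64 bits short of a 512-bit multiple
    padded += bit_length.to_bytes(8, "big")             # 64-bit big-endian original length
    return padded
```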
You need to initialize the default values for eight buffers to be used in the rounds as follows:
a = 0x6a09e667, b = 0xbb67ae85, c = 0x3c6ef372, d = 0xa54ff53a,
e = 0x510e527f, f = 0x9b05688c, g = 0x1f83d9ab, h = 0x5be0cd19
(These are the first 32 bits of the fractional parts of the square roots of the first eight primes.)
Compression Functions
The entire message gets broken down into multiple blocks of 512 bits each. It puts each block through 64
rounds of operation, with the output of each block serving as the input for the following block. The entire
process is as follows:
[Figure: one round of the SHA-256 compression function. Courtesy: Medium article on SHA-256]
While the value of K[i] in all those rounds is pre-initialized, W[i] is another input that is calculated
individually for each block, depending on the number of iterations being processed at the moment.
Output
With each iteration, the final output of the block serves as the input for the next block. The entire cycle
keeps repeating until you reach the last 512-bit block, and you then consider its output the final hash
digest. This digest will be of the length 256-bit, as per the name of this algorithm.
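This fixed-length property is easy to observe with Python's hashlib:

```python
import hashlib

for msg in (b"", b"abc", b"a" * 100_000):
    digest = hashlib.sha256(msg).digest()
    # The digest is always 32 bytes (256 bits), regardless of input size.
    assert len(digest) == 32
```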
Applications of SHA algorithm
The SHA algorithm is used in many places, some of which are as
follows:
Digital Signature Verification: Digital signatures follow asymmetric encryption methodology to verify the
authenticity of a document/file. Hash algorithms like SHA 256 go a long way in ensuring the verification of the
signature.
Password Hashing: As discussed above, websites store user passwords in a hashed format for two benefits. It
helps foster a sense of privacy, and it lessens the load on the central database since all the digests are of
similar size.
SSL Handshake: The SSL handshake is a crucial segment of the web browsing sessions, and it’s done using
SHA functions. It consists of your web browsers and the web servers agreeing on encryption keys and hashing
authentication to prepare a secure connection.
Integrity Checks: As discussed above, file integrity verification uses variants like the SHA-256 algorithm
and the MD5 algorithm. It helps maintain the full functionality of files and makes sure they were not
altered in transit.
Introduction:
The Secure Hash Algorithm is a family of cryptographic hash functions developed by NIST.
SHA is based on the MD4 algorithm, and its design closely models MD5.
SHA-1 is specified in RFC 3174.
SHA-1 produces a hash value of 160 bits. In 2002, NIST produced a revised version of the
standard, FIPS 180-2, that defined three new versions of SHA, with hash value lengths of 256,
384, and 512 bits, known as SHA-256, SHA-384, and SHA-512, respectively.
SHA-1 logic:
The algorithm takes a message with a maximum length of less than 2^64 bits.
Processing Steps:
Step-1: Append Padding Bits: The original message is "padded" (extended) so that its length
(in bits) is congruent to 448, modulo 512. The padding rules are:
The original message is always padded with one bit "1" first.
Then zero or more bits "0" are padded to bring the length of the message up to 64 bits fewer than
a multiple of 512.
Step-2: Append Length: a block of 64 bits is appended to the message. This block is treated as an
unsigned 64-bit integer (most significant byte first) and contains the length of the original
message.
Step-3: Initialize MD Buffer: a 160-bit buffer is used to hold intermediate and final results of the
hash function. This buffer can be represented as five 32-bit registers (A, B, C, D, E).
Step-4: Process Message in 512-bit (16-word) blocks: The heart of the algorithm is a module
that consists of 80 rounds; this module is labeled F in Figure 11.8. The logic is illustrated in
Figure 11.9.
Step-5: Output: After all 512-bit blocks have been processed, the content of the buffer is the
160-bit message digest.
HMAC:
In recent years, there has been increased interest in developing a MAC derived from a
cryptographic hash function, because hash functions generally execute faster than symmetric
block ciphers. A hash function such as SHA was not designed for use as a MAC and cannot be used
directly for that purpose because it does not rely on a secret key. There have been a number of
proposals for the incorporation of a secret key into an existing hash algorithm, originally by just
pre-pending a key to the message. Problems were found with these earlier, simpler proposals, but
they resulted in the development of HMAC. The design objectives of HMAC are:
• To use, without modifications, available hash functions. In particular, to use hash functions that
perform well in software and for which code is freely and widely available.
• To allow for easy replaceability of the embedded hash function in case faster or more secure hash
functions are found or required.
• To preserve the original performance of the hash function without incurring a significant
degradation.
• To have a well understood cryptographic analysis of the strength of the authentication mechanism.
HMAC Algorithm:
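Python's standard library implements HMAC directly; a minimal sketch (the key and message values below are made up for illustration):

```python
import hashlib
import hmac

key = b"shared-secret"          # hypothetical shared key
message = b"important message"

# Sender: compute the authentication tag.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver: recompute the tag and compare in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(tag, expected)
```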
Cipher-Based Message Authentication Code (CMAC):
DIGITAL SIGNATURES
The digital signature standard (DSS) is an NIST standard that uses the secure hash algorithm (SHA).
Properties
Message authentication protects two parties who exchange messages from any third party.
However, it does not protect the two parties against each other. Several forms of dispute
between the two are possible.
The Digital Signature Standard (DSS) makes use of the Secure Hash Algorithm (SHA) described above and
presents a new digital signature technique, the Digital Signature Algorithm (DSA).
This latest version incorporates digital signature algorithms based on RSA and on elliptic curve
cryptography. In this section, we discuss the original DSS algorithm. The DSS uses an algorithm
that is designed to provide only the digital signature function.
Unlike RSA, it cannot be used for encryption or key exchange. Nevertheless, it is a public-key
technique.
In the RSA approach, the message to be signed is input to a hash function that produces a secure
hash code of fixed length. This hash code is then encrypted using the sender's private key to form
the signature. Both the message and the signature are then transmitted. The recipient takes the
message and produces a hash code.
The recipient also decrypts the signature using the sender's public key. If the calculated hash code
matches the decrypted signature, the signature is accepted as valid. Because only the sender
knows the private key, only the sender could have produced a valid signature.
The DSA is based on the difficulty of computing discrete logarithms and is based on schemes
originally presented by ElGamal and Schnorr. The DSA signature scheme has the advantage of being
both smaller (320 vs 1024 bits) and faster than RSA. Unlike RSA, it cannot be used for encryption or
key exchange. Nevertheless, it is a public-key technique.
DSA typically uses a common set of global parameters (p, q, g) for a community of clients. A
160-bit prime number q is chosen. Next, a prime number p is selected with a length
between 512 and 1024 bits such that q divides (p − 1). Finally, g is chosen to be of the form
h^((p−1)/q) mod p, where h is an integer between 1 and (p − 1) with the restriction that g must
be greater than 1. Thus, the global public key components of DSA have the same form as in the
Schnorr signature scheme.
Signing and Verifying
The structure of the algorithm, as revealed here is quite interesting. Note that the test at the end
is on the value r, which does not depend on the message at all. Instead, r is a function of k and the
three global public-key components. The multiplicative inverse of k (mod q) is passed to a
function that also has as inputs the message hash code and the user's private key. The structure
of this function is such that the receiver can recover r using the incoming message and signature,
the public key of the user, and the global public key.
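The signing and verification structure described above can be illustrated with deliberately tiny, insecure parameters (p = 7879 and q = 101 are toy values chosen so that q divides p − 1; real DSA uses the bit lengths given earlier):

```python
import hashlib
import random

# Toy global parameters (insecure sizes, for illustration only)
q = 101                       # small prime
p = 7879                      # prime with q | (p - 1): 7878 = 78 * 101
h = 3
g = pow(h, (p - 1) // q, p)   # g = h^((p-1)/q) mod p, an element of order q

x = random.randrange(1, q)    # per-user private key
y = pow(g, x, p)              # per-user public key

def H(m: bytes) -> int:
    # Hash the message and reduce into Z_q.
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def sign(m: bytes):
    while True:
        k = random.randrange(1, q)   # fresh per-message secret
        r = pow(g, k, p) % q         # r depends only on k and the globals, not the message
        s = (pow(k, -1, q) * (H(m) + x * r)) % q
        if r != 0 and s != 0:
            return r, s

def verify(m: bytes, r: int, s: int) -> bool:
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)                # multiplicative inverse of s mod q
    u1 = (H(m) * w) % q
    u2 = (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
    return v == r                    # receiver recovers r from message, signature, and keys
```

Note how the final test is on r, matching the description above: verification reconstructs g^k mod p mod q from the message hash, the signature, and the public keys.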
Knapsack Encryption Algorithm
The Knapsack Encryption Algorithm, also known as the Merkle-Hellman
Knapsack cryptosystem, was developed by Ralph Merkle and Martin Hellman in
1978. This groundbreaking algorithm emerged during the early days of public key
cryptography and quickly gained popularity as an innovative method for secure
communication. At that time, it was regarded as a major advancement in
cryptography due to its asymmetric-key nature – a technique that requires two
separate keys for encryption and decryption.
How does it work? (worked example)
P = message = 0100 1011 1010 0101 (processed 4 bits at a time)
W = easy (private, superincreasing) knapsack = {1, 2, 4, 9}
q = 17 (modulus), r = 15 (multiplier)
H = hard (public) knapsack, Ki = wi * r mod q:
1 * 15 mod 17 = 15
2 * 15 mod 17 = 13
4 * 15 mod 17 = 9
9 * 15 mod 17 = 16
So H = {15, 13, 9, 16}. Encrypting each 4-bit block of P with H gives the
ciphertext blocks 13, 40, 24, 29.
Sol: decryption uses r^-1 = 8 (since 15 * 8 mod 17 = 1):
13 * 8 mod 17 = 2  = 2         = bits 0100 over {1, 2, 4, 9}
40 * 8 mod 17 = 14 = 1 + 4 + 9 = bits 1011 over {1, 2, 4, 9}
24 * 8 mod 17 = 5  = 1 + 4     = bits 1010 over {1, 2, 4, 9}
29 * 8 mod 17 = 11 = 2 + 9     = bits 0101 over {1, 2, 4, 9}
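This worked example can be reproduced in a few lines of Python (toy parameters only; real Merkle-Hellman uses large superincreasing sequences, and the scheme is historically interesting rather than secure):

```python
W = [1, 2, 4, 9]   # easy (superincreasing) private knapsack
q, r = 17, 15      # modulus and multiplier, gcd(r, q) = 1

H = [(w * r) % q for w in W]   # public hard knapsack
r_inv = pow(r, -1, q)          # modular inverse of r

def encrypt(bits: str) -> int:
    # Sum the public-knapsack elements selected by the plaintext bits.
    return sum(h for h, b in zip(H, bits) if b == "1")

def decrypt(c: int) -> str:
    s = (c * r_inv) % q        # map back into the easy knapsack
    out = []
    for w in reversed(W):      # greedy solve, valid because W is superincreasing
        if s >= w:
            out.append("1")
            s -= w
        else:
            out.append("0")
    return "".join(reversed(out))

blocks = ["0100", "1011", "1010", "0101"]
cts = [encrypt(b) for b in blocks]
```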
Comparison to other Encryption Algorithms
While the knapsack encryption algorithm was revolutionary in its time, it has
been surpassed by other encryption methods such as RSA and AES in terms of
security, speed, and application. Nonetheless, understanding knapsack
encryption remains essential for those interested in the history and
development of cryptography.
Authentication Applications
• will consider authentication functions
Kerberos
• trusted key server system from MIT
• provides centralised private-key third-party
authentication in a distributed network
– allows users access to services distributed
through network
– without needing to trust all workstations
– rather all trust a central authentication server
– Efficiency
• two versions in use: 4 & 5
Kerberos Requirements
• first published report identified its requirements
as:
– security
– reliability
– transparency
– scalability
• implemented using an authentication protocol
based on Needham-Schroeder
• a pure private-key scheme
• a Kerberos environment consists of:
– a Kerberos server
– a number of clients
– a number of application servers
Kerberos Version 5
• developed in mid-1990’s
• provides improvements over v4
– addresses environmental shortcomings
Two-Way Authentication
• 2 messages (A->B, B->A) which additionally
establish:
– The identity of B and that reply is from B
– That reply is intended for A
– Integrity & originality of reply
• reply includes original nonce from A, also timestamp
and nonce from B
Three-Way Authentication
• 3 messages (A->B, B->A, A->B) which enables above
authentication without synchronized clocks
• has reply from A back to B containing signed copy of
nonce from B
• means that timestamps need not be checked or
relied upon
Public Key Infrastructure(PKI)
Key Management
Cryptographic keys are nothing but special pieces of data. Key
management refers to the secure administration of
cryptographic keys.
Key management deals with the entire key lifecycle: key
generation, distribution, storage, use, rotation, and
revocation/destruction.
There are two specific requirements of key management for
public key cryptography.
o Secrecy of private keys. Throughout the key lifecycle,
private keys must remain secret from all parties except
their owner and those authorized to use them.
o Assurance of public keys. In public key cryptography,
the public keys are in open domain and seen as public
pieces of data. By default there are no assurances of
whether a public key is correct, with whom it can be
associated, or what it can be used for. Thus key
management of public keys needs to focus much more
explicitly on assurance of purpose of public keys.
The most crucial requirement, 'assurance of public keys', can
be achieved through the public-key infrastructure (PKI), a key
management system for supporting public-key cryptography.
Digital Certificates are not only issued to people; they can be
issued to computers, software packages, or anything else that needs
to prove its identity in the electronic world.
Key Functions of CA
Classes of Certificates
Hierarchy of CA
Certificate authority (CA) hierarchies are reflected in certificate
chains. A certificate chain traces a path of certificates from a branch
in the hierarchy to the root of the hierarchy.
Biometric authentication
Biometric authentication is defined as a security
measure that matches the biometric features of a user
looking to access a device or a system. Access to the
system is granted only when the parameters match
those stored in the database for that particular user
Types of biometric
authentication
1. Fingerprint scanners
2. Facial recognition
3. Voice recognition
This version of scanning technologies focuses on vocal
characteristics to distinguish one person from another. A voice is
captured to a database, and several data points are recorded as
parameters for a voiceprint. Vocal recognition technologies focus
more on mouth and throat shape formation and sound qualities than
merely listening to a voice. This helps reduce the chances of
misreading a voice imitation attempt.
4. Eye scanners