
User Authentication

Our earlier discussion of authentication involves messages and sessions. But what about
users? If a system cannot authenticate a user, then authenticating that a message came from
that user is pointless. Thus, a major security problem for operating systems is user
authentication. The protection system depends on the ability to identify the programs and
processes currently executing, which in turn depends on the ability to identify each user of
the system. Users normally identify themselves. How do we determine whether a user’s
identity is authentic? Generally, user authentication is based on one or more of three things:
the user’s possession of something (a key or card), the user’s knowledge of something (a user
identifier and password), or an attribute of the user (fingerprint, retina pattern, or signature).
 Passwords
The most common approach to authenticating a user identity is the use of passwords.
When the user identifies herself by user ID or account name, she is asked for a
password. If the user-supplied password matches the password stored in the system,
the system assumes that the account is being accessed by the owner of that account.
Passwords are often used to protect objects in the computer system, in the absence of
more complete protection schemes. They can be considered a special case of either
keys or capabilities. For instance, a password may be associated with each resource
(such as a file). Whenever a request is made to use the resource, the password must be
given. If the password is correct, access is granted. Different passwords may be
associated with different access rights. For example, different passwords may be used
for reading files, appending files, and updating files. In practice, most systems require
only one password for a user to gain full rights. Although more passwords
theoretically would be more secure, such systems tend not to be implemented due to
the classic trade-off between security and convenience. If security makes something
inconvenient, then the security is frequently bypassed or otherwise circumvented.

 Password Vulnerabilities
Passwords are extremely common because they are easy to understand and use.
Unfortunately, passwords can often be guessed, accidentally exposed, sniffed (read by
an eavesdropper), or illegally transferred from an authorized user to an unauthorized
one, as we show next.
There are two common ways to guess a password. One way is for the intruder (either
human or program) to know the user or to have information about the user. All too
frequently, people use obvious information (such as the names of their cats or
spouses) as their passwords. The other way is to use brute force, trying enumeration—
or all possible combinations of valid password characters (letters, numbers, and
punctuation on some systems)—until the password is found. Short passwords are
especially vulnerable to this method. For example, a four-digit password (using only the
digits 0 through 9) provides only 10,000 variations. On average, guessing 5,000 times would
produce a correct hit. A program that could try a password every millisecond would take
only about 5 seconds to guess such a password. Enumeration is less successful where
systems allow longer passwords that include both uppercase and lowercase letters,
along with numbers and all punctuation characters. Of course, users must take
advantage of the large password space and must not, for example, use only lowercase
letters.
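
To make the size of the search space concrete, here is a minimal Python sketch. The rate of
1,000 guesses per second matches the "one per millisecond" figure above; everything else is
an illustrative assumption, not a benchmark.

    # Rough brute-force cost: the search space is alphabet_size ** length, and on
    # average an attacker must try half of it before hitting the right password.

    def average_crack_seconds(alphabet_size, length, guesses_per_second=1000):
        """Average time to enumerate half of all possible passwords."""
        search_space = alphabet_size ** length
        return (search_space / 2) / guesses_per_second

    # Four decimal digits: 10**4 = 10,000 variations -> about 5 seconds on average.
    print(average_crack_seconds(10, 4))    # 5.0

    # Eight characters drawn from upper- and lower-case letters plus digits (62 symbols):
    print(average_crack_seconds(62, 8))    # ~1.1e11 seconds, i.e. thousands of years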

 Securing Passwords
One problem with all these approaches is the difficulty of keeping the password secret
within the computer. How can the system store a password securely yet allow its use
for authentication when the user presents her password? The UNIX system uses
secure hashing to avoid the necessity of keeping its password list secret. Because the
list is hashed rather than encrypted, it is impossible for the system to decrypt the
stored value and determine the original password. Here’s how this system works.
Each user has a password. The system contains a function that is extremely difficult—
the designers hope impossible —to invert but is simple to compute. That is, given a
value x, it is easy to compute the hash function value f(x). Given a function value f(x),
however, it is impossible to compute x. This function is used to encode all passwords.
Only encoded passwords are stored. When a user presents a password, it is hashed and
compared against the stored encoded password. Even if the stored encoded password
is seen, it cannot be decoded, so the password cannot be determined. Thus, the
password file does not need to be kept secret.
The flaw in this method is that the system no longer has control over the passwords.
Although the passwords are hashed, anyone with a copy of the password file can run
fast hash routines against it—hashing each word in a dictionary, for instance, and
comparing the results against the passwords. If the user has selected a password that is
also a word in the dictionary, the password is cracked. On sufficiently fast computers,
or even on clusters of slow computers, such a comparison may take only a few hours.
Furthermore, because UNIX systems use a well-known hashing algorithm, a cracker
might keep a cache of passwords that have been cracked previously. For these
reasons, systems include a “salt,” or recorded random number, in the hashing
algorithm. The salt value is added to the password to ensure that if two plaintext
passwords are the same, they result in different hash values. In addition, the salt value
makes hashing a dictionary ineffective, because each dictionary term would need to
be combined with each salt value for comparison to the stored passwords. Newer
versions of UNIX also store the hashed password entries in a file readable only by the
superuser. The programs that compare the hash to the stored value are run setuid to
root, so they can read this file, but other users cannot.
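
As a minimal illustration of the salted-hash idea just described (not the actual UNIX
crypt() implementation, and using SHA-256 purely for brevity; production systems use
deliberately slow functions such as bcrypt, scrypt, or PBKDF2), the following Python
sketch stores only a salt and a hash and checks a presented password against them:

    # Minimal sketch of salted password hashing -- illustrative only.
    import hashlib
    import hmac
    import os

    def store_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, hash); only these two values need to be stored."""
        salt = os.urandom(16)                       # the recorded random "salt"
        digest = hashlib.sha256(salt + password.encode()).digest()
        return salt, digest

    def check_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
        """Hash the presented password with the stored salt and compare."""
        digest = hashlib.sha256(salt + password.encode()).digest()
        return hmac.compare_digest(digest, stored_digest)   # constant-time compare

    salt, stored = store_password("correct horse")
    assert check_password("correct horse", salt, stored)
    assert not check_password("wrong guess", salt, stored)

Because each account gets its own random salt, two users with the same password store
different hashes, and a precomputed dictionary of hashes is useless against the file.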

 One-Time Passwords
To avoid the problems of password sniffing and shoulder surfing, a system can use a
set of paired passwords. When a session begins, the system randomly selects and
presents one part of a password pair; the user must supply the other part. In this
system, the user is challenged and must respond with the correct answer to that
challenge. This approach can be generalized to the use of an algorithm as a password.
Such algorithmic passwords are not susceptible to reuse. That is, a user can type in a
password, and no entity intercepting that password will be able to reuse it. In this
scheme, the system and the user share a symmetric password. The password pw is
never transmitted over a medium that allows exposure. Rather, the password is used
as input to the function, along with a challenge ch presented by the system. The user
then computes the function H(pw, ch). The result of this function is transmitted as the
authenticator to the computer. Because the computer also knows pw and ch, it can
perform the same computation. If the results match, the user is authenticated. The next
time the user needs to be authenticated, another ch is generated, and the same steps
ensue. This time, the authenticator is different. This one-time password system is one
of only a few ways to prevent improper authentication due to password exposure.
One-time password systems are implemented in various ways. Commercial
implementations use hardware calculators with a display or a display and numeric
keypad. These calculators generally take the shape of a credit card, a key-chain
dongle, or a USB device. Software running on computers or smartphones provides the
user with H(pw, ch); pw can be input by the user or generated by the calculator in
synchronization with the computer. Sometimes, pw is just a personal identification
number (PIN). The output of any of these systems shows the one-time password. A
one-time password generator that requires input by the user involves two-factor
authentication. Two different types of components are needed in this case—for
example, a one-time password generator that generates the correct response only if the
PIN is valid. Two-factor authentication offers far better authentication protection than
single-factor authentication because it requires “something you have” as well as
“something you know.”
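
The H(pw, ch) exchange described above can be sketched in a few lines of Python. HMAC-SHA256
is used here as the shared function H; that choice is illustrative rather than what any
particular commercial product implements:

    # Minimal challenge-response sketch of H(pw, ch).
    import hashlib
    import hmac
    import secrets

    def authenticator(pw: bytes, ch: bytes) -> bytes:
        """H(pw, ch): both sides compute this; only the result crosses the network."""
        return hmac.new(pw, ch, hashlib.sha256).digest()

    # Server side: issue a fresh challenge for this session.
    shared_pw = b"shared secret or PIN"    # never transmitted
    ch = secrets.token_bytes(16)           # different every session

    # Client side: compute and send the authenticator.
    response = authenticator(shared_pw, ch)

    # Server side: recompute and compare; a replayed response fails on the next challenge.
    assert hmac.compare_digest(response, authenticator(shared_pw, ch))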

 Biometrics
Yet another variation on the use of passwords for authentication involves the use of
biometric measures. Palm- or hand-readers are commonly used to secure physical
access—for example, access to a data center. These readers match stored parameters
against what is being read from hand-reader pads. The parameters can include a
temperature map, as well as finger length, finger width, and line patterns. These
devices are currently too large and expensive to be used for normal computer
authentication. Fingerprint readers have become accurate and cost-effective and
should become more common in the future. These devices read finger ridge patterns
and convert them into a sequence of numbers. Over time, they can store a set of
sequences to adjust for the location of the finger on the reading pad and other factors.
Software can then scan a finger on the pad and compare its features with these stored
sequences to determine if they match. Of course, multiple users can have profiles
stored, and the scanner can differentiate among them. A very accurate two-factor
authentication scheme can result from requiring a password as well as a user name
and fingerprint scan. If this information is encrypted in transit, the system can be very
resistant to spoofing or replay attack. Multifactor authentication is better still.
Consider how strong authentication can be with a USB device that must be plugged
into the system, a PIN, and a fingerprint scan. Except for having to place one's finger
on a pad and plug the USB device into the system, this authentication method is no less
convenient than using normal passwords. Recall, though, that strong authentication by
itself is not sufficient to guarantee the identity of the user. An
authenticated session can still be hijacked if it is not encrypted.

Authentication

 Authentication involves verifying the identity of the entity who transmitted a message.
 For example, if D(Kd)(c) produces a valid message, then we know the
sender was in possession of E(Ke).
 This form of authentication can also be used to verify that a message
has not been modified
 Authentication revolves around two functions, one used for signatures (or signing)
and one for verification:
o A signing function, S(Ks), that produces an authenticator,
A, from any given message m.
o A verification function, V(Kv,m,A), that produces a value of
"true" if A was created from m, and "false" otherwise.
o Obviously, S and V must both be computationally efficient.
o More importantly, it must not be possible to generate a valid
authenticator, A, without having possession of S(Ks).
o Furthermore, it must not be possible to divine S(Ks) from the
combination of (m and A), since both are sent visibly across
networks.
 Understanding authenticators begins with an understanding of hash
functions:
o Hash functions, H(m) generate a small fixed-size block of data
known as a message digest, or hash value from any given
input data.
o For authentication purposes, the hash function must
be collision resistant on m. That is, it should not be reasonably
possible to find an alternate message m' such that H(m') =
H(m).
o Popular hash functions are MD5, which generates a 128-bit
message digest, and SHA-1, which generates a 160-bit digest.
 Message digests are useful for detecting (accidentally) changed
messages, but are not useful as authenticators, because if the hash
function is known, then someone could easily change the message and
then generate a new hash value for the modified message. Therefore,
authenticators take things one step further by encrypting the message
digest.
 A message-authentication code, MAC, uses symmetric encryption
and decryption of the message digest, which means that anyone
capable of verifying an incoming message could also generate a new
message (see the HMAC sketch after this list).
 An asymmetric approach is the digital-signature algorithm, which
produces authenticators called digital signatures. In this case Ks and
Kv are separate, Kv is the public key, and it is not practical to
determine S(Ks) from public information. In practice the sender of a
message signs it (produces a digital signature using S(Ks)), and the
receiver uses V(Kv) to verify that it did indeed come from a trusted
source, and that it has not been modified.
 There are three good reasons for having separate algorithms for
encryption of messages and authentication of messages:

o Authentication algorithms typically require fewer calculations,
making verification a faster operation than encryption.
o Authenticators are almost always smaller than the messages,
improving space efficiency.
o Sometimes we want authentication only, and not
confidentiality, such as when a vendor issues a new software
patch.
 Another use of authentication is non-repudiation, in which a person
filling out an electronic form cannot deny that they were the ones who
did so.
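
As a concrete, simplified instance of the S and V functions above (the HMAC sketch
referenced from the MAC item), the following builds a message-authentication code from
HMAC-SHA256. Because the signing and verification keys are identical here, anyone who
can verify can also sign, which is exactly why a MAC, unlike a digital signature,
cannot provide non-repudiation:

    # Minimal MAC sketch: S(Ks) and V(Kv) built from HMAC-SHA256.
    import hashlib
    import hmac

    def S(Ks: bytes, m: bytes) -> bytes:
        """Signing function: produce an authenticator A from message m."""
        return hmac.new(Ks, m, hashlib.sha256).digest()

    def V(Kv: bytes, m: bytes, A: bytes) -> bool:
        """Verification function: true iff A was created from m with the key."""
        return hmac.compare_digest(A, hmac.new(Kv, m, hashlib.sha256).digest())

    key = b"shared MAC key"
    m = b"install this patch"
    A = S(key, m)
    assert V(key, m, A)                       # authentic, unmodified message
    assert not V(key, b"install malware", A)  # any modification is detected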

Key Distribution
 Key distribution with symmetric cryptography is a major problem,
because all keys must be kept secret, and they obviously can't be
transmitted over insecure channels. One option is to send them out-of-
band, say via paper or a confidential conversation.
 Another problem with symmetric keys is that a separate key must be
maintained and used for each correspondent with whom one wishes to
exchange confidential information.
 Asymmetric encryption solves some of these problems, because the
public key can be freely transmitted through any channel, and the
private key doesn't need to be transmitted anywhere. Recipients only
need to maintain one private key for all incoming messages, though
senders must maintain a separate public key for each recipient to which
they might wish to send a message. Fortunately the public keys are not
confidential, so this key-ring can be easily stored and managed.
 Unfortunately there are still some security concerns regarding the
public keys used in asymmetric encryption. Consider for example the
following man-in-the-middle attack involving phony public keys:
Figure 15.9 - A man-in-the-middle attack on asymmetric cryptography.

 One solution to the above problem involves digital certificates, which are
public keys that have been digitally signed by a trusted third party. But wait a
minute - How do we trust that third party, and how do we know they are really
who they say they are? Certain certificate authorities have their public keys
included within web browsers and other certificate consumers before they are
distributed. These certificate authorities can then vouch for other trusted
entities and so on in a web of trust.
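
A digital certificate can be sketched as a public key plus attributes, signed by a CA key
that the client already trusts. The sketch below assumes the third-party Python
`cryptography` package and uses Ed25519 signatures with an ad-hoc encoding; real
certificates use X.509, and all names here are illustrative:

    # Toy digital certificate: (attributes + server public key) signed by a CA.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The CA's key pair; clients ship with ca_public built in (e.g. in a browser).
    ca_private = Ed25519PrivateKey.generate()
    ca_public = ca_private.public_key()

    # The server's key pair and the data the CA vouches for.
    server_private = Ed25519PrivateKey.generate()
    server_public_raw = server_private.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    tbs = b"CN=www.example.com|" + server_public_raw     # "to be signed" blob
    certificate = (tbs, ca_private.sign(tbs))            # contents + CA signature

    # A client that trusts ca_public can check the binding between the name and the
    # key before using it, which defeats the man-in-the-middle attack shown above.
    contents, signature = certificate
    try:
        ca_public.verify(signature, contents)
        print("certificate OK")
    except InvalidSignature:
        print("certificate forged or tampered with")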

Implementation of Cryptography
 Network communications are implemented in multiple layers - Physical, Data
Link, Network, Transport, and Application being the most common
breakdown.
 Encryption and security can be implemented at any layer in the stack, with
pros and cons to each choice:
o Because packets at lower levels contain the contents of higher layers,
encryption at lower layers automatically encrypts higher layer
information at the same time.
o However, security and authorization may be important to higher levels
independent of the underlying transport mechanism or route taken.
 At the network layer the most common standard is IPSec, a secure form of the
IP layer, which is used to set up Virtual Private Networks, VPNs.
 At the transport layer the most common implementation is SSL, described
below.

An Example: SSL

 SSL (Secure Sockets Layer) 3.0 was first developed by Netscape, and has now
evolved into the industry-standard TLS protocol. It is used by web browsers to
communicate securely with web servers, making it perhaps the most widely
used security protocol on the Internet today.
 SSL is quite complex with many variations, only a simple case of which is
shown here.
 The heart of SSL is session keys, which are used once for symmetric
encryption and then discarded, requiring the generation of new keys for each
new session. The big challenge is how to safely create such keys while
avoiding man-in-the-middle and replay attacks.
 Prior to commencing the transaction, the server obtains a certificate from
a certification authority, CA, containing:
o Server attributes such as unique and common names.
o Identity of the public encryption algorithm, E( ), for the server.
o The public key, k_e for the server.
o The validity interval within which the certificate is valid.
o A digital signature on the above issued by the CA:
a = S(K_CA)( attrs, E(k_e), interval )
 In addition, the client will have obtained a public verification
algorithm, V( K_CA ), for the certifying authority. Modern browsers ship with
these built in by the browser vendor for a number of trusted certificate
authorities.
 The procedure for establishing secure communications is as follows (a sketch of
the key derivation appears after this list):

o The client, c, connects to the server, s, and sends a random 28-byte
number, n_c.
o The server replies with its own random value, n_s, along with its
certificate of authority.
o The client uses its verification algorithm to confirm the identity of the
sender, and if all checks out, then the client generates a 46-byte
random premaster secret, pms, and sends an encrypted version of it,
protected with the server's public key from the certificate,
as cpms = E(k_e)(pms)
o The server recovers pms with its private key as pms = D(k_d)(cpms).
o Now both the client and the server can compute a shared 48-
byte master secret, ms = f( pms, n_s, n_c ).
o Next, both client and server generate the following from ms:
 Symmetric encryption keys k_sc_crypt and k_cs_crypt for
encrypting messages from the server to the client and vice-
versa respectively.
 MAC generation keys k_sc_mac and k_cs_mac for generating
authenticators on messages from server to client and client to
server respectively.
o To send a message to the server, the client sends:
c = E(k_cs_crypt)( m, S(k_cs_mac)(m) )
o Upon receiving c, the server recovers:
 (m,a) = D(k_cs_crypt)(c)
and accepts it if V(k_cs_mac)(m,a) is true.
 This approach enables both the server and client to verify the authenticity of
every incoming message, and to ensure that outgoing messages are only
readable by the process that originally participated in the key generation.
 SSL is the basis of many secure protocols, including Virtual Private
Networks, VPNs, in which private data is distributed over the insecure public
internet structure in an encrypted fashion that emulates a privately owned
network.
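
The key-derivation and record-protection steps above can be sketched as follows. This is
illustrative only: the derivation function f here is an ad-hoc HMAC construction rather
than the real TLS PRF, the asymmetric protection of pms with E(k_e) is omitted, and the
symmetric encryption of records is only indicated in a comment:

    # Minimal sketch of the SSL-style key setup and record protection described above.
    import hashlib
    import hmac
    import secrets

    def f(secret: bytes, n_s: bytes, n_c: bytes, label: bytes, length: int) -> bytes:
        """Derive `length` key bytes from a secret and both nonces."""
        out = b""
        counter = 0
        while len(out) < length:
            out += hmac.new(secret, label + n_s + n_c + bytes([counter]),
                            hashlib.sha256).digest()
            counter += 1
        return out[:length]

    # Handshake values (the client would really send pms as cpms = E(k_e)(pms)).
    n_c = secrets.token_bytes(28)   # client nonce
    n_s = secrets.token_bytes(28)   # server nonce
    pms = secrets.token_bytes(46)   # premaster secret

    ms = f(pms, n_s, n_c, b"master secret", 48)          # 48-byte master secret

    # Both sides derive separate keys for each direction and purpose.
    k_cs_crypt = f(ms, n_s, n_c, b"client write key", 32)
    k_sc_crypt = f(ms, n_s, n_c, b"server write key", 32)
    k_cs_mac   = f(ms, n_s, n_c, b"client mac key", 32)
    k_sc_mac   = f(ms, n_s, n_c, b"server mac key", 32)

    # Client -> server record: MAC the message (then, in the real protocol, encrypt
    # m together with the authenticator under k_cs_crypt before sending).
    m = b"GET / HTTP/1.1"
    a = hmac.new(k_cs_mac, m, hashlib.sha256).digest()

    # Server side: recompute the MAC with the same key and accept only on a match.
    assert hmac.compare_digest(a, hmac.new(k_cs_mac, m, hashlib.sha256).digest())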

User Authentication

 Protection dealt with making sure that only certain users were allowed to perform
certain tasks, i.e. that a user's privileges were dependent on his or her identity. But
how does one verify that identity to begin with?

Implementing Security Defence

Security Policy

 A security policy should be well thought-out, agreed upon, and contained in a
living document that everyone adheres to and is updated as needed.
 Examples of contents include how often port scans are run, password
requirements, virus detectors, etc.

Vulnerability Assessment

 Periodically examine the system to detect vulnerabilities.
o Port scanning.
o Check for bad passwords.
o Look for suid programs (a minimal scan sketch follows this list).
o Unauthorized programs in system directories.
o Incorrect permission bits set.
o Program checksums / digital signatures which have changed.
o Unexpected or hidden network daemons.
o New entries in startup scripts, shutdown scripts, cron tables, or other
system scripts or configuration files.
o New unauthorized accounts.
 The government considers a system to be only as secure as its most far-
reaching component. Any system connected to the Internet is inherently less
secure than one that is in a sealed room with no external communications.
 Some administrators advocate "security through obscurity", aiming to keep as
much information about their systems hidden as possible and not announcing
any security concerns they come across. Others announce security concerns
from the rooftops, under the theory that the hackers are going to find out
anyway, and the only ones kept in the dark by obscurity are the honest
administrators who need to get the word out.
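
As one concrete example from the checklist above (the scan referenced from the suid item),
the following Python sketch walks a directory tree looking for setuid or setgid
executables; the starting directory is an illustrative assumption:

    # Minimal vulnerability-assessment check: find setuid/setgid programs.
    import os
    import stat

    def find_setuid_programs(root="/usr/bin"):
        """Yield paths whose setuid or setgid bit is set."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue                      # unreadable or vanished file
                if mode & (stat.S_ISUID | stat.S_ISGID):
                    yield path

    for path in find_setuid_programs():
        print("setuid/setgid:", path)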

Intrusion Detection

 Intrusion detection attempts to detect both successful attacks and unsuccessful
attempts. Different techniques vary along several axes:
o The time that detection occurs, either during the attack or after the fact.
o The types of information examined to detect the attack(s). Some
attacks can only be detected by analysing multiple sources of
information.
o The response to the attack, which may range from alerting an
administrator to automatically stopping the attack (e.g. killing an
offending process), to tracing back the attack in order to identify the
attacker.
 Another approach is to divert the attacker to a honeypot, on
a honeynet. The idea behind a honeypot is a computer running
normal services, but which no one uses to do any real work.
Such a system should not see any network traffic under normal
conditions, so any traffic going to or from such a system is by
definition suspicious. Honeypots are normally kept on a
honeynet protected by a reverse firewall, which will let
potential attackers in to the honeypot, but will not allow any
outgoing traffic. ( So that if the honeypot is compromised, the
attacker cannot use it as a base of operations for attacking other
systems. ) Honeypots are closely watched, and any suspicious
activity carefully logged and investigated.
 Intrusion Detection Systems, IDSs, raise the alarm when they detect an
intrusion. Intrusion Detection and Prevention Systems, IDPs, act as filtering
routers, shutting down suspicious traffic when it is detected.
 There are two major approaches to detecting problems:
o Signature-Based Detection scans network packets, system files, etc.,
looking for recognizable characteristics of known attacks, such as telltale
text strings or the binary code for "exec /bin/sh" (a minimal scanning
sketch follows this list). The problem with this approach is that it can
only detect previously encountered attacks for which a signature is known,
requiring frequent updates of the signature lists.
o Anomaly Detection looks for "unusual" patterns of traffic or operation,
such as unusually heavy load or an unusual number of logins late at
night.
 The benefit of this approach is that it can detect previously
unknown attacks, so-called zero-day attacks.
 One problem with this method is characterizing what is
"normal" for a given system. One approach is to benchmark the
system, but if the attacker is already present when the
benchmarks are made, then the "unusual" activity is recorded as
"the norm."
 Another problem is that not all changes in system performance
are the result of security attacks. If the system is bogged down
and really slow late on a Thursday night, does that mean that a
hacker has gotten in and is using the system to send out SPAM,
or does it simply mean that a CS 385 assignment is due on
Friday? :-)
 To be effective, anomaly detectors must have a very low false
alarm (false positive) rate, lest the warnings get ignored, as
well as a low false negative rate in which attacks are missed.
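
A minimal signature-based scan (referenced from the signature-based detection item above)
can be sketched as a simple substring search; the signature list here is illustrative,
and real systems use far larger, frequently updated databases and more efficient matching:

    # Minimal signature-based detection sketch: look for known attack byte patterns.
    SIGNATURES = {
        b"exec /bin/sh": "shellcode-style exec of /bin/sh",
        b"/etc/passwd":  "attempt to read the password file",
    }

    def scan(data: bytes):
        """Return descriptions of every known signature found in the data."""
        return [desc for pattern, desc in SIGNATURES.items() if pattern in data]

    packet = b"GET /cgi-bin/x?cmd=exec /bin/sh HTTP/1.0"
    print(scan(packet))   # ['shellcode-style exec of /bin/sh']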

Virus Protection

 Modern anti-virus programs are basically signature-based detection systems,
which also have the ability (in some cases) to disinfect the affected files
and return them to their original condition.
 Both viruses and anti-virus programs are rapidly evolving. For example,
viruses now commonly mutate every time they propagate, and so anti-virus
programs look for families of related signatures rather than specific ones.
 Some antivirus programs look for anomalies, such as an executable program
being opened for writing (other than by a compiler).
 Avoiding bootleg, free, and shared software can help reduce the chance of
catching a virus, but even shrink-wrapped official software has on occasion
been infected by disgruntled factory workers.
 Some virus detectors will run suspicious programs in a sandbox, an isolated
and secure area of the system which mimics the real system.
 Rich Text Format, RTF, files cannot carry macros, and hence cannot carry
Word macro viruses.
 Known safe programs (e.g. right after a fresh install or after a thorough
examination) can be digitally signed, and periodically the files can be re-
verified against the stored digital signatures (which should be kept secure,
such as on an off-line, write-once medium).
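
A minimal sketch of the re-verification step just described follows; the watched file
paths are hypothetical, and in practice the recorded baseline itself must be protected
as noted above:

    # Minimal sketch of re-verifying files against stored digests.
    import hashlib

    WATCHED_FILES = ["/usr/bin/login", "/usr/bin/ssh"]    # hypothetical paths

    def file_digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Step 1: record a baseline right after a fresh install or careful audit.
    baseline = {path: file_digest(path) for path in WATCHED_FILES}

    # Step 2 (run periodically): recompute and compare against the baseline.
    for path, expected in baseline.items():
        if file_digest(path) != expected:
            print("WARNING: contents of", path, "have changed")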

Auditing, Accounting, and Logging

 Auditing, accounting, and logging records can also be used to detect
anomalous behaviour.
 Some of the kinds of things that can be logged include authentication failures
and successes, logins, running of suid or sgid programs, network accesses,
system calls, etc. In extreme cases almost every keystroke and electron that
moves can be logged for future analysis. (Note that on the flip side, all this
detailed logging can also be used to analyse system performance. The down
side is that the logging also affects system performance (negatively!), and so a
Heisenberg effect applies).
 "The Cuckoo's Egg" tells the story of how Cliff Stoll detected one of the early
UNIX break-ins when he noticed anomalies in the accounting records on a
computer system being used by physics researchers.

Firewalling to Protect Systems and Networks

 Firewalls are devices (or sometimes software) that sit on the border between two
security domains and monitor/log activity between them, sometimes restricting the
traffic that can pass between them based on certain criteria.
 For example, a firewall router may allow HTTP requests to pass through to a web
server inside a company domain while not allowing telnet, ssh, or other traffic to pass
through (a minimal rule-matching sketch follows the figure below).
 A common architecture is to establish a de-militarized zone, DMZ, which sort of sits
"between" the company domain and the outside world, as shown below. Company
computers can reach either the DMZ or the outside world, but outside computers can
only reach the DMZ. Perhaps most importantly, the DMZ cannot reach any of the
other company computers, so even if the DMZ is breached, the attacker cannot get to
the rest of the company network. (In some cases, the DMZ may have limited access to
company computers, such as a web server on the DMZ that needs to query a database
on one of the other company computers).

Figure 3 - Domain separation via firewall.
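
The allow/deny behaviour described above (and referenced from the HTTP example earlier in
this section) can be sketched as a small first-match rule table; the addresses, ports, and
rule format are illustrative assumptions rather than any real firewall's configuration
language:

    # Minimal packet-filtering sketch: allow inbound HTTP to the web server,
    # deny telnet/ssh, and deny everything else by default.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str
        dst: str
        dst_port: int

    RULES = [
        # (destination host, destination port, action); first match wins.
        ("10.0.0.80", 80, "allow"),    # hypothetical web server in the DMZ
        ("*",         22, "deny"),     # ssh
        ("*",         23, "deny"),     # telnet
    ]
    DEFAULT_ACTION = "deny"

    def filter_packet(pkt: Packet) -> str:
        for host, port, action in RULES:
            if (host == "*" or host == pkt.dst) and port == pkt.dst_port:
                return action
        return DEFAULT_ACTION

    print(filter_packet(Packet("203.0.113.5", "10.0.0.80", 80)))  # allow
    print(filter_packet(Packet("203.0.113.5", "10.0.0.80", 23)))  # deny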

 Firewalls themselves need to be resistant to attacks, and unfortunately have several
vulnerabilities:
o Tunneling, which involves encapsulating forbidden traffic inside of packets
that are allowed.
o Denial-of-service attacks aimed at the firewall itself.
o Spoofing, in which an unauthorized host sends packets to the firewall with the
return address of an authorized host.
 In addition to the common firewalls protecting a company internal network from the
outside world, there are also some specialized forms of firewalls that have been
recently developed:
o A personal firewall is a software layer that protects an individual computer. It
may be a part of the operating system or a separate software package.
o An application proxy firewall understands the protocols of a particular service
and acts as a stand-in (and relay) for that service. For example, an
SMTP proxy firewall would accept SMTP requests from the outside world,
examine them for security concerns, and forward only the "safe" ones on to
the real SMTP server behind the firewall.
o XML firewalls examine XML packets only, and reject ill-formed packets.
Similar firewalls exist for other specific protocols.
o System call firewalls guard the boundary between user mode and system
mode, and reject any system calls that violate security policies.
