Ans. When we talk about system security we may mean financial security, physical security, or computer security.
Computer security means that we are addressing three very important aspects of any computer-related system: confidentiality, integrity, and availability.
Confidentiality: ensures that computer-related assets are accessed only by authorized parties. It is sometimes called secrecy or privacy. Confidentiality is the security property we understand best because its meaning is narrower than that of the other two. It defines which people or systems are authorized to access the current system. By accessing data, do we mean that an authorized party can access a single bit? Pieces of data out of context? Can someone who is authorized disclose those data to other parties? Confidentiality answers these questions; it determines the data-access policy among users and databases.
Integrity:
Integrity is much harder to pin down. When we survey the way people use the term, we find several different meanings.
For example, if we say that we have preserved the integrity of an item, we may mean that the item is:
Precise
Accurate
Unmodified
Modified only in acceptable ways
Modified only by authorized person
Modified only by authorized process
Consistent
Internally consistent
Meaningful and usable
Availability:
Availability applies both to data and to services. For example, an object or service is thought to be available if:
It is present in a usable form
It has capacity enough to meet service needs
It is making clear progress and, if it is in wait mode, it has bounded waiting time
The service is completed in an acceptable period of time.
We can construct an overall description of availability by combining these goals. We say a data item or service is available if:
1. there is a timely response to our request;
2. there is a fair allocation of resources, so that some requestors are not favored over others;
3. the service or system involved follows a philosophy of fault tolerance, whereby hardware or software faults lead to graceful cessation of service or to workarounds rather than to crashes and abrupt loss of information;
4. the service or system can be used easily and in the way it was intended to be used;
5. there is controlled concurrency; that is, there is support for simultaneous access, deadlock management, and exclusive access as required.
These are the various goals of a security system.
Figure: the assets of a computing system (hardware, software, data) and the four kinds of threats to them (interruption, interception, modification, fabrication).
Hardware Vulnerabilities:
Hardware is more visible than software, largely because it is composed of physical objects. Because
we can see what devices are hooked to the system, it is rather simple to attack by adding devices, changing
them, removing them, intercepting the traffic to them, or flooding them with traffic until they can no longer
function. However, designers can usually put safeguards in place.
But there are other ways that computer hardware can be attacked physically. Computers have been
drenched with water, burned, frozen, gassed, and electrocuted with power surges. People have spilled soft
drinks, corn chips, ketchup, and beer on them; dust, and especially ash from cigarette smoke, has
threatened precisely engineered moving parts. Computers have been kicked, slapped, bumped, jarred, and
punched. Although such attacks might be intentional, most are not; this abuse might be considered
“involuntary machine slaughter”: accidental acts not intended to do serious damage to the hardware
involved.
Software Vulnerabilities:
Computing equipment is of little use without the software (operating system, controllers, utility programs,
and application programs) that users expect. Software can be replaced, changed, or destroyed maliciously,
or it can be modified, deleted, or misplaced accidentally. Whether intentional or not, these attacks exploit
the software’s vulnerabilities.
Sometimes, the attacks are obvious, as when the software no longer runs. More subtle are attacks in which
the software has been altered but seems to run normally. Whereas physical equipment usually shows some
mark of inflicted injury when its boundary has been breached, the loss of a line of source or object code
may not leave an obvious mark in a program. Furthermore, it is possible to change a program so that it
does all it did before, and then some. That is, a malicious intruder can “enhance” the software to enable it
to perform functions you may not find desirable. In this case, it may be very hard to detect that the
software has been changed, let alone to determine the extent of the change.
Data Vulnerabilities:
Hardware security is usually the concern of a relatively small staff of computing center professionals.
Software security is a larger problem, extending to all programmers and analysts who create or modify
programs. Computer programs are written in a dialect intelligible primarily to computer professionals, so a
“leaked” source listing of a program might very well be meaningless to the general public.
Printed data, however, can be readily interpreted by the general public. Because of its visible nature, a
data attack is a more widespread and serious problem than either a hardware or software attack. Thus, data
items have greater public value than hardware and software, because more people know how to use or
interpret data.
a) Amateurs: Most amateurs are not career criminals but rather normal people who observe a weakness in a security system that allows them to access cash or other valuables. In the same sense, most amateur computer criminals are ordinary computer professionals or users who, while doing their jobs, discover they have access to something valuable.
b) Crackers: System crackers, often high school or university students, attempt to access computing facilities for which they have not been authorized. Cracking a computer's defenses is seen as the ultimate victimless crime. The perception is that nobody is hurt or even endangered by a little stolen time. Crackers enjoy the simple challenge of trying to log in, just to see whether it can be done. Most crackers can do their harm without confronting anybody, not even making a sound. In the absence of explicit warnings not to trespass in a system, crackers infer that access is permitted.
c) Career criminals: By contrast, the career computer criminal understands the targets of computer crime. Criminals seldom change fields from, say, robbery or murder to computer crime; more often, career computer criminals begin as computer professionals who engage in computer crime, finding the prospects and payoff good. There is some evidence that organized crime and international groups are engaged in computer crime. Recently, electronic spies and information brokers have begun to recognize that trading in companies' or individuals' secrets can be lucrative.
• Software controls: Programs must be secure enough to prevent outside attack. They must also be developed and maintained so that we can be confident of the programs' dependability. Program controls include the following:
Internal program controls: parts of a program that enforce security restrictions, such as access limitations in a database management program.
Operating system and network system controls: limitations enforced by the operating system or network to protect each user from other users.
Independent control programs: application programs, such as password checkers or virus scanners, that protect against certain types of vulnerabilities.
Development controls: quality standards under which a program is designed, coded, tested, and maintained to prevent software faults from becoming exploitable.
• Hardware controls: Numerous hardware devices have been created to assist in providing computer security, such as:
hardware or smartcard implementations of encryption
locks or cables limiting access
devices to verify users' identities
intrusion detection systems
circuit boards that control access to storage media
• Physical controls: Some of the easiest, most effective, and least expensive controls are physical controls. Physical controls include locks on doors, guards at entry points, backup copies of important software and data, and physical site planning that reduces the risk of natural disasters. Often the simple physical controls are overlooked while we seek more sophisticated approaches.
Q 6] Explain the terms with respect to threats, vulnerabilities, & control in the security system.
Ans: A computer system has three separate but vulnerable components: hardware, software, and data. Each of these assets offers value to different members of the community affected by the system. To analyze security, we can brainstorm about the ways in which the system or its information can experience some kind of loss or harm:
a) Threats: A threat to a computing system is a set of circumstances that has the potential to cause loss or harm. There are many threats to a computer system, including human-initiated and computer-initiated ones. We have all experienced the results of inadvertent human errors, hardware design flaws, and software failures, but natural disasters are threats too; they can bring down a system when the computer room is flooded. We can view any threat as being one of four kinds:
1) Interception: An interception means that some unauthorized party has gained access to an asset. The outside party can be a person, a program, or a computing system.
2) Interruption: In an interruption, an asset of the system becomes lost, unavailable, or unusable. An example is the malicious destruction of a hardware device.
3) Modification: If an unauthorized party not only accesses but also tampers with an asset, the threat is a modification. For example, someone may change the values in a database.
4) Fabrication: An unauthorized party might create a fabrication of counterfeit objects on a computing system. The intruder may insert spurious transactions into a network communication system or add records to an existing database.
b) Vulnerability: A vulnerability is a weakness in the security system, for example in the procedures, design, or implementation, that might be exploited to cause loss or harm. For instance, a particular system may be vulnerable to unauthorized data manipulation because the system does not verify a user's identity before allowing data access.
There are three types of vulnerabilities:
1) Hardware vulnerabilities: Hardware is more visible than software, largely because it is composed of physical objects. Because we can see what devices are hooked to the system, it is rather simple to attack hardware by adding devices, changing them, removing them, intercepting the traffic to them, or flooding them with traffic until they can no longer function.
Figure: threats to hardware: interruption (denial of service), interception (theft), modification, and fabrication (substitution).
2) Software vulnerabilities: Computing equipment is of little use without its software, and software can be replaced, changed, or destroyed maliciously, or modified, deleted, or misplaced accidentally. Whether intentional or not, these attacks exploit the software's vulnerabilities.
Figure: threats to software: interruption (deletion), interception, modification, and fabrication.
3) Data vulnerabilities: The general public can readily interpret printed data. Because of its visible nature, a data attack is a more widespread and serious problem than a hardware or software attack. Thus data items have greater public value than hardware and software, because more people know how to use or interpret data.
Figure: threats to data: interruption (loss), interception, modification, and fabrication.
c) Control: How do we address the problem? We use controls as protective measures. That is, a control is an action, device, procedure, or technique that removes or reduces a vulnerability. In general, we can describe the relationship among threats, vulnerabilities, and controls in this way: a threat is blocked by control of a vulnerability.
Types of control are as follows:
I. Encryption
II. Policies and procedures
III. Hardware controls
IV. Physical controls
V. Software controls
A threat to a computing system is a set of circumstances that has the potential to cause loss or harm. There are many threats to a computer system, including human-initiated and computer-initiated ones. We have all experienced the results of inadvertent human errors, hardware design flaws, and software failures. But natural disasters are threats too; they can bring a system down when the computer room is flooded or the data center collapses in an earthquake.
We use a control as a protective measure. That is, a control is an action, device, procedure, or technique that removes or reduces a vulnerability. A threat is blocked by control of a vulnerability.
To devise controls, we must know as much about threats as possible. We can view any threat as being one of four kinds: interception, interruption, modification, and fabrication.
• An interception means that some unauthorized party has gained access to an asset. The outside party can be a person, a program, or a computing system. Examples of this type of failure are illicit copying of program or data files, or wiretapping to obtain data in a network.
• In an interruption, an asset of the system becomes lost, unavailable, or unusable. An example is the malicious destruction of a hardware device or the erasure of a program or data file.
• If an unauthorized party not only accesses but tampers with an asset, the threat is a modification. For example, someone might change the values in a database, alter a program so that it performs an additional computation, or modify data being transmitted electronically.
• An unauthorized party might also create a fabrication of counterfeit objects on a computing system. The intruder may insert spurious transactions into a network communication system or add records to an existing database.
These four classes of threats (interception, interruption, modification, and fabrication) describe the kinds of problems we might encounter.
Ans:
Encryption is the process of encoding a message so that its meaning is not obvious and nobody is able to break the code easily. An encryption method converts the plaintext, i.e., the original text, into an encrypted form called ciphertext.
Various encryption methods are as follows:
1. SUBSTITUTION CIPHERS
In this encryption method we substitute a character or a symbol for each character of the original message. This technique is called a monoalphabetic cipher or simple substitution. One common substitution cipher is the Caesar cipher, which replaces each letter with the letter three positions later in the alphabet.
Example:
Plaintext: TREATY IMPOSSIBLE
Ciphertext: WUHDWB LPSRVVLEOH
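As a rough illustrative sketch (not part of the original notes), the following Python snippet applies the same shift of three positions to reproduce the example above; the function name caesar_shift is just an illustrative choice.

    import string

    def caesar_shift(text, shift=3):
        """Replace each letter with the letter `shift` positions later, wrapping around."""
        result = []
        for ch in text.upper():
            if ch in string.ascii_uppercase:
                idx = (string.ascii_uppercase.index(ch) + shift) % 26
                result.append(string.ascii_uppercase[idx])
            else:
                result.append(ch)   # leave spaces and punctuation unchanged
        return "".join(result)

    print(caesar_shift("TREATY IMPOSSIBLE"))   # prints WUHDWB LPSRVVLEOH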
The Vernam cipher is a type of one-time pad devised by Gilbert Vernam for AT&T. This cipher is immune to most cryptanalytic attacks. The basic encryption involves an arbitrarily long nonrepeating sequence of numbers that is combined with the sequence of plaintext characters.
Example:
Plaintext: VERNAM CIPHER
Is encoded as
Tehrsp itxmab
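The specific key sequence that produced the ciphertext above is not reproduced in these notes, so the sketch below only illustrates the general scheme: each plaintext letter (A=0 through Z=25) is added, modulo 26, to the next number in a long nonrepeating key sequence. The key values shown are made up for the illustration.

    def vernam_encrypt(plaintext, key_numbers):
        """Combine each letter with the next key number, modulo 26 (one-time-pad style)."""
        letters = [ch for ch in plaintext.upper() if ch.isalpha()]
        cipher = []
        for ch, k in zip(letters, key_numbers):
            cipher.append(chr((ord(ch) - ord('A') + k) % 26 + ord('a')))
        return "".join(cipher)

    # Hypothetical key sequence; a real one-time pad key must be random and never reused.
    print(vernam_encrypt("VERNAM CIPHER", [12, 71, 5, 40, 93, 2, 64, 18, 7, 55, 30, 81]))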
2. TRANSPOSITIONS (PERMUTATIONS)
A transposition is an encryption in which the letters of the message are rearranged. With transposition, the cryptographer aims for diffusion. In a columnar transposition, for example, the plaintext is written row by row across a fixed number of columns:
c1 c2 c3 c4 c5
c6 c7 c8 c9 c10
and the ciphertext is formed by reading down the columns.
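A hedged sketch of the columnar layout just described (five columns, ciphertext read down the columns); the exact variant and padding rule are assumptions made for the example.

    def columnar_encrypt(plaintext, columns=5, pad='X'):
        """Write the message in rows of `columns` letters, then read it off column by column."""
        text = plaintext.replace(" ", "").upper()
        while len(text) % columns:           # pad the last row so every column is full
            text += pad
        rows = [text[i:i + columns] for i in range(0, len(text), columns)]
        return "".join(row[c] for c in range(columns) for row in rows)

    print(columnar_encrypt("THIS IS A SAMPLE MESSAGE"))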
Stream ciphers convert one symbol of plaintext immediately into a symbol of ciphertext. The transformation depends only on that symbol, the key, and the control information of the encipherment algorithm. The stream cipher works as shown below.
Figure: stream ciphers
A block cipher encrypts a group of plaintext symbols as one block. The columnar transposition and the other transpositions are examples of block ciphers. The block cipher works as shown below.
Figure: block ciphers
Ans.
Encryption is the process of encoding a message so that its meaning is not obvious; decryption is the reverse process, transforming an encrypted message back into its normal, original form. Alternatively, the terms encode and decode, or encipher and decipher, are used instead of encrypt and decrypt. That is, we say that we encode, encrypt, or encipher the original message to hide its meaning. Then we decode, decrypt, or decipher it to reveal the original message. A system for encryption and decryption is called a cryptosystem.
The original form of a message is known as plaintext, and the encrypted form is called ciphertext. The relationship is shown in the following figure. For convenience in explanation, we denote a plaintext message P as a sequence of individual characters P = (p1, p2, ..., pn). Similarly, ciphertext is written as C = (c1, c2, ..., cm). For instance, the plaintext message "I want cookies" can be thought of as the message string (I, ' ', w, a, n, t, ' ', c, o, o, k, i, e, s). It may be transformed into ciphertext (c1, c2, ..., c14), and the encryption algorithm tells how the transformation is done.
We use this formal notation to describe the transformation between plaintext and ciphertext.
Figure: plaintext is transformed into ciphertext by encryption, and the ciphertext is transformed back into the original plaintext by decryption.
For example, we write C = E(P) and P = D(C), where C represents the ciphertext, E is the encryption rule, P is the plaintext, and D is the decryption rule. What we seek is a cryptosystem for which P = D(E(P)). In other words, we want to be able to convert the message to protect it from an intruder, but we also want to be able to get the original message back so that the receiver can read it properly.
There are slight differences in the meanings of these three pairs of words, although they are not significant in this context. Strictly speaking, encoding is the process of translating entire words or phrases to other words or phrases, whereas enciphering is translating letters or symbols individually; encryption is the group term that covers both encoding and enciphering.
A cryptosystem involves a set of rules for how to encrypt the plaintext and how to decrypt the ciphertext. The encryption and decryption rules, called algorithms, often use a device called a key, denoted by K, so that the resulting ciphertext depends on the original plaintext message, the algorithm, and the key value. We write this dependence as C = E(K, P), where E is a set of encryption algorithms and the key K selects one specific algorithm from the set. Sometimes the encryption and decryption keys are the same, so P = D(K, E(K, P)). This form is called symmetric encryption because D and E are mirror-image processes. At other times, encryption and decryption keys come in pairs. Then a decryption key, KD, inverts the encryption of key KE, so that P = D(KD, E(KE, P)). Encryption algorithms of this form are called asymmetric because converting C back to P involves a series of steps and a key that are different from the steps and key of E.
The difference between symmetric and asymmetric encryption is shown in the figures:
Figure (a): Symmetric cryptosystem: the same key K is used to encrypt the plaintext into ciphertext and to decrypt the ciphertext back into the original plaintext.
Figure (b): Asymmetric cryptosystem: an encryption key KE produces the ciphertext and a different decryption key KD recovers the original plaintext.
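To make the relationship P = D(K, E(K, P)) concrete, here is a minimal sketch of a toy symmetric cipher (a simple XOR of each byte with a repeating key); it is only an illustration of the mirror-image property, not a secure algorithm.

    from itertools import cycle

    def E(key: bytes, plaintext: bytes) -> bytes:
        """Toy symmetric encryption: XOR each plaintext byte with the repeating key."""
        return bytes(p ^ k for p, k in zip(plaintext, cycle(key)))

    def D(key: bytes, ciphertext: bytes) -> bytes:
        """Decryption is the mirror image of encryption (XOR is its own inverse)."""
        return bytes(c ^ k for c, k in zip(ciphertext, cycle(key)))

    K = b"secret"
    P = b"I want cookies"
    assert D(K, E(K, P)) == P   # the round trip recovers the original plaintext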
Key management involves the following issues:
- Key generation: generating keys that are hard for eavesdroppers to guess.
- Key transfer: distributing the key to the communicating parties.
- Updating keys: replacing keys periodically to keep communication secure.
- Storing keys: storing keys securely on storage devices.
- Compromised keys: recovery procedures are needed when a key is lost or stolen.
- Lifetime of keys: keys are generated for use during a particular period; after that time the key is retired and no longer relied on.
Key Distribution:
Key distribution is one of the important issues in key management. The X9.17 standard specifies two types of keys:
- Key-encryption keys
- Data keys
Key-encryption keys encrypt other keys for distribution; data keys encrypt message traffic. These are the most commonly used concepts in key distribution.
One solution to the distribution problem is to split the key into several different parts and send each of these parts over a different channel.
Key-encryption keys shared by pairs of users work well in small networks but can quickly become cumbersome if the network grows large. Since every pair of users must exchange a key, the total number of key exchanges required in an n-person network is n(n-1)/2. In a 6-person network, 15 key exchanges are required; in a 1000-person network, nearly 500,000 key exchanges are required. In these cases, creating a central key server makes the operation much more efficient.
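A quick check of the n(n-1)/2 figure quoted above:

    def key_exchanges(n: int) -> int:
        """Number of pairwise key exchanges needed among n users."""
        return n * (n - 1) // 2

    print(key_exchanges(6))      # 15
    print(key_exchanges(1000))   # 499500, i.e. nearly 500,000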
Key Generation:
The security of an algorithm rests in the key; if you are using a cryptographically weak process to generate keys, then your whole system is weak.
- Random keys: good keys are random bit strings produced by an automatic process; such keys are hard to guess, though they are also hard to remember.
Figure: one cycle of DES: the right half of the text is combined with the key, substituted, and permuted, and the result is added to the left half.
Q.15] Describe the Double and Triple DES algorithms and also discuss the security of DES.
TRIPLE DES
However, a simple trick does indeed enhance the security of DES. Using two keys and applying them in three operations adds apparent strength.
The so-called triple DES procedure is C = E(k1, D(k2, E(k1, m))). That is, you encrypt with the first key, decrypt with the second key, and encrypt with the first key again.
Although this process is called triple DES because of the three applications of the DES algorithm, it only doubles the effective key length. But a 112-bit effective key length is quite strong, and it is effective against all feasible known attacks.
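A sketch of the encrypt-decrypt-encrypt composition described above. The helpers des_encrypt and des_decrypt are hypothetical single-DES primitives (not defined in these notes); the point is only the order in which the two keys are applied.

    def triple_des_encrypt(k1, k2, m, des_encrypt, des_decrypt):
        """Two-key triple DES: C = E(k1, D(k2, E(k1, m)))."""
        return des_encrypt(k1, des_decrypt(k2, des_encrypt(k1, m)))

    def triple_des_decrypt(k1, k2, c, des_encrypt, des_decrypt):
        """Inverse of the above: m = D(k1, E(k2, D(k1, c)))."""
        return des_decrypt(k1, des_encrypt(k2, des_decrypt(k1, c)))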
i) DES stands for Data Encryption Standard, which was adopted as a U.S. government standard in 1976. Some years later it was found to be insufficient for the security requirements of modern computer systems, so the U.S. National Institute of Standards and Technology ran a public selection process that produced a successor, the Advanced Encryption Standard (AES).
Both algorithms are symmetric block cipher algorithms and use a secret key for encryption.
iv) The DES algorithm performs encryption by passing data through processes such as substitution and permutation, whereas AES performs substitution, shifting, and bit-mixing processes to encrypt the data.
v) The DES algorithm was designed to go through exactly 16 rounds; to increase this number, the whole algorithm would have to be redefined. AES, on the other hand, was designed in such a way that changing the limit on a repeat loop can easily change the number of cycles.
Q18]: Discuss the application of Encryption in cryptographic hash function?
ANS]: With the recent news of weaknesses in some common security algorithms (MD4, MD5, SHA-0), many
are wondering exactly what these things are: They form the underpinning of much of our electronic
infrastructure, and in this Guide we'll try to give an overview of what they are and how to understand them
in the context of the recent developments.
Though we're fairly strong on security issues, we are not crypto experts. We've done our best to assemble
(digest?) the best available information into this Guide, but we welcome being pointed to the errors of our
ways. A "hash" (also called a "digest", and informally a "checksum") is a kind of "signature" for a stream of
data that represents the contents. The closest real-life analog we can think of is "a tamper-evident seal on a
software package": if you open the box (change the file), it's detected.
This is a common confusion, especially because all these words are in the category of "cryptography", but
it's important to understand the difference. Encryption transforms data from a cleartext to ciphertext and
back (given the right keys), and the two texts should roughly correspond to each other in size: big cleartext
yields big ciphertext, and so on. "Encryption" is a two-way operation.
Hashes, on the other hand, compile a stream of data into a small digest (a summarized form: think "Reader's Digest"), and it's strictly a one-way operation. All hashes of the same type - this example shows the "MD5" variety - have the same size no matter how big the inputs are (see the sketch after this paragraph). "Encryption" is an obvious target for attack (e.g., "try to read the encrypted text without the key"), but even the one-way nature of hashes admits of more subtle attacks. We'll cover them shortly, but first we must see for what purposes hashes are commonly used. We'll note here that though hashes and digests are often informally called "checksums", they really aren't. True checksums, such as a Cyclic Redundancy Check, are designed to catch data-transmission errors, not deliberate attempts at tampering with data. Aside from their small output space (usually 32 bits), they are not designed with the same properties in mind. We won't mention true checksums again.
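A small demonstration of the fixed-size property mentioned above, using Python's standard hashlib module (MD5 is shown only because the text uses it as the example; it should not be used in new security designs):

    import hashlib

    for message in (b"short", b"a much, much longer stream of data" * 1000):
        digest = hashlib.md5(message).hexdigest()
        print(len(message), digest, len(digest))   # the digest is always 32 hex characters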
What's inside a cryptographic hash? The first answer is "it depends on the kind of hash", but the
second answer usually starts with "a lot of math". A colloquial explanation is that all the bits are
poured into a pot and stirred briskly, and this is about as technical we care to delve into here.
There are plenty of resources that show the internal workings of a hash algorithm, almost all of which involve lots of shifting and rotating through multiple "rounds".
Each has its own advantages in terms of performance, several variations of collision resistance, how well its
security has been studied professionally, and so on.
Researchers have recently shown how to reliably generate collisions in four hash functions much faster than brute-force time, and in one case (MD4, which is admittedly obsolete), with a hand calculation. This has been a stunning development. In the short term, it will have only a limited impact on computer security. The bad guys can't suddenly start tampering with software in ways that fool published checksums, and they can't suddenly start cracking hashed passwords. Previously signed digital signatures are just as secure as they were before, because one can't retroactively generate new documents to sign with a matched pair of inputs. What it does mean, though, is that we've got to start migrating to better hash functions. Even though SHA-1 has long been thought to be secure, NIST (the National Institute of Standards and Technology) has standardized even longer hash functions, named for the number of bits in their output: SHA-224, SHA-256, SHA-384, and SHA-512.
Five hundred twelve bits of hash holds 1.34 x 10^154 possible values, which is far, far more than the number of hydrogen atoms in the universe. This is likely to be safe from brute-force attacks for quite a while.
Q:19] Describe how Digital Signatures are applicable for encryption .Write its properties and
explain its requirements with relevant block diagram
Another typical situation parallels a common human need: an order to transfer funds from one person to another. In other words, we want to be able to send electronically the equivalent of a computerized check. Consider the properties of a paper check:
• a check is a tangible object authorizing a financial transaction
• the signature on the check confirms authenticity, since only the legitimate signer can produce that signature
• in the case of an alleged forgery, a third party can be called in to judge authenticity
• once a check is cashed, it is cancelled so that it cannot be reused
• the paper check is not alterable, or most forms of alteration are easily detected
A digital signature is a protocol that produces the same effect as a real signature. It is a mark that only the sender can make but that other people can easily recognize as belonging to the sender. Just like a real signature, a digital signature is used to confirm agreement to a message.
Properties
A digital signature must meet two primary conditions:
• It must be unforgeable. If person P signs message M with signature S(P,M), it is impossible for anyone else to produce the pair [M, S(P,M)].
• It must be authentic. If a person R receives the pair [M, S(P,M)] purportedly from P, R can check that the signature is really from P; only P could have created this signature, and the signature is firmly attached to M.
Two additional desirable properties are:
• It is not alterable. After being transmitted, M cannot be changed by S, R, or any other interceptor.
• It is not reusable. A previous message presented again will be instantly detected by R.
Figure: digital signature properties, e.g. unforgeable (protects A).
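As one concrete, hedged illustration of the sign-and-verify protocol (the notes themselves do not prescribe a particular algorithm), the sketch below uses RSA-PSS from the third-party pyca/cryptography package:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"Pay to the order of R: 100 rupees"
    # Only the holder of the private key can produce this signature S(P, M) ...
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    # ... but anyone holding the public key can verify it; verify() raises an
    # exception if the message or signature has been forged or altered.
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )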
Q.21] What are the points that should be kept in mind about any key distribution protocol?
The two basic kinds of encryption are symmetric (also called "secret key") and asymmetric (also called "public key"). Symmetric algorithms use one key, which works for both encryption and decryption. Usually, the decryption algorithm is closely related to the encryption one. (For example, the Caesar cipher with a shift of 3 uses the encryption algorithm "substitute the character three letters later in the alphabet" with the decryption "substitute the character three letters earlier in the alphabet".)
Symmetric systems provide a two-way channel to their users: A and B share a secret key, and they can both encrypt information to send to the other as well as decrypt information received from the other. As long as the key remains secret, the system also provides authentication, proof that a message received was not fabricated by someone other than the declared sender. Authentication is ensured because only the legitimate sender can produce a message that will decrypt properly with the shared key.
The symmetry of this situation is a major advantage of this type of encryption, but it also leads to a problem: key distribution. How do A and B obtain their shared secret key? And only A and B can use that key for their encrypted communications. If A wants to share encrypted communication with another user C, A and C need a different shared key. Key distribution is the major difficulty in using symmetric encryption. In general, n users who want to communicate in pairs need n*(n-1)/2 keys. In other words, the number of keys needed increases at a rate proportional to the square of the number of users. So a property of symmetric encryption systems is that they require a means of key distribution.
Public key systems, on the other hand, excel at key management. By the nature of the public key approach, you can send a public key in an e-mail message or post it in a public directory. Only the corresponding private key, which is presumably kept private, can decrypt what has been encrypted with the public key.
But for both kinds of encryption, a key must be kept well secured. Once a symmetric or private key is known by an outsider, all messages written previously or in the future can be decrypted (and hence read or modified) by the outsider. So, for all encryption algorithms, key management is a major issue. It involves storing, safeguarding, and activating keys.
1. The amount of secrecy needed should determine the amount of labor appropriate for the encryption and decryption.
Principle 1 is a reiteration of the principle of timeliness and of the earlier observation that even a simple
cipher may be strong enough to deter the casual interceptor or to hold off any interceptor for a short time.
2. The set of keys and the enciphering algorithm should be free from complexity.
This principle implies that we should restrict neither the choice of keys nor the types of plaintext on which
the algorithm can work. For instance, an algorithm that works only on plaintext having an equal number of
As and Es is useless. Similarly it would be difficult to select keys such that the sum of the values of the
letters of the key is a prime number. Restrictions such as these make the use of the encipherment so complex that it will not be used. Furthermore, the key must be transmitted, stored, and remembered, so it must be short.
3. The implementation of process should be as simple as possible.
Principle 3 was formulated with hand implementation in mind. A complicated algorithm is prone to error or
likely to be forgotten. With the development and popularity of digital computers, algorithms far too complex
for hand implementation became feasible. Still the issue of complexity is important. People will avoid an
encryption algorithm whose implementation process severely hinders message transmission, thereby undermining security. And a complex algorithm is more likely to be programmed incorrectly.
4. Errors in ciphering should not propagate and cause corruption of further information in the message.
Principle 4 acknowledges that humans make errors in their use of enciphering algorithms. One error early in the process should not throw off the entire remaining ciphertext. For example, dropping one letter in a columnar transposition throws off the entire remaining encipherment unless the receiver can guess where the letter was dropped; the remainder of the message will be unintelligible. By contrast, reading the wrong row or column for a polyalphabetic substitution affects only one character; the remaining characters are unaffected.
5. The size of the enciphered text should be no larger than the text of the original message.
The idea behind principle 5 is that a ciphertext that expands dramatically in size cannot possibly carry more information than the plaintext, yet it gives the cryptanalyst more data from which to infer a pattern. Furthermore, a longer ciphertext implies more space for storage and more time to communicate.
These principles were developed before the ready availability of digital computers, even though Shannon
was aware of computers and the computational power they represented.
If a component is isolated from the effects of other components, then it is easier to trace a problem to the
fault that caused it and to limit the damage the fault causes. It is also easier to maintain the system, since
changes to an isolated component do not affect other components. And it is easier to see where
vulnerabilities may lie if the component is isolated. We call this isolation encapsulation.
Information hiding is another characteristic of modular software. When information is hidden, each
component hides its precise implementation or some other design decision from the others. Thus, when a
change is needed, the overall design can remain intact while only the necessary changes are made to
particular components.
Modularity
Modularization is the process of dividing a task into subtasks. This division is done on a logical or
functional basis. Each component performs a separate, independent part of the task. The goal is to have
each component meet four conditions: it should be single-purpose (perform one function), small (small enough that a person can readily grasp its structure and content), simple (of low complexity, so that a person can readily understand it), and independent (able to perform its task in isolation from other components).
Often, other characteristics, such as having a single input and single output or using a limited set of
programming constructs, help a component be modular. From a security standpoint, modularity should
improve the likelihood that an implementation is correct.
In particular, smallness is an important quality that can help security analysts understand what each
component does. That is, in good software, design and program units should be only as large as needed
to perform their required functions. There are several advantages to having small, independent
components.
Security analysts must be able to understand each component as an independent unit and be assured of
its limited effect on other components.
A modular component usually has high cohesion and low coupling. By cohesion, we mean that all the
elements of a component have a logical and functional reason for being there; every aspect of the
component is tied to the component's single purpose. A highly cohesive component has a high degree of
focus on the purpose; a low degree of cohesion means that the component's contents are an unrelated
jumble of actions, often put together because of time-dependencies or convenience.
Coupling refers to the degree with which a component depends on other components in the system.
Thus, low or loose coupling is better than high or tight coupling, because the loosely coupled
components are free from unwitting interference from other components.
Encapsulation
Encapsulation hides a component's implementation details, but it does not necessarily mean complete
isolation. Many components must share information with other components, usually with good reason.
However, this sharing is carefully documented so that a component is affected only in known ways by
others in the system. Sharing is minimized so that the fewest interfaces possible are used. Limited
interfaces reduce the number of covert channels that can be constructed.
Information Hiding
Developers who work where modularization is stressed can be sure that other components will have
limited effect on the ones they write. Thus, we can think of a component as a kind of black box, with
certain well-defined inputs and outputs and a well-defined function. Other components' designers do not
need to know how the module completes its function; it is enough to be assured that the component
performs its task in some correct manner.
Information hiding is desirable, because developers cannot easily and maliciously alter the components
of others if they do not know how the components work.
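As an illustrative sketch only (the class and method names below are invented, not taken from the notes), here is how encapsulation and information hiding look in code: callers see a narrow interface, while the implementation detail stays hidden and can change without affecting other components.

    import hashlib

    class PasswordStore:
        """Single-purpose component: other modules see only store() and check()."""

        def __init__(self):
            self._hashed = {}   # hidden implementation detail

        def store(self, user: str, password: str) -> None:
            self._hashed[user] = self._hash(password)

        def check(self, user: str, password: str) -> bool:
            return self._hashed.get(user) == self._hash(password)

        @staticmethod
        def _hash(password: str) -> str:
            # The hashing scheme can change without affecting any caller.
            return hashlib.sha256(password.encode()).hexdigest()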
Figure: virus attachment to a program, (a) before infection and (b) after infection.
2. Memory-Resident Viruses:
Some parts of the operating system and most user programs execute, terminate, and disappear. For very frequently used parts of the operating system and for very specialized user programs, it would take too long to reload the program each time it was needed. Such code remains in memory and is called resident code. Examples of resident code are the routine that interprets keys pressed on the keyboard, code that handles error conditions that arise during a program's execution, or a program that acts like an alarm clock. Resident routines are sometimes called TSRs or "terminate and stay resident" routines.
Virus writers also like to attach viruses to resident code because resident code is activated many times while the machine is running. Each time the resident code runs, the virus does too. Once activated, the virus can look for and infect uninfected carriers. For example, after activation, a boot sector virus might attach itself to a piece of resident code. Then, each time the virus was activated, it might check whether any removable disk in a disk drive was infected and, if not, infect it.
3. Document Viruses:
The most popular virus type is the document virus, which is implemented within a formatted document such as a written document, a database, a slide presentation, or a spreadsheet. These documents are highly structured files that contain both data and commands. The commands are part of a rich programming language, including macros, variables and procedures, file accesses, and even system calls. The writer of a document virus can use any of the features of the programming language to perform malicious actions.
The ordinary user sees only the content of the document, so the virus writer simply includes the virus in the command part of the document, just as in an integrated program virus.
A. Web Bug:
A web bug, sometimes called a pixel tag, clear gif, one-by-one gif, invisible gif, or beacon gif, is a hidden image on any document that can display HTML tags, such as a web page, an HTML e-mail message, or even a spreadsheet. Its creator intends the bug to be invisible, unseen by users but very useful because it can track a user's web activities.
E.g., on the Blue Nile home page, the bug code automatically downloads as a web bug from the site.
Web bugs do not seem to be malicious. They plant numerical data but do not track personal information. They can be used to track the surfing habits of a user, and the resulting profile can be used to direct you to retailers in which you might be interested. More malicious code could clearly be used, together with the web server's log files, to determine your PC's information, e.g., its IP address. The web server can capture things such as the IP address, the kind of web browser used, the monitor's resolution, browser settings, connection time, and previous cookie values.
Brain Virus:
One of the earliest viruses, it was given its name because it changes the label of any disk it attacks to the word "BRAIN". It attacks PCs running the Microsoft operating system.
What it does:
The virus first locates itself in upper memory and executes a system call to reset the upper memory limit to just below itself. It traps interrupt no. 19 (disk read) by resetting the interrupt vector table to point to it. It then sets the address of interrupt no. 6 to the former address of interrupt no. 19.
The Brain virus appears to have no effect other than passing on the infection. Variants of this virus erase disks or destroy the file allocation table (FAT).
How it spreads:
The Brain virus positions itself in the boot sector and in six other sectors of the disk. One of those sectors contains the original boot code, moved from its original location, while two others contain the remaining code of the virus. Once installed, the virus intercepts disk read requests for the disk drive under attack. On each read, the virus reads the disk's boot sector and inspects bytes 5 and 6 for the hexadecimal value 1234, its signature.
Q-34. One feature of a capability-based protection system is the ability of one process to transfer
a copy to another process. Describe a situation in which one process should be able to transfer a
capability to another.
One possible access right to an object is transfer or propagate. A subject having this right can pass copies of its capabilities to other subjects. In turn, each of these capabilities has a list of permitted types of access, one of which might also be transfer. In this instance, process A can pass a copy of a capability to B, who can then pass a copy to C. B can prevent further distribution of the capability by omitting the transfer right from the rights passed in the capability to C. B might still pass certain access rights to C, but not the right to propagate access rights to other subjects.
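A minimal sketch of this idea (the class and names below are invented for illustration): a capability carrying a transfer right is copied from A to B, and B strips the transfer right before passing a copy on to C.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Capability:
        obj: str
        rights: frozenset = field(default_factory=frozenset)   # e.g. {"read", "transfer"}

        def copy_for(self, keep_transfer: bool) -> "Capability":
            """Pass a copy of this capability, optionally omitting the transfer right."""
            rights = set(self.rights)
            if not keep_transfer:
                rights.discard("transfer")
            return Capability(self.obj, frozenset(rights))

    cap_a = Capability("file F", frozenset({"read", "transfer"}))
    cap_b = cap_a.copy_for(keep_transfer=True)    # A passes a copy to B, transfer allowed
    cap_c = cap_b.copy_for(keep_transfer=False)   # B passes to C without the transfer right
    assert "transfer" not in cap_c.rights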
As a process executes, it operates in a domain. The domain is the collection of objects to which the process has access. As execution continues, the process may call a subprocedure, passing some of the objects to which it has access as arguments to the subprocedure. The domain of the subprocedure is not necessarily the same as that of its calling procedure; in fact, a calling procedure may pass only some of its objects to the subprocedure, and the subprocedure may have access rights to other objects not accessible to the calling procedure. The caller may also pass only some of its access rights for the objects it passes to the subprocedure.
Figure: process execution domain
Since each capability identifies a single object in a domain, the collection of capabilities defines the domain. When a process calls a subprocedure and passes certain objects to it, the operating system forms a stack of all the capabilities of the current procedure. The operating system then creates new capabilities for the subprocedure, as shown in the figure.
Figure: the domain for MAIN (process files, devices, data storage) and the new domain created for SUB when MAIN executes CALL SUB.
Capabilities are a straightforward way to keep track of the access rights of subjects to objects during execution. The capabilities are backed up by a more comprehensive table, such as an access control matrix or an access control list. Each time a process seeks to use a new object, the operating system examines the master list of objects and subjects to determine whether the object is accessible. If so, the operating system creates a capability for that object.
Capabilities must be stored in memory inaccessible to normal users. One way of accomplishing this is to store capabilities in segments not pointed at by the user's segment table, or to enclose them in memory protected by base/bounds registers. During execution, only the capabilities of objects that have been accessed by the current process are kept readily available. This restriction improves the speed with which access to an object can be checked. Capabilities can also be revoked. When a subject revokes a capability, no further access under the revoked capability should be permitted. A capability table can contain pointers to the active capabilities spawned under it, so that the operating system can trace what access rights should be deleted if a capability is revoked.
Q35. Explain why asynchronous I/O activity is a problem with many memory protection schemes, including base/bounds and paging. Suggest a solution to the problem.
A major advantage of an operating system with a fence register is the ability to relocate; this characteristic is important in a multiuser environment. With two or more users, none can know in advance where a program will be loaded for execution. The relocation register solves the problem by providing a base, or starting, address: all addresses inside a program are offsets from that base address. A variable fence register is known as a base register.
Fence registers provide a lower bound but not an upper one. An upper bound can be useful in knowing how much space is allotted and in checking for overflows into forbidden areas. To overcome this problem, a second register is often added. The second register, called a bounds register, is an upper address limit. Each program address is forced to be above the base address because the contents of the base register are added to the address, and each address is also checked to ensure that it falls below the bounds address.
This technique protects a program's addresses from modification by another user. When execution changes from one user's program to another's, the operating system must change the contents of the base and bounds registers to reflect the true address space for that user. This change is part of the general preparation called a context switch.
With a pair of base/bounds registers, a user is perfectly protected from outside users. Erroneous addresses inside a user's address space can still affect that program, because the base/bounds checking guarantees only that each address is inside the user's address space.
We can solve this problem by using another pair of base/bounds registers: one for the instructions of the program and a second for the data space. Then only instruction fetches are relocated and checked with the first register pair, and only data accesses are relocated and checked with the second register pair. Although two pairs of registers do not prevent all program errors, they limit the effect of data-manipulating instructions to the data space. The second pair of registers offers another, more important advantage: the ability to split a program into two pieces that can be relocated separately.
These two features seem to call for the use of three or more pairs of registers: one for code, one for read-only data, and one for modifiable data values. However, two pairs of registers are the limit for practical computer design. For each additional pair of registers, something in the machine code of each instruction must indicate which relocation pair is to be used to address the instruction's operands. That is, with more than two pairs, each instruction would have to specify one of two or more data spaces. But with only two pairs, the decision can be automatic: instructions with one pair, data with the other.
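A minimal sketch of the relocation-and-check step such a scheme performs on every address (the register names and fault type here are illustrative, not from the notes):

    class MemoryFault(Exception):
        pass

    def relocate(offset: int, base: int, bounds: int) -> int:
        """Add the base register to a program-relative address and check it against the bounds."""
        physical = base + offset
        if not (base <= physical < bounds):
            raise MemoryFault(f"address {physical} is outside [{base}, {bounds})")
        return physical

    # With separate pairs for code and data, instruction fetches would use one
    # (base, bounds) pair and data accesses the other.
    print(relocate(0x10, base=0x4000, bounds=0x5000))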
Q36. Design protocol by which two mutually suspicious parties can authenticate each other.
Protocols are publicly posted for scrutiny by the entire internet community. Each accepted protocol is known by its Request for Comments (RFC) number. Many problems with protocols have been identified by sharp reviewers and corrected before the protocol was established as a standard.
Two mutually suspicious parties can begin to authenticate each other by establishing a TCP connection through sequence numbers: the client sends a sequence number to open the connection, the server responds with that number and a sequence number of its own, and the client responds with the server's sequence number. This exchange is weak, however, if an attacker can guess the client's next sequence number; sequence numbers are incremented regularly, so it can be easy to predict the next number.
The user of a service can be assured of a server's authenticity by requesting an authenticating response from the server. Authentication is effective only when it works; a weak or flawed authentication allows access to any system or person who can circumvent it.
The protocol should have the two parties authenticate with data that are unique and difficult to guess. If the same users store data and run processes on two computers, and if each computer has authenticated its users on first access, you might assume that computer-to-computer or local-user-to-remote-process authentication is unnecessary.
Sometimes the system demands certain identification of the user, but the user is also supposed to be able to trust the system. A programmer can easily write a program that displays the standard prompt for a user ID and password, so the user can be just as suspicious of the computing system as the system is of the user. The user should not enter confidential data until convinced that the computing system is legitimate, and the computer should acknowledge the user only after the user passes the authentication process.
User authentication is a serious issue that becomes even more serious when unacquainted users seek to share facilities by means of a computer network. The traditional authentication device is a password. A plaintext password file presents a serious vulnerability for a computing system, so such files are usually heavily protected or encrypted. A remaining problem is choosing strong passwords for the users. Protocols are needed to perform mutual authentication in such an atmosphere of distrust.
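One classic way for two mutually suspicious parties who already share a secret key to authenticate each other is a challenge-response exchange: each side sends a fresh random nonce, and each proves knowledge of the shared secret by returning a keyed hash of the other's nonce. The sketch below uses Python's hmac and secrets modules; it is a simplified illustration, not a complete protocol.

    import hmac, hashlib, secrets

    SHARED_KEY = b"previously-agreed secret"

    def respond(nonce: bytes, key: bytes = SHARED_KEY) -> bytes:
        """Prove knowledge of the shared key without revealing it."""
        return hmac.new(key, nonce, hashlib.sha256).digest()

    # A challenges B, and B challenges A, with fresh random nonces.
    nonce_a, nonce_b = secrets.token_bytes(16), secrets.token_bytes(16)
    b_response = respond(nonce_a)   # B answers A's challenge
    a_response = respond(nonce_b)   # A answers B's challenge

    # Each side verifies the other's response against its own computation.
    assert hmac.compare_digest(b_response, respond(nonce_a))   # A accepts B
    assert hmac.compare_digest(a_response, respond(nonce_b))   # B accepts A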
The most common authentication mechanism between a user and an operating system is the password, a word known to the computer and the user. Although password protection offers a relatively secure system, the question is how secure passwords themselves are. Passwords are somewhat limited as protection devices because of the relatively small number of bits of information they contain. Here are some ways an attacker might be able to determine a user's password.
1. Exhaustive attack:
This is also called a brute force attack. The attacker tries all possible passwords, usually in some automated fashion. The number of possible passwords depends upon the implementation of the particular computing system. A password might be from 1 to 8 characters long, so the number of possibilities may be tractable, and the intruder may be able to break in (see the sketch after this list).
But break-in time can be made even more tractable in a number of ways; searching for a particular password does not necessarily require all passwords to be tried.
2. Probable passwords:
Penetrators searching for passwords realize these very human characteristics and use them to their advantage. They try techniques that are likely to lead to rapid success. Since people prefer short passwords to long ones, a penetrator will try all passwords in order by length.
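A quick calculation of the search space an exhaustive attack faces for passwords of 1 to 8 characters; the 26-letter alphabet is assumed purely for illustration.

    ALPHABET = 26   # lowercase letters only; real systems allow a larger character set

    total = sum(ALPHABET ** length for length in range(1, 9))
    print(f"{total:,} candidate passwords of length 1 to 8")   # about 2.17e11 candidates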
2. Single Permissions
a. Password or Other Token
We can apply a simple form of password protection to file protection by allowing a user to assign a password to a file. Access is limited to those who supply the correct password at the time the file is opened. Password access may be given for reading only or for any modification.
However, file passwords suffer from many difficulties:
I. Loss: the password may be forgotten.
II. Use: supplying the password each time is inconvenient and time consuming.
III. Disclosure: if the password is disclosed to an unauthorized individual, the file becomes accessible.
IV. Revocation: to revoke one user's access rights to the file, someone must change the password, causing the same problems as disclosure.
b. Temporary Acquired Permission
The UNIX operating system provides an interesting scheme based on a three-level user-group-world hierarchy. The UNIX designers added a permission called set userid (suid). If this protection is set for a file to be executed, the protection level is that of the file's owner, not of the executor. This mechanism is convenient for system functions that general users should be able to perform only in a prescribed way.
3. Per-Object and Per-User Protection
The primary limitation of these file protection schemes is the ability to create meaningful groups of related users who should have similar access to one or more data sets. The access control lists or access control matrices described earlier provide very flexible protection. Their disadvantage appears for the user who wants to allow access to many users and to many different data sets: such a user must still specify each data set to be accessed by each user. As a new user is added, that user's special access rights must be specified by all appropriate users.
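A toy sketch of per-object, per-user protection as an access control structure; every (user, object) pair must be listed explicitly, which is exactly the administrative burden described above. The names and rights are invented for the example.

    # Access control matrix as a nested dict: acl[object][user] = set of rights
    acl = {
        "file F": {"A": {"read", "write", "owner"}, "B": {"read"}},
        "file G": {"A": {"read"}},
    }

    def allowed(user: str, obj: str, right: str) -> bool:
        """Check whether `user` holds `right` on `obj`."""
        return right in acl.get(obj, {}).get(user, set())

    print(allowed("B", "file F", "read"))    # True
    print(allowed("B", "file F", "write"))   # False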
Q 40] What are the different methods of protection and protection levels of the operating system?
Security Methods of Operating System:
The basic protection is separation: keeping one user’s objects separate from other users. Separation
in an operating system can occur in several ways:
1) Physical separation
2) Temporal separation
3) Logical separation
4) Cryptographic separation
Physical separation: in which different processes use different physical objects, such as separate printers for
output requiring different levels of security.
Temporal separation: in which processes having different security requirements are executed at different
times.
Logical separation: in which users operate under the illusion that no other processes exist, as when an
operating system constrains a program’s access so that program cannot access objects outside its permitted
domain.
Cryptographic separation: in which processes conceal their data and computations in such a way that they
are unintelligible to outside processes.
Combinations of two or more of these forms of separation are also possible. The categories of separation
are listed roughly in increasing order of complexity to implement, and, for the first three, in decreasing
order of the security provided.
There are several ways an operating system can assist, offering protection at any of several levels.
Do not protect: Operating systems with no protection are appropriate when sensitive procedures are being
run at separate times.
Isolate: When an operating system provides isolation, different processes running concurrently are unaware
of the presence of each other. Each process has its own address space, files, and other objects. The
operating system must confine each process somehow, so that the objects of the other processes are
completely concealed.
Share all or share nothing: With this form of protection, the owner of an object declares it to be public or
private. A public object is available to all users, whereas a private object is available only to its owners.
Share via access limitation: With protection by access limitation, the operating system checks the allowability of each user's potential access to an object. Lists of acceptable actions guide the operating system in determining whether a particular user should have access to particular objects. In some sense, the operating system acts as a guard between users and objects, ensuring that only authorized accesses occur.
Share by capability: An extension of limited access sharing, this form of protection allows dynamic creation
of sharing rights for objects. The degree of sharing can depend on the owner or the subject, on the context
of the computation, or on the object itself.
Limit use of an object: This form of protection limits not just the access to an object but the use made of that object after it has been accessed. For example, a user may be allowed to view a sensitive document but not to print a copy of it. More powerfully, a user may be allowed access to data in a database to derive statistical summaries, but not to determine specific data values.
These modes of sharing are arranged in increasing order of difficulty to implement, but also in increasing
order of fineness of protection they provide. A given operating system may provide different levels of
protection for different objects, users, or situations. The granularity of control concerns us. The larger the
level of object controlled, the easier it is to implement access control.
Don’t tell anyone else. The easiest attack is social engineering, in which the attacker contacts the
system’s administrator or a user to elicit the password in some way. For example, the attacker may phone a
user, claim to be “system administrator”, and ask the user to verify the user’s password. Under no
circumstances should you ever give out your private password; legitimate administrators can circumvent
your password if need be, and others are merely trying to deceive you.
Q43. What are some other levels of protection that users might want to apply to code or data in
addition to the common read write or execute permissions?
There are various protection levels that users might want to apply to code or data in addition to the common read, write, or execute permissions. The German information security agency produced a catalog of criteria five years after the U.S. criteria were published. Keeping with tradition, the security community began to call the document the German Green Book because of its green cover. The German criteria identified eight basic security functions deemed sufficient to enforce a broad spectrum of security policies:
1. Identification and authentication: the unique and certain association of an identity with a subject or object.
2. Administration of rights: the ability to control the assignment and revocation of access rights between subjects and objects.
3. Verification of rights: the mediation of attempts by subjects to exercise rights with respect to objects.
4. Audit: a record of information on the successful or attempted unsuccessful exercise of rights.
5. Object reuse: resetting reusable resources in such a way that no information flow occurs in contradiction to the security policy.
6. Error recovery: identification of situations from which recovery is necessary and invocation of the appropriate action.
7. Continuity of service: identification of functionality that must be available in the system and what degree of delay or loss (if any) can be tolerated.
8. Data communication security: peer entity authentication, control of access to the communication system, data confidentiality, data integrity, and data origin authentication.
These protection level criteria are the ones that can be used by the user to apply to code or data other than
the normal read/write/execute permissions.
Q. 44 Why should the directory of one user not be generally accessible (for read only) to other
user?
Every file has a unique owner who possesses "control" access rights (including the right to declare who has what access) and who can revoke access from any person at any time. Each user has a file directory, which lists all the files to which that user has access.
Clearly, no user can be allowed to write in the file directory, because that would be a way to forge access to a file. Therefore, the operating system must maintain all file directories, under commands from the owners of files. The obvious rights to files are the common read, write, and execute rights familiar on many shared systems. Furthermore, another right, owner, is possessed by the owner, permitting that user to grant and revoke access rights. The figure shows an example of a file directory.
This approach is easy to implement because it uses one list per user, naming all the objects that user is allowed to access. However, several difficulties can arise. First, the list becomes too large if many shared objects, such as libraries of subprograms or a common table of users, are accessible to all users. The directory of each user must have one entry for each such shared object, even if the user has no intention of accessing the object.
A second difficulty is revocation of access. If owner A has passed to user B the right to read file F, an entry for F is made in the directory for B. This granting of access implies a level of trust between A and B. If A later questions that trust, A may want to revoke the access right of B. The operating system can respond easily to the single request to delete the right of B to access F, because that action involves deleting one entry from a specific directory. But if A wants to remove the rights of everyone to access F, the operating system must search each individual directory for the entry for F, an activity that can be time consuming on a large system. For example, large timesharing systems or networks of smaller systems can easily have 5,000 to 10,000 active accounts. Moreover, B may have passed the access right for F on to another user, so A may not even know that this other access to F exists and should be revoked. This problem is particularly serious in a network.
A third difficulty involves pseudonyms. Owners A and B may each have a file named F, and they may both want to allow access by S. Clearly, the directory for S cannot contain two entries under the same name for different files. Therefore, S has to be able to identify uniquely the F belonging to A (or to B). One approach is to include the original owner's designation as if it were part of the file name, with a notation such as A:F (or B:F).
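As a rough illustration of the points above, here is a minimal Python sketch, using data structures of my own choosing, of per-user file directories with owner-qualified names; it also shows why revoking everyone's access to a file forces a scan over every directory.

    # Each user's directory maps an owner-qualified name ("A:F") to a set of rights.
    directories = {
        "A": {"A:F": {"read", "write", "owner"}},
        "B": {"B:F": {"read", "write", "owner"}, "A:F": {"read"}},  # A granted B read on A:F
        "S": {"A:F": {"read"}, "B:F": {"read"}},  # pseudonym problem solved by owner prefix
    }

    def revoke_all(qualified_name, owner):
        """Revoke everyone's (except the owner's) access: requires scanning every directory."""
        for user, directory in directories.items():
            if user != owner:
                directory.pop(qualified_name, None)

    revoke_all("A:F", owner="A")
    print(directories)  # only A still has an entry for A:F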
Q45] Explain the fence register used for relocating the user's program.
The most obvious problem in multiprogramming is preventing one program from affecting the memory of
other programs. Fortunately, protection can be built into the hardware mechanism that controls efficient use
of memory, so that solid protection can be provided at essentially no additional cost.
Fence: The simplest form of memory protection was introduced in single-user operating systems to prevent a faulty user program from destroying part of the resident portion of the operating system. As the name implies, a fence is a method to confine users to one side of a boundary.
In one implementation, the fence was a predefined memory address, enabling the operating system to reside on one side and the user to stay on the other. An example of this situation is depicted in the figure. Unfortunately, this kind of implementation was very restrictive because a predefined amount of space was always reserved for the operating system, whether it was needed or not. If less than the predefined space was needed, the excess space was wasted. Conversely, if the operating system needed more space, it could not grow beyond the fence boundary.
Relocation:
If the operating system can be assumed to be of fixed size, programmers can write their code assuming that the program begins at a constant address. This feature makes it easy to determine the address of any object in the program. However, it also makes it essentially impossible to change the starting address if, for example, a new version of the operating system is larger or smaller than the old. If the size of the operating system is allowed to change, then programs must be written in a way that does not depend on placement at a specific location in memory.
Relocation is the process of taking a program written as if it began at address 0 and changing all addresses to reflect the actual address at which the program is located in memory. In many instances the effort merely entails adding a constant relocation factor to each address of the program; that is, the relocation factor is the starting address of the memory assigned to the program.
Conveniently, the fence register can be used in this situation to provide an extra benefit: the fence register can be a hardware relocation device. The contents of the fence register are added to each program address. This action both relocates the address and guarantees that no one can access a location lower than the fence address. (Addresses are treated as unsigned integers, so adding the value in the fence register to any number is guaranteed to produce a result at or above the fence address.) Special instructions can be added for the times when a program legitimately intends to access a location of the operating system.
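The fence-register behaviour described above can be sketched in a few lines of Python; the memory size, fence value, and addresses below are hypothetical.

    FENCE = 1024        # assumed start of user space; the OS occupies addresses below this
    MEMORY_SIZE = 4096  # assumed total memory size

    def translate(program_address):
        """Relocate a program-relative address by adding the fence register.

        Because addresses are unsigned and the fence value is added, the result
        can never fall below the fence (i.e., inside the operating system area).
        """
        physical = FENCE + program_address
        if physical >= MEMORY_SIZE:
            raise MemoryError("address beyond available memory")
        return physical

    print(translate(0))    # 1024: first user location, just above the fence
    print(translate(100))  # 1124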
Q.46] If two users share access to a segment, they must do so by the same name. Must their protection rights to it be the same? Why or why not?
1. Segmentation involves the simple notion of dividing a program into separate pieces. Each piece has a logical unity, exhibiting a relationship among all of its code or data values.
2. For example, a segment may be the code of a single procedure, the data of an array, or the collection of data values used by a particular module.
3. Segmentation was developed as a feasible means to produce the effect of an unbounded number of base/bounds registers.
4. In other words, segmentation allows a program to be divided into many pieces having different access rights.
5. If two users share access to a segment, they need not have the same protection rights to it: each may share it with different access rights. To enforce this, segmentation uses both hardware and software.
6. The overall system can associate different levels of protection with different segments, and it uses both the operating system and the hardware to check that protection on each access: one segment might be read-only data, another execute-only code, and a third writable data.
7. In a situation like this one, segmentation can approximate the goal of separate protection of different pieces of a program. (A minimal sketch appears below.)
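As a rough illustration (my own sketch, not from the original text), different users can hold different rights to the same shared, identically named segment:

    # Hypothetical segment table: per user, segment name -> rights for that user.
    segment_rights = {
        "alice": {"SHARED_DATA": {"read", "write"}},
        "bob":   {"SHARED_DATA": {"read"}},          # same segment name, weaker rights
    }

    def access_segment(user, segment, mode):
        """Hardware/OS check: allow the access only if the user's rights include `mode`."""
        return mode in segment_rights.get(user, {}).get(segment, set())

    print(access_segment("alice", "SHARED_DATA", "write"))  # True
    print(access_segment("bob", "SHARED_DATA", "write"))    # False: bob shares read-only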
Q47]. Explain why asynchronous I/O activity is a problem with many memory protection schemes, including base/bounds and paging. Suggest a solution to the problem.
A major advantage of an operating system with a fence register is the ability to relocate; this characteristic is especially important in a multiuser environment. With two or more users, none can know in advance where a program will be loaded for execution. The relocation register solves the problem by providing a base, or starting, address; all addresses inside a program are offsets from that base address. A variable fence register is generally known as a base register.
Fence registers provide a lower bound but not an upper one. An upper bound can be useful in knowing how much space is allotted and in checking for overflows into forbidden areas. To overcome this problem, a second register is often added. The second register, called a bounds register, is an upper address limit. Each program address is forced to be above the base address because the contents of the base register are added to the address; each address is also checked to ensure that it is below the bounds address.
This technique protects a program's addresses from modification by another user. When execution changes from one user's program to another's, the operating system must change the contents of the base and bounds registers to reflect the true address space for that user. This change is part of the general preparation called a context switch.
With a pair of base/bounds registers, a user is perfectly protected from outside users. Erroneous addresses inside a user's address space can still affect that program, because the base/bounds checking guarantees only that each address is inside the user's address space.
We can solve this problem by using another pair of base/bounds registers, one for the instructions of the program and a second for the data space. Then only instruction fetches are relocated and checked with the first register pair, and only data accesses are relocated and checked with the second register pair. Although two pairs of registers do not prevent all program errors, they limit the effect of data-manipulating instructions to the data space. The two pairs of registers offer another, more important advantage: the ability to split a program into two pieces that can be relocated separately.
These features seem to call for the use of three or more pairs of registers: one for code, one for read-only data, and one for modifiable data values. However, two pairs of registers are generally the limit for practical computer design. For each additional pair of registers, something in the machine code of each instruction must indicate which relocation pair is to be used to address the instruction's operands. That is, with more than two pairs, each instruction must specify one of two or more data spaces; but with only two pairs, the decision can be automatic: instructions with one pair, data with the other.
[Figure: memory layout with a pair of base/bounds registers. The operating system occupies the low addresses (0 to n); the user program spaces follow (User A from n+1 to p, User B from p+1 to q, User C from q+1 to the high end of memory); the base and bounds registers bracket the currently executing user's program space.]
Fig: Pair of Base/Bounds Registers.
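A minimal Python sketch of the base/bounds check described above, including a separate pair for code and for data (all register values are hypothetical):

    # Two register pairs: one relocates/checks instruction fetches, one data accesses.
    REGISTERS = {
        "code": {"base": 2000, "bounds": 2999},
        "data": {"base": 5000, "bounds": 5999},
    }

    def translate(offset, kind):
        """Relocate an offset within the user's space and enforce the bounds check."""
        base = REGISTERS[kind]["base"]
        bounds = REGISTERS[kind]["bounds"]
        physical = base + offset
        if physical > bounds:
            raise MemoryError(f"{kind} access at {physical} outside [{base}, {bounds}]")
        return physical

    print(translate(10, "code"))   # 2010: an instruction fetch
    print(translate(500, "data"))  # 5500: a data access
    # translate(1500, "data") would raise an error: outside the data space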
1) Failure of the computing system in the middle of modifying data is a serious problem.
2) If the data item to be modified was a long field, half of the field might show the new value while the other half still contains the old. Also, a more subtle problem occurs when several fields are updated and no single field appears to be in obvious error.
3) The solution to this problem is the two-phase update technique.
4) During the first, intent phase, the DBMS gathers the resources it needs to perform the update: it collects data, creates dummy records, opens files, locks out other users, and calculates final values.
5) The first phase is repeatable an unlimited number of times, as it takes no permanent actions. If the system fails during execution, no harm is done; all the steps can be restarted and repeated after the system resumes processing.
6) The last event of the first phase, called committing, involves writing a commit flag to the database. The commit means that the DBMS has passed the point of no return: after this point the changes will be made permanent.
7) The second phase makes the permanent changes. No actions from before the commit can be repeated, but the actions of the second phase can themselves be repeated as often as needed. If the system fails during the second phase, the database may contain incomplete data, but the system can repair these data by performing all activities of the second phase again.
1. The stockroom checks the database to determine whether 50 boxes of paper clips are on hand. If not, the requisition is rejected and the transaction is finished.
2. If enough are in stock, the stockroom deducts 50 from the inventory (107 - 50 = 57).
3. The stockroom charges accounting's supplies budget for 50 boxes of paper clips.
4. The stockroom checks its remaining quantity to see whether it is below the reorder point.
5. A delivery order is prepared, enabling 50 boxes of paper clips to be sent to accounting.
Suppose a failure occurs while these steps are being processed. If the failure occurs before step 1 is complete, no harm is done.
However, if a failure occurs during steps 2, 3, or 4, some changes will have been made while others have not, leaving the elements of the database inconsistent. When a two-phase commit is used, shadow values are maintained for key data points. A shadow data value is computed and stored locally during the intent phase, and it is copied to the actual database during the commit phase.
Intent:
1. Check the value of COMMIT-FLAG in the database. If it is set, this phase cannot be performed; halt or loop, checking COMMIT-FLAG until it is not set.
2. Compare the number of boxes of paper clips on hand to the number requisitioned; if more are requisitioned than are on hand, halt.
3. Compute TCLIPS = ONHAND - REQUISITION.
4. Obtain BUDGET, the current supplies budget remaining for the accounting department. Compute TBUDGET = BUDGET - COST.
5. Check whether TCLIPS is below the reorder point; if so, set TREORDER = TRUE; else set TREORDER = FALSE.
COMMIT:
1. Set COMMIT-FLAG in the database.
2. Copy TCLIPS to CLIPS in the database.
3. Copy TBUDGET to BUDGET in the database.
4. Copy TREORDER to REORDER in the database.
5. Prepare a notice to deliver the paper clips to the accounting department.
6. Unset COMMIT-FLAG.
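The intent/commit sequence above can be sketched in Python as follows; the in-memory "database" dictionary mirrors the steps above but is otherwise my own assumption.

    db = {"CLIPS": 107, "BUDGET": 500, "REORDER": False, "COMMIT_FLAG": False}
    REQUISITION, COST, REORDER_POINT = 50, 100, 100

    def intent():
        """Phase 1: compute shadow values only; repeatable, no permanent changes."""
        if db["COMMIT_FLAG"]:
            raise RuntimeError("another update is committing; retry later")
        if db["CLIPS"] < REQUISITION:
            raise RuntimeError("not enough stock; requisition rejected")
        tclips = db["CLIPS"] - REQUISITION
        tbudget = db["BUDGET"] - COST
        treorder = tclips < REORDER_POINT
        return tclips, tbudget, treorder

    def commit(tclips, tbudget, treorder):
        """Phase 2: write the shadow values; safely repeatable if it fails midway."""
        db["COMMIT_FLAG"] = True
        db["CLIPS"], db["BUDGET"], db["REORDER"] = tclips, tbudget, treorder
        db["COMMIT_FLAG"] = False

    commit(*intent())
    print(db)  # {'CLIPS': 57, 'BUDGET': 400, 'REORDER': True, 'COMMIT_FLAG': False}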
Q 52: What factors make data sensitive? Give examples.
Sensitive data are data that should not be made public. The challenge of the access control problem is to limit users' access so that they can obtain only the data to which they have legitimate access.
Several factors can make data sensitive.
• Inherently sensitive: The value itself is sensitive. For example, the location of defensive missiles, or the median income of barbers in a town with only one barber.
• Declared sensitive: The database administrator or the owner of the data may have declared the data to be sensitive. For example, classified military data.
Q 54] How does DBMS detect inconsistency in database? How does it ensure the concurrency in
such cases?
Database systems are often multi-user systems. Accesses by two users sharing the same database must be constrained so that neither interferes with the other. Simple locking is done by the DBMS. If two users attempt to read the same data item, there is no conflict because both obtain the same value.
If both users try to modify the same data item, we often assume that there is no conflict because each knows what to write; the value to be written does not depend on the previous value of the data item. However, this supposition is not quite accurate.
To see how concurrent modification can get us into trouble, suppose that the database consists of seat reservations for a particular airline flight. Agent A, booking a seat for passenger Mock, submits a query to find what seats are still available. The agent knows that Mock prefers a right aisle seat, and the agent finds that seats 5D, 11D, and 14D are open. At the same time, agent B is trying to book seats for a family of three traveling together. In response to a query, the database indicates that 8A-B-C and 11D-E-F are the two remaining groups of three adjacent unassigned seats. Agent A submits an update command assigning seat 11D to Mock; at about the same time, agent B submits update commands assigning seats 11D, 11E, and 11F to the family. Then two passengers have been booked into the same seat.
Both agents have acted properly: each sought a list of empty seats, chose seats from the list, and updated the database to show to whom the seats were assigned. The difficulty in this situation is the time delay between reading a value from the database and writing a modification of that value. During that delay, another user has accessed the same data.
To resolve this problem, a DBMS treats the entire query-update cycle as a single atomic operation. The command from the agent must now resemble "read the current value of PASSENGER-NAME for seat 11D; if it is 'UNASSIGNED', modify it to 'MOCK, E'". The read-modify cycle must be completed as an uninterrupted action, without allowing any other user access to the PASSENGER-NAME field for seat 11D. The second agent's request to book would not be considered until after the first agent's request had been completed, and by then the value of PASSENGER-NAME would no longer be 'UNASSIGNED'.
Another problem in concurrent access is the read-write conflict. Suppose one user is updating a value when a second user wishes to read it. If the read is done while the write is in progress, the reader may receive data that are only partly updated. Consequently, the DBMS locks any read request until a write has been completed.
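A minimal Python sketch of the atomic read-modify-write described above, using a lock so that a second agent's booking for the same seat fails cleanly (the seat map and passenger names are hypothetical):

    import threading

    seats = {"11D": "UNASSIGNED", "11E": "UNASSIGNED", "11F": "UNASSIGNED"}
    lock = threading.Lock()

    def book(seat, passenger):
        """Treat the read-check-write cycle as one atomic operation under a lock."""
        with lock:
            if seats[seat] != "UNASSIGNED":
                return False          # someone else booked it first
            seats[seat] = passenger
            return True

    print(book("11D", "MOCK, E"))     # True: agent A books 11D
    print(book("11D", "SMITH, J"))    # False: agent B's request for 11D is rejected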
Q55. What is Element Integrity? How does it help to detect and correct manual errors?
Element Integrity:
It is the concern that the value of a specific data element is written or changed only by authorized users. Proper access controls protect a database from corruption by unauthorized users.
The integrity of the database elements is their correctness or accuracy. Ultimately, authorized users are responsible for entering correct data in databases. However, users and programs make mistakes collecting data, computing results, and entering values. Therefore, a DBMS sometimes takes special action to help catch errors as they are made and to correct errors after they are inserted.
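One common special action is a field check on values as they are entered. Here is a minimal Python sketch of the idea; the allowed ranges are assumptions of mine, though the field names match the sample database used later in these notes.

    # Hypothetical per-field constraints used to catch entry errors.
    CONSTRAINTS = {
        "AID":   lambda v: isinstance(v, int) and 0 <= v <= 10000,
        "DRUGS": lambda v: v in (0, 1, 2, 3),
        "SEX":   lambda v: v in ("M", "F"),
    }

    def validate(record):
        """Return the names of fields whose values violate their constraints."""
        return [f for f, ok in CONSTRAINTS.items() if f in record and not ok(record[f])]

    print(validate({"AID": 5000, "DRUGS": 1, "SEX": "M"}))  # []
    print(validate({"AID": -5, "DRUGS": 7, "SEX": "M"}))    # ['AID', 'DRUGS']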
Q.56] How is access control implemented in a database? Explain the user authentication and access policies in this context.
Access Control:
Databases are often separated logically by user access privileges. For example, all users can be granted access to general data, but only the personnel department can obtain salary data and only the marketing department can obtain sales data.
The database administrator specifies who should be allowed access to which data, at the view, relation, field, record, or even element level. The DBMS must enforce this policy, granting access to all specified data and no access where prohibited.
It is important to notice that a user can sometimes obtain data by inference, without needing direct access to the secure object itself. Restricting inference may mean prohibiting certain accesses, which also limits the queries of users who do not intend any unauthorized access to values. Moreover, attempts to check requested accesses for possible unacceptable inferences may actually degrade the DBMS's performance.
Access policies:
Integrity:
In the case of a multilevel database, integrity becomes both more important and more difficult to achieve. Because of the *-property for access control, a process that reads high-level data is not allowed to write to a file at a lower level. Applied to databases, this principle says that a high-level user should not be able to write a lower-level data element.
The problem with this interpretation arises because the DBMS must be able to read all records in the database and write new records for any of the following purposes: to do backups, to scan the database to answer queries, to reorganize the database according to users' processing needs, or to update all records of the database.
When people encounter this problem, they handle it by using trust and common sense. People who have access to sensitive information are careful not to convey it to uncleared individuals. In computing systems, there are two choices: either the process cleared at a high level cannot write to a lower level, or the process must be a "trusted process", the computer equivalent of a person with a security clearance.
Confidentiality:
Users trust that the database will provide correct information, meaning that the data are consistent and accurate. In multilevel databases, two users working at two different levels of security might get two different answers to the same query. In order to preserve confidentiality, precision is sacrificed.
Enforcing confidentiality can also lead to unknowing redundancy. Suppose a personnel specialist works at one level of access permission. The specialist knows that Bob Hill works for the company. However, Bob's record does not appear on the retirement payment roster. The specialist assumes this omission is an error and creates a new record for Bob.
The reason that no record for Bob appears is that Bob is a secret agent, and his employment with the company is not supposed to be public knowledge. There actually is a record on Bob in the file, but, because of his special position, his record is not accessible to the personnel specialist. The creation of the new record means that there are now two records on Bob: one sensitive and one not. This situation is called polyinstantiation, meaning that one record can appear many times, with a different level of confidentiality each time.
Thus, merely scanning the database for duplicates is not a satisfactory way to find records entered unknowingly by people with only low clearances.
Q. 57] Explain different attacks used to determine sensitive data values from a database.
The following attacks are used to determine sensitive data values from the database:
Direct Attack
In a direct attack, a user tries to determine values of sensitive fields by seeking them directly with queries
that yield few records. The most successful technique is to form a query so specific that it matches exactly
one data item.
Against Table 1, a sensitive query might be
List NAME where SEX=M ^ DRUGS=1
This query discloses that for record ADAMS, DRUGS=1. However, it is an obvious attack because it selects only people for whom DRUGS=1.
Table 1. Sample database (excerpt)
Name    Sex  Race  Aid  Fines  Drugs  Dorm
Bailey  M    B     0    0      0      Grey
Koch    F    C     0    0      1      West
Liu     F    A     0    10     2      Grey
Indirect Attack
Another procedure, used by the U.S. Census Bureau and other organizations that gather sensitive data, is to
release only statistics. The organizations suppress individual names, addresses, or other characteristics by
which a single individual can be recognized. Only neutral statistics, such as count, sum, and mean, are
released.
The indirect attack seeks to infer a final result based on one or more intermediate statistical results, but this
approach requires work outside the database itself.
Sum
An attack by sum tries to infer a value from a reported sum. For example, with the sample database in Table 1, it might seem safe to report student aid totals by sex and dorm; such a report is shown in Table 2. This seemingly innocent report reveals that no female living in Grey is receiving financial aid. Thus we can infer that any female living in Grey is certainly not receiving financial aid. This approach often allows us to determine a negative result.
Count
The count can be combined with the sum to produce some even more revealing results. Often these two
statistics are released for a database to allow users to determine average values.
Table 3 shows the count of records for students by dorm and sex. This table is innocuous by itself.
Combined with the sum table, this table demonstrates that the two males in Holmes and West are receiving
financial aid in the amount of $5000 and $4000, respectively. We can obtain the names by selecting the
subschema of NAME, DORM, which is not sensitive because it delivers only low-security data on the entire
database.
Sex    Holmes  Grey  West  Total
M      1       3     1     5
F      2       1     3     6
Total  3       4     4     11
Median
By a slightly more complicated process, we can determine an individual value from medians. The attack requires finding selections having one point of intersection that happens to be exactly in the middle.
For example, in our sample database there are five males and three persons whose drug-use value is 2. Arranged in order of aid, these lists are shown in Table 4. Someone working at the health clinic might be able to find out that Majors is a white male whose drug-use score is 2. That information identifies Majors as the intersection of these two lists and pinpoints Majors' financial aid as $2000. In this example, the queries
q = median (AID where SEX = M)
p = median (AID where DRUGS = 2)
reveal the exact financial aid amount for Majors.
Table 4. The two lists, ordered by Aid
Males:
Name    Sex  Drugs  Aid
Bailey  M    0      0
Dewitt  M    3      1000
Majors  M    2      2000
Groff   M    3      4000
Adams   M    1      5000
Persons with Drugs = 2:
Name    Sex  Drugs  Aid
Liu     F    2      0
Majors  M    2      2000
Hill    F    2      5000
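The median intersection above can be reproduced with a short Python sketch over the sample database rows used in this document (only the fields needed here are included):

    import statistics

    # Subset of the sample database: (name, sex, drugs, aid).
    rows = [
        ("Bailey", "M", 0, 0), ("Dewitt", "M", 3, 1000), ("Majors", "M", 2, 2000),
        ("Groff", "M", 3, 4000), ("Adams", "M", 1, 5000),
        ("Liu", "F", 2, 0), ("Hill", "F", 2, 5000),
    ]

    q = statistics.median(aid for _, sex, _, aid in rows if sex == "M")
    p = statistics.median(aid for _, _, drugs, aid in rows if drugs == 2)
    print(q, p)  # 2000 2000: both medians point at Majors' aid value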
Attacks:
Attacks are external actions that cause a computer system to malfunction and diminish the value of the system's assets. Attacks can gradually destroy system security. The following related terms are used when discussing attacks that determine sensitive data values from a database.
1. Threats
A threat to a computer system is a set of circumstances that has the potential to cause loss or harm.
2. Vulnerability
A vulnerability is a weakness in the security system (for example, in procedures, design, or implementation) that might be exploited to cause loss or harm. For instance, a particular system may be vulnerable to unauthorized data manipulation because the system does not verify a user's identity before allowing data access.
Think of a man holding back water behind a wall: a small crack in the wall is a vulnerability that threatens the man's security. If the water rises to or beyond the level of the crack, it will exploit the vulnerability and harm the man.
A human who exploits a vulnerability perpetrates an attack on the system. An attack can also be launched by another system, as when one system sends an overwhelming set of messages to another, virtually shutting down the second system's ability to function.
3. Control:
A control is an action, device, procedure, or technique that removes or reduces a vulnerability. In the analogy, the man placing his finger in the hole is controlling the threat of water leaks until he finds a more permanent solution to the problem.
Q 58] What is the inference problem? Explain with respect to its vulnerability on databases.
The inference problem is a way to infer or derive sensitive data from nonsensitive data. The inference problem is a subtle vulnerability in database security.
The database in Table-1 can help illustrate the inference problem. Recall that AID is the amount of financial aid a student is receiving, FINES is the amount of parking fines still owed, and DRUGS is the result of a drug-use survey: 0 means never used and 3 means frequent user. Obviously this information should be kept confidential. We assume that AID, FINES, and DRUGS are sensitive fields, although only when the values are related to a specific individual.
Direct Attack: In a direct attack, a user tries to determine values of sensitive fields by seeking them directly with queries that yield few records. The most successful technique is to form a query so specific that it matches exactly one data item. In Table-1, a sensitive query might be
List NAME where SEX=M ^ DRUGS=1
Indirect Attack: Another procedure, used by organizations that gather sensitive data, is to release only statistics. The organizations suppress individual names, addresses, or other characteristics by which a single individual can be recognized. Only neutral statistics, such as count, sum, and mean, are released.
• Sum: An attack by sum tries to infer a value from a reported sum. For example, with the sample database in Table-1, it might seem safe to report student aid totals by sex and dorm. This approach often allows us to determine a negative result.
• Count: The count can be combined with the sum to produce some even more revealing results. Often these two statistics are released for a database to allow users to determine average values.
• Median: By a slightly more complicated process, we can determine an individual value from medians. The attack requires finding selections having one point of intersection that happens to be exactly in the middle (see Fig-1, Intersecting Medians, later in the document).
Linear System Vulnerability: A tracker is a specific case of a more general vulnerability. With a little logic, algebra, and luck in the distribution of the database contents, it may be possible to determine a series of queries that returns results relating to several different sets. This attack can also be used to obtain results other than numerical ones.
Q59) Explain relationship between security and precision with help of Diagram ?
[Figure: Security versus Precision. Concentric circles: the most sensitive data, at the centre, are concealed (not disclosed) for maximum security; around them lies data that cannot be inferred from queries; the least sensitive data, on the outside, are revealed for maximum precision.]
For example, a researcher may want a list of grades for all students using drugs. Such queries probably do not compromise the identity of any individual. We want to disclose as much data as possible so that users of the database have access to the data they need. This goal, called precision, aims to protect all sensitive data while revealing as much nonsensitive data as possible.
We can depict the relationship between security and precision with concentric circles. As the figure shows, the sensitive data in the central circle should be carefully concealed. The outside band represents data we willingly disclose in response to queries. But we know that the user may put together pieces of the disclosed data and infer other, more deeply hidden, data. The figure shows us that beneath the outer layer there may be yet more nonsensitive data that the user cannot infer.
The ideal combination of security and precision allows us to maintain perfect confidentiality with maximum precision; in other words, we disclose all and only the nonsensitive data. But achieving this goal is not as easy as it might seem.
Q.60] What is the n-item k-percent rule? How does it help to suppress personal data values?
The n-item k-percent rule eliminates certain low-frequency elements from being displayed. It is not sufficient simply to delete them, however, if their values can also be inferred.
The data in the table suggest that the cells with counts of 1 should be suppressed; their counts are too revealing. But it does no good to suppress the Male-Holmes cell when the value 1 can be determined by subtracting Female-Holmes (2) from the total (3), as shown in the table.
When one cell is suppressed in a table with totals for rows and columns, it is necessary to suppress at least one additional cell on the row and one on the column to provide some confusion. Using this logic, all cells would have to be suppressed in this small sample table. When totals are not provided, single cells in a row or column can be suppressed. (A minimal sketch of suppression appears below.)
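A rough Python sketch of the suppression idea, under my own simplifying assumptions (suppress any released count below a threshold n; totals are omitted so single cells can be suppressed):

    # Count of students by dorm and sex (from the sample database).
    counts = {("M", "Holmes"): 1, ("M", "Grey"): 3, ("M", "West"): 1,
              ("F", "Holmes"): 2, ("F", "Grey"): 1, ("F", "West"): 3}

    N = 2  # assumed threshold: cells with fewer than N records are too revealing

    def released(cells, n):
        """Replace low-frequency cells with a suppression marker before release."""
        return {key: (value if value >= n else "suppressed") for key, value in cells.items()}

    for key, value in released(counts, N).items():
        print(key, value)   # ('M', 'Holmes') suppressed, ('M', 'Grey') 3, ...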
The basic security requirements of database systems are not unlike those of other
computing systems. The basic problems-access control, exclusion of spurious data,
authentication of users, and reliability –have appeared in many contexts.
a) Physical database integrity :- The data of a database are immune to physical problems, such
as power failures, and someone can reconstruct the database if it is destroyed through a
catastrophe.
b) Logical database integrity :- The structure of the database is preserved. With logical integrity
of a database, a modification to the value of one field does not affect other fields.
c) Element integrity :-The data contained in each element are accurate.
d) Auditability: It is possible to track who or what has accessed (or modified) the elements in the database.
e) Access control: Users are allowed to access only authorized data, and different users can be restricted to different modes of access (such as read or write).
f) User authentication: Every user is positively identified, both for the audit trail and for permission to access certain data.
g) Availability: Users can access the database in general and all the data for which they are authorized.
Q.63 Explain following points with respect to security of the database system?
1. Integrity of database
2. Auditability
3. User Authentication
4. Access control
5. Element integrity
6. Availability
Ans.
1. Integrity of database
The data of a database are immune to physical problems, such as power failures, and someone can reconstruct the database if it is destroyed through a catastrophe. With logical database integrity, the structure of the database is preserved: a modification to the value of one field does not affect other fields.
Integrity of the database as a whole is the responsibility of the DBMS, the operating system, and the computing system manager. From the perspective of the operating system and the computing system manager, databases and DBMSs are files and programs. Therefore, one way of protecting the database as a whole is to regularly back up all files on the system.
2. Auditability: For some applications it may be desirable to generate an audit record of all access to the database. Such a record can help to maintain the database's integrity, or at least to discover after the fact who had affected which values and when. Auditing is also useful because users can sometimes access protected data incrementally; that is, no single access reveals the protected data, but a series of accesses taken together does.
3. User Authentication: The DBMS can require rigorous user authentication; for example, a DBMS might insist that a user pass both specific password and time-of-day checks. This authentication supplements the authentication performed by the operating system. The DBMS runs as an application program on top of the operating system, and this design means that there is no trusted path from the DBMS to the operating system.
4. Access Control: Databases are often separated logically by user access privileges. All users can be granted access to general data, but only the personnel department can obtain salary data and only the marketing department can obtain sales data. The database administrator specifies who should be allowed access to which data, at the view, relation, field, record, or even element level. The DBMS must enforce this policy, granting access to all specified data and no access where prohibited.
5. Element Integrity: The integrity of the database elements is their correctness or accuracy. Authorized users are responsible for entering correct data in the database, but users and programs make mistakes collecting data, computing results, and entering values.
6. Availability: A DBMS has aspects of both a program and a system. It is a program that uses other hardware and software resources, yet to many users it is the only application run. Users often take the DBMS for granted, employing it as an essential tool with which to perform particular tasks.
The computer security means that we are addressing three very important aspects of any computer related
system ; confidentiality , integrity and availability .
Database concerns about reliability and integrity can be viewed from three dimensions, as follows:
1. Database integrity: concern that the database as a whole is protected against damage, as from the failure of a disk drive or the corruption of the master database index. These concerns are addressed by operating system integrity controls and recovery procedures.
2. Element integrity: concern that the value of a specific data element is written or changed only by authorized users. Proper access controls protect a database from corruption by unauthorized persons.
3. Element accuracy: concern that only correct values are written into the elements of a database. Checks on the values of elements can help to prevent insertion of improper values. Also, constraint conditions can detect incorrect values.
Thus, integrity, confidentiality, and reliability are closely related concepts in databases. Users trust the DBMS to maintain their data correctly, so integrity issues are very important to database security.
Q.65] ”Database concerns about reliability and integrity can be viewed from three dimensions.”
– Explain.
Databases amalgamate data from many sources, and users expect a DBMS to provide access to the
data in a reliable way. When software engineers say that software is reliable, they mean that the software
runs for very long periods of time without failing. Users certainly expect a DBMS to be reliable, since the
data usually are key to business or organizational needs. Moreover, users entrust their data to a DBMS and
rightly expect it to protect the data from loss or damage. Concerns for reliability and integrity are general
security issues, but they are more highly apparent with databases.
There are several ways that a DBMS guards against loss or damage. However, the controls we consider are not absolute: no control can prevent an authorized user from inadvertently entering an acceptable but incorrect value.
Database concerns about reliability and integrity can be viewed from three dimensions:
• Database integrity: concern that the database as a whole is protected against damage, as from the
failure of a disk drive or the corruption of the master database index. These concerns are addressed by
the operating system integrity controls and recovery procedures.
• Element integrity: concern that the value of a specific data element is written or changed only by
authorized users. Proper access controls protect a database from corruption by unauthorized users.
• Element accuracy: concern that only correct values are written in to the elements of a database.
Checks on the values of elements can help to prevent insertion of improper values. Also, constraint
conditions can detect incorrect values.
Q.66] What is sensitive data? Explain several factors that can make data sensitive.
Some databases contain what is called sensitive data. As a working definition, let us say that sensitive data are data that should not be made public. Determining which data items and fields are sensitive depends both on the individual database and on the underlying meaning of the data. Obviously, some databases, such as a public library catalog, contain no sensitive data, while other databases, such as defense-related ones, are totally sensitive. These two cases, nothing sensitive and everything sensitive, are the easiest to handle, because they can be covered by access controls to the database itself: someone either is or is not an authorized user. These controls are provided by the operating system.
The more difficult problem, which is also the more interesting one, is the case in which some but not all of the elements in the database are sensitive. There may be varying degrees of sensitivity. For example, a university database might contain student data consisting of name, financial aid, dorm, drug use, sex, parking fines, and race. Name and dorm are probably the least sensitive; financial aid, parking fines, and drug use the most; sex and race somewhere in between. That is, many people may have legitimate access to name, some to sex and race, and relatively few to financial aid, parking fines, or drug use. Indeed, knowledge of the existence of some fields, such as drug use, may itself be sensitive.
Sample Database:
Name Sex Race Aid Fines Drugs Dorm
Adams M C 5000 45 1 Holmes
Bailey M B 0 0 0 Grey
Chin F A 3000 20 0 West
Dewitt M B 1000 35 3 Grey
Earhart F C 2000 95 1 Holmes
Fein F C 1000 15 0 West
Groff M C 4000 0 3 West
Hill F B 5000 10 2 Holmes
Koch F C 0 0 1 West
Liu F A 0 10 2 Grey
Majors M C 2000 0 2 Grey
Any descriptive information about data, when revealed, leads to a disclosure. Following are the types of disclosure:
1. Exact Data: The most serious type of disclosure is the exact value of the data item concerned. The user may intentionally request sensitive data, or may request general data without knowing that some of it is sensitive.
2. Bounds: Disclosing the bounds of a sensitive value is also a form of disclosure. This means indicating that a sensitive value y lies between L and H. The user may then use a technique similar to binary search, issuing multiple requests to determine whether y lies in the lower or upper half of the interval, then narrowing the interval further, and so on until the desired precision is reached (see the sketch after this list).
3. Negative Result: This means determining that z is NOT the value of y. A user can query to determine a negative result. This may provide valuable information, such as "0 is not the number of HIV cases in a university", which means at least one case is present in the university.
4. Existence: Sometimes the existence of data is itself sensitive information, regardless of its value. For example, an employer may not want workers to know that all long-distance calls are being monitored; in that case, discovering a LONG DISTANCE field in a personnel record leads to a disclosure.
5. Probable Value: It may be possible to determine the probability that a certain element has a certain value.
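A minimal Python sketch of the bounds-disclosure attack in item 2: it assumes a hypothetical query interface that answers only yes/no questions of the form "is y at most m?", yet the exact value leaks after a handful of queries.

    SECRET = 7350  # the sensitive value y; the attacker only knows 0 <= y <= 10000

    def oracle_at_most(m):
        """Hypothetical query interface that leaks only a bound, not the value."""
        return SECRET <= m

    low, high = 0, 10000
    queries = 0
    while low < high:                 # binary search narrows [low, high] each round
        mid = (low + high) // 2
        queries += 1
        if oracle_at_most(mid):
            high = mid
        else:
            low = mid + 1

    print(low, queries)               # 7350 recovered in about log2(10000), roughly 14 queries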
Sometimes it is difficult to determine which data are sensitive and how to protect them. The situation is complicated by the desire to share nonsensitive data. For reasons of confidentiality we want to disclose only those data that are not sensitive. Such an outlook encourages a conservative philosophy in determining what data to disclose: less is better than more.
On the other hand, consider the users of the data. The conservative philosophy suggests rejecting any query that mentions a sensitive field. We may thereby reject many reasonable and nondisclosing queries. For example, a researcher may want a list of grades for all students using drugs. Such queries probably do not compromise the identity of any individual. We want to disclose as much data as possible so that users of the database have access to the data they need.
Precision aims to protect all sensitive data while revealing as much nonsensitive data as possible, whereas security allows only authorized access to information, thereby preventing the data from being disclosed to or corrupted by intruders.
One can depict the relationship between security and precision with concentric circles. As shown in the
figure ,the sensitive data in the central circle should be carefully concealed.
[Figure: concentric circles, with the most sensitive data concealed at the centre and the least sensitive data disclosed at the outside.]
The outside band represents data we willingly disclose in response to queries. But the user may put together pieces of the disclosed data and infer other, more deeply hidden, data. The figure shows that beneath the outer layer there may be yet more nonsensitive data that the user cannot infer.
The ideal combination of security and precision allows us to maintain perfect confidentiality with maximum precision; in other words, we disclose all and only the nonsensitive data. But achieving this goal is not as easy as it might seem.
DIRECT ATTACK: In a direct attack, a user tries to determine the values of sensitive fields by seeking them directly with queries that yield few records. The most successful technique is to form a query so specific that it matches exactly one data item.
In the table below, a sensitive query might be
List NAME where SEX=M ^ DRUGS=1
This query discloses that for record ADAMS, DRUGS=1. However, it is an obvious attack because it selects only people for whom DRUGS=1.
Table: Sample Database (repeated above).
A less obvious query pads the condition with clauses that appear to select additional records, such as a second clause selecting records where SEX is neither M nor F and a third clause selecting records where DORM=AYRES. On the surface, such a query looks as if it should conceal drug usage by selecting other, non-drug-related records as well. However, the query still retrieves only one record, revealing a name that corresponds to the sensitive DRUGS value. To recognize the problem, the DBMS would need to know that SEX has only two possible values, so that the second clause will select no records; even if that were possible, the DBMS would also need to know that no records exist with DORM=AYRES, even though AYRES might in fact be an acceptable value for DORM.
The rule of "n items over k percent" means that data should be withheld if n items represent over k percent of the result reported. In the previous case, the one person selected represents 100 percent of the data reported, so there would be no ambiguity about which person matches the query.
INDIRECT ATTACK:
The indirect attack seeks to infer a final result based on one or more intermediate statistical results, but this approach requires work outside the database itself. In particular, a statistical attack seeks to use some apparently anonymous statistical measure to infer individual data.
We present several examples of indirect attacks on databases that report statistics.
Sum:
An attack by sum tries to infer a value from a reported sum. For example, with the sample database drawn earlier, it might seem safe to report student aid totals by sex and dorm, as in the table drawn below. This seemingly innocent report reveals that no female living in Grey is receiving financial aid. Thus, we can infer that any female living in Grey (such as Liu) is certainly not receiving financial aid. This approach often allows us to determine a negative result.
TRACKER ATTACKS:
A tracker attack can fool the database manager into locating the desired data by using additional queries that produce small results. The tracker adds additional records to be retrieved for two different queries; the two sets of records cancel each other out, leaving only the statistic or data desired. The approach is to use intelligent padding of two queries: in other words, instead of trying to identify a unique value, we request n-1 other values. Given n and n-1, we can easily compute the desired single element.
For instance, suppose we wish to know how many female Caucasians live in Holmes Hall, a count we are not allowed to request directly. A series of related queries posed might be
q1 = c1 + c2 + c3 + c4 + c5
q2 = c1 + c2 + c4
q3 = c3 + c4
q4 = c4 + c5
q5 = c2 + c5
To see how, use basic algebra to note that q1 - q2 = c3 + c5 and q3 - q4 = c3 - c5. Then, subtracting these two equations, we obtain c5 = ((q1 - q2) - (q3 - q4)) / 2. Once we know c5, we can derive the others.
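The algebra above can be checked with a few lines of Python; the counts c1 to c5 are arbitrary example values of my own choosing.

    # Hypothetical true counts that the attacker cannot query individually.
    c1, c2, c3, c4, c5 = 4, 2, 3, 6, 1

    # Released aggregate query results.
    q1 = c1 + c2 + c3 + c4 + c5
    q2 = c1 + c2 + c4
    q3 = c3 + c4
    q4 = c4 + c5
    q5 = c2 + c5          # released as well, though not needed to recover c5

    # Recover the protected count: (q1 - q2) = c3 + c5 and (q3 - q4) = c3 - c5.
    recovered_c5 = ((q1 - q2) - (q3 - q4)) // 2
    print(recovered_c5 == c5)  # True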
In fact, this attack can also be used to obtain results other than numerical ones. Recall that we can apply logical rules and the AND and OR operators typical of database queries to derive values from a series of logical operations. For example, each expression may represent a query asking for precise data instead of counts; the result of each query is a set of records. Using logic and set algebra in a manner similar to our numerical example, we can carefully determine the actual values of the protected data.
The n-item k-percent rule eliminates certain low-frequency elements from being displayed. The data in the table below suggest that the cells with a count of one should be suppressed, as their counts are too revealing. But it does no good to suppress the Male-Holmes cell when the value 1 can be determined by subtracting Female-Holmes from the total, as shown in the next table.
         Drug use
Sex      0    1    2    3
Male     1    1    1    2
Female   2    2    2    0
Another control combines rows or columns to protect sensitive values. The counts above, combined with other results such as sums, would otherwise permit us to infer individual drug use.
Suppression by combining revealing values:
         Drug use
Sex      0 or 1    2 or 3
Male     2         3
Female   4         2
To suppress the sensitive information, it is possible to combine the attribute values for 0 and 1, and also for 2 and 3, producing the less sensitive results shown in the table above.
Random sample
With random sample control, a result is not computed from the whole database; instead, the result is computed on a random sample of the database. The sample chosen is large enough to be valid. Thus, a result of 5 percent for a particular query means that 5 percent of the records chosen for the sample had the desired property. In this way, all equivalent queries will produce the same result, although the result will be only an approximation for the entire database.
Query analysis
A more complex form of security uses query analysis. Here, a query and its implications are analyzed to determine whether a result should be provided. Query analysis can be quite difficult. One approach involves maintaining a query history for each user and judging each query in the context of what inferences are possible given previous results.
[Figure: an original record passed through the encryption mechanism under two different keys, K1 and K2, yields two different encrypted records; a checksum accompanies the result.]
Sensitivity Lock: A Sensitivity Lock is a combination of a unique identifier and the sensitivity level.
Because the identifier is unique, each lock relates to one particular record. Many different elements will have
the same sensitivity level. A malicious subject should not be able to identify two elements having identical
sensitivity levels just by looking at the sensitivity level portion of the lock. Because of the encryption, the
lock’s contents, especially the sensitivity level, are concealed from plain view. Thus, the lock is associated
with one specific record, and it protects the secrecy of the sensitivity level of that record.
[Figure: Sensitivity Lock. The record number (e.g. R07) and the sensitivity mark (e.g. TS) of a data item (e.g. Secret Agent) are passed through an encryption function under key K to produce the sensitivity lock.]
The integrity lock was proposed (in a U.S. study) as a way to provide both integrity and limited access for a database. A model of the basic integrity lock is shown in the figure below.
[Figure: Integrity Lock. A data item (e.g. Secret Agent) is stored together with its sensitivity mark (e.g. TS) and a checksum.]
As illustrated, each apparent data item consists of three pieces: the actual data item itself, a sensitivity
label, and a checksum. The sensitivity label defines the sensitivity of the data, and the checksum is
computed across both data and sensitivity label to prevent unauthorized modification of the data or its label.
The actual data item is stored in plaintext, for efficiency, because the DBMS may need to match it against a query.
The third piece of the integrity lock for a field is an error-detecting code, called a cryptographic
checksum. To guarantee that a data value or its sensitivity classification has not been changed, this
checksum must be unique for a given element and must contain both the element's data value and something to tie that value to a particular position in the database, as shown in the figure below.
[Figure: the cryptographic checksum covers the record, the field, the element's value, and its sensitivity mark.]
An appropriate cryptographic checksum includes something unique to the record, something unique to this
datafield within the record, the value of this element, and the sensitivity classification of the element. These
four components guard against anyone’s changing, copying, or moving the data. The checksum can be
computed with a strong encryption algorithm such as the Data Encryption Standard (DES).
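A minimal Python sketch of such a cryptographic checksum follows; it uses an HMAC over the four components named above (record id, field name, element value, sensitivity label), which is one reasonable construction but not necessarily the one the original designs used.

    import hmac, hashlib

    KEY = b"assumed-secret-key"   # hypothetical key held by the trusted component

    def integrity_lock(record_id, field, value, sensitivity):
        """Checksum binding a value and its sensitivity label to one field of one record."""
        message = "|".join([record_id, field, value, sensitivity]).encode()
        return hmac.new(KEY, message, hashlib.sha256).hexdigest()

    def verify(record_id, field, value, sensitivity, checksum):
        """Detect any change, copy, or move of the value or its sensitivity label."""
        return hmac.compare_digest(integrity_lock(record_id, field, value, sensitivity), checksum)

    lock = integrity_lock("R07", "OCCUPATION", "Secret Agent", "TS")
    print(verify("R07", "OCCUPATION", "Secret Agent", "TS", lock))   # True
    print(verify("R07", "OCCUPATION", "Secret Agent", "C", lock))    # False: label tampered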
The integrity lock DBMS was invented as a short-term solution to the security problem for multilevel databases. The intention was to be able to use any (untrusted) database manager with a trusted procedure that handles access control. The sensitive data were obliterated or concealed with encryption that protected both a data item and its sensitivity. In this way, only the access procedure would need to be trusted, because only it would be able to achieve or grant access to sensitive data.
The efficiency of integrity locks is a serious drawback. The space needed for storing an element must be expanded to contain the sensitivity label. Because there are several pieces in the label and one label for every element, the space required is significant.
Q. 76 Explain the trusted front-end design model for multilevel databases.
A trusted front end is also known as a guard and operates much like a reference monitor. This approach, originated by Hinke and Schaefer, recognizes that many DBMSs have been built and put into use without consideration of multilevel security. Staff members are already trained in using these DBMSs, and they may in fact use them frequently. The front-end concept takes advantage of existing tools and expertise, enhancing the security of these existing systems with minimal change to the system. The interaction between a user, a trusted front end, and a DBMS involves the following steps.
1. A user identifies himself or herself to the front end; the front end authenticates the user's identity.
2. The user issues a query to the front end.
3. The front end verifies the user's authorization to the requested data.
4. The front end issues a query to the database manager.
5. The database manager performs I/O access, interacting with low-level access control to reach the actual data.
6. The database manager returns the result of the query to the trusted front end.
7. The trusted front end analyzes the sensitivity levels of the data items in the result and selects those items consistent with the user's security level.
8. The trusted front end transmits the selected data to the untrusted front end for formatting.
9. The untrusted front end transmits the formatted data to the user.
The trusted front end serves as a one-way filter, screening out results the user should not be able to access. But the scheme is inefficient, because potentially much data is retrieved and then discarded as inappropriate for the user.
Q 77. Explain the use of commutative filters at record, attribute, and element level.
A commutative filter is a process that forms an interface between the user and a DBMS. However, unlike the trusted front end, the filter tries to capitalize on the efficiency of most DBMSs: the filter reformats the query so that the database manager does as much of the work as possible, screening out many unacceptable records. The filter then provides a second screening to select only data to which the user has access.
1. When used at the record level, the filter requests the desired data plus cryptographic checksum information; it then verifies the accuracy and accessibility of the data to be passed to the user.
2. At the attribute level, the filter checks whether all attributes in the user's query are accessible to the user and, if so, passes the query to the database manager. On return, it deletes all fields to which the user has no access rights.
3. At the element level, the system requests the desired data plus cryptographic checksum information. When these are returned, it checks the classification level of every element of every record retrieved against the user's level.
Suppose a group of physicists in Washington works on very sensitive projects, so the current user should not be allowed to access the physicists' names in the database. This restriction presents a problem with the query:
retrieve NAME where ((OCCUP=PHYSICIST) ^ (CITY=WASHDC))
Suppose, too, that the current user is prohibited from knowing anything about any people in Moscow. Using a conventional DBMS, the query might access all records, and the DBMS would pass the result on to the user. However, as we have seen, the user might infer things about Moscow employees or Washington physicists working on secret projects without even accessing those fields directly.
The commutative filter reforms the original query in a trustable way, so that sensitive information is never extracted from the database. Our sample query would become
retrieve NAME where ((OCCUP=PHYSICIST) ^ (CITY=WASHDC))
from all records R where
(NAME-SECRECY-LEVEL (R) <= USER-SECRECY-LEVEL) ^
(OCCUP-SECRECY-LEVEL (R) <= USER-SECRECY-LEVEL) ^
(CITY-SECRECY-LEVEL (R) <= USER-SECRECY-LEVEL)
The filter works by restricting the query passed to the DBMS and then restricting the results before they are returned to the user. In this instance, the filter would request NAME, NAME-SECRECY-LEVEL, OCCUP, OCCUP-SECRECY-LEVEL, CITY, and CITY-SECRECY-LEVEL values and then would filter and return to the user those fields and items that are of a secrecy level acceptable for the user.
An example of the query filtering operation follows.
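The original illustration is not reproduced here; as a stand-in, the following Python sketch (with hypothetical records and secrecy levels of my own choosing) shows the two stages described above: restrict what the database returns to fields the user may see, then apply the user's query conditions.

    USER_LEVEL = 2   # hypothetical secrecy clearance of the current user

    # Hypothetical records: each field value is paired with its secrecy level.
    records = [
        {"NAME": ("Smith", 3), "OCCUP": ("PHYSICIST", 1), "CITY": ("WASHDC", 1)},
        {"NAME": ("Jones", 1), "OCCUP": ("PHYSICIST", 1), "CITY": ("WASHDC", 1)},
        {"NAME": ("Petrov", 1), "OCCUP": ("CLERK", 1), "CITY": ("MOSCOW", 3)},
    ]

    def commutative_filter(records, user_level):
        """Stage 1: keep only records whose field secrecy levels the user may see.
        Stage 2: return only the NAME of matching physicists in WASHDC."""
        visible = [r for r in records
                   if all(level <= user_level for _, level in r.values())]
        return [r["NAME"][0] for r in visible
                if r["OCCUP"][0] == "PHYSICIST" and r["CITY"][0] == "WASHDC"]

    print(commutative_filter(records, USER_LEVEL))  # ['Jones']: Smith's name is too sensitive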
Q.79] What is the purpose of encryption in a multilevel secure database management system?
[Figure: the same original record encrypted under two different keys, K1 and K2, produces two different encrypted records.]
The obvious control for multilevel databases is partitioning: the database is divided into separate databases, each at its own level of sensitivity. This approach is similar to maintaining separate files in separate file cabinets.
The disadvantages of partitioning are:
1. This control destroys a major advantage of databases: elimination of redundancy and improved accuracy through having only one field to update.
2. Furthermore, it does not address the problem of a high-level user who needs access to some low-level data combined with high-level data.
3. Nevertheless, because of the difficulty of establishing, maintaining, and using multilevel databases, many users with data of mixed sensitivities handle them by using separate, isolated databases.
Inference by Count:
The count can be combined with the sum to produce some even more revealing results. Often these two
statistics are released for a database to allow users to determine average values. The table below shows the
count of student records by dorm and sex. Combined with the corresponding sums, it reveals, for example, that
the two males in Holmes and West are receiving financial aid of $5000 and $4000 respectively.

Sex     Holmes   Grey   West   Total
M       1        3      1      5
F       2        1      3      6
Total   3        4      4      11
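As a small illustration of the inference, the following sketch uses hypothetical (dorm, sex) cells; only the two $5000 and $4000 male amounts echo the text above, the rest is made up. A released count of 1 for a cell discloses that individual's value outright, and a count of 2 lets anyone who knows one of the pair deduce the other.

# Illustrative only: how released COUNT and SUM statistics expose individual values.
stats = {  # (dorm, sex) -> (count, sum of financial aid); mostly hypothetical numbers
    ("Holmes", "M"): (1, 5000),
    ("West",   "M"): (1, 4000),
    ("Holmes", "F"): (2, 7000),   # hypothetical cell
}

for (dorm, sex), (count, total) in stats.items():
    if count == 1:
        print(f"{dorm}/{sex}: the single student's aid is exactly ${total}")
    elif count == 2:
        print(f"{dorm}/{sex}: the two amounts sum to ${total}; knowing one reveals the other")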
Inference by Median: By a slightly more complicated process, we can determine an individual value from
medians. The attack requires finding selections having one point of intersection that happens to be exactly
in the middle, as shown in fig-1 below:
[Fig-1: Intersecting Medians. One selection ranges from the lowest to the highest values of Attribute 1 and another from the lowest to the highest values of Attribute 2; the two selections intersect at a single point that is the median of both.]
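A minimal sketch of the idea, with hypothetical names and values chosen so that exactly one record lies at the median of both selections:

# Illustrative tracker-style attack using medians (hypothetical data).
import statistics

records = [  # (name, sex, drug_use, aid)
    ("Adams", "M", 1, 5000), ("Bailey", "M", 0, 2000), ("Dewitt", "M", 3, 8000),
    ("Chin", "F", 1, 3000), ("Earhart", "F", 1, 6000),
]

males    = [r for r in records if r[1] == "M"]   # selection by attribute 1
drug_use = [r for r in records if r[2] == 1]     # selection by attribute 2

median_aid_males = statistics.median(r[3] for r in males)
median_aid_drug  = statistics.median(r[3] for r in drug_use)

shared = {r[0] for r in males} & {r[0] for r in drug_use}
if len(shared) == 1 and median_aid_males == median_aid_drug:
    print(f"{shared.pop()}'s aid is revealed: {median_aid_males}")   # Adams: 5000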
There are many different types of computer networks. Peer-to-peer, ethernet, token ring, local area network
(LAN), and wide area network (WAN) are examples of computer network configurations. Each has its own
advantages and disadvantages.
Peer-to-peer Networks: Peer-to-peer networks are used when only a few computers
need to be linked together to share a printer and files. A networking program ( such as Lantastic) is used to
facilitate the operation of the network.
Ethernet Networks: Ethernet networks can use a configuration that has a central
switching hub with a number of computers all connecting to the hub. A server or bridge to other networks may
also be connected to this central switch. The Ethernet protocol allows
each computer to request a data download or upload, but only one at a time. If two computers were to request
data at exactly the same time, then both requests would be sent
back and they would have to re-apply. Each computer and device within the network must be equipped with
an Ethernet card. The card is the physical connection between
the computer and the network cabling
Token ring Networks: Token-ring networks are networks laid out in a ring configuration. Each computer is
connected to the one next to it and the last one in the line is connected back to the first. A server or bridge to
other networks may also be connected in this ring. The token-ring protocol requires that a computer request a
software "token" to
attach to its request for data. The token travels around the ring in one direction looking for a computer in the
ring that needs its services, delivering requests, or returning data. All this happens at very high speeds so
that a computer does not have to wait more than a fraction of a second for a free token. Each computer and
device within the network must be equipped with a token-ring card. The card is the physical connection
between the computer and the network cabling. The token-ring network is mainly an IBM system.
Local Area Networks: Local Area Networks (LANs) are usually located in one building and may consist of
several small networks linked together. The small networks may be of the same type or a combination of
types. A large computer server may act as a central control centre and storage area; or, each network within
the LAN may have its own server, and the LAN may exist just to enable the sharing between the networks.
The main function of a LAN is to share information and resources like printers and file servers. Computers
within the networks may or may not have hard drives (terminals). The protocol programs control the transfer
of data between the networks and the individual computer. Novell and
Microsoft NT Server are examples of two network software protocol packages commonly used with LAN
systems.
Wide Area Networks: Wide Area Networks (WANs) are similar to LANs, but the WAN network usually
extends to other buildings and in some cases to other places. The networks may be wired together, but in
some cases the links include laser beams, microwaves, and/or radio waves. The main function of a WAN is to
share information and resources like printers and file servers. Computers within the network may or may not
have hard drives (terminals).
Internetworks: An internetwork can be defined as a network of networks, also called an internet. It is a
connection of two or more separate networks. The Internet is physically and logically exposed, i.e., any person,
including an attacker, can access it. Due to its complex connectivity it is practically possible to reach any
resource connected to the network. The most significant internetwork is the Internet. It has spread
worldwide, and it is impossible to count the number of hosts connected to it because hundreds of
users are being added every day.
There is one more type of network, called a campus area network (CAN), which covers the computers in
adjacent buildings; such networks are generally owned by a university or a company.
The three necessary components of an attack are: method, opportunity, motive. The four important motives
are: challenge, power, money, ideology. Based on this we can identify who attacks n/w.
Challenge: The single most significant motivation for a network attacker is the intellectual challenge. He or
she is intrigued with knowing the answers to questions such as: Can I defeat this network? What would happen
if I tried this technique or that action?
Some attackers enjoy the intellectual stimulation of defeating the supposedly undefeatable. Other attackers
seek to demonstrate the weakness in the network so that others may pay attention to strengthening the
security. Still other attackers are unknown, unnamed attackers who do it for fun.
Fame: Some attackers seek recognition for their activities. That is, part of the challenge is doing the
deed and the other part is taking the credit for it. They may not be able to brag too openly, but they enjoy
the personal thrill of seeing their names written up in the news media.
Money: Financial gain also motivates some attackers. Some attackers engage in industrial espionage, seeking
information on a company's products, clients, or long-range plans. Industrial espionage is illegal, but it
occurs in part because of the high potential gain. Its existence and consequences can be embarrassing for
the target companies, so many incidents go unreported. There are therefore few reliable statistics on how much
espionage is going on.
Ideology: Many security analysts believe that the Code Red worm of 2001 was launched by a group
motivated by the tension in U.S.-China relations. We can distinguish between two types of related
behavior: hactivism and cyberterrorism.
Hactivism involves operations that use hacking techniques against a target's network with the intent of
disrupting normal operations but not causing serious damage.
Cyberterrorism is more dangerous and involves politically motivated hacking operations intended
to cause grave harm such as loss of life or severe economic damage.
Anonymity: A cartoon shows a dog typing at a workstation and saying to another dog, "On the Internet,
no one knows you are a dog." A network removes most of the clues, such as appearance, voice, or context, by
which we recognize acquaintances.
Automation: In some networks, one or both end points, as well as all intermediate points, involved in a
given communication may be machines with only minimal human supervision.
Distance: Many networks connect end points that are physically far apart. Although not all network
connections involve distance, the speed of communication is fast enough that humans usually cannot tell
whether the remote site is near or far.
Opaqueness: Because the dimension of distance is hidden, users cannot tell whether a remote host is in the
room next door or in a different country. In the same way, users cannot distinguish whether they are
connected to a node in an office, school, home, or warehouse, or whether the node's computing system is large
or small, modest or powerful. In fact, users cannot even tell whether the current communication involves the same
host with which they communicated last time.
Routing diversity: To maintain and improve reliability and performance, routing between two endpoints is
usually dynamic. That is, the same interaction may follow one path through the network the first time
and a different path the second time. In fact, a query may take a different path from the response
that follows a few seconds later.
Q87.List the various reason for which the networks are attacked.
The reasons for which the networks are attacked are as follows:
Challenge: Why do people do dangerous or daunting things, like climb mountains, swim across the English
Channel, or engage in extreme sports? Because of the challenge. The situation is no different for someone
skilled in writing or using programs. The single most significant motivation for a network attacker is the
intellectual challenge. He or she is intrigued with knowing the answer to: Can I defeat this network? What
would happen if I tried this approach or that technique? Some attackers enjoy the intellectual stimulation of
defeating the supposedly undefeatable.
Money & Espionage: The challenge of accomplishment is enough for some attackers. But other attackers
perform industrial espionage, seeking information on a company's products, clients, or long-range plans. We
know industrial espionage has a role when we read about laptops and sensitive papers having been lifted
from hotel rooms while other, more valuable items were left behind. Some countries are notorious for using
espionage to aid their state-run industries. Sometimes industrial espionage is responsible for seemingly
strange corporate behavior.
Fame: The challenge of accomplishment is enough for some attackers, but others seek recognition
for their activities. That is, part of the challenge is doing the deed; another part is taking credit for it. In
many cases we do not know who the attackers really are, but they leave behind a "calling card" with a
recognizable name, such as Mafiaboy. They often retain some anonymity by using a pseudonym, but they achieve
fame nevertheless.
Ideology: In the past few years, we are starting to find cases in which attacks are perpetrated to advance
ideological ends. For example, many security analysts believe that the Code Red worm of 2001 was
launched by a group motivated by the tension in U.S.-China relations. Hactivism involves operations that use
hacking techniques against a target with the intent of disrupting normal operations but not causing serious
damage. Cyberterrorism is more dangerous than hactivism: politically motivated hacking operations intended
to cause grave harm such as loss of life or severe economic damage.
A firewall is a device that filters all traffic between a protected or "inside" network
and a less trustworthy or "outside" network. Usually a firewall runs on a dedicated device; because it is a
single point through which traffic is channeled and performance is important, nonfirewall functions
should not be done on the same machine. Also, because a firewall is executable code, an attacker could
compromise that code and execute from the firewall's device.
The purpose of a firewall is to keep "bad" things outside a protected environment. To accomplish that,
firewalls implement a security policy that is specifically designed to address what bad things might happen.
For example, the policy might be to prevent any access from outside (while still allowing traffic to pass
from the inside to the outside). Alternatively, the policy might permit accesses only from certain places, from
certain users, or for certain activities. Part of the challenge of protecting a network with a firewall is
determining which security policy meets the needs of the installation
A firewall is a special form of reference monitor. By carefully positioning a firewall within a network , we can
assure that all the network accesses that we want to control must pass through it. This restriction meets the
“always invoked” condition. A firewall is typically well isolated, making it highly immune to modification.
Usually a firewall is implemented on a separate computer, with direct connections only to the outside and
inside networks. This isolation is expected to meet the “tamperproof” requirement. And firewall designers
strongly recommend keeping the functionality of the firewall simple.
Types of Firewall
Firewalls have wide range of capabilities. Types of Firewalls include:
1) Packet Filtering Gateways or Screening Routers.
2) Stateful inspection Firewalls.
3) Application proxies.
4) Guards.
5) Personal firewalls.
Packet Filtering Gateways: A packet filtering gateway, or screening router, is the simplest and, in some
situations, the most effective type of firewall. A packet filtering gateway controls access to packets based
on address or specific transport protocol type (such as HTTP web traffic). A minimal sketch of this kind of
screening is given below.
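The sketch uses made-up rules and addresses and applies first-match screening by source address and destination port; a real screening router also examines protocol, direction, and interface.

# Minimal sketch of screening-router style packet filtering (hypothetical rules).
import ipaddress

RULES = [  # (action, source network, destination port or None for "any")
    ("deny",  ipaddress.ip_network("10.66.0.0/16"), None),   # hypothetical hostile net
    ("allow", ipaddress.ip_network("0.0.0.0/0"),    80),     # HTTP from anywhere
    ("allow", ipaddress.ip_network("0.0.0.0/0"),    443),    # HTTPS from anywhere
]

def filter_packet(src_ip, dst_port):
    """Return the action of the first matching rule; anything unmatched is dropped."""
    src = ipaddress.ip_address(src_ip)
    for action, net, port in RULES:
        if src in net and (port is None or port == dst_port):
            return action
    return "deny"  # default deny

print(filter_packet("10.66.4.9", 80))       # deny  (hostile network)
print(filter_packet("198.51.100.7", 443))   # allow (HTTPS)
print(filter_packet("198.51.100.7", 23))    # deny  (telnet not permitted)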
Stateful inspection firewall: A stateful inspection firewall maintains state information from one packet to
another. A stateful inspection firewall would track the sequence of packets and conditions from one packet
to another to thwart an attack.
Application Proxy: An application proxy gateway, also called a bastion host, is a firewall that simulates the
effects of an application so that the application will receive only requests to act properly. A proxy gateway is
a two-headed device: it looks to the inside as if it is the outside connection, while to the outside it responds
just as an insider would.
Guard: A guard is a sophisticated firewall. It receives protocol data units, interprets them, and passes
through the same or different protocol data units that achieve either the same result or a modified result.
Ans) In the 1980s, Digital Equipment Corporation recognized the problem of needing to authenticate
nonhuman entities in a computing system. For example, a process might retrieve a user query, which it then
reformats, perhaps limits, and submits to a database manager. Both the database manager and the query
processor want to be sure that a particular communication channel between the two is authentic. Neither of
these servers is running under the direct control or supervision of a human, so human forms of access control
are inappropriate.
Digital created a simple architecture for this requirement, effective against the following threats:
• Impersonation of a server by a rogue process, for either of the two servers involved in the
authentication
• Interception or modification of data exchanged between the servers
• Replay of a previous authentication
The architecture assumes that each server has its own private key and that the corresponding public key is
available to or held by every other process that might need to establish an authenticated channel. To begin
an authenticated communication between server A and server B, A sends a request to B encrypted under
B's public key. B decrypts the request and replies with a message encrypted under A's public key. To avoid
replay, A and B can append a random number to the message to be encrypted.
A and B can establish a private channel by one of them choosing an encryption key and sending it to the
other in the authentication message. Once the authentication is complete, all communication under that
secret key can be assumed to be as secure as the original dual public-key exchange. To protect the
privacy of the channel, Gasser recommends a separate cryptographic processor, such as a smart card, so
that private keys are never exposed outside the processor.
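A minimal sketch of the exchange described above, assuming the Python "cryptography" package; the message format, nonce length, and session-key handling are illustrative only, and a production protocol would also bind identities, sign messages, and check timestamps.

# Sketch (not a production protocol): A encrypts a request plus a nonce under B's
# public key; B proves it could decrypt by echoing the nonce under A's public key,
# along with a fresh session key for the private channel that follows.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each server holds its own private key; the public keys are assumed already known.
a_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
b_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
a_pub, b_pub = a_priv.public_key(), b_priv.public_key()

# A -> B: a request plus a 16-byte random nonce, encrypted under B's public key.
nonce = os.urandom(16)
msg_to_b = b_pub.encrypt(b"open channel" + nonce, OAEP)

# B decrypts, recovers the nonce, and replies under A's public key with a session key.
plain = b_priv.decrypt(msg_to_b, OAEP)
request, recovered_nonce = plain[:-16], plain[-16:]
session_key = os.urandom(32)
msg_to_a = a_pub.encrypt(recovered_nonce + session_key, OAEP)

# A decrypts and checks its own nonce to rule out a replayed response.
plain = a_priv.decrypt(msg_to_a, OAEP)
echoed, key_from_b = plain[:16], plain[16:]
assert echoed == nonce, "possible replay or impersonation"
print("authenticated; session key shared:", key_from_b == session_key)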
[Figure: a user's workstation (client) communicates with an internal server over exposed communication media; a firewall sits at the physically protected perimeter.]
Ans:
Intrusion detection software builds patterns of normal system usage, triggering an alarm any time
the usage seems abnormal. After a decade of promising research results in intrusion detection, products are
now commercially available. Some trusted operating systems include a primitive degree of intrusion
detection software.
Although the problems are daunting, there have been many successful implementations of trusted
operating systems.
Kernelized Design: A kernel is the part of an operating system that performs the lowest-level functions. In
standard operating system designs, the kernel implements operations such as synchronization, interprocess
communication, message passing, and interrupt handling. The kernel is also called a nucleus or core. The
notion of designing an operating system around a kernel is described by Lampson and Sturgis and by Popek
and Kline.
A security kernel is responsible for enforcing the security mechanisms of the entire operating system. The
security kernel provides the security interfaces among the hardware, the operating system, and the other parts
of the computing system.
Reference monitor: The most important part of the security kernel is the reference monitor, the portion
that controls accesses to objects. A reference monitor is not necessarily a single piece of code; rather, it is
the collection of access controls for devices, files, memory, interprocess communication, and other kinds of
objects.
The reference monitor concept has been used for many trusted operating systems and also for smaller
pieces of trusted software.
Trusted computing base: The trusted computing base, or TCB, is the name we give to everything in the
trusted operating system necessary to enforce the security policy.
TCB implementation:
Security-related activities are likely to be performed in different places. Security is potentially related to
every memory access, every I/O operation, every file or program access, every initiation or termination of
a user, and every interprocess communication.
Separation/Isolation
There are four ways to separate one process from another:
physical separation, temporal separation, cryptographic separation, and logical separation.
With physical separation, two different processes use two different hardware facilities. For example,
sensitive computation can be performed on a reserved computing system while nonsensitive tasks are
performed on a public system.
Hardware separation offers several attractive features, including support for multiple independent threads
of execution, memory protection, mediation of I/O, and at least three different degrees of execution privilege.
Temporal separation occurs when different processes are run at different times.
Encryption is used for cryptographic separation, so two different processes can be run at the same time,
because an unauthorized user cannot access sensitive data in a readable form.
Logical separation, also called isolation, is provided when a process such as a reference monitor separates one
user's objects from those of another user. Secure computing systems have been built with each of these forms
of separation.
Ring structuring: It is an example of open design and complete mediation.
Q94. What security problems are faced while using Kerberos in distributed system?
Kerberos is a system that supports authentication in distributed systems. It was originally designed to work with
secret key encryption. Kerberos is based on the idea that a central server provides authenticated tokens, called
tickets, to requesting applications. A ticket is an unforgeable, nonreplayable, authenticated object. That is, it is an
encrypted data structure naming a user and a service that the user is allowed to obtain. It also contains a time
value and some control information.
[Figure: Kerberos architecture. The user authenticates to the Kerberos server, then sends a server access request to the ticket-granting server and receives a service ticket; service tickets, with the associated authorization (session) keys, accompany the user's service requests to the other servers.]
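The following is a minimal sketch of the ticket idea only, using symmetric (Fernet) encryption from the Python "cryptography" package; the key names and fields are illustrative, and real Kerberos adds a separate ticket-granting server, authenticators, realms, and ticket lifetimes.

# Sketch of the ticket idea: the KDC shares a secret key with each service, and a
# ticket is sealed under the service's key, so the user who carries it cannot read,
# forge, or alter it.
import json, time
from cryptography.fernet import Fernet

service_key = Fernet.generate_key()        # known to the KDC and to the service

def kdc_issue_ticket(user, service):
    """KDC seals (user, service, session key, timestamp) under the service's key."""
    session_key = Fernet.generate_key().decode()
    ticket = Fernet(service_key).encrypt(json.dumps({
        "user": user, "service": service,
        "session_key": session_key, "issued": time.time(),
    }).encode())
    return ticket, session_key             # the session key goes back to the user

def service_accept(ticket, max_age=300):
    """The service unseals the ticket and checks that it is fresh."""
    fields = json.loads(Fernet(service_key).decrypt(ticket))
    if time.time() - fields["issued"] > max_age:
        raise ValueError("stale ticket (possible replay)")
    return fields["user"], fields["session_key"]

ticket, key_for_user = kdc_issue_ticket("alice", "fileserver")
user, key_for_service = service_accept(ticket)
print(user, key_for_user == key_for_service)   # alice True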
• Firewalls can protect an environment only if the firewalls control the entire perimeter. That is, firewalls
are effective only if no unmediated connections breach the perimeter. If even one inside host connects
to an outside address, by a modem for example, the entire inside net is vulnerable through the modem
and its host.
• Firewalls do not protect data outside the perimeter; data that have properly passed (outbound)
through the firewall are exposed as if there were no firewall.
• Firewalls are the most visible part of an installation to the outside, so they are the most attractive
target for attack. For this reason, several different layers of protection, called defense in depth, are
better than relying on the strength of just a single firewall.
• Firewalls must be correctly configured, that configuration must be updated as the internal and
external environment changes, and firewall activity reports must be reviewed periodically for evidence
of attempted or successful intrusion.
• Firewalls are targets for the penetrators. While a firewall is designed to withstand attack, it is not
impenetrable. Designers intentionally keep a firewall small and simple so that even if a penetrator
breaks it, the firewall does not have further tools, such as compilers, linkers, loaders, and the like, to
continue an attack.
• Firewalls exercise only minor control over the content admitted to the inside, meaning that
inaccurate data or malicious code must be controlled by other means inside the perimeter.
Firewalls are important tools in protecting an environment connected to a network. However, the
environment must be viewed as a whole, all possible exposures must be considered, and the firewall must
fit onto a larger, comprehensive security strategy. Firewalls alone cannot secure an environment.
Q97. Describe 4 areas related with administrative and physical aspects?
The 4 areas related with administrative and physical aspects are :
1. Planning: What advance preparations and study let us know that our implementation meets our
security needs for today and tomorrow?
Every security plan must address seven issues:
a) policy, indicating the goals of the computer security effort and the willingness of the people involved to
work to achieve these goals
b) current state, describing the status of security at the time of the plan
c) requirements, recommending ways to meet the security goals
d) recommended controls, mapping controls to the vulnerabilities identified in the policy and
requirements
e) accountability, describing who is responsible for each security activity
f) timetable, identifying when the different security functions are to be done
g) continuing attention, specifying a structure for periodically updating the security plan
2. Risk Analysis: How do we weigh the benefits of controls against their costs, and how do we justify any
controls?
Risk analysis is the process of examining a system and its operational context to determine possible
exposures and the potential harm they can cause.
There are three strategies for risk reduction:
a) avoiding the risk, by changing the requirements for security or other system characteristics
b) transferring the risk, by allocating the risk to other systems, people, organizations, or assets, or by
buying insurance to cover any financial loss should the risk become a reality
c) assuming the risk, by accepting it, controlling it with available resources, and preparing to deal with
the loss if it occurs
3. Policy: How do we establish a framework to see that our computer security needs continue to be met?
A security policy is a high-level management document that informs all users of the goals of and constraints
on using the system. A policy document is written in broad enough terms that it does not change frequently.
The policy should articulate senior management's decisions regarding security as well as assert
management's commitment to security.
4. Physical Control: What aspects of the computing environment have an impact on security?
Physical security is a term used to describe the protection needed outside the computer system.
Typical physical security includes guards, locks, and fences to deter direct attacks. Other protections
cover natural disasters such as fires, floods, and power outages. Physical protection is also
endangered by human activities.
A security plan identifies and organizes the security activities for a computing system. The plan is both a
description of the current situation and a plan for improvement.
Every security plan must address seven issues.
-Policy
-Current state
-Requirements
-Recommended controls
-Accountability
-Timetable
-Continuing attention
Policy:
A security plan must state the organization's policy on security. A security policy is a high-level statement
of purpose and intent. The policy is one of the most difficult sections to write well. Developing it includes
eight steps, as follows:
1. identify enterprise knowledge
2. identify operational area knowledge
3. identify staff knowledge
4. establish security requirements
5. map high-priority information assets
6. perform an infrastructure vulnerability evaluation
7. conduct a multidimensional risk analysis
8. develop a protection strategy
The policy statement must answer three essential questions:
- Who should be allowed access?
- To what system and organizational resources should access be allowed?
- What type of access should be allowed?
Correctness: Are the requirements understandable? Are they stated without error?
Consistency: Are there any conflicting or ambiguous requirements?
Completeness: Are all possible situations addressed by the requirements?
[Figure: the security planning process takes security policies and requirements as inputs and produces the security plan and the recommended security techniques and controls.]
Recommended controls: The security requirements lay out the system's needs in terms of what should be
protected. The security plan must also recommend what controls should be incorporated into the system to
meet those requirements.
Responsibility for implementation: A section of the security plan should identify which people are
responsible for implementing the security requirements. This documentation assists those who must
coordinate their individual responsibilities with those of other developers. At the same time, the plan makes
explicit who is accountable should some requirement not be met or some vulnerability not be
addressed. That is, the plan notes who is responsible for implementing controls when a new vulnerability is
discovered or a new kind of asset is introduced.
Consider, for example, the groups listed below:
1. Personal computer users
2. Project leaders
3. Managers
4. Database administrators
5. Information officers
6. Personnel staff members, etc.
Timetable: A comprehensive security plan cannot be executed instantly. The security plan includes a
timetable that shows how and when the elements of the plan will be performed. These dates also give
milestones so that management can track the progress of implementation.
The plan should specify the order in which the controls are to be implemented so that the most serious
exposures are covered as soon as possible. The plan must also be extensible.
Continuing attention: Good intentions are not enough when it comes to security. We must not only take
care in defining requirements and controls, but we must also find ways of evaluating a system's security to be
sure that the system is as secure as we intend it to be.
Q.99 Explain response team in detail.
The evolution of the Internet has been widely chronicled. Resulting from a research project that
established communications among a handful of geographically distributed systems, the Internet now
covers the globe as a vast collection of networks made up of millions of systems. The Internet has become
one of the most powerful and widely available communications mediums on earth, and our reliance on it
increases daily. Governments, corporations, banks, and schools conduct their day-to-day business over the
Internet. With such widespread use, the data that reside on and flow across the network vary from
banking and securities transactions to medical records, proprietary data, and personal correspondence. The
Internet is easy and cheap to access, but the systems attached to it lack a corresponding ease of
administration. As a result, many Internet systems are not securely configured. Additionally, the underlying
network protocols that support Internet communication are insecure, and few applications make use of the
limited security protections that are currently available. The combination of the data available on the
network and the difficulties involved in protecting the data make Internet systems vulnerable
attack targets. It is not uncommon to see articles in the media referring to Internet intruder activities. But
exploitation of security problems on the Internet is not a new phenomenon.
In 1988 the "Internet Worm" incident occurred and resulted in a large percentage of the systems on the
network at that time being compromised and temporarily placed out of service. Shortly after the incident, a
meeting was held to identify how to improve response to computer security incidents on the Internet. The
recommendations resulting from the meeting included a call for a single point of contact to be established
for Internet security problems that would act as a trusted clearinghouse for security information. In
response to the recommendations, response teams were developed to enhance the security of
systems. The CERT Coordination Center (also known as the CERT/CC and originally named the Computer
Emergency Response Team) was formed to provide response to computer security incidents on the Internet
[CERT/CC 1997b]. The CERT/CC was one of the first organizations of this type, a computer security incident
response team (CSIRT).
Q. 100] Give three controls that could have both positive as well as negative effects.
• 2 means that the control mitigates the vulnerability significantly and should be a prime candidate for
addressing it.
• 1 means that the control mitigates the vulnerability somewhat, but not as well as one labeled 2, so it
should be a secondary candidate for addressing it.
• 0 means that the control has no significant effect on the vulnerability.
• -1 means that the control worsens the vulnerability somewhat or incurs a new vulnerability.
• -2 means that the control worsens the vulnerability significantly or incurs new vulnerabilities.
In the VAM rating scheme, the matrix is used to support decisions about controls in the following way.
We begin with the rows of the matrix, each of which corresponds to a vulnerability.
We follow the row across, looking for cells labeled 2. Then we follow each such column up to its heading
to see which security techniques are strong controls for this kind of vulnerability.
In this way, we can look at the implications of using each control to address a known vulnerability.
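A small sketch of that row-by-row reading of the matrix, with hypothetical vulnerability and control names, is shown below; cells hold the -2..2 scores defined above.

# Illustrative scan of a tiny ratings matrix: rows are vulnerabilities,
# columns are candidate controls (all names hypothetical).
matrix = {
    "single point of failure": {"redundancy": 2, "encryption": 0, "training": 1},
    "weak authentication":     {"redundancy": -1, "encryption": 1, "training": 2},
}

for vulnerability, row in matrix.items():
    prime     = [c for c, score in row.items() if score == 2]
    secondary = [c for c, score in row.items() if score == 1]
    harmful   = [c for c, score in row.items() if score < 0]
    print(f"{vulnerability}: prime={prime}, secondary={secondary}, worsens={harmful}")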
If a security policy is written poorly, it cannot guide the developers and users in providing appropriate
security mechanisms to protect important assets. Certain characteristics make a security policy a good one.
Coverage: A security policy must be comprehensive: it must either apply to or explicitly exclude all possible
situations. Furthermore, a security policy may not be updated as each new situation arises, so it must be
general enough to apply naturally to new cases that occur as the system is used in unusual or unexpected
ways.
Durability: A security policy must grow and adapt well. In large measure, it will survive the system's growth
and expansion without change. If written in a flexible way, the existing policy will be applicable to new
situations. However, there are times when the policy must be changed (such as when government
regulations mandate new security constraints), so the policy must be changeable when it needs to be.
An important key to durability is keeping the policy free from ties to specific data or protection
mechanisms that almost certainly will change. For example, an initial version of a security policy might
require a ten-character password from anyone needing access to data on the Sun workstation in room
110. But when that workstation is replaced or moved, the policy's guidance becomes useless. It is
preferable to describe assets needing protection in terms of their function and characteristics, rather than in
terms of a specific implementation. For example, the policy on Sun workstations could be reworded to mandate
strong authentication for access to sensitive student grades or customers' proprietary data. Better still, we
can separate the elements of the policy, having one policy statement for student grades and another for
customers' proprietary data. Similarly, we may want to define one policy that applies to preserving the
confidentiality of relationships, and another protecting the use of the system through strong authentication.
Realism: The policy must be realistic. That is, it must be possible to implement the stated security
requirements with existing technology. Moreover, the implementation must be beneficial in terms of time,
cost, and convenience; the policy should not recommend a control that works but prevents the system or its
users from performing their activities and functions. It is important to make economically worthwhile
investments in security, just as for any other careful business investment.
Usefulness: An obscure or incomplete security policy will not be implemented properly, if at all. The policy
must be written in language that can be read, understood, and followed by anyone who must implement it or
is affected by it. For this reason, the policy should be succinct, clear, and direct.
Hardware
Software
Data
People
Documentation
Supplies
Q105. Why should anyone perform a risk analysis in preparation for creating security plan?
Effective security planning includes a careful risk analysis. A risk is a potential problem that the system or
its users may experience. We distinguish a risk from other project events by looking for three things:
A loss associated with an event. The event must generate a negative effect: compromised security, lost
time, diminished quality, lost money, lost control, lost understanding, and so on. This loss is called the risk
impact.
The likelihood that the event will occur. There is a probability of occurrence associated with each risk,
measured from 0 to 1. When the risk probability is 1, we say we have a problem.
The degree to which we can change the outcome. We must determine what, if anything, we can do to avoid
the impact or at least reduce its effects. Risk control involves a set of actions to reduce or eliminate the risk.
Many of the security controls are examples of risk control.
We usually want to weigh the pros and cons of the different actions we can take to address each risk. To that
end, we can quantify the effects of a risk by multiplying the risk impact by the risk probability, yielding the risk
exposure. Clearly, risk probabilities can change over time, so it is important to track them and plan for events
accordingly.
In general, there are three strategies for risk reduction:
1. Avoiding the risk, by changing the requirements for security or other system characteristics
2. Transferring the risk, by allocating the risk to other systems, people, organizations, or assets, or
by buying insurance to cover any financial loss should the risk become a reality
Q.106] Write a short note on: Security Policies
Ans.
There may also be some corporate level responsibilities such as accounting and personnel activities.
Data items at any level may have different degrees of sensitivity, such as public, proprietary, or
internal; the names may vary from organization to organization, and no hierarchy applies.
Let us assume that public information is less sensitive than proprietary, which in turn is less
sensitive than internal. Projects and departments tend to be fairly separated, with some overlap as
people work on two or more projects. Corporate level responsibilities tend to overlie projects and
departments, as people throughout the organization may have the need for accounting and
personnel data. However, even corporate data may have degrees of sensitivity. Projects themselves
may include a degree of sensitivity: staff members on project old-standby have no need to know
about new-product, while staff members on new-product may have access to all data on old-
standby.
[Figure: corporate-level functions such as Accounting and Personnel overlie the individual projects and departments.]
3. Assuming the risk, by accepting it, controlling it with available resources, and preparing to deal with
the loss if it occurs.
Thus there are costs associated not only with a risk's potential impact but also with reducing it.
Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.
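A small worked example of the two quantities, with made-up numbers: risk exposure is impact times probability, and risk leverage compares the exposure before and after a control against the control's cost (values above 1 suggest the control is worth its cost).

# Worked example (hypothetical numbers) of risk exposure and risk leverage.
impact      = 100_000   # loss in dollars if the risk event occurs
probability = 0.10      # likelihood of the event in the planning period

exposure_before = impact * probability          # 10,000

# A proposed control costs 3,000 and cuts the probability to 0.02.
control_cost   = 3_000
exposure_after = impact * 0.02                  # 2,000

risk_leverage = (exposure_before - exposure_after) / control_cost
print(exposure_before, exposure_after, risk_leverage)   # 10000.0 2000.0 ~2.67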
Risk analysis is the process of examining a system and its operational context to determine possible
exposures and the potential harm they can cause. Thus, the first step in a risk analysis is to identify and list
all exposures in the computing system of interest. Then, for each exposure, we identify possible controls
and their costs.
The last step is a cost-benefit analysis: does it cost less to implement a control or to accept the cost
of the loss? In the remainder of this section, we describe risk analysis, present examples of risk
analysis methods, and discuss some of the drawbacks to performing risk analysis.
1. Hazard analysis is a set of systematic techniques intended to expose potentially hazardous system
states. In particular, it can help us expose security concerns and then identify prevention or
mitigation strategies to address them.
2. That is, hazard analysis ferrets out likely causes of problems so that we can apply appropriate
techniques for preventing the problem or softening its likely consequences.
3. Thus, it usually involves developing hazard lists, as well as procedures for exploring "what if"
scenarios to trigger consideration of nonobvious hazards. The sources of problems can be lurking in any
artifacts of the development or maintenance process, not just in the code, so a hazard analysis
must be broad in its domain of investigation; in other words, hazard analysis is a system issue, not
just a code issue.
4. Similarly, there are many kinds of problems, ranging from incorrect code to unclear code to unclear
consequences of a particular action. A good hazard analysis takes all of them into account.
5. Although hazard analysis is generally good practice on any project, it is required in some
regulated and critical application domains, and it can be invaluable for finding security flaws. It is
never too early to be thinking about the sources of hazards.
6. The analysis should begin when you first start thinking about building a new system or when someone
proposes a significant upgrade to an existing system. Hazard analysis should continue throughout the system
life cycle.
7. You must identify potential hazards that can be introduced during system design, installation,
operation, and maintenance. A variety of techniques support the identification and management of
potential hazards.
8. Among the most effective are hazard and operability studies (HAZOP), failure modes and effects
analysis (FMEA), and fault tree analysis (FTA).
9. HAZOP is a structured analysis technique originally developed for the process control and chemical
plant industries. Over the last few years it has been adapted to discover potential hazards in
safety-critical software systems.
10. FMEA is a bottom-up technique applied at the system component level. A team identifies each
component's possible faults or fault modes; then it determines what could trigger the fault and
what systemwide effects each fault might have. By keeping system consequences in mind, the
team often finds possible system failures that are not made visible by other analytical means.
11. FTA complements FMEA. It is a top-down technique that begins with a postulated hazardous
system malfunction. Then the FTA team works backwards to identify the possible precursors to the
mishap. By tracing back from a specific hazardous malfunction, we can locate unexpected
contributors to the mishap and then look for opportunities to mitigate the risks (a minimal numeric
sketch of fault tree evaluation follows below).
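Below is a minimal numeric sketch of evaluating such a fault tree, with hypothetical events and probabilities assumed to be independent; the top hazard occurs if a sensor fails or both pumps fail.

# Tiny fault-tree evaluation (hypothetical events and probabilities).
from functools import reduce

def p_and(*probs):                       # all inputs must fail
    return reduce(lambda a, b: a * b, probs)

def p_or(*probs):                        # any input failing is enough
    return 1 - reduce(lambda a, b: a * (1 - b), probs, 1)

p_sensor, p_pump1, p_pump2 = 0.01, 0.05, 0.05
p_top = p_or(p_sensor, p_and(p_pump1, p_pump2))
print(p_top)                             # ~0.0125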
Anderson asks that we consider carefully the economic aspects of security when we devise our security
policy. He points out that the security engineering community tends to overstate security problems because
it is in its best interest to do so: "The typical infosec professional is a firewall vendor struggling to meet
quarterly sales targets to prop up a sagging stock price, or a professor trying to mine the 'cyber terrorism'
industry for grants, or a policeman pitching for budget to build up a computer crime agency."
Moreover, the security community is subject to fads, as are other disciplines. Anderson notes that security
is trendy in 2002, which means that vendors are pushing firewalls and encryption products that have been
oversold and address only part of a typical organization's security problems. He says that rather than focusing
on what is fashionable, we should focus on asking for a reasonable return on our investment in security.
Soo Hoo's research indicates that a reasonable return on security investment is about 20 percent, at a time
when companies usually expect a 30 percent return from their investments in IT. It may be more worthwhile
to implement inexpensive measures, such as enabling screen locking, than larger, more complex, and expensive
measures such as PKI and centralized access control.
Q112 ) How are copyrights important in computer security? How do they differ from patents?
Copyrights are designed to protect the expression of ideas. The right to copy an expression of an idea (not the
idea itself) is protected by copyright. An idea and the expression of that idea differ from each other. For example,
an algorithm is an idea; the code (program) written to implement the algorithm is an expression of that idea. So
copyright protects the right to copy the code, meaning no one else is allowed to distribute the code as his own,
but the algorithm can be used by anybody in the world.
A programmer or developer translates an algorithm into a program, that is, he writes code to implement the
algorithm. A programmer hopes to earn a living by presenting his program in such a manner that other
people will want to use the software and will pay to use it. The law of copyright protects an individual's right
to earn a living by saying that a particular way of expressing an idea belongs to its author. Copyright gives a
programmer the exclusive right to make copies of the software, and only the developer (author) can sell it.
In short, copyright does not allow piracy: the law of copyright prevents others from
selling or using pirated copies of software.
Patents protect inventions and ideas. A patent does not allow others to use the idea or object that was
newly developed or invented by someone else. An inventor can obtain a patent on an invention if the invention is
new, that is, if nobody else has developed a similar object before.
Patents differ from copyrights because they were developed to protect newly developed items or newly
invented ideas, whereas copyrights were designed to protect the expression of an idea. There is a difference
between an idea and the implementation of an idea. Unlike copyrights, patents can protect new and useful
processes, machines, software, etc.
Q.113] What is trade secret? List various characteristics of trade secret.
Trade Secret: A trade secret is unlike a patent or copyright in that it must be kept a secret. The
information has value only as a secret, and an infringer is one who divulges the secret. Once divulged, the
information usually cannot be made secret again.
Q.115] List the issues involved in the software vulnerability reporting argument. What are the technical
issues? Select a vulnerability reporting process that you think is appropriate and explain why it
meets more requirements than any other process.
From a rule-based ethical theory, attackers are wrong to perform malicious attacks. Notoriety or credit for
finding the flaws is a small interest.
The following issues are involved in the software vulnerability reporting argument.
Full Disclosure: Full disclosure helps users assess the seriousness of a vulnerability and apply appropriate
protection. But it also gives attackers more information with which to formulate attacks. Early full disclosure,
before the vendor has countermeasures ready, may actually harm users by leaving them vulnerable to a now
widely known attack.
Partial Disclosure: One can argue that the vulnerability details are there to be discovered; when a
vendor announces a patch for an unspecified flaw in a product, attackers will spread a complete
description of the vulnerability to other attackers through an underground network, and attacks will start against
users who may not have applied the vendor's fix.
No Disclosure: Some argue that users are best served by a scheme in which, every so often, new code is released,
sometimes fixing security vulnerabilities, sometimes adding new features. But without a sense of significance
or urgency, users may not install this new code.
Of all the vulnerability reporting processes, "responsible" vulnerability reporting seems best. Other schemes,
such as "users' interest," favor users more, while "vendors' interest" favors vendors more.
In responsible vulnerability reporting, a compromise between the two is achieved. This process
meets the constraints of timeliness, fair play, and responsibility. The user who reports a suspected vulnerability
is the "reporter" and the manufacturer is the "vendor." A third party, such as a computer emergency response
center, called the "coordinator," is brought in when there is a conflict or power issue between the reporter and
the vendor.
Ans :Many people argue that cracking is an acceptable practice because lack of protection means that the
owners of systems or data do not really value them. Spafford questions this logic by using the analogy of
entering a house.
Consider the argument that an intruder who does no harm and makes no changes is simply learning about
how computer systems operate.” Most of these people would never think to walk down a street, trying every
door to find one unlocked, then search through the drawers or the furniture inside .Yet, these same people
seem to give no second thought to making repeated attempts at guessing passwords to accounts they do
not own, and once onto a system, browsing through the files on disk.” How would you feel if you knew your
home had been invaded, even if no harm was done?
Spafford noted that breaking into a house or a computer system constitutes trespassing. To do so in an
effort to make security vulnerabilities more visible is "presumptuous and reprehensible". To enter either a
home or a computer system in an unauthorized way, even with benign intent, can lead to unintended
consequences. "Many systems have been damaged accidentally by ignorant intruders."
                            Copyright                     Patent                           Trade Secret
Requirement to distribute   Yes                           No                               No
Ease of filing              Very easy, do-it-yourself     Very complicated; specialist     No filing
                                                          lawyer suggested
Legal protection            Sue if unauthorized           Sue if invention copied          Sue if secret improperly
                            copy sold                                                      obtained
Ans: Various computing organizations such as the ACM, IEEE, and DPMA have developed their codes of ethics.
They are as follows:
IEEE: The IEEE has produced a code of ethics for its members. The IEEE is an organization of engineers, not
limited to computing. Its code of ethics is given below.
We, the members of the IEEE, in recognition of the importance of our technologies in affecting the
quality of life throughout the world, agree:
1. to accept responsibility in making engineering decisions consistent with the safety, health, and welfare of
the public, and to disclose promptly factors that might endanger the public or the environment;
2. to avoid real or perceived conflicts of interest whenever possible;
3. to be honest and realistic in stating claims or estimates based on available data;
4. to reject bribery in all of its forms;
5. to improve the understanding of technology, its appropriate application, and potential consequences;
6. to maintain and improve our technical competence and to undertake technological tasks for others only if
qualified by training or experience, or after full disclosure of pertinent limitations;
7. to avoid injuring others, their property, reputation, or employment by false or malicious action;
8. to assist colleagues and coworkers in their professional development and to support them in following
this code of ethics.
ACM: The ACM code of ethics recognizes three kinds of obligations of its members. The code has three
sections (plus a fourth commitment section).
Computer Ethics Institute: The Computer Ethics Institute is a nonprofit group that aims to encourage
people to consider the ethical aspects of their computing activities. Its guidelines are as follows:
1. Thou shalt not use a computer to harm other people.
2. Thou shalt not interfere with other people's computer work.
3. Thou shalt not snoop around in other people's computer files.
4. Thou shalt not use a computer to steal.
5. Thou shalt not use a computer to bear false witness.
6. Thou shalt not copy or use proprietary software for which you have not paid.
7. Thou shalt not appropriate other people's intellectual output.
8. Thou shalt think about the social consequences of the program you are writing or the system you are
designing.
9. Thou shalt always use a computer in ways that ensure consideration and respect for your fellow humans.
LAW: As we know, law is not always the appropriate way to deal with issues of human behavior. It is
difficult to define a law so that it precludes only the events we want it to. For example, a law that restricts animals
from public places must be refined to permit guide dogs for the blind.
ETHIC: An ethic is an objectively defined standard of right and wrong. Ethical standards are often idealistic
principles because they focus on one objective. In a given situation, however, several objectives may be
involved, so people have to determine an action that is appropriate considering all the objectives.
Therefore, through our choices, each of us defines a personal set of ethical practices; a set of ethical principles
is called an ethical system.
An ethic is different from a law in several important ways.
Ans.: Even when everyone acknowledges that a computer crime has been committed, computer crime is
hard to prosecute for the following reasons.
• Lack of understanding. Courts, lawyers, police agents, or jurors do not necessarily understand
computers. Many judges began practicing law before the invention of computers, and most began
before the widespread use of the personal computer. Fortunately, computer literacy in the courts is
improving as judges, lawyers, and police officers use computers in their daily activities.
• Lack of physical evidence. Police and courts have for years depended on tangible evidence, such as
fingerprints. As readers of Sherlock Holmes know, seemingly minuscule clues can lead to solutions to
the most complicated crimes. But with many computer crimes there simply are no fingerprints and
no physical clues of any sort.
• Lack of recognition of assets. We know what cash is, or diamonds, or even negotiable securities. But
are twenty invisible magnetic spots really equivalent to a million dollars? Is computer time an asset?
What is the value of stolen computer time if the system would have been idle during the time of the
theft?
• Lack of political impact. Solving and obtaining a conviction for a murder or robbery is popular with
the public, and so it gets high priority with prosecutors and police chiefs. Solving and obtaining a
conviction for an obscure high-tech crime, especially one not involving obvious and significant loss,
may get less attention. However, as computing becomes more pervasive, the visibility and impact of
computer crime will increase.
• Complexity of case. Basic crimes that everyone understands, such as murder, kidnapping, or auto
theft, can be easy to prosecute; a complex money-laundering or tax fraud case may be more difficult
to present to a jury because jurors have a hard time following a circuitous accounting trail. But the
hardest crime to present may be a high-tech crime, such as root access gained by a buffer overflow in which
memory was overwritten by other instructions, which allowed the attacker to copy and execute code
at will and then delete the code, eliminating all traces of entry.
• Juveniles. Many computer crimes are committed by juveniles. Society understands immaturity and
disregards even very serious crimes by juveniles because the juveniles did not understand the
impact of their actions. A more serious, related problem is that many adults see juveniles' computer
crimes as childhood pranks, the modern equivalent of tipping over an outhouse.
Even when there is clear evidence of a crime, the victim may not want to prosecute because of possible
negative publicity. Banks, insurance companies, investment firms, the government, and health care
groups fear that their trust by the public will be diminished if a computer vulnerability is exposed. Also, they
may fear repetition of the same crime by others: so-called copycat crimes. For all of these reasons,
computer crimes are often not prosecuted.
Q 122] Why do we need separate category for computer crime? Why is it hard to define?
Crimes can be organized into certain categories, including murder, robbery, and
littering. We do not separate crime into categories for different weapons, such as gun
crime or knife crime, but we do separate crime victims into categories depending on
whether they are people or other objects. Consider an example to see why these
categories are not sufficient and why we need special laws relating to computers as
subjects and objects of crime.
Rules of property:
Parker and Nycum describe the theft of a trade secret proprietary software package. The theft occurred
across state boundaries by means of telephone lines; this aspect is important because it means that the
crime is subject to federal law as well as state law.
The legal system has explicit rules about what constitutes property. Generally, property is
tangible, unlike magnetic impulses. For example, unauthorized use of a neighbor's lawn mower
constitutes theft, even if the lawn mower is returned in the same condition as it was when taken.
To a computer professional, taking a copy of a software package without permission is clear-cut theft.
A similar problem arises with computer services. We would generally agree that unauthorized access
to a computing system is a crime.
Q 123] Briefly discuss laws around the world that differ from U.S. laws and that should be of
interest to computer security.
There are some laws in the U.S. that define aspects of crime against or using
computers. These laws include:
1) U.S. Computer Fraud and Abuse Act
The primary federal statute prohibits:
a) unauthorized access to a computer containing data protected for national defense or foreign
relations concerns
b) unauthorized access to a computer containing certain banking or financial information
c) unauthorized access, use, modification, destruction, or disclosure of a computer or of information
in a computer operated on behalf of the U.S. government
d) accessing without permission a "protected computer", which the courts now interpret to
include any computer connected to the Internet
e) computer fraud
2) U.S. Economic Espionage Act
This 1996 act outlaws use of a computer for foreign espionage to benefit a foreign country or
business, or theft of trade secrets.
3) U.S. Electronic Funds Transfer Act
This law prohibits use, transport, sale, receipt, or supply of counterfeit, stolen, altered, lost, or
fraudulently obtained debit instruments in interstate or foreign commerce.
4) U.S. Freedom of Information Act
This act provides public access to information collected by the executive branch of the federal
government. The act requires disclosure of any available data, unless the data fall under one of
several specific exceptions, such as national security or personal privacy. The law's original intent
was to release to individuals any information the government had collected on them. Even foreign
governments can file for information. This act applies only to government agencies, although similar laws
could require disclosure from private sources.
5) U.S. Privacy Act
The Privacy Act of 1974 protects the privacy of personal data collected by the government. An individual is
allowed to determine what data have been collected on him or her, for what purpose, and to
whom such information has been disseminated. An additional use of the law is to prevent one
government agency from accessing data collected by another agency for another purpose.