Final-Info Assurance and Security Module - 012053
4.2 Cryptography .............................................................................................................................. 82
4.3 Types of Cryptography ............................................................................................................... 84
4.4 Introduction to the TCP/IP Stack ................................................................................................ 93
4.5 Firewalls.................................................................................................................................... 113
Chapter Five ........................................................................................................................................ 119
5. Application Security ....................................................................................................................... 119
5.1 Viruses and Other Wildlife ................................................................................................. 119
5.2 Malware Taxonomy ............................................................................................................ 119
5.3 Viruses and Public Health ................................................................................................... 120
5.4 Computer Worms ................................................................................................................ 124
5.5 Trojan Horses and Backdoors ........................................................................................... 125
5.6 Other Forms of Malicious Logic ......................................................................................... 127
5.7 Countermeasures ................................................................................................ 130
Chapter One
1. Introduction
1.1 Information Assurance
Information assurance (IA) is the practice of assuring information and managing risks related
to the use, processing, storage, and transmission of information or data and the systems and
processes used for those purposes. Information assurance includes protection of the integrity,
availability, authenticity, non-repudiation and confidentiality of user data. It uses physical,
technical and administrative controls to accomplish these tasks. While focused predominantly
on information in digital form, the full range of IA encompasses not only digital but also analog
or physical form. These protections apply to data in transit, both physical and electronic forms
as well as data at rest in various types of physical and electronic storage facilities. Information
assurance as a field has grown from the practice of information security.
Overview
Information assurance is the process of adding business benefit through the use of IRM
(Information Risk Management) which increases the utility of information to authorized users,
and reduces the utility of information to those unauthorized. It is strongly related to the field of
information security, and also with business continuity. IA relates more to the business level
and strategic risk management of information and related systems, rather than the creation and
application of security controls. Therefore, in addition to defending against malicious hackers
and code (e.g., viruses), IA practitioners consider corporate governance issues such as privacy,
regulatory and standards compliance, auditing, business continuity, and disaster recovery as
they relate to information systems. Further, while information security draws primarily from
computer science, IA is an interdisciplinary field requiring expertise in business, accounting,
user experience, fraud examination, forensic science, management science, systems
engineering, security engineering, and criminology, in addition to computer science. Therefore,
IA is best thought of as a superset of information security (i.e., an umbrella term), and as the
business outcome of Information Risk Management. Information Assurance is also the term
used by governments, including the government of the United Kingdom, for the provision of
holistic security to information systems. In this use of the term, the interdisciplinary approach
set out above is somewhat lessened in that, while security and systems engineering, business
continuity/enterprise resilience, forensic investigation, and threat analysis are considered,
management science, accounting, and criminology are not considered in developing mitigations
to the risks identified in the risk assessments conducted. HMG Information Assurance
Standards 1 and 2, which have replaced the HMG Information Security Standard, set out the principles
and requirements of risk management in accordance with the above principles and are among the
Information Assurance Standards currently used within the UK public sector.
The information assurance process typically begins with the enumeration and classification of
the information assets to be protected. Next, the IA practitioner will perform a risk assessment
for those assets. Vulnerabilities in the information assets are determined in order to enumerate
the threats capable of exploiting the assets. The assessment then considers both the probability
and impact of a threat exploiting a vulnerability in an asset, with impact usually measured in
terms of cost to the asset’s stakeholders. The sum of the products of the threats’ impact and the
probability of their occurring is the total risk to the information asset. With the risk assessment
complete, the IA practitioner then develops a risk management plan. This plan proposes
countermeasures that involve mitigating, eliminating, accepting, or transferring the risks, and
considers prevention, detection, and response to threats. Countermeasures may include
technical tools such as firewalls and anti-virus software, policies and procedures requiring such
controls as regular backups and configuration hardening, employee training in security
awareness, or organizing personnel into a dedicated computer emergency response team (CERT)
or computer security incident response team (CSIRT). The cost and benefit of each
countermeasure is carefully considered. Thus, the IA practitioner does not seek to eliminate all
risks, were that possible, but to manage them in the most cost-effective way. After the risk
management plan is implemented, it is tested and evaluated, often by means of formal audits.
The IA process is an iterative one, in that the risk assessment and risk management plan are
meant to be periodically revised and improved based on data gathered about their completeness
and effectiveness.
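The quantitative step described above (total risk as the sum, over all threats, of impact times probability of occurrence) can be sketched in a few lines of Python. The threat names, probabilities, and impact figures below are invented purely for illustration:

```python
# Minimal sketch of the quantitative risk step described above.
# Threat names, annual probabilities, and impact figures are invented examples.

threats = [
    # (threat, annual probability of occurrence, impact in dollars)
    ("ransomware infection", 0.10, 250_000),
    ("insider data theft",   0.02, 400_000),
    ("server room flood",    0.01, 120_000),
]

def total_risk(threats):
    """Sum of (probability x impact) over all threats: the expected annual loss."""
    return sum(probability * impact for _, probability, impact in threats)

print(f"Total risk (expected annual loss): ${total_risk(threats):,.0f}")
```

A risk management plan would then compare each countermeasure's cost against the reduction in this expected loss that it buys.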
1.2 Information security
History:
Since the early days of communication, diplomats and military commanders understood that it
was necessary to provide some mechanism to protect the confidentiality of correspondence and
to have some means of detecting tampering. Julius Caesar is credited with the invention of the
Caesar cipher c. 50 B.C., which was created in order to prevent his secret messages from being
read should a message fall into the wrong hands, but for the most part protection was achieved
through the application of procedural handling controls. Sensitive information was marked up
to indicate that it should be protected and transported by trusted persons, guarded, and stored in
a secure environment or strong box. As postal services expanded, governments created official
organizations to intercept, decipher, read, and reseal letters (e.g. the UK Secret Office and
Deciphering Branch in 1653).

In the mid-19th century more complex classification systems were developed to allow
governments to manage their information according to the degree of sensitivity. The British
Government codified this, to some extent, with the publication of the Official Secrets Act in
1889. By the time of the First World War, multi-tier classification systems were used to
communicate information to and from various fronts, which encouraged greater use of code
making and breaking sections in diplomatic and military headquarters. In the United Kingdom
this led to the creation of the Government Code and Cypher School in 1919. Encoding became
more sophisticated between the wars as machines were employed to scramble and unscramble
information. The volume of information shared by the Allied countries during the Second World
War necessitated formal alignment of classification systems and procedural controls. An arcane
range of markings evolved to indicate who could handle documents (usually officers rather than
men) and where they should be stored as increasingly complex safes and storage facilities were
developed. Procedures evolved to ensure documents were destroyed properly, and it was the
failure to follow these procedures that led to some of the greatest intelligence coups of the war
(e.g. U-570).

During the 1990s, the computer security industry witnessed a revolution in the mainstream
emergence of the hacking subculture. Hackers suddenly had different motives: greed, ideology,
and revenge. In early 2002, a Russian hacker was arrested for attempting to extort $10,000 from
a U.S. bank after breaking into one of its Web servers and stealing a customer list with names,
addresses, and bank account numbers. Governments are getting into the act too: almost every
civilized nation has some sort of information warfare program designed to cripple the
computing infrastructure of an adversary's military. Finally, a huge number of attacks have
originated from disgruntled employees and former employees of companies who know and
exploit the soft spots in a corporate security policy.

The end of the 20th century and early years of the 21st century saw rapid advancements in
telecommunications, computing hardware and software, and data encryption. The availability
of smaller, more powerful, and less expensive computing equipment put electronic data
processing within the reach of small businesses and the home user. These computers quickly
became interconnected through the Internet. The rapid growth and widespread use of electronic
data processing and electronic business conducted through the Internet, along with numerous
occurrences of international terrorism, fueled the need for better methods of protecting the
computers and the information they store, process, and transmit. The academic disciplines of
computer security and information assurance emerged along with numerous professional
organizations, all sharing the common goals of ensuring the security and reliability of
information systems.
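The Caesar cipher credited to Julius Caesar earlier in this section simply shifts each letter a fixed number of places through the alphabet, wrapping around at the end. A minimal Python sketch:

```python
def caesar(text, shift):
    """Shift each letter by `shift` positions, wrapping around the alphabet;
    characters that are not letters pass through unchanged."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"
plaintext = caesar(ciphertext, -3)         # decryption is the same shift in reverse
```

With only 25 possible keys, the cipher is trivially breakable by exhaustive search, which is why modern protection relies on the far stronger mechanisms discussed later in this module.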
Threats of this kind are typically dealt with by IT security specialists. One of the most common
methods of providing information assurance is to keep an off-site backup of the data in case
such an issue arises.
Governments, military, corporations, financial institutions, hospitals, and private businesses
amass a great deal of confidential information about their employees, customers, products,
research and financial status. Most of this information is now collected, processed and stored
on electronic computers and transmitted across networks to other computers.
Definitions:
Computer security is distinct from most other computing
technologies because of its somewhat elusive objective of preventing unwanted computer
behavior instead of enabling wanted computer behavior.
Computer data often travels from one computer to another, leaving the safety of its protected
physical surroundings. Once the data is out of hand, people with bad intentions could modify or
forge your data, either for amusement or for their own benefit.
Cryptography can reformat and transform our data, making it safer on its trip between
computers. The technology is based on the essentials of secret codes, augmented by modern
mathematics that protects our data in powerful ways.
1. Computer Security – generic name for the collection of tools designed to protect data and to
thwart hackers.
2. Network Security – measures to protect data during their transmission.
3. Internet Security – measures to protect data during their transmission over a collection of
interconnected networks.
Why Security?
1.3 Principles of Security (Goals)
Confidentiality, integrity, and availability form what is often referred to as the CIA triad
(Figure 1.1). These three concepts embody the fundamental security objectives for both data and
for information and computing services. FIPS PUB 199 provides a useful characterization of
these three objectives in terms of requirements and the definition of a loss of security in each
category.
To assess effectively the security needs of an organization and to evaluate and choose various
security products and policies, the manager responsible for security needs some systematic way
of defining the requirements for security and characterizing the approaches to satisfying those
requirements. The OSI security architecture was developed in the context of the OSI protocol
architecture. However, for our purposes in this chapter, an understanding of the OSI protocol
architecture is not required.
For our purposes, the OSI security architecture provides a useful, if abstract, overview of many
of the concepts. The OSI security architecture focuses on security attacks, mechanisms, and
services. These can be defined briefly as follows:
Threat: A potential for violation of security, which exists when there is a circumstance,
capability, action, or event that could breach security and cause harm. That is, a threat is a
possible danger that might exploit a vulnerability.
Attack: An assault on system security that derives from an intelligent threat; that is, an
intelligent act that is a deliberate attempt (especially in the sense of a method or technique) to
evade security services and violate the security policy of a system.
One approach is to consider three aspects of information security:
1. Security attack – Any action that compromises the security of information owned by an organization.
2. Security mechanism – A mechanism that is designed to detect, prevent, or recover from a security attack.
3. Security service – A service that enhances the security of the data processing systems and the
information transfers of an organization. The services are intended to counter security attacks, and they
make use of one or more security mechanisms to provide the service.
Security Services
1. Confidentiality: Ensures that the information in a computer system and transmitted information are
accessible only for reading by authorized parties. This type of access includes printing, displaying, and
other forms of disclosure.
2. Authentication: Ensures that the origin of a message or electronic document is correctly identified, with
an assurance that the identity is not false.
3. Integrity: Ensures that only authorized parties are able to modify computer system assets and
transmitted information. Modification includes writing, changing status, deleting, creating and delaying
or replaying of transmitted messages.
4. Non-repudiation: Requires that neither the sender nor the receiver of a message be able to deny the
transmission.
5. Access control: Requires that access to information resources may be controlled by or for the target system.
6. Availability: Requires that computer system assets be available to authorized parties when needed.
Authentication: The assurance that the communicating entity is the one that it claims to be.
Peer Entity Authentication: Used in association with a logical connection to provide confidence in
the identity of the entities connected.
Data Origin Authentication: In a connectionless transfer, provides assurance that the source of
received data is as claimed.
Access Control: The prevention of unauthorized use of a resource (i.e., this service controls
who can have access to a resource, under what conditions access can occur, and what those
accessing the resource are allowed to do).
Data Integrity:
Connection Integrity with Recovery: Provides for the integrity of all user data on a connection and
detects any modification, insertion, deletion, or replay of any data within an entire data sequence, with
recovery attempted.
Connection Integrity without Recovery: As above, but provides only detection without recovery.
Selective-Field Connection Integrity: Provides for the integrity of selected fields within the user data
of a data block transferred over a connection and takes the form of determination of whether the selected
fields have been modified, inserted, deleted, or replayed.
Connectionless Integrity: Provides for the integrity of a single connectionless data block and may take
the form of detection of data modification. Additionally, a limited form of replay detection may be
provided.
Selective-Field Connectionless Integrity: Provides for the integrity of selected fields within a single
connectionless data block; takes the form of determination of whether the selected fields have been
modified.
Nonrepudiation, Origin: Proof that the message was sent by the specified party.
Nonrepudiation, Destination: Proof that the message was received by the specified party.
Security Attacks
Security attacks can be classified in terms of Passive attacks and Active attacks as per X.800
and RFC 2828. There are four general categories of attack, which are listed below.
Interruption: An asset of the system is destroyed or becomes unavailable or unusable. This is an attack
on availability. Examples: destruction of a piece of hardware, cutting of a communication line, or
disabling of the file management system.
Interception: An unauthorized party gains access to an asset. This is an attack on confidentiality.
Examples: wiretapping to capture data in a network and illicit copying of files or programs.
Modification: An unauthorized party not only gains access to but tampers with an asset. This is an
attack on integrity.
Examples: changing values in a data file, altering a program, modifying the contents of messages
being transmitted in a network.
Fabrication: An unauthorized party inserts counterfeit objects into the system. This is an attack on
authenticity.
1. Passive attack
Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal
of the opponent is to obtain information that is being transmitted. Passive attacks are of two
types:
Release of message contents: A telephone conversation, an e-mail message and a transferred file may
contain sensitive or confidential information. We would like to prevent the opponent from learning the
contents of these transmissions.
Traffic analysis: If we had encryption protection in place, an opponent might still be able to observe
the pattern of the messages. The opponent could determine the location and identity of communicating
hosts and could observe the frequency and length of messages being exchanged. This information might
be useful in guessing the nature of communication that was taking place. Passive attacks are very
difficult to detect because they do not involve any alteration of data. However, it is feasible to prevent
the success of these attacks.
Figure 1.7: Traffic analysis
2. Active attacks
These attacks involve some modification of the data stream or the creation of a false stream.
These attacks can be classified into four categories:
Masquerade – One entity pretends to be a different entity. A masquerade attack usually includes
one of the other forms of active attack.
Figure 1.8: Masquerade
Replay – involves the passive capture of a data unit and its subsequent retransmission to produce an
unauthorized effect.
Modification of messages – Some portion of a legitimate message is altered, or messages are delayed
or reordered, to produce an unauthorized effect.
Denial of service – Prevents or inhibits the normal use or management of communication facilities.
Another form of service denial is the disruption of an entire network, either by disabling the network or
by overloading it with messages so as to degrade performance.
It is quite difficult to prevent active attacks absolutely, because to do so would require physical
protection of all communication facilities and paths at all times. Instead, the goal is to detect
them and to recover from any disruption or delays caused by them.
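One common way to detect the replay attacks described above is to tag each message with a fresh random nonce that the receiver remembers. The sketch below is a simplified illustration; the message format and the unbounded nonce cache are assumptions, and a real protocol would also bound the cache using timestamps or sequence numbers:

```python
import secrets

class ReplayGuard:
    """Reject any message whose nonce has been seen before."""

    def __init__(self):
        self.seen = set()  # nonces already accepted (unbounded here for simplicity)

    def accept(self, nonce, payload):
        if nonce in self.seen:
            return False          # replayed message: drop it
        self.seen.add(nonce)
        return True               # fresh message: safe to process payload

guard = ReplayGuard()
nonce = secrets.token_hex(16)     # sender attaches a fresh random nonce
assert guard.accept(nonce, "transfer $100") is True    # first delivery accepted
assert guard.accept(nonce, "transfer $100") is False   # captured copy rejected
```

Note that the nonce only helps if it is covered by the message's integrity check; otherwise an attacker could simply swap in a new nonce.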
Security Mechanisms
According to X.800, the security mechanisms are divided into those implemented in a specific
protocol layer and those that are not specific to any particular protocol layer or security service.
X.800 also differentiates reversible & irreversible encipherment mechanisms. A reversible
encipherment mechanism is simply an encryption algorithm that allows data to be encrypted
and subsequently decrypted, whereas irreversible encipherment mechanisms include hash
algorithms and message authentication codes, which are used in digital signature and message
authentication applications.
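The reversible/irreversible distinction can be illustrated with Python's standard library. The XOR stream below is a toy stand-in for a real cipher, used only to show invertibility, while SHA-256 stands in for the irreversible case:

```python
import hashlib
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy reversible encipherment: XOR each byte with a repeating key.
    Applying the same operation twice with the same key restores the data."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"meet at noon"
key = b"secret"

ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message   # reversible: decryption recovers the data

digest = hashlib.sha256(message).hexdigest()    # irreversible: no key exists that
assert len(digest) == 64                        # turns the digest back into the message
```

A repeating-key XOR is easily broken in practice; it is shown here only because it makes the "encrypt then decrypt" round trip visible in two lines.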
Specific Security Mechanisms:
These are incorporated into the appropriate protocol layer in order to provide some of the OSI
security services:
Encipherment: It refers to the process of applying mathematical algorithms for converting data into a
form that is not intelligible. This depends on algorithm used and encryption keys.
Digital Signature: Data appended to, or a cryptographic transformation of, a data unit that
allows a recipient to prove the source and integrity of the data unit and protect against forgery.
Access Control: A variety of techniques used for enforcing access permissions to the system resources.
Data Integrity: A variety of mechanisms used to assure the integrity of a data unit or stream of data
units.
Authentication Exchange: A mechanism intended to ensure the identity of an entity by means of
information exchange.
Traffic Padding: The insertion of bits into gaps in a data stream to frustrate traffic analysis attempts.
Routing Control: Enables selection of particular physically secure routes for certain data and allows
routing changes once a breach of security is suspected.
Notarization: The use of a trusted third party to assure certain properties of a data exchange.
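The "authentication exchange" mechanism in the list above is often realized as a challenge-response protocol: the verifier sends a random challenge, and the claimant proves knowledge of a shared key by returning a MAC over it. A minimal sketch, in which the pre-shared key and the framing are invented for illustration:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"pre-shared secret"  # assumed to have been distributed securely

def challenge():
    """Verifier sends a fresh random challenge (prevents replay of old responses)."""
    return secrets.token_bytes(16)

def respond(key, chal):
    """Claimant proves knowledge of the key without ever revealing it."""
    return hmac.new(key, chal, hashlib.sha256).digest()

def verify(key, chal, response):
    """Verifier recomputes the MAC and compares in constant time."""
    return hmac.compare_digest(respond(key, chal), response)

chal = challenge()
assert verify(SHARED_KEY, chal, respond(SHARED_KEY, chal))        # genuine party passes
assert not verify(SHARED_KEY, chal, respond(b"wrong key", chal))  # impostor fails
```

Because the challenge is fresh each time, an eavesdropper who records one exchange cannot reuse the response later, tying this mechanism back to the replay defenses discussed earlier.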
Pervasive Security Mechanisms:
These are not specific to any particular OSI security service or protocol layer.
Trusted Functionality: That which is perceived to be correct with respect to some criteria
(e.g., as established by a security policy).
Security Level: The marking bound to a resource (which may be a data unit) that names or designates
the security attributes of that resource.
Event Detection: It is the process of detecting all the events related to network security.
Security Audit Trail: Data collected and potentially used to facilitate a security audit, which is an
independent review and examination of system records and activities.
Security Recovery: It deals with requests from mechanisms, such as event handling and management
functions, and takes recovery actions.
Data is transmitted over a network between two communicating parties, who must cooperate for
the exchange to take place. A logical information channel is established by defining a route
through the internet from source to destination by use of communication protocols by the two
parties.
Whenever an opponent presents a threat to the confidentiality or authenticity of information,
security aspects come into play. Two components are present in almost all security-providing
techniques: a security-related transformation on the information to be sent, and some secret
information shared by the two principals.
A trusted third party may be needed to achieve secure transmission. It is responsible for
distributing the secret information to the two parties, while keeping it away from any opponent.
It also may be needed to settle disputes between the two parties regarding authenticity of a
message transmission.
The general model shows that there are four basic tasks in designing a particular security
service:
1. Design an algorithm for performing the security-related transformation. The algorithm should
be such that an opponent cannot defeat its purpose.
2. Generate the secret information to be used with the algorithm.
3. Develop methods for the distribution and sharing of the secret information.
4. Specify a protocol to be used by the two principals that makes use of the security algorithm
and the secret information to achieve a particular security service.
Various other threats to information systems, such as unwanted access, still exist. The existence of
hackers attempting to penetrate systems accessible over a network remains a concern. Another
threat is the placement of some logic in a computer system that affects various applications and
utility programs. This inserted code presents two kinds of threats:
Information access threats intercept or modify data on behalf of users who should not have
access to that data.
Service threats exploit service flaws in computers to inhibit use by legitimate users.
Viruses and worms are two examples of software attacks inserted into a system by means of
a disk or across the network. The security mechanisms needed to cope with unwanted
access fall into two broad categories:
A gatekeeper function, which includes password-based login procedures that grant access
only to authorized users, and screening logic that detects and rejects worms, viruses, and
similar attacks.
An internal control that monitors internal system activity, analyzes stored information,
and detects the presence of unauthorized users or intruders.
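The password-based gatekeeper described above should never store passwords in the clear; a common approach stores only a random salt and a deliberately slow salted hash. The sketch below uses PBKDF2 from Python's standard library; the iteration count and parameters are illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def enroll(password: str):
    """Store only a random salt and a slow salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def login(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash for the attempt and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = enroll("correct horse battery staple")
assert login("correct horse battery staple", salt, digest)   # gatekeeper admits
assert not login("guess123", salt, digest)                   # gatekeeper rejects
```

The salt ensures that identical passwords hash differently, and the high iteration count makes offline guessing expensive even if the stored digests leak.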
Enterprise security is about building systems to remain dependable in the face of malice, error,
or mischance. As a discipline, it focuses on the tools, processes, and methods needed to design,
implement, and test complete systems, and to adapt existing systems as their environment
evolves. Enterprise security requires cross-disciplinary expertise, ranging from cryptography
and computer security through hardware tamper-resistance and formal methods to knowledge
of economics, applied psychology, organizations and the law. System engineering skills, from
business process analysis through software engineering to evaluation and testing, are also
important; but they are not sufficient, as they deal only with error and mischance rather than
malice. Many security systems have critical assurance requirements. Their failure may
endanger human life and the environment (as with nuclear safety and control systems), do
serious damage to major economic infrastructure (cash machines and other bank systems),
endanger personal privacy (medical record systems), undermine the viability of whole business
sectors (pay-TV), and facilitate crime (burglar and car alarms). Even the perception that a
system is more vulnerable than it really is (paying with a credit card over the Internet) can
significantly hold up economic development. The conventional view is that while software
engineering is about ensuring that certain things happen ('John can read this file'), security is
about ensuring that they don't ('The Chinese government can't read this file'). Reality is much
more complex. Security requirements differ greatly from one system to another. One typically
needs some combination of user authentication, transaction integrity and accountability,
fault-tolerance, message secrecy, and covertness. But many systems fail because their designers
protect the wrong things, or protect the right things but in the wrong way.
A Framework
Good enterprise security requires four things to come together. There's policy: what you're
supposed to achieve. There's mechanism: the ciphers, access controls, hardware tamper-resistance,
and other machinery that you assemble in order to implement the policy. There's
assurance: the amount of reliance you can place on each particular mechanism. Finally, there's
incentive: the motive that the people guarding and maintaining the system have to do their job
properly, and also the motive that the attackers have to try to defeat your policy. All of these
interact (see Figure 1.14).
Figure 1.14: Enterprise Security Analysis Framework
As an example, let's think of the 9/11 terrorist attacks. The hijackers' success in getting knives
through airport security was not a mechanism failure but a policy one; at that time, knives with
blades up to three inches were permitted, and the screeners did their task of keeping guns and
explosives off as far as we know. Policy has changed since then: first to prohibit all knives,
then most weapons (baseball bats are now forbidden but whiskey bottles are OK); it's
flip-flopped on many details (butane lighters forbidden then allowed again). Mechanism is weak,
because of things like composite knives and explosives that don't contain nitrogen. Assurance
is always poor; many tons of harmless passengers' possessions are consigned to the trash each
month, while well below half of all the weapons taken through screening (whether accidentally
or for test purposes) are picked up. Serious analysts point out major problems with priorities.
For example, the TSA has spent $14.7 billion on aggressive passenger screening, which is
fairly ineffective, while $100m spent on reinforcing cockpit doors would remove most of the
risk. The President of the Airline Pilots Security Alliance notes that most ground staff aren't
screened, and almost no care is taken to guard aircraft parked on the ground overnight. As most
airliners don't have locks, there's not much to stop a bad guy wheeling steps up to a plane and
placing a bomb on board; if he had piloting skills and a bit of chutzpah, he could file a flight
plan and make off with it. Yet screening staff and guarding planes are just not a priority. Why
are such poor policy choices made? Quite simply, the incentives on the decision makers favour
visible controls over effective ones. The result is what Bruce Schneier calls 'security theatre':
measures designed to produce a feeling of security rather than the reality. Most players also
have an incentive to exaggerate the threat from terrorism: politicians to scare up the vote,
journalists to sell more papers, companies to sell more equipment, government officials to build
their empires, and security academics to get grants. The upshot of all this is that most of the
damage done by terrorists to democratic countries comes from the overreaction. Fortunately,
electorates figure this out over time. In Britain, where the IRA bombed us intermittently for a
generation, the public reaction to the 7/7 bombings was mostly a shrug.
1.8 Cyber Defense
Definition – What does Cyber Defense mean?
Cyber defense is a computer network defense mechanism that includes response to actions,
critical infrastructure protection, and information assurance for organizations, government
entities, and other possible networks. Cyber defense focuses on preventing, detecting, and
providing timely responses to attacks or threats so that no infrastructure or information is
tampered with. With the growth in volume as well as complexity of cyber-attacks, cyber
defense is essential for most entities in order to protect sensitive information as well as to
safeguard assets. With the understanding of the specific environment, cyber defense analyzes
the different threats possible to the given environment. It then helps in devising and driving the
strategies necessary to counter the malicious attacks or threats. A wide range of different
activities is involved in cyber defense, both for protecting the concerned entity and for
responding rapidly to an evolving threat landscape. These could include reducing the appeal of
the environment to possible attackers, understanding the critical locations and sensitive
information, enacting preventative controls to make attacks expensive, and building attack
detection, reaction, and response capabilities. Cyber defense also carries out
technical analysis to identify the paths and areas the attackers could target. Cyber defense
provides the much-needed assurance to run processes and activities free from worries about
threats. It helps an organization apply its security strategy and resources in the most effective
fashion and improves the effectiveness of security resources and security spending, especially
in critical locations.
Cyber Defense protects your most important business assets against attack.
By aligning the knowledge of the threats you face with an understanding of your environment,
you are able to maximize the effectiveness of your security spend and target your resources at
the critical locations. All of this is driven from your business strategy by identifying where it
may be at risk from a range of threats from a malicious insider right through to Advanced
Persistent Threats (APT). Cyber Defense covers a wide range of activities that are essential in
enabling your business to protect itself against attack and respond to a rapidly evolving threat
landscape. This will include cyber deterrents to reduce your appeal to the attackers,
preventative controls that require their attacks to be more costly, attack detection capability to
spot when they are targeting you and reaction and response capabilities to repel them. Typically
a Cyber Defense engagement will include a range of services that are aimed at long term
assurance of your business, from the understanding of how security impacts your business
strategy and priorities, through to training and guidance that enables your employees to
establish the right security culture. At the same time the engagement will include specialist
technical analysis and investigation to ensure that you can map out and protect the paths the
attackers will use to compromise your most sensitive assets. These activities will also enable
you to obtain evidence of any threats that may already have breached your defenses, and
provide the capability to manage or remove them as needed. Using this blend of services,
Cyber Defense provides the assurances you need to run your business free from worry about
the threats that it faces and to ensure that your security strategy utilizes your resources in the
most effective manner.
Definitions
Many of the terms used in Enterprise security are straightforward, but some are misleading or
even controversial. There are more detailed definitions of technical terms in the relevant
chapters, which you can find using the index. The first thing we need to clarify is what we
mean by 'system'. In practice, this can denote:
Enterprise Security Architecture: Establishing the Business Context
"operational excellence". This business driver can be distilled into relevant attributes that
require assurance to satisfy the overarching business driver. Similarly, the online retailer may
have a strategic objective of being "customer focused", as expressed in their vision statement
to provide a superior online shopping experience. Business attributes can generally be
identified through an understanding of the business drivers that are set by the top levels of an
organization. Security architects will often conduct structured interviews with senior
management in order to identify business attributes by determining the essence of what is
conveyed by high level business drivers. In the example of the business driver labeled
"operational excellence", the executives might be referring to the availability, reliability, and
safety of their operations and resources. In this case, the business attributes defined are
"available", "safe", and "reliable". Each attribute is then linked to the business driver it
supports. This pairing of a business driver and attribute results in the creation of a proxy asset.
Again, building on our example, a sample proxy asset is "operational excellence" with the
attribute of "available". Each proxy asset is owned by the organization and is assessed as
having value to them. The fact that the proxy asset has value sets the requirement that it should
be protected. The value of these proxy assets is difficult to define given that they are often
intangible and exist at a very high level. Despite being unable to assign a monetary value to a
proxy asset, it is still possible to identify risks that may act against the asset. Our online retailer
may have attributes of "confidential", "reputable", and "error-free". An inventory of proxy
assets can be maintained by the security architect and will be considered as key assets to the
organization. This is later used to conduct a business threat and risk assessment to identify risks
to the business. It is through a business threat and risk assessment that the sometimes-
competing aspects of confidentiality, integrity, and availability can be reconciled. When the
overall objective and needs of a business are understood, through proxy assets, then impact can
be understood as it relates to confidentiality, integrity, and availability. Understanding of the
business helps prioritize which of these elements is most important, and which aspects of the
business are most in need of protection.
Example 1 – A Bank
Banks operate a surprisingly large range of security-critical computer systems.
1. The core of a bank's operations is usually a branch bookkeeping system. This keeps customer account
master files plus a number of journals that record the day's transactions. The main threat to this system
is the bank's own staff; about one percent of bankers are fired each year, mostly for petty dishonesty
(the average theft is only a few thousand dollars). The main defense comes from bookkeeping
procedures that have evolved over centuries. For example, each debit against one account must be
matched by an equal and opposite credit against another; so money can only be moved within a bank,
never created or destroyed. In addition, large transfers of money might need two or three people to
authorize them. There are also alarm systems that look for unusual volumes or patterns of transactions,
and staff are required to take regular vacations during which they have no access to the bank's premises
or systems.
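The double-entry invariant described above (every debit matched by an equal and opposite credit, so money is moved but never created) can be sketched in a few lines. This is an illustrative toy, not a real banking system; the `Ledger` class and account names are invented here.

```python
# Toy double-entry ledger: a transfer applies a debit and a matching
# credit together, so the total across all accounts never changes.

class Ledger:
    def __init__(self, accounts):
        self.balances = {name: 0 for name in accounts}

    def transfer(self, debit_acct, credit_acct, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        # Debit and credit are posted as one unit: money moves,
        # but is never created or destroyed.
        self.balances[debit_acct] -= amount
        self.balances[credit_acct] += amount

    def total(self):
        return sum(self.balances.values())

ledger = Ledger(["alice", "bob"])
ledger.transfer("alice", "bob", 500)
```

Whatever sequence of transfers is posted, `ledger.total()` stays constant, which is the property auditors rely on.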
2. One public face of the bank is its automatic teller machines. Authenticating transactions based on a
customer's card and personal identification number in such a way as to defend against both outside and
inside attack is harder than it looks! There have been many epidemics of 'phantom withdrawals' in
various countries when local villains (or bank staff) have found and exploited loopholes in the system.
Automatic teller machines are also interesting as they were the first large scale commercial use of
cryptography, and they helped establish a number of crypto standards.
3. Another public face is the bank‘s website. Many customers now do more of their routine business, such
as bill payments and transfers between savings and checking accounts, online rather than at a branch.
Bank websites have recently come under heavy attack from phishing: customers are invited to enter
their passwords into bogus websites. The 'standard' internet security mechanisms designed
in the 1990s, such as SSL/TLS, turned out to be ineffective once capable motivated opponents started
attacking the customers rather than the bank. Phishing is a fascinating Enterprise security problem
mixing elements from authentication, usability, psychology, operations and economics.
4. Behind the scenes are a number of high-value messaging systems. These are used to move large sums
of money (whether between local banks or between banks internationally); to trade in securities; to issue
letters of credit and guarantees; and so on. An attack on such a system is the dream of the sophisticated
white-collar criminal. The defense is a mixture of bookkeeping procedures, access controls, and
cryptography.
5. The bank's branches will often appear to be large, solid and prosperous, giving customers the
psychological message that their money is safe. This is theatre rather than reality: the stone façade gives
no real protection. If you walk in with a gun, the tellers will give you all the cash you can see; and if
you break in at night, you can cut into the safe or strong room in a couple of minutes with an abrasive
wheel. The effective controls these days center on the alarm systems, which are in constant
communication with a security company's control center. Cryptography is used to prevent a robber or
burglar manipulating the communications and making the alarm appear to say 'all's well' when it isn't.
Example 2 – A Military Base
1. Some of the most sophisticated installations are the electronic warfare systems whose goals include
trying to jam enemy radars while preventing the enemy from jamming yours. This area of information
warfare is particularly instructive because for decades, well-funded research labs have been developing
sophisticated countermeasures and so on with a depth, subtlety and range of deception strategies that
are still not found elsewhere. As I write, in 2007, a lot of work is being done on adapting jammers to
disable improvised explosive devices that make life hazardous for allied troops in Iraq. Electronic
warfare has given many valuable insights: issues such as spoofing and service-denial attacks were live
there long before bankers and bookmakers started having problems with bad guys targeting their
websites.
2. Military communication systems have some interesting requirements. It is often not sufficient to just
encipher messages: the enemy, on seeing traffic encrypted with somebody else‘s keys, may simply
locate the transmitter and attack it. Low-probability-of-intercept (LPI) radio links are one answer; they
use a number of tricks that are now being adopted in applications such as copyright marking. Covert
communications are also important in some privacy applications, such as in defeating the Internet
censorship imposed by repressive regimes.
3. Military organizations have some of the biggest systems for logistics and inventory management, which
differ from commercial systems in having a number of special assurance requirements. For example,
one may have a separate stores management system at each different security level: a general system
for things like jet fuel and boot polish, plus a second secret system for stores and equipment whose
location might give away tactical intentions. (This is very like the businessman who keeps separate sets
of books for his partners and for the tax man, and can cause similar problems for the poor auditor.)
There may also be intelligence systems and command systems with even higher protection
requirements. The general rule is that sensitive information may not flow down to less restrictive
classifications. So you can copy a file from a Secret stores system to a Top Secret command system,
but not vice versa. The same rule applies to intelligence systems which collect data using wiretaps:
information must flow up to the intelligence analyst from the target of investigation, but the target must
not know which of his communications have been intercepted. Managing multiple systems with
information flow restrictions is a hard problem and has inspired a lot of research. Since 9/11, for
example, the drive to link up intelligence systems has led people to invent search engines that can index
material at multiple levels and show users only the answers they are cleared to know.
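The classification rule above, that information may flow up to a higher classification but never down, can be sketched as a simple check. The level names and ordering are the conventional ones; the function name and data layout are ours for illustration.

```python
# "No flow downwards": a copy is permitted only when the destination
# system is at an equal or higher classification than the source.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def copy_allowed(source_level, dest_level):
    """Permit a copy only if it does not move information to a lower level."""
    return LEVELS[dest_level] >= LEVELS[source_level]
```

So a file may go from a Secret stores system to a Top Secret command system, but `copy_allowed("Top Secret", "Secret")` is refused.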
4. The particular problems of protecting nuclear weapons have given rise over the last two generations to
a lot of interesting security technology, ranging from electronic authentication systems that prevent
weapons being used without the permission of the national command authority, through seals and alarm
systems, to methods of identifying people with a high degree of certainty using biometrics such as iris
patterns. The civilian security engineer can learn a lot from all this. For example, many early systems
for inserting copyright marks into digital audio and video, which used ideas from spread-spectrum radio,
were vulnerable to resynchronization attacks that are also a problem for some spread-spectrum systems.
Another example comes from munitions management. There, a typical system enforces rules such as
'Don't put explosives and detonators in the same truck'. Such techniques can be recycled in food
logistics, where hygiene rules forbid raw and cooked meats being handled together.
Example 3 – A Hospital
From soldiers and food hygiene we move on to healthcare. Hospitals have a number of
interesting protection requirements, mostly to do with patient safety and privacy.
1. Patient record systems should not let all the staff see every patient's record, or privacy violations can
be expected. They need to implement rules such as 'nurses can see the records of any patient who has
been cared for in their department at any time during the previous 90 days'. This can be hard to do with
traditional computer security mechanisms, as roles can change (nurses move from one department to
another) and there are cross-system dependencies (if the patient records system ends up relying on the
personnel system for access control decisions, then the personnel system may just have become critical
for safety, for privacy, or for both).
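The 90-day rule quoted above is simple to state but illustrates how access decisions depend on data, not just identity. A minimal sketch, assuming (our invention) that each patient's care history is a list of (department, date) episodes:

```python
# Sketch of the rule: a nurse may view a patient's record only if that
# patient was cared for in the nurse's department within 90 days.

from datetime import date, timedelta

def nurse_may_view(nurse_department, patient_episodes, today):
    """patient_episodes: list of (department, care_date) tuples."""
    cutoff = today - timedelta(days=90)
    return any(dept == nurse_department and care_date >= cutoff
               for dept, care_date in patient_episodes)

episodes = [("cardiology", date(2024, 1, 10)),
            ("oncology",   date(2023, 6, 1))]
```

Note that the answer changes as `today` advances, which is exactly what makes such rules awkward for static permission mechanisms.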
2. Patient records are often anonymized for use in research, but this is hard to do well. Simply encrypting
patient names is usually not enough, as an enquiry such as 'show me all records of 59-year-old males
who were treated for a broken collarbone on September 15th 1966' would usually be enough to find the
record of a politician who was known to have sustained such an injury at college. But if records cannot
be anonymized properly, then much stricter rules have to be followed when handling the data, and this
increases the cost of medical research.
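The re-identification problem above can be shown concretely: even with the name field encrypted, a query on the remaining quasi-identifiers can isolate a single record. The records and encrypted-name placeholders below are invented.

```python
# Names are "protected", but age + sex + injury + date still pick out
# exactly one record, so the patient is re-identified.

records = [
    {"name_enc": "x91..", "age": 59, "sex": "M",
     "injury": "broken collarbone", "date": "1966-09-15"},
    {"name_enc": "k27..", "age": 59, "sex": "M",
     "injury": "fractured wrist", "date": "1966-09-15"},
    {"name_enc": "p44..", "age": 34, "sex": "F",
     "injury": "broken collarbone", "date": "1971-02-02"},
]

matches = [r for r in records
           if r["age"] == 59 and r["sex"] == "M"
           and r["injury"] == "broken collarbone"
           and r["date"] == "1966-09-15"]
```

The query returns exactly one record despite the encrypted name, which is why proper anonymization must consider combinations of attributes, not just direct identifiers.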
3. Web-based technologies present interesting new assurance problems in healthcare. For example, as
reference books such as directories of drugs move online, doctors need assurance that life-critical data,
such as the figures for dosage per body weight, are exactly as published by the relevant authority, and
have not been mangled in some way. Another example is that as doctors start to access patients' records
from home or from laptops or even PDAs during house calls, suitable electronic authentication and
encryption tools are starting to be required.
4. New technology can introduce risks that are just not understood. Hospital administrators understand the
need for backup procedures to deal with outages of power, telephone service and so on; but medical
practice is rapidly coming to depend on the net in ways that are often not documented. For example,
hospitals in Britain are starting to use online radiology systems: X-rays no longer travel from the X-ray
machine to the operating theatre in an envelope, but via a server in a distant town. So a network failure
can stop doctors operating just as much as a power failure. All of a sudden, the Internet turns into a
safety-critical system, and denial-of-service attacks might kill people.
Example 4 – The Home
1. Many families use some of the systems we've already described. You may use a web-based electronic
banking system to pay bills, and in a few years you may have encrypted online access to your medical
records. Your burglar alarm may send an encrypted 'all's well' signal to the security company every
few minutes, rather than waking up the neighborhood when something happens.
2. Your car probably has an electronic immobilizer that sends an encrypted challenge to a radio
transponder in the key fob; the transponder has to respond correctly before the car will start. This makes
theft harder and cuts your insurance premiums. But it also increases the number of car thefts from
homes, where the house is burgled to get the car keys. The really hard edge is a surge in car-jacking:
criminals who want a getaway car may just take one at gunpoint.
3. Early mobile phones were easy for villains to 'clone': users could suddenly find their bills inflated by
hundreds or even thousands of dollars. The current GSM digital mobile phones authenticate themselves
to the network by a cryptographic challenge-response protocol similar to the ones used in car door locks
and immobilizers.
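The challenge-response idea behind immobilizers and GSM authentication can be sketched as follows: the verifier sends a fresh random challenge, and the token proves knowledge of a shared key by returning a keyed MAC of it, so a recorded old response is useless. Real systems use dedicated algorithms; HMAC-SHA256 and the key value here are purely illustrative.

```python
# Challenge-response sketch: the car (or network) issues a random
# challenge; the key fob (or SIM) answers with a keyed MAC of it.

import hmac, hashlib, os

SHARED_KEY = b"secret-shared-at-manufacture"   # invented key

def respond(key, challenge):
    """What the key fob / SIM computes."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    """What the car / network checks."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)       # fresh randomness defeats replay
response = respond(SHARED_KEY, challenge)
```

Because each challenge is fresh, replaying yesterday's response against today's challenge fails verification.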
4. Satellite TV set-top boxes decipher movies so long as you keep paying your subscription. DVD players
use copy control mechanisms based on cryptography and copyright marking to make it harder to copy
disks (or to play them outside a certain geographic area). Authentication protocols can now also be used
to set up secure communications on home networks (including WiFi, Bluetooth and Home Plug).
5. In many countries, households who can't get credit can get prepayment meters for electricity and gas,
which they top up using a smartcard or other electronic key which they refill at a local store. Many
universities use similar technologies to get students to pay for photocopier use, washing machines and
even soft drinks.
6. Above all, the home provides a haven of physical security and seclusion. Technological progress will
impact this in many ways. Advances in locksmithing mean that most common house locks can be
defeated easily; does this matter? Research suggests that burglars aren't worried by locks as much as
by occupants, so perhaps it doesn't matter much; but then maybe alarms will become more important
for keeping intruders at bay when no-one's at home. Electronic intrusion might over time become a
bigger issue, as more and more devices start to communicate with central services. The security of your
home may come to depend on remote systems over which you have little control.
Chapter Two
2. Brief Overview of Commercial Issues
2.1 Introduction
Human beings have always had two inherent needs: to communicate and share information, and to
communicate selectively. These two needs gave rise to the art of coding messages in such a way that only
the intended people could have access to the information. Unauthorized people could not extract any
information, even if the scrambled messages fell into their hands. The art and science of concealing
messages to introduce secrecy into information security is recognized as cryptography. The
word 'cryptography' was coined by combining two Greek words: 'kryptos', meaning hidden, and
'graphein', meaning writing.
What is Cryptography?
Cryptography is about constructing and analyzing protocols that prevent third parties or the
public from reading private messages; various aspects in information security such as data
confidentiality, data integrity, authentication, and non-repudiation are central to modern
cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics,
computer science, electrical engineering, communication science, and physics. Applications of
cryptography include electronic commerce, chip-based payment cards, digital
currencies, computer passwords, and military communications. Cryptography is associated
with the process of converting ordinary plain text into unintelligible text and vice-versa. It is a
method of storing and transmitting data in a particular form so that only those for whom it is
intended can read and process it. Cryptography not only protects data from theft or alteration,
but can also be used for user authentication.
Earlier, cryptography was effectively synonymous with encryption, but nowadays cryptography
is mainly based on mathematical theory and computer-science practice.
Modern cryptography is concerned with:
1. Symmetric-key cryptography
2. Public-key cryptography
3. Hash functions
Symmetric-key Cryptography: Both the sender and receiver share a single key. The sender uses
this key to encrypt the plaintext and sends the ciphertext to the receiver. On the other side, the
receiver applies the same key to decrypt the message and recover the plaintext.
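The symmetric idea, one shared key for both directions, can be shown with a deliberately simple toy cipher. This XOR construction (keystream derived from the key with SHA-256) is for illustration only and offers no real security; real systems use ciphers such as AES.

```python
# Toy symmetric cipher: the same shared key encrypts and decrypts,
# because XOR with the same keystream undoes itself.

import hashlib

def keystream(key, length):
    """Derive a deterministic byte stream from the shared key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key, data):
    """Encryption and decryption are the same XOR operation."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

ciphertext = xor_cipher(b"shared key", b"attack at dawn")
plaintext = xor_cipher(b"shared key", ciphertext)
```

Applying `xor_cipher` twice with the same key recovers the original message, which is exactly the symmetric property described above.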
Public-Key Cryptography: This is the most revolutionary concept of the last 300-400 years. In
public-key cryptography, two related keys (a public key and a private key) are used. The public key may
be freely distributed, while its paired private key remains a secret. The public key is used for
encryption and the private key for decryption.
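The public/private split can be illustrated with textbook RSA using tiny numbers. Anyone who knows the public pair (n, e) can encrypt, but only the holder of the private exponent d can decrypt. Real RSA needs large primes and padding; these small values are for demonstration only.

```python
# Toy textbook RSA with the classic small parameters.

p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def encrypt(m):
    """Public operation: anyone may perform it."""
    return pow(m, e, n)

def decrypt(c):
    """Private operation: requires the secret d."""
    return pow(c, d, n)

c = encrypt(42)
```

Note the asymmetry: distributing (n, e) lets the world send you messages, while d never leaves your possession.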
Hash Functions: No key is used in this algorithm. A fixed-length hash value is computed from
the plaintext, in such a way that the contents of the plaintext cannot be recovered from it. Hash
functions are also used by many operating systems to protect stored passwords. (We will discuss
this later in chapter 4.)
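The password use mentioned above can be sketched as follows: the system stores a salted one-way hash of each password rather than the password itself, and checks a login attempt by recomputing the hash. PBKDF2 from the Python standard library is used here as one reasonable choice; the iteration count and example password are illustrative.

```python
# Salted one-way password storage: only the (salt, digest) pair is
# kept, and a login attempt is checked by recomputing the digest.

import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids timing leaks.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
```

Because the hash is one-way, a stolen password file does not directly reveal the passwords, and the per-user salt stops attackers precomputing a single lookup table.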
E-commerce security is a part of the information security framework and is specifically
applied to the components that affect e-commerce, including computer security, data
security, and the wider realms of the information security framework. E-commerce security
has its own particular nuances and is one of the most visible security components, affecting
end users through their daily payment interactions with business. E-commerce security is the
protection of e-commerce assets from unauthorized access, use, alteration, or destruction.
The dimensions of e-commerce security are integrity, non-repudiation, authenticity, confidentiality,
privacy, and availability. E-commerce offers the banking industry great opportunity, but also
creates a set of new risks and vulnerabilities, such as security threats. Information security,
therefore, is an essential management and technical requirement for any efficient and effective
payment transaction activity over the internet. Still, its definition is a complex endeavour due
to constant technological and business change, and it requires a coordinated match of
algorithms and technical solutions.
Ecommerce Security Issues
E-commerce security is the protection of e-commerce assets from unauthorized access, use,
alteration, or destruction. While security features do not guarantee a secure system, they are
necessary to build a secure system. Key security features include:
Authentication: Verifies that you are who you say you are. It ensures that only you are allowed
to log on to your Internet banking account.
Authorization: Allows only you to manipulate your resources in specific ways. This prevents
you from increasing the balance of your account or deleting a bill.
Encryption: Deals with information hiding. It ensures that others cannot spy on your Internet
banking transactions.
Auditing: Keeps a record of operations. Merchants use auditing to prove that you bought
specific merchandise.
Integrity: prevention of unauthorized data modification.
Nonrepudiation: prevention of any party reneging on an agreement after the fact.
Availability: prevention of data delays or removal.
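The integrity dimension listed above can be made concrete with a keyed message authentication code: attaching a MAC to a message lets the receiver detect any unauthorized modification. HMAC-SHA256 from the standard library is used; the key and order message are invented for illustration.

```python
# Integrity via a keyed MAC: a tampered message fails verification.

import hmac, hashlib

KEY = b"merchant-shared-key"   # illustrative shared secret

def protect(message):
    """Sender attaches a MAC tag to the message."""
    return message, hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message, tag):
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

order, tag = protect(b"pay 100 to account 42")
tampered = b"pay 900 to account 42"
```

Changing even one byte of the order invalidates the tag, so undetected modification requires knowledge of the key.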
E-Commerce Security Tools
SECURITY THREATS
A DOS (denial of service) attack involves hackers placing a software agent onto a third-party system
and setting it off to send requests to an intended target. A DDOS (distributed denial of service)
attack involves hackers placing software agents onto a number of third-party systems and setting them
off to simultaneously send requests to an intended target.
Web security is also known as "cyber security". It basically means protecting a website or
web application by detecting, preventing and responding to cyber threats.
Websites and web applications are just as prone to security breaches as physical homes, stores,
and government locations. Unfortunately, cybercrime happens every day, and great web
security measures are needed to protect websites and web applications from becoming
compromised.
That's exactly what web security does: it is a system of protection measures and protocols
that can protect your website or web application from being hacked or entered by unauthorized
personnel. This integral division of Information Security is vital to the protection of websites,
web applications, and web services. Anything that is applied over the Internet should have
some form of web security to protect it.
The web poses some additional security troubles because:
o very many different computers are involved in any networked environment;
o the fundamental protocols of the Internet were not designed with security in mind; and
o the physical infrastructure of the Internet is not owned or controlled by any one organization,
and no guarantees can be made concerning the integrity and security of any part of the Internet.
Unfortunately, a web-based system is often advertised as “secure” merely because the web
server uses SSL encryption to protect portions of the site. As we’ll soon see, there is a great
deal more to the story than that.
There are a lot of factors that go into web security and web protection. Any website or
application that is secure is surely backed by different types of checkpoints and techniques for
keeping it safe.
There are a variety of security standards that must be followed at all times, and these standards
are implemented and highlighted by the OWASP. Most experienced web developers from top
cyber security companies will follow the standards of the OWASP as well as keep a close eye
on the Web Hacking Incident Database to see when, how, and why different people are hacking
different websites and services.
Essential steps in protecting web apps from attacks include applying up-to-date encryption,
setting proper authentication, continuously patching discovered vulnerabilities, and avoiding data
theft through secure software development practices. The reality is that clever attackers may
be competent enough to find flaws even in a fairly robust security environment, and so a holistic
security strategy is advised.
Available Technology
There are different types of technologies available for maintaining the best security standards.
Some popular technical solutions for testing, building, and preventing threats include:
Likelihood of Threat
Your website or web application's security depends on the level of protection tools that have
been equipped and tested on it. There are a few major threats to security, which are the most
common ways in which a website or web application becomes hacked. Some of the top
vulnerabilities for all web-based services include:
o SQL injection
o Password breach
o Cross-site scripting
o Data breach
o Remote file inclusion
o Code injection
Preventing these common threats is the key to making sure that your web-based service is
practicing the best methods of security.
There are two big defense strategies that a developer can use to protect their website or web
application. The two main methods are as follows:
1. Resource assignment – By assigning all necessary resources to causes that are dedicated to
alerting the developer about new web security issues and threats, the developer can receive a
constant and updated alert system that will help them detect and eradicate any threats before
security is officially breached.
2. Web scanning – There are several web scanning solutions already in existence that are
available for purchase or download. These solutions, however, are only good for known
vulnerability threats – seeking unknown threats can be much more complicated. This method
can protect against many breaches, however, and is proven to keep websites safe in the long
run. Web security also protects visitors from threats such as the following:
Stolen data: Cyber-criminals frequently steal visitor data stored on a website, such as
email addresses, payment information, and other details.
Phishing schemes: Phishing is not just related to email; hackers may design a
layout that looks exactly like the real website to trick users into giving up their
sensitive details.
Session hijacking: Certain cyber attackers can take over a user's session and compel them to
take undesired actions on a site.
Malicious redirects: Some attacks redirect visitors from the site they visited to a
malicious website.
SEO spam: Hackers can plant unusual links, pages, and comments on a site to
distract your visitors and drive traffic to malicious websites.
Thus, web security is easy to install and it also helps businesses make their websites
safe and secure. A web application firewall prevents automated attacks that usually target small
or lesser-known websites. These attacks are carried out by malicious bots or malware that
automatically scan for vulnerabilities they can misuse, or cause DDoS attacks that slow down
or crash your website.
Web security is extremely important, especially for websites or web applications that
deal with confidential, private, or protected information. Security methods are evolving to
match the different types of vulnerabilities that come into existence.
Many “layers” must work in concert to produce a functioning web-based system. Each layer
has its own security vulnerabilities, and its own procedures and techniques for coping with
these vulnerabilities.
We’ll examine each such layer in turn, proceeding from the hardware (furthest from the end
user) to the web browser (closest to the end user).
Keep in mind that many attacks take advantage of weaknesses in multiple layers. Even if one
such weakness does not expose the service to attack, that weakness in concert with others can
be used for nefarious purposes. The complexity of these layers’ interaction only makes the job
of the security professional that much more difficult.
Hardware
Physical access to computer hardware gives even a slightly skilled person total control of that
hardware. Without physical security to protect hardware (i.e. doors that lock), nothing else
about a computer system can be called secure.
Of course, there are many ways in which malicious humans can attack hardware:
o using operating system installation floppies and CDs to circumvent normal OS access control
to devices and hard disk contents;
o physical removal or destruction of the hardware;
o electromagnetic interference, including nuclear EMP munitions and e-bombs;
o direct eavesdropping technologies such as keyboard loggers and network sniffers; and
o indirect eavesdropping technologies such as van Eck phreaking (reconstituting the display of
a computer monitor in a remote location by gathering the emitted radiation from that monitor).
Hardware is also most susceptible to natural occurrences:
o water and humidity;
o smoke and dust;
o heat and fire;
o lightning and other electrical phenomena;
o radiation, particularly alpha particles, which can flip memory bits;
o flora and fauna, especially circuit board-eating molds and insects; and
o weather and geological effects such as tornados, hurricanes, and earthquakes.
Securing hardware is usually a matter of installing locking doors and electromagnetic shielding,
deploying redundant hardware in remote locations, installing temperature/moisture/air quality
controls and filters, performing and checking filesystem backups, and so forth.
Networking hardware is susceptible to all of the above problems, but often must be exposed
(i.e. cables) which makes it a fine target for attack. Some simple things greatly improve the
security of LANs: installing switches instead of hubs to limit Ethernet’s chatty broadcasts, thus
making it much harder to jack in and eavesdrop with a “promiscuous” NIC.
Operating System
As the software charged with controlling access to the hardware, the file system, and the
network, weaknesses in an operating system are the most valued amongst crackers.
When we speak here of an operating system, we really mean just the kernel, file system(s),
network software/stack, and authentication (who are you?) and authorization (what can you
do?) mechanisms.
Most OS authentication is handled through user names and passwords. Biometric (e.g. voice,
face, retina, iris, fingerprint) and physical token-based (swipe cards, pin-generating cards)
authentication are sometimes used to augment simple passwords, but the costs and accuracy of
the technology limit their adoption.
Once authenticated, the OS is responsible for enforcing authorization rules for a user’s account.
The guiding thought here is the Principle of Least Privilege: disallow every permission that
isn’t explicitly required.
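As a small illustration of least privilege at the file system level, the following Python sketch strips every permission bit except the owner’s read and write (the function name is illustrative, not a standard utility):

```python
import os
import stat

def restrict_to_owner(path):
    """Apply least privilege: the owner may read and write; group and
    others get no permissions at all (mode 0o600)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
```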
Protecting an operating system from attack is a cat-and-mouse game that requires constant
vigilance. Obviously, code patches must be applied (if the benefit of the patch is deemed to
outweigh the risk of changing a functioning system), and system logs must be gathered and
studied on a regular basis to identify suspicious activity.
A number of tools can be used to strengthen and monitor the security of an OS:
o file system rights (sometimes access control lists) and partition mount permissions limit non-
superuser accounts to only the files they require;
o disk quotas prevent users from intentionally or accidentally filling a disk and thereby denying
other users access to the partition;
o file-integrity detection software (e.g. Tripwire) reports modifications to system-critical files and directories;
o firewalls (i.e. packet filters, proxy servers, Network Address Translation, and Virtual Private
Networks) help to block out spurious network traffic, but don’t stop attacks on the layers that
follow;
o intrusion-detection software (e.g. Snort) identifies network-based attacks based on a library of
attack profiles; and
o anti-virus software removes, disables, or warns about dangerous viruses, worms, or Trojan
horses.
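The idea behind file-integrity tools such as Tripwire can be sketched in a few lines of Python: record a cryptographic digest of each monitored file as a baseline, then periodically compare against it (the function names are illustrative, not Tripwire’s actual interface):

```python
import hashlib

def snapshot(paths):
    """Record a SHA-256 digest for every monitored file."""
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed_files(baseline, paths):
    """Return the paths whose current digest no longer matches the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline.get(p)]
```

Any modification to a monitored file, however small, changes its digest and shows up on the next comparison.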
For server computers, the most important rule is to only install and run those software packages
that are absolutely required. The more programs that are running, the greater the opportunity
for someone to find a hole in the defenses.
Service
For our purposes, a “service” is any class of software that typically runs unattended on a server-
style computer and performs some task in response to a network-originated request. Web
servers (e.g. Apache, IIS, including server-side scripting platforms), FTP servers, email servers
(e.g. Sendmail, Qmail, Exim), Telnet and SSH servers, file and print servers (e.g.
SMB/Samba), database servers (e.g. Oracle, SQL Server, MySQL, DB/2, PostgreSQL), and so
on are all examples of such services.
The most common attack on these services involves a buffer overflow: sending a message
containing too much data for a limited storage space in the computer’s memory, overflowing
the bounds of that space and in some cases executing the code that was delivered at the end of
the message.
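The failure mode can be sketched with a fixed-size buffer in Python. Python raises an exception where C would silently overwrite adjacent memory, and that raised exception is exactly the bounds check a vulnerable service lacks:

```python
BUF_SIZE = 8

def unsafe_copy(buf, data):
    # No length check: in C this loop would keep writing past the end of
    # the buffer into adjacent memory; Python only saves us by raising
    # IndexError at the boundary.
    for i, byte in enumerate(data):
        buf[i] = byte

def safe_copy(buf, data):
    # Bounds-checked version: reject input that exceeds the buffer.
    if len(data) > len(buf):
        raise ValueError("input too long for buffer")
    buf[:len(data)] = data
```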
The infamous Code Red and Code Red II worms of 2001 sent overly long query strings to the
Microsoft IIS webserver’s indexing service, inducing a buffer overflow, and allowing the worm
to propagate and damage the infected system. In an Apache webserver’s log, a Code Red attack
appears as a single, very long GET request for /default.ida containing a long run of repeated
characters followed by encoded shellcode.
Some attacks against the service layer are intended to change the service’s normal OS user
account to that of an account with greater permissions, ideally the superuser account (a.k.a.
privilege escalation). Once that has been accomplished, other attacks using higher layers of the
web service can do more damage.
Keeping on top of the latest patches for service software helps prevent these sorts of exploits,
but the best defense is to limit the access the service has to the computer it runs on, and to other
computers in its network neighborhood. The former is often accomplished by running the
service using an OS user account with minimal privileges (e.g. “nobody” for Apache), and by
restricting the service to a closed-off region of the file system. The latter is accomplished by
setting up DMZ-like network configurations.
Data
Given that data is an organization’s most valuable IT asset, it is often surprising how
nonchalantly it is treated and secured. What is not surprising is that crackers know this and that
most of their efforts are ultimately focused on displaying, corrupting, or stealing an organization’s data.
Measures for protecting the database server hardware and the RDBMS service have already
been discussed.
Backups of critical data (and what data is not critical?) must obviously be performed on a
regular schedule, and must also be checked periodically to verify that all data is backed up
properly and that the backup media is functioning.
Backup media must also be removed to remote sites to guard against large-scale natural
disaster. Transporting the media must be performed by trusted couriers.
Finally, backups should be encrypted in some way to prevent any of the many people that come
into contact with the media from reading all of the organization’s data. In practice, this
encryption is rarely performed.
Since most web applications use some form of special application account to access the
database, the permissions granted to this account must follow the Principle of Least Privilege.
While not a complete solution, this does reduce the chance that an application-layer exploit, or
a simple programming error, might damage the contents of the database.
Application
The application layer consists of specialty software that performs the specific tasks required of
the web system. This software may be custom-written in-house or through outsourcing, or may
be purchased as a shrink-wrapped product. Generally, this sort of software is not used by many
different organizations, and so is not examined by as many people for security defects. On the
other hand, the relative obscurity of such software means that few crackers will be aware of
any such defects.
The main vulnerability of web applications is Cross-Site Scripting (XSS).
Cross-Site Scripting (a.k.a. XSS, script embedding, or script injection) is more an attack on the
users of a web application, than on the web system itself. It usually involves injecting some
client-side browser scripting code (e.g. JavaScript) into one of the application’s forms that, once
displayed on the site, results in that code being run (on the end user’s browser). This code can
do anything that client-side script code can do, but is often used to redirect the user to another
site for some malevolent purpose. Such script code can also forward the user’s session key to
another site, so that the recipient of this key can impersonate the legitimate owner of this key.
The best method for defeating this type of attack is to validate all input to a web application
and disable (perhaps by mapping to HTML entity codes) any special HTML characters such
as, but not limited to: “<”, “>”, and “&”.
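In Python, the standard library’s html.escape performs exactly this entity mapping; a minimal input-sanitizing helper might look like:

```python
import html

def sanitize(user_input: str) -> str:
    """Map HTML-special characters (&, <, >, and quotes) to entity codes
    so user-supplied text is displayed literally instead of executed."""
    return html.escape(user_input, quote=True)
```

For example, `sanitize('<script>alert(1)</script>')` returns `'&lt;script&gt;alert(1)&lt;/script&gt;'`, which a browser renders as harmless text.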
Network Protocol
It is at the network protocol layer that most of the web system security is addressed by product
marketing departments. While important, as we’ve seen this is only one piece of a very large
pie.
The primary technology that protects the web application protocol in question, HTTP, is the
Secure Sockets Layer (SSL), now renamed Transport Layer Security (TLS). TLS provides both
authentication and encryption services to communicating computers using digital certificates
issued by Certificate Authorities (CAs) also known as Trust Authorities.
TLS encrypts all data between client browser and webserver for those pages where it is deemed
necessary (identified with URLs beginning with https://). The level and method of
encryption is negotiated between client and server, but relies on public key cryptography to
scramble and digitally sign the message. Encryption protects the message from:
o Eavesdropping, or simple monitoring of the unprotected traffic;
o Modification of the message to erroneous or meaningless data;
o Man-in-the-middle attacks, which allow the attacker to interpose himself between the client
and the server, relaying messages between both, while modifying some to his own ends; and,
o Replay attacks, used by the attacker to retransmit the same message over and over to the server
in order to execute web application functionality repeatedly, usually to the detriment of the
original sender.
TLS also provides authentication through the same digital certificate. In most cases, this means
that the user can verify that the web application they are visiting is indeed registered to the
company that purports to provide this service.
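In Python, the standard ssl module wraps this negotiation. A minimal client sketch that enforces certificate verification against the system’s trusted CAs might look like the following (the hostname passed in is whatever server you wish to check):

```python
import socket
import ssl

def fetch_peer_cert(host: str, port: int = 443) -> dict:
    """Open a TLS connection, verifying the server's certificate chain and
    hostname against the system's trusted CAs, and return the peer's
    certificate details."""
    ctx = ssl.create_default_context()   # CERT_REQUIRED + hostname checking
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

The default context refuses the connection outright if the certificate is expired, self-signed by an untrusted party, or issued for a different hostname.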
Browser
Unfortunately, given the design of the HTTP protocol (even when secured through SSL/TLS),
there is very little that can be done to protect the web system at the browser layer. Hence, web
applications may never trust any data originating from a client browser.
TLS-based client digital certificates can be used to more positively identify clients to servers,
but they are as yet rarely used, partially because of expense, but also because they are difficult
to move from one client computer to another, thereby diminishing one of the benefits of web
systems: client location transparency.
A public key infrastructure (PKI) is a set of roles, policies, hardware, software and procedures
needed to create, manage, distribute, use, store and revoke digital certificates and manage
public-key encryption.
Public Key Infrastructure (PKI) is a technology for authenticating users and devices in the
digital world. The basic idea is to have one or more trusted parties digitally sign documents
certifying that a particular cryptographic key belongs to a particular user or device. The key
can then be used as an identity for the user in digital networks.
The users and devices that have keys are often just called entities. In general, anything can be
associated with a key that it can use as its identity. Besides a user or device, it could be a
program, process, manufacturer, component, or something else. The purpose of a PKI is to
securely associate a key with an entity.
The trusted party signing the document associating the key with the device is called a certificate
authority (CA). The certificate authority also has a cryptographic key that it uses for signing
these documents. These documents are called certificates.
In the real world, there are many certificate authorities, and most computers and web browsers
trust a hundred or so certificate authorities by default.
A public key infrastructure relies on digital signature technology, which uses public key
cryptography. The basic idea is that the secret key of each entity is only known by that entity
and is used for signing. This key is called the private key. There is another key derived from it,
called the public key, which is used for verifying signatures but cannot be used to sign. This
public key is made available to anyone, and is typically included in the certificate document.
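The sign-with-private / verify-with-public relationship can be demonstrated with a toy RSA example using deliberately tiny numbers (real systems use vetted cryptographic libraries and keys of 2048 bits or more; these primes are for illustration only):

```python
# Toy RSA key generation with tiny primes -- insecure, illustration only.
p, q = 61, 53
n = p * q                    # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

def sign(digest: int) -> int:
    """Only the holder of the private key d can produce this value."""
    return pow(digest, d, n)

def verify(digest: int, signature: int) -> bool:
    """Anyone holding the public key (e, n) can check the signature."""
    return pow(signature, e, n) == digest
```

A signature made with `d` verifies correctly with `e`, and any change to the signed digest makes verification fail, which is the property a certificate authority relies on when signing certificates.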
The purpose of a PKI is to facilitate the secure electronic transfer of information for a range of
network activities such as e-commerce, internet banking and confidential email. It is required
for activities where simple passwords are an inadequate authentication method and more
rigorous proof is required to confirm the identity of the parties involved in the communication
and to validate the information being transferred.
In cryptography, a PKI is an arrangement that binds public keys with respective identities of
entities (like people and organizations). The binding is established through a process of
registration and issuance of certificates at and by a certificate authority (CA). Depending on
the assurance level of the binding, this may be carried out by an automated process or under
human supervision.
The PKI role that assures valid and correct registration is called a registration authority (RA).
An RA is responsible for accepting requests for digital certificates and authenticating the entity
making the request. In a Microsoft PKI, a registration authority is usually called a subordinate
CA.
An entity must be uniquely identifiable within each CA domain on the basis of information
about that entity. A third-party validation authority (VA) can provide this entity information
on behalf of the CA.
The X.509 standard defines the most commonly used format for public key certificates.
In cyberspace there is a need to verify the identities of individuals for a number of purposes.
Some of these events include sending and receiving secure email, sending and receiving signed
email, setting up a secure session (SSL), and accessing a protected resource. The way in which
this goal of authentication is accomplished is by verifying that a public key belongs to an
individual that you know and trust. Public-Key Infrastructure is designed to allow this kind of
authentication.
One way to associate public keys to individuals is by publishing a mapping of names to keys.
This directory would act much like the White Pages does for distributing phone numbers based
on name. The directory must be trusted; therefore it must be authentic but need not be secret.
Entries would simply map a name to its public key.
Digital Certificates
Digital certificates were proposed by Loren Kohnfelder at MIT in a 1978 B.S. thesis. A
certificate is an authenticated identifier pairing a public key to a meaningful name. This allows
any user to be identified, and establishes trust between that user and a verifier who trusts the
certificate authority. The CA is assumed to correctly identify the person who has requested the
certificate.
Advantages
Difficulty Issues
Scalability
Robustness
Naming
PK infrastructure has a very intimate link with naming. We want a system that is easy for
people to use, with names as natural as file names.
Naming is a large issue. Since the CA has the burden of properly identifying and labeling the
parties with certificates, names must be made clear and accurate.
Naming provides an interface between people and cyberspace. People must then write security
policy based on the name associated with a PK used to sign a message. Writers of such policies
need to know and understand the relationship between keys and names.
Descriptive
Global uniqueness
Dynamic
Examples
X.509
X.509 is one of the most popular standards specifying the contents of a digital certificate. One
of the main goals of X.509 is global uniqueness of names.
A major problem with DNs is that single points of failure disrupt the system. The structure
itself is also awkward.
Version #
Certificate Serial #
Signature Algorithm Identifier
Issuer Distinguished Name (DN)
Validity Period
Subject DN
Subject PK Information
o algorithm identifier
o associated key parameters
Issuer Unique #
Subject Unique #
Extensions
o key usage
o certificate policies
o subject/issuer alternate names
o path constraints
o criticality bits
Overview
Figure 2.1: Enterprise information security architecture
Positioning
Enterprise information security architecture was first formally positioned by Gartner in their
whitepaper “Incorporating Security into the Enterprise Architecture Process”, published on 24
January 2006. Since this publication, security architecture has moved from being a silo-based
architecture to an enterprise-focused solution that incorporates business, information and
technology. The picture below represents a one-dimensional view of enterprise architecture as
a service-oriented architecture. It also reflects the new addition to the enterprise architecture
family, called “Security”. Business architecture, information architecture and technology
architecture used to be called BIT for short. Now, with security as part of the architecture
family, it has become BITS.
Security architectural change imperatives now include things like
Business roadmaps
Legislative and legal requirements
Technology roadmaps
Industry trends
Risk trends
Visionaries
Goals
Methodology
Given these descriptions, whose levels of detail will vary according to affordability and other
practical considerations, decision makers are provided the means to make informed decisions
about where to invest resources, where to realign organizational goals and processes, and what
policies and procedures will support core missions or business functions.
A strong enterprise information security architecture process helps decision makers answer
basic questions about what to protect, where to invest resources, and how security supports
core missions and business functions.
Having documented the organization’s strategy and structure, the architecture process then
flows down into the discrete information technology components such as:
Organization charts, activities, and process flows of how the IT Organization operates
Organization cycles, periods and timing
Suppliers of technology hardware, software, and services
Applications and software inventories and diagrams
Interfaces between applications – that is: events, messages and data flows
Intranet, Extranet, Internet, e-Commerce, EDI links with parties within and outside of the
organization
Data classifications, Databases and supporting data models
Hardware, platforms, hosting: servers, network components and security devices and where
they are kept
Local and wide area networks, Internet connectivity diagrams
Wherever possible, all of the above should be related explicitly to the organization’s strategy,
goals, and operations. The enterprise information security architecture will document the
current state of the technical security components listed above, as well as an ideal-world desired
future state (Reference Architecture) and finally a “Target” future state which is the result of
engineering tradeoffs and compromises vs. the ideal. Essentially the result is a nested and
interrelated set of models, usually managed and maintained with specialized software available
on the market.
Such exhaustive mapping of IT dependencies has notable overlaps with both metadata in the
general IT sense, and with the ITIL concept of the configuration management database.
Maintaining the accuracy of such data can be a significant challenge.
Along with the models and diagrams goes a set of best practices aimed at securing adaptability,
scalability, manageability etc. These systems engineering best practices are not unique to
enterprise information security architecture but are essential to its success nonetheless. They
involve such things as componentization, asynchronous communication between major
components, and standardization of key identifiers and so on.
The organization must design and implement a process that ensures continual movement from
the current state to the future state. The future state will generally be a combination of one or
more:
Closing gaps that are present between the current organization strategy and the ability of the
IT security dimensions to support it
Closing gaps that are present between the desired future organization strategy and the ability
of the security dimensions to support it
Necessary upgrades and replacements that must be made to the IT security architecture based
on supplier viability, age and performance of hardware and software, capacity issues, known
or anticipated regulatory requirements, and other issues not driven explicitly by the
organization’s functional management.
On a regular basis, the current state and future state are redefined to account for evolution of
the architecture, changes in organizational strategy, and purely external factors such as changes
in technology and customer/vendor/government requirements, and changes to both internal and
external threat landscapes over time.
Intrusion Detection
Intrusion detection (ID) is the process of monitoring for and identifying specific malicious
traffic. Most network administrators do ID all the time without realizing it. Security
administrators are constantly checking system and security log files for something suspicious.
An antivirus scanner is an ID system when it checks files and disks for known malware.
Administrators use other security audit tools to look for inappropriate rights, elevated
privileges, altered permissions, incorrect group memberships, unauthorized registry changes,
malicious file manipulation, inactive user accounts, and unauthorized applications.
An IDS can take the form of a software program installed on an operating system, but today’s
commercial network-sniffing IDS/IPS typically takes the form of a hardware appliance because
of performance requirements. An IDS uses either a packet-level network interface driver to
intercept packet traffic or it “hooks” the operating system to insert inspection subroutines. An
IDS is a sort of virtual food-taster, deployed primarily for early detection, but increasingly used
to prevent attacks.
When the IDS notices a possible malicious threat, called an event, it logs the transaction and
takes appropriate action. The action may simply be to continue to log, send an alert, redirect
the attack, or prevent the maliciousness. If the threat is high risk, the IDS will alert the
appropriate people. Alerts can be sent by e-mail, Simple Network Management Protocol
(SNMP), pager, SMS to a mobile device, or console broadcast. An IDS supports the defense-in-depth
security principle and can be used to detect a wide range of rogue events, including
but not limited to the following:
Impersonation attempts
Password cracking
Protocol attacks
Buffer overflows
Installation of rootkits
Rogue commands
Software vulnerability exploits
Malicious code, like viruses, worms, and Trojans
Illegal data manipulation
Unauthorized file access
Denial of service (DoS) attacks
Threat Types
To really understand IDS, you must understand the security threats and exploits it can detect
and prevent. Threats can be classified as attacks or misuse, and they can exploit network
protocols or work as malicious content at the application layer.
Attacks or Misuse
Attacks are unauthorized activity with malicious intent using specially crafted code or
techniques. Attacks include denial of service, virus or worm infections, buffer overflows,
malformed requests, file corruption, malformed network packets, or unauthorized program
execution.
Misuse refers to unauthorized events without specially crafted code. In this case, the offending
person used normally crafted traffic or requests and their implicit level of authorization to do
something malicious. Misuse can also refer to unintended consequences, such as when a
hapless new user overwrites a critical document with a blank page. Another misuse event could
be a user mapping a drive to a file server share not intended by the network administrator.
Regardless of how an alert is detected, the administrator groups all alerts into one of four
categories: true positives, false positives, true negatives, and false negatives.
Many of the security threats detected by an ID exploit network protocols (layers two and three
of the OSI model). Network protocols such as TCP/IP define standard ways of transmitting
data to facilitate open communications. The data is sent in a packet (layer three), which is then
encapsulated into a layer two frame, which is then transmitted as packages of electronic bits
(1s and 0s) framed in a particular format defined by a network protocol—but the protocols do
not contemplate the consequences of malicious packet creation. This is because protocols are
designed to perform functions, not to be secure.
Flag Exploits: Abnormally crafted network packets are typically used for DoS attacks on host
machines, to skirt past network perimeter defenses (bypassing access control devices), to
impersonate another user’s session (attack on integrity), or to crash a host’s IP stack (DoS).
Malicious network traffic works by playing tricks with the legitimate format settings of the IP
protocol. For instance, using a specially crafted tool, an attacker can set incompatible sequences
of TCP flags, causing destination host machines to issue responses other than the normal
responses, resulting in session hijacking or more typically a DoS condition.
Other examples of maliciously formed TCP traffic include an attacker setting an ACK flag in
an originating session packet without sending an initial SYN packet to initiate traffic, or
sending a SYN and FIN (start and stop) combination at the same time. TCP flags can be set in
multiple ways and each generates a response that can either identify the target system,
determine if a stateful packet-inspecting device is in front of the target, or create a no-response
condition. Port scanners often use different types of scans to determine whether the destination
port is open or closed, even if firewall-like blocking mechanisms are installed to stop normal
port scanners.
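A NIDS rule for these illegal flag combinations can be sketched as a simple lookup. The combinations shown (SYN+FIN, the FIN+PSH+URG “Xmas tree” scan, and the no-flags “null” scan) are classic port-scanner signatures:

```python
# TCP flag combinations that never occur in legitimate traffic.
ILLEGAL_COMBOS = {
    frozenset({"SYN", "FIN"}),          # start and stop at the same time
    frozenset({"FIN", "PSH", "URG"}),   # "Xmas tree" scan
    frozenset(),                        # "null" scan: no flags at all
}

def is_suspicious(flags) -> bool:
    """Return True if this packet's TCP flags match a known-illegal combination."""
    return frozenset(flags) in ILLEGAL_COMBOS
```

A real NIDS also tracks connection state (e.g. an ACK with no preceding SYN), which a stateless lookup like this cannot catch.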
Fragmentation and Reassembly Attacks: Although not quite the security threat they once
were, IP packets can be used in fragmentation attacks. TCP/IP fragmentation is allowed
because all routers have a maximum transmission unit (MTU), which is the maximum number
of bytes that they can send in a single packet. A large packet can be broken down into multiple
smaller packets (known as fragments) and sent from source to destination. A fragment offset
value located in each fragment tells the destination IP host how to reassemble the separate
packets back into the larger packet.
Attackers can manipulate fragment offset values to force fragments to reassemble into a
malicious packet. If an IDS or firewall allows fragmentation and does not reassemble the
packets before inspection, an exploit may slip by.
For example, suppose a firewall does not allow FTP traffic, and an attacker sends fragmented
packets posing as some other allowable traffic. If the packets act as SMTP e-mail packets
headed to destination port 25, they could be passed through, but after they are past the firewall,
they could reassemble to overwrite the original port number and become FTP packets to
destination port 21. The main advantage here for the attacker is stealth, which allows him or
her to bypass the IDS.
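The overwrite trick depends on how the receiver reassembles overlapping fragments. A simplified reassembler (offsets in bytes rather than the real protocol’s 8-byte units) shows how a later fragment can rewrite data laid down by an earlier one:

```python
def reassemble(fragments):
    """fragments: iterable of (offset, payload) pairs, offsets in bytes.
    Later fragments simply overwrite earlier data at overlapping offsets --
    the behavior an overlap attack exploits."""
    buf = bytearray()
    for offset, payload in fragments:
        end = offset + len(payload)
        if end > len(buf):
            buf.extend(b"\x00" * (end - len(buf)))   # pad any gap
        buf[offset:end] = payload
    return bytes(buf)
```

A first fragment that passes inspection can thus be partially replaced by a second, overlapping fragment after the filter has already approved the packet.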
Today, most IDSs, operating systems, and firewalls have anti-fragmentation defenses. By
default, a Windows host will drop fragmented packets.
Application Attacks
Content Obfuscation: Most IDSs look for known malicious commands or data in a network
packet’s data payload. A byte-by-byte comparison is done between the payload and each
potential threat signature in the IDS’s database. If something matches, it’s flagged as an event.
This is how “signature-based” IDSs work. Someone has to have the knowledge to write the
“signature.”
Because byte scanning is relatively easy to do, attackers use encoding schemes to hide their
malicious commands and content. Encoding schemes are non-plaintext character
representations that eventually get converted to plaintext for processing. The flexibility of the
coding for international languages on the Internet allows ASCII characters to be represented
by many different encoding schemes, including hexadecimal (base 16, in which the word
“Hello” looks like “48 65 6C 6C 6F”), decimal notation (where “Hello” is “72 101 108 108
111”), octal (base 8, in which “Hello” appears as “110 145 154 154 157”), Unicode (where
“Hello” = “0048 0065 006C 006C 006F”), and any combination thereof. Web URLs and
commands have particularly flexible syntax. Complicating the issue, most browsers
encountering common syntax mistakes, like reversed slashes or incorrect case, convert them to
their legitimate form.
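These alternative representations are easy to reproduce; the following snippet derives each encoding of the word “Hello”:

```python
word = "Hello"
hex_enc = " ".join(f"{ord(c):02X}" for c in word)   # hexadecimal (base 16)
dec_enc = " ".join(str(ord(c)) for c in word)       # decimal
oct_enc = " ".join(f"{ord(c):o}" for c in word)     # octal (base 8)
uni_enc = " ".join(f"{ord(c):04X}" for c in word)   # Unicode code points
```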
Data Normalization: An IDS signature database has to consider all character encoding
schemes and tricks that can end up creating the same malicious pattern. This task is usually
accomplished by normalizing the data before inspection. Normalization reassembles fragments
into single whole packets, converts encoded characters into plain ASCII text, fixes syntax
mistakes, removes extraneous characters, converts tabs to spaces, removes common hacker
tricks, and does its best to convert the data into its final intended form.
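A tiny normalizer for URL-style input might repeatedly percent-decode until the string stops changing (defeating double-encoding tricks like %252F), then canonicalize slashes and case before signature matching (the helper name is illustrative):

```python
from urllib.parse import unquote

def normalize(request_path: str) -> str:
    """Decode repeatedly until stable (defeats double-encoding), then
    canonicalize backslashes and case before signature matching."""
    previous = None
    while previous != request_path:
        previous = request_path
        request_path = unquote(request_path)
    return request_path.replace("\\", "/").lower()
```

After normalization, an obfuscated request and its plain equivalent compare equal, so one signature covers both.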
First-Generation IDS
IDS development as we know it today began in the early 1980s, but only started growing in the
PC marketplace in the late 1990s. First-generation IDSs focused almost exclusively on the
benefit of early warning resulting from accurate detection. This continues to be a base
requirement of IDS, and vendors frequently brag about their product’s accuracy. The practical
reality is that while most IDSs are considered fairly accurate, no IDS has ever been close to
being perfectly accurate. Although a plethora of antivirus scanners enjoy year-after-year 95 to
99 percent accuracy rates, IDSs never get over 90 percent accuracy against a wide spectrum of
real-world attack traffic. Most are in the 80 percent range. Some test results show 100 percent
detection rates, but in every such instance, the IDS was tuned after several previous, less
accurate rounds of testing. When an IDS misses a legitimate threat, it is called a false negative.
Most IDSs are plagued with even higher false positive rates, however.
IDSs have high false positive rates. A false positive is when the IDS alerts on a security threat
but the traffic is not malicious or was never intended to be malicious (a benign condition). A
common example is when an IDS flags an e-mail as infected with a particular virus because it
is looking for some key text known to be in the message body of the e-mail virus (for example,
the phrase “cheap pharmaceuticals”). When an e-mail intended to warn readers about the virus
includes the keywords that the reader should be on the lookout for, it can also create a false
positive. The IDS should be flagging the e-mail as infected only if it actually contains a virus,
not just if it has the same message text.
Simply searching for text within the message body to detect malware is an immature detection
choice. Many security web services that send subscribers early warning e-mails complain that
nearly 10 percent of their e-mails are kicked back by overly zealous IDSs. Many of those same
services have taken to purposely misrepresenting the warning text (by slightly changing it, such
as “che4p_pharmaceut1cals”) in a desperate attempt to get past their subscribers’ poorly
configured defenses.
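A naive signature match of this kind is easy to sketch, and the sketch makes the false-positive problem obvious: any benign message that merely quotes the signature text triggers the alert (the signature name and pattern here are hypothetical):

```python
# Hypothetical signature database: threat name -> byte pattern.
SIGNATURES = {"mailworm.a": b"cheap pharmaceuticals"}

def scan(payload: bytes):
    """Byte-by-byte substring matching, as a naive signature IDS would do.
    Returns the names of all signatures found in the payload."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]
```

A warning e-mail that quotes the phrase matches just as readily as the worm itself, which is exactly why mature products match on structural features of the malware rather than message text alone.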
Second-Generation IDS
The net effect of most IDSs being fairly accurate and none being highly accurate has resulted
in vendors and administrators using other IDS features for differentiation. Here are some of
those other features that may be more or less useful in different circumstances:
First-generation IDSs focused on accurate attack detection. Second-generation IDSs do that
and work to simplify the administrator’s life by offering a bountiful array of back-end options.
They offer intuitive end-user interfaces, intrusion prevention, centralized device management,
event correlation, and data analysis. Second-generation IDSs do more than just detect attacks—
they sort them, prevent them, and attempt to add as much value as they can beyond mere
detection.
Depending on what assets you want to protect, an IDS can protect a host or a network. All IDSs
follow one of two intrusion detection models: anomaly (also called profile, behavior, heuristic,
or statistical) detection or signature (knowledge-based) detection, although some systems use
parts of both when it’s advantageous. Both anomaly and signature detection work by
monitoring a wide population of events and triggering based on predefined behaviors.
Host-Based IDS
A host-based IDS (HIDS) is installed on the host it is intended to monitor. The host can be a
server, workstation, or any networked device (such as a printer, router, or gateway). A HIDS
installs as a service or daemon, or it modifies the underlying operating system’s kernel or
application to gain first inspection authority. Although a HIDS may include the ability to sniff
network traffic intended for the monitored host, it excels at monitoring and reporting direct
interactions at the application layer. Application attacks can include memory modifications,
maliciously crafted application requests, buffer overflows, or file-modification attempts.
A HIDS can inspect each incoming command, looking for signs of maliciousness, or simply
track unauthorized file changes.
59 | P a g e
manipulate files, access the system, change passwords, escalate privileges, and otherwise
directly modify the host. On a UNIX host, a behavior-monitoring HIDS may monitor attempts
to access system binaries, attempts to download password files, and change permissions and
scheduled jobs. A behavior-monitoring HIDS on a web server may monitor incoming requests
and report maliciously crafted HTML responses, cross-site scripting attacks, or SQL injection
code.
Network-Based IDS
Network-based IDSs (NIDSs) are the most popular IDSs, and they work by capturing and
analyzing network packets speeding by on the wire. Unlike a HIDS, a NIDS is designed to
protect more than one host. It can protect a group of computer hosts, like a server farm, or
monitor an entire network. Captured traffic is compared against protocol specifications and
normal traffic trends, or the packet's payload data is examined for malicious content. If a
security threat is noted, the event is logged and an alert is generated.
With a HIDS, you install the software on the host you want monitored and the software does
all the work. Because a NIDS works by examining network packet traffic, including traffic not
intended for the NIDS host on the network, it has a few extra deployment considerations. It is
common for brand-new NIDS users to spend hours wondering why their IDS isn't generating
any alerts. Sometimes it's because there is no threat traffic to alert on, and other times it's
because the NIDS isn't set up to capture packets headed to other hosts.
Packet-Level Drivers
Network packets are captured using a packet-level software driver bound to a network interface
card. Many Unix and Windows systems do not have native packet-level drivers built in, so IDS
implementations commonly rely on open source packet-level drivers. Most commercial IDSs
have their own packet-level drivers and packet-sniffing software.
Promiscuous Mode
For a NIDS to sniff packets, the packets have to be given to the packet-level driver by the
network interface card. By default, most network cards are not promiscuous, meaning they
only read packets off the wire that are intended for them. This typically includes unicast
packets, meant solely for one particular workstation; broadcast packets, meant for every
computer that can listen to them; and multicast traffic, meant for two or more previously
defined hosts. Most networks contain unicast and broadcast traffic. Multicast traffic isn't as
common, but it is gaining in popularity for web-streaming applications. By default, a network
card in normal mode drops traffic destined for other computers and packets with transmission
anomalies (resulting from collisions, bad cabling, and so on). If you are going to set up an IDS,
make sure its network interface card has a promiscuous mode and is able to inspect all traffic
passing by on the wire.
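As an illustrative sketch (not taken from this module), the first thing a packet-level driver hands the IDS engine is a raw Ethernet frame. The hypothetical `parse_ethernet_header` function below unpacks the 14-byte header; the comment shows how such frames would typically be captured from a promiscuous-mode socket on Linux (root required).

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split a raw Ethernet frame into destination MAC, source MAC, and EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_hex = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return as_hex(dst), as_hex(src), ethertype

# A NIDS sensor would normally read frames from a promiscuous-mode socket, e.g.
# on Linux: socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003)).
# Here we parse a hand-crafted broadcast frame instead:
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
dst, src, ethertype = parse_ethernet_header(frame)
print(dst, src, hex(ethertype))  # EtherType 0x800 indicates an IPv4 payload
```

Everything above the parsed header (the IP and transport layers, and the payload the signature engine inspects) is carved out of the same byte string in the same way.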
For the purposes of this chapter, a network segment can be defined as a single logical packet
domain. For a NIDS, this definition means that all network traffic heading to and from all
computers on the same network segment can be physically monitored.
You should have at least one NIDS inspection device per network segment to monitor a
network effectively. This device can be a fully operational IDS interface or, more commonly,
a router or switch interface to which all network traffic is copied, known as a span port, or a
traffic repeater device, known as a sensor or tap. One port plugs into the middle of a connection
on the network segment to be monitored, and the other plugs into a cable leading to the central
IDS console.
Anomaly-Detection Model
Anomaly detection (AD) was proposed in 1985 by noted security laureate Dr. Dorothy E.
Denning, and it works by establishing accepted baselines and noting exceptional differences.
Baselines can be established for a particular computer host or for a particular network segment.
Some IDS vendors refer to AD systems as behavior-based since they look for deviating
behaviors. If an IDS looks only at network packet headers for differences, it is called protocol
anomaly detection.
The goal of AD is to be able to detect a wide range of malicious intrusions, including those for
which no previous detection signature exists. By learning known good behaviors during a
period of "profiling," in which an AD system identifies and stores all the normal activities that
occur on a system or network, it can alert to everything else that doesn't fit the normal profile.
Anomaly detection is statistical in nature and works on the concept of measuring the number
of events happening in a given time interval for a monitored metric. A simple example is
someone logging in with the incorrect password too many times, causing an account to be
locked out and generating a message to the security log. An anomaly-detection IDS expands the
same concept to cover network traffic patterns, application events, and system utilization.
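The failed-logon example can be sketched as a simple sliding-window threshold check (a toy model with hypothetical names, not a production anomaly detector): count events inside a time window and alert when the count exceeds the learned baseline.

```python
from collections import deque

def make_anomaly_detector(baseline_max: int, window_seconds: int):
    """Alert when more events occur inside a sliding time window than the
    learned baseline allows (e.g. failed logons per minute)."""
    events = deque()
    def record(timestamp: float) -> bool:
        events.append(timestamp)
        # Drop events that have fallen out of the window.
        while events and timestamp - events[0] > window_seconds:
            events.popleft()
        return len(events) > baseline_max  # True = anomaly alert
    return record

detect = make_anomaly_detector(baseline_max=3, window_seconds=60)
alerts = [detect(t) for t in (0, 5, 10, 15, 200)]
print(alerts)  # only the fourth event exceeds the baseline of 3 per minute
```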
AD systems can monitor and trigger alerts from many other kinds of events as well, spanning network, application, and system-level activity.
Signature-Detection Model
Signature-detection or misuse IDSs are the most popular type of IDS, and they work by using
databases of known bad behaviors and patterns. This is nearly the exact opposite of AD
systems. When you think of a signature-detection IDS, think of it as an antivirus scanner for
network traffic. Signature-detection engines can query any portion of a network packet or look
for a specific series of data bytes. The defined patterns of code are called signatures, and often
they are included as part of a governing rule when used within an IDS.
Signatures are byte sequences that are unique to a particular malady. A byte signature may
contain a sample of virus code, a malicious combination of keystrokes used in a buffer
overflow, or text that indicates the attacker is looking for the presence of a particular file in a
particular directory. For performance reasons, the signature must be crafted so it is the shortest
possible sequence of bytes needed to detect its related threat reliably. It must be highly accurate
in detecting the threat and not cause false positives. Signatures and rules can be collected
together into larger sets called signature databases or rule sets.
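As an illustrative sketch (the byte patterns below are invented for illustration; real rule sets, such as Snort's, are far richer and include governing rules, offsets, and protocol context), signature matching reduces to searching payload bytes for known sequences:

```python
SIGNATURES = {
    # Hypothetical byte patterns keyed by rule name.
    "cmd-exe-probe": b"cmd.exe",
    "etc-passwd-read": b"/etc/passwd",
    "nop-sled": b"\x90" * 16,
}

def match_signatures(payload: bytes):
    """Return the names of all signatures whose byte pattern occurs in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

packet = b"GET /scripts/..%255c../winnt/system32/cmd.exe?/c+dir HTTP/1.0"
print(match_signatures(packet))  # the request trips the cmd.exe signature
```

Note how the signature is kept as short as the text recommends: just enough bytes to identify the threat reliably without firing on benign traffic.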
Since the beginning, IDS developers have wanted the IDS to do more than just monitor and
report maliciousness. What good is a device that only tells you you've been maligned when the
real value is in preventing the intrusion? That's like a car alarm telling you that your car has
been stolen, after the fact. Like intrusion detection, intrusion prevention has long been practiced
by network administrators as a daily part of their routine. Setting access controls, requiring
passwords, enabling real-time antivirus scanning, updating patches, and installing perimeter
firewalls are all examples of common intrusion-prevention controls. Intrusion-prevention
controls, as they apply to IDSs, involve real-time countermeasures taken against a specific,
active threat. For example, the IDS might notice a ping flood and deny all future traffic
originating from the same IP address. Alternatively, a host-based IDS might stop a malicious
program from modifying system files.
Going far beyond mere monitoring and alerting, second-generation IDSs are being called
intrusion-prevention systems (IPSs). They either stop the attack or interact with an external
system to put down the threat.
If the IPS, as shown in Figure 2.2, is a mandatory inspection point with the ability to filter
real-time traffic, it is considered inline. Inline IPSs can drop packets, reset connections, and route
suspicious traffic to quarantined areas for inspection. If the IPS isn't inline and is only
inspecting the traffic, it still can instruct other network perimeter systems to stop an exploit. It
may do this by sending scripted commands to a firewall, instructing it to deny all traffic from
the remote attacker's IP address, calling a virus scanner to clean a malicious file, or simply
telling the monitored host to deny the hacker's intended modification.
For an IPS to cooperate with an external device, they must share a common scripting language,
API, or some other communicating mechanism. Another common IPS method is for the IDS
device to send reset (RST) packets to both sides of the connection, forcing both source and
destination hosts to drop the communication. This method isn‘t seen as being very accurate,
because often the successful exploit has happened by the time a forced reset has occurred, and
the sensors themselves can get in the way and drop the RST packets.
Figure 2.2: IDS placed to drop malicious packets before they can enter the network
Chapter Three
3. Network Firewall Security
3.1 Securing Private Networks
A firewall is a router or other communications device that filters access to a protected network.
A firewall can also be a program that screens all incoming traffic and protects the network from
unwelcome intruders.
It is a means of protecting a local system or network of systems from network-based security
threats, while affording access to the outside world via WANs or the Internet.
Firewall Objectives and Features
A firewall provides access control, using one or both of the following methods:
- Packet filtering
- Proxy service
Additional features include:
- Data encryption
- Authentication
- Connection relay (hiding the internal network)
- Reporting/logging
- E-mail virus protection
- Spyware protection
A firewall protects against:
- Remote logins
- IP spoofing
- Source routing
- SMTP session hijacking
- Spam
- Denial of service
- E-mail bombs
Internet connectivity is no longer an option for most organizations. However, while Internet
access provides benefits to the organization, it also enables the outside world to reach and
interact with local network assets, creating threats to the organization.
While it is possible to equip each workstation and server on the premises network with strong
security features, such as intrusion protection, this is not a practical approach. The alternative,
increasingly accepted, is the firewall.
The firewall is inserted between the premises network and the Internet to establish a controlled
link and to create an outer security wall or perimeter.
The aim of this perimeter is to protect the premises network from Internet-based attacks and to
provide a single choke point where security and audit can be imposed.
The firewall can be a single computer system or a set of two or more systems that cooperate to
perform the firewall function.
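A minimal sketch of the packet-filtering method mentioned above (rule fields and names are hypothetical; real firewalls match on many more attributes, such as protocol, interface, and connection state):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source address; "*" matches any
    dst_port: Optional[int]  # destination port; None matches any

def evaluate(rules: List[Rule], src: str, dst_port: int) -> str:
    """First-match packet filtering with an implicit default deny."""
    for rule in rules:
        if rule.src in ("*", src) and rule.dst_port in (None, dst_port):
            return rule.action
    return "deny"

rules = [
    Rule("deny", "203.0.113.66", None),  # block a known-bad host entirely
    Rule("allow", "*", 443),             # permit HTTPS from anywhere else
]
print(evaluate(rules, "198.51.100.7", 443))  # allow
print(evaluate(rules, "203.0.113.66", 443))  # deny: first match wins
print(evaluate(rules, "198.51.100.7", 23))   # deny: implicit default
```

The "first match wins, default deny" ordering is the design choice that makes such rule sets auditable: anything not explicitly permitted is dropped at the choke point.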
Other influences on network design include budgets, availability requirements, the network's
size and scope, future growth expectations, capacity requirements, and management's
tolerance of risks. For example, dedicated WAN links to remote offices can be more reliable
than virtual private networks (VPNs), but they cost more, especially when covering large
distances. Fully redundant networks can easily recover from failures, but having duplicate
hardware increases costs, and the more routing paths available, the harder it is to secure and
segregate traffic flows.
3.6 Designing an Appropriate Network
There are invariably numerous requirements and expectations placed upon a network, such as
meeting and exceeding the organization's availability and performance requirements,
providing a platform that is conducive for securing sensitive network assets, and enabling
effective and secure links to other networks. On top of that, the overall network design must
provide the ability to grow and support future network requirements.
Common steps for obtaining such information include meeting with project stakeholders,
application and system owners, developers, management, and users. It is important to
understand their expectations and needs with regard to performance, security, availability,
budget, and the overall importance of the new project. Adequately understanding these
elements will ensure that project goals are met, and that appropriate network performance and
security controls are included in the design. One of the most common problems encountered in
a network implementation is unmet expectations resulting from differing assumptions. That is
why expectations should be broken down into mutually observable (and measurable) facts as
much as possible, so that the security designers can ensure explicit agreement on all functional
proposals.
Performance
The legacy Cisco Hierarchical Internetworking model, which most network engineers are
intimately familiar with, is a common design implemented in large-scale networks today,
although many new purpose-built designs have been developed to support emerging
technologies like Clos fabrics, lossless Ethernet, Layer 2 bridging with TRILL or IEEE 802.1aq,
and other data center–centric technologies.
The three-tier hierarchy still applies to campus networks, but no longer to data centers. This is
a legacy model socialized by Cisco, but even Cisco has newer thinking for data centers.
Networks are becoming much more specialized, and the security thinking for different types of
networks is significantly different. The Cisco three-tier model is derived from the Public
Switched Telephone Network (PSTN) model, which is in use for much of the world's telephone
infrastructure. The Cisco Hierarchical Internetworking model, depicted in Figure 3.1, uses
three main layers commonly referred to as the core, distribution, and access layers:
Core layer: Forms the network backbone and is focused on moving data as fast as possible
between distribution layers. Because performance is the core layer's primary focus, it should
not be used to perform CPU-intensive operations such as filtering, compressing, encrypting, or
translating network addresses for traffic.
Distribution layer: Sits between the core and the access layer. This layer is used to aggregate
access-layer traffic for transmission into and out of the core.
Access layer: Composed of the user networking connections. Filtering, compressing,
encrypting, and address-translating operations should be performed at the access and
distribution layers.
The Cisco model is highly scalable. As the network grows, additional distribution and access
layers can be added seamlessly. As the need for faster connections and more bandwidth arises,
the core and distribution equipment can be upgraded as required. This model also assists
corporations in achieving higher levels of availability by allowing for the implementation of
redundant hardware at the distribution and core layers. And because the network is highly
segmented, a single network failure at the access or distribution layers does not affect the entire
network.
3.7 Internal Security Practices
Organizations that deploy firewalls strictly around the perimeter of their network leave
themselves vulnerable to internally initiated attacks, which are statistically the most common
threats today. Internal controls, such as firewalls and early detection systems (IDS, IPS, and
SIEM), should be located at strategic points within the internal network to provide additional
security for particularly sensitive resources such as research networks, repositories containing
intellectual property, and human resource and payroll databases.
Dedicated internal firewalls, as well as the ability to place access control lists on internal
network devices, can slow the spread of a virus. Figure 3.2 depicts a network utilizing internal
firewalls.
When designing internal network zones, if there is no reason for two particular networks to
communicate, explicitly configure the network to block traffic between those networks, and
log any attempts that hosts make to communicate between them. With modern VoIP networks,
this can be a challenge as VoIP streams are typically endpoint to endpoint, but consider only
allowing the traffic you know to be legitimate between any two networks.
A common technique used by hackers is to target an area of the network that is less secure, and
then work their way in slowly via "jumping" from one part of the network to another. If all of
the internal networks are wide open, there is little hope of detecting, much less preventing, this
type of threat vector.
Figure 3.2: Internal firewalls can be used to increase internal security
Organizations need to provide information to internal and external users and to connect their
infrastructure to external networks, so they have developed network topologies and application
architectures that support that connectivity while maintaining adequate levels of security. The
most prevalent terms for describing these architectures are intranet, extranet, and demilitarized
zone (DMZ). Organizations often segregate the applications deployed in their intranets and
extranets from other internal systems through the use of firewalls. An organization can exert
higher levels of control through firewalling to ensure the integrity and security of these systems.
Intranets
The main purpose of an intranet is to provide internal users with access to applications and
information. Intranets are used to house internal applications that are not generally available to
external entities, such as time and expense systems, knowledge bases, and organization bulletin
boards. The main purpose of an intranet is to share organization information and computing
resources among employees. To achieve a higher level of security, intranet systems are
aggregated into one or more dedicated subnets and are firewalled.
From a logical connectivity standpoint, the term intranet does not necessarily mean an internal
network. Intranet applications can be engineered to be universally accessible. Thus, employees
can enter their time and expense systems while at their desks or on the road. When intranet
applications are made publicly accessible, it is a good practice to segregate these systems from
internal systems and to secure access with a firewall. Additionally, because internal
information will be transferred as part of the normal application function, it is commonplace to
encrypt such traffic. It is not uncommon to deploy intranet applications in a DMZ configuration
to mitigate risks associated with providing universal access.
Extranets
Extranets are application networks that are controlled by an organization and made available
to trusted external parties, such as suppliers, vendors, partners, and customers. Possible uses
for extranets are varied and can include providing application access to business partners, peers,
suppliers, vendors, partners, customers, and so on. However, because these users are external
to the corporation, and the security of their networks is beyond the control of the corporation,
extranets require additional security processes and procedures beyond those of intranets. As
Figure 3.3 shows, access methods to an extranet can vary greatly: VPNs, direct connections,
and even remote users can connect.
Figure 3.3: A possible extranet design
IP Security (IPsec)
In response to the security weaknesses of the original Internet protocols, the Internet
Architecture Board (IAB) included authentication and encryption as necessary security features
in the next-generation IP, which has been issued as
IPv6. Fortunately, these security capabilities were designed to be usable both with the current
IPv4 and the future IPv6. This means that vendors can begin offering these features now, and
many vendors do now have some IPsec capability in their products.
IP-level security encompasses three functional areas: authentication, confidentiality, and key
management. The authentication mechanism assures that a received packet was, in fact,
transmitted by the party identified as the source in the packet header. In addition, this
mechanism assures that the packet has not been altered in transit. The confidentiality facility
enables
communicating nodes to encrypt messages to prevent eavesdropping by third parties. The key
management facility is concerned with the secure exchange of keys. The current version of
IPsec, known as IPsecv3, encompasses authentication and confidentiality. Key management is
provided by the Internet Key Exchange standard, IKEv2.
We begin this section with an overview of IP security (IPsec) and an introduction to the IPsec
architecture. We then look at some of the technical details.
Applications of IPsec
Secure branch office connectivity over the Internet: A company can build a secure virtual
private network over the Internet or over a public WAN. This enables a business to rely heavily
on the Internet and reduce its need for private networks, saving costs and network management
overhead.
Secure remote access over the Internet: An end user whose system is equipped with IP
security protocols can make a local call to an Internet service provider and gain secure access
to a company network. This reduces the cost of toll charges for traveling employees and
telecommuters.
Establishing extranet and intranet connectivity with partners: IPsec can be used to secure
communication with other organizations, ensuring authentication and confidentiality and
providing a key exchange mechanism.
Enhancing electronic commerce security: Even though some Web and electronic commerce
applications have built-in security protocols, the use of IPsec enhances that security.
The principal feature of IPsec that enables it to support these varied applications is that it can
encrypt and/or authenticate all traffic at the IP level. Thus, all distributed applications,
including remote logon, client/server, e-mail, file transfer, Web access, and so on, can be
secured.
Benefits of IPsec:
- When IPsec is implemented in a firewall or router, it provides strong security that can be
applied to all traffic crossing the perimeter. Traffic within a company or workgroup does not
incur the overhead of security-related processing.
- IPsec in a firewall is resistant to bypass if all traffic from the outside must use IP and the
firewall is the only means of entrance from the Internet into the organization.
- IPsec is below the transport layer (TCP, UDP) and so is transparent to applications. There is
no need to change software on a user or server system when IPsec is implemented in the
firewall or router. Even if IPsec is implemented in end systems, upper-layer software, including
applications, is not affected.
- IPsec can be transparent to end users. There is no need to train users on security mechanisms,
issue keying material on a per-user basis, or revoke keying material when users leave the
organization.
- IPsec can provide security for individual users if needed. This is useful for off-site workers
and for setting up a secure virtual subnetwork within an organization for sensitive applications.
Routing Applications
In addition to supporting end users and protecting premises systems and networks, IPsec can
play a vital role in the routing architecture required for internetworking. IPsec can assure that:
- A router advertisement (a new router advertises its presence) comes from an authorized
router.
- A neighbor advertisement (a router seeks to establish or maintain a neighbor relationship with
a router in another routing domain) comes from an authorized router.
- A redirect message comes from the router to which the initial packet was sent.
- A routing update is not forged.
Without such security measures, an opponent can disrupt communications or divert some
traffic. Routing protocols such as Open Shortest Path First (OSPF) should be run on top of
security associations between routers that are defined by IPsec.
The key exchange function allows for manual exchange of keys as well as an automated
scheme.
The IPsec specification is quite complex and covers numerous documents. The most important
of these are RFCs 4301, 4302, 4303, and 4306. In this section, we provide an overview of some
of the most important elements of IPsec.
Security Associations
A key concept that appears in both the authentication and confidentiality mechanisms for IP is
the security association (SA). An association is a one-way relationship between a sender and a
receiver that affords security services to the traffic carried on it. If a peer relationship is needed,
for two-way secure exchange, then two security associations are required. Security services are
afforded to an SA for the use of AH or ESP, but not both.
An SA is uniquely identified by three parameters:
Security parameter index (SPI): A bit string assigned to this SA and having local significance
only. The SPI is carried in an ESP header to enable the receiving system to select the SA under
which a received packet will be processed.
IP destination address: This is the address of the destination endpoint of the SA, which may be
an end-user system or a network system such as a firewall or router.
Protocol identifier: This field in the outer IP header indicates whether the association is an AH
or ESP security association.
Hence, in any IP packet, the security association is uniquely identified by the Destination
Address in the IPv4 or IPv6 header and the SPI in the enclosed extension header (AH or ESP).
An IPsec implementation includes a security association database that defines the parameters
associated with each SA. An SA is characterized by the following parameters:
Sequence number counter: A 32-bit value used to generate the Sequence Number field in AH
or ESP headers.
Sequence counter overflow: A flag indicating whether overflow of the sequence number
counter should generate an auditable event and prevent further transmission of packets on this
SA.
Antireplay window: Used to determine whether an inbound AH or ESP packet is a replay, by
defining a sliding window within which the sequence number must fall.
AH information: Authentication algorithm, keys, key lifetimes, and related parameters being
used with AH.
ESP information: Encryption and authentication algorithm, keys, initialization values, key
lifetimes, and related parameters being used with ESP.
Lifetime of this security association: A time interval or byte count after which an SA must be
replaced with a new SA (and new SPI) or terminated, plus an indication of which of these
actions should occur.
IPsec protocol mode: Tunnel, transport, or wildcard (required for all implementations).
Path MTU: Any observed path maximum transmission unit (maximum size of a packet that
can be transmitted without fragmentation) and aging variables (required for all
implementations).
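As a sketch of how an implementation might organize this (names are hypothetical; a real IPsec stack tracks all of the parameters listed above), the identifying triple and the security association database map naturally onto a keyed lookup structure:

```python
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    spi: int                  # Security Parameter Index
    dst: str                  # IP destination address of the SA endpoint
    protocol: str             # "AH" or "ESP"
    sequence_number: int = 0  # counter for the Sequence Number field
    antireplay_window: int = 64

# The security association database, keyed by the identifying triple.
sad = {}

def add_sa(sa: SecurityAssociation) -> None:
    sad[(sa.dst, sa.spi, sa.protocol)] = sa

def lookup_sa(dst: str, spi: int, protocol: str):
    """Select the SA under which a received packet will be processed."""
    return sad.get((dst, spi, protocol))

add_sa(SecurityAssociation(spi=0x1234, dst="192.0.2.1", protocol="ESP"))
sa = lookup_sa("192.0.2.1", 0x1234, "ESP")
print(sa is not None, sa.antireplay_window)
```

Because an SA is one-way, a two-way exchange between two hosts would place two entries in this database, one per direction, each with its own SPI and keys.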
The key management mechanism that is used to distribute keys is coupled to the authentication
and privacy mechanisms only by way of the security parameters index. Hence, authentication
and privacy have been specified independent of any specific key management mechanism.
The ESP packet (Figure 3.4) contains the following fields:
Security Parameters Index (32 bits): Identifies a security association.
Sequence Number (32 bits): A monotonically increasing counter value, used to protect against
replay attacks.
Payload Data (variable): A transport-level segment (transport mode) or IP packet (tunnel mode)
that is protected by encryption.
Padding (0–255 bytes): May be required if the encryption algorithm requires the plaintext to
be a multiple of some number of octets.
Pad Length (8 bits): Indicates the number of pad bytes immediately preceding this field.
Next Header (8 bits): Identifies the type of data contained in the Payload Data field by
identifying the first header in that payload (e.g., an extension header in IPv6, or an upper-layer
protocol such as TCP).
Integrity Check Value (variable): A variable-length field (must be an integral number of 32-bit
words) that contains the integrity check value computed over the ESP packet minus the
Authentication Data field.
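The Padding and Pad Length rules can be sketched arithmetically: the payload, the padding, and the two-byte (Pad Length, Next Header) trailer together must fill whole cipher blocks. A minimal sketch:

```python
def esp_padding_length(payload_len: int, block_size: int) -> int:
    """Number of pad bytes so that payload + padding + the 2-byte
    (Pad Length, Next Header) trailer fills whole cipher blocks."""
    return (-(payload_len + 2)) % block_size

# With a 16-byte block cipher, a 14-byte payload needs no padding
# (14 + 0 + 2 = 16), while a 16-byte payload needs 14 pad bytes
# (16 + 14 + 2 = 32). The total is always block-aligned:
for n in (14, 16):
    pad = esp_padding_length(n, 16)
    print(n, pad, (n + pad + 2) % 16)
```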
Figure 3.4: IPsec ESP format
Transport Mode
Transport mode provides protection primarily for upper-layer protocols. That is, transport
mode protection extends to the payload of an IP packet. Examples include a TCP or UDP
segment, both of which operate directly above IP in a host protocol stack. Typically, transport
mode is used for end-to-end communication between two hosts (e.g., a client and a server, or
two workstations). When a host runs ESP over IPv4, the payload is the data that normally
follow the IP header. For IPv6, the payload is the data that normally follow both the IP header
and any IPv6 extension headers that are present, with the possible exception of the destination
options header, which may be included in the protection.
ESP in transport mode encrypts and optionally authenticates the IP payload but not the IP
header.
Tunnel Mode
Tunnel mode provides protection to the entire IP packet. To achieve this, after the ESP fields
are added to the IP packet, the entire packet plus security fields is treated as the payload of a
new outer IP packet with a new outer IP header. The entire original, inner, packet travels
through a tunnel from one point of an IP network to another; no routers along the way are able
to examine the inner IP header. Because the original packet is encapsulated, the new, larger
packet may have totally different source and destination addresses, adding to the security.
Tunnel mode is used when one or both ends of a security association are a security gateway,
such as a firewall or router that implements IPsec. With tunnel mode, a number of hosts on
networks behind firewalls may engage in secure communications without implementing IPsec.
The unprotected packets generated by such hosts are tunneled through external networks by
tunnel mode SAs set up by the IPsec software in the firewall or secure router at the boundary
of the local network.
Here is an example of how tunnel mode IPsec operates. Host A on a network generates an IP
packet with the destination address of host B on another network. This packet is routed from
the originating host to a firewall or secure router at the boundary of A's network. The firewall
filters all outgoing packets to determine the need for IPsec processing. If this packet from A to
B requires IPsec, the firewall performs IPsec processing and encapsulates the packet with an
outer IP header. The source IP address of this outer IP packet is this firewall, and the destination
address may be a firewall that forms the boundary to B's local network. This packet is now
routed to B's firewall, with intermediate routers examining only the outer IP header. At B's
firewall, the outer IP header is stripped off, and the inner packet is delivered to B.
ESP in tunnel mode encrypts and optionally authenticates the entire inner IP packet, including
the inner IP header.
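The encapsulation walkthrough above can be sketched with plain dictionaries standing in for packets (a toy model, not a real IPsec implementation; in practice the inner packet would be encrypted inside an ESP payload rather than nested in the clear):

```python
def tunnel_encapsulate(inner_packet: dict, gw_src: str, gw_dst: str) -> dict:
    """Wrap the entire inner packet (header included) as the protected
    payload of a new outer packet between two security gateways."""
    return {
        "src": gw_src,            # outer addresses belong to the gateways,
        "dst": gw_dst,            # not to the end hosts
        "protocol": "ESP",
        "payload": inner_packet,  # in reality: encrypted ESP payload
    }

inner = {"src": "10.0.0.5", "dst": "10.1.0.9", "protocol": "TCP", "payload": b"data"}
outer = tunnel_encapsulate(inner, gw_src="203.0.113.1", gw_dst="198.51.100.1")
# Intermediate routers see only the outer header, with gateway addresses:
print(outer["src"], outer["dst"])
```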
Chapter Four
4. Review of Shared Key Cryptography and
Hash Functions
A symmetric encryption scheme has five ingredients:
1. Plain Text: This is the original message or data that is fed into the algorithm as input.
2. Encryption Algorithm: This encryption algorithm performs various substitutions and
transformations on the plain text.
3. Secret Key: The key is another input to the algorithm. The substitutions and transformations
performed by the algorithm depend on the key.
4. Cipher Text: This is the scrambled (unreadable) message that is the output of the encryption
algorithm. The cipher text depends on the plaintext and the secret key. For a given plaintext, two
different keys produce two different cipher texts.
5. Decryption Algorithm: This is the reverse of the encryption algorithm. It takes the cipher text
and secret key as inputs and outputs the plain text.
Two main requirements must be met for secure use of conventional encryption:
A strong encryption algorithm is needed. At a minimum, an attacker who knows the algorithm
and has access to one or more cipher texts should be unable to decipher the cipher text or
figure out the key.
The secret key must be distributed to the sender and receiver in a secure way. If the key is
discovered and the algorithm is known, all communication using this key is readable.
The important point is that the security of conventional encryption depends on the secrecy of
the key, not the secrecy of the algorithm; that is, it is not necessary to keep the algorithm secret,
only the key. This feature made conventional encryption feasible for widespread use and
enabled manufacturers to develop low-cost chip implementations of data encryption
algorithms. With conventional algorithms, the principal security problem is maintaining the
secrecy of the key.
4.2 Cryptography
A cipher is a secret method of writing, as by code. Cryptography, in a very broad sense, is the
study of techniques related to aspects of information security. Hence cryptography is concerned
with the writing (ciphering or encoding) and deciphering (decoding) of messages in secret
code. Cryptographic systems are classified along three independent dimensions:
1. The type of operations used for transforming plaintext to ciphertext: substitution and/or
transposition.
2. The number of keys used: one key (symmetric or conventional encryption) or two keys
(asymmetric or public-key encryption).
3. The way in which plaintext is processed
A block cipher processes the input one block of elements at a time, producing an output block
for each input block. A stream cipher processes the input elements continuously, producing
output one element at a time as it goes along.
Cryptanalysis
The process of attempting to discover the plaintext or key is known as cryptanalysis. It is very
difficult when only the cipher text is available to the attacker as in some cases even the
encryption algorithm is not known. The most common attack under these circumstances is
brute-force approach of trying all the possible keys. This attack is made impractical when the
key size is considerably large. The table below gives an idea on types of attacks on encrypted
messages.
Cryptography can be defined as the conversion of data into a scrambled code that can be
deciphered and sent across a public or a private network.
A Cipher text-only attack is an attack with an attempt to decrypt cipher text when only the
cipher text itself is available.
A Known-plaintext attack is an attack in which an individual has plaintext samples and their
encrypted versions (cipher text), allowing him to use both to reveal further secret
information such as the key.
A Chosen- plaintext attack involves the cryptanalyst be able to define his own plaintext, feed
it into the cipher and analyze the resulting cipher text.
A Chosen-cipher text attack is one where the attacker has several plaintext-cipher text pairs
in which the cipher text was chosen by the attacker.
An encryption scheme is unconditionally secure if the cipher text generated by the scheme
does not contain enough information to determine uniquely the corresponding plain text, no
matter how much cipher text and time is available to the opponent. Example for this type is
One-time Pad.
An encryption scheme is computationally secure if the cipher text generated by the scheme
meets the following criteria:
The cost of breaking the cipher exceeds the value of the encrypted information.
The time required to break the cipher exceeds the useful lifetime of the information.
The average time required for exhaustive key search is given below:
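The missing table can be approximated with a short calculation. This is a rough sketch only: the rate of 10^9 decryptions per second is an assumption, not a measured figure, and on average half the key space must be searched.

```python
RATE = 10 ** 9                      # assumed: one decryption per nanosecond
SECONDS_PER_YEAR = 3600 * 24 * 365

def average_search_years(key_bits: int) -> float:
    # on average, half of the 2**key_bits keys must be tried
    return 2 ** (key_bits - 1) / RATE / SECONDS_PER_YEAR

for bits in (32, 56, 128, 168):
    print(f"{bits}-bit key: about {average_search_years(bits):.3g} years")
```

At this assumed rate a 56-bit DES key falls in roughly a year, while a 128-bit key would take on the order of 10^21 years, which is why key length dominates brute-force resistance.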
o Symmetric Key Cryptography – Examples
1. Data Encryption Standard (DES): The Data Encryption Standard was published in 1977 by
the US National Bureau of Standards. DES uses a 56 bit key and maps a 64 bit input block of
plaintext onto a 64 bit output block of cipher text. 56 bits is a rather small key for today’s
computing power.
2. Triple DES: Triple DES was the answer to many of the shortcomings of DES. Since it is based
on the DES algorithm, it is very easy to modify existing software to use Triple DES. It also has
the advantage of proven reliability and a longer key length that eliminates many of the shortcut
attacks that can be used to reduce the amount of time it takes to break DES.
3. Advanced Encryption Standard (AES) (RFC3602): Advanced Encryption Standard (AES)
is an encryption standard adopted by the U.S. government. The standard comprises three block
ciphers, AES-128, AES-192 and AES-256. Each AES cipher has a 128-bit block size, with key
sizes of 128, 192 and 256 bits, respectively. The AES ciphers have been analyzed extensively
and are now used worldwide, as was the case with its predecessor, the Data Encryption
Standard (DES).
Problems with Conventional Cryptography
1. Key Management: Symmetric-key systems are simpler and faster; their main drawback is that
the two parties must somehow exchange the key in a secure way and keep it secure after that.
Key Management caused nightmare for the parties using the symmetric key cryptography.
They were worried about how to get the keys safely and securely across to all users so that the
decryption of the message would be possible. This gave the chance for third parties to intercept
the keys in transit to decode the top-secret messages. Thus, if the key was compromised, the
entire coding system was compromised and a ―Secret‖ would no longer remain a ―Secret‖.
This is why the “Public Key Cryptography” came into existence.
Public-key cryptography was first discovered in 1973 by the British mathematician Clifford
Cocks of the Communications-Electronics Security Group (CESG) of Government
Communications Headquarters (GCHQ), but this remained a secret until 1997.
1. Digital Signature Standard (DSS): Digital Signature Standard (DSS) is the digital signature
algorithm (DSA) developed by the U.S. National Security Agency (NSA) to generate a digital
signature for the authentication of electronic documents. DSS was put forth by the National
Institute of Standards and Technology (NIST) in 1994, and has become the United States
government standard for authentication of electronic documents. DSS is specified in Federal
Information Processing Standard (FIPS) 186.
2. Algorithm – RSA: RSA (named for Rivest, Shamir and Adleman, who first publicly described
it in 1977) is an algorithm for public-key cryptography. It is the first algorithm known to be
suitable for signing as well as encryption, and one of the first great advances in public key
cryptography. RSA is widely used in electronic commerce protocols, and is believed to be
secure given sufficiently long keys and the use of up-to-date implementations.
RSA Cryptanalysis
o Rivest, Shamir, and Adleman placed a challenge in Martin Gardner's column in Scientific
American, inviting readers to crack the following cipher text:
C = 114,381,625,757,888,867,669,235,779,976,146,612,010,218,296,721,242,362,562,561,842,935,706,935,245,733,897,830,597,123,563,958,705,058,989,075,147,599,290,026,879,543,541
o This was solved on April 26, 1994, cracked by an international effort over the Internet:
1600 workstations, mainframes, and supercomputers attacked the number for eight months
before factoring it and recovering the private key. The encryption key was e = 9007, and the
first solver won the promised one hundred dollars.
o Of course, the RSA algorithm is safe, as it would be incredibly difficult to gather up such
international participation to commit malicious acts.
3. ElGamal
o ElGamal is a public key method that is used in both encryption and digital signing.
o The encryption algorithm is similar in nature to the Diffie-Hellman key agreement protocol.
o It is used in many applications and uses discrete logarithms.
o ElGamal encryption is used in the free GNU Privacy Guard software
Hash Functions
A cryptographic hash function is a hash function that takes an arbitrary block of data and returns
a fixed-size bit string, the cryptographic hash value, such that any (accidental or intentional)
change to the data will (with very high probability) change the hash value. The data to be
encoded are often called the message, and the hash value is sometimes called the message
digest or simply the digest.
o A hash function converts data of arbitrary length to a fixed length. This process is often referred
to as hashing the data. In general, the hash is much smaller than the input data; hence hash
functions are sometimes called compression functions.
o Since a hash is a smaller representation of a larger data, it is also referred to as a digest.
o Hash function with n bit output is referred to as an n-bit hash function. Popular hash functions
generate values between 160 and 512 bits.
Efficiency of Operation
o Generally, for any hash function h with input x, computation of h(x) is a fast operation.
Computationally, hash functions are much faster than symmetric encryption.
Properties of Hash Functions
o In order to be an effective cryptographic tool, the hash function is desired to possess the following
properties –
Pre-Image Resistance
This property means that it should be computationally hard to reverse a hash function. In other
words, if a hash function h produced a hash value z, then it should be a difficult process to find
any input value x that hashes to z.
This property protects against an attacker who only has a hash value and is trying to find the
input.
Second Pre-Image Resistance
This property means given an input and its hash, it should be hard to find a different input with
the same hash.
In other words, if a hash function h for an input x produces hash value h(x), then it should be
difficult to find any other input value y such that h(y) = h(x). This property of a hash function
protects against an attacker who has an input value and its hash, and wants to substitute a
different value as legitimate in place of the original input value.
Collision Resistance
This property means it should be hard to find two different inputs of any length that result in
the same hash. This property is also referred to as collision free hash function.
In other words, for a hash function h, it is hard to find any two different inputs x and y such
that h(x) = h(y). Since a hash function is a compressing function with fixed hash length, it is
impossible for a hash function not to have collisions. The collision-free property only
confirms that these collisions should be hard to find. This property makes it very difficult for
an attacker to find two input values with the same hash.
Also, if a hash function is collision-resistant then it is second pre-image resistant.
At the heart of a hashing is a mathematical function that operates on two fixed-size blocks of
data to create a hash code. This hash function forms the part of the hashing algorithm.
The size of each data block varies depending on the algorithm. Typically the block sizes are
from 128 bits to 512 bits. The following illustration demonstrates hash function.
Hashing algorithm involves rounds of above hash function like a block cipher. Each round
takes an input of a fixed size, typically a combination of the most recent message block and the
output of the last round.
This process is repeated for as many rounds as are required to hash the entire message.
Schematic of hashing algorithm is depicted in the following illustration
The hash value of the first message block becomes an input to the second hash operation, the
output of which alters the result of the third operation, and so on. This effect is known as the
avalanche effect of hashing.
The avalanche effect results in substantially different hash values for two messages that differ
by even a single bit of data. Understand the difference between the hash function and the
hashing algorithm correctly: the hash function generates a hash code by operating on two
blocks of fixed-length binary data, while the hashing algorithm is a process for using the hash
function, specifying how the message will be broken up and how the results from previous
message blocks are chained together.
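The avalanche effect is easy to observe with any real hash function. A minimal sketch, using Python's standard hashlib with SHA-256 standing in for a generic hash: flip a single bit of the input and count how many of the output bits change.

```python
import hashlib

m1 = b"pay more money"
m2 = bytes([m1[0] ^ 0x01]) + m1[1:]          # the same message with one bit flipped

h1 = hashlib.sha256(m1).digest()
h2 = hashlib.sha256(m2).digest()

# count differing bits between the two 256-bit digests
changed = sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
print(f"{changed} of 256 output bits changed")
```

A single flipped input bit typically changes close to half of the 256 output bits, exactly the behaviour the avalanche effect describes.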
A. Message Digest MD
MD5 was the most popular and widely used hash function for quite some years. The MD family
comprises the hash functions MD2, MD4, MD5 and MD6. MD5 was adopted as Internet Standard
RFC 1321. It is a 128-bit hash function. MD5 digests have been widely used in the software
world to provide assurance about the integrity of transferred files. For example, file servers often
provide a pre-computed MD5 checksum for their files, so that a user can compare the checksum
of the downloaded file against it.
In 2004, collisions were found in MD5. An analytical attack was reported to succeed in about
an hour using a computer cluster. This collision attack compromised MD5, and hence it is no
longer recommended for use.
B. Secure Hash Function SHA
The SHA family comprises four algorithms: SHA-0, SHA-1, SHA-2, and SHA-3. Though from
the same family, they are structurally different.
The original version, SHA-0, a 160-bit hash function, was published by the National Institute
of Standards and Technology (NIST) in 1993. It had a few weaknesses and did not become very
popular.
Later, in 1995, SHA-1 was designed to correct the alleged weaknesses of SHA-0. SHA-1 is the
most widely used of the existing SHA hash functions. It is employed in several widely used
applications and protocols, including Secure Socket Layer (SSL) security.
In 2005, a method was found for uncovering collisions for SHA-1 within a practical time frame,
making the long-term employability of SHA-1 doubtful. The SHA-2 family has four further
variants, SHA-224, SHA-256, SHA-384, and SHA-512, named for the number of bits in their
hash value. No successful attacks have yet been reported on the SHA-2 hash functions. Though
SHA-2 is a strong hash function and significantly different from SHA-1, its basic design still
follows that of SHA-1. Hence, NIST called for new competitive hash function designs.
In October 2012, NIST chose the Keccak algorithm as the new SHA-3 standard. Keccak
offers many benefits, such as efficient performance and good resistance to attacks.
C. RIPEMD
RIPEMD is an acronym for RACE Integrity Primitives Evaluation Message Digest.
This set of hash functions was designed by the open research community and is generally known
as a family of European hash functions.
The set includes RIPEMD, RIPEMD-128, and RIPEMD-160. There also exist 256- and 320-bit
versions of this algorithm.
The original RIPEMD (128-bit) is based upon the design principles used in MD4 and was found
to provide questionable security. RIPEMD-128 came as a quick-fix replacement to overcome the
vulnerabilities of the original RIPEMD.
RIPEMD-160 is an improved version and the most widely used version in the family. The 256-
and 320-bit versions reduce the chance of accidental collision, but do not have higher levels of
security as compared to RIPEMD-128 and RIPEMD-160 respectively.
D. Whirlpool
Whirlpool is a 512-bit hash function derived from a considerably modified version of the
Advanced Encryption Standard (AES); one of its designers is Vincent Rijmen, a co-creator
of AES.
Applications of Hash Functions
1. Password Storage
o Hash functions provide protection to password storage. Instead of storing passwords in the clear,
most logon processes store the hash values of passwords in a file. The password file
consists of a table of pairs of the form (user id, h(P)). The process of logon is depicted
in the following illustration.
o An intruder can only see the hashes of passwords, even if he accesses the password file. He can
neither log on using a hash nor derive the password from its hash value, since the hash function
possesses the property of pre-image resistance.
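A minimal sketch of this logon scheme in Python. The function names are illustrative; real systems store a salted, iterated hash (PBKDF2, shown here via the standard library) rather than a bare h(P), but the principle of comparing hashes instead of passwords is the same.

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    # keep a random salt alongside the hash; the password itself is never stored
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_logon(password: str, salt: bytes, stored: bytes) -> bool:
    # recompute the hash of the attempted password and compare digests
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)   # constant-time comparison

salt, digest = store_password("s3cret!")
print(check_logon("s3cret!", salt, digest))   # True
print(check_logon("guess", salt, digest))     # False
```

Even with the stored (salt, digest) pair in hand, pre-image resistance of the underlying hash prevents an attacker from recovering the password directly.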
2. Data Integrity Check
o Data integrity checking is the most common application of hash functions. It is used to generate
checksums on data files. This application provides assurance to the user about the correctness
of the data. The process is depicted in the following illustration.
The integrity check helps the user to detect any changes made to the original file. It does not,
however, provide any assurance about originality: the attacker, instead of modifying the file data,
can replace the entire file, compute an altogether new hash, and send it to the receiver. This
integrity check application is therefore useful only if the user is sure about the originality of the file.
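The checksum comparison described above can be sketched as follows. MD5 is used here only because the file-server example in the text does; for new designs a SHA-2 function is the safer assumption.

```python
import hashlib

def checksum(data: bytes) -> str:
    # hex digest of the file contents, as a server would publish alongside a download
    return hashlib.md5(data).hexdigest()

original = b"contents of release-1.0.tar.gz"
published = checksum(original)          # pre-computed by the file server

# the receiver recomputes the checksum over the downloaded bytes
print(checksum(original) == published)            # True: unmodified file passes
print(checksum(b"tampered bytes") == published)   # False: any change is detected
```

Note the limitation stated above: if an attacker can replace both the file and the published checksum, the check passes; the scheme only detects accidental or in-transit modification.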
Classical Encryption Techniques
There are two basic building blocks of all encryption techniques: substitution and
transposition.
Substitution Encryption Techniques
These techniques involve substituting or replacing the contents of the plaintext by other
letters, numbers or symbols. Different kinds of ciphers are used in substitution technique.
Caesar Ciphers or Shift Cipher:
The earliest known use of a substitution cipher and the simplest was by Julius Caesar. The
Caesar cipher involves replacing each letter of the alphabet with the letter standing 3 places
further down the alphabet.
Let us consider the alphabet
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Choose a shift k and shift all letters by k. For example, if k = 5,
A becomes F, B becomes G, C becomes H, and so on.
Mathematically, give each letter a number:
a=0, b=1, c=2, d=3, e=4, f=5, g=6, h=7, i=8, j=9, k=10, l=11, m=12, n=13, o=14, p=15,
q=16, r=17, s=18, t=19, u=20, v=21, w=22, x=23, y=24, z=25
If shift = 3, then, e.g.,
plain text: pay more money
cipher text: SDB PRUH PRQHB
Note that the alphabet is wrapped around, so that the letter following 'z' is 'a'.
For each plaintext letter p, substitute the cipher text letter c such that
C = E(p) = (p+3) mod 26
A shift may be any amount, so that general Caesar algorithm is
C = E (p) = (p+k) mod 26
Where k takes on a value in the range 1 to 25.The decryption algorithm is simply
P = D(C) = (C-k) mod 26
If it is known that a given cipher text is a Caesar cipher, then a brute force cryptanalysis is
easily performed.
With a Caesar cipher, there are only 26 possible keys, of which only 25 are of any use, since
mapping A to A etc doesn’t really obscure the message!
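The formulas C = (p + k) mod 26 and P = (C - k) mod 26 translate directly into code. A minimal sketch (function names are illustrative) that also shows why the 25-key brute-force attack is trivial:

```python
def caesar_encrypt(plaintext: str, k: int) -> str:
    # C = (p + k) mod 26, working on letters only, output in upper case
    return "".join(chr((ord(c) - ord("a") + k) % 26 + ord("A"))
                   for c in plaintext.lower() if c.isalpha())

def caesar_decrypt(ciphertext: str, k: int) -> str:
    # P = (C - k) mod 26
    return "".join(chr((ord(c) - ord("A") - k) % 26 + ord("a"))
                   for c in ciphertext if c.isalpha())

print(caesar_encrypt("pay more money", 3))    # SDBPRUHPRQHB

# brute force: with only 25 usable keys, simply try them all and
# pick the shift that yields readable English
for k in range(1, 26):
    print(k, caesar_decrypt("SDBPRUHPRQHB", k))
```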
Monoalphabetic Ciphers:
Here, plaintext characters are substituted by a different alphabet stream of characters shifted
to the right or left by n positions. When compared to the Caesar cipher, these
monoalphabetic ciphers are more secure, as each letter of the cipher text can be any
permutation of the 26 alphabetic characters, leading to 26! (greater than 4 × 10^26) possible
keys. But they are still vulnerable to cryptanalysis: when a cryptanalyst is aware of the nature of
the plaintext, he can find the regularities of the language. To overcome these attacks, multiple
substitutions for a single letter are used. For example, a letter can be substituted by different
numerical cipher symbols such as 17, 54, 69, etc. Even this method is not completely secure,
as each letter in the plain text still affects one letter in the cipher text.
Or, using a common key which substitutes every letter of the plain text. For example, the key
ABCDEFGHIJKLMNOPQRSTUVWXYZ
QWERTYUIOPASDFGHJKLZXCVBNM
would encrypt the message "I think therefore I am" into OZIOFAZITKTYGKTOQD.
But any attacker could simply break the cipher using frequency analysis: observing the
number of times each letter occurs in the cipher text and then consulting an English letter
frequency table. So, the substitution cipher is completely ruined by these attacks.
Monoalphabetic ciphers are easy to break as they reflect the frequency of the original
alphabet. A countermeasure is to provide substitutes, known as homophones for a single
letter.
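Frequency analysis itself needs nothing more than a letter count. A minimal sketch (the function name is illustrative):

```python
from collections import Counter

def letter_frequencies(ciphertext: str) -> list[tuple[str, float]]:
    # relative frequency of each letter, most common first
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return [(ch, n / len(letters)) for ch, n in Counter(letters).most_common()]

# the most frequent ciphertext letters are candidate substitutes for the
# most frequent English letters (E, T, A, O, ...)
sample = "OZIOFAZITKTYGKTOQD"
print(letter_frequencies(sample)[:3])
```

On a text this short the statistics are noisy; with a few hundred characters of ciphertext the ranking reliably mirrors the English frequency table, which is what breaks monoalphabetic substitution.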
Playfair Ciphers:
The Playfair Cipher is a manual symmetric encryption cipher invented in 1854 by Charles
Wheatstone; however its name and popularity came from the endorsement of Lord Playfair.
It is the best known multiple-letter encryption cipher, which treats digrams in the plaintext as
single units and translates these units into cipher text digrams. The Playfair cipher is a
digram substitution cipher offering a relatively weak method of encryption. It was used for
tactical purposes by British forces in the Second Boer War and in World War I, and for the
same purpose by the Australians and Germans during World War II. This was because
Playfair is reasonably fast to use and requires no special equipment. A typical scenario for
Playfair use would be to protect important but non-critical secrets during actual combat: by
the time the enemy cryptanalysts could break the message, the information was useless to
them.
It is based around a 5×5 matrix, a copy of which is held by both communicating parties, into
which 25 of the 26 letters of the alphabet (normally either j and i are represented by the same
letter or x is ignored) are placed in a random fashion.
For example, suppose the plain text is "shi sherry loves heath ledger" and the agreed key is
sherry. The message is prepared as follows:
in pairs,
without punctuation,
all Js replaced with Is, and
an X inserted between the letters of a double-letter pair (e.g. literally → LI TE RA LX LY).
The naive pairing SH IS HE RR YL OV ES HE AT HL ED GE R therefore becomes
SH IS HE RX RY LO VE SH EA TH LE DG ER
The alphabet square is prepared as a 5×5 matrix with no repeated letters and no J (I and J
share a cell): the key is written first, followed by the remaining letters of the alphabet.
For the generation of cipher text, there are three rules to be followed by each pair of letters.
letters appear on the same row: replace them with the letters to their immediate right
respectively.
letters appear on the same column: replace them with the letters immediately below
respectively
not on the same row or column: replace them with the letters on the same row respectively
but at the other pair of corners of the rectangle defined by the original pair.
Based on the above three rules, the cipher text obtained for the given plain text is
HE GH ER DR YS IQ WH HE SC OY KR AL RY
Another, simpler example: here the keyword is playfair and the plaintext is "hello there".
hello there becomes -> he lx lo th er ex
Applying the rules again for each pair:
If they are in the same row, replace each with the letter to its right (mod 5): he -> KG
If they are in the same column, replace each with the letter below it (mod 5): lo -> RV
Otherwise, replace each with the letter in its own row but in the other letter's column: lx -> YV
So the cipher text for the given plain text is KG YV RV QM GI KU
To decrypt the message, just reverse the process: shift up and left instead of down and right,
drop the extra Xs, and restore any Is that should be Js. The message will be back in its
original readable form.
No longer used by military forces because of the advent of digital encryption devices. Playfair
is now regarded as insecure for any purpose because modern hand-held computers could easily
break the cipher within seconds.
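The whole procedure fits in a short program. This sketch (helper names are illustrative) builds the square, prepares the digrams, and applies the three rules; it reproduces the playfair / "hello there" example above.

```python
def build_square(key: str) -> list[list[str]]:
    # 5x5 matrix: key first (duplicates dropped), then the rest, J merged with I
    seen = []
    for ch in (key.upper() + "ABCDEFGHIKLMNOPQRSTUVWXYZ").replace("J", "I"):
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return [seen[r * 5:r * 5 + 5] for r in range(5)]

def make_digrams(text: str) -> list[str]:
    # letters only, J -> I; X splits double letters and pads an odd tail
    t = "".join(c for c in text.upper() if c.isalpha()).replace("J", "I")
    pairs, i = [], 0
    while i < len(t):
        if i + 1 < len(t) and t[i + 1] != t[i]:
            pairs.append(t[i:i + 2]); i += 2
        else:
            pairs.append(t[i] + "X"); i += 1
    return pairs

def playfair_encrypt(plaintext: str, key: str) -> str:
    sq = build_square(key)
    pos = {sq[r][c]: (r, c) for r in range(5) for c in range(5)}
    out = []
    for a, b in make_digrams(plaintext):
        ra, ca = pos[a]; rb, cb = pos[b]
        if ra == rb:                       # same row: letters to the right
            out.append(sq[ra][(ca + 1) % 5] + sq[rb][(cb + 1) % 5])
        elif ca == cb:                     # same column: letters below
            out.append(sq[(ra + 1) % 5][ca] + sq[(rb + 1) % 5][cb])
        else:                              # rectangle: swap column indices
            out.append(sq[ra][cb] + sq[rb][ca])
    return "".join(out)

print(playfair_encrypt("hello there", "playfair"))   # KGYVRVQMGIKU
```

Decryption is the mirror image: shift left/up instead of right/down and keep the rectangle rule unchanged.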
Public-key (asymmetric) encryption uses two separate keys, in contrast to symmetric
encryption, which uses only one key. Public-key schemes are neither more nor less secure
than private-key schemes (security depends on the key size for both). Public-key cryptography
complements rather than replaces symmetric cryptography.
Both also have issues with key distribution, requiring the use of some suitable protocol.
The concept of public-key cryptography evolved from an attempt to attack two of the most
difficult problems associated with symmetric encryption: key distribution and digital
signatures. A public-key scheme uses two keys:
a public key, which may be known by anybody, and can be used to encrypt messages and
verify signatures; and
a private key, known only to the recipient, used to decrypt messages and sign (create)
signatures.
The scheme is asymmetric because those who encrypt messages or verify signatures cannot
decrypt messages or create signatures.
Public-Key algorithms rely on one key for encryption and a different but related key for
decryption. These algorithms have the following important characteristics:
It is computationally infeasible to find decryption key knowing only algorithm & encryption
key.
It is computationally easy to en/decrypt messages when the relevant (en/decrypt) key is known.
Either of the two related keys can be used for encryption, with the other used for decryption
(for some algorithms like RSA).
The following figure illustrates public-key encryption process and shows that a public-key
encryption scheme has six ingredients: plaintext, encryption algorithm, public & private keys,
cipher text & decryption algorithm.
Figure 4.2: public-key encryption process
The essential steps involved in a public-key encryption scheme are given below:
1. Each user generates a pair of keys to be used for encryption and decryption.
2. Each user places one of the two keys in a public register and the other key is kept private.
3. If B wants to send a confidential message to A, B encrypts the message using A‘s public key.
4. When A receives the message, she decrypts it using her private key. Nobody else can decrypt
the message because that can only be done using A‘s private key (Deducing a private key
should be infeasible).
5. If a user wishes to change his keys –generate another pair of keys and publish the public one:
no interaction with other users is needed.
The first attack on public-key cryptography is the attack on authenticity. An attacker may
impersonate user B: he sends a message E(KUA,X) and claims in the message to be B; A has
no guarantee this is so. To overcome this, B will encrypt the message using his private key:
Y=E(KRB,X). The receiver decrypts using B's public key KUB. This shows the authenticity of the
sender because (supposedly) he is the only one who knows the private key. The entire encrypted
message serves as a digital signature. This scheme is depicted in the following figure:
But, a drawback still exists. Anybody can decrypt the message using B‘s public key. So,
secrecy or confidentiality is being compromised.
One can provide both authentication and confidentiality using the public-key scheme twice:
A encrypts X with her private key: Y=E(KRA,X)
A encrypts Y with B's public key: Z=E(KUB,Y)
B will decrypt Z (and he is the only one capable of doing it): Y=D(KRB, Z)
B can now get the plaintext and ensure that it comes from A (he knows the public key of A):
decrypt Y using A's public key: X=D(KUA, Y).
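With toy RSA key pairs the double application above can be traced end to end. The numbers here are illustrative only, and A's modulus is chosen smaller than B's so that the signed value Y fits as a message for B:

```python
# A's key pair (p=17, q=11): KUA = (7, 187),  KRA = (23, 187)
# B's key pair (p=17, q=19): KUB = (5, 323),  KRB = (173, 323)
eA, dA, nA = 7, 23, 187
eB, dB, nB = 5, 173, 323

X = 42                      # plaintext
Y = pow(X, dA, nA)          # A signs with her private key:    Y = E(KRA, X)
Z = pow(Y, eB, nB)          # then encrypts with B's public:   Z = E(KUB, Y)

Y2 = pow(Z, dB, nB)         # B decrypts with his private key: Y = D(KRB, Z)
X2 = pow(Y2, eA, nA)        # and verifies with A's public:    X = D(KUA, Y)
print(X2)                   # recovers 42: confidentiality and authenticity together
```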
1. Computationally easy for a party B to generate a pair (public key KUb, private key KRb).
2. Easy for sender A to generate ciphertext: C = EKUb(M).
3. Easy for the receiver B to decrypt the ciphertext using the private key: M = DKRb(C) =
DKRb[EKUb(M)].
4. Computationally infeasible to determine private key (KRb) knowing public key (KUb).
5. Computationally infeasible to recover message M, knowing KUb and cipher text C
6. Either of the two keys can be used for encryption, with the other used for decryption:
Easy is defined to mean a problem that can be solved in polynomial time as a function of input
length. A problem is infeasible if the effort to solve it grows faster than polynomial time as a
function of input size. Public-key cryptosystems usually rely on difficult math functions rather
than the S-P networks of classical cryptosystems. A one-way function is one that is easy to
calculate in one direction and infeasible to calculate in the other direction (i.e., the inverse is
infeasible to compute). A trap-door function is a difficult function that becomes easy if some
extra information is known. Our aim is to find a trap-door one-way function, which is easy to
calculate in one direction and infeasible to calculate in the other direction unless certain
additional information is known.
As with private-key schemes, a brute-force exhaustive search attack is always theoretically
possible, but the keys used are too large (>512 bits). Security relies on a large enough
difference in difficulty between the easy (en/decrypt) and hard (cryptanalyse) problems. More
generally, the hard problem is known; it is just made too hard to do in practice. Public-key
schemes require the use of very large numbers, and hence are slow compared to private-key
schemes.
RSA algorithm:
RSA is the best known, and by far the most widely used general public key encryption
algorithm, and was first published by Rivest, Shamir & Adleman of MIT in 1978 [RIVE78].
Since that time RSA has reigned supreme as the most widely accepted and implemented
general-purpose approach to public-key encryption. The RSA scheme is a block cipher in
which the plaintext and the ciphertext are integers between 0 and n-1 for some fixed n and
typical size for n is 1024 bits (or 309 decimal digits). It is based on modular exponentiation
using large integers (e.g. 1024 bits). Its security is due to the cost of factoring large numbers.
RSA involves a public key and a private key, where the public key is known to all and is used
to encrypt data or messages. The data or message which has been encrypted using a public key
can only be decrypted using the corresponding private key. Each user generates a key pair, i.e.
a public and a private key, using the following steps:
Each user selects two large primes at random – p, q
Compute their system modulus n=p.q
Calculate ø(n), where ø(n)=(p-1)(q-1)
Selecting at random the encryption key e, where 1<e<ø(n),and gcd(e,ø(n))=1
Solve following equation to find decryption key d: e.d=1 mod ø(n) and 0≤d≤n
Publish their public encryption key: KU={e,n}
Keep secret private decryption key: KR={d,n}
Both the sender and receiver must know the values of n and e, and only the receiver knows the
value of d. Encryption and Decryption are done using the following equations.
1. It is possible to find values of e, d, n such that M^(ed) mod n = M for all M < n.
2. It is relatively easy to calculate M^e mod n and C^d mod n for all values of M < n.
3. It is computationally infeasible to determine d given only e and n.
Fermat's little theorem: if p is prime and a is a positive integer not divisible by p, then
a^(p-1) ≡ 1 (mod p).
Corollary: for any positive integer a and prime p, a^p ≡ a (mod p).
Fermat's theorem, as useful as it will turn out to be, does not provide us with the integers d, e
we are looking for; Euler's theorem (a refinement of Fermat's) does. Euler's function associates
to any positive integer n a number φ(n): the number of positive integers smaller than n and
relatively prime to n. For example, φ(37) = 36, i.e. φ(p) = p-1 for any prime p. For any two
primes p, q, φ(pq) = (p-1)(q-1).
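Both theorems are easy to spot-check numerically; a brute-force sketch of φ is enough at this scale (a real implementation would use the factorisation of n instead of counting):

```python
from math import gcd

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p and p not dividing a
for p in (5, 13, 37):
    assert all(pow(a, p - 1, p) == 1 for a in range(1, p))

def phi(n: int) -> int:
    # Euler's totient by direct count (fine for small n)
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

print(phi(37))          # 36, i.e. phi(p) = p - 1 for a prime
print(phi(17 * 11))     # 160 = (17 - 1) * (11 - 1)
```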
An example of RSA can be given as follows. Select primes: p=17 and q=11.
Compute n = pq = 17×11 = 187
Compute ø(n) = (p-1)(q-1) = 16×10 = 160
Select e: gcd(e,160) = 1; choose e = 7
Determine d: de ≡ 1 mod 160 and d < 160. The value is d = 23, since 23×7 = 161 = 1×160 + 1
Publish public key KU = {7,187}
Keep secret private key KR = {23,187}
Now, given message M = 88 (note 88 < 187):
encryption: C = 88^7 mod 187 = 11
decryption: M = 11^23 mod 187 = 88
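The worked example can be checked directly with Python's built-in modular exponentiation; the modular-inverse form of `pow` used for d requires Python 3.8 or later.

```python
p, q = 17, 11
n = p * q                   # 187
phi_n = (p - 1) * (q - 1)   # 160

e = 7                       # valid since gcd(7, 160) == 1
d = pow(e, -1, phi_n)       # modular inverse: 23, since 7 * 23 = 161 ≡ 1 (mod 160)

M = 88
C = pow(M, e, n)            # encryption: 88^7 mod 187
print(C)                    # 11
print(pow(C, d, n))         # decryption: 11^23 mod 187 recovers 88
```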
A second example: take p = 11 and q = 13, so n = 143 and ø(n) = 120. Choose e = 11 and
solve e.d = 1 mod ø(n), i.e. 11d mod 120 = 1; since (11×11) mod 120 = 1, d = 11.
Public key: {11,143} and private key: {11,143}
C = M^e mod n, so for M = 7 the ciphertext is C = 7^11 mod 143 = 106; M = C^d mod n, so
the plaintext is 106^11 mod 143 = 7.
Key Management
One of the major roles of public-key encryption has been to address the problem of key
distribution. Two distinct aspects to the use of public-key encryption are present: the
distribution of public keys, and the use of public-key encryption to distribute secret keys.
The most general schemes for distribution of public keys are given below.
Public Announcement of Public keys
Here any participant can send his or her public key to any other participant or broadcast the
key to the community at large. For example, many PGP users have adopted the practice of
appending their public key to messages that they send to public forums.
It has a major weakness. Anyone can forge such a public announcement. That is, some user
could pretend to be user A and send a public key to another participant.
Publicly Available Directory
1. The authority maintains a directory with a {name, public key} entry for each participant.
2. Each participant registers a public key with the directory authority. Registration would have to
be in person or by some form of secure authenticated communication
Figure 4.7: public key publication
3. A participant may replace the existing key with a new one at any time, either because of the
desire to replace a public key that has already been used for a large amount of data, or because
the corresponding private key has been compromised in some way.
4. Periodically, the authority publishes the entire directory or updates to the directory.
5. Participants could also access the directory electronically. For this purpose, secure,
authenticated communication from the authority to the participant is mandatory.
This scheme still has some vulnerability. If an adversary succeeds in obtaining or
computing the private key of the directory authority, the adversary could authoritatively pass
out counterfeit public keys and subsequently impersonate any participant and eavesdrop on
messages sent to any participant. Alternatively, the adversary may tamper with the records
kept by the authority.
Public-Key Authority
Stronger security for public-key distribution can be achieved if a central authority maintains
a dynamic directory of public keys of all users. The public authority has its own (private key,
public key) pair that it uses to communicate with users. Each participant reliably knows a
public key for the authority, with only the authority knowing the corresponding private key.
For example, consider that Alice and Bob wish to communicate with each other and the
following steps take place and are also shown in the figure below:
1. Alice sends a time stamped message to the central authority with a request for Bob‘s public
key (the time stamp is to mark the moment of the request)
2. The authority sends back a message encrypted with its private key (for authentication); the
message contains Bob's public key and the original message of Alice. This way Alice knows
this is not a reply to an old request;
3. Alice starts the communication to Bob by sending him an encrypted message containing her
identity IDA and a nonce N1 (to identify uniquely this transaction)
4. Bob requests Alice‘s public key in the same way (step 1)
5. Bob acquires Alice‘s public key in the same way as Alice did. (Step-2)
6. Bob replies to Alice by sending an encrypted message with N1 plus a new generated nonce N2
(to identify uniquely the transaction)
108 | P a g e
7. Alice replies once more encrypting Bob‘s nonce N2 to assure bob that its correspondent is
Alice
Thus, a total of seven messages are required. However, the initial four messages need be used
only infrequently because both A and B can save the other’s public key for future use, a
technique known as caching. Periodically, a user should request fresh copies of the public keys
of its correspondents to ensure currency.
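The exchange above can be sketched as a toy simulation. This is a minimal sketch, not a real protocol implementation: the authority's signature with its private key is modeled by an HMAC over a secret that only the authority would hold, and all names, keys, and message formats are illustrative.

```python
import hashlib, hmac, time

# Stand-in for the authority's private key (hypothetical value).
AUTH_SECRET = b"authority-private-key"

# The authority's directory of {name: public key} entries (illustrative keys).
DIRECTORY = {"Alice": "PU_alice", "Bob": "PU_bob"}

def authority_reply(request_name, timestamp):
    """Step 2: return the requested key plus the original request, 'signed'
    so the requester can verify it came from the authority and is fresh."""
    payload = f"{request_name}|{timestamp}|{DIRECTORY[request_name]}".encode()
    tag = hmac.new(AUTH_SECRET, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_reply(payload, tag):
    """Each participant knows the authority's public key; here, verification
    is modeled as recomputing the tag."""
    expected = hmac.new(AUTH_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Step 1: Alice's time-stamped request for Bob's public key.
t = time.time()
payload, tag = authority_reply("Bob", t)

# Alice verifies authenticity and that the reply echoes her request.
assert verify_reply(payload, tag)
name, ts, pu_bob = payload.decode().split("|")
assert name == "Bob" and float(ts) == t
# Alice may now cache pu_bob for future use.
```

A tampered reply fails verification, which is what protects Alice from a forged or replayed key.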
Public-Key Certificates
The above technique looks attractive, but still has some drawbacks. For any
communication between any two users, the central authority must be consulted by both users
to get the newest public keys; i.e., the central authority must be online 24 hours a day. If the
central authority goes offline, all secure communication comes to a halt. This clearly creates
an undesirable bottleneck.
A further improvement is to use certificates, which can be used to exchange keys without
contacting a public-key authority, in a way that is as reliable as if the keys were obtained
directly from a public-key authority. A certificate binds an identity to a public key, with all
contents signed by a trusted Public-Key or Certificate Authority (CA). A user can present his
or her public key to the authority in a secure manner and obtain a certificate. The user can
then publish the certificate. Anyone needing this user's public key can obtain the certificate
and verify that it is valid by way of the attached trusted signature. A participant can also
convey its key information to another by transmitting its certificate. Other participants can
verify that the certificate was created by the authority.
The requirements on this scheme are as follows:
1. Any participant can read a certificate to determine the name and public key of the certificate's
owner.
2. Any participant can verify that the certificate originated from the certificate authority and is
not counterfeit.
3. Only the certificate authority can create and update certificates.
4. Any participant can verify the currency of the certificate.
Figure 4.9: Certificate authority
A's certificate has the form:
CA = E(PRauth, [T||IDA||PUa])
where PRauth is the private key used by the authority and T is a timestamp. A may then pass
this certificate on to any other participant, who reads and verifies the certificate as follows:
D(PUauth, CA) = D(PUauth, E(PRauth, [T||IDA||PUa])) = (T||IDA||PUa)
The recipient uses the authority’s public key, PUauth to decrypt the certificate. Because the
certificate is readable only using the authority’s public key, this verifies that the certificate
came from the certificate authority. The elements IDA and PUa provide the recipient with the
name and public key of the certificate’s holder. The timestamp T validates the currency of the
certificate. The timestamp counters the following scenario. A’s private key is learned by an
adversary. A generates a new private/public key pair and applies to the certificate authority
for a new certificate.
Meanwhile, the adversary replays the old certificate to B. If B then encrypts messages using
the compromised old public key, the adversary can read those messages. In this context, the
compromise of a private key is comparable to the loss of a credit card. The owner cancels the
credit card number but is at risk until all possible communicants are aware that the old credit
card is obsolete. Thus, the timestamp serves as something like an expiration date. If a
certificate is sufficiently old, it is assumed to be expired.
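The certificate mechanism above can be sketched as follows. This is a toy model under stated assumptions: the authority's signature with PRauth is simulated by an HMAC over a stand-in secret, and the 30-day validity window, identifiers, and keys are all invented for illustration.

```python
import hashlib, hmac

# Stand-in for PRauth, the authority's private signing key (hypothetical).
AUTH_SECRET = b"PRauth-stand-in"
MAX_AGE = 30 * 24 * 3600          # assumed validity window: 30 days

def issue_certificate(id_a, pu_a, t):
    """Model of CA = E(PRauth, [T || IDA || PUa]): the body carries the
    timestamp, identity, and public key; the tag plays the signature's role."""
    body = f"{t}|{id_a}|{pu_a}"
    tag = hmac.new(AUTH_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body, tag               # the published certificate

def verify_certificate(cert, now):
    """Check the authority's 'signature' and the currency of the timestamp;
    return (IDA, PUa) if both pass, else None."""
    body, tag = cert
    good = hmac.compare_digest(
        tag, hmac.new(AUTH_SECRET, body.encode(), hashlib.sha256).hexdigest())
    t, id_a, pu_a = body.split("|")
    fresh = (now - float(t)) < MAX_AGE
    return (id_a, pu_a) if good and fresh else None

cert = issue_certificate("A", "PUa", t=1000.0)
assert verify_certificate(cert, now=2000.0) == ("A", "PUa")
# A sufficiently old certificate is treated as expired:
assert verify_certificate(cert, now=1000.0 + MAX_AGE + 1) is None
```

The expiry check is exactly the role the timestamp T plays in the scheme: a replayed old certificate for a compromised key stops verifying once it ages out.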
Simple Secret Key Distribution
A very simple scheme for distributing a secret session key with public-key encryption
proceeds as follows:
1. A generates a public/private key pair {PUa, PRa} and transmits a message to B consisting of
PUa and an identifier of A, IDA.
2. B generates a secret key, Ks, and transmits it to A, encrypted with A’s public key.
3. A computes D(PRa, E(PUa, Ks)) to recover the secret key. Because only A can decrypt the
message, only A and B will know the identity of Ks.
4. A discards PUa and PRa and B discards PUa.
In this case, if an adversary, E, has control of the intervening communication channel, then E
can compromise the communication in the following fashion without being detected:
1. A generates a public/private key pair {PUa, PRa} and transmits a message intended
for B consisting of PUa and an identifier of A, IDA.
2. E intercepts the message, creates its own public/private key pair {PUe, PRe} and transmits
PUe||IDA to B.
3. B generates a secret key, Ks, and transmits E(PUe, Ks).
4. E intercepts the message, and learns Ks by computing D(PRe, E(PUe, Ks)).
5. E transmits E(PUa, Ks) to A.
The result is that both A and B know Ks and are unaware that Ks has also been revealed to E.
A and B can now exchange messages using Ks. E no longer actively interferes with the
communications channel but simply eavesdrops. Knowing Ks, E can decrypt all messages, and
both A and B are unaware of the problem. Thus, this simple protocol is only useful in an
environment where the only threat is eavesdropping.
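The attack can be demonstrated concretely with textbook RSA and deliberately tiny primes. This is purely an illustration of the five steps above; key sizes like these, and unpadded RSA in general, must never be used in practice.

```python
# Textbook RSA, small enough to follow by hand.
def keygen(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)      # (public key, private key)

def crypt(key, m):                           # serves as both E() and D()
    exp, n = key
    return pow(m, exp, n)

PUa, PRa = keygen(61, 53, 17)                # A's key pair
PUe, PRe = keygen(89, 97, 5)                 # adversary E's key pair

Ks = 42                                      # secret key B wants to share with A

# Steps 1-2: A sends PUa || IDA; E intercepts and forwards PUe || IDA instead.
key_seen_by_B = PUe

# Step 3: B encrypts Ks with what it believes is A's public key.
c = crypt(key_seen_by_B, Ks)

# Step 4: E intercepts, recovers Ks, then re-encrypts it under the real PUa.
Ks_known_to_E = crypt(PRe, c)
c2 = crypt(PUa, Ks_known_to_E)

# Step 5: A decrypts normally and is unaware of the interception.
Ks_known_to_A = crypt(PRa, c2)

assert Ks_known_to_A == Ks == Ks_known_to_E  # E silently shares the secret
```

Because neither message is authenticated, nothing ties PUa to A's identity, which is exactly the gap that public-key authorities and certificates close.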
Secret Key Distribution with Confidentiality and Authentication
The following scheme, which assumes that A and B have already exchanged public keys,
provides protection against both active and passive attacks:
1. A uses B's public key to encrypt a message to B containing an identifier of A (IDA) and a
nonce (N1), which is used to identify this transaction uniquely.
2. B sends a message to A encrypted with PUa and containing A's nonce (N1) as well as a new
nonce generated by B (N2). Because only B could have decrypted message (1), the presence of
N1 in message (2) assures A that the correspondent is B.
3. A returns N2 encrypted using B’s public key, to assure B that its correspondent is A.
4. A selects a secret key Ks and sends M = E(PUb, E(PRa, Ks)) to B. Encryption of this message
with B’s public key ensures that only B can read it; encryption with
A’s private key ensures that only A could have sent it.
5. B computes D(PUa, D(PRb, M)) to recover the secret key.
The result is that this scheme ensures both confidentiality and authentication in the exchange
of a secret key.
4.5 Firewalls
Firewall design principles
Internet connectivity is no longer an option for most organizations. However, while internet
access provides benefits to the organization, it also enables the outside world to reach and
interact with local network assets. This creates a threat to the organization. While it is possible
to equip each workstation and server on the premises network with strong security features,
such as intrusion protection, this is not a practical approach. The increasingly accepted
alternative is the firewall.
The firewall is inserted between the premises network and the internet to establish a controlled
link and to erect an outer security wall or perimeter. The aim of this perimeter is to protect the
premises network from internet-based attacks and to provide a single choke point
where security and audit can be imposed. The firewall can be a single computer system or a set
of two or more systems that cooperate to perform the firewall function.
Firewall characteristics:
All traffic from inside to outside, and vice versa, must pass through the firewall. This is
achieved by physically blocking all access to the local network except via the firewall. Various
configurations are possible.
Only authorized traffic, as defined by the local security policy, will be allowed to pass. Various
types of firewalls are used, which implement various types of security policies.
The firewall itself is immune to penetration. This implies the use of a trusted system with a
secure operating system.
Four techniques that firewalls use to control access and enforce the site's security policy are as
follows:
Service control – determines the type of internet services that can be accessed, inbound or
outbound. The firewall may filter traffic on the basis of IP address and TCP port number; may
provide proxy software that receives and interprets each service request before passing it on;
or may host the server software itself, such as web or mail service.
Direction control – determines the direction in which particular service requests may be
initiated and allowed to flow through the firewall.
User control – controls access to a service according to which user is attempting to access it.
Behavior control – controls how particular services are used.
Capabilities of firewall
A firewall defines a single choke point that keeps unauthorized users out of the protected
network, prohibits potentially vulnerable services from entering or leaving the network, and
provides protection from various kinds of IP spoofing and routing attacks.
A firewall provides a location for monitoring security related events. Audits and alarms can be
implemented on the firewall system.
A firewall is a convenient platform for several internet functions that are not security related.
A firewall can serve as the platform for IPsec.
Limitations of firewall
The firewall cannot protect against attacks that bypass the firewall. Internal systems may have
dial-out capability to connect to an ISP. An internal LAN may support a modem pool that
provides dial-in capability for traveling employees and telecommuters.
The firewall does not protect against internal threats, such as a disgruntled employee or an
employee who unwittingly cooperates with an external attacker.
The firewall cannot protect against the transfer of virus-infected programs or files. Because of
the variety of operating systems and applications supported inside the perimeter, it would be
impractical and perhaps impossible for the firewall to scan all incoming files, e-mail, and
messages for viruses.
Types of firewalls
There are three common types of firewalls:
Packet filters
Application-level gateways
Circuit-level gateways
Packet filtering router
A packet filtering router applies a set of rules to each incoming IP packet and then forwards or
discards the packet. The router is typically configured to filter packets going in both directions.
Filtering rules are based on the information contained in a network packet:
Figure 4.12: packet filtering router
The packet filter is typically set up as a list of rules based on matches to fields in the IP or TCP
header. If there is a match to one of the rules, that rule is invoked to determine whether to
forward or discard the packet. If there is no match to any rule, then a default action is taken.
The default discard policy is the more conservative. Initially everything is blocked, and services
must be added on a case-by-case basis. This policy is more visible to users, who are most likely
to see the firewall as a hindrance. The default forward policy increases ease of use for end users
but provides reduced security.
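The rule-matching logic can be sketched in a few lines. The field names, rule values, and host labels below are invented for illustration; real packet filters match on richer header fields and on address ranges rather than labels.

```python
# Illustrative rule list: each rule is (conditions, action). A packet matches
# a rule if every listed field agrees; the first match decides the action.
RULES = [
    ({"dst": "mail-gw", "proto": "tcp", "dst_port": 25}, "forward"),  # inbound SMTP
    ({"src": "inside", "proto": "tcp", "dst_port": 80}, "forward"),   # outbound web
]

# Default-discard is the more conservative policy: anything not explicitly
# permitted is blocked.
DEFAULT = "discard"

def filter_packet(packet):
    for conditions, action in RULES:
        if all(packet.get(field) == value for field, value in conditions.items()):
            return action        # first matching rule is invoked
    return DEFAULT               # no rule matched: apply the default action

assert filter_packet({"src": "outside", "dst": "mail-gw",
                      "proto": "tcp", "dst_port": 25}) == "forward"
assert filter_packet({"src": "outside", "dst": "desktop",
                      "proto": "tcp", "dst_port": 23}) == "discard"
```

Switching `DEFAULT` to `"forward"` would model the default-forward policy, which is easier on users but weaker.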
Advantages of packet filter firewalls:
Simple
Transparent to users
Very fast
Weaknesses of packet filter firewalls:
Because packet filter firewalls do not examine upper-layer data, they cannot prevent attacks
that employ application-specific vulnerabilities or functions.
Because of the limited information available to the firewall, the logging functionality present
in packet filter firewalls is limited.
They do not support advanced user authentication schemes.
They are generally vulnerable to attacks such as network layer address spoofing.
Some of the attacks that can be made on packet filtering routers and the appropriate counter
measures are the following:
IP address spoofing – the intruders transmit packets from the outside with a source IP address
field containing an address of an internal host.
Countermeasure: to discard packet with an inside source address if the packet arrives on an
external interface.
Source routing attacks – the source station specifies the route that a packet should take as it
crosses the internet, in the hope of bypassing the firewall.
Countermeasure: to discard all packets that use this option.
Tiny fragment attacks – the intruder creates extremely small fragments and forces the TCP
header information into a separate packet fragment. The attacker hopes that only the first
fragment is examined and the remaining fragments are passed through.
Countermeasure: to discard all packets where the protocol type is TCP and the IP fragment
offset is equal to 1.
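The two countermeasures can be sketched as checks applied before the normal rule list. The internal prefix, interface names, and packet layout are illustrative; a real router would consult its interface configuration rather than a string prefix.

```python
INTERNAL_PREFIX = "192.168."          # assumed internal address range

def spoofed(packet):
    """IP spoofing countermeasure: a packet carrying an inside source
    address must never arrive on the external interface."""
    return (packet["iface"] == "external"
            and packet["src"].startswith(INTERNAL_PREFIX))

def tiny_fragment(packet):
    """Tiny fragment countermeasure: discard TCP packets whose IP fragment
    offset is equal to 1."""
    return packet["proto"] == "tcp" and packet.get("frag_offset") == 1

def precheck(packet):
    return "discard" if spoofed(packet) or tiny_fragment(packet) else "continue"

assert precheck({"iface": "external", "src": "192.168.1.5",
                 "proto": "udp"}) == "discard"
assert precheck({"iface": "external", "src": "203.0.113.9",
                 "proto": "tcp", "frag_offset": 1}) == "discard"
assert precheck({"iface": "external", "src": "203.0.113.9",
                 "proto": "tcp", "frag_offset": 0}) == "continue"
```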
Application level gateway
An application level gateway, also called a proxy server, acts as a relay of application level
traffic. The user contacts the gateway using a TCP/IP application, such as Telnet or FTP, and
the gateway asks the user for the name of the remote host to be accessed. When the user
responds and provides a valid user ID and authentication information, the gateway contacts the
application on the remote host and relays TCP segments containing the application data
between the two endpoints.
Application level gateways tend to be more secure than packet filters. It is easy to log and audit
all incoming traffic at the application level. A prime disadvantage is the additional processing
overhead on each connection.
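The authenticate-then-relay behavior can be sketched conceptually. Everything here is illustrative: the credential store, the `connect` callable, and the echoing remote host stand in for real sockets and application protocols.

```python
USERS = {"alice": "s3cret"}            # hypothetical credential store

def proxy_session(user, password, remote_host, segments, connect):
    """Authenticate the user, contact the remote application on the user's
    behalf, and relay the application data between the two endpoints."""
    if USERS.get(user) != password:
        return None                    # refuse unauthenticated users
    remote = connect(remote_host)      # the gateway opens its own connection
    return [remote(seg) for seg in segments]   # relay and collect replies

# A toy "remote host" that echoes each segment back in upper case.
def connect(host):
    return lambda seg: seg.upper()

assert proxy_session("alice", "s3cret", "ftp.example", ["get x"], connect) == ["GET X"]
assert proxy_session("alice", "wrong", "ftp.example", ["get x"], connect) is None
```

Because every segment passes through `proxy_session`, this is also the natural place to log and audit the traffic, which is why application gateways are easier to audit than packet filters.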
Figure 4.13: Application level gateway
Circuit level gateway
A typical use of circuit level gateways is a situation in which the system administrator trusts
the internal users. The gateway can be configured to support application level or proxy service
on inbound connections and circuit level functions for outbound connections.
Chapter Five
5. Application Security
5.1 Viruses and Other Wildlife
The word virus has become a generic term describing a number of different types of attacks on
computers using malicious code. Many users have been infected at least once, by one of the
famous attacks such as Melissa, ExploreZip, MiniZip, Code Red, NIMDA, BubbleBoy,
ILoveYou, NewLove, KillerResume, Kournikova, NakedWife, or Klez, each of which uses a
certain amount of the computer's resources to display or gather data about the user.
Malicious Logic
Computer viruses, worms, and Trojan horses are effective tools with which to attack computer
systems.
They assume an authorized user’s identity.
This makes most traditional access controls useless.
We study malicious logic, focusing on Trojan horses and computer viruses, and discuss
defenses.
Denial of service attack (DoS) – Attack that produces so many requests for system resources
on the computer under attack (such as calls to the operating system, or opening dialogs with
other machines and then hanging onto the line to tie it up) that normal functions on the targeted
computer are overwhelmed and cease.
Distributed DoS attack (DDoS) – DoS attack launched from many different computers,
usually zombies hijacked for this purpose.
Rootkit – Malware, usually a small suite of programs, that installs a new account or steals an
existing one, and then elevates the security level of that account to the highest degree (root for
Unix, Administrator for Windows) so that attackers can do their will without obstruction.
Sniffer – An attack, usually a Trojan horse, that monitors computer transactions or keystrokes.
A keystroke logger, for instance, detects sensitive information by monitoring the user's
keystrokes.
Trojan horse – Malware named for its method of getting past computer defenses by pretending
to be something useful.
Zombie – A corrupted computer that is waiting for instructions and commands from its master,
the attacker.
The costs of a malware infection include:
The time and effort it takes to root out the virus and repair the damage.
The diversion of time and effort from what may have been revenue production.
The out-and-out loss of computer hardware (rare these days) or of documents, files, and
applications that either cannot be recovered, or for which the time and expense of
recovery can't be justified.
The classification of malicious code into categories such as "virus" or "worm" is today
somewhat quaint. Attackers who want to harm your system will get there any way they can,
often by whipping up a software hybrid that blurs these definitions. For this reason, modern
attack tools tend to be labeled by their function more than their genealogy. Hence there are
viruses, worms, rootkits, Trojan horses, password sniffers, and zombies. In this course we shall
call all such programs malicious code, or for short, malware.
Malicious logic is a set of instructions that cause a site’s security policy to be violated.
Most malicious code today is concerned not only with trashing your machine, but also with
using your machine to infect others.
A classic example is the software used to create a DDoS attack.
After hiding itself in your computer, modern malware typically seeks information from you to
use to infect others, and it usually finds it in your address book or by prowling your local area
network.
The malware then stalks its new victims, often by sending an email in your name and infects
them as well.
Viruses
A virus is a code fragment that copies itself into a larger program, modifying that program.
It is not an independent program but depends upon a host program, which it infects.
A virus executes only when its host program begins to run.
The virus then replicates itself, infecting other programs as it reproduces.
After seeing to its own reproduction, it then does whatever dirty work it carries in its
programming, or payload.
A virus might start reproducing right away, or it might lie dormant for some time, until
it‘s triggered by a particular event. (Friday the 13th virus).
A virus may infect memory, a floppy disk, a hard drive, a backup tape, or any other
type of storage.
Viruses also can move about as macros, such as those written in the scripting language
used to automate keystrokes in office programs such as Microsoft Word or Excel.
1949 – John von Neumann presented a paper on the “Theory and Organization of
Complicated Automata,” in which he postulated that a computer program could
reproduce.
1950 – In a game at Bell Labs called "Core Wars," two programmers would
unleash software "organisms" and watch as they vied for control of the computer.
1984 – Ken Thompson described the development of what can be considered the first
practical computer virus. Thompson wrote a self-reproducing program in the C
programming language.
Types of Viruses
Boot Sector Infectors
The boot sector is the part of a disk used to bootstrap the system or mount a disk.
Code in that sector is executed when the system "sees" the disk for the first time.
When the system boots, or the disk is mounted, any virus in that sector is executed.
(The actual boot code is moved to another place, possibly another sector.)
A boot sector infector is a virus that inserts itself into the boot sector of a disk.
Executable Infectors
The PC variety of executable infectors is called COM or EXE viruses because they
infect programs with those extensions.
The virus can prepend itself to the executable or append itself.
An executable infector is a virus that infects executable programs.
Multipartite Viruses
A multipartite virus is one that can infect either boot sectors or applications.
Such a virus typically has two parts, one for each type.
Stealth Viruses
A stealth virus is one that conceals the infection of files, for example by intercepting
reads of the infected file and returning the original, uninfected contents.
Encrypted Viruses
Computer virus detectors often look for known sequences of code to identify computer
viruses.
To conceal these sequences, some viruses encipher most of the virus code, leaving only
a small decryption routine and a random cryptographic key in the clear.
An encrypted virus is one that enciphers all of the virus code except for a small
decryption routine.
Polymorphic Viruses
A polymorphic virus is a virus that changes its form each time it inserts itself into
another program
Consider an encrypted virus. The body of the virus varies depending on the key chosen,
so detecting known sequences of instructions will not detect the virus.
However, the decryption algorithm can be detected. Polymorphic viruses were designed
to prevent this.
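The effect of enciphering the body can be shown with a trivial XOR cipher. This is purely a demonstration of why the body carries no fixed signature; the "virus body" is a harmless placeholder string.

```python
def xor_bytes(data, key):
    """Toy one-byte-key cipher standing in for the virus's encipherment."""
    return bytes(b ^ key for b in data)

BODY = b"...stand-in for the virus body..."

# The same body enciphered under two different random-looking keys:
k1, k2 = 0x5A, 0xC3
enc1, enc2 = xor_bytes(BODY, k1), xor_bytes(BODY, k2)

assert enc1 != enc2                 # no common byte pattern for a scanner
assert xor_bytes(enc1, k1) == BODY  # the decryption routine recovers the body
assert xor_bytes(enc2, k2) == BODY
```

Only `xor_bytes` itself (the decryption routine) stays constant from copy to copy, which is why scanners target the decryptor, and why polymorphic viruses mutate the decryptor as well.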
Macro Viruses
A macro virus is a virus composed of a sequence of instructions that is interpreted, rather than
executed directly.
Virus Detection
Scanning
o Once a virus has been detected, it is possible to write a scanning program that looks for the
signature string characteristic of that virus.
Integrity checking with checksums
o Integrity checking reads the entire disk and records integrity data, such as a checksum for
each file, so that later changes can be detected.
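Both techniques can be sketched in a few lines of Python. The signature bytes, file names, and file contents below are invented for illustration; real scanners use large signature databases and scan actual disks.

```python
import hashlib

SIGNATURES = {b"\xde\xad\xbe\xef": "ExampleVirus"}   # hypothetical signature

def scan(data):
    """Signature scanning: report any known pattern found in the data."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

def record_integrity(files):
    """Integrity checking: record a checksum of every file's contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def changed(files, baseline):
    """Report files whose current checksum differs from the baseline."""
    return [name for name, data in files.items()
            if hashlib.sha256(data).hexdigest() != baseline.get(name)]

files = {"app.exe": b"clean code"}
baseline = record_integrity(files)           # taken while the system is clean
files["app.exe"] += b"\xde\xad\xbe\xef"      # simulated infection
assert scan(files["app.exe"]) == ["ExampleVirus"]
assert changed(files, baseline) == ["app.exe"]
```

Note the complementary trade-off: scanning only finds viruses whose signatures are known, while integrity checking flags any change, including legitimate ones.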
A computer virus infects other programs. A variant of the virus is a program that
spreads from computer to computer, spawning copies of itself on each one.
A worm is a program that replicates and propagates itself without having to attach itself
to a host
Worms can continue replicating themselves until they completely fill available
resources, such as memory, hard drive space, and network bandwidth.
Viruses and worms can be used to infect a system and modify a system to allow a hacker
to gain access. Many viruses and worms carry Trojans and backdoors.
A virus and a worm are similar in that they're both forms of malicious software
(malware).
A virus infects another executable and uses this carrier program to spread itself. The
virus code is injected into the previously benign program and is spread when the
program is run.
A worm is similar to a virus in many ways but does not need a carrier program. A worm
can self-replicate and move from infected host to another host.
A worm spreads from system to system automatically, but a virus needs another
program in order to spread.
History of Worms
1975 – In John Brunner's science fiction novel The Shockwave Rider, programs called
"tapeworms" lived inside computers, spread from machine to machine, and were
"indefinitely self-perpetuating so long as the net exists."
1980 – John Schoch and Jon Hupp, researchers at Xerox Palo Alto Research Center,
developed the first experimental worm programs as a research tool.
The Xerox PARC worms were, on the whole, useful creatures; they handled mail, ran
distributed diagnostics, and performed other distributed functions.
1988 – The Internet Worm, a creation of Robert Tappan Morris, a 23-year-old doctoral
student from Cornell, who on the second of November 1988, at about 6:00 p.m., released a
self-replicating bit of code onto the Internet designed to spread itself freely, but to do little else.
There was no dangerous payload.
Soon, however, VAX and Sun machines (the only systems targeted) across the country
started to bog down.
This same scene was replayed at the sites of over 6,000 machines across the country.
While no physical damage was caused by the worm, the U.S. General Accounting
Office estimated that the worm cost between $100,000 and $10,000,000 due to lost
access.
Trojans and backdoors are types of malware used to infect and compromise computer
systems
A Trojan horse is a program with an overt effect and a covert effect.
An overt channel is the normal and legitimate way that programs communicate within
a computer system or network.
A covert channel uses programs or communications paths in ways that were not
intended.
Trojans can use covert channels to communicate. Some client Trojans use covert
channels to send instructions to the server component on the compromised system.
A Trojan horse that makes copies of itself is called a propagating Trojan horse.
A Trojan horse hides in an independent program that performs a useful or appealing
function, or appears to perform that function.
Along with the apparent function, however, the program performs some other
unauthorized operation
A typical Trojan horse tricks a user into running a program, often an attractive or helpful
one. When the unsuspecting user runs the program, it does indeed perform the expected
function.
But its real purpose is often to penetrate the defenses of the system by usurping the
user’s legitimate privileges and thus obtaining information that the penetrator isn’t
authorized to access.
An example of this would be the modern rootkit, which is a script that controls a small
suite of programs that creates an administrative-level account on the targeted system
and then creates a backdoor.
A backdoor is an unmonitored entranceway that evades the security mechanisms,
through which the attacker can later gain convenient access.
Backdoors
Types of Trojans
Other forms of malicious logic include:
Virus and Worm Hoaxes
Logic Bombs
Rabbits and Bacteria
Spyware
Spam
Software problems (The Buffer-Overflow Attack)
Software Attacks
Hardware Threats …etc
Logic Bombs
A logic bomb is a type of malware that executes its malicious purpose when a specific
criterion is met, such as a user logging in or the arrival of midnight on Friday the 13th.
The most common factor is date/time
Logic bomb might delete files on a certain date/time
Disaffected employees may plant Trojan horses that use logic bombs in systems, such as
one that deletes the payroll roster when that employee's name is removed from it.
Types of Bombs
1. A bomb that's set to go off on a particular date or after some period of time has elapsed is
called a time bomb (e.g., Friday the 13th).
2. A bomb that's set to go off when a particular event occurs is called a logic bomb.
The following shell loop is the classic example of a rabbit or bacterium: it absorbs disk
and file table resources by creating an endless chain of nested directories.
while true
do
    mkdir x
    cd x
done
Spyware
Spyware is simply software that literally spies on what you do on your computer.
Spyware can be as simple as a cookie used by a website to record a few brief facts about
your visit to that website, or it could be of a more insidious type, such as a key
logger.
A cookie is a text file that your browser creates and stores on your hard drive; a
website you have visited downloads it to your machine and uses it to recognize you when
you return to the site.
Key loggers are programs that record every keystroke you make on your keyboard.
This spyware then logs your keystrokes to the spy's file.
The most common use of a key logger is to capture usernames and passwords.
It can capture every document you type, as well as anything else you might type.
This data can be stored in a small file hidden on your machine for later extraction, or
sent out in TCP packets to some predetermined address.
The spyware may wait until after hours to upload this data to some server, or use your
own email software to send the data to an anonymous email address.
There are also some key loggers that take periodic screenshots from your machine,
revealing anything that is open on your computer.
Spam
Software Attacks
Password Crack
Routers and firewall arrangements can offer protection against IP spoofing
In a man-in-the-middle or TCP hijacking attack, an attacker monitors (or sniffs) packets
from the network, modifies them, and inserts them back into the network.
The attack may use IP spoofing to enable the attacker to impersonate another entity on
the network.
It allows the attacker to eavesdrop as well as to change, delete, reroute, add, forge, or
divert data.
A sniffer is a program or device that can monitor data traveling over a network.
Sniffers can be used both for legitimate network management functions and for stealing
information.
Sniffers add risk to the network, because many systems and users send information on
local networks in clear text.
A sniffer program shows all the data going by, including passwords, the data inside
files such as word-processing documents, and screens full of sensitive data from
applications.
Social engineering is the process of using social skills to convince people to reveal
access credentials or other valuable information to the attacker.
A common ploy involves a perpetrator posing as a person higher in the organizational hierarchy than the victim.
To prepare for this false representation, the perpetrator may have used social
engineering tactics against others in the organization to collect seemingly unrelated
information that, when used together, makes the false representation more credible
There are many programs that can help you keep viruses and other wildlife away from your
system and can wipe out the critters if they gain access (virus protection programs)
These products, and the system administration procedures that go along with them, have two
overlapping goals:
They don’t let you run a program that’s infected, and they keep infected programs from
damaging your system.
Antivirus
Virus protection software uses two main techniques:
The first uses signatures, which are snapshots of the code patterns of the virus.
The antivirus program lurks in the background watching files come and go until it
detects a pattern that aligns with one of its stored signatures, and then it sounds the
alarm and maybe isolates or quarantines the code.
Alternatively, the virus protection program can go looking for trouble. It can
periodically scan the various disks and memories of the computer, detecting and
reporting suspicious code segments, and placing them in quarantine.
One problem with signature-based virus protection programs is that they require a
constant flow of new signatures in response to evolving attacks.
Their publishers stay alert for new viruses, determine the signatures, and then make
them available as updated virus definition tables to their users.
Another problem is called the Zero Day problem. Basically, this occurs when a user
trips over a new virus before the publisher discovers it and can issue an updated
signature.
A third problem is that, just as with biological pathogens, viruses can mutate.
Sometimes this happens accidentally; other times, it happens because a clever
programmer uses file compression software to change the signature of the virus to elude
signature detection.
A polymorphic virus goes further: it can change its own form by introducing extra
statements or adding random numbers, to elude signature detection.
To counter these, virus protection publishers are adding what is called heuristic
detection features to their wares.
A heuristic is a rule or behavior. If a virus exhibits that behavior, the antivirus software
tries to stop it in the act.
For instance, code that suddenly accesses a critical operating system area or file,
unexplained changes in file size (particularly in system files), sudden decreases in
available hard disk space, or changes in file time or date stamps can all trigger an alarm.
System access controls – ensure that unauthorized users don’t get into the system and
encourage (sometimes force) authorized users to be security-conscious.
Data access controls – monitor who can access what data, and for what purpose. Another word
for this is authorization, that is, what you can do once you are authenticated.
discretionary access controls
mandatory access controls
System and Security Administration – performs the offline procedures that make or break a
secure system: clearly delineating system administrator responsibilities, training users
appropriately, and monitoring users to make sure that security policies are observed.
System Design – takes advantage of basic hardware and software security characteristics.
Trying to log into a system is a kind of challenge/response scenario. You tell the system who
you are, and the system requests that you prove it by providing information that matches what
the computer has stored about you. In security terms, this two-step process is called
identification and authentication.
Identification is the way you tell the system who you are.
Authentication is the way you prove to the system that you are who you say you are.
What you know – The most familiar example is a password. The theory is that if you know the
secret password for an account, you must be the owner of that account.
What you have – Examples are keys, tokens, badges, and smart cards you must use to unlock
your terminal or your account. The theory is that if you have the key or equivalent, you must
be the owner of it.
What you are – Examples are physiological or behavioral traits, such as your fingerprint,
handprint, retina pattern, iris pattern, voice, signature, or keystroke pattern. Biometric systems
compare your particular trait against the one stored for you and determine whether you are who
you claim to be.
Multifactor authentication
Multifactor authentication is a way to cascade the three methods listed previously such that if
an attacker gets past one safeguard, they still have to pass another.
Passwords are still, far and away, the authentication tool of choice.
In a multifactor authentication system, the username and password would be augmented with
one of the other two factors.
Login Processes
Password Authentication Protocol – user provides a username and password, and these are
compared with values stored in a table to see if they match.
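A minimal sketch of this table-lookup check, assuming (as real systems do) that the table stores salted hashes rather than plaintext passwords; the username and password below are illustrative:

```python
import hashlib
import hmac
import os

def make_entry(password):
    """Create a (salt, hash) table entry; the plaintext is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Illustrative stored table
users = {"alice": make_entry("correct horse battery staple")}

def pap_login(username, password):
    """Compare the supplied credentials with the stored table."""
    entry = users.get(username)
    if entry is None:
        return False
    salt, digest = entry
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)  # constant-time comparison
```

Note that PAP itself transmits the password over the link in the clear, which is one reason challenge-based schemes such as CHAP (next) are preferred on untrusted networks.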
Challenge Handshake Authentication Protocol (CHAP) – the device doing the authenticating,
usually a network server, sends the client program an ID value and a random number, and both
the sender and peer share a predefined secret word, phrase or value.
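The exchange can be sketched as follows, modeled on the MD5-based response defined in RFC 1994; the identifier and shared secret below are made-up placeholders. The secret itself never crosses the wire, and a fresh random challenge defeats replay:

```python
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """Peer's reply: MD5(identifier || shared secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier, secret, challenge, response):
    """Authenticator recomputes the hash and compares."""
    return response == chap_response(identifier, secret, challenge)

# Example exchange
secret = b"predefined shared secret"          # known to both sides in advance
identifier, challenge = 7, os.urandom(16)     # sent by the authenticator
response = chap_response(identifier, secret, challenge)   # computed by the peer
ok = chap_verify(identifier, secret, challenge, response) # checked by the server
```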
Mutual authentication – can be thought of as two-way authentication. The client authenticates
to the server, and then the server authenticates to the client or workstation.
One-time password – is a variation of the username/password combination. With OTP, the user
creates a password, and the system creates a variation of the password each time a password is
required.
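One well-known way to realize this is a counter-based one-time password in the style of HOTP (RFC 4226), where client and server share a secret key and a counter that advances on each use, so the "variation" is different every time:

```python
import hashlib
import hmac

def hotp(key, counter, digits=6):
    """Derive a one-time numeric code from a shared key and counter."""
    mac = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return code % (10 ** digits)

# Each login uses the next counter value, so an intercepted code
# cannot be replayed.
```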
Per-session authentication – requiring the client to re-authenticate for each exchange of
information is burdensome, but it provides a great deal of security.
Tokens – A token or token card is usually a small device that supplies the response to a
challenge that is received when trying to log on.
Table 5.1: Sample login/password controls