Cryptography Module 5

An Overview of Risk Management

Risk management is crucial for protecting an organization’s information assets and ensuring its long-term
competitiveness. The process involves identifying, assessing, and controlling risks that could impact
information security. This is essential in an environment where information technology (IT) is central to
business operations. Proper risk management allows organizations to balance the costs of security measures
with the benefits they bring, ensuring that their systems remain secure and operational.

The key elements of risk management include:

1. Risk Identification: This is the process of identifying vulnerabilities in the organization’s IT
infrastructure. Understanding the current security posture of information assets is essential for
recognizing potential risks.

2. Risk Assessment: This involves evaluating the exposure or risk level to the organization's
information assets. It assesses how vulnerable these assets are to various threats.

3. Risk Control: Implementing controls to mitigate the identified risks and reduce them to an
acceptable level. This is the final step in protecting the organization’s assets and ensuring continued
operations.

The relationship between these components is critical for a robust risk management strategy, as shown in the
figure of risk management components.

Know Yourself

The first principle of effective risk management is to "know yourself," which means identifying and
understanding the organization's information systems. You must know what assets (data, systems, etc.) the
organization has, how they add value, and which vulnerabilities they are susceptible to. Once the assets are
identified, it's crucial to review the security controls already in place. Just having controls is not enough;
regular maintenance and updating are necessary to ensure they remain effective.

Know the Enemy

The second principle from Sun Tzu's strategy is to "know the enemy." This involves understanding the
threats that could harm the organization's information assets. These threats could range from cyberattacks to
natural disasters. Each threat should be evaluated based on its potential impact on the organization's
information and assets. Identifying and ranking these threats is crucial for prioritizing responses and
defenses.

The Role of Communities of Interest

Risk management is not just the responsibility of the IT department; it involves three key communities of
interest:

1. Information Security Team: They are the leaders in managing and addressing the risks due to their
expertise in identifying and mitigating threats.

2. Management: Responsible for allocating sufficient resources (time, money, personnel) to implement
security controls and ensure that the organization’s assets are protected.

3. Users: They interact directly with the systems and understand the value of the information assets.
Users play an important role in identifying vulnerabilities and threats.

These communities must collaborate to evaluate and apply the appropriate risk controls, determine their
cost-effectiveness, and ensure that these controls remain effective over time. Regular reviews of risk
controls and their effectiveness are vital to keeping the organization secure.

Risk Identification
Risk identification is a critical component of any risk management strategy, particularly in information
security. It requires a thorough understanding of an organization’s information assets and potential risks that
could affect these assets. Identifying these risks helps in understanding the threats facing each asset, which is
essential for prioritizing actions and developing appropriate controls.
The risk identification process involves several key steps:
1. Identify and Classify Assets: The first step is identifying all the information assets that the
organization possesses. These assets can include:
o People: Employees, contractors, and other personnel who interact with systems.
o Procedures: Business processes and workflows that govern the use and protection of assets.
o Data: Critical business and operational data that need to be protected.
o Software: Applications and systems used to process data.
o Hardware: The physical devices that support business operations.
2. Prioritize Assets: Once the assets are identified, it’s important to classify them based on their
criticality to the organization. Some assets are more valuable than others and require higher levels of
protection. This process helps in understanding which assets need immediate protection and which
ones can be managed with less stringent controls.

3. Assess the Threats and Risks: After classifying the assets, a threat assessment process is conducted
to identify the risks each asset faces. This includes determining the likelihood of threats and their
potential impact on the assets. The process involves:
o Identifying both external and internal threats.
o Quantifying risks, assessing vulnerabilities, and understanding the potential consequences of
various events.
4. Plan and Organize the Process:
o Organize a Team: The risk identification process should involve a team made up of
representatives from all affected departments (users, managers, IT, and information security
groups). This ensures that all perspectives are considered, and every potential risk is covered.
o Follow Project Management Principles: As with any major security initiative, it's important
to follow proper project management principles. This includes defining roles and
responsibilities, setting timelines, and ensuring deliverables are met.
o Periodic Reviews and Presentations: Risk identification should involve regular reviews and
updates to management, ensuring that progress is tracked, and any necessary changes are
addressed in a timely manner.

Categorizing Components of an Information System


Risk identification also involves categorizing the various components of the organization’s information
system. The table below provides an overview of these components:
Traditional System Components | Risk Management System Components
People: Employees, nonemployees | People: Trusted employees, other staff, people at trusted organizations, strangers
Procedures: IT and business procedures, sensitive procedures | Procedures: Standard IT and business procedures
Data: Information in storage, processing, and transmission | Data: Sensitive and critical data
Software: Applications, operating systems | Software: Applications, security components, operating systems
Hardware: Devices and peripherals | Hardware: Security devices, networking components, system devices, peripherals
Systems: Intranet, Internet, or DMZ components | Systems: Internal and external systems, network components

Asset Identification and Inventory
Asset identification is a critical part of risk management, and it involves the systematic process of cataloging
all elements of an organization's system. This includes people, procedures, data, software, hardware, and
networking elements. The goal of asset identification is to determine the relative priority of these assets
based on their importance to the success of the organization.

1. Asset Categorization
Assets are categorized as follows:
• People: Employees and non-employees. Employees are divided into trusted roles (with more
authority and accountability) and non-trusted roles. Non-employees include contractors, consultants,
and strangers.
• Procedures: Business and IT procedures, divided into standard procedures and sensitive procedures.
Sensitive procedures may be targeted by threat agents.
• Data: Includes all information in its different states—transmission, processing, and storage.
• Software: Categorized into applications, operating systems, and security components.
• Hardware: Divided into systems devices and peripherals, and those part of information security
control systems.

2. People, Procedures, and Data Identification


Identifying people, procedures, and data is more complex than identifying hardware and software. It requires
careful tracking and categorization of attributes such as:
• People: Position name/ID, supervisor, security clearance level, special skills.
• Procedures: Description, purpose, relation to software/hardware, storage location.
• Data: Classification, ownership, structure type (sequential or relational), backup procedures.

3. Hardware, Software, and Network Asset Identification


For hardware, software, and network assets, it is important to track attributes that may include:
• Name: Use of a standardized naming system that avoids revealing information to attackers.
• IP Address and MAC Address: Useful for tracking network devices. Tracking by IP address is
practical only for devices with static addresses, while MAC addresses identify specific network interfaces.
• Element Type: Categories such as servers, desktops, and network devices for hardware; operating
systems, custom applications, etc., for software.
• Versioning and Updates: Keeping track of software versions, firmware revisions, and updates is
essential for risk management.
• Physical and Logical Location: Tracking the physical location (for hardware) and logical network
location (for devices) of assets.
• Controlling Entity: The organizational unit responsible for the asset, important for determining the
tolerance for risk and managing controls.

4. Automated Asset Inventory Tools


Automated tools can help identify and track the various components of hardware, software, and network
systems. These tools can store inventory information in a database, which must be regularly updated.
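
To make the idea of an inventory database concrete, the sketch below (in Python) stores a few of the
attributes discussed earlier (name, IP and MAC address, element type, version, location, and controlling
entity) in a small SQLite table. The schema and sample values are hypothetical, not taken from any
particular inventory product.

```python
import sqlite3

# Hypothetical, minimal asset-inventory table; real tools track many more attributes.
conn = sqlite3.connect("asset_inventory.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS assets (
        name TEXT PRIMARY KEY,      -- standardized name that avoids revealing details to attackers
        ip_address TEXT,            -- meaningful only for statically addressed devices
        mac_address TEXT,
        element_type TEXT,          -- e.g., server, desktop, network device
        software_version TEXT,
        physical_location TEXT,
        logical_location TEXT,
        controlling_entity TEXT     -- organizational unit responsible for the asset
    )
""")

conn.execute(
    "INSERT OR REPLACE INTO assets VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("SRV-014", "10.0.4.17", "00:1A:2B:3C:4D:5E", "server",
     "firmware 2.3.1", "Data center rack B2", "DMZ segment", "IT Operations"),
)
conn.commit()

# A periodic review might simply list everything a given organizational unit controls.
for row in conn.execute(
        "SELECT name, element_type FROM assets WHERE controlling_entity = ?",
        ("IT Operations",)):
    print(row)
```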

Data Classification and Management
Data classification is essential for securing the confidentiality and integrity of information within
organizations. Different schemes are used to classify data based on sensitivity levels.

1. Common Data Classification Categories:


• Confidential: Highly sensitive corporate information with strict access controls. Access is granted
only on a need-to-know basis.
• Internal: Information for internal use within the organization, including authorized contractors and
employees.
• External: Information approved for public release.

2. Military Classification:
The U.S. military uses a more complex classification system with five levels:
• Unclassified: Publicly available information.
• Sensitive But Unclassified (SBU): Information whose unauthorized disclosure could affect national
interests.
• Confidential: Information that could damage national security if disclosed.
• Secret: Information whose unauthorized disclosure could cause serious damage to national security.
• Top Secret: Information whose unauthorized disclosure could cause exceptionally grave damage to
national security.

3. Organizational Data Classification Scheme:


Organizations typically use simpler classification systems:
• Public: Information for general dissemination.
• For Official Use Only: Information not for public release but not highly sensitive.
• Sensitive: Information important to the business that could cause damage if revealed.
• Classified: Information that is critical to the organization and its loss could have significant
consequences.

Security Clearances and Access Control


Security clearances are essential for determining who can access classified data. Employees are assigned
roles based on their access rights, which are aligned with the data classification scheme. These roles help
ensure that users are only granted access to information necessary for their responsibilities, maintaining the
confidentiality of sensitive data.
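
As a minimal illustration of matching clearances to the classification scheme above, the Python sketch
below grants access only when the user's clearance level is at least as high as the data's classification.
The numeric ordering of levels is an assumption made for illustration.

```python
# Hypothetical ordering of the organizational classification scheme described above.
LEVELS = {"Public": 0, "For Official Use Only": 1, "Sensitive": 2, "Classified": 3}

def can_access(user_clearance: str, data_classification: str) -> bool:
    """Grant access only when the clearance dominates the data's classification."""
    return LEVELS[user_clearance] >= LEVELS[data_classification]

print(can_access("Sensitive", "For Official Use Only"))  # True
print(can_access("Public", "Classified"))                # False
```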

Classifying Information Assets:


• Organizations often subdivide categories of assets further (e.g., "Internet components" can be divided
into servers, networking devices, firewalls, etc.).
• It's crucial to assign a data classification scheme (e.g., confidential, internal, public) to represent the
sensitivity of data.
• Classification must allow for determining priority levels and should be comprehensive (covering all
assets) and mutually exclusive (each asset should fit only one category).

Valuating Information Assets:


• To determine the value of an asset, you can ask a series of questions based on its importance, revenue
generation, profitability, replacement cost, protection costs, and potential exposure to liability or
embarrassment if compromised.
• Value can be considered from multiple perspectives, such as:

o Cost of creation (e.g., software development, data collection).
o Maintenance costs over the asset’s lifespan.
o Replacement costs if lost or damaged.
o Cost of protection, considering the difficulty and expense of implementing security measures.
o Intellectual property value, including trade secrets or proprietary data.

Prioritizing Information Assets:


• Once assets are classified and valued, they should be prioritized using a weighted factor analysis. This
involves scoring each asset against a set of critical factors (e.g., impact to revenue, profitability, or public
image).
• The weights assigned to each factor reflect its importance to the organization, and the scores give a final
prioritized list of assets.
• For example, the customer order via SSL (inbound) might score the highest, indicating it’s the most
critical asset, while a supplier’s fulfillment advice might be ranked lower.
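
The weighted factor analysis can be expressed as a short calculation. In the Python sketch below, the
criteria, weights, and scores are illustrative stand-ins (the customer-order asset name is borrowed from
the example above), not real organizational data.

```python
# Criteria weights should sum to 1.0; the values here are assumed for illustration.
weights = {"impact_on_revenue": 0.5, "impact_on_profitability": 0.3, "impact_on_public_image": 0.2}

# Scores (0-100) for each asset against each criterion -- hypothetical values.
assets = {
    "Customer order via SSL (inbound)": {
        "impact_on_revenue": 100, "impact_on_profitability": 100, "impact_on_public_image": 90},
    "Supplier fulfillment advice (inbound)": {
        "impact_on_revenue": 30, "impact_on_profitability": 75, "impact_on_public_image": 5},
}

def weighted_score(scores: dict) -> float:
    return sum(weights[c] * scores[c] for c in weights)

# Print assets in priority order, highest weighted score first.
for name, scores in sorted(assets.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.1f}")
```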

Identifying and Prioritizing Threats


The process of identifying and prioritizing threats in an organization's information security begins with a
detailed threat assessment. This assessment involves determining which threats pose the most danger,
considering factors like the probability of occurrence, potential damage, frequency, and the costs involved in
both recovery and prevention. Here's a breakdown of the steps involved:
1. Identify Relevant Threats:
o Not all threats apply to every organization. An organization should first categorize and
identify which threats are relevant to its environment. For example, a company in a flood-
prone area should be concerned about flooding, but a business in an area where floods rarely
occur can exclude it from their analysis.

2. Assess the Degree of Danger:


o Threats can be ranked based on how dangerous they are. The risk assessment can be
conducted using a subjective scale from 1 to 5, where 1 represents minimal risk and 5
represents a highly significant risk. This ranking helps to prioritize efforts toward the most
dangerous threats.

3. Cost of Recovery:
o The cost of recovering from a successful attack is a crucial factor in prioritizing threats. A
threat that would cause minimal damage or that has a low cost of recovery may be less of a
priority compared to a threat that would disrupt business operations and incur high costs.

4. Cost of Prevention:
o Preventing threats requires financial and resource investment. The cost of protection against
some threats, like viruses or malicious software, is relatively low, while others, like natural
disasters, may require large investments. The cost of implementing security measures
influences the level of protection required for each identified threat.

5. Framework for Analysis:


o The framework used for threat assessment includes subjective rankings and estimates for both
the cost of recovery and the cost of prevention. These assessments are made based on the
available data and are refined over time as more information becomes available.

6. Results of Threat Assessment:
o Once threats are ranked, organizations can develop a focused strategy to mitigate the most
significant threats. Additionally, the data from threat assessments can help inform resource
allocation and security strategies.

Vulnerability Identification
• Vulnerability Identification:
o Vulnerabilities are weaknesses in an organization's information assets, security procedures,
design, or controls that can be exploited to breach security.
o Vulnerabilities can come in various forms, such as flaws in software, hardware, procedures, or
human error.
• Threats and Vulnerabilities:
o The process involves identifying the threats an organization faces and mapping them to the
vulnerabilities of specific information assets.
o Vulnerabilities can be categorized based on the assets they affect. Some threats might create
multiple vulnerabilities for an asset.
• Group Brainstorming:
o The identification of vulnerabilities should ideally be done through group brainstorming sessions
with experts from different areas within the organization (e.g., networking, systems management,
information security).
• TVA Worksheet:
o The Threats-Vulnerabilities-Assets (TVA) worksheet is a tool used to visualize the relationship
between threats, vulnerabilities, and assets.
o Assets are placed along the horizontal axis, and threats are placed along the vertical axis of a
grid. Each cell in the grid represents the vulnerabilities between specific threats and assets.
• Risk Assessment Preparation:
o The TVA worksheet helps prioritize assets and threats and provides a starting point for the risk
assessment process.
o Vulnerabilities are categorized using a notation system (e.g., T1V1A1 for the first vulnerability
between the first threat and the first asset); a minimal sketch of such a grid follows this list.
• Control Identification:
o During the risk assessment phase, the team evaluates not only the vulnerabilities but also existing
controls that may mitigate the risk. These controls are cataloged and categorized.
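
A minimal Python sketch of the TVA grid described above, using the TxVyAz naming convention; the
threats, assets, and vulnerability entries are placeholders, not taken from a real worksheet.

```python
# Assets along one axis, threats along the other; each cell holds that pair's vulnerabilities.
assets = ["A1: Customer database", "A2: Web server"]     # hypothetical assets
threats = ["T1: Malware", "T2: Power failure"]           # hypothetical threats

# Cell (threat, asset) -> list of vulnerability labels, e.g., T1V1A2, T2V1A2, ...
tva = {(t, a): [] for t in threats for a in assets}
tva[("T1: Malware", "A2: Web server")].append("T1V1A2: unpatched web application")
tva[("T2: Power failure", "A2: Web server")].append("T2V1A2: no backup power supply")

for (threat, asset), vulns in tva.items():
    if vulns:   # print only the populated cells of the worksheet
        print(f"{threat} x {asset}: {', '.join(vulns)}")
```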

Briefly Explain the risk assessment process.
Risk Assessment
The risk assessment process involves evaluating the relative risk associated with vulnerabilities that could
affect an organization's information assets. The key components of risk assessment include:

1. Identifying Assets, Threats, and Vulnerabilities: The first step involves identifying the
organization's information assets (e.g., intellectual property, networks, software) and the potential
threats and vulnerabilities they face (e.g., espionage, natural disasters, technical failures).

2. Risk Rating: Each vulnerability is assigned a risk rating or score, which helps prioritize which
vulnerabilities need the most attention. This rating is not absolute but provides a comparative
measure of risk across the organization’s assets.

3. Likelihood of Vulnerability Occurrence: Risk assessment involves evaluating the likelihood that a
specific vulnerability will be exploited. Likelihood is rated on a scale from 0.1 (low) to 1.0 (high).
Factors like industry research, external sources, and organizational data can help in determining this
likelihood.

4. Risk Determination: The overall risk is calculated by multiplying the asset’s value by the
likelihood of the vulnerability being exploited, subtracting the portion of risk already mitigated by
current controls, and adding an allowance for uncertainty in the estimates. This gives a numerical
value for each vulnerability's relative risk, guiding decisions on where to focus resources (a worked
sketch follows this list).

5. Identification of Controls: Once risks are identified, controls are put in place to mitigate these risks.
Controls can be policies (like general security policies), programs (such as training), or technologies
(like firewalls). Effective controls aim to reduce residual risks (risks that remain after initial
controls).

6. Documentation of Results: A key deliverable from the risk assessment process is the "ranked
vulnerability risk worksheet." This document lists the identified assets, their impact values,
vulnerabilities, their likelihood, and the calculated risk-rating factor, allowing the organization to
prioritize mitigation efforts.

7. Risk Control Strategies: Once risks are identified, the organization must select strategies to manage
them. The five basic strategies are:

o Defend: Prevent exploitation through countermeasures.

o Transfer: Shift the risk to another party (e.g., through insurance).

o Mitigate: Reduce the impact or likelihood of the risk.

o Accept: Acknowledge the risk but decide to live with it.

o Terminate: Eliminate the risk by discontinuing the activity or asset.
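
As a worked sketch of the risk-rating calculation from step 4 above, the Python snippet below computes
likelihood times asset value, subtracts the portion of risk already mitigated by current controls, and adds
an allowance for uncertainty. All numbers are illustrative.

```python
def risk_rating(asset_value: float, likelihood: float,
                mitigated_fraction: float, uncertainty_fraction: float) -> float:
    """Relative risk = (value * likelihood) - portion already mitigated + uncertainty allowance."""
    base = asset_value * likelihood
    return base - base * mitigated_fraction + base * uncertainty_fraction

# Hypothetical vulnerability: asset valued at 50, likelihood 1.0,
# no current controls (0% mitigated), 10% uncertainty in the estimates.
print(risk_rating(50, 1.0, mitigated_fraction=0.0, uncertainty_fraction=0.10))  # 55.0
```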

Chalk out the process of deciding how to proceed with one of the five strategies.

Risk Control Strategies


Organizations must control risks from information security threats to maintain a competitive advantage.

1. Defend
The Defend strategy aims to prevent the exploitation of vulnerabilities. It is the preferred method and
includes:
• Application of Policy: Setting rules and guidelines to control risks.
• Education and Training: Raising awareness and improving skills to recognize and mitigate threats.
• Application of Technology: Using technical controls like firewalls, encryption, and intrusion
detection systems to secure assets.
The goal of defending is to eliminate exposure or counter threats. For example, McDonald’s reduced its
exposure to cyber-attacks from animal rights activists by changing the conditions under which its eggs
were supplied.

2. Transfer
The Transfer strategy shifts risk to other assets or organizations. This can involve:
• Outsourcing: Hiring third-party services, such as web hosting, to transfer the risk associated with
managing certain systems.
• Purchasing Insurance: Transferring the financial burden of potential damage from a security
breach.
Example: A company outsourcing its website management to specialists rather than handling it in-house.
The risk of downtime or cyber-attacks is transferred to the service provider, who assumes responsibility.

3. Mitigate
The Mitigate strategy reduces the impact of a security breach by preparing for potential incidents through
various plans:
• Incident Response Plan (IRP): Provides steps to follow during an active security breach.
• Disaster Recovery Plan (DRP): Focuses on actions needed to recover from a major incident,
ensuring the organization can restore operations.
• Business Continuity Plan (BCP): Ensures that the business can continue operating if a disaster
affects critical systems, such as activating secondary data centers.
These plans enable rapid response to minimize damage and downtime.

4. Accept
The Accept strategy involves recognizing a vulnerability but choosing to take no action. This decision is
typically made after:
• Assessing risk levels.
• Estimating potential damage.
• Conducting a cost-benefit analysis.
For example, if the cost of securing a server exceeds the potential loss from a breach, an organization might
choose to accept the risk. However, consistently relying on acceptance can lead to negligent security
practices.
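
The cost-benefit reasoning behind an acceptance decision can be reduced to a simple comparison: if the
annualized cost of the control exceeds the expected annual loss it would prevent, accepting the risk may
be defensible. The figures in the Python sketch below are made up.

```python
def accept_risk(expected_annual_loss: float, annual_cost_of_control: float) -> bool:
    """Accept the risk when the control costs more than the loss it is expected to prevent."""
    return annual_cost_of_control > expected_annual_loss

# Hypothetical server: expected loss of $2,000/year vs. a $15,000/year control.
print(accept_risk(expected_annual_loss=2_000, annual_cost_of_control=15_000))  # True -> accept
```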

5. Terminate
The Terminate strategy seeks to eliminate activities or systems that introduce uncontrollable risks. For
example, an organization may choose not to pursue an e-commerce strategy if the risks outweigh the
benefits. This approach can significantly reduce exposure to certain vulnerabilities by discontinuing risky
activities.

Technical Controls Overview
• Definition: Enforce policies in automated IT functions not under direct human control.
• Purpose: Help balance information availability with confidentiality and integrity.
• Examples: Firewalls, VPNs, access control mechanisms.
• Importance: Essential in environments where systems make rapid, independent decisions.

Access Control
• Definition: Determines whether and how a user gains access to systems or physical areas.
• Components: Combination of policies, programs, and technologies.
• Types:
o Mandatory Access Control (MAC):
▪ Based on data classification and user clearance (e.g., Top Secret, Confidential).
▪ Uses sensitivity levels and lattice-based access control (access matrix: ACL for
objects, capabilities table for subjects).
o Nondiscretionary Access Control:
▪ Centralized control.
▪ Role-Based (RBAC): Access tied to job roles.
▪ Task-Based: Access linked to specific responsibilities.
o Discretionary Access Control (DAC):
▪ Set by resource owner/user.
▪ Example: Windows file sharing permissions.

What is access control, and what are the commonly used mechanisms for implementing it?
Access Control Mechanisms
• Identification:
o Supplicant presents an identifier (ID) to the system.
o Can include names, department codes, or generated random IDs.
• Authentication:
o Verifies identity using:
▪ Something you know: Passwords, PINs, passphrases.
▪ Something you have: ID cards, smart cards, tokens (synchronous/asynchronous).
▪ Something you are: Biometrics (fingerprint, iris, voice, etc.).
o Two-Factor Authentication (2FA): Combines two types (e.g., card + PIN).
• Authorization:
o Matches authenticated user to a set of access rights.
o Methods:
▪ Per-user authorization
▪ Group-based authorization
▪ Single Sign-On (SSO) via systems like LDAP.
• Accountability:
o Tracks user actions using logs and audits.
o Ensures actions are traceable to authenticated users.
o Supports intrusion detection, troubleshooting, and resource tracking.
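
A compressed Python sketch of the identification, two-factor authentication, group-based authorization,
and accountability steps described above; the user record, token check, and group rights are all
hypothetical.

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)   # accountability: log every access decision

# Hypothetical user store: password hash (something you know) and token serial (something you have).
USERS = {"alice": {"pw_hash": hashlib.sha256(b"correct horse").hexdigest(),
                   "token": "TOK-1234", "group": "finance"}}
GROUP_RIGHTS = {"finance": {"read_reports"}, "it": {"read_reports", "change_firewall_rules"}}

def access(user_id: str, password: str, token: str, action: str) -> bool:
    record = USERS.get(user_id)                        # identification: look up the supplied ID
    if record is None:
        return False
    authenticated = (hashlib.sha256(password.encode()).hexdigest() == record["pw_hash"]
                     and token == record["token"])     # two-factor authentication
    allowed = authenticated and action in GROUP_RIGHTS[record["group"]]  # group-based authorization
    logging.info("user=%s action=%s allowed=%s", user_id, action, allowed)  # accountability
    return allowed

print(access("alice", "correct horse", "TOK-1234", "read_reports"))           # True
print(access("alice", "correct horse", "TOK-1234", "change_firewall_rules"))  # False
```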

Firewalls
A firewall in information security is analogous to fire barriers in buildings or vehicles—designed to block
threats and contain risks. It acts as a gatekeeper between the trusted internal network and untrusted
external networks (like the Internet), allowing or denying traffic based on predefined rules. Firewalls can
be hardware devices, software programs, or a combination of both.
How is an application layer firewall different from a packet-filtering firewall? Why is an application
layer firewall sometimes called a proxy server?
Types of Firewalls by Processing Mode
1. Packet-Filtering Firewalls:
o Operate at Layer 3 (Network Layer) of the OSI model.
o Examine packet headers for attributes like source/destination IP, ports, and protocol.
o Use Access Control Lists (ACLs) to allow or deny packets (a minimal first-match sketch follows this list).
o Subtypes:
▪ Static filtering: Rules are manually configured and unchanged unless edited.
▪ Dynamic filtering: Rules adapt based on events (e.g., traffic spikes).
▪ Stateful inspection: Tracks ongoing connections using a state table and only allows
responses to known internal requests. More secure but vulnerable to DoS attacks due
to higher processing needs.
2. Application Gateways (Proxy Firewalls):
o Operate at the Application Layer.
o Common in DMZ setups; isolates internal servers by proxying requests.
o Caches content for performance but is protocol-specific and less flexible.
What is a circuit gateway, and how does it differ from the other forms of firewalls?
3. Circuit Gateways:
o Work at the Transport Layer.
o Do not inspect data; instead, they create secure “circuits” or tunnels.
o Allow only pre-approved types of connections.
4. MAC Layer Firewalls:
o Filter traffic using MAC addresses.
o Operate at Layer 2 and are useful in internal network segmentation.
5. Hybrid Firewalls:
o Combine features of the above types.
o Most commercial firewalls today are hybrids.
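
As referenced under item 1, the sketch below is a minimal first-match packet filter in Python: each rule
matches on source and destination address and (optionally) destination port, and the first matching rule
decides whether the packet is allowed or denied. The rule set itself is invented for illustration and is far
simpler than a production ACL.

```python
import ipaddress

# Each rule: (source prefix, destination prefix, destination port or None, action).
# Ordering matters: the first matching rule decides, and the last rule denies by default.
RULES = [
    ("10.0.0.0/8", "0.0.0.0/0", None, "allow"),    # outbound traffic from the trusted network
    ("0.0.0.0/0", "10.0.4.0/24", 25, "allow"),     # inbound SMTP to a hypothetical mail gateway
    ("0.0.0.0/0", "0.0.0.0/0", None, "deny"),      # default deny
]

def filter_packet(src: str, dst: str, dport: int) -> str:
    """Return the action of the first rule matching the packet's header fields."""
    for src_net, dst_net, port, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"

print(filter_packet("10.0.1.5", "203.0.113.9", 443))    # allow: outbound from trusted network
print(filter_packet("198.51.100.7", "10.0.4.10", 25))   # allow: SMTP to the gateway
print(filter_packet("198.51.100.7", "10.0.1.5", 22))    # deny: everything else
```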

Key Features and Considerations


• Packet filtering is efficient but vulnerable to header spoofing.
• Application firewalls are secure but resource-intensive.
• Circuit gateways offer controlled access but minimal inspection.
• Stateful firewalls offer a balance but are sensitive to overload.
List the five generations of firewall technology. Which generations are still in common use?
Firewalls Categorized by Generation
Firewalls have evolved through five generations, each offering increasingly sophisticated security:
1. First Generation – Static Packet Filtering:
o Simple networking devices filtering packets based on header information.
o Operate mainly on routers, examining IP addresses, ports, and protocols.
2. Second Generation – Application-Level Gateways (Proxies):
o Act as intermediaries between internal and external systems.
o Filter traffic based on application data (e.g., HTTP, FTP).
3. Third Generation – Stateful Inspection Firewalls:
o Monitor active connections using state tables.

o Allow traffic only if it matches a known active session.
4. Fourth Generation – Dynamic Packet Filtering:
o Filter packets dynamically based on context (source/destination/port).
o Allow only specific, anticipated packets through.
5. Fifth Generation – Kernel Proxy Firewalls:
o Operate within the OS kernel (e.g., Windows NT).
o Evaluate packets across multiple layers of the protocol stack.
o Use components like the SVEN (Security Verification Engine), Interceptor/Packet Analyzer,
and dynamic protocol stacks via NAT for high security.

Firewalls Categorized by Structure


Firewalls can also be classified based on their implementation:
1. Commercial-Grade Firewall Appliances:
• Standalone hardware with customized firmware and OS.
• Store rules in non-volatile memory, allowing physical or authorized remote updates.
• Often built from general-purpose systems with hardened OS versions.
2. Commercial-Grade Firewall Systems:
• Software installed on general-purpose hardware.
• Cost-effective for enterprises with existing infrastructure.
• Offers flexibility and performance tuning.
3. SOHO (Small Office/Home Office) Firewall Appliances:
• Simple, cost-effective devices for home users.
• Combine NAT, stateful packet filtering, port forwarding, wireless access, and basic intrusion
detection.
• Hide internal IPs via NAT, reducing vulnerability to scans.
4. Residential-Grade Firewall Software:
• Installed directly on user systems.
• Includes tools like ZoneAlarm, Comodo, etc.
• Often free, but with limited configurability and features.
• Security can conflict with usability at higher protection levels.

Explain the four common architectural implementations.
Firewall Architectures


Firewall devices can be deployed using various architectures depending on the organization's objectives,
technical capability, and budget. The four common firewall architectures are:

1. Packet-Filtering Routers
• Located at the network boundary between internal and external networks.
• Filters incoming and outgoing packets based on rules (ACLs).
• Pros: Simple, cost-effective, reduces exposure to external attacks.
• Cons: No strong authentication or logging; ACLs can be complex and impact performance.

2. Screened Host Firewalls


• Combines a packet-filtering router with an application proxy server (bastion host).
• The router filters traffic; the proxy handles application-level inspection.
• Pros: Two layers of defense; protects internal data better than routers alone.
• Cons: Bastion host is a target for attacks; must be well-secured.

3. Dual-Homed Host Firewalls
• Bastion host with two NICs: one to internal, one to external network.
• Often uses NAT (Network Address Translation) to hide internal IPs (a quick range check is sketched
after this list).
• Reserved private IP ranges:
o Class A: 10.0.0.0/8
o Class B: 172.16.0.0/12 (172.16.0.0 – 172.31.255.255)
o Class C: 192.168.0.0/16 (192.168.0.0 – 192.168.255.255)
• Pros: All traffic passes through the firewall; good protection; protocol translation possible.
• Cons: If compromised, external access may be lost; can be overloaded under high traffic.
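
A quick check of the reserved private ranges listed above, using Python's ipaddress module; the sample
addresses are arbitrary.

```python
import ipaddress

# RFC 1918 private address blocks, as listed above.
PRIVATE_BLOCKS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(address: str) -> bool:
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_private("172.20.14.3"))   # True  (inside 172.16.0.0/12)
print(is_private("203.0.113.5"))   # False (public address)
```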

4. Screened Subnet Firewalls (with DMZ)


• Uses two routers and multiple bastion hosts; creates a DMZ (Demilitarized Zone).
• Servers accessible to external users (web, FTP) are placed in the DMZ.
• Two main functions:
o Protect DMZ systems from external threats.
o Restrict access from DMZ to internal networks.
• Pros: High security; limits access paths.
• Cons: Complex and expensive to manage.
• Extranet: Special DMZ area for authenticated users (e.g., online checkout systems).

SOCKS Firewall
• Uses SOCKS protocol to proxy TCP traffic.
• Requires SOCKS client agents on each workstation.
• Pros: Distributes filtering; removes burden from central router.
• Cons: Complex to manage; each client must be configured and maintained.

Selecting the Right Firewall


1. Key Questions to Consider:
o What firewall tech best balances security vs. cost?
o What features come built-in vs. extra cost?
o How easy is setup/configuration? Is qualified support available?
o Can it scale with the organization's network?
2. Prioritization:
o Protection is the top priority.
o Cost is the second; budget constraints may limit options.
o Compromise between ideal security and budget is often necessary.

Configuring and Managing Firewalls


• Every firewall device needs its own set of configuration rules.
• Packet Filtering: Examines each packet using source/destination addresses and port/service type.
• Challenges:
o Syntax errors → easy to detect.
o Logic errors (e.g., wrong port or action) → harder to catch, more dangerous.
• Misconfigurations can cause major disruptions, e.g., blocking all emails.
• Rule Management:
o Must be debugged, tested, and correctly sequenced.
o Efficient rule sets place broad, fast-evaluating rules before slower, specific ones.
• Balance Security and Usability:
o When security interferes with business tasks, usability often wins.

Discuss the best practices for firewalls.


Best Practices for Firewall Use
1. Allow outbound traffic from trusted network; filter/log if needed.
2. Firewall configuration access must not be exposed to public networks.
o Only authorized admins should have access using strong, preferably 2FA authentication.
3. SMTP traffic should go through a secure SMTP gateway.

4. Block all ICMP traffic (e.g., ping) to prevent reconnaissance attacks.
5. Block Telnet access from outside to internal servers, especially DNS.
o Use VPN for secure external access.
6. Web Services:
o Use proxies or DMZs to isolate internal web servers.
o Allow HTTP/HTTPS only to public-facing servers.
o Place sensitive servers inside the firewall with proxy services in DMZ.
o Rebuild external servers regularly due to inevitable breaches.
7. Deny unauthenticated data.
o Block packets with spoofed internal source addresses at the external firewall.

What is a content filter? Where is it placed in the network to gain the best result for the organization?
Content Filters
• Definition: Software tools that restrict access to specific types of online content or protocols.
• Not a firewall, but often used alongside firewalls to limit internal users' access to external
content.
• Sometimes referred to as "reverse firewalls".

Components of Content Filters


1. Rating Component:
o Functions like firewall rules for websites.
o Can range from simple allow/deny lists to multi-level access controls.
2. Filtering Component:
o Restricts access to specific sites, servers, or resources.
o Uses a reverse ACL (capability table) — lists resources users cannot access.

Types of Filtering Modes


• Exclusive Mode:
o Blocks specific sites explicitly.
o Problematic due to the huge and growing number of sites to block.
• Inclusive Mode:
o Only allows listed sites.
o Slows workflow as users must request permission to add sites.
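
The exclusive and inclusive modes can be contrasted in a few lines of Python; the site lists below are
placeholders.

```python
BLOCK_LIST = {"badsite.example"}                       # exclusive mode: deny only what is listed
ALLOW_LIST = {"intranet.example", "supplier.example"}  # inclusive mode: permit only what is listed

def exclusive_allows(site: str) -> bool:
    return site not in BLOCK_LIST       # everything passes unless explicitly blocked

def inclusive_allows(site: str) -> bool:
    return site in ALLOW_LIST           # nothing passes unless explicitly approved

print(exclusive_allows("news.example"))   # True  -- unlisted sites slip through
print(inclusive_allows("news.example"))   # False -- users must request additions
```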

Modern Filtering Techniques


• Use protocol-based inspection.
• Dynamically analyze content and restrict access based on logical interpretation.

Use Cases
• Block non-business sites (e.g., pornography).
• Prevent spam from entering.
• Ensure employee productivity and reduce bandwidth abuse.

Examples
• Home/Small Office Tools: NetNanny, SurfControl.
• Corporate Tools: Novell Border Manager.

Challenges
• Require extensive setup and regular maintenance.
• Need frequent updates to stay effective.
• Content creators often try to bypass filters using tricks like avoiding specific flagged keywords.

Modern Solutions
• Some filters now come with auto-updating services, similar to antivirus software.
• Use keyword matching (e.g., “nude”, “sex”) to block or filter content.

Protecting Remote Connections


In today’s interconnected world, organizations need to facilitate remote access for employees, contractors,
and mobile users. These connections must be secure to protect sensitive corporate data and prevent
unauthorized access.
Legacy Methods and Modern Trends
Traditionally, organizations used leased lines or dial-up services like Remote Access Service (RAS) to
connect remote users. These services were secure but expensive and inflexible. With the widespread
availability of the Internet, Virtual Private Networks (VPNs) have emerged as a preferred option, offering
secure connections over public networks at a lower cost.

Remote Access Threats and Safeguards


Even though dial-up and leased lines are less common today, many organizations still use them, and they
continue to pose security vulnerabilities.
War Dialing Threat
Attackers can use tools such as war dialers to scan large ranges of phone numbers to detect modems. Once
found, attackers attempt to exploit weak authentication to gain unauthorized access to the organization’s
network.

Authentication Protocols for Dial-Up Access


To combat these vulnerabilities, more secure authentication systems have been developed:
1. RADIUS (Remote Authentication Dial-In User Service)
• Centralizes the authentication process.
• When a user dials in, the Remote Access Server (RAS) sends credentials to the RADIUS server.
• The server verifies credentials and returns an accept/deny response.
• Scales well in environments with many access points.

2. TACACS (Terminal Access Controller Access Control System)
• Offers centralized control of authentication, authorization, and accounting (AAA).
• Three versions: TACACS, Extended TACACS, and TACACS+.
o TACACS+ separates AAA functions and supports two-factor authentication.
3. Diameter Protocol
• Successor to RADIUS.
• Provides enhanced AAA services with support for modern encryption (IPSec, TLS).
• Extensible and designed to work with future protocols.

Advanced Authentication: Kerberos and SESAME


Kerberos
• Developed by MIT; uses symmetric key encryption.
• Authenticates users once and provides tickets for seamless access to multiple services.
• Components:
o Authentication Server (AS)
o Key Distribution Center (KDC)
o Ticket Granting Server (TGS)
• Vulnerable to Denial of Service (DoS) and key compromise.

SESAME (Secure European System for Applications in a Multivendor Environment)


• European alternative to Kerberos.
• Uses public key cryptography.
• Adds advanced access control, scalability, and auditing.
• Replaces Kerberos's TGS with a Privilege Attribute Server (PAS) and issues Privilege Attribute
Certificates (PACs).

Virtual Private Networks (VPNs)


A Virtual Private Network (VPN) creates a private, secure network connection over a public, unsecured
network (like the Internet). It encapsulates data traffic, ensuring privacy and security through encryption,
tunneling protocols, and authentication processes.

VPN Types
The Virtual Private Network Consortium (VPNC) categorizes VPNs into three main types:
1. Trusted VPNs (Legacy VPNs):

o These VPNs rely on leased lines from a service provider.
o Packet-switching is done over these leased circuits.
o The organization trusts the provider to maintain privacy and security.
2. Secure VPNs:
o These VPNs use encryption protocols to secure data transmitted over public networks like
the Internet.
o The focus is on ensuring privacy, even when data passes through unsecured areas.
3. Hybrid VPNs:
o A combination of both trusted and secure VPNs.
o They provide encrypted transmissions over trusted leased networks, creating a more secure
and flexible solution.

Key Requirements for VPNs


Regardless of the specific technology or protocol used, VPNs must meet the following essential security
criteria:
1. Encapsulation:
o Incoming and outgoing data is encapsulated within frames of a protocol that can traverse
public networks.
2. Encryption:
o The data is encrypted to maintain its confidentiality as it passes over public networks.
3. Authentication:
o Remote devices (and sometimes users) are authenticated to ensure they are authorized to
access the network.
o This helps prevent unauthorized access and ensures that only verified users can perform
specific actions.

Common VPN Protocols and Modes


1. IPSec (Internet Protocol Security)
• Transport Mode:
o Encrypts only the data within an IP packet, leaving the header unencrypted.
o Useful for end-to-end encryption but can reveal the destination to attackers who may
compromise endpoints.
o Suitable for remote workers and telecommuters who need to transmit data securely.

• Tunnel Mode:
o Encrypts the entire packet, including the header.
o Prevents packet eavesdroppers from identifying the destination system.
o Commonly used in gateway-to-gateway VPN setups, providing encryption across a broader
network.

Protocols in Tunnel Mode:


o IPSec, PPTP, and L2TP are commonly used in tunnel-mode VPN implementations, as seen in
solutions like Microsoft ISA Server.
2. SSL (Secure Sockets Layer)
• SSL VPNs are often easier to set up because they rely on the SSL protocol, which is built into most
web browsers.
• SSL VPNs do not require special client software, and they offer broad compatibility.
• However, they are vulnerable to user errors and potential security lapses because they are more
widely accessible.

VPN Use Cases and Configurations


• Remote Access:
o VPNs allow employees to securely connect to an organization's network from remote
locations.
o Remote access can be full network access, limited to specific features (like email or file
transfer), or allow remote control of a workstation.
• Traditional Dial-up vs. VPN:
o Remote Access Servers (RAS) were previously used for dial-up access but are now less
common due to cost and infrastructure requirements.
o VPNs provide a more cost-effective solution using the public Internet while offering
industrial-grade security.
• Advantages of VPN:
o VPNs are cost-efficient, as they do not require the expensive leased lines or infrastructure
used by RAS.
o They are easier to set up and maintain, with support available in many operating systems
(e.g., Windows XP, Windows 2000).

What is a honeypot? How is it different from a honeynet?
Honeypots, Honeynets, and Padded Cell Systems
Honeypots
• Definition: Decoy systems used to lure attackers away from real systems.
• Also known as: Decoys, lures, fly-traps.
• Function:
o Emulate vulnerable systems/services.
o Attract attackers to study their behavior.
• Goals:
o Divert attackers from real assets.
o Log and analyze attacker behavior.
o Provide time to respond and defend.
• Deployment:
o May run with a tool like Deception Toolkit.
o Any unauthorized access is flagged as suspicious.
o Monitored with event loggers and analyzers.
Honeynet
• A network of honeypots on a subnet.
• Simulates a more complex environment to engage attackers longer.
How does a padded cell system differ from a honeypot?
Padded Cell Systems
• Definition: Hardened honeypots that cannot be easily compromised.
• Used with: Traditional IDPS.
• How it works:
o When IDPS detects an attacker, they're redirected to the padded cell.
o Environment simulates real systems with fake data.
• Purpose:
o Observe and monitor without risk.
o Collect intelligence on attacker methods.

Advantages
• Diverts attackers from actual systems.
• Grants time to security teams to react.
• Enables detailed monitoring and analysis.
• May catch internal threats (insiders snooping).
Disadvantages
• Legal risks (unclear laws, risk of entrapment).
• Limited widespread success or validation.
• May provoke more aggressive attacks.
• Requires high technical skill to manage.

Trap-and-Trace Systems
• Definition: Systems that lure attackers and then trace their origin.
• Components:
o Honeypot/padded cell (trap).
o Alarm and tracking software (trace).
• Purpose:
o Distract attackers while admins trace them.
o Identify insiders or external intruders.

• Legal/Ethical Issues:
o Risk of crossing from enticement (legal) to entrapment (illegal).
o May involve unintentional access to third-party systems.
o Vigilante “back hacking” is unethical and possibly illegal.
Wasp Trap Syndrome
• Analogy: Honeypots may attract more attackers than originally present, increasing risk.

Active Intrusion Prevention: LaBrea Tarpit


• Tool: LaBrea (sticky honeypot + IDPS).
• How it works:
o Claims unused IP addresses in a network.
o Engages attackers with fake TCP/IP handshakes.
o Sets low TCP window size → holds connection open.
• Effect:
o Slows down or halts network-based worms/attacks.
o Buys time for administrators to respond.

Scanning and Analysis Tools
To ensure effective network security, it's essential to identify vulnerabilities proactively, not just reactively.
Many organizations mistakenly rely solely on perimeter defenses like firewalls and neglect in-depth
scanning, leaving them exposed to hidden threats. A robust defense-in-depth strategy includes the use of
various scanner and analysis tools, which can uncover system weaknesses, misconfigurations, and
unsecured network elements.
These tools include:
• Intrusion Detection and Prevention Systems (IDPS): Detect and alert on suspicious activities.
• Active/Passive Vulnerability Scanners: Identify security flaws across hosts and applications.
• Automated Log Analyzers and Protocol Analyzers (Sniffers): Monitor, analyze, and detect
anomalies in network traffic.
Security administrators often use tools from the hacker community to simulate attacks and discover
vulnerabilities before real attackers do. This approach is ethical and effective when done transparently and in
cooperation with ISPs to avoid being flagged or blacklisted.

Stages in Attack Protocol:


1. Footprinting: Gathering public data (like IP addresses and organization info) using search engines,
public directories, and websites. This step reveals valuable information such as server names,
directories, and developer data.
2. Fingerprinting: A deeper scan of IP address ranges to determine open ports and active services,
helping attackers or defenders understand system roles and structures.
Key Tools:
• Sam Spade: A multipurpose tool for footprinting, offering DNS lookups, ping sweeps, traceroutes,
and more.
• Wget (Linux/BSD): Used to mirror websites and analyze offline content for hidden information.
• Nmap: A widely used port scanner that identifies active hosts, open ports, and running services. It’s
vital for both defenders and attackers.

Port Scanning and Common Ports:


Port scanners check which ports are open on a device and what services are running on them. This reveals
potential entry points into a network. For example:
• Port 22: SSH
• Port 80: HTTP
• Port 443: HTTPS
• Port 25: SMTP (email)
Security Implication:
Open ports, if unnecessary or unprotected, can be exploited by attackers to gain access or control.
Organizations must disable unused ports and services to minimize exposure.
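
A minimal connect-style port check in Python, in the spirit of the scanners described above (far less
capable than Nmap), intended only for hosts you are authorized to test. The target address and port list
are placeholders.

```python
import socket

COMMON_PORTS = {22: "SSH", 25: "SMTP", 80: "HTTP", 443: "HTTPS"}

def check_ports(host: str, ports: dict, timeout: float = 1.0) -> None:
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                # connect_ex returns 0 when the TCP handshake succeeds, i.e., the port is open.
                state = "open" if s.connect_ex((host, port)) == 0 else "closed/filtered"
            except OSError:
                state = "unreachable"
        print(f"{port}/tcp ({service}): {state}")

check_ports("192.0.2.10", COMMON_PORTS)   # placeholder address from the documentation range
```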

Firewall Analysis Tools


1. Nmap (Idle Scanning)
o Uses idle hosts to stealthily scan internal networks.
o Exploits predictable IP ID sequences to detect open ports behind firewalls.
o Appears as if scan originates from a trusted DMZ host.
2. Firewalk
o Uses incrementing TTL values to map out firewall rules and router policies.
o Determines which protocols/services are blocked or allowed.
o Useful for discovering the firewall’s default filtering behavior.

3. HPING
o Advanced ping tool that supports TCP, UDP, and ICMP.
o Allows custom packet crafting to probe firewalls.
o Can bypass misconfigured filters and identify internal infrastructure.

Operating System Detection Tools


• XProbe
o Uses ICMP responses to fingerprint remote OS types.
o Matches response patterns with a database of known OS behaviors.
o Reliable for OS detection but should be blocked at firewalls.

What is a vulnerability scanner? Describe the types of vulnerability scanners.
Vulnerability Scanners
1. Active Scanners
o Initiate traffic to detect exposed usernames, shares, ports, services, misconfigurations.
o Examples:
▪ GFI LANguard – Free for non-commercial use.
▪ Nessus – Scans for services, OS versions, firewalls; has a destructive mode for exploit
testing.
2. Fuzzers (Blackbox Scanners)
o Feed random inputs to detect program/protocol weaknesses.
o Example: SPIKE
▪ SPIKE Proxy collects web usage patterns.
▪ Tests for SQL injection, buffer overflow, XSS, etc.
▪ SPIKE core can fuzz any TCP/IP-based protocol.
3. Exploit Frameworks (Penetration Testing Tools)
o Simulate real attacks by exploiting vulnerabilities.
o Examples:
▪ Core Impact (Paid)
▪ CANVAS (Paid)
▪ Metasploit Framework (Free) – Automates exploits with customization (e.g., adding
users, modifying data).
4. Passive Scanners
o Monitor network traffic without generating packets.
o Detect client- and server-side vulnerabilities.
o Examples:
▪ Tenable PVS
▪ Sourcefire RNA
What kind of data and information can be found using a packet sniffer?
Packet Sniffers
Packet sniffers, also known as network protocol analyzers, are tools that capture and analyze network
packets. These tools are crucial for diagnosing network issues but can also be misused for eavesdropping if
in the wrong hands. Commercial sniffers like Sniffer and open-source alternatives like Snort are available. A
widely used, free, and powerful sniffer is Wireshark, formerly Ethereal, which can inspect both live and
saved network traffic. It offers features like protocol filters and TCP session reconstruction.
Legal Usage Conditions:
To legally use a packet sniffer, administrators must:
1. Be on a network owned by their organization.
2. Have direct authorization from network owners.

3. Have user consent (often obtained during account creation).
These conditions are the same for legal employee monitoring.
Misconceptions and Threats:
Many assume switched networks are immune to sniffing; however, techniques like ARP spoofing and
session hijacking (using tools like Ettercap) can bypass this. Thus, encryption is essential to protect data in
transit.

Wireless Security Tools:


Wireless networks, particularly those using the 802.11 standard, are widespread and convenient but
vulnerable. Proper assessment and protection of wireless subnets are critical.
Top Wireless Tools (as per Insecure.org, 2006):
1. Kismet – A passive wireless sniffer and intrusion detection tool.
2. NetStumbler – A freeware Windows-based network detection tool.
3. Aircrack – A WEP/WPA key cracking tool.
4. Airsnort – Cracks 802.11 WEP encryption.
5. KisMac – A passive stumbler for Mac OS X.
Another useful tool is AirSnare, which monitors wireless traffic for unauthorized devices or access points
and alerts administrators when new hardware is detected.

What is biometric authentication? What does the term biometric mean?
Biometric Access Controls
Biometric access control authenticates a user (supplicant) by recognizing unique human traits. Unlike
passwords or ID cards, biometrics rely on inherent characteristics, making them difficult to fake. These
systems compare live input (like a fingerprint) with a stored, encrypted template during login attempts.

Types of Biometric Methods:


• Fingerprint, palm print, and hand geometry recognition
• Facial recognition, both manual (via ID) and automated (via camera)
• Eye-based scans: Retinal (blood vessel pattern) and Iris (pattern of features like freckles and
striations)
• Signature recognition, using stylus input and comparison with stored data
• Voice recognition, comparing stored and live voiceprints
Only three characteristics are considered truly unique: fingerprints, retina, and iris. Most biometric systems
extract and store minutiae—unique points like fingerprint ridges—for matching.

Accuracy Metrics:
1. False Reject Rate (FRR): Type I error—valid users wrongly denied access. A nuisance, but not a
security risk.
2. False Accept Rate (FAR): Type II error—unauthorized users wrongly granted access. A critical risk.
3. Crossover Error Rate (CER): The point where FAR and FRR are equal. A lower CER indicates a
better biometric system.
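
A small Python sketch of locating the crossover error rate from measured FRR and FAR values at
different sensitivity settings; the sample numbers are invented.

```python
# (sensitivity threshold, false reject rate %, false accept rate %) -- hypothetical measurements.
measurements = [
    (0.2, 1.0, 9.0),
    (0.4, 2.5, 5.5),
    (0.6, 4.0, 4.0),   # FRR equals FAR here, so the CER is about 4%
    (0.8, 7.0, 2.0),
]

# The CER is (approximately) the point where the two error rates are closest to equal.
threshold, frr, far = min(measurements, key=lambda m: abs(m[1] - m[2]))
print(f"Crossover near threshold {threshold}: FRR={frr}%, FAR={far}% (CER ~= {(frr + far) / 2}%)")
```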

Effectiveness vs. Acceptability:


• Highly accurate methods (e.g., iris scan) tend to be less user-friendly.
• More acceptable methods (e.g., signature or voice recognition) tend to be less secure.
• Security professionals must balance effectiveness, user acceptance, and error rates when
deploying these systems.
