
Chapter 4

This document outlines the learning objectives and key concepts related to intrusion detection systems (IDS), including types of intruder behaviors, host-based and network-based IDS features, and the purpose of honeypots. It details the components of IDS, such as sensors and analyzers, and discusses detection methods like anomaly and signature detection. Additionally, it introduces Snort as an example of an IDS, highlighting its architecture and rule-based detection capabilities.


LEARNING OBJECTIVES:

• After studying this chapter, you should be able to:
• Distinguish among various types of intruder behavior patterns.
• Understand the basic principles of and requirements for
intrusion detection.
• Discuss the key features of host-based intrusion detection.
• Explain the concept of distributed host-based intrusion detection.

• Discuss the key features of network-based intrusion detection.


• Define the intrusion detection exchange format.
• Explain the purpose of honeypots.
• Present an overview of Snort.
INTRUDERS
• Three classes of intruders have been identified:
• Masquerader: An individual who is not authorized to use
the computer and who penetrates a system’s access
controls to exploit a legitimate user’s account
• Misfeasor: A legitimate user who accesses data,
programs, or resources for which such access is not
authorized, or who is authorized for such access but
misuses his or her privileges.
• Clandestine user: An individual who seizes supervisory
control of the system and uses this control to evade
auditing and access controls or to suppress audit collection.
The following are examples of intrusion:
• Performing a remote root compromise of an e-mail server
• Defacing a Web server
• Guessing and cracking passwords
• Copying a database containing credit card numbers
• Viewing sensitive data, including payroll records and medical information, without authorization
• Running a packet sniffer on a workstation to capture usernames and passwords
• Using a permission error on an anonymous FTP server to distribute pirated software and music files
• Dialing into an unsecured modem and gaining internal network access
• Posing as an executive, calling the help desk, resetting the executive's e-mail password, and learning the new password
• Using an unattended, logged-in workstation without permission
Some Examples of Intruder Patterns of Behavior
INTRUSION DETECTION
IDSs can be classified as follows:
• Host-based IDS : Monitors the characteristics of a single
host and the events occurring within that host for
suspicious activity
• Network-based IDS : Monitors network traffic for
particular network segments or devices and analyzes
network, transport, and application protocols to identify
suspicious activity
An IDS has three logical components (a simple sketch follows this list):
• Sensors: Sensors are responsible for collecting data. The input for a
sensor may be any part of a system that could contain evidence of an
intrusion. Types of input to a sensor include network packets, log files,
and system call traces. Sensors collect and forward this information to
the analyzer.
• Analyzers: Analyzers receive input from one or more sensors or from
other analyzers. The analyzer is responsible for determining if an
intrusion has occurred. The output of this component is an indication
that an intrusion has occurred. The output may include evidence
supporting the conclusion that an intrusion occurred. The analyzer may
provide guidance about what actions to take as a result of the
intrusion.
• User interface: The user interface to an IDS enables a user to view
output from the system or control the behavior of the system. In some
systems, the user interface may equate to a manager, director, or
console component.
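
To make the data flow concrete, here is a minimal Python sketch of how a sensor might forward events to an analyzer, which in turn reports to a user interface; all class and function names are illustrative and not taken from any particular IDS.

# Minimal sketch of the sensor -> analyzer -> user interface flow.
# All names here are illustrative; a real IDS would read packets,
# log files, or system-call traces rather than a hard-coded list.

class Sensor:
    """Collects raw events and forwards them to an analyzer."""
    def __init__(self, analyzer):
        self.analyzer = analyzer

    def collect(self, events):
        for event in events:
            self.analyzer.receive(event)

class Analyzer:
    """Decides whether forwarded events indicate an intrusion."""
    def __init__(self, alert_sink):
        self.alert_sink = alert_sink

    def receive(self, event):
        # Placeholder decision rule: flag repeated login failures.
        if event.get("type") == "login_failure" and event.get("count", 0) > 3:
            self.alert_sink(f"Possible intrusion: {event}")

def console(message):
    """Stands in for the user interface / console component."""
    print(message)

sensor = Sensor(Analyzer(console))
sensor.collect([{"type": "login_failure", "user": "alice", "count": 5}])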
Requirements
• Run continually with minimal human supervision.
• Be fault tolerant in the sense that it must be able to recover from system crashes
and reinitializations.
• Resist subversion. The IDS must be able to monitor itself and detect if it has been
modified by an attacker.
• Impose a minimal overhead on the system where it is running.
• Be able to be configured according to the security policies of the system that is
being monitored.
• Be able to adapt to changes in system and user behavior over time.
• Be able to scale to monitor a large number of hosts.
• Provide graceful degradation of service in the sense that if some components of
the IDS stop working for any reason, the rest of them should be affected as little as
possible.
• Allow dynamic reconfiguration; that is, the ability to reconfigure the IDS without
having to restart it.
Host-based IDSs
• 1. Anomaly detection: Involves the collection of data relating to the
behavior of legitimate users over a period of time. Then statistical
tests are applied to observed behavior to determine with a high level
of confidence whether that behavior is not legitimate user behavior.
The following are two approaches to statistical anomaly detection:
a. Threshold detection: This approach involves defining thresholds,
independent of user, for the frequency of occurrence of various events.
b. Profile based: A profile of the activity of each user is developed
and used to detect changes in the behavior of individual accounts.
• 2. Signature detection: Involves an attempt to define a set of rules
or attack patterns that can be used to decide that a given behavior is
that of an intruder.
Audit Records
• A fundamental tool for intrusion detection is the audit
record. Some record of ongoing activity by users must
be maintained as input to an IDS. Basically, two plans
are used:
• Native audit records.
• Detection-specific audit records.
Each audit record contains the following fields (an illustrative representation follows the list):
• Subject: Initiators of actions. A subject is typically a terminal user but might also be
a process acting on behalf of users or groups of users. All activity arises through
commands issued by subjects. Subjects may be grouped into different access classes,
and these classes may overlap.
• Action: Operation performed by the subject on or with an object; for example, login,
read, perform I/O, execute.
• Object: Receptors of actions. Examples include files, programs, messages, records,
terminals, printers, and user- or program-created structures. When a subject is the
recipient of an action, such as electronic mail, then that subject is considered an
object. Objects may be grouped by type. Object granularity may vary by object type
and by environment. For example, database actions may be audited for the database
as a whole or at the record level.
• Exception-Condition: Denotes which, if any, exception condition is raised on return.
• Resource-Usage: A list of quantitative elements in which each element gives the
amount used of some resource (e.g., number of lines printed or displayed, number of
records read or written, processor time, I/O units used, session elapsed time).
• Time-Stamp: Unique time-and-date stamp identifying when the action took place.
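
As an illustration only, the fields above could be carried in a structure such as the following Python sketch; the dataclass and its field names are assumptions made for this example, not the native audit-record format of any operating system.

# Illustrative representation of an audit record with the fields
# described above; not the native format of any particular system.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AuditRecord:
    subject: str                 # initiator of the action (user or process)
    action: str                  # e.g. "login", "read", "execute"
    obj: str                     # receptor of the action (file, program, ...)
    exception_condition: str     # exception raised on return, if any
    resource_usage: dict = field(default_factory=dict)  # e.g. {"cpu_ms": 12}
    timestamp: datetime = field(default_factory=datetime.utcnow)

record = AuditRecord(subject="alice", action="read",
                     obj="/etc/passwd", exception_condition="none",
                     resource_usage={"records_read": 1})
print(record)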
Anomaly Detection
• Two approaches are used: threshold detection and profile-based detection.
• Threshold detection involves counting the number of occurrences of a specific
event type over an interval of time. If the count surpasses what is considered a
reasonable number that one might expect to occur, then intrusion is assumed.
Threshold analysis, by itself, is a crude and ineffective detector of even
moderately sophisticated attacks. Both the threshold and the time interval
must be determined. Because of the variability across users, such thresholds
are likely to generate either a lot of false positives or a lot of false negatives.
However, simple threshold detectors may be useful in conjunction with more
sophisticated techniques (a minimal counting sketch follows this list).
• Profile-based anomaly detection focuses on characterizing the past behavior of
individual users or related groups of users and then detecting significant
deviations. A profile may consist of a set of parameters , so that deviation on
just a single parameter may not be sufficient in itself to signal an alert. The
foundation of this approach is an analysis of audit records. The audit records
provide input to the intrusion detection function in two ways. First, the designer
must decide on a number of quantitative metrics that can be used to measure
user behavior. An analysis of audit records over a period of time can be used to
determine the activity profile of the average user. Thus, the audit records serve
to define typical behavior. Second, current audit records are the input used to
detect intrusion. That is, the intrusion detection model analyzes incoming audit
records to determine deviation from average behavior.
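
As a minimal sketch of threshold detection (assuming a single event type, an illustrative 60-second window, and a threshold of 5), the following Python fragment counts recent events per source and raises an alert when the count is exceeded:

# Minimal threshold-detection sketch: count occurrences of one event
# type in a sliding time window and flag the source when the count
# exceeds a threshold. The threshold and window are illustrative.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 60
THRESHOLD = 5          # e.g. more than 5 failed logins per minute

events = defaultdict(deque)   # source -> timestamps of recent events

def record_event(source, now=None):
    now = now if now is not None else time.time()
    q = events[source]
    q.append(now)
    # Drop timestamps that have fallen out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > THRESHOLD:
        print(f"ALERT: {source} exceeded {THRESHOLD} events/{WINDOW_SECONDS}s")

for _ in range(7):            # simulate a burst of failed logins
    record_event("10.0.0.5")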
Metrics that are useful for profile-based intrusion detection are the following:
• Counter: A nonnegative integer that may be incremented but not
decremented until it is reset by management action. Typically, a count of
certain event types is kept over a particular period of time. Examples include
the number of logins by a single user during an hour, the number of times a
given command is executed during a single user session, and the number of
password failures during a minute.
• Gauge: A nonnegative integer that may be incremented or decremented.
Typically, a gauge is used to measure the current value of some entity.
Examples include the number of logical connections assigned to a user
application and the number of outgoing messages queued for a user process.
• Interval timer: The length of time between two related events. An example is
the length of time between successive logins to an account.
• Resource utilization: Quantity of resources consumed during a specified
period. Examples include the number of pages printed during a user session
and total time consumed by a program execution.
• Given these general metrics, various tests can be performed to
determine whether current activity fits within acceptable limits. The
following approaches may be taken (a sketch of the first follows the list):
• Mean and standard deviation
• Multivariate
• Markov process
• Time series
• Operational
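
For example, the mean and standard deviation approach can be sketched as follows; the historical metric values and the three-standard-deviation cutoff are illustrative assumptions, not values from any real profile:

# Sketch of a mean and standard deviation test on a single metric.
# The historical values and the 3-sigma cutoff are illustrative.
import statistics

history = [12, 15, 11, 14, 13, 16, 12, 15]   # e.g. logins per day
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(observation, sigmas=3.0):
    """Flag values more than `sigmas` standard deviations from the mean."""
    return abs(observation - mean) > sigmas * stdev

print(is_anomalous(14))   # False: within normal range
print(is_anomalous(45))   # True: far outside historical behavior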
Signature Detection
• Rule-based anomaly detection is similar in terms of its approach and
strengths to statistical anomaly detection. With the rule-based approach,
historical audit records are analyzed to identify usage patterns and to
automatically generate rules that describe those patterns. Rules may
represent past behavior patterns of users, programs, privileges, time slots,
terminals, and so on. Current behavior is then observed, and each
transaction is matched against the set of rules to determine if it conforms to
any historically observed pattern of behavior. As with statistical anomaly
detection, rule-based anomaly detection does not require knowledge of
security vulnerabilities within the system. Rather, the scheme is based on
observing past behavior and, in effect, assuming that the future will be like
the past.
• In order for this approach to be effective, a rather large database of rules
will be needed. For example, such a scheme may contain anywhere from
10^4 to 10^6 rules.
• Rule-based penetration identification takes a very different
approach to intrusion detection. The key feature of such
systems is the use of rules for identifying known penetrations or
penetrations that would exploit known weaknesses. Rules can
also be defined that identify suspicious behavior, even when the
behavior is within the bounds of established patterns of usage.
Typically, the rules used in these systems are specific to the
machine and operating system. The most fruitful approach to
developing such rules is to analyze attack tools and scripts
collected on the Internet. These rules can be supplemented with
rules generated by knowledgeable security personnel. In this
latter case, the normal procedure is to interview system
administrators and security analysts to collect a suite of known
penetration scenarios and key events that threaten the security
of the target system. A simple example of the type of rules that
can be used is found in NIDX, an early system that used heuristic
rules to assign degrees of suspicion to activities (a rule-matching
sketch in this spirit follows).
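
In the spirit of such heuristic rules, a minimal Python sketch might associate a degree of suspicion with each matched pattern; the rules, weights, and event fields below are purely illustrative and are not taken from NIDX or any real rule set.

# Sketch of rule-based detection in the spirit described above:
# each rule matches an audit event pattern and contributes a degree
# of suspicion. The rules and weights here are purely illustrative.

RULES = [
    {"name": "user opens password file",
     "match": lambda e: e["action"] == "read" and e["object"] == "/etc/passwd",
     "suspicion": 3},
    {"name": "repeated password failures",
     "match": lambda e: e["action"] == "login_failure" and e.get("count", 0) > 3,
     "suspicion": 5},
]

def score(event):
    """Return the total degree of suspicion for one audit event."""
    return sum(r["suspicion"] for r in RULES if r["match"](event))

event = {"subject": "mallory", "action": "read", "object": "/etc/passwd"}
print(score(event))   # 3 -> moderately suspicious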
DISTRIBUTED HOST-BASED
INTRUSION DETECTION
Architecture for Distributed Intrusion Detection
Cont..
• Host agent module: An audit collection module operating as a
background process on a monitored system. Its purpose is to collect
data on security-related events on the host and transmit these to
the central manager.
• LAN monitor agent module: Operates in the same
fashion as a host agent module except that it analyzes
LAN traffic and reports the results to the central
manager.
• Central manager module: Receives reports from LAN
monitor and host agents and processes and correlates
these reports to detect intrusion.
A network-based IDS (NIDS)
• Types of Network Sensors:
• Inline sensors and passive sensors
INTRUSION DETECTION EXCHANGE
FORMAT
• Intrusion Detection Message Exchange Requirements
(RFC 4766):
• The Intrusion Detection Message Exchange Format (RFC
4765):
• The Intrusion Detection Exchange Protocol (RFC 4767).
The functional components are as
follows:
• Data source: The raw data that an IDS uses to detect unauthorized or undesired
activity. Common data sources include network packets, operating system audit
logs, application audit logs, and system-generated checksum data.
• Sensor: Collects data from the data source. The sensor forwards events to the
analyzer.
• Analyzer: The ID component or process that analyzes the data collected by the
sensor for signs of unauthorized or undesired activity or for events that might be
of interest to the security administrator. In many existing IDSs, the sensor and
the analyzer are part of the same component.
• Administrator: The human with overall responsibility for setting the security
policy of the organization, and, thus, for decisions about deploying and configuring
the IDS. This may or may not be the same person as the operator of the IDS. In
some organizations, the administrator is associated with the network or systems
administration groups. In other organizations, it’s an independent position.
Cont..
• Manager: The ID component or process from which the
operator manages the various components of the ID
system. Management functions typically include sensor
configuration, analyzer configuration, event notification
management, data consolidation, and reporting.
• Operator: The human that is the primary user of the IDS
manager. The operator often monitors the output of the
IDS and initiates or recommends further action.
HONEYPOTS
• Honeypots are designed to:
• Divert an attacker from accessing critical systems.
• Collect information about the attacker's activity.
• Encourage the attacker to stay on the system long enough for administrators to respond.
EXAMPLE SYSTEM: SNORT
• Snort is an open source, highly configurable and portable host-
based or network-based IDS. Snort is referred to as a lightweight
IDS, which has the following characteristics:
• Easily deployed on most nodes (host, server, router) of a network
• Efficient operation that uses a small amount of memory and
processor time
• Easily configured by system administrators who need to implement
a specific security solution in a short amount of time
• Snort can perform real-time packet capture, protocol analysis, and
content searching and matching. Snort can detect a variety of
attacks and probes, based on a set of rules configured by a system
administrator.
Snort Architecture
A Snort installation consists of four logical components (Figure 8.9):
• Packet decoder: The packet decoder processes each captured
packet to identify and isolate protocol headers at the data link,
network, transport, and application layers. The decoder is designed
to be as efficient as possible and its primary work consists of setting
pointers so that the various protocol headers can be easily extracted.
• Detection engine: The detection engine does the actual work of
intrusion detection. This module analyzes each packet based on a set
of rules defined for this configuration of Snort by the security
administrator. In essence, each packet is checked against all the
rules to determine if the packet matches the characteristics defined
by a rule. The first rule that matches the decoded packet triggers the
action specified by the rule. If no rule matches the packet, the
detection engine discards the packet (a simplified first-match loop is sketched after this list).
• Logger: For each packet that matches a rule, the rule
specifies what logging and alerting options are to be
taken. When a logger option is selected, the logger
stores the detected packet in human readable format or
in a more compact binary format in a designated log
file. The security administrator can then use the log file
for later analysis.
• Alerter: For each detected packet, an alert can be
sent. The alert option in the matching rule determines
what information is included in the event notification.
The event notification can be sent to a file, to a UNIX
socket, or to a database. Alerting may also be turned off
during testing or penetration studies. Using the UNIX
socket, the alert can be sent to a management machine
elsewhere on the network.
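
The first-match behavior of the detection engine can be pictured with the simplified Python loop below; the rule representation is an assumption made for illustration and is not Snort's internal data structure.

# Simplified first-match loop in the spirit of the detection engine
# described above. The rule representation is illustrative only.

def detect(packet, rules):
    """Return the action of the first matching rule, or None."""
    for rule in rules:
        if rule["matches"](packet):
            return rule["action"]          # first match wins
    return None                            # no rule matched: discard

rules = [
    {"action": "alert",
     "matches": lambda p: p["proto"] == "tcp" and p["dst_port"] == 23},
    {"action": "log",
     "matches": lambda p: p["proto"] == "udp"},
]

print(detect({"proto": "tcp", "dst_port": 23}, rules))   # alert
print(detect({"proto": "icmp", "dst_port": 0}, rules))   # None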
Snort Rules
• Snort uses a simple, flexible rule definition language that
generates the rules used by the detection engine.
Although the rules are simple and straightforward to write,
they are powerful enough to detect a wide variety of
hostile or suspicious traffic.
• Each rule consists of a fixed header and zero or more options
(Figure 8.10). The header has the following elements (a parsing
sketch follows the list):
• Action: The rule action tells Snort what to do when it
finds a packet that matches the rule criteria.
• Protocol: Snort proceeds in the analysis if the packet
protocol matches this field. The current version of Snort
(2.6) recognizes four protocols: TCP, UDP, ICMP, and IP.
Future releases of Snort will support a greater range of
protocols.
• Source IP address: Designates the source of the packet.
The rule may specify a specific IP address, any IP
address, a list of specific IP addresses, or the negation
of a specific IP address or list. The negation indicates
that any IP address other than those listed is a match.
• Source port: This field designates the source port for the
specified protocol (e.g., a TCP port). Port numbers may
be specified in a number of ways, including specific port
number, any ports, static port definitions, ranges, and
by negation.
• Direction: This field takes on one of two values:
unidirectional (->) or bidirectional (<>). The bidirectional
option tells Snort to consider the address/port pairs in
the rule as either source followed by destination or
destination followed by source. The bidirectional option
enables Snort to monitor both sides of a conversation.
• Destination IP address: Designates the destination of
the packet.
• Destination port: Designates the destination port.
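
To make the header layout concrete, the following Python sketch splits a simplified Snort-style rule header into the seven elements listed above; the sample rule and the parsing approach are illustrative and ignore the options part.

# Split a simplified Snort-style rule header into its elements.
# The sample rule is illustrative and the options part is ignored.
from typing import NamedTuple

class RuleHeader(NamedTuple):
    action: str
    protocol: str
    src_ip: str
    src_port: str
    direction: str
    dst_ip: str
    dst_port: str

def parse_header(rule_text):
    # Header fields are whitespace-separated before the options "(...)".
    header = rule_text.split("(", 1)[0].split()
    return RuleHeader(*header)

sample = 'alert tcp any any -> 192.168.1.0/24 80 (msg:"illustrative rule";)'
print(parse_header(sample))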
Cont..
• There are four major categories of rule options:
• meta-data: Provide information about the rule but do
not have any effect during detection
• payload: Look for data inside the packet payload and
can be interrelated
• non-payload: Look for non-payload data
• post-detection: Rule-specific triggers that happen after
a rule has matched a packet
END
