Lecture 2 Detecting System Intrusion
INTRODUCTION
First things first: Detecting system intrusion is not the same as Intrusion Detection
System/Intrusion Prevention System (IDS/IPS). We want to detect system intrusion once
attackers pass all defensive technologies in the company (such as IDS/IPS mentioned above),
full-packet capture devices with analysts behind them, firewalls, physical security guards, and all
other preventive technologies and techniques. Many preventive technologies rely on
blacklisting [1] most of the time, and that is why they fail: blacklisting allows everything by
default and forbids only what is considered to be malicious. For the attacker, finding yet another
way to bypass the filter is merely a challenge. It is much harder to circumvent a whitelisting
system, which denies everything that is not explicitly allowed.
a) Monitoring Key Files in the System
One of the ways to monitor changes in the file system is to implement LoggedFS. This particular
file system logs everything that happens inside the file system. It is easily configurable via
Extensible Markup Language (XML) files to fit your needs.
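As an illustration, a minimal LoggedFS configuration might look like the following. This is a sketch based on LoggedFS's documented XML format; the specific filter patterns are assumptions for this example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<loggedFS logEnabled="true" printProcessName="true">
  <!-- Log every action by every user on every file... -->
  <includes>
    <include extension=".*" uid="*" action=".*" retname=".*"/>
  </includes>
  <!-- ...except successful operations on temporary files. -->
  <excludes>
    <exclude extension=".*\.tmp" uid="*" action=".*" retname="SUCCESS"/>
  </excludes>
</loggedFS>
```

The include/exclude pairs let you narrow logging to the files and operations that matter, keeping log volume manageable.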
File integrity monitoring (FIM) is an internal control or process that performs the act of
validating the integrity of the operating system and application software files using a verification
method between the current file state and the known, good baseline. This comparison method
often involves calculating a known cryptographic checksum of the file’s original baseline and
comparing that with the calculated checksum of the current state of the file. Other file attributes
can also be used to monitor integrity. Generally, the act of performing file integrity monitoring is
automated, using internal controls such as an application or a process. Such monitoring can be
performed randomly, at a defined polling interval, or in real time.
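The baseline-comparison idea behind FIM can be sketched in a few lines. This is a minimal illustration, not a production FIM tool; the function names are ours, and a real product would also protect the baseline itself from tampering:

```python
import hashlib
import os

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(paths):
    """Record the known-good checksum and size of each monitored file."""
    return {p: {"sha256": sha256_of(p), "size": os.path.getsize(p)}
            for p in paths}

def check_integrity(baseline):
    """Compare the current file state against the baseline.

    Returns the list of paths that were removed or whose contents changed.
    """
    changed = []
    for path, known in baseline.items():
        if not os.path.exists(path) or sha256_of(path) != known["sha256"]:
            changed.append(path)
    return changed
```

In practice the baseline would be built right after a clean install and the check run at a polling interval (or triggered in real time), exactly as described above.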
b) Security Objectives
Changes to configurations, files, and file attributes across the IT infrastructure are common; but
hidden within a large volume of daily changes can be the few that impact the file or
configuration integrity. These changes can also reduce security posture and in some cases may
be leading indicators of a breach in progress.
Values monitored for unexpected changes to files or configuration items include:
Credentials
Privileges and security settings
Content
Core attributes and size
Hash values
Configuration values
c) Zero-Day Attacks
About 90 percent of all successful compromises are made via known flaws, so zero-day attacks
are not that common. A zero-day attack or threat is an attack that exploits a previously unknown
vulnerability in a computer application, meaning that the attack occurs on “day zero” of
awareness of the vulnerability. This means that the developers have had zero days to address and
patch the vulnerability. Zero-day exploits (actual software that uses a security hole to carry out
an attack) are used or shared by attackers before the developer of the target software knows
about the vulnerability.
Attack Vectors
Malware writers are able to exploit zero-day vulnerabilities through several different attack
vectors. Web browsers are a particular target because of their widespread distribution and usage.
Attackers can also send email attachments, which exploit vulnerabilities in the application
opening the attachment. Exploits that take advantage of common file types are listed in databases
such as those of the United States Computer Emergency Readiness Team (US-CERT). Malware
can be engineered
to take advantage of these file-type exploits to compromise attacked systems or steal confidential
data such as banking passwords and personal identity information.
Vulnerability Window
Zero-day attacks occur during the vulnerability window that exists in the time between when a
vulnerability is first exploited and when software developers develop and publish a counter to
that threat. For viruses, Trojans, and other zero-day attacks, the vulnerability window opens
when the exploit is first used and closes only once the vendor’s patch is released and applied.
A special type of vulnerability management process focuses on finding and eliminating zero-day
weaknesses. This unknown vulnerability management life cycle is a security and quality
assurance process that aims to ensure the security and robustness of both in-house and third-party
software products by finding and fixing unknown (zero-day) vulnerabilities.
The unknown vulnerability management process consists of four phases: analyze, test, report,
and mitigate.
d) Good Known State
When attackers compromise a system, what is the very first thing they do? They install
backdoors, as many different ones as possible. So, even if a backdoor is found on the system and
deleted, that does not mean the system is clean. It is much safer to restore the system to a good
known state; typically this is done via OS reinstallation.
e) Rootkits
A rootkit is a stealthy type of malicious software designed to hide the existence of certain
processes or programs from normal methods of detection, and enables continued privileged
access to a computer. The term rootkit is a concatenation of the word “root” (the traditional name
of the privileged account on Unix operating systems) and the word “kit” (which refers to the
software components that implement the tool). The term rootkit has negative connotations
through its association with malware.
There are many rootkit-detection tools meant to be run on a live system. One of many examples
is the software Rootkit Hunter (rkhunter).
f) Security Awareness
Security awareness is the knowledge and attitude members of an organization possess regarding
the protection of the physical and, especially, information assets of that organization. Many
organizations require formal security awareness training for all workers when they join the
organization and periodically thereafter (usually annually). Topics covered in security awareness
training include the following:
The nature of sensitive material and physical assets that they may come in contact with,
such as trade secrets, privacy concerns, and government classified information
Employee and contractor responsibilities in handling sensitive information, including the
review of employee nondisclosure agreements
Requirements for proper handling of sensitive material in physical form, which includes
marking, transmission, storage, and destruction
Proper methods for protecting sensitive information on computer systems, including
password policy and use of two-factor authentication
Other computer security concerns, including malware, phishing, and social engineering
Workplace security, including building access, wearing of security badges, reporting of
incidents, and forbidden articles
Consequences of failure to properly protect information, including the potential loss of
employment, economic consequences to the firm, damage to individuals whose private
records are divulged, and possible civil and criminal penalties.
g) Data Correlation
Data correlation is a technique used in information security to put all the pieces together and
come up with meaningful information. For example, if you see SSH connection attempts against
a Linux system coming in all day long, and after some 200 failed attempts there is a successful
login, what does that tell you? It is a good starting point to suspect that a brute-force attack is in
progress. All of these technologies help to uncover intrusions; however, technologies do not find
intrusions, people do. Appliances and sensors are typically good at finding bad events, but good
events can combine into bad ones as well. Let’s look at a simple scenario in which a human
makes a determination about compromise:
Let’s say there is a company with many employees who travel a lot around the globe. The
company is doing a good job by implementing various control systems and logging systems; this
company also uses Radio Frequency Identification (RFID) enabled cards for its employees in
order to track who is coming and leaving its offices. All data is collected and pushed to the
Security Information and Event Management (SIEM) engine to correlate events and logs. One
morning two seemingly good events come into SIEM. The first event is user John’s virtual
private network (VPN) connection being established from overseas to the corporate office. The
second event is user John’s RFID badge being scanned at the entrance to the corporate office.
Well, both events are pretty standard and are harmless when taken separately, but when
combined, they reveal something weird. How can user John VPN in from overseas and get a
physical entrance to the office at the same time? The answer is one of two: Either the VPN
credentials have been compromised, or his employee card is being used by someone else to enter
the office.
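The badge-versus-VPN contradiction above can be expressed as a simple correlation rule. The event format, field names, and time window below are illustrative assumptions; a real SIEM would normalize such events from many log sources:

```python
from datetime import datetime, timedelta

# Illustrative, hand-written event records (not a real SIEM feed).
events = [
    {"user": "john", "type": "vpn_login", "location": "overseas",
     "time": datetime(2024, 1, 8, 9, 0)},
    {"user": "john", "type": "badge_scan", "location": "hq_entrance",
     "time": datetime(2024, 1, 8, 9, 5)},
]

def correlate_impossible_presence(events, window=timedelta(hours=1)):
    """Flag users with a remote VPN login and a physical badge scan
    within the same time window -- each benign alone, suspicious together."""
    alerts = []
    for vpn in events:
        if vpn["type"] != "vpn_login":
            continue
        for badge in events:
            if (badge["type"] == "badge_scan"
                    and badge["user"] == vpn["user"]
                    and abs(badge["time"] - vpn["time"]) <= window):
                alerts.append((vpn["user"], vpn["time"], badge["time"]))
    return alerts
```

Running the rule over the two events yields one alert for user John, which an analyst would then investigate to decide which credential was stolen.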
h) SIEM
Security Information and Event Management (SIEM) solutions are a combination of the formerly
disparate product categories of SIM (security information management) and SEM (security
event management). SIEM technology provides real-time analysis of security alerts generated
by network hardware and applications. SIEM solutions come as software, appliances, or
managed services,
and they are also used to log security data and generate reports for compliance purposes. The
acronyms SEM, SIM, and SIEM have been used interchangeably, though there are differences in
meaning and product capabilities. The segment of security management that deals with real-time
monitoring, correlation of events, notifications, and console views is commonly known as
Security Event Management (SEM). The second area provides long term storage, analysis, and
reporting of log data and is known as Security Information Management (SIM).
The term Security Information and Event Management (SIEM) describes the product
capabilities of gathering, analyzing, and presenting information from network and security
devices; identity and access management applications; vulnerability management and policy
compliance tools; operating system, database, and application logs; and external threat data. A
key focus is to monitor and help manage user and service privileges, directory services, and
other system configuration changes, as well as to provide log auditing and review and incident
response. The following is a list of SIEM capabilities:
Data Aggregation: SIEM/LM (log management) solutions aggregate data from many
sources, including network, security, servers, databases, and applications, providing the
ability to consolidate monitored data to help avoid missing crucial events.
Correlation: Looks for common attributes and links events together into meaningful
bundles. This technology provides the ability to perform a variety of correlation
techniques to integrate different sources, in order to turn data into useful information.
Alerting: The automated analysis of correlated events and production of alerts, to notify
recipients of immediate issues.
Dashboards: SIEM/LM tools take event data and turn it into informational charts to assist
in seeing patterns or identifying activity that is not forming a standard pattern.
Compliance: SIEM applications can be employed to automate the gathering of
compliance data, producing reports that adapt to existing security, governance, and
auditing processes.
Retention: SIEM/SIM solutions employ long-term storage of historical data to facilitate
the correlation of data over time and to provide the retention necessary for compliance
requirements.
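As a toy illustration of the aggregation and alerting capabilities listed above, consider counting failed logins per source address and flagging sources that cross a threshold. The log format and threshold here are assumptions for the example, not any particular product’s behavior:

```python
import re
from collections import Counter

# Assumed syslog-style lines; a real SIEM normalizes many log formats.
LOG_LINES = [
    "sshd: Failed password for root from 203.0.113.9",
    "sshd: Failed password for root from 203.0.113.9",
    "sshd: Failed password for admin from 203.0.113.9",
    "sshd: Accepted password for alice from 198.51.100.7",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def aggregate_failures(lines):
    """Aggregation: count failed login events per source IP."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

def alert(counts, threshold=3):
    """Alerting: flag sources at or above the failure threshold."""
    return [ip for ip, n in counts.items() if n >= threshold]
```

Retention and dashboards would then build on the same aggregated data: stored long term for compliance, and charted so an analyst can spot the pattern at a glance.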