Computer Security Research
Access control - In the fields of physical security and information security, access
control (AC) is the selective restriction of access to a place or other resource. The act
of accessing may mean consuming, entering, or using. Permission to access a resource is
called authorization. Locks and login credentials are two analogous mechanisms of access
control.
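The distinction between access (the act) and authorization (the permission) can be sketched with a toy access-control list; the resource and principal names here are invented for illustration, not a real API:

```python
# Toy access-control list (ACL): resource -> set of authorized principals.
# All names are hypothetical illustrations.
ACL = {
    "server_room": {"alice"},
    "payroll_db": {"alice", "bob"},
}

def is_authorized(principal: str, resource: str) -> bool:
    """Authorization: whether this principal has permission for this resource."""
    return principal in ACL.get(resource, set())

print(is_authorized("bob", "payroll_db"))   # True
print(is_authorized("bob", "server_room"))  # False
```

A login credential establishes who the principal is (authentication); the ACL check above is the separate authorization step.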
Anti-key loggers - An anti-key logger (or anti–keystroke logger) is a type of software
specifically designed for the detection of keystroke logger software; often, such software will
also incorporate the ability to delete or at least immobilize hidden keystroke logger software
on a computer. In comparison to most anti-virus or anti-spyware software, the primary
difference is that an anti-key logger does not make a distinction between
a legitimate keystroke-logging program and an illegitimate keystroke-logging program (such
as malware); all keystroke-logging programs are flagged and optionally removed, whether
they appear to be legitimate keystroke-logging software or not.
Anti-malware - Antivirus software is a computer program used to prevent, detect, and
remove malware. Originally developed to detect and remove computer viruses, antivirus
software now also protects against a range of other threats. In particular, modern
antivirus software can protect users from: malicious browser helper objects (BHOs), browser
hijackers, ransomware, key loggers, backdoors, rootkits, Trojan horses, worms,
malicious LSPs, diallers, fraud tools, adware and spyware. Some products also include
protection from other computer threats, such as infected and
malicious URLs, spam, scam and phishing attacks, online identity (privacy), online
banking attacks, social engineering techniques, advanced persistent threat (APT)
and botnet DDoS attacks.
Anti-spyware - Spyware is software that aims to gather information about a person or
organization, sometimes without their knowledge. It may send that information to another
entity without the consumer's consent, or assert control over a device without the
consumer's knowledge; in other cases it sends such information to another entity with the
consumer's consent, for example through cookies.
Anti-subversion software - Software subversion is the process of making software perform
unintended actions either by tampering with program code or by altering behaviour in
another fashion. For example, code tampering could be used to change program code to
load malicious rules or heuristics; SQL injection is a form of subversion for the purpose
of data corruption or theft; and buffer overflows are a form of subversion for the purpose of
unauthorised access. These attacks are examples of computer hacking.
Anti-tamper software - Anti-tamper software (or tamper-resistant software) is software
which makes it harder for an attacker to modify it. The measures involved can be passive
such as obfuscation to make reverse engineering difficult or active tamper-detection
techniques which aim to make a program malfunction or not operate at all if modified.[1] It is
essentially tamper resistance implemented in the software domain. It shares certain aspects
but also differs from related technologies like copy protection and trusted hardware, though it
is often used in combination with them. Anti-tampering technology typically makes the
software somewhat larger and also has a performance impact. There are no provably
secure software anti-tampering methods; thus, the field is an arms race between attackers
and software anti-tampering technologies.
Cryptographic software - Encryption software is software that uses cryptography to prevent
unauthorized access to digital information. Cryptography is used to protect digital information
on computers as well as the digital information that is sent to other computers over
the Internet.
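A toy illustration of the idea: a self-inverting XOR stream cipher, where applying the same key twice decrypts. This is NOT a secure algorithm, only a sketch of encrypting and decrypting digital information with a shared key (real encryption software uses vetted ciphers such as AES):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; XOR is its own inverse,
    # so running the same function again recovers the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)                       # shared secret key
ciphertext = xor_cipher(b"digital information", key)
plaintext = xor_cipher(ciphertext, key)             # decrypt
assert plaintext == b"digital information"
```

Without the key, the ciphertext is not directly readable; the security of real cryptographic software rests on far stronger constructions than this sketch.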
Computer-aided dispatch (CAD) - Computer-aided dispatch (CAD), also called computer-
assisted dispatch, is a method of dispatching taxicabs, couriers, field service technicians,
mass transit vehicles or emergency services assisted by computer. It can either be used to
send messages to field units via a mobile data terminal (MDT) and/or used to store and
retrieve data (i.e. radio logs, field interviews, client information, schedules, etc.). A dispatcher
may announce the call details to field units over a two-way radio. Some systems
communicate using a two-way radio system's selective calling features. CAD systems may
send text messages with call-for-service details to alphanumeric pagers or wireless
telephony text services like SMS.
Firewall - In computing, a firewall is a network security system that monitors and controls
incoming and outgoing network traffic based on predetermined security rules. A firewall
typically establishes a barrier between a trusted internal network and untrusted external
network, such as the Internet.
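The "predetermined security rules" can be sketched as a first-match rule table; the rules, prefixes, and ports below are hypothetical examples, not a real firewall configuration:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str    # "allow" or "deny"
    src: str       # source address prefix; "*" matches anything
    dst_port: int  # destination port; 0 matches any port

# Trusted internal hosts may reach HTTPS; everything else is denied.
RULES = [
    Rule("allow", "10.0.", 443),
    Rule("deny", "*", 0),       # default deny
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    # First matching rule wins, as in most firewall rule tables.
    for r in RULES:
        if (r.src == "*" or src_ip.startswith(r.src)) and r.dst_port in (0, dst_port):
            return r.action
    return "deny"

print(filter_packet("10.0.3.7", 443))  # allow
print(filter_packet("8.8.8.8", 443))   # deny
```

The trailing default-deny rule is the conventional way to express "untrusted unless explicitly allowed".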
Intrusion detection system (IDS) - An intrusion detection system (IDS) is a device
or software application that monitors a network or systems for malicious activity or policy
violations. Any malicious activity or violation is typically reported either to an administrator or
collected centrally using a security information and event management (SIEM) system. A
SIEM system combines outputs from multiple sources, and uses alarm filtering techniques to
distinguish malicious activity from false alarms. IDS types range in scope from single
computers to large networks. The most common classifications are network intrusion
detection systems (NIDS) and host-based intrusion detection systems (HIDS). A system that
monitors important operating system files is an example of an HIDS, while a system that
analyses incoming network traffic is an example of an NIDS. It is also possible to classify IDS
by detection approach: the most well-known variants are signature-based detection
(recognizing bad patterns, such as malware); and anomaly-based detection (detecting
deviations from a model of "good" traffic, which often relies on machine learning). Some IDS
products have the ability to respond to detected intrusions. Systems with response
capabilities are typically referred to as an intrusion prevention system. Intrusion detection
systems can also serve specific purposes by augmenting them with custom tools, such as
using a honeypot to attract and characterize malicious traffic.
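A minimal sketch of the signature-based detection approach described above, assuming a toy signature set of byte patterns (real NIDS signatures are far richer, with protocol context and anchoring):

```python
# Hypothetical signature database: id -> byte pattern indicating known-bad traffic.
SIGNATURES = {
    "sig-001": b"/etc/passwd",
    "sig-002": b"' OR 1=1 --",
}

def inspect(payload: bytes) -> list:
    # Flag every signature whose pattern appears in the payload.
    return [sid for sid, pattern in SIGNATURES.items() if pattern in payload]

alerts = inspect(b"GET /index.php?q=' OR 1=1 -- HTTP/1.1")
print(alerts)  # ['sig-002']
```

Anomaly-based detection would instead build a statistical model of normal traffic and alert on deviations, which is why it can catch novel attacks but produces more false alarms.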
Intrusion prevention system (IPS) - Some systems may attempt to stop an intrusion
attempt but this is neither required nor expected of a monitoring system. Intrusion detection
and prevention systems (IDPS) are primarily focused on identifying possible incidents,
logging information about them, and reporting attempts. In addition, organizations use IDPS
for other purposes, such as identifying problems with security policies, documenting existing
threats and deterring individuals from violating security policies. IDPS have become a
necessary addition to the security infrastructure of nearly every organization. IDPS typically
record information related to observed events, notify security administrators of important
observed events and produce reports. Many IDPS can also respond to a detected threat by
attempting to prevent it from succeeding. They use several response techniques, which
involve the IDPS stopping the attack itself, changing the security environment (e.g.
reconfiguring a firewall) or changing the attack's content. Intrusion prevention systems (IPS),
also known as intrusion detection and prevention systems (IDPS), are network
security appliances that monitor network or system activities for malicious activity. The main
functions of intrusion prevention systems are to identify malicious activity, log information
about this activity, report it and attempt to block or stop it. Intrusion prevention systems are
considered extensions of intrusion detection systems because they both monitor network
traffic and/or system activities for malicious activity. The main difference is that, unlike
intrusion detection systems, intrusion prevention systems are placed in-line and are able to
actively prevent or block intrusions that are detected. An IPS can take such actions as sending
an alarm, dropping detected malicious packets, resetting a connection or blocking traffic from
the offending IP address. An IPS also can correct cyclic redundancy check (CRC) errors,
defragment packet streams, mitigate TCP sequencing issues, and clean up
unwanted transport and network layer options.
Log management software - Log management (LM) comprises an approach to dealing with
large volumes of computer-generated log messages (also known as audit records, audit
trails, event-logs, etc.).
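A small sketch of one log-management task, aggregating computer-generated log lines by severity; the timestamp-then-severity line format used here is an assumption, not a standard:

```python
import re
from collections import Counter

# Hypothetical event-log excerpt (format assumed for illustration).
LOG = """\
2024-05-01T10:00:01 ERROR disk full on /var
2024-05-01T10:00:05 INFO backup started
2024-05-01T10:00:09 ERROR disk full on /var
"""

def severity_counts(text: str) -> Counter:
    # Take the second whitespace-separated field of each line as the severity.
    return Counter(m.group(1) for m in re.finditer(r"^\S+ (\w+) ", text, re.M))

print(severity_counts(LOG))  # Counter({'ERROR': 2, 'INFO': 1})
```

At large volumes the real work of LM is normalizing many such formats into one schema before any counting or trend analysis can happen.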
Records management - Records management, also known as records and information
management, is an organizational function devoted to the management of information in an
organization throughout its life cycle, from the time of creation or inscription to its eventual
disposition. This includes identifying, classifying, storing, securing, retrieving, tracking and
destroying or permanently preserving records. The ISO 15489-1:2001 standard defines
records management as "[the] field of management responsible for the
efficient and systematic control of the creation, receipt, maintenance, use and disposition of
records, including the processes for capturing and maintaining evidence of
and information about business activities and transactions in the form of records". In
determining how long to retain records, their capacity for re-use is important. Many are kept
as evidence of activities, transactions, and decisions. Others document what happened and
why. The purpose of records management is part of an organization's broader function
of Governance, risk management, and compliance and is primarily concerned with managing
the evidence of an organization's activities as well as the reduction or mitigation of risk
associated with it.
Sandbox - In computer security, a "sandbox" is a security mechanism for separating running
programs, usually in an effort to mitigate system failures or software vulnerabilities from
spreading. It is often used to execute untested or untrusted programs or code, possibly from
unverified or untrusted third parties, suppliers, users or websites, without risking harm to the
host machine or operating system. A sandbox typically provides a tightly controlled set of
resources for guest programs to run in, such as scratch space on disk and memory. Network
access, the ability to inspect the host system or read from input devices are usually
disallowed or heavily restricted. In the sense of providing a highly controlled environment,
sandboxes may be seen as a specific example of virtualization. Sandboxing is frequently
used to test unverified programs that may contain a virus or other malicious code, without
allowing the software to harm the host device.
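On Unix, one ingredient of the "tightly controlled set of resources" idea can be sketched with per-process resource limits applied to a guest subprocess. This is only one layer; a real sandbox adds filesystem, network, and system-call isolation. The specific limits below are arbitrary illustrations:

```python
import resource
import subprocess
import sys

def limits():
    # Runs in the child process just before exec: cap CPU time at 2 seconds
    # and address space at 512 MB, so a runaway guest is killed by the kernel.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

# Execute the guest program in a separate process under those limits (Unix only).
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the sandbox')"],
    preexec_fn=limits, capture_output=True, text=True, timeout=5,
)
print(result.stdout.strip())
```

Running the guest in its own process with capped resources means a crash or resource exhaustion in the guest cannot take down the host program.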
Security information management - Security information management (SIM) is
an information security industry term for the collection of data such as log files into a central
repository for trend analysis.
SIEM - In the field of computer security, security information and event management (SIEM)
software products and services combine security information management (SIM)
and security event management (SEM). They provide real-time analysis of security alerts
generated by applications and network hardware. Vendors sell SIEM as software, as
appliances or as managed services; these products are also used to log security data and
generate reports for compliance purposes. The term and the initialism SIEM were coined by
Mark Nicolett and Amrit Williams of Gartner in 2005.
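Combining outputs from multiple sources and filtering false alarms might look like the sketch below, where an alarm is raised only when the same IP misbehaves across several independent sources; the event shapes, source names, and threshold are all assumptions for illustration:

```python
from collections import defaultdict

# Normalized events as (source, ip, event_type); a real SIEM ingests and
# normalizes many different log formats into a shape like this.
events = [
    ("firewall", "203.0.113.9", "blocked"),
    ("webapp",   "203.0.113.9", "failed_login"),
    ("vpn",      "203.0.113.9", "failed_login"),
    ("webapp",   "198.51.100.2", "failed_login"),
]

def correlate(events, threshold=2):
    # Alarm filtering: only alert when one IP appears suspicious in at least
    # `threshold` distinct sources, cutting single-source false alarms.
    by_ip = defaultdict(set)
    for source, ip, etype in events:
        if etype in ("failed_login", "blocked"):
            by_ip[ip].add(source)
    return [ip for ip, sources in by_ip.items() if len(sources) >= threshold]

print(correlate(events))  # ['203.0.113.9']
```

The single failed login from 198.51.100.2 is treated as noise, while the cross-source pattern from 203.0.113.9 surfaces as a real alert.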
Anti-theft - An anti-theft system is any device or method used to prevent or deter the
unauthorized appropriation of items considered valuable. Theft is one of the most common
and oldest criminal behaviours. From the invention of the first lock and key to the introduction
of RFID tags and biometric identification, anti-theft systems have evolved to keep pace with
new inventions and the new forms of theft they make possible.
Parental control - Parental controls are features which may be included in digital
television services, computer and video games, mobile devices and software that allow
parents to restrict their children's access to content. This may be content they deem
inappropriate for the child's age or maturity level, or feel is aimed more at an adult
audience. Parental controls fall into roughly four categories: content filters, which limit
access to age-inappropriate content; usage controls, which constrain the usage of these
devices, such as placing time limits on usage or forbidding certain types of usage;
computer usage management tools, which enforce the use of certain software; and
monitoring, which can track location and activity when using the devices.
Career Path
The road to becoming a Chief Information Security Officer (CISO)/CSO often starts with entry-level
security positions such as:
Security Administrator
Network Administrator
System Administrator
Security Analyst
Security Engineer
Security Consultant
Security Architect
In large organizations, it’s possible to be promoted to Senior Security Architect or even Chief
Security Architect.
Security analyst
Analyses and assesses vulnerabilities in the infrastructure (software, hardware, networks),
investigates using available tools and countermeasures to remedy the detected vulnerabilities,
and recommends solutions and best practices. Analyses and assesses damage to the
data/infrastructure as a result of security incidents, examines available recovery tools and
processes, and recommends solutions. Tests for compliance with security policies and
procedures. May assist in the creation, implementation, or management of security solutions.
Security engineer
Performs security monitoring, security and data/logs analysis, and forensic analysis, to detect
security incidents, and mounts the incident response. Investigates and utilizes new technologies
and processes to enhance security capabilities and implement improvements. May also review
code or perform other security engineering methodologies.
Security Consultant/Specialist/Intelligence
Broad titles that encompass any one or all of the other roles or titles tasked with protecting
computers, networks, software, data or information systems against viruses, worms, spyware,
malware, intrusions, unauthorized access, denial-of-service attacks, and an ever-
increasing list of attacks by hackers acting as individuals or as part of organized crime or foreign
governments.
Since the job of a Security Consultant covers the waterfront, technical knowledge is
paramount; employers typically request a wide variety of hard skills.
Security architect
Designs a security system or major components of a security system, and may head a security
design team building a new security system.
Networking
Networking Basics
What is a network
Types of network- internet, intranet, dark net
Network areas – LAN, WAN, MAN, PAN
TCP/IP, OSI model
IP address
Network devices specifications
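Python's standard `ipaddress` module can illustrate the IP address and LAN concepts from the list above, for example checking whether a host belongs to a typical private LAN prefix:

```python
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")  # a typical private LAN
host = ipaddress.ip_address("192.168.1.42")

print(host in lan)        # True: the host sits inside the LAN prefix
print(lan.num_addresses)  # 256 addresses in a /24
print(lan.netmask)        # 255.255.255.0
```

The /24 prefix length is what separates the network portion of the address from the host portion, which is the basis for routing decisions between LAN and WAN.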
Server Concepts
Introduction to windows servers - Windows Server is a brand name for a group of
server operating systems released by Microsoft. It includes all Windows operating
systems that are branded "Windows Server", but not any other Microsoft product. The
first Windows server edition to be released under that brand was Windows Server 2003.
However, the first server edition of Windows was Windows NT 3.1 Advanced Server,
followed by Windows NT 3.5 Server, Windows NT 4.0 Server, and Windows 2000 Server;
the latter was the first server edition to include Active Directory, DNS Server, DHCP
Server, and Group Policy, as well as many other popular features used today.
DNS - The Domain Name System (DNS) is a hierarchical and decentralized naming
system for computers, services, or other resources connected to the Internet or a private
network. It associates various information with domain names assigned to each of the
participating entities. Most prominently, it translates more readily memorized domain
names to the numerical IP addresses needed for locating and identifying computer
services and devices with the underlying network protocols. By providing a
worldwide, distributed directory service, the Domain Name System has been an essential
component of the functionality of the Internet since 1985. The Domain Name System
delegates the responsibility of assigning domain names and mapping those names to
Internet resources by designating authoritative name servers for each domain. Network
administrators may delegate authority over sub-domains of their allocated name space to
other name servers. This mechanism provides distributed and fault-tolerant service and
was designed to avoid a single large central database. The Domain Name System also
specifies the technical functionality of the database service that is at its core. It defines
the DNS protocol, a detailed specification of the data structures and data communication
exchanges used in the DNS, as part of the Internet Protocol Suite. The Internet maintains
two principal namespaces, the domain name hierarchy and the Internet
Protocol (IP) address spaces. The Domain Name System maintains the domain name
hierarchy and provides translation services between it and the address spaces.
Internet name servers and a communication protocol implement the Domain Name
System. A DNS name server is a server that stores the DNS records for a domain; a
DNS name server responds with answers to queries against its database. The most
common types of records stored in the DNS database are for Start of Authority (SOA), IP
addresses (A and AAAA), SMTP mail exchangers (MX), name servers (NS), pointers
for reverse DNS lookups (PTR), and domain name aliases (CNAME). Although not
intended to be a general purpose database, DNS has been expanded over time to store
records for other types of data for either automatic lookups, such as DNSSEC records, or
for human queries such as responsible person (RP) records. As a general purpose
database, the DNS has also been used in combating unsolicited email (spam) by storing
a real-time black hole list (RBL). The DNS database is traditionally stored in a structured
text file, the zone file, but other database systems are common.
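The name-to-address translation described above, including CNAME alias chasing, can be sketched against a toy in-memory zone; real resolvers instead query authoritative name servers over the network, and the single-record-per-name layout here is a simplification:

```python
# Toy zone data: fully qualified name -> (record type, value).
ZONE = {
    "www.example.com.": ("CNAME", "example.com."),  # alias record
    "example.com.":     ("A", "93.184.216.34"),     # IPv4 address record
}

def resolve(name: str) -> str:
    # Follow CNAME aliases until an A record (IPv4 address) is reached.
    rtype, value = ZONE[name]
    while rtype == "CNAME":
        rtype, value = ZONE[value]
    return value

print(resolve("www.example.com."))  # 93.184.216.34
```

This mirrors what happens when a memorable domain name is translated into the numerical IP address needed by the underlying network protocols.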
DHCP - The Dynamic Host Configuration Protocol (DHCP) is a network management
protocol used on UDP/IP networks whereby a DHCP server dynamically assigns an IP
address and other network configuration parameters to each device on a network so they
can communicate with other IP networks. A DHCP server enables computers to request
IP addresses and networking parameters automatically, reducing the need for a network
administrator or a user to manually assign IP addresses to all network devices. In the
absence of a DHCP server, a
computer or other device on the network needs to be manually assigned an IP address,
or to assign itself an APIPA address, which will not enable it to communicate outside its
local subnet. DHCP can be implemented on networks ranging in size from home
networks to large campus networks and regional Internet service
provider networks. A router or a residential gateway can be enabled to act as a DHCP
server. Most residential network routers receive a globally unique IP address within the
ISP network. Within a local network, a DHCP server assigns a local IP address to each
device connected to the network.
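The dynamic-assignment idea can be sketched as a toy lease pool keyed by MAC address; a real DHCP server also handles lease expiry, the DISCOVER/OFFER/REQUEST/ACK message exchange, and options such as gateway and DNS servers:

```python
import ipaddress

class DhcpPool:
    """Toy lease pool: hands out host addresses from a subnet, one per MAC."""

    def __init__(self, cidr: str):
        net = ipaddress.ip_network(cidr)
        self.free = list(net.hosts())  # usable host addresses in the subnet
        self.leases = {}               # MAC address -> assigned IP

    def request(self, mac: str):
        # A renewing client keeps its existing address; new clients get
        # the next free one from the pool.
        if mac not in self.leases:
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]

pool = DhcpPool("192.168.1.0/29")
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.1
print(pool.request("aa:bb:cc:dd:ee:01"))  # same address on renewal
```

Returning the same address on renewal is what keeps devices stable on the local network between lease refreshes.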
Active directory (AD) - Active Directory (AD) is a directory
service that Microsoft developed for the Windows domain networks. It is included in
most Windows Server operating systems as a set of processes and services. Initially,
Active Directory was only in charge of centralized domain management. Starting
with Windows Server 2008, however, Active Directory became an umbrella title for a
broad range of directory-based identity-related services. A server running Active
Directory Domain Service (AD DS) is called a domain controller.
It authenticates and authorizes all users and computers in a Windows domain type
network—assigning and enforcing security policies for all computers and installing or
updating software. For example, when a user logs into a computer that is part of a
Windows domain, Active Directory checks the submitted password and determines
whether the user is a system administrator or normal user. Also, it allows management
and storage of information, provides authentication and authorization mechanisms, and
establishes a framework to deploy other related services: Certificate Services, Active
Directory Federation Services, Lightweight Directory Services and Rights Management
Services. Active Directory uses Lightweight Directory Access Protocol (LDAP) versions 2
and 3, Microsoft's version of Kerberos, and DNS.
Types of Datacentre - public cloud providers (Amazon, Google), scientific computing
centres (national laboratories), co-location centers (private ‘clouds’ where servers are
housed together), ‘in-house’ data centres (facilities owned and operated by the
company using the servers).
Tier-1: 99.671% minimum uptime; up to 28.8 hours of downtime annually; no redundancy.
Tier-2: 99.741% minimum uptime; up to 22 hours of downtime annually; partial redundancy.
Tier-3: 99.982% minimum uptime; no more than 1.6 hours of downtime annually; N+1
redundancy (the facility has what is required to operate, plus a backup).
Tier-4: 99.995% minimum uptime; up to 0.4 hours of downtime annually; 2N+1 redundancy
(two complete systems, plus a backup).
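The downtime figures follow directly from the uptime percentages over a 8,760-hour year; a quick check (note the computed Tier-2 value rounds to 22.7 hours, slightly above the commonly quoted 22):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def downtime_hours(uptime_pct: float) -> float:
    # Allowed annual downtime implied by a minimum-uptime percentage.
    return round((1 - uptime_pct / 100) * HOURS_PER_YEAR, 1)

for tier, pct in [("Tier-1", 99.671), ("Tier-2", 99.741),
                  ("Tier-3", 99.982), ("Tier-4", 99.995)]:
    print(tier, downtime_hours(pct), "hours/year")
```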
Corporate networks - A corporate area network (CAN) is a separate, protected
portion of a corporation's intranet. When people are on the corporate area network,
they are sometimes said to be in the CAN: they do not have access to the Internet --
or to the rest of the corporate network, for that matter. Users may be connected
directly, for example in a token ring configuration, or may be geographically
dispersed and connected by backbone lines. CAN is sometimes said to stand
for campus area network, where it refers to an interconnection of local area networks
(LANs) that are geographically dispersed more widely than in a LAN, but less so than
in a wide area network (WAN).
Cloud servers - A cloud server is a logical server that is built, hosted and delivered
through a cloud computing platform over the Internet. Cloud servers possess and
exhibit similar capabilities and functionality to a typical server but are accessed
remotely from a cloud service provider. A cloud server may also be called a virtual
server or virtual private server. A cloud server is primarily an Infrastructure as a
Service (IaaS) based cloud service model. There are two types of cloud server:
logical and physical. A cloud server is considered to be logical when it is delivered
through server virtualization. In this delivery model, the physical server is logically
distributed into two or more logical servers, each of which has a separate OS, user
interface and apps, although they share physical components from the underlying
physical server. Whereas the physical cloud server is also accessed through the
Internet remotely, it isn’t shared or distributed. This is commonly known as a
dedicated cloud server.
Proxy servers - In computer networks, a proxy server is a server (a computer system or
an application) that acts as an intermediary for requests from clients seeking resources
from other servers. A client connects to the proxy server, requesting some service, such
as a file, connection, web page, or other resource available from a different server and
the proxy server evaluates the request as a way to simplify and control its complexity.
Proxies were invented to add structure and encapsulation to distributed systems.
Security
Understanding security
1. What is security- IT security protects the integrity of information technologies like
computer systems, networks, and data from attack, damage, or unauthorized
access. A business trying to compete in a world of digital transformation needs to
understand how to adopt security solutions that begin with design. This is what it
means to "shift security left"—to make security a part of the infrastructure and
product lifecycle as early as possible. This helps security be both proactive and
reactive. Continuous security is fed by a routine system of feedback and
adaptation, often handled through the use of automatic checkpoints. Automation
ensures fast and effective feedback that doesn’t slow the product lifecycle down.
Integrating security in this way also means that updates and responses can be
implemented quickly and holistically as the security landscape changes.
2. SOC operations- A security operations center (SOC) is a centralized unit that
deals with security issues on an organizational and technical level. A SOC within
a building or facility is a central location from where staff supervises the site,
using data processing technology.[1] Typically, a SOC is equipped
for access monitoring, and controlling of lighting, alarms, and vehicle barriers.
An information security operations center (ISOC) is a dedicated site where
enterprise information systems (web sites, applications, databases, data centers
and servers, networks, desktops and other endpoints) are monitored, assessed,
and defended.
3. Ethical Hacking- Ethical hacking, sometimes called penetration testing, is an
act of intruding/penetrating into systems or networks to find out threats and
vulnerabilities in those systems which a malicious attacker may find and exploit,
causing loss of data, financial loss or other major damages. The purpose of
ethical hacking is to improve the security of the network or systems by fixing the
vulnerabilities found during testing. Ethical hackers may use the same methods
and tools used by malicious hackers, but with the permission of the authorized
person, for the purpose of improving security and defending the systems from
attacks by malicious users.
4. Pen Testing- A penetration test, colloquially known as a pen test, is an
authorized simulated cyber attack on a computer system, performed to evaluate
the security of the system.[1][2] The test is performed to identify both weaknesses
(also referred to as vulnerabilities), including the potential for unauthorized parties
to gain access to the system's features and data,[3][4] as well as
strengths,[5] enabling a full risk assessment to be completed. The process
typically identifies the target systems and a particular goal, then reviews available
information and undertakes various means to attain that goal. A penetration test
target may be a white box (which provides background and system information)
or black box (which provides only basic or no information except the company
name). A gray box penetration test is a combination of the two (where limited
knowledge of the target is shared with the auditor).[6] A penetration test can help
determine whether a system is vulnerable to attack if the defenses were
sufficient, and which defenses (if any) the test defeated. Security issues that the
penetration test uncovers should be reported to the system owner.[8] Penetration
test reports may also assess potential impacts to the organization and suggest
countermeasures to reduce risk. The National Cyber Security Centre describes
penetration testing as the following: "A method for gaining assurance in the
security of an IT system by attempting to breach some or all of that system's
security, using the same tools and techniques as an adversary might." The goals
of a penetration test vary depending on the type of approved activity for any given
engagement with the primary goal focused on finding vulnerabilities that could be
exploited by a nefarious actor and informing the client of those vulnerabilities
along with recommended mitigation strategies. Penetration tests are a
component of a full security audit. For example, the Payment Card Industry Data
Security Standard requires penetration testing on a regular schedule, and after
system changes. Flaw hypothesis methodology is a systems analysis and
penetration prediction technique where a list of hypothesized flaws in a software
system are compiled through analysis of the specifications and documentation for
the system. The list of hypothesized flaws is then prioritized on the basis of the
estimated probability that a flaw actually exists, and on the ease of exploiting it to
the extent of control or compromise. The prioritized list is used to direct the actual
testing of the system.
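The prioritization step of flaw hypothesis methodology can be sketched as scoring each hypothesized flaw by its estimated probability of existing times the ease of exploiting it; the flaws and scores below are invented purely for illustration:

```python
# Hypothesized flaws as (description, estimated probability it exists,
# estimated ease of exploitation), both on a 0-1 scale (assumed scales).
flaws = [
    ("default admin credentials", 0.3, 0.9),
    ("SQL injection in search",   0.6, 0.7),
    ("stack overflow in parser",  0.2, 0.4),
]

# Prioritize by probability * ease, highest first, to direct actual testing.
ranked = sorted(flaws, key=lambda f: f[1] * f[2], reverse=True)
for name, prob, ease in ranked:
    print(f"{prob * ease:.2f}  {name}")
```

The tester then works down the ranked list, spending effort where a real, exploitable flaw is most likely.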
Security concepts
A good admin should be able to say, for example, “It’s a web server,
so it’s only running 80, 443, and 22 for remote administration; that’s
it.” — and so on and so on for every type of server in the
environment. There shouldn’t be any surprises when seeing port scan
results.
What you don’t want to hear in this sort of test is, “Wow,
what’s that port?” Having to ask that question is a sign that the
administrator is not fully aware of everything running on the box in
question, and that’s precisely the situation we need to avoid.
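Checking scan results against the admin's expectations is just a set difference; the port lists here are hypothetical examples:

```python
# A web server in this environment should expose exactly HTTP, HTTPS, and SSH.
expected = {80, 443, 22}
observed = {22, 80, 443, 8009}  # e.g. parsed from a port-scan report

surprises = observed - expected  # running but unaccounted for
missing = expected - observed    # expected but not answering
print("unexpected ports:", sorted(surprises))  # [8009]
print("missing ports:", sorted(missing))       # []
```

Any entry in the "unexpected" set is exactly the "Wow, what's that port?" moment the text warns about.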
2. Least Privilege
The next über-important concept is that of least privilege. Least
privilege simply says that people and things should only be able to do
what they need to do their jobs, and nothing else. The reason I include
“things” is that that admins often configure automated tasks that need
to be able to do certain things — backups for example. Well, what
often happens is the admin will just put the user doing the backup into
the domain admins group — even if they could get it to work another
way. Why? Because it’s easier.
This rule of least privilege simply reminds us not to give into the
temptation to do that. Don’t give in. Take the time to make all access
granular, and at the lowest level possible.
3. Defense In Depth
Defense In Depth is perhaps the least understood concept out of the
four. Many think it’s simply stacking three firewalls instead of one, or
using two antivirus programs rather than one. Technically this could
apply, but it’s not the true nature of Defense In Depth. The real idea is to layer
multiple, different defenses so that no single control is a single point of failure.
The benefit is quite simple — you get more chances to stop an attack
from becoming successful. It’s possible for someone to get all the
way in, all the way to the box in question, and be stopped by the fact
that the malicious code in question wouldn’t run on the host. But maybe
when that code is fixed so that it would run, it’ll then be caught by an
updated IPS or a more restrictive firewall ACL. The idea is to lock
down everything you can at every level. Not just one
thing, everything — file permissions, stack protection, ACLs, host
IPS, limiting admin access, running as limited users — the list goes
on and on.
4. CIA Methods- The three fundamental principles of security are availability,
integrity, and confidentiality and are commonly referred to as CIA or AIC triad
which also form the main objective of any security program. The level of security
required to accomplish these principles differs per company, because each has
its own unique combination of business and security goals and requirements. All
security controls, mechanisms, and safeguards are implemented to provide one
or more of these principles. All risks, threats, and vulnerabilities are measured for
their potential capability to compromise one or all of the AIC principles.
Confidentiality
Ensures that the necessary level of secrecy is enforced at each junction of data processing
and prevents unauthorized disclosure. This level of confidentiality should prevail while data
resides on systems and devices within the network, as it is transmitted and once it reaches
its destination.
Threat sources
Network Monitoring
Shoulder Surfing- watching a user's keystrokes or screen
Stealing password files
Social Engineering- one person posing as the actual authorized user
Countermeasures
Encrypting data as it is stored and transmitted.
Using network traffic padding
Implementing strict access control mechanisms and data classification
Training personnel on proper procedures.
Integrity
Integrity of data is protected when the accuracy and reliability of information
and systems are assured and unauthorized modification is prevented.
Threat sources
Viruses
Logic Bombs
Backdoors
Countermeasures
Strict Access Control
Intrusion Detection
Hashing
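Hashing as an integrity countermeasure can be sketched with Python's standard hashlib module; the message data here is invented:

```python
import hashlib

def digest(data: bytes) -> str:
    # SHA-256 digest: any change to the data produces a different hash.
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to account 42"
baseline = digest(original)  # stored or transmitted alongside the data

# Later, verify the data has not been modified in transit or at rest.
assert digest(original) == baseline

tampered = b"transfer $900 to account 42"
assert digest(tampered) != baseline  # modification is detected
```

Comparing the recomputed digest against the baseline detects unauthorized modification, which is exactly the integrity property the triad describes.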
Availability
Availability ensures reliability and timely access to data and resources to authorized
individuals.
Threat sources
Device or software failure.
Environmental issues like heat, cold, humidity, static electricity, and contaminants can
also affect system availability.
Denial-of-service (DoS) attacks
Countermeasures
Maintaining backups to replace the failed system
IDS to monitor the network traffic and host system activities
Use of certain firewall and router configurations
Terms
Next I’d like to go over some extremely crucial industry terms. These can
get a bit academic but I’m going to do my best to boil them down to their
basics.
Vulnerability - a weakness in a system that could be exploited to cause harm
Threat - an actor or event with the potential to exploit a vulnerability
Risk - the likelihood and impact of a threat exploiting a vulnerability
Risk is perhaps the most important of all these definitions since the main
mission of information security officers is to manage it. The simplest
explanation I’ve heard is that risk is the chance of something bad
happening. That’s a bit too simple, though, and I think the best way to look
at these terms is with a formula:

Risk = Threat × Vulnerability
Multiplication is used here for a very specific reason — any time one of
the two sides reaches zero, the result becomes zero. In other words, there
will be no risk anytime there is no threat or no vulnerability.
That last part is important. The entire purpose of assigning a value to risk
is so that managers can make the decisions on what to fix and what not to.
If there is a risk associated with hosting certain data on a public FTP
server, but that risk isn’t serious enough to offset the benefit, then it’s good
business to go ahead and keep it out there.
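The multiplication property described above can be sketched in a few lines of Python; the numeric scores are arbitrary:

```python
def risk(threat: float, vulnerability: float) -> float:
    # If either factor is zero, the product, and hence the risk, is zero.
    return threat * vulnerability

assert risk(0.8, 0) == 0      # no vulnerability means no risk
assert risk(0, 0.9) == 0      # no threat means no risk
assert risk(0.5, 0.5) == 0.25 # both present: a nonzero risk to manage
```

Managers can then compare these risk values against the business benefit and decide which ones are worth accepting.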
Standard — A standard dictates what will be used to carry out the policy.
As an example, if the policy says all internal users will use a single,
corporate email client, the standard may say that the client will be Outlook
2000, etc.
TCP SYN Flood Attacks- A SYN flood is a form of denial-of-service attack in which an attacker
sends a succession of SYN requests to a target's system in an attempt to consume enough
server resources to make the system unresponsive to legitimate traffic. Normally when a client
attempts to start a TCP connection to a server, the client and server exchange a series of
messages which normally runs like this:
1. The client requests a connection by sending a SYN (synchronize) message to the server.
2. The server acknowledges this request by sending SYN-ACK back to the client.
3. The client responds with an ACK , and the connection is established.
This is called the TCP three-way handshake, and is the foundation for every connection
established using the TCP protocol.
A SYN flood attack works by not responding to the server with the expected ACK code. The
malicious client can either simply not send the expected ACK, or spoof the source IP
address in the SYN, causing the server to send the SYN-ACK to a falsified IP address, which will
not send an ACK because it "knows" that it never sent a SYN.
The server will wait for the acknowledgement for some time, as simple network congestion could
also be the cause of the missing ACK . However, in an attack, the half-open connections created
by the malicious client bind resources on the server and may eventually exceed the resources
available on the server. At that point, the server cannot connect to any clients, whether legitimate
or otherwise. This effectively denies service to legitimate clients. Some systems may also
malfunction or crash when other operating system functions are starved of resources in this way.
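The resource-exhaustion mechanism can be sketched as a toy simulation; this is not real networking, just a fixed-size table of half-open connections:

```python
# Toy SYN flood simulation: half-open connections fill a fixed-size
# table, after which legitimate clients are refused. The table size
# and client names are invented for illustration.
MAX_HALF_OPEN = 128

class Server:
    def __init__(self):
        self.half_open = set()

    def on_syn(self, client) -> str:
        if len(self.half_open) >= MAX_HALF_OPEN:
            return "refused"            # resources exhausted
        self.half_open.add(client)      # server sent SYN-ACK, now waits for ACK
        return "syn-ack"

    def on_ack(self, client) -> str:
        self.half_open.discard(client)  # handshake completed, slot freed
        return "established"

srv = Server()
# Attacker sends SYNs from spoofed addresses and never ACKs.
for i in range(MAX_HALF_OPEN):
    srv.on_syn(f"spoofed-{i}")
# A legitimate client now cannot even start the handshake.
print(srv.on_syn("legit-client"))  # refused
```

Real servers mitigate this with timeouts on half-open connections and techniques such as SYN cookies, but the basic exhaustion dynamic is the same.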
Vulnerabilities
1. Network mapping- Network mapping is the study of the physical connectivity of
networks e.g. the Internet. Network mapping discovers the devices on the
network and their connectivity. It is not to be confused with network discovery
or network enumerating which discovers devices on the network and their
characteristics such as (operating system, open ports, listening network services,
etc.). The field of automated network mapping has taken on greater importance
as networks become more dynamic and complex in nature.
Authenticated scans allow for the scanner to directly access network based assets using
remote administrative protocols such as secure shell (SSH) or remote desktop protocol
(RDP) and authenticate using provided system credentials. This allows the vulnerability
scanner to access low-level data, such as specific services and configuration details of the
host operating system. It's then able to provide detailed and accurate information about the
operating system and installed software, including configuration issues and missing security
patches.[1]
Unauthenticated scanning is a method that can result in a high number of false positives and
is unable to provide detailed information about an asset's operating system and installed
software. This method is typically used by threat actors or security analysts trying to determine
the security posture of externally accessible assets.[1]
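An unauthenticated check of a single port can be sketched with Python's standard socket module; the host and port in the comment are placeholders:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Unauthenticated check: can we complete a TCP handshake to the port?"""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

# Illustrative usage (placeholder address):
# port_open("192.0.2.10", 22) would report whether something answers on SSH.
```

Note this reveals only that the port answers; identifying the service version or missing patches behind it is where the false positives of unauthenticated scanning come from.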
Security defence
1. Firewalls- In computing, a firewall is a network security system that monitors and
controls incoming and outgoing network traffic based on predetermined security
rules.[1] A firewall typically establishes a barrier between a trusted internal network
and untrusted external network, such as the Internet. Firewalls are often
categorized as either network firewalls or host-based firewalls. Network
firewalls filter traffic between two or more networks and run on network hardware.
Host-based firewalls run on host computers and control network traffic in and out
of those machines. A firewall applies a set of rules to each packet; the rules
decide whether the packet may pass or is discarded. Usually a firewall is placed
between a network that is trusted and one that is less trusted. When a large
network needs to be protected, the firewall software often runs on a dedicated
computer that does nothing else. A firewall protects one part of the network
against unauthorized access.
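The rule evaluation described above can be sketched as first-match packet filtering; the rule fields and example packets are invented:

```python
# Toy first-match packet filter. A None field matches any value.
RULES = [
    {"action": "allow", "dst_port": 443, "proto": "tcp"},
    {"action": "allow", "dst_port": 53,  "proto": "udp"},
    {"action": "deny",  "dst_port": None, "proto": None},  # default deny
]

def matches(rule, pkt) -> bool:
    return all(rule[k] is None or rule[k] == pkt[k]
               for k in ("dst_port", "proto"))

def filter_packet(pkt) -> str:
    # Rules are checked in order; the first match decides the packet's fate.
    for rule in RULES:
        if matches(rule, pkt):
            return rule["action"]
    return "deny"

print(filter_packet({"dst_port": 443, "proto": "tcp"}))  # allow
print(filter_packet({"dst_port": 23,  "proto": "tcp"}))  # deny
```

Ending the rule set with a catch-all deny implements the usual default-deny posture: anything not explicitly allowed is discarded.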
2. IDS/IPS
3. VPN- A virtual private network (VPN) extends a private network across a public
network, and enables users to send and receive data across shared or public
networks as if their computing devices were directly connected to the private
network. Applications running across a VPN may therefore benefit from the
functionality, security, and management of the private network. Encryption is a
common, though not inherent, part of a VPN connection. VPN technology was
developed to allow remote users and branch offices to access corporate
applications and resources. To ensure security, the private network connection is
established using an encrypted layered tunneling protocol and VPN users use
authentication methods, including passwords or certificates, to gain access to the
VPN. In other applications, Internet users may secure their transactions with a
VPN, to circumvent geo-restrictions and censorship, or to connect to proxy
servers to protect personal identity and location to stay anonymous on the
Internet. However, some websites block access to known VPN technology to
prevent the circumvention of their geo-restrictions, and many VPN providers have
been developing strategies to get around these roadblocks. A VPN is created by
establishing a virtual point-to-point connection through the use of dedicated
circuits or with tunneling protocols over existing networks. A VPN available from
the public Internet can provide some of the benefits of a wide area
network (WAN). From a user perspective, the resources available within the
private network can be accessed remotely.
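The tunneling idea, an inner private packet carried inside an outer packet addressed to the gateway, can be sketched as a toy example; base64 encoding stands in for the encryption a real VPN would apply:

```python
import json
from base64 import b64encode, b64decode

# Toy tunneling sketch. The inner (private) packet rides as opaque
# payload inside an outer packet addressed to the VPN gateway.
# Base64 is used only for illustration; it is NOT encryption.
def encapsulate(inner: dict, gateway: str) -> dict:
    payload = b64encode(json.dumps(inner).encode()).decode()
    return {"dst": gateway, "payload": payload}

def decapsulate(outer: dict) -> dict:
    return json.loads(b64decode(outer["payload"]))

inner = {"dst": "10.0.0.5", "data": "hello intranet"}  # private address
outer = encapsulate(inner, "vpn.example.com")          # public endpoint
assert decapsulate(outer) == inner
```

The public network only sees traffic between the user and the gateway; the private destination address stays inside the tunnel until the gateway decapsulates it.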
4. Proxy servers- In computer networks, a proxy server is a server (a computer
system or an application) that acts as an intermediary for requests
from clients seeking resources from other servers.[1] A client connects to the proxy
server, requesting some service, such as a file, connection, web page, or other
resource available from a different server and the proxy server evaluates the
request as a way to simplify and control its complexity.[2] Proxies were invented to
add structure and encapsulation to distributed systems.
SIEM tools
1. Introduction- In the field of computer security, security information and event
management (SIEM) software products and services combine security
information management (SIM) and security event management(SEM). They
provide real-time analysis of security alerts generated by applications and
network hardware. Vendors sell SIEM as software, as appliances or as managed
services; these products are also used to log security data and generate reports
for compliance purposes. The term and the initialism SIEM were coined by Mark
Nicolett and Amrit Williams of Gartner in 2005.
2. Syslog, Windows events-
3. Log collection
4. SIEM architecture
5. Queries
6. Alerts
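A minimal sketch of a SIEM-style query and alert, using an invented log format: collect events, then alert when failed logins from one source reach a threshold:

```python
from collections import Counter

# Invented, simplified event records standing in for collected logs
# (syslog, Windows events, etc. would be normalized into this shape).
events = [
    {"src": "10.1.1.9", "event": "login_failed"},
    {"src": "10.1.1.9", "event": "login_failed"},
    {"src": "10.1.1.9", "event": "login_failed"},
    {"src": "10.2.2.2", "event": "login_ok"},
]

def failed_login_alerts(events, threshold=3):
    # Query: count failed logins per source, alert on those at/over threshold.
    counts = Counter(e["src"] for e in events if e["event"] == "login_failed")
    return [src for src, n in counts.items() if n >= threshold]

print(failed_login_alerts(events))  # ['10.1.1.9']
```

Real SIEM products apply the same collect-query-alert pipeline at scale, with correlation across many log sources rather than a single in-memory list.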
End point protection
Endpoint security or endpoint protection is an approach to the protection of computer
networks that are remotely bridged to client devices. The connection
of laptops, tablets, mobile phones and other wireless devices to corporate networks
creates attack paths for security threats.[1][2] Endpoint security attempts to ensure that
such devices follow a definite level of compliance to standards.
1. Anti-virus
2. HIDS/HIPS
3. DLP
4. SHA value reputation
5. Signature creation
Email protection
1. How to identify a phishing email
2. Email gateway
3. URL Re-writing
4. Header analysis
5. SPAM
6. URL and IP reputation
Encryptions and hash values
1. Encryption standards
2. SSL methods
3. HASH values