Computer Security Research

Computer security, also known as cybersecurity, involves protecting computer systems and networks from cyber threats like malware, hacking, and data theft. It aims to prevent unauthorized access and modification of information and disruption of computer services. Common computer security measures include access control, antivirus software, firewalls, intrusion detection and prevention systems, encryption, and log management.

Uploaded by Jayanth kumar

Computer Security

Computer security, cybersecurity, or information technology security (IT security) is the
protection of computer systems from theft of or damage to their hardware, software, or electronic
data, as well as from disruption or misdirection of the services they provide.

Types of security and privacy

 Access control - In the fields of physical security and information security, access
control (AC) is the selective restriction of access to a place or other resource. The act
of accessing may mean consuming, entering, or using. Permission to access a resource is
called authorization. Locks and login credentials are two analogous mechanisms of access
control.
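The authorization step described above can be sketched as a toy access-control list in Python; the user and resource names are made up for illustration:

```python
# Minimal sketch of access control: an ACL maps each resource to the set of
# users authorized to use it. All names here are hypothetical.

ACL = {
    "payroll.db": {"alice"},
    "wiki": {"alice", "bob"},
}

def is_authorized(user: str, resource: str) -> bool:
    """Authorization check: is this user permitted to access this resource?"""
    return user in ACL.get(resource, set())

print(is_authorized("bob", "wiki"))        # True
print(is_authorized("bob", "payroll.db"))  # False
```

Unknown resources fall back to an empty set, so the default is to deny access.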
 Anti-keyloggers - An anti-keylogger (or anti-keystroke logger) is a type of software
specifically designed for the detection of keystroke-logger software; often, such software will
also incorporate the ability to delete or at least immobilize hidden keystroke-logger software
on a computer. In comparison to most anti-virus or anti-spyware software, the primary
difference is that an anti-keylogger does not make a distinction between
a legitimate keystroke-logging program and an illegitimate keystroke-logging program (such
as malware); all keystroke-logging programs are flagged and optionally removed, whether
they appear to be legitimate keystroke-logging software or not.
 Anti-malware - Antivirus software is a computer program used to prevent, detect, and
remove malware. However, with the proliferation of other kinds of malware, antivirus
software started to provide protection from other computer threats. In particular, modern
antivirus software can protect users from: malicious browser helper objects (BHOs), browser
hijackers, ransomware, keyloggers, backdoors, rootkits, Trojan horses, worms,
malicious LSPs, diallers, fraud tools, adware and spyware. Some products also include
protection from other computer threats, such as infected and
malicious URLs, spam, scam and phishing attacks, online identity (privacy), online
banking attacks, social engineering techniques, advanced persistent threat (APT)
and botnet DDoS attacks.
 Anti-spyware - Spyware is software that aims to gather information about a person or
organization, sometimes without their knowledge. It may send that information to another
entity without the consumer's consent, or assert control over a device without the
consumer's knowledge; in other cases, information is shared with the consumer's consent,
for example through cookies.
 Anti-subversion software - Software subversion is the process of making software perform
unintended actions, either by tampering with program code or by altering behaviour in
another fashion. For example, code tampering could be used to change program code to
load malicious rules or heuristics; SQL injection is a form of subversion for the purpose
of data corruption or theft; and buffer overflows are a form of subversion for the purpose of
unauthorised access. These attacks are examples of computer hacking.
 Anti-tamper software - Anti-tamper software (or tamper-resistant software) is software
which makes it harder for an attacker to modify it. The measures involved can be passive,
such as obfuscation to make reverse engineering difficult, or active tamper-detection
techniques which aim to make a program malfunction or not operate at all if modified. It is
essentially tamper resistance implemented in the software domain. It shares certain aspects
but also differs from related technologies like copy protection and trusted hardware, though it
is often used in combination with them. Anti-tampering technology typically makes the
software somewhat larger and also has a performance impact. There are no provably
secure software anti-tampering methods; thus, the field is an arms race between attackers
and software anti-tampering technologies.
 Cryptographic software - Encryption software is software that uses cryptography to prevent
unauthorized access to digital information. Cryptography is used to protect digital information
on computers as well as the digital information that is sent to other computers over
the Internet.
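As a minimal sketch of the idea only (not a production cipher; real encryption software uses vetted algorithms such as AES from audited libraries), a one-time pad XORs the plaintext with a random key of equal length:

```python
import secrets

# Toy illustration of symmetric encryption: a one-time pad XORs the plaintext
# with a random key of the same length. Without the key, the ciphertext reveals
# nothing; with it, decryption is the same XOR applied again.

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))          # random key, same length
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"attack at dawn")
print(otp_decrypt(key, ct))  # b'attack at dawn'
```

The sketch shows the core property of encryption software: protecting data at rest or in transit so that only a holder of the key can recover it.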
 Computer-aided dispatch (CAD) - Computer-aided dispatch (CAD), also called computer-
assisted dispatch, is a method of dispatching taxicabs, couriers, field service technicians,
mass transit vehicles or emergency services assisted by computer. It can be used to
send messages to the dispatcher via a mobile data terminal (MDT) and/or to store and
retrieve data (i.e. radio logs, field interviews, client information, schedules, etc.). A dispatcher
may announce the call details to field units over a two-way radio. Some systems
communicate using a two-way radio system's selective calling features. CAD systems may
send text messages with call-for-service details to alphanumeric pagers or wireless
telephony text services like SMS.
 Firewall - In computing, a firewall is a network security system that monitors and controls
incoming and outgoing network traffic based on predetermined security rules. A firewall
typically establishes a barrier between a trusted internal network and an untrusted external
network, such as the Internet.
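The rule-based filtering described above can be sketched as an ordered rule list with a default-deny fallback; the rule fields and port choices are illustrative assumptions, not any real firewall's syntax:

```python
# Sketch of how a firewall applies an ordered rule list to traffic: the first
# matching rule decides, and anything unmatched is denied. A port of None acts
# as a wildcard matching any port.

RULES = [
    {"action": "allow", "port": 443,  "direction": "in"},   # HTTPS in
    {"action": "allow", "port": 22,   "direction": "in"},   # SSH in
    {"action": "deny",  "port": None, "direction": "in"},   # everything else in
]

def decide(port: int, direction: str) -> str:
    for rule in RULES:
        if rule["direction"] == direction and rule["port"] in (None, port):
            return rule["action"]
    return "deny"  # default deny if no rule matches at all

print(decide(443, "in"))   # allow
print(decide(8080, "in"))  # deny
```

First-match semantics make rule order significant, which is true of most real packet filters as well.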
 Intrusion detection system (IDS) - An intrusion detection system (IDS) is a device
or software application that monitors a network or systems for malicious activity or policy
violations. Any malicious activity or violation is typically reported either to an administrator or
collected centrally using a security information and event management (SIEM) system. A
SIEM system combines outputs from multiple sources, and uses alarm filtering techniques to
distinguish malicious activity from false alarms. IDS types range in scope from single
computers to large networks. The most common classifications are network intrusion
detection systems (NIDS) and host-based intrusion detection systems (HIDS). A system that
monitors important operating system files is an example of an HIDS, while a system that
analyses incoming network traffic is an example of an NIDS. It is also possible to classify IDS
by detection approach: the most well-known variants are signature-based detection
(recognizing bad patterns, such as malware); and anomaly-based detection (detecting
deviations from a model of "good" traffic, which often relies on machine learning). Some IDS
products have the ability to respond to detected intrusions. Systems with response
capabilities are typically referred to as an intrusion prevention system. Intrusion detection
systems can also serve specific purposes by augmenting them with custom tools, such as
using a honeypot to attract and characterize malicious traffic.
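Signature-based detection, mentioned above, can be sketched as a simple pattern search over payloads; the signature names and patterns below are invented for illustration and far simpler than real IDS rules:

```python
# Minimal sketch of signature-based detection: scan a payload for known bad
# patterns and report which signatures matched. Real systems (e.g. NIDS rule
# engines) use far richer rule languages and protocol awareness.

SIGNATURES = {
    "sql-injection": b"' OR '1'='1",
    "path-traversal": b"../../etc/passwd",
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of all signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(match_signatures(b"GET /index.html"))      # []
print(match_signatures(b"GET /?q=' OR '1'='1"))  # ['sql-injection']
```

Anomaly-based detection would instead model "normal" traffic and flag deviations, trading the false negatives of a fixed signature list for a higher false-positive rate.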
 Intrusion prevention system (IPS) - Some systems may attempt to stop an intrusion
attempt but this is neither required nor expected of a monitoring system. Intrusion detection
and prevention systems (IDPS) are primarily focused on identifying possible incidents,
logging information about them, and reporting attempts. In addition, organizations use IDPS
for other purposes, such as identifying problems with security policies, documenting existing
threats and deterring individuals from violating security policies. IDPS have become a
necessary addition to the security infrastructure of nearly every organization. IDPS typically
record information related to observed events, notify security administrators of important
observed events and produce reports. Many IDPS can also respond to a detected threat by
attempting to prevent it from succeeding. They use several response techniques, which
involve the IDPS stopping the attack itself, changing the security environment (e.g.
reconfiguring a firewall) or changing the attack's content. Intrusion prevention systems (IPS),
also known as intrusion detection and prevention systems (IDPS), are network
security appliances that monitor network or system activities for malicious activity. The main
functions of intrusion prevention systems are to identify malicious activity, log information
about this activity, report it and attempt to block or stop it. Intrusion prevention systems are
considered extensions of intrusion detection systems because they both monitor network
traffic and/or system activities for malicious activity. The main difference is that, unlike
intrusion detection systems, intrusion prevention systems are placed in-line and are able to
actively prevent or block intrusions that are detected. IPS can take such actions as sending
an alarm, dropping detected malicious packets, resetting a connection or blocking traffic from
the offending IP address. An IPS also can correct cyclic redundancy check (CRC) errors,
defragment packet streams, mitigate TCP sequencing issues, and clean up
unwanted transport and network layer options.
 Log management software - Log management (LM) comprises an approach to dealing with
large volumes of computer-generated log messages (also known as audit records, audit
trails, event-logs, etc.).
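A minimal log-management task, assuming a simple syslog-like line format (timestamp, severity, message), is summarizing events by severity:

```python
from collections import Counter

# Sketch of a basic log-management step: parse log lines and count events by
# severity. The line format here is assumed for illustration; real log
# management also handles collection, rotation, retention, and search.

LOG = """\
2024-01-01T10:00:00 ERROR login failed for user root
2024-01-01T10:00:05 INFO login succeeded for user alice
2024-01-01T10:00:09 ERROR login failed for user root
"""

def severity_counts(text: str) -> Counter:
    """Count log lines by their severity field (second whitespace token)."""
    return Counter(line.split()[1] for line in text.splitlines() if line.strip())

print(severity_counts(LOG))  # Counter({'ERROR': 2, 'INFO': 1})
```

Aggregations like this are the raw material that SIM/SIEM tools (described below in this list) build trend analysis and alerting on.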
 Records management - Records management, also known as records and information
management, is an organizational function devoted to the management of information in an
organization throughout its life cycle, from the time of creation or inscription to its eventual
disposition. This includes identifying, classifying, storing, securing, retrieving, tracking and
destroying or permanently preserving records. The ISO 15489-1:2001 standard defines
records management as "[the] field of management responsible for the
efficient and systematic control of the creation, receipt, maintenance, use and disposition of
records, including the processes for capturing and maintaining evidence of
and information about business activities and transactions in the form of records". In
determining how long to retain records, their capacity for re-use is important. Many are kept
as evidence of activities, transactions, and decisions. Others document what happened and
why. Records management is part of an organization's broader function
of governance, risk management, and compliance, and is primarily concerned with managing
the evidence of an organization's activities as well as the reduction or mitigation of risk
associated with it.
 Sandbox - In computer security, a "sandbox" is a security mechanism for separating running
programs, usually in an effort to prevent system failures or software vulnerabilities from
spreading. It is often used to execute untested or untrusted programs or code, possibly from
unverified or untrusted third parties, suppliers, users or websites, without risking harm to the
host machine or operating system. A sandbox typically provides a tightly controlled set of
resources for guest programs to run in, such as scratch space on disk and memory. Network
access, the ability to inspect the host system or read from input devices are usually
disallowed or heavily restricted. In the sense of providing a highly controlled environment,
sandboxes may be seen as a specific example of virtualization. Sandboxing is frequently
used to test unverified programs that may contain a virus or other malicious code, without
allowing the software to harm the host device.
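One ingredient of sandboxing, process isolation with a hard timeout, can be sketched with Python's subprocess module; a real sandbox would also restrict filesystem, network, memory and system-call access, which this sketch does not attempt:

```python
import subprocess
import sys

# Thin sketch of one sandboxing idea: run untrusted code in a separate process
# with a hard timeout, so a hang or infinite loop cannot block the host
# program. This is process isolation only, not a complete sandbox.

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    result = subprocess.run(
        [sys.executable, "-c", code],   # fresh interpreter in its own process
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout

print(run_untrusted("print(2 + 2)").strip())  # 4
```

If the child exceeds the timeout, subprocess.run raises TimeoutExpired and the child is killed, which is the containment property a sandbox aims for.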
 Security information management - Security information management (SIM) is
an information security industry term for the collection of data such as log files into a central
repository for trend analysis.
 SIEM - In the field of computer security, security information and event management (SIEM)
software products and services combine security information management (SIM)
and security event management (SEM). They provide real-time analysis of security alerts
generated by applications and network hardware. Vendors sell SIEM as software, as
appliances or as managed services; these products are also used to log security data and
generate reports for compliance purposes. The term and the initialism SIEM were coined by
Mark Nicolett and Amrit Williams of Gartner in 2005.
 Anti-theft - An anti-theft system is any device or method used to prevent or deter the
unauthorized appropriation of items considered valuable. Theft is one of the most common
and oldest criminal behaviours. From the invention of the first lock and key to the introduction
of RFID tags and biometric identification, anti-theft systems have evolved to match the
introduction of new inventions to society and the resulting theft by others.
 Parental control - Parental controls are features which may be included in digital
television services, computer and video games, mobile devices and software that allow
parents to restrict their children's access to content, typically content they deem
inappropriate for the child's age or maturity level, or feel is aimed at an adult audience.
Parental controls fall into roughly four categories: content filters, which limit access to
age-inappropriate content; usage controls, which constrain the use of these devices, such as
placing time limits on usage or forbidding certain types of usage; computer usage
management tools, which enforce the use of certain software; and monitoring, which can
track location and activity when using the devices.
Career Path

The road to becoming a Chief Information Security Officer (CISO) or Chief Security Officer (CSO)
often starts with entry-level security positions such as:

 Security Administrator
 Network Administrator
 System Administrator

This is followed by intermediate-level positions such as:

 Security Analyst
 Security Engineer
 Security Consultant

This is followed by high-level positions such as:

 Security Architect

In large organizations, it’s possible to be promoted to Senior Security Architect or even Chief
Security Architect.

This is followed by top-level positions such as:

 Chief Information Security Officer


 Chief Security Officer
Security/Network/System administrator
Installs and manages organization-wide security systems. This position may also include taking
on some of the tasks of a security analyst in smaller organizations.

Security analyst
Analyses and assesses vulnerabilities in the infrastructure (software, hardware, networks),
investigates using available tools and countermeasures to remedy the detected vulnerabilities,
and recommends solutions and best practices. Analyses and assesses damage to the
data/infrastructure as a result of security incidents, examines available recovery tools and
processes, and recommends solutions. Tests for compliance with security policies and
procedures. May assist in the creation, implementation, or management of security solutions.

Security engineer
Performs security monitoring, security and data/logs analysis, and forensic analysis, to detect
security incidents, and mounts the incident response. Investigates and utilizes new technologies
and processes to enhance security capabilities and implement improvements. May also review
code or perform other security engineering methodologies.

Security Consultant/Specialist/Intelligence
Broad titles that encompass any one or all of the other roles or titles tasked with protecting
computers, networks, software, data or information systems against viruses, worms, spyware,
malware, intrusions, unauthorized access, denial-of-service attacks, and an ever-increasing
list of attacks by hackers acting as individuals or as part of organized crime or foreign
governments.
Since the job of a Security Consultant covers such a broad range of responsibilities, technical
knowledge is paramount.

Security architect
Designs a security system or major components of a security system, and may head a security
design team building a new security system.

Chief Information Security Officer (CISO)


A high-level management position responsible for the entire information security division/staff.
The position may include hands-on technical work.

Chief Security Officer (CSO)


A high-level management position responsible for the entire security division/staff. It is a newer
position, now considered necessary as security risks grow.
COURSE OUTLINE

Networking

 Networking Basics
 What is a network
 Types of network- internet, intranet, dark net
 Network areas – LAN, WAN, MAN, PAN
 TCP/IP, OSI model
 IP address
 Network devices specifications

Server Concepts
 Introduction to windows servers - Windows Server is a brand name for a group of
server operating systems released by Microsoft. It includes all Windows operating
systems that are branded "Windows Server", but not any other Microsoft product. The
first Windows server edition to be released under that brand was Windows Server 2003.
However, the first server edition of Windows was Windows NT 3.1 Advanced Server,
followed by Windows NT 3.5 Server, Windows NT 4.0 Server, and Windows 2000 Server;
the latter was the first server edition to include Active Directory, DNS Server, DHCP
Server, Group Policy, as well as many other popular features used today.
 DNS - The Domain Name System (DNS) is a hierarchical and decentralized naming
system for computers, services, or other resources connected to the Internet or a private
network. It associates various information with domain names assigned to each of the
participating entities. Most prominently, it translates more readily memorized domain
names to the numerical IP addresses needed for locating and identifying computer
services and devices with the underlying network protocols. By providing a
worldwide, distributed directory service, the Domain Name System has been an essential
component of the functionality of the Internet since 1985. The Domain Name System
delegates the responsibility of assigning domain names and mapping those names to
Internet resources by designating authoritative name servers for each domain. Network
administrators may delegate authority over sub-domains of their allocated name space to
other name servers. This mechanism provides distributed and fault-tolerant service and
was designed to avoid a single large central database. The Domain Name System also
specifies the technical functionality of the database service that is at its core. It defines
the DNS protocol, a detailed specification of the data structures and data communication
exchanges used in the DNS, as part of the Internet Protocol Suite. The Internet maintains
two principal namespaces, the domain name hierarchy and the Internet
Protocol (IP) address spaces. The Domain Name System maintains the domain name
hierarchy and provides translation services between it and the address spaces.
Internet name servers and a communication protocol implement the Domain Name
System. A DNS name server is a server that stores the DNS records for a domain; a
DNS name server responds with answers to queries against its database. The most
common types of records stored in the DNS database are for Start of Authority (SOA), IP
addresses (A and AAAA), SMTP mail exchangers (MX), name servers (NS), pointers
for reverse DNS lookups (PTR), and domain name aliases (CNAME). Although not
intended to be a general purpose database, DNS has been expanded over time to store
records for other types of data for either automatic lookups, such as DNSSEC records, or
for human queries such as responsible person (RP) records. As a general purpose
database, the DNS has also been used in combating unsolicited email (spam) by storing
a real-time black hole list (RBL). The DNS database is traditionally stored in a structured
text file, the zone file, but other database systems are common.
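The name-to-address translation described above is available to programs through the system resolver; in Python, socket.getaddrinfo exposes it. "localhost" is used in the example because it resolves without any network access:

```python
import socket

# DNS in practice: ask the system resolver to translate a hostname into IP
# addresses. getaddrinfo returns one tuple per (family, type, proto) combo;
# the address itself is the first element of the sockaddr in position 4.

def resolve(hostname: str) -> list[str]:
    """Return the unique IP addresses a hostname resolves to."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # typically ['127.0.0.1', '::1']
```

For a public name such as www.example.com the same call would consult the configured DNS servers, walking the authoritative-name-server delegation described above.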
 DHCP - The Dynamic Host Configuration Protocol (DHCP) is a network management
protocol used on UDP/IP networks whereby a DHCP server dynamically assigns an IP
address and other network configuration parameters to each device on a network so they
can communicate with other IP networks. A DHCP server enables computers to request
IP addresses and networking parameters automatically, reducing the need for a network
administrator or a user to manually assign IP addresses to all network devices. In the
absence of a DHCP server, a
computer or other device on the network needs to be manually assigned an IP address,
or to assign itself an APIPA address, which will not enable it to communicate outside its
local subnet. DHCP can be implemented on networks ranging in size from home
networks to large campus networks and regional Internet service
provider networks. A router or a residential gateway can be enabled to act as a DHCP
server. Most residential network routers receive a globally unique IP address within the
ISP network. Within a local network, a DHCP server assigns a local IP address to each
device connected to the network.
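The lease-assignment idea can be sketched as a toy address pool keyed by client MAC address; real DHCP also handles lease times, renewals, and the DISCOVER/OFFER/REQUEST/ACK exchange, none of which appear in this sketch:

```python
import ipaddress

# Toy sketch of the DHCP idea: a server hands out addresses from a pool and
# remembers which client (identified by MAC address) holds which lease, so a
# returning client gets its previous address back.

class LeasePool:
    def __init__(self, network: str):
        self.free = list(ipaddress.ip_network(network).hosts())
        self.leases: dict[str, ipaddress.IPv4Address] = {}

    def request(self, mac: str) -> ipaddress.IPv4Address:
        if mac not in self.leases:          # new client: allocate from the pool
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]             # known client: same lease again

pool = LeasePool("192.168.1.0/29")
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.1
print(pool.request("aa:bb:cc:dd:ee:02"))  # 192.168.1.2
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.1
```

This captures why DHCP removes manual assignment: the server, not the administrator, tracks which addresses are free.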
 Active directory (AD) - Active Directory (AD) is a directory
service that Microsoft developed for Windows domain networks. It is included in
most Windows Server operating systems as a set of processes and services. Initially,
Active Directory was only in charge of centralized domain management. Starting
with Windows Server 2008, however, Active Directory became an umbrella title for a
broad range of directory-based identity-related services. A server running Active
Directory Domain Service (AD DS) is called a domain controller.
It authenticates and authorizes all users and computers in a Windows domain type
network—assigning and enforcing security policies for all computers and installing or
updating software. For example, when a user logs into a computer that is part of a
Windows domain, Active Directory checks the submitted password and determines
whether the user is a system administrator or normal user. Also, it allows management
and storage of information, provides authentication and authorization mechanisms, and
establishes a framework to deploy other related services: Certificate Services, Active
Directory Federation Services, Lightweight Directory Services and Rights Management
Services. Active Directory uses Lightweight Directory Access Protocol (LDAP) versions 2
and 3, Microsoft's version of Kerberos, and DNS.
 Types of Datacentre - Public cloud providers (Amazon, Google), scientific computing
centres (national laboratories), co-location centres (private 'clouds' where servers are
housed together), and 'in-house' data centres (facilities owned and operated by the
company using the servers). Data centres are also classified by uptime tier:
Tier 1 – 99.671% minimum uptime (up to 28.8 hours of downtime annually); no redundancy.
Tier 2 – 99.741% minimum uptime (up to 22 hours of downtime annually); partial redundancy.
Tier 3 – 99.982% minimum uptime (no more than 1.6 hours of downtime annually); N+1
redundancy (the facility has what is required to operate, plus a backup).
Tier 4 – 99.995% minimum uptime (about 0.4 hours of downtime annually); 2N+1
redundancy (fully redundant capacity, plus a backup).
 Corporate networks - A corporate area network (CAN) is a separate, protected
portion of a corporation's intranet. When people are on the corporate area network,
they are sometimes said to be in the CAN: they do not have access to the Internet --
or to the rest of the corporate network, for that matter. Users may be connected
directly, for example in a token ring configuration, or may be geographically
dispersed and connected by backbone lines. CAN is sometimes said to stand
for campus area network, where it refers to an interconnection of local area networks
(LANs) that are geographically dispersed more widely than in a LAN, but less so than
in a wide area network (WAN).
 Cloud servers - A cloud server is a logical server that is built, hosted and delivered
through a cloud computing platform over the Internet. Cloud servers possess and
exhibit similar capabilities and functionality to a typical server but are accessed
remotely from a cloud service provider. A cloud server may also be called a virtual
server or virtual private server. A cloud server is primarily an Infrastructure as a
Service (IaaS) based cloud service model. There are two types of cloud server:
logical and physical. A cloud server is considered to be logical when it is delivered
through server virtualization. In this delivery model, the physical server is logically
distributed into two or more logical servers, each of which has a separate OS, user
interface and apps, although they share physical components from the underlying
physical server. Whereas the physical cloud server is also accessed through the
Internet remotely, it isn’t shared or distributed. This is commonly known as a
dedicated cloud server.
 Proxy servers - In computer networks, a proxy server is a server (a computer system or
an application) that acts as an intermediary for requests from clients seeking resources
from other servers. A client connects to the proxy server, requesting some service, such
as a file, connection, web page, or other resource available from a different server and
the proxy server evaluates the request as a way to simplify and control its complexity.
Proxies were invented to add structure and encapsulation to distributed systems.
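One common thing a proxy does with the requests it evaluates is cache them; a minimal sketch, with a stand-in fetch function in place of a real upstream server:

```python
# Sketch of a caching proxy: it sits between clients and origin servers,
# answering repeated requests from its cache and only contacting the origin
# on a miss. The fetch callable is a hypothetical stand-in for a real
# upstream HTTP request.

class CachingProxy:
    def __init__(self, fetch):
        self.fetch = fetch              # callable that contacts the origin
        self.cache: dict[str, str] = {}
        self.misses = 0

    def get(self, url: str) -> str:
        if url not in self.cache:       # only contact the origin on a miss
            self.misses += 1
            self.cache[url] = self.fetch(url)
        return self.cache[url]

proxy = CachingProxy(lambda url: f"<html>content of {url}</html>")
proxy.get("http://example.com/")
proxy.get("http://example.com/")        # second request served from cache
print(proxy.misses)  # 1
```

The same intermediary position is what lets proxies filter, log, or anonymize traffic as well as cache it.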

Security
 Understanding security
1. What is security- IT security protects the integrity of information technologies like
computer systems, networks, and data from attack, damage, or unauthorized
access. A business trying to compete in a world of digital transformation needs to
understand how to adopt security solutions that begin with design. This is what it
means to "shift security left"—to make security a part of the infrastructure and
product lifecycle as early as possible. This helps security be both proactive and
reactive. Continuous security is fed by a routine system of feedback and
adaptation, often handled through the use of automatic checkpoints. Automation
ensures fast and effective feedback that doesn’t slow the product lifecycle down.
Integrating security in this way also means that updates and responses can be
implemented quickly and holistically as the security landscape changes.
2. SOC operations- A security operations center (SOC) is a centralized unit that
deals with security issues on an organizational and technical level. A SOC within
a building or facility is a central location from where staff supervises the site,
using data processing technology. Typically, a SOC is equipped
for access monitoring, and controlling of lighting, alarms, and vehicle barriers.
An information security operations center (ISOC) is a dedicated site where
enterprise information systems (web sites, applications, databases, data centers
and servers, networks, desktops and other endpoints) are monitored, assessed,
and defended.
3. Ethical Hacking- Ethical hacking, sometimes called penetration testing, is the
act of intruding/penetrating into systems or networks to find threats and
vulnerabilities that a malicious attacker might find and exploit, causing loss of
data, financial loss or other major damage. The purpose of ethical hacking is to
improve the security of the network or systems by fixing the vulnerabilities
found during testing. Ethical hackers may use the same methods and tools used
by malicious hackers, but with the permission of the authorized person, for the
purpose of improving security and defending the systems from attacks by
malicious users.
4. Pen Testing- A penetration test, colloquially known as a pen test, is an
authorized simulated cyber attack on a computer system, performed to evaluate
the security of the system. The test is performed to identify both weaknesses
(also referred to as vulnerabilities), including the potential for unauthorized parties
to gain access to the system's features and data, as well as
strengths, enabling a full risk assessment to be completed. The process
typically identifies the target systems and a particular goal, then reviews available
information and undertakes various means to attain that goal. A penetration test
target may be a white box (which provides background and system information)
or black box (which provides only basic or no information except the company
name). A gray box penetration test is a combination of the two (where limited
knowledge of the target is shared with the auditor). A penetration test can help
determine whether a system is vulnerable to attack, whether the defenses were
sufficient, and which defenses (if any) the test defeated. Security issues that the
penetration test uncovers should be reported to the system owner. Penetration
test reports may also assess potential impacts to the organization and suggest
countermeasures to reduce risk. The National Cyber Security Centre describes
penetration testing as follows: "A method for gaining assurance in the
security of an IT system by attempting to breach some or all of that system's
security, using the same tools and techniques as an adversary might." The goals
of a penetration test vary depending on the type of approved activity for any given
engagement with the primary goal focused on finding vulnerabilities that could be
exploited by a nefarious actor and informing the client of those vulnerabilities
along with recommended mitigation strategies. Penetration tests are a
component of a full security audit. For example, the Payment Card Industry Data
Security Standard requires penetration testing on a regular schedule, and after
system changes. Flaw hypothesis methodology is a systems analysis and
penetration prediction technique where a list of hypothesized flaws in a software
system are compiled through analysis of the specifications and documentation for
the system. The list of hypothesized flaws is then prioritized on the basis of the
estimated probability that a flaw actually exists, and on the ease of exploiting it to
the extent of control or compromise. The prioritized list is used to direct the actual
testing of the system.
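A basic reconnaissance step in a pen test is a TCP connect scan, sketched below with Python's socket module; connect_ex returns 0 when a port accepts a connection. Only ever scan hosts you are authorized to test:

```python
import socket

# Sketch of a TCP connect scan: attempt a full TCP connection to each port and
# record the ones that accept. connect_ex returns 0 on success and an error
# number otherwise, so no exception handling is needed for closed ports.

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Real tools such as nmap add service fingerprinting, stealthier scan types, and rate control, but the core probe is the same.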
 Security concepts

InfoSec Concepts- Eric


Cole’s Four Basic Security Principles
To start with, I’d like to cover Eric Cole’s four basic security principles.
These four concepts should constantly be on the minds of all
security professionals.

1. Know Thy System
Perhaps the most important thing when trying to defend a system
is knowing that system. It doesn’t matter if it’s a castle or a Linux
server — if you don’t know the ins and outs of what you’re actually
defending, you have little chance of being successful.

A good example of this in the information security world is
knowledge of exactly what software is running on your systems.
What daemons are you running? What sort of exposure do they
create? A good self-test for someone in a small to medium-sized
environment would be to randomly select an IP from a list of your
systems and see if you know the exact list of ports that are open on
the machine.

A good admin should be able to say, for example, “It’s a web server,
so it’s only running 80, 443, and 22 for remote administration; that’s
it.” — and so on and so on for every type of server in the
environment. There shouldn’t be any surprises when seeing port scan
results.

What you don’t want to hear in this sort of test is, “Wow,
what’s that port?” Having to ask that question is a sign that the
administrator is not fully aware of everything running on the box in
question, and that’s precisely the situation we need to avoid.
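
This "no surprises" self-test is easy to sketch in code. A minimal example; the expected baseline and the scan result below are hypothetical:

```python
# Compare the ports you believe are open against an actual scan result.
# Both sets here are made-up example data.

def audit_ports(expected, observed):
    surprises = sorted(set(observed) - set(expected))  # open but undocumented
    missing = sorted(set(expected) - set(observed))    # documented but closed
    return surprises, missing

# "It's a web server, so it's only running 80, 443, and 22."
expected = {22, 80, 443}
observed = {22, 80, 443, 3306}  # the scan also found MySQL exposed

surprises, missing = audit_ports(expected, observed)
print(surprises)  # [3306] -- "Wow, what's that port?"
```

Any entry in `surprises` is precisely the "what's that port?" situation the paragraph above warns about.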

2. Least Privilege
The next über-important concept is that of least privilege. Least
privilege simply says that people and things should only be able to do
what they need to do their jobs, and nothing else. The reason I include
“things” is that admins often configure automated tasks that need
to be able to do certain things — backups for example. Well, what
often happens is the admin will just put the user doing the backup into
the domain admins group — even if they could get it to work another
way. Why? Because it’s easier.

Ultimately this is a principle that is designed to conflict directly with
human nature, i.e. laziness. It’s always more difficult to give granular
access that allows only specific tasks than it is to give a higher
echelon of access that includes what needs to be accomplished.

This rule of least privilege simply reminds us not to give into the
temptation to do that. Don’t give in. Take the time to make all access
granular, and at the lowest level possible.
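
As a sketch of what granular access can look like in code; the account names and permission strings are hypothetical:

```python
# Least privilege: each account gets only the permissions its job needs.
# Accounts and permission strings below are hypothetical examples.

PERMISSIONS = {
    "backup-svc": {"read:/var/www", "write:/backups"},  # only what backups need
    "domain-admin": {"read:*", "write:*", "admin:*"},   # the lazy alternative
}

def allowed(account, action):
    grants = PERMISSIONS.get(account, set())
    if action in grants:
        return True
    # crude wildcard support, e.g. "admin:*" matches "admin:create-user"
    return any(g.endswith("*") and action.startswith(g[:-1]) for g in grants)

print(allowed("backup-svc", "write:/backups"))     # True  -- can do its job
print(allowed("backup-svc", "admin:create-user"))  # False -- and nothing else
```

The point of the sketch is the shape of the first entry: enumerating exactly what the backup task needs instead of dropping its account into the domain admins group.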

3. Defense In Depth
Defense In Depth is perhaps the least understood concept out of the
four. Many think it’s simply stacking three firewalls instead of one, or
using two antivirus programs rather than one. Technically this could
apply, but it’s not the true nature of Defense In Depth.

The true idea is that of stacking multiple types of protection between
an attacker and an asset. And these layers don’t need to be products
— they can be applications of other concepts themselves, such as
least privilege.

Let’s take the example of an attacker on the Internet trying to
compromise a web server in the DMZ. This could be relatively easy
given a major vulnerability, but with an infrastructure built using
Defense In Depth, it can be significantly more difficult.

The hardening of routers and firewalls, the inclusion of IPS/IDS, the
hardening of the target host, the presence of host-based IPS on the
host, anti-virus on the host, etc. — any of these steps can potentially
stop an attack from being fully successful.

The idea is that we should think in reverse — rather than thinking
about what needs to be put in place to stop an attack, think instead
about everything that has to happen for it to be successful. Maybe an attack had to
make it through the external router, the firewall, the switch, get to the
host, execute, make a connection outbound to a host outside,
download content, run that, etc, etc.
What if any of those steps were unsuccessful? That’s the key to
Defense In Depth — put barriers in as many points as possible. Lock
down network ACLs. Lock down file permissions. Use network
intrusion prevention, use intrusion detection, make it more difficult
for hostile code to run on your systems, make sure your daemons are
running as the least privileged user, etc, etc.

The benefit is quite simple — you get more chances to stop an attack
from becoming successful. It’s possible for someone to get all the
way in, all the way to the box in question, and be stopped by the fact
that malicious code in question wouldn’t run on the host. But maybe
when that code is fixed so that it would run, it’ll then be caught by an
updated IPS or a more restrictive firewall ACL. The idea is to lock
down everything you can at every level. Not just one
thing, everything — file permissions, stack protection, ACLs, host
IPS, limiting admin access, running as limited users — the list goes
on and on.

The underlying concept is simple — don’t rely on single solutions to
defend your assets. Treat each element of your defense as if it were
the only layer. When you take this approach you’re more likely to
stop attacks before they achieve their goal.
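
The layered idea can be modeled in a few lines. The layer checks below are hypothetical stand-ins for a firewall ACL, an IPS signature, and an unprivileged daemon:

```python
# Defense In Depth: an attack succeeds only if it passes EVERY layer,
# so each additional layer is another chance to stop it.

def attack_succeeds(attack, layers):
    return all(layer(attack) for layer in layers)

layers = [
    lambda a: a["port"] in {80, 443},     # firewall ACL: only web ports exposed
    lambda a: "sqlmap" not in a["tool"],  # IPS signature match
    lambda a: not a["needs_root"],        # daemon runs as least-privileged user
]

attack = {"port": 443, "tool": "sqlmap", "needs_root": False}
print(attack_succeeds(attack, layers))  # False -- the IPS layer broke the chain
```

Failing any single check stops the whole attack, which is exactly the "think in reverse" argument above.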

4. Prevention Is Ideal, But Detection Is A Must
The final concept is rather simple but extremely important. The idea
is that while it’s best to stop an attack before it’s successful, it’s
absolutely crucial that you at least know it happened. As an example,
you may have protections in place that try to keep code from being
executed on your system, but if code is executed and something is
done, it’s critical that you are alerted to that fact and can take action
quickly.

The difference between knowing about a successful attack within 5 or
10 minutes vs. finding out about it weeks later is astronomical. Oftentimes
having the knowledge early enough can result in the attack not
being successful at all, i.e. maybe they get on your box and add a user
account, but you get to the machine and take it offline before they are
able to do anything with it.
Regardless of the situation, detection is an absolute must because
there’s no guarantee that your prevention measures are going to be
successful.
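
As a toy illustration of the detection side, here is a minimal log monitor; the log lines and the patterns watched for are made up:

```python
# Detection is a must: watch the logs for events worth an immediate alert,
# such as the "they got on your box and added a user account" example.

SUSPICIOUS = ("useradd", "passwd changed", "new listening port")

def alerts(log_lines):
    return [line for line in log_lines
            if any(pattern in line for pattern in SUSPICIOUS)]

log = [
    "sshd: accepted publickey for admin",
    "useradd: new user 'backdoor' added",
]
print(alerts(log))  # ["useradd: new user 'backdoor' added"]
```

A real deployment would use an IDS or SIEM rather than substring matching, but the principle is the same: knowing within minutes, not weeks.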

2. CIA Triad - The three fundamental principles of security are availability,
integrity, and confidentiality, commonly referred to as the CIA or AIC triad,
which also form the main objective of any security program. The level of security
required to accomplish these principles differs per company, because each has
its own unique combination of business and security goals and requirements. All
security controls, mechanisms, and safeguards are implemented to provide one
or more of these principles. All risks, threats, and vulnerabilities are measured for
their potential capability to compromise one or all of the AIC principles.

Confidentiality
 Ensures that the necessary level of secrecy is enforced at each junction of data processing
and prevents unauthorized disclosure. This level of confidentiality should prevail while data
resides on systems and devices within the network, as it is transmitted and once it reaches
its destination.
 Threat sources
 Network Monitoring
 Shoulder Surfing - watching a user’s keystrokes or screen
 Stealing password files
 Social Engineering - one person posing as an authorized user to obtain information
 Countermeasures
 Encrypting data as it is stored and transmitted.
 Using network traffic padding
 Implementing strict access control mechanisms and data classification
 Training personnel on proper procedures.
Integrity
 Integrity of data is protected when the assurance of accuracy and reliability of information
and system is provided, and unauthorized modification is prevented.
 Threat sources
 Viruses
 Logic Bombs
 Backdoors
 Countermeasures
 Strict Access Control
 Intrusion Detection
 Hashing
Availability
 Availability ensures reliability and timely access to data and resources to authorized
individuals.
 Threat sources
 Device or software failure.
 Environmental issues like heat, cold, humidity, static electricity, and contaminants can
also affect system availability.
 Denial-of-service (DoS) attacks
 Countermeasures
 Maintaining backups to replace the failed system
 IDS to monitor the network traffic and host system activities
 Use of certain firewall and router configurations
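
Of the countermeasures listed above, hashing (for integrity) is the easiest to demonstrate: any modification to the data changes its digest, so tampering is detectable. A minimal sketch using the standard library:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of the data, recorded at write time."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to account 12345"
stored = digest(original)  # saved alongside (or separately from) the data

tampered = b"transfer $999 to account 12345"
print(digest(original) == stored)  # True  -- data unchanged
print(digest(tampered) == stored)  # False -- modification detected
```

Note that a plain hash detects accidental or unauthorized modification only if the stored digest itself is protected; keyed constructions (HMACs) address that, but are beyond this sketch.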

3. Attack categories, types and vectors - It’s a good practice to think of
information security attacks and defenses in terms of the CIA
triad. Consider some common techniques used by attackers —
sniffing traffic, reformatting hard drives, and modifying
system files.

Sniffing traffic is an attack on confidentiality because it’s based on seeing
that which is not supposed to be seen. An attacker who reformats a victim’s
hard drive has attacked the availability of their system. Finally, someone
writing modified system files has compromised the integrity of that
system. Thinking in these terms can go a long way toward helping you
understand various offensive and defensive techniques.

Terms
Next I’d like to go over some extremely crucial industry terms. These can
get a bit academic but I’m going to do my best to boil them down to their
basics.

Vulnerability

A vulnerability is a weakness in a system. This one is pretty straightforward
because vulnerabilities are commonly labeled as such in
advisories and even in the media. Examples include the LSASS issue that
let attackers take over systems, etc. When you apply a security patch to a
system, you’re doing so to address a vulnerability.

Threat

A threat is an event, natural or man-made, that can cause damage to your
system. Threats include people trying to break into your network to steal
information, fires, tornados, floods, social engineering, malicious
employees, etc. Anything that can cause damage to your systems is
basically a threat to those systems. Also remember that threat is usually
rated as a probability, or a chance, of that threat coming to bear. An
example would be the threat of exploit code being used against a particular
vulnerability. If there is no known exploit code in the wild the threat is
fairly low. But the moment working exploit code hits the major mailing
lists, your threat (chance) rises significantly.

Risk

Risk is perhaps the most important of all these definitions since the main
mission of information security officers is to manage it. The simplest
explanation I’ve heard is that risk is the chance of something bad
happening. That’s a bit too simple, though, and I think the best way to look
at these terms is with a couple of formulas:

Risk = Threat x Vulnerability

Multiplication is used here for a very specific reason — any time one of
the two sides reaches zero, the result becomes zero. In other words, there
will be no risk anytime there is no threat or no vulnerability.

As an example, if you are completely vulnerable to xyz issue on your
Linux server, but there is no way to exploit it in existence, then your risk
from that is nil. Likewise, if there are tons of ways of exploiting the
problem, but you already patched (and are therefore not vulnerable), you
again have no risk whatsoever.

A more involved formula adds the impact, or cost, to the equation
(literally):

Risk = Threat x Vulnerability x Cost

What this does is allow a decision maker to attach quantitative meaning to
the problem. It’s not always an exact science, but if you know that
someone stealing your business’s most precious intellectual property
would cost you $4 billion, then that’s good information to have
when considering whether or not to address the issue.

That last part is important. The entire purpose of assigning a value to risk
is so that managers can make the decisions on what to fix and what not to.
If there is a risk associated with hosting certain data on a public FTP
server, but that risk isn’t serious enough to offset the benefit, then it’s good
business to go ahead and keep it out there.
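
The two formulas above translate directly into code. The threat/vulnerability scores and the cost figure are illustrative, not real measurements:

```python
# Risk = Threat x Vulnerability x Cost.
# Multiplication means risk drops to zero when either side reaches zero.

def risk(threat, vulnerability, cost=1.0):
    """Threat and vulnerability on a 0-1 scale; cost in dollars."""
    return threat * vulnerability * cost

# Fully vulnerable, but no known exploit in existence -> no risk:
print(risk(threat=0.0, vulnerability=1.0, cost=4e9))  # 0.0
# Exploit code is public, but the system is patched -> no risk:
print(risk(threat=0.9, vulnerability=0.0, cost=4e9))  # 0.0
# Exploit public AND unpatched -> a quantity a decision maker can weigh:
print(risk(threat=0.9, vulnerability=1.0, cost=4e9))
```

The numeric scales here are an assumption; real risk-scoring frameworks define their own scales, but the zero-on-either-side behavior is the point of the formula.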

That’s the whole trick — information security managers have to know
enough about the threats and vulnerabilities to be able to make sound
business decisions about how to evolve the IT infrastructure. This is risk
management, and it’s the entire business justification for information
security.

Policy — A policy is a high-level statement from management saying
what is and is not allowed in the organization. A policy will say, for
example, that you can’t read personal email at work, or that you can’t do
online banking, etc. A policy should be broad enough to encompass the
entire organization and should have the endorsement of those in charge.

Standard — A standard dictates what will be used to carry out the policy.
As an example, if the policy says all internal users will use a single,
corporate email client, the standard may say that the client will be Outlook
2000, etc.

Procedure — A procedure is a description of how exactly to go about
doing a certain thing. It’s usually laid out in a series of steps, i.e. 1)
Download the following package, 2) Install the package using
Add/Remove Programs, 3) Restart the machine, etc. A good way to think
of standards and procedures is to imagine standards as being what to do or
use, and procedures as how to actually do it.

Cyber kill chain - Computer scientists at Lockheed Martin Corporation described a new
"intrusion kill chain" framework or model to defend computer networks in 2011.[6] They wrote that
attacks may occur in phases and can be disrupted through controls established at each phase.
Since then, the "cyber kill chain™" has been adopted by data security organizations to define
phases of cyber-attacks.[11]
A cyber kill chain reveals the phases of a cyber attack: from early reconnaissance to the goal of
data exfiltration.[12] The kill chain can also be used as a management tool to help continuously
improve network defense. According to Lockheed Martin, threats must progress through several
phases in the model, including:

1. Reconnaissance: Intruder selects target, researches it, and attempts to identify
vulnerabilities in the target network.
2. Weaponization: Intruder creates remote access malware weapon, such as a virus or
worm, tailored to one or more vulnerabilities.
3. Delivery: Intruder transmits weapon to target (e.g., via e-mail attachments, websites or
USB drives).
4. Exploitation: Malware weapon's program code triggers, which takes action on target
network to exploit vulnerability.
5. Installation: Malware weapon installs access point (e.g., "backdoor") usable by intruder.
6. Command and Control: Malware enables intruder to have "hands on the keyboard"
persistent access to target network.
7. Actions on Objective: Intruder takes action to achieve their goals, such as data
exfiltration, data destruction, or encryption for ransom.
Defensive courses of action can be taken against these phases:[13]

1. Detect: determine whether an attacker is poking around
2. Deny: prevent information disclosure and unauthorized access
3. Disrupt: stop or change outbound traffic (to attacker)
4. Degrade: counter-attack command and control
5. Deceive: interfere with command and control
6. Contain: network segmentation changes
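
The phases and defensive courses of action can be paired up in a simple table. The pairing below is an illustrative mapping, not Lockheed Martin's official matrix:

```python
KILL_CHAIN = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objective",
]

# Illustrative: which defensive actions could break the chain at each phase.
DEFENSES = {
    "Reconnaissance": ["Detect"],
    "Delivery": ["Deny", "Disrupt"],
    "Exploitation": ["Deny"],
    "Installation": ["Contain"],
    "Command and Control": ["Degrade", "Deceive"],
}

# Breaking the chain at ANY phase stops the attack from progressing.
for phase in KILL_CHAIN:
    actions = DEFENSES.get(phase, ["(happens off-network)"])
    print(f"{phase}: {', '.join(actions)}")
```

Weaponization happens on the attacker's own systems, which is why no network defense maps to it in this sketch.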

4. Footprinting and reconnaissance concepts - Footprinting (also known as
reconnaissance) is the technique used for gathering information about computer
systems and the entities they belong to. To get this information, a hacker might
use various tools and technologies. This information is very useful to a hacker
who is trying to crack a whole system. When used in the computer security
lexicon, "Footprinting" generally refers to one of the pre-attack phases; tasks
performed prior to doing the actual attack. Some of the tools used for Footprinting
are Sam Spade, nslookup, traceroute, Nmap and neotrace. The word
reconnaissance comes from military use, where it means a mission into enemy
territory to obtain information about enemy forces. In computer security,
reconnaissance is a type of attack in which an intruder engages with the
targeted system to gather information about vulnerabilities. The attacker first
discovers any vulnerable ports by using software such as port scanners. After a
port scan, an attacker usually exploits known vulnerabilities of services
associated with the open ports that were detected. The best way to prevent most
port scan or reconnaissance attacks is to use a good firewall and intrusion
prevention system (IPS). The firewall controls which ports are exposed and to
whom they are visible. The IPS can detect port scans in progress and shut them
down before the attacker can gain a full map of your network.
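
The port-discovery step can be sketched as a plain TCP connect scan, the simplest mode of tools like Nmap. Scan only machines you are authorized to test; the host and port list here are placeholders:

```python
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the three-way handshake completes
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan("127.0.0.1", [22, 80, 443]))
```

Real scanners also use half-open (SYN) scans and service fingerprinting; a firewall/IPS, as noted above, is what limits what such a scan reveals.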
 Malware
Malware (a portmanteau of "malicious software") is any software intentionally designed
to cause damage to a computer, server, client, or computer network.[1] Malware does the
damage after it is implanted or introduced in some way into a target's computer and can
take the form of executable code, scripts, active content, and other software.[2] The code
is described as computer viruses, worms, Trojan horses, ransomware, spyware, adware,
and scareware, among other terms. Malware has a malicious intent, acting against the
interest of the computer user—and so does not include software that causes
unintentional harm due to some deficiency, which is typically described as a software
bug.
1. Trojan- In computing, a Trojan horse, or Trojan, is any malicious computer
program which misleads users of its true intent. The term is derived from
the Ancient Greek story of the deceptive wooden horse that led to the fall of the
city of Troy. Trojans are generally spread by some form of social engineering, for
example where a user is duped into executing an e-mail attachment disguised to
appear innocuous (e.g., a routine form to be filled in), or by clicking on some
fake advertisement on social media or anywhere else. Although their payload can
be anything, many modern forms act as a backdoor, contacting a controller which
can then have unauthorized access to the affected computer.[6] Trojans may allow
an attacker to access users' personal information such as banking information,
passwords, or personal identity. It can infect other devices connected to the
network. Ransomware attacks are often carried out using a Trojan.
Unlike computer viruses and worms, Trojans generally do not attempt to inject
themselves into other files or otherwise propagate themselves.
2. Ransomware- Ransomware is a type of malicious
software from cryptovirology that threatens to publish the victim's data or
perpetually block access to it unless a ransom is paid. While some simple
ransomware may lock the system in a way which is not difficult for a
knowledgeable person to reverse, more advanced malware uses a technique
called cryptoviral extortion, in which it encrypts the victim's files, making them
inaccessible, and demands a ransom payment to decrypt them.[1][2][3][4] In a
properly implemented cryptoviral extortion attack, recovering the files without the
decryption key is an intractable problem – and difficult to trace digital
currencies such as Ukash and cryptocurrency are used for the ransoms, making
tracing and prosecuting the perpetrators difficult. Ransomware attacks are
typically carried out using a Trojan that is disguised as a legitimate file that the
user is tricked into downloading or opening when it arrives as an email
attachment. However, one high-profile example, the "WannaCry worm", traveled
automatically between computers without user interaction. Starting from around
2012 the use of ransomware scams has grown internationally.[5][6][7] There have
been 181.5 million ransomware attacks in the first six months of 2018. This marks
a 229% increase over this same time frame in 2017.[8] In June 2013,
vendor McAfee released data showing that it had collected more than double the
number of samples of ransomware that quarter than it had in the same quarter of
the previous year.[9] CryptoLocker was particularly successful, procuring an
estimated US $3 million before it was taken down by authorities,[10] and
CryptoWall was estimated by the US Federal Bureau of Investigation (FBI) to
have accrued over US $18m by June 2015.
3. Virus- A computer virus is a type of malicious software that, when
executed, replicates itself by modifying other computer programs and inserting its
own code.[1] When this replication succeeds, the affected areas are then said to
be "infected" with a computer virus. Virus writers use social
engineering deceptions and exploit detailed knowledge of security
vulnerabilities to initially infect systems and to spread the virus. The vast majority
of viruses target systems running Microsoft Windows,[4][5][6] employing a variety of
mechanisms to infect new hosts,[7] and often using complex anti-detection/stealth
strategies to evade antivirus software.[8][9][10][11] Motives for creating viruses can
include seeking profit (e.g., with ransomware), desire to send a political message,
personal amusement, to demonstrate that a vulnerability exists in software,
for sabotage and denial of service, or simply because they wish to
explore cybersecurity issues, artificial life and evolutionary algorithms. Computer
viruses currently cause billions of dollars' worth of economic damage each
year,[13] due to causing system failure, wasting computer resources, corrupting
data, increasing maintenance costs, etc. In response, free, open-source antivirus
tools have been developed, and an industry of antivirus software has cropped up,
selling or freely distributing virus protection to users of various operating
systems.[14] As of 2005, even though no currently existing antivirus software was
able to uncover all computer viruses (especially new ones), computer security
researchers are actively searching for new ways to enable antivirus solutions to
more effectively detect emerging viruses, before they become
widely distributed. The term "virus" is also commonly, but erroneously, used to
refer to other types of malware. "Malware" encompasses computer viruses along
with many other forms of malicious software, such as computer
"worms", ransomware, spyware, adware, trojan
horses, keyloggers, rootkits, bootkits, malicious Browser Helper Object (BHOs),
and other malicious software. The majority of active malware threats are actually
trojan horse programs or computer worms rather than computer viruses. The
term computer virus, coined by Fred Cohen in 1985, is a misnomer.[16] Viruses
often perform some type of harmful activity on infected host computers, such as
acquisition of hard disk space or central processing unit (CPU) time, accessing
private information (e.g., credit card numbers), corrupting data, displaying political
or humorous messages on the user's screen, spamming their e-mail
contacts, logging their keystrokes, or even rendering the computer useless.
However, not all viruses carry a destructive "payload" and attempt to hide
themselves—the defining characteristic of viruses is that they are self-replicating
computer programs which modify other software without user consent.
4. Worms- A computer worm is a standalone malware computer program that
replicates itself in order to spread to other computers.[1] Often, it uses a computer
network to spread itself, relying on security failures on the target computer to
access it. Worms almost always cause at least some harm to the network, even if
only by consuming bandwidth, whereas viruses almost always corrupt or modify
files on a targeted computer. Many worms are designed only to spread, and do
not attempt to change the systems they pass through. However, as the Morris
worm and Mydoom showed, even these "payload-free" worms can cause major
disruption by increasing network traffic and other unintended effects.
 Phishing
1. Phishing- Phishing is the fraudulent attempt to obtain sensitive information such
as usernames, passwords and credit card details by disguising oneself as a trustworthy
entity in an electronic communication.[1][2] Typically carried out by email
spoofing[3] or instant messaging,[4] it often directs users to enter personal
information at a fake website, the look and feel of which are identical to the
legitimate site. Phishing is an example of social engineering techniques being
used to deceive users. Users are often lured by communications purporting to be
from trusted parties such as social web sites, auction sites, banks, online
payment processors or IT administrators. Attempts to deal with phishing incidents
include legislation, user training, public awareness, and technical security
measures — because phishing attacks also often exploit weaknesses in current
web security. The word itself is a neologism created as a homophone of fishing,
due to the similarity of using bait in an attempt to catch a victim.
2. Whaling- The term whaling has been coined for spear phishing attacks directed
specifically at senior executives and other high-profile targets.[15] In these cases,
the content will be crafted to target an upper manager and the person's role in the
company. The content of a whaling attack email may be an executive issue such
as a subpoena or customer complaint.
3. Vishing- Not all phishing attacks require a fake website. Messages that claimed to
be from a bank told users to dial a phone number regarding problems with their
bank accounts.[43] Once the phone number (owned by the phisher, and provided
by a voice over IP service) was dialed, prompts told users to enter their account
numbers and PIN. Vishing (voice phishing) sometimes uses fake caller-ID data to
give the appearance that calls come from a trusted organization.[44]
4. Spear phishing- Phishing attempts directed at specific individuals or companies
have been termed spear phishing.[8] In contrast to bulk phishing, spear phishing
attackers often gather and use personal information about their target to increase
their probability of success. Threat Group-4127 (Fancy Bear) used spear
phishing tactics to target email accounts linked to Hillary Clinton's 2016
presidential campaign. They attacked more than 1,800 Google accounts and
implemented the accounts-google.com domain to threaten targeted users.
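
The accounts-google.com lure above suggests a simple heuristic: flag domains that contain a trusted brand name but are not actually part of the trusted domain. A rough sketch; the trusted list is illustrative and real mail filters use far more signals:

```python
TRUSTED = {"google.com", "paypal.com"}

def is_lookalike(domain):
    """Flag domains that contain a trusted brand but aren't the trusted domain."""
    for trusted in TRUSTED:
        brand = trusted.split(".")[0]  # "google", "paypal"
        legit = domain == trusted or domain.endswith("." + trusted)
        if brand in domain and not legit:
            return True
    return False

print(is_lookalike("accounts-google.com"))   # True  -- imitation domain
print(is_lookalike("accounts.google.com"))   # False -- genuine subdomain
```

The key distinction is structural: `accounts.google.com` is a subdomain controlled by google.com, while `accounts-google.com` is an entirely separate registration.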
 Denial of service
What is DoS? - A denial-of-service (DoS) attack is any type of attack where the attackers attempt to
prevent legitimate users from accessing the service or network. In a DoS attack, the attackers usually
send several messages asking the server to accept requests from invalid return addresses. This
can be done in several ways. These attacks include, but are not limited to:
- Flooding the network
- Disrupting communication between machines or preventing access to a service
- Preventing an individual from accessing a service.
While a DoS attack does not typically result in the theft or loss of important information, it can
cost the victim a great deal of time and money to handle.
In computing, a denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator
seeks to make a machine or network resource unavailable to its intended users by temporarily or
indefinitely disrupting services of a host connected to the Internet. Denial of service is typically
accomplished by flooding the targeted machine or resource with superfluous requests in an
attempt to overload systems and prevent some or all legitimate requests from being fulfilled.[1]
In a distributed denial-of-service attack (DDoS attack), the incoming traffic flooding the victim
originates from many different sources. This effectively makes it impossible to stop the attack
simply by blocking a single source.
A DoS or DDoS attack is analogous to a group of people crowding the entry door of a shop,
making it hard for legitimate customers to enter, disrupting trade.
Criminal perpetrators of DoS attacks often target sites or services hosted on high-profile web
servers such as banks or credit card payment
gateways. Revenge, blackmail[2][3][4] and activism[5] can motivate these attacks.
1. DDoS
TCP SYN Flood Attacks- A SYN flood is a form of denial-of-service attack in which an attacker
sends a succession of SYN requests to a target's system in an attempt to consume enough
server resources to make the system unresponsive to legitimate traffic. Normally when a client
attempts to start a TCP connection to a server, the client and server exchange a series of
messages which normally runs like this:

1. The client requests a connection by sending a SYN (synchronize) message to the server.
2. The server acknowledges this request by sending SYN-ACK back to the client.
3. The client responds with an ACK , and the connection is established.
This is called the TCP three-way handshake, and is the foundation for every connection
established using the TCP protocol.
A SYN flood attack works by not responding to the server with the expected ACK code. The
malicious client can either simply not send the expected ACK, or spoof the source IP
address in the SYN, causing the server to send the SYN-ACK to a falsified IP address - which will
not send an ACK because it "knows" that it never sent a SYN.
The server will wait for the acknowledgement for some time, as simple network congestion could
also be the cause of the missing ACK . However, in an attack, the half-open connections created
by the malicious client bind resources on the server and may eventually exceed the resources
available on the server. At that point, the server cannot connect to any clients, whether legitimate
or otherwise. This effectively denies service to legitimate clients. Some systems may also
malfunction or crash when other operating system functions are starved of resources in this way.
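
The resource exhaustion described above can be modeled as a toy backlog of half-open connections. The capacity is deliberately tiny for illustration; real backlogs are larger and use timeouts and defenses like SYN cookies:

```python
class SynBacklog:
    """Toy model of a server's SYN backlog (half-open connection table)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.half_open = set()

    def syn(self, client):
        if len(self.half_open) >= self.capacity:
            return "refused"            # table full: service denied
        self.half_open.add(client)      # slot held while waiting for the ACK
        return "syn-ack sent"

    def ack(self, client):
        self.half_open.discard(client)  # handshake completed, slot freed
        return "established"

server = SynBacklog(capacity=4)
for i in range(4):                      # attacker floods SYNs and never ACKs
    server.syn(f"spoofed-{i}")
print(server.syn("legitimate-user"))    # refused
```

Because the spoofed clients never send the final ACK, their slots are never freed, and the legitimate client is turned away: denial of service.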
 Vulnerabilities
1. Network mapping- Network mapping is the study of the physical connectivity of
networks e.g. the Internet. Network mapping discovers the devices on the
network and their connectivity. It is not to be confused with network discovery
or network enumerating which discovers devices on the network and their
characteristics such as (operating system, open ports, listening network services,
etc.). The field of automated network mapping has taken on greater importance
as networks become more dynamic and complex in nature.

2. Vulnerability scanning - A vulnerability scanner is a computer program
designed to assess computers, networks or applications for known
weaknesses. In plain words, these scanners are used to discover the
weaknesses of a given system. They are utilized in the identification and
detection of vulnerabilities arising from mis-configurations or flawed programming
within a network-based asset such as a firewall, router, web server, application
server, etc. Modern vulnerability scanners allow for both authenticated and
unauthenticated scans. Modern scanners are typically available as SaaS
(Software as a Service); provided over the internet and delivered as a web
application. The modern vulnerability scanner often has the ability to customize
vulnerability reports as well as the installed software, open ports, certificates and
other host information that can be queried as part of its workflow.

 Authenticated scans allow for the scanner to directly access network based assets using
remote administrative protocols such as secure shell (SSH) or remote desktop protocol
(RDP) and authenticate using provided system credentials. This allows the vulnerability
scanner to access low-level data, such as specific services and configuration details of the
host operating system. It's then able to provide detailed and accurate information about the
operating system and installed software, including configuration issues and missing security
patches.[1]
 Unauthenticated scans can result in a high number of false positives and
are unable to provide detailed information about the asset's operating system and installed
software. This method is typically used by threat actors or security analysts trying to determine
the security posture of externally accessible assets.[1]
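
At its core, a vulnerability scanner matches discovered services and versions against a database of known weaknesses. A sketch with a made-up database and scan results; the CVE identifiers are placeholders, not real advisories:

```python
# Hypothetical vulnerability database: (service, version) -> known issues.
KNOWN_VULNS = {
    ("openssh", "7.2"): ["CVE-XXXX-AAAA (placeholder)"],
    ("apache", "2.4.49"): ["CVE-XXXX-BBBB (placeholder)"],
}

def findings(discovered_services):
    """discovered_services: iterable of (host, service, version) tuples."""
    report = []
    for host, service, version in discovered_services:
        for vuln in KNOWN_VULNS.get((service, version), []):
            report.append((host, service, version, vuln))
    return report

scan_results = [("10.0.0.5", "apache", "2.4.49"), ("10.0.0.6", "openssh", "8.0")]
print(findings(scan_results))  # only the apache host matches the database
```

Authenticated scans make the left-hand side of this lookup far more accurate, since version data comes from the host itself rather than from banner guessing.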

3. OWASP top 10, Web application -

 Security defence
1. Firewalls - In computing, a firewall is a network security system that monitors and
controls incoming and outgoing network traffic based on predetermined security
rules.[1] A firewall typically establishes a barrier between a trusted internal network
and an untrusted external network, such as the Internet. Firewalls are often
categorized as either network firewalls or host-based firewalls. Network
firewalls filter traffic between two or more networks and run on network hardware;
host-based firewalls run on host computers and control network traffic in and out
of those machines. A firewall applies a set of rules to each packet; the rules
decide whether the packet may pass or is discarded. Usually a firewall is placed
between a trusted network and a less trusted one. When a large network needs to
be protected, the firewall software often runs on a dedicated computer, and
protects one part of the network against unauthorized access.
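The rule-matching idea can be sketched in a few lines: a packet is compared against an ordered rule list, and the first match decides its fate, with a default policy applied when nothing matches. The rule format and names below are invented for illustration; real firewalls match on many more fields (protocol, interface, state).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source address, or "*" for any
    dst_port: Optional[int]  # destination port, or None for any

def filter_packet(rules, src, dst_port, default="deny"):
    """Apply the first matching rule; fall back to the default policy."""
    for r in rules:
        if (r.src == "*" or r.src == src) and \
           (r.dst_port is None or r.dst_port == dst_port):
            return r.action
    return default  # "default deny" is the conventional safe posture

rules = [
    Rule("deny", "10.0.0.99", None),  # block a misbehaving host entirely
    Rule("allow", "*", 443),          # permit HTTPS from anywhere
]
```

Note the ordering matters: because the deny rule for 10.0.0.99 comes first, that host is blocked even on port 443.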

2. IDS/IPS
3. VPN - A virtual private network (VPN) extends a private network across a public
network, and enables users to send and receive data across shared or public
networks as if their computing devices were directly connected to the private
network. Applications running across a VPN may therefore benefit from the
functionality, security, and management of the private network. Encryption is a
common, though not an inherent, part of a VPN connection.

VPN technology was developed to allow remote users and branch offices to
access corporate applications and resources. To ensure security, the private
network connection is established using an encrypted layered tunneling protocol,
and VPN users employ authentication methods, including passwords or
certificates, to gain access to the VPN. In other applications, Internet users may
secure their transactions with a VPN, circumvent geo-restrictions and
censorship, or connect to proxy servers to protect personal identity and location
and stay anonymous on the Internet. However, some websites block access to
known VPN technology to prevent the circumvention of their geo-restrictions, and
many VPN providers have been developing strategies to get around these
roadblocks.

A VPN is created by establishing a virtual point-to-point connection through the
use of dedicated circuits or with tunneling protocols over existing networks. A
VPN available from the public Internet can provide some of the benefits of a
wide area network (WAN). From a user perspective, the resources available
within the private network can be accessed remotely.
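The encapsulation idea behind tunneling can be shown with a toy frame format: an inner packet is wrapped in an outer frame carrying a length field and an HMAC for authenticity. This is only a sketch with a hard-coded placeholder key; a real VPN protocol such as IPsec or WireGuard also encrypts the payload and negotiates keys, none of which is shown here.

```python
import hashlib
import hmac
import struct

KEY = b"shared-secret"  # placeholder; real VPNs negotiate keys, never hard-code them

def encapsulate(inner_packet: bytes) -> bytes:
    """Wrap an inner packet in an outer frame: length + MAC + payload."""
    mac = hmac.new(KEY, inner_packet, hashlib.sha256).digest()
    return struct.pack("!I", len(inner_packet)) + mac + inner_packet

def decapsulate(frame: bytes) -> bytes:
    """Unwrap and authenticate; raise if the frame was tampered with."""
    (length,) = struct.unpack("!I", frame[:4])
    mac, inner = frame[4:36], frame[36:36 + length]
    if not hmac.compare_digest(mac, hmac.new(KEY, inner, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return inner
```

The outer frame is what actually travels over the public network; the untrusted path never needs to understand (or, with encryption added, read) the inner packet.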
4. Proxy servers- In computer networks, a proxy server is a server (a computer
system or an application) that acts as an intermediary for requests
from clients seeking resources from other servers.[1] A client connects to the proxy
server, requesting some service, such as a file, connection, web page, or other
resource available from a different server and the proxy server evaluates the
request as a way to simplify and control its complexity.[2] Proxies were invented to
add structure and encapsulation to distributed systems.
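The intermediary role can be sketched as a tiny caching proxy: clients ask the proxy, and the proxy either answers from its cache or forwards the request to the origin. The `fetch` callable below stands in for a real upstream HTTP request; all names are illustrative.

```python
def make_proxy(fetch):
    """Return a proxy function that caches responses from `fetch(url)`."""
    cache = {}
    def proxy(url):
        if url not in cache:      # cache miss: forward to the origin server
            cache[url] = fetch(url)
        return cache[url]         # cache hit: answer without contacting origin
    return proxy
```

In practice `fetch` might wrap `urllib.request.urlopen`; the point is that the client only ever talks to the proxy, which can add caching, filtering or logging transparently.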
 SIEM tools
1. Introduction - In the field of computer security, security information and event
management (SIEM) software products and services combine security
information management (SIM) and security event management (SEM). They
provide real-time analysis of security alerts generated by applications and
network hardware. Vendors sell SIEM as software, as appliances or as managed
services; these products are also used to log security data and generate reports
for compliance purposes. The term and the initialism SIEM were coined by Mark
Nicolett and Amrit Williams of Gartner in 2005.
2. Syslog, Windows events-
3. Log collection
4. SIEM architecture
5. Queries
6. Alerts
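Items 3 to 6 above can be tied together with a toy correlation rule: collect syslog-style lines, query them for failed logins, and raise an alert when one source address exceeds a threshold. The log format and threshold are invented for illustration; real SIEMs use far richer parsers and rule languages.

```python
import re
from collections import Counter

# Matches the failure lines OpenSSH-style syslog typically emits (assumed format)
FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def brute_force_alerts(log_lines, threshold=3):
    """Return source IPs with at least `threshold` failed logins."""
    counts = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= threshold]
```

This mirrors the SIEM pipeline in miniature: log collection (the input list), a query (the regex), and an alert (the returned IPs).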
 End point protection
Endpoint security or endpoint protection is an approach to the protection of computer
networks that are remotely bridged to client devices. The connection
of laptops, tablets, mobile phones and other wireless devices to corporate networks
creates attack paths for security threats.[1][2] Endpoint security attempts to ensure that
such devices follow a defined level of compliance with security standards.
1. Anti-virus
2. HIDS/HIPS
3. DLP
4. SHA value reputation
5. Signature creation
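Items 4 and 5 rest on file hashing: a file's SHA-256 digest acts as its signature, which can be compared against a reputation list of known-bad hashes. The blocklist below is a made-up stand-in for a real threat-intelligence feed.

```python
import hashlib

# Hypothetical blocklist; in practice this would come from a threat-intel feed.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def sha256_of(data: bytes) -> str:
    """Compute the hex SHA-256 digest that serves as the content's signature."""
    return hashlib.sha256(data).hexdigest()

def is_known_bad(data: bytes) -> bool:
    """Reputation check: does this content's hash appear on the blocklist?"""
    return sha256_of(data) in KNOWN_BAD_SHA256
```

Because any change to the content changes the hash completely, a single digest uniquely identifies a malware sample for reputation lookups, though trivially repacked variants will produce new hashes.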
 Email protection
1. How to identify a phishing email
2. Email gateway
3. URL Re-writing
4. Header analysis
5. SPAM
6. URL and IP reputation
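One simple phishing tell (item 1) is a link whose visible text names one domain while the underlying href points somewhere else. The heuristic below compares the two hostnames; it is deliberately naive and ignores subdomain tricks, lookalike characters and URL shorteners.

```python
from urllib.parse import urlparse

def suspicious_link(display_text: str, href: str) -> bool:
    """Flag links whose visible URL text disagrees with the real target host."""
    # Add a scheme if the display text lacks one, so urlparse finds the host
    shown = urlparse(display_text if "://" in display_text else "http://" + display_text)
    real = urlparse(href)
    return shown.hostname != real.hostname
```

An email gateway would apply checks like this (alongside header analysis, URL rewriting and reputation lookups) before delivering mail to users.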
 Encryptions and hash values
1. Encryption standards
2. SSL methods
3. HASH values
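For item 2, Python's ssl module shows the practical knobs: a default client context enables certificate verification and hostname checking, and a minimum protocol version can be pinned so older SSL/TLS versions are refused. A minimal sketch (requires Python 3.7+ for `minimum_version`):

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """A client-side TLS context with sane defaults plus a pinned floor version."""
    ctx = ssl.create_default_context()            # verifies certs, checks hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3/TLS 1.0/TLS 1.1
    return ctx
```

Such a context would then be passed to, for example, `http.client.HTTPSConnection(host, context=ctx)` to make a verified TLS connection.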

 Secure coding practices and threat modelling
 DLP, anti-virus and anti-malware
 Network protocols and packet analysis tools
 Cloud computing
 SaaS models
 Windows, UNIX and Linux operating systems
 Virtualization technologies
 MySQL/MSSQL database platforms
 Identity and access management principles
 Application security and encryption technologies
 Secure network architectures
 Advanced Persistent Threats (APT), social engineering, network access
controllers (NAC), gateway anti-malware and enhanced authentication.
 ISO 27001/27002, ITIL and COBIT frameworks
 PCI, HIPAA, NIST, GLBA and SOX compliance assessments
 Performance tuning views, indexes, SQL and PLSQL
 C, C++, C#, Java or PHP programming languages
 Cryptography
 Digital networking
 Digital communication
CERTIFICATIONS

 Certified Ethical Hacker
 Cisco Certified Network Professional Security
 GIAC Security Certifications
 Certified Information Systems Security Professional
 Certified Information Security Manager
 Certified in Risk and Information Systems Control
 Certified penetration tester
 Certified reverse engineering analyst