Certified Network Security Practitioner (CNSP)
Study Guide
2024
Compiled by: Japhet Mwakideu
The exam will cover the following topics:
• OSI Layer
• Cryptography
• Password Storage
TCP/IP (Protocols And Networking Basics)
Physical layer
The physical layer refers to the physical communication medium and the technologies to transmit data
across that medium. At its core, data communication is the transfer of digital and electronic signals
through various physical channels like fiber-optic cables, copper cabling, and air. The physical layer
includes standards for technologies and metrics closely related to the channels, such as Bluetooth,
NFC, and data transmission speeds.
Network layer
The network layer is concerned with concepts such as routing, forwarding, and addressing across a
dispersed network or multiple connected networks of nodes or machines. The network layer may also
manage flow control. Across the internet, the Internet Protocol v4 (IPv4) and IPv6 are used as the main
network layer protocols.
Transport layer
The primary focus of the transport layer is to ensure that data packets arrive in the right order, without
losses or errors, or can be seamlessly recovered if required. Flow control, along with error control, is
often a focus at the transport layer. At this layer, commonly used protocols include the Transmission
Control Protocol (TCP), a near-lossless connection-based protocol, and the User Datagram Protocol
(UDP), a lossy connectionless protocol. TCP is commonly used where all data must be intact (e.g., file sharing), whereas UDP is used when retaining every packet is less critical (e.g., video streaming).
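To make the TCP/UDP distinction concrete, here is a minimal Python sketch using standard library sockets (example.com and port 9999 are placeholder endpoints, not part of any standard):

import socket

# TCP: connection-oriented; the three-way handshake runs inside connect(),
# and delivery and ordering are guaranteed by the protocol.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(1024))
tcp.close()

# UDP: connectionless; each datagram stands alone and may be lost,
# duplicated, or reordered without the application being notified.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("127.0.0.1", 9999))
udp.close()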
Session layer
The session layer is responsible for network coordination between two separate applications in a
session. A session manages the beginning and ending of a one-to-one application connection and
synchronization conflicts. Network File System (NFS) and Server Message Block (SMB) are
commonly used protocols at the session layer.
Presentation layer
The presentation layer is primarily concerned with the syntax of the data itself for applications to send
and consume. For example, Hypertext Markup Language (HTML), JavaScript Object Notation (JSON),
and Comma Separated Values (CSV) are all modeling languages to describe the structure of data at the
presentation layer.
Application layer
The application layer is concerned with the specific type of application itself and its standardized
communication methods. For example, browsers can communicate using HTTP and HyperText Transfer Protocol Secure (HTTPS), and email clients can communicate using POP3 (Post Office Protocol version 3) and SMTP (Simple Mail Transfer Protocol).
IPv4 addresses are 32-bit numbers that are typically displayed in dotted decimal notation and contain two primary parts: the network prefix and the host number. An IPv6 address is 128 bits long and consists of eight groups of four hexadecimal digits. The following sections describe these schemes in more detail.
IPv4 Addressing
All hosts within a single network share the same network address. Each host also has an address that
uniquely identifies it. Depending on the scope of the network and the type of device, the address is
either globally or locally unique. Devices that are visible to users outside the network (webservers, for
example) must have a globally unique IP address. Devices that are visible only within the network
must have locally unique IP addresses.
IP addresses are assigned by a central numbering authority known as the Internet Assigned Numbers
Authority (IANA). IANA ensures that addresses are globally unique where needed and has a large
address space reserved for use by devices not visible outside their own networks.
Class A addresses use only the first byte (octet) to specify the network prefix, leaving 3 bytes to define
individual host numbers.
Class B addresses use the first 2 bytes to specify the network prefix, leaving 2 bytes to define host
addresses.
Class C addresses use the first 3 bytes to specify the network prefix, leaving only the last byte to
identify hosts.
In binary format, with an n representing each bit in the network prefix and an x representing each bit in the host number, the three address classes can be represented as follows:
nnnnnnnn xxxxxxxx xxxxxxxx xxxxxxxx (Class A)
nnnnnnnn nnnnnnnn xxxxxxxx xxxxxxxx (Class B)
nnnnnnnn nnnnnnnn nnnnnnnn xxxxxxxx (Class C)
Because each bit (x) in a host number can have a 0 or 1 value, each represents a power of 2. For
example, if only 3 bits are available for specifying the host number, only the following host numbers
are possible:
111 110 101 100 011 010 001 000
In each IP address class, 2 raised to the power of the number of host-number bits indicates how many host numbers can be created for a particular network prefix. Class A addresses have 2^24 (or 16,777,216) possible host numbers, class B addresses have 2^16 (or 65,536) host numbers, and class C addresses have 2^8 (or 256) possible host numbers.
The following examples show the binary and dotted decimal forms of three IPv4 addresses:
11010000 01100010 11000000 10101010 = 208.98.192.170
01110110 00001111 11110000 01010101 = 118.15.240.85
00110011 11001100 00111100 00111011 = 51.204.60.59
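This conversion is easy to script. A short Python sketch (the three addresses are the examples above):

# Convert 32-bit binary strings to dotted decimal notation.
addresses = [
    "11010000011000101100000010101010",
    "01110110000011111111000001010101",
    "00110011110011000011110000111011",
]
for bits in addresses:
    octets = [str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8)]
    print(".".join(octets))  # 208.98.192.170, 118.15.240.85, 51.204.60.59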
IPv4 Subnetting
Because of the physical and architectural limitations on the size of networks, you often must break
large networks into smaller subnetworks. Within such a subnetted network, each interface requires its
own network number and identifying subnet address.
NOTE: The IP routing world has shifted to Classless Inter-Domain Routing (CIDR). As its name
implies, CIDR eliminates the notion of address classes and simply conveys a network prefix along with
a mask. The mask indicates which bits in the address identify the network (the prefix). This document
discusses subnetting in the traditional context of classful IP addresses.
Figure 1 shows a network that comprises three subnets.
In addition to sharing the class B network prefix (the first two octets), each subnet shares the third
octet. Because we are using a /24 network mask with a class B address, the third octet identifies the
subnet. All devices on a subnet must have the same subnet address. In this case, the Alpha subnet has the IP address 172.16.1.0/24, the Beta subnet has the IP address 172.16.2.0/24, and the Gamma subnet is assigned 172.16.10.0/24.
Taking one of these subnets as an example, the Beta subnet address 172.16.2.0/24 is represented in
binary notation as:
10101100 . 00010000 . 00000010 . xxxxxxxx
Because the first 24 bits in the 32-bit address identify the subnet, the last 8 bits are available to assign to host attachments on each subnet. To reference a subnet, the address is written as 172.16.2.0/24 (or just 172.16.2/24). The /24 indicates the length of the subnet mask (sometimes written as 255.255.255.0). This network mask indicates that the first 24 bits identify the network and subnetwork while the last 8 bits identify hosts on the respective subnetwork.
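Python's standard ipaddress module can verify this arithmetic; a minimal sketch using the Beta subnet from the example above:

import ipaddress

beta = ipaddress.ip_network("172.16.2.0/24")
print(beta.netmask)        # 255.255.255.0
print(beta.num_addresses)  # 256 addresses in the 8-bit host space
print(ipaddress.ip_address("172.16.2.77") in beta)  # True: this host is on Beta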
To help allocate address spaces more efficiently, variable-length subnet masks (VLSMs) were introduced. Using VLSM, network architects can allocate more precisely the number of addresses required for a particular subnet.
For example, suppose a network with the prefix 192.14.17/24 is divided into two smaller subnets, one
consisting of 18 devices and the other of 46 devices.
To accommodate 18 devices, the first subnet must have 2^5 (32) host numbers. Having 5 bits assigned to the host number leaves 27 bits of the 32-bit address for the subnet. The IP address of the first subnet is therefore 192.14.17.128/27, or the following in binary notation:
11000000 . 00001110 . 00010001 . 100xxxxx
The subnet mask includes 27 significant bits.
To create the second subnet of 46 devices, the network must accommodate 2^6 (64) host numbers. The IP address of the second subnet is 192.14.17.64/26, or
11000000 . 00001110 . 00010001 . 01xxxxxx
By assigning address bits within the larger /24 subnet mask, you create two smaller subnets that use the
allocated address space more efficiently.
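The same VLSM layout can be checked programmatically with the ipaddress module; a minimal sketch using the two subnets above:

import ipaddress

net = ipaddress.ip_network("192.14.17.0/24")
sub27 = ipaddress.ip_network("192.14.17.128/27")  # 2^5 = 32 host numbers
sub26 = ipaddress.ip_network("192.14.17.64/26")   # 2^6 = 64 host numbers
print(sub27.subnet_of(net), sub27.num_addresses)  # True 32
print(sub26.subnet_of(net), sub26.num_addresses)  # True 64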
IPv6 Address Space, Addressing, and Address Types
IPv6 Address Format
What Is IPv6?
The ongoing growth of the Internet and the need for IP addresses to support increasing numbers of new users, computer networks, Internet-enabled devices, and new and improved applications for collaboration and communication have driven the adoption of a new IP protocol. IPv6, with its robust architecture, was designed to satisfy these current and anticipated near-future requirements.
IPv4 is widely used throughout the world today for the Internet, intranets, and private networks. IPv6
builds upon the functionality and structure of IPv4 in the following ways:
Provides a simplified and enhanced packet header to allow for more efficient routing.
Improves support for mobile phones and other mobile computing devices.
Enforces increased, mandatory data security through IPsec (which was originally designed for IPv6).
IPv6 addresses consist of 128 bits, instead of 32 bits, and include a scope field that identifies the type
of application suitable for the address. IPv6 does not support broadcast addresses, but uses multicast
addresses for broadcast. In addition, IPv6 defines a new type of address called anycast.
Unicast
A unicast address specifies an ID for a single interface to which packets are delivered. Under IPv6, the
vast majority of Internet traffic is foreseen to be unicast. It is for this reason that the largest assigned
block of the IPv6 address space is dedicated to unicast addressing. Unicast addresses include all
addresses other than loopback, multicast, link-local-unicast, and unspecified.
For SRX Series Firewalls, the flow module supports the following kinds of IPv6 unicast packets:
Pass-through unicast traffic, including traffic from and to virtual routers. The device transmits pass-through traffic according to its routing table.
Host-inbound traffic from and to devices directly connected to SRX Series Firewall interfaces. For
example, host-inbound traffic includes logging, routing protocol, and management types of traffic. The
flow module sends the unicast packets to the Routing Engine and receives the packets from it. Traffic
is processed by the Routing Engine instead of by the flow module, based on routing protocols defined
for the Routing Engine.
The flow module supports all routing and management protocols that run on the Routing Engine. Some examples are:
OSPFv3
RIPng
Telnet
SSH
Multicast
A multicast address specifies an ID for a set of interfaces that typically belong to different nodes. IPv6 multicast addresses are distinguished from unicast addresses by the value of the high-order octet, which is 0xFF.
The devices support only host-inbound and host-outbound multicast traffic. Host-inbound traffic includes logging, routing protocols, management traffic, and so on.
Anycast
An anycast address specifies an ID for a set of interfaces that typically belong to different nodes. A
packet with an anycast address is delivered to the nearest node, according to routing protocol rules.
There is no difference between anycast addresses and unicast addresses except for the subnet-router address. For an anycast subnet-router address, the low-order bits, typically 64 or more, are zeros. Anycast addresses are taken from the unicast address space.
The flow module treats anycast packets in the same way as it handles unicast packets. If an anycast packet is intended for the device, it is treated as host-inbound traffic and delivered to the protocol stack, which continues processing it.
Unicast addresses support global address scope and two types of local address scope:
Link-local unicast addresses—Used only on a single network link. The first 10 bits of the prefix
identify the address as a link-local address. Link-local addresses cannot be used outside the link.
Site-local unicast addresses—Used only within a site or intranet. A site consists of multiple network
links. Site-local addresses identify nodes inside the intranet and cannot be used outside the site.
Multicast addresses support 16 different types of address scope, including the following:
Node
Link
Site
Organization
Global scope
Multicast addresses identify a set of interfaces. Each multicast address consists of the first 8 bits of all
1s, a 4-bit flags field, a 4-bit scope field, and a 112-bit group ID:
11111111 | flgs | scop | group ID
The first octet of 1s identifies the address as a multicast address. The flags field identifies whether the
multicast address is a well-known address or a transient multicast address. The scope field identifies
the scope of the multicast address. The 112-bit group ID identifies the multicast group.
Similar to multicast addresses, anycast addresses identify a set of interfaces. However, packets are sent
to only one of the interfaces, not to all interfaces. Anycast addresses are allocated from the normal
unicast address space and cannot be distinguished from a unicast address in format. Therefore, each
member of an anycast group must be configured to recognize certain addresses as anycast addresses.
IPv6 increases the size of the IP address from the 32 bits that compose an IPv4 address to 128 bits. Each extra bit given to an address doubles the size of the address space.
IPv4 has been extended using techniques such as NAT, which allows for ranges of private addresses to
be represented by a single public address, and temporary address assignment. Although useful, these
techniques fall short of the requirements of novel applications and environments such as emerging
wireless technologies, always-on environments, and Internet-based consumer appliances.
In addition to the increased address space, IPv6 addresses differ from IPv4 addresses in the following
ways:
Include a scope field that identifies the type of application that the address pertains to.
Do not support broadcast addresses, but instead use multicast addresses to broadcast a packet.
IPv6 addresses consist of 8 groups of 16-bit hexadecimal values separated by colons (:). IPv6
addresses have the following format:
aaaa:aaaa:aaaa:aaaa:aaaa:aaaa:aaaa:aaaa
Each aaaa is a 16-bit hexadecimal value, and each a is a 4-bit hexadecimal value. Following is a
sample IPv6 address:
3FFE:0000:0000:0001:0200:F8FF:FE75:50DF
You can omit the leading zeros of each 16-bit group, as follows:
3FFE:0:0:1:200:F8FF:FE75:50DF
You can compress 16-bit groups of zeros to double colons (::) as shown in the following example, but
only once per address:
3FFE::1:200:F8FF:FE75:50DF
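Python's ipaddress module applies both shortening rules automatically; a minimal sketch using the sample address above:

import ipaddress

addr = ipaddress.ip_address("3FFE:0000:0000:0001:0200:F8FF:FE75:50DF")
print(addr.compressed)  # 3ffe::1:200:f8ff:fe75:50df (zeros omitted and compressed)
print(addr.exploded)    # 3ffe:0000:0000:0001:0200:f8ff:fe75:50df (full form)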
An IPv6 address prefix is a combination of an IPv6 prefix (address) and a prefix length. The prefix
takes the form ipv6-prefix/prefix-length and represents a block of address space (or a network). The
ipv6-prefix variable follows general IPv6 addressing rules. The prefix-length variable is a decimal
value that indicates the number of contiguous, higher-order bits of the address that make up the
network portion of the address. For example, 10FA:6604:8136:6502::/64 is a possible IPv6 prefix with
zeros compressed. The site prefix of the IPv6 address 10FA:6604:8136:6502::/64 is contained in the leftmost 64 bits, 10FA:6604:8136:6502.
For more information about the text representation of IPv6 addresses and address prefixes, see RFC
4291, IP Version 6 Addressing Architecture.
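A prefix can likewise be handled programmatically; a brief sketch with the /64 prefix above (the ::1 host address is an arbitrary illustration):

import ipaddress

prefix = ipaddress.ip_network("10FA:6604:8136:6502::/64")
print(prefix.prefixlen)        # 64 bits make up the network portion
print(prefix.network_address)  # 10fa:6604:8136:6502::
print(ipaddress.ip_address("10FA:6604:8136:6502::1") in prefix)  # True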
Limitations
SRX300, SRX320, SRX340, SRX345, SRX380, and SRX550HM devices have the following
limitations:
Changes in source AS and destination AS are not immediately reflected in exported flows.
IPv6 traffic transiting an IPv4-based IP-IP tunnel (for example, IPv6-over-IPv4 using an ip-x/x/x interface) is not supported.
SEE ALSO
About the IPv6 Basic Packet Header
Understanding IPv6 Packet Header Extensions
CLI Configuration for IPv6 Protocol Family
In configuration commands, the protocol family for IPv6 is named inet6. In the configuration
hierarchy, instances of inet6 are parallel to instances of inet, the protocol family for IPv4. In general,
you configure inet6 settings and specify IPv6 addresses in parallel to inet settings and IPv4 addresses.
NOTE: On SRX Series Firewalls, if you configure identical IP addresses on a single interface, you do
not see a warning message; instead, you will see a syslog message.
The following example shows the CLI output for an interface configured with both an IPv4 (family inet) address and an IPv6 (family inet6) address; the inet6 address shown here is an illustrative documentation-range address:
[edit]
user@host# show interfaces
ge-0/0/0 {
    unit 0 {
        family inet {
            address 10.100.37.178/24;
        }
        family inet6 {
            address 2001:db8::1/64;
        }
    }
}
Defining a Hub
A hub, sometimes referred to as a network hub, is a device designed to connect multiple devices to a single network. It serves as a common connection point and is considered a physical layer network device. Its main function is to link the devices of a Local Area Network (LAN) together.
Defining a Switch
A switch is a network device that enables several devices to link together within a computer network. It operates at the data link layer. The primary method of data transmission used by a switch is packet switching, in which frames are forwarded only to the port of the intended destination device.
Defining a Router
A router is a networking device that serves as a bridge between two or more packet-switched networks.
Apart from directing data traffic between networks, a router also enables multiple devices to share a
single internet connection. It does this by routing data packets between the networks.
Comparing a Hub, Switch, and Router
1. OSI layer: A hub is a physical layer device, belonging to layer 1 of the OSI model. A switch operates at the data link layer, or layer 2. A router functions at the network layer, which is layer 3.
2. Transmission method: A hub uses the half-duplex transmission method. A switch and a router both use the full-duplex transmission method.
3. Addressing: A hub operates based on broadcasting. A switch operates based on MAC addresses. A router operates based on IP addresses.
4. Scope: A hub is mainly used to connect components of a LAN. A switch is primarily used within a LAN. A router can be used within a LAN as well as a Metropolitan Area Network (MAN).
5. Intelligence: A hub is not considered a smart device, as it forwards any data it receives on one connection to all other connections. A switch is a smart device that sends messages to specific devices by scanning their addresses. Routers are essentially miniature computers that perform various intelligent tasks, including creating address tables to aid in routing decisions.
6. Networks required: A single network is required to connect a hub or a switch. A minimum of two networks is needed to connect a router.
7. Cost: A hub is less costly than a switch or a router; a switch is more expensive than a hub; a router is the most costly of the three.
Network Discovery Protocols
o Discover and map devices automatically: Mapping devices manually can be time-consuming. Device discovery tools automatically discover a range of network devices, add them to monitoring databases, and create dynamic network maps to visually track device performance across the changing network topology. With the help of centralized dashboards, IT admins can monitor IP address ranges, subnets, and more to collect detailed information.
o Use Quality of Experience metrics to monitor and troubleshoot network devices: Device
discovery tools utilize Quality of Experience (QoE) metrics to provide the device performance
status based on real-time user experience. QoE uses packet analysis sensors to drill down into
specific nodes and gain information about average response time, packet loss, and more.
o Obtain insights into hardware health: Discovery tools can also provide immediate insights
into hardware outages on a network. Additionally, some tools can provide information and
performance metrics of hardware assets, including power supply, fan speed, temperature, and
more.
o Scan a range of network devices from a single dashboard: These tools offer a single dashboard that provides all the information in one place. This allows users to find the details of specific network devices easily and quickly. Teams do not have to manually collate metrics and IP addresses to scan the entire network.
Types of devices network tools can discover
Network discovery tools can discover various devices such as:
o Hardware assets, such as switches, printers, firewalls, and servers
o Virtual computers and networks
o Software assets, such as applications and operating systems
Some discovery tools can also discover logical and physical relationships between network assets.
Network discovery and monitoring protocols
Network tools utilize the most common discovery protocols to monitor and locate network devices.
o Simple Network Management Protocol (SNMP): SNMP is a simple networking protocol, also known as an Internet standard protocol, used to monitor network-connected devices. It helps IT administrators collect and sort data about managed devices on the network. It is one of the most frequently used protocols and provides a common mechanism for network devices to pass information within LAN or WAN environments.
o Link Layer Discovery Protocol (LLDP): LLDP is an Institute of Electrical and Electronics Engineers (IEEE) standard protocol that is used to define encapsulated messages. This vendor-neutral, one-way protocol enables devices to transmit data to their nearby or directly connected neighbor devices. The data is stored in management information bases (MIBs) for faster querying using SNMP.
o Ping: Ping is a network software utility tool used to test whether a particular device or IP is
reachable or not. The major role of Ping is to send Internet Control Message Protocol (ICMP)
queries to identify network devices. It works by measuring the time taken by the packets to
reach a destination device from a local host and vice-versa. It records the round-trip time of
packets and packet loss, reports errors, and provides statistical summaries to help admins
discover network devices.
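As a simple illustration of Ping-based discovery, here is a minimal Python sketch that shells out to the system ping command (the 192.168.1.0/28 range is a placeholder, and the -c/-W flags assume a Linux-style ping):

import ipaddress
import subprocess

# Send one echo request to every host address in the range and
# report the hosts that reply within the one-second timeout.
for host in ipaddress.ip_network("192.168.1.0/28").hosts():
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", str(host)],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    if result.returncode == 0:  # 0 means at least one reply came back
        print(f"{host} is up")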
Network Architectures, Mapping And Target Identification
o Peer-To-Peer network
o Client/Server network
Peer-To-Peer network
o Peer-To-Peer network is a network in which all the computers are linked together with equal
privilege and responsibilities for processing the data.
o Peer-To-Peer network is useful for small environments, usually up to 10 computers.
o Peer-To-Peer network has no dedicated server.
o Special permissions are assigned to each computer for sharing the resources, but this can lead
to a problem if the computer with the resource is down.
o If one computer stops working, the other computers will not stop working.
o It is easy to set up and maintain as each computer manages itself.
Disadvantages Of Peer-To-Peer Network:
o A Peer-To-Peer network does not have a centralized system. Therefore, it cannot back up the data centrally, as the data resides in different locations.
o It has security issues, as each device manages its own security.
Client/Server Network
o Client/Server network is a network model designed for the end users called clients, to access
the resources such as songs, video, etc. from a central computer known as Server.
o The central controller is known as a server while all other computers in the network are
called clients.
o A server performs all the major operations such as security and network management.
o A server is responsible for managing all the resources such as files, directories, printer, etc.
o All the clients communicate with each other through the server. For example, if client 1 wants to send some data to client 2, it first sends a request to the server for permission. The server then sends a response to client 1 to initiate its communication with client 2.
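A minimal Python sketch of the client/server pattern just described, using standard library sockets (127.0.0.1:9000 is an arbitrary example endpoint):

import socket
import threading
import time

def server():
    # The server owns the shared resource and answers client requests.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9000))
    srv.listen()
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"response to: " + request)
    srv.close()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# The client requests a resource from the server rather than from a peer.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9000))
client.sendall(b"GET /song.mp3")
print(client.recv(1024))  # b'response to: GET /song.mp3'
client.close()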
It's important to understand why every administrator should map out the networks that they monitor.
It's equally important to know what is network mapping and how to go about drawing up your map.
This includes which devices to include in your map and which tools to use.
This article provides a breakdown of all of this information. It is designed to give administrators a solid understanding of how to create a network map.
Introduction
Throughout history, people have created and used maps for navigation. Maps help us understand where
destinations are located and how to reach them. To communicate this information effectively, maps
need to be drawn correctly and labeled accurately. It's also important that maps be kept up to date,
because terrain often changes.
In the networking world, we use maps for similar purposes. Network mapping helps administrators
perform the functions of their jobs more efficiently thanks to having properly organized data at their
fingertips. In environments where multiple administrators work on a single network, a network map
helps everyone involved understand how the network is laid out. When the control of a network is
transferred to a new administrator, network discovery becomes much easier with the help of an
efficiently designed network map.
Here are some of the advantages of having an inventory of hardware assets:
• Preparedness for end-of-life dates. It's good practice to replace aging equipment before it fails. Having an accurate inventory will help you know when to replace aging equipment.
• An understanding of upgrade needs. When bottlenecks or other slowdowns occur on your network, you need to know where to look to address the issue. Having a map of all your equipment will help you diagnose issues and determine where replacements need to be made.
• Network security verification. You need to know that your system is secured. Having an inventory of your equipment will help you verify that all security patches have been applied. It will also help identify where new security loopholes may exist.
Here are a few advantages that network mapping gives you when troubleshooting:
• A visualization of points of failure. If you know that a specific region of your network is reporting issues, a map will help you regionalize the issue and give you an idea of the possible points of failure.
• An indicator of trouble spots. A good network map can help to predict future issues. A
proactive network administrator will be able to use the map to find vulnerabilities and address
them before issues occur.
• An idea of the affected area. When issues arise in your network, they generally don't happen on
an island. You need to know where issues will trickle down to impact other devices. Having a
network map will help you obtain an understanding of what else may be affected when trouble
arises.
Devices to Include in a Network Map
So, what is network mapping? Which devices should it include?
Ideally, your map should be comprehensive and all-encompassing. As long as things don't get too
confusing due to over-complication, you want to include as much information as possible. Here are
some ideas on the devices that should be recorded and some tips on how to document them properly.
PCs
Most of the clients on your network will probably be personal computers. Recording each one may
seem tedious, and keeping your list of PCs accurate and up to date may be difficult. However, this is
still an important piece of the process.
Here are some things to take note of for PCs on your network map:
• IP assignment. Have a record of each PC's IP address if it is assigned statically. If not, indicate that it is configured for DHCP.
• Uplink information. Confirm whether the device has a wired or wireless connection. If it is a
wired connection, confirm where it connects to the rest of the network.
• Software information. Your map should indicate each PC's operating system. Any business-
specific software installations can be listed as well.
• Hostname. Every computer should have a logical hostname. Each PC on your network map
should be labeled by hostname for easy recognition.
Printers
Printers are an important part of your network map. Each printer’s information is often needed when
configuring clients to be able to send print requests to these devices. Because of this, the printers on
your network map may be referred to most often.
Here are the specifics to take note of when it comes to printer mapping:
• Record each IP address and hostname. Both identifiers should be listed for reference.
• Purpose. It's a good idea to describe which types of print requests are sent to each printer. If the
printer is using a special type of paper, this should be indicated as well.
• Physical location. Your map should make it clear where to find each printer. Department, floor,
and the specific room can all be recorded here.
• Make and model. The printer make and model are used for driver lookups and installation. It's
also important to know when ordering parts or additional toner.
• Tray assignments. Have a simple listing of the number of trays and the purpose assigned to
each tray.
VoIP Phones
You will probably have just as many VoIP devices on your network as computers. As networks move
away from PBX phone systems and toward IP phones, these devices are very important to include in
your network maps.
Here is the information that should be included for each VoIP device:
• Extension number. Each phone has an extension for direct dialing. Having this number on
record helps with configuration and for contacting the specific phone when needed.
• VLAN ID. Your IP phone may be on a different virtual network than the rest of the clients.
This network will be labeled by a numerical identifier, referred to as a VLAN ID. Noting this
ID is important for clarity.
• Physical location. This is simple and obvious, but important. You need to have an idea of how
to locate each device physically.
Networking Equipment
While the devices in your network closet aren’t considered clients on your network, they shouldn't be
forgotten when you create your network map. Because the functionality of these devices is critical to
your network's performance, having these items mapped will help speed up resolution times when
network issues occur.
Here are the statistics to keep in mind when mapping out these devices:
• The number of connections. Routers and switches all come with a specific number of available
ports. Having this information on your network map will give you an understanding of what
you have available, and when an upgrade may be needed.
• Connection speeds. Your network throughput will only be as fast as your slowest allowed
connection. Switches and routers come with different speed specifications. Understanding the
speeds that each of your devices allows will help you when it comes time to make
infrastructure upgrades.
• Uplink information. The way in which each device connects to others should be documented
clearly. It’s important to have an understanding of how data travels through your network, and
how everything is linked together.
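One way to keep these records consistent is to model each map entry as a structured record. A minimal Python sketch (the field names and sample values are illustrative, not a standard schema):

from dataclasses import dataclass, field

@dataclass
class Device:
    hostname: str
    ip_address: str      # static address, or "DHCP"
    device_type: str     # "PC", "printer", "VoIP phone", "switch", ...
    location: str
    uplink: str          # where the device connects to the rest of the network
    notes: list = field(default_factory=list)

inventory = [
    Device("acct-pc-01", "DHCP", "PC", "Floor 2, Accounting", "sw-floor2 port 12"),
    Device("prn-lobby", "10.0.5.20", "printer", "Lobby", "sw-floor1 port 3",
           ["Tray 2 loaded with letterhead"]),
]
for dev in inventory:
    print(f"{dev.hostname}: {dev.ip_address} ({dev.device_type}) at {dev.location}")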
Tools to Use
Scanning Software
When first building your network map, you have to gather as much information as possible. The best
way to do this is with network scanning software. There are a large number of tools available online.
Be sure that the one you choose reports the following items:
• IP address and MAC address. Each device’s address identifier must be listed.
• Vendor name. For the purposes of device identification, it's good to know the vendor (i.e., the
hardware manufacturer) associated with each IP and MAC address.
• Open ports. A listing of open ports helps paint a picture of each device's purpose.
Having all of this information from your network scan will help you start your map. You'll have a good
amount of information that you need all in one place, and it will help verify that nothing is forgotten.
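At its core, a network scanner attempts connections and records what answers. A minimal Python sketch of a TCP port check (the port list is an arbitrary example; scan only systems you are authorized to test):

import socket

target = "scanme.nmap.org"  # public test host provided by the Nmap project
for port in (21, 22, 25, 80, 443):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    # connect_ex() returns 0 when the TCP handshake succeeds (port open)
    if s.connect_ex((target, port)) == 0:
        print(f"port {port} open on {target}")
    s.close()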
Network Scanning & Fingerprinting
Fingerprinting is part of the initial reconnaissance of a target, and a target network's fingerprint is gathered through scanning. There are many types of scanning, such as network/port scanning, web scanning, service scanning, and vulnerability scanning. Much like with OSINT, no network is accessed without
first fingerprinting the network from the outside. It is important to identify the types of systems and
networks in the target network to select the tools needed for the penetration test. Fingerprinting greatly
increases the chances of success in gaining access to the target network.
Scanning is done in many different ways and with many different tools. There are automated scanners
that we will discuss in this series as well as purpose-built scanners which we will talk about in the next
article. Overall though, scanning allows the user to probe a singular target or network to find out what
devices may be listening.
Scanning can also find out what services might be listening on a specific port, the version of those
running services, operating system information, and if a firewall is running. When a simple port scan
reveals there are services running, a more detailed service scan must ensue. These service scans are
more focused and probe deeper into the target.
The data these service scans provide is generally referred to as "the attack surface." Examples of these more focused scans could be as follows: does the FTP server allow anonymous login? Is the web server running WordPress with default credentials? This is not an exhaustive list, but just a couple of major items deeper scanning can bring to light.
There are times where scanning can provide a quick and easy way into the network through an
unpatched operating system or service. That low hanging fruit, however, is becoming harder and
harder to find. Yes, lazy admins still exist and there are unpatched servers on the internet from the
early 2000s, but you should always assume your target has a baseline sophistication that precludes
such simple errors.
While scanning is thought to occur only from the outside of a network, it also can happen within a
network. We will discuss that concept later on when we talk about surveying and moving through the
internal network. For the purposes of this article we will use nmap’s own server setup for scanning:
scanme.nmap.org.
Nmap is the de facto standard tool for network scanning, and this section only scratches the surface of its capabilities. Nmap usage is very straightforward: one of the easiest ways to use it is to invoke nmap with a target IP address, multiple addresses, a network in CIDR notation, or a fully qualified domain name.
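For example, each of the following is a valid basic invocation (the private addresses are placeholders):

nmap scanme.nmap.org
nmap 192.168.1.10 192.168.1.20
nmap 192.168.1.0/24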
When nmap is invoked with no command-line switches, it performs a basic scan: it uses TCP SYN packets to probe the 1,000 most commonly used ports, and it scans those ports in random order rather than sequentially. The results distinguish "open" ports from "filtered" ports. Open means something on the port is listening, while filtered means there could be something there but nmap could not make a determination. Filtered ports could also mean the port is open for some source IP addresses but blocked for others.
A network can pose a significant security risk to businesses due to the amount of software and devices
it interacts with. Vulnerabilities arise when a network has weaknesses that can be exploited by cyber
attackers. These weak points can be found in a variety of places, such as servers, firewalls, routers,
modems, physical connection ports, operating systems, and software updates. Any one of these could
serve as a way for criminals to gain access to the network and cause damage to the business’s systems.
Networks can face a range of threats. As such, it’s not possible to recognise a network cyber attack by
only monitoring a certain piece of the infrastructure or a particular type of data. On top of this,
networks often face multiple attacks employing different techniques at the same time. Potential
network security threats include:
• Malware
• Viruses
• Botnets
• Keyloggers
• Ransomware
• SQL injection attacks
• Man-in-the-middle attacks
• Phishing attacks and social engineering
• Physical surveillance and sabotage
Network security is important, as it protects the personal data of employees and customers, as well as other information that can be used to damage the business. Securing this data is vital, as it is often essential to everyday operations. In addition, if user data becomes compromised, it can damage the integrity of the organisation, possibly leading to customers going to other providers.
The test concludes once the tester is confident there isn’t any more information that can be gleaned
about the network’s security. Following this, a report will be created to show their findings to the
business owner. Testing reports contain insights into the vulnerabilities found, details of recommended
remedial action, and the likely timeframe for solving any network problems.
Cryptography is the process of hiding or coding information so that only the person a message was
intended for can read it. The art of cryptography has been used to code messages for thousands of
years and continues to be used in bank cards, computer passwords, and ecommerce.
Modern cryptography techniques include algorithms and ciphers that enable the encryption and
decryption of information, such as 128-bit and 256-bit encryption keys. Modern ciphers, such as the
Advanced Encryption Standard (AES), are considered virtually unbreakable.
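As a hands-on illustration of modern symmetric encryption, here is a minimal sketch using the third-party Python cryptography package (installed with pip install cryptography); its Fernet recipe provides AES-based authenticated encryption:

from cryptography.fernet import Fernet

key = Fernet.generate_key()                # a fresh random key (URL-safe base64)
cipher = Fernet(key)
token = cipher.encrypt(b"attack at dawn")  # ciphertext with timestamp and HMAC
print(cipher.decrypt(token))               # b'attack at dawn'

Anyone without the key sees only the unintelligible token; anyone holding the key can both decrypt and verify that the message was not tampered with.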
A common cryptography definition is the practice of coding information to ensure only the person that
a message was written for can read and process the information. This cybersecurity practice, also
known as cryptology, combines various disciplines like computer science, engineering, and
mathematics to create complex codes that hide the true meaning of a message.
Cryptography can be traced all the way back to ancient Egyptian hieroglyphics but remains vital to
securing communication and information in transit and preventing it from being read by untrusted
parties. It uses algorithms and mathematical concepts to transform messages into difficult-to-decipher
codes through techniques like cryptographic keys and digital signing to protect data privacy, credit
card transactions, email, and web browsing.
A well-known example is the messaging tool WhatsApp, which encrypts conversations between people to ensure they cannot be hacked or intercepted.
Cryptography also secures browsing, such as with virtual private networks (VPNs), which use
encrypted tunnels, asymmetric encryption, and public and private shared keys.
Authentication
Cryptography can confirm the authenticity of a message, allowing the recipient to verify that it genuinely came from the claimed sender, for example through digital signatures and certificates.
Integrity
Similar to how cryptography can confirm the authenticity of a message, it can also prove the integrity
of the information being sent and received. Cryptography ensures information is not altered while in
storage or during transit between the sender and the intended recipient. For example, digital signatures
can detect forgery or tampering in software distribution and financial transactions.
Nonrepudiation
Cryptography confirms accountability and responsibility from the sender of a message, which means
they cannot later deny their intentions when they created or transmitted information. Digital signatures
are a good example of this, as they ensure a sender cannot claim a message, contract, or document they
created to be fraudulent. Furthermore, in email nonrepudiation, email tracking makes sure the sender
cannot deny sending a message and a recipient cannot deny receiving it.
Key exchange
Key exchange is the method used to share cryptographic keys between a sender and their recipient.
Hash Function
Hash functions ensure that data integrity is maintained during the encryption and decryption phases of cryptography. They are also used in databases so that items can be retrieved more quickly.
Hashing is the process of taking a key and mapping it to a specific value, which is the hash or hash
value. A hash function transforms a key or digital signature, then the hash value and signature are sent
to the receiver, who uses the hash function to generate the hash value and compare it with the one they
received in the message.
A common hash function technique is folding, which divides a value into several parts, adds the parts together, and uses the last four digits of the sum as the key or hashed value. Another is digit rearrangement, which takes specific digits in the original value, reverses them, and uses the resulting number as the hash value. Examples of hash function types include Secure Hash Algorithm 1 (SHA-1), SHA-2, and SHA-3.
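A minimal integrity check with Python's standard hashlib module (the messages are arbitrary examples):

import hashlib

message = b"wire $100 to account 12345"
digest = hashlib.sha256(message).hexdigest()

# The receiver recomputes the hash; any change to the message changes it.
received = b"wire $100 to account 12345"
print(hashlib.sha256(received).hexdigest() == digest)  # True: message intact
tampered = b"wire $900 to account 12345"
print(hashlib.sha256(tampered).hexdigest() == digest)  # False: message altered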
Types of Cryptographic Key Attacks and Risks
What are cryptographic key attacks? Modern cryptographic key techniques are increasingly advanced
and often even considered unbreakable. However, as more entities rely on cryptography to protect
communications and data, it is vital to keep keys secure. One compromised key could result in
regulatory action, fines and punishments, reputational damage, and the loss of customers and investors.
Potential key-based issues and attack types that could occur include:
1. Weak keys
2. Incorrect use of keys
3. Reuse of keys
4. Non-rotation of keys
5. Inappropriate storage of keys
6. Inadequate protection of keys
7. Insecure movement of keys
8. Insider threats (user authentication, dual control, and segregation of roles)
9. Lack of resilience
10. Lack of audit logging
11. Manual key management processes
How to Minimize the Risks Associated with Cryptography
Organizations and individuals can minimize and mitigate cryptography-related threats with a dedicated
electronic key management system from a reputable provider. The solution must use a hardware
security module to generate and protect keys, and underpin the entire system’s security.
It needs to include features like full key management life cycle, strong key generation, strict policy-
based controls, swift compromise detection, secure key destruction, strong user authentication, secure
workflow management, and a secure audit and usage log. This will protect the organization's keys,
enhance efficiency, and ensure compliance with data and privacy regulations.
Another potential solution is quantum cryptography, whereby it is impossible to copy data encoded in a quantum state.
A security compromise of Microsoft Active Directory (AD) can potentially undermine the integrity of
your identity management infrastructure, leading to catastrophic levels of data leakage, system
corruption, and destruction. That's why Active Directory Security is vital for today's business. Read on
to learn more about AD Security, how it works, and the security risks you may face without adequate
protection.
What Is Active Directory Security?
Active Directory (AD) security refers to cybersecurity measures and practices implemented to protect a
business network's Microsoft Active Directory infrastructure. Active Directory was developed by
Microsoft to easily manage and organize information about users, computers, and other resources
within a network. AD security plays a central role in authentication, authorization, and the overall
security of a Windows-based environment. Active Directory security is vital to prevent data breaches and unauthorized access to data, maintain system uptime, and more.
Critical aspects of Active Directory security include:
• Authentication: Ensure only authorized users and devices can access your network and
resources. This involves using strong passwords, multi-factor authentication (MFA), and other
authentication mechanisms.
• Access Control and Authorization: Controlling and managing the permissions assigned to users
and groups within the Active Directory, ensuring that users have the appropriate level of access
to resources. Implementing access controls and least privilege principles to restrict access to
sensitive resources and data, reducing the risk of unauthorized access.
• Group Policies: Implement and enforce group policies to control and configure user and
computer settings. Group Policies help enforce security settings, such as password policies,
account lockout policies, and other security configurations.
• Auditing and Monitoring: Enabling auditing features to track and monitor activities within the
Active Directory environment. This includes logging events related to authentication, changes
in permissions, and other security-relevant events.
• Security Patching and Updates: Regularly apply security patches and updates to the Active
Directory servers and associated systems to address vulnerabilities and ensure a secure
operating environment.
• Secure Communication: Configuring secure communication channels between Active
Directory components to protect against eavesdropping and man-in-the-middle attacks. This
often involves using protocols like Kerberos and implementing encryption mechanisms.
• Firewall and Network Segmentation: Employ firewalls and network segmentation to control
traffic between different network parts and limit the attack surface.
• Regular Security Assessments: Conduct regular security audits and assessments to identify
vulnerabilities, misconfigurations, or potential security risks within the Active Directory
infrastructure.
• Backup and Recovery: Implement robust backup and recovery mechanisms to ensure the
availability and integrity of Active Directory data in the event of data loss, corruption, or a
security incident.
Active Directory is a widely used directory service crucial in managing and organizing network
resources in a Windows environment, including user accounts, computers, and applications.
Why is Active Directory Security Important?
Active Directory security is crucial for overall network security, as compromising it could lead to
unauthorized access, data breaches, and other security incidents. Regular monitoring, updating, and
adherence to security best practices are essential for maintaining a secure Active Directory
environment.
Do I Need Active Directory Security for My IT Environment?
The need for Active Directory (AD) security depends on a number of factors. Here are some
considerations that might help you determine whether you need to focus on Active Directory security:
• Size and Complexity of Organization: In larger organizations with complex IT infrastructures,
the need for robust Active Directory security is often more critical. The more users, devices,
and applications connected to the network, the greater the potential security risks.
• Sensitivity of Your Data: If your organization deals with sensitive or confidential information,
securing Active Directory becomes paramount. Unauthorized access to AD can lead to
compromises in data confidentiality and integrity.
• Regulatory Compliance: Depending on your industry, you may be subject to various regulatory
requirements that mandate specific security measures, including those related to Active
Directory. Ensure that your AD security practices align with any applicable regulations.
• Risk Tolerance: Consider your organization's risk tolerance. If the impact of a security breach
on your Active Directory would be significant, it's important to invest in appropriate security
measures. You may need a cyber risk assessment to help you understand the risks.
• Frequent Patch Management: If you spend much time managing software patches, AD Security
may help. Keep the Active Directory servers and related systems up to date with the latest
security patches. Vulnerabilities in the operating system or AD components can be exploited by
attackers.
The decision to invest in Active Directory security depends on your organization's specific context and
risk profile. Conduct a cyber risk assessment and consider the factors mentioned above to determine
the appropriate level of security measures for your Active Directory environment.
How Do I Implement Active Directory Security?
Implementing Active Directory (AD) security is crucial for protecting your organization's network and
resources. Here are some best practices for implementation:
• Evaluate Your Physical Security: Ensure that physical access to the servers hosting Active
Directory is restricted to authorized personnel only. Based on what you discover, you may need
to implement physical security measures, such as card key access, biometric authentication, and
surveillance.
• Secure Administrative Access: Limit administrative access to only those who require it. Use
strong, unique passwords for all AD administrator accounts. Implement Multi-Factor
Authentication (MFA) for administrative accounts to add an extra layer of security.
• Conduct Regular Audits and Monitoring: Conduct regular security audits and reviews of Active
Directory. Enable auditing features to track changes to AD objects and configurations. Set up
centralized logging and monitoring for suspicious activities.
• Group Policy Security: Use Group Policies to enforce security settings on all computers within
the domain. Disable unnecessary services and features that could be exploited to gain
unauthorized access.
• Secure Replication: Ensure that AD replication between domain controllers is secure. Use
IPSec for encrypted replication traffic between domain controllers.
• Delegate Permissions Carefully: Follow the principle of least privilege when assigning
permissions. Delegate administrative tasks to specific users or groups only as necessary.
• Implement Account Lockout Policies: Set account lockout policies to prevent brute-force
attacks on user accounts. Monitor and investigate repeated account lockouts.
• Regular Patching and Updates: Regularly update and patch Active Directory domain
controllers.
• Secure DNS: Ensure DNS security by using secure DNS configurations. Disable unnecessary
DNS services and implement DNS security best practices.
• Firewall and Network Security: Implement firewalls to control traffic to and from domain
controllers. Use network security best practices to protect communication between domain
controllers and clients.
Remember that Active Directory security is an ongoing process, and staying informed about the latest
security threats and best practices is essential to adapt your security measures accordingly. Regularly
review and update your security policies to address emerging risks.
Can I Outsource Active Directory Security Management?
Yes, it is common for organizations to hire third-party vendors or managed security service providers
(MSSPs) to assist with managing Active Directory (AD) security. These third-party providers often
specialize in IT security services and can help ensure the proper configuration, monitoring, and
maintenance of Active Directory to enhance security.
Here are some common ways an MSSP can assist you with Active Directory security:
• Security Assessment and Auditing: Third-party providers can perform security assessments and
audits of your Active Directory environment to identify vulnerabilities, misconfigurations, and
areas for improvement.
• Implementation of Best Practices: They can help implement best practices for securing Active
Directory, including proper user and group management, password policies, and access
controls.
• Monitoring and Incident Response: Third-party providers may offer continuous monitoring
services to detect and respond to security incidents in real-time. This includes monitoring for
unusual activities, unauthorized access, and potential security breaches.
• Patch Management: Ensuring that Active Directory servers are up-to-date with the latest
security patches is crucial. Third-party providers can assist in managing the patching process to
minimize vulnerabilities.
• User Training and Education: Many security incidents stem from human error. Third-party
providers may offer training and educational programs to help users and IT staff understand
and adhere to security best practices.
• Threat Intelligence Integration: Integrating threat intelligence feeds can enhance the ability to
detect and respond to emerging security threats. Third-party providers can help implement and
manage these integrations.
When considering hiring a third party for Active Directory security management, choosing a reputable
and experienced provider is important. Ensure they have a good track record in the industry,
understand your organization's specific needs, and comply with any regulatory requirements that may
apply to your industry.
Before entering into any agreement, it's also advisable to define clear service level agreements (SLAs)
and establish communication channels to ensure a smooth collaboration between your organization and
the third-party provider.
• Virus & threat protection. Monitors threats to your desktop computer, laptop or tablet, runs scans, and is regularly updated to help detect the latest threats.
• Account protection. Access sign-in options and account settings.
• Firewall & network protection. Manages the local firewall settings and monitors what’s
happening with your networks and internet connections.
• App & browser control. Update settings for Microsoft Defender SmartScreen to help protect
your desktop computer, laptop or tablet against potentially dangerous apps, files, sites, and
downloads. You'll have exploit protection and you can customize protection settings for your
devices.
• Device security. Review built-in security options to help protect your desktop computer, laptop
or tablet from attacks by malicious software.
• Device performance & health. View status info about the performance health of your desktop
computer, laptop or tablet and keep it clean and up to date with the latest version of Windows
10.
• Family options. Keep track of your kids’ online activity and the desktop computers, laptops or
tablets in your household.
Status icons indicate your level of safety:
• Green means your device is sufficiently protected and there aren’t any recommended actions.
• Yellow means there is a safety recommendation for you.
• Red is a warning that something needs your immediate attention.
How do I review the virus and threat protection settings in the Windows Security app?
1. Open the Windows Security app by clicking the shield icon in the task bar or searching
the Start menu for Defender.
2. Click on the Virus & threat protection tile (or the shield icon on the left menu bar).
Windows Security
How do I open Microsoft Defender?
1. Click on the Start button.
2. Click on Settings.
3. Click on Update & Security.
4. Click on Windows Security.
5. Click on Virus & threat protection.
How do I update the Windows Defender malware and virus definitions?
1. Click on the Start button.
2. Click on Settings.
3. Click on Update & Security.
4. Click on Windows Security.
5. Click on Virus & threat protection.
6. Click on Check for updates under Virus & threat protection updates.
7. Click on the Check for updates button.
How do I run a manual scan?
If you have any antivirus or malware concerns about a specific file or folder, you can right-click the
file or folder in File Explorer, then select Scan with Microsoft Defender.
If you suspect there is malware or a virus on your desktop computer, laptop or tablet, you should
immediately run a quick scan on all your files and folders. To run a quick scan:
1. Click on the Start button.
2. Click on Settings.
3. Click on Update & Security.
4. Click on Windows Security.
5. Click on Virus & threat protection.
6. Click on the Quick scan button.
If you don't find any urgent issues, you may want to check your desktop computer, laptop or tablet
more thoroughly.
How to run an advanced scan
1. Click on the Start button.
2. Click on Settings.
3. Click on Update & Security.
4. Click on Windows Security.
5. Click on Virus & threat protection.
6. Click on Scan options.
7. Select one of the following scan options:
• Quick scan
• Full scan (this scan will check all files and programs currently running on your device).
• Custom scan (this will only scan specific files or folders).
• Microsoft Defender Offline scan (run this scan if your desktop computer, laptop, or tablet has
been, or could potentially be, infected by a virus or malware).
8. Click on the Scan now button.
Schedule your own scan
Even though Windows Security is regularly scanning your device to keep it safe, you can also set when
and how often these proactive system scans occur. To schedule a scan of your desktop computer,
laptop, or tablet:
1. Click on the Start button.
2. Scroll to Windows Administrative Tools.
3. Double-click on Task Scheduler.
4. In the left pane, expand Task Scheduler Library > Microsoft > Windows, and then scroll
down and select the Windows Defender folder.
5. In the top center pane, double-click Windows Defender Scheduled Scan.
6. In the Windows Defender Scheduled Scan Properties (Local Computer) window, select
the Triggers tab, go to the bottom of the window, and then select New.
7. Specify how often you want scans to run and when you’d like them to start.
8. Review the schedule and select OK.
[Figure: Task Scheduler - Windows Defender Scheduled Scan]
Turn Microsoft Defender Antivirus real-time protection off or on.
While not recommended, there may be times when you need to briefly stop Windows Defender
real-time protection from running on your desktop computer, laptop or tablet. While real-time
protection is off, files you open or download won't be scanned for threats. If you switch Real-time
protection off, it will automatically turn back on after a short delay to ensure you stay protected
from malware and threats. To temporarily turn off real-time protection:
1. Click on the Start button.
2. Click on Settings.
3. Click on Update & Security.
4. Click on Windows Security.
5. Click on Virus & threat protection.
6. Click on Manage settings underneath Virus & threat protection settings.
7. Switch the Real-time protection setting to Off and choose Yes to verify.
To turn the Real-time protection back on, follow steps 1 - 6 described above. For step 7, switch
the Real-time protection setting to On and choose Yes to verify.
Linux is a widely-used and popular operating system known for its stability, flexibility, and security.
However, even with its built-in security features, Linux systems can still be vulnerable to security
breaches.
This article will present the latest Linux security statistics, tools, and best practices available to
keep your Linux system secure.
Linux Security and Vulnerabilities: Stats
Compared to other operating systems like Windows and macOS, Linux has fewer vulnerabilities.
However, Linux is not immune to all types of cyberattacks. The most common vulnerabilities in Linux
systems are privilege escalation, memory corruption, and information disclosure. Cyber attackers use
these vulnerabilities to gain unauthorized access to a Linux system and steal data.
Reports from sources such as The National Vulnerability Database (NVD) and Crowdstrike show an
increase in Linux vulnerabilities each year. For example, there were 1,958 Linux vulnerabilities
reported in 2020. In 2021, there was a 35% rise in malware targeting Linux systems compared to 2020.
And in 2022, the number of new Linux malware samples reached nearly 1.7 million, a 650% increase
from the previous year.
Significant Linux ransomwares and vulnerabilities over the years are:
1. Shellshock (2014 - active). A vulnerability in the Bash shell that lets attackers run arbitrary
code via a specially crafted environment variable.
2. Ghost (2015 - resolved). A vulnerability in the GNU C Library (glibc) that allowed attackers
to run arbitrary code by sending a specific DNS response.
3. Dirty COW (2016 - resolved). This vulnerability affected the Linux kernel and gave
attackers root access by exploiting a race condition in the memory management system.
4. BlueBorne (2017 - resolved). This vulnerability affected Bluetooth implementations on
Windows, Linux, and Android. BlueBorne would run the code remotely, allowing the attackers
to steal sensitive information.
5. SACK Panic (2019 - resolved). A vulnerability in the TCP stack of the Linux kernel that allowed
attackers to cause a denial of service by sending crafted TCP Selective Acknowledgment (SACK) packets.
6. Ghostcat (2020 - active). This vulnerability affects the Apache Tomcat web server and allows
attackers to gain unauthorized access to sensitive information.
7. SUDO (2021 - active). This vulnerability affected the sudo command-line utility and allowed
attackers to execute commands as root without a password.
8. Text4Shell or ACT4Shell (2022 - active). A critical remote code execution
(RCE) vulnerability that abuses the Apache Commons Text interpolation functionality in string
substitution.
9. Linux Kernel Vulnerability (2023 - active). A security issue was found in the Linux kernel's
NVMe functionality, specifically in the nvmet_setup_auth() function, which can result in a
pre-auth denial of service (DoS) attack on a remote machine.
10. Signal Desktop Vulnerability (2023 - active). A vulnerability in the Signal Desktop software
allows attackers access to sensitive message attachments.
Linux Security Tips and Best Practices
As the use of Linux systems continues to grow, it's crucial to implement adequate security measures to
protect a system from potential threats. The sections below offer a range of practical tips and best
practices for enhancing the security of a Linux system.
1. Use Strong Passwords
(Basic security mechanism)
Use strong passwords and change them regularly as a basic step to securing your Linux system. Strong
passwords prevent unauthorized access to the system and reduce the risk of identity theft, data loss,
and other security incidents.
A strong password is at least 12 characters long and includes a mixture of upper and lowercase letters,
numbers, and special characters, which makes brute-force attacks significantly more difficult.
Regularly changing passwords also improves security. The process reduces the risk of password reuse
and exposure, giving a potential attacker a limited time frame to exploit the password if it becomes
compromised.
2. Verify All Accounts Have Passwords
(Basic security mechanism)
Accounts with no passwords allow anyone to log into the system without any authentication,
compromising the system's data security and confidentiality. Therefore, make sure to verify that no
accounts have empty passwords.
Run the awk command with the following options:
sudo awk -F: '($2 == "") {print $1}' /etc/shadow
This command searches the /etc/shadow file, which contains information about user account
passwords, and prints the names of any accounts with an empty password field.
Since accounts with empty passwords are a serious security risk, consider the following actions:
• Set a password. For instance, assign a new password to a user with the passwd command:
sudo passwd [username]
• Disable the account. Prevent users from logging into the account by disabling it entirely. To
lock a Linux user account, run the usermod command with the -L option (which prints no
output):
sudo usermod -L [username]
Alternatively, use the passwd command with the -l option:
sudo passwd -l [username]
The user is now unable to log in using their password.
• Delete the account. Remove unnecessary accounts with:
sudo userdel [username]
The command shows no output if executed correctly.
3. Set Up Password Aging
(Basic security mechanism)
Password aging is the practice of requiring users to change passwords regularly. Regular password
changes reduce the chance of users reusing previous passwords. The practice also prevents password
cracking attacks, which often succeed because of weak passwords that are not changed frequently.
There are several ways to set up password aging for a Linux user:
• Use the chage command. For instance, set a policy where the password expires after 60 days
(-M 60), the user must wait at least 10 days between password changes (-m 10), and the
system warns the user 14 days before the expiration date (-W 14). To do so, run:
sudo chage -M 60 -m 10 -W 14 [username]
• Alternatively, use the passwd command:
sudo passwd -x 60 [username]
The command sets the password expiration for [username] at 60 days.
4. Restrict the Use of Previous Passwords on Linux
(Basic security mechanism)
Prevent all users from reusing old passwords. Old passwords might have been compromised and
attackers might be actively trying to take advantage of that to hack into the system.
To prevent password reuse attacks:
1. Enforce password history with PAM (Pluggable Authentication Modules) and its pam_unix
module, which keeps track of users' previous passwords and disallows the reuse of any of
them (see the sketch after this list).
2. Enforce rules for password complexity, including minimum length and a mix of characters,
with pam_cracklib. Requiring users to create complex passwords makes it more difficult for
attackers to guess or crack passwords.
3. Regularly check system logs for any suspicious activity, such as repeated failed login attempts,
to detect potential password-related security threats.
4. Store hashed passwords using a strong cryptographic hash function such as SHA-512, bcrypt,
or yescrypt, and avoid legacy algorithms such as MD5 (Message-Digest Algorithm) and
NTLM, which are considered broken.
5. Use an enterprise password manager to generate and store unique, secure passwords for each
account.
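A minimal sketch of the password-history setup from step 1, assuming a Debian/Ubuntu-style system
where the relevant PAM file is /etc/pam.d/common-password (RHEL-style systems use
/etc/pam.d/system-auth). It is shown here with the pam_pwhistory module; pam_unix also accepts a
remember= option:
# Remember the last five password hashes (stored in /etc/security/opasswd)
# and reject any new password that matches one of them.
password requisite pam_pwhistory.so remember=5
# Hash new passwords with SHA-512.
password [success=1 default=ignore] pam_unix.so obscure use_authtok sha512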
5. Ensure OpenSSH Server Security
(Intermediate security mechanism)
OpenSSH is a widely used and secure implementation of SSH for Linux systems. It
provides encryption for data in transit, robust authentication methods, and a secure way to administer
systems and transfer files remotely. To ensure the security of OpenSSH, minimize the tool's
vulnerabilities.
Secure the OpenSSH server by following these tips (several of them are illustrated in the sketch after
this list):
• Use non-standard SSH ports.
• Limit user access and disable root login.
• Use SSH key pairs for authentication and disable password-based logins.
• Keep OpenSSH updated regularly.
• Use strong authentication methods.
• Limit the number of authentication attempts.
• Disable unused protocols and features.
• Implement a firewall.
• Monitor logs regularly.
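As a minimal sketch, several of these tips map to directives in /etc/ssh/sshd_config (the values shown
are illustrative assumptions, not required settings):
# Listen on a non-standard port.
Port 2222
# Disable root login and password authentication; allow keys only.
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
# Limit authentication attempts per connection.
MaxAuthTries 3
After editing the file, restart the service, for example with sudo systemctl restart sshd.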
6. Disable Root Login via SSH
(Intermediate security mechanism)
Many Linux machines have external root access enabled by default. That leaves an open SSH
security vulnerability which hackers can exploit with brute-force attacks. Disabling server SSH root
login prevents unauthorized individuals from gaining control over the system. An active root account
allows attackers to obtain or guess the root password with full administrative privileges.
To disable root login in Linux, change the SSH configuration file:
1. Open the file in a text editor of your choice. To access the config file in Vim, run:
sudo vim /etc/ssh/sshd_config
2. Find the PermitRootLogin line and set its value to no, then save the file and restart the SSH
service (for example, with sudo systemctl restart sshd).
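A related hardening check is to list every account with a UID of 0; a standard awk one-liner for this is:
awk -F: '($3 == 0) {print $1}' /etc/passwd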
The command prints root as the only user with a UID of 0. If the output shows any non-root accounts
with a UID of 0, delete them or change the UID to a non-zero value with usermod.
9. Lock User Accounts After Login Failures
(Intermediate security mechanism)
Locking user accounts after several login failures makes it harder for an attacker to guess or brute-
force a password.
The system works by setting the maximum number of login attempts per user. Once that number is
reached, the account locks for a specified period. Another option is to install a system for unlocking
the account, either automatically after a set time has elapsed or manually by an administrator.
To achieve this, use intrusion-prevention or Identity and Access Management (IAM) tools. These
tools monitor log files for repeated failed login attempts and block incoming traffic from the
offending IP addresses, helping mitigate brute-force attacks.
Writing custom scripts to parse log files, keep track of failed login attempts, and lock user accounts is
also an option.
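A minimal sketch of account lockout using pam_faillock (assuming a distribution where
/etc/security/faillock.conf is supported; the module name and file location vary):
# /etc/security/faillock.conf
deny = 5            # lock the account after 5 consecutive failures
unlock_time = 600   # automatically unlock after 10 minutes

# Review and reset a locked account manually:
sudo faillock --user [username]
sudo faillock --user [username] --reset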
10. Enable Two-Factor Authentication
(Intermediate security mechanism)
Two-factor authentication (2FA) is a security measure that adds an extra layer of protection. By
requiring a secondary verification method, such as a one-time code sent to the user's mobile device,
2FA makes it much more difficult for unauthorized users to access sensitive information or systems.
There are various ways to enable 2FA on Linux systems. Common methods include using TOTP
(Time-based One-Time Password) apps like Google Authenticator or a hardware token like
a Yubikey. Certain Linux systems have built-in 2FA capabilities, such as PAM (Pluggable
Authentication Modules), that work with various authentication methods.
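A common way to enable TOTP-based 2FA for SSH, sketched here under the assumption that the
Google Authenticator PAM module is packaged for your distribution:
sudo apt install libpam-google-authenticator
google-authenticator   # run as the user being enrolled; generates ~/.google_authenticator
# Then add to /etc/pam.d/sshd:
#   auth required pam_google_authenticator.so
# and enable challenge-response prompts in /etc/ssh/sshd_config:
#   ChallengeResponseAuthentication yes
sudo systemctl restart sshd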
11. Keep Linux Up to Date
(Basic security mechanism)
Hackers exploit outdated software. To maintain Linux server security, keep the Linux kernel and
software up to date. Different Linux distributions offer various package managers to update packages
manually, with yum and apt being the most popular.
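For example, a manual full update with each of the two popular package managers:
# Debian/Ubuntu
sudo apt update && sudo apt upgrade
# RHEL/CentOS
sudo yum update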
Another method includes automatic updates. Automatic updates are installed in the background
without requiring any action from the user, making updating software easier and more convenient.
However, these types of updates are also risky.
Important: Automatic updates can cause compatibility issues with other packages or result in
unexpected changes to the system. In general, running automatic updates on production servers is not
recommended.
12. Use Linux Security Extensions
(Intermediate security mechanism)
Linux security extensions are tools and features that provide additional security measures to a Linux
operating system. These extensions help protect against misconfigured or compromised programs,
defend against potential attacks, and enforce limitations on networks and programs.
Popular Linux security extensions are:
• SELinux (Security-Enhanced Linux) is a security feature integrated into the Linux kernel that
uses a Mandatory Access Control (MAC) system. It allows administrators to control access to
system resources, permitting only authorized users and processes. This ensures that only
trusted parties can access and modify important system information. SELinux is more common
on RHEL and CentOS systems.
• AppArmor is a mandatory access control system that allows administrators to specify the
permissions required by applications to access system resources. It's been a default feature of
Ubuntu since version 7.10.
• TCP Wrappers are a security tool that provides basic access control for network services by
checking the client's IP address against a list of allowed or denied addresses. The request is
granted if the client's IP address is found in the allow list, and if it is found in the deny list, the
request is rejected.
• PAM (Pluggable Authentication Modules) provides a flexible and centralized system for
managing authentication on a Linux system. PAM allows administrators to configure the
authentication system and choose the best methods for their security needs. Moreover, PAM
makes it easier to enforce strong authentication policies and ensures that all applications and
services use the same authentication system.
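For instance, to check which of these mechanisms is active on a given machine:
getenforce        # SELinux mode: Enforcing, Permissive, or Disabled (RHEL/CentOS)
sudo aa-status    # AppArmor profile status (Ubuntu and derivatives)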
13. Configure Linux Firewall
(Basic security mechanism)
A firewall on a Linux system acts as the first line of defense against malicious network traffic. The
firewall defines rules that govern what traffic is allowed and what is blocked. Sysadmins apply those
rules to control incoming and outgoing network traffic, blocking unauthorized access and only
allowing necessary services.
The default Linux firewall is iptables, a popular tool that provides packet filtering and manipulation
capabilities for IPv4 and IPv6 network traffic. It filters network traffic, forwards traffic between
network interfaces, and implements network address translation (NAT).
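A minimal default-deny inbound policy with iptables might look like the following sketch (assuming
SSH on port 22 is the only service that must stay reachable). Apply such rules from a console session,
since a mistake can lock you out of a remote machine:
# Keep established connections working.
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow loopback traffic and inbound SSH.
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Drop everything else inbound.
sudo iptables -P INPUT DROP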
14. Reduce Network Service Vulnerabilities by Isolation
(Intermediate security mechanism)
To enhance the security of network services, run each service on a separate server or virtual instance.
The process limits the number of vulnerable services, making managing security patches, updates, and
configurations easier.
There are several ways to implement this method:
• Use virtualization tools like VirtualBox to create individual virtual machines (VMs) for each
network service. Or, create isolated containers with Docker or Kubernetes for each network
service.
• Use firewall rules to control incoming and outgoing network traffic, only allowing the
necessary services.
• Segment the network into separate subnets to isolate different services and minimize the risk of
attacks.
• Regularly monitor network traffic and logs for suspicious activity and take appropriate action.
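As a small illustration of the container approach from the first bullet, each service gets its own
isolated container (the image and port mapping are illustrative):
docker run -d --name web -p 8080:80 nginx   # run an nginx web server in its own container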
15. Secure Web Servers
(Intermediate security mechanism)
Web servers like Apache and Nginx are prime cyberattack targets as they often deal with sensitive
data. Securing these servers is critical to prevent unauthorized access and data breaches.
Top tips for securing Apache and Nginx web servers are:
• Regularly update the software to apply the latest security patches and fixes.
• Configure access control to limit access to sensitive information and prevent unauthorized
access.
• Disable unneeded modules and features to reduce the attack surface and minimize security
vulnerabilities.
• Use strong passwords to secure the administration interface and prevent unauthorized access.
• Use SSL certificates to encrypt data transmitted over the network and secure sensitive
information such as passwords and financial data.
• Regularly monitor logs for suspicious activity or unauthorized access attempts.
• Run web servers as a non-root user with limited privileges to prevent unauthorized access and
data breaches.
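Two illustrative nginx directives that implement the tips above (reduce information leakage, enforce
modern TLS); Apache has equivalent settings:
# /etc/nginx/nginx.conf (fragment)
server_tokens off;                 # hide the nginx version in headers and error pages
ssl_protocols TLSv1.2 TLSv1.3;     # disable legacy SSL/TLS versions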
16. Detect Listening Network Ports
(Intermediate security mechanism)
In a Linux system, ports are used when a program, such as a server or a network service, opens
a network socket to receive incoming client connections. Open ports listen for those incoming
connections.
However, listening ports are a weakness attackers exploit. A vulnerable listening port provides access
to the system or sensitive information.
By detecting all listening ports, sysadmins identify and secure them by applying updates, limiting
access, or disabling unnecessary ones. Furthermore, identifying listening ports helps detect rogue or
unauthorized applications that pose a security risk.
Identify listening ports in a Linux system with netstat, ss, or lsof.
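For example, either of the following lists listening sockets along with the owning process:
sudo ss -tulpn
sudo lsof -i -P -n | grep LISTEN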
17. Disable Unwanted Linux Services
(Basic security mechanism)
Unneeded services in Linux are a security vulnerability and consume resources like memory and CPU.
To improve security and performance on a Linux operating system server, keep a minimal installation
with only the necessary packages.
To manage system services, Linux uses systemd with the systemctl command.
1. To check if a service is active, run:
sudo systemctl status [service_name]
For instance, check the status for snap with:
sudo systemctl status snapd
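To stop and disable a service you decide is unneeded:
sudo systemctl disable --now [service_name]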
When working with older systems that use System V or Upstart, run the chkconfig command to
manage services.
It is also important to check dependencies before installing software and inspect auto-started
dependencies to ensure they are needed.
18. Use Centralized Authentication Service
(Intermediate security mechanism)
A Centralized Authentication Service (CAS) is a single sign-on protocol that allows web applications
that may not be trusted to authenticate users through a centralized, trusted server. The CAS server
handles all authentication, so the user's credentials are not revealed to the applications.
A centralized authentication service is crucial for Linux security as it allows sysadmins to enforce
password policies and manage user accounts in a secure and scalable way. It makes monitoring and
auditing authentication easier, reduces the risk of lost login credentials, and ensures consistent user
data.
Common Linux Central Authentication Services are Kerberos, Samba, and Winbind.
19. Set Up an Intrusion Detection System
(Advanced security mechanism)
An intrusion detection system (IDS) monitors processes running on the server. By monitoring network
traffic, it detects potential threats such as denial-of-service attacks, port scans, and attempts to break
into computers.
Popular IDS options include:
• Sophos. A cloud-based management platform that integrates multiple security products and
uses machine learning to trigger automatic threat responses. It also uses advanced techniques
like sandboxing and SSL inspection to identify and isolate compromised systems.
• SolarWinds - NetFlow Traffic Analyzer. A network monitoring utility that inspects network
traffic using intrusion detection. It is configured with over 700 event correlation rules, allowing
it to automatically detect suspicious activities and implement remediation actions.
• Fail2Ban. A lightweight host-based intrusion detection software system designed to protect
against brute force attacks. It monitors server logs and detects any suspicious activity,
providing an extra layer of security for the server.
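A minimal sketch of the Fail2Ban approach mentioned above (the values are illustrative assumptions):
# /etc/fail2ban/jail.local
[sshd]
enabled = true
maxretry = 5      # ban after 5 failed logins
bantime = 600     # ban duration in seconds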
20. Manage Linux File Permissions
(Basic security mechanism)
Managing file permissions in Linux protects sensitive files and directories from unauthorized access.
Limiting access to files and directories reduces the risk of data breaches, theft of sensitive information,
and unauthorized modifications to the system.
Several tools manage file permissions in Linux, including the chmod command, which allows
sysadmins to change file permission recursively and configure multiple files and subdirectories using a
single command.
The ls command lists file permissions, and the chown command changes file ownership.
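For example (the path, user, and group names here are illustrative):
sudo chmod -R 750 /srv/project                 # owner: rwx, group: r-x, others: none
sudo chown -R alice:developers /srv/project    # set owner and group
ls -l /srv/project                             # verify the resulting permissions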
21. Use Access Control Lists (ACLs)
(Intermediate security mechanism)
Compared to traditional file permissions systems, ACLs are a more advanced way of controlling
access to files and directories in Linux systems. The traditional system only allows three basic
permissions (read, write, and execute) to be assigned to three permission classes (file owner, group
owner, and others). However, ACLs allow for more fine-grained control.
Sysadmins use ACLs to define different permissions for specific users and groups on a per-file or per-
directory basis. This allows for implementing more complex access control policies, like granting
certain users read-only access to sensitive files or allowing certain groups write access to
specific directories.
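For instance, granting a single user read-only access to a sensitive file (the user and file names are
illustrative):
setfacl -m u:alice:r report.txt    # add a read-only ACL entry for user alice
getfacl report.txt                 # review all ACL entries on the file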
22. Monitor Suspicious Server Logs
(Intermediate security mechanism)
To improve Linux system security and prevent brute force attacks, analyze server logs with log
management applications such as Logwatch or logcheck.
Both tools allow sysadmins to regularly monitor the logs for unusual activity and provide a
summarized report.
Logwatch parses log files from services and applications running on the system and generates a daily
report of error messages, security alerts, and system warnings. The command has numerous options
and settings. For instance, to see a detailed report in the terminal, run:
sudo logwatch --detail high
Logcheck focuses on log files related to system security, such as authentication logs and firewall logs.
Logcheck summarizes the events in these logs and sends a daily report via email to the sysadmin.
The logcheck command also has a lot of options. To output everything, run:
sudo -u logcheck logcheck -o -t
Note: Neither Logwatch nor logcheck come preinstalled in Linux. Use apt (or your distribution's
package manager) to install them.
23. Restrict World-Writable Files
(Intermediate security mechanism)
World-writable files, directories, and devices on a Linux server pose a significant security risk. Any
user is able to modify these files, potentially leading to unauthorized changes, data tampering, or
malicious actions.
Locate and remove these files following these steps:
1. Identify world-writable files or directories with the find command:
sudo find /. -xdev -type d \( -perm -0002 -a ! -perm -1000 \) -print
2. Investigate each file or directory in the output.
3. Use chmod to update the permissions or remove unnecessary files or directories. For instance, set
permissions to 600 so that only the owner can read and write a file (directories also need the execute
bit, so use 700 for them):
sudo chmod 600 [file]
The command has no output.
24. Configure Logging and Auditing Processes
(Intermediate security mechanism)
Logging and auditing provide valuable information about the system and network events, aiding
administrators in detecting and addressing malicious threats. To increase security:
1. Centralize log data from different sources to a single repository making it easier to search,
analyze, and store logs.
2. Rotate logs regularly to keep only a limited number of logs and reduce the storage space.
3. Enforce a retention policy to save space and prevent log data from becoming too large to
handle.
4. Use the Linux audit framework (auditd), SELinux, or a similar mechanism to track and record
specific events, such as access to sensitive files, user logins, and system changes.
5. Implement real-time syslog server monitoring tools, such as log analyzers or security
information and event management (SIEM) systems, to get alerts for potential security
incidents.
6. Restrict privileges to log files, allowing access to only necessary users to prevent unauthorized
modification or tampering of log data.
7. Encrypt log data before any network transmissions to maintain data confidentiality.
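A small sketch for the auditing idea in item 4, using the Linux audit framework (assumes the audit
package is installed):
# Record writes and attribute changes to /etc/shadow under the key "shadow-changes".
sudo auditctl -w /etc/shadow -p wa -k shadow-changes
# Search the audit log for matching events.
sudo ausearch -k shadow-changes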
25. Disable Unwanted SUID and SGID Binaries
(Intermediate security mechanism)
SUID and SGID are file permissions allowing a file to be executed with owner or group privileges.
Still, they pose security risks if not adequately secured. The main risk is a privilege escalation attack.
Privilege escalation attacks occur when an attacker gains access to a system with limited privileges and
then uses a vulnerability to elevate their privileges to the binary owner or group level.
To disable these files:
1. Identify SUID and SGID files with the find command:
sudo find / -perm /6000 -type f
Note: An alternative form on older systems is find / -perm +6000 -type f. However, modern
versions of GNU find have replaced the + prefix with /, and the + form returns an error on those
systems.
2. Evaluate the output and decide what to keep or discard.
3. Use the chmod command to change the permissions or remove unnecessary files.
4. Regularly monitor the SUID and SGID files to ensure the permissions have not changed.
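To clear the SUID bit on a binary that does not need it (the path is illustrative):
sudo chmod u-s /usr/local/bin/example   # remove setuid; use g-s for setgid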
26. Encrypt Data Communication
(Intermediate security mechanism)
Linux provides various methods for encrypting data communication:
1. Secure File Transfer Protocol (SFTP) transfers files between systems securely. With SFTP,
users choose the level of authentication for transferring files. To use SFTP, users must install
an SFTP server on one system and an SFTP client on the other. However, SFTP only protects
files during transfer, and the data is no longer encrypted after reaching the server.
2. Secure Sockets Layer (SSL) guards information passed between two systems via the internet
and is used in server-client and server-server communication. SSL encrypts the transmitted
data, making it difficult for an attacker to intercept or alter the information. To use SSL, users
must obtain an SSL/TLS certificate and install it on the server.
3. Apache SSL (mod_ssl) is a component for the Apache web server that provides secure
communication between a client and a server. It implements the SSL (Secure Sockets Layer)
and TLS (Transport Layer Security) protocols, which provide encryption and authentication for
secure communication.
4. SSL/TLS: SSL (Secure Sockets Layer) and its successor, TLS (Transport Layer Security), are
widely used protocols for encrypting data communication over the internet; TLS is the more
modern and secure of the two.
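For example, transferring a file over SFTP from the command line (the host and paths are illustrative):
sftp user@server.example.com
# or non-interactively:
scp report.pdf user@server.example.com:/srv/uploads/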
27. Use Encryption Tools to Protect Sensitive Data
(Intermediate security mechanism)
Encryption allows users to protect confidential information from unauthorized access, even during a
data breach. Encrypting files before transmitting them with SFTP, for instance, ensures additional
protection. The following tools allow users to encrypt files before transmitting:
1. LUKS (Linux Unified Key Setup). The most widely used method for encrypting partitions on
Linux systems. LUKS allows users to encrypt the entire partition file system, protecting all the
data.
2. Node.js. An open-source JavaScript (JS) runtime environment that works as an encryption tool.
Node.js has an extensive cryptography library, such as crypto, node-forge, or sodium-native,
which provide functions for encrypting and decrypting data, generating encryption keys, and
managing cryptographic operations.
3. CryFS. A cryptographic file system that encrypts users' files and stores the encrypted data on
cloud storage services like Dropbox or Google Drive. CryFS works by transparently encrypting
a user's files before they are uploaded to the cloud and then decrypting them when accessed.
The files remain encrypted and protected from unauthorized access, even when stored in the
cloud.
4. SecureDoc for Linux. A security solution for Linux endpoints, providing enterprise-class full
disk encryption. It separates the solution into two components: encryption and key
management. SecureDoc for Linux adopts a Zero Trust approach to security but allows live
disk conversion during encryption.
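A minimal LUKS sketch for item 1 (warning: luksFormat destroys existing data; the device name is an
illustrative assumption):
sudo cryptsetup luksFormat /dev/sdb1          # initialize the encrypted partition
sudo cryptsetup open /dev/sdb1 securedata     # unlock it as /dev/mapper/securedata
sudo mkfs.ext4 /dev/mapper/securedata         # create a filesystem on the mapped device
sudo mount /dev/mapper/securedata /mnt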
28. Use a VPN
(Basic security mechanism)
Virtual Private Networks (VPNs) are crucial in ensuring communication security over public networks
on Linux systems. A VPN uses encryption and authentication to protect sensitive information from
interception or tampering during transmission.
Unlike open networks, private and virtual private networks limit access to only authorized users.
Private networks use a unique IP address to set up isolated communication channels between servers
within the same IP range, allowing data exchange without exposure to a public space.
With a VPN, data is encrypted at the source, transmitted over a public network, and decrypted at the
destination. This helps to secure communication between the two points, even over an insecure
network. VPNs are useful when connecting to a remote server as if directly connected to a private
network.
In Linux, a popular VPN is OpenVPN. This open-source VPN solution provides robust security and
high performance, building its encryption on the SSL/TLS protocol. OpenVPN is known for its ease of
use, flexible configuration options, and compatibility with most Linux distributions.
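For example, connecting with a provider-supplied profile (the file name is illustrative):
sudo apt install openvpn
sudo openvpn --config client.ovpn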
29. Harden the Linux Kernel
(Intermediate security mechanism)
The Linux kernel plays a crucial role in the overall security of a system. The /etc/sysctl.conf
file configures kernel parameters at runtime and hardens the Linux kernel. Hardening the Linux kernel
prevents attacks and limits the damage from potential attacks.
To harden the Linux kernel, configure settings in the /etc/sysctl.conf file:
• Disable IP forwarding to reduce the risk of the system being used as a pivot point in an attack.
• Enable source address verification to prevent IP spoofing attacks by verifying that incoming
packets have a valid source IP address.
• Disable ICMP redirect acceptance to stop attackers from altering the system's routing tables
through ICMP redirect messages.
• Disable IP source routing to prevent IP packet routing manipulation.
• Increase the connection tracking table size to limit the memory used by the connection tracking
system, reducing the risk of a denial-of-service attack.
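A sketch of /etc/sysctl.conf entries corresponding to the first four recommendations above (the key
names are standard; the values shown are the hardened settings):
net.ipv4.ip_forward = 0                      # disable IP forwarding
net.ipv4.conf.all.rp_filter = 1              # verify source addresses (anti-spoofing)
net.ipv4.conf.all.accept_redirects = 0       # ignore ICMP redirects
net.ipv4.conf.all.accept_source_route = 0    # ignore source-routed packets
# Apply the changes without rebooting:
sudo sysctl -p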
30. Separate Disk Partitions for Improved Linux Security
(Intermediate security mechanism)
Separating disk partitions in a Linux system improves its security by isolating sensitive data and
system files from each other. This allows for implementing different security measures for each
partition, making it more difficult for an attacker to compromise the entire system.
Critical filesystems to be mounted on separate partitions include:
• /usr
• /home
• /tmp
• /var
• /var/tmp
Creating separate partitions for Apache and FTP server roots is also recommended. Mounting the
partitions with different file permissions and mount options, such as the noexec option, prevents the
execution of programs in a partition. This helps prevent attackers from exploiting vulnerabilities in the
software installed on the system and limits the potential damage.
Having separate partitions also makes it easier to back up, restore, upgrade, or replace individual parts
without affecting the rest of the system. In the event of a security breach, this allows for restoring the
compromised partition without affecting the entire system.
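For example, an /etc/fstab entry that mounts /tmp with the noexec option mentioned above (nosuid
and nodev are commonly added as well):
tmpfs  /tmp  tmpfs  defaults,noexec,nosuid,nodev  0 0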
31. Enable Disk Quotas
(Basic security mechanism)
Enabling disk quotas in Linux prevents the system from running out of disk space. Limited disk space
makes it easier for attackers to exploit vulnerabilities or cause a denial-of-service (DoS) attack by
filling up the disk. Setting disk quotas limits the disk space that each user or group takes, preventing
attackers from using up all available disk space.
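A rough sketch of enabling per-user quotas on /home (assumes the quota tools are installed; the user
name is illustrative):
# After adding "usrquota" to the /home entry in /etc/fstab:
sudo mount -o remount /home
sudo quotacheck -cum /home      # build the quota database
sudo quotaon /home              # turn quota enforcement on
sudo edquota -u alice           # edit soft/hard limits for a user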
32. Manage "Noowner" Files
(Basic security mechanism)
Managing "noowner" files is essential for Linux security because files not owned by a valid user or
group are easily manipulated by an attacker and used to hide malicious files or activities.
To manage "noowner" files:
1. Use the find command to locate files that do not belong to a valid user or group.
2. Investigate each reported file to determine its purpose and usage.
3. Assign the file to an appropriate user and group, or remove it if unnecessary.
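For step 1, a common form of the find command is:
sudo find / -xdev \( -nouser -o -nogroup \) -print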
33. Backup Linux System
(Intermediate security mechanism)
Backing up a Linux system is crucial for security, allowing users to recover from a system compromise
or data loss. A data backup copy helps restore the system to a secure state in case of a security breach,
hardware failure, or another disaster.
To back up a Linux system, use traditional UNIX backup programs such as dump and restore or a
cloud-based service such as AWS. Ensure the backups' security by encrypting and storing them in a
secure location, such as an external storage device or a NAS server.
Most Linux distributions have built-in backup tools. To start with backup on Linux, search for backup
tools in the system menu.
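For instance, a simple mirror-style backup of /home with rsync (the destination path is illustrative):
sudo rsync -aAX --delete /home/ /mnt/backup/home/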
34. Install an Antivirus Program
(Basic security mechanism)
An antivirus program protects the system from viruses, trojans, and spyware. Antivirus scans the
system, detects potential threats, and prevents damage.
Several antivirus options are available for Linux systems, including free and paid options. Popular free
antivirus programs are ClamAV and AVG, and paid options include McAfee and Symantec. Also,
regularly update and run the antivirus program to ensure that it provides the most up-to-date protection.
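For example, with ClamAV (the package name assumes a Debian-style system):
sudo apt install clamav
sudo freshclam                     # update the virus definition database
clamscan -r --infected /home       # recursively scan, printing only infected files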
35. Prevent Ransomware Attacks
(Advanced security mechanism)
Ransomware is malware that encrypts user files and demands payment for the decryption key. If a
Linux system is infected with ransomware, the loss of important data, files, and sensitive information
is possible.
Preventing ransomware attacks requires a combination of security measures, such as:
• Setting up a firewall.
• Setting up ad blockers.
• Implementing strong passwords.
• Keeping software up-to-date.
• Using reliable antivirus software.
• Running regular security tests.
• Whitelisting applications.
• Setting up a sandbox.
• Improving email security.
• Employing a zero-trust policy.
36. Regularly Perform Vulnerability Assessments
(Advanced security mechanism)
Performing regular vulnerability assessments in Linux helps identify any potential security risks and
weaknesses in the system.
The process enables sysadmins to proactively mitigate the risks and improve the system's overall
security.
Several tools and techniques perform vulnerability assessments in Linux, including:
• Network scanning tools scan the network for open ports and services and identify potential
vulnerabilities in the system.
• Penetration testing tools simulate an attack on the system to identify weaknesses an attacker
could exploit.
• Code review tools analyze the source code of applications and system components, looking for
potential security vulnerabilities.
• Vulnerability databases provide information about known vulnerabilities in specific software
and operating systems, including patches and workarounds.
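As an example of network scanning, an nmap service-detection sweep (the address range is an
illustrative assumption; only scan networks you are authorized to test):
sudo nmap -sV 192.168.1.0/24    # probe open ports and identify service versions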
37. Invest in Disaster Recovery
(Advanced security mechanism)
A disaster recovery plan outlines the steps to be taken in case of a system failure, data loss, or security
breach. A well-designed disaster recovery minimizes a disaster's impact, reduces downtime, and
increases the Linux system's security.
To create a disaster recovery plan, follow this checklist:
• Identify critical systems and data that must be recovered in a disaster.
• Choose a backup and recovery solution that meets your needs and budget.
• Test the backup and recovery plan regularly to ensure it works as expected.
• Train all personnel on the disaster recovery plan and procedures.
• Document the plan and keep it up-to-date.
38. Upgrade Security Incident Response Plan (CSIRP)
(Advanced security mechanism)
An effective computer security incident response plan (CSIRP) is critical to a robust security program.
It provides a clear and organized plan for responding to various security incidents, such as data
breaches, cyber-attacks, and other security-related events. It is essential to periodically update the
CSIRP to keep up with new security threats and to remain able to detect and respond to incidents in a
way that minimizes damage and data loss.
When upgrading the CSIRP, make sure to:
• Review the current plan and identify areas for improvement.
• Evaluate contemporary security threats and assess the impact.
• Revise incident response procedures to align with current security best practices.
• Update incident categorization and prioritization criteria.
• Review and update communication plans for internal and external stakeholders.
• Test and refine the updated CSIRP through simulations and tabletop exercises.
39. Use a Security-Focused Web Browser
(Basic security mechanism)
Using a security-focused web browser and configuring it to block malicious sites on Linux protects
against malware attacks by blocking access to known malicious websites and warning users about
potentially dangerous sites.
Certain security-focused web browsers, like the Tor Browser, encrypt internet connections to protect against
eavesdropping and tampering. The process creates a secure internet connection and reduces the risk of
hacking or other cyber attacks.
By blocking malicious sites, security-focused web browsers reduce the risk of data breaches when
users inadvertently access sites that contain malware or steal personal information.
40. Ensure Linux Server Physical Security
(Intermediate/Advanced security mechanism)
To ensure the physical security of Linux servers, organizations must implement several measures to
prevent unauthorized access. Here are the key steps to consider:
• Secure physical console access. Configure the BIOS to disable booting from external devices
such as CDs, DVDs, or USB pens. Set BIOS and GRUB boot loader passwords to protect these
settings.
• Implement multi-layered security. Use physical locks on server room doors, install security
cameras to monitor the area, implement access control systems to restrict access to
unauthorized personnel and regularly check the servers for signs of tampering or theft.
• Implement environmental controls. Consider installing air conditioning to prevent server
damage due to heat or other environmental factors.
• Perform regular security audits. Ensure that the physical security measures are up to date and
effective in preventing unauthorized server access.
• Lock servers in IDCs. Make sure to lock all production servers in IDCs (Internet Data Centers).
⭕ Notable Critical Microsoft Vulnerabilities
• CVE-2023-35315 is an RCE vulnerability affecting Windows Server configured as a Layer-2
bridge. An unauthenticated attacker must first gain access to the restricted network, then attack
by sending specially crafted file operation requests to Windows Server. Successful
exploitation leads to remote code execution on the target system.
Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability
• CVE-2023-35365, CVE-2023-35366, and CVE-2023-35367 - Routing and Remote Access
Service (RRAS) is a networking and routing service that provides dial-up or VPN connections
for remote users or site-to-site connectivity. It allows organizations to connect to the internet,
other networks, or remote users securely via VPN connections. To exploit this vulnerability, an
attacker must send specially crafted packets to a server configured with the RRAS service
running. This can be done by sending the packets from a compromised computer or by using a
botnet to send the packets from multiple computers. Once the packets are sent, they can exploit
a vulnerability in the RRAS service and gain control of the server.
• CVE-2023-32046 - Windows MSHTML is a browser engine that renders web pages and is
frequently used by Internet Explorer. Even though Internet Explorer 11 has reached the end of
support, MSHTML vulnerabilities are still relevant today and are being patched by Microsoft.
This vulnerability can be exploited in both email and web-based attack scenarios. In an email
attack scenario, an attacker would send a specially crafted file to the user and convince them to
open it. In a web-based attack scenario, the attacker may either create a malicious website or
compromise an existing one that accepts or hosts user-provided content. The malicious website
would contain the specially crafted file aimed at exploiting the vulnerability.
Creating a web-based application can be a time-consuming process, and errors can happen along the
way. To make sure every link and feature works as intended, it’s wise to test every possible scenario.
Automated testing allows computer programs to do a lot of the testing for you, allowing you to focus
on feature development.
This article discusses the importance of web app test automation and presents the best automation
frameworks to get end-to-end testing done.
What is web app test automation?
Web app test automation involves using a software program to perform automated tests that can
identify bugs in web applications.
You may already be testing your web-based app, but doing so manually. While manual testing allows
you to check an application’s individual aspects and get user feedback, it also has its limitations: it’s
both time-consuming and relies wholly on human judgement.
By contrast, test automation allows you to repeat tests multiple times by using a framework to ensure
each step of the test is followed. Automated testing also frees up engineers to work on other parts of
the project.
What’s more, you can continuously run automated tests as software updates are released, which
ensures that you’re not introducing new bugs into your app. Since you can build tests into a
framework, you can run those tests any time you make a software change with just the push of a
button.
To perform automated testing, software and QA engineers need to choose the test automation
framework that best suits their needs. With several automated testing tools available, we set some
criteria to identify the best frameworks out there.
Framework selection criteria
Frameworks were required to meet three criteria to be considered for this list of the best frameworks
for testing web-based applications.
The framework must be free to download and use
Creating a web application requires considerable investment, and just the costs for testing can add up
quickly. Therefore, we consider it vital that a test automation framework be free to download and
allow access to all of its features.
A framework based on a subscription model or one that has key features locked behind a paywall is a
hindrance and also slows down the process of getting started. In a company, delays can come in the
forms of purchase order requests and justification of use. With free tools, the time you need to get
started is limited only by your download speed.
The framework must be open-source
Open-source software, or OSS, provides users the ability to access any part of a framework’s code and
modify it. In practice, engineers and programmers will offer feedback on an open-source program’s
functionality. These same professionals can also make their own additions to the program, with a view
to helping other users.
Programs that use open-source code tend to generate a community of software developers who
contribute to the open-source project and have a vested interest in its success. These same people stand
behind the open-source software and are often quick to offer suggestions and guidance when asked.
While most software developers won’t need to access the core of the open-source software, the option
is there. In practice, however, if you hit an issue that requires heavy debugging, you’re better off if you
can see the framework’s code to trace the function calls responsible for the issue.
Another benefit of using open-source software is the freedom to stick with a version of the software
that works best for you and use it as long as needed without concern. With closed-source programs,
you’re subject to changes the developers make, whether you like them or not.
The framework must be actively maintained and support all major browsers
A test automation framework won’t be useful to many users if it’s not compatible with the latest
versions of the most popular browsers out there (e.g. Chrome, Firefox, and Safari).
Browsers are no different than other software applications in that they see regular updates.
Frameworks that don’t accommodate such changes may lack features that could have a negative effect
on the framework’s ability to run correctly. A lack of updates to the framework might also mean that
your web app could have compatibility issues with a newer browser version that you won’t be able to
detect.
Regular test automation framework updates are also a good sign that the framework’s development
team are actively involved in the project and won’t jump ship any time soon.
Our top 4 frameworks for testing web-based applications
With these criteria in mind, here are the four web app testing frameworks that you should consider
today.
Cypress
Cypress was built from the ground up and not Selenium-based like many other automated testing
programs out there. Released in 2017, Cypress has plenty of documentation on how to use it.
Cypress relies on JavaScript and is beginner-friendly, operating directly in the browser itself, without
the need for any additional downloads or dependencies.
Of course, Cypress is completely free to download and use. It is open source and has seen regular
updates from the time of its public release. The automated test software runs in Chrome-family
browsers (including Electron and Chromium-based Edge) and Firefox.
The reasons above make Cypress a great go-to tool, especially for those who are newer to automated
testing. If you’re working with one of the above browsers and have an understanding of JavaScript,
look no further than Cypress.
For more information about Cypress, check out our in-depth guide and information on getting started.
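Getting started is typically a two-command affair in an existing npm project (a sketch, assuming
Node.js is installed):
npm install cypress --save-dev   # add Cypress as a dev dependency
npx cypress open                 # launch the interactive test runner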
Playwright
Playwright is a newcomer to the automated testing space for web applications. It works well with
popular programming languages such as JavaScript, Python, Java, and C#. In addition to supporting
Chrome and Firefox, Playwright also supports Webkit, making the framework ideal for web-based
apps designed to run on Safari as well.
As Microsoft's latest open source test library for automating web-based applications, Playwright does
have the support of a large organisation behind it. While this may be a red flag for some, the program
remains open-source and flexible to use. There’s no fee for downloading or using the software, and
new versions roll out about once a month to keep the framework fresh.
Playwright installs very smoothly with one command downloading all the prerequisite files for each of
the three Playwright-supported browsers. Better yet, any test you write for one particular browser can
also be used across the others with minimal work.
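The one-command setup mentioned above looks like this:
npm init playwright@latest   # scaffolds config, example tests, and downloads browser binaries
npx playwright test          # run the suite headlessly across the configured browsers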
Selenium WebDriver
Selenium WebDriver is part of the greater Selenium suite of tools designed to automate web-based
testing.
It should come as no surprise that Selenium is both free and open-source to meet the criteria of this list.
The test automation framework launched all the way back in 2007 and still receives regular updates.
Thanks to its track record, Selenium has a Google group full of people with questions and answers.
The WebDriver framework offers a lot of versatility, enabling software developers to create tests using
programming languages such as Java, Ruby, Python, C#, and PHP.
Additionally, Selenium WebDriver is compatible with all of the popular browsers out there, including
Chrome, Edge, Safari, Firefox, and Opera. This makes Selenium useful if your app needs to work with
all those browsers. On the negative side, Selenium’s setup is quite complicated as each browser
requires specific drivers to run.
Specific requirements for each browser make WebDriver a challenge to use as well. As a result,
WebDriver isn’t an ideal framework choice unless you need to test the gamut of browsers or your
company has already invested in the Selenium Test Suite.
Robot Framework
Robot Framework may be used for test automation but was created with robotic process automation (RPA) in
mind. The framework’s developers are active, providing frequent updates for users. Robot Framework
is entirely open source and free to use.
This automated test software is built around keyword-based syntax for test cases and does not require
knowledge of a programming language to use. That said, you can extend test scripts with libraries that
use Java or Python. Users have the ability to develop tools and libraries to further enhance Robot
Framework’s functionality.
Depending on the library you choose to implement with Robot Framework, the automated test
software is usable with theoretically any browser. Many software developers use the Selenium library,
offering flexibility to run on any browser.
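For example, installing Robot Framework with the Selenium library and running a suite (assumes
Python and pip are available; the tests/ directory is illustrative):
pip install robotframework robotframework-seleniumlibrary
robot tests/    # run every .robot suite in the tests directory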
Network Security tools aim to prevent devices, technologies, and processes from unauthorized data
access, identity thefts, and cyber threats.
Network security prevents unauthorized access of information or misuse of the organizational network.
It includes hardware and software technologies designed to protect the safety and reliability of a
network and data.
Network security tools are essential to secure your organization's network to stop several threats that
could damage the system and the network. It helps to monitor the network and prevent data breaches.
Network security tools can examine all the traffic across a network. Traffic monitoring helps the
organization proactively identify issues and threats before they turn into significant damage. Network
security tools also send real-time alerts for any unusual behavior to help prevent breaches.
Some of the benefits of Network Security Tools are:
• Network security tools will minimize the business and financial impact of any breach, as they
help you stay compliant with regulations and prevent breaches.
• Network security helps your business stay compliant and provides multiple levels of security to
increase the scope of your business and offer a better workplace for your employees.
• It ensures the protection of any sensitive information and data shared across the network.
Now let’s see the top 10 Network Security tools.
Best 10 Network Security Tools
1. Wireshark
Wireshark is an open-source network protocol analyzer that helps organizations capture real-time data
and track, manage, and analyze network traffic even with minute details.
It allows users to view reconstructed TCP session streams and helps analyze incoming and outgoing
traffic to troubleshoot network problems.
Features
• Deep inspection of hundreds of protocols
• Capture real-time data and offline analysis
• It runs on multiple operating systems like Windows, Linux, macOS, etc.
• It applies color coding to packets for quick analysis.
Pros
• Supports multiple operating systems like Windows, Linux, etc
• Easily integrates with third-party applications
Cons
• Steep learning curve
• Difficult to read the encrypted network traffic
• Lack of support
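Wireshark also ships with a terminal counterpart, tshark, which is handy on headless servers (the
interface name and capture filter are illustrative):
sudo tshark -i eth0 -f "tcp port 443" -c 100   # capture 100 packets of TLS traffic on eth0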
2. Nexpose
Nexpose is a network security software that provides real-time information about vulnerabilities and
reduces threats in a network. In addition, Nexpose assigns a risk score to each detected
vulnerability so that issues can be prioritized according to their severity.
Nexpose helps IT teams to get real-time scanning of the network and detect network vulnerabilities. It
also continuously refreshes and adapts to new threats in software and data.
Features
• Nexpose provides real-time visibility into network vulnerabilities.
• It provides a risk score and helps IT teams prioritize the risk as per the security levels.
• It shows the IT teams different actions they can take immediately to reduce the risk.
Pros
• Easy to use
• In-depth scanning of network vulnerabilities.
Cons
• No domain-based authentication for Linux devices
• Lack of customer support
3. Splunk
Splunk is used for monitoring network security. It provides both real-time data analysis and historical
data searches.
It is a cloud-based platform that provides insights for petabyte-scale data analytics across the hybrid
cloud.
Splunk’s search function makes application monitoring easy and user-friendly.
It provides a user interface to capture, index, and assemble data and generate alerts, reports,
dashboards, and graphs in real time.
Features
• Splunk attributes risk to users and systems, maps alerts to cybersecurity frameworks, and
triggers alerts when the risk exceeds a defined threshold.
• It helps in prioritizing alerts and accelerating investigations with built-in threat intelligence.
• It helps to get automatic security content updates to stay updated with the emerging threats.
Pros
• The indexing of data is easy
• Easy to use
Cons
• Steep learning curve
4. Nagios
Nagios is a network security tool that helps to monitor hosts, systems, and networks. It sends alerts in
real-time. You can select which specific notifications you would like to receive.
It can track network services and protocols like HTTP, NNTP, ICMP, POP3, and SMTP. It is a free tool.
Features
• Nagios helps monitor IT infrastructure components, including system metrics, network
protocols, application services, servers, and network infrastructure.
• It sends alerts when unauthorized network activity is detected and notifies IT admins of
important events.
• It provides reports which show the history of events, notifications, and alert responses for later
review.
Pros
• Great tool for live monitoring
• User friendly
• Data monitoring can be tracked easily
Cons
• Limited reporting capabilities
• The system slows down while monitoring the data
5. Tor
Tor is a network security tool that ensures the privacy of users while using the internet. It helps in
preventing cybersecurity threats and is useful in safeguarding information security.
Tor works on the concept of onion routing: traffic is wrapped in multiple layers of encryption, like the layers of an onion. Each relay peels away only one layer, so no single node can link the user's IP address and geographical location to the destination, limiting what can be observed about the sites you visit.
Features
• Tor software is available for Linux, Windows, as well as Mac
• It blocks third-party trackers, so ads can't follow you
• It prevents third parties watching your connection from learning which websites you visit
• It aims to make all users look the same, which makes fingerprinting difficult for trackers
Pros
• It protects your online identity
• Provides a high level of privacy
• User-friendly interface
Cons
• Browsing is slower than over a direct connection
• Startup and page-load times are high
6. Nessus Professional
Nessus Professional is network security software that can detect vulnerabilities like software bugs
and general security problems in software applications, IT devices, and operating systems and manage
them appropriately.
Users can access a variety of security plug-ins, develop their own, and scan individual computers as well as entire networks.
Features
• It provides customization of reports by vulnerability or hosts and creates a summary for the
users.
• Sends email notifications of the scan results
• It helps meet government, regulatory, and corporate requirements
• It scans cloud applications and prevents your organization from cybersecurity threats
Pros
• It offers flexibility for developing custom solutions
• Nessus vulnerability scans cover all standard network assets, like endpoints, servers, network devices, etc.
• Provide plug-ins for many vulnerabilities
Cons
• The software slows down when you scan a large scope
• Poor customer support
7. Metasploit
Metasploit is security software that contains various tools for performing penetration testing. IT professionals use it to find vulnerabilities in systems, improve computer system security, inform cyber-defense strategies, and carry out complete security assessments.
The penetration testing tools can examine various security systems, including web-based apps, servers,
networks, etc.
It allows an organization to perform security assessments, improving its overall network defenses and making them more responsive.
Features
• Exploit modules are used to take advantage of system weaknesses
• Encoder modules are used to transform payloads and information
• Metasploit allows a clean exit from the target system it has compromised
Pros
• Good support for penetration testing
• Useful to learn and understand vulnerabilities that exist in the system
• Freely available and includes all penetration testing tools
Cons
• Software updates are less frequent
• Steep learning curve
8. Kali Linux
Kali Linux is a penetration testing tool used to scan IT systems and network vulnerabilities. The
organization can monitor and maintain its network security systems on just one platform.
It offers a security-auditing operating system with more than 300 tools and techniques to help keep your sites and Linux servers safe.
Kali Linux is used by professional penetration testers, ethical hackers, cybersecurity experts, and
individuals who understand the usage and value of this software.
Features
• Kali Linux comes with pre-installed tools like Nmap, Aircrack-ng, Wireshark, etc., to help with
information security tasks.
• It provides multi-language support.
• It supports generating customized versions of Kali Linux.
Pros
• Pre-installed tools are ready to use
• Simple and user-friendly interface
Cons
• Limited customization
• The installation process is complicated
9. Snort
Snort is an open-source network security tool used to scan networks and prevent any unauthorized
activity in the network. IT professionals use it to track, monitor, and analyze network traffic. It helps to
discover any signs of theft, unauthorized access, etc. After detection, the tool will help send alerts to
the users.
Additionally, Snort is used to perform protocol analysis, detect frequent attacks on a system, search captured traffic for specific content, and more.
Features
• Snort provides a real-time traffic monitor
• It provides protocol analysis
• It can be installed in any network environment
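Snort's own rules are written in its dedicated rule language, but the core idea, matching traffic against known signatures, can be sketched in a few lines of Python with the third-party scapy library. This is an illustrative toy, not Snort itself; the signature and port are made-up examples.

    # Toy illustration of signature-based detection (the concept behind an IDS like Snort).
    # Uses scapy; sniffing generally requires root privileges.
    from scapy.all import sniff, IP, TCP, Raw

    SIGNATURE = b"/etc/passwd"  # hypothetical attack signature

    def inspect(pkt):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
            if SIGNATURE in pkt[Raw].load:
                print("ALERT: signature match from", pkt[IP].src)

    sniff(filter="tcp port 80", prn=inspect, store=False)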
Pros
• Good for monitoring network traffic
• Good for detecting any network intrusions
Cons
• Complicated settings and configuration
• Steep learning curve
10. Forcepoint
Forcepoint is a cloud-based security solution used to define and enforce network security policy, restrict users from accessing specific content, and block various attempts to hack or steal your organization's information.
The IT admin can customize Forcepoint to monitor and detect any unauthorized acts in a network and
can take the appropriate action required. It adds an extra level of security for critical threats.
Forcepoint is aimed mainly at organizations working in the cloud, and it can block or provide warnings about risky cloud servers.
Features
• Forcepoint helps in monitoring any unusual cloud activities.
• It provides tracking of any suspicious behavior and sends alerts to the IT admins.
• It protects and secures data.
• It helps to limit the access of your employees within the scope of your organization.
Pros
• Good support
• Easy to set up and user-friendly interface
Cons
• Creating reports is difficult
• Less flexibility in real-time screen monitoring
OSINT Sources
Open-source intelligence (OSINT) is gathered from publicly available sources, including:
• Public Records
• News media
• Libraries
• Social media platforms
• Images, Videos
• Websites
• The Dark web
Who uses OSINT?
• Government
• Law Enforcement
• Military
• Investigative journalists
• Human rights investigators
• Private Investigators
• Law firms
• Information Security
• Cyber Threat Intelligence
• Pen Testers
• Social Engineers
We all use open-source information and probably don't even realize it, but we also use it for different reasons. You
might use open-source information to do a credibility check and to find out more about the person
selling you something on Facebook marketplace. You may research someone you met on a dating app
or before hiring someone for a job.
A few years ago I found someone’s driver's license on the street when I was on a lunch break. I picked
it up, thinking I should drop it off at the local driver's license branch. Then I thought to myself, I
wonder what I will find if I just Google the person’s name (which I did). Turns out the second Google
result was a LinkedIn page with the person's name, photo, and workplace which was in the area. I
decided to call the company and ask to speak with this person and let them know I had found their
license on the street.
It may seem like it was too easy to find the result with a quick Google search, but this is not uncommon nowadays.
Most people, if not everyone, have some sort of digital footprint. This is a simple example to show you
how quickly you can find information on a person by simply Googling their name.
Intelligence Cycle
Let’s talk about the Intelligence Cycle and what it means for those working in OSINT. There are some
variations of the intelligence cycle but generally, it includes similar steps. Using the Intelligence Cycle
can assist with understanding what each stage of the cycle means to the OSINT research that will
follow.
Stages of the Intelligence Cycle
Preparation is when the needs and requirements of the request are assessed, such as determining the
objectives of the tasking and identifying the best sources to use to find the information for which you
are looking.
Collection, often the most important step, is the gathering of data and information from as many relevant sources as possible.
Processing is when the collected data and information are organized or collated.
Analysis and Production is the interpretation of the collected information to make sense of what was collected, e.g. identifying patterns or building a timeline of travel history, then producing a report to answer the intelligence question, draw conclusions, and recommend next steps.
Dissemination is the presentation and delivery of open-source findings, e.g. written reports, timelines, recommendations, etc., answering the intelligence question for stakeholders.
Passive versus Active OSINT
Understand the difference between passive and active research, as each type of research can have
different implications for your organization.
Passive means you do not engage with a target. Passive open-source collection is defined as
gathering information about a target using publicly available information. Passive means there will be
no communicating or engaging with individuals online, which includes commenting, messaging,
friending, and/or following.
Active means you are engaging with a target in some fashion, i.e. adding the target as a friend on
social profiles, liking, commenting on the target’s social media posts, messaging the target, etc. Active
open-source research is considered engagement and can be looked upon as an undercover operation for
some organizations. Please be aware of the differences and request clarification from your agency prior
to engaging.
For active research, it is essential to blend in with the group. If you are engaging with a target you may
want to create a couple of accounts on different platforms to make it look like you are a real person.
Each organization may have different interpretations of what is considered passive versus active
engagement. For example, joining private Facebook Groups may appear passive to some
organizations, whereas others may consider this as engaging. Sometimes this difference can imply
some sort of undercover operation capacity, therefore it's extremely important to have SOPs that
outline where the organization stands with this type of engagement.
Some researchers justify joining groups as passive, as they are only "passively" looking and not
actually communicating with targets.
A good example to consider is where a Facebook Group consists of 500 members or more, where
blending in may be easy, whereas a smaller group of 20 people may be riskier. Talk to your managers
before proceeding one way or the other.
3. Investigative Journalism: OSINT can be used by journalists to gather information on a range of
topics, including politics, business, and crime. This can help to uncover stories and provide
evidence for reporting.
4. Academic Research: OSINT can be used by researchers to gather data on a range of topics,
including social trends, public opinion, and economic indicators.
5. Legal Proceedings: OSINT can be used in legal proceedings to gather evidence or to conduct
due diligence on potential witnesses or defendants.
OSINT is an exceptional tool for gathering information on a wide range of topics and can be used by a
variety of organizations and individuals to inform decision-making and strategy.
Why Open-Source Intelligence (OSINT)?
Open-source intelligence (OSINT) is beneficial because it offers several advantages over other forms
of intelligence collection.
Here are some reasons why OSINT is valuable:
1. Access to publicly available information: OSINT collects publicly available and legally
accessible information. This means that organizations do not have to rely on classified or
restricted sources of information, which can be costly and time-consuming to get.
2. Wide range of sources: OSINT can be gathered from a wide range of sources, including social
media, news articles, government reports, and academic papers. Organizations can gather
information on a wide range of topics from many different perspectives.
3. Timeliness: Because OSINT relies on publicly available information, it can be gathered quickly
and in real time. Organizations or businesses can stay up-to-date on current events and
emerging trends.
4. Cost-effective: OSINT is more cost-effective than other forms of intelligence collection, such
as human intelligence or signal intelligence. This is because OSINT relies on publicly available
information and does not require specialized equipment or personnel.
5. Transparency: OSINT is transparent and can be easily verified. This means that organizations
can be confident in the accuracy and reliability of the information they gather.
OSINT offers many advantages over other forms of intelligence collection, making it a valuable tool
for a wide range of organizations and individuals.
How does open-source intelligence (OSINT) work?
Open-source intelligence (OSINT) is the practice of collecting and analyzing publicly available
information to generate actionable intelligence. Here's a general overview of how OSINT works:
1. Collection: OSINT collection involves gathering publicly available information from a variety
of sources such as social media, news articles, government reports, academic papers, and
commercial databases. This process can be done manually by searching for and reviewing
sources, or through automated tools that can search and aggregate information.
2. Processing: Once the information is collected, it is processed to remove duplicate, irrelevant or
inaccurate data. This step involves filtering and categorizing the information based on
relevance and importance.
3. Analysis: The processed information is then analyzed to identify trends, patterns, and
relationships. This can involve using data visualization tools, data mining, and natural language
processing to extract meaningful insights from the data.
4. Dissemination: The final step in the OSINT process is disseminating the intelligence to
decision-makers. This can be done in the form of reports, briefings, or alerts, depending on the
needs of the organization.
OSINT is an iterative process that involves constantly refining the collection, processing, and analysis
of information based on new data and feedback. Additionally, OSINT is subject to the same biases and
limitations as other forms of intelligence collection, and therefore requires careful evaluation and
interpretation by trained analysts.
Common OSINT techniques
Open-source intelligence (OSINT) encompasses a wide range of techniques for collecting and
analyzing publicly available information. Here are some common OSINT techniques:
1. Search Engines: Search engines such as Google, Bing, and Yahoo are valuable tools for
gathering OSINT. By using advanced search operators, analysts can quickly filter and refine
search results to find relevant information.
2. Social Media: Social media platforms such as Twitter, Facebook, and LinkedIn are valuable
sources of OSINT. By monitoring and analyzing social media activity, analysts can gain insight
into trends, sentiment, and potential threats.
3. Public Records: Public records such as court documents, property records, and business filings
are valuable sources of OSINT. By accessing these records, analysts can gather information on
individuals, organizations, and other entities.
4. News Sources: News sources such as newspapers, magazines, and online news outlets are
valuable sources of OSINT. By monitoring and analyzing news articles, analysts can gain
insight into current events, trends, and potential threats.
5. Web Scraping: Web scraping involves using software tools to extract data from websites. By
scraping data from multiple websites, analysts can gather large amounts of data quickly and
efficiently (a minimal sketch follows this list).
6. Data Analysis Tools: Data analysis tools such as Excel, Tableau, and R are valuable for
analyzing large datasets. By using these tools, analysts can identify patterns, trends, and
relationships in the data.
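As a minimal sketch of technique 5 above (web scraping), the following Python example fetches a page and pulls out its links; it assumes the third-party requests and beautifulsoup4 packages, and the URL is just a placeholder.

    # Minimal OSINT web-scraping sketch. The URL is a placeholder.
    import requests
    from bs4 import BeautifulSoup

    response = requests.get("https://example.com/news", timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")

    # Collect link text and targets for later processing and analysis
    for link in soup.find_all("a", href=True):
        print(link.get_text(strip=True), "->", link["href"])

In practice, output like this feeds the processing and analysis stages described earlier, and scraping should always respect a site's terms of service and robots.txt.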
OSINT techniques are constantly evolving as new technologies and sources of information become
available. It's important for analysts to stay up-to-date on new techniques and tools in order to
effectively gather and analyze OSINT.
Database Security
Database security must protect not only the data itself but also the systems that store and serve it, including:
• The database management system (DBMS).
• The physical database server or the virtual database server and the underlying hardware.
A breach of database security can be costly in several ways:
• Damage to brand reputation: Customers or partners might be unwilling to buy your products
or services (or do business with your company) if they don’t feel they can trust you to protect
your data or theirs.
• Business continuity (or lack thereof): Some businesses cannot continue to operate until a
breach is resolved.
• Fines or penalties for non-compliance: The financial impact of failing to comply with global
regulations such as the Sarbanes-Oxley Act (SOX) or the Payment Card Industry Data Security
Standard (PCI DSS), industry-specific data privacy regulations such as HIPAA, or regional
data privacy regulations, such as Europe’s General Data Protection Regulation (GDPR) can be
devastating, with fines in the worst cases exceeding several million dollars per violation.
Many software misconfigurations, vulnerabilities or patterns of carelessness or misuse can result in
breaches. The following are among the most common types or causes of database security attacks.
Insider threats
An insider threat is a security threat from any one of three sources with privileged access to the
database:
• A malicious insider who intends to do harm.
• A negligent insider who makes errors that make the database vulnerable to attack.
• An infiltrator, an outsider who somehow obtains credentials via a scheme, such as phishing or
by gaining access to the credential database itself.
Insider threats are among the most common causes of database security breaches and are often the
result of allowing too many employees to hold privileged user access credentials.
Human error
Accidents, weak passwords, password sharing and other unwise or uninformed user behaviors continue
to be the cause of nearly half (49%) of all reported data breaches.
Exploitation of database software vulnerabilities
Hackers make their living by finding and targeting vulnerabilities in all kinds of software, including
database management software. All major commercial database software vendors and open source
database management platforms issue regular security patches to address these vulnerabilities, but
failure to apply these patches in a timely fashion can increase your exposure.
SQL or NoSQL injection attacks
A database-specific threat, these involve the insertion of arbitrary SQL or NoSQL attack strings into database queries, typically via web application inputs or HTTP headers. Organizations that don’t follow
secure web application coding practices and perform regular vulnerability testing are open to these
attacks.
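To make the mechanism concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and input are hypothetical. The vulnerable pattern splices user input into the SQL text, while the safe pattern passes it as a bound parameter so the database never interprets it as SQL.

    # Sketch: SQL injection and its standard remedy (parameterized queries).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")

    user_input = "alice' OR '1'='1"  # a classic injection string

    # VULNERABLE: attacker-controlled input becomes part of the SQL itself
    query = "SELECT * FROM users WHERE name = '%s'" % user_input

    # SAFE: a parameterized query keeps the input as data, never as SQL
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()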
Buffer overflow exploitation
Buffer overflow occurs when a process attempts to write more data to a fixed-length block of memory
than it is allowed to hold. Attackers can use the excess data, which is stored in adjacent memory
addresses, as a foundation from which to start attacks.
Malware
Malware is software that is written specifically to take advantage of vulnerabilities or otherwise cause
damage to the database. Malware can arrive via any endpoint device connecting to the database’s
network.
Attacks on backups
Organizations that fail to protect backup data with the same stringent controls that are used to protect
the database itself can be vulnerable to attacks on backups.
These threats are exacerbated by the following:
• Growing data volumes: Data capture, storage and processing continue to grow exponentially
across nearly all organizations. Any data security tools or practices need to be highly scalable
to meet both near- and distant-future needs.
• Infrastructure sprawl: Network environments are becoming increasingly complex,
particularly as businesses move workloads to multicloud or hybrid cloud architectures, making
the choice, deployment and management of security solutions ever more challenging.
• Cybersecurity skills shortage: Experts predict there might be as many as 8 million unfilled
cybersecurity positions by 2022.
Denial of service (DoS and DDoS) attacks
In a denial of service (DoS) attack, the attacker deluges the target server—in this case the database
server—with so many requests that the server can no longer fulfill legitimate requests from actual
users, and, often, the server becomes unstable or crashes.
In a distributed denial of service attack (DDoS), the deluge comes from multiple servers, making it
more difficult to stop the attack.
Best practices
Because databases are network-accessible, any security threat to any component within or portion of
the network infrastructure is also a threat to the database, and any attack impacting a user’s device or
workstation can threaten the database. Thus, database security must extend far beyond the confines of
the database alone.
When evaluating database security in your environment to decide on your team’s top priorities,
consider each of the following areas:
• Physical security: Whether your database server is on-premises or in a cloud data center, it
must be located within a secure, climate-controlled environment. If your database server is in a
cloud data center, your cloud provider takes care of this for you.
• Administrative and network access controls: The practical minimum number of users should
have access to the database, and their permissions should be restricted to the minimum levels
necessary for them to do their jobs. Likewise, network access should be limited to the
minimum level of permissions necessary.
• User account and device security: Always be aware of who is accessing the database and
when and how the data is being used. Data monitoring solutions can alert you if data activities
are unusual or appear risky. All user devices connecting to the network housing the database
should be physically secure (in the hands of the right user only) and subject to security controls
at all times.
• Encryption: All data, including data in the database and credential data, should be protected
with best-in-class encryption while at rest and in transit. All encryption keys should be handled
in accordance with best practice guidelines.
• Database software security: Always use the latest version of your database management
software, and apply all patches when they are issued.
• Application and web server security: Any application or web server that interacts with the
database can be a channel for attack and should be subject to ongoing security testing and best
practice management.
• Backup security: All backups, copies or images of the database must be subject to the same
(or equally stringent) security controls as the database itself.
• Auditing: Record all logins to the database server and operating system, and log all operations
that are performed on sensitive data as well. Database security standard audits should be
performed regularly.
Controls and policies
In addition to implementing layered security controls across your entire network environment,
database security requires you to establish the correct controls and policies for access to the database
itself. These include:
• Administrative controls to govern installation, change and configuration management for the
database.
• Detective controls, such as database activity monitoring and data loss prevention tools.
These solutions make it possible to identify and alert on anomalous or suspicious activities.
Database security policies should be integrated with and support your overall business goals, such as protection of critical intellectual property, as well as your cybersecurity policies and cloud security policies.
Ensure that you have designated responsibility for maintaining and auditing security controls within
your organization and that your policies complement those of your cloud provider in shared
responsibility agreements. Security controls, security awareness training and education programs, and
penetration testing and vulnerability assessment strategies should all be established in support of your
formal security policies.
Data protection tools and platforms
Today, a wide array of vendors offer data protection tools and platforms. A full-scale solution should
include all of the following capabilities:
• Discovery: Look for a tool that can scan for and classify vulnerabilities across all your
databases—whether they’re hosted in the cloud or on-premises—and offer recommendations
for remediating any vulnerabilities that are identified. Discovery capabilities are often required
to conform to regulatory compliance mandates.
• Data activity monitoring: The solution should be able to monitor and audit all data activities
across all databases, regardless of whether your deployment is on-premises, in the cloud, or in
a container. It should alert you to suspicious activities in real-time so that you can respond to
threats more quickly. You’ll also want a solution that can enforce rules, policies and separation
of duties and that offers visibility into the status of your data through a comprehensive and
unified user interface. Make sure that any solution you choose can generate the reports you
need to meet compliance requirements.
• Encryption and tokenization capabilities: Upon a breach, encryption offers a final line of
defense against compromise. Any tool that you choose should include flexible encryption
capabilities that can safeguard data in on-premises, cloud, hybrid or multicloud environments.
Look for a tool with file, volume and application encryption capabilities that conform to your
industry’s compliance requirements, which might demand tokenization (data masking) or
advanced security key management capabilities.
• Data security optimization and risk analysis: A tool that can generate contextual insights by
combining data security information with advanced analytics will enable you to accomplish
optimization, risk analysis and reporting with ease. Choose a solution that can retain and
synthesize large quantities of historical and recent data about the status and security of your
databases, and look for one that offers data exploration, auditing and reporting capabilities
through a comprehensive but user-friendly self-service dashboard.
TLS is a cryptographic protocol that provides end-to-end security of data sent between applications
over the Internet. It is mostly familiar to users through its use in secure web browsing, and in particular
the padlock icon that appears in web browsers when a secure session is established. However, it can
and indeed should also be used for other applications such as e-mail, file transfers,
video/audioconferencing, instant messaging and voice-over-IP, as well as Internet services such as
DNS and NTP.
TLS evolved from the Secure Sockets Layer (SSL) protocol, which was originally developed by Netscape Communications Corporation in 1994 to secure web sessions. SSL 1.0 was never publicly released, whilst SSL 2.0 was quickly replaced by SSL 3.0, on which TLS is based.
TLS was first specified in RFC 2246 in 1999 as an application-independent protocol, and whilst it was not directly interoperable with SSL 3.0, it offered a fallback mode if necessary. However, SSL 3.0 is
now considered insecure and was deprecated by RFC 7568 in June 2015, with the recommendation
that TLS 1.2 should be used. TLS 1.3 is also currently (as of December 2015) under development and
will drop support for less secure algorithms.
It should be noted that TLS does not secure data on end systems. It simply ensures the secure delivery
of data over the Internet, avoiding possible eavesdropping and/or alteration of the content.
TLS is normally implemented on top of TCP in order to encrypt Application Layer protocols such as
HTTP, FTP, SMTP and IMAP, although it can also be implemented on UDP, DCCP and SCTP
(e.g. for VPN and SIP-based application uses). This is known as Datagram Transport Layer Security
(DTLS) and is specified in RFCs 6347, 5238 and 6083.
Without TLS, not only can sensitive information such as logins, credit card details and personal details easily be gleaned by others, but browsing habits, e-mail correspondence, online chats and conference calls can also be monitored. Enabling client and server applications to support TLS ensures that data
transmitted between them is encrypted with secure algorithms and not viewable by third parties.
Recent versions of all major web browsers currently support TLS, and it is increasingly common for
web servers to support TLS by default. However, use of TLS for e-mail and certain other applications
is still often not mandatory, and unlike with web browsers that provide visual clues, it is not always
apparent to users whether their connections are encrypted.
With symmetric cryptography, data is encrypted and decrypted with a secret key known to both sender
and recipient; typically 128 but preferably 256 bits in length (anything less than 80 bits is now
considered insecure). Symmetric cryptography is efficient in terms of computation, but having a
common secret key means it needs to be shared in a secure manner.
Asymmetric cryptography uses key pairs – a public key, and a private key. The public key is
mathematically related to the private key, but given sufficient key length, it is computationally
impractical to derive the private key from the public key. This allows the public key of the recipient to
be used by the sender to encrypt the data they wish to send to them, but that data can only be decrypted
with the private key of the recipient.
The advantage of asymmetric cryptography is that the process of sharing encryption keys does not
have to be secure, but the mathematical relationship between public and private keys means that much
larger key sizes are required. The recommended minimum key length is 1024 bits, with 2048 bits
preferred, but this is up to a thousand times more computationally intensive than symmetric keys of
equivalent strength (e.g. a 2048-bit asymmetric key is approximately equivalent to a 112-bit symmetric
key) and makes asymmetric encryption too slow for many purposes.
For this reason, TLS uses asymmetric cryptography for securely generating and exchanging a session
key. The session key is then used for encrypting the data transmitted by one party, and for decrypting
the data received at the other end. Once the session is over, the session key is discarded.
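The sketch below illustrates this hybrid pattern with the third-party pyca/cryptography package: an RSA key pair stands in for the server's key, and a Fernet key stands in for the session key. It is a greatly simplified model of the idea, not the actual TLS handshake.

    # Sketch of the hybrid pattern: asymmetric crypto moves the session key,
    # symmetric crypto protects the bulk data. Not a real TLS handshake.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Recipient's long-term 2048-bit RSA key pair
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: generate a fresh symmetric session key and encrypt it to the recipient
    session_key = Fernet.generate_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Sender: encrypt the bulk data cheaply with the session key
    ciphertext = Fernet(session_key).encrypt(b"application data")

    # Recipient: unwrap the session key with the private key, then decrypt
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    plaintext = Fernet(recovered_key).decrypt(ciphertext)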
A variety of different key generation and exchange methods can be used, including RSA, Diffie-
Hellman (DH), Ephemeral Diffie-Hellman (DHE), Elliptic Curve Diffie-Hellman (ECDH) and
Ephemeral Elliptic Curve Diffie-Hellman (ECDHE). DHE and ECDHE also offer forward secrecy
whereby a session key will not be compromised if one of the private keys is obtained in future,
although weak random number generation and/or usage of a limited range of prime numbers has been
postulated to allow the cracking of even 1024-bit DH keys given state-level computing resources.
However, these may be considered implementation rather than protocol issues, and there are tools
available to test for weaker cipher suites.
With TLS it is also desirable that a client connecting to a server is able to validate ownership of the
server’s public key. This is normally undertaken using an X.509 digital certificate issued by a trusted
third party known as a Certificate Authority (CA) which asserts the authenticity of the public key. In
some cases, a server may use a self-signed certificate which needs to be explicitly trusted by the client
(browsers should display a warning when an untrusted certificate is encountered), but this may be
acceptable in private networks and/or where secure certificate distribution is possible. It is highly
recommended though, to use certificates issued by publicly trusted CAs.
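In code, this validation is usually delegated to a TLS library. The sketch below uses Python's standard ssl module, which checks the server's certificate chain against the system's trusted roots and verifies the hostname; example.com is a placeholder host.

    # Sketch: validating a server certificate with Python's standard ssl module.
    import socket
    import ssl

    hostname = "example.com"  # placeholder
    context = ssl.create_default_context()  # loads trusted CA roots, checks hostname

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("Negotiated:", tls.version())
            cert = tls.getpeercert()
            print("Issued to:", cert.get("subject"))
            print("Issued by:", cert.get("issuer"))

If the chain does not lead to a trusted root, or the hostname does not match, the handshake fails with an ssl.SSLCertVerificationError rather than silently connecting.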
What is a CA?
A Certificate Authority (CA) is an entity that issues digital certificates conforming to the ITU-T’s
X.509 standard for Public Key Infrastructures (PKIs). Digital certificates certify the public key of the
owner of the certificate (known as the subject), and that the owner controls the domain being secured
by the certificate. A CA therefore acts as a trusted third party that gives clients (known as relying
parties) assurance they are connecting to a server operated by a validated entity.
End entity certificates are themselves validated through a chain-of-trust originating from a root
certificate, otherwise known as the trust anchor. With asymmetric cryptography it is possible to use the
private key of the root certificate to sign other certificates, which can then be validated using the public
key of the root certificate and therefore inherit the trust of the issuing CA. In practice, end entity
certificates are usually signed by one or more intermediate certificates (sometimes known as
subordinate or sub-CAs) as this protects the root certificate in the event that an end entity certificate is
incorrectly issued or compromised.
Root certificate trust is normally established through physical distribution of the root certificates in
operating systems or browsers. The main certification programs are run by Microsoft (Windows &
Windows Phone), Apple (OSX & iOS) and Mozilla (Firefox & Linux) and require CAs to conform to
stringent technical requirements and complete a WebTrust, ETSI EN 319 411-3 (formerly TS 102 042)
or ISO 21188:2006 audit in order to be included in their distributions. WebTrust is a programme developed by the American Institute of Certified Public Accountants and the Canadian Institute of Chartered Accountants, ETSI is the European Telecommunications Standards Institute, whilst ISO is the International Organization for Standardization.
Root certificates distributed with major operating systems and browsers are said to be publicly or globally trusted, and the technical and audit requirements essentially mean the issuing CAs are multinational corporations or governments. There are currently around fifty publicly trusted CAs,
although most/all have more than one root certificate, and most are also members of the CA/Browser
Forum which develops industry guidelines for issuing and managing certificates.
It is however also possible to establish private CAs and establish trust through secure distribution and
installation of root certificates on client systems. Examples include the RPKI CAs operated by the
Regional Internet Registries (AfriNIC, APNIC, ARIN, LACNIC and RIPE NCC) that issue certificates
to Local Internet Registries attesting to the IP addresses and AS numbers they hold; as well as the
International Grid Trust Federation (IGTF) which provides a trust anchor for issuing server and client
certificates used by machines in distributed scientific computing. In these cases, the root certificates
can be securely downloaded and installed from sites using a certificate issued by a publicly trusted CA.
One weakness with the X.509 PKI system is that third parties (CAs) are able to issue certificates for
any domain, whether or not the requesting entity actually owns or otherwise controls it. Validation is
typically performed through domain validation – namely sending an e-mail with an authentication link
to an address known to be administratively responsible for the domain. This is usually one of the
standard contact addresses such as ‘hostmaster@domain’ or the technical contact listed in a WHOIS
database, but this leaves itself open to man-in-the-middle attacks on the DNS or BGP protocols, or
more simply, users registering administrative addresses on domains that have not been reserved.
Perhaps more importantly, Domain Validated (DV) certificates do not assert that a domain has any
relationship with a legal entity, even though a domain may appear to have one.
For this reason, CAs are increasingly encouraging the use of Organisation Validated (OV) and
Extended Validation (EV) certificates. With OV certificates, the requesting entity is subject to
additional checks such as confirmation of organisation name, address and telephone number using
public databases. With EV certificates, there are additional checks on legal establishment, physical
location, and the identity of the individuals purporting to act on behalf of the requesting entity.
Of course, this still does not prevent CAs accidentally or fraudulently issuing incorrect certificates, and
there have also been incidents of security breaches where CAs were tricked into issuing fake
certificates. Despite substantial tightening up of security procedures in the wake of several high-profile
incidents, the system remains reliant on third party trust which has led to the development of the DNS-
based Authentication of Named Entities (DANE) protocol as specified in RFCs 6698, 7671, 7672 and
7673.
With DANE, a domain administrator can certify their public keys by storing them in the DNS, or
alternatively specifying which certificates should be accepted by a client. This requires the use of
DNSSEC which cryptographically asserts the validity of DNS records, although DNSSEC does not yet
have widespread deployment and major browsers currently require installation of an add-on in order to
support DANE. Moreover, DNSSEC and DANE will still require validation of domain holders that
will likely have to be undertaken by domain registries and/or registrars instead of CAs.
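For illustration, TLSA records (the DNS records DANE relies on) can be queried with the third-party dnspython package; the domain below is a placeholder, and real assurance additionally requires DNSSEC validation of the response.

    # Sketch: looking up a DANE TLSA record with dnspython.
    import dns.resolver

    # TLSA records are published at _port._protocol.hostname
    answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")
    for record in answers:
        # Fields: certificate usage, selector, matching type, association data
        print(record.usage, record.selector, record.mtype, record.cert.hex())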
Password Storage
This explanation was written assuming readers have some basic knowledge. I’ll provide links to more in-depth information as we go, but for the most part it will cover key concepts rather than fine detail.
Let’s have a look at a typical interaction with a web-based application we use to do something useful,
like share pictures of our favourite widgets. Let’s call it Widgetology.
Plaintext
In the beginning there is plaintext, the technical term for your unencrypted password (as opposed to
plain text, which is the stuff you’re reading right here). We come up with a super secret password no
one would ever guess — say like “MyP@ssword1” — and we enter it into the Widgetology web site as
we set up our account.
Next time we want to access our account we type in our username and password, there’s some magic
in the background as our password is verified, and all going well we have access to the Widgetology
app and our personal data.
Somehow the Widgetology app knows our account password and can compare it to the password we
enter and confirm we are who we say we are.
Fortunately it’s not because Widgetology has simply stored our plaintext password in a database, for
that would be extraordinarily insecure. If the Widgetology system is compromised and the database
stolen the attacker would now have full access to all accounts immediately. It would also mean that
Widgetology admin staff would have access to our passwords, and while I’m sure they are a
trustworthy bunch, you just never can know for sure.
Hashing
Instead, we use a process called hashing to obscure the plaintext password in storage, in a way that still lets the application verify the plaintext password you type when you log in.
Hashing is a one-way encryption of the password — with one-way simply meaning that once
encrypted the data cannot be decrypted. When you create an account or modify your password, this
encrypted password — called a hash — is stored in the Widgetology user database with your account
for future logins.
If the stored password cannot be decrypted, then how can we compare it to the password you enter?
The hashing process will always produce the same result given the same plaintext input. So we take the
password you enter, apply the hashing process, and compare the result to the hash we have stored in
the database. If they match, Widgetology knows you have entered the correct password.
If a hacker or rogue admin manages to steal the Widgetology user database, all they have is the hash
which cannot be used to log in, and cannot be decrypted to reveal your plaintext password.
For a long time this was how passwords were stored, and they still are in many systems including
Windows itself. Unfortunately there are a few flaws in the plan.
Hashing algorithms have evolved and improved over the years, with older versions not considered as
secure as they once were. It’s beyond the scope of this article, but typically the issue is that limitations in the algorithm (or in the computing capability available when it was invented) allow too many collisions, where different plaintexts can produce the same hash.
Technology isn’t really the problem here though.
Let’s look at the SHA-2 algorithm as an example (the SHA256 version specifically). SHA256 is a very
popular hashing algorithm and was and is extremely common in password management. The algorithm
itself is considered secure: it is computationally infeasible to reverse, so that’s not the issue. People
are the issue.
Let’s take our secure password above as an example:
Plaintext: “MyP@ssword1”
SHA256 Hash:
55BFFD094830B5D09311BB357C415D8D1323F8185EE2F0C1F94E96C3E2BDD1B5
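As a small sketch using Python's standard hashlib, this is all it takes to produce that hash and to verify a login attempt against it (the storage step is simplified to a variable):

    # Sketch: unsalted SHA-256 password hashing and verification.
    import hashlib

    def sha256_hex(password: str) -> str:
        return hashlib.sha256(password.encode("utf-8")).hexdigest().upper()

    stored_hash = sha256_hex("MyP@ssword1")  # saved at account creation

    # At login: hash the entered password and compare with the stored hash
    attempt = "MyP@ssword1"
    print("login ok" if sha256_hex(attempt) == stored_hash else "login failed")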
Well it turns out quite a few other Widgetology users had the same idea, and every one of these
“MyP@ssword1” passwords generates the same hash. So inside the user database there will be a
number of hashes that look just like mine. What’s worse, across a million other applications and a billion users, it turns out that all over the planet there are user databases full of the exact same hash.
A SHA256 hash cannot be reversed or “cracked”, but in many cases it just doesn’t need to be. An
attacker will simply use a list (called a rainbow table) of hashes to compare to the stolen data. The
rainbow table includes a massive list of common words, phrases, and all the usual symbol and number
substitutions we all think help keep us secure — and their corresponding hashes. The attacker will
simply look up my hash in the rainbow table, and they will have my password in seconds. My
Widgetology account has been “hacked”.
If I’ve been a little bit more obscure with my password choice and it doesn’t appear in a rainbow table
I will definitely be a little safer. But there’s still an issue.
The attacker has the hash list on their own PC, and can decipher the more complex passwords at their
leisure. This is done using automated brute-force attacks. A program will create hashes using words,
phrases, symbols and numbers and compare them to the stolen hash list. Brute-forcing can take a lot of
time, especially if your password is long and not just full of common words. It is helped along by two
factors however — first, SHA256 is a very fast algorithm, so a lot of hashes can be generated without a
lot of computing power, and second — the attacker has the hash list for every user from the database
and can compare each guess with the full list in milliseconds.
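A sketch of that offline attack, in Python with a toy wordlist and a hypothetical stolen hash list, shows why fast hashes are dangerous: each guess is hashed once and checked against every stolen hash in a single set lookup.

    # Sketch of an offline wordlist attack against unsalted SHA-256 hashes.
    import hashlib

    stolen_hashes = {  # hypothetical stolen database: hash -> username
        "55BFFD094830B5D09311BB357C415D8D1323F8185EE2F0C1F94E96C3E2BDD1B5": "user42",
    }
    wordlist = ["password", "letmein", "MyP@ssword1"]  # real lists hold billions

    for guess in wordlist:
        digest = hashlib.sha256(guess.encode("utf-8")).hexdigest().upper()
        if digest in stolen_hashes:
            print("cracked", stolen_hashes[digest] + ":", guess)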
As computing power improved and rainbow tables of known password hashes got larger and larger,
this brute force problem became a major issue. Storing passwords as simple hashes is now considered
insecure, although unfortunately it is still very common.
The solution is to make password hashes unique, even if the passwords are not. Pass the salt…
Salting
Like sprinkling salt on your dinner, adding a salt to a password hash adds some randomness and makes
each password hash unique.
Here’s how it works:
You enter your plaintext password as usual in the account creation process, and the Widgetology back
end takes over to create your account. This time, instead of just hashing your password, the system
creates a random string of characters — a salt — and adds it to your input. Your password becomes the
password + salt.
My password: MyP@ssword1
Salt: XElWz9WPwSLK3y0jUP6KhO
Salted password: MyP@ssword1XElWz9WPwSLK3y0jUP6KhO
The salted password is then hashed and stored in the user database. Using bcrypt, a common password-hashing algorithm that salts by design, it would look like this:
Salted Hash: $2y$10$XElWz9WPwSLK3y0jUP6KhOHepv.KF4zj6z4J3XXyYRye.VXnPsMA2
Where:
• $2y is the hash algorithm (Blowfish in this case)
• $10 is the cost (or complexity/time)
• XElWz9WPwSLK3y0jUP6KhO is the salt (always 22 characters)
• Hepv.KF4zj6z4J3XXyYRye.VXnPsMA2 is the hash of the password+salt (always 31
characters)
In addition to the salt, modern password hashing algorithms have been deliberately slowed down. It
may take a second or two longer to create a password hash using modern salting techniques, but we
don’t do it often and it is barely noticeable to us. To an attacker trying to create millions of hash
guesses however that extra time is considerable. This extra complexity and time is configurable, so
applications can make it even harder if required (at the cost of more computing power), and we can all
make it harder over time as computing power increases.
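A sketch with the third-party bcrypt package for Python shows the whole flow; rounds=10 mirrors the $10 cost factor in the example above (the Python library emits a $2b prefix rather than $2y, but the format is otherwise the same):

    # Sketch: salted, deliberately slow password hashing with bcrypt.
    import bcrypt

    password = b"MyP@ssword1"
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=10))
    print(hashed)  # e.g. b"$2b$10$<22-char salt><31-char hash>"

    # Verification re-hashes the attempt using the salt embedded in the stored value
    print(bcrypt.checkpw(b"MyP@ssword1", hashed))  # True
    print(bcrypt.checkpw(b"wrong guess", hashed))  # False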
You may have noticed we are storing the salt with the hash. This means an attacker who has stolen the database knows the random salt string we added to each password to generate the hash.
Turns out this isn’t very helpful for our attacker.
In theory the attacker could add the salt to a common password, generate the hash, then compare it to
our stored hash to see if it matches.
Common password + salt = hash?
It will work, but they have to do that for every password in their big list of common passwords with
my salt until they find a match. With the slow Bcrypt algorithm on our side, this could take a very very
long time.
What’s worse for the hacker is that every salt is unique, which means that every user who has come up
with the same password as me has a different hash. The attacker has to go through this common
password + salt = hash process individually for each user in the database. What can be done in seconds
with a SHA256 database may take thousands of years with a Bcrypt hash list.
Adding salt to a hash increases the difficulty of rainbow lookups and brute-forcing significantly, and is
now the minimum standard for password storage.
Problem solved. Almost.
Password Spraying
Stealing a user database isn’t the only way an attacker can hack your account, and it’s not the most
common.
The admins at Widgetology limit the number of times you can enter an incorrect password before your
account will be blocked. This prevents attackers from simply trying passwords continually until they
guess the right one.
However what is more difficult to defend against is an attacker using a single password against
multiple accounts — a technique called password spraying. If they know or can guess usernames, they
can over time simply try passwords against large numbers of accounts without being blocked.
A variation is to try your username/password combination across multiple sites. Your email address is
likely to be your username across many of the applications you use — so it’s only your password
protecting you against this technique.
Which leads us to the primary point of this article…
Don’t Re-Use Passwords Across Sites!
When we put all this together hopefully it’s become apparent that using the same password, and
especially the same username/password across multiple sites is a bad idea!
Many sites still use simple hashing for password storage and their databases are vulnerable to brute-
force attacks if stolen. Even salted passwords can be cracked; it’s just a matter of time (with password
complexity playing a big part).
As soon as a password database is leaked attackers all over the internet start brute-forcing the
passwords, and from there start using the username/password combinations to try logging in to other
sites.
Want to know if your own super-secret passwords are already out there? Have I been Pwned will tell
you. If you’ve been using your email address for more than a few years you will almost certainly be on
the list.
Other mistakes, mis-configurations and malicious insiders can cause plaintext passwords to leak.
Admins are human too.
In addition to avoiding re-use, use complex passwords that are machine generated whenever possible.
Use a password manager like 1Password, Bitwarden or LastPass to manage your passwords for you
and help make them complex and unique. (Note that LastPass has had some bad publicity lately after users’ password databases were stolen. It’s still far more secure than managing your own passwords, assuming you don’t use an easy-to-guess password on your LastPass account. 1Password uses a less convenient method of securing your password database that is immune to that type of breach: your password database cannot be decrypted without a key that is only stored locally with you.)
If you are creating your own passwords and need to remember them, use phrases rather than words to
make them longer, and use uncommon words or nonsense sentences. Length is the most effective way
to increase complexity.
And always turn on two-factor authentication when it is available — preferably using an authenticator
app rather than SMS.