Cyber Security and Networking Questions (2)
Types of PAN
Disadvantages of PAN
Applications of PAN
Advantages of a LAN
Disadvantages of LAN
Advantages of CAN
Advantages of MAN
Disadvantages of MAN
Advantages of WAN
Disadvantages of WAN
● Traffic congestion in a Wide Area Network is very high.
● The fault tolerance of a WAN is very low.
● Noise and errors are present in large amounts because of the many connection points.
● The data transfer rate is slow in comparison to a LAN because of the large distances and the high number of connected systems within the network.
Ownership: PAN, LAN, and CAN are private; MAN and WAN may be private or public.
Many houses have more than one computer. To interconnect those computers and other peripheral devices, a network similar to a local area network (LAN) should be established within the home. Such a network, which allows a user to interconnect multiple computers and other digital devices within the home, is referred to as a Home Area Network (HAN). A HAN encourages sharing of resources, files, and programs within the network. It supports both wired and wireless communication.
There are three new categories, four categories with naming and scoping
changes, and some consolidation in the Top 10 for 2021.
● A01:2021-Broken Access Control moves up from the fifth position; 94%
of applications were tested for some form of broken access control.
The 34 Common Weakness Enumerations (CWEs) mapped to Broken
Access Control had more occurrences in applications than any other
category.
● A02:2021-Cryptographic Failures shifts up one position to #2,
previously known as Sensitive Data Exposure, which was a broad
symptom rather than a root cause. The renewed focus here is on
failures related to cryptography which often leads to sensitive data
exposure or system compromise.
● A03:2021-Injection slides down to the third position. 94% of the
applications were tested for some form of injection, and the 33 CWEs
mapped into this category have the second most occurrences in
applications. Cross-site Scripting is now part of this category in this
edition.
● A04:2021-Insecure Design is a new category for 2021, with a focus on
risks related to design flaws. If we genuinely want to “move left” as an
industry, it calls for more use of threat modeling, secure design patterns
and principles, and reference architectures.
● A05:2021-Security Misconfiguration moves up from #6 in the previous
edition; 90% of applications were tested for some form of
misconfiguration. With more shifts into highly configurable software, it’s
not surprising to see this category move up. The former category for
XML External Entities (XXE) is now part of this category.
● A06:2021-Vulnerable and Outdated Components was previously titled
Using Components with Known Vulnerabilities and is #2 in the Top 10
community survey, but also had enough data to make the Top 10 via
data analysis. This category moves up from #9 in 2017 and is a known
issue that we struggle to test and assess risk. It is the only category not
to have any Common Vulnerability and Exposures (CVEs) mapped to the
included CWEs, so a default exploit and impact weights of 5.0 are
factored into their scores.
● A07:2021-Identification and Authentication Failures was previously
Broken Authentication and is sliding down from the second position,
and now includes CWEs that are more related to identification failures.
This category is still an integral part of the Top 10, but the increased
availability of standardised frameworks seems to be helping.
● A08:2021-Software and Data Integrity Failures is a new category for
2021, focusing on making assumptions related to software updates,
critical data, and CI/CD pipelines without verifying integrity. One of the
highest weighted impacts from Common Vulnerability and
Exposures/Common Vulnerability Scoring System (CVE/CVSS) data
mapped to the 10 CWEs in this category. Insecure Deserialization from
2017 is now a part of this larger category.
● A09:2021-Security Logging and Monitoring Failures was previously
Insufficient Logging & Monitoring and is added from the industry survey
(#3), moving up from #10 previously. This category is expanded to
include more types of failures, is challenging to test for, and isn’t well
represented in the CVE/CVSS data. However, failures in this category
can directly impact visibility, incident alerting, and forensics.
● A10:2021-Server-Side Request Forgery is added from the Top 10
community survey (#1). The data shows a relatively low incidence rate
with above average testing coverage, along with above-average ratings
for Exploit and Impact potential. This category represents the scenario
where the security community members are telling us this is important,
even though it’s not illustrated in the data at this time.
Internetwork
An internetwork is defined as two or more computer networks (LANs, WANs, or network segments) that are connected by intermediate devices and configured with a local addressing scheme. This practice is known as internetworking. There are two types of internetwork.
Conclusion
In conclusion, computer networks are essential components that connect
various computer devices in order to efficiently share data and resources.
PAN, LAN, CAN, MAN, and WAN networks serve a wide range of
applications and purposes, each with its own set of advantages and
drawbacks. Understanding these networks and their applications improves
connectivity, data exchange, and resource utilization in a variety of
applications from personal use to global communications.
The three data transmission modes are:
● Simplex
● Half-Duplex
● Full-Duplex
What is WAN-as-a-service?
In this article, we will discuss the IP addressing structure and types, such as
IPv4 and IPv6. We will understand how the different addresses work and
what is so special about them in Internet communications.
What is an IP Address?
An IP address stands for Internet Protocol address: a unique address that identifies a device on a network. It follows the set of rules governing the structure of data sent over the Internet or through a local network. An IP address helps the Internet to distinguish between different routers, computers, and websites. It serves as a specific machine identifier in a specific network and helps to establish communication between source and destination.
IP addresses play a crucial role in the transfer of data across networks, such
as the Internet. However, they themselves do not transfer data. Instead, they
function as unique identifiers that enable devices to locate and communicate
with each other in a network.
An IPv4 address of the form X1.X2.X3.X4 is divided into two parts (here shown for a typical /24 network):
1. [X1.X2.X3] is the Network ID
2. [X4] is the Host ID
Currently, there are two versions of IP addresses in use, i.e., IPv4 and IPv6.
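As a quick illustration, Python's standard ipaddress module can show this Network ID / Host ID split (a minimal sketch; the address 192.168.10.25/24 is a hypothetical example):

```python
import ipaddress

# Split a hypothetical /24 interface address into its network and host parts.
iface = ipaddress.ip_interface("192.168.10.25/24")

print(iface.network)            # 192.168.10.0/24  -> the Network ID part
print(iface.network.netmask)    # 255.255.255.0
print(iface.ip)                 # 192.168.10.25    -> the full address
print(int(iface.ip) & 0xFF)     # 25               -> the Host ID part for a /24
print(iface.ip.is_private)      # True (192.168.0.0/16 is a private range)
```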
Types of IP Addresses
There are four types of IP addresses: Public, Private, Static (Fixed), and Dynamic. Public and private addresses are determined by the device's network location: a private IP address is used within the local network, while a public IP address is used outside the local network, on the Internet.
Types of IP Addresses
Conclusion
IP addresses play an important role in communicating with devices over the internet and enable systems to communicate with one another. The structure of IP addresses helps in identifying devices on a network. Understanding the difference between the categories of IP addresses, public and private as well as static and dynamic, is important for most users of the internet and the web. As technology advances, the role of IP addresses will grow, so it will be important to understand them so that internet communication is not only effective but also secure.
Can someone detect my IP address? Hackers and others can often see your IP address when you’re online. However, you can make sure that the IP address they see isn’t traceable back to you most of the time by using a Virtual Private Network (VPN).
The MAC address is used by the Media Access Control (MAC) sublayer of the Data Link Layer. A MAC address is unique worldwide, since millions of network devices exist and each one must be identified uniquely.
The first 6 hex digits (say 00:40:96) of the MAC address identify the manufacturer and are called the OUI (Organizationally Unique Identifier). The IEEE Registration Authority Committee assigns these MAC prefixes to its registered vendors.
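As a small illustration (the MAC address below is a hypothetical example), the OUI is simply the first three bytes of the address:

```python
# Split a MAC address string into the manufacturer prefix (OUI) and the
# device-specific part.
mac = "00:40:96:9d:68:0a"

oui = ":".join(mac.split(":")[:3])        # first 3 bytes -> "00:40:96"
device_id = ":".join(mac.split(":")[3:])  # last 3 bytes  -> "9d:68:0a"

print("OUI (vendor):", oui)
print("Device ID   :", device_id)
```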
As the data travels from one router to the next, the MAC address
header is stripped off and a new one is generated for the next hop. However,
the IP header, which was generated by the original computer, remains intact
until it reaches the final destination. This process illustrates how the IP
header manages the “end to end” delivery, while the MAC headers handle
the “hop to hop” delivery.
So, Both IP and MAC addresses are essential for the functioning of the
Internet. While MAC addresses facilitate the direct, physical transfer of data
between network nodes, IP addresses ensure that the data reaches its final
destination.
The following steps help to find the MAC address on Windows using the command:
ipconfig /all
Step 1 – Press Windows Start or click on the Windows key.
Step 2 – In the search box, type cmd.
Step 3 – Click on cmd; the Command Prompt window will open.
Step 4 – In the Command Prompt, type the ipconfig /all command and then press Enter.
Step 5 – As you scroll down, each Physical Address shown is the MAC address of a network adapter on your device.
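Alternatively, the same information can be read programmatically; here is a minimal sketch using Python's standard uuid module, which reports the host's MAC as a 48-bit integer (or a random value if it cannot be determined):

```python
import uuid

# uuid.getnode() returns the hardware (MAC) address as a 48-bit integer.
node = uuid.getnode()

# Format it as the familiar colon-separated hex notation.
mac = ":".join(f"{node:012x}"[i:i + 2] for i in range(0, 12, 2))
print(mac)
```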
If the user wants to reconnect, the DHCP server checks whether the device has connected before. If so, the server tries to assign the same IP address (in case the lease period has not expired). If the user changed the router, the user has to inform the ISP about the new MAC address, because the new MAC address is unknown to the ISP and the connection cannot be established otherwise.
The other option is cloning: the user can simply clone the MAC address registered with the ISP. The router then keeps reporting the old MAC address to the ISP and there is no connection issue.
Network Topology
In Computer Networks, Network Topology is the arrangement of the
various elements of a communication network. Network Topology is a
topological structure of a network and may be depicted physically or
logically. In this article, we are going to discuss network topology and its
various types.
1. Mesh Topology
In Mesh Topology, every node has a dedicated point-to-point link to every other node. Such a network is called complete because for any two devices there is a dedicated link, and no additional non-redundant link can be added to the network.
Mesh Topology
● Full Mesh Topology : All the nodes within the network are connected with each other. If there are n nodes in the network, each node has n-1 connections (see the short sketch after this list).
● Partial Mesh Topology : A partial mesh is more practical than a full mesh. In a partially connected mesh, not all nodes need to be connected with one another.
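A short sketch of the link arithmetic behind a full mesh: each of the n nodes needs n - 1 links, and the network as a whole needs n(n - 1)/2 links.

```python
# Number of links required to build a full mesh of n nodes.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 5, 10):
    print(f"{n} nodes -> {n - 1} links per node, {full_mesh_links(n)} links in total")
```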
2. Star Topology
In a Star Topology, all the nodes (PCs, printers and peripherals) are
connected to the central server. It has a central connection point, like a hub or
switch. In star topology each device is connected with a central hub.
Star Topology
● Star networks can require more cable length than a linear topology.
● More expensive cabling.
● Performance is based on the single concentrator i.e. hub.
3. Bus Topology
In Bus Topology, all stations are attached to the same cable. In a bus network, messages are sent in both directions from a single point, and signals are broadcast to all stations. Each computer checks the address on the signal (data frame) as it passes along the bus. If the signal’s address matches that of the computer, the computer processes the signal; if not, the computer takes no action and the signal continues down the bus.
Bus Topology
4. Ring Topology
All the nodes in a Ring Topology are connected in a closed loop of cable. Messages that are transmitted travel around the ring until they reach the node they are addressed to, with the signal being refreshed by each node. In a ring network, every device has exactly two neighbours for communication purposes.
Ring Topology
The most common access method of ring topology is token passing.
● The failure of a single node in the network can cause the entire
network to fail.
● Troubleshooting is difficult in this topology.
● The addition of stations in between or the removal of stations can
disturb the whole topology.
● Less secure.
5. Tree Topology
In Tree Topology, nodes are connected in a hierarchical structure to form a tree. There is a root node, and the remaining nodes are considered child nodes; basically, it is a combination of star and bus topology. The central bus works as a communication pathway, and each star-configured network represents a level in the tree. In tree topology, a hierarchy is formed by branching cables with no loops that connect the root with all other nodes for communication.
Tree Topology
6. Hybrid Topology
Hybrid topology is the combination of two or more types of topology. It arises from the integration of multiple network topologies, which is why it is called a hybrid network topology.
Hybrid Topology
Daisy Chain
Conclusion
In conclusion, network topology defines the structured arrangement of
devices in a network, impacting data flow and connectivity. Key types include
Mesh, Star, Bus, Ring, Tree, Hybrid, Point-to-Point, and Daisy Chain
topologies. Selecting the appropriate topology depends on factors like
network size, scalability, cost, and reliability.
The OSI (Open Systems Interconnection) Model is a set of rules that explains
how different computer systems communicate over a network. The OSI
Model was developed by the International Organization for Standardization
(ISO). The OSI Model consists of 7 layers and each layer has specific
functions and responsibilities.
This layered approach makes it easier for different devices and technologies
to work together. OSI Model provides a clear structure for data transmission
and managing network issues. The OSI Model is widely used as a reference
to understand how network systems function.
In this article, we will discuss the OSI Model and each layer of the OSI Model
in detail. We will also discuss the flow of data in the OSI Model and how the
OSI Model is different from the TCP/IP Model.
OSI Model
● Physical Layer
● Data Link Layer
● Network Layer
● Transport Layer
● Session Layer
● Presentation Layer
● Application Layer
The packet received from the Network Layer is further divided into frames depending on the frame size of the NIC (Network Interface Card). The Data Link Layer also encapsulates the sender’s and receiver’s MAC addresses in the header.
At the receiver’s side, the Transport Layer reads the port number from its header and forwards the received data to the respective application. It also performs sequencing and reassembling of the segmented data.
● Connection-Oriented Service
● Connectionless Service
Example
Each layer adds specific information to ensure the data reaches its
destination correctly, and these steps are reversed upon arrival.
We can understand how data flows through the OSI Model with the help of
an example mentioned below.
Step 6: At the Data Link Layer, packets are encapsulated into frames, the MAC addresses of the local devices are added, and each frame is checked for errors using error detection.
After the email reaches the receiver, i.e., Person B, the process is reversed and the e-mail content is decrypted. At last, the email is shown in Person B’s email client.
Layer | Working | Protocols | Protocol Data Unit
1 – Physical Layer | Establishing physical connections between devices. | USB, SONET/SDH, etc. | Bits
3 – Network Layer | Transmission of data from one host to another, located in different networks. | IP, ICMP, IGMP, OSPF, etc. | Packets
Although the modern Internet does not strictly follow the OSI Model, the OSI Model is still very helpful for solving network problems. It helps people understand network concepts very easily.
Conclusion
In conclusion, the OSI (Open Systems Interconnection) model helps us
understand how data moves in networks. It consists of seven distinct layers:
Physical, Data Link, Network, Transport, Session, Presentation, and
Application. Each layer has specific responsibilities and interacts with the
layers directly above and below it. Since it is a conceptual model, the OSI
framework is still widely used to troubleshoot and understand networking
issues.
TCP Vs UDP –
1. Session Multiplexing:
A single host with a single IP address is able to communicate with multiple servers. While using TCP, a connection must first be established between the sender and the receiver, and the connection is closed when the transfer is completed. TCP also maintains reliability while the transfer is taking place. UDP, on the other hand, sends no acknowledgement of received packets, and therefore provides no reliability.
2. Segmentation:
Information sent is first broken into smaller chunks for transmission. The Maximum Transmission Unit (MTU) of Fast Ethernet is 1500 bytes, whereas the theoretical maximum TCP segment size is 65495 bytes. Therefore, data has to be broken into smaller chunks before being sent to the lower layers. The MSS (Maximum Segment Size) should be set small enough to avoid fragmentation. TCP supports MSS and Path MTU discovery, with which the sender and the receiver can automatically determine the maximum transmission capability. UDP doesn’t support this; therefore it depends on the higher-layer protocols for data segmentation.
3. Flow Control:
If the sender sends data faster than what the receiver can process
then the receiver will drop the data and then request for a
retransmission, leading to wastage of time and resources. TCP
provides end-to-end flow control which is realized using a sliding
window. The sliding window sends an acknowledgement from
receiver’s end regarding the data that the receiver can receive at a
time.
UDP doesn’t implement flow control and depends on the higher
layer protocols for the same.
4. Connection Oriented:
TCP is connection oriented, i.e., it creates a connection for the
transmission to take place, and once the transfer is over that
connection is terminated. UDP on the other hand is connectionless
just like IP (Internet Protocol).
5. Reliability:
TCP sends an acknowledgement when it receives a packet. It
requests a retransmission in case a packet is lost. UDP relies on the
higher layer protocols for the same.
6. Headers:
The minimum size of the TCP header is 20 bytes (16 bits for the source port, 16 bits for the destination port, 32 bits for the sequence number, 32 bits for the acknowledgement number, 4 bits for the header length, plus other fields); with options it can grow up to 60 bytes.
The size of the UDP header is 8 bytes (16 bits for the source port, 16 bits for the destination port, 16 bits for length, 16 bits for checksum); it is significantly smaller than the TCP header.
Both the UDP and TCP headers contain a 16-bit source port field (used to identify the port number of the source) and a 16-bit destination port field (used to specify the destination application).
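To make the header sizes concrete, here is a minimal sketch that packs the four 16-bit fields of a UDP header with Python's standard struct module (the port numbers are arbitrary examples and the checksum is left as zero):

```python
import struct

# Build the 8-byte UDP header: source port, destination port, length, checksum,
# each a 16-bit big-endian ("network byte order") field.
payload = b"hello"
src_port, dst_port = 40000, 53
length = 8 + len(payload)                # header (8 bytes) + data
header = struct.pack("!HHHH", src_port, dst_port, length, 0)

print(len(header), "byte UDP header:", header.hex())
```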
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are both Transport Layer protocols of the Internet Protocol suite. TCP is a connection-oriented protocol, whereas UDP (sometimes referred to as part of the UDP/IP suite) is an unreliable, connectionless protocol. In this article, we will discuss the differences between TCP and UDP.
In this article, we will discuss the differences between TCP and UDP.
Features of TCP
● TCP keeps track of the segments being transmitted or received by
assigning numbers to every single one of them.
● Flow control limits the rate at which a sender transfers data. This is
done to ensure reliable delivery.
● TCP implements an error control mechanism for reliable data
transfer.
● TCP takes into account the level of congestion in the network.
Applications of TCP
● World Wide Web (WWW) : When you browse websites, TCP
ensures reliable data transfer between your browser and web
servers.
● Email : TCP is used for sending and receiving emails. Protocols like
SMTP (Simple Mail Transfer Protocol) handle email delivery across
servers.
● File Transfer Protocol (FTP) : FTP relies on TCP to transfer large files
securely. Whether you’re uploading or downloading files, TCP
ensures data integrity.
● Secure Shell (SSH) : SSH sessions, commonly used for remote
administration, rely on TCP for encrypted communication between
client and server.
● Streaming Media : Services like Netflix, YouTube, and Spotify use
TCP to stream videos and music. It ensures smooth playback by
managing data segments and retransmissions.
Advantages of TCP
● It is reliable for maintaining a connection between Sender and
Receiver.
● It is responsible for sending data in a particular sequence.
● Its operations are not dependent on Operating System .
● It allows and supports many routing protocols.
● It can reduce the speed of data based on the speed of the receiver.
Disadvantages of TCP
● It is slower than UDP and it takes more bandwidth.
● Slower upon starting of transfer of a file.
● Not suitable for LAN and PAN Networks.
● It does not have a multicast or broadcast category.
● It does not load the whole page if a single data of the page is
missing.
Features of UDP
● Used for simple request-response communication when the size of
data is less and hence there is lesser concern about flow and error
control.
● It is a suitable protocol for multicasting as UDP supports packet
switching .
● UDP is used for some routing update protocols like RIP(Routing
Information Protocol) .
● Normally used for real-time applications which can not tolerate
uneven delays between sections of a received message.
Application of UDP
● Real-Time Multimedia Streaming : UDP is ideal for streaming audio
and video content. Its low-latency nature ensures smooth playback,
even if occasional data loss occurs.
● Online Gaming : Many online games rely on UDP for fast
communication between players.
● DNS (Domain Name System) Queries : When your device looks up
domain names (like converting “www.example.com” to an IP
address), UDP handles these requests efficiently .
● Network Monitoring : Tools that monitor network performance often
use UDP for lightweight, rapid data exchange.
● Multicasting : UDP supports packet switching, making it suitable for
multicasting scenarios where data needs to be sent to multiple
recipients simultaneously.
● Routing Update Protocols : Some routing protocols, like RIP
(Routing Information Protocol), utilize UDP for exchanging routing
information among routers.
Advantages of UDP
● It does not require any connection for sending or receiving data.
● Broadcast and Multicast are available in UDP.
● UDP can operate on a large range of networks.
● UDP supports live and real-time data transmission.
● UDP can deliver data even if some components of the data are missing or incomplete.
Disadvantages of UDP
● There is no way to acknowledge the successful transfer of data.
● UDP has no mechanism to track the sequence of data.
● UDP is connectionless, and because of this it is unreliable for transferring data.
● In case of congestion or collision, routers drop UDP packets more readily than TCP packets.
● UDP can drop packets when errors are detected.
● Sending Emails
● Transferring Files
● Web Browsing
● Gaming
● Video Streaming
● Online Video Chats
Basis | TCP | UDP
Type of Service | TCP is a connection-oriented protocol. Connection orientation means that the communicating devices should establish a connection before transmitting data and should close the connection after transmitting the data. | UDP is a datagram-oriented protocol. This is because there is no overhead for opening a connection, maintaining a connection, or terminating a connection. UDP is efficient for broadcast and multicast types of network transmission.
Speed | TCP is comparatively slower than UDP. | UDP is faster, simpler, and more efficient than TCP.
Protocols | TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet. | UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.
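The practical difference shows up directly in the socket API. Below is a minimal sketch contrasting the two; the loopback address and port numbers are placeholders, and the servers are assumed to already be listening.

```python
import socket

# TCP: a connection must be established before data is exchanged.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 9000))     # three-way handshake happens here
tcp.sendall(b"hello over TCP")
tcp.close()                          # connection is torn down

# UDP: connectionless - datagrams are simply addressed and sent.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("127.0.0.1", 9001))   # no handshake, no delivery guarantee
udp.close()
```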
Introduction to TELNET
What is Telnet?
TELNET is a protocol that enables a user on one computer to connect to a remote computer. It is a standard TCP/IP protocol for the virtual terminal service proposed by ISO. The computer which starts the connection is known as the local computer. The computer which is being connected to, i.e., which accepts the connection, is known as the remote computer. During a telnet session, whatever is performed on the remote computer is displayed on the local computer. Telnet operates on a client/server principle.
History of TELNET
The Telnet protocol originated in the late 1960s; it was created to provide remote terminal access and control over mainframes and minicomputers. Initially, it was designed to be a simple method of connecting to a remote system. This protocol allowed users to access remote computers using a terminal or command-line interface. Over time, Telnet’s use has diminished due to security concerns, and alternatives like SSH are now preferred for secure remote management.
Logging in TELNET
The logging process can be further categorised into two parts:
● Local Login
● Remote Login
1. Local Login
Whenever a user logs into its local system, it is known as local login.
Local Login
● Keystrokes are accepted by the terminal driver when the user types
at the terminal.
● Terminal Driver passes these characters to the OS.
● Now, the OS validates the combination of characters and opens the
required application.
2. Remote Login
Remote Login is a process in which a user can log in to a remote site, i.e., a remote computer, and use the services available on that computer. With the help of remote login, the results of processing on the remote computer are transferred to and displayed on the local computer.
● When the user types something on the local computer, the local
operating system accepts the character.
● The local computer does not interpret the characters, it will send
them to the TELNET client.
● TELNET client transforms these characters to a universal character
set called Network Virtual Terminal (NVT) characters and it will
pass them to the local TCP/IP protocol Stack.
● Commands or text which are in the form of NVT, travel through the
Internet and it will arrive at the TCP/IP stack at the remote computer.
● Characters are then delivered to the operating system and later on
passed to the TELNET server.
● Then the TELNET server changes those characters to characters
that can be understandable by a remote computer.
● The remote operating system receives characters from a
pseudo-terminal driver, which is a piece of software that pretends
that characters are coming from a terminal.
● The operating system then passes the character to the appropriate
application program.
TELNET Commands
Commands of Telnet are identified by a prefix character, Interpret As
Command (IAC) with code 255. IAC is followed by command and option
codes. The basic format of the command is as shown in the following figure :
Command | Code | Binary | Meaning
WILL | 251 | 11111011 | 1. Offering to enable. 2. Accepting a request to enable.
WON’T | 252 | 11111100 | 1. Rejecting a request to enable. 2. Offering to disable. 3. Accepting a request to disable.
DO | 253 | 11111101 | 1. Approving a request to enable. 2. Requesting to enable.
DON’T | 254 | 11111110 | 1. Disapproving a request to enable. 2. Approving an offer to disable. 3. Requesting to disable.

Option | Code | Meaning
Suppress go ahead | 3 | It suppresses the go-ahead signal after data.
Terminal speed | 32 | It sets the terminal speed.
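As a small illustration of the negotiation format, the sketch below builds the raw bytes for two of the commands above: IAC followed by a command code and an option code.

```python
# TELNET option negotiation as raw bytes: IAC (255) + command + option.
IAC, WILL, DO = 255, 251, 253
SUPPRESS_GO_AHEAD = 3

do_sga = bytes([IAC, DO, SUPPRESS_GO_AHEAD])      # "please enable suppress-go-ahead"
will_sga = bytes([IAC, WILL, SUPPRESS_GO_AHEAD])  # "I will enable suppress-go-ahead"

print(do_sga.hex())    # fffd03
print(will_sga.hex())  # fffb03
```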
Uses of TELNET
● Remote Administration and Management
● Network Diagnostics
● Understanding Command-Line Interfaces
● Accessing Bulletin Board Systems (BBS)
● Automation and Scripting
Advantages of TELNET
● It provides remote access to another computer system.
● Telnet allows the user more access with fewer problems in data transmission.
● Telnet saves a lot of time.
● An old system can be connected to a newer system through telnet, even if the two run different operating systems.
Disadvantages of TELNET
● As it is somewhat complex, it can be difficult for beginners to understand.
● Data is sent in the form of plain text, which is why it is not secure.
● Some capabilities are disabled because the remote and local devices are not properly interlinked.
Modes of Operation
● Default Mode: If no other modes are invoked then this mode is used.
Echoing is performed in this mode by the client. In this mode, the
user types a character and the client echoes the character on the
screen but it does not send it until the whole line is completed.
● Character Mode: Each character typed in this mode is sent by the
client to the server. A server in this type of mode normally echoes
characters back to be displayed on the client’s screen.
● Line Mode: Line editing like echoing, character erasing, etc. is done
from the client side. The client will send the whole line to the server.
Conclusion
Telnet is a client/server application protocol that allows remote access to
virtual terminals via local area networks or the internet. Telnet’s use has
decreased due to security concerns, with protocols such as SSH chosen for
safe remote management. Telnet is still useful for remote administration,
network diagnostics, instructional purposes, and interacting with legacy
systems.
● Public key – Everyone can see it, no need to protect it. (for
encryption function).
● Private key – Stays in computer, must be protected. (for decryption
function).
● User Key – If the public key and private key remain with the user.
● Host Key – If the public key and private key are on a remote system.
● Session key – Used when a large amount of data is to be
transmitted.
Features of SSH
● Encryption: Encrypted data is exchanged between the server and
client, which ensures confidentiality and prevents unauthorised
attacks on the system.
● Authentication: For authentication, SSH uses public and private key
pairs which provide more security than traditional password
authentication.
● Data Integrity: SSH provides Data Integrity of the message
exchanged during the communication.
● Tunnelling: Through SSH we can create secure tunnels for
forwarding network connections over encrypted channels.
SSH Functions
SSH performs multiple functions; some of them are listed below:
● SSH provides high security as it encrypts all messages of
communication between client and server.
● SSH provides confidentiality.
● SSH allows remote login, hence is a better alternative to TELNET.
● SSH provides a secure File Transfer Protocol, which means we can
transfer files over the Internet securely.
● SSH supports tunnelling which provides more secure connection
communication.
SSH Protocol
To provide security between a client and a server the SSH protocol uses
encryption. All user authentication and file transfers are encrypted to protect
the network against attacks.
SSH Protocol
Symmetric Cryptography
Hashing
Commands in SSH
The SSH protocol supports multiple commands. The steps below describe how key-based authentication establishes a connection.
● Public keys from the local computers (system) are passed to the
server which is to be accessed.
● The server then identifies if the public key is registered.
● If so, the server then creates a new secret key and encrypts it with
the public key which was sent to it via local computer.
● This encrypted code is sent to the local computer.
● This data is unlocked by the private key of the system and is sent to
the server.
● The server after receiving this data verifies the local computer.
● SSH creates a route and all the encrypted data is transferred
through it with no security issues.
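A minimal sketch of what this key-based login looks like from a client program, using the third-party paramiko library; the host name, user name, and key path are hypothetical placeholders.

```python
import paramiko

client = paramiko.SSHClient()
# Demo only: in production, verify host keys instead of auto-adding them.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Authenticate with the private key; the matching public key must already be
# registered on the server (e.g. in ~/.ssh/authorized_keys).
client.connect("server.example.com", username="alice",
               key_filename="/home/alice/.ssh/id_ed25519")

stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```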
Conclusion
SSH keys are a necessary component for securing connections between your
computer and remote servers. By using a pair of cryptographic keys, one
public and one private you can authenticate yourself without sending
passwords over the network, making your connections much safer. This
method not only simplifies the login process but also enhances security by
protecting sensitive data from potential threats. Understanding how SSH
keys work is crucial for anyone looking to manage servers or transfer files
securely over the internet. With the growing importance of cybersecurity,
utilising SSH keys is a simple yet effective way to secure your online
activities.
SSH2 employs host keys for system authentication, whereas SSH1 encrypts distinct portions of the packets and uses both server and host keys. SSH2 uses different networking technology than SSH1 and is a total redesign of the protocol. SSH2 is also more secure.
What is port forwarding in SSH?
To install the OpenSSH server application, and related support files, use this
command at a terminal prompt:
$sudo apt-get update
$sudo apt install openssh-server
● Telnet is famous for being the original Internet when the Net first launched
in 1969 and was built to be a form of remote control to manage mainframe
computers from distant terminals. In those original days of large mainframe
computers, telnet enabled research students and professors to ‘log in’ to the
university mainframe from any terminal in the building.
● This remote login saved researchers hours of walking each semester. While
telnet pales in comparison to modern networking technology, it was
revolutionary in 1969, and telnet helped pave the way for the eventual
World Wide Web in 1989. While telnet technology is very old, it is still in
some use today by purists.
● Telnet is not a secure communication protocol because it does not use any security mechanism and transfers data over the network/internet in plain-text form, including passwords, so anyone can sniff the packets to obtain that important information.
● There are no authentication policies or data encryption techniques used in telnet, causing huge security threats; that is why telnet is no longer used for accessing network devices and servers over public networks.
If you would like to change its port, you’ll need to edit /etc/services, which contains the line:
telnet 23/tcp
Right now these might look like tedious and wacky terminal commands, but if you try to run them once in your terminal, you’ll find them extremely easy!
Just like SSH, Telnet also appears to be just a dull and boring terminal screen, but it comes with some remarkable features.
Basis | SSH | Telnet
Default Port | 22 | 23
Data Encryption | Data is encrypted, providing secure communication over the network. | No encryption; data is transmitted in plain text.
If you want to connect your Windows PC with a Linux PC then you need
software called ‘PuTTY’.
Why is SSH more secure than Telnet? SSH encrypts all the text passed between the client and the server, including passwords, whereas Telnet sends all commands and responses in plain text that is easily intercepted.
Is SSH compatible with Telnet? An SSH client is not compatible with Telnet; however, some SSH clients can mimic Telnet in order to connect to Telnet-only servers. This does not bring encryption or security to the link.
When should I use Telnet instead of SSH? Telnet may be used in trusted, secure networks where encryption is not essential, or for diagnostics on old systems where SSH is unavailable. Even then, SSH is usually preferable.
For instance, imagine your MAC address or IP address as the PIN code of the nearest post office and your house address as a port. Whenever a parcel is sent to you, it is received by the nearest post office and then your house address identifies where to deliver that parcel. Similarly, in a computer, data is first received using the IP or MAC address and then delivered to the application whose port number accompanies the data packets.
A port is a logical address, a 16-bit unsigned integer, that is allotted to every application on the computer that uses the internet to send or receive data. Every time an application sends data, the data is identified by the port from which it was sent, and it is delivered to the receiving application according to its port. We often call a port a port number.
In the OSI Model ports are used in the Transport layer. In the headers of
Transport layer protocols like TCP and UDP, we have a section to define
port(port number). The network layer has to do nothing with ports, their
protocols only care about IP Addresses.
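A minimal sketch of ports in practice, using Python's standard socket module; the well-known name lookups come from the local services database, so output can vary slightly by system.

```python
import socket

# Map well-known port numbers to the service names registered for them.
print(socket.getservbyport(22, "tcp"))   # 'ssh'
print(socket.getservbyport(53, "udp"))   # 'domain' (DNS)

# Bind a socket to a local port; port 0 lets the OS pick a free one.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
print("OS assigned port:", server.getsockname()[1])
server.close()
```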
Types of Ports
Ports are divided into three categories:
● Well-Known Ports (0–1023)
● Registered Ports (1024–49151)
● Dynamic or Private Ports (49152–65535)
Importance of Ports
Ports have many significant uses. Some of them are:
23 Telnet
7 Echo
22 SSH(Secure Shell)
3306 MySQL
5432 PostgreSQL
27017 MongoDB
These are some common/popular port numbers used by applications and services that are frequently used by us.
FAQs on Ports in Networking
Components of DHCP
The main components of DHCP include:
● DHCP Server: DHCP Server is a server that holds IP Addresses and other
information related to configuration.
● DHCP Client: It is a device that receives configuration information from the
server. It can be a mobile, laptop, computer, or any other electronic device
that requires a connection.
● DHCP Relay: DHCP relays basically work as a communication channel
between DHCP Client and Server.
● IP Address Pool: It is the pool or container of IP Addresses possessed by the
DHCP Server. It has a range of addresses that can be allocated to devices.
● Subnets: Subnets are smaller portions of the IP network partitioned to keep
networks under control.
● Lease: It is the length of time for which the configuration information received from the server is valid. When the lease expires, the client must request a renewal of the lease.
● DNS Servers: DHCP servers can also provide DNS (Domain Name System)
server information to DHCP clients, allowing them to resolve domain names
to IP addresses.
● Default Gateway: DHCP servers can also provide information about the
default gateway, which is the device that packets are sent to when the
destination is outside the local network.
● Options: DHCP servers can provide additional configuration options to
clients, such as the subnet mask, domain name, and time server information.
● Renewal: DHCP clients can request to renew their lease before it expires to
ensure that they continue to have a valid IP address and configuration
information.
● Failover: DHCP servers can be configured for failover, where two servers
work together to provide redundancy and ensure that clients can always
obtain an IP address and configuration information, even if one server goes
down.
● Dynamic Updates: DHCP servers can also be configured to dynamically
update DNS records with the IP address of DHCP clients, allowing for easier
management of network resources.
● Audit Logging: DHCP servers can keep audit logs of all DHCP transactions,
providing administrators with visibility into which devices are using which IP
addresses and when leases are being assigned or renewed.
Working of DHCP
DHCP works at the Application layer and uses the UDP protocol. The main task of DHCP is to dynamically assign IP addresses to clients and to allocate TCP/IP configuration information to clients. For more, you can refer to the article Working of DHCP.
The DHCP port number for the server is 67 and for the client is 68. It is a client-server protocol that uses UDP services. An IP address is assigned from a pool of addresses. In DHCP, the client and the server mainly exchange 4 DHCP messages in order to make a connection, also called the DORA (Discover, Offer, Request, Acknowledge) process, but there are 8 DHCP messages in the process overall.
Working of DHCP
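For illustration, the sketch below builds the first DORA message (a DHCP Discover) with the third-party scapy library. The MAC address and interface name are hypothetical placeholders, and this is only a rough sketch of the on-the-wire format, not a full DHCP client.

```python
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, sendp

discover = (
    Ether(dst="ff:ff:ff:ff:ff:ff") /                 # broadcast at layer 2
    IP(src="0.0.0.0", dst="255.255.255.255") /       # client has no IP address yet
    UDP(sport=68, dport=67) /                        # client port 68 -> server port 67
    BOOTP(chaddr=bytes.fromhex("001122334455")) /    # client hardware (MAC) address
    DHCP(options=[("message-type", "discover"), "end"])
)

# sendp(discover, iface="eth0")  # actually sending requires root privileges
discover.show()                  # print the layered packet instead
```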
2. DHCP Offer Message: The server responds to the host with this message, specifying an unleased IP address and other TCP/IP configuration information. This message is broadcast by the server. The size of the message is 342 bytes. If there is more than one DHCP server present in the network, the client host accepts the first DHCP OFFER message it receives. A server ID is also specified in the packet in order to identify the server.
Note – This message is broadcast after the ARP request broadcast by the PC, which checks whether any other host is already using the offered IP. If there is no reply, the client host broadcasts the DHCP Request message to the server, showing acceptance of the IP address and the other TCP/IP configuration.
7. DHCP Release: A DHCP client sends a DHCP release packet to the server
to release the IP address and cancel any remaining lease time.
Note – All the messages can be unicast also by the DHCP relay agent if the
server is present in a different network.
Disadvantages
● IP conflict can occur.
● The problem with DHCP is that clients accept any server.
Accordingly, when another server is in the vicinity, the client may
connect with this server, and this server may possibly send invalid
data to the client.
● The client is not able to access the network in absence of a DHCP
Server.
● The name of the machine will not be changed in a case when a new
IP Address is assigned.
Conclusion
In conclusion, DHCP is a technology that simplifies network setup by
automatically assigning IP addresses and network configurations to devices.
While DHCP offers convenience, it’s important to manage its security
carefully. Issues such as IP address exhaustion, and potential data access
through DNS settings highlight the need for robust security measures like
firewalls and VPNs to protect networks from unauthorized access and
disruptions. DHCP remains essential for efficiently managing network
connections while ensuring security against potential risks.
SMTP Protocol
The SMTP model is of two types:
● End-to-End Method
● Store-and-Forward Method
SMTP Model
Components of SMTP
● Mail User Agent (MUA): It is a computer application that helps you
in sending and retrieving mail. It is responsible for creating email
messages for transfer to the mail transfer agent(MTA).
● Mail Submission Agent (MSA): It is a computer program that
receives mail from a Mail User Agent(MUA) and interacts with the
Mail Transfer Agent(MTA) for the transfer of the mail.
● Mail Transfer Agent (MTA): It is software that has the work to
transfer mail from one system to another with the help of SMTP.
● Mail Delivery Agent (MDA): A mail Delivery agent or Local Delivery
Agent is basically a system that helps in the delivery of mail to the
local system.
It provides the
HELO<SP><dom identification of
1. HELO Mandatory
ain><CRLF> the sender i.e.
the host name.
MAIL<SP>FROM
It specifies the
:
2. MAIL originator of Mandatory
<reverse-path><
the mail.
CRLF>
RCPT<SP>TO : It specifies the
3. RCPT <forward-path> recipient of Mandatory
<CRLF> mail.
It specifies the
4. DATA DATA<CRLF> beginning of Mandatory
the mail.
It closes the
5. QUIT QUIT<CRLF> TCP Mandatory
connection.
It aborts the
current mail
Highly
transaction but
6. RSET RSET<CRLF> recommende
the TCP
d
connection
remains open.
It is use to
Highly
VRFY<SP><strin confirm or
7. VRFY recommende
g><CRLF> verify the user
d
name.
Highly
8. NOOP NOOP<CRLF> No operation recommende
d
It reverses the
9. TURN TURN<CRLF> role of sender Seldom used
and receiver.
It specifies the
EXPN<SP><strin
10. EXPN mailing list to Seldom used
g><CRLF>
be expanded.
HELP<SP><strin
11. HELP It send some Seldom used
g><CRLF>
specific
documentation
to the system.
SEND<SP>FRO
M: It send mail to
12. SEND Seldom used
<reverse-path>< the terminal.
CRLF>
It send mail to
SOML<SP>FRO
the terminal if
M:
13. SOML possible; Seldom used
<reverse-path><
otherwise to
CRLF>
mailbox.
SAML<SP>FRO
It send mail to
M:
14. SAML the terminal Seldom used
<reverse-path><
and mailbox.
CRLF>
SMTP | Extended SMTP (ESMTP)
We cannot reduce the size of the email in SMTP. | We can reduce the size of the email in Extended SMTP.
SMTP clients open transmission with the command HELO. | The main identification feature of ESMTP clients is that they open a transmission with the command EHLO (Extended HELLO).
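For illustration, the sketch below sends a message with Python's standard smtplib, which issues the EHLO/HELO, MAIL FROM, RCPT TO, DATA, and QUIT commands described above on your behalf. The server name and addresses are hypothetical, and a real server would typically also require STARTTLS and authentication.

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test over SMTP"
msg.set_content("Hello from smtplib.")

with smtplib.SMTP("smtp.example.com", 25) as server:
    server.ehlo()            # ESMTP greeting (library falls back to HELO if needed)
    server.send_message(msg) # issues MAIL FROM, RCPT TO, DATA; QUIT on exit
```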
Advantages of SMTP
● If necessary, the users can have a dedicated server.
● It allows for bulk mailing.
● Low cost and wide coverage area.
● Offer choices for email tracking.
● Reliable and prompt email delivery.
Disadvantages of SMTP
● SMTP’s common port can be blocked by several firewalls.
● SMTP security is a bigger problem.
● Its simplicity restricts how useful it can be.
● Just 7-bit ASCII characters can be used.
● If a message is longer than a certain length, SMTP servers may
reject the entire message.
● Delivering your message will typically involve additional
back-and-forth processing between servers, which will delay
sending and raise the likelihood that it won’t be sent.
Used at receiver side | Not used at receiver side. | Used at receiver side.
Conclusion
SMTP is a fundamental part of email communication that allows messages to
be reliably transmitted between email servers. Despite its drawbacks, such
as security problems and the possibility of spam, SMTP is still widely used
due to its simplicity, efficiency, and broad support across various email
systems. Enhancements such as encryption and authentication may solve
some of its security issues, making it an appropriate choice for email delivery
in a variety of applications.
The default port for Simple Mail Transfer Protocol is port 25.
What is SMTP Relay? SMTP Relay can be basically defined as the process
of transferring emails from one server to another server.
Describe some common issues in SMTP Email Delivery.
Since IP does not have an inbuilt mechanism for sending error and control messages, it depends on the Internet Control Message Protocol (ICMP) to provide error control. In this article, we are going to discuss ICMP in detail along with its uses, messages, etc.
What is ICMP?
ICMP is used for reporting errors and management queries. It is a
supporting protocol and is used by network devices like routers for sending
error messages and operations information. For example, the requested
service is not available or a host or router could not be reached.
Uses of ICMP
ICMP is used for error reporting: if two devices communicate over the internet and some error occurs, the router sends an ICMP error message to the source informing it about the error. For example, whenever a device sends a message that is too large for the receiver, the receiver drops the message and replies with an ICMP message to the source.
Traceroute: The traceroute utility is used to discover the route between two devices connected over the internet. It traces the journey from one router to the next, and a traceroute is often performed to check for network issues before data transfer.
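As an illustration of an ICMP Echo exchange (the basis of ping), here is a minimal sketch using the third-party scapy library; 8.8.8.8 is just a well-known example destination, and sending raw packets requires root privileges.

```python
from scapy.all import IP, ICMP, sr1

# Send one ICMP Echo Request and wait up to 2 seconds for the Echo Reply.
reply = sr1(IP(dst="8.8.8.8") / ICMP(), timeout=2, verbose=False)

if reply is not None:
    print("Got ICMP type", reply[ICMP].type, "from", reply.src)  # type 0 = Echo Reply
else:
    print("No reply (host unreachable or request filtered)")
```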
In the ICMP packet format, the first 32 bits of the packet contain three fields:
Type (8-bit): The initial 8 bits of the packet give the message type. It provides a brief description of the message so that the receiving network knows what kind of message it is receiving and how to respond to it. Some common message types are listed in the table further below.
Code (8-bit): The next 8 bits give a code that provides additional context for the message type.
Checksum (16-bit): The last 16 bits are the checksum field in the ICMP packet header. The checksum is computed over the complete message and enables the receiver to verify that the data was delivered without corruption.
The next 32 bits of the ICMP header form the Extended Header, whose job is to point out the problem in the IP message. The byte location of the problem is identified by a pointer, and the receiving device looks there to locate the problem.
The last part of the ICMP packet is the Data or Payload, of variable length. In IPv4 the payload can carry up to 576 bytes, and in IPv6, up to 1280 bytes.
Whenever an attacker sends a ping whose size is greater than the maximum allowable size, the oversized packet is broken into smaller fragments. When the receiver reassembles it, the size exceeds the limit, which causes a buffer overflow and makes the machine freeze. This is called a Ping of Death attack. Newer devices are protected from this attack, but older devices were not.
Smurf Attack
A Smurf Attack is a type of attack in which the attacker sends ICMP packets with a spoofed source IP address (the victim’s address), so that the replies flood the victim. These types of attacks generally work on older devices, like the Ping of Death attack.
Type | Code | Description
3 – Destination Unreachable | 0 | Destination network unreachable
3 – Destination Unreachable | 1 | Destination host unreachable
3 – Destination Unreachable | 2 | Destination protocol unreachable
3 – Destination Unreachable | 3 | Destination port unreachable
3 – Destination Unreachable | 4 | Fragmentation is needed and the DF flag is set
5 – Redirect Message | | 
10 – Router Solicitation | 0 | 
11 – Time Exceeded | 1 | Fragment reassembly time exceeded
12 – Parameter Problem | | 
14 – Timestamp Reply | 0 | Reply to Timestamp message
ICMP will take the source IP from the discarded packet and inform the source by sending a source quench message. The source will then reduce its transmission speed so that the router is relieved of congestion.
When the congested router is far away from the source, ICMP sends a hop-by-hop source quench message so that every router along the path reduces its transmission speed.
Parameter Problem
Whenever a packet arrives at a router, the router computes the header checksum; only if the calculated header checksum equals the received header checksum is the packet accepted by the router.
Parameter Problem
ICMP will take the source IP from the discarded packet and inform the source by sending a parameter problem message.
Destination Unreachable
It is not only the router that generates this ICMP error message; the destination host also sends an ICMP error message when any type of failure (link failure, hardware failure, port failure, etc.) happens in the network.
Redirection Message
A redirect message requests that data packets be sent on an alternate route. The message informs a host to update its routing information (to send packets on an alternate route).
Example: If the host tries to send data through a router R1 and R1 sends
data on a router R2 and there is a direct way from the host to R2. Then R1
will send a redirect message to inform the host that there is the best way to
the destination directly through R2 available. The host then sends data
packets for the destination directly to R2.
The router R2 will send the original datagram to the intended destination.
But if the datagram contains routing information then this message will not
be sent even if a better route is available as redirects should only be sent by
gateways and should not be sent by Internet hosts.
Redirection Message
For more, you can refer to Types of ICMP (Internet Control Message Protocol)
Messages.
Advantages of ICMP
● Network devices use ICMP to send error messages, and
administrators can use the Ping and Tracert commands to debug the
network.
● These alerts are used by administrators to identify issues with
network connectivity.
● A prime example is when a destination or gateway host notifies the
source host via an ICMP message if there is a problem or a change
in network connectivity that needs to be reported. Examples include
when a destination host or networking becomes unavailable, when
a packet is lost during transmission, etc.
● Furthermore, network performance and connection monitoring tools
commonly employ ICMP to identify the existence of issues that the
network team has to resolve.
● One quick and simple method to test connections and find the
source is to use the ICMP protocol, which consists of queries and
answers.
Disadvantages of ICMP
● If the router drops a packet, it may be due to an error; but because of the way IP (Internet Protocol) is designed, there is no inherent way for the sender to be notified of this problem.
● Assume, while a data packet is being transmitted over the internet,
that its lifetime is over and that the value of the time to live field has
dropped to zero. In this case, the data packet is destroyed.
● Although devices frequently need to interact with one another, there
isn’t a standard method for them to do so in Internet Protocol. For
instance, the host needs to verify the destination’s vital signs to see
if it is still operational before transmitting data.
History of DNS
The development of the DNS can be traced back to the early days of
the internet, when it was a relatively small and tightly connected network
called ARPANET. In the early 1980s, ARPANET introduced a centrally
managed file called the “hosts.txt” file that mapped hostnames to IP
addresses. As the internet grew rapidly, this approach became
unmanageable.
In 1983, Paul Mockapetris and Jon Postel introduced the DNS as we know it
today through RFC 882 and RFC 883, providing a distributed and hierarchical
system for domain name resolution. This innovation paved the way for the
scalable and efficient DNS architecture that underpins the modern internet.
DNS Server
If the recursive DNS server doesn’t have the IP address for the requested domain in its cache, it starts the resolution process by querying the root DNS server. The root DNS server is the top-level server in the DNS hierarchy, and it contains information about the authoritative DNS servers for top-level domains (TLDs), such as “.com,” “.org,” “.net,” etc.
The root DNS server responds to the recursive DNS server’s query with a referral to the authoritative DNS server for the “.com” TLD. The recursive DNS server then queries the “.com” TLD DNS server for the IP address of the domain in question.
The “.com” TLD DNS server, in response to the query from the recursive DNS
server, provides a referral to the authoritative DNS server responsible for the
specific domain, in this case, “example.com.”
The authoritative DNS server receives the query and looks up the requested DNS record, such as the A record for “www.example.com.”
The authoritative DNS server responds to the recursive DNS server with the requested DNS record, which includes the IP address associated with “www.example.com.”
Finally, the recursive DNS server sends the IP address it received from the authoritative DNS server back to the user’s computer. The user’s computer can then use this IP address to establish a connection to the web server hosting “www.example.com.”
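From an application's point of view, all of this happens behind a single library call; here is a minimal sketch with Python's standard socket module ("www.example.com" is the example name used in the text).

```python
import socket

# The OS's stub resolver asks the configured recursive DNS server, which walks
# root -> TLD -> authoritative servers on our behalf and returns the address.
print(socket.gethostbyname("www.example.com"))

# getaddrinfo() returns richer results (IPv4 and IPv6 records, if published).
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443):
    print(family, sockaddr)
```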
Importance of DNS
DNS is a fundamental component of the internet for several reasons:
● DNS over HTTPS (DoH) and DNS over TLS (DoT): These protocols
encrypt DNS traffic, enhancing user privacy and security.
● DNSSEC Adoption: Wider adoption of DNSSEC helps prevent DNS
cache poisoning and enhances the trustworthiness of DNS
responses.
● IPv6 Transition: As IPv6 adoption grows, DNS plays a critical role in
mapping IPv6 addresses to domain names.
● Edge Computing: DNS is integral to the emerging field of edge
computing, where low-latency access to resources is crucial.
● Blockchain and Decentralization: Some initiatives explore
blockchain-based DNS systems to increase resilience and reduce
centralization.
● Zero Trust Networking: DNS is a foundational component of
zero-trust networking models that enhance security by
authenticating and authorizing every network request.
Conclusion
The Domain Name System (DNS) is the unsung hero of the internet, silently
working behind the scenes to make the web accessible and user-friendly. It
has a rich history, a complex yet elegant structure, and immense importance
in today’s digital age. Despite its challenges and vulnerabilities, DNS
continues to evolve to meet the changing needs and demands of the internet,
ensuring that users can access the vast array of online resources with ease
and confidence. As the internet continues to grow and evolve, so too will the
Domain Name System, adapting to new technologies and security threats
while remaining a cornerstone of online communication and connectivity.
ARP Protocol
In other words, ARP is used to map an IP address to a MAC address. When one device wants to communicate with another device in a LAN (local area network), the ARP protocol is used.
ARP protocol finds the MAC address based on IP address. IP address is used
to communicate with any device at the application layer. But to communicate
with a device at the data link layer or to send data to it, a MAC address is
required.
When data is sent to a local host, the data travels between networks via IP
address. But to reach that host in LAN, it needs the MAC address of that
host. In this situation the address resolution protocol plays an important role.
Types of ARP
There are four types of ARP protocol they are as follows:-
1. Proxy ARP
2. Gratuitous ARP
3. Reverse ARP
4. Inverse ARP
1. Proxy ARP
Proxy ARP is a technique in which a device (typically a router) answers ARP queries on behalf of IP addresses that are not on the local network. In simple terms, the proxy device can also respond to ARP queries for IP addresses of other networks.
In effect the requesting host is fooled: it receives the MAC address of the proxy device instead of the MAC address of the real destination, without being aware of it.
2. Gratuitous ARP
A gratuitous ARP is an ARP request that a host sends for its own IP address. It is used to detect duplicate IP addresses on the network and can also be used to update the ARP tables of other devices. In other words, it lets a host check whether another device is already using the same IP address.
3. Reverse ARP
Reverse ARP (RARP) works in the opposite direction of ARP: a host that knows only its own MAC address uses RARP to discover its IP address. Historically it was used by diskless workstations at boot time and has largely been replaced by BOOTP and DHCP.
4. Inverse ARP
Inverse ARP resolves the IP address of a remote device from a known data link layer address. It is mainly used in Frame Relay and ATM (Asynchronous Transfer Mode) networks, where the data link identifier of a virtual circuit is known but the IP address of the device at the other end is not.
All the fields of the ARP message format are explained in detail below (a small packing sketch follows the list):
● Hardware Type: The size of this field is 2 bytes. This field defines
what type of Hardware is used to transmit the message. The most
common Hardware type is Ethernet. The value of Ethernet is 1.
● Protocol Type: This field tells which protocol is carried in the message. Generally the value of this field is 2048 (0x0800), which indicates IPv4.
● Hardware Address Length: It shows the length of the hardware address in bytes. The size of an Ethernet MAC address is 6 bytes.
● Protocol Address Length: It shows the size of the protocol (IP) address in bytes. The size of an IPv4 address is 4 bytes.
● Opcode: This field tells the type of message. If the value of this field is 1, it is a request message; if the value is 2, it is a reply message.
● Sender Hardware Address: This field contains the MAC address of the device sending the message.
● Sender Protocol Address: This field contains the IP address of the device sending the message.
● Target Hardware Address: This field contains the MAC address of the receiving device; it is left empty (all zeros) in a request message.
● Target Protocol Address: This field contains the IP address of the receiving device.
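As an illustration of the field layout above, the following Python sketch packs a 28-byte ARP request payload. The MAC and IP addresses are made-up example values, and actually transmitting the frame would additionally require a link-layer (raw) socket.

import struct
import socket

def build_arp_request(sender_mac, sender_ip, target_ip):
    htype = 1                        # Hardware Type: Ethernet
    ptype = 0x0800                   # Protocol Type: 2048 = IPv4
    hlen, plen = 6, 4                # hardware (MAC) and protocol (IPv4) address lengths
    opcode = 1                       # 1 = request, 2 = reply
    target_mac = b"\x00" * 6         # left empty (all zeros) in a request
    return struct.pack("!HHBBH6s4s6s4s",
                       htype, ptype, hlen, plen, opcode,
                       sender_mac, socket.inet_aton(sender_ip),
                       target_mac, socket.inet_aton(target_ip))

packet = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.1.10", "192.168.1.1")
print(len(packet), "bytes:", packet.hex())   # 28 bytes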
Advantages of ARP Protocol
● By using this protocol we can easily find out the MAC address of a device.
● The end nodes do not need any special configuration to resolve MAC addresses through this protocol.
● Through this protocol we can easily translate IP addresses into MAC addresses.
● There are four main types of this protocol, which can be used in different ways and prove to be very helpful.
Answer:
The ARP protocol is a network communication protocol that establishes a mapping between IP addresses and MAC addresses. This allows one device to learn the MAC address of another device when it wants to communicate with it. The ARP protocol works on a local area network (LAN).
The default time-to-live (TTL) values for Windows and Linux are:
● Windows: typically 128
● Linux: typically 64
Observed values such as 126 or 62 simply mean the packet has already crossed a few routers.
● The sending host sets the initial TTL value as an 8-bit field in the packet header; a small sketch after this list shows how an application can override it.
● The datagram’s TTL field is set by the sender and reduced by each
router along the path to its destination.
● The router reduces the TTL value by at least one while forwarding
IP packets.
● When the packet TTL value hits 0, the router discards it and sends
an ICMP message back to the originating host.
● This system ensures that a packet moving via the network is
dropped after a set amount of time, rather than looping indefinitely.
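As a minimal sketch, an application can override the operating system’s default initial TTL on a socket before sending; the destination address below is a documentation-only example.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 16)   # initial TTL of 16 instead of the OS default
sock.sendto(b"hello", ("192.0.2.1", 9999))              # 192.0.2.1 is a documentation-only address
sock.close()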
Working of TTL
The TTL value therefore places a limit on how long a piece of data can exist on the network. It also makes it possible to estimate how long a packet has already been travelling and roughly how much further it can go before being discarded.
Example of TTL
In the scenario below, Host A wishes to communicate with Host B using a ping packet. Host A sets a TTL of 255 in the ping and transmits it to Router A, its gateway. Router A, acting as a layer 3 (network layer) device, decrements the TTL from 255 to 254 and forwards the packet to Router B. Router B and Router C decrement the TTL in the same way: Router B decrements it from 254 to 253 and Router C from 253 to 252. The ping packet therefore arrives at Host B with a TTL of 252.
TTL Example
Whenever the TTL reaches zero (TTL = 0), the packet is discarded by the router and a Time Exceeded error message is sent back to the originating host.
In DNS caching, the TTL specifies the number of seconds for which a caching server can serve the record’s cached value. When the set time has elapsed since the previous refresh, the caching server contacts the authoritative server to obtain the current, possibly updated, value for the record.
The TTL field has a direct impact on page load time (cached data loads faster) and on content freshness (data cached for too long can become stale).
TTLs should be configured as follows to ensure that your visitors only see the
most recent version of your website:
● For static content like images, documents, etc., a longer TTL value is
set as they rarely get updated.
● For dynamic content such as HTML files, choosing a TTL is harder. For example, the comment section of a website changes frequently and its refresh time cannot be predicted; if users are also allowed to modify existing posts, caching such content is not recommended. (A minimal TTL-based cache sketch follows below.)
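The caching behaviour described above can be sketched with a simple in-memory store that remembers an expiry time per record; the domain and record value below are example data, not real measurements.

import time

class TTLCache:
    def __init__(self):
        self._store = {}                              # key -> (value, expiry timestamp)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None                               # never cached: fetch from the origin/authoritative server
        value, expires_at = entry
        if time.time() > expires_at:
            del self._store[key]                      # TTL elapsed: record is stale, refresh from the origin
            return None
        return value

cache = TTLCache()
cache.set("www.example.com", "93.184.216.34", ttl_seconds=300)   # example A record cached for 5 minutes
print(cache.get("www.example.com"))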
ping command
● The tracert/traceroute command is used to trace the path between
two devices. There are multiple routers in the path using which
connection is established. So, it will provide the names or IP
Addresses of routers existing in the path of two connecting devices.
tracert command
● In Internet Protocol (IP) multicast, TTL may have control over the
packet forwarding scope or range.
○ 0 is restricted to the same host
○ 1 is restricted to the same subnet
○ 32 is restricted to the same site
○ 64 is restricted to the same region
○ 128 is restricted to the same continent
○ 255 is unrestricted
● TTL is also employed in caching for Content Delivery Networks (CDNs). Here, TTLs specify how long cached content is served before a new copy is fetched from the origin server. If the time between origin pulls is tuned properly, a CDN can serve updated content without every request propagating back to the origin. This cumulative effect lets a CDN efficiently serve content closer to users while minimizing the bandwidth required at the origin.
Every router that handles a packet must reduce the TTL by at least one, even if the elapsed time was significantly less than a second. In this sense, Time-to-Live serves as a hop counter and puts a limit on how far a datagram can propagate through the Internet.
In addition, a router may decrease the TTL by one for each extra second it holds a packet beyond the first; used this way, Time-to-Live also acts as a time counter.
Conclusion
In conclusion, TTL is a concept that sets a limit within a network to ensure that data packets do not circulate indefinitely, which helps improve network performance, manage data caching, and enhance network security. It helps in managing and optimizing network traffic and plays an important role in routing protocols, IoT, mobile networks, and various other applications.
In DNS caching, TTL specifies the amount of time a DNS record should be
kept in the cache before querying the authoritative server for an updated
record. It helps in maintaining the accuracy and authenticity of DNS data.
When the TTL value reaches zero, the packet is discarded by the router and a Time Exceeded message is sent back to the originating host.
Switches have many ports; when data arrives at a port, the destination address is examined first, some checks are performed, and the frame is then forwarded toward the destination device. Different types of communication are supported, such as unicast, multicast, and broadcast communication.
Types of Switches
Switches are mainly classified into the following types that are mentioned
below.
● Virtual Switches: Virtual Switches are the switches that are inside
Virtual Machine hosting environments.
● Routing Switches: These are switches that are used to connect LANs. They also perform functions at the Network Layer of the OSI Model.
● Unmanaged Switches: Unmanaged Switches are the devices that
are used to enable Ethernet devices that help in automatic data
passing. These are generally used for home networks and small
businesses. In case of the requirement of more switches, we just add
more switches by plug and play method.
● Managed Switches: Managed Switches are switches having more
complex networks. SNMP (Simple Network Management Protocol)
can be used for configuring managed switches. These types of
switches are mostly used in large networks having complex
architecture. They provide better security levels and precision
control but they are more costly than Unmanaged switches.
● LAN Switches: LAN (Local Area Network) switches are also called Ethernet switches or data switches. LAN switches allocate bandwidth in a way that avoids data packets overlapping or colliding in the network.
● PoE Switches: Power over Ethernet (PoE) switches are used in Gigabit Ethernet networks. PoE combines data and power transmission over the same cable, so connected devices can receive both data and electricity over a single line.
● Smart Switches: Smart switches offer some extra control over data transmission compared with unmanaged switches, but fewer features than fully managed switches. They are also called partially managed switches.
● Stackable Switches: Stackable switches are connected through a backplane so that multiple physical switches operate as a single logical switch.
● Modular Switches: These types of switches help in accommodating
two or more cards. Modular switches help in providing better
flexibility.
Switching Techniques
Switching techniques are used to decide the best route for data transmission
between source and destination. These are classified into three categories :
● Circuit Switching
● Message Switching
● Packet Switching
Step 2: The switch port has to be connected directly to the router using a cable. Generally, if the switch has an uplink port, the wire should be connected to that port; if there is no uplink port, the wire can be connected to any port of the router.
Advantages of Switches
● Prevents traffic overloading in a network by segmenting the
network into smaller subnets.
● Increases the bandwidth of the network.
● Less frame collision as the switch creates the collision domain for
each connection.
Disadvantages of Switches
● It can not stop traffic destined for a different LAN segment from
traveling to all other LAN segments.
● Switches are more expensive than hubs and bridges.
Conclusion
In contemporary networking, network switches are essential because they provide efficient information flow between machines on a Local Area Network. Depending on the needs of an organization and its networking requirements, one can select among different types of switches, ranging from simple unmanaged types to sophisticated managed types. When talking about the role of networking, one cannot ignore the importance of layer 2 and layer 3 switches for both connection separation and routing. There are also features such as Power over Ethernet (PoE) and modularity that give more flexibility when an application requires something specific.
A Layer 2 switch operates at the Data Link Layer and forwards data based on
MAC addresses, while a Layer 3 switch operates at both the Data Link Layer
and the Network Layer, using IP addresses to route data between different
subnets or VLANs.
Can I use an unmanaged switch in a large network?
While unmanaged switches are easy to use, they lack advanced features and
control, making them unsuitable for large or complex networks. Managed
switches are recommended for such environments.
While both Layer 3 switches and routers perform routing functions, Layer 3
switches combine high-speed switching with routing capabilities, often used
within LANs for inter-VLAN routing. Routers are typically used to connect
different networks or for WAN connections.
PoE switches provide both power and data over a single Ethernet cable,
simplifying the installation of devices like IP cameras, wireless access points,
and VoIP phones without the need for separate power supplies.
Introduction of a Router
Network devices are physical devices that allow hardware on a computer network to communicate and interact with one another, for example repeaters, hubs, bridges, switches, routers, gateways, and NICs.
What is a Router?
A Router is a networking device that forwards data packets between
computer networks. One or more packet-switched networks or subnetworks
can be connected using a router. By sending data packets to their intended IP
addresses, it manages traffic between different networks and permits several
devices to share an Internet connection.
Let us understand this with a very general example. Suppose you search for www.google.com in your web browser; this request is sent from your system to Google's server so that it can serve the webpage. Your request, which is nothing but a stream of packets, does not go to Google's server straight away; it passes through a series of networking devices known as routers, which accept the packets and forward them along the correct path until they reach the destination server. A router has several interfaces through which it can connect to several host systems. Routers operate at the Network Layer of the OSI Model and are among the most common devices used in networking.
Router
How Does Router Work?
● A router determines a packet’s next path by examining the destination IP address in the header and comparing it to the routing table. The routing table outlines how to send the data to a specific network location, and the router uses a set of rules to determine the most effective way to forward the data toward the specified IP address.
● To enable communication between other devices and the internet,
routers utilise a modem, such as a cable, fibre, or DSL modem. Most
routers include many ports that can connect a variety of devices to
the internet simultaneously. In order to decide where to deliver data
and where traffic is coming from, it needs routing tables.
● A routing table primarily specifies the router’s default path. As a
result, it might not determine the optimum path to forward the data
for a particular packet. For instance, the office router directs all
networks to its internet service provider through a single default
channel.
● Routing tables come in two varieties: static and dynamic. Dynamic routing tables are updated automatically by dynamic routers based on network activity, whereas static routing tables are configured manually.
Router Network
Types of Router
There are several types of routers. Some of them are mentioned below:
Functions of Router
The router performs below major functions:
1. Forwarding: The router receives the packets from its input ports,
checks its header, performs some basic functions like checking
checksum, and then looks up to the routing table to find the
appropriate output port to dump the packets onto, and forwards the
packets onto that output port.
2. Routing: Routing is the process by which the router determines the best path for a packet to reach its destination. It maintains a routing table that is built by the router itself using different routing algorithms (a small lookup sketch follows this list).
3. Network Address Translation (NAT): Routers use NAT to translate
between different IP address ranges. This allows devices on a
private network to access the internet using a single public IP
address.
4. Security: Routers can be configured with firewalls and other security
features to protect the network from unauthorized access, malware,
and other threats.
5. Quality of Service (QoS): Routers can prioritize network traffic based
on the type of data being transmitted. This ensures that critical
applications and services receive adequate bandwidth and are not
affected by lower-priority traffic.
6. Virtual Private Network (VPN) connectivity: Routers can be
configured to allow remote users to connect securely to the network
using a VPN.
7. Bandwidth management: Routers can be used to manage network
bandwidth by controlling the amount of data that is allowed to flow
through the network. This can prevent network congestion and
ensure that critical applications and services receive adequate
bandwidth.
8. Monitoring and diagnostics: Routers can be configured to monitor
network traffic and provide diagnostics information in the event of
network failures or other issues. This allows network administrators
to quickly identify and resolve problems.
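As a small illustration of the forwarding and routing functions above, the sketch below performs a longest-prefix-match lookup against a hypothetical routing table; the prefixes and interface names are assumptions made purely for the example.

import ipaddress

# Hypothetical routing table: (destination prefix, output interface)
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "eth0"),    # default route
]

def lookup(destination):
    addr = ipaddress.ip_address(destination)
    matches = [(net, port) for net, port in routing_table if addr in net]
    net, port = max(matches, key=lambda m: m[0].prefixlen)   # the most specific (longest) prefix wins
    return port

print(lookup("10.1.2.3"))   # eth2 (the /16 is more specific than the /8)
print(lookup("8.8.8.8"))    # eth0 (falls through to the default route)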
Architecture of Router
A generic router consists of the following components:
1. Input Port: This is the interface through which packets are admitted into the router. It performs several key functions: terminating the physical link at the router, interoperating with the link layer (for example, decapsulation), and finally looking up the forwarding table to determine the appropriate output port based on the destination address.
2. Switching Fabric: This is the heart of the Router, It connects the
input ports with the output ports. It is kind of a network inside a
networking device. The switching fabric can be implemented in
several ways some of the prominent ones are:
● Switching via memory: In this, we have a processor which
copies the packet from input ports and sends it to the
appropriate output port. It works as a traditional CPU with
input and output ports acting as input and output devices.
● Switching via bus: In this implementation, we have a bus
that connects all the input ports to all the output ports. On
receiving a packet and determining which output port it
must be delivered to, the input port puts a particular token
on the packet and transfers it to the bus. All output ports can see the packet, but it is delivered only to the output port whose token has been attached; that port strips off the token and the packet is forwarded.
● Switching via interconnection network: This is a more sophisticated design: instead of a single bus, 2N buses are used to connect N input ports to N output ports in a crossbar-style interconnection.
3. Output Port: This is the segment from which packets are transmitted
out of the router. The output port looks at its queuing buffers (when more than one packet has to be transmitted through the same output port, queuing buffers are formed), takes packets from them, performs link-layer functions, and finally transmits the packets onto an outgoing link.
4. Routing Processor: It executes the routing protocols, and it works
like a traditional CPU. It employs various routing algorithms like the
link-state algorithm, distance-vector algorithm, etc. to prepare the
forwarding table, which is looked up to determine the route and the
output port.
Architecture of Router
Routers commonly face security challenges such as:
1. Vulnerability Exploits
2. DDoS Attacks
3. Administration Credentials
Advantages of Router
● Easier Connection: Sharing a single network connection among numerous machines is the main advantage of a router. This enables numerous people to connect to the internet, boosting total productivity. In addition, routers can connect different media types and network architectures.
● Security: Installing a router is the first step in securing a network connection, because using a modem to connect directly to the internet exposes your computer to several security risks. A router can be used as an intermediary between two networks so that the environment is somewhat more secure, although it is not a replacement for a firewall or antivirus software.
● NAT Usage: Routers use Network Address Translation (NAT) to map
multiple private IP addresses into one public IP address. This allows
for a better Internet connection and information flow between all
devices connected to the network.
● Supports Dynamic Routing: The router employs dynamic routing strategies to aid network communication. Dynamic routing selects the optimum path across the internetwork. It also creates separate collision and broadcast domains, which overall can lessen network traffic.
● Filtering of Packets: Packet switching and packet filtering are two more services routers provide. Routers filter the network using a collection of filtering rules; according to these rules, packets are either allowed through or dropped.
Disadvantages of Router
● Slower: Routers analyze multiple layers of information, from the
physical layer to the network layer, which slows down connections.
The same issue can also be encountered when multiple devices are
connected to these network devices, causing “connection waiting”.
● High Cost: Routers are more expensive than other networking devices such as hubs, bridges, and switches. As a result, routers are not always the most economical option.
● Need for configuration: The router must be properly configured to
work properly. In general, the more complex the intended use, the
more configuration is required. This requires professional
installation, which can add to the cost of buying a router.
● Quality Issues: Wireless routers that rely on the crowded 2.4 GHz band are prone to interference and occasional disconnections, a problem that is especially common for people living in apartments and condominiums.
● Bandwidth shortages: Dynamic routing techniques used by routers
to support connections tend to cause network overhead, consuming
a lot of bandwidth. This leads to a bandwidth shortage that
significantly slows down the internet connection between
connected devices.
Applications of Router
Routers are widely used in most networking communication today, and they have several applications:
Routing Protocol
The router can recognise other routers on the network and decide on a
dynamic basis where to deliver all network messages through the routing
protocol. Several protocols exist, some of which are listed below:
You can also refer to the article Difference between Router and Modem.
A router is not just for Wi-Fi, even though it can broadcast a wireless signal
(Wi-Fi) to connected and enabled devices. In addition, routers provide wired
connectivity to the Internet. Once the router has established a hardwired or
Ethernet connection to the Internet, it can then translate that connection into
Wi-Fi signal that multiple devices can pick up.
Your router has several IP addresses on its own. In addition to the router’s
internal IP, which serves as your LAN default gateway, it also contains
additional private IP addresses for each device and a private “management”
IP address.
What is an SSID?
SSID stands for “Service Set Identifier”. SSIDs allow users to locate and join
the wireless network that the router broadcasts.
What is a Router?
The router is a networking device that works at the network layer, i.e., the third layer of the ISO-OSI model, and it is a multiport device. It establishes a connection between networks to provide data flow between them. A router transfers data in the form of packets and is used in LANs as well as MANs.
It works on network layer 3 and is used in LANs, MANs, and WANs. It stores
IP addresses and maintains addresses on its own.
Working of Router
● Many networked devices, including PCs, tablets, printers, and other
items, can be connected to the internet and formed into a network
by using a router in a house or workplace.
● In order to facilitate communication between these devices and the
internet, a router first links the modem to other devices.
● Data packets with specified IP addresses are routed and transmitted
by routers across networks or within networks.
● It accomplishes this by assigning a local IP address to every device
connected to the internet; this guarantees the proper destination,
preventing data from getting lost in the network.
● Once the optimal and fastest path has been determined, data
packets are sent from that path to the networked devices.
Types of Router
1. Wireless Routers
2. Wired Routers
Advantages of Router
Working of Switch
Types of Switches
1. Managed Switches
2. Unmanaged Switch
Basic connection is the primary usage for the unmanaged switches. These
are typically found in small networks or locations where a modest number of
additional ports are needed, like a conference room, a lab, or a residence.
Plugging in is all that is necessary for unmanaged switches to function; no
configuration is needed.
Advantages of Switch
Router Switch
What is router?
A router is a networking device which works at the network layer, i.e., the third layer of the ISO-OSI model, and it is a multiport device. It establishes a connection between networks in order to provide data flow between them.
What is Switch?
The router distributes the signal to the network’s devices, and the modem is
in charge of sending and receiving signals from the ISP. The modem is
connected to the router, which is connected to every device on the network,
in a standard home network configuration.
Can a router act as a firewall?
What is Ping?
A ping is a basic Internet command that allows a user to test and verify
whether a given destination IP address exists and can accept requests in
computer network administration. Ping is also used for diagnosis to confirm
that the computer the user tries to reach is operational. Ping can be used
with any operating system (OS) that supports networking, including the
majority of embedded network administration software.
What is Ping?
Ping (Packet Internet Groper) is a method for determining communication latency between two networks, i.e., the time it takes for data to travel between two devices or across a network. As communication latency decreases, communication effectiveness improves. A low ping time is critical in situations where the timely delivery of data is more important than the quantity and quality of the desired information.
● So, if an online game streamer has two network options, one with 10 ms of ping and 10 Mbps internet speed, and the other with 100 ms of ping and 500 Mbps internet speed, the gamer will obviously choose the first because he or she wants to interact with the audience in real time. However, if a person wants to watch YouTube videos and download them, he or she will obviously select the second option in order to speed up the download process.
Ping is also helpful in online gaming. It measures how long it takes for a
signal to go from a computer to a server.
On your operating system you can adjust the ping command by adding options that let you specify things like the number of packets to send, the packet size, the timeout duration, and more.
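As a minimal sketch, a script can invoke the system ping utility and inspect its exit code; this assumes the ping binary is on the PATH and simply wraps it rather than re-implementing ICMP.

import platform
import subprocess

def ping(host, count=4):
    # Windows uses -n for the packet count, Unix-like systems use -c
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", count_flag, str(count), host])
    return result.returncode            # 0 means the host replied

if ping("example.com") == 0:
    print("Host is reachable")
else:
    print("No reply received")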
In networking, understanding the path that data packets take from one point
to another is crucial for diagnosing and troubleshooting connectivity issues.
One of the most valuable tools for this purpose is the traceroute command in
Linux. Traceroute is a command-line tool used in Linux or other operating
systems to track the path that data takes from your computer to a specified
destination, such as a website.
What is Traceroute?
The `traceroute` command is a network diagnostic tool used to trace the
route taken by packets from a source to a destination over an IP network. It
provides valuable insights into the network path, including the number of
hops (routers) between the source and destination, and the round-trip time
(RTT) for each hop.
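The core idea can be sketched in a few lines: send probes with an increasing TTL and record which router reports each expiry via an ICMP Time Exceeded message. This is a simplified sketch, not the real traceroute implementation; the raw ICMP socket requires root/administrator privileges, and some hops may legitimately show no reply.

import socket
import time

def traceroute(dest, max_hops=30, timeout=2.0):
    dest_ip = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)  # needs root
        recv_sock.settimeout(timeout)
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)   # probe expires after `ttl` hops
        start = time.time()
        send_sock.sendto(b"", (dest_ip, 33434))                       # high UDP port, unlikely to be open
        try:
            _, addr = recv_sock.recvfrom(512)                         # ICMP Time Exceeded (or Port Unreachable)
            rtt_ms = (time.time() - start) * 1000
            print(f"{ttl:2d}  {addr[0]:15s}  {rtt_ms:.1f} ms")
            if addr[0] == dest_ip:
                break                                                 # reached the destination
        except socket.timeout:
            print(f"{ttl:2d}  *")                                     # no reply within the timeout
        finally:
            send_sock.close()
            recv_sock.close()

traceroute("google.com")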
For Windows
The physical distance between your computer and its destination affects how
long each hop takes. The further away it is, the longer the hop time. This is
important to remember when fixing network issues. Also, the type of
connection matters. Computers with faster connections, like Gigabit Ethernet
(GE), usually have quicker hop times than those with slower connections.
Additionally, how the data is delivered can make a difference. For example, if
data goes through a wireless router shared with several devices, it can be
slower than if it’s sent through a dedicated connection like Ethernet or
fiber-optic.
Latency matters most when data needs to arrive quickly to work properly. For example, sending still images isn’t affected much by latency, but for Voice over Internet Protocol (VoIP) calls or videoconferences, high latency can greatly degrade quality and experience.
Option Description
-4 Use IPv4
-6 Use IPv6
This command traces the route to the google.com domain, displaying the IP
addresses and round-trip times for each hop along the path.
Syntax:
traceroute -4 google.com
Syntax:
traceroute -6 google.com
Syntax:
traceroute -F google.com
Explanation: By using the `-F` option, traceroute ensures that packets are not fragmented during the traceroute process to the destination `google.com`.
Syntax:
traceroute -f 10 google.com
Explanation: By providing the `-f` option followed by the TTL value (e.g.,
10), traceroute initiates the traceroute operation from the specified hop to the
destination `google.com`.
Syntax:
traceroute -g 192.168.43.45 google.com
Explanation: The `-g` option routes the probe packets through the specified gateway (here 192.168.43.45) on their way to the destination `google.com`.
Syntax:
traceroute -m 5 google.com
Explanation: By specifying the `-m` option followed by the desired TTL value
(e.g., 5), traceroute limits the traceroute operation to a maximum of 5 hops to
the destination `google.com`.
Syntax:
traceroute -n google.com
Explanation: The `-n` option prints hop addresses numerically rather than resolving them to hostnames, which speeds up the traceroute to the destination `google.com`.
Syntax:
traceroute -q 1 google.com
Explanation: By using the `-q` option followed by the desired number of
probes (e.g., 1), traceroute sends the specified number of probes per hop
during the traceroute operation to the destination `google.com`.
Syntax:
traceroute google.com 100
Explanation: The trailing number (here 100) sets the size, in bytes, of the probe packets sent to the destination `google.com`.
Syntax:
traceroute --help
displaying help of traceroute
Conclusion
The traceroute command in Linux offers a wide range of options for tracing
the route of packets to a destination. By understanding these options and
their syntax, users can effectively diagnose network connectivity issues and
troubleshoot routing problems. Whether it’s specifying Internet Protocol
versions, controlling packet behavior, or customizing the traceroute operation,
the traceroute command provides comprehensive functionality for network
analysis and troubleshooting.
Traceroute and tracert do the same thing. The only difference is that you use
the command “traceroute” on Mac and Linux systems, and “tracert” on a
Windows system.
Traceroute provides a list of all the routers (hops) your data passes through
to reach its destination, along with the time it takes for each hop.
Can Traceroute be used on any operating system?
A gateway is a network node or device that connects two networks that use
different transmission protocols. Gateways play an important role in
connecting two networks. It works as the entry-exit point for a network
because all traffic that passes across the networks must pass through the
gateway.
What is Gateway?
A gateway is a connecting point of any network that helps it to connect with
different networks. The gateway monitors and controls all the incoming and
outgoing traffic of the network. Suppose there are two different networks
and they want to communicate with each other, so they need to set up a path
between them. Now that path will be made between gateways of those
different networks. Gateways are also known as protocol converters because they convert the protocols used by traffic on one network into the protocols supported by the other network. Because of this, they enable smooth communication between two different networks.
Gateway
When a gateway receives a data packet, it checks the header information present in the packet. It then validates the destination IP address and checks for errors. If no error is found, it makes the data packet compatible with the new network by converting the protocol and other necessary details.
Functionality of Gateways
There are various functionalities that are supported by any gateway:
Based on Functionality
Disadvantages of Gateways
● Its implementation is difficult and costly.
● It is hard to manage.
● It causes time delay because the conversion of data according to the
network takes time.
● Failure of the gateway can cause the failure of connection with other
networks.
Gateway: A gateway is a device that is used for communication between networks having different sets of protocols.
Router: A router is a device that receives, analyzes, and forwards the data packets to other networks.
A bad gateway error message, such as 502 Bad Gateway, shows that something is not right with a website’s server communication. You can refresh the web browser, open a new browser session, or clear your browser’s cache to try to fix the error.
Introduction To Subnetting
What is a Subnet?
A subnet is like a smaller group within a large network. It is a way to split a large network into smaller networks so that devices within one network can transmit data more easily. For example, in a company, different departments can each have their own subnet, keeping their data traffic separate from others. Subnetting makes the network faster and easier to manage and also improves the security of the network.
The 32-bit IP address is divided into sub-classes. These are given below:
● For Subnet-1: The first bit chosen from the host ID part is 0, so the range runs from 193.1.2.00000000 until all the remaining host bits are 1, i.e. 193.1.2.01111111 (193.1.2.0 to 193.1.2.127), with that first host bit fixed at 0 as the subnet ID.
● For Subnet-2: The first bit chosen from the host ID part is 1, so the range runs from 193.1.2.10000000 until all the host bits are 1, i.e. 193.1.2.11111111 (193.1.2.128 to 193.1.2.255).
Note:
1. To divide a network into four (2²) parts you need to choose two bits from the host ID part for each subnet, i.e., (00, 01, 10, 11).
2. To divide a network into eight (2³) parts you need to choose three bits from the host ID part for each subnet, i.e., (000, 001, 010, 011, 100, 101, 110, 111), and so on.
3. We can say that if the total number of subnets in a network
increases the total number of usable hosts decreases.
The network can be divided into two parts: to do this, you need to choose one bit from the host ID part for each subnet.
Note: It is a class C IP so, there are 24 bits in the network id part and 8 bits in
the host id part.
Example 1: An organization is assigned a class C network address of
201.35.2.0. It uses a netmask of 255.255.255.192 to divide this into
sub-networks. Which of the following is/are valid host IP addresses?
1. 201.35.2.129
2. 201.35.2.191
3. 201.35.2.255
4. Both (1) and (3)
Solution:
Converting the last octet of the
netmask into the binary form: 255.255.255.11000000
Converting the last octet of option 1
into the binary form: 201.35.2.10000001
Converting the last octet of option 2
into the binary form: 201.35.2.10111111
Converting the last octet of option 3
into the binary form: 201.35.2.11111111
From the above, we see that Options 2 and 3 are not valid host IP addresses (they are broadcast addresses of a sub-network), while Option 1 is not a broadcast address and can therefore be assigned to a host.
Example 2: An organization uses a netmask of 255.255.255.248 on its class C network 201.32.64.0. Which of the following is/are valid broadcast addresses of the resulting sub-networks?
1. 201.32.64.135
2. 201.32.64.240
3. 201.32.64.207
4. 201.32.64.231
Solution:
Converting the last octet of the netmask
into the binary form: 255.255.255.11111000
Converting the last octet of option 1
into the binary form: 201.32.64.10000111
Converting the last octet of option 2
into the binary form: 201.32.64.11110000
Converting the last octet of option 3
into the binary form: 201.32.64.11001111
Converting the last octet of option 4
into the binary form: 201.32.64.11100111
From the above, we can see that in Options 1, 3, and 4 all the host bits are 1, so they are valid broadcast addresses of sub-networks, while in Option 2 the last three host bits are not all 1, so it is not a valid broadcast address.
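The worked examples above can also be checked programmatically; this is a small sketch using Python's standard ipaddress module, reproducing Example 1 (netmask 255.255.255.192, i.e. /26).

import ipaddress

subnets = list(ipaddress.ip_network("201.35.2.0/24").subnets(new_prefix=26))
print(subnets)                                        # the four /26 sub-networks

for candidate in ["201.35.2.129", "201.35.2.191", "201.35.2.255"]:
    addr = ipaddress.ip_address(candidate)
    subnet = next(s for s in subnets if addr in s)
    # a valid host address is anything except the network and broadcast addresses
    is_host = addr not in (subnet.network_address, subnet.broadcast_address)
    print(candidate, "->", subnet, "| valid host:", is_host)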
Advantages of Subnetting
● It provides security to one network from another network. For example, in an organisation, the code of the developer department must not be accessible to other departments.
● It may be possible that a particular subnet might need higher
network priority than others. For example, a Sales department
needs to host webcasts or video conferences.
● In the case of Small networks, maintenance is easy.
Disadvantages of Subnetting
● In the case of a single network, only three steps are required to reach a process: Source Host to Destination Network, Destination Network to Destination Host, and then Destination Host to Process. With subnetting, an extra step of first reaching the correct subnet is added.
● In the case of a Single Network only two IP addresses are wasted to
represent Network Id and Broadcast address but in the case of
Subnetting two IP addresses are wasted for each Subnet.
● The cost of the overall Network also increases. Subnetting requires
internal routers, Switches, Hubs, Bridges, etc. which are very costly.
Conclusion
Subnetting is an important part of managing computer networks. It allows us
to break a large network into smaller, more manageable parts called subnets.
This makes it easier to organize and use IP addresses efficiently. By using
subnetting, we can reduce unnecessary traffic on the network and improve its
performance.
What is a VPN?
A virtual private network (VPN) is a technology that creates a safe and
encrypted connection over a less secure network, such as the Internet. A
Virtual Private Network is a way to extend a private network using a public
network such as the Internet. The name itself suggests that it is a “Virtual Private Network”, i.e. a user can be part of a local network while sitting at a remote location. It makes use of tunneling protocols to establish a secure connection.
History of VPNs
ARPANET introduced the idea of connecting distant computers in the 1960s.
The foundation for current internet connectivity was established by ensuring
the development of protocols like TCP/IP in the 1980s. Particular VPN
technologies first appeared in the 1990s in response to the growing concerns
about online privacy and security.
Characteristics of VPN
● Encryption: VPNs employ several encryption standards to maintain
the confidentiality of the transmitted data and, even if intercepted,
can’t be understood.
● Anonymity: A VPN effectively hides the user’s IP address, offering anonymity and making tracking by websites or other third parties much harder.
● Remote Access: VPNs provide the means for secure remote
connection to business’ networks thus fostering employee
productivity through remote working.
● Geo-Spoofing: The user can also change the IP address to another
country using the VPN hence breaking the regional restrictions of
some sites.
● Data Integrity: VPNs make sure that the data communicated over the network arrives in its exact form and is not manipulated in any way.
Types of VPN
There are several types of VPN, and they vary according to the specific requirements of a computer network. Some of them are as follows:
For more details, you can refer to the published article on Types of VPN.
VPN Protocols
● OpenVPN: OpenVPN is a cryptographic protocol that prioritises security. It is a widely compatible protocol that provides a variety of setup choices.
● Point-To-Point Tunneling Protocol (PPTP): PPTP is largely no longer used, because there are many more secure choices with stronger and more advanced encryption that protect data.
● WireGuard: WireGuard is a good modern choice, known for its strong performance.
● Secure Socket Tunneling Protocol (SSTP): SSTP was developed by Microsoft for Windows users. It is not widely used due to its limited cross-platform support.
● Layer 2 Tunneling Protocol (L2TP): It connects a user to the VPN server but lacks encryption of its own, hence it is frequently combined with IPSec to offer connection, encryption, and security simultaneously.
Benefits of VPN
● When you use a VPN it is possible to switch your IP address.
● The internet connection is safe and encrypted with a VPN.
● Sharing files is confidential and secure.
● Your privacy is protected when using the internet.
● There is no longer a bandwidth restriction.
● It facilitates cost savings for internet shopping.
Limitations of VPN
● VPN may decrease your internet speed.
● Premium VPNs are not cheap.
● VPN usage may be banned in some nations.
Conclusion
In conclusion, a VPN (Virtual Private Network) is a powerful tool that
enhances your online privacy and security by encrypting your internet
connection and masking your IP address. Whether you’re accessing public
Wi-Fi, wanting to browse the web more securely, or bypassing geographical
restrictions, a VPN offers a layer of protection that keeps your data safe. As
the digital landscape continues to evolve, understanding and using a VPN
can be an essential step in safeguarding your online presence.
VLAN ranges:
● VLAN 0, 4095: These are reserved VLANs which cannot be seen or used.
● VLAN 1: This is the default VLAN of switches. By default, all switch ports are in VLAN 1. This VLAN cannot be deleted or edited, but it can be used.
● VLAN 2-1001: This is the normal VLAN range. We can create, edit, and delete these VLANs.
● VLAN 1002-1005: These are Cisco defaults for FDDI and Token Ring. These VLANs cannot be deleted.
● VLAN 1006-4094: This is the extended range of VLANs.
Configuration –
We can create VLANs simply by assigning a VLAN ID and a VLAN name:
Switch1(config)#vlan 2
Switch1(config-vlan)#name accounts
Here, 2 is the VLAN ID and accounts is the VLAN name. Now, we assign the VLAN to the switch ports, e.g.:
Switch(config)#int fa0/0
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 2
Example –
Assign IP addresses 192.168.1.1/24, 192.168.1.2/24 and 192.168.2.1/24 to the PCs. Now, we will create VLAN 2 and VLAN 3 on the switch.
Switch(config)#vlan 2
Switch(config)#vlan 3
We have created the VLANs, but the most important part is to assign switch ports to the VLANs.
Switch(config)#int fa0/0
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 2
Switch(config)#int fa0/1
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 3
Switch(config)#int fa0/2
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 2
As seen, we have assigned VLAN 2 to fa0/0 and fa0/2, and VLAN 3 to fa0/1.
There are three ways to connect devices on a VLAN, the type of connections
are based on the connected devices i.e. whether they are VLAN-aware(A
device that understands VLAN formats and VLAN membership) or
VLAN-unaware(A device that doesn’t understand VLAN format and VLAN
membership).
1. Trunk Link –
All connected devices on a trunk link must be VLAN-aware. All frames on a trunk link carry a special VLAN header; such frames are called tagged frames.
2. Access link –
It connects VLAN-unaware devices to a VLAN-aware bridge. All
frames on the access link must be untagged.
3. Hybrid link –
It is a combination of the Trunk link and Access link. Here both
VLAN-unaware and VLAN-aware devices are attached and it can
have both tagged and untagged frames.
Advantages –
● Performance –
Network traffic is full of broadcast and multicast frames, and VLANs reduce the need to send such traffic to unnecessary destinations. For example, if traffic is intended for 2 users but 10 devices are present in the same broadcast domain, all 10 will receive the traffic, wasting bandwidth; if we create VLANs, the broadcast or multicast packet will reach only the intended users.
● Formation of virtual groups –
As there are different departments in every organization namely
sales, finance etc., VLANs can be very useful in order to group the
devices logically according to their departments.
● Security –
In the same network, sensitive data can be broadcast which can be
accessed by the outsider but by creating VLAN, we can control
broadcast domains, set up firewalls, restrict access. Also, VLANs can
be used to inform the network manager of an intrusion. Hence,
VLANs greatly enhance network security.
● Flexibility –
VLANs provide the flexibility to add or remove hosts as required.
● Cost reduction –
VLANs can be used to create broadcast domains which eliminate
the need for expensive routers.
By using VLANs, a large broadcast domain can be split into several smaller ones, which are easier to manage than a single big broadcast domain.
Disadvantages of VLAN
Real-Time Applications of VLAN
1. Voice over IP (VoIP) : VLANs can be used to isolate voice traffic from
data traffic, which improves the quality of VoIP calls and reduces the
risk of network congestion.
2. Video Conferencing : VLANs can be used to prioritize video traffic
and ensure that it receives the bandwidth and resources it needs for
high-quality video conferencing.
3. Remote Access : VLANs can be used to provide secure remote
access to cloud-based applications and resources, by isolating
remote users from the rest of the network.
4. Cloud Backup and Recovery : VLANs can be used to isolate backup
and recovery traffic, which reduces the risk of network congestion
and improves the performance of backup and recovery operations.
5. Gaming : VLANs can be used to prioritize gaming traffic, which
ensures that gamers receive the bandwidth and resources they need
for a smooth gaming experience.
6. IoT : VLANs can be used to isolate Internet of Things (IoT) devices
from the rest of the network, which improves security and reduces
the risk of network congestion.
Advantages of HTTP
Disadvantages of HTTP
Advantages of HTTPS
Disadvantages of HTTPS
HTTP HTTPS
What is HTTP?
Due to its simplicity, HTTP has been the most widely used protocol for data transfer over the Web, but the data (i.e. hypertext) exchanged using HTTP isn’t as secure as we would like it to be.
What is HTTPS?
Hypertext Transfer Protocol Secure (HTTPS) is an extended version of the
Hypertext Transfer Protocol (HTTP). It is used for secure communication.
In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS).
HTTPS is just HTTP with verification and encryption. The only distinction between the two protocols is that HTTPS uses TLS (SSL) to encrypt and digitally sign standard HTTP requests and responses.
The default port number of HTTP is 80 and the default port number of
HTTPS is 443.
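As a minimal sketch, the two ports can be exercised with Python's standard http.client module: the first connection sends the request in plain text over port 80, while the second wraps the same request in TLS over port 443. example.com is used purely as a placeholder host.

import http.client

plain = http.client.HTTPConnection("example.com", 80, timeout=10)     # HTTP: unencrypted
plain.request("GET", "/")
print("HTTP status:", plain.getresponse().status)
plain.close()

secure = http.client.HTTPSConnection("example.com", 443, timeout=10)  # HTTPS: the same request over TLS
secure.request("GET", "/")
print("HTTPS status:", secure.getresponse().status)
secure.close()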
How do I switch between HTTP and HTTPS when visiting a website?
You cannot manually switch between HTTP and HTTPS. All you have to do is
enter the destination’s address, and the website will decide which mode to
use.
What is Encryption?
Encryption in cryptography is a process by which plain text or a piece of
information is converted into cipher text or text that can only be decoded by
the receiver for whom the information was intended. The algorithm used for
the encryption process is known as cipher. It helps to protect consumer
information, emails, and other sensitive data from unauthorized access as
well as secures communication networks. Presently there are many options
to choose from and find the most secure algorithm that meets our
requirements.
Types of Encryption
There are two methods or types through which encryption takes place:
● Symmetric Encryption: The same secret key is used for both encryption and decryption, so the key must be shared securely between the sender and the receiver (for example, AES and DES).
● Asymmetric Encryption: A pair of keys is used; the public key encrypts the data and the corresponding private key decrypts it (for example, RSA).
Features of Encryption
● Confidentiality: Information can only be accessed by the person for
whom it is intended and no other person except him can access it.
● Integrity: Information cannot be modified in storage or transition
between sender and intended receiver without any addition to
information being detected.
● Non-repudiation: The creator/sender of information cannot deny his intention to send the information at a later stage.
● Authentication: The identities of the sender and receiver are confirmed, as well as the origin and destination of the information.
Encryption Algorithms
To secure information, you can employ a variety of data encryption
algorithms. The algorithms differ in terms of how accurately they safeguard
data as well as how complex they are. Some of the more popular algorithms
that have been in use over the years are listed below:
RSA is an asymmetric key algorithm named after its creators Rivest, Shamir and Adleman. The algorithm is based on the fact that factoring a large composite number into its prime factors is difficult (prime factorization). It generates a public key and a private key: the public key converts plain text to cipher text, and the private key converts cipher text back to plain text. The public key is accessible to everyone, whereas the private key is kept secret, and the two keys are different, which makes RSA a secure algorithm for data security.
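A toy sketch of the idea, using the classic small textbook primes rather than anything of real cryptographic strength (real RSA keys use primes hundreds of digits long, plus padding):

p, q = 61, 53
n = p * q                          # 3233, part of both keys
phi = (p - 1) * (q - 1)            # 3120
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent: modular inverse of e (Python 3.8+)

message = 65                       # plaintext must be a number smaller than n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(ciphertext, recovered)       # recovered == 65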
3. Triple DES
Triple DES is a block cipher algorithm that was created to replace its older version, the Data Encryption Standard (DES). It was found that the 56-bit key of DES was not enough to prevent brute force attacks, so Triple DES was introduced to enlarge the key space without requiring a change of algorithm. It has a key length of 168 bits (three 56-bit DES keys), but due to the meet-in-the-middle attack the effective security it provides is only about 112 bits. Triple DES suffers from slow performance in software, although it is well suited for hardware implementation. At present, Triple DES has largely been replaced by AES (Advanced Encryption Standard).
4. Twofish
5. Blowfish
Blowfish was created to address the shortcomings of the DES algorithm. The algorithm is freely usable by everyone and has been released into the public domain. The technique uses a 64-bit block size, and the length of the key can range from 32 to 448 bits. It operates on a Feistel structure with 16 rounds, and the same single key is used to both encrypt and decrypt the information.
Advantages of Encryption
● Data encryption keeps the data isolated from the security of the
device on which it is stored.
● Encryption improves the security of our information.
● When the data is encrypted, it can only be decrypted by a person who has the key.
Disadvantages of Encryption
● If the password or key is lost, the user will be unable to open the
encrypted file.
● Although data encryption is a useful data security strategy, it
requires a lot of resources, including time, data processing, and the
use of many encryption and decryption techniques.
Future of Encryption
With advancements in technology it becomes easier to encrypt data, and with neural networks it becomes easier to keep data safe. Neural networks from Google Brain have been shown to create their own encryption without being taught the specifics of any encryption algorithm. Data scientists and cryptographers are finding ways to prevent brute force attacks on encryption algorithms and to avoid any unauthorized access to sensitive data.
Conclusion
Encryption exists to protect data, and an encryption algorithm is the set of rules that must be followed throughout the encryption process. The encryption functions, procedures, and keys used all contribute to the system’s effectiveness. Using the appropriate public or private key, the recipient can transform the coded, unreadable text back into plain text.
Most people believe that AES is resistant to all types of attacks except brute
force attacks. Still, a lot of internet security experts think that AES will
become the industry standard for private-sector data encryption in the future.
What is Hashing?
Now the question arises: if the array was already there, what was the need for a new data structure? The answer lies in the word “efficiency”. Though storing an element in an array takes O(1) time, searching for it takes at least O(log n) time even when the array is sorted. This time appears small, but for a large data set it can cause a lot of problems, and this, in turn, makes the array data structure inefficient.
So now we are looking for a data structure that can store the data and search
in it in constant time, i.e. in O(1) time. This is how Hashing data structure
came into play. With the introduction of the Hash data structure, it is now
possible to easily store data in constant time and retrieve them in constant
time as well.
Components of Hashing
There are majorly three components of hashing:
● Key: The input data (for example, a string or a number) that is to be stored or searched.
● Hash Function: The function that converts the key into an index within a fixed range.
● Hash Table: The array-like structure whose slots (buckets) hold the values at the indexes produced by the hash function.
What is Collision?
The hashing process generates a small number for a big key, so there is a possibility that two keys could produce the same value. The situation where a newly inserted key maps to an already occupied slot is called a collision, and it must be handled using some collision handling technique.
Collision in Hashing
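A minimal sketch of one common collision-handling technique, separate chaining, where colliding keys are simply kept together in a list inside the same bucket:

class ChainedHashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)       # hash function maps a key to a bucket

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)           # key already present: update it
                return
        bucket.append((key, value))                # collision or new key: chain it in the bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

table = ChainedHashTable()
table.put("apple", 1)
table.put("banana", 2)
print(table.get("apple"), table.get("banana"))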
A hash function plays a huge role in making a system secure, as it converts the data given to it into an irregular-looking value of fixed length. We can imagine it as a shaker in our homes: when we put data into this function, it outputs a scrambled value known as the “hash value”. Hash values are simply numbers but are often written in hexadecimal; since computers manage values in binary, the hash value itself is also stored and processed in binary.
Deterministic: Hash functions are deterministic, meaning that given the same
input, the output will always be the same. This makes hash functions useful
for verifying the authenticity of data, as any changes to the data will result in
a different hash value.
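The deterministic, fixed-length nature of a hash value is easy to see with Python's standard hashlib module; SHA-256 is used here purely as a convenient example of a hash function.

import hashlib

# The same input always produces the same fixed-length hash value.
digest1 = hashlib.sha256(b"hello world").hexdigest()
digest2 = hashlib.sha256(b"hello world").hexdigest()
print(digest1 == digest2)   # -> True (deterministic)
print(len(digest1))         # -> 64 hexadecimal characters, regardless of input size

# Even a tiny change to the input produces a completely different hash value.
print(hashlib.sha256(b"hello world!").hexdigest() == digest1)   # -> False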
Advantages:
Data integrity: Hash functions are useful for ensuring the integrity of data, as
any changes to the data will result in a different hash value. This property
makes hash functions a valuable tool for detecting data tampering or
corruption.
Disadvantages:
Collision attacks: Hash functions are vulnerable to collision attacks, where an
attacker tries to find two different inputs that produce the same hash value.
This can compromise the security of hash-based protocols, such as digital
signatures or message authentication codes.
In the world of computer networks, a firewall acts like a security guard. Its job
is to watch over the flow of information between your computer or network
and the internet. It’s designed to block unauthorized access while allowing
safe data to pass through.
Essentially, a firewall helps keep your digital world safe from unwanted
visitors and potential threats, making it an essential part of today’s connected
environment. It monitors both incoming and outgoing traffic using a predefined set of security rules to detect and prevent threats.
What is Firewall?
A firewall is a network security device, either hardware or software-based,
which monitors all incoming and outgoing traffic and based on a defined set
of security rules accepts, rejects, or drops that specific traffic.
Working of Firewall
A firewall matches the network traffic against the rule set defined in its table. Once a rule is matched, the associated action is applied to that traffic. For example, one rule may state that no employee from the Human Resources department can access data on the code server, while another rule states that the system administrator can access data from both the Human Resources and technical departments. Rules are defined on the firewall according to the needs and security policies of the organization. From the perspective of a server, network traffic can be either outgoing or incoming.
The firewall maintains a distinct set of rules for each case. Most outgoing traffic, which originates from the server itself, is allowed to pass; still, setting rules on outgoing traffic is always better in order to achieve more security and prevent unwanted communication. Incoming traffic is treated differently. Most traffic that reaches the firewall uses one of three major Transport Layer protocols: TCP, UDP, or ICMP. All of these carry a source address and a destination address; TCP and UDP additionally have port numbers, while ICMP uses a type code instead of a port number to identify the purpose of the packet.
Default policy: It is very difficult to explicitly cover every possible rule on the firewall. For this reason, the firewall must always have a default policy. A default policy consists only of an action (accept, reject, or drop). Suppose no rule is defined about SSH connections to the server; the firewall will then follow the default policy. If the default policy is set to accept, then any computer outside your office can establish an SSH connection to the server. Therefore, setting the default policy to drop (or reject) is always good practice.
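The rule matching and default policy described above can be sketched in a few lines of Python. The rule list, field names, and traffic examples below are invented for illustration; a real firewall inspects packets, not dictionaries.

# Toy firewall: match traffic against an ordered rule list, fall back to a default policy.
RULES = [
    {"src": "hr",    "dst": "code-server", "action": "drop"},    # HR cannot reach the code server
    {"src": "admin", "dst": "any",         "action": "accept"},  # sysadmin may reach everything
]
DEFAULT_POLICY = "drop"   # anything not explicitly allowed is dropped

def filter_traffic(src, dst):
    for rule in RULES:
        if rule["src"] == src and rule["dst"] in (dst, "any"):
            return rule["action"]          # first matching rule wins
    return DEFAULT_POLICY                  # no rule matched: apply the default policy

print(filter_traffic("hr", "code-server"))     # -> drop
print(filter_traffic("admin", "hr-server"))    # -> accept
print(filter_traffic("guest", "code-server"))  # -> drop (default policy)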
Types of Firewall
Firewalls can be categorized based on their generation.
4. Hardware Firewall
An application layer firewall can inspect and filter packets at any OSI layer, up to the application layer. It has the ability to block specific content and to recognize when certain applications and protocols (such as HTTP or FTP) are being misused. In other words, application layer firewalls are hosts that run proxy servers. A proxy firewall prevents a direct connection between the two sides of the firewall; each packet has to pass through the proxy.
This type works at the Session layer of the OSI model. It allows the simultaneous setup of two Transmission Control Protocol (TCP) connections and can let data packets flow without using much computing power. These firewalls are relatively weak because they do not inspect the contents of data packets; if malware is present in a packet, they will permit it to pass provided that the TCP connection is established properly.
Functions of Firewall
● Every piece of data that enters or leaves a computer network must
go via the firewall.
● If the data packets are safely routed via the firewall, all of the
important data remains intact.
● A firewall logs each data packet that passes through it, enabling the
user to keep track of all network activities.
● Since the data is stored safely inside the data packets, it cannot be
altered.
● Every attempt to access our operating system is examined by the firewall, which also blocks traffic from unidentified or undesired sources.
In the late 1980s, Mogul, Reid, and Vixie worked at Digital Equipment Corp
(DEC) on packet-filtering technology. This tech became important for future
firewalls. They started the idea of checking external connections before they
reach computers on an internal network. Some people think this packet filter
was the first firewall, but it was really a part of the technology that later
became true firewall systems.
In the late 1980s to early 1990s, researchers at AT&T Bell Labs worked on a
new type of firewall called the circuit-level gateway. Unlike earlier methods,
this firewall didn’t need to reauthorize connections for each data packet but
instead vetted and allowed ongoing connections. From 1989 to 1990,
Presotto, Sharma, and Nigam developed this technology, and in 1991,
Cheswick and Bellovin continued to advance firewall technology based on
their work.
From 1993 to 1994, at Check Point, Gil Shwed and developer Nir Zuk made
major contributions to creating the first widely-used and easy-to-use firewall
product called Firewall-1. Gil Shwed pioneered stateful inspection
technology, filing a U.S. patent in 1993. Following this, Nir Zuk developed a
user-friendly graphical interface for Firewall-1 in 1994. These innovations
were pivotal in making firewalls accessible and popular among businesses
and homes, shaping their adoption for years to come.
Importance of Firewalls
So, what does a firewall do and why is it important? Without protection,
networks are vulnerable to any traffic trying to access your systems, whether
it’s harmful or not. That’s why it’s crucial to check all network traffic.
Once a malicious person finds your network, they can easily access and
threaten it, especially with constant internet connections.
Conclusion
In conclusion, firewalls play a crucial role in safeguarding computers and
networks. By monitoring and controlling incoming and outgoing data, they
help prevent unauthorized access and protect against cyber threats. Using a
firewall is a smart way to enhance security and ensure a safer online
experience for users and organizations alike.
The firewall acts as a constant filter, analyzing incoming data and blocking anything that appears suspicious from entering your network, thereby protecting the system.
Yes, installing a firewall helps prevent worms and malicious software from infecting a computer, in addition to blocking unwanted traffic.
2. Application Security
Concerned with securing software applications and preventing vulnerabilities
that could be exploited by attackers. It involves secure coding practices,
regular software updates and patches, and application-level firewalls.
● Most of the Apps that we use on our cell phones are Secured and
work under the rules and regulations of the Google Play Store.
● There are about 3.553 million applications in Google Play, the Apple App Store has about 1.642 million, and the Amazon Appstore has about 483 thousand available for users to download. Having so many choices does not mean that all of these apps are safe.
● Many apps pretend to be safe, but after collecting all our information they share it with third parties.
● The app must be installed from a trustworthy platform, not from
some 3rd party website in the form of an APK (Android Application
Package).
4. Cloud Security
5. Mobile Security
6. Endpoint Security
● All of the physical and virtual resources, systems, and networks that
are necessary for a society’s economics, security, or any combination
of the above to run smoothly are referred to as critical infrastructure.
Food and agricultural industries, as well as transportation systems,
comprise critical infrastructure.
● The infrastructure that is considered important might vary
depending on a country’s particular demands, resources, and level
of development, even though crucial infrastructure is comparable
across all nations due to basic living requirements.
● Industrial control systems (ICS), such as supervisory control and
data acquisition (SCADA) systems, which are used to automate
industrial operations in critical infrastructure industries, are
frequently included in critical infrastructure. SCADA and other
industrial control system attacks are very concerning. They can
seriously undermine critical infrastructure, including transportation,
the supply of oil and gas, electrical grids, water distribution, and
wastewater collection.
● Due to the links and interdependence between infrastructure
systems and sectors, the failure or blackout of one or more functions
could have an immediate, detrimental effect on several sectors.
Cyber security is vital in any organization, no matter how big or small the organization is. With the growth of technology and of software across sectors such as government, education, and healthcare, more and more information is becoming digital and is carried over wireless communication networks.
The importance of cyber security lies in securing the data of organizations and services such as email providers (Yahoo, for example), which hold extremely sensitive information that can cause damage to both us and our reputation if exposed. Attackers target small and large companies alike to obtain their essential documents and information.
3. Cloud Security: As more businesses move their data to the cloud, ensuring
this data is secure is a top priority. This includes using strong authentication
methods and regularly updating security protocols to protect against
breaches.
5. Zero Trust Security: This approach assumes that threats could come from
inside or outside the network, so it constantly verifies and monitors all access
requests. It’s becoming a standard practice to ensure a higher level of
security.
With the increase in digitalization, data is becoming more and more valuable.
Cybersecurity helps protect sensitive data such as personal information,
financial data, and intellectual property from unauthorized access and theft.
5. IoT Vulnerabilities: With more devices connected to the internet, like smart home gadgets and wearable devices, there are new opportunities for cyber attacks. Many of these devices lack strong security, which makes them easy targets for hackers.
6. Cloud Security: As more data is stored in the cloud, ensuring its security has become a top priority. Hackers are constantly trying to find ways to access this data, making cloud security a critical area of focus.
● Use strong passwords: Use unique and complex passwords for all
of your accounts, and consider using a password manager to store
and manage your passwords.
● Keep your software up to date: Keep your operating system,
software applications, and security software up to date with the
latest security patches and updates.
● Enable two-factor authentication: Enable two-factor authentication
on all of your accounts to add an extra layer of security.
● Be aware of suspicious emails: Be cautious of unsolicited emails,
particularly those that ask for personal or financial information or
contain suspicious links or attachments.
● Educate yourself: Stay informed about the latest cybersecurity
threats and best practices by reading cybersecurity blogs and
attending cybersecurity training programs.
Challenges of Cybersecurity
● Constantly Evolving Threat Landscape: Cyber threats are constantly
evolving, and attackers are becoming increasingly sophisticated.
This makes it challenging for cybersecurity professionals to keep up
with the latest threats and implement effective measures to protect
against them.
● Lack of Skilled Professionals: There is a shortage of skilled
cybersecurity professionals, which makes it difficult for organizations
to find and hire qualified staff to manage their cybersecurity
programs.
● Limited Budgets: Cybersecurity can be expensive, and many
organizations have limited budgets to allocate toward cybersecurity
initiatives. This can result in a lack of resources and infrastructure to
effectively protect against cyber threats.
● Insider Threats: Insider threats can be just as damaging as external
threats. Employees or contractors who have access to sensitive
information can intentionally or unintentionally compromise data
security.
● Complexity of Technology: With the rise of cloud computing, IoT,
and other technologies, the complexity of IT infrastructure has
increased significantly. This complexity makes it challenging to
identify and address vulnerabilities and implement effective
cybersecurity measures.
Conclusion
Cybersecurity is an essential part of our digital lives, protecting our personal
and professional assets from cyber threats. By understanding the types of
cyber threats, taking proactive steps to protect yourself, and staying informed
about the latest best practices, you can help ensure the safety and security of
your digital assets.
● Mission-Critical Assets
● Data Security
● Endpoint Security
● Application Security
● Network Security
● Perimeter Security
● The Human Layer
Use passwords for all your laptops, tablets, and smartphones. Never leave
these devices unattended in public places. Encrypt any devices and storage
that hold sensitive personal information. This includes laptops, tablets,
smartphones, USB drives, backup tapes, and cloud storage.
What is an IP Address?
All the computers of the world on the Internet network communicate with
each other with underground or underwater cables or wirelessly. If I want to
download a file from the internet or load a web page or literally do anything
related to the internet, my computer must have an address so that other
computers can find and locate mine in order to deliver that particular file or
webpage that I am requesting. In technical terms, that address is called IP
Address or Internet Protocol Address.
But what is Internet protocol? This is just a set of rules that makes the
internet work. You are able to read this article because your computer or
phone has a unique address where the page that you requested (to read this
article from GeeksforGeeks) has been delivered successfully.
Working of IP addresses
An IP address works much like any other addressing scheme: it relies on a set of rules (protocols) to send information. Using these protocols we can easily send and receive data or files between connected devices. Several steps happen behind the scenes; let us look at them.
Types of IP Address
Classes of IPv4 Address: There are around 4.3 billion IPv4 addresses and
managing all those addresses without any scheme is next to impossible.
Let’s understand it with a simple example. If you have to find a word from a
language dictionary, how long will it take? Usually, you will take less than 5
minutes to find that word. You are able to do this because words in the
dictionary are organized in alphabetical order. If you have to find out the
same word from a dictionary that doesn’t use any sequence or order to
organize the words, it will take an eternity to find the word. If a dictionary
with one billion words without order can be so disastrous, then you can
imagine the pain behind finding an address from 4.3 billion addresses. For
easier management and assignment IP addresses are organized in numeric
order and divided into the following 5 classes :
IP Class | Address Range (first octet) | Maximum number of networks
Class A | 1 - 126 | 126 (2^7 - 2)
Class B | 128 - 191 | 16,384 (2^14)
Class C | 192 - 223 | 2,097,152 (2^21)
Class D | 224 - 239 | Reserved for multicasting
Class E | 240 - 255 | Reserved for experimental use
2. IPv6: There is, however, a problem with the IPv4 address. With IPv4 we can uniquely connect only about 4.3 billion devices, and there are far more devices in the world that need to be connected to the internet. So we are gradually making our way to the IPv6 address, which is a 128-bit IP address. In human-friendly form, IPv6 is written as eight groups of hexadecimal digits separated by colons (:); in computer-friendly form it is 128 bits of 0s and 1s. A unique sequence of binary digits is given to every computer, smartphone, and other device connected to the internet, so via IPv6 a total of 2^128 devices can be assigned unique addresses, which is more than enough for upcoming generations.
2011:0bd9:75c5:0000:0000:6b3e:0170:8394
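Python's standard ipaddress module can parse both address families, which makes the size difference between IPv4 and IPv6 easy to verify; the addresses below are just examples.

import ipaddress

v4 = ipaddress.ip_address("192.168.0.10")
v6 = ipaddress.ip_address("2011:0bd9:75c5:0000:0000:6b3e:0170:8394")

print(v4.version, v6.version)   # -> 4 6
print(v6.exploded)              # the full eight-group hexadecimal form
print(2 ** 32, "possible IPv4 addresses vs", 2 ** 128, "possible IPv6 addresses")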
Classification of IP Address
2. Private IP Address: This is an internal address of your device that is not routed on the internet; no data is exchanged directly between a private address and the internet.
Lookup IP addresses
To know your public IP, you can simply search "What is my IP?" on Google. Other websites will show you equivalent information: they can see your public IP address because, by visiting the site, your router has made a request and thus revealed it. Some IP lookup sites go further and show the name of your Internet Service Provider and your current city.
● On Windows: Click Start and type “cmd” in the search box and run
the command prompt. In the black command prompt dialog box
type “ipconfig” and press enter. You will be able to see your IP
Address there.
● On Mac: Go to system preferences and select Network, you will be
able to see the information regarding your network which includes
your IP Address.
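Besides ipconfig and the Network preferences panel, the address your machine uses on the local network can also be read programmatically. This small Python sketch "connects" a UDP socket towards a public address (no data is actually sent) just to learn which local interface and IP the operating system would use; the 8.8.8.8 target is an arbitrary example.

import socket

# Connecting a UDP socket does not send packets, but it makes the OS choose
# the local interface (and therefore the local IP) it would route through.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))
    local_ip = s.getsockname()[0]

print("Local IP address:", local_ip)   # usually a private address such as 192.168.x.x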
Various online activities can reveal your IP address, from playing games to accepting bad cookies from a trap website or commenting on a website or forum. Once attackers have your IP, there are websites that help them get a decent idea of your location. They can then use social media sites to track your online presence, cross-verify everything they have gathered, and use your information for their own benefit, or sell the collected data on the dark web, which can lead to further exploitation.
The worst case I have seen was a friend's PC that got infected while he was installing an application downloaded from a pirated website. The moment he hit install, a number of command prompt boxes started appearing, tens of commands started running, and after a while everything looked back to normal. In reality, malware had been installed in the process. A few days later, someone was trying to log in to his social media and other accounts using his computer as a host (his own IP address) while his computer was idle. The hacker was using his PC and his network, i.e. his IP address, to do some serious damage. He formatted his computer then and there, secured all his email and other accounts, changed all the passwords, and took all the security measures that had to be taken.
To secure and hide your IP address from unwanted people always remember
the following points:
Classification of IP Address
An IP Address is basically classified into two types:
● Private IP Address
● Public IP Address
Yes, we can trace private IP addresses, but only by using other devices on the local network. Devices connected to the local network have private IP addresses that are visible only to the other devices within that network; they cannot be seen online the way public IP addresses can.
Range:
192.168.0.0 – 192.168.255.255
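Whether a given address falls inside a private range can be checked with Python's standard ipaddress module, as in this short sketch; the sample addresses are arbitrary.

import ipaddress

for addr in ("192.168.1.25", "10.0.0.7", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    kind = "private" if ip.is_private else "public"
    print(addr, "is a", kind, "IP address")
# 192.168.1.25 -> private, 10.0.0.7 -> private, 8.8.8.8 -> public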
1. Can a device have both a Public and a Private IP Address at the same time?
Answer:
Yes, a device can have both a public and a private IP address at the same time. This usually happens when Network Address Translation (NAT) connects the local network to the Internet.
2. Can we access the Internet with our Private IP Address?
Answer:
Yes, we can access the Internet with a private IP address. The router, which has both a private and a public IP address, acts as an intermediary that connects the local network to the Internet.
3. Is a Public IP Address the same as an External IP Address?
Answer:
Public IP address and external IP address are two terms for the same thing: the address that identifies your network to the outside world and lets you connect from inside your network to the Internet.
4. What values can a Public IP Address take?
Answer:
A public IP address can be any number except those that are reserved for private IP ranges, but it must be unique on the Internet.
What is Routing?
Routing chooses the routes along which Internet Protocol (IP) packets get
from their source to their destination in packet-switching networks. This
article will discuss the details of the Routing Process along with its different
types and working principles.
What is a Router?
Routers are specialized pieces of network hardware that make routing decisions on the Internet. A router is a networking device that forwards data packets between computer networks; it directs traffic based on the destination IP address and ensures that data reaches its intended destination.
What is Routing?
Routing refers to the process of directing a data packet from one node to
another. It is an autonomous process handled by the network devices to
direct a data packet to its intended destination. Note that, the node here
refers to a network device called – ‘Router‘.
● The Source Node (Sender) sends the data packet on the network,
embedding the IP in the header of the data packet.
● The nearest router receives the data packet, and based on some
metrics, further routes the data packet to other routers.
● Step 2 occurs recursively till the data packet reaches its intended
destination.
Note: There is a limit to how many hops a packet can make; if that limit is exceeded, the packet is considered lost.
1. Static Routing
2. Dynamic Routing
3. Default Routing
The first step is typically that one node (a client or server) initiates communication across the network, for example using the HTTP protocol.
The routing table is a logical data structure used to store the IP addresses of, and relevant information about, the nearest routers. The source node looks up the nodes that can carry the packet towards its destination, selects the shortest path using a shortest-path algorithm, and then routes the packet accordingly.
The Routing Table is stored in a router, a network device that determines the
shortest path and routes the data packet.
In the process of routing, a data packet undergoes many hops across various nodes in the network until it reaches its final destination node. The hop count is the number of nodes the packet must traverse to finally reach its intended destination.
This hopping procedure has certain criteria defined for every data packet: there is a limited number of hops a packet can take, and if the packet exceeds that limit it is considered lost and is retransmitted.
Once all the data packets reach their intended destination node, they
re-assemble and transform into complete information that was sent by the
sender (source node). The receiver will perform various error-checking
mechanisms to verify the authenticity of the data packets.
Overall, the data packet will be transmitted over the least hop-count path as
well as the path on which there is less traffic to prevent packet loss.
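As a rough sketch of how a route with the fewest hops can be found, here is a breadth-first search over a toy network graph in Python. The topology and node names are invented for illustration; real routers build their view of the network through distributed routing protocols rather than a single global search.

from collections import deque

# Toy network: each node lists its directly connected neighbours.
NETWORK = {
    "source": ["r1", "r2"],
    "r1": ["r3"],
    "r2": ["r3", "r4"],
    "r3": ["destination"],
    "r4": ["destination"],
    "destination": [],
}

def fewest_hops(graph, start, goal):
    # Breadth-first search returns a path with the minimum hop count.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(fewest_hops(NETWORK, "source", "destination"))
# -> ['source', 'r1', 'r3', 'destination'] (3 hops)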
Working of Routing
● Sender
● Receiver
● Routers
In the accompanying figure, the shortest path, the one with the least hop count, is highlighted in red. There are multiple paths from the source to the destination, but if all the appropriate metrics are satisfied, the data packets are transmitted through the shortest path (the one highlighted in red).
In this type of routing protocol, all the nodes that are a part of the network
advertise their routing table to their adjacent nodes (nodes that are directly
connected) at regular intervals. With each router getting updated at regular
intervals, it may take time for all the nodes to have the same accurate
network view.
Let's look at the metrics used to measure the cost of travel from one node to another; a small cost-based route selection sketch follows the list below.
1. Hop Count: Hop count refers to the number of nodes a data packet
has to traverse to reach its intended destination. Transmitting from
one node to another node counts as 1 – hop count. The goal is to
minimize the hop count and find the shortest path.
2. Bandwidth Consumption: Bandwidth is the ability of a network to
transmit data typically measured in Kbps (Kilobits per second),
Mbps (Megabits per second), or Gbps (Gigabits per second). The
bandwidth depends on several factors such as – the volume of data,
traffic on a network, network speed, etc. Routing decision is made in
a way to ensure efficient bandwidth consumption.
3. Delay: Delay is the time it takes for a data packet to travel from the
source node to its destination node. There are different types of
delay such as – propagation delay, transmission delay, and queuing
delay.
4. Load: Load refers to the network traffic on a certain path in the
context of routing. A data packet will be routed to the path with a
lesser load so that it reaches its destination in the specified time.
5. Reliability: Reliability refers to the assured delivery of the data packet to its intended destination. Although other factors also matter, the data packet is routed in such a way that it reliably reaches its destination; the stability and availability of the links in the network are checked before routing the data packet along a specific path.
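Below is the cost-based route selection sketch referred to above: a small Dijkstra-style search in Python in which each link carries an invented example cost that could, in practice, combine hop count, bandwidth, delay, load, and reliability.

import heapq

# Toy network with per-link costs (made-up numbers standing in for combined metrics).
LINKS = {
    "A": {"B": 4, "C": 1},
    "B": {"D": 1},
    "C": {"B": 2, "D": 5},
    "D": {},
}

def cheapest_route(links, start, goal):
    # Dijkstra's algorithm: pick the path with the lowest total cost.
    queue = [(0, start, [start])]        # (cost so far, current node, path taken)
    best = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for neighbour, link_cost in links[node].items():
            heapq.heappush(queue, (cost + link_cost, neighbour, path + [neighbour]))
    return None

print(cheapest_route(LINKS, "A", "D"))   # -> (4, ['A', 'C', 'B', 'D'])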
Conclusion
Routing is a fundamental concept in computer science that allows every
network device across the world to share data across the internet. Here, the
shortest path is selected by the routing algorithms when routing a data
packet. So, the Routing Algorithms select the shortest path based on metrics
like – hop count, delay, bandwidth, etc.
The default gateway is simply a router or another network device that allows
the host to connect with other networks outside its local network. It is a
crucial component of internetwork communication.
Benefits of IDS
● Detects Malicious Activity: IDS can detect any suspicious activities
and alert the system administrator before any significant damage is
done.
● Improves Network Performance: IDS can identify any performance
issues on the network, which can be addressed to improve network
performance.
● Compliance Requirements: IDS can help in meeting compliance
requirements by monitoring network activity and generating reports.
● Provides Insights: IDS generates valuable insights into network
traffic, which can be used to identify any weaknesses and improve
network security.
Placement of IDS
● The most common and optimal position for an IDS is behind the firewall, although this varies with the network. Placement behind the firewall gives the IDS high visibility of incoming network traffic while keeping it out of the traffic that flows between users on the internal network. The edge of the network is the point that gives the network the possibility of connecting to the extranet.
● In cases where the IDS is positioned beyond the network's firewall, its purpose is to defend against noise from the internet and against attacks such as port scans and network mapping. An IDS in this position would monitor layers 4 through 7 of the OSI model and would use signature-based detection. Showing the number of attempted breaches instead of only the actual breaches that made it through the firewall is better, as it reduces the amount of false positives and takes less time to discover successful attacks against the network.
● An advanced IDS incorporated with a firewall can be used to
intercept complex attacks entering the network. Features of
advanced IDS include multiple security contexts in the routing level
and bridging mode. All of this in turn potentially reduces cost and
operational complexity.
● Another choice for IDS placement is within the network. This choice
reveals attacks or suspicious activity within the network. Not
acknowledging security inside a network is detrimental as it may
allow users to bring about security risk, or allow an attacker who
has broken into the system to roam around freely.
Advantages
● Early Threat Detection: IDS identifies potential threats early,
allowing for quicker response to prevent damage.
● Enhanced Security: It adds an extra layer of security, complementing
other cybersecurity measures to provide comprehensive protection.
● Network Monitoring: Continuously monitors network traffic for
unusual activities, ensuring constant vigilance.
● Detailed Alerts: Provides detailed alerts and logs about suspicious
activities, helping IT teams investigate and respond effectively.
Disadvantages
● False Alarms: IDS can generate false positives, alerting on harmless
activities and causing unnecessary concern.
● Resource Intensive: It can use a lot of system resources, potentially
slowing down network performance.
● Requires Maintenance: Regular updates and tuning are needed to
keep the IDS effective, which can be time-consuming.
● Doesn’t Prevent Attacks: IDS detects and alerts but doesn’t stop
attacks, so additional measures are still needed.
● Complex to Manage: Setting up and managing an IDS can be
complex and may require specialized knowledge.
Conclusion
An Intrusion Detection System (IDS) is a powerful tool that can help businesses detect and prevent unauthorized access to their network. By analyzing network traffic patterns, an IDS can identify suspicious activities and alert the system administrator. An IDS can be a valuable addition to any organization's security infrastructure, providing insights and improving network performance.
False positives and false negatives are the primary drawbacks of IDSs. False positives add noise that can seriously impair an intrusion detection system's efficiency, while a false negative occurs when an IDS misses an intrusion and considers the traffic valid.
By using Machine Learning, one can achieve a high detection rate and a low
false alarm rate.
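As a very rough illustration of signature-based detection, one of the approaches mentioned above, the Python sketch below scans log lines for known-bad patterns and raises alerts; the signatures and log lines are made up for the example.

import re

# Made-up "signatures": patterns an IDS might associate with known attacks.
SIGNATURES = {
    "SQL injection attempt": re.compile(r"union\s+select", re.IGNORECASE),
    "Path traversal attempt": re.compile(r"\.\./\.\."),
}

def inspect(log_line):
    # Return the names of any signatures matched in the log line.
    return [name for name, pattern in SIGNATURES.items() if pattern.search(log_line)]

traffic = [
    "GET /index.html HTTP/1.1",
    "GET /search?q=1 UNION SELECT password FROM users",
    "GET /../../etc/passwd",
]
for line in traffic:
    for alert in inspect(line):
        print("ALERT:", alert, "->", line)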
Types of IPS
An IPS is an essential tool for network security. The comparison below summarises the main IDPS technology types, what they monitor, and their strengths.
Network-Based: monitors network, transport, and application TCP/IP layer activity; a single sensor covers multiple network subnets and groups of hosts; it is the only IDPS type that can analyze the widest range of application protocols.
Wireless: monitors wireless protocol activity and unauthorized wireless local area networks (WLANs) in use; a single sensor covers multiple WLANs and groups of wireless clients; it is the only IDPS type able to predict wireless protocol activity.
NBA (Network Behavior Analysis): monitors network, transport, and application TCP/IP layer activity that causes anomalous network flows; a single sensor covers multiple network subnets and groups of hosts; it is typically more effective than the others at identifying reconnaissance scanning and DoS attacks, and at reconstructing major malware infections.
Host-Based: monitors application and TCP/IP layer activity on an individual host; it is the only IDPS type that can analyze activity carried inside end-to-end encrypted communications.
Conclusion:
An Intrusion Prevention System (IPS) is a crucial component of any network
security strategy. It monitors network traffic in real-time, compares it against
known attack patterns and signatures, and blocks any malicious activity or
traffic that violates network policies. An IPS is an essential tool for protecting
against known and unknown threats, complying with industry regulations,
and increasing network visibility. Consider implementing an IPS to protect
your network and prevent security breaches.
It is difficult to make Internet use secure in the current situation, and people are among the most important aspects of doing so. Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are the two kinds of network security instruments applied to protect against cyber threats, together forming a comprehensive scheme of cyber safeguards. A key point is to appreciate the difference between IDS and IPS, because they are at the core of the procedures that safeguard against cyber threats.
Primary Terminologies
● Intrusion Detection System (IDS): Software that passively monitors network traffic patterns, flags suspicious activity, and raises an administrative alert without stopping the threat itself.
● Intrusion Prevention System (IPS): A security measure that detects events in real time and blocks suspicious traffic from entering the network before it leads to system abuse.
● Network Traffic: The data transferred between devices on a network, including messages, file transfers, and requests.
● Anomalies: Network activity that is unusual or deviates from normal traffic patterns and may indicate a security concern.
● Cyber Threats: The many kinds of security risks aimed at networks, including viruses, malware, and unauthorized activities.
Example: An IDS detects an unusual increase in traffic on the network during normally quiet periods and informs the administrators so they can check whether it is a security attack.
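That traffic-spike scenario can be sketched as a simple anomaly check in Python: compare the current traffic volume against a historical baseline and alert when it deviates too far. The numbers and the doubling threshold are arbitrary choices for illustration.

# Toy anomaly detection: alert when traffic is far above the historical baseline.
baseline = [120, 130, 110, 125, 118]   # requests per minute during quiet periods
current_rate = 410                     # requests per minute observed right now

average = sum(baseline) / len(baseline)
threshold = average * 2                # arbitrary cut-off: double the baseline

if current_rate > threshold:
    print("ALERT: traffic rate", current_rate, "exceeds threshold", round(threshold))
else:
    print("Traffic within normal range")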
Conclusion
Briefly, Intrusion Detection Systems (IDS) only detect abnormal network activity and warn administrators about it, while Intrusion Prevention Systems (IPS) work in real time and automatically stop malicious traffic. An IDS raises alerts but does not resolve the issue; an IPS takes a proactive stance to mitigate the security breach. Whether to deploy an IDS, an IPS, or both depends on risk tolerance, budget, and the need for immediate threat response. The two systems play complementary roles in a comprehensive cybersecurity plan.
IDS is reactive: it identifies threats and raises alerts without blocking them, while IPS is proactive because it blocks malicious traffic in real time. However, while an IPS can block many threats and protect against already known attacks, it cannot always catch zero-day and advanced threats.
The netstat command is like a special tool in Linux that helps you understand
and check things about how your computer connects to the internet. It can
tell you about the connections your computer is making, the paths it uses to
send information, and even some technical details like how many packets of
data are being sent or received. In simple terms, it’s like a window that shows
you what’s happening with your computer and the internet. This article will
help you learn how to use netstat, exploring different ways to get specific
information and giving you a better idea of what’s going on behind the
scenes.
Let’s explore some of the most commonly used options along with examples:
1) List All Ports Using netstat Command in Linux
-a, --all : Show both listening and non-listening sockets. With the --interfaces option, show interfaces that are not up.
netstat -a | more
2) List All TCP Ports Using netstat Command in Linux
This command specifically lists all TCP ports, giving you information about
the TCP connections your system is engaged in.
netstat -at
3) List All UDP Ports Using netstat Command in Linux
This command specifically lists all UDP ports, giving you information about the UDP connections your system is engaged in.
netstat -au
4) List Only Listening Ports Using netstat Command in Linux
By using this option, you can see only the ports that are actively listening for incoming connections.
netstat -l
Narrowing it down further, this command specifically lists the TCP ports that
are in a listening state.
netstat -lt
Similarly, this command focuses on displaying only the UDP ports that are
actively listening.
netstat -lu
For those working with UNIX systems, this option shows only the UNIX ports
that are in a listening state.
netstat -lx
This command provides statistical information for all ports, offering insights
into network activity.
netstat -s
10) List Statistics for UDP Ports Using netstat Command in Linux
netstat -su
11) Display PID and Program Names Using netstat Command in Linux
This option enriches the output by displaying Process ID (PID) and program
names associated with network connections.
netstat -pt
To find the port on which a specific program, in this case, SSH, is running, use
this command.
netstat -ap | grep ssh
This command helps identify the process associated with a given port, such
as port 80 in this example.
netstat -an | grep ':80'
To view all active connections using netstat, you can use the following
command:
netstat -a
Yes, netstat can show the processes associated with network connections. By
using the `-p` option, you can include the Process ID (PID) and program
names in the output. For example:
netstat -p
This command will display the processes along with their PIDs that are using
network resources.
How do I monitor network activity in real-time with netstat?
To monitor network activity in real-time using netstat, you can use the `-c`
option. This option continuously updates the netstat information at regular
intervals.
For example:
netstat -c
netstat -tuln
To show all network connections using netstat in Linux, you can use the
following command:
netstat -a
netstat -an
These commands will help you monitor and analyze network connections on
your Linux system.
Conclusion
In this article we discussed the netstat command in Linux which is like a
special tool that helps you see how your computer connects to the internet.
It’s like a window showing you information about connections, data paths,
and technical details. This article covers practical examples of netstat
commands, from displaying active connections to listing specific types of
ports and getting detailed statistics. Whether you’re a beginner or more
advanced, netstat offers versatile options. Common questions are answered,
making it clear what netstat does and how it differs from other commands
like ss in Linux. This knowledge helps users diagnose network issues and
understand their system’s internet activities better.
Linux Commands
Today, the Linux kernel and other Unix-like operating systems share well over 100 Unix commands. For experienced users, Linux commands can be highly customized and offer advanced functionality. All of the Linux/Unix commands below are run in the terminal provided by the Linux system.
Command : Description
access : Used to check whether the calling program has access to a specified file; it can also be used to check whether a file exists or not
accton : Used to turn on or turn off process accounting, or to change the accounting file
acpi : Used to display the battery status and other ACPI information
addr2line : Used to convert addresses into file names and line numbers
apropos : Helps the user when they don't remember the exact command but know keywords related to the command that define its use or function
apt : Provides a high-level CLI (Command Line Interface) for the package management system; intended as an interface for the end user, it enables some options better suited for interactive usage by default compared to more specialized tools like apt-cache and apt-get
ar : Used to create, modify and extract files from archives
atrm : Used to remove specified jobs; to remove a job, its job number is passed to the command
atq : Displays the list of pending jobs scheduled by the user
automake : Used for automatically generating Makefile.in files compliant with the GNU Coding Standards
banner : Used to print an ASCII character string in large letters to standard output
basename : Strips directory information and suffixes from file names, i.e. it prints the file name with any leading directory components removed
batch : Used to read commands from standard input or a specified file and execute them when system load levels permit, i.e. when the load average drops below 1.5
biff : A mail notification system for Unix that notifies the user at the command line when new mail arrives and tells who it is from
break : Used to terminate the execution of for, while and until loops
builtin : Used to run a shell builtin, passing it arguments (args), and also to get its exit status
bzless : Used to view bzip2 compressed files; it does not have to read the entire input file before starting, so with a large file it starts faster
bzmore : Used as a filter for CRT viewing of bzip2 compressed files, which are saved with the .bz2 extension
cal : Used to see the calendar of a specific month or a whole year; by default it shows the current month's calendar as output
case : The best alternative when multiple if/elif tests would be needed on a single variable
cat : Reads data from files and gives their content as output; it helps us create, view and concatenate files
cfdisk : Displays or manipulates the disk partition table by providing a text-based "graphical" interface
chage : Used to view and change the user password expiry information
chattr : A file system command used for changing the attributes of a file in a directory
chfn : Allows you to change a user's name and other details easily; chfn stands for Change finger
chkconfig : Used to list all available services and view or update their run level settings
chvt : Used to switch between the different TTY (TeleTYpewriter) terminals available
cksum : Used to display a CRC (Cyclic Redundancy Check) value, the byte size of the file and the name of the file to standard output
clear : Used to clear the terminal screen
cmp : Used to compare two files byte by byte and helps you find out whether they are identical or not
col : Used to filter out reverse line feeds; the col utility simply reads from standard input and writes to standard output
colcrt : Used to format text-processor output so that it can be viewed on Cathode Ray Tube displays
compress : Used to reduce the file size; after compression the file is available with an added .Z extension
continue : Used to skip the current iteration in for, while and until loops
cpio : Stands for "copy in, copy out"; used for processing archive files such as *.cpio or *.tar, and can copy files to and from archives
crontab : A list of commands that you want to run on a regular schedule, and also the name of the command used to manage that list
csplit : Used to split any file into as many parts as required by the user
ctags : Allows quick access across files (for example, quickly seeing the definition of a function)
curl : A tool to transfer data to or from a server, using any of the supported protocols
cut : For cutting out sections from each line of files and writing the result to standard output
cvs : Used to store the history of a file; whenever a file gets corrupted or anything goes wrong, it helps us go back to a previous version and restore the file
date : Used to display the system date and time; it is also used to set the date and time of the system
dd : A command-line utility for Unix and Unix-like operating systems whose primary purpose is to convert and copy files
declare : Used to declare shell variables and functions, set their attributes and display their values
depmod : Used to generate a list of dependency descriptions of kernel modules and their associated map files
df : Used to display information about file systems, their total space and available space
diff : Used to display the differences between files by comparing them line by line
dmesg : Used to examine the kernel ring buffer and print the message buffer of the kernel
dmidecode : Used when the user wants to retrieve the system's hardware-related information such as Processor, RAM (DIMMs), BIOS details, etc. of a Linux system in a readable format
domainname : Used to return the Network Information System (NIS) domain name of the host
dos2unix : Converts a DOS text file to UNIX format
dosfsck : Diagnoses an MS-DOS file system for problems and attempts to repair them
du : Used to track the files and directories which are consuming an excessive amount of space on the hard disk drive
dumpe2fs : Used to print the super block and block group information for the filesystem present on a device
dumpkeys : Used to dump the keyboard translation tables
ed : Used for launching the ed text editor, a line-based editor with a minimal interface, which makes it less complex for working on text files, i.e. creating, editing, displaying and manipulating files
egrep : Treats the pattern as an extended regular expression and prints out the lines that match the pattern
eject : Allows ejecting removable media (typically a CD-ROM, floppy disk, tape or ZIP/JAZ drive) using software
emacs : An editor with a simple user interface; there is no insert mode in this editor, only an editing mode
env : Used to print environment variables; it is also used to run a utility or command in a custom environment
ex : A text editor in Linux which is also termed the line-editor mode of the vi editor
expand : Allows you to convert tabs into spaces in a file; when no file is specified it reads from standard input
expect : A command or scripting language that works with scripts that expect user inputs; it automates the task by providing the inputs
export : A bash shell BUILTIN command, which means it is part of the shell; it marks variables to be exported to child processes
fc : Used to list, edit or re-execute commands previously entered into an interactive shell
fc-cache : Scans the font directories and builds font caches for applications that use fontconfig for font handling
fc-list : Used to list the available fonts and font styles; using the format option the list can be filtered and sorted
fdisk : Format disk is a dialog-driven command in Linux used for creating and manipulating the disk partition table
file : Used to determine the type of a file; the file type may be human-readable (e.g. 'ASCII text') or a MIME type (e.g. 'text/plain; charset=us-ascii')
find : Used to find files and directories and perform subsequent operations on them
finger : A user information lookup command which gives details of all the users logged in
fold : Wraps each line in an input file to fit a specified width and prints it to standard output
for : Used to repeatedly execute a set of commands for every element present in a list
free : Displays the total amount of free space available along with the amount of memory used, the swap memory in the system, and the buffers used by the kernel
function : Used to create functions or methods
gdb : The GNU Debugger tool; helps to debug programs written in C, C++, Ada, Fortran, etc.
getent : Used to get entries from a number of important text files called databases
grep : Searches a file for a particular pattern of characters and displays all lines that contain that pattern
groupmod : Used to modify or change an existing group on a Linux system
groups : Groups are collections of users; groups make it easy to manage users with the same security and access privileges
grpck : Verifies the integrity of the group information; it checks that all entries in /etc/group and /etc/gshadow have the proper format and contain valid data
gzip : Compresses files; each single file is compressed into a single file
halt : Used to instruct the hardware to stop all CPU functions; basically, it halts (shuts down) the system
hexdump : Used to filter and display the specified files, or standard input, in a human-readable format
hostname : Used to obtain the DNS (Domain Name System) name and set the system's hostname or NIS (Network Information System) domain name
hostnamectl : Provides a proper API used to control the Linux system hostname and change its related settings
htop : A command-line utility that allows the user to interactively monitor the system's vital resources or server processes in real time
hwclock : A utility for accessing the hardware clock, also called the Real Time Clock (RTC)
iconv : Used to convert some text from one encoding into another encoding
id : Used to find out user and group names and numeric IDs (UID or group ID) of the current user or any other user on the server
ifup : Brings a network interface up, allowing it to transmit and receive data
import : Used for capturing a screenshot of any active page and saving it as an image file
info : Reads documentation in the info format; it gives more detailed information for a command compared with the man page
iostat : Used for monitoring system input/output statistics for devices and partitions
iotop : Used to display and monitor disk I/O usage details and even get a table of existing I/O utilization by process
ip : Used for performing several network administration tasks
iptables : Used to set up and maintain tables for the Netfilter firewall for IPv4, included in the Linux kernel
iptables-save : Saves the current iptables rules to a user-specified file that can be used later when needed
iwconfig : Used to display the parameters and wireless statistics which are extracted from /proc/net/wireless
join : A command-line utility for joining lines of two files based on a key field present in both files
kill : Used to terminate processes manually; kill sends a signal to a process which terminates the process
last : Used to display the list of all the users logged in and out since the file /var/log/wtmp was created
less : Used to read the contents of a text file one page (one screen) at a time
let : Used to evaluate arithmetic expressions on shell variables
locate : Used to find files by name
lsblk : Used to display details about block devices; these block devices (except RAM disks) are basically files that represent devices connected to the PC
lshw : Used to generate detailed information about the system's hardware configuration from various files in the /proc directory
lsmod : Used to display the status of modules in the Linux kernel; it results in a list of loaded modules
lsusb : Used to display information about USB buses and the devices connected to them
mailq : Prints the mail queue, i.e. the list of messages that are waiting in the mail queue
man : Used to display the user manual of any command that we can run on the terminal
md5sum : Used to verify data integrity using MD5 (Message Digest Algorithm 5)
mkdir : Allows the user to create directories; this command can create multiple directories at once
modinfo : Used to display information about a Linux kernel module
more : Used to view text files in the command prompt, displaying one screen at a time in case the file is large (for example log files)
mount : Used to mount the filesystem found on a device to the big tree structure (Linux filesystem) rooted at /
nc (netcat) : One of the most powerful networking, security and network monitoring tools
nmcli : Used for controlling NetworkManager; nmcli can also be used to display network device status and to create, edit, activate/deactivate and delete network connections
nslookup : A network administration tool for querying the Domain Name System (DNS) to obtain domain name or IP address mappings or any other specific DNS record
od : Used to convert the content of input into different formats, with octal format as the default output format
paste : Used to join files horizontally (parallel merging) by outputting lines consisting of lines from each file specified, separated by a tab as delimiter, to standard output
pidof : Used to find out the process IDs of a specific running program
ping : Used to check the network connectivity between a host and a server/host
pinky : A user information lookup command which gives details of all the users logged in; unlike finger, in pinky you may trim the information to what interests you
pmap : Used to display the memory map of a process; a memory map indicates how memory is laid out
poweroff : Sends an ACPI signal which instructs the system to power down
printf : Used to display a given string, number or any other format specifier on the terminal
ps : Used to list the currently running processes and their PIDs along with some other information depending on different options
pwd : Prints the path of the working directory, starting from the root
ranlib : Used to generate an index to an archive
read : Reads up to the given number of bytes from the specified file descriptor into a buffer
rename : Used to rename the named files according to the regular expression perlexpr
reset : Used to initialize the terminal; this is useful when a program dies leaving the terminal in an abnormal state
restore : Used for restoring files from a backup created using dump
rm : Used to remove objects such as files, directories, symbolic links and so on from the file system in UNIX
route : Used when you want to work with the IP/kernel routing table
rsync : A software utility for Unix-like systems that efficiently syncs files and directories between two hosts or machines
sar : Used to monitor a Linux system's resources like CPU usage, memory utilization, I/O consumption, etc.
screen : Provides the ability to launch and use multiple shell sessions from a single ssh session
script : Used to make a typescript or record all terminal activities
scriptreplay : Used to replay a typescript/terminal activity stored in the log file that was recorded by the script command
sdiff : Used to compare two files and then write the results to standard output in a side-by-side format
sed : Used for finding, filtering, text substitution, replacement and text manipulation such as insertion, deletion and search
select : Used to create a numbered menu from which a user can select an option
showkey : Prints to standard output either the scan codes, the key codes or the 'ascii' codes of the keys pressed
sleep : Used to create a dummy job; a dummy job helps in delaying execution
source : Used to read and execute the content of a file (generally a set of commands) passed as an argument in the current shell script
strace : One of the most powerful process monitoring, diagnostic and instructional tools
sudo : Used as a prefix to some command that only a superuser is allowed to run
sum : Used to find the checksum and count the blocks in a file
systemctl : Used to examine and control the state of the "systemd" system and service manager
time : Used to execute a command and print a summary of the real time, user CPU time and system CPU time spent by the command when it terminates
tracepath : Used to trace the path to a destination, discovering the MTU along this path
traceroute : Prints the route that a packet takes to reach the host
tty : Displays information related to the terminal; it basically prints the file name of the terminal connected to standard input
type : Used to describe how its argument would be translated if used as a command
uniq : A command-line utility that reports or filters out the repeated lines in a file
until : Used to execute a set of commands as long as the final command in the 'until' condition has an exit status which is not zero
uptime : Used to find out how long the system has been active (running)
username : Provides a set of commands to fetch the username and its configuration from the system
users : Used to show the user names of users currently logged in to the current host
vi : The default editor that comes with the UNIX operating system; it is called the visual editor
wall : Displays a message, or the contents of a file, or otherwise its standard input, on the terminals of all currently logged-in users
wc : Used to find out the number of lines, word count, byte and character count in the files specified as arguments
wget : Used to download files from a server even when the user has not logged on to the system; it can work in the background without hindering the current process
which : Used to locate the executable file associated with the given command by searching it in the PATH environment variable
while : Used to repeatedly execute a set of commands as long as the COMMAND returns true
whoami : Displays the username of the current user when this command is invoked
write : Allows a user to communicate with other users by copying lines from one user's terminal to another
xargs : Used to build and execute commands from standard input; it converts input received from standard input into arguments of a command
yes : Used to print a continuous output stream of a given STRING; if STRING is not mentioned it prints 'y'
zdiff : Used to invoke the diff program on files compressed via gzip
zdump : Used to print the current time in the specified zone, i.e. it prints the current time in the zonename named on the command line
zgrep : Used to search for expressions in a given file even if it is compressed
zip : A compression and file packaging utility for Unix; each file is stored in a single archive with the extension .zip
● Whois: This is a website that serves a good purpose for hackers. Through this website, information about the domain name, email ID, domain owner, etc. of a website can be traced. Basically, it serves as a way of performing website footprinting.
Footprinting is a way for computer security experts to find the weak spots in systems. Hackers also use footprinting to learn about the security of the systems they want to attack. In this article we are going to talk about what footprinting means in ethical hacking, look at the tools used, see where this information comes from, and cover how it is used in ethical hacking along with the different types of footprinting.
Footprinting helps both the good hackers (ethical hackers) and the bad hackers (black-hat hackers) to get important information. This information is useful for testing websites or understanding how an organization protects its computer systems. The data collected through footprinting is very important for hackers, including those who use their skills to help make systems safer.
Types of Footprinting
1. Active Footprinting
2. Passive Footprinting
Active Footprinting
This involves gathering information about the target through direct interaction. In this type of footprinting, the target may recognize the ongoing information-gathering process, because we interact with the target network directly.
Passive Footprinting
This involves gathering information about the target without direct interaction, typically by relying on publicly available sources, so the target is unlikely to notice the activity.
Each of these pieces of information can tell the hacker something important
about the system they’re looking at. For example, knowing the IP addresses
can help them find where the computers are on the internet, while knowing
about the firewall can tell them what kind of protection the system has.
Google Hacking
This is not about hacking Google itself. It’s about using Google search in a
clever way to find important information. Hackers use special search words
to find things that most people can’t find easily. This can help them learn
about an organization’s computers.
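As a simple illustration (using a hypothetical target domain), a search such as the following combines real Google operators to look for exposed documents:
site:targetcompany.example filetype:pdf "confidential"
The site: operator restricts results to the target's domain, while filetype: narrows them to a specific document type.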
Whois Lookup
This tool helps hackers find basic information about websites. They can learn who owns the website, where the website is hosted, and other important details about the organization.
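The same information can be gathered from the command line with the standard whois client (example.com is just a placeholder domain):
whois example.com
The output typically includes the registrar, registration dates, name servers, and registrant contact details (where not redacted).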
Social Engineering
Social engineering is a way of tricking people to get information. It works like this: the hacker learns about the person they want to trick, then uses what they know to make the person trust them, and finally tricks the person into giving away secret information.
NeoTrace
NeoTrace is a graphical traceroute-style tool that maps the network path to a target and shows information about each intermediate node.
Identification of Vulnerabilities
When an ethical hacker uses footprinting, they can find weak spots in a
system. This means they might be able to get into the system, just like a bad
hacker would. Once inside, they can see which parts of the system are not
well protected. They can find open ports, which are like open doors that
hackers could use. They can also spot other weak points that bad hackers
might try to use. This helps them figure out what kinds of attacks could hurt
the system.
Knowledge of Security Framework
By doing footprinting, ethical hackers can make good guesses about what kinds of attacks might work on the system. They look at all the weak spots they found and think about how a bad hacker might try to use them. They also look at how the system is protected and think about ways to get past those defenses. This helps them figure out which parts of the system are most likely to be attacked. Knowing this, they can help make those areas stronger before a real attack happens.
Conclusion
In this article we learned about footprinting, how it works, and why it is important in ethical hacking. Good hackers use it to protect systems, but everyone should take steps to protect their own data too. This can include using VPNs, removing important information from the internet, and being careful about what we share online. Remember, any information on the internet could be used by hackers. Footprinting methods are always changing, so ethical hackers need to keep learning to stay ahead of the bad hackers. By understanding footprinting, we can all help to keep our systems and data safer.
Frequently Asked Questions on Types of Footprinting –
FAQ’s
Is footprinting only used by bad hackers?
No, footprinting is used by both good and bad hackers. Good hackers, called ethical hackers, use footprinting to find weak spots in systems so they can be fixed. They do this to help make computer systems safer.
How can I protect myself from footprinting?
You can protect yourself by being careful about what information you put online. Use strong passwords, don't share personal details on public websites, and keep your computer's security updated. Also be careful about clicking on strange links or downloading files from unknown sources.
Syntax
domainname [options]
1. domainname -h
Displays the help menu with all available options and syntax for the
command. This is helpful for beginners who want to explore what the
domainname command can do.
domainname -h
2. domainname -a or –alias
It is used to display the alias name. Returns blank line if alias name is not set
up.
domainname -a
3. domainname -A or –all-fqdns
It is used to display all the fully qualified domain names (FQDN).
domainname -A
4. domainname -b or –boot
Sets the default domain name if none is available. This option is useful for
configuring domain names during the boot process.
domainname -b allinone
In the below example, you can see initially ‘none’ domainname was returned
but after setting up command returns the new name.
5. domainname -s or –short
Displays the short version of the hostname (without the domain name).
domainname -s
6. domainname -I or –all-ip-addresses
It is used to display all the network (IP) addresses of the host.
domainname -I
Conclusion
The domainname command in Linux is a crucial tool for managing network
domain settings. Whether you're troubleshooting network issues or configuring domain names for hosts, domainname helps you view and modify the NIS
domain names easily. Its wide range of options, including displaying IP
addresses, setting default domain names, and listing FQDNs, makes it
versatile and essential for system administrators managing Linux systems in
networked environments.
Conclusion
In this article we have discussed the `nslookup` command, which is a valuable
tool for querying the DNS server and obtaining information about domain
name or IP address mapping. We have studied that it is very useful for
troubleshooting DNS-related issues. We have also discussed options like
-type=a, -type=any, -type=mx, -type=ns, -type=ptr, and -type=soa. Overall,
we can say that by using nslookup information, administrators can gain
insights into the DNS infrastructure and resolve DNS-related problems
efficiently.
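For reference, a couple of the queries discussed above look like this on the command line (example.com is a placeholder domain):
nslookup -type=mx example.com
nslookup -type=ns example.com
The first query returns the domain's mail exchanger records, the second its authoritative name servers.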
The SubDomainizer tool is written in Python, so you must have Python installed on your Kali Linux system in order to use this tool. The tool comes with an awesome user interface, very similar to Metasploitable 1 and Metasploitable 2, which makes it very easy to run and use.
SubDomainizer
Step 1:
To install the tool, first move to the Desktop and then install the tool using the following commands.
cd Desktop
Step 2:
The tool has now been downloaded onto your machine. Move to the tool's directory and use the following command to install the requirements.
cd SubDomainizer
The installation process is now complete on your Kali Linux machine. Next, we will look at examples of using the tool.
Example 1:
Use the following command to run the tool or to find all the subdomains of
your target.
python3 SubDomainizer.py -u https://www.geeksforgeeks.org
S.No. | Black Box Testing | Gray Box Testing | White Box Testing
1. | This testing has low granularity. | This testing has a medium level of granularity. | This testing has high granularity.
2. | It is done by end-users (called user acceptance testing) and also by testers and developers. | It is done by end-users and also by the tester and developers. | It is generally done by testers and developers.
4. | It is likely to be less exhaustive than the other two. | It is kind of in-between. | Most exhaustive among all three.
5. | It is based on requirements, and test cases are based on the functional specifications, as the internals are not known. | It provides better variety/depth in test cases on account of high-level knowledge of the internals. | It can exercise code with a relevant variety of data.
6. | Algorithm testing is not best suited for it. | Algorithm testing is also not best suited for it. | It is best suited for algorithm testing.
The ‘pwd’ command stands for “print working directory.” In this article, we will delve into the ‘pwd’ command, exploring its functionality, usage, and various examples. It prints the path of the working directory, starting from the root. pwd is available both as a shell built-in (pwd) and as an actual binary (/bin/pwd). $PWD is an environment variable that stores the path of the current directory. This command has two flags.
The output will be the absolute path of your current location in the file
system.
The default behavior of the built-in pwd is the same as pwd -L. Use “pwd -L” to obtain the logical (symbolic) path of a directory reached through a symbolic link.
The default behavior of /bin/pwd is the same as pwd -P. Use “pwd -P” to display the actual path, ignoring symbolic links.
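A minimal sketch of the difference, using a throwaway symlink under /tmp (the paths are arbitrary):
mkdir -p /tmp/realdir
ln -sfn /tmp/realdir /tmp/linkdir
cd /tmp/linkdir
pwd -L    # prints /tmp/linkdir, the logical path that keeps the symlink
pwd -P    # prints /tmp/realdir, the physical path with the symlink resolved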
The $PWD environment variable is a dynamic variable that stores the path of
the current working directory. It holds the same value as ‘pwd -L’ –
representing the symbolic path.
echo $PWD
Executing this command prints the symbolic path stored in the $PWD environment variable.
How do I print the current working directory in Linux using the ‘pwd’
command?
You can print the current working directory in Linux by simply entering the
‘pwd’ command in the terminal and pressing Enter. This will display the
absolute path of your current location in the file system.
What is the difference between the ‘pwd’ command and ‘/bin/pwd’?
The ‘pwd’ command and the ‘/bin/pwd’ binary both serve the purpose of printing the current working directory. However, the default behavior differs: ‘pwd’ behaves as if the ‘-L’ option is used, while ‘/bin/pwd’ behaves like ‘pwd -P’, displaying the actual path and ignoring symbolic links.
Can I redirect the output of the ‘pwd’ command to a file?
Yes, you can redirect the output of the ‘pwd’ command to a file by using the following command:
pwd > filename.txt
This will write the absolute path of the current working directory to the
specified file
How can I store the current working directory in a variable for use in a
Linux script?
You can store the current working directory in a variable in a Linux script by
using the following syntax:
current_directory=$(pwd)
echo "The current working directory is: $current_directory"
This captures the output of ‘pwd’ in the variable ‘current_directory’ for later
use in your script.
What does the $PWD environment variable hold?
The $PWD environment variable in Linux holds the symbolic path of the current working directory. It provides a dynamic way to access and utilize the current directory path in scripts or commands. The value of $PWD is equivalent to the output of ‘pwd -L’.
Conclusion
In this article we discussed the ‘pwd’ command in Linux, which helps you find where you are in your computer’s folders, in other words, how to print the current working directory. It can show you the real folder path (‘pwd -P’) or the symbolic one (‘pwd -L’). The $PWD variable holds the same value as ‘pwd -L’ and is handy for scripts. Remember, ‘/bin/pwd’ shows the actual path. The FAQs answered common questions, like how to use ‘pwd’ or save a folder path in a script.
What is Threat?
A cyber threat is a malicious act that seeks to steal or damage data or
discompose the digital network or system. Threats can also be defined as the
possibility of a successful cyber attack to get access to the sensitive data of a
system unethically. Examples of threats include computer viruses, Denial of
Service (DoS) attacks, data breaches, and even sometimes dishonest
employees.
Types of Threat
Threats could be of three types, which are as follows:
What is Vulnerability?
In cybersecurity, a vulnerability is a flaw in a system’s design, security procedures, internal controls, etc., that can be exploited by cybercriminals. In some very rare cases, cyber vulnerabilities are created as a result of cyberattacks rather than network misconfigurations. A vulnerability can also be introduced if an employee downloads a virus or falls for a social engineering attack.
Types of Vulnerability
Vulnerabilities could be of many types, based on different criteria, some of
them are:
Types of Risks
There are two types of cyber risks, which are as follows:
1. External- External cyber risks are those which come from outside an
organization, such as cyberattacks, phishing, ransomware, DDoS attacks, etc.
2. Internal- Internal cyber risks come from insiders. These insiders could have malicious intent or may simply not be properly trained.
Threats | Vulnerabilities | Risks
Generally, can't be controlled | Can be controlled | Can be controlled
Can be detected by anti-virus software and threat detection logs | Can be detected by penetration testing hardware and many vulnerability scanners | Can be detected by identifying mysterious emails, suspicious pop-ups, observing unusual password activities, a slower than normal network, etc.
Conclusion
Despite having different meanings, the terms threat, vulnerability, and risk are often used together. Threats are the possibility of something negative happening, vulnerabilities are flaws that can be used against you, and risks are the possible outcomes of these exploits. Understanding the difference between them helps us predict risk better, reduce cyber threats, improve system security, and protect users' sensitive private data.
Table of Content
● What are Security Testing Tools?
● Security Testing Tools
● Advantages of Security Testing Tools
● Disadvantages of Security Testing Tools
● Importance of Security Testing Tools
● Comparison Criteria of Security Testing Tools
● Security Testing Tools Key Features
● Conclusion
● Frequently Asked Questions on Security Testing Tools
1. Sqlmap
Pros | Cons
Highly automated | Requires deep understanding of SQL injection
Wide database support | Limited reporting capabilities
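As a hedged illustration of how sqlmap is typically invoked (the URL below is purely hypothetical and must only be tested with permission):
sqlmap -u "http://testsite.example/item.php?id=1" --batch --dbs
Here -u supplies the target URL with an injectable parameter, --batch accepts the default answers, and --dbs asks sqlmap to enumerate the back-end databases.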
2. Burp Suite
Burp Suite is a widely used web application security testing tool. It provides
penetration testers and security professionals with a range of features like
web vulnerability scanning, penetration testing automation, and more.
4. OWASP ZAP
6. SonarQube
Pros | Cons
Extensive vulnerability database | Limited plugin support
ZAP is one of the world’s most popular free security tools and is actively
maintained by a dedicated international team of volunteers.
Pros | Cons
Great for both beginners and professionals | Can have performance issues with large applications
9. Acunetix Ltd.
Pros | Cons
Comprehensive scanning features | High license cost
10. Metasploit
Pros | Cons
Excellent community support | Not beginner-friendly
Conclusion
In conclusion, incorporating security testing tools into your software testing
strategy is vital for mitigating risks and protecting your applications from
security breaches. From open-source tools like OWASP ZAP to
enterprise-grade solutions like Acunetix, each tool brings unique advantages
suited for different testing needs.
By using these tools effectively, you can ensure that your applications are not
only functional but also secure from potential threats, providing confidence to
both users and stakeholders.
Using the results of scans and pen tests, vulnerability assessments are done
to evaluate potential threats. As part of an assessment, information about
identified vulnerabilities can be fed into a threat intelligence platform and
scored based on potential impact and exploitability. For example, a missing
patch that could enable attackers to do remote code execution in a system
would likely be deemed a high risk.
Security teams then prioritize and remediate the detected issues through
various actions, depending on the nature of the vulnerabilities. In the case of
the missing patch, an organization's security team generates a remediation
workflow ticket for the IT operations staff that's responsible for the affected
systems. After IT ops installs the patch, the security team commonly runs a
scan to confirm that the vulnerability was patched properly.
Parameter | Vulnerability assessments | Penetration tests
Scope | Various aspects of the system are covered | Target specific vulnerabilities and attack vectors
Frequency | Conducted regularly as part of an ongoing strategy | Less frequent and is performed when needed
Conclusion
This article helps one to understand that vulnerability assessments play an important role in establishing the areas that can be exploited within your information systems. In this way you can avoid information leaks, solve problems of non-compliance with regulations, and in general improve protection. Integrating other security measures alongside assessments helps safeguard the organization against cyber threats.
History of Nessus
Nessus was originally launched as an open-source tool in 1998, and its enterprise edition became a commercial product in 2005. Developed in 1998 by Renaud Deraison as an open-source project, Nessus gained popularity for vulnerability scanning. After it was acquired by Tenable in 2005, it transitioned to a partially closed-source model, evolving with features like compliance scanning. Tenable later introduced "Nessus Essentials" and, in 2017, Tenable.io, a cloud platform leveraging Nessus. In 2023, Nessus remains a trusted tool for organizations globally, reflecting its commitment to adaptability and effectiveness in addressing cybersecurity challenges.
Why Nessus?
As we know, many organizations and individuals use the Nessus tool for vulnerability assessments and for finding security weaknesses. There are multiple features that make it a good choice for organizations and individuals.
The Base Metrics are the core components used to determine how severe a
security vulnerability is. They focus on the vulnerability’s characteristics,
regardless of whether it has been exploited or mitigated. These metrics
include Exploitability, Scope, and Impact.
Scope: This metric assesses whether the vulnerability can affect other
components beyond the initial target. The score will be higher if the
vulnerability can propagate, such as compromising an entire system through a
single application flaw.
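To make this concrete, a vulnerability that is exploitable over the network, requires no privileges or user interaction, does not change scope, and has high impact on confidentiality, integrity, and availability is written as the following CVSS v3.1 vector, which yields a Base Score of 9.8 (Critical):
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Changing the Scope metric from S:U to S:C in the same vector raises the score to 10.0, which reflects exactly the propagation effect described above.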
The main difference between CVSS and CVE lies in their roles. CVE (Common Vulnerabilities and Exposures) gives unique identifiers to specific security vulnerabilities, making them easier to track. CVSS (Common Vulnerability Scoring System) provides a score that shows how severe each CVE is. For example, the Heartbleed vulnerability (CVE-2014-0160) has a CVSS score of 7.5, indicating high severity.
CVSS Limitations
● Limited Context: CVSS scores don’t account for the specific risks to
your organization. They tell you if a vulnerability is dangerous, but not if
it’s dangerous to you.
○ Example: Suppose two organizations—a financial institution and a
small retail store—face the same vulnerability. CVSS might rate it
as severe, but for the retailer, the risk might be minimal due to
fewer sensitive assets, whereas for the financial institution, it
could be critical due to the high value of their data.
● Subjectivity: CVSS scores can vary depending on the context, leading
to inconsistencies.
○ Example: A vulnerability in a widely used software might receive a
high CVSS score based on its potential impact. However, the risk
might be lower if a company has strong security operation
controls. Yet, another organization with weaker controls might find
the same vulnerability far more threatening, leading to different
assessments.
● Limited Scope: CVSS doesn’t fully consider the importance of specific
assets or existing controls.
○ Example: CVSS might score a vulnerability in an out-of-date
software as low because it’s not internet-facing. However, if that
software version is critical to a company’s operations, the low
score underestimates the risk, missing the asset’s importance.
● Complexity: The system requires a deep understanding of scoring
factors. Understanding how to calculate and interpret CVSS scores
requires familiarity with several factors, such as attack vectors,
complexity, and impact.
○ Example: This complexity can lead to misinterpretations or misuse
of scores for organizations without dedicated security expertise.
● Potential for Oversights: Relying solely on CVSS scores can lead to
missed opportunities to address the most pressing threats.
○ Example: If an organization relies solely on CVSS scores, it might
overlook threats that don’t score highly but are significant in their
specific context—like vulnerabilities in internal systems that an
insider could exploit.
● Organizations should adopt a risk-based vulnerability management
approach incorporating CVSS Base Scores and Temporal and
Environmental factors to address these limitations. This tailored
approach requires understanding the organization’s risks, including
business criticality, existing controls, and the current threat landscape.
About CWE
Common Weakness Enumeration (CWE™) is a
community-developed list of common software and hardware
weaknesses. A “weakness” is a condition in a software, firmware,
hardware, or service component that, under certain circumstances,
could contribute to the introduction of vulnerabilities. The CWE List
and associated classification taxonomy identify and describe
weaknesses in terms of CWEs.
CWE List
The CWE List is updated three to four times per year to add new
and update existing weakness information. Before being published
on the CWE website, weaknesses are developed in the CWE Content
Development Repository (CDR) on GitHub.com. The CDR provides
visibility into the CWE working queue and a platform for CWE
community partners to collaborate on content development.
Other views provide insight for a certain domain or use cases, such
as weaknesses introduced during design or implementation;
weaknesses with indirect security impacts; those in software written
in C, C++, Java, and PHP; in mobile applications; and many more.
Another useful feature is the external mappings of CWE content to
related resources including the annual CWE Top 25; OWASP Top
Ten; Seven Pernicious Kingdoms; Software Fault Pattern Clusters;
and SEI CERT Coding Standards for C, Java, and Perl.
CWE Community
The main purpose of the CVE system is to make it easy for groups to share information
about holes and risks in security and work together to fix these problems. According to
the CVE identifiers, it is easier to find vulnerabilities quickly and correctly, talk about
them, and take steps to lessen their effects.
Many security experts, researchers, and IT companies use the CVE system to keep
track of vulnerabilities and handle the risks that come with them. It is an important part
of the bigger ecosystem of cybersecurity tools and methods, which also includes patch
management, security alerts, and vulnerability management.
The Cybersecurity and Infrastructure Security Agency (CISA) of the U.S. Department of
Homeland Security pays the MITRE Corporation to keep the CVE List up to date. This
list is part of a bigger project called the CVE Program, whose goal is to find, describe,
and organize publicly known security holes.
Check to see if there is a patch or other way to fix the problem. It's best to highlight a
CVE that doesn't have a fix yet.
These things will help you figure out which CVEs to fix first. If the CVE is very bad, you
might even have to delay the release of software or make big changes to make it safer.
Dealing with CVEs often means using a variety of security tools, each playing a unique role. Here's a simplified outline of some key tools and what they do:
Tools and Approaches Available To Address and Fix CVEs
As you use different security tools to fix a CVE, it can feel like you're juggling a lot of
tasks. These tools can often help you see things more clearly and more broadly. When
they work together, it can be easier to handle them. Let's look at the available tools.
Each one is made to help with a different part of fixing a CVE. This will help you
understand what needs to be done without being too busy.
Cloud Security Tools: These are specialized tools for keeping your cloud info safe. They
include:
It's like a digital bouncer, checking everyone and everything that tries to access your
cloud info. It's good at stopping unauthorized access, malware, and other sneaky stuff,
but it won't help much with things that aren't in the cloud.
This tool is like a security advisor for your cloud setup. It helps find and fix risks in the
cloud, but it can't watch the actual data moving in and out in real-time.
Think of this as a protector for your cloud data, helping protect both cloud and
on-premises data. However, it's not great at dealing with app-level security or the core
cloud infrastructure.
This is a more advanced tool mixing CSPM and CWPP features. It works well for finding
problems in public clouds, but it has some flaws.
The Identity and Access Management (IAM) tool controls who can see what information based on the level of security risk. It's like a high-tech security guard. It quickly fixes problems with access, but it's not meant to find new assets or deal with a lot of reports.
This is like a command center that gathers security information from different sources,
looks for strange behavior, and sets off alarms or takes other actions. It can let you
know about problems, but it can't fix the weaknesses themselves.
This tool checks and evaluates every device that tries to join your network, like a
security guard at the front door. It works great for finding new devices, but not so well for
keeping track of links that are already there or fixing security holes.
Because each of these tools has its own pros and cons, using more than one of them
together often gives you a fuller picture and more power over your security.
Conclusion
As we wrap up our look at CVE in cybersecurity, it's important to remember that threats are still out there and always changing. The year 2023 has shown how important CVEs are of late, with serious effects on security.
Because of one such flaw, remote attackers could create admin accounts without being verified and get into Confluence servers, which was very bad for network security. Patches and strong security measures need to be put in place right away; these steps are essential to protect against such weaknesses. The fact that CISA, the FBI, and MS-ISAC worked together to raise awareness of these problems and give advice on how to fix them makes this urgency even clearer.
It is important to stay informed and take the initiative when putting security measures in place, as they are essential to keep you safe from new cyber dangers. With SafeAeon you can seek adequate assistance in securing your digital systems. Understanding and fixing these weaknesses is becoming more and more important for keeping digital systems and networks safe and secure.
Overview :
In popular media, the term “hacker” refers to someone who uses bugs and
exploits to get into someone else’s security, or who uses his technical
knowledge to behave productively or maliciously. Hackers are computer
specialists who are knowledgeable in both hardware and software. A hacker is
a computer enthusiast who is proficient in a programming language, as well
as security and network administration. He is the type of person who enjoys
learning new technologies and computer system intricacies in order to
improve his capabilities and talents.
2. Scanning –
Before launching an attack, the hacker wants to determine whether
the system is operational, which apps are in use, and what versions
of those programs are in use. Scanning entails looking for all open
and closed ports in order to locate a backdoor into the system. It
entails getting the target’s IP address, user accounts, and other
information. The information acquired during the reconnaissance
phase is utilized to inspect the network using tools such as dialers
and port scanners. Nmap is a popular, powerful, and freely available scanning tool.
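As a small, hedged example of this phase, a service and version scan of the Nmap project's own test host (which explicitly permits scanning) might look like:
nmap -sV -p 1-1000 scanme.nmap.org
Here -sV probes open ports to identify the service and version running on them, and -p 1-1000 limits the scan to the first thousand TCP ports.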
3. Gaining Control –
The information obtained in the previous two phases is utilized to
enter and take control of the target system over the network or
physically in this phase of the hacking method. This stage is often
referred to as “Owning the System.”
4. Maintaining Access –
After acquiring access to the system in the previous stage, the
hacker keeps the access for future attacks and makes changes to
the system so that no other security personnel or hacker can acquire
access to the compromised system. The attacked system is referred
to as the “Zombie System” in this case.
5. Log Clearing –
It is the method of erasing any remaining log files or other sorts of
evidence on the hacked system that could lead to the hacker’s
capture. Penetration testing is one of the instruments in ethical
hacking approaches that can be used to catch a hacker.
What is Enumeration?
Enumeration is the process of actively scanning a target system, network, or application and collecting information about it in the process. This step is critical in the reconnaissance phase of ethical hacking or penetration testing, where the aim is to find out some of the weaknesses within the target. Enumeration involves querying the system to get information such as usernames, machine names, shares, services, and other assets. The information collected during the enumeration phase can be used by an attacker to understand the structure and security of the targeted system and to decide what comes next.
Types Of Enumeration
In this section, we will be discussing the various types of Enumerations.
<host name> <03> UNIQUE
<host name> <20> UNIQUE
nbtstat [-a RemoteName] [-A IPAddress] [-c] [-n] [-r] [-R] [-RR]
[-s] [-S] [Interval]
Parameters
-a RemoteName : Displays the NetBIOS name table of the remote computer specified by its NetBIOS name
-A IPAddress : Displays the NetBIOS name table of the remote computer specified by its IP address
-c : Displays the contents of the local NetBIOS name cache
-n : Displays the NetBIOS names registered locally
-r : Displays counts of NetBIOS names resolved by broadcast and via WINS
-RR : Releases and then refreshes NetBIOS names registered with the WINS server
-s : Displays the NetBIOS sessions table, converting destination IP addresses to names where possible
-S : Displays the NetBIOS sessions table, listing remote computers by IP address only
Interval : Redisplays the selected statistics every Interval seconds
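For instance, to dump the NetBIOS name table of a remote machine by IP address (192.0.2.10 is a documentation-range placeholder), one would run the following from a Windows command prompt:
nbtstat -A 192.0.2.10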
2. SNMP(Simple Network Management Protocol) Enumeration:
Given below is the communication between the SNMP agent and manager:
● SNMP enumeration tools are used to scan a single IP address or a range of IP addresses of SNMP-enabled network devices in order to monitor, diagnose, and investigate security threats. Examples of such tools include NetScanTools Pro, SoftPerfect Network Scanner, SNMP Informant, and so forth.
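As an illustrative sketch (placeholder IP address, and assuming the device still uses the default 'public' read community), the Net-SNMP snmpwalk utility can pull the system group of the MIB:
snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.2.1.1
The OID 1.3.6.1.2.1.1 is the MIB-2 system group; a successful walk returns details such as the device description, uptime, contact, and location, which is exactly the kind of information an attacker looks for.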
3. LDAP Enumeration:
● Lightweight Directory Access Protocol (LDAP) is an Internet protocol for accessing distributed directory services.
● Directory services may provide any organized set of records, often in a hierarchical and logical structure, for example, a corporate email directory.
● A client starts an LDAP session by connecting to a Directory System Agent (DSA) on TCP port 389 and then sends an operation request to the DSA.
● Data is transferred between the client and the server using Basic Encoding Rules (BER).
● An attacker queries the LDAP service to gather information such as valid usernames, addresses, department details, and so on, which can be further used to perform attacks.
● There are numerous LDAP enumeration tools that access the directory listings within Active Directory or other directory services. Using these tools, attackers can enumerate information such as valid usernames, addresses, department details, and so forth from various LDAP servers.
● Examples of these kinds of tools include LDAP Admin Tool, Active Directory Explorer, LDAP Admin, etc.
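A hedged example using the OpenLDAP ldapsearch client (the server URI and base DN below are placeholders) that lists names and mail addresses of person entries:
ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(objectClass=person)" cn mail
The -x flag selects simple authentication (here an anonymous bind), -H gives the server URI, and -b sets the search base.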
4. NTP Enumeration:
5. SMTP Enumeration:
● Mail systems commonly use SMTP along with POP3 and IMAP, which enables users to save messages in the server mailbox and download them from the server when needed.
● SMTP uses Mail Exchange (MX) servers to direct mail via DNS. It runs on TCP port 25.
● SMTP provides 3 built-in commands: VRFY, EXPN, and RCPT TO.
● These servers respond differently to the commands for valid and invalid users, from which we can determine valid users on SMTP servers.
● Hackers can connect directly to SMTP via a telnet prompt and collect a list of valid users on the server.
● Hackers can perform SMTP enumeration using command-line utilities such as telnet, netcat, etc., or by using tools such as Metasploit, Nmap, NetScanTools Pro, etc.
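As a minimal sketch (mail.example.com is a placeholder host), the VRFY command can be issued through netcat; a 250/252 reply suggests the user exists, while a 550 reply indicates it does not:
printf 'HELO test.example\r\nVRFY root\r\nQUIT\r\n' | nc mail.example.com 25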
6. DNS Enumeration using Zone Transfer:
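A typical zone transfer attempt with dig looks like the line below (domain and name server are placeholders); most correctly configured servers will refuse the transfer, while a successful one dumps every record in the zone:
dig axfr example.com @ns1.example.com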
7. IPsec Enumeration:
9. RPC Enumeration:
2. SNMP Enumeration:
3. LDAP Enumeration:
4. NTP Enumeration:
5. SMTP Enumeration:
7. IPsec Enumeration:
● This attack can be suppressed by implementing SIPS (SIP over TLS) and authenticating SIP requests and responses (which can include integrity protection).
● The use of SIPS and the authentication of responses can suppress many related attacks, including eavesdropping and message or user impersonation.
● The use of digest authentication combined with the use of TLS between SIP phones and SIP proxies can provide a channel through which users can securely authenticate within their SIP domain.
● Voicemail messages can be converted to text files and parsed by ordinary spam filters. This can only shield users from SPIT (spam over internet telephony) voicemails.
9. RPC Enumeration:
● Do not run the rexd, rusers, or rwalld RPC services, since they are of negligible use and give attackers both valuable information and direct access to your hosts.
● In high-security environments, do not offer any RPC services to the public Internet. Because of the complexity of these services, it is likely that zero-day exploit scripts will be available to attackers before patch information is released.
● To limit the danger of internal or trusted attacks against critical RPC services (for example, NFS components, including statd, lockd, and mountd), install the latest vendor security patches.
● Aggressively filter egress traffic, where possible, to ensure that even if an attack against an RPC service is successful, a connect-back shell cannot be spawned to the attacker.
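As a quick illustration (placeholder IP address), the rpcinfo utility queries the portmapper and lists the RPC programs a host exposes, which is why limiting these services matters:
rpcinfo -p 192.0.2.10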
Conclusion
Enumeration is the identification of targets and the gathering of valuable data about their security state, which is a significant step in a security evaluation. It is a useful resource for ethical hackers and security personnel to monitor for likely risks, but at the same time it is dangerous if employed by criminals. When enumeration is understood and used well, and efforts are made to prevent unauthorized access to such information, most systems cannot be easily compromised. Proper configuration management and security tests such as penetration testing should be performed frequently in order to secure such vital resources.
Is enumeration legal?
Enumeration is legal only when it is performed with the explicit permission of the owner of the target system, for example as part of an authorized penetration test. Performing enumeration against systems you do not have permission to test is illegal in most jurisdictions.
Table of Content
● What is an Advanced Persistent Threat ( APT)?
● Working of an Advanced Persistent Threat
● Characteristics of the Advanced Persistent Threat
● How to detect the Advanced Persistent Threat?
● How to be protected from Advanced Persistent threat?
● Some Famous Advanced Persistent Threat (APT) attacks
The attacker may have accessed the network, but there is a high chance of
getting detected. So in order to maintain access for a longer period of time,
the hacker tends to use some advanced methods, rewriting malicious script
and other sophisticated techniques.
What is Malware?
Malware is software that gets into the system without user consent to steal the
user’s private and confidential data, including bank details and passwords.
They also generate annoying pop-up ads and change system settings.
Malware includes computer viruses, worms, Trojan horses, ransomware,
spyware, and other malicious programs. Individuals and organizations need to
be aware of the different types of malware and take steps to protect their
systems, such as using antivirus software, keeping software and systems
up-to-date, and being cautious when opening email attachments or
downloading software from the internet.
Types of Malware
● Viruses – A Virus is a malicious executable code attached to another
executable file. The virus spreads when an infected file is passed
from system to system. Viruses can be harmless or they can modify
or delete data. Opening a file can trigger a virus. Once a program
virus is active, it will infect other programs on the computer.
● Worms – Worms replicate themselves on the system, attaching themselves to different files and looking for pathways between computers, such as a computer network that shares common file storage areas. Worms usually slow down networks. A virus needs a host program to run, but worms can run by themselves. After a worm affects a host, it is able to spread very quickly over the network.
● Trojan horse – A Trojan horse is malware that carries out malicious
operations under the appearance of a desired operation such as
playing an online game. A Trojan horse varies from a virus because
the Trojan binds itself to non-executable files, such as image files,
and audio files.
What is a Threat?
Threats are actions carried out primarily by hackers or attackers with
malicious intent, to steal data, cause damage, or interfere with computer
systems. A threat can be anything that can take advantage of a vulnerability
to breach security and negatively alter, erase, or harm objects. A threat is any
potential danger that can harm your systems, data, or operations. In
cybersecurity, threats include activities like hacking, malware attacks, or data
breaches that aim to exploit vulnerabilities.
Conclusion
In conclusion, information security is an important field that protects data and
systems against a wide range of risks such as viruses, worms, ransomware,
and more. Data, network, endpoint, cloud, application, identity, and physical
security measures must all be considered to provide effective security.
Organizations may protect their data’s confidentiality, integrity, and
availability by understanding and addressing these risks.
The world is digitally evolving day by day. With the introduction of Artificial Intelligence and Cloud Computing systems, tasks have become more automated and we have become heavily dependent on digital data. But it is also necessary to protect that information, since it is susceptible to malware attacks, and hence the concept of Cybersecurity arises.
Types of Viruses
There are different types of viruses:
● Boot sector virus: This virus affects the booting part of the computer.
Every time the computer boots the virus gets loaded and it infects
the floppy discs and other devices.
● Encrypted Virus: As the name suggests the program is in encrypted
format and hence it is difficult to detect. Before infecting, the virus is
decrypted so that it can execute itself.
● Email Virus: These types of viruses use emails as a medium to get
transferred. When the user clicks on the link or message, the virus
gets downloaded and it starts infecting the system.
● File Infector Virus: All of the computer’s executable files are
impacted by this virus. It can modify or delete the files.
● Polymorphic Virus: Polymorphic means many forms. This virus can
change into many forms and can infect it accordingly which makes it
very difficult to get detected.
If your computer is compromised, turn off the internet right away, use antivirus software to do a thorough system scan, and delete any compromised data. In extreme circumstances, think about reformatting your system or getting expert assistance.
What is a Rootkit?
The term rootkit is derived from the words "root" and "kit." The phrases "root,"
"admin," "superuser," and "system admin" all refer to a user account with
power of administration in an operating system. Meanwhile, "kit" refers to a
collection of software tools. So, a rootkit is a collection of tools that grants
someone the most powerful capabilities in a system. Let's briefly discuss this.
What is a Rootkit?
A rootkit is a harmful software tool or program that allows a threat actor to
take remote control of and access to a computer or other system. While there
are actual applications for this kind of software, such as remote end-user
support, the majority of rootkits create a backdoor on victims' computers so
that harmful programs, such as viruses, ransomware, keylogger programs, or
other malware, can be introduced or the system can be used as a platform for
additional network security attacks. Rootkits commonly try to stop antivirus
and endpoint antimalware software from detecting harmful software.
Rootkits are available for purchase on the dark web. They can be used as a
social engineering technique that deceives users into granting permission for
the rootkits to be placed on their systems, or they can be installed as part of
scams. Once installed, the rootkits typically grant remote attackers admin
rights to the system. A rootkit grants the remote actor access to and control
over nearly every feature of the operating system (OS) once it is installed.
While most antimalware programs can now search for and remove rootkits hidden within a system, older antivirus programs sometimes have difficulty identifying rootkits.
What Can a Rootkit Do?
Malicious software called a rootkit is created to covertly take over a computer
or network and get illegal access and control. To evade discovery, it has the
ability to change kernel functions, change system processes, and get around
security measures. Attackers may be able to monitor user activities, steal
confidential data, and run more malware with the help of rootkits. They are
especially difficult to find and eliminate as they have the ability to change
system settings in order to retain persistent access. Rootkits pose serious
security hazards because they threaten the integrity of operating systems and
applications by thoroughly embedding themselves into the system.
Rootkit Protection
● Antivirus and Anti-Malware Software: Use the most recent versions
of antivirus and anti-malware software to identify and get rid of
rootkits. Certain security tools include capabilities designed
specifically to identify rootkits.
● Regular System Updates: To fix security holes that rootkits may
exploit, make sure your operating system and apps are up to date.
● Behavior-Based Detection: Make use of software designed to keep
an eye on anomalous system activity, since this may point to the
existence of a rootkit.
● System Integrity Checks: To identify unauthorized modifications,
periodically confirm the accuracy of system files and settings.
● Least Privilege Principle: Limit user rights in accordance with the
least privilege principle to lessen the possible impact of a rootkit.
Types of Rootkits
Bootloader rootkit
When you switch on a computer, the bootloader loads the operating system. A
bootloader rootkit infiltrates this mechanism, infecting your machine with
malware before the operating system is ready for use. Bootloader rootkits are
less of a threat currently, because of security mechanisms such as Secure
Boot.
Firmware rootkit
Firmware is a sort of software that gives basic control over the hardware it is
designed for. Firmware can be found on a wide range of equipment, including
mobile phones and washing machines. A firmware rootkit is difficult to detect
because it hides in firmware, where most cybersecurity tools do not look for
malware.
Kernel Rootkits
The kernel of your operating system functions similarly to the nervous system.
It's a key layer that helps with essential tasks. A kernel rootkit can be
disastrous since it targets a critical component of your computer and grants a
threat actor significant control over the system.
Memory rootkit
Memory rootkits live in your computer's RAM and can slow down your system
while doing malicious functions. You can usually erase a memory rootkit by
restarting your computer, as this clears all processes from your machine's
memory.
Application rootkit
An application rootkit may replace your ordinary files with rootkit code,
granting the rootkit creator access to your machine each time you execute the
infected files. However, this sort of malware is easier to detect because files
containing rootkits can act abnormally. In addition, your security tools have a
better chance of detecting them.
What is Ransomware?
Ransomware is a form of malicious software that prevents computer users
from accessing their data by encrypting it. Cybercriminals use it to ransom
money from individuals or organizations whose data they have hacked, and
they hold the data hostage until the ransom is paid. If the victims do not pay the ransom within the specified time frame, the data may be leaked to the public or permanently damaged. One of the most serious issues that businesses face is ransomware.
4. Petya (2016 and 2017): Petya was unique because, contrary to what typical ransomware does, it did not encrypt individual files but instead encrypted the hard drive's master file table, locking the entire disk. Its variant, NotPetya, was even more devastating, and much of it is assumed to be state sponsored.
1. Unusual File Activity: New extensions added at the end of file names, a large number of files that did not exist before, or files that are locked and encrypted are signs of ransomware at work.
3. User Training: Educate workplace employees about email phishing scams, suspicious links on websites, and the risks of downloading software from unknown sources.
What is Botnet?
A Botnet is a group of internet-connected devices, such as personal
computers (PCs), servers, mobile devices, and Internet of Things (IoT)
devices, that have been infected and controlled by a common kind of
malware, typically without the owner's knowledge. Each machine controlled by
the bot-herder is referred to as a "bot." From a central point, the attacking
party may instruct every computer on its botnet to carry out a coordinated
illegal operation.
What is a Botnet?
A botnet is a network of hijacked computer devices that are used to conduct
various crimes and cyberattacks. Botnet assembly is often the infiltration step
of a multi-layer strategy. Bots are used to automate large-scale attacks
including data theft, server crashes, and virus spread. To preserve their ability to take advantage of the botnet, hackers usually take every precaution to make sure the victims are unaware of the infection. Botnets create several threats to an organization's cybersecurity: if an organization's systems are infected with malware, they can be recruited into a botnet and used to launch automated attacks on other systems.
In a computer, you are going to find two kinds of malicious elements that can
tamper with your computer data, disrupt, damage, or gain unauthorized
access to computer systems.
These two factors are known as the Worms and Viruses. These elements can
harm your computer significantly. However, there are many differences
present in their operation purposes.
Basis of Comparison | Worms | Viruses
Harmful | It is less harmful as compared. | It is more harmful.
Conclusion
Worms and viruses are both threats to computer systems. Between them, some can harm your computer severely, while others cause comparatively minor damage. Knowing the difference between them helps you figure out which malicious element has harmed your device.
Worms and viruses differ in their need for a host. A worm doesn't need to attach to any host to infect a system, whereas a virus needs the help of a host to complete the process.
Between worms and viruses, worms can spread faster. Since worms don't need any help from a host, they can spread more easily than viruses.
Phishing Attack
Phishing is a type of cybersecurity attack that attempts to obtain sensitive data such as usernames, passwords, and more. It attacks the user through mail, text, or direct messages. The attachment sent by the attacker is opened by the user because the user thinks the email, text, or message came from a trusted source. It is a type of Social Engineering Attack. For example, the user may receive a message claiming they have won a lottery. When the user clicks on the attachment, the malicious code activates and can access sensitive information. Or, if the user clicks on the link sent in the attachment, they may be redirected to a different website that asks for the login credentials of their bank.
Types of Phishing Attack :
1. Spear Phishing –
This attack is used to target a specific organization or individual for unauthorized access. These attacks are not initiated by a random hacker; they are initiated by someone who seeks information for financial gain or other important information. Just like the phishing attack, spear-phishing appears to come from a trusted source. This type of attack is highly successful and is considered one of the most effective methods, as both phishing and spear-phishing are online attacks on users.
2. Clone Phishing –
This attack is actually based on copying the email messages that
were sent from a trusted source. Now the hackers alter the
information by adding a link that redirects the user to a malicious or
fake website. Now, this is sent to a large number of users and the
person who initiated it watches who clicks on the attachment that
was sent as a mail. This spreads through the contacts of the user
who has clicked on the attachment.
3. Catphishing –
It is a type of social engineering attack that plays with the emotions
of a person and exploits them to gain money and information. They
target them through dating sites. It is a type of engineering threat.
4. Voice Phishing –
Some attacks require directing the user to fake websites, but some do not require a fake website at all. This type of attack is sometimes referred to as vishing. Someone using the vishing method employs modern caller ID spoofing to convince the victim that the call is from a trusted source. They also use IVR systems to make it difficult for legal authorities to trace, block, or monitor the calls. Vishing is used to steal credit card numbers or other confidential data from the user. This type of phishing can cause significant harm.
5. SMS Phishing –
These attacks are used to make the user reveal account information. This attack is similar to the phishing attack used by cybercriminals to steal credit card details or sensitive information by making it look like the message came from a trusted organization. Cybercriminals use text messages to get personal information by trying to redirect users to a fake website. This fake website looks like an original website.
The WAP gateway translates this WAP request into a conventional HTTP URL request and sends it over the internet. The request reaches a specified web server, which processes the request just as it would process any other request and sends the response back to the mobile device through the WAP gateway as a WML file, which can be viewed in the micro-browser.
Internet access was only available from your computer until the release of the first WAP devices. With WAP, you can now use your mobile phone to access the Internet and interact with other people. Global communication and data sharing are therefore greatly expanded.
There is a WAP browser available as well, just like your personal internet
browser. Micro WAP Browser is the name of the browser used to access
websites using a WAP device. What makes it unique is that it uses less
hardware, memory, and CPU resources and presents the data in WML, a
constrained mark-up language.
● Select Start > Control Panel > System and Security > Windows Firewall.
...
● Select Turn Windows Firewall on or off. ...
● Select Turn off Windows Firewall (not recommended) for both Home or
work (private) network location settings and Public network location
settings, and then click OK.
Figure 9: Disabling the Windows firewall
What is Data Leakage?
In the realm of data science and machine learning, "data leakage" is a term
that denotes a critical problem that can severely impact the performance and
credibility of predictive models. Despite its significance, data leakage is often
misunderstood or overlooked, leading to erroneous conclusions and
unreliable outcomes.
This article delves into what data leakage is, explores its causes and
consequences, and guides how to prevent it.
Table of Content
● What is Data Leakage?
● Types of Data Leakage
● Causes of Data Leakage
● Consequences of Data Leakage
● How to Detect Data Leakage ?
● How to prevent Data Leakage?
Malicious Insiders
● Description: The risk that sensitive data or systems are exposed due
to physical vulnerabilities. This could happen when physical
safeguards like locks, security cameras, or access controls fail,
allowing unauthorized individuals to access critical assets.
● Examples:
○ Unauthorized access to a data center or server room.
○ Loss or theft of hardware devices containing
sensitive data, such as laptops, USB drives, or mobile
phones.
○ Physical tampering with systems or network
hardware to gain access or compromise security.
Electronic Communication
Accidental Leakage
● Description: Occurs when sensitive data is unintentionally exposed
or shared due to human error or system misconfigurations. Although
the intent isn’t malicious, accidental leakage can lead to severe data
breaches.
● Examples:
○ Misplacing confidential documents or sending
sensitive emails to the wrong recipient.
○ Sharing internal files or data publicly without
realizing it.
○ Accidentally uploading sensitive data to unsecured
cloud storage or shared drives.
Data loss refers to the unwanted removal of sensitive information, either due to an information system error or theft by cybercriminals. Data leaks are unauthorized exposures of sensitive information through vulnerabilities on the digital landscape. Data leaks are more complex to detect and remediate; they usually occur at the interface of critical systems, both internally and throughout the vendor network.
In cybersecurity, the terms data leak, data breach, and data loss are often incorrectly
used interchangeably. Though their definitions slightly overlap, these terms refer to very
different events.
Before Data Loss Prevention (DLP) and data leak remediation solutions can be
discussed, this confusion should be cleared up with the correct definitions.
Data breaches are, unfortunately, common occurrences that are also burdensome on
the economy. The global cost of data breaches in 2021 is expected to reach $6 trillion
annually. This amount has doubled from $3 trillion back in 2015.
Because the latter description overlaps with the data breach definition, the difference
between these terms is usually misunderstood.
The average downtime cost during a data loss incident is almost $4,500/minute.
When sensitive data is stolen from either a data breach or a ransomware attack and
published on the dark web, these events are also classified as data leaks.
Data loss prevention is not just a security best practice; because it concerns the Personally Identifiable Information (PII) of customers, it's enforced by different regulatory standards such as HIPAA, PCI-DSS, the Data Protection Act, GDPR, and even the new cybersecurity executive order signed by President Biden.
What's the Difference Between Data Leaks and Data
Breaches?
Data leaks are usually caused by organizations accidentally exposing sensitive data through security vulnerabilities. Such incidents are not initiated by cyberattackers.
Data breaches, on the other hand, are usually the result of a cybercriminal's persistence
to compromise sensitive resources.
Data leaks could develop into data breaches. If a data leak is discovered by cyber
criminals it could provide them with the necessary intelligence to execute a successful
data breach.
Another differentiator between these two events is the confidence of public exposure.
When sensitive data is stolen in a data breach, it's usually dumped on the dark web
which is clear evidence that it has reached the masses.
Data leaks, on the other hand, can remain exposed for a long period of time without
knowing who accessed it and whether it was disclosed to the public.
UpGuard offers customers the support of expert analysts that constantly monitor the
dark web for data leak instances, removing anxiety over possible sensitive data
exposure on criminal forums.
1. Overlooked Vulnerabilities
Data leaks most commonly occur accidentally, outside the monitoring boundaries of
typical information security programs.
These could be:
● Unpatched exposures
● Weak security policies
● Poorly configured firewalls
● Open-source vulnerabilities
● Poor vendor security postures as determined through a Third-Party Risk
Management program.
2. Human Elements
Humans are the weakest point of every cybersecurity architecture. With the correct
approach, any staff member can be tricked into leaking sensitive credentials to
cybercriminals.
This is usually achieved through phishing attacks, where a seemingly innocent email or
website infected with malicious links is presented to a victim. Upon interacting with
these links, staff members leak sensitive internal login information that could arm
cybercriminals for a devastating data breach.
Even if just an internal username is leaked to cybercriminals, this could still lead to a
data breach if supplemented with password-guessing techniques such as brute force attacks.
Data leaks are also caused by negligent behavior such as using weak passwords and
storing them in insecure locations like a post-it note, a mobile device, or a
public-facing online document.
To prevent staff from undermining security program investments, cyber threat awareness
training should be implemented in the workplace to teach staff how to recognize
common cybercriminal tactics.
Each of the following common attack methods links to a post that can be used for
cybercrime awareness training:
● Phishing attacks
● Social Engineering Attacks
● DDoS attacks
● Ransomware attacks
● Malware attacks
● Clickjacking attacks
Intentional data leaks caused by insider threats are difficult to detect. To do this with a
high confidence of accuracy, behavioral analytics software powered by machine
learning is required. Such solutions detect potentially malicious activity against an
established baseline of safe behavior.
A more cost-effective approach is to only share sensitive information with those who
absolutely require it. This security framework is known as Privileged Access
Management (PAM).
Monitoring solutions should, at the very least, track activity across sensitive networks
such as systems of records, data banks, privileged access accounts, and key
applications.
For the most comprehensive data leak security, this effort should be coupled with an
additional level of defense that detects and shuts down data leaks caused by digital
transformation.
A cause of data breaches that isn't well known is overlooked software backdoors.
Backdoor access permits software providers to bypass security measures to push
necessary patch updates to end-users. This also allows instant remote access for
troubleshooting.
Sometimes these backdoors are accidentally left open by software providers, which
provides cybercriminals a gateway to instantly access sensitive resources without
having to contend with security barriers.
● Data Leak detection - Detected data leaks could indicate possible flaws in DLP
strategies.
● Endpoint Security - This is especially important in light of the proliferation of
remote work. Sophisticated endpoint agents can detect and control information
transfer between end-users, external parties and internal networks. Consider an
Endpoint Detection and Response (EDR) solution.
● Data Encryption - Both in transit and at rest (see the sketch after this list).
● Privileged Access Management (PAM) - Only end-users that absolutely require
access to sensitive resources should be given access to them. Privileged Access
control efforts should also be secured to prevent Privilege Escalation.
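As a rough, hedged illustration of encryption at rest, the Python sketch below uses the third-party cryptography package (an assumption; the article does not name a specific tool). In practice the key itself must be kept in a secure location such as a secrets manager.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key separate from the encrypted data
f = Fernet(key)

token = f.encrypt(b"sensitive customer record")   # ciphertext that is safe to store at rest
print(f.decrypt(token))                           # b'sensitive customer record'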
Prevent Data Leaks, Data Breaches, and Data Loss with UpGuard
UpGuard helps prevent data leaks, data breaches, and data losses with its two core
products: BreachSight and Vendor Risk. Manage attack surfaces and third-party risk, and
gain stronger visibility into your company's biggest risks and vulnerabilities using
UpGuard's award-winning, industry-leading platform.
Some attackers use applications and scripts as brute force tools. These tools try out
numerous password combinations to bypass authentication processes. In other cases,
attackers try to get access to web applications by scanning for the correct session ID.
Attacker motivation may include stealing data, infecting sites with malware, or
disrupting a service.
While some attackers still perform brute force attacks manually, today practically all
brute force attacks are performed by bots. Attackers have lists of commonly used
credentials, or real user credentials obtained through security breaches or from the
dark web. Bots systematically attack websites, try these lists of credentials, and notify
the attacker when they gain access.
● Never use information that can be found online (like names of family
members).
● Have as many characters as possible.
● Combine letters, numbers, and symbols.
● Avoid common patterns.
● Be different for each user account.
● Change your password periodically.
● Use strong and long passwords.
● Use multi-factor authentication.
(A password-generation sketch follows this list.)
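As a small, hedged illustration of the "long, random, unique" guidance above, the sketch below uses Python's standard secrets module to generate a strong password; the length and character set are arbitrary choices, not requirements from this text.

import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw from letters, digits, and punctuation using a cryptographically secure RNG.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())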
The header of a TCP segment can range from 20 to 60 bytes, of which up to 40 bytes are
for options. If there are no options, the header is 20 bytes; otherwise it can be at most
60 bytes. Header fields:
● Source Port Address: A 16-bit field that holds the port address of
the application that is sending the data segment.
● Destination Port Address: A 16-bit field that holds the port address
of the application in the host that is receiving the data segment.
● Sequence Number: A 32-bit field that holds the sequence number, i.e.,
the byte number of the first byte sent in that particular segment. It is
used to reassemble the message at the receiving end when segments
are received out of order.
● Acknowledgement Number: A 32-bit field that holds the
acknowledgement number, i.e., the byte number that the receiver
expects to receive next. It is an acknowledgement that the previous
bytes were received successfully.
● Header Length (HLEN): This is a 4-bit field that indicates the length
of the TCP header as a number of 4-byte words. If the header is
20 bytes (the minimum TCP header length), this field holds 5
(because 5 x 4 = 20); at the maximum length of 60 bytes it holds 15
(because 15 x 4 = 60). Hence, the value of this field is always
between 5 and 15.
● Control flags: These are 6 1-bit control bits that control connection
establishment, connection termination, connection abortion, flow
control, mode of transfer etc. Their function is:
○ URG: Urgent pointer is valid
○ ACK: Acknowledgement number is valid (used in
case of cumulative acknowledgement)
○ PSH: Request for push
○ RST: Reset the connection
○ SYN: Synchronize sequence numbers
○ FIN: Terminate the connection
● Window size: This field tells the window size of the sending TCP in
bytes.
● Checksum: This field holds the checksum for error control. It is
mandatory in TCP, as opposed to UDP.
● Urgent pointer: This field (valid only if the URG control flag is set)
points to urgent data that needs to reach the receiving process as
early as possible. The value of this field is added to the sequence
number to get the byte number of the last urgent byte.
Conclusion
The TCP 3-Way Handshake is a critical mechanism for establishing a reliable
connection between a client and a server over a TCP/IP network. It consists of
three steps: the client initiates the connection by sending a SYN packet, the
server responds with a SYN-ACK message to acknowledge the client's request and
synchronize sequence numbers, and the client sends an ACK packet to complete the
connection. This handshake ensures that both sides are in sync and prepared for
dependable data transmission, making it an essential mechanism for stable
communication in TCP/IP networks.
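To make the three steps concrete, here is a minimal, hedged sketch in Python: calling connect() on a TCP socket causes the operating system to perform the SYN, SYN-ACK, ACK exchange described above before any application data is sent. The host and port are placeholders, not values taken from this text.

import socket

# Creating a TCP socket; the kernel performs the 3-way handshake during connect().
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    # Step 1: client sends SYN; Step 2: server replies SYN-ACK; Step 3: client sends ACK.
    s.connect(("example.com", 80))   # placeholder host and port
    # The connection is now established, so application data can flow.
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(s.recv(200))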
Frequently Asked Questions on TCP 3-Way Handshake
Process – FAQs
What is the purpose of the SYN flag in the TCP three-way handshake?
The SYN (Synchronize Sequence Number) flag is used in the initial step of
the handshake. It informs the server that the client wants to establish a
connection and specifies the sequence number for subsequent segments.
Penetration testing, or pen testing, is like hiring a friendly hacker to find and fix
security weaknesses in your computer systems before real attackers do.
Penetration Testing is a crucial cybersecurity practice aimed at identifying and
addressing vulnerabilities within an organization's systems and networks. If
you're curious about how companies keep their digital information safe from
hackers, you've come to the right place. Penetration testing, often called "pen
testing" or "ethical hacking," is a method used to find weaknesses in a
computer system, network, or web application.
By simulating real-world cyberattacks, pen testing helps organizations
uncover security weaknesses before malicious actors can exploit them. This
proactive approach not only enhances the overall security posture but also
ensures compliance with industry regulations and standards, safeguarding
sensitive data and maintaining robust cybersecurity defenses.
The goal is to discover these vulnerabilities before the bad guys do, so they
can be fixed to prevent any unauthorized access or data breaches. This
process is essential for protecting sensitive data and ensuring a secure online
environment.
In this article, we will explore the different types of penetration testing,
including white box, black box, and gray box testing, and highlight their
importance.
Table of Content
● What is Penetration Testing?
● Types of Penetration Testing
● 1. Black Box Testing
● 2. White Box Testing
● 3. Gray Box Testing
● Stages of Pen Testing
● Pen testing is divided into 6 of the following stages:
● How to perform Penetration Testing?
● Significance of Penetration Testing
● 1. Risk Mitigation
● 2. Regulatory Compliance
● 3. Enhanced Incident Response
● Challenges in Penetration Testing
● 1. Scope Limitations
● 2. False Positives and Negatives
● 3. Ethical Dilemmas
● Penetration Testing: Evolving Trends
● 1. Automated Testing
● 2. Cloud Security Testing
● 3. Continuous Testing
White Box Testing
Advantages:
● Identifies a wide range of vulnerabilities
● Provides a detailed understanding of the system
● Effective in finding complex vulnerabilities
Disadvantages:
● Time-consuming and resource-intensive
● Requires extensive knowledge and expertise
Black Box Testing
Advantages:
● Mimics real-world attack conditions
● Quick and cost-effective
● Useful for assessing external threats
Disadvantages:
● May miss internal vulnerabilities
● Less comprehensive compared to white box testing
● Relies heavily on the tester’s skill and experience
Gray Box Testing
Advantages:
● Provides a realistic assessment of both internal and external threats
● More efficient than white box testing
● Identifies a wider range of vulnerabilities compared to black box
testing
Disadvantages:
● May still miss some internal or deeply embedded vulnerabilities
● Requires coordination to determine the appropriate level of access
for the tester
Conclusion
Strong cybersecurity necessitates penetration testing, which allows
organizations to detect and address security flaws early on. In today's
ever-changing world of cyber threats, regular and comprehensive testing is
critical.
You’ve graduated from setting up that new wireless router and are ready for your next
adventure: setting up a firewall. Gulp. We know, seems really intimidating. But breathe
easy, because we’ve broken it down to 6 simple steps that should help you on your way
to network-security nirvana. And off we go…
Step 1: Secure your firewall
Administrative access to your firewall should be limited to only those you trust. To keep
out any would-be attackers, make sure your firewall is secured by at least one of the
following configuration actions:
Step 2: Architect your firewall zones and IP addresses
To best protect your network's assets, you should first identify them. Plan out a structure
where assets are grouped based on business and application need, sensitivity level, and
function, and are combined into networks (or zones). Don't take the easy way out
and make it all one flat network. Easy for you is easy for attackers!
All your servers that provide web-based services (e.g., email, VPN) should be organized
into a dedicated zone that limits inbound traffic from the internet—often called a
demilitarized zone, or DMZ. Alternatively, servers that are not accessed directly from the
internet should be placed in internal server zones. These zones usually include
database servers, workstations, and any point of sale (POS) or voice over internet
protocol (VoIP) devices.
If you are using IP version 4, internal IP addresses should be used for all your internal
networks. Network address translation (NAT) must be configured to allow internal
devices to communicate on the internet when necessary.
After you have designed your network zone structure and established the corresponding
IP address scheme, you are ready to create your firewall zones and assign them to your
firewall interfaces or sub-interfaces. As you build out your network infrastructure,
switches that support virtual LANs (VLANs) should be used to maintain level-2
separation between the networks.
Step 3: Configure access control lists (It’s your party, invite who you want.)
Once network zones are established and assigned to interfaces, you will start with
creating firewall rules called access control lists, or ACLs. ACLs determine which traffic
needs permission to flow into and out of each zone. ACLs are the building blocks of who
can talk to what and block the rest. Applied to each firewall interface or sub-interface,
your ACLs should be made as specific as possible to the exact source and/or destination
IP addresses and port numbers whenever possible. To filter out unapproved traffic,
create a “deny all” rule at the end of every ACL. Next, apply both inbound and outbound
ACLs to each interface. If possible, disable your firewall administration interfaces from
public access. Remember, be as detailed as possible in this phase; not only test that
your applications are working as intended, but also make sure to test what should
not be allowed. Make sure to look into the firewall's ability to control next-generation
level flows: can it block traffic based on web categories? Can you turn on advanced
scanning of files? Does it contain some level of IPS functionality? You paid for these
advanced features, so don't forget to take those "next steps".
Step 4: Configure your other firewall services and logging (Your non-vinyl
record collection.)
If desired, enable your firewall to act as a dynamic host configuration protocol (DHCP)
server, network time protocol (NTP) server, intrusion prevention system (IPS), etc.
Disable any services you don’t intend to use.
To fulfill PCI DSS (Payment Card Industry Data Security Standard) requirements,
configure your firewall to report to your logging server, and make sure that enough detail
is included to satisfy requirements 10.2 through 10.3 of the PCI DSS.
Step 5: Test your firewall configuration (Don’t worry, it’s an open-book test.)
First, verify that your firewall is blocking traffic that should be blocked according to your
ACL configurations. This should include both vulnerability scanning and penetration
testing. Be sure to keep a secure backup of your firewall configuration in case of any
failures. If everything checks out, your firewall is ready for production. Test, test, and
test again the process of reverting to a backed-up configuration. Before making any
changes, document and test your recovery procedure.
Step 6: Manage your firewall
Once your firewall is configured and running, you will need to maintain it so it functions
optimally. Be sure to update firmware, monitor logs, perform vulnerability scans, and
review your configuration rules every six months.
SSL encryption
Last Updated: 2024-01-04
The SSL protocol operates between the application layer and the TCP/IP layer. This
allows it to encrypt the data stream itself, which can then be transmitted securely, using
any of the application layer protocols.
Many different algorithms can be used for encrypting data, and for computing the
message authentication code. Some algorithms provide high levels of security but
require a large amount of computation for encryption and decryption. Other algorithms
are less secure but provide rapid encryption and decryption. The length of the key that
is used for encryption affects the level of security; the longer the key, the more secure
the data. SSL defines cipher suites to specify cryptographic algorithms that are used
during an SSL connection.
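As a hedged illustration of cipher suites in practice, the Python sketch below opens a TLS connection (the modern successor to SSL) and prints the cipher suite the two endpoints negotiated; the hostname is a placeholder, not one taken from this text.

import socket
import ssl

context = ssl.create_default_context()   # sensible default protocol versions and cipher suites

with socket.create_connection(("example.com", 443), timeout=5) as sock:   # placeholder host
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # cipher() returns (cipher_name, protocol_version, secret_bits) for the negotiated suite.
        print(tls.cipher())
        print(tls.version())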
Table of Content
● How to Secure a Live Server?
○ Method 1: IP tables
○ Method 2: IPV6
● How to Secure a Live Server – FAQs
Method 1: IP tables
If you find some issues in your configuration, you can use the following
command to flush the entire iptables ruleset and start over. With your iptables flushed,
your system is vulnerable to attacks, so make sure to secure it using an
alternative method.
sudo iptables -F
Inserting rules
Insert rules for the following purposes to secure the server.
● Insert a rule to allow loopback connections, so that connections to
localhost work.
● Insert a rule to allow incoming packets that belong to already
established connections.
● Insert rules to allow HTTP on port 80, HTTPS on port 443, and SSH on port 22.
Let's add a rule to allow established connections to continue, using the
command below; you can then check that the rule was added using the
same sudo iptables -L as before. To do this, enter the following commands in
the terminal.
sudo iptables -A INPUT -m conntrack --ctstate
ESTABLISHED,RELATED -j ACCEPT
sudo iptables -L
Default policy
Make sure the default INPUT policy is configured to accept incoming
connections; this ensures that you don't get locked out of the server.
Then add a rule at the end of the chain that drops incoming packets. Any
packet that doesn't match one of the rules above that drop rule is dropped,
which protects the server from unwanted connections.
Setting the default policy as ‘ACCEPT’
sudo iptables -P INPUT ACCEPT
Method 2: IPV6
The above rules are for IPv4; adding rules for IPv6 differs a bit in the
command statement. Adoption of IPv6 is still low compared to IPv4,
and it could be exploited if left open. Therefore, let's add a default policy to it
and make it permanent. You can follow the commands mentioned below,
sudo ip6tables -L
sudo ip6tables -P INPUT DROP
sudo invoke-rc.d iptables-persistent save
Commands for IPv6 differ from their IPv4 counterparts only in using the keyword ‘ip6tables’ instead of ‘iptables’.
Checking whether the server is up
We've allowed all important protocols to establish a connection to our servers.
But if you try to ping the server right now, the ping will be dropped because of
the rule we added last. Thus, we need to allow ICMP as well, and since we want
the drop rule to remain the last rule defined, the ICMP rule must be added above
the DROP rule. To achieve this, you can follow the commands below.
To get the line number to all the rules
sudo iptables -L --line-numbers
sudo iptables -I INPUT [Drop_rule_line_number] -p icmp
--icmp-type echo-request -j ACCEPT
This rule will be inserted at the given line number, and the DROP rule will be
shifted down by one position. The server can now be pinged again.
Conclusion
Effective server security is key to maintaining a safe and reliable live server
environment. By adhering to server security best practices and utilizing live
server protection techniques, you can significantly reduce the risk of
vulnerabilities and attacks. Regularly updating your secure server setup and
monitoring for potential threats will ensure that your server remains secure
and functional over time. Embrace these strategies to uphold the integrity and
security of your live server.
Protection of Servers
Last Updated : 13 Apr, 2023
Servers are the core of any high-performing facility. Servers are the key to
efficient and continuous operations. Servers are expensive. That’s why server
monitoring is critically important. Some methods of physical protection of
Servers are as follows:
● Hardware Monitoring: Hardware monitoring is found in large server
farms. A server farm is a facility that houses hundreds of servers for
organizations. Google has many server farms around the world to
provide optimal services. Even smaller companies are building local
server farms to house the growing number of servers needed to
conduct business. Hardware monitoring systems are used to monitor
the health of these systems and to minimize server and application
downtime. Modern hardware monitoring systems use USB and
network ports to transmit the condition of CPU temperature, power
supply status, fan speed and temperature, memory status, disk
space, and network card status. Hardware monitoring systems help
to monitor many systems from a single terminal.
● HVAC: HVAC systems are critical to the safety of people and
information systems in the organization’s facilities. When planning
modern IT offices, these frameworks play a very important role in the
overall security. HVAC systems control the ambient environment and
must be planned for and operated along with other data center
components. Almost all physical computer hardware devices
come with environmental requirements that include acceptable
temperature and humidity ranges. Environmental requirements
appear in a product specifications document or in a physical planning
guide. It is critical to maintain these environmental requirements
to prevent system failures and extend the life of IT systems.
Commercial HVAC systems and other building management systems
now connect to the Internet for remote monitoring and control.
Recent events have shown that such systems (often called “smart
systems”) also raise significant security concerns.
● Power: A ceaseless supply of electrical power is critical in today’s
massive server facilities. Some standards in building effective
electrical supply systems are:
○ Two or more feeds coming from two or more
electrical substations.
○ Server rooms should be on a different power supply
from the rest of the building.
○ Backup power systems are also required.
● Access Control: Physical access control is necessary to prevent
unauthorized access to server rooms and IT equipment. Access
control systems can include biometric readers, keypads, and security
cameras. Access to the server room should be limited to authorized
personnel only, and a log of access should be maintained.
● Fire Suppression: Fire suppression systems are essential to protect
the servers and other IT equipment from fire damage. Common fire
suppression systems used in server rooms include water-based
systems, gas-based systems, and foam-based systems. These
systems should be installed and maintained by certified professionals
to ensure their effectiveness.
● Cable Management: Proper cable management is crucial to maintain
a safe and organized server room. Cables should be organized and
labeled to avoid confusion and ensure easy maintenance. Cable
trays and cable channels can be used to organize and manage
cables effectively.
● Physical Security: The server room should have proper physical
security measures in place, such as reinforced doors, security
alarms, and security personnel. These measures should be taken to
prevent unauthorized access, theft, and vandalism.
● Environmental Monitoring: Environmental monitoring systems are
used to monitor the temperature, humidity, and other environmental
factors in the server room. These systems can alert IT personnel if
any environmental factor falls outside the acceptable range, allowing
them to take corrective action before any damage is done.
1. Phishing Attacks
2. Social Engineering
3. SQL Injections
4. Cloud Vulnerabilities
Cloud vulnerabilities are increasing and are one of the popular cybersecurity
threats. IBM reports confirm that cloud vulnerabilities have increased
150% in the past five years, and according to Gartner, cloud security is one of
the fastest-growing technology segments in recent years. Verizon’s DBIR found
that more than 90% of the 29,000 breaches analyzed in the report were
caused by web application breaches.
5. IoT Attacks
7. DDoS
A Distributed Denial of Service (DDoS) attack is another well-known attack that
is carried out to disrupt the normal traffic of a targeted server or network.
DDoS attacks are generally carried out using networks of Internet-connected
machines. These networks consist of computers and other devices that have
been infected with malware, allowing them to be controlled remotely by an
attacker.
8. Ransomware
Ransomware is a type of malware that locks and encrypts a victim’s data,
systems, or files, rendering them unusable until the attackers receive a ransom
payment. Between 2018 and 2020 the average ransom fee increased from
$5,000 to $200,000. Ransomware attacks also cost companies in the
form of income lost while the hackers hold system access for ransom; the
average length of system downtime after a ransomware attack is 21 days.
A notable third-party breach occurred at the beginning of 2021, when hackers
leaked personal data from more than 214 million Instagram, LinkedIn, and
Facebook accounts. Attackers get around security systems by hacking the less
protected networks belonging to third parties that have privileged access to the
attacker's primary target. In this case, the hackers were able to access the data
by breaching a third-party contractor known as SocialArks that had been
employed by the three companies and had privileged access to their networks.
Port Scanning is the name of the technique used to identify available ports
and services on hosts on a network. Security engineers sometimes use it to
scan computers for vulnerabilities, and hackers also use it to target victims. It
can be used to send connection requests to target computers and then track
ports. Network scanners do not actually harm computers; instead, they make
requests that are similar to those sent by human users who visit websites or
connect to other computers using applications like Remote Desktop Protocol
(RDP) and Telnet. Host discovery, which often precedes a port scan, is performed
by sending ICMP echo-request packets (Type 8); a live host answers with an
ICMP echo-reply (Type 0). The ports themselves are then probed, typically with
TCP or UDP packets such as TCP SYN probes, and the responses reveal which
ports are open.
Types of Ports:
● Open: The host replies and announces that it is listening and open
for queries. An undesired open port means that it is an attack path
for the network.
● Closed: The host responds but reports that no application is
listening on the port. Attackers may scan it again later in case it
opens. (A minimal connect-scan sketch follows this list.)
● Filtered: The host does not respond to a request. This could mean
that the packet was dropped due to congestion or a firewall.
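To make the open/closed distinction concrete, here is a minimal, hedged sketch of a TCP connect scan in Python; the target host and port range are placeholders, and such a scan should only be run against hosts you are authorized to test.

import socket

def scan(host: str, ports: range) -> None:
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex() returns 0 when the TCP connection succeeds (port open);
            # a non-zero error code usually means the port is closed or filtered.
            if s.connect_ex((host, port)) == 0:
                print(f"Port {port} is open")

scan("127.0.0.1", range(20, 1025))   # placeholder target and port range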
Common port scanning tools include:
● Nmap
● Angry IP Scan
● Netcat
● Zenmap
● Advanced Port Scanner
● MASSCAN
Here, we will discuss one such very harmful cyber attack: the port scanning
attack.
Prevention:
The preventive measures against a port scan attack are as follows:
● Secured Firewalls:
○ A firewall can be used to track the traffic of open
ports, including both incoming and outgoing traffic
from the network.
○ An open port is identified when the target port
responds with packets, which shows that the target
host is listening on that port.
● Strong Security Mechanisms:
○ Computer systems with strong security can protect
open ports from being exploited.
○ Security administrators should ensure that harmful
traffic is not allowed to reach open ports on their
computers.
4. Use firewalls.
Firewalls are another essential tool in defending networks against security threats. A
firewall can help prevent unauthorized access to a network by blocking incoming traffic
from untrusted sources. Additionally, firewalls can be configured to allow only certain
types of traffic, such as web traffic or email.
5. Monitor activity.
Finally, it’s important to monitor activity on the network. Tracking logs and other data
enables suspicious activity to be identified quickly, allowing security personnel to take
steps to investigate and mitigate potential threats.
In a web application, there are two things usually: the client and the server.
The third entity that remains unnoticed most of the time is the communication
channel. This channel can be a wired connection or a wireless connection.
There can be one or more servers in the way forwarding your request to the
destination server in the most efficient way possible. These are known as
Proxy servers.
This way the attacker is suitably situated between you and your bank’s server.
Every bit of sensitive data that you send to your server including your login
password, is visible to the attacker. ARP cache poisoning is one of the ways
to perform an MITM attack; other ways are –
● DNS spoofing.
● IP spoofing.
● Setting up a rogue Wi-Fi AP.
● SSL spoofing, etc.
The use of SSL/TLS can prevent these attacks from being successful. Since the
data is encrypted and only the legitimate endpoints have the key to decrypt it,
the attacker can do very little with the data even if he gets access to it.
(SSL is only useful if it is set up properly; there are ways to circumvent this
protection mechanism too, but they are very hard to carry out.) Still, an
attacker can do a lot of damage if the web application with which the user has
been interacting does not use something called a nonce. The attacker can
capture the encrypted requests for an entire session and then carefully resend
the request used for logging in. This way the attacker gets access to your
account without knowing your password. Using a nonce prevents such “replay
attacks”. A nonce is a unique number that is sent by the server to the client
before login. It is submitted with the username and password and is
invalidated after a single use.
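As a rough, hedged sketch of the nonce idea described above (not any particular framework's implementation), the Python snippet below issues a single-use nonce and rejects a second submission of the same value; check_credentials is a hypothetical placeholder.

import secrets

issued_nonces = set()   # in a real application this would live in server-side session storage

def issue_nonce() -> str:
    nonce = secrets.token_hex(16)
    issued_nonces.add(nonce)
    return nonce

def check_credentials(username: str, password: str) -> bool:
    return False   # placeholder; a real implementation would verify against stored hashes

def verify_login(nonce: str, username: str, password: str) -> bool:
    # A replayed request reuses an already-consumed nonce and is rejected.
    if nonce not in issued_nonces:
        return False
    issued_nonces.discard(nonce)   # invalidate the nonce after a single use
    return check_credentials(username, password)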
Conclusion
Man-in-the-Middle attacks pose a serious risk to online communication,
resulting in the theft of private data, financial loss, and damage to reputation.
To avoid MitM attacks, take precautions such as employing encryption,
checking SSL/TLS certificates, and staying away from insecure Wi-Fi
networks. You can lower your risk of becoming a victim of a Man-in-the-Middle
attack by remaining attentive and implementing best practices.
DoS
DoS stands for Denial of Service. It is a type of attack on a service that
disrupts its normal function and prevents other users from accessing it. The
most common target for a DoS attack is an online service such as a website,
though attacks can also be launched against networks, machines, or even a
single program.
Common categories of DoS attacks include protocol attacks and flooding attacks.
DoS Stands for Denial of service attack. This attack is meant to shut down a
machine or network, due to which users are unable to access it. DoS attacks
accomplish this by flooding the target with traffic or sending it information that
triggers a crash.
What is a DDoS attack?
DDoS Stands for Distributed Denial of service attack. In a DDoS attack, the
attacker tries to make a particular service unavailable by directing continuous
and huge traffic from multiple end systems.
There is another type of XSS called DOM based XSS and its instances are
either reflected or stored. DOM-based XSS arises when user-supplied data is
provided to DOM objects without proper sanitization. An example of code
vulnerable to XSS is below; notice the variables firstname and lastname:
<?php
$firstname = $_GET["firstname"];
$lastname = $_GET["lastname"];
if($firstname == "" or $lastname == "")
{
    echo "Please enter both a first name and a last name.";
}
else
{
    // User-supplied input is echoed back into the response without any sanitization.
    echo "Welcome " . $firstname . " " . $lastname;
}
?>
User-supplied input is directly added in the response without any sanity check.
An attacker can input something like –
?firstname=<script>alert(1)</script>&lastname=x
and it will be rendered as JavaScript. There are two aspects of XSS (and any
security issue) –
1. Developer: If you are a developer, the focus should be secure
development to avoid having any security holes in the product. You
do not need to dive very deep into the exploitation aspect; just use
tools and libraries while applying the best practices for secure
code development as prescribed by security researchers. Some
resources for developers are:
a) OWASP Encoding Project: a library written in Java that is developed
by the Open Web Application Security Project (OWASP). It is free,
open source, and easy to use.
b) The “X-XSS-Protection” header: this header instructs the browser
to activate the built-in XSS auditor to identify and block any XSS
attempts against the user.
c) The XSS Protection Cheat Sheet by OWASP: this resource lists rules
to be followed during development, with proper examples. The rules
cover a large variety of cases where a developer can miss something
that can lead to the website being vulnerable to XSS.
d) Content Security Policy: a stand-alone mitigation for XSS-like
problems; it instructs the browser about “safe” sources, apart from
which no script should be executed from any origin.
2. Security researchers: Security researchers, on the other hand, would
like similar resources to help them hunt down instances where the
developer became sloppy and left an entry point. Researchers can
make use of:
a) Cheat sheets – 1. the XSS filter evasion cheat sheet by OWASP,
2. the XSS cheat sheet by Rodolfo Assis, 3. the XSS cheat sheet by
Veracode.
b) Practice labs – 1. bWAPP, 2. DVWA (Damn Vulnerable Web
Application), 3. prompt.ml, 4. CTFs.
c) Reports – 1. HackerOne Hacktivity, 2. personal blogs of eminent
security researchers like Jason Haddix, Geekboy, Prakhar Prasad,
Dafydd Stuttard (PortSwigger), etc.
There are multiple ways by which a web application can protect itself from
Cross-Site Scripting issues. Some of them include,
1. Blacklist filtering.
2. Whitelist filtering.
3. Contextual Encoding.
4. Input Validation.
5. Content Security Policy.
1. Blacklist filtering
Blacklist filtering is an easy-to-implement technique that protects the website
from XSS issues only partially. It works based on a known, finite list of XSS
vectors. For example, most XSS vectors use event listener attributes such as
onerror, onmouseover, onkeypress, etc. Using this fact, user-supplied HTML can
be parsed and these event listener attributes removed. This will mitigate a finite
set of XSS vectors such as <img src=x onerror=alert()>.
For vectors like <a href=”javascript:alert()”>XSS</a>, one may remove the
javascript:, data:, and vbscript: schemes from user-supplied HTML.
Advantages:
1. These filters are easy to implement in a web application.
2. Almost zero risk of false positives of safe user content being filtered
by these filters.
Disadvantages:
However, this filtering can be easily bypassed, as XSS vectors are not finite and
a complete list of them cannot be maintained. Here is a list of some valid
bypasses of this filter; such filtering does not protect the website completely.
1. <a href=”jAvAscRipt:alert()”>XSS</a>
2. <a href=”jAvAs cRipt:alert()”>XSS</a>
3. <a href=”jAvAscRipt:prompt()”>XSS</a>
2. Whitelist Filtering
Whitelist filtering is the opposite of blacklist-based filtering. Instead of listing
unsafe attributes and sanitizing user HTML against that list, whitelist filtering
maintains a set of HTML tags and attributes that are known to be safe;
everything else is filtered out.
This reduces XSS possibilities to the maximum extent and leaves XSS open only
when there is a loophole in the filter itself that treats some unsafe entities as
safe. This filtering can be done on both the client and the server side. Whitelist
filtering is the most commonly used filter in modern web applications.
Advantages:
1. Reduces XSS possibilities to a very good extent.
2. Some whitelist filters, like the AntiSamy filter, rewrite user content with
safe rules. This results in HTML content being rewritten according to
strict standards of the HTML language.
Disadvantages:
These filters usually work by accepting unsafe or unsanitized HTML, parsing it,
constructing safe HTML, and sending that back to the user. This is
performance-intensive, and heavy usage of these filters may have a hidden
performance impact on a modern web application.
3. Contextual Encoding
The other common mitigation technique is to consider all user given data as
textual data and not HTML content, even if it is an HTML content. This can be
done performing HTML entity encoding on user data. Encoding
<h1>test</h1> may get converted to <pre><test> test </></pre> The
browser will then parse this correctly and render <h1>test</h1> as text
instead of rendering it as h1 HTML tag.
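As a minimal, hedged sketch of contextual (HTML entity) encoding, Python's standard html module can be used; templating engines typically perform this step automatically, so treat the snippet as an illustration rather than a complete defense.

import html

user_input = '<h1>test</h1><script>alert(1)</script>'
# html.escape converts <, >, & (and quotes) into HTML entities,
# so the browser renders the input as plain text instead of markup.
safe_output = html.escape(user_input, quote=True)
print(safe_output)   # &lt;h1&gt;test&lt;/h1&gt;&lt;script&gt;alert(1)&lt;/script&gt;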
Advantages:
If done correctly, contextual encoding eliminates XSS risk completely.
Disadvantages:
It treats all user data as unsafe. Thus, irrespective of the user data being safe
or unsafe, all HTML content will be encoded and will be rendered as plain text.
4. Input Validation
In the input validation technique, a regular expression is applied to every
request parameter, i.e., to user-generated content. The content is allowed only
if it matches a safe regular expression; otherwise, the request fails on the
server side with a 400 response code.
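A minimal, hedged sketch of this idea in Python follows; the pattern below is an illustrative phone-number rule invented for the example, not a universal one.

import re

# Illustrative rule: 10 to 15 digits, with an optional leading "+".
PHONE_PATTERN = re.compile(r"^\+?\d{10,15}$")

def validate_phone(value: str) -> bool:
    return PHONE_PATTERN.fullmatch(value) is not None

print(validate_phone("+14155550123"))              # True – allowed through
print(validate_phone("<script>alert(1)</script>")) # False – would be rejected with HTTP 400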
Advantages:
Input validation not only reduces XSS but protects almost all vulnerabilities
that may arise due to trusting user content.
Disadvantages:
1. It might be possible to mitigate an XSS in the phone number field by
having a numeric regular expression validation but for a name field, it
might not be possible as names can be in multiple languages and
can have non-ASCII characters in Greek or Latin alphabets.
2. Regular expression testing is performance intensive. All parameters
in all requests to a server must be matched against a regular
expression.
SSL stands for Secure Socket Layer while TLS stands for Transport Layer
Security. Both Secure Socket Layer and Transport Layer Security are the
protocols used to provide security between web browsers and web servers.
The main difference between Secure Socket Layer and Transport Layer
Security is that in SSL (Secure Socket Layer) a message digest is used to
create the master secret, and it provides the basic security services of
authentication and confidentiality, while in TLS (Transport Layer Security) a
pseudo-random function is used to create the master secret.
SSL vs TLS:
● SSL stands for Secure Socket Layer; TLS stands for Transport Layer Security.
● SSL (Secure Socket Layer) supports the Fortezza algorithm; TLS (Transport Layer Security) does not support the Fortezza algorithm.
● SSL's most recent version is 3.0, while TLS starts at version 1.0 (the successor to SSL 3.0).
● SSL is less secure compared to TLS; TLS (Transport Layer Security) provides higher security.
● SSL is less reliable and slower; TLS is highly reliable, upgraded, and provides lower latency.
Conclusion
While Secure Socket Layer (SSL) and Transport Layer Security (TLS) both
aim to secure communications over networks, TLS is the more modern
and secure protocol. TLS has replaced SSL due to its enhanced security
features and performance improvements. Although SSL is still commonly
referenced, it is advisable to use TLS for secure communications to benefit
from the latest advancements in cryptographic technology.
Here’s why 2FA is essential in today’s digital landscape and how it plays a key
role in safeguarding sensitive information.
1. Protects Against Unauthorized Access
With 2FA, even if a hacker gains access to your password, they would still
need a second form of verification—like a unique code sent to your phone or a
fingerprint scan—to access your account. This significantly reduces the risk of
unauthorized access. Two-factor authentication makes it much harder for
cybercriminals to break into accounts, adding an essential barrier beyond just
a password.
2. Enhances Password Security
Many users reuse passwords across multiple accounts or choose passwords
that are easy to remember, making them vulnerable to attacks. 2FA
compensates for weak passwords by requiring an additional step for login.
Even if a password is compromised, 2FA reduces the chance that it alone will
lead to a successful breach. This is especially crucial for sensitive accounts
like online banking, corporate logins, and personal email.
3. Reduces Phishing Attack Success
Phishing attacks—where attackers trick users into revealing their
passwords—are common in the cybersecurity world. However, 2FA can help
protect against phishing because the attacker would still need the second
factor to gain access, even if they have your password. This makes 2FA a
valuable tool in fighting against social engineering attacks, reducing the
overall risk of a data breach.
4. Complies with Security Regulations
Many industries, especially those dealing with financial data, healthcare
information, or corporate security, have compliance standards requiring
enhanced security measures like 2FA. For example, the Payment Card
Industry Data Security Standard (PCI DSS) and General Data Protection
Regulation (GDPR) recommend or require 2FA to protect sensitive data.
Implementing 2FA not only strengthens security but also helps organizations
meet these important regulatory requirements.
5. Builds Customer Trust
For businesses, 2FA can improve trust among customers by demonstrating a
commitment to safeguarding their data. With so many high-profile data
breaches in recent years, consumers are increasingly aware of digital security.
Offering 2FA as an option for account security helps show that a business
prioritizes customer privacy, building loyalty and reputation in a competitive
market.
6. Lowers Financial and Operational Costs from Data Breaches
Recovering from a data breach can be financially devastating, especially for
small businesses. Not only do data breaches lead to lost revenue and
damage to a brand’s reputation, but they can also incur hefty recovery costs.
Implementing 2FA is a cost-effective way to significantly reduce the likelihood
of a breach, potentially saving organizations thousands, if not millions, in
recovery expenses and legal fees.
7. Easy to Implement and Widely Accessible
Modern 2FA solutions are easier than ever to set up, with options like SMS
codes, authentication apps, biometric scans, and even hardware tokens.
Many services and platforms now offer 2FA at no additional cost, making it
accessible to anyone. This combination of ease and accessibility means users
and organizations can boost their security quickly and with minimal setup.
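To illustrate what an authenticator app computes under the hood, here is a rough, hedged sketch of time-based one-time passwords (TOTP, RFC 6238) in Python; the Base32 secret is a made-up example, and production systems should rely on a vetted library rather than hand-rolled code.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # The shared secret is exchanged once (e.g. via QR code) and stored by both sides.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // interval)            # current 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret; prints a 6-digit code that changes every 30 s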
Conclusion
In conclusion, Two-Factor Authentication (2FA) is a critical security measure
that adds an extra layer of protection to your online accounts. By requiring two
different forms of identification before granting access, 2FA significantly
reduces the risk of unauthorized access. This method combines something
you know (like a password) with something you have (such as a phone) or
something you are (like a fingerprint). Implementing 2FA is a straightforward
yet effective step towards safeguarding your digital life against the increasing
threats of hacking and identity theft. It's an essential tool in today's digital
world where security is paramount.
What is Two-Factor Authentication (2FA)? - FAQs
What is an example of two-factor authentication (2FA)?
Using two different factors like a password and a one-time passcode sent to a
mobile phone via SMS is two-factor authentication.
Multifactor Authentication
Last Updated : 19 Jan, 2023
As with an ATM, where accessing a bank account requires both a physical card
and a personal PIN, the user needs a password plus an additional factor such as
a phone or a fingerprint to authenticate completely. By requiring two or more
pieces of evidence for full authentication, multi-factor authentication (MFA) adds
protection to the user’s identity.
Components of MFA:
The factors are divided into three groups, as follows:
1. Something you know, such as a password or the answer to a security
question.
2. Something you have, such as a smartphone app that receives
notifications or a token-generating device.
3. Something you are—usually a biometric trait like a fingerprint or face
scan, which is employed on many mobile devices.
Disadvantage:
The main disadvantage is that multi-factor authentication takes longer. Not only
does requiring two or more types of verification lengthen the login procedure,
but the setup itself can be time-consuming. Multi-factor authentication usually
cannot be set up by a company entirely on its own; it typically relies on a
third-party provider. Despite its drawbacks, MFA is still considered one of the
greatest levels of security that all firms should strive to deploy to protect their
employees, networks, and consumers.
Last but not least, here’s how some of the drawbacks of multi-factor
authentication can be turned into benefits:
1. Consider a dedicated vendor management system.
2. Replace your VPN with a better, more complete solution instead of
spending money on an expensive one.
SQL Injection
Last Updated : 08 Aug, 2024
● SQL injection typically occurs when you ask a user for input, such as
their username/user ID, and instead of a name/ID the user inputs an
SQL statement that will unknowingly be executed against your
database.
For example,
txtUserId = getRequestString("UserId");
txtSQL = "SELECT * FROM Users
WHERE UserId = " + txtUserId;
If the user enters something like 105 OR 1=1 instead of a plain ID, the resulting
query will return the data of all users, not just the user with UserId = 105.
SQL injection based on OR 1=1 works because 1=1 is always true. As you can
see in the above example, the condition holds for every row, so basically all the
user data is compromised. The malicious user can similarly use other SQL
queries.
Consider the following SQL query.
Query 1:
SELECT * FROM User WHERE
Username = '' AND Password = ''
Now a malicious attacker can cleverly use the always-true condition OR 1=1 to
retrieve private and secure user information. The following query, when
executed, retrieves protected data that was never intended to be shown to users.
Query 2:
SELECT * FROM User WHERE
(Username = '' OR 1=1) AND
(Password = '' OR 1=1);
For more details, refer to How to Protect Against SQL Injection Attacks article.
The SQL statement below will return all rows from the Users table and then
delete the Employees table.
Query:
SELECT * FROM Users;
DROP TABLE Employees;
SQL Injection, often known as SQLI, is a typical attack vector that employs
malicious SQL code to manipulate Backend databases in order to obtain
information that was not intended to be shown. This information might contain
sensitive corporate data, user lists, or confidential consumer information.
Automated tools are often used to exploit SQL injection; for example, sqlmap can
be pointed at a vulnerable parameter such as listproducts.php?cat=1 with the
--dbs option to enumerate databases, and then run with -D acuart --tables to
list the tables of the chosen database.
SQLI Prevention:
Developers can prevent SQL Injection with the help of the following
techniques.
1. Use extensive data sanitization: Websites must filter all user input (see also the parameterized-query sketch after this list).
Ideally, user data should be context-filtered. Email addresses, for example,
should be filtered to allow only the characters permitted in an e-mail address,
phone numbers should be filtered to allow only the characters permitted in a
phone number, and so on.
2. Make use of a web application firewall: Mod Security, a free and
open-source module for Apache, Microsoft IIS, and Nginx web servers, is a
prominent example. Mod Security offers a complex and constantly changing
collection of rules for filtering potentially hazardous online requests. Most
attempts to smuggle SQL across web channels are caught by its SQL
injection safeguards.
3. Patch software on a regular basis: Because SQL injection vulnerabilities
are frequently discovered in commercial software, it is critical to keep up with
patches and updates.
4. Contextually limit database rights: Create numerous database user
accounts with the least amount of permission necessary for their usage
scenario. For example, the code powering a login page should query the
database using a restricted account that only has access to the appropriate
credentials table.
5. Monitor SQL statements from database-connected apps in real-time: This
will aid in the detection of rogue SQL statements and vulnerabilities. Machine
learning and/or behavioral analysis monitoring technologies can be extremely
effective.
What is a Session?
A session is used to save information on the server momentarily so that it
may be utilised across various pages of the website. It is the overall amount
of time spent on an activity. The user session begins when the user logs in to
a specific network application and ends when the user logs out of the
program or shuts down the machine.
Session values are far more secure since they are saved in binary or
encrypted form and can only be decoded on the server. When the user shuts
down the machine or logs out of the program, the session values are
automatically deleted. We must save the values in the database to keep
them forever.
What is a Cookie?
A cookie is a small text file that is saved on the user’s computer. The
maximum file size for a cookie is 4KB. It is also known as an HTTP cookie, a
web cookie, or an internet cookie. When a user first visits a website, the site
sends data packets to the user’s computer in the form of a cookie.
The information stored in cookies is not safe since it is kept on the client side
in a text format that anybody can see. We can activate or disable cookies
based on our needs.
Cookies vs sessions:
● Cookies are client-side files stored on the local computer that hold user information; sessions are server-side files that contain user data.
● A cookie can only store a limited amount of information; a session can hold an indefinite quantity of data.
● Because cookies are kept on the local computer, no function is needed to start them; to begin a session, we must call the session_start() method.
● In PHP, cookie data is read from the $_COOKIE superglobal; session data is read from the $_SESSION superglobal.
Conclusion
In conclusion, sessions and cookies both store user information but differ in
key ways. Sessions are stored on the server and are more secure but
temporary, while cookies are stored on the user’s computer and can last
longer but are less secure. Choosing between them depends on the need for
security and persistence of the data.
What is Kubernetes (k8s)?
Kubernetes is an open-source Container Management tool that automates
container deployment, container scaling, descaling, and container load
balancing (also called a container orchestration tool). It is written in Golang
and has a vast community because it was first developed by Google and later
donated to CNCF (Cloud Native Computing Foundation). Kubernetes can
group ‘n’ number of containers into one logical unit for managing and
deploying them easily. It works well with all kinds of environments: public
cloud, hybrid cloud, and on-premises.
2. Scalability
● You can scale the application containers depending on the incoming
traffic. Kubernetes offers horizontal pod autoscaling, so pods are
scaled automatically depending on the load.
3. High availability
● You can achieve high availability for your application with the help of
Kubernetes and also it will reduce the latency issues for the end
users.
4. Cost-effectiveness
Features of Kubernetes
1. Automated Scheduling– Kubernetes provides an advanced
scheduler to launch containers on cluster nodes. It performs
resource optimization.
2. Self-Healing Capabilities– It provides rescheduling, replacing, and
restarting the containers that are dead.
3. Automated Rollouts and Rollbacks– It supports rollouts and
rollbacks for the desired state of the containerized application.
4. Horizontal Scaling and Load Balancing– Kubernetes can scale up
and scale down the application as per the requirements.
5. Resource Utilization– Kubernetes provides resource utilization
monitoring and optimization, ensuring containers are using their
resources efficiently.
6. Support for multiple clouds and hybrid clouds– Kubernetes can be
deployed on different cloud platforms and run containerized
applications across multiple clouds.
7. Extensibility– Kubernetes is very extensible and can be extended
with custom plugins and controllers.
8. Community Support- Kubernetes has a large and active community
with frequent updates, bug fixes, and new features being added.
The following are some of the deployment models used in cloud computing:
● Public Cloud
● Private Cloud
● Hybrid Cloud
● Community Cloud
● Multi-Cloud
Public Cloud
The public cloud makes it possible for anybody to access systems and
services. The public cloud may be less secure as it is open to everyone. The
public cloud is one in which cloud infrastructure services are provided over
the internet to the general people or major industry groups. The
infrastructure in this cloud model is owned by the entity that delivers the
cloud services, not by the consumer. It is a type of cloud hosting that allows
customers and users to easily access systems and services. This form of
cloud computing is an excellent example of cloud hosting, in which service
providers supply services to a variety of customers. In this arrangement,
storage backup and retrieval services are given for free, as a subscription, or
on a per-user basis. For example, Google App Engine etc.
Private Cloud
The private cloud deployment model is the exact opposite of the public cloud
deployment model. It’s a one-on-one environment for a single user
(customer). There is no need to share your hardware with anyone else. The
distinction between private and public clouds is in how you handle all of the
hardware. It is also called the “internal cloud” & it refers to the ability to
access systems and services within a given border or organization. The cloud
platform is implemented in a cloud-based secure environment that is
protected by powerful firewalls and under the supervision of an
organization’s IT department. The private cloud gives greater flexibility of
control over cloud resources.
Hybrid Cloud
The hybrid cloud deployment model combines public and private cloud
resources, letting an organization keep sensitive workloads in a private
environment while using the public cloud for everything else.
Community Cloud
The community cloud model shares cloud infrastructure among several
organizations with common requirements, with the cost distributed among the
members.
Multi-Cloud
We’re talking about employing multiple cloud providers at the same time
under this paradigm, as the name implies. It’s similar to the hybrid cloud
deployment approach, which combines public and private cloud resources.
Instead of merging private and public clouds, multi-cloud uses many public
clouds. Although public cloud providers provide numerous tools to improve
the reliability of their services, mishaps still occur. It’s quite rare that two
distinct clouds would have an incident at the same moment. As a result,
multi-cloud deployment improves the high availability of your services even
more.
Each model has some advantages and some disadvantages, and the
selection of the best is only done on the basis of your requirement. If your
requirement changes, you can switch to any other model.
Comparison of the deployment models:
● Initial Setup: Easy for the public cloud; complex, requiring a professional team to set up, for the private, community, and hybrid clouds.
● Scalability and Flexibility: High for the public, private, and hybrid clouds; fixed for the community cloud.
● Cost Comparison: The public cloud is cost-effective, the private cloud is costly, the community cloud distributes cost among its members, and the hybrid cloud sits between the public and private clouds.
● Data Privacy: Low for the public cloud; high for the private, community, and hybrid clouds.
Advantages of IaaS
● IaaS is cost-effective as it eliminates capital expenses.
● IaaS cloud provider provides better security than any other software.
● IaaS provides remote access.
Disadvantages of IaaS
● In IaaS, users have to secure their own data and applications.
● Cloud computing is not accessible in some regions of the World.
Advantages of PaaS
● PaaS is simple and very much convenient for the user as it can be
accessed via a web browser.
● PaaS has the capabilities to efficiently manage the lifecycle.
Disadvantages of PaaS
● PaaS has limited control over infrastructure as they have less control
over the environment and are not able to make some
customizations.
● PaaS has a high dependence on the provider.
SaaS accounts for around 60 percent of cloud solutions, and because of this it is
the model most preferred by companies.
Advantages of SaaS
● SaaS applications and their data can be accessed from anywhere over the Internet.
● SaaS provides easy access to features and services.
Disadvantages of SaaS
● SaaS solutions have limited customization, which means they have
some restrictions within the platform.
● With SaaS, users have little control over their own data.
● Since SaaS solutions are generally cloud-based, they require a stable
internet connection to work properly.
What is a Container ?
One of the greatest challenges in software development is ensuring that an
app works the same way in a variety of environments. In earlier times, this
was addressed by running the app inside a virtual machine (VM), but a VM is
quite a heavyweight solution. Containers came along as a more lightweight
and effective alternative: they encapsulate an application and its
dependencies in such a way that it can run in any computing environment
without running into problems.
Primary Terminologies
● Container: An isolated, stand-alone unit that encapsulates an
application and all its dependencies. It runs the same way, consistently,
in any environment, independently of the host system, neither affecting
the host nor being affected by it.
● Docker: Docker is an open-source platform designed to make it easy
for containers to be built, developed, and run. It provides one with
all the software required, in addition to development capabilities, to
build, run, and manage containers for maximum efficiency.
● Image: A container image is a lightweight, read-only, executable file
that includes everything needed to run a piece of software: the code,
the runtime, the libraries, the environment variables, and
configurations. It basically serves as a template for creating
containers.
● Containerization: The practice of bundling an application together with
all its dependencies into a container. In this way, the application
behaves the same wherever it is executed.
● Orchestration: It automatically takes care of the coordination,
scheduling, and management of multi-container deployments
running on a cluster of machines, container orchestration tools
include Kubernetes and Docker Swarm.
Containers
● Architecture: All containers share the host OS kernel; however, the
running user spaces are isolated, making them lightweight.
● Boot Time: Containers have much shorter boot times, typically a few
seconds, as they do not need to boot a full OS.
● Isolation: Containers provide isolation at the process level, which is
less strong than that of VMs, but for many use cases this does not
matter.
● Resource Usage: Containers consume fewer resources because they
do not need an entire OS—only the necessary binaries and libraries.
What is Containerization?
Containerization is the process of packaging an application together with all
its dependencies into a container so that the application runs consistently
from one computing environment to another. In simple terms, containerization
uses the host OS kernel to run many isolated instances of applications on the
same machine, which makes it a very lightweight and efficient way to deploy
applications.
Use Cases for Containerization
The firewall is a central part of cloud architecture. It protects the network and
the perimeter of end-users, and it also protects traffic between the various apps
stored in the cloud.
Access control protects data by allowing us to set access lists for various assets.
For example, you can allow specific employees to use an application while
restricting others; the rule is that employees can access only the equipment they
require. By maintaining strict access control, we can keep essential documents
from being stolen by malicious insiders or hackers.
Block Cipher and Stream Cipher are the two types of symmetric key
cipher. Both are used to transform plain text into ciphertext. The
difference between a block cipher and a stream cipher is that the
former transforms the plain text into ciphertext by taking the plain
text block by block, while a stream cipher produces ciphertext from
plain text by taking one byte (or bit) of plain text at a time. In this
article, we will see the difference between Block Cipher and Stream
Cipher in detail.
Block Cipher
1. Division Method
The division method involves dividing the key by a prime number
and using the remainder as the hash value.
h(k)=k mod m
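A minimal sketch of the division method in Python, assuming an illustrative table size m = 31 (a prime):

```python
def division_hash(k: int, m: int = 31) -> int:
    """Division method: h(k) = k mod m."""
    return k % m

# Example: division_hash(1276) == 1276 % 31 == 5
```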
Disadvantages:
● Poor distribution if m is not chosen wisely.
2. Multiplication Method
In the multiplication method, a constant A (0 < A < 1) is used to
multiply the key. The fractional part of the product is then multiplied
by m to get the hash value.
h(k) = ⌊m (kA mod 1)⌋
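A minimal sketch of the multiplication method in Python, using Knuth's commonly suggested constant A = (sqrt(5) - 1) / 2 and an illustrative table size m = 16:

```python
import math

def multiplication_hash(k: int, m: int = 16) -> int:
    """Multiplication method: h(k) = floor(m * (k*A mod 1))."""
    A = (math.sqrt(5) - 1) / 2   # ~0.618, Knuth's suggested constant
    frac = (k * A) % 1           # fractional part of k * A
    return math.floor(m * frac)
```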
Advantages:
● Less sensitive to the choice of m.
Disadvantages:
● More complex than the division method.
3. Mid-Square Method
In the mid-square method, the key is squared, and the middle digits
of the result are taken as the hash value.
Steps:
1. Square the key.
2. Extract the middle digits of the squared value.
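A minimal sketch of the mid-square method in Python, keeping the middle two digits of the squared key (the number of digits kept is an illustrative choice):

```python
def mid_square_hash(k: int, r: int = 2) -> int:
    """Mid-square method: square the key and keep the middle r digits."""
    squared = str(k * k)
    start = max(len(squared) // 2 - r // 2, 0)
    return int(squared[start:start + r])

# Example: k = 3101 -> 3101**2 = 9616201 -> middle digits "16" -> hash 16
```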
Advantages:
● Produces a good distribution of hash values.
Disadvantages:
● May require more computational effort.
4. Folding Method
The folding method involves dividing the key into equal parts,
summing the parts, and then taking the sum modulo m.
Steps:
1. Divide the key into parts.
2. Sum the parts.
3. Take the sum modulo m.
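A minimal sketch of the folding method in Python, splitting the key into two-digit parts and using an illustrative m = 100:

```python
def folding_hash(k: int, part_size: int = 2, m: int = 100) -> int:
    """Folding method: split the key's digits into parts, sum them, take mod m."""
    digits = str(k)
    parts = [int(digits[i:i + part_size]) for i in range(0, len(digits), part_size)]
    return sum(parts) % m

# Example: k = 123456 -> parts 12, 34, 56 -> sum 102 -> 102 % 100 = 2
```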
Advantages:
● Simple and easy to implement.
Disadvantages:
● Depends on the choice of partitioning scheme.
5. Cryptographic Hash Functions
Cryptographic hash functions (such as SHA-256) map keys to fixed-size
digests and are designed to be one-way and collision-resistant.
Advantages:
● High security.
Disadvantages:
● Computationally intensive.
6. Universal Hashing
Universal hashing uses a family of hash functions to minimize the
chance of collision for any given set of inputs.
h(k)=((a⋅k+b)modp)modm
Where a and b are randomly chosen constants, p is a prime number
greater than m, and k is the key.
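A minimal sketch of universal hashing in Python, with illustrative choices p = 101 and m = 10 and randomly drawn a and b:

```python
import random

def make_universal_hash(m: int = 10, p: int = 101):
    """Return a randomly chosen hash function h(k) = ((a*k + b) mod p) mod m."""
    a = random.randint(1, p - 1)
    b = random.randint(0, p - 1)
    return lambda k: ((a * k + b) % p) % m

h = make_universal_hash()
bucket = h(42)   # a value in range(10); a fresh call picks a different function
```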
Advantages:
● Reduces the probability of collisions.
Disadvantages:
● Requires more computation and storage.
7. Perfect Hashing
Perfect hashing aims to create a collision-free hash function for a
static set of keys. It guarantees that no two keys will hash to the same
value.
Types:
● Minimal Perfect Hashing: Ensures that the range of the hash
function is equal to the number of keys.
● Non-minimal Perfect Hashing: The range may be larger than
the number of keys.
Advantages:
● No collisions.
Disadvantages:
● Complex to construct.
Conclusion
In conclusion, hash functions are very important tools that help store
and find data quickly. Knowing the different types of hash functions
and how to use them correctly is key to making software work better
and more securely. By choosing the right hash function for the job,
developers can greatly improve the efficiency and reliability of their
systems.
Example:
The hash algorithm MD5 is widely used to check the integrity of
messages. MD5 divides the message into blocks of 512 bits and
creates a 128-bit digest (typically 32 hexadecimal digits). It is no
longer considered reliable for use as researchers have demonstrated
techniques capable of easily generating MD5 collisions on
commercial computers.
The weaknesses of MD5 have been exploited by the Flame malware
in 2012.
In response to the insecurities of MD5 hash algorithms, the Secure
Hash Algorithm (SHA) was invented.
Implementation:
MD5 hash in Java
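As a quick sketch in Python (rather than the Java implementation linked above), the standard hashlib module can compute an MD5 digest; remember that MD5 should only be used for non-security checksums:

```python
import hashlib

# Compute the 128-bit MD5 digest of a sample message as 32 hex characters.
message = b"The quick brown fox jumps over the lazy dog"
print(hashlib.md5(message).hexdigest())
```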
The MD4 algorithm is defined in RFC 1320, and the MD5 is defined in RFC 1321.
Implementation on FPGA
The internal structure of MD4 and MD5 are shown in the figures below:
As we can see from the figures, the hash calculation can be partitioned into two parts.
● The pre-processing part pads or splits the input message, which is composed of
a stream of 32-bit words, into fixed-size blocks (512 bits each).
● The digest part iteratively computes the hash values. Loop-carried dependency is
enforced by the algorithm itself, thus this part cannot reach an initiation interval
(II) = 1.
As these two parts can work independently, they are designed into parallel dataflow
processes, connected by streams (FIFOs).
Performance
MD4
A single instance of the MD4 function processes the input message at a rate of
512 bits per 50 cycles at 312.79 MHz.
(Resource utilization table: BRAM, DSP, FF, LUT, CLB, SRL, and clock period in ns; values not reproduced here.)
MD5
A single instance of the MD5 function processes the input message at a rate of
512 bits per 81 cycles at 329.05 MHz.
(Resource utilization table: BRAM, DSP, FF, LUT, CLB, SRL, and clock period in ns; values not reproduced here.)
What is HMAC(Hash based Message
Authentication Code)?
What is HMAC?
HMAC (Hash-Based Message Authentication Code) is a cryptographic
technique that ensures data integrity and authenticity using a hash
function and a secret key, unlike approaches based on signatures and
asymmetric cryptography. Checking data integrity is necessary for the
parties involved in communication. HTTPS, SFTP, FTPS, and other
transfer protocols use HMAC. The underlying cryptographic hash
function may be MD5, SHA-1, or SHA-256. Digital signatures are broadly
similar to HMACs in that both employ a hash function together with a
key; the difference lies in the keys: HMAC uses a symmetric key (the
same copy on both sides), while signatures use an asymmetric key pair
(two different keys).
Working of Hash-based Message Authentication
Code
HMAC provides the client and server with a shared private key that is
known only to them. The client computes a unique hash (HMAC) for
every request: when the client sends a request to the server, it hashes
the requested data with the private key and sends the result as part of
the request. Both the message and the key are hashed in separate steps,
making it secure. When the server receives the request, it computes its
own HMAC. Both HMACs are compared, and if they are equal, the client
is considered legitimate.
The formula for HMAC:
HMAC = hashFunc(secret key + message)
Summary of Calculation
1. Select K. If the length of K is less than b, pad K with 0s on the left
until its length equals b, producing K+ (0 < K < b).
2. XOR K+ with ipad (b bits) to produce S1.
3. Append the plain text M to S1.
4. Apply SHA-512 to (S1 || M).
5. Pad the n-bit output with 0s until its length equals b bits.
6. XOR K+ with opad (b bits) to produce S2.
7. Append the output of step 5 to S2.
8. Apply SHA-512 to the result of step 7 to produce the n-bit hash code.
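As a sketch of the calculation above, Python's standard hmac module implements HMAC directly; SHA-512 is used here to match the steps, and the key and message are arbitrary example values:

```python
import hashlib
import hmac

secret_key = b"shared-secret-key"          # example shared key
message = b"amount=100&to=alice"           # example message

# Sender computes the HMAC-SHA512 tag over the message.
tag = hmac.new(secret_key, message, hashlib.sha512).hexdigest()

# Receiver recomputes the tag with the same key and compares in constant time.
expected = hmac.new(secret_key, message, hashlib.sha512).hexdigest()
print(hmac.compare_digest(tag, expected))   # True when message and key are unchanged
```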
The data is initially hashed by the client using the private key before
being sent to the server as part of the request. The server then creates
its own HMAC over the received data and compares it with the one sent.
This protects the exchange against tampering, which could otherwise
result in crucial data being disclosed or modified. Additionally, the HMAC
itself is irreversible: even if a malicious party intercepts the
communication, they cannot recover the key from the tag or forge a valid
tag for a modified message, because they do not hold the secret key.
Advantages of HMAC
● HMACs are ideal for high-performance systems like routers because
they rely on hash functions, which are calculated and verified much
more quickly than public-key operations.
● HMACs are smaller than digital signatures yet provide comparable
security.
● HMACs are used in environments where public-key systems are
prohibited.
Disadvantages of HMAC
● HMACs use a shared key, which means they cannot provide
non-repudiation: if either the sender’s or the receiver’s key is
compromised, it is easy for attackers to create unauthorized messages.
● Securely managing and distributing secret keys can be
challenging.
● Although unlikely, hash collisions (where two different
messages produce the same hash) can occur.
● The security of HMAC depends on the length of the secret
key. Short keys are more vulnerable to brute-force attacks.
● The security of HMAC relies on the strength of the chosen
hash function (e.g., SHA-256). If the hash function is
compromised, HMAC is also affected.
Applications of HMAC
● Verification of e-mail address during activation or creation of
an account.
● Authentication of form data that is sent to the client browser
and then submitted back.
● HMACs can be used in Internet of Things (IoT) devices because of
their low computational cost.
● Whenever there is a need to reset the password, a link that
can be used once is sent without adding a server state.
● It can take a message of any length and convert it into a
fixed-length message digest; even for a long message, the digest
stays small, which helps make the best use of bandwidth.
Cryptography is essential for data encryption and decryption,
safeguarding sensitive data for both businesses and individuals.
However, with the advancement of technology, data breaches and
cyberattacks have become very common, and different types of
cryptographic tools are needed to combat such problems. Hashing is
used for data integrity verification, to detect any unauthorized
modification or tampering, and to help ensure a digital document's
authenticity.
The Secure Hash Algorithms (SHA) are one such cryptographic
technology; they use hashing to convert plaintext into a message digest.
In this article, we will learn all about SHA: its definition, the difference
between SHA and AES, the primary technology, key terms, practical
examples, real-life scenarios, pros, cons, etc.
Primary Technology
The National Security Agency (NSA) developed the SHA-2 family of hash
functions, and SHA-256 is the most widely used and popular standard of
the SHA-2 family.
SHA-256 takes an input message of any length and creates a 256-bit
(32-byte) hash value; while creating the hash value, complex,
standardised mathematical operations are applied to the input message.
Processing of SHA
1. Input
2. Preprocessing
3. Hashing
4. Output
The hash value can act as a tool for authenticating the originality of the
input message, making it possible to detect any unauthorized
modification caused by data tampering and to discard the message
accordingly. If the recipient gets a different hash value when applying
the same hashing algorithm to the received input, then the message has
been tampered with or modified and therefore needs to be discarded.
We may get a fixed-size hash output such as the following:
e3b0c4429cfbbc8c830a8f102620e8a020869d64f84e98fc48d7b8b67f
677f8b9d64f84e98fc48d7b8b67f677f8b9d
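As a sketch of this verification process (using Python's standard hashlib; the sample messages are arbitrary):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the 256-bit SHA-256 digest of data as 64 hexadecimal characters."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer 100 units to account 42"
sent_hash = sha256_hex(original)

# Receiver side: recompute the hash over the received message and compare.
received = b"transfer 900 units to account 42"   # tampered in transit
if sha256_hex(received) != sent_hash:
    print("Message was tampered with; discard it")
else:
    print("Integrity verified")
```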
Collision Attacks
Avalanche Effect
Secure hash functions exhibit the avalanche effect, which is used to
detect underlying modification or tampering of the data: even a
negligible, small change to the input results in a significantly different
hash, so tampering is easily detected and identified.
2. Digital Signatures
3. Password Hashing
Pros
Cons
What is Steganography?
Image Steganography
Audio Steganography
Video Steganography
Advantages of Steganography
● It offers better security for data sharing and communication.
● It is very difficult to detect; the hidden message can only be
recovered by the receiving party.
● It can be applied through various media such as images, audio,
video, text, etc.
● It plays a vital part in securing the content of the
communication.
● It offers a double layer of protection: first the carrier file itself,
and second the data encoded within it.
● With the help of steganography, agencies can communicate
secretly.
Steganography vs Cryptography:
● Steganography is a technique of concealing data or information
within another, seemingly ordinary file or message, so that its very
presence is hidden.
● Cryptography is a technique of protecting information and
communication with the help of various encoding techniques.
Steganography Tools
Steganography tools are tools that help the user hide secret messages
or information inside another file in various formats. There are various
tools available on the market that help to perform steganography; a
minimal sketch of the least-significant-bit (LSB) technique that many of
them build on appears after this list. Some of the steganography tools
are the following:
● OpenStego
● Steghide
● OutGuess
● Hide'N'Send
● QuickStego
● Disguise
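As an illustration of the principle behind such tools, the sketch below hides a short text message in the least significant bits of an image's pixel values. It assumes the Pillow library is installed, and cover.png / stego.png are placeholder file names:

```python
from PIL import Image

def hide_message(cover_path: str, out_path: str, message: str) -> None:
    """Hide a UTF-8 message in the least significant bit of each RGB channel."""
    img = Image.open(cover_path).convert("RGB")
    # Message bits followed by a NUL byte so the receiver knows where to stop.
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8")) + "0" * 8
    pixels = list(img.getdata())
    if len(bits) > len(pixels) * 3:
        raise ValueError("Message too long for this cover image")
    new_pixels, i = [], 0
    for r, g, b in pixels:
        channel = []
        for value in (r, g, b):
            if i < len(bits):
                value = (value & ~1) | int(bits[i])   # overwrite the lowest bit
                i += 1
            channel.append(value)
        new_pixels.append(tuple(channel))
    img.putdata(new_pixels)
    img.save(out_path, "PNG")   # lossless format so the hidden bits survive

hide_message("cover.png", "stego.png", "meet at dawn")
```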
TLS 1.2: TLS 1.2 is an advanced version of TLS 1.1. It was designed
for improved reliability and high performance, and it also offers
better security. TLS 1.3: This is the latest version of TLS; it is used by
various network protocols for encryption and is the modern
successor to SSL.
2. Scalability
3. High availability
● You can achieve high availability for your application with the help of
Kubernetes, and it will also reduce latency issues for end users.
4. Cost-effectiveness
Nmap, which stands for Network Mapper, is arguably one of the most popular
open-source security tools employed for network mapping. As one of the
primary utilities of the cybersecurity domain, it helps users perform
reconnaissance by scanning the hosts and services in a computer network.
Nmap works by sending packets to a target and analyzing the responses as a
way of learning about the target network. This article discusses the
fundamental techniques of Nmap scanning and the general guidelines for
conducting network vulnerability scans, and it also explains how to use Nmap
efficiently.
What is Nmap?
Nmap stands for Network Mapper and is a free, open-source
command-line tool. Nmap is an information-gathering tool used for
reconnaissance. It scans hosts and services on a computer network, which
means that it sends packets and analyzes the responses. Listed below are the
most useful scans which you can run with the help of the Nmap tool.
Here: Using this command, your system sends a SYN packet and the destination
responds with SYN and ACK packets, which means the port is listening;
your system then sends an ACK packet to complete the connection.
If the port is closed, the destination responds with RST/ACK packets.
(Figure: 3-way handshake when the destination port is closed.)
In the TCP scan output, you can see the port numbers, the state of each port,
and the services running on those ports.
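The sketch below illustrates the idea behind a TCP connect scan rather than Nmap itself: it attempts to complete a full TCP connection to each port and reports those that accept. The target address and port range are placeholders; only scan hosts you are authorized to test:

```python
import socket

def tcp_connect_scan(host: str, ports: range) -> list[int]:
    """Return the ports that accept a full TCP connection (i.e. appear open)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)                      # avoid hanging on filtered ports
            if s.connect_ex((host, port)) == 0:    # 0 means the handshake completed
                open_ports.append(port)
    return open_ports

print(tcp_connect_scan("192.168.1.10", range(1, 1025)))   # placeholder lab target
```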
SYN Scan is the same as TCP Scan but it does not complete the 3-way
handshake process.
In this scan, the source sends a SYN packet and the destination responds with
SYN/ACK packets, but the source interrupts the 3-way handshake by sending
an RST packet. Because of this interruption, the destination host does not
keep a record of the source system.
3. UDP Scan:
Here: -sU is used to activate the UDP Scan. It generally sends the empty
UDP packets and it takes more time than TCP Scan.
4. Ping Scan/NO PORT Scan:
Here: -sn and -sP both are used for Ping Scan.
This only prints the available hosts that respond to the host discovery probes
within the network. The above command does not tell you anything about the
ports of the system. You can also run it against a single IP to check whether
the host is up or not.
Different States of the Port Scan Results and their
Meaning
There are mainly 4 types of State in the port scan results.
1. Open: A port is open when a service is listening on it, for example, a MySQL
service running on port 3306, as seen in the TCP scan result.
2. Closed: A port is closed when it is reachable but no service is listening on
it; the host responds, but nothing accepts connections on that port.
3. Filtered: The port is filtered by a security system such as a firewall, and
whether the port is open or closed cannot be determined. If the host sends an
unusual response, the port is also reported as filtered; for example, in the UDP
scan result, when the host responds with ICMP Unreachable, the port is
considered filtered.
4. Open | Filtered: The host gives no answer, so the port may be filtered by a
firewall. But in some cases, such as the UDP scan result above, the host simply
does not send an ACK packet the way it would in a TCP scan, so the lack of
response means the port may in fact be open.
Network Security
The basic principle of network security is protecting huge stored data and
networks in layers that ensure the bedding of rules and regulations that have
to be acknowledged before performing any activity on the data. These levels
are:
● Physical Network Security: This is the most basic level; it involves
protecting the data and network from unauthorized personnel
acquiring control over the confidentiality of the network. This can
be achieved by using devices like biometric systems.
● Technical Network Security: It primarily focuses on protecting the
data stored in the network and data in transit across the network.
This type serves two purposes: protection from unauthorized users
and protection from malicious activities.
● Administrative Network Security: This level of network security
governs user behavior, such as how permissions are granted and
how the authorization process takes place. It also ensures the level
of sophistication the network might need to protect it against all
attacks, and it suggests necessary amendments to the
infrastructure.
Email Security
Network Segmentation
Access Control
Your network should not be accessible to every user. You need to identify
every user and every device in order to keep out attackers. You can then put
your security policies into effect; noncompliant endpoint devices can either
have their access restricted or be blocked. This process is known as network
access control (NAC).
Sandboxing
Antivirus and Antimalware Software
This type of network security ensures that no malicious software enters the
network and jeopardizes the security of the data. Malicious software like
viruses, Trojans, and worms is handled by it. This ensures not only that the
entry of the malware is blocked but also that the system is well-equipped to
fight it once it has entered.
Firewalls Security
Application Security
Application security denotes the security precautions taken at the application
level to prevent the stealing or capturing of data or code inside the
application. It also includes the security measures made during the
development and design of applications, as well as techniques and methods
for protecting the applications after they are deployed.
Wireless Security
Wireless networks are less secure than wired ones. If not properly secured,
setting up a wireless LAN can be like having Ethernet ports available
everywhere, even in places like parking lots. To prevent attacks and keep
your wireless network safe, you need dedicated products designed to protect
it from exploits and unauthorized access.
Web Security
A web security solution manages how your staff uses the internet, blocks
threats from websites, and stops access to harmful sites. It safeguards your
web gateway either onsite or in the cloud. Additionally, “web security”
involves measures taken to protect your own website from potential attacks
and vulnerabilities.
Mobile Device Security
Cybercriminals are focusing more on mobile devices and apps. In the next
three years, about 90 percent of IT organizations might allow corporate
applications on personal mobile devices. It’s crucial to control which devices
can connect to your network and to set up their connections securely to
protect network traffic from unauthorized access.
Industrial Network Security
VPN Security
● Active Scanning
● Passive Scanning
Scanning is more than just port scanning, but it is a very important part of
this process. Scanning allows you to identify open ports on the target system
and can be used for port mapping, performing an interactive session with the
operating system via those ports, or even redirecting traffic from these open
ports. There are many tasks that can be performed with a scanning tool.
1. TCP connect scan: This scan attempts to complete the full TCP
three-way handshake with each port on the target system. Ports that
accept the connection are open, while closed ports answer with
RST/ACK. Because connections are fully established (and therefore
easily logged), this is not a stealthy scan, but it reliably shows which
ports are open on the target system.
2. TCP SYN port scan: This is a similar type of scan, but it sends only
TCP SYN packets and never completes the handshake: a SYN/ACK
reply indicates an open port, after which the scanner tears the
connection down with an RST packet, making the scan stealthier.
3. Network Scanning: Network scanning is used to identify the devices
and services that are running on a target network, determine their
operating systems and software versions, and identify any potential
security risks or vulnerabilities. Network scanning can be performed
manually or automated using software tools, and can target specific
systems or an entire network.
4. Vulnerability Scanning: Vulnerability scanning is a process of
identifying, locating, and assessing the security vulnerabilities of a
computer system, network, or application. This process is performed
using automated software tools that scan for known vulnerabilities,
as well as weaknesses in the configuration or implementation of the
system being tested.
Purpose
Scanning attacks are performed by cybercriminals or malicious actors for
several reasons, including:
Active Scanning
Active scanning is a type of network scanning technique that is used to
gather information about a target system or network. Unlike passive
scanning, which only gathers information that is readily available, active
scanning actively interacts with the target system to gather information.
One of the benefits of passive scanning is that it is less intrusive and less
likely to trigger security measures, such as firewalls or intrusion detection
systems (IDS), than active scanning. As a result, passive scanning can provide
organizations with valuable information about their systems and networks
without putting them at risk.
Key Points:
There are three conditions that allow an attacker to utilize the scanning
techniques:
● Physical access to the target system: Using a port scanner or ping
sweep, you can locate open ports.
● Vulnerable target software: An application may have vulnerabilities
that allow you to use a TCP connect scan or an SYN flood attack.
● Administrator privileges on the target system (Windows): in order
for an attacker to perform a SYN flood attack, he must have
administrator privileges on the target system.
There are several port scanning or checking methods, Some of them are
given below:
Countermeasures:
The best option to prevent getting scanned is to block the scanning packets.
● For a TCP connect scan, block ACK packets from entering your
network.
● For a SYN flood attack, you can use a SYN cookie or SYN proxy,
which will be discussed in the next section.