Computer Security Book 27-08-2022 - Proof Reading Unit 1 - 4
By
Statutory Warning
Information contained in this book has been obtained by the author from sources believed to be reliable and is correct to the
best of his knowledge. Every effort has been made to avoid errors and omissions and to ensure accuracy. Any error or omission
noted may be brought to the notice of the publisher, and it shall be taken care of in a forthcoming edition of this book. However,
neither the publisher nor the author guarantees the accuracy or completeness of any information published herein, and neither
the publisher nor the author takes any responsibility or liability for any inconvenience, expense, loss or damage to anyone
resulting from the contents of this book.
The book is meant for educational and learning purposes, and there is no attempt on the part of the publisher/author to render
engineering or other professional services. If such services are required, the assistance of an appropriate professional
should be sought.
The author of the book has taken all possible care to ensure that the contents of the book do not violate any existing
copyright or other intellectual property rights of any person in any manner whatsoever. In the event that the author has been
unable to trace any source, and if any copyright has been inadvertently infringed, the facts may be brought to the notice of
the publisher in writing for corrective action.
Send all correspondence to: M/s S.K. Kataria & Sons, New Delhi
Email: [email protected]
Price: ₹ 425/-
Foreword
Computer System Security is necessary because it ensures the confidentiality and integrity of
data. It also provides a level of protection from unauthorized access, damage, or destruction.
Without computer security, anyone could gain access to other people's personal information
and misuse it at any level.
I am very glad to know that Mr. Niraj Kumar Tiwari and Mr. Praveen Kumar Tripathi,
dedicated faculty members of the Computer Science & Engineering Department, SIET, Prayagraj,
have added one more work of excellence to their knowledge repository in the form of this book.
I congratulate them on achieving such a milestone in their academic journey. I am sure
that this book will be a great contribution to the field of Computer System Security. I appreciate
them for their sincere efforts to facilitate the student community with their work in the related field.
We look forward to reading more of their books in the coming future and wish them all the best
for their future endeavours.
With regards,
Dr. R.K. Singh
Director,
Shambhunath Institute of Engineering & Technology, Jhalwa, Prayagraj
Preface
Computer System Security refers to safeguarding and protecting computers, as well as the data,
networks, software, and hardware they are connected to, against unauthorized access, abuse,
theft, data loss and other security concerns.
The entire world is within the grasp of technology which is advancing daily. Even a
single day without electronic gadgets around us is unthinkable. With the aid of this evolving
technology, intruders, hackers, and thieves are attempting to compromise the security of our
computers in order to profit, obtain notoriety, demand ransom payments, intimidate others,
infiltrate other enterprises and organizations, etc. Computer security is necessary to safeguard
our system from all of these threats.
This book is organised as per the syllabus designed by Dr. A.P.J. Abdul Kalam Technical
University, Lucknow. AKTU understood the importance of Computer System Security in real-world
scenarios, due to which it added this subject to the B.Tech course curriculum.
This book is designed to cover a wide range of topics in the field of Computer System
Security. As a result, it is an excellent textbook on the subject. Because each unit is designed
to be as standalone as possible, you can focus on the topics that most interest you. We hope
that this textbook will spark your interest in the fast-evolving field of Computer System Security.
We have attempted to present the content material in a clear manner with careful explanation
of the topics covered. Each unit ends with a summary describing the main points. We have
included many figures and tables throughout the text to make the book more enjoyable and
reader-friendly.
The main concern of this book is not only to cover the syllabus but also to give an idea
about Computer System Security, its implementation, future research, and real-life examples
in the related field. This book will provide a deep understanding of the different domains and aspects
of Computer System Security, which could help students decide their future pathway.
To accomplish this book successfully, many people have bestowed their blessings on us. We
take this opportunity to thank all the people who have contributed their support to us.
Firstly we would like to praise and thank GOD, The Almighty for granting countless
blessings, knowledge and opportunity to us.
We would like to thank our parents and family members for their continuous encouragement,
which made it possible to complete the book.
We express our sincere gratitude to Dr. K.K. Tewari, Secretary Utthan and
Dr. R.K. Singh, Director, Shambhunath Institute of Engineering & Technology Prayagraj for
their valuable faith and motivation during the course of preparation of this book.
We are also thankful to Dr. Vibhash Yadav, Associate Professor, REC, Banda, and Dr. M.K.
Sharma, Professor, Amrapali Group of Institutions, Haldwani, for their continuous guidance
and suggestions, which helped us complete the book in an effective manner.
We express our sincere gratitude to the Head of Department and all faculty and staff members of the
Computer Science and Engineering Department, SIET, Prayagraj, for their unconditional support
and suggestions during the preparation of this book.
We are also thankful to Dr. Kamal Prakash Pandey, Dean Academics; Dr. Abhishek
Pandey, Professor, CSE Department; Mr. Abhishek Kumar Pandey, Associate Professor, CSE
Department; Mr. Prashant Srivastava, Associate Professor, CSE Department; Mr. Pankaj Tiwari,
Associate Professor, CSE Department; and the entire SGI family for their continuous support that
made it possible to write this book.
We extend our grateful thanks to the entire management and editorial staff of S.K. Kataria &
Sons Educational Publishers, who took great pains to get the book published in such a nice form.
2.10 Intrusion Detection System (IDS) ............ 90
  2.10.1 Goals of Intrusion Detection Systems ............ 91
  2.10.2 Network Intrusion Detection System (NIDS) ............ 92
  2.10.3 Host Intrusion Detection System (HIDS) ............ 93
  2.10.4 Signature-Based Intrusion Detection System (SIDS) ............ 94
  2.10.5 Anomaly-Based Intrusion Detection System (AIDS) ............ 94
Long Type Questions ............ 95
Sample Questions with Answers ............ 95
Multiple Choice Questions ............ 102
3. Secure Architecture Principles Isolation and Leas ............ 107–207
3.1 Access Control Concept ............ 107
  3.1.1 Importance of Access Control ............ 108
  3.1.2 Types of Access Control ............ 108
3.2 Unix and Windows Access Control Summary ............ 109
  3.2.1 Access Control in UNIX and Windows ............ 109
  3.2.2 UNIX – Access Control ............ 109
  3.2.3 Windows NT – Access Control ............ 111
  3.2.4 Access Control Lists ............ 112
  3.2.5 Windows NT Access Control Lists ............ 113
3.3 Introduction to Browser Isolation ............ 114
  3.3.1 Web Browser ............ 115
  3.3.2 Browser Isolation ............ 115
  3.3.3 Types of Isolated Browsing ............ 116
  3.3.4 Reasons Your Organisation Needs Isolated Browsing ............ 119
  3.3.5 Components of a Browser Isolation System ............ 120
  3.3.6 Benefits of Web Isolation Technology ............ 121
3.4 Web Security Landscape: Web Security Definitions Goals and Threat Models ............ 122
  3.4.1 Cyber Security Landscape Components ............ 122
  3.4.2 Web Security in a Nutshell ............ 123
  3.4.3 The Web Security Problem ............ 125
3.5 Web Security Definitions Goals and Threat Models ............ 126
  3.5.1 Web Security Definition ............ 126
  3.5.2 Web Security Goals ............ 126
  3.5.3 Threat Modeling ............ 127
  3.5.4 Need of Security Threat Modeling ............ 129
  3.5.5 Threat Modeling Methodologies ............ 129
  3.5.6 Advantages of Threat Modeling ............ 133
  3.5.7 Best Practices of Threat Modeling ............ 134
  3.5.8 Threat Modeling Tool ............ 135
3.6 HTTP Content Rendering ............ 140
  3.6.1 Rendering ............ 140
  3.6.2 Types of Rendering ............ 141
  3.6.3 Combining Server Rendering and CSR via Rehydration ............ 145
3.7 HTTP Content Rendering ............ 147
3.8 Cookies Frames and Frame Busting ............ 150
  3.8.1 Types of Cookies ............ 151
  3.8.2 Steps to Block and Delete Cookies ............ 152
  3.8.3 Cookies Policy ............ 157
  3.8.4 Cookies Frames and Frame Busting ............ 158
  3.8.5 Cookie-based and Cookie Less Authentication ............ 160
3.9 Major Web Server Threats ............ 163
  3.9.1 Types of Web Server Threats ............ 163
  3.9.2 How to Prevent Different Attacks in Web Security ............ 165
3.10 Cross-site Request Forgery ............ 166
  3.10.1 How does CSRF Work? ............ 167
  3.10.2 How to Construct a CSRF Attack ............ 168
  3.10.3 Preventing CSRF Attacks ............ 168
3.11 Cross Site Scripting (XSS) ............ 171
  3.11.1 Categories of XSS Attacks ............ 172
  3.11.2 Types of Cross-Site Scripting ............ 173
3.12 Defences and Protections Against XSS ............ 174
  3.12.1 How to Prevent XSS ............ 174
  3.12.2 Finding Vulnerabilities ............ 178
  3.12.3 Stages of Vulnerability Management ............ 180
3.13 Secure Development ............ 184
  3.13.1 Secure Software Development Lifecycle (SSDLC) ............ 184
  3.13.2 A Brief History of SDLC Practices ............ 185
  3.13.3 Secure Software Development Life Cycle Processes ............ 186
  3.13.4 The Benefits of SSDLC ............ 188
  3.13.5 Secure SDLC Best Practices ............ 189
Long Type Questions ............ 190
Sample Questions with Answers ............ 190
Multiple Choice Questions ............ 203
4. Basic Cryptography ............ 208–295
4.1 Introduction to Basic Cryptography ............ 208
  4.1.1 The OSI Security Architecture ............ 208
  4.1.2 Advantages of OSI Security Architecture ............ 209
  4.1.3 Components of OSI Security Architecture ............ 209
  4.1.4 Security Attack ............ 209
  4.1.5 Security Services ............ 212
  4.1.6 Security Mechanism ............ 215
  4.1.7 A Model for Network Security ............ 216
4.2 Cryptography ............ 217
  4.2.1 Symmetric Cipher Model ............ 217
  4.2.2 Public Key Cryptography ............ 219
  4.2.3 Applications for Public-Key Cryptosystems ............ 222
4.3 The RSA Algorithm ............ 222
  4.3.1 RSA Algorithm Steps ............ 223
  4.3.2 Example of RSA Algorithm ............ 224
4.4 Digital Signature ............ 224
  4.4.1 Digital Signature Standard ............ 225
  4.4.2 The Digital Signature Algorithm ............ 226
4.5 Hash Functions ............ 228
  4.5.1 Authentication Functions ............ 228
  4.5.2 Message Encryption ............ 228
  4.5.3 Message Authentication Code ............ 229
  4.5.4 Hash Function ............ 230
4.6 Key Management ............ 238
  4.6.1 Distribution of Public Key ............ 238
4.7 Some Real World Internet Security Protocols ............ 241
4.8 Email Security Certificates ............ 243
  4.8.1 Email Security ............ 243
  4.8.2 Email Security Certificates: S/MIME (Secure/Multipurpose Internet Mail Extension) ............ 244
  4.8.3 X.509 Authentication Service ............ 249
  4.8.4 Pretty Good Privacy (PGP) ............ 252
4.9 Transport Layer Security ............ 257
  4.9.1 Secure Socket Layer (SSL) ............ 258
  4.9.2 TLS Protocol ............ 264
  4.9.3 Secure Browsing–HTTPS ............ 265
  4.9.4 Secure Shell Protocol (SSH) ............ 266
4.10 IP Security (IPsec) ............ 269
  4.10.1 IP Security Architecture ............ 271
  4.10.2 IPsec Services ............ 272
  4.10.3 Encapsulating Security Payload ............ 275
Computer security has become increasingly important since the late 1960s, when
modems (devices that allow computers to communicate over telephone lines) were introduced.
The proliferation of personal computers in the 1980s compounded the problem because they
enabled hackers (irresponsible computerphiles) to illegally access major computer systems from
the privacy of their homes. With the tremendous growth of the Internet in the late 20th and
early 21st centuries, computer security became a widespread concern. The development of
advanced security techniques aims to diminish such threats, though concurrent refinements in
the methods of computer crime pose ongoing hazards.
The security precautions related to computer information and access address four major
threats:
1. Theft of data, such as that of military secrets from government computers;
2. Vandalism, including the destruction of data by a computer virus;
3. Fraud, such as employees at a bank channeling funds into their own accounts;
4. Invasion of privacy, such as the illegal accessing of protected personal financial or
medical data from a large database.
The most basic means of protecting a computer system against theft, vandalism, invasion
of privacy, and other irresponsible behaviours is to electronically track and record the access to,
and activities of, the various users of a computer system. This is commonly done by assigning
an individual password to each person who has access to a system. The computer system itself
can then automatically track the use of these passwords, recording such data as which files were
accessed under particular passwords and so on. Another security measure is to store a system’s
data on a separate device or medium that is normally inaccessible through the computer system.
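The tracking scheme described above can be sketched in a few lines. The `AuditLog` class below is purely illustrative (it is not from any particular operating system), and it assumes each user has already authenticated with an individual password before an access is recorded:

```python
import datetime

class AuditLog:
    """Minimal sketch of per-user access tracking (illustrative only)."""
    def __init__(self):
        self.entries = []

    def record(self, user, filename, action):
        # Each access is time-stamped so an administrator can later
        # review which files were accessed under which account.
        self.entries.append({
            "time": datetime.datetime.now().isoformat(),
            "user": user,
            "file": filename,
            "action": action,
        })

    def accesses_by(self, user):
        # Filter the trail for a single user, as a reviewer would.
        return [e for e in self.entries if e["user"] == user]

log = AuditLog()
log.record("alice", "payroll.xlsx", "read")
log.record("bob", "payroll.xlsx", "write")
print(len(log.accesses_by("alice")))  # 1
```

As the paragraph suggests, a real system would write such records to append-only storage kept apart from the monitored system, so an intruder cannot quietly erase the trail.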
A system is said to be secure if its resources are used and accessed as intended under all
the circumstances, but no system can guarantee absolute security from malicious threats and
unauthorised access.
The security of a system can be threatened via two violations:
•• Threat: A programme that has the potential to cause serious damage to the system.
•• Attack: An attempt to break security and make unauthorised use of an asset.
organisation, then in that case the data for that employee in all departments, like accounts,
should be updated to reflect the status JOB LEFT so that data is complete and accurate;
in addition, only an authorised person should be allowed to edit employee
data.
3. Availability: means information must be available when needed. For example, if one
needs to access the information of a particular employee to check whether the employee has
exceeded the permitted number of leaves, it requires collaboration from different
organisational teams like network operations, development operations, incident
response and policy/change management. A denial-of-service attack is one of the factors
that can hamper the availability of information.
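As a rough illustration of defending availability, a service can refuse traffic from any client that exceeds a request budget. The `RateLimiter` below is a minimal sliding-window sketch, not a production denial-of-service defence; the limits and client names are hypothetical:

```python
import time
from collections import deque

class RateLimiter:
    """Sketch: allow at most `limit` requests per `window` seconds per client."""
    def __init__(self, limit=5, window=1.0):
        self.limit, self.window = limit, window
        self.hits = {}

    def allow(self, client: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(client, deque())
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # this client is flooding: reject the request
        q.append(now)
        return True

rl = RateLimiter(limit=3, window=1.0)
results = [rl.allow("attacker", now=0.1 * i) for i in range(5)]
print(results)  # [True, True, True, False, False]
```

Note that one flooding client is throttled while other clients keep their own, separate budgets, which is exactly the availability property the text describes.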
[Figure: Components of computer security — confidentiality, integrity and availability, together with authenticity and utility.]
1.3.1 Attack
An attack is an assault on system security that derives from an intelligent threat; that is, an intelligent act
that is a deliberate attempt (especially in the sense of a method or technique) to evade security
services and violate the security policy of a system.
attacks accomplish this by flooding the target with traffic, or sending it information
that triggers a crash.
II. Masquerade. A masquerade attack takes place when one entity pretends to be a
different entity. A masquerade attack involves one of the other forms of active attack.
If an authorisation procedure is not fully protected, it can become extremely
vulnerable to a masquerade attack.
III. Modification of messages. It means that some portion of a message is altered or
that message is delayed or reordered to produce an unauthorised effect. Modification
is an attack on the integrity of the original data. It basically means that unauthorised
parties not only gain access to data but also spoof the data by triggering denial-of-
service attacks, such as altering transmitted data packets or flooding the network
with fake data. Modification also undermines authentication: for example, a message
meaning “Allow JOHN to read confidential file X” is modified to “Allow Smith to read
confidential file X”.
IV. Repudiation. This attack occurs when the network is not completely secured or the
login control has been tampered with. With this attack, the author’s information can
be changed by actions of a malicious user in order to save false data in log files, up
to the general manipulation of data on behalf of others, similar to the spoofing of
e-mail messages.
V. Replay. It involves the passive capture of a message and its subsequent retransmission
to produce an unauthorised effect. In this attack, the basic aim of the attacker is to save
a copy of the data originally present on that particular network and later on use this
data for personal uses. Once the data is corrupted or leaked it is insecure and unsafe
for the users.
VI. Denial of Service. It prevents the normal use of communication facilities. This
attack may have a specific target. For example, an entity may suppress all messages
directed to a particular destination. Another form of service denial is the disruption of
an entire network either by disabling the network or by overloading it with messages
so as to degrade performance.
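Two of the active attacks above can be countered with standard techniques: a message authentication code reveals modification, and a remembered nonce defeats replay. The sketch below uses Python's standard `hmac` and `secrets` modules; the shared key and the messages are illustrative:

```python
import hmac, hashlib, secrets

# 1. Detecting modification: sender and receiver share a secret key.
key = b"shared-secret"  # illustrative key, never hard-code one in practice

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

original = b"Allow JOHN to read confidential file X"
t = tag(original)
tampered = b"Allow Smith to read confidential file X"
print(hmac.compare_digest(t, tag(original)))  # True: message unmodified
print(hmac.compare_digest(t, tag(tampered)))  # False: modification detected

# 2. Rejecting replay: the receiver remembers nonces it has already accepted.
seen = set()

def accept(message: bytes, nonce: str) -> bool:
    if nonce in seen:
        return False  # same nonce again: a captured copy being replayed
    seen.add(nonce)
    return True

n = secrets.token_hex(16)
print(accept(original, n))  # True: first, legitimate delivery
print(accept(original, n))  # False: replayed capture is rejected
```

Real protocols (e.g. TLS) combine both ideas, binding each message to a key and to a sequence number or nonce.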
A cyber-attack is any type of offensive action that targets computer information systems,
infrastructures, computer networks or personal computer devices, using various methods to
steal, alter or destroy data or information systems.
Computer Security Threats
We are living in a digital era. Nowadays, most people use computers and the internet. Due
to this dependency on digital things, illegal computer activity is growing and changing like
any other type of crime.
Security threats are interruption, interception, fabrication and modification. Attack is
a deliberate unauthorised action on a system or asset. Attack can be classified as active and
passive attack. An attack will have a motive and will follow a method when opportunity arises.
Information Security threats can be many like Software attacks, theft of intellectual property,
identity theft, theft of equipment or information, sabotage, and information extortion.
Computer security threats are potential threats to your computer's efficient operation
and performance. These could be harmless adware or a dangerous Trojan infection. As the world
becomes more digital, computer security concerns are always developing. A threat in a computer
system is a potential danger that could jeopardise your data security. At times, the damage
is irreversible. A potential for violation of security exists when there is a circumstance,
capability, action, or event that could breach security and cause harm. That is, a threat is a
possible danger that might exploit a vulnerability.
Types of Threats
A security threat is a threat that has the potential to harm computer systems and organisations.
The cause could be physical, such as a computer containing sensitive information being stolen.
It’s also possible that the cause isn’t physical, such as a viral attack.
1. Physical Threats: A physical threat to computer systems is a potential cause of
an occurrence/event that could result in data loss or physical damage. It can be classified as:
•• Internal: Short circuits, fire, unstable power supply, hardware failure due to
excess humidity, etc.
•• External: Disasters such as floods, earthquakes, landslides, etc.
•• Human: Destruction of infrastructure and/or hardware, theft, disruption, and
unintentional/intentional errors.
2. Non-Physical Threats: A non-physical threat is a potential source of an incident
that could result in:
•• Hampering of business operations that depend on computer systems.
•• Loss of sensitive data or information.
•• Illegal tracking of others' computer system activities.
•• Hacking of users' IDs and passwords, etc.
A cyber security threat is a malicious and deliberate attack by an individual or organisation
to gain unauthorised access to another individual's or organisation's network to damage, disrupt,
or steal IT assets, computer networks, intellectual property, or any other form of sensitive data.
Types of Cyber Security Threats
Malware Attacks
Malware is an abbreviation of “malicious software”, which includes viruses, worms, Trojans,
spyware, and ransomware, and is the most common type of cyber-attack. Malware infiltrates a
system, usually via a link on an untrusted website or email or an unwanted software download.
It deploys on the target system, collects sensitive data, manipulates and blocks access to network
components, and may destroy data or shut down the system altogether.
[Figure: Common types of malware — viruses, worms, Trojans, ransomware, rootkits, fileless malware and malvertising.]
(SQL), making them vulnerable to SQL injection. A new variant on this attack is NoSQL
attacks, targeted against databases that do not use a relational data structure.
•• Code injection—an attacker can inject code into an application if it is vulnerable.
The web server executes the malicious code as if it were part of the application.
•• OS command injection—an attacker can exploit a command injection vulnerability
to input commands for the operating system to execute. This allows the attack to
exfiltrate OS data or take over the system.
•• LDAP injection—an attacker inputs characters to alter Lightweight Directory Access
Protocol (LDAP) queries. A system is vulnerable if it uses unsanitised LDAP queries.
These attacks are very severe because LDAP servers may store user accounts and
credentials for an entire organisation.
•• XML External Entities (XXE) injection—an attack is carried out using specially
constructed XML documents. This differs from other attack vectors because it exploits
inherent vulnerabilities in legacy XML parsers rather than unvalidated user inputs.
XML documents can be used to traverse paths, execute code remotely and execute
server-side request forgery (SSRF).
•• Cross-Site Scripting (XSS)—an attacker inputs a string of text containing malicious
JavaScript. The target’s browser executes the code, enabling the attacker to redirect
users to a malicious website or steal session cookies to hijack a user’s session. An
application is vulnerable to XSS if it doesn’t sanitise user inputs to remove JavaScript
code.
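The last two injection flaws above have well-known remedies: escape untrusted input before rendering it as HTML, and bind values with query placeholders instead of building SQL strings by concatenation. A minimal sketch using Python's standard `html` and `sqlite3` modules (the table and the payloads are illustrative):

```python
import html
import sqlite3

# 1. XSS: escape user input before echoing it into an HTML page.
user_input = '<script>alert("stolen cookies")</script>'
safe = html.escape(user_input)
# The browser now renders the payload as visible text, not as code.
print(safe.startswith("&lt;script&gt;"))  # True

# 2. SQL injection: bind values with placeholders, never concatenate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))
malicious = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # []: the payload is treated as data, so no rows match
```

Had the query been assembled with string concatenation, the `' OR '1'='1` fragment would have become part of the SQL and matched every row.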
administer a test-run during off-hours so that you can evaluate the accuracy of results
and make adjustments where necessary.
2. Evaluate vulnerabilities
After identifying vulnerabilities, the next step is to evaluate the risk they pose to your
business using a cyber security vulnerability assessment. Vulnerability assessments allow
you to assign risk levels to identified threats so that you can prioritise remediation efforts.
Effective assessments also enhance compliance efforts as they ensure that vulnerabilities
are addressed before they can be exploited.
3. Address vulnerabilities
Once a vulnerability's risk level has been determined, you then need to treat the vulnerability.
The different ways you can treat a vulnerability include:
•• Remediation: Vulnerability remediation involves completely fixing or patching a
vulnerability. This is the preferred treatment of vulnerabilities as it eliminates risk.
•• Mitigation: Mitigation involves taking steps to lessen the likelihood of a vulnerability
being exploited. Vulnerability mitigation is typically performed as a means to buy time
until a proper patch is available.
•• Acceptance: Taking no action to address a vulnerability is justified when an organisation
deems it to have a low risk. This is also justifiable when the cost of addressing the
vulnerability is greater than the cost incurred if it were to be exploited.
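Steps 2 and 3 amount to ranking findings by risk and choosing a treatment for each. A toy prioritisation pass might look like the following; the vulnerability names, severity scores (loosely modelled on CVSS-style 0–10 scores) and treatments are all hypothetical:

```python
# Hypothetical findings from a vulnerability assessment.
vulnerabilities = [
    {"id": "unpatched-openssl",    "severity": 9.8, "treatment": "remediation"},
    {"id": "verbose-error-pages",  "severity": 3.1, "treatment": "acceptance"},
    {"id": "weak-admin-password",  "severity": 7.5, "treatment": "mitigation"},
]

# Address the highest-risk findings first, as step 2 recommends.
queue = sorted(vulnerabilities, key=lambda v: v["severity"], reverse=True)
for v in queue:
    print(v["id"], v["treatment"])
```

Real programmes add factors beyond raw severity, such as asset criticality and exploit availability, but the ordering principle is the same.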
Common types of cyber security vulnerabilities
When building a vulnerability management programme, there are several key cyber security
vulnerabilities that you must be aware of. Below are six of the most common types of cyber
security vulnerabilities:
1. System misconfigurations. System misconfigurations occur as a result of network
assets having vulnerable settings or disparate security controls. A common tactic
cybercriminals use is to probe networks for system misconfigurations and gaps that can
be exploited. As more organisations adopt digital solutions, the likelihood of network
misconfigurations grows, so it is important to work with experienced security professionals
when implementing new technologies.
2. Out of date or unpatched software. Unpatched vulnerabilities can be exploited
by cybercriminals to carry out attacks and steal valuable data. Similar to system
misconfigurations, cyber adversaries will probe networks looking for unpatched systems
they can compromise. To limit this risk, it is important to establish a patch management
schedule so that all new system patches are implemented as soon as they are released.
3. Missing or weak authorisation credentials. A common tactic attackers employ is
to brute force their way into a network by guessing employee credentials. It is important
to educate employees on cyber security best practices so that their login information
cannot be easily exploited to gain access to a network.
4. Malicious insider threats. Whether unknowingly or with malicious intent, employees
who have access to critical systems can share information that allows cybercriminals to
breach a network. Insider threats can be difficult to track since all actions taken by
employees will appear legitimate and therefore raise little to no red flags. To help combat
these threats, consider investing in network access control solutions, and segment your
network based on employee seniority and expertise.
5. Missing or poor data encryption. Networks with missing or poor encryption allow
attackers to intercept communication between systems, leading to a breach. When
poorly encrypted or unencrypted information is intercepted, cyber adversaries are able to extract
critical information and inject false information onto a server. This can undermine
an organisation’s cyber security compliance efforts and lead to substantial fines from
regulatory bodies.
6. Zero-day vulnerabilities. Zero-day threats are specific software vulnerabilities that
are known to the attacker but have not yet been identified by an organisation. This
means that there is no available fix since the vulnerability has not yet been reported to
the system vendor. These are extremely dangerous as there is no way to defend against
them until after the attack has been carried out. It is important to remain diligent and
continuously monitor your systems for vulnerabilities in order to limit the likelihood of
a zero-day attack.
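Several of the weaknesses above, notably missing or weak credentials, can be screened for automatically. The `is_weak` check below is a simplified sketch of a password policy, not a complete strength estimator; the blocklist and thresholds are illustrative choices:

```python
import re

def is_weak(password: str) -> bool:
    """Flag passwords likely to fall to guessing or brute force (sketch)."""
    common = {"password", "123456", "qwerty", "letmein"}
    if password.lower() in common:
        return True          # on every attacker's first-guess list
    if len(password) < 12:
        return True          # too short to resist brute force
    # Require a mix of at least three character classes.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, password)) for c in classes) < 3

print(is_weak("letmein"))                 # True
print(is_weak("Tr1cky-Passphrase-2024"))  # False
```

Such a check complements, but does not replace, the employee education the text recommends, since it cannot detect a strong password that has been reused or phished.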
First, any incorrect web address you enter for a server will take you to the 404 error
page. And, since there is an infinite number of ways to enter a web address incorrectly, this
offers hackers the opportunity to create an infinite number of malicious links linking to the same
error page. Theoretically, hackers could send out a million emails each with their own unique
malicious link. That’s pretty appealing. The other thing that makes error pages interesting to
hackers is that they can customise them to be anything they want. They don't actually have to show
a message saying “404 Page Not Found Error.” They can do anything they want on that page,
including creating a sign-in box on a fake landing page to grab your credentials.
to click advertisements. However, there are also instances where hackers use hijacked
browsers to intercept sensitive information and even make unwitting victims download
additional malware.
In some cases, victims willingly download a browser add-on or toolbar plug-in that’s
bundled with browser hijacking capabilities. Usually, though, these developers go to
great lengths to hide this fact. In other instances, hackers might exploit security flaws
within browsers to force victims to install their browser hijacker, also known as
hijackware.
II. Session hijacking. Session hijacking is a type of computer hijacking where hackers
gain unauthorised access to a victim’s online account or profile by intercepting or
cracking session tokens. Session tokens are cookies sent from a web server to users
to verify their identity and website settings. If a hacker successfully cracks a user’s
session token, the results can range from eavesdropping to the insertion of malicious
JavaScript programs.
Session hijacking was a common mode of attack for hackers in the early 2000s because
the first version of Hypertext Transfer Protocol (HTTP) wasn’t designed to adequately
protect cookies. However, in recent years, modern encryption and newer standards,
like HTTP Secure (HTTPS), do a better job of protecting cookie data. Better cookie
protection makes session hijacking less likely, albeit not impossible.
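The defence against token cracking mentioned above starts with session tokens that cannot be guessed. A minimal sketch using Python's standard `secrets` module (the 32-byte length is a common choice, not a mandated standard):

```python
import secrets

def new_session_token() -> str:
    """Generate an unpredictable session token from a cryptographic RNG."""
    return secrets.token_urlsafe(32)  # 32 random bytes, URL-safe encoded

# Tokens drawn from a cryptographic RNG cannot be guessed or "cracked"
# the way short, predictable tokens of the early-HTTP era could.
a, b = new_session_token(), new_session_token()
print(len(a) >= 43)  # True: 32 bytes encode to about 43 URL-safe characters
print(a != b)        # True: collisions are astronomically unlikely
```

The token must still travel only over HTTPS and carry `Secure`/`HttpOnly` cookie attributes; randomness protects against guessing, not against interception.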
III. Domain hijacking. When a person or group tries to seize ownership of a web
domain from its rightful owner, they are attempting a domain hijacking. For example,
a cybercriminal could submit phony domain transfer requests in hopes of securing
a trusted domain to orchestrate sophisticated phishing campaigns. At the other end
of the spectrum, a company that owns a trademarked brand name could use legal
threats to pressure the owner of the web domain to transfer rights. These corporate
takeover attempts are called reverse domain hijacking.
IV. Clipboard hijacking. When you use your device to copy and paste images, text and
other information, the act of copying temporarily stores that data in random access
memory (RAM). This section of RAM is known as the clipboard. Clipboard hijacking
happens when hackers replace the contents of a victim's clipboard with their own,
often malicious, content. Depending on the technical ability of the attacker, clipboard
hijacking can be hard to detect and may be spread inadvertently by victims when
they paste information into web forms.
V. DNS hijacking. DNS hijacking and domain hijacking are similar in that both are
attempts to hijack control of a web domain. DNS hijacking describes the takeover in
a technical sense, however, whereas domain hijacking is a takeover by way of legal
coercion or social engineering.
Hackers and cybercriminals find DNS hijacking attractive because, similar to browser
hijacking, successful DNS attacks enable them to redirect a victim’s traffic in order to
generate revenue through ads, create cloned websites to steal private data and even
censor or control the free flow of information. There are several ways hackers might
carry out a DNS hijack. For example, they could attack vulnerabilities in the hardware
and software systems used by DNS providers or install malware on a victim’s machine
that is programmed to change DNS settings. Hackers could even turn to man-in-
the-middle (MitM) attacks to take control of an established connection while it is in
progress to intercept DNS messages, either simply to gain access to the messages or to
enable the attacker to modify them before retransmission, or use DNS spoofing to
divert traffic away from valid servers and toward illegitimate servers. The DNS maps
names of websites to IP addresses used by a computer to locate a website. DNS is
often a target for cyber hijacking purposes.
VI. IP hijacking. Routers used by internet service providers (ISPs) rely on a routing
protocol known as the Border Gateway Protocol (BGP). BGP is designed so that
routers operated by one provider can announce to routers operated by other providers
the IP address blocks they own.
IP hijacking happens when an attacker hacks or masquerades as an internet provider
claiming to own an IP address it doesn’t. When this happens, traffic destined for one
network is redirected to the hacker’s network. The hacker then becomes a man in the
middle and can carry out a range of attacks, from eavesdropping to packet injection
(covertly inserting forged packets into a communication stream) and more.
VII. Page hijacking. Also known as 302 redirect hijacking or Uniform Resource Locator
(URL) hijacking, a page hijacking attack tricks the web crawlers used by search engines
into redirecting traffic the hacker’s way. The web community introduced 302 HTTP responses
to provide website owners a way to temporarily redirect users and search engine
crawlers to a different URL in cases where a website is undergoing maintenance or
testing.
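At the protocol level, such a temporary redirect is simply an HTTP response with status code 302 and a Location header (the URL below is illustrative):

```http
HTTP/1.1 302 Found
Location: https://example.com/under-maintenance
Content-Length: 0
```

A compliant browser or crawler receiving this response fetches the URL given in the Location header instead of the original page.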
Bad actors realised that, by implementing carefully planned 302 redirects, they could
take over a victim’s site in search engine results. This is because web crawlers would
mistake a new page created and owned by the hijacker as an honest redirect from
the old page. Essentially, all of the victim’s page authority and ranking signals would
be transferred over to the hijacker’s page due to the false assumption by the web
crawler that the victim configured the redirect. While still technically possible, the
number of page hijackings decreased as web crawlers became more sophisticated.
1.6.3 Buffer Overflow in Control Hijacking
Buffer Overflow
Buffers are memory storage regions that temporarily hold data while it is being transferred from
one location to another. A buffer overflow (or buffer overrun) occurs when the volume of data
exceeds the storage capacity of the memory buffer. As a result, the programme attempting to
write the data to the buffer overwrites adjacent memory locations.
For example, a buffer for log-in credentials may be designed to expect username and
password inputs of 8 bytes, so if a transaction involves an input of 10 bytes (that is, 2 bytes
more than expected), the programme may write the excess data past the buffer boundary.
Buffer overflows can affect all types of software. They typically result from malformed
inputs or failure to allocate enough space for the buffer. If the transaction overwrites executable
code, it can cause the programme to behave unpredictably and generate incorrect results,
memory access errors, or crashes.
Buffer Overflow
Data:    P  A  S  S  W  O  R  D  |  1  2
Offset:  0  1  2  3  4  5  6  7  |  8  9
         |---- buffer (8 bytes) --|-- overflow (2 bytes)
1. Buffers are memory storage regions that temporarily hold data while it is being
transferred from one location to another.
2. A buffer overflow (or buffer overrun) occurs when the volume of data exceeds the
storage capacity of the memory buffer.
3. As a result, the programme attempting to write the data to the buffer overwrites
adjacent memory locations.
4. Attackers exploit buffer overflow issues by overwriting the memory of an application.
5. This changes the execution path of the programme, triggering a response that damages
files or exposes private information.
Buffer Overflow Attack
Attackers exploit buffer overflow issues by overwriting the memory of an application. This
changes the execution path of the programme, triggering a response that damages files or
exposes private information. For example, an attacker may introduce extra code, sending new
instructions to the application to gain access to IT systems.
If attackers know the memory layout of a program, they can intentionally feed input that
the buffer cannot store, and overwrite areas that hold executable code, replacing it with their
own code. For example, an attacker can overwrite a pointer (an object that points to another
area in memory) and point it to an exploit payload, to gain control over the programme.
Types of Buffer Overflow Attacks
Stack-based buffer overflows are more common, and leverage stack memory that only exists
during the execution time of a function.
Heap-based attacks are harder to carry out and involve flooding the memory space
allocated for a programme beyond memory used for current runtime operations.
C and C++ are two languages that are highly susceptible to buffer overflow attacks, as
they don’t have built-in safeguards against overwriting or accessing data in their memory. Mac
OSX, Windows, and Linux all use code written in C and C++. Languages such as PERL, Java,
JavaScript, and C# use built-in safety mechanisms that minimise the likelihood of buffer
overflow.
Preventions of Buffer Overflows
Developers can protect against buffer overflow vulnerabilities via security measures in their
code, or by using languages that offer built-in protection.
as a countermeasure against control-flow hijacking attacks, and much of current research
effort focuses on making CFI fast and practical. As for protections at step 5, non-executable (NX)
policies, such as W^X or DEP, prevent the execution of an injected payload.
(Figure: attack steps. Step 3, randomisation, obscures the address of the gadget/shellcode
that a hijacked jump would otherwise target.)
Below are some format parameters which can be used and their consequences:
•• “%x” Read data from the stack
•• “%s” Read character strings from the process’ memory
•• “%n” Write an integer to locations in the process’ memory
To discover whether the application is vulnerable to this type of attack, it’s necessary to
verify if the format function accepts and parses the format string parameters shown in Table 1.2.
Table 1.2. Format String Parameters
Parameters Output Passed as
%% % character (literal) Reference
%p External representation of a pointer to void Reference
%d Decimal Value
%u Unsigned decimal Value
%x Hexadecimal Value
%s String Reference
%n Writes the number of characters into pointer Reference
A format string is an ASCII string that contains text and format parameters.
// A statement with format string
printf(“my name is : %s\n”, “Akash”);
// Output
// my name is : Akash
There are several format strings that specify output in C and many other programming
languages but our focus is on C.
Format string vulnerabilities are a class of bug that take advantage of an easily avoidable
programmer error. If the programmer passes an attacker-controlled buffer as an argument to
a printf (or any of the related functions, including sprintf, fprintf, etc), the attacker can perform
writes to arbitrary memory addresses. The following programme contains such an error:
// A simple C programme with format
// string vulnerability
#include <stdio.h>

int main(int argc, char *argv[])
{
    // User-controlled input is used directly as the format string
    printf(argv[1]);
    return 0;
}
Since printf has a variable number of arguments, it must use the format string to determine
the number of arguments. In the case above, the attacker can pass the string “%p %p %p %p %p
%p %p %p %p %p %p %p %p %p %p” and fool the printf into thinking it has 15 arguments.
It will naively print the next 15 addresses on the stack, thinking they are its arguments:
$ ./a.out “%p %p %p %p %p %p %p %p %p %p %p %p %p %p %p”
0xffffdddd 0x64 0xf7ec1289 0xffffdbdf 0xffffdbde (nil) 0xffffdcc4 0xffffdc64 (nil)
0x25207025 0x70252070 0x20702520 0x25207025 0x70252070 0x20702520
At about 10 arguments up the stack, we can see a repeating pattern of 0x252070 – those
are our %ps on the stack! We start our string with AAAA to see this more explicitly:
$ ./a.out “AAAA%p %p %p %p %p %p %p %p %p %p”
AAAA0xffffdde8 0x64 0xf7ec1289 0xffffdbef 0xffffdbee (nil) 0xffffdcd4 0xffffdc74 (nil)
0x41414141
The 0x41414141 is the hex representation of AAAA. We now have a way to pass an
arbitrary value (in this case, we’re passing 0x41414141) as an argument to printf. At this point
we will take advantage of another format string feature: in a format specifier, we can also select
a specific argument. For example, printf(“%2$x”, 1, 2, 3) will print 2. In general, we can use
“%n$x” to select the n-th argument to printf. In our case, we see that 0x41414141 is
the 10th argument to printf, so we can simplify our string:
$ ./a.out ‘AAAA%10$p’
AAAA0x41414141
Preventing Format String Vulnerabilities
•• Always specify the format string as part of the programme, not as an input. Most format
string vulnerabilities are solved by specifying “%s” as the format string and not using the
data string as the format string.
•• If possible, make the format string a constant. Extract all the variable parts as other
arguments to the call. Difficult to do with some internationalization libraries
•• If the above two practices are not possible, use defences such as FormatGuard. Rare
at design time. Perhaps a way to keep using a legacy application and keep costs down.
Increase trust that a third-party application will be safe.
Vulnerable Code
The line printf(argv[1]); in the example is vulnerable. If you compile the programme and run it:
./example “Hello World %s%s%s%s%s%s”
The printf in the second line will interpret the %s%s%s%s%s%s in the input string as a
reference to string pointers, so it will try to interpret every %s as a pointer to a string, starting
from the location of the buffer (probably on the Stack). At some point, it will get to an invalid
address, and attempting to access it will cause the programme to crash.
Different Payloads
An attacker can also use this to get information, not just crash the software.
For example, running:
./example “Hello World %p %p %p %p %p %p”
Will print the lines:
Hello World %p %p %p %p %p %p
Hello World 000E133E 000E133E 0057F000 CCCCCCCC CCCCCCCCCCCCCCCC
The first line is printed from the non-vulnerable version of printf, and the second line
from the vulnerable line. The values printed after the “Hello World” text are the values on the
stack of the computer at the moment of running this example.
Also reading and writing to any memory location is possible in some conditions, and
even code execution.
Format String Vulnerabilities in Web Applications
Web applications are generally written in higher-level languages than C, so are format
string vulnerabilities at all relevant to web application security? Many back-end applications,
such as web servers, are written in C/C++, so it is certainly possible for user inputs from a
web application to make it through to a vulnerable C programme, even if just to crash the web
server. But what about typical web application languages, like PHP, Python or JavaScript?
Format String Vulnerabilities in Python
From version 2.7 onwards, Python includes a new set of string formatting functions. These
provide far greater capabilities than pre-2.7 formatting constructs but can also open up interesting
attack vectors.
Every Python string has a format() method. A format string that replicates the first
example given for C might be:
print("Directory {} contains {} files".format("Work", 42))
This simply replaces each {} placeholder with the corresponding argument to the
format() method.
However, format() can also take an object and access its attributes to complete the
format string. This is convenient but can have unexpected consequences. To illustrate this,
let’s define the DirData class to use as the information object. Let’s also say there is a
confidential value stored in a global variable in the same module:
SECRET_VALUE = "passwd123"

class DirData:
    def __init__(self):
        self.name = "Work"
        self.noOfFiles = 42

print("Directory {dirInfo.name} contains {dirInfo.noOfFiles} files"
      .format(dirInfo=DirData()))
So far, this is just another way to get the same output. But Python objects can access lots
of internal attributes, including a dictionary of global variables. By stringing attributes together,
it is possible to get at the secret value:
print("The secret is {dirInfo.__init__.__globals__[SECRET_VALUE]}"
      .format(dirInfo=DirData()))
Output:
The secret is passwd123
Again, the surest general way of eliminating such vulnerabilities is to avoid including
unvalidated user inputs in format strings wherever possible. And as with so many other
vulnerabilities, always sanitise external application inputs before using them.
To spray the heap, attackers have to be able to allocate objects whose contents they control in an application’s heap.
The most common method used by attackers to achieve this goal is to target an application,
such as a web browser, which executes an interpreter as part of its operation. By providing a
web page with embedded JavaScript, an attacker can induce the interpreter to allocate their
objects, allowing the spraying to occur.
When implementing dynamic memory managers, developers face lots of challenges,
including heap fragmentation. A common solution is to allocate memory in chunks of a fixed
size. Usually, a heap manager has its own preferences for a chunk’s size as well as one or several
reserved pools that allocate these chunks. Heap spraying makes a targeted process continuously
allocate memory with required content block by block, banking on one of the allocations placing
shellcode at the required address (without checking any conditions).
A heap spray itself doesn’t exploit any security issues, but it can be used to make an
existing vulnerability easier to exploit. It’s essential to understand how attackers use the heap
spraying technique to know how to mitigate it. Here’s what an average attack looks like:
(Figure: How heap spraying affects the process memory. Before the attack, roughly the first
100 MB of the heap is allocated and the space above it is free. After the attack, spray
allocations fill the region from 100 MB up to 200 MB with the heap spray, so control flow
jumps into the sprayed region after the bug occurs.)
The three steps towards securing your application from heap spray execution are:
I. Intercepting the NtAllocateVirtualMemory call
II. Making executable memory non-executable during the attempt to allocate it
III. Registering a structured exception handler (SEH) to handle exceptions that occur as
a result of the execution of non-executable memory.
Preventing heap spraying attacks
To mitigate heap spraying attacks, take the following steps:
1. Form an allocation history
2. Detect shellcode execution
3. Detect a spray
Forming an allocation history
To intercept the execution of dynamically allocated memory, the PAGE_EXECUTE_READWRITE
flag will be changed to PAGE_READWRITE.
Structure for saving allocations
struct _Allocation_Info
{
    void   *baseAddress;
    size_t  size;
    ULONG   protect;
};
Next, a hook for NtAllocateVirtualMemory will be defined. This hook will reset the PAGE_
EXECUTE_READWRITE flag and save allocations for which the flag was reset:
NTSTATUS WINAPI hookNtAllocateVirtualMemory(
    HANDLE ProcessHandle,
    PVOID *BaseAddress,
    ULONG_PTR ZeroBits,
    PSIZE_T RegionSize,
    ULONG AllocationType,
    ULONG Protect
);
Once we set the hook, any memory allocation with the PAGE_EXECUTE_READWRITE
bit will be modified. When there’s an attempt to pass control to this memory, the processor will
generate an exception that we can detect and analyse. Here we ignore multithreading issues.
However, in real life, it’s better to store allocations of each flow separately, since the
shellcode execution is expected to be single-threaded.
Even after a fix is developed, the fewer the days, the higher the probability that an attack
against the afflicted software will be successful, because not every user of that software
will have applied the fix.
For zero-day exploits, unless the vulnerability is inadvertently fixed, for example, by
an unrelated update that happens to fix the vulnerability, the probability that a user has
applied a vendor-supplied patch that fixes the problem is zero, so the exploit would
remain available. Zero-day attacks are a severe threat.
8. What is Computer Security?
Ans. Computer security involves controls to protect computer systems, networks, and data
from breach, damage, or theft. It encompasses the components of computer systems
and the security controls used to protect them.
9. What are the elements of cyber security?
Ans. Major elements of cyber security are:
•• Information security
•• Network security
•• Operational security
•• Application security
•• End-user education
•• Business continuity planning
10. What is CIA?
Ans. Confidentiality, Integrity, and Availability (CIA) is a popular model which is designed to
develop a security policy. CIA model consists of three concepts:
Confidentiality: Ensure the sensitive data is accessed only by an authorised user.
Integrity: Ensure the information is accurate, complete, and has not been modified by unauthorised parties.
Availability: Ensure the data and resources are available for users who need them.
11. Define SQL Injection.
Ans. It is an attack that injects malicious SQL statements into a database. It takes advantage
of the design flaws in poorly designed web applications to exploit SQL statements
and execute malicious SQL code. In many situations, an attacker can escalate an SQL
injection attack in order to perform other attacks, e.g. a denial-of-service attack.
12. Explain social engineering and its attacks.
Ans. Social engineering is the term used to convince people to reveal confidential information.
There are mainly three types of social engineering attacks:
1. Human-based,
2. Mobile-based
3. Computer-based.
Human-based attack: The attacker may pretend to be a genuine user and request that a
higher authority reveal private and confidential information of the organisation.
Computer-based attack: In this attack, attackers send fake emails to harm the computer.
They ask people to forward such emails.
Mobile-based attack: The attacker may send SMS messages to others and collect important
information. If any user downloads a malicious app, it can be misused to access
authentication information.
13. What is a computer virus?
Ans. A virus is malicious software that is executed without the user’s consent. Viruses can
consume computer resources, such as CPU time and memory. Sometimes, the virus
makes changes in other computer programmes and inserts its own code to harm the
computer system. A computer virus may be used to:
•• Access private data like user id and passwords
•• Display annoying messages to the user
•• Corrupt data in your computer
•• Log the user’s keystrokes
14. What is a computer worm?
Ans. A computer worm is an independent malicious computer programme that replicates
itself to spread to other computers. Often, it uses a computer network to spread, relying
on security failures on the target computer to gain access.
15. What is a Trojan horse?
Ans. A Trojan horse is a malicious computer programme that presents itself as legitimate
software. It hides malware inside a file that has a normal appearance.
16. What is a Trap door?
Ans. Trap doors, also known as backdoors, are code fragments embedded in programmes
by the programmer(s) to allow quick access later, often during the testing or debugging
phase. If an inattentive programmer leaves this code or forgets to remove it, a potential
security hole is introduced.
17. Name some common types of non-physical threats.
Ans. Following are various types of non-physical threats:
•• Trojans
•• Adware
•• Worms
•• Spyware
•• Denial of Service Attacks
•• Distributed Denial of Service Attacks
•• Virus
•• Key loggers
•• Unauthorised access to computer systems resources
•• Phishing
28. Which of the following is an independent malicious programme that does not require
any other programme?
(a) Trap door (b) Trojan horse
(c) Virus (d) Worm
29. The _______ is a code that recognises a special input sequence or is triggered by an
unlikely sequence of events.
(a) Trap door (b) Trojan horse
(c) Logic bomb (d) Virus
30. The _______ is a code embedded in a legitimate program configured to “explode” when
certain conditions are met.
(a) Trap door (b) Trojan horse
(c) Logic bomb (d) Virus
31. Which of the following malware does not replicate automatically?
(a) Trojan horse (b) Virus
(c) Worm (d) Zombie
32. ________ is a form of virus explicitly designed to avoid detection by antivirus software.
(a) Stealth virus (b) Polymorphic virus
(c) Parasitic virus (d) Macro virus
33. In which of the following is a person constantly followed/chased by another person or
a group of several people?
(a) Phishing (b) Bullying
(c) Stalking (d) Identity theft
34. What is meant by marketplace for vulnerability?
(a) A market vulnerable to attacks
(b) A market consisting of vulnerable consumer
(c) A market to sell and purchase vulnerabilities
(d) All of the above
35. Which of the following is considered as the world’s first antivirus programme?
(a) Creeper (b) Reaper
(c) Tinkered (d) Ray Tomlinson
36. Hackers usually use a computer virus for ______ purpose.
(a) To log and monitor each and every user’s keystroke
(b) To gain access to sensitive information like user IDs and passwords
(c) To corrupt the user’s data stored in the computer system
(d) All of the above
37. Which of the following malware types allows the attacker to access the administrative
controls and enables him or her to do almost anything he wants with the infected
computers?
(a) RATs (b) Worms
(c) Rootkits (d) Botnets
38. All of the following are examples of real security and privacy risks EXCEPT:
(a) hackers (b) spam
(c) viruses (d) identity theft
39. Which of the following are the types of scanning?
(a) Network, vulnerability, and port scanning
(b) Port, network, and services
(c) Client, Server, and network
(d) None of the above
40. Malicious access is unauthorised
(a) Destruction of data (b) Modification of data
(c) Reading of data (d) All of these
Answers
1. (a) 2. (d) 3. (d) 4. (b) 5. (d) 6. (d)
7. (c) 8. (c) 9. (a) 10. (b) 11. (a) 12. (c)
13. (c) 14. (c) 15. (c) 16. (b) 17. (c) 18. (d)
19. (a) 20. (d) 21. (a) 22. (d) 23. (d) 24. (a)
25. (b) 26. (a) 27. (e) 28. (d) 29. (a) 30. (c)
31. (a) 32. (a) 33. (c) 34. (c) 35. (c) 36. (a)
37. (a) 38. (b) 39. (a) 40. (c)
2 Confidentiality Policies
The following list offers some important considerations when developing an information
security policy:
1. Purpose: First state the purpose of the policy, which may be to:
•• Create an overall approach to information security.
•• Detect and preempt information security breaches such as misuse of networks,
data, applications, and computer systems.
•• Maintain the reputation of the organisation, and uphold ethical and legal
responsibilities.
•• Respect customer rights, including how to react to inquiries and complaints about
non-compliance.
2. Audience: Define the audience to whom the information security policy applies. You
may also specify which audiences are out of the scope of the policy (for example,
staff in another business unit which manages security separately may not be in the
scope of the policy).
3. Information security objectives: Guide your management team to agree on
well-defined objectives for strategy and security. Information security focuses on three
main objectives:
I. Confidentiality: Only individuals with authorisation can access data and
information assets.
II. Integrity: Data should be intact, accurate and complete, and IT systems must
be kept operational.
III. Availability: Users should be able to access information or systems when needed.
4. Authority and access control policy: The security policy may have different terms
for a senior manager vs. a junior employee. The policy should outline the level of
authority over data and IT systems for each organisational role.
5. Network security policy: Users are only able to access company networks and
servers via unique logins that demand authentication, including passwords, biometrics,
ID cards, or tokens. You should monitor all systems and record all login attempts.
6. Data classification: The policy should classify data into categories, which may
include “top secret”, “secret”, “confidential”, and “public”. The objective in classifying
data is:
•• to ensure that sensitive data cannot be accessed by individuals with lower clearance
levels
•• to protect highly important data, and avoid needless security measures for
unimportant data
7. Data support and operations: Systems that store personal data or other
sensitive data must be protected according to organisational standards, best practices,
industry compliance standards, and relevant data protection regulations.
Most security standards require, at a minimum, encryption, a firewall, and anti-malware
protection.
8. Security awareness and behaviour: Share IT security policies with your staff.
Conduct training sessions to inform employees of your security procedures and
mechanisms, including data protection measures, access protection measures, and
sensitive data classification. Place a special emphasis on the dangers of social
engineering attacks (such as phishing emails). Make employees responsible
for noticing, preventing and reporting such attacks.
9. Clean desk policy: Secure laptops with a cable lock. Shred documents that are
no longer needed. Keep printer areas clean so documents do not fall into the wrong
hands.
10. Encryption policy: Encryption involves encoding data to keep it inaccessible to or
hidden from unauthorised parties. It helps protect data stored at rest and in transit
between locations and ensure that sensitive, private, and proprietary data remains
private. It can also improve the security of client-server communication.
11. Data backup policy: A data backup policy defines rules and procedures for making
backup copies of data. It is an integral component of overall data protection, business
continuity, and disaster recovery strategy.
Here are key functions of a data backup policy:
•• Identifies all information the organisation needs to back up
•• Determines the frequency of backups, for example, when to perform an initial
full backup and when to run incremental backups
•• Defines a storage location holding backup data
•• Lists all roles in charge of backup processes, for example, a backup administrator
and members of the IT team
12. Responsibilities, rights, and duties of personnel: Appoint staff to carry
out user access reviews, education, change management, incident management,
implementation, and periodic updates of the security policy. Responsibilities should
be clearly defined as part of the security policy.
13. System hardening benchmarks: The information security policy should reference
security benchmarks the organisation will use to harden mission-critical systems, such
as the Center for Internet Security (CIS) benchmarks for Linux, Windows Server,
AWS, and Kubernetes.
14. References to regulations and compliance standards: The information
security policy should reference regulations and compliance standards that impact
the organisation, such as GDPR, CCPA, PCI DSS, SOX, and HIPAA.
2.1.4 Types of security policies
Policies can be divided into the following categories:
1. User policies and IT policies: User policies generally define the limits of users
with respect to the computer resources in a workplace, for example, what they are
allowed to install on their computers and whether they can use removable storage.
IT policies, on the other hand, are designed for the IT department, to secure the
procedures and functions of IT fields.
2. General Policies: This is the policy which defines the rights of the staff and access
level to the systems. Generally, it is included even in the communication protocol as
a preventive measure in case there are any disasters.
3. Server Policies: This defines who should have access to a specific server and with
what rights, which software should be installed, the level of access to the internet,
and how servers should be updated.
We use security policies to manage our network security. Most types of security policies
are automatically created during the installation. We can also customise policies to suit our
specific environment.
Some important cybersecurity policies recommendations are described below:
1. Virus and Spyware Protection policy: This policy provides the following
protection:
•• It helps to detect, remove, and repair the side effects of viruses and security
risks by using signatures.
•• It helps to detect the threats in the files which the users try to download by using
reputation data from Download Insight.
•• It helps to detect the applications that exhibit suspicious behaviour by using
SONAR heuristics and reputation data.
2. Firewall Policy: This policy provides the following protection:
•• It blocks the unauthorised users from accessing the systems and networks that
connect to the Internet.
•• It detects the attacks by cybercriminals.
•• It removes the unwanted sources of network traffic.
3. Intrusion Prevention policy: This policy automatically detects and blocks
network attacks and browser attacks. It also protects applications from vulnerabilities.
It checks the contents of one or more data packets and detects malware arriving
through otherwise legitimate channels.
4. LiveUpdate policy: This policy can be categorised into two types: the LiveUpdate
Content policy and the LiveUpdate Settings policy. The LiveUpdate policy
contains the settings which determine when and how client computers download
content updates from LiveUpdate. We can define the computers that clients contact
to check for updates and schedule when and how often client computers check for
updates.
5. Application and Device Control: This policy protects a system’s resources from
applications and manages the peripheral devices that can attach to a system. The
device control policy applies to both Windows and Mac computers whereas application
control policy can be applied only to Windows clients.
6. Exceptions policy: This policy provides the ability to exclude applications and
processes from detection by the virus and spyware scans.
7. Host Integrity policy: This policy provides the ability to define, enforce, and restore
the security of client computers to keep enterprise networks and data secure. We use
this policy to ensure that the client computers that access our network are protected
and compliant with company policies. For example, this policy requires that the client
system must have antivirus software installed.
8. WWW policies: The World Wide Web and the Internet play an important role in providing
access to information; organisations use the web to support their mission and goals.
A WWW policy covers the following points:
•• Offensive and harassing material should not be made accessible via the company's
websites.
•• The confidential matter should not be made available on the website of the
organisation.
•• Personal material on the organisational website should not be given space.
•• Personal or commercial advertising should not be made available through the
company's website.
•• Installation of web servers should be prohibited for the users of an organisation.
9. Email Security policies: This email policy accomplishes three objectives:
(i) Commercial objective: by teaching employees how to send effective emails
and stating target answering times, you can professionalise your email replies and
thereby gain a competitive advantage.
(ii) Productivity objective: by setting out rules for the personal use of email, you
can improve productivity and avoid misunderstandings.
(iii) Legal objective: by clearly stating what is considered inappropriate email
content, you can minimise the risk of lawsuits and limit the employer's liability
by showing that the company warned employees about inappropriate email use.
•• Creating an E-Mail Policy: Before you start creating an email policy, do some
investigation into already existing company policies, such as guidelines on writing
business letters, access to confidential information, personal use of the telephone
systems and harassment at work. It is important that your email policy is compatible
with your company’s existing policies. You will also need to decide whether your
company is going to allow personal use of the email system, and if so, to what
extent. The email policy should be drafted with the help of human resources, IT
and board of directors in order to reflect all viewpoints in the organisation. It is
also advisable to have several employees look at the policy and provide their
feedback. Make sure that your policy is not so restrictive that it will compromise
your employees’ morale and productivity.
10. Confidentiality Policies: A confidentiality policy, also called an information
flow policy, is intended to protect secrets; specifically, it prevents the unauthorised
disclosure of information.
•• A confidentiality policy is a set of rules regarding the distribution and maintenance
of information and records. A confidentiality policy generally seeks to set clear
guidelines as to the type of information that must be kept restricted, the chain of
command for revealing private information, and information that may or must
be revealed to authorities.
Example: A privacy act requires that your personal information be kept secret from
outsiders. An income tax return (ITR), for instance, should be readable only by you, the
income tax department, and the legal authorities.
The confinement principle deals with preventing a server from disclosing information
that the user of the service considers confidential. Confinement ensures that the web server
allows access to certain services only to authorised users. When a client makes a data
request to the server, the server checks whether the client is authorised to access the data
or service. If the client is authorised, the server allows the client to access those services;
otherwise, the server prevents the client from taking the disallowed action.
The confinement principle states that if code is untrusted, then "KILL IT". We often
need to run buggy/untrusted code, e.g. programmes from untrusted internet sites: apps,
extensions, plugins, and codecs for the media player. Confinement is the problem of restricting
a process to only the information granted to it by its caller.
Problem
The confinement problem is the problem of preventing a server from leaking information that
the user of the service considers confidential. The confinement problem deals with preventing
a process from taking disallowed actions. Consider a client/server situation: the client sends
a data request to the server; the server uses the data, performs some function, and sends the
results (data) back to the client.
In this case the confinement problem deals with preventing a server from leaking
information that the user of that service considers confidential.
Access control affects the function of the server in two ways:
(i) Goal of service provider: The server must ensure that the resources it accesses
on behalf of the client include only those resources that the client is authorised to
access.
(ii) Goal of the service user: The server must ensure that it does not reveal the client’s
data to any other entity not authorised to see the client’s data.
Observations
A process that does not store information cannot leak it. This implies that the process cannot do
any computation, because an analyst could observe the flow of control and deduce information
about the inputs. A process that cannot be observed and cannot communicate with other
processes cannot leak information. This is called total isolation. Total isolation is not practical
because processes share CPUs, networks and disk storage. Unconfined processes can transmit
information over these shared resources.
How might a process do this?
Approach 1: A covert channel is a path of communication that was not designed to
be used for communication. For example, process p is confined and cannot communicate with
q, but p and q share a file system, and both have read, create, and delete privileges in the same
directory. p creates a file of length 0 or 1 bit; q reads the length and then deletes it. This
continues until p creates a file called end, at which point q knows that the whole message has
been sent.
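The file-length scheme above can be sketched in a few lines. This is an illustrative simulation (a hypothetical single-process stand-in for both parties, not an actual attack): p encodes each bit as the length of a shared file, q reads the length and deletes the file, and a file named end marks the end of the message.

```python
import os
import tempfile

# Sketch of the file-length covert channel described above: confined process p
# encodes each bit of its message as the length (0 or 1 bytes) of a shared
# file; q reads the length, records the bit, and deletes the file so p can
# send the next bit; a file named "end" signals end of message.

def p_send_bit(shared_dir, bit):
    """p: create a file whose length (0 or 1 bytes) encodes one bit."""
    with open(os.path.join(shared_dir, "msg"), "wb") as f:
        if bit:
            f.write(b"\x00")        # length 1 encodes a 1-bit; length 0 a 0-bit

def q_receive_bit(shared_dir):
    """q: read the file's length, then delete it."""
    path = os.path.join(shared_dir, "msg")
    bit = 1 if os.path.getsize(path) == 1 else 0
    os.remove(path)
    return bit

shared_dir = tempfile.mkdtemp()     # stands in for the shared directory
message = [1, 0, 1, 1, 0]
received = []
for bit in message:                 # p and q alternate, one bit at a time
    p_send_bit(shared_dir, bit)
    received.append(q_receive_bit(shared_dir))
open(os.path.join(shared_dir, "end"), "wb").close()   # p: end-of-message marker
print(received)                     # [1, 0, 1, 1, 0]
```

The message crosses from p to q even though no read or write of the file's contents ever takes place; only metadata (the length) carries the information.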
Note: If p creates a process q, then q must be similarly confined.
Approach 2: The rule of transitive confinement states that if a confined process invokes
a second process, the second process must be as confined as the caller.
Confinement is a mechanism for enforcing the principle of least privilege. The problem is
that the confined process needs to transmit data to another process. The confinement needs to
be on the transmission, not on the data access. The confinement mechanism must distinguish
between transmission of authorised data and the transmission of unauthorised data. This
presents a dilemma in that modern computers are designed to share resources and yet by the
act of sharing they create channels of communications along which information can be leaked.
Even time can be used to transmit information. e.g. One process can read the time by checking
the system clock or counting the number of instructions executed. A second process can write
the time by executing a set number of instructions and stopping.
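The timing channel just described can also be sketched. This is an illustrative simulation, not a real two-process attack: the "writer" encodes a bit in how long it executes, and the "reader" recovers the bit by reading the clock before and after. The 40 ms decision threshold is an assumption of this sketch.

```python
import time

# Sketch of the timing covert channel described above: a long busy period
# (standing in for "a set number of instructions") encodes a 1, a short one
# encodes a 0; the reader decodes by measuring elapsed time.

def writer(bit):
    """Busy-wait for a bit-dependent amount of time."""
    deadline = time.perf_counter() + (0.08 if bit else 0.0)
    while time.perf_counter() < deadline:
        pass

def reader(run_writer):
    """Time the writer's execution and decode the bit from the elapsed time."""
    start = time.perf_counter()
    run_writer()
    elapsed = time.perf_counter() - start
    return 1 if elapsed > 0.04 else 0   # assumed threshold between long/short

message = [1, 0, 0, 1]
decoded = [reader(lambda b=bit: writer(b)) for bit in message]
print(decoded)
```

Again, no data is exchanged directly; the shared clock alone carries the information, which is why timing channels are so hard to close.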
standard permissions, which for files consist of modify, read and execute, read, write, and
finally, full control, which grants all permissions. Figure 1 depicts the graphical interface for
editing permissions in Windows 7.
[Figure 1: the Security tab of the Properties dialog for ALL_MODEL.docx in Windows, listing
the group and user names (Authenticated Users, SYSTEM, Administrators (hp-PC\Administrators),
and Users (hp-PC\Users)) whose permissions can be edited.]
To finely tune permissions, there are also advanced permissions, which the standard
permissions are composed of. These are shown in the figure below.
[Figure: the Advanced Security Settings dialog for ALL_MODEL, showing the Permissions tab
with its list of permission entries, the Change Permissions... button, and the option to include
inheritable permissions from the object's parent.]
For example, the standard read permission encompasses several advanced permissions:
read data, read attributes, read extended attributes, and read permissions. Setting read to allow
for a particular principal automatically allows each of these advanced permissions, but it is also
possible to set only the desired advanced permissions.
As in Linux, folders have permissions too: read is synonymous with the ability to list
the contents of a folder, and write allows a user to create new files within a folder. However,
while Linux checks each folder in the path to a file before allowing access, Windows has a
different scheme. In Windows, the path to a file is simply an identifier that has no bearing on
permissions. Only the ACL of the file in question is inspected before granting access. This
allows administrators to deny a user access to a folder, but allow access to a file within that
folder, which would not be possible in Linux.
In Windows, any ACEs applied to a folder may be set to apply not to just the selected
folder, but also to the subfolders and files within it. The ACEs automatically generated in this
way are called inherited ACEs, as opposed to ACEs that are specifically set, which are called
explicit ACEs. Note that administrators may stop the propagation of inheritance at a particular
folder, ensuring that the children of that folder do not inherit ACEs from ancestor folders.
This scheme of inheritance raises the question of how ACEs should take precedence.
In fact, there is a simple hierarchy that the operating system uses when making access control
decisions. At any level of the hierarchy, deny ACEs take precedence over allow ACEs. Also,
explicit ACEs take precedence over inherited ACEs, and inherited ACEs take precedence in
order of the distance between the ancestor and the object in question, i.e. the parent's ACEs
take precedence over the grandparent's ACEs, and so on.
With this algorithm in place, resolving permissions is a simple matter of enumerating the
entries of the ACL in the appropriate order until an applicable rule is found. This hierarchy,
along with the finely granulated control of Windows permissions, provides administrators with
substantial flexibility, but also may create the potential for security holes due to its complexity,
i.e. if rules are not carefully applied, sensitive resources may be exposed.
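The precedence algorithm above can be sketched as a sort followed by a first-match scan. The ACE data model here (names, fields) is hypothetical and much simplified compared with the real Windows structures; it only illustrates the ordering rule.

```python
from dataclasses import dataclass

# Sketch of the ACE precedence rule described above: explicit before inherited,
# nearer ancestors before farther ones, and deny before allow at each level;
# the first applicable entry decides the outcome.

@dataclass
class ACE:
    principal: str
    permission: str      # e.g. "read"
    allow: bool          # True for an allow ACE, False for a deny ACE
    distance: int        # 0 = explicit; 1 = from parent; 2 = grandparent; ...

def access_allowed(acl, principal, permission):
    # Sort into evaluation order; False < True, so deny ACEs come first per level.
    ordered = sorted(acl, key=lambda e: (e.distance, e.allow))
    for ace in ordered:
        if ace.principal == principal and ace.permission == permission:
            return ace.allow                  # first applicable rule wins
    return False                              # no applicable ACE: deny by default

acl = [
    ACE("alice", "read", allow=False, distance=2),   # deny inherited from grandparent
    ACE("alice", "read", allow=True,  distance=0),   # explicit allow on the object
]
print(access_allowed(acl, "alice", "read"))   # True: explicit beats inherited deny
```

Swapping the two entries' distances would flip the result, which is exactly the subtlety the text warns about: a small change in where a rule is set changes who gets in.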
2.3.3 Changing Permission Behaviour with Setuid, Setgid, and Sticky Bits
•• Unix-like systems typically employ three additional modes. These are actually
attributes but are referred to as permissions or modes. These special modes apply to
a file or directory overall, not per class, though in the symbolic notation (see below)
the setuid bit is set in the triad for the user, the setgid bit is set in the triad for the group,
and the sticky bit is set in the triad for others. The set user ID, setuid, or SUID mode:
when a file with setuid is executed, the resulting process assumes the effective
user ID given to the owner class. This enables users to be treated temporarily as root
(or another user).
•• The set group ID, setgid, or SGID permission. When a file with setgid is executed, the
resulting process will assume the group ID given to the group class. When setgid is
applied to a directory, new files and directories created under that directory will inherit
their group from that directory. (Default behaviour is to use the primary group of the
effective user when setting the group of new files and directories, except on BSD-
derived systems, which behave as though the setgid bit is always set on all directories.)
•• The sticky mode (also known as the text mode): The classical behaviour of the sticky
bit on executable files has been to encourage the kernel to retain the resulting process
image in memory beyond termination; however, such use of the sticky bit is now
obsolete on most systems. When set on a directory instead, the sticky bit permits files
within that directory to be renamed or deleted only by the file's owner, the directory's
owner, or the root user.
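The three special bits can be set and read back programmatically. A minimal sketch using Python's os and stat modules (the equivalent of chmod u+s, g+s and +t on the shell); note that some systems silently clear the setgid bit for non-privileged callers, so only setuid and sticky are checked here.

```python
import os
import stat
import tempfile

# Sketch: setting and inspecting the setuid, setgid and sticky bits.
# stat.S_ISUID, stat.S_ISGID and stat.S_ISVTX correspond to the three
# special modes described above.

fd, path = tempfile.mkstemp()
os.close(fd)

# rwxr-xr-x plus all three special bits -> octal mode 7755.
os.chmod(path, 0o755 | stat.S_ISUID | stat.S_ISGID | stat.S_ISVTX)

mode = os.stat(path).st_mode
print(bool(mode & stat.S_ISUID))    # setuid bit is set
print(bool(mode & stat.S_ISVTX))    # sticky bit is set
print(oct(mode & 0o7777))           # full permission bits, special bits included

os.remove(path)
```

In symbolic `ls -l` output these bits appear as `s` in the user and group triads and `t` in the others triad, matching the notation described in the text.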
Jailkit
Jailkit is a set of utilities to limit user accounts to specific files using chroot() and/or to specific
commands. Setting up a chroot shell, a shell limited to some specific commands, or a daemon
inside a chroot jail is a lot easier and can be automated using these utilities. Jailkit is a specialised
tool that is developed with a focus on security. It will abort in a secure way if the configuration,
the system setup or the environment is not 100% secure, and it will send useful log messages
that explain what is wrong to syslog. Jailkit is known to be used in network security appliances
from several leading IT security firms, internet servers from several large enterprise organisations,
internet servers from internet service providers, as well as many smaller companies and private
users that need to secure cvs, sftp, shell or daemon processes.
Process user ID model in modern UNIX systems:
(a) A process can be created by fork. Fork is a system call used for creating a new process,
called the child process, which runs concurrently with the process that makes
the fork() call (the parent process). After a new child process is created, both processes
execute the next instruction following the fork() system call. A child process uses
the same programme counter, the same CPU registers, and the same open files as
the parent process. fork() takes no parameters and returns an integer value. The
different values returned by fork() are: a negative value, meaning creation of a child
process was unsuccessful; zero, returned to the newly created child process; and a
positive value, returned to the parent or caller, containing the process ID of the newly
created child process.
(b) When a process executes a file by exec, it keeps its three user IDs unless the set-user-
ID bit of the file is set, in which case the effective uid and saved uid are assigned the
user ID of the owner of the file.
(c) A process may change its user IDs via system calls.
2. Jailkit:
(a) Jailkit is a specialised tool that is developed with a focus on security.
(b) It will abort in a secure way if the configuration is not secure, and it will send useful
log messages that explain what is wrong to the system log.
(c) Jailkit is known to be used in network security appliances.
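The fork() return-value convention described in item (a) above can be sketched directly, since Python exposes the system call on Unix (a failed fork raises OSError in Python rather than returning a negative value):

```python
import os

# Sketch of fork()'s return values: 0 in the child, the child's PID (a
# positive value) in the parent.

pid = os.fork()                       # Unix-only; wraps the fork() system call
if pid == 0:
    # Child process: continues from the instruction after fork(), with the
    # same programme counter and open files as the parent at that point.
    os._exit(42)                      # exit at once with a status the parent can collect
else:
    _, status = os.waitpid(pid, 0)    # parent: wait for the child to finish
    exit_code = os.WEXITSTATUS(status)
    print("child pid (positive):", pid)
    print("child exit code:", exit_code)
```

Both branches of the `if` execute, one per process, which is exactly the "both processes execute the next instruction following fork()" behaviour described above.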
3. FreeBSD jail:
(a) FreeBSD is a popular free and open source operating system that is based on the
Berkeley Software Distribution (BSD) version of the Unix operating system.
(b) It runs on processors such as the Pentium that are compatible with Intel’s x86.
(c) FreeBSD is an alternative to Linux that will run Linux applications.
(d) The jail mechanism is an implementation of FreeBSD’s OS-level virtualisation that
allows system administrators to partition a FreeBSD-derived computer system into
several independent minisystems called jails, all sharing the same kernel, with very
little overhead.
(e) The need for the FreeBSD jails came from a small shared environment hosting
provider’s desire to establish a clean, clear-cut separation between their own services
and those of their customers, mainly for security and ease of administration.
4. System call interposition:
(a) System call interposition is a powerful technique for regulating and monitoring
programme behaviour.
(b) It gives security systems the ability to monitor all of the application’s interaction with
network, file system and other sensitive system resources.
Types of System Calls: There are five different categories of system calls as given
below:
[Figure: the five types of system calls — process control, file management, device management,
information maintenance, and communication.]
1. Process Control: Process control system calls are used to direct processes.
Examples include create, load, execute, abort, end, and terminate a process.
2. File Management: File management system calls are used to handle files.
Examples include create, delete, open, close, read, and write files.
3. Device Management: Device management system calls are used to deal with
devices. Examples include read device, write device, get device attributes, and
release device.
4. Information Maintenance: Information maintenance system calls are used to
maintain information. Examples include get system data, set system data, get time
or date, and set time or date.
5. Communication: Communication system calls are used for communication.
Examples include create and delete communication connections, and send and
receive messages.
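The file-management and information-maintenance categories above can be exercised from Python's os module, whose functions wrap the corresponding Unix system calls (a minimal sketch; the file path is a throwaway temporary file):

```python
import os
import tempfile
import time

# Sketch: invoking file-management and information-maintenance system calls
# through Python's os module (open, write, read, close wrap the Unix calls
# of the same names).

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)    # file management: create/open
os.write(fd, b"hello")                          # file management: write
os.close(fd)                                    # file management: close

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                         # file management: read
os.close(fd)

now = time.time()                               # information maintenance: get time
print(data)                                     # b'hello'
print(now > 0)
```

Each line maps onto one entry of the Unix column of Table 2.2 below, which is why the os module is a convenient way to experiment with system calls without writing C.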
Examples of Windows and Unix System Calls:
Table 2.2 Windows and Unix System Calls
Process Windows Unix
Process Control CreateProcess() fork()
ExitProcess() exit()
WaitForSingleObject() wait()
File Manipulation CreateFile() open()
ReadFile() read()
WriteFile() write()
CloseHandle() close()
Device Manipulation SetConsoleMode() ioctl()
ReadConsole() read()
WriteConsole() write()
(Contd...)
[Figure: system call interposition under a VMM — a programme in the guest OS runs in user
mode; its system calls trap to kernel mode, where the VMM's system call handler intercepts
them and produces the output.]
In OS/360 and its successors, for example, privileged system code also issues system calls.
System call interposition is a powerful method for regulating and monitoring programme
behaviour and blocking unauthorised calls. A wide variety of security tools have been
developed which use this technique.
However, traditional system call interposition techniques are vulnerable to kernel attacks
and have some limitations on effectiveness and transparency. Analyse the binary to determine
if it truly is malware or just run the code without letting it affect your system. With System Call
Interposition you can rewrite operating system calls but unless you use interposition to create
a complete container you need to be able to identify which system calls are dangerous before
allowing them through. This may not be obvious. Even using Interposition to create a container
may fail if the code exploits a subtle kernel bug. System call interposition isolates a process in
a single operating system.
Complications:
•• If the app forks (creates a child process), the monitor must also fork; the forked
monitor monitors the forked app.
•• If the monitor crashes, the app must be killed.
•• The monitor must maintain all OS state associated with the app:
•• current working directory (cwd), UID, EUID, GID;
•• when the app does "cd path", the monitor must update its cwd;
•• otherwise, relative path requests are interpreted incorrectly.
These complications can be handled using the mechanisms given below:
Ptrace: Ptrace is a system call found in UNIX and several Unix-like operating systems. By
using Ptrace (the name is an abbreviation of “process trace”) one process can control another,
enabling the controller to inspect and manipulate the internal state of its target. Ptrace is used
by debuggers and other code-analysis tools, mostly as aids to software development.
Systrace: Recording device activity over a short period of time is known as system
tracing. System tracing produces a trace file that can be used to generate a system report. This
report helps you identify how best to improve your app or game’s performance. The Android
platform provides several different options for capturing traces:
•• System Tracing app
•• Systrace command-line tool
•• Perfetto command-line tool
The System Tracing app is an Android tool that saves device activity to a trace file. On
a device running Android 10 (API level 29) or later, trace files are saved in Perfetto format. On
a device running an earlier version of Android, trace files are saved in the Systrace format.
Systrace is a legacy platform-provided command-line tool that records device activity over a
short period of time in a compressed text file. The tool produces a report that combines data
from the Android kernel, such as the CPU scheduler, disk activity, and app threads. Perfetto is the
new platform-wide tracing tool introduced in Android 10. It is a more general and sophisticated
open-source tracing project for Android, Linux, and Chrome.
power plant hit by a big hack attack/worm. It’s being called one of the worst cyber attacks ever.
Bangladesh based group hacked into nearly 20,000 Indian websites including Indian Border
Security Force. The first virus that could crash power grids or destroy oil pipelines is available
online for anyone to download and tinker with. There is no way of knowing who will use it
or what they will use it for. If there was to be an attack on the critical infrastructure of India,
it would mean chaos. There would be no electricity, no water supply, no phone network, no
satellite network and no cash or banking facilities. An attack on any of its critical infrastructures
can cripple a country. Israel is one of those classic examples, where the PM has said publicly
that they have a fourth division in their defence system, which is the cyber warfare division,
where they not only defend their borders but also proactively offend, because offence is
defence in a borderless cyber world. We can say that state-sponsored hacks are only going to
go up in the near future, without a doubt. The reason is that the governments have realised
that data is the new oil. You will be able to take decisions both pro-actively and reactively, in
case there is a situation, based on a more informed set of data entries and points, which would
be far more accurate than just diplomatic talks or reading articles.
Biggest unawareness:
•• There is a tremendous lack of awareness in the government; they do not believe in
insider threats. That means they will be caught unaware when an insider does
something.
•• The other problem is our hardware, because all our hardware comes from abroad.
There is a concern that while manufacturing these chips, someone can add some extra
circuitry which can be triggered at a certain time, so that some harm can happen.
India saw its biggest data breach when the SBI debit card breach happened. When this
happened, banks were initially in a state of denial. But subsequently they had to own up to the
biggest cyber security breach that took place in Indian history. The ATMs are not manufactured
by the banks. There are popular OEMs which manufacture the physical ATMs, and then you
put in a Windows system in that. It can be Windows XP, 7 or 8. And on top of that you load up
a software: basically a software that lets you select the type of account, enter the amount,
etc. What was observed was that there were multiple transactions happening in China, and
close to ` 1.3 crore was withdrawn using certain VISA and MASTERCARD cards in a few
selected ATMs. These ATM machines are actually connected by a network to some sort of
a control centre. It means that we are completely in the hands of others, who can do various
things. The debate is about national security versus privacy. Here, two parties have to play
the main role: the citizens and the government. But it is a fundamental right of any person
to choose whether he wants to be anonymous on the internet or not, which is known as
anonymity.
•• Israel Power Grid hit by a big hack attack is being called one of the worst cyberattacks
ever.
•• In 2014 a hydropower plant in upstate New York got hacked.
•• French infrastructure, including its main nuclear power plant, was targeted by a new
and dangerously powerful cyber worm.
•• A Bangladesh-based group hacked into nearly 20,000 Indian websites, including the
Indian Border Security Force.
•• The first virus that could crash power grids or destroy oil pipelines is available online
for anyone to download and tinker with.
•• India's biggest data breach (the SBI debit card breach): when this happened, the
banks were initially in a state of denial, but subsequently they had to own up to the
biggest cyber security breach that took place in Indian history.
Access Control Concepts
Your security needs access control. When it comes to protecting your home or business,
as well as the building's occupants, access control is one of the best ways for you to achieve
peace of mind. Understanding cornerstone access control concepts, including confidentiality,
integrity, and availability (and their mirror opposites: disclosure, alteration, and destruction),
and subjects and objects are a critical foundation to understanding access control. Outlined
below are overviews of the three basic types of access control systems that are available to
your company so you can see which are best suited for your day-to-day operations. In brief,
access control is used to identify an individual who does a specific job, authenticate them, and
then proceed to give that individual only the key to the door or workstation that they need
access to and nothing more.
Access control systems come in three variations as Discretionary Access Control (DAC),
Mandatory Access Control (MAC), and Role Based Access Control (RBAC).
1. Discretionary Access Control (DAC): Discretionary Access Control is a type of access
control system that holds the business owner responsible for deciding which people are
allowed in a specific location, physically or digitally. DAC is the least restrictive compared
to the other systems, as it essentially allows an individual complete control over any
objects they own, as well as the programmes associated with those objects. The drawback
to Discretionary Access Control is the fact that it gives the end user complete control to
set security level settings for other users and the permissions given to the end user are
inherited into other programmes they use which could potentially lead to malware being
executed without the end user being aware of it.
DAC is a type of access control system that assigns access rights based on rules specified
by users. The principle behind DAC is that subjects can determine who has access to
their objects. The DAC model takes advantage of using access control lists (ACLs) and
capability tables. Capability tables contain rows with ‘subject’ and columns containing
‘object’. The security kernel within the operating system checks the tables to determine
if access is allowed. Sometimes a subject/programme may only have access to read a
file; the security kernel makes sure no unauthorised changes occur.
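The DAC check just described can be sketched with a toy access control list. All names and the data layout here are hypothetical, purely to illustrate the two defining properties: the kernel consults the ACL before allowing an operation, and only an object's owner may hand out access to it.

```python
# Sketch of a DAC check: the owner of each object decides who has access;
# the "security kernel" consults an ACL mapping (subject, object) to a set
# of rights before allowing an operation.

acl = {
    "report.docx": {"owner": "alice",
                    "rights": {"alice": {"read", "write"}, "bob": {"read"}}},
}

def kernel_check(subject, obj, right):
    """Return True only if the ACL grants `right` on `obj` to `subject`."""
    entry = acl.get(obj)
    return entry is not None and right in entry["rights"].get(subject, set())

def grant(granter, obj, subject, right):
    """DAC: only the object's owner may hand out access to it."""
    entry = acl[obj]
    if granter != entry["owner"]:
        raise PermissionError("only the owner may grant access")
    entry["rights"].setdefault(subject, set()).add(right)

print(kernel_check("bob", "report.docx", "write"))   # False: bob may only read
grant("alice", "report.docx", "bob", "write")        # alice, the owner, grants it
print(kernel_check("bob", "report.docx", "write"))   # True after the grant
```

The discretion lies entirely with alice; nothing outside the owner's choice constrains who gets access, which is exactly the strength and the weakness of DAC described above.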
Implementation
This popular model is utilised by some of the most popular operating systems, like Microsoft
Windows file systems.
[Figure: a property sheet — the Windows dialog used to edit DAC permissions on a file.]
based on the subject’s role within the household or organisation and most privileges are
based on the limitations defined by their job responsibilities. So, rather than assigning
an individual as a security manager, the security manager position already has access
control permissions assigned to it. RBAC makes life much easier because, rather than
assigning particular access to multiple individuals, the system administrator only has to
assign access to specific job titles.
RBAC, also known as a non-discretionary access control, is used when system
administrators need to assign rights based on organisational roles instead of individual
user accounts within an organisation. It presents an opportunity for the organisation to
address the principle of ‘least privilege’. This gives an individual only the access needed
to do their job, since access is connected to their job.
[Figure: an RBAC example — users such as Sally Brown (Finance) and Carlos Bayez receive
entitlements, e.g. to email, through their roles rather than individually.]
Implementation
Windows and Linux environments use something similar by creating ‘Groups’. Each group has
individual file permissions and each user is assigned to groups based on their work role. RBAC
assigns access based on roles. This is different from groups since users can belong to multiple
groups but should only be assigned to one role. Example roles are accountant and developer,
among others. An accountant would only gain access to resources that an accountant would
need on the system. This requires the organisation to constantly review the role definitions
and have a process to modify roles to segregate duties. If not, role creep can occur. Role creep
is when an individual is transferred to another job/group and their access from their previous
job stays with them.
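The user → role → permission indirection described above, and the role-creep hazard, can be sketched as follows. The roles, permissions and user names are hypothetical illustrations:

```python
# Sketch of RBAC: permissions attach to roles, each user holds exactly one
# role, and an access check goes user -> role -> permissions, never
# user -> permission directly.

role_permissions = {
    "accountant": {"read_ledger", "post_journal_entry"},
    "developer": {"read_source", "commit_code"},
}
user_role = {"sally": "accountant", "carlos": "developer"}   # one role per user

def rbac_allowed(user, permission):
    role = user_role.get(user)
    return role is not None and permission in role_permissions.get(role, set())

def transfer(user, new_role):
    """Replacing the role outright avoids role creep: the old role's
    permissions do not travel with the user to the new job."""
    user_role[user] = new_role

print(rbac_allowed("sally", "read_ledger"))   # True while sally is an accountant
transfer("sally", "developer")
print(rbac_allowed("sally", "read_ledger"))   # False: old access is gone
print(rbac_allowed("sally", "commit_code"))   # True: new role's access only
```

If `transfer` instead *added* the new role's permissions to the user, sally would keep `read_ledger` after moving to development, which is precisely the role creep the text warns against.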
Approaches of Access Control
(i) Centralised access control: Centralised access control is concentrated at one logical
point for a system or organisation. Instead of using local access control databases, systems
[Figure: two virtual machines, VM1 and VM2, each running its own apps on a shared hardware
platform.]
In this way, a single hardware platform can be used for both classified and unclassified data.
The popularity of VMs is based on the following VMM security assumption:
•• Malware can infect guest OS and guest applications.
•• But Malware cannot escape from the infected VM i.e. cannot infect host OS and also
cannot infect other VMs on the same hardware.
However, this requires that the VMM protect itself from malware and be free of bugs. This
assumption is not unrealistic, because the VMM is much simpler than a full OS, and device
drivers run in the host OS; therefore the VMM can be checked and tested, so that one can be
more assured that the VMM is not buggy and does not have security flaws.
Covert Channels
A covert channel is an evasion technique that is used to transfer information in a secretive,
unauthorised and illicit manner. Although applications are isolated in different VMs on the same
hardware, covert channels have been found that are created by smart applications between
classified data in one VM and unclassified data in another VM. Covert channels are unintended
communication channels between isolated components, which can be used to leak classified
data from the source component to the public component.
[Figure: malware in a classified VM leaking a secret document over a covert channel to a
listener in a public VM, both running on the same virtual machine monitor.]
For example, a secret document or key can be leaked from a classified VM by malware
to the public VM through the covert channel. Many types of covert channels exist in a
running system, such as file lock status, cache contents, interrupts, etc. (all shared information
between VMs). These are also called side-channel attacks. If there is a system with 2 CPUs
such that classified VMs run on one and public VM run on another is there a covert channel
between the VMs? Yes, there could be a covert channel based on the time needed to read from
the main memory since the main memory is shared.
[Figure: a VMM-based intrusion detection system — the IDS runs alongside the VMM on
the hardware and inspects the infected VMs containing malware.]
Sample Checks
Intrusion detection system does the following checks to detect stealth rootkit Malware:
Rootkit malware creates processes that are invisible to the standard process listing tool
"ps" and opens sockets that are invisible to "netstat" (network statistics), a command-line
utility that displays network connections for the Transmission Control Protocol (both
incoming and outgoing), routing tables, and a number of network interface and network
protocol statistics.
1. Lie Detector check: Detect stealth malware that hides processes and network activity
by asking the guest OS to list its processes and matching that list against the processes
the VMM observes running in the guest. If there is a mismatch, kill the VM.
2. Application Code Integrity Detector: In this check, the VMM computes a hash of the
user application code running in the VM and compares it with a whitelist of hashes. If an
unknown programme appears, kill the VM.
3. Ensure guest OS kernel integrity: This check also detects changes in the sys-call tables
through which the various system calls are made.
4. Virus Signature Detector: The VMM runs a virus signature detector on guest OS memory.
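The lie detector check (item 1 above) reduces to comparing two views of the same process table. A minimal sketch with hypothetical, hard-coded process lists standing in for the guest's report and the VMM's own observation:

```python
# Sketch of the "lie detector" check: the VMM compares the process list
# reported from inside the guest with the list it observes itself; any
# process the guest hides is evidence of a stealth rootkit.

def lie_detector(guest_reported, vmm_observed):
    """Return the set of hidden PIDs; a non-empty set means 'kill the VM'."""
    return set(vmm_observed) - set(guest_reported)

vmm_observed = {101: "init", 202: "sshd", 666: "rootkit_backdoor"}
guest_reported = {101: "init", 202: "sshd"}     # the rootkit hides PID 666

hidden = lie_detector(guest_reported, vmm_observed)
print(sorted(hidden))       # [666]: the hidden process betrays the rootkit
kill_vm = bool(hidden)
print(kill_vm)
```

The check works because the VMM sits below the guest OS: the rootkit can lie to tools running inside the guest, but it cannot hide the process from the layer that schedules it.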
Browser Isolation
Browser isolation is a cybersecurity model used to physically isolate an internet user's web
browser and their browsing activity away from the local machine and network; it is the underlying
model and technology that supports a remote browsing platform. Browser isolation technologies
are one of the most effective ways that an enterprise can reduce web-based attacks. Browser
isolation was an invention borne out of necessity, our current security tools (anti-virus, firewall,
intrusion detection and prevention) are failing to protect us from malware, ransomware and
browser based cyber-attacks. Browser based attacks are increasing in frequency, through their
browsers as they use the internet normally.
results (data) back to the client. The confinement principle states that if a piece of code is untrusted then "KILL IT". We often need to run buggy/untrusted code:
•• Programmes from untrusted internet sites: e.g., apps, extensions, plugins, codecs for the media player.
•• Exposed applications: PDF viewers, Outlook.
•• Legacy daemons: Sendmail, BIND.
•• Honeypots.
Approach: Confinement: Ensure a misbehaving application cannot harm the rest of the system. If any application shows malicious activity, kill it so that it cannot harm the rest of the system.
VM Subversion: The idea of VM subversion was suggested at the Black Hat security conference in 2006. The meaning of subversion is the act of trying to destroy or damage an established system. The idea of VM subversion considers a system with an OS and an antivirus. Then, in the SubVirt research paper (2006), it was shown that a VMM containing a virus can be inserted which secretly lifts the running OS on top of this VMM. Now the antivirus cannot detect the virus in the VMM, as the OS running the antivirus is now a guest VM.
(Figure: Before subversion, the antivirus and OS run directly on the hardware; after subversion, the same OS and antivirus run on top of a VMM containing the virus, with the hardware underneath.)
This is the same idea as in the movie The Matrix, where the characters did not know they were being run by machines. Likewise, an attacker can lift the OS of the machine onto a virtual machine and have it virtualized.
VM-Based Malware (Blue Pill Virus): Blue Pill is an example of VM-based malware: a virus that installs a malicious VMM between the running OS and the hardware. In October 2006, a Microsoft security bulletin asked people to disable the hardware virtualisation feature on client-side systems. But there was a lot of controversy around this, and the authors of the Blue Pill virus were challenged that this can be detected, i.e. that an application in the virtualised OS can detect this happening. The authors of the Blue Pill virus did not participate in this challenge. Later, it was shown at a conference that this can indeed be detected, i.e. that the guest OS is running on top of a VMM. This led to the idea of the Red Pill technique.
VMM Detection (Red Pill Technique): Detecting that the OS is running on top of a VMM has other implications.
Example: honeypots. Companies usually want to know what new malware exists that can harm their systems; antivirus companies are usually doing this. For this, they create honeypots on VMs. Malware is contained there for further analysis of the damage it incurs and for reverse engineering of its code. Malware authors responded with code that can detect whether it is running in a VM. If it is running in a VM, the malware stops working, i.e. stops damaging the system, so that it cannot be detected as malware. Much software that binds to hardware (for example MS Windows) can refuse to run on top of a VMM (hypervisor).
Confinement can be implemented at many levels. The different approaches used to implement confinement are:
1. Hardware: In hardware confinement, the hardware is isolated. Applications are run on isolated hardware, i.e. each programme or application runs on a separate system.
(Figure: A malicious app and a normal app running as Process 1 and Process 2 on a single operating system.)
Each process has a wrapper around it. Whenever the process attempts a system call, the wrapper intercepts it, and only permitted calls are allowed to go through.
4. Threads: Thread confinement isolates threads sharing the same address space. Each process has threads, and the threads share the same address space. If one thread is malicious and must not infect other threads in the same address space, the idea is to isolate the parts of the shared address space used by the different threads. This is done by software fault isolation (SFI).
(Figure: An untrusted programme P is given as input to an SFI software transformer; the output is a transformed P that is separated from non-malicious threads.)
This SFI software transformation could be any number of things. It could be a piece of the compiler or of the loader. It could also involve a separate pass over machine language code before execution commences. The point is that we are modifying the programme before it is executed. (One easy realisation of the SFI transformation is to always output a programme that does nothing. However, there are likely to be properties of the original programme that we are interested in preserving, and these properties might not be satisfied by a programme that does nothing.)
Software fault isolation is an active research area; nowadays most software has multiple threads that share the same address space. Here a question arises: can we do isolation among them? Because if one thread is doing something dangerous, then the others should not be affected by it, so we need to isolate them as well. For example, a media player may have a codec thread that does coding and decoding. The threads do not need to interfere with each other, and device drivers should not corrupt the kernel.
Simple solution: Run different apps in separate address spaces, but that is very slow because IPC (inter-process communication) requires a context switch per message (a time-consuming process). Inter-process communication is more time-consuming than intra-process communication. A more practical approach is as follows:
SFI approach:
1. The address space is shared but is divided into segments. These segments are used to confine the accesses of the various threads, i.e. Application 1 can access only part 1, Application 2 can access only part 2, and so on. With a 32-bit OS this approach does not do very well, but with a 64-bit OS it is quite practical.
(Figure: A shared address space divided into Part 1, accessible only to APP#1, and Part 2, accessible only to APP#2.)
2. Locate unsafe instructions: JMP, LOAD, STORE, i.e. instructions that jump from one address to another; it must be checked that the access is not going into another application's code or data segment.
Therefore, at compile time, add guards before unsafe instructions. When loading code, ensure all guards are present.
Segment Matching Technique:
•• Designed for the MIPS processor, where many registers are available.
•• This technique uses two dedicated registers, dr1 and dr2, to do arithmetic on segment addresses.
•• dr1 and dr2 are dedicated registers not used by the binary; the compiler pretends these registers don't exist. dr2 contains the segment ID.
•• Each indirect load instruction is rewritten to go through a guard.
We are thus adding some overhead in terms of guard code. The guard code ensures that code does not load data from another segment.
Address Sandboxing Technique:
•• Uses fewer instructions than segment matching but does not catch offending instructions; it forces the address into the correct segment instead.
•• Similar guards are placed on all unsafe instructions.
Problem: What if jmp [addr] jumps directly into the indirect load, bypassing the guard?
Solution: The jump guard must ensure [addr] does not bypass the load guard.
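The two guard styles can be illustrated with some simple address arithmetic. The following Python sketch models addresses as integers with an assumed 20-bit segment offset; real SFI guards are emitted as machine instructions using the dedicated registers, not as Python.

```python
# Illustrative sketch (not real machine code) of SFI guards.
# Segment ID = upper bits of the address; offset = lower SEG_BITS bits.

SEG_BITS = 20                      # assumed: lower 20 bits are the offset

def segment_matching_guard(addr, allowed_seg):
    """Segment matching: trap if addr lies outside the thread's segment."""
    if (addr >> SEG_BITS) != allowed_seg:
        raise MemoryError("segment violation")   # offending access caught
    return addr

def address_sandboxing_guard(addr, allowed_seg):
    """Address sandboxing: overwrite the segment bits. Fewer instructions,
    but an offending access is silently redirected, not caught."""
    offset = addr & ((1 << SEG_BITS) - 1)
    return (allowed_seg << SEG_BITS) | offset
```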
Cross-Domain Calls
Also, there may be cross-domain calls, i.e. a call to a function that belongs to a different domain.
(Figure: A cross-domain call from the caller domain to the callee domain goes through call and return stubs; direct branches (br addr) between domains are not allowed.)
Only stubs are allowed to make cross-domain jumps. The table contains the allowed exit points. The addresses are hard coded (in read-only segments).
Software Fault Isolation (SFI) is a security-enhancing programme transformation for instrumenting an untrusted binary module so that it runs inside a dedicated isolated address
space, called a sandbox. To ensure that the untrusted module cannot escape its sandbox,
existing approaches such as Google’s Native Client rely on a binary verifier to check that all
memory accesses are within the sandbox.
Example: Let us assume that you are running an application and you use a third-party module or library which is needed for that application to execute. You downloaded that third-party module or library, but there may be malicious code present in it, so it must be confined using an approach such as SFI.
The Problem: We want to be able to confine an arbitrary programme. This does not
mean that any programme which works when free will still work under confinement, but that
any programme, if confined, will be unable to leak data. A misbehaving programme may well
be trapped as a result of an attempt to escape.
A list of possible leaks may help to create some intuition in preparation for a more abstract
description of confinement rules.
R0. If the service has memory, it can collect data, wait for its owner to call it, and then
return the data to him.
R1. The service may write into a permanent file in its owner’s directory. The owner can
then come around at his leisure and collect the data.
R2. The service may create a temporary file (in itself a legitimate action which cannot
be forbidden without imposing an unreasonable constraint on the computing which a service
can do) and grant its owner access to this file. If he tests for its existence at suitable intervals,
he can read out the data before the service completes its work and the file is destroyed.
R3. The service may send a message to a process controlled by its owner, using the
system’s inter process communication facility.
R4. More subtly, the information may be encoded in the bill rendered for the service, since
its owner must get a copy. If the form of bills is suitably restricted, the amount of information
which can be transmitted in this way can be reduced to a few bits or tens of bits. Reducing it
to zero, however, requires more far-reaching measures. If the owner of the service pays for
resources consumed by the service, information can also be encoded in the amount of time
used or whatever. This can be avoided if the customer pays for resources.
R5. If the system has interlocks which prevent files from being open for writing and reading at the same time, the service can leak data if it is merely allowed to read files which can be written by its owner. The interlocks allow a file to simulate a shared Boolean variable which one programme can set and the other can test. Given a procedure open (file, error) which does a go to error if the file is already open, the following procedures will perform this simulation:
procedure settrue (file); begin loop1: open (file, loop1) end;
procedure setfalse (file); begin close (file) end;
Boolean procedure value (file); begin value := true;
open (file, loop2); value := false; close (file); loop2: end;
Using these procedures and three files called data, sendclock, and receiveclock, a service can send a stream of bits to another concurrently running programme. Referencing the files as though they were variables of this rather odd kind, we can describe the sequence of events for transmitting a single bit:
sender: data := bit being sent; sendclock := true;
receiver: wait for sendclock = true; received bit := data; receiveclock := true;
sender: wait for receiveclock = true; sendclock := false;
receiver: wait for sendclock = false; receiveclock := false;
sender: wait for receiveclock = false;
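The shared-Boolean idea behind R5 can be sketched in runnable form. The sketch below uses a file's existence (rather than open-for-write interlocks) to simulate the Boolean variable; the class name and file names are invented for illustration, and only a single bit is transmitted.

```python
# A runnable sketch of R5's shared-Boolean covert channel, using file
# existence to simulate settrue/setfalse/value.
import os
import tempfile

class FileBool:
    """Simulate Lampson's settrue/setfalse/value with file existence."""
    def __init__(self, path):
        self.path = path
    def settrue(self):
        open(self.path, "w").close()      # create the file -> True
    def setfalse(self):
        if os.path.exists(self.path):
            os.remove(self.path)          # remove the file -> False
    def value(self):
        return os.path.exists(self.path)

tmp = tempfile.mkdtemp()
data = FileBool(os.path.join(tmp, "data"))
sendclock = FileBool(os.path.join(tmp, "sendclock"))

# Sender transmits one bit (here: True); receiver polls the clock file.
data.settrue()           # data := bit being sent
sendclock.settrue()      # sendclock := true
if sendclock.value():    # receiver: wait for sendclock = true
    received_bit = data.value()
```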
R6. By varying its ratio of computing to input/output, or its paging rate, the service can transmit information which a concurrently running process can receive by observing the performance of the system. The communication channel thus established is a noisy one, but the techniques of information theory can be used to devise an encoding which will allow the information to get through reliably no matter how small the effects of the service on system performance are, provided they are not zero. The data rate of this channel may be very low, of course.
enumerate files, or RegOpenKeyEx to get access to the registry). The OS then returns
the appropriate information to the application.
These same APIs are often used by security software when scanning for malicious software.
In order to hide, rootkits hijack these APIs and watch for any question an application
may ask that might be incriminating. So imagine that an application asks, “Operating
system, can you show me the contents of the file at c:\foo.exe?” The rootkit intercepts
the question before it gets to the operating system and quickly replies (as if it were the
operating system), “That file does not exist.”
There are a number of techniques a rootkit can use to subvert normal operating system
behavior:
1. Hooking operating system APIs: Some rootkits re-route OS APIs by changing the
address of these APIs to point to their own code. This can be done both in user mode
(where most applications run) and kernel mode (where device drivers run) and is often
referred to as “hooking.” When an application calls a hooked API, the system looks up
the address of the API in a table (such as the System Service Dispatch Table in kernel
mode or the Import Address Table in user mode). The operating system then executes
the code at that address. If a rootkit has hooked the API, it has changed the address in
the table to point to its own code so that its code runs, rather than the expected system
functionality. This allows the rootkit to intercept requests that might reveal its presence.
2. Hiding in unused space on the machine’s hard disk: This unused space is invisible
to normal OS APIs that are used to look for files on the hard disk. The rootkit will then
modify a commonly used driver (such as atapi.sys) so that when that driver loads it will
look in this unused space to find the rest of the rootkit’s code.
3. Infecting the master boot record (MBR): The rootkit may also infect the MBR in
order to get its code into memory. The MBR is used to bootstrap a system, helping make
the transition from the hardware portion of a computer’s startup routine to loading the
operating system itself. If a rootkit can control that process, then it can control what code
gets loaded into memory before the OS even has a chance to protect itself.
Regardless of the technique used, a rootkit is trying to get its code running while hiding
its presence from other applications running on the machine.
(Figure: The path of a request from an application through the kernel layers — file system, disk class driver, port driver — down to the hardware.)
The combination of privilege and stealth make rootkits a particularly dangerous threat. In
recent years, one of the most sensational examples of a rootkit was the Tidserv family of
malware. Tidserv arrives on a machine much like any other piece of malware: through a
drive-by download, from peer-to-peer file sharing software, bundled with other threats, or
through a social-engineering attack (via email, SMS, etc.). When activated and depending
on the version of the threat, Tidserv might hide in unused space, infect commonly used
drivers, or infect the MBR in order to get itself running on the victim’s system. Once
Tidserv is running (and largely undetectable), the threat begins earning its keep by
directing the victim to malicious websites, manipulating Web search results, displaying
ads, or prompting the user to install more (usually malicious) software. Additionally, it
can contact remote servers in order to update itself with new functionality. A computer infected with Tidserv is truly owned by the malicious software and, ultimately, by the attacker at the other end controlling it.
Types of Rootkits
There are various types of rootkits, as given below:
1. Hardware or firmware rootkit: Hardware or firmware rootkits can affect your hard
drive, your router, or your system’s BIOS, which is the software installed on a small
memory chip in your computer’s motherboard. Instead of targeting your operating
system, they target the firmware of your device to install malware which is difficult to
detect. Because they affect hardware, they allow hackers to log your keystrokes as well
as monitor online activity. Although less common than other types, hardware or firmware
rootkits are a severe threat to online safety.
2. Bootloader rootkit: The bootloader mechanism is responsible for loading the operating
system on a computer. Bootloader rootkits attack this system, replacing your computer’s
legitimate bootloader with a hacked one. This activates the rootkit even before your
computer’s operating system is fully loaded.
3. Memory rootkit: Memory rootkits hide in your computer’s random-access memory
(RAM) and use your computer’s resources to carry out malicious activities in the
background. Memory rootkits affect your computer’s RAM performance. Because they
only live in your computer’s RAM and don’t inject permanent code, memory rootkits
disappear as soon as you reboot the system – though sometimes further work is needed
to get rid of them. Their short lifespan means they tend not to be perceived as a significant
threat.
4. Application rootkit: Application rootkits replace standard files in your computer with
rootkit files and may even change the way standard applications work. These rootkits
infect programmes like Microsoft Office, Notepad, or Paint. Attackers can obtain access to
your computer every time you run those programmes. Because the infected programmes
still run normally, rootkit detection is difficult for users but antivirus programmes can
detect them since they both operate on the application layer.
5. Kernel mode rootkits: Kernel mode rootkits are among the most severe types of
this threat as they target the very core of your operating system (i.e., the kernel level).
Hackers use them not only to access the files on your computer but also to change the
functionality of your operating system by adding their own code.
6. Virtual rootkits: A virtual rootkit loads itself underneath the computer’s operating
system. It then hosts the target operating systems as a virtual machine, which allows it to
intercept hardware calls made by the original operating system. This type of rootkit does
not have to modify the kernel to subvert the operating system and can be very difficult
to detect.
2.9.1 Rootkit Detection Techniques
Detecting the presence of a rootkit on a computer can be difficult, as this kind of malware is
explicitly designed to stay hidden. Rootkits can also disable security software, which makes the
task even harder. As a result, rootkit malware could remain on your computer for a long time
causing significant damage. If a rootkit has already infected a system, though, the detection
and removal of the rootkit requires much more sophisticated techniques than are required for
a typical infection. Basically, the best prevention relies on the fact that the rootkit has not yet
had a chance to hide itself in the system.
Possible signs of rootkit malware include:
1. Blue screen: A large volume of Windows error messages or blue screens with white text
(sometimes called “the blue screen of death”), while your computer constantly needs to
reboot.
2. Unusual web browser behaviour: This might include unrecognised bookmarks or
link redirection.
3. Slow device performance: Your device may take a while to start and perform slowly
or freeze often. It might also fail to respond to input from the mouse or keyboard.
4. Windows settings change without permission: Examples might include your
screensaver changing, the taskbar hiding itself, or the incorrect date and time displaying
– when you haven’t changed anything.
5. Web pages don’t function properly: Web pages or network activities appear
intermittent or don’t function properly because of excessive network traffic.
6. Rootkit scan: A rootkit scan is the best way to detect a rootkit infection, and your antivirus solution can initiate it. If you suspect a rootkit virus, one way to detect the infection is to power down the computer and execute the scan from a known clean system.
7. Behavioral analysis is another method of rootkit detection: This means that
instead of looking for the rootkit, you look for rootkit-like behaviours. Whereas targeted
scans work well if you know the system is behaving oddly, a behavioural analysis may
alert you to a rootkit before you realise you are under attack.
Rootkit Prevention
Because rootkits can be dangerous and difficult to detect, it is important to stay vigilant when
browsing the internet or downloading programmes. Many of the same protective measures
you take to avoid computer viruses also help to minimise the risk of rootkits.
1. Use a comprehensive cyber security solution: Be proactive about securing your
devices and install a comprehensive and advanced antivirus solution. Kaspersky Total
Security provides full-scale protection from cyber threats and also allows you to run
rootkit scans.
2. Keep up-to-date: Ongoing software updates are essential for staying safe and preventing
hackers from infecting you with malware. Keep all programmes and your operating system
up to date to avoid rootkit attacks that take advantage of vulnerabilities.
3. Be alert to phishing scams: Phishing is a type of social engineering attack where
scammers use email to trick users into providing them with their financial information
or downloading malicious software, such as rootkits. To prevent rootkits from infiltrating
your computer, avoid opening suspicious emails, especially if the sender is unfamiliar to
you. If you are unsure if a link is trustworthy, don’t click on it.
4. Download files from trusted sources only: Be careful when opening attachments
and avoid opening attachments from people you don’t know to prevent rootkit from
being installed on your computer. Download software from reputable sites only. Don’t
ignore your web browser’s warnings when it tells you a website you are trying to visit is
unsafe.
5. Be alert to your computer’s behaviour or performance: Behavioral issues could
indicate that a rootkit is in operation. Stay alert to any unexpected changes and try to
find out why these are happening.
6. Symantec security products: Symantec security products such as Norton Internet
Security and Symantec Endpoint Protection include a number of technologies that are
designed to prevent, detect, and remove rootkits without being fooled by the tricks rootkits
use to remain hidden. Fig. 2.21, below, shows how Symantec protection protects your computer against the different phases of a rootkit's attack.
(Fig. 2.21: Layers of protection — network reputation for websites, domains and IP addresses, file reputation, behavioural analysis, and repair — applied to files and network traffic.)
With rootkits, as with all threats, the best defence is a good offence. Almost all rootkits
start off on a computer with a simple application that may be an .exe-based installer or
some user-mode shell code that will install the actual rootkit and hide itself. For example,
when Tidserv first arrives on a machine, and before it has installed its driver, it is just a
regular executable file that is probably dropped on the machine by other malware or via
a drive-by download. Obviously, a good security solution will detect the rootkit before
it can get its hooks into the system.
Network-Based Protection: The first layer of protection in the STAR security solution
is intrusion prevention system (IPS), which blocks threats that attempt to get onto a
machine. As noted earlier, a common way malware gets onto computers is when users
visit an otherwise innocuous website that has been compromised and is hosting an attack
toolkit that serves up drive-by down-loads. If the attack toolkit can exploit a bug in out-
of-date or vulnerable browser software that the user is running, it then silently downloads
malware such as a rootkit onto the vulnerable visitor’s computer. Since most malware is
now delivered via Web-based attacks, this is often the first opportunity for a Symantec
IPS to detect and prevent an attack. The IPS technology intercepts network traffic when
it sees malicious patterns of Web-attack toolkits or characteristics of vulnerabilities being
exploited from a malicious website. Symantec IPS also adds protection into the browser
to safeguard it against threats designed to take advantage of browser vulnerabilities even
if they are obfuscated or include complex JavaScript. Thus, if a user visits a site hosting
malicious content that tries to infect the user’s computer with a rootkit, IPS can block the
threat either at the network layer or in the browser before it has a chance to download
thus protecting the user’s system.
Network-based detection is one of the most powerful ways to block malware in general. It
is much easier for malware authors to modify their files than it is for them to modify their
network traffic patterns. Strong IPS protection allows Symantec to prevent malware from
ever landing on a machine. Blocking threats at this level is the fastest, and the safest way
to keep a computer clean. In recent years, Symantec has observed a marked increase in
the percentage of threats blocked by our IPS engines as compared to more traditional
antivirus engines. In 2010, for example, half of the threats that Symantec detected were
blocked using network-based protection.
File-Based Protection: In the rare case that malware evades network defences, or is
introduced onto a user’s computer through a non-network-based vector such as a USB
key, Symantec employs the additional defence of file-based protection. When a file is
written to the computer’s disk or accessed by the user, the file is immediately scanned by
Symantec AutoProtect technology and our antivirus engines. The scan looks for known
signatures as well as for known malicious patterns. Additionally, the Symantec MalHeur
(short for Malware Heuristics) engine is able to detect previously unknown malicious
files based on patterns developed from our having previously detected millions of other
threats. The MalHeur engine compares the characteristics of the potentially malicious file
against the attributes of millions of sample files (both benign and malicious) to logically
detect previously unseen malware. Thus, Symantec technologies can quickly identify and
remove potentially malicious files that appear similar to other known rootkit installers
before they are ever allowed to run.
Reputation-Based Protection: Norton Internet Security and Symantec Endpoint
Protection receive anonymous telemetry information about executable files installed and
running on our customers’ machines. Symantec’s Download Insight technology uses this
reputation profile for any new executable content our customers download. If the file has
a bad reputation (or no reputation) we can block the file before it gets a chance to infect
the system. Thus, a previously unknown rootkit installer that is unwittingly downloaded
onto a Symantec protected machine will not get to run specifically because we have
never seen it before.
Behavioral-Based Protection: Finally, if a rootkit installer is introduced by removable
media, there is Symantec’s SONAR behavioral-based protection technology. SONAR
monitors more than 1300 different application behaviours in real time. This means that,
as an application launches, SONAR scans the application for potentially malicious activity.
SONAR tracks whether an application tries to install new services or drivers, wants to
inject code into other processes, or if it tries to modify system files or perform other
malicious actions. SONAR also checks the reputation of the file that is attempting to run.
If SONAR determines that the application is malicious, it can quarantine it before it is
able to infect the system. SONAR technology successfully provided zero-day protection
for such high-profile threats as Hydraq, Imsolk and Stuxnet.
directly affecting the functionality of the infected computer, this rootkit downloads and installs malware on the infected machine and makes it part of a worldwide botnet used by hackers to carry out cyberattacks. ZeroAccess is in active use today.
•• TDSS: In 2008, the TDSS rootkit was detected for the first time. This is similar to
bootloader rootkits because it loads and runs at the operating system's early stages
– making detection and removal a challenge.
(Figure: A typical IDS deployment — traffic from the internet passes through a firewall and an internal switch to the server and workstations, with an IDS monitor attached to the switch.)
Depending on your use case and budget, you can deploy a NIDS or HIDS or rely on
both main IDS types. The same applies to detection models as many teams set up a hybrid
system with SIDS and AIDS capabilities.
Before you determine a strategy, you need to understand the differences between IDS
types and how they complement each other. Let us look at each of the four main IDS types,
their pros and cons, and when to use them.
(Figure: Types of intrusion detection system — signature based, anomaly based, host based, and network based.)
(Figure: A NIDS deployment — perimeter network traffic passing through the firewall, router, and switch to workstations, servers, and databases is compared against a database of attack signatures; the NIDS alerts the administrator and retains data for forensic analysis.)
(Figure: A HIDS deployment — HIDS agents installed on individual workstations and servers behind the firewall monitor each host directly.)
Advantages of a HIDS
•• Offers deep visibility into the host device and its activity (changes to the configuration,
permissions, files, registry, etc.).
•• An excellent second line of defence against a malicious packet a NIDS failed to detect.
•• Good at detecting packets originating from inside the organisation, such as unauthorised
changes to files from a system console.
•• Effective at detecting and preventing software integrity breaches.
•• Better at analysing encrypted traffic than a NIDS due to fewer packets.
•• Far cheaper than setting up a NIDS.
Disadvantages of a HIDS
•• Limited visibility as the system monitors only one device.
•• Less available context for decision-making.
•• Hard to manage for large companies as the team needs to configure and handle info
for every host.
•• More visible to attackers than a NIDS.
•• Not good at detecting network scans.
2.10.4 Signature-Based Intrusion Detection System (SIDS)
A SIDS monitors packets moving through a network and compares them to a database of known attack signatures or attributes. This common type of IDS security looks for specific patterns, such as byte or instruction sequences — for example, the number of bytes or particular sequences of 1's and 0's in the network traffic. It also detects attacks on the basis of already known malicious instruction sequences used by malware. The detected patterns in the IDS are known as signatures. A signature-based IDS can easily detect attacks whose pattern (signature) already exists in the system, but it is quite difficult for it to detect new malware attacks, as their pattern (signature) is not known.
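The matching step a SIDS performs can be sketched minimally: scan a payload for known malicious byte sequences. The signature database below is invented purely for illustration; real signature sets are far larger and support wildcards and protocol context.

```python
# Minimal signature matching: report every known signature found in a
# payload. Both signatures are invented examples, not real IDS rules.
SIGNATURES = {
    "shell-exploit": b"\x90\x90\x90\x90\xcc",   # e.g. a NOP sled pattern
    "sql-injection": b"' OR 1=1 --",
}

def match_signatures(payload: bytes):
    """Return the names of all signatures present in the payload."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]

alerts = match_signatures(b"GET /login?user=admin' OR 1=1 --")
```

A payload whose pattern is not in SIGNATURES produces no alert, which is exactly the weakness noted above for new malware.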
Advantages of a SIDS
•• Works well against attackers using known attack signatures.
•• Helpful for discovering low-skill attack attempts.
•• Effective at monitoring inbound network traffic.
•• Can efficiently process a high volume of network traffic.
Disadvantages of a SIDS
•• Cannot identify a breach without a specific signature in the threat database.
•• A savvy hacker can modify an attack to avoid matching known signatures, such as
changing lowercase to uppercase letters or converting a symbol to its character code.
•• Requires regular updates of the threat database to keep the system up to date with
the latest risks.
2.10.5 Anomaly-Based Intrusion Detection System (AIDS)
An AIDS monitors ongoing network traffic and analyses patterns against a baseline. It goes
beyond the attack signature model and detects malicious behaviour patterns instead of specific
data patterns. This type of IDS uses machine learning to establish a baseline of expected
system behaviour (trust model) in terms of bandwidth, protocols, ports, and device usage. The
system can then compare any new behaviour to verified trust models and discover unknown attacks that a signature-based IDS cannot identify. For example, someone in the Sales department
trying to access the website’s backend for the first time may not be a red flag for a SIDS. For
an anomaly-based setup, however, a person trying to access a sensitive system for the first
time is a cause for investigation.
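A toy illustration of the baseline idea, using a simple statistical threshold in place of a real machine-learning trust model. The traffic figures and the three-standard-deviation cutoff are assumptions for the sketch only:

```python
# Toy anomaly detector: learn a baseline (mean and standard deviation) of
# normal per-second bandwidth, then flag values that deviate too far from it.
from statistics import mean, stdev

def build_baseline(samples):
    """Summarise 'normal' behaviour as (mean, standard deviation)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag values more than k standard deviations from the baseline mean."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma

normal_traffic = [98, 102, 101, 99, 100, 103, 97, 100]  # KB/s, assumed data
baseline = build_baseline(normal_traffic)
print(is_anomalous(100, baseline))   # typical load → False
print(is_anomalous(5000, baseline))  # sudden spike → True
```

Note that the spike is flagged even though no signature for it exists, which is the key advantage of an AIDS over a SIDS.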
Confidentiality Policies 95
Advantages of an AIDS
•• Can detect signs of unknown attack types and novel threats.
•• Relies on machine learning and AI to establish a model of trustworthy behaviour.
Disadvantages of an AIDS
•• Complex to manage.
•• Requires more processing resources than a signature-based IDS.
•• High volumes of alarms can overwhelm admins.
8. Even after a fix is developed, the fewer the days, the higher the probability that an
attack against the afflicted software will be successful, because not every user of that
software will have applied the fix.
9. For zero-day exploits, unless the vulnerability is inadvertently fixed, for example, by
an unrelated update that happens to fix it, the probability that a user has applied a
vendor-supplied patch that fixes the problem is zero, so the exploit remains available.
Zero-day attacks are therefore a severe threat.
4. Describe the types of VM based isolation.
Ans. Following are the types of Virtual Machine based isolation:
(a) Process virtual machines:
1. Process virtual machines support individual processes or a group of processes
and enforce isolation between the processes and operating system environment.
2. Process virtual machines can run processes compiled for the same Instruction
Set Architecture (ISA) or for a different ISA, as long as the virtual machine
runtime supports the translation.
3. Isolation policies are provided by a runtime component which runs the processes
under its control.
4. Isolation is guaranteed because the virtual machine runtime does not allow direct
access to the resources.
(b) System virtual machines (Hypervisor virtual machines):
1. System virtual machines provide a full replica of the underlying platform and thus
enable complete operating systems to be run within it.
2. The virtual machine monitor (also called the hypervisor) runs at the highest
privilege level and divides the platform’s hardware resources amongst multiple
replicated guest systems.
3. All accesses by the guest systems to the underlying hardware resources are then
mediated by the virtual machine monitor.
4. This mediation provides the necessary isolation between the virtual machines.
5. System virtual machines can be implemented in a pure-isolation mode in which
the virtual systems do not share any resources between themselves or in a sharing-
mode in which the VM Monitor multiplexes resources between the machines.
6. Pure-isolation mode virtual machines are as good as separate physical machines.
(c) Hosted virtual machines:
1. Hosted Virtual Machines are built on top of an existing operating system called
the host.
2. The virtualisation layer sits above the regular operating system and makes the
virtual machine look like an application process.
3. We then install complete operating systems called guest operating systems within
the host virtual machines.
4. The VM can provide the same instruction set architecture as the host platform or
it may also support a completely different Instruction Set Architecture (ISA).
5. VMware GSX Server is an example where the host ISA and guest ISA are the
same.
6. Isolation in hosted virtual machines is as good as the isolation provided by the
hypervisor approach except that the virtual machine monitor in the case of the
hosted VM does not run at the highest privilege.
7. The processes running inside the virtual machine cannot affect the operation of
processes outside the virtual machine.
(d) Hardware virtual machines:
1. Hardware virtual machines are virtual machines built using virtualisation primitives
provided by the hardware like processor or I/O.
2. The advantage of hardware level virtualisation is tremendous performance
improvements over the software based approaches and guarantees better isolation
between machines.
3. The isolation provided by hardware-assisted virtualisation is more secure than
that provided by its software counterpart, since the boundary is enforced by the
hardware itself rather than by software that may contain bugs.
5. What are the problems related with MAC?
Ans. Following are the different problems in MAC:
1. Requirement of new security levels:
(a) In MAC, there is no security level for common people (people outside the
organisation) at which they can access certain data or information to learn about
the organisation or business; hence, marketing of the organisation or business is
not possible in traditional MAC.
(b) Hence, an organisation cannot have efficient growth by adopting MAC.
(c) Hence, an update is required to alter the security levels and include this functionality
in proposed model which is an alternate to MAC.
2. Filtration:
(a) The security levels are assigned to both subjects and objects.
(b) These levels are assigned to values inside each attribute.
(c) The Bell-LaPadula model forms the basis of MAC.
3. Polyinstantiation:
(a) In polyinstantiation, multiple instances of a tuple are created.
(b) Consider the example, where user with security level confidential can view
attributes which are at lower level or equal level as compared to this user.
(c) Other values are displayed as NULL. These values can be accessed and changed
by this user by taking a key which is at the lowest level in this relation; any
attribute can then be accessed using this key or value.
6. What is VM based Isolation?
Ans. 1. A VM is an isolated environment with access to a subset of physical resources of the
computer system.
2. Each VM appears to be running on the bare hardware, giving the appearance of
multiple instances of the same computer, though all are supported by a single physical
system.
3. Execute:
(a) In Windows, an executable programme usually has the extension “.exe” and
can easily be run.
(b) In Unix/Linux, we cannot run a programme unless the execute permission is set.
(c) If the execute permission is not set, we might still be able to see/modify the
programme code (provided read & write permissions are set), but not run it.
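The behaviour described in points (b) and (c) can be demonstrated directly from Python on a Unix-like system (note that `os.access` checks permissions against the process's real uid):

```python
# Demonstrates the Unix execute bit: a readable script still cannot be run
# until the execute permission is set.
import os
import tempfile

fd, path = tempfile.mkstemp(suffix=".sh")
os.write(fd, b"#!/bin/sh\necho hello\n")
os.close(fd)

os.chmod(path, 0o600)                      # rw- --- --- : no execute bit
can_exec_before = os.access(path, os.X_OK)
can_read = os.access(path, os.R_OK)        # we can still see the code

os.chmod(path, 0o700)                      # rwx --- --- : execute bit set
can_exec_after = os.access(path, os.X_OK)
os.unlink(path)

print(can_exec_before, can_read, can_exec_after)  # → False True True
```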
9. What are the components of IDS?
Ans. Components of intrusion detection system are:
1. A packet decoder: It takes packets from different networks and prepares them for
preprocessing or any further action. It basically decodes the coming network packets.
2. A preprocessor: It prepares and modifies the data packets and also performs
defragmentation of data packets, decodes the TCP streams.
3. A detection engine: It performs the packet detection on the basis of Snort rules. If any
packet matches the rules, appropriate action is taken, else it is dropped.
4. Logging and alerting system: The detected packet is either logged in system files or
in case of threats, the system is alerted.
5. Output modules: They control the type of output from the logging and alert system.
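As a rough sketch of how these five components fit together (the rule format and packet handling here are simplified assumptions, not Snort's actual implementation):

```python
# Sketch of the IDS pipeline: packet decoder -> preprocessor ->
# detection engine -> logging and alerting -> output of alerts.
RULES = [b"attack", b"exploit"]        # stand-ins for Snort-style rules

def decode(raw):
    """Packet decoder: prepare the raw packet for preprocessing."""
    return raw.strip()

def preprocess(pkt):
    """Preprocessor: normalise the packet (here, just lowercase it)."""
    return pkt.lower()

def detect(pkt):
    """Detection engine: does any rule match this packet?"""
    return any(rule in pkt for rule in RULES)

def run_ids(raw_packets):
    logs, alerts = [], []              # logging and alerting system state
    for raw in raw_packets:
        pkt = preprocess(decode(raw))
        logs.append(pkt)               # every packet is logged
        if detect(pkt):                # threats raise an alert
            alerts.append(pkt)
    return alerts                      # the output module would report these

print(run_ids([b"  normal traffic ", b"launch EXPLOIT now"]))
# → [b'launch exploit now']
```

The normalisation step matters: without the preprocessor lowercasing the packet, the uppercase "EXPLOIT" would slip past the rules, which is exactly the evasion trick mentioned under the disadvantages of a SIDS.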
10. Why is security hard?
Ans. 1. Today, in computers and on the Internet, attack is easier than defence. There are
many reasons for this, but the most important is the complexity of these systems.
2. Complexity is the worst enemy of security. The more complex a system is, the less
secure it is.
3. A hacker typically targets the “attack surface” of a system. The attack surface of a
system contains all the possible points that a hacker might target.
4. A complex system means a large attack surface, and that means a huge advantage
for the hacker.
5. The hacker just has to find one vulnerability. He can also attack constantly until
successful.
6. At the same time, the defender has to secure the entire attack surface from every
possible attack all the time.
7. Also the cost to attack a system is only a fraction of the cost to defend it.
8. This is one of the reasons why security is so hard, even though over the years there
is significant improvement in security technologies.
11. What is Access Control list (ACL) and also define what are the technologies used in
access control?
Ans. Access control list:
(a) An access-control list is a list of permissions attached to an object.
(b) An ACL specifies which users or system processes are granted access to objects, as
well as what operations are allowed on given objects.
(c) Each entry in a typical ACL specifies a subject and an operation.
(d) An access control list (ACL) is a table that tells a computer operating system which
access rights each user has to a particular system object, such as a file directory or
individual file.
(e) Each object has a security attribute that identifies its access control list.
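Points (a) to (c) can be condensed into a short sketch; the object name, subjects, and operations below are illustrative examples, not a real system's ACL:

```python
# Minimal ACL: each object maps to a list of entries, and each entry pairs
# a subject with the operations that subject is allowed to perform.
ACL = {
    "payroll.txt": [("alice", {"read", "write"}), ("bob", {"read"})],
}

def is_allowed(subject, operation, obj):
    """Grant access only if some ACL entry names this subject and operation."""
    for entry_subject, ops in ACL.get(obj, []):
        if entry_subject == subject and operation in ops:
            return True
    return False          # default deny: no matching entry means no access

print(is_allowed("bob", "read", "payroll.txt"))   # → True
print(is_allowed("bob", "write", "payroll.txt"))  # → False
```

The default-deny fall-through is the important design choice: a subject absent from the list gets no access at all.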
Access control technology includes:
1. Access Technology Architectures:
(a) Internet of Things (IoT) access control
(b) Physical Access Control System (PACS)
2. Communications technologies:
(a) Radio Frequency Identification (RFID) access control
(b) Near Field Communication (NFC) access control
(c) Bluetooth Access Control (BAC) access control
(d) Wireless access control technology.
3. Authentication technologies:
(a) Biometric access control technology
(b) Smart card access control technology
(c) Mobile Access Control (MAC) access control
(d) Two Factor Authentication in access control.
4. Infrastructure technologies:
(a) Internet switches for access technology
(b) CAT6 Cable access control technology
(c) Power over Ethernet (PoE) access control
(d) IP based Access Control.
12. Write short notes on Software Fault Isolation (SFI): (i) Goal and solution, (ii) SFI
approach.
Ans. Goal and solution:
1. Software Fault Isolation (SFI) is an alternative for unsafe languages, e.g. C, where
memory safety is not guaranteed but needs to be enforced at runtime by programme
instrumentation.
2. SFI is a programme transformation which confines a software component to a memory
sandbox. This is done by pre-fixing every memory access with a carefully designed
code sequence which efficiently ensures that the memory access occurs within the
sandbox.
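The "carefully designed code sequence" is typically an address-masking operation. A minimal sketch of the idea follows; the sandbox base address and size are made-up values, and real SFI emits this masking as inline machine code rather than a function call:

```python
# Sketch of SFI-style confinement: before every "memory access", force the
# address into the sandbox region by masking off the high bits.
SANDBOX_BASE = 0x20000000      # assumed start of the sandbox region
SANDBOX_MASK = 0x000FFFFF      # 1 MiB sandbox: keep only the low 20 bits

def confine(addr):
    """Return an address guaranteed to lie inside the sandbox."""
    return SANDBOX_BASE | (addr & SANDBOX_MASK)

inside  = confine(0x20000042)  # already in the sandbox: unchanged
outside = confine(0xDEADBEEF)  # stray pointer: forced back inside
print(hex(inside), hex(outside))  # → 0x20000042 0x200dbeef
```

Because the mask is applied unconditionally, the check costs a couple of arithmetic instructions per access and never branches, which is why SFI can be efficient.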
SFI approach:
1. Traditionally, the SFI transformation is performed at the binary level and is followed
by a posteriori verification by a trusted SFI verifier.
2. Because the verifier can assume that the code has undergone the SFI transformation, it
can be kept simple, thereby reducing both verification time and the Trusted Computing
Base.
3. This approach is a simple instance of Proof Carrying Code where the compiler is
untrusted and the binary verifier is either trusted or verified.
4. Traditional SFI is well suited for executing binary code from an untrusted origin.
20. A hardware device’s interrupt request invokes __________, which handles this interrupt.
(a) Instruction Set Randomisation (b) Information Storage and Retrieval
(c) Interrupt Service Routine (d) Intermediate Session Routing
21. Which of the following is a method of randomisation?
(a) ASLR (b) Sys-call randomisation
(c) Memory randomisation (d) All of the above.
22. What is the minimum length of a string passed to the function through the input parameter
that can crash the application?
(a) 10 (b) 11
(c) 12 (d) 13
23. Applications developed by programming languages like ____ and ______ have this
common buffer-overflow error.
(a) C, Ruby (b) C, C++
(c) Python, Ruby (d) C, Python
24. _____________ buffer overflows, which are more common among attackers.
(a) Memory-based (b) Queue-based
(c) Stack-based (d) Heap-based
25. Which of the following string library functions is unsafe for buffer?
(a) gets (char * str)
(b) strcat (char * destination, const char * source)
(c) strcpy (char * destination, const char * source)
(d) All of the above
26. Which of the following statements is correct with respect to integer overflow?
(a) It is a result of an attempt to store a value greater than the maximum value an integer
can store
(b) Integer overflow can compromise a program’s reliability and security
(c) Both A and B
(d) None of the above
27. If an integer data type allows integers up to two bytes or 16 bits in length (or an unsigned
number up to decimal 65,535), and two integers are to be added together that will exceed
the value of 65,535, the result will be:
(a) Buffer Overflow (b) Integer Overflow
(c) Stack Overflow (d) Heap Overflow
28. A format string is a __________ string that contains __________ and __________
parameters.
(a) Format, text, ASCII (b) Text, ASCII, format
(c) ASCII, text, format (d) None of the above
29. Which of the following is not a format function in C?
(a) fprintf() (b) vsfprint()
(c) vfprintf() (d) vsprintf()
Answers
1. (d) 2. (b) 3. (c) 4. (a) 5. (a) 6. (b)
7. (d) 8. (a) 9. (a) 10. (b) 11. (d) 12. (b)
13. (c) 14. (a) 15. (c) 16. (b) 17. (b) 18. (c)
19. (d) 20. (c) 21. (d) 22. (c) 23. (b) 24. (c)
25. (d) 26. (c) 27. (b) 28. (c) 29. (b) 30. (a)
31. (a) 32. (b) 33. (d) 34. (c) 35. (a) 36. (d)
37. (a) 38. (b) 39. (d) 40. (d)
3. Secure Architecture Principles: Isolation and Least Privilege
certificates, security tokens, smart cards and biometrics. It is the procedure by which users are
identified and granted specific privileges to information, systems, or resources. Understanding
the element of access control is essential to understanding how to handle proper disclosure of
information. It is the ability to allow or deny the use of a specific resource by a specific entity.
Access control structures can be used to handle physical resources (such as a movie theatre,
to which only ticket-holders may be admitted), logical resources (a bank account, with a limited
number of people authorised to make a withdrawal), or digital resources (a private text file on
a computer, which only specific users should be able to read).
3.1.1 Importance of Access Control
The goal of access control is to minimise the security risk of unauthorised access to physical and
logical systems. Access control is a fundamental component of security compliance programmes
that ensures security technology and access control policies are in place to protect confidential
information, such as customer data. Most organisations have infrastructure and procedures
that limit access to networks, computer systems, applications, files and sensitive data, such as
personally identifiable information and intellectual property. Access control systems are complex
and can be challenging to manage in dynamic IT environments that involve on-premises
systems and cloud services. After high-profile breaches, technology vendors have shifted away
from single sign-on systems to unified access management, which offers access controls for
on-premises and cloud environments.
3.1.2 Types of Access Control
The main models of access control are the following:
•• Mandatory access control (MAC): This is a security model in which access rights
are regulated by a central authority based on multiple levels of security. Often used in
government and military environments, classifications are assigned to system resources
and the operating system or security kernel. MAC grants or denies access to resource
objects based on the information security clearance of the user or device. For example,
Security-Enhanced Linux is an implementation of MAC on Linux.
•• Discretionary access control (DAC): This is an access control method in which
owners or administrators of the protected system, data or resource set the policies
defining who or what is authorised to access the resource. Many of these systems
enable administrators to limit the propagation of access rights. A common criticism
of DAC systems is a lack of centralised control.
•• Role-based access control (RBAC): This is a widely used access control mechanism
that restricts access to computer resources based on individuals or groups with defined
business functions — e.g., executive level, engineer level 1, etc. — rather than the
identities of individual users. The role-based security model relies on a complex
structure of role assignments, role authorisations and role permissions developed
using role engineering to regulate employee access to systems. RBAC systems can
be used to enforce MAC and DAC frameworks.
•• Rule-based access control: This is a security model in which the system administrator
defines the rules that govern access to resource objects. These rules are often based on
conditions, such as time of day or location. It is not uncommon to use some form of
both rule-based access control and RBAC to enforce access policies and procedures.
Each entry in /etc/passwd records: the account name, the encrypted password, the uid, the
group id, the user’s “in real life” name, the home directory (where files start on login), and
which shell program starts on login.
Every process inherits its uid based on which user starts the process. Every process also
has an effective uid, also a number, which may be different from the uid. Finally, each UNIX
process is a member of some groups. In the original UNIX every user was a member of one
group. Currently, users can be members of more than one group. Group information can be
gotten from /etc/passwd or from a file /etc/groups.
System administrators control the latter file. An entry in /etc/groups may look like:
staff : : 17 : fbs, ldzhou, ulfar
All of the above implements a form of authentication, knowing the identity of the subject
running commands. Objects in UNIX are files. UNIX attempts to make everything look like
a file. (E.g., one can think of “writing” to a process as equivalent to sending a message, etc.)
Because of this, we will only worry about files, recognising that just about every resource can
be cast as a file.
Here is a high-level overview of the UNIX file system. A directory is a list of pairs:
(filename, i-node number). Running the command ‘ls’ will produce a list of file names from
this list of pairs for the current working directory.
An i-node contains a lot of information, including:
•• where the file is stored: necessary since the directory entry is used to access the file,
•• the length of the file: necessary to avoid reading past the end of the file,
•• the last time the file was read,
•• the last time the file was written,
•• the last time the i-node was read,
•• the last time the i-node was written,
•• the owner: a uid, generally the uid of the process that created the file,
•• a group: gid of the process that created the file is a member of,
•• 12 mode bits to encode protection privileges: equivalent to encoding a set of
access rights.
Nine of the 12 mode bits are used to encode access rights. These access bits can
be thought of as the protection matrix entry. They are divided into three groups of three:
u g o
rwx rwx rwx
Fig. 3.1 (c) UNIX Access Control
The first triplet (u) is for the user, the second (g) for the group and the third (o)
for anyone else. If a particular bit is on, then the named set of processes have the
corresponding access privileges (r:read, w:write, x:execute). There are some subtleties
however. In order to access a file, it is necessary to utter that object’s name. Names are always
relative to some directory.
For example: ~fbs/text/cs513/www/L07.html. Directories are just files themselves, but
in the case of directories:
•• The “r” (read) bit controls the ability to read the list of files in a directory. If “r” is set,
you can use “ls” to look at the directory.
•• The “x” (search) bit controls the ability to use that directory to construct a valid
pathname. If the “x” bit is set, you can look at a file contained in the directory.
Thus, for example, the ‘x’ bit allows a user to make the directory under consideration the
current working directory and it needs to be on to read files in the current working directory.
So a file can be made inaccessible by turning off the ‘x’ bit for the directory in which the file
resides.
Does ‘x’ without ‘r’ access make sense? Yes! This is a directory whose files’ names
cannot be learned, but whose files are accessible if you happen to know their names. This is
actually useful.
Does ‘r’ without ‘x’ access make sense? This is a directory whose files’ names can
be learned, but whose files cannot be accessed. It is not very useful.
In UNIX there are number of rules to define how the bits are set initially and how they
can be changed. We will discuss how to change them. There is a command ‘chmod’ that
changes the mode bits. What objects can chmod access? Only the uid that is the owner of a
file can execute chmod for that file (except for root’s uid, of course). There is also a command
to change the owner of a file, but that has been removed from more recent systems.
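A quick demonstration of mode bits being changed with chmod and read back (Unix-like system assumed; unlike file creation, chmod is unaffected by the umask):

```python
# Demonstrates chmod: only the 12 mode bits change, and stat reads them
# back. 0o640 means rw- for owner, r-- for group, --- for everyone else.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o640)                      # rw- r-- ---
mode1 = stat.S_IMODE(os.stat(path).st_mode)
os.chmod(path, 0o755)                      # rwx r-x r-x
mode2 = stat.S_IMODE(os.stat(path).st_mode)
os.unlink(path)

print(oct(mode1), oct(mode2))  # → 0o640 0o755
```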
What about the final three of the 12 mode bits? The mechanism discussed so far
does not support domain changes. There is a single domain, the user id, and once a process
is running it is (abstractly) in that row of the protection matrix. Imagine a situation where we
want files to be viewable only from within a particular programme. This is not possible in the
current framework. But, the additional mode bits allow this. We will only mention two of the
three bits. They are: suid (set user id) and sgid (set group id). A file with the suid bit on does
not run with the uid of the process making the call, but rather with an effective uid that is the
owner of the file. This enables us to change from executing as one subject to executing as
another subject. The sgid bit works on the same principle, but for groups.
These additional mode bits are used when there are programmes that access lots of
objects but in a controlled way (e.g. root privileges). It is useful to have programmes that are
setuid for a user, and thus do less damage than a user running the programme with full root
privileges. We do not have the notion of a template, as we discussed previously, so this UNIX
mechanism is less powerful. We do not realise the principle of least privilege.
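The uid switch that suid performs can be modelled as a single rule. The uids below are made-up example values (though a program like passwd really is commonly setuid root):

```python
# Toy model of the suid mechanism: with the suid bit on, the process's
# effective uid becomes the file owner's uid instead of the caller's.
def effective_uid(caller_uid, file_owner_uid, suid_bit):
    return file_owner_uid if suid_bit else caller_uid

# a setuid-root program (owner uid 0) run by an ordinary user:
print(effective_uid(caller_uid=1000, file_owner_uid=0, suid_bit=True))   # → 0
# the same program without the suid bit keeps the caller's uid:
print(effective_uid(caller_uid=1000, file_owner_uid=0, suid_bit=False))  # → 1000
```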
There is a UNIX that uses a notion of an additional access control list, and not just mode
bits to handle access control. In this case, each file has mode bits as we have been discussing
and also extended permissions.
The extended permissions provide exceptions to the mode bits as follows:
•• Specify: for example, “r-- u:harry” means that user harry has read only access.
•• Deny: for example “-w- g:acsu” means remove write access from the group acsu.
•• Permit: for example “rw- u:bill, g:swe” means give read and write access to bill if bill
is also a member of the group swe. The comma acts as a conjunction (AND).
With extended permissions it’s possible to force a user to enter a particular group before
being allowed access to a file.
3.2.3 Windows NT – Access Control
Windows NT supports multiple file systems, but the protection issues we will consider are
only associated with one: NTFS. In NT there is the notion of an item, which can be a file or a
directory. Each item has an owner. An owner is usually the thing that created the item. It can
change the access control list, allow other accounts to change the access control list and allow
other accounts to become owner. Entries in the ACL are individuals and groups. Note that NT
was designed for groups of machines on a network, thus, a distinction is made between local
groups (defined on a particular workstation) and global groups (domain wide). A single name
can therefore mean multiple things.
NTFS is structured so that a file is a set of properties, the contents of the file being just
one of those properties. An ACL is a property of an item. The ACL itself is a list of entries: (user
or group, permissions). NTFS permissions are closer to extended permissions in UNIX than to
the 9 mode bits. The permission offers a rich set of possibilities:
•• R — read
•• W — write
•• X — execute
•• D — delete
•• P — modify the ACL
•• O — make current account the new owner (“take ownership”)
The owner is allowed to change the ACL. A user with permission P can also change the
ACL. A user with permission O can take ownership. There is also a packaging of privileges
known as permissions sets:
•• no access
•• read -- RX
•• change -- RWXD
•• full control -- RWXDPO
3.2.4 Access Control Lists
Some systems abbreviate access control lists. The basis for file access control in the UNIX
operating system is of this variety.
UNIX systems divide the set of users into three classes as:
•• the owner of the file,
•• the group owner of the file,
•• all other users. Each class has a separate set of rights.
Example: UNIX systems provide read (r), write (w), and execute (x) rights. When user
bishop creates a file, assume that it is in the group vulner. Initially, bishop requests that he be
able to read from and write to the file, that members of the group be allowed to read from the
file, and that no one else have access to the file. Then the permissions would be rw for owner,
r for group, and none for other.
UNIX permissions are represented as three triplets. The first is the owner rights; the
second, group rights; and the third, other rights. Within each triplet, the first position is r if read
access is allowed or – if it is not; the second position is w if write access is allowed or – if it is
not; and the third position is x if execute access is allowed or – if it is not. The permissions for
bishop’s file would be rw–r–––––.
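The three triplets can be rendered mechanically from a numeric mode; the sketch below reproduces the rw–r––––– example above, which in octal is 0o640:

```python
# Renders a numeric Unix mode as the familiar three rwx triplets
# (user, group, other), matching the output of ls -l.
def mode_string(mode):
    out = []
    for shift in (6, 3, 0):               # user, group, other triplets
        bits = (mode >> shift) & 0b111
        out.append("r" if bits & 4 else "-")
        out.append("w" if bits & 2 else "-")
        out.append("x" if bits & 1 else "-")
    return "".join(out)

print(mode_string(0o640))  # → rw-r-----
print(mode_string(0o755))  # → rwxr-xr-x
```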
An interesting question is how UNIX systems assign group ownership. Traditionally,
UNIX systems assign the effective principal group ID of the creating process. But in some cases
this is not appropriate. For instance, suppose the line printer programme works by using group
permissions; say its group is lpdaemon. Then, when a user copies a file into the spool directory,
lpdaemon must own the spool file. The simplest way to enforce this requirement is to make the
spool directory group owned by lpdaemon and to have the group ownership inherited by all
files created in that directory. Some systems—notably, Solaris and SunOS systems—augment
the semantics of file protection modes by setting the setgid bit on the directory when any files
created in the directory are to inherit the group ownership of the containing directory.
Abbreviations of access control lists, such as those supported by the UNIX operating
system, suffer from a loss of granularity. Suppose a UNIX system has five users. Anne wants
to allow Beth to read her file, Caroline to write to it, Della to read and write to it, and Elizabeth
to execute it. Because there are only three sets of permissions and five desired arrangements
of rights (including Anne’s own), three triplets are insufficient to allow all desired modes of
access. Hence, Anne must compromise, and either give someone more rights than she desires
or give someone fewer rights. Similarly, traditional UNIX access control does not allow one to say
“everybody but user Fran”; to do this, one must create groups of all users except Fran. Such
an arrangement is cumbersome, the more so because only a system administrator can create
groups. Many systems augment abbreviations of ACLs with full-blown ACLs. This scheme uses
the abbreviations of ACLs as the default permission controls; the explicit ACL overrides the
defaults as needed. The exact method varies.
EXAMPLE: IBM’s version of the UNIX operating system, called AIX, uses an ACL
(called “extended permissions”) to augment the traditional UNIX abbreviations of ACL (called
“base permissions”). Unlike traditional ACLs, the AIX ACL allows one to specify permissions
to be added or deleted from the user’s set. Like UNICOS, AIX bases its matching on group
and user identity.
The specific algorithm (using AIX’s terminology, in which “base permissions” are the
UNIX abbreviations of ACLs and “extended permissions” are unabbreviated ACL entries) is
as follows:
1. Determine what set S of permissions the user has from the base permissions.
2. If extended permissions are disabled, stop. The set S is the user’s set of permissions.
3. Get the next entry in the extended permissions. If there are no more, stop. The set S
is the user’s set of permissions.
4. If the entry has the same user and group as the process requesting access, determine
if the entry denies access. If so, stop. Access is denied.
5. Modify S as dictated by the permissions in the entry.
6. Go to step 3.
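The steps above can be sketched as follows. The entry format (kind, user, group, permissions) is a simplification of AIX's actual extended-permission syntax, and all names are illustrative:

```python
# Sketch of the AIX extended-permission algorithm: start from the base
# permissions, then walk the extended entries that match the requesting
# user and group. A deny entry stops everything; permit adds rights;
# specify replaces them. Returns None when access is denied outright.
def aix_permissions(user, group, base_perms, extended, enabled=True):
    S = set(base_perms)                    # step 1: base permissions
    if not enabled:                        # step 2: extended disabled
        return S
    for kind, e_user, e_group, perms in extended:   # steps 3-6
        if e_user == user and e_group == group:     # step 4: match?
            if kind == "deny":
                return None                # access denied, stop
            if kind == "permit":
                S |= set(perms)            # step 5: add rights
            elif kind == "specify":
                S = set(perms)             # step 5: replace rights
    return S

ext = [("permit", "bill", "swe", {"r", "w"})]
print(sorted(aix_permissions("bill", "swe", {"r"}, ext)))  # → ['r', 'w']
print(sorted(aix_permissions("anne", "swe", {"r"}, ext)))  # → ['r']
```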
As a specific example, consider the following representation of an AIX system’s access
control permissions for the file xyzzy.
3.2.5 Windows NT Access Control Lists
Windows NT provides access control lists for those files on NTFS partitions. Windows NT allows
a user or group to read, write, execute, delete, change the permissions of, or take ownership of
a file or directory. These rights are grouped into commonly assigned sets called generic rights.
The generic rights for files are as follows:
•• no access, whereby the subject cannot access the file
•• read, whereby the subject can read or execute the file
•• change, whereby the subject can read, execute, write, or delete the file
•• full control, whereby the subject has all rights to the file
In addition, the generic right special access allows the assignment of any of the six
permissions.
Windows NT directories also have their own notion of generic rights.
•• no access, whereby the subject cannot access the directory
•• read, whereby the subject can read or execute files within the directory
•• list, whereby the subject can list the contents of the directory and may change to a
subdirectory within that directory
•• add, whereby the subject may create files or subdirectories in the directory
•• add and read, which combines the generic rights add and read
•• change, whereby the subject can create, read, execute, or write files within the
directory and can delete subdirectories
•• full control, whereby the subject has all rights over the files and subdirectories in
the directory
As before, the generic special access right allows assignment of other combinations of
permissions.
When a user accesses a file, Windows NT first examines the file’s ACL. If the user is
not present in the ACL, and is not a member of any group listed in the ACL, access is denied.
Otherwise, if any ACL entry denies the user access, Windows NT denies the access (this is an
explicit denial, which is calculated first). If access is not explicitly denied, and the user is named
in the ACL (as either a user or a member of a group), the user has the union of the set of rights
from each ACL entry in which the user is named.
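That evaluation order can be sketched directly. The ACL entry format (principal, kind, rights) and the names used are illustrative assumptions, not the actual NTFS on-disk layout:

```python
# Sketch of Windows NT ACL evaluation: if the user appears nowhere in the
# ACL (directly or via a group), access is denied; any explicit deny entry
# wins; otherwise rights are the union of all matching allow entries.
def nt_access(user, user_groups, acl):
    """acl: list of (principal, kind, rights); principal is a user or group."""
    matching = [e for e in acl if e[0] == user or e[0] in user_groups]
    if not matching:
        return set()                          # not named anywhere: no access
    if any(kind == "deny" for _, kind, _ in matching):
        return set()                          # explicit denial is checked first
    rights = set()
    for _, _, r in matching:                  # union of all allow entries
        rights |= set(r)
    return rights

acl = [("staff", "allow", {"R", "X"}), ("carol", "allow", {"W"})]
print(sorted(nt_access("carol", {"staff"}, acl)))  # → ['R', 'W', 'X']
print(sorted(nt_access("dave", set(), acl)))       # → []
```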
isolated environment away from the user’s computer. Since no web content actually ever reaches
the user’s computer, malware has no entry point into the system.
3.3.1 Web Browser
A web browser, or browser, is a software application installed on your personal computer or
mobile device for retrieving information on the World Wide Web (WWW). When a user requests
a web page from a specific website, the browser fetches the necessary information from a web
server and then displays the requested page on the user’s device. Web browsers such as Firefox,
Chrome, Internet Explorer, Safari, and Opera usually provide extensions or accessibility features
that allow users to modify the browsing experience as well as enhance the browser’s functionality,
performance and GUI. The network module fetches a site page and prepares the information
to be parsed by the HTML parser. The HTML parser creates a DOM that may invoke different
execution engines, such as the JavaScript engine and the CSS engine, which control the valid
flow of processed content among components.
3.3.2 Browser Isolation
Browser Isolation (also known as Web Isolation) is a technology that contains web browsing
activity inside an isolated environment, like a sandbox or virtual machine, in order to protect
computers from any malware the user may encounter. This isolation may occur locally on the
computer or remotely on a server. Browser Isolation technology provides malware protection for
day-to-day browsing by eliminating the opportunity for malware to access the end user’s device.
Browser Isolation essentially secures a computer/network from web-based threats by
executing all browsing activity in an isolated virtual environment. Possible threats are contained
in this environment and can’t infiltrate any part of the user’s ecosystem, such as their computer’s
hard drive or other devices on the network. Even though Browser Isolation is gaining traction
as an IT security solution, a lot of misinformation regarding Browser Isolation remains.
[Figure: Remote Browser Isolation. (1) Browser Isolation initiates a session with the destination website through the security service edge; (2) the remote browser issues the website request; (3) Remote Browser Isolation safely executes the website’s potentially risky code and renders the website into a dynamic visual stream; (4) Remote Browser Isolation delivers a full browsing experience, without risk of infection, as a safe visual stream. By contrast, browsing the public Internet without Browser Isolation exposes the user directly to dangerous web content; with an isolated browser, only a safe visual stream reaches the user.]
•• Isolation does not send any web content to the user’s computer. It sends only a visual
stream in the form of pixels.
1. Remote browser isolation:
[Figure: remote (cloud-hosted) browser isolation. The endpoint (user) connects to a cloud server operated by a remote cloud vendor; cloud browsers there execute the website’s HTML, CSS, and JavaScript.]
Remote or cloud-hosted browser isolation keeps untrusted browser activity as far away
as possible from user devices and corporate networks. It does so by conducting a user’s web
browsing activities on a cloud server controlled by a cloud vendor. It then transmits the resulting
webpages to the user’s device so that the user can interact with the Internet like normal, but
without actually loading full webpages on their device. Any user actions, such as mouse clicks
or form submissions, are transmitted to the cloud server and carried out there.
There are several ways a remote browser isolation server can send web
content to a user’s device:
•• Stream the browser to the user: The user views a video or an image of their
browsing activity; this technique is also known as “pixel pushing.” This method
introduces latency to user browsing activities, sometimes resulting in a poor user
experience.
•• Open, inspect, and rewrite each webpage to remove malicious content,
then send to the local user browser: With this method, known as DOM rewriting,
webpages are loaded in an isolated environment and rewritten to remove potential
attacks. Once the content is considered safe, it is sent to the user’s device, where
the webpage code loads and executes a second time. This approach may not be
compatible with all websites.
•• Send final output of webpage to user: Once a webpage fully loads and all code
is executed by the browser, a vector graphics representation of the final version of
the webpage is sent to the user.
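The DOM-rewriting approach can be illustrated with a toy sanitiser built on Python’s standard `html.parser` module. The two rules shown here, dropping `<script>` elements and `on*` event-handler attributes, are only a small subset of what a real product must handle, and the class and function names are invented:

```python
from html.parser import HTMLParser

class Rewriter(HTMLParser):
    """Toy DOM rewriter: drops <script> elements and on* event handlers."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0          # > 0 while inside a <script> element

    def handle_starttag(self, tag, attrs):
        if tag == 'script':
            self.skip_depth += 1
            return
        if self.skip_depth:
            return
        safe = [(k, v) for k, v in attrs if not k.startswith('on')]
        rendered = ''.join(f' {k}="{v}"' for k, v in safe)
        self.out.append(f'<{tag}{rendered}>')

    def handle_endtag(self, tag):
        if tag == 'script':
            self.skip_depth = max(0, self.skip_depth - 1)
            return
        if not self.skip_depth:
            self.out.append(f'</{tag}>')

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def rewrite(html):
    r = Rewriter()
    r.feed(html)
    return ''.join(r.out)

print(rewrite('<p onclick="evil()">hi</p><script>steal()</script>'))
# → <p>hi</p>
```

As the text notes, the rewritten page is then loaded and executed a second time on the user’s device, which is why this method may not be compatible with all websites.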
2. On-premise browser isolation: On-premise browser isolation does the same thing,
but on a server that an organisation manages internally.
118 Computer System Security
[Figure: on-premise browser isolation. The endpoint (user) connects to a corporate server behind the internal firewall; the server executes the website’s HTML, CSS, and JavaScript.]
On-premise browser isolation works similarly to remote browser isolation. But instead of
taking place on a remote cloud server, browsing takes place on a server inside the organisation’s
private network. This can cut down on latency compared to some types of remote browser
isolation.
The downside of on-premise isolation is that the organisation has to provision its own
servers dedicated to browser isolation, which can be costly. The isolation also usually has to
occur within the organisation’s firewall, instead of outside it (as it does during the remote browser
isolation process). Even though user devices remain secure from malware and other malicious
code, the internal network itself remains at risk. Additionally, on-premise browser isolation is
difficult to expand to multiple facilities or networks, and especially so for remote workforces.
3. Client-side browser isolation: Client-side browser isolation still loads the webpages
on a user device, but it uses virtualisation or sandboxing to keep website code and content
separate from the rest of the device.
Like the other kinds of browser isolation, client-side browser isolation virtualises browser
sessions; unlike remote and on-premise browser isolation, client-side browser isolation does
this on the user device itself. It attempts to keep browsing separate from the rest of the device
using either virtualization or sandboxing.
Virtualisation: Virtualisation is the process of dividing a computer into separate virtual
machines without physically altering the computer. This is done at a layer of software below
the operating system called the “hypervisor.” Theoretically, what happens on one virtual
machine should not affect adjacent virtual machines, even when they are on the same device.
By loading webpages on a separate virtual machine within the user’s computer, the rest of the
computer remains secure.
Secure Architecture Principles Isolation and Least Privilege 119
[Figure: client-side browser isolation. On the user’s device, the endpoint (user) runs the website’s HTML, CSS, and JavaScript inside a sandbox or virtual machine.]
•• Users Are an Enormous Risk: Most users are not careful and can easily be tricked
into clicking a malicious link through social engineering tactics. Organisations allocate
significant budget resources to perimeter defences, but one careless employee can
circumvent it all by clicking one bad link and opening the front door for an attacker.
3.3.5 Components of a Browser Isolation System
A browser isolation system typically has eight components, forming a contained topography.
1. Client: The end-user uses an interface called ‘the client’ to initiate a web request.
The client can refer to an entity located on a desktop, a laptop, a smartphone, or any other
computing device with an active internet connection and a functional browser. In a remote
browser isolation system, the client is distinct from the hosting environment. But in a local
system, the client and the isolation solution can co-exist on the same premises.
2. Web security service: The web security service is an application that determines
which traffic will be contained and how. Ready-to-use browser isolation solutions will come
with a built-in web security service that can be configured as per your business needs. For
example, you can choose to filter out traffic from certain websites completely, or you can display
warnings or alerts if there is suspicious behaviour. You can also block downloads conditionally.
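A web security service’s policy might be configured along these lines. This is a minimal sketch: the hostnames and rule names are invented for illustration, and real products expose their own configuration schemas:

```python
# Hypothetical policy rules for a web security service (names invented
# for illustration; real products define their own configuration format).
POLICY = {
    'block':       {'badsite.example'},         # filter out completely
    'warn':        {'unknown-cdn.example'},     # warn, then isolate
    'no_download': {'filesharing.example'},     # conditionally block downloads
}

def decide(host, is_download=False):
    if host in POLICY['block']:
        return 'blocked'
    if is_download and host in POLICY['no_download']:
        return 'download-blocked'
    if host in POLICY['warn']:
        return 'warn-and-isolate'
    return 'isolate'                            # default: render in isolation

print(decide('badsite.example'))                        # blocked
print(decide('filesharing.example', is_download=True))  # download-blocked
print(decide('news.example'))                           # isolate
```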
3. Threat isolation engine: This is an optional component that can selectively
isolate online activity. In case you want to isolate some activity in a virtual environment while
allowing others to pass as is, the threat isolation engine will come into action. It will run the
requests it receives from the client in an isolated environment, as per your web security service
configurations.
4. Secure and disposable container: A container is a standalone software unit that
can run independently of its surrounding infrastructure. Typically, containerised software is
used in cloud environments so that applications can be packaged for better portability.
The key difference here is that ordinary containerised software resides in non-disposable
containers, whereas for browser isolation the web security service initiates a secure and
disposable container where the browser session
can exist as a boxed-in package for browser isolation. Once the session ends, the container is
duly dismantled.
5. Web socket: This is a secure channel through which data flows between the client
and the web security service. The web socket is connected to the client in such a way that users
can still interact with the browsers in real-time (scroll, type, etc.) without any loss in quality.
6. Hosting environment: A hosting environment should ideally be a third-party cloud,
where the entire browsing isolation solution (web security service, threat isolation engine, and
container) can sit without ever touching the user’s local infrastructure. You could also host the
solution on a private cloud, situated in an on premise server or remotely. Finally, the hosting
environment could be a virtual machine on the same system that houses the client – but this
approach is the least secure of the three.
7. Public web: If the client is the destination for all traffic flowing through a browser
isolation system, the public web is its origin. Once the client raises a request via its browser, the
public web reads the request and initiates information transfer, just like in a regular browsing
experience. But instead of relaying the information directly to the client, it directs it to the hosting
environment, where it is passed through the browsing isolation solution.
8. Content: The content moving across a browser isolation system can be malicious
as well as harmless. In some cases, the user can view all content, because the solution only
isolates browsing activity and does not filter it. But some solutions provide the extra value of
content filtering, blocking anything proved to be malicious. In that case, users interact only
with safe or merely suspicious content inside the isolated environment.
These eight components make up the entire ecosystem in which browsing isolation
operates. The ecosystem’s specifics can vary significantly depending on the solution you
choose (remote or local) and the level of isolation and filtering it provides.
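How these components interact can be sketched as a request pipeline. All class and function names here are invented for illustration; the numbered comments refer to the components described above:

```python
import uuid

class DisposableContainer:
    """Boxed-in browser session; dismantled when the session ends."""
    def __init__(self):
        self.id = uuid.uuid4().hex[:8]
        self.alive = True
    def render(self, content):
        return f'pixels-of({content})'      # only a visual stream leaves
    def dismantle(self):
        self.alive = False

def handle_request(url, fetch, is_suspicious):
    # 2. The web security service decides which traffic is contained.
    if is_suspicious(url):
        return None                         # filtered out entirely
    # 4. A fresh disposable container is created per session.
    container = DisposableContainer()
    try:
        content = fetch(url)                # 7. public web -> hosting env
        return container.render(content)    # 5. visual stream to the client
    finally:
        container.dismantle()               # 4. duly dismantled afterwards

stream = handle_request('news.example',
                        fetch=lambda u: f'<html>{u}</html>',
                        is_suspicious=lambda u: u.endswith('.bad'))
print(stream)   # pixels-of(<html>news.example</html>)
```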
3.3.6 Benefits of Web Isolation technology
Isolated browsing ensures no malicious web content ever reaches the corporate network by
isolating all browsing activity in a remote virtual environment. Web Isolation technology protects
against all web-based threats.
The key benefits of this approach are:
(i) Protection from Malicious Websites: Because no local code execution happens
on the user’s computer, users are protected from all malicious websites.
(ii) Protection From Malicious Links: Since URLs are automatically opened in the
isolated web browser, whether they’re in webpages, emails, documents, Skype, etc.,
users are protected regardless of the source.
(iii) Protection From Malicious Emails: With Web Isolation, all web-based emails are
rendered harmlessly in the remote server, and links in email clients are automatically
opened in the remote server as well.
(iv) Protection From Malicious Downloads: Administrators can finely control which
files users are permitted to download, and all permitted downloads are first scanned
to eliminate threats.
(v) Protection from Malicious Ads: Ads and trackers are automatically blocked. If
any ads are displayed, they’re rendered remotely – protecting the user from malicious
content.
(vi) Anonymous Browsing: Advanced anonymous browsing capabilities mask users’
true identities.
(vii) Data Loss Prevention: Built-in DLP capabilities protect corporate data from being
accidentally or intentionally exfiltrated. These capabilities allow an administrator to
restrict the files a user can upload to the internet.
(viii) User Behaviour Analytics: Organisations can obtain analytics into users’ web
activities, which can be used for compliance monitoring, and to detect insider threats
and unproductive employees.
(ix) Reduced Number of Security Alerts: Isolating all web content on a remote server
results in fewer security alerts and false positives that need to be investigated.
(x) Eliminates the Cost of Web-Based Malware: The effects of a malware infection
can be severe and require a substantial amount of money and time to fix. Isolated
browsing protects your network completely from web-based malware.
•• The organisation’s strategy for dealing with the given threats: Finally, create a
strategy for dealing with the identified risks. This could mean purchasing new technology
or implementing additional security measures to mitigate identified risks. It could also
mean avoiding certain technologies, or deliberately choosing not to adopt them.
Advantages
•• The cyber security landscape can be used to help prioritise security initiatives. It can also
help ensure that security efforts are being properly implemented.
•• The cyber security landscape can help an organisation identify overlaps
in its current security solutions. It can also reveal gaps where
additional technologies or resources needed to protect against risks have not yet been
implemented.
Disadvantages
•• There are few inherent disadvantages to mapping the cyber security landscape. However, if
the organisation does not have a way to measure its progress, it may not know
when the landscape is complete.
•• The cyber security landscape is a complex solution, and it may be difficult to
implement. Also, it may be difficult to maintain, especially if it is used to manage
multiple companies or organisations.
•• A cyber security landscape is a huge network. It is a digital representation of an
organisation’s use of technology to protect its assets.
3.4.2 Web Security in a Nutshell
A computer is secure if you can depend on it and its software to behave as you expect. Using
this definition, web security is a set of procedures, practices, and technologies for protecting
web servers, web users, and their surrounding organisations. Security protects you against
unexpected behaviour.
Why should web security require special attention apart from the general subject of
computer and Internet security? Because the Web is changing many of the assumptions that
people have historically made about computer security and publishing:
•• The Internet is a two-way network. As the Internet makes it possible for web servers to
publish information to millions of users, it also makes it possible for computer hackers,
crackers, criminals, vandals, and other “bad guys” to break into the very computers
on which the web servers are running. Those risks don’t exist in most other publishing
environments, such as newspapers, magazines, or even “electronic” publishing systems
involving teletext, voice-response, and fax-back.
•• The World Wide Web is increasingly being used by corporations and governments to
distribute important information and conduct business transactions. Reputations can
be damaged and money can be lost if web servers are subverted.
•• Although the Web is easy to use, web servers and browsers are exceedingly complicated
pieces of software, with many potential security flaws. Many times in the past, new
features have been added without proper attention being paid to their security impact.
Thus, properly installed software may still pose security threats.
•• Once subverted, web browsers and servers can be used by attackers as a launching
point for conducting further attacks against users and organisations.
•• Unsophisticated users will be (and are) common users of WWW-based services. The
current generation of software calls upon users to make security-relevant decisions on
a daily basis, yet users are not given enough information to make informed choices.
•• It is considerably more expensive and more time-consuming to recover from a security
incident than to take preventative measures ahead of time.
The World Wide Web is the fastest growing part of the Internet. Increasingly, it is also
the part of the Internet that is most vulnerable to attack.
Web servers make an attractive target for attackers for many reasons:
•• Publicity: Web servers are an organisation’s public face to the Internet and the
electronic world. A successful attack on a web server is a public event that may be
seen by hundreds of thousands of people within a matter of hours. Attacks can be
mounted for ideological or financial reasons; alternatively, they can simply be random
acts of vandalism.
•• Commerce: Many web servers are involved with commerce and money. Indeed,
the cryptographic protocols built into Netscape Navigator and other browsers were
originally placed there to allow users to send credit card numbers over the Internet
without fear of compromise. Web servers have thus become a repository for sensitive
financial information, making them an attractive target for attackers. Of course, the
commercial services on these servers also make them targets of interest.
•• Proprietary Information: Organisations are using web technology as an easy way
to distribute information both internally, to their own members, and externally, to
partners around the world. This proprietary information is a target for competitors
and enemies.
•• Network Access: Because they are used by people both inside and outside an
organisation, web servers effectively bridge an organisation’s internal and external
networks. Their position of privileged network connectivity makes web servers an
ideal target for attack, as a compromised web server may be used to further attack
computers within an organisation.
Unfortunately, the power of web technology makes web servers and browsers
especially vulnerable to attack as well:
•• Server extensibility: By their very nature, web servers are designed to be extensible.
This extensibility makes it possible to connect web servers with databases, legacy
systems, and other programmes running on an organisation’s network. If not properly
implemented, modules that are added to a web server can compromise the security
of the entire system.
•• Browser extensibility: In the same manner that servers can be extended, so can
web clients. Today, technologies such as ActiveX, Java, JavaScript, VBScript, and
helper applications can enrich the web experience with many new features that are
not possible with the HTML language alone. Unfortunately, these technologies can
also be subverted and employed against the browser’s user, often without the user’s
knowledge.
Threat modeling consists of defining an enterprise’s assets, identifying what function each
application serves in the grand scheme, and assembling a security profile for each application.
The process continues with identifying and prioritising potential threats, then documenting both
the harmful events and what actions to take to resolve them.
Or, to put this in lay terms, threat modeling is the act of taking a step back, assessing
your organisation’s digital and network assets, identifying weak spots, determining what threats
exist, and coming up with plans to protect or recover.
It may sound like a no-brainer, but you’d be surprised how little attention security gets
in some sectors. We’re talking about a world where some folks use the term PASSWORD as
their password or leave their mobile devices unattended. In that light, it’s hardly surprising
that many organisations and businesses haven’t even considered the idea of threat modeling.
Main steps in the threat modeling process
When performing threat modeling, several processes and aspects should be included. Failing
to include one of these components can lead to incomplete models and can prevent threats
from being properly addressed.
1. Apply threat intelligence: This area includes information about types of threats,
affected systems, detection mechanisms, tools and processes used to exploit vulnerabilities,
and motivations of attackers.
Threat intelligence information is often collected by security researchers and made
accessible through public databases, proprietary solutions, or security communications outlets.
It is used to enrich the understanding of possible threats and to inform responses.
2. Identify assets: Teams need a real-time inventory of components, credentials,
and data in use, where those assets are located, and what security measures are in use. This
inventory helps security teams track assets with known vulnerabilities.
A real-time inventory enables security teams to gain visibility into asset changes. For
example, getting alerts when assets are added with or without authorised permission, which
can potentially signal a threat.
3. Identify mitigation capabilities: Mitigation capabilities generally refer to technology
to protect, detect, and respond to a certain type of threat, but can also refer to an organisation’s
security expertise and abilities, and their processes. Assessing your existing capabilities will
help you determine whether you need to add additional resources to mitigate a threat. For
example, if you have enterprise-grade antivirus, you have an initial level of protection against
traditional malware threats. You can then determine if you should invest further, for example,
to correlate your existing AV signals with other detection capabilities.
4. Assess risks: Risk assessments correlate threat intelligence with asset inventories
and current vulnerability profiles. These tools are necessary for teams to understand the current
status of their systems and to develop a plan for addressing vulnerabilities.
Risk assessments can also involve active testing of systems and solutions. For example,
penetration testing to verify security measures and patching levels are effective.
5. Perform threat mapping: Threat mapping is a process that follows the potential path
of threats through your systems. It is used to model how attackers might move from resource to
resource and helps teams anticipate where defences can be more effectively layered or applied.
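Steps 2 and 4 above can be tied together in a minimal risk-scoring sketch. The likelihood-times-impact formula used here is one common convention, not the only one, and the assets and figures are invented:

```python
# Invented asset inventory with per-threat likelihood (0-1) and impact (1-10).
assets = {
    'web-server':  {'threats': [('sql-injection', 0.4, 9), ('dos', 0.6, 5)]},
    'workstation': {'threats': [('phishing', 0.7, 6)]},
}

def risk_report(assets):
    """Rank asset/threat pairs by risk = likelihood * impact."""
    rows = [(name, threat, round(p * impact, 2))
            for name, a in assets.items()
            for threat, p, impact in a['threats']]
    return sorted(rows, key=lambda r: r[2], reverse=True)

for asset, threat, score in risk_report(assets):
    print(f'{score:>5}  {asset:<12} {threat}')
```

Ranking the pairs this way gives teams a starting order for addressing vulnerabilities, which can then be refined by penetration testing as described above.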
3.5.4 Need of Security Threat Modeling
Cybercrime has exacted a heavy toll on the online community in recent years, as detailed in
this piece by Security Boulevard, which draws its conclusions from several industry sources.
Among other things, the report says that data breaches exposed 4.1 billion records in 2019 and
that social media-enabled cybercrimes steal $3.25 billion in annual global revenue.
According to KnowBe4’s 2019 Security Threats and Trends report, 75 percent of
businesses consider insider threats to be a significant concern, 85 percent of organisations
surveyed reported being targeted by phishing and social engineering attacks, and a large
share of respondents cite email phishing scams as the largest security risk. As a result of these troubling
statistics, spending on cybersecurity products and services is expected to surpass $1 trillion
by 2021. Cybercrime is happening all the time, and no business, organisation, or consumer is
safe. Security breaches have increased by 11 percent since 2018, and a whopping 67 percent since
2014. Smart organisations and individuals will take advantage of any reliable resources to fight
this growing epidemic, and sound threat modeling designing for security purposes is essential
to accomplish this.
3.5.5 Threat Modeling Methodologies
There are as many ways to fight cybercrime as there are types of cyber-attacks. For instance,
here are ten popular threat modeling methodologies used today.
1. STRIDE: A methodology developed by Microsoft for threat modeling. It is used
along with a model of the target system. This makes it most effective for evaluating individual
systems.
It offers a mnemonic for identifying security threats in six categories:
•• Spoofing: An intruder posing as another user, component, or other system feature
that contains an identity in the modeled system (i.e. a user or programme pretends
to be another).
•• Tampering: The altering of data within a system to achieve a malicious goal (i.e.
attackers modify components or code).
•• Repudiation: The ability of an intruder to deny that they performed some malicious
activity, due to the absence of enough proof (i.e. threat events are not logged or
monitored).
•• Information Disclosure: Exposing protected data to a user that isn’t authorised to
see it (i.e. data is leaked or exposed).
•• Denial of Service: An adversary uses illegitimate means to exhaust services needed
to provide service to users (i.e. services or components are overloaded with traffic
to prevent legitimate use).
•• Elevation of Privilege: Allowing an intruder to execute commands and functions
that they aren’t allowed to (i.e. attackers grant themselves additional privileges to
gain greater control over a system).
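As a sketch, the six STRIDE categories can be captured in code and used to tag findings during a review. The findings below are invented examples:

```python
from enum import Enum

class Stride(Enum):
    SPOOFING = 'pretending to be another user or component'
    TAMPERING = 'altering data or code'
    REPUDIATION = 'denying an action for lack of evidence'
    INFORMATION_DISCLOSURE = 'exposing protected data'
    DENIAL_OF_SERVICE = 'exhausting a needed service'
    ELEVATION_OF_PRIVILEGE = 'gaining rights one should not have'

# Hypothetical findings tagged with STRIDE categories during a review.
findings = [
    ('login form accepts forged session cookie', Stride.SPOOFING),
    ('audit log can be disabled by any user',    Stride.REPUDIATION),
    ('error page prints database credentials',   Stride.INFORMATION_DISCLOSURE),
]

for desc, category in findings:
    print(f'{category.name:<24} {desc}')
```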
2. DREAD: Proposed at Microsoft for threat modeling, but dropped by Microsoft in 2008 due to
inconsistent ratings. OpenStack and many other organisations still use DREAD. It is
essentially a way to rank and assess security risks across five categories:
•• Damage Potential: Ranks the extent of damage resulting from an exploited weakness.
•• Reproducibility: Ranks the ease of reproducing an attack.
•• Exploitability: Assigns a numerical rating to the effort needed to launch the attack.
•• Affected Users: A value representing how many users get impacted if an exploit
becomes widely available.
•• Discoverability: Measures how easy it is to discover the threat.
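A common (though not the only) way to combine the five DREAD ratings is a simple average. The sketch below assumes each category is rated on a 0 to 10 scale; the rating values are invented:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average of the five DREAD ratings (each assumed to be 0-10)."""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(0 <= r <= 10 for r in ratings):
        raise ValueError('each rating must be between 0 and 10')
    return sum(ratings) / len(ratings)

# A hypothetical finding: severe, easy to reproduce, moderate elsewhere.
print(dread_score(damage=9, reproducibility=8, exploitability=6,
                  affected_users=7, discoverability=5))   # 7.0
```

Inconsistent scores across raters, as noted above, were precisely why Microsoft moved away from this scheme.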
3. Process for Attack Simulation and Threat Analysis (PASTA): PASTA is an
attacker-centric methodology with seven steps. It is designed to correlate business objectives
with technical requirements. PASTA’s steps guide teams to dynamically identify, count, and
prioritise threats. It offers a dynamic threat identification, enumeration, and scoring process.
Once experts create a detailed analysis of identified threats, developers can develop an asset-
centric mitigation strategy by analysing the application through an attacker-centric view.
The steps of a PASTA threat model are:
1. Define business objectives
2. Define the technical scope of assets and components
3. Decompose the application and identify application controls
4. Analyse threats based on threat intelligence
5. Detect vulnerabilities
6. Enumerate and model attacks
7. Analyse risk and develop countermeasures
5. Visual, Agile, and Simple Threat (VAST): Visual, Agile, and Simple Threat
(VAST) is an automated threat modeling method built on the Threat Modeler platform. Large
enterprises implement VAST across their entire infrastructure to generate reliable, actionable
results and maintain scalability. It provides actionable outputs for the specific needs of various
stakeholders such as application architects and developers, cybersecurity personnel, etc. VAST
offers a unique application and infrastructure visualisation plan so that the creation and use of
threat models don’t require any specialised expertise in security subject matters.
VAST can integrate into the DevOps lifecycle and help teams identify various infrastructural
and operational concerns. Implementing VAST requires the creation of two types of threat
models:
•• Application threat model — uses a process-flow diagram to represent the
architectural aspect of the threat
•• Operational threat model — uses a data-flow diagram to represent the threat from
the attacker’s perspective
6. Trike: Trike focuses on using threat models as a risk management tool. Threat
models, based on requirement models, establish the stakeholder-defined “acceptable” level
of risk assigned to each asset class. Requirements model analysis yields a threat model where
threats are identified and given risk values. The completed threat model is then used to build
a risk model, factoring in actions, assets, roles, and calculated risk exposure.
Trike is a security audit framework for managing risk and defence through threat modeling
techniques. Trike defines a system, and an analyst enumerates the system’s assets, actors,
rules, and actions to build a requirement model. Trike generates a step matrix with columns
representing the assets and rows representing the actors. Every matrix cell has four parts to
match possible actions (create, read, update, and delete) and a rule tree — the analyst specifies
whether an action is allowed, disallowed, or allowed with rules.
Trike builds a data-flow diagram mapping each element to the appropriate assets and
actors with the requirements defined. The analyst uses the diagram to identify denial of service
(DoS) and privilege escalation threats. Trike assesses attack risks using a five-point probability
scale for each CRUD action and actor. It also evaluates actors based on their permission level
for each action (always, sometimes, or never).
7. Attack Tree: The tree is a conceptual diagram showing how an asset, or target, could
be attacked, consisting of a root node, with leaves and children nodes added in. Child nodes
are conditions that must be met to make the direct parent node true. Each node is satisfied
only by its direct child nodes. It also has “AND” and “OR” options, which represent alternative
steps taken to achieve these goals.
Attack trees are charts that display the paths that attacks can take in a system. These
charts display attack goals as a root with possible paths as branches. When creating trees for
threat modeling, multiple trees are created for a single system, one for each attacker goal. This
is one of the oldest and most widely used threat modeling techniques. While once used alone,
it is now frequently combined with other methodologies, including PASTA, CVSS, and STRIDE.
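The AND/OR evaluation described above can be sketched as a small recursive function. The tree shown is an invented example; real attack trees would carry costs or probabilities on the leaves rather than a simple feasible flag:

```python
# Minimal attack tree: leaves are primitive attack steps; interior nodes
# combine children with AND (all required) or OR (any one suffices).
def node(kind, *children):
    return {'kind': kind, 'children': list(children)}

def leaf(name, feasible):
    return {'kind': 'leaf', 'name': name, 'feasible': feasible}

def attack_possible(n):
    if n['kind'] == 'leaf':
        return n['feasible']
    results = [attack_possible(c) for c in n['children']]
    return all(results) if n['kind'] == 'AND' else any(results)

# Goal (root): read a secret file. Either steal the password, or both
# gain local access AND exploit a privilege-escalation bug.
tree = node('OR',
            leaf('steal password', False),
            node('AND',
                 leaf('gain local access', True),
                 leaf('exploit privilege escalation', True)))
print(attack_possible(tree))   # True
```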
8. T-MAP: T-MAP is an approach commonly used in Commercial Off the Shelf (COTS)
systems to calculate attack path weights. The model incorporates UML class diagrams, including
access class, vulnerability, target assets, and affected value.
9. OCTAVE: The Operationally Critical Threat, Asset, and Vulnerability Evaluation
(OCTAVE) process is a risk-based strategic assessment and planning method. OCTAVE focuses
on assessing organisational risks only and does not address technological risks. OCTAVE has
three phases:
•• Building asset-based threat profiles. (Organisational evaluation)
•• Identifying infrastructure vulnerabilities. (Information infrastructure evaluation)
•• Developing and planning a security strategy. (Evaluation of risks to the company’s
critical assets and decision making.)
10. Quantitative Threat Modeling Method: This hybrid method combines attack
trees, STRIDE, and CVSS methods. It addresses several pressing issues with threat modeling
for cyber-physical systems that contain complex interdependencies in their components. The
first step is building component attack trees for the STRIDE categories. These trees illustrate
the dependencies among the attack categories and low-level component attributes. Then the CVSS
method is applied, calculating the scores for all of the tree’s components.
11. Hybrid Threat Modeling Method (HTMM): HTMM is a methodology developed
by the Software Engineering Institute (SEI) that combines two other methodologies:
•• Security Quality Requirements Engineering (SQUARE) — a methodology
designed to elicit, categorise and prioritise security requirements.
•• Persona non Grata (PnG) — a methodology that focuses on uncovering ways a
system can be abused to meet an attacker’s goals.
HTMM is designed to account for all possible threats, produce no false positives,
provide consistent results, and remain cost-effective. It works by
applying Security Cards, eliminating unlikely PnGs, summarising results, and formally assessing
risk using SQUARE.
3.5.6 Advantages of Threat Modeling
When performed correctly, threat modeling can provide a clear line of sight across a software
project, helping to justify security efforts. The threat modeling process helps an organisation
document knowable security threats to an application and make rational decisions about how
to address them. Otherwise, decision-makers could act rashly based on scant or no supporting
evidence.
Overall, a well-documented threat model provides assurances that are useful in explaining
and defending the security posture of an application or computer system. And when the
development organisation is serious about security, threat modeling is the most effective way
to do the following:
•• Detect problems early in the software development life cycle (SDLC)—even before
coding begins.
•• Spot design flaws that traditional testing methods and code reviews may overlook.
•• Evaluate new forms of attack that you might not otherwise consider.
•• Maximise testing budgets by helping target testing and code review.
•• Identify security requirements.
•• Remediate problems before software release and prevent costly recoding post-
deployment.
•• Think about threats beyond standard attacks to the security issues unique to your
application.
•• Keep frameworks ahead of the internal and external attackers relevant to your
applications.
•• Highlight assets, threat agents, and controls to deduce components that attackers
will target.
•• Model the location of threat agents, motivations, skills, and capabilities to locate
potential attackers in relation to the system architecture.
3.5.7 Best Practices of Threat Modeling
The killer application of threat modeling is promoting security understanding across the whole
team. It’s the first step toward making security everyone’s responsibility. Conceptually, threat
modeling is a simple process. So consider these five basic best practices when creating or
updating a threat model:
1. Define the scope and depth of analysis: Determine the scope with stakeholders,
then break down the depth of analysis for individual development teams so they can threat
model the software.
2. Gain a visual understanding of what you’re threat modeling: Create a diagram
of the major system components (e.g., application server, data warehouse, thick client, database)
and the interactions among those components.
3. Model the attack possibilities: Identify software assets, security controls, and
threat agents and diagram their locations to create a security model of the system. Once
you’ve modeled the system, you can identify what could go wrong (i.e., the threats) using
methods like STRIDE.
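The per-component questioning that STRIDE drives can be sketched in a few lines. The component names and the question attached to each category below are illustrative examples, not a fixed checklist:

```python
# Minimal sketch (illustrative only): enumerating candidate STRIDE threats
# for each component of a simple system model. The component names and the
# category-to-question mapping are hypothetical examples.
STRIDE = {
    "Spoofing": "Can an attacker impersonate this component or its users?",
    "Tampering": "Can data handled by this component be modified in transit or at rest?",
    "Repudiation": "Can actions against this component be performed without a trace?",
    "Information disclosure": "Can this component leak data it should protect?",
    "Denial of service": "Can this component be made unavailable?",
    "Elevation of privilege": "Can this component grant rights it should not?",
}

def enumerate_threats(components):
    """Produce a (component, category, question) triple for every pairing."""
    return [
        (component, category, question)
        for component in components
        for category, question in STRIDE.items()
    ]

threats = enumerate_threats(["web server", "database"])
print(len(threats))   # 2 components x 6 categories
```

Each triple then becomes a row in the threat list that the team triages against the security model built in step 3.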
4. Identify threats: To produce a list of potential attacks, ask questions such as the
following:
•• Are there paths where a threat agent can reach an asset without going through a
control?
Secure Architecture Principles Isolation and Leas 135
4. Microsoft Threat Modeling Tool: Microsoft Threat Modeling Tool is one of the
oldest and most tested threat modeling tools in the market. It is a free tool that
follows the spoofing, tampering, repudiation, information disclosure, denial of service, and
elevation of privilege (STRIDE) methodology.
•• Platform: MTMT is a desktop-based tool that runs on Windows OS.
•• Core features: This tool allows you to create a threat model based on data flow
diagrams (DFDs) you can create within the app. It focuses on Azure and Windows
services. You can create a threat list and look at mitigation tactics associated with each
threat. You can also generate reports of the models and export them.
•• Unique features: It is the most mature tool in the lot. This means that it has
comprehensive documentation and tutorials available.
•• Usability: If your business is looking to gain a basic idea of threat modeling or is
doing research on it, MTMT might be the way to go. The DFD creation is not very
advanced with respect to available components. The mitigation information is also
not intuitively displayed.
•• Customer support: Documentation and help forums are widely available, making
this tool perfect for research purposes.
•• Pricing model: The Microsoft Threat Modeling Tool is free to use, so there is no
pricing involved.
•• Editorial comments: MTMT is good for an organisation looking to create and
understand its first threat model for a basic application. Keep in mind that it is a
Windows-based application.
5. OWASP Threat Dragon: The OWASP Threat Dragon is an open-source solution that
was released in 2016. It is very similar to MTMT, with less focus on Microsoft-centered services.
•• Platform: Threat Dragon is a web-based tool, though the older versions are desktop-
based.
•• Core features: Threat Dragon lets you create flow diagrams. These are fed into the
rules engine, which creates the potential threat list. It has a comprehensive reporting
engine. Threats can be added at the component level. Threat Dragon also offers
mitigation suggestions.
•• Unique features: The main advantage of the OWASP Threat Dragon is its powerful
rule engine.
•• Usability: Threat Dragon users report an average user experience, with the usability
rating dragged down by the lack of a separate threat dashboard. Apart from this
omission, the platform is a great option.
•• Customer support: OWASP Threat Dragon has comprehensive documentation
available, along with a good user base for peer-level troubleshooting.
•• Pricing model: OWASP Threat Dragon is open-source, so it comes at zero cost to
the company.
•• Editorial comments: Threat Dragon is best for organisations with existing security
skills that are looking for their first threat modeling experience. Threat Dragon has no
infrastructure constraints, which makes it superior to Microsoft’s offering.
•• Pricing model: Pricing is based on edition, model size, and the number of simulations.
It starts from $1380. The Community edition is free.
•• Editorial comments: SecuriCAD by Foreseeti is the ideal threat modeling tool for
organisations with moderately complex IT infrastructure. Current customers include
financial institutions, airports, and defense forces.
8. Threagile: The newest of all the tools, Threagile is an open-source, code-based
threat modeling toolkit.
•• Platform: Threagile is an Integrated Developer Environment or IDE-based tool, which
focuses on integrating threat modeling at the application coding level.
•• Core features: Threagile’s aim is to ‘Threat-Model-As-Code’. It is an agile-based,
developer-friendly tool that works right from the application codebase. Input is in
the form of YAML files — everything from infrastructure to risk rules. The generated
model can be downloaded as a detailed data flow diagram. Reports are generated
in PDF, Excel, and JSON formats — JSON being particularly useful for DevSecOps.
The model is maintained and regenerated within the codebase.
•• Unique features: It is the most comprehensive code-driven threat methodology tool.
•• Usability: Threagile is completely YAML-based, which most IDEs support. This
means manipulating the threat model is easy.
•• Customer support: Threagile offers online documentation and has a growing
community of users.
•• Pricing model: This tool is open-sourced, so there is no pricing involved.
•• Editorial comments: Threagile is best for start-ups with small code-savvy teams
and in-house security experts. It also works well with agile environments.
9. ThreatModeler: ThreatModeler is a heavyweight in this landscape, offering security
and automation throughout the enterprise’s development life cycle. It has three editions —
Community, Appsec, and Cloud.
•• Platform: ThreatModeler is a web-based platform.
•• Core features: ThreatModeler runs using the Visual, Agile and Simple Threat (VAST)
threat modeling methodology. It offers an intelligent threat engine, a report engine,
template builder, threat model versioning, and built-in workflow approval. It is
integrated with Visio, Lucid Charts, and Draw.io for diagramming. It also has native
integrations with JIRA and Jenkins. ThreatModeler also offers API access.
•• Unique features: ThreatModeler is the first commercially available and automated
threat modeling tool. Its VAST methodology offers a holistic view of the attack surface.
•• Usability: Clearly separated processes with colourful dashboards make this tool
very easy to navigate, according to users.
•• Customer support: ThreatModeler offers premium support options for enterprises,
as well as a dedicated customer team.
•• Pricing model: This tool is based on annual subscription-based licenses, with no
limit on the number of users.
(Figure: the byte stream 3C 62 6F 64 79 3E … 3C 2F 62 6F 64 79 3E decodes to the HTML
"<body>Hello, <span>world!</span></body>", which the browser parses into a DOM tree with
html, head and body nodes.)
Web performance
Server rendering generally produces a fast First Paint (FP) and First Contentful Paint
(FCP). Running page logic and rendering on the server makes it possible to avoid sending lots of
JavaScript to the client, which helps achieve a fast Time to Interactive (TTI). This makes sense,
since with server rendering you’re really just sending text and links to the user’s browser. This
approach can work well for a large spectrum of device and network conditions, and opens up
interesting browser optimizations like streaming document parsing.
(Figure: network timeline for server rendering - the response to the initial GET delivers
complete HTML, so FCP and TTI occur early, with little client-side JS left to execute.)
With server rendering, users are unlikely to be left waiting for CPU-bound JavaScript
to process before they can use your site. Even when third-party JS can’t be avoided, using
server rendering to reduce your own first-party JS costs can give you more “budget” for the
rest. However, there is one primary drawback to this approach: generating pages on the server
takes time, which can often result in a slower Time to First Byte (TTFB).
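The trade-off described above can be illustrated with a minimal sketch of server rendering: the server builds the complete HTML for a request before responding. The product data, markup and function name here are made-up examples, not a real framework:

```python
# Minimal server-rendering sketch: the server turns data into complete HTML
# before anything is sent, so the first response already contains the content.
# The product data and markup are illustrative examples.
import html

def render_product_page(product):
    name = html.escape(product["name"])    # escape to avoid HTML injection
    price = html.escape(product["price"])
    return (
        "<!doctype html><html><body>"
        f"<h1>{name}</h1><p>Price: {price}</p>"
        "</body></html>"
    )

page = render_product_page({"name": "Blue jeans", "price": "$49"})
print(page)
```

The work done inside `render_product_page` is exactly what adds to TTFB: the page cannot start streaming until the server has finished composing it.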
Whether server rendering is enough for your application largely depends on what type
of experience you are building. There is a longstanding debate over the correct applications
of server rendering versus client-side rendering, but it’s important to remember that you can
opt to use server rendering for some pages and not others. Some sites have adopted hybrid
rendering techniques with success. Netflix server-renders its relatively static landing pages, while
prefetching the JS for interaction-heavy pages, giving these heavier client-rendered pages a
better chance of loading quickly.
Many modern frameworks, libraries and architectures make it possible to render the
same application on both the client and the server. These techniques can be used for Server
Rendering, however it’s important to note that architectures where rendering happens both
on the server and on the client are their own class of solution with very different performance
characteristics and tradeoffs. React users can use renderToString() or solutions built atop it
like Next.js for server rendering. Vue users can look at Vue’s server rendering guide or Nuxt.
Angular has Universal. Most popular solutions employ some form of hydration though, so be
aware of the approach in use before selecting a tool.
•• Static Rendering
Static rendering happens at build-time and offers a fast First Paint, First Contentful
Paint and Time To Interactive - assuming the amount of client-side JS is limited. Unlike Server
Rendering, it also manages to achieve a consistently fast Time To First Byte, since the HTML
for a page doesn’t have to be generated on the fly. Generally, static rendering means producing
a separate HTML file for each URL ahead of time. With HTML responses being generated in
advance, static renders can be deployed to multiple CDNs to take advantage of edge-caching.
(Figure: network timeline for static rendering - GET / returns prebuilt HTML, so FCP and
TTI occur early; any JS is optional and can be streamed.)
Solutions for static rendering come in all shapes and sizes. Tools like Gatsby are designed
to make developers feel like their application is being rendered dynamically rather than generated
as a build step. Others like Jekyll and Metalsmith embrace their static nature, providing a more
template-driven approach.
One of the downsides to static rendering is that individual HTML files must be generated
for every possible URL. This can be challenging or even infeasible when you can’t predict what
those URLs will be ahead of time, or for sites with a large number of unique pages.
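The build step described above can be sketched as follows. The page table and file layout are hypothetical; a real generator would derive the set of URLs from content:

```python
# Static-rendering sketch: every known URL is rendered to its own HTML file
# at build time, so serving a request is just reading a prebuilt file.
# The page names and bodies are illustrative.
import pathlib
import tempfile

PAGES = {
    "index.html": "<h1>Home</h1>",
    "about.html": "<h1>About</h1>",
}

def build_site(out_dir):
    out = pathlib.Path(out_dir)
    for filename, body in PAGES.items():
        # One complete HTML document per URL, written ahead of time.
        (out / filename).write_text(
            f"<!doctype html><html><body>{body}</body></html>"
        )
    return sorted(p.name for p in out.iterdir())

with tempfile.TemporaryDirectory() as tmp:
    built = build_site(tmp)
    print(built)
```

The limitation in the paragraph above falls straight out of this sketch: `PAGES` must enumerate every URL in advance, which is infeasible for unpredictable or very large URL spaces.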
React users may be familiar with Gatsby, Next.js static export or Navi - all of these
make it convenient to author pages using components. However, it’s important to understand the
difference between static rendering and prerendering: statically rendered pages are interactive
without the need to execute much client-side JS, whereas prerendering improves the First
Paint or First Contentful Paint of a Single Page Application that must be booted on the client
in order for pages to be truly interactive.
If you’re unsure whether a given solution is static rendering or prerendering, try this test:
disable JavaScript and load the created web pages. For statically rendered pages, most of the
functionality will still exist without JavaScript enabled. For prerendered pages, there may still
be some basic functionality like links, but most of the page will be inert.
Another useful test is to slow your network down using Chrome DevTools, and observe
how much JavaScript has been downloaded before a page becomes interactive. Prerendering
generally requires more JavaScript to get interactive, and that JavaScript tends to be more
complex than the Progressive Enhancement approach used by static rendering.
•• Server Rendering vs Static Rendering
Server rendering is not a silver bullet - its dynamic nature can come with significant
compute overhead costs. Many server rendering solutions don’t flush early, can delay TTFB or
double the data being sent (e.g. inlined state used by JS on the client). In React, renderToString()
can be slow as it’s synchronous and single-threaded. Getting server rendering “right” can
involve finding or building a solution for component caching, managing memory consumption,
applying memoisation techniques, and many other concerns. You’re generally processing/
rebuilding the same application multiple times - once on the client and once on the server. Just
because server rendering can make something show up sooner doesn’t suddenly mean you
have less work to do.
Server rendering produces HTML on-demand for each URL but can be slower than just
serving static rendered content. If you can put in the additional leg-work, server rendering +
HTML caching can massively reduce server render time. The upside to server rendering is the
ability to pull more “live” data and respond to a more complete set of requests than is possible
with static rendering. Pages requiring personalisation are a concrete example of the type of
request that would not work well with static rendering.
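The effect of pairing server rendering with HTML caching can be sketched as below; the render function and cache are toy stand-ins for a real template engine and cache layer. As the text notes, personalised pages would defeat this kind of per-URL cache:

```python
# Sketch of server rendering with an HTML cache: each URL is rendered once,
# then served from memory. The render function is a toy stand-in for
# expensive template work; names are illustrative.
render_count = 0

def render(url):
    global render_count
    render_count += 1            # stands in for the expensive render step
    return f"<html><body>Rendered {url}</body></html>"

cache = {}

def serve(url):
    if url not in cache:         # cache miss: render once and remember
        cache[url] = render(url)
    return cache[url]            # cache hit: no server render time at all

serve("/jeans")
serve("/jeans")                  # second request is served from the cache
print(render_count)
```

A cached response costs no render time, which is why server rendering plus HTML caching can approach static rendering for pages that are the same for every user.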
Server rendering can also present interesting decisions when building a PWA. Is it better
to use full-page service worker caching, or just server-render individual pieces of content?
•• Client-Side Rendering (CSR)
Client-side rendering (CSR) means rendering pages directly in the browser using JavaScript.
All logic, data fetching, templating and routing are handled on the client rather than the server.
Client-side rendering can be difficult to get and keep fast for mobile. It can approach the
performance of pure server-rendering if doing minimal work, keeping a tight JavaScript budget
and delivering value in as few RTTs as possible. Critical scripts and data can be delivered
sooner using HTTP/2 Server Push or <link rel=preload>, which gets the parser working for
you sooner. Patterns like PRPL are worth evaluating in order to ensure initial and subsequent
navigations feel instant.
FCP TTI
net
GET /
GET/bundle.js
JS
render(app)
The primary downside to Client-Side Rendering is that the amount of JavaScript required
tends to grow as an application grows. This becomes especially difficult with the addition of
new JavaScript libraries, polyfills and third-party code, which compete for processing power
and must often be processed before a page’s content can be rendered. Experiences built with
CSR that rely on large JavaScript bundles should consider aggressive code-splitting, and be sure
to lazy-load JavaScript - “serve only what you need, when you need it”. For experiences with
little or no interactivity, server rendering can represent a more scalable solution to these issues.
For folks building a Single Page Application, identifying core parts of the User Interface
shared by most pages means you can apply the Application Shell caching technique. Combined
with service workers, this can dramatically improve perceived performance on repeat visits.
<!-- Static HTML version of the requested page. Generally inert due to use of JS event handlers. -->
<h1>To Do's</h1>
<ul>
<li><input type="checkbox"> Wash dishes</li>
<li><input type="checkbox" checked> Mop floors</li>
<li><input type="checkbox"> Fold laundry</li>
</ul>
<footer><input placeholder="Add To Do..."></footer>
<script>
// Data required to render the view (which is already rendered above)
var DATA = {"todos":[
{"text":"Wash dishes","checked":false,"created":1546464530049},
{"text":"Mop floors","checked":true,"created":1546464571013},
{"text":"Fold laundry","checked":false,"created":1546424241610}
]}
</script>
<!-- JS to boot up -->
<script src="/bundle.js"></script>
</body>
As you can see, the server is returning a description of the application’s UI in response
to a navigation request, but it’s also returning the source data used to compose that UI, and a
complete copy of the UI’s implementation which then boots up on the client. Only after
bundle.js has finished loading and executing does this UI become interactive.
Performance metrics collected from real websites using SSR rehydration indicate its use
should be heavily discouraged. Ultimately, the reason comes down to User Experience: it’s
extremely easy to end up leaving users in an “uncanny valley”.
(Figure: network timeline for SSR with rehydration - GET / returns server-rendered HTML
quickly, but TTI is delayed until bundle.js is fetched and render(app, DATA) runs on the
client.)
(Figure: a DOM tree for a simple document)
Document
  Root element: <html>
    Element: <head>
      Element: <title>
        Text: "My title"
    Element: <body>
      Element: <a>  (Attribute: "href")
        Text: "My link"
      Element: <h1>
        Text: "My header"
(Figure: the server sets a cookie in the browser with an HTTP response header)
HTTP Header:
Set-Cookie: NAME=VALUE;
    domain = (who can read);
    expires = (when expires);
    secure = (only over SSL)
cookie and sends the cookie to Alice’s browser. This cookie tells the website to load
Alice’s account content, so that the homepage now reads, “Welcome, Alice.”
Alice then clicks to a product page displaying a pair of jeans. When Alice’s web browser
sends an HTTP request to the website for the jeans product page, it includes Alice’s
session cookie with the request. Because the website has this cookie, it recognises the
user as Alice, and she does not have to log in again when the new page loads.
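The exchange described above can be sketched with Python’s standard-library cookie classes; the session id value is a made-up example:

```python
# Sketch of the cookie exchange described above, using Python's standard
# library. The session id "abc123" is a made-up example value.
from http.cookies import SimpleCookie

# Server side: after Alice logs in, attach a session cookie to the response.
response = SimpleCookie()
response["session_id"] = "abc123"
response["session_id"]["httponly"] = True   # keep the cookie away from page JS
header = response.output(header="Set-Cookie:")
print(header)

# Browser side: later requests echo the cookie back in a Cookie header,
# which the server parses to recognise Alice without a new login.
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)
```

This is exactly why Alice stays logged in across pages: every request to the site silently carries the session cookie back to the server.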
•• Personalisation: Cookies help a website “remember” user actions or user preferences,
enabling the website to customise the user’s experience. If Alice logs out of the shopping
website, her username can be stored in a cookie and sent to her web browser. Next
time she loads that website, the web browser sends this cookie to the web server,
which then prompts Alice to log in with the username she used last time.
•• Tracking: Some cookies record what websites users visit. This information is sent to
the server that originated the cookie the next time the browser has to load content
from that server. With third-party tracking cookies, this process takes place anytime
the browser loads a website that uses that tracking service.
If Alice has previously visited a website that sent her browser a tracking cookie, this
cookie may record that Alice is now viewing a product page for jeans. The next time
Alice loads a website that uses this tracking service, she may see ads for jeans. However,
advertising is not the only use for tracking cookies. Many analytics services also use
tracking cookies to anonymously record user activity. (Cloudflare Web Analytics is
one of the few services that does not use cookies to provide analytics, helping to
protect user privacy.)
3.8.1 Types of Cookies
There are three different types of cookies:
•• Session Cookies: These are mainly used by online shops and allow you to keep
items in your basket when shopping online. These cookies expire after a specific time
or when the browser is closed.
•• Permanent Cookies: These remain in operation, even when you have closed the
browser. They remember your login details and password so you don’t have to type
them in every time you use the site. It is recommended that you delete these types of
cookies after a specific time.
•• Third-Party Cookies: These are installed by third parties for collecting certain
information. For example: Google Maps.
The following screenshot shows where the data of a cookie is stored. To view this, I
have used a Firefox plugin called Cookies Manager+. It shows the date when a
cookie will expire.
Site                       Cookie Name
mozilla.org                _utma
mozilla.org                _utmb
mozilla.org                _utmz
mozilla.org                WT_FPC
mozilla.org                wtspl
ocsp.entrust.net           avr_3185115268_0_0_4294901760_2633791744...
statse.webtrendslive.com   ACOOKIE
www.mozilla.org            _utmli
www.????????????.com       member_id
www.????????????.com       pass_hash
www.????????????.com       session_id

Details of the selected cookie:
Name:     pass_hash
Content:  3973c5ef7cdb1c980e437a49072733b8
Domain:   .www.????????.com
Path:     /
Send For: Any type of connection
Expires:  Wednesday, December 19, 2012 1:21:21 PM - Will expire in 6 days, 23 hours, 5
•• For Firefox: Keep in mind that the more popular a browser is, the higher the chance
that it is being targeted for spyware or malware infection.
Step 1: Look at the top end of your Firefox window and you will see a ‘Firefox’
button. Click on it and click ‘Options’.
Step 2: Click on ‘Privacy’.
Step 3: You will see ‘Firefox will:’ Set it to ‘Use custom settings for history’.
Step 4: Click on the ‘Show Cookies’ button on the right side.
Step 5: If you want to delete cookies set by individual sites, enter the complete
domain or partial domain name of the site you want to manage in the search field.
Your search will retrieve the list of cookies set for that site. Click ‘Remove Cookie’.
Step 6: If you want to delete all cookies, click the top of the Firefox window and
click on the Firefox button. Click on the History menu and pick out ‘Clear Recent
History...’. Select ‘Everything’ for the ‘Time Range to Clear’ option. Then click on the
downward arrow located next to ‘Details’. This will open up the list of items. Click
‘Cookies’ and make sure all the other items are unselected. Click on the ‘Clear Now’
button at the bottom. Close your ‘Clear Recent History’ window.
(Screenshot: the Firefox Options window with the Privacy panel selected.)
•• For Chrome
Step 1: At the top right hand side of your browser toolbar, click on the Chrome icon.
Step 2: Click on Settings.
Step 3: Scroll to the bottom and click ‘Show advanced settings’.
Step 4: Under ‘Privacy’, you will see ‘Content Settings’, click on it.
Step 5: Under ‘Cookies’, you will see ‘All cookies and site data’, click on this. Please
note that you can block cookies altogether from being set on your browser by clicking
‘Block sites from setting any data.’ Unfortunately, many websites you browse will stop
working if you do this. It is better if you just periodically clear your cookies manually
instead of preventing them from being set by your browser.
Step 6: You will see a full listing of all your cookies. You can click REMOVE ALL
to clear all your cookies or you can pick a particular website and clear your cookies
from that site.
(Screenshot: Chrome’s Clear browsing data settings, listing items such as browsing
history, download history, passwords, content licences and saved content settings.)
•• For Safari
Step 1: Open Safari.
Step 2: Click Safari and then on Preferences. Click on ‘Privacy’.
Step 3: Click on ‘Details’.
Step 4: You will see a list of websites that store cookies. You can remove single sites
by clicking the ‘Remove’ button and selecting a site. If you want clear all cookies,
click ‘Remove All’.
Step 5: When you have finished removing sites, click ‘Done’.
•• For Opera
Step 1: Click ‘Settings’ at the top of the Opera browser.
Step 2: Click ‘Preferences’ and select ‘Advanced’.
Step 3: In the ‘Advanced’ screen, select ‘Cookies’.
Step 4: At this point, you can select one of three options:
•• Accept all cookies (this is the default setting)
•• Accept cookies only from sites you visit and
•• Never accept cookies
If you block cookies, most of the sites you visit will stop working. This is usually not
a good choice. Your best default choice is to accept cookies only from sites you visit.
This blocks cookies set by advertising networks and other third party sites. These
third party sites set cookies to track your movements across sites to enhance their ad
targeting capabilities.
Step 5: Select ‘Delete new cookies when exiting Opera’. If you want to use a specific
website but don’t want to keep any cookies for that site between your visits, select
this option. It is not a good idea to use this option for sites you visit frequently.
(Screenshot: the Opera Preferences dialog - cookies can be accepted, restricted to
sites you visit, or managed individually via ‘Manage cookies’.)
those of third parties, or any other means, advertising based on the analysis of your
browsing habits.
• What are cookies NOT used for on this website?
We do not store sensitive personally identifiable information such as your address, your password,
your credit or debit card details, etc., in the cookies we use.
• Who uses the information stored in cookies?
The information stored in the cookies of our website is used exclusively by us, except for those
identified later as “third-party cookies”, which are used and managed by external entities to
provide us with services requested by us to improve our services and User experience when
browsing our website. The main services for which these “third-party cookies” are used are to
obtain access statistics and guarantee the payment operations carried out.
• How can I avoid using cookies on this website?
If you prefer to avoid the use of cookies on this page, taking into account the above limitations,
you must first disable the use of cookies in your browser and, secondly, delete the cookies stored
in your browser associated with this website. This possibility of avoiding the use of cookies can
be carried out by you at any time.
• How do I disable and eliminate the use of cookies?
To restrict, block or delete cookies from this website, you can do so, at any time, by changing
your browser settings according to the guidelines indicated below. Although the parameterisation
of each browser is different, it is common for cookies to be configured in the «Preferences»
or «Tools» menu. For more details on the configuration of cookies in your browser, see the
«Help» menu.
3.8.4 Cookies Frames and Frame Busting
HTTP PROTOCOL: HTTP is a stateless protocol for communication between client and
server: no state is stored between requests. To maintain a session
between client and server, we use client-side cookies.
• Client side cookies (Header, cookies and request):
An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie)
is a small piece of data sent from a website and stored on the user’s computer by the user’s
web browser while the user is browsing. Cookies were designed to be a reliable mechanism for
websites to remember stateful information (such as items added in the shopping cart in an online
store) or to record the user’s browsing activity (including clicking particular buttons, logging
in, or recording which pages were visited in the past). They can also be used to remember
arbitrary pieces of information that the user previously entered into form fields such as names,
addresses, passwords, and credit card numbers.
Cookie authentication: A server can recognise a returning client with the help of
cookies issued through an authentication server.
Cookies Security Policy: Some policies should be enforced for cookies:
1. Policies for user authentication
2. Policies for personalisation details of both client and server
POST /login HTTP/1.1
username=XXX&password=YYY

HTTP/1.1 200 OK
Set-Cookie: sessid=XYZ

GET /profile HTTP/1.1
Cookie: sessid=XYZ
(Access to protected contents)

(Figure: client-server exchange for cookie-based login)
Client: GET /secured                  Server: HTTP 302 redirect to /login (or HTTP 403 if refused)
Client: GET /login                    Server: HTTP 200 with login.html
Client: POST /login (user=&pass=)     Server: verifies credentials, sets auth cookie
Client: GET /secured (Cookie: auth)   Server: serves the protected content
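The flow in the diagram above can be modelled as a toy in-memory server; paths, credentials and response strings here are illustrative only, not a real web framework:

```python
# Toy model of the login flow above: an unauthenticated request to /secured
# is redirected to /login, a successful login sets a session cookie, and the
# cookie then unlocks the protected page. Everything here is illustrative.
import secrets

sessions = set()               # server-side record of valid session ids

def handle(method, path, cookie=None, credentials=None):
    if path == "/secured":
        if cookie in sessions:
            return "200 OK"
        return "302 -> /login"         # not authenticated: redirect to login
    if method == "POST" and path == "/login":
        if credentials == ("user", "pass"):      # toy credential check
            sessid = secrets.token_hex(8)
            sessions.add(sessid)
            return f"200 OK; Set-Cookie: sessid={sessid}"
        return "403 Forbidden"
    return "404 Not Found"

print(handle("GET", "/secured"))                 # redirected: no cookie yet
reply = handle("POST", "/login", credentials=("user", "pass"))
sessid = reply.split("sessid=")[1]               # browser stores the cookie
print(handle("GET", "/secured", cookie=sessid))  # cookie now grants access
```

The session set is the server-side state that HTTP itself does not keep; the cookie is merely the key the client presents to look it up.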
Cookieless Authentication
Cookieless authentication, also known as token-based authentication, is a technique that
leverages JSON web tokens (JWT) instead of cookies to authenticate a user. It uses a protocol
that creates encrypted security tokens. These tokens allow the user to verify their identity. In
return, the users receive a unique access token to perform the authentication. The token contains
information about user identities and transmits it securely between the server and client.
The entire Cookieless authentication works in the following manner:
1. The user logs into the service by providing their login credentials. It issues an access
request from the client-side by sending the credential and API key (public key) to the
application server.
2. The server verifies the login credentials that checks the password entered against the
username. Once approved, the server will generate a unique session token that will
help authorise subsequent actions.
3. This access token is sent back to the client via URL query strings, post request body,
or other means. The server-generated signed authentication token gets assigned with
an expiration time.
4. The token gets transmitted back to the user’s browser. On every subsequent request
to the application server or future website visits, the access token gets added to the
authorisation header along with the public key. If there is a match from the application
server against the private key, the user can proceed. If a given token expires, a new
token gets generated as an authentication request.
Benefits of Cookieless Authentication
•• Scalable and Efficient: In cookieless authentication, the tokens remain stored
on the user’s end. The server only needs to sign the authentication token once on
successful login. That makes the entire technique scalable and allows maintaining
more users on an application at once without any hassle.
•• Better Performance: Cookie-based authentication requires the server to perform
an authentication lookup every time the user requests a page. You can eliminate the
round-trips with tokens through the cookieless authentication technique. In cookieless
authentication, the access token and the public key are added to the authorisation
header on every page request.
•• Robust Security: Since cookieless authentication leverages tokens like JWT
(stateless), only a private key (used to create the authentication token) can validate
it when received at the server-side.
•• Seamless Across Devices: Cookieless authentication works well with all native
applications. Tokens are much easier to implement on iOS, Android, IoT devices,
and distributed systems, making the authentication system seamless.
•• Expiration Time: Usually, tokens get generated with an expiration time, after which
they become invalid. Then a new token needs to be obtained for reauthentication.
If a token gets leaked, the potential damage becomes much smaller due to its short
lifespan.
(Figure: attacks on a web server - an attacker ("pirate") exploits web server
vulnerabilities such as HTML injection, session hijacking, URL interpretation and the
absence of checks on user data to reach the web server and its database.)
The attacker forces a non-authenticated user to log in to an account the attacker controls. If the
victim does not realise this, they may add personal data—such as credit card information—to
the account. The attacker can then log back into the account to view this data, along with the
victim’s activity history on the web application.
3.10.1 How does CSRF work?
For a CSRF attack to be possible, three key conditions must be in place:
•• A relevant action: There is an action within the application that the attacker has a
reason to induce. This might be a privileged action (such as modifying permissions
for other users) or any action on user-specific data (such as changing the user’s own
password).
•• Cookie-based session handling: Performing the action involves issuing one or
more HTTP requests, and the application relies solely on session cookies to identify
the user who has made the requests. There is no other mechanism in place for tracking
sessions or validating user requests.
•• No unpredictable request parameters: The requests that perform the action do
not contain any parameters whose values the attacker cannot determine or guess. For
example, when causing a user to change their password, the function is not vulnerable
if an attacker needs to know the value of the existing password.
For example, an application contains a function that lets the user change the email
address on their account. When a user performs this action, they make an HTTP request like
the following:
POST /email/change HTTP/1.1
Host: vulnerable-website.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 30
Cookie: session=yvthwsztyeQkAPzeQ5gHgTvlyxHfsAfE
[email protected]
This meets the conditions required for CSRF:
•• The action of changing the email address on a user’s account is of interest to an
attacker. Following this action, the attacker will typically be able to trigger a password
reset and take full control of the user’s account.
•• The application uses a session cookie to identify which user issued the request. There
are no other tokens or mechanisms in place to track user sessions.
•• The attacker can easily determine the values of the request parameters that are needed
to perform the action.
With these conditions in place, the attacker can construct a web page containing the
following HTML:
168 Computer System Security
<html>
<body>
<form action="https://vulnerable-website.com/email/change" method="POST">
<input type="hidden" name="email" value="[email protected]" />
</form>
<script>
document.forms[0].submit();
</script>
</body>
</html>
If a victim user visits the attacker’s web page, the following will happen:
•• The attacker’s page will trigger an HTTP request to the vulnerable web site.
•• If the user is logged in to the vulnerable web site, their browser will automatically
include their session cookie in the request (assuming SameSite cookies are not being
used).
•• The vulnerable web site will process the request in the normal way, treat it as having
been made by the victim user, and change their email address.
3.10.2 How to construct a CSRF attack
Manually creating the HTML needed for a CSRF exploit can be cumbersome, particularly
where the desired request contains a large number of parameters, or there are other quirks in
the request. The easiest way to construct a CSRF exploit is using the CSRF PoC generator that
is built in to Burp Suite Professional:
•• Select a request anywhere in Burp Suite Professional that you want to test or exploit.
•• From the right-click context menu, select Engagement tools / Generate CSRF PoC.
•• Burp Suite will generate some HTML that will trigger the selected request (minus
cookies, which will be added automatically by the victim’s browser).
•• You can tweak various options in the CSRF PoC generator to fine-tune aspects of
the attack. You might need to do this in some unusual situations to deal with quirky
features of requests.
•• Copy the generated HTML into a web page, view it in a browser that is logged in to
the vulnerable web site, and test whether the intended request is issued successfully
and the desired action occurs.
3.10.3 Preventing CSRF attacks
The most robust way to defend against CSRF attacks is to include a CSRF token within relevant
requests. The token should be:
•• Unpredictable with high entropy, as for session tokens in general.
•• Tied to the user’s session.
•• Strictly validated in every case before the relevant action is executed.
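A minimal sketch of these three properties, assuming a generic server-side session store (the store and function names are illustrative):

```python
import hmac
import secrets

# Illustrative in-memory store keyed by session ID; a real application
# would persist the token alongside the user's session data.
_csrf_tokens = {}

def issue_csrf_token(session_id: str) -> str:
    """Generate a high-entropy, unpredictable token tied to the session."""
    token = secrets.token_urlsafe(32)
    _csrf_tokens[session_id] = token
    return token

def validate_csrf_token(session_id: str, submitted: str) -> bool:
    """Strictly validate before executing the action; reject any mismatch."""
    expected = _csrf_tokens.get(session_id)
    # Constant-time comparison avoids leaking token bytes via timing.
    return expected is not None and hmac.compare_digest(expected, submitted)
```

The server embeds the issued token in a hidden form field and rejects any state-changing request whose submitted token does not match the one bound to the requester's session.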
An Example
Let’s consider a hypothetical example of a site vulnerable to a CSRF attack. This site
is a web-based email site that allows users to send and receive email. The site uses implicit
authentication to authenticate its users. One page, http://example.com/compose.htm, contains
an HTML form allowing a user to enter a recipient’s email address, subject, and message as
well as a button that says, “Send Email.”
<form
action="http://example.com/send_email.htm"
method="GET">
Recipient's Email address: <input
type="text" name="to">
Subject: <input type="text" name="subject">
Message: <textarea name="msg"></textarea>
<input type="submit" value="Send Email">
</form>
When a user of example.com clicks “Send Email”, the data he entered will be sent to
http://example.com/send_email.htm as a GET request. Since a GET request simply appends
the form data to the URL, the user will be sent to the following URL (assuming he entered
“[email protected]” as the recipient, “hello” as the subject, and “What’s the status of that
proposal?” as the message):
http://example.com/send_email.htm?to=bob%40example.com&subject=hello&msg=What%27s+the+status+of+that+proposal%3F
The page send_email.htm would take the data it received and send an email to the
recipient from the user. Note that send_email.htm simply takes data and performs an action with
that data. It does not care where the request originated, only that the request was made. This
means that if the user manually typed the above URL into his browser, example.com would
still send an email. For example, if the user typed the following three URLs into his browser,
send_email.htm would send three emails (one each to Bob, Alice, and Carol):
http://example.com/send_email.htm?to=bob%40example.com&subject=hi+Bob&msg=test
http://example.com/send_email.htm?to=alice%40example.com&subject=hi+Alice&msg=test
http://example.com/send_email.htm?to=carol%40example.com&subject=hi+Carol&msg=test
A CSRF attack is possible here because send_email.htm takes any data it receives and
sends an email. It does not verify that the data originated from the form on compose.htm.
Therefore, if an attacker can cause the user to send a request to send_email.htm, that page will
cause example.com to send an email on behalf of the user containing any data of the attacker's
choosing, and the attacker will have successfully performed a CSRF attack.
To exploit this vulnerability, the attacker needs to force the user's browser to send a request
to send_email.htm to perform some nefarious action. (We assume the user visits a site under
the attacker’s control and the target site does not defend against CSRF attacks.) Specifically,
the attacker needs to forge a cross-site request from his site to example.com. Unfortunately,
HTML provides many ways to make such requests. The <img> tag, for example, will cause the browser
to load whatever URI is set as the src attribute, even if that URI is not an image (because the
browser can only tell the URI is an image after loading it). The attacker can create a page with
the following code:
<img src="http://example.com/send_email.htm?to=mallory%40example.com&subject=Hi&msg=My+email+address+has+been+stolen">
When the user visits that page, a request will be sent to send_email.htm, which will then
send an email to Mallory from the user.
[Fig. 3.25 diagram: the User's Web Browser holds an authenticated session with the Trusted Site, over which the Trusted Action is requested]
Fig. 3.25 The Web Browser has established an authenticated session with the Trusted Site. Trusted Action
should only be performed when the Web Browser makes the request over the authenticated session.
[Fig. 3.26 diagram: the Web Browser requests the Trusted Action over the authenticated session]
Fig. 3.26 A valid request. The Web Browser attempts to perform a Trusted Action. The Trusted Site confirms
that the Web Browser is authenticated and allows the action to be performed.
[Fig. 3.27 diagram: the Attacking Site causes the Web Browser to send a request over the existing authenticated session to the Trusted Site]
Fig. 3.27 A CSRF attack. The Attacking Site causes the browser to send a request to the Trusted Site. The
Trusted Site sees a valid, authenticated request from the Web Browser and performs the Trusted Action.
CSRF attacks are possible because web sites authenticate the web browser, not the user.
3.11 Cross Site Scripting (XSS)
A successfully injected script can access any cookies, session tokens, or other sensitive
information retained by the browser and used with that site. These scripts can even rewrite
the content of the HTML page.
Cross site scripting (XSS) is an attack in which an attacker injects malicious executable
scripts into the code of a trusted application or website. Attackers often initiate an XSS attack
by sending a malicious link to a user and enticing the user to click it. If the app or website lacks
proper data sanitisation, the malicious link executes the attacker’s chosen code on the user’s
system. As a result, the attacker can steal the user’s active session cookie.
Cross-site scripting (XSS) is a security exploit which allows an attacker to inject into a
website malicious client-side code. This code is executed by the victims and lets the attackers
bypass access controls and impersonate users. These attacks succeed if the Web app does
not employ enough validation or encoding. The user’s browser cannot detect the malicious
script is untrustworthy, and so gives it access to any cookies, session tokens, or other sensitive
site-specific information, or lets the malicious script rewrite the HTML content. The malicious
content often includes JavaScript, but sometimes HTML, Flash, or any other code the browser
can execute. The variety of attacks based on XSS is almost limitless, but they commonly include
transmitting private data like cookies or other session information to the attacker, redirecting
the victim to a webpage controlled by the attacker, or performing other malicious operations
on the user’s machine under the guise of the vulnerable site.
Cross-Site Scripting (XSS) attacks occur when:
1. Data enters a Web application through an untrusted source, most frequently a web
request.
2. The data is included in dynamic content that is sent to a web user without being
validated for malicious content.
The malicious content sent to the web browser often takes the form of a segment of
JavaScript, but may also include HTML, Flash, or any other type of code that the browser may
execute. The variety of attacks based on XSS is almost limitless, but they commonly include
transmitting private data, like cookies or other session information, to the attacker, redirecting
the victim to web content controlled by the attacker, or performing other malicious operations
on the user’s machine under the guise of the vulnerable site.
3.11.1 Categories of XSS attacks
XSS attacks can be put into three categories: stored (also called persistent), reflected (also called
non-persistent), or DOM-based.
•• Stored XSS Attacks: The injected script is stored permanently on the target servers.
The victim then retrieves this malicious script from the server when the browser sends
a request for data.
•• Reflected XSS Attacks: When a user is tricked into clicking a malicious link,
submitting a specially crafted form, or browsing to a malicious site, the injected code
travels to the vulnerable website. The Web server reflects the injected script back to
the user’s browser, such as in an error message, search result, or any other response
that includes data sent to the server as part of the request. The browser executes the
code because it assumes the response is from a “trusted” server which the user has
already interacted with.
•• DOM-based XSS Attacks: The vulnerability exists in client-side code rather than
in server-side code. The injected payload modifies the DOM environment in the
victim's browser, so the page's own scripts execute in an unexpected way; the
malicious data may never be sent to the server at all.
[Figure: the three categories differ in where the malicious data persists: on the server (stored), in the request and response (reflected), or entirely in the client (DOM-based)]
Encode data on output
In an HTML context, you should convert non-whitelisted values into HTML entities:
•• < converts to: &lt;
•• > converts to: &gt;
In a JavaScript string context, non-alphanumeric values should be Unicode-escaped:
•• < converts to: \u003c
•• > converts to: \u003e
Sometimes you’ll need to apply multiple layers of encoding, in the correct order. For
example, to safely embed user input inside an event handler, you need to deal with both the
JavaScript context and the HTML context. So you need to first Unicode-escape the input, and
then HTML-encode it:
<a href="#" onclick="x='This string needs two layers of escaping'">test</a>
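The two-layer ordering can be sketched in Python. This is illustrative only: `js_escape` is a simplified stand-in for a full Unicode-escaping routine, and the anchor markup mirrors the example above:

```python
import html

def js_escape(s: str) -> str:
    """Layer 1: Unicode-escape non-alphanumerics for the JS string context."""
    return "".join(ch if ch.isalnum() else "\\u%04x" % ord(ch) for ch in s)

def embed_in_onclick(user_input: str) -> str:
    """Layer 2: HTML-encode the already JS-escaped value for the attribute."""
    escaped = html.escape(js_escape(user_input), quote=True)
    return '<a href="#" onclick="x=\'%s\'">test</a>' % escaped
```

Applying the layers in the opposite order would leave the JavaScript context unprotected, which is why the text stresses escaping for the innermost context first.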
Validate input on arrival
Encoding is probably the most important line of XSS defense, but it is not sufficient to prevent
XSS vulnerabilities in every context. You should also validate input as strictly as possible at
the point when it is first received from a user.
Examples of input validation include:
•• If a user submits a URL that will be returned in responses, validating that it starts with
a safe protocol such as HTTP and HTTPS. Otherwise someone might exploit your
site with a harmful protocol like javascript or data.
•• If a user supplies a value that is expected to be numeric, validating that the value
actually contains an integer.
•• Validating that input contains only an expected set of characters.
Input validation should ideally work by blocking invalid input. An alternative approach,
of attempting to clean invalid input to make it valid, is more error prone and should be avoided
wherever possible.
Whitelisting vs blacklisting
Input validation should generally employ whitelists rather than blacklists. For example, instead
of trying to make a list of all harmful protocols (javascript, data, etc.), simply make a list of safe
protocols (HTTP, HTTPS) and disallow anything not on the list. This will ensure your defense
doesn’t break when new harmful protocols appear and make it less susceptible to attacks that
seek to obfuscate invalid values to evade a blacklist.
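A minimal whitelist check along these lines might look as follows (the scheme set and function name are illustrative):

```python
from urllib.parse import urlparse

# Whitelist of acceptable schemes; anything not on the list is rejected.
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """Accept a user-supplied URL only if its scheme is on the whitelist."""
    return urlparse(url.strip()).scheme.lower() in ALLOWED_SCHEMES
```

Because the check names what is allowed rather than what is forbidden, newly invented harmful schemes are rejected by default instead of slipping past an out-of-date blacklist.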
Allowing “safe” HTML
Allowing users to post HTML markup should be avoided wherever possible, but sometimes it’s
a business requirement. For example, a blog site might allow comments to be posted containing
some limited HTML markup.
The classic approach is to try to filter out potentially harmful tags and JavaScript. You
can try to implement this using a whitelist of safe tags and attributes, but thanks to discrepancies
in browser parsing engines and quirks like mutation XSS, this approach is extremely difficult
to implement securely.
The least bad option is to use a JavaScript library that performs filtering and encoding
in the user’s browser, such as DOMPurify. Other libraries allow users to provide content in
markdown format and convert the markdown into HTML. Unfortunately, all these libraries
have XSS vulnerabilities from time to time, so this is not a perfect solution. If you do use one
you should monitor closely for security updates.
How to prevent XSS using a template engine
Many modern websites use server-side template engines such as Twig and Freemarker
to embed dynamic content in HTML. These typically define their own escaping system. For
example, in Twig, you can use the e() filter, with an argument defining the context:
{{ user.firstname | e('html') }}
Some other template engines, such as Jinja and React, escape dynamic content by default
which effectively prevents most occurrences of XSS.
How to prevent XSS in PHP
In PHP there is a built-in function to encode entities called htmlentities(). You should
call this function to escape your input when inside an HTML context. The function should be
called with three arguments:
•• Your input string.
•• ENT_QUOTES, which is a flag that specifies all quotes should be encoded.
•• The character set, which in most cases should be UTF-8.
For example:
<?php echo htmlentities($input, ENT_QUOTES, 'UTF-8'); ?>
When in a JavaScript string context, you need to Unicode-escape input as already
described. Unfortunately, PHP doesn’t provide an API to Unicode-escape a string. Here is
some code to do that in PHP:
<?php
function jsEscape($str) {
    $output = '';
    $str = str_split($str);
    for ($i = 0; $i < count($str); $i++) {
        $chrNum = ord($str[$i]);
        $chr = $str[$i];
        // U+2028 and U+2029 are line separators that terminate JavaScript
        // strings; catch their UTF-8 byte sequences (E2 80 A8 and E2 80 A9).
        if ($chrNum === 226) {
            if (isset($str[$i+1]) && ord($str[$i+1]) === 128) {
                if (isset($str[$i+2]) && ord($str[$i+2]) === 168) {
                    $output .= '\u2028';
                    $i += 2;
                    continue;
                }
                if (isset($str[$i+2]) && ord($str[$i+2]) === 169) {
                    $output .= '\u2029';
                    $i += 2;
                    continue;
                }
            }
        }
        switch ($chr) {
            case "'":
            case '"':
            case "\n":
            case "\r":
            case "&":
            case "\\":
            case "<":
            case ">":
                // Emit the dangerous character as a \uXXXX escape sequence.
                $output .= sprintf("\\u%04x", $chrNum);
                break;
            default:
                $output .= $str[$i];
                break;
        }
    }
    return $output;
}
?>
Here is how to use the jsEscape function in PHP:
<script>x = '<?php echo jsEscape($_GET['x']); ?>';</script>
•• Reducing the attack surface: When researchers and testers discover a new
vulnerability, it is listed in the Common Weakness Enumeration (CWE) index.
Developers and security professionals pick the vulnerability in question and then work
on required security patches to rectify the flaw. Attackers also misuse CWE listings to
develop exploits that facilitate a malicious attack through various vulnerable versions.
Regular assessments through vulnerability scanning tools ensure web organisations
address these vulnerabilities before they can be exploited.
•• Application performance monitoring: Modern websites involve a combination of
multiple services and applications working together for an enhanced user experience.
Since modern networks are highly dynamic, the interactions between these systems
are periodically unpredictable. This could result in a range of defects that affect
application performance such as:
•• Response timeouts
•• Database server errors
•• Outdated server software
•• Insecure HTTP headers
•• Website outage
•• Poorly configured application firewalls
•• Insecure application server
Regular vulnerability scanning helps organisations pinpoint the cause of these defects
before they cause a significant impact on the website’s availability and reliability.
•• Forensics and attack detection: Vulnerability scans can be used to analyse the
root cause of a successful attack. These scanners can be used to identify various
indicators of compromise that show an attack in progress. Identifying vulnerabilities
aids in knowing the exact techniques used to infiltrate the system, such as unexpected
open ports, malicious files, and existing malware. Some vulnerability assessment tools
also identify machines used to commit the attack, which can help in the identification
of threat actors.
•• Speeding up continuous delivery: In the olden days, security testing would
present bottlenecks for the development process since bugs were identified at the end
of the development life cycle. Vulnerability assessment is a significant component of
modern DevOps workflows that eliminates these bottlenecks. Vulnerability scanners
automatically check the code and systems for weaknesses, which are quickly patched.
This allows for rapid, frequent product releases.
Ways to find a vulnerability in a website
The ever-changing cybersecurity landscape makes finding vulnerabilities and fixing them a
major consideration for website developers. Failure to address these vulnerabilities leaves
hackers with open doors to access the website with elevated privileges. Web developers and
administrators can find vulnerabilities on the websites in a number of ways, including:
•• Free vulnerability scanning: An application security scanner is a tool that is
configured to query specific interfaces to detect security and performance gaps.
These tools rely on documented tools and scripts to check for known weaknesses.
Vulnerability scanners simulate various if-then scenarios to evaluate user actions and
system configurations that could facilitate an exploit. An efficiently configured passive
web security scan helps examine applications and networks, then provides a log of
weaknesses to be addressed in order of priority.
Crashtest Security Suite is a highly popular and effective scanner that simplifies
vulnerability scanning by helping organisations establish an end-to-end continuous
testing process. Besides detecting and alerting on system weaknesses, the online
scanner also helps developers to establish a reliable, repeatable remediation process.
•• Conducting penetration testing: Penetration testing is a proactive security
approach in which security professionals attempt to safely exploit vulnerabilities such
as different types of SQL injection, cross-site scripting, and cross-site request forgery.
Once vulnerabilities are identified, organisations simulate and study the actions
of an attacker. Security teams conduct penetration tests to
evaluate the efficiency of security mechanisms and compliance with security policies.
To do so, testers simulate an attacker’s workflow, relying on existing vulnerabilities
and privilege escalation to access system data. They then outline detailed reports
on insights provided by the test, which are then used to fine-tune security controls.
•• Creating a Threat Intelligence Framework: Once the penetration test report has
been tabled, it is important to create a central repository for the detection, alerting,
and management of security threats. A threat intelligence framework outlines a
repeatable, scalable security incident management plan for all stakeholders involved
in securing the website. A robust threat intelligence mechanism helps organisations
lower expenses by speeding up the response to data breaches. The shared repository
includes crucial information that can be used as a collaborative knowledge base for
organisation-wide security compliance.
3.12.3 Stages of Vulnerability Management
Vulnerability management strategies and tools enable organisations to quickly evaluate and
mitigate security vulnerabilities in their IT infrastructure. A vulnerability management process
can vary between environments, but most should follow these four stages, typically performed
by a combination of human and technological resources:
1. Identifying vulnerabilities
2. Evaluating vulnerabilities
3. Treating vulnerabilities
4. Reporting vulnerabilities
Vulnerability management is a strategy that organisations can use to track, minimise,
and eradicate vulnerabilities in their systems. This process involves identifying and classifying
vulnerabilities, so that appropriate protections or remediations can be applied.
Often, vulnerability management processes employ the use of vulnerability scanners,
vulnerability databases, manual or automated vulnerability testing, and other tools. This
combination of tools and processes helps teams ensure that all threats are accounted for.
This includes:
•• Vulnerabilities in code, such as SQL injection or cross-site scripting (XSS) opportunities
•• Insufficient authentication and authorisation mechanisms
•• Insecure or misconfigured settings, such as weak access controls or passwords
Why do you need a vulnerability management process?
Vulnerabilities provide openings for attackers to enter your systems. Once inside, they can abuse
resources, steal data, or deny access to services. If you do not identify and patch vulnerabilities,
you are essentially leaving the doors and windows open for attackers to enter your network.
Vulnerability management programmes provide structured guidelines to help you evaluate
and secure your network. Rather than ignoring vulnerabilities or taking the risk of vulnerabilities
being overlooked, this process can help you conduct a thorough search.
Vulnerability management strategies can help you ensure that vulnerabilities in your
system have the shortest possible life span. It can also provide proof of your due diligence in
case your network is compromised despite your efforts.
The 4 stages of vulnerability management
When creating a vulnerability management programme, there are several stages you should
account for. By building these stages into your management process, you help ensure that
no vulnerabilities are overlooked. You also help ensure that discovered vulnerabilities are
addressed appropriately.
1. Identify vulnerabilities: The first stage of the management process requires
identifying which vulnerabilities might affect your systems. Once you know which vulnerabilities
or vulnerability types you are looking for, you can begin identifying which ones exist.
This stage uses threat intelligence information and vulnerability databases to guide your
search. It also often uses vulnerability scanners to identify affected components and create an
inventory for use in patch management.
As part of this phase, you want to create a full map of your system that specifies where
assets are, how those assets can potentially be accessed, and which systems are currently in
place for protection. This map can then be used to guide the analysis of vulnerabilities and
ease remediation.
2. Evaluating vulnerabilities: After you have identified all possible vulnerabilities
in your system, you can begin evaluating the severity of the threats. This evaluation helps
you prioritise your security efforts and can help reduce your risks more quickly. If you start
remediating the most severe vulnerabilities first, you can reduce the chance of an attack occurring
while you’re securing the rest of your system. When evaluating vulnerabilities, there are several
systems you can use to establish the risk of a vulnerability being exploited.
One system is the Common Vulnerability Scoring System (CVSS). This is a standardised
system used by many vulnerability databases and researchers. CVSS evaluates the level of
vulnerability according to inherent characteristics, temporal traits, and the specific effect of the
vulnerability on your systems. The challenge with CVSS is that once a risk level is assigned, it
is permanent, so it’s important to include other factors from threat intelligence and your own
business risk information, in order to determine prioritisation.
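As a toy illustration, prioritising findings by their CVSS base score might look like this. The findings and scores are hypothetical, and as the text notes, real prioritisation should also weigh threat intelligence and business risk:

```python
# Hypothetical findings with illustrative CVSS base scores (0.0 to 10.0).
findings = [
    {"id": "weak-password-policy", "cvss": 5.3},
    {"id": "sql-injection-login", "cvss": 9.8},
    {"id": "outdated-tls-config", "cvss": 7.5},
]

def prioritise(findings):
    """Order findings so the most severe vulnerabilities are remediated first."""
    return sorted(findings, key=lambda f: f["cvss"], reverse=True)
```

Remediating down this ordered list reduces the chance of an attack while the rest of the system is still being secured, which is the point made above.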
replace entire underlying components, all of which will then need to be re-verified against both
the application requirements and another security test.
This can—and often does—set application developers back by weeks as they continue to
try to meet now-impossible release deadlines. This creates a lot of friction within organisations
and has companies choosing between two bad options: “signing off” on risk and releasing an
application with vulnerabilities or missing expectations on delivery targets (or both). What’s
worse, it can cost up to 100 times more to fix an issue discovered this late in the SDLC than
to simply fix it early on in the process (more on this later).
As the speed of innovation and frequency of software releases has accelerated over
time, it has only made all of these problems worse. This has led to the reimagining of the role
of application security in the software development process and creation of a secure SDLC.
3.13.3 Secure Software Development Life Cycle Processes
Implementing SDLC security affects every phase of the software development process. It requires
a mindset that is focused on secure delivery, raising issues in the requirements and development
phases as they are discovered. This is far more efficient—and much cheaper—than waiting for
these security issues to manifest in the deployed application. Secure software development life
cycle processes incorporate security as a component of every phase of the SDLC.
While building security into every phase of the SDLC is first and foremost a mindset
that everyone needs to bring to the table, security considerations and associated tasks will
actually vary significantly by SDLC phase.
• Phases of Secure Software Development Life Cycle
Each phase of the SDLC must contribute to the security of the overall application. This is done
in different ways for each phase of the SDLC, with one critical note: Software development life
cycle security needs to be at the forefront of the entire team’s minds. Let’s look at an example
of a secure software development life cycle for a team creating a membership renewal portal:
PHASE 1: REQUIREMENTS
In this early phase, requirements for new features are collected from various stakeholders. It’s
important to identify any security considerations for functional requirements being gathered
for the new release.
•• Sample functional requirement: user needs the ability to verify their contact
information before they are able to renew their membership.
•• Sample security consideration: users should be able to see only their own contact
information and no one else’s.
PHASE 2: DESIGN
This phase translates in-scope requirements into a plan of what this should look like in the
actual application. Here, functional requirements typically describe what should happen, while
security requirements usually focus on what shouldn’t.
•• Sample functional design: page should retrieve the user’s name, email, phone,
and address from CUSTOMER_INFO table in the database and display it on screen.
•• Sample security concern: we must verify that the user has a valid session token
before retrieving information from the database. If absent, the user should be redirected
to the login page.
PHASE 3: DEVELOPMENT
When it’s time to actually implement the design and make it a reality, concerns usually shift to
making sure the code is well-written from a security perspective. There are usually established
secure coding guidelines as well as code reviews that double-check that these guidelines
have been followed correctly. These code reviews can be either manual or automated using
technologies such as static application security testing (SAST).
That said, modern application developers can’t be concerned only with the code they
write, because the vast majority of modern applications aren’t written from scratch. Instead,
developers rely on existing functionality, usually provided by free open source components
to deliver new features and therefore value to the organisation as quickly as possible. In fact,
90%+ of modern deployed applications are made of these open-source components.
These open-source components are usually checked using Software Composition Analysis
(SCA) tools.
Secure coding guidelines, in this case, may include:
•• Using parameterised, read-only SQL queries to read data from the database and
minimise chances that anyone can ever commandeer these queries for nefarious
purposes
•• Validating user inputs before processing data contained in them
•• Sanitising any data that’s being sent back out to the user from the database
•• Checking open source libraries for vulnerabilities before using them
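The first guideline, parameterised queries, can be sketched with Python's sqlite3 module (the table and data are hypothetical):

```python
import sqlite3

# In-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "[email protected]"))

def find_user_email(conn, name):
    """The '?' placeholder binds the input as data, never as SQL syntax."""
    row = conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None
```

Because the driver keeps query structure and user input separate, a classic injection payload is treated as an ordinary (non-matching) name rather than as SQL.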
PHASE 4: VERIFICATION
The Verification phase is where applications go through a thorough testing cycle to ensure they
meet the original design and requirements. This is also a great place to introduce automated
security testing using a variety of technologies. The application is not deployed unless these
tests pass. This phase often includes automated tools like CI/CD pipelines to control verification
and release.
Verification at this phase may include:
•• Automated tests that express the critical paths of your application
•• Automated execution of application unit tests that verify the correctness of the
underlying application
•• Automated deployment tools that dynamically swap in application secrets to be used
in a production environment
PHASE 5: MAINTENANCE AND EVOLUTION
The story doesn’t end once the application is released. In fact, vulnerabilities that slipped through
the cracks may be found in the application long after it’s been released. These vulnerabilities
may be in the code developers wrote, but are increasingly found in the underlying open-
source components that comprise an application. This leads to an increase in the number of
188 Computer System Security
(iii) Card readers are usually mounted on the exterior (non-secured) side of the door
that they control.
3. Access control keypads:
(i) Access control keypads are devices which may be used in addition to or in place
of card readers.
(ii) The access control keypad has numeric keys which look similar to the keys on a
touch-tone telephone.
(iii) The access control keypad requires that a person desiring to gain access must
enter a correct numeric code.
(iv) When access control keypads are used in addition to card readers, both a valid
card and the correct code must be presented before entry is allowed.
4. Electric lock hardware:
(i) Electric lock hardware is the equipment that is used to electrically lock and unlock
each door that is controlled by the access control system.
(ii) The specific type and arrangement of hardware to be used on each door is
determined based on the construction conditions at the door.
(iii) In almost all cases, the electric lock hardware is designed to control entrance into
a building or secured space. To comply with building and fire codes, the electric
lock hardware never restricts the ability to freely exit the building at any time.
5. Access control field panels:
(i) Access control field panels (also known as Intelligent Controllers) are installed in
each building where access control is to be provided.
(ii) Card readers, electric lock hardware, and other access control devices are all
connected to the access control field panels.
(iii) The access control field panels are used to process access control activity at the
building level.
(iv) The number of access control field panels to be provided in each building depends
on the number of doors to be controlled.
(v) Access control field panels are usually installed in telephone, electrical, or
communications closets.
6. Access control server computer:
(i) The access control server computer is the brain of the access control system.
(ii) The access control server computer serves as the central database and file manager
for the access control system and is responsible for recording system activity, and
distributing information to and from the access control field panels.
(iii) A single access control server computer is used to control a large number of card-
reader controlled doors.
(iv) The access control server computer is usually a standard computer which runs
special access control system application software.
(v) In most cases, the computer is dedicated for full-time use with the access control
system.
(e) Denial of service: It is an attack where the attackers attempt to prevent legitimate
users from accessing the service.
2. DREAD: DREAD was proposed for threat modelling, but due to inconsistent ratings
it was dropped by Microsoft in 2008. It is currently used by OpenStack and many
other corporations. It provides a mnemonic for risk-rating security threats using five
categories:
(a) Damage potential: Ranks the extent of damage that would occur if the vulnerability
is exploited.
(b) Reproducibility: Ranks how easy it is to reproduce the attack.
(c) Exploitability: Assigns a number to the effort required to launch the attack.
(d) Affected users: A value characterising how many people will be impacted if an
exploit becomes widely available.
(e) Discoverability: Measures how likely it is that the threat will be discovered.
3. PASTA:
(i) The Process for Attack Simulation and Threat Analysis (PASTA) is a risk-centric
methodology.
(ii) The purpose is to provide a dynamic threat identification, enumeration, and
scoring process.
(iii) Upon completion of the threat model, security subject matter experts develop a
detailed analysis of the identified threats.
(iv) Finally, appropriate security controls can be enumerated. This helps developers
to build an asset-centric mitigation strategy by analysing the attacker-centric view
of the application.
4. Trike:
(i) The focus is on using threat models as a risk management tool.
(ii) Threat models are based on a requirements model.
(iii) The requirements model establishes the stakeholder-defined acceptable level of
risk assigned to each asset class.
(iv) Analysis of the requirements model yields a threat model from which threats are
identified and assigned risk values.
(v) The completed threat model is used to build a risk model on the basis of assets,
roles, actions, and calculated risk exposure.
5. VAST:
(i) VAST is an acronym for Visual, Agile, and Simple Threat modelling.
(ii) This methodology provides actionable outputs for the unique needs of various
stakeholders like application architects and developers, cyber security personnel
etc.
(iii) It provides a unique application and infrastructure visualisation scheme such
that the creation and use of threat models do not require specific security subject
matter expertise.
Secure Architecture Principles Isolation and Least Privilege 195
6. Attack tree:
(i) Attack trees are conceptual diagrams showing how an asset, or target, might
be attacked.
(ii) These are multi-level diagrams consisting of one root node, leaves, and children
nodes.
(iii) From bottom to top, child nodes are conditions which must be satisfied to make the
direct parent node true.
(iv) An attack is considered complete when the root is satisfied. Each node may be
satisfied only by its direct child nodes.
7. Common Vulnerability Scoring System (CVSS) :
(i) It provides a way to capture the principal characteristics of a vulnerability and
produce a numerical score depicting its severity.
(ii) The score can then be translated into a qualitative representation to help
organisations properly assess and prioritise their vulnerability management
processes.
8. T-MAP:
(i) T-MAP is an approach which is used in Commercial Off the Shelf (COTS) systems
to calculate the weights of attack paths.
(ii) This model is developed by using UML class diagrams, access class diagrams,
vulnerability class diagrams, target asset class diagrams and affected value class
diagrams.
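The DREAD categories listed under point 2 above are often combined into a single risk score; a minimal sketch, assuming the common equal-weight, 0-10 rating convention (organisations may weight the categories differently):

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average of the five DREAD ratings, each on a 0-10 scale."""
    ratings = (damage, reproducibility, exploitability,
               affected_users, discoverability)
    for r in ratings:
        if not 0 <= r <= 10:
            raise ValueError("each DREAD rating must be in 0..10")
    # Equal-weight average; a higher score means a higher-priority threat.
    return sum(ratings) / len(ratings)

# A hypothetical SQL-injection threat rated by a review team
print(dread_score(8, 9, 7, 9, 6))  # → 7.8
```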
4. What is rendering? Discuss rendering engine. List some rendering engines used in web browsers.
Ans.
•• Rendering or image synthesis is the automatic process of generating a photorealistic
or non-photorealistic image from a 2D or 3D model by means of computer programs.
Also, the result of displaying such a model is called a render.
•• The term rendering engine is often used interchangeably with browser engine. It is
responsible for the layout of our website on our audience’s screen.
•• A rendering engine is responsible for the paint and animations used on our website.
•• It creates the visuals on the screen or brightens the pixels exactly how they are meant
to be to give the feel of the website like how it was made to be.
•• Steps for what happens when we surf the web:
1. We type a URL into the address bar of our preferred browser.
2. The browser parses the URL to find the protocol, host, port, and path. It forms an
HTTP request.
3. To reach the host, it first needs to translate the human readable host into an IP
number, and it does this by doing a DNS lookup on the host.
4. Then a socket needs to be opened from the user’s computer to that IP number,
on the port specified (most often port 80).
5. When a connection is open, the HTTP request is sent to the host.
196 Computer System Security
6. The host forwards the request to the server software configured to listen on the
specified port.
7. The server inspects the request and launches the server plugin needed to handle
the request.
8. The plugin gets access to the full request, and starts to prepare an HTTP response.
9. The plugin combines that data with some meta data and sends the HTTP response
back to the browser.
10. The browser receives the response, and parses the HTML in the response. A DOM
tree is built out of the broken HTML.
11. New requests are made to the server for each new resource that is found in the
HTML source (typically images, style sheets, and JavaScript files).
12. Stylesheets are parsed, and the rendering information in each gets attached to
the matching node in the DOM tree.
13. JavaScript is parsed and executed, and DOM nodes are moved and style
information is updated accordingly.
14. The browser renders the page on the screen according to the DOM tree and the
style information for each node.
15. We see the page on the screen.
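Steps 2 and 5 above can be sketched in Python (a simplified illustration; `example.com` is only a placeholder host, and real browsers do far more):

```python
from urllib.parse import urlsplit

def build_request(url):
    """Parse a URL into protocol, host, port and path, then form the
    HTTP request line and headers a browser would send (steps 2 and 5)."""
    parts = urlsplit(url)
    host = parts.hostname
    # Default ports: 443 for https, 80 otherwise (step 4 mentions port 80)
    port = parts.port or (443 if parts.scheme == "https" else 80)
    path = parts.path or "/"
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n\r\n"
    )
    return parts.scheme, host, port, request

scheme, host, port, req = build_request("http://example.com/index.html")
print(scheme, host, port)   # → http example.com 80
print(req.splitlines()[0])  # → GET /index.html HTTP/1.1
```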
List of rendering engines produced by major web browser vendors:
1. Blink: It is used in Google Chrome, and Opera browsers.
2. WebKit: It is used in Safari browsers.
3. Gecko: It is used in Mozilla Firefox browsers.
4. Trident: It is used in Internet Explorer browsers.
5. EdgeHTML: It is used in Edge browsers.
6. Presto: Legacy rendering engine for Opera.
5. Describe cookies and frame busting?
Ans. Cookies:
•• These are small text files that the web browser stores on the computer.
•• The first time we visit a page on the internet, a new cookie is created, which collects
the information that can be accessed by the website operator.
•• However, some browsers store all cookies in a single file.
•• The information in this text file is in turn subdivided into attributes that are included
individually.
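The individual attributes of such a cookie can be inspected with Python's standard library (a minimal sketch; the cookie string shown is hypothetical):

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie style string into its individual attributes,
# mirroring how the stored text file subdivides the cookie.
cookie = SimpleCookie()
cookie.load("session=abc123; Path=/; Max-Age=3600; HttpOnly; Secure")

morsel = cookie["session"]
print(morsel.value)          # → abc123
print(morsel["path"])        # → /
print(morsel["max-age"])     # → 3600
```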
Frame busting:
•• Frame busting refers to code or annotation provided by a web page intended to
prevent the web page from being loaded in a sub-frame.
•• Frame busting is the recommended defense against click-jacking and is also required
to secure image-based authentication such as the sign-in seal used by Yahoo.
•• Sign-in seal displays a user-selected image that authenticates the Yahoo login page
to the user.
•• Without frame busting, the correct image is displayed to the user, even though the
top page is not the real Yahoo login page.
•• New advancements in click jacking techniques using drag and drop to extract and
inject data into frames makes frame busting even more critical.
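Alongside script-based frame busting, servers commonly also send anti-framing response headers that tell the browser not to render the page inside another site's frame; a minimal sketch (the helper function name is illustrative, not from any framework):

```python
def add_antiframing_headers(headers):
    # X-Frame-Options and the CSP frame-ancestors directive instruct
    # the browser to refuse to load this response in a sub-frame,
    # complementing the in-page frame-busting script.
    headers = dict(headers)
    headers["X-Frame-Options"] = "DENY"
    headers["Content-Security-Policy"] = "frame-ancestors 'none'"
    return headers

resp = add_antiframing_headers({"Content-Type": "text/html"})
print(resp["X-Frame-Options"])  # → DENY
```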
6. Explain web server threats in detail?
Ans. Major web server threats are:
1. Injection flaws:
(a) Injection flaws, such as SQL or OS command injection, occur when untrusted data
is sent to an interpreter as part of a command or query.
(b) The attacker’s hostile data can trick the interpreter into executing unintended
commands or accessing data without proper authorisation.
2. Sensitive data exposure:
(a) Many web applications and APIs do not properly protect sensitive data such as
financial or healthcare information.
(b) Attackers may steal or modify such weakly protected data to conduct credit card
fraud, identity theft, or other crimes.
(c) Sensitive data may be compromised without extra protection, such as encryption
at rest or in transit, and requires special precautions when exchanged with the
browser.
3. XML external entities:
(a) Many older or poorly configured XML processors evaluate external entity references
within XML documents.
(b) External entities can be used to disclose internal files using the file URI handler,
internal file shares, internal port scanning, remote code execution, and denial-
of-service attacks.
4. Broken access control:
(a) Restrictions on what authenticated users are allowed to do are often not properly
enforced.
(b) Attackers can exploit these flaws to access unauthorised functionality and/or data,
such as access other users’ accounts, view sensitive files, modify other users’ data,
change access rights, etc.
5. Cross-Site Scripting (XSS):
(a) Injects malicious code from a trusted source to execute scripts in the victim’s
browser that can hijack user sessions or redirect the user to malicious sites.
(b) Cross-site scripting is a common vector that inserts malicious code into a web
application found to be vulnerable.
(c) Unlike other web attack types, such as SQL injection, its objective is not our web
application. Rather, it targets its users, resulting in harm to our clients and the
reputation of our organisation.
6. Reflected XSS:
(a) Reflected XSS uses a malicious script to reflect traffic to a visitor’s browser from
a web application.
(b) Initiated via a link, a request is directed to a vulnerable website.
(c) The web application is then manipulated to activate harmful scripts.
7. Cross-Site Request Forgery (CSRF):
(a) Also known as XSRF, Sea Surf, or session riding, cross-site request forgery
deceives the user’s browser (logged into our application) into running an
unauthorised action.
(b) A CSRF can transfer funds in an unauthorised manner and change passwords, in
addition to stealing session cookies and business data.
8. Man in the Middle Attack (MITM):
(a) A man in the middle attack can occur when a bad actor positions himself between
an application and an unsuspecting user.
(b) MITM can be used for eavesdropping or impersonation.
(c) Meanwhile, account credentials, credit card numbers, and other personal
information can easily be harvested by the attacker.
9. Phishing attack:
(a) Phishing can be set up to steal user data, such as credit card and login information.
(b) The perpetrator, posing as a trustworthy entity, fools their prey into opening an
email, text message, or instant message.
(c) The victim is then lured into clicking a link that hides a payload.
(d) Such an action can cause malware to be covertly installed.
(e) It is also possible for ransomware to freeze the user’s PC, or for sensitive data to
be exposed.
10. Remote File inclusion (RFI):
(a) Remote File Inclusion (RFI) exploits weaknesses in those web applications that
dynamically call external scripts.
(b) Taking advantage of that function, an RFI attack uploads malware and takes over
the system.
11. Using components with known vulnerabilities: It occurs when attackers are able
to take control of and exploit vulnerable libraries, frameworks, and other modules
running with full privileges.
12. Insufficient logging and monitoring:
(a) Insufficient logging and monitoring allows attackers to attack systems, maintain
persistence, pivot to more systems, and tamper with, extract, or destroy data.
13. Backdoor attack:
(a) Being a form of malware, a backdoor circumvents login authentication to enter
a system.
(b) Many organisations offer employees and partners remote access to application
resources, including file servers and databases.
(c) This enables bad actors to trigger system commands in the compromised system
and keep their malware updated.
(d) The attacker’s files are usually heavily cloaked, making detection problematic.
7. Write short note on cross-site scripting (XSS).
Ans.
•• Cross-site scripting (XSS) is a vulnerability in a web application that allows a third party
to execute a script in the user’s browser on behalf of the web application.
•• Cross-site scripting is one of the most prevalent vulnerabilities present on the web.
•• The exploitation of XSS against a user can lead to various consequences such as
account compromise, account deletion, privilege escalation, malware infection and
many more.
•• It allows an attacker to masquerade as a victim user, to carry out any actions that the
user is able to perform and to access any of the user’s data.
•• If the victim user has privileged access within the application, then the attacker might
be able to gain full control over all of the application’s functionality and data.
8. Explain protection methods used for CSRF.
Ans. The protection methods used for CSRF are:
1. Anti CSRF Token:
(a) This is a cryptographically strong string that is submitted to the website separately
from cookies.
(b) This can be sent as a request parameter or as an HTTP header.
(c) The server checks for the presence and correctness of this token when a request
is made and proceeds only if the token is correct and the cookies are valid.
2. HTTP PUT method:
(a) The PUT method is used to create instances of a resource on the server.
(b) It is similar to POST except that sending the same PUT requests multiple times
does not do anything extra.
(c) If the server is using the PUT method for sensitive actions, then there is no need for
any additional CSRF protection (unless Cross-Origin Resource Sharing is enabled)
at that endpoint.
(d) It is because the PUT request cannot be duplicated through a web page like POST
request (HTTP forms do not allow PUT requests).
3. HTTP bearer authentication:
(a) This is a type of HTTP authentication where the user is identified through a token
that is submitted in the Authorization header of each request.
(b) This mechanism solves CSRF because unlike cookies it is not submitted by the
browser automatically.
(c) There are problems and potential bypasses to each of these methods.
(d) Anti CSRF tokens do not have a fixed standard so their generation mechanism
and use depends solely on how developers intended it to be.
(e) Due to this lack of a standard, a lot of implementation-specific loopholes exist in
web applications.
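The anti-CSRF token scheme in point 1 can be sketched with Python's standard library (a simplified illustration; real frameworks add per-request nonces, expiry, and session management):

```python
import hmac
import secrets

# Server-side secret; must never be sent to the client.
SECRET_KEY = secrets.token_bytes(32)

def issue_token(session_id):
    # Bind the token to the session so a token captured from one user
    # cannot be replayed against another user's session.
    return hmac.new(SECRET_KEY, session_id.encode(), "sha256").hexdigest()

def verify_token(session_id, submitted):
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(issue_token(session_id), submitted)

token = issue_token("session-42")
print(verify_token("session-42", token))     # → True
print(verify_token("session-42", "forged"))  # → False
```

The server embeds the token in each form or header, and rejects any state-changing request whose token fails verification, even if the cookies are valid.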
9. Explain different ways used to prevent XSS.
Ans. Different ways used to prevent XSS are:
1. Escaping:
(a) The first method used to prevent XSS vulnerabilities from appearing in our
applications is by escaping user input.
(b) Escaping data means taking the data an application has received and ensuring it
is secure before rendering it for the end user.
(c) By escaping user input, key characters in the data received by a web page will
be prevented from being interpreted in any malicious way.
(d) In essence, we are censoring the data our web page receives in a way that will
disallow certain characters, especially the < and > characters, from being rendered,
which otherwise could cause harm to the application and/or users.
2. Validating input:
(a) Validating input is the process of ensuring an application is rendering the correct
data and preventing malicious data from doing harm to the site, database, and
users.
(b) While whitelisting and input validation are more commonly associated with SQL
injection, they can also be used as an additional method of prevention for XSS.
(c) Whereas blacklisting, or disallowing certain, predetermined characters in user
input, disallows only known bad characters, whitelisting only allows known good
characters and is a better method for preventing XSS attacks as well as others.
(d) Input validation is especially helpful at preventing XSS in forms, as it
prevents a user from adding special characters into the fields, instead refusing the
request.
3. Sanitizing:
(a) A third way to prevent cross-site scripting attacks is to sanitise user input.
(b) Sanitising data is a strong defense, but should not be used alone to battle XSS
attacks.
(c) Sanitising user input is especially helpful on sites that allow HTML markup, to
ensure data received can do no harm to users as well as our database by scrubbing
the data clean of potentially harmful markup, changing unacceptable user input
to an acceptable format.
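Escaping and whitelist validation can be sketched with Python's standard library (the username rule below is a hypothetical whitelist, chosen for illustration):

```python
import html
import re

def escape_output(user_input):
    # Escaping: & < > " ' become HTML entities, so the browser renders
    # them as plain text instead of interpreting them as markup.
    return html.escape(user_input)

# Whitelist: only letters, digits and underscore, up to 32 characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def validate_username(value):
    # Whitelisting accepts only known-good characters, instead of
    # trying to blacklist every known-bad one.
    return bool(USERNAME_RE.match(value))

print(escape_output('<script>alert("xss")</script>'))
# → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
print(validate_username("alice_01"))           # → True
print(validate_username("<img onerror=...>"))  # → False
```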
10. Describe XSS vulnerabilities.
Ans. Following are XSS vulnerabilities:
1. Stored XSS vulnerabilities:
(a) Stored attacks are those where the injected script is permanently stored on the
target servers, such as in a database, in a message forum, visitor log, comment
field, etc.
(b) The victim then retrieves the malicious script from the server when it requests the
stored information. Stored XSS is also referred to as Persistent or Type-I XSS.
(b) Good practice is to ensure that access privileges (and changes) are approved by
a sufficiently senior director or manager.
(c) Finally, access privileges should be reviewed regularly and amended as part of a
process of security governance.
2. Poor password management:
(a) Poor password management is one of the most common mistakes when it comes
to access control.
(b) When there are a lot of different systems that require a password to access then
it is not uncommon for employees and even business owners to use the same
password across the board.
(c) Even when employees are required to change their password regularly though,
there is still the problem of using passwords that are weak and easy to crack.
(d) It is logical why people would do this since remembering multiple passwords can
often be impractical.
3. Poor user education:
(a) One of the most important aspects of improving the security of company data is
educating employees about risk.
(b) Employees could easily be doing things that are putting our data at risk.
(c) Human error is always one of the biggest security risks for a company, so we should
be aware of this and take steps to educate our employees, including risk-training
programmes.
14. Explain tools used for threat modelling?
Ans. Tools used for threat modelling:
1. Microsoft’s threat modelling tool: This tool identifies threats based on STRIDE
threat classification scheme and it is based on Data Flow Diagram (DFD).
2. MyAppSecurity:
(a) It offers the first commercially available threat modelling tool, ThreatModeler.
(b) It uses VAST threat classification scheme and it is based on Process Flow Diagram
(PFD).
3. IriusRisk:
(a) It offers both a community and a commercial version of the tool.
(b) This tool is primarily used to create and maintain a live threat model throughout
the entire SDLC.
(c) It connects with several other tools like OWASP ZAP, BDD-Security etc.,
to facilitate automation, and involves fully customisable questionnaires and risk
pattern libraries.
4. securiCAD:
(a) It is a threat modelling and risk management tool.
(b) Risks are identified and quantified by conducting automated attack simulations on
current and future IT architectures, and the tool provides decision support based
on the findings.
(c) securiCAD is offered in both commercial and community editions.
4 Basic Cryptography
(ii) Traffic Analysis: Traffic analysis is the process of intercepting and examining
messages in order to deduce information from patterns in communication. In a traffic
analysis attack, the hacker or opponent tries to access the same network to listen
or capture all network traffic between authentic users. From there, the attacker can
analyze that traffic to learn something about the authentic users.
Suppose that we had a way of masking the contents of messages or other information
traffic so that opponents, even if they captured the message, could not extract the
information from the message. The common technique for masking contents is
encryption. If we had encryption protection in place, an opponent might still be able
to observe the pattern of these messages. The opponent could determine the location
and identity of communicating hosts and could observe the frequency and length of
messages being exchanged. This information might be useful in guessing the nature
of the communication that was taking place.
(ii) Replay: A replay attack involves the passive capture of a data unit and its subsequent
retransmission to produce an unauthorised effect.
(iv) Denial of Service: It prevents or inhibits the normal use or management of
communication facilities. This attack may have a specific target. For example, an
entity may suppress all messages directed to a particular destination. Another form of
service denial is the disruption of an entire network, either by disabling the network
or by overloading it with messages so as to degrade performance.
(Figure: Security services include Confidentiality, Integrity, Authentication (message and entity), and Nonrepudiation.)
1. Authentication
Message authentication ensures the receiver about the sender’s identity. It assures that the
communicating entity is the one that it claims to be.
Two specific authentication services are defined in X.800:
(i) Peer entity authentication: Provides for the corroboration of the identity of a
peer entity in an association. It is provided for use at the establishment of, or at times
during the data transfer phase of, a connection. It attempts to provide confidence
that an entity is not performing either a masquerade or an unauthorised replay of a
previous connection.
(ii) Data origin authentication: Provides for the corroboration of the source of a data
unit. It does not provide protection against the duplication or modification of data
units. This type of service supports applications like electronic mail where there are
no prior interactions between the communicating entities.
2. Access Control
In the context of network security, access control is the ability to limit and control the access
to host systems and applications via communications links. To achieve this, each entity trying
to gain access must first be identified, or authenticated, so that access rights can be tailored
to the individual.
This is the prevention of unauthorised use of a resource (i.e., this service controls who can
have access to a resource, under what conditions access can occur and what those accessing
the resource are allowed to do).
3. Data Confidentiality
It is the protection of data from unauthorised disclosure. Confidentiality is the protection of
transmitted data from passive attacks. With respect to the content of a data transmission, several
levels of protection can be identified. The broadest service protects all user data transmitted
between two users over a period of time.
For example, when a TCP connection is set up between two systems, this broad protection
prevents the release of any user data transmitted over the TCP connection. Narrower forms of
this service can also be defined, including the protection of a single message or even specific
fields within a message. These refinements are less useful than the broad approach and may
even be more complex and expensive to implement.
The other aspect of confidentiality is the protection of traffic flow from analysis. This
requires that an attacker not be able to observe the source and destination, frequency, length,
or other characteristics of the traffic on a communications facility.
Data Confidentiality can be of following types:
(i) Connection Confidentiality: The protection of all user data on a connection.
(ii) Connectionless Confidentiality: The protection of all user data in a single data
block.
(iii) Selective-Field Confidentiality: The confidentiality of selected fields within the
user data on a connection or in a single data block.
(iv) Traffic Flow Confidentiality: The protection of the information that might be
derived from observation of traffic flows.
4. Data Integrity
Data Integrity provides the assurance that data received are exactly as sent by an authorised
entity (i.e., contain no modification, insertion, deletion, or replay).
As with confidentiality, integrity can apply to a stream of messages, a single message,
or selected fields within a message. Again, the most useful and straightforward approach is
total stream protection. A connection-oriented integrity service, one that deals with a stream of
messages, assures that messages are received as sent, with no duplication, insertion, modification,
reordering, or replays. The destruction of data is also covered under this service. Thus, the
connection-oriented integrity service addresses both message stream modification and denial
of service. On the other hand, a connectionless integrity service, one that deals with individual
messages without regard to any larger context, generally provides protection against message
modification only.
Data Integrity can be of following types:
(i) Connection Integrity with Recovery: Provides for the integrity of all user data
on a connection and detects any modification, insertion, deletion, or replay of any
data within an entire data sequence, with recovery attempted.
(ii) Connection Integrity without Recovery: As above, but provides only detection
without recovery.
(iii) Selective-Field Connection Integrity: Provides for the integrity of selected fields
within the user data of a data block transferred over a connection and takes the form
of determination of whether the selected fields have been modified, inserted, deleted,
or replayed.
(iv) Connectionless Integrity: Provides for the integrity of a single connectionless data
block and may take the form of detection of data modification. Additionally, a limited
form of replay detection may be provided.
(v) Selective-Field Connectionless Integrity: Provides for the integrity of selected
fields within a single connectionless data block; takes the form of determination of
whether the selected fields have been modified.
5. Nonrepudiation
Nonrepudiation provides protection against denial by one of the entities involved in a
communication of having participated in all or part of the communication. It prevents either
sender or receiver from denying a transmitted message. Thus, when a message is sent, the
receiver can prove that the alleged sender in fact sent the message. Similarly, when a message
is received, the sender can prove that the alleged receiver in fact received the message.
Nonrepudiation can be of following types:
(i) Nonrepudiation, Origin: Proof that the message was sent by the specified party.
(ii) Nonrepudiation, Destination: Proof that the message was received by the specified
party.
4.1.6 Security Mechanism
Network security is a field in computer technology that deals with ensuring the security of
computer network infrastructure. The network is essential for sharing information, whether at
the hardware level (printers, scanners) or at the software level. A security mechanism can
therefore be defined as a set of processes that deal with recovery from a security attack. Various
mechanisms are designed to recover from these specific attacks at various protocol layers.
The implementation of the security services is provided through security mechanisms.
These mechanisms are:
(i) Encipherment: The use of mathematical algorithms to transform data into a form
that is not readily intelligible. The transformation and subsequent recovery of the data
depend on an algorithm and one or more encryption keys.
(ii) Digital Signature: Data appended to, or a cryptographic transformation of, a data
unit that allows a recipient of the data unit to prove the source and integrity of the
data unit and protect against forgery (e.g., by the recipient).
(iii) Access Control: A variety of mechanisms that enforce access rights to resources.
(iv) Data Integrity: A variety of mechanisms used to assure the integrity of a data unit
or stream of data units.
(v) Authentication Exchange: A mechanism intended to ensure the identity of an
entity by means of information exchange.
(vi) Traffic Padding: The insertion of bits into gaps in a data stream to frustrate traffic
analysis attempts.
(vii) Routing Control: Enables selection of particular physically secure routes for certain
data and allows routing changes, especially when a breach of security is suspected.
(viii) Notarisation: The use of a trusted third party to assure certain properties of a data
exchange.
There are also some pervasive security mechanisms that are not specific to any particular
OSI security service or protocol layer, as discussed below:
(i) Trusted Functionality: That which is perceived to be correct with respect to some
criteria (e.g., as established by a security policy).
(ii) Security Label: The marking bound to a resource (which may be a data unit) that
names or designates the security attributes of that resource.
(Figure: Model for network security: a sender applies a security-related transformation to the message, the secure message passes over an information channel, and the recipient recovers the original message.)
Security aspects come into play when it is necessary or desirable to protect the information
transmission from an opponent who may present a threat to confidentiality, authenticity, and
so on. All the techniques for providing security have two components:
•• A security-related transformation on the information to be sent. Examples include
the encryption of the message, which scrambles the message so that it is unreadable
by the opponent, and the addition of a code based on the contents of the message,
which can be used to verify the identity of the sender.
•• Some secret information shared by the two principals and, it is hoped, unknown
to the opponent. An example is an encryption key used in conjunction with the
transformation to scramble the message before transmission and unscramble it on
reception.
Basic Cryptography 217
This general model shows that there are four basic tasks in designing a particular security
service:
1. Design an algorithm for performing the security-related transformation. The algorithm
should be such that an opponent cannot defeat its purpose.
2. Generate the secret information to be used with the algorithm.
3. Develop methods for the distribution and sharing of the secret information.
4. Specify a protocol to be used by the two principals that makes use of the security
algorithm and the secret information to achieve a particular security service.
4.2 CRYPTOGRAPHY
Cryptography is the study of secure communications techniques that allow only the sender
and intended recipient of a message to view its contents. The term is derived from the Greek
word kryptos, which means hidden.
Cryptographic systems are characterized along three independent dimensions:
1. The type of operations used for transforming plaintext to cipher text. All
encryption algorithms are based on two general principles as:
(i) Substitution: In which each element in the plaintext (bit, letter, group of bits or
letters) is mapped or replaced by another element,
(ii) Transposition: In which elements in the plaintext are rearranged. The
fundamental requirement is that no information be lost (that is, that all operations
are reversible).
2. The number of keys used. There are two types of methods, using either:
(i) Single Key: If both sender and receiver use the same key, the system is referred
to as symmetric, single-key, secret-key, or conventional encryption techniques.
(ii) Two Key: If the sender and receiver use different keys, the system is referred to
as asymmetric, two-key, or public-key encryption techniques.
3. The way in which the plaintext is processed. There are two ways through which
plaintext is processed as:
(i) Block Cipher: A block cipher processes the input one block of elements at a
time, producing an output block for each input block.
(ii) Stream Cipher: A stream cipher processes the input elements continuously,
producing output one element at a time, as it goes along.
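These two operations can be illustrated with a short Python sketch (a Caesar-style substitution and a simple columnar transposition; the function names and parameters are ours, for illustration only — real ciphers combine many such operations under the control of a key):

```python
def substitute(text, shift):
    """Substitution: map each letter to another (a Caesar-style shift)."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def transpose(text, cols):
    """Transposition: rearrange letters by writing rows and reading columns."""
    padded = text + ' ' * (-len(text) % cols)         # pad so no letter is lost
    rows = [padded[i:i + cols] for i in range(0, len(padded), cols)]
    return ''.join(row[c] for c in range(cols) for row in rows)

assert substitute("ATTACK", 3) == "DWWDFN"            # substitution
assert substitute("DWWDFN", -3) == "ATTACK"           # reversible, so no information is lost
assert transpose("ATTACKATDAWN", 4) == "ACDTKATAWATN" # transposition
```

Note that both operations are reversible, satisfying the fundamental requirement that no information be lost.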
4.2.1 Symmetric Cipher Model
A symmetric encryption scheme has five ingredients:
1. Plaintext: This is the original intelligible message or data that is fed into the algorithm
as input.
2. Encryption algorithm: The encryption algorithm performs various substitutions
and transformations on the plaintext.
3. Secret key: The secret key is also input to the encryption algorithm. The key is a
value independent of the plaintext and of the algorithm. The algorithm will produce
a different output depending on the specific key being used at the time. The exact
substitutions and transformations performed by the algorithm depend on the key.
4. Cipher text: This is the scrambled message produced as output. It depends on the
plaintext and the secret key. For a given message, two different keys will produce two
different cipher texts. The cipher text is an apparently random stream of data and, as
it stands, is unintelligible.
5. Decryption algorithm: This is essentially the encryption algorithm run in reverse.
It takes the cipher text and the secret key and produces the original plaintext.
Let us consider the essential elements of a symmetric encryption scheme, as shown in
Fig. 4.9 below. A source produces a message in plaintext, X = [X1, X2, ..., XM]. The M elements of
X are letters in some finite alphabet. Traditionally, the alphabet usually consisted of the 26
capital letters. Nowadays, the binary alphabet {0, 1} is typically used. For encryption, a key of
the form K = [K1, K2, ..., KJ] is generated. If the key is generated at the message source, then
it must also be provided to the destination by means of some secure channel. Alternatively, a
third party could generate the key and securely deliver it to both source and destination.
[Fig. 4.9: Model of symmetric cryptosystem — a key source delivers K to the encryption algorithm and, via a secure channel, to the destination; a cryptanalyst observing the ciphertext may attempt to produce estimates X̂ and K̂.]
With the message X and the encryption key K as input, the encryption algorithm forms the
ciphertext Y = [Y1, Y2, ..., YN]. We can write this as:
Y = E(K, X)
This notation indicates that Y is produced by using encryption algorithm E as a function
of the plaintext X, with the specific function determined by the value of the key K.
Decryption
The intended receiver, in possession of the key, is able to invert the transformation:
X = D(K, Y)
An opponent, observing Y but not having access to K or X, may attempt to recover X or K or
both X and K. It is assumed that the opponent knows the encryption (E) and decryption (D)
algorithms. If the opponent is interested in only this particular message, then the focus of the
effort is to recover X by generating a plaintext estimate X̂. Often, however, the opponent is
interested in being able to read future messages as well, in which case an attempt is made to
recover K by generating an estimate K̂.
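The notation Y = E(K, X) and X = D(K, Y) can be made concrete with a toy sketch (a repeating-key XOR stands in for a real encryption algorithm; it is NOT secure and is for illustration only):

```python
# A repeating-key XOR stands in for the encryption algorithm E and the
# decryption algorithm D (illustrative only -- this cipher is NOT secure).

def E(K, X):
    """Form the ciphertext Y = E(K, X) by XORing plaintext bytes with key bytes."""
    return bytes(x ^ K[i % len(K)] for i, x in enumerate(X))

def D(K, Y):
    """Invert the transformation: X = D(K, Y). XOR is its own inverse."""
    return bytes(y ^ K[i % len(K)] for i, y in enumerate(Y))

K = b"secret"                     # secret shared by the two principals
X = b"meet me at dawn"            # plaintext message
Y = E(K, X)                       # transmitted ciphertext
assert Y != X                     # unintelligible without K
assert D(K, Y) == X               # the intended receiver recovers X
```

An opponent who observes Y but does not know K faces exactly the cryptanalysis problem described above.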
4.2.2 Public Key Cryptography
Principles of Public-Key Cryptosystems
Asymmetric algorithms rely on one key for encryption and a different but related key for
decryption. These algorithms have the following important characteristic:
•• It is computationally infeasible to determine the decryption key given only knowledge
of the cryptographic algorithm and the encryption key.
In addition, some algorithms, such as RSA, also exhibit the following characteristic:
•• Either of the two related keys can be used for encryption, with the other used for
decryption.
A public-key encryption scheme has the following ingredients, as shown in the figure below:
1. Plaintext: This is the readable message or data that is fed into the algorithm as input.
2. Encryption algorithm: The encryption algorithm performs various transformations
on the plaintext.
3. Public and private keys: This is a pair of keys that have been selected so that if
one is used for encryption, the other is used for decryption. The exact transformations
performed by the algorithm depend on the public or private key that is provided as
input.
4. Cipher text: This is the scrambled message produced as output. It depends on
the plaintext and the key. For a given message, two different keys will produce two
different cipher texts.
5. Decryption algorithm: This algorithm accepts the cipher text and the matching
key and produces the original plaintext.
[Figure: Public-key cryptosystem — (a) confidentiality: Alice selects Bob's public key PUb from her public-key ring, the encryption algorithm (e.g., RSA) produces the transmitted ciphertext, and Bob decrypts with his private key PRb, while a cryptanalyst may attempt to estimate PRb; (b) authentication: the source encrypts with its own private key PRa, and any recipient can decrypt with the matching public key PUa.]
There are two broad components to RSA cryptography:
(i) Key Generation: Generating the keys to be used for encrypting and decrypting the
data to be exchanged.
(ii) Encryption/Decryption Function: The steps that need to be run when scrambling
and recovering the data.
4.3.1 RSA Algorithm Steps
•• RSA algorithm uses the following steps to generate public and private keys:
Step 1. Select two large prime numbers, p and q.
Step 2. Multiply these numbers to find n = p * q, where n is called the modulus for
encryption and decryption.
Step 3. Choose a number e less than n, such that e is relatively prime to (p – 1) * (q – 1).
It means that e and (p – 1) * (q – 1) have no common factor except 1. That is, choose e such that
1 < e < ϕ(n) and gcd(e, ϕ(n)) = 1, where ϕ(n) = (p – 1) * (q – 1).
Step 4. Then we calculate d as:
d = e^(–1) mod ϕ(n), i.e., (d * e) mod ϕ(n) = 1
Step 5. Public Key PU = (e, n)
Private Key PR = (d, n)
•• RSA algorithm uses the following steps for the Encryption/Decryption
Function:
Once you generate the keys, you pass the parameters to the functions that calculate
your cipher text and plaintext using the respective key.
Step 6. If the plaintext is M, cipher text C = M^e mod n.
Step 7. If the cipher text is C, plaintext P = C^d mod n.
Key Generation
Select p, q: p and q both prime, p ≠ q
Calculate n = p × q
Calculate ϕ(n) = (p – 1)(q – 1)
Select integer e: gcd(ϕ(n), e) = 1; 1 < e < ϕ(n)
Calculate d: d ≡ e^(–1) (mod ϕ(n))
Public key PU = {e, n}
Private key PR = {d, n}
Encryption
Plaintext: M < n
Ciphertext: C = M^e mod n
Decryption
Ciphertext: C
Plaintext: M = C^d mod n
Fig. 4.14 RSA Algorithm
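The steps of Fig. 4.14 can be traced in Python with deliberately tiny primes (a toy sketch only; real RSA uses primes hundreds of digits long and padded messages):

```python
from math import gcd

# Step 1: two primes (toy values; real RSA uses primes hundreds of digits long).
p, q = 61, 53
# Step 2: the modulus.
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # phi(n) = 3120
# Step 3: e relatively prime to phi(n).
e = 17
assert gcd(e, phi) == 1
# Step 4: d = e^(-1) mod phi(n).
d = pow(e, -1, phi)            # 2753, since 17 * 2753 mod 3120 = 1
# Step 5: public key (e, n) = (17, 3233), private key (d, n) = (2753, 3233).

# Steps 6-7: encryption and decryption.
M = 65                         # plaintext, M < n
C = pow(M, e, n)               # C = M^e mod n = 2790
P = pow(C, d, n)               # P = C^d mod n
assert P == M                  # the original plaintext is recovered
```

Python's built-in three-argument pow performs modular exponentiation efficiently, and pow(e, -1, phi) computes the modular inverse of Step 4 directly (Python 3.8+).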
Thus, the digital signature function includes the authentication function. On the basis of
these properties, we can formulate the following requirements for a digital signature:
•• The signature must be a bit pattern that depends on the message being signed.
•• The signature must use some information unique to the sender, to prevent both
forgery and denial.
•• It must be relatively easy to produce the digital signature.
•• It must be relatively easy to recognize and verify the digital signature.
•• It must be computationally infeasible to forge a digital signature, either by constructing
a new message for an existing digital signature or by constructing a fraudulent digital
signature for a given message.
•• It must be practical to retain a copy of the digital signature in storage.
4.4.1 Digital Signature Standard
The National Institute of Standards and Technology (NIST) has published Federal Information
Processing Standard FIPS 186, known as the Digital Signature Standard (DSS). The DSS
makes use of the Secure Hash Algorithm (SHA) and presents a new digital signature technique,
the Digital Signature Algorithm (DSA). The DSS was originally proposed in 1991 and revised
in 1993 in response to public feedback concerning the security of the scheme. There was a
further minor revision in 1996. In 2000, an expanded version of the standard was issued as
FIPS 186-2. This latest version also incorporates digital signature algorithms based on RSA
and on elliptic curve cryptography. The DSS uses an algorithm that is designed to provide only
the digital signature function. Unlike RSA, it cannot be used for encryption or key exchange.
Nevertheless, it is a public-key technique.
The Approaches of Digital Signature
There are two approaches to Digital Signature:
(i) RSA Approach (ii) DSA Approach
RSA Approach of Digital Signature
The figure below illustrates the approach used for generating digital signatures with
RSA. In the RSA approach, the message to be signed is input to a hash function that produces
a secure hash code of fixed length. This hash code is then encrypted using the sender’s private
key to form the signature. Both the message and the signature are then transmitted. The
recipient takes the message and produces a hash code. The recipient also decrypts the signature
using the sender’s public key. If the calculated hash code matches the decrypted signature, the
signature is accepted as valid. Because only the sender knows the private key, only the sender
could have produced a valid signature.
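This sign-and-verify flow can be sketched with toy RSA parameters (p = 61, q = 53) and a real hash function; note the hash is truncated modulo the tiny n so it fits, whereas a real implementation signs the full, padded digest:

```python
import hashlib

# Toy RSA parameters (p = 61, q = 53; far too small for real use).
n, e, d = 3233, 17, 2753

def sign(message, d, n):
    """Hash the message, then encrypt the (truncated) hash with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message, signature, e, n):
    """Decrypt the signature with the public key and compare with a fresh hash."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"transfer 100 rupees to alice"
sig = sign(msg, d, n)
assert verify(msg, sig, e, n)   # calculated hash matches the decrypted signature
# An altered message hashes to a different value, so verification fails
# (with overwhelming probability here, since the hash is truncated).
```

Because only the holder of d can produce a value that decrypts to H(M) under e, a valid signature identifies the sender.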
[Figure: Two approaches to digital signatures. (a) RSA approach: the message M is hashed (H); the hash is encrypted with the sender's private key PRa to form the signature E(PRa, H(M)), transmitted alongside M; the recipient recomputes the hash and compares it with the signature decrypted using PUa. (b) DSS approach: signing uses a per-message random number k:
s = f1(H(M), k, x, r, q) = (k^(–1)(H(M) + xr)) mod q
r = f2(k, p, q, g) = (g^k mod p) mod q
and verifying computes:
w = f3(s, q) = s^(–1) mod q
v = f4(y, q, g, H(M), w, r) = ((g^(H(M)w) y^(rw)) mod p) mod q
with the signature (r, s) accepted if v = r.]
[Figure: Basic uses of message encryption — (a) symmetric encryption E(K, M) between source A and destination B provides confidentiality and authentication; (b) public-key encryption E(PUb, M) provides confidentiality only; further combinations of the two keys add authentication and signature.]
secret key, the attacker cannot alter the MAC to correspond to the alterations in the
message.
2. The receiver is assured that the message is from the alleged sender. Because no one
else knows the secret key, no one else could prepare a message with a proper MAC.
3. If the message includes a sequence number (such as is used with HDLC, X.25, and
TCP), then the receiver can be assured of the proper sequence because an attacker
cannot successfully alter the sequence number.
A MAC function is similar to encryption. One difference is that the MAC algorithm need
not be reversible, as it must for decryption. In general, the MAC function is a many-to-one
function. The domain of the function consists of messages of some arbitrary length, whereas
the range consists of all possible MACs and all possible keys. If an n-bit MAC is used, then there
are 2^n possible MACs, whereas there are N possible messages with N >> 2^n. Furthermore,
with a k-bit key, there are 2^k possible keys.
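A widely used MAC function is HMAC, available in Python's standard library; the sketch below shows the behaviour described above (the key and messages are ours, for illustration):

```python
import hashlib
import hmac

key = b"shared-secret-key"          # known only to sender and receiver
msg = b"pay 100 to alice"

# Sender computes the MAC and appends it to the message.
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC over the received message with the same key.
expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)   # message accepted as authentic

# An attacker who alters the message cannot fix up the MAC without the
# key: a different message gives a different MAC.
altered = hmac.new(key, b"pay 999 to mallory", hashlib.sha256).hexdigest()
assert tag != altered
```

The constant-time comparison hmac.compare_digest is used instead of == to avoid leaking information through timing.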
[Figure: Basic uses of message authentication code — (a) message authentication: the MAC C(K, M) is appended to M and recomputed at the destination for comparison; (b) message authentication and confidentiality, authentication tied to plaintext: E(K2, [M || C(K1, M)]); (c) message authentication and confidentiality, authentication tied to ciphertext: C(K1, E(K2, M)).]
[Figure: Basic uses of hash function — (a) E(K, [M || H(M)]); (b) M || E(K, H(M)); (c) M || E(PRa, H(M)); (d) E(K, [M || E(PRa, H(M))]); (e) M || H(M || S), where S is a secret value shared by sender and receiver; (f) E(K, [M || H(M || S)]). In each case the receiver recomputes the hash and compares.]
SHA-512 Logic
The algorithm takes as input a message with a maximum length of less than 2^128 bits and
produces as output a 512-bit message digest. The input is processed in 1024-bit blocks.
Fig. 4.21 below depicts the overall processing of a message to produce a digest.
[Fig. 4.21: Message digest generation using SHA-512 — the L-bit message is padded (100...0) and its 128-bit length appended to give N × 1024 bits; each 1024-bit block passes through the compression function F, chained from the 512-bit IV (H0) through H1, H2, ..., to HN, the hash code; + denotes word-by-word addition mod 2^64.]
e = 510E527FADE682D1
f = 9B05688C2B3E6C1F
g = 1F83D9ABFB41BD6B
h = 5BE0CD19137E2179
These values are stored in big-endian format, which is the most significant byte of a
word in the low-address (leftmost) byte position. These words were obtained by taking
the first sixty-four bits of the fractional parts of the square roots of the first eight prime
numbers.
•• Step 4: Process message in 1024-bit (128-word) blocks. The heart of the algorithm
is a module that consists of 80 rounds; this module is labeled F in Fig. 4.21. The logic
is illustrated in Fig. 4.22 below.
[Fig. 4.22: SHA-512 processing of a single 1024-bit block — the block Mi and the chaining value Hi–1 initialise the eight 64-bit registers a, b, c, d, e, f, g, h; a message schedule supplies a word Wt and an additive constant Kt to each of the 80 rounds (round 0 through round 79); the final register values are added word-by-word to Hi–1 to produce Hi.]
•• Step 5: Output. After all N 1024-bit blocks have been processed, the output from
the Nth stage is the 512-bit message digest.
We can summarise the behavior of SHA-512 as follows:
H0 = IV
Hi = SUM64(Hi–1, abcdefgh_i)
MD = HN
where
IV = initial value of the abcdefgh buffer, defined in step 3
abcdefgh_i = the output of the last round of processing of the ith message block
N = the number of blocks in the message (including padding and length fields)
SUM64 = addition modulo 2^64 performed separately on each word of the pair of inputs
MD = final message digest value
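The complete algorithm summarised above is available as hashlib.sha512 in Python's standard library; a quick check of the digest length, a published test vector, and the avalanche behaviour:

```python
import hashlib

# 512-bit (64-byte) digest from an arbitrary-length input.
md = hashlib.sha512(b"abc").digest()
assert len(md) == 64

# Known FIPS 180 test vector for SHA-512("abc") (leading bytes shown).
assert hashlib.sha512(b"abc").hexdigest().startswith("ddaf35a193617aba")

# A one-character change produces a completely different digest.
assert hashlib.sha512(b"abd").digest() != md
```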
SHA-512 Round Function
Let us look in more detail at the logic in each of the 80 steps of the processing of one 1024-bit
block (Fig. 4.22). Each round is defined by the following set of equations:
T1 = h + Ch(e, f, g) + Σ1^512(e) + Wt + Kt
T2 = Σ0^512(a) + Maj(a, b, c)
a = T1 + T2
b = a
c = b
d = c
e = d + T1
f = e
g = f
h = g
where
t = step number; 0 ≤ t ≤ 79
Ch(e, f, g) = (e AND f) ⊕ (NOT e AND g), the conditional function: if e then f else g
Maj(a, b, c) = (a AND b) ⊕ (a AND c) ⊕ (b AND c), true only if the majority (two or
three) of the arguments are true
Σ0^512(a) = ROTR^28(a) ⊕ ROTR^34(a) ⊕ ROTR^39(a)
Σ1^512(e) = ROTR^14(e) ⊕ ROTR^18(e) ⊕ ROTR^41(e)
[Fig. 4.23: Elementary SHA-512 operation (single round) — the 512 bits of registers a–h feed the Maj and Ch functions; T1 and T2 are formed by modulo-2^64 additions involving Wt and Kt, after which the registers are shifted for the next round.]
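The round equations above translate directly into code; below is a sketch of the 64-bit word primitives and one round (the names follow the equations; this is an illustration, not an optimised implementation):

```python
MASK = (1 << 64) - 1  # all arithmetic is on 64-bit words

def rotr(x, n):
    """Circular right rotation of a 64-bit word by n bits."""
    return ((x >> n) | (x << (64 - n))) & MASK

def ch(e, f, g):
    """Conditional function: if e then f else g, bit by bit."""
    return (e & f) ^ ((~e & MASK) & g)

def maj(a, b, c):
    """True wherever at least two of the three inputs are true."""
    return (a & b) ^ (a & c) ^ (b & c)

def big_sigma0(a):
    return rotr(a, 28) ^ rotr(a, 34) ^ rotr(a, 39)

def big_sigma1(e):
    return rotr(e, 14) ^ rotr(e, 18) ^ rotr(e, 41)

def sha512_round(state, wt, kt):
    """One of the 80 rounds: compute T1, T2 and shift the registers."""
    a, b, c, d, e, f, g, h = state
    t1 = (h + ch(e, f, g) + big_sigma1(e) + wt + kt) & MASK
    t2 = (big_sigma0(a) + maj(a, b, c)) & MASK
    return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)
```

Masking with MASK after every addition keeps the values within 64 bits, matching the modulo-2^64 arithmetic of the standard.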
It remains to indicate how the 64-bit word values Wt are derived from the 1024-bit
message. Fig. 4.24 illustrates the mapping. The first 16 values of Wt are taken directly from
the 16 words of the current block. The remaining values are defined as follows:
Wt = σ1^512(Wt–2) + Wt–7 + σ0^512(Wt–15) + Wt–16
where
σ0^512(x) = ROTR^1(x) ⊕ ROTR^8(x) ⊕ SHR^7(x)
σ1^512(x) = ROTR^19(x) ⊕ ROTR^61(x) ⊕ SHR^6(x)
ROTR^n(x) = circular right shift (rotation) of the 64-bit argument x by n bits
SHR^n(x) = right shift of the 64-bit argument x by n bits with padding by
zeros on the left
[Figure: the 1024-bit block Mi supplies the 64-bit words W0–W15 directly; each later word Wt is formed from Wt–16, Wt–15, Wt–7, and Wt–2.]
Fig. 4.24 Creation of 80-word Input Sequence for SHA-512 Processing of Single Block
Thus, in the first 16 steps of processing, the value of Wt is equal to the corresponding word
in the message block. For the remaining 64 steps, the value of Wt is a function of four of the
preceding values of Wt, with two of those values subjected to shift and rotate operations.
This introduces a great deal of redundancy and interdependence
into the message blocks that are compressed, which complicates the task of finding a different
message block that maps to the same compression function output.
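The word expansion just described can be sketched as follows (the function names are ours):

```python
MASK = (1 << 64) - 1  # all values are 64-bit words

def rotr(x, n):
    """Circular right shift (rotation) of a 64-bit word by n bits."""
    return ((x >> n) | (x << (64 - n))) & MASK

def sigma0(x):
    return rotr(x, 1) ^ rotr(x, 8) ^ (x >> 7)

def sigma1(x):
    return rotr(x, 19) ^ rotr(x, 61) ^ (x >> 6)

def expand(block_words):
    """Expand the 16 words of a 1024-bit block into the 80-word schedule W0..W79."""
    w = list(block_words)
    for t in range(16, 80):
        w.append((sigma1(w[t - 2]) + w[t - 7]
                  + sigma0(w[t - 15]) + w[t - 16]) & MASK)
    return w

w = expand([0] * 16)
assert len(w) == 80 and all(word == 0 for word in w)   # an all-zero block stays zero
```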
[Figures: Uncontrolled public-key distribution — participants A and B simply broadcast their public keys PUa and PUb to all parties; Public-key publication — participants instead register their keys with a maintained public-key directory.]
This scheme is clearly more secure than individual public announcements but still has
vulnerabilities. If an adversary succeeds in obtaining or computing the private key of
the directory authority, the adversary could authoritatively pass out counterfeit public
keys and subsequently impersonate any participant and eavesdrop on messages
sent to any participant. Another way to achieve the same end is for the adversary to
tamper with the records kept by the authority.
3. Public-Key Authority: Stronger security for public-key distribution can be achieved
by providing tighter control over the distribution of public keys from the directory. It is similar
to the directory but improves security by tightening control over the distribution of keys from
the directory. It requires users to know the public key for the directory. Whenever the keys are
needed, real-time access to the directory is made by the user to obtain any desired public key
securely.
As before, the scenario assumes that a central authority maintains a dynamic directory
of public keys of all participants. In addition, each participant reliably knows a public key for
the authority, with only the authority knowing the corresponding private key. The following
steps occur:
[Figure: Public-key distribution scenario — initiator A and responder B each exchange messages with the public-key authority and then with each other.]
Thus, a total of seven messages are required. However, the initial four messages need
be used only infrequently because both A and B can save the other's public key for future use,
a technique known as caching. Periodically, a user should request fresh copies of the public
keys of its correspondents to ensure currency.
[Figure: Exchange of certificates between A and B — (2) B sends its certificate CB to A.]
security protocol helps in the security and integrity of data over the internet. There are many
protocols that exist that help in the security of data over the internet such as Secure Socket
Layer (SSL), Transport Layer Security (TLS) etc.
Now, let us look at the various types of Internet Security Protocols:
1. SSL Protocol:
•• SSL Protocol stands for Secure Sockets Layer protocol, which is an encryption-
based Internet security protocol that protects confidentiality and integrity of data.
•• SSL is used to ensure the privacy and authenticity of data over the internet.
•• SSL is located between the application and transport layers.
•• At first, SSL contained security flaws and was quickly replaced by the first version
of TLS; that is why SSL is considered the predecessor of modern TLS encryption.
•• A TLS/SSL-secured website has “HTTPS” in its URL rather than “HTTP”.
•• SSL is divided into three sub-protocols: the Handshake Protocol, the Record
Protocol, and the Alert Protocol.
2. TLS Protocol:
•• Same as SSL, TLS which stands for Transport Layer Security is widely used for
the privacy and security of data over the internet.
•• TLS uses a pseudo-random algorithm to generate the master secret which is a key
used for the encryption between the protocol client and protocol server.
•• TLS is basically used for encrypting communication between online servers like
a web browser loading a web page in the online server.
•• TLS also has three sub-protocols the same as SSL protocol – Handshake Protocol,
Record Protocol, and Alert Protocol.
3. Secure Hypertext Transfer Protocol (SHTTP):
•• SHTTP stands for Secure Hypertext Transfer Protocol, which is a collection of
security measures (such as establishing strong passwords, setting up a firewall,
and using antivirus protection) designed to secure internet communication.
•• SHTTP covers data-entry forms that are used to input data that is then collected
into a database, as well as internet-based transactions.
•• SHTTP’s services are quite comparable to those of the SSL protocol.
•• Secure Hypertext Transfer Protocol works at the application layer (that defines
the shared communications protocols and interface methods used by hosts in a
network) and is thus closely linked with HTTP.
•• SHTTP can authenticate and encrypt HTTP traffic between the client and the
server.
•• SHTTP operates on a message-by-message basis. It can encrypt and sign individual
messages.
4. Set Protocol:
•• Secure Electronic Transaction (SET) is a method that assures the security and
integrity of electronic transactions made using credit cards.
S/MIME Functionality
In terms of general functionality, S/MIME is very similar to PGP. Both offer the ability to sign
and/or encrypt messages. In this subsection, we briefly summarise S/MIME capability. We then
look in more detail at this capability by examining message formats and message preparation.
Functions
S/MIME provides the following functions:
•• Enveloped data: This consists of encrypted content of any type and encrypted-
content encryption keys for one or more recipients.
•• Signed data: A digital signature is formed by taking the message digest of the content
to be signed and then encrypting that with the private key of the signer. The content
plus signature are then encoded using base64 encoding. A signed data message can
only be viewed by a recipient with S/MIME capability.
•• Clear-signed data: As with signed data, a digital signature of the content is formed.
However, in this case, only the digital signature is encoded using base64. As a result,
recipients without S/MIME capability can view the message content, although they
cannot verify the signature.
•• Signed and enveloped data: Signed-only and encrypted-only entities may be
nested, so that encrypted data may be signed and signed data or clear-signed data
may be encrypted.
S/MIME Certificate Processing
S/MIME uses public-key certificates that conform to version 3 of X.509. The key-management
scheme used by S/MIME is in some ways a hybrid between a strict X.509 certification hierarchy
and PGP’s web of trust. As with the PGP model, S/MIME managers and/or users must configure
each client with a list of trusted keys and with certificate revocation lists. That is, the responsibility
is local for maintaining the certificates needed to verify incoming signatures and to encrypt
outgoing messages. On the other hand, the certificates are signed by certification authorities.
User Agent Role
An S/MIME user has several key-management functions to perform:
•• Key generation: The user of some related administrative utility (e.g., one associated
with LAN management) MUST be capable of generating separate Diffie-Hellman and
DSS key pairs and SHOULD be capable of generating RSA key pairs. Each key pair
MUST be generated from a good source of nondeterministic random input and be
protected in a secure fashion. A user agent SHOULD generate RSA key pairs with
a length in the range of 768 to 1024 bits and MUST NOT generate a length of less
than 512 bits.
•• Registration: A user’s public key must be registered with a certification authority in
order to receive an X.509 public-key certificate.
•• Certificate storage and retrieval: A user requires access to a local list of certificates
in order to verify incoming signatures and to encrypt outgoing messages. Such a list
could be maintained by the user or by some local administrative entity on behalf of
a number of users.
VeriSign Certificates
There are several companies that provide certification authority (CA) services. For example,
Nortel has designed an enterprise CA solution and can provide S/MIME support within an
organisation. There are a number of Internet-based CAs, including VeriSign, GTE, and the U.S.
Postal Service. Of these, the most widely used is the VeriSign CA service, a brief description
of which we now provide.
VeriSign provides a CA service that is intended to be compatible with S/MIME and a
variety of other applications. VeriSign issues X.509 certificates with the product name VeriSign
Digital ID. As of early 1998, over 35,000 commercial Web sites were using VeriSign Server
Digital IDs, and over a million consumer Digital IDs had been issued to users of Netscape and
Microsoft browsers.
The information contained in a Digital ID depends on the type of Digital ID and its use.
At a minimum, each Digital ID contains:
•• Owner’s public key
•• Owner’s name or alias
•• Expiration date of the Digital ID
•• Serial number of the Digital ID
•• Name of the certification authority that issued the Digital ID
•• Digital signature of the certification authority that issued the Digital ID
Digital IDs can also contain other user-supplied information, including:
•• Address
•• E-mail address
•• Basic registration information (country, zip code, age, and gender)
VeriSign provides three levels, or classes, of security for public-key certificates, as
summarised in Table 4.3 below. A user requests a certificate online at VeriSign’s Web site or
other participating Web sites. Class 1 and Class 2 requests are processed online, and in most
cases take only a few seconds to approve. Briefly, the following procedures are used:
•• For Class 1 Digital IDs, VeriSign confirms the user’s e-mail address by sending a PIN
and Digital ID pick-up information to the e-mail address provided in the application.
•• For Class 2 Digital IDs, VeriSign verifies the information in the application through
an automated comparison with a consumer database in addition to performing all
of the checking associated with a Class 1 Digital ID. Finally, confirmation is sent to
the specified postal address alerting the user that a Digital ID has been issued in his
or her name.
•• For Class 3 Digital IDs, VeriSign requires a higher level of identity assurance. An
individual must prove his or her identity by providing notarised credentials or applying
in person.
Table 4.3 VeriSign Public-Key Certificate Classes
Class 1
•• Summary of Confirmation of Identity: Automated unambiguous name and e-mail address search
•• IA Private Key Protection: PCA: trustworthy hardware; CA: trustworthy software or trustworthy hardware
•• Certificate Applicant and Subscriber Private Key Protection: Encryption software (PIN protected) recommended but not required
•• Applications implemented or contemplated by Users: Web-browsing and certain e-mail usage
(Contd...)
[Figure: Public-key certificate use — an unsigned certificate containing the user ID and the user’s public key is hashed; the hash code is signed (encrypted) with the CA’s private key to produce the signed certificate; a recipient can verify the signature using the CA’s public key.]
Certificates
The heart of the X.509 scheme is the public-key certificate associated with each user. These user
certificates are assumed to be created by some trusted certification authority (CA) and placed
in the directory by the CA or by the user. The directory server itself is not responsible for the
creation of public keys or for the certification function; it merely provides an easily accessible
location for users to obtain certificates.
Below figure shows the general format of a certificate, which includes the following
elements:
•• Version: Differentiates among successive versions of the certificate format; the default
is version 1. If the Issuer Unique Identifier or Subject Unique Identifier are present,
the value must be version 2. If one or more extensions are present, the version must
be version 3.
•• Serial number: An integer value, unique within the issuing CA, that is unambiguously
associated with this certificate.
•• Signature algorithm identifier: The algorithm used to sign the certificate, together
with any associated parameters. Because this information is repeated in the Signature
field at the end of the certificate, this field has little, if any, utility.
•• Issuer name: X.500 name of the CA that created and signed this certificate.
•• Period of validity: Consists of two dates: the first and last on which the certificate is valid.
•• Subject name: The name of the user to whom this certificate refers. That is, this
certificate certifies the public key of the subject who holds the corresponding private
key.
•• Subject’s public-key information: The public key of the subject, plus an identifier of
the algorithm for which this key is to be used, together with any associated parameters.
•• Issuer unique identifier: An optional bit string field used to identify uniquely the
issuing CA in the event the X.500 name has been reused for different entities.
•• Subject unique identifier: An optional bit string field used to identify uniquely the
subject in the event the X.500 name has been reused for different entities.
•• Extensions: A set of one or more extension fields. Extensions were added in version
3 and are discussed later in this section.
•• Signature: Covers all of the other fields of the certificate; it contains the hash code of
the other fields, encrypted with the CA’s private key. This field includes the signature
algorithm identifier.
[Figure: X.509 formats — (a) the certificate (version 1/2/3), with fields: version, certificate serial number, signature algorithm identifier and parameters, issuer name, period of validity, subject name, subject’s public-key info (algorithm, parameters, key), issuer unique identifier, subject unique identifier, extensions, and the encrypted signature covering all fields; (b) the certificate revocation list, with fields: signature algorithm identifier and parameters, issuer name, this update date, next update date, one entry per revoked certificate (user certificate serial # and revocation date), and the encrypted signature.]
Below figure illustrates the digital signature service provided by PGP. The sequence is
as follows:
1. The sender creates a message.
2. SHA-1 is used to generate a 160-bit hash code of the message.
3. The hash code is encrypted with RSA using the sender’s private key, and the result
is prepended to the message.
4. The receiver uses RSA with the sender’s public key to decrypt and recover the hash
code.
5. The receiver generates a new hash code for the message and compares it with the
decrypted hash code. If the two match, the message is accepted as authentic.
[Figure: PGP authentication only — at source A the message M is hashed (H), the hash is signed with PRa (EP) to form E[PRa, H(M)], prepended to M, and the whole is compressed (Z); at destination B the data is decompressed (Z^–1), the signature is decrypted with PUa (DP), and the result is compared with a freshly computed hash.]
The combination of SHA-1 and RSA provides an effective digital signature scheme.
Because of the strength of RSA, the recipient is assured that only the possessor of the matching
private key can generate the signature. Because of the strength of SHA-1, the recipient is assured
that no one else could generate a new message that matches the hash code and, hence, the
signature of the original message.
2. Confidentiality
Another basic service provided by PGP is confidentiality, which is provided by encrypting
messages to be transmitted or to be stored locally as files. In both cases, the symmetric encryption
algorithm CAST-128 may be used. Alternatively, IDEA or 3DES may be used. The 64-bit cipher
feedback (CFB) mode is used.
As always, one must address the problem of key distribution. In PGP, each symmetric
key is used only once. That is, a new key is generated as a random 128-bit number for each
message. Thus, although this is referred to in the documentation as a session key, it is in reality
a one-time key. Because it is to be used only once, the session key is bound to the message
and transmitted with it. To protect the key, it is encrypted with the receiver’s public key. Below
figure illustrates the sequence as follows:
1. The sender generates a message and a random 128-bit number to be used as a session
key for this message only.
2. The message is encrypted, using CAST-128 (or IDEA or 3DES) with the session key.
3. The session key is encrypted with RSA, using the recipient’s public key, and is
prepended to the message.
254 Computer System Security
4. The receiver uses RSA with its private key to decrypt and recover the session key.
5. The session key is used to decrypt the message.
Figure: PGP confidentiality-only operation. Source A compresses the message (Z), encrypts it with session key Ks, encrypts Ks with the recipient's public key PUb (EP), and sends E(PUb, Ks) with the message. Destination B recovers Ks with its private key PRb (DP), decrypts the message (DC), and decompresses (Z⁻¹).
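The five steps above can be sketched in code. Two stand-ins are used, and both are assumptions for illustration: a hash-counter keystream takes the place of CAST-128 in CFB mode, and the tiny textbook RSA pair (encrypting the session key byte-at-a-time, which only works because the modulus is so small) takes the place of the recipient's real key.

```python
import os, hashlib

n, e, d = 3233, 17, 2753  # toy RSA pair standing in for the recipient's real key

def keystream(key: bytes, length: int) -> bytes:
    """Hash-counter keystream as a stand-in for CAST-128 in CFB mode."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Step 1: fresh 128-bit session key for this message only
ks = os.urandom(16)
# Step 2: encrypt the message with the session key
message = b"meet me at noon"
ciphertext = xor(message, keystream(ks, len(message)))
# Step 3: encrypt the session key with the recipient's public key
enc_ks = [pow(b, e, n) for b in ks]       # byte-at-a-time only because the modulus is tiny
# Step 4: receiver recovers the session key with its private key
ks2 = bytes(pow(c, d, n) for c in enc_ks)
# Step 5: the session key decrypts the message
print(xor(ciphertext, keystream(ks2, len(ciphertext))))  # b'meet me at noon'
```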
3. Compression
As a default, PGP compresses the message after applying the signature but before encryption.
This has the benefit of saving space both for e-mail transmission and for file storage.
The placement of the compression algorithm, indicated by Z for compression and Z⁻¹ for decompression in the preceding figures, is critical:
1. The signature is generated before compression for two reasons:
(a) It is preferable to sign an uncompressed message so that one can store only
the uncompressed message together with the signature for future verification. If
one signed a compressed document, then it would be necessary either to store
a compressed version of the message for later verification or to recompress the
message when verification is required.
(b) Even if one were willing to generate dynamically a recompressed message for
verification, PGP’s compression algorithm presents a difficulty. The algorithm
is not deterministic; various implementations of the algorithm achieve different
tradeoffs in running speed versus compression ratio and, as a result, produce
different compressed forms. However, these different compression algorithms
are interoperable because any version of the algorithm can correctly decompress
the output of any other version. Applying the hash function and signature after
compression would constrain all PGP implementations to the same version of
the compression algorithm.
2. Message encryption is applied after compression to strengthen cryptographic security.
Because the compressed message has less redundancy than the original plaintext,
cryptanalysis is more difficult.
The compression algorithm used is ZIP.
4. E-mail Compatibility
When PGP is used, at least part of the block to be transmitted is encrypted. If only the signature
service is used, then the message digest is encrypted (with the sender’s private key). If the
confidentiality service is used, the message plus signature (if present) are encrypted (with a
Basic Cyptography 255
one-time symmetric key). Thus, part or all of the resulting block consists of a stream of arbitrary
8-bit octets. However, many electronic mail systems only permit the use of blocks consisting of
ASCII text. To accommodate this restriction, PGP provides the service of converting the raw
8-bit binary stream to a stream of printable ASCII characters.
The scheme used for this purpose is radix-64 conversion. Each group of three octets of
binary data is mapped into four ASCII characters. This format also appends a CRC to detect
transmission errors.
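The three-octets-to-four-characters mapping is the same one performed by base64 encoding, so it can be demonstrated directly (note that PGP's radix-64 armor additionally appends the CRC mentioned above, which base64 alone does not do):

```python
import base64

binary = bytes([0x14, 0xFB, 0x9C])        # three arbitrary octets
ascii_form = base64.b64encode(binary)     # four printable ASCII characters
print(ascii_form)                         # b'FPuc'
print(len(ascii_form))                    # 4
print(base64.b64decode(ascii_form) == binary)  # True: the mapping is reversible
```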
Transmission and Reception of PGP Messages
The following figure shows the steps during message transmission assuming that the message
is to be both signed and encrypted.
Figure: transmission and reception of PGP messages. On transmission: if a signature is required, generate the signature and prepend it (X ← signature || X); compress (X ← Z(X)); if confidentiality is required, encrypt with session key Ks and prepend the encrypted key; convert to radix-64 (X ← R64[X]). On reception the steps are reversed: decode radix-64 (X ← R64⁻¹[X]); recover the session key, Ks ← D(PRb, E(PUb, Ks)), and decrypt, X ← D(Ks, E(Ks, X)); decompress (X ← Z⁻¹(X)); and verify the signature.
The exclusion of the filename and timestamp portions of the message component ensures that detached signatures are exactly the same as attached signatures prefixed to the message. Detached signatures are calculated on a separate file that has none of the message component header fields.
•• Leading two octets of message digest: To enable the recipient to determine if the correct
public key was used to decrypt the message digest for authentication, by comparing
this plaintext copy of the first two octets with the first two octets of the decrypted
digest. These octets also serve as a 16-bit frame check sequence for the message.
•• Key ID of sender’s public key: Identifies the public key that should be used to decrypt
the message digest and, hence, identifies the private key that was used to encrypt
the message digest.
The message component and optional signature component may be compressed using
ZIP and may be encrypted using a session key.
The session key component includes the session key and the identifier of the recipient’s
public key that was used by the sender to encrypt the session key. The entire block is usually
encoded with radix-64 encoding.
Figure: general format of a PGP message (from A to B). Session key component: key ID of recipient's public key (PUb); session key (Ks), encrypted with E(PUb, •). Signature: timestamp; key ID of sender's public key (PUa); leading two octets of message digest; message digest, encrypted with E(PRa, •). Message: filename; timestamp; data. The signature and message are ZIP-compressed and encrypted with E(Ks, •); the entire block is then converted with R64.
Notation:
E(PUb, •) = encryption with user b's public key; E(PRa, •) = encryption with user a's private key; E(Ks, •) = encryption with session key; ZIP = ZIP compression function; R64 = radix-64 conversion function.
Figure: TLS in the protocol stack. On one side, the plain stack (Application / TCP / IP / Network / Physical); on the other, the same stack with TLS inserted between the application and TCP (Application / TLS / TCP / IP / Network / Physical).
In the above diagram, although TLS technically resides between the application and transport layers, from the common perspective it is a transport protocol that acts as a TCP layer enhanced with security services.
TLS is designed to operate over TCP, the reliable layer-4 protocol (not over UDP), which makes the design of TLS much simpler, because it does not have to worry about timing out and retransmitting lost data. The TCP layer continues doing that as usual, which serves the needs of TLS.
Why is TLS Popular?
The reason for the popularity of providing security at the transport layer is simplicity. Design and deployment of security at this layer does not require any change in the TCP/IP protocols implemented in an operating system. Only user processes and applications need to be designed or modified, which is less complex.
4.9.1 Secure Socket Layer (SSL)
In this section, we discuss the family of protocols designed for TLS. The family includes SSL versions 2 and 3 and the TLS protocol. SSLv2 has now been replaced by SSLv3, so we will focus on SSLv3 and TLS.
Brief History of SSL
In 1995, Netscape developed SSLv2 and used it in Netscape Navigator 1.1. SSL version 1 was never published or used. Later, Microsoft improved upon SSLv2 and introduced a similar protocol named Private Communications Technology (PCT).
Netscape substantially improved SSLv2 on various security issues and deployed SSLv3 in 1996. The Internet Engineering Task Force (IETF) subsequently introduced a similar TLS (Transport Layer Security) protocol as an open standard in 1999. The TLS protocol is not interoperable with SSLv3.
TLS modified the cryptographic algorithms for key expansion and authentication. Also,
TLS suggested use of open crypto Diffie-Hellman (DH) and Digital Signature Standard (DSS)
in place of patented RSA crypto used in SSL. But due to expiry of RSA patent in 2000, there
existed no strong reasons for users to shift away from the widely deployed SSLv3 to TLS.
Figure: SSL in the stack. SSL sits above TCP on both the client and the server (SSL / TCP / IP).
SSL itself is not a single-layer protocol as depicted in the figure; in fact, it is composed of two sub-layers.
•• The lower sub-layer comprises the one component of the SSL protocol called the SSL Record Protocol. This component provides integrity and confidentiality services.
•• The upper sub-layer comprises three SSL-related protocol components and an application protocol. The application component provides the information transfer service between client/server interactions. Technically, it can operate on top of the SSL layer as well. The three SSL-related protocol components are:
ii SSL Handshake Protocol
ii Change Cipher Spec Protocol
ii Alert Protocol.
•• These three protocols manage all of SSL message exchanges and are discussed later in this section.
Figure: SSL Record Protocol operation. Application data is fragmented, a MAC is computed and appended to each fragment, and the protected fragments are passed down to TCP/IP.
ii The Handshake Protocol proceeds through four phases. These are discussed in the next section.
•• Change Cipher Spec Protocol
ii The simplest part of the SSL protocol. It comprises a single message exchanged between the two communicating entities, the client and the server.
ii As each entity sends the Change Cipher Spec message, it changes its side of the connection into the secure state as agreed upon.
ii The pending state of the cipher parameters is copied into the current state.
ii Exchange of this message indicates that all future data exchanges are encrypted and integrity protected.
•• SSL Alert Protocol
ii This protocol is used to report errors – such as unexpected message, bad record
MAC, security parameters negotiation failed, etc.
ii It is also used for other purposes – such as notify closure of the TCP connection,
notify receipt of bad or unknown certificate, etc.
Establishment of SSL Session
As discussed above, there are four phases of SSL session establishment. These are mainly
handled by SSL Handshake protocol.
Phase 1 − Establishing security capabilities.
•• This phase comprises the exchange of two messages – Client_hello and Server_hello.
Phase 2 − Server authentication and key exchange.
The server sends: certificate, (server_key_exchange), (certificate_request), and server_hello_done.
•• Server sends certificate. Client software comes configured with public keys of various
“trusted” organisations (CAs) to check certificate.
•• Server sends chosen cipher suite.
•• Server may request client certificate. Usually it is not done.
•• Server indicates end of Server_hello.
Phase 3 − Client authentication and key exchange.
The client sends: certificate, client_key_exchange, and certificate_verify.
Phase 4 − Finish.
Each side sends a change_cipher_spec message followed by a finished message.
•• Client and server send Change cipher spec messages to each other to cause the
pending cipher state to be copied into the current state.
•• From now on, all data is encrypted and integrity protected.
•• Message “Finished” from each end verifies that the key exchange and authentication
processes were successful.
All four phases, discussed above, happen within the establishment of the TCP session. SSL session establishment starts after the TCP SYN/SYN-ACK exchange and finishes before the TCP FIN.
Resuming a Disconnected Session
•• It is possible to resume a disconnected session (through Alert message), if the client
sends a hello_request to the server with the encrypted session_id information.
•• The server then determines if the session_id is valid. If validated, it exchanges Change
Cipher Spec and finished messages with the client and secure communications resume.
•• This avoids recalculating of session cipher parameters and saves computing at the
server and the client end.
SSL Session Keys
We have seen that during Phase 3 of SSL session establishment, a pre-master secret is sent
by the client to the server encrypted using server’s public key. The master secret and various
session keys are generated as follows:
•• The master secret is generated (via pseudo random number generator) using:
ii The pre-master secret.
ii Two nonces (RA and RB) exchanged in the client_hello and server_hello messages.
•• Six secret values are then derived from this master secret as:
ii Secret key used with MAC (for data sent by server)
ii Secret key used with MAC (for data sent by client)
ii Secret key and IV used for encryption (by server)
ii Secret key and IV used for encryption (by client)
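The derivation of a block of key material from the master secret and the two nonces can be sketched as below. This is an illustration only: an HMAC-SHA-256 counter expansion is assumed here for simplicity, whereas the actual SSLv3/TLS pseudorandom function is specified differently.

```python
import hmac, hashlib

def derive_keys(master_secret: bytes, ra: bytes, rb: bytes, n_bytes: int) -> bytes:
    """Expand the master secret and the two nonces into n_bytes of key material.
    Illustrative HMAC counter expansion, not the real SSLv3/TLS PRF."""
    out, counter = b"", 0
    while len(out) < n_bytes:
        out += hmac.new(master_secret, bytes([counter]) + ra + rb, hashlib.sha256).digest()
        counter += 1
    return out[:n_bytes]

master = b"\x01" * 48                         # hypothetical 48-byte master secret
ra, rb = b"client-nonce", b"server-nonce"     # nonces from client_hello / server_hello
block = derive_keys(master, ra, rb, 6 * 16)   # six 128-bit secret values
keys = [block[i*16:(i+1)*16] for i in range(6)]
print(len(keys), all(len(k) == 16 for k in keys))  # 6 True
```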
Figure: HTTPS in the stack. HTTP runs over SSL, which runs over TCP/IP, on both the client and the server.
The secure browsing through HTTPS ensures that the following contents are encrypted −
•• URL of the requested web page.
•• Web page contents provided by the server to the user client.
•• Contents of forms filled in by user.
•• Cookies established in both directions.
Working of HTTPS
HTTPS application protocol typically uses one of two popular transport layer security protocols
- SSL or TLS. The process of secure browsing is described in the following points.
•• You request an HTTPS connection to a webpage by entering https:// followed by the URL in the browser address bar.
•• Web browser initiates a connection to the web server. Use of https invokes the use
of SSL protocol.
•• An application, browser in this case, uses the system port 443 instead of port 80
(used in case of http).
•• The SSL protocol goes through a handshake protocol for establishing a secure session
as discussed in earlier sections.
•• The website initially sends its SSL Digital certificate to your browser. On verification
of certificate, the SSL handshake progresses to exchange the shared secrets for the
session.
•• When a trusted SSL Digital Certificate is used by the server, users get to see a padlock
icon in the browser address bar. When an Extended Validation Certificate is installed
on a website, the address bar turns green.
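The browser-side behaviour described above (port 443, certificate verification before the handshake completes) can be illustrated with Python's standard ssl module; the actual network connection is shown only as a comment, with example.com as a placeholder host:

```python
import socket, ssl

# Default client context: certificate verification and hostname checking are on,
# mirroring what a browser does before trusting a server.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Wrapping a TCP socket for an HTTPS connection (port 443) would look like:
#   with socket.create_connection(("example.com", 443)) as raw:
#       with context.wrap_socket(raw, server_hostname="example.com") as tls:
#           tls.version()   # the negotiated protocol, after the handshake
```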
Use of HTTPS
•• Use of HTTPS provides confidentiality, server authentication and message integrity
to the user. It enables safe conduct of e-commerce on the Internet.
•• It protects data from eavesdropping and prevents identity theft, which are common attacks on HTTP.
Present day web browsers and web servers are equipped with HTTPS support. The use
of HTTPS over HTTP, however, requires more computing power at the client and the server
end to carry out encryption and SSL handshake.
4.9.4 Secure Shell Protocol (SSH)
The salient features of SSH are as follows:
•• SSH is a network protocol that runs on top of the TCP/IP layer. It is designed to replace TELNET, which provided an insecure means of remote logon.
•• SSH provides a secure client/server communication and can be used for tasks such
as file transfer and e-mail.
•• SSH2 is a prevalent protocol which provides improved network communication
security over earlier version SSH1.
SSH Defined
SSH is organised as three sub-protocols.
Figure: the three SSH sub-protocols (Transport Layer Protocol, User Authentication Protocol, Connection Protocol) stacked over TCP.
•• Transport Layer Protocol: This part of SSH protocol provides data confidentiality,
server (host) authentication, and data integrity. It may optionally provide data
compression as well.
ii Server Authentication: Host keys are asymmetric public/private key pairs. A server uses its public key to prove its identity to a client. The client verifies that the contacted server is a “known” host from the database it maintains. Once the server is authenticated, session keys are generated.
ii Session Key Establishment: After authentication, the server and the client
agree upon cipher to be used. Session keys are generated by both the client and
the server. Session keys are generated before user authentication so that usernames
and passwords can be sent encrypted. These keys are generally replaced at regular
intervals (say, every hour) during the session and are destroyed immediately after
use.
ii Data Integrity: SSH uses Message Authentication Code (MAC) algorithms for the data integrity check. This is an improvement over the 32-bit CRC used by SSH1.
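An SSH-style integrity check can be sketched as a MAC computed over the packet sequence number concatenated with the payload, so replayed or reordered packets fail verification. HMAC-SHA-256 is assumed here for illustration; the algorithm actually used is negotiated per session.

```python
import hmac, hashlib

def packet_mac(key: bytes, seqno: int, payload: bytes) -> bytes:
    """MAC over the sequence number plus the packet contents (SSH-2 style)."""
    return hmac.new(key, seqno.to_bytes(4, "big") + payload, hashlib.sha256).digest()

key = b"session-integrity-key"        # hypothetical key derived during key exchange
tag = packet_mac(key, 1, b"ls -l\n")

# Receiver recomputes the MAC and compares in constant time:
print(hmac.compare_digest(tag, packet_mac(key, 1, b"ls -l\n")))  # True
print(hmac.compare_digest(tag, packet_mac(key, 2, b"ls -l\n")))  # False: wrong sequence number
```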
•• User Authentication Protocol: This part of SSH authenticates the user to
the server. The server verifies that access is given to intended users only. Many
authentication methods are currently used such as, typed passwords, Kerberos, public-
key authentication, etc.
•• Connection Protocol: This provides multiple logical channels over a single
underlying SSH connection.
SSH Services
SSH provides three main services that enable provision of many secure solutions. These services
are briefly described as follows:
•• Secure Command-Shell (Remote Logon): It allows the user to edit files, view
the contents of directories, and access applications on connected device. Systems
administrators can remotely start/view/stop services and processes, create user
accounts, and change file/directories permissions and so on. All tasks that are feasible
at a machine’s command prompt can now be performed securely from the remote
machine using secure remote logon.
•• Secure File Transfer: SSH File Transfer Protocol (SFTP) is designed as an extension
for SSH-2 for secure file transfer. In essence, it is a separate protocol layered over
the Secure Shell protocol to handle file transfers. SFTP encrypts both the username/
password and the file data being transferred. It uses the same port as the Secure Shell
server, i.e. system port no 22.
•• Port Forwarding (Tunneling): It allows data from unsecured TCP/IP based
applications to be secured. After port forwarding has been set up, Secure Shell reroutes
traffic from a programme (usually a client) and sends it across the encrypted tunnel
to the program on the other side (usually a server). Multiple applications can transmit
data over a single multiplexed secure channel, eliminating the need to open many
ports on a firewall or router.
Figure: SSH port forwarding. E-mail, database, and VNC client applications on a host run through an SSH client with port forwarding; their traffic crosses the Internet in a single secure channel, passes a firewall with only port 22 open, and reaches the SSH server, which forwards it on to the e-mail server, database server, and VNC server.
Limitations
•• Applicable to TCP-based applications only (not UDP).
•• TCP/IP headers are in clear.
•• Suitable for direct communication between the client and the server. Does not cater
for secure applications using chain of servers (e.g. email)
•• SSL does not provide non-repudiation as client authentication is optional.
•• If needed, client authentication needs to be implemented above SSL.
4.10 IP SECURITY (IPSEC)
IP security (IPSec) is an Internet Engineering Task Force (IETF) standard suite of protocols between two communication points across the IP network that provides data authentication, integrity, and confidentiality. It also defines the encrypted, decrypted and authenticated packets. The protocols needed for secure key exchange and key management are defined in it.
Uses of IP Security
IPsec can be used to do the following things:
•• To encrypt application layer data.
•• To provide security for routers sending routing data across the public internet.
•• To provide authentication without encryption, like to authenticate that the data
originates from a known sender.
•• To protect network data by setting up circuits using IPsec tunneling, in which all data being sent between the two endpoints is encrypted, as with a Virtual Private Network (VPN) connection.
Applications of IPSec
IPSec provides the capability to secure communications across a LAN, across private and public
WANs, and across the Internet. Examples of its use include the following:
•• Secure branch office connectivity over the Internet: A company can build a
secure virtual private network over the Internet or over a public WAN. This enables
a business to rely heavily on the Internet and reduce its need for private networks,
saving costs and network management overhead.
•• Secure remote access over the Internet: An end user whose system is equipped
with IP security protocols can make a local call to an Internet service provider (ISP)
and gain secure access to a company network. This reduces the cost of toll charges
for traveling employees and telecommuters.
•• Establishing extranet and intranet connectivity with partners: IPSec can be
used to secure communication with other organisations, ensuring authentication and
confidentiality and providing a key exchange mechanism.
•• Enhancing electronic commerce security: Even though some Web and electronic
commerce applications have built-in security protocols, the use of IPSec enhances
that security.
An IP Security Scenario
Figure: an IP security scenario. User systems and networking devices equipped with IPSec exchange packets of the form [IP header | IPSec header | secure IP payload] across a public (Internet) or private network, while ordinary packets of the form [IP header | IP payload] travel on the local networks behind the IPSec-capable devices.
Benefits of IPSec
Common benefits of IPSec are:
•• When IPSec is implemented in a firewall or router, it provides strong security that can
be applied to all traffic crossing the perimeter. Traffic within a company or workgroup
does not incur the overhead of security-related processing.
•• IPSec in a firewall is resistant to bypass if all traffic from the outside must use IP, and
the firewall is the only means of entrance from the Internet into the organisation.
•• IPSec is below the transport layer (TCP, UDP) and so is transparent to applications.
There is no need to change software on a user or server system when IPSec is
implemented in the firewall or router. Even if IPsec is implemented in end systems,
upper-layer software, including applications, is not affected.
•• IPsec can be transparent to end users. There is no need to train users on security
mechanisms, issue keying material on a per-user basis, or revoke keying material
when users leave the organisation.
•• IPsec can provide security for individual users if needed. This is useful for offsite
workers and for setting up a secure virtual subnetwork within an organisation for
sensitive applications.
Architecture
Figure: IPSec document architecture. The ESP protocol and AH protocol documents, the encryption algorithm and authentication algorithm documents, the Domain of Interpretation (DOI), and key management.
Security Associations
A key concept that appears in both the authentication and confidentiality mechanisms for IP
is the security association (SA). An association is a one-way relationship between a sender
and a receiver that affords security services to the traffic carried on it. If a peer relationship is
needed, for two-way secure exchange, then two security associations are required. Security
services are afforded to an SA for the use of AH or ESP, but not both.
the desired user configuration. Furthermore, IPSec provides a high degree of granularity in
discriminating between traffic that is afforded IPSec protection and traffic that is allowed to
bypass IPSec, in the former case relating IP traffic to specific SAs.
The means by which IP traffic is related to specific SAs (or no SA in the case of traffic
allowed to bypass IPSec) is the nominal Security Policy Database (SPD). In its simplest form,
an SPD contains entries, each of which defines a subset of IP traffic and points to an SA for
that traffic. In more complex environments, there may be multiple entries that potentially relate
to a single SA or multiple SAs associated with a single SPD entry. The reader is referred to the
relevant IPSec documents for a full discussion.
Each SPD entry is defined by a set of IP and upper-layer protocol field values, called
selectors. In effect, these selectors are used to filter outgoing traffic in order to map it into a
particular SA. Outbound processing obeys the following general sequence for each IP packet:
1. Compare the values of the appropriate fields in the packet (the selector fields) against
the SPD to find a matching SPD entry, which will point to zero or more SAs.
2. Determine the SA if any for this packet and its associated SPI.
3. Do the required IPSec processing (i.e., AH or ESP processing).
The following selectors determine an SPD entry:
•• Destination IP Address: This may be a single IP address, an enumerated list or
range of addresses, or a wildcard (mask) address. The latter two are required to support
more than one destination system sharing the same SA (e.g., behind a firewall).
•• Source IP Address: This may be a single IP address, an enumerated list or range of
addresses, or a wildcard (mask) address. The latter two are required to support more
than one source system sharing the same SA (e.g., behind a firewall).
•• User ID: A user identifier from the operating system. This is not a field in the IP or
upper-layer headers but is available if IPSec is running on the same operating system
as the user.
•• Data Sensitivity Level: Used for systems providing information flow security (e.g.,
Secret or Unclassified).
•• Transport Layer Protocol: Obtained from the IPv4 Protocol or IPv6 Next Header
field. This may be an individual protocol number, a list of protocol numbers, or a
range of protocol numbers.
•• Source and Destination Ports: These may be individual TCP or UDP port values,
an enumerated list of ports, or a wildcard port.
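The selector matching in outbound processing can be sketched with a hypothetical, much-simplified SPD; the entry fields, addresses, and actions below are invented for illustration, and a real SPD supports ranges, masks, and the further selectors listed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SPDEntry:
    dst: str                 # single address or "*" wildcard
    src: str
    protocol: Optional[int]  # e.g. 6 = TCP; None = any
    dst_port: Optional[int]  # None = any port
    action: str              # "protect" (points to an SA), "bypass", or "discard"

# Hypothetical policy database, consulted in order:
spd = [
    SPDEntry("10.0.0.5", "*", 6, 443, "protect"),
    SPDEntry("*",        "*", None, None, "bypass"),
]

def lookup(dst: str, src: str, protocol: int, dst_port: int) -> str:
    """Step 1: compare the packet's selector fields against each SPD entry."""
    for entry in spd:
        if (entry.dst in ("*", dst) and entry.src in ("*", src)
                and entry.protocol in (None, protocol)
                and entry.dst_port in (None, dst_port)):
            return entry.action
    return "discard"

print(lookup("10.0.0.5", "192.168.1.9", 6, 443))  # protect
print(lookup("10.0.0.9", "192.168.1.9", 17, 53))  # bypass
```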
Authentication Header
The Authentication Header provides support for data integrity and authentication of IP packets.
The data integrity feature ensures that undetected modification to a packet’s content in transit is
not possible. The authentication feature enables an end system or network device to authenticate
the user or application and filter traffic accordingly; it also prevents the address spoofing attacks
observed in today’s Internet.
The AH also guards against the replay attack described later in this section. Authentication
is based on the use of a message authentication code (MAC) hence the two parties must share
a secret key.
•• Next Header (8 bits): Identifies the type of data contained in the payload data field
by identifying the first header in that payload (for example, an extension header in
IPv6, or an upper-layer protocol such as TCP).
•• Authentication Data (variable): A variable-length field (must be an integral number
of 32-bit words) that contains the Integrity Check Value computed over the ESP packet
minus the Authentication Data field.
Figure: ESP packet format (bits 0–31 per row): security parameters index (SPI), sequence number, payload data, padding, pad length, next header, and authentication data. Authentication coverage spans the packet from the SPI onward; confidentiality coverage spans the payload data, padding, pad length, and next header fields.
Padding
The Padding field serves several purposes:
•• If an encryption algorithm requires the plaintext to be a multiple of some number of
bytes (e.g., the multiple of a single block for a block cipher), the Padding field is used
to expand the plaintext (consisting of the Payload Data, Padding, Pad Length, and
Next Header fields) to the required length.
•• The ESP format requires that the Pad Length and Next Header fields be right aligned
within a 32-bit word. Equivalently, the cipher text must be an integer multiple of 32
bits. The Padding field is used to assure this alignment.
•• Additional padding may be added to provide partial traffic flow confidentiality by
concealing the actual length of the payload.
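The minimum pad length implied by the first two points can be computed directly. The sketch below assumes a block size that is a multiple of 4 bytes (as with common block ciphers), so satisfying the block-size requirement also satisfies ESP's 32-bit alignment of the Pad Length and Next Header fields.

```python
def esp_pad_length(payload_len: int, block_size: int) -> int:
    """Pad so that payload + padding + the 2-byte Pad Length / Next Header
    trailer is a multiple of the cipher block size."""
    return (-(payload_len + 2)) % block_size

for payload in (10, 14, 16):
    pad = esp_pad_length(payload, 16)      # e.g. a 16-byte block cipher
    total = payload + pad + 2
    print(payload, pad, total % 16 == 0)   # the padded length always divides evenly
```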
3. DNS spoofing (also known as DNS cache poisoning): An attacker will drive the traffic away from real DNS servers and redirect it to a “pirate” server, unbeknownst to the users. This may cause the corruption/theft of a user's personal data.
4. Fast flux: An attacker will typically spoof his IP address while performing an attack. Fast flux is a technique to constantly change location-based data in order to hide where exactly the attack is coming from. This masks the attacker's real location, giving him the time needed to exploit the attack. Flux can be single, double, or of any other variant. A single flux changes the address of the web server, while double flux changes both the address of the web server and the names of DNS servers.
5. Reflected attacks: Attackers will send thousands of queries while spoofing their own IP address and using the victim's source address. When these queries are answered, they will all be redirected to the victim himself.
6. Reflective amplification DoS: When the size of the answer is considerably larger than the query itself, a flux is triggered, causing an amplification effect. This generally uses the same method as a reflected attack, but this attack will overwhelm the user's system's infrastructure further.
7. DNS hijacking: In DNS hijacking the attacker redirects queries to a different domain name server. This can be done either with malware or with the unauthorised modification of a DNS server. Although the result is similar to that of DNS spoofing, this is a fundamentally different attack because it targets the DNS record of the website on the nameserver, rather than a resolver's cache.
Figure: normal DNS resolution versus DNS hijacking. Normally, a query for Example.com goes to the DNS server and is answered with the address of the real Example.com server; under hijacking, the query is answered by a malicious server, which directs the user to a fake Example.com server.
8. NXDOMAIN attack: This is a type of DNS flood attack where an attacker inundates a DNS server with requests, asking for records that do not exist, in an attempt to cause a denial-of-service for legitimate traffic. This can be accomplished using sophisticated attack tools that can auto-generate unique subdomains for each request. NXDOMAIN attacks can also target a recursive resolver with the goal of filling the resolver's cache with junk requests.
d = 7⁻¹ mod 120 = –17 mod 120
d = 103
Public key = {7, 143}
Private key = {103, 143}
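The arithmetic above (n = 11 × 13 = 143, φ(n) = 120, e = 7) can be checked directly, including a small encrypt/decrypt round trip with the resulting key pair; the message value 9 is an arbitrary choice for illustration:

```python
p, q, e = 11, 13, 7
n = p * q                    # 143
phi = (p - 1) * (q - 1)      # 120
d = pow(e, -1, phi)          # modular inverse of 7 mod 120 (Python 3.8+)
print(d)                     # 103 (equivalently -17 mod 120)
print((e * d) % phi)         # 1: confirms d is the inverse

m = 9                        # arbitrary sample message
c = pow(m, e, n)             # encrypt with public key {7, 143}
print(pow(c, d, n))          # 9: decrypt with private key {103, 143}
```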
where,
p = A prime number of length L bits
q = A 160-bit prime factor of (p – 1)
g = h^((p – 1)/q) mod p
x = A number less than q.
y = g^x mod p.
H = Message Digest algorithm.
If the same secret (k1, k2) is used for signing two different messages, it will generate two different signatures (r1, s1) and (r1, s2):
1. s1 = k1⁻¹(h1k2 + d(r1 + r2))
2. s2 = k1⁻¹(h2k2 + d(r1 + r2))
where h1 = SHA512(m1) and h2 = SHA512(m2)
3. k1s1 – k1s2 = h1k2 + d(r1 + r2) – h2k2 – d(r1 + r2)
4. k1(s1 – s2) = k2(h1 – h2)
5. We cannot obtain k1, k2 from this equation, and so this scheme is more secure than the original ECDSA (Elliptic Curve Digital Signature Algorithm) scheme.
8. What do you mean by PGP? Discuss its application?
Ans. PGP:
1. PGP (Pretty Good Privacy) is an encryption algorithm that provides cryptographic
privacy and authentication for data communication.
2. PGP uses a combination of public-key and conventional encryption to provide security
services for electronic mail message and data files.
3. PGP provides five services related to the format of messages and data files :
authentication, confidentiality, compression, e-mail compatibility and segmentation
Application of PGP:
1. PGP provides secure encryption of documents and data files that even advanced
super computers are not able to crack.
2. For authentication, PGP employs the RSA public-key encryption scheme and MD5, a one-way hash function, to form a digital signature that assures the receiver that an incoming message is authentic (that it comes from the alleged sender and that it has not been altered).
The PGP messages are transmitted from the sender to the receiver using the
following steps:
1. If signature is required, the hash code of the uncompressed plaintext message is
created and encrypted using the sender’s private key.
2. The plaintext message and the signature are compressed using the ZIP compression
algorithm.
3. The compressed plaintext message and compressed signature are encrypted with a
randomly generated session key to provide confidentiality. The session key is then
encrypted with the recipient’s public key and is added to the beginning of the message.
4. The entire block is converted to radix-64 format.
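The order of operations in the four steps above (sign, then compress, then encrypt, then radix-64) can be sketched with a round trip. Two simplifications are assumed for brevity: the raw SHA-1 hash stands in for the private-key-encrypted signature of step 1, and the session-key encryption of step 3 is omitted.

```python
import zlib, base64, hashlib

plaintext = b"PGP message body"

# Transmission: signature (stand-in), then ZIP, then radix-64.
signature = hashlib.sha1(plaintext).digest()       # step 1 (private-key encryption omitted)
compressed = zlib.compress(signature + plaintext)  # step 2 (step 3 encryption omitted)
wire = base64.b64encode(compressed)                # step 4

# Reception: reverse the steps.
block = zlib.decompress(base64.b64decode(wire))
sig, msg = block[:20], block[20:]                  # SHA-1 digests are 20 bytes
print(msg == plaintext)                            # True
print(sig == hashlib.sha1(msg).digest())           # True: signature check succeeds
```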
On receiving the PGP message, the receiver follows the following steps:
1. The entire block is first converted back to binary format.
2. The recipient recovers the session key using his or her private key, and then decrypts
the message with the session key.
3. The decrypted message is then decompressed.
4. If the message is signed, the receiver needs to verify the signature. For this, he or she computes a new hash code and compares it with the received hash code. If they match, the message is accepted; otherwise, it is rejected.
9. List the basic terminology used in cryptography.
Ans. Some basic terminology used in cryptography:
1. Plaintext: Plaintext is a readable, plain message that anyone can read.
2. Cipher text: The transformed message or coded message.
3. Cipher: An algorithm for transforming an intelligible message into one that is
unintelligible by transposition and/or substitution methods.
4. Key: Some critical information used by the cipher, known only to the sender and
receiver.
5. Encoding/Encryption: The process of converting plaintext to cipher text using a
cipher and a key.
6. Decoding/Decryption: The process of converting cipher text back into plaintext
using a cipher and a key.
7. Cryptanalysis (code breaking): The study of principles and methods of transforming
an unintelligible message back into an intelligible message without knowledge of the
key.
8. Cryptology: The combination of cryptography and cryptanalysis.
9. Code: An algorithm for transforming an intelligible message into an unintelligible
one using a code-book.
10. Substitution: Replacing one entity with another.
11. Transposition: Shuffling the entities.
12. Block cipher: Processes the input one block at a time, producing one output block
for each input block.
13. Stream Cipher: Processes the input one element at a time, producing one output
element at a time.
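A Caesar shift, the simplest substitution cipher, ties several of these terms together (plaintext, cipher text, key, encryption, decryption); a minimal Python sketch:

```python
# Caesar substitution cipher: each letter is replaced by the letter
# `key` places later in the alphabet (the key is the shift amount).
def encrypt(plaintext: str, key: int) -> str:
    return "".join(
        chr((ord(c) - 65 + key) % 26 + 65) if c.isalpha() else c
        for c in plaintext.upper()
    )

def decrypt(ciphertext: str, key: int) -> str:
    # Decryption is simply substitution with the opposite shift.
    return encrypt(ciphertext, -key)

print(encrypt("HELLO", 3))   # KHOOR  (the cipher text)
print(decrypt("KHOOR", 3))   # HELLO  (the recovered plaintext)
```

Cryptanalysis of such a cipher is trivial: with only 26 possible keys, an attacker can try them all, which is why real ciphers use far larger key spaces.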
10. What is hash function? Discuss SHA-512 with all required steps, round function.
Ans. Hash function:
1. A cryptographic hash function is a transformation that takes an input and returns a
fixed-size string, which is called the hash value.
2. A hash value h is generated by a function H of the form: h = H (M) where M is the
variable length message and H(M) is the fixed length hash value.
3. The hash value is appended to the message at the source at a time when message is
assumed or known to be correct.
4. The receiver authenticates the message by recomputing the hash value.
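Points 2-4 above can be sketched with Python's hashlib: SHA-512 produces a fixed 512-bit (64-byte) hash value that is appended at the source and recomputed by the receiver:

```python
import hashlib

def send(message: bytes) -> bytes:
    # Append h = H(M) to the message at the source (point 3 above).
    return message + hashlib.sha512(message).digest()

def receive(blob: bytes) -> bytes:
    # Recompute the hash and compare it to authenticate (point 4 above).
    message, received_hash = blob[:-64], blob[-64:]   # SHA-512 digest is 64 bytes
    if hashlib.sha512(message).digest() != received_hash:
        raise ValueError("hash mismatch: message was altered")
    return message

blob = send(b"transfer 100 rupees")
print(receive(blob))   # b'transfer 100 rupees'
```

Note that appending a bare hash only detects accidental alteration; detecting deliberate tampering requires a keyed construction (a MAC) or a digital signature over the hash.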
5. The standard does not dictate the use of a specific algorithm but recommends RSA.
6. The X.509 certificate format is used in S/MIME, IP security and SET.
Role of X.509 certificates in cryptography:
1. To verify that a public key belongs to the user, computer or service identity contained
within the certificate.
2. To validate the identity of the sender of encrypted or signed data.
12. What is Transport Layer Security (TLS)? Explain the working of TLS.
Ans. 1. Transport Layer Security (TLS) is a protocol that provides communication security
between client/server applications that communicate with each other over the Internet.
2. It enables privacy, integrity and protection for the data that is transmitted between
different nodes on the Internet.
3. TLS is a successor to the Secure Socket Layer (SSL) protocol.
4. Transport Layer Security (TLS) is a protocol that provides authentication, privacy,
and data integrity between two communicating computer applications.
5. It is the most widely-deployed security protocol used for web browsers and other
applications that require data to be securely exchanged over a network, such as web
browsing sessions, file transfers, VPN connections, remote desktop sessions, and Voice
over IP (VoIP).
6. TLS is a cryptographic protocol that provides end-to-end communications security
over networks and is widely used for internet communications and online transactions.
7. TLS primarily enables secure Web browsing, applications access, data transfer and
most Internet-based communication.
8. It prevents the transmitted/transported data from being eavesdropped or tampered.
9. TLS is used to secure Web browsers, Web servers, VPNs, database servers and more.
10. TLS protocol consists of:
(a) TLS handshake protocol: It enables the client and server to authenticate each
other and select an encryption algorithm prior to sending the data.
(b) TLS record protocol: It works on top of the standard TCP protocol to ensure that
the created connection is secure and reliable. It also provides data encapsulation
and data encryption services.
Working of TLS:
1. A TLS connection is initiated using a sequence known as the TLS handshake.
2. The TLS handshake establishes a cipher suite for each communication session.
3. The cipher suite is a set of algorithms that specifies details such as which shared
encryption keys, or session keys, will be used for that particular session.
4. TLS is able to set the matching session keys over an unencrypted channel using a
technique known as public key cryptography.
5. The handshake also handles authentication, which usually consists of the server
proving its identity to the client. This is done using public keys.
6. Public keys enable one-way (asymmetric) encryption: anyone can unscramble data
encrypted with the corresponding private key, which proves its authenticity, but
only the holder of the private key can produce such data.
7. Once data is encrypted and authenticated, it is then signed with a message
authentication code (MAC).
8. The recipient can then verify the MAC to ensure the integrity of the data.
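Python's ssl module exposes these ideas directly. The sketch below builds a client context (the settings from which the handshake negotiates a cipher suite) and, under the __main__ guard and given network access, performs a real handshake and prints the negotiated protocol version and cipher suite; the host name is a placeholder:

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    # The context holds the settings negotiated during the TLS handshake:
    # permitted protocol versions, trusted CA certificates, and whether
    # the server's certificate and hostname must be verified.
    ctx = ssl.create_default_context()            # loads the system CA store
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse obsolete protocols
    return ctx

if __name__ == "__main__":
    # Requires network access: performs a real handshake with a server.
    import socket
    host = "example.com"                          # placeholder host
    with socket.create_connection((host, 443)) as sock:
        with make_client_context().wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())                  # e.g. "TLSv1.3"
            print(tls.cipher())                   # the negotiated cipher suite
```

The `wrap_socket` call is where the handshake protocol runs; everything sent afterwards passes through the record protocol, which encrypts and authenticates each fragment.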
13. What is DES? Why were double and triple DES created and what are they?
Ans. Data Encryption Standard:
1. The DES has a 64-bit block size and uses a 56-bit key during execution (8 parity bits
are stripped off from full 64-bit key). DES is a symmetric cryptosystem, specifically a
16-round Feistel cipher.
2. A block to be enciphered is subjected to an initial permutation IP and then to a complex
key-dependent computation and finally to a permutation which is the inverse of the
initial permutation IP–1.
3. Permutation is an operation performed by a function, which moves an element at
place j to the place k.
4. The key-dependent computation can be simply defined in terms of a function f, called
the cipher function, and a function KS, called the key schedule.
Reason for creation:
1. DES uses a 56-bit key to encrypt plain text, which can easily be cracked using
modern technologies.
2. To prevent this from happening, double DES and triple DES were created. They
are much more secure than the original DES because they use 112-bit and 168-bit
keys respectively.
Double DES:
1. Double DES is an encryption technique which uses two instances of DES on the same
plain text, with a different key for each instance.
2. Both keys are required at the time of decryption. The 64-bit plain text goes into the
first DES instance, which converts it into a 64-bit middle text using the first key;
this then goes into the second DES instance, which produces the 64-bit cipher text
using the second key.
3. Although double DES uses a 112-bit key, it gives a security level of only 2^56, not
2^112, because of the meet-in-the-middle attack, which can be used to break through
double DES.
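The meet-in-the-middle attack can be demonstrated with a toy cipher and 8-bit keys (our own stand-in, not DES); the attacker meets in the middle after about 2 × 2^8 operations instead of the 2^16 that a brute-force search over both keys would need:

```python
# Toy meet-in-the-middle demonstration with 8-bit keys (real DES uses
# 56-bit keys; the principle is identical, only the scale differs).
def enc(block: int, key: int) -> int:                    # toy cipher, NOT DES
    return (((block ^ key) * 7) + key) % 256

def dec(block: int, key: int) -> int:
    return ((((block - key) % 256) * 183) % 256) ^ key   # 183 = 7^-1 mod 256

K1, K2 = 113, 200                                        # the secret key pair
pairs = [(p, enc(enc(p, K1), K2)) for p in (42, 7)]      # known pt/ct pairs

# Phase 1: encrypt the first known plaintext under every possible first key.
p0, c0 = pairs[0]
middle = {enc(p0, k): k for k in range(256)}
# Phase 2: decrypt the first ciphertext under every possible second key
# and look for a match in the middle. Cost: 2 * 2^8 steps, not 2^16.
cands = [(middle[dec(c0, k)], k) for k in range(256) if dec(c0, k) in middle]
# A second known pair filters out the accidental matches.
p1, c1 = pairs[1]
cands = [(a, b) for a, b in cands if enc(enc(p1, a), b) == c1]
print((K1, K2) in cands)   # True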
Triple DES:
1. In triple DES, three stages of DES are used for encryption and decryption of messages.
This increases the security of DES.
Two versions of triple DES are:
(a) Triple DES with two keys:
1. In triple DES with two keys, there are only two keys K1 and K2. The first and
the third stages use the key K1 and the second stage uses K2.
2. The middle stage of triple DES uses decryption (reverse cipher) in the
encryption site and encryption cipher in the decryption site.
(b) Triple DES with three keys:
1. This cipher uses three DES cipher stages at the encryption site and three
reverse cipher stages at the decryption site.
2. The plaintext is first encrypted with a key K1, then encrypted with a second
key K2 and finally with a third key K3, where K1, K2 and K3 are all different.
3. Triple DES with three keys is used in PGP and S/MIME. The plaintext can be
obtained by first decrypting the cipher text with the key K3, then with K2 and
finally with K1.
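The three encryption stages and the reversed decryption order can be sketched with the same kind of small invertible toy cipher (again a stand-in, not DES; the stages, not the cipher, are the point):

```python
# Toy sketch of 3-key triple encryption (encrypt with K1, then K2,
# finally K3), using a toy cipher in place of DES (NOT secure).
def enc(b: int, k: int) -> int:
    return (((b ^ k) * 7) + k) % 256

def dec(b: int, k: int) -> int:
    return ((((b - k) % 256) * 183) % 256) ^ k   # 183 = 7^-1 mod 256

def triple_encrypt(p: int, k1: int, k2: int, k3: int) -> int:
    # Three stages at the encryption site: K1, then K2, then K3.
    return enc(enc(enc(p, k1), k2), k3)

def triple_decrypt(c: int, k1: int, k2: int, k3: int) -> int:
    # Undo the stages in reverse order: K3 first, then K2, then K1.
    return dec(dec(dec(c, k3), k2), k1)

c = triple_encrypt(42, 11, 22, 33)
print(triple_decrypt(c, 11, 22, 33))   # 42
```

The same skeleton gives the two-key EDE variant by making the middle stage a decryption with K2 and reusing K1 in the third stage.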
14. Explain the DNS security threats.
Ans. Common DNS security threats are:
1. Distributed Denial of service (DDoS) :
(a) The attacker controls an overwhelming amount of computers (hundreds or
thousands) in order to spread malware and flood the victim’s computer with
unnecessary and overloading traffic.
(b) Eventually, unable to harness the power necessary to handle the intensive
processing, the systems will overload and crash.
2. DNS spoofing (also known as DNS cache poisoning):
(a) The attacker drives traffic away from real DNS servers and redirects it to a
pirate server, without the users' knowledge.
(b) This may result in the corruption or theft of a user's personal data.
3. Fast flux:
(a) Fast flux is a technique to constantly change location-based data in order to hide
where exactly the attack is coming from.
(b) This will mask the attacker’s real location, giving him the time needed to exploit
the attack.
(c) Flux can be single, double, or another variant. A single flux changes the address
of the web server, while a double flux changes both the address of the web server
and the names of the DNS servers.
4. Reflected attacks:
(a) Attackers will send thousands of queries, spoofing the source IP address so that
it appears to be the victim's address.
(b) When these queries are answered, they will all be redirected to the victim himself.
5. Reflective amplification DoS:
(a) When the size of the answer is considerably larger than the query itself, an
amplification effect is triggered.
(b) This generally uses the same method as a reflected attack, but this attack will
overwhelm the user’s system’s infrastructure further.
15. Explain SSL encryption. What are the steps involved in SSL server authentication?
Ans. SSL encryption:
1. SSL (Secure Sockets Layer), is an encryption-based Internet security protocol.
2. It is used for the purpose of ensuring privacy, authentication, and data integrity in
Internet communications.
3. In order to provide a high degree of privacy, SSL encrypts data that is transmitted
across the web. This means that anyone who tries to intercept this data will only see
a garbled mix of characters.
4. SSL initiates an authentication process called a handshake between two communicating
devices to ensure that both devices are really who they claim to be.
5. SSL also digitally signs data in order to provide data integrity, verifying that the data
is not tampered, before reaching its intended recipient.
Steps involved in SSL server authentication are:
1. The client requests access from the server to a specific user account, and also sends
the user’s certificate containing a public key to the server.
2. The server checks the CA (Certification of Authority) signature in the certificate and
consults a local database to see if the CA is trusted. If not, the certificate is rejected
and the user is not authenticated.
3. The server checks the validity of the certificate, for example, by consulting a Certificate
Revocation List (CRL) published by the CA. If the certificate has been revoked or has
expired, the certificate is rejected.
4. The client signs a value with the user’s private key.
5. The server verifies the signature with the user’s public key.
6. If the signature is successfully verified, the user is authenticated, and the server can
move on to authorising the requested access.
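Steps 4 and 5 above (the client signs a value with the private key; the server verifies it with the public key) can be illustrated with textbook RSA on tiny primes. Real deployments use 2048-bit or larger moduli with proper padding, and the challenge value below is made up:

```python
import hashlib

# Toy textbook RSA: tiny primes, no padding -- for illustration only.
p, q, e = 61, 53, 17
n = p * q                            # public modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

def sign(value: bytes) -> int:
    # Client: hash the challenge value, then sign with the private key.
    digest = int.from_bytes(hashlib.sha256(value).digest(), "big") % n
    return pow(digest, d, n)

def verify(value: bytes, signature: int) -> bool:
    # Server: recompute the hash and check it against the signature
    # using the public key from the certificate.
    digest = int.from_bytes(hashlib.sha256(value).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"server-challenge")
print(verify(b"server-challenge", sig))   # True
```

Signing a hash of the value, rather than the value itself, is exactly the pattern the certificate-based login steps above rely on.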
5. What is the algorithm of key exchange used in the parameter of a Cipher Suite?
(a) RSA (b) Fixed Diffie-Hellman
(c) Ephemeral Diffie-Hellman (d) All the above
6. Which hash algorithm does the DSS signature use?
(a) MD5 (b) SHA-2
(c) SHA-1 (d) Does not use a hash algorithm
7. Which hash algorithm does RSA signature use?
(a) MD5 (b) SHA-1
(c) MD5 and SHA-1 (d) Trap doors
8. What is the size of an RSA signature after MD5 and SHA-1 processing?
(a) 42 bytes (b) 32 bytes
(c) 36 bytes (d) 48 bytes
9. Which of the following is false for RSA algorithm :-
(a) Security of RSA depends on problem of factoring large number
(b) In software, RSA is 100 times slower than DES
(c) In hardware, RSA is 10 times slower than DES
(d) RSA can be faster than the symmetric algorithm
10. Cryptographic hash functions are :
(a) Easy to compute (b) Used in creating digital fingerprint
(c) Both (a) and (b) (d) None of the above
11. In public key distribution:-
(a) Public keys are published in a database
(b) Receiver decrypts the message using their private key
(c) Sender gets receiver’s public key from database
(d) All of the above
12. Some of the cryptography protocols are :-
(a) SSL (b) SET
(c) IPSec (d) All of the above
13. Which of the following is true of SSL (Secured Socket Layer) :-
(a) Client authentication is compulsory
(b) It is developed by Netscape
(c) Connection need not be encrypted
(d) All of the above
14. A public key certificate contains
(a) Private and public key of the entity being certified
(b) Digital signature algorithm id
(c) Identity of the receiver
(d) Both (a) and (b)
5 Internet Infrastructure
The Internet is different from the World Wide Web: the World Wide Web is the collection of
interlinked pages and resources hosted on servers and accessed through the internet. So, the
internet is the backbone of the web, as it provides the technical infrastructure on which the
WWW runs and acts as a medium to transmit information from one computer to another. Web
browsers display the information on the client, which they fetch from web servers.
Origin and Development of Internet
The first computer networks were dedicated special-purpose systems such as SABRE (an airline
reservation system) and AUTODIN I (a defence command-and-control system), both designed
and implemented in the late 1950s and early 1960s. The internet took shape with the creation
of its first working model, ARPANET (Advanced Research Projects Agency Network). Its
biggest achievement at the time was allowing multiple computers to work on a single network.
ARPANET used packet switching to connect multiple computer systems in a single network.
On 29 October 1969, the first message was transferred from one computer to another using
ARPANET.
Table 5.1 Brief history of Development of Internet
Year Event
1960 The internet has its origins in this period as a way for government researchers to
share information. Also, the first known modem and dataphone were introduced by AT&T.
1961 On May 31, 1961, Leonard Kleinrock released his first paper, "Information Flow in
Large Communication Nets."
1962 A paper discussing packetisation was released by Leonard Kleinrock. Also, this
year, Paul Baran suggested the transmission of data with the help of fixed-size
message blocks.
1964 Baran produced a study on distributed communications in 1964. In the same year,
Leonard Kleinrock released Communication Nets Stochastic Message Flow and
Design, the first book on packet nets.
1965 The first long-distance dial-up link was established between a TX-2 computer and a
Q-32 at SDC in California by Lawrence G. Roberts of MIT and Tom Marill of SDC.
Also, the word "packet" was coined by Donald Davies in this year.
1966 After their success at connecting over dial-up, Tom Marill and Lawrence G. Roberts
published a paper about it. In the same year, Robert Taylor brought Larry Roberts
into ARPA to develop ARPANET.
1967 In 1967, 1-node NPL packet net was created by Donald Davies. For packet switch,
the use of a minicomputer was suggested by Wes Clark.
1968 On 9 December 1968, Hypertext was publicly demonstrated by Doug Engelbart. The
first meeting regarding NWG (Network Working Group) was also held this year, and
on June 3, 1968, the ARPANET programme plan was published by Larry Roberts.
1969 On 7 April 1969, RFC #1, discussing the IMP software and introducing the
Host-to-Host protocol, was released by Steve Crocker. On 3 July 1969, UCLA issued
a press release announcing the Internet to the public. On August 29, 1969, UCLA
received the first network equipment and the first network switch. CompuServe, the
first commercial internet service, was founded the same year.
1970 This is the year in which NCP was released by the UCLA team and Steve Crocker.
1971 In 1971, Ray Tomlinson sent the first e-mail via a network to other users.
1972 In 1972, the ARPANET was initially demonstrated to the general public.
1973 TCP was created by Vinton Cerf in 1973, and it was released in December 1974 with
the help of Yogen Dalal and Carl Sunshine. ARPA also launched the first international
link, SATNET, this year. And, the Ethernet was created by Robert Metcalfe at the
Xerox Palo Alto Research Center.
1974 In 1974, Telenet, a commercial version of ARPANET, was introduced. Many consider
it to be the first Internet service provider.
1978 In 1978, to support real-time traffic, TCP was split into TCP and IP, driven by John
Shoch, David Reed, and Danny Cohen; this also helped create UDP. Later, on 1 January
1983, TCP/IP was standardised for ARPANET. Also, in the same year, the first worm
was developed by Jon Hupp and John Shoch at Xerox PARC.
1981 BITNET, the "Because It's Time Network", was established in 1981. It was a network
of IBM mainframe computers in the United States.
1983 In 1983, the TCP/IP was standardised by ARPANET, and the IAB, short for Internet
Activities Board was also founded in the same year.
1984 The DNS was introduced by Jon Postel and Paul Mockapetris.
1986 The first Listserv was developed by Eric Thomas, and NSFNET was also created in
1986. Additionally, BITNET II was created in the same year.
1988 The first T1 backbone was added to ARPANET, and BITNET and CSNET merged
to create CREN.
1989 A proposal for a distributed system was submitted by Tim Berners-Lee at CERN on
12 March 1989 that would later become the WWW.
1990 This year, NSFNET replaced the ARPANET. On 10 September 1990, Mike Parker, Bill
Heelan, and Alan Emtage released the first search engine Archie at McGill University
in Montreal, Canada.
1991 Tim Berners-Lee introduced the WWW (World Wide Web) on August 6, 1991. On
the same day, he also unveiled the first web page and website to the general public.
Also, this year, NSF began making the internet available to the public. The first web
server outside Europe came online on 1 December 1991.
1992 A major milestone came this year: the Internet Society was formed, and NSFNET
was upgraded to a T3 backbone.
1993 CERN released the Web source code into the public domain on April 30, 1993. This
caused the Web to experience massive growth. Also, this year, the United Nations
and the White House came online, helping to popularise top-level domains such as
.gov and .org. On 22 April 1993, the first widely-used graphical World Wide Web
browser, Mosaic, was released by the NCSA, developed by Eric Bina and Marc Andreessen.
1994 On April 4, 1994, James H. Clark and Marc Andreessen founded the Mosaic
Communications Corporation, later Netscape. On 13 October 1994, the first Netscape
browser, Mosaic Netscape 0.9, was released, which also introduced the Internet to
cookies. On 7 November 1994, the radio station WXYC announced it was broadcasting
on the Internet, becoming the first traditional radio station to do so. Also, in the same
year, the W3C was established by Tim Berners-Lee.
1995 In February 1995, Netscape introduced SSL (Secure Sockets Layer), and the dot-com
boom began. Also, the Opera web browser for browsing web pages was introduced
on 1 April 1995, and VocalTec introduced the first VoIP software for making voice
calls over the Internet. Later, the Internet Explorer web browser was introduced by
Microsoft on 16 August 1995. In RFC 1866, the next version of HTML, HTML 2.0,
was released on 24 November 1995. In 1995, JavaScript, originally known as LiveScript,
was created by Brendan Eich, at that time an employee at Netscape Communications
Corporation. LiveScript was renamed JavaScript with Netscape 2.0B3 on December 4,
1995. In the same year, Sun Microsystems introduced Java.
1996 This year, the Telecom Act took a big decision and deregulated data networks. Also,
Macromedia Flash, now known as Adobe Flash, was released in 1996. In December
1996, the W3C published CSS 1, the first CSS specification. More e-mail than postal
mail was sent in the USA this year. This is also the year in which the network ceased
to exist, as CREN ended its support.
1997 In 1997, the 802.11 (Wi-Fi) standard was introduced by IEEE, and the Internet2
consortium was also established.
1998 The first Internet weblogs arose in this year, and on February 10, 1998, XML became
a W3C recommendation.
1999 In September 1999, Napster began sharing files, and Marc Ostrofsky sold business.com,
then the most expensive Internet domain name, for $7.5 million on 1 December 1999.
Later, on 26 July 2007, this domain was sold for $345 million to R.H. Donnelley.
2000 The craze of dot-com began to decrease.
2003 The members of CREN decided to dissolve the organisation on 7 January 2003. Also,
this year, the Safari web browser came onto the market on 30 June 2003.
2004 The Mozilla Firefox web browser was released by Mozilla on 9 November 2004.
2008 On 1 March 2008, the support by AOL for the Netscape Internet browser was ended.
Then, the Google Chrome web browser was introduced by Google on 11 December
2008, and gradually it became a popular web browser.
2009 A person using the fictitious name Satoshi Nakamoto launched Bitcoin, an internet
currency, on 3 January 2009.
2014 On 28 October 2014, the W3C recommended and released HTML5, the latest version
of the HTML markup language, to the public.
Internet
If you connect to the Internet through an Internet Service Provider (ISP), you are usually
assigned a temporary IP address for the duration of your dial-in session. If you connect to the
Internet from a local area network (LAN) your computer might have a permanent IP address or
it might obtain a temporary one from a DHCP (Dynamic Host Configuration Protocol) server.
In any case, if you are connected to the Internet, your computer has a unique IP address.
Key Note about IP address
IP address stands for Internet Protocol address. Every PC/local machine has an IP address,
which is provided by the Internet Service Provider (ISP). The Internet Protocol is a set of rules
which govern the flow of data whenever a device is connected to the Internet. An IP address
differentiates computers, websites, and routers, just as unique identification documents such as
Aadhaar cards or PAN cards identify people. Every laptop and desktop has its own unique IP
address for identification; it is an important part of internet technology. An IP address is
displayed as a set of four numbers separated by dots, like 192.154.3.29. Each number in the
set ranges from 0 to 255, so the total IP address range is from 0.0.0.0 to 255.255.255.255.
You can check the IP address of your laptop or desktop from the Windows Start menu: go to
Network, then Status, and then Properties, where you can see the IP address. There are four
different types of IP addresses available:
•• Static IP address
•• Dynamic IP address
•• Private IP address
•• Public IP address
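The properties described above can be checked with Python's standard ipaddress module, including the 0-255 range of each number and whether an address is private or public:

```python
import ipaddress

# Classify some addresses; 192.168.x.x falls in a reserved private range.
for text in ["192.154.3.29", "192.168.178.237", "8.8.8.8"]:
    ip = ipaddress.ip_address(text)
    print(text, "private" if ip.is_private else "public")

# An octet above 255 is rejected outright.
try:
    ipaddress.ip_address("300.1.1.1")
except ValueError as err:
    print("invalid:", err)
```

The same module handles IPv6 addresses transparently, which is useful since modern hosts often carry both kinds.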
5.1.2 Protocol Stacks and Packets
So your computer is connected to the Internet and has a unique address. How does it ‘talk’ to
other computers connected to the Internet?
Consider an example: Let’s say your IP address is 192.168.178.237 and you want
to send a message to the computer 127.218.10.255. The message you want to send is “Hello
Computer 127.218.10.255”. Obviously, the message must be transmitted over whatever kind of
wire connects your computer to the Internet. Let’s say you’ve dialed into your ISP from home
and the message must be transmitted over the phone line. Therefore, the message must be
translated from alphabetic text into electronic signals, transmitted over the Internet and then
translated back into alphabetic text.
How is this accomplished? Through the use of a protocol stack. Every computer needs
one to communicate on the Internet and it is usually built into the computer’s operating system
(i.e. Windows, Unix, etc.). The protocol stack used on the Internet is referred to as the TCP/
IP protocol stack because its two major communication protocols are the Transmission
Control Protocol (TCP) and the Internet Protocol (IP).
The TCP/IP stack looks like this:
Table 5.2 Protocol Stacks and Packets
Protocol Layer Comments
Application Protocols Layer Protocols specific to applications such as WWW, e-mail,
FTP, etc.
Transmission Control Protocol Layer TCP directs packets to a specific application on a
computer using a port number.
Internet Protocol Layer IP directs packets to a specific computer using an IP
address.
Hardware Layer Converts binary packet data to network signals and back.
(E.g. Ethernet network card, modem for phone lines, etc.)
If we were to follow the path that the message “Hello Computer 127.218.10.255” took
from our computer to the computer with IP address 127.218.10.255, it would happen something
like this:
1. The message would start at the top of the protocol stack on your computer and work
its way downward.
2. If the message to be sent is long, each stack layer that the message passes through
may break the message up into smaller chunks of data. This is because data sent over
the Internet (and most computer networks) are sent in manageable chunks. On the
Internet, these chunks of data are known as packets.
3. The packets would go through the Application Layer and continue to the TCP layer.
Each packet is assigned a port number. Ports will be explained later, but suffice to
say that many programmes may be using the TCP/IP stack and sending messages.
We need to know which programme on the destination computer needs to receive
the message because it will be listening on a specific port.
4. After going through the TCP layer, the packets proceed to the IP layer. This is where
each packet receives its destination address 127.218.10.255.
5. Now that our message packets have a port number and an IP address, they are ready
to be sent over the Internet. The hardware layer takes care of turning our packets
containing the alphabetic text of our message into electronic signals and transmitting
them over the phone line.
6. On the other end of the phone line your ISP has a direct connection to the Internet.
The ISPs router examines the destination address in each packet and determines
where to send it. Often, the packet’s next stop is another router. More on routers and
Internet infrastructure later.
7. Eventually, the packets reach computer 127.218.10.255. Here, the packets start at
the bottom of the destination computer’s TCP/IP stack and work upwards.
8. As the packets go upwards through the stack, all routing data that the sending
computer’s stack added (such as IP address and port number) is stripped from the
packets.
9. When the data reaches the top of the stack, the packets have been re-assembled into
their original form, “Hello computer 127.218.10.255”.
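The journey above can be sketched as a toy packetiser in Python; the header fields are illustrative only, not the real TCP/IP header layout:

```python
# Toy packetisation: break a message into chunks, attach a destination
# port (TCP layer) and IP address (IP layer), then reassemble at the
# receiver, stripping the routing data and re-ordering by sequence number.
MESSAGE = "Hello Computer 127.218.10.255"
CHUNK = 8                                 # toy packet payload size

def packetise(message, dst_ip, dst_port):
    packets = []
    for seq, i in enumerate(range(0, len(message), CHUNK)):
        packets.append({
            "ip": dst_ip,                 # added at the IP layer
            "port": dst_port,             # added at the TCP layer
            "seq": seq,                   # lets the receiver re-order
            "payload": message[i:i + CHUNK],
        })
    return packets

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["payload"] for p in ordered)

pkts = packetise(MESSAGE, "127.218.10.255", 80)
print(reassemble(reversed(pkts)))   # Hello Computer 127.218.10.255
```

Feeding the packets in reversed order shows why sequence numbers matter: packets may arrive out of order after travelling through different routers.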
Fig. 5.4. Internet addresses: your computer (192.168.178.237), with its Application, TCP
and IP layers, connects through a modem and the public telephone network to the ISP's
modem pool, port server and router; from there, routers, CSU/DSU units and dedicated
lines carry the packets across the ISP backbone to the LAN of another computer
(127.218.10.255), which has the same protocol layers.
Fig. 5.4 shows the Internet addresses and connections in more detail. The physical
connection through the phone network to the Internet Service Provider might have been easy
to guess, but beyond that some explanation is needed.
The ISP maintains a pool of modems for their dial-in customers. This is managed by
some form of computer (usually a dedicated one) which controls data flow from the modem
pool to a backbone or dedicated line router. This setup may be referred to as a port server, as
it ‘serves’ access to the network. Billing and usage information is usually collected here as well.
After your packets traverse the phone network and your ISP’s local equipment, they are
routed onto the ISP’s backbone or a backbone the ISP buys bandwidth from. From here the
packets will usually journey through several routers and over several backbones, dedicated lines,
and other networks until they find their destination, the computer with address 127.218.10.255.
But wouldn’t it be nice if we knew the exact route our packets were taking over the
Internet? As it turns out, there is a way...
5.1.4 Internet Infrastructure
Internet infrastructure is responsible for hosting, storing, processing, and serving the information
that makes up websites, applications, and content. The Internet backbone is made up of many
large networks which interconnect with each other. These large networks are known as Network
Service Providers or NSPs. Some of the large NSPs are UUNet, Cerf Net, IBM, BBN Planet,
Sprint Net, PSINet, as well as others. These networks peer with each other to exchange packet
traffic. Each NSP is required to connect to three Network Access Points or NAPs. At the NAPs,
packet traffic may jump from one NSP’s backbone to another NSP’s backbone. NSPs also
interconnect at Metropolitan Area Exchanges or MAEs. MAEs serve the same purpose as
the NAPs but are privately owned. NAPs were the original Internet interconnect points. Both
NAPs and MAEs are referred to as Internet Exchange Points or IXs. NSPs also sell bandwidth
to smaller networks, such as ISPs and smaller bandwidth providers. Below is a picture showing
this hierarchical infrastructure.
Diagram 4. Hierarchical Internet infrastructure: NSP backbones (e.g. CERFNet, UUNet,
PSINet) interconnecting at NAPs and MAEs, with smaller ISPs connected below.
This is not a true representation of an actual piece of the Internet. Diagram 4 is only
meant to demonstrate how the NSPs could interconnect with each other and smaller ISPs.
To understand in an easy way how the Internet actually works, a step-by-step
process is given below:
The Internet works by using a packet routing network that follows the Internet Protocol (IP)
and Transmission Control Protocol (TCP). TCP and IP work together to ensure that data transmission
across the internet is consistent and reliable, no matter which device you’re using or where
you’re using it.
•• When data is transferred over the internet, it’s delivered in messages and packets.
Data sent over the internet is called a message, but before messages get sent, they’re
broken up into tinier parts called packets.
•• These messages and packets travel from one source to the next using the Internet
Protocol (IP) and Transmission Control Protocol (TCP). IP is a system of rules that
governs how information is sent from one computer to another over an internet connection.
•• Using a numerical address (IP Address) the IP system receives further instructions on
how the data should be transferred.
•• The Transmission Control Protocol (TCP) works with IP to ensure the transfer of data is
dependable and reliable. This helps to make sure that no packets are lost, packets
are reassembled in proper sequence, and there’s no delay negatively affecting the
data quality.
Fig. Client devices (PC, laptop, tablet) send requests through routers and optical fibres
to servers, and the servers provide information in response to the requests.
For example, when you type a web address into your browser, the whole process can
be understood step-by-step:
Step 1: Your PC or device is connected to the web through a modem or router. Together,
these devices allow you to connect to other networks around the globe. Your router enables
multiple computers to join the same network while a modem connects to your ISP (Internet
Service Provider) which provides you with either cable or DSL internet.
Step 2: Type in a web address, known as a URL (Uniform Resource Locator). Each
website has its own unique URL that signals to your ISP where you want to go.
Step 3: Your query is pushed to your ISP, which connects to several servers that store
and send data, such as a NAP (Network Access Point) server and a DNS (Domain Name Server).
Next, your browser looks up the IP address for the domain name you typed into your
search engine through DNS. DNS then translates the text-based domain name you type into
the browser into the number-based IP address.
Example: Google.com becomes 64.233.191.255
Step 4: Your browser sends a Hypertext Transfer Protocol (HTTP) request to the target
server to send a copy of the website to the client using TCP/IP.
Step 5: The server then approves the request and sends a “200 OK” message to your
computer. Then, the server sends website files to the browser in the form of data packets.
Step 6: As your browser reassembles the data packets, the website loads allowing you
to learn, shop, browse, and engage.
Step 7: Enjoy your search results!
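Steps 3 to 5 above can be reproduced from a program: the system resolver performs the DNS lookup, and an HTTP client library sends the GET request over TCP. Below is a minimal Python sketch; the hostnames are illustrative.

```python
# Minimal sketch of Steps 3 to 5: ask DNS for the IP address behind a
# hostname, then send an HTTP GET request over TCP.
import socket
import http.client

def resolve(hostname: str) -> str:
    """Ask the system resolver (and ultimately DNS) for the host's IP."""
    return socket.gethostbyname(hostname)

def fetch_status(hostname: str) -> int:
    """Send an HTTP GET and return the status code (200 means OK)."""
    conn = http.client.HTTPConnection(hostname, 80, timeout=10)
    conn.request("GET", "/")
    status = conn.getresponse().status
    conn.close()
    return status

# "localhost" resolves without contacting an external DNS server:
print(resolve("localhost"))        # 127.0.0.1
# fetch_status("example.com") would return 200 when the site is reachable
```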
306 Computer System Security
Diagram: common uses of the Internet, including job searching, banking, internet marketing,
and internet security.
•• The Internet allows us to communicate with people at remote locations. There are
various apps available on the web that use the Internet as a medium for communication.
One can find various social networking sites such as:
ii Facebook
ii Twitter
ii Yahoo
ii Google+
ii Flickr
ii Orkut
•• One can search for any kind of information over the internet. Information regarding
various topics such as technology, health and science, social studies, geographical
information, information technology, products, etc., can be found with the help of a
search engine.
•• Apart from communication and information, the internet also serves as a medium
for entertainment. The following are various modes of entertainment over the internet:
ii Online Television
ii Online Games
ii Songs
ii Videos
ii Social Networking Apps
•• Internet allows us to use many services like:
ii Internet Banking
ii Matrimonial Services
ii Online Shopping
Diagram: disadvantages of the Internet, including threats to personal information, spamming,
virus attacks, and cyber crime.
•• There is always a chance of losing personal information such as your name, address,
or credit card number. Therefore, one should be very careful while sharing such
information. One should use credit cards only on authenticated sites.
•• Another disadvantage is spamming. Spamming refers to unwanted e-mails sent in
bulk. These e-mails serve no purpose and can clog up an entire system.
•• Viruses can easily spread to computers connected to the internet. Such virus attacks
may cause your system to crash, or your important data may get deleted.
•• Also, one of the biggest threats on the internet is pornography. There are many
pornographic sites that children may come across while using the internet, which can
adversely affect their mental health.
•• There are various websites that do not provide authenticated information. This
leads to misconceptions among many people.
Key Points:
•• January 1, 1983 is considered the official birthday of the Internet.
•• Computer scientists Vinton Cerf and Bob Kahn are credited with inventing the Internet.
•• The Internet uses the standard TCP/IP protocol suite.
•• Every computer on the internet is identified by a unique IP address.
2. Hacking and remote access: Hackers are always looking to exploit a private
network or system’s vulnerabilities so that they can steal confidential information and data.
Remote access technology gives them another target to exploit. Remote access software allows
users to access and control a computer remotely – and since the pandemic, with more people
working remotely, its usage has increased.
The protocol which allows users to control a computer connected to the internet remotely is
called Remote Desktop Protocol, or RDP. Because businesses of all sizes so widely use RDP, the
chances of an improperly secured network are relatively high. Hackers use different techniques
to exploit RDP vulnerabilities until they have full access to a network and its devices. They may
carry out data theft themselves or else sell the credentials on the dark web.
3. Malware and malvertising: Malware is a portmanteau of “malicious” and “software”.
It’s a broad term related to viruses, worms, Trojans, and other harmful programmes that hackers
use to cause havoc and steal sensitive information. Any software intended to damage a computer,
server, or network can be described as malware.
Malvertising is a portmanteau of “malicious” and “advertising”. The term refers to online
advertising, which distributes malware. Online advertising is a complex ecosystem involving
publisher websites, ad exchanges, ad servers, retargeting networks, and content delivery
networks. Malvertisers exploit this complexity to place malicious code in places that publishers
and ad networks don’t always detect. Internet users who interact with a malicious ad could
download malware onto their device or be redirected to malicious websites.
4. Ransomware: Ransomware is a type of malware that prevents you from using
your computer or accessing specific files on your computer unless a ransom is paid. It is often
distributed as a Trojan – that is, malware disguised as legitimate software. Once installed, it
locks your system’s screen or certain files until you pay.
Because of their perceived anonymity, ransomware operators typically specify payment
in cryptocurrencies such as Bitcoin. Ransom prices vary depending on the ransomware variant
and the price or exchange rate of digital currencies. It isn’t always the case that if you pay, the
criminals will release the encrypted files.
Ransomware attacks are on the rise, and new ransomware variants continue to emerge.
Some of the most talked-about ransomware variants include Maze, Conti, GoldenEye, Bad
Rabbit, Jigsaw, Locky, and WannaCry.
5. Botnets: The term botnet is a contraction of “robot network”. A botnet is a network
of computers that have been intentionally infected by malware so they can carry out automated
tasks on the internet without the permission or knowledge of the computers’ owners. Once a
botnet’s owner controls your computer, they can use it to carry out malicious activities. These
include:
•• Generating fake internet traffic on third party websites for financial gain.
•• Using your machine’s power to assist in Distributed Denial of Service (DDoS) attacks
to shut down websites.
•• Emailing spam to millions of internet users.
•• Committing fraud and identity theft.
•• Attacking computers and servers.
Computers become part of a botnet in the same ways that they are infected by any
other type of malware: for example, by opening email attachments that download malware or
visiting websites infected with malware. They can also spread from one computer to another
via a network. The number of bots in a botnet varies and depends on the ability of the botnet
owner to infect unprotected devices.
6. Code Injection (Remote Code Execution): To attempt a code injection, an attacker
will search for places your application accepts user input – such as a contact form, data-entry
field, or search box. Then, through experimentation, the hacker learns what various requests
and field content will do. For example, if your site’s search function places terms into a database
query, they will attempt to inject other database commands into search terms. Alternatively, if
your code pulls functions from other locations or files, they will attempt to manipulate those
locations and inject malicious functions.
How to Prevent: Besides server or network-level protections like Cloudflare and Liquid
Web’s Server Secure Plus, it is also important to address this security issue from a development
perspective. Keep any framework, CMS, or development platform regularly updated with
security patches. When programming, follow best practices regarding input sanitisation. No
matter how insignificant, all user input should be checked against a basic set of rules for what
input is expected. For example, if the expected input is a five-digit number, add code to remove
any input which is not a five-digit number. To help prevent SQL injections, many scripting
languages include built-in functions to sanitise input for safe SQL execution. Use these functions
on any variables that build database queries.
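The two pieces of advice above — strict input validation and safe SQL execution — can be sketched in Python with the standard sqlite3 module. The table name and values below are made up for illustration.

```python
# Sketch of input validation plus a parameterised query, so user input
# is never spliced directly into SQL text. Table and values are made up.
import re
import sqlite3

def validate_five_digits(value: str) -> str:
    """Accept only a five-digit number, rejecting everything else."""
    if not re.fullmatch(r"\d{5}", value):
        raise ValueError("expected a five-digit number")
    return value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (zip_code TEXT)")
conn.execute("INSERT INTO orders VALUES ('90210')")

zip_code = validate_five_digits("90210")
# The '?' placeholder lets the driver quote the value safely:
rows = conn.execute(
    "SELECT zip_code FROM orders WHERE zip_code = ?", (zip_code,)
).fetchall()
assert rows == [("90210",)]

try:
    validate_five_digits("90210' OR '1'='1")   # classic injection attempt
except ValueError:
    print("malicious input rejected before it ever reaches the query")
```

Note the two layers: the validation function enforces the expected format, and the `?` placeholder ensures that even unexpected input cannot change the structure of the query.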
7. DDoS Attack: Distributed Denial of Service (DDoS) attacks are generally not
attempting to gain access. However, they are sometimes used in conjunction with brute force
attacks (explained below) and other attack types as a way to make log data less useful during
your investigation. For example, the hacker may directly attack your application layer by
overwhelming your site with more requests than it can handle. They may not even view an
entire page - just a single image or script URL with a flood of concurrent requests. Beyond the
traffic flood making your site unreachable (which any volumetric attack will do), a Layer 7 attack
can inflict further damage by flooding order queues or polling data with bogus transactions that
require extensive and costly manual verification to sort out.
How to Prevent: Blocking such an attack can be nearly impossible by conventional
means. There is generally no security issue being exploited. The requests themselves are not
malicious and deliberately blend in with normal traffic. The more widely distributed the attack,
the more difficult it is to distinguish legitimate requests from those that are not. If you’re not able
to use a DDoS protection service, options are fairly limited and vary case by case. The most
effective measures absorb all the traffic by increasing available server and network resources
to accommodate the additional traffic until the attack subsides or can be isolated.
8. Cross-Site Request Forgery (CSRF) and Cross-Site Scripting (XSS) Attacks: JavaScript
and other browser-side scripting methods are commonly used to dynamically update page
content with external information such as a social media feed, current market information, or
revenue-generating advertisements. Hackers use XSS to attack your customers by using your
site as a vehicle to distribute malware or unsolicited advertisements. As a result, your company’s
reputation can be tarnished, and you may lose customer trust.
•• The data is included in dynamic content that is sent to a web user without being
validated for malicious content.
The malicious content sent to the web browser often takes the form of a segment of
JavaScript, but may also include HTML, Flash, or any other type of code that the browser may
execute. The variety of attacks based on XSS is almost limitless, but they commonly include
transmitting private data, like cookies or other session information, to the attacker, redirecting
the victim to web content controlled by the attacker, or performing other malicious operations
on the user’s machine under the guise of the vulnerable site.
Three basic types of XSS are defined as follows:
1. Stored XSS (AKA Persistent or Type I): Stored XSS generally occurs when user input
is stored on the target server, such as in a database, in a message forum, visitor log, comment
field, etc. And then a victim is able to retrieve the stored data from the web application without
that data being made safe to render in the browser. With the advent of HTML5, and other
browser technologies, we can envision the attack payload being permanently stored in
the victim’s browser, such as an HTML5 database, and never being sent to the server at all.
2. Reflected XSS (AKA Non-Persistent or Type II): Reflected XSS occurs when user
input is immediately returned by a web application in an error message, search result, or any
other response that includes some or all of the input provided by the user as part of the request,
without that data being made safe to render in the browser, and without permanently storing the
user provided data. In some cases, the user provided data may never even leave the browser.
3. DOM Based XSS (AKA Type-0): As defined by Amit Klein, who published the first
article about this issue, DOM Based XSS is a form of XSS where the entire tainted data flow
from source to sink takes place in the browser, i.e., the source of the data is in the DOM, the
sink is also in the DOM, and the data flow never leaves the browser. For example, the source
(where malicious data is read) could be the URL of the page (e.g., document.location.href), or
it could be an element of the HTML, and the sink is a sensitive method call that causes the
execution of the malicious data (e.g., document.write).
Note: For years, most people thought of these (Stored, Reflected, DOM) as three different
types of XSS, but in reality, they overlap. You can have both Stored and Reflected DOM Based
XSS. You can also have Stored and Reflected Non-DOM Based XSS too, but that’s confusing,
so to help clarify things, starting about mid 2012, the research community proposed and started
using two new terms to help organise the types of XSS that can occur:
(i) Server XSS (ii) Client XSS
Server XSS
Server XSS occurs when untrusted user supplied data is included in an HTML response
generated by the server. The source of this data could be from the request, or from a stored
location. As such, you can have both Reflected Server XSS and Stored Server XSS.
In this case, the entire vulnerability is in server-side code, and the browser is simply
rendering the response and executing any valid script embedded in it.
Client XSS
Client XSS occurs when untrusted user supplied data is used to update the DOM with an
unsafe JavaScript call. A JavaScript call is considered unsafe if it can be used to introduce
valid JavaScript into the DOM. The source of this data could be from the DOM, or it could
have been sent by the server (via an AJAX call, or a page load). The ultimate source of the
data could have been from a request, or from a stored location on the client or the server. As
such, you can have both Reflected Client XSS and Stored Client XSS.
Defenses and Protections Against XSS
• Recommended Server XSS Defenses
Server XSS is caused by including untrusted data in an HTML response. The easiest and the strongest
defence against Server XSS in most cases is: Context-sensitive server side output encoding. The
details on how to implement Context-sensitive server side output encoding are presented in the
OWASP XSS (Cross Site Scripting) Prevention Cheat Sheet in great detail. Input validation
or data sanitisation can also be performed to help prevent Server XSS, but it’s much more
difficult to get correct than context-sensitive output encoding.
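As a rough illustration of output encoding in the HTML body context, Python's standard html module can encode untrusted data before it is placed in a response. Real applications should rely on their framework's auto-escaping; other contexts (attributes, JavaScript, URLs) need different encoders, as the OWASP cheat sheet explains.

```python
# Sketch of output encoding for the HTML body context using the
# standard library. render_comment is a hypothetical helper.
import html

def render_comment(untrusted: str) -> str:
    """Encode untrusted data before including it in an HTML response."""
    return "<p>" + html.escape(untrusted, quote=True) + "</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

The encoded output renders in the browser as literal text rather than executing as script, which is exactly what "made safe to render in the browser" means above.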
• Recommended Client XSS Defenses
Client XSS is caused when untrusted data is used to update the DOM with an unsafe JavaScript
call. The easiest and the strongest defence against Client XSS is: - Using safe JavaScript APIs
However, developers frequently don’t know which JavaScript APIs are safe or not, never
mind which methods in their favorite JavaScript library are safe. Some information on which
JavaScript and jQuery methods are safe and unsafe is presented in Dave Wichers’ DOM Based
XSS talk presented at OWASP AppSec USA in 2012: Unraveling some of the Mysteries around
DOM Based XSS. If you know that a JavaScript method is unsafe, our primary recommendation
is to find an alternative safe method to use. If you can’t for some reason, then context sensitive
output encoding can be done in the browser, before passing that data to the unsafe JavaScript
method. OWASP’s guidance on how to do this properly is presented in the DOM based XSS
Prevention Cheat Sheet. Note that this guidance is applicable to all types of Client XSS,
regardless of where the data actually comes from (DOM or Server).
• How do I detect if a website is vulnerable?
If your website allows performing a site function using a static URL or POST request (i.e. one
that doesn’t change), then CSRF may be possible. If this command is performed through GET, then it
is a much higher risk. If the site is purely POST, see “Can applications using only POST be
vulnerable?” for use cases. A quick test would involve browsing the website through a proxy
such as Paros and recording the requests made. At a later time, perform the same action
and see if the requests are performed in an identical manner (your cookie will probably
change). If you are able to perform the same function using the same GET or POST request
repeatedly, then the site application may be vulnerable.
• Methods of CSRF mitigation
A number of effective methods exist for both prevention and mitigation of CSRF attacks. From
a user’s perspective, prevention is a matter of safeguarding login credentials and denying
unauthorised actors access to applications. Best practices include:
•• Logging off web applications when not in use
•• Securing usernames and passwords
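On the application side, a common CSRF mitigation is the synchroniser token pattern: the server issues an unpredictable per-session token, embeds it in each form, and rejects state-changing requests whose token does not match. Below is a minimal Python sketch with hypothetical helper names, not taken from any particular framework.

```python
# Sketch of the synchroniser token pattern: the server stores a random
# token in the user's session, embeds it in every form, and rejects
# state-changing requests whose submitted token differs.
import hmac
import secrets

def issue_csrf_token() -> str:
    """Generate an unpredictable token to store in the user's session."""
    return secrets.token_urlsafe(32)

def verify_csrf_token(session_token: str, submitted_token: str) -> bool:
    """Compare in constant time to avoid timing side channels."""
    return hmac.compare_digest(session_token, submitted_token)

session_token = issue_csrf_token()
assert verify_csrf_token(session_token, session_token)        # genuine form post
assert not verify_csrf_token(session_token, "forged-value")   # cross-site forgery fails
```

Because an attacker's page cannot read the victim's session token, a forged cross-site request cannot supply the matching value.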
How to Prevent: This Internet security issue can be challenging to address because
an attacker at this stage is generally taking careful steps to remain hidden. Many systems will
print connection information from your previous session when you log in. Be aware of this
information where available, and be mindful of activity that isn’t familiar.
Wi-Fi threats, in public and at home
Public Wi-Fi carries risks because the security on these networks – in coffee shops, shopping
malls, airports, hotels, restaurants, and so on – is often lax or non-existent. The lack of security
means that cybercriminals and identity thieves can monitor what you are doing online and steal
your passwords and personal information. Other public Wi-Fi dangers include:
•• Packet sniffing – attackers monitor and intercept unencrypted data as it travels
across an unprotected network.
•• Man-in-the-middle attacks – attackers compromise a Wi-Fi hotspot to insert
themselves into communications between the victim and the hotspot to intercept
and modify data in transit.
•• Rogue Wi-Fi networks – attackers set up a honeypot in the form of free Wi-Fi
to harvest valuable data. The attacker’s hotspot becomes the conduit for all data
exchanged over the network.
5.2.1 Protection of Data Online
If you are wondering how to ensure internet protection and how to protect your data online,
sensible internet security tips you can follow include:
1. Enable multifactor authentication: Multifactor authentication (MFA) is an
authentication method that asks users to provide two or more verification methods to access an
online account. For example, instead of simply asking for a username or password, multifactor
authentication goes further by requesting additional information, such as:
•• An extra one-time password that the website’s authentication servers send to the
user’s phone or email address.
•• Answers to personal security questions.
•• A fingerprint or other biometric information, such as voice or face recognition.
Multifactor authentication decreases the likelihood of a successful cyber-attack. To make
your online accounts more secure, it is a good idea to implement multifactor authentication
where possible. You can also consider using a third-party authenticator app, such as Google
Authenticator and Authy, to help with internet security.
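Apps like Google Authenticator typically implement the TOTP scheme from RFC 6238: the app and the website share a secret, and both derive a short code from that secret and the current 30-second time window. The sketch below uses the RFC's published test secret, not a real credential.

```python
# Sketch of how authenticator apps derive one-time codes per RFC 6238
# (TOTP over HMAC-SHA1). The base32 secret is the RFC's test secret.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestamp: float, digits: int = 6, step: int = 30) -> str:
    """Compute the RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(timestamp // step)                 # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"          # base32 of "12345678901234567890"
assert totp(secret, 59, digits=8) == "94287082"      # RFC 6238 test vector
print(totp(secret, time.time()))                     # the code an app would show now
```

Because both sides compute the same code independently, the server can verify the user's second factor without the code ever being stored or transmitted in advance.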
2. Use a firewall: A firewall acts as a barrier between your computer and another
network, such as the internet. Firewalls block unwanted traffic and can also help to block
malicious software from infecting your computer. Often, your operating system and security
system come with a pre-installed firewall. It is a good idea to make sure those features are turned
on, with your settings configured to run updates automatically, to maximise internet security.
3. Choose your browser carefully: Our browsers are our primary gateway to the web
and therefore play a key role in internet security. A good web browser should be secure and
help to protect you from data breaches. The Freedom of the Press Foundation has compiled
a detailed guide here, explaining the security pros and cons of the leading web browsers on
the market.
•• Look into third-party email spam filters. These provide an additional layer of
cybersecurity, as emails have to travel through two spam filters before getting to you
– your email provider’s spam filter plus the third-party app.
If you do find yourself overwhelmed with spam, it could be a sign that your email address
has been exposed in a data breach. When this happens, it is recommended to change
your email address.
7. Network security: Network security refers to any activity designed to protect the
usability and integrity of your network and data. It targets a variety of threats and stops them
from entering or spreading on your network.
8. Wi-Fi router security: Your Wi-Fi router is an essential aspect of internet security. It
checks all incoming and outgoing traffic and controls access to your Wi-Fi network and, through
that, your phones, computers, and other devices. Router security has improved in recent years,
but there are still steps you can take to enhance internet protection.
Changing the default settings of your router, such as the default router name and login
details, is an important first step. This can help to make your Wi-Fi network less of a target for
potential hackers, as it indicates that the router is being actively managed.
There are various features and settings you can disable to increase the security of your
Wi-Fi router. Features such as remote access, Universal Plug and Play, and Wi-Fi Protected
Setup can all be taken advantage of by malware programs. While they may be convenient,
turning them off makes your home network safer.
• Consider using a VPN
The best way to protect your data online when using public Wi-Fi is to use a virtual private
network (VPN). A VPN creates an encrypted tunnel between you and a remote server operated
by a VPN service. All your internet traffic is routed through this tunnel, which makes your data
more secure. If you connect to a public network using VPN, other people on that network should
not be able to see what you are doing – providing enhanced internet protection.
• Network security and the Internet of Things
The Internet of Things (IoT) is a term used to describe physical devices other than computers,
phones, and servers, which connect to the internet and can collect and share data. Examples
of IoT devices include wearable fitness trackers, smart refrigerators, smart watches, and voice
assistants like Amazon Echo and Google Home. It is estimated that by 2026, there will be 64
billion IoT devices installed around the world.
All these devices connected to the internet create new opportunities for information to
be compromised. Not only is more data than ever being shared through the IoT, but the nature
of that data is often highly sensitive. This underlines the need to be aware of internet security
threats and to practice good cybersecurity hygiene.
9. Internet mobile security: Mobile security refers to the techniques used to secure data
on mobile devices such as smartphones and tablets and is another aspect of internet protection.
10. Phone Tapping: Your smartphone can be vulnerable to tapping, especially if it
has been jailbroken or rooted. Phone tapping can allow third parties to listen to your calls or
read messages. If you’re concerned your phone may have been hacked, you can look out for
signs like unusual background noise on calls, your phone’s battery depleting faster than usual,
or your phone behaving in strange ways.
If your phone seems to be turning itself on or off without your input, or if apps appear
that you don’t remember installing yourself, that could indicate that somebody else has access
to your phone. Receiving strange SMS messages, containing a garbled series of letters and
numbers, or getting a higher than usual phone bill could also indicate phone tapping. If you
have concerns about your mobile security, you can read more mobile security advice here.
11. Phone spoofing: Spoofing generally involves cybercriminals trying to convince you
that information is coming from a trusted source. Phone spoofing is when scammers deliberately
falsify the information which appears on your caller ID to disguise their identity. They do this so
that victims think an incoming call is coming from their local area or a number they recognise.
To stop phone spoofing, check to see if your phone carrier has a service or app that helps
identify and prevent spam calls. You can also look into third-party apps such as RoboKiller or
Nomorobo to help you screen calls – but be aware that these apps require you to share private
data with them. Often, if you receive a call from an unknown number, the best thing to do is
not answer it. Answering scam calls is a bad idea because the scammers then perceive you as
a potential target.
• How to remove spy software from your phone?
If you’re seeing signs that your smartphone has spyware, look at the apps installed on your
device. Remove anything that you are unsure of, or don’t remember installing.
Updating your phone’s operating system can help, as can more extreme measures such
as resetting your phone to factory settings. While this might be inconvenient, it can be well
worth doing if you’re concerned that your phone security has been compromised.
You can use Kaspersky Internet Security for Android to identify and remove malicious viruses and
malware from Android phones. Our detailed article on how to remove a virus from Android
explains how you can also do this manually.
• Some Internet safety tips: How to protect yourself online
So, what are the best internet protection methods? Follow these best practices to protect yourself
from internet security threats and different types of internet attacks:
•• You need internet security software that protects you round the clock. The
best internet security software will protect you from a range of internet security threats,
including hacking, viruses, and malware. A comprehensive internet security product
should be able to locate device vulnerabilities, block cyberthreats before they take
hold, and isolate and remove immediate dangers.
•• Block webcam access, so your internet privacy is assured. Webcam hacking
is when hackers access your mobile and computer cameras and record you. This
internet security threat is known as “camfecting”. The number of recorded attacks is
relatively low, although most occur without the victim ever realising they have been
compromised, which means they go unaccounted for.
•• An ad-blocker can protect you from malvertising. Ad blockers clear web pages of
ads – and by blocking ads from displaying, you remove the risk of seeing and clicking
on an ad that may be harmful. Ad blockers also have other benefits. For example, they
may reduce the number of cookies stored on your machine, increase your internet
privacy by reducing tracking, save bandwidth, help pages load faster, and prolong
battery life on mobile devices. Some ad blockers are free, while others cost money.
Bear in mind that not all ad blockers block every online ad, and some websites may
not run properly if you have the ad blocker turned on. You can, however, enable ad
blockers to allow online ads from specific websites.
•• Take care of the whole family with parental controls. Parental controls refer to
the settings that enable you to control what content your child can see on the internet.
Parental controls, used in conjunction with privacy settings, can help increase internet
security for kids. Setting up parental controls varies by platform and device – Internet
Matters has a comprehensive series of step-by-step guides for each platform. You can
also consider the use of a parental control app, such as Kaspersky Safe Kids.
•• Use a PC cleaner. A PC cleaner is a tool that removes unnecessary and temporary
files and programmes from your system. Kaspersky Total Security has a PC cleaner
feature that allows you to find and remove applications and browser extensions you
rarely use or that were installed without your consent.
•• Cross-platform protection. Internet protection these days needs to cover all the
devices we use to go online – laptops, desktops, smartphones, and tablets. The best
internet security software will allow you to install the antivirus programme on multiple
devices, giving you cross-platform protection from internet security threats.
•• Safe online banking and online shopping. Online shopping security tips to
remember include:
ii Make sure you’re transacting with a secure website – the URL should start with
https:// rather than http:// - the “s” stands for “secure” and indicates that the site
has a security certificate. There should also be a padlock icon to the left of the
address bar.
ii Check the URL carefully – criminals can create fake websites with URLs that are
similar to legitimate ones. They often change one or two letters in the URL to
deceive people.
ii Avoid submitting financial information when using public Wi-Fi.
•• Online banking security tips include:
ii Again, avoid submitting financial or personal information when using public Wi-Fi.
ii Use strong passwords and change them regularly.
ii Use multifactor authentication where possible.
ii Type your bank URL or use your banking app directly, instead of clicking on links
in emails – to avoid falling victim to a phishing scam.
ii Check bank statements regularly to identify any transactions you don’t recognise.
ii Keep your operating system, browser, and applications up to date. This will ensure
that any known vulnerabilities are patched.
ii Use a robust internet security product, such as the products offered by Kaspersky.
In a world where we spend much of our lives online, internet security is an important
issue. Understanding how to overcome internet security threats and different types of internet
attacks is the key to staying safe and protecting your data online.
Diagram: types of routing, including static routing, default routing, and dynamic routing.
•• It can be used for packet filtering, firewalling, or proxy servers. At the highest level of
a network, admins usually configure the default route for a given host to point to a
router that has a connection to the network service provider.
Disadvantages of Default Routing
•• The more complex the network is, the more difficult it can be to set up and use
efficiently.
•• Dynamic Routing: It is also known as Adaptive Routing. It is a technique in which
a router adds new routes to the routing table in response to changes in the condition
or topology of the network. Dynamic protocols are used to discover new routes to
reach the destination. In Dynamic Routing, protocols such as RIP and OSPF are used
to discover new routes. If any route goes down, an automatic adjustment is made to
reach the destination.
The Dynamic protocol should have the following features:
•• All the routers must have the same dynamic routing protocol in order to exchange
the routes.
•• If a router discovers any change in the condition or topology, it broadcasts this
information to all other routers.
Advantages of Dynamic Routing
•• It is easier to configure.
•• It is more effective in selecting the best route in response to the changes in the condition
or topology.
Disadvantages of Dynamic Routing
•• It is more expensive in terms of CPU and bandwidth usage.
•• It is less secure as compared to default and static routing.
Routing protocols
In networking, a protocol is a standardised way of formatting data so that any connected
computer can understand the data. A routing protocol is a protocol used for identifying or
announcing network paths.
Internet routing protocol
The Internet Protocol (IP) is the protocol that describes how to route messages from one
computer to another computer on the network. Each message is split up into packets, and the
packets hop from router to router on the way to their destination.
Example of the routing process using IP addresses
Fig. 5.11 shows a client sending a packet to a server. A network of 10 routers is shown
between the client and the server, with various lines connecting them. There’s a path from the
client through the routers to the server, highlighted with green arrows.
Let’s step through the process of routing a packet from a source to a destination.
Step 1: Send packet to router
Computers send the first packet to the nearest router. A router is a type of computing
device used in computer networks that helps move the packets along.
Diagram with laptop on left and router on right. An arrow goes from the laptop to the
router, carrying a packet labelled “To: 91.198.174.192” and “From: 216.3.192.1”.
Step 2: Router receives packet
When the router receives a packet, it looks at its IP header. The most important field is
the destination IP address, which tells the router where the packet wants to end up.
IP header
Field                     Content
Source IP Address         216.3.192.1
Destination IP Address    91.198.174.192
IP Version                IPv4
Time to Live              64
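The header fields in the table above occupy fixed positions in the 20-byte IPv4 header. The sketch below builds and parses a minimal header to show where those fields live; it is illustrative only (the checksum is left at zero, and the function names are our own), using the addresses from this example.

```python
import struct
import socket

def build_ipv4_header(src, dst, ttl=64, proto=6, total_len=40):
    """Pack a minimal 20-byte IPv4 header (checksum left at 0)."""
    ver_ihl = (4 << 4) | 5  # version 4, header length 5 * 4 = 20 bytes
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, total_len, 0, 0, ttl, proto, 0,
                       socket.inet_aton(src), socket.inet_aton(dst))

def parse_ipv4_header(raw):
    """Unpack the fields a router inspects, notably the destination."""
    ver_ihl, _, _, _, _, ttl, _, _, src, dst = struct.unpack(
        "!BBHHHBBH4s4s", raw[:20])
    return {"version": ver_ihl >> 4, "ttl": ttl,
            "src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst)}

hdr = build_ipv4_header("216.3.192.1", "91.198.174.192")
print(parse_ipv4_header(hdr))
# → {'version': 4, 'ttl': 64, 'src': '216.3.192.1', 'dst': '91.198.174.192'}
```

The destination address extracted this way is exactly what the router uses in the next step.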
Step 3: Router forwards packet
The router has multiple paths it could send a packet along, and its goal is to send the
packet to a router that’s closer to its final destination.
Diagram with a router on left and 3 routers on right. The left router has a line going
to each of the right routers, and the lines are labelled 1, 2, and 3. A question mark is shown
above each line.
How does it decide? The router has a forwarding table that helps it pick the next path
based on the destination IP address. That table does not have a row for every possible IP
address; there are 2^32 possible IP addresses, and that’s far too many to store. Instead, the
table has rows for IP address prefixes.
IP address prefix    Path
91.112               #1
91.198               #2
192.92               #3
IP addresses are hierarchical. When two IP addresses start with the same prefix, that
often means they’re on the same large network, like the Comcast SF network. Router forwarding
tables take advantage of that fact so that they can store far less information.
Once the router locates the most specific row in the table for the destination IP address,
it sends the packet along that path.
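The “most specific row” rule above is longest-prefix matching, which can be sketched as follows. The table entries extend the example prefixes with an extra, hypothetical more-specific row; real routers use tries or TCAM hardware rather than this linear scan.

```python
import ipaddress

# Toy forwarding table: prefix -> outgoing path. The /24 row is a
# hypothetical more-specific entry added for illustration.
table = {
    "91.112.0.0/16": "path 1",
    "91.198.0.0/16": "path 2",
    "91.198.174.0/24": "path 2a",
    "192.92.0.0/16": "path 3",
}

def lookup(dst):
    """Return the path of the longest (most specific) matching prefix."""
    best = None
    for prefix, path in table.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, path)
    return best[1] if best else "default path"

print(lookup("91.198.174.192"))  # → path 2a  (most specific match wins)
```

Both /16 and /24 rows match the destination 91.198.174.192, but the /24 row is more specific, so its path is chosen.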
Diagram with a router on left and 3 routers on right. The left router has a line going to
each of the right routers, and the lines are labelled 1, 2, and 3. The second line, labelled 2, is
highlighted with green arrows going from left to right, and shows a packet above it.
(c) RIP is categorised as an interior gateway protocol that uses the distance vector
algorithm.
(d) It prevents routing loops by implementing a limit on the number of hops allowed
in the path.
2. Interior Gateway Routing Protocol (IGRP):
(a) It is a distance vector interior gateway routing protocol.
(b) It is used by routers to exchange routing data within an autonomous system.
(c) IGRP was created in part to overcome the limitations of RIP in large networks.
(d) It maintains multiple metrics for each route, such as reliability, delay, load, and
bandwidth.
(e) It is considered a classful routing protocol, but it is less popular because it is
wasteful of IP address space.
3. Open Shortest Path First (OSPF):
(a) Open Shortest Path First (OSPF) is a widely used dynamic routing protocol for
Internet Protocol networks.
(b) It is a link state routing protocol and belongs to the group of interior gateway
protocols.
(c) It operates inside a single autonomous system.
(d) It is used in the networks of big business companies.
4. Exterior Gateway Protocol (EGP):
(a) The Exterior Gateway Protocol was the routing protocol originally used to
exchange routes between autonomous systems on the internet.
(b) EGP (Exterior Gateway Protocol) is a protocol for exchanging routing table
information between two neighbouring gateway hosts.
(c) The Exterior Gateway Protocol (EGP) is unlike distance vector and path vector
protocols.
5. Border Gateway Protocol (BGP): The (BGP) routing protocol is used to announce
which networks control which IP addresses, and which networks connect to each other.
(The large networks that make these BGP announcements are called autonomous
systems.) BGP is a dynamic routing protocol.
6. Enhanced Interior Gateway Routing Protocol (EIGRP): In EIGRP, if a router
is unable to find the best route to a destination in its routing table, it queries its
neighbours for a route, and they pass the query on to their neighbours until a path
is found.
7. Intermediate System to Intermediate System (IS-IS): IS-IS, classified as a link
state, interior gateway and classless protocol, is commonly used to send and share
IP routing information on the internet. The protocol uses a modified version of the
Dijkstra algorithm. Usually, the protocol organises routers into groups to create larger
domains and connects routers for data transfer. IS-IS frequently uses these two
network types:
•• Network service access point (NSAP): Similar to an IP address, an NSAP
is the identification of a service access point in systems that use the Open Systems
Interconnection (OSI) model.
•• Network entity title (NET): This helps identify individual network routers
within larger computer networks.
signatures, we need a supporting public key infrastructure (PKI) and certification process. The
IETF recently proposed RPKI as such a framework. RPKI itself will not solve all interdomain
routing problems, but it will, hopefully, provide the much-needed building blocks upon which
internet routing security can be built.
Google delivers content and services to users and customers by connecting to thousands
of peer networks around the world. These peering relationships provide an opportunity to
work with peer networks to improve routing security for our own services and also for the
Internet overall. In support of the major focus areas in the MANRS task force, we’ve already
undertaken several measures to protect our network infrastructure from hijacks, such as filtering
and coordinating with peer networks, which will also make it easier to extend these protections
to other networks in the internet. The MANRS Observatory tracks routing security readiness
for all member networks and, as shown in the figure below, Google scores highly across all of
the key metrics.
Protection against false origin attacks
To protect against BGP origination attacks, we need a way to validate whether an AS that is
originating an IP prefix on the internet has the right to do so. In other words, we need to have
a secure way to certify that an AS is indeed the holder of the IP address space it is advertising
to other networks. In RPKI, this is done by using dedicated end-entity (EE) certificates to
generate cryptographically signed route filters, called route origin authorisations (ROAs). EE
certificates are generated by resource certificate authorities, which are usually run by resource
holders such as the Internet Assigned Numbers Authority (IANA), regional internet registries
(RIRs), local internet registries (LIRs) or internet service providers (ISPs), depending on location
in the hierarchy.
5.4.2 RPKI (Resource Public Key Infrastructure)
The RPKI is a distributed public database of cryptographically signed records that allows
operators to securely register routing information about their networks. Other networks can
download the records and use them to validate BGP announcements they receive as being
correctly originated (RPKI origin validation). RPKI adoption has increased significantly in the
last two years, with a number of large Internet ISPs, including AT&T, NTT, Telia, and Cogent
announcing they are performing origin validation. RPKI protection ultimately requires all
networks to register their routes to enable hijack protection. As of November 2020, Google
has registered more than 99% of its routes in the RPKI (as seen by the MANRS Observatory).
Further, we plan to deploy origin validation in 2021 to ensure that invalid routes are rejected,
thus preventing disruptions due to hijacks for Google Cloud customers and end users.
Consistent route filtering
Many networks continue to publish information about route ownership and relationships with
other networks in public IRRs (Internet Routing Registries). While IRR information is not as
secure as the RPKI, its wide use and coverage makes it a valuable source of data to build
filtering rules that ensure only valid routes are accepted from neighboring networks. To protect
our infrastructure, Google is currently deploying IRR-based route filtering that ensures valid
routes receive higher preference, and we also maintain up-to-date routing information in the
IRRs for our own routes (we use RADb). Through work with MANRS members, we are defining
a consistent filtering approach that any cloud provider can follow to clarify and simplify the
work required by peer networks to maintain their data in the IRRs.
Working with our peers
Peer networks are our partners in this effort, and we rely on them to maintain records in public
routing information sources like the RPKI and IRRs to enable route validation by all networks,
not just Google. Through our peering portal, we provide customised information to every peer,
showing the IRR status of every route they announce to Google—and by early next year, this
will also include RPKI validity information. Since the beginning of 2020, we’ve been proactively
contacting our peers to alert them to routing information that appears invalid. This information
helps our peers quickly identify which routes may need updated or corrected records, along
with guidance on how to make the fixes. In parallel, we are working with other cloud service
providers to make data requirements consistent for all peers, simplifying the peering process
regardless of the cloud to which they are connected.
Expanding collaboration with Tier-1 networks
Tier-1 networks and large transit providers play a critical role in routing security, since they
act as the primary hubs of the internet through which other provider and customer networks
connect. Many of these networks have already taken the initiative to deploy various forms of
filtering, including RPKI origin validation. Google has established path-based filtering with most
of our Tier-1 network partners (sometimes called “peer-locks”); these filters help ensure that
traffic for Google services only follows valid paths. Stopping wide-scale propagation of invalid
routes at large hub networks helps minimise the impact of route leaks and hijacks, reducing
exposure for our customers and users.
To resolve a name such as www.netnod.se, a resolver – the name server a user queries
directly – first has to figure out where .se is, then netnod.se, and finally www.netnod.se.
The authoritative name servers that the resolvers use to find top level domains (like
.se) are the root name servers. The root servers contain the information that makes up the root
zone, which is the global list of top level domains.
The root zone contains:
•• generic top level domains – such as .com, .net, and .org
•• country code top level domains – two-letter codes for each country, such as .se for
Sweden or .no for Norway
5.5.2 DNS Cache
A DNS cache (sometimes called a DNS resolver cache) is a temporary database, maintained
by a computer’s operating system, that contains records of all the recent visits and attempted
visits to websites and other internet domains. In other words, a DNS cache is just a memory of
recent DNS lookups that your computer can quickly refer to when it’s trying to figure out how
to load a website. The internet relies on the Domain Name System to maintain an index of all
public websites and their corresponding IP addresses. You can think of it as a phone book.
With a phone book, we don’t have to memorise everyone’s phone number, which is the only
way phones can communicate: with a number. In the same way, DNS is used so we can avoid
having to memorise every website’s IP address, which is the only way network equipment can
communicate with websites.
The DNS cache attempts to speed up the process even more by handling the name
resolution of recently visited addresses before the request is sent out to the internet. There
are actually DNS caches at every level of the “lookup” hierarchy that ultimately gets your
computer to load the website. The computer reaches your router, which contacts your ISP,
which might hit another ISP before ending up at what’s called the “root DNS servers.” Each
of those points in the process has a DNS cache for the same reason, which is to speed up the
name resolution process.
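The caching behaviour described above can be sketched as a minimal resolver cache. This is an illustrative sketch, assuming a made-up hostname-to-address entry; real resolvers speak the DNS wire protocol and keep many record types.

```python
import time

class DnsCache:
    """Minimal DNS resolver cache: answers are kept until their TTL expires."""
    def __init__(self):
        self._store = {}  # name -> (ip, expires_at)

    def put(self, name, ip, ttl):
        # Record when this answer becomes stale.
        self._store[name] = (ip, time.monotonic() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry and time.monotonic() < entry[1]:
            return entry[0]          # fresh: answer locally, no network query
        self._store.pop(name, None)  # stale or missing: caller re-queries upstream
        return None

cache = DnsCache()
cache.put("docs.google.com", "142.250.72.14", ttl=300)  # illustrative address
print(cache.get("docs.google.com"))  # → 142.250.72.14
```

Every cache in the chain (browser, OS, router, ISP) follows essentially this logic, which is why a lookup for a recently visited site can be answered without ever reaching the root servers.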
DNS Cache Poisoning
A DNS cache becomes poisoned or polluted when unauthorised domain names or IP addresses
are inserted into it. Occasionally a cache may become corrupted because of technical glitches
or administrative accidents, but DNS cache poisoning is typically associated with computer
viruses or other network attacks that insert invalid DNS entries into the cache. Poisoning causes
client requests to be redirected to the wrong destinations, usually malicious websites or pages
full of advertisements.
For example, if the docs.google.com record from above had a different “A” record, then
when you entered docs.google.com in your web browser, you’d be taken somewhere else. This
poses a massive problem for popular websites. If an attacker redirects your request for Gmail.
com, for example, to a website that looks like Gmail but isn’t, you might end up suffering from
a phishing attack like whaling.
5.5.3 DNS Flushing
When troubleshooting cache poisoning or other internet connectivity problems, a computer
administrator may wish to flush (i.e. clear, reset, or erase) a DNS cache. Since clearing the DNS
cache removes all the entries, it deletes any invalid records too and forces your computer to
repopulate those addresses the next time you try accessing those websites. These new addresses
are taken from the DNS server your network is set up to use.
For example, if the Gmail.com record was poisoned and redirecting you to a strange
website, flushing the DNS is a good first step to getting the regular Gmail.com back again. In
Microsoft Windows, you can flush the local DNS cache using the ipconfig /flushdns command in
a Command Prompt. You know it worked when you see the message “Successfully flushed the
DNS Resolver Cache”.
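The flush step can also be scripted. The sketch below maps each operating system to a typical flush command as an argv list, without executing anything. The Windows command is the one given above; the macOS and Linux entries are assumptions that vary by OS version and resolver (systemd-resolved is assumed for Linux).

```python
import platform

# OS name (as reported by platform.system()) -> DNS flush command.
# Commands are built but deliberately not executed here.
FLUSH_COMMANDS = {
    "Windows": ["ipconfig", "/flushdns"],
    "Darwin":  ["sudo", "dscacheutil", "-flushcache"],      # macOS; older
                                                            # versions differ
    "Linux":   ["sudo", "resolvectl", "flush-caches"],      # systemd-resolved
}

def flush_command():
    """Return the flush command for the current OS, or None if unknown."""
    return FLUSH_COMMANDS.get(platform.system())

print(flush_command())  # on Windows → ['ipconfig', '/flushdns']
```

To actually run the command, the list could be passed to `subprocess.run`, typically with administrator or sudo privileges.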
There are 4 DNS servers involved in loading a webpage:
(i) DNS recursor – The recursor can be thought of as a librarian who is asked to go
find a particular book somewhere in a library. The DNS recursor is a server designed
to receive queries from client machines through applications such as web browsers.
(ii) Root nameserver – The root server is the first step in translating (resolving) human
readable host names into IP addresses.
(iii) TLD nameserver – The top level domain server (TLD) can be thought of as a
specific rack of books in a library. This nameserver is the next step in the search for a
specific IP address, and it hosts the last portion of a hostname (In example.com, the
TLD server is “com”).
(iv) Authoritative nameserver – This final nameserver can be thought of as a dictionary
on a rack of books, in which a specific name can be translated into its definition. The
authoritative nameserver is the last stop in the nameserver query.
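The four-server chain above can be sketched as a toy resolver that walks root, TLD, and authoritative zone data in order. All zone data here is made up for illustration, and a real recursor sends DNS queries over the network rather than reading dictionaries.

```python
# Toy zone data for each level of the lookup chain (illustrative only).
ROOT = {"com": "tld-com"}                                   # root: which TLD server?
TLD = {"tld-com": {"example.com": "ns1.example.com"}}       # TLD: which authoritative?
AUTH = {"ns1.example.com": {"www.example.com": "93.184.216.34"}}

def resolve(hostname):
    """Walk root -> TLD -> authoritative, like a recursor would."""
    tld = hostname.rsplit(".", 1)[-1]             # "com"
    domain = ".".join(hostname.split(".")[-2:])   # "example.com"
    tld_server = ROOT[tld]                        # step 2: ask a root server
    auth_server = TLD[tld_server][domain]         # step 3: ask the TLD server
    return AUTH[auth_server][hostname]            # step 4: ask the authoritative server

print(resolve("www.example.com"))  # → 93.184.216.34
```

Each dictionary lookup stands in for one round trip that the recursor performs on the client's behalf, which is exactly where the caches described earlier save time.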
5.5.4 DNS Attack
A DNS attack is an exploit in which an attacker takes advantage of vulnerabilities in the
domain name system (DNS). A DNS attack is any attack targeting the availability or stability
of a network’s DNS service. Attacks that leverage the DNS as a mechanism as part of an overall
attack strategy, such as cache poisoning, are also considered DNS attacks.
Diagram of DNS cache poisoning: (1) the attacker injects a fake DNS entry into the
DNS server; (2) the client issues a request intended for the real website; (3) the poisoned
DNS server resolves the request to the fake website.
a client-side script that attacks machines elsewhere on the network. DNS rebinding establishes
communication between the attacker’s server and a web application on an internal network
through a browser. To explain how this works, let’s first look at two concepts: same-origin policy
(SOP) and time to live (TTL).
Same-Origin Policy (SOP): Web browsers use the same-origin policy as a defence
mechanism to restrict how websites from one origin can interact with other origins. The origin of
a website is defined by the protocol (e.g., http://), domain (e.g., paloaltonetworks.com), and port
(e.g., :80). For example, URLs A and B have the same origin, but URL C has a different origin.
A: http://www[.]yourname[.]com/index[.]html
B: http://www[.]yourname[.]com/news[.]html
C: https://www[.]yourname[.]com/index[.]html (different protocol)
Browsers enforcing the same-origin policy restrict cross-origin interactions. Code (e.g.,
JavaScript) that originates from http://www[.]badactor[.]com/home.html and sends an HTTP
request to http://www[.]yourname[.]com/news[.]html will be restricted.
Time to Live (TTL): In a DNS system, time to live defines the amount of time in
seconds that a record can be cached before a web server will re-query the DNS name server
for a response. For example, a 300-second TTL keeps records for five minutes. After that, the
records become stale and will not be used. TTL is usually set by the authoritative name server
of a domain.
The same-origin policy identifies different origins with the combination of URI scheme,
hostname and port. Among these components, browsers rely on hostnames to recognise different
servers on the internet. However, hostnames are not directly bound to network devices. Instead,
they are resolved to IP addresses by DNS. Then, IP addresses bind to devices statically or
dynamically. Since domain owners have complete control of their DNS records, they can
resolve their hostnames to arbitrary IP addresses. The DNS rebinding attack abuses this privilege.
After the victims’ browsers load the attacking payloads from the hacker’s server, attackers
can rebind their hostnames to internal IP addresses pointing to the target servers. This allows
attackers’ scripts to access private resources through malicious hostnames without violating
the same-origin policy.
The figure above demonstrates the mechanism of a DNS rebinding attack with a hypothetical
example. In this example, the victim, Alex, has a private web service in his internal network with
IP address 192[.]0.0.1. This server contains confidential data and is supposed to be accessed by
Alex’s computer only. On the attack side, Bob controls two servers: a DNS resolver (1[.]2.3.4)
and a web server (5[.]6.7.8) hosting the malicious website. In addition, Bob registers a domain,
attack[.]com, with its nameserver (NS) record pointing to 1[.]2.3.4.
When Alex opens attack[.]com in his browser, it sends a DNS request to Bob’s resolver
and retrieves the address of the malicious server, 5[.]6.7.8. Once loaded in Alex’s browser, the
malicious script in Bob’s website attempts to trigger another DNS resolution for its own domain.
However, this time the resolver will return 192[.]0.0.1 instead. So attack[.]com is rebound to
the target IP address. After that, the malicious script can keep sending requests to attack[.]com,
which eventually reach the private server. Since Alex’s browser won’t recognise these requests
as cross-origin, the malicious website can read the returned secrets and exfiltrate stolen data
as long as it’s open on the victim’s browser.
Working of DNS Rebinding
1. An attacker controls a malicious DNS server that answers queries for a domain, say
rebind.network.
2. The attacker tricks a user into loading http://rebind.network in their browser. There
are many ways they could do this, from phishing to persistent XSS or by buying an
HTML banner ad.
3. Once the victim follows the link, their web browser makes a DNS request looking
for the IP address of rebind.network. When it receives the victim’s DNS request,
the attacker-controlled DNS server responds with rebind.network’s real IP address,
34.192.228.43. It also sets the TTL value on the response to be 1 second so that
the victim’s machine won’t cache it for long.
4. The victim loads the web page from http://rebind.network which contains malicious
JavaScript code that begins executing on the victim’s web browser. The page begins
repeatedly making some strange looking POST requests to http://rebind.network/
thermostat with a JSON payload like {“tmode”: 1, “a_heat”: 95}.
5. At first, these requests are sent to the attacker’s web server running on 34.192.228.43,
but after a while (browser DNS caching is weird) the browser’s resolver observes that
the DNS entry for rebind.network is stale and so it makes another DNS lookup.
6. The attacker’s malicious DNS server receives the victim’s second DNS request, but
this time it responds with the IP address 192.168.1.77, which happens to be an IP
address of a smart thermostat on the victim’s local network.
7. The victim’s machine receives this malicious DNS response and begins to route
HTTP requests intended for http://rebind.network to 192.168.1.77. As far as the
browser is concerned nothing has changed, and so it sends another POST to http://
rebind.network/thermostat.
8. This time, that POST request gets sent to the small unprotected web server running
on the victim’s WiFi-connected thermostat. The thermostat processes the request and
the temperature in the victim’s home is set to 95 degrees.
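The attacker's resolver behaviour in steps 3 and 6 can be sketched as follows. The class name is our own; the IP addresses are the ones used in this walkthrough, and a real attack serves these answers over the DNS protocol.

```python
class MaliciousResolver:
    """Answers the first query with the public server, later queries
    with the victim's internal device (the rebinding)."""
    def __init__(self, public_ip, internal_ip):
        self.public_ip, self.internal_ip = public_ip, internal_ip
        self.queries = 0

    def answer(self, name):
        self.queries += 1
        if self.queries == 1:
            return {"A": self.public_ip, "TTL": 1}   # step 3: real server, tiny TTL
        return {"A": self.internal_ip, "TTL": 1}     # step 6: rebound to internal IP

resolver = MaliciousResolver("34.192.228.43", "192.168.1.77")
print(resolver.answer("rebind.network")["A"])  # → 34.192.228.43
print(resolver.answer("rebind.network")["A"])  # → 192.168.1.77
```

Because the hostname never changes, the browser treats both answers as the same origin, which is exactly the loophole the attack exploits.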
DNS Rebinding Protection
Various strategies attempt to mitigate the DNS rebinding attack in each related network
component.
• Browser-based Mitigation
Modern browsers such as Chrome and Firefox have implemented the DNS pinning technique
to defend against the DNS rebinding attack. This strategy forces the browser to cache the DNS
resolution results for a fixed period regardless of the DNS records’ time-to-live (TTL) value.
Consequently, malicious websites can’t rebind their hostnames by making repeated DNS requests
within this period. This protection is convenient because it can be implemented in browsers
without changing any other network infrastructure. However, it can only effectively block the
time-varying attack, which is a traditional implementation of the DNS rebinding attack. In this
implementation, the attackers assign an extremely low TTL to the DNS record of malicious
hostnames. After being loaded in the victim’s browser, the rebinding script waits for the record
expiration and then sends a request to its hostname, expecting the browser to resolve it again
and get the target IP address back. In this scenario, the DNS pinning technique ignores the low
TTL and still uses the same result for the second request.
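The pinning behaviour described above can be sketched as a cache that enforces its own minimum lifetime. The 60-second pin and the class name are assumptions for illustration; real browsers choose their own pinning periods.

```python
import time

PIN_SECONDS = 60  # assumed minimum lifetime; real browsers differ

class PinnedCache:
    """DNS cache that ignores suspiciously low TTLs (DNS pinning)."""
    def __init__(self):
        self._store = {}  # name -> (ip, expires_at)

    def put(self, name, ip, ttl):
        lifetime = max(ttl, PIN_SECONDS)  # the attacker's TTL=1 is overridden
        self._store[name] = (ip, time.monotonic() + lifetime)

    def get(self, name):
        ip, expires = self._store.get(name, (None, 0))
        return ip if time.monotonic() < expires else None

cache = PinnedCache()
cache.put("attack.com", "5.6.7.8", ttl=1)  # attacker requests a 1-second TTL
print(cache.get("attack.com"))             # still pinned → 5.6.7.8
```

With the result pinned, the rebinding script's second lookup returns the original public address rather than the internal target, defeating the time-varying attack.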
However, there are multiple ways to bypass DNS pinning protection. A simple one is to
design the malicious script to send requests repeatedly until the browser cache expires. The
malicious hostname will then rebind to the target IP address, and the attacker’s website can
receive the expected response from the target service.
Diagram of a DNS rebinding attack: the victim browser queries the malicious DNS
resolver (1.2.3.4); the labelled steps show (3) the rebinding script, (4) the second request,
and (5) cross-origin communication.
A more sophisticated implementation called multiple A-records attacks can achieve DNS
rebinding more stably and efficiently even with DNS pinning protection. Fig. 6 presents the
attacking procedures. In this case, the DNS behaviour is different from the traditional attack:
The victim’s browser only resolves the malicious hostname once. But both the attacker’s and
the target’s IP address are returned. When the malicious script sends the second request, the
browser will try the public IP address first. But the attacker’s web server remembers the victim’s
IP address and blocks the incoming traffic with a firewall. This request failure forces the victim’s
browser to communicate with the private IP address and complete the DNS rebinding procedure.
• DNS-based Mitigation
Another type of mitigation focuses on the DNS resolution stage. The secure DNS service,
OpenDNS, drops the DNS responses pointing to RFC 1918 and loopback IP addresses. DNS
caching software such as DNSmasq and Unbound also implement similar filtering policies for
private IP addresses.
This strategy is also a centralised protection solution, but it still has limitations. First of
all, not all the secured DNS services have blocked the complete list of IP addresses pointing
to private services. For example, the non-routable IP address 0[.]0.0.0 can represent the
local machine and can be targeted by a DNS rebinding attack. However,
multiple filtering policies have missed it. Besides the private IP addresses, attackers can rebind
their hostnames to internal hostnames with CNAME records. The victims’ internal resolvers or
their machines will finish the resolution to private IP addresses for the attackers. For example, a
malicious hostname can be rebound to localhost. Then all following traffic will reach the local
service. In summary, IP-based filtering fails to protect against all types of DNS rebinding attacks.
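The IP-based filtering discussed above can be sketched with Python's `ipaddress` module. This toy filter also rejects the unspecified address 0.0.0.0 that the text notes some deployments miss; the function name is our own, and real resolvers implement this inside the DNS software itself.

```python
import ipaddress

def allow_answer(ip_str):
    """Drop DNS answers pointing at private, loopback or unspecified space."""
    ip = ipaddress.ip_address(ip_str)
    return not (ip.is_private or ip.is_loopback or ip.is_unspecified)

# The attacker's public server passes; internal/loopback/0.0.0.0 are dropped.
for ip in ["34.192.228.43", "192.168.1.77", "127.0.0.1", "0.0.0.0"]:
    print(ip, allow_answer(ip))
```

As the text notes, even a complete address filter like this cannot stop CNAME-based rebinding, where the attacker returns an internal hostname instead of an internal address.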
• Server-based Mitigation
Defences on the web application side can block DNS rebinding effectively. One of the solutions
is implementing HTTPS communication on all private services. The HTTPS handshake stage
requires the correct domain to validate the SSL certificate. During a DNS rebinding attack,
browsers think they are communicating to the malicious domains while the SSL certificates from
the internal servers are for different domains. Therefore, the attacking scripts can’t establish
SSL connections to the target services. Alternatively, implementing authentication with strong
credentials on all private services is also effective. With this application-level protection, even
if attackers launch DNS rebinding successfully, they can’t access confidential information.
However, this kind of mitigation depends on the developers of internal services. This
means it is not scalable. As third-party web applications proliferate in both home and enterprise
environments, it’s more difficult for network owners to enforce protection on all potentially
vulnerable servers. Meanwhile, threat hunters keep digging up DNS rebinding vulnerabilities in
third-party web applications – such as the Rails console RCE exploit mentioned in the previous
section.
• Real-time DNS Rebinding Detection
As our DNS Security service monitors our customers’ DNS traffic to provide real-time protection,
we have the opportunity to enforce sophisticated signatures to recognise the abnormal DNS
query pattern of the DNS rebinding attack. We launched a detection system consuming DNS
Security and passive DNS data to capture the indicators of compromise (IOCs) of ongoing
rebinding attacks. The detector tracking DNS Security traffic can identify and deliver malicious
hostnames in real time.
Our system aims to capture the sequential DNS resolution pattern instead of relying on
isolated DNS responses. Its detection logic can identify DNS rebinding with high confidence
while allowing hostnames that resolve to internal IP addresses only for legitimate usage. Besides
the high detection accuracy, our system can cover all the varieties of DNS rebinding attacks
mentioned previously, including time-varying, multiple A-records and CNAME-based attacks.
Apart from attacks targeting internal IP addresses and localhost, it also recognises malicious
hostname rebinding to the internal hostnames of our customers.
integrity, and confidentiality. It also defines the encrypted, decrypted and authenticated packets.
The protocols needed for secure key exchange and key management are defined in it.
Uses of IP Security
IPsec can be used to do the following things:
•• To encrypt application layer data.
•• To provide security for routers sending routing data across the public internet.
•• To provide authentication without encryption, like to authenticate that the data
originates from a known sender.
•• To protect network data by setting up circuits using IPsec tunneling, in which all
data being sent between the two endpoints is encrypted, as with a Virtual Private
Network (VPN) connection.
An IP Security Scenario
Figure: a user system with IPSec sends packets, each consisting of an IP header, an
IPSec header, and a secure IP payload, across a public or private network (such as the
internet) to networking devices with IPSec at the remote sites. These devices apply and
remove the IPSec protection, so plain packets (IP header and IP payload) travel on the
local networks.
IP Security Architecture
IPSec Documents
The IPSec specification consists of numerous documents. The most important of these, issued
in November of 1998, are RFCs 2401, 2402, 2406, and 2408:
•• RFC 2401: An overview of a security architecture
•• RFC 2402: Description of a packet authentication extension to IPv4 and IPv6
•• RFC 2406: Description of a packet encryption extension to IPv4 and IPv6
•• RFC 2408: Specification of key management capabilities
In addition to these four RFCs, a number of additional drafts have been published by
the IP Security Protocol Working Group set up by the IETF. The documents are divided into
seven groups, as given below:
•• Architecture: Covers the general concepts, security requirements, definitions, and
mechanisms defining IPSec technology.
•• Encapsulating Security Payload (ESP): Covers the packet format and general
issues related to the use of the ESP for packet encryption and, optionally, authentication.
•• Authentication Header (AH): Covers the packet format and general issues related
to the use of AH for packet authentication.
•• Encryption Algorithm: A set of documents that describe how various encryption
algorithms are used for ESP.
•• Authentication Algorithm: A set of documents that describe how various
authentication algorithms are used for AH and for the authentication option of ESP.
•• Key Management: Documents that describe key management schemes.
•• Domain of Interpretation (DOI): Contains values needed for the other documents
to relate to each other. These include identifiers for approved encryption and
authentication algorithms, as well as operational parameters such as key lifetime.
Figure: IPSec document overview. The Architecture document sits at the top; below it
are the ESP protocol and AH protocol documents, each supported by the Encryption
algorithm and Authentication algorithm documents; the DOI and Key management
documents relate the others to one another.
IPSec Services
IPSec provides security services at the IP layer by enabling a system to select required security
protocols, determine the algorithm(s) to use for the service(s), and put in place any cryptographic
keys required to provide the requested services.
Two protocols are used to provide security: an authentication protocol designated by the
header of the protocol, Authentication Header (AH); and a combined encryption/authentication
protocol designated by the format of the packet for that protocol, Encapsulating Security Payload
(ESP). The services are:
•• Access control
•• Connectionless integrity
•• Data origin authentication
•• Rejection of replayed packets (a form of partial sequence integrity)
•• Confidentiality (encryption)
•• Limited traffic flow confidentiality
5.7.4 Virtual Private Network (VPN)
A virtual private network (VPN) is an encrypted connection between two or more computers.
VPN connections take place over public networks, but the data exchanged over the VPN is
still private because it is encrypted. VPNs make it possible to securely access and exchange
confidential data over shared network infrastructure, such as the public Internet. For instance,
when employees are working remotely instead of in the office, they often use VPNs to access
corporate files and applications.
A VPN (Virtual Private Network) is a way of creating a secure connection “to” and “from”
a network or a computer. The VPN uses strong encryption and restricted, private data access,
which keeps the data secure from the other users of the underlying network, which could often
be a public network like the internet. VPNs have been used for years, but they have become
more robust. They are more affordable and also much faster.
5.7.5 Types of VPN
There are different types of VPNs available. Let’s take a look at the most common types.
1. PPTP VPN: This is the most common and widely used VPN protocol. They enable
authorized remote users to connect to the VPN network using their existing Internet connection
and then log on to the VPN using password authentication. They don’t need extra hardware
and the features are often available as inexpensive add-on software. PPTP stands for Point-to-Point Tunneling Protocol. The disadvantage of PPTP is that it does not provide encryption itself; it relies on PPP (Point-to-Point Protocol) to implement security measures.
2. Site-to-Site VPN: Site-to-site is much the same thing as PPTP except there is no
“dedicated” line in use. It allows different sites of the same organisation, each with its own real network, to connect together to form a VPN. Unlike PPTP, the routing, encryption and decryption
is done by the routers on both ends, which could be hardware-based or software-based.
3. L2TP VPN: L2TP or Layer 2 Tunneling Protocol is similar to PPTP, since it also does not provide encryption by itself and relies on the PPP protocol for this. The difference between PPTP and L2TP is that the latter, when combined with IPsec, provides not only data confidentiality but also data integrity.
L2TP was developed by Microsoft and Cisco.
4. IPsec: Tried and trusted protocol which sets up a tunnel from the remote site into
your central site. As the name suggests, it’s designed for IP traffic. IPsec requires expensive, time-consuming client installations, and this can be considered an important disadvantage.
5. SSL: SSL or Secure Sockets Layer is a VPN accessible via HTTPS in a web browser. SSL creates a secure session from your PC browser to the application server you’re accessing. The major advantage of SSL is that it doesn’t need any extra software installed, because it uses the web browser as the client application.
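As a hedged illustration of how a client program sets up such a secure session, Python's standard ssl module can wrap an ordinary TCP socket so that the TLS handshake and encryption happen transparently once the socket connects. The function name tls_client_socket is our own, not from the book:

```python
import socket
import ssl

def tls_client_socket(hostname: str) -> ssl.SSLSocket:
    """Wrap a TCP socket for TLS; the handshake runs when it connects."""
    context = ssl.create_default_context()   # verifies server certificates by default
    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # server_hostname enables SNI and hostname checking against the certificate.
    return context.wrap_socket(raw, server_hostname=hostname)
```

After `s = tls_client_socket("example.com")`, calling `s.connect(("example.com", 443))` would perform the handshake; everything sent afterwards is encrypted between the browser-side socket and the server.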
6. MPLS: MPLS (Multi-Protocol Label Switching) VPNs are not well suited to remote access for individual users, but for site-to-site connectivity they are the most flexible and scalable option. These systems are essentially ISP-tuned VPNs, where two or more sites are connected to form a VPN using the same ISP. An MPLS network isn’t as easy to set up or add to as the others, and hence is bound to be more expensive.
7. Hybrid VPN: A few companies have managed to combine features of SSL and IPsec as well as other VPN types. Hybrid VPN servers are able to accept connections from multiple types of VPN clients. They offer higher flexibility at both the client and server levels, but are bound to be expensive.
Users can access an IPsec VPN by logging into a VPN application, or “client.” This
typically requires the user to have installed the application on their device. VPN logins are
usually password-based. While data sent over a VPN is encrypted, if user passwords are
compromised, attackers can log into the VPN and steal this encrypted data. Using two-factor
authentication (2FA) can strengthen IPsec VPN security, since stealing a password alone will
no longer give an attacker access.
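The 2FA idea mentioned above can be sketched with the Python standard library. This is an illustrative implementation of a time-based one-time password in the style of RFC 6238 (HMAC-SHA1 over a 30-second counter); the function name and defaults are our own choices, and real deployments should use a vetted library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now: float = None) -> str:
    """Derive a time-based one-time password from a shared secret."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The VPN server and the user's authenticator app share the secret; an attacker who steals only the password cannot produce the current short-lived code.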
be high for developed nations than less developed nations. This is done by the online retailers
to generate more profit and market their products. Hence, this can also be prevented using a
VPN. A VPN can make you seem like you are from a less developed area by connecting to a
server in that particular region. As a result, you may notice a significant price reduction. But this method is not guaranteed to work 100% of the time: some websites use tracking cookies as part of their price discrimination tactics, so you will not see any price reduction there. In that case, you should clear the cache every time before using the VPN.
Disadvantages of VPN
1. Slows Down the Internet Speed: Sometimes when you use a VPN you may notice a speed reduction. This is because of the data encryption: since the data is encrypted in a VPN, it has to travel more than usual, which can lead to a speed hit in connections. This speed hit is usually so small that it is barely noticeable. That said, there are three main factors which decide the size of the speed hit:
•• Distance to the VPN servers
•• Kind of protocol
•• Power of the encryption
However, the speed hit from the VPN can be minimised if you have a powerful CPU and sufficient bandwidth.
2. Costs More Money: Although there are plenty of free VPN services available, many of them don’t offer the complete protection needed by the user, and using them is not a reliable option: if you use them, your privacy could be in danger. Hence, you need to go for a paid VPN service to enjoy complete protection. However, paid VPN services will not be convenient for everyone, since they charge a subscription fee every month.
3. Device Compatibility: While VPNs generally support most devices and operating systems, there are some platforms which are not supported, because those platforms are not widely used. If you want to use a VPN on such a platform, you have to set up the VPN connection manually. Besides that, if you have a computer with a VPN connection, you can connect it directly to the unsupported platform using an Ethernet cable, but this will drastically reduce your online speed.
4. Privacy Issues: VPNs are meant to provide you with complete protection, but some VPN services can potentially be a threat, especially free VPN services with improperly configured encryption. Moreover, there are chances that these VPNs sell your data to third-party companies, and VPNs that tend to log user data can also put your privacy in danger. With these kinds of VPNs, the purpose of a VPN is defeated in the first place. A paid VPN service with a no-log policy, however, offers complete protection.
5. Connection Drops: Connection drops are one of the most frequent problems faced by a VPN user. When this happens, you face the inconvenience of reconnecting. Apart from that, your real IP address can be exposed, since your encrypted connection is no longer active, and your anonymity can be lost. VPNs with a kill switch feature can prevent this: a VPN with a kill switch instantly disconnects the device from the internet when the connection to the server is lost.
6. Configuration Difficulty: Not all VPN services are configured properly. An improperly configured VPN can make your confidential information vulnerable to attackers; IP and DNS leaks are among the most common issues caused by an improperly configured VPN service. Moreover, VPN services are not always easy to use: unless you are tech savvy, you cannot configure them manually. Therefore, it is always a better idea to opt for a VPN service with a good, user-friendly experience.
7. Legality Issues: Even though the use of VPNs is allowed in most countries, there are some countries that consider private networks, including VPNs, to be illegal. If your country is on the list of such nations, you shouldn’t use a VPN; if you use one illegally anyway, you may end up facing the consequences.
[Figure: A firewall placed between the public network (modem/Internet) and the secure private local area network; only specified traffic is allowed through, and the rest is discarded.]
Based on their method of operation, there are four different types of firewalls.
1. Circuit-Level Gateways: Circuit-level gateways are firewalls that observe TCP (transmission control protocol) sessions and connections. They work at the session layer of the OSI model and help provide security for TCP and UDP connections. A circuit-level gateway also acts as a handshaking device between trusted clients or servers and untrusted hosts, and vice versa.
2. Stateful Inspection Firewalls: Stateful inspection firewalls are systems that monitor both the incoming packets and the TCP connection or session-level state information to determine how these data are transmitted. This provides a higher level of security.
3. Application-Level Gateways (Proxy Firewalls): Application-level gateways, also
known as proxy firewalls, are implemented at the application layer via a proxy device. Instead
of an outsider accessing your internal network directly, the connection is established through
the proxy firewall. The external client sends a request to the proxy firewall. After verifying the
authenticity of the request, the proxy firewall forwards it to one of the internal devices or servers
on the client’s behalf. Alternatively, an internal device may request access to a webpage, and
the proxy device will forward the request while hiding the identity and location of the internal
devices and network.
4. Packet Filtering Firewalls: Packet filtering firewalls operate at the network layer. They check a data packet’s source IP and destination IP, the protocol, the source port, and the destination port against predefined rules to determine whether to pass or discard the packet. Packet filtering firewalls are essentially stateless, monitoring each packet independently without keeping track of the established connection or the packets that have passed through that connection previously. This makes these firewalls very limited in their capacity to protect against advanced threats and attacks.
[Figure: The seven layers of the OSI model — Application, Presentation, Session, Transport, Network, Data Link, and Physical.]
A packet filtering firewall is a network security feature that controls the flow of incoming
and outgoing network data. The firewall examines each packet, which comprises user data and
control information, and tests them according to a set of pre-established rules. If the packet
completes the test successfully, the firewall allows it to pass through to its destination. It rejects
those that don’t pass the test. Firewalls test packets by examining sets of rules, protocols, ports
and destination addresses. Packet-filtering firewalls operate at the network layer (Layer 3) of the
OSI model. Packet-filtering firewalls make processing decisions based on network addresses,
ports, or protocols.
Internet Infrastructure 349
Packet-filtering firewalls are very fast because there is not much logic going behind
the decisions they make. They do not do any internal inspection of the traffic. They also do
not store any state information. You have to manually open ports for all traffic that will flow
through the firewall.
Packet filtering is an efficient defence system against intrusions from computers or
networks outside a local area network (LAN). It is also a standard and cost-effective means of
protection as most routing devices themselves possess integrated filtering capabilities, so there
is no need for setting a new firewall device. For example, if you set rules denying access to
port 80 to outsiders, you would block off all outside access to the HTTP server as most HTTP
servers run on port 80. Alternatively, you can set packet filtering firewall rules permitting packets
designated for your mail or web server and rejecting all other packets.
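The rule matching described above can be sketched as a small Python function. The rule set and field names here are hypothetical, chosen to mirror the port-80 example; rules are evaluated in order, the first match decides, and anything unmatched is denied:

```python
# Hypothetical ordered rule set: first match wins, default is deny.
RULES = [
    {"action": "deny",  "direction": "in",  "dst_port": 80},   # block outside HTTP access
    {"action": "allow", "direction": "in",  "dst_port": 25},   # permit traffic to the mail server
    {"action": "allow", "direction": "out"},                   # let internal hosts reach out
]

def filter_packet(packet: dict, rules=RULES) -> str:
    """Stateless check of one packet's fields against the ordered rules."""
    for rule in rules:
        # A rule matches when every field it names equals the packet's value.
        if all(packet.get(k) == v for k, v in rule.items() if k != "action"):
            return rule["action"]
    return "deny"   # implicit default-deny policy
```

Each packet is judged in isolation, which is exactly the statelessness the text describes: the filter cannot tell whether an inbound packet belongs to a connection an internal host opened.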
5.8.1 Types of Packet Filtering
1. Static packet filtering firewall: A static packet filtering firewall requires you to
establish firewall rules manually. Similarly, internal and external network connections remain
either open or closed unless otherwise adjusted by an administrator. These firewall types allow
users to define rules and manage ports, access control lists (ACLs) and IP addresses. They’re
often simple and practical, making them an apt choice for smaller applications or users without
a lot of criteria.
2. Dynamic packet filtering firewall: Dynamic firewalls allow users to adjust rules
dynamically to reflect certain conditions. You can set ports to remain open for specified periods
of time and to close automatically outside those established time frames. Dynamic packet filtering
firewalls offer more flexibility than static firewalls because you can set adjustable parameters
and automate certain processes
3. Stateless packet filtering firewall: Stateless packet filtering firewalls are perhaps
the oldest and most established firewall option. While they’re less common today, they do still
provide functionality for residential internet users or service providers who distribute low-power
customer-premises equipment (CPE). They protect users against malware, non-application-
specific traffic and harmful applications. If users host servers for multi-player video games,
email or live-streamed videos, for example, they often must manually configure firewalls if
they plan to deviate from default security policies. Manual configurations allow different ports
and applications through the packet filter.
4. Stateful packet filtering firewall: Unlike stateless packet filtering options, stateful firewalls use modern extensions to track active connections, like transmission control protocol (TCP) and user datagram protocol (UDP) streams. By recognising incoming traffic and data
packets’ context, stateful firewalls can better identify the difference between legitimate and malicious traffic or packets. Typically, new connections must introduce themselves to the firewall before they gain access to the approved list of allowed connections.
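A minimal Python sketch of that stateful idea, assuming a simplified model in which a flow is identified by its address/port 4-tuple (the class and method names are illustrative): outbound packets register a connection, and inbound packets are admitted only if they answer a registered flow.

```python
class StatefulFirewall:
    """Track outbound connections; admit inbound packets only for known flows."""

    def __init__(self):
        # Each entry is (local_ip, local_port, remote_ip, remote_port).
        self.established = set()

    def outbound(self, src, sport, dst, dport):
        self.established.add((src, sport, dst, dport))
        return "allow"

    def inbound(self, src, sport, dst, dport):
        # A legitimate reply maps back onto a flow an internal host opened.
        if (dst, dport, src, sport) in self.established:
            return "allow"
        return "deny"   # unsolicited traffic never "introduced itself"
```

This is the key difference from the stateless filter: the same inbound packet is allowed or denied depending on whether a matching connection already exists.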
5.8.2 Advantages and Disadvantages of Packet Filtering Firewall
Advantages
The following are some of the prominent advantages of packet filtering firewall that makes it
highly acceptable worldwide:
•• Need only one router: The key advantage of using packet filtering is that it requires
the use of only one screening router to protect an entire network.
•• Highly efficient and fast: The packet filtering router works very fast and effectively
and accepts and rejects the packets quickly based upon the destination and source
ports and addresses. However, other firewall techniques show more time-consuming
performance.
•• Transparent to users: Packet filtering works independently without any need for
user knowledge or cooperation. Users won’t get to know about the transmission of
packets until there is something that got rejected. On the contrary, other firewalls
require custom software, the configuration of client machines, or specific training or
procedures for users.
•• Built-in packet filtering in routers: Packet filtering capacities are inbuilt in widely
used hardware and software routing products. Additionally, most websites now have packet filtering available in their routers themselves, which also makes this technique one of the most inexpensive.
Disadvantages
Although packet filtering offers several advantages, it also has some weaknesses. Some of the
Disadvantages of a packet filtering firewall are:
•• Filtration based on IP address or Port Information: The biggest disadvantage of packet filtering is that it filters based on IP address and port number, and not on information such as context or application.
•• Packet filtering is stateless: Another big disadvantage of packet filtering is that it does not remember any past intrusions or filtered packets. It tests every packet in isolation and is stateless, which allows hackers to break through the firewall easily.
•• No safety from address spoofing: Packet filtering does not protect from IP spoofing, in which hackers insert fake IP addresses into packets to intrude into the network.
•• Not a perfect option for all networks: Implementing highly granular filters with packet filtering firewalls can be difficult or highly time-consuming, and managing and configuring the ACLs can sometimes get difficult.
Although intrusion detection systems monitor networks for potentially malicious activity, they are also prone to false alarms. Hence, organisations need to fine-tune their IDS products when they first install them. This means properly setting up the intrusion detection system to recognise what normal traffic on the network looks like as compared to malicious activity. Intrusion prevention systems also monitor packets inbound to the system, check them for malicious activity, and immediately send warning notifications.
5.9.1 Classification of Intrusion Detection System
Intrusion detection system can be classified in two ways based on data source and based on
detection mechanism.
[Figure: Network-based IDS (NIDS) — a sensor with a database of attack signatures sits behind the firewall and monitors perimeter network traffic to and from workstations, servers and company databases; events are passed to forensic analysis and alerts are sent to the administrator.]
[Figure: Host-based IDS (HIDS) — HIDS agents installed on individual workstations and servers behind the firewall monitor activity on each host.]
• Signature-based Method
Signature-based IDS detects the attacks on the basis of the specific patterns such as number
of bytes or number of 1’s or number of 0’s in the network traffic. It also detects on the basis of
the already known malicious instruction sequence that is used by the malware. The detected
patterns in the IDS are known as signatures.
Signature-based IDS can easily detect attacks whose pattern (signature) already exists in the system, but it is quite difficult for it to detect new malware attacks, as their pattern (signature) is not yet known.
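A toy Python sketch of signature matching: scan a packet payload for known byte patterns and report which signatures hit. The two signatures here are made-up examples, and real IDS products (e.g. Snort) use far richer rule languages:

```python
# Hypothetical signature database: byte pattern -> human-readable name.
SIGNATURES = {
    b"\x90\x90\x90\x90": "NOP sled",
    b"' OR '1'='1":      "SQL injection probe",
}

def scan(payload: bytes) -> list:
    """Return the names of all known signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]
```

As the text notes, this approach is blind to any attack whose pattern is absent from the database, which motivates the anomaly-based method below.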
• Anomaly-based Method
Anomaly-based IDS was introduced to detect unknown malware attacks, as new malware is developed rapidly. An anomaly-based IDS uses machine learning to create a model of trusted activity; anything that arrives is compared with that model and is declared suspicious if it does not fit the model. Machine learning-based methods generalise better than signature-based IDS, as these models can be trained according to the applications and hardware configurations.
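The anomaly-based idea can be sketched with a deliberately simple statistical model in place of full machine learning: learn the mean and spread of a feature of normal traffic (here, packet size) and flag large deviations. The feature choice and the three-standard-deviation threshold are illustrative assumptions:

```python
from statistics import mean, stdev

def train(normal_sizes):
    """Learn a simple model of normal traffic: mean and spread of packet sizes."""
    return mean(normal_sizes), stdev(normal_sizes)

def is_anomalous(size, model, threshold=3.0):
    """Flag a packet whose size deviates more than `threshold` std deviations."""
    mu, sigma = model
    return abs(size - mu) > threshold * sigma
```

Note the trade-off the text implies: nothing about "SQL injection" is hard-coded here, so genuinely novel attacks can be caught, but unusual-yet-legitimate traffic can trigger the false alarms discussed earlier.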
• Hybrid Detection
A hybrid IDS uses both signature-based and anomaly-based detection. This enables it to detect
more potential attacks with a lower error rate than using either system in isolation.
• Intrusion Prevention System Benefits
Fewer security incidents. While connected units typically do not notice any changes,
the IPS ensures less disruption for university systems and a reduced number of security incidents.
Selective logging. The IPS only records network activity when it takes action,
maintaining the privacy of network users.
Privacy protection. The IPS compares network traffic against a list of known malicious
traffic and does not store or view content.
Reputation-managed protection. The IPS subscribes to a reputation-based list of
known malicious sites and domains, which it uses to proactively protect the university. Example:
Phishing or Malware attempts: If a university staff member clicks on a link in a phishing email
or a malware ad for a site that is on the IPS denylist of known malicious sites, traffic would be
blocked and the staff member would see a blank page.
Multiple threat protection. The IPS offers zero-day threat protection, mitigates brute
force password attempts, and provides protection against availability threats, such as DDoS and
DoS attempts. Example: Brute Force Password Attempt: If a criminal attempts to gain access to a university account through brute force (e.g., repeated login attempts), the IPS can monitor the size of the data movements, recognise unusual patterns, and block access.
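A hedged Python sketch of how an IPS might recognise that repeated-login pattern: count failed attempts per source IP inside a sliding time window and block a source once it exceeds a threshold. The class name, window length and limit are our own choices:

```python
from collections import defaultdict

class BruteForceMonitor:
    """Count failed logins per source IP inside a sliding time window."""

    def __init__(self, max_failures=5, window=60.0):
        self.max_failures = max_failures
        self.window = window
        self.failures = defaultdict(list)   # ip -> list of failure timestamps

    def record_failure(self, ip, now):
        """Record one failed login; return True when the source should be blocked."""
        recent = [t for t in self.failures[ip] if now - t <= self.window]
        recent.append(now)
        self.failures[ip] = recent          # drop timestamps outside the window
        return len(recent) > self.max_failures
```

A real IPS would then drop or rate-limit that source's traffic rather than merely return a flag, but the detection logic is the same counting idea.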
Dynamic threat response. The IPS can be fine-tuned to recognise and respond to
particular threats, allowing the university to react to identified threats to university business.
Disadvantages of intrusion prevention systems
Disadvantages to intrusion prevention systems include the following:
•• When a system blocks abnormal activity on a network assuming it is malicious, it may
be a false positive and lead to a DoS to a legitimate user.
•• If an organisation does not have enough bandwidth and network capacity, an IPS
tool could slow a system down.
•• If there are multiple IPSes on a network, data will have to pass through each to reach the end user, causing a loss in network performance.
•• An IPS may also be expensive.
1. Routing Information Protocols (RIP):
(a) RIP is a dynamic routing protocol which uses hop count as a routing metric to find the best path between the source and destination network.
(b) RIP (Routing Information Protocol) is used in both local area networks and wide area networks.
(c) RIP is categorised as an interior gateway protocol that uses the distance vector algorithm.
(d) It prevents routing loops by implementing a limit on the number of hops allowed in the path.
2. Interior Gateway Routing Protocol (IGRP):
(a) It is a distance vector interior gateway routing protocol.
(b) It is used by routers to exchange routing data within an autonomous system.
(c) Interior Gateway Routing Protocol was created in part to overcome the limits of RIP in large networks.
(d) It maintains multiple metrics for each route, including reliability, delay, load, and bandwidth.
(e) It is considered a classful routing protocol, but it is less popular because of its wasteful use of IP address space.
3. Open Shortest Path First (OSPF):
(a) Open Shortest Path First (OSPF) is an active routing protocol used in Internet Protocol networks.
(b) It is a link state routing protocol and belongs to the group of interior gateway protocols.
(c) It operates inside a single autonomous system.
(d) It is used in the networks of big business companies.
4. Exterior Gateway Protocol (EGP):
(a) The original exterior routing protocol for the internet was the Exterior Gateway Protocol.
(b) EGP (Exterior Gateway Protocol) is a protocol for exchanging routing table information between two neighbouring gateway hosts.
(c) The Exterior Gateway Protocol (EGP) is unlike distance vector and path vector protocols.
5. Border Gateway Protocol (BGP): The BGP routing protocol is used to announce which networks control which IP addresses, and which networks connect to each other. (The large networks that make these BGP announcements are called autonomous systems.) BGP is a dynamic routing protocol.
6. Enhanced Interior Gateway Routing Protocol (EIGRP): In EIGRP, if a router is not able to find the best route to a destination from its routing table, it queries its neighbours for a route, and they pass the query on to their neighbours until the path is found.
7. Intermediate System to Intermediate System (IS-IS): IS-IS, classified as a link state, interior gateway and classless protocol, is commonly used to send and share IP routing information on the internet. The protocol uses an adapted version of the Dijkstra algorithm. Usually, the protocol organises routers into groups to create larger domains and connects routers for data transfer. IS-IS frequently uses these two network types:
5. Robustness: When some nodes are compromised, the entire network should not
be compromised.
6. Self-organisation: Nodes should be flexible enough to be self-organizing
(autonomous) and self-healing (failure tolerant).
7. Availability: Network should not fail frequently.
8. Time synchronisation: Protocols should not be manipulated to produce incorrect
data.
9. Secure localisation: Nodes should be able to accurately and securely acquire
location information.
10. Accessibility: Intermediate nodes should be able to perform data aggregation by
combining data from different nodes.
8. What is Session Hijacking?
Ans. TCP session hijacking is a security attack on a user session over a protected network. The
most common method of session hijacking is called IP spoofing, when an attacker uses
source-routed IP packets to insert commands into an active communication between two nodes on a network, disguising itself as one of the authenticated users. This type
of attack is possible because authentication typically is only done at the start of a TCP
session. Another type of session hijacking is known as a man-in-the-middle attack, where
the attacker, using a sniffer, can observe the communication between devices and collect
the data that is transmitted.
9. Discuss link layer connection in TCP/IP model.
Ans. 1. The link layer in the TCP/IP model describes the networking protocols that operate only on the local network segment (link) that a host is connected to. Such protocol packets are not routed to other networks.
2. The link layer includes the protocols that define communication between local (on-
link) network nodes which fulfill the purpose of maintaining link states between the
local nodes, such as the local network topology, and that usually use protocols that
are based on the framing of packets specific to the link types.
3. The core protocols specified by the Internet Engineering Task Force (IETF) in this layer
are the Address Resolution Protocol (ARP), the Reverse Address Resolution Protocol
(RARP), and the Neighbor Discovery Protocol (NDP).
4. The link layer of the TCP/IP model is often compared directly with the combination of
the data link layer and the physical layer in the Open Systems Interconnection (OSI)
protocol stack. Although they are congruent to some degree in technical coverage of
protocols, they are not identical.
5. In general, direct or strict comparisons should be avoided, because the layering in
TCP/IP is not a principal design criterion and in general is considered to be harmful.
10. What is DNS Rebinding Attack also explain working?
Ans. DNS rebinding is a method of manipulating resolution of domain names that is commonly
used as a form of computer attack. In this attack, a malicious web page causes visitors
to run a client-side script that attacks machines elsewhere on the network.
(c) It also contains a structure to facilitate the routing of datagrams to distant networks
if required.
(d) Since most of the other TCP/IP protocols use IP, understanding the IP addressing
scheme is of vital importance to understand TCP/IP.
2. Data encapsulation and formatting/packaging:
(a) As the TCP/IP network layer protocol, IP accepts data from the transport layer
protocols UDP and TCP.
(b) It then encapsulates this data into an IP datagram using a special format prior to transmission.
3. Fragmentation and reassembly:
(a) IP datagrams are passed down to the data link layer for transmission on the local
network.
(b) However, the maximum frame size of each physical/data link network using IP
may be different.
(c) For this reason, IP includes the ability to fragment IP datagrams into pieces so
that they can each be carried on the local network.
(d) The receiving device uses the reassembly function to recreate the whole IP
datagram again.
14. What are the types of routing protocols?
Ans. Various types of routing protocols are:
1. Routing Information Protocols (RIP):
(a) RIP is a dynamic routing protocol which uses hop count as a routing metric to find the best path between the source and destination network.
(b) RIP (Routing Information Protocol) is used in both local area networks and wide area networks.
(c) RIP is categorised as an interior gateway protocol that uses the distance vector algorithm.
(d) It prevents routing loops by implementing a limit on the number of hops allowed in the path.
2. Interior Gateway Routing Protocol (IGRP):
(a) It is a distance vector interior gateway routing protocol.
(b) It is used by routers to exchange routing data within an autonomous system.
(c) Interior Gateway Routing Protocol was created in part to overcome the limits of RIP in large networks.
(d) It maintains multiple metrics for each route, including reliability, delay, load, and bandwidth.
(e) It is considered a classful routing protocol, but it is less popular because of its wasteful use of IP address space.
3. Open Shortest Path First (OSPF):
(a) Open Shortest Path First (OSPF) is an active routing protocol used in Internet Protocol networks.
(b) It is a link state routing protocol and belongs to the group of interior gateway protocols.
(c) It operates inside a single autonomous system.
(d) It is used in the networks of big business companies.
4. Exterior Gateway Protocol (EGP):
(a) The original exterior routing protocol for the internet was the Exterior Gateway Protocol.
(b) EGP (Exterior Gateway Protocol) is a protocol for exchanging routing table information between two neighbouring gateway hosts.
(c) The Exterior Gateway Protocol (EGP) is unlike distance vector and path vector protocols.
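One distance-vector update step, as used by RIP-style protocols above, can be sketched in Python: a router adopts a neighbour's advertised route whenever going via that neighbour (one extra hop) is cheaper than what it already knows. The hop limit of 16 reflects RIP's treatment of 16 hops as unreachable; the function name and table format are illustrative:

```python
INFINITY = 16   # RIP treats 16 hops as "unreachable"

def dv_update(my_table, neighbour, neighbour_table):
    """One distance-vector step.

    my_table maps destination -> (hop_count, next_hop);
    neighbour_table maps destination -> the neighbour's advertised hop count.
    Returns True if any route improved.
    """
    changed = False
    for dest, hops in neighbour_table.items():
        candidate = min(hops + 1, INFINITY)             # cost via this neighbour
        current = my_table.get(dest, (INFINITY, None))[0]
        if candidate < current:
            my_table[dest] = (candidate, neighbour)     # record cost and next hop
            changed = True
    return changed
```

Repeating this step whenever advertisements arrive is the essence of the distance vector algorithm; the hop limit is exactly the loop-prevention measure point (d) of the RIP list describes.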
15. Explain briefly fragmentation at network layer.
Ans. 1. Fragmentation is done by the network layer when the maximum size of the datagram is greater than the maximum size of data that a frame can hold, i.e., its Maximum Transmission Unit (MTU).
2. The network layer divides the datagram received from the transport layer into fragments so that the data flow is not disrupted.
3. Fragmentation is usually done at routers, and reassembly is done by the network layer at the destination side.
4. The source side usually does not require fragmentation because of segmentation by the transport layer, i.e., the transport layer looks at the datagram data limit and the frame data limit and performs segmentation in such a way that the resulting data can easily fit in a frame without the need for fragmentation.
5. The receiver identifies the fragments of a datagram using the identification (16 bits) field in the IP header; each fragment of a datagram carries the same identification number.
6. The receiver determines the sequence of fragments using the fragment offset (13 bits) field in the IP header.
7. An overhead is present at the network layer due to the extra headers introduced by fragmentation.
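The fragmentation and reassembly steps above can be sketched in Python. This models only the essentials of the IP mechanism: fragment offsets expressed in 8-byte units and a more-fragments (MF) flag; the dictionary format is our own simplification of the real IP header:

```python
def fragment(payload: bytes, mtu: int):
    """Split a datagram payload into MTU-sized pieces, offsets in 8-byte units."""
    assert mtu % 8 == 0, "all fragments except the last must carry a multiple of 8 bytes"
    frags = []
    for start in range(0, len(payload), mtu):
        chunk = payload[start:start + mtu]
        more = start + mtu < len(payload)   # MF flag: more fragments follow
        frags.append({"offset": start // 8, "mf": more, "data": chunk})
    return frags

def reassemble(frags):
    """Order fragments by offset and rebuild the original payload."""
    return b"".join(f["data"] for f in sorted(frags, key=lambda f: f["offset"]))
```

Because fragments may arrive out of order, the receiver sorts on the offset field, exactly as point 6 describes; a real implementation would also group fragments by the 16-bit identification field first.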
4. The Open Shortest Path First (OSPF) protocol is an intra domain routing protocol based
on ________ routing.
(a) Distance vector (b) Link state
(c) Path vector (d) Non-distance vector
5. Which of the following is not a routing protocol?
(a) OSPF (b) BGP
(c) ARP (d) MGP
6. What layer in the TCP/IP stack is equivalent to the Transport layer of the OSI model?
(a) Application (b) Host to host
(c) Internet (d) Network Access
7. Which of the following protocols uses both TCP and UDP?
(a) FTP (b) SMTP
(c) Telnet (d) DNS
8. Length of Port address in TCP/IP is _________
(a) 4 bits long (b) 16 bits long
(c) 32 bits long (d) 8 bits long
9. ROA stands for
(a) Route Organisation Administration
(b) Route Organisation Authorisation
(c) Rules of Authorisation
(d) Rules of Administration
10. A device operating at network layer is called __________
(a) Router (b) Equalizer
(c) Bridge (d) Repeater
11. What are the major components of the intrusion detection system?
(a) Analysis Engine (b) Event provider
(c) Alert Database (d) All of the mentioned
12. Which of the following is an advantage of anomaly detection?
(a) Rules are easy to define
(b) Custom protocols can be easily analysed
(c) The engine can scale as the rule set grows
(d) Malicious activity that falls within normal usage patterns is detected
13. TCP groups a number of bytes together into a packet called
(a) Packet (b) Buffer
(c) Segment (d) Stack
14. Which of these is not applicable for IP protocol?
(a) Is connectionless (b) Offer reliable service
(c) Offer unreliable service (d) None of the mentioned
Answers
1. (d) 2. (d) 3. (d) 4. (b) 5. (d) 6. (b)
7. (d) 8. (b) 9. (b) 10. (a) 11. (d) 12. (c)
13. (c) 14. (b) 15. (d) 16. (a) 17. (a) 18. (b)
19. (b) 20. (d) 21. (a) 22. (d) 23. (b) 24. (d)
25. (a) 26. (a) 27. (b) 28. (c) 29. (a) 30. (a)
31. (a) 32. (b) 33. (b) 34. (a) 35. (a) 36. (a)
37. (c) 38. (b) 39. (c) 40. (a)
Examination Paper, 2019–20, B. Tech. (Sem-III)
Examination Paper, 2020–21
Examination Paper, 2021–22
We request for your frank assessment regarding various aspects of the book as given below:
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
(ii) In which chapters is the treatment of the subject matter not systematic, organised, or up to date?
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
(iii) Have you come across misprints, mistakes, or factual inaccuracies in the book? Please specify the chapters and
the page numbers.
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
(iv) What other books on the same subject have you found, or heard of, that are better than the present book? Please
specify in terms of price and quality.
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
(v) Further suggestions and comments for the improvement of this book.
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
.............................................................................................................................................................................................
OTHER DETAILS
(i) Who recommended this book to you? Please tick (✓) in the box near the option relevant to you.
The best assessment will be awarded each month: the award will be in the form of our publications,
as decided by the Editorial Board.
Please mail the filled-in coupon at your earliest to:
S.K. KATARIA & SONS®
4885/109, Prakash Mahal, Dr. Subhash Bhargav Lane,
Opposite Delhi Medical Association, Daryaganj, New Delhi-110002 (INDIA)
Phone: +91-11-23243489, 43551243; Mobile: +91-9871775858
e-mail: [email protected]; [email protected]
Website: www.skkatariaandsons.com