
1. Employing defence in depth increases the complexity of compromising computer systems. Describe the concept of defence in depth, and provide an example.
Defence in depth describes an approach to securing computer systems in
which multiple layers of countermeasures are employed to protect information
assets. The rationale for this layered approach is that it is much harder for an
attacker to penetrate several different layers than it would be to penetrate a
single barrier.
Countermeasures can be deployed on the following layers (or rings): data,
application, host, internal network, network perimeter, physical security and
policies, procedures and awareness.
A simple example of defence in depth is deploying a security appliance at
the network edge (e.g. network firewall), while also utilising endpoint security
software on individual workstations (e.g. host firewalls and anti-virus
scanners) and securing data and applications with appropriate access control
and encryption.
2. Explain the three core principles of the CIA triad and for each name at
least one technique that can be used to maintain/implement the
principle.
The three principles are confidentiality, integrity and availability. Confidentiality
prevents the unauthorised disclosure of sensitive data. Integrity provides
assurances that data has not been modified by unauthorised parties.
Availability guarantees that services are available to authorised users when
needed.
The following lists some techniques to maintain each:
Confidentiality: encryption, authentication, access control
Integrity: digital signatures, message authentication codes
Availability: redundancy & failover, backups
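As an illustration of one of the integrity techniques listed above, the following Python sketch uses an HMAC so a receiver can detect tampering (the secret and message values are made up for the example):

import hmac, hashlib

# Hypothetical shared secret between sender and receiver.
secret = b"shared-secret-key"
message = b"transfer 100 to account 42"

# Sender computes a MAC over the message and sends both.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC and compares in constant time;
# any modification of the message changes the tag.
def verify(message: bytes, tag: str) -> bool:
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(message, tag))                          # True
print(verify(b"transfer 100 to account 666", tag))   # False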
3. Explain the rationale for and the overall approach of the ring-based
architecture implemented on Intel (and compatible) processors.
The rationale is to implement security at a low level by controlling access to
the hardware. Putting security mechanisms into the core of the system
reduces the chances of an adversary being able to subvert security and also
reduces performance overheads.
The overall approach is to have different privilege levels in the form of rings
where the lowest level (0) has the highest protection, i.e. OS kernel, and the
highest level (3) has the lowest protection, e.g. user processes. Procedures
can only access objects in their own ring or in outer rings and can invoke
subroutines only within their own ring. When a procedure needs to execute
privileged instructions (i.e. access hardware), controlled invocation is used to
execute such instructions in a controlled manner at a higher privilege level
and then return to the original level of operation. The privilege level is changed by a single CPU instruction, which can only be executed at level 0.
4. Match the following terms with the correct descriptions
Principal → Identity to whom abstract policy accords specified rights,
Subject → Active entity on the system, effectively acting as an agent of one or more principal(s),
Object → Resource for which access is requested
5. Describe the difference between Discretionary Access Control (DAC)
and Mandatory Access Control (MAC).
Access controls regulate the permissions under which subjects can access
objects. With discretionary access control (DAC) a subject that owns an object
can decide on and configure access rights (permissions) that apply for the
object. With mandatory access control (MAC) the system enforces access rights (permissions) based on a system-wide policy (e.g. security labels), which the owner of an object cannot override.
6. Explain what the problem is with Access Control Matrices (ACMs) when
there are many subjects and objects. Briefly explain how this issue is
solved in reality.
When there are many subjects and objects, the problem with ACMs is that
they become huge and also very sparse (as each subject usually only has
permissions for a small subset of objects). It would be very inefficient or even
impossible to store such large ACMs. Hence, we use more efficient
techniques such as ACLs, which store the ACM by column at the objects, or
Capabilities, which store the ACM by row at the subjects.
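A minimal Python sketch of the two storage strategies (the subjects, objects and rights are invented purely for illustration):

# A toy sparse access-control matrix keyed by (subject, object).
acm = {
    ("alice", "report.docx"): {"read", "write"},
    ("bob",   "report.docx"): {"read"},
    ("alice", "payroll.db"):  {"read"},
}

# ACL view: store the matrix by column, i.e. per object.
def acl(obj):
    return {s: rights for (s, o), rights in acm.items() if o == obj}

# Capability view: store the matrix by row, i.e. per subject.
def capabilities(subj):
    return {o: rights for (s, o), rights in acm.items() if s == subj}

print(acl("report.docx"))     # {'alice': {'read', 'write'}, 'bob': {'read'}}
print(capabilities("alice"))  # {'report.docx': {...}, 'payroll.db': {'read'}}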
7. Both the sharing settings and security permission settings operate
independently and, despite seeming similar, relate to different things.
Discuss the main difference between the two and at what architectural
security layer each operates. Which of these controls takes
precedence?
Share permissions manage access to directories and files over the network
and don't apply to local users. Permissions can be controlled per share.
Security settings are essentially NTFS file system permissions and affect local and network users regardless of where a user is coming from. Permissions can
be controlled for individual objects.

For local users share permissions are irrelevant. When share and NTFS permissions are used simultaneously, i.e. for remote users, the most restrictive
permission always wins. For example, when the share permission is set to
read and the file permissions to full control, the user only has read access.
The same happens when the share permissions are set to full control and the
file permissions to read access.
8. Assume that p and q are two large primes and n=pq. Based on RSA key
generation algorithm, the encryption key e of RSA must meet _______.
gcd(e, (p-1)(q-1))=1
9. Assume that (p, q, n, e, d) are generated according to RSA key
generation algorithm, the public key of RSA is _______.
e and n
10. Assume that (p, q, n, e, d) are generated according to RSA key
generation algorithm. The RSA decryption of a ciphertext c (where
0<c<n) is _______.
c^d (mod n)
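The three answers above can be tied together with a toy Python example (tiny primes chosen purely for readability; real RSA keys use primes hundreds of digits long):

from math import gcd

p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120

e = 17
assert gcd(e, phi) == 1        # the key-generation condition from question 8

d = pow(e, -1, phi)            # private exponent, e*d = 1 (mod phi), Python 3.8+

m = 42                         # a message with 0 < m < n
c = pow(m, e, n)               # encryption with the public key (e, n)
assert pow(c, d, n) == m       # decryption c^d (mod n) recovers m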
11. A) Explain the main functionality of certificates. Your answer must
include the THREE important fields of certificates required for this
functionality.
A certificate binds a subject's identity (name, address etc.) to the public key of
the subject, and is authenticated by a digital signature from a Certificate
Authority (CA). The public key in the certificate can then be used by other
subjects to authenticate the subject in the certificate or encrypt data for the
subject in the certificate.

B) Explain how certificates are created and used for EFS file recovery.
You should explain the general technique without making any reference
to how this is done with Windows (i.e. GUI). Your answer must be no
longer than FIVE sentences. Longer answers will receive mark
reductions.
A recovery certificate including a key pair is created and signed by a CA
(which in the labs is the local domain controller). By registering the recovery
certificate, the certificate's subject is registered as recovery agent on
machines on which we may want to recover files in the future. Each machine
verifies the authenticity of the certificate before registering the key pair as
recovery key (in the labs with the domain controller's public key). Then any
files on a machine that are encrypted with EFS are encrypted with the user's
public key as well as the recovery agent's public key. The password-protected
private key from the recovery agent can later be used by the admin to recover
the content of an encrypted file.
Note that for performance reasons the file is not actually encrypted multiple
times with different public keys. Instead, a (random) key is generated and
used for encrypting/decrypting the file with a symmetric encryption scheme
and that symmetric key is encrypted with the user's and recovery agent's
public keys (and later decrypted during recovery). Details:
https://en.wikipedia.org/wiki/Encrypting_File_System This information is here
for context and was not required for full marks.
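The hybrid approach described in the note can be sketched in Python using the third-party cryptography package. This is a simplified illustration of the idea, not how EFS is actually implemented on Windows; key handling and the certificate/recovery-agent registration steps are omitted:

import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recovery_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# One random symmetric key encrypts the file contents...
fek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(fek).encrypt(nonce, b"file contents", None)

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# ...and only that small key is wrapped once per public key.
wrapped_for_user = user_key.public_key().encrypt(fek, oaep)
wrapped_for_recovery = recovery_key.public_key().encrypt(fek, oaep)

# The recovery agent later unwraps the key and decrypts the file.
recovered_fek = recovery_key.decrypt(wrapped_for_recovery, oaep)
print(AESGCM(recovered_fek).decrypt(nonce, ciphertext, None))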
12. What is forward secrecy? Why do we need it? Do not write more than 2
sentences each.
Forward secrecy means that past session keys are not compromised even if the long-term key material on which these session keys were based is compromised. This means that if an attacker compromises Alice's key, the attacker still can't decrypt Alice's previous communications.

We need forward secrecy since there is always a chance that long term keys
will get compromised, but with forward secrecy information leakage is minimal
if the compromised key can be revoked quickly.
13. In the SSL/TLS handshake protocol, server authentication is ______ and
client authentication is ______.
Mandatory; Optional
14. IPSec defines two protocols: _______ and ________.
AH; ESP
15. IPSec provides security at _____.
Network layer
16. Explain the key differences between SSL/TLS and IPSec with regards to the order of authentication and encryption, the type of authentication, access control and where each is implemented.
SSL/TLS authenticates, then encrypts; IPSec encrypts, then authenticates.

SSL/TLS authentication is often one-sided and based on certificates; IPSec authentication is typically mutual and based on certificates or shared secrets.

SSL/TLS does not include access control; IPSec includes some access control, for example packet filtering.

SSL/TLS is implemented entirely at (or just above) the transport layer in user space (application and libraries), whereas IPSec is implemented at the network layer, which means partially in the OS/kernel and partially in the application (user space).
17. Describe the man-in-the-middle attack against the Diffie-Hellman
Exchange protocol, which is used by SSL/TLS, based on the two parties
(A)lice and (B)ob. Next, explain how this attack can be prevented.
Adversary M intercepts the message from A to B (g^a) and sends a different message (g^a') to B.

M likewise intercepts the message from B to A (g^b) and sends a different message (g^b') to A.

M thus establishes the session key g^{ab'} with A and the session key g^{a'b} with B. A and B are unaware of this.

To prevent this attack some form of authentication must be used. For example, A and B could also send their certificates and signed messages to each other, from which they can verify each other's identity.
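A minimal Python sketch of the attack with a deliberately tiny group (all values are illustrative only; real deployments use large, standardised groups):

import secrets

p, g = 2087, 5

a = secrets.randbelow(p - 2) + 1          # Alice's secret
b = secrets.randbelow(p - 2) + 1          # Bob's secret
a_mitm = secrets.randbelow(p - 2) + 1     # Mallory's secret towards Bob
b_mitm = secrets.randbelow(p - 2) + 1     # Mallory's secret towards Alice

A = pow(g, a, p)             # Alice sends g^a, but Mallory intercepts it
B = pow(g, b, p)             # Bob sends g^b, but Mallory intercepts it
A_fake = pow(g, a_mitm, p)   # Mallory forwards g^a' to Bob instead
B_fake = pow(g, b_mitm, p)   # Mallory forwards g^b' to Alice instead

key_alice = pow(B_fake, a, p)             # Alice computes g^{ab'}
key_bob = pow(A_fake, b, p)               # Bob computes g^{a'b}

# Mallory can compute both keys and relay or modify traffic at will.
assert key_alice == pow(A, b_mitm, p)
assert key_bob == pow(B, a_mitm, p)
print("Alice's key:", key_alice, " Bob's key:", key_bob)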
18. When discussing IDS/IPS, what is a signature?
Patterns of activity corresponding to attacks
19. Describe the two main purposes of honeypots.
Honeypots (and honeynets) can be used to lure attackers away from real
targets of value to decoy systems that have no production value.

Honeypots (and honeynets) are also used to lure attackers to these systems so we can collect information about attackers' activities, behaviours and techniques, which is useful for understanding attacks and refining defences accordingly.
20. You have to configure a firewall to block incoming traffic into your
network (192.168.10.0/24). It must have the following properties:

Access to the web server www.my.org (192.168.10.112) is allowed for HTTP and HTTPS only.
SSH access is allowed for all internal hosts from the external network 172.16.1.0/24 only, and SSH access to host secure.my.org (192.168.10.200) must be blocked.
All other incoming traffic is denied.

Your rules must be written in the below table format and no more than 4
rules must be specified to fulfil the above requirements.
There is no possibility of specifying a default policy for your firewall
(bad product). Direction is In or Out. An IP address or network can be
defined for Source IP/Net and Destination IP/Net. Single ports or comma-
separated lists of ports can be specified for source and destination
ports. Protocol is UDP or TCP. Your firewall has two actions (Accept and
Reject). Wildcards are specified with an asterisk (*) and can be used in
any fields except Rule#, Direction and Action.
Rule# | Direction | Source IP/Net | Destination IP/Net | Source Port | Destination Port | Protocol | Action
1 | In | * | 192.168.10.112 | * | 80, 443 | TCP | Accept
2 | In | * | 192.168.10.200 | * | 22 | TCP | Reject
3 | In | 172.16.1.0/24 | 192.168.10.0/24 | * | 22 | TCP | Accept
4 | In | * | * | * | * | * | Reject


21. This task is related to the snort lab activity and the detection of
EternalBlue using eternalblue-success-unpatched-win7.pcap.

A) Briefly describe how you installed the rules from https://asecuritysite.com/forensics/snort and how you ran snort (no more than 2-3 sentences)
The rules from the handout can simply be copied into the /etc/snort/rules/local.rules file. Then run snort as described in the handout, i.e.:
sudo snort -qvde -A console -c /etc/snort/snort.conf -l /var/log/snort -r Downloads/eternalblue-success-unpatched-win7.pcap -K ascii
B) Describe the detection outcome in one sentence. Copy the first 3
lines of relevant snort events into the answer. This must be no more
than 10 lines.
Eternalblue is detected as evidenced by the snort output below:

04/17-01:22:49.382644 [*] [1:2466:7] NETBIOS SMB-DS IPC$ unicode share access [*] [Classification: Generic Protocol Command Decode] [Priority: 3] {TCP} 192.168.198.204:51112 -> 192.168.198.203:445
04/17-01:22:49.382645 00:0C:29:68:24:5A -> 00:0C:29:A3:01:B7 type:0x800 len:0x72
192.168.198.203:445 -> 192.168.198.204:51112 TCP TTL:128 TOS:0x0 ID:180 IpLen:20 DgmLen:100 DF

04/17-01:22:49.384955 [*] [1:42944:2] OS-WINDOWS Microsoft Windows SMB remote code execution attempt [*] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 192.168.198.204:51112 -> 192.168.198.203:445
04/17-01:22:49.385037 00:0C:29:68:24:5A -> 00:0C:29:A3:01:B7 type:0x800 len:0x5D
192.168.198.203:445 -> 192.168.198.204:51112 TCP TTL:128 TOS:0x0 ID:182 IpLen:20 DgmLen:79 DF

04/17-01:22:49.387454 [*] [1:2024218:1] ET EXPLOIT Possible ETERNALBLUE MS17-010 Echo Response [*] [Priority: 0] {TCP} 192.168.198.203:445 -> 192.168.198.204:51112
04/17-01:22:49.387679 00:0C:29:A3:01:B7 -> 00:0C:29:68:24:5A type:0x800 len:0x42
192.168.198.204:51113 -> 192.168.198.203:445 TCP TTL:128 TOS:0x0 ID:4658 IpLen:20 DgmLen:52 DF

22. With regards to secure programming / software security, explain what fuzzing is, including making clear what its purpose is, and how fuzzing differs from normal testing.
Fuzzing refers to the process of testing software with randomly generated input data, including very long input data or input data that resembles classes of known problem inputs. The purpose of fuzzing is to test that a program can correctly handle abnormal input and that abnormal inputs do not trigger potentially security-relevant bugs.
Fuzzing is different from normal/conventional testing, where we typically test with the correct or incorrect inputs of benign users ("normal" input) and do not care about bugs triggered by abnormal input.
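A toy random fuzzer in Python (the target function is invented for illustration; real fuzzers such as AFL or libFuzzer are far more sophisticated):

import random
import string

# A hypothetical function under test; real fuzzing targets parsers,
# protocol handlers, file-format readers etc.
def parse_airline_code(s: str) -> bool:
    return len(s) == 2 and s.isupper()

# Very small random fuzzer: throw empty, oversized and random inputs
# at the target and check it never raises an unexpected exception.
def fuzz(target, iterations=10_000):
    for _ in range(iterations):
        length = random.choice([0, 1, 10, 1000, 100_000])
        data = "".join(random.choices(string.printable, k=length))
        try:
            target(data)
        except Exception as exc:            # a crash is a potential bug
            print("crashing input found:", repr(data)[:80], exc)

fuzz(parse_airline_code)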
23. Explain the principle of least privilege based on the example of Kali Linux, which in 2020 abolished the practice of logging in and working as root by default. Your answer should explain the principle itself and, with respect to Kali Linux, the issue and how it was solved.
The principle of least privilege means subjects should not have more
privileges than required to ensure that in a security failure the damage is
limited. The principle also makes systems more robust against accidental
failure (reliability). With Kali Linux, until 2019 users would log in as root by default, which in the case of a security failure provides full access to the system and in the case of an accident (say rm -rf /) has catastrophic consequences. To improve security and reliability, Kali adopted the same approach as Ubuntu and others, where by default users log in as an ordinary user and increase their privilege level only when needed via the sudo command.
24. As part of securing a web site you are tasked with developing a regular
expression that can be used to validate flight ticket numbers as entered
by users. The flight ticket numbers have the following format:
AA-NN[N][N]-YYYY-MM-DD-CCCCCC
The different fields are explained below:

AA is a two character upper case airline code where each character can
be from the whole alphabet (we won't need stricter checking here as in
actually checking for valid 2-character airline codes)
NN[N][N] is a flight number which is either 2, 3 or 4 digits long
YYYY is the year, assume we accept all values between 2000 and 2099
MM is the month, this is always specified as two digits (with leading 0 if
necessary)
DD is the day of the month specified as two digits (with leading 0 if
necessary)
CCCCCC is a 6 digit number

All these fields are separated by hyphens.


^[A-Z][A-Z]-[0-9]{2,4}-20[0-9][0-9]-(0[1-9]|1[0-2])-(0[1-9]|1[0-9]|2[0-9]|3[0-1])-[0-9]{6}$

[A-Z][A-Z] matches any combination of two upper case characters

[0-9]{2,4} matches 2, 3 or 4 digits

20[0-9][0-9] matches 2000-2099

(0[1-9]|1[0-2]) matches all possible month values

(0[1-9]|1[0-9]|2[0-9]|3[0-1]) matches all possible day values

[0-9]{6} matches 6 digits

^ and $ are important to prevent otherwise valid IDs with something prepended or appended

Note that we're deliberately not using shortcuts here, e.g. \d instead of the
longer [0-9] (also keep in mind that \d may not be equivalent to [0-9]).
However, these shortcuts are fine in your solution.
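The pattern can be sanity-checked with a few lines of Python (the ticket values below are made up):

import re

TICKET_RE = re.compile(
    r"^[A-Z][A-Z]-[0-9]{2,4}-20[0-9][0-9]-"
    r"(0[1-9]|1[0-2])-(0[1-9]|1[0-9]|2[0-9]|3[0-1])-[0-9]{6}$"
)

def is_valid_ticket(value: str) -> bool:
    return TICKET_RE.fullmatch(value) is not None

assert is_valid_ticket("QF-123-2023-07-09-123456")
assert not is_valid_ticket("QF-1-2023-07-09-123456")     # flight number too short
assert not is_valid_ticket("QF-123-1999-07-09-123456")   # year out of range
assert not is_valid_ticket("xQF-123-2023-07-09-123456")  # prepended character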

25. List the three different authentication factors and then explain why
multi-factor authentication makes it more difficult for an unauthorized
person to access a target.
Something you know (the knowledge factor), e.g. password, PIN

Something you have (the possession factor), e.g. smart card, key fob

Something you are (the inherence factor), e.g. fingerprint, voice, face

(Location factor, e.g. building X)

(Time factor, e.g. Mondays 15-16)

With multi-factor authentication, even if one factor is compromised or broken, the attacker still has at least one more barrier to breach before being able to successfully compromise the target.
26. Calculate the key space for 128-bit keys and compare that to the key
space for 10-character passwords (lower case English alphabetic, upper
case English alphabetic, 10 digits and 14 special characters are
allowed).
How many characters must a password following the above rules have to achieve the same or better security as 128-bit random keys, if the passwords are chosen randomly?
128-bit keys: 2^128 = 3.403e+38

Passwords: (26+26+10+14)^10 = 6.429e+18

A 21-character password is needed to get an equal or larger key space than 128-bit random keys. This can be found by search or with some maths:

2^128 <= 76^x -> x = ceil(log_76(2^128)) = ceil(128 * log(2) / log(76)) = ceil(20.49) = 21
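The arithmetic can be reproduced in a few lines of Python:

import math

key_space_128 = 2 ** 128                 # about 3.40e+38
alphabet = 26 + 26 + 10 + 14             # 76 allowed characters
password_space_10 = alphabet ** 10       # about 6.43e+18

# Smallest x with 76^x >= 2^128
x = math.ceil(128 * math.log(2) / math.log(alphabet))
print(key_space_128, password_space_10, x)   # ... 21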


27. Describe how to create Windows Policy Objects on a DC to remove the
Recycle Bin functionality AND access for all Users in an OU.
Modify the default Domain Policy, OU Policy etc. with the Group Policy Management tool.

Set the following two policies to Enabled and comment appropriately:

User Configuration -> Policies -> Administrative Templates -> Desktop -> Remove recycle bin from desktop

User Configuration -> Policies -> Administrative Templates -> Windows Components -> File Explorer -> Do not move deleted files to the Recycle Bin

(Alternatively to the last: a Windows registry modification; "alternatively"/bonus for Remove properties from the recycle bin context menu)

Perform a group policy update with gpupdate, or alternatively explain how the policy is updated in other ways.
28. A ciphertext has been generated with an affine cipher. The most
frequent letter of the ciphertext is "F", and the second most frequent
letter of the ciphertext is "C". Break this code by figuring out the key
numbers.
Assume the affine cipher is C = k1*P + k2 (mod 26) and that the most frequent plaintext letters are E (= 4) and T (= 19). With F (= 5) and C (= 2) as the most frequent ciphertext letters we have:

5 = 4*k1 + k2 (mod 26)

2 = 19*k1 + k2 (mod 26)

Subtract the second equation from the first:
3 = -15*k1 (mod 26)

Multiply both sides by -7, the modular multiplicative inverse of -15 (since (-15)*(-7) = 105 = 1 (mod 26)):

3*(-7) = -15*(-7)*k1 (mod 26)

k1 = -7*3 = -21 (mod 26)

k1 = 5

k2 = 5 - 4*k1 = 5 - 20 = -15 (mod 26)

k2 = 11

C = 5P + 11
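The recovered key can be verified with a short Python check (letters encoded A=0 ... Z=25):

k1, k2 = 5, 11

def enc(p): return (k1 * p + k2) % 26

# E (4) should map to F (5) and T (19) to C (2).
assert enc(4) == 5
assert enc(19) == 2

# Decryption uses the inverse of k1 modulo 26.
k1_inv = pow(k1, -1, 26)                  # 21, since 5*21 = 105 = 1 (mod 26)
def dec(c): return (k1_inv * (c - k2)) % 26
assert dec(5) == 4 and dec(2) == 19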
29. For a symmetric block cipher, what is the problem of simply encrypting
each block of data with the given key? What is the solution (it is
sufficient to simply name a mode)?
If each block of plaintext is simply encrypted with the given key (ECB mode),
then for a given block of plaintext and given key, we will always get the same
ciphertext. This can easily reveal structures/patterns in the plaintext. For
example, if we encrypt an image in ECB mode, we can still recognise the
image from the ciphertext as demonstrated with the famous Tux example. To
prevent this, other modes need to be used, such as CBC or CTR modes.
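A short demonstration of the problem using the third-party cryptography package (the key and plaintext are made up; this is a sketch, not a recommendation to use ECB):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
block = b"ATTACK AT DAWN!!"            # exactly one 16-byte AES block
plaintext = block * 2                  # the same block twice

enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()
print(ciphertext[:16] == ciphertext[16:32])   # True: the pattern leaks

enc = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16))).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()
print(ciphertext[:16] == ciphertext[16:32])   # False with CTR mode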
30. Explain what the below PAM authentication configuration does. The
explanation should cover what the modules do and the flow of
authentication processing. You can check the functionality of modules
with the man command (e.g. man pam_unix).

auth required /lib/security/pam_nologin.so
auth sufficient /lib/security/pam_unix.so
auth sufficient /lib/security/pam_krb5.so use_first_pass
The pam_nologin module will fail if /etc/nologin exists and the user is not root,
otherwise it will succeed. Since this module is required, authentication will fail
if this module fails (but the following modules are still invoked). Let's assume
the typical case where pam_nologin succeeds. The pam_unix module will
authenticate the user against the local database (/etc/passwd). If this is
successful, authentication will succeed and no further modules are run since
this module is sufficient. If the user can't be authenticated against the local
database, the pam_krb5 module will try to authenticate the user against
Kerberos. This will succeed if the user can be authenticated against Kerberos
and otherwise fail. The use_first_pass tells the Kerberos module to use the
password entered for the pam_unix module and not prompt the user again
(additional detail that is not marked).
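The flow of the required/sufficient control flags can be sketched in Python (the module functions below are simplified stand-ins, not the real PAM modules):

import os

def pam_nologin(user):
    # required: fails if /etc/nologin exists and the user is not root
    return user == "root" or not os.path.exists("/etc/nologin")

def pam_unix(user):
    # sufficient: pretend lookup in the local user database
    return user in {"alice"}

def pam_krb5(user):
    # sufficient: pretend Kerberos authentication
    return user in {"bob"}

def authenticate(user):
    ok = True
    if not pam_nologin(user):
        ok = False                # required failure: overall result fails,
                                  # but later modules still run
    if pam_unix(user) and ok:
        return True               # sufficient success ends processing
    if pam_krb5(user) and ok:
        return True
    return False

print(authenticate("alice"), authenticate("bob"), authenticate("eve"))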
31. This question leads on from the lab activity but requires further research using https://httpd.apache.org/docs/2.4/

You are a server administrator in your organisation and responsible for the Ubuntu web server (your org really likes Ubuntu). Auditing policies require that the following information, which is not typically logged with the default log format, is logged in the default Apache log file:

Size of response in bytes
Remote user if the request was authenticated
Contents of the cookie AUTHID in requests sent to the server

To differentiate from the default log format, the modified format should
be given the name: custom_combined
The default Apache log is /var/log/apache2/access.log which logs information
for each request. This is typically specified with LogFormat and CustomLog
directives in the Apache config file under /etc/apache2.

To change the logging we can change the Apache configuration by modifying the LogFormat definition in the Apache config file (e.g. in sites-enabled/000-default.conf on unmodified Ubuntu). By default the access.log is based on the standard combined format, e.g. in sites-enabled/000-default.conf we find:

CustomLog ${APACHE_LOG_DIR}/access.log combined

Based on the documentation the combined format is:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined

We can change the logging format by adding the additional fields (at the end) and giving the format the required name:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %B %u \"%{AUTHID}C\"" custom_combined

This directive must be inserted in the config file before the CustomLog directive, which we then need to change to:

CustomLog ${APACHE_LOG_DIR}/access.log custom_combined

In order for this custom logging to take effect, the Apache web server must be restarted, e.g. with systemctl restart apache2.
32. Explain the concept of interposable libraries: what are they for, in which cases can they be applied, and how do they work?
Interposable libraries can be used for application-level logging and more
specifically for logging external functions, e.g. system calls, called by the
application.

Interposable libraries can only be used in cases where dynamic shared libraries are used to link the external functions.

Interposable libraries are libraries that implement the functions to be logged (same function names as in the original library), log information from within the function call and then call the function of the original library to do the actual work. The application is dynamically linked against, and uses, the interposable library rather than the original library.
33. Explain the Cloud security threat of insecure interfaces and APIs and
outline possible countermeasures.
Cloud providers typically expose a set of interfaces/APIs to customers for
usage, management, orchestration and monitoring of cloud services.
Customers need to understand the security implications of using these APIs.
If interfaces/APIs have weaknesses, this exposes customers to various security issues impacting confidentiality, integrity, availability and accountability.

To mitigate the issue customers should review the Cloud architecture and
interfaces, understand the dependency chains behind the API, ensure strong
authentication and access control are implemented, perform security testing
and make sure software is updated regularly.
34. Explain the TWO main issues with IoT security.
Manufacturers of IoT devices have a strong incentive to produce these
devices (firmware, software) as quickly and cheaply as possible. They often
focus on functionality and security is a distant afterthought. This results in
devices with no security or poor security.

Furthermore, vendors often do not offer patches for IoT devices, or it is very difficult to apply the provided patches, especially for end users. This means that weaknesses/vulnerabilities identified in IoT devices often can't be fixed.

The result is hundreds of millions of insecure IoT devices connected to the Internet.
35. Related to Splunk and the Splunk dataset used in the lab:

a) Identify the URI (without parameters) that generated the most 404
errors and the productId that is related to all these errors. (2 marks)

Query: sourcetype=access_* status=404 | top uri_path


Answer: URI is show.do and productId is SF-BVS-01

b) For the source www1/secure write a custom field extraction with Extract Fields (can be accessed from the Event Actions menu for an event) to extract the user names for failed password attempts (of valid users) and identify the two user names with the most failed attempts. (3 marks)

Query:
sourcetype=www1/secure "Failed password" | top user
Regex:
^[^\]\n]*\]:\s+\w+\s+\w+\s+\w+\s+(?P<user>[a-z]+) from
Answer: The most common user is root and the second most common is mail.
We can also capture failures for both valid and invalid users with one regex:
^\w+\s+\w+\s+\d+\s+\d+\s+\d+:\d+:\d+\s+\w+\d+\s+\w+:\s+\w+\s+\w+\s+\w+(?:(\s+)|(\s+invalid\s+user\s+))(?P<user>[^ ]+) from

Alternatively, we can capture both with two separate automatically generated regexes which map to the same field name.
36. Describe the difference between conventional encryption and fully
homomorphic encryption (FHE) related to the scenario of secure data
processing in the Cloud.
With conventional encryption, Alice encrypts the data and sends it to the Cloud. The data can be stored in encrypted form, but no processing can be performed on the data without decrypting it first. So the Cloud must decrypt the data, process it and then encrypt the result. This means the Cloud must have access to the private key, and for the duration of the processing the data is not encrypted.

With FHE Alice can encrypt and store data in the Cloud and the Cloud can
process the data in encrypted form and the generated result is also in
encrypted form and can later be decrypted by Alice. This is much more secure
as the Cloud does not need to have the private key and the data is never
decrypted in the Cloud.
37. This question is related to Splunk and the dataset used in the labs:
What is the name of the program listening on a "leet" port?
You must provide the answer and a brief description how you found the
answer (Splunk query and 1-2 sentences).
index=botsv3 earliest=0 sourcetype=osquery* 1337
There are two related events, of which one shows the full command line of the process that was started.
Answer: netcat
