Security

Introduction:

Objectives:
Confidentiality:
Data confidentiality refers to the protection of data from unauthorized access and
disclosure, including means for protecting personal privacy and proprietary
information. It can be achieved through cryptography OR access control.

Integrity Control:
Ensuring integrity means ensuring that the processing performed by the system is
complete, accurate, timely and authorized. Data integrity means ensuring that data was not
intentionally or accidentally altered or destroyed during processing, storage, or
transmission, and correcting errors if they occur. This counters the second attack
typology (to be seen later).

Identification:
Identification, in CS, is how to “know” the identity of an entity, often using an identifier
such as a user name or a unique signature.
Not to be confused with identity control (or authentication), which verifies this identity.

Non-Repudiation:
Non-repudiation, or identification proof, is proving that an entity is who it claims to
be, and not a fake party claiming an entity’s identity, thereby stealing what’s
meant for it and attributing its -possibly malicious- actions to the legitimate one.

Availability:
Availability ensures that systems and data are available to individuals when they need
them under any circumstances, including power outages or natural disasters.

Access Control:
Access control is an essential element of security that determines who is allowed to
access certain data, apps, and resources—and in what circumstances. They let the
right people in and keep the wrong people out. They rely heavily on identification and
authentication.

Authentication:
Authentication is a process that ensures the legitimacy of an access request made by
an entity (whether it be a human or another system), in order to provide it with the
system resources it is allowed to access. This is done by assigning identity
data to the entity’s session. Configuring access to system resources is the result
of these two processes (access control).
Reliability:
Reliability refers to the probability that the system will meet certain performance
standards in yielding correct output for a desired time duration. It is close to
availability; the difference is that reliability is oriented towards the
prevention of system failures and errors, mitigating unexpected interruptions and
ensuring quality of service the whole time.

Attack Typologies:

Interception:
A malicious actor can access private or confidential information with no legitimate
authorization, with the aim of obtaining critical information such as passwords and
credit card numbers, or disturbing data exchanges on the network. When effectively
executed, it can be very hard to identify traces of the attack.

Tampering (Falsification):
This one involves not only gaining access to the asset but also manipulating it,
compromising integrity by targeting data that is in the process of being
transferred between the two ends and corrupting it, to gain a malicious advantage
or interrupt functionality. This can also happen unintentionally if there is noise on the
network.

Production:
Occurs when an intruder claims the identity of a legitimate user, compromising
identity proof, authentication, and access control, and hence gaining access to
confidential data to serve his corrupt ends. Production may also threaten
availability: the attacker can exploit his -supposedly- legitimate session to inject an
overdose of traffic into the network to deny the service.

Man in the Middle attack (MitM):

It can combine any of the three previously seen typologies. There are three kinds of attackers:
● Black box: the attacker is outside the company, and isn’t linked to it by any means; he is
totally oblivious to what’s happening inside the company (at the beginning), and he
is totally unseen, hence his attack is the deadliest, and probably untraceable.
● Grey box: he is linked to the company somehow; he might be a guest, an interviewee,
etc., with very limited access to the company’s resources. His attack is very strong.
● White box: he is an insider, with full or partial access to resources; his attack is the
strongest.
A combination of these three is the most lethal one: an outsider trying to penetrate the
shields, with an insider helping him.

Interruption:
Interruption, or Denial of Service (DoS), arises when a service is disrupted or
destroyed, so that legitimate users may no longer reach the service, either
temporarily or permanently, forcing them to replace it, and causing severe
damage to the victim’s business. This is typically done by overwhelming the host with
false requests to block it from serving legitimate ones, or by using malware to damage,
corrupt, or occupy system resources.

Additional information (SHOULD NOT BE NEGLECTED):
*Confidential data will not stay confidential forever; it will be revealed with time. Protect it
until then; it may have lost its value by then.
*Do not aim for a 100% secure context, it can hardly exist; instead, arm yourself and
your data (cryptography).
*You can possibly identify the hacker if you track him, but he can get away, and it is time
consuming and costly. Instead: look to correct the errors and heal the damage.
*Information comes in three kinds:
● Les Données (data): raw data.
● Les Connaissances (knowledge): the target; processed and treated data.
● Les Messages: data transmitted from Sender to Receiver.
*Cryptography ensures Confidentiality + Integrity control + Identification + proof.
*Cryptology = Cryptography (the defense) + cryptanalysis (the offense, the cracking).

Cryptography:
We have three kinds of algorithms:
● Symmetric algorithms: encryption algo = decryption algo. Assures confidentiality.
● Asymmetric algorithms: assures confidentiality + identification + proof.
● Hashing functions: a means of integrity control.

Algo Sym:
Let P be the raw data, E the algorithm used, K the key, and C the encrypted
data. To encrypt: EK(P) = C. To decrypt: EK(C) = P. Easy enough, ain’t it?
*The hacker cannot know the algorithm used unless he obtains this information from one of
the users, but acquiring this knowledge does not change much.
*If the hacker has the key, he will crack C in O(n); in other words, he will take the same time
you took to encrypt it. Otherwise, he will crack it in exponential time, O(2^k) for a k-bit key,
as he has to try every possible key, in addition to the time taken by each key to decrypt.
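The complexity gap above can be felt with a toy one-byte XOR “cipher” (a demo stand-in I made up, not a real algorithm): with the key, decryption is a single O(n) pass; without it, the attacker has to enumerate the whole keyspace.

```python
# Toy illustration (NOT a real cipher): a 1-byte XOR key, so the
# keyspace has only 2^8 = 256 keys. All names here are made up.

def encrypt(plaintext: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in plaintext)

decrypt = encrypt  # symmetric: the same operation inverts itself

P = b"attack at dawn"
K = 0x5A
C = encrypt(P, K)

# With the key: one pass over the data, O(n).
assert decrypt(C, K) == P

# Without the key: brute force, O(2^k * n) for a k-bit key.
recovered = next(k for k in range(256) if decrypt(C, k) == P)
assert recovered == K
```

With a real 128-bit key the same loop would need 2^128 trials, which is the whole point of the exponential bound.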
*Keys are the most precious and the most fragile element; a key can even be recovered from a photograph.
*A SYM ALGO has two properties: a key size in bits, which determines the size of the key -and the
complexity of cracking it-, and a block size: if the message’s size is > the block size, the
message is cut into pieces such that each piece = the block size; if the message’s size is < the
block size, it is filled with zeros.
Mode Electronic CodeBook (ECB): a mode of encryption that exists only in SYM ALGOs.
*A message is cut into pieces that fit the block size, then each block is encrypted -and later
on: decrypted- independently.
*The sender and the receiver may use “une grille de calcul parallèle” (a parallel computing grid),
meaning that the encrypted packets take different paths through the network, making the
cracking operation harder.
*Packets can arrive unordered, and can be re-ordered using the various ordering algorithms
studied in Réseaux last year.
*Looking at the encrypted message, the hacker might find patterns and use them to crack the
key, especially in encrypted multimedia, especially if they train an AI to do it.
*UPs: can send along parallel paths. DOWNs: it has patterns which can be used to crack K.
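A minimal ECB sketch, using a toy per-block XOR “cipher” as a stand-in for a real one like AES (the block size, key, and helper names are demo assumptions). It shows both the zero-padding rule and the pattern weakness:

```python
BLOCK = 4  # toy block size; real ciphers use 8 or 16 bytes

def pad(msg: bytes) -> bytes:
    # zero-fill the last block, as described above
    rem = len(msg) % BLOCK
    return msg + b"\x00" * (BLOCK - rem) if rem else msg

def enc_block(block: bytes, key: bytes) -> bytes:
    # stand-in for EK on one block
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(msg, key):
    msg = pad(msg)
    # each block encrypted independently -> can travel on its own path
    return [enc_block(msg[i:i + BLOCK], key)
            for i in range(0, len(msg), BLOCK)]

key = b"\x13\x37\xab\xcd"
blocks = ecb_encrypt(b"AAAABBBBAAAA", key)

# The ECB weakness: identical plaintext blocks leak as identical
# ciphertext blocks -- exactly the "patterns" a hacker exploits.
assert blocks[0] == blocks[2]
```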

*NOTE: D was not present in the PPTX he used to teach us, instead, there was only E to
encrypt and E to decrypt, and that’s why they are called “symmetric”.
Mode Cipher Block Chaining (CBC): this mode works by encrypting a block, then using it to
encrypt the next block.
*It starts with an Initialization Vector (IV), chosen randomly or pseudo-randomly, so that
hackers may not guess it.
*It does the following:
● Formula: Ci = EK(Pi ⊕ Ci-1), with C0 = IV.
● C1 = EK(B1 ⊕ IV), such that B1 = block 1, and C1 = encrypted block 1.
● C2 = EK(B2 ⊕ C1), and so on until the last block is encrypted.
*This way, C will not contain any patterns.
*A downside, however, is that these blocks cannot be sent separately along different paths;
they must be sent along one path, they must be ordered, and order matters here.
*Decryption goes as follows:
● Formula: Pi = DK(Ci) ⊕ Ci-1.
● Starting from the end: Pn = DK(Cn) ⊕ Cn-1, and so on until we reach the beginning:
P1 = DK(C1) ⊕ IV.
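The chaining formulas above can be sketched directly, again with a toy XOR block cipher standing in for EK/DK (key, IV, and block values are invented for the demo):

```python
BLOCK = 4

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy invertible block cipher (stand-in for a real EK / DK).
def E(block: bytes, key: bytes) -> bytes:
    return xor(block, key)
D = E  # our toy E is its own inverse

def cbc_encrypt(blocks, key, iv):
    out, prev = [], iv
    for p in blocks:
        c = E(xor(p, prev), key)   # Ci = EK(Pi xor Ci-1), C0 = IV
        out.append(c)
        prev = c
    return out

def cbc_decrypt(cblocks, key, iv):
    out, prev = [], iv
    for c in cblocks:
        out.append(xor(D(c, key), prev))  # Pi = DK(Ci) xor Ci-1
        prev = c
    return out

key, iv = b"\x13\x37\xab\xcd", b"\x01\x02\x03\x04"
P = [b"AAAA", b"BBBB", b"AAAA"]
C = cbc_encrypt(P, key, iv)

assert C[0] != C[2]                  # chaining hides the repeated block
assert cbc_decrypt(C, key, iv) == P  # round-trips correctly
```

Compare with the ECB demo: the same repeated plaintext block no longer produces a repeated ciphertext block.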
*Two identical characters will be encrypted differently, unless they are in the same
block.
Big Time Issue: how to transmit the key? 🙂 Well, the correct answer is: you can’t, unless
you’ve established a communication protocol before communicating.
Solution? Asymmetric Algos.

Asymmetric Algorithms:
Logic: it uses two keys; whatever is encrypted using the first key is exclusively decrypted
with the second key, and vice-versa. One of them should stay private, held only by its owner,
and the second is public.
Firstly, we create a key K, which will be private. Then we use an algorithm which takes Kpriv as
input and outputs Kpub. Going from Kpub back to Kpriv is near-impossible.
*IF you want to guarantee confidentiality: use the receiver’s Kpub to encrypt your message, so
that only he can decrypt it.
*IF you want to guarantee proof: use your own Kpriv to encrypt the message; this leaves a
signature that only you can produce, which guarantees that only you have sent the message,
and not someone else claiming to be you.
*These algorithms’ performance is terrible when the file’s size is large, so they are not used to
send files, but to send keys; Sym Algos are used to send the files.
Let P be our message, C be our encrypted message, K be our key, Ḱ be our encrypted key, Kpriv
be our private key, Kpub be our public key, RSA be the asymmetric algorithm used.
C = EK(P)
Ḱ = RSAKpriv(K)
C and Ḱ will be sent to the receiver to decrypt them accordingly.
*RSA is not block-size based, but encrypts the message directly, and there is no need to
split the key into blocks as it is already small.
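The two-key property can be checked with textbook RSA on tiny primes (the classic toy values p = 61, q = 53; real keys are 2048+ bits and use padding, so this is only a sketch of the math):

```python
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent  -> Kpub  = (e, n)
d = pow(e, -1, phi)       # private exponent -> Kpriv = (d, n)

m = 65                    # a small "message" (e.g. a symmetric key K)

# Encrypt with Kpub, decrypt with Kpriv: confidentiality.
c = pow(m, e, n)
assert pow(c, d, n) == m

# Encrypt with Kpriv, decrypt with Kpub: signature / proof.
s = pow(m, d, n)
assert pow(s, e, n) == m
```

The "vice-versa" in the notes is exactly these two assertions: either key undoes the other.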

Hashing Functions:
Let F be a hashing function, P our message, and Emp(P) the hashed message (the digest,
“empreinte” in French). F(P) = Emp(P); if one character changes in P, everything changes.
Emp(P) has a fixed size, whether P is large or small.
*You might find P1 and P2 which have the same empreinte (a collision).
*Hashing is done bit by bit.
*Hashes are used to leave a signature of the file, to detect errors and tampering.
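Both properties (fixed-size digest, one changed character changes everything) are easy to observe with the standard library; MD5 is used here because the course uses it, though it is broken for security and SHA-256 would be used in practice:

```python
import hashlib

def empreinte(msg: bytes) -> str:
    return hashlib.md5(msg).hexdigest()

short = empreinte(b"P")
long_ = empreinte(b"P" * 1_000_000)
# Fixed-size digest (128 bits = 32 hex chars), whether P is large or small:
assert len(short) == len(long_) == 32

# Avalanche effect: one changed character changes "everything":
assert empreinte(b"hello") != empreinte(b"hellp")
```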
*A combination of the past three methods is the optimal way.
The optimal way:
Let P be our message, C the encrypted message, AES our SYM ALGO, and
K our key.
Let RSA be our ASYM ALGO, KBpub the receiver’s public key, and Ḱ the encrypted
key.
Let MD5 be our hashing function, Emp(P) the empreinte of P, and KApriv our private key.
Steps:
● Firstly: encrypt P: C = AESK(P), to ensure the message’s confidentiality.
● Secondly: encrypt K: Ḱ = RSA(K) using KBpub, to ensure the key is transmitted
confidentially.
● Thirdly: make the empreinte of P: MD5(P) = Emp(P), to ensure that the message cannot
be altered undetected.
● Fourthly: encrypt Emp(P) with KApriv, to ensure that YOU have sent the empreinte, and
not someone else who has altered the message AND the Emp: RSA(Emp(P)) =
Sign(P).
In this case, message altering can be detected via the Emp, Ḱ altering is detected after
decryption, and Emp altering is detected with Sign(P). Unhackable :).
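The four steps can be wired together end to end. Everything below is toy-sized and invented for the demo: `toy_aes` is a XOR stand-in for AES, the RSA primes are tiny, and reducing the digest mod n before signing is insecure; only the *flow* matches the steps above.

```python
import hashlib

def toy_aes(data: bytes, key: bytes) -> bytes:      # stand-in for AES
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_rsa_keys(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)             # (Kpub, Kpriv)

B_pub, B_priv = make_rsa_keys(61, 53)   # receiver's pair
A_pub, A_priv = make_rsa_keys(67, 71)   # sender's pair
nA = A_pub[1]

P = b"meet at noon"
K = b"\x42"                                          # toy AES key

C = toy_aes(P, K)                                    # 1) C = AES_K(P)
K_enc = pow(K[0], *B_pub)                            # 2) K' = RSA_KBpub(K)
emp = int.from_bytes(hashlib.md5(P).digest(), "big") # 3) Emp(P) = MD5(P)
sign = pow(emp % nA, *A_priv)                        # 4) Sign(P) with KApriv
                                                     #    (digest mod n: toy only)

# --- Receiver side ---
K_dec = bytes([pow(K_enc, *B_priv)])                 # recover K with KBpriv
P_dec = toy_aes(C, K_dec)                            # recover P with K
assert P_dec == P
# verify the signature with the sender's PUBLIC key:
emp_dec = int.from_bytes(hashlib.md5(P_dec).digest(), "big")
assert pow(sign, *A_pub) == emp_dec % nA
```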
*We care to detect the error, and correct it, not to locate it, nor to know who caused it.

Certifying keys:
*A Man in the Middle can intercept the exchange of the public keys and alter them, and this can
compromise the proposed solutions.
*He does so by creating fake public keys for the two ends, a fake KApub and a fake KBpub,
alongside their private keys, and sending the first key to B and the second key to A. Now
whenever they exchange encrypted data, he intercepts it, decrypts it, reads it, and encrypts it
again with the legitimate keys to cover for his theft.
*The solution to that: use a trusted authority to certify the public keys, by doing the following:
● An entity goes to the authority to certify its keys.
● The authority encrypts the entity’s key with the authority’s PRIVATE key, to leave a
signature.
● Other entities can verify whether this entity is legit or someone else is claiming its
identity; the verification is done by decrypting the received key with the authority’s
PUBLIC key. If that succeeds, the key is legit; otherwise it is
compromised.
The authority is trusted by the government and probably by international organizations. The
only way to compromise the keys now is to compromise the authority, which is a great risk to
take, and can lead to prison right after the attempt.
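The certification flow above can be sketched with toy RSA numbers (same tiny primes as before; `entity_pub_key` and the mod-n digest reduction are demo assumptions, real certificates carry far more structure):

```python
import hashlib

# The authority's own key pair (toy-sized):
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # authority's PRIVATE key

entity_pub_key = b"entity-public-key-bytes"
digest = int.from_bytes(hashlib.md5(entity_pub_key).digest(), "big") % n

# The authority "encrypts" (signs) the key with its PRIVATE key:
certificate = pow(digest, d, n)

# Any other entity verifies with the authority's PUBLIC key:
assert pow(certificate, e, n) == digest          # legit key
assert pow(certificate + 1, e, n) != digest      # tampered -> rejected
```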

Diffie-Hellman:
*Used to efficiently create burner keys to ensure confidentiality.
*The algorithm is the following:
Let Amer be the sender, and Zaid the receiver. Let Amer have {a, g, p}, with “a” being a
secret key and “g” and “p” random public numbers. Let Zaid have {b}, which is a secret key.
Amer computes a message A = g^a mod p and sends it to Zaid. Zaid generates the key
K = A^b mod p, then generates a message B = g^b mod p and sends it to Amer. Now Amer
uses B to acquire K = B^a mod p.
*“a” and “b” are secret keys; “g” and “p” need no protection. The key generated by Zaid is the
same one Amer gets, since A^b = g^(ab) = B^a (mod p).
*Cracking this algorithm is near-impossible (the discrete logarithm problem).
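The exchange fits in a few lines with Python's modular `pow`. The numbers are classic demo values (p = 23, g = 5; the secrets a = 6 and b = 15 are invented), real deployments use primes of 2048+ bits:

```python
p, g = 23, 5          # public parameters: the hacker may see them
a = 6                 # Amer's secret
b = 15                # Zaid's secret

A = pow(g, a, p)      # Amer sends A = g^a mod p
B = pow(g, b, p)      # Zaid sends B = g^b mod p

K_zaid = pow(A, b, p) # Zaid:  K = A^b mod p
K_amer = pow(B, a, p) # Amer:  K = B^a mod p

# Same burner key on both sides, without ever sending it:
assert K_amer == K_zaid == 2
```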

Additional notes:
*Higher security standards may throttle your performance, and delay you in identifying the
hacker.
*Passwords are stored hashed in the DB.
*A Man in the Middle can intercept the exchange of the public keys and alter them, and this
can compromise the proposed solutions.
*The solution to that: use a trusted authority to certify the public keys.
*In security, you invest in probable events; ABBAS stated that PROBA / STAT / P.L. are
essential for security engineers.
*Solutions or countermeasures are expensive; limiting vulnerabilities is less expensive, by
creating a strong framework or strong context.
*You should define the probable risks before searching for solutions, by analyzing your
environment.

LINUX:
*The smallest storage unit is the “block”; its size varies.
*You can store exactly one file per block; if the file is bigger than the block, it is split
across several blocks, and thus the size shown by the OS is always >= the original size.
*To change the block size, you need to format (in French: formater) your storage device
and install a new file system. Formatting means installing a new file system.
*The larger the block size, the easier it is to find and extract data.
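The "size on disk >= real size" rule follows from rounding up to whole blocks; a tiny sketch (the 4096-byte block size is just an example value):

```python
import math

def size_on_disk(file_size: int, block_size: int) -> int:
    # one file per block: the last, partially filled block still
    # occupies a whole block on disk
    return math.ceil(file_size / block_size) * block_size

assert size_on_disk(10, 4096) == 4096      # tiny file, full block
assert size_on_disk(5000, 4096) == 8192    # split across two blocks
assert size_on_disk(8192, 4096) == 8192    # exact fit
```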
*The File Allocation Table (FAT) file system does not have access rights management.
*Storage devices use an allocation table, similar to the one used by hotels: whenever
someone new arrives, a room is allocated for him, and his name is linked to that room.
*Deleting a file actually deletes the link between the file’s name and the data stored in the
blocks.
*Low-level formatting (formatage bas niveau) deletes the links permanently.
*A file might have more than one link; different links might have different names, but they
refer to the same file, and editing through one of them applies the changes to the others.
*The disk is partitioned into several partitions; the locations of the beginning and the end of
these partitions are stored in a small partition on the disk.
*There are technologies that help mitigate disk damage and data loss.
*You can delete a file IF you have RWX rights on its parent folder. X means you can access
the folder, R means you can read (list) its children, W means you can modify its contents.
Having X without R means you can access but can’t see, so the system will not let you in.
Having W without R is useless, as before deleting a file you must see it first, and you can’t
see it if you don’t have the R right on the parent folder.

Availability (Disponibilité):
RAID:
RAID1: Mirror Mode
*Requires at least two disks.
*Whatever is written on the first disk is written on the second one, even erroneous data, even
malware. EVERYTHING.
It has three properties: failure tolerance, storage capacity, and price (NOTE: the teacher
explicitly mentioned the first two, but did not mention the last one; care for this when
answering les qsts de cours).
Failure tolerance = (n-1)/n, where n is the number of disks.
Storage capacity = 1/n.
That’s because in RAID1 you can tolerate losing all disks except one. On the other hand, you
are effectively using only one disk’s worth of storage capacity. ABBAS mentioned that you
can use one of the disks for write operations and the second for read ones, or the first one to
be used by group X and the second by group Y; I heard him, and I noted it down.
*This is about ensuring availability at the physical level.
*It is pointless to raise the tolerance rate if the data is located remotely or is physically
inaccessible.
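The two RAID1 formulas can be checked with a tiny helper (`raid1` is a made-up name for this sketch):

```python
def raid1(n: int):
    # n mirrored disks: survive losing any n-1 of them, but only one
    # disk's worth of capacity is usable
    tolerance = (n - 1) / n
    capacity = 1 / n
    return tolerance, capacity

assert raid1(2) == (0.5, 0.5)      # classic two-disk mirror
t, c = raid1(4)
assert t == 0.75 and c == 0.25     # more mirrors: tolerance up, capacity down
```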

RAID5:
Minimum: 3 disks.
Let I be our information to be stored, with each piece Ii = Ai.Bi.
I will be fragmented: I = A.B; A will be stored on the first drive, B on the second, and A⊕B on
the third drive.
The naming changes with more disks, but the principle stays the same.
*You can tolerate the loss of only one drive. Tolerance rate = 1/n.
*Capacity = (n-1)/n.
*TODO: RAID5+1, RAID5n+1, RAID5+1n. Idk how to solve TBH.
*I know that RAID5+1 has a tolerance rate of (n+1)/2n, and a storage capacity of (n-1)/2n,
from ABBAS’s document.
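The XOR parity trick behind RAID5 can be demonstrated directly: lose any ONE of the three drives, and the other two rebuild it (the byte values below are arbitrary demo data):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Three-disk RAID5 idea: I = A.B, store A, B, and the parity A xor B.
A = b"\xde\xad"
B = b"\xbe\xef"
parity = xor_bytes(A, B)

assert xor_bytes(B, parity) == A     # drive 1 died -> rebuilt
assert xor_bytes(A, parity) == B     # drive 2 died -> rebuilt
assert xor_bytes(A, B) == parity     # parity drive died -> rebuilt
# Losing TWO drives is unrecoverable: tolerance = 1/n, capacity = (n-1)/n.
```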

System’s Availability:
*To calculate a system’s total availability:
Disponibilité = (T - Σ t(Pi)) / T, where T = temps total (the total observation time), Pi =
panne (failure) number i, and t(Pi) = the downtime caused by Pi. The numerator T - Σ t(Pi) is
therefore the uptime, so availability = uptime / total time.
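The formula is a one-liner; the year length and outage durations below are invented for the example:

```python
def availability(total_time: float, downtimes: list[float]) -> float:
    # D = (T - sum of downtime of each panne Pi) / T
    # numerator = uptime, so D = uptime / total time
    return (total_time - sum(downtimes)) / total_time

# e.g. one year (8760 h) with two outages of 5 h and 3 h:
D = availability(8760, [5, 3])
assert abs(D - 8752 / 8760) < 1e-12
```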

*This could be the path from the user to the service (components in series). The availability of
such a system is D = Π(Di); in this case: D = 0.97 × 0.97 × 0.91 ≈ 0.856 ≈ 86%.
*Such a system could be built to tolerate the downtime of one device (components in parallel):
if one of them goes down, we have the others to replace it. D = 1 - Π(1 - Di); in this case:
D = 1 - ((1 - 0.91) × (1 - 0.97) × (1 - 0.98)) ≈ 0.99995.
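Both composition rules in two small functions, reusing the Di values from the examples above:

```python
from math import prod

def d_series(ds):    # components in a chain: ALL must be up
    return prod(ds)

def d_parallel(ds):  # redundant components: at least ONE must be up
    return 1 - prod(1 - d for d in ds)

assert round(d_series([0.97, 0.97, 0.91]), 3) == 0.856       # ~86%
assert round(d_parallel([0.91, 0.97, 0.98]), 6) == 0.999946  # ~0.99995
```

Note how redundancy flips the effect: chaining components multiplies availabilities down, duplicating them multiplies the *failure* probabilities down.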
*Duplicating your system will lower the pressure and frustration, and increase your availability
and life span. A shoe might last for one year; three shoes will last for five years.
*If a solution has a low D, replace it.
*Learn Proba / Stat / PL, especially P.L., to excel at availability enhancement.
*The system will show failure symptoms before failing. You will find them in the logs.
*The system might go down at time T0, and you will not realize it until a few hours have
passed; that’s because you did not implement monitoring systems, and the failure happened
at night.
*The time at which you learn there was a failure is called TConst; it is followed by a
diagnosis, then troubleshooting of the problem, THEN your system will be UP again.
*Always look to dodge the failure.
*Be ready to deal with failures, to reduce the troubleshooting time; you might even automate
the process.

Side Notes:
*In Linux, you have additional control over the partitions and main folders: you can use
external storage devices as local ones by mounting them, and ejecting them when desired /
needed; additionally, you can use a folder as a partition. This is helpful to fend off cyber
attacks: by linking an external partition located on a remote site, when a hacker strikes you
can eject it to protect it, for instance.
*If you eject the “home” directory, it will become empty.
*Invest in fending off the threat with the highest probability.
*Don’t rush the solution; solutions are costly, study the probabilities first.
*DEFINITION:
An incident is only called so if we cannot predict when it may happen, when it’s but a probability.

Backup [& Restore]:

This allows us to reduce unavailability time spans: when a disaster happens and the data is
lost, these techniques allow you to restore your data. It is an obligation for enterprises.

Sauvegarde totale (full backup):
The simplest: whenever you’ve finished editing the data, back it all up. It involves no
calculations, yet it is non-optimal as you have to upload EVERYTHING EVERY TIME :).
*With this type, you can tolerate the loss of any version and hope you will not need it in the
future.

S. Différentielle (differential backup):
First: do a sauvegarde totale; let us call it E0.
Afterwards: calculate the difference between the current version and E0, let’s call it Delta,
and save Delta, I repeat: save DELTA, so only the new edits are saved. Repeat for each
version.
*With this type, you can tolerate the loss of any version BUT the first one; it is needed for the
calculations, and if it is lost, everything else is useless.
*UPs: more optimal than S. Totale, but it involves calculations, which can be tiring.

S. Incrémentale (incremental backup):
First: a full backup, as usual.
Afterwards: calculate the difference between the current version and the previous one, then
save the difference.
*UPs: most optimal in terms of saving time.
*DOWNs: if an error occurs in one version, all the following ones will contain that error, and
restoring the data is heavy, as you’ll have to apply the whole chain of differences back from
the first version.
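The three strategies can be sketched over dicts standing in for file trees (a toy model: keys are files, deletions are not handled). The restore lines show the cost difference: differential needs E0 plus ONE delta, incremental needs E0 plus the WHOLE chain:

```python
def delta(old: dict, new: dict) -> dict:
    # keep only what changed relative to `old`
    return {k: v for k, v in new.items() if old.get(k) != v}

v0 = {"a": 1, "b": 2}   # three successive versions of the data
v1 = {"a": 1, "b": 3}
v2 = {"a": 9, "b": 3}

E0 = dict(v0)                                  # sauvegarde totale

diff1, diff2 = delta(v0, v1), delta(v0, v2)    # differential: always vs E0
inc1, inc2 = delta(v0, v1), delta(v1, v2)      # incremental: vs previous

assert {**E0, **diff2} == v2                   # restore: base + LAST delta
assert {**{**E0, **inc1}, **inc2} == v2        # restore: replay the chain
```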

Side notes:
*One issue you might face is having an incident after an edit but before the backup.
Risks (Risques):
*A risk touches a target, a valuable object. Its formula is the following:
Risk = (Menace × Vulnerability × Gravity) / Countermeasures
Menace (threat): whatever is harmful, whether it be an action, an individual, an insider, a
competitor, etc.; menaces only ever increase, they never decrease.
*Golden rule: Look to prevent the harm, not find the attacker.
*Vulnerability: the level of exposure to menaces, in a particular context. Vulnerabilities
increase, decrease, and stabilize. They are mostly human-related, specifically: the tolerance
rate, or the level of risk taking; the higher it is, the more vulnerable you are, and the less risk
you take, the less vulnerable you are, but lowering it may throttle you.
*Countermeasures: solutions to reduce the chances of the right menace meeting the right
vulnerability, and they are costly $$$. So before you look for them and pay, do a risk analysis
and a needs analysis to identify the possible risks surrounding you, THEN find solutions and
implement them, and do this regularly.
*Gravity: how severe the consequences of an attack are. I understand it as the attraction
level: the more attractive an object is, the more severe the consequences are.
*Some risks are neglected, some are acceptable, some are reducible, and some are greater
than your ability to handle them; those are to be transferred.
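The formula can be made concrete with a small function (the numeric scales below are invented, the course gives no units; the point is only that countermeasures divide the risk but never zero it):

```python
def risk(menace: float, vulnerability: float, gravity: float,
         countermeasures: float) -> float:
    # Risk = (Menace * Vulnerability * Gravity) / Countermeasures
    return menace * vulnerability * gravity / countermeasures

base = risk(8, 5, 9, 1)
hardened = risk(8, 5, 9, 10)     # same threat, stronger countermeasures

assert hardened == base / 10     # countermeasures divide the risk...
assert hardened > 0              # ...but residual risk always remains
```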

Classes of Risks:

To be reduced:
You can handle them and are responsible for them; they are to be kept in this class.
These have an important impact and a strong occurrence rate.

To be transferred:
You cannot handle them for whatever reason, or they have a high impact and a low
occurrence rate. You transfer them, whenever they occur, to a third party that handles them,
and you pay for it.

To be avoided:
Irreversible risks; handled by spreading awareness and staying away from them.

Acceptable:
To be maintained at this level; they have a high occurrence rate, but low impact.

Once you implement a solution, the risk will change class and descend.


*You want to play on both occurrence rate AND impact; otherwise play on the impact alone,
otherwise play on the occurrence alone, otherwise transfer the risk.
*A risk with high impact and high occurrence should not exist, SHOULD NOT EXIST;
eliminate it, even if it means eliminating the service endangered by it.
*Supervision and monitoring are required after spreading awareness; know whom you trust.


Post Disaster Recovery:


*Golden rule: we often think that there is no harm surrounding us, and that’s incorrect. DO
NOT rely on weak solutions, DO NOT elevate your tolerance level, DO NOT take risks.
*A disaster could be a cyber attack, or a natural disaster.
*Properties of disasters:
● Huge loss.
● Unforeseen and unlikely to happen.
● Requires investments.
● Not enough experience to handle it, or NONE, which results in huge pressure and
stress, which significantly degrades your performance.
*Train yourself to deal with such situations, not only in security, but in life in general. Develop
a reflex which suits such critical situations, as your reflex in the critical moment will mirror the
one you trained in normal times.
*How to deal with disasters: prepare and establish your plan beforehand; try hardship when
you’re at ease.
*Fix your flaws after you gain experience.
*What’s special about disasters is that we have no experience of them.
*Your devices’ value is equal to what you use them for, the data they store and the service
they provide; this is beyond their price.
*Keep your backup / plan B in general away from the currently used system; this way, if the
main system is harmed, your plan B will stay intact.
*Cloud services reduce the downtime.
*During the downtime you’ll lose money, clients, and unsaved data.

Side notes:
*Devices will most likely fail from their second year onwards.
*Do not take responsibility you cannot handle, are not paid for, or are not asked to take.
*If you have access to confidential data, do not create a mechanism to store a copy of this
data for yourself; this might land you in jail, especially if a data breach happens. This is
especially important for software devs.
*Choose the right partner.
