
Technical Introduction to

Cybersecurity
Lesson Scripts
1.0
Fortinet Training Institute - Library

https://training.fortinet.com

Fortinet Product Documentation

https://docs.fortinet.com

Fortinet Knowledge Base

https://kb.fortinet.com

Fortinet Fuse User Community

https://fusecommunity.fortinet.com/home

Fortinet Forums

https://forum.fortinet.com

Fortinet Product Support

https://support.fortinet.com

FortiGuard Labs

https://www.fortiguard.com

Fortinet Training Program Information

https://www.fortinet.com/nse-training

Fortinet | Pearson VUE

https://home.pearsonvue.com/fortinet

Fortinet Training Institute Helpdesk (training questions, comments, feedback)

https://helpdesk.training.fortinet.com/support/home

9/21/2023
TABLE OF CONTENTS

Cryptography and PKI Module 5


Overview Lesson 5
Ciphers Lesson 6
Keys and Cryptographic Algorithms Lesson 8
Hashing and Digital Signatures Lesson 11
Public Key Infrastructure Lesson 14
Secure Network Module 17
Overview Lesson 17
Secure Perimeter Lesson 18
Zero Trust Principles Lesson 20
Centralized Security Network Management Lesson 22
Secure SD-WAN Lesson 24
SASE Lesson 26
Network Segmentation Lesson 28
Firewall Lesson 30
Secure Switching and Ports Lesson 33
Security Protocols Lesson 35
Sandbox Lesson 37
Common Network Threats and Prevention Lesson 39
Authentication and Access Control Module 42
Overview Lesson 42
Authentication Methods Lesson 43
Single-Sign On Lesson 45
Authentication Framework, Protocols, and Tools Lesson 47
Access Control Methods Lesson 50
Access Control Best Practices Lesson 52
Network Access Control Lesson 55
Secure Remote Access Module 57
Overview Lesson 57
SSL VPN Lesson 59
IPsec VPN Lesson 61
ZTNA Lesson 63
Endpoint Security Module 65
Overview Lesson 65
Internet of Things Lesson 66
Endpoint Hardening Techniques Lesson 69
Endpoint Monitoring Lesson 72
Secure Data and Applications Module 74
Overview Lesson 74
Data Protection Lesson 75
Data Privacy Lesson 78
Secure Email Gateway Lesson 80
WAF Lesson 82
Content Filters Lesson 84
Application Hardening Techniques Lesson 86
Cloud Security and Virtualization Module 88
Overview Lesson 88
Cloud Service Models Lesson 89
Virtual Machine Risks Lesson 91
Common Cloud Threats Lesson 93
Cloud-Hosted Security Services Lesson 96
Securing the Cloud Lesson 99

Cryptography and PKI Module

Overview Lesson

Welcome to the Overview lesson.

Click Next to continue.

By the end of this module, you should be able to do the following.

This module will give you a good foundation in cryptography and PKI.

Cryptography is the study of writing or solving codes. This straightforward definition belies the complexity of
cryptography and the challenges of implementing it on a computer network.

Public key infrastructure (PKI) is the name of that implementation. Simply speaking, PKI is an infrastructure
composed of hardware, software, policies, and procedures that facilitates the management of public digital keys
and makes cryptography over a network and the Internet possible.

If you have ever connected to a secure website using a web browser to bank or to make a purchase, you have
used cryptography. Cryptography is ubiquitous in the modern landscape of communications and commerce. It is
also simple to use. Yet, the technology, hardware, and software that make network cryptography possible is
anything but simple. Although network cryptography is complex, it has been made simple to use for you and me.

This module explores the world of cryptography, including the various technologies, concepts, and components
that permit its implementation on a computer network. Like your experience connecting to a secure website, this
module endeavors to make the complexity of cryptography simple.

Cryptographic technology allows you to encrypt data and sign data digitally. Encryption is the process of
converting plaintext to ciphertext, in other words, making something that was readable, unreadable. A digital
signature also uses cryptography to produce a unique value that can be tied to a person. In most jurisdictions, a
digital signature is legally equivalent to a hand-written signature.

Encryption and using a digital signature satisfy a number of objectives. For example, encryption ensures that
information remains private and confidential. A digital signature guarantees the integrity and authenticity of the
data. It can be used to identify or authenticate an entity. It can also ensure non-repudiation. Non-repudiation has
legal implications and means that the signee cannot deny having signed the information. If the information is a
legal contract, the signee is bound to the contract because of their digital signature.

By satisfying these four objectives—confidentiality, data integrity, authentication, and non-repudiation—cryptography


has facilitated the rise of e-commerce and secure communications.

In this module you will learn about ciphers, digital keys and certificates, cryptographic algorithms, hashing
functions, the digital signature and encryption processes, and PKI. After you complete this module, you will have a
stronger grasp of how these technologies work and keener insight into how they have made e-commerce and
secure communications possible.

Technical Introduction to Cybersecurity 1.0 Lesson Scripts 5


Fortinet Technologies Inc.

Ciphers Lesson

Welcome to the Ciphers lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

A cipher is a secret or disguised way of writing a code.

In the digital world of computers, cryptographic algorithms are used as ciphers along with digital keys to convert
plain text to ciphertext and back again.

This process is known as encryption and decryption. Algorithms are usually public information and keys are
usually secrets, but not always. Secrets must be safeguarded.

Click on the underlined word for more information.

Ciphers have been used since before the computer age. The simplest type of cipher is the substitution cipher.
Julius Caesar used this method when encrypting messages.

During the encryption process, the letters of a plain text message are replaced by other letters. Think of the
Western 26-letter alphabet. If you shifted the letters by three, the message "hail Caesar" would become "kdlo
fdhvdu", which is the cipher text.

In order for the recipient of the message to decrypt the ciphertext, they would need to know the number of letter-
shifts and shift in the opposite direction. Thus, when shifting three letters to the left, a "K" becomes an "H", a "D"
becomes an "A", and so on.

The number of letter-shifts is the shared secret or key, and the method is the cipher algorithm.
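The shift cipher described above can be sketched in a few lines of Python. The shift value (three, in Caesar's case) is the key, and the shifting logic is the algorithm. This is a teaching illustration, not a secure cipher.

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

ciphertext = caesar("hail caesar", 3)    # "kdlo fdhvdu"
plaintext = caesar(ciphertext, -3)       # shifting back the other way decrypts
```

Shifting by the negative of the key reverses the encryption, just as the lesson describes.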

The transposition cipher is about rearranging letters, and is more complicated than the substitution cipher.

An example of a transposition cipher is the Rail Fence, so named because its schema resembles a fence.

There can be any number of rows, but in this example there are three. The plain text message is written in a
zigzag form, resembling a rail fence. When enciphering the plain text, the letters are taken row by row.

The message "He had a bad day. What a day Dad had." would be enciphered to what you see on the screen.

The receiver of the ciphertext would have to know the number of rows. This information is the shared secret or
key.

Click the button to see an example with four rows.
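The rail-fence scheme can be sketched as follows. It assumes, as the example does, that spaces and punctuation are dropped and the letters are uppercased; the number of rails is the shared key.

```python
def rail_fence_encrypt(plaintext: str, rails: int) -> str:
    """Write letters in a zigzag across `rails` rows, then read row by row."""
    letters = [c.upper() for c in plaintext if c.isalpha()]
    fence = [[] for _ in range(rails)]
    rail, step = 0, 1
    for ch in letters:
        fence[rail].append(ch)
        if rail == 0:
            step = 1            # bounce downward at the top rail
        elif rail == rails - 1:
            step = -1           # bounce upward at the bottom rail
        rail += step
    return "".join("".join(row) for row in fence)

rail_fence_encrypt("He had a bad day. What a day Dad had.", 3)
# "HDDWADAEAAADYHTDYAHDHBAAAD"
```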

The next cipher type, named the one-time pad, introduces randomness to the substitution method.

Whereas the substitution cipher uses one shift-letter value for the entire message, the one-time pad uses a
different value for each letter in the message.

Imagine the sender has a twenty-six-sided die and they roll this die for each letter of the message. If the message
began as "Hi Bob" and the first five rolls were 10, 4, 3, 11, and 18, then the message would be converted to
"RMEZT".

This cipher type has a powerful feature. The randomness of the die ensures that there are no repetitive patterns,
and that there is an equal chance of converting a plain text letter to any one of the twenty-six letters of the
alphabet.
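The letter-by-letter shifting can be sketched with the "Hi Bob" example above (pad values 10, 4, 3, 11, 18); negating the shifts decrypts.

```python
def one_time_pad(text: str, pad: list[int], decrypt: bool = False) -> str:
    """Shift each letter by its own pad value; negate the shifts to decrypt."""
    letters = [c.upper() for c in text if c.isalpha()]
    out = []
    for ch, k in zip(letters, pad):
        shift = -k if decrypt else k
        out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
    return "".join(out)

one_time_pad("Hi Bob", [10, 4, 3, 11, 18])                # "RMEZT"
one_time_pad("RMEZT", [10, 4, 3, 11, 18], decrypt=True)   # "HIBOB"
```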


The Caesar shift-letter cipher has only 26 possible keys, which would not take much effort to break using a
brute force attack. However, if each letter had 26 possibilities, then a five-letter message would have 26 multiplied by
itself five times (26^5) possible combinations, which amounts to almost twelve million.

Without the help of a computer, this ciphertext is virtually impossible to break.
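To see why 26 keys offer little protection, here is a brute-force sketch that simply tries them all; an attacker then scans the list for readable text.

```python
def brute_force_caesar(ciphertext: str) -> list[str]:
    """Try every possible Caesar shift and return all 26 candidate plaintexts."""
    candidates = []
    for shift in range(26):
        plain = "".join(
            chr((ord(c) - ord('a') - shift) % 26 + ord('a')) if c.isalpha() else c
            for c in ciphertext.lower()
        )
        candidates.append(plain)
    return candidates

candidates = brute_force_caesar("kdlo fdhvdu")
# One of the 26 candidates is "hail caesar"; by contrast, a five-letter
# one-time pad has 26 ** 5 == 11_881_376 possibilities to search.
```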

Click on the underlined words for more information.

Brute force attack (Slide Layer)

Apply your knowledge and solve a one-time pad cipher.

Use a pad of paper or the alphabet on the screen to help you solve this cipher.

The cipher text is GDCHV. The key is 5, 21, 15, 1, 7.

Think of the alphabet as a continuous circle, not a line with two terminal ends. Remember, to encrypt the
message, you would have to move right, or clockwise. Therefore, you will have to move left, or counterclockwise, to
decrypt the message.

As an example, if your cipher text was "C" and your key was 7, you would have to move seven letters
counterclockwise. You would stop at "A", but continue moving until you had counted seven. So, the plain text
would be "V".

Once you have completed the exercise, click Submit to verify that you have solved the problem correctly.

The power of computers allows for more sophisticated encipherment, but the concepts remain the same. Two
cipher types that are commonly employed are stream and block ciphers.

Stream ciphers encrypt a stream of plain text data, one bit or byte at a time. Examples of stream ciphers are FISH
and RC4. Stream ciphers are faster than block ciphers.

Block ciphers break the plain text into blocks for encryption. The size of the blocks is fixed by the cipher rather
than by the key: AES, for example, always encrypts 128-bit blocks, whether the key is 128, 192, or 256 bits. If the
size of the message is one megabyte, then the message is divided into many blocks, each one the cipher's block size.
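A sketch of the block-splitting step only (the encryption itself and padding are omitted); the 128-bit block size of AES is used for illustration.

```python
def split_into_blocks(message: bytes, block_bits: int = 128) -> list[bytes]:
    """Chunk a message into fixed-size blocks; padding of the last block is ignored."""
    size = block_bits // 8
    return [message[i:i + size] for i in range(0, len(message), size)]

blocks = split_into_blocks(b"\x00" * 1_048_576)  # a one-megabyte message
len(blocks)  # 65536 blocks of 16 bytes each
```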

Common block ciphers that exist today are Data Encryption Standard (DES), 3DES, Advanced Encryption
Standard (AES), and Blowfish.

You've completed the lesson. You can now achieve these objectives.


Keys and Cryptographic Algorithms Lesson

Welcome to the Keys and Cryptographic Algorithms lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

What is a digital key?

With respect to encryption, keys can be used to encipher the flow of information between two devices, bulk data
at rest, or a small piece of data, such as another digital key or a hash output value.

Depending on the task that the key is performing, its lifetime could be years or a few seconds. For example, a key
used to sign certificates might be valid for ten or more years, but a key used to encrypt a session between two
devices is only used for the duration of that session.

Keys are usually kept private or secret, but they also can be public. Public keys are often written to digital
certificates.

Click the underlined term for more information.

Public keys are often 1024 bits, 2048 bits, or larger.

Because of their relatively large size, public keys are often used to encrypt small pieces of data, like another key or
a hash of data. To encrypt large amounts of data would take too long.

Smaller keys, in the 128-bit to 256-bit range, are used to encrypt bulk data. So, the type or size of the key that is
used is determined by the type of cryptographic operation being performed.

If you are encrypting a key to safely transfer it to another entity, then use a larger key.

If you are encrypting a stream of data where performance is critical, use a smaller key.

While this rule of thumb is applied for performance reasons, there are also security issues to consider. The length
of the key impacts its strength, but length is not the only factor to consider when calculating key strength.

The complexity of the key is also an important factor.

Consider a password that is ten characters long, but all ten characters are digits. A ten-digit password has ten
billion possible permutations. In contrast, an eight-character password that combines digits plus upper and lower
letters has over two hundred and eighteen trillion permutations. Although two characters less in length, the latter
password is much stronger than the former.
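The arithmetic behind that comparison:

```python
digits_only = 10 ** 10      # ten characters, digits only: 10,000,000,000
mixed_charset = 62 ** 8     # eight characters from 26 + 26 + 10 symbols
# 62 ** 8 == 218,340,105,584,896 -- over two hundred eighteen trillion,
# roughly 21,800 times more permutations despite being two characters shorter.
```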

Key stretching is a method used to strengthen keys or passwords that are either too short or predictable.

The process involves feeding the key or password into a hashing algorithm to produce an enhanced key or
password.

Examples of stretching algorithms are password-based key derivation function 2 (PBKDF2) and bcrypt.

bcrypt is the default algorithm used for key stretching in OpenBSD and various Linux distributions. When
bcrypt is used, a key or password is combined with a 128-bit salt, and the combination is hashed repeatedly; a
configurable cost factor controls the number of rounds. The final hash output is the enhanced key or password.

A salt is a random value that is added to another value to increase entropy.
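A sketch of key stretching with PBKDF2 from Python's standard library; the password, salt length, and iteration count here are illustrative choices.

```python
import hashlib
import os

password = b"correct horse"   # a weak secret to be stretched
salt = os.urandom(16)         # a random 128-bit salt adds entropy
# PBKDF2 feeds the password and salt through HMAC-SHA256 many times;
# the high iteration count is what makes brute force expensive.
stretched = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
len(stretched)  # 32 bytes: a 256-bit derived key
```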

What is a symmetric algorithm?


A symmetric algorithm is a cipher used to encrypt and decrypt data using the same key.

Examples of symmetric algorithms are:

With the exception of RC4, which is a stream cipher, the rest are block ciphers.

Click each algorithm type to learn more.

The main advantage of symmetric cryptography is that it can encrypt and decrypt data more quickly than
asymmetric cryptography. This is because symmetric keys are shorter than asymmetric keys, and symmetric
cryptography uses the same key to encrypt and decrypt.

Symmetric cryptography works in a way that is similar to a combination lock. Imagine that Frank, Javier, and Nora
all work at the same facility and all need a key to access a secure room. They work different shifts, and the
employer does not allow the key to leave the building. The employer secures the key in a lock box with a
combination code. Frank, Javier, and Nora all need the combination code to access the key and to secure the key.

However, this similarity to a combination lock exposes one disadvantage of symmetric cryptography. Because
you use the same key to encrypt the plaintext as you use to decrypt the ciphertext, the key must remain a secret
to protect the data.

How do you protect the secret key? If Alice encrypted a message for Bob, she could safely send the ciphertext, but
how would she securely deliver the key?

How does symmetric cryptography work?

To encrypt information using symmetric cryptography, a crypto application uses a symmetric key and cipher to
convert the plaintext into ciphertext.

In order to decrypt the ciphertext, the same symmetric key and cipher convert the ciphertext to plaintext.
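A toy illustration of the symmetric property, with XOR standing in for a real cipher such as AES (XOR with a short repeating key is not secure on its own): the very same key reverses the operation.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"shared-secret"
ciphertext = xor_cipher(b"wire $100 to account 42", secret_key)
plaintext = xor_cipher(ciphertext, secret_key)   # the same key decrypts
```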

But the problem remains—how do you securely send the key to the receiving party?

What is an asymmetric algorithm?

An asymmetric algorithm is a cipher used for cryptographic operations using a pair of mathematically related keys.
They can do all, or some, of these operations:

A key exchange operation enables two parties to safely exchange a secret, or key, over a public channel, such as
the internet. Asymmetric algorithms and cryptography are also known as public-key algorithms and cryptography.

Examples of asymmetric algorithms are:

Click each algorithm to learn more.

Increased data security is the primary benefit of asymmetric cryptography. This is because users are never
required to share their private keys. Asymmetric cryptography employs a pair of mathematically related keys. For
encryption, the public key encrypts and the private key decrypts.

Asymmetric cryptography is similar to a lock and key. The lock secures the item and only the correct key can
unlock the lock.

The principal disadvantage of asymmetric cryptography is that the crypto processes are very slow compared to
symmetric cryptography. This becomes a problem when you are streaming encrypted data between two points, or
when you need to encrypt and decrypt large amounts of data. The slowness of asymmetric cryptography makes it
impractical in these circumstances.

How does asymmetric cryptography work?


During the encryption process, the sender's crypto application converts the plaintext data to ciphertext using the
recipient's asymmetric public key and an asymmetric cipher.

During the decryption process, the recipient's crypto application converts the ciphertext to plaintext using their
private decryption key and the same asymmetric cipher that the sender used.
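The public-encrypt, private-decrypt relationship can be seen with textbook RSA and deliberately tiny primes. Real keys are 2048 bits or longer, and real systems add padding; this is an illustration only.

```python
# Key generation: two primes define the modulus and the key pair.
p, q = 61, 53
n = p * q                   # public modulus (3233)
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent: the modular inverse of e (2753)

message = 42                          # a message encoded as a number < n
ciphertext = pow(message, e, n)       # encrypt with the public key (e, n) -> 2557
recovered = pow(ciphertext, d, n)     # decrypt with the private key (d, n) -> 42
```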

The main disadvantage of symmetric encryption is securely delivering the key to the recipient, while the main
disadvantage of asymmetric encryption is its slowness.

When symmetric and asymmetric cryptography are combined, the disadvantages of both are remedied.
Symmetric encryption secures bulk data to ensure good performance, while the symmetric key—a small piece of
data—is encrypted by the recipient's public asymmetric key. Thus, the question "How do we securely send the key
to the receiving party?" is answered.

In a scenario where the processes are combined, Alice sends an encrypted message to Bob. In the first step of
encryption, Alice's crypto application generates a one-time symmetric key and, together with a symmetric
algorithm, encrypts the message.

In the second step, the application retrieves Bob's public encryption key and uses this key, together with an
asymmetric algorithm, to encrypt the symmetric key.

In step three, the encrypted message and key are sent to Bob.

In the first step of decryption, Bob's crypto application retrieves the private key from his secure key store. At this
stage, Bob may be required to submit credentials. The private decryption key, together with the same asymmetric
algorithm that Alice used, decrypts the symmetric key.

In the second step, the symmetric key, together with the same symmetric algorithm that Alice used, are used to
decrypt the bulk data.

You've completed the lesson. You can now achieve these objectives.


Hashing and Digital Signatures Lesson

Welcome to the Hashing and Digital Signatures lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

What is hashing?

Hashing is the process of converting data of an arbitrary size to a unique value of a fixed size. Hashing has some
important features that support cryptography. These features are alluded to in the definition.

One, the output value is a fixed length, which is determined by the hashing function or algorithm. Because the
output value is always the same size for a given algorithm, the bad actor has no intelligence about the size of the
input data. It could be a twelve-character password or a six-page document—the bad actor has no idea.

Two, to avoid collisions, the output value is unique for every input value, just like a fingerprint is unique for every
human being. This feature makes hashing very useful when you want to detect changes to data. For example,
should someone tamper with an electronic document, even if it’s a small change, when comparing the hash output
of the original document with the output of the altered document, you would see that the two outputs are
completely different, and be alerted to the tampering.

And three, hashing is nonreversible; it goes only one way. This means that if you run the output value through the
hashing function, you will not get the input value back; you will simply get a hash of the original output value. This
feature denies a bad actor the opportunity to use the output value to reverse engineer and calculate the input
value.

These three security features of hashing play an important role in cryptography, especially when generating a
digital signature.
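These features are easy to observe with SHA-256 from Python's standard library:

```python
import hashlib

original = b"Transfer $100 to Alice"
tampered = b"Transfer $900 to Alice"

h_original = hashlib.sha256(original).hexdigest()
h_tampered = hashlib.sha256(tampered).hexdigest()

# Fixed length: every SHA-256 digest is 256 bits (64 hex characters),
# whatever the size of the input.
# Tamper evidence: a one-character change produces a completely
# different digest, exposing the alteration.
```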

Combining the security features of hashing with asymmetric cryptography can produce a digital signature.

A digital signature serves a number of purposes. Digital signatures ensure the data integrity of the information
signed, authenticate the person or thing that signed the information, and support non-repudiation.

In the first step of the digital signing process, the information that needs to be signed is hashed. This produces an
output hash value.

In the second step, the asymmetric private signing key associated with the signer encrypts the output value using
an asymmetric algorithm. This encrypted output value is called a digital signature.

In the third step, the digital signature is attached to the information.

In order to verify the signed information, you require the corresponding public verification key.

But how do you verify that the key belongs to the signer?

In asymmetric cryptography, public keys are written to certificates. Certificates are issued and signed by a
certificate authority (CA). In addition to the public key, the certificate bears the owner's name. The verification
certificate can be retrieved from a repository by the receiver, or it can be attached to the signed document.

In the first step of the verification process, the receiver's application hashes the information, producing a new
output value.

In the second step, after verifying that the signer's verification certificate is valid, the public key decrypts (in other
words, verifies) the digital signature.


In the third step, the verifier compares the new output value to the original output value. If they are the same, then
you know that the information has not changed.

Because the public key successfully decrypted the digital signature, and because the name of the owner of that
key is written in the certificate, you can confirm the authenticity of the signer.
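The signing and verification steps can be sketched with a hash plus textbook RSA using tiny primes. Real signatures use full-size keys and a padding scheme such as PSS; the modulo reduction of the digest here is needed only because the toy modulus is tiny.

```python
import hashlib

# Signer's toy RSA key pair (tiny primes, for illustration only).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)

document = b"I agree to the terms."

# Step 1: hash the information.
digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
# Step 2: the private signing key "encrypts" the digest -> the digital signature.
signature = pow(digest, d, n)
# Step 3: the signature travels attached to the document.

# Verification: the public key recovers the digest, and the verifier
# re-hashes the document and compares the two values.
recovered = pow(signature, e, n)
fresh = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
assert recovered == fresh
```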

There are a number of popular hash functions used in cryptographic operations.

Common hash functions are:

MD5, the fifth version of the message-digest algorithm, produces a 128-bit hash output. This function was widely
used until weaknesses, primarily a susceptibility to collisions, were found, and stronger functions have since
supplanted it.

Thereafter, SHA-1 became more popular, followed by SHA-2 and SHA-3. The SHA-1 function produces a 160-bit
hash output, while SHA-2 is really a suite of hashing algorithms. The suite contains SHA-224, SHA-256, SHA-384,
and SHA-512. The SHA-3 family also includes extendable-output functions (SHAKE128 and SHAKE256) that let
you decide the length of the output value.
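These output lengths can be checked directly with Python's hashlib:

```python
import hashlib

md5_bits = hashlib.md5(b"data").digest_size * 8        # 128
sha1_bits = hashlib.sha1(b"data").digest_size * 8      # 160
sha256_bits = hashlib.sha256(b"data").digest_size * 8  # 256 (a SHA-2 member)
sha3_bits = hashlib.sha3_512(b"data").digest_size * 8  # 512 (a SHA-3 member)
```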

LANMAN was used by legacy Windows operating systems to store passwords. LANMAN uses the Data
Encryption Standard (DES), which was proven insecure and can be defeated by brute force attacks.
NTLM succeeded LANMAN, followed by NTLM version 2.

Outside of North America, HAVAL and RIPEMD are popular hashing functions.

A very common method that is used to crack hashing is a brute force attack.

This means that the bad actor will try different input values until they produce the same value as the original hash
output.

If the attacker has no idea of the size of the input data, a brute force attack will have limited success. However, if
the attacker knows something about the input value, they can eliminate those input values that don't apply.

For example, if the input value is a password and the bad actor knows the password requirements—for example,
that the minimum length is ten characters and the maximum length is twenty characters, and that it must contain
uppercase letters, lowercase letters, and numbers, but no special characters—then they can drastically narrow
down the scope of their attack.

As another example, if the bad actor knew that the hash output was protecting a four-number PIN using MD5, then
they could try every conceivable combination until they arrived at the same hash output.
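A sketch of that attack; the PIN value used here is hypothetical.

```python
import hashlib

# Suppose the attacker has obtained this unsalted MD5 hash of a 4-digit PIN.
stolen_hash = hashlib.md5(b"7291").hexdigest()  # "7291" is a hypothetical PIN

def crack_pin(target):
    """Try all 10,000 four-digit PINs until one reproduces the target hash."""
    for i in range(10_000):
        pin = f"{i:04d}"
        if hashlib.md5(pin.encode()).hexdigest() == target:
            return pin
    return None

crack_pin(stolen_hash)  # "7291"
```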

A type of brute force attack is the birthday attack, which is based on the birthday paradox and exploits hashing
functions that are known to produce collisions.

If you were in a room with 253 other people, there would be about a fifty percent chance that one of them shares
your birthday. However, if you only wanted a fifty percent chance of any two people in the room sharing the same
birthday, you would need just twenty-three people. This is the birthday paradox.

For hashing functions, this means that it is much easier to find any two matches, if you don't care which two they
are.

When attempting to crack a hashed password, the bad actor doesn't care if their input data matches the password,
only that their input data produces the same hash output.

This is because servers don't store user passwords. They store the hashes of those passwords.

So, if the bad actor can reproduce the hash output protecting Alice's password, they can log in as Alice. As noted
earlier, a situation in which two or more input values produce the same output value is called a collision.


The birthday attack exploits the mathematics behind the birthday problem in probability theory. The attack
succeeds because collisions between random attack attempts and a fixed set of permutations are far more likely
to be found than intuition suggests.

The best defense against a brute force attack is to ensure that the hash function supports an output that is long
enough to be computationally infeasible to break.

When hashing is used to protect passwords, increasing the length of the hash output value may not be enough, if
the function is susceptible to collisions.

In order to protect the password hashes stored on computers, you can increase entropy through the process of
key stretching.

Click the underlined terms for more information.

Birthday Paradox:

To understand the math of the birthday paradox, it may be easier to calculate the odds that no one shares a
birthday and invert the results.

Imagine that, as people enter a room, you ask them their birthdays. What is the chance that two people in that
room share the same birthday?

If one person enters the room, there is no chance that they share a birthday with anyone, because there is no one
else in the room. However, after a second person enters the room, there is one chance out of 365 that the two
people now in the room share the same birthday. Three hundred and sixty-five is the denominator, because there
are 365 days in a year (omitting the leap year)—or 365 possibilities.

In other words, there is a 364/365 chance that these two people will not share the same birthday.

When the third person enters the room, they must avoid two birthdays. Thus, the chance that the third person will
not share a birthday with the first or second person is 363/365. To determine the probability that none of the three
people in the room share a birthday, you must multiply the last result with this fraction [0.9973 x 363/365].

When the fourth person enters the room, they must avoid three birthdays. The last result, 0.9918, is multiplied by
362/365.

If you advance ahead to the 23rd person entering the room, there are 22 birthdays to avoid; therefore, there is a
343/365 chance of not sharing the same birthday. However, when you account for all the people in the room
[0.5243 x 343/365], the result is a surprising 0.4927.

In other words, there is about a 49% chance that two people in a room of twenty-three people will not share the
same birthday, or a 51% chance that they will.
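The running calculation above, condensed into a few lines:

```python
def prob_shared_birthday(people: int) -> float:
    """Probability that at least two of `people` share a birthday (365-day year)."""
    p_all_distinct = 1.0
    for k in range(people):
        # Person k+1 must avoid the k birthdays already taken.
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

prob_shared_birthday(23)  # ~0.507: just past the fifty percent mark
```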

You've completed the lesson. You can now achieve these objectives.


Public Key Infrastructure Lesson

Welcome to the Public Key Infrastructure lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Public key infrastructure (PKI) is an ecosystem composed of the policies, procedures, software, and hardware
needed to create, distribute, store, use, and revoke digital certificates.

A certificate is revoked when it is no longer trusted. Consequently, the content in the revoked certificate, such as a
public key, is rendered void.

Four entities compose a PKI.

What is a digital certificate?

In cryptography, a digital certificate is an electronic document, issued and signed by a trusted entity, such as a CA.

It contains the name of the certificate holder and may or may not contain a public key. If the certificate is issued to
a specific person, device or application, it binds the entity's identity to the public key by way of a digital signature.

The certificate is trusted because it is signed by a trusted entity, just like a driver's license is trusted because it is
issued by a government authority.

Although an encryption certificate contains a public encryption key used for encipherment, and a verification
certificate contains a public verification key used for verifying a signature, not all certificates contain keys.

Policy certificates contain policy information, such as defining password criteria to secure your key store.
Certificate revocation lists (CRLs) are certificates that contain information about revoked certificates, in other
words, certificates that are no longer trusted.

There are different standards and best practices for the production and management of certificates, but perhaps
the most important one is X.509 version 3. This standard defines what content can be written to a certificate and
how the content is expressed. Common fields found in a certificate are:

Customized content may also be written to certificates. Certificate templates specify the attributes that will be in
different certificate types, including the purpose of the certificate. For example, the key in a certificate that has the
attribute "key usage equals key encipherment" will be used to encrypt objects.
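As an illustration only, a certificate can be modeled as a simple record whose key-usage attribute gates what its key may be used for. The Python sketch below uses invented names and values; it is not real X.509 parsing, which requires a dedicated cryptography library:

```python
# Illustrative sketch only: a certificate modeled as a plain record, with an
# X.509-style key-usage check. Real certificates are DER/PEM encoded and are
# parsed and validated with a dedicated library, not handled like this.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Certificate:
    subject: str               # name of the certificate holder
    issuer: str                # the CA that signed the certificate
    serial_number: int
    public_key: Optional[str]  # not all certificates carry a key
    key_usage: Set[str] = field(default_factory=set)

def can_encrypt_with(cert: Certificate) -> bool:
    """A key may be used for encipherment only if key usage permits it."""
    return cert.public_key is not None and "keyEncipherment" in cert.key_usage

enc_cert = Certificate("Mia", "Example CA", 1001, "MIIB...", {"keyEncipherment"})
sig_cert = Certificate("Mia", "Example CA", 1002, "MIIC...", {"digitalSignature"})
print(can_encrypt_with(enc_cert))  # True
print(can_encrypt_with(sig_cert))  # False
```

The point is simply that an application consults the certificate's attributes, not just its key, before using it.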

Certificates are elements that allow trust and secure communications between entities within a private or public
domain. Given their importance, the certificate issuer—the CA—deserves critical consideration.

Click the common fields found in a certificate to learn more.

The CA performs two primary functions related to certificates.

First, the CA issues certificates to end entities, such as persons and devices, and issues certificates to help
manage end-entity certificates. Examples of management certificates are CRLs and policy certificates.

Second, the CA provides an ecosystem of trust whereby all entities within that system can safely interact with one
another. Trust hinges on the CA's private key because all certificates issued by the CA are signed by its private
key.

But why do end entities trust the public key?

14 Technical Introduction to Cybersecurity 1.0 Lesson Scripts


Fortinet Technologies Inc.
Cryptography and PKI Module Public Key Infrastructure Lesson

There are a number of reasons why end entities trust the CA. One might be that they have a relationship with the
trusted entity, just like a bank client has a relationship with their bank or a citizen has a relationship with their
government. However, there is a legal framework that supports this trust. With respect to PKI, a high-assurance
CA can prove in court that the CA's key pair was created in a manner consistent with the highest security
standards.

Low-assurance CAs generally do not have the same rigorous legal requirements and may not implement all the
same security elements as a high-assurance CA. Ultimately, the required degree of trust determines the
assurance level and the PKI implementation.

Click the private key to learn more.

In a PKI implementation, multiple CAs can be organized in different ways.

In a hierarchical environment, there would be one root CA and one or more subordinate CAs. The root CA is the
source of trust and the subordinate CAs issue certificates to end entities. A subordinate CA can have a
subordinate CA and so on, creating a deeper hierarchy.

Also, it is possible for a CA in one PKI to trust a CA in another PKI by cross certifying. This is a lateral structure
because the trust relationship exists between two CAs of different PKIs. In the process of cross certifying, cross
certificates are created that allow users of one PKI to trust the users of another PKI and the reverse. The cross-
certification process can take place between a root or subordinate CA from one PKI and a root or subordinate CA
from another PKI.

If you were to build a hierarchy of CAs from the ground up, you would start with the source of trust—the root CA. The
root CA's certificate is signed by its own private key. In this scenario, two subordinate CAs are created below the
root. During the initialization of those subordinate CAs, the public keys are sent to the root CA to be written to
X.509 certificates and signed by the root CA's private key.

You could install a subordinate CA below one of these subordinate CAs. You repeat the certificate request
process, except this time, the superior subordinate CA signs the certificate.

Assume that the violet subordinate CA signed Mia's certificate and the blue subordinate CA signed Noah's
certificate. Noah sends Mia an important legal document, which he signs. Can Mia trust and validate Noah's
signature? Yes, because of the chain of trust.

The chain of trust is a series of associated certificates, each one relying on the other for trust. The chain
progressively leads to the root CA—the source of trust.

Mia's application can use Noah's public key, found in his certificate, to verify the signature on the document. This
verification can prove the integrity of the data and the authenticity of the signature. However, at this point, Mia
does not trust Noah's certificate.

Mia's application retrieves the certificate of the CA that issued Noah's certificate. The blue subordinate CA public
key verifies the integrity and authenticity of Noah's certificate, but Mia still does not trust Noah's certificate or the
subordinate CA's certificate.

Mia's application retrieves the certificate of the CA that issued the blue subordinate CA certificate, which is the root
CA. Because the root CA is the source of trust for Mia, she trusts Noah's certificate and the blue subordinate
CA's certificate.

This process of following the chain of trust to its source allows Mia to trust any valid certificate within this PKI.
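The chain-walking idea can be sketched in a few lines of Python. This is a toy model in which signatures are assumed valid and the names are invented; real validation cryptographically verifies each certificate's signature at every step:

```python
# Toy illustration of following a chain of trust to its source.
# Signatures are simulated as trusted; real X.509 validation verifies each one.
certs = {
    "Noah":        {"issuer": "Blue Sub CA"},
    "Blue Sub CA": {"issuer": "Root CA"},
    "Root CA":     {"issuer": "Root CA"},   # self-signed: the source of trust
}

def chain_to_root(subject, certs, trusted_root="Root CA", max_depth=10):
    """Walk issuer links until the trusted root is reached (or give up)."""
    chain = [subject]
    for _ in range(max_depth):
        issuer = certs[subject]["issuer"]
        if issuer == subject:                # self-signed: top of the chain
            return chain if subject == trusted_root else None
        chain.append(issuer)
        subject = issuer
    return None

print(chain_to_root("Noah", certs))  # ['Noah', 'Blue Sub CA', 'Root CA']
```

Because the walk ends at the root CA Mia already trusts, every certificate along the way inherits that trust.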

When Mia decided to verify the signature on the document, she needed Noah's certificate. How did she get
Noah's certificate? There are two possibilities.

One, Noah's application sent the certificate, along with the signed document.


Or two, Mia's application retrieved the certificate from a server. PKI standards anticipated that end entities would
retrieve certificates from an X.500 and Lightweight Directory Access Protocol (LDAP)-compliant directory server.

The directory server not only stores user and device certificates, but all of the PKI certificates as well. These would
include policy, CRLs, authority revocation lists (ARLs), cross certificates, and CA certificates.

Click the underlined terms for more information.

A registration authority (RA) is a function for certificate enrollment used in PKIs. The RA verifies and forwards
certificate requests to the CA.

A person who requires certificates for themselves, for a device, or for an application, applies online or in person to
an RA. An RA can be an automated application or it can be manned by a person authorized to use an RA
application.

An RA is similar to an automobile licensing bureau. It must identify applicants and ensure that they meet the
requirements for a car license. Likewise, an RA is responsible for vetting requests to ensure only authorized and
legitimate persons or other entities receive certificates. The RA registers approved requests with the CA, which
generates and supplies a one-time authorization code to the requester.

The degree of vetting applied to the requests may be determined by the level of trust declared by the CA. In a
high-assurance CA, you may have to present yourself in person to the RA. But for other types of certificates, such
as SSL certificates, you can register for and purchase them online.

At the endpoint, the user needs a cryptographic application to generate key pairs, submit a certificate request to
the CA, and conduct cryptographic operations.

A certificate signing request (CSR) can be submitted manually or can be automated to some degree.

Some cryptographic capability can come built into computer operating systems, such as Microsoft cryptographic
service provider (CSP).

To review, these are the components of a PKI that relate to the certificate lifecycle.

The CA creates, distributes, and revokes certificates.

The directory server stores certificates.

The RA vets and registers end entities for certificates.

And end entities generate keys, submit CSRs, validate certificates, and use keys and certificates to encrypt and
decrypt, and to sign and verify signatures.

All of these entities carry out their functions in accordance with policy, which is prescribed by PKI authorities.

You've completed the lesson.

Secure Network Module

Overview Lesson

Welcome to the Secure Network module.

Click Next to get started.

Network security is composed of configurations and rules that are implemented on devices and systems in the
network.

These rules ensure the integrity, privacy, and usefulness of the network, and cover both the hardware and
software that make the network run.

Security measures can be applied to individual devices or groups of components with the same level of
vulnerability.

In the meantime, network threats are evolving while the attack surface is growing. So, keeping the network secure
is a challenge that requires knowledge of the concepts, models, and elements around secure networks.

The lessons in this Secure Network module provide an essential understanding of the following fundamentals
necessary to achieve your goals.

You will be able to:


l Compare two main secure network models, such as secure perimeter and zero trust, and identify the benefits of
each one.
The approach with the secure perimeter model is simple. Anyone inside the perimeter is trusted, and anyone
outside isn't. With the zero-trust model, every user and device within a network is untrusted by default. Zero trust
doesn't grant explicit and full access to anyone.

The secure perimeter model performs verification at initial access using predefined rules and common policies,
unlike the zero-trust model, which verifies continuously using various fine-grained rules and adaptive policies.
Zero trust's microsegmentation applies the principle of least privilege access, which minimizes the attack surface
and reduces the chances of lateral movement, unlike the secure perimeter model.

You will be able to:


l Identify how different network deployments, such as SD-WAN and SASE, secure and manage WAN network traffic.
Using SD-WAN creates a logical network over the physical network, where security is available but not included
by default.

Secure Access Service Edge (SASE) is cloud-based and globally distributed through an as-a-service deployment,
and security is built in.

SASE connects endpoints to the edge and sends traffic through globally distributed points of presence, while SD-
WAN connects branch offices to networks and follows the organization's configured policies to route traffic.

You are now welcome to go through all the lessons in this module and complete the quiz to validate your progress.


Secure Perimeter Lesson

Welcome to the Secure Perimeter lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

A secure perimeter is a form of protection that consists of devices or techniques added to the edge of a managed
network.

Taking into consideration different LANs, which interconnect to create a WAN, the trusted zone for a company is
the IT-managed portion of devices and applications. A remote office device can also be part of a secure perimeter
if it connects to the trusted zone using secure remote access. Everything inside the managed network is
protected, like a castle surrounded by a moat. Everything outside the secure perimeter is considered untrusted.

Specific devices and applications that create the secure perimeter protect the trusted zone. This is like guards
near the drawbridge of a moat who check and evaluate the traffic. The secure perimeter has specific
authentication and authorization applications that protect and provide confidentiality while filtering traffic to the
trusted zone.

The secure perimeter can filter traffic at different OSI layers.

At the data link layer, a secure perimeter device can perform Media Access Control (MAC) filtering. For example,
the IT manager can create an access control list (ACL), a defined list of devices with known addresses in the
trusted network. Therefore, the secure perimeter allows only corresponding devices with known MAC addresses
to pass through the network.

Click the highlighted icons for more information.
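MAC filtering with an ACL can be pictured as a simple membership check. This Python sketch uses made-up addresses and is only a model of the idea, not how a switch or firewall implements it:

```python
# Minimal sketch of MAC-based filtering against an access control list (ACL).
# The addresses are illustrative, not real devices.
allowed_macs = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def permit_frame(src_mac: str) -> bool:
    """Allow only frames whose source MAC appears on the ACL."""
    return src_mac.lower() in allowed_macs   # MACs are case-insensitive

print(permit_frame("00:1A:2B:3C:4D:5E"))  # True: a known device
print(permit_frame("de:ad:be:ef:00:01"))  # False: an unknown device
```

Only devices whose addresses the IT manager has put on the list pass through the perimeter.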

At the transport layer, a secure perimeter device can perform packet filtering. Packet filtering allows or denies
packets based on a configured set of rules. Packet filtering can be stateless, where each packet is checked based
on its IP addresses, source and destination ports, and protocol. In contrast, packet filtering can also be stateful,
where the security device keeps track of the 5-tuple check and the TCP/IP connection state. Therefore, the return
traffic is validated only if it matches corresponding incoming traffic.

Click the underlined term for more information.
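The stateful idea, tracking 5-tuples so that only matching return traffic is allowed, can be sketched like this. The addresses and ports are invented, and real firewalls also track TCP connection state, which this toy omits:

```python
# Sketch of stateful filtering: outbound connections record their 5-tuple,
# and return traffic is allowed only if it mirrors a tracked connection.
connections = set()   # tracked 5-tuples for established sessions

def five_tuple(src_ip, src_port, dst_ip, dst_port, proto):
    return (src_ip, src_port, dst_ip, dst_port, proto)

def outbound(src_ip, src_port, dst_ip, dst_port, proto="TCP"):
    """Permit and track an outbound packet from the trusted zone."""
    connections.add(five_tuple(src_ip, src_port, dst_ip, dst_port, proto))
    return True

def inbound(src_ip, src_port, dst_ip, dst_port, proto="TCP"):
    """Permit return traffic only if it reverses a tracked 5-tuple."""
    reverse = five_tuple(dst_ip, dst_port, src_ip, src_port, proto)
    return reverse in connections

outbound("10.0.0.5", 50123, "93.184.216.34", 443)
print(inbound("93.184.216.34", 443, "10.0.0.5", 50123))  # True: matching reply
print(inbound("198.51.100.9", 443, "10.0.0.5", 50123))   # False: unsolicited
```

Unsolicited inbound packets have no matching entry and are dropped.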

A secure perimeter device can also perform network address translation (NAT) filtering at the transport layer. NAT
translates a public IP address to a private IP address and a private IP address to a public IP address, and
sometimes a port as well. Because NAT hides the private network behind a public address, a translation is
required before an internal resource can be accessed from the internet.

Click the underlined term for more information.
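A NAT device's behavior can be pictured as a translation table. The addresses below are documentation examples, and the sketch covers only the inbound lookup, not the full dynamic NAT a real device performs:

```python
# Sketch of NAT filtering: a translation table maps a published public
# (IP, port) pair to a private (IP, port) pair, hiding the private network.
nat_table = {
    ("203.0.113.10", 8443): ("192.168.1.20", 443),   # published web server
}

def translate_inbound(public_ip, public_port):
    """Return the private endpoint, or None if nothing is published there."""
    return nat_table.get((public_ip, public_port))

print(translate_inbound("203.0.113.10", 8443))  # ('192.168.1.20', 443)
print(translate_inbound("203.0.113.10", 22))    # None: not exposed
```

Anything without a table entry simply cannot be reached from the internet, which is the filtering effect described above.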

At the application layer, a secure perimeter device can provide proxy filtering. Through the man-in-the-middle
mechanism, this gateway hides the internal user from the internet.

At the same layer, a secure perimeter device can also act as an application layer gateway (ALG). An ALG inspects
the packets up to Layer 7, allowing the secure perimeter device to open TCP/UDP data ports dynamically. For
example, protocols like FTP or SIP have a control connection that provides the port used for data communication.
The secure perimeter device opens this temporary port only for the corresponding sessions.

Click the man-in-the-middle icon for more information.
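The ALG behavior, learning a data port from the control channel and opening it only for that session, can be modeled in a few lines. The message format and port numbers are invented; real FTP and SIP inspection parses actual protocol messages:

```python
# Toy ALG sketch: the gateway inspects an FTP-style control channel, learns
# the negotiated data port, and opens it only for that session.
open_ports = {}   # session id -> temporarily opened data port

def inspect_control_message(session_id, message):
    """Parse e.g. 'PORT 5005' from the control channel and open that port."""
    if message.startswith("PORT "):
        open_ports[session_id] = int(message.split()[1])

def data_allowed(session_id, port):
    """Permit data traffic only on the port negotiated for this session."""
    return open_ports.get(session_id) == port

inspect_control_message("sess-1", "PORT 5005")
print(data_allowed("sess-1", 5005))  # True: negotiated for this session
print(data_allowed("sess-1", 6000))  # False: never negotiated
print(data_allowed("sess-2", 5005))  # False: different session
```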


Secure perimeters face some challenges, especially with the emerging trend of remote working. Along with the
introduction of bring your own device (BYOD), the Internet of Things (IoT), and the cloud working model, IT
departments have a harder time dynamically updating the security perimeter. These drawbacks led to the zero-trust
principle, built upon the evolution of the secure perimeter, which is now widely used to ensure greater trust.

You will learn more about zero trust principles in the following lesson.

Click the highlighted icons for more information.

You have reached the end of the lesson.


Zero Trust Principles Lesson

Welcome to the Zero Trust Principles lesson.

Click Next to continue.

After completing this lesson, you will be able to achieve these objectives.

The concept behind the zero-trust security model is that trust must be explicitly derived from a mix of identity-
based and context-based aspects.

Zero trust is not a single product or service that you can buy. Rather, it is a security concept or strategy that has
three core principles. One, never trust a user or device. Always authenticate a user or device to determine their
identity, and verify what resources the user or device is authorized to access. Two, implement least privilege. This
means that users and devices are granted access to only those resources that they need to complete their jobs or
functions. Three, assume that the network is already breached. Take precautions, such as reducing the attack
surface.

Click the underlined terms for more information.

Perimeter networks are closed, self-contained units with firewalls that act like drawbridges. Most endpoints and
servers exist within the network perimeter. Virtual private networks (VPNs) provide secure connections for remote
users and between local area networks. Users provide credentials, usually a user name and password, to
authenticate to the network. The network assigns roles to the authenticated users to control access to network
resources. However, if a user's credentials become compromised or if a user's device becomes infected, then
malware can move laterally across the network infecting other devices and perhaps exfiltrating valuable
information.

This type of cyberattack is equivalent to the myth of the Trojan horse. In the myth, a large wooden horse—the
Trojan horse—was rolled into the fortified city of Troy. Unbeknownst to the people of Troy, many Greek soldiers
were concealed inside the Trojan horse. After the Trojan horse was safely inside the fortified city, the hidden
warriors disembarked and ransacked the city. Once the Trojan horse was inside the fortified city, there was no
way to prevent the attack.

Business transformation is forcing networks to decentralize so that some network components remain on-
premises, some are located in a private cloud, and others in the public cloud. This transformation, plus the advent
of BYOD, the internet of things (IoT), and an expanding remote workforce, has drastically enlarged the attack
surface and provided many more attack vectors.

Click on the icons for more information.

There are a number of things you can do to fulfill the "never trust" principle. One is to verify the identity of users
and devices on an ongoing basis. The identification process can be strengthened using MFA.

Restrictions to access the network and resources can also be context based, meaning that they are based on the
time and date of the request, the geographical location of the device asking for access, and the security posture of
the device.

Click the icons for more information.
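A context-based access decision can be sketched as a function that requires every factor to check out. The factors, thresholds, and allowed regions below are invented examples, not a real policy engine:

```python
# Hedged sketch of a context-based access decision in a zero-trust model:
# identity (proven with MFA) plus context such as time, location, and
# device security posture. All values here are illustrative.
from datetime import time

def grant_access(mfa_passed, request_time, country, patch_current):
    """Deny access if any single identity or context factor fails."""
    business_hours = time(8, 0) <= request_time <= time(18, 0)
    allowed_region = country in {"CA", "US"}
    return all([mfa_passed, business_hours, allowed_region, patch_current])

print(grant_access(True, time(10, 30), "CA", True))   # True
print(grant_access(True, time(2, 15), "CA", True))    # False: off-hours
print(grant_access(False, time(10, 30), "US", True))  # False: MFA failed
```

Because the check runs on every request, access can be revoked as soon as the context changes.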

The three principles of the zero trust model are never trust an end entity, grant the least amount of privileges to a
user or device, and assume that the network is already breached.


There are a number of tools that you can deploy to implement the "principle of least privilege." These are
deploying a privileged access management (PAM) system, defining the protect surface, and applying the Kipling
method.

Click on the different tabs for more information.

One action that you can take in anticipation of an inevitable network breach is to prepare contingencies for the
worst situations. Following this process provides you with the opportunity to make various plans that can be
invoked immediately after a breach.

Another precaution that you can take is to segment the network into smaller sections to restrict the lateral
movement of contagions.

Zero trust access (ZTA) is a secure access method that supports zero trust security. ZTA uses role-based access
control. Once a user is identified, they can be assigned a role that determines what resources they have access
to. Examples of roles are: employee, guest, contractor, and so on. A user can be assigned more than one role. For
example, they could be assigned the employee role and the account manager role.
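Role-based access control can be pictured as taking the union of the permissions granted by each of a user's roles. The role names and resources below are invented for illustration:

```python
# Sketch of ZTA-style role-based access control: a user may hold several
# roles, and their access is the union of what those roles permit.
role_permissions = {
    "employee": {"intranet", "email"},
    "account manager": {"crm"},
    "guest": {"guest-wifi"},
}

def allowed_resources(roles):
    """Combine the permissions of all of the user's assigned roles."""
    resources = set()
    for role in roles:
        resources |= role_permissions.get(role, set())
    return resources

mia_roles = ["employee", "account manager"]
print(sorted(allowed_resources(mia_roles)))  # ['crm', 'email', 'intranet']
```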

Endpoint software agents that support the zero trust model can supply the network with valuable information
about the device, such as the operating system, patch level, installed software on the device, and so on. This
information can be used to assess the level of risk the device might pose to the network.

Network access control (NAC) identifies devices on the network, giving IT security greater visibility and control.

Zero trust network access (ZTNA) is a technology that establishes a secure session automatically between the
end entity and the network regardless of location, while ensuring granular control over access to applications and
enforcing the precept of zero trust.

Unlike the previous network methodology, which includes a perimeter network, the zero-trust model has no
trusted zone. Trust must be proven through MFA, and risk is judged through a context-based assessment.

Least privilege access ensures tighter restrictions to access resources by users, devices, and applications.

By doing a thorough review of the assets you need to protect, and by asking what, why, when, where, and how,
you reduce the attack surface and better protect the network. By preparing for a worst-case scenario, you are
better prepared when a breach does occur.

Also, unlike the perimeter network model, where the lateral movement of malware across the network could not be
restricted, microsegmentation isolates a compromised device or network segment.

You have reached the end of the lesson.


Centralized Security Network Management Lesson

Welcome to the Centralized Security Network Management lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Centralized security network management refers to the act of gathering security-related data from various devices
and applications into one central location.

Data gathering can be done through logs, simple network management protocol (SNMP), or an application
programming interface (API). An API also provides scripting possibilities, which can reduce manual tasks and
human error.

The objective of centralized security network management is to provide a comprehensive view of the network
security status through typical network management actions, such as configuring, controlling, operating, and
diagnosing a security network.

Click the highlighted icons for more information.

In an expanding network that includes more IoT devices, remote users, and cloud environments, security
administrators require dynamic and elastic centralized network security management. A possible solution can
involve implementing a data fabric.

Data fabric architecture, defined by Gartner, makes it possible to monitor and manage data and applications
wherever they are, while remaining centrally governed. Like a needle and thread, data fabric stitches together the
different parts of the security network to create a tapestry. As the data fabric gathers more data, it uses artificial
intelligence (AI) to consolidate the most relevant data and improve decision-making.

For example, repetitive tasks are initially performed manually. Over time, AI observes these tasks and not only
begins to automate them, but through machine learning (ML), also makes recommendations.

Click the highlighted icons for more information.

An example of centralized security network management is the one implemented in the Fortinet Security Fabric.
The Security Fabric has an open ecosystem consisting of APIs and fabric connectors that help to integrate
partners.

The Fabric Management Center provides a broad view of security and the network, with end-to-end visibility. The
Security Fabric's link with FortiGuard AI-powered security enables fast, coordinated detection. Finally,
automation in the Security Fabric can make dynamic adjustments to secure the entire attack surface
through zero-trust access.

The Fortinet Security Fabric provides centralized security network management.

The benefits of a centralized security network management include the following:


l A high-level view and broad visibility of security in the network. This broad visibility includes centralized reports and
simplified analyses of security events. It also helps you to avoid security issues, reduces incident response time, and
minimizes disruptions.
l Device integration. This allows for the centralized definition of configurations and policy orchestration.
l A reduction in the number of repetitive and manual tasks that you must perform.
l Easier maintenance. This includes the application of patches and updates to monitoring. Easier maintenance results
in improved protection of the entire digital attack surface and improved risk management.


l Easier capacity planning and performance forecast from a central point.


l And finally, centralized security network management provides easier compliance audits.
You have reached the end of the lesson.


Secure SD-WAN Lesson

Hello! In this lesson, we'll explain what SD-WAN is and how it has evolved.

SD-WAN stands for software-defined wide-area network, and it leverages the corporate WAN as well as multi-
cloud connectivity to deliver high-speed application performance.

In the past, organizations purchased and operated their own servers to run applications and store critical business
data. As a result, they had upfront capital expenses, and they needed to employ a team of highly trained
technicians to run these servers. While expensive, the competitive advantage it gave over those who didn't
computerize their businesses made it worthwhile. One early challenge was to make these servers available to
various geographically-distributed networks, called local area networks or LANs.

You might recall that a WAN is a computer network that spans a large geographic area and typically consists of
two or more LANs. For example, if Acme Corporation spanned multiple cities and continents, each with their own
local area network, how would they connect these LANs so that someone in the London office could connect to a
database server in Singapore? Traditionally, businesses connected their LANs by way of a single, dedicated
service provider. Though expensive, they could control and secure this connection while providing access to
critical resources. However, this method had limitations. The single point of connectivity was subject to frequent
outages, which made it unreliable. In addition, because there was an increasing demand to host business
applications in the cloud, known as software as a service (SaaS), higher latency became an issue. SaaS
applications, like Salesforce, Dropbox, and Google Apps, and a greater reliance on video and voice conferencing,
contributed to the congestion. Businesses began to augment their connectivity by employing multiple providers, or
seeking more affordable broadband and other means of internet connectivity. The trend toward increasing hybrid
connections, and the growth of cloud applications to support underlying intelligent business decisions, led to the
first generation of SD-WAN.

Businesses added multiple dedicated carrier links and load-balancing per application traffic, based on how much
bandwidth was available. Although this approach seemed to solve a few bandwidth issues, it added yet another
product to solve another network challenge. These point products add complexity to the network
infrastructure. Why? Because adding multiple products from multiple vendors, each with its own management
console and often without full integration with the other products, becomes a management
nightmare for IT security administrators. Still, the first generation of SD-WAN solved a pressing business need: its
basic load-balancing techniques allowed the network to make application-intelligent business decisions on hybrid
WAN links, including service provider, broadband, and long-term evolution or LTE, which is a standard for
wireless broadband communication for mobile devices and data terminals.

Accurate application identification, visibility into network performance, and reliable switchover of application traffic
between the best-performing WAN links made SD-WAN the most sought-after WAN technology across all
businesses.

However, security remained a serious consideration for businesses. Even after SD-WAN adoption, businesses
kept sending all their sensitive and critical application traffic to data centers for security purposes, or were forced
to install a sophisticated firewall solution to inspect their direct internet access. This added another point product
for security, making the network yet more complex, challenging to manage, and delaying cloud adoption.

Businesses needed to address these challenges by integrating security and networking functionalities into a
single, secure SD-WAN appliance. This enabled businesses to replace their multiple point products with a
single, powerful security appliance, at reduced cost and with easier management. A strong security posture helped
businesses use cloud applications more affordably, with lower latency, and with a direct internet connection
ensuring optimal application performance and best user experience. Continued network performance health
checks ensured that the best available WAN link was chosen, based on user-defined application service level
agreements. Should a particular link degrade, the SD-WAN device knew to move the connection to the better
performing WAN link.
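SLA-based link selection can be sketched as choosing the best link that still meets the application's targets. The link names, metrics, and thresholds below are invented; real SD-WAN devices measure these continuously:

```python
# Toy sketch of SLA-based SD-WAN link selection: pick the lowest-latency WAN
# link that satisfies the application's latency and loss targets.
links = [
    {"name": "mpls",      "latency_ms": 35, "loss_pct": 0.1},
    {"name": "broadband", "latency_ms": 80, "loss_pct": 1.5},
    {"name": "lte",       "latency_ms": 60, "loss_pct": 0.4},
]

def best_link(links, max_latency=70, max_loss=1.0):
    """Prefer the lowest-latency link that meets the SLA; None if none do."""
    eligible = [l for l in links
                if l["latency_ms"] <= max_latency and l["loss_pct"] <= max_loss]
    return min(eligible, key=lambda l: l["latency_ms"])["name"] if eligible else None

print(best_link(links))  # 'mpls'
links[0]["latency_ms"] = 120   # simulate the MPLS link degrading
print(best_link(links))  # 'lte': traffic switches to the next best link
```

The switchover on degradation is the behavior described above: when a link falls out of SLA, traffic moves to the best remaining link.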

Today, in secure SD-WAN, intuitive business policy workflows make it easy to configure and manage the
application needs with the flexibility of prioritizing business-critical applications. A centralized management
console provides single-pane-of-glass visibility and telemetry to identify, troubleshoot, and resolve network issues
with minimal IT staff. Comprehensive analytics on bandwidth utilization, application definition, path selection, and
the security threat landscape not only provide visibility into the extended network, but also help administrators to
quickly redesign policies, based on historical statistics, to improve network and application performance.

Overall, positive outcomes of a secure SD-WAN solution are simplification, consolidation, and cost reduction
while providing much needed optimal application performance and best user experience for the enterprise, SaaS,
and Unified Communications as a Service (UCaaS) applications. Run-time analytics and telemetry help
infrastructure teams coordinate and resolve issues in an accelerated manner, which reduces the number of
support tickets and network outages.

Fortinet introduced the term Secure SD-WAN, with FortiGate, Fortinet's next-generation firewall (NGFW), at its
core. In addition to the FortiGate device, the Secure SD-WAN solution includes other advanced networking
features.

Thank you for your time, and please remember to take the quiz that follows this lesson.


SASE Lesson

Hello! In this lesson, we will introduce you to Secure Access Service Edge (SASE), and explain how it has evolved.

SASE is a technology that combines Network as a Service with Security-as-a-Service capabilities.

SASE is delivered through the cloud as an as-a-service consumption model, to support secure access for today’s
distributed and hybrid enterprise networks.

Network security is a top priority for most organizations, however new challenges have emerged. Rapid and
disruptive digital innovation has brought on:
l An expanding thin edge defined by small branch locations that are attached to the core network
l A growing number of off-network users accessing the central data center
l A challenging user experience for off-network users
l An expanding attack surface
l Multi-level compliance requirements, and
l Increasingly sophisticated cyber threats
As work environments have evolved, so too have user behavior and endpoint protection requirements. Users no
longer access information from a dedicated station within a pre-defined network perimeter confined to a corporate
office. Instead, users access information from a variety of locations, such as in the home, in the air, and from
hotels. They also access that information from different devices, such as desktop workstations, laptops, tablets,
and mobile devices. Adding to this network complexity is the rise of Bring-Your-Own-Device, where users access
enterprise systems through personal devices that are not part of the enterprise infrastructure.

Organizations today require that their users have immediate, continuous secure access to network and cloud-
based resources and data, including business-critical applications, regardless of location, on any device, and at
any time. Organizations must provide this access in a scalable and elastic way that integrates thin edge network
sites and remote users into the central infrastructure, and that favors a lean operational, as-a-service model.

Finding solutions that meet these requirements is challenging.

The reasons for this are clear.

First, while networks have evolved to support the workflows of remote endpoints and users, many outdated
network security solutions remain inflexible and do not extend beyond the data center to cover the ever-expanding
network perimeter and, therefore, the attack surface. With the advent of new thin edge networks, this challenge is
exacerbated.

Second, solutions that converge networking and security oversight require that all traffic, whether coming
from thin edge locations or off-network users, runs through the core data center for inspection. This results in:
l High cost
l Complexity
l Elevated risk exposure
l Latency and a poor user experience when accessing multi-cloud-based applications and data
Finally, the multi-edge network environment of today has exposed the limitations of VPN-only solutions, which are
unable to support the security, threat detection, and zero-trust network access policy enforcement present on the
corporate on-premises network. VPN-only solutions cannot scale to support the growing number of users and
devices, resulting in inconsistent security across all edges.


A new scalable, elastic, and converged solution is required to achieve secure, reliable network access for users
and endpoints: one that addresses the security needs of the many hybrid organizations whose systems and users
are spread across corporate and remote networks. That solution is SASE.

A SASE solution provides integrated networking and security capabilities, including:


l Peering, which allows network connection and traffic exchange directly across the internet without having to pay a
third party.
l A Next-Generation Firewall (NGFW) or cloud-based Firewall-as-a-Service (FWaaS), with security capabilities
including Intrusion Prevention System (IPS), Anti-Malware, SSL Inspection, and Sandbox.
l A Secure Web Gateway to protect users and devices from online security threats by filtering malware and enforcing
internet security and compliance policies.
l Zero-Trust Network Access (ZTNA), which ensures that no user or device is automatically trusted. Every attempt to
access a system, from either inside or outside, is challenged and verified before granting access. It consists of
multiple technologies, including multi-factor authentication (MFA), secure Network Access Control (NAC), and access
policy enforcement.
l Data Loss Prevention (DLP), which prevents end users from moving key information outside the network. These
systems perform content inspection of messaging and email applications operating over the network.
l Domain Name System (DNS), which serves as the phone book of the internet and provides SASE with threat
detection capabilities to analyze and assess risky domains.
These services deliver:
l Optimized paths for all users to all clouds to improve performance and agility
l Enterprise-grade certified security for mobile workforces,
l Consistent security for all edges, and
l Consolidated management of security and network operations
Although SASE is classified as cloud-based, there are common use cases that may require a combination of
physical and cloud-based solutions. For SASE to be effectively deployed in this scenario, secure connectivity with
network access controls must be extended from the physical WAN infrastructure to the cloud edge. For example,
to roll out access to SASE at branch offices, you may see SASE reliant on physical networking appliances, such
as wireless (LTE and 5G) and wired (Ethernet) extenders, or Wi-Fi access points.

The goal of SASE is to support the dynamic, secure access needs of today's organizations. Proper SASE service
allows organizations to extend enterprise-grade security and networking to the:
l Cloud edge, where remote, off-network users are accessing the network, and
l the Thin edge, such as small branch offices.
Fortinet's cloud-based SASE solution is called FortiSASE.

Thank you for your time, and please remember to take the quiz that follows this lesson.


Network Segmentation Lesson

Welcome to the Network Segmentation lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Network segmentation divides a network into smaller, isolated segments. This allows IT managers to better
control and protect a large network from breaches or malfunctions.

Compare network segmentation to a submarine. The different compartments in a submarine are isolated from one
another, so if one becomes flooded, the others remain safe and dry.

In a network, devices and applications with similar functions are grouped together for easier management. For
example, an HR subnetwork operates separately from a sales subnetwork. This avoids confidential information
leaking from one group to another.

Another example of segmentation involves servers that are accessed from the internet. A company may need
some servers on their network to be available to the internet, but not want to expose the rest of their network. In
this case, the servers that you want to be accessible from the internet are placed in a demilitarized zone (DMZ),
separated from the rest of the internal network. Therefore, users on the internet can access these servers without
gaining access to the rest of the network. The traffic moving in and out of the DMZ is classified as north-south
traffic.

The DMZ can be compartmentalized even further through microsegmentation. When microsegmentation is used,
each resource (hosts, users, applications, and so on) is uniquely identified and protected. This identification and
protection is achieved through zero trust. The traffic that moves across a specific data center or cloud network is
called east-west traffic.

Click the underlined terms for more information.

A network can be segmented into physical or logical segments. Physical segmentation uses firewalls and
hardware like routers and switches to break networks down into multiple physical sections or subnets.

Logical segmentation breaks networks into smaller, more manageable sections and often does not require an
organization to invest in new hardware or wiring.

These different methods of segmentation are performed at the various levels of the OSI model.

At the data link layer, a logical segmentation approach is deployed using virtual local area networks (VLANs).
VLANs provide a way for groups of devices to communicate with one another through switches, as if they were on
the same wire.
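
As a minimal sketch of this idea, and not any vendor's implementation, VLAN isolation can be modeled as a lookup that floods a frame only to the other ports assigned to the same VLAN. The port-to-VLAN assignments below are hypothetical:

```python
# Hypothetical sketch of VLAN-based logical segmentation on a switch.
# Ports 1-2 belong to VLAN 10, ports 3-4 to VLAN 20 (illustrative values).
port_vlan = {1: 10, 2: 10, 3: 20, 4: 20}

def egress_ports(ingress_port):
    """Return the ports a broadcast frame may be flooded to:
    every other port in the same VLAN as the ingress port."""
    vlan = port_vlan[ingress_port]
    return sorted(p for p, v in port_vlan.items()
                  if v == vlan and p != ingress_port)

print(egress_ports(1))  # [2]  -- VLAN 10 only
print(egress_ports(3))  # [4]  -- VLAN 20 only
```

A frame entering on port 1 never reaches ports 3 or 4, even though all four ports share the same physical switch.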

Physical segmentation occurs at the network layer and a larger internal network is split into subnetworks. The data
flow between the different subnets is limited and controlled through firewall policies, access control lists (ACLs)
and routers.
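
As a minimal sketch of this kind of subnet-level control, assuming hypothetical HR and sales address ranges, Python's ipaddress module can model subnet membership and a simple ACL check between segments:

```python
import ipaddress

# Hypothetical subnets for the HR and sales segments (addresses are invented).
segments = {
    "hr":    ipaddress.ip_network("10.0.10.0/24"),
    "sales": ipaddress.ip_network("10.0.20.0/24"),
}

# A minimal ACL: only these (source segment, destination segment) pairs may talk.
allowed_flows = {("hr", "hr"), ("sales", "sales")}

def segment_of(ip):
    """Return the name of the segment an address belongs to, or None."""
    addr = ipaddress.ip_address(ip)
    for name, net in segments.items():
        if addr in net:
            return name
    return None  # unknown or external

def permitted(src_ip, dst_ip):
    """Allow traffic only for flows explicitly listed in the ACL."""
    return (segment_of(src_ip), segment_of(dst_ip)) in allowed_flows

print(permitted("10.0.10.5", "10.0.10.9"))  # True: within HR
print(permitted("10.0.10.5", "10.0.20.7"))  # False: HR to sales is blocked
```

Traffic within a segment passes, while cross-segment traffic is denied unless a flow is explicitly allowed.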

A software-defined wide area network (SD-WAN) is implemented at the application layer. With an SD-WAN, an
overlay network is applied on top of the physical network, also known as the underlay network. The overlay
network consists of encrypted virtual paths, or tunnels.

Click the underlined terms for more information.


One way to securely access segmented networks is to use a jump box. A jump box is a device with enhanced
access control and limited authorization that acts as a proxy to the devices in the internal segment. A jump box
also has additional monitoring and logging to alert the IT manager if it has been compromised.

Another method of accessing network segments is a bastion host. A bastion host is a server or computer whose
purpose is to provide access to a private network from an external network. It is configured to withstand attacks
while allowing users to access subnets through just one application, for example, a secured HTTPS
connection. The bastion host provides secure SSH and RDP connections to the internal network segment.

Network segmentation provides the following benefits:


l Network management configuration is made easier. For example, an IT manager can configure the same
parameters for a small subnetwork.
l Network broadcasts are reduced, because broadcasts are contained within their corresponding
subnets.
l Network congestion is minimized. Traffic flow in one segment does not affect the traffic flow in the rest of the
network.
l Attacks are limited to a specific segment. Much like network congestion, an attack in one segment is isolated from
the rest of the network.
l Vulnerable devices are given greater protection. If a vulnerable device is segmented in the internal network, it can
be better protected.
l Compliance affects a smaller scope, limiting the corresponding costs.
You have completed this lesson.


Firewall Lesson

Welcome to the Firewall lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

As networks began to grow, interconnect with one another, and eventually connect to the internet, it became
important to control the flow of network traffic. Firewalls became a means of control that had to evolve and change
alongside networks. They are classified into generations: the first-generation packet filter firewall, also known as a
stateless firewall; the second-generation stateful firewall; the third-generation firewall; and the next-generation
firewall (NGFW).

The first generation of firewall is a packet filter firewall, also known as a stateless firewall. It examines the routing
and transport layer protocols information such as source and destination network addresses, protocols, and port
numbers. Firewall policies use these attributes to define which packets are allowed through. The rules are ordered
in a list, and matching is performed in order from top to bottom. The last firewall policy can be implicit,
denying the packet by default, or explicit, performing the configured action of either allowing or denying the
packet.

A stateless firewall allows a packet to pass if the network addresses, protocol, and port number match those of its
firewall policy. If it does not, the packet is either silently dropped or blocked.
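
The first-match behavior described above can be sketched as follows. The rules, addresses, and ports are hypothetical, and a real packet filter matches on more attributes than this:

```python
import ipaddress

# Hypothetical stateless packet filter: rules are evaluated top to bottom,
# the first match wins, and an implicit deny applies if nothing matches.
RULES = [
    {"src": "10.0.1.0/24", "dst": "any", "proto": "tcp", "port": 443, "action": "allow"},
    {"src": "10.0.1.0/24", "dst": "any", "proto": "udp", "port": 53,  "action": "allow"},
    {"src": "any",         "dst": "any", "proto": "tcp", "port": 23,  "action": "deny"},
]

def matches(rule, pkt):
    """Check a packet against one rule's protocol, port, and address fields."""
    if rule["proto"] != pkt["proto"] or rule["port"] != pkt["port"]:
        return False
    for field in ("src", "dst"):
        if rule[field] != "any" and \
           ipaddress.ip_address(pkt[field]) not in ipaddress.ip_network(rule[field]):
            return False
    return True

def decide(pkt):
    for rule in RULES:          # top-to-bottom, first match wins
        if matches(rule, pkt):
            return rule["action"]
    return "deny"               # implicit deny by default

print(decide({"src": "10.0.1.7", "dst": "8.8.8.8", "proto": "udp", "port": 53}))  # allow
print(decide({"src": "10.0.2.9", "dst": "8.8.8.8", "proto": "tcp", "port": 80}))  # deny
```

The second packet is dropped not by any explicit rule but by the implicit deny at the end of the list.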

Click the buttons for more information.

A drawback of a stateless firewall is that it requires additional configuration to offer a suitable level of protection.
example, it requires an additional firewall policy for return traffic in a session. It also fails to appropriately
manage protocols that open random ports and use multiple connections, like FTP, with its control
and data connections.

Stateless firewalls use a "one-size-fits-all" approach to decide whether to allow traffic to pass. Because of this
open approach, bad actors can potentially bypass firewall rules and inject rogue packets through acceptable
protocols and ports, or exploit bugs in computer networking software.

The second generation of firewall, known as a stateful firewall, offsets the limitations of the stateless firewall by
developing additional criteria for blocking or allowing traffic.

A stateful firewall is designed to observe network connections over time by tracking the 5-tuple and the
connection state in its session table. It watches as new network connections are made, and continuously
examines the traffic going back and forth between the endpoints. If a connection behaves improperly or if the
return traffic does not match the corresponding incoming traffic, the firewall blocks that connection. Any packet
that does not belong to a known conversation or does not match an allowed firewall policy is dropped.
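
A minimal sketch of this session-table check, with hypothetical addresses, might look like the following. Real stateful firewalls also track protocol state, sequence numbers, and timeouts:

```python
# Hypothetical stateful inspection sketch: an allowed outbound connection
# creates a 5-tuple entry, and return traffic passes only if it mirrors a
# known session. Addresses and ports are invented.
sessions = set()

def outbound_allowed(src, sport, dst, dport, proto):
    """Record an allowed outbound connection in the session table."""
    sessions.add((src, sport, dst, dport, proto))

def inbound_permitted(src, sport, dst, dport, proto):
    """Return traffic passes only if it reverses a known 5-tuple."""
    return (dst, dport, src, sport, proto) in sessions

outbound_allowed("10.0.1.7", 51000, "93.184.216.34", 443, "tcp")
print(inbound_permitted("93.184.216.34", 443, "10.0.1.7", 51000, "tcp"))  # True
print(inbound_permitted("203.0.113.9", 443, "10.0.1.7", 51000, "tcp"))    # False
```

Unlike a stateless filter, no separate rule for return traffic is needed; the session table supplies it.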

Click the button for more information.

While stateful firewalls are an improvement, they still cannot block rogue packets if they are using an acceptable
protocol, such as HTTP. The explosion of the World Wide Web promoted HTTP as one of the most frequently
used network protocols. The problem is that HTTP is used in many ways, such as in static text content, e-
commerce, file hosting, and many other types of web applications. Because they all use the same port number,
the firewall is not able to distinguish between them.


Network administrators need to distinguish between approved and malicious applications to determine which
ones to pass or block. Firewalls must look deeper into the data payloads to determine how protocols such as
HTTP are used.

The third generation of firewall looks deeper into the data payloads. While still stateful, these firewalls understand
the application layer protocols and control different uses of the same basic protocol. This is known as application
layer filtering. Firewalls that implement application layer filtering can understand protocols such as HTTP, FTP,
and DNS.

With application layer filtering, a firewall can distinguish between HTTP used for browser traffic, a blog, a file
sharing site, e-commerce, social media, voice-over-IP, and email. These third-generation, or unified threat
management (UTM), firewalls also combine additional protections like antivirus, antispam, an intrusion prevention
system (IPS), and a virtual private network (VPN).

Click the underlined terms for more information.

Today, the prevalence of the internet has changed the way of working, playing, entertaining, and doing
commerce. Businesses have evolved to take advantage of cheaper, multi-cloud services, and the convenience of
mobile and IoT devices has dramatically expanded network edges, thereby increasing the attack surface.

Just as the internet has evolved, so have threat actors. They continue to change in terms of attack methods and
level of sophistication. Attacks can now come from trusted users, devices, and applications that spread malware,
both unknowingly and with malicious intent.

A firewall must prevent evolving cyberattacks at every edge of the network while delivering security, reliability, and
network performance. Next-generation firewalls, like FortiGate, provide these advanced security capabilities.

A next-generation firewall operates like airport security, with both having multiple security checkpoints. Just as a
security agent looks at your boarding pass as a first line of defense, a next-generation firewall looks at packets
and makes rule-based decisions whether to allow or drop the traffic.

Next, your travel bags are checked by security to see if you are carrying any banned or malicious items. This is
similar to the way a next-generation firewall performs deep packet inspection (DPI).

Click the underlined term for more information.

If suspicious items are found in your travel bag, an airport security agent sets the bag aside for enhanced
screening. In a similar vein, the next-generation firewall sends malicious content to a sandbox for further analysis.

Click the underlined term for more information.

As networks continue to evolve and introduce new challenges, next-generation firewalls also continue to evolve.
For example, next-generation firewalls can control applications, either by classification or by who the user is.
Application-level security helps protect web-browsing clients from attacks and threats.

Next-generation firewalls also adopted various segmentation approaches that segregate users, devices, and
applications that are aligned to business needs. By segmenting networks rather than using a flat network, the
firewall helps eliminate a single point of entry, which used to make it easier for cybercriminals to enter and spread
threats across the network. Amid these challenges, firewalls are evolving from reactive to proactive devices,
using artificial intelligence to enforce security policies.

Next-generation firewalls also deliver high-performance inspection and greater network visibility, with little to no
degradation, to support and protect modern, distributed data centers located within a complex and hybrid IT
infrastructure. Hybrid data centers offer businesses greater agility, flexibility, and scale on demand, as well as an
expanded attack surface that requires an equally evolved security strategy. High-performance inspection includes
applications, compute resources, analytics, encrypted data that moves throughout the infrastructure, and data
storage across multiple private and public clouds.


You have completed the lesson.


Secure Switching and Ports Lesson

Welcome to the Secure Switching and Ports lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Switches operate at the data link layer of the OSI model. They assign packets to VLANs based on the source MAC
address of the packet frame. Switches store port information, including port capabilities, VLAN parameters, and
the MAC addresses of devices connected to the switch in a content-addressable memory, or CAM, table.
Switches then forward frames to the corresponding ports.

Without secure switching, switches can be vulnerable to attacks at the data link layer.

While the CAM table allows for faster packet forwarding and fewer collisions, by default, switches are prone to
traffic storms. A switch floods a frame out of all ports when the frame is a broadcast or multicast frame, or a
unicast frame whose destination MAC address is not in the CAM table. To avoid an excessive traffic flood,
switches can control the maximum number of broadcast, unknown unicast, and multicast packets sent per second
on a port through a configurable threshold.
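
A minimal sketch of such a storm-control threshold follows; the limit used here is deliberately tiny for illustration, far lower than any real switch would use:

```python
from collections import defaultdict

# Hypothetical storm control: count flooded packets per port per one-second
# interval and stop flooding anything over a configurable threshold.
THRESHOLD_PPS = 3                 # illustrative limit only
counters = defaultdict(int)       # (port, second) -> flooded packets seen

def forward_flooded(port, second):
    """Return True if a broadcast/unknown-unicast/multicast packet may be
    flooded on this port, False once the per-second threshold is exceeded."""
    counters[(port, second)] += 1
    return counters[(port, second)] <= THRESHOLD_PPS

results = [forward_flooded(1, second=0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The first three flooded packets in the interval pass; the rest are suppressed until the next interval.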

Click the buttons for more information.

An attacker can exploit the way a flood attack works to create a MAC flooding attack.

Consider the example of a switch with one VLAN, four ports, and one entry in its CAM table related to the device
C, with its MAC address C connected to port 3. Device C is compromised and starts a MAC spoofing attack,
sending frames sourced from fake MAC addresses, shown in this scenario as E, F, and G. The switch receives
these frames on port 3 and updates its CAM table with the corresponding fake MAC addresses.

Once the switch CAM table is full, it is unable to learn additional MAC addresses and port mappings. Therefore,
when device A sends a frame on port 1, its port mapping is not added to the CAM table, because it is full of fake
MAC addresses. The subsequent frame from device D to MAC address A is then not forwarded to port 1, but
flooded to all the ports. The result is similar to a denial of service (DoS) attack, with the legitimate traffic disrupted
by this flood.

A second effect is that the compromised device C also receives sensitive information from all other devices, in this
case from device D.

Click the buttons for more information.

You can contain a MAC flooding attack by limiting the number of entries per port or VLAN. In the previous
example, you could avoid attacks by limiting the number of entries per port to one. Therefore, even if device C is
compromised, the subsequent frames with spoofed MAC addresses do not create additional entries in the CAM
table and the frames are dropped.

When device A sends a frame, its MAC address A is added to the CAM table, allowing device D to get its frame
directly forwarded to device A without flooding all the ports.
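
The per-port learning limit described above can be sketched as follows. The limit of one entry per port mirrors the example; everything else is hypothetical:

```python
# Hypothetical CAM-table learning with a per-port entry limit, which contains
# MAC flooding: spoofed addresses beyond the limit are simply not learned.
MAX_ENTRIES_PER_PORT = 1
cam = {}  # MAC address -> port

def learn(mac, port):
    """Learn a source MAC on a port unless the per-port limit is reached."""
    if mac in cam:
        return True
    if sum(1 for p in cam.values() if p == port) >= MAX_ENTRIES_PER_PORT:
        return False  # spoofed frame's source MAC is not learned
    cam[mac] = port
    return True

learn("C", 3)                        # legitimate device C on port 3
print(learn("E", 3), learn("F", 3))  # False False: spoofed MACs rejected
learn("A", 1)                        # device A still gets its entry
print(cam)                           # {'C': 3, 'A': 1}
```

Because the spoofed entries are rejected, device A's mapping is learned normally and its frames are not flooded.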

In this example, the switch dynamically adds the MAC address entries for MAC addresses A and D. These
dynamic MAC addresses are deleted automatically from the table after their expiry time.

To enable stricter policies, you can enter MAC addresses statically. Therefore, with the previous limit of one entry
per port, the switch can completely block the MAC spoofing and flooding attacks on a port.


Since a static table may impose some constraints, you can also configure switches with sticky MAC addresses. In
this case, the MAC address is learned on the port and its entry is not deleted unless the switch is rebooted.
Creating a sticky MAC address removes the constraint of entering the static MAC address in the table manually.

802.1X authentication is another form of security that you can add to ensure secure switching. In this situation, the
device must be authenticated before it can access the switch.

Click the buttons for more information.

The best practices to secure switching and ports are:


l Protect physical switches locally, for example in a closet or restricted room. Switches are vulnerable LAN elements
and you must place them in a secure location. You must also restrict management access with authentication,
authorization, and a secure protocol like SSH or HTTPS.
l Separate switches to enforce better physical segmentation. In particular, you should separate switches between
internal and external networks, so you can prevent them from being compromised at the same time.
l To limit a MAC flooding attack, restrict the number of MAC addresses allowed on a specific port.
l Configure sticky or static MAC addresses. This could reduce MAC spoofing. It also provides additional security by
keeping a list of static MAC addresses up-to-date in a dynamic environment.
l Use access control lists of verified MAC addresses to filter unverified addresses and prevent MAC flooding attacks.
l Add port authentication, such as the IEEE 802.1X standard.
l Lastly, implement port mirroring to actively monitor port activity. You can also use port mirroring to obtain a trace of
the monitored traffic.
You have completed this lesson.


Security Protocols Lesson

Welcome to the Security Protocols lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

A protocol provides a set of rules and methods used to establish a communication between different devices.

Consider two people having a private conversation in a different language. While the speakers may think that their
conversation is private because they are using an uncommon language, a third person who is listening could use
a translator to discover what the other two are saying. To avoid having their conversation leaked, the speakers
may decide to speak in a code language that is only known to the two of them, thereby encrypting their data.

Similarly, a security protocol protects data delivered through an encrypted communication between two
authenticated and legitimate participants.

In a network environment, protocols were initially defined in clear text. For example, emails used
Multipurpose Internet Mail Extensions (MIME), web browsing used HTTP, remote control used Telnet, and finally,
remote access used L2TP. However, as security became a mandatory requirement, former protocols evolved. In
the previous examples, security was added to emails through Secure MIME (S/MIME), to web browsing through
HTTPS, to remote control through SSH, and to remote access through IPsec.

Click the underlined term for more information.

How are protocols secured?

For S/MIME, the email is first signed with the sender's digital signature before being encrypted with the
receiver's public key. After the email is sent, the receiver decrypts the email with their private key, and the digital
signature confirms the identity of the sender. Therefore, security is added through the digital signature, which
provides authentication, non-repudiation, and data integrity. End-to-end encryption provides confidentiality.

In the case of HTTPS, the security is provided by Transport Layer Security (TLS). As the client initiates the
connection, the server answers by providing its public key. The client generates a random session key and
encrypts it with the server's public key before sending it to the server. The server decrypts the session key with its
private key. The web traffic is then encrypted with this symmetric key, which uses fewer resources than
asymmetric encryption.
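
This hybrid exchange can be illustrated with a toy sketch. It uses the classic textbook RSA numbers and a stand-in keystream cipher; it is not real TLS or production cryptography, only a demonstration of the key-wrapping idea:

```python
import random

# Toy TLS-style hybrid key exchange. RSA values are the textbook example
# (p=61, q=53): public key (n, e), private exponent d. Illustration only.
n, e, d = 3233, 17, 2753

session_key = 42                    # random symmetric key chosen by the client
wrapped = pow(session_key, e, n)    # client encrypts it with the server's public key
recovered = pow(wrapped, d, n)      # server decrypts it with its private key
print(recovered == session_key)     # True

def xor_stream(data, key):
    """Toy symmetric cipher: XOR with a keystream seeded by the session key."""
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

ciphertext = xor_stream(b"GET /index.html", recovered)
print(xor_stream(ciphertext, session_key))  # b'GET /index.html'
```

Only the small session key crosses the wire under asymmetric encryption; the bulk traffic then uses the cheaper symmetric key, which is the point the lesson makes about resource use.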

Because it does not include authentication between the client and the server, this communication is subject to a
man-in-the-middle (MITM) attack, where an intermediate device intercepts the initial HTTPS connection over
the default port 443 and creates two sessions.

Click the underlined terms for more information.

Such authentication is included in the SSH protocol, which performs verification of the end-to-end
connection. The client's public key is already stored on the SSH server. Therefore, when the SSH server receives a
connection request from the client, it sends back a random message encrypted with the SSH client's public key.
The client receives this encrypted message and decrypts it with its corresponding private key. The client then
sends the decrypted message back to the SSH server. If the message corresponds to the initial message, the
client is authenticated.
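
The challenge-response described above can be sketched with the same toy textbook RSA numbers. This is not a real SSH implementation, just the authentication flow in miniature:

```python
# Toy SSH-style public-key challenge-response. The server holds the client's
# public key (n, e); only the client holds the private exponent d.
n, e, d = 3233, 17, 2753

# Server side: encrypt a random challenge with the client's public key.
challenge = 123
encrypted_challenge = pow(challenge, e, n)

# Client side: decrypt with the private key and send the result back.
response = pow(encrypted_challenge, d, n)

# Server side: the client is authenticated if the response matches.
print(response == challenge)  # True
```

Only the holder of the private key can recover the challenge, so a correct response proves the client's identity without ever transmitting the key itself.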

Further communication encryption is implemented through the Diffie-Hellman (DH) key exchange. This
communication also includes integrity with a message authentication code (MAC) algorithm, available in SSH
protocol version 2. The SSH protocol is commonly used for file transfer, such as Secure Copy Protocol (SCP), tunneling,
and network management over the default port 22.
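
The DH exchange mentioned above can be shown with the classic small parameters p=23, g=5. These values are far too small for real use, but they demonstrate the mechanism SSH uses (with much larger groups) to derive a shared session key:

```python
# Toy Diffie-Hellman key exchange with textbook-sized parameters.
p, g = 23, 5

a = 6                          # client's private value (never sent)
b = 15                         # server's private value (never sent)
A = pow(g, a, p)               # client sends A = g^a mod p
B = pow(g, b, p)               # server sends B = g^b mod p

client_secret = pow(B, a, p)   # client computes B^a mod p
server_secret = pow(A, b, p)   # server computes A^b mod p
print(client_secret, server_secret)  # 2 2 -- both sides share the same secret
```

Each side combines its own private value with the other's public value, so an eavesdropper seeing only A and B cannot easily derive the shared secret.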

Click the buttons for more information.

You have reached the end of the lesson.


Sandbox Lesson

Hello! In this lesson, we will explain what a Sandbox is, why it was invented, and how it has evolved.

A sandbox, within the computer security context, is a system that confines the actions of an application, such as
opening a Word document or a browser, to an isolated virtual environment. Within this safe virtual environment,
the sandbox studies the various application interactions to uncover any malicious intent. So if something
unexpected or dangerous happens, it affects only the sandbox, and not the other computers and devices on the
network.

Sandbox technology is typically managed by an organization’s information security team, but is used by network,
applications, and desktop operations teams to bolster security in their respective domains.

Threat actors exploit vulnerabilities in legitimate applications to compromise the device, and from there move
through the network to infect other devices. Exploiting an unknown vulnerability is known as a zero-day attack.
Before sandboxing, there was no effective means to stop a zero-day attack. Firewalls and antivirus software could
stop known threats, but they were helpless against zero-day attacks.

A sandbox provided an isolated virtual environment that mimicked various computer devices, operating systems,
and applications. It allowed potential threats to play out within the safety of these virtual systems. If the sandbox
concluded that the suspicious file or activity was benign, no further action was needed. However, if it detected
malicious intent, the file could be quarantined or the activity could be stopped on the real device.

Many of the early sandboxes failed to tightly integrate with other security devices within the network. While a
sandbox might identify and defeat a zero-day attack, this vital threat intelligence was not always shared with the
other network security devices in a timely fashion. However, the failure to communicate and coordinate had less to
do with a defect of sandbox technology than a security architecture that was built upon point solutions. Point
solutions, which could not be fully integrated into other vendors' products, meant that the security operations
center (SOC) required a management console for each product. As a result, aggregating threat intelligence data
was difficult and time consuming.

The Second-Generation Sandbox came about to correct the siloed, piecemeal approach. Sandboxes were
equipped with more integration tools or partnered with other product vendors to improve integration. As a result,
they could share threat intelligence with other security devices, such as firewalls, email gateways, endpoints, and
other sandbox devices more effectively. The new approach to network security allowed analysts to correlate
threat intelligence centrally and respond to threats from a single pane-of-glass. Moreover, an integrated network
security environment could share information to a threat intelligence service in the cloud, which could be pushed
to other networks.

Today, threat actors are leveraging automation and Artificial Intelligence (AI) techniques to accelerate the creation
of new malware variants and exploits, and to discover security vulnerabilities more quickly, with the goal of
evading and overwhelming current defenses. To keep pace and accelerate detection of these new threats, it is
imperative that AI-learning is added to the sandbox threat analysis process.

AI-driven attacks necessitated a Third-Generation Sandbox based on a threat analysis standard. Also, it needed
to cover the expanding attack surface of businesses due to the digital transformation. The digital transformation
refers to the movement of business data, applications, and infrastructure to the cloud.

The challenge of standards-based threat analysis arose due to the struggle to interpret and understand cyber
threat methods, which hampered effective responses. MITRE, a non-profit organization, proposed the ATT&CK
framework that describes standard malware characteristics categorically. Many organizations embraced MITRE
ATT&CK as a standard for threat analysis. So, it became necessary for security products to adopt the MITRE
ATT&CK framework. It provided security devices with a common language in which to identify, describe, and
categorize threats, which could be shared with and readily understood by other vendor devices.

Lastly, as more businesses adopt digital transformation, there are new organizations or parts of organizations
exposed to attacks. One such example is the operational technology (OT) industry, which includes utilities,
manufacturing, oil and gas, and many others. Traditionally, OT kept their operational networks internal and
separate from their corporate business networks, but increasingly OT networks access corporate and third-party
vendor networks. Another example is organizations that offer applications, platforms, and infrastructure as
services in the public cloud—AWS and Azure to name a few. They host applications for other businesses, which
are accessed through the internet. These new areas require similar protection against zero-day threats to
minimize business disruption and security risks. As a result, sandbox technology evolved to provide wider
coverage to these areas and others as they develop.

The Fortinet sandbox product is named FortiSandbox and it embodies all of the latest technologies discussed
here. It integrates with other security products in a collective defence called the Fortinet Security Fabric. A critical
piece of the Security Fabric is FortiGuard Labs, which brings AI learning and other threat intelligence services to
sandbox technology.

Thank you for your time, and please remember to take the quiz that follows this lesson.


Common Network Threats and Prevention Lesson

Welcome to the Common Network Threats and Prevention lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Network threats are unlawful or malicious activities that intend to take advantage of network vulnerabilities. The
goal is to breach, harm, or sabotage a company's valuable information or data. Malicious actors also attack
networks to gain unauthorized access and manipulate them according to their intentions. The motivations can be
political, financial, or retaliatory.

Common network threats include spoofing, hijacking, replay attacks, transitive attacks, and denial of service
(DoS).

Click the buttons to learn more about the common network threats.

Spoofing

Spoofing is a threat during which the attacker impersonates an authorized device or user to steal data, spread
malware, or bypass access control systems. The attacker is capable of spoofing any unique user or system
identifier, like a MAC or IP address.

Hijacking

During a hijacking threat, an attacker intercepts a connection to discover, and potentially modify, the initial two
parties' communications. Man-in-the-middle is an example of a hijacking attack. To prevent it, end-to-end
encryption with multifactor authentication provides confidentiality and validates the initial two parties.

Replay Attack

While an attacker performs a hijacking attack in real time, a replay attack is delayed. The attacker intercepts a
communication and maliciously repeats or resends the data later. For example, by fraudulently replaying an
authentication, an attacker could access confidential applications. A one-time session token can prevent replay attacks.

Transitive Attack

A transitive attack uses the trust chain between devices. If device A trusts device B, and device B trusts device C,
device A could then trust device C. If one device is corrupted, the other could also be attacked. You can prevent
this transitive attack by implementing the zero-trust concept.

Denial of Service (DoS)

During a denial-of-service (DoS) threat, the attacker attempts to exhaust the network resources and block
legitimate resource requests. This is like a traffic jam on a highway that blocks emergency vehicles from getting
through.

An example of a DoS attack is when an attacker attempts to exhaust the network resources by using botnets. If
the attack is successful, resources become unavailable and legitimate users are denied access.

Click the botnet icon for more information.

Examples of DoS attacks include flood attacks (like smurf and fraggle attacks, and SYN flood and Christmas tree
attacks), ping of death, teardrop, permanent DoS, and fork bomb attacks.

Click the tabs to learn more about the different DoS attacks.

Flood Attacks


The goal of flood attacks is to overwhelm a device. Usually, a botnet creates the attack with connectionless traffic
like UDP or ICMP.

Click the different sub tabs to learn more about the different flood attacks.

Smurf Attack

In a smurf attack, the attacker spoofs a victim's source IP to broadcast an Internet Control Message Protocol
(ICMP) packet to a network. After the attacker sends the packets, most of the devices in the network reply to the
spoofed source IP address, flooding the victim's device with traffic and reducing its resources for legitimate
requests.

Fraggle Attack

Similar to a smurf attack, a fraggle attack spoofs a victim and sends a spoofed UDP packet to the router's
broadcast address. The victim's device is then flooded with traffic, reducing its resources for legitimate requests.

SYN flood

A SYN flood is part of protocol attacks specifically targeting servers, proxies, or firewalls. During this attack, the
attacker creates an incomplete TCP three-way handshake, allowing SYN floods to exhaust the resources of the
device, which is waiting for half-opened connections.

Christmas tree attack

During a Christmas tree attack, the attacker sends out multiple TCP packets with the FIN, URG, and PSH flags
set. These Christmas tree packets require more processing by routers and hosts, reducing their resources and
potentially making them crash or reboot. A protocol analyzer identifies and analyzes these packets.

Ping of Death

While a ping is typically less than 100 bytes, a ping of death creates packets greater than 65,535 bytes, the
maximum size of an IP packet. During a ping of death, the origin device fragments the packets before the
transmission. When the target device reassembles the malformed packets, a buffer overflow or a crash can occur.

Teardrop Attack

A teardrop attack involves sending fragmented packets, modified with overlapping and oversized payloads. When
the target device tries to reassemble the fragments, the packets overlap one another, crashing the target network
device.

PDoS

In a permanent denial-of-service (PDoS) attack, the attacker exploits vulnerabilities of the device to replace its
software with a corrupted firmware image, rendering the device useless. A reboot of the device is not enough;
instead, recovery requires reinstalling the correct firmware or replacing the hardware entirely.

Fork Bomb

A fork is a system call used in Unix and Linux systems that replicates an existing process. A fork bomb
continues to replicate itself through its child processes, depleting available resources and slowing down or
crashing the system. Because a fork loop consumes CPU and memory, a kernel panic can occur on the device,
locking the system completely.

To prevent DoS threats, network devices, like firewalls, can implement anomaly checks of the packets. This
mechanism can block ping of death, Christmas tree and teardrop attacks. You can prevent fork bombs by limiting
the number of processes a user can own. To prevent smurf and fraggle attacks, routers no longer forward packets


directed at their broadcast addresses. Also, to limit floods in general, firewall mechanisms, like DoS sensors or
network behavior analyzers that use artificial intelligence, keep the traffic under expected amounts.
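The per-user process limit that stops a fork bomb can be sketched in Python. This is a hypothetical illustration: the limit value of 1024 is arbitrary, and the `resource` module works only on Unix-like systems.

```python
import resource

# Sketch: cap the number of processes this user may own (RLIMIT_NPROC),
# the same per-user limit described above as fork bomb prevention.
# The value 1024 is illustrative, not a recommendation.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
new_soft = 1024 if hard == resource.RLIM_INFINITY else min(1024, hard)
resource.setrlimit(resource.RLIMIT_NPROC, (new_soft, hard))

# A fork loop in this process (and its children) now fails once the user
# owns new_soft processes, instead of taking the whole system down.
print(resource.getrlimit(resource.RLIMIT_NPROC)[0])
```

Because the limit applies to the process and everything it spawns, setting it in a login shell protects the entire session.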

In addition to the previous prevention methods, two important best practices are to close unnecessary ports and
fix known vulnerabilities. Neglecting these steps can provide points of entry for network threats, like leaving doors
and windows open at home. To find open ports, a port scanner, like Nmap, can provide a comprehensive list. If the
unnecessary ports cannot be closed directly on the server, you can create a firewall policy that blocks access to
these specific ports on the front firewall, such as FortiGate.
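As a hedged illustration of what a port scanner checks, the following Python sketch performs a single TCP connect probe per port. A real scanner such as Nmap uses many more probe types and techniques; this shows only the basic idea.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """One TCP connect probe: the basic check behind a port scan."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a few common service ports on the local machine.
for port in (22, 80, 443):
    print(port, "open" if is_port_open("127.0.0.1", port) else "closed")
```

Any port that reports open and is not needed is a candidate for closing on the server or blocking with a firewall policy.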

You can identify vulnerabilities with a penetration test. The best practice is to update the network devices regularly
with the latest patches.

Like network segmentation or the implementation of zero-trust principles, these practices are a part of the
hardening measures.

Click the underlined term for more information.

You have completed the lesson.

Authentication and Access Control Module

Overview Lesson

Welcome to the Authentication and Access Control module.

Click Next to get started.

Cybersecurity requires the ability to limit access based on a person or device's identity.

Secure authentication and being confident that you can reliably and correctly identify a user and device is the first
step in enforcing robust access control. Authentication can be split into two parts. The first is identification. The
object reveals its identity. When a person logs in to the network, they are likely prompted for a username. The
system trusts that this is their identity even without proof. The second part of authentication is verification. The
object must prove that they are who they claim to be. For example, after a person enters their username, the
system might prompt them to enter a password.

Authorization based on identity determines what resources the user can read, modify, or delete.

Correctly identifying users and computers allows security systems to implement access control and authorization
methods. There are many ways to authenticate users and implement access control methods, which you will learn
about in this module.

After completing the lessons in this module, you will be able to:
l List different types of authentication methods.
l Explain how single sign-on can simplify authentication and authorization.
l Describe the different types of access control methods.
l Explain how to configure users and groups to allow secure access.
l Describe the best way to use access control policies to secure resources.
l Use a network access control device to secure different types of assets.
Proceed to the first lesson to get started.


Authentication Methods Lesson

Welcome to the Authentication Methods lesson.

After completing this lesson, you will be able to achieve these objectives.

An entity, such as a person, application, or device, can prove its identity to computer systems in a number of ways.

The most common authentication factors are:


l Inherence
l Possession
l Knowledge, and
l Behavior
Other factors can affect the authentication process, but do not verify the identity of an entity.

These factors are contextual, meaning that the authentication process changes depending on the status of the
entity that is trying to authenticate.

If the authenticating entity attempts to log in from an unusual location, then the risk is greater, and the
authenticator may demand a more rigorous authentication process or deny the login attempt.

Another example of a context-based factor is the entity's behavior.

If the entity's behavior is unusual, then the risk is higher and the authenticator may request further proof of identity.

Knowledge-based authentication is the most common method. You authenticate by divulging something that only
you and the authenticator know.

An example of this method is questions and answers, or Q and A. During registration with an authenticator, you
provided answers to several questions, such as: what is your favorite movie or who is your favorite author. During
authentication, the authenticator poses one or more of these questions, and you must provide the correct
answers.

Other examples are a password or personal identification number (PIN). If you have ever used a bank machine,
you most likely had to provide your bank card and a PIN to authenticate.

Possession-based authentication is authentication using something you have that no one else has.

Logic dictates that if you can prove you possess an item that only you should have, then you must be who you say
you are. The same logic applies to digital signatures. If you sign an object with a private key that only you have
access to, then it must be you who is attempting to authenticate.

Another example is machine or device-based authentication, which is usually coupled with another authentication
method, such as a password or PIN. By registering your device with an authenticator, and because only you use
this device, the authenticator has a higher level of trust in your identity.

Other forms of possession-based authentication include hardware or software tokens and SMS messaging.
These methods use a one-time password (OTP). A hardware token is a dedicated device that generates an OTP
using a secret key and algorithm. Because the authenticator shares the same secret key and uses the same
algorithm as the token, it generates the same one-time code. The codes change either each time the user presses
the button on the token or after a predefined time interval. These tokens are known as event-based and time-
based tokens respectively.


A software token works in a similar way, but it is software installed on a device, such as a mobile phone. Two
leading algorithms used by OTP are HMAC-based one-time password (HOTP) and time-based one-time
password (TOTP). The TOTP method combines the current time with a secret key and uses this combination as
the input value for a hash function. The software token uses the hash output value to generate the OTP.
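The TOTP computation described above can be sketched in a few lines of Python. This is a minimal illustration following RFC 6238 (TOTP) and RFC 4226 (HOTP) with HMAC-SHA1; real tokens add clock-skew handling and secure secret provisioning.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    counter = int(t // step)                       # current 30-second window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Token and authenticator derive the same code from the shared secret.
# For a live token, pass time.time() instead of a fixed timestamp.
print(totp(b"12345678901234567890", 59))           # 287082 (RFC 6238 test secret)
```

Because both sides share the secret and the clock, the codes match without any code ever crossing the network in advance.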

Last, the authenticator can send a one-time code using SMS messaging to a person's mobile device. You might
say that this is something you get rather than something you have. These examples show that a person can
authenticate using a supported device that is known to the authenticator.

Inherence-based authentication uses a unique physical trait that the person authenticating possesses.

A person has many physical traits that are unique, but some common ones used for authentication are
fingerprints, retinas, irises, facial patterns, and hand measurements.

The two most common types of biometric data found in ePassports and used to authenticate passport holders are
fingerprints and retinas. DNA can also be a definitive identification method, but inherence-based authentication
does not typically use it to authenticate a person on a computer system.

All of these types are examples of static or passive biometric authentication.

Unlike inherence-based authentication, behavior-based authentication uses active biometric data to identify a
person. So, while the data is still an inherent trait of a person, you can describe it as something a person does.

Common examples or forms of this type of authentication are:


l Voice identification
l Keystroke dynamics, and
l Mouse use characteristics
The last two examples identify a person based on unique patterns they exhibit when they interact with a device,
such as a tablet, smartphone, or computer.

Less common methods for authenticating on a computer are signature analysis and cognitive biometrics.

Often when authenticating on a computer, the system prompts you for a single form to prove your identity, such as
a username and password. This is called single-factor authentication.

However, increasingly authenticators are demanding two or more forms during authentication. This is known as
multifactor authentication (MFA).

To return to the bank card and PIN scenario, if you have ever used a bank machine, then you have used MFA. The
bank card is something you have and the PIN is something you know.

MFA gives the authenticator greater confidence in a user's identity. While it is possible and even commonplace for
a password to be compromised, it is far less likely that bad actors have compromised your password, stolen your
device, and obtained a copy of your biometric data.

When you couple MFA with other network security techniques, such as device identification, security hygiene
enforcement, and access control, you have a powerful instrument to defend your network.

You have completed the lesson. You can now achieve these objectives.


Single-Sign On Lesson

Welcome to the Single Sign-On (SSO) lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Implementing a network authentication schema is like working a balance sheet. You need to strike a balance
between the two opposite sides of the ledger—security and productivity. Elevating security at the cost of
productivity can frustrate end users, entice them to take shortcuts, and reduce their effectiveness. Favoring
productivity at the expense of security could make your network vulnerable to bad actors. Single sign-on, or SSO,
strikes the right balance between these two sides of the ledger.

True SSO is an authentication process whereby a user authenticates once and can access multiple resources
across many systems and possibly domains. In other words, instead of having to authenticate on each system
before accessing the resources on those systems, one authentication suffices. This is achieved when the system,
from which the user has already authenticated, passes an authentication token seamlessly to other applications or
sites. An authentication token could be a cookie, but only if the sites share a common parent domain. As you will
learn, there are other types of authentication tokens that allow you to traverse domains.

Similar to SSO is same sign-on. In same sign-on, an entity uses their credentials housed in a directory server to
access applications. Typically, lightweight directory access protocol (LDAP)-compliant directory servers are used
for these purposes. Although the entity uses the same credentials stored in the LDAP directory, they still would
have to authenticate to each application.

Implementing SSO is common in enterprises where a user accesses multiple resources across the network. But
SSO is also prevalent in the cloud traversing multiple domains and separate autonomous networks. If you have
ever registered on a site, such as Spotify, you may have been given the option of leveraging existing user
credentials on a different site. In the example of Spotify, the choices are Facebook, Apple, and Google. Thus, if
you already have credentials on any one of these sites, you could use those to authenticate on Spotify. This
arrangement is only possible because of trust existing between Spotify and the other three businesses.

In this example, Antonio has credentials on the Google site, which provides the identity of the user for
authentication purposes. Google is acting as an identity provider (IdP). Spotify is providing a service to Antonio
and acting as a service provider (SP). This type of arrangement between IdPs and SPs is common on the internet today.

The popularity of SSO arose because it removes the necessity of having to remember credentials for each of the
dozens of sites you visit. Not only is it convenient for the end user, but it reduces credential redundancy for
organizations and the administrative overhead of protecting and maintaining databases of credentials. Particularly
within organizations, SSO makes compliance and reporting easier for MIS through a centralized database.

A principal disadvantage of SSO is that if credentials become compromised, then the bad actor potentially has
access to all of the user's resources. However, single sign-on is not restricted to single-factor authentication. You
can implement MFA to strengthen authentication while leveraging SSO to enhance productivity and ease of use.

How does SSO work?

An IdP, SP, and a user are required for SSO.

First, the user connects to the SP, which could be a website or application. Second, the SP redirects the user to
the IdP login page. Third, the user provides their credentials and authenticates on the IdP. Fourth, the IdP
generates an authentication token. The token is a package of information providing facts about the user's


authentication plus other optional content. The user is sent back to the original site, and the embedded token acts
as proof that they have been authenticated.
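The token step can be illustrated with a toy signed token in Python. This is an assumption-laden sketch, not a real SSO protocol: actual deployments use SAML assertions or OIDC tokens, and the shared key, claim names, and format here are invented for illustration.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

IDP_SP_KEY = b"key-shared-by-idp-and-sp"   # hypothetical shared secret

def issue_token(user: str) -> str:
    """IdP side: sign a claim stating who authenticated."""
    claims = base64.urlsafe_b64encode(json.dumps({"sub": user}).encode()).decode()
    sig = hmac.new(IDP_SP_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return claims + "." + sig

def verify_token(token: str) -> Optional[str]:
    """SP side: accept the user only if the IdP's signature checks out."""
    claims, sig = token.rsplit(".", 1)
    expected = hmac.new(IDP_SP_KEY, claims.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(base64.urlsafe_b64decode(claims))["sub"]
    return None                             # tampered or forged token

token = issue_token("antonio")
print(verify_token(token))                  # antonio
tampered = token[:-1] + ("0" if token[-1] != "0" else "1")
print(verify_token(tampered))               # None
```

The point the sketch makes is that the SP never sees the user's credentials; it only verifies the IdP's signature over the claims.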

SSO is a concept that requires a protocol for it to be implemented. There are numerous SSO protocols, such as
OAuth and Security Assertion Markup Language (SAML). Each of the protocols handles the details of the SSO
process slightly differently while remaining true to the SSO concept.

SAML version 2 is an open standard for exchanging authentication and authorization data between parties, in
particular between the IdP and SP. SAML is based on the Extensible Markup Language (XML).

SAML inserts statements within the XML-based messages. The statements are called security assertions, and
there are several types. An authentication assertion indicates how the entity authenticated and includes the date
and time of authentication. For example, the assertion might identify that Omar authenticated using his email
address and password. An attribute assertion provides additional information about the entity. For example, it
might state that Omar is a gold card member. And authorization assertions identify what the entity is entitled to do.

Many of these assertions are optional and leave room for different implementations. For example, the IdP may be
restricted to authentication only, leaving the SP to determine the entity's entitlements.

FortiAuthenticator is a Fortinet product and authentication platform that supports various authentication methods,
including SAML version 2. The following is an example of a SAML login where FortiAuthenticator acts as an IdP.

The entity, also known as the principal, connects to an SP.

The SP requests SAML assertions from the principal.

But the principal, Omar, has not authenticated, or his authentication token has expired. Omar's application replies
that it does not have any SAML assertions.

The SP redirects Omar to the IdP, which in this case is FortiAuthenticator.

Omar's application requests a SAML assertion for the SP.

The IdP prompts Omar for his credentials.

FortiAuthenticator verifies the credentials using an authentication server.

It provides an authentication assertion to the SP.

Omar's application is redirected to the SP and to the resources that were originally requested as long as Omar is
entitled to them.

You have completed the lesson. You can now achieve these objectives.


Authentication Framework, Protocols, and Tools Lesson

Welcome to the Authentication Framework, Protocols, and Tools lesson.

After completing this lesson, you will be able to achieve these objectives.

What is an authentication framework?

An authentication framework is the basic schema or plan for how entities will prove their identities within a system.

It includes the methods, forms, protocols, and other tools used to authenticate persons, devices, and applications
on a network.

This lesson will discuss some of the common means of assembling these elements to construct a workable
authentication framework.

Click each segment of the authentication framework for more information.

There are several protocols commonly used for clients authenticating on networks.

The next several slides explain these authentication protocols.

Remote authentication dial-in user service (RADIUS) is a client-server authentication, authorization, and
accounting (AAA) protocol and software that enables remote access servers to communicate with a central server
to authenticate dial-in users and authorize their access to the requested system or service.

The important word here is central, because it means that most or all of a network's remote authentication
requirements can be met with one server as opposed to having to maintain multiple credentials across numerous
servers.

This simplifies administration for both the administrator and the end user.

RADIUS can also enable the 802.1x framework, which uniquely encrypts user sessions.

Data packets are used to exchange data between computing devices in a packet-switched network.

They are formatted units of data consisting of control information and user data, also known as a payload.

The following is the exchange of packets between the RADIUS client and the RADIUS server. The packets
between the user, or supplicant, and the RADIUS client are not shown in this simplified diagram.

During RADIUS authentication, the user submits a request to access a server or network along with their
credentials.

The RADIUS client, which could be a firewall such as FortiGate, forwards the access-request packet to the
RADIUS authentication server.

The RADIUS server replies with one of three possible packets.

One possibility is that the server returns an access-reject packet because the credentials were incorrect.

The second possibility is that the server replies with an access-accept packet.

And the last possibility is that the server sends to the RADIUS client an access-challenge packet. This last option
occurs only if two-factor authentication has been configured, and it would prompt the user for their second
credentials.
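The server's choice among these reply packets can be sketched as a small decision function. This illustrates only the logic, not the RADIUS wire protocol; the packet names follow RFC 2865.

```python
def radius_reply(credentials_ok: bool, mfa_enabled: bool, second_factor_ok=None) -> str:
    """Pick the RADIUS server's reply to an Access-Request (logic sketch)."""
    if not credentials_ok:
        return "Access-Reject"                 # credentials were incorrect
    if mfa_enabled and second_factor_ok is None:
        return "Access-Challenge"              # prompt for the second factor
    if mfa_enabled and not second_factor_ok:
        return "Access-Reject"                 # second factor failed
    return "Access-Accept"

print(radius_reply(True, False))               # Access-Accept
print(radius_reply(True, True))                # Access-Challenge
print(radius_reply(False, False))              # Access-Reject
```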

As a part of an authentication framework your organization needs a place to store credentials.

If you use a RADIUS server, then you have several options.


User credentials can be stored in the RADIUS server's database. Or your organization may have user information
already populated in another server, in which case it may make more sense to leverage this existing source of
credentials.

Depending on the vendor, RADIUS servers typically support several authentication protocols and server types,
such as structured query language (SQL) and lightweight directory access protocol (LDAP).

It is not uncommon for a RADIUS server to leverage an LDAP-compliant directory, such as Microsoft Active
Directory (AD), for authentication and authorization purposes.

LDAP is an open, vendor-neutral, industry-standard application protocol for accessing directory services over an
IP network.

It is an industry-standard communication protocol for directory servers. Aside from RADIUS, LDAP-supported
authentication can be configured in other ways.

For example, in this scenario the supplicant interfaces with an authentication proxy, which could be a firewall such
as FortiGate. The proxy queries the LDAP server for the user's credentials and privileges.

Last, TACACS+ is a remote AAA protocol that allows a remote access server to communicate with an
authentication server in order to validate user access.

In this sense, TACACS+ is similar to RADIUS. However, there are differences.

TACACS+ encrypts the entire AAA packet body, while RADIUS encrypts only the password. Also, TACACS+ relies
on TCP as a network transport protocol, while RADIUS uses UDP.

Authentication methods define the manner in which authentication takes place. They could also be described as
protocols that set the rules for interaction and verification, which endpoints or systems use to communicate.

The password authentication protocol (PAP) is used to authenticate PPP sessions. The point-to-point protocol
(PPP) refers to a suite of computer communication protocols that provide a standard way to transport
multiprotocol data over point-to-point links. PPP provides services at Layer 2—the data link layer of the OSI
model—that establishes a foundation upon which network layer protocols can operate.

PAP uses a two-way handshake process to authenticate a client using these steps:

One, the client sends the username and password to the server. It does this through an authentication-request
packet. Two, the server verifies the username and password.

If the credentials are correct, then the server sends an authentication-ack response packet to the client and a PPP
session is established between the client and the server. While the information can pass through an encrypted
tunnel to enhance security, the static username and password information is subject to numerous attacks through
password guessing and snooping.

The challenge handshake authentication protocol (CHAP) is also used to authenticate PPP sessions but uses a
three-way handshake. It creates a unique challenge phrase for each authentication session by generating a
random string. The random string is combined with a hash result of the device hostnames. This ensures dynamic
authentication information for each session and avoids static information.
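As a hedged sketch of the challenge-response idea, standard CHAP (RFC 1994) computes the response as an MD5 hash over a packet identifier, the shared secret, and the random challenge; the secret itself never crosses the wire. The identifier and secret below are invented for illustration.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP: response = MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Server: generate a fresh random challenge for every session.
challenge = os.urandom(16)
identifier = 1

# Client: prove knowledge of the shared secret without sending it.
client_reply = chap_response(identifier, b"shared-secret", challenge)

# Server: recompute the hash with its copy of the secret and compare.
print(chap_response(identifier, b"shared-secret", challenge) == client_reply)
```

Because the challenge is random per session, a captured response is useless for a later replay.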

Microsoft has its own version of CHAP named MS-CHAP. The most current version of this protocol is MS-CHAP
version two.

Both PAP and CHAP protocols can be used with VPNs.

One framework for authentication is 802.1x.


802.1x is an IEEE standard for port-based network access control (PNAC). It is part of the IEEE 802.1 group of
networking protocols. It provides an authentication mechanism to devices wishing to attach to a LAN or wireless
LAN.

802.1x authentication involves three players: a supplicant, an intermediary (also known as an authenticator), and
an authentication server.

The supplicant is the client device that petitions to join the LAN. The medium can be either Ethernet or wireless.

The intermediary is a network device that provides a data link or bridge between the client and the network and
can block or allow access to the network depending on the authentication results. The intermediary device could
be an Ethernet switch or wireless access point.

The authentication server is a trusted server that validates the credentials of the supplicant and has the
wherewithal to reject or accept the credentials. The authentication server typically supports RADIUS and EAP
protocols.

A wide range of authentication factors and forms is supported, including the knowledge-based username and
password and the possession-based digital certificate.

Click on the underlined words for more information.

The sequence of 802.1x authentication is as follows:


l The supplicant makes a connection to the network.
l The intermediary requests the supplicant to identify itself.
l The supplicant provides its identity.
l The intermediary forwards the identity to the authentication server.
l The authentication server returns a challenge to the intermediary.
l The intermediary forwards the challenge to the supplicant.
l The supplicant replies to the challenge.
l The intermediary forwards the information to the authentication server.
l If the credentials are accepted, the authentication server sends an accept message to the intermediary.
l The intermediary sends the accept message to the supplicant and allows the supplicant access.
Note that authentication consists of two parts: identification and verification.
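The relay role of the intermediary in the sequence above can be sketched as follows. This is a toy simulation with invented names and a plain credential comparison, not a real EAP exchange.

```python
# Assumed credential store behind the authentication server.
USERS = {"alice": "s3cret"}

def authentication_server(identity: str, response=None) -> str:
    """Validate the supplicant; issue a challenge if there is no response yet."""
    if response is None:
        return "challenge"
    return "accept" if USERS.get(identity) == response else "reject"

def intermediary(identity: str, answer_challenge) -> str:
    """Relay messages between supplicant and server; gate network access."""
    if authentication_server(identity) == "challenge":
        verdict = authentication_server(identity, answer_challenge())
    else:
        verdict = "reject"
    # "accept" opens the port for the supplicant; anything else blocks it.
    return verdict

print(intermediary("alice", lambda: "s3cret"))   # accept
print(intermediary("alice", lambda: "wrong"))    # reject
```

Note that the intermediary never validates credentials itself; it only forwards messages and enforces the server's verdict on the port.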

The extensible authentication protocol (EAP) is an authentication framework, not a specific authentication
mechanism. It provides some common functions and negotiation of authentication methods, called EAP methods.
There are dozens of EAP methods. This slide describes three of them.
l EAP-FAST, which stands for flexible authentication via secure tunneling, handles authentication by using a
protected access credential (PAC) that an authentication server can manage dynamically. A PAC is a security
credential generated by the server that holds information specific to a peer, which, in this case, would be the relying
party or service provider. The server uses the transport layer security (TLS) protocol to establish a secure
communication tunnel to the peer.
l Protected EAP (PEAP) is a protocol that encapsulates EAP messages within an encrypted and authenticated TLS
tunnel. PEAP authenticates Wi-Fi LAN clients using only server-side certificates, which simplifies the
implementation and administration of a secure Wi-Fi LAN.
l Lightweight EAP (LEAP) is an EAP authentication type used primarily in Cisco Aironet wireless LANs. It encrypts
data transmissions using dynamically generated WEP keys, and supports mutual authentication between a wireless
client and a RADIUS server.
You have completed the lesson. You can now achieve these objectives.

Technical Introduction to Cybersecurity 1.0 Lesson Scripts 49


Fortinet Technologies Inc.
Access Control Methods Lesson Authentication and Access Control Module

Access Control Methods Lesson

Welcome to the Access Control Methods lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Access control is the ability to restrict access to a physical location or resource.

You encounter access control every day. It is used when you show identification and a ticket to board an airplane,
enter a place of work that requires an employee ID, or access your account online.

While authentication identifies who you are, access control, also called authorization, determines what you can
access.

You can identify yourself properly to airport security, but if you do not have the correct authorization, such as a
valid ticket, you will not be allowed to board a plane.

An access control method is the system used to determine what resources a user is allowed to access based on
authentication and the application of rules. These rules can be very simple, as in "Allow everyone who has a
name on this list access", or very complex, as in "You need a valid government-issued photo ID, a fingerprint
check, and a valid appointment to access these materials."

Computers and electronic devices have access control methods built into their operating systems, programs, and
hardware that control what users and programs are allowed to access, such as data or hardware resources.

Mandatory and discretionary access control defines whether the object being restricted, or the enforcing agency,
can change the access control rules. Mandatory access control does not allow any participating actor to change
the security requirements. A lock requires a key to enter, and cannot be changed except by a third party, for
example, a locksmith. In computer systems, mandatory access control forces all users and programs to always
check the enforcing engine before being allowed access to resources.

Discretionary access control allows some flexibility in who defines access. For example, a security guard can
allow in an employee who does not have identification so that they can visit HR to get a new badge. Or, a computer
administrator can grant themselves access to a file that they did not have access to before, or run a program with
Microsoft User Account Control (UAC). The enforcing agency or the subject trying to gain access can change the
access at their discretion.

An access control method can be either mandatory or discretionary, but not both, and can be combined with the
other methods mentioned in this lesson.

Lattice-based access control is a model that grants permission to locations or materials based on an assigned
security level. Users and devices are granted access to everything at their level or below, so a basic user would
have access to only the lowest level of resources, while an administrator, CEO, or person with top-secret
clearance would have access to the more restricted materials and any other materials below their level.
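The level-based comparison described above can be sketched in a few lines. The level names and their ordering are illustrative assumptions; real deployments define their own hierarchy.

```python
# Illustrative clearance hierarchy, lowest to highest
LEVELS = {"public": 0, "internal": 1, "secret": 2, "top-secret": 3}

def can_access(subject_level: str, resource_level: str) -> bool:
    # A subject may access anything at its own level or below
    return LEVELS[resource_level] <= LEVELS[subject_level]
```

A basic user cleared for "public" is denied "secret" material, while an administrator cleared for "top-secret" can reach everything beneath that level.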

Rule-set-based access control enforces access usually through an ordered list of rules. Rules are usually
evaluated from the top down and, as soon as a match is made, access is allowed or denied based on that rule.
This is commonly used in firewall access rules but can be something as simple as a hotel door requiring keycard
entry during the night. If an access request reaches the bottom of the list without matching, it can either fail open
and allow the access or fail closed and deny it.
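The top-down, first-match evaluation described above can be sketched as follows. The rule fields, the wildcard convention, and the sample rule list are illustrative assumptions, not any particular firewall's syntax.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    source: str    # "*" matches any source
    resource: str  # "*" matches any resource
    action: str    # "allow" or "deny"

def evaluate(rules: list[Rule], source: str, resource: str,
             default: str = "deny") -> str:
    # Rules are checked top-down; the first match wins
    for rule in rules:
        if rule.source in ("*", source) and rule.resource in ("*", resource):
            return rule.action
    # No match reached the bottom of the list: the default decides
    # whether the system fails open ("allow") or fails closed ("deny")
    return default

# Illustrative ordered rule list
RULES = [
    Rule("guest", "printer", "allow"),
    Rule("guest", "*", "deny"),
    Rule("*", "printer", "allow"),
]
```

Here a guest can reach the printer but nothing else, and any unmatched request fails closed because of the `default="deny"` setting.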

Role-based access control allows access to resources based on roles that have been defined by administrators
and assigned to users or devices. These roles map to a set of permissions, usually some form of access control
list. Access is then granted if the user or device is successfully authenticated and has the appropriate role and
permissions to access the resources requested.

One of the major benefits of role-based access control is the flexibility in granting permissions to new users or
devices. Administrators assign the appropriate roles and the user or device gains access to the necessary
resources. A disadvantage to role-based access control is the time spent in defining and setting up meaningful
roles that take advantage of the flexibility of a role-based system.
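The role-to-permission mapping described above can be sketched with two dictionaries. The role names, permission strings, and user assignments are illustrative assumptions.

```python
# Roles map to permission sets; users map to one or more roles
ROLE_PERMISSIONS = {
    "basic_user": {"read:wiki"},
    "admin": {"read:wiki", "write:wiki", "manage:users"},
}
USER_ROLES = {"alice": {"admin"}, "bob": {"basic_user"}}

def is_authorized(user: str, permission: str) -> bool:
    # Access requires at least one assigned role that grants the permission;
    # unknown users and unknown roles fall through to a denial
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Onboarding a new user then reduces to adding one entry to `USER_ROLES`, which is exactly the flexibility benefit described above.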

Attribute-based access control can consider multiple attributes to determine if access should be allowed. A rule
engine collects and evaluates attributes against a policy. The major benefit of ABAC is the ability to dynamically
allow and deny access based on multiple factors, rather than on a static rule list as in RBAC. The attributes
available for a policy decision vary widely depending on the type of access control. Nontraditional attributes like
patterns of behavior, geolocation, and current resource loads can be considered using ABAC.

ABAC attributes can be time-consuming and resource-intensive to define, and may be inefficient in cases where a
simpler access control method would be more time-efficient and secure. Extra expertise is usually required when
defining an ABAC policy to avoid configuration errors and potential security holes.
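A rule engine of the kind described above can be sketched as a list of predicates evaluated over collected attributes. The specific checks, attribute names, and thresholds here are illustrative assumptions, not a standard policy language.

```python
from datetime import datetime

# Each check is a predicate over the collected attribute dictionary
def during_business_hours(attrs: dict) -> bool:
    return 9 <= attrs["request_time"].hour < 17

def from_known_location(attrs: dict) -> bool:
    return attrs["geolocation"] in {"office", "home-office"}

def low_risk_behavior(attrs: dict) -> bool:
    return attrs["failed_logins_last_hour"] < 3

# The policy is the full set of checks; all must pass
POLICY = [during_business_hours, from_known_location, low_risk_behavior]

def abac_decision(attrs: dict) -> bool:
    # The rule engine evaluates every attribute check against the policy
    return all(check(attrs) for check in POLICY)
```

Because each check is an ordinary function, dynamic attributes such as geolocation or behavioral risk scores can be folded into the same decision, which is the flexibility ABAC offers over a static rule list.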

Access control methods are systems designed to make allow/deny decisions about whether a person, device, or
program can access a specific resource.

We have introduced multiple types of access control methods in this lesson.

You have completed the lesson. You can now achieve these objectives.


Access Control Best Practices Lesson

Welcome to the Access Control Best Practices lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Access control complexity has exploded with the proliferation of new devices and technology.

Instead of having employees working with a fixed, permanent desktop computer and telephone infrastructure
locked in company-owned and secured buildings, modern employees can possess multiple internet-connected
devices that they take with them, leave unattended in public places, connect to unsecured networks, and still
expect to have the same access to sensitive information to perform their jobs.

In addition, cyberattacks against devices, and more sophisticated social engineering attacks, make proper
deployment and enforcement of control methods even more important.

In this lesson, you will learn about some simple concepts and best practices to help correctly deploy and maintain
a security infrastructure. One of the major goals of access control is to allow authorized users to access their
information when they need it, and nothing else. This requires constant re-evaluation of policies, especially when
employees are hired or leave a company, new sources of data need to be accessed, or when a new technology
allows new ways to access information.

Access control is a lifecycle. As a user or device is provisioned and assigned permissions, it travels through the
lifecycle until it is deprovisioned and the cycle starts again.

Once you provision a user or device, the administrator or security authority assigns authentication credentials to it.
When the user or device authenticates, an access control policy authorizes the user or device to access the
information it requires using permissions. Administrators can provide additional services to allow features like
password changing or permission requests. If a user or device must be deprovisioned, the administrator removes
the user from the access control policies and ensures that it can no longer access information.

A governance system, which is usually a team responsible for reviewing and updating access control methods
and policies, reviews and modifies the process. Because of the highly dynamic nature of authentication and
granting access, it is important to have periodic reviews and tests of each stage of the lifecycle. Such reviews
ensure that the systems are protected while allowing users and devices to access the information they need
securely.

Correctly implementing access control can present many challenges. The biggest hindrance is usually the
complexity of configuring solutions and the need to configure multiple types of access control. You can apply
access control to computers, phones, doors, files, folders, networks, and many internet-connected devices, such
as printers and projectors. Maintaining a consistent, easy-to-follow system of limiting access is crucial to making
quick, easy changes to allow new users or devices, and to identify problems and potential breaches.

Keeping a uniform set of access policies across multiple platforms is also a challenge. A Windows-based access
control solution may look very different from one designed for a Linux file system, and may require different
products, rules, and policies to produce a similar outcome. Users can become frustrated if they cannot remember
or easily change their password, and drive up support and maintenance costs if the system does not have the
appropriate self-service tools to allow easy user interaction.

Having multiple access solutions, especially across multiple buildings and geographical locations can make
maintaining an accurate audit trail and accounting tools difficult. Ensuring that a user or device is added or
removed from all access control policies in a large organization is a crucial step in ensuring the integrity of the
system. Having many devices and users with partial, outdated policies can be a huge security risk in addition to a
widespread customer support problem.

When establishing secure access control, it can be very helpful to shift from an individual user-based approach,
where each user has access control rules assigned to their user account, to a group or role-based approach.
Making this shift can vastly simplify the assigning and modifying of new permissions. This is because making one
change can affect all relevant users and help prevent cybersecurity fatigue from trying to assign rules and
restrictions correctly to individual accounts.

The cause of most security breaches against physical and cyber-based assets is human error.

Having a clear, published security policy prevents users from accidentally violating security rules and can help
people recognize when a security breach is occurring. The policy can be as simple as warning a security guard
when you see tailgating through a door, or reporting a suspicious file permission or an attempted social
engineering attack to the IT support desk.

Educating users and providing them with appropriate tools to manage their security settings can drastically reduce
the risks from human error.

Having a robust password policy, one that requires a minimum length, special characters, and periodic changes, is
one of the most effective and easy-to-implement security measures.
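A password policy like the one described above can be enforced mechanically. This sketch is illustrative; the minimum length and character-class requirements are assumptions, not a quoted standard.

```python
import re

def password_problems(pw: str, min_length: int = 12) -> list[str]:
    # Returns the list of policy violations; an empty list means the
    # password passes. Thresholds here are illustrative assumptions.
    problems = []
    if len(pw) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", pw):
        problems.append("no lowercase letter")
    if not re.search(r"\d", pw):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("no special character")
    return problems
```

Returning the full list of violations, rather than a bare pass/fail, supports the self-service goal: the user sees exactly what to fix without a support call.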

Having appropriate self-service tools, such as password resetting and request forms for allowing different access
control, can reduce support calls, improve security, and help maintain a higher security posture.

A user who knows that IT will never call and ask them to reset a password will not be as susceptible to that social
engineering attack, and will report the incident to the appropriate security services.

When creating access control policies for files and folders, it is important to understand how different devices can
handle file permissions and how you can set them.

Ideally, an access control solution will be able to uniformly apply permissions across multiple types of devices, but
this is not always the case. Universally, administrators should be aware of which files and folders are viewable, or
readable, by users, and which are changeable, or writeable. For example, Windows and Linux file systems have
significantly different ways of displaying and applying read and write permissions to files and folders even though
fundamentally they are doing the same thing—defining what users and processes are allowed to do to specific
files.

Always restrict generic, unauthenticated users from accessing files and allow access only to authenticated users
and the appropriate system processes. Most attacks occur against file systems that grant only basic, user-level
access initially. Preventing these minimally protected accounts from being able to explore and read files in a file
system greatly increases security and prevents data loss.

In addition to having a robust access control policy defined for your file systems, it is critical to understand how
those permissions are inherited and distributed across your files and folders. How inheritance works is,
unfortunately, usually unique to the operating system or authentication system being implemented. Pay special attention
to security permissions that are applied to an individual file or folder, or to all files and folders in a specific location.

Additionally, ensure that any newly created files or folders in a folder inherit the permissions of the parent. This
prevents the huge security risk of having a highly secured folder, but unsecured subfolders containing important
information. Generic users may not be able to see or even access the top-level folder but can directly access all
the files in the subfolders. Network scanners and directory enumerators can discover these vulnerable folders and
be a very common avenue of data loss and compromise.
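An inheritance audit of the kind described above can be sketched as a pure-data check, independent of any particular operating system. The permission strings and the sample tree are illustrative assumptions.

```python
def find_weak_subfolders(permissions: dict[str, set[str]]) -> list[str]:
    # Flags any path whose permission set is broader than its parent's,
    # that is, child folders that failed to inherit the parent's restrictions
    weak = []
    for path, perms in permissions.items():
        parent = path.rsplit("/", 1)[0]
        if parent != path and parent in permissions:
            if not perms <= permissions[parent]:
                weak.append(path)
    return weak

# Illustrative tree: /secure is locked down, but one subfolder
# leaks a broader "everyone:read" grant
TREE = {
    "/secure": {"admins:read", "admins:write"},
    "/secure/reports": {"admins:read", "admins:write", "everyone:read"},
    "/secure/archive": {"admins:read"},
}
```

This is exactly the scenario described above: the top-level folder looks protected, but a directory enumerator would still find `/secure/reports` readable by generic users.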

Please review these access control best practices before continuing.


The principle of least privilege, or PoLP, and zero-trust architecture, or ZTA, are two modern methodologies that
are used to help define how access control is implemented. Both are useful in visualizing and making decisions
about how access control is defined and implemented.

The principle of least privilege is conceptually straightforward—allow things to access only what they need. If an
employee needs to access their desk, the break room, and a restroom, allow them physical access only to those
locations. There is no need for them to be able to access the warehouse to do their job. The same applies to
network, database, and folder access. In modern computer systems, the principle of least privilege applies to
kernel resources, file systems, and other aspects of the operating system. The principle prevents malicious actors
from laterally moving from one compromised system to another. For example, if someone steals a keycard from
the employee and that card grants access only to a desk, break room, and restroom, it is less potentially damaging
than if that card had access to an entire building because security was not implementing the principle of least
privilege.

Zero-trust architecture is an extension of the traditional security model. It's a methodology to enforce verification
and authorization of all connections to resources regardless of where the connections are coming from. Originally
designed to protect against external threats, the concept has had to be adapted with the advent of mobile devices
and more distributed and remotely located work forces. Instead of focusing security on the perimeter of internal
resources, security has become much more distributed, where access to all important resources is authenticated
and authorized before being allowed to continue, regardless of where access is originating from. A local
administrator in the same data center is treated the same as a remote user attempting to access the information
from a coffee shop. Attribute-based access control policies are used to evaluate attributes like username,
connecting device properties, and location, to validate the connection.

Working together, the principle of least privilege and zero-trust architecture create a very robust model that
minimizes possible attack surfaces and, more importantly, can minimize damage and data compromises when
inevitable failures occur. When designing or revising access control policies, keep these methodologies in mind
when making decisions.

Rigorous administration of policies is critical when setting up access control and in reviewing and maintaining a
secure posture. A common mistake in designing and enforcing access control is not allowing for periodic review of
and updates to policies. Effective governance helps enforce appropriate separation of duties, so administrative
and common user access can be modified to reflect current requirements. Major causes of breaches include
old access control policies and old, unused user accounts.

After you create strong access control, escalation, and disaster recovery policies, it is important to communicate
them to users and allow them to review them. A well-educated user is a powerful line of defense in many aspects
of cybersecurity, including access control. A user who is aware of security policies will use strong passwords and
keep them secure, and is much more likely to report possible access control violations and suspicious activity
when accessing their usual resources.

Click each segment to learn some best practices for configuring and maintaining secure access control.

You have completed the lesson. You can now achieve these objectives.


Network Access Control Lesson

Hello! In this lesson, we will introduce you to Network Access Control (NAC) and explain how it has evolved.

NAC is an appliance or virtual machine that controls device access to the network. It began as a network
authentication and authorization method for devices joining the network, which follows the IEEE 802.1X
standards. The authentication method involves three parties—the client device, the authenticator, and the
authentication server. The authenticator could be a network switch or wireless access point that demarks the
protected network from the unprotected network. The client provides credentials in the form of a username and
password, digital certificate, or some other means, to the authenticator, which forwards these credentials to the
server. Depending on the outcome of authentication, the authenticator will either block the device or allow it access
to the network. Another method to control access to a network, especially a publicly available network, is a captive
portal. If you've ever connected to a network in an airport, hotel, or coffee shop, you might remember interacting
with a web page that asked you to agree to legal terms before granting access.

Later, NAC evolved to accommodate guest access, Bring Your Own Device (BYOD), and the Internet of Things
(IoT). For a couple of reasons, BYOD and IoT devices introduced new security challenges. One, BYODs are
personally owned, not assets of an organization. So, MIS does not control what runs on these devices, for
example, antivirus software or unsafe applications. Two, IoT devices are hardware devices with sensors that transmit
data from one place to another over the internet, dramatically expanding the attack surface. Organizations buy
IoT-enabled devices from other vendors, and these devices connect back to vendor networks to provide
information about product use and maintenance needs. Organizations tolerate this situation because IoT devices
save them time and money. For example, if a printer is low on toner, the vendor could notify the network
administrator by email, or even deliver a new toner cartridge automatically. In a smart home, IoT devices regulate
heat and humidity, remotely control the locks on doors, monitor what's in the fridge, and even help with your
grocery lists.

The evident convenience of these devices has made them wildly popular and numerous. However, the variety of
devices, the lack of standards, and the inability to secure these devices make them a potential conduit for
contagion to enter the network. Many IoT devices lack the CPU cycles or memory to host authentication and
security software. They identify themselves using a shared secret or unique serial number, which is inserted
during manufacturing. But this authentication scheme is very limited—should the secret become known, there is
likely no way to reset it, and without the ability to install security software, there is little visibility into those devices.
Fortunately, NAC evolved to solve these weaknesses.

When MIS introduces NAC into a network, the first thing NAC does is create profiles of all connected devices.
NAC then permits access to network resources based on the device profile, which is defined by function. This is
similar to granting individuals access to sensitive information based on their need to know. For example, NAC
would permit an IP camera connection to a network video recorder (NVR) server, but would prevent it from
connecting to a finance server. Based on its profile, an NVR has no business communicating with a finance
server. When access is granted this way, the network becomes segmented by device function. If a device is
compromised, malware can infect only those objects that the device is permitted to connect to. So, the
compromised IP camera from the earlier example could infect the NVR server, but not the finance server.
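The profile-based segmentation just described can be sketched as a simple allow-list keyed by device profile. The profile names and destination servers are illustrative assumptions, not FortiNAC configuration.

```python
# Each device profile, defined by function, maps to the only
# destinations that profile has any business reaching
ALLOWED_DESTINATIONS = {
    "ip_camera": {"nvr_server"},
    "printer": {"print_server"},
    "workstation": {"nvr_server", "print_server", "finance_server"},
}

def nac_permits(device_profile: str, destination: str) -> bool:
    # Unknown or unprofiled devices get no access at all (fail closed)
    return destination in ALLOWED_DESTINATIONS.get(device_profile, set())
```

Under this model a compromised IP camera can still reach the NVR server, but any attempt to touch the finance server is denied, which is the containment behavior described above.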

While NAC proved highly effective at managing numerous unprotected devices, it had shortcomings over its
evolution. Some NAC solutions were designed to help with BYOD onboarding in wireless networks, but performed
badly in the wired portion of the network. Other solutions were developed to work within a single vendor
environment, but couldn't automatically profile third-party devices. Some had good visibility into small, simple
networks, but didn't scale well into large, distributed networks.


Today, most NAC solutions have redressed these limitations. They have more complete visibility into the network
and are better at categorizing devices automatically. They effectively perform in both Ethernet and wireless
networks. Many NAC solutions have centralized architecture that improves managing devices across large and
multisite networks. Critically, NAC must also be integrated into the security framework, so that when a breach is
detected, NAC automatically notifies the security operations center (SOC) and coordinates with other security
devices to neutralize the threat.

Fortinet offers a network access control solution, named FortiNAC. It contains all of the features identified in this
lesson.

Thank you for your time, and please remember to take the quiz that follows this lesson.

Secure Remote Access Module

Overview Lesson

Welcome to the Secure Remote Access Overview lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

What is secure remote access?

Secure remote access is a combination of security methods and technologies that allow outside end entities to
connect to networks, without compromising digital assets or exposing networks to unauthorized parties.

Remote users could be connecting to networks from home, a hotel, a coffee shop, a branch office, a campus, a
train station, or an airport.

The increasing number of remote workers has expanded the attack surface, a trend that predates the COVID-19
pandemic.

Users remain the principal conduit by which bad actors gain access to a network. More than 60% of all breaches
involve stolen or hacked user credentials. According to the 2021 Data Breach Investigations Report from Verizon,
85% of breaches involve the human element.

A focused effort to harden security around the end user, especially the remote end user, is a top priority for
organizations.

How is remote access secured?

Secure access has all or most of these features, depending on how you configure secure remote access.

The last three features are often expressed as AAA.

There are three common methods for secure remote access.

They are IPsec VPN, SSL VPN, and ZTNA.

What is a virtual private network (VPN)?

A VPN is a private connection across a public network that enables a user to exchange data safely with a private
network, as if their computing device was directly connected to the private network.

There are two VPN use cases: secure remote access and site-to-site.

Site-to-site VPN is a connection between two or more networks, such as a corporate network and a branch office
network.

Secure remote access VPN is a connection between a remote user and a network. A secure remote access VPN
is composed of a client, server, and protocols.

Zero Trust Network Access (ZTNA) is similar to VPN in that it satisfies the goals of secure remote access.
Essentially, the principal difference between VPN and ZTNA is that ZTNA applies the zero trust principle, which is
that no user or device, whether it is inside or outside a network, is trusted.


Around ZTNA is a regime of strong authentication and other security measures that allow a network to trust users
and devices.

This table compares the characteristics of IPsec VPN, SSL VPN, and ZTNA.

You've completed the lesson. You can now achieve these objectives.


SSL VPN Lesson

Welcome to the SSL VPN lesson.

Click Next to continue.

After completing this lesson, you will be able to achieve these objectives.

What is SSL VPN? SSL VPN is a technology that supports an encrypted session, along with other security features,
between two computing devices, and provides security from the Transport layer of the OSI model while
communicating data at the Application layer. The information is exchanged at the Application layer, between the
client and the server, but is encapsulated at the Transport layer.

Like IPsec VPN, SSL VPN requires a client and a server, but in SSL VPN the client is usually a browser and the
server is usually a web server. Like IPsec VPN, SSL VPN ensures privacy, data integrity, authentication, and anti-
replay functionality. Unlike IPsec, security is provided by the Transport Layer Security (TLS) protocol at the
Transport layer of the OSI model. TLS succeeded the now-deprecated Secure Sockets Layer (SSL) protocol.
Although superseded, SSL has become synonymous with TLS.

In the SSL portal or web type of VPN, a user connects to a website using a browser and enters their credentials to
initiate a secure connection. The SSL portal VPN allows for a single SSL connection to a web server using the
HTTPS protocol. The server is not necessarily a dedicated web server; it could be a firewall with web server
functionality. Additionally, based on their identity, the user can access a variety of network services as defined by
the organization.

The advantage of this method is that the user has quick and easy access to network resources because most
computing devices come with web browser applications pre-installed.

The SSL tunnel VPN type allows client VPN software to make a secure connection to a server. The client, which
includes a virtual network adapter, dynamically receives an IP address from the server each time the client makes
a connection. Inside the tunnel, the traffic is SSL/TLS encapsulated. The main advantage of tunnel mode is that
after the VPN is established, any IP network application running on the client can send traffic through the tunnel.
Compared with the SSL web type, the SSL tunnel type supports many more protocols, which potentially means
access to a greater number of network resources.

An end user connecting to an SSL VPN web server using the portal or web type follows these steps: one, remote
users establish a secure connection between the SSL component in the web browser and the SSL VPN web
server using HTTPS. The SSL VPN server can act as a reverse proxy. A reverse proxy is an application that sits in
front of backend applications and forwards requests to those applications. For example, a reverse proxy can relay
information to a web server. The user may think that they are connecting directly to the web server, when in fact
they are connected to the reverse proxy. The reverse proxy is implemented for security, performance, and
scalability reasons. Another reason is that after it decrypts the information received from the client, the reverse
proxy can forward it to backend servers using a different protocol, such as RDP, SSH, and so on. Two, once
connected, users provide credentials in order to pass an authentication check. And three, the firewall displays the
SSL VPN portal that contains services and network resources for users to access.
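The HTTPS connection in step one rests on a verified TLS session. With Python's standard `ssl` module, a client context with the safety checks a browser performs (certificate validation and hostname checking) can be sketched like this; the minimum-version choice is an illustrative assumption, not a requirement stated in this lesson.

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    # create_default_context() turns on certificate verification and
    # hostname checking, mirroring what a browser does when a user
    # connects to an SSL VPN portal over HTTPS
    ctx = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS
    return ctx

# A real client would then wrap a TCP socket, for example:
#   with socket.create_connection((host, 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname=host) as tls:
#           ...  # exchange HTTPS requests over the secure session
```

The point of the sketch is that the secure connection is established before any credentials are entered, so the password in step two only ever travels inside the encrypted session.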

There are two main disadvantages to using this type of SSL VPN. The first is that all interactions with the internal
network must be done using the web browser. External network applications running on the user's device cannot
send data across the VPN. The second disadvantage is the limitation of what can be accessed on the network.
Because a secure HTTP/HTTPS gateway mechanism supports only a few popular protocols, such as FTP and
Windows shares, it restricts what network resources the user can access.


An end user connecting to an SSL VPN server using the tunnel type follows these steps: one, the user connects to
the server using the SSL VPN client. Two, the user provides their credentials to authenticate successfully. Three,
the client establishes a tunnel between its virtual interface and the server, which assigns an IP address to the
client's virtual network adapter. The IP is assigned only for this session. All IP traffic initiated by the client is
encapsulated by the virtual interface into HTTPS and is sent to the server. The server decrypts the traffic, and de-
encapsulates the original IP packet, which then can be sent to the backend internal network. And four, the user
can access services and network resources through the encrypted tunnel. One important takeaway is that a VPN
client must be installed on the endpoint, and this could be considered a disadvantage.

You are now able to:


l Describe SSL VPN
l Describe how SSL VPN works
l Describe the differences between SSL VPN types


IPsec VPN Lesson

Welcome to the IPsec VPN lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

IPsec VPN is the technology that ensures data privacy and integrity between two or more computing devices, and
provides security at the network layer of the OSI model.

Like all remote secure access methods, IPsec VPN requires a client at one end and a server at the other. In the
case of IPsec VPN, the VPN server could be a firewall or router with VPN capabilities. The VPN server can also be
called a VPN (or IPsec) concentrator.

IPsec can be configured in one of two modes: tunnel or transport. The principal difference between the two is that tunnel mode secures more components of each data packet than transport mode does.

IPsec can also be configured to authenticate packets, or encrypt packets, or do both. This is determined by the protocols enabled on the VPN server. If the authentication header (AH) protocol is enabled, then data integrity, data origin authenticity, and protection against replay attacks are ensured.

If the encapsulating security payload (ESP) protocol is enabled, then packets are encrypted between the two
points, ensuring privacy. If both AH and ESP are selected during IPsec configuration, then both authentication and
encryption during a VPN session will be supported.
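The integrity guarantee that AH provides is built on a keyed hash. The sketch below is a simplified illustration of that idea, not the actual AH packet format; the key and payload values are hypothetical. It shows how an HMAC tag lets a receiver detect any tampering with a message:

```python
import hashlib
import hmac

def protect(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag, loosely analogous to AH's integrity check value."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify(key: bytes, message: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

key = b"shared-session-key"        # hypothetical key known to both peers
msg = protect(key, b"hello gateway")
assert verify(key, msg)            # untouched message verifies
assert not verify(key, b"X" + msg[1:])  # modified payload fails the check
```

Only a party holding the shared key can produce a valid tag, which is what ties each packet to a trusted source.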

Click the underlined terms for more information.

Users typically use a password to log in to their VPN client application. If multi-factor authentication is supported,
the user must provide additional information to prove their identity. This information could be a one-time password
(OTP) provided by a token to prove that they are in possession of the device.

If the device is registered to Alice, and Alice provides the OTP which only her device could produce, then the user
must be Alice.
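The OTP check described above can be sketched with a standard time-based one-time password (TOTP) computation. This is a minimal RFC 6238-style illustration, not any particular vendor's implementation, and the seed value is a hypothetical secret shared at token enrollment:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time password (SHA-1 variant)."""
    counter = int(time.time() // period)            # token and server derive the same counter
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"alice-device-seed"   # hypothetical enrollment seed
print(totp(secret))             # 6-digit code that changes every 30 seconds
```

Because the code depends on both the secret and the current time window, presenting a valid code demonstrates possession of the enrolled device.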

Once the user logs in to the VPN client, then the client initiates a session with the VPN server.

During the initial connection or "handshake" between the client and server, they agree to certain security attributes
that will define the session. These security attributes include what cryptographic algorithms and network
parameters will be used.

Authentication of the server and client also takes place. Lastly, a session key is generated and shared between
the client and server. The session key will encrypt the data flow between the two points. The protocol used to
achieve this is called internet key exchange (IKE).

At the end of this step, all security attributes for the session are determined, and a session key is produced to
encrypt data for the session going forward.
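The shared session key that IKE negotiates rests on a Diffie-Hellman exchange. The toy sketch below uses an illustrative 32-bit prime (real IKE uses 2048-bit or larger MODP groups) to show how both sides derive the same key without ever sending it across the network:

```python
import hashlib
import secrets

# Toy parameters for illustration only -- far too small for real use.
p, g = 0xFFFFFFFB, 5

a = secrets.randbelow(p - 3) + 2      # client's private value, never shared
b = secrets.randbelow(p - 3) + 2      # server's private value, never shared
A = pow(g, a, p)                      # public values exchanged in the clear
B = pow(g, b, p)

# Each side combines its own private value with the peer's public value.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret  # identical shared secret, never transmitted

# The shared secret is then run through a hash to produce the session key.
session_key = hashlib.sha256(client_secret.to_bytes(4, "big")).digest()
```

An eavesdropper sees only `A` and `B`; recovering the private values from them is the hard problem that protects the session key.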

Click the buttons for more information.

Network parameters (Slide Layer)

All data that is sent over a network is broken down into smaller pieces called packets.

Packets contain both a payload, which is the actual data being sent, and headers, or information about the data,
so that computers receiving the packets know what to do with them.
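The header-plus-payload structure can be sketched with the `struct` module. The 4-byte field layout here is invented for illustration and is not a real IP header:

```python
import struct

# Hypothetical minimal header: version (1 byte), protocol (1 byte), payload length (2 bytes).
HEADER = struct.Struct("!BBH")

def build_packet(version: int, protocol: int, payload: bytes) -> bytes:
    """Prepend a fixed-size header describing the payload."""
    return HEADER.pack(version, protocol, len(payload)) + payload

def parse_packet(packet: bytes):
    """Read the header fields, then slice out exactly the advertised payload."""
    version, protocol, length = HEADER.unpack(packet[:HEADER.size])
    return version, protocol, packet[HEADER.size:HEADER.size + length]

pkt = build_packet(4, 17, b"application data")   # 17 is UDP's IP protocol number
assert parse_packet(pkt) == (4, 17, b"application data")
```

The receiver never has to guess where data begins: the header's fixed size and length field tell it exactly what to do with the bytes that follow.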


IPsec adds several headers to data packets containing authentication and encryption information.

IPsec also adds trailers, which go after each packet's payload. Trailers are added when the ESP protocol is used.

If the VPN is configured to use the ESP protocol, then parts of the packets are encrypted. In effect, this means that there are two layers of encryption: the encrypted session initiated by IKE and the encryption of the packets.

IPsec provides authentication for each packet, like a stamp of authenticity on a collectible item.

This ensures that packets are from a trusted source and not an attacker.

IPsec encrypts the payload within each packet and, in tunnel mode, each packet's original IP header; in transport mode, only the payload is encrypted.

Encrypted IPsec packets travel across one or more networks to their destination using a transport protocol. At this
stage, IPsec traffic differs from regular IP traffic in that it most often uses user datagram protocol (UDP) as its
transport protocol, rather than transmission control protocol (TCP).

At the end of the communication, the packets are decrypted and verified. Verification includes checking the
integrity and origin of the data.

Click the UDP term for a greater description.

You've completed the lesson. You can now achieve these objectives.


ZTNA Lesson

Welcome to the ZTNA lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

What is ZTNA?

ZTNA establishes a secure session between an end entity and a network, while ensuring granular control over
access to resources and exercising zero trust, regardless of the location of either the end entity or the network.

Part of the zero trust principle is the practice of least privilege access. This means that users are only granted access to the resources necessary to fulfill their job requirements, and no more.

As a network security concept, zero trust operates under the premise that no user or device inside or outside the
network should be trusted, unless their identification and security status have been thoroughly checked. Zero trust
operates on the assumption that threats, both outside and inside the network, are omnipresent. Zero trust also
assumes that every attempt to access a network or an application is a threat.

So, regardless of whether the end entity is remote or on-premises, the connecting computing device automatically
establishes an encrypted session with the network. Specifically, this connection takes place between a ZTNA
client at the end entity and the ZTNA access proxy, which could be a firewall. The proxy point hides the locations
of requested applications from the outside. The proxy directs the client's request to the application, which could be
on-site or in the cloud, only if the user meets access requirements.

Other ZTNA components are authentication and security. Because the user is identified through authentication
against an on-premises backend server or an Identity-as-a-service (IDaaS), policy can be applied based on the
user roles.

Also, the ZTNA policy server enforces policy to control access, specifically to applications. For example, access
could, in part, be based on geolocation. So, if the remote device is connecting from an unexpected point in the
world, access to an application could be denied or privileges reduced.

Likewise, if a device fails a security sanity check, the user could be denied access. Security is composed of
firewalls and the ZTNA access proxy, which control access and provide security to application resources.

Unlike IPsec VPN, but similar to SSL VPN, ZTNA is vendor-specific. This means that each vendor can implement ZTNA in a way that best suits their specific requirements.

The diagram on this slide is the Fortinet ZTNA solution. The Fortinet ZTNA client is FortiClient.

Also in this diagram, FortiClient Endpoint Management Server (EMS) acts as the ZTNA policy server. When an
endpoint device with FortiClient attempts to connect to the network for the first time, it is directed to FortiClient
EMS to register. During the registration process, FortiClient provides the server with information about the device,
the user, and the security posture of the device. This information is written to tags and shared with the firewall,
FortiGate.

Based on the information in the tags, the device can be grouped and certain rules can be applied. The rules act as
instructions for FortiGate. FortiGate applies the rules to the device each time it connects to the network. An
example of a rule could be that a device with Windows 10 plus antivirus software is allowed access, but a device
with Windows 10 and no antivirus software is denied access.
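A posture rule like the one above can be sketched as a simple check on device tags. The tag names here are hypothetical and do not reflect FortiClient EMS's actual tag schema:

```python
def evaluate_access(tags: dict) -> str:
    """Hypothetical rule: Windows 10 endpoints must have antivirus running."""
    if tags.get("os") == "Windows 10" and tags.get("antivirus"):
        return "allow"
    return "deny"          # deny by default, in keeping with zero trust

assert evaluate_access({"os": "Windows 10", "antivirus": True}) == "allow"
assert evaluate_access({"os": "Windows 10", "antivirus": False}) == "deny"
```

Note the deny-by-default structure: any device whose tags do not positively match a rule is refused, which mirrors the zero trust assumption that every access attempt is a potential threat.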


At the end of the registration process, FortiClient EMS generates a digital certificate for the device, sends the certificate to the device, and shares it with FortiGate. From this point onward, the device submits the certificate to FortiGate each time it needs to identify itself.

FortiClient is in continuous communication with FortiClient EMS. If the endpoint information changes, the server
updates the client tags and resynchronizes with FortiGate.

The ongoing communication between these components is called network telemetry, and it provides agile and
dynamic responses to enhance network security.

How does Fortinet ZTNA work?

When the endpoint connects to the ZTNA access proxy, FortiGate challenges the endpoint for device
identification.

The endpoint sends the device certificate to FortiGate, proving the device identity. Then, FortiGate applies the
associated tags and rules and either rejects the request or allows the device to proceed.

FortiGate challenges the endpoint for user authentication.

The endpoint prompts the user for their credentials and delivers the credentials to the access proxy.

In turn, the access proxy sends the user credentials to the backend for authentication.

The authenticating server could be an Active Directory (AD) server, an LDAP directory, a database, or an IDaaS.

The ZTNA access proxy retrieves the user's identity, along with role information. FortiGate uses the role
information to help determine if the user has permission to access the requested network application.

Finally, assuming that the device and user have been identified, and the device's tags and rules plus the user's roles allow access to the resource, an encrypted session is initiated between the ZTNA client and the ZTNA access proxy, and the user gains access to the application.

You've completed the lesson. You can now achieve these objectives.

Endpoint Security Module

Overview Lesson

Welcome to the Endpoint Security Overview lesson.

Click Next to get started.

The definition and number of endpoints has expanded in the last 20 years. Traditionally, an endpoint was defined
as a device connecting to a network. This included devices like desktop computers and traditional hardware
servers. Connections were established through wired connections using routers and switches.

With the invention and proliferation of wireless technologies like Wi-Fi, Bluetooth, and cellular data networks,
additional devices have been created to take advantage of easy connectivity. These days, anything from a watch
to a toothbrush can be "smart" and able to connect to the internet and be controlled by other connected devices,
like cellphones and computers.

In addition to the convenience of connected features, usage statistics, performance data, and other information
can be transmitted and used to monitor these devices. This data can then be used to improve the efficiency of
everything, from home heating costs to determining what type of crops to plant. This proliferation of connected
devices has been called the internet of things, or IoT.

While promising a more connected and efficient future, IoT also has some serious implications for security. Any
device that can connect to the internet or to other devices is potentially vulnerable to compromise by a bad actor.
Even worse, most new smart devices are not designed with security in mind. A manufacturer will usually prioritize
usability and cost for their new smart device over security. Awareness is improving, but securing IoT is a huge
concern for individuals, businesses, and governments.

The development of new endpoint protection tools has simplified how you secure both traditional and new types of
endpoints. Because each new endpoint that connects to a network is a potential avenue of attack, the attack
surface has grown larger. This larger attack surface demands new tools that will allow administrators to identify
and protect new endpoints as they join the network.

After completing the lessons in this module, you will be able to:
- Describe what constitutes an endpoint.
- Explain why it is critical to identify and secure all the endpoints that connect to a network.
- List common methods for securing endpoints.
Proceed to the first lesson to get started.


Internet of Things Lesson

Welcome to the Internet of Things lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

The Internet of Things, or IoT, is the network of physical objects connected to the internet using software, sensors,
and other technologies. A home security camera that you can access from your phone and a sensor on a factory line monitored remotely over the internet are both part of the Internet of Things. There are many ways to break down
and categorize the entire Internet of Things.

The first category is personal devices. These include small wearable devices like smart watches, cell phones, and
helmet cameras. This category can also include items like heart monitors and cars.

The second category covers residential and municipal settings, such as smart homes and smart cities. You can now connect devices
that maintain home security, heating and cooling, and refrigerators to the internet. A very exciting development is
the integration of multiple systems in urban planning and development. Cities can now integrate street cameras,
streetlights, and electrical, water, and waste management systems using smart devices to improve efficiency and
costs on a city-wide scale.

The last category contains industrial devices used in manufacturing, energy production, and agriculture. One of
the major driving forces behind the expansion of the Internet of Things has been the huge benefit of connected
devices in improving the safety and efficiency of manufacturing processes. There are many other types of devices
not covered in these categories, but it is safe to say that there are connected "smart" devices involved in every
aspect of our lives.

Advancing technology has resulted in an explosion of personal devices, like smart watches, cameras, and fitness
monitors that are connected to the internet.

Other devices in this category include medical monitoring devices that can monitor and notify individuals of heart
abnormalities, blood sugar, and oxygen levels.

Smart cars and bicycles can monitor vehicle health and provide diagnostic reports on the vehicles, as well as help
with navigation and provide other features for the convenience of the user.

Toothbrushes can help track how often and how well you brush your teeth, smart tags can give travelers peace of
mind on the location of their luggage, and speakers can suggest and play music choices based on the time of day
and a user's mood.

Maintaining control over your personal data and information is becoming much more difficult because so many
new devices can now collect and share potentially sensitive information over the internet. A simple example of
these possible security risks is a glucometer, which can be invaluable in helping someone with diabetes and which
can be used and checked over the internet as a connected device. But if that monitor is compromised, bad actors
can access sensitive information, like the user's personal information and medical status, and potentially use it to
harm the user.

One area in which connected devices have proliferated is the home. One of the earliest connected devices was the
smart thermostat. Using temperature sensors around the house, the thermostat can adjust where heating and
cooling is needed and control the environment based on a schedule and changing external conditions.
Connecting this type of a device to the internet and allowing control through a mobile device was an early
cornerstone of the smart home. Now, you can connect smart locks, alarms, cameras, lights, and common
appliances like refrigerators and microwaves through a common smart home hub.


An extension to the concept of the smart home is the smart city. Using smart devices, personnel can monitor and
control all aspects of city infrastructure. This includes common city maintenance tasks like traffic management
using smart roads and streetlights, utilities like water, power, and waste management, and security using a city-
wide authentication system and integrated cameras. Even common public services like ambulance services can
benefit.

Imagine a scenario in which a person is injured, and the ambulance crew can immediately identify which hospital
in the city has the most vacant beds. They can then signal the city traffic system to clear traffic for faster
transportation to the hospital, while at the same time transmitting the patient's vital statistics from the smart
devices embedded in the medical gurney directly to the hospital's systems. The flexibility and power of a fully
interconnected smart city has almost limitless possibilities.

With the expansion of devices in the personal space, people can handle common security concerns on their own.
The individual can take precautions, such as patching smart home devices, changing default passwords, and
limiting what data is shared on the internet.

The increasing number of connected devices, especially in a large, interlaced network like a city, introduces a
massive number of endpoint connections that must be secured and validated at the enterprise level to ensure that
only valid devices can participate and send information over the city networks.

Another area where the Internet of Things and connected endpoints are advancing rapidly is industrial control systems and operational technology. While information technology focuses on the development and management
of computer systems for exchanging information, operational technology is the hardware and software used to
manage and control industrial equipment and systems. These two fields now have extensive overlap with the rise
of the industrial internet of things or IIoT. IIoT is the connection of instruments, sensors, and other devices used in
manufacturing and utility management over local networks and the cloud to allow better automation, efficiency,
and data collection.

Factories and utility plants used to be self-contained islands where all operational technology controls were
connected to a central control room. No external access to the industrial controls and sensors was allowed. These
days factories, energy producers, and agriculture drive the convergence of operational and information
technology.

Warehouses, assembly lines, chemical production facilities, oil rigs, and even farms can now be fully connected to
both local and central monitoring stations through the internet.

This has multiple benefits. First, managers can now have much greater visibility into industrial systems thanks to
real-time tracking and asset management. This is especially helpful in warehouses and managing supply chains.

Industrial control systems can monitor and adjust parameters on an assembly line, and maximize efficiency and power usage.

Finally, because of the extensive real-time monitoring, safety and maintenance are greatly improved. Devices can
immediately report problems and flag themselves for maintenance before user intervention is required and even
shut themselves down before accidents happen according to preprogrammed parameters.

The industrial Internet of Things exists in every industrial field. Agricultural production, assembly lines,
warehouses, energy, and chemical production and distribution all have fully developed, interconnected industrial
control systems to assist in safety, productivity, and cost-efficiency of industrial processes.

Just like smart cities, the rise in connected devices in the industrial sector has raised concerns about security.
Sensitive facilities like power stations and factories that handle dangerous materials are now at risk to external
compromise and potential tampering. Security for industrial systems and operational technology has lagged
behind more traditional computer systems and information technology but has been improving as awareness has
grown.


Driven by the Internet of Things, the number of endpoints and the reach of connected devices have grown without a corresponding increase in security coverage. You must consider the security risks of traditional computer systems for any device that can connect to the internet, or that can be reached over wireless signals, RF, or Bluetooth.

One of the first concerns is the much larger attack surface. Because there are so many new devices connected to
networks, a security breach in one can lead to the easy compromise of others that share the same network. A bad
actor can compromise a wireless camera connected to a home wireless network and infiltrate the home. People
tend to think their laptops or phones are the most likely targets for attack, but because of the influx of new devices
with low default security, many recent network breaches now originate from an attacker finding a connected
device on the network and using it as the entry point to compromise other systems. Many IoT devices use poor or
no encryption and have very weak or even default passwords that bad actors can easily exploit. It is safest to
consider every device on your network, even your toothbrush or wireless camera, as a potential vulnerable
system and act accordingly.

A major risk is the creation of botnets from compromised IoT devices, which attackers can use both to further compromise your local devices and to attack other sites. Data loss, infrastructure attacks, and privacy violations are also extremely common consequences of unsecured endpoints and IoT devices.

One of the largest challenges of new IoT devices is how to connect them to established networks securely.
Because IoT devices are usually not well-known devices that are already configured to be managed with existing
infrastructure, allowing them to connect directly to a corporate network is a large risk. It is recommended that you
connect all unidentified or new devices to an isolated network until you can secure and register them as valid
devices. You can use a physically separate network, VLAN, or a dedicated Wi-Fi access point to accomplish this.

Once you identify a device, register its identifying information, such as hostname, serial number, MAC address, or static IP address. Register devices even if you are not installing management software, because most IoT devices do not support installing traditional security software. This way, if network security devices detect malicious activity, you can more easily track down the offending device and block it from your networks.

For companies, a basic strategy to protect their networks and endpoints is to first learn and profile all endpoints,
including IoT devices. Once they create a catalog of devices, they can divide the devices into groups by security
and connection needs. Companies should place only devices that need to talk to each other on the same network
infrastructure.

A classic example of network segmentation is a closed-circuit TV system where the company keeps all video
feeds on separate physical links from other traffic to prevent tampering. Companies can do this with IoT devices
like electronic locks, security cameras, and assembly line controls. They can then tightly control access to and
from these specific networks using firewalls and other network security devices.

Keeping specific device traffic segregated from other traffic makes it much simpler to enforce rules that prevent a compromised device from attacking neighboring systems, and it enables administrators to monitor and identify any new devices on the protected networks. If you have identified thirty cameras on your network, any new camera attempting to connect and send information should immediately be flagged as suspicious. This is much easier to do on a separate, discrete network than on a network shared with hundreds of other devices.

A large part of securing the Internet of Things is to consider these new devices as new endpoints that need to be
secured like any other traditional endpoint or computer system. You can identify their security needs and ensure
they are properly protected by applying traditional network security concepts and adapting them to these new
devices.


Endpoint Hardening Techniques Lesson

Welcome to the Endpoint Hardening Techniques lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

With the spread of Internet of Things, or IoT, devices, the number of endpoints that need to be secured has
increased exponentially. Fortunately, there are many strategies and policies that you can use to secure not only
traditional client and server endpoints, but the newer network connected devices that have proliferated across all
aspects of life. Many of these techniques are geared toward companies and enterprise networks, but you can also
use them in your personal and home environments. Remember that one of the greatest threats caused by the
spread of endpoints is that of an unsecured device allowing unauthorized access to a network that can be
exploited to gather information or compromise other devices.

Hardening endpoints can be broken down into several categories. The first category is using administrative
controls to enforce secure passwords and restrict user and network access using the principle of least privilege
(PoLP).

The second is hardening the local endpoint protection through a combination of operating system security, boot
management, local disk encryption, and data loss prevention (DLP) techniques.

The third is appropriate endpoint maintenance to ensure all devices are patched and updated regularly, have
regular policy checkups and have accurate, maintained backups for easy recovery.

The fourth is the monitoring of endpoint devices, which can be done locally through an endpoint protection
platform (EPP) client if available, or over the networks the devices are connected to using specialized network
intrusion detection systems (IDS). It is also possible to implement endpoint detection and response (EDR)
platforms that can preemptively block new, undiscovered attacks and take immediate action against suspicious
files and programs.

This lesson covers the first three topics.

The simplest way to harden and protect your endpoints and IoT devices is to ensure that the device has a secure
password. This is especially important in household IoT devices, which regularly ship with a default password that
the user is not required to change on installation. Tracking down and enforcing secure passwords on all
connected devices is a simple first step that can help reduce your overall risk. A common first attack strategy is to
scan a network for devices and attempt to log in and gain access to a local device using default passwords.

Another important step in securing endpoints is to ensure that users, especially administrators, have access to
only the permissions they need to perform their duties. Many endpoints, even basic IoT devices, grant users the
ability to create administrative roles and permission sets. This allows the creation of authenticated roles that give users and administrators access to only the features they need on a device. As a result, a weak password or social engineering attack cannot accidentally grant an attacker broader permissions.

If an attacker gains access to an account with restricted access, it will be much less damaging than if an attacker
gains full administrative access because the device is using the default administrative role. The enforcement of
permissions based on need is called the principle of least privilege, or PoLP, and is a good rule to follow when
defining any security policy, whether for endpoints, authentication, or file access.
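A least-privilege permission model can be sketched as a role-to-permission map with deny-by-default lookups. The role and action names are hypothetical:

```python
# Hypothetical role-to-permission map: each role gets only what its duties require.
ROLE_PERMISSIONS = {
    "viewer":   {"view_status"},
    "operator": {"view_status", "restart_device"},
    "admin":    {"view_status", "restart_device", "change_config"},
}

def is_allowed(role, action):
    # An unknown role receives no permissions at all: deny by default.
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("operator", "restart_device")
assert not is_allowed("viewer", "change_config")
```

If an attacker compromises a "viewer" account, the blast radius is limited to read-only status checks, rather than the full configuration access a default administrative role would have exposed.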

For simpler endpoints that can't restrict user or administrative access, consider locking down access with very
secure passwords or two-factor authentication, or restricting which IP addresses can access the device using
another device, such as a router or firewall.


Defense in depth is very important when hardening endpoints. If there are multiple layers of security, it is more difficult to compromise an endpoint and use it to further attack a network. Remember, a network is only as secure
as its most vulnerable endpoint, so having a broader, top-down view when designing and enforcing security can
be a great help in determining policies for a network, even if they cannot be applied equally to all devices.

A frequently overlooked area in endpoint security is the hardening of endpoint firmware and boot processes. Most
security practices focus on securing devices when they are running and connected to the network. However,
threats that attack the firmware and boot processes of endpoint devices have been emerging. Hardening firmware
and boot processes is especially important for IoT devices, which lack many of the built-in protections that more
traditional desktops, laptops, and servers have integrated over the years to protect against malicious firmware
compromise.

Physically securing devices so that attackers do not have physical access is extremely important. It is much easier
to compromise a traditional computer system if you have physical access because many devices have an
administrator account reset procedure that requires only physical access to the device. Locking down the basic
input/output system (BIOS) and other boot-time systems can prevent these types of attacks from being
successful.

Firmware is the software that usually runs from a chip on the endpoint. This software is responsible for detecting
and reporting hardware connected to the device. After the firmware performs all the hardware checks, it assists in
loading the operating system.

Modern computers usually use either the legacy BIOS or the newer unified extensible firmware interface (UEFI). Both perform similar functions, but UEFI usually incorporates a graphical interface and more robust security features.

Understanding how your network endpoints load their operating systems, and how to secure that process against compromise, is important for preventing firmware malware attacks, in which code inserted into the firmware causes endpoints to load malicious software, or entirely new operating systems, that can then be used to compromise other devices. Restricting firmware so that it loads only approved software is one of the most important new features of UEFI over BIOS.

Choosing an OS is not a luxury security administrators usually have, but, if possible, it is always a good idea to
select and use an OS that is easy to manage and secure. Many OSs now have built-in security features that make
it easier to manage and enforce security policies. In addition, many network security devices can now allow
access based on OS type. Having a fixed list of trusted OSs can help you enforce overall network security by
allowing only known OS types and versions to access your networks. That way, if a firmware attack compromises
a device, which then attempts to connect to the network with an unknown OS, other security devices can deny the
access.

While BIOS and UEFI are specific to traditional computers and laptops, most endpoints use some sort of
bootloader and firmware to secure and load the OS. Understanding and ensuring these systems are locked down
is a fundamental step in endpoint security.

One of the major advantages of using laptops and cell phones for work is that they are portable. One frequent
concern about these devices is data security. If a laptop is stolen and the data is not encrypted, it is very easy for
the thief to extract useful information. In addition to being harmful to the individual, an unencrypted corporate
laptop can contain a wealth of useful information about the corporate security posture. Just viewing browsing
history and cached DNS queries on a computer can reveal sensitive network information and security procedures.
Fully securing and encrypting endpoints is a critical aspect of cybersecurity, especially for high-risk devices that
may contain a great deal of sensitive information.

The most common way to secure these devices is to use full disk encryption, or FDE. FDE is a software-based
solution where the disk is encrypted by the OS. At boot time, the UEFI loads the decryption information from the

70 Technical Introduction to Cybersecurity 1.0 Lesson Scripts


Fortinet Technologies Inc.
Endpoint Security Module Endpoint Hardening Techniques Lesson

OS. The cryptographic keys are usually stored in a trusted platform module (TPM) and protected by a password or
other authentication method. After the keys are accessed, the disk can be decrypted, and the OS can be loaded
normally. Because the entire disk is encrypted, if it is stolen, no useful information can be retrieved except by
attempting to brute force the drive encryption, which is very costly.

Another way to implement full disk encryption is to use a self-encrypting drive, or SED. An SED is a hard drive with
a built-in module that automatically handles the encryption and decryption of the contents of the hard drive using
instructions from the firmware and OS. Using an SED pushes the cryptographic effort onto the built-in module in
the hard drive, rather than the device CPU and software.

A final way to protect data on an endpoint is to use data loss prevention (DLP) software. This can detect if someone is trying to copy
sensitive information from a device or send it over the network. DLP can block or log the transaction for security.
Another common use of DLP is to prevent or limit the use of attachable drives, like USB flash drives or external
hard drives, to prevent the copying of large amounts of data. DLP can also be network based, where devices
inspect network traffic to alert administrators to keywords or other sensitive information being transmitted over
networks.
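As a rough illustration of how a network- or host-based DLP agent might flag sensitive data in outbound text, here is a minimal pattern-matching sketch. The pattern names and regular expressions are illustrative only; production DLP combines such patterns with validators, document fingerprints, and machine learning:

```python
import re

# Hypothetical detection patterns; real products use far richer detectors.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_outbound(text: str) -> list:
    """Return the names of sensitive-data patterns found in outbound text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# A DLP agent would block or log the transfer whenever the scan returns hits.
scan_outbound("SSN 123-45-6789, card 4111 1111 1111 1111")  # → ['ssn', 'credit_card']
```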

Many modern devices, like smartphones, automatically use full disk encryption, but on some devices this may be
an option that is disabled by default. Always check whether disk encryption and DLP are available on endpoints,
especially IoT devices, which may not have these features enabled by default.

In any environment, it is extremely important for administrators to be able to update, patch, and back up all
connected endpoints. The difficulty of maintaining reliable patching and backup schedules is usually related to the
sheer number of different devices and procedures required to perform updates and backups. Having a
standardized desktop, laptop, server, and smartphone model and manufacturer for a company can greatly
simplify the task of patch and update maintenance. However, this is not always viable because of the need to
support critical legacy equipment, and the rise of bring your own device, or BYOD, in work environments.

Keeping patches up-to-date is critical because identifying and closing potential vulnerabilities is a key step in
preventing a large-scale cybersecurity attack. Updating OSs, firmware, and vulnerable software programs and
applications is a simple and effective way to reduce overall risk. While not necessarily effective in preventing zero-day
attacks, having a fully patched and updated system can also help slow down and restrict the compromise of
systems using common, well-established malware and attack vectors. If your endpoint and network infrastructure
is up-to-date and healthy, a new, unknown attack method may be able to compromise a system, but further
infiltration may be hindered because no other tools in an attacker's toolkit will be effective in pivoting to other
systems or collecting and exfiltrating data.

In addition to maintaining, patching, and updating software, having a comprehensive backup solution for critical
endpoints can greatly assist in recovering from cyberattacks or accidents. You should back up critical endpoint
devices, like smartphones, laptops, servers, and databases frequently. If a device is compromised, you can then
collect forensic information and easily restore the device to the latest "clean" copy, with as little disruption as
possible.

Backing up IoT devices, like security cameras or smart locks, depends heavily on the manufacturer, and many
such devices do not have backup capability. In this case, having backup equipment that you can configure easily
to replace damaged, stolen, or compromised devices should be part of a comprehensive disaster recovery plan.
Having a regular backup schedule for all your devices, from computers to cameras, is one of the most effective
ways to mitigate a ransomware attack. If you have a current backup of your critical data, it is much easier to
restore and recover endpoints affected by ransomware.

You have completed the lesson.


Endpoint Monitoring Lesson

Welcome to the Endpoint Monitoring lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

The process of hardening endpoints is broken into several categories. This lesson focuses on the fourth category,
endpoint monitoring.

This section covers the monitoring of endpoint devices, either locally, through an endpoint protection platform
(EPP) client if one is available, or over the networks the devices are connected to, using specialized network intrusion
detection systems (IDS). It is also possible to implement endpoint detection and response (EDR) platforms that
can preemptively block new, undiscovered attacks and take immediate action against suspicious files and
programs.

To help in the administration of modern endpoints, many companies have created endpoint solutions to help
manage and protect various types of endpoints from cyberthreats. Most endpoint solutions support servers,
desktops, laptops, and smartphones, with additional plugins and support for the proliferation of the new, unknown,
and IoT devices.

The first endpoint security solution is the endpoint protection platform, or EPP. This developed from the need of
administrators to ensure servers and desktops are patched and have the appropriate antivirus software installed.
Modern EPP platforms can verify versions of software and firmware, scan the local system for viruses and
malware, and enforce data loss prevention and other company-defined security policies. EPP is usually viewed as
a defensive measure against malicious attacks, and helps administrators maintain uniform software updates
across the enterprise. EPP can also allow basic monitoring and visibility into systems to help administrators
identify out-of-date devices, and remotely patch and install software on devices.

Another endpoint solution is endpoint detection and response, or EDR. This is a more proactive security solution
that constantly scans a device to detect indicators of compromise, or IOC. If the EDR client detects a suspicious
connection, program, or behavior, it can block the action and send an alert. This can help identify and stop threats
like ransomware and zero-day attacks that may not have an established signature that would be detected by
traditional anti-malware systems.

EDR usually leverages artificial intelligence and large comprehensive databases of known attacks to predict and
recognize suspicious files and programs. In addition to detection and immediate response, EDR can trigger alerts
to other connected endpoints and allow other endpoints to immediately block the suspicious program or file, even
before it can be opened or executed, providing an immediate response against zero-day and other previously
unidentified attacks. EDR systems can also have tools to help security investigators gather data on new threats,
and quarantine systems that are suspected of compromise.

Both EPP and EDR solutions usually provide monitoring resources to allow security administrators to have top-
down visibility on the health of their endpoints, and allow a quick response in case of potential attacks or outages.
These are usually a key component in monitoring by a security operations center, or SOC. In addition, many EDR
solutions allow an immediate response by automating the process to either lock down devices, or execute
operations in response to a threat detected by other parts of the network. For example, a security analyst can
publish an updated malware detection rule based on a common vulnerabilities and exposures (CVE) alert to plug a
potential security risk before a patch can be made available by the device manufacturer.

One of the largest challenges in securing new devices is how to connect them to established networks securely.
Many companies now allow employees to use their own computers and phones under bring your own device (BYOD) policies. Because


these devices are usually not well-known and not managed by the company, allowing them to connect directly to a
corporate network is a large risk.

Having monitoring software and detection in place to identify and isolate unknown devices is a critical step in
properly onboarding and securing these devices. If possible, force all new and unknown devices onto an isolated
network until they can be secured and registered. You can use a physically separate network, VLAN, or a
dedicated Wi-Fi access point to accomplish this. Once a device is registered, usually by hostname, serial number,
MAC address, or static IP address, appropriate monitoring software can be installed, and the device moved to a
production environment as a known endpoint.
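The registration logic described above can be sketched as a simple lookup: a device whose MAC address is not in the registry is assigned to an isolated onboarding network until it is secured and registered. All hostnames, addresses, and VLAN numbers here are hypothetical:

```python
# Hypothetical device registry keyed by MAC address.
REGISTERED_DEVICES = {
    "aa:bb:cc:dd:ee:01": {"hostname": "laptop-042", "vlan": 10},
    "aa:bb:cc:dd:ee:02": {"hostname": "cam-lobby", "vlan": 20},
}

ONBOARDING_VLAN = 999  # isolated network for unknown or unregistered devices

def assign_vlan(mac: str) -> int:
    """Place registered devices on their production VLAN; quarantine the rest."""
    device = REGISTERED_DEVICES.get(mac.lower())
    return device["vlan"] if device else ONBOARDING_VLAN

assign_vlan("AA:BB:CC:DD:EE:01")  # → 10 (known endpoint, production VLAN)
assign_vlan("de:ad:be:ef:00:00")  # → 999 (unknown device, isolated)
```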

With hard-to-secure devices and unknown endpoints, you should enforce the principle of least privilege. If these
devices need access only to a specific internet or internal server, isolate them on a unique network and allow only
that specific connection through firewalls and routers. That way, if a device does not meet network compliance
because it is not running an appropriate endpoint security solution, it has as little access to other resources
as possible.

Once all known devices are registered, you can configure many network security devices, such as wireless
access points, switches, routers, firewalls, and other connectivity points to lock down and not allow unauthorized
devices to connect through the network. Disabling devices that are not monitored forces users with unknown
devices to register and prevents attackers from attempting to insert their own devices onto the network remotely or
by attempting to physically plug in a device locally.

You have completed this lesson.

Secure Data and Applications Module

Overview Lesson

Welcome to the Secure Data and Applications module.

Click Next to get started.

Sensitive information must be secured on the internet, and in applications. More applications and greater
complexity lead to increased opportunities for attacks.

The lessons in the Secure Data and Applications module will provide you with an essential understanding of the
following fundamentals:
- Understand the role of organizations in data privacy, especially with respect to compliance with laws and regulations. Compliance drives consistency across all verticals and ensures that those storing sensitive data are held accountable for its safety.
- Explain why protecting data is an ongoing effort throughout the data lifecycle. Like other commodities, data has a lifecycle, and those managing it are responsible for its protection at each stage. Data traverses a continuum from initial generation to eventual archival or deletion.
- Compare different content filters. Content filters help manage data by restricting access to sensitive or harmful content.
- And finally, apply techniques for application hardening. Application hardening is the process of protecting an application by eliminating vulnerabilities and increasing layers of security.
Proceed to the first lesson to get started.


Data Protection Lesson

Welcome to the Data Protection lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Protecting sensitive data includes security and privacy safeguards for data at rest, data in use, and data in transit,
to ensure the confidentiality, availability, integrity, and non-repudiation of data assets. Therefore, data protection is
a combination of data privacy and data security.

Click the shaded icons for more information.

Data security refers to:


- Maintaining the integrity of digital and physical assets. It includes the tactics and techniques used to protect against unauthorized access and exploitation of sensitive data.
- Ensuring the integrity of data throughout the entire data lifecycle. This also includes protecting personal data from unauthorized third-party access and malicious attacks.
- Properly authorizing data use and access. This is achieved by ensuring that only those with proper authority have access to sensitive data.

Data security also includes:
- Data sovereignty. This means that data is subject to the laws and governance structures of the nation where it is collected.
- Data privacy. Overall, data protection is a combination of both data security and data privacy.
Even the most secure environments can be undermined (often inadvertently) by a single person. So, everybody
can play a critical role in defending against internal or external data security breaches.

How important is data protection?

In the first half of 2021, data breaches resulted in 18.8 billion records being exposed. The average cost of a data
breach in 2021 was $4.24 million, which was an increase from $3.86 million in 2020.

Beyond the financial risk to organizations, companies risk the loss of intellectual property and proprietary data, the
loss of public trust, and loss of customer trust as the result of a data breach. Legal risks can also be involved,
including the potential investigations performed by the information commissioner's office and even the police. In
the case of the payment card industry, data breaches that include a lack of compliance with data protection
standards can result in lawsuits, loss of contracts, and ultimately loss of business.

There are numerous data types that must be secured. Here is a list of some of the most common data types:
- Controlled unclassified information (CUI) is government-created or government-owned information.
- Personally identifiable information (PII) can be public, private, or restricted information, such as addresses or biometric data.
- Protected health information is information related to an individual's health care and status.
- Payment card industry data is related to the administration of, and responsibility for, electronic payments.
- Sensitive institutional data is protected according to legal, regulatory, administrative, and contractual requirements.
- IT security information refers to any sensitive data within a protected network.


- Export control records include data related to exported items, especially export restrictions, like those related to the export of weapons.
- Student education records are records maintained by an educational agency or an institution.
As the amount of data that travels through and is stored by organizations explodes, the following risks are also
increasing:
- Accidental data exposure. For example, APIs allow two applications to interact in the cloud without any user action. The use of insecure external-facing APIs can provide unauthorized access to sensitive data.
- The unintentional sharing of sensitive information. This can occur through phishing attacks, in which attackers send a fraudulent message to trick the receiver into sharing information.
- Infection by malicious software. In a more invasive manner, attackers can propagate malware, ransomware, doxware, or leakware to compromise the confidentiality, integrity, and availability of data.
Click the icons for more information.

Threats can originate internally. Some examples include:


- An employee inadvertently moving restricted or sensitive data to the cloud, or providing unauthorized access to sensitive data.
- Legitimate credentials being stolen, and the corresponding account being compromised.
- A malicious employee intentionally stealing confidential company data for fraud, sabotage, espionage, revenge, or extortion.
Risks are present throughout the data lifecycle, which consists of five main stages: creation, storage, usage,
archiving, and destruction. During the data lifecycle, data is moving between multiple states, meaning it is
constantly moving between rest, transit, and use.

Now, take a look at this scenario involving data storage and potential risks involved. You have been asked to
create a document that lists customer names, email addresses, social security numbers, and dates of birth.

You compile all the required information securely from the database and add it to your spreadsheet.

You save a copy of your spreadsheet both to your local drive and the shared network drive.

Those who require access to the document use their credentials to log in to the shared drive. However, a new
person starts at the company and does not have immediate access to the shared drive, so you email them a copy
of the document. After you send the file, there is now a copy of the document in the new employee's local drive
and in the email chain.

At this stage, there are at least four copies of the document containing personally identifiable information (PII)
stored in various locations: in a shared drive, as a local copy on your computer, as a local copy on the computer of
the new employee, and as copies within the email archive. Making and sending additional copies of documents,
especially those containing sensitive data types, creates additional risks and leaves a company more susceptible
to cyberattack.

Sometime later, you are instructed to delete any documents that are not in use and contain any PII. You move the
file from your local drive to the recycle bin and delete the file from your shared drive. However, while you may
assume that the document has been properly destroyed, there are still copies that exist on the new employee's
local drive and in the email archive. This means that the PII in these additional copies are at risk of being
compromised.

In this specific case, it is recommended that you avoid sending sensitive documents through email and instead,
send links to secure, shared drives. Shared network drives with access controls in place are more secure and limit
the ability for the document to be copied.


Other best practices for a company's data lifecycle management include following naming conventions, backing
up data according to policy, archiving according to policy, deleting data according to guidelines, and following data
management policy.

Other general tactics and techniques are used to protect data against unauthorized access and exploitation. Click
the tabs to learn more information.

Training provides a critical level of defense because it helps everybody understand the importance of data
protection and learn to become a human firewall against data theft. It also ensures that the data usage policies are
clearly understood and applied. Through training, employees become aware of the importance of creating strong
passwords, regularly installing updates and patches, using multifactor authentication whenever possible, and
controlling access to their sensitive data.

Data obfuscation or masking can minimize the value of sensitive data to unauthorized intruders. It also
differentiates information availability according to authorization levels.
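A minimal sketch of data masking, assuming US-style social security numbers and simple email addresses. The function names and formats are illustrative, not from any particular product:

```python
def mask_ssn(ssn: str) -> str:
    """Reveal only the last four digits of an NNN-NN-NNNN value."""
    return "***-**-" + ssn[-4:]

def mask_email(address: str) -> str:
    """Keep the first character and the domain; hide the rest of the local part."""
    local, _, domain = address.partition("@")
    return local[:1] + "***@" + domain

mask_ssn("123-45-6789")          # → '***-**-6789'
mask_email("alice@example.com")  # → 'a***@example.com'
```

A lower-privilege user might see only the masked values, while a user with full authorization sees the originals.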

Encryption is another way to enhance the confidentiality of data. It can also be used to provide data integrity. For
example, adding a certificate to the data can provide data integrity by digitally signing data to validate that the data
was not changed.
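Certificate-based signing relies on public-key cryptography; as a simpler, self-contained illustration of the same integrity idea, here is a keyed-hash (HMAC) check. The key is hard-coded purely for demonstration; in practice keys live in a key store or hardware module, never in source code:

```python
import hashlib
import hmac

KEY = b"demo-shared-secret"  # demonstration only; never hard-code real keys

def sign(data: bytes) -> str:
    """Produce a keyed integrity tag (HMAC-SHA256) for the data."""
    return hmac.new(KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(data), tag)

tag = sign(b"report contents")
verify(b"report contents", tag)    # → True: data is unchanged
verify(b"tampered contents", tag)  # → False: any modification is detected
```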

Data resiliency is important for availability and business continuity. For example, regular backups or storage high
availability allow an organization's data to remain available and accessible, even if the organization
experiences unexpected data corruption or business disruptions such as cyberattacks.

At the end of a data lifecycle, destruction must be performed with the correct procedure, like erasure, which
overwrites the existing data with meaningless, pseudo-random patterns to destroy it completely.
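A simplified sketch of that overwrite-based erasure is below. This is illustrative only: on SSDs and on journaling or copy-on-write file systems, overwriting in place does not guarantee the old blocks are destroyed, and real sanitization follows standards such as NIST SP 800-88:

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with pseudo-random bytes, then delete it.

    Toy sketch of software erasure; not a substitute for media
    sanitization procedures on modern storage hardware.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # meaningless pseudo-random pattern
            f.flush()
            os.fsync(f.fileno())       # push each pass to the storage device
    os.remove(path)
```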

For automatic data protection, data loss prevention (DLP) allows the detection and prevention of data breaches,
exfiltration, or unwanted destruction of sensitive data.

Data loss prevention (DLP) is an interesting solution in data protection. DLP helps organizations to:
- Identify sensitive information across multiple on-premises and cloud-based systems.
- Prevent the accidental sharing of data.
- Audit and monitor data movements.
- Educate users on how to stay compliant.
DLP reduces risks while improving regulatory compliance.

For complete data protection while in use, in transit, or at rest, you can implement DLP in these different parts of a
network:
- On the endpoint, usually as a software-based client on a laptop or workstation.
- At the network level, usually on the perimeter, to detect data in transit.
- At the storage location, meaning on the data-center servers, to inspect data at rest.
- In the cloud, as software as a service (SaaS) that protects data stored in cloud services.
You have completed the lesson.


Data Privacy Lesson

Welcome to the Data Privacy lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Data privacy refers to the proper handling of personal data and other sensitive data types.

Data privacy also includes the public's expectation of privacy, and the right to have control over their own personal
data. Organizations have an obligation to balance the processing of personal data with ensuring that individual
privacy preferences are respected and protected.

Overall, data privacy focuses mainly on the processes organizations use to collect, process, share, archive, and
delete sensitive data; policies for acceptable levels of risk; as well as compliance with data protection laws and
regulations.

There are multiple reasons why data privacy is so important.

Data is extremely valuable. It is an asset that, once collected, must be protected.

Data privacy laws exist to ensure individuals have control over their data.

Organizations must be transparent and effectively communicate their data management strategy, including what
data they collect; why they collect it; who can access the data they collect; and how they will store it, manage it,
delete it, and share it.

Governments across the world recognize the importance of an individual's right to privacy. The protection of this right
is reflected in many data privacy and protection laws.

The collection of private, personal data has become easier with advancements in technology. It is vital that
companies protect this data.

Non-compliance has many consequences such as fines, lawsuits, reputational damage, and loss of customer
loyalty.

The world is focused on data privacy, and for good reason. Privacy risk is the likelihood of experiencing problems
resulting from data processing. That is why data privacy and security go together. Data is extremely valuable;
therefore, data is an asset that must be protected.

Data breaches are extremely costly, both to a company's reputation and to its finances. In the event of a data
breach, personally identifiable customer data has a high cost of about $150 per record.

Organizations have an incredibly important role to play when it comes to data privacy:

Proprietary research, HR resources, and financial information must be protected from exposure in order to
maintain operations. To protect private data, organizations must:
- Identify sensitive data assets and classify the impact severity of such assets.
- Identify authorized roles, users, and policies for the retention of private data.
- Collect and report on data asset compromise.
- Develop timely response procedures that detail how to notify stakeholders of data spillage or breaches, and how to effectively recover from a compromise.
- Implement cryptographic measures for data obfuscation.


All information assets should be assigned a data classification level, such as Restricted, Confidential, Internal, or
Public. This is based on the appropriate audience for the information because the sensitivity level guides the
selection of protective measures to secure the information, and data handling controls differ depending on the
sensitivity level of the data.

All policies pertaining to the classification of data and the handling of data within each data classification level
must be followed.

Click the buttons for more information.

More generally, data privacy laws and regulations are in place to ensure that individuals have control over their
data. For example, employees have an obligation to understand the data privacy regulations followed by their
organizations and to adhere to all policies and procedures related to data privacy. Even the strictest data privacy
regulations can be undermined, often inadvertently, by a single employee.

Across the world, governments also recognize the importance of an individual's right to privacy. The protection of
this right has been reflected in many data privacy and protection laws, including GDPR, ISO 27701, NIST SP 800-53,
SOC 2, PIPEDA, CCPA, HIPAA, and PCI DSS.

Click the underlined words for more information.

You have completed the lesson.


Secure Email Gateway Lesson

Hello! In this lesson, we will explain what secure email gateway is and how it has evolved.

Email was one of the first activities people did when the world went online in the 1990s. It took very little bandwidth
because technology allowed for very little. It was also easy, fast, and didn't even cost a postage stamp! It was so
easy and inexpensive that it became a means to get a message to many people at little or no cost.

Some of those mass mailings came from legitimate businesses and were equivalent to advertising flyers sent by
post, but other mass mailings were sent by more nefarious characters. This was the beginning of spam—the act of
sending irrelevant and unsolicited messages on the internet to a large number of recipients.

Email allowed individuals to send and receive messages with little verification or accountability, and therefore
offered anonymity. Initially, people viewed spam more as a nuisance than a threat. But in 1996, America Online (AOL)
coined the term phishing to describe the fraudulent practice of sending emails purporting to be from a reputable
source in order to induce individuals to reveal personal information.

For example, some of you may have met Prince Solomon of Abadodo, or another wily character, who wanted to
share their wealth with you. Other bad actors registered domain names that were strikingly close to the names of
legitimate businesses or organizations and masqueraded as that business in an email, coaxing you to click a link
or an attachment that contained malware.

The phishing technique relied on human naivety, carelessness, or distraction for it to work. One of the first
responses from businesses was to educate employees about phishing tactics. However, while education may
have reduced phishing exploits, it did not eliminate the threat. Something had to be done at the mail server and
Internet Service Provider (ISP) levels. In response, businesses installed spam filters on mail servers to stop spam
and phishing emails.

Spam filters rely on identifying specific words or patterns in the headers or bodies of messages. To use a simple
example, the word cash is common to email spam. If an IT professional added the word cash to the spam filter on
their company mail server, the filter would eliminate any email that contained that word.

ISPs also deployed spam filters. In addition to filtering, ISPs turned to strengthening authentication methods. By
the end of the first decade of the twenty-first century, ISPs began to implement Sender Policy Framework (SPF),
which slowly took shape during that decade but wasn't proposed as a standard until 2014.

SPF is an email authentication method that detects bogus sender addresses and emails.
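For example, a domain owner publishes an SPF policy as a DNS TXT record, and receiving mail servers check whether the connecting IP address is authorized to send mail for that domain. The domain and address range below are illustrative:

```
example.com.   IN  TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"
```

Here, mail is accepted from the domain's MX hosts and from the 192.0.2.0/24 range, and the `-all` qualifier tells receivers to fail everything else.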

However, for every defensive measure implemented by legitimate businesses, organizations, and ISPs, the bad
actors introduced a countermeasure that circumvented the latest defense.

To return to our simple example, spammers could easily bypass our filtered word, cash, by rendering it as c@sh or
some other variant. And while filters became more sophisticated in detecting spam patterns, they were too static
and easy to outsmart.
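This cat-and-mouse dynamic can be sketched in a few lines: a static keyword filter, the trivial c@sh evasion, and a normalization step that restores detection. The blocklist and substitution table are illustrative:

```python
BLOCKLIST = {"cash"}

def is_spam(message: str) -> bool:
    """Naive static filter: flag a message containing any blocklisted word."""
    return any(word.strip(".,!?") in BLOCKLIST
               for word in message.lower().split())

is_spam("Win big cash now!")   # → True: caught by the static filter
is_spam("Win big c@sh now!")   # → False: trivially evaded

# Normalizing common character substitutions before filtering
# counters this particular evasion.
SUBSTITUTIONS = str.maketrans({"@": "a", "0": "o", "$": "s"})

def is_spam_normalized(message: str) -> bool:
    return is_spam(message.translate(SUBSTITUTIONS))

is_spam_normalized("Win big c@sh now!")  # → True: evasion defeated
```

Of course, spammers then vary the obfuscation again, which is why static filters alone proved too easy to outsmart.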

Spamming and phishing are just too lucrative for the bad actors to easily give up. In fact, the number of phishing
attacks has grown enormously since the turn of the century. In 2004, 176 unique phishing attacks were recorded.
By 2012, this number grew to 28,000. And no wonder; phishing was lucrative. Between lost money and damages,
the attacks caused a $500 million loss to businesses and individuals. More recently, during the first quarter of
2020, the Anti-Phishing Working Group (APWG) recorded 165,772 detected phishing sites.

Better defense was needed. Secure email gateways (SEGs) arose to provide more rigorous defense. In addition
to the spam filter, SEGs added antivirus scanners, threat emulation, and sandboxing to detect malicious
attachments and links in real time. Even if employee education and the spam filter failed, one of these other tools
could detect and neutralize the threat. However, the number of false positives, and the sheer volume of attacks,
overwhelmed the security teams, who became bogged down in manual remediation.

SEGs continue to evolve as threats evolve.

Today, greater automation and machine learning are built into SEGs, alleviating the demands placed on
security operations centers (SOCs). Data loss prevention (DLP) is also available to detect and stop the egress of
sensitive data.

In some cases, a SEG is integrated with other network security devices, such as edge and segmentation firewalls.
These devices collectively form an integrated fabric of security that security professionals can centrally manage
from a single pane of glass, and continually update using threat intelligence, as new methods and contagions
become known.

Fortinet has a SEG, called FortiMail. FortiMail includes all of the features discussed here, plus it integrates with
firewalls and sandboxing solutions. You can centrally manage all of these devices using FortiManager, and
update their threat intelligence using FortiGuard Labs, which is the global threat intelligence and research center
at Fortinet.

Thank you for your time, and please remember to take the quiz that follows this lesson.

Technical Introduction to Cybersecurity 1.0 Lesson Scripts 81


Fortinet Technologies Inc.

WAF Lesson

Hello! In this lesson, we will talk about web application firewalls (WAFs) and how they have evolved over time.
What is a WAF and how does it differ from the traditional edge firewall?

A WAF is an appliance or software that monitors HTTP/HTTPS traffic and can block malicious traffic to and from a
web application. It differs from a traditional edge firewall in that it targets the content from specific web applications
and at the application level, while edge firewalls fashion secure gateways between the local area network and
outside servers at the network level. Specifically, by inspecting HTTP traffic, a WAF can stop attacks originating
from web application security flaws, such as SQL injection, cross-site scripting, file inclusion, and security
misconfigurations. Given that much of our time, both at work and at home, is spent interfacing with web
applications and web servers, the WAF becomes a vital component in our arsenal against bad actors and their
malicious online schemes.

The ancestor of the WAF is the application firewall that was first developed in the 1990s. Although largely a
network-based firewall, it could target some applications or protocols, such as File Transfer Protocol (FTP) and
remote shell (RSH), which is a command line computer program. The debut of the World Wide Web in 1991 was
the big bang of the internet universe, which has been expanding at an accelerated pace ever since. The very
accessibility and openness of the internet permitted anyone to search and explore, but it also permitted bad actors
to use it for their own sordid purposes.

As more people and organizations became victim to espionage, theft, and other crimes, developing a defense
against HTTP-based cyberattacks became a foremost priority. WAFs couldn't rely on traditional edge firewall
methods, which based decisions on blocklists of network addresses and blocked certain protocols and port
numbers. Because all web applications used HTTP and either port 80 or 443, this approach wasn't very useful.

Let's look at a common attack method called SQL injection. Imagine you run an online business and customers
and partners log on to your site to buy products and services. A typical login page asks for a user ID and
password. An individual, let's call him John Smith, types his user ID—jsmith—and his password. This information
is verified on a backend database. If the password is true, John Smith gets in, but if the password is false, he does
not. Now, a bad actor probably doesn't know John's password. He could always guess, but that might take a very
long time. Instead, for the password, the bad actor types "abc123 or 2+2=4". When John's credentials are sent
back to the database for verification, it is likely that the password "abc123" is false; however, the expression
2+2=4 is true. Due to this flaw, the bad actor was able to break into some sites. The first generation of WAFs used
blocklists and signature-based HTTP attributes to alert the firewall of an attack, so a SQL injection attack, like this,
was no longer successful.
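The flaw, and the standard fix inside the application itself, can be sketched with an in-memory SQLite database. The table, credentials, and injected string are hypothetical; the always-true clause plays the role of the lesson's 2+2=4, and a WAF adds a second layer by recognizing such patterns in HTTP traffic before they reach the database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('jsmith', 'S3cret!')")

def login_vulnerable(user: str, pwd: str) -> bool:
    # DANGEROUS: attacker-controlled text is pasted directly into the SQL
    query = ("SELECT COUNT(*) FROM users WHERE username = '%s' "
             "AND password = '%s'" % (user, pwd))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(user: str, pwd: str) -> bool:
    # Parameterized query: the driver treats input as data, never as SQL
    row = conn.execute(
        "SELECT COUNT(*) FROM users WHERE username = ? AND password = ?",
        (user, pwd)).fetchone()
    return row[0] > 0

injected = "wrong' OR 2+2=4 --"   # always-true clause, like the lesson's 2+2=4
print(login_vulnerable("jsmith", injected))  # True: the attacker gets in
print(login_safe("jsmith", injected))        # False: injection neutralized
```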

With internet popularity soaring, soon the sheer number of web applications and their growing complexity made
the signature-based approach obsolete. As well, the number of false positives—alerts of attacks that were in fact
legitimate connections—grew to proportions beyond the capacity of IT security teams. In the next generation,
WAFs became more intelligent—there was an element of learning by the firewall. The WAF would learn the
behavior of the application to create a baseline it could use to evaluate whether attempts to access the
applications were normal or irregular, and therefore suspect. It also introduced session monitoring and heuristics,
which permitted the firewall to detect variants of known signatures. This was a step forward, but because
application learning was overseen by IT security, defense could not keep up with the ever-expanding number of
mutations of existing methods or new exploits. Moreover, there was no defense against zero-day exploits, which
exploited an unknown weakness in the code of an application.
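The idea of a learned baseline can be illustrated with a deliberately tiny statistical sketch. The traffic sample and the three-standard-deviation threshold are invented, and a real WAF models far more attributes than the length of one form field:

```python
import statistics

# Invented "normal traffic" sample: observed lengths of a login field
normal_lengths = [8, 9, 7, 10, 8, 9]
mean = statistics.mean(normal_lengths)
sd = statistics.pstdev(normal_lengths)

def suspicious(value: str, k: float = 3.0) -> bool:
    # Flag requests that fall far outside the learned baseline
    return abs(len(value) - mean) > k * sd

print(suspicious("jsmith"))                   # False: looks like normal use
print(suspicious("jsmith' OR 2+2=4 --" * 5))  # True: far outside the baseline
```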

The logical next step in WAF development was machine learning unencumbered by human supervision. Now
behavior analysis could be done at machine speed and could adapt to the ever-changing attributes of the threat.
Other security features were added to the firewall. Among these assets were distributed denial of service
(DDoS) defense, IP reputation, antivirus, and data loss prevention (DLP). The firewall could stop any action that
violated acceptable HTTP behavior. It could identify the user and correlate the action they were attempting to do
with their permissions, and stop any action that went beyond the scope of their role. The WAF was also designed
to share information and collaborate with other security devices in the network, such as other firewalls and
sandboxes. This served to integrate the firewall into an interlocking collective defence as opposed to working
independently. And sandboxing allowed suspicious material to be tested safely in isolation from the network.
Zero-day attacks could be exposed and quarantined in these sandbox environments, and their signatures could
be shared with other devices in the network. In addition, these new discoveries could be uploaded to a threat
intelligence center on the internet, where they could be communicated to other networks.

Fortinet has a WAF named FortiWeb. FortiWeb can be integrated with FortiGate and FortiSandbox. FortiGuard
Labs is Fortinet's threat intelligence center, which can provide vital updates to FortiWeb and to other Fortinet
Security Fabric products.

Thank you for your time, and please remember to take the quiz that follows this lesson.


Content Filters Lesson

Welcome to the Content Filters lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Content filtering is a process to screen or restrict access to objectionable emails, webpages, executables, and
other suspicious items. It is a common security measure that is often built into internet firewalls and blocks content
that contains harmful, illegal, or inappropriate information. For example, parents often use web filtering to protect
their children from improper or graphic material.

Content filters are used in different ways to block access to different types of materials. The common types of
content filters include search engine filters, email filters, DNS-based content filters, and web filters. Click the
different tabs to learn more about each type.

Search engine filters rate web content according to its text and images. Text and images hold a specific weight,
which is measured against a classification set. The weights vary based on whether the classification level is set to
off, moderate, or strict. Machine learning helps define the weights to avoid possible false positives. Depending on
the resulting value and the size of the document, the content can then be classified as safe, moderate,
inappropriate, or rejected from a strict point of view. The search engine then displays content only if it meets the
classification level that is set.
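A minimal sketch of this weight-based classification follows. The keyword weights and per-level thresholds are invented stand-ins; real engines learn these values with machine learning and also score images:

```python
# Hypothetical keyword weights and thresholds; real filters learn these
WEIGHTS = {"violence": 5, "gambling": 3, "casino": 3}
THRESHOLDS = {"off": float("inf"), "moderate": 7, "strict": 3}

def classify(text: str, level: str) -> str:
    # Sum the weights of every flagged word found in the text
    score = sum(w for word, w in WEIGHTS.items() if word in text.lower())
    return "blocked" if score >= THRESHOLDS[level] else "shown"

print(classify("Online casino and gambling tips", "strict"))    # blocked (score 6)
print(classify("Online casino and gambling tips", "moderate"))  # shown (6 < 7)
print(classify("History of card games", "strict"))              # shown (score 0)
```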

Email content filters check the headers of incoming email against real-time blackhole lists. The raw data of the body
is scanned for inappropriate content, providing a spam confidence level that is similar to search engine weights.
Email content filters also check attachments, identify keywords or potential unauthorized types of files, like
executables, and complete the email content filtering. This enables users to block, quarantine, or reject malicious
emails, including phishing, while accepting appropriate incoming emails.

Click the icons for more information.

DNS-based content filters check a website against blocklists while its domain is being resolved through DNS
servers. If the website is not allowed, the browser is redirected to a replacement message announcing that the
page is blocked. Alternatively, a company can define an allowlist, including all company approved websites. DNS-
based content filtering would then block all other websites.
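Both modes can be sketched as a decision made during resolution. The domains and block-page address below are invented (.example names and a TEST-NET address); a production filter would use vendor-maintained feeds and forward allowed queries to an upstream resolver:

```python
# Hypothetical domain lists; real deployments use curated threat feeds
BLOCKLIST = {"malware.example", "phish.example"}
ALLOWLIST = {"intranet.example", "docs.example"}
BLOCK_PAGE = "203.0.113.99"   # address of the "this page is blocked" server

def filtered_resolve(domain: str, mode: str = "blocklist") -> str:
    if mode == "blocklist":
        blocked = domain in BLOCKLIST
    else:  # allowlist mode: everything not explicitly approved is blocked
        blocked = domain not in ALLOWLIST
    # Placeholder result for an allowed lookup instead of a real DNS query
    return BLOCK_PAGE if blocked else "resolve-upstream"

print(filtered_resolve("phish.example"))                       # 203.0.113.99
print(filtered_resolve("shopping.example", mode="allowlist"))  # 203.0.113.99
print(filtered_resolve("docs.example", mode="allowlist"))      # resolve-upstream
```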

Web filters are similar to DNS-based content filters with an additional function that categorizes websites. For
example, a requirement for schools in the United States is to adhere to the Children's Internet Protection Act
(CIPA), a bill that addresses concerns about children's access to obscene or harmful content over the internet,
such as pornography. Therefore, elementary and high schools use web filters to block material deemed harmful to
minors. All websites and their contents are rated through machine learning, so that the access to a specific URL is
allowed or blocked according to its category and the user's profile.

Content filters allow organizations to block access to sites known to carry malware, protecting their data and users
from malicious activity.

Content filters can also identify phishing or an exploit kit, blocking access before it triggers a malicious
download. This is important as cybercriminals increasingly develop new, more sophisticated ways to illegally
access networks and steal data.

Limiting users' access to specific, work-related internet content can increase bandwidth efficiency and enable
faster connections for all employees.


Organizations can use web filtering on websites like social media and online shopping to increase staff
productivity.

Click the underlined terms for more information.

You have completed the lesson.


Application Hardening Techniques Lesson

Welcome to the Application Hardening Techniques lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

The abundance of websites and app stores available in the cloud makes it easy for people to download multiple
applications onto computers and mobile devices. However, it is possible for bad actors to maliciously tamper with
apps and use them as an attack vector, even if users download the apps from official and reputable sites.
Therefore, it is important to securely configure and maintain apps to protect the corresponding data. This security
process is called application hardening.

Hardening is a recommended security requirement that is defined by various cybersecurity regulations and
standards. Hardening helps minimize vulnerabilities and reduces the attack surface, thereby protecting the
integrity of the application and safeguarding sensitive data.

Organizations aim to ensure the security of their workplace by reinforcing the use of company-approved tools and
applications, and by training employees on secure workplace behaviors.

Many applications are now web-based and users must access them through a web browser. It is typically the
responsibility of a company's IT team to recommend a stable web browser that the employee's operating system
(like Windows or Linux) supports, and that meets the company's security standards. Following these company
guidelines, employees are then responsible for installing and configuring the approved web browser on all of their
work devices.

Employees must also correctly configure the web browser security options. Among these options is the
management of cookies, including flash cookies, which are also called local shared objects. A cookie tracks an
individual's browsing history or authentication credentials. To correctly manage cookies, users must clear all
cookies at the end of a browsing session and allow only required cookies. Users can configure cookies, along with
the cache and browser history, through the advanced security options of the browser.

Similarly, employees need to manage add-ons, which may contain sensitive information and expose browsing
data, such as login credentials. Users should install only official add-ons that their company approves.

Employees should also use pop-up blockers to secure applications. Pop-ups are usually unwanted
advertisements that appear on screen while a user is browsing. They are usually meant to be enticing, so users
will click them. In many instances, pop-ups contain malware, which downloads viruses or ransomware when
clicked. Employees should turn on pop-up blockers and also create an allowlist for pop-ups related to trusted
sites.

Employees must treat active content, such as ActiveX controls, Java applets, and Flash, with similar care.
Downloading these types of active content can potentially result in downloading malicious code that can be
injected into a device.

Click the underlined terms for more information.

At a company level, a recommended proxy can also help to further protect against attackers. The proxy can filter
restricted sites and increase bandwidth usage through caching and content filtering.

You can implement the previously mentioned browsing policies through a configuration file that is applied to all
users at a company. Applications and extensions must have the option for automatic updates and patch
maintenance to continuously fix vulnerabilities and security flaws. You can apply the configuration file through a
patch management system, or through an individual user as a recommendation for application hardening.

Individual user behavior plays a role in application hardening. The first behavior recommendation is to use
updated antivirus software to scan and block malicious code, like worms, Trojan horses, or spyware.

Users can use an application allowlist that includes supported and recommended third-party applications. These
lists may also include user account controls to prevent unauthorized access.
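One common way to enforce such an allowlist is by cryptographic hash rather than by file name, since names are easy to spoof while a tampered binary changes its digest. The approved bytes below are made-up stand-ins for real executables:

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests of company-approved executables
APPROVED_DIGESTS = {hashlib.sha256(b"approved-app-v1.0").hexdigest()}

def may_run(executable_bytes: bytes) -> bool:
    # A renamed or tampered binary produces a different digest and is refused
    return hashlib.sha256(executable_bytes).hexdigest() in APPROVED_DIGESTS

print(may_run(b"approved-app-v1.0"))            # True
print(may_run(b"approved-app-v1.0 + malware"))  # False
```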

Individuals should also eliminate or disable obsolete or unused applications to avoid backdoor openings for
cyberattacks.

Finally, one of the best measures a company can enforce is appropriate user behavior. Training
users in security awareness is the most valuable hardening technique because each user becomes a human
firewall. Training helps users create secure, strong passwords, recognize social engineering attacks, and
take appropriate action.

Application hardening helps companies to provide a high level of security and confidence at the same time. It also
improves application performance because it lowers the risk of attackers slowing them down or, even worse,
blocking them.

While implementing application hardening is costly and takes a lot of effort, the return on investment is greater.
For example, application hardening can block an attacker from installing ransomware and stealing highly sensitive
patient data from a hospital. Ultimately, application hardening provides a safer environment, inhibiting the efforts
of attackers, and allowing easier security compliance and audits.

You have completed the lesson.

Cloud Security and Virtualization Module

Overview Lesson

Welcome to the Cloud Security and Virtualization Overview.

Click Next to get started.

Servers are no longer limited to a piece of hardware hidden in the basement closet of a corporate headquarters.
With the rise of virtualization and the ability to run multiple virtual computers on a single piece of hardware, the
number of distinct devices that can be running and connected to each other, both locally and over the internet, has
increased exponentially. Now, many common services and applications exist only virtually, and can connect
solely over the internet. The rise of cloud computing has many advantages, especially in areas of availability,
scalability, and cost, but it also has some serious drawbacks for security.

Because of the multiplication of devices in the cloud, administrators have much more difficulty maintaining the
security of all of their online data, applications, and machines, which is important to ensuring that the compromise of
one machine does not endanger the entire cloud environment. Securing both virtual and real machines, servers,
and applications hosted in the cloud requires a specialized set of security tools to protect them from any potential
threats.

After completing the lessons in this module, you will be able to:
l Explain the unique security risks that are encountered by virtual and cloud-based machines.
l List the types of security services that can be hosted in the cloud.
l Explain how cloud-based security devices can protect online applications and devices.
Proceed to the first lesson to get started.


Cloud Service Models Lesson

Hello! In this lesson, we explore the mysterious "cloud", what it really is, how it came to be, and some of the
security issues that we encounter there.

First, let's de-mystify the cloud. It's amusing that "the cloud" has extremely high public name recognition, but few
understand what it really is.

Before the cloud, organizations purchased their own computer systems to run the application software needed to
run the business. These computer systems were located in the organization's facilities, and managed by teams of
experts. While not always the case, often there was more than one computer system (or server) per major
application.

This setup was expensive because of the capital cost of the computer hardware and labor cost of the resident
experts who kept it all running; but it was worth it. These systems raised overall productivity and helped maintain
competitive advantage.

Not long ago, someone noticed that of all their computer systems, only a few were completely busy at any given
moment in time. Most were idle, waiting for the next transaction to come in. Bottom line: there were many wasted
resources.

So, a new way of using server hardware was developed called virtualization, which actually comes from old
technology in mainframe computing that lets a single server run the operating systems and applications from
multiple servers simultaneously. Virtualization consolidates workloads onto fewer servers, increasing their
utilization and saving money.

It wasn't long until most datacenters were transformed from rows of computer hardware dedicated to specific
applications, into a collection—or pool—of general hardware resources running virtualized applications. It was just
the smart thing to do.

Along come some ingenious entrepreneurs who build enormous datacenters, filled with generalized computer
hardware, and offer to rent out portions of this infrastructure so that their customers can run their virtualized
applications there, instead of on their own hardware. With that, the cloud is born.

This type of cloud computing is called Infrastructure as a Service or IaaS. IaaS provides organizations with
networking, storage, physical servers, and virtualization, while customers must still provide the operating
systems, middleware, data, and applications. Middleware is software that acts as a bridge between the OS and
applications. An organization uses this type of service when demand for its services or products varies, such as
during seasonal holidays when workloads on systems increase. Examples of this type of service provider are
Amazon Web Services, Microsoft Azure, and Google Compute Engine.

There are other types of clouds as well. For example, service providers rent cloud-based platforms for software
developers to develop and deliver applications. This service, named Platform as a Service or PaaS, provides the
OS and middleware in addition to the elements provided by IaaS. This service makes it easier, more efficient, and
cheaper for organizations to build, test, and deploy applications.

A third example is Software as a Service or SaaS. In this cloud service, the software is hosted by a third party.
Typically, the end user connects to the application using their browser. Common examples of applications
available through SaaS are Google Mail, Salesforce, DocuSign, and Netflix.

Either way, moving the cost of having applications run on expensive, company-owned hardware capital assets to
a model where the price is a recurring operating cost is very attractive to most organizations.

Now let's look at what this means to security.


When applications are hosted in a company's own datacenter, the security picture is straightforward: you put the
appropriate security technology at the right locations to address the specific security concerns.

Providing security for the cloud, however, is not so clear. You could say it's a bit cloudy. Bottom line: security is a
shared responsibility between the cloud provider and the customer utilizing the cloud services.

Designed in layers, security includes both the physical components and logical components.

The cloud infrastructure provided by IaaS vendors is protected in various ways. From an availability point of view,
the infrastructure is designed by the vendor to be highly available, and it follows that the infrastructure's uptime is
the responsibility of the vendor. From a security point of view, the vendor is only responsible for securing the
infrastructure it provides.

As a customer, when you install one or more virtualized applications in the vendor's cloud infrastructure, you are
responsible for securing the access, the network traffic, and the data applications.

Now, most vendors supply some form of security tools so that various parts of the customer's cloud application
environment can be secured. However, these tools can pose a few problems.

First, these tools tend to provide only a few, basic security functions, and they are the same tools the vendors use
to secure the underlying infrastructure. If an attacker were to bypass these tools at the infrastructure layer, they
would likely be able to bypass them at the customer's application level as well.

Second, and perhaps more important, is the fact that many organizations operate in a hybrid world where some of
their applications remain hosted in their own datacenters, some in Vendor-A IaaS cloud platform, some in Vendor-
B cloud platform, and various others with multiple SaaS vendors. This is what we call a "Multi-Cloud" environment,
and it comes with a "Multi-Cloud" problem: multiple, independent, uncoordinated security solutions—a problem
where complexity can scale geometrically with the number of cloud vendors involved.

Now, highly trained security staff are scarce to start with. Add to that a burden to integrate and operate multiple
non-integrated security environments simultaneously … it can be a real problem.

At Fortinet, we have security solutions such as FortiGate, FortiMail, FortiWeb, FortiSandbox, FortiInsight, and
others within the Fortinet Security Fabric that are not only at home in a company's data center, providing the same
consistent security, but are also optimized for all the leading IaaS cloud providers.

To wrap up, we've shown the fundamentals of how "the cloud" came to be, how cloud environments are secured,
and described Fortinet's cloud security strategy that scales from simple cloud-only environments to complex multi-
cloud environments.

Thank you for your time, and please remember to take the quiz that follows this lesson.


Virtual Machine Risks Lesson

Welcome to the Virtual Machine Risks lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Virtualization is the process of using a single physical hardware resource to create multiple virtual instances of
working devices, such as computers. Virtual hardware may be located locally on a laptop, desktop, or servers in a
datacenter, or it may be hosted by a third party in a cloud infrastructure and be accessed only through web
browsers and application programming interfaces (APIs). Secure virtualization is achieved using processes,
procedures, and policies that ensure that the virtualized hardware infrastructure is secure and protected.

There are two areas to consider when securing virtual resources: the physical security of the hardware and the
hypervisor. Because a virtual server can contain hundreds, or even thousands, of virtual computers, physically
securing the hardware and maintaining a proper environment is critical. The loss or compromise of a single server
can have severe consequences. The hypervisor software that runs on the physical server has direct access to all
of the VMs that it is running. Securing the hypervisor and protecting its integrity and administrator accounts is
critical to ensuring the security of all the VMs running on the hypervisor.

Click the highlighted icons for more information.

Through its improving availability, scalability, and elasticity, virtualization enables organizations to reduce
infrastructure costs, but it also introduces additional challenges related to security, especially in relation to device
uptime, data storage, and machine security.

To secure VMs, you start with practices that are similar to the ones you use to secure traditional devices,
including:
l Patching the OS and the applications regularly
l Using access management with strong passwords policies
l Installing firewalls virtually in the environment in line with the protected VMs
l Implementing cloud-network segmentation to reduce the attack surface
Besides the usual security practices, virtualization-specific considerations include:
l Limiting the connectivity between the VM and the host to segment them and avoid potential virus propagation
through activities such as file sharing
l Removing unnecessary pieces of virtual hardware to reduce the attack surface
l Avoiding virtualization sprawl by implementing sound VM management planning and oversight, such as allowing
only specific administrators to deploy a standard list of validated VM images
l Restricting physical and administrative access to the hypervisor
Click the underlined term for more information.

Due to its nature, virtualization brings the following specific threats.

An attacker can run code on a VM that allows them to break out and interact directly with the hypervisor. The
attacker can then gain access to the host operating system (OS) and, therefore, all the other VMs running on that
host. This threat is called VM escape and you can help prevent it by securing and patching the hypervisor and all
virtualized OSs along with using strong access controls, passwords, and allowing the installation of only verified
and trusted applications.


Due to its elasticity, virtualization can create and remove large amounts of data dynamically. If this data is not
correctly erased, the residual representation of digital data, called data remanence, can be stolen by an
attacker. To prevent data remanence, you must implement an appropriate secure data destruction process,
including overwriting, degaussing, crypto-erasing, or shredding.
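Overwriting before deletion can be sketched as below. This is a simplification: on SSDs and copy-on-write filesystems an overwrite may never touch the original blocks, which is exactly why crypto-erase and device-level sanitization exist as alternatives:

```python
import os
import tempfile

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with random bytes, then unlink it (sketch)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace the data in place
            f.flush()
            os.fsync(f.fileno())       # push the overwrite out to the device
    os.remove(path)

# Demonstrate on a throwaway temporary file
fd, tmp = tempfile.mkstemp()
os.write(fd, b"sensitive VM snapshot data")
os.close(fd)
overwrite_and_delete(tmp, passes=2)
print(os.path.exists(tmp))  # False
```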

It is important to identify how a hosted virtualization platform handles sensitive data. To help control data security,
store sensitive information on a separate, secured database with proper data protections, where VMs can access
it.

When VMs have weaknesses and security vulnerabilities, an attacker can exploit them for privilege escalation
attacks. In this type of attack, the attackers grant themselves access to data they are not supposed to have,
bypassing the proper authorization channel. Least privilege enforcement can prevent this type of attack.
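Least privilege enforcement amounts to checking every action against a role's minimal permission set, with anything not explicitly granted denied by default. The roles and permission strings here are hypothetical:

```python
# Hypothetical role-to-permission mapping; each role gets only what it needs
ROLES = {
    "vm-operator": {"vm:start", "vm:stop"},
    "auditor":     {"logs:read"},
    "cloud-admin": {"vm:start", "vm:stop", "vm:create",
                    "logs:read", "users:manage"},
}

def authorized(role: str, action: str) -> bool:
    # Default-deny: unknown roles and ungranted actions are refused
    return action in ROLES.get(role, set())

print(authorized("vm-operator", "vm:stop"))       # True
print(authorized("vm-operator", "users:manage"))  # False: escalation denied
```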

For availability and potential disaster recovery, virtualization allows VMs to move from one server or hypervisor to
another. An attacker can potentially intercept this live VM migration and, acting as a man in the middle, steal data
or alter the VMs during the migration. Encrypting the VMs and securing the data channels used in the migration
can prevent this type of attack.

You have completed the lesson.


Common Cloud Threats Lesson

Welcome to the Common Cloud Threats lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

According to the Fortinet 2023 Cloud Security Report, the biggest threat to public clouds and virtual environments
is perceived to be errors in configuration, setup, and deployment. This is because virtual and cloud-based
environments can be very complicated with multiple types of security layouts, network design, and infrastructure.
Navigating and ensuring that all stages of a cloud-based environment are secure is an enormous task with very
real impacts if done incorrectly. In this lesson, you will learn about some of the most significant areas of
configuration, as well as talk about other common security threats.

Some of the common security risks for cloud environments include:


l Authentication and Authorization—Ensure modern authentication methods protect sensitive data and virtual
machines. Also ensure that the cloud environment enforces the secure authorization of only valid users and devices
using the principle of least privilege.
l VM Creep—Use frequent auditing of cloud-based virtual machines and data to reduce your footprint in the cloud
and minimize your attack surface. Ensure that only resources that need to exist and be accessed, especially
publicly, are allowed.
l Misconfigured Storage—Cloud-based storage is managed in units with their own unique sets of rules,
authentication, and access control lists. Robustly securing and configuring these containers, disks, buckets, and
blobs is critical to prevent data loss and excessive resource usage.
l Data Loss—In addition to securely configuring storage, using data loss prevention platforms, and maintaining
logging and other security tools, consider how cloud management companies handle data storage, rights, and
ownership.
l Connectivity—Always use secure network protocols like HTTPS and SSH when connecting to and maintaining
cloud services and devices. Also, encrypt and secure data being transferred between cloud applications and
storage using secure protocols.
l Improper Logging and Monitoring—For security purposes, always have access to security logs for your cloud
environment. Cloud service companies may have very different methods of keeping and saving logs. Being able to
centralize this data collection on a security information and event management (SIEM) device or other log collection
point can help in maintaining security and quickly responding to threats that affect the cloud environment.
l Rights and Data Ownership—Data ownership in the cloud is an extremely complex issue. Data ownership depends
on the country of the company that wants to use cloud services, the country where the cloud service provider is
incorporated, and service level agreements (SLA) signed by the company and the hosting service. Having a clear
picture of who owns the data that is hosted in the cloud is critical to prevent misunderstandings and potential data
loss due to ownership conflicts and local laws.

One of the greatest concerns of cloud-based solutions is enforcing proper identification and authorization of users
and devices to resources. Because connections to cloud services come from both local and public resources, it is
critical to properly identify and restrict access to cloud-based data and resources. Because the potential attack
surface of cloud-based services is so large, proper authorization of all potential connections and access is key to
maintaining cloud security.

It is important to secure authentication and authorization for users, management connections through HTTPS and
API, and data connections to and from cloud services to ensure only necessary connections are allowed. It takes
only one improperly secured data stream or incorrectly configured authentication profile to allow potential
attackers a foothold to begin exploring and exploit an interconnected cloud service environment.

Using a comprehensive authentication suite like OAuth and SAML can be critical to enforcing security policies.
These token-based authentication systems allow connections from users and devices to connect to a wide variety
of services. This is especially helpful when accessing cloud resources across multiple platforms because both
OAuth and SAML support interoperability between various companies, services, and security devices. They can
also simplify administration by supporting a single authentication database used for both local and cloud-based
accounts and services.
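As an illustration of the token-based approach described above, the following sketch signs and verifies a bearer-style token with an HMAC. This is a deliberately minimal stand-in, not the actual OAuth or SAML protocol; the secret key and payload format are hypothetical.

```python
import base64
import hashlib
import hmac

SECRET = b"demo-shared-secret"  # hypothetical key; real deployments use managed secrets

def sign_token(payload: str) -> str:
    """Issue a token as payload plus an HMAC-SHA256 signature (toy example)."""
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()
    return payload + "." + base64.urlsafe_b64encode(sig).decode()

def verify_token(token: str) -> bool:
    """Accept the token only if the signature matches the payload exactly."""
    payload, _, sig = token.rpartition(".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()
    ).decode()
    return hmac.compare_digest(sig, expected)

token = sign_token("user=alice;scope=storage.read")
print(verify_token(token))                            # True: untampered
print(verify_token(token.replace("read", "write")))   # False: payload altered
```

Real token formats such as OAuth bearer tokens and SAML assertions carry far more structure (issuer, audience, expiry), but the core idea is the same: the service trusts the token because it can verify the issuer's signature, not because it holds the user's password.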

One of the most at-risk resources in the cloud is data. Personal information, credit cards, and intellectual property
are all sometimes stored and accessed by cloud services. These types of resources usually have their own unique
set of authentication rules and policies, either enforced by cloud service provider local policies, identity and access
management (IAM) systems, or storage access control lists (ACLs). Restricting access to cloud storage so that
only authorized devices and users can access only the data they need is critical. Centralized authentication
services and using the principle of least privilege for data access can help simplify this process.
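The principle of least privilege for data access can be pictured as a default-deny lookup. The sketch below uses hypothetical principal and resource names; real clouds express this through IAM policies or storage ACLs.

```python
# Hypothetical policy table: each principal is granted only the access it needs.
POLICY = {
    "web-app": {("customer-db", "read")},
    "backup-job": {("customer-db", "read"), ("backup-bucket", "write")},
}

def is_allowed(principal: str, resource: str, action: str) -> bool:
    """Default deny: access is granted only when it is explicitly listed."""
    return (resource, action) in POLICY.get(principal, set())

print(is_allowed("web-app", "customer-db", "read"))   # True: explicitly granted
print(is_allowed("web-app", "customer-db", "write"))  # False: never granted
```

The important design choice is the default: anything not listed is denied, so a forgotten entry fails closed rather than open.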

Having a cloud-based data loss or leakage prevention (DLP) system in place to help monitor data connections
between cloud-based services can help detect and prevent large-scale data leaks. Attackers have been very
successful in exfiltrating data over time from cloud-based storage. Having a DLP device in the cloud to help detect
suspicious data transfers can help prevent this.
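A DLP scanner's core pattern matching can be sketched as follows: flag digit runs that look like payment card numbers and pass the Luhn checksum. This is a simplified illustration, not a production DLP rule set.

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(text: str):
    """Flag digit runs that look like card numbers and pass the Luhn check."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("invoice ref 12345, card 4111 1111 1111 1111"))
# ['4111111111111111']
```

Combining a loose pattern with a checksum keeps false positives down: order numbers and timestamps rarely pass Luhn by accident.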

An often overlooked way to help reduce storage demands and reduce the attack surface is to audit and remove
any virtual machines, storage, and other cloud services that tend to increase over time and that are not correctly
removed or deleted after use. Trimming unused or outdated cloud services reduces the number of resources used
in the cloud and prevents old, undeleted systems from being potential security threats and initial attack points for
malicious actors.
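The audit described above can be as simple as comparing each resource's last-use time against an idle threshold. The inventory records and the 90-day threshold below are hypothetical; a real audit would pull this data from the provider's inventory APIs.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory; real data would come from your cloud provider's API.
NOW = datetime(2024, 1, 1, tzinfo=timezone.utc)
RESOURCES = [
    {"name": "build-vm-old", "last_used": NOW - timedelta(days=200)},
    {"name": "prod-web-01", "last_used": NOW - timedelta(days=1)},
]

def stale_resources(resources, now, max_idle_days=90):
    """Flag anything idle past the threshold as a candidate for removal."""
    cutoff = now - timedelta(days=max_idle_days)
    return [r["name"] for r in resources if r["last_used"] < cutoff]

print(stale_resources(RESOURCES, NOW))  # ['build-vm-old']
```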

Many of these types of data monitoring are rolled into cloud native protection, or CNP, and cloud native application
protection, or CNAP, platforms. These platforms combine services like DLP, auditing, and risk assessment in one place.

Because of the rise of application programming interfaces, or APIs, to deploy, configure, and maintain cloud
services, you should take extra care to secure API connections using encryption to prevent sensitive information
like passwords and encryption keys from becoming compromised. It is also critically important to secure API keys
because attackers can use API interfaces to modify cloud environments and allow otherwise unauthorized
access, or delete cloud resources that are in use.

In addition to securing API and other management interfaces, it is crucial to verify that cross-site policies and
permissions are restricted to only necessary connections because of the open and cross-site dependency of
many cloud-based services. It is very common to use cross-origin resource sharing (CORS) to allow sharing of
resources between websites. For example, a cloud-based website can allow a user to access and load graphics
and other information from a different cloud storage device using a CORS permission.

While this greatly helps in the sharing of cloud-based resources, you must take care to ensure this mechanism is
not exploited by attackers to execute cross-site scripting (XSS) attacks by tricking browsers or websites into
accessing data and passing it along to other sites because of incorrectly or broadly set CORS permissions.
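A safe CORS configuration boils down to echoing back only origins you explicitly trust. The sketch below shows the server-side decision, with hypothetical origin names; returning a wildcard for every requester is the kind of overly broad permission the paragraph above warns about.

```python
# Allowlist of trusted origins; a wildcard "*" here would defeat the control.
ALLOWED_ORIGINS = {"https://app.example.com", "https://static.example.com"}

def cors_header(request_origin: str) -> dict:
    """Echo the origin back only when it is explicitly trusted."""
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}  # no CORS header: the browser blocks the cross-site read

print(cors_header("https://app.example.com"))   # header echoed back
print(cors_header("https://evil.example.org"))  # {} - request not trusted
```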

Because of the frequently distributed nature of cloud-based services, especially those that may use multiple
vendors and have data stored in different locations, it is sometimes difficult to have a comprehensive view of what
is going on in a cloud environment. Log files can be kept locally in the cloud, forwarded to a cloud provider logging
solution, or ignored. Collecting and collating this information as centrally as possible helps companies maintain
SLA targets and quickly detect and prevent data breaches and other malicious activity.

Along with having a robust logging system in place to record activity in the cloud, real-time monitoring of the health
of associated cloud systems, storage, and connections can be invaluable. This allows quicker response time to
possible downtime events and potential attacks.

You can perform logging and monitoring aggregation using devices hosted in the cloud close to the deployed
cloud services, locally hosted devices that allow data to be sent from the cloud resources, or a combination of
both. It is also possible to have logging and monitoring information gathered by a cloud solutions provider and
then proxied to the security device of your choice to allow easier collection and analysis across multiple platforms.
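Part of centralizing logs is normalizing each provider's field names onto one common schema before the records reach the SIEM. The sample records and field mappings below are illustrative; real providers define many more fields.

```python
# Two hypothetical providers reporting the same kind of event with different fields.
aws_style = {"eventTime": "2024-01-01T10:00:00Z",
             "sourceIPAddress": "203.0.113.9",
             "eventName": "ConsoleLogin"}
azure_style = {"time": "2024-01-01T10:05:00Z",
               "callerIpAddress": "203.0.113.9",
               "operationName": "SignInLogs"}

# Map each provider's native field names onto one common schema.
FIELD_MAPS = {
    "aws": {"timestamp": "eventTime", "src_ip": "sourceIPAddress", "action": "eventName"},
    "azure": {"timestamp": "time", "src_ip": "callerIpAddress", "action": "operationName"},
}

def normalize(record: dict, provider: str) -> dict:
    """Rewrite a provider-specific record into the common SIEM schema."""
    mapping = FIELD_MAPS[provider]
    return {common: record[native] for common, native in mapping.items()}

print(normalize(aws_style, "aws"))
print(normalize(azure_style, "azure"))
```

Once every source emits the same shape, correlation rules and dashboards can be written once instead of per provider.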

You have completed this lesson.

Cloud-Hosted Security Services Lesson

Welcome to the Cloud-Hosted Security Services lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

With the rise of cloud-based services, such as web servers, Software-as-a-Service (SaaS), and Infrastructure-as-
a-Service (IaaS), cybersecurity platforms have also evolved to be hosted in the cloud. Traditional network
devices, such as cloud native firewalls (CNFs), web application firewalls (WAFs), and email gateways can all be
hosted in a variety of cloud environments to allow immediate security for sensitive applications and data. These
web-optimized devices can also provide more traditional network services to cloud-based applications and
servers, like high availability (HA), packet shaping, and latency optimization.

In addition to being near cloud services, having security devices in the cloud can simplify security for companies
that have many small branch offices. Centralizing services in the cloud can reduce costs by eliminating expensive
dedicated links to a main office that may not be centrally located or may be expensive to access through traditional
network links.

A cloud-based firewall can replace a locally hosted firewall and centralize network security in one place, allowing
only local network traffic to be routed to each branch location through a secure virtual private network (VPN).
Additionally, large cloud-based data centers and SaaS environments may use cloud firewalls to provide an
additional layer of protection against potential attackers.

Secure email gateways can be hosted in the cloud, allowing all incoming email to be scanned and verified in the
cloud, before being forwarded to a local email server.

Cloud-based WAFs sit in front of cloud-hosted web servers to provide load balancing, security, and scalability to
large cloud-hosted web farms, in addition to targeted security, to help prevent common attacks against web
servers and to maximize availability and uptime.

In addition to hosting network security devices in the cloud, other security services like centralized authentication,
virus scanning, and secure browsing are also now available in cloud-hosted variants. These services grant users
additional convenience and features that may have traditionally been available to only those users who are
physically located at a main branch office.

Cloud-based authentication or Authentication-as-a-Service helps companies host and configure identity and
access management (IAM), single sign-on (SSO), and two-factor authentication in a central location for both local
and cloud-based services. Authentication is usually some form of Lightweight Directory Access Protocol (LDAP),
security assertion markup language (SAML), or open authorization standard (OAuth). This can simplify identity
management and create a more centralized configuration of user policies for both cloud and local services, provided
that most of the applications in use support the authentication method. You can also use authentication proxies to extend
support for additional types of authentication methods.

A further refinement of authentication as a service is a cloud access security broker or CASB. This allows you to
authenticate and control user access to cloud-based applications, regardless of where the users come from and
how they authenticate. Instead of performing authentication for each individual cloud-based application,
authentication is handled by the cloud access security broker, which authorizes users to access the defined
applications.

A cloud browser, also known as a remote browser or a remote browser isolation (RBI), is a web browser SaaS
instance that runs in the cloud and behaves like a regular, locally installed web browser. The security benefit of
using a remote browser is that all network connections and file access are controlled in a tightly secured remote
sandbox. This limits the damage to files executed by the browser and makes it much easier to defend the user
from cross-site scripting (XSS) and other network attacks because the browser is not hosted on the user's local
machine.

Cloud-based antivirus services and sandboxes accept files from local or cloud-based servers for scanning.
Antivirus services inspect files for malicious code using both traditional lists as well as heuristic and AI-based
services. Sandboxes create a virtual machine (VM) of a target operating system (OS) and execute the file in an
isolated, secured environment. The execution, behavior, and usage of disk and memory resources can then be
analyzed to determine if the file is safe.
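The "traditional list" side of antivirus scanning can be sketched as a signature lookup: hash the file and compare the digest against known-bad entries. The blocklist entry below is hypothetical; heuristic, AI-based, and sandbox analysis layer on top of this basic check.

```python
import hashlib

# Hypothetical signature list: SHA-256 digests of known-malicious files.
KNOWN_BAD = {
    hashlib.sha256(b"malicious-sample-bytes").hexdigest(),
}

def scan(file_bytes: bytes) -> str:
    """Signature lookup: hash the file and compare against the blocklist."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return "malicious" if digest in KNOWN_BAD else "clean"

print(scan(b"malicious-sample-bytes"))  # malicious
print(scan(b"harmless report"))         # clean
```

Hash lookups are fast and exact, which is why they remain the first pass; anything they miss falls through to the slower behavioral analysis described above.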

One of the biggest advantages of leveraging the cloud for security is the ability to easily send data from many
different endpoints to a central server in the cloud. For example, a central cloud logging server can accept logging
information from local hardware devices spread across the internet, as well as from cloud services applications
and cloud security infrastructure. Having all this information sent and stored in a central location makes
safeguarding, processing, and reviewing data using devices like a security information and event management
(SIEM) much simpler than having them located in many different local servers. Because scalability and reliability
are the two biggest benefits of using cloud services, keeping reliable backups and uptime metrics is much simpler
in a cloud environment.

Having security information and management centralized in the cloud also empowers security operation centers
or SOCs. These centers rely on quick, reliable, and centralized data collection to properly monitor indicators of
compromise (IOC) and the general health of your network.

SOCs are responsible for the escalation of and response to malicious activity. Having centralized control of
network devices and logging through cloud-provided services and applications can greatly simplify the execution
of these activities. An additional benefit of using centralized cloud security operations is the ability to leverage
external company expertise using a SOC-as-a-Service (SOCaaS). Cloud environments allow easy access to a
dedicated security operations staff that performs the day-to-day monitoring of security events and is immediately
available to respond to possible threats.

Because the complexity of securing cloud environments has increased, the cybersecurity industry has developed
corresponding cloud-based security services. Many security companies now provide a service to help centralize
and simplify cloud security concerns. This usually involves integrating software agents and accounts that allow
security companies to examine and monitor cloud environments and perform security and penetration testing
assessment to help identify risk factors. These services are referred to as cloud native protection, or CNP, or
cloud native application protection, or CNAP. These services can vary widely, depending on the vendor, so it is
important to verify that they support the environment that your cloud applications are deployed on and that they
provide the services you require.

CNP and other bundled cloud security services are also referred to as Security-as-a-Service, or SECaaS.
SECaaS allows companies to use a single vendor to provide multiple online security tools and devices, all
integrated into one CNP platform. Services like disaster recovery, virus scanning, intrusion protection, and
authentication can all be provided as a service through these online platforms, with additional features added as
needed. The scale of services offered by these programs varies enormously: from a simple web firewall that fills a
temporary gap in security, to a complete security infrastructure hosted in the cloud, where the only local security
device in each office is a simple router or firewall that provides access to the secure cloud infrastructure.

Secure access service edge, or SASE, is a cloud architecture model that combines network and SECaaS
functions and delivers them as a single cloud service. Based on the Fortinet SASE implementation, SASE
specifically aims to extend networking and security beyond simple connections between offices and cloud
applications. SASE allows remote workers to take advantage of Firewall-as-a-Service (FWaaS), secure web
gateway (SWG), zero-trust network access (ZTNA), and a medley of threat detection functions, regardless of their
location. SASE combines secure networking from software-defined WAN with a security service edge containing
SECaaS products to validate and secure connections to hosted applications.

You have completed this lesson.

Securing the Cloud Lesson

Welcome to the Securing the Cloud lesson.

Click Next to get started.

After completing this lesson, you will be able to achieve these objectives.

Now that you understand the components of cloud security, you will learn how they can work together to secure
cloud applications and data. Protecting cloud servers, such as email or web servers, can be straightforward. Many
cloud environments allow the deployment of a web application firewall, or WAF, and cloud-native firewall, or CNF,
directly into their virtual networks. This allows the administrator to partition the public-facing cloud network from
the protected network containing the cloud servers. You can then perform traditional security procedures, such as
policy rules, traffic inspection, and authentication at this virtual network perimeter by the cloud-native security
devices. You can also perform enhanced logging and data monitoring at this boundary using devices like data loss
prevention or intrusion detection systems to prevent unauthorized data access and loss.

In addition to firewalls, you can host data loss prevention scanners, antivirus engines, and sandboxes in the cloud
next to these cloud servers to provide immediate security services instead of sending data across possible
unsecured links. The other major benefit of cloud security services is that you can easily replicate them in
additional cloud locations, adding an additional layer of security with minimal deployment and administrative
costs.

Similar to cloud-based servers, securing authentication and applications in the cloud is best done as close to the
cloud services as possible. Introducing long connection times between authenticating servers and their
applications can cause usability problems. Since sensitive data can pass between the user, application server,
and application, controlling and securing these links is extremely important. Ensure that users authenticate over
an encrypted link, such as SSL/TLS, or through a VPN.

You should protect cloud-hosted authentication servers, such as Active Directory or LDAP, using a CNF, and
enforce connections using the principle of least privilege to ensure that only allowed applications can access
authentication services and that administrators are restricted to known devices and network links. Proxying connections
and controlling access to these services, either through network controls or authorization restrictions, is critical to
ensure that your applications, and especially your sensitive services, such as authentication, are as protected as
possible.

When connecting to cloud applications, it is important to secure and control access to the data and services. This
control is usually done either on the client side or server side. Securing connections directly from clients is
generally performed by a forward proxy, which is often a network firewall that accepts the connection and then
brokers it forward to the destination server. The forward proxy can then authenticate and enforce all connections
to applications from clients. Forward proxies ensure that no connection from the client is ever allowed to directly
connect to an application, and they allow for very granular enforcement of your users' traffic. The disadvantage of
forward proxies is that you must deploy one in front of each of your users, which can be difficult if your company
has multiple branch offices and remote workers. Forward proxies also are of limited use when you want to secure
your cloud applications from external, internet-based threats.

A reverse proxy is positioned in front of the servers and applications to be protected. This enables the reverse
proxy to verify and authenticate all incoming connections before passing the traffic to the target server. One of the
advantages of using a reverse proxy is that it can load balance connections across multiple identical servers to
allow greater scalability. Reverse proxies, such as a WAF, can also provide application-specific enforcement and
protection tailored to each application. The disadvantages of a reverse proxy are that you must deploy them
directly in front of the application to be effective, and you might find them difficult to deploy appropriately in a cloud
environment that has applications in multiple locations and shifting data centers.
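The load-balancing behavior of a reverse proxy described above can be sketched as round-robin selection over a pool of identical backend servers. The addresses below are hypothetical placeholders.

```python
from itertools import cycle

# Hypothetical pool of identical web servers behind the reverse proxy.
BACKENDS = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

class RoundRobinBalancer:
    """Hands each new connection to the next backend in turn."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def pick(self) -> str:
        return next(self._pool)

lb = RoundRobinBalancer(BACKENDS)
print([lb.pick() for _ in range(4)])
# ['10.0.1.10', '10.0.1.11', '10.0.1.12', '10.0.1.10'] - wraps around
```

Production proxies add health checks and weighting, but the core idea is the same: the client sees one address while the proxy spreads load across many identical servers.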

While you can use proxies to intercept traffic being sent to and from cloud applications, a third way of securing
cloud-based applications has emerged thanks to the expanding use of the API, or application programming
interface. APIs are used on the internet to submit and receive information from an application. Common uses of
APIs include configuring firewalls, running Twitter bots, pulling information from weather and mapping sites, and
processing online payments.

CASB servers can use APIs to communicate directly with cloud application providers to configure and allow
authorized users, and to scan data stored by the application to ensure security and prevent data loss. So instead of
configuring a proxy in front of every cloud application, the cloud applications are configured to work with the CASB
server to enforce authentication and provide access so the CASB server can scan stored data. This can greatly
simplify deployment because configuration needs to happen only on the cloud application and the CASB server.
The main limitation of an API-based CASB deployment is that not all cloud-based applications have a compatible
API to allow this interaction to occur. In such cases, you must use a proxy to secure the cloud application. Many
CASB servers can also act as a proxy for cloud applications in addition to using APIs directly with the application
to enforce security.

The use of APIs is increasing because they are easy to use and work with many different types of devices. Many
IoT devices use APIs. For example, a smart refrigerator may have an API that connects to a mobile app to monitor
its status. APIs are also used in many other applications where quick, simple connections for data exchange are
required.

There are two main types of API: representational state transfer (REST) and simple object access protocol (SOAP).
Use REST for quick configuration and simple monitoring. REST uses a simple request-and-response model like
the way a web page is retrieved from a website. You can use a REST API to request information from the server or
send basic configuration changes.

SOAP uses an XML schema to package the request as an HTTP POST request. The server receives the full
contents of the request and responds with another SOAP envelope containing the body of the response. Because
SOAP packages the request and responses in the XML envelope, it tends to be slower than a REST API and is
mostly used for legacy applications, or applications that require stateful sessions. REST is more commonly used
overall.
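The contrast is easiest to see side by side: a REST request is essentially a URL and a verb, while a SOAP request wraps the same intent in an XML envelope posted to the server. The endpoint, parameters, and `GetWeather` operation below are hypothetical.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# REST: the request is just a URL plus an HTTP verb (hypothetical endpoint).
query = urlencode({"city": "Oslo", "units": "metric"})
rest_url = f"https://api.example.com/v1/weather?{query}"

# SOAP: the same request packaged in an XML envelope sent as an HTTP POST body.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)
env = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
req = ET.SubElement(body, "GetWeather")  # hypothetical operation name
ET.SubElement(req, "City").text = "Oslo"
soap_payload = ET.tostring(env, encoding="unicode")

print(rest_url)
print(soap_payload)
```

The extra envelope is why SOAP tends to be heavier on the wire, and why REST dominates for quick request-and-response interactions.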

Securing APIs is critical because they exchange sensitive information, such as usernames and passwords. You
can secure APIs by encrypting the connections between them using HTTPS, VPNs between the client and server,
or an API gateway. An API gateway is a WAF that can identify and scan API connections and enforce security. It
does this by verifying the authentication used by the API session and enforcing API schemas. An API schema is
the set of commands and API calls that the API is expected to use. If a request deviates from the expected
commands, the API gateway blocks it. For example, if the API call to change an IP address is not part of the
approved schema, the API gateway prevents that call from being used, even by a valid user session. This ensures
that the API session uses only approved commands and prevents malicious actors from injecting commands into
the API.
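The schema enforcement described above reduces to a membership check: is this method-and-path pair on the approved list? The sketch below uses hypothetical endpoints; real API gateways validate full OpenAPI-style schemas, including parameters and body shapes.

```python
# Hypothetical approved schema: the only calls this API session may make.
APPROVED_SCHEMA = {
    ("GET", "/v1/status"),
    ("POST", "/v1/backups"),
}

def gateway_filter(method: str, path: str) -> bool:
    """Allow a call only if the (method, path) pair appears in the schema."""
    return (method, path) in APPROVED_SCHEMA

print(gateway_filter("GET", "/v1/status"))       # True: approved call passes
print(gateway_filter("POST", "/v1/interfaces"))  # False: off-schema call blocked
```

Even an authenticated session is limited to the approved calls, which is what stops an attacker from injecting an unexpected command, such as an IP address change, through a valid session.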

You have completed the lesson.

No part of this publication may be reproduced in any form or by any means or used to make any
derivative such as translation, transformation, or adaptation without permission from Fortinet Inc.,
as stipulated by the United States Copyright Act of 1976.
Copyright© 2023 Fortinet, Inc. All rights reserved. Fortinet®, FortiGate®, FortiCare® and FortiGuard®, and certain other marks are registered trademarks of Fortinet,
Inc., in the U.S. and other jurisdictions, and other Fortinet names herein may also be registered and/or common law trademarks of Fortinet. All other product or company
names may be trademarks of their respective owners. Performance and other metrics contained herein were attained in internal lab tests under ideal conditions, and
actual performance and other results may vary. Network variables, different network environments and other conditions may affect performance results. Nothing herein
represents any binding commitment by Fortinet, and Fortinet disclaims all warranties, whether express or implied, except to the extent Fortinet enters a binding written
contract, signed by Fortinet’s General Counsel, with a purchaser that expressly warrants that the identified product will perform according to certain expressly-identified
performance metrics and, in such event, only the specific performance metrics expressly identified in such binding written contract shall be binding on Fortinet. For
absolute clarity, any such warranty will be limited to performance in the same ideal conditions as in Fortinet’s internal lab tests. In no event does Fortinet make any
commitment related to future deliverables, features, or development, and circumstances may change such that any forward-looking statements herein are not accurate.
Fortinet disclaims in full any covenants, representations,and guarantees pursuant hereto, whether express or implied. Fortinet reserves the right to change, modify,
transfer, or otherwise revise this publication without notice, and the most current version of the publication shall be applicable.
