Geze Assignment
A computer vulnerability is a cyber-security term that refers to a defect in a system that can
leave it open to attack. This vulnerability could also refer to any type of weakness present in a
computer itself, in a set of procedures, or in anything that allows information security to be
exposed to a threat.
To exploit a vulnerability an attacker must be able to connect to the computer system.
Vulnerabilities can be exploited by a variety of methods including SQL injection, buffer
overflows, cross-site scripting (XSS) and open source exploit kits that look for known
vulnerabilities and security weaknesses in web applications.
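To make the SQL-injection case concrete, here is a small sketch using Python's standard-library sqlite3 module; the table, user, and input values are invented purely for illustration:

```python
import sqlite3

# In-memory database with a single user row, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# VULNERABLE: string concatenation lets the input rewrite the query, so
# the WHERE clause matches every row in the table.
vulnerable = "SELECT name FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(vulnerable).fetchall())        # [('alice',)] -- data leaked

# SAFE: a parameterized query treats the input as a literal value.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # [] -- no match
```

The fix is not to sanitize strings by hand but to let the database driver bind parameters, which keeps data and query structure separate.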
A computer attack may be defined as an action directed against a computer system to disrupt
equipment operations, change process control, or corrupt stored data.
Different attack methods target different vulnerabilities and involve different types of weapons,
and several may be within the current capabilities of some terrorist groups.
Three different methods of attack have been identified, based on the effects of the weapons used.
However, as technology evolves, distinctions between these methods may begin to blur.
A physical attack involves conventional weapons directed against a computer facility or its
transmission lines;
An electronic attack (EA) involves the use of electromagnetic energy as a
weapon, most commonly an electromagnetic pulse (EMP) to overload computer circuitry,
but also, in a less violent form, to insert a stream of malicious code directly into an enemy's
microwave radio transmission; and
A computer network attack (CNA), usually involves malicious code used as a weapon to
infect enemy computers to exploit a weakness in software, in the system configuration, or in
the computer security practices of an organization or computer user. Other forms of CNA are
enabled when an attacker uses stolen information to enter restricted computer systems.
2. How to build a fault-tolerant system
Fault tolerance simply means a system's ability to continue operating uninterrupted despite
the failure of one or more of its components. This is true whether it is a computer system, a
cloud cluster, a network, or something else. In other words, fault tolerance refers to how an
operating system (OS) responds to and allows for software or hardware malfunctions and
failures.
In many applications where computers are used, outages or malfunctions can be expensive, or
even disastrous. In such cases the system must handle the failures, but such systems are hardly
ever perfect. This section briefly presents the situations that might occur to any computer
system, as well as fail-safes that can help it continue working at an acceptable level when some
of its components fail.
There are countless ways in which a system can fail. To make a system fault tolerant, we need
to identify the potential failures it might encounter and design counteractions. Each failure's
frequency and impact on the system need to be estimated to decide which ones the system
should tolerate. Here are just a few examples of potential issues to think of:
program experiences an unrecoverable error and crashes (unhandled exceptions, expired
certificates, memory leaks)
component becomes unavailable (power outage, loss of connectivity)
data corruption or loss (hardware failure, malicious attack)
security (a component is compromised)
performance (an increased latency, traffic, demand)
Circuit breaker: Hystrix and Resilience4j
Netflix's open source Hystrix is the most popular implementation of the circuit-breaker pattern.
Many companies where I've worked previously are leveraging this wonderful tool. Surprisingly,
Netflix announced that it will no longer update Hystrix. (Yeah, I know.) Instead, Netflix
recommends using an alternative solution like Resilience4j, which supports Java 8 and functional
programming, or an alternative practice like Adaptive Concurrency Limit.
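Hystrix and Resilience4j are Java libraries, but the circuit-breaker pattern itself is simple enough to sketch. The following minimal Python illustration (the class and parameter names are ours, not any library's API) fails fast once a dependency has failed repeatedly:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures and fails fast until `reset_timeout` seconds have passed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None   # None means the circuit is closed

    def call(self, func, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: allow one trial call
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                  # success closes the circuit
        return result

breaker = CircuitBreaker(max_failures=2, reset_timeout=30.0)

def flaky():
    raise ConnectionError("backend down")

for _ in range(2):                         # two real failures trip the breaker
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

try:
    breaker.call(flaky)                    # rejected without touching the backend
except RuntimeError as err:
    print(err)  # circuit open: failing fast
```

The point of the pattern is that, once open, the breaker rejects calls immediately instead of hammering an already-failing dependency, giving it time to recover.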
Load balancing: Nginx and HaProxy
Load balancing is one of the most fundamental concepts in a distributed system and must be
present to have a production-quality environment. To understand load balancers, we first need to
understand the concept of redundancy. Every production-quality web service has multiple
servers that provide redundancy to take over and maintain services when servers go down.
A load balancer is a device or software that optimizes heavy traffic transactions by balancing
multiple server nodes. For instance, when thousands of requests come in, the load balancer acts
as the middle layer to route and evenly distribute traffic across different servers. If a server goes
down, the load balancer forwards requests to the other servers that are running well.
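The routing behavior described above can be sketched with a toy round-robin balancer (the server names are invented; real balancers like Nginx and HaProxy add health checks, weighting, and much more):

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: rotates requests across servers and skips
    any server that has been marked down."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.down = set()
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        self.down.add(server)

    def route(self):
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server not in self.down:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["app1", "app2", "app3"])
print([lb.route() for _ in range(4)])  # ['app1', 'app2', 'app3', 'app1']
lb.mark_down("app2")
print(lb.route() in {"app1", "app3"})  # True: traffic avoids the failed node
```

Even this toy version shows the two essential behaviors: even distribution under normal operation, and rerouting around a failed node.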
There are many load balancers available, but the two best-known ones are Nginx and HaProxy.
Nginx is more than a load balancer. It is an HTTP and reverse proxy server, a mail proxy server,
and a generic TCP/UDP proxy server. Companies like Groupon, Capital One, Adobe, and
NASA use it.
HaProxy is also popular, as it is a free, very fast and reliable solution offering high availability,
load balancing, and proxying for TCP and HTTP-based applications. Many large internet
companies, including GitHub, Reddit, Twitter, and Stack Overflow, use HaProxy. Oh and yes,
Red Hat Enterprise Linux also supports HaProxy configuration.
Actor model: Akka
The actor model is a concurrency design pattern that delegates responsibility when an actor,
which is a primitive unit of computation, receives a message. An actor can create even more
actors and delegate the message to them.
Akka is one of the most well-known tools for the actor model implementation. The framework
supports Java and Scala, which are both based on JVM.
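Akka runs on the JVM, but the core idea of an actor (a private mailbox drained by a single thread of control) can be sketched in a few lines of Python; the API below is illustrative, not Akka's:

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox drained by a single thread, so the
    handler processes one message at a time and never needs locks."""

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        self.mailbox.put(message)   # never blocks the sender

    def stop(self):
        self.mailbox.put(None)      # poison pill: ask the actor to finish
        self._thread.join()

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:
                break
            self.handler(message)

# A counting actor: its state is touched only by the actor's own thread.
counts = []
counter = Actor(counts.append)
for i in range(3):
    counter.send(i)
counter.stop()
print(counts)  # [0, 1, 2]
```

Because all state changes happen on the actor's own thread, concurrency bugs like data races are avoided by construction rather than by locking.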
Asynchronous, non-blocking I/O using messaging queues: Kafka and RabbitMQ
Multi-threaded development has been popular in the past, but this practice has been discouraged
and replaced with asynchronous, non-blocking I/O patterns. For Java, this is explicitly stated in
its Enterprise Java Bean (EJB) specifications:
"An enterprise bean must not use thread synchronization primitives to synchronize execution of
multiple instances.
"The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt
to start, stop, suspend, or resume a thread, or to change a thread's priority or name. The
enterprise bean must not attempt to manage thread groups."
Now there are other practices, like stream APIs and actor models. But messaging queues
like Kafka and RabbitMQ offer out-of-the-box support for asynchronous, non-blocking I/O,
and they are powerful open source tools that can replace threads for handling concurrent
processes.
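Kafka and RabbitMQ are full message brokers, but the decoupling they provide can be sketched in-process with Python's standard-library queue: the producer enqueues work and moves on, never blocking on the slow consumer:

```python
import queue
import threading
import time

jobs = queue.Queue()      # stands in for a broker queue or Kafka topic
processed = []

def consumer():
    while True:
        job = jobs.get()
        if job is None:   # shutdown signal
            break
        time.sleep(0.01)  # pretend the work is slow
        processed.append(job)

worker = threading.Thread(target=consumer)
worker.start()

# The producer enqueues and returns immediately; it is never blocked by
# the slow consumer, which is the point of asynchronous messaging.
for i in range(5):
    jobs.put(i)
jobs.put(None)
worker.join()
print(processed)  # [0, 1, 2, 3, 4]
```

A real broker adds persistence, delivery guarantees, and distribution across machines, but the producer/consumer decoupling is the same.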
Other options: Eureka and Chaos Monkey
Other useful tools for fault-tolerant systems include monitoring tools, such as Netflix's Eureka,
and stress-testing tools, like Chaos Monkey. They aim to discover potential issues earlier by
testing in lower environments, like integration (INT), quality assurance (QA), and user
acceptance testing (UAT), to prevent potential problems before moving to the production
environment.
3. What is Redundant Array of Independent Disks (RAID) and Microsoft Cluster Technology
RAID (redundant array of independent disks) is a way of storing the same data in different
places on multiple hard disks or solid-state drives to protect data in the case of a drive
failure. There are different RAID levels, however, and not all have the goal of providing
redundancy.
RAID works by placing data on multiple disks and allowing input/output (I/O) operations to
overlap in a balanced way, improving performance. Although spreading data across multiple
disks increases the chance that some drive will fail, storing the data redundantly increases
fault tolerance.
RAID arrays appear to the operating system (OS) as a single logical drive. RAID employs the
techniques of disk mirroring or disk striping. Mirroring copies identical data onto more than
one drive. Striping partitions each drive's storage space into units and spreads the data over
multiple disk drives; the units range from a sector (512 bytes) up to several megabytes. The
stripes of all the disks are interleaved and addressed in order.
A network load balancing cluster filters and distributes TCP/IP traffic across a range of
nodes, regulating connection load according to administrator-defined port rules.
A failover cluster provides high availability for services, applications, and other
resources through an architecture that maintains a consistent image of the cluster on all
nodes and that allows nodes to transfer resource ownership on demand.
Disk mirroring, also known as RAID 1, is the replication of data to two or more disks. Disk
mirroring is a good choice for applications that require high performance and high availability,
such as transactional applications, email and operating systems. Disk mirroring also works with
solid-state drives, so "drive mirroring" may be a better term for contemporary storage systems.
Because both drives are operational, data can be read from them simultaneously, which makes
read operations quite fast. The RAID array continues to operate as long as at least one drive is
operational. Write operations, however, are slower because every write operation is performed
twice.
Disk (drive) mirroring is particularly advantageous for disaster recovery scenarios as it provides
instantaneous failover for data required by mission-critical applications. If primary drives in the
array are damaged or unable to operate, traffic is switched to secondary or mirrored backup
drives. The mirror copy is able to become operational on failover because the operating system
and application software are replicated to the mirror along with the data used by the applications.
Mirroring is very simple to understand and one of the most reliable ways of protecting data. In
this technique, you simply make a mirror copy of the disk you want to protect, so that you have
two copies of the data. At the time of a failure, the controller uses the second disk to serve the
data, keeping data availability continuous.
When the failed disk is replaced with a new one, the controller copies the data from the
surviving disk of the mirrored pair. Data is simultaneously recorded on both disks. Though
this type of RAID gives you the highest availability of data, it is costly: it requires double the
amount of disk space, which increases the cost.
Striping is the most confusing RAID concept for a beginner and needs a careful explanation.
RAID is a collection of multiple disks. On each disk, a predefined number of contiguously
addressable disk blocks is defined; each such run of blocks is called a strip, and the set of
aligned strips across the disks is called a stripe.
In other words, each hard disk contributes a stack of addressable blocks (a strip), and when
several such disks are placed in parallel, the combination of their aligned strips forms a stripe.
The parity bits are used to re-create the data at the time of failure. Parity information can be
stored on a separate, dedicated HDD or distributed across all the drives in a RAID set.
Consider, for example, a set with parity stored on a separate disk: the first three disks, labeled
D, contain the data, while the fourth disk, labeled P, stores the parity information, which in
this case is the sum of the elements in each row. If one of the data disks (D) fails, the missing
value can be calculated by subtracting the sum of the remaining elements from the parity value.
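The sum-based example conveys the idea; real RAID implementations use bitwise XOR as the parity function, since XOR-ing the surviving strips with the parity strip regenerates the lost one. A small sketch (the strip contents are arbitrary):

```python
from functools import reduce

# Three data strips, as they would sit on data disks D1..D3.
strips = [b"\x10\x22", b"\x35\x07", b"\x4c\x19"]

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Parity strip, as written to the dedicated parity disk P.
parity = reduce(xor_bytes, strips)

# Disk D2 fails: XOR the surviving strips with the parity strip
# to regenerate the lost data.
rebuilt = reduce(xor_bytes, [strips[0], strips[2], parity])
print(rebuilt == strips[1])  # True: the missing strip is recovered
```

XOR works here because it is its own inverse: XOR-ing everything except the lost strip, together with the parity, cancels out all the surviving data and leaves exactly the missing strip.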
5. Discuss security standard and levels (ISO 15408 standard)
A security standard is "a published specification that establishes a common language, contains
a technical specification or other precise criteria, and is designed to be used consistently, as a
rule, a guideline, or a definition." The goal of security standards is to improve the security of
information technology (IT) systems, networks, and critical infrastructures. Well-written cyber
security standards enable consistency among product developers and serve as a reliable
benchmark for purchasing security products.
Security standards are generally provided for all organizations regardless of their size or the
industry and sector in which they operate. This section includes information about each standard
that is usually recognized as an essential component of any cyber security strategy.
A security level is a set of security features that a system must provide before it can be deemed
suitable for use in a particular security processing mode, or in accordance with a generalized
security policy.
ISO/IEC 15408-3:2008 defines the assurance requirements of the evaluation criteria. It includes
the evaluation assurance levels that define a scale for measuring assurance for component targets
of evaluation (TOEs), the composed assurance packages that define a scale for measuring
assurance for composed TOEs, the individual assurance components from which the assurance
levels and packages are composed, and the criteria for evaluation of protection profiles and
security targets.
ISO/IEC 15408-3:2008 defines the content and presentation of the assurance requirements in the
form of assurance classes, families and components and provides guidance on the organization
of new assurance requirements. The assurance components within the assurance families are
presented in a hierarchical order.
ISO/IEC 15408 provides independent, objective validation of the reliability, quality, and
trustworthiness of IT products. It is the only international standard that customers can rely on to
help them make informed decisions about their IT purchases. The Common Criteria set specific
information assurance goals, including strict levels of integrity, confidentiality, and availability
for systems and data, accountability at the individual level, and assurance that all goals are met.
Common Criteria certification is required for hardware and software devices used by many
governments.
6. How to protect your computer system by using password security?
Passwords are commonly used to gain entry to networks and to various Internet accounts in
order to authenticate the user accessing the website. Password protection allows you to protect
your data set by assigning it a password; another user cannot read, change, or delete your data
set without knowing it.
Scammers, hackers and identity thieves are looking to steal your personal information - and your
computer. But there are steps you can take to protect your computer, like keeping your computer
software up-to-date and giving out your personal information only when you have good reason.
Password protection policies should be in place at organizations so that personnel know how to
create a password, how to store their password and how often to change it.
SSL and TLS are both cryptographic protocols used to increase security by encrypting
communication over computer networks. SSL (RFC specification) stands for Secure Sockets
Layer while TLS (RFC specification) stands for Transport Layer Security. TLS is the successor
of SSL 3.0 and is now the standard (although both methods are still commonly referred to as
SSL). SSL/TLS can be used for a variety of applications, including securing data over:
HTTPS,
FTPS,
SMTP, etc.
A cryptographic protocol must adhere to certain requirements in order to be deemed secure.
A typical SSL/TLS handshake between client and server proceeds through the following steps:
1. Client Hello. The client sends information along with a set of options to the server
regarding SSL communication (SSL version number, cipher settings, etc).
2. Server Hello. The server makes a decision and provides it back to the client based on the
options provided.
3. Server Key Exchange. The server provides information to the client regarding the session
key as well as its public key.
4. Client Key Exchange. The client authenticates the server's certificate and confirms the
server's selected encryption algorithm.
5. Client/Server Begin Secure Communications. Both the client and server confirm that all
subsequent communications will be encrypted.
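Python's standard-library ssl module performs this handshake for you when a socket is wrapped. A minimal client-side sketch (the host name in the comment is a placeholder, and the connection itself is shown but not executed):

```python
import ssl

# Client-side context with secure defaults: certificate validation and
# hostname checking are both enabled out of the box.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL 3.0 / early TLS

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Wrapping a socket with this context runs the Hello exchange described
# above; for example (not executed here):
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.version())  # e.g. "TLSv1.3"
```

Pinning a minimum protocol version, as above, is how applications refuse the obsolete SSL versions while still negotiating the best TLS version both sides support.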
Many SSL/TLS connections continue to be made using RSA keys. However, elliptic curve
cryptography (ECC) has been gaining traction as an alternative to RSA due to its ability to
provide the same level of security at a much smaller size. To learn more about ECC, read
our Elliptic Curve Cryptography article.
Benefits of SSL/TLS
There are a variety of benefits associated with securing connections using SSL/TLS, chief
among them authentication of the server, confidentiality of the data in transit, and protection
of its integrity.
A firewall is a specially programmed router that sits between a site and the rest of the network. It
is a router in the sense that it is connected to two or more physical networks and it forwards
packets from one network to another, but it also filters the packets that flow through it.
A VPN is an example of providing controlled connectivity over a public network such as the
Internet. VPNs utilize a concept called an IP tunnel: a virtual point-to-point link between a pair
of nodes that are actually separated by an arbitrary number of networks.
Installing the VPN:
Configuring the VPN:
1. connection name: any name like "chipedge-VPN"
2. description: any name like "chipedge-VPN"
3. Give the remote gateway as 106.51.226.92 and port as 10443.
4. Select "Prompt on Login" under Authentication.
5. Click on Apply to save.
6. Configuration is done. You can close now.
Connect to VPN:
Chipedge recommends changing the default password provided. Change the Unix
password as below.
1. On your Linux command prompt:
Linux > passwd
Enter the new password you would like; enter it again when prompted. You are done.
Note down your new password and do not share it with anyone. Please note that you are
accountable for any violations on your account, which could lead to legal action from
Global University.
9. Is it possible to absolutely secure and make the entire computer system safe from any
threats?
In practice, no: an entire computer system cannot be made absolutely safe from every threat,
but with the right precautions the risk can be reduced dramatically.
Let's face it, the Internet is not a very safe place. There are hackers trying to access your
computer, worms trying to infect you, malicious Trojans disguised as helpful programs, and
spyware that reports your activities back to their makers. In many cases those who become
infected unknowingly become a breeding ground for unwanted programs and criminal activity.
It does not have to be this way. With proper education and smart computing the Internet can be
a safe, useful, and fun place to visit without having to worry about what is lurking around the
corner.
The following tips and techniques promote smart and safe computing. By using them you will
not only protect yourself and your data from hackers and viruses, but also keep your computer
running more smoothly and reliably.
Always install Operating System updates
Keep your installed applications up-to-date
Do not reuse the same password everywhere
Install and be sure to update your antivirus software
Use a firewall
Backup your data
Enable the display of file extensions
When installing a piece of software, watch out for bundled toolbars and programs that you
may not want
When installing a piece of software read the end user license agreement so you know what
you’re getting into
Be vigilant when using peer-to-peer software
Some types of websites are more dangerous than others
Ignore emails claiming you have won a contest, or from strangers asking for assistance with
their inheritance
Do not open attachments from people you do not know
A common method that computer infections use to infect your computer is exploiting security
vulnerabilities in your installed programs. Common programs that are targeted due to
their large install base are web browsers, Microsoft Office, Adobe Reader, Adobe Flash,
Adobe Shockwave, and Oracle Java. In order to make your computer as secure as
possible, you need to make sure these programs are updated when new security fixes
are released. The problem is that many people just ignore alerts about new updates,
even though these updates fix security problems that could allow hackers into your
computer.
If you are prompted by a known application that you commonly use stating that there is
a new update, just click the button to allow it to be updated. This is especially true for
web browsers, which are commonly targeted by malicious code on web sites. If there is
a new version of your web browser available, you should upgrade it so that any security
vulnerabilities are fixed.
If you use Windows, there is a great program called Secunia PSI that automatically
scans your computer for applications and automatically updates them for you.
Information about this program can be found at this tutorial:
The purpose of a DMZ is to protect an intranet from external access. By separating the intranet
from hosts that can be accessed outside a local area network (LAN), internal systems are
protected from unauthorized access outside the network. For example, a business may have an
intranet composed of employee workstations. The company's public servers, such as the web server and
mail server could be placed in a DMZ so they are separate from the workstations. If the servers
were compromised by an external attack, the internal systems would be unaffected.
A DMZ can be configured several different ways, but two of the most common include
single firewall and dual firewall architectures. In a single firewall setup, the intranet and DMZ
are on separate networks, but share the same firewall, which monitors and filters traffic from
the ISP. In a dual firewall setup, one firewall is placed between the intranet and the DMZ and
another firewall is placed between the DMZ and the Internet connection. This setup is more
secure since it provides two layers of defense against external attacks.
The term "DMZ" or "Demilitarized Zone" comes from a military term used to describe a neutral
area where military operations are not allowed to take place. These areas typically exist along
the border between two different countries. They serve as a buffer and are designed to prevent
unnecessary escalations of military action. Similarly, a DMZ is a neutral area within a computer
network that can be accessed by both internal and external computer systems.