CC Unit-IV
Cloud Security
Cloud computing security refers to the set of procedures, processes and standards designed to provide information
security assurance in a cloud computing environment.
Cloud computing security addresses both physical and logical security issues across all the different service models
of software, platform and infrastructure. It also addresses how these services are delivered (public, private or hybrid
delivery model).
The Data and Analysis Center for Software (DACS) requires that software must exhibit the following three
properties to be considered secure:
Dependability — Software that executes predictably and operates correctly under a variety of conditions, including
when under attack or running on a malicious host.
Trustworthiness — Software that contains a minimum number of vulnerabilities or no vulnerabilities or
weaknesses that could sabotage the software’s dependability.
Survivability (Resilience) — Software that is resistant to or tolerant of attacks and has the ability to recover as
quickly as possible with as little harm as possible.
1) Confidentiality
Confidentiality refers to the prevention of intentional or unintentional unauthorized disclosure of information.
Confidentiality in cloud systems is related to the areas of intellectual property rights, covert channels, traffic
analysis, encryption, and inference:
Intellectual property rights — Intellectual property (IP) includes inventions, designs, and artistic, musical,
and literary works. Rights to intellectual property are covered by copyright laws, which protect creations of
the mind, and patents, which are granted for new inventions.
Covert channels — A covert channel is an unauthorized and unintended communication path that enables
the exchange of information. Covert channels can be accomplished through timing of messages or
inappropriate use of storage mechanisms.
Traffic analysis — Traffic analysis is a form of confidentiality breach that can be accomplished by analyzing
the volume, rate, source, and destination of message traffic, even if it is encrypted. Increased message
activity and high bursts of traffic can indicate a major event is occurring.
Encryption — Encryption involves scrambling messages so that they cannot be read by an unauthorized
entity, even if they are intercepted. The amount of effort (work factor) required to decrypt the message is a
function of the strength of the encryption key and the robustness and quality of the encryption algorithm (a
short code sketch follows this list).
Inference — Inference is usually associated with database security. Inference is the ability of an entity to use
and correlate information protected at one level of security to uncover information that is protected at a
higher security level.
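To make the encryption item above concrete, here is a minimal sketch using the third-party Python cryptography package; the key handling and message are illustrative only, not a prescribed cloud design:

    from cryptography.fernet import Fernet  # third-party package: pip install cryptography

    key = Fernet.generate_key()        # secret key; its strength drives the attacker's work factor
    cipher = Fernet(key)               # authenticated symmetric encryption
    token = cipher.encrypt(b"customer record")           # unreadable to anyone who intercepts it
    assert cipher.decrypt(token) == b"customer record"   # only the key holder can recover it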
2) Integrity
The concept of cloud information integrity requires that the following three principles are met:
Modifications are not made to data by unauthorized personnel or processes.
Unauthorized modifications are not made to data by authorized personnel or processes.
The data is internally and externally consistent — in other words, the internal information is consistent both
among all sub-entities and with the real-world, external situation.
3) Availability
Availability ensures the reliable and timely access to cloud data or cloud computing resources by the
appropriate personnel. Availability guarantees that the systems are functioning properly when needed. In
addition, this concept guarantees that the security services of the cloud system are in working order. A
denial-of-service (DoS) attack is an example of a threat against availability.
Authentication
Authentication is the testing or reconciliation of evidence of a user’s identity. It establishes the user’s identity and
ensures that users are who they claim to be. For example, a user presents an identity (user ID) to a computer login
screen and then has to provide a password. The computer system authenticates the user by verifying that the
password corresponds to the individual presenting the ID.
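As a minimal sketch of that verification step (an illustration, not any particular system's implementation), the code below stores a salted hash instead of the password and compares in constant time; the iteration count is an assumption:

    import hashlib, hmac, secrets

    def make_record(password: str) -> tuple[bytes, bytes]:
        salt = secrets.token_bytes(16)     # random per-user salt
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest                # store these, never the raw password

    def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)   # constant-time comparison

    salt, digest = make_record("s3cret")
    assert authenticate("s3cret", salt, digest)
    assert not authenticate("guess", salt, digest)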
Authorization
Authorization refers to rights and privileges granted to an individual or process that enable access to computer
resources and information assets. Once a user’s identity and authentication are established, authorization levels
determine the extent of system rights a user can hold.
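A hedged sketch of how authorization levels can be modeled once identity is established; the roles, users, and permissions below are invented for illustration:

    # Role-based authorization: identity -> role -> permitted actions (names illustrative)
    ROLE_PERMISSIONS = {
        "admin":   {"read", "write", "delete"},
        "analyst": {"read"},
    }
    USER_ROLES = {"alice": "admin", "bob": "analyst"}

    def is_authorized(user: str, action: str) -> bool:
        role = USER_ROLES.get(user)
        return action in ROLE_PERMISSIONS.get(role, set())   # no role -> no rights

    assert is_authorized("alice", "delete")
    assert not is_authorized("bob", "write")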
Auditing
To maintain operational assurance, organizations use two basic methods: system audits and monitoring. These
methods can be employed by the cloud customer, the cloud provider, or both, depending on asset architecture and
deployment. In addition, IT auditors might recommend improvements to controls, and they often participate in a
system’s development process to help an organization avoid costly reengineering after the system’s implementation.
Information technology (IT) auditors are often divided into two types: internal and external.
Internal auditors typically work for a given organization, whereas external auditors do not. Internal auditors
usually have a much broader mandate than external auditors, such as checking for compliance and standards
of due care, auditing operational cost efficiencies, and recommending the appropriate controls.
External auditors are often certified public accountants (CPAs) or other audit professionals who are hired to
perform an independent audit of an organization’s financial statements.
Least Privilege
The principle of least privilege maintains that an individual, process, or other type of entity should be given
the minimum privileges and resources for the minimum period of time required to complete a task.
This approach reduces the opportunity for unauthorized access to sensitive information.
Separation of Duties
Separation of duties requires that completion of a specified sensitive activity or access to sensitive objects is
dependent on the satisfaction of a plurality of conditions.
For example, an authorization would require signatures of more than one individual, or the arming of a
weapons system would require two individuals with different keys.
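A toy sketch of that two-person rule, assuming approvals are collected as a set of distinct approver names:

    # Require approvals from two *different* individuals before a sensitive action runs.
    def can_execute(approvals: set[str], required: int = 2) -> bool:
        return len(approvals) >= required   # a set deduplicates, so approvers are distinct

    approvals = {"officer_a"}
    assert not can_execute(approvals)       # one signature is not enough
    approvals.add("officer_b")
    assert can_execute(approvals)           # two distinct signatures satisfy the rule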
Defense in Depth
Defense in depth is the application of multiple layers of protection wherein a subsequent layer will provide
protection if a previous layer is breached.
The defense-in-depth strategy as defined in IATF (Information Assurance Technical Framework) promotes
application of the following information assurance principles:
Defense in multiple places — Information protection mechanisms placed in a number of locations to protect
against internal and external threats.
Layered defenses — A plurality of information protection and detection mechanisms employed so that an
adversary or threat must negotiate a series of barriers to gain access to critical information.
Security robustness — An estimate of the robustness of information assurance elements based on the value
of the information system component to be protected and the anticipated threats.
Deploy KMI/PKI — Use of robust key management infrastructures (KMI) and public key infrastructures
(PKI).
Deploy intrusion detection systems — Application of intrusion detection mechanisms to detect intrusions,
evaluate information, examine results, and, if necessary, take action
Fail Safe
Fail safe means that if a cloud system fails, it should fail to a state in which the security of the system and
its data are not compromised.
One implementation would be to make a system default to a state in which a user or process is denied access
to the system. A complementary rule would be to ensure that when the system recovers, it should recover to a
secure state and not permit unauthorized access to sensitive information.
In the situation where system recovery is not done automatically, the failed system should permit access only
by the system administrator and not by other users, until security controls are reestablished.
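A minimal sketch of the default-deny behavior described above, assuming a simple in-memory permission store; any error while evaluating access results in denial rather than an open system:

    PERMISSIONS = {("alice", "report.pdf"): True}   # illustrative permission store

    def access_allowed(user: str, resource: str) -> bool:
        try:
            return PERMISSIONS[(user, resource)]    # access requires an explicit grant
        except Exception:
            return False                            # fail safe: any failure means "deny"

    assert access_allowed("alice", "report.pdf")
    assert not access_allowed("mallory", "report.pdf")   # unknown subject -> denied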
Economy of Mechanism
Economy of mechanism promotes simple and comprehensible design and implementation of protection
mechanisms, so that unintended access paths do not exist or can be readily identified and eliminated.
Complete Mediation
In complete mediation, every request by a subject to access an object in a computer system must undergo a
valid and effective authorization procedure.
This mediation must not be suspended or become capable of being bypassed, even when the information
system is being initialized, undergoing shutdown, being restarted, or is in maintenance mode.
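One way to sketch complete mediation in Python is a decorator that forces every invocation through an authorization check, leaving no code path that reaches the object without it; the policy function here is a stand-in:

    import functools

    def mediated(check):
        """Wrap an operation so that every invocation passes the authorization check."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(subject, *args, **kwargs):
                if not check(subject):                         # mediation cannot be skipped
                    raise PermissionError(f"{subject} denied")
                return func(subject, *args, **kwargs)
            return wrapper
        return decorator

    @mediated(check=lambda subject: subject == "alice")        # stand-in policy
    def read_object(subject, object_id):
        return f"{subject} read {object_id}"

    print(read_object("alice", "obj-1"))     # authorized request succeeds
    # read_object("mallory", "obj-1")        # would raise PermissionError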
Open Design
There has always been an ongoing discussion about the merits and strengths of security designs that are kept
secret versus designs that are open to scrutiny and evaluation by the community at large.
A good example is an encryption system. Some feel that keeping the encryption algorithm secret makes it
more difficult to break. The opposing philosophy believes that exposing the algorithm to review and study by
experts at large while keeping the encryption key secret leads to a stronger algorithm because the experts
have a higher probability of discovering weaknesses in it.
For most purposes, an open-access cloud system design that has been evaluated and tested by a myriad of experts
provides a more secure authentication method than one that has not been widely assessed.
Weakest Link
As the saying goes, “a chain is only as strong as its weakest link”: the security of a cloud system is only as
good as its weakest component. Thus, it is important to identify the weakest mechanisms in the security
chain and layers of defense, and improve them so that risks to the system are mitigated to an acceptable level.
Implementation Issues
Cloud software security requirements are a function of policies such as system security policies, software policies,
and information system policies.
Important areas addressed by a software system’s cloud security policy include the following:
Access controls
Data protection
Confidentiality
Integrity
Identification and authentication
Communication security
Accountability
Security policy functional requirements:
Derive the detailed functional requirements, e.g., “The server should return public-access Web pages to any
browser that requests those pages.”
Identify the related constraint requirements, e.g., “The server should return restricted Web pages only to
browsers that are acting as proxies for users with authorized privileges sufficient to access those Web pages.”
Derive the functional security requirements, e.g., “The server must authenticate every browser that requests
access to a restricted Web page.”
Identify the related negative requirements, e.g., “The server must not return a restricted Web page to any
browser that it cannot authenticate.”
Sources of input to secure software policies specify the following items:
System and Services Acquisition — “Organizations must (i) employ system development life cycle processes that
incorporate information security considerations; (ii) employ software usage and installation restrictions; and (iii)
ensure that third-party providers employ adequate security measures to protect information, applications, and/or
services outsourced from the organization.”
System and Communications Protection — “Organizations must . . . (i) employ architectural designs, software
development techniques, and systems engineering principles that promote effective information security within
organizational information systems.”
System and Information Integrity — “Organizations must: (i) identify, report, and correct information and
information system flaws in a timely manner; (ii) provide protection from malicious code at appropriate locations
within organizational information systems.”
Policy Types
1. Senior management statement of policy
2. Regulatory policy
3. Advisory policy
4. Informative policy
Regulatory Policies
Regulatory policies are security policies that an organization must implement due to compliance, regulation,
or other legal requirements.
These companies might be financial institutions, public utilities, or some other type of organization.
Advisory Policies
Advisory policies are security policies that are not mandated but strongly suggested, perhaps with serious
consequences defined for failure to follow them.
A company with such policies wants most employees to consider these policies mandatory.
Informative Policies
Informative policies are policies that exist simply to inform the reader. There are no implied or specified
requirements, and the audience for this information could be certain internal (within the organization) or
external parties.
This does not mean that the policies are authorized for public consumption but that they are general enough
to be distributed to external parties (vendors accessing an extranet, for example) without a loss of
confidentiality.
Virtual Threats
Some threats to virtualized systems are general in nature, as they are inherent threats to all computerized systems
(such as denial-of-service, or DoS, attacks).
Various organizations are currently conducting security analyses and proof-of-concept (PoC) attacks against
virtualized systems, and security work in virtual environments highlights some of the vulnerabilities exposed
to malicious-minded individuals:
Shared clipboard: Shared clipboard technology allows data to be transferred between VMs and the host,
providing a means of moving data between malicious programs in VMs of different security realms.
Keystroke logging: Some VM technologies enable the logging of keystrokes and screen updates to be passed
across virtual terminals in the virtual machine, writing to host files and permitting the monitoring of
encrypted terminal connections inside the VM.
VM monitoring from the host: Because all network packets coming from or going to a VM pass through the
host, the host may be able to affect the VM by the following:
Starting, stopping, pausing, and restarting VMs
Monitoring and configuring resources available to the VMs, including CPU, memory, disk, and network
usage of VMs
Adjusting the number of CPUs, amount of memory, amount and number of virtual disks, and number of
virtual network interfaces available to a VM
Monitoring the applications running inside the VM
Viewing, copying, and modifying data stored on the VM’s virtual disks
Virtual machine monitoring from another VM: Usually, VMs should not be able to directly access one
another’s virtual disks on the host. However, if the VM platform uses a virtual hub or switch to connect the
VMs to the host, then intruders may be able to use a hacker technique known as “ARP poisoning” to redirect
packets going to or from the other VM for sniffing.
Virtual machine backdoors: A backdoor, or covert communications channel, between the guest and host could
allow intruders to perform potentially dangerous operations.
VM THREAT LEVELS
Virtual threats are classified into three levels of compromise:
Abnormally terminated: Availability of the virtual machine is compromised, as the VM is placed into an
infinite loop that prevents the VM administrator from accessing the VM’s monitor.
Partially compromised: The virtual machine allows a hostile process to interfere with the virtualization
manager, contaminating state checkpoints or over-allocating resources.
Totally compromised: The virtual machine is completely overtaken and directed to execute unauthorized
commands on its host with elevated privileges.
Hypervisor Risks
The hypervisor is the part of a virtual machine that allows host resource sharing and enables VM/host isolation.
Therefore, the ability of the hypervisor to provide the necessary isolation during an intentional attack greatly
determines how well the virtual machine can survive risk.
Vulnerabilities in Hypervisor
Rogue Hypervisor: Rootkits that target virtualization, and in particular the hypervisor, have been gaining traction in
the hacker community. VM-based rootkits can hide from normal malware detection systems by initiating a “rogue”
hypervisor and creating a covert channel to dump unauthorized code into the system.
External Modification of the Hypervisor
In addition to the execution of the rootkit payload, a poorly protected or designed hypervisor can also create an
attack vector. An insufficiently protected virtual machine may therefore allow direct modification of its hypervisor
by an external intruder. This can occur in virtualized systems that don’t validate the hypervisor as a regular process.
VM Escape
Due to the host machine’s fundamentally privileged position in relation to the VM, an improperly configured
VM could allow code to completely bypass the virtual environment, and obtain full root or kernel access to the
physical host. This would result in a complete failure of the security mechanisms of the system, and is called VM
escape. Virtual machine escape refers to the attacker’s ability to execute arbitrary code on the VM’s physical host,
by “escaping” the hypervisor.
9) Contractual Issues with Vendors: To protect the interests of your business, it is essential that you read the
terms and conditions carefully and settle on a clear understanding of the contract before signing up for cloud
services.
If the cloud service provider offers a standard form of contract (which is a general practice), then you must
be fully aware of all the terms and conditions written in it. This will save you from unpleasant surprises, and
you will be financially, mentally, and legally prepared to protect your business from unfavorable
consequences of cloud computing.
Some issues with contracts are mentioned below:
Unclear software warranties
Varied intellectual property rights around the world
Warranties and clauses to protect customers
Unclear jurisdiction/legal governance of data-center
Restrictive data export regulations.
Conflict in inter-country laws.
Data storage in a country with fewer laws or without laws for data protection.
10) Service-Level Agreements (SLAs): SLAs are important in any cloud service contract. We have to pay
attention to these points (a quick availability calculation follows the list):
How is the availability calculated by the provider?
How will performance be independently measured?
How much downtime is acceptable?
What procedure will be followed if the service provider fails to deliver satisfactory service?
What procedure will be followed at the end of service for destruction of data in the provider’s data centers?
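To see what an availability percentage implies in practice, here is a quick calculation of the downtime a given guarantee allows per 30-day month; the 99.9% and 99.99% figures are examples only:

    # Allowed downtime per 30-day month for a given availability guarantee.
    def allowed_downtime_minutes(availability_pct: float, days: int = 30) -> float:
        return days * 24 * 60 * (1 - availability_pct / 100)

    print(allowed_downtime_minutes(99.9))    # 43.2 minutes per month
    print(allowed_downtime_minutes(99.99))   # about 4.3 minutes per month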
11) Lack of Laws Dedicated to Cloud Computing: Most countries don’t have specialized rules, regulations,
and laws for dealing with legal issues in cloud computing. Most disputes and claims are settled by companies
according to their own guidelines, most of which are clearly not favorable to customers. In developed
countries such as the U.S.A. or the E.U. member states, some rules have been developed for the Internet and
data protection. When jurisdiction lies with the service provider’s nation, the customer may feel helpless to
get legal help. In recent times new regulations and laws are being developed worldwide, but there is still a
need for more specialized laws and regulatory institutions to build a strong legal base for handling legal
issues in cloud computing.
Compliance
In a public cloud environment, the provider does not normally inform the clients of the storage location of their data.
In fact, the distribution of processing and data storage is one of the cloud’s fundamental characteristics. However,
the cloud provider should cooperate to consider the client’s data location requirements. In addition, the cloud vendor
should provide transparency to the client by supplying information about storage used, processing characteristics,
and other relevant account information. Another compliance issue is the accessibility of a client’s data by the
provider’s system engineers and certain other employees.
This factor is a necessary part of providing and maintaining cloud services, but the act of acquiring sensitive
information should be monitored, controlled, and protected by safeguards such as separation of duties.
Security Management
Security architecture involves effective security management to realize the benefits of cloud computing. Proper
cloud security management and administration should identify management issues in critical areas such as access
control, vulnerability analysis, change control, incident response, fault tolerance, and disaster recovery and business
continuity planning.
Controls
The objective of cloud security controls is to reduce vulnerabilities to a tolerable level and minimize the effects of an
attack. To achieve this, an organization must determine what impact an attack might have, and the likelihood of loss.
Examples of loss are compromise of sensitive information, financial embezzlement, loss of reputation, and physical
destruction of resources. The process of analyzing various threat scenarios and producing a representative value for
the estimated potential loss is known as a risk analysis (RA). Controls function as countermeasures for
vulnerabilities. There are many kinds of controls, but they are generally categorized into one of the following four
types: deterrent controls, which discourage violations of security policy; preventative controls, which block
violation attempts; corrective controls, which reduce the effect of an attack; and detective controls, which discover
attacks and trigger preventative or corrective controls.
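A common quantitative form of the risk analysis (RA) mentioned above is the annualized loss expectancy (ALE); the asset value, exposure factor, and occurrence rate below are invented for illustration:

    # ALE = SLE x ARO, where SLE = asset value x exposure factor.
    asset_value = 500_000              # illustrative value of the resource at risk
    exposure_factor = 0.30             # fraction of the asset lost per incident
    annual_rate_of_occurrence = 0.5    # expected incidents per year

    sle = asset_value * exposure_factor       # single loss expectancy: 150,000
    ale = sle * annual_rate_of_occurrence     # annualized loss expectancy: 75,000
    print(f"SLE = {sle:,.0f}, ALE = {ale:,.0f}")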
Information Classification
A major area that relates to compliance and can affect the cloud security architecture is information classification. The
information classification process also supports disaster recovery planning and business continuity planning.
Information Classification Benefits
Employing information classification has several clear benefits to an organization engaged in cloud computing.
Some of these benefits are as follows:
It demonstrates an organization’s commitment to security protections.
It helps identify which information is the most sensitive or vital to an organization.
It supports the tenets of confidentiality, integrity, and availability as it pertains to data.
It helps identify which protections apply to which information.
It might be required for regulatory, compliance, or legal reasons.
Public data: Information that is similar to unclassified information; all of a company’s information that does not fit
into any of the following categories can be considered public. While its unauthorized disclosure may be against
policy, it is not expected to seriously or adversely impact the organization, its employees, and/or its customers.
Sensitive data: This information is protected from a loss of confidentiality as well as from a loss of integrity due to
an unauthorized alteration. This classification applies to information that requires special precautions to ensure its
integrity by protecting it from unauthorized modification or deletion. It is information that requires a higher-than-
normal assurance of accuracy and completeness.
Private data: This classification applies to personal information that is intended for use within the organization. Its
unauthorized disclosure could seriously and adversely impact the organization and/or its employees. For example,
salary levels and medical information are considered private.
Confidential data: This classification applies to the most sensitive business information that is intended strictly for
use within the organization. Its unauthorized disclosure could seriously and adversely impact the organization, its
stockholders, its business partners, and/or its customers.
For example, information about new product development, trade secrets, and merger negotiations is considered
confidential.
Security Awareness
Security awareness is often overlooked as an element affecting cloud security architecture because most of a security
practitioner’s time is spent on controls, intrusion detection, risk assessment, and proactively or reactively
administering security. Employees of both the cloud client and the cloud provider must be aware of the need to
secure information and protect the information assets of an enterprise. An effective computer security awareness and
training program requires proper planning, implementation, maintenance, and periodic evaluation.
The purpose of computer security awareness, training, and education is to enhance security by doing the following:
Improving awareness of the need to protect system resources
Developing skills and knowledge so computer users can perform their jobs more securely
Building in-depth knowledge, as needed, to design, implement, or operate security programs for
organizations and systems
A computer security awareness and training program should encompass the following seven steps: identify program
scope, goals, and objectives; identify training staff; identify target audiences; motivate management and employees;
administer the program; maintain the program; and evaluate the program.
Passwords
Because passwords can be compromised, they must be protected. In the ideal case, a password should be used only
once. This “one-time password,” or OTP, provides maximum security because a new password is required for each
new logon. A password that is the same for each logon is called a static password. A password that changes with
each logon is termed a dynamic password.
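To make the dynamic (one-time) password idea concrete, here is a minimal counter-based OTP sketch in the style of HOTP (RFC 4226); the shared secret is illustrative:

    import hashlib, hmac, struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # HMAC-SHA1 over the big-endian 8-byte counter, then dynamic truncation (RFC 4226)
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    secret = b"shared-token-secret"           # provisioned into both the token and the server
    print(hotp(secret, 1), hotp(secret, 2))   # a fresh password for every counter value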
Passwords can be provided by a number of devices, including tokens, memory cards, and smart cards.
Tokens: Tokens, in the form of small, hand-held devices, are used to provide passwords. The following are
the four basic types of tokens:
Static password tokens
Synchronous dynamic password tokens, clock-based
Synchronous dynamic password tokens, counter-based
Asynchronous tokens, challenge-response
Memory cards: Memory cards provide nonvolatile storage of information, but they do not have any
processing capability. A memory card stores encrypted passwords and other related identifying information.
A telephone calling card and an ATM card are examples of memory cards.
Smart cards: Smart cards provide even more capability than memory cards by incorporating additional
processing power on the cards. These credit-card-size devices comprise a microprocessor and memory and
are used to store digital signatures, private keys, passwords, and other personal information.
Biometrics: An alternative to using passwords for authentication in logical or technical access control is
biometrics.
Access Control
Access control is intrinsically tied to identity management and is necessary to preserve the confidentiality, integrity,
and availability of cloud data. These and other related objectives flow from the organizational security policy. This
policy is a high-level statement of management intent regarding the control of access to information and the
personnel who are authorized to receive that information. Three things that must be considered for the planning and
implementation of access control mechanisms are threats to the system, the system’s vulnerability to these threats,
and the risk that the threats might materialize.
These concepts are defined as follows:
Threat — An event or activity that has the potential to cause harm to the information systems or networks
Vulnerability — A weakness or lack of a safeguard that can be exploited by a threat, causing harm to the
information systems or networks
Risk — The potential for harm or loss to an information system or network; the probability that a threat will
materialize
Autonomic Security
Autonomic security refers to security techniques based on autonomic computing, which is self-managing,
reconfigurable according to changing conditions, and self-healing. It offers capabilities that can improve the
security of information systems and cloud computing.
The ability of autonomic security to collect and interpret data and recommend or implement solutions can enhance
security and provide recovery from harmful events.
An autonomic security system is self-managing: it monitors changes that affect the system and maintains
internal balances of the processes associated with security. It has (a minimal control-loop sketch follows this
list):
Sensory input
Decision making capabilities
Ability to implement remedial actions
Ability to maintain an equilibrium state of normal operations.
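A toy sketch of such a sense-decide-act loop; the sensor data, threshold, and blocking action are invented for illustration and far simpler than a real autonomic system:

    # Minimal autonomic-style step: sensory input -> decision -> remedial action.
    failed_logins = {"10.0.0.5": 7, "10.0.0.9": 1}   # illustrative sensor readings
    blocked: set[str] = set()

    def autonomic_step(threshold: int = 5) -> None:
        for host, failures in failed_logins.items():
            if failures > threshold and host not in blocked:
                blocked.add(host)            # remedial action: isolate the offending host
                failed_logins[host] = 0      # restore the equilibrium state

    autonomic_step()
    print(blocked)   # {'10.0.0.5'}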
Examples of events that can be handled autonomously by such a system include the following:
Malicious attacks
Hardware or software faults
Power failures
Organizational policies
Software updates
Interactions among systems
Unintentional operator errors
The characteristics of autonomic computing systems introduced by IBM are as follows:
Self-awareness
Self-configuring
Self-optimizing
Self-healing
Self-protecting
Context aware
Open
Anticipatory
Autonomic security and protection techniques involve detecting harmful situations and taking actions that
will mitigate them. These systems are designed to predict problems from analysis of sensory inputs and to
initiate corrective actions.
An autonomous system security response is based on network knowledge, the capabilities of connected
resources, the information and complexity of the situation, and the impact on the affected application or component.
The decision-making element of autonomic computing can take actions such as changing the strength of
required authentication or modifying encryption keys. According to the current security position and context,
the state of the system can be changed and the level of authorization modified immediately.
Guidelines for autonomous protection systems:
Minimize overhead requirements
Be consistent with security policies
Optimize security-related parameters
Minimize impact on performance
Minimize potential vulnerabilities
Conduct regression analysis
Ensure that reconfiguration processes are secure
Disaster
A disaster is a rapidly occurring or unstoppable event that can cause suffering, loss of life, or damage.
In many instances, the aftermath of a disaster can impact social or natural conditions for a long period of
time.
A DRP is a comprehensive statement of consistent actions to be taken before, during, and after a disruptive
event that causes a significant loss of information systems resources.
The number one priority of a DRP is personnel safety and evacuation, followed by the recovery of data center
operations and business operations and processes.
Specific areas that can be addressed by cloud providers include the following:
Protecting an organization from a major computer services failure
Providing extended backup operations during an interruption
Providing the capability to implement critical processes at an alternate site
Guaranteeing the reliability of standby systems through testing and simulations
Returning to the primary site and normal processing within a time frame that minimizes business loss
by executing rapid recovery procedures.
Minimizing the decision-making required by personnel during a disaster
Providing an organized way to make decisions if a disruptive event occurs
Minimizing the risk to the organization from delays in providing service
A business continuity plan addresses the means for a business to recover from disruptions and continue support for
critical business functions. It is designed to protect key business processes from natural or man-made failures or
disasters and the resultant loss of capital due to the unavailability of normal business processes. A BCP includes a
business impact assessment (BIA), which, in turn, contains a vulnerability assessment.
A BIA is a process used to help business units understand the impact of a disruptive event. A vulnerability
assessment is similar to a risk assessment in that it contains both a quantitative (financial) section and a qualitative
(operational) section.
DISASTER RECOVERY PLANNING
The primary objective of a disaster recovery plan is to provide the capability to implement critical processes at an
alternate site and return to the primary site and normal processing within a time frame that minimizes loss to the
organization by executing rapid recovery procedures. In many scenarios, the cloud platforms already in use by a
customer are extant alternate sites. Disasters primarily affect availability, which impacts the ability of staff to access
the data and systems, but it can also affect the other two tenets, confidentiality and integrity. In the recovery plan, a
classification scheme can be used to classify the recovery time-frame needs of each business function.
The DRP should address all information processing areas of the company:
Cloud resources being utilized
LANs, WANs, and servers
Telecommunications and data communication links
Workstations and workspaces
Applications, software, and data
Media and records storage
Staff duties and production processes
Backup services are important elements in the disaster recovery plan. The typically used alternative services are as follows:
Mutual aid agreements: An arrangement with another company that might have similar computing needs. The
other company may have similar hardware or software configurations or may require the same network data
communications or Internet access.
Subscription services: Third-party commercial services that provide alternate backup and processing facilities. An
organization can move its IT processing to the alternate site in the event of a disaster.
Multiple centers: Processing is spread over several operations centers, creating a distributed approach to
redundancy and sharing of available resources. These multiple centers could be owned and managed by the same
organization (in-house sites) or used in conjunction with a reciprocal agreement.
Service bureaus: Setting up a contract with a service bureau to fully provide all alternate backup-processing
services. The disadvantages of this arrangement are primarily the expense and resource contention during a large
emergency.
RISK MITIGATION
Risk mitigation is a systematic approach to reduce the extent of exposure to a risk and the probability of its
occurrence.
In the cloud, risk mitigation is the process of selecting and implementing security controls to reduce the risk
to a level acceptable to both the cloud provider and the customer.
It is the identification of ways to minimize or eliminate expected and encountered risks. Depending upon the
impact of a risk and the level of effort required for the mitigation strategies, it may be appropriate to initiate
several mitigation activities.
A mitigation strategy reflects an organizational perspective on what mitigations are employed and where
they are applied to reduce risks to organizational operations and resources. Risk mitigation strategies are the
primary links between the organizational risk management process and security policies. Effective risk
mitigation strategies consider the general placement and allocation of mitigations and the degree of intended
mitigation, and they cover mitigations at each level of the organization.
Risk mitigation is the final step of the risk management process; it includes prioritization of risks, risk
evaluation, and implementation of appropriate risk-reducing controls recommended by the risk assessment process.
After risks have been identified and assessed, risk mitigation enters its major phase of risk treatment; all
techniques to manage the risk fall into one of these four major categories:
Risk avoidance (eliminate, withdraw from or not become involved)
Risk reduction (optimize, mitigate)
Risk sharing (transfer, outsource or insure)
Risk retention (accept and adjust)
The process of risk mitigation in a cloud environment includes these major goals:
Preparing the system technically and managerially to face the threats.
Preparing a proper risk management plan that includes solutions for risk treatment.
Designing the system for reconfigurability as per requirements.
Minimizing the effect of an intentional or unintentional disruptive event on the system under threat.
Providing recommended solutions for handling consequences.
Preparing a proper backup plan and alternative service options.
From a legal perspective, the necessary terms and conditions that bind the service provider to provide services
continually to the service consumer are formally defined in an SLA.
TYPES OF SLA
There are two types of SLAs from the perspective of application hosting.
Infrastructure SLA. The infrastructure provider manages and offers guarantees on availability of the infrastructure,
namely, server machines, power, network connectivity, and so on. Enterprises themselves manage the applications
that are deployed on these server machines. The machines are leased to the customers and are isolated from the
machines of other customers.
Application SLA. In the application co-location hosting model, the server capacity is available to the applications
based solely on their resource demands. Hence, the service providers are flexible in allocating and de-allocating
computing resources among the co-located applications.
From an SLA perspective, there are multiple challenges in provisioning infrastructure:
1) The application is a black box to the MSP (managed service provider), and the MSP has virtually no knowledge
of the application’s runtime characteristics. Therefore, the MSP needs to determine the right amount of computing
resources required for the different components of an application at various workloads.
2) The MSP needs to understand the performance bottlenecks and the scalability of the application.
3) The MSP analyzes the application before it goes live. However, subsequent operations or enhancements by
customers to their applications, automatic updates, and other changes can impact the performance of the
applications, thereby putting the application SLA at risk.
4) The risk of capacity planning is with the service provider instead of the customer. If every customer decides to
select the highest grade of SLA simultaneously, there may not be a sufficient number of servers for provisioning and
meeting the SLA obligations of all the customers.
1) Feasibility Analysis
A feasibility report consists of the results of the three feasibility studies: technical, infrastructure, and financial. The
report forms the basis for further communication with the customer. Once the provider and customer agree upon the
findings of the report, the outsourcing of the application hosting activity proceeds to the next phase, called
“on-boarding” of the application.
2) On-Boarding of Application
Once the customer and the MSP agree in principle to host the application based on the findings of the feasibility
study, the application is moved from the customer servers to the hosting platform. Moving an application to the
MSP’s hosting platform is called on-boarding. As part of the on-boarding activity, the MSP understands the
application runtime characteristics using runtime profilers. This helps the MSP to identify the possible SLAs that can
be offered to the customer for that application. This also helps in creation of the necessary policies (also called rule
sets) required to guarantee the SLOs (service-level objectives) mentioned in the application SLA. The application is
accessible to its end users only after the on-boarding activity is completed.
TRUST MANAGEMENT
Probably the most critical issue to address before cloud computing can become the preferred computing paradigm is
that of establishing trust. Mechanisms to build and maintain trust between cloud computing consumers and cloud
computing providers, as well as between cloud computing providers among themselves, are essential for the success
of any cloud computing offering.
With the popularity and growth of cloud computing, service providers make new services available on clouds. All
these services and service providers have varying levels of quality, and due to the anonymous nature of cloud
computing, some dishonest or unprincipled service providers may try to cheat unaware, unsuspecting clients.
Hence it becomes necessary to identify the quality of services and the service providers who would meet the trust
requirements of customers.
Trust management is a key issue that needs special attention, and it is an important component of cloud
security.
Trust management is an abstract system that processes symbolic representations of social trust, usually to aid
an automated decision-making process. It establishes and increases users’ trust in cloud computing systems.
Such representations, like cryptographic credentials, can link the abstract system of trust management with
the results of trust assessment. Trust management is popular in implementing information security,
specifically access control policies.
A trust management system provides assurance to users that their data is secure and confidential with a
particular cloud service provider.
The concept of trust management has been introduced to assist the automated verification of actions against
security policies. The definition of trust covers honesty, truthfulness, competence, and reliability.
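One simple way such a system can turn client feedback into a trust assessment is a running average per provider that weights recent ratings more heavily; this scheme is purely illustrative, not a standard algorithm:

    # Aggregate client feedback (ratings in 0.0-1.0) into a per-provider trust score.
    def trust_score(ratings: list[float], decay: float = 0.8) -> float:
        score, weight_sum = 0.0, 0.0
        for age, rating in enumerate(reversed(ratings)):   # newest rating first
            w = decay ** age                               # older feedback counts less
            score += w * rating
            weight_sum += w
        return score / weight_sum if weight_sum else 0.0

    print(round(trust_score([0.9, 0.8, 0.2]), 3))   # a recent poor rating drags trust down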
Trust management in cloud computing involves these aspects:
Data integrity and privacy protection
Trusted cloud computing over data-centers
Security Aware cloud architecture
Virtual network security and trust negotiation
Defense of virtualized resources
Guarantee of confidentiality, integrity and availability.