CC_Unit IV
Data Security
Cloud Provisioning
Cloud provisioning is the allocation of a cloud provider's resources and services to a customer.
Cloud provisioning is a key feature of the cloud computing model, relating to how a customer
procures cloud services and resources from a cloud provider. The growing catalog of cloud
services that customers can provision includes infrastructure as a service (IaaS), software as a
service (SaaS) and platform as a service (PaaS) in public or private cloud environments.
With dynamic provisioning, cloud resources are deployed flexibly to match a customer's
fluctuating demands. Cloud deployments typically scale up to accommodate spikes in usage and
scale down when demands decrease. The customer is billed on a pay-per-use basis. When
dynamic provisioning is used to create a hybrid cloud environment, it is sometimes referred to
as cloud bursting.
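To make the scale-out/scale-in logic concrete, here is a minimal sketch in Python; the thresholds and the rebalance helper are invented for illustration, not taken from any particular provider.

    # Sketch: the threshold logic behind dynamic provisioning (illustrative only).
    SCALE_OUT_AT = 0.80   # add capacity above 80% utilization
    SCALE_IN_AT = 0.30    # release capacity below 30% utilization

    def rebalance(current_load: float, capacity_units: int) -> int:
        """Return the new number of capacity units for the observed load."""
        utilization = current_load / capacity_units
        if utilization > SCALE_OUT_AT:
            capacity_units += 1       # scale up to absorb a usage spike
        elif utilization < SCALE_IN_AT and capacity_units > 1:
            capacity_units -= 1       # scale down; pay only for what is used
        return capacity_units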
With user self-provisioning, also called cloud self-service, the customer buys resources from the
cloud provider through a web interface or portal. This usually involves creating a user account
and paying for resources with a credit card. Those resources are then quickly spun up and made
available for use -- within hours, if not minutes. Examples of this type of cloud provisioning
include an employee purchasing cloud-based productivity applications via the Microsoft 365
suite or G Suite.
Cloud provisioning can be conducted through one of three processes: advanced, dynamic and user self-provisioning. With advanced provisioning, the customer signs a formal contract of service with the cloud provider, which prepares and delivers the agreed-upon resources; the customer is charged a flat fee or billed monthly.
Organizations can also benefit from cloud provisioning's speed. For example, an organization's
developers can quickly spin up an array of workloads on demand, removing the need for an IT
administrator who provisions and manages compute resources.
Another benefit of cloud provisioning is the potential cost savings. While traditional on-premises
technology can exact large upfront investments from an organization, many cloud providers
allow customers to pay for only what they consume. However, the attractive economics of cloud services can bring challenges of their own, which organizations should address in a cloud management strategy.
Resource and service dependencies. Applications and workloads in the cloud often tap into
basic cloud infrastructure resources, such as compute, networking and storage. Beyond those,
public cloud providers' big selling point is in higher-level ancillary services, such as serverless
functions, machine learning and big data capabilities. However, those services may carry
dependencies that might not be obvious, which can lead to unexpected overuse and surprise
costs.
Policy enforcement. A self-service provisioning model helps streamline how users request and
manage cloud resources but requires strict rules to ensure they don't provision resources they
shouldn't. Recognize that different groups of users require different levels of access and
frequency -- a DevOps team may deploy multiple daily updates, while line-of-business users
might use self-service provisioning on a quarterly basis. Set up rules that govern who can
provision which types of resources, for what duration and with what budgetary controls,
including a chargeback system.
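As one hedged illustration of such a rule on AWS, the Python/boto3 sketch below registers an IAM policy that denies provisioning of any EC2 instance type outside an approved list; the policy name and the approved types are assumptions for the example.

    import json
    import boto3

    # Guardrail: deny launching any EC2 instance type outside the approved list.
    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}
            },
        }],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="SelfServiceInstanceGuardrail",   # hypothetical name
        PolicyDocument=json.dumps(policy_doc),
    )

Attaching the policy to a group of self-service users enforces the rule no matter which console or tool they provision from.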
Adherence to policies also creates consistency in cloud provisioning. For example, specify
related steps such as backup, monitoring and integration with a configuration management
database -- even agreed-upon naming conventions when a resource is provisioned to ensure
consistency for management and monitoring.
Cost controls. Beyond provisioning policies, automated monitoring and alerts about usage and
pricing thresholds are essential. Be aware that these might not be real-time warnings; in fact, an
alert about an approaching budget overrun for a cloud service could arrive hours or days after the
fact.
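On AWS, one common way to automate such alerts is a CloudWatch billing alarm. The Python/boto3 sketch below raises an alarm when estimated month-to-date charges cross a threshold; the alarm name, the 500-dollar threshold and the SNS topic ARN are placeholders.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm when the estimated month-to-date bill exceeds $500.
    cloudwatch.put_metric_alarm(
        AlarmName="monthly-spend-over-500",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                  # billing data updates a few times a day,
        EvaluationPeriods=1,           # which is why alerts are not real time
        Threshold=500.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )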
Cloud providers offer native provisioning tools, such as AWS CloudFormation. Alternatively, third-party tools for cloud resource provisioning include the following:
CloudBolt
Morpheus Data
Scalr
Some organizations further automate the provisioning process as part of a broader cloud
management strategy through orchestration and configuration management tools, such as
HashiCorp's Terraform, Red Hat Ansible, Chef and Puppet.
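As a concrete example of driving a native tool from code, the Python/boto3 sketch below asks AWS CloudFormation to provision a stack containing a single S3 bucket; the stack name and minimal template are invented for illustration.

    import json
    import boto3

    # A minimal CloudFormation template: one S3 bucket with default settings.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {"DemoBucket": {"Type": "AWS::S3::Bucket"}},
    }

    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName="provisioning-demo",      # placeholder name
        TemplateBody=json.dumps(template),
    )

Deleting the stack later removes every resource it created, which is what makes template-driven provisioning repeatable and auditable.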
Several providers have products designed for cloud computing management (VMware,
OpenQRM, CloudKick, and Managed Methods), along with the big players like BMC, HP, IBM
Tivoli and CA. Each uses a variety of methods to warn of impending problems or send up the red
flag when a sudden problem occurs. Each also tracks performance trends.
While they all have features that differentiate them from each other, they’re also focused on one
key concept: providing information about cloud computing systems. If your needs extend to provisioning, the choices become more distinct than choosing "agent vs. agentless" or "SNMP vs. WBEM."
The main cloud infrastructure management products offer similar core features: support for hybrid environments, resource provisioning, and monitoring and metrics. When it comes to meeting those three criteria, there are a few vendors that offer pervasive approaches to handling provisioning and managing metrics in hybrid environments: RightScale, Kaavo, Zeus, Scalr and Morph. There are also options offered by cloud vendors themselves that meet the second and third criteria, such as CloudWatch from Amazon Web Services.
These are the best infrastructure management and provisioning options available today:
1. RightScale
RightScale is the big boy on the block right now. Like many vendors in the nascent market, they offer a free edition with limitations on features and capacity, designed to introduce you to the product (and maybe get you hooked, a la K.C. Gillette's famous business model at the turn of the 20th century). RightScale's product is broken down into four components:
a cloud management environment
cloud-ready server templates and a deployment library
an automation engine
the Multi-Cloud Engine
A fifth feature states that the “Readily Extensible Platform supports programmatic access to the
functionality of the RightScale Platform." In looking at the product, these features aren't really separate from one another, but together they make a nice, integrated offering.
RightScale’s management environment is the main interface users will have with the software. It
is designed to walk a user through the initial process of migrating to the cloud using their
templates and library. The management environment is then used for (surprise!) managing that
environment, namely continuing builds and ensuring resource availability. This is where the
automation engine comes into play: being able to quickly provision and put into operation
additional capacity, or remove that excess capacity, as needed. Lastly, there is the Multi-Cloud
Engine, supporting Amazon, GoGrid, Eucalyptus and Rackspace.
2. Kaavo
Kaavo plays in a very similar space to RightScale. The product is typically used for:
Single-click deployment of complex multi-tier applications in the cloud (Dev, QA, Prod)
Handling demand bursts/variations by automatically adding/removing resources
Run-time management of application infrastructure in the cloud
Encryption of persisted data in the cloud
Automation of workflows to handle run-time production exceptions without human
intervention
The core of Kaavo’s product is called IMOD. IMOD handles configuration, provisioning and
changes (adjustments in their terminology) to the cloud environment, and across multiple
vendors in a hybrid model. Like all major cloud infrastructure management (CIM) players, Kaavo's IMOD sits at the "top" of the
stack, managing the infrastructure and application layers.
One great feature in IMOD is its multi-cloud, single system tool. For instance, you can create a
database backend in Rackspace while putting your presentation servers on Amazon. Supporting
Amazon and Rackspace in the public space and Eucalyptus in the private space is a strong selling
point, though it should be noted that most cloud management can support Eucalyptus if it can
also support Amazon, as Eucalyptus mimics Amazon EC2 very closely.
Both Kaavo and RightScale offer scheduled “ramp-ups” or “ramp-downs” (dynamic allocation
based on demand) and monitoring tools to ensure that information and internal metrics (like
SLAs) are transparently available. The dynamic allocation even helps meet the demands of those
SLAs. Both offer the ability to keep templates as well to ease the deployment of multi-tier
systems.
3. Zeus
Zeus was famous for its rock-solid Web server, one that didn’t have a lot of market share
but did have a lot of fanatical fans and top-tier customers. With Apache and, to a lesser extent, IIS dominating that market, not to mention the glut of load balancers out there, Zeus took its expertise in the application server space and came up with the Application Delivery Controller piece of the Zeus Traffic Manager. It uses traditional load balancing tools to test
availability and then spontaneously generate or destroy additional instances in the cloud,
providing on-the-fly provisioning. Zeus currently supports this on the Rackspace and, to a lesser
extent, Amazon platforms.
4. Scalr
Scalr is a young project hosted on Google Code and Scalr.net that creates dynamic clusters,
similar to Kaavo and RightScale, on the Amazon platform. It supports triggered upsizing and
downsizing based on traffic demands, snapshots (which can be shared, incidentally, a very cool
feature), and the custom building of images for each server or server-type, also similar to
RightScale. Being a new release, Scalr does not support the wide range of platforms, operating systems, applications, and databases that the largest competitors do, sticking to the traditional
expanded-LAMP architecture (LAMP plus Ruby, Tomcat, etc.) that comprises many content
systems.
5. Morph
While not a true management platform, the MSP-minded Morph product line offers similar functionality in its own private space. Morph CloudServer is a newer product on the market, filling the management and provisioning space as an appliance. It is aimed at the enterprise seeking to deploy a private cloud. Its top-tier product, the Morph CloudServer, is based on the IBM BladeCenter and supports hundreds of virtual machines.
Under the hood are an Ubuntu Linux operating system and the Eucalyptus cloud computing platform. Aimed at the managed service provider market, Morph allows for the creation of
private clouds and the dynamic provisioning within those closed clouds. While still up-and-
coming, Morph has made quite a splash and bears watching, particularly because of its open-
source roots and participation in open-cloud organizations.
6. CloudWatch
Amazon’s CloudWatch works on Amazon’s platform only, which limits its overall usefulness as
it cannot be a hybrid cloud management tool. Since Amazon’s Elastic Compute Cloud (EC2) is
the biggest platform out there (though Rackspace claims it is closing that gap quickly), it still
bears mentioning.
CloudWatch for EC2 supports dynamic provisioning (called auto-scaling), monitoring, and load-
balancing, all managed through a central management console — the same central management
console used by Amazon Web Services. Its biggest advantage is that it requires no additional
software to install and no additional website to access applications through. While the product is
clearly not for enterprises that need hybrid support, those that exclusively use Amazon should
know that it is as robust and functional as the other market players.
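To illustrate how CloudWatch links monitoring to auto-scaling, the Python/boto3 sketch below attaches a scale-out policy to an Auto Scaling group and fires it from a CPU alarm; the group name and thresholds are assumptions for the example.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Scale-out policy: add one instance to the group when triggered.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-tier-asg",    # placeholder group
        PolicyName="scale-out-by-one",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Trigger the policy when average CPU stays above 70% for ten minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="web-tier-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-tier-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )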
Cloud Provisioning
Cloud provisioning refers to the processes for the deployment and integration of cloud
computing services within an enterprise IT infrastructure. This is a broad term that incorporates the policies, procedures and an enterprise's objectives in sourcing cloud services and solutions
from a cloud service provider.
From a provider's standpoint, cloud provisioning can include the supply and assignment of required cloud resources to the customer -- for example, the creation of virtual machines, the allocation of storage capacity and the granting of access to cloud software.
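From that provider-side view, creating a virtual machine and allocating its storage reduce to API calls. A minimal Python/boto3 sketch, with the AMI ID and sizes as placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Provision one small VM with a 20 GB root volume attached.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",    # placeholder machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=[
            {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 20}}
        ],
    )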
There is no doubt that the pandemic is fast-tracking digital transformation and cloud
adoption for businesses across the globe. But whilst migrating to the cloud is the first step,
managing a complex cloud environment can be a much more difficult task.
An unmanaged cloud
An unmanaged cloud provides the core cloud services but with a ‘do it yourself’ model.
The customer rents access to the infrastructure, but they are responsible for running it as
well as all the tools and applications that run on top of it.
Some examples of unmanaged clouds are AWS and Azure. A lot of unmanaged clouds
have a pay-as-you-go pricing model, which can be appealing as a way to avoid wasting
resources and money. However, using an unmanaged cloud does mean that you require an
expert in-house IT team in order to manage the cloud, which can be expensive.
A managed cloud
With a managed cloud, the hosting provider acts as an extension of your IT team,
removing the need for an in-house team.
Using a managed cloud provider means that you will have a team of expert engineers at your disposal to implement a tailor-made cloud solution. Cloud design, configuration, storage and networking are all managed for you by the provider, and you get a predictable monthly billing model.
Managed hosting providers take away the worry of keeping your cloud environment
running efficiently, as well as monitoring your platform for cyber threats and any potential
security issues.
Managed vs unmanaged
Managing a cloud platform, whether public or private, can seem relatively simple, but for a cloud platform with a variety of services and applications to run, it can be surprisingly complex to manage.
Although an unmanaged cloud often seems like the cheaper option, issues are likely to
occur along the way that need to be resolved by an expert. Having a dedicated team of in-
house IT experts working 24/7 is costly, and without this, your cloud platform could be at
risk. Managed providers become an extension of your business and provide you with all
you need for your business to succeed.
Some customers also choose to retain some in-house engineers to do frontline support or work on specific projects, with the managed provider's support team filling any IT skills gaps.
The way cloud security is delivered will depend on the individual cloud provider or the cloud
security solutions in place. However, implementation of cloud security processes should be a
joint responsibility between the business owner and solution provider.
More and more organizations are realizing the many business benefits of moving their systems to
the cloud. Cloud computing allows organizations to operate at scale, reduce technology costs and
use agile systems that give them the competitive edge. However, it is essential that organizations
have complete confidence in their cloud computing security and that all data, systems and
applications are protected from data theft, leakage, corruption and deletion.
All cloud models are susceptible to threats. IT departments are naturally cautious about moving
mission-critical systems to the cloud and it is essential the right security provisions are in place,
whether you are running a native cloud, hybrid or on-premises environment. Cloud security offers all the functionality of traditional IT security, allowing businesses to harness the many advantages of cloud computing while remaining secure and ensuring that data privacy and compliance requirements are met.
Selecting the right cloud security solution for your business is imperative if you want to get the
best from the cloud and ensure your organization is protected from unauthorized access, data
breaches and other threats. Forcepoint Cloud Access Security Broker (CASB) is a complete
cloud security solution that protects cloud apps and cloud data, prevents compromised accounts
and allows you to set security policies on a per-device basis.
Today’s businesses want it all: secure data and applications accessible anywhere from any
device. It’s possible with cloud technology, but there are inherent cloud computing security
challenges to making it a reality.
What can enterprise businesses do to reap the benefits of cloud technology while ensuring a
secure environment for sensitive information? Recognizing those challenges is the first step to
finding solutions that work. The next step is choosing the right tools and vendors to mitigate
those cloud security challenges.
In our technology-driven world, security in the cloud is an issue that should be discussed from
the board level all the way down to new employees. The CDNetworks blog recently discussed
“what is cloud security” and explained some of its benefits. Now that we understand what cloud
security is, let’s take a look at some of the key challenges that may be faced and why you want to
prevent unauthorized access at all costs.
As more and more businesses and operations move to the cloud, cloud providers are becoming a
bigger target for malicious attacks. Distributed denial of service (DDoS) attacks are more
common than ever before. Verisign reported that IT services, cloud platforms (PaaS) and SaaS made up the most frequently targeted industry during the first quarter of 2015.
Complementing cloud services with DDoS protection is no longer just a good idea for the
enterprise; it’s a necessity. Websites and web-based applications are core components of 21st
century business and require state-of-the-art cybersecurity.
Known data breaches in the U.S. hit a record-high of 738 in 2014, according to the Identity Theft Resource Center, and hacking was (by far) the number one cause. That's an incredible statistic
and only emphasizes the growing challenge to secure sensitive data.
Traditionally, IT professionals have had great control over the network infrastructure and
physical hardware (firewalls, etc.) securing proprietary data. In the cloud (in all scenarios
including private cloud, public cloud, and hybrid cloud situations), some of those security
controls are relinquished to a trusted partner, meaning cloud infrastructure can increase security
risks. Choosing the right vendor, with a strong record of implementing strong security measures,
is vital to overcoming this challenge.
When business critical information is moved into the cloud, it’s understandable to be concerned
with its security. Losing cloud data, whether through accidental deletion or human error, malicious tampering (including the installation of malware), or an act of nature that
brings down a cloud service provider, could be disastrous for an enterprise business. Often a
DDoS attack is only a diversion for a greater threat, such as an attempt to steal or delete data.
To face this challenge, it’s imperative to ensure there is a disaster recovery process in place, as
well as an integrated system to mitigate malicious cyberattacks. In addition, protecting every
network layer, including the application layer (layer 7), should be built-in to a cloud security
solution.
One of the great benefits of the cloud is it can be accessed from anywhere and from any device.
But, what if the interfaces and particularly the application programming interfaces (APIs) users
interact with aren’t secure? Hackers can find and gain access to these types of vulnerabilities and
exploit authentication via APIs if given enough time.
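One widely used defense is to require every API request to carry a keyed signature that the server recomputes and compares in constant time, so that neither a guessed token nor a timing side channel helps the attacker. A minimal sketch with Python's standard hmac module; the shared secret and message layout are illustrative.

    import hmac
    import hashlib

    SECRET_KEY = b"shared-api-secret"   # placeholder; keep in a secrets manager

    def sign(message: bytes) -> str:
        """Signature the client must attach to its API request."""
        return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

    def verify(message: bytes, signature: str) -> bool:
        """Constant-time comparison blunts timing attacks on the check."""
        return hmac.compare_digest(sign(message), signature)

    # The server rejects any request whose signature does not match.
    assert verify(b"GET /v1/files?user=42", sign(b"GET /v1/files?user=42"))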
Virtualized Security
Virtualized security, or security virtualization, refers to security solutions that are software-
based and designed to work within a virtualized IT environment. This differs from traditional,
hardware-based network security, which is static and runs on devices such as traditional
firewalls, routers, and switches.
Virtualized security can take the functions of traditional security hardware appliances (such as
firewalls and antivirus protection) and deploy them via software. In addition, virtualized security
can also perform additional security functions. These functions are only possible due to
the advantages of virtualization, and are designed to address the specific security needs of a
virtualized environment.
For example, an enterprise can insert security controls (such as encryption) between the
application layer and the underlying infrastructure, or use strategies such as micro-segmentation
to reduce the potential attack surface.
Virtualized security can be implemented as an application directly on a bare metal hypervisor (a
position it can leverage to provide effective application monitoring) or as a hosted service on a
virtual machine. In either case, it can be quickly deployed where it is most effective, unlike
physical security, which is tied to a specific device.
Virtualized security is now effectively necessary to keep up with the complex security demands
of a virtualized network, plus it’s more flexible and efficient than traditional physical security.
Its specific benefits include cost-effectiveness, flexibility and operational efficiency. Virtualization does introduce risks of its own, such as the added complexity of managing security in software. It's important to note, however, that many of these risks are already present in a virtualized
environment, whether security services are virtualized or not. Following enterprise security best
practices (such as spinning down virtual machines when they are no longer needed and using
automation to keep security policies up to date) can help mitigate such risks.
Difference between physical and virtualized security:
Traditional physical security is hardware-based, and as a result, it’s inflexible and static. The
traditional approach depends on devices deployed at strategic points across a network and is
often focused on protecting the network perimeter (as with a traditional firewall). However, the
perimeter of a virtualized, cloud-based network is necessarily porous and workloads and
applications are dynamically created, increasing the potential attack surface.
Traditional security also relies heavily upon port and protocol filtering, an approach that’s
ineffective in a virtualized environment where addresses and ports are assigned dynamically. In
such an environment, traditional hardware-based security is not enough; a cloud-based network
requires virtualized security that can move around the network along with workloads and
applications.
Virtualized security enables several key functions, described below; a small illustrative sketch follows the three descriptions.
Segmentation, or making specific resources available only to specific applications and users.
This typically takes the form of controlling traffic between different network segments or tiers.
Micro-segmentation, or applying specific security policies at the workload level to create
granular secure zones and limit an attacker’s ability to move through the network. Micro-
segmentation divides a data center into segments and allows IT teams to define security controls
for each segment individually, bolstering the data center’s resistance to attack.
Isolation, or separating independent workloads and applications on the same network. This is
particularly important in a multitenant public cloud environment, and can also be used to isolate
virtual networks from the underlying physical infrastructure, protecting the infrastructure from
attack
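As a toy illustration of the micro-segmentation idea (the tier names, ports and rules below are invented), policy can be expressed as an explicit allow-list of flows between workload segments, with everything else denied by default:

    # Allow-list of (source segment, destination segment, port); deny the rest.
    ALLOWED_FLOWS = {
        ("web", "app", 8080),
        ("app", "db", 5432),
    }

    def flow_permitted(src: str, dst: str, port: int) -> bool:
        """Default-deny check applied per workload, not per network perimeter."""
        return (src, dst, port) in ALLOWED_FLOWS

    print(flow_permitted("web", "app", 8080))  # True: an allowed tier-to-tier flow
    print(flow_permitted("web", "db", 5432))   # False: web may not reach the db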
Application security
Application security is the process of developing, adding, and testing security features within
applications to prevent security vulnerabilities against threats such as unauthorized access and
modification.
Application security describes security measures at the application level that aim to prevent
data or code within the app from being stolen or hijacked. It encompasses the security
considerations that happen during application development and design, but it also involves
systems and approaches to protect apps after they get deployed.
Application security may include hardware, software, and procedures that identify or minimize
security vulnerabilities. A router that prevents anyone from viewing a computer’s IP address
from the Internet is a form of hardware application security. But security measures at the
application level are also typically built into the software, such as an application firewall that
strictly defines what activities are allowed and prohibited. Procedures can entail things like an
application security routine that includes protocols such as regular testing.
Authentication: Software developers build procedures into an application to ensure that only authorized users gain access to it. Authentication procedures ensure that a user is who they
say they are. This can be accomplished by requiring the user to provide a user name and
password when logging in to an application. Multi-factor authentication requires more than one
form of authentication—the factors might include something you know (a password), something
you have (a mobile device), and something you are (a thumb print or facial recognition).
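A hedged sketch of the password step using only Python's standard library (the iteration count and storage scheme are illustrative; production systems often use a dedicated library such as bcrypt or argon2):

    import os
    import hmac
    import hashlib

    def hash_password(password: str, salt: bytes) -> bytes:
        """Store a salted, deliberately slow hash, never the plain password."""
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    # At registration: create a per-user salt and store (salt, digest).
    salt = os.urandom(16)
    stored = hash_password("correct horse battery staple", salt)

    # At login: recompute from the attempt and compare in constant time.
    def authenticate(attempt: str) -> bool:
        return hmac.compare_digest(hash_password(attempt, salt), stored)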
Authorization: After a user has been authenticated, the user may be authorized to access and use
the application. The system can validate that a user has permission to access the application by
comparing the user’s identity with a list of authorized users. Authentication must happen before
authorization so that the application matches only validated user credentials to the authorized
user list.
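Continuing that example, a minimal sketch of the authorization check (the users and permission table are invented):

    # Map each authenticated user to the permissions they hold.
    PERMISSIONS = {
        "alice": {"read_reports", "edit_reports"},
        "bob": {"read_reports"},
    }

    def authorized(user: str, action: str) -> bool:
        """Runs only after authentication has already confirmed who 'user' is."""
        return action in PERMISSIONS.get(user, set())

    print(authorized("bob", "edit_reports"))   # False: bob may only read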
Encryption: After a user has been authenticated and is using the application, other security
measures can protect sensitive data from being seen or even used by a cybercriminal. In cloud-
based applications, where traffic containing sensitive data travels between the end user and the
cloud, that traffic can be encrypted to keep the data safe.
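As a sketch of protecting such data, the widely used Python cryptography package offers authenticated symmetric encryption; key handling here is deliberately simplified, since real deployments fetch keys from a key management service.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, obtain from a key manager
    cipher = Fernet(key)

    token = cipher.encrypt(b"account=42;balance=1000")   # unreadable in transit
    print(cipher.decrypt(token))     # only holders of the key recover the data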
Logging: If there is a security breach in an application, logging can help identify who got access
to the data and how. Application log files provide a time-stamped record of which aspects of the
application were accessed and by whom.
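A minimal sketch of time-stamped access logging with Python's standard logging module (the file name and event fields are illustrative):

    import logging

    logging.basicConfig(
        filename="app_access.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",   # time-stamped records
    )

    # After an incident, entries like this show who touched which resource.
    logging.info("user=alice action=download resource=payroll.csv")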
Application security testing: A necessary process to ensure that all of these security controls
work properly.
Data security
Data security is the practice of protecting digital information from unauthorized access,
corruption, or theft throughout its entire lifecycle. It’s a concept that encompasses every aspect
of information security from the physical security of hardware and storage devices to
administrative and access controls, as well as the logical security of software applications. It also
includes organizational policies and procedures.
When properly implemented, robust data security strategies will protect an organization’s
information assets against cybercriminal activities, but they also guard against insider threats and
human error, which remains among the leading causes of data breaches today. Data security
involves deploying tools and technologies that enhance the organization’s visibility into where
its critical data resides and how it is used. Ideally, these tools should be able to apply protections
like encryption, data masking, and redaction of sensitive files, and should automate reporting to streamline audits and adherence to regulatory requirements.
Encryption
Encryption uses an algorithm to transform normal text characters into an unreadable format; encryption keys scramble the data so that only authorized users can read it. File and database encryption
solutions serve as a final line of defense for sensitive volumes by obscuring their contents
through encryption or tokenization. Most solutions also include security key management
capabilities.
Data Erasure
More secure than standard data wiping, data erasure uses software to completely overwrite data
on any storage device. It verifies that the data is unrecoverable.
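A simplified, single-pass sketch of software overwriting in Python; real erasure products perform multiple verified passes and must also account for SSD wear leveling, which this example ignores.

    import os

    def overwrite_and_delete(path: str, passes: int = 1) -> None:
        """Overwrite a file's bytes in place, then remove it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # replace contents with random bytes
                f.flush()
                os.fsync(f.fileno())        # force the overwrite onto disk
        os.remove(path)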
Data Masking
By masking data, organizations can allow teams to develop applications or train people using
real data. It masks personally identifiable information (PII) where necessary so that development
can occur in environments that are compliant.
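A toy sketch of masking a record before it reaches a development environment; the record layout and masking rules are invented for illustration.

    import re

    def mask_record(record: dict) -> dict:
        """Replace direct identifiers while keeping the shape of the data."""
        masked = dict(record)
        masked["name"] = "XXXX"
        # Keep the e-mail domain so routing logic can still be exercised.
        masked["email"] = re.sub(r"^[^@]+", "user", record["email"])
        return masked

    print(mask_record({"name": "Alice Smith", "email": "alice@example.com"}))
    # {'name': 'XXXX', 'email': 'user@example.com'}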
Data Resiliency
Resiliency is determined by how well a data center is able to endure or recover any type of
failure – from hardware problems to power shortages and other disruptive events.