
Cloud Computing Notes:

Cloud Computing:

Cloud computing is the delivery of computing services (such as storage, servers, databases, software, networking, and processing power) over the Internet (the cloud). It allows businesses and individuals to access and use resources without owning or maintaining physical infrastructure.

The data is stored on physical servers maintained by a cloud service provider. Computer system resources, especially data storage and computing power, are available on demand, without direct management by the user.

Instead of storing files on a local storage device or hard drive, a user can save them in the cloud, making it possible to access the files from anywhere with a web connection. The services hosted in the cloud can be broadly divided into infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). Based on the deployment model, clouds can also be classified as public, private, and hybrid.

Further, the cloud can be divided into two layers: the front end and the back end. The layer with which users interact is called the front-end layer; it enables a user to access data stored in the cloud through cloud computing software.

The back-end layer is made up of software and hardware, i.e., the computers, servers, central servers, and databases. This layer is the primary component of the cloud and is responsible for storing information securely. To ensure seamless connectivity between devices linked via cloud computing, the central servers use software called middleware, which acts as a bridge between the database and applications.

Cloud Architecture:

Cloud architecture is a key element of building in the cloud. It refers to the layout that connects all the necessary components and technologies required for cloud computing. Cloud architecture dictates how components are integrated so that you can pool, share, and scale resources over a network. Think of it as a blueprint for running and deploying applications in cloud environments.

Cloud architecture refers to how various cloud technology components, such as hardware, virtual resources, software capabilities, and virtual network systems, interact and connect to create cloud computing environments. It acts as a blueprint that defines the best way to strategically combine resources to build a cloud environment for a specific business need.

Cloud architecture components:

Cloud architecture components include:


A frontend platform
A backend platform
A cloud-based delivery model
A network (internet, intranet, or intercloud)
In cloud computing, frontend platforms contain the client infrastructure—user
interfaces, client-side applications, and the client device or network that enables
users to interact with and access cloud computing services. For example, you can
open the web browser on your mobile phone and edit a Google Doc. All three of these
things describe frontend cloud architecture components.

On the other hand, the back end refers to the cloud architecture components that
make up the cloud itself, including computing resources, storage, security
mechanisms, management, and more.

Below is a list of the main backend components:

Application: the backend software or platform that the client is accessing.
Service: handles the tasks that run in the cloud, such as storage, application development environments, and web services.
Runtime cloud: provides the execution environment in which services and virtual machines run.
Storage: holds the data needed to operate applications.
Infrastructure: the hardware and software layer, including servers, network devices, and virtualization software.
Security: the mechanisms, such as firewalls and access controls, that protect backend resources.

Cloud architecture, in turn, is the plan that dictates how these cloud resources and infrastructure are organized.

How does cloud architecture work?

In cloud architecture, each of the components works together to create a cloud computing platform that provides users with on-demand access to resources and services.

The back end contains all the cloud computing resources, services, data storage, and applications offered by a cloud service provider. A network connects the frontend and backend cloud architecture components, enabling data to be sent back and forth between them. When users interact with the front end (the client-side interface), it sends queries to the back end using middleware, where the service model carries out the specific task or request.

Service Oriented Architecture:


SOA allows different services to communicate and work together, regardless of the
platform or language they're built on.

# SOA is a software design approach where applications are built by combining independent, reusable services.
# These services communicate with each other through well-defined interfaces, allowing for loose coupling and easier integration.
# Services can be deployed and maintained independently, making them easier to update and manage.

Service-Oriented Architecture (SOA) in cloud computing is a method of designing and building software applications using independent, reusable services that communicate across a network, enabling integration and flexibility in cloud environments.

SOA in Cloud Computing :

# Cloud computing provides a natural platform for SOA, as it allows for the
deployment and management of services in a flexible and scalable manner.
# Cloud services can be used as building blocks for applications, and SOA
principles can be used to design and implement these applications.
# Examples include using cloud services for data storage, processing, and
analytics, and building applications that leverage these services.

Key Principles of SOA:


Loose Coupling: Services are independent. They don’t rely on internal workings of
other services.

Reusability: Services are designed to be reused across different applications.

Interoperability: Services can work across different platforms and technologies.

Discoverability: Services can be discovered and used dynamically.

Statelessness: Services do not store state between requests (usually).
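
A small sketch of how loose coupling and reusability from the principles above look in code. The PaymentService interface and its implementation are hypothetical, not part of any real platform; the point is that callers depend only on the interface, so implementations can be swapped without changing the consumer.

from abc import ABC, abstractmethod

class PaymentService(ABC):
    """A well-defined interface that every payment implementation must expose."""
    @abstractmethod
    def charge(self, account: str, amount: float) -> bool: ...

class MockPaymentService(PaymentService):
    """One interchangeable implementation; a real one could call a remote service."""
    def charge(self, account, amount):
        print(f"charged {amount} to {account}")
        return True

def checkout(payments: PaymentService, account: str, total: float) -> bool:
    # The consumer sees only the interface, never the implementation's internals.
    return payments.charge(account, total)

print(checkout(MockPaymentService(), "acct-42", 19.99))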

Benefits of SOA in Cloud Computing :

Reusability: Services can be reused across different applications, reducing development time and effort.
Flexibility: SOA allows for easy integration of new services and applications.
Scalability: Cloud computing provides the infrastructure to scale services up or down as needed.
Cost Reduction: By reusing services and leveraging cloud infrastructure, organizations can reduce costs.

Service-oriented architecture (SOA) – How it supports cloud computing:

Improved Interoperability and Integration: Service-Oriented Architecture allows different services to communicate seamlessly.

Easier Updates: Updates to one service don't break others, keeping the system reliable and consistent.

Diverse Technology Support: Service-Oriented Architecture supports various technologies and platforms, offering flexibility whatever the circumstances.

Enhanced Scalability: Service-Oriented Architecture enables businesses to scale resources on demand when they need it.

Cost Efficiency: According to McKinsey, by optimizing resources using Service-Oriented Architecture, businesses can save up to 30% on IT costs.

Better Performance: Service-Oriented Architecture improves application speed and responsiveness.

Simplified Management: Managing services is easier with Service-Oriented Architecture, as it quietly ensures everything runs smoothly behind the scenes.

Quick Fixes: Issues can be resolved quickly by fixing the affected service rather than the entire system.

Regular Updates: Services can be updated regularly without downtime, allowing users to continue their work without any stoppage.

Web Services :

Web services are software functions or applications that are available over the
internet and use standard protocols like HTTP to communicate. They allow different
systems to talk to each other, even if they are built using different languages or
platforms. A client invokes a web service by submitting an XML request, to which
the service responds with an XML response.

A web service is a set of open protocols and standards that allow data exchange
between different applications or systems.

Any software, application, or cloud technology that uses a standardized web protocol (HTTP or HTTPS) to connect, interoperate, and exchange data messages over the Internet (usually in XML) is considered a web service.

Components of web services :

Simple Object Access Protocol (SOAP): SOAP is a protocol used for exchanging structured information between computers over a network. It is XML-based, which means it uses XML to format messages.

It is a network protocol for exchanging structured data between nodes. It uses the XML format to transfer messages and works on top of application-layer protocols like HTTP and SMTP for notation and transmission. SOAP allows processes to communicate across platforms, languages, and operating systems, since protocols like HTTP are already installed on all platforms.
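
A minimal sketch of building a SOAP-style XML request with Python's standard library. The GetWeather operation and the City element are made up for illustration; only the SOAP envelope namespace is real.

import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
request = ET.SubElement(body, "GetWeather")   # hypothetical operation name
ET.SubElement(request, "City").text = "Delhi"

# The serialized envelope is what would be POSTed over HTTP to the service.
print(ET.tostring(envelope, encoding="unicode"))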

XML (Extensible Markup Language) : XML (Extensible Markup Language) plays a crucial
role in cloud computing by providing a standard way to exchange and manage data
across different platforms and applications. It's used for storing, transporting,
and sharing data in a flexible and extensible manner, making it well-suited for the
dynamic environment of the cloud.

WSDL stands for Web Services Description Language. A web service cannot be used if it can't be found: the implementing client has to know where the web service is located. Also, to invoke the correct web service, the client application has to understand what the web service does. This is done with the help of the Web Services Description Language (WSDL).

It is an XML-based language used to describe the details of a SOAP web service, such as:

What the service does

Where it’s located (URL)

What data it needs (input/output format)


UDDI stands for Universal Description, Discovery, and Integration.

It's an XML-based standard that enables businesses to publish and find information
about web services. Essentially, it acts as a registry or directory for web
services, allowing businesses to list their services and find potential partners.

It is a platform-independent registry used for:

Publishing

Finding

Describing

Web services over the internet.

Think of it like a Yellow Pages for web services. Just like you look up a business
in a directory, applications can look up web services in UDDI.

REST (Representational State Transfer): REST is a working model or, more accurately, a set of standards for building web applications. It is an architectural style used to design lightweight, fast, scalable, and stateless web services that allow different applications or systems to communicate over the Internet using standard HTTP methods like GET, POST, PUT, and DELETE to perform operations on resources (data objects such as users, files, or products).

Unlike SOAP, REST is not a protocol, but a set of guidelines for building APIs.
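
A minimal sketch of a REST-style service using only Python's standard library. The /users resource and its sample data are illustrative; a real API would add POST, PUT, and DELETE handlers in the same spirit.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

USERS = {"1": {"name": "Alice"}, "2": {"name": "Bob"}}   # in-memory resource store

class UserHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /users/<id> returns one resource as JSON; unknown paths return 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "users" and parts[1] in USERS:
            payload = json.dumps(USERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), UserHandler).serve_forever()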

UDDI + WSDL + SOAP:


WSDL: Describes what a service does and how to call it.

SOAP: Describes how data is exchanged.

UDDI: Describes where to find the service.

So:

💡 "SOAP sends the message, WSDL describes the message, and UDDI helps you find the
service."

Types of Cloud Computing :

Cloud computing can either be classified based on the deployment model or the type
of service. Based on the specific deployment model, we can classify cloud as
public, private, and hybrid cloud. At the same time, it can be classified as
infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-
a-service (SaaS) based on the service the cloud model offers.

The types of services available to use vary depending on the cloud-based delivery
model or service model you have chosen. There are three main cloud computing
service models:

Infrastructure as a service (IaaS): This model provides on-demand access to cloud infrastructure, such as servers, storage, and networking. This eliminates the need to procure, manage, and maintain on-premises infrastructure.

Infrastructure as a service, or IaaS, is a type of cloud computing in which a service provider is responsible for providing servers, storage, and networking over a virtual interface. In this service, the user doesn't need to manage the cloud infrastructure but has control over the storage, operating systems, and deployed applications.

Instead of the user, a third-party vendor hosts the hardware, software, servers, storage, and other infrastructure components. The vendor also hosts the user's applications and maintains a backup.

Examples:

Amazon Web Services (AWS)

Microsoft Azure

Google Cloud Platform (GCP)

Key Features:

Virtualized resources (computing power, storage, networks)

Scalability based on demand

No physical hardware management for users

Users install and manage the OS and applications themselves

Platform as a service (PaaS): This model offers a computing platform with all the
underlying infrastructure and software tools needed to develop, run, and manage
applications.

Platform as a service, or PaaS, is a type of cloud computing that provides a development and deployment environment in the cloud, allowing users to develop and run applications without the complexity of building or maintaining the infrastructure. It provides users with resources to develop cloud-based applications. In this type of service, a user purchases the resources from a vendor on a pay-as-you-go basis and can access them over a secure connection.

PaaS doesn't require users to manage the underlying infrastructure, i.e., the network, servers, operating systems, or storage, but gives them control over the deployed applications. This allows organizations to focus on the deployment and management of their applications by freeing them of the responsibility of software maintenance, planning, and resource procurement.
Examples:

Google App Engine

Microsoft Azure App Services

Heroku

Key Features:

Application hosting with built-in development tools

No infrastructure management (hardware and OS are handled by the provider)

Support for software development and deployment

Ideal for developers who need to focus on writing code without managing servers or
networking

Types of PaaS
Several types of PaaS are currently available to developers. They are:

1. Public PaaS
Designed for public cloud use, offering control over the software while the provider manages the IT infrastructure. Suitable for small and medium businesses but less favored by large organizations due to compliance issues.

2. Private PaaS
Aims to deliver the agility of public PaaS while maintaining the security, compliance, and cost benefits of a private data center. A private PaaS is usually distributed as an appliance or software inside the customer's firewall, typically maintained in a data center on the organization's premises, and runs on infrastructure within the organization's own private cloud.

3. Hybrid PaaS
Combines public PaaS and private PaaS, pairing the effectively unlimited capacity offered by public PaaS with the cost-effectiveness of keeping internal infrastructure in private PaaS. Hybrid PaaS uses a hybrid cloud.

4. Communication PaaS (CPaaS): provides real-time communication capabilities (voice, video, messaging) that developers can add to their applications.

5. Mobile PaaS (MPaaS): provides a development environment for building and configuring mobile applications.

Software as a service (SaaS): This model offers cloud-based applications that are
delivered and maintained by the service provider, eliminating the need for end
users to deploy software locally.
SaaS, or software as a service, allows users to access a vendor's software in the cloud on a subscription basis. In this type of cloud computing, users don't need to install or download applications on their local devices. Instead, the applications are located on a remote cloud network that can be accessed directly through the web or an API.

In the SaaS model, the service provider manages all the hardware, middleware,
application software, and security. Also referred to as ‘hosted software’ or ‘on-
demand software’, SaaS makes it easy for enterprises to streamline their
maintenance and support.

Examples:

Google Workspace (Docs, Sheets, Gmail)

Microsoft Office 365

Salesforce

Key Features:

Fully managed applications with no need for installation or maintenance by users

Accessible from any device with internet connectivity

Subscription-based pricing (often pay-per-use)

Automatic updates and patch management

Types of SaaS :

Single-Tenant Architecture :

In this type of SaaS platform architecture, each customer or tenant has their own “instance” of the software, which runs on a separate server and is supported by a dedicated infrastructure and database. Thus, there is no sharing of resources between tenants, and each customer's information is kept separate from that of other customers.

With this setup comes greater control and customization capabilities, but it might
be more expensive for the provider to maintain since they need to manage multiple
“instances” of the software.

Multi-Tenant Architecture :

On the other hand, multi-tenant architecture refers to a setup where multiple customers or tenants share a single “instance” of the software. This is one of the most popular types of SaaS architecture in cloud computing.

Here, the data of each tenant is kept separate and secure but they share the same
app, database, and infrastructure. Unlike the single-tenant version, this setup is
more efficient and cost-effective but typically offers less control and
customization to individual clients.

There are three main types of cloud architecture you can choose from: public,
private, and hybrid.
Types of cloud computing based on deployment:

1) Public cloud architecture uses cloud computing resources and physical infrastructure that is owned and operated by a third-party cloud service provider. Public clouds enable you to scale resources easily without having to invest in your own hardware or software, but they use multi-tenant architectures that serve other customers at the same time.

Public cloud refers to computing services offered by third-party providers over the internet. Unlike private cloud, the services on public cloud are available to anyone who wants to use or purchase them. These services could be free or sold on demand, where users only have to pay per usage for the CPU cycles, storage, or bandwidth they consume.

Public clouds can help businesses save on purchasing, managing, and maintaining on-
premises infrastructure since the cloud service provider is responsible for
managing the system. They also offer scalable RAM and flexible bandwidth, making it
easier for businesses to scale their storage needs.

2) Private cloud architecture refers to a dedicated cloud that is owned and managed
by your organization.
It is privately hosted on-premises in your own data center, providing more control
over resources and more security over data and infrastructure.
However, this architecture is considerably more expensive and requires more IT
expertise to maintain.

In a private cloud, the computing services are offered over a private IT network
for the dedicated use of a single organization.
Also termed internal, enterprise, or corporate cloud, a private cloud is usually
managed via internal resources and is not accessible to anyone outside the
organization. Private cloud computing provides all the benefits of a public cloud,
such as self-service, scalability, and elasticity, along with additional control,
security, and customization.

Private clouds provide a higher level of security through company firewalls and
internal hosting to ensure that an organization’s sensitive data is not accessible
to third-party providers. The drawback of private cloud, however, is that the
organization becomes responsible for all the management and maintenance of the data
centers, which can prove to be quite resource-intensive.

3) Hybrid cloud architecture uses both public and private cloud architecture to
deliver a flexible mix of cloud services. A hybrid cloud allows you to migrate
workloads between environments, allowing you to use the services that best suit
your business demands and the workload. Hybrid cloud architectures are often the
solution of choice for businesses that need control over their data but also want
to take advantage of public cloud offerings.

Hybrid cloud uses a combination of public and private cloud features. The “best of
both worlds” cloud model allows a shift of workloads between private and public
clouds as the computing and cost requirements change. When the demand for computing
and processing fluctuates, hybrid cloud allows businesses to scale their on-
premises infrastructure up to the public cloud to handle the overflow while
ensuring that no third-party data centers have access to their data.

In a hybrid cloud model, companies only pay for the resources they use temporarily
instead of purchasing and maintaining resources that may not be used for an
extended period. In short, a hybrid cloud offers the benefits of a public cloud
without its security risks.

In recent years, multicloud architecture is also emerging as more organizations look to use cloud services from multiple cloud providers. Multicloud environments are gaining popularity for their flexibility and ability to better match use cases to specific offerings, regardless of vendor.

Features of Cloud Computing Platforms:

On-Demand Self-Service
➤ Users can access computing resources (like storage, servers) whenever they want, without needing human help.

Broad Network Access
➤ Services are available over the internet and can be accessed from mobile phones, laptops, desktops, etc.

Resource Pooling
➤ Cloud providers serve many users by sharing physical and virtual resources dynamically.

Rapid Elasticity / Scalability
➤ You can increase or decrease your resources (like RAM, storage) quickly based on demand.

Measured Service (Pay-as-you-go)
➤ You only pay for what you use. Like electricity, no use = no bill.

High Availability and Reliability
➤ Cloud services run 24/7 with backup and failover systems, ensuring services are almost always available.

Security
➤ Data is protected with encryption, firewalls, and access controls (though security is a shared responsibility).

🔹 Functions of Cloud Computing Platforms:

Data Storage
➤ Store large amounts of data in the cloud (like Google Drive, AWS S3).

Application Hosting
➤ Host websites, mobile apps, or full software systems (like apps on AWS or Heroku).

Backup and Disaster Recovery
➤ Keep data safe with automatic backup and quick recovery in case of failure.

Development and Testing
➤ Use cloud environments to develop, test, and deploy applications faster.

Big Data Analytics
➤ Analyze large datasets in real time using cloud tools like AWS Redshift or Google BigQuery.

Artificial Intelligence and Machine Learning
➤ Use tools like AWS SageMaker or Google AI Platform to train and deploy AI models.

Content Delivery
➤ Deliver websites, videos, and other content faster through CDNs (Content Delivery Networks).

Utility Computing :

Utility computing is a cloud computing model where you pay only for what you use,
just like a utility service.

Utility computing is defined as a service provisioning model that offers computing resources to clients as and when they require them, on an on-demand basis. The charges correspond exactly to the consumption of the services provided, rather than a fixed charge or a flat rate.

The concept of utility computing is simple: it provides processing power when you need it, where you need it, and at a cost that reflects how much you use.

You don’t buy hardware or software — instead, you rent computing resources like:

CPU power

Storage

Memory

Network bandwidth
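
A tiny sketch of the pay-per-use idea: meter the resources consumed and bill only for them. The rates below are illustrative, not any provider's real pricing.

RATES = {"cpu_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}  # made-up prices

def monthly_bill(usage):
    """usage maps a metered resource to the amount consumed in the month."""
    return sum(amount * RATES[item] for item, amount in usage.items())

# One server running all month, 100 GB stored, 50 GB transferred out.
print(monthly_bill({"cpu_hours": 720, "storage_gb_month": 100, "egress_gb": 50}))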

Elastic Computing : Amazon's implementation of this idea is EC2 (Elastic Compute Cloud).

Elastic computing refers to the ability of a cloud system to automatically scale resources up or down based on the current workload or demand. You get more power when you need it, and you use less when you don't, without doing it manually. It is perfect for websites, apps, and businesses that experience variable traffic.

Instead of buying and managing your own servers, EC2 gives you a virtual machine where you can run websites, apps, or even big data tasks. You can choose how much memory, storage, and processing power you need, and stop it when you're done. EC2 offers security, reliability, high performance, and cost-effective infrastructure to meet demanding business needs.
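
A minimal sketch of launching an EC2 virtual machine with the boto3 SDK, assuming boto3 is installed and AWS credentials are already configured. The AMI ID is a placeholder you would replace with a real image for your region.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",           # small instance; pick the type to fit the workload
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])   # stop or terminate the instance when done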

Mashup :

A mashup is a web application that combines data, functionality, or services from two or more sources to create a new service.

It's like mixing two apps or APIs together to make something useful and unique!

Mashups allow information to be viewed from different perspectives and combine data from multiple sources into a single integrated tool. This is done using a web application that takes information from one or more sources and presents it in a new way or with a different graphical user interface.

Data Mashups
A data mashup focuses on combining data from multiple sources into a unified view
or interface. In this type, data is pulled from different databases or APIs and
presented in a single application, often for analysis or visualization purposes.
For example, combining public health data with geographic information to create a
real-time map of disease outbreaks. The goal is to aggregate diverse data points to
provide a comprehensive view of the subject.
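
A small sketch of a data mashup along the lines of the example above: pull records from two JSON APIs and merge them by region. The URLs and field names are hypothetical, and the requests library is assumed to be installed.

import requests

def fetch(url):
    return requests.get(url, timeout=10).json()

cases = fetch("https://example.org/api/health/cases")    # e.g. [{"region": "X", "cases": 12}, ...]
regions = fetch("https://example.org/api/geo/regions")   # e.g. [{"region": "X", "lat": 1.0, "lon": 2.0}, ...]

by_region = {r["region"]: r for r in regions}
# Merge both sources into one unified view, keyed by region.
mashup = [{**row, **by_region.get(row["region"], {})} for row in cases]
print(mashup)
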
Application Mashups
Application mashups integrate the functionality of different software applications
into a single user interface. These mashups often pull in different services,
enabling users to interact with features from multiple applications without
switching between platforms. A common example is a customer relationship management
(CRM) tool that integrates email, social media, and calendar functionalities into
one dashboard, enhancing productivity and user experience.

Business Mashups
Business mashups are tailored specifically to meet organizational or enterprise
needs, often involving both data and application mashups to create more complex and
functional systems. These mashups are used to streamline business processes by
combining various internal and external services, such as inventory management,
customer data, financial reporting, and supplier information. The goal is to
improve decision-making, operational efficiency, and data transparency within the
business ecosystem.

Common Cloud Computing Issues


1. Data Security and Privacy
Data stored in the cloud can be exposed to cyberattacks if not properly secured.

Risk of data breaches, especially when sensitive information is stored.

2. Data Loss or Leakage


Misconfiguration or system failure may result in loss or leakage of data.

Backup systems may fail if not properly maintained.

3. Downtime and Service Outages


Even big cloud providers (like AWS or Google Cloud) can face service outages.

A few minutes of downtime can cause huge loss for businesses.

4. Vendor Lock-in
Once you start using one cloud provider (like AWS), it becomes hard to switch to
another.

Tools, platforms, and architecture may not be compatible with others.

5. Lack of Control
Users don’t have full control over infrastructure (because it’s managed by the
provider).

Sometimes it's hard to fix problems or customize settings.

6. Performance Issues
Shared cloud resources can cause slow performance, especially during peak usage.

Network delays (latency) can affect app speed.

7. Misconfiguration
Incorrect settings (like open ports, wrong permissions) can lead to security risks.

It’s one of the most common reasons for cloud-related issues.

8. Compliance Issues
Different countries have different data laws (like GDPR in Europe).

Keeping data in the wrong region may break legal rules.

Cloud Monitoring System:

A cloud-based monitoring system provides continuous surveillance and analysis of cloud resources, applications, and infrastructure, offering real-time insights into performance, security, and cost optimization. Here's a more detailed explanation:

Cloud-based monitoring is a technology solution that continuously tracks and analyzes various aspects of your cloud environment, acting as a "digital watchdog" for your infrastructure.

What it monitors:
It monitors key metrics like resource utilization (CPU, memory, storage), application performance and availability, network traffic and latency, security threats and vulnerabilities, cost optimization opportunities, and user experience metrics.

Benefits:

Proactive problem detection: Identifies and resolves issues before they impact end-
users.
Improved security: Helps identify and address security threats and vulnerabilities.
Optimized performance: Ensures applications and infrastructure are performing
optimally.
Cost optimization: Helps identify areas where cloud resources can be optimized to
reduce costs.
Compliance: Facilitates compliance with industry regulations and standards.

Types of Cloud Monitoring:

Infrastructure Monitoring: Tracks the health and performance of cloud infrastructure components like servers, networks, and storage.

Application Monitoring: Monitors the performance and availability of applications running in the cloud.

Security Monitoring: Detects and responds to security threats and vulnerabilities.

Cost Monitoring: Tracks and analyzes cloud spending to identify areas for optimization.

Network Performance Monitoring: Continually measures, diagnoses, and optimizes a network's service quality.
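
A toy monitoring loop using the psutil library (assumed installed): sample CPU and memory utilization and flag readings above a threshold, the same idea a cloud monitoring agent applies at much larger scale.

import time
import psutil

THRESHOLD = 80.0   # percent; an illustrative alert level

for _ in range(10):                        # ten sampling rounds for this sketch
    cpu = psutil.cpu_percent(interval=1)   # average CPU usage over one second
    mem = psutil.virtual_memory().percent
    if cpu > THRESHOLD or mem > THRESHOLD:
        print(f"ALERT: cpu={cpu}% mem={mem}%")
    time.sleep(4)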

Secure Cloud Communication how it is achieved:

Secure cloud communication is achieved through a combination of techniques, including encryption for data at rest and in transit, strong authentication and authorization mechanisms, and network security measures like firewalls and VPNs. Here's a more detailed breakdown of how secure cloud communication is achieved:

1. Encryption:
Data at Rest:
Encrypting data while it's stored on cloud servers is crucial to prevent
unauthorized access, even if the storage is compromised.
Data in Transit:
Encrypting data while it's being transmitted between users and cloud servers (or
between different cloud locations) protects it from eavesdropping and interception.
Protocols:
Secure protocols like HTTPS, SSL/TLS, and VPNs are used to encrypt communication
channels.
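
A minimal sketch of symmetric encryption for data at rest using the cryptography package (assumed installed). Key management, rotation, and storage in a key vault are deliberately out of scope here.

from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice this key lives in a key management service
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer record")   # what gets written to cloud storage
plaintext = cipher.decrypt(ciphertext)            # only possible with the key
assert plaintext == b"customer record"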

2. Authentication and Authorization:


Strong Authentication:
Employing robust authentication methods, such as multi-factor authentication (MFA),
ensures that only authorized users can access cloud resources.
Access Control:
Implementing granular access controls, where users are granted only the necessary
permissions, limits the potential impact of a security breach.

3. Network Security:
Firewalls:
Firewalls act as a barrier, filtering network traffic and blocking unauthorized
access to cloud resources.
VPNs:
Virtual Private Networks (VPNs) create secure, encrypted tunnels for communication,
protecting data transmitted over public networks.
Intrusion Detection/Prevention Systems (IDS/IPS):
These systems monitor network traffic for malicious activity and can take action to
block threats.
Network Segmentation:
Dividing the cloud network into smaller, isolated segments can limit the scope of a
potential breach.

4. Other Security Measures:


Cloud Security Posture Management (CSPM):
Tools that help organizations continuously assess and improve their cloud security
posture.
Data Loss Prevention (DLP):
Measures to prevent sensitive data from leaving the cloud environment or being
accessed by unauthorized users.
Regular Audits and Monitoring:
Continuously monitoring cloud infrastructure for security vulnerabilities and
conducting regular security audits.
Security Policies:
Implementing and enforcing comprehensive security policies that outline acceptable
cloud usage and security practices.
Endpoint Protection:
Protecting the devices accessing the cloud with firewalls, intrusion prevention
systems, and anti-malware software.

What is Workload Balancing in Cloud :


Workload balancing (also called load balancing) means distributing incoming traffic
or tasks evenly across multiple servers in the cloud so that:

> No single server gets overloaded


> All resources are used efficiently

> Users get faster response time

Load balancing is an essential technique used in cloud computing to optimize resource utilization and ensure that no single resource is overburdened with traffic. It is a process of distributing workloads across multiple computing resources, such as servers, virtual machines, or containers, to achieve better performance, availability, and scalability.

In cloud computing, load balancing can be implemented at various levels, including the network layer, application layer, and database layer. The most common load balancing techniques used in cloud computing are:

Common Load Balancing Techniques in Cloud Computing:

Round Robin:
Distributes traffic evenly across all available servers in a sequence.

Weighted Round Robin:
Assigns different weights to servers, allowing more traffic to be routed to servers with higher capacity.

Least Connection:
Routes traffic to the server with the fewest active connections, which helps balance the workload among servers.

Resource-Based:
Distributes traffic based on server capacity, such as CPU or memory utilization.

Request-Based:
Distributes traffic based on the type of request, allowing for specialized handling of different types of traffic.

Dynamic Load Balancing:
Continuously monitors server load and adjusts traffic distribution accordingly.

Static Load Balancing:
Distributes traffic based on a predefined configuration, without dynamically adjusting for server load.

Network Load Balancing: This technique is used to balance the network traffic
across multiple servers or instances. It is implemented at the network layer and
ensures that the incoming traffic is distributed evenly across the available
servers.

Application Load Balancing: This technique is used to balance the workload across
multiple instances of an application. It is implemented at the application layer
and ensures that each instance receives an equal share of the incoming requests.

Database Load Balancing: This technique is used to balance the workload across
multiple database servers. It is implemented at the database layer and ensures that
the incoming queries are distributed evenly across the available database servers.
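
A small sketch of two of the techniques listed above, round robin and least connection. Server names are illustrative; a real load balancer would also track health checks and connection completions.

import itertools

SERVERS = ["srv-a", "srv-b", "srv-c"]

# Round robin: hand out servers in a fixed rotation.
_rotation = itertools.cycle(SERVERS)
def round_robin():
    return next(_rotation)

# Least connection: route to the server with the fewest active connections.
active = {s: 0 for s in SERVERS}
def least_connection():
    server = min(active, key=active.get)
    active[server] += 1    # the caller decrements this when the request finishes
    return server

for _ in range(4):
    print(round_robin(), least_connection())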

Cloud load balancing is a software-based load balancing service that distributes traffic between multiple cloud servers. Like hardware load balancers, cloud load balancers are designed to manage massive workloads so that no one server becomes overwhelmed by requests, which would increase latency and cause downtime.

How the Cloud Balances Workloads

1. Load Balancers
A load balancer is like a traffic police — it decides where to send each incoming
request.

It distributes traffic across multiple servers so no server gets too much load.

Load Balancing : Load balancers act as traffic directors, receiving incoming requests and distributing them to available backend servers. They use algorithms to decide which server best handles a particular request, considering factors like server load, geographical distance, and server health. Common load balancing algorithms include round robin (distributing traffic evenly), least connections (routing to the least busy server), and weighted algorithms (prioritizing certain servers). Load balancers also ensure that if a server fails, traffic is automatically redirected to other healthy servers, maintaining service availability.

Example: AWS Elastic Load Balancer (ELB), Azure Load Balancer, Google Cloud Load
Balancing

2. Auto-Scaling
Cloud systems can automatically add or remove servers based on traffic.

If traffic increases, it adds servers.
If traffic drops, it removes extra servers to save cost.

Auto-scaling dynamically adjusts the number of resources (e.g., servers) based on demand. If traffic increases, the system automatically provisions more resources to handle the load; conversely, it de-provisions resources when traffic decreases. This ensures that the cloud environment can efficiently handle fluctuating workloads without over-provisioning resources during low-traffic periods.
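
A toy auto-scaling rule capturing the idea above: target a fixed number of requests per server and clamp the fleet size between a minimum and a maximum. All thresholds are illustrative.

import math

TARGET_RPS_PER_SERVER = 100
MIN_SERVERS, MAX_SERVERS = 2, 20

def desired_capacity(current_rps):
    needed = math.ceil(current_rps / TARGET_RPS_PER_SERVER)
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

print(desired_capacity(50))     # 2  (never below the minimum)
print(desired_capacity(1250))   # 13
print(desired_capacity(5000))   # 20 (capped at the maximum)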

3. Virtualization and Containers


Cloud uses virtual machines or containers that can run multiple tasks efficiently.

These can be moved, cloned, or restarted easily if one becomes slow or crashes.

4. Monitoring and Analytics


Cloud platforms use tools to monitor performance (CPU, memory, traffic).

If one server is too busy, the system shifts some tasks to other less busy servers.

5. Geographical Load Distribution (Global Balancing)


Traffic is sent to the nearest data center to reduce delay (latency).

Helps in handling global users effectively.

Resource Optimization :

Resource optimization in cloud computing involves efficiently allocating and managing cloud resources to maximize performance, minimize waste, and reduce costs. This process aims to ensure that cloud resources are used effectively, aligning with the demands of applications and workloads while optimizing overall performance, compliance, and cost-efficiency.

Key Aspects of Resource Optimization:

Right-sizing:
Selecting the appropriate size and type of cloud resources (e.g., virtual machines,
storage, databases) for each workload to avoid overprovisioning and
underutilization.

Resource Allocation:
Matching cloud resources to application and workload requirements in real-time,
considering factors like performance, cost, and scalability.

Cost Optimization:
Identifying and eliminating unused or underutilized resources, negotiating
discounts with cloud providers, and optimizing storage and computing tiers to
reduce expenses.

Performance Enhancement:
Optimizing network configurations, database settings, and application architectures
to improve response times, throughput, and overall application performance.

Scalability:
Dynamically adjusting cloud resources to accommodate fluctuating workloads,
ensuring applications can handle peak loads without performance degradation.

Automation:
Leveraging automation tools and scripts to manage resource provisioning, scaling,
and monitoring, reducing manual effort and improving efficiency.

What is Virtualization and Why Is It Needed in Cloud Computing:

Virtualization is a technology that allows you to create multiple virtual machines (VMs) on a single physical server. Each VM behaves like an independent computer with its own operating system and applications, even though it is sharing hardware with other VMs.

With physical hardware, you are often limited by physical proximity and network design if you want to access it. Virtualization removes these limitations by abstracting physical hardware functionality into software, so you can manage, maintain, and use your hardware infrastructure like an application on the web.

In cloud computing, virtualization is essential because it allows cloud providers to:

> Efficiently use physical resources

> Run multiple services on fewer servers

> Provide on-demand and scalable services to users

> Lower operational costs while improving flexibility

Virtual machines and hypervisors are two important concepts in virtualization.

1) Virtual machine :
A virtual machine is a software-defined computer that runs on a physical computer
with a separate operating system and computing resources.
The physical computer is called the host machine and virtual machines are guest
machines.
Multiple virtual machines can run on a single physical machine.
Virtual machines are abstracted from the computer hardware by a hypervisor.

2) Hypervisor :
The hypervisor is a software component that manages multiple virtual machines in a
computer.
It ensures that each virtual machine gets the allocated resources and does not
interfere with the operation of other virtual machines.

It acts as a middle layer between the hardware and the virtual machines.

Key Functions of a Hypervisor:

Creates and manages virtual machines

Allocates resources like CPU, RAM, and storage to each VM

Isolates each VM for security and stability

Allows multiple operating systems to run at the same time (like Windows + Linux on
one system)
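
A toy model of the hypervisor's resource-allocation bookkeeping described above. It only illustrates the accounting of CPU and RAM across guest VMs, not real virtualization; all numbers are made up.

class Hypervisor:
    def __init__(self, cpus, ram_gb):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Refuse to allocate more than the host physically has.
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            raise RuntimeError("insufficient host resources")
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

host = Hypervisor(cpus=16, ram_gb=64)
host.create_vm("web-vm", cpus=4, ram_gb=8)
host.create_vm("db-vm", cpus=8, ram_gb=32)
print(host.free_cpus, host.free_ram)   # 4 CPUs and 24 GB left for further VMs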


Type of hypervisor :

> Type 1 hypervisor :

A type 1 hypervisor, or bare-metal hypervisor, is a hypervisor installed directly on the computer's hardware rather than on top of an operating system. Type 1 hypervisors therefore have better performance and are commonly used by enterprise applications. KVM is a type 1 hypervisor used to host multiple virtual machines on the Linux operating system.

> Type 2 hypervisor :


Also known as a hosted hypervisor, the type 2 hypervisor is installed on an
operating system. Type 2 hypervisors are suitable for end-user computing.

Virtualization in cloud computing plays a critical role in resource management, efficiency, and scalability. It involves creating virtual versions of physical hardware, such as servers, storage devices, or networks. This allows multiple virtual machines (VMs) to run on a single physical server, improving resource utilization and reducing costs. However, like any technology, virtualization has its own set of challenges and benefits.

Benefits of Virtualization in Cloud Computing


Resource Optimization:

Virtualization allows better resource utilization. Instead of dedicating physical hardware to a single application, multiple virtual machines can run on the same server, ensuring that resources like CPU, RAM, and storage are used more efficiently.

Cost Efficiency:

Virtualization reduces the need for physical hardware, leading to cost savings in
terms of purchasing and maintaining servers. It also reduces the energy consumption
and space required for physical servers.

Scalability:

Cloud services can be easily scaled up or down depending on demand. Virtual machines can be quickly provisioned or deprovisioned without the need to invest in physical hardware. This flexibility is a key advantage in cloud computing.

Isolation and Security:

Virtual machines are isolated from each other, meaning that one VM’s issues, such
as crashes or security breaches, do not affect other VMs running on the same
physical machine. This enhances security and system stability.

Flexibility and Portability:

Virtualized environments make it easy to move workloads across different cloud environments or data centers. Virtual machines can be easily cloned, backed up, or migrated to different locations with minimal disruption.

Disaster Recovery:

Virtualization enhances disaster recovery capabilities. Since VMs are essentially software-based, they can be backed up, replicated, and restored more easily than traditional physical hardware systems.

Faster Deployment:

New virtual machines can be created and deployed quickly, reducing the time it
takes to deploy applications and services in the cloud.

Challenges of Virtualization in Cloud Computing


Performance Overhead:

Virtualization introduces a layer of abstraction between the hardware and the software, which can lead to performance overhead. While this is typically minor, resource-intensive applications may see a performance degradation compared to running directly on physical hardware.

Resource Contention:

Multiple virtual machines share the same underlying physical resources. If not
managed properly, this can lead to resource contention, where VMs compete for CPU,
memory, or I/O, resulting in degraded performance.

Complex Management:
Virtualized environments can become complex to manage as the number of VMs
increases. Administrators need tools and strategies to monitor, manage, and
maintain these virtualized environments effectively.

Security Concerns:

Although virtualization provides isolation, security risks can arise from vulnerabilities in the hypervisor (the software managing the VMs). A breach in the hypervisor could potentially compromise all VMs on the host. Furthermore, managing security across numerous VMs can be a challenge.

Compatibility Issues:

Some legacy applications may not run properly in a virtualized environment, requiring modifications or adjustments to work with virtualization technologies. Additionally, some hardware devices may not be fully compatible with virtualization.

Licensing and Compliance:

Virtualization can complicate software licensing and compliance. Many software licenses are tied to physical hardware, so cloud providers must ensure they are in compliance with licensing agreements when running software in virtual environments.

Data Management:

Virtualized environments often involve multiple VMs storing data across different
locations. Managing and securing data in such distributed environments can be more
challenging compared to a traditional physical infrastructure.

Multitenant Software :

Multitenant software is a software architecture in which a single application is shared by multiple customers (called tenants), but each customer's data is kept separate and secure.

It means that multiple customers of a cloud vendor use the same computing resources. Although they share the same computing resources, the data of each cloud customer is kept completely separate and secure. This is a very important concept in cloud computing.

This approach is commonly used in Software-as-a-Service (SaaS) applications and cloud computing.
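
A sketch of row-level tenant isolation in a shared (multi-tenant) database, using the standard-library sqlite3 module. The table, tenants, and data are made up; the point is that every query is scoped by tenant_id.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("acme", "laptop"), ("acme", "mouse"), ("globex", "desk")])

def orders_for(tenant_id):
    # Scoping by tenant_id means one customer never sees another customer's rows.
    return db.execute("SELECT item FROM orders WHERE tenant_id = ?", (tenant_id,)).fetchall()

print(orders_for("acme"))     # [('laptop',), ('mouse',)]
print(orders_for("globex"))   # [('desk',)]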

Relational Database :

A relational database stores data in tables (rows and columns) and uses relationships to connect that data.

These databases use Structured Query Language (SQL) to manage and query data, making them a common choice for structured data storage and retrieval in cloud environments. Data is typically structured across multiple tables, which can be joined together via a primary key or a foreign key.
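
A minimal sketch of the relational model: two tables linked by a foreign key and joined with SQL. It uses the standard-library sqlite3 module, and the tables and rows are illustrative.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
db.execute("INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob')")
db.execute("INSERT INTO invoices VALUES (10, 1, 99.0), (11, 1, 25.0), (12, 2, 40.0)")

# Join the tables via the customer_id foreign key and total each customer's invoices.
rows = db.execute("""
    SELECT c.name, SUM(i.total)
    FROM customers c JOIN invoices i ON i.customer_id = c.id
    GROUP BY c.name
""").fetchall()
print(rows)   # [('Alice', 124.0), ('Bob', 40.0)]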

Cloud Storage and Distributed System :

Cloud storage and distributed systems like Google File System (GFS) and Hadoop
Distributed File System (HDFS) play a crucial role in cloud computing, as they
enable the efficient storage, management, and processing of massive amounts of data
across multiple machines. These distributed systems ensure high availability,
scalability, and fault tolerance, making them ideal for cloud-based environments
where large-scale data processing is required.

Cloud Storage in Cloud Computing

Cloud storage refers to storing data on remote servers, which can be accessed over
the internet. The infrastructure is managed by a cloud service provider, and users
can access their data from anywhere, typically via APIs or web interfaces.

Key Characteristics of Cloud Storage:

Scalability: Cloud storage can scale to handle vast amounts of data. Users only pay
for the storage they use, and the system can automatically expand to accommodate
more data as needed.

Accessibility: Cloud storage allows users to access their data from anywhere with
an internet connection, making it convenient for collaboration and remote work.

Redundancy & Fault Tolerance: Cloud storage providers often replicate data across
multiple locations to ensure that it remains available even in the event of
hardware failures or network disruptions.

Security: Cloud providers employ robust security measures, such as encryption, authentication, and access control, to ensure that data is secure.

Cost Efficiency: Instead of maintaining on-premises storage infrastructure, cloud storage allows businesses and individuals to store data without upfront investments in hardware.

Popular Cloud Storage Solutions:

Amazon S3 (Simple Storage Service)

Microsoft Azure Blob Storage

Google Cloud Storage
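
A minimal sketch of storing and reading back an object in Amazon S3 with the boto3 SDK, assuming boto3 is installed, credentials are configured, and the bucket already exists. The bucket and key names are made up.

import boto3

s3 = boto3.client("s3")

# Write an object, then read it back over the internet.
s3.put_object(Bucket="my-example-bucket", Key="notes/cloud.txt", Body=b"hello cloud")
obj = s3.get_object(Bucket="my-example-bucket", Key="notes/cloud.txt")
print(obj["Body"].read())   # b'hello cloud'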

Google File System :

Google File System (GFS) is a distributed file system designed by Google to handle
large-scale data storage across multiple machines while providing high reliability
and performance.
GFS (Google File System) is a distributed file system developed by Google to store
huge amounts of data across multiple machines (servers).
In essence, GFS is a foundational technology for cloud computing, allowing for
efficient and scalable data storage and processing on a large scale.

Key aspects of GFS:

Distributed Architecture:
GFS breaks files into 64MB chunks and replicates these chunks across multiple
servers (chunkservers) to ensure data durability and availability.

Fault Tolerance:
GFS is designed to handle failures gracefully, with mechanisms to automatically
recover data if chunkservers or other components fail.

Scalability:
GFS is designed to scale to handle large datasets and numerous users, making it
suitable for large-scale applications like web search and indexing.

High Throughput:
GFS is optimized for high-speed data processing and access, enabling efficient data
retrieval and processing.

Master Server:
A master server coordinates the entire system, managing metadata, file chunk
locations, and handling client requests.

How GFS Works:

File Splitting:

Large files are split into chunks of 64 MB.

Chunk Servers:

Chunks are stored on multiple servers (usually 3 copies for safety).

Master Server:

Keeps metadata (file names, chunk locations), but not the data itself.

Client Request:

Clients ask the Master for chunk info, then talk to Chunk Servers directly to
read/write data.
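
A toy illustration of the chunking and replication described above: split a file into 64 MB chunks and assign each chunk to three chunkservers. The chunkserver names and placement policy are made up; the real GFS master uses far more sophisticated placement.

import itertools

CHUNK_SIZE = 64 * 1024 * 1024                       # 64 MB, as in GFS
CHUNKSERVERS = ["cs-1", "cs-2", "cs-3", "cs-4", "cs-5"]

def place_chunks(file_size, replicas=3):
    num_chunks = -(-file_size // CHUNK_SIZE)        # ceiling division
    rotation = itertools.cycle(CHUNKSERVERS)
    # Map each chunk id to the servers holding its replicas.
    return {chunk: [next(rotation) for _ in range(replicas)] for chunk in range(num_chunks)}

print(place_chunks(200 * 1024 * 1024))   # a 200 MB file becomes 4 chunks, 3 replicas each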

1. Google File System (GFS) :

GFS is a distributed file system developed by Google to meet the needs of large-
scale data processing. It is specifically optimized for applications like search
indexing, data mining, and processing large data sets across many machines.

Key Features of GFS:

High Fault Tolerance: GFS is designed to continue working seamlessly even if parts
of the system fail. Data is replicated across multiple nodes to prevent loss,
ensuring data availability and reliability.

Large File Storage: GFS is optimized to store very large files (in the range of
gigabytes or terabytes) and works efficiently by breaking these large files into
smaller chunks.

Data Consistency and Redundancy: GFS provides data consistency by using a master
server to coordinate file access. The system replicates data to ensure reliability,
even in the event of node failures.

Write-Once, Read-Many Model: Files are typically written once and read many times,
which simplifies the consistency and coordination mechanisms. This is ideal for
applications like logging or batch processing.

Designed for Large-Scale Data: GFS is built to handle petabytes of data and
thousands of machines, making it suitable for Google’s vast data processing needs.

HDFS (Hadoop Distributed File System) :

HDFS (Hadoop Distributed File System) is an open-source distributed file system based on the Google File System (GFS).

HDFS is another widely used distributed file system designed for storing large data sets in a distributed environment. It is a core component of the Apache Hadoop ecosystem, is optimized for big data analytics and processing, and is used to store very large files across multiple machines and to handle large-scale data processing jobs such as those found in big data environments.

Data Storage in HDFS:

Files are split into blocks (default size 128 MB; older Hadoop versions used 64 MB).

Each block is replicated (default = 3 copies) across different DataNodes.

Ensures data availability even if some nodes fail.

HDFS Architecture Overview :

HDFS follows a Master-Slave architecture.


It has one NameNode (the master) and many DataNodes (the workers).

NameNode (Master Node):

Manages all the slave nodes and assigns work to them. It executes filesystem namespace operations like opening, closing, and renaming files and directories. It should be deployed on reliable, high-configuration hardware, not on commodity hardware.

DataNode (Slave Node):
The actual worker nodes, which do the actual work like reading, writing, and processing data. They also perform block creation, deletion, and replication upon instruction from the master. They can be deployed on commodity hardware.

Key Components of HDFS Architecture:

NameNode : The master server that manages file system metadata like file names, block locations, and permissions.

DataNode : The worker nodes that store the actual data blocks. They regularly report to the NameNode.

Secondary NameNode : Supports the NameNode by periodically merging and backing up its metadata (but it is not a backup NameNode).

Client : The user/system that interacts with HDFS by submitting requests to read or write data.

Key Features of HDFS:

Scalability: HDFS is highly scalable, able to manage petabytes of data across thousands of machines. It can easily expand as data needs grow.

Fault Tolerance: Like GFS, HDFS ensures that data is replicated across multiple nodes to prevent data loss. By default, it replicates data three times, but this can be configured.

Data Access: HDFS is designed to handle large sequential data access patterns, making it ideal for tasks like batch processing or large-scale data analytics.

High Throughput: HDFS is optimized for high-throughput access to data. It provides fast access to large files for reading, which is critical for big data applications.

Master-Slave Architecture: HDFS uses a master-slave architecture, with a NameNode as the master, which keeps track of the file system's metadata (locations of the blocks), and DataNodes as the slaves, which store the actual data.

Integration with the Hadoop Ecosystem: HDFS is tightly integrated with other tools in the Hadoop ecosystem (such as MapReduce, Hive, and Spark), allowing it to serve as a foundational layer for big data processing.

Comparison between GFS and HDFS:

GFS is proprietary to Google, while HDFS is open-source and part of the Apache
Hadoop project.

Both file systems are optimized for large-scale, fault-tolerant data storage across
many machines, with replication strategies to ensure data integrity.

GFS was designed with Google’s internal needs in mind, focusing on applications
like search indexing, while HDFS was designed for big data processing and analytics
in the Hadoop ecosystem.

Benefits of Using Distributed File Systems in Cloud Computing


Scalability: Both GFS and HDFS allow for the storage and management of vast amounts of data, with the ability to scale horizontally by adding more nodes to the system. This makes them ideal for cloud environments where data needs can grow rapidly.

Fault Tolerance: Distributed systems are designed to handle node failures gracefully by replicating data across multiple nodes. This ensures high availability and reliability of data in a cloud environment, even in the event of hardware failures.

Cost Efficiency: By leveraging distributed storage across many inexpensive commodity machines, these systems reduce the cost of storing and processing large datasets compared to traditional, monolithic file systems.

Performance Optimization: These systems are optimized for high-throughput and parallel data access, which speeds up the processing of large-scale data. This is especially beneficial in cloud environments where big data analytics and processing are key requirements.

Simplified Data Management: Both GFS and HDFS manage data at the block level, abstracting away the complexities of low-level data storage and providing a unified view of the data across a distributed environment.

Key Differences between a Database and a Data Store:

Structure : A database holds structured data with a predefined schema; a data store
can hold various data types, including structured, semi-structured, and
unstructured data.

Management : A database is managed by a DBMS; a data store is managed by different
systems depending on the storage solution.

Purpose : A database provides efficient storage and retrieval of structured data; a
data store provides persistent storage and management of various data types.

Examples : Databases include relational databases, NoSQL databases, and cloud-based
database services; data stores include cloud storage services, file systems, object
storage, and data lakes.
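
A short standard-library Python sketch of this distinction: structured rows go into
a schema-bound SQLite database, while arbitrary bytes are dropped into a simple
file-based "data store". The file names, table, and values are illustrative
placeholders.

# Conceptual sketch: structured data in a database vs. an arbitrary blob
# in a file-based data store. Standard library only; names are placeholders.
import sqlite3
from pathlib import Path

# Database: a predefined schema enforced by the DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Asha", "asha@example.com"))
print(conn.execute("SELECT name, email FROM users").fetchall())

# Data store: no schema; any bytes (logs, images, JSON, video chunks) can be kept.
store = Path("object-store")
store.mkdir(exist_ok=True)
(store / "raw-sensor-dump.bin").write_bytes(b"\x00\x01 unstructured bytes ...")
print([p.name for p in store.iterdir()])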

Cloud Middleware:

In cloud computing, cloud middleware plays a crucial role by acting as a bridge
between applications and the underlying cloud infrastructure. Let’s break it down
simply:

🔹 What is Cloud Middleware:

In cloud computing, cloud middleware acts as a software layer that facilitates
communication and data flow between different applications, services, and devices
within a cloud environment, abstracting away the underlying infrastructure
complexities. Cloud middleware is software that connects different applications,
services, and databases running in the cloud. It helps these components communicate
smoothly, even if they are built using different technologies or hosted in
different environments.

Without middleware, developers would need to write custom code for communication
between services — which is time-consuming and error-prone. Middleware simplifies
this process and improves scalability, flexibility, and integration.

🔹 Key Functions of Cloud Middleware:

Integration – Connects apps and services across cloud and on-premises environments.

Communication – Enables data exchange between distributed systems (for example, via
APIs and messaging).

Security – Manages authentication, authorization, and data encryption.

Scalability – Helps applications scale by managing resources efficiently.

Monitoring & Management – Offers tools to monitor the performance and health of
applications.
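
To illustrate the communication and integration roles listed above, here is a tiny,
self-contained Python sketch of a message-broker-style middleware layer that
decouples a producing service from its consumers. It is a conceptual toy built only
on the standard library, not a model of any specific middleware product.

# Conceptual sketch of middleware acting as a message broker between services.
# Standard-library Python only; service names and topics are illustrative.
from collections import defaultdict
from typing import Callable, Dict, List

class SimpleBroker:
    """Toy publish/subscribe middleware: producers publish to a topic and
    every subscriber registered for that topic receives the message."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # The producer never talks to consumers directly; the broker routes.
        for handler in self._subscribers[topic]:
            handler(message)

broker = SimpleBroker()

# Two independent "services" integrate through the middleware, not through each other.
broker.subscribe("orders", lambda msg: print("billing service saw:", msg))
broker.subscribe("orders", lambda msg: print("shipping service saw:", msg))

# An order service publishes an event without knowing who consumes it.
broker.publish("orders", {"order_id": 42, "item": "laptop"})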

Sky Computing :

In traditional cloud computing, companies usually use one cloud provider. But that
creates problems like:

> Limited service options

> Dependency on one vendor (vendor lock-in)

> Higher cost in some cases

Sky Computing solves this by enabling inter-cloud compatibility and giving freedom
to choose the best cloud services for different needs.

Sky Computing is an advanced model in cloud computing where multiple cloud
providers (like AWS, Azure, Google Cloud) are used together seamlessly, as if they
were part of one large, global cloud.

Sky computing in cloud computing refers to a concept where multiple cloud
providers' resources are combined to create a unified, large-scale, interoperable
infrastructure. It aims to overcome the limitations of traditional cloud computing
by providing a more flexible, cost-effective, and scalable solution, particularly
for large-scale computations.

It allows users to:

> Run applications across different clouds

> Choose the best services from each cloud

> Avoid being locked into a single provider

Key benefits:

# Flexibility
# Cost Efficiency
# Portability
# High Availability

Simple Example:
Imagine you run a website. You store videos on Google Cloud, use machine learning
from AWS, and use global networking from Azure. Sky computing helps you combine all
these services in one platform, automatically and smoothly (a small sketch of this
idea follows).
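
The hypothetical Python sketch below shows the kind of broker logic a sky computing
layer might apply: given the same job, it places it on whichever provider currently
offers the lowest cost. The provider names, prices, and run_job interface are
invented for illustration and do not correspond to any real SDK.

# Hypothetical sketch of a sky-computing style broker that places a job on
# whichever cloud is currently cheapest. Providers, prices, and the run_job
# interface are invented placeholders, not real provider APIs.
from dataclasses import dataclass
from typing import List

@dataclass
class CloudProvider:
    name: str
    price_per_hour: float  # illustrative cost of one worker-hour

    def run_job(self, job_name: str, hours: float) -> str:
        # A real system would call the provider's SDK or REST API here.
        cost = self.price_per_hour * hours
        return f"{job_name} ran on {self.name} for {hours}h (cost ~${cost:.2f})"

def place_job(providers: List[CloudProvider], job_name: str, hours: float) -> str:
    # The "sky" layer hides provider differences and picks the best fit;
    # here "best" simply means the lowest hourly price.
    cheapest = min(providers, key=lambda p: p.price_per_hour)
    return cheapest.run_job(job_name, hours)

clouds = [CloudProvider("aws", 0.12), CloudProvider("azure", 0.10), CloudProvider("gcp", 0.11)]
print(place_job(clouds, "nightly-analytics", hours=3))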

Big Table :

Google Cloud Bigtable is a highly scalable NoSQL database designed for handling
large volumes of data efficiently. It is built to store and manage terabytes to
petabytes of structured data while ensuring low-latency performance. This makes it
an excellent choice for applications requiring high throughput and real-time
analytics.

One of the key features of Bigtable is its row key-based indexing. Every row in a
table is uniquely identified by a row key, which allows quick lookups. Due to its
distributed architecture, Bigtable can process billions of rows and thousands of
columns seamlessly. It is particularly useful for use cases like time-series data,
financial transactions, and IoT analytics.
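
As a concrete illustration of row key based access, here is a minimal sketch using
the google-cloud-bigtable Python client. The project, instance, table, column
family, and row key are assumptions for illustration, and the table and column
family are presumed to already exist.

# Minimal sketch of writing and reading one Bigtable row by its row key.
# Assumes the google-cloud-bigtable package is installed, credentials are
# configured, and the instance/table/column family below already exist.
# All identifiers are illustrative placeholders.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")     # hypothetical project id
instance = client.instance("my-instance")          # hypothetical instance id
table = instance.table("sensor-readings")          # hypothetical table

# Row keys are the only index; choosing them well (e.g. device id + timestamp)
# is what makes time-series lookups fast.
row_key = b"device42#2024-01-01T00:00:00"

row = table.direct_row(row_key)
row.set_cell("metrics", b"temperature", b"21.5")   # family, column, value
row.commit()

# Point lookup by the same row key.
result = table.read_row(row_key)
cell = result.cells["metrics"][b"temperature"][0]
print(cell.value.decode())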

Challenges and issues in cloud computing :

Data Security and Privacy


Multi-Cloud Environments
Performance Challenges
Interoperability and Flexibility
High Dependence on Network
Lack of Knowledge and Expertise
Reliability and Availability
Password Security
Cost Management
Control or Governance
Compliance
Multiple Cloud Management
Migration
Hybrid-Cloud Complexity

MapReduce : MapReduce is a programming model and associated implementation for
efficiently processing large datasets in parallel across computer clusters. It is
widely used in various fields, including data analysis, machine learning, and web-
scale applications. The core idea is to break complex tasks down into smaller,
manageable steps (map) and then combine the results (reduce); a minimal word-count
sketch of this pattern is shown below.
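
To make the map and reduce steps concrete, here is a small, self-contained Python
sketch that simulates the classic word-count job on in-memory "documents"; a real
deployment would run the same logic on a cluster through a MapReduce framework (for
example via Hadoop Streaming) rather than in plain Python.

# Pure-Python simulation of the MapReduce word-count pattern.
# Input documents are in-memory strings; a real job would read splits of a
# large file from HDFS and run mappers/reducers on many machines in parallel.
from collections import defaultdict
from typing import Dict, Iterator, List, Tuple

def mapper(document: str) -> Iterator[Tuple[str, int]]:
    # Map step: emit a (word, 1) pair for every word in the input split.
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs: List[Tuple[str, int]]) -> Dict[str, List[int]]:
    # Shuffle step: group all emitted values by key, as the framework would.
    grouped: Dict[str, List[int]] = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reducer(key: str, values: List[int]) -> Tuple[str, int]:
    # Reduce step: combine all counts for one word.
    return key, sum(values)

documents = ["big data needs big clusters", "clusters process big data"]
mapped = [pair for doc in documents for pair in mapper(doc)]
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'big': 3, 'data': 2, 'needs': 1, 'clusters': 2, 'process': 1}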

Here's a breakdown of key applications:


Data Analysis & Big Data:
ETL (Extract, Transform, Load): Preparing data for analysis by extracting it from
various sources, transforming it, and loading it into a data warehouse.
Log Analysis: Analyzing web server logs or application logs to identify trends and
error patterns.
Data Warehouse Analysis: Performing queries and aggregations on large data
warehouses.
Fraud Detection: Identifying fraudulent transactions or patterns in large datasets.
Tabulation & Counting: Counting specific occurrences, like customer renewals by
country.

Machine Learning:
Collaborative Filtering: Recommending items to users based on their preferences and
the preferences of similar users.
K-means Clustering: Grouping data points based on similarity.
Linear Regression: Training machine learning models to predict outcomes based on
input features.

Other Applications:
Graph Processing: Calculating PageRank, finding shortest paths, or detecting
communities in large graphs.
Image Processing: Performing tasks like feature extraction, image classification,
or image stitching.
Web Indexing: Building and updating search engine indexes.
Text Mining: Counting word frequencies or identifying common hashtags in social
media data.
Entertainment: Netflix uses MapReduce to provide personalized movie recommendations
based on user history.
E-commerce: Amazon and other e-commerce companies use it to analyze customer buying
behavior and personalize shopping experiences.

Inter cloud :

Intercloud is a network of clouds that are linked with each other. This includes
private, public, and hybrid clouds that come together to provide a seamless
exchange of data.

The Intercloud platform provides end-to-end private connectivity for cloud
applications, enabling customers to work across clouds while ensuring the security
and privacy of their data at any point in time. Most Intercloud platforms provide
‘pay-per-use’ service flexibility, giving clients the opportunity to manage costs
effectively according to their cloud consumption. Another advantage is portability:
migrating data can become as simple as “dragging and dropping” from one provider to
the next. This saves money, time, and human resources.

Data loss prevention:
Flexibility:
Collaboration:
Reliability and Disaster Recovery:
Accessibility:
Reliability:
Reusability:
Scalability:
Interoperability:
Maintainability:
On-demand self-service:
Resource pooling:
Measured service:
Cost-effectiveness:
Multi-tenancy:
Sustainability:
Vendor Lock-in:
Security Concerns:
Dependence on Internet Connectivity:
Vulnerability:
Data Management:
Multitenant architecture :
Automated upgrades:
Easy customization:
