Cloud Computing

Cloud computing is the delivery of computing services—including servers, storage, databases, networking,
software, analytics, and intelligence—over the internet
(“the cloud”) to offer faster innovation, flexible resources,
and economies of scale. Instead of owning their own
computing infrastructure or data centers, companies can
rent access to anything from applications to storage
from a cloud service provider.

Characteristics
1. On-Demand Self-Service – Users can provision
computing resources without human interaction with
each service provider.
2. Broad Network Access – Services are available
over the network and accessed through standard
mechanisms (e.g., mobile phones, laptops).
3. Resource Pooling – Providers serve multiple
customers with dynamically assigned physical and
virtual resources.
4. Rapid Elasticity – Capabilities can be scaled up or
down quickly.
5. Measured Service – Usage can be monitored,
controlled, and reported for transparency and
billing.
Components
1. Front-End Platform – The client or user interface (e.g., web browser, app).
2. Back-End Platform – Servers, storage, and
application resources.
3. Cloud-Based Delivery – Services like SaaS, PaaS,
and IaaS.
4. Network (Internet) – Connects front-end to back-
end platforms.

Types (Service Models)


1. Infrastructure as a Service (IaaS) – Virtualized
computing resources over the internet (e.g., AWS
EC2).
2. Platform as a Service (PaaS) – Environment for
developers to build apps without worrying about the
underlying infrastructure (e.g., Google App Engine).
3. Software as a Service (SaaS) – Ready-to-use
software delivered over the internet (e.g., Gmail,
Microsoft 365).

Deployment Models
1. Public Cloud – Services offered over the public
internet and available to anyone.
2. Private Cloud – Exclusive to a single organization.
3. Hybrid Cloud – Combines public and private
clouds for greater flexibility.
Advantages
 Cost-efficiency (pay-as-you-go)
 Scalability and flexibility
 Business continuity and disaster recovery
 Easy collaboration and updates

Limitations
 Data security and privacy risks
 Dependency on internet connectivity
 Limited control over infrastructure
 Compliance and regulatory issues

Applications
 Data storage and backup
 Hosting websites and blogs
 Streaming services (Netflix, Spotify)
 Development and testing platforms
 Enterprise resource planning (ERP) systems
 IoT integration and analytics
1. Utility Computing
Definition
Utility computing is a service provisioning model where
computing resources such as storage, processing
power, and networking are provided to users as metered
services, similar to traditional utilities like electricity or
water. Users pay only for the resources they consume,
making it a cost-efficient alternative to maintaining
private infrastructure.
Characteristics
 Pay-as-you-go: Users are billed based on usage.
 On-Demand Availability: Services can be
provisioned and de-provisioned as needed.
 Resource Sharing: Resources are shared among
multiple clients via virtualization.
 Scalability: Easily scales up or down based on
user demand.
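
The pay-as-you-go idea above can be illustrated with a minimal metering sketch. The resource names and per-unit rates below are purely illustrative assumptions, not any provider's real pricing.

```python
# Minimal sketch of pay-as-you-go metering: usage is recorded per resource
# and billed against per-unit rates, the way a utility meters electricity.
# Rates and resource names are illustrative assumptions.

RATES = {
    "cpu_hours": 0.05,          # assumed $ per vCPU-hour
    "storage_gb_month": 0.02,   # assumed $ per GB-month
    "network_gb": 0.01,         # assumed $ per GB transferred
}

def compute_bill(usage: dict) -> float:
    """Return the total charge for a billing period given metered usage."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

if __name__ == "__main__":
    monthly_usage = {"cpu_hours": 720, "storage_gb_month": 500, "network_gb": 120}
    print(f"Total charge: ${compute_bill(monthly_usage):.2f}")   # $47.20
```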
Applications
 Web hosting
 Batch processing
 Data analytics
 Scientific simulations
 Enterprise software delivery
2. Elastic Computing
Definition
Elastic computing refers to the ability of a cloud
system to dynamically allocate and release computing
resources based on the workload demands. It ensures
that applications have exactly the resources they need
at any time, optimizing performance and cost-
efficiency.

Characteristics
 Auto-scaling: Automatically adjusts resources as
demand changes.
 Agility: Adapts quickly to workload changes.
 Cost-Efficiency: Prevents over-provisioning and
under-utilization.
 Programmable Infrastructure: Controlled via
software APIs.
Applications
 E-commerce platforms
 Online gaming
 Video streaming services
 Data-intensive applications
 Real-time analytics
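
The auto-scaling characteristic above boils down to a simple control decision. The sketch below is a simplified policy, not a specific cloud API; the utilization thresholds and instance limits are assumptions.

```python
# Minimal sketch of an elastic auto-scaling policy: compare average CPU
# utilization against thresholds and decide how many instances to run next.

def desired_instances(current: int, avg_cpu: float,
                      scale_out_at: float = 70.0, scale_in_at: float = 30.0,
                      min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the instance count an elastic controller would target."""
    if avg_cpu > scale_out_at:
        return min(current + 1, max_instances)   # add capacity under load
    if avg_cpu < scale_in_at:
        return max(current - 1, min_instances)   # release idle capacity
    return current                               # demand is within the target band

print(desired_instances(current=3, avg_cpu=85.0))  # -> 4 (scale out)
print(desired_instances(current=3, avg_cpu=20.0))  # -> 2 (scale in)
```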
Difference Between Utility Computing and Elastic
Computing:
Feature | Utility Computing | Elastic Computing
Focus | Billing and service model | Dynamic resource scaling
Billing | Based on usage | Supports auto-scaling, billed per scale
Scalability | Available, but not automatic | Automatic and responsive
Use Case | General cloud service delivery | Workload optimization and responsiveness
Goal | Reduce cost and complexity | Improve performance and adaptability

Hypervisor
A hypervisor, also known as a virtual machine
monitor (VMM), is software or firmware that creates and
manages virtual machines (VMs) on a host system. It
allows multiple operating systems to share a single
hardware host by abstracting the hardware layer and
allocating resources to each VM. This technology is the
backbone of virtualization in cloud computing.

Types of Hypervisors
1. Type 1 (Bare-Metal Hypervisor)
o Runs directly on physical hardware.
o Examples: VMware ESXi, Microsoft Hyper-V, Xen.
o Advantages: Higher performance, better
resource management, more secure.
o Use Case: Data centers, enterprise
virtualization.
2. Type 2 (Hosted Hypervisor)
o Runs on top of an existing OS (like an app).
o Examples: VMware Workstation, Oracle
VirtualBox.
o Advantages: Easier setup, good for
development/testing.
o Use Case: Local desktop virtualization,
learning environments.
Characteristics
 Resource Virtualization – CPU, memory, storage,
and I/O devices are virtualized.
 Isolation – Each VM is isolated from others,
ensuring security and fault containment.
 Hardware Abstraction – OSes can run
independently of the hardware.
 Snapshot and Cloning – Ability to save VM states
or duplicate environments.
 Migration Support – VMs can be moved between
hypervisors (live migration).
Components
1. Host Machine – The physical server where the
hypervisor runs.
2. Guest OS – Virtualized operating systems running
inside VMs.
3. Virtual Machine Monitor – Manages resource
allocation and VM operations.
4. Management Console – Interface to manage VMs
and monitor performance.

Google File System (GFS)


Definition:
GFS is a distributed file system designed by Google to
manage and store large amounts of data across many
machines in a fault-tolerant manner. It is specifically
tailored for handling large-scale data processing tasks,
such as those used in Google’s search engine, indexing,
and data analysis.
Key Features:
 High Scalability: GFS is designed to scale to
thousands of machines, storing petabytes of data
across multiple locations.
 Fault Tolerance: Data is replicated across multiple
machines to ensure reliability and fault tolerance.
 Large File Support: GFS is optimized for handling
large files (hundreds of MBs or GBs), which is ideal
for big data applications.
 Chunking: Files are divided into large chunks (typically 64MB each), and each chunk is stored across multiple servers (see the sketch after this list).
 Single Master Node: GFS has a single master
server that manages metadata and coordinates
access to file chunks.
 Write-Once Model: Files are written once and can
be appended but not modified after they are written,
ensuring high consistency.
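
The sketch below illustrates the chunking and replication idea in plain Python: a large file is split into fixed 64 MB chunks and each chunk is assigned to several chunkservers. The server names and round-robin placement are illustrative assumptions, not GFS's actual placement policy.

```python
# Conceptual sketch of GFS-style chunking: split a file into 64 MB chunks
# and assign each chunk to several chunkservers for replication.

import itertools

CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB, as described above
REPLICAS = 3
CHUNKSERVERS = ["cs-01", "cs-02", "cs-03", "cs-04", "cs-05"]   # made-up names

def plan_chunks(file_size: int) -> list[dict]:
    """Return chunk handles with their byte ranges and replica placements."""
    plan = []
    servers = itertools.cycle(CHUNKSERVERS)
    for index in range(0, file_size, CHUNK_SIZE):
        plan.append({
            "chunk_handle": f"chunk-{index // CHUNK_SIZE:06d}",
            "byte_range": (index, min(index + CHUNK_SIZE, file_size)),
            "replicas": [next(servers) for _ in range(REPLICAS)],
        })
    return plan

for chunk in plan_chunks(200 * 1024 * 1024):   # a 200 MB file -> 4 chunks
    print(chunk["chunk_handle"], chunk["replicas"])
```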
Use Cases:
 Large-scale data storage and management for
services like Google Search, YouTube, and Google
Maps.
 Big data processing frameworks such as
MapReduce, which require distributed storage and
high throughput.

HDFS (Hadoop Distributed File System)


Definition
HDFS is the primary storage system of the Hadoop
ecosystem. It is a distributed file system designed to
store large volumes of data across multiple machines
while maintaining high fault tolerance and data reliability.
Inspired by Google File System (GFS), it breaks files
into large blocks and distributes them across a cluster of
commodity hardware.

Characteristics
 Fault Tolerance: Data is replicated across multiple
nodes (default is 3 copies).
 High Throughput: Optimized for batch processing
and streaming large files.
 Scalability: Can handle petabytes of data by simply
adding more nodes.
 Write Once, Read Many: Data is immutable once
written; supports append operations.
 Data Locality: Computation is moved to where data
resides, minimizing network I/O.

Components
1. NameNode – Master node that manages metadata
(file system namespace, block locations).
2. DataNodes – Store actual data blocks and report to
NameNode.
3. Secondary NameNode – Periodically checkpoints
metadata to assist recovery.
4. HDFS Client – Interfaces for applications to
read/write data.

Limitations
 Not suitable for low-latency access.
 Lacks POSIX compliance.
 Doesn’t support random writes or file modifications.
 Metadata is stored in memory—NameNode can
become a bottleneck.
2. MapReduce
Definition
MapReduce is a programming model and processing
engine designed to perform parallel processing on large
datasets in a distributed computing environment.
Developed by Google and implemented in Hadoop, it
splits data processing tasks into two functions: Map
(data filtering and sorting) and Reduce (aggregation and
summarization).

Characteristics
 Parallel Processing: Tasks are executed across multiple nodes simultaneously.
 Fault Tolerance: Automatically restarts failed tasks on other nodes.
 Simplicity: Developers write only map and reduce functions; the framework handles distribution and fault management.
 Scalability: Easily scales to hundreds or thousands
of machines.
 Data Locality Optimization: Moves computation to
where data resides.

Components
1. JobTracker (Master) – Assigns tasks to
TaskTrackers, manages job execution.
2. TaskTrackers (Workers) – Execute map and
reduce tasks on data nodes.
3. Map Function – Processes input key/value pairs
and outputs intermediate key/value pairs.
4. Reduce Function – Merges intermediate data by
key to produce the final output.
5. Input/Output Format – Determines how data is
read and written to HDFS.
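
The two user-written functions (components 3 and 4) are easiest to see in a word-count example. The sketch below is plain Python: the shuffle and group step is simulated locally, whereas in Hadoop the framework performs it across the cluster.

```python
# Minimal sketch of the map and reduce functions in the MapReduce model,
# applied to word counting. The local "run" helper stands in for the framework.

from collections import defaultdict

def map_fn(_key, line):
    """Map: emit (word, 1) for every word in an input line."""
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    """Reduce: sum all partial counts for one word."""
    return word, sum(counts)

def run(lines):
    grouped = defaultdict(list)                 # shuffle & sort by key
    for offset, line in enumerate(lines):
        for key, value in map_fn(offset, line):
            grouped[key].append(value)
    return dict(reduce_fn(k, v) for k, v in grouped.items())

print(run(["the cloud scales", "the cloud is elastic"]))
# {'the': 2, 'cloud': 2, 'scales': 1, 'is': 1, 'elastic': 1}
```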

Limitations
 High latency; not suitable for real-time processing.
 Inefficient for small or iterative tasks (e.g., machine learning).
 Rigid programming model.
 Requires deep understanding to optimize performance.

QoS (Quality of Service)


Quality of Service (QoS) refers to the ability of a
network to provide guaranteed performance levels for
specific data flows or applications. It involves managing
bandwidth, latency, jitter, and packet loss to ensure
reliable and predictable service delivery. QoS is
essential in environments where different applications
(e.g., voice, video, and data) share the same network
infrastructure.

Characteristics
 Bandwidth Management: Allocates network
capacity to different traffic types.
 Latency Control: Ensures time-sensitive data (like
voice or video) reaches its destination quickly.
 Jitter Management: Reduces variation in packet
arrival time to ensure smooth playback or
communication.
 Packet Loss Prevention: Minimizes data loss, especially for critical applications.
 Traffic Prioritization: Assigns higher priority to critical or real-time data.

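Traffic prioritization can be sketched as a priority queue: packets from higher-priority classes leave the queue before bulk data. The traffic classes and priority values below are illustrative assumptions, not any router's actual configuration.

```python
# Minimal sketch of QoS traffic prioritization with a priority queue:
# voice and video packets are dequeued before bulk data.

import heapq
import itertools

PRIORITY = {"voice": 0, "video": 1, "data": 2}   # lower number = higher priority
counter = itertools.count()                      # tie-breaker keeps FIFO order per class
queue = []

def enqueue(traffic_class: str, packet: str) -> None:
    heapq.heappush(queue, (PRIORITY[traffic_class], next(counter), packet))

def dequeue() -> str:
    return heapq.heappop(queue)[2]

enqueue("data", "backup-chunk-17")
enqueue("voice", "rtp-frame-301")
enqueue("video", "stream-seg-42")
print([dequeue() for _ in range(3)])
# ['rtp-frame-301', 'stream-seg-42', 'backup-chunk-17']
```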
Applications
 VoIP (Voice over IP)
 Video conferencing (e.g., Zoom, Microsoft Teams)
 Online gaming
 Streaming media services
 Enterprise resource planning (ERP) systems
 Industrial IoT communications

Sky Computing
Sky Computing is an emerging paradigm in cloud
computing that envisions a unified, transparent, and
interoperable cloud ecosystem where multiple cloud
providers operate like a single large “sky” of computing
resources. Unlike traditional multi-cloud or hybrid cloud
setups, Sky Computing abstracts away the complexity of
managing different providers, allowing seamless
computation and data sharing across diverse cloud
platforms.

Characteristics
 Cloud Interoperability – Abstracts cloud provider
boundaries, enabling cross-provider functionality.
 Seamless Portability – Applications and data can
move freely between clouds without modification.
 Decentralized Resource Aggregation –
Leverages compute, storage, and services from
multiple cloud vendors.
 Vendor Agnosticism – Users are not locked into
one provider, enhancing flexibility and reducing
costs.
 Federated Management – Resources from
different clouds are managed under a unified
control plane.

Difference Between Sky Computing and Multi-Cloud/Hybrid Cloud:

Feature | Sky Computing | Multi-Cloud | Hybrid Cloud
Interoperability | Full, seamless | Limited | Moderate
Abstraction Level | High (transparent to user) | Low to medium | Medium
Management Complexity | Lower (centralized) | Higher (per-provider) | Medium
Vendor Lock-In | Avoided | Possible | Possible
Goal | Unified cloud ecosystem | Leverage multiple clouds separately | Combine public and private clouds

SOA (Service-Oriented Architecture)
Service-Oriented Architecture (SOA) is an
architectural design pattern where software functionality
is organized as a collection of loosely coupled,
interoperable services. Each service encapsulates a
specific business function and communicates with other
services over a network using standardized protocols
(typically HTTP and SOAP/REST). SOA aims to
promote reusability, scalability, and interoperability
across distributed systems.

Characteristics
 Loose Coupling: Services are independent and
interact via well-defined interfaces.
 Reusability: Services are modular and reusable
across different applications.
 Interoperability: Supports integration across
diverse platforms and technologies.
 Discoverability: Services can be published in a
directory and discovered dynamically.
 Composability: Services can be combined into
larger workflows or applications.
Components
1. Service Provider – Develops and hosts the
service; publishes the interface.
2. Service Consumer – Calls or uses the service
based on its interface.
3. Service Registry – A directory where services are
published for discovery (e.g., UDDI).
4. Service Contract – A formal specification
describing service capabilities, input/output, and
usage.
5. Middleware – Connects, manages, and secures
service interactions (e.g., ESB – Enterprise Service
Bus).
6. Message Protocols – Standards like SOAP, REST,
XML, JSON used for communication.
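
A service consumer (component 2) interacts only with the published interface, not the implementation. The sketch below assumes a REST-style interface; the URL, the `status` field, and the third-party `requests` library are all assumptions for illustration.

```python
# Minimal sketch of an SOA service consumer: call a service through its
# published interface and use the returned data. Endpoint and fields are
# hypothetical; requires the `requests` package.

import requests

ORDER_SERVICE_URL = "https://services.example.com/order-service/orders"  # hypothetical

def get_order_status(order_id: str) -> str:
    """Call the order service through its interface and return the status field."""
    response = requests.get(f"{ORDER_SERVICE_URL}/{order_id}", timeout=5)
    response.raise_for_status()          # surface transport-level failures
    return response.json()["status"]     # assumed field in the service contract

if __name__ == "__main__":
    print(get_order_status("ORD-1001"))
```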

Difference Between SOA and Microservices:


Feature | SOA | Microservices
Granularity | Coarse-grained services | Fine-grained services
Communication | Often uses ESB and SOAP | Typically uses lightweight REST/HTTP
Deployment | Centralized service deployment | Independently deployable services
Data Storage | Shared database | Decentralized, per service
Scalability | Moderate | Highly scalable and agile
Best Use Case | Enterprise integration | Cloud-native app development

Multi-Tenancy:
Multi-tenancy is a software architecture in which a
single instance of an application serves multiple
customers (tenants). Each tenant’s data is isolated and
remains invisible to others, but they share the same
software application and infrastructure resources. This
model is especially common in cloud computing and
Software-as-a-Service (SaaS) platforms, enabling
efficient resource utilization and simplified maintenance.
Characteristics
 Resource Sharing: Multiple tenants share the
same infrastructure (e.g., servers, databases,
compute).
 Data Isolation: Each tenant's data is logically
separated and secure.
 Centralized Maintenance: Updates, backups, and
patches are applied to one instance, benefiting all
tenants.
 Cost-Efficiency: Reduces operational costs
through shared infrastructure and maintenance.
 Customizability: Tenants can often customize their
UI, settings, or workflows while using the same core
application.
Types of Multi-Tenancy
1. Database-per-Tenant – Each tenant gets their own
database; highest isolation, less efficient.
2. Schema-per-Tenant – Shared database with
individual schemas; balance of isolation and
efficiency.
3. Shared Schema (Row-Level Isolation) – Single
schema for all tenants; logical data partitioning via
tenant ID.
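
The shared-schema approach (type 3) is easiest to see in code: all tenants share one table and every query is filtered by a tenant ID so one tenant never sees another tenant's rows. The table, column, and tenant names below are illustrative assumptions.

```python
# Minimal sketch of shared-schema (row-level) multi-tenancy using SQLite:
# every access path scopes rows to the calling tenant via tenant_id.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (tenant_id TEXT, name TEXT)")
conn.executemany("INSERT INTO projects VALUES (?, ?)", [
    ("acme", "Website redesign"),
    ("acme", "Mobile app"),
    ("globex", "Data migration"),
])

def projects_for(tenant_id: str) -> list[str]:
    """Return only the rows that belong to the given tenant."""
    rows = conn.execute(
        "SELECT name FROM projects WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [name for (name,) in rows]

print(projects_for("acme"))    # ['Website redesign', 'Mobile app']
print(projects_for("globex"))  # ['Data migration']
```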
Applications:
 SaaS Platforms – CRM tools (e.g., Salesforce),
project management (e.g., Asana, Jira).
 Cloud Services – AWS, Azure, and GCP use multi-
tenancy at different service levels.
 Online Marketplaces – E-commerce platforms
serving multiple vendors.
 E-learning Platforms – Hosting content for
different educational institutions.
 IoT Platforms – Managing isolated data and
configurations for various device clusters or clients.

Mobile Computing:
Mobile computing refers to the ability to use
computing devices wirelessly in motion without being
tethered to a fixed physical location. It enables access to
data, applications, and services anytime and anywhere
using mobile devices like smartphones, tablets, and
laptops, often through wireless networks such as Wi-Fi,
4G/5G, or Bluetooth.

Characteristics
 Portability: Devices are lightweight and designed
for use on the move.
 Ubiquitous Access: Enables access to networks
and data from virtually anywhere.
 Wireless Communication: Operates using
wireless technologies like cellular, Wi-Fi, or satellite.
 Real-Time Connectivity: Supports on-the-go
access to live information and cloud services.
 Mobility: Supports dynamic environments and
mobile user interaction.
 Energy Efficiency: Devices are optimized for
battery usage and low power consumption.
Types of Mobile Computing:
1. Mobile Communication – Wireless networks and
protocols that enable mobile connectivity.
2. Mobile Hardware – Physical devices designed for
mobility.
3. Mobile Software – Applications and OS tailored for
mobile experiences.
Applications
 Healthcare – Mobile health monitoring,
telemedicine, and patient record access.
 Banking – Mobile banking, digital wallets, and on-
the-go financial services.
 Retail – Mobile POS systems, inventory checks,
and real-time customer service.
 Transportation & Logistics – Fleet tracking, route
optimization, and delivery updates.
 Education – Mobile learning platforms, e-books,
and virtual classrooms.
 Field Services – Utility workers, inspectors, and
emergency responders use mobile devices in real-
time.

Pitfalls of Virtualization:

Virtualization allows multiple virtual instances (virtual machines, containers, etc.) to run on a single physical
machine. While it brings significant benefits, including
resource efficiency and scalability, there are various
pitfalls that organizations must consider to ensure the
technology is implemented and managed effectively.
Characteristics of Virtualization Pitfalls
 Resource Contention: Multiple virtual machines
(VMs) share the same physical resources, leading
to competition for CPU, memory, and storage.
 Complexity: Managing virtual environments
requires specialized knowledge in hypervisors,
virtual networks, and storage systems.
 Performance Overhead: Virtual machines often
incur performance penalties compared to running
directly on physical hardware due to the abstraction
layer of the hypervisor.
 Security Challenges: VMs are vulnerable to
breaches or attacks, and improper isolation can
lead to unauthorized access between VMs.
 Licensing Issues: Virtualized environments can
complicate licensing models, potentially leading to
non-compliance or higher costs.
Pitfalls in Detail
1. Performance Degradation
o Virtualization adds an abstraction layer between the hardware and virtual machines, which can result in performance overhead. This means that resource-intensive applications may not perform as well as they would on physical hardware. For instance, high-demand workloads like databases or big data applications may suffer.
2. Resource Contention and Over-Provisioning
 It’s tempting to over-allocate resources like CPU
and memory to VMs, assuming that they won’t all
be fully utilized simultaneously. This often leads to
resource contention, where multiple VMs compete
for the same resources, resulting in slower
performance for all.
3. Security Vulnerabilities
 Virtualization introduces new security risks,
particularly related to VM isolation. Poorly
configured virtual environments or shared
infrastructure can lead to potential breaches,
allowing an attacker to break through isolation
layers and access other virtual machines.
4. Licensing and Cost Management
 Virtualization complicates licensing models,
especially for software that is licensed per physical
machine or per instance. Organizations may
unknowingly run afoul of licensing requirements,
leading to potential legal and financial
consequences.

Service Deployment Model


A Service Deployment Model refers to the architecture
or method used to deploy and manage services across
different computing environments, typically in cloud
computing or distributed systems. These models define
how services are made available to users and the level
of control, security, and scalability provided. The most
common deployment models in cloud computing include
Public Cloud, Private Cloud, Hybrid Cloud, and
Community Cloud.
Types of Service Deployment Models
1. Public Cloud
o Definition: In a Public Cloud deployment
model, services are delivered over the internet
and shared across multiple organizations. The
infrastructure is owned and managed by third-
party providers like Amazon Web Services
(AWS), Microsoft Azure, or Google Cloud
Platform (GCP).
Characteristics:
 Services are available to the general
public.
 Resources are shared among many
customers (tenants).
 Scalable, flexible, and cost-effective.
 The cloud provider manages infrastructure
and maintenance.
o Advantages:
 Low Cost: Users only pay for the
resources they use.
 Scalability: Easily scale up or down based
on demand.
 No Maintenance: The cloud provider
manages updates, patches, and
infrastructure.
o Limitations:
 Security Concerns: Data may be more
vulnerable due to shared resources.
 Less Control: Limited customization
options and control over infrastructure.
 Compliance: May be difficult to meet
industry-specific regulations.

2. Private Cloud
o Definition: A Private Cloud is a cloud
environment dedicated to a single organization.
It can be hosted on-premises or by a third-party
provider. Unlike public clouds, the infrastructure
is not shared with others, providing greater
control and customization.
o Characteristics:
 Exclusive use by a single organization.
 Can be hosted on-site or externally.
 Greater control over security and privacy.
o Advantages:
 Security: Enhanced security and data
control.
 Customization: Ability to tailor the
environment to specific needs.
 Compliance: Easier to meet regulatory
requirements.
o Limitations:
 Higher Cost: Requires investment in
hardware, software, and management.
 Scalability: Less elastic compared to
public clouds.
 Maintenance: Organization is responsible
for infrastructure upkeep.

3. Hybrid Cloud
o Definition: A Hybrid Cloud combines both
private and public cloud environments, allowing
data and applications to be shared between
them. It provides flexibility by leveraging the
scalability of the public cloud while maintaining
control over sensitive data with a private cloud.
o Characteristics:
 Integrates public and private clouds.
 Allows workloads to move seamlessly
between environments.
o Advantages:
 Offers a balance of scalability and control.
 Flexibility: Choose the best environment


for each workload.
 Cost Optimization: Use public cloud
resources for non-sensitive tasks while
maintaining critical services in the private
cloud.
 Disaster Recovery: Enhanced backup
and failover capabilities between cloud
types.
o Limitations:
 Complexity: Managing multiple
environments can be challenging.
 Data Latency: Potential delays in data
transfer between clouds.
 Security Management: Requires strong
security policies to protect data in both
environments.

4. Community Cloud
o Definition: A Community Cloud is shared by
several organizations that have common
interests, such as compliance requirements,
security policies, or specific business goals. It
is a collaborative environment where resources
are shared among a group with similar needs.
o Characteristics:
 Shared by multiple organizations with
similar objectives.
 Hosted either internally or by a third-party
provider.
 Costs and resources are shared among
community members.
o Advantages:
 Shared Costs: Expenses are distributed
among several organizations.
 Tailored for Specific Needs: Suited for
businesses with common goals or
compliance requirements.
 Collaboration: Encourages sharing of
resources and knowledge between
organizations.
o Limitations:
 Limited Control: Less flexibility compared
to private clouds.
 Complex Governance: Managing shared
resources across organizations can be
complex.

 Security: Shared infrastructure may raise concerns about data privacy.

Discretionary Access Control (DAC)


Discretionary Access Control (DAC) is an access
control model where the owner of a resource (e.g., files,
databases, or other digital assets) has the discretion to
decide who can access the resource and what
operations (read, write, execute, etc.) they can perform.
DAC systems rely on user permissions and user
identities, where access control is determined by the
owner or creator of the data.

Characteristics
 Owner-Centric Control: The resource owner
determines who has access to their resources and
to what degree (read, write, etc.).
 Flexible: Users can assign permissions to others,
making the system more flexible than other models
like Mandatory Access Control (MAC).
 Dynamic Permissions: Permissions can be
changed or revoked by the owner at any time.
 Identity-Based: Permissions are granted based on
user identity and roles.
 Permissions Propagation: The ability to pass
permissions to other users, allowing sharing of
resources.
Components of DAC
1. Owner: The person who creates or owns a
resource (e.g., a file or application). The owner
controls access to the resource.
2. Users: Individuals who access the resource. Users
may be granted or denied permissions by the
owner.
3. Permissions: The rights (read, write, execute,
delete) granted to users for accessing or modifying
the resource.
4. Access Control Lists (ACLs): A list associated
with a resource that specifies the permissions for
various users or groups.
5. File or Object: The actual resource being
protected, such as a file, database record, or
application.
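
The ACL component (4) and the owner's discretion (1) combine into a simple check: only the owner may edit the list, and access is granted by consulting it. The users, file name, and permission names below are illustrative assumptions.

```python
# Minimal sketch of discretionary access control: the owner edits the ACL,
# and every access check consults it.

acl = {
    "report.docx": {
        "owner": "alice",
        "permissions": {"alice": {"read", "write"}, "bob": {"read"}},
    }
}

def grant(resource: str, grantor: str, user: str, right: str) -> None:
    """Only the owner may change the ACL (the 'discretionary' part)."""
    entry = acl[resource]
    if grantor != entry["owner"]:
        raise PermissionError(f"{grantor} does not own {resource}")
    entry["permissions"].setdefault(user, set()).add(right)

def can(user: str, right: str, resource: str) -> bool:
    return right in acl[resource]["permissions"].get(user, set())

print(can("bob", "write", "report.docx"))   # False
grant("report.docx", "alice", "bob", "write")
print(can("bob", "write", "report.docx"))   # True
```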
Applications of DAC:
 File Systems: Most modern operating systems,
such as Windows and Unix-based systems (Linux,
macOS), use DAC for file access control. For
instance, in Linux, each file has an owner and the
owner can set permissions for users and groups to
access the file.
 Database Systems: Database administrators may
grant or revoke access to different users based on
their discretion, particularly in systems like MySQL
or PostgreSQL.
 Shared Network Drives: Network file servers use
DAC to determine which users can access or
modify files on shared network drives.
 Cloud Storage Services: Cloud storage solutions
(e.g., Google Drive, Dropbox) often allow users to
share documents and set specific access rights
(view, edit, etc.) for others.

Issues in Cloud Computing:


Cloud computing offers flexibility, scalability, and cost-
efficiency, but it also introduces several challenges:
1. Security and Privacy Concerns
Issue: Storing sensitive data in the cloud raises
concerns about unauthorized access and breaches.
Solution: Implement encryption, multi-factor
authentication, and strict access controls. Consider
private or hybrid clouds for added security and conduct
regular security audits.
2. Downtime and Reliability
Issue: Cloud service outages can cause disruptions in
business operations. Solution: Use disaster recovery
plans, negotiate SLAs for high availability, and
implement multi-cloud or hybrid approaches for failover
capabilities.
3. Data Loss and Backup
Issue: Risk of accidental deletion or corruption of cloud-
stored data. Solution: Regularly back up data, use data
redundancy across multiple locations, and automate
backup processes to ensure reliability.
4. Compliance and Legal Issues
Issue: Cloud environments may not comply with local
regulations (e.g., GDPR, HIPAA). Solution: Choose
compliant cloud providers, understand data residency
requirements, and ensure contractual agreements
address regulatory needs.
5. Vendor Lock-In
Issue: Dependence on a single cloud provider’s tools
and infrastructure can limit flexibility. Solution: Adopt
multi-cloud strategies, use standardized tools (e.g.,
containers), and focus on cloud portability to reduce
dependency on any one vendor.
6. Performance and Latency
Issue: Network latency and resource-heavy applications
can degrade performance. Solution: Implement edge
computing and CDNs to reduce latency, and optimize
resource allocation to ensure efficient use of cloud
infrastructure.
7. Cost Management
Issue: Cloud services are billed based on usage,
leading to unpredictable costs. Solution: Use cost
monitoring tools, auto-scaling features, and reserved
instances to optimize spending and avoid over-
provisioning.

Cloud Computing Solutions:


1. Enhanced Security and Privacy: Implement
encryption for data both at rest and in transit to protect
sensitive information from unauthorized access.
2. High Availability and Reliability
Adopt a disaster recovery (DR) plan with
geographically distributed data centers to ensure quick
recovery in case of service interruptions.
3. Data Backup and Redundancy
Use automated backup systems to regularly store
copies of data, ensuring it can be recovered in case of
loss or corruption.
4. Compliance and Legal Adherence
Select regulatory-approved providers who comply
with necessary industry certifications (e.g., ISO 27001,
HIPAA) to meet legal and compliance standards.
5. Avoiding Vendor Lock-In
Leverage containerization (e.g., Docker) and
Kubernetes for portability, ensuring applications can run
on any cloud platform, reducing vendor dependency.
6. Performance Optimization and Latency Reduction
Implement edge computing to process data closer to
the source, minimizing latency and improving overall
performance.
7. Effective Cost Management
Utilize auto-scaling to dynamically adjust cloud
resources based on demand, ensuring cost-efficiency
without over-provisioning.

1. VMware
VMware is a global leader in virtualization technology and cloud
infrastructure. It provides software solutions that enable
organizations to virtualize their IT resources, such as servers,
storage, and networking. VMware’s products allow businesses
to create and manage virtualized environments, increasing
efficiency, reducing costs, and enhancing scalability.
Key Products:
 VMware ESXi: A type-1 hypervisor that runs directly on
physical hardware to create and manage virtual machines.
 VMware vSphere: A suite of software tools used to
manage and operate VMware’s virtualized infrastructure,
including ESXi and vCenter Server.
 VMware Workstation: A desktop application for running virtual machines on a personal computer.
 VMware vCenter: A centralized management platform for VMware environments.
2. vSphere
VMware vSphere is a comprehensive virtualization platform
that allows businesses to manage virtualized environments on
a large scale. It provides the tools to create, manage, and
monitor virtual machines (VMs) on a centralized infrastructure.
vSphere is built around VMware's ESXi hypervisor and
vCenter Server for management and orchestration.
Key Features:
 vCenter Server: Centralized management tool for
managing ESXi hosts and virtual machines.
 High Availability (HA): Ensures that VMs remain online in
case of host failures by restarting them on other available
hosts.
 vMotion: Enables live migration of VMs from one physical
server to another without downtime.
 Distributed Resource Scheduler (DRS): Automatically
balances workloads across multiple ESXi hosts to
optimize resource utilization.
 Fault Tolerance (FT): Provides continuous availability by
creating an identical VM running in parallel, which takes
over if the primary VM fails.
Use Cases:
 Data center management, disaster recovery, and cloud
infrastructure.
 Scalability and resource management for large virtualized
environments.
3. Virtual Machines (VM)
A Virtual Machine (VM) is a software-based emulation of a
physical computer. It runs its own operating system and
applications, just like a physical machine, but is hosted on a
physical server using a hypervisor such as VMware ESXi. VMs
are isolated from each other, allowing multiple virtualized
instances to run on the same physical hardware.
Key Features:
 Isolation: VMs are independent of each other, meaning
that issues in one VM (such as crashes) do not affect
others.
 Resource Allocation: VMs have allocated resources
(CPU, memory, storage) that can be dynamically adjusted.
 Portability: VMs can be moved between different physical
machines or data centers.
 Snapshot and Cloning: VMs can be cloned or
snapshotted for backup, testing, and disaster recovery
purposes.
Use Cases:
 Testing and Development: Ideal for running different
operating systems or configurations on a single machine.
 Server Consolidation: Multiple VMs can run on one
physical server, improving hardware utilization and
reducing physical space requirements.
 Disaster Recovery: VMs can be easily backed up,
restored, or migrated, offering flexibility for recovery
scenarios.
Feature | VMware | vSphere | Virtual Machine (VM)
Definition | Company that provides virtualization tools and cloud infrastructure solutions | Virtualization platform for managing VMs at scale | A virtualized instance of a computer running on a physical host
Primary Function | Offers virtualization and cloud solutions (ESXi, vCenter, etc.) | Manages virtualized environments and infrastructure | Runs applications and OS as a virtualized instance
Components | ESXi, vCenter, Workstation, Cloud Suite | ESXi (hypervisor), vCenter Server, vMotion, HA | Virtual CPU, RAM, storage, network interfaces
Use Cases | Cloud computing, enterprise virtualization, server consolidation | Large-scale data centers, cloud management, resource allocation | Testing, development, server consolidation, disaster recovery
Scope | Broad range of virtualization products and cloud services | Infrastructure management for large-scale virtualized environments | Single virtualized instance or machine
Management | Managed through products like vCenter Server | Managed through vCenter for centralized control | Managed individually or through management tools (like vSphere)

Cloud Middleware:
Cloud middleware is the software layer that sits between the
cloud infrastructure (hardware and virtualization layer) and the
application layer. It provides essential services like
communication, authentication, load balancing, data
management, and orchestration across distributed cloud
environments. It acts as a "glue" that simplifies interaction
between different cloud components and abstracts the
complexity of cloud platforms for developers and users.
Key Components
 API Gateways: Facilitate communication between services and external clients.
 Message Queues & Brokers: Ensure asynchronous data exchange between components (e.g., RabbitMQ, Kafka).
 Service Bus: Connects various cloud services and ensures message routing and transformation.
 Security Services: Manage authentication, authorization,
and encryption.
 Orchestration Engines: Automate and manage multi-step
service workflows (e.g., Kubernetes, Docker Swarm).
Characteristics
 Platform-independent interaction
 Supports multi-cloud and hybrid cloud deployments
 Enhances scalability and fault tolerance
 Decouples application logic from infrastructure
Applications:
 SaaS integration
 Microservices communication
 IoT backend management
 Multi-cloud orchestration and governance

Bigtable (Google Bigtable):


Bigtable is Google’s highly scalable, distributed, and column-
oriented NoSQL database designed to handle massive
workloads and structured data across thousands of commodity
servers. It underpins services like Google Search, Maps, and
Gmail. It is built for high throughput, low latency, and
horizontal scalability in cloud environments.
Characteristics
 Column-family based: Optimized for queries over rows
with large numbers of columns.
 Sparse and multidimensional: Efficient for storing
sparse datasets with timestamps.
 Horizontal scaling: Can handle petabytes of data by
distributing across nodes.
 Consistency: Strong consistency guarantees for read and
write operations.
Components
 Tablet servers: Manage partitions of tables.
 Master server: Coordinates tablet server activities and
assigns tablets.
 Chubby: A distributed lock service for synchronization and
control.

Google App Engine (GAE)


Google App Engine is a Platform as a Service (PaaS) offering
from Google Cloud that allows developers to build and host
web applications in Google-managed data centers. It abstracts
infrastructure concerns like server management, scaling, and
load balancing, allowing developers to focus solely on writing
code.
Characteristics
 Auto-scaling: Scales applications automatically based on traffic.
 Multiple runtime support: Supports Python, Java, Go, Node.js, and more.
 Fully managed: Google handles patching, monitoring, load balancing, etc.
 Micro-billing: You pay only for what you use.
Components
 App Engine Standard Environment: For rapid
deployment and limited customizability.
 App Engine Flexible Environment: Allows custom
runtimes and more control over infrastructure.
 Google Cloud Datastore/Firestore: Integrated NoSQL
databases.
 Traffic Splitting: Enables A/B testing by splitting user
traffic across versions.
Feature | Google Bigtable | Google App Engine
Type | NoSQL Database (Storage Layer) | Platform as a Service (Execution Layer)
Purpose | Store & retrieve structured data at scale | Deploy & run applications
Use Case | Big data analytics, logging, IoT | Web apps, APIs, microservices
Scaling | Horizontally scalable database | Automatic application scaling
Managed By | Google Cloud Platform | Google Cloud Platform
Language Support | Interfaced through gRPC, REST | Multiple languages supported (Java, Go)

WSDL (Web Services Description Language)


WSDL (Web Services Description Language) is an XML-based
language used for describing the functionalities offered by a
web service. It provides a standardized, machine-readable
format to detail how a service can be called, what parameters it
expects, and what data structures it returns. WSDL is a key
component in SOAP-based web services and supports
platform-agnostic integration between systems.
Characteristics
 XML-based: Ensures cross-platform compatibility and
easy parsing.
 Language and platform neutral: Clients and services
can be written in any language.
 Supports RPC and document-style messaging.
 Works with UDDI for service discovery.
 Describes operations, data types, messages, and
service bindings.
Components
1. Types: Defines the data types (XSD – XML Schema
Definition).
2. Message: Abstract data being exchanged (input/output).
3. PortType: Set of operations (like functions) offered by the
service.
4. Binding: Protocol and encoding style (e.g., SOAP over
HTTP).
5. Service: Contains service endpoint details.
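
From the client's point of view, a WSDL file is enough to generate callable proxies for the operations it declares. The sketch below assumes the third-party `zeep` library is installed; the WSDL URL and the `Add` operation are hypothetical.

```python
# Minimal sketch of consuming a WSDL-described SOAP service with zeep:
# zeep reads the WSDL and exposes its operations on client.service.

from zeep import Client

client = Client("https://calc.example.com/service?wsdl")   # hypothetical endpoint

# Operations defined in the WSDL's portType become methods on client.service.
result = client.service.Add(2, 3)
print(result)
```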
Security Reference Architecture
A Security Reference Architecture (SRA) is a conceptual
blueprint that defines best practices, policies, standards, and
components necessary to secure IT systems and cloud
environments. It provides a structured, standardized
framework for implementing, assessing, and improving
security across infrastructure, platforms, and applications.
It acts as a guideline for organizations to build secure-by-
design systems, ensuring compliance with regulatory
standards and mitigating risks effectively.
Characteristics
 Framework-driven: Based on well-defined models like
NIST, ISO/IEC 27001, or CSA.
 Layered security: Addresses security at multiple layers
— data, application, network, and infrastructure.
 Policy-centric: Incorporates identity, access control, and
governance policies.
 Vendor-neutral: Applicable across different technologies
and cloud providers.
Core Components
1. Identity & Access Management (IAM): Secure
authentication and role-based authorization.
2. Data Protection: Encryption at rest and in transit, secure
key management.
3. Network Security: Firewalls, VPNs, segmentation,
intrusion detection/prevention.
4. Monitoring & Logging: Security event logging, SIEM,
anomaly detection.
5. Compliance & Governance: Ensures regulatory
adherence (e.g., GDPR, HIPAA).
6. Incident Response: Playbooks and processes for
managing security breaches.

AJAX (Asynchronous JavaScript and XML):

AJAX is a web development technique used to create asynchronous, dynamic web applications by exchanging
small amounts of data with the server behind the scenes,
without reloading the entire page. It combines HTML, CSS,
JavaScript, and XMLHttpRequest (or modern Fetch API) to
enhance user experience and responsiveness.

Core Components

1. HTML/CSS: For content and styling.


2. JavaScript: Controls behavior and logic.
3. XMLHttpRequest / Fetch API: Facilitates server
communication.
4. Server-side script: Processes the request (PHP, Node.js,
etc.).

CSA (Cloud Security Alliance): The Cloud Security Alliance (CSA) is a non-profit organization dedicated to defining and
raising awareness of best practices for secure cloud
computing. It provides a set of security standards, tools,
research, and guidance to help organizations assess and
improve cloud security posture. CSA aims to ensure trust within
the cloud ecosystem through education, certification, and
collaboration.

Characteristics
 Industry-led: Formed by cloud experts, vendors, and
security professionals.
 Open standards: Freely available guidance and
documentation.
 Global influence: Works with organizations and
governments worldwide.
 Security-focused: Covers identity, access, compliance,
data privacy, and more.

1. Infrastructure as a Service (IaaS):


The foundation layer, consisting of physical hardware (servers,
storage, networking), virtualized infrastructure, and basic
computing resources.
 Users manage the OS, middleware, data, and
applications.
 Examples: AWS, Azure, Google Compute Engine.
2. Platform as a Service (PaaS):
Built on top of IaaS, offering a platform for developing, testing,
and deploying applications.
 PaaS handles infrastructure management (servers,
storage, networking) for the developer.
 Examples: Heroku, Google App Engine.
3. Software as a Service (SaaS):
The top layer, delivering ready-to-use applications directly to
users.
 Users access software through a web browser or mobile app.
 Examples: G Suite, Office 365, Salesforce.
Cloud service administration and monitoring:
1. Provision Resources – Create VMs, databases, storage
using cloud consoles or IaC tools.
2. Configure Services – Set up OS, networks, and security
settings.
3. Manage Access – Use IAM to assign roles and control
permissions.
4. Monitor Performance – Track CPU, memory, uptime via
tools like CloudWatch or Azure Monitor.
5. Log Activity – Collect logs for events, errors, and user
actions.
6. Set Alerts – Define thresholds to trigger alerts for anomalies (see the sketch after this list).
7. Automate Scaling – Use auto-scaling for dynamic
resource management.
8. Control Costs – Monitor usage and set budget alerts.
9. Ensure Security – Run compliance checks and apply
policies.
10. Handle Incidents – Use backups, playbooks, and
DR tools for recovery.
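
Step 6 ("Set Alerts") can be illustrated with the AWS SDK for Python. The sketch below assumes boto3 is installed and credentials are configured; the alarm name, region, and instance ID are placeholders, and the thresholds are arbitrary.

```python
# Minimal sketch of creating a monitoring alert with boto3: a CloudWatch alarm
# that fires when average EC2 CPU utilization stays above 80% for five minutes.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",                                   # placeholder name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                    # evaluate 5-minute averages
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
print("alarm created")
```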